December 1986 – Irony

In my post December 1986, I presented a histogram showing the GISS estimate of December 1986 minus the actual value for GHCN stations in Europe and Russia. As noted, GISS under-estimated December 1986 for this region by a greater than 2 to 1 margin. The result was that, when GISS combined multiple records for a single station, the stations with a cold estimate for December 1986 had their pre-1987 records artificially cooled. By cooling the older record and leaving the current record unchanged, an enhanced warming trend was introduced.

I promised I would show other regions of the world in future posts. Therefore, in this post I present Africa, which shows essentially the polar opposite of the Europe / Russia result.

In Africa, GISS tends to over-estimate December 1986 when combining records. Because the temperature is over-estimated, older records must be warmed slightly before they are combined with the present record. Introducing artificial warming into a past record cools the overall trend through the present.
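
To illustrate the mechanism with a toy example (a minimal sketch using made-up numbers, not the actual GISS algorithm or any real station data), here is what a constant shift applied to the older of two overlapping records does to the trend of the spliced series:

    import numpy as np

    # Hypothetical annual means for one station: an older record (1900-1986)
    # and a current record (1950-2008), both trendless by construction.
    rng = np.random.default_rng(0)
    old = 10.0 + rng.normal(0, 0.3, 87)   # 1900-1986
    new = 10.0 + rng.normal(0, 0.3, 59)   # 1950-2008
    years = np.arange(1900, 2009)

    def spliced_trend(offset):
        # Shift the older record by `offset`, keep it before 1950, then
        # append the current record and fit a straight line.
        combined = np.concatenate([old[:50] + offset, new])
        return np.polyfit(years, combined, 1)[0] * 10  # deg C per decade

    print("no shift           : %+.3f C/decade" % spliced_trend(0.0))
    print("older record cooled: %+.3f C/decade" % spliced_trend(-0.3))  # warming trend enhanced
    print("older record warmed: %+.3f C/decade" % spliced_trend(+0.3))  # overall trend cooled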

Following is a histogram showing the GISS estimate of December 1986 minus the actual for GHCN stations in Africa.

[Figure: africa.GIF – histogram of the GISS December 1986 estimate minus the actual value for GHCN stations in Africa]

The implication is that the GISS algorithm introduces a cooling trend to most African records.


As can be seen in the next plot, however, the number of stations reporting temperature data in Africa drops off rather sharply before 1950. This means any warming of past records likely does not go very far back in time.

[Figure: africastations.GIF – number of GHCN stations reporting temperature data in Africa, by year]

We need to peek backwards some and see how many of the “warmed” station records actually exist before 1950:

               1950   1940   1930   1920
  Warmed         50     10      8      5
  Cooled         31     13     13     10
  No Change      52     22     19     15

As can be seen from the table above, prior to 1950 the “cooled” stations tend to outnumber the “warmed” stations. In other words, from roughly 1950 to 1986, GISS artificially warms the African records, and prior to 1950 it artificially cools the records. Granted, we are not talking about a lot of stations here, but it does give one whiplash from all of the double-takes.

As was pointed out in several comments to December 1986, the average bias for that month, while negative, was not particularly large. Furthermore, the value would end up being divided by 36 or 48 in order to yield the adjustment amount. See here and here.

The same is of course true of Africa. The implication in both cases is that the net adjustment ends up being so small that we won’t see it at the global or perhaps even zonal level. This might indeed be true. Even if the adjustment enhances the trend, that does not mean the underlying trend is not there. At the macroscopic level the adjustment may not matter at all.

Nevertheless, I find it rather amusing / interesting / ironic that as I go back in time and look at the average bias adjustment of African stations, the cooled stations not only outnumber the warmed stations, they far outweigh them when the adjustments are averaged. This comes in spite of the fact that most of the records get the warming bias.

Here is what I mean:

[Figure: africabias.GIF – average bias adjustment of African stations, going back in time]

61 Comments

  1. Sam Urbinto
    Posted Aug 19, 2008 at 3:12 PM | Permalink

    The word that springs to mind is “insane”.

  2. Lance
    Posted Aug 19, 2008 at 3:51 PM | Permalink

    Very interesting analysis.
    Oh, and the first sentence under the second graph begins “We need to peak backwards …” Unless that is a subtle pun, I believe you meant “peek”.

    Reply: Lance, thanks. No pun intended, although I have been assembling a cross-country training plan for a team I coach and “peaking” was definitely on my mind.

  3. Sam Urbinto
    Posted Aug 19, 2008 at 4:10 PM | Permalink

    Isn’t that “peak upwards”?

  4. Ron Cram
    Posted Aug 19, 2008 at 11:02 PM | Permalink

    John,

    Thank you for this analysis. Sometimes I tend to think all of their unwarranted adjustments exaggerate the warming trend. I am glad you present the data as you find it and do not hide data that may be inconvenient to a particular point of view.

  5. Geoff Sherrington
    Posted Aug 19, 2008 at 11:40 PM | Permalink

    Excellent work, John. Can I help with Australian data?

    A novice observer might comment that there are 365 days in a year, 366 in a leap year, and that for many locations we have a temperature measurement for many of those days, so why not graph it? There are many simple ways to infill missing data, or to ignore them, without disturbing observations from years ago.

    If the novice reader had read “Alice in Wonderland”, she/he might have an alternative comprehension. Ho hum. Another day, another temperature. (With licence) –

    “Then you should say what you mean,” the March Hare went on. “I do,” Alice hastily replied; “at least – at least I mean what I say – that’s the same thing you know.” “Not the same thing a bit!” said the Hatter. “Why, you might as well say that ‘I see what I heat’ is the same thing as ‘I heat what I see!'”

  6. fred
    Posted Aug 20, 2008 at 12:29 AM | Permalink

    It’s surely not important if the adjustments introduce a warming or cooling trend. The important thing is that scientists are changing historical observations on grounds which have nothing to do with the observations themselves. It would be one thing if you could show that in 1852 the instrument used had a systematic bias. That is not what is happening. It is being changed because someone does not like the look of it. The idea that you can have a valid reason to change the reading of an instrument in 1852 because of the readings of other instruments, before, after, or nearby, is simply crazy.

    The only right way to do this is to plot the readings as recorded, then write some sort of explanation of why the record does or does not mean what it appears to show. The real victim of this sort of thing is climate science. From a public policy point of view, it is impossible to retain any credibility once this stuff is widely known. Someone points out at Lucia’s that the Hadley series is similarly being changed, also readings 100+ years old. In no other field would this fly. Can you imagine, for instance, revising casualty figures from a distant battle because of comparisons to other similar battles? Or changing the recorded incidence of a certain disease because of other epidemics in nearby locations? Or still more crazy, and perhaps a better analogy, changing the numbers of recorded deaths from the Plague in Pepys’s day in the UK in the light of death rates a century or two later in France, or maybe India?

  7. jryan
    Posted Aug 20, 2008 at 8:00 AM | Permalink

    I would figure that you can stop going out of your way to point out, with each uncovered adjustment error, that the given error is minor on the global climate scale. At some point, given that this is statistics, we should start looking at the trends and realize that if every region audited has serious adjustment problems then all the regions are probably screwed.

    North America, South America, the oceans, now Africa….

    When can we start adding these all up?

  8. Gaelan Clark
    Posted Aug 20, 2008 at 9:01 AM | Permalink

    I apologize in advance for not understanding, but can anyone explain the exact nature and reasons for “adjusting” the temperature history and/or the current temperature?

    To me, this is the anthropogenic part of “global warming”.

  9. Larry Sheldon
    Posted Aug 20, 2008 at 9:32 AM | Permalink

    this is the anthropogenic part of “global warming”.

    This non-scientist thinks you understand better than you let on.

    The cover story is that adjustments are necessary for time-of-day-of-observation (of a thermometer that records “maximum” and “minimum”), and other stuff that generally makes even less sense to me.

    The apparent requirement is that the long-running average (including recreations from tree-rings, chicken innards, and fossil tea leaves) show a positive trend.

  10. oakgeo
    Posted Aug 20, 2008 at 2:12 PM | Permalink

    John, what kind of weighting is given to data from the pre-1950 African sites relative to modern African data? I.e., do the 20 sites in 1900 carry the same weight as the 160 sites in 1980? My statistical skills are poor, but I would think that one would need to place a higher degree of confidence on the recent, more numerous sites simply because 160 is more representative than 20 (although Africa is huge and either number seems woefully inadequate). Is there a statistical method to account for the poor coverage pre-1950 relative to the eight-fold better coverage now? Is such a method being used by the Team in their reconstructions, or do they weight the confidence and representativeness of the 1900 data equally with the 1980 data?

    As for the bias adjustments, they may be insignificant but Gaelan’s point is well taken. Why are historical temperatures adjusted anyway? It is hard to accept that it is some nefarious alarmist plot. What are the scientific and/or technical reasons for making these historical data adjustments?

    fils de bellefeuille

  11. Posted Aug 20, 2008 at 2:38 PM | Permalink

    I haven’t noticed John V or Atmoz lately. You think the Python ate them? John G., I admire your effort. Do you have a link to some of the original discussions on this adjustment? I seem to recall a few that had fairly large adjustments that might stimulate the conversation.

  12. Tolz
    Posted Aug 20, 2008 at 2:47 PM | Permalink

    John, have you done enough analysis to conclude that the bias in the historical record, dubious as the adjustments to observations may be, CAN’T be anything but insignificant? Or that based on your work, there’s probably more meat on another bone of contention? If that were the case, isn’t the conclusion that we may accept the historical record, estimates/adjustments and all, and move on?

  13. DeWitt Payne
    Posted Aug 20, 2008 at 2:49 PM | Permalink

    Re:#8,
    Correct me if I’m wrong, but I’m pretty sure the adjustments are made to keep the graphs pretty, i.e. no discontinuities. The current temperature can’t be adjusted to achieve this so that means past temperatures have to go up or down. Normal practice in the lab, though, is not to report data from a new instrument that doesn’t agree with the previous instrument until the discrepancy is resolved. Since this is historical data, there is no means to resolve a discrepancy so adjustments shouldn’t be made at all.

  14. Posted Aug 20, 2008 at 3:15 PM | Permalink

    13 DeWitt,

    If only that were the case. The whole 1987 thing just blows my mind. Not just the unnecessary adjustment for a missing December, but the wholesale instrumentation change and the odd temperature jump during the transition that appears to be natural variation. It is like the perfect storm of uncertainty.

  15. John S
    Posted Aug 20, 2008 at 5:05 PM | Permalink

    There is also another means by which the “proper” trend in the averages of historic records is maintained: stations that show contrary behavior are simply dropped from GHCN updates. There are scores of such instances world-wide, of which Wellington is but one example.

  16. John Goetz
    Posted Aug 20, 2008 at 9:46 PM | Permalink

    #12 Tolz

    I am not yet convinced of the significance one way or the other.

    1. The distribution of stations worldwide is nowhere near uniform. This is true now and back through the historical record. The United States has far more representation in the record than any other country or region on the planet, yet occupies only about 3% of the surface, as we are sometimes gently reminded. The gridded temperatures covering the US are therefore going to be more accurate than elsewhere. How much more is not yet known to me.

    2. The number of stations participating in the effort to measure global temperature was greatest from roughly 1950 to 1990. Prior to 1950 the number grew from very few stations in 1880 to a moderate number in the late 1940s, before jumping dramatically. Since 1990 the number has dropped dramatically, such that now the number of stations participating is roughly equal to that in the very early part of the 1900s. There’s progress for you! Of course, that does not mean the same few stations reporting now were reporting in 1900. Some have come and some have gone.

    3. Even the few stations that have been reporting that long have gone through changes that affect the fidelity of their records. We have seen that the Burlington, Vermont station has not always been located at its international airport because, well, airplanes did not exist when the station first began reporting. Like a present-day yuppie, the station migrated to the burbs over time, having originated downtown close to the lake. Along the way it spent time in a couple of back yards, on a city roof, and at the local university. Hopefully not partying.

    So, we start with rather mediocre spatial and temporal representation. To that we add:

    4. Of the records we do have, most are missing a number of monthly data points – some more than others. There are many reasons data might be missing, but one thing we have learned is that GHCN will drop an entire month’s worth of data if even one day is missing or suspicious. This happens over and over and is not a rare occurrence. The missing day or days could be estimated using a variety of techniques, or better yet, some of the days could be recovered if someone actually looked at the record and spotted the typical transcription error. You know, when June 10 is 24C and June 12 is 25C, but June 11 is -23C. That type of error will result in the entire month being discarded. Whaddya think the real June 11 temperature was??? (See the sketch at the end of this comment.)

    5. So rather than fixing a transcription error here or there or estimating the missing day or two, GHCN drops the month and leaves it up to GISS to estimate the month. We have seen just how robust that estimation is. (Imagine for a second you are watching Lewis Black tell you this).

    6. Now, GHCN very, very often passes along multiple records meant to represent a single station. Many times these records are consistent, indicating they are essentially pages from the same book. Sometimes they are not, which could mean they came from some other nearby location, were collected by a different piece of equipment – who knows. GISS says “what the heck, I don’t have a lot of records to begin with, why don’t I just splice these bad boys together and make one long record.”

    You can imagine what might happen when two records from different books are spliced. You get sausage. But interestingly enough, as we have seen, even when they are from the same book you still get sausage, just a milder variety.

    7. Keeping to the theme of not throwing away a good record no matter how crappy it looks after we’ve ground it up, we come to the point where we need to adjust urban stations because we know their temperature trends are artificially inflated by the urban heat island effect. So we find the rural stations that are within 500km, or wait, maybe sometimes 1000km (the distance from Indianapolis, IN to New York, NY), munge their trends together, and decide that MUST be the real trend of the urban station. So we go in and change the urban station’s record so that its trend matches the munged trend, whipping the station back into line.

    From that point comes the gridding part, which I have yet to explore in my spare time. I’ve also not bothered to mention the other machinations that go on with the record, such as the TOBS adjustment and infilling. I’ve ignored mentioning how the temperature is collected and whether or not station standards are consistently met.

    Nevertheless, from this we have pronouncements that the earth is 0.7 degrees warmer now than it was 100 years ago, and that Armageddon is upon us and we need to make some serious policy decisions based on the data. And equally robust simulations.

    So no, I can’t conclude that the adjustments “CAN’T be anything but insignificant”. To me, what is done with the historical record is nothing more than putting out a fire with an ice pick.
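
    To make point 4 concrete, here is a minimal sketch of the kind of check I have in mind (my own illustration, not GHCN’s actual quality-control code, and the thresholds are made up):

        def repair_sign_flips(daily_temps, tolerance=5.0):
            # Flag a day whose value is roughly the negative of its neighbors'
            # average and flip its sign, rather than discarding the whole month.
            # Purely illustrative; the tolerances are arbitrary.
            fixed = list(daily_temps)
            for i in range(1, len(fixed) - 1):
                neighbor_avg = (fixed[i - 1] + fixed[i + 1]) / 2.0
                looks_flipped = abs(-fixed[i] - neighbor_avg) < tolerance
                looks_wild = abs(fixed[i] - neighbor_avg) > 2 * tolerance
                if looks_flipped and looks_wild:
                    fixed[i] = -fixed[i]
            return fixed

        # June 10 = 24C, June 11 recorded as -23C, June 12 = 25C: almost certainly a stray minus sign
        print(repair_sign_flips([24.0, -23.0, 25.0]))  # -> [24.0, 23.0, 25.0]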

  17. Keith W.
    Posted Aug 21, 2008 at 9:48 AM | Permalink

    John G., interesting that you bring up your point #7. I had a post about this on Climate Skeptic. Someone observed that 1221 of the 7280 GHCN stations (16.77%) are USHCN stations. When I asked how this disproportionate percentage was reconciled by GISS, a warmist suggested I check out GISTEMP to find out how. So I did. Here is the post I wrote following my admittedly cursory examination of the process.

    “I went and checked GISTEMP, and they do explain how they divide up the Earth for their statistical averaging on this page. But one thing snagged at my mind a bit. Step three of the process involves dividing the planet into 8000 grid boxes. This would create boxes of roughly 63,759 square kilometers within which an average temperature anomaly is computed and then supplied to the global computation. It is how the average temperature is computed that bothers me.

    That average temperature anomaly is computed from the temperature stations within that grid box, and also any within 1200 kilometers. To give a graphic perspective so people can visualize this, let’s say my grid box is centered in St. Louis, Missouri. My local grid average is determined by not only the stations within my grid (roughly within 375 kilometers of me), but also from stations in Pittsburgh, Atlanta, Dallas, and Minneapolis, to name just a few. Basically, the radius of effect means that my one grid box’s temperature anomaly is not determined from the 63,759 square kilometers within it, but from the 4.52 million square kilometers around it, an area 71 times as large.

    Based upon this computation design, the United States should be represented by roughly two grid boxes if you are looking at total area involved in determining the average temperature anomaly compared to total area of the US (9,826,630 square kilometers total area of the United States divided by the 4,521,600 square kilometers derived from the 1200 kilometer radius of effect). But the US grid sample is still based upon the 63,759 square kilometer grid box determination, so the total US grid boxes are 154. That would seem to me to heavily weight the US sample.

    Why the 1200 kilometer area of effect? Doesn’t this mean that certain stations get counted multiple times? Based upon my math above, it would suggest that many stations in the United States get counted over 77 times. Does the temperature in St. Louis really affect the temperature in Atlanta, and vice versa? Surely, we could just use the temperatures provided by the stations within the grid boxes to determine that grid box’s average anomaly and work out a good global average from that.”

    It does bother me that temperatures from regions separated by natural geologic barriers that would disrupt weather patterns (as an example, mountain ranges) are used to determine one anomaly statistic. I can see Hansen’s problem due to coverage zones for his 8000 grid boxes. There are probably several areas in the South Pacific where he would be lucky to get one reliable temperature reporting station within a 1200 km radius. Even so, if he would admit this, and explain his rationale for his process, I’m sure it would create more understanding. I also think that it makes no sense to have stations count more than once in the total computation.

  18. Tolz
    Posted Aug 21, 2008 at 10:33 AM | Permalink

    #16 (John)

    Thank you! Great work.

  19. Sam Urbinto
    Posted Aug 21, 2008 at 10:53 AM | Permalink

    Maybe I’m calculating this incorrectly, but 5×5 squares would give us 5184 of them. Roughly, the ones at the poles would be 74 sq km, 1/3 down 392 sq km, 2/3 down 517 sq km, and at the equator 556 sq km.

    Or do they calculate them as 8000 of equal area?

  20. EW
    Posted Aug 21, 2008 at 11:20 AM | Permalink

    6. Now, GHCN very, very often passes along multiple records meant to represent a single station. Many times these records are consistent, indicating they are essentially pages from the same book. Sometimes they are not, which could mean they came from some other nearby location, were collected by a different piece of equipment – who knows. GISS says “what the heck, I don’t have a lot of records to begin with, why don’t I just splice these bad boys together and make one long record.”

    This is what angers me the most. Some of these multiple records for the same station come from quite civilized locations, so there’s nothing easier than to write a polite letter to the meteorological institute in the country in question and ask them to clarify, or even get a homogenized and reconciled record from them, done by people who know what the local conditions are and which station can be adjusted or homogenized with which. There are even papers published about the homogenization of certain long time series, so I really don’t see why GHCN produces their arbitrarily made versions.

  21. Keith W.
    Posted Aug 21, 2008 at 12:06 PM | Permalink

    #18 – Sam, based upon the step 3 description, they do it five ways. First, as the 8000 grid boxes of equal area, then 100 grid boxes for the 80 regions, then the latitudinal zones, the hemispheric, and then global. It certainly looks like a lot of data massaging. If you look at the raw temp summation data they provide here and here, you can see that the Northern Hemisphere habitable zones are regularly the warmest regions of the planet with regard to temperature anomaly. My question is does the higher quantity of data from these regions provide a greater bias in the calculation, considering that many stations get reported multiple times?

  22. oakgeo
    Posted Aug 21, 2008 at 12:09 PM | Permalink

    Further to Keith W’s post…

    Speaking only of the northern hemisphere, I would think that station density increases southward with population density; thus, there would be more “warmer” stations within a 1200 km radius. I’ve never seen a map of station locations worldwide so maybe there is actually no correlation. Does station distribution decrease with increasing latitude? If so, is there a warming bias within the 1200 km radius averaging?

  23. Keith W.
    Posted Aug 21, 2008 at 1:05 PM | Permalink

    #21 – Oakgeo, station density varies depending upon latitude, but it does closely mirror population in Western Civilization (i.e. of European descent). Here are two graphic presentations of the station locations. The first graphic is a time-lapse video of the station locations from 1950 to 1999. As you can see watching it, the United States shows a near saturation of stations early in the video, and never really loses the density of stations it had from the 50’s. The second graphic is color coded to indicate station duration, with active stations being the large dots. It is a very busy graphic, and not easy to read. Again, the United States is saturated, as is Europe. The interpretation I would take is that most of the information is coming from Europe and the United States, the two areas with the highest Urbanization effects.

  24. Sam Urbinto
    Posted Aug 21, 2008 at 1:13 PM | Permalink

    Oh, sorry, I was thinking about GHCN. And I accidentally circled the globe both ways, but if you’re doing 360 one direction, you only have to do 180 the other.

    This data set contains gridded mean temperature anomalies from the GHCN V2 monthly temperature data sets. GHCN homogeneity adjusted data was the primary source for developing the gridded fields. In grid boxes without homogeneity adjusted data, GHCN raw data was used to provide additional coverage when possible. Each month of data consists of 2592 gridded data points produced on a 5 X 5 degree basis for the entire globe (72 longitude X 36 latitude grid boxes).

    I wonder how they reconcile GISTEMP and GHCN when mixing.

    Regardless. Wouldn’t taking a GISTEMP square and extending it out 1200 kilometers in all four directions only extend the size of the box to cover? You’d just have a box with a frame, so to speak.

    63759 is 252.51 per side. Adding 2.4 to a side (1.2 each direction) gives 254.91 or 64949

  25. Keith W.
    Posted Aug 21, 2008 at 1:44 PM | Permalink

    #23 – Sam, they don’t increase the length of the side of the box. The 63,759 square kilometers is the area inside the box. I arrived at that number by dividing the surface area of the Earth (510,072,000 square kilometers) by 8000. Assuming a square box (not realistic I know, but convenient), you would only need the side of the box to be 252.5 kilometers to get 63,759 square kilometers inside the square (252.5 squared is 63,756.25). But they include a 1200 kilometer radius. The area of that circle is 1200 squared times pi, or 1,440,000 times 3.1415, or 4,523,760 square kilometers. That’s a much larger area, one which would hold 71 of our little grid boxes.

    Note, my 63,759 square kilometers figure is derived from using the entire Earth’s surface, both land and ocean. If you only use the land surface area (148,940,000 square kilometers), our grid box shrinks to 18,618 square kilometers. The 1200 kilometer radius circle would hold 243 of those boxes. I gave Hansen the benefit of the doubt that he used the entire planet surface when computing his grid. The source for the surface area data of the Earth is Wikipedia.
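
    For anyone who wants to check the arithmetic, a few lines reproduce the numbers above (treating the equal-area grid boxes as squares for convenience, as in my estimate):

        import math

        EARTH_SURFACE_KM2 = 510_072_000   # total surface area, land + ocean
        LAND_SURFACE_KM2 = 148_940_000    # land only
        RADIUS_KM = 1200                  # GISTEMP's station-search radius

        box_area = EARTH_SURFACE_KM2 / 8000      # ~63,759 km^2 per grid box
        box_side = math.sqrt(box_area)           # ~252.5 km if treated as a square
        circle_area = math.pi * RADIUS_KM ** 2   # ~4.52 million km^2

        print(f"grid box area : {box_area:,.0f} km^2 (side ~{box_side:.1f} km)")
        print(f"1200 km circle: {circle_area:,.0f} km^2, or {circle_area / box_area:.0f} grid boxes")
        print(f"land-only box : {LAND_SURFACE_KM2 / 8000:,.0f} km^2; "
              f"the circle covers {circle_area / (LAND_SURFACE_KM2 / 8000):.0f} of those")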

  26. Posted Aug 21, 2008 at 1:51 PM | Permalink

    This was my reply to a post over on Sea Ice 2 which dovetails into this thread as an observational rather than scientific comment.

    “That’s an astonishing temperature difference and makes me wonder again if producing a world average mean temperature serves any real purpose?
    IF all the weather stations remained constant in number and position.
    IF exactly the same equipment was used in each one
    IF the same person did the monitoring
    IF the conditions around the station did not change (ie urbanisation)
    IF there were sufficient stations to capture ALL the possible temp variations
    Then PERHAPS there might be some small point to it all.

    Today I walked from our village round the estuary to the town on the other side: three miles walking but only 200 yards as the seagull flies.
    In the same conditions OUR side of the estuary was 5 degrees F warmer than the other.
    The town on the other side has a temperature and sunshine gauge so will form part of the overall national records. However, is its temperature any more ‘correct’ than the one in our village?
    Also, its sunshine measuring equipment is less sensitive than another town’s equipment seven miles away. This other town regularly records three or four more hours of sunshine daily than our local one. Surely, when temperature variations are so pronounced from one place to another only yards away, and different sorts of equipment can record the same amounts of sunshine at wildly differing levels, it is all a bit meaningless?
    I am not disputing your analogy of temp differences either side of a front, but we are not dealing with that scenario in my particular case which must be replicated millions of times worldwide.”

    Bearing in mind the contents of this thread, perhaps someone can comment on whether there is any scientific value whatsoever in the ‘World average’, as it seems to be an extremely artificial and theoretical temperature construction.
    TonyB

  27. John Goetz
    Posted Aug 21, 2008 at 2:13 PM | Permalink

    #22 Keith

    My hypothesis, which I have yet to test, is that station density correlates with economic prosperity.

  28. Keith W.
    Posted Aug 21, 2008 at 2:28 PM | Permalink

    #26 – John, sounds good. It would make sense that people with greater economic success would have more time to be concerned about enumerating just how hot or cold it was. Anecdotal evidence from my parents (the daughter of a sharecropper and the son of a stonemason in Depression-era southern United States) was that the precise number measurement didn’t really matter. It was either hot or cold, and you got on with your life. Somehow I doubt someone who is worrying about what food is going to make up the one meal they eat during the day is all that concerned about a tenth of a degree centigrade measured on somebody’s thermometer.

  29. oakgeo
    Posted Aug 21, 2008 at 3:15 PM | Permalink

    Thanks Keith.

    In calculating a global mean, is a U.S. grid cell (let’s say with 20 stations) weighted equally with an African cell (let’s say with only 2 stations)? I ask because an addition or subtraction of 1 station to the African cell would likely bias the cell mean dramatically whereas the addition or subtraction of 1 station in the U.S. cell would only marginally affect the cell mean. Do you know how this is reconciled?

    I guess that what bothers me most about the surface records is the layers of estimates and assumptions required to get to one number (global mean annual temperature). I’ve dealt with fair sized datasets in geochemical mapping and I know from those experiences that failing to address sample size disparities can easily lead to spurious results. We call it the nugget effect because (for example) a single, large grain of gold in a sample can easily skew an assay trend, even though it is simply a statistical freak. If the nugget effect is huge then you can readily address it, but if it is subtle or within expectations then you will miss it.

  30. Sam Urbinto
    Posted Aug 21, 2008 at 3:28 PM | Permalink

    Keith; yes, 510 million and 8000 gives that number. I did that calculation also. But it’s not a radius, it’s picking any station within 1.2 km of the border, essentially making a frame by extending the diagonals of the box by 1.2. So 63,759’s root is indeed 252 per side. Lengthen that by double 1.2 and you now have 254.4 per side, which squared is 64,719.

    Imagine a picture that’s 1×1 and extend it to 2×2 with a matte. It gets bigger by the sum of those two sides, now encompassing everything lying within an inch of the picture and will fit in a 2×2 frame fully.

    And it gets bigger by the old and new size, or an area of 1 to 2 is 1 + 1+2 = 4

    4 to 8 is 16+4+5+5+6+6+7+7+8=64 or 48 bigger
    6 to 10 is 36+13+15+17+19=100 or 64 bigger
    10 to 14 is 100+21+23+25+27=196 or 96 bigger.

    I wonder if there’s a formula to do that. Of course, it’s easier just to square the new size and subtract the old one. 🙂

    Anyway, the point is that all adding 1.2 km does is extend the diagonals to make a 254.4 km on a side box instead of a 252 km on a side box.

  31. kuhnkat
    Posted Aug 21, 2008 at 3:51 PM | Permalink

    #22 Keith W.,

    the first video graphic of station density shows a big drop in stations in Canada around 89-90. I believe there is a worldwide drop around the same period. In ’99 there is a HUGE drop in US stations.

    Take another look at it. Most of the WARMING may actually be attributed to cooler station loss.

  32. Keith W.
    Posted Aug 21, 2008 at 3:53 PM | Permalink

    Sam, it is not 1.2 kilometers added to the diagonals. It is 1200 kilometers in radius. I quote the GISTEMP site.

    “A grid of 8000 grid boxes of equal area is used. Time series are changed
    to series of anomalies. For each grid box, the stations within that grid
    box and also any station within 1200km of the center of that box are
    combined using the reference station method.”

    There is a circle drawn out 1200 kilometers from the center of the box within which any stations are included, even though they may be over 1000 kilometers beyond the constraints of the grid box. As I said, 4.52 million square kilometers in area.

  33. Sam Urbinto
    Posted Aug 21, 2008 at 4:12 PM | Permalink

    Ah, so they have a circle with a radius encompassing all land within 1200 kilometers of the center of the circle, which is the center of the box. If that’s true (I’m assuming it is; that they got their own explanation of what they do correct) we don’t have a 252 km per side box covering ~64,000 square kilometers in area, we have a 1,200 km radius circle, centered on each of the 8000 boxes, each circle covering, like you said, 4,521,600 square kilometers.

    Seems crazy. How do they handle the overlap? I still don’t want to think they’re actually trying to deal with overlapping areas of 4.5 million square kilometers. They couldn’t mean 120 meters, because that’s an area of only 4.5K.

    Oh, well. Hmmmm.

  34. John S.
    Posted Aug 21, 2008 at 5:18 PM | Permalink

    The fundamental problem is that the vast majority of the 8000 grid cells are devoid of records covering the 20th century in unbroken fashion. Thus GISS resorts to a piecemeal annual anomaly treatment, with an interpolation radius of 1200km. Since most of the intact records outside the US are in urban centers and there are no rural records by which to correct for UHI, their sausage machine produces global time series that are biased toward warming and geographically unbalanced–the “nugget” effect mentioned by oakgeo (#28). Only the most uncritical minds can accept the resulting time-series as the “global average temperature.”

  35. Sam Urbinto
    Posted Aug 21, 2008 at 5:28 PM | Permalink

    1200 (not 120) versus 1200 K

  36. jc-at-play
    Posted Aug 21, 2008 at 6:30 PM | Permalink

    #22 – Keith W.

    … most of the information is coming from Europe and the United States, the two areas with the highest Urbanization effects.

    You have a point about Europe, but is the USA really “one of the two areas with the highest Urbanization effects”? It seems to me that, compared to the rest of the world, an unusually large percentage of USA weather information comes from rural sites.

  37. Keith W.
    Posted Aug 22, 2008 at 10:49 AM | Permalink

    #36 – JC, urbanization is not just being inside a city. If you really look at the population development within the United States, you would see that urban development is rather wide spread, and when you consider a 1200 kilometer radius viewpoint, almost extensive.

    As this report from the US Census bureau shows, population growth is evident in 49 of the 50 states.

    This report from Columbia University includes graphics showing population densities in the Eastern United States and Europe are really only exceeded by certain areas in Asia.

    This report from the University of Pittsburgh documents a study that is ongoing worldwide in connection to global large urban centers. As you can see on their Table 1 and Figure 1, the United States has a higher number of study cities than any other country, and they are spread out across the country, many within 1200 kilometers of each other.

    Finally, this table details the top 50 most populous cities in the US. These are not the top urban centers, as some of these cities are actually adjacent to others (Dallas, Ft. Worth, and Arlington, TX being a perfect example). Even so, these fifty cities are spread out over 29 states plus the District of Columbia. They split pretty evenly between East and West, North and South. There is a little higher density in the Northeast, but Southern California is almost equivalent.

    #31 – Kuhnkat, I’m inclined to agree with you. If you look at the GISTEMP global trend maps, as well as their raw data reports, the highest increases are being reported for the Arctic regions, which are the areas with the fewest stations other than the South Pacific. The 1200 km extension for the reported averages would account for some increases in temperature.

  39. Posted Aug 22, 2008 at 11:58 AM | Permalink

    Re my post #26

    I live near Exeter UK, the home of the Met office.

    They are always getting it in the neck for their complete failure to predict weather for our area only 15 miles away from their highly expensive offices stuffed full of taxpayer-funded computers…

    This article appeared in today’s local paper based on a statement from the Met office in response to criticism of their continually changing forecast for what is the last big holiday weekend of the year.

    “Weather forecasting was not an exact science and probably never would be. It is particularly difficult in the south west (of England) because there are so many micro climates: one area can have different weather from somewhere a few miles away.”

    As far as I can see the whole world is composed of micro climates (as observed in my post #26) and I cannot see any scientific basis or practical purpose in trying to come up with one global average based on data from extremely large grids, which will produce an average that probably does not relate to any part of that grid. Can someone please explain the point?

    TonyB

  41. DeWitt Payne
    Posted Aug 22, 2008 at 1:43 PM | Permalink

    Keith W.,

    …the highest increases are being reported for the Arctic regions…

    The satellite measurements also report this, so I don’t think you can blame UHI, homogenization or whatever. High latitude temperatures have gone up, particularly over Siberia. The question is why.

  42. Sam Urbinto
    Posted Aug 22, 2008 at 2:02 PM | Permalink

    On a generic level, if radiation from a steam explosion in the Ukraine can make it to England, why can’t waste heat and other weather-affecting patterns from Chicago make it to Alaska, or soot from China make it to Siberia? One would think any number of things like that might cause an increase in the anomaly in some other area far away. Especially in the NH up closer to the Arctic.

  43. DeWitt Payne
    Posted Aug 22, 2008 at 2:10 PM | Permalink

    It’s a question of scale. Radioactive elements can be determined at ridiculously low levels. So it almost literally only takes a few atoms making it to the UK to see Chernobyl. Soot from China, most of which ends up in the Pacific, won’t have much of an effect unless there’s a lot of it.

  44. Sam Urbinto
    Posted Aug 22, 2008 at 2:32 PM | Permalink

    DeWitt, yes. I’m simply saying the planet’s weather is all teleconnected to some extent. If that makes a difference as a practical matter, that’s another issue.

    If it’s not some other place on the planet causing the overall rise in the trend of the average of t_mean over a month or number of months, then what is happening in Siberia itself that would do so?

    I don’t know the answer to that question, simply that:

    A) Someplace else is (or someplaces else are) making Siberia warmer.
    B) Something in Siberia is making it warmer.
    C) A and B.
    D) Siberia is not getting warmer.

  45. DeWitt Payne
    Posted Aug 22, 2008 at 4:26 PM | Permalink

    Sam,

    D) is not in the cards. There is too much corroborating evidence like the pattern of ice loss in the Arctic Ocean. How? B) is unlikely. It has to be heat transfer from the tropics by atmospheric and oceanic circulation. Why? I dunno. Will it keep going or reverse? Dunno that either. I’m not convinced anybody else does either, but that’s speculation on my part and I promised bender not to do that any more, or at least not as much.

  46. Sam Urbinto
    Posted Aug 22, 2008 at 5:39 PM | Permalink

    Ah. Well.

    I would say A) also. If the anomaly rise, ice and snow behavior, and water temperatures all point towards a truly warmer Siberia in general, it would be prudent to accept that it is warming as our premise.

    If nothing particularly interesting is going on in Siberia to account for it, that leaves A)

    So if we’re trying to answer the question of what is it, obviously warmth from elsewhere has some reason. If we remove “natural variability” from the table, and remember that the long-lived greenhouse gases are well-mixed, what else does that leave us?

    My talk about UHI and the like, specifically pollution, albedo, wind and waste heat effects, seems the most likely way to answer the question; the problem is, are any of them alone or in combination enough to be making it warmer up towards the poles? Maybe there are instead a number of interlocking factors that aren’t clear or can’t be decoupled.

    It appears, according to the satellites, that it’s simply the atmosphere getting warmer closer to the ground and getting colder farther from the ground, for what appears to be a statistically insignificant overall change.

    Now of course we live in the very lower troposphere, which for land GHCN (1979-2006) shows a trend of .28 C per decade (trend .1 to .9 over 1979-2006), or a +.8 rise in the trend over land since the satellites started. UHI? Albedo increases? Measurement devices got better? Bad methods of adjusting the anomaly? Biased readings? Waste heat? More urbanization and industrialization?

    The ERSST doesn’t have an easy charting device.

    But can you really compare air ~5 up over the ground (that it’s a proxy for) on 5×5 land squares over all the ~30% of the globe to (however deep) engine outlets that go over the shipping lanes of the water (whatever that covers) or buoys or such in 2×2 squares? No, I wouldn’t think so. I rather liken it to comparing a country with 1 ground station per 2500 km to one with 100 over 250 km.

    But if ground alone is .31 (.1 to .9) per decade, and together with water over the same period it is .16 (0 to .5), it would appear the oceans are sucking in .15 C of anomaly trend rise per decade!

    Or is it just the air being warmer closer to the ground in general, and over highly changed land in specific?

    Or is it just the rise in population from 1979 of 4 billion to almost 7 billion?

    The fall of the USSR and the formation of the EU? The industrialization of countries like India, China, Vietnam, and Russia? A rise in global terrorism? The conversion of farmland to biofuel uses?

    No, of course not. It’s the 14% more carbon dioxide than there was in 1979. That 46 ppmv is all to blame. 🙂

    So I agree. “It has to be heat transfer from the tropics by atmospheric and oceanic circulation. Why? I dunno. Will it keep going or reverse? Dunno that either. I’m not convinced anybody else does either”

    Note I’m not speculating what it is (my opinion is that it’s some mix somehow), just saying that it’s not so simple and there’s a lot to account for.

  47. John S.
    Posted Aug 23, 2008 at 3:10 PM | Permalink

    Sam (re #44),

    Something IS going on in Siberia, and much of subtropical Asia: unprecedented economic development since the collapse of the Soviet Union in the early ’90s and the somewhat earlier turn to capitalism in China. In subsequent decades, wide swaths of Siberian taiga have been clear-cut for timber export and the number of motor vehicles has increased by at least a factor of 5 in both Russia and China.

    In a deeply continental climate, where temperature inversions dominate, both of these patent developments are more plausible causes of abrupt regional warming than are changes in remote ocean currents and other conjectured factors. If one takes away Asia and does away with the GISS sausage machine in constructing world-wide temperature indices, there is no secular trend to be found in 20th-century non-urban station records. There are, however, sharp swings that took temperatures to deep lows in the 1970s and proportionate highs near the recent turn of the century. It is here that global circulation and long-term cloud patterns come into play.

  48. John S.
    Posted Aug 23, 2008 at 3:26 PM | Permalink

    Sam,

    After reading your #46, it’s apparent that you anticipated some of my musings.

  49. Geoff Sherrington
    Posted Aug 23, 2008 at 9:23 PM | Permalink

    Re this matter of stations up to 1200 km away being used in weighted averages of temp. I do not know if this method is used elsewhere, but in Australia there are 2 graphs given in “Updating Australia’s high-quality annual temperature dataset” by Paul Della-Marta, Dean Collins and Karl Braganza, Aust. Met. Mag. 53 (2004) pp 75-93.

    The Figure 4 graphs have “Correlation %” on the Y axis from -30 to +100, and “Station Separation” (0 km to 4,000 km) on the X axis. The caption is approximately: “1961-1990 inter-station correlation for annual (temperatures) as a fn of distance. Exponentially damped cosine functions are fitted where d is less than 3,000 km.”

    The smoothing starts at a correlation of 70-80% and drops in a fairly straight line to 20% at 2,000 km separation. Roughly, the correlation at 1200 km is below 40%, which is weak. This appears to be part (if not all) of the justification to smooth 2D data up to 1200 km apart.

    The problem is that annual mean data are used. As an analogy, a reasonable correlation to monthly data can be gained by using a sine curve and correlating it with temp. It just tells you that winter is colder than summer. Or, if your test station is Darwin, that you are drawing data from another hemisphere. The graphs in Fig. 4 incorporate the information that it gets colder as you approach the poles and warmer as you go to the equator. And possibly that weather systems move around the globe faster than once a year, so a hot or cold spell at a station might be repeated sometime in the same year somewhere else.

    But this is correlation with inappropriate causation for the job in hand.

    The job in hand is to interpolate a missing value at a station from surrounding stations AT ABOUT THE SAME TIME. Or at least when subjected to the same moving weather cell. Or at least in a similar season.

    So the graphs should be constructed on a time basis closer to daily temperature figures, not annual.

    Geostatistics and other techniques can give an indication if one station has predictive power for another and what weight should be used. That’s the way I’d go. I’d be surprised to find causation from one station to another extending usefully beyond a few tens of km.

    About the size of a storm system.

  50. Posted Aug 25, 2008 at 5:03 AM | Permalink

    Re #22
    I did a quick and dirty calculation of a black body Earth with no seasons (no tilt of the axis). For simplicity I assumed that 1000 W/m^2 (mean heating 250 W/m^2) reaches the surface of the planet without atmosphere. The temperature is measured at 10 deg intervals (roughly 1200 km apart) along a single meridian from the equator to the poles. The temperature of a measurement station is reported as the mean of the specific station and other stations within a 1200 km radius. Question: How big will the temperature error be due to the temperature sampling for this idealized case? Using the relation for black body temperature P = σ·A·T^4, one gets (rounded to one degree):
     0 deg   error    0 K   (equator)
    10 deg   error    1 K
    20 deg   error    1 K
    30 deg   error    1 K
    40 deg   error    1 K
    50 deg   error    1 K
    60 deg   error    2 K
    70 deg   error    4 K
    80 deg   error   45 K
    90 deg   error  110 K   (pole)
    It is thus clear that simply sampling using a 1200 km radius will cause a BIG bias at the poles in this idealized case. Adding an atmosphere, seasons, thermal inertia and heat transport will obviously make the errors much smaller. In any case it seems obvious that using a big 1200 km radius will introduce significant errors in the estimation of polar temperatures on a real earth too. The 1200 km sampling introduces a positive bias in the estimated polar temperatures. The only case where there wouldn’t be a polar warm bias is if the local temperature is a linear function of latitude. This is not the case in the real world, I think.
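
    To see where the big polar number comes from, a few lines suffice (a rough sketch only: it assumes daily-mean insolation proportional to cos(latitude), stations every 10 degrees along a meridian, and the polar value reported as the plain average of all stations within 1200 km of the pole; different choices give somewhat different numbers):

        import numpy as np

        SIGMA = 5.67e-8                                # Stefan-Boltzmann constant, W m^-2 K^-4
        S = 1000.0                                     # flux at normal incidence, W/m^2
        lats = np.arange(0, 91, 10)                    # one station every 10 deg of latitude
        flux = (S / np.pi) * np.cos(np.radians(lats))  # daily-mean insolation, rotating untilted sphere
        T = (flux / SIGMA) ** 0.25                     # local black-body temperature, K

        km_per_deg = 2 * np.pi * 6371.0 / 360.0        # ~111 km per degree of latitude
        near_pole = np.abs(lats - 90) * km_per_deg <= 1200.0

        true_polar_T = T[-1]                           # essentially 0 K: no insolation at the pole
        reported_polar_T = T[near_pole].mean()         # pole averaged with the 80-deg station
        print(f"true polar T     : {true_polar_T:6.1f} K")
        print(f"reported polar T : {reported_polar_T:6.1f} K (warm bias ~{reported_polar_T - true_polar_T:.0f} K)")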

  51. Keith W.
    Posted Aug 25, 2008 at 2:28 PM | Permalink

    Persistent searching sometimes pays off. For those who are interested, this link leads to a world map showing the GHCN stations worldwide. The scale is rather large, but it does show the stations. The spread of stations across the Northern Hemisphere is much more complete than the Southern Hemisphere. The page does not have any date data listed, and it does include stations in the United States that are not included in the USHCN list available at surfacestations.org, so I cannot verify the map’s accuracy with regard to the current GHCN list. It does say that it contains the locations of the 7280 WMO stations of the GHCN.

  52. John S.
    Posted Aug 25, 2008 at 4:26 PM | Permalink

    Geoff (re: #49),

    To be sure, daily interpolation from highly correlated stations would be a preferable means of filling short gaps in time series, rather than leaving the monthly average blank in the GHCN series and letting GISS produce its data sausage through “homogenization” of anomalies. The frequent problem, however, is that in many grid-cells there are long gaps to be filled and no nearby stations that are highly correlated.

    I agree that on a physical basis, tens of kilometers, rather than 1200, is the typical radius of very high overall R^2. But I’ve found via cross-spectrum analysis of yearly station-averages that the rapid fall-off with distance is primarily the result of loss of coherence at the sub-Wolfe frequencies (i.e., f < 1/11 yrs). At higher frequencies, coherence often remains high even at separations of 500 km or more. This is the regional imprint of year-to-year temperature variations, which can be successfully exploited to patch time series on the same time scale.

    But that cannot be accomplished by simply interpolating the anomalies via a distance-weighted scheme, because the anomalies at different stations are subject to different modulation by the sub-Wolfe components. It is at this point that the unexamined assumption of a broadly uniform regional climate, to which homogenization of anomalies can be applied, utterly fails. And trend-adjusting homogenization simply compounds the removal of the data product from any semblance of realism.

    Lars (re: #50),

    The patching/homogenization is performed not upon actual temperatures, but upon anomalies. The egregious errors introduced thereby are thus entirely different in nature than what your zonal results show.

  53. John Goetz
    Posted Aug 25, 2008 at 9:18 PM | Permalink

    #51 Keith W.

    I like the image. However, it is a bit misleading in that it implies that all of the stations shown produce records that are used in the global temperature calculation today. Unfortunately, only about 1/4 are used today.

  54. Geoff Sherrington
    Posted Aug 25, 2008 at 10:19 PM | Permalink

    Re #53 John Goetz

    Having been puzzled by the lack of rebuttal from AGW people to the observation that the global average has been on a plateau for a decade, I wonder if the plateau is an artefact of removing a disproportionate number of hotter, bad stations from the older dataset and not admitting it. This would produce an apparent cooling. Then, all you have to do is sit around for a few years after making an announcement that more heating is ‘in the pipeline’.

    Goodness, an incredible mess has been made of presenting daily temperatures on a daily basis. Derivation of averages over time or space, smoothings, predictions, have to be very carefully qualified.

    I do not know how many “foreign” stations are used by coordinators in the USA, nor what homogenisation has already been done. For all I know, some adjustments might be getting done twice, once at home and again in the USA. The best I have got so far from our BOM is a suggestion to read a 1996 PhD thesis and that “after the data leave here we are no longer responsible for what other people do to them”.

  55. Posted Aug 25, 2008 at 10:35 PM | Permalink

    Geoff, it shows up in the satellite datasets (UAH and RSS) also. I don’t think it’s the root cause. A contributor to the plateau in the surface sets, perhaps.

  56. Keith W.
    Posted Aug 27, 2008 at 7:52 AM | Permalink

    #53 – John, I don’t think that first map is inaccurate or extremely out of date. I did a little more searching and found some more data from the NOAA website concerning GHCN monthly version 2. The images here and here are supposed to show the station locations for mean temperature and min/max temperature. They do state here that they use a slightly smaller set for the homogeneity-adjusted network; they just don’t give any list for this subset. The website says it was updated on January 3, 2007. The first image is very similar to the large map I first found, and even the reduced min/max network is not that dissimilar.

    Also, this table shows the number of stations involved in the GHCN listings by source network. It does look like there is some overlap in station location between a few of the networks. Trying to figure out which data set is supplying which data from which stations looks like a “needle in a haystack” type search. Couldn’t they just give us a simple list saying which stations are in the GHCN? I’m sure it could be done on an Excel spreadsheet by a decent typist in an hour or two.

  57. John F. Pittman
    Posted Aug 27, 2008 at 2:03 PM | Permalink

    From: Pittman [mailto:sc.rr.com]
    Sent: Saturday, August 18, 2007 12:49 PM
    To: nasafoia@nasa.gov
    Subject: FOIA request

    Under the FOIA and amendments, I wish to obtain the raw data in digital format for:
    Hansen et al. 1999
    Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato, 1999: GISS analysis of surface temperature change. J. Geophys. Res., 104, 30997-31022, doi:10.1029/1999JD900835.
    GISS analysis of surface temperature change
    J. Hansen, R. Ruedy, J. Glascoe, and M. Sato
    NASA Goddard Institute for Space Studies, New York
    The source of monthly mean station temperatures for our present analysis is the Global Historical Climatology Network (GHCN) version 2 of Peterson and Vose [1997]. This is a compilation of 31 data sets, which include data from more than 7200 independent stations.
    I do not request the GHCN version 2 adjusted data. My request is for the raw data.

    Response
    Sent: Friday, July 18, 2008 12:37 PM
    To: sc.rr.com
    Subject: FOIA Request #07-163

    Pittman

    Dear Mr. Pittman,

    We apologize for the delay in responding to your request.

    This is in response to your Freedom of Information Act (FOIA) email request for raw data in digital format for GISS analysis of surface temperature change.

    Mr. Retro Ruedy from NASA Goddard Institute for Space Studies, had stated that the information can be located at the below link:

    http://data.giss.nasa.gov/gistemp/foia/v2.mean_1999.txt

    If we can be of further assistance, please contact Jolyn Nace at 301-286-9030.

    Joan E. Belt
    FOIA Public Liaison Officer
    ***************************************
    NASA Goddard Space Flight Center
    Office of Public Affairs
    8800 Greenbelt Road, Mail Code 130
    Greenbelt, MD 20771
    Phone: (301) 286-4721 Fax: (301) 286-1712

    I wonder if I did receive what I requested?

  58. Keith W.
    Posted Aug 27, 2008 at 3:13 PM | Permalink

    John, you may have, although it is hidden amongst a large amount of gobbledygook. I still wonder why they choose to list all this data in text files. Have they never heard of Spreadsheets? Even so, I found the temperature listings for my local weather station here in Memphis, TN, in that long line of numbers. I’m going to see if I can dissect the data further so I can at least tell you what scale and units they are using in that file. If it is still anomaly units they are using, then they are really trying hard to hide the data inside a pile of paperwork.

  59. Keith W.
    Posted Aug 28, 2008 at 3:42 PM | Permalink

    #57 – John, it looks like the file they referenced to you is the source file for the individual station data you can get if you go to their station data site. The temperature data is recorded in tenths of a degree centigrade. It’s not often somebody dumps a 46.1 meg text file on you. I am examining some of the data, specifically from my local airport records, because I found some weird inconsistencies that don’t jibe with my recollections of the era in question. I visited the station during the 70’s as an elementary school student, and saw the temperature setup at the time, so I know it existed. But one record attributed to the station has the entire decade of the 70’s missing. I’m trying to see if I can figure out what they did to reconstruct it for the other record that has temperatures listed for that period.

    Also, just a side note from examining this, the airport in Memphis is not a USHCN listed site, but it is part of this temperature record. Since this is listed as the GHCN v.2 mean temperature data, that would seem to indicate that GISS uses whichever stations they wish to use when doing their analysis.
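
    For anyone else poking at that v2.mean file, each line appears to be a 12-character station/duplicate ID, a 4-digit year, and twelve 5-character monthly values in tenths of a degree C, with -9999 marking a missing month. That is my reading of the layout, so treat it as an assumption; a few lines of Python pull a record apart:

        def parse_v2_mean_line(line):
            # Assumed layout: columns 0-11 station + duplicate id, 12-15 year,
            # then 12 monthly values of width 5 in tenths of deg C (-9999 = missing).
            station_id = line[0:12]
            year = int(line[12:16])
            months = []
            for i in range(12):
                raw = int(line[16 + 5 * i : 21 + 5 * i])
                months.append(None if raw == -9999 else raw / 10.0)
            return station_id, year, months

        # Made-up record: January 4.5 C, February missing, the remaining months 10.2 C
        sample = "425723340001" + "1975" + "   45" + "-9999" + "  102" * 10
        print(parse_v2_mean_line(sample))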

  60. John F. Pittman
    Posted Aug 28, 2008 at 5:28 PM | Permalink

    Thanks Keith. I did ask for the individual data that went into their paper before the sausage grinder got hold of it. It would be interesting to compare the tendency toward warming the present and cooling the past in 1998 versus 2008. I had asked for this data for someone who wanted the raw data rather than the processed data.

  61. Keith W.
    Posted Aug 29, 2008 at 11:33 AM | Permalink

    You’re welcome, John. Maybe this document will help in your next request. It seems to give names to the actual files that you might want, at least with regard to the USHCN stations. The entire directory for the NCDC/NOAA ftp site ( ftp://ftp.ncdc.noaa.gov/pub/data/ ) seems to provide some interesting possibilities for exploration.