The 1930s are getting Colder

According to the National Climatic Data Center (NCDC), covered here, a new Beta version of the U.S. Historical Climatology Network will be released next year. They say that the new data set uses

“recent scientific advances that better address uncertainties in the instrumental record. Because different algorithms were used in making adjustments to the station data which comprise both data sets, there are small differences in annual average temperatures between the two data sets. These small differences in average temperatures result in minor changes in annual rankings for some years”.

One of these “minor changes” reverses the order of 1934 and 1999, with the relative change amounting to 0.45 deg F. And, in fact, the new changes are on top of some other puzzling changes which had already moved 1999 well up the league table.

Here’s the top part of the table comparing the top series in the two versions:

     USHCN V1              USHCN V2 (Beta, undergoing testing)
 1.  2006  55.01           1998  55.09
 2.  1998  54.94           2006  54.95
 3.  1934  54.91           1999  54.61
 4.  1999  54.53           1934  54.54
 5.  1921  54.49           1921  54.37
 6.  1931  54.34           1990  54.37
 7.  1990  54.24           2001  54.35
 8.  2001  54.23           2005  54.28
 9.  1953  54.18           1931  54.20
10.  1954  54.13           1953  54.10
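
The 0.45 deg F relative change between 1934 and 1999 can be checked directly against the table (a quick sketch; the four temperatures are copied from the rankings above):

```python
# Annual US mean temperatures (deg F) for 1934 and 1999,
# copied from the V1 and V2 Beta rankings above
v1 = {1934: 54.91, 1999: 54.53}    # USHCN V1
v2 = {1934: 54.54, 1999: 54.61}    # USHCN V2 Beta

gap_v1 = v1[1934] - v1[1999]       # 1934 leads 1999 by 0.38 F in V1
gap_v2 = v2[1934] - v2[1999]       # 1934 trails 1999 by 0.07 F in V2
relative_change = gap_v1 - gap_v2  # total swing between the versions
print(round(gap_v1, 2), round(gap_v2, 2), round(relative_change, 2))
# 0.38 -0.07 0.45
```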

Contemporary Reporting

But even the USHCN V1 data shown here has already been adjusted substantially from contemporary reports. First, here is a report from August 1999 on the 1998 annual results. Hansen stated:

The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934.

Here is a graphic from the news release – I’ve been unable to locate a digital version of this dataset. In this image, the 1934 temperature is almost half a degree C (C not F) warmer than 1998, which was not even in second place.
From Hansen 1999 News Release.

Next, a contemporary account of 1999 said that it was approximately the 10th warmest year:

The temperature in the United States was also warm, about 0.7°C above the 1951-1980 average (Figure 3). 1999 was approximately the 10th warmest year of the century. The warmest years in the United States occurred during the dust bowl era, with 1934 being the warmest year.

However, in the table shown above, 1999 had moved into 3rd place (in the 20th century standings) in USHCN V1 and then continued its retrospective advance up the league table by vaulting into 2nd place in USHCN V2 Beta. And to think that only 7 years earlier it ranked in 10th place.


  1. Steve Sadlov
    Posted Feb 15, 2007 at 4:28 PM | Permalink | Reply

    Seems they are discrediting the Dust Bowl years. What a big mistake. They were far worse than anything during the 90s and early 00s. When we actually do see another situation like the 1930s again, it will be nasty; people won't expect it. This is the danger of finagling data to make the period 1990 – present appear warmer than actual: it belittles past periods which were in fact warmer, and makes people complacent. The ethical considerations are interesting.

  2. jae
    Posted Feb 15, 2007 at 5:57 PM | Permalink | Reply

    Maybe this is up-and-up, but I have become so cynical of climate science that I can’t help thinking that it looks just like selecting the proper proxies to make the MWP and LIA “go away.”

  3. K
    Posted Feb 15, 2007 at 6:17 PM | Permalink | Reply

    #1 I don’t grasp your reasoning on this one.

    How will this make people complacent? The GW people are constantly saying things are worse now and are going to be worser. That hardly seems likely to make people complacent.

    A little eyeballing makes me sure the new curve will produce a smoother rise and thus match CO2 levels better. It will also make the post-WW2 cooling look less prominent. Who would have guessed?

    The cliche ‘the future remains the same, only the past changes’ comes to mind.

  4. Follow the Money
    Posted Feb 15, 2007 at 6:17 PM | Permalink | Reply

    All the 1990s/2000s dates are tweaked up, all the earlier dates tweaked down.

    Imagine that!

    All except 2006, tweaked down a smidge (0.06), perhaps to make 1998's ascendance more digestible.

  5. Follow the Money
    Posted Feb 15, 2007 at 6:28 PM | Permalink | Reply

    “it looks just like selecting the proper proxies to make the MWP and LIA “go away.””

    Manufacturing “consensus” that the 90s/00s were the “hottest” ever, creating the appearance of “crisis,” that a “tipping point” is near, which can be averted by present legislation in America enacting carbon trad…ok, no politics on science threads. ;-)

  6. K
    Posted Feb 15, 2007 at 6:55 PM | Permalink | Reply

    #1. I don’t grasp your meaning. How will this make people more complacent?

    People are being told that today is worse than before. And that it will soon be worser. It seems to me that must reduce complacency.

    When I went to the article and looked at the revisions it seems that the overall pattern is to downplay the years before WW2, increase recent years, and reduce the effect of the post WW2 cooling. The net effect is a curve which rises more smoothly. And it will, no doubt, more closely agree with the rise of CO2.

    Who would have guessed?

    1954 really gets hit. It moved down about 7 notches among the 25 warmest years. Oddly enough, its warm companion, 1953, stayed in place.

    The cliche ‘the future remains the same, only the past changes’ comes to mind. It often does on this topic.

  7. Jeff Weffer
    Posted Feb 15, 2007 at 6:58 PM | Permalink | Reply

    In the US at least, 1934, 1931, 1921 and 1953 were at one time all higher than the record year of 1998.

    I just don’t understand how they can continually get away with rewriting history like this. There really should be a science ethics committee that can stop this from happening.

    If they get away with it again, every new year will become a new record.

  8. HFL
    Posted Feb 15, 2007 at 7:43 PM | Permalink | Reply

    Worrisome, but no surprise. Balling and Idso published a paper in GRL in 2002 (DOI 10.1029/2002GL014825) entitled “Analysis of adjustments to the United States Historical Climatology Network (USHCN) temperature database.” The abstract states:

    “The United States Historical Climatology Network (USHCN) temperature database is commonly used in regional climate analyses. However, the raw temperature records in the USHCN are adjusted substantially to account for a variety of potential contaminants to the dataset. We compare the USHCN data with several surface and upper-air datasets and show that the effects of the various USHCN adjustments produce a significantly more positive, and likely spurious, trend in the USHCN data.”

    They found that the trends in the unadjusted temperature records are not different from the trends observed using the satellite-based lower-tropospheric temperature record or from the trend of the balloon-based near-surface measurements. They go on to conclude that given that no substantial time of observation bias is contained in either the satellite-based or balloon-based measurements, and given that the time of observation bias is the dominant adjustment in the USHCN database, the present set of adjustments spuriously increase the long-term trend.

    The issue of exactly how station data adjustments are made has been a concern to many climatologists for decades and, I suspect, likely to continue being a concern.

  9. Steve Sadlov
    Posted Feb 15, 2007 at 8:42 PM | Permalink | Reply

    RE: #3 – Because very few alive today can actually remember the 30s. If you lock into people’s minds that the 90s are “the warmest ever” then they’ll believe the Dust Bowl years were better than the 90s. So what has happened is, people’s expectations have actually and ironically been made softer by this sort of language. Not if but when we get hit with another 1930s, people will be taken by surprise because in their minds, years like 1998, 2003 and 2006 were supposedly worse than the Dust Bowl years.

  10. David Smith
    Posted Feb 15, 2007 at 9:12 PM | Permalink | Reply

    Recall this paragraph from Brohan et al., page 6:

    “For some stations both the adjusted and unadjusted time-series are archived at CRU and so the adjustments that have been made are known… but for most stations only a single series is archived, so any adjustments that might have been made are unknown.”

    In other words, Hadley reported that they lost track of what data received (homogenisation) adjustments. That has the same effect as misplacing the original data, such that no one can go back and reconstruct what the actual historical temperatures were.

    I hope the NCDC doesn’t have the same uh-oh with the original US temperature records.

  11. Steve McIntyre
    Posted Feb 15, 2007 at 9:38 PM | Permalink | Reply

    #10. It makes me ill to read stuff like that. That’s probably why Jones won’t release his data; he’s lost too much original data and is embarrassed to have this in the sunshine. Hey, they’re the Team. Why should anyone expect them not to lose part of their data?

  12. David Smith
    Posted Feb 15, 2007 at 9:42 PM | Permalink | Reply

    RE #10 There is an interesting aspect to the discussion of homogenisation adjustments in Brohan (pages 6 to 8). The paper states that the most common reason for an adjustment was a station relocation from a (typically warmer) city to a (typically cooler) rural airport. The chart indicates that adjustments on the order of 0.5C were common, and were generally adjustments which lowered (cooled) the older temperature data.

    Yet, later on, Brohan et al. seem to have some skepticism about UHI effects and seem to suggest only a small impact (0.05C/century) on the long-term trend.

    Which is it?

  13. John A
    Posted Feb 15, 2007 at 9:59 PM | Permalink | Reply

    What happened in the unseen labyrinth to which the pneumatic tubes led, he did not know in detail, but he did know in general terms. As soon as all the corrections which happened to be necessary in any particular number of The Times had been assembled and collated, that number would be reprinted, the original copy destroyed, and the corrected copy placed on the files in its stead.

    This process of continuous alteration was applied not only to newspapers, but to books, periodicals, pamphlets, posters, leaflets, films, sound-tracks, cartoons, photographs — to every kind of literature or documentation which might conceivably hold any political or ideological significance.

    Day by day and almost minute by minute the past was brought up to date. In this way every prediction made by the Party could be shown by documentary evidence to have been correct, nor was any item of news, or any expression of opinion, which conflicted with the needs of the moment, ever allowed to remain on record. All history was a palimpsest, scraped clean and reinscribed exactly as often as was necessary.

    In no case would it have been possible, once the deed was done, to prove that any falsification had taken place. The largest section of the Records Department, far larger than the one on which Winston worked, consisted simply of persons whose duty it was to track down and collect all copies of books, newspapers, and other documents which had been superseded and were due for destruction. A number of The Times which might, because of changes in political alignment, or mistaken prophecies uttered by Big Brother, have been rewritten a dozen times still stood on the files bearing its original date, and no other copy existed to contradict it. Books, also, were recalled and rewritten again and again, and were invariably reissued without any admission that any alteration had been made.

    Even the written instructions which Winston received, and which he invariably got rid of as soon as he had dealt with them, never stated or implied that an act of forgery was to be committed: always the reference was to slips, errors, misprints, or misquotations which it was necessary to put right in the interests of accuracy.

    George Orwell, “1984”

  14. tom
    Posted Feb 15, 2007 at 10:12 PM | Permalink | Reply

    You mean to tell me that Hadley CRU does not have the original temp data? That they only have data that has been tweaked and don’t have the original? Man, this is just sickening. Yet we’re ready to pass legislation. Oh boy…

  15. Steve McIntyre
    Posted Feb 15, 2007 at 10:26 PM | Permalink | Reply

    always the reference was to slips, errors, misprints, or misquotations which it was necessary to put right in the interests of accuracy.

    Nice find, John A. There’s always a “good” reason why past temperatures have to be reduced, isn’t there?

  16. K
    Posted Feb 15, 2007 at 10:47 PM | Permalink | Reply

    #9: thanks. I’m sorry for the (near) duplicate posts #3 & 6. I sent #3 and went to visit my daughter. Upon return #3 hadn’t posted yet and I wrote #6. (I think that is what happens when you ARE old enough to remember the dust bowl.)

    Anyway, I remember seeing my father occupied for a few minutes outside. And when he stepped away, you could see where he stood outlined on the sidewalk.

    You write: ‘Not if but when we get hit with another 1930s, people will be taken by surprise because in their minds, years like 1998, 2003 and 2006 were supposedly worse than the Dust Bowl years.’

    Actually a replication of the dust bowl would be hailed as strong evidence of GW. Heat, no rain, wind. The entire Great Plains is subject to such anyway – the first farming disasters there came before 1900. The huge aquifers and dams have masked the natural effects for sixty years. But the aquifers are being exhausted and the dams are dependent on snowfall in the Rockies.

    Enough of that.

    This adjustment business seems very serious. When algorithms are used to generate mass changes the original data may not be recoverable by simply knowing the algorithm and the output. It depends upon the procedure.

    Loss is even more likely when one data set ‘A’ is relied upon in calculating adjustments to another ‘B’. Data A may itself have been adjusted, or equally bad, it may be adjusted later.

  17. fFreddy
    Posted Feb 16, 2007 at 3:09 AM | Permalink | Reply

    Re #16, K

    And when he stepped away. You could see where he stood outlined on the sidewalk.

    Sorry, I don’t understand that. What was the outline ?

  18. John A
    Posted Feb 16, 2007 at 3:35 AM | Permalink | Reply

    Re #15, Steve

    History is being rewritten in order that the dominant scientific hypothesis (and attendant political cause) is not imperilled by reference to facts.

    I’m just shocked that no-one appears to object.

  19. Jeff Norman
    Posted Feb 16, 2007 at 6:28 AM | Permalink | Reply

    Re complacency.

    When you tell people over and over that the current climate is bad and getting worse because it is warming, how do you suppose people will respond when it starts getting cooler? I imagine that they will be relieved. And how do you suppose that people will respond when the doomsayers start decrying the cooling? I imagine that people will respond like the villagers in the boy who cried wolf.

  20. John Lang
    Posted Feb 16, 2007 at 7:12 AM | Permalink | Reply

    Jeff Norman #19 … how do you suppose people will respond when it starts getting cooler?

    But they won’t know it is actually colder because USHCN, GISS and the Hadley Centre will just keep rewriting history and will be continually telling them all the time “last month was the hottest on record.”

    That is the point of this thread. We will never know it is actually colder until a new independent data record organization is established.

  21. Posted Feb 16, 2007 at 7:22 AM | Permalink | Reply

    General question: Why does HadCRUT start from 1850? Why not use all the instrumental data (before 1850 there must be some data)? Maybe a sparse set, but uncertainty levels can be increased accordingly..

    More detailed one: (#10,#11)

    Brohan et al, on UHI

    No such complete meta-data are available, so in this analysis the same value for urbanisation uncertainty is used as in the previous analysis [Folland et al., 2001]; that is, a 1σ value of 0.0055 C/decade, starting in 1900.

    So, need to read Folland et al (my bolds)

    The urbanization uncertainty could be regarded as one sided: stations cannot be “too rural” but may inadvertently be “too urban” (Jones et al., 1990; Peterson et al., 1999). However, because some cold biases are also possible in adjusted semi-urban data, we conservatively model this uncertainty as symmetrical about the optimum average. We assume that the global average LAT uncertainty (2σ) owing to urbanization linearly increases from zero in 1900 to 0.1°C in 1990 (Jones et al., 1990), a value we extrapolate to 0.12°C in 2000 (Figure 1a). We have not accounted for other changes in land use as their effects have not been assessed.

    So, need to read Jones 1990

    It is unlikely that the remaining unsampled areas of the developing countries in tropical climates, or other highly populated parts of Europe, could significantly increase the overall urban bias above 0.05 C during the twentieth century.


    1) However, because some cold biases are also possible in adjusted semi-urban data, we conservatively model this uncertainty as symmetrical about the optimum average. ???

    2) We have not accounted for other changes in land use as their effects have not been assessed. ???

    I don’t understand how this correction is made, is there UHI correction applied, or is just put to uncertainties? If there is UHI correction, is it applied to global mean, or locally?

    BTW, Jones 1990

    .. many of the extreme urban biases that have been quoted are the largest daily occurrences, perhaps happening during still evenings or intense inversions. The effect on station monthly mean temperatures is sure to be considerably smaller.

    Monthly mean: from daily mid-ranges, or hourly-sampled average?

  22. MarkW
    Posted Feb 16, 2007 at 7:32 AM | Permalink | Reply

    I remember that just when Congress was getting ready to debate the bill banning CFC’s, NASA “suddenly” discovered new data that showed that a huge ozone hole had formed over the Arctic, and was drifting over the US. A year or so later, after the CFC ban was safely in place, NASA re-examined the data, and “discovered” that they had made a few mistakes in calculating the data, and that there never was an Arctic ozone hole. So Sorry.

    Why do I get the feeling that the politicians are ginning up the data, once again?

  23. JerryB
    Posted Feb 16, 2007 at 7:34 AM | Permalink | Reply


    Digital versions of GISS USA temperatures, as of the indicated dates: 6-17-2000 3-11-2001 9-08-2004 6-27-2006 2-16-2007

    Meanwhile, their referring to the current version of USHCN as
    version 1 is odd, since the earlier version at
    would better deserve that label.

    The current version at
    has some similar adjustments, and some rather different adjustments
    than the earlier version, which George Taylor has stressed.

    Let me add that in both of those versions, the adjustments are clearly
    distinguished from the unadjusted data, but the number of changes of
    adjustments is large, and unsettling. A quick glance at the USHCN
    station history file indicates that the approximately 1220 USHCN
    “stations” resided at about 7,000 locations over the periods of record.
    In addition to differences of locations, other differences complicate
    any attempts to calculate suitable adjustments.

  24. Brooks Hurd
    Posted Feb 16, 2007 at 7:37 AM | Permalink | Reply

    It is frightening to realize that we may no longer be able to find the original data. We now have a series of adjustments to adjustments to adjustments. Since the adjustments have been adjusted, the process of trying to retrieve the original data set from what currently exists is akin to decrypting a very complex code.

    If a bank did these sorts of adjustments to your check book balances because they felt that they needed to correct your past spending patterns, you would be outraged. What would the reaction be if the bank then said that they were sorry but due to a server crash, they had lost all the original check and deposit images; but don’t worry they would correct the adjusted balances by further tweaking?

  25. Steve McIntyre
    Posted Feb 16, 2007 at 7:40 AM | Permalink | Reply

    #21. I asked Jones in the past for the data used in Jones 1990. He said that it was on a diskette somewhere and it would be too hard to track down.

  26. Brooks Hurd
    Posted Feb 16, 2007 at 7:49 AM | Permalink | Reply

    Re: 21 UC

    Monthly mean: from daily mid-ranges, or hourly-sampled average?

    Parker used daily Tmin to calculate monthly average temperatures in his paper which claimed that warm windy nights proved that there was no UHI.

    I am not certain whether this is standard usage at HadCRU, but with a steady decrease in diurnal temperature difference over the past 150 years, the use of Tmin provides an added bonus which the use of Tmax, or an average of the two, would not provide.

    Posted Feb 16, 2007 at 7:54 AM | Permalink | Reply

    Adjustment… I’ve found a Spanish? weather site named Tu Tiempo…
    Compare their monthly values with Nasa-Giss or/and USHCN V 1/2…
    Big Brother is a very split personality it seems… I am going through
    Mr Hansen’s? adjusted? US “rural” weather stations from west to east;
    most of the West of Mississippi done, 7 out of 250 or so have the guy
    with the termite-infested leg… Albert Gore’s 2005 as the warmest
    year… 6 of them in NW Nebraska… Impressing… Gentle Women and
    Gentle Men (to use the same big words as the IPCC SPM guy did 2 weeks
    ago…) We may have (90% probability) a case of “unintentional”
    “fraud” of some 10% of humanity… Basically middle class, middle aged
    people with fairly dull lives like myself… If warming hysteria gets
    even worse I plan a world stand-up tour… named “BBTRCFOGATPB” which
    every intelligent person realises means BRINGING BACK THE RIGHT CLIMATE FOR

  28. W.R.Newberry
    Posted Feb 16, 2007 at 8:04 AM | Permalink | Reply

    Re #17
    He’s referring to the outline that the dust left.

  29. Brooks Hurd
    Posted Feb 16, 2007 at 9:08 AM | Permalink | Reply


    I looked at the 25 warmest years at your link. Looking at all 25 warmest years, I went down the first column (V1) and looked to see whether the same year was higher (warmer) or lower (colder) in the new beta data version.

    As I got past the first ten, I realized that I could predict that if the year were prior to 1986, then it would be lower in V2; if the year were after 1986, then the V2 ranking would be higher. 1986 itself moves up in V2. Two years, 1987 and 1921, did not change. There are two exceptions: 2006 and 1998 swap positions. In addition, 1941 drops off the V2 chart and 1995 is added to V2.

    I would expect that in a stochastic process, deviations above and below the mean would be randomly distributed. Logically it would follow that adjustments due to “recent scientific advances that better address uncertainties in the instrumental record” would likewise be randomly distributed. I would certainly not expect that all data before a certain date would be adjusted lower and all but one of the years after that date would be adjusted higher. Clearly, the selection method for these adjustments is not based on the temperature data, but rather on whether it was taken before or after 1986.

    One could argue that this is simply a coincidence. If roulette or craps games operated with this sort of coincidental predictability, the Las Vegas casinos would have vanished long ago.
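
    The casino analogy can be made rough and numerical. Treating each year's V1-to-V2 move as a direction-neutral coin flip (a back-of-envelope sketch; the count of 21 moved years is an assumption inferred from the comment, i.e. the 25 warmest years less the handful that stayed put):

```python
# If adjustments were direction-neutral, each moved year would be an
# up/down coin flip. Probability that every move matches the
# before/after-1986 rule. The count n_moved = 21 is an assumed figure,
# not taken from the dataset itself.
n_moved = 21
p = 0.5 ** n_moved
print(f"{p:.1e}")  # 4.8e-07
```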

  30. Posted Feb 16, 2007 at 10:04 AM | Permalink | Reply


    Without data, Jones 1990 is next to nothing. No figure showing rural data vs. gridded data.. Just Table 1 claiming 5% significant trends. I don’t know whether UHI matters or not, but combined with bucket corrections it is quite an interesting topic ;)

  31. Steve McIntyre
    Posted Feb 16, 2007 at 10:21 AM | Permalink | Reply

    #30. I did a couple of posts on bucket adjustments near the start of the blog. I don’t think that nearly enough attention has been paid to the bucket adjustments, and people interested in UHI would do well to transfer some attention to SST data. Although here the original data is even more inaccessible.

    I recall some discussion that some gridcells that were entirely at sea appeared to show the influence of coastal seaports. It looks like the newer HadCRU versions contain more and more “optimal” smoothing so that it’s going to get harder and harder to disaggregate the data.

  32. Ross McKitrick
    Posted Feb 16, 2007 at 10:54 AM | Permalink | Reply

    #21: The continued reliance on Jones et al (1990) to ‘disprove’ global land-use impacts on the gridded data amazes me. In the new SPM, page 4, they say: “Urban heat island effects are real but local, and have a negligible influence (less than 0.006°C per decade over land and zero over the oceans) on these values. {3.2}”

    Where does the 0.006 number come from? Ultimately from a conjecture in the conclusions of Jones 1990. That paper examined Eastern China, Western USSR and Eastern Australia. In the 1st region they found an upward bias in the urban data (despite allowing cities of up to 100,000 people to be classified as ‘rural’)–the urban series was the only one with a “significant” trend (who knows how they did the t-test). In the USSR sample they found one of two pooled rural-urban data series had half the post-1930 cooling trend observed in the rural series (which includes towns up to 10,000). They didn’t find any difference in the eastern Australia data. They also tabulated some earlier results showing urban US data had a larger trend than pooled rural-urban US data and concluded:

    “In none of the three regions studied here is there any indication of significant urban influence in either of the two gridded series relative to the rural series…The United States result therefore does seem to be atypical compared with other industrialized regions of the world. The results from the United States clearly represent an upper limit to the urban influence on hemispheric temperature trends.”

    Despite tabulating a sizable effect in 2 of 3 regions they somehow conclude it is observed in none of them. Their only fig leaf is that almost none of the trends are significant, up or down. But on that basis, for those who want to use these findings to imply something about the whole world’s land mass, their paper equally proves the absence of any 20th century hemispheric temperature trends. Then they conclude:

    It is unlikely that the remaining unsampled areas of the developing countries in tropical climates, or other highly populated parts of Europe, could significantly increase the overall urban bias above 0.05C during the 20th century. A bias of this order is an order of magnitude smaller than the hemispheric and global-scale warming trend observed over the last 100 years.

    The “0.05/century”, or 0.005 per decade, thus appears in the 2nd-last paragraph of the paper, without derivation, as a conjecture for the entire land base of the world. And, as noted above in #21, ever since it has been recycled regularly into new papers and reports (i.e. IPCC 2001) which are then, in turn, cited as further support for it.

  33. Steve Sadlov
    Posted Feb 16, 2007 at 11:19 AM | Permalink | Reply

    RE: #13 – What Orwell wrote! :)

  34. David Brewer
    Posted Feb 16, 2007 at 11:35 AM | Permalink | Reply

    Hansen documented major adjustments to his GISS series for the USA in this paper:

    The paper is interesting because it details the corrections recommended by the Karl team, and the extent to which Hansen took them on. He accepted most, but increased the very low adjustment for urban warming.

    Another interesting feature of Hansen’s paper is that a trend is calculated for “unlit” stations as determined by satellite imagery. The raw trend is minus 0.05 degrees C (i.e., cooling) over the 20th century, but adjustments are then applied to bring this up to plus 0.35 degrees (pages 7 and 20 of the pdf).

    Despite claims that recent times have been hotter than the Dust Bowl years, the years 1930 to 1937 still account for all-time record high temperatures in 24 of the 50 US states:

  35. K
    Posted Feb 16, 2007 at 1:24 PM | Permalink | Reply


    #27 is correct. Dust (dirt really) collected around his shoes as he stood. After seventy years I can’t recall why he was standing outside or the color of the sky. To me the dust pattern was only a momentary curiosity.

    #28. Definitely. The adjustments are not to calibrate. There is a pattern and, among other things, it cools those troublesome 1930s. That is not to say that adjustment is never justified. History may help a little here.

    In the 1930s cities were not building much and an increased heat island effect would not be expected. And even in WW2, the 1940s, there was little downtown growth; resources were being used for war, not urban buildings.

    A significant event of the 1950s was new airports. Many of them in rather small cities. At the same time massive airports (large heat collectors) were being built near larger cities. Moving the weather stations to new airports would have an effect. I suspect calibration improved and also recording standards.

    Little can be done about tinkering with data. Those believing in an adjustment will always make it and defend it. But much can be done about making sure original data is not lost. That is where effort should be made.

  36. David Smith
    Posted Feb 16, 2007 at 3:08 PM | Permalink | Reply

    RE #35 Large airports are not only heat collectors but are also large open areas, where nighttime breezes blow. Nighttime breezes mix (cool) surface air with (warmer) above-surface air, elevating nighttime temperatures.

    Older temperature stations in more-sheltered locations would not experience as strong a breeze, and thus less mixing, and thus lower temperatures.

  37. tom
    Posted Feb 16, 2007 at 4:30 PM | Permalink | Reply

    Even if they are manipulating the data set, are not the individual meteorological stations’ data still available…untouched? I know here in Minneapolis (KMSP) that access to each day’s high/low/precip is available since 1890. I am pretty certain each weather service office has its own data set for the local meteorological station. If worse came to worst, couldn’t we just request the raw data from individual NWS offices?

  38. Steve Sadlov
    Posted Feb 16, 2007 at 4:38 PM | Permalink | Reply

    RE: #37 – That may be what’s required. Sad, if so.

  39. JerryB
    Posted Feb 16, 2007 at 5:26 PM | Permalink | Reply

    Re #37,

    Four days ago I stumbled on an NCDC website that has a file of about a gigabyte of daily min/max and prcp data from 32815 stations of 135

    I hope soon to put together a summary of how much of which kinds of data are included for what time periods for which stations.

  40. Carl Smith
    Posted Feb 17, 2007 at 12:18 AM | Permalink | Reply

    #39 JerryB, sounds like a gold mine to me – I hope you have burnt backup copies onto CDs or something.

    I don’t mean to sound cynical, but once it’s known someone’s accessed it, the original might just disappear into thin air or be secured so only ‘approved’ people can get access.

  41. Posted Feb 17, 2007 at 7:16 AM | Permalink | Reply


    Indeed, it seems to be the Jones 1990 value of 0.05 C/century recycled again and again. In Folland et al. 2001 it is just doubled to get ‘2-sigma’. In Brohan et al. it is 0.0055 C/decade, but they use the Jones value as 0.05 C for the 1900-1990 period: 0.05/9 = 0.0055555555555555…
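
    The arithmetic trail, as a quick sketch (the three numbers are taken from the quotes above):

```python
# Jones et al. 1990: conjectured urban bias of 0.05 C over the 20th
# century. Brohan et al. spread it over 1900-1990, i.e. 9 decades;
# Folland et al. 2001 doubled it to get a 2-sigma value.
jones_total = 0.05
per_decade = jones_total / 9      # = 0.00555..., quoted as 0.0055 C/decade
folland_2sigma = 2 * jones_total  # 0.1 C (2-sigma) by 1990
print(round(per_decade, 4), folland_2sigma)
# 0.0056 0.1
```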

    BTW, there’s an interesting news article on clouds and global warming in Nature 27 Sep 1990. (Next issue after Jones article):

    ‘The question of whether clouds would substantially change the prediction of global warming caused by the accumulation of carbon dioxide in the atmosphere has been much debated, not least because there is no simple way of answering it with precision while back-of-the envelope calculations exist on either side and are memorable, even compelling. That is why the issue has been about for twenty years.’

    Should we say 40 years by now?

  42. Brooks Hurd
    Posted Feb 17, 2007 at 7:24 AM | Permalink | Reply

    Re: 36, David,

    I agree with you about the effect of mixing over airports. I also believe that urban areas have more vertical mixing than rural areas, because urban buildings cause increased turbulence, which increases mixing.

    When I read Parker’s paper stating that warm windy nights prove the UHI does not exist, I disagreed with his conclusion because it appeared to me that he was only considering two dimensions. I believed then, and still believe, that warm windy nights over urban areas are caused in part by vertical mixing.

    Parker’s assumption was that the warmth was blown into urban areas by the wind. I assume that the warmth comes from vertical mixing and that the source of this warm air over cities is the cities themselves. I therefore believe that Parker’s conclusions are wrong. Far from refuting the existence of UHIs, I believe that Parker’s warm windy nights prove that UHIs exist.

  43. Posted Feb 17, 2007 at 8:29 AM | Permalink | Reply


    0.3 C bucket adjustment? Check Brohan et al. Figure 12. The green error band in the sea temperatures is coverage error, not measurement error. So it is quite an accurate adjustment; the red band is not very thick.

    There are claims of great accuracy.

  44. TN_Skeptic
    Posted Feb 17, 2007 at 9:10 AM | Permalink | Reply

    Tom Nelson (poster #37, I suspect) linked his Ivory-bill Skeptic blog to this website, so I believe whatever is stated here.

  45. Norm Kalmanovitch
    Posted Feb 17, 2007 at 9:41 AM | Permalink | Reply

    The entire AGW case rests on a causal relationship between global temperature and atmospheric CO2 concentrations.
    There are monthly values for CO2 concentrations from Mauna Loa and there are monthly temperature anomalies from satellite data from 1978.
    If these two data sets are correlated there should be at least a 50% correlation for the AGW case to even be possible!

    It would be a very good exercise to correlate these two data sets independently and see if the results match the 22% correlation that I have been made aware of.

    If this is the case then there is positive proof that AGW is wrong, using the very data that the AGW group cherry-picks to make its case, i.e. 1998 as the highest temperature ever on record, but not the temperature drop from 2005 to 2006.
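    The correlation exercise suggested above can be sketched as follows. This is a minimal illustration only: the two short series are made-up placeholders, not the actual Mauna Loa CO2 record or the satellite anomaly record.

    ```python
    import numpy as np

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length series."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xm, ym = x - x.mean(), y - y.mean()
        return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

    # Illustrative stand-ins; a real run would load the Mauna Loa monthly
    # CO2 series and the monthly satellite temperature anomalies from 1978 on.
    co2 = [335.4, 336.8, 338.7, 340.1, 341.4, 343.0]
    anom = [-0.10, 0.02, -0.05, 0.08, 0.01, 0.12]
    print(pearson_r(co2, anom))
    ```

    Squaring the result gives the fraction of variance in one series "explained" by the other, which is the sense in which a 22% figure would be compared to a 50% threshold.
    
    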
    Norm K

  46. David Smith
    Posted Feb 17, 2007 at 9:45 AM | Permalink | Reply

    On an entirely different note, the tropical Pacific looks like it’s trying to switch in a cool La Nina direction. Click on the “assorted plots” button of

    and look at the lower image. The lower image is a temperature cross-section of the Pacific, at the equator. Note the blob of cool water rising on the right side (Eastern Pacific). Equally interesting is the coolness on the west (Warm Pool) side.
    The surface winds don’t appear to support a full-blown La Nina, at least not yet, but cooling is underway nevertheless. 2007 may continue the flatline global temperature pattern of the last five years.

    And on another entirely different note, there’s a news story this morning about an Australian paraglider who was sucked into a thunderstorm and rose to 10,000 metres (32,000 feet). The temperature there is about -40C and the air pressure is about one-third that at the surface. Updrafts and downdrafts can be 80 km/hr and occur side by side.

    The paraglider lived, amazingly, but with frostbite injuries. That was the ride of a lifetime.

  47. tom
    Posted Feb 17, 2007 at 10:05 AM | Permalink | Reply


    I’m sure there can be some mixing due to the heat island effect itself. On relatively calm nights, when the heat island is most noticeable, the relative heat of the city compared to the coolness of the surrounding grasslands etc. will cause some draw of air into the city. However, from my observations (20 years as a meteorologist), if it is clear and calm it is usually calm in the city as well, though there likely is some draw from the cool surroundings into the city, which is then vertically mixed somewhat. However, if large-scale synoptic forcing is driving the wind, the temperature difference between city and rural areas is greatly reduced, even on clear nights, though some UHI is still noticed even then. It is dependent on wind strength. In any event, I live in Minneapolis/St. Paul and have observed that UHI can be as great as 7-10F on clear calm nights. The overall averaged UHI signal is much less, of course, but I would suspect empirically that it is likely much larger than GISS or CRU has corrected for.

    Each site needs to be gone over with a fine-toothed comb in order to see what is really going on. I mean, if we’re going to take one meteorological station and statistically fit it to a large area to represent said area, well, we have a lot of work to do, to put it that way.

  48. tom
    Posted Feb 17, 2007 at 10:14 AM | Permalink | Reply


    -Parker’s assumption was that the warmth was blown into urban areas by the wind. I assume that the warmth comes from vertical mixing and that the source of this warm air over cities is the cities themselves. I therefore believe that Parker’s conclusions are wrong. Far from refuting the existence of UHIs, I believe that Parker’s warm windy nights prove that UHIs exist.-

    This is poppycock! Warm air at night does not originate in the rural areas surrounding a city. Warm air is NEVER blown into a city at night from rural areas in a neutral airmass. If a warm front or airmass change is occurring then yes, of course, but otherwise no.

  49. JP
    Posted Feb 17, 2007 at 10:52 AM | Permalink | Reply

    I looked at temperature data from Huntley, Montana. This location was chosen because it is about 20 miles from where I live and is a small rural agricultural community. Simple yearly averages were computed and then plotted for two 30-year periods, which appeared to be close to the same. The average yearly temperature for 1912-1942 was 46.2 and for 1976-2006 was 46.1. “Long-Duration Drought Variability and Impacts on Ecosystem Services: A Case Study from Glacier National Park, Montana” discusses how Glacier appears to be influenced by a 60-year oscillation caused by the PDO. Huntley’s temperature records also show a strong 60-year oscillation.

    Montana has been in a drought for the last 7 years. Fort Peck Reservoir, which is 134 miles long, is over 30 feet below full pool. In Montana, severe drought lasted from 1928-1939, with rainfall averaging 25% below average. Our current drought has lasted from 1999-2006 and is 16% below average. The average is based on records from 1951-2001. I hope our current drought does not last another 4 years.
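    The two-period comparison described above can be sketched as follows. The records list here is a hypothetical stand-in for the actual Huntley station data, used only to show the mechanics.

    ```python
    def period_mean(records, start, end):
        """Mean of yearly values for years start..end inclusive."""
        vals = [t for (yr, t) in records if start <= yr <= end]
        return sum(vals) / len(vals)

    # Hypothetical (year, mean temperature) pairs standing in for the
    # Huntley record; a real run would parse the full station history.
    records = [(1912, 46.0), (1913, 46.4), (1976, 46.3), (1977, 45.9)]

    early = period_mean(records, 1912, 1942)
    late = period_mean(records, 1976, 2006)
    print(round(early - late, 2))
    ```

    Comparing like-for-like 30-year windows this way is a crude but transparent check; it sidesteps any adjustment algorithm entirely.
    
    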

  50. Brooks Hurd
    Posted Feb 17, 2007 at 2:52 PM | Permalink | Reply


    What was your source for the Huntley data?

  51. JP
    Posted Feb 17, 2007 at 3:35 PM | Permalink | Reply


  52. JP
    Posted Feb 17, 2007 at 6:03 PM | Permalink | Reply

    Re: 49. Should have been AMO, not PDO.

  53. James Lane
    Posted Feb 18, 2007 at 1:32 AM | Permalink | Reply

    Re 46

    The paraglider pilot was actually a German girl, in Australia for the world championships. They were able to download the complete log of the flight from her GPS. She was sucked up at an average speed of 20m/s, peaking at 33m/s. She lost consciousness for about an hour, topped out at 10,000m, came to at about 6,000m and was able to pilot the rig (which was covered in ice) to the ground. A Taiwanese pilot was sucked into the same storm and killed.

  54. JerryB
    Posted Feb 21, 2007 at 11:36 AM | Permalink | Reply

    This comment constitutes a correction to my comment #23:

    Something odd (no number for 2006) about the usatemps.702 file,
    which I downloaded from
    a few days ago, got me wondering. I double checked the original, and
    it matched; still no 2006 number.

    So, I went back to the page at
    and found that the link has changed to
    and that version has a 2006 number. It also has changes of numbers
    for prior years, including the 2005 number being closer to what it was
    last June. However, the numbers for several years have changed by
    0.1 C, or more.

  55. JerryB
    Posted Feb 21, 2007 at 11:40 AM | Permalink | Reply

    I should add that when I downloaded from the old link, I
    did so by using my bookmarks. I do not want to suggest that
    the link changed within the past few days.

  56. robert simpson
    Posted Feb 23, 2007 at 8:17 AM | Permalink | Reply

    Seems you have discovered what is truly AGW ….

  57. JerryB
    Posted Mar 10, 2007 at 11:31 AM | Permalink | Reply

    It appears that the output of USHCN V2 has been posted at

  58. JerryB
    Posted Mar 10, 2007 at 11:37 AM | Permalink | Reply

    I prematurely hit the submit button on my previous comment.

    A quick glance gives me the impression that while there are
    many new numbers, most of the numbers are the same as in
    the previous version, and that the raw and TOB adjustment
    numbers are unchanged (except for rounding; they seem
    never to round the same way from one issue to the next).

    Another quick glance gives me the impression that the new
    numbers have not yet been added to GHCN.

  59. JerryB
    Posted Mar 12, 2007 at 12:29 PM | Permalink | Reply

    Comparing averages of differences between the recently
    posted USHCN monthly mean numbers, and those previously
    posted, gives me the impression that the new numbers may
    not be from USHCN V2. The average differences are very
    small, and their directions do not seem to be what one
    might expect from the NCDC web page referenced by Steve
    at the top of this thread. I’ll post a couple of graphs
    either later today, or tomorrow.

  60. JerryB
    Posted Mar 12, 2007 at 2:49 PM | Permalink | Reply

    Graphs comparing new/old USHCN adjustments are at

    My guess is that the recently posted data are *not* from
    USHCN Version 2, but that’s just a guess.

  61. JerryB
    Posted Jun 15, 2007 at 8:17 AM | Permalink | Reply


    Regarding USHCN updates:

    At the CDIAC USHCN site, you find a USHCN update with data through 2002,
    but at the NCDC USHCN site, more recent updates have been posted.

    An update with data through 2003 was posted in early 2005 (it has since
    been replaced). A copy of the mean data has been placed at
    in case you would like to review it.

    In August, or September, 2006, USHCN updates through early 2006 showed
    up in GHCN, and at GISS, without having been posted at the NCDC USHCN
    site (or at CDIAC).

    Then on March 1 of 2007, an update was posted at
    but did not get included in either GHCN, or GISS, as far as I can tell.
    I discussed that update briefly in comments 57-60 above, shortly after it was posted.

  62. Steve McIntyre
    Posted Jun 15, 2007 at 9:02 AM | Permalink | Reply

    Jerry, in the file archived on March 1, 2007, the latest information is Dec 2002. Do you have an exact URL for any USHCN archive with later data?

  63. JerryB
    Posted Jun 15, 2007 at 9:47 AM | Permalink | Reply


    I just re-downloaded that file, and looked at it in a text
    editor, and the 2003 to late 2006 data are there.

    When uncompressed, the file size is 80921600 bytes including
    line feeds, and has 558080 lines. Do those numbers match
    what you have?
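    The byte-count and line-count check described above can be reproduced with a short script. This sketch builds a small throwaway file so it is self-contained; a real check would point `path` at the uncompressed USHCN data file instead.

    ```python
    import os

    # Throwaway file standing in for the uncompressed USHCN download.
    path = "/tmp/ushcn_check.txt"
    with open(path, "w") as f:
        f.write("line one\nline two\n")

    nbytes = os.path.getsize(path)        # byte count, including line feeds
    with open(path) as f:
        nlines = sum(1 for _ in f)        # line count

    print(nbytes, "bytes,", nlines, "lines")
    ```

    Matching both numbers against what another person reports is a quick way to confirm two downloads are byte-identical before comparing their contents.
    
    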

  64. Steve McIntyre
    Posted Jun 15, 2007 at 10:10 AM | Permalink | Reply

    I downloaded it again and, yes, I’ve got 555080 lines. Funny thing is that the version that I have dated 5/27/2006 is smaller, but, as you observe, the CDIAC version is different from and older than the NOAA version, and maybe that’s the difference. I’ll update my USHCN records accordingly; thanks.

  65. JerryB
    Posted Jun 15, 2007 at 10:42 AM | Permalink | Reply

    Good. Since it appears that that update has not been added
    to GHCN or GISS, and since there are some differences in its
    adjustments relative to other USHCN updates, it may be of
    very limited, or no, value to you in the comparisons that
    you have been doing, but I did want you to be aware of its
    existence.
    My guess is that GHCN and GISS are awaiting the release of
    USHCN V2 data.

  66. Posted Jun 15, 2007 at 2:28 PM | Permalink | Reply

    Re 32, 41, see my page

  67. Barry
    Posted Jun 15, 2007 at 2:55 PM | Permalink | Reply

    A few years back I saw satellite images of Atlanta showing it creating its own thunderstorms. No heat island ‘fer sure’.

  68. Vincent Gray
    Posted Oct 11, 2007 at 8:55 PM | Permalink | Reply

    The most interesting confession in Brohan et al. is at the top of page 3:

    “It is always possible that some unknown error has contaminated the data, and no quantitative allowance can be made for such unknowns”

    They then have the gall to refer to the expert on this subject, Donald Rumsfeld, who in his June 2002 speech composed the following delicate poem:

    “As we know, there are known knowns;
    There are things we know we know.
    We also know there are known unknowns;
    That is to say we know there are some things we do not know.
    But there are also unknown unknowns
    – the ones we don’t know we don’t know.”
