The September 2007 Bear Market in NASA Temperature "Pasts"

Since August 1, 2007, NASA has had 3 substantially different online versions of the data for their 1221 USHCN stations. The third and most recent version was slipped in without any announcement or notice in the last few days – subsequent to their code being placed online on Sept 7, 2007. (I can vouch for this, as I completed a scrape of the dset=1 dataset in the early afternoon of Sept 7.)

We’ve been following the progress of the Detroit Lakes MN station and it’s instructive to follow the ups and downs of its history through these spasms. One is used to unpredictability in futures markets (I worked in the copper business in the 1970s and learned their vagaries first hand). But it’s quite unexpected to see similar volatility in the temperature “pasts”.

For example, the Oct 1931 value (GISS dset0 and dset1 – both are equal) for Detroit Lakes began August 2007 at 8.2 deg C; there was a short bull market in August with an increase to 9.1 deg C for a few weeks, but its value was hit by the September bear market and is now only 8.5 deg C. The Nov 1931 temperature went up by 0.8 deg (from -0.9 deg C to -0.1 deg C) in the August bull market, but went back down the full amount of 0.8 deg in the September bear market. December 1931 went up a full 1.0 deg C in the August bull market (from -7.6 deg C to -6.6 deg C) and has held onto its gains much better in the September bear market, falling back only 0.1 deg C to -6.7 deg C.

All records of the August bull market in Detroit Lakes pasts have been erased from the NASA website, but I managed to complete my downloads in time and am in a position to try to decode exactly what’s been going on.

First, here is a graphic showing the changes to the Detroit Lakes MN record in the August “bull market” as NASA moved quickly to correct the “Y2K” error that I had drawn their attention to. Their patch was essentially a step adjustment at 2000, which had the effect of increasing all earlier values by about 0.8 deg C.
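In code, the patch as I understand it amounts to something like the following sketch: a constant offset added to every value before 2000. The function and the 0.8 deg C offset for Detroit Lakes are my own illustration, not NASA's actual code.

```python
# A sketch of the "Y2K patch" described above: a step adjustment at 2000
# that raises all values before 2000 by a constant offset (about 0.8 deg C
# for Detroit Lakes). Illustrative only - not NASA's code.

def apply_step_patch(series, step_year=2000, offset=0.8):
    """series: dict mapping (year, month) -> temperature (deg C).
    Returns a new dict with pre-step_year values raised by offset."""
    return {(yr, mo): (val + offset if yr < step_year else val)
            for (yr, mo), val in series.items()}

# Toy data straddling the step: the pre-2000 value moves up, the rest do not.
raw = {(1999, 12): -7.6, (2000, 12): -6.6}
patched = apply_step_patch(raw)
print(patched)
```

The effect is exactly the kind of uniform shift to the "past" visible in the first graphic.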


Second, here is a similar graphic showing the changes to Detroit Lakes MN in the September “bear market”. As you can see, Hansen has clawed back most of the gains of the 1930s relative to recent years – perhaps leading eventually to a re-discovery of 1998 as the warmest U.S. year of the 20th century.


Aside from other issues – which we shall get to – we have a crossword puzzle here: where did the data come from? In fact, the precise provenance of the NASA USHCN data has been raised in recent posts – most recently here, where I posited the use of a vintage USHCN data set. One of the nice things about the climateaudit group is that readers often have good answers. Jerry Brennan suggested that the vintage data be consulted. There were two potentially relevant files there: hcn_shap_ac_mean.Z and hcn_mmts_mean_data.Z. I examined these files for Detroit Lakes, compared them to the three NASA versions online over the summer, and am pretty much able to trace the machinations back to their source.

1) the vintage USHCN data set hcn_shap_ac_mean.Z , as Jerry Brennan surmised, was almost certainly used in the pre-Y2K version and the Y2K-adjusted version;
2) the September bear market at Detroit Lakes MN was precipitated by an unannounced switch to the USHCN data set hcn_doe_mean.Z.

Here are some detailed comparisons. First, here is a comparison of the NASA “pre-Y2K” version against the vintage SHAP_AC version. You can see the step at 2000; later values – if they were available for plotting – would continue at the upper step. There are slight monthly differences relating to some NASA procedure doing monthly adjustments, but this graphic shows to a moral certainty that the SHAP_AC version was in use pre-Y2K.


The next figure compares the NASA Sept 7 version to the vintage SHAP_AC version. The monthly perturbation introduced by NASA has increased but the two versions are obviously connected – and you can see that the Y2K patch has eliminated the step from using inconsistent versions.


However, the changes introduced in the September bear market changed the relationship as shown below. So what is the provenance of the new data?


It’s not the vintage MMTS version, a comparison to which is shown below:


It’s not the GHCN raw version:

It’s not the GHCN adjusted version:

It’s not the current USHCN raw version:

It’s not the current USHCN TOBS version:

But it looks like the current USHCN “adjusted” version plus the step adjustment (which isn’t needed for this series – something that I observed earlier).
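This matching exercise can be mechanized: difference the NASA series against each candidate source and look for a near-constant residual. Here is a minimal sketch with invented data; the function names and tolerance are my own, not from any NASA or USHCN code.

```python
# Sketch of the provenance test: if NASA's series equals a candidate source
# plus a constant step, the month-by-month difference will be nearly flat.
# All numbers here are invented for illustration.

def version_difference(a, b):
    """Month-by-month difference a - b over the keys both series share."""
    return {k: a[k] - b[k] for k in sorted(set(a) & set(b))}

def is_near_constant(diff, tol=0.1):
    """True if every difference lies within tol of the mean difference."""
    vals = list(diff.values())
    mean = sum(vals) / len(vals)
    return all(abs(v - mean) <= tol for v in vals)

# A candidate that matches up to a constant 0.8 step, and one that does not.
candidate = {(1931, m): 5.0 + 0.1 * m for m in range(1, 13)}
nasa = {k: v + 0.8 for k, v in candidate.items()}
other = {k: v + 0.05 * k[1] for k, v in candidate.items()}  # drifting residual

print(is_near_constant(version_difference(nasa, candidate)))  # flat: a match
print(is_near_constant(version_difference(nasa, other)))      # not a match
```

A flat difference series is what the USHCN "adjusted plus step" comparison above shows; the earlier candidates all fail this test.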


The current USHCN data is located in a file entitled hcn_doe_mean and there is a reference to this file in the source code placed online on Sept 7, 2007. This is a different file from the hcn_shap_ac_mean file that was used prior to Sept 7. Perhaps the change from hcn_shap_ac_mean to hcn_doe_mean is the sort of “simplification” that Hansen had in mind when he said, on the occasion of the code being placed online, that they:

would have preferred to have a week or two to combine these into a simpler more transparent structure, but because of a recent flood of demands for the programs, they are being made available as is. People interested in science may want to wait a week or two for a simplified version.

However, this sort of change should not be introduced in the guise of “simplification”. It’s a substantive change in procedure. Maybe it’s an improvement; maybe it’s not. If Hansen is making changes to “improve” his methodology, users are entitled to know of the change when they’re introduced, not after the fact through reverse engineering.

I have no information on why Hansen is picking this particular time to make unannounced “improvements” to his methodology. However, it seems like a poor time to be doing so, as many people will undoubtedly question the motives of doing so at this particular time – and particularly without any announcement. Of course, it could be an unintentional “accident”, just as the “Y2K” switch in versions was an “accident” – in which case, the timing of a second accident seems particularly inopportune.


  1. crosspatch
    Posted Sep 13, 2007 at 11:29 AM | Permalink

    but it seems very odd that they would “accidentally” change provenance

    It seems the Canadians share more of the British penchant for understatement than we Yanks do.

    In my own life history, when I see flailing like this, it has generally been an indication of panic. In this case it seems that a lot of attention is being paid to minutiae and not the overall bigger picture. A clear statement of what data is used and what has been done to it seems to be something that could be whipped out in less than an hour and would go a long way toward defusing things. Instead we see the cloth over the hat ruffling as the rabbit underneath struggles and the magician breaks into a sweat.

  2. SteveSadlov
    Posted Sep 13, 2007 at 11:39 AM | Permalink

    It’s either a sign of panic, or, wilful creation of smoke screens / distractions / diversions / noise, designed for an intended effect.

  3. MattN
    Posted Sep 13, 2007 at 11:41 AM | Permalink

    #2, agreed. There’s no reason to be messing with temperature data almost 80 years old, unless you’re up to something….

  4. Yancey Ward
    Posted Sep 13, 2007 at 12:10 PM | Permalink

    Are these substantive changes to all the stations, or was the Detroit Lakes station used because it is the most dramatic example? In other words, will all the other stations show such dramatic changes in past records?

  5. Christopher Alleva
    Posted Sep 13, 2007 at 12:15 PM | Permalink

    I can understand why Dr. Hansen is acting cavalierly with the data; he sees his life’s work crumbling before his very eyes. Based on the preliminary work on the data sets and methods, the entire premise of this experiment is rapidly devolving into oblivion. In retrospect, using haphazardly sited weather stations intended as a local forecasting tool was probably not such a good idea. The adjustments appear arbitrary at best. This, coupled with a complete lack of an audit trail and sloppy or nonexistent documentation, is reason enough to scrap the whole exercise.

    Investigators would not have to resort to piecing the puzzle together if this were a valid and legitimate study. Nevertheless, I am anxiously awaiting a definitive conclusion from climate audit.

  6. Michael Jankowski
    Posted Sep 13, 2007 at 12:19 PM | Permalink

    So is Gavin going to tell us all the steps we need for replication are in Hansen’s prior publications?

    Maybe if you hold the papers at the right distance and angle, and focus your eyes on something in the distance, you’ll see one of those embedded image-types of things that tells you what additional steps you need to take between Sept 7 and Sept 10 to get the latest results.

  7. Steve McIntyre
    Posted Sep 13, 2007 at 12:24 PM | Permalink

    #4. Detroit Lakes MN was used because it’s been a site that’s been discussed as an example since last spring and readers are familiar with it. Eli Rabett/Josh Halpern is the one who particularly drew it to my attention, arguing that the microsite issues at this site “didn’t matter” because NASA/NOAA software could fix bad data.

    I was skeptical as to whether their software could adequately deal with microsite problems, and investigated this site, noticing the Y2K step here.

    I did an earlier post in July on the distribution of Y2K adjustments. Detroit Lakes is a relatively large adjustment (about 0.8 K) which is why it was noticeable there. Some Y2K adjustments went the other way, but overall there was a bias of about 0.15 deg C.
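    [Arithmetically, the net bias is just the mean of the per-station step adjustments. A toy sketch: only the ~0.8 deg C Detroit Lakes figure comes from the post; the other station values are invented to illustrate how steps in both directions can still leave a net bias.]

```python
# Toy illustration of the net Y2K bias: individual station steps go both
# ways, but their mean need not be zero. Only Detroit Lakes' ~0.8 deg C
# step comes from the post; the other values are invented.

def mean_step(steps):
    """Unweighted mean of per-station step adjustments (deg C)."""
    return sum(steps.values()) / len(steps)

steps = {
    "Detroit Lakes MN": 0.8,  # large positive step (from the post)
    "Station B": -0.2,        # some adjustments went the other way
    "Station C": 0.1,
    "Station D": -0.1,
}
print(round(mean_step(steps), 2))  # net bias of this invented sample
```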

  8. StanJ
    Posted Sep 13, 2007 at 12:24 PM | Permalink

    Bismarck famously said “Laws are like sausages. It’s better not to see them being made”.

    It seems the same could be applied to global temperature reconstructions and I guess the powers that be would rather not know the processes involved as long as the resultant ‘sausages’ taste right to them.

  9. Steve McIntyre
    Posted Sep 13, 2007 at 12:27 PM | Permalink

    #6. I’m sure that Gavin or Josh Halpern will be along to say that all we ever needed to do was RTFR: I guess in this case, Hansen et al 2001 must be a bit like Nostradamus – holding encrypted clues to change data sets on Sep 11, 2007.

  10. bmcburney
    Posted Sep 13, 2007 at 12:31 PM | Permalink

    Hansen’s Second Law of Climate Dynamics: If the present
    fails to get warmer, the past MUST become colder.

    “Conservation of trends” is a fundamental principle.

  11. SteveSadlov
    Posted Sep 13, 2007 at 12:34 PM | Permalink

    RE: #5 – It is an interesting thing. I (fondly?) remember some of the early discussions I got into over at RC regarding UHI / microsite issues / anthropogenically originated biases. At the time, I can distinctly recall a sort of gloating, appealing to authority tone, whereby it was said, in effect – “Oh that? That is nothing. It’s been adjusted out of the record. Time to move on.”

    And now?

  12. John Lang
    Posted Sep 13, 2007 at 12:39 PM | Permalink

    Tracks are being covered. Review consultant has been engaged and will be arriving next week.

  13. Jeff C.
    Posted Sep 13, 2007 at 12:47 PM | Permalink

    If I’m following this correctly, the second plot labelled “Detroit Lakes MN” is GISS dset0 Sept 10 minus GISS dset0 Sept 7. Presumably, hcn_doe_mean minus hcn_shap_ac_mean would look identical (is that correct?).

    If so, are the specific dates of the steps (around 1934 and 1952) isolated to this site or do the dates of these steps show up in other sites also? Do the dates correlate with local station phenomena (e.g. station moves, equipment changes)?

    The NCDC station history has the following changes noted for Detroit Lakes around the step dates:

    Station move:
    2.1 miles NNE on 10/18/1951

    TOB change:
    From 19 (7 PM?) to 79 (unknown meaning) on 9/1/1933
    From 79 (unknown) to 7 (7 AM?) on 12/15/1953

    No equipment changes noted in this period.

    Do any of the dates correlate with the steps? It is impossible to tell from the scale of the plots.

  14. wf
    Posted Sep 13, 2007 at 1:04 PM | Permalink

    The Hansen uncertainty principle: it is impossible to have a temperature series that has an arbitrarily well-defined position and trend simultaneously.

  15. M. Jeff
    Posted Sep 13, 2007 at 1:27 PM | Permalink

    Re #4

    Below is what I posted on Sept 11, concerning availability of the rural Walhalla SC data from GISS. The newest GISS data no longer shows the early warmest values and the early temperatures from about 1900 to 1910 are shown to be remarkably cooler than they were in the original GISS data. Therefore, it appears that more than just Detroit Lakes has been significantly modified.

    “After reading the Anthony Watts comment #72 under Hansen Frees the Code, Sept 8, 2007, concerning the adjusted versus unadjusted GISS data for Walhalla, SC, I checked the Walhalla info.

    To better visualize the differences for my own edification, (not being a mathematician and not having the inclination to make my own spreadsheet charts), I merged the two graphs using photoediting software. The differences between the two versions were quite obvious.

    However, now I’ve tried to find the graphs on the GISS web site and all I can find is a different adjusted Walhalla graph. I could not find the raw GISS graph. The new GISS graph has no values higher than 16.5 deg C, whereas the original graphs did. And the differences in the graphs evidenced by merging the latest graph image with the two original ones suggest that a different algorithm is now being used.

    Has GISS changed their algorithm, is the raw information no longer available, or is the original information still available at some other web site?”

  16. Bob Sykes
    Posted Sep 13, 2007 at 1:37 PM | Permalink

    Considering all the adjustments, revisions and corrections that these data sets have experienced, is it not possible or even probable that the original data sets have been lost beyond recovery? In which case, all the calculations by Hansen and his critics would be a kind of delusional numerology.

    Do we actually have any original temperature records?

  17. Yancey Ward
    Posted Sep 13, 2007 at 1:56 PM | Permalink

    I also have to ask the same question that Bob Sykes asked.

  18. JerryB
    Posted Sep 13, 2007 at 1:59 PM | Permalink

    Re #13,

    Jeff C,

    Some Detroit Lakes changes of USHCN adjustments (annualized):

    1918 tob adjustment from +0.57 F to -0.81 F
    1918 other adjustment from -1.74 F to -2.26 F
    1933 tob adjustment from -0.81 F to +0.57 F
    1951 other adjustment from -2.26 F to -0.16 F
    1958 tob adjustment from +0.57 F to -1.46 F

    However, the above graphs are not of the adjustments, but of differences
    between differently adjusted renditions of the adjustments.

  19. D. Patterson
    Posted Sep 13, 2007 at 2:01 PM | Permalink

    Re: #16

    Take your pick:

    Dataset Documentation & Metadata

    Surface Metadata, Select A Dataset

  20. Posted Sep 13, 2007 at 2:09 PM | Permalink

    I suppose we could go back to NCDC and get the raw data and start over. Come to think of it, that might not be a bad idea.

  21. JerryB
    Posted Sep 13, 2007 at 2:17 PM | Permalink

    re #16,

    Bob S,

    It depends on how original you want to get. Historically, most original records
    were on paper. Some may have been well preserved, and some may not have been.

    Various sets of original data were collected by various people, some being
    weather historians of sorts, some being members of, for example, national
    weather bureaus. Such collectors would often consolidate daily data to
    monthly data. After that, the daily data may have been preserved, but not always.

    Eventually, various organizations, for example the US DOE, invested heavily in
    digitizing (computerizing) such data, and archiving them. Today, NOAA’s NCDC
    has several gigabytes of daily min/max temperature data online, some of it
    going back to the 19th century.

    So, there are records of original data, but the data sets in which the original
    data are stored are not the same data sets in which they were originally recorded.

  22. steven mosher
    Posted Sep 13, 2007 at 2:19 PM | Permalink

    RE 15..

    Stand back from the machine while the data is being adjusted

  23. Jeff C.
    Posted Sep 13, 2007 at 2:20 PM | Permalink

    re #18

    “However, the above graphs are not of the adjustments, but of differences
    between differently adjusted renditions of the adjustments.”

    Agreed. I’m wondering if the difference between Sept 10 and Sept 7 (the second plot in the post) was caused by the TOB/other adjustments changing. This seems to be the case. There are large steps in the second plot around 1933 and 1951 that seem to indicate the TOB adjustment of 1933 (TOB changed on 9/1/1933 according to station history) and the other adjustment (station move of 10/18/1951) have been modified.

    The steps in the plot do seem to correlate to specific changes in the station history record that would have a legitimate reason for an adjustment. It looks like the magnitudes of those adjustments have changed.

  24. steven mosher
    Posted Sep 13, 2007 at 2:36 PM | Permalink

    Credit Wiki.

    Many years ago, there lived an emperor who was quite an average fairy tale ruler, with one exception: he cared much about his clothes. One day he heard from two swindlers named Guido and Luigi Farabutto that they could make the finest suit of clothes from the most beautiful cloth. This cloth, they said, also had the special capability that it was invisible to anyone who was either stupid or not fit for his position.

    Being a bit nervous about whether he himself would be able to see the cloth, the emperor first sent two of his trusted men to see it. Of course, neither would admit that they could not see the cloth and so praised it. All the townspeople had also heard of the cloth and were interested to learn how stupid their neighbors were.

    The emperor then allowed himself to be dressed in the clothes for a procession through town, never admitting that he was too unfit and stupid to see what he was wearing. He was afraid that the other people would think that he was stupid.

    Of course, all the townspeople wildly praised the magnificent clothes of the emperor, afraid to admit that they could not see them, until a small child said:

    “But he has nothing on!”

    This was whispered from person to person until everyone in the crowd was shouting that the emperor had nothing on. The emperor heard it and felt that they were correct, but held his head high and finished the procession.

    This story of the little boy puncturing the pretensions of the emperor’s court has parallels from other cultures, categorized as Aarne-Thompson folktale type 1620, although the tale itself has no identified oral sources.[1]

    The expressions The Emperor’s new clothes and The Emperor has no clothes are often used with allusion to Andersen’s tale. Most frequently, the metaphor involves a situation wherein the overwhelming (usually unempowered) majority of observers willingly share in a collective ignorance of an obvious fact, despite individually recognizing the absurdity. A similar twentieth-century metaphor is the Elephant in the room. A metaphor of the opposite, in which each individual insists on his or her own perspective in spite of the evidence of others, is shown in the various versions of the Blind Men and an Elephant story.

    In one interpretation, the story is also used to express a concept of “truth seen by the eyes of a child”, an idea that truth is often spoken by a person too naïve to understand group pressures to see contrary to the obvious. This is a general theme of “purity within innocence” throughout Andersen’s fables and many similar works of literature.

    In another interpretation, the child is not simply a naive person, but precisely a child, as the perspective of children is often unencumbered with the filtering “knowledge” and social conditioning that fills the heads of adults, warping their perspective.

    “The Emperor Wears No Clothes” or “The Emperor Has No Clothes” is often used in political and social contexts for any obvious truth denied by the majority despite the evidence of their eyes, especially when proclaimed by the government.

  25. JerryB
    Posted Sep 13, 2007 at 2:38 PM | Permalink

    Perhaps a better statement would be that some of the above graphs are of differences
    between adjustments, and some are of differences between adjusted temperatures to which
    different amounts of those adjustments have been applied.

  26. D. Patterson
    Posted Sep 13, 2007 at 2:39 PM | Permalink

    Re: #21

    Yes, that is correct. For example in the United States, some observations and data products were initially recorded on the WBAN Form 10A/B. Although this paper document served as the official and legal record carrying Federal penalties for falsification and so forth, the meteorological observer or flight controller was responsible for communicating the observation report to a weather data center by computer data transmission, telephone to a computer data transmission operator, or other means of communication and summarization. Despite efforts at enforcing quality control (QC), it was possible for errors in the encoding and transmission of the observation reports to result in differences between the official WBAN Form 10A/B log and those data transmissions.

    Although the paper record such as the WBAN Form 10A/B contained entries for all 24 hour, 12 hour, 6 hour, 3 hour, 1 hour, and 10 minute special observations and data elements, the various datasets summarizing the data elements on the original paper typically did not and do not summarize each and every one of the available data elements. Many disregard special observations conducted at ten minute intervals due to special and/or hazardous weather conditions. Other summaries include only the 3 hour and 6 hour observation reports. Consequently, any researcher must understand the provenance of the observational data and the limitations upon the quality of the data due to errors and summarization omissions.

  27. Steve McIntyre
    Posted Sep 13, 2007 at 2:41 PM | Permalink

    All of the above graphs are differences between different versions (of monthly data not anomalized) and not differences between adjustments or even between anomaly data.

  28. JerryB
    Posted Sep 13, 2007 at 2:59 PM | Permalink


    To mix metaphors, thank you for getting my foot out of my keyboard.

  29. IanH
    Posted Sep 13, 2007 at 3:18 PM | Permalink

    I guess the problem is that the people recording, and much later the people transcribing, didn’t see that their simple endeavours would get hijacked for political purposes. Talking of which, has anyone generated a family tree showing how later publications are based on the same few earlier ones which have been discredited or are now suspect? It might be interesting to tie this into the IPCC references.

  30. Jeff C.
    Posted Sep 13, 2007 at 3:18 PM | Permalink

    Re 25,27

    Understood, the plots are differences between different versions of monthly temperature datasets, not adjustments.

    I was trying to say (not very clearly) that the difference between the Sept 10 dataset and the Sept 7 dataset (shown in the second plot above) has steps in 1933 and 1951. Those dates correspond to a TOBS change and a Station move in the Detroit Lakes station history.

    That would seem to imply that the magnitudes of the adjustments that were applied for these events have changed for some unknown reason.

  31. Posted Sep 13, 2007 at 3:31 PM | Permalink

    re 21 & 16

    I believe that the paper record for Tiffin, OH resides in a closet at Heidelberg College.

  32. JerryB
    Posted Sep 13, 2007 at 4:04 PM | Permalink


    Briefly: USHCN comes up with adjustments for such things as TOB, station
    moves, and missing data. Those adjustments have remained almost stable
    for several years. GISS wanted to use those adjustments except for the
    USHCN missing data adjustment, and USHCN prepared a special file to suit
    GISS about seven years ago, but it ended with 1999 data.

    For the years after 1999 GISS used raw USHCN data, which they got via
    GHCN, and last month Steve pointed out to GISS that there were many sharp
    jumps between 1999 data and 2000 data. GISS reacted by changing the
    adjustments for each station by an amount based on the size of the
    adjustments near the end of the period covered by the special file so as
    to smooth the jumps.

    That was the reason for changing the magnitude of the adjustments early
    last month. For each station the change was a relatively constant amount
    per year (about 0.82 C or 0.83 C for Detroit Lakes, which is why the dark
    line of the first graph is relatively flat), until this week.

    This week that change seems not to be anywhere near constant, and I do
    not have, nor have I seen, a clear idea of why that is.

  33. Anonymous
    Posted Sep 13, 2007 at 4:45 PM | Permalink

    How many of the station records have changed in the past few weeks?

  34. bill-tb
    Posted Sep 13, 2007 at 4:51 PM | Permalink

    Timber, the whole place is coming down. Best to stand outside, and far enough away from the house of cards not to get buried. Now we know how it was done; the proof is in the actions of the keepers of the data. When there is no good reason for changes, but the changes are happening anyway, then there is a reason for the changes after all, and it’s a critical one – to keep others from finding out what the truth is.

  35. JerryB
    Posted Sep 13, 2007 at 5:02 PM | Permalink

    Re #33,

    At most about 1200, i.e. the USHCN records. However, some may have had no
    recent USHCN adjustments, so they presumably would not have changed recently,
    and many would have had only relatively slight recent USHCN adjustments, so any
    changes to them would presumably have been relatively slight.

  36. Phil
    Posted Sep 13, 2007 at 5:06 PM | Permalink

    Changes in GISS data from Walhalla between Sep 9 and Sep 13, 2007:


    RAW = “raw GHCN data+USHCN corrections”
    ADJ = “after homogeneity adjustment”

    A. Changes in Walhalla RAW b/ 9-9 and 9-13-2007 (9-9 minus 9-13):

    Elimination of all data points before 1900;
    Various adjustments b/ -1.0 and +2.3 to all data points from 1900 through 1909;
    (e.g. Jan 1900 (9-9)=4.2 & Jan 1900 (9-13)=5.2, for a delta of -1.0;
    Mar 1907 (9-9)=16.5 & Mar 1907 (9-13)=14.2, for a delta of +2.3)
    Adjustments from 1919 through 1984 to make winters COLDER and summers WARMER with respect to 9-9 version, ranging
    from about -0.1 for dec to feb,
    to about -0.2 for sep to nov,
    to about -0.3 or -0.4 for mar to may,
    to about -0.5 or -0.6 for jun through aug, although there are variations from year to year.
    (e.g.: 1940

    B. Changes in Walhalla ADJ b/ 9-9 and 9-13-2007 (9-9 minus 9-13):

    Elimination of all data points before 1900;
    Various adjustments b/ -1.9 and +1.7 to all data points from 1900 through 1909; (e.g. Dec 1901 (9-9)=2.9 & Dec 1901 (9-13)=4.8, for a delta of -1.9; Mar 1909 (9-9)=10.7 & Mar 1909 (9-13)=9.9, for a delta of +1.8)

    Adjustments from 1919 through about 1962 to make winters WARMER and summers COLDER with respect to 9-9 version, ranging
    from about 0.4 for dec to feb,
    to about 0.3 for sep and oct,
    to about 0.2 for March,
    to about 0.1 for apr & may,
    to about -0.1 for jun through aug, although there are variations from year to year. (e.g. 1924:

    From 1962 to about 2006, the adjustments fit the same seasonal pattern but are diminished in magnitude.

    D. RESULT:

    Calculated adjustments on 9-9-2007 (ADJ – RAW) to make adjusted data COLDER:

    -0.4 on all points from 1889 through 1905 (e.g. Jul 1889 raw=25.5 & jul 1889 adj=25.1)
    -0.3 on all points from 1906 through 1933
    -0.2 on all points from 1934 through 1962
    -0.1 on all points from 1963 through 1990

    Calculated adjustments on 9-13-2007 (ADJ – RAW) to make adjusted data COLDER:
    Elimination of all data points before 1900,
    -0.1 on all points from 1900 through 1901,
    -0.2 on all points from 1903 through 1944 (e.g. Jul 1905 raw=24.9 & Jul 1905 adj=25.1),
    -0.3 on all points from 1945 through 1987,
    -0.2 on all points from 1988 through 1994,
    -0.1 on all but 2 points from 1995 through 2001 and
    0 on all points from 2001 through 2006

    As a result, a given month may have up to 4 different values (e.g. Sep 1955: on 9-9: 23.2 raw and 23 adj and on 9-13: 22.9 raw and 23.2 adj). Also, the adjustment curve has changed significantly, from a linear one to one with a minimum between 1945 and 1987.

    It appears that a major data change has taken place perhaps in response to what has been posted on CA. That it has taken place without any announcement is very troubling. It is also hard to believe that this is historical data upon which the results of certain GW models were based. In fact, I would question whether there is any connection between this data and any GW models, since this data is only a few days old at most. (Steve: I can email you my 9-9 Walhalla downloads for your records, if you don’t have them.)

  37. Nate
    Posted Sep 13, 2007 at 5:12 PM | Permalink

    I think Hansen is on to a very clever way to reduce the capital gains taxes paid by individuals. If at the end of each year, you are able to rewrite the price histories of every security bought and sold, one could simply change the purchase price to the sale price and one would never have to pay capital gains tax again.

    Then again, if you do this the IRS and SEC would come after you. Come to think of it, maybe sending the IRS and SEC after Hansen might be a step forward.

  38. SteveSadlov
    Posted Sep 13, 2007 at 5:38 PM | Permalink

    RE: #37 – I think there are a few backdated stock options that also benefited from similar revisionism.

  39. SteveSadlov
    Posted Sep 13, 2007 at 5:57 PM | Permalink

    An anecdote. While what I will share has to do with discrete data (in this case, occurrence rates of unwanted events), the principles involved are instructive. A couple years ago, I had to deal with a difficult demand. It was a report on the dead-on-arrival (DOA) rate of a product at a customer’s network integration facility. DOA is an embarrassing thing, in that it implies that the product has poor infant mortality, has not been tested adequately in the factory, has been mishandled, or all of the above. It is definitely in the category of bad news and negative PR. In my case, what made it difficult was that there was not complete serial number info in either the field data base or the repair data base. Some numbers were missing, some were bogus (typically, someone typed in the model number, or the serial number of a component rather than the whole unit), some were corrupted. That made it essentially impossible to attribute all confirmed DOAs (e.g. ones claimed failed, then confirmed failed in repair) to their actual failure period (a particular month). Some 30% were not attributable at first blush. Working very hard, we improved it a bit – some were obvious typos, some had ancillary info that allowed inference, etc. In the end, we stated a very plainly worded disclaimer describing the issues with data quality, showed the raw claimed rate, the repair rate with only matched S/Ns, and an estimated confirmed field rate which used a combination of the process of elimination, guesswork, and the general notion that a failed unit would get returned within 18 months of failing. Imperfect, and honestly described. Now, using this example, let me describe some behaviors, laying out an ethical continuum:

    The unethical optimist – Throw out all unmatched serial numbers. Report a so-called “field DOA rate” based on the “culled” data. Issue no disclaimer. Claim victory. (Enron).

    The unethical pessimist – Include only the raw claimed field data. Cry wolf. Create a crisis. (The “Killer AGW” subculture).

    The ethical realist – See what I described above in my own case. Try to make lemons into lemonade. Be forthright about the cases where “data” were dealt with via fudging and guesswork. Disclaim it with “use at your own risk.” (Most honest and practical people do this routinely.)

    The ethical purist – refuse to work with the flawed data. End of discussion. (Career limiting move. This backfires and labels the practitioner as being incredibly difficult to work with.)

  40. M. Jeff
    Posted Sep 13, 2007 at 6:58 PM | Permalink

    Re #36

    Are the 9-9 Walhalla downloads still publicly available?

  41. Phil
    Posted Sep 13, 2007 at 7:12 PM | Permalink

    #41 Not as far as I know. I downloaded the 9-13 data from exactly the same place as the 9-9 data: raw and adjusted.

  42. Follow the Money
    Posted Sep 13, 2007 at 7:37 PM | Permalink

    As you can see, Hansen has clawed back most of the gains of the 1930s relative to recent years – perhaps leading eventually to a re-discovery of 1998 as the warmest U.S. year of the 20th century.

    I don’t know. I suppose the downward trend could continue, but more changes would excite too much suspicion. My first impression was that the 1930’s warming interrupted the straight shaft of a hockey stick, thereby nullifying the politically valuable theory that warming could only be anthropogenic. So the 1930’s values might have been “misread” lower if someone had less than pure intentions. Later, when challenged and possibly concerned that someone else would take a look at the old records, the accurate numbers from the old records were substituted in – the so-called “adjustment.” But here a second “adjustment” is made. This “adjustment” does not have the feeling of a “fact check” for “clerical errors,” where one corrects the reported values to fit the original reports, but of an alteration to fit something else. But it’s beyond me; it’s Climate Science after all.

  43. Molon Labe
    Posted Sep 13, 2007 at 8:25 PM | Permalink

    Why doesn’t this whole charade come to a screeching halt as soon as it becomes apparent that the adjustments are on the same order of magnitude as the computed effect? It is so absurd it boggles my mind.


  44. aurbo
    Posted Sep 13, 2007 at 8:47 PM | Permalink

    One source of unadjusted raw data is the local newspapers in the various cities where the data are produced. Many papers report the previous day’s max and min temperatures and sometimes even the hourly temperatures as provided to them by the local weather office. Therefore, the morgues of many newspapers contain an independent copy of the essentially raw climatological data. Some of these dailies are available online. Most of the others are preserved indefinitely, usually until the building burns down.

    For example, NYC City Office and/or Central Park data was available on a daily basis in several printed venues, one in particular being the NY Times. BTW, I’ve yet to see a single explanation for why the USHCNv1 UHI-adjusted data differ from the USHCNv1 unadjusted (raw) data by 3.6°C (colder for the UHI) in annual mean temperatures between 1961 and 1990. I’m beginning to feel like I’m trapped in an anechoic chamber.

    BTW, a 20th century term related to the Emperor’s Clothes fable is “cognitive dissonance”, where one’s attitudes don’t reflect one’s beliefs or observations. In either case it’s often difficult to determine whether the observer is legitimately blind or is deliberately lying.

  45. thetruthwillsetyoufree
    Posted Sep 13, 2007 at 9:03 PM | Permalink

    Could not resist….from today’s WSJ….sounds familiar really.

    “Miscalculation, poor study design or self-serving data analysis plague the majority of scientific studies, according to a medical scholar’s study of peer-reviewed papers.”

  46. PaddikJ
    Posted Sep 13, 2007 at 9:14 PM | Permalink

    Is anyone tracking all of these stealth revisions? My first hint of GISS hanky-panky was in Crichton’s State of Fear – in the bibliography where he had caught GISS arbitrarily cutting about 20 years off one of their T-graphs to make the trend look scarier. Possible thesis fodder for a Science Historian PhD project, optimistically assuming there are any who would bring a properly critical, skeptical attitude to the project.

    In a similar scholarly auditing vein, I have occasionally contemplated a wiki-blog project in which all of the 900+ articles surveyed by Oreskes in the famous 12/3/04 Science paper – “The Scientific Consensus on Climate Change” – would be carefully parsed to see if a), the abstracts are a good representation of the contents, b), the conclusions (and abstracts) are supported by the authors’ own data, and c), how many of the articles surveyed were from the Life Sciences, where AGW is the assumed cause of some effect on the biosphere (i.e.: Consensus clearly not supported; no AGW data at all). Item C was inspired by a post several years ago in WCR where they looked at a biology paper (in Nature, as I recall), the abstract of which read more like a sermon than a scientific paper.

    As Monckton (I think) has noted, Oreskes didn’t study the contents of the articles, just the abstracts and conclusions. I don’t have a big issue with, or interest in, the paper itself (and Peiser’s rather lame effort to refute it), which was simply an effort to see if there was a broad scientific consensus on AGW; Oreskes even stated that the consensus might be wrong. True, she couldn’t resist the occasional moralizing about “duty to our grandchildren”, etc, which really has no place in a scholarly study, but in general, I think the paper did what she intended. My interest is not so much in the vaunted Consensus per se, but in testing how well the data in the exact papers she studied support the authors’ belief in the Consensus. This is a side-bar discussion, I admit, but I think that a study of Belief Bias could be valuable in preventing future manufactured crises.

    Oddly, I’ve never been able to locate anything but the short essay mentioned above. When I first heard about it, I just assumed that it was a long-ish, fairly technical, P-R’d paper, with a list of the 900+ articles surveyed, arranged in tabular form according to the author’s criteria. If I get serious about this, I suppose I could contact Oreskes and ask for a list of the articles . . .

    . . . trouble is, I’m a busy working Dad, so if anyone else is interested, be my guest.

  47. Larry
    Posted Sep 13, 2007 at 9:27 PM | Permalink

    45, this is so perfect:

    Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual,” Dr. Ioannidis said.

    Well Glory be and hallelujah!

  48. dirac angestun gesept
    Posted Sep 13, 2007 at 10:05 PM | Permalink

    Is Hansen changing raw data? That’s to say, is he changing the recorded readings from thermometers at weather stations around the world? Or is he adjusting them to correct for errors due to heat island effects, miscalibrations, and the like?

    If it’s the former, he should be fired. If it’s the latter, he should be made to explain exactly why he adjusted readings. Personally, I can’t see how an adjusted value can be anything other than an intelligent guess at what the actual value might be.

    The guy works for NASA, right? Hansen is entitled to his opinions, but what’s happening now seems to me to scream for NASA senior management to get hold of all this before NASA’s whole credibility gets called into question. Hansen has been called into question over the ‘hockey stick’ and ‘Y2K’, and now he’s being found seemingly rather cavalierly manipulating data – and seemingly raw data according to some posts here. I really don’t understand why NASA aren’t all over this.

    I wonder if maybe it’s because Hansen is bigger than NASA now, and is a political player in a much higher stakes game than anything NASA has ever played. Maybe he’s a sort of cuckoo in their nest that they’ve been busily feeding, which has now grown bigger than its parents, taken over the nest, and can adjust/modify data in whatever way he wants with complete impunity.

    This thing has to come to a head somewhere. I commend you guys for what you’re doing. You all deserve medals.

  49. Phil
    Posted Sep 13, 2007 at 10:33 PM | Permalink

    #41 UPDATE

    I tried to find another public source for Walhalla data using Steve’s links at the left sidebar. On that page, I quote Steve:

    There is a mirror at KNMI: (I haven’t verified that versions correspond yet.)

    (My emphasis)

    Using that link, I searched for Walhalla.

    At the top of that page is the following text:

    Climate Explorer
    Time series
    WALHALLA mean temperature
    Retrieving data from GHCN v2 (adjusted) database …

    Searching for substation nr 72312.7 in v2.temperature.inv, WALHALLA (UNITED STATES OF AMERICA), coordinates: 34.75N, -83.08E, 298m (prob: 305m), Near WMO station code: 72312.7 WALHALLA , temp from v2.mean adj nodup [Celsius], (postscript version, raw data, netcdf)

    Clicking on “raw data”, I downloaded a file called t72312.7.dat and opened it with Notepad++ and resaved it as a .txt file. Then I opened the txt file in a spreadsheet and compared the 9-13-2007 GISS ADJ to t72312.7.dat and took the difference (9-13 minus t72312.7.dat):

    9-13-2007 GISS ADJ MINUS t72312.7.dat:

    First of all, t72312.7.dat has data from 1889 through 1899 which was apparently removed from the GISS files. In addition, it has data for three months missing from GISS 9-13 ADJ: nov 1906, oct and nov 1907.

    For the years 1900 through 1909, the deltas vary but are between +1.8 for jan 1900 and -0.7 for dec 1903, may 1906, aug 1907, jan 1908, with most of the deltas being positive (i.e. GISS 9-13 ADJ being WARMER than t72312.7.dat).

    Between 1919 and 1983 the deltas are fairly uniform:

    Jan 0.5 to 0.6 until ’44, then 0.6 to 0.7
    Feb 0.5 to 0.6 until ’44, then 0.6 to 0.7
    Mar 0.7 to 0.8
    Apr 0.6 to 0.7
    May 0.4 to 0.5 until ’44, then 0.5 to 0.6
    Jun 0.5 to 0.6, with 0.7 in ’48 only
    Jul 0.4 to 0.5 until ’44, then 0.5 to 0.6
    Aug 0.5 to 0.6 until ’44, then 0.6 to 0.7
    Sep 0.3 to 0.4 until ’44, then 0.4 to 0.5
    Oct 0.5 to 0.6 until ’44, then 0.6 to 0.7
    Nov 0.5 to 0.6 until ’44, then 0.6 to 0.7
    Dec 0.6 to 0.7 until ’44, then 0.7 to 0.8

    Between 1984 and 2006 the deltas are a little larger (i.e. more +) in the winter months (up to +1.1) but decrease in the summer months (down to +0.3) but remain positive.
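    (A comparison like the one above can be sketched in a few lines of Python. The parsing of the GISS and KNMI files is omitted, and the two small dictionaries below are hypothetical stand-ins, not actual Walhalla values:)

```python
# Sketch of the monthly comparison described above: given two temperature
# series keyed by (year, month), report the per-month deltas and any months
# present in one series but missing from the other.
# The sample values are hypothetical, not actual Walhalla data.

def monthly_deltas(series_a, series_b):
    """Return {(year, month): a - b} for months present in both series."""
    common = series_a.keys() & series_b.keys()
    return {k: round(series_a[k] - series_b[k], 1) for k in sorted(common)}

def missing_in(series_a, series_b):
    """Months present in series_a but absent from series_b."""
    return sorted(series_a.keys() - series_b.keys())

giss_adj = {(1906, 10): 16.2, (1906, 11): 9.8, (1907, 1): 5.1}   # hypothetical
knmi_raw = {(1906, 10): 15.7, (1907, 1): 4.6, (1889, 6): 22.0}   # hypothetical

print(monthly_deltas(giss_adj, knmi_raw))  # deltas for shared months
print(missing_in(knmi_raw, giss_adj))      # months in KNMI but not in GISS
```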

  50. Steve McIntyre
    Posted Sep 13, 2007 at 10:36 PM | Permalink

    #48. Hansen is using other people’s raw data. I’ve tried to specify things as Hansen’s dset=0 or Hansen’s dset=1 to be precise. For these stations, dset0 and dset1 are identical and are before the Hansen urban adjustment yielding dset2.

    In the most recent episode, he’s changed the input version for US stations from what appears to have been the HCN_SHAP_AC 3A series used up to Sept 7, 2007 to the HCN_DOE_MEAN 3A version used in the Sep 10 version. These have been adjusted at NOAA differently.

    To the extent that Hansen has described the data that he used in his articles or webpages, the descriptions obviously do not permit both data sets to be used indiscriminately. If a reason arose for changing the input data, then it seems to me that Hansen should have reported the reason and announced the change in data origin.

  51. Steve McIntyre
    Posted Sep 13, 2007 at 10:38 PM | Permalink

    #49. It looks like KNMI is using GHCN versions, which don’t necessarily coincide with USHCN versions. There may be an issue there and I’ll look at it, but it looks a little like we’re at cross purposes.

  52. Phil
    Posted Sep 13, 2007 at 10:45 PM | Permalink

    #50 Steve, I apologize for not being more precise and trying to use your nomenclature. I believe my “RAW” for Walhalla would be dset=0 and my “ADJ” would be dset=2 assuming the “homogeneity” and “urban” adjustments are the same adjustment.

  53. PaddikJ
    Posted Sep 13, 2007 at 11:05 PM | Permalink

    dirac angestun gesept says on September 13th, 2007 at 10:05 pm, ca: #48:

    ” . . . but what’s happening now seems to me to scream for NASA senior management to get hold of all this before NASA’s whole credibility gets called into question.”

    ” . . . maybe it’s because Hansen is bigger than NASA now . . .”

    You got it – even Hansen’s nominal boss, NASA Administrator Michael Griffin, treads carefully around him. Griffin made a few perfectly reasonable remarks on NPR’s Morning Edition a few months ago, to the effect that maybe we should all just take a deep breath where AGW is concerned, which were immediately attacked by Hansen and hastily retracted. Welcome to Kafka-land.

    But that could be changing. See the “Hansen releases the Code” thread, which links to Hansen’s personal blog, in which Hansen unintentionally but clearly reveals that he released the code under pressure – undoubtedly coming from higher up the food chain than Griffin.

  54. Willis Eschenbach
    Posted Sep 14, 2007 at 12:43 AM | Permalink

    According to the information on Detroit Lakes here, it appears that the main portal into the GISS station code is still using the old data with the Y2K error. Can anyone confirm this?


  55. nrk
    Posted Sep 14, 2007 at 12:53 AM | Permalink

    I know I’m way behind you guys, but the explanation of the changes for v2 was supposed to be posted in July 07. Could they still be fixing these changes?


  56. JerryB
    Posted Sep 14, 2007 at 5:29 AM | Permalink


    Not so. See Detroit Lakes samples
    and notice the dates of each file.

  57. Carl Gullans
    Posted Sep 14, 2007 at 5:35 AM | Permalink

    #16: Who was responsible for transcribing the old paper records into an electronic form? It would be interesting to see, from a sample of stations, how the paper record compares to the electronic record (has this ever been checked?).

  58. Posted Sep 14, 2007 at 5:46 AM | Permalink

    RE57, that is NCDC in Asheville. Unfortunately we don’t have access to the original paper logs from the observers.

  59. Severian
    Posted Sep 14, 2007 at 6:01 AM | Permalink

    This is just amazing. The further you dig into Hansen and the whole temperature measurement issue, the worse it becomes. First the Y2K bug, then Watts’ evaluation of temperature stations, now the amazingly morphing raw data. Add in alleged data fabrication for locations like China by Wang and Jones. And this is all for real-world, measured data. Proxy data, as in the infamous “hockey stick”, are even worse. It’s like the more rocks you turn over the more unpleasant, white, slimy things crawl out. I suspected the measured temperature records to be suspect, but this is just amazing. How in the world has this been allowed to happen? Such sloppiness at best, and overt manipulation at worst?

    In another forum, I saw a poster actually complain about people like Steve McIntyre and such as nitpickers who are just dying to find some minor math error to attempt to deny (there’s that word again) the truth. Well, what we’ve seen are far from “minor” errors with little to no effect. There’s an old Heinlein saying to not attribute to malice what’s adequately explained by incompetence, but this whole area looks like its straying well past incompetence as an explanation.

  60. Sam Urbinto
    Posted Sep 14, 2007 at 9:53 AM | Permalink

    PaddikJ, 46, regarding Oreskes – good points. a) abstracts match contents b) conclusions supported by data and c) how many life sciences-related. But don’t waste your time worrying about it. Note that passive voice (“tested by analyzing”) is used to describe the “survey”, leading me to believe it was done by somebody else, a student(s), for class work maybe? Besides that, the original article had the wrong search terms multiple times. But there are various other issues with that “essay” (looks like Science’s equivalent of an op-ed) that make it rather meaningless to look into the “survey”.

    One example of the problems I have with it: it appears she tries to hide the reference to the IPCC in footnote 4, and no doubt she cherry-picks her quote and what part of TAR it’s from. “Human activities … are modifying the concentration of atmospheric constituents … that absorb or scatter radiant energy. … [M]ost of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.” The reference points to J. J. McCarthy et al., Eds., Climate Change 2001: Impacts, Adaptation, and Vulnerability. Notice page 21.

    Notice how she conveniently left off “changes in land cover” and “properties of the surface that” from the quote. (And that it was from the TAR WGII Technical Summary 1.2, such summary “accepted but not approved in detail at the Sixth Session of IPCC Working Group II”)

    The main issue though is the reason for this “survey”. She says that AAAS, AMS, AGU, NAS and IPCC make statements and those statements probably reflect the views of the members. But perhaps there are dissenters. I know, let’s go survey scientists. By looking at article abstracts and seeing if there are any dissenters! Hunh?
    “…such reports and statements…. might downplay legitimate dissenting opinions. That hypothesis was tested by analyzing 928 abstracts…” Now you tell me how looking at the abstracts the reports and statements were taken from in the first place is a survey of scientists who might be dissenters. I sure don’t understand it!

    She also says “Remarkably, none of the papers disagreed with the consensus position.” Of course they wouldn’t. Those are the types of things the claim of a consensus is taken from! And even then they had to be handpicked, it seems. And this wasn’t meant to be replicated. No list of abstracts, no explanation of how they were graded, no criteria for how they were included or excluded. By climatologists only? Anyone involved in climate work? Randomly?

    Even all that aside, she didn’t even search for “anthropogenic global warming”.

  61. Anthony Watts
    Posted Sep 14, 2007 at 2:16 PM | Permalink

    RE57, JerryB I just downloaded and got through comparing those three “raw” GISS datasets using a checksum method and found huge differences between the three. The first two changed a little, but the third (most recent) was 25% materially different. Checksums make it easy to spot immediately how much has changed.

    Either there is some huge new error that was accidentally introduced, or there is something else occurring.
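    (For reference, a per-station checksum comparison of this sort can be done with Python’s standard library alone. This is only a sketch of the idea; the station names and snapshot contents below are invented placeholders, not the actual GISS files.)

```python
# Sketch of a checksum comparison between two dataset snapshots: hash each
# station's serialized record, then count how many digests changed.
# Station names and values are invented placeholders, not GISS data.
import hashlib

def station_digests(records):
    """Map station id -> SHA-256 digest of its serialized data."""
    return {sid: hashlib.sha256(data.encode()).hexdigest()
            for sid, data in records.items()}

def fraction_changed(old, new):
    """Fraction of stations present in both snapshots whose data differ."""
    old_d, new_d = station_digests(old), station_digests(new)
    common = old_d.keys() & new_d.keys()
    changed = sum(1 for sid in common if old_d[sid] != new_d[sid])
    return changed / len(common)

snap_aug = {"detroit_lakes": "8.2 -0.9 -7.6", "walhalla": "15.7 9.8"}  # placeholder
snap_sep = {"detroit_lakes": "8.5 -0.1 -6.7", "walhalla": "15.7 9.8"}  # placeholder
print(fraction_changed(snap_aug, snap_sep))  # one of two common stations changed
```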

    Just to be sure, can you give me the source for each of these files and/or the method used when they were collected and zipped? I just want to be sure I’m looking at the right things and that I am seeing what is in fact raw as opposed to an adjusted dataset.

  62. Steve McIntyre
    Posted Sep 14, 2007 at 2:23 PM | Permalink

    The original data (up to 1999) appears to come from (the 3A series) and from GHCN v2.mean after 2000. In August, they introduced a step adjustment to eliminate a discontinuity at Y2K.

    In Sept they appear to have substituted the 3A version.

    The difference is something to do with the NOAA station history adjustment – an adjustment that I’ve not probed yet.

    But the nerve of them to change their data source. The Sept 7 code refers to the hcn_doe_mean version not the shap_ac version – and to that extent, the code as published is a misrepresentation of how they derived their results.
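    (The August patch mentioned in the head post, a step adjustment of roughly +0.8 deg C applied to all pre-2000 values, amounts to something like the following sketch; the offset and sample series here are illustrative, not GISS’s actual numbers.)

```python
# Sketch of a step ("Y2K patch") adjustment: add a constant offset to every
# value before a breakpoint year, leaving later values untouched.
# The 0.8 offset and the sample series are illustrative only.

def step_adjust(series, break_year, offset):
    """series: {(year, month): temp}. Shift entries before break_year by offset."""
    return {k: (round(v + offset, 1) if k[0] < break_year else v)
            for k, v in series.items()}

raw = {(1931, 10): 8.2, (1999, 7): 21.4, (2000, 7): 22.0}  # hypothetical
print(step_adjust(raw, 2000, 0.8))  # pre-2000 values shifted up by 0.8
```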

  63. SteveSadlov
    Posted Sep 14, 2007 at 2:30 PM | Permalink

    The spirit of arrogance is detectable at RC, but in-your-face at the Wabbit Wun. How annoying.

  64. PaddikJ
    Posted Sep 14, 2007 at 2:42 PM | Permalink

    My criteria ‘b’, “conclusions (and abstracts) are supported by the authors’ own data”, was, on review, not very well stated. Something like “In addition to conclusions regarding the specific subjects of the papers, are there statements indicating acceptance of the AGW hypothesis which are not supported by the contents of said papers?”, would be closer to the mark.

    Putting aside for the moment that at the end of the day, it doesn’t matter how many researchers believe or dis-believe some proposition (re: Crichton’s excellent disparagement of “consensus science”), my immediate interest is in how belief in some over-arching proposition (AGW, in this case) may bias specific research projects. Obviously, I think Oreskes’ search criteria were too narrow, but would concede that she never claimed to do anything but gauge how widespread among scientists is the belief in AGW. My interest is in how that belief may or may not be at odds with data gathered and presented in particular research papers. I’ve seen several where the data do not support AGW, but the authors conclude that said data must be anomalous (usually some regional effect), since, of course, AGW is indisputable. I would like to test that on a larger sample, say, Oreskes’, but any randomly generated sampling of papers using a search term like “Anthropogenic Global Warming” would do. I just think it would be more interesting to use hers, if it’s available, since you could tabulate the new findings alongside her original criteria/findings.

  65. Sam Urbinto
    Posted Sep 14, 2007 at 3:28 PM | Permalink

    I understood your criterion b to basically be that; I was just paraphrasing. My point is that the thing wasn’t meant to be replicated. I think it shouldn’t be replicated. I like your idea to do something on a wider scale, and I’d agree on the face of it, we have a case of “the data must be anomalous because AGW is indisputable” where the conclusion proves itself rather than being researched. Like “If this isn’t a cat, it must be a dog.” or some such.

    Papers on all climate-related issues, “for Anthropogenic Global Warming”, from any source where the data in them supports their conclusions would be good to know.

    That “survey” was glossing over a few cherry picked abstracts chosen on a faulty premise to prove a point it didn’t and couldn’t prove. (Or probably was obvious on its face once you parse the thing.)

    If you want to know what she looked at, I think they went over them at Deltoid quite a bit, probably in the archives (it was a while ago), and she corresponded with some of the folks over there if I remember. So really, some of this has already been done, but not from a neutral point of view, it didn’t seem. Like I said, it was a while ago, and following it deeply seemed pointless, as it seemed like the survey itself was meant as a distraction, in an op-ed meant as a diversion. But YMMV. 🙂

    You should be able to find the links to various aspects of the subject at Wikipedia (Oreskes, scientific consensus, global warming dissenters, scientific opinion on climate change, some such like that) I’m sure. It’s probably a perfectly fine place to go to find links to the original stuff they link to so you can actually read for yourself unfiltered. (I’m not happy with the way they’ve phrased this matter, to say the least.)

  66. JerryB
    Posted Sep 14, 2007 at 4:08 PM | Permalink

    Re #62,


    “Just to be sure, can you give me the source for each of these files and/or
    the method used when they were collected and zipped”

    They all came from GISS, the usual way, and I zipped them to preserve the dates
    on which I got them from GISS.

  67. Phil
    Posted Sep 14, 2007 at 4:09 PM | Permalink

    #41, #49 UPDATE I checked the annual average mean temperature for Walhalla from CDIAC. First, go to this page at CDIAC (these are supposed to be USHCN files). Then click on “go to monthly data user interface” at the bottom of the page. In IE6, a new window will open with a map of the US, where you can click on a state, then on a station to get to a page that will build a file for you in .csv format. I downloaded the “mean temperature” for Walhalla and compared it to my 9-9-2007 GISS dset=0 (my “raw”).

    All values were in degrees F. There was no data prior to 1900. However, the data discontinuities that existed in all the GISS files (i.e. may, oct, nov & dec 1902;
    mar, apr 1903;
    nov 1905;
    apr, dec 1906;
    nov, dec 1907;
    mar to dec 1908;
    jan, feb 1909;
    1910 to 1917;
    jan to aug 1918;
    and mar 2000) were not present in the CDIAC version, although the CDIAC version only had data through 2005. At first, I thought that the missing data in the GISS versions might have been due to WW1, but the values in the CDIAC version would tend to negate this. In addition, it would not appear that the CDIAC version data for the years missing in the GISS version is a reconstruction, due to the difficulty/unlikelihood that monthly values would have been filled in for such a large gap, but I suppose anything is possible.

    After converting CDIAC to Celsius, the deltas for the period 1900 to 1909 varied with no particular pattern from -3.2 for nov 1901 (i.e. 9-9 was 3.2 degrees C COLDER than CDIAC) to +2.5 for dec 1903.

    From 1919 through 1984, a familiar pattern emerged, with the 9-9 GISS version 1.2 to 1.3 degrees WARMER in the winter months and 0.2 to 0.3 degrees WARMER in the summer months with the spring and fall having intermediate deltas. From 1985 through 2005, the 9-9 GISS version was 0.8 to 0.9 degrees C WARMER in the winter months and 0.3 to 0.4 degrees C WARMER in the summer months, with the spring and fall months again having intermediate deltas.

    In case I wasn’t clear, both the CDIAC and the 9-9 GISS dset=0 files are supposed to be unadjusted for UHI. A version adjusted for UHI is also apparently available from CDIAC.
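    (The unit handling in a comparison like this is simple but worth being explicit about: the CDIAC values are in degrees F and the GISS values in degrees C. A minimal sketch, with made-up sample values rather than actual Walhalla data:)

```python
# Sketch of the deg F -> deg C conversion done before differencing the two
# series. The sample values below are made up, not actual Walhalla data.

def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def delta_c(giss_c, cdiac_f):
    """GISS value (deg C) minus CDIAC value (deg F converted), to 0.1 deg C."""
    return round(giss_c - f_to_c(cdiac_f), 1)

print(f_to_c(39.2))        # 39.2 deg F is 4.0 deg C
print(delta_c(5.0, 39.2))  # a GISS value of 5.0 deg C would be 1.0 deg warmer
```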

  68. Anthony Watts
    Posted Sep 14, 2007 at 4:14 PM | Permalink

    RE68, Phil, thank you. I know what to do now.

  69. Anthony Watts
    Posted Sep 14, 2007 at 4:23 PM | Permalink

    RE68 Phil, forgot to mention that CDIAC uses FILNET, which “fills” in missing data based on an equation that estimates what the data would have been by examining data from neighboring sites.

    While what you have with degrees F is close to “raw” data, I don’t think it actually is.
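    (FILNET’s actual procedure is more involved, but the basic idea described above, estimating a missing value from neighboring stations, can be sketched as below. The inverse-distance weighting and all the numbers are assumptions for illustration, not FILNET’s real equation.)

```python
# Illustrative neighbor-based infill in the spirit of (not identical to)
# FILNET: estimate a missing monthly value as an inverse-distance-weighted
# average of the same month at nearby stations. All numbers are hypothetical.

def infill(neighbors):
    """neighbors: list of (distance_km, temp). Weight each temp by 1/distance."""
    weights = [1.0 / d for d, _ in neighbors]
    total = sum(w * t for w, (_, t) in zip(weights, neighbors))
    return total / sum(weights)

# A station 10 km away counts twice as much as one 20 km away.
print(round(infill([(10.0, 20.0), (20.0, 23.0)]), 1))  # 21.0
```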

  70. Phil
    Posted Sep 14, 2007 at 4:58 PM | Permalink

    #49 UPDATE

    I don’t know if this graph will show, but there does not appear to be much of a trend there.

  71. Phil
    Posted Sep 14, 2007 at 5:02 PM | Permalink

    #71 UPDATE

    Sorry: the picture caption didn’t show. The graph in #71 is the Walhalla unadjusted (dset=0) data from KNMI (probably GHCN data), the “raw data” referred to in #49.

  72. SteveSadlov
    Posted Sep 14, 2007 at 5:40 PM | Permalink

    RE: #71 – No trend for the whole thing, but within it, PDO can be seen. Unquestionably.

  73. Posted Sep 14, 2007 at 6:05 PM | Permalink

    I attribute the fact that the temps are holding to the fact that people are buying degrees and holding on to them in the hopes of future appreciation. However, if the market keeps declining they will be left with worthless degrees.

    Or maybe the Star Wars influence:

    “These are not the data you are looking for, you can move along now”.

  74. Posted Sep 14, 2007 at 6:08 PM | Permalink

    I don’t get it. A confident man who knows the USA is a minor part of the data would not cook the books for tenths of a degree.

    The way the world data set is computed must have some big USA influences, would be my guess.

  75. Phil
    Posted Sep 14, 2007 at 6:12 PM | Permalink

    #70 Anthony, thanks for the clarification and for what you and Steve are doing.

    #73 Is this what you mean by PDO?

  76. Posted Sep 14, 2007 at 6:41 PM | Permalink

    dirac angestun gesept September 13th, 2007 at 10:05 pm,

    The Hockey stick adjustment was just a flash in the pan. It built the base for Y2K.

    Since Y2K busted out, Congress critters (or their staff) are monitoring this joint in the hopes of collecting further goodies. Editorial writers (Cal Thomas comes to mind – way to go Steve) are eying the joint. Ordinary idiots (like me) have come out of the woodwork to join the fun.

    There is nothing right now that a certain political party wouldn’t give – not for proof of no AGW – but for proof of cooking the books. Even lawyers can understand accounting. None of this makes any sense unless the AGWers think they have to act soon before the consensus evaporates. Perhaps in his heart of hearts Hansen fears the solar scientists are right. Of course this paragraph is political speculation of the rankest sort and not to be taken seriously at all.

    Maybe the fish has been dead for a while, because it is sure starting to smell.

  77. JerryB
    Posted Sep 16, 2007 at 5:14 PM | Permalink

    Regarding the effects of GISS changing their input file for USHCN
    stations from an old USHCN file to a recent one, some background.

    There have been several “editions” of USHCN data (I’ll use the word
    edition, rather than version, to avoid confusion with NCDC references
    to a new version 2 of USHCN). To distinguish among the several editions
    I will use the date of the end year of the data contained in each. Thus,
    there have been 1994, 1999, 2000, 2002, 2003, and 2006 editions of USHCN.
    (The apparent 2005 edition at CDIAC is simply a copy of the 2006 edition
    minus the data for the year 2006.)

    A principal difference between successive editions is the addition of
    data for year(s) since the previous edition. However, other changes
    also happen. In particular, SHAP (Station History Adjustment Program)
    adjustments change, even though the station histories do not. The USHCN
    station history file has not been updated since late 1994.

    On average, the net effects of the changes in SHAP adjustments are
    relatively small, i.e. less than 0.1 F, but for individual stations they
    may be several tenths of a degree F, in some cases more than 1 F, and may
    also change sign. (USHCN uses Fahrenheit.)

    So the effects of GISS changing their input file for USHCN stations would
    depend, in part, on SHAP adjustment changes between their old USHCN file
    and the one they currently use.

    On reviewing data from the 1999 and 2000 USHCN editions, for stations of
    which the SHAP adjustments differed between those editions, such as
    Lakin, Kansas, and Hammon, Oklahoma, and comparing such data with GISS
    files from early August, i.e. prior to the implementation of the “Y2K
    patch”, I must conclude that their old file was a 1999 edition.

    Without dumping too much data here, I would like to try to provide some
    (very rough), and indirect, indications of changes of SHAP adjustments
    between the 1999, and the 2006, USHCN editions.

    Following are some station counts and averages of annual USHCN
    temperature means adjusted through FILNET of USHCN editions for the
    indicated years, and differences (all temperature numbers in degrees F).
    Differences of SHAP adjustments would be the main contributors to the
    differences of these adjusted temperature means.

    year  count  mean(1999)   count  mean(2006)   06 - 99

    1911   1039   52.297809    1038   52.24843    -0.04937
    1912   1048   50.455186    1048   50.41807    -0.03710
    1913   1063   52.093353    1064   52.05193    -0.04142
    1931   1166   53.856469    1162   53.81293    -0.04353
    1932   1168   52.037618    1165   52.00142    -0.03625
    1933   1169   53.138960    1166   53.09360    -0.04539
    1951   1195   51.411498    1194   51.37631    -0.03513
    1952   1198   52.737762    1197   52.70506    -0.03264
    1953   1198   53.770401    1196   53.73274    -0.03772
    1971   1218   52.135674    1217   52.12548    -0.01021
    1972   1215   51.631277    1215   51.62854    -0.00267
    1973   1216   52.945400    1214   52.92428    -0.02113
    1991   1219   53.729329    1220   53.70822    -0.02112
    1992   1219   52.773082    1221   52.75776    -0.01528
    1993   1220   51.663433    1220   51.63488    -0.02858

    That is an excerpt from this file.

    OK, a rough translation: in the 2006 edition, most old years appear
    slightly cooler than in the 1999 edition. Surprise, surprise.
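    The table-building JerryB describes (per-edition station counts and averages of annual means, then the difference of those averages) can be sketched roughly as follows; the data structures and values here are invented placeholders, not the actual USHCN files:

```python
# Sketch (not JerryB's actual code): comparing annual means between two
# hypothetical USHCN editions. Station IDs and values below are made up.

def edition_summary(records):
    """records: {year: {station_id: annual_mean_F}} for one edition.
    Returns {year: (station_count, mean_of_station_means)}."""
    out = {}
    for year, stations in records.items():
        vals = list(stations.values())
        out[year] = (len(vals), sum(vals) / len(vals))
    return out

ed_1999 = {1911: {"A": 52.3, "B": 52.1, "C": 52.5}}
ed_2006 = {1911: {"A": 52.2, "B": 52.1, "C": 52.4}}

s99 = edition_summary(ed_1999)
s06 = edition_summary(ed_2006)
for year in sorted(set(s99) & set(s06)):
    n99, m99 = s99[year]
    n06, m06 = s06[year]
    print(f"{year} {n99} {m99:.6f} {n06} {m06:.5f} {m06 - m99:+.5f}")
```

    Note that, as in JerryB's table, the station sets need not be identical between editions; each edition is averaged over whatever stations it contains, and only then are the averages differenced.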

    As mentioned, SHAP adjustments for individual stations may change
    considerably between USHCN editions, and in the case of Detroit Lakes,
    they did change considerably between the 1999, and the 2006, editions.

    Year  1999 adj  2006 adj   (deg F, rounded to tenths)

    1898    -1.1      -1.9
    1917    -1.1      -1.9
    1919    -1.6      -2.4
    1932    -1.6      -2.4
    1935    -2.0      -2.4
    1950    -2.0      -2.4

    Those are derived from this other file.

    The changes of those adjustments between those editions would account for
    much of the difference between the first two graphs in this thread.

  78. John F. Pittman
    Posted Sep 17, 2007 at 10:42 AM | Permalink

    #79 JerryB
    I wonder if what we are seeing could be a “simple” real-time adjustment, as it were. Imagine that every so often we come up with a “most likely adjustment”. We would typically assume that the present data are more accurate and more in line with present requirements, and we would then use our calibration period to come up with the new offset. Now suppose this is what is being done. Assume that in 1999 we were using 1985 to 1995, and then in 2005 we started using 1991 to 2001. Whether the apparent temperature went up because of UHI or GW, in order to come up with the new and improved version, the past would have to be decreased with respect to the present. I wonder if there is a way to tell from the data/procedures whether this was done?
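    Pittman's conjecture can be illustrated with a toy calculation (entirely my construction, with made-up numbers, and not a claim about NASA's actual procedure): recalibrating an offset against a later, warmer reference window lowers every earlier value relative to the present.

```python
# Toy illustration of a moving calibration window. The series is a
# synthetic linear warming trend; the windows match Pittman's example.

series = {year: 50.0 + 0.02 * (year - 1900) for year in range(1900, 2002)}

def offset_vs_window(series, start, end):
    """Mean of the series over an inclusive reference window."""
    window = [series[y] for y in range(start, end + 1)]
    return sum(window) / len(window)

off_old = offset_vs_window(series, 1985, 1995)   # 1999-era calibration
off_new = offset_vs_window(series, 1991, 2001)   # 2005-era calibration

# Anomaly of a fixed past year (1931) under each calibration:
anom_old = series[1931] - off_old
anom_new = series[1931] - off_new
print(anom_new - anom_old)   # negative: the past looks cooler
```

    With these numbers the past year drops by 0.12 F purely from the window shift, with no change at all to the underlying data for 1931.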

  79. JerryB
    Posted Sep 17, 2007 at 1:09 PM | Permalink

    Re #80,


    I don’t have a clear idea of what you have in mind, but my guess would be that
    if you look at the data at the second link in comment 79, it will not fit
    whatever it is that you do have in mind.

  80. John F. Pittman
    Posted Sep 17, 2007 at 6:04 PM | Permalink

    #81 I guess I need to look into the details of “mainly” SHAP adjustments versus FILNET. Thanks.

    Subtracting TOB from FILNET temperatures (in degrees F by the
    way) will give numbers that consist mainly of the SHAP adjustments for
    that USHCN edition, using the end year of the included data as the “name”
    of each edition.
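    JerryB's recipe, recovering what is mainly the SHAP component by subtracting the TOB series from the FILNET series, amounts to a per-year subtraction; a sketch with invented placeholder values:

```python
# Sketch of the bookkeeping JerryB describes: FILNET minus TOB (both in
# deg F) leaves a residual that is mainly the SHAP adjustment. The
# temperature values below are invented placeholders.

def approx_shap(filnet_F, tob_F):
    """Per-year FILNET minus TOB, rounded to tenths of a degree F."""
    return {y: round(filnet_F[y] - tob_F[y], 1)
            for y in filnet_F if y in tob_F}

filnet = {1898: 40.2, 1917: 39.5}
tob    = {1898: 42.1, 1917: 41.4}
print(approx_shap(filnet, tob))   # {1898: -1.9, 1917: -1.9}
```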

  81. Corky Boyd
    Posted Sep 18, 2007 at 1:42 AM | Permalink


    I am a new reader of your site, so forgive me if what I am saying has been covered before.

    Weather station siting

    I just returned from a two week trip through the US National Parks, travelling through Utah, Wyoming, Idaho and Arizona. Thanks to your excellent efforts to provide better accuracy in weather/climate reporting, I spotted about 15 weather sites during my trip of 3,000 miles. What struck me most was that, with the exception of one, all were within 40 feet of the pavement (mostly blacktop). One was in the median of an Interstate highway, within 15 feet of both sides. Most appeared new, or at least the chain link fence appeared bright and shiny. My thought was that the proximity to such a heat sink might skew the results. Are you the person tracking these sites? If not, would you give me his name and e-mail address and I can give general locations of some of them.

    Possible intentional climatic alterations by Soviets/Russians in Siberia.

    In the ’70s, when there was concern about a possible new ice age, some folks were suggesting powdered coal be dispersed by aircraft to promote snow melt and absorb solar radiation. I also believe I read (possibly in Aviation Week) that the Soviets were experimenting with the concept. If there were ever an area that would benefit (if you look only at agriculture), it would be Siberia. Have you heard of this? Possibly someone with access to Lexis-Nexis can research it for you. It might explain the high anomalous temperatures there.

    Keep up the good work.

  82. D. Patterson
    Posted Sep 18, 2007 at 5:29 AM | Permalink

    Re: #83

    Yes, it is true: spreading coal dust and other materials to melt snow and ice has been tried by recent researchers, and for many centuries before that.

    Over a thousand years ago, farmers in Asia knew that dark colors absorb solar energy. So, they spread dark-colored materials such as soil and ashes over snow to promote melting, and this is how they watered their crops in the springtime. Chinese and Russian researchers have recently tried something similar by sprinkling coal dust onto glaciers, hoping that the melting would provide water to the drought-stricken countries of India, Afghanistan, and Pakistan. However, the experiment proved too costly, and they abandoned the idea.

  83. SteveSadlov
    Posted Sep 18, 2007 at 10:23 AM | Permalink

    RE: #84 – And getting a bit further into “Coast to Coast AM” territory, there was the Russian Woodpecker. There was an undercurrent of suspicion about that system, regarding some attempt to influence the weather by tweaking the magnetic field lines and ion distributions in the near-polar region. Again, there are credible explanations that the “woodpecker” was actually a typical VLF com system for submarines.

  84. D. Patterson
    Posted Sep 18, 2007 at 10:56 AM | Permalink

    Re: #85

    Have you seen the article by Spencer Weart at the American Institute of Physics (AIP) about weather modification schemes and climatological warfare?

  85. SteveSadlov
    Posted Sep 18, 2007 at 1:36 PM | Permalink

    RE: #85 – I have. In fact, there is, believe it or not, a treaty (not sure about ratification status) which attempts to ban climatological and meteorological warfare.

    Slight clarification on the woodpecker: after a bit of googling, I believe it was not sub com but, instead, a phased-array radar for an (illegal) ABM system proscribed by the (now defunct) ABM Treaty.

  86. Sam Urbinto
    Posted Sep 18, 2007 at 4:28 PM | Permalink

    We could always use the solar wind and the moon to yank all the metallic objects out of an enemy’s country, right?

  87. Posted Sep 18, 2007 at 9:16 PM | Permalink

    Corky Boyd September 18th, 2007 at 1:42 am,

    The Russians have been trying for centuries to get people to live in Siberia. They have not been above cooking the books to give people false impressions. They may have done this with temperatures. It may be as simple as mounting the weather station near a conduit for the district heating steam system.

    Pure speculation of course.

  88. _Jim
    Posted Sep 18, 2007 at 10:06 PM | Permalink

    85 87 re: Russian Woodpecker:

    but instead, phased array radar for an

    OTH (Over the Horizon) HF (High Frequency, meaning “shortwave”, as in 3 – 30 MHz) RADAR; unsure of its exact purpose or what exactly they could see (pull out of the ‘mud’, so to speak, with any received signals they picked up), as the years during which this was in service were quite a while before easily-implemented DSP techniques … circa ’78 – ’79 at least, when I worked at Heathkit (New Product Engineering/Ham Radio/Comm Dept) in Benton Harbor (St. Joseph actually, on Hilltop Road overlooking Lk Michigan), we would crank up a rig on 20 M (14 MHz) and, using the electronic CW (Morse code) keyer, send back a string of dits, and Woody would QSY (shift frequency).

    I think Woody was looking for ship, aircraft, and even perhaps missile activity; each would have a characteristic Doppler and/or range-rate shift impressed on any returning/reflected RADAR echoes from … anywhere in the ‘skip zone’ of the radio signal. Today, there are still OTH RADARs that can be picked up; I recorded a few minutes of the ‘swept’ or chirped signal on a cassette recorder for later non-real-time analysis.

    “Phased array” antennas would have been used to form/steer the beam towards intended compass bearings to target specific areas of interest; it is tough to create a 14 MHz ‘dish’ of any practical use, so a series of vertical elements ‘fed’ as required accomplishes nearly the same trick (wave reinforcement or cancellation/simple wave physics).
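    As a tangent, the beam-forming trick _Jim describes (phasing a row of vertical elements instead of building a 14 MHz dish) comes down to applying a progressive phase shift across the elements; a rough sketch with an invented geometry:

```python
# Phase shifts for steering a uniform linear array at 14 MHz. The
# element count, spacing, and steering angle are invented for
# illustration; this is textbook array math, not the Duga design.
import math

C = 299_792_458.0          # speed of light, m/s
freq = 14e6                # 14 MHz (20 m band)
wavelength = C / freq      # about 21.4 m
spacing = wavelength / 2   # half-wave element spacing

def steering_phases_deg(n_elements, steer_deg):
    """Per-element phase shift (degrees) that points a uniform linear
    array's main beam steer_deg off broadside."""
    k = 2 * math.pi / wavelength               # wavenumber, rad/m
    step = k * spacing * math.sin(math.radians(steer_deg))
    return [math.degrees(i * step) % 360 for i in range(n_elements)]

print(steering_phases_deg(4, 30))   # roughly [0, 90, 180, 270]
```

    With half-wave spacing and a 30 degree steer, the per-element increment works out to 90 degrees, which is the “wave reinforcement or cancellation” physics in one line.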

  89. D. Patterson
    Posted Sep 19, 2007 at 4:23 AM | Permalink

    Re: #85, 87, 90

    FWIW, the Wikipedia article on the Soviet Woodpecker (aka Duga-3 and STEEL YARD) and many other articles found by a search have quite a bit to say about the subject.

  90. _Jim
    Posted Sep 19, 2007 at 9:24 AM | Permalink

    FWIW, the Wikipedia article on the Soviet Woodpecker

    Having lived through a good bit of that period, having also worked in parallel if not directly identical ‘fields’ in some cases, underscored with a technical (engineering) background to boot … I like to rely upon my own knowledge base as a ‘check’ on anything else that happens to drift my direction in the way of anecdotes, stories, and accounts (call it auditing, if you will, keeping the discussion in line with the purpose of this site); not everything one reads on Wikipedia (or the internet) is authoritative, right, true, *or* complete.

    Lacking here, in any discussion of the Woodpecker, are accounts that discussed ‘it’ at the time it was taking place in ham publications such as QST and 73 Magazine; discussions such as DFing the source of the pulses (to provinces of the old USSR).

    (As an aside, I imagine an assembled fleet of SAC B-52 bombers would have had quite a signature on ‘the woodpecker’, as did the daily readiness flights the B-52s participated in back in those days as part of the nuclear triad that was monitored: missiles, subs, and airborne delivery platforms.)

  91. Posted Sep 19, 2007 at 9:41 AM | Permalink

    M. Simon, #77 and Professor Manuel, #78:

    Vincit Omnia Veritas (truth conquers all)

  92. Mark T.
    Posted Sep 19, 2007 at 9:46 AM | Permalink

    OTH (Over the Horizon) HF (High Frequency meaning “shortwave” as on 3 – 30 MHz) RADAR; unsure of it’s exact purpose or what exactly they could see (pull out of the ‘mud’ so to speak with any received signals they picked up) as the years during which this was in service was quite awhile before easily-implemented DSP techniques …

    Quite a lot, actually. The original successful OTH early warning radar had a transmit array in Rome, NY, and a receive array in Maine. It was put in place in 1955. From the wiki:

    The USAF’s Rome Laboratory had the first US success with their AN/FPS-118 OTH-B. A prototype with a 1 MW transmitter and a separate receiver was installed in Maine, offering coverage over a 60 degree arc between 900 to 3,300 km.

    They could detect missile launches, bombers, etc. Quite a useful beast. Now they are used primarily for other things, such as NOAA modeling ocean currents/wind patterns.

    Kwinkidentally, I just wrote a proposal for developing orthogonal waveforms for multiple-input, multiple-output (MIMO) radar, particularly for use in OTH radar. The original systems all used the same waveform, generally a frequency modulated continuous wave.
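    For the curious, the “frequency modulated continuous wave” Mark T. mentions is just a linear chirp; a minimal sketch (mine, not from any actual radar program) of generating one:

```python
# Samples of a linear FM (LFM) chirp: the instantaneous frequency sweeps
# from f0 to f0 + bandwidth over the pulse duration. Parameter values in
# the example call are arbitrary illustrations.
import math

def lfm_samples(f0_hz, bandwidth_hz, duration_s, sample_rate_hz):
    """Real-valued samples of a linear FM chirp."""
    n = int(duration_s * sample_rate_hz)
    k = bandwidth_hz / duration_s              # sweep rate, Hz/s
    out = []
    for i in range(n):
        t = i / sample_rate_hz
        # Phase is the integral of 2*pi*(f0 + k*t) over time.
        phase = 2 * math.pi * (f0_hz * t + 0.5 * k * t * t)
        out.append(math.cos(phase))
    return out

samples = lfm_samples(f0_hz=0.0, bandwidth_hz=100.0, duration_s=1.0,
                      sample_rate_hz=1000.0)
print(len(samples))   # 1000
```

    Pulse compression correlates the received echo against this known sweep, which is the S/N trick _Jim alluded to with “chirping”.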


  93. _Jim
    Posted Sep 19, 2007 at 10:08 AM | Permalink

    Quite a lot, actually.

    Please; my comment was related to what ‘they’ could see over the continental US or Canada (as the signal from said Woodpecker was receivable quite easily stateside) given primitive (by today’s standards) technology; the RADAR equation pretty much spells out the parameters needed to obtain a given signal strength of a certain value before the application of S/N improvement techniques (like pulse compression, alluded to earlier when I cited ‘chirping’).

    These replies are meant to be general in nature, relating back to ClimateAudit topics of discussion (I believe any number of us have easy access to such works as Skolnik’s “Radar Handbook”), so perhaps we should lay low on further discourse on this subject.

  94. Mark T.
    Posted Sep 19, 2007 at 10:46 AM | Permalink

    So was my response (meant to be general in nature), _Jim, try not to be so testy.


  95. Roger Bell
    Posted Sep 21, 2007 at 3:41 PM | Permalink

    Re #83 on climate alterations.
    Science News – 9/1/2007, page 141, has a brief article about Greenland receiving some soot from Canadian wildfires. However, the amount increased around 1850, when mills and power plants in Canada and elsewhere in the Northern Hemisphere began burning lots of coal.
    Industrial soot fell at its greatest rate between 1906 and 1910, causing the darkened snow to absorb about eight times as much solar radiation as it would have if free of soot. The change in energy balance warmed the snow and influenced climate.
    More recently, changes in regulations and technology have significantly reduced emissions of industrial soot. However, in the past few decades, Arctic snow has on average absorbed about 40% more solar energy than it did before the Industrial Revolution – see Sept 9 Science.
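    The “eight times as much solar radiation” figure is at least dimensionally plausible; a back-of-envelope check (my albedo numbers, not the article’s):

```python
# Back-of-envelope check: "eight times the absorbed solar radiation" is
# consistent with fresh-snow albedo near 0.9 dropping to roughly 0.2 for
# heavily sooted snow. Both albedo values are my assumptions.

def absorbed_fraction(albedo):
    """Fraction of incident solar radiation absorbed by a surface."""
    return 1.0 - albedo

clean = absorbed_fraction(0.9)    # clean snow absorbs ~10%
sooty = absorbed_fraction(0.2)    # sooted snow absorbs ~80%
print(sooty / clean)              # about 8
```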

  96. MarkR
    Posted Sep 21, 2007 at 6:13 PM | Permalink

    #97 Roger. Over time, as the darkened snow heats more than the clean snow around it, won’t the darkened (carbon-particle-laden) snow tend to sink, and become less heat-absorbing as it sinks, since it is no longer exposed to the sun’s rays?

  97. Steve McIntyre
    Posted Feb 17, 2008 at 3:13 PM | Permalink

    NOAA has deleted the files referred to in this post, the ones that were previously archived in this directory, and the ones that were required to reconcile the former GISS results. If anyone has a copy of the SHAP file, I’d appreciate it.

  98. Posted Feb 17, 2008 at 7:46 PM | Permalink

    Have you searched your computer for it? You might have a cached copy. If you do the
    search, be sure to enable searching for hidden files and folders.

2 Trackbacks

  1. […] and 2006. NASA originally recalculated the numbers in response to pressure from Steve McIntyre, who questioned disparities between official NASA tabulations and raw data reported by individual […]

  2. […] posted on the topic on Sept 13 […]
