Revisiting Detroit Lakes

Some long-time Climate Audit readers may remember this famous picture of the USHCN climate station of record in Detroit Lakes, MN.

This is what I wrote on July 26th, 2007 about it in:

How Not to Measure Temperature, Part 25

This picture, taken by www.surfacestations.org volunteer Don Kostuch, shows the Detroit Lakes, MN USHCN climate station of record. The Stevenson Screen is sinking into the swamp and the MMTS sensor is kept at a comfortable temperature thanks to the nearby A/C units.

Detroit_lakes_USHCN.jpg

The complete set of pictures is here

This plot from NASA’s GISS makes it pretty easy to see that there was no discernible multi-decadal temperature trend until the A/C units were installed. And it’s not hard to figure out when that was.

Detroit_lakes_GISSplot.jpg

As it turns out, that curious jump in the GISS record, even though it coincided with the placement of the a/c heat exchangers (I checked with the chief engineer of the radio station, and he pulled the invoices to verify the date), wasn’t the most important issue.

Steve McIntyre of Climate Audit saw something else, mainly because other nearby stations showed nearly the same odd jump in the data. That jump led to the discovery of a data-splicing glitch in the NASA GISS process that joins the pre- and post-2000 data.

It became known as the GISS Y2K glitch. It changed the rankings in GISS surface temperature reporting, bringing 1998 down so that it was no longer the hottest year on record. Here’s a writeup from Steve on the data itself.
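The mechanics of that kind of splicing error can be illustrated with a toy example (hypothetical numbers, not the actual GISS data or pipeline): if an adjustment is applied to the pre-2000 segment of a station record but not to the post-2000 segment, a spurious step appears right at the join.

```python
# Toy illustration of a splice discontinuity (hypothetical numbers,
# not actual GISS data). An adjustment applied only to the pre-2000
# segment leaves a spurious step at the year-2000 boundary.
adjustment = -0.7  # hypothetical station adjustment, degrees C
raw = [11.0, 11.1, 10.9, 11.2,   # 1996-1999
       11.1, 11.0, 11.2, 11.1]   # 2000-2003

# Naive splice: adjusted pre-2000 values joined to unadjusted post-2000 values.
spliced = [t + adjustment for t in raw[:4]] + raw[4:]
step = spliced[4] - spliced[3]   # spurious jump of about 0.6 C at the join

# Consistent handling (same adjustment across the whole record) has no
# artificial step; year-to-year differences match the raw data.
consistent = [t + adjustment for t in raw]
```

The point of the sketch is only that a step of this kind is an artifact of inconsistent processing, not of the climate at the station.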

Yesterday, volunteer Mark Ewens sent me some updated pictures of the Detroit Lakes site. It appears the embarrassment of having such terrible station siting has pushed the local NWS office into making some improvements:

Detroit_Lakes_1NNE_Looking_NorthWest

As you can see, the MMTS has been moved away from the a/c units and the building. The Stevenson Screen appears to be gone. There is an interesting story about the Stevenson Screen: it was originally moved out of that center location, where the MMTS now stands, because of concern that somebody might break the mercury thermometers inside, and the spilled mercury would prompt a “wetlands hazmat response” – any EPA field agent’s dream, a double whammy.

Here are more pictures:

Detroit_Lakes_1NNE_Looking_East

Detroit_Lakes_1NNE_Looking_East_Northeast

Detroit_Lakes_1NNE_Looking_West

Mark writes:

About a year ago I indicated that the MMTS at the Detroit Lakes 1NNE Coop site was moved. See attached the pictures I took last week while on a trip. Obviously not optimal, but much better. Like almost all radio stations this one is located in a swamp, so I’ve got limited options to work with. The observer did note that he has noticed a marked decrease in the average temperatures since the move – and not just due to global cooling!

The MMTS is ~80 feet from the building. The brown stalks are the left over winter kill of the saw grass that
is common in the swampy area of west central Minnesota.

Mark Ewens
Grand Forks ND

Apparently, the NWS thought enough of the criticism of the siting next to a/c heat exchangers to do something about it. And I’ve been hearing from time to time that stations volunteers have visited, and that we have showcased in “How Not To Measure Temperature, Part X”, have been quietly cleaned up.

While that is encouraging, the fact remains that it took a team of concerned citizens and some international embarrassment to get NOAA to fix quality control problems in climate monitoring stations that they should have recognized and corrected long ago.


99 Comments

  1. Posted Jun 9, 2009 at 10:14 AM | Permalink | Reply

    The real question is, are they thoroughly documenting these station changes? And are they admitting that they can’t really quantify the historic temperatures from the sites which were badly placed for years or decades?

  2. Andrew
    Posted Jun 9, 2009 at 10:19 AM | Permalink | Reply

    As I said at WUWT, it would seem that the discontinuity introduced by moving this station is at least as troubling as where it was put in the first place…

    • Andrew
      Posted Jun 9, 2009 at 10:44 AM | Permalink | Reply

      Re: Andrew (#2), I would like to thank Anthony for replying at WUWT to this sentiment. We are in agreement, and I also hope that the change-point detection al-gore-rhythm can catch this.

  3. M. Villeger
    Posted Jun 9, 2009 at 10:19 AM | Permalink | Reply

    “Stations … we have showcased in ‘How Not To Measure Temperature, Part X’ have been quietly cleaned up.” Wouldn’t it make sense to acknowledge the problem and update stations openly, not quietly?

  4. Clark
    Posted Jun 9, 2009 at 10:52 AM | Permalink | Reply

    Reading all of your fantastic How Not To Measure Temperature series made me aware of several things. Probably most important imo was the switch to having the sensors linked up by cable and the lack of funds with which to do it. This forced so many site managers to simply put the sensors a couple feet away from a building to limit the amount of work and money involved in running underground cables. I wonder how much this system-wide change affected measured temps.

  5. BRIAN M FLYNN
    Posted Jun 9, 2009 at 11:15 AM | Permalink | Reply

    Anthony: “I’ve been hearing from time to time, that stations … have been quietly cleaned up.”
    Great story of the power of, by and for the people where government is found complacent. In those circumstances, more “power to the people”, more often.

  6. Anthony Watts
    Posted Jun 9, 2009 at 11:19 AM | Permalink | Reply

    snip =-policy

  7. dearieme
    Posted Jun 9, 2009 at 11:20 AM | Permalink | Reply

    The thing to do with rubbishy observational data is to put it in the bin. Don’t adjust it, refine it or correct it. Bin it.

  8. Craig Loehle
    Posted Jun 9, 2009 at 12:47 PM | Permalink | Reply

    It seems like the proper thing to do when moving a station is to install the new station and calibrate it against the old one for a while before removing the old one. “Cleaning up” sites without doing this simply adds more noise to the system.
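The overlap calibration Craig describes could be sketched like this (a minimal illustration with made-up readings and a hypothetical `homogenize` helper, not any NWS procedure; a real calibration would use a long overlap and check maxima, minima, and seasons separately):

```python
# Sketch of tying a relocated sensor to the old record via an overlap
# period (made-up readings). The mean old-minus-new offset over the
# overlap is used to express later new-site readings on the old baseline.
old_site = [22.1, 23.4, 21.8, 22.9, 23.0]  # old location, overlap days (deg C)
new_site = [21.3, 22.7, 21.0, 22.2, 22.1]  # new location, same days

offset = sum(o - n for o, n in zip(old_site, new_site)) / len(old_site)

def homogenize(reading):
    """Map a new-site reading onto the old site's baseline."""
    return reading + offset
```

With these numbers the offset comes out to about 0.78 C; removing the old station without any such overlap leaves that offset baked into the record as an undocumented step.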

  9. mpaul
    Posted Jun 9, 2009 at 1:24 PM | Permalink | Reply

    Steve, it would be interesting to see the revised GISS graph presented next to the original graph in this post now that the Y2K bug has been “remediated” and the instruments re-sited. It would also be interesting to know the date that the instruments were re-sited.

  10. Posted Jun 9, 2009 at 1:27 PM | Permalink | Reply

    Glad to see our work at Surfacestations.org has not fallen on deaf ears. Good job everyone.

  11. Jack
    Posted Jun 9, 2009 at 2:08 PM | Permalink | Reply

    +11
    “Steve, it would be interesting to see the revised GISS graph presented next to the original graph in this post now that the Y2K bug has been “remediated” and the instruments re-sited. It would also be interesting to know the date that the instruments were re-sited.”

    At the same time, why doncha put up the plots showing the globally averaged temperatures with and without this fix. I’m sure you’ll all be bowled over by the “huge” impact it had. What an amazing contribution you “citizen scientists” made.

    • RomanM
      Posted Jun 9, 2009 at 2:20 PM | Permalink | Reply

      Re: Jack (#13),

      I’m sure you’ll all be bowled over by the “huge” impact it had.

      So what makes you so certain that this is the only problem? You don’t know, Jack!

    • Steve McIntyre
      Posted Jun 9, 2009 at 2:41 PM | Permalink | Reply

      Re: Jack (#13),

      See the post http://www.climateaudit.org/?p=1885 which summarized the issues as of Aug 2007.

      As to your snark, many of the commenters here have PhDs in statistics and are not in your sense “citizen scientists”. They are professional statisticians.

      Unfortunately, the “amateurs” are all too often climate scientists in the authors of “peer reviewed literature” – as this literature is all too often reviewed by “peers” who are also weakly qualified in statistics.

  12. Gary Strand
    Posted Jun 9, 2009 at 2:28 PM | Permalink | Reply

    It would be interesting to see a time series plot of the US land surface average temperature by each class of station – from best to worst, to see the effect.

    • David Cauthen
      Posted Jun 9, 2009 at 2:36 PM | Permalink | Reply

      Re: Gary Strand (#15),
      What’s interesting is why some outfit, say NCAR for instance, hasn’t done that already. What’s also interesting is why class 3-5 stations are being used at all.

      • Gary Strand
        Posted Jun 9, 2009 at 3:05 PM | Permalink | Reply

        Re: David Cauthen (#16),

        What’s interesting is why some outfit, say NCAR for instance, hasn’t done that already.

        Actually, NOAA is the more relevant agency. That aside, are there any plans by the site examiners to create these data?

        • Steve McIntyre
          Posted Jun 9, 2009 at 3:19 PM | Permalink

          Re: Gary Strand (#19),

          are there any plans by the site examiners to create these data?

          Shouldn’t the people who are paid to collect the data be doing so?

    • Steve McIntyre
      Posted Jun 9, 2009 at 3:18 PM | Permalink | Reply

      Re: Gary Strand (#15),

      There is a noticeable difference between CRN5 and CRN1 trends in the sample available as of Sep 2007, discussed in the posts below.

      http://www.climateaudit.org/?p=2061
      http://www.climateaudit.org/?p=2069
      http://www.climateaudit.org/?p=2145
      http://www.climateaudit.org/?p=4852

      This indicates that the argument of Jones et al 1990 that the UHI effect is no more than 0.1 deg C/century doesn’t hold up and that a UHI adjustment is necessary. To my knowledge, neither NOAA nor CRU attempt such an adjustment. The GISS adjustment in the US is reasonable, but outside the US, they use a different procedure which appears to do nothing except randomly permute the data.

    • RomanM
      Posted Jun 9, 2009 at 3:40 PM | Permalink | Reply

      Re: Gary Strand (#15),

      Simple time series plots don’t tell you the entire story because there are other factors involved at the same time. Try looking at the comment here to see the kind of analysis needed. That one was done with the stations that had been reviewed by WUWT at the time.

  13. JohnAnnArbor
    Posted Jun 9, 2009 at 3:01 PM | Permalink | Reply

    Shouldn’t all changes like these to long-term sensor sites be documented? It’s relevant to the data, so not having that information is a big handicap in understanding the data.

  14. Ian George
    Posted Jun 9, 2009 at 3:25 PM | Permalink | Reply

    In my home town in Australia we have a manual weather station and an automatic weather station within 300m of each other. The MWS is within metres of a tarred road and is surrounded by buildings (a newly built house is only some 20m away), while the AWS is in the middle of a grassed oval with no buildings/road within 50-60m. The long-term yearly maximum average temperature for the MWS is 26.8C and for the AWS is 25.9C, a difference of 0.9C. However, the minimums are the same at 13.2C (with some average monthly variation).
    Also the MWS has been recording since 1858 and the AWS since 1994. Thus the AWS long-term average should be higher due to this past decade+ being warmer than previous periods. I wonder if there are other examples of this which can be compared so explicitly?

    • Geoff Sherrington
      Posted Jun 9, 2009 at 8:28 PM | Permalink | Reply

      Re: Ian George (#22),

      If Anthony and Steve do not mind this cross-reference to Australia, try David Stockwell’s Niche Modeling blog on which I have partially analysed 40 years of 17 truly rural stations. It’s WIP and a bit untidy, but the essence is there -
      http://landshape.org/enm/40-years-of-some-bom-australian-rural-temperature-data/

      The work of Anthony and co-workers in quality improvement is commendable, but it is of necessity limited to effects that are obvious or visual or documented or admitted. There is a further source of potential errors: “things we did not realise were important”. I suspect this to be at work in the data I referenced above. For some reason as yet unexplained to me, the inland climate stations have warmed over the last 40 years almost an order of magnitude faster than the coastal stations. I am beginning to think that there is an instrumental explanation, but I cannot formulate it yet because of a lack of precise metadata, such as changes from thermometers to thermocouples/thermistors or whatever.

      It would help a great deal if people from all over the world with local knowledge took raw temps of pairs of stations up to 300 km apart – one on the coast, one inland but not elevated above say 300m – with 30-40 years of temp data, calculated the linear trend in degC per year, and emailed the results to me at sherro1@optusnet.com.au.

      Thank you

  15. PaulM
    Posted Jun 9, 2009 at 3:35 PM | Permalink | Reply

    But all the effort in improving the site seems to be in vain, as it looks as though GISS no longer uses this site! The obviously flawed data up to 2007 is used of course. mpaul – the “revised” GISS graph looks much the same: a 3 C jump.

    Gary, yes, that would be interesting. Why do you think they (NOAA or GISS) haven’t done it? Perhaps the result would not be to their liking?

  16. Jack
    Posted Jun 9, 2009 at 4:00 PM | Permalink | Reply

    “As to your snark, many of the commenters here have PhDs in statistics and are not in your sense “citizen scientists”. They are professional statisticians.”

    This is rich!!! By “Steve McIntyre standards”, that wasn’t a snark at all. Do you really mean to call that a snark and continue to make the claim that so many of your emails to “The Team” as you call them, are gentlemanly and polite? I was simply pointing out that the data clearly shows that in the grand scheme of things, your fix changed nothing. Sure, it was a “contribution.” You dotted an i quite succinctly.

    “Unfortunately, the “amateurs” are all too often climate scientists in the authors of “peer reviewed literature” – as this literature is all too often reviewed by “peers” who are also weakly qualified in statistics.”

    I find this amusing. You are calling PhDs in meteorology, physics, and atmospheric sciences “amateurs” with respect to analysis of climate science data and you believe professional statisticians are MORE qualified? Amusing, but not surprising, given your history.
    There is certainly a contribution to be made by statisticians, and climate scientists could benefit from their input. But to claim they are MORE qualified to analyze climate data than climate scientists is absurd. I don’t expect to change your mind about anything. I just wanted to point out the ridiculousness of your calling my comment a snark. Your site is certainly entertainment.

    • Posted Jun 9, 2009 at 4:16 PM | Permalink | Reply

      Re: Jack (#26),

      What is your explanation for the insanity of CPS and correlation sorting in M08? Was it statisticians or climatoknowledgests that reviewed that paper?

    • TAG
      Posted Jun 9, 2009 at 4:18 PM | Permalink | Reply

      Re: Jack (#26),

      I was simply pointing out that the data clearly shows that in the grand scheme of things, your fix changed nothing.

      I understood that the “fix” made a significant difference in the US

      There is certainly a contribution to be made by statisticians, and climate scientists could benefit from their input. But to claim they are MORE qualified to analysis climate data than climate scientists is absurd. I don’t expect to change your mind about anything. I just wanted to point out the ridiculousness of your calling my comment a snark. Your site is certainly entertainment

      The GISS data were in error for seven (7) years. No one from NASA or the broader climate science community noticed.

      When the Detroit Lakes anomaly was identified and publicized on the Internet, no one from the climate science community realized the mistake. There were even comments from some that this was evidence of a genuine natural effect.

    • compy
      Posted Jun 9, 2009 at 4:56 PM | Permalink | Reply

      Re: Jack (#26),

      Of course your comment is snark. It is ridiculous for you to claim otherwise. And yes, when it comes to many of the studies related to the hockey stick, a solid statistical basis is better than a background in climate science. M08 and Steig09 (among others) are first and foremost statistical interpretations. Even Australia’s official (pro-AGW) Garnaut Report relied on two econometricians for its climate analysis. Your relying on mockery only indicates weakness in your own argument.

      To repeat the familiar lament: why, oh why, when pro-AGW proponents comment on this site, can they not add anything of substance?? (Deep Climate – you may be the exception, but we are watching :-))

  17. Fred
    Posted Jun 9, 2009 at 4:11 PM | Permalink | Reply

    “You are calling PhDs in meteorology, physics, and atmospheric sciences “amateurs” with respect to analysis of climate science data”

    What would you call the Mann Hockey Team’s ability to use statistics ?

  18. Neil Fisher
    Posted Jun 9, 2009 at 4:12 PM | Permalink | Reply

    It will be interesting to see if the changed station siting is used as the reason for the recent cooling in the data – “oh, yes, of course the instruments show a cooling recently, we UHI-decontaminated at the source over the period 2005 – 2010.” Or something similar. Remember, you heard it here first…

  19. steven mosher
    Posted Jun 9, 2009 at 4:32 PM | Permalink | Reply

    RE 15. We did a bunch of analysis back in the day when the sample was much smaller. The biggest issue, Gary, was that we had two analysis options:

    1. Invent our own analysis approach: for example, doing a simple average without regard to area weighting or to issues like combining station histories (Peterson’s reference station method), doing some simple area-weighting approaches, or maybe throwing RegEM at the problem, which we could sorta do now. John V wrote an open-source tool that I used; just search the threads. The project went dormant. I believe you need to know C++, but John V’s code was superb in terms of organization and clarity. (Maybe it was Visual C++; I can’t recall – it’s been a while since I looked at his code, and I recently switched from Windows to Linux and Mac OS, so I’d have to go resurrect my Windows box, which I am loath to do.)

    2. Get GISSTEMP to compile and use that as the analysis tool.

    The problem with #1 is that if we showed a difference between CRN1 and CRN5 (which was shown with John V’s program), then the analysis tool or approach would be questioned.

    The problem with #2 is that nobody could get it to compile and the real Fortran programmers are all dead. Just kidding – I only have read-only abilities with that language. Plus there were some funky compiler/OS issues that nobody had the patience for.

    If you know Fortran and can get it running in a non-Unix environment (one guy associated with Gavin got it up on Mac OS), then everybody here would give you your programming props! It’s about 10K LOC. If you want to give it a try, there are a couple of us here who “know” the code; we just could never get it to compile.

    • Gary Strand
      Posted Jun 9, 2009 at 7:16 PM | Permalink | Reply

      Re: steven mosher (#31),

      If you know fortran and can get it running in a non unix environment ( one guy associated with Gavin got it up on Mac OS) then everybody here would give you your programming props! It’s about 10K LOC, If you want to give it a try
      there are a couple of us here who “know” the code, just could never get it to compile.

      I’ve got the code and am working through it.

      • rephelan
        Posted Jun 12, 2009 at 12:29 AM | Permalink | Reply

        Re: Gary Strand (#54),

        Gary, how do you intend to do that? I’ve found that MS-DOS 6.x can emulate a unix environment for some purposes, and I’ve tried to maintain a library of “ancient technology” and documentation, creating virtual machines to run the stuff… A very long time ago, I even taught FORTRAN. Can I help?

        • Gary Strand
          Posted Jun 12, 2009 at 1:48 AM | Permalink

          Re: rephelan (#95), no need to help – I’m on a Mac OS X box, with free Fortran compilers a-plenty. I’ve not done much as my real job takes up all my time. I may even modernize the code.

        • Steve McIntyre
          Posted Jun 12, 2009 at 6:17 AM | Permalink

          Re: Gary Strand (#96),

          FWIW, I discourage people from spending more time trying to get GISTEMP to run on antique machines. 99% of the processes in GISTEMP are Commodore 64-type operations: reading in one data set, doing a little chunk of computation, and writing out another data set.

          The underlying operations are very trivial in programming terms. It’s just averaging and smoothing. There are quirky aspects to some of the methods, but from my point of view, the main interest was to be able to express the methods in modern terminology and primarily to examine the various adjustment procedures.

          I’ve worked through a few steps in R in 2007 and we were into Step 3. Once you figure out what’s going on, large chunks can be reduced to a relatively few lines of code and the exercise renders things much more transparent. I got bottlenecked at Step 3 and stopped there.

          I dipped my toes recently into Steps 4 and 5 and may be able to close the circle.

          The purpose of getting it to work was to get intermediates so that one could benchmark modern code to see that you’d figured out what they were doing (and then analyse it.) GISS now provides copious archiving of a wide variety of intermediates and there’s simply no need to produce new intermediates. I’d urge people to work with their intermediates.

          I would urge people not to waste any more time trying to replicate the quirky operating environment. If they’re interested, the job should be to reduce Steps 3 and 4 to a modern language like R.

          Something immediately useful that I haven’t done yet: check out the provenance of Hansen’s SST data, which splices a CRU version up to 1980 with an OI version after 1980. Bob Tisdale has some info on this.

      • Bob Koss
        Posted Jun 12, 2009 at 8:23 AM | Permalink | Reply

        Re: Gary Strand (#54), last August GISS accepted a Ravenbrook Ltd. offer to update the GISTEMP code to all Python. If you are intent on doing a recoding, you might be interested in what they have accomplished so far. They have recoded a couple of steps and put up links to their documentation and some of their coding. Project link.
        Since it is evidently a freebie, they seem to have put the project on the back burner for a while.

    • MrPete
      Posted Jun 9, 2009 at 9:05 PM | Permalink | Reply

      Re: steven mosher (#31),

      I’d have to go resurrect my windows box

      Mosh, you can triple-boot, or run a virtual machine. Go for it! Be the uber-tech :)

  20. Jack
    Posted Jun 9, 2009 at 4:33 PM | Permalink | Reply

    TAG:
    “I understood that the “fix” made a significant difference in the US”

    You understood incorrectly. Because the most trumpeted conclusion I hear is the “adjustment” of hottest year(s), here’s a reality check.
    Before the fix, the anomalies were:
    1934=1.23, 1998=1.24
    1934=1.25, 1998=1.23
    Those differences are statistically insignificant. In fact, GISS had stated (BEFORE the “fix”) that uncertainties in the anomalies when comparing years 50+ years apart were at least 0.1 degrees, and that you could not claim a record year with confidence until the difference exceeded 0.1 degrees. The adjustments from the Y2K fix were well below that threshold.

    The importance of this “fix” has been grossly exaggerated by a smallish group.

    • Steve McIntyre
      Posted Jun 9, 2009 at 4:45 PM | Permalink | Reply

      Re: Jack (#32),

      The importance of this “fix” has been grossly exaggerated by a smallish group.

      Jack, please read my actual comments on Hansen’s error at the link provided above: http://www.climateaudit.org/?p=1885 and tell me whether there is anything in MY comments that you disagree with. I think that I always gave an evenhanded assessment, in that post and elsewhere.

  21. Jack
    Posted Jun 9, 2009 at 4:38 PM | Permalink | Reply

    Before the fix, the anomalies were:
    1934=1.23, 1998=1.24
    AND AFTER THE FIX THE ANOMALIES WERE:
    1934=1.25, 1998=1.23

    No proofreaders to help me out. Apologies.
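Taking the corrected values above, the arithmetic is easy to check against the 0.1 degree threshold Jack cites (a minimal sketch; the anomaly values and the threshold are as quoted in these comments, not independently verified):

```python
# Check the quoted 1934 vs. 1998 anomalies against the stated 0.1 C
# significance threshold (values as quoted in the comments above).
before = {1934: 1.23, 1998: 1.24}   # anomalies before the Y2K fix
after  = {1934: 1.25, 1998: 1.23}   # anomalies after the fix
threshold = 0.1

gap_before = abs(before[1934] - before[1998])  # 0.01 C
gap_after  = abs(after[1934] - after[1998])    # 0.02 C

# Under the stated criterion, both orderings are statistical ties:
tie_before = gap_before < threshold
tie_after = gap_after < threshold
```

On these numbers, the fix flipped which year leads while both gaps stay well inside the quoted uncertainty, which is the crux of the dispute in this thread.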

    • RomanM
      Posted Jun 9, 2009 at 5:22 PM | Permalink | Reply

      Re: Jack (#33),

      Doesn’t it bother you that the statistically “omniscient” Hansen crowd screwed it up at all? There was more effect than just getting the “warmest” year wrong – which, I may point out, is stated as a statistical tie, a nicety which was not used when it was slightly different the other way.

      Your earlier comment on the innate statistical abilities of climate scientists is a hoot! Tell me, do you have personal knowledge of that, or are you merely saying what you want to believe? (Serious question! The answer might surprise you!) Any professor at a university will tell you there is a wide range of capability in students and, believe it or not, some of them do not get it as well as others.

      Do you really believe that students in these other areas learn as much statistics (usually taught within the department) as the students who become practicing statisticians with sufficient knowledge to produce new correctly based statistical procedures? Your statement

      I find this amusing. You are calling PhDs in meteorology, physics, and atmospheric sciences “amateurs” with respect to analysis of climate science data and you believe professional statisticians are MORE qualified? Amusing, but not surprising, given your history.

      indicates that you are grossly unaware of the role of the statistician in virtually every scientific area. In the universities that I have been at, researchers from the areas that you mention and others were aware enough to know that having a statistician on board was useful enough to do on a regular basis. In most of these cases, they weren’t even inventing new procedures!

      In regard to this, I get the sense that you really don’t know jack, Jack.

  22. bmcburney
    Posted Jun 9, 2009 at 4:59 PM | Permalink | Reply

    Jack,

    I think you are closer to the nub of the problem than you may realize. If “climate data” could be obtained, or analysis of the data performed, without reference to statistical techniques, climate specialists could perform their work in peace. For better or worse, however, this is not the case. Climate science requires statistical analysis to reach scientifically defensible conclusions. Among other things, it is not possible to test climate theories experimentally. So, without the statistics, it is not science but a type of voodoo. It ought to concern you that so many statisticians see so many problems with the way statistical techniques are used in this field.

  23. Steve McIntyre
    Posted Jun 9, 2009 at 5:09 PM | Permalink | Reply

    To refresh the context, the Detroit Lakes series came into dispute because Josh Halpern argued vociferously that it was idiotic for Anthony Watts and others to worry about station discontinuities because GISS software fixed any problems.

    I was unconvinced that GISS software could do so and spot checked some stations.

    The errors in individual stations were sometimes as much as 1 deg C and GISS software was unequal to the task of identifying these step changes. I concluded that claims made by Halpern and others did not hold up.

    In the process, we became aware of the extensive adjustment of temperature data by GISS and others – adjustments that in some cases were a substantial proportion of the temperature increase being measured.

  24. Jack
    Posted Jun 9, 2009 at 5:35 PM | Permalink | Reply

    Steve M:
    “Jack, please read my actual comments on Hansen’s error at the linke provided above: http://www.climateaudit.org/?p=1885 and tell me whether there is anything in MY comments that you disagree with. I think that I always gave an evenhanded assessment, in that post and elsewhere.”

    Steve, I admit that your cited post is (mostly) more evenhanded than most of your commenters believe the situation to be. The “smallish group” I describe, however, is vocal at your blog and at blogs like yours. The zeal with which this group blows things like the Y2K fix out of proportion and with which they gleefully attack not only the GISS climate scientists but ALL climate scientists does nothing to promote useful communication, as you must surely be aware. I cannot hold you responsible for what your commenters believe. But I do believe that you have promoted and fostered an air of hostility toward all of climate science, based on your interactions with a handful of climate scientists.

    My take on it is that your complaints about the handling of the Y2K issue were primarily that:
    1) the adjustments can affect singular station time histories in a significant fashion.
    2) GISS did not appropriately notify users of the adjustment

    Do you really believe that the number of people using long time series for single stations is large at all? Honestly? Do you honestly think that a general press release was merited to notify the world that some single-station time series in the US might be impacted by an error, although the changes to the larger regional and global averages were insignificant? I thought that the way GISS handled the correction was appropriate. I realize we disagree on that. However, I interpret your reaction/outrage to be more about finding a way to get a dig in at GISS scientists than any real, honest despair over the data handling.

    Your thought experiment about what the reaction would be if Spencer and Christy had made a comparable error was odd. I honestly don’t think it would have merited much comment from the general climate community at all.

    The default assumption by you and your commenters is that climate scientists are in the midst of a conspiracy to mislead. It is impossible to break through that barrier and begin communication.

    Steve: Please do not use the word “conspiracy”. It is not a word that I use nor a view that I hold. I would snip such language in a “supporter” and I request that you adhere to this request.

    • Steve McIntyre
      Posted Jun 9, 2009 at 5:45 PM | Permalink | Reply

      Re: Jack (#39),

      Your take is not all that correct. The main point for me was that the claims that GISS adjustments could fix bad data were refuted by their failure to pick up the Y2K error. For example, here was one of my first comments:

      However Detroit Lakes seems like rather a poor choice as a type case demonstrating the triumph of GISS adjustments, as it contains a relatively obvious error that appears to be little more than a programming error.

      At the time, people asserted in all seriousness that GISS adjustments could be relied on to “fix” inhomogeneities in the station data. I was dubious of this proposition and consider it disproved.

  25. Jack
    Posted Jun 9, 2009 at 5:39 PM | Permalink | Reply

    Roman:
    “Doesn’t it bother you that the statistically “omniscient” Hansen crowd screwed it up at all. There was more effect than just the getting the “warmest” year wrong – which I may point out is stated as a statistical tie, a nicety which was not used when it was slightly different the other way.”

    That is not true. Before the Y2k eruption, GISS stated that they could not claim a year was statistically “hotter” than 1934 unless it was more than 0.1 degrees C. There are a lot of falsehoods floating around that, no matter how often you repeat them, remain falsehoods.

  26. Jack
    Posted Jun 9, 2009 at 5:43 PM | Permalink | Reply

    Roman:
    “In the universities that I have been at, researchers from the areas that you mention and others were aware enough to know that having a statistician on board was useful enough to do on a regular basis. In most of these cases, they weren’t even inventing new procedures! In regard to this, I get the sense that you really don’t know jack, Jack.”

    I find it quite humorous that you completely ignored my statement in comment number 26 that “There is certainly a contribution to be made by statisticians, and climate scientists could benefit from their input.” and persisted in making that above complaint.

  27. Jack
    Posted Jun 9, 2009 at 5:48 PM | Permalink | Reply

    Steve:
    “Your take is not all that correct. The main point for me was that the claims that GISS adjustments could fix bad data were refuted by their failure to pick up the Y2K error.”

    OK. Understood.

    • Steve McIntyre
      Posted Jun 9, 2009 at 5:58 PM | Permalink | Reply

      Re: Jack (#43),

      I admit that I was annoyed by some aspects of how GISS handled the matter. I sent them an email specifying exactly what the problem was and provided my own quantification of the U.S. impact here. Hansen claimed that I had failed to quantify the impact, refusing to mention my name and suggesting that my “light upstairs” was off. Gavin Schmidt, then a de facto NASA spokesman, used realclimate.org to disseminate NASA’s viewpoint on the matter, misrepresenting what I’d done in the process. In comparison, I think that I was pretty evenhanded in my commentary.

  28. Chris Byrne
    Posted Jun 9, 2009 at 5:51 PM | Permalink | Reply

    snip – venting and editorializing

  29. Jack
    Posted Jun 9, 2009 at 5:59 PM | Permalink | Reply

    compy:
    “And yes, when it comes to many of the studies related to the hockey stick, a solid statistical basis is better that a background in climate science.”

    I think this is a very dangerous stance. Statistics without an understanding of the physics underlying the climate can lead to very misinformed results and dangerous misinterpretations of the data. In my work in atmospheric science, my coworkers and I have lamented the misuse of statistics many many times. I never claimed that statistics didn’t have a role to play in the overarching study of the climate and have always been a proponent of bringing a better understanding of statistics into climate science. My original comment was surprise at the suggestion that an understanding of statistics TRUMPED an understanding of atmospheric physics/radiation/chemistry. I think that is a very misinformed and narrow viewpoint.

    • MrPete
      Posted Jun 9, 2009 at 6:18 PM | Permalink | Reply

      Re: Jack (#46),
      This is an interesting aspect.

      In some aspects, physics/radiation/chem experts are involved.
      In some aspects, statisticians are involved.
      In some aspects, biologists are involved.
      etc.

      My observation: unfortunately, there are too many cases where people with expertise in one or more areas neglect to draw on other crucial areas of expertise, with the result that they publish (or review) material that others are easily able to falsify.

      I absolutely do NOT apply this to all of climate science. As I’ve often noted, at the very least I am related to scientists in said field and am not aware that my own relatives are getting it wrong.

      I suspect we’re dealing with a sea change in science that’s going through class III (IV?) rapids… how to become more multidisciplinary, and how to avoid becoming political… while rapidly advancing the science.

    • Steve McIntyre
      Posted Jun 9, 2009 at 6:23 PM | Permalink | Reply

      Re: Jack (#46),

      Uninformed work of any variety can lead to “very misinformed results”.

      I have never suggested that an

      understanding of statistics TRUMPED an understanding of atmospheric physics/radiation/chemistry

      and have never made any such suggestion.

      However, the papers that I’ve studied do not use “atmospheric physics/radiation/chemistry”.

      It’s not a matter of statistics “trumping” atmospheric physics. It’s a matter of proper statistical methodology winning out over improper and even incorrect methodology.

      I do not believe that you can give a single example where knowledge of “atmospheric physics” refutes any of the many criticisms of Mann et al 2008,… made here.

      In some cases, the “physics” is simply arm waving. Steig et al purport to give a “physical” meaning to their PCs, but the PCs are nothing more than Chladni patterns – a type of error pointed out in the 1970s by Buell, but repeated once again by Steig and coauthors.
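      To see what a Chladni pattern means in this context: principal components of a field whose only structure is distance-decaying spatial autocorrelation take standing-wave shapes determined by the domain geometry, not by any physical mode. A minimal sketch (my own construction, not Steig's or Buell's code; the grid size, decay length, and sample count are arbitrary assumptions):

```python
# Minimal sketch of the Chladni-pattern effect: principal components of pure
# spatially autocorrelated noise form standing-wave patterns set by the
# domain geometry alone. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Points on a small square grid; correlation decays with distance (the decay
# length of 3.0 grid units and the 500-sample count are arbitrary choices).
n = 10
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
pts = np.column_stack([xs.ravel(), ys.ravel()])              # (100, 2)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = np.exp(-dist / 3.0)                                    # autocorrelation only, no signal

# Draw many "years" of correlated noise and compute EOFs/PCs via SVD.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n * n))
fields = (L @ rng.standard_normal((n * n, 500))).T           # (500 samples, 100 gridpoints)
fields -= fields.mean(axis=0)
_, s, vt = np.linalg.svd(fields, full_matrices=False)

eof1 = vt[0].reshape(n, n)   # leading EOF: roughly uniform sign ("monopole")
eof2 = vt[1].reshape(n, n)   # second EOF: roughly a dipole across the grid
# Nothing physical produced these shapes; there is no signal to interpret.
```

      On a square domain the leading EOF is roughly uniform in sign and the second is roughly a dipole – exactly the kind of "patterns" that invite physical over-interpretation.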

    • RomanM
      Posted Jun 9, 2009 at 6:37 PM | Permalink | Reply

      Re: Jack (#46),

      My original comment was surprise at the suggestion that an understanding of statistics TRUMPED an understanding of atmospheric physics/radiation/chemistry.

      I don’t know where you get that suggestion from. We aren’t talking about statisticians doing research in a vacuum. They don’t come up with a concept and then run it through from start to finish by themselves.

      What you need to understand is that an experienced statistician knows what features to look for in a study to decide what an appropriate analysis might be along with the pitfalls that may lie in that direction. A competent statistician can find the questions to ask to determine the structure of the data and decide whether the structure can be analyzed that way.

      In many cases the statistics are completely absent, replaced by “you can see that …”. When a statistical approach is used, the climate scientist does not always have the same theoretical background, so that when they start altering an already complicated procedure they can quickly exceed their statistical capability. It is my experience that a statistician can learn the necessary science for a particular situation faster than the scientist can learn what the best route for the analysis might be.

      The lack of statistical names on papers with a lot of authors indicates that many climate scientists don’t seem to share my viewpoint.

    • MrPete
      Posted Jun 9, 2009 at 8:04 PM | Permalink | Reply

      Re: Jack (#46),
      By the way, you can’t imagine how strongly most people here would agree with your statement:

      Statistics without an understanding of the [hard science] underlying the climate can lead to very misinformed results and dangerous misinterpretations of the data.

      Too much of the published work that is seriously questioned here has been created by scientists who assemble data from arenas where they have little understanding, then processed using statistical computer software that they do not understand and have not validated. And then they set up barriers against those who want to poke at their process.

      The community at this site embodies a growing number of senior scientists, statisticians, computer experts and more who shake their heads in disbelief when we see such shenanigans.

      Yes, mistakes are made. Yes, there’s a peanut gallery that gets out of hand on occasion. But this is a learning community that watches for its own mistakes, corrects them in public, and encourages open-ended interaction. We absolutely do not have a predetermined goal in mind.

      Most important of all, the denizens of this site represent every political and religious view one could imagine, because both topics are absolutely verboten. We’re not about a political or even policy end.

      We’re about science.

    • compy
      Posted Jun 9, 2009 at 8:51 PM | Permalink | Reply

      Re: Jack (#46),

      Statistics without an understanding of the physics underlying the climate can lead to very misinformed results and dangerous misinterpretations of the data.

      Hmm, do you mean like using Finnish lake sediments as a climate proxy with the reverse sign to the physical meaning? And then claiming that any criticism of this upside down treatment is bizarre?

      • compguy77
        Posted Jun 9, 2009 at 9:28 PM | Permalink | Reply

        Re: compy (#59),

        Or perhaps bristlecones that do not reflect the local temp, but are teleconnected to global temps?

  30. Andrew
    Posted Jun 9, 2009 at 6:24 PM | Permalink | Reply

    Am I to understand that my attempt to put an olive branch out to Jack in the name of rational discussion instead of just food fighting about “conspiracies” was over the line? If so, I apologize.

  31. Jack
    Posted Jun 9, 2009 at 6:37 PM | Permalink | Reply

    Mr. Pete, I agree completely.
    When even the coupling of ocean and atmosphere is considered “interdisciplinary”, it is obvious that the various components of complete climate science have evolved in isolation. Climate science remains a very young field, relative to most other disciplines, and I don’t think it has yet settled into its final form.

    Steve, granted, I am not familiar with paleoclimatology, and as a simple data record, perhaps indeed atmospheric physics has less of a role to play in that than in, say, understanding the trend of ice melt in the Arctic. I was responding to compy and worded my response poorly.

  32. Steve McIntyre
    Posted Jun 9, 2009 at 7:10 PM | Permalink | Reply

    It’s interesting to also revisit the chronology.

    July 26 Eli Rabett, an anonymous GISS apologist, purported to defend GISS adjustments – an argument that had been going on for a while.

    July 26 WUWT Anthony posted on Detroit Lakes closing with the snark:

    But hey, they can “fix” the problem with math and adjustments to the temperature record.

    July 30 Tamino, another anonymous GISS apologist, defended GISS, taking aim at Anthony’s type cases of Orland and Marysville.

    July 31 CA. I discussed adjustments in Arizona here and Orland-Marysville here, noting puzzles in the pattern associated with Y2K, but just as oddities.

    Aug 1 Eli Rabett wrote on Detroit Lakes, taking aim at Anthony Watts.

    Eli will let Tony Watts have the last word, because at least he got it right : “But hey, they can “fix” the problem with math and adjustments to the temperature record.”

    Aug 3 CA. I specifically discussed Detroit Lakes here identifying a pronounced Y2K inhomogeneity, criticizing Eli Rabett’s assertion that GISS software could fix things like this. I stated:

    Detroit Lakes seems like rather a poor choice as a type case demonstrating the triumph of GISS adjustments, as it contains a relatively obvious error that appears to be little more than a programming error.

    On Aug 6, I quantified the error in the US with a couple of interesting graphics.
    http://www.climateaudit.org/wp-content/uploads/2007/08/hansen40.gif
    http://www.climateaudit.org/wp-content/uploads/2007/08/hansen41.gif

    I sent an email to Hansen notifying him in polite terms of the error.

    To my surprise, a couple of days later, without any notice or acknowledgement, the entire US data base had been deleted and a new data base installed. (The wholesale changes resulted from the unfortunate GISS practice of using current data to adjust past data.)

    I noted the changes on Aug 8 here.

    On Aug 10, Hansen stated:

    My apologies if the quick response that I sent to Andy Revkin and several other journalists, including the suggestion that it was a tempest inside somebody’s teapot dome, and that perhaps a light was not on upstairs, was immoderate. It was not ad hominem, though.

    On Aug 10, Gavin Schmidt used realclimate here to disseminate NASA positions. Instead of acknowledging that I had identified the problem as originating from a switch of data sources, Schmidt claimed that NASA had identified this themselves:

    On Monday, the people who work on the temperature analysis (not me), looked into it and found that this coincided with the switch between two sources of US temperature data.

    Later repeating the untrue statement:

    Steve M pointed out where the error came from in his blog posts and his email notifying GISS of the problem. The GISS people simply confirmed that he was correct.

    [Response: Not so. He saw the jump but did not speculate as to the cause. - gavin]

    On Aug 11, Hansen issued his “Lights On Upstairs” letter, reported at CA here

    On Aug 13, I responded with a detailed assessment of the matter here, reviewing some of the prior points and adding some others (crossposted at Anthony’s because of service outages at CA that crippled the site; we moved servers as a result).

    The chronology shows quite clearly that the incident arose out of my being dubious about claims that NASA software could adjust bad data, but it’s interesting to see the direct role that Detroit Lakes played in the matter and how quickly things moved from Anthony’s post on July 26.
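    The nature of the splice error is easy to illustrate with a toy series (hypothetical numbers, not the actual GISS data): switching data sources without aligning them over their period of overlap inserts a spurious step into the record, and the overlap itself supplies the correction:

```python
# Toy illustration of a data-splice step error: source B runs a constant
# 0.5 degrees above source A, and a naive splice at 2000 inserts that
# offset into the record as a step. Hypothetical numbers only.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2010)

# An underlying record: mild trend plus noise.
true = 10.0 + 0.01 * (years - 1980) + rng.normal(0.0, 0.05, years.size)

src_a = true.copy()        # source used before 2000
src_b = true + 0.5         # source used from 2000 on, offset baseline
spliced = np.where(years < 2000, src_a, src_b)   # spurious +0.5 step at 2000

# Correction: estimate the offset over a period where both sources overlap
# (1995-1999 here, an assumption for the sketch) and align source B to A.
overlap = (years >= 1995) & (years < 2000)
offset = np.mean(src_b[overlap] - src_a[overlap])
fixed = np.where(years < 2000, src_a, src_b - offset)
```

    A constant offset between sources is invisible within either source alone; it only appears as a step at the splice year, which is why nearby stations all showed nearly the same odd jump.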

  33. Dishman
    Posted Jun 9, 2009 at 7:13 PM | Permalink | Reply

    “However, I interpret your reaction/outrage to be more about finding a way to get a dig in at GISS scientists than any real honest despair over the data handling.”

    There’s more than just statistics involved, to be sure.

    I’m a software engineer. I have despaired over the code that handles the data.

    Hansen hasn’t even bothered to follow NASA procedures in documenting how critical GISTEMP is, or what its failure impact would be. That’s the root of any SQA process.

    I can only look at that and despair.

    Sorry.

  34. PhilH
    Posted Jun 9, 2009 at 7:21 PM | Permalink | Reply

    Jack:
    I have no idea how often or how long you have been tuning in here or whether you are aware of the long history, from the very beginning, of contemptuous disdain for McIntyre which has emanated from that portion of the climate science community politically wedded to the concept of alarmist AGW. I have been reading this site daily for four or five years and it has been an eye-opener for this non-scientist lawyer to see the treatment routinely visited upon him by these people; people who are supposedly objective scientific practitioners. There is a vast list of this kind of stuff: ad hominem attacks, insults, lies, misquotes, misrepresentations, refusal to make data available, etc. You name it. One has to understand this history in order to understand why and where Steve occasionally is coming from. If you are not familiar with this history, you are bound to see it inappropriately.

    However, I find it hard to believe that you are not fully aware of the fact that if you had made the kind of adversarial posts you are making here on Real Climate, you would have been instantly beheaded. There is simply no comparison between the level of civility and of substantive comment here on Climate Audit and that on Real Climate.

  35. Posted Jun 9, 2009 at 8:12 PM | Permalink | Reply

    Looking at Steve’s chronology, and the tempest over a single temperature station, and finally the photos of the present showing improvements to the siting, I’m reminded of the early and oft repeated digs at the surfacestations.org project:

    “photographs don’t matter”

    I’m glad I didn’t listen to that. Plenty of people said I’d quit too, that I wasn’t serious and neither were the volunteers. I’m happy to report that the project is now fast approaching an 80% surveyed mark of the 1221 total stations. I’m really gratified by all of the help and support I’ve gotten. Thank you, you know who you are.

    I’ve published a preliminary report at 70%, showing results of the census then. It has been well received. Having reached a numerically large and spatially well distributed sample, now the project is starting to enter the analysis phase to look at temperature trends between different station classes. It will take a few months to get a publication ready. Stay tuned.

  36. thefordprefect
    Posted Jun 9, 2009 at 10:25 PM | Permalink | Reply

    A couple of plots from WUWT.
    The first shows a major attempt by GISS to correct Marysville (to less warming) – note that there are no Y2K problems evident, and also that when the area under the MMTS was paved in the 1980s there is no sudden shift in temperature.
    The second shows the subject of this entry. Again, note that the adjustment reduces warming. Something happened in 1991 to 1995 and subsequent data looks more variable, but again no great shift in temperatures at any point.
    Both these stations are classed as having greater than 5degC error.
    ….
    From WUWT Bill: Re Marysville
    [Marysville] population by year, with change rate:
    2000  12,268   N/A
    2001  12,454   +1.52%
    2002  12,558   +0.84%
    2003  12,599   +0.33%
    2004  12,491   -0.86%
    2005  12,131   -2.88%
    Not the most expansive of towns. So where does your UHI originate in a growth function?
    I still suggest that there should be a step change as the car park was paved – it is not visible
    ….

    It should also be noted that neither of the two records includes the current date (suggesting that the data is no longer used?).
    The GISS adjustment values are shown in green.

    • Steve McIntyre
      Posted Jun 9, 2009 at 11:01 PM | Permalink | Reply

      Re: thefordprefect (#62),

      The GISS adjustments that you show here are just their two legged adjustment.

      I really do not want to spend any bandwidth on GISS US adjustments of which there has been too much discussion. The issues that people should be addressing are:

      1. NOAA and CRU non-adjustment in the US
      2. ineffective (random) GISS adjustment in the ROW
      3. NOAA and CRU non-adjustment in the ROW
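      A “two-legged” adjustment, in the sense used above, fits two straight trend legs joined at a knee year. A minimal sketch of such a broken-line fit (my own illustration with made-up numbers, not the actual GISS code or parameters):

```python
# A broken-line ("two-legged") trend fit: two straight legs joined at a knee
# year. Illustration only; numbers are invented, not GISS's.
import numpy as np

def two_legged_fit(years, temps, knee):
    """Least-squares fit of two line segments joined at the knee year."""
    t = years - knee
    # Columns: level at the knee, slope of leg 1, extra slope after the knee.
    X = np.column_stack([np.ones(t.size), t, np.where(t > 0, t, 0.0)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return coef, X @ coef

# Synthetic station record: flat to 1980, then warming at 0.05/yr.
years = np.arange(1950, 2001)
temps = np.where(years < 1980, 10.0, 10.0 + 0.05 * (years - 1980))
coef, fitted = two_legged_fit(years, temps, 1980)
# coef[1] is the pre-knee slope; coef[1] + coef[2] is the post-knee slope.
```

      Since the hinge term is linear in its coefficient, the whole model stays an ordinary least-squares problem once the knee year is fixed.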

    • Posted Jun 9, 2009 at 11:20 PM | Permalink | Reply

      Re: thefordprefect (#62), Well I don’t know about you, but I see a bit of a step change about 1985, though it is hard to tell since the graphs’ numbers are so small I can barely make them out.

      The observer told me that the Screen and MMTS both had been moved around the property during their history, so there are more uncertainties.

      The point remains: if the station were properly sited, we wouldn’t be spending time trying to deconvolve data from it. Having to untangle the data from moves, steps, UHI, and the like is the whole problem.

      • thefordprefect
        Posted Jun 10, 2009 at 3:07 AM | Permalink | Reply

        Re: Anthony Watts (#68), The blog reduces all displayed sizes, but to see any graphics in more detail, try increasing the zoom level (ctrl+mouse wheel); the lack of detail may be the LCD monitor. Alternatively, right-click the graphic, choose [properties], and then copy the URL to paste into a new window.

        the intro states:

        From NASA’s GISS, the plot makes it pretty easy to see there was no discernible multi-decadal temperature trend until the A/C units were installed. And it’s not hard to figure out when that was.

        Looking at my plot, the averaged data shows a clear trend of about 1.5degC until 1990.

        Re: Anthony Watts (#69), I assume that Detroit Lakes is not a coop site, or has been deleted, as it does not appear here? http://www7.ncdc.noaa.gov/IPS/coop/coop.html

        Re: Steve McIntyre (#64): I do not understand your term “two legged”! The plots are for “as raw as available” and GISS “homogenised”, which I believe has all adjustments made – the final GISS output.

        Re: steven mosher (#31), this may be of interest Clear Climate Code Project.
        http://clearclimatecode.org/

  37. rephelan
    Posted Jun 9, 2009 at 10:42 PM | Permalink | Reply

    OK, for anyone who missed it: Steve M. owes Anthony Watts BIG-TIME… and snipped his comment at #7 for violating blog policy… does RC, Tamino, or William Connolley have that sort of integrity? And you’ll note that Anthony had the good grace to apologize and not get huffy.

    • Posted Jun 9, 2009 at 11:14 PM | Permalink | Reply

      Re: rephelan (#63),

      It’s more the other way around, I owe Steve a debt of gratitude, because he and this blog provided a basis for my own work. Had CA not existed, and had Steve not been so helpful, I doubt I ever would have taken the project forward with such a broad scope.

      I have had a broad base of support, from hundreds of people, but the catalyst was Steve and CA – and one little-known person in Nevada City named Russ Steele, a sometime CA commenter, who took a casual comment from somebody we both know at a meeting, called me, and from that turned me on to CA – Anthony

      • rephelan
        Posted Jun 10, 2009 at 12:00 AM | Permalink | Reply

        Re: Anthony Watts (#67),

        I seem to recall a very long journey involving a dead motherboard… in any case, a commitment to integrity has marked you both.

  38. steven mosher
    Posted Jun 9, 2009 at 11:01 PM | Permalink | Reply

    Re 54. Gary, if you have any questions, don’t hesitate to ask. It’s been a while since I slogged through it… some other guys (E. Smith over at WUWT) probably have a better grasp of it now than I do, but I’ll lend a hand where I can in deciphering what I can.

    Getting the code running is just the start. The analysis approach requires a lot of thought. If somebody just blindly throws the data (CRN rating) at the meatgrinder, you’ll get out results as interesting as that approach. CRN ratings evolve over time (some believe), and all we have is a snapshot of that rating taken at the end of the time series.

    perhaps roman can opine on the multivariate aspects of this problem..

  39. steven mosher
    Posted Jun 9, 2009 at 11:12 PM | Permalink | Reply

    Re 57. I will say this. For a long time, those of us who had Anthony’s back had to fight against this stupid argument that photos didn’t matter. AS IF the images of sea ice are not photos, as if Hansen’s nightlights were not photos.
    Luckily the climate scientists at NOAA didn’t take Eli’s arguments seriously. They saw the problems with the sites and rolled out a program to fix them.

    For the record, I believe that microsite effects exist (see Oke) and that the US temp record is impacted by these effects, BUT that the effect is likely to be small. It won’t explain all the warming. If this century has seen 0.6C of warming, I’d bound the UHI portion at NGT 0.3C and the microsite component at NGT 0.15C. So if you want to go hunting for an effect, you’re looking for a pretty small effect in a noisy time series.

    Steve: no more even of the least of the Carlin 7 please.

  40. Posted Jun 9, 2009 at 11:41 PM | Permalink | Reply

    I realized just now, since Marysville was one of the first, I hadn’t looked at the metadata for it in almost two years. Time for a look again given the discussion of this thread. I’ve been busy chasing hundreds of other stations but haven’t looked back to that one where I had the light bulb go on.

    I found some interesting things just now in the NCDC Metadata (MMS) for Marysville at http://mi3.ncdc.noaa.gov/mi3qry/login.cfm

    First in The UPDATES Tab

    [2009-02-04] 9999-12-31 2009-02-04 MSLAGLE AD HOC NONE — CLEAN UP OF COOP-A STN TYPE ISSUE MSLAGLE 2009-02-04

    Then this in the REMARKS Tab
    [2008-07-01] 9999-12-31 GENERAL REMARK REASON: UPDATE PUBLICATION; DATA NO LONGER PUBLISHED. DATA ONLY GOES TO WFO/STO. DATA THAT IS RECIEVED FROM THIS STATION WILL BE USED AS BACK UP TO TO MARYSVILLE AIRPORT DATA (04-5388) WHICH IS NOW BEING PUBLISHED. — INGEST_USER 2009-02-10

    It seems to me like they gave up on it. There is not much that can be fixed there in terms of siting like they did at Detroit Lakes. Marysville Fire Station property is 98% Asphalt/concrete/buildings, with a small patch of grass in the front by the street/sidewalk.

    I decided to check the B91 Forms, and sure enough, NOAA bailed on Marysville in October of 2007, just a few short months after I first brought it to national attention.

    See the screencap of the NCDC B91 database showing the span of record:

    So I guess NOAA saw enough problems at Marysville to merit a deferral of using the data in the climate record. So now they use the airport.

  41. geronimo
    Posted Jun 10, 2009 at 12:40 AM | Permalink | Reply

    Jack, I’m not a scientist, I’m an engineer, so I fit precisely the ad hom “citizen scientist”. What I have observed is the appalling lack of rigour in the scientists who are the self-styled “Hockey Team”. Engineering, and indeed science, has a requirement to be precise in methodology, in ensuring that the data is correct, in documentation and archiving. In all disciplines, if there is a mistake or a challenge, the data and methodology should be – and, outside of the Hockey Team, is – available for scrutiny. When changes are made to any part of the process, the reasons should be recorded and the previous data retained and archived. None of this seems to be part of the discipline of the self-styled Hockey Team and the organisations they represent. GISS changes data without notification, makes adjustments without transparency of method, scientists producing papers foretelling the end of the world refuse to share the data and methodologies used in those papers, schoolboy errors are found in peer-reviewed papers by people outside the discipline, principal components are used without explanation in ways that inflate temperatures, and so on.

    Now I have postulated before that this may be due to – and you will, I hope, excuse me – climate science being a bit of a backwater, where peer-reviewed papers telling us what was happening in climate science appeared in magazines and were ignored by the public at large. Then one day a group of climate scientists popped up and said, “Unless you all change your lifestyle completely you’re going to bring the world to oblivion.” This instantly changed the papers from, if you like, their previous status as civil litigation to criminal litigation, where the bar on the burden of proof to get a conviction rose dramatically. Others, including Steve Mc, asked to see how this forecast of impending doom had been calculated.

    The rest is history: refusal to share data, lost data, data changed without notification, old data not archived, refusal to share methodologies, schoolboy howlers, secrecy, poor documentation, etc. And, yes, citizen scientists showing the way on UHI.
    It looks bad to those in disciplines that have rigorous processes for archiving and recording everything they do, and what is worse, most of what they are doing is statistical analysis, a discipline in which they have no formal qualifications. They could be seen to be citizen statisticians, but I would never say such a thing.

  42. Jack
    Posted Jun 10, 2009 at 5:05 AM | Permalink | Reply

    geronimo:
    The problem is that you are trying to compare fields that are science application (engineering) to one that is research (climate science). In a purely research mode, it is impossible to retain the level of documentation and archival that you demand. Code is continuously updated with new findings and experimentation. The very essence of research is experimentation. Even things like satellite data go through multiple levels of processing before the raw numbers become usable information, and those processes themselves are continuously under improvement. Data archival is normally done, but as instrument quirks are discovered (for example), it is subjected to continuous revisions.
    This way of science works well and is, in fact, ESSENTIAL for research to proceed.

    The first time climate scientists began to consider that global warming could result from the increase in CO2 was something like 30+ years ago, and it was a very tentative and theoretical subject. As the decades passed and the research proceeded, it became evident that the research was converging, and over the last decade it has exploded into an issue of political importance. At this point, the fields of research and scientific application collided, and the dust has not settled.

    I can’t refute the current absence of documentation and archival that you point out. However, I do refute that this reflects a lack of rigor. Your assumption that climate science is a “backwater” field indicates to me that you have not, in fact, done much reading of the history of research. If you are able to find access to something like the Journal of Geophysical Research archives, I encourage you to peruse the thousands of papers that document the rigor that has been applied to many/most (certainly not all, I concede) studies through the years. In research, the rigor most often results from multiple papers attacking the same issue. I assure you that scientists are not afraid to attack each other’s papers. My most recent paper did exactly that – I made a few enemies, but I think I stopped a group movement into a wrong perception about something in particular related to my area of study. Any idea that scientists are afraid to challenge one another is quite wrong.

    Climate science is certainly moving toward more open access and more easily available documentation. It is difficult because cuts to available funding over the last several decades mean working on tight budgets. When there are limited funds and the choice is to spend them on hiring someone to clean up an old data set or on a fresh-out PhD who has an exciting new perspective on measuring the microphysical properties of aerosols, the choice has almost always been made to hire the fresh-out.

    I can understand that someone with an engineering background would be frustrated by the differences in approach in a research field. It is one of the most difficult tasks ahead for climatologists – to make the partial switch from research to application and the accompanying standards. However, your characterization of climate scientists as backwater and sloppy is inaccurate.

    • compy
      Posted Jun 10, 2009 at 5:54 AM | Permalink | Reply

      Re: Jack (#74),

      While many of your points are valid, you go too far in defending climate science documentation. There are many other research-based studies which expect, and enforce, high levels of documentation. Check, for example, the archiving requirements of the American Economic Review, or the standards that epidemiologists must meet. This is really not the major imposition you suggest, and a failure to meet the standards of other disciplines is, well, sloppy.

    • Posted Jun 10, 2009 at 6:18 AM | Permalink | Reply

      Re: Jack (#74),

      In a purely research mode, it is impossible to retain the level of documentation and archival that you demand.

      This statement is yet another naked strawman (YANS). And yet again it is the community that does not implement good practices that claims that good practices that are SOP in all other areas of research cannot be implemented. Experimental research that does not produce an archival-grade record of its methods, processes, and procedures is not research. Consider the simple example of calibration of a thermometer that is to be used in an experiment.

      However there is an even larger aspect of the present situation that is always conveniently overlooked by those who claim that we cannot possibly follow good practices. When such ‘research’ results are to be used to set policies that affect the health and safety of the public, the rules change. Such policies have never been set based on rough results the pedigree of which cannot be determined.

      Who would board a commercial airline flight if we thought that the FAA allowed Airbus to attain certification with methods, processes, and procedures proclaimed to be rough research and not production-grade? Production-grade meaning that independent Verification and Validation tests have been successfully completed and that all aspects, from the drawing board to all finished pieces-parts products, were maintained under independently audited and approved Quality Assurance procedures. Does the climate change community think that it should somehow be exempt from the kinds of independent tests applied to, say, vendors of nuclear power reactors, drug makers, processed-food producers, and the almost endless list of other products and services that have the potential to adversely affect the health and safety of the public?

      And, by the way, these organizations are never allowed to simply state that thousands of papers have been subjected to ‘peer-review’ prior to publication in the most important approved journals. They aren’t allowed to use this clearly deficient cop-out because the regulatory agencies know that none of these same journals have Quality Assurance requirements. The situation seems most apparent whenever computer software has been used to obtain the basis of the publications: purely black-box software, having completely unknown standards behind it, can be used to generate the numbers that appear in the peer-reviewed paper. In my opinion this does not produce archival-grade papers suitable for publication.

      Whenever the health and safety of the public is at stake, “We do research” has never before been an acceptable standard. It wasn’t taught that way to me in high school lab classes.

    • Posted Jun 10, 2009 at 6:35 AM | Permalink | Reply

      Re: Jack (#74),

      This way of science works well and is, in fact, ESSENTIAL for research to proceed.

      This is just backwards. Documentation is ESSENTIAL for research to successfully progress.

      I can understand that someone with an engineering background would be frustrated by the differences in approach in a research field.

      Jack, maybe I’m the first to inform you: engineers do research. People in all kinds of hard and soft sciences do research. People outside the sciences do research.

    • MikeU
      Posted Jun 10, 2009 at 6:56 AM | Permalink | Reply

      Re: Jack (#74),

      Sorry, but your assertion that it’s “impossible to retain the level of documentation and archival that you demand” is absurd. Revision control systems have been widely available for a couple of decades now, and they’re not terribly complicated to use. They can quite easily manage a large and continuously changing set of data and code, just as they do for countless large software development projects going on around the world. On a large project like the one I’m currently working on, hundreds (and sometimes thousands) of changes are made each and every day. Yet I can reconstruct the exact state of all data and code for my project on any day you’d choose to pick over the past 3 years, given an hour or so to pull it all from the server. The amount of time that I spend dealing with the revision control system on any given day is typically a handful of minutes, and it’s well worth it.

      Software engineers used to argue back in the 80s (as certain scientists now do) that it was just “too difficult” or “too time-consuming” to use good archival processes. That was false then, and it’s false now. They (we) were simply resistant to change, preferring the processes we knew to the ones we didn’t. However, 20 years later, I’ve yet to meet a single developer who tried it and then went back to the bad old days – revision control is too useful and powerful. And on a large project, it’s simply indispensable – we could not function without it. Were climate scientists to make good use of that plus assorted other auto-archiving tools like wikis, their jobs (and the jobs of those trying to replicate their work) would become easier, not harder.
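      To make the point concrete, here is a toy sketch of the core idea behind revision control: record a snapshot of the project state at each change, so that the exact state as of any past date can be recovered. This is hypothetical illustration only – real systems like git store content-addressed objects and deltas rather than whole copies, and the `ToyRepo` class and file names below are invented for the example.

      ```python
      # Toy illustration of "reconstruct the exact state on any day":
      # keep a dated list of snapshots and look up the latest one on or
      # before the requested date. (Real revision control is far more
      # efficient; the principle is the same.)
      from datetime import date

      class ToyRepo:
          def __init__(self):
              self._history = []  # list of (commit_date, snapshot) pairs, in order

          def commit(self, when, files):
              # Store a full copy of the project state at this date.
              self._history.append((when, dict(files)))

          def state_as_of(self, when):
              # Return the most recent snapshot on or before `when`.
              past = [snap for d, snap in self._history if d <= when]
              return past[-1] if past else {}

      repo = ToyRepo()
      repo.commit(date(2009, 6, 1), {"model.f90": "v1"})
      repo.commit(date(2009, 6, 9), {"model.f90": "v2", "readme": "notes"})
      print(repo.state_as_of(date(2009, 6, 5)))  # {'model.f90': 'v1'}
      ```

      Asking for the state as of June 5 returns the June 1 snapshot exactly as committed, which is the guarantee MikeU describes having on his own project.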

    • Steve McIntyre
      Posted Jun 10, 2009 at 8:42 AM | Permalink | Reply

      Re: Jack (#74),

      Jack, I appreciate the thoughtful comment. A couple of points and ones that I’ve made on many occasions. If climate scientists were not engaging with the public, but merely debating fine points of Assyrian archaeology in seminars, then that would be one thing. As someone with experience in mining speculations, I was astonished that climate scientists issued press releases in the first place and even more astonished at the highly “promotional” nature of the press releases – much more promotional in my opinion than mining promoters would do in the same circumstances.

      I’ve consistently taken the position that climate scientists should not attorn to a lower standard of disclosure and due diligence than mining promoters.

      If you’re issuing a press release and thus engaging with the public, then you should ensure that you’ve got all your ducks in a row – which includes data and documentation. Otherwise, as a start, don’t issue a press release.

      I can’t refute the current absence of documentation and archival that you point out. However, I do refute that this reflects a lack of rigor.

      Look, we’re coming from different fields. I’m extremely familiar with the millenium reconstructions and I assure you that there is a lack of rigor. They are addicted to bristlecones, for example.

      In research, the rigor most often results from multiple papers attacking the same issue. I assure you that scientists are not afraid to attack each other’s papers.

      In general, this is undoubtedly true. But this is definitely not the case in paleoclimate. Dendro specialists, for example, are aware of specialist errors in Mann’s stuff, but won’t submit an adverse comment because they don’t want the grief. The term here is the “silence of the lambs”. You have to have a thick skin to criticize these folks and they don’t want to be bothered.

    • Craig Loehle
      Posted Jun 10, 2009 at 2:00 PM | Permalink | Reply

      Re: Jack (#74), When EPA starts the process of setting standards for some pollutant (e.g., a herbicide) it is required by law to develop defensible data, produce public and reviewed models, and respond to exhaustive criticism. If you publish a paper on protein structure, you must deposit your structure data in a database (you can’t say “mine, can’t have it”). In the journal Ecological Modelling your description of your simulation model must have enough detail (equations, parameters) that someone can tell precisely what you did and even duplicate it. Don’t tell me that archiving stuff is too hard or not the standard in science. Even for “exploratory science” you need to document your work.

      • Steve McIntyre
        Posted Jun 10, 2009 at 2:07 PM | Permalink | Reply

        Re: Craig Loehle (#86),

        One of the interesting procedural aspects of the recent EPA finding, as I pointed out before, is their adoption of IPCC documents on the basis that they meet EPA standards. They don’t.

        • Craig Loehle
          Posted Jun 10, 2009 at 3:30 PM | Permalink

          Re: Steve McIntyre (#87), I agree that EPA adopting IPCC docs is outside their normal procedures, and they may get hung up on that point.

  43. geronimo
    Posted Jun 10, 2009 at 10:15 AM | Permalink | Reply

    Jack: “However, I do refute that this reflects a lack of rigor. Your assumption that climate science is a “backwater” field indicates to me that you have not, in fact, done much reading of the history of research. If you are able to find access to something like the Journal of Geophysical Research archives, I encourage you to peruse the thousands of papers that document the rigor that has been applied to many/most (certainly not all, I concede) studies through the years.”

    When I said it was a backwater I meant from the public’s point of view; I meant no slight to those working in climate science. The IPCC put them front and centre and drew attention to their work. I am afraid that it has been found wanting, in the case of the Hockey Team at least.

    I am a bit puzzled by: “In a purely research mode, it is impossible to retain the level of documentation and archival that you demand.” I am not demanding anything; I am pointing out that archiving and retention of data and of previous states of models are common practice, indeed essential if there is ever any need to go back over the work. I am not sure why the scientific community should get a pass on this simple process.

    Thanks for the discussion; you’re like a man playing simultaneous chess today, and you’ve held up your end well.


    • Steve McIntyre
      Posted Jun 10, 2009 at 2:09 PM | Permalink | Reply

      Re: geronimo (#81),

      Please do not discuss whether climate science is a “backwater”. This is a pointlessly emotive term for present purposes. For our purposes, we will stipulate that it is not a “backwater”.

  44. bmcburney
    Posted Jun 10, 2009 at 11:39 AM | Permalink | Reply

    I think Jack has it backward. No one knows anything about climate per se that they did not learn from statistics.  Of course, it is possible to learn particular things about the atmosphere or solar radiation or weather systems without statistics (or with only limited use of statistics).  But weather is not climate.  Properly understood, “climate science” is ONLY applied statistics. 

    Am I wrong?

  45. Neill
    Posted Jun 10, 2009 at 1:54 PM | Permalink | Reply

    Steve’s comment wrt press-release science reminds me of a line from the Gene Hackman movie, ‘Hoosiers’:

    “Look, mister, there’s… two kinds of dumb, uh… guy that gets naked and runs out in the snow and barks at the moon, and, uh, guy who does the same thing in my living room. First one don’t matter, the second one you’re kinda forced to deal with.”

  46. Tolz
    Posted Jun 10, 2009 at 2:43 PM | Permalink | Reply

    Jack, thanks for your contributions. Whether or not you are “playing”, being a devil’s advocate advances this blog and, indeed, science. I think if you come to follow the goings-on here more closely, you’ll be impressed by how even those in general agreement “audit” each other’s comments, which is the essence of the blog. The tone may appear to be anti-AGW, but it really is more pro-verification of whatever might be claimed about the climate, one way or the other. There is an integrity to the process here that I believe attracts thousands of us who are neither scientists nor statisticians, even when we don’t appreciate the nuances of the analysis. You might have gotten cuffed around a bit, but you help assure that integrity. Especially with your increasing civility. Cheers.

  47. Dishman
    Posted Jun 10, 2009 at 3:19 PM | Permalink | Reply

    Jack #74 wrote:

    In a purely research mode, it is impossible to retain the level of documentation and archival that you demand.

    I believe this captures a fundamental misunderstanding of what best-practices are, and how they improve work quality.

    In a perfect world, we would proceed along a clear path from start to completion. In that perfect world, best-practices would be far less critical.

    In practice, reality has a nasty way of intruding.

    Most of the best-practices are aimed at being able to answer the question “how did I get here?” Think of it as insurance. The question will come up because you never know exactly where you’re going or how to get there. Digging one’s way out of that situation is difficult and time-consuming at best.

    In general, best-practices get adopted because they make life easier.

    In a perfect world, they wouldn’t. This isn’t a perfect world, though.

  48. Ian George
    Posted Jun 10, 2009 at 4:23 PM | Permalink | Reply

    Geoff @ 58

    I was curious to know why there would be such a big difference between two stations 300m apart – a 0.9C difference in the max temps but no discernible difference in the min temps. The UHI effect is obviously at work for the max temps, but I am unsure about the min temps.
    Warwick Hughes has some interesting comments on his site at
    http://www.warwickhughes.com/blog/?p=207

    • Geoff Sherrington
      Posted Jun 12, 2009 at 5:36 AM | Permalink | Reply

      Re: Ian George (#93),

      Ian, in my #58 I mentioned stations 300 kilometers apart (not 300m), with not more than 300m difference in height above sea level.

      The trends of Tmax and Tmin over 40 years at the rural sites I studied can be roughly parallel, they can diverge, or they can converge. I do not know the reasons yet.

      If you are referring to your own case history at post #22 (Sydney Observatory?), you’d have to give me a lot more info before I could make even an uninformed guess. Try looking up the BOM metadata sheet to see when the change occurred from daily reporting to half-hourly, which should be roughly when the thermocouple or thermistor replaced the thermometer. This was often in the mid 1990s. I’d be keen to see a BOM paper on the overlap periods at instrument changeovers.

      BTW, Warwick Hughes and I have worked, briefly at first, on the Australian thermometry problem for 20 years. He was the catalyst for my investigations.

  49. JT
    Posted Jun 14, 2009 at 10:21 AM | Permalink | Reply

    Now that people posting to this site have figured out how Steig’s principal component analysis and correlation of land temperature records with satellite records should properly be done, why not apply the same methods to the US land records and corresponding satellite records, using only data from those stations Anthony Watts has classified as Category 1 (or perhaps 1 and 2), so as to get an estimate of warming/cooling trends for the continental US similar to what was done for Antarctica?

    • Steve McIntyre
      Posted Jun 14, 2009 at 10:39 AM | Permalink | Reply

      Re: JT (#100),

      people posting to this site have figured out how Steig’s Principal Component analysis and correlation of land temperature records with satellite records properly should be done

      I don’t think that anyone, including Ryan or the Jeffs, claims to know how to “properly” do these steps. All that we’ve concluded is that the Steig method is not robust to irrelevant decisions on regpar and things like that, where those decisions have not been justified and do not appear justifiable.

      To arrive at a position where one can opine on how to do it “properly” requires a deeper understanding of the methods than the authors themselves seem to have had.

      It’s one thing to say that they haven’t proved their method. Quite a different thing to say what one should do.
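      As a concrete illustration of the robustness point, here is a minimal sketch – using synthetic data, not Steig’s actual data or code – of how a truncated principal component reconstruction can shift with the number of retained components (the analogue of the regpar choice). The `truncated_recon` helper and the synthetic “station” matrix are assumptions for illustration only.

      ```python
      # Synthetic demonstration: the trend implied by a truncated-PCA
      # reconstruction depends on how many components are retained.
      import numpy as np

      rng = np.random.default_rng(0)
      # Fake "station" data: a shared linear-trend component plus noise.
      t = np.linspace(0.0, 1.0, 200)
      X = np.outer(t, rng.normal(size=10)) + 0.5 * rng.normal(size=(200, 10))
      X -= X.mean(axis=0)  # center each station series

      def truncated_recon(X, k):
          """Reconstruct X keeping only the first k principal components."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return (U[:, :k] * s[:k]) @ Vt[:k, :]

      # The mean trend implied by the reconstruction varies with k:
      for k in (1, 3, 7):
          recon = truncated_recon(X, k)
          slope = np.polyfit(t, recon.mean(axis=1), 1)[0]
          print(f"k={k}: mean slope {slope:.3f}")
      ```

      If the reported result moves materially as k changes, and no physical argument pins down a particular k, the choice is doing real work in the answer – which is the objection being made about regpar, quite apart from knowing what the “proper” method would be.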

  50. Ian George
    Posted Jun 14, 2009 at 2:08 PM | Permalink | Reply

    Geoff

    Both the MWS and the AWS are at 26m elevation and 300m apart. The MWS commenced in 1859, though it was in the town centre (Post Office) for at least the first 100 years before moving to the airport. The AWS has been in operation since 1994. The long-term MWS average max temp is 26.8C and the AWS’s is 25.9C. That seems a big difference for such a short distance.

  51. mondo
    Posted Jun 14, 2009 at 3:17 PM | Permalink | Reply

    Re Jack (#74) and others. Steve McIntyre has often remarked about the insistence in finance circles (supported by demanding standards such as the AusIMM JORC Code which are now law in Australia) that mining promoters must be very careful about what they say in public statements, and be certain that their statements hold up to scrutiny.

    Visitors to this site may not be aware of a related area where non-legal standards of compliance and scrutiny are demanded to be met. Standard practice in mining projects (my experience is with a A$1 billion project) is for the Feasibility Study (which in this case cost of the order of A$25 million) to be subjected to a detailed and rigorous due diligence scrutiny by independent engineers acceptable to the banks, but paid for by the client. One such firm in Australia is Behre Dolbear.

    The cost of a detailed due diligence scrutiny by Behre Dolbear would usually be in the range of $500k to $1000k. Behre Dolbear brings in independent experts qualified in the relevant disciplines from all over the world to undertake the study. Typically a team of 6-10 such people would be involved in the scrutiny. Usually these studies identify issues that don’t meet the required standards, and the sponsoring company must address those issues. Failure to secure a positive report from the independent experts means that the banks simply won’t advance the funds to allow the project to be developed.

    The financial models themselves are also subjected to detailed line-by-line scrutiny/auditing, this time by a specialised actuarial firm with well-developed procedures for undertaking this work and a reputation for a strongly independent stance. In the case I am referring to, the model scrutiny was undertaken by a firm called Mercer.

    Anyone who has gone through this sort of scrutiny (the process can take several months) knows how rigorous the requirements are. And how independent the scrutiny is. And what the consequences will be if the project does not pass scrutiny.

    It seems passing strange to me that the IPCC is not expected to submit to similarly robust independent scrutiny, especially given the sums of money, and likely economic impacts involved.

3 Trackbacks

  1. [...] I don’t purport to know the answer to the great global warming question, but I do like to read both Climate Audit and Real Climate to try and get more than one side of the story. Climate Audit has published some pictures of the temperature stations around the US, and has found many instances where the ground stations have been compromised by construction near them: [...]

  2. [...] realized in discussions at this Climate Audit thread, since Marysville was one of the first stations I surveyed, I hadn’t looked at the metadata [...]

  3. By Weather is not climate « Wolfville watch on Jun 11, 2009 at 10:22 AM

    [...] Garbage in, garbage out. [...]
