Gavin Schmidt: "The processing algorithm worked fine."

In the last few days, NASA has been forced to withdraw erroneous October temperature data. The NASA GISS site is down, but NASA spokesman Gavin Schmidt said at their blog outlet that “The processing algorithm worked fine.”

Schmidt blamed the failure on defects in a product from a NASA supplier and expressed irritation at the suggestion that NASA should bear any responsibility for defects attributable to a supplier:

I’m finding this continued tone of mock outrage a little tiresome. The errors are in the file ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean.Z, not in the GISTEMP code (and by the way, the GISTEMP effort has nothing to do with me personally). The processing algorithm worked fine.

Although NASA blamed the error on their supplier (GHCN), in previous publications by Hansen et al., NASA had asserted that their supplier carried out “extensive quality control”:

The GHCN data have undergone extensive quality control, as described by Peterson et al. [1998c].

and that NASA (GISS) carried out their own quality control and verification of near real-time data:

Our analysis programs that ingest GHCN data include data quality checks that were developed for our earlier analysis of MCDW data. Retention of our own quality control checks is useful to guard against inadvertent errors in data transfer and processing, verification of any added near-real-time data, and testing of that portion of the GHCN data (specifically the United States Historical Climatology Network data) that was not screened by Peterson et al. [1998c].

Schmidt said that no one at NASA was even employed on a full-time basis to carry out quality control for the widely used GISS temperature estimates:

Current staffing from the GISTEMP analysis is about 0.25 FTE on an annualised basis (i’d estimate – it is not a specifically funded GISS activity).

Schmidt said that independent quality control would require a budget increase of about $500,000. NASA supporters called on critics to send personal checks to NASA to help them improve their quality.

At Verhojansk station, which I selected at random from the problem Russian stations, the average October 2008 temperature was reported by NASA as 0.0 deg C. This was nearly 8 deg C higher than the previous October record (-7.9 deg C). Contrary to the NASA spokesman’s claims, their quality control algorithm did not work “fine”.
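
A screen for this sort of exceedance takes only a few lines. Here is a minimal sketch in R, using made-up station values rather than the actual GHCN records or the GISTEMP code, that flags any new monthly value more than a couple of degrees above the station record:

# Hypothetical October means (deg C) for a Verhojansk-type station; values are illustrative only
oct_history <- c(-14.2, -11.5, -9.8, -13.1, -7.9, -12.4, -10.6)
new_value <- 0.0
record_high <- max(oct_history)       # -7.9 in this made-up series
if (new_value > record_high + 2) {    # flag anything more than 2 deg C above the record
  cat("QC flag: October mean of", new_value, "beats the previous record of", record_high, "\n")
}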

What is more worrying is that no one seems to be minding the store. Schmidt says that the entire effort only takes about 1/4 of a man-year annually. (They are pretty busy at conferences, I guess.) CA readers know that the GISTEMP program is a complete mess and needs to be re-written from scratch. Schmidt seems not to even want to bother doing the work at NASA, saying that he’d prefer to hire ice sheet modelers and cloud parameterizers. He called on NOAA to do the job properly:

Those jobs are better done at NOAA who have a specific mandate from Congress to do these things.

On this point, I agree with NASA spokesman Schmidt. If NASA is not going to do the job properly, then it shouldn’t do the job at all. NASA should not be depending on the kindness of strangers for their quality control. Ross McKitrick has long observed that the collection of temperature data is a job sort of like making a Consumer Price Index, and it should be done by professionals of the same sort. It doesn’t make any sense for people like James Hansen and Phil Jones to be trying to do this on a part-time basis. As long as it’s being done on such a haphazard basis, there’s really no way to prevent incidents like this one (or last year’s “Y2K” problem).

94 Comments

  1. mike T
    Posted Nov 12, 2008 at 1:39 PM | Permalink

    None of what Schmidt says excuses the fact that, given the exceptional figure produced, nobody saw fit to do at least some sort of cursory check. As someone said earlier, would this have happened if the result had been a record cooling?

  2. craig loehle
    Posted Nov 12, 2008 at 1:46 PM | Permalink

    I would say that 0.25 FTE is in fact “no one minding the store”. And yet there is umbrage taken if someone questions the veracity of the GISS output. This would explain why thousands of world weather stations vanished from the database in the 1990s–there is no one to keep them updated. And this is ok?

    • Mike B
      Posted Nov 12, 2008 at 2:29 PM | Permalink

      Re: craig loehle (#2),

      Actually Craig, I disagree. 1/4 of an FTE is 10 hours a week, which should be plenty to do quality control and other administrative overhead on GISTEMP.

      $500K/year to maintain GISTEMP is nonsense.

      Recall Tom Karl’s praise for Anthony Watts’ effort, entirely staffed with volunteers.

      • craig loehle
        Posted Nov 12, 2008 at 3:23 PM | Permalink

        Re: Mike B (#15), It is not merely checking maps. There are thousands of missing weather stations out there that no one is bothering to update. This might be occurring at the NOAA data stage, but someone needs to be looking into it. There are problems with missing daily or monthly data from sites around the world that can often be resolved by going directly to site records to fix them, but no one is doing that either. The “adjustments” to the individual records by the automated algorithm seem inadequate, and these need eyeballs. The fact that rural stations are sometimes adjusted by urban data is wrong and needs to be looked into, etc.

  3. ED
    Posted Nov 12, 2008 at 1:49 PM | Permalink

    .25 FTE is the same as spending 2 hours out of every 8-hour day on quality control of GISTEMP. That seems like a significant amount of time to devote to the task, if the person is actually devoting that much time to it. However, if you are at a conference when these results are scheduled to be published, perhaps the issue is whether a backup person was properly trained to evaluate the data and catch an error. In my business experience it would be imperative to train someone adequately to provide accurate reports to upper management while I’m on vacation.

    • Jeff Alberts
      Posted Nov 12, 2008 at 2:37 PM | Permalink

      Re: ED (#3),

      Or if you’re testifying in the UK as an expert witness for a trial that has nothing to do with climate studies, then sure, you might not have time.

  4. Jonathan
    Posted Nov 12, 2008 at 1:53 PM | Permalink

    I do find it extraordinary that we are proposing to spend billions of dollars to avoid a supposed problem, but only spend $20,000 per year on assembling the data which supposedly proves the existence of this problem.

    • craig loehle
      Posted Nov 12, 2008 at 1:55 PM | Permalink

      Re: Jonathan (#5), Give the man a cigar!

      • Jonathan
        Posted Nov 12, 2008 at 4:19 PM | Permalink

        Re: craig loehle (#5), Re: Jonathan (#5), Give the man a cigar!

        Would that cigar be carbon neutral?

        The extraordinary thing that has come out of this is Gavin’s statement that GISTEMP is just something GISS does in spare moments, and certainly not a priority for them. He would much rather spend money on more models than on getting the data right. Says it all really.

  5. Mikael H
    Posted Nov 12, 2008 at 2:03 PM | Permalink

    $500,000 huh?
    Well, at least we know the price to stop global “warming” for one year…

  6. Pierre Gosselin
    Posted Nov 12, 2008 at 2:03 PM | Permalink

    “Schmidt said that independent quality control would require a budget increase of about $500,000. NASA supporters called on critics to send personal checks to NASA to help them improve their quality.”

    Now let me get this straight. They’re admitting there’s poor quality control, and reliable results can’t really be expected. Yet they expect governments to implement radical policy changes based on these results!? We’re talking about drastic policy changes that would profoundly change people’s lives. That’s not worth a paltry 500K per year?

  7. Pierre Gosselin
    Posted Nov 12, 2008 at 2:06 PM | Permalink

    This is a national embarrassment.

  8. Mike Bryant
    Posted Nov 12, 2008 at 2:08 PM | Permalink

    Why don’t they just post everything here before it is released? That would sure save Hansen, Schmidt and others a lot of embarrassment…

  9. Steve McIntyre
    Posted Nov 12, 2008 at 2:10 PM | Permalink

    They should be thankful that blog readers are as alert as they are. In another week, CRU and NOAA would probably have fallen into the same pit.

  10. Pierre Gosselin
    Posted Nov 12, 2008 at 2:11 PM | Permalink

    Climate science from climate scientists!

  11. Pierre Gosselin
    Posted Nov 12, 2008 at 2:12 PM | Permalink

    Next time we should hold our horses and let them fall into a pit. Then we could have some real fun.

  12. Dotto
    Posted Nov 12, 2008 at 2:21 PM | Permalink

    If you look at the global monthly temperature anomaly maps, you find that most of the anomalies arise in Siberia. A natural question is the quality of this data. As we have learned from the recent false data variations, Siberia has a large impact on the global temperature.

  13. Steve McIntyre
    Posted Nov 12, 2008 at 2:27 PM | Permalink

    Warwick Hughes has reported on Siberian data issues in the past. BTW some of these Siberian sites which are “hot spots” on the map are pretty chilly. Verhojansk, which I’ve looked at as an anomaly, has average D, J, F temperatures of -43, -45 and -41. No wonder they are worried about potential temperature increases.

  14. darwin
    Posted Nov 12, 2008 at 2:31 PM | Permalink

    As I recall, the heads of Moody’s and Standard & Poor’s financial rating services told Congress that their ratings were merely “opinions” and not to be relied upon by investors as to the actual health of the securities they were rating. Ergo, they shunned any responsibility for the financial meltdown. Schmidt seems to be saying the same about GISTEMP. It is merely an opinion and NASA has no responsibility if policy makers impose costly carbon caps for a non-existent warming. It seems a whole generation has adopted MAD’s Alfred E. Neuman “What, Me Worry?” slogan as a philosophy for how to perform their jobs.

  15. Jeff Alberts
    Posted Nov 12, 2008 at 2:41 PM | Permalink

    Never mind who’s minding the store. Who’s minding the minders? I think I’ve lost my mind.

    Don’t mind me.

    Oh never mind!

  16. Manfred
    Posted Nov 12, 2008 at 2:42 PM | Permalink

    Maybe there are some volunteers on this website who could offer to do quality control, UHI correction improvement, et cetera for GISS.

  17. Ian Castles
    Posted Nov 12, 2008 at 2:47 PM | Permalink

    During the past year James Hansen has written many open letters – e.g., to Prime Minister Kevin Rudd of Australia (copies to all State Premiers), Prime Minister Gordon Brown of the UK, Chancellor Merkel of Germany, Prime Minister Fukuda of Japan, the CEO of Duke Energy Jim Rogers, the Houghton Mifflin Company about a school textbook, etc. etc. Some of these letters have been written by Hansen as a private citizen, others have been on NASA/Goddard letterhead.

    Yet the organisation that he heads can find only 0.25 FTE staff for GISTEMP?

  18. UK John
    Posted Nov 12, 2008 at 2:48 PM | Permalink

    I read Gavin’s blog on this, and then he rambles on into ocean cooling, the Argo float thing, and sea temps. I have read all that stuff myself, including the engineering bits about what was made, with what parts, what manufacturing QA was applied or not, testing, when, how accurate it might have been, things they still don’t know, etc. I can see it’s all a bit flaky, and you could interpret it either way, especially as the comparatives are at best a guess.

    But as a proof of anything, well I am not so sure.

    But Gavin really believes, I do not have his faith.

  19. adrian
    Posted Nov 12, 2008 at 2:51 PM | Permalink

    A shameful response from Gavin in view of the amount of money being proposed to spend on AGW.
    We need to know properly what is happening, not a fudge to suit extra research grants.

  20. Leon Palmer
    Posted Nov 12, 2008 at 2:56 PM | Permalink

    There’s now a 30-year overlap (since 1979) between GISS and RSS & UAH … enough to splice GISS historic estimates to RSS & UAH satellite-based measurements and dispose of the error- and gaffe-prone GISS from 1979 onward.

    That Gavin estimate of $500K would be better used, split between RSS & UAH.

  21. Jeff Alberts
    Posted Nov 12, 2008 at 2:58 PM | Permalink

    First we need to know if we even CAN know. I’m not convinced we’ll ever be able to accurately model global climate.

  22. Pierre Gosselin
    Posted Nov 12, 2008 at 2:59 PM | Permalink

    Gavin is taking a real beating at RC. I can’t help but notice that he devotes a fair amount of time to his RC blog. Wouldn’t his time be better spent elsewhere, like assuring the quality of the raw data? Does he do this on government time and expense?

  23. Dr Slop
    Posted Nov 12, 2008 at 3:00 PM | Permalink

    Piling on, but to slightly alter a quote from Gavin, he doesn’t need any evidence to know that we have a problem.

  24. Posted Nov 12, 2008 at 3:02 PM | Permalink

    RE: Siberia

    It’s very ironic that the driving force behind Budyko’s early work (so I’ve read) was Climate Control with a view to warming Siberia and thus allowing easier access to the vast natural resources there.

    Now we can do it with a simple computer code that does sums and differences (well, maybe there’s a bug or two in there).

    21st Century Climate Science is a wondrous thing.

  25. Ian Castles
    Posted Nov 12, 2008 at 3:07 PM | Permalink

    In a letter published in the Sydney Morning Herald (11 November), Gavin Schmidt writes:

    “A simple look at the budget for climate change research in the US (or globally) reveals that the vast majority of the funds go on satellite and in-situ observations, with only a tiny fraction devoted to the complex modelling efforts needed to understand climate.”

  26. Michael Jankowski
    Posted Nov 12, 2008 at 3:13 PM | Permalink

    If $500k is the price for independent review, how much of a check do you expect to receive, Steve? 🙂

  27. Phillip Bratby
    Posted Nov 12, 2008 at 3:38 PM | Permalink

    IMHO, and based on many many years of experience, independent verification, with occasional and unannounced independent auditing by a third party, is necessary for any activity which, if inadequately conceived or executed, could lead to an unacceptable risk. In this instance we are talking about the expenditure of trillions, and the risk to us mere mortals is enormous. When the lights go out due to the elimination of CO2 from human activities, millions may die. Because our leaders in their infinite wisdom are planning for a warming world instead of the equally likely outcome of a cooling world, millions may die of starvation. And all because these highly paid climate scientists cannot be bothered to ensure that they get it right, and checking is way down their list of priorities.

    Not fit for purpose springs to mind. I say sack them all and hand it over to someone like Steve who understands how to do a proper job.

    I would recommend Steve for a Nobel Prize, but it’s been debased!

  28. Mike C
    Posted Nov 12, 2008 at 3:44 PM | Permalink

    Steve and others, I’m going to disagree with you guys here… Gavin and Hansen (while despicable activists masquerading as serious scientists, IMO) are not to blame here, nor should they be criticized on this one. We all know what datasets GISS uses, and we all know how GISS applies different adjustments to the data. The real culprit here lies somewhere between the individual station observers and NOAA. We need to learn whether this is an isolated incident or a regular thing, perhaps Russia’s way of collecting on Kyoto, who knows… but we will never get the problem fixed if we do not focus the audit on the proper culprit.

  29. Mike Bryant
    Posted Nov 12, 2008 at 3:47 PM | Permalink

    Mike C,
    I respectfully disagree. There is plenty of blame for Gavin, Hansen, and anyone else who tries to hide or color the truth.
    Mike Bryant

  30. steven mosher
    Posted Nov 12, 2008 at 3:48 PM | Permalink

    SteveMC, I’m glad you did this post. I was going to post on the particular Russian station you mentioned over at RC, but got wrapped up in other things. What I found was that over the past 100-plus years, October has averaged 17C cooler than September (I need to double check), with a 1 sigma of 3C. Never has the October temp been equal to or greater than the September temp. Anyways, good job.
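
    For what it’s worth, a quick back-of-envelope in R with those figures (which, as noted, still need double-checking) puts a zero September-to-October drop at nearly six sigma from the long-term mean:

    # Rough figures from the comment above: mean Sept-to-Oct drop ~17C, 1 sigma ~3C
    mean_drop <- 17
    sd_drop <- 3
    observed_drop <- 0   # October 2008 matched September because the data were repeated
    (mean_drop - observed_drop) / sd_drop   # about 5.7 standard deviations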

  31. poid
    Posted Nov 12, 2008 at 3:49 PM | Permalink

    I would interpret Gavin’s comments as an admission that we should not consider GISTEMP a reliable anomaly series. Obviously the quality control process around the data going into the algorithm is inadequate, and there is very little checking of the output as they don’t have the resources.

    Given this situation it is absolutely not possible for GISS to claim that their anomaly series is without substantial error.

    Quite frankly I am surprised (yes, even though it’s GISTEMP) that their data checking processes are so inadequate. A few lines of code to check for duplicate data would be about the first thing you’d implement in this kind of analysis and modelling procedure.
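
    As an illustration, a duplicate-month screen of that kind could look like the short R sketch below; the table and station names here are hypothetical, not the actual GHCN or GISTEMP structures:

    # Hypothetical station table; sep and oct are the two monthly means being compared
    stations <- data.frame(
      id  = c("stn_a", "stn_b", "stn_c"),
      sep = c(2.1, 2.7, 4.9),
      oct = c(2.1, 2.7, -8.3),
      stringsAsFactors = FALSE
    )
    dups <- stations$id[stations$sep == stations$oct]   # exact repeats are suspicious
    if (length(dups) > 0) {
      cat("QC flag: identical Sept/Oct values at stations:", dups, "\n")
    }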

  32. Steve McIntyre
    Posted Nov 12, 2008 at 4:16 PM | Permalink

    #32. It doesn’t matter whether the error occurred at a supplier level or at the NASA level. NASA said that the supplier had quality control and that they did their own QC. Finger pointing doesn’t cut it.

    I don’t regard NOAA as angels in this by any means. The GHCN failure to update stations for nearly 20 years is a disgrace. We talked previously about Wellington NZ, Canadian stations, all sorts of stations where you can get weather info on the internet but the GISS (and GHCN) versions end about 1990. If GISS uses GHCN as a supplier, they should have demanded better quality a long time ago. And if GHCN failed to do so, maybe Hansen or Phil Jones should have used their pulpits to demand that GHCN/NOAA do a better job.

    They get a lot of publicity out of data that they spend only 0.2 man-years preparing. I’m sure that some people responsible for funding presume that this takes more of their budget than it seems to.

  33. Geophys55
    Posted Nov 12, 2008 at 4:40 PM | Permalink

    Seems to me that any such dataset should be examined for hard zeros in the month-to-month variation – a simple check and computationally trivial. Tropical stations might come close, but hard zeros are statistically near-impossible in a monthly average. Plot all those on your anomaly map and a glaring error would become immediately apparent. Yeah, I thought of that with 20-20 hindsight – but it was *easy*.
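
    That hard-zero screen really is a one-liner in R; the monthly series below is made up for illustration:

    # One hypothetical station's twelve monthly means; an exact zero month-to-month step gets flagged
    monthly <- c(-45.1, -41.0, -30.2, -13.4, 2.2, 12.8, 15.4, 11.3, 2.3, 2.3, -35.8, -43.0)
    which(diff(monthly) == 0)   # returns 9: the Sept-to-Oct step is an exact repeat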

  34. Posted Nov 12, 2008 at 4:51 PM | Permalink

    GIGO

    The quality of data is suspect in everything from the placement of sites, to the way the data is gathered, to the way the data is accumulated, to the way the data is adjusted… so why should one expect the management of the data system to be any better?

  35. geoff chambers
    Posted Nov 12, 2008 at 4:52 PM | Permalink

    Gavin’s defense continues: “The processing algorithm worked fine… but you can’t check for all possible kinds of corrupted data in the input files ahead of time. Science works because people check things… this is what occurred here..”
    For $500,000 I’ll check things. Just send me a coloured map once a month, and if I see a purple blob covering half a continent, I’ll check it out with CA, and split the proceeds with the unnameable…

  36. Old Dad
    Posted Nov 12, 2008 at 4:54 PM | Permalink

    I spent years managing a very high-tech manufacturing company, and I can certainly understand reliance on proper QA/QC from suppliers. We demanded it, and inspected for it.

    That’s not to say that quality problems never occurred. They did. But once we accepted a supplier’s component and put that component into an assembly, the problem became ours.

    No doubt, since we had already paid a supplier for certain QA/QC data, we didn’t replicate those efforts. But we did aggressively evaluate the performance of our product at multiple stages of manufacturing. When we found problems, we could (usually) produce a paper trail that led us back to the bad actor. Sometimes it was us. Sometimes a supplier.

    Regardless, our customer–who had his delivery delayed–was primarily concerned that we fix the problem and get him his widget. And they–as they should–often audited us.

    I’ve never worked within a large bureaucracy, and hope I never will. I can vaguely understand (alleged) underfunding, but I fail to see how that is an excuse. We’d have gotten fired for far less.

    Who’s the customer here? The taxpayer? Besides CA, who’s watching the hen house?

    I have no way to evaluate the ultimate effect of this error. Regardless, I can state categorically that no modestly competent manufacturing organization would have allowed it to happen.

    And NASA was the customer. NASA damn near invented the most sophisticated QA/QC techniques on the planet. End of rant.

  37. Gerald Machnee
    Posted Nov 12, 2008 at 4:58 PM | Permalink

    Maybe we do not need GISS, NASA, etc. I thought I read that we can get within 0.2 degrees from proxies. Let’s leave a few trees standing and we will beat that 0.78 easily.

  38. Noblesse Oblige
    Posted Nov 12, 2008 at 5:06 PM | Permalink

    This particular incident is insignificant as a technical error. It is far more important for what it says about the climate science process and its integrity. It is rarely what you do that counts; it’s how you react to being caught doing what you do. The incident would have been no big deal if NASA simply acknowledged the error and said what it was going to do to try to prevent such events in the future.
    But the wagons have been circled for some time now as evidence against the central tenet of AGW accumulates and opposition becomes stronger and more coherent. What we are witnessing is the classic reaction to challenges to closely held belief — cognitive dissonance.
    The social psychologist Leon Festinger, developer of the concept, conducted early studies of the phenomenon. One study looked at people who bought bomb shelters during the Cold War. It was found that such people tended to exaggerate the threat of nuclear war, and nothing could dissuade them. Good news about relaxed tensions and peace initiatives was rejected as false. Such developments would produce cognitive dissonance, bizarrely almost as if they were invested in nuclear war. Festinger’s book, When Prophecy Fails, tells of a group of doomsday believers who predicted the end of the world on a particular date. When that didn’t happen, the believers became even more determined they were right. And they became even louder and proselytized even more aggressively after the disconfirmation.
    So we can expect more stonewalling and more extreme and opaque defenses from proponents as evidence continues to mount. The morphing of global warming into ‘climate change’, in which pretty much anything that happens can be ascribed to human influence, is a good example.
    There will be much more of this as time goes on.

  39. Patrick Henry
    Posted Nov 12, 2008 at 5:10 PM | Permalink

    Had Russia shown up in the October map as bright blue rather than dark red, they undoubtedly would have caught the error. The problem is that dark red is exactly what they want to see. Because of this inherent bias, errors will always skew towards higher temperatures.

    You can’t be coach, referee and scorekeeper – unless you are pushing to re-engineer the world economy based on the sub-standard work of 1/4 of an FTE.

  40. Will
    Posted Nov 12, 2008 at 5:35 PM | Permalink

    Here is a bit more of an explanation by Gavin of the checking process as taken from a post at RC:

    (Posted by Ed)
    .25 FTE’s is someone devoting 2 hours a day to verify the accuracy of this data. That sounds like enough given the number of sites (2000) and frequency of measurement (one report per month?). $500,000 for 6 people working full time 8 hours a day sounds like overkill. What am I missing? One person on the web was able to catch this glitch and probably did not have to spend 2 hours to do so.

    [Response: The work on this is done over a one or two day period once every month. Add in a bit more for code maintenance, writing for papers etc. But remember that this is an analysis not data generation. NOAA has a whole division devoted to what you are concerned about (and I’m sure there are counterparts in the other national weather centers). If you want GISTEMP do that level of additional investigation it takes people to do it. As you say, this was caught very quickly and with no cost. Seems a pretty good deal. – gavin]

    • Alan Wilkinson
      Posted Nov 12, 2008 at 6:08 PM | Permalink

      Re: Will (#46),

      As you say, this was caught very quickly and with no cost.

      Because these errors were so blatantly obvious. How many less obvious errors have been missed? What confidence can there be in data which relies on isolated external reality checks rather than built-in quality assurance?

  41. Ed
    Posted Nov 12, 2008 at 5:47 PM | Permalink

    Given that GISTEMP is based on what is fed to it from NOAA, what is the confidence then that the graph shown by the attached link for October 2008 is even accurate for the USA? Has the raw data basis for this been scrutinized?

    http://www.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html

  42. geoff chambers
    Posted Nov 12, 2008 at 6:15 PM | Permalink

    re: Noblesse Oblige (#44)
    “This particular incident is insignificant as a technical error”.
    Not if you feed it into a climate model, fast forward 6 months, and “project” the melting of the Siberian permafrost and a Venusian climate in the near future. Think of the probable effect on a new green (and green) president-elect (clip .. clip.. I’d better stop)

    • Noblesse Oblige
      Posted Nov 12, 2008 at 6:58 PM | Permalink

      Re: geoff chambers (#49),
      Point well made. Still, this fiasco is all the more interesting for the NASA/RC et al reaction because of the bunker mentality it betrays.

  43. EJ
    Posted Nov 12, 2008 at 6:25 PM | Permalink

    I hate to be so crass, but I can’t help but picture Dr. Hansen as the Wizard of Oz, behind the curtains, twisting the data knobs, bellowing about the myths of Oz.

    Fortunately there seems to be quite the pack of Totos, sniffing around the curtains, and beginning to pull them back to reveal the truth.

    I wish to second those who recommend scrapping GISS completely. Rebuild it from the ground up with competent codes, siting, equipment and most importantly, headed by professional (registered) officials, who have a license to lose. Eagerly archiving and sharing of data and code would be mandatory, of course.

    One can dream……

  44. John A
    Posted Nov 12, 2008 at 6:29 PM | Permalink

    I hope someone is bringing this latest epic fail to the attention of Michael Griffin.

    Like “Old Dad” I find myself incredulous that an organization that sets so much store on the quality of its data products should be trying to blame its suppliers, when it’s the quality control that has failed (yet again). Perhaps if James Hansen took fewer interviews every day and didn’t spend so much time defending vandalism of power stations on spurious grounds, then maybe the skeptic blogosphere wouldn’t need to exist.

  45. deadwood
    Posted Nov 12, 2008 at 6:40 PM | Permalink

    I wonder if this episode will be remembered as Steve’s “Red October” moment.

  46. GP
    Posted Nov 12, 2008 at 7:22 PM | Permalink

    A very simple graph comparing the latest values with a couple of recent years, or a table that automatically highlights suspect values (‘actual’ min/max comparisons or calculated variance figures), is not so hard to do. Likewise it is easy to pinpoint zero values or 999.9s and either adjust for them realistically or reject them.

    With some very inexpensive (and therefore independently tested) software it would not even require code, and it could be automated to distribute warnings by email or a number of other methods. The objective would not be to ‘fix everything’ but simply to highlight potential problems. For $500 you could have several of them developed independently to cross-check each other.

    Fixing the data is another matter and could be costly – but then that is the supplier’s problem.

    The only downside is that people then come to rely on the veracity of the monitoring processes and the embedded logic may not cover all developments as time passes. But then it seems that that is no different to what we have now.

  47. John Lang
    Posted Nov 12, 2008 at 7:30 PM | Permalink

    Given the GISTEMP code problems and the lack of quality control, what are we to think of the output of the Global Climate Models or General Circulation Models?

    I note that the majority of the climate models started off by using Hansen’s models as a template. Who double-checks Hansen’s work?

    Are we supposed to completely change how our economy works based on the models and quality control of GISS?

  48. John Norris
    Posted Nov 12, 2008 at 7:41 PM | Permalink

    re:

    “The processing algorithm worked fine.”

    If the collection process and the processing algorithm aren’t sophisticated enough to catch a ~10 deg C site-level month-to-month error, how can you have confidence that the process is adequate to establish a ~0.2 deg C global increase over a decade?

    If a blatant, easy to detect data error made it through, hidden, hard to detect data errors are certain to have made it through.

  49. Leon Palmer
    Posted Nov 12, 2008 at 7:42 PM | Permalink

    Regarding

    “I wish to second those who recommend scrapping GISS completely. Rebuild it from the ground up with competent codes, siting, equipment and most importantly, headed by professional (registered) officials, who have a license to lose.”

    Better to scrap GISS and use satellites for climate research.

  50. Gerald Machnee
    Posted Nov 12, 2008 at 7:49 PM | Permalink

    The season was short in “The Hunt for Red October”.
    New title.

  51. Joe Black
    Posted Nov 12, 2008 at 8:26 PM | Permalink

    If garbage data sucked into GISTEMP results in garbage output, it certainly raises the question of what input data quality is used in the Climate Models…

  52. Daryl M
    Posted Nov 12, 2008 at 8:35 PM | Permalink

    If this is the result when the processing algorithm worked fine, I can only imagine what the result would be if it didn’t work fine. On second thought, that scenario has been documented on this very website.

    Considering the resources available to these organizations, to think that a simple filter to perform some basic sanity checks on the data could not be implemented as part of the data loading process is practically unbelievable. One can only imagine if it was spring time and data from February or March were repeated. Heaven forbid, that would cause the temperature to be artificially reduced, which I doubt would ever slip past this gang.

    • Alan Wilkinson
      Posted Nov 12, 2008 at 8:47 PM | Permalink

      Re: Daryl M (#60),

      It’s kind of human nature, Daryl. I recall when writing Uni accounting systems that if your rounding algorithm had a slight statistical bias towards over-charging you would hear about it around five minutes after go-live, whereas if you seriously undercharged you would have to discover it yourself months later.

  53. hswiseman
    Posted Nov 12, 2008 at 8:48 PM | Permalink

    I am reminded of the MOVE disaster in Philly (an entire city block burnt down in the effort to drive a radical group out of an apartment building) and the comments of Mayor Goode in a televised speech the day after the disaster.

    “The plan was a good one. It could have worked. The fire was accidental and was, in fact, unexpected. That does not, however, take away from the fact that we had a good plan that could have resulted in no loss of life or property.”

    “Philadelphia Mayor: ‘Perfect, Except For the Fire.’”

    • Steve Reynolds
      Posted Nov 12, 2008 at 8:58 PM | Permalink

      Re: hswiseman (#62), Also known as the operation was a success, but the patient died…

  54. jae
    Posted Nov 12, 2008 at 9:06 PM | Permalink

    I’m just glad that I’m not signed up to go on a Shuttle Mission, considering the obvious lack of scientific rigor and integrity at one of the supposedly premier governmental agencies in the USA: NASA. I speak with authority when I state that the only thing that holds any of these governmental agencies together at all is the fact that they have enough money to hire consultants to do their work for millions of dollars; they are paid to make their case and make them look good. None of the “staff” do anything but grandstand, hand out grants, and serve as “experts” and “spokesmen.” They spend almost all their time at conferences. Most of the administrators are political appointees (by hook or by crook) and are “less than qualified” to be at the helm. This kind of neglect of duty and blaming others is a disgrace to the entire USA. The principals involved should be fired for incompetence.

  55. GeneII
    Posted Nov 12, 2008 at 9:22 PM | Permalink

    With 1998/1934 still fresh in everyone’s memory, this should not have happened. It especially should not have happened at NASA. We’re talking about NASA here. They put a man on the moon. You’d think they could get a simple data set right. The fact that NASA is like this tells me it is far removed from what it was in the 1960’s.

    If I want to see what NASA used to be I’ll have to watch the movie Apollo 13. Because I certainly can’t see it now.

  56. Posted Nov 12, 2008 at 9:52 PM | Permalink

    Lack of QC at NASA brings to mind Richard Feynman’s “Mr. Feynman Goes to Washington: Investigating the Space Shuttle Challenger Disaster,” Part 2 of his book, “What Do You Care What Other People Think?”.

    As I recall, he noted that QC at NASA was far more lax than at the FAA: After several airplane crash tragedies, the FAA (and the airline industry) realized that if a complex system like an airplane has, say, 100 critical components, failure of any one of which can cause a crash, then the probability of any one failing has to be less than 1/100,000 in order to reduce the probability of a crash to a still too-high 1/1000. Hence, the airline industry implemented QC standards that seem absurdly rigorous in terms of a single component, but are not that unreasonable in terms of the system as a whole. (I haven’t reread it recently, so I’m just making up the numbers.)
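
    Those made-up numbers are easy to check: with n independent critical components, each failing with probability p and any one failure causing a crash, the system fails with probability roughly 1 - (1 - p)^n. In R, with the same illustrative figures:

    # Illustrative figures only (per the caveat above), not actual FAA or NASA reliability numbers
    n <- 100            # critical components; any single failure causes a crash
    p <- 1 / 100000     # per-component failure probability
    1 - (1 - p)^n       # roughly 1/1000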

    NASA, on the other hand, was content to make sure that each component had a reasonably low probability of failing, with the result that the probability of the system failing was actually quite high.

    So maybe it’s just a NASA thing…

  57. EJ
    Posted Nov 12, 2008 at 9:58 PM | Permalink

    NASA, do you hear this? Do you have an opinion here? Care to share it? Care to put your name on it?

  58. Posted Nov 12, 2008 at 10:02 PM | Permalink

    Or, “As long as our processing algorithm works fine, it’s the supplier’s job, not ours, to make sure the O-rings are flexible at low temperatures.”

    At least no one died in this screw-up.

  59. David L Hagen
    Posted Nov 12, 2008 at 10:18 PM | Permalink

    At RC #118 gavin Says:
    “12 November 2008 at 10:00 PM

    The corrected data is up. Met station index = 0.68, Land-ocean index = 0.58, details here. Turns out Siberia was quite warm last month.”

  60. Steve McIntyre
    Posted Nov 12, 2008 at 10:41 PM | Permalink

    Here is Warwick Hughes’ discussion of Siberia in 2000 – many of the stations are the same ones as in the present screwup.
    http://www.warwickhughes.com/climate/tamyr.htm

  61. Richard deSousa
    Posted Nov 12, 2008 at 10:51 PM | Permalink

    Gavin Schmidt’s excuse is… pathetic… they’re getting billions of dollars to run climate science and they can’t find $500,000 in that pile of money to ensure the data is accurate??!!

  62. jae
    Posted Nov 12, 2008 at 10:57 PM | Permalink

    Continuing my rant: (snip if necessary; but perhaps someone will see it first)

    – invitation to snip accepted.

  63. Gerry Morrow
    Posted Nov 13, 2008 at 1:59 AM | Permalink

    It really speaks to the quality of the science here. A set of startling results that agrees with your hypothesis goes out unchecked, and it is considered to be making a mountain out of a molehill when the results are found to be wrong. Not only wrong, but wrong because of a stupid schoolkid error.

  64. Posted Nov 13, 2008 at 2:48 AM | Permalink

    It’s not a molehill if the MSM have made mileage out of the false figures. Anybody know?

  65. Pierre Gosselin
    Posted Nov 13, 2008 at 3:34 AM | Permalink

    Even the newly calculated October temperature is way out of line with respect to HadCrut, RSS and UAH. GISS seems to be in absolute shambles and has no idea what the global temp is.
    This is about as sloppy as science can get. This would never be tolerated in any other field.

  66. Posted Nov 13, 2008 at 3:47 AM | Permalink

    In reading Schmidt’s response to John Finn #48 at RC on this topic he essentially states that NASA takes the data “as is” and does its analysis expecting the data to be correct – until someone shows it isn’t when they then go back and question their data suppliers.

    Given that approach it seems highly likely that there will be errors but it is up to the rest of the world to find them after the event.

    This seems a very poor way to go about things and I agree with Steve – NASA should not do the job at all if that is going to be their approach.

  67. MarcH
    Posted Nov 13, 2008 at 5:07 AM | Permalink

    Generating a GISS map in polar view with a 250 km smoothing radius provides a better indication of anomalous stations and the carry-over of a large anomaly from Siberia into northern Canada. The spread of anomalies seems very fishy; my money is on another stuff-up.
    Parameters Polar – GHCN_GISS_250km_Anom10_2008_2008_1951_1990

  68. Robert Wood
    Posted Nov 13, 2008 at 5:13 AM | Permalink

    NASA’s processing algorithm did work well. Publicise a rising temp and hope those pesky auditors don’t notice.

  69. PHE
    Posted Nov 13, 2008 at 5:30 AM | Permalink

    Re 59, Joe Black
    ‘Garbage in, garbage out’ is the usual mantra for computer modellers (like myself). For the hockey team, it’s “garbage in, ‘right’ result out”. In fact, they operate more like ‘hockey mums’ washing the team kit. Whatever goes into the washing machine comes out all bright and clean.

  70. Frank K.
    Posted Nov 13, 2008 at 7:25 AM | Permalink

    “In reading Schmidt’s response to John Finn #48 at RC on this topic he essentially states that NASA takes the data “as is” and does its analysis expecting the data to be correct – until someone shows it isn’t when they then go back and question their data suppliers.”

    I wonder if this explanation would fly if, say, a tire manufacturer used defective raw materials to make tires. When you got into an accident because the tires failed, the tire company could use the “Gavin Schmidt” defense: “Hey, our tire manufacturing process worked perfectly!” One could extend this defense to other industries, such as baby products, foods, power tools, …

    What this incident tells me is that NASA (or at least GISS) really doesn’t care about its data products. And no, Gavin, the processing algorithm did ** NOT ** work, as your codes should be checking the databases to catch these stupid errors.

    By the way, the “we don’t have enough FTEs for this…” excuse is also what is used to defend the utter lack of serious documentation and verification of Model E… Hey, we need time to write our silly papers, after all!

  71. craig loehle
    Posted Nov 13, 2008 at 8:27 AM | Permalink

    No time to do it right, but plenty of time for press releases and excuses.

  72. Bwanajohn
    Posted Nov 13, 2008 at 9:18 AM | Permalink

    You can tell Gavin, I’ll do ALL of the QC for $250k/yr. Data is data is data. When you have considerable historical records like this temp data, it seems easy enough to add “sanity checks” to your processing algorithms, like duplicate data, sign errors, magnitude errors, and excessive deviation from expected. I do this every day with manufacturing data, why not this?

    • Posted Nov 13, 2008 at 9:32 AM | Permalink

      Re: Bwanajohn (#82),

      I’ll do it for $50k/yr. During the first year I’ll get it bullet-proof automated. All other years, I’ll click Do It.

  73. conard
    Posted Nov 13, 2008 at 9:50 AM | Permalink

    Perhaps this falls into the category of picking nits but being made aware of an error hardly qualifies as “force”. Nor have I seen any signs of resistance to correcting the error which would require force. It should be noted in positive terms that once NASA was made aware of the problem they immediately set to work and in short order had the error corrected.

    Perhaps it is this asymmetry in tone that allows CA to be seen as the cat in the sandbox.

    Steve: I’ve never complained about NASA’s reluctance to make changes. We went through this with their “Y2K” problem and, at the time, I acknowledged the promptness of their changes, which contrasted favorably with (say) Mann. I criticized them then for failing to disclose the changes when they were made – though, after much criticism, they eventually disclosed their changes and their disclosure is now far, far better than comparable services. They have still failed to acknowledge their sources. For example, Gavin says that he learned of the problem at Watts Up and immediately contacted Hansen. Fine. But it would have been more open to have written a comment at Watts Up acknowledging the point and saying that he’d ask the GISTEMP folks to look into the matter and then report back on what happened. I sent Hansen a polite email notifying him of the issue as I had no basis then for assuming that any of them were aware of the problem. They could easily have written back acknowledging the issue, saying that they were already aware of the matter and were working on the problem. Instead, they left some readers with the impression (one posted here) that they had fixed the matter on their own, through their own QC.

    • PhilH
      Posted Nov 13, 2008 at 10:12 AM | Permalink

      Re: conard (#84), Speaking of “asymmetry in tone,” every time, as a lurker, I go to RC I am appalled at the tone that Gavin takes with people he doesn’t agree with. Seemingly, his first response is always a personal, most times contemptuous, insult; something you will never “hear” from the proprietor of this site. Steve is a private individual; Gavin is a government employee. Who gets the pass here? Why are these guys so angry?

    • Posted Nov 15, 2008 at 8:03 AM | Permalink

      Re: conard (#84),
      If you ever had a child with a lice problem, you would appreciate that nit picking is a necessity.

      That said: Pointing out that GISS has extremely poor quality assurance is not nit picking. I assume that by asymmetry in tone you are alluding to the tone at the top of the fold, and reminding us that Gavin’s tone is more snide than that of other bloggers?

  74. iyportne
    Posted Nov 13, 2008 at 10:31 AM | Permalink

    re: “The processing algorithm worked fine.”

    Is that “algorithm” or “Al-Gore-ithm”?

  75. Sam Urbinto
    Posted Nov 13, 2008 at 11:10 AM | Permalink

    Why even accept the concept that this data is meaningful in the first place? The trend in the GHCN-ERSST mean anomaly over the entire planet is only +0.7 over 130 years. The original readings are the mean of the highest and lowest whole-number recorded temperatures, right?

    It is clear, however, that if they rely upon their supplier, the onus falls on them, regardless of whether the GHCN data has “undergone extensive quality control” or not, and regardless of whether GISS carries out their own QC and verification of anything or not.

    It does appear though that “the processing algorithm worked fine”. GIGO. The controls to ensure it was correct failed. So what.

    If 10 C errors over the world’s largest land mass only reduced the monthly anomaly by 0.21 C, that seems a rather telling bit of information, more telling than the source of or responsibility for those errors. Perhaps that’s why all the noise from other directions, to shove the conversation onto other matters.
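
    The arithmetic behind that is just area weighting. A rough sketch in R, where the affected-area fraction is a guess for illustration rather than anything taken from GISS:

    # Back-of-envelope only; the area fraction is assumed, not a GISS figure
    error_size <- 10        # approximate deg C error at the affected stations
    area_fraction <- 0.02   # assumed share of the global analysis influenced by them
    error_size * area_fraction   # ~0.2 deg C shift in the global monthly anomaly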

  76. Derek
    Posted Nov 13, 2008 at 4:02 PM | Permalink

    This is what they’re paid to do. They’re paid out of YOUR pocket, and work for you. Yet call them on the piss poor job they’re doing in the rare minutes when they’re not out self-promoting, and all they can do is feign some smug irritation and tell you their budget is too small.

    snip – calm down a bit

  77. EJ
    Posted Nov 15, 2008 at 1:00 AM | Permalink

    Cat in a sandbox.

    This sandbox is Dr. M’s, the statistical guru.

    Dr. Steve M has demonstrated his competence in statistics for the record. Of course, I am no climatologist. I am no statistician, I am an engineer. I try to keep abreast of the science here. I thank you all for your contributions.

    I would like to remind those desperate souls that we can’t snap our fingers and make anything so. Should we vote to make something so by a democracy? We can’t pray to make it so. Let’s vote for independent science.

    We need a bulldog with a good nose here.

  78. craig loehle
    Posted Nov 15, 2008 at 11:04 AM | Permalink

    If someone takes a tone of omniscience, of infallibility, of condescension toward “sceptics”, and it then turns out that they are making up statistics that have never been tested, used data with geographic coordinates wrong, arbitrarily truncated data, lost recent data from entire countries, can’t seem to add, etc., then perhaps a little irony, a few snide remarks are in order. You may note that Steve M. (not a Dr., but quite qualified anyway) puts HIS data and R code in public and corrects his mistakes. It is the people who claim infallibility who hide their code, their data, and their methods.

  79. jeez
    Posted Nov 15, 2008 at 5:06 PM | Permalink

    Digging a little deeper, it looks like you’ve also found the hiding place of Santer’s reindeer.

    Verkhoyansk is a town in the Sakha Republic, Russia, situated on the Yana River near the Arctic Circle, 675 km from Yakutsk. There is a river port, an airport, and a fur-collecting depot, and it is the center of a reindeer-raising area. Population: 1,434 (2002 Census).

  80. steven mosher
    Posted Nov 15, 2008 at 8:46 PM | Permalink

    Mildly interesting stuff when you look at combining stations for Verhojansk: 7 different files, not much overlap between stations since 1987. Also, surrounding rural stations might yield something. See the WUWT post on the central heating at Verhojansk.

  81. John F. Pittman
    Posted Nov 16, 2008 at 9:23 AM | Permalink

    Yes. Nit picking is required. I believe it has something to do with verification. After multiple episodes of lice (I have four children and 17 nephews and nieces about the same age), I can testify that unless you not only nit pick but do a perfect job of it, the problem will raise its ugly little head again and again, and…

  82. EJ
    Posted Nov 20, 2008 at 12:14 AM | Permalink

    The whole GISS, etc., the entire surface data system, needs to be rebuilt, no?

3 Trackbacks

  1. […] I was getting ready to publish, I got a note from SteveM. He also appears to be perplexed by the idea the algorithm worked fine. « Why can’t I get this song out of my […]

  2. By The End of CRUTEM? « Climate Audit on Jan 30, 2010 at 4:24 PM

    […] here once again in 2008 after Gavin Schmidt had said that the entire GISS quality control effort took […]

  3. […] October was the warmest since 1880. They’d re-used September data for many northern stations. GISS blamed the error on the agency that supplied the data, but NASA said the supplier carried out “extensive quality control.” As always, the error […]