OK, What Caused the Problem?

Are you like me and a little puzzled as to exactly how the GHCN-GISS problem happened? GISS blamed their supplier (NOAA GHCN). Unfortunately NOAA’s been stone silent on the matter. I checked the Russian data at meteo.ru and there was nothing wrong with it. Nor is there anything wrong at GHCN-Daily for stations reporting there. So it’s something at GHCN-Monthly, a data set that I’ve been severely critical of in the past, primarily for the indolence of its updating, an indolence that has really reached a level of negligence.

In passing, while I was looking at Resolute data in connection with a question about a mis-step in temporarily losing some northern Canadian data while the Russian patch was being applied, I also noticed what appears to be a prior incident like the one that we just saw – only in reverse (and not a small error either, it was about 14 deg C). I’d also like to remind people that an identical incident with Finnish stations was reported here on Sep 23.

GHCN Non-Updating
Some critics have asserted that I’ve been unfair in criticizing GISS rather than GHCN. I submit that I’ve been drawing attention to problems at GHCN for a long time now. And last year, we actually had an incident in which NASA GISS apologists made exactly the same finger-pointing excuses that they are making now – that the problems were GHCN’s fault and NASA GISS was blameless. Here are some prior posts on the matter.

In May 2007, I invited readers to help locate certain cities that the climate science community seemed unable to find. The data set to which the CRU FOI officer had directed us as CRU’s source referred, remarkably, to “stations with no location”. I thought that CA readers would be intrigued by the idea of “stations with no location” and asked:

Were these pirate weather stations that changed locations to avoid detection by NASA? Were they voices from beyond – perhaps evidence of unknown civilizations? And there were over 420 such stations out of just over 5000 in total. So there were not just a few strangers among us. A number of the mystery stations came from the mysterious civilization known as “Chile”, whose existence has long been suspected.

A few months later, I invited readers to help NASA find the lost city of Wellington NZ, where NASA and GHCN had been unable to obtain records since 1988. I wondered whether the city had been destroyed by Scythians or perhaps Assyrians. Fortunately, one of the survivors made contact with us – indeed, the survivor was employed by a national meteorological service and assured us that records had in fact been kept since contact had been lost.

On another occasion, we pondered why GHCN had been unable to locate Canadian data (for Dawson and many other stations) since 1989 – and why NASA GISS had stood idly by, while nothing was done for 20 years. I asked:

How hard can it be to locate Canadian data? Maybe it’s time to ask the people who do the Consumer Price Index to compile temperature statistics. It’s all just data – maybe professional data people would do a better job than the present people who seem to have trouble getting off their La-Z-Boys.

We visited the same problem in connection with GHCN’s failure to update results from Peru and Bolivia since 1989, while NASA GISS merrily went about adjusting the data trends without bothering to collect up-to-date information readily available on the internet (and even in the GHCN-Daily data). In this case, there was a small existing controversy, as NASA GISS apologist (and Hansen’s other pit bull) Tamino asserted stridently (see comments passim) that two sites, Cobija and Rurrenabaque, did not have post-1988 data and then, amusingly, continued to assert this in the face of simple proof that data almost up to the minute could be located on the internet.

Tamino:

There’s no post-1988 data for Cobija or Rurrenabaque

After I showed the existence of post-1988 data, a poster at Tamino’s site asked:

OK now i’m confused. Is there or is there not temp data for Cobija and Rurrenabaque after 1988? (as posted over at CA) Not trying to take any side here just losing faith on what to believe.

Even after I’d produced post-1988 data (and given active links to modern data), Tamino persisted:

[Response: I downloaded both the raw and adjusted datasets from GISS, and there’s no data beyond April 1989. ]

One of his posters persisted:

Dear Tamino,
I know you insist that “[t]here’s no data from Cobija or Rurrenabaque”, but McIntyre has posted the post-1988 temperature data for Cobija and Rurrenabaque at Climate Audit today.
Why the discrepancy?

To which Tamino answered:

[Response: He didn’t get it from GHCN or from NASA. Does it include adjustments for station moves, time-of-observation, instrument changes? Does Anthony Watts have photographs?]

Actually this wasn’t even true. I’d been able to get data from GHCN-Daily. Another reader persisted, asking the questions already raised here, as to why NASA GISS:

1) stopp[ed] using data series in 1988 when a full series exists till today (documented on CA for Cobija, Rurrenabaque).
2) Classif[ied] stations as rural that are in fact urban (documented on CA for Yurimaguas, Moyobamba, Chachapoyas, Lambayeque, Tarapoto, Cajamarca, Tingo Maria) and adjusting them accordingly…

To which Tamino responded with the same fingerpointing argument recently used by Hansen’s other bulldog (Gavin):

[Response: … I asked you “in what way did GISS violate legitimate scientific method?” It appears that it’s not GISS but GHCN which left the post-1989 data out of the combined data supplied to GISS. Maybe there’s even a good reason. Clearly it was not GISS but GHCN which classified urban stations as rural. GISS was sufficiently dissatisfied with the classifications provided by GHCN to devise a whole new method and apply it to the U.S. Adjusting stations by comparing to other stations which have faulty population metadata is most certainly NOT a violation of legitimate scientific METHOD — it’s faulty metadata.

People who are not climate scientists typically have to scratch their heads a little when they see this sort of reasoning, which, as I just noted, is pretty much NASA’s present defence. The Other Dude Did It.

BTW NASA’s use of absurdly faulty population data from GHCN is an important issue in itself that we’ve discussed in the past. Because many of their “rural” locations outside the US are not “rural”, but medium-sized and rapidly-growing towns and even cities, their “adjustment” for UHI outside the U.S. is feckless and, in many cases, leads them to opposite adjustments in cities. This is a large topic in itself.

At their webpage, NOAA GHCN assures us that their quality control is “rigorous”.

Both historical and near-real-time GHCN data undergo rigorous quality assurance reviews.

This representation was endorsed in Hansen et al 1999 (with corresponding language in Hansen et al 2001) as follows:

The GHCN data have undergone extensive quality control, as described by Peterson et al. [1998c].

I guess if you don’t actually update the majority of your data, it reduces the work involved in quality control.

I refer to these past posts as evidence that problems at GHCN have been on our radar screen long before the present incident. Indeed, I hope that access to GHCN procedures will be a positive outcome of the present contretemps.

Finland
CA reader, Andy, commented here on Sept 23, 2008:

BTW, GISS temperature data for the finnish towns like Oulu, Kajaani, Kuusamo etc shows exactly the same temperatures for July and August 2008. First time ever seen!

which was confirmed by Jean S here.

Resolute NWT
Now as promised above, here’s evidence of a prior incident. Because there’s been an amusing mishap with northern Canadian values being absent from the NASA map on Monday, present on Wednesday and absent again on Friday, I took a look at the Canadian source data for Resolute NWT, which average to monthly values of -32.2 deg C in March 2008 and -18.5 deg C in April.

# Fetch Environment Canada daily data for Resolute (station 1776) and
# average the daily mean temperature (column 10) by month:
url="http://www.climate.weatheroffice.ec.gc.ca/climateData/bulkdata_e.html?timeframe=2&Prov=XX&StationID=1776&Year=2008&Month=10&Day=1&format=csv&type=dly"
test=read.csv(url,skip=23,header=TRUE)
round(tapply(test[,10],test$Month,mean,na.rm=T),2)

#     1     2     3     4     5     6     7     8     9    10    11    12
# -31.9 -35.3 -32.2 -18.5  -7.1   2.2   5.3   2.3  -5.0 -10.8 -20.0   NaN

I downloaded the most recent GHCN v2.mean data, unzipped it and looked at the 2008 values in the GHCN-Monthly database. I bolded the March 2008 and April 2008 values, which are identical.

# Find the 2008 record for Resolute (GHCN id 40371924000) in v2.mean;
# values are monthly means in tenths of a deg C, with -9999 for missing.
loc="d:/temp/v2.mean"
v2=readLines(loc); id=substr(v2,1,11)
temp=(id=="40371924000"); N=sum(temp)
v2[temp][N]

# "4037192400052008 -317 -351 -322 -322  -73   21   53   23  -50 -108-9999-9999"

The April 2008 value is invalid, being nearly 14 deg C colder than the actual value. I guess an error of 14 deg C is insufficient to engage their “rigorous” quality control.
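For readers who want to reproduce this, here is a minimal sketch of unpacking a v2.mean record in R. It assumes the fixed-width layout implied by the code above (an 11-character station id, a 1-digit duplicate number, a 4-digit year, then twelve 5-character monthly values in tenths of a deg C, with -9999 as missing); the record is the Resolute line just shown.

# Unpack one GHCN v2.mean record into monthly means in deg C.
# Assumed layout: chars 1-11 station id, char 12 duplicate number,
# chars 13-16 year, then twelve 5-char values in tenths of a deg C.
parse.v2=function(rec) {
  vals=sapply(0:11, function(i) as.numeric(substr(rec,17+5*i,21+5*i)))
  vals[vals==-9999]=NA   # -9999 is the missing-value code
  round(vals/10,1)
}
rec="4037192400052008 -317 -351 -322 -322  -73   21   53   23  -50 -108-9999-9999"
parse.v2(rec)

# -31.7 -35.1 -32.2 -32.2 -7.3 2.1 5.3 2.3 -5.0 -10.8 NA NA

The duplicated -32.2 in March and April stands out at once against the -18.5 deg C computed from the Environment Canada dailies.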

Also, Jean S had already mentioned an identical incident with Finnish stations about a month before the most recent Hansen incident.

I don’t plan to spend time doing an inventory of incidents – surely NASA and NOAA have sufficient resources to do that. However, this one prior incident is sufficient to prove that the present incident is not isolated and that the same problem exists elsewhere in the system. I’m perplexed as to how the problem occurs in the first place, given that the error doesn’t occur in the original data. I’m sure that we’ll find out in due course.

The bigger issue is, of course, why NOAA and NASA have been unable to update the majority of their network for 20 years.

158 Comments

  1. cbone
    Posted Nov 16, 2008 at 8:06 PM | Permalink

    Very interesting. I agree it appears that the problems with GHCN seem to be systemic and not an isolated incident.

  2. Stephen
    Posted Nov 16, 2008 at 8:25 PM | Permalink

    Having seen audits in the financial world, and having been part of an organization – a major bank – where audits are SOP (Standard Operating Procedure), I find this quite funny.

    In the bank, auditors were these creatures seen as being like the flu, because they showed up randomly but you knew they were coming sometime. They were usually pretty friendly people who had some good methods for going through a lot of work quickly, and they usually left the department or branch with lots of good suggestions. You learned to like them and see their value.

    I have also been at orgs, non-financial, where auditors were something to be tricked and hoodwinked and ignored because they apparently got in the way of “real business”. Inevitably, the value was missed, and these weren’t healthy organizations.

    Finding the errors should be embraced, and in fact they should be striving to bring proper processes into place. Everyone understands and expects mistakes to happen and procedures not to be followed; hence the need for auditors. The more it is seen as SOP, the less upset everyone gets about audits, the less emotion is expended on defending the indefensible and inevitable errors, and people focus on what’s important: improving their processes and results.

    Steve, quite frankly NASA should be paying you large dollars for the work that you are doing. And the post-audit parties were always great fun, because after a couple of drinks the auditors would start telling the horror stories. Always a hoot.

    Valuable work; I just wish they would see it for what it is.

  3. Good Captain
    Posted Nov 16, 2008 at 8:41 PM | Permalink

    Unbelievable! I am fairly new to this site, but confident that this site and a number of others are performing the necessary correction and reproof formerly performed by scientific colleagues within the arena.

  4. Steve McIntyre
    Posted Nov 16, 2008 at 8:43 PM | Permalink

    #2. In a way, the most interesting aspect of the present contretemps was Gavin Schmidt’s extraordinary admission that NASA GISS only spends 0.25 man-year on this data set. I doubt that CRU spends much more. They spend more time in airplanes.

    However, both NASA GISS and CRU have nice little franchises in which they get buckets of publicity for an unbelievably minuscule amount of work, involving no due diligence as any ordinary person understands it. CRU has been funded for years by DOE and I’ll bet that they’ve funded a lot of activities out of their margins on this contract. Gavin says that they have no line funding for GISTEMP, but surely this must feature in their block funding.

    When you think about it, if NASA GISS (and CRU) gave up its temperature franchise (and it seems that they are primarily distributors with a little packaging to make a house brand), they’d become merely another climate model – and probably not the best one.

    But they’ve invested nothing in the franchise and the results are sure showing.

  5. jae
    Posted Nov 16, 2008 at 8:48 PM | Permalink

    The Other Dude Did It.

    I have not had a better laugh in years. Thanks, Steve. I hope Saturday Night Live gets you on soon! ROFLMAO!

  6. PeterW
    Posted Nov 16, 2008 at 8:48 PM | Permalink

    I’m just sitting in front of a computer shaking my head slightly from left to right and back again. For many years I rather naively, as it’s turning out, believed organisations like GISS and GHCN were run by people at least as professional as the managers I work with every day. Sure, I thought, the odd one will have an off day, but this shemozzle seems to be systemic across the so-called ‘climate science’ ‘industry’.

    The ‘industry’ suffers from an appalling lack of rigour and when its failings are pointed out, however gently, it arcs up and spews forth personal and spiteful attacks on those who have the temerity to challenge its orthodoxy, but doesn’t appear to address the problems – just carries on regardless.

    If the same standards were applied in mining or construction enterprises, their sites would be strewn with wrecked and unserviceable equipment surrounded by dead and dying workers.

    It’s pathetic, but what chance the data will be re-visited and accurate reports produced? Bugger all probably.

  7. Posted Nov 16, 2008 at 8:51 PM | Permalink

    How do the overall Siberian satellite temperatures compare to Hansen’s/NOAA’s re-released data for September and October?

    Are the satellite temperatures resolved on a small enough scale that a specific satellite “cell” can be compared against the (supposedly accurate) surface data recorded for the same Siberian city over the same time period?

    I have only seen satellite temperature trends for “mid-troposphere” temperatures: Can Huntsville translate upper/mid atmospheric temperatures to surface temperatures?

    Robert

    Steve: can you deal with this on another more closely related thread?

  8. Dishman
    Posted Nov 16, 2008 at 9:10 PM | Permalink

    So we have a group who claims to be offering a premier data product saying “Not our fault” when said product is in error.

    That’s professional.

  9. jae
    Posted Nov 16, 2008 at 9:22 PM | Permalink

    But they’ve invested nothing in the franchise and the results are sure showing

    Steve, I would probably get snipped for this comment. It is characteristic of the bungling way that government works. I can feel the snip!

  10. braddles
    Posted Nov 16, 2008 at 9:35 PM | Permalink

    Maybe there is a strategy at work here. For a long time, an argument from the catastrophists has been along the lines “if the models disagree with the data, then there must be something wrong with the data”. By creating a dataset that is untrustworthy, perhaps they think this will boost the credibility of the models; the data can no longer invalidate the models, even in principle.

  11. John Andrews
    Posted Nov 16, 2008 at 9:42 PM | Permalink

    From the errors that we are seeing, it looks like there is a lot of hand entry going on. Why? Once the data point is logged, shouldn’t all the rest of the processing be completely computerized? And documented, for that matter, so it can be fixed!

    Hey, maybe a fraction of a percent of that 750 Billion could be used to properly fund and operate Hansen’s project!

  12. Allan
    Posted Nov 16, 2008 at 9:59 PM | Permalink

    Hi Steve,
    I’ve found that people who display apparently irrational hostility (a la Hockey Team) are almost always hiding something important – in a long business career I have seen alcohol abuse, fraud, and utter incompetence – all masked by arrogance and hostility. Now when I encounter this behaviour I immediately start looking for problems, and usually find that something is severely wrong. Looks like you’ve experienced the very same phenomenon.
    Regards, Allan

  13. Hank
    Posted Nov 16, 2008 at 10:02 PM | Permalink

    These people are already convinced. Clearly they don’t see their mission being something as simple as creating a good record. I doubt the problem is lack of resources. Rather it must be a misapprehension of their role.

  14. Ian McLeod
    Posted Nov 16, 2008 at 10:33 PM | Permalink

    Not sure if you have seen this. Looks like the mainstream media is picking up the story. With a second problem now identified, I suspect the story is going to get big play.

    http://www.telegraph.co.uk/opinion/main.jhtml?xml=/opinion/2008/11/16/do1610.xml

  15. Jeff Alberts
    Posted Nov 16, 2008 at 10:39 PM | Permalink

    A new acronym, TODDI

  16. Mike C
    Posted Nov 16, 2008 at 10:55 PM | Permalink

    Steve Mc, It’s nice to see you are narrowing down the problem with the data. My first thought was that it had to do with the Russian data because the problem was limited to Russia. Now that you have eliminated the comrades as the cause of the problem and have eliminated the daily reporting from GHCN, it should be easy to figure out where the problem is.

    As for GISS et al’s decision to absolve themselves of any responsibility, I remain of the opinion that they are irrelevant. But readers might get a clue as to how the GISS word game works: the following link is to the GHCN’s “rigorous” QC testing, as taken moments ago from the NOAA GHCN V2.0 website:

    Global Historical Climatology Network (GHCN) quality control of monthly temperature data, International Journal of Climatology 18: 1169–1179 (1998)

    Click to access ghcn_temp_qc.pdf

    I suppose one might ponder whether the Lost City of Wellington, or the Dawson, Canada, or even the Resolute data were nixed as a part of the GHCN QC checks discussed in the aforementioned (and linked) study. I cannot be sure; I haven’t the wherewithal to check at the moment.

    In addition, any armchair-quarterback-derived QC checks that might have prevented this problem by producing red flags when individual stations report repetitive monthly temperatures are unrealistic because, in my experience, repetitive monthly temperatures are not unusual – and I’ve been to the station originals to be sure. However, there was a note on the blackboard that such digital QC checks are entirely possible, so if this scenario of human-derived repetitive temperatures again rears its ugly head, then the armchair quarterbacks will have a legitimate gripe.

    • Gerald Machnee
      Posted Nov 17, 2008 at 9:19 AM | Permalink

      Re: Mike C (#16),

      In addition, any armchair-quarterback-derived QC checks that might have prevented this problem by producing red flags when individual stations report repetitive monthly temperatures are unrealistic because, in my experience, repetitive monthly temperatures are not unusual – and I’ve been to the station originals to be sure.

      In this case the repetitive monthly temperatures were not from the station reports but came later. I am not sure if you meant station reports as the source of the error.

  17. crosspatch
    Posted Nov 16, 2008 at 11:14 PM | Permalink

    This would seem to point to a problem in the software that populates the monthly values from the various daily files. I am willing to bet a small burger and order of fries that there is something in common with the daily files that causes this to happen. I have seen something similar happen with other data processing. My instincts tell me that this is more likely to happen when there are one or more missing values in the daily file, so a new value must be calculated in some fashion different from simply walking through the daily file. That calculation process blows up and the value is left in its initialized state, which in this case might be the previous month’s value rather than something obvious like 999. If you initialize an array or a variable using the previous month’s value and then the calculation of this month’s value blows up, you are left with last month’s value. Sounds plausible if one opens last month’s monthly file, reads it in, repopulates the values derived from this month’s daily files and writes out the new monthly file. A step that blows up due to a missing value, or a calculation out of some range or another, causes the old value to remain, and the program moves on to the next daily file to process. In other words, bad error handling is what I would be willing to bet the problem is, which is actually very common.
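    A minimal R sketch of that failure mode (entirely hypothetical – the actual NOAA code is not public, and the function and values below are invented for illustration):

    # Seed this month's slot with last month's value; overwrite it only
    # if the daily-to-monthly calculation succeeds. A failure silently
    # leaves the stale value in place.
    update.month=function(prev.val, dailies) {
      new.val=prev.val                # initialized from last month
      m=mean(dailies, na.rm=TRUE)
      if (is.finite(m)) new.val=m     # only overwritten on success
      new.val
    }
    update.month(-32.2, c(-19.1,-18.3,-18.1))  # normal case: -18.5
    update.month(-32.2, numeric(0))            # "blows up": stale -32.2 survives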

  18. crosspatch
    Posted Nov 16, 2008 at 11:18 PM | Permalink

    “in my experience, repetitive monthly temperatures are not unusual”

    True, in tropical regions and in temperate regions in the middle of summer and winter. But repetitive temperatures at high latitudes in spring and fall should be practically unheard of.

  19. Mike C
    Posted Nov 16, 2008 at 11:22 PM | Permalink

    I see that in my last post I failed to identify the GISS word game. That would be: “It’s the other guy’s fault, but I use his data because he performs rigorous quality checks.” Instead they should have said, “It’s the other guy’s fault, and while he performs rigorous quality control checks, he doesn’t look for the repetitive monthly data which caused this problem – that’s something I’m going to sit down and write an email to him about.”

  20. Mike C
    Posted Nov 16, 2008 at 11:23 PM | Permalink

    crosspatch, it was more like Michigan and New Hampshire

  21. TerryBixler
    Posted Nov 16, 2008 at 11:30 PM | Permalink

    Probably a string to array issue.

  22. Northern Plains Reader
    Posted Nov 16, 2008 at 11:32 PM | Permalink

    I wonder what the level of effort and cost would be to create an independent global surface temperature analysis? It is easy to hit NOAA and NASA year after year for their mistakes and their unwillingness to actually research each station that they use in their analysis to ensure that they are free of non-climatic influences. Computer automation is a great tool, but it is only as smart as those who are programming it. Instead of visiting each site to see the non-climatic effects for themselves, they try to get a computer to figure it out for them.

    Given that NOAA and NASA are hostile to constructive criticism, should we find a way to come up with our own analysis, using only confirmed quality stations that haven’t moved into a new microclimate and away from urban areas?

  23. Steve McIntyre
    Posted Nov 16, 2008 at 11:37 PM | Permalink

    In my opinion, the key thing is getting the institutions that are funded to do the job to do the job well. The starting point would be to have professional data people doing it, rather than people like Hansen and Jones who’d rather be doing something else.

  24. Mike C
    Posted Nov 16, 2008 at 11:38 PM | Permalink

    Northern Plains Reader, you probably have more toes than there are surface stations that are free of urban, microclimate or irrigation biases and also free of the usual quality issues (station moves, insufficient station history, equipment problems, etc.). And that is assuming that you lost one foot in the war.

  25. crosspatch
    Posted Nov 16, 2008 at 11:39 PM | Permalink

    Mike C, again, I would expect repetitive temperatures to happen in, say, July and August or January and February. But not in September / October or October / November or April / May. Maybe very rarely, but not commonly. If you think of average temperatures as being almost a sine function, sure, it is easy to get two values the same on either side of the peak but practically impossible to get identical values on either slope and spring and fall are when temperatures are in that rapid slope. In tropical regions, temperatures might not vary much through the year so they don’t count.

    But I am only saying what I would expect to find; real data showing that spring/fall temperatures can be identical as far north as Michigan would be enlightening, though (particularly if the locations are moderated by some large body of water) not impossible. Take someplace like North Dakota, where April average temperatures are about 18 degrees warmer than March, but June, July, and August can be nearly identical. It is the spring and fall temperatures at higher latitudes that I would expect to be easier to spot.
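    That intuition is easy to check numerically with an idealized sine-cycle station (the numbers here are invented for illustration):

    # Idealized monthly normals: annual sine cycle, trough in January,
    # peak in July.
    months=1:12
    clim=-10+25*sin(2*pi*(months-4)/12)
    round(diff(clim),1)
    #  3.3  9.2 12.5 12.5  9.2  3.3 -3.3 -9.2 -12.5 -12.5 -9.2
    # Adjacent months differ by only ~3 deg C near the extremes but by
    # ~12 deg C on the spring and fall slopes, where a repeated monthly
    # value should stick out.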

  26. Mike C
    Posted Nov 16, 2008 at 11:46 PM | Permalink

    I gotcha, crosspatch, but if you read the QC procedures I linked in my previous post, you’ll see that they have serious issues first. Resolving the real easy stuff (like catching non-existent heat waves that were essentially caused by a typo) is nothing compared to what they have had to deal with. Read it; the one that I get a kick out of is trying to homogenize a record with no other stations within 1200 km. Sorry, cracks me up. In the meantime, looking for repetitive temps may be possible, but it is still armchair quarterbacking on this occasion… but next time ol’ Mike will not be whispering common sensibilities in your ear… GISS and company will deserve everything they get thrown at them.

  27. crosspatch
    Posted Nov 17, 2008 at 12:45 AM | Permalink

    “GISS and company will deserve everything they get thrown at them.”

    Yeah, I imagine so. In this case it seems to me that NOAA has a broken computer program that they have probably had for quite some time. I would guess that will get fixed shortly. The larger question of why stations are being left out with the claim that data are unavailable, when it is clear that they are available, is more troubling. It is more troubling still to think that people who seem to think that AGW is such a huge crisis wave off any criticism with a smug “the science is settled” when it comes out that apparently very little real science is being done. I believe it was someone on this blog who posted a comment about “torturing the data until it tells the ‘truth’”.

  28. Mike C
    Posted Nov 17, 2008 at 1:44 AM | Permalink

    Gee Willis, a bit more vulgar today than usual. Did the girls out there on the island get ya some old scotch that they dug up from where the Marines forgot it?
    Anyways, look at your post, and notice that you, like Hansen, fail to mention the specific QC. Too many folks are using QC in a vague context. Neither of you is doing the science any good. At least Steve and Anthony have delved into the causes and solutions.
    But not to dismiss you. Perhaps you can demonstrate some constructive energy and show where the issue of repetitive temps has been a problem before. Because the armchair quarterbacking and Hansen bashing give the impression of severe polarization.

    • Willis Eschenbach
      Posted Nov 17, 2008 at 5:55 AM | Permalink

      Re: Mike C (#30), thanks for your reply. You say:

      Gee Willis, a bit more vulgar today than usual. Did the girls out there on the island get ya some old scotch that they dug up from where the Marines forgot it?

      Anyways, look at your post, and notice that you, like Hansen, fail to mention the specific QC. Too many folks are using QC in a vague context. Neither of you is doing the science any good. At least Steve and Anthony have delved into the causes and solutions.

      But not to dismiss you. Perhaps you can demonstrate some constructive energy and show where the issue of repetitive temps has been a problem before. Because the armchair quarterbacking and Hansen bashing give the impression of severe polarization.

      Naw, Marines might leave a dead jeep behind but there’s two things they would never leave on the battlefield … the body of a brother-in-arms … and a bottle of scotch.

      The first thing you have to understand is that this is not like QC on a satellite download stream, looking for hidden errors. We are talking about a particular type of error here called “scribal errors”. These are errors due to the scribe – the person who wrote them down or transcribed them (either directly or by way of computer).

      All of these errors, including your example of the doubled data points and the recent error of one month reported twice, are very common in scribal records of all types. Transposed numbers, numbers from one station reported as another, negative reported as positive, I find these in my manager’s reports all the time. It’s not like they have to invent something new.

      Nor are they dealing with huge masses of data. There are only eighteen hundred stations or so. If the QC software flags a couple dozen of them, the QC guy can spend a half hour checking each one, and still have three days left in the week to invent new ways to error trap the data …

      How would I error trap it? What kind of monthly QC would I subject it to? Haven’t thought that through, but here’s the list off of the top of my head. (This is the list for the monthly QC, and not the one-time original look at the entire station record for QC errors.)

      I would start by analyzing the historical data for each station. For each station, I would look at the monthly data. I would take the mean and standard deviation of the temperature, the change in temperature month over month (∆T), and the acceleration of temperature (∆∆T).

      I would then compare the reported T, ∆T, and ∆∆T values to the historical averages for that month. I’d flag any that were over say 3.5 SD or so. (I’d run the program to set the exact SD cutoff value.)

      Next, since duplication is always a problem, I’d put in a flag to catch “this month = last month”. (This flag may not be necessary. The unlikely duplicates will be caught by the previous test, and the others may well be valid data.)

      Next, I’d fit an ARMA model that is 12 months deep (to include seasonal variations) to the temperature record. I would run that model month by month over the length of the dataset, to see how well the model performed on historical data. I would take the mean and standard deviation of the monthly error between the predicted and actual results.

      Finally, I would use that ARMA model to predict the most recent month’s temperature, and I would compare that to the reported temperature. If the prediction and the reported temperatures were too far apart, I’d flag the data.
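      A rough R sketch of the first two of those checks (the station history, step history and cutoff below are invented for illustration; the ∆∆T and ARMA tests would extend the same pattern):

      # Flag a new monthly report against this calendar month's history:
      # z-score on T, z-score on the month-over-month step dT, and a
      # same-as-last-month duplicate check.
      qc.flags=function(hist.T, hist.dT, prev, new.val, cutoff=3.5) {
        c(T.outlier=abs(new.val-mean(hist.T))/sd(hist.T) > cutoff,
          dT.outlier=abs((new.val-prev)-mean(hist.dT))/sd(hist.dT) > cutoff,
          duplicate=(new.val==prev))
      }
      # Hypothetical Resolute-like April history and March-to-April steps:
      aprils=c(-22.1,-19.4,-20.8,-18.9,-21.5)
      steps=c(11.3,13.0,12.1,12.8,11.6)
      qc.flags(aprils, steps, prev=-32.2, new.val=-32.2)  # all three flags trip
      qc.flags(aprils, steps, prev=-32.2, new.val=-18.5)  # the true value passes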

      Now, that’s the result of just ten minutes thought on the problem. Give me a week and I’ll think of a bunch more …

      The point to me is simple. We are paying somebody good money to QC the incoming data. They are failing at the job. I look at it as a businessman. Considering that they are asking us to bet billions of dollars that their numbers are correct, I would suggest that they should look at it in the same light.

      That’s all.

      w.

    • RomanM
      Posted Nov 17, 2008 at 6:01 AM | Permalink

      Re: Mike C (#30),

      But not to dismiss you. Perhaps you can demonstrate some constructive energy and show where has the issue of repetitive temps been a previous problem?

      Reading Gavin’s apologia on RC:

      There were 90 stations for which October numbers equalled September numbers in the corrupted GHCN file for 2008 (out of 908). This compares with an average of about 16 stations each year in the last decade (some earlier years have bigger counts, but none as big as this month, and are much less as a percentage of stations). These other cases seem to be mostly legitimate tropical stations where there isn’t much of a seasonal cycle. That makes it a little tricky to automatically scan for this problem, but putting in a check for the total number or percentage is probably sensible going forward.

      It appears to me that the existence (and something of the extent) of this problem was already known to GISS. Why would they not have implemented a QC check for it? It isn’t that difficult. Check the monthly means. Are they the same? If so, check the monthly data. That should work pretty well on tropical stations, summer, winter, etc. Of course, maybe it does not make a difference, so why bother…
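      Using the v2.mean layout shown in the post above, such a check is only a few lines of R (a sketch – the field positions assume that fixed-width format, and the corrupted file has since been replaced, so the count will now differ):

      # Count 2008 records whose September and October monthly means are
      # identical - the symptom behind Gavin's 90-of-908 figure. Month m
      # occupies chars 17+5*(m-1) to 21+5*(m-1) of each record.
      v2=readLines("d:/temp/v2.mean")
      rec08=v2[substr(v2,13,16)=="2008"]
      sep=as.numeric(substr(rec08,57,61))   # month 9
      oct=as.numeric(substr(rec08,62,66))   # month 10
      ok=(sep!=-9999 & oct!=-9999)
      sum(sep[ok]==oct[ok])                 # stations with identical Sep/Oct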

  29. Gerry Morrow
    Posted Nov 17, 2008 at 1:46 AM | Permalink

    #12:”Hi Steve,
    I’ve found that people who display apparently irrational hostility (a la Hockey Team) are almost always hiding something important – in a long business career I have seen alcohol abuse, fraud, and utter incompetence – all masked by arrogance and hostility. Now when I encounter this behaviour I immediately start looking for problems, and usually find that something is severely wrong. Looks like you’ve experienced the very same phenomenon.
    Regards, Allan”

    Allan, you may be right about the Hockey Team, I don’t know, but I have a theory and it is this. Up until 1998 the climate community had beavered away without scrutiny. They produced charts showing a MWP and LIA, and showed world temperatures on a daily and monthly basis, and we all said, “Very nice, interesting that.” Then one day they proclaimed: “The end of the world is nigh; you must all do as we say, or you are doomed.”

    With a sudden jolt some of the rest of the world woke up and said, “What? How have you worked that out?” This was like a bucket of ice-cold water being thrown over the climate community. Their work came under scrutiny, Steve and Ross comprehensively destroyed the hockey stick’s credibility by looking at the methodology, and from then on the climate community, at least the alarmists, went on the back foot. They had assumed that the previous acceptance of all they said without question would continue. Everything has come under scrutiny and they have tried a number of strategies to get away from it: “They work for oil companies”, “There is a scientific consensus,” “They’re not climate scientists,” “You can’t see the data or methodologies,” and now, “A mistaken 12 deg C anomaly is a storm in a teacup.” I believe their work may always have been this sloppy, but no one cared before they announced the end of the world; having had no scrutiny before, and having possibly caused the greatest scare in our history, they simply don’t know how to react.

  30. Danny Butterworth
    Posted Nov 17, 2008 at 2:46 AM | Permalink

    I do love it when Steve takes the pit bulls to obedience school.

  31. steven mosher
    Posted Nov 17, 2008 at 3:04 AM | Permalink

    A short overview and some pointers to GISS and NOAA QC.

    http://rankexploits.com/musings/2008/pile-it-higher-and-deeper-can-a-phd-turn-mt-washington-into-mt-everest/#comment-6779

    Bottom line: GISS QC is almost all done manually prior to processing. Largely undocumented, except to say that the “cleaning process” doesn’t change things. At NOAA, the only documentation you have is a figure in a 1998 paper.

    Free the code.

  32. Posted Nov 17, 2008 at 4:04 AM | Permalink

    The bigger issue is, of course, why NOAA and NASA have been unable to update the majority of their network for 20 years.

    Presumably because Hansen has better things to do: like taking 12 interviews per day, commuting 170 miles per day (I wonder if he’s carbon neutral) or explaining to incredulous witnesses (though not to the prosecuting barrister) how one small coal-fired power station in Kent will cause the extinction of 400 species.

    There’s only so much that one man can do.

  33. Scott Brim
    Posted Nov 17, 2008 at 4:58 AM | Permalink

    Steve McIntyre: In my opinion, the key thing is getting the institutions that are funded to do the job to do the job well. The starting point would be to have professional data people doing it, rather than people like Hansen and Jones who’d rather be doing something else.

    But if someone else is independently responsible for the work and is doing that work according to some accountable form of QA-ed process and procedure, then Hansen and Jones are no longer in control of the process, is that not so?

  34. Brooks Hurd
    Posted Nov 17, 2008 at 6:20 AM | Permalink

    The problem as I see it is that people at NOAA and NASA involved in climate science may have what they call QC, but they have no QA program.

    A well-constructed QA program will oftentimes reduce the need for QC (inspecting). The reverse is never true. If an organization relies only on QC, then, on average, 2% of errors will escape detection. The NASA space program has a good quality program in place; however, Hansen and Schmidt are clearly not part of it.

    • Jedwards
      Posted Nov 17, 2008 at 11:09 AM | Permalink

      Re: Brooks Hurd (#36),

      Actually I’d posit that what they have isn’t really either, or at least the QC is only directed toward the bulk product, and not necessarily the details.

      Here’s an analogy for what I see them doing. Suppose I own a manufacturing outfit that makes washers (the little round widgets that you use with screws and bolts). Now these washers are shipped out of the factory in boxes of 5000. How do I know each box has 5000 washers? I weigh them. (What, you didn’t think I counted them by hand, did you?) This means that each box of “5000” actually holds 5000 +/- 100. That’s good enough for pricing, and to be sure that these numbers are correct, I occasionally hand count several sample boxes. Well now, over time I’ve started noticing that the average number of washers per box is going down. In order to keep my profits where they should be, I implement an adjustment to the weighing process. So now instead of 5000 washers weighing 5 lbs, I use 5.05 lbs. Two years later, that changes to 5.12 lbs. Three years later, 5.23 lbs. Etc.

      Now some un-named auditor comes in and starts looking at everything. They tell me the problem isn’t with the weight of the boxes, but rather with the washers themselves. It seems the dies I was using to punch the holes in the washers have started becoming worn out, and instead of making washers with holes that are .5 inches in diameter, the holes are now .475 inches in diameter. The extra material was what was making the washers heavier, and this had probably been occurring over time. My QC process didn’t catch it because I was only QC’ing the bulk output. Oh sure, some un-named workers in the plant had been trying to tell me “watt’s up” with the washers themselves, but that wasn’t my department, so I just ignored it.

      Well, the upshot of this little exercise is that my customers had started noticing the problem with my product and were now shopping for new washers made by my competitors, Real Solid Steel (RSS) Washers and Ultimate Aluminum Hardware (UAH).

      But then this is just a story and in no way resembles anything really happening in Climate Science.

  35. Patrick M.
    Posted Nov 17, 2008 at 6:29 AM | Permalink

    Perhaps a good place to start thinking about how to write a QC check would be to look at how people found this error in the first place. For example, Steve’s approach from the OP:

    I took a look at the Canadian source data for Resolute NWT, which average to monthly values of -32.2 deg C in March 2008 and -18.5 deg C in April.

    # Fetch Environment Canada daily data for Resolute (station 1776) and
    # average the daily mean temperature (column 10) by month:
    url="http://www.climate.weatheroffice.ec.gc.ca/climateData/bulkdata_e.html?timeframe=2&Prov=XX&StationID=1776&Year=2008&Month=10&Day=1&format=csv&type=dly"
    test=read.csv(url,skip=23,header=TRUE)
    round(tapply(test[,10],test$Month,mean,na.rm=T),2)

    #     1     2     3     4     5     6     7     8     9    10    11    12
    # -31.9 -35.3 -32.2 -18.5  -7.1   2.2   5.3   2.3  -5.0 -10.8 -20.0   NaN

    I downloaded the most recent GHCN v2.mean data, unzipped it and looked at the 2008 values in the GHCN-Monthly database. I bolded the March 2008 and April 2008 values, which are identical.

    # Find the 2008 record for Resolute (GHCN id 40371924000) in v2.mean;
    # values are monthly means in tenths of a deg C, with -9999 for missing.
    loc="d:/temp/v2.mean"
    v2=readLines(loc); id=substr(v2,1,11)
    temp=(id=="40371924000"); N=sum(temp)
    v2[temp][N]

    # "4037192400052008 -317 -351 -322 -322  -73   21   53   23  -50 -108-9999-9999"

    The April 2008 value is invalid, being nearly 14 deg C colder than the actual value. I guess an error of 14 deg C is insufficient to engage their “rigorous” quality control.

  36. Basil
    Posted Nov 17, 2008 at 7:58 AM | Permalink

    #27, Mike C.

    trying to homogenize a record with no other stations within 1200 km.

    How is the global mean affected by this smoothing? When I compare 250km vs. 1200km smoothing, I see how the latter fills in huge areas of missing data. Is the global average simply an average of all the gridcells?

  37. deadwood
    Posted Nov 17, 2008 at 8:04 AM | Permalink

    I suspect that until someone invents a better system we will be stuck with the problems of GHCN and GISS. It is quite evident that the current caretakers and collators of the data have little incentive to clean up the data or reporting.

  38. David Jay
    Posted Nov 17, 2008 at 8:27 AM | Permalink

    Jeff:

    I believe I’ll have a toddy, err, TODDI

  39. Ian McLeod
    Posted Nov 17, 2008 at 8:32 AM | Permalink

    The story reached the National Post today, Lorne Gunter’s op-ed Warmest October ever … Not!
    http://www.nationalpost.com/opinion/story.html?id=1083d149-cd70-41be-b8a1-a3f13fd759ec
    It is similar to Christopher Booker’s article in the Telegraph: The world has never seen such freezing heat. http://www.telegraph.co.uk/opinion/main.jhtml?xml=/opinion/2008/11/16/do1610.xml

    Both articles claim Steve is a “Toronto computer analyst”. Looks like Gunter got that info from Booker’s article, when 2 minutes of checking would have clarified Steve’s background. Anyway, it is nice to see the mainstream media picking up the story.

  40. Mike C
    Posted Nov 17, 2008 at 8:36 AM | Permalink

    Willis, my understanding is that all of what you suggest when looking for standard deviations is already done by GHCN, but with the daily data. Now that they have been through this, I suppose they can call it the egghead NOAA typo patch. But it’s still armchair quarterbacking.

    RomanM, I think that while Gavin is playing numbers games to mitigate the number of stations involved, this check was run after the fact.

    Basil, my point was that you need nearby stations to homogenize a station record. My point had nothing to do with the different smoothing techniques, that’s an entirely different issue.

    Deadwood, kinda sorta, but if you read the GHCN QC link that I posted in 16 you will probably walk away with the feeling that the job they have tried to do (produce a meaningful record of global temperature) is an impossible task. In reality, they simply tuck these problems away and fail to mention them as a regular part of their daily lives. But at the end of the day, it’s beyond garbage in / garbage out.

  41. Posted Nov 17, 2008 at 9:01 AM | Permalink

    I notice that NOAA has delayed the release of their Global analysis (here: http://www.ncdc.noaa.gov/oa/climate/research/2008/oct/global.html#introduction). They originally were going to release it last week. Hmmmm…

  42. Carl Gullans
    Posted Nov 17, 2008 at 9:22 AM | Permalink

    Did you see this Steve?

    http://www.telegraph.co.uk/opinion/main.jhtml?xml=/opinion/2008/11/16/do1610.xml

  43. thefordprefect
    Posted Nov 17, 2008 at 9:42 AM | Permalink

    OK then, pull this apart and generate another few hundred comments. Then have a good session denigrating the meteorologists for making the mistake. Then why not speculate on how much this mistake has cost the taxpayer. And finally, rabbit on about how this must be occurring all the time and totally disproves global warming!!!

    http://www.climate.weatheroffice.ec.gc.ca/climateData/bulkdata_e.html?timeframe=2&Prov=XX&StationID=1776&Year=1991&Month=10&Day=1&format=csv&type=dly

    Date/Time Year Month Day Data Quality Max Temp (°C) Max Temp Flag Min Temp (°C) Min Temp Flag Mean Temp (°C) Mean Temp Flag Heat Deg Days (°C) Heat Deg Days Flag Cool Deg Days (°C) Cool Deg Days Flag Total Rain (mm) Total Rain Flag Total Snow (cm) Total Snow Flag Total Precip (mm) Total Precip Flag Snow on Grnd (cm) Snow on Grnd Flag Dir of Max Gust (10’s Deg) Dir of Max Gust Flag Spd of Max Gust (km/h) Spd of Max Gust Flag

    04/11/1991 1991 11 4 -18.1 -27.2 -22.7 40.7 0 0 0 0 4 4 63
    05/11/1991 1991 11 5 18.1 -26.6 -4.3 22.3 0 0 0 T 0 T 3 0 0
    06/11/1991 1991 11 6 -15.2 -18.2 -16.7 34.7 0 0 0.8 0.8 3 11 41

    In case it does not format
    4/11/91=-18.1C max temp
    5/11/91=18.1C max temp
    6/11/91= -15.2C max temp

    By the way, from 1948 to 2008 the Resolute station has recorded a 0.26 deg C/decade temperature rise (best-fit straight line)
    Mike

  44. Craig Loehle
    Posted Nov 17, 2008 at 9:48 AM | Permalink

    In my view, far more serious than the lack of QC is the thousands of weather stations that vanished since 1990 or so, particularly in sparse and critical areas like the southern hemisphere and Russia and China. This is even more important because there are more rural sites missing than urban.

    Steve: As discussed on many occasions and in this post, the stations haven’t “vanished” though this is often repeated. All that’s happened is that GHCN hasn’t collected the data even when it’s available on the internet.

    • Craig Loehle
      Posted Nov 17, 2008 at 11:53 AM | Permalink

      Re: Craig Loehle (#48), To clarify, the stations are still there but have not been updated in the database at NOAA.

  45. thefordprefect
    Posted Nov 17, 2008 at 9:57 AM | Permalink

    From the Russian meteorological web site (from their own data, presumably, i.e. not touched by GISS):

    It’s getting warmer!

  46. Mike C
    Posted Nov 17, 2008 at 10:06 AM | Permalink

    Gerald Machnee, the point I was making was that I have seen repetitive monthly temperatures and I then looked at the station originals to see if it was a mistake and it was not.

  47. stan
    Posted Nov 17, 2008 at 10:10 AM | Permalink

    I think this month’s problem reflects on a much bigger issue of competence and credibility. Stop for a moment and look at the issue from the perspective of GISS managers going back 20 years. The boss has declared to the world that mankind is on a direct collision course with catastrophe. What is that prediction based upon? The work being done at GISS (at least in part). Further, the only way to avoid complete disaster is to embark on a series of actions which would completely change our way of life.

    Assuming that folks working with Hansen back then were reasonably bright, they had to think that their demand that billions of people completely change their lives might possibly result in a little bit of pushback. A few of these folks among the billions might get a little irritated (and not just the evil oil companies). In fact, some day these folks just might want to take a look at the GISS work product that was being used to support the boss’ pronouncements. And since GISS was paid for by the taxpayers, there might even come a day when some senator bearing subpoenas might insist on a review of what GISS did and how it did it.

    So for 20 years now the folks working for Dr. Hansen have known that it was possible people might come looking at their work. Given the enormous stakes involved, it might be a good idea for GISS to have its house in order. At least, one would think that they would be aware of the need to have topnotch quality assurance.

    Well, we still haven’t reached the point where an audit of GISS seems likely. But sometimes we do get brief peeks and glimpses of what goes on behind the curtain there. What we see doesn’t lead to a conclusion that the quality level is very high. Since they should know that some day their work is going to have to stand up to some really serious scrutiny, one has to wonder what they’re thinking. I would expect bright folks to plan ahead a little better than this.

    If, knowing that their work might someday be scrutinized under a really bright light, this is the best they can come up with, it doesn’t produce a lot of confidence in their competence.

  48. Bob Koss
    Posted Nov 17, 2008 at 10:22 AM | Permalink

    Who is to blame for this?

    First some speculation on what happens if we are ever able to control temperature precisely.

    After getting input from all the special interests each weather station will be assigned a temperature to maintain, monitored at an approved CRN-1 location. No longer will anyone care what the temperature anomaly happens to be relative to mid 20th century. Only the year to year anomaly would be of concern. As long as a zero anomaly is maintained everyone would be happy. But, woe be to any failures in this critical mission.

    Some people at GISS will be concerned this might affect employment. They will lobby to get the contract as the high priests of record maintenance, making the claim that they have been doing this type of work for years and already have software that can do the job.

    Skeptics raise concerns about the ability of the software to reliably perform the monitoring in an error free manner. The idea that no validation and verification has ever been performed on the software leaves them, well, skeptical.

    GISS responds, saying V&V is unnecessary since their software has been top notch ever since they corrected those trivialities found by Steve McIntyre and Anthony Watts.

    In an attempt to mollify the skeptics, a simple test is performed in which GISS global surface data for the entire year 2007 is duplicated and used as the base period for generating a temperature anomaly for the time interval January-December 2007. Confidence was high that the anomaly value shown proudly at the top right of the graph would demonstrate just how reliable the software actually is.

    Then they performed the test.

    That’s not photo-shopped. That’s for real. What a hoot!

    In all seriousness, how can it be possible for such an erroneous anomaly value to occur?
    I looked at the data file, and all the valid data locations were zero and the no-data values all 9999, as should be expected. Somehow, it seems the 9999 values are being used when creating the anomaly. Is this a special case, or is it possible for occasional erroneous 9999s to slip into the anomaly calculation when using the 1951-1980 mean?

    Of the 16200 cells, 2844 are 9999; the other 13356 are zero.
    The calculation (2844 * 9999) / 13356 = 2129.167 is a couple of points shy of their anomaly. I don’t know if that might be a hint as to what is happening.
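    (That back-of-envelope number is easy to check in R:)

    (2844*9999)/13356   # 2129.167, a couple of points shy of the map's anomaly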

    I also checked other years using the same setup and results are comparable. Huge anomaly whether 1200km or 250km smoothed.

    V&V required I think. Hope they don’t simply patch it by disallowing the setup or shamming in a zero anomaly.

    Steve: They use -9999 as a placeholder for NA values (ocean screened out in their land-only version). It looks to me like you’re comparing a land version to a land+ocean version – nothing more than that.

  49. Mike C
    Posted Nov 17, 2008 at 10:26 AM | Permalink

    thefordprefect, I went to the GISS site and looked at Resolute for the same time period, and you know what I found? For the same time period GISS shows a slight cooling. That doggone Hansen – now he’s flipped sides and is supporting global cooling! By the way, the fact that those temps were flagged is a sign that the system is working in that respect.

  50. Steve McIntyre
    Posted Nov 17, 2008 at 10:27 AM | Permalink

    I’ve added a mention of the following prior incident (which I should have mentioned in the post) –

    CA reader, Andy, commented here on Sept 23, 2008:

    BTW, GISS temperature data for the finnish towns like Oulu, Kajaani, Kuusamo etc shows exactly the same temperatures for July and August 2008. First time ever seen!

    which was confirmed by Jean S here.

  51. Stan Palmer
    Posted Nov 17, 2008 at 10:37 AM | Permalink

    Does anyone know of a standard that details the procedures to be taken when a significant failure in quality occurs? From what I have seen in recent events, the company or agency is advised to take active ownership of the issue. It is necessary that the public perceive that the company is taking active steps to minimize the effects of the failure and to prevent its recurrence. The CEO becomes the visible face of this effort, to indicate the commitment of the company to identifying the root causes of the issue and correcting them.

    One technique that I cannot see finding favor is to blame a supplier and minimize the failure, while at the same time pointing out the incapacity of the company to detect or be responsible for the error – all while asking for more grant money. This, in itself, could be seen as evidence of the root cause of the failure.

  52. Steve McIntyre
    Posted Nov 17, 2008 at 10:47 AM | Permalink

    #46. Look, I’ve also urged people to take a valium. Personally, I think that there is considerable evidence that it’s warmer now than the 19th century and have said so on many occasions.

    I’ve asked people here not to go a bridge too far in their criticism, as it desensitizes people to other more fundamental issues.

    Does that excuse the abject failure of NOAA and NASA staff to update the vast majority of records within their data base? Particularly when many of the records are available on the internet and even at NOAA (NCDC). Nope.

    You might retort – well, even if they’d collected the data, it wouldn’t change the answer as to the “big” questions. That might very well be the case, but that is not a justification for the failure of people who are relied on to make comprehensive collections of temperature data to carry out that task.

    I’ve also said on many occasions that, if I had a policy job, I would take advice from responsible institutions regardless of what I might think in a personal capacity, but that I would also do what I could to improve the QC of the institutions.

    I think that it’s important to do little things properly as well as big things. Any hockey or basketball coach would say the same thing. If NASA and NOAA or whoever did their jobs and actually collected the temperature data available from New Zealand, Canada, Russia, Bolivia, wherever, rather than sitting on their butts, it might actually improve their case.

    Also it doesn’t justify NASA attacking people who had the temerity to point out an error. The initial post here was ironic and the issue seemed

    • Fish
      Posted Nov 17, 2008 at 3:23 PM | Permalink

      Re: Steve McIntyre (#56),

      Steve,

      Long time lurker, love the blog. You’ve often said that you think the evidence points to it being warmer today than in the 19th century. But that’s not terribly informative, is it? Would you guess that it’s more likely than not that the correct paleotemp graph looks more like the one drawn by Craig Loehle than the one Al Gore uses? (Based on reading this blog for months, I’d guess the answer is yes.) And if so, isn’t the key to find empirical, physical ways (other than by mathematical climate model) to test the hypothesis that AGW is occurring primarily due to an increase in anthropogenic contributions to the atmosphere rather than primarily due to some complex of natural processes?

      Geologists (my undergraduate training) are comfortable with multiple working hypotheses but are always looking for ways to weed out those that can be disproven. What, in your mind, are the best investigational avenues for proving or disproving that GHGs have been the primary driver for the warming conceded to have occurred in the last century?

      From what I’ve read so far, seems like the “divergence problem” should be a focus for further research, but are there others?

      I recognize the purpose of the blog is to audit, but given the scope of the work that you’ve done (amazing!) I’m curious to hear what you’d do if you held the reins and the money to direct a global research effort.

  53. Bob Koss
    Posted Nov 17, 2008 at 10:47 AM | Permalink

    Not at all, Steve. That comes from the GISS maps page, using default settings except for the time frames. No ocean is the default.

    Steve: Hmmm. OK, I replicated your map. It looks to me like an error in their JavaScript, in which one version has -9999 values and another version doesn’t. It’s sloppy.

  54. Bob Koss
    Posted Nov 17, 2008 at 10:57 AM | Permalink

    Tried using land and ocean. Lower anomaly, but still 171.82.

  55. Bob Koss
    Posted Nov 17, 2008 at 11:04 AM | Permalink

    Only changed the following settings.

    Mean period: Annual(Jan-Dec)
    Time interval: begin 2007 end 2007
    Base period: begin 2007 end 2007

    I would expect it to show an anomaly of zero and create a white graph of valid land locations and leave the ocean and empty land data portion gray.
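    Bob’s expectation follows from the definition of an anomaly: the mean over the time interval minus the mean over the base period. A toy Python sketch, with invented station values, of why 2007-vs-2007 should come out to exactly zero wherever valid data exist:

    ```python
    # Hypothetical 2007 monthly means for one station, deg C (values invented)
    t2007 = [-3.1, -1.0, 4.2, 9.8, 15.0, 19.5, 22.1, 21.4, 16.8, 10.2, 3.9, -1.5]

    mean_2007 = sum(t2007) / len(t2007)

    # With base period == time interval, anomaly = mean - mean = 0 by construction
    print(mean_2007 - mean_2007)   # 0.0
    ```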

  56. jae
    Posted Nov 17, 2008 at 11:12 AM | Permalink

    A letter has been sent to NASA on this…

    • Stan Palmer
      Posted Nov 17, 2008 at 12:29 PM | Permalink

      Re: jae (#61),

      A letter has been sent to NASA on this…

      If this was the UK or some other country with a parliamentary system, this could be submitted by an opposition member as a question for Question Period. The government would have no obligation to release any information regarding a NASA code quality assurance program but there would be political consequences if they did not. The newspapers would report the refusal and ask their own questions.

      Is there a US equivalent in which a senator or congressman could submit such a question to NASA?

  57. Ed
    Posted Nov 17, 2008 at 11:17 AM | Permalink

    Powerline Blog has now linked to and discussed the story reported in the Telegraph. Link is below. This site and Drudge get some pretty heavy traffic.

    http://www.powerlineblog.com/

  58. Bob Koss
    Posted Nov 17, 2008 at 11:23 AM | Permalink

    I just noticed that the zonal mean line graph below the global map I posted above has the correct values. Maybe the problem is with the rendering calculations for the global map not using the correct reference for the anomaly value placed on it. Doesn’t inspire confidence.

  59. Steve McIntyre
    Posted Nov 17, 2008 at 11:23 AM | Permalink

    Lorne Gunter wrote an article in the National Post on this, saying that October 2008 was actually the 70th warmest on record. I emailed him asking him where he got this information – he said that they will make a correction saying that this refers to the U.S. only and not to the world – the discrepancy between US and world results being an interesting question that we’ve noted before.

  60. James Erlandson
    Posted Nov 17, 2008 at 11:36 AM | Permalink

    Isn’t it all about accepting responsibility for one’s actions?

    From J. L. Austin, “A Plea for Excuses”, originally published in Proceedings of the Aristotelian Society, 1956-7. (emphasis added)

    In general, the situation is one where someone is accused of having done something, or (if that will keep it any cleaner) where someone is said to have done something which is bad, wrong, inept, unwelcome, or in some other of the numerous possible ways untoward. Thereupon he, or someone on his behalf, will try to defend his conduct or to get him out of it.

    One way of going about this is to admit flatly that he, X, did do that very thing, A, but to argue that it was a good thing, or the right or sensible thing, or a permissible thing to do, either in general or at least in the special circumstances of the occasion. To take this line is to justify the action, to give reason for doing it: not to say, to brazen it out, to glory in it, or the like.

    A different way of going about it is to admit that it wasn’t a good thing to have done, but to argue that it is not quite fair or correct to say baldly ‘X did A’. We may say it isn’t fair just to say X did it; perhaps he was under somebody’s influence, or was nudged. Or, it isn’t fair to say baldly he did A; it may have been partly accidental, or an unintentional slip. Or, it isn’t fair to say he did simply A — he was really doing something quite different and A was only incidental, or he was looking at the whole thing quite differently. Naturally these arguments can be combined or overlap or run into each other.

    In the one defence, briefly, we accept responsibility but deny that it was bad: in the other, we admit that it was bad but don’t accept full, or even any, responsibility.

  61. Mark T.
    Posted Nov 17, 2008 at 11:42 AM | Permalink

    70th? Is that correct or a typo?

    Mark

  62. crosspatch
    Posted Nov 17, 2008 at 12:16 PM | Permalink

    “I think that there is considerable evidence that it’s warmer now than the 19th century and have said so on many occasions.”

    And I believe most people would agree. I would say there is enough evidence to say that it is warmer. I don’t believe there is enough evidence to say that the warming is due to humans or that it is anything other than a natural occurrence. We would have been recovering from the LIA in the 19th century and temperatures would be expected to rise as a result. It seems apparent that the recovery ended in the 1930’s and temperatures cooled until the mid/late 1970’s, then warmed again until 1998 (though not as warm as the 1930’s) and have probably begun to cool again. The latest cycle of warming and cooling since the 1930’s would appear on the surface to be closely related to oceanic oscillations (PDO, ENSO, etc).

    Nobody doubts that things are warmer; I don’t believe that is the nature of the dispute. The crux of the dispute is whether it is caused by humans, whether the degree of change is outside of what naturally happens in such cycles, and whether there is anything we can really do that would change it by any measurable amount. There are people who want to use a conclusion that we are burning up the atmosphere in order to enact draconian measures of government control over energy and industry. The first thing in doubt is whether there is even a problem; the second is whether anything we do will make a difference.

    The AGW proponents have not, to my personal satisfaction, been able to show that they have answered either of these questions. And the closer the data they use to bolster their argument is examined, the more problems seem to be found with it. They cannot expect to be taken seriously until they themselves take their own data and methods seriously. It isn’t a matter of “believing in” or not “believing in” AGW. It is a matter of nobody having shown with any degree of certainty that any warming is a result of human activity beyond local land-use changes and not natural activity. Nothing I see so far supports the notion that human industrial activity is any major mover of the global climate of Earth, and therefore we can spend until we are completely out of money and it won’t make any difference. Climate is going to change over time no matter how “green” we are.

    • MC
      Posted Nov 17, 2008 at 1:03 PM | Permalink

      Re: crosspatch (#68), Another sad day for reason then. My beef is that as a scientist, in particular an empirical scientist, I dispense with a lot of the guff and just get down to the basics. People think science is complicated. It’s not. It just takes the gumption to stand out from the crowd and say I’ll repeat this myself and see what I get.
      In this case I would say:
      How well can you make a measurement? How did you do it? What sort of algorithm did you use in the analysis? (All common themes in this thread)
      Here’s a thought experiment: Say tomorrow NASA / NOAA says “right, stuff it, here is a manual on how we do our GISTEMP” etc. They don’t have to provide code, just how you would code it. So everyone goes off and makes their own code and downloads the data, does the analysis, and so on. Then they compare it to NASA. If the majority of people get the same result then it’s natural to assume that most people will be content to believe NASA and a lot fewer people will continue to do their own double checking. If on the other hand the method is flawed, then the blogosphere will provide a very quick reality check and suggestions. There will be in both cases a convergence to what we should use and hopefully full transparency in the process. Then we will see what the data says and have to accept it for what it is.
      I have also thought about an appropriate term for what I am if I had to pigeon-hole myself in a ‘climate’ camp. I’m not a climate denier or a climate alarmist. I would be a ‘climate empiricist’ or a ‘climate realist’, which leads to the term… you can see it coming… a ‘real-climatist’! Sorry Steve for the OT.

  63. Bill Larson
    Posted Nov 17, 2008 at 12:22 PM | Permalink

    One thing I learned in Air Force ROTC: “You can delegate authority, but you cannot delegate responsibility.” This is the essence of many comments above. GISS is trying to delegate responsibility.

  64. Mark T.
    Posted Nov 17, 2008 at 12:32 PM | Permalink

    Yup, 70th. This illustrates the media bias even when they’re pointing out evidence that goes against the AGW hypothesis. When speaking in terms of “ranking,” it seems to make sense that a number like 70th warmest out of 114 should instead be listed as 45th coolest.

    Mark

  65. Mark T.
    Posted Nov 17, 2008 at 12:34 PM | Permalink

    Hmm, it could even be NOAA’s bias, and the media simply reporting what they said. Hard to glean from the article snippet I read.

    Mark

  66. mpaul
    Posted Nov 17, 2008 at 12:35 PM | Permalink

    #55

    http://nodis3.gsfc.nasa.gov/displayDir.cfm?Internal_ID=N_PD_1280_0001_&page_name=main

    NASA is ISO 9000/9001 certified. This means that their major processes must be ISO 9000 compliant. Is this a major process? Well, given the amount of money involved and the significance of the decisions being made based on the output of this process, I’m sure an ISO 9000 auditor would find this to be a significant process. It’s clear to me that this process is not compliant. A non-compliant major process that does not have a documented corrective action plan is a reason for decertification.

    A complaint to their ISO 9000 internal auditor would likely be taken seriously.

    • Jonathan
      Posted Nov 17, 2008 at 12:51 PM | Permalink

      Re: mpaul (#73), note clause 2 on the web page you cite. Not clear that GISS is covered.

  67. Mark T.
    Posted Nov 17, 2008 at 12:53 PM | Permalink

    As an engineer who has gone through ISO certification, as well as written documents to support such certification, I must say that I think it has very little credibility or value. In fact, I lump it right up there with practices such as TQM (oh my).

    Mark

  68. Pierre Gosselin
    Posted Nov 17, 2008 at 2:25 PM | Permalink

    The Keystone Kops would be challenged to outperform what NOAA has overseen thus far. I’ve never seen a system of data processing that has been so sloppy, arbitrary and indifferent regarding collection, verification, processing and presentation of data as the one you have described here. Sorry Steve, but I’m sticking to my “shambles” characterisation. What else could it be called? A prestigious organisation with a few unfortunate fundamental flaws?

  69. Hank
    Posted Nov 17, 2008 at 2:30 PM | Permalink

    LEST WE FORGET

    Sept 23, 1999 – NASA effort flummoxed by subcontractor:

    http://en.wikipedia.org/wiki/Mars_Climate_Orbiter

  70. Pierre Gosselin
    Posted Nov 17, 2008 at 2:32 PM | Permalink

    Concerning ISO certification, would not GISS be required to have a QM Handbook? They’d also be required to archive all quality inspection protocols etc. I have a feeling QM auditors would have a field day with this organisation.

  71. Mark T.
    Posted Nov 17, 2008 at 2:49 PM | Permalink

    The nifty thing about the documentation required by ISO: there are no specific requirements on what the policy must be, only that one exists and is followed. If the policy is nothing more than “acknowledge that we collect data,” it is sufficient as long as they acknowledge collection of data. It is a pointless certification in place only to guarantee the jobs of those that implement it. My views of ISO are cynical at best. It was nothing more than a huge waste of time that meant absolutely nothing in the end.

    Mark

  72. Pierre Gosselin
    Posted Nov 17, 2008 at 2:50 PM | Permalink

    “In my opinion, the key thing is getting the institutions that are funded to do the job to do the job well. The starting point would be to have professional data people doing it, rather than people like Hansen and Jones who’d rather be doing something else.”

    You seem to be advocating the need for profound personnel change.

    Here you’re talking about changing the entire fundamental philosophy of the institutions. That is going from cherry-picking data and cooking results until they fit particular models and scenarios, to a philosophy of systematic and accurate collection and processing of data to show how the climate is really behaving.

  73. Sam Urbinto
    Posted Nov 17, 2008 at 2:55 PM | Permalink

    If the daily data for the stations in question is okay but the monthly is/was not, at the GHCN site, it clearly is “someplace” in between the two that is in question. Of course, if the methods were all clearly described and/or implemented, it should be a simple process to find the point(s) and repair them.

    If you were trying to produce the best product you could, wouldn’t you welcome the assistance of somebody who’s said

    if I had a policy job, I would take advice from responsible institutions regardless of what I might think in a personal capacity, but that I would also do what I could to improve the QC of the institutions.

    The entire issue perplexes me.

    On the other hand, causes and quantifications are debatable, and the meaning of trends of .1 or .3 or .8 is reasonably questionable. I’m fairly certain most are aware that my contention is that global mean temperature anomalies are probably not the best way to track whatever has been going on in the past 100 or so years. But it’s also clear that the satellite readings also show a rise in near surface readings.

    All that aside, what is not really too difficult to grasp is that there are various processes going on (sun activity, long-term weather patterns, planetary physics, biosphere and hydrosphere variations, et al) that contribute to whatever the anomaly reflects.

    Is it getting warmer? I’d say yes. Why? 7 billion people and their cars, cities, farms, animals and so on. Seems both logical and obvious; and probably true. Of course, it could be mostly those long-term weather patterns, coupled with current temperature sensors, siting issues, station coverage, data algorithms and computer processing methods/devices.

    What exactly is it anyone’s denying? That we know with certainty everything about the tiny frame of time we’re considering for a highly variable complex natural system? I’d say so. Is there a reason to deny (not agree with) that we know exactly what to do about what we know and think we know and know we don’t know about the climate?

  74. Eric Anderson
    Posted Nov 17, 2008 at 3:23 PM | Permalink

    “Is there a reason to deny that we know exactly what to do about what we know and think we know and know we don’t know about the climate?”

    Rumsfeld would be proud.

  75. jae
    Posted Nov 17, 2008 at 3:24 PM | Permalink

    Kinda ironic. The government requires me to establish and maintain QA/QC Plans for all the data I am required to track on pollution control devices, under the threat of severe penalties. But life isn’t fair…

  76. Jim Turner
    Posted Nov 17, 2008 at 4:11 PM | Permalink

    All of the people I mentioned this subject to at work today (mostly PhD scientists, interested in current affairs, and to the left of me politically) were completely unaware of the subject of this discussion. Several people here have linked to Christopher Booker; for the benefit of those outside the UK, he writes for The Sunday Telegraph, probably the most right-wing of the ‘serious’ national newspapers, and his column in particular would almost certainly be dismissed as a ‘right-wing rant’ by the centre-left, let alone the left.

    PLEASE DON’T SHOOT THE MESSENGER(!), my point is that this story seems not to have broken yet, at least in the UK. I expect we will need to wait until next Sunday’s papers to see if it ever will; after that I suspect it will just be old news. Maybe if Hansen was sacked it might just be newsworthy (or maybe not).
    Incidentally, guess what the ITV (not BBC) national news ran with this evening – polar bears in crisis! (at least they accepted that not everyone agrees).

    • Len van Burgel
      Posted Nov 17, 2008 at 11:46 PM | Permalink

      Re: Jim Turner (#86),

      Our state’s (almost) monopoly newspaper “The West Australian”, unquestionably mainstream, ran the story prominently on page 12 with the headline “Blunder hampers climate monitor”. It was similar to the Telegraph article with attribution in the article to Watts Up With That and Climate Audit.

  77. Dave Andrews
    Posted Nov 17, 2008 at 4:32 PM | Permalink

    NOAA can’t even QC the qualifications of its senior personnel – why should they be able to QC important data 🙂

  78. Posted Nov 17, 2008 at 5:27 PM | Permalink

    Jim Turner–
    I’m not surprised the story doesn’t break big time. It is a NOAA blunder, and Gavin’s rationalizations were nuts. But the only known consequences are a) an incorrect anomaly was posted for a day and b) enough people who already have doubts about GISTemp learned about the problem to reinforce their distrust.

    NASA GISS needs to sort things out with GISTemp. Yearly revelations of mistakes aren’t going to enhance confidence in their product. But this isn’t Chernobyl. It’s not going to make the front pages of the newspapers.

  79. paminator
    Posted Nov 17, 2008 at 5:36 PM | Permalink

    Brit Hume just had a short blurb on the October record temperature goof. Both CA and WUWT were specifically mentioned. Brit referenced the article by Christopher Booker. I think he said that a NASA spokesperson said that NASA did not have sufficient resources to maintain proper quality control over data supplied by outside agencies.

  80. paminator
    Posted Nov 17, 2008 at 5:37 PM | Permalink

    Brit Hume is on Foxnews from 6-7 pm EST.

  81. John S.
    Posted Nov 17, 2008 at 5:52 PM | Permalink

    Willis,

    Loved your comment about what Marines never leave on the battlefield. I was ready to post a few suggestions for screening monthly anomalies, but then I saw you already did my work in your #34 post. I prefer to do my own work on the scotch, however.

  82. Kenneth Fritsch
    Posted Nov 17, 2008 at 6:20 PM | Permalink

    When all is said and done and assuming, per our usual experience in dealing with GISS, GHCN, NOAA and CRU, that the involved organizations are not going to change their modus operandi (simply because they do not have to and/or have no incentive to), why the heck are we not comparing the several available sources of (competing?) temperature data sets?

    The first question to be answered would be: are the results of these data sets, going back to the beginning of the satellite measurements, statistically different in either trend or standard deviation for global and zonal temperatures? The second would be whether it can be shown that any of these data sets are independent and, if not, how they are dependent.

    Audits are fine, but when the audited body has no incentive to react objectively and does not, other than to rather subjectively and emotionally react to personalities, I say it is time to do some analyses. It is, after all, not like we would be providing our own temperature reconstructions.

  83. sky
    Posted Nov 17, 2008 at 6:24 PM | Permalink

    Ho hum! Just another unprrrrecedented event.

  84. John Lang
    Posted Nov 17, 2008 at 6:41 PM | Permalink

    My biggest problem with this is that the people responsible for recording and publishing the temperature records are also the people responsible for running the climate models.

    If the sloppiness in quality control transfers into the climate modelling group, why would we be expected to believe the climate models? If the Russian hot spot can get posted on the net by GISS and NOAA/GHCN without anyone suspecting there is a problem, how are errors in the climate models supposed to be found?

    Secondly, temperatures have not kept up with the trend that the climate models have predicted. In fact, temps are less than half of what was predicted (even after all these errors, positive adjustments, UHI etc.). These agencies that also run the models DO have a motive to let these errors slip through – the negative-impacting errors never slip through.

    • BarryW
      Posted Nov 17, 2008 at 7:43 PM | Permalink

      Re: John Lang (#94),

      If you check the history section on the GISS website you’ll find that the global temperature data was done to “validate” their climate model, and has taken on a life of its own.

  85. Sylvain
    Posted Nov 17, 2008 at 8:55 PM | Permalink

    I’m curious to know why so few stations cause such a large effect globally while the Y2K problem barely made a dent in the graph.

  86. Scott Fraser
    Posted Nov 17, 2008 at 9:34 PM | Permalink

    I don’t plan to spend time doing an inventory of incidents – surely NASA and NOAA have sufficient resources to do that. However, this one incident is sufficient to prove that the present incident is not isolated and that the same problem exists elsewhere in the system.

    The work that goes on here on a daily basis is important and reviews of history would no doubt detract from it, but I wonder if a complete “inventory of incidents” might be necessary if sufficient pressure is to be brought on NASA / NOAA to change their ways. I will leave it to statisticians here present to calculate the probability that NASA would undertake such an effort on their own, but my guess is that the answer is, precisely, zero.

    Perhaps such an inventory of incidents could be written as a journal article (peer reviewed, of course) with suggestions as to how best practices in data quality from industry and accounting professions might be applied to the collection, processing and dissemination of surface temperature data.

    In the private sector, people lose their jobs if financial data integrity is not maintained. They go to jail if it is misrepresented. These temperature records are the socio-political equivalent of corporate financial statements in that, based on these statements and the IPCC’s analysis thereof, we are being asked to invest billions of dollars in AGW mitigation. Is NASA’s carelessness with (or misrepresentation of) these data of lesser significance to the public?

    As a US taxpayer, I’m really PO’ed that our government can’t do better. I am certain that there are politicians in the US who would be happy to confront NASA with an “inventory” the next time they show up on Capitol Hill looking for budget dollars.

    • jae
      Posted Nov 17, 2008 at 9:50 PM | Permalink

      Re: Scott Fraser (#97),

      In the private sector, people lose their jobs if financial data integrity is not maintained. They go to jail if it is misrepresented. These temperature records are the socio-political equivalent of corporate financial statements in that, based on these statements and the IPCC’s analysis thereof, we are being asked to invest billions of dollars in AGW mitigation. Is NASA’s carelessness with (or misrepresentation of) these data of lesser significance to the public?

      As a US taxpayer, I’m really PO’ed that our government can’t do better. I am certain that there are politicians in the US who would be happy to confront NASA with an “inventory” the next time they show up on Capitol Hill looking for budget dollars.

      Agreed. But the world is getting very bizarre, indeed. Nobody seems to care about the truth anymore. I once had a boss who said, “Perception is reality.” I argued with him for years, but he would not budge. It seems that he was “right.” Downright frightening, yes?

      Steve – you’re venting and editorializing.

  87. Tolz
    Posted Nov 17, 2008 at 10:22 PM | Permalink

    As a layman long searching for the “truth” as it pertains to Global Warming, well before this site was born, early on I observed a couple of things: First, it didn’t take long at all to smell a rat from the pro-AGW side of the argument. Too much apparently credible information arguing against a problem and/or alarm was readily available on the internet yet completely absent from mainstream media reporting. To me the silence was deafening. Second, I also thought “Hmm… this IS an important topic and I think we SHOULD study it for ABOUT 100 YEARS OR SO to see whether it is a problem… whether anything should be done about it… and whether anything can be done about it”. But that it would probably take that long to come up with anything cogent, and we definitely shouldn’t jump to any conclusions before then.

    We don’t even have the freaking data right, much less any truly resolved aspect of the “science”. I now am starting to think 100 years was optimistic. Still doesn’t mean it might not be a problem – it just means we’re way, WAY away from being able to conclude it is, and it doesn’t help seeing a blatant political movement trying to ram “solutions” down our throats.

    Issues highlighted here should be part of the normal process of GETTING THE SCIENCE RIGHT for making good decisions years into the future. Let’s hope this all is given time to be properly sorted out.

    Steve: I disagree that you need to have every last detail ironed out in order to make decisions. I also ask people not to editorialize on policy. It’s not that it’s not important. It’s an editorial policy to avoid the blog getting swamped with opinions.

  88. Vincent Guerrini Jr
    Posted Nov 18, 2008 at 3:10 AM | Permalink

    If this ain’t a picture of a massively cooling world, I don’t know what is…

    In hindsight I think the lack of solar activity may be beginning to kick in on the SST. Wonder what Leif has to say on this or… David Archibald LOL

    • Len van Burgel
      Posted Nov 18, 2008 at 5:26 AM | Permalink

      Re: Vincent Guerrini Jr (#102),
      Vincent: I think the Mercator projection gives a misleading picture. The cooling is mostly in Southern latitudes south of 40S. I suspect that the Global SST anomaly, even on this daily chart, is either neutral or slightly positive.

  89. Jim Turner
    Posted Nov 18, 2008 at 3:52 AM | Permalink

    RE: Lucia #88

    Thanks for your response; you are of course correct, ‘Minor Federal Agency Publishes Incorrect Siberian Temperatures Shock!’ is not likely to be a front-page headline. The problem is that polar bears, glaciers and all the other stuff are. I would go further and say that even the educated public is mostly unaware of pretty much everything that has ever been said on this blog. Remember the recent BBC programme that did its best to vindicate the hockey stick.

  90. Pierre Gosselin
    Posted Nov 18, 2008 at 4:16 AM | Permalink

    OT topic I know.
    But in response to Vincent, no. 102:
    Compare the Nov. 17 chart to that from 3 months ago, August 18, 2008.

    The equatorial Pacific near So. America has cooled a lot.

    Could a La Nina be returning?

  91. David
    Posted Nov 18, 2008 at 4:17 AM | Permalink

    Bob Koss has a point (#52, 57-59). Admittedly it’s odd to ask for the anomaly of a year’s temperature using itself as baseline, but you’d expect the result to be zero. The problem isn’t just over the oceans. Trying different time periods you regularly get brown patches over Papua and the Arabian peninsula.

  92. christopher booker
    Posted Nov 18, 2008 at 6:41 AM | Permalink

    Plea for guidance. I am sorry to have offended Ian McLeod by using journalistic shorthand to describe Steve M as a ‘computer analyst’ in my Sunday Telegraph column, which seems to have travelled the world a bit (see above). When I have the space to describe Steve’s role at greater length, as in a recent book where I reconstructed the ‘hockey stick’ saga in some detail, I hope I have done so accurately. But squeezing words into tight newspaper columns enforces concision. Since I am likely to pay tribute to him again in future, could Steve himself please guide me as to the most accurate shorthand description of how he sees himself?

    • kim
      Posted Nov 18, 2008 at 7:05 AM | Permalink

      Re: christopher booker (#107),

      Eye On Climate Ball Keeper.
      ================

    • Scott Brim
      Posted Nov 18, 2008 at 8:08 AM | Permalink

      Re: christopher booker (#107),

      Perhaps a little too long, but accurate nonetheless:

      “The world’s first and currently the only industrial-grade climate science quality assurance auditor.”

  93. stan
    Posted Nov 18, 2008 at 7:24 AM | Permalink

    He’s the anomaly auditing the anomaly advocates.

  94. Posted Nov 18, 2008 at 7:57 AM | Permalink

    Christopher Booker,
    Thanks for covering the story in the Telegraph. In your brief summary you missed out one significant point – after GISS had published their “corrected” data there were still glaring errors that had to be pointed out to them, leading to a second correction. See summary on my page.
    I expect Steve is not offended – he has been called all sorts of things.

  95. Mike Bryant
    Posted Nov 18, 2008 at 8:34 AM | Permalink

    Still no update, even though it says it will be there the 17th.
    Perhaps they meant 17th of some other month?

    http://www.ncdc.noaa.gov/oa/climate/research/2008/oct/global.html#introduction

  96. Posted Nov 18, 2008 at 8:39 AM | Permalink

    the discrepancy between US and world results being an interesting question that we’ve noted before.

    Why is this an interesting question? The U.S. is strongly influenced by Pacific Ocean conditions, particularly weather systems that move eastward, which will be influenced by Pacific Ocean temperatures. And the cold highway from Canada (which I’m sure Mr. McIntyre knows about) is also an important weather/climate influence. Europe is much different (look at the difference in average temperatures at the same latitudes compared to the U.S.) I think it’s rather obvious that U.S. temperatures won’t necessarily track the global trends exactly; I guess that’s why they call it regional climate.

    • Criton
      Posted Nov 18, 2008 at 9:22 AM | Permalink

      Re: Oakden Wolf (#113),

      You obviously must have a strong opinion on the extremely heavy reliance and weighting placed on North American tree ring chronologies in the MBH 1998 global temperature reconstruction. I’d very much like to hear your opinion on it.

  97. STAFFAN LINDSTROEM
    Posted Nov 18, 2008 at 8:59 AM | Permalink

    Steve Mc… Resolute is in NUNAVUT now… [It’s impossible to be faultless when places etc. change names more often than you change shirts…]

  98. William Hughes
    Posted Nov 18, 2008 at 9:22 AM | Permalink

    Any data hand-entered by humans needs to be checked. Depending upon the records kept of the frequency of errors (and how many are found per check), perhaps the check itself should be checked. My engineering background is quite intolerant of unchecked product drifting into the world, to perhaps be constructed and then to maim or kill people.

    What is particularly egregious (nice word – I am using it more) about this error is that it fails the “calibrated eyeball” check on the output. The output makes no sense, so why was there no investigation into it before pushing it out the door?

    These folks are using the public as their quality monitoring system. They allocate no resources to it as indicated above. The errors are understandable and foreseeable given their system.

    The questions are: is there enough expected value in reducing the uncertainty of these quantities (measurement) to justify allocating more resources, and who will pay for it?
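    To illustrate the “calibrated eyeball” as an automated gate, a minimal Python sketch (the thresholds are invented for illustration; a real check would be tuned to station and month climatology):

    ```python
    def eyeball_check(anomaly_c, warn=3.0, reject=10.0):
        """Crude stand-in for the calibrated eyeball: flag outputs a human
        should question before they go out the door."""
        if abs(anomaly_c) >= reject:
            return "reject: physically implausible monthly anomaly"
        if abs(anomaly_c) >= warn:
            return "warn: unusual, needs a second look"
        return "ok"

    print(eyeball_check(0.8))    # ok
    print(eyeball_check(13.7))   # reject (cf. the 13.7 seen in this incident)
    ```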

  99. Mike Bryant
    Posted Nov 18, 2008 at 9:28 AM | Permalink

    I still think the funniest thing about the BS (Big Screwup) was just adding the 13.7 to the end of the scale… That one really had me chuckling…

  100. Posted Nov 18, 2008 at 9:52 AM | Permalink

    I noticed that there are still issues with October GISTemp in northern and western Canada. As of this morning, without the SST data and with 250 km smoothing, virtually all of Canada is blank. However, when the SST analysis is added (and 250 km smoothing), all kinds of colours pop up. The most interesting is the land-locked purple patch. See here:
    http://x9c.xanga.com/a32807f261537221202120/b173514822.jpg and

    • Urederra
      Posted Nov 18, 2008 at 2:01 PM | Permalink

      Re: Fred Nieuwenhuis (#119),
      I also find odd the big discontinuity in Alaska. If you look at the maps you posted, you can see that there is a deep blue patch in central Alaska (a -4 to -8 Celsius degree anomaly). This patch is next to a deep red patch in northern Alaska (depicting a +4 to +8 Celsius degree anomaly). That is a difference of at least 8 Celsius degrees between fairly contiguous territories. That looks like a huge discontinuity to me. Is that normal? Any explanation?

  101. joshv
    Posted Nov 18, 2008 at 10:06 AM | Permalink

    “Why is this an interesting question? The U.S. is strongly influenced by Pacific Ocean conditions, particularly weather systems that move eastward, which will be influenced by Pacific Ocean temperatures. And the cold highway from Canada (which I’m sure Mr. McIntyre knows about) is also an important weather/climate influence. Europe is much different (look at the difference in average temperatures at the same latitudes compared to the U.S.) I think it’s rather obvious that U.S. temperatures won’t necessarily track the global trends exactly; I guess that’s why they call it regional climate.”

    Not to answer for Steve, but I too find it an interesting question. The hypothesis is that CO2 is causing the overall heat content of the earth’s atmosphere and oceans to increase. This does not appear to be reflected in the last 80 years or so of US surface temperature data, the only conceivable portion that could have been significantly influenced by CO2.

    I will grant that the US will have a unique climate, but when compared to itself, 70-80 years ago, we are actually cooler – this suggests that we have some regional climate phenomenon that exempts us from “global” warming effects. I will grant that this could be the case, but I would question the data before I entertained determining the climate mechanism behind such a persistent anomaly.

  102. Dave Andrews
    Posted Nov 18, 2008 at 10:35 AM | Permalink

    Re: Len van Burgel (#106),

    I don’t think it is a Mercator projection but rather a Plate Carree, or Equirectangular Cylindrical, projection. This doesn’t distort as much as you move N or S from the equator as does Mercator’s projection. Thus on the NOAA map Greenland ‘only’ appears to be about 2/3rds the size of South America, whereas Mercator would show it as big as South America despite being merely 1/8th of its size in reality. Likewise Alaska, on a Mercator projection, would be shown as big as Brazil whereas the latter is actually 5 times bigger than Alaska. Again the NOAA projection doesn’t distort as much as that.

    Incidentally, and OT, Mercator’s projection was much loved by NATO cold war hawks because it enabled the depiction of a huge red Soviet Union/Warsaw Pact ‘menacing’ the world.

    • Jeff Alberts
      Posted Nov 18, 2008 at 1:02 PM | Permalink

      Re: Dave Andrews (#120),

      Equirectangular Cylindrical, projection

      I double-dog dare you to say that while eating crackers! 😉

    • Posted Nov 18, 2008 at 1:22 PM | Permalink

      Re: Dave Andrews (#120),

      Re: Len van Burgel (#106),

      I don’t think it is a Mercator projection but rather a Plate Carree, or Equirectangular Cylindrical, projection. This doesn’t distort as much as you move N or S from the equator as does Mercator’s projection.

      True, but the Plate Carree is still unnecessarily distortionary. The Lambert Equal Area Cylindrical projection and other equal-area projections are discussed at length on the 2/12/08 CA thread Equal Area Projections.

      The Lambert Equal Area Cylindrical can be made to have the same average detail as the Plate Carree simply by using a reference latitude of 40.08 degrees, which gives it a 2:1 aspect ratio.

      Another good projection that also preserves N/S position, while not distorting distances so much, is the Mollweide projection, also discussed on the earlier thread. The Lambert Equal Area Azimuthal is good for studying polar views without distortion.

      These superior projections have been around now for over 200 years. The antiquated Plate Carree was first developed by Marinus of Tyre in the 1st c AD. It is unfortunate that Hansen of NASA still clings to it!

  103. Mike Bryant
    Posted Nov 18, 2008 at 10:42 AM | Permalink

    If we had FLIR satellite maps of temperatures like this one,

    it seems that many questions could be answered very quickly, if updated at least daily. Why not let this type of satellite paint the pictures instead of the GISS?

  104. Sam Urbinto
    Posted Nov 18, 2008 at 11:24 AM | Permalink

    Just some things to throw out here.

    Climate researcher
    Mathematician
    Data analyst
    Statistician
    Climate statistician
    Climate-science auditor
    Puzzle solver
    Historic temperature record critic
    Blog author
    The dude in Toronto that must not be named
    Fuzzy-bunny-slipper-wearing climate-auditing dynamo
    Squash gold medalist
    Founder Northwest Exploration Company Limited
    Not a software engineer and Debian developer

  105. Larry T
    Posted Nov 18, 2008 at 12:11 PM | Permalink

    I have been responsible for databases at private, state and federal levels. They were of various types, including scientific, inventory, monetary, and technical. I always had a set of tools which I ran to verify the database and to identify any problems, both on major updates and after normal production runs. I looked for things like missing data, inadvertently modified data, and outliers, and generated reports that I could use to verify/correct problems. I did 2-3 hrs of quality control for every hour spent on updates.
    The most prominent climate change scientists seem like they never thought of any quality control. In fact they seem to give more weight to outlier data than to the data that look valid.
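    A toy version of the kind of verification pass Larry describes, in Python (the checks and thresholds are illustrative only; the Resolute values come from the GHCN table quoted later in this thread, converted from tenths of a degree):

    ```python
    def qc_report(station, monthly):
        """Post-update checks: missing values, gross outliers, and
        identical consecutive months (the 'carryover' signature)."""
        issues = []
        for i, v in enumerate(monthly):
            if v is None:
                issues.append(f"month {i + 1}: missing")
            elif not -60.0 <= v <= 60.0:
                issues.append(f"month {i + 1}: gross outlier ({v})")
            elif i > 0 and monthly[i - 1] == v:
                issues.append(f"months {i}/{i + 1}: identical values (carryover?)")
        return [(station, msg) for msg in issues]

    # Resolute 2008, Jan-Jun: note the repeated March/April value
    print(qc_report("RESOLUTE", [-29.5, -34.5, -31.3, -31.3, -3.0, 5.2]))
    ```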

  106. Bob B
    Posted Nov 18, 2008 at 1:11 PM | Permalink

    Mike, or you could use the daily UAH readings:

    http://discover.itsc.uah.edu/amsutemps/

  107. Mike Bryant
    Posted Nov 18, 2008 at 1:39 PM | Permalink

    They have updated the NCDC page for historical perspective.

    http://www.ncdc.noaa.gov/oa/climate/research/2008/oct/global.html#introduction

  108. Mike C
    Posted Nov 18, 2008 at 1:54 PM | Permalink

    NCDC is also showing land temps to be the warmest October on record… someone tell me they didn’t miss the memo

  109. Criton
    Posted Nov 18, 2008 at 2:51 PM | Permalink

    On a related matter, Columbia University’s Earth Institute just issued a press release touting a new paper by James Hansen and nine co-authors. The press release can be found here.

    http://www.eurekalert.org/pub_releases/2008-11/teia-cda111808.php#

    The link to the actual article did not work. I had to chuckle at this particular paragraph.

    “The scientists say now that CO2 needs to be reduced to the level under which human civilization developed until the industrial age—about 350 parts per million (ppm)—to keep current warming trends from moving rapidly upward in coming years. The level is currently at 385 ppm, and rising about 2 ppm each year, mainly due to the burning of fossil fuels and incineration of forests. As a result, global temperatures have been creeping upward. The authors say that improved data on past climate changes, and the pace at which earth is changing now, especially in the polar regions, contributed to their conclusion. Among other things, ongoing observations of fast-melting ice masses that previously helped reflect solar radiation, and the release of stored-up “greenhouse” gases from warming soils and ocean waters, show that feedback processes previously thought to move slowly can occur within decades, not millennia, and thus warm the world further. Once CO2 gas is released, a large fraction of it stays in the air for hundreds of years.”

  110. Posted Nov 18, 2008 at 4:11 PM | Permalink

    Criton: I don’t really have an opinion on the MBH 1998 reconstruction and North American tree rings. Certainly the Pacific would be an influence on them, particularly western bristlecones. That seems rather obvious. Post 65 was apparently discussing modern temperature (weather station) records. As an obvious example, North and South America are more influenced by El Nino and La Nina events than the rest of the world. This affects precipitation, and I think tree rings are influenced by precipitation, so that would be a problem (already described, hasn’t it been?)

    joshv: Regional temperatures related to climate are probably influenced by many factors. I believe extra water vapor generated by warm Pacific waters has been indicated as a factor resulting in more cloud cover over the U.S., which would likely be an inland cooling factor. Of course I’ll say that until someone demonstrates a better explanation.

  111. Derek
    Posted Nov 18, 2008 at 4:11 PM | Permalink

    snip – I ask people both not to editorialize on policy and not to vent angrily

  112. Frédéric
    Posted Nov 18, 2008 at 4:30 PM | Permalink

    Re: Derek (#136),

    This sort of political comment is wholly inappropriate and should not be allowed.

  113. Mike Bryant
    Posted Nov 18, 2008 at 4:38 PM | Permalink

    The anomaly maps are really stretching credibility, to put it mildly…

  114. mpaul
    Posted Nov 18, 2008 at 5:14 PM | Permalink

    re: #133

    “The scientists say now that CO2 needs to be reduced to the level under which human civilization developed until the industrial age”

    This is a perfectly logical conclusion if you believe that CO2 is creating a runaway condition in the atmosphere. But before they start culling the population, it would be comforting to know that someone did a bit more than 1/4 FTE of due diligence on the data.

    Maybe the press could take a look. For those journalists who read this blog, here are the simple questions you can ask: (1) where does this data come from (what’s the complete chain of custody), (2) how exactly does the data get adjusted before it ultimately gets published by NASA, and who does the adjusting, (3) what checks exist to ensure that the data is high quality, that the adjustments are proper and that those adjusting the data are free from conflicts, and (4) how much transparency is there in the process?

    Make sure to fact check the answers.

  115. Posted Nov 18, 2008 at 6:17 PM | Permalink

    #30 Gerry Morrow Yes yes, and put it in the context of Lomborg’s statistical realism, telling how the whole environmental movement has over recent decades gotten worse and worse scare stats that are simply untrue, e.g. “we will lose about half the world’s species in a generation” – no, actually about 0.7% in 50 years.

    #50 TheFordPrefect – glad to see these graphs; what does the difference between winter and summer trends signify?

    #133 Criton quoting new Hansen paper “Once CO2 gas is released, a large fraction of it stays in the air for hundreds of years” – er, this looks completely wrong, since the annual CO2 flux is about a quarter to a fifth of the total atmospheric CO2, and the biosphere and marine fauna (calcium carbonate) like to gobble up any spare CO2. But Hansen is not a master of accuracy…

    • Willis Eschenbach
      Posted Nov 18, 2008 at 8:06 PM | Permalink

      Re: Lucy Skywalker (#142), thanks for the post. You say:

      #133 Criton quoting new Hansen paper “Once CO2 gas is released, a large fraction of it stays in the air for hundreds of years” – er, this looks completely wrong, since the annual CO2 flux is about a quarter to a fifth of the total atmospheric CO2, and the biosphere and marine fauna (calcium carbonate) like to gobble up any spare CO2. But Hansen is not a master of accuracy…

      You are conflating two things here – residence time, and e-folding (or half-life) time. You are referring to residence time, which is how long an individual molecule stays in the air. This, as you point out, is somewhere on the order of five years. It is easy to calculate, and there are a couple of ways to do it, so that figure is pretty solid.

      E-folding time, or half-life time, is very different. This is an indirect measure of how long a pulse of CO2 injected into the atmosphere will take to decay back down to the pre-pulse value. Half-life time is how long it takes to decay to half the original value. E-folding time is how long it takes to decay to 1/e of its original value. They both measure the same thing, but in different ways.
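      For reference, the underlying decay law and the fixed relation between the two measures (a standard identity, stated here for clarity):

      ```latex
      C(t) = C_0 \, e^{-t/\tau}, \qquad t_{1/2} = \tau \ln 2 \approx 0.693\,\tau
      ```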

      E-folding time is much more difficult to calculate, as the change between different e-folding values is slight and our data is short and poor. I have calculated it myself, and got about 40 years. Jacobson puts it at 30 – 40 years.

      Hansen and the IPCC use something called the “Bern Carbon Model”. This is a bizarre mathematical construct which figures that part of the CO2 decays with a short e-folding time (2.5 years), part of it decays at a medium e-folding time (18 years), and a small amount of it has an e-folding time of 171 years! I don’t know how they figure that happens in the real world (how does a molecule know?), but that’s how the math works.

      Unfortunately, because of lack of data, at present there is no way to say whether the Bern model, or the more straightforward model used by myself and also by Jacobson, fits the data better.

      Hope this helps,

      w.

  116. dh
    Posted Nov 18, 2008 at 6:19 PM | Permalink

    to Criton #133

    Re:Hansen-10 paper

    If you truncate the address to http://www.eurekalert.org/, it takes you to The Open Atmospheric Science Journal web page. Hunt around a little and you will find the Hansen-10 paper, which you will be able to download.

  117. Steve McIntyre
    Posted Nov 18, 2008 at 6:54 PM | Permalink

    An obvious legend for this picture that I should have thought of – the Hunt for Red October.

    • Gerald Machnee
      Posted Nov 18, 2008 at 11:08 PM | Permalink

      Re: Steve McIntyre (#144),
      Following on from “Gavin Schmidt: ‘The Processing Algorithm Worked Fine’”

      Gerald Machnee:
      November 12th, 2008 at 7:49 pm

      The season was short in “The Hunt for Red October”.
      New title.

  118. Mike C
    Posted Nov 18, 2008 at 9:57 PM | Permalink

    If CO2 had a residence time in the atmosphere of hundreds of years, then it would be impossible to have an annual sawtooth shape in the Keeling curve.

  119. Steve McIntyre
    Posted Nov 18, 2008 at 10:15 PM | Permalink

    Folks, if you want to discuss carbon residence times, please start with some exposition of the original papers. Opinions are not very helpful until the underlying theories are properly collated and presented.

  120. Stan Palmer
    Posted Nov 18, 2008 at 10:17 PM | Permalink

    If CO2 had a residence time in the atmosphere of hundreds of years, then it would be impossible to have an annual sawtooth shape in the Keeling curve

    I believe that sawtooth curve represents CO2 being taken from and given back to the atmosphere on an annual basis by vegetation. This signal would persist even if the CO2 concentration were otherwise stable, and it is not sufficient to exhaust the atmosphere of CO2.

    What the residence time means is that there is a source and a sink of CO2 that is not derived from vegetation. If the source stopped producing CO2, the sink is of such magnitude that the concentration of CO2 would fall to 0 in several hundred years.

    Think of a capacitor with a source of charge and a resistive path to ground. Superimpose an AC signal onto this capacitor. The human contribution would be another source of charge.

    Steve: As I asked before, please discuss this sort of thing in the context of the published literature. I’m not interested in personal theories or analogies without it, because there’s no context.

  121. Patrick Peake
    Posted Nov 19, 2008 at 3:37 AM | Permalink

    The West Australian newspaper on 18 Nov has an interesting article, “Blunder hampers climate monitor”. It notes that CA and WUWT picked up Hansen’s October errors. The article points out that this is not the first time Hansen’s methodology has been questioned, referring to the hottest decade of the 20th century being the 1930s, not the 1990s. The article also noted the amazing claim made in Australia by Rajendra Pachauri that “global temperatures have recently been rising very much faster than ever” when they had been falling.

    Maybe the press is starting to pick up on the message that there needs to be some serious rethinking of processes.

  122. Nick
    Posted Nov 19, 2008 at 3:52 AM | Permalink

    An interesting paper on residence times can be found at

    http://folk.uio.no/tomvs/esef/ESEF3VO2.htm

    Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the “Greenhouse Effect Global Warming” dogma. (Tom V. Segalstad, Mineralogical-Geological Museum, University of Oslo).

    The paper provides a good explanation of the various methods used to measure residence times.

    Table 2 in the document provides the following, by author [publication year], with residence time in years:

    Based on natural carbon-14:
    Craig [1957]: 7 +/- 3
    Revelle & Suess [1957]: 7
    Arnold & Anderson [1957]: 10
    – including living and dead biosphere (Siegenthaler, 1989): 4-9
    Craig [1958]: 7 +/- 5
    Bolin & Eriksson [1959]: 5
    Broecker [1963], recalc. by Broecker & Peng [1974]: 8
    Craig [1963]: 5-15
    Keeling [1973b]: 7
    Broecker [1974]: 9.2
    Oeschger et al. [1975]: 6-9
    Keeling [1979]: 7.53
    Peng et al. [1979]: 7.6 (5.5-9.4)
    Siegenthaler et al. [1980]: 7.5
    Lal & Suess [1983]: 3-25
    Siegenthaler [1983]: 7.9-10.6
    Kratz et al. [1983]: 6.7

    Based on Suess Effect:
    Ferguson [1958]: 2 (1-8)
    Bacastow & Keeling [1973]: 6.3-7.0

    Based on bomb carbon-14:
    Bien & Suess [1967]: >10
    Münnich & Roether [1967]: 5.4
    Nydal [1968]: 5-10
    Young & Fairhall [1968]: 4-6
    Rafter & O’Brian [1970]: 12
    Machta (1972): 2
    Broecker et al. [1980a]: 6.2-8.8
    Stuiver [1980]: 6.8
    Quay & Stuiver [1980]: 7.5
    Delibrias [1980]: 6.0
    Druffel & Suess [1983]: 12.5
    Siegenthaler [1983]: 6.99-7.54

    Based on radon-222:
    Broecker & Peng [1974]: 8
    Peng et al. [1979]: 7.8-13.2
    Peng et al. [1983]: 8.4

    Based on solubility data:
    Murray (1992): 5.4

    Based on carbon-13/carbon-12 mass balance:
    Segalstad (1992): 5.4

    I hope readers find this information of use.

  123. Steve McIntyre
    Posted Nov 19, 2008 at 9:28 AM | Permalink

    CO2 residence time is not a topic that I’ve ever brought up here and it falls in the class of topics that I’d prefer not to be discussed at this time until I’m familiar with the issues. Take it to the Bulletin Board if you must discuss it.

    As I observed above, you have to start with the standard papers and see exactly what they say. At this site, I’m interested in looking at conventional papers relied upon by IPCC.

  124. Sam Urbinto
    Posted Nov 19, 2008 at 11:08 AM | Permalink

    The IPCC says it varies and there isn’t one specific lifetime. No need to discuss it. 🙂

    IPCC TAR WGI technical summary C.1: “Atmospheric lifetime 5 to 200 yr — No single lifetime can be defined for CO2 because of the different rates of uptake by different removal processes.”

    IPCC FAR WGI technical summary TS.2.1: Carbon dioxide does not have a specific lifetime because it is continuously cycled between the atmosphere, oceans and land biosphere and its net removal from the atmosphere involves a range of processes with different time scales.

    The note for the lifetime-of-gases table is:

    The CO2 response function used in this report is based on the revised version of the Bern Carbon cycle model used in Chapter 10 of this report (Bern2.5CC; Joos et al. 2001) using a background CO2 concentration value of 378 ppm. The decay of a pulse of CO2 with time $t$ is given by $a_0 + \sum_{i=1}^{3} a_i\, e^{-t/\tau_i}$, where $a_0 = 0.217$, $a_1 = 0.259$, $a_2 = 0.338$, $a_3 = 0.186$, $\tau_1 = 172.9$ years, $\tau_2 = 18.51$ years, and $\tau_3 = 1.186$ years, for $t < 1{,}000$ years.
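    To make that response function concrete, here is a minimal Python sketch evaluating it with the coefficients quoted in the note (the coefficients are from the text above; the script itself is illustrative):

    ```python
    import math

    # Bern2.5CC pulse-response coefficients as quoted in the AR4 note above
    A0 = 0.217
    A = [0.259, 0.338, 0.186]       # a1, a2, a3
    TAU = [172.9, 18.51, 1.186]     # tau1, tau2, tau3, in years

    def airborne_fraction(t):
        """Fraction of a CO2 pulse still airborne after t years (t < 1000)."""
        return A0 + sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

    for t in (0, 20, 100, 500):
        print(t, round(airborne_fraction(t), 3))
    # Starts at a0 + a1 + a2 + a3 = 1.0 and decays toward the a0 = 0.217 floor
    ```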

    ——————

    Moore, B.; Braswell, B.H., “Lifetime of excess atmospheric carbon dioxide”, Global Biogeochemical Cycles, Vol. 8, No. 1, 23-38 (Mar 1994).

    The authors explore the effects of a changing terrestrial biosphere on the atmospheric residence time of carbon dioxide using three simple ocean carbon cycling models and a model of global terrestrial carbon cycling. We find differences in model behavior associated with the assumption of an active terrestrial biosphere (forest regrowth) and significant differences if we assume a donor-dependent flux from the atmosphere to the terrestrial component (e.g., a hypothetical terrestrial fertilization flux). To avoid numerical difficulties associated with treating the atmospheric carbon dioxide decay (relaxation) curve as being well approximated by a weighted sum of exponential functions, we define the single half-life as the time it takes for a model atmosphere to relax from its present-day value half way to its equilibrium sub p CO2 value. This scenario-based approach also avoids the use of unit pulse (Dirac Delta) functions which can prove troublesome or unrealistic in the context of a terrestrial fertilization assumption.

    —————-

    Archer, David (2005), “Fate of fossil fuel CO2 in geologic time”, Journal of Geophysical Research 110 (C9):

    Caldeira, Ken; Wickett, Michael E. (2005), “Ocean model predictions of chemistry changes from carbon dioxide emissions to the atmosphere and ocean”, Journal of Geophysical Research 110 (C9)

  125. crosspatch
    Posted Nov 19, 2008 at 11:33 AM | Permalink

    snip -editorializing on policy

  126. Posted Nov 19, 2008 at 3:40 PM | Permalink

    Re # 128, 132: NCDC/NOAA is still (Wednesday) sticking by their story of Monday that Oct 2008 has the warmest land temperatures in 129 years (+1.12 dC), with the NH 3rd warmest at +1.14 dC and the SH 2nd warmest at +1.06 dC.

    Are they using the invalid data GISS got from them, or are these figures based on corrected data?

    If the former, are there invalid data from the SH as well as NH causing both hemispheres to appear spuriously warm? I thought the GISS/CRU problem was concentrated in the NH.

  127. John A
    Posted Nov 19, 2008 at 5:10 PM | Permalink

    The Register has this story (and mentions both Anthony and Steve): http://www.theregister.co.uk/2008/11/19/nasa_giss_cockup_catalog/

    • Jonathan Schafer
      Posted Nov 19, 2008 at 8:07 PM | Permalink

      Re: John A (#155),

      Steve and Anthony were also mentioned the other day on the Mark Levin show, based on the Times article. They were referenced indirectly on the Glenn Beck show as well.

  128. Posted Nov 19, 2008 at 5:20 PM | Permalink

    sorry Steve, guess I started the CO2 residence stuff by drawing attention to Criton. I’ve started a thread on this in our forum here and though I cannot start with “the standard papers” unless people list them, I’m interested in trying to understand the process better.

    Steve: I’ve not followed the topic, so you’ll have to research the issue. I’m pretty sure that Bert Bolin’s claim to fame was a couple of papers in the 1960s purporting to explain how the atmosphere could build up CO2 – I’d suggest fully understanding his viewpoint. The critics may or may not be right, but be fair about it.

  129. Deep Climate
    Posted Nov 23, 2008 at 3:06 AM | Permalink

    I did a quick check on the latest GHCN monthly dataset (v2.mean.Z archived today)

    I sorted the 800 stations that had data for all 10 months and checked for possible Mar-Apr and Sep-Oct carryovers (i.e. 0 difference between those months; a sketch of the scan appears at the end of this comment). These three northern Canadian stations are definitely wrong in April (this includes Resolute, previously found by Steve).

    (Reminder: Divide by 10 to get actual temp averages, i.e. -286 = -28.6)

    StationID__ Name_________ Jan Feb Mar Apr May Jun Jul Aug Sep Oct
    403 71081 HALL BEACH,N. -286 -366 -308 -308 -36 16 59 38 -2 -67
    403 71926 RESOLUTE,N.W -295 -345 -313 -313 -30 52 134 104 23 -38
    403 71924 BAKER LAKE,N -317 -351 -322 -322 -73 21 53 23 -50 -108

    These two are possible carryover errors in Oct., but they would need to be confirmed by actual calculation of the average from station data:

    309 84628 LIMA-CALLAO/A 225 230 237 203 174 177 186 177 175 175
    207 42933 AKOLA (India)… 228 227 292 325 347 311 283 273 276 276

    After that, there are about a dozen zero-difference records that are likely chance occurrences in the tropics, e.g.:
    149 63862 DODOMA (Tanz.) 238 224 220 220 215 200 198 208 220 244

    I haven’t checked all the months, but I did notice that the Finland July-August carryover error appeared to be gone.

    Tentative conclusions:
    – The “carryover” error appears to occur in clusters within countries.
    – There are still some such errors in the 2008 record, but most likely a small number of them, and the effect is probably insubstantial for the most part (but of course needs to be fixed)
    – Any fix in averaging should be applied to all 2008 data, and previous years should perhaps be examined (I’ll look at 2007 when I have a chance)
    – The sparse northern stations may have an upward effect on the April anomaly when corrected.
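
    For anyone who wants to repeat this, here is a minimal sketch of the carryover check in Python. The v2.mean fixed-width layout assumed here (11-char station ID, 1-char duplicate flag, 4-char year, then twelve 5-char monthly means in tenths of deg C, with -9999 for missing months) should be verified against the v2 format documentation, and the example output is hypothetical.

        MISSING = -9999

        def parse_record(line):
            """Split one v2.mean line into (station, year, 12 monthly values)."""
            station = line[0:11]
            year = int(line[12:16])
            temps = [int(line[16 + 5 * m : 21 + 5 * m]) for m in range(12)]
            return station, year, temps

        def find_carryovers(path, year=2008):
            """Flag stations where two adjacent months have identical averages."""
            hits = []
            with open(path) as f:
                for line in f:
                    station, yr, temps = parse_record(line)
                    if yr != year:
                        continue
                    for m in range(11):
                        if temps[m] != MISSING and temps[m] == temps[m + 1]:
                            hits.append((station, m + 1, temps[m] / 10.0))
            return hits

        # e.g. find_carryovers("v2.mean") might yield ("40371926000", 3, -31.3),
        # i.e. Resolute with March and April both at -31.3 deg C.

    As the conclusions note, a zero difference between adjacent months can also happen by chance (especially in the tropics), so the output is a list of candidates to verify against source data, not confirmed errors.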

  130. Deep Climate
    Posted Nov 23, 2008 at 12:25 PM | Permalink

    Finland – summer 2008 (again divide by 10 for actual)
    StationID___ Jun Jul Aug Name
    61402836000 108 134 102 SODANKYLA
    61402869000 110 130 100 KUUSAMO
    61402875000 122 148 126 OULU
    61402897000 120 142 119 KAJAANI
    61402929000 129 157 130 JOENSUU
    61402935000 127 150 125 JYVASKYLA
    61402958000 136 163 141 LAPPEENRANTA
    61402963000 138 163 140 JOKIOINEN
    61402972000 145 170 145 TURKU
    61402974000 145 173 148 HELSINKI/SEUTULA

    Speculation:
    Perhaps the bug in the averaging module manifests in a particular situation in the data (perhaps missing records) but disappears with later, more complete datasets. Or … the bug has been fixed, but the averaging has not been redone as far back as March 2008.

    Since GHCN continues to update past records as new data arrive, at least some of the past monthly averages have to be redone each month. I don’t know whether this is done only for stations flagged with new data or globally; so far, I haven’t seen this level of detail in the NCDC documentation.

  131. Posted Jan 26, 2010 at 3:04 AM | Permalink

    Hansen and the IPCC use something called the “Bern Carbon Model”. This is a bizarre mathematical construct which figures that part of the CO2 decays with a short e-folding time (2.5 years), part of it decays with a medium e-folding time (18 years), and a small amount of it has an e-folding time of 171 years! I don’t know how they figure that happens in the real world (how does a molecule know?), but that’s how the math works.

    This is a rather common situation in electronics, where it is called capacitor soakage. It is not bizarre at all in some situations. For first-order approximations it is seldom modeled. It does come up in sample-and-hold circuits, where the sample time is short and the hold time is very long. Depending on the capacitor type and the times involved, it can be a non-issue or it can ruin your month.
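
    As a concrete illustration of the sum-of-exponentials idea, here is a minimal sketch using the e-folding times from the comment above (2.5, 18 and 171 years); the pool weights below are illustrative assumptions, not the official Bern coefficients (the actual Bern response also keeps a constant fraction that effectively never decays).

        import math

        # E-folding times taken from the comment above; the weights are
        # assumed fractions per "pool", chosen only for illustration.
        WEIGHTS = [0.3, 0.4, 0.3]
        TAUS    = [2.5, 18.0, 171.0]   # years

        def airborne_fraction(t):
            """Fraction of a unit CO2 pulse still airborne after t years."""
            return sum(w * math.exp(-t / tau) for w, tau in zip(WEIGHTS, TAUS))

        for t in (1, 10, 50, 100):
            print(t, round(airborne_fraction(t), 3))

    On the “how does a molecule know?” point: the usual reading is that the three terms describe exchange with reservoirs of different turnover times (mixed layer, deep ocean, and so on) rather than properties of individual molecules – which is exactly why the capacitor-soakage analogy fits.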

5 Trackbacks

  1. […] http://www.climateaudit.org/?p=4370 […]

  2. […] http://www.climateaudit.org/?p=4370 […]

  3. […] system vital to climate-related public policy. However, events have shown this to be false (see this, for example). I doubt lack of funding is the problem, given the relatively tiny sums […]

  4. By The End of CRUTEM? « Climate Audit on Jan 30, 2010 at 4:24 PM

    […] again in a couple of 2008 posts here here noting (in this case GISS’) inability to locate Canadian station data: How hard can it be to […]

  5. […] Warmest March ever in Finland A year and half ago Steve recalled some encounters with NASA GISS. One could imagine that after all that embarrassment the quality […]