Watch the Ball

NASA spokesman Gavin Schmidt announced at realclimate that Hansen et al had fixed the Russian (and other) data, following corrections to the data made by their supplier (NOAA GHCN). Even though errors of over 10 deg C had occurred over the world’s largest land mass, the correction only reduced the GISS October temperature by 0.21 deg C (from 0.86 deg C to 0.65 deg C) and still left a large “hot spot” over Siberia. Schmidt reported at realclimate (which increasingly seems to be NASA’s method of communicating with the public):

The corrected data is up. Met station index = 0.68, Land-ocean index = 0.58, details here. Turns out Siberia was quite warm last month.

Actual temperatures in much of the lurid “hot spot” will average a balmy minus 40 deg C and lower over most of the next few months. Olenek or Verhojansk sound like ideal venues for large-scale gatherings of climate scientists.

The GISS website states that changes were made to incorporate corrected GHCN files (the new file timestamped 12.58 pm today):

2008-11-12: It seems that one of the sources sent September data rather than October data. Corrected GHCN files were created by NOAA. Due to network maintenance, we were only able to download our basic file late today. We redid the analysis – thanks to the many people who noticed and informed us of that problem.

Now look closely at the two figures below and see what else you notice.

   

Left: NASA GISS as of Nov 10, 2008; right – as of Nov 12, 2008.

All of a sudden, a “hot spot” has developed over the Canadian Arctic Islands and the Arctic Ocean north of North America, that wasn’t there on Monday (it was gray on Monday). A smaller hot spot also developed over Australia.

I had downloaded the GHCN file on Monday (and saved it). I downloaded the GHCN file once again today and checked for stations that had October values today but not on Monday. All but two were in Australia, with the other two also in the SH.
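For anyone who wants to replicate the comparison, here is a minimal sketch (purely illustrative): it assumes the usual GHCN v2.mean fixed-width layout (12-character station/duplicate id, 4-digit year, then twelve 5-character monthly values in tenths of a degree, with -9999 for missing) and uses placeholder filenames for the two saved downloads.

def october_2008_stations(path):
    # Return the set of station ids with a non-missing October 2008 value.
    stations = set()
    with open(path) as f:
        for line in f:
            stn, year = line[:12], line[12:16]
            if year != "2008":
                continue
            october = line[16 + 9 * 5 : 16 + 10 * 5]   # 10th monthly field
            if october.strip() and october.strip() != "-9999":
                stations.add(stn)
    return stations

# Filenames are placeholders for the Monday and Wednesday downloads.
new_only = october_2008_stations("v2.mean.wed") - october_2008_stations("v2.mean.mon")
print(sorted(new_only))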

I haven’t crosschecked the Australian data but at least there’s some new data to support this part of the change. There was no new information from GHCN on the Canadian Arctic Islands. So what accounted for the sudden hot spot in the Canadian Arctic Islands??

Why can Hansen obtain values for October in the Canadian Arctic Islands today when he couldn’t on Monday?

Maybe NASA spokesman Schmidt can explain exactly how.

Update Nov 13, 11.30 am: NASA spokesman Gavin Schmidt, complying with Hansen’s policy not to mention my name, provided the following answer on how Hansen “fixed” the problem:

However, between last friday (when GISTEMP downloaded the first GHCN data) and today (thursday), stations in Australia and northern Canada were reported. People claiming on other websites that Oct data for Resolute, Cambridge Bay and Eureka NWT are not in the latest download, should really check their files again. (To make it easy the station numbers are 403719240005, 403719250005 and 403719170006). Why anyone would automatically assume something nefarious was going on without even looking at the numbers is a mystery to me. None of these people have any biases of their own of course.

If data for the three Canadian sites were added between Friday, Nov 7 and Monday, that would explain matters. Gavin’s accusation that the above question was asked “without even looking at the numbers” is another fabrication by a NASA employee. I reported that I had compared stations in the GHCN download before the issue was public and after the issue was public. The Canadian data was in both data sets. According to NASA spokesman Schmidt, it appears that NASA used an even earlier version of the GHCN data. That answers the question.

It is also reasonable to inquire as to whether changes in methodology had occurred. In September 2007, after the Y2K problem, Hansen et al changed their methodology without any notice or announcement, with the effect of once again reversing the order of 1934 and 1998. Determining that they had changed data sources from SHAP to FILNET wasn’t easy, and a change in sources or method could hardly be ruled out in this instance, though NASA has now said that this was not the case.

113 Comments

  1. Steve McIntyre
    Posted Nov 12, 2008 at 11:36 PM | Permalink

     Anthony has a post on this here which I read after finishing the above post. Anthony noticed that the color scale also changed between the two graphs. Posters at WU also noticed both the changes in Canada and Australia.

    In my post, I’ve carried out the additional precaution of comparing Monday’s GHCN data to today’s GHCN data and confirmed that no relevant new northern Canadian data had entered GHCN.

  2. Mike C
    Posted Nov 12, 2008 at 11:36 PM | Permalink

    This is an area where GISS could use some improvement. They need to get a grasp on and highlight the word PRELIMINARY.

  3. Bruce
    Posted Nov 12, 2008 at 11:54 PM | Permalink

     If you take 3 from column A you’ve got to put 3 in column B to compensate and get to the final predetermined number.
    snip

  4. Jared
    Posted Nov 13, 2008 at 12:16 AM | Permalink

    It still doesn’t add up.

     1. Siberia, the Arctic, and most of Asia are still considerably warmer than the satellites indicate.

     2. If huge portions of the Arctic were running 4-8C above normal, there is simply no way the Arctic ice would have recovered as fast as it did in October – a record amount of ice growth for that month.

    But at least GISS no longer has October 2008 as “warmest ever”…

  5. Posted Nov 13, 2008 at 12:24 AM | Permalink

    I wonder how the corrections changed the area around Great Britain. Does the formula cause Siberia to influence England?

  6. Posted Nov 13, 2008 at 12:53 AM | Permalink

    I also think that most of the remaining month-on-month GISS warming (contrasting with a slight satellite cooling) is bogus, too.

     The newer picture has about as much orange/positive anomaly as blue/negative. Yet the “newly learned” temperatures (in Australia and Northern Middle Canada) happen to be positive. Clearly, there must exist some freedom in whether individual regions are included as colorful or gray. And it seems that the choice is always made in such a way as to increase the anomaly for the newest months. Maybe they return them to gray/cooler readings for the months in the past.

     The September=October bug was easy to find and too hard to handwave away. However, there must obviously exist many other errors that are less manifest. Not all errors have “signatures” of this kind. And note that we are talking about errors in 2008, which should be much less likely due to advanced technology and experience. What about similar (and dissimilar) errors in 1908? Because these errors generate up to 0.3 °C of fake warming per month, I find it perfectly sensible to imagine that the 0.7 °C warming per century can be fake, too. At most, it is a “two sigma” signal that can easily be a result of mistakes and coincidences.

     This is about the doubts about the very existence of global warming. What about its origins and predictions, which are much harder to measure? There is no thermometer that measures man-made warming only, and no thermometer that measures the weather in 2058 already today. Why would a sensible politician trust people who can’t look at a thermometer and send the number to another station (and who are surely unable to do so repeatedly) in their predictions of weather in 2058?

  7. Mike C
    Posted Nov 13, 2008 at 1:30 AM | Permalink

    “Why can Hansen obtain values for October in the Canadian Arctic Islands today when he couldn’t on Monday?”

    That area always comes in a little later than the initial data.

  8. Phillip Bratby
    Posted Nov 13, 2008 at 1:44 AM | Permalink

    How come a grey area is always replaced by a positive anomaly?

  9. DJA
    Posted Nov 13, 2008 at 2:15 AM | Permalink

    Australia has been teleconnected to Siberia, what a total farce! Quote Wikipedia
    Farce “aims to entertain the audience by means of unlikely, extravagant, and improbable situations, disguise and mistaken identity”
    No doubt, this farce should be on the stage, science it is not.

    • Demesure
      Posted Nov 13, 2008 at 2:52 AM | Permalink

      Re: DJA (#9),
      The teleconnection sucks the heat from England (around the Hadley Center) and releases it into the Australian bush.
      The British Isles have been relieved of a big warming, thanks to the GISS.

  10. bernie
    Posted Nov 13, 2008 at 3:02 AM | Permalink

    This reminds me of the novel by Solzhenitsyn, “We Never Make Mistakes”.

  11. Demesure
    Posted Nov 13, 2008 at 3:03 AM | Permalink

     A funny “analysis” by the NSIDC about the October Arctic sea ice (emphasis mine), which forgot to check the news at CA:

    Near-surface air temperatures in the Beaufort Sea north of Alaska were more than 7 degrees Celsius (13 degrees Fahrenheit) above normal and the warming extended well into higher levels of the atmosphere. These warm conditions are consistent with rapid ice growth.

    They should have said “these warm conditions are consistent with a blunder in QC at the GISS/NOAA which inflates the Beaufort Sea temperatures”.

  12. thefordprefect
    Posted Nov 13, 2008 at 3:58 AM | Permalink

    All of a sudden, a “hot spot” has developed over the Canadian Arctic Islands and

     The world is a globe. The map is flat! The squares of temperature are presumably equal area? Therefore at the poles on this projection the squares will appear large.

  13. Magnus
    Posted Nov 13, 2008 at 4:03 AM | Permalink

    http://global-warming.accuweather.com/2008/11/microwave_temperature_images_f_1.html

     The new Canada and Australia figures seem to fit in well with the patterns of the RSS October data, but of course this just looks like a farce. Who can take GISS seriously?

  14. Jean S
    Posted Nov 13, 2008 at 4:07 AM | Permalink

    one of the sources sent September data rather than October data

     This is a scary explanation, since we know the same error was (at least) in the Finnish data in the September revision (July=August). Was that also due to the same source sending wrong data? How many other times have “sources” sent in duplicate data which has gone unnoticed? How do we know that GHCN data is not seriously corrupt?

     Just a guess, but the Canadian “hot spot” that developed between Monday and Wednesday might be related to this (bold mine):

    April 2006: HadISST ocean temperatures are now used only for regions that are identified as ice-free in both the NOAA and HadISST records. This change effects a small number of gridboxes in which HadISST has sea ice while NOAA has open water. The prior approach damped temperature change at these gridboxes because of specification of a fixed temperature in sea ice regions. The new approach still yields a conservative estimate of surface air temperature change, as surface air temperature usually changes markedly when sea ice is replaced by open water or vice versa.

    Maybe either one was not available for Canadian arctic (hence gray) on Monday.

  15. Posted Nov 13, 2008 at 4:14 AM | Permalink

    Look here to see the startling visual difference possible between global temp anomaly maps, according to what message you want to convey. Humlum compares to 1998-2006 baseline, NASA to 1951-1980. Thus NASA says IT’S HOTTER FOLKS without any outright porkies. Didn’t Oz have a cool year this year? oh perhaps not in comparison with 1951-1980? The NASA pics here have no decent SST – all grey – and we know SST has been dropping overall recently.

    The heat has been increasing, not in the troposphere but in the noosphere, as Crichton described in his classic speech Aliens Cause Global Warming where he says those memorable words “Consensus is the first refuge of scoundrels”. For decades, environmental science has got away with serious misrepresentation of data, and Climate Science is only the latest arena for a growing malfeasance.

    I hope episodes like this – even if they are of themselves “innocent” – help flush the generic issues of deception and suppression in Science out into the open.

  16. EW
    Posted Nov 13, 2008 at 4:39 AM | Permalink

     They also acquired in the meantime some data from the Pacific, east of Australia, that were gray on Nov 10.

  17. Posted Nov 13, 2008 at 4:45 AM | Permalink

     Mr Schmidt has posted to say that the Canadian data was there all along; see #128 on the thread at RC. I wonder?

  18. AlanB
    Posted Nov 13, 2008 at 4:45 AM | Permalink

     Worthwhile looking at GISS’s Polar Projection for the October anomaly here.
    This shows the curious North America-Eurasia split quite clearly.

    • AlanB
      Posted Nov 13, 2008 at 5:36 AM | Permalink

      Re: AlanB (#21),

      • David L Hagen
        Posted Nov 13, 2008 at 7:15 AM | Permalink

        Re: AlanB (#25), Thanks for the visual. Could the residual temperature anomaly have any anti-correlation with the Soviet Gulag? (Or with population density?)

  19. Nylo
    Posted Nov 13, 2008 at 4:55 AM | Permalink

     Gavin Schmidt wrote at RC (#128 on that thread):

    People claiming on other websites that Oct data for Resolute, Cambridge Bay and Eureka NWT are not in the latest download, should really check their files again. (To make it easy the station numbers are 403719240005, 403719250005 and 403719170006).

     Who claims something isn’t somewhere? I thought that the question was rather why the plot was incomplete on Monday, if data from the stations hasn’t changed in Canada.

    • Urederra
      Posted Nov 13, 2008 at 8:56 AM | Permalink

      Re: Nylo (#22),

      Gavin Schmidt wrote at RC (#128 on that thread):

      People claiming on other websites that Oct data for Resolute, Cambridge Bay and Eureka NWT are not in the latest download, should really check their files again. (To make it easy the station numbers are 403719240005, 403719250005 and 403719170006).

      Geeezzz…. Hi Gavin, why don’t you say the names of those websites? How are we supposed to believe what you are saying if you hide the names of those websites from us? It looks like you are fond of hiding data.

    • Michael Jankowski
      Posted Nov 13, 2008 at 10:56 AM | Permalink

      Re: Nylo (#22), bingo. He’s just wording things carefully. If you were a blind follower at RC, you wouldn’t know any better. You’d just continue to think that any criticisms were unwarranted.

  20. Larry Huldén
    Posted Nov 13, 2008 at 5:20 AM | Permalink

    Demesure wrote:
    …. Near-surface air temperatures in the Beaufort Sea north of Alaska were more than 7 degrees Celsius (13 degrees Fahrenheit) above normal and the warming extended well into higher levels of the atmosphere. These warm conditions are consistent with rapid ice growth.

     In the Finnish long-term temperature data from Helsinki a sudden rise in temperature is visible in December, exactly when the sea freezes outside Helsinki. Maybe NSIDC is thinking of the same phenomenon in Alaska.

    • andy
      Posted Nov 13, 2008 at 6:12 AM | Permalink

      Re: Larry Huldén (#23),

      What are you talking about? The sea in front of Helsinki may freeze in December, January, or February, or not freeze at all, as has happened several times during recent winters. And whether the sea has frozen or not, there are often noticeable changes in the temperature, like from -20 to 0 C overnight if the wind turns from the north to S or SW, bringing warm and humid Atlantic air to Scandinavia.

  21. TitoYors
    Posted Nov 13, 2008 at 5:29 AM | Permalink

     How is it possible that the difference between September and October is +0.08º (0.50º to 0.58º), if the difference in land is +0.02º (0.63º to 0.65º), and oceans +0.01º (0.33º to 0.34º)?

    http://data.giss.nasa.gov/gistemp/maps/
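     One way to see the inconsistency (a rough sketch only; the exact GISS land/ocean weighting is not stated here, but under any fixed weighting the blended change must lie between the land and ocean changes):

     # Any convex combination of a +0.02 land change and a +0.01 ocean change
     # must lie between +0.01 and +0.02 -- it cannot reach the +0.08 quoted above.
     for land_weight in (0.29, 0.30, 0.50):
         blended = land_weight * 0.02 + (1 - land_weight) * 0.01
         print(land_weight, round(blended, 4))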

  22. Mike Bryant
    Posted Nov 13, 2008 at 6:08 AM | Permalink

    Now you see it, now you don’t.

  23. Mike Bryant
    Posted Nov 13, 2008 at 6:11 AM | Permalink

    Everyone knows that hot water freezes more quickly than cold water. It seems that it is also true that ice freezes more quickly in warmer air than cooler air. 🙂

  24. Nick Moon
    Posted Nov 13, 2008 at 6:11 AM | Permalink

    Looking at the gistemp map web pages, the choices available for projection are rather odd.

     If someone was selling cat food or financial services, it would be understood that you pick and choose the axes of your graph to make your product look better than the competition. If you are a government-funded organisation then, really, you should choose graphics that best represent the data, so non-numerate people won’t be misled.

    Both projections offered overstate, at least visually, the temperature anomalies in the polar regions. Given that this data is averaged to produce a single global temperature, it would be much better to use an equal area projection. Not providing that, at least as an option, is deceitful. Perhaps not deliberately deceitful.

  25. Mike Bryant
    Posted Nov 13, 2008 at 6:24 AM | Permalink

    The mercator projection is quite misleading.

    “People’s ideas of geography are not founded on actual facts but on Mercator’s map.” (Monmonier, 21) In 1947, a U.S. State Department geographer wrote in Scientific Monthly that the “use of the Mercator projection for world maps should be abjured [renounced] by authors and publishers for all purposes.”

  26. Johan i Kanada
    Posted Nov 13, 2008 at 6:36 AM | Permalink

     At least one conclusion can be drawn from this event: ignore GISTEMP, as it has no scientific value.
    The problem will be to convince its fawning fans.

  27. Len van Burgel
    Posted Nov 13, 2008 at 6:56 AM | Permalink

    Demesure #12 and Lucy Skywalker #18

    The warm spot over Australia is probably right and probably appeared due to more data points becoming available.
     According to the Australian Bureau anomaly charts, October had a positive anomaly of 1.5C, September 1.38C, and August was negative at -0.94C.

  28. David L Hagen
    Posted Nov 13, 2008 at 7:04 AM | Permalink

    At RealClimate.org

    # John Philip Says:
    11 November 2008 at 4:01 PM

    Gavin,

    A dignified and measured response.

    Just to be clear – is 908 the number of stations in a single corrupt data file, which is one of several such files? Or is 908 the total number of GHCN stations used in GISTEMP? Seems surprisingly low, if the latter.

    JP

    [Response: The rate at which stations report varies. This month 908 stations were reported by Nov 10 for October. The number of stations that will report eventually is about 2000. Of those 908 stations, 90 had this oddity – which is a significantly higher percentage than one would expect. – gavin]

    All this fuss over only 10% of the stations to date reporting false data! What error rate does Gavin’s “significantly higher than one would expect” imply?

    Apparently the fun has just begun. What will the rest of the stations show when they report in? Or are the errors not uniformly distributed?

    • craig loehle
      Posted Nov 13, 2008 at 7:45 AM | Permalink

      Re: David L Hagen (#32), They are sending out press releases with less than half the stations reporting in? I think I’ll send out a press release with only half my data analyzed.

  29. Mike Bryant
    Posted Nov 13, 2008 at 7:29 AM | Permalink

    A comment from WATTS:

    “Patrick K (06:08:34) :

    Dal Bandin, Pakistan:

    WU: Oct – 24; Sept – 28
    GISS: Oct – 26.7; Sept – 30.8”

    Does anyone know if Weather Underground uses the same thermometers that GISS does?

    • Posted Nov 13, 2008 at 8:22 AM | Permalink

      Re: Mike Bryant (#34),
      I have just done the same for the suspect area of Siberia.
      Bratsk: GISS 8.1, Weather Underground 1.
      Irkutsk: GISS 9.9, WU 1.
      That’s a difference of 9 degrees C.
      Combined with #38, this is highly suggestive that there is still something wrong with the GISS data for Siberia.

  30. Mike Bryant
    Posted Nov 13, 2008 at 7:32 AM | Permalink

    Alan,
    If that is a population density map, the warmest region does appear to correlate with the less populous region.

    • David L Hagen
      Posted Nov 13, 2008 at 12:27 PM | Permalink

      Re: Mike Bryant (#35),
      Mike, apologies for not labeling that more clearly. Those are the Soviet Gulag camps, a thematic play on the data quality.
      I was then guessing population distribution might be similar.

  31. Pierre Gosselin
    Posted Nov 13, 2008 at 7:33 AM | Permalink

     Interestingly, the bulk of stations shown in 33 are in western Russia. True, it was warm there in October. In Eastern Siberia it was cold though. Are weather stations being cherry-picked?

  32. Posted Nov 13, 2008 at 7:47 AM | Permalink

    Here is the corresponding image from RSS TLT for October 2008 from their FTP site.
    The comparison is quite interesting. There is some consistency – both show an anomaly of around +2 in Eastern Canada and central Australia, and cooler spots in Western Canada and around Iceland. But RSS has an anomaly of around +2 in Siberia, which is inconsistent with the GISS claim of up to 8 degrees.

    ( In case ftp inline images don’t work, it is at
    ftp://ftp.ssmi.com/msu/graphics/tlt/medium/global/ch_tlt_2008_10_anom_v03_2.png )
     #27, 29: Note that RSS uses a less misleading projection than GISS.

  33. AlanB
    Posted Nov 13, 2008 at 8:35 AM | Permalink

    Sparse stations!
    This is the GISS Polar Projection for October 2008 with a 250km smooth

    “Note: Gray areas signify missing data.”

    compare to 1200km Smooth in: AlanB (#25),

  34. Ed
    Posted Nov 13, 2008 at 8:42 AM | Permalink

    Tamino on his website is reporting a very odd October value for CO2 at Mauna Loa. The graphs are here:

    http://tamino.wordpress.com/2008/11/12/co2-blip/

     Maybe there’s an explanation for that huge temperature spike in October after all.

  35. PhilH
    Posted Nov 13, 2008 at 8:59 AM | Permalink

    Here’s a bit from Gavin on who blew the whistle to NASA

    That analysis has now been pulled (in under 24 hours) while they await a correction of input data from NOAA. Yes only after Steve Mac informed you [edit]

     They pulled the data AFTER I (Steve Mac) sent them an email notifying them of the error (which had been pointed out to me by a CA reader and which I had confirmed). They did not identify the error on their own. [edit]

    [Response: You and McIntyre are mistaken. The first intimation of a problem was posted on Watt’s blog in the comments by ‘Chris’ at around 4pm EST. By 7pm in those comments John Goetz had confirmed that the NOAA file was the problem. Notifications to the GISTEMP team of a problem started arriving very shortly after that, and I personally emailed them that evening. However, no action was taken until the next morning because people had left work already. They had decided to take the analysis down before 8.14am (email time stamp to me) since the overnight update to the NOAA file (uploaded 4.30am) had not fixed the problem. McIntyre’s intervention sometime that morning is neither here nor there. Possibly he should consider that he is not the only person in the world with email, nor is he the only person that can read. The credit for first spotting this goes to the commentators on WUWT, and the first notification to GISTEMP was that evening. – gavin]

    Comment by Rob — 12 November 2008 @ 12:51 PM

  36. Len van Burgel
    Posted Nov 13, 2008 at 9:14 AM | Permalink

    Paulm #40
     WU reports a mean temperature for Bratsk for October 2008 as 1.0, as does tutiempo.net. However, it is harder to find the long-term average for the month. Climate-charts.com reports the October average (no length of record stated) to be -0.4. This gives an anomaly of +1.4. Does the 8.1 from GISS represent the anomaly or the actual temperature? Either would seem to be incorrect.

  37. Posted Nov 13, 2008 at 9:21 AM | Permalink

     I called the Canadian and Australian anomalies here: http://rankexploits.com/musings/2008/giss-october-anomaly-warmer-than-september/#comment-6474

  38. Harald Bange
    Posted Nov 13, 2008 at 9:26 AM | Permalink

    Latest NASA GISS info on data status by Gavin:


    Groan… The NOAA fixes are not complete. There are still some stations where they have some Sep data in the Oct column. Stay tuned for more updates…

    Comment by gavin — 13 November 2008 @ 10:14 AM

    So I guess changes are still coming…..

  39. Posted Nov 13, 2008 at 9:36 AM | Permalink

    The ‘corrected’ GISS data is definitely still wrong.

    Here is the WU and GISS October data for Irkutsk for the last 4 years:
     Year  WU  GISS
     2005   4   4.9
     2006   1   2.2
     2007   1   2.1
     2008   1   9.9
     and here is the same for Bratsk:
     Year  WU  GISS
     2005   2   3.3
     2006  -2  -0.8
     2007   0   1.1
     2008   1   8.1

    For previous years we see just the usual little GISS exaggeration, but the 08 figure is way off.

     As a final check you can get the data from the Russian site http://meteo.infospace.ru.
    Using their numbers I get an October 08 mean of 0.85 for Irkutsk, 0.92 for Bratsk, in agreement with the WU numbers but not the GISS ones. These temperatures are normal for this area (worldclimate gives an average of 0.6 for October in Irkutsk).
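     For reference, the cross-check itself is just a monthly average of daily means; a minimal sketch (the daily values shown are placeholders, and how the daily means are formed, e.g. (Tmax+Tmin)/2 versus an average of synoptic observations, can shift the result by a few tenths of a degree):

     # Placeholder daily (Tmin, Tmax) pairs for the month being checked.
     daily = [(-3.0, 5.0), (-1.5, 4.0), (0.0, 6.5)]   # ...all 31 October days would go here
     daily_means = [(tmin + tmax) / 2 for tmin, tmax in daily]
     monthly_mean = sum(daily_means) / len(daily_means)
     print(round(monthly_mean, 2))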

  40. Charlie Iliff
    Posted Nov 13, 2008 at 9:51 AM | Permalink

    re: PaulM #48

    Lessee, if I put the decimal point right there…

  41. Posted Nov 13, 2008 at 9:59 AM | Permalink

    Those two cities both had 999 for Sept and figures of 9.9 and 8.1 for Oct in the ‘corrected’ GISS data.
    In fact these are the September numbers!
    By an amazing coincidence, Gavin seems to have noticed this shortly after my post here.
    What an astonishing comedy of errors.
    I am just amazed (sorry, running out of suitable words that won’t get snipped) that GISS can put up some incorrect data, have the error pointed out to them, and then put up ‘corrected’ data without performing even the most elementary checks on the ‘corrected’ data.
    It was perfectly obvious that the ‘corrected’ data still looked suspicious, and simple to confirm that it was wrong.

  42. AlanB
    Posted Nov 13, 2008 at 10:05 AM | Permalink

    Bizarre! Now the Regular projection has reverted to the first discredited map; but the Polar projection still uses the second lot of partially corrected data – as of 16:03 GMT. Why don’t they pull the figures until they have checked properly?

  43. Steve McIntyre
    Posted Nov 13, 2008 at 10:08 AM | Permalink

     #22. It’s all too typical of the Team’s tactics – Gavin fabricated a claim that no one had made and then self-righteously scourged the straw man. As you say, the issue was that there were no relevant changes in available Canadian data between Monday and Wednesday.

  44. AlanB
    Posted Nov 13, 2008 at 10:12 AM | Permalink

    This is new on the Polar projection only:

    Grahics [sic] bug: Occasionally the color for the .5-1C range is replaced by gray

  45. Steve McIntyre
    Posted Nov 13, 2008 at 10:15 AM | Permalink

    John S sent me an email mentioning the Irkutsk problem at 3 am Eastern, so a number of people noticed the problem. I’ve checked Irkutsk and Bratsk and agree that both these sites still have Sept data. Also Erbogacen and Kirensk, in case these haven’t been noticed.

    I checked these sites against GHCN-Daily data since the transfer from GHCN-Daily to GHCN was said by some to be the origin of the problem. The GHCN-D data looked OK for Erbogacen and Kirensk, so it’s hard to see how they could have uniquely screwed up some sites and not others in a program.

    But something to add to the puzzle. GHCN-Daily has no data for Irkutsk or Bratsk later than 2000. So what’s the connection between GHCN-Daily and GHCN? Where does GHCN get their data from? And what exactly did their most recent patch consist of?

    • John Goetz
      Posted Nov 13, 2008 at 3:08 PM | Permalink

      Re: Steve McIntyre (#55),

      I noticed the same thing with Bor. The .dly ends at Dec 1999 but the v2.mean file has data to present, which is picked up by GISS. The timestamp on Bor’s .dly gets updated daily as well. That’s curious.

      So what is the input to V2.mean? What is the significance of the .dly files?

  46. Posted Nov 13, 2008 at 10:23 AM | Permalink

    I’m thinking we’re learning a whole new meaning and application area for The Precautionary Principle.

  47. Jean S
    Posted Nov 13, 2008 at 10:30 AM | Permalink

    GHCN files have been updated. New time stamp is 4:30:00 today.

  48. bill
    Posted Nov 13, 2008 at 10:32 AM | Permalink

     When you know the outcome you want beforehand, of course it takes some fine tuning before the story lines up and crosschecks.

     Why do you think they were in a hurry to get the original October data out there …

     There is a new law called the Data Quality Act (DQA) which is supposed to stop these types of shenanigans.

  49. TerryBixler
    Posted Nov 13, 2008 at 10:38 AM | Permalink

     The unpublished code is clear and documented; it is just too complex for lay people to understand.

  50. brendy
    Posted Nov 13, 2008 at 11:17 AM | Permalink

     The significant freezing of Arctic sea ice would ostensibly transfer heat energy to the Arctic air, which might account for some temperature excursions. Watch for that sleight-of-hand explanation.

    • Craig Loehle
      Posted Nov 13, 2008 at 2:15 PM | Permalink

      Re: brendy (#60), When the Arctic ice freezes over, the ice acts as a blanket that stops heat loss to the air. When wind breaks up the ice a little, heat is able to escape. GCMs that treat the Arctic ice as a uniform sheet end up with the air much too cold. There is no way that the air gets warmer to accompany the freezing over of the ocean. That is backwards.

  51. Jim Turner
    Posted Nov 13, 2008 at 11:21 AM | Permalink

     I still do not understand all the differences between the two charts. So extra data was apparently added to Canada and Australia at the same time as correcting the Russian data (is it normal practice to publish these charts incomplete and then modify them later?), but why has the UK gone from a positive to a negative anomaly, and why has the hot spot over the central Med and Libya cooled? I can see why adding data could fill in the grey bits, but why has the Alaskan cold patch extended into parts of Northwest Canada that were formerly coloured orange? Does anything NASA has said explain any of this?

  52. Basil
    Posted Nov 13, 2008 at 11:22 AM | Permalink

    There surely seems to be something wrong with the surface station data in Siberia or NE Asia. Here’s an image in which I use a base period of 1979-1998 for the GISTemp map (with 250 km smoothing) in order to compare it with the RSS/MSU global TLT image for October:

    There’s nothing on the RSS image to compare to the “brown” on the GISTemp image. What is the possibility that wherever you see “brown” on the GISTemp map, there are some serious data quality issues? Then, you take that, and 1200 km smoothing, and you drag those quality issues all over most of NE Asia.

    BTW, I don’t have an answer to Steve’s questions about why all the red in the new image for Canada was not there the day before, but I’m sure that what we see in the newer image is just smoothing a few northern Greenland stations all over the place. And interestingly, the mass of Greenland is blue in the RSS image, and missing (gray) in the GISTemp image.

  53. Sam Urbinto
    Posted Nov 13, 2008 at 11:36 AM | Permalink

    The Oct-Nov-Dec GHCN-ERSST numbers. Will they be more like 2007 at .53 .48 and .40 or more like 1990 at .40 .41 and .37? Perhaps 2000 at .19 .26 and .20? Oh, those crazy monthly anomalies, what will they do.

    Re: Luboš Motl (#6),

     Regarding the “warming”. A .3 rise in the anomaly is well under the 1-degree resolution of the min and max recorded measurements at any given station. And it’s more like a .7 rise in the anomaly trend in 130 years, or .54 per century. But that’s also within the 1-degree measurement resolution.

    Re: TerryBixler (#58),

    At least without the secret decoder ring. I hear you can get one in a box of Frosted Flakes. The word is that it looks like a thimble, a car or a little dog.

  54. Real Richard Sharpe
    Posted Nov 13, 2008 at 11:43 AM | Permalink

    Perhaps it is moot: http://dotearth.blogs.nytimes.com/2008/11/12/will-next-ice-age-be-permanent/?ref=science

  55. Steve McIntyre
    Posted Nov 13, 2008 at 12:13 PM | Permalink

     In an inline response to a comment at 11.35 am Eastern, Gavin Schmidt wrote:

    Gavin Schmidt: [Response: There are still (at least) four stations that have Oct data in place of september data but that didn’t report september data (Kirensk, Irkutsk, Bratsk, Erbogacen). I expect that the SEP=OCT check that NOAA did, just didn’t catch these. Still, this is embarassing – but will be fixed today. Nobody is ‘indifferent’. – gavin]

     Is there a NASA policy prohibiting NASA employees from acknowledging CA as a source? These are the 4 stations listed in my post.

     And BTW why does he say that these stations didn’t “report” September data? Did he think of checking the GHCN Daily files, where daily max and min data from Erbogacen and Kirensk are readily available?

    • Stan Palmer
      Posted Nov 13, 2008 at 12:37 PM | Permalink

      Re: Steve McIntyre (#65),

      This is all about the need for climate science, in general, to appear to be infallible. They are projecting catastrophe and using the catastrophe to require fundamental changes to society. If this was recommended by a bunch of guys who work at some obscure departments in some obscure universities and government agencies, then nobody would pay attention.

      However, these are the people who hold the knowledge of arcane computer code and mathematical techniques. Their opinions are not their opinions but the findings of infallible science. Anything that will cast doubt on this desired perception cannot be tolerated. It must be cast out and ignored.

    • Posted Nov 13, 2008 at 2:23 PM | Permalink

      Re: Steve McIntyre (#65),

      Come on, RC acknowledges you all the time using code. Sometimes you are “many”, sometimes you are “some”. Today you are “People claiming on other websites”.

      Gavin is pulling out all his classic Gavinisms. Here’s one:

      Why anyone would automatically assume something nefarious was going on without even looking at the numbers is a mystery to me

      So… Uhmm… Who is anyone? How does Gavin know this ambiguous entity assumed something nefarious was going on? Why does he think that assumption is automatic?

      Does Gavin own a special Ouija board that informs him what this mysterious “anyone” person assumes? If so, maybe his all revealing Ouija board can also explain why this elusive “anyone” person assumes whatever it is Gavin thinks they assumed.

      To be serious: Gavin has developed an unfortunate habit of criticizing unnamed people for things no one anywhere appears to have done. The criticism is unrebuttable because there may be someone somewhere on this planet (or living inside an AOGCM) who has done whatever it is Gavin is currently whining about. Who knows? If so, we can all agree this “anyone” entity should not automatically assume anything nefarious is going on.

      That said, something puzzling is going on with GISS and GHCN.

  56. Barney Frank
    Posted Nov 13, 2008 at 12:15 PM | Permalink

    Re #55:

    I’m thinking we’re learning a whole new meaning and application area for The Precautionary Principle.

    Didn’t you mean to say The Peter Principle?

  57. Richard deSousa
    Posted Nov 13, 2008 at 12:55 PM | Permalink

    I’d bet no one will hear of this FUBAR in the mainstream press and TV.

  58. Bob Koss
    Posted Nov 13, 2008 at 1:18 PM | Permalink

    Gavin is incorrect about GISS still waiting for another 1000+ stations to report.

     For 2007 GISS only used about 1051 stations in their analysis. Late January 2008 I made up a spreadsheet of the GISS station inventory indicating the first and last years of annual data for all 6200+ stations in their inventory.
    You can find it here. http://spreadsheets.google.com/pub?key=pow204GazN9gv5FwsHw6zVA

     In 2006 there were more than 2100 stations with annual data. About 1/3 of the way through 2007 they dropped more than 1000 mostly US stations by inserting 999.9 for monthly values. I suspect most of the stations still report, but they just no longer update the data.

    Here is a kml file you can download and save on your machine for use in Google Earth.
    http://BobK07.googlepages.com/gissinfo
    You’ll have to append a .kml extension to the file. Clicking on it will load it into Google Earth. The pins will show the location of each station that had annual data in either 2006 or 2007. Under Temporary Places/GISS Stations you can uncheck the inactive folder to show only those still used in 2007. Pin colors are green for those stations designated rural by GISS and red for all others.

  59. Bob Koss
    Posted Nov 13, 2008 at 1:53 PM | Permalink

    One more thing. 699 of the US stations dropped in 2007 are designated rural. Population less than 10000. Leaving only 15 US rural with 9 of those in Alaska.

    • jae
      Posted Nov 13, 2008 at 3:08 PM | Permalink

      Re: Bob Koss (#72),

      One more thing. 699 of the US stations dropped in 2007 are designated rural. Population less than 10000. Leaving only 15 US rural with 9 of those in Alaska

      Wow! I wonder what the reason was for preferentially dropping rural stations. Maybe that is one reason why NASA’s temperature record shows higher anomalies than all the other records?

  60. Bob Koss
    Posted Nov 13, 2008 at 2:21 PM | Permalink

    I see my comment November 13th, 2008 at 1:18 pm is awaiting moderation. My comment November 13th, 2008 at 1:53 pm now labeled #71 is intended as a follow up.

  61. Steve McIntyre
    Posted Nov 13, 2008 at 2:31 PM | Permalink

    Lucia, a student at Columbia University is not permitted to “appropriate” results without providing “appropriate credit”. Gavin is doing this all the time. Maybe you understand the “code” but not everyone does. So explain to me exactly why Gavin is exempt from academic code of conduct obligations.

    • Posted Nov 13, 2008 at 3:57 PM | Permalink

      Re: Steve McIntyre (#78),
      The only possible answer is…. Gavin’s not a student at Columbia?

      Obviously, Gavin doesn’t give credit. This habit of his causes “many” to distrust him. He has other habits that result in additional distrust.

      When I watched his debate with Crichton, Lindzen etc., I was tempted to make a list of Gavinisms that result in Gavin sounding like a used car salesman rather than a scientist. But… I didn’t want to watch that whole thing multiple times. Not specifically mentioning whom he is criticising, then creating strawmen and debating them, is another.

      I don’t think he is unaware of some of his habits– but I suspect others are intentional.

      Re: conard (#77),
      Shell games range from fun entertaining parlor tricks to con games. The same goes for magic shows vs. fake psychic shows, or card tricks vs. cheating at poker.

      Shell games played badly are boring. It’s a bit like watching a juggler who keeps dropping the ball.

  62. Steve McIntyre
    Posted Nov 13, 2008 at 2:49 PM | Permalink

     On Nov 10, a poster at Lucia’s projected that Canada and Australia would go up with more data, citing country maps for October 2008 then available on the internet.

  63. Steve McIntyre
    Posted Nov 13, 2008 at 3:15 PM | Permalink

    #81. The USHCN stations get added after a delay.

  64. Steve McIntyre
    Posted Nov 13, 2008 at 6:13 PM | Permalink

    It’s hard to keep track of all of this. After Schmidt demanded an apology for some supposed offence in this post, in today’s edition, the northern Canada stuff has disappeared and is now back to what it was when all of this started. WTF??

    • Jean S
      Posted Nov 14, 2008 at 3:06 AM | Permalink

      Re: Steve McIntyre (#85),
      There is a new GHCN timestamp 11/13/2008 01:06:00 PM (compare #57). Has there been any statement from NOAA NCDC about the issue? Even a change log file? I think it is quite ridiculous if the only information available is some hearsay blog comments by a NASA employee.

      Both historical and near-real-time GHCN data undergo rigorous quality assurance reviews. These reviews include preprocessing checks on source data, time series checks that identify spurious changes in the mean and variance, spatial comparisons that verify the accuracy of the climatological mean and the seasonal cycle, and neighbor checks that identify outliers from both a serial and a spatial perspective.

  65. Mike Bryant
    Posted Nov 13, 2008 at 9:30 PM | Permalink

    Curiouser, and curiouser…

  66. Mike Bryant
    Posted Nov 13, 2008 at 11:08 PM | Permalink

    The Power of the Blog,

    The darkest color on the anomaly map has been bulldozed off the map in the West. In the East it has been pushed North once, and then shoved again further North, almost entirely into the Arctic Ocean. Is there enough fuel left to topple it over the edge of the map? Stay tuned for the exciting conclusion of:

    When Data Collides!!!

  67. AlanB
    Posted Nov 14, 2008 at 5:31 AM | Permalink

    These are the current polar views for October from Gistemp (I cleared my cache to be sure).

    At left 1200km Smooth; at right 250km smooth.

     Obviously the graphics bug is still not sorted out.

    Also on this page:
    What’s New
    Nov. 13, 2008: NOAA corrected GHCN data for October, 2008 again (second correction).
    Monthly figures have been corrected (third version).
    Nov. 12, 2008: Monthly graphs and maps were created with corrected NOAA/GHCN data.
    Nov. 11, 2008: The monthly graphs and maps with yesterday’s October data were removed.
     Nov. 10, 2008: Monthly graphs and maps were updated with NOAA/GHCN October data, which had some problems.

  68. Steve McIntyre
    Posted Nov 14, 2008 at 12:40 PM | Permalink

    In response to some critics, I’ve changed the title of this post to “Watch the Ball”, something that squash players (among others) are told to do.

  69. Pierre Gosselin
    Posted Nov 14, 2008 at 12:51 PM | Permalink

     At the following German website you can see the two charts toggle from one to the other for better contrast (scroll down a little).
    http://klimakatastrophe.wordpress.com/2008/11/13/warmster-oktober-aller-zeiten-peinlicher-fauxpas-von-james-hansens-nasa-giss-arbeitsgruppe-%e2%80%93-teil2/
    In many areas, the two charts contradict each other.

  70. Bob Koss
    Posted Nov 14, 2008 at 1:07 PM | Permalink

    What does the word radius mean to GISS?

    When it comes to grid cells and weather stations I consider the definition to be the distance from the center of the cell to the stations around it. Sort of like the earth’s orbital radius around the sun. You measure center to center. Here is the GISS definition from their maps page. “Smoothing radius: Distance over which a station influences regional temperature, either 250 km or 1200 km (standard case = 1200 km).” Reads to me somewhat like my definition. Evidently they mean something else since it doesn’t work out using center to center.

     I recently commented about Australia having only a handful of its 43 stations report annual temperature in 2007, versus the opposite in 2006 where only a handful did not report annual temperature. I compared maps of the two years using a 250km smoothing radius and didn’t notice as big a difference in coverage as I expected. That called for further investigation, so I isolated down to a map of winter DJF 2007 using 250km smoothing. I figured I could find a station with no other station co-mingling its data. Darwin, Australia (lower right) was the station I selected to look at in the data file created by the mapping process.

    The only valid cell values are the asterisks. They generated seven cells when only four actually fit within the 250km radius limit.
    ID: 501941200004 Darwin -12.4S 130.9E (top of Australia in yellow). Distance column calculated from cell coordinates to station by me.
    lon lat anomaly distance
    129.00 -13.00 .4263 216km *
    131.00 -13.00 .3632 68km *
    133.00 -13.00 .3001 237km *

    131.00 -11.00 .4148 156km *
    133.00 -11.00 .4148 277km

    131.00 -9.00 .4148 378km
    133.00 -9.00 .4148 442km

     Here is another one at approx. the same latitude. Same problem. Too many cells generated.
    ID: 524969960000 Cocos Island -12.2S 96.8E (west from Darwin in yellow)
    lon lat anomaly distance
    95.00 -13.00 .2736 215km *
    97.00 -13.00 .2736 92km *
    99.00 -13.00 .2736 255km

    95.00 -11.00 .2736 237km *
    97.00 -11.00 .2736 135km *
    99.00 -11.00 .2736 274km

    95.00 -9.00 .2736 407km
    97.00 -9.00 .2736 356km
    99.00 -9.00 .2736 429km

    I don’t know what method GISS is using to generate all these cells, but it sure doesn’t look right to me. I assume the same situation exists at 1200km radius. With numerous stations over-lapping the same cells to varying degrees there must be quite a number of excess cells being generated around the edges which should contain no data. In this example each area had only one station contributing data.

  71. Bob Koss
    Posted Nov 14, 2008 at 1:16 PM | Permalink

    Hmmm. The image didn’t come out right. I’ll try again.

    • Keith W.
      Posted Nov 14, 2008 at 3:16 PM | Permalink

      Re: Bob Koss (#93), Bob, I made a post here a few months back about how GISS puts together their grid cells. I couldn’t find the exact thread, but here is a summation of what I posted elsewhere as well as a link to GISS’ webpage where they explain how they put the data together. Your one station in Darwin probably has almost every other station in Australia contributing to its temperature reading.

      “I went and checked GISTEMP, and they do explain how they divide up the Earth for their statistical averaging on this page. But one thing snagged at my mind a bit. Step three of the process involves dividing the planet into 8000 grid boxes. This would create boxes of roughly 63,759 square kilometers within which an average temperature anomaly is computed and then supplied to the global computation. It is how the average temperature is computed that bothers me.

      That average temperature anomaly is computed from the temperature stations within that grid box, and also any within 1200 kilometers. To give a graphic perspective so people can visualize this, let’s say my grid box is centered in St. Louis, Missouri. My local grid average is determined by not only the stations within my grid (roughly within 375 kilometers of me), but also from stations in Pittsburgh, Atlanta, Dallas, and Minneapolis, to name just a few. Basically, the radius of effect means that my one grid box is not determined from the 63,759 square kilometers within it, but from the 4.52 million square kilometers around it, an area 71 times as large.

      Based upon this computation design, the United States should be represented by roughly two grid boxes if you are looking at total area involved in determining the average temperature anomaly compared to total area of the US (9,826,630 square kilometers total area of the United States divided by the 4,521,600 square kilometers derived from the 1200 kilometer radius of effect). But the US grid sample is still based upon the 63,759 square kilometer grid box determination, so the total US grid boxes are 154. That would seem to me to heavily weight the US sample.

      Why the 1200 kilometer area of effect? Doesn’t this mean that certain stations get counted multiple times? Based upon my math above, it would suggest many stations in the United States get counted over 77 times. Does the temperature in St. Louis really affect the temperature in Atlanta, and vice versa? Surely, we could just use the temperatures provided by the stations within the grid boxes to determine that grid box’s average anomaly and work out a good global average from that.”
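      A quick numerical check of the arithmetic above (a back-of-the-envelope sketch only; it uses a flat-circle approximation rather than great-circle geometry, and an approximate figure for the Earth’s surface area):

      import math

      earth_surface = 510_072_000            # km^2, approximate surface area of the Earth
      grid_box = earth_surface / 8000        # ~63,759 km^2 per equal-area box
      influence = math.pi * 1200 ** 2        # ~4.52 million km^2 within a 1200 km radius
      print(round(grid_box), round(influence), round(influence / grid_box))  # ratio ~71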

      • Bob Koss
        Posted Nov 14, 2008 at 8:25 PM | Permalink

        Re: Keith W. (#99), because of the time period and station locations, no other stations have data that affects the ones I used.

  72. Steve McIntyre
    Posted Nov 14, 2008 at 1:37 PM | Permalink

    Bob, would you organize this material as a separate head post? I’ll cut and paste it into a head post.

  73. brendy
    Posted Nov 14, 2008 at 2:36 PM | Permalink

    By way of additional observation, here is what NSIDC is saying at http://nsidc.org/arcticseaicenews/

    By the way, don’t shoot the messenger, I don’t buy it, I am just reporting it.

    “An expected paradox: Autumn warmth and ice growth”

    “Higher-than-average air temperatures”

    Over much of the Arctic, especially over the Arctic Ocean, air temperatures were unusually high. Near-surface air temperatures in the Beaufort Sea north of Alaska were more than 7 degrees Celsius (13 degrees Fahrenheit) above normal and the warming extended well into higher levels of the atmosphere. These warm conditions are consistent with rapid ice growth.

    The freezing temperature of saline water is slightly lower than it is for fresh water, about –2 degrees Celsius (28 degrees Fahrenheit). While surface air temperatures in the Beaufort Sea region are well below freezing by late September, before sea ice can start to grow, the ocean must lose the heat it gained during the summer. One way the ocean does this is by transferring its heat to the atmosphere. This heat transfer is largely responsible for the anomalously high (but still below freezing) air temperatures over the Arctic Ocean seen in Figure 3. Only after the ocean loses its heat and cools to the freezing point, can ice begin to form. The process of ice formation also releases heat to the atmosphere. Part of the anomalous temperature pattern seen in Figure 3 is an expression of this process, which is generally called the latent heat of fusion.

    In the past five years, the Arctic has shown a pattern of strong low-level atmospheric warming over the Arctic Ocean in autumn because of heat loss from the ocean back to the atmosphere. Climate models project that this atmospheric warming, known as Arctic amplification, will become more prominent in coming decades and extend into the winter season. As larger expanses of open water are left at the end of each melt season, the ocean will continue to hand off heat to the atmosphere.

    Near-record October growth rates

    As mentioned above, unusually high air temperatures go hand-in-hand with the rapid increase in ice extent seen through most of the month of October.”

  74. Sam Urbinto
    Posted Nov 14, 2008 at 3:15 PM | Permalink

    Hey, what happened to pea-ing in a thimble?

  75. Bob Koss
    Posted Nov 14, 2008 at 7:22 PM | Permalink

    Steve Mc,

    I’ll put it up shortly. Doing a little rewrite. If the image doesn’t display properly I’ll email you a copy along with the image. Feel free to edit as I’m quite often clear as mud when explaining things.

  76. Bob Koss
    Posted Nov 14, 2008 at 8:18 PM | Permalink

    What does the word radius mean to GISS?

    When it comes to grid cells and weather stations I consider the definition to be the distance from the center of the cell to the stations around it. Sort of like the earth’s orbital radius around the sun. You measure center to center. Here is the GISS definition from their maps page.

    Smoothing radius: Distance over which a station influences regional temperature, either 250 km or 1200 km (standard case = 1200 km).

    Reads somewhat like my definition, but fuzzier. Evidently they mean something else since it doesn’t work out using center to center.

     I recently commented about Australia having only a handful of its 43 stations report annual temperature in 2007, versus the opposite in 2006 where only a handful did not report annual temperature. I compared maps of annual data for the two years using a 250km smoothing radius and didn’t notice as big a difference in coverage as I expected. That called for further investigation, so I isolated down to a map of Winter (Dec-Feb) 2007 using 250km smoothing. I figured I could find a station with no other station co-mingling its data. Darwin, Australia (lower right) was the first station I looked at in the data file created by their mapping process. There are no other stations close enough to confound the data.

     The valid cell values using the center-to-center method are marked with asterisks. They generated seven cells, although only four actually fit within the 250km radius limit.
    ID: 501941200004 Darwin -12.4S 130.9E (top of Australia in yellow). Distance column was calculated between cell center and station by me.
    Cell center
    lon lat anomaly distance
    129.00 -13.00 .4263 216km *
    131.00 -13.00 .3632 68km *
    133.00 -13.00 .3001 237km *

    131.00 -11.00 .4148 156km *
    133.00 -11.00 .4148 277km

    131.00 -9.00 .4148 378km
    133.00 -9.00 .4148 442km

     Here is another one at approx. the same latitude. One station source, but same problem. Too many cells generated.
    ID: 524969960000 Cocos Island -12.2S 96.8E (west from Darwin in yellow)
    Cell center
    lon lat anomaly distance
    95.00 -13.00 .2736 215km *
    97.00 -13.00 .2736 92km *
    99.00 -13.00 .2736 255km

    95.00 -11.00 .2736 237km *
    97.00 -11.00 .2736 135km *
    99.00 -11.00 .2736 274km

    95.00 -9.00 .2736 407km
    97.00 -9.00 .2736 356km
    99.00 -9.00 .2736 429km
     Nine cells for Cocos and seven for Darwin. That leads to a question. Why not an equal number of cells for each station? Cocos is showing a larger grid footprint than Darwin.

    What they seem to be using as a qualifier is any cell where the station is less than 250km from any corner. They do something akin to scribing a circle with a radius of 250km around the station, with any cell boundary that intersects even a little getting a value. Boy, that sure can’t be right unless the goal is to maximize cell coverage. Scribing a circle around the cell center and only using stations falling within the scribed area seems a reasonable way to go.

    Below is an example of what I think is the GISS method by using the last three cells above that don’t qualify when using cell center to station. As can be seen each cell has four opportunities to qualify.
    95.00 -9.00 cell center
    cell corners
    94.00 -8.00 559km
    94.00 -12.00 305km
    96.00 -8.00 475km
    96.00 -12.00 90km *

    97.00 -9.00 cell center
    cell corners
    96.00 -8.00 475km
    96.00 -12.00 90km *
    98.00 -8.00 448km
    98.00 -12.00 132km *

    99.00 -9.00 cell center
    cell corners
    98.00 -8.00 485km
    98.00 -12.00 132km *
    100.00 -8.00 584km
    100.00 -12.00 348km

    I assume the same situation exists at 1200km radius. With numerous stations over-lapping the same cells to varying degrees there must be quite a number of excess cells being generated around the edges which should contain no data. This would seem to magnify the value of some stations over others when creating a grid.

  77. Bob Koss
    Posted Nov 14, 2008 at 10:22 PM | Permalink

    Steve Mc,

    Don’t post up #101. As a matter of fact you can remove that whole comment. The bottom section has errors. I’ll do another rewrite.

  78. Bob Koss
    Posted Nov 14, 2008 at 11:51 PM | Permalink

    Rewritten post below.

  79. Bob Koss
    Posted Nov 15, 2008 at 12:05 AM | Permalink

    What does the word radius mean to GISS?

    When it comes to grid cells and weather stations I consider the definition to be the distance from the center of the cell to the stations around it. Sort of like the earth’s orbital radius around the sun. You measure center to center. Here is the GISS definition from their maps page.

    Smoothing radius: Distance over which a station influences regional temperature, either 250 km or 1200 km (standard case = 1200 km).

    Reads somewhat like my definition, but fuzzier. Evidently they mean something else since distances don’t seem to work out using center to center.

     I recently commented about Australia having only a handful of its 43 stations report annual temperature in 2007, versus the opposite in 2006 where only a handful did not report annual temperature. I compared maps of annual data for the two years using a 250km smoothing radius and didn’t notice as big a difference in coverage as I expected. That called for further investigation, so I isolated down to a map of Winter (Dec-Feb) 2007 using 250km smoothing. I figured I could find a station with no other station co-mingling its data. Darwin, Australia (lower right) was the first station I looked at in the data file created by their mapping process. There are no other stations close enough to confound the data.

    The cells that are valid under the center-to-center method are marked with asterisks, yet seven cells were generated where only four actually fall within the 250km radius limit.
    ID: 501941200004 Darwin -12.4S 130.9E (top of Australia in yellow). The distance column, from cell center to station, was calculated by me.
    Cell center
    lon lat anomaly distance
    129.00 -13.00 .4263 216km *
    131.00 -13.00 .3632 68km *
    133.00 -13.00 .3001 237km *

    131.00 -11.00 .4148 156km *
    133.00 -11.00 .4148 277km

    131.00 -9.00 .4148 378km
    133.00 -9.00 .4148 442km

    Here is another one at approximately the same latitude. One station source, but the same problem: too many cells generated.
    ID: 524969960000 Cocos Island -12.2S 96.8E (west from Darwin in yellow)
    Cell center
    lon lat anomaly distance
    95.00 -13.00 .2736 215km *
    97.00 -13.00 .2736 92km *
    99.00 -13.00 .2736 255km

    95.00 -11.00 .2736 237km *
    97.00 -11.00 .2736 135km *
    99.00 -11.00 .2736 274km

    95.00 -9.00 .2736 407km
    97.00 -9.00 .2736 356km
    99.00 -9.00 .2736 429km
    Nine cells for Cocos and seven for Darwin; Cocos shows a larger grid footprint. Why not an equal number of cells for each station? A difference of one might be plausible, but two seems unlikely.

    I thought I had a handle on the GISS method, but I don’t. Generating all those cells doesn’t look right to me. I assume the same situation exists at the 1200km radius. With numerous stations overlapping the same cells to varying degrees, there must be quite a number of excess cells being generated around the edges that contain erroneous data.

    I wonder if they could be confusing miles and kilometers; NASA did that with one of its Mars missions a few years ago. All the cells above would come in under 250 if the distances were measured in nautical miles.
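
    As a quick check of that last remark (a rough sketch, not anyone’s actual code; the conversion factors are exact and the two distances are the largest center-to-station values in the tables above):

    # Largest center-to-station distances listed above: 442km (Darwin), 429km (Cocos).
    # 1 nautical mile = 1.852 km exactly; 1 statute mile = 1.609344 km.
    for name, km in [("Darwin max", 442), ("Cocos max", 429)]:
        print(f"{name}: {km} km = {km / 1.852:.0f} nmi, {km / 1.609344:.0f} statute mi")
    # Darwin max: 442 km = 239 nmi, 275 statute mi
    # Cocos max: 429 km = 232 nmi, 267 statute mi

    Both fall under 250 in nautical miles (though not in statute miles), so the numbers are at least consistent with that possibility.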

  80. Pierre Gosselin
    Posted Nov 15, 2008 at 5:05 AM | Permalink

    Concerning Arctic waters freezing and thus releasing heat that warms the adjacent atmosphere, I think the mechanism of heat transfer involved here has to be examined. If the mechanism were a convective one, then indeed Arctic surface stations would measure and show warmer than normal surface temperatures. But convection is not the heat transfer mechanism here. It’s almost completely thermal radiation into space, so in my view surface stations would not measure warmer than normal temps.
    The logic of the freezing water warming the adjacent atmosphere near the surface boundary makes little sense to me. Water freezes more quickly than normal because it is colder than normal. If the surrounding atmosphere were warmer than normal, then the water would not have frozen as quickly as it did in October. Maybe the heat transfer is much more complex than I’m imagining it to be.

  81. Raven
    Posted Nov 15, 2008 at 6:46 AM | Permalink

    Can hot water freeze faster than cold water?

    Yes — a general explanation

    Hot water can in fact freeze faster than cold water for a wide range of experimental conditions. This phenomenon is extremely counter-intuitive, and surprising even to most scientists, but it is in fact real. It has been seen and studied in numerous experiments. While this phenomenon has been known for centuries, and was described by Aristotle, Bacon, and Descartes [1-3], it was not introduced to the modern scientific community until 1969.

    http://math.ucr.edu/home/baez/physics/General/hot_water.html

    • Stan Palmer
      Posted Nov 15, 2008 at 7:18 AM | Permalink

      Re: Raven (#107),

      From the account at the URL:

      Earlier, Dr Osborne, a professor of physics, had visited Mpemba’s high school. Mpemba had asked him to explain why hot water would freeze before cold water. Dr Osborne said that he could not think of any explanation, but would try the experiment later. When back in his laboratory, he asked a young technician to test Mpemba’s claim. The technician later reported that the hot water froze first, and said “But we’ll keep on repeating the experiment until we get the right result.” However, repeated tests gave the same result, and in 1969 Mpemba and Osborne wrote up their results

      Repeating the experiment until one gets the right result:
      This is similar to parameterizing a GCM until one gets a correct hindcast. One NASA student identified this as the means by which GCMs were verified and validated. After all, they give the right answers, so they must be correct. This V&V technique has a long history.

      • Raven
        Posted Nov 15, 2008 at 7:48 AM | Permalink

        Re: Stan Palmer (#108)
        No doubt. I posted the link because the concept of warmer air during a freeze-up was related. However, I do not think that the Arctic Ocean operates like water in a cup. If anything, I suspect the people making the claim are grasping at anything that could explain the observations without undermining their preferred hypothesis.

      • John M
        Posted Nov 15, 2008 at 8:08 AM | Permalink

        Re: Stan Palmer (#108),

        “But we’ll keep on repeating the experiment until we get the right result.”

        Reminds me of an old production plant axiom:

        “Purification by re-analysis.”

  82. John M
    Posted Nov 15, 2008 at 8:10 AM | Permalink

    Apocryphal of course!

  83. Buddenbrook
    Posted Nov 16, 2008 at 9:22 AM | Permalink

    snip – too much venting

    • John F. Pittman
      Posted Nov 16, 2008 at 9:51 AM | Permalink

      Re: Buddenbrook (#112), I hope you would consider a counter-argument. Perhaps this is the best time to de-politicize the conversation: both sides should come to the agreement that a mistake was made, and GISS will most probably institute some QA into their procedures. But whether the different teams play together better is a somewhat moot point. I think both sides should ask, “Did the science get better?” I can understand Gavin’s reaction that this is a small issue. I can also understand those who think there are other issues with GISS that this last problem and the Y2K problem point to, and that these should be followed up on. I would agree with both. But I think it likely that when, and if, that is done, the mistakes found in GISS will not be overwhelming. The reason I think this is that the four most-used metrics are in close agreement. I would hope that both sides would look at continuing to improve, if possible. And that, of course, is the real question: can improvements be made?

  84. Buddenbrook
    Posted Nov 16, 2008 at 10:28 AM | Permalink

    John F. Pittman, I would like to believe so, but I’m quite certain it is an impossible wish. As the doubts start to grow, politics and psychology will mix with science in increasing amounts. Scientists are not immune to the human factor. History knows several examples.

  85. Posted Dec 12, 2008 at 11:00 PM | Permalink

    This is a minor item, but it illustrates NOAA/NCDC’s need for better software to flag outliers:

    NCDC’s US Climate at a Glance webpage uses preliminary temperature anomaly data to create a monthly anomaly map:

    The mauve arrow points to an oddity: a blazing hot spot around Greenwood, MS, with a peak anomaly approaching +8F. Greenwood’s Preliminary November Climate Data form (the “CF6” report, available here) hints at the cause:

    For some reason Greenwood reported only the first seven days of the month, a warm spell in which the temperature anomaly was +7F. The data missed the rest of the month, which was anomalously cool across the region. Despite the abbreviated and unrepresentative nature of the Greenwood record, it was apparently used in the map construction. Hence the hot spot.
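
    A completeness screen of the kind this item is calling for is simple to sketch. The following is a hypothetical illustration, not NCDC code; the 80% threshold and the function name are my own assumptions:

    # Hypothetical pre-mapping completeness screen (not NCDC's actual procedure).
    # A preliminary station-month built from too few daily reports is withheld
    # from the gridded anomaly map rather than treated as a full month.
    def month_is_usable(daily_values, days_in_month, min_fraction=0.8):
        """Accept a preliminary station-month only if enough days reported."""
        coverage = len(daily_values) / days_in_month
        return coverage >= min_fraction, coverage

    # Greenwood-style case: only the first 7 of 30 days reported (a warm spell).
    usable, coverage = month_is_usable([7.0] * 7, days_in_month=30)
    print(usable, f"{coverage:.0%}")   # -> False 23%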

    No doubt this will be corrected later, but meanwhile an oddly flawed map has been issued to, and perhaps used by, the public.

  86. Geoff
    Posted Apr 15, 2009 at 11:27 PM | Permalink

    By the way, Peter Stott of the Met Office Hadley Centre has a new article in press at GRL titled “Variability of high latitude amplification of anthropogenic warming”. Abstract:

    Climate models have long predicted that the high latitude response of near-surface air temperatures should be greater than at lower latitudes as a result of snow/ice feedbacks. Here we show that a regression analysis of observed global surface temperatures and anthropogenic and natural forcings could misleadingly suggest that climate models fail to capture the observed zonal mean pattern of response to anthropogenic forcings. A better approach to detecting changes and to determine consistency of climate models and observations is to use multiple features of the response pattern derived from physically-based climate models, as has been done in optimal detection studies. We show that multi-variable fingerprints can more easily detect anthropogenic changes thereby offering the potential to more robustly quantify anthropogenic influence on aspects of the climate system such as the cryosphere and the hydrological cycle.

5 Trackbacks

  1. […] It was thought that the issue with the GISS temperature anomaly for October related to the northern hemisphere, in particular, parts of Siberia.  But the corrected map from the Hanson teams, to accompany the corrected anomaly, now shows changes to Australia and the Canadian Arctic Islands in particular warming for October since Monday, November 10.  Read more here.  […]

  2. […] manage to fool “Corrected NASA GISTEMP data has been posted” Anthony Watts or “Watch the Pea” Steve […]

  3. […] We all know the best use of this too: Commenters at WUWT, Climate Audit pretty much everywhere pull down menus to create maps like the ones shown below: From Climate Audit […]

  4. […] http://www.climateaudit.org/?p=4332 […]

  5. […] posts in Watts Up With That and Climate Audit which discuss the NASA […]