David Smith on NOAA Outlier Software

David Smith writes:

This is a minor item but it illustrates NOAA/NCDC’s need for better software to flag outliers –

NCDC’s US Climate at a Glance webpage uses preliminary temperature anomaly data to create a monthly anomaly map:

The mauve arrow points to an oddity – a blazing hot spot around Greenwood, MS, with a peak anomaly approaching +8F. Greenwood’s Preliminary November Climate Data form (the “CF6” report, available here) hints at the cause:

For some reason Greenwood reported only the first seven days of the month, a warm spell in which the temperature anomaly was +7F. Data for the rest of the month, which was anomalously cool across the region, are missing. Despite the abbreviated and unrepresentative nature of the Greenwood record, it was apparently used in the map construction. Thus the hot spot.

No doubt this will be corrected later but meanwhile an oddly flawed map has been issued to, and perhaps used by, the public.


  1. Posted Dec 12, 2008 at 11:45 PM | Permalink

    It seems to be a common trend. How can there be so many issues with the data? I can understand problems with speleothem dating, which is complex for sure, but this map looks like a nuke plant overheated. How many billion dollars does it take to get good auto-reporting temp sensors?

    Sorry for the partial OT but regarding other confusing data….
    I calculated my own anomaly values for NH sea ice, they don’t match published anomaly values. I think my calcs are correct because they were so simple. Are trends imparted in the ice anomaly through corrections in similar fashion as ground temp data?


    It’s frustrating.

  2. Jim Arndt
    Posted Dec 12, 2008 at 11:52 PM | Permalink

    Like how they use 1970 to 2000. Hey, why don’t we try 1934 to 2008. Maybe we can try 1880 to present. The point is that you can take any period in time to make your point. I personally like 13,000 BCE to make mine, when temperatures varied by 10 to 20 C in a decade. Maybe we can go to the Younger Dryas period.

  3. Phil
    Posted Dec 13, 2008 at 12:33 AM | Permalink

    I would beg to differ. From http://hadobs.metoffice.com/hadcrut3/HadCRUT3_accepted.pdf (page 6):

    Measurement error … The random error in a single thermometer reading is about 0.2°C (1 sigma) [Folland et al., 2001]; the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/SQRT(60) = 0.03°C and this will be uncorrelated with the value for any other station or the value for any other month.

    For Greenwood, MS, the measurement error would then be 0.2°C/SQRT(14) = 0.0535°C, given 7 days of data with 2 measurements for each day (one Tmax, one Tmin), each of which is an independent estimate of the monthly average. (/sarc)
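    The arithmetic in the quoted passage and in the comment above is easy to check. A minimal sketch of the standard-error-of-the-mean formula only (not of HadCRUT’s actual code):

```python
import math

def sem(sigma, n):
    """Standard error of the mean of n independent readings,
    each with standard deviation sigma."""
    return sigma / math.sqrt(n)

# Brohan et al.: 0.2 C random error per reading, 60+ readings a month
print(round(sem(0.2, 60), 2))   # 0.03 C, as quoted
# The tongue-in-cheek Greenwood figure: 14 readings (7 days x 2)
print(round(sem(0.2, 14), 4))   # 0.0535 C
```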

    • Posted Dec 13, 2008 at 2:03 AM | Permalink

      Re: Phil (#3), The post isn’t about random error, it’s about missing data which would substantially change the 0.2C error.

      • Phil
        Posted Dec 25, 2009 at 3:01 AM | Permalink

        Please see the last five characters in parentheses:


    • Phil
      Posted Dec 25, 2009 at 3:41 AM | Permalink

      After Climategate it appears that the link may be broken or not working. The reference is to a paper from 2005 by P. Brohan, J. J. Kennedy, I. Harris, S. F. B. Tett & P. D. Jones titled “Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850.”

  4. Chris jH
    Posted Dec 13, 2008 at 2:38 AM | Permalink

    Your calculations don’t apply to these measurements because they are not seven measurements of the same quantity but single measurements of seven different quantities. The fact that you have used this argument on November temperature data makes the problem obvious, because expected temperatures are decreasing as we head towards winter. Using the first few days to estimate the temperature for the whole month is almost certain to lead to an overestimate.
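    The point can be made numerically with a toy series (all numbers here are invented for illustration, not Greenwood data): assume daily means fall linearly through a 30-day month and compare the first-week average with the full-month average.

```python
# Invented example: daily mean temperature falling 0.3 F per day
# across a 30-day November.
start, slope, days = 55.0, -0.3, 30
temps = [start + slope * d for d in range(days)]

full_month = sum(temps) / days      # mean over all 30 days
first_week = sum(temps[:7]) / 7     # mean over days 1-7 only

print(round(first_week - full_month, 2))  # first week runs ~3.45 F warm
```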

    When using any kind of mathematical formula, it’s imperative that you remember the criteria for using it and the caveats that come with it.

    • Phil
      Posted Dec 25, 2009 at 4:06 AM | Permalink

      Please note that my tongue was firmly planted in my cheek. I couldn’t agree with you more. However, the sad part is that I quoted from an apparently peer-reviewed (cough, cough) publication that can therefore serve as a reference for other publications, or for publications from entities such as the IPCC, and it has apparently already been cited over 150 times (Google shows over 300 cites, including the IPCC for AR4).

      There appears to be an update:

      Received 2 August 2005; accepted 14 February 2006; published 24 June 2006.

      Citation: Brohan, P., J. J. Kennedy, I. Harris, S. F. B. Tett, and P. D. Jones (2006), Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850, J. Geophys. Res., 111, D12106, doi:10.1029/2005JD006548.

      found at http://www.agu.org/pubs/crossref/2006/2005JD006548.shtml but behind a pay wall.

  5. Chris H
    Posted Dec 13, 2008 at 2:55 AM | Permalink

    @Jeff, I think Phil’s being sarcastic, hence the “(/sarc)” tag at the end.

    • Posted Dec 13, 2008 at 9:23 AM | Permalink

      Re: Chris H (#7),

      Sorry, I missed that.

      • Chris H
        Posted Dec 14, 2008 at 4:58 AM | Permalink

        Re: Jeff Id (#11)
        I made posts #6 & #7, but not #5. So either the blog software has a bug, or someone else is using the same (admittedly short) nickname “Chris H”. It just seemed very weird that #5 appeared while I was typing reply #6, especially when I haven’t seen anyone else using that nick on your blog before.

        • MrPete
          Posted Dec 14, 2008 at 8:15 PM | Permalink

          Re: Chris H (#17),
          The #5 Chris H is a different Chris H. I’ve modified it a bit to make it clear…

  6. Dodgy Geezer
    Posted Dec 13, 2008 at 4:48 AM | Permalink

    “…Using the first few days to estimate the temperature for the whole month is almost certain to lead to an overestimate….”

    An overestimate in the autumn months, and an underestimate during spring. I wonder if similar cold spots can be found, or whether they will have been detected and corrected away. This might be a possible way to show a collection bias for the base data…?

  7. Alan Bates
    Posted Dec 13, 2008 at 7:37 AM | Permalink

    Just a thought. Why are data being reported in deg F instead of the derived SI unit, deg C?

    I understood there was a requirement for a US Government organisation to report its results in SI or SI-derived units?

    “In January 1991, the Department of Commerce issued an addition to the Code of Federal Regulations entitled “Metric Conversion Policy for Federal Agencies,” 15 CFR 1170, which removes the voluntary aspect of the conversion to the SI for Federal agencies and gives in detail the policy for that conversion. Executive Order 12770, issued in July 1991, reinforces that policy by providing Presidential authority and direction for the use of the metric system of measurement by Federal agencies and …”

    • jae
      Posted Dec 13, 2008 at 9:13 AM | Permalink

      Re: Alan Bates (#10),

      Just a thought. Why are data being reported in deg F instead of the derived SI unit, deg C?

      Maybe they use degrees F to make things look worse for the public? I just hope they convert to C before melding the data with the rest of the world. 🙂 As careless as the governmental agencies are with data, it wouldn’t surprise me if they didn’t. David Smith’s post here is another good illustration of lazy and uncaring bureaucrats not bothering to just look at the map to see if it makes sense before publishing it.

  8. Brooks Hurd
    Posted Dec 13, 2008 at 10:38 AM | Permalink

    If this were another government agency, they could be forgiven for missing what is clearly a data, rather than a temperature, anomaly. But this is NOAA, the weather agency of the US government. One would expect them to know that a 14F difference in monthly average temperature between the Mississippi-Alabama border and Greenwood is highly improbable.

    A monthly average difference of 2 or 3 degrees could be expected, but a 14F difference over a few hundred km in a Gulf Coast state would only be expected when a warm front sat northwest of a cold front. Although this is the reverse of normal weather patterns, it is possible. Fronts, however, are not stable for weeks, and even if they could be, they would have a widespread effect. That NOAA does not have even rudimentary error-checking software which would have easily highlighted this data problem is amazing. This once more underscores the poor quality of the data which we are told to trust.

  9. Posted Dec 13, 2008 at 10:58 AM | Permalink

    A slight change in the programming of their map software could flag odd gradients or excessive missing values and help avoid small issues like this. And, a final appearance-check by a human would also help matters.
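    A minimal sketch of the kind of neighbor check this suggests (station names other than Greenwood and Greenville, their anomaly values apart from the thread’s figures, and the 5 F tolerance are all my own assumptions):

```python
from statistics import median

def flag_outliers(anomalies, neighbors, tol=5.0):
    """anomalies: {station: anomaly in F}; neighbors: {station: [nearby ids]}.
    Flag any station whose anomaly differs from the median of its
    neighbors by more than tol."""
    flagged = []
    for stn, value in anomalies.items():
        nearby = [anomalies[n] for n in neighbors.get(stn, []) if n in anomalies]
        if nearby and abs(value - median(nearby)) > tol:
            flagged.append(stn)
    return flagged

# Greenwood ~+8 F from the post, Greenville -2.4 F from the thread;
# the other stations and values are invented.
anoms = {"Greenwood": 7.8, "Greenville": -2.4, "Jackson": -1.9, "Tupelo": -2.8}
nbrs = {"Greenwood": ["Greenville", "Jackson", "Tupelo"]}
print(flag_outliers(anoms, nbrs))  # ['Greenwood']
```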

    This is a minor problem and is unconnected to the AGW debate. But it does illustrate something that puzzles me about climate science in general – the relative inattention to simple detail. The problem is not the making of errors, including superficial ones like this; it’s that these errors so often seem to be accepted as no big deal. Other science disciplines are obsessed with details, as details can matter, especially when looking for weak signals buried in noise.

    When a discipline’s superficial uh-ohs are readily apparent, it makes one wonder a bit about the quality of its less-visible work.

    Here’s another example, from a well-known, highly-cited climate science paper. I’ve removed much of the detail so as to disguise the source, as this is not to pick at that particular paper but rather to illustrate a point:

    The mauve writing is mine. The columnar values should add to 100%. One column, obviously, does not.

    Did that simple graphing error undermine the paper’s argument? Nope. Was it an easily-catchable error and one which a cursory review, perhaps by a grad student or one of the authors, would have caught prior to going public? Yep. Did it make me wonder a bit about the attention to detail in the authors’ underlying work? Yes.

    • stan
      Posted Dec 13, 2008 at 12:39 PM | Permalink

      Re: David Smith (#13),

      This issue dominates. Thermometer siting, failure to replicate, lost data, failure to make data available, failure to account for contradictory studies, etc. etc. There is a general level of sloppiness which seems to permeate every aspect of the work by the alarmist side of the debate.

    • Geoff Sherrington
      Posted Jan 31, 2009 at 9:03 PM | Permalink

      Re: David Smith (#12),

      Saw this again today while looking for something else. The lines on the graph seem to sum to 108, not 100, at the critical point.

      So the error might have larger consequences than at first glance.

  10. Urederra
    Posted Dec 13, 2008 at 11:52 AM | Permalink

    I have a question regarding the measurement error you guys are talking about. If the random error is about 0.2 C, or about 0.4 F, why are the data given without any significant figures after the decimal point? Does this lack of decimal digits increase the error?
    (Well, that’s two questions) 🙂

  11. Fred
    Posted Dec 13, 2008 at 12:10 PM | Permalink

    Jim (#2),

    Why wouldn’t the NWS use the latest 30-yr average to describe the “normal” temperatures? That is more representative of the current climate than an older or longer-term record. I have heard skeptics whine about GISS using the 61-90 period to make the current anomalies look warmer, just imagine the bitching if the NWS went to full record norms, as the days and amount above average would increase dramatically!

  12. tty
    Posted Dec 13, 2008 at 1:26 PM | Permalink

    Re 15

    Actually GISS does not use the internationally agreed standard period 1961-90, but rather 1951-80. Whether this is because 51-80 was a colder period, or because GISTEMP started in the eighties and they couldn’t be bothered to change their code, I don’t know. Though in the latter case one would have expected them to use the standard period in use then, 1931-60 (though of course that was a rather warm interval).

  13. Chris H
    Posted Dec 14, 2008 at 5:09 AM | Permalink

    Hah, temporarily forgot which blog I was posting on. “your blog” should have been “this blog”.

  14. Posted Dec 14, 2008 at 9:07 AM | Permalink

    Re Alan Bates #9, Jae #10, Urederra #14, NOAA, for historical reasons, records daily min and max temperatures to the nearest dF, and then averages these to an integer dF, though monthly averages are carried to higher precision.

    This procedure could cause some important biases, depending on how the rounding is done, and could even affect trends, if the rules for rounding have changed over time.

    How are recorders instructed to read a glass F thermometer if the liquid is not right on a degree? Do they call it 61 if it’s over 60 but not over 61? Or do they call it 60 until it actually reaches 61? Or do they estimate the nearest degree? In the latter case, how do they break apparent ties, i.e. “half way” between 60 and 61? Have the rules been the same for the last 100 years, or have they changed with NWS administrations?

    The new MMTS remote devices are still recorded to the nearest dF, even though they probably could give .1 dF.

    A second rounding error occurs when the integer daily min and max are averaged and then rounded to an integer. The current practice seems to be to always round ties up, rather than to the nearest even integer. E.g., on Nov. 8, Port Columbus (OH) Airport had a max of 48 and a min of 41, but the average was given as 45, not 44. Always rounding ties up rather than to even biases average temperature up by +0.25 dF. Not a big deal for casual weather impressions, but a perceptible fraction of long-run climate change if the rule has changed over history.
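    The +0.25 figure is easy to verify: true daily averages (max+min)/2 land on a uniform grid of half-degrees, half of them are ties ending in .5, and rounding every tie up adds +0.5 each time. A quick check, with Python’s built-in round() standing in for round-half-to-even:

```python
import math

def round_half_up(x):
    """Round ties up, the practice described above."""
    return math.floor(x + 0.5)

# True daily averages on half-degree steps; half are .5 ties.
true = [k / 2 for k in range(70, 90)]   # 35.0, 35.5, ..., 44.5

bias_up = sum(round_half_up(t) - t for t in true) / len(true)
bias_even = sum(round(t) - t for t in true) / len(true)  # round-half-to-even

print(bias_up, bias_even)  # 0.25 0.0
```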

    Does anyone know how raw C data is recorded?

    Perhaps this has already been discussed at length at http://wattsupwiththat.com/.

  15. Novoburgo
    Posted Dec 14, 2008 at 11:47 AM | Permalink

    Looks as though problems with Greenwood will carry over into December; first three days are missing!

    Hu McCulloch:

    The daily numbers for max and min are totaled for the month, divided by two and compared against the “norm.” That eliminates the rounding up error.

  16. Bob Koss
    Posted Dec 14, 2008 at 12:05 PM | Permalink


    You’re right about rounding up introducing an upward bias when a significant number of similar cases have to be processed.

    The Visual Basic language defaults to rounding toward even when the decimal is .5 (banker’s rounding?). In my spreadsheet, both positive and negative numbers round away from zero. In the case of temperature most of the bias would be positive, though cold stations would be the opposite. Maybe they do it all in a spreadsheet.

  17. Posted Dec 14, 2008 at 1:15 PM | Permalink

    Re Novoburgo #21,

    The daily numbers for max and min are totaled for the month, divided by two and compared against the “norm.” That eliminates the rounding up error.

    If the maxes and mins are each averaged to the nearest .1° or .01° and then the two averages are averaged together, the rounding up error from averaging will be virtually eliminated. But that still leaves the observational rounding error.

    Re Bob Koss, #22, I associate rounding even with engineering practice. Excel indeed rounds ties away from zero, and I don’t see an option to change this.

    I always suspected that banking software customarily rounded fractional cents in the bank’s favor on the customer’s statement, in the customer’s favor on the bank’s statement, and deposited the difference into the programmer’s account. (Just my cynical imagination working overtime!)

    PS: I’ve finally figured out, by trial and error, how to get the ° sign to show up in WordPress without going into LaTeX: just type &-deg-; but without the hyphens. The same works with &-beta-; for β, etc. We’ve already seen that it is necessary to type &-lt-; etc. to get the < sign to appear in WordPress, since WordPress will interpret that character by itself as the beginning of an HTML tag.

    Does anyone know a way to do superscripts and subscripts in HTML without going into LaTeX?

    • Nick Moon
      Posted Dec 14, 2008 at 1:24 PM | Permalink

      Re: Hu McCulloch (#23)

      Does anyone know a way to do superscripts and subscripts in HTML without going into LaTeX

      <sup> and <sub> should do what you want.

    • MC
      Posted Dec 17, 2008 at 2:21 PM | Permalink

      Re: Hu McCulloch (#22), Hu, you can try ALT + 248 on your numeric keypad on the keyboard. It should be a universal ASCII code for most keyboard buffers.
      ALT + 248 = ° (do a space after as well so it loads)
      ALT + 230 = µ
      ALT + 241 = ±
      I use them all the time in documents.

    • Curt
      Posted Dec 18, 2008 at 6:49 PM | Permalink

      Re: Hu McCulloch (#22),

      I always suspected that banking software customarily rounded fractional cents in the bank’s favor on the customer’s statement, in the customer’s favor on the bank’s statement, and deposited the difference into the programmer’s account. (Just my cynical imagination working overtime!)

      Believe it or not, I learned the term for this practice (which does occasionally occur) by reading Climate Audit: it’s called “slicing salami”! If you search the site for that term, you will find several references to this. Several site visitors, notably Sinan Unur, have made the same analogy.

  18. Posted Dec 14, 2008 at 1:53 PM | Permalink

    RE Nick Moon #24,
    Easy enough!

  19. Posted Dec 14, 2008 at 1:57 PM | Permalink

    Or maybe not — that worked in the Preview window, but didn’t come through in the post. There are, however, “Quicktags” that generate the same strings. Let’s see if they show up in the preceding text.

    • Nick Moon
      Posted Dec 14, 2008 at 6:14 PM | Permalink

      Re: Hu McCulloch (#26),

      Oops. I saw it worked in preview mode and didn’t check it in a real posting. Let’s see:

      Test sentence with supertext and subtext.

      I expect any kind of tag will work in preview mode. However, somewhere inside the guts of WordPress there will be a filter that removes any possibly undesirable HTML elements. I’d expect <script> and <iframe> elements to be verboten. It would probably be possible to tweak WordPress to allow sup and sub through; I don’t think they can do any harm.

  21. Novoburgo
    Posted Dec 14, 2008 at 2:42 PM | Permalink

    Re: Hu McCulloch #23

    The original numbers you cited were for the Columbus Airport and used the results for a single day, which obviously can result in a rounding error. However, when the totals for the columns are added up for the monthly summary, the rounding error becomes minuscule. The ASOS equipment at the airport measures temp to the nearest 1/10 degree C and that’s how it’s recorded in the METAR observation. The Celsius readings are converted to Fahrenheit for domestic consumption. While it’s very possible there would be a rounding-up error, it would be quite small. Keep in mind that the entire network was never set up to measure temperatures to the nearest 1/100 or even 1/10 degree. In the case of Columbus and other airports it’s for aviation support. The rest of the observing network serves agriculture, industry, public safety, etc. The idea that we can extrapolate, interpolate, massage and manipulate these airport and co-op weather readings and come up with a planetary average is pure bulls…

    The system is so inadequate for the task that rounding errors would be among the least serious problems. Screw-ups like the case of Greenwood, MS are commonplace. Using the existing weather network and publishing world temp anomalies to the thousandth of a degree C is a joke. You can’t do delicate surgery with a machete and expect precise results, but that’s what we’re attempting to do. And yes, you can read a Fahrenheit mercury thermometer to the nearest 1/2 degree, rounding up or down as needed. That’s the way it was done for many years.

  22. Posted Dec 18, 2008 at 12:07 PM | Permalink

    David Smith wrote in the original post,

    No doubt this will be corrected later but meanwhile an oddly flawed map has been issued to, and perhaps used by, the public.

    There doesn’t seem to be any big rush to correct it — the map at http://lwf.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html still has Greenwood burning up.

    One can’t presume that NOAA monitors CA for corrections. Perhaps David could notify the contact provided, Karin.L.Gleason@noaa.gov, of the error he found?

  23. Jeremy
    Posted Dec 18, 2008 at 5:22 PM | Permalink

    Their color choice on that graph is interesting. Notice that hotter areas are displayed with a bright, vibrant red; it’s unmistakable. Cooler-than-normal places, by contrast, are displayed with an off-blue color that doesn’t stand out in any way. If I were just looking this over, I’d assume they’re trying to emphasize the warm areas.

  24. Posted Dec 18, 2008 at 10:53 PM | Permalink

    Re Curt #33, Thanks!

    Steve wrote about Hansen’s creative rounding rules in the 8/31/07 thread “Waldo ‘Slices Salami'”.

    In the first comment on that thread, Sinan Unur gave a link to an amusing Wikipedia page on variants of the salami slicing scam: http://en.wikipedia.org/wiki/Salami_slicing. The bank account version turns out to be known as “penny shaving”.

    See also “Slicing Some Czech Salami” on 9/1/07.

    Re Jeremy #32, I think you’re right — the blues indicating cold subtly have lower saturation than the reds indicating hot. A case of “shady graphics?”

  25. Mike Lorrey
    Posted Dec 19, 2008 at 5:37 AM | Permalink

    I have to say this is absolutely bogus. I don’t know where these guys are getting their data, but New Hampshire has had one of the coldest Novembers on record according to the local meteorologists; heck, we had snow before Thanksgiving, when typically it’s nothing but rain until early December.


  26. henry
    Posted Dec 19, 2008 at 9:44 AM | Permalink

    Re: Hu McCulloch (#34)

    Re Jeremy #32, I think you’re right — the blues indicating cold subtly have lower saturation than the reds indicating hot. A case of “shady graphics?”

    “Pictures are worth a thousand words”. This also goes with the use of zero on the anomaly charts. It would be hard to say that the large temp anomalies are “unprecedented” if some of the “hottest” fall near or less than “zero”.

  27. Posted Dec 19, 2008 at 10:14 AM | Permalink

    Re #36 etc,
    The legend is oddly incomplete, since the map shows 5 shades of warm from pink to dark red, while the legend only shows 4. Presumably the darkest red on the map represents > 8°F?

    • Joseph
      Posted Dec 20, 2008 at 12:15 PM | Permalink

      Re: Hu McCulloch (#37),

      I noticed that as well, Hu. I think you are right; they must be claiming an anomaly >8F for those areas.

      I also noticed something else. The map doesn’t just display an 8F anomaly for Greenwood; it also displays a gradient of 6F, 4F, 2F into the -2F area surrounding it. Is the mapping software assuming a gradient must exist? Is that necessarily logical or real? The 4F band surrounding Greenwood encompasses the Greenville station as well (they are only about 50 miles apart). I checked the Greenville data for November and it is intact and complete, but I cannot find a table of the 1971-2000 monthly normals for the stations in the network to compare it to. Did Greenville report a 4F anomaly for November, or did the mapping software’s “need” to create a gradient around a (mistaken) hotspot “override” Greenville’s report?

      Inquiring minds want to know.

      • Joseph
        Posted Dec 24, 2008 at 3:14 PM | Permalink

        Re: Joseph (#38),

        I finally located the 1971-2000 monthly normals used by NCDC here. The Greenville, MS station reported a -2.4 °F departure from normal for November 2008. It was instead painted with a 4 °F departure as part of the gradient extending outward from the mistaken Greenwood hot spot. Apparently, the mapping software’s “need” to create a gradient between disparate departure reports DOES override the reports from intervening stations.

        This makes me wonder just how much of the information in the map is real, and how much is just an artifact of this curious feature of the mapping software. It will be interesting to see what the new map looks like if, or when, the Greenwood data is corrected.
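        The override behavior Joseph describes is what simple kernel smoothing does. A toy one-dimensional sketch (my own construction; NCDC’s actual gridding method is not documented here): the Greenwood and Greenville departures come from this thread, while the third station and the 60-mile radius are invented.

```python
import math

def smoothed(x, stations, radius=60.0):
    """Gaussian-kernel-weighted average of station values at point x.
    stations: list of (position in miles, departure in F)."""
    num = den = 0.0
    for pos, val in stations:
        w = math.exp(-((x - pos) / radius) ** 2)
        num += w * val
        den += w
    return num / den

# Greenwood (bad, +8) at mile 0, Greenville (good, -2.4) at mile 50,
# an invented good station at mile 100.
stations = [(0, 8.0), (50, -2.4), (100, -2.0)]

# Even directly over Greenville, the bad Greenwood value drags the
# mapped departure far above the station's own -2.4 F report.
print(round(smoothed(50, stations), 2))
```

A smoother like this never honors an individual station exactly, which would explain Greenville being painted with a warm departure it never reported.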

  28. Posted Dec 23, 2008 at 10:55 PM | Permalink

    Here’s another little outlier mystery, this time with GISS. Please pardon my excessive use of images:

    First, here’s the GISS November anomaly map –

    What catches my eye is the very warm South Atlantic near South Georgia (circled in mauve). It’s odd to see large anomalies, hot or cold, so far away from land.

    I took a look at a mid-November NOAA SST anomaly map for the region, shown here –

    The mauve dot is South Georgia. As the map indicates, the surrounding SST was, if anything, somewhat cooler than average.

    What about Spencer and Christy’s UAH map for the lower troposphere? Here it is –

    The region in question is somewhat warmer than normal, but not extremely so.

    What about HadCRUT3? Here’s their map for November:

    HadCRUT3, too, shows nothing extraordinary about the region in question.

    How does the GISS November value compare with other GISS Novembers at Grytviken, South Georgia? Here’s a plot –

    The red line is 2008, the highest November value on record. The blue lines are the other years. November 2008 is an extreme.

    I don’t know which estimate is closer to the truth. Perhaps GISS has got it right, or maybe not. At this point I simply have, uh, Georgia on my mind.

    • John M
      Posted Dec 24, 2008 at 8:49 AM | Permalink

      Re: David Smith (#39),

      Nice “hot spot”, if you will.

      FWIW, here is what Weather Underground has.

      Looks like they had quite a warm spell at the end of the month, but if I read it right, the reported average is 5 C for the month.

      Sometimes it would be nice to see decimal places, even if they might not be significant.

      • John M
        Posted Dec 24, 2008 at 9:06 AM | Permalink

        Re: John M (#40),

        Using the WU data, the average of the averages (had to fill some in myself) is 5.4 C. The average of the average min and average max is 5.7 C, which I guess gives a different weighting factor(?).

        The GISS temperature doesn’t look too far off, but it looks like the homogenization algorithm is doing something funny (or at least non-intuitive).
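        For what it’s worth, the two monthly statistics compared above need not agree. A made-up example (all numbers invented) where the reported daily means are not exactly (max+min)/2, as can happen when a service computes them from hourly readings:

```python
# Each tuple: (Tmax, Tmin, reported daily mean) -- invented values
# where the daily mean is not exactly (Tmax + Tmin) / 2.
days = [
    (8.0, 2.0, 5.2),
    (7.0, 1.0, 3.8),
    (9.0, 3.0, 6.5),
    (6.0, 0.0, 2.6),
]

# Method 1: average of the reported daily means
avg_of_means = sum(m for _, _, m in days) / len(days)

# Method 2: average of (mean of maxes, mean of mins)
avg_maxmin = (sum(mx for mx, _, _ in days) / len(days)
              + sum(mn for _, mn, _ in days) / len(days)) / 2

print(avg_of_means, avg_maxmin)  # the two differ
```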

  29. John M
    Posted Dec 24, 2008 at 9:24 AM | Permalink

    OK, I think I’m officially in over my head.

    GISS “land only” data using 1200 Mi smoothing radius:

    “land only” 1200

    This figure matches David’s GISS map and the map on the “front page” of the GISS TEMP site.

    GISS map using land/ocean data and 1200 Mi smoothing radius.

    Land/ocean 1200

    GISS map using land/ocean data and 250 Mi smoothing radius

    Land/ocean 250

    The “land only” shows that warming in the Southern ocean. The “land/ocean” seems more like the others.

  30. John M
    Posted Dec 24, 2008 at 9:27 AM | Permalink

    One more thing and I need to go shopping.

    I should have said “met station”, not “land only”, although generating the maps on the GISS site leads one to believe they are “land only”.

    Over and out.

  31. John Norris
    Posted Dec 24, 2008 at 10:41 AM | Permalink

    Perhaps we could send Anthony Watts to the south Atlantic and take a closer look at the weather station by webcam 2.


  32. Steve McIntyre
    Posted Dec 24, 2008 at 10:54 AM | Permalink

    Grytviken is not part of the GHCN provenance. GISS picks up this data from the British Antarctic Survey http://www.antarctica.ac.uk/met/READER/surface/Grytviken.All.temperature.txt

  33. Steve McIntyre
    Posted Dec 24, 2008 at 10:56 AM | Permalink

    #44. There was a 10-year hiatus in reporting from this station about 10 years ago. The key issue for comparisons is whether the change in location introduced inhomogeneity in the record (which is known to happen in USHCN records).

  34. John Norris
    Posted Dec 24, 2008 at 11:03 AM | Permalink

    Sorry, the link in 44 didn’t get picked up correctly. Clicking on it doesn’t quite get you to the picture of the station. Cut from http … all the way through … _April_2008 and paste that in your browser. About a third of the way down the page is a picture of what looks to be the weather station.

  35. Posted Dec 25, 2008 at 8:21 PM | Permalink

    Thanks for the links, John M. You make a good point.

    The GISS 250km-smoothed maps indeed give better representations of global temperature details than do the GISS 1200km maps. And, the GISS land-and-sea maps depict the ocean regions near islands better than GISS’ land-only maps. Combined, the 250km land-and-sea map looks reasonably like the Hadley and UAH maps, at least in the area of interest. This leaves me with two questions:

    1. Why does GISS use the 1200km representation for its global temperature report, rather than the 250km representation?
    2. Why does GISS show temperature anomalies for ocean areas adjacent to land areas, including islands, on its 1200 km land-only map? Why not omit the ocean cells?

    I’d appreciate a point in the right direction if these are discussed in GISS readmes, or elsewhere.

  36. Posted Dec 25, 2008 at 9:04 PM | Permalink

    Here’s a comparison of 1200km and 250km GISS smoothing for recent months:

    The coarser 1200 km smoothing used by GISS gives higher global temperatures, on average, than does finer 250km smoothing, at least for the months shown.

  37. Posted Dec 25, 2008 at 10:07 PM | Permalink

    This note (red underline by me) on the map header may help explain why the 250km runs cooler than the 1200km, and perhaps why GISS tends to run hotter than other analyses like Hadley and UAH:

    If the land anomaly is generally greater (warmer) than the ocean anomaly, and if GISS smoothes land data onto ocean areas but not vice-versa, then a hot 1200km-smoothed globe and a relatively hot GISS are not surprises. This has likely been discussed at CA but the implications of that discussion escaped me.
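    The suggested mechanism can be illustrated with a toy grid (the land fraction, anomaly values, and crude half-blend “smoothing” are entirely my own assumptions, not GISS’s algorithm): land cells keep their anomaly, ocean cells near land get blended toward the land value, ocean values never spill back onto land, and the global mean rises.

```python
land_anom, ocean_anom = 1.0, 0.2   # invented values; land runs warmer
cells = ["land"] * 3 + ["ocean"] * 7   # 30% land, 70% ocean
near_land = {3, 4}                 # ocean cells within smoothing reach of land

def global_mean(one_way):
    vals = []
    for i, kind in enumerate(cells):
        if kind == "land":
            vals.append(land_anom)        # ocean never smoothed onto land
        elif one_way and i in near_land:
            # land value smoothed onto the nearby ocean cell
            vals.append((land_anom + ocean_anom) / 2)
        else:
            vals.append(ocean_anom)
    return sum(vals) / len(vals)

print(global_mean(one_way=False))  # plain area-weighted average
print(global_mean(one_way=True))   # with one-way land smoothing: higher
```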

  38. steven mosher
    Posted Dec 26, 2008 at 1:48 PM | Permalink

    re 51. HadCRU do “blend” ocean data with land data; see Jones et al for a discussion.

  39. Jeff Norman
    Posted Dec 26, 2008 at 9:42 PM | Permalink

    David Smith:
    December 25th, 2008 at 10:07 pm

    If the land anomaly is generally greater (warmer) than the ocean anomaly

    But the anomaly should be equal in both areas because the anomaly is caused by higher temperatures in the overlaying troposphere resulting from IR radiation trapped by the greenhouse gases. Assuming the AGW hypothesis is used to explain the anomalies.

    If the land anomaly IS generally higher than the ocean anomaly, then perhaps there is sun other explanation. 😉

  40. Joseph
    Posted Dec 27, 2008 at 5:33 PM | Permalink

    This is odd. At a different NCDC/NOAA website they are reporting the same information (departure from normal for November 2008) as at the top of this thread, but they are apparently using different mapping software (greater resolution) and a different database (Greenwood, MS doesn’t appear as a hot spot). There seem to be some discrepancies, though. The second map shows a hot spot in Utah that the first map doesn’t. The second map also displays cool departures in West Texas and southern New Mexico that the first doesn’t. What the heck is going on? Why would they be drawing departure maps from two different databases?


  41. Mike Bryant
    Posted Dec 27, 2008 at 9:28 PM | Permalink

    I tried your link, I guess the map has been removed. Not really very surprising, now is it?

  42. Posted Dec 27, 2008 at 10:43 PM | Permalink

    GISS uses an interesting smoothing when it estimates global temperature anomalies. The GISS smoothing has a one-way feature: land values (including islands) are smoothed onto adjacent ocean but ocean values are not smoothed onto adjacent land.

    This, I believe, tends to overweight land anomalies and underweight ocean anomalies in their global analysis. This effect would likely be bigger in GISS’ 1200 km smoothing (the one generally reported by GISS) than in their 250 km smoothing.

    The GISS 1200 km and 250 km analyses have a healthy correlation (r-squared = 0.86 since 1/1/2000) but their trends differ:

    The 1200 km trend is about double the 250 km trend for the period examined.

    How do those GISS trends compare with other analyses, for 2000-2008?
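    The trend and r-squared comparison described above is mechanical to reproduce. A minimal sketch, using synthetic monthly anomalies rather than actual GISS output, of the least-squares slope and correlation calculations involved:

    ```python
    # Sketch of comparing two smoothed anomaly series: fit an OLS trend
    # to each and compute the squared Pearson correlation between them.
    # The series below are invented for illustration only.
    from statistics import mean

    def lin_trend(xs, ys):
        """Ordinary least-squares slope of ys against xs."""
        mx, my = mean(xs), mean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    def r_squared(a, b):
        """Square of the Pearson correlation between two series."""
        ma, mb = mean(a), mean(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        var_a = sum((x - ma) ** 2 for x in a)
        var_b = sum((y - mb) ** 2 for y in b)
        return cov ** 2 / (var_a * var_b)

    # Hypothetical monthly anomalies (deg C), one year, two smoothings.
    months = list(range(12))
    smooth_1200 = [0.30, 0.42, 0.35, 0.50, 0.44, 0.55, 0.48, 0.60, 0.58, 0.65, 0.62, 0.70]
    smooth_250  = [0.28, 0.36, 0.32, 0.40, 0.38, 0.44, 0.41, 0.48, 0.47, 0.50, 0.49, 0.54]

    print(lin_trend(months, smooth_1200))        # steeper trend
    print(lin_trend(months, smooth_250))         # shallower trend
    print(r_squared(smooth_1200, smooth_250))    # high correlation
    ```

    With these invented series the two trends differ while the correlation stays high, mirroring the pattern reported for the real 1200 km and 250 km analyses: a strong r-squared says nothing about the trends being equal.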

  43. Joseph
    Posted Dec 28, 2008 at 11:21 AM | Permalink

    The second NCDC Nov 2008 departure map uses a different color scheme that makes comparison of the two difficult, so I have altered the color scheme of the second to match that of the “Climate at a Glance” map. Here they are together for comparison.

    They are broadly similar, but display many discrepancies. Besides the Greenwood, MS discrepancy, northern Missouri displays a cool departure in the top map and a warm departure on the other. The top map displays a swath of > 8 °F departure through Montana and Wyoming that is largely missing from the second map. There are many other differences.

    Even if the NCDC uses two different mapping programs that use different smoothing radii, it just doesn’t seem possible that these two maps were drawn from the same dataset. Any ideas?

  44. Posted Dec 28, 2008 at 12:29 PM | Permalink

    Re Joseph #54, 57, 58,

    This should work:

    The scale on this one is qualitative, and so is apparently designed for an innumerate audience who are not familiar with physical measurements like degrees F or C.

    But even so, the picture is totally different than the first map on this thread, even aside from the Greenwood issue, so something is indeed screwy here.

    The “Divisional Ranks” map appears to show administrative district averages rather than continuous interpolation, but that shouldn’t make much difference. Notice, though, that some of the districts in California, Nevada or Texas could swallow Indiana whole.

    Re #34, on closer inspection, the reds in the first graph (or this one) are not as highly saturated as they might be, so I’ll retract my snark about “shady graphics”.

    I notice that on quantitative temperature graphs like the first, the definition of the color scale is often adapted to the extremes in the graph, so that every graph will show “extreme” hot or cold (or both), no matter how mild the temperatures are. This can be confusing if you’re not aware of it, but is no more deceptive than the common practice of scaling a graph so that the line(s) fill most of the space.

    Occasionally the top or bottom category on such graphs is extended slightly so as to incorporate a slightly warmer or colder temperature, e.g. 6° to 8.3° rather than 6° to 8° as the upper category. But it’s unusual that the map would add a color that is not on the legend at all, as with the 5th shade of warm on the first graph.

  45. Joseph
    Posted Dec 28, 2008 at 6:28 PM | Permalink

    Well, fooey. The first time I try to post images and I screw it up. Not sure what went wrong, the images did appear in the preview pane. Maybe a moderator could help me out here with #58?

    Yes Hu, I agree there is something screwy going on here. Now that I have thought about it, I wonder if the first map is based on raw data, and the second map is drawn from “adjusted” data? No clear trend to the adjustments, if so. Some areas become warmer, others colder (in their departures). This certainly doesn’t inspire confidence in the integrity of their data reporting.

  46. Mike Bryant
    Posted Dec 28, 2008 at 8:21 PM | Permalink

    If I am not mistaken the site, U S Climate at a Glance, supposedly compares one month to an average of the “normal” climate from 1971 to 2000. Shouldn’t the “normal” climate in that time period have some defined temperature range that would also be considered “normal”? If that is true then the anomaly up or down should be shown from that 4 to 6 degree “normal” area, shouldn’t it?
    I would really like to hear a few thoughts about this.
    Mike Bryant
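    Mike’s question can be made concrete with a toy calculation (hypothetical station data, not NCDC’s): the 1971–2000 baseline has its own year-to-year spread, so a departure can be weighed against, say, two standard deviations of that spread before being called outside the “normal” range.

    ```python
    # Toy illustration: an anomaly is a departure from the 1971-2000 mean,
    # but the baseline years themselves scatter over a range, so a modest
    # departure may still sit inside the "normal" year-to-year spread.
    from statistics import mean, stdev

    # Hypothetical November mean temperatures (deg F), one station, 1971-2000.
    baseline = [48.1, 50.3, 47.5, 49.0, 51.2, 48.8, 50.0, 47.9, 49.5, 50.7,
                48.4, 49.9, 47.2, 50.5, 49.1, 48.6, 51.0, 49.3, 48.0, 50.1,
                49.7, 47.8, 50.9, 48.3, 49.6, 50.4, 47.6, 49.2, 48.9, 50.6]

    normal = mean(baseline)
    spread = stdev(baseline)      # year-to-year "normal" variability

    this_november = 51.4          # hypothetical observed monthly mean
    anomaly = this_november - normal

    print(round(anomaly, 2), round(spread, 2))
    # A departure within roughly +/- 2 standard deviations of the
    # baseline could reasonably be called "within the normal range".
    print(abs(anomaly) <= 2 * spread)
    ```

    In this made-up case the departure is positive but still inside two standard deviations of the baseline scatter, which is the distinction Mike is asking the anomaly maps to show.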

  47. D. Patterson
    Posted Dec 29, 2008 at 2:42 AM | Permalink

    NOAA has lost credibility with many of us on Puget Sound. We’ve just experienced by far the greatest snowfall in more than fifteen years. Native plants in the area and our garden have succumbed to sustained low temperatures exceeding their tolerances. A number of wild flower species and domesticated cultivars in the garden have failed to flower or fruit properly for the past two or three years due to a lack of warmth in the Spring, Summer, and Fall seasons. Tomato cultivars adapted to the Pacific Northwest and Alaska have failed to ripen for the past two years because the cold Summer weather has left them green all the way into September and October. NOAA may be able to fool the Mass News Media like Nature, but it cannot fool nature. Someone needs to explain how it is supposed to be possible for the temperatures in Puget Sound to be “above normal” while the plant communities and the Summer and Winter temperatures have evidently been dramatically colder for the past two years.

  48. Posted Dec 29, 2008 at 2:12 PM | Permalink

    Re Joseph #60,
    On examining the source of your post, you appear to have used the Img tag button, but then did not paste in the URL of the photo. Here is what the source of my image looks like:

    <img src="http://www.ncdc.noaa.gov/img/climate/research/2008/nov/11_11_2008_DvTempRank_pg.gif" alt="Nov 08 NOAA Divisional Rank map"/>

    The tags can just be typed in longhand without using the buttons.

  49. cmb
    Posted Dec 31, 2008 at 9:28 AM | Permalink

    Do I have this straight – going down the thread:

    You’re complaining about one datum in a preliminary release,
    You’re complaining about the corrections being made,
    You’re complaining about the removal of the preliminary release.

    Must be a slow news week. =)

    • Mike Lorrey
      Posted Jan 9, 2009 at 1:35 PM | Permalink

      Re: cmb (#64), those of us who know the climate in our home regions are finding the claims made by these maps to be fraudulent. I am concluding that the PTB are fabricating fraudulent data in many areas to support the AGW religion.

  50. Posted Jan 14, 2009 at 1:13 PM | Permalink

    Easy enough!

  51. Posted Jan 31, 2009 at 11:48 AM | Permalink

    Hm, very interesting.

  52. Posted Feb 1, 2009 at 4:57 PM | Permalink

    Re #68 Ah. Thanks, Goeff.

  53. Posted Feb 1, 2009 at 4:59 PM | Permalink

    Re #69 Geoff, I noticed that I misspelled your name about one nanosecond after submitting. My apology.
