Hansen’s Y2K Error

Eli Rabett and Tamino have both advocated faith-based climate science with respect to the USHCN and GISS adjustments. They say that the climate “professionals” know what they’re doing: yes, there are problems with siting and many sites do not meet even minimal compliance standards, but Hansen’s software is able to “fix” the defects in the surface sites. “Faith-based” because they do not believe that Hansen has any obligation to provide anything other than a cursory description of his software or, for that matter, the software itself. But if the professionals are working with data that includes known bad data, then critical examination of the adjustment software becomes integral to the integrity of the record – as there is obviously little integrity in much of the raw data.

Eli Rabett has recently discussed the Detroit Lakes MN series as an example where the GISS adjustment software has supposedly triumphed over adversity and recovered signal from noise. And yet this same series displays a Hansen adjustment that should leave anyone “gobsmacked”.

I’ve referred to Hansen’s Y2K problem in passing before, but it’s interesting to see it in a particularly loud form in the series cited by Rabett as an adjustment triumph. (By Y2K problem here, I don’t specifically mean that the error is due to 2-digit date formats, but that the error, whatever its source, is observed commencing Jan 2000.)

The first three panels of the figure compare the GISS raw version to three USHCN versions: “raw”, time-of-observation adjusted (TOBS) and adjusted (filnet). In 1951, the Detroit Lakes station moved from the back yard of DL 2.1 NNE to the KDLM radio station. For the period 1951-1999, the GISS raw version is virtually identical to the USHCN adjusted version (which in turn is virtually identical to the TOBS version).

But look at what happens in 2000. The input version at GISS switches from the USHCN adjusted/TOBS version to the USHCN raw version (without time-of-observation adjustment). This imparts an upward discontinuity of about 1 deg C in wintertime and 0.8 deg C annually. I checked the monthly data and determined that the discontinuity occurred in January 2000 – and, to that extent, appears to be a Y2K problem. I presume that this is a programming error.
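For concreteness, here is a minimal sketch (not Hansen’s or GISS’s code) of the check described above. It assumes two hypothetical monthly files, detroit_lakes_giss_raw.csv and detroit_lakes_ushcn_tobs.csv, each with a “date” column (YYYY-MM) and a “temp” column (monthly mean, deg C); the file names and layout are illustrative only.

    import pandas as pd

    giss = pd.read_csv("detroit_lakes_giss_raw.csv", parse_dates=["date"], index_col="date")["temp"]
    tobs = pd.read_csv("detroit_lakes_ushcn_tobs.csv", parse_dates=["date"], index_col="date")["temp"]

    # Month-by-month difference between the GISS input version and the USHCN TOBS version.
    delta = (giss - tobs).dropna()

    # Mean difference before and after Jan 2000; the step described above shows up here.
    print("mean delta 1951-1999:", round(delta.loc["1951-01":"1999-12"].mean(), 2))
    print("mean delta 2000 on:  ", round(delta.loc["2000-01":].mean(), 2))

    # Month with the largest month-to-month change in the difference series.
    step = delta.diff().abs()
    print("largest step in (GISS - TOBS):", step.idxmax(), round(step.max(), 2), "deg C")

On the Detroit Lakes data, the difference series should sit near zero through 1951-1999 and step up to roughly 0.8 deg C from January 2000 onward.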

Figure 1 (hansen60.gif). A comparison of GISS and USHCN versions for Detroit Lakes MN.

The plot also shows a substantial difference between GISS raw and any of the USHCN versions prior to 1951. I don’t know what the explanation for this is right now. However the effect of this difference is to enhance the difference between recent and 1930s values in the GISS raw version. This difference is slightly attenuated by the GISS “UHI” adjustment.

This is not to say that increases in Minnesota temperatures are entirely due to Hansen’s Y2K error. Post-1999 values at Detroit Lakes are high even without Hansen’s error. However, Hansen’s error here is still a large one – equal in size to the entire estimated amount of global warming in the past century. Nor is it to say that this error is what “causes” temperature increases in Hansen’s data. I’ve noticed this error at sites that had a negative TOBS adjustment as of 2000 (Grand Canyon was another example). Many sites do not have a TOBS adjustment at 2000 and are unaffected. However, Detroit Lakes seems a rather poor choice as a type case demonstrating the triumph of GISS adjustments, as it contains a relatively obvious defect that appears to be little more than a programming error.

42 Comments

  1. SteveSadlov
    Posted Aug 3, 2007 at 10:02 AM | Permalink

    Root cause analysis is warranted. There is definitely some sort of anomaly in either the software or semi automated “processing” of the data input files, which accounts for the upward step.

  2. Anthony Watts
    Posted Aug 3, 2007 at 10:28 AM | Permalink

    Well it just goes to show that things aren’t always as they appear. I blamed A/C placement in 1999, and while that may be a component, it appears there have been a lot of other factors that have combined.

    This is the second apparent programming error in adjustments that has been found recently, the first being the population value in the NYC Central Park adjustments.

    Rabett readers pointed out that a number of stations near Detroit Lakes also had positive spikes about that time and attributed that to natural occurrence.

    Park Rapids (60km NE of Detroit Lakes) jumped by about 4.5 deg C

    Itasca U (66km NE of Detroit Lakes) jumped by 4 deg C over the same time (1997-2000)

    Ada (73km NW of Detroit Lakes) jumped by 4.5 deg C

    Wahpeton (81km SW of Detroit Lakes) jumped 4 deg C over the same time.

    That may be true, but I wonder if these other stations bear examination as well, since they all seem to have the same signature. If the Y2K component appears in those too, there may very well be a systemic error.

  3. SteveSadlov
    Posted Aug 3, 2007 at 10:50 AM | Permalink

    RE: #2 – There is a fired / disgruntled NWS employee who posts prolifically at RC, Rabett Run, etc., Hank somethingorother, who goes on and on describing what he believes to be / portrays as an upward step function “since the late 90s” in upper Midwest mean surface T. He also claims that this apparent “shift” in Upper Midwest climate has resulted in changes in stream flow. Wouldn’t it be ironic if his meme (clearly embraced by a willing audience at the aforementioned places) was based on a technical glitch?

  4. Fred
    Posted Aug 3, 2007 at 11:01 AM | Permalink

    I can’t speak to temperatures, but there has been a significant shift in precipitation in parts of the upper midwest/northern plains. One has to look no further than Devils Lake, ND which has been continuously rising (over 26 feet) due to increased precipitation since the early 1990s.

  5. Phil Dunne
    Posted Aug 3, 2007 at 11:08 AM | Permalink

    I’d be wary of making ‘Wizard of Oz’ comments. It doesn’t further your cause. As is well known, almost anything can be proven by a good statistician – even the opposite of what is true. Let’s stick to the facts and what is known.

  6. TCO
    Posted Aug 3, 2007 at 11:26 AM | Permalink

    Steve, given that data adjustment and selection is really an issue of applied statistics and sampling theory (we agree on that), would you please back me up in my comment that I made to one of your “overwroughters” who said that there was sufficient current knowledge to mandate throwing out any of the Anthony air conditioner horror case temp sites?

    P.s. You call them overwrought. I call them nitwits.

    P.s.s. You’re right to chide them, but I would do so for different reasons. You chide them for making silly points which will then allow others to dismiss everything (a rhetorical failing). I chide them instead because they are not thinking critically, and thus not contributing to problem solving.

  7. TCO
    Posted Aug 3, 2007 at 11:28 AM | Permalink

    My comment to the overwroughter was along the lines that one should not arbitrarily throw out data without considering the impact. That to do so (even if imperfect or contaminated) is a decision in itself. And that sampling expertise, sensitivity analyses and just some careful thought are needed.

    P.s. Same thing applies to BCPs.

    p.s.s. This is not to say that one should never exclude data. Just that this is something that has more dimensions to it than the hoi polloi realize.

  8. bernie
    Posted Aug 3, 2007 at 11:42 AM | Permalink

    TCO: I agree if you mean that historical data regardless of potential biases and contamination needs to be preserved and corrected by whatever legitimate and verifiable means are appropriate. At the same time, policy prescriptions based on uncorrected and potentially seriously flawed data should be treated very circumspectly.

  9. MrPete
    Posted Aug 3, 2007 at 11:47 AM | Permalink

    TCO, you’re still missing it, trying to complexify the surfacestations.org project. “Sampling expertise” has exactly zero relationship to the goals of that project, which is a comprehensive census. Either a station is compliant or it is not.

    If one is selecting compliant stations, it matters not a whit whether non-compliant stations are off by 10, 1, 0.01 or 0.0001 degrees. They are not compliant. No stats, sampling, sensitivity, or adjustments needed. Simple yes/no analysis is easy to do, even for the hoi polloi.

    Real Science is not limited to gray haired adults.

    p.s. This is not to say that one can always simplify data collection this way. Just that this is something far simpler than the TCOs of the world realize.

  10. James Erlandson
    Posted Aug 3, 2007 at 11:49 AM | Permalink

    Much of their reluctance to provide source code for their methodology arises, in my opinion, because the methods are essentially trivial and they derive a certain satisfaction out of making things appear more complicated than they are, a little like the Wizard of Oz. And like the Wizard of Oz, they are not necessarily bad men, just not very good wizards.

    Do you believe in magic?

    Magicians protect their secrets not because the secrets are so large and important, but because they are so small and trivial. The wonderful effects created on stage are often the result of a secret so absurd that the magician would be embarrassed to admit that that was how it was done.

    http://www.climateaudit.org/?p=1737#comment-115534

  11. John G. Bell
    Posted Aug 3, 2007 at 12:03 PM | Permalink

    Re #5, Phil Dunne – I get tired of people making that obviously false statement. A statistician worthy of the name is a scientist and as such can’t promote a lie by knowingly breaking the laws of his trade. Nothing is proved with a false argument. Lawyers and politicians who have no understanding of science might believe your statement. It is important to try to escape the cancer of political thinking when talking about science.

  12. Steve McIntyre
    Posted Aug 3, 2007 at 12:31 PM | Permalink

    #5,11. I’ve dialed back a few of the rhetorical embellishments, complying with my own editorial advice to others. The point speaks loudly enough by itself.

    #7. TCO, this post isn’t about data selection or retention. It’s about adjusting and the perils of adjusting. As far as I’m concerned, Hansen has made a programming error – nothing to do with data selection.

  13. steven mosher
    Posted Aug 3, 2007 at 12:31 PM | Permalink

    RE 7. TCO

    It is true that some have suggested “throwing out” the sites that do not conform. I’ve done that. It’s really a rhetorical flourish, I suppose. What one wants to say is that we would like to compare sites that meet guidelines with sites that don’t. The huff and puff about AC units is really beside the point. The point is that nobody wants to adjust for that nonsense, or justify that nonsense.

    Here is what I would say.

    1. Hansen and Gavin have suggested [personal communication] that the US is oversampled.

    2. on a sq mile basis it would appear that the US is more densely sampled than other geographic areas.

    3. Gavin [personal communication] suggested that a good approach would be to analyze a grid utilizing only the “good sites” and compare this to the accepted numbers.

    So, very simply, I would propose to analyze the sites using the CRN guidelines. I gave them to Anthony a while back; I think he posts them on his site. If you can’t find them, ask me and I will direct you to the link at NOAA CRN (or you can look for yourself).

    These guidelines rank sites from 1-5. Essentially, class 1-3 sites do not have any presumed bias in temperature. Sites ranked 4-5 do have microsite issues which may cause bias (pavement, buildings, wind shelter).

    There is a study backing this up, but I have been unable to locate it. In any case, this is the standard NOAA are currently using to establish a new network.

    It would seem to me that it would be most instructive to test the following (a rough sketch follows at the end of this comment):

    1. Trend for sites that meet guidelines – trend for sites that don’t.

    Next, do you have access to AMS? There is an interesting paper by Oke on this very issue, but I am a cheap bastard and refuse to pay.

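    A minimal sketch of comparison (1), assuming a hypothetical file stations.csv with columns station_id, crn_class, year and anomaly (annual temperature anomaly); the file and column names are illustrative only, not an existing dataset.

        import numpy as np
        import pandas as pd

        df = pd.read_csv("stations.csv")

        def trend_per_decade(g):
            # Least-squares slope of annual anomaly on year, scaled to deg per decade.
            return np.polyfit(g["year"], g["anomaly"], 1)[0] * 10

        trends = df.groupby(["station_id", "crn_class"]).apply(trend_per_decade).reset_index(name="trend")
        good = trends[trends["crn_class"] <= 3]["trend"]  # CRN class 1-3: no presumed microsite bias
        poor = trends[trends["crn_class"] >= 4]["trend"]  # CRN class 4-5: presumed microsite issues
        print("mean trend, class 1-3:", round(good.mean(), 3), "deg/decade")
        print("mean trend, class 4-5:", round(poor.mean(), 3), "deg/decade")
        print("difference:", round(good.mean() - poor.mean(), 3), "deg/decade")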

  14. Steve McIntyre
    Posted Aug 3, 2007 at 12:37 PM | Permalink

    #13. I agree that the information should be looked at on the basis of “good” sites. Personally I expect “good” sites to show 20th century warming. I’m a little surprised at how many “good” sites (but not all) have elevated 1930s though.

    What is foolish is that, for all the time and money spent on this, Hansen and Karl haven’t gone to the trouble of identifying the “good” sites, and the process of identifying these sites is being driven by outsiders. In addition, let’s also recall that the 1930s-recent differential is MUCH lower in the US, where there has at least been an effort to identify biases, than in the rest of the world, where the biases are essentially unknown and unadjusted for.

  15. Paul Penrose
    Posted Aug 3, 2007 at 1:13 PM | Permalink

    It is obvious that the sudden 1 degree step up in Jan 2000 for Detroit Lakes is just plain wrong. Anybody with an ounce of intelligence and honesty will admit that. So let’s stop arguing about whether there’s a problem with this particular record and start putting pressure on Hansen to fix it.

  16. TCO
    Posted Aug 3, 2007 at 1:40 PM | Permalink

    Why is it obvious? Can you prove that a 1 degree change is statistically unreasonable (some sort of significance test)?

  17. RomanM
    Posted Aug 3, 2007 at 2:10 PM | Permalink

    18:TCO

    A one degree change in the temperature from one year to another is not what we are talking about. What we are talking about is a one degree “correction” (i.e. alteration) to the recorded temperatures at that location – all of the daily readings were off by an average of that much for an entire year! Major thermometer failure!

    I’m curious however. Can you tell me what would make that “statistically reasonable” (whatever that means)? I would like to devise a statistical test for that.

  18. Steve McIntyre
    Posted Aug 3, 2007 at 2:13 PM | Permalink

    #16. The issue is not whether a 1 deg C change is possible. The issue is that between 1951 and Dec 1999 the difference between the GISS raw series and the USHCN TOBS/adjusted series was virtually zero, and after January 2000 it was about 0.8 deg C; meanwhile, before Jan 2000 the difference between GISS raw and USHCN raw was about 0.8 deg C, and after Jan 2000 it was virtually zero.

    And you see the same phenomenon at other sites e.g. Grand Canyon.

    Hansen switched from TOBS adjusted versions to raw versions on Jan 1, 2000 without any disclosure of this change to readers and without any scientific justification. It’s nothing to do with throwing out data.

    If one did a t-test as to whether there was a difference in the deltas before and after Jan 2000, yes, the t-test would be hugely significant.
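    For concreteness, a minimal sketch of that t-test, reusing the hypothetical monthly files from the sketch in the post body (“date” and “temp” columns); Welch’s version is used since the pre- and post-2000 samples differ in size and possibly variance.

        import pandas as pd
        from scipy import stats

        giss = pd.read_csv("detroit_lakes_giss_raw.csv", parse_dates=["date"], index_col="date")["temp"]
        tobs = pd.read_csv("detroit_lakes_ushcn_tobs.csv", parse_dates=["date"], index_col="date")["temp"]
        delta = (giss - tobs).dropna()

        # Welch's t-test on the monthly deltas before vs. after Jan 2000.
        pre = delta.loc["1951-01":"1999-12"]
        post = delta.loc["2000-01":]
        t, p = stats.ttest_ind(pre, post, equal_var=False)
        print(f"t = {t:.1f}, p = {p:.2e}")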

  19. TCO
    Posted Aug 3, 2007 at 2:26 PM | Permalink

    Thanks, I didn’t understand that point.

    Hmmm… I will still think about this. I wonder if it is just an issue of a step function versus a smooth one, and whether the impact is substantial on a meta scale (when averaged in with many sites).

  20. Kenneth Fritsch
    Posted Aug 3, 2007 at 2:29 PM | Permalink

    Nothing personal here, but I think what Rabett and Tamino say about the findings is of much less importance than what the responsible measuring and measurement-using scientists have to say about it. Why would one go to the bother of specifying a proper measurement site and process if it mattered so little how the process was handled? Pointing to all problems as being amenable to fixing by adjustments, without a detailed analysis, almost sounds like cheerleading. It is almost like saying we have a set of “proper” sites that we can adjust from, and we are merely being big-hearted in allowing many well-intentioned but ill-prepared amateurs out there to feel good about collecting data from these “other” sites.

    If exercises like Anthony’s force a consolidation of measurement sites to those that can be validated as reliable, it may be well worth the effort. From my own personal observations of temperatures, anomalies of temperatures and their trends in local areas in close geographical proximity, I see differences large enough to make me judge that the uncertainty from the sparseness of geographical coverage would be rather large. It also makes me think twice about the meaning of a global trend to a resident looking at local temperatures that can evidently be quite different. Of course, these local differences may be an artifact of the measuring problems and errors.

    I also have some reservations when the single or at least predominant aim often seems to be measurements aimed at obtaining a global anomaly temperature trend and not measuring local temperatures and anomalies. After all we have to live under the local conditions that differ in trends significantly from some global average.

  21. Howard
    Posted Aug 3, 2007 at 2:36 PM | Permalink

    #14 Steve Mc:

    I agree that the information should be looked at on the basis of “good” sites. Personally I expect “good” sites to show 20th century warming. I’m a little surprised at how many “good” sites (but not all) have elevated 1930s though.

    If your hunch is true, the 1930s are a time-compressed version of the MWP and need to be flattened for the greater good.

  22. Sam Urbinto
    Posted Aug 3, 2007 at 4:17 PM | Permalink

    #13 Exactly. Collect them, rank them, compare them. Decide what to do. We are in the collecting stage. The only concern is getting rid of the nonsense where possible, not figuring out what it is and adjusting it.

    #6,7 While some may suggest just throwing it all out, I never would. I would never myself say

    “there was sufficient current knowledge to mandate throwing out any of the Anthony air conditioner horror case temp sites”

    nor

    “arbitrarily throw out data without considering the impact.”

    and I agree with you

    “That to do so (even if imperfect or contaminated) is a decision in itself. And that sampling expertise, sensitivity analyses and just some careful thought are needed.”

    as well as

    “This is not to say that one should never exclude data. Just that this is something that has more dimensions to it than the hoi polloi realize.”

    Although I think you do yourself a disservice by using things like “nitwit” and “hoi polloi”.

    So, again, you’re confusing me. You’re talking now like you believe that auditing the sites isn’t a waste of time, and that you understand that this is the goal of the project, not measuring AC (or similar contamination) details and correcting them out. (We’ve pretty much already ascertained that we don’t know the specific effects of whatever contamination variables are there by themselves, much less as a whole.)

    Even if air measurements over the non-surface areas of the Earth (or any other measurement of climate) tell us very little about the entire system, the information is of some use, and will be gathered regardless, so why not make the measurements as pure as possible and make sure they’re good?

    Site surveys are the first step.

    Agree, disagree, neutral?

  23. John F. Pittman
    Posted Aug 3, 2007 at 4:52 PM | Permalink

    There is a problem [snip] It is the blog’s name, “audit”. There seems to be a misunderstanding by many here of what an auditor does. Yes, choosing known scientific relationships and showing compliance or lack thereof is one of the possibilities of an audit. But it is just one. There are others.

    Take the A/C: not only is it a violation of the siting standards (CA – corrective action, in one of my audits), but it is also a POTENTIAL source of an incorrect instrument measurement (FU – follow-up; actually I would use PP, potential problem, but I always think FU). As an auditor it would not be my responsibility to determine that it did not affect the measurement; that would fall to the audited entity. If they could not prove that it DEFINITELY did NOT affect the measurement, it would become a CA.

    As an auditor I ask for raw data. If I were given adjusted data, I would still ask for unadjusted data and assign a FU. If the reasons for adjustment were not obviously correct, a CA would be issued. Once again, they, not I, have to supply the data, methodology, reasons.

    As an auditor, if someone uses a standard other than what is recognized (think of some of the unsubstantiated claims of relevance posted on this site attributed to Mann, Hansen, etc.), it would automatically become a CA. Not that their standard is wrong, but rather that if it is correct, then it would be as easy or easier to change to the recognized standard, in almost all cases (that IS one of the real advantages of a standard; as some have alluded to regarding Anthony’s work and how much time and money it would cost to do the survey right, they are only TOO correct). If it were not as easy or easier, then they would have to provide the documentation, the unadjusted data, and the science showing 1. why other standards failed, 2. why theirs was better, and 3. that the measurements measure what they are meant to measure with the same or INCREASED accuracy. If all 3 were not provided, the CA would become “site has received an unsatisfactory rating” even if all other items in the audit were good.

    When one looks at the A/C units and the Hansen explanations [snip], if you were an auditor you would have to assign “site is rated unsatisfactory”. Which, for those that don’t know, means fines, loss of revenue, closure, and in extreme cases, as in not following standards, potential jail time.

    Of course, this for “real” regulations in the “real” world.

  24. Sam Urbinto
    Posted Aug 3, 2007 at 4:59 PM | Permalink

    Wow, that is a fantastic explanation, John.

  25. Nate
    Posted Aug 3, 2007 at 5:25 PM | Permalink

    #24 Maybe what we need to do is apply Sarbanes-Oxley to climate science papers. Wouldn’t want Mann and company to pull an Enron on us.

  26. steven mosher
    Posted Aug 3, 2007 at 5:45 PM | Permalink

    re 24.

    Thanks John. As someone who has gone through audits, both data audits and financial audits, I can attest to what you say.

  27. Bill F
    Posted Aug 3, 2007 at 5:49 PM | Permalink

    24,

    Good post, John. That really boils down to what I think the eventual outcome of the whole surface stations project should be: a list of sites categorized by their level of compliance, with recommendations for follow-up or corrective action. Researchers can then take the “good sites”, see how their trends compare to the rest of the sites, and see if there is enough of a difference to justify making changes or adjustments. In the case of Detroit Lakes, the “audit” methodology has caught what is clearly a procedural error (using the wrong data set) in Hansen’s data. The “CA” for that problem is simple and could be done and back on the net in revised form within an hour, if Hansen truly cared to correct such obvious errors.

  28. JS
    Posted Aug 3, 2007 at 6:14 PM | Permalink

    # 24 John,
    Agreed! There is no way that anyone, with any reasonable degree of accuracy, can tell what adjustment should be applied for any violation of siting standards, regardless of the amount of money that is spent on studying the sites. The MMTS thermometers have a manufacturer’s stated accuracy of +/- 2 degrees, more than the highest estimate of global warming in the last 100 years. The system was made to measure weather, not monitor climate.

  29. Posted Aug 3, 2007 at 10:22 PM | Permalink

    We have estimates of the speed of light from the 1700s. Should those be included in modern estimates of the speed of light?

    After all it is a data point. And all data points have value. And throwing out a data point is a judgment. And who are we to judge?

    I propose an adjustment.

  30. Kenneth Fritsch
    Posted Aug 4, 2007 at 10:52 AM | Permalink

    January 2000 – and, to that extent, appears to be a Y2K problem. I presume that this is a programming error.

    Seeing no significant changes in the differences of the temperature comparisons for some 50 years and then getting a sudden, unexplained jump in 2000 is difficult to understand or to attribute to a cause. It may be a programming error, but in my estimation it is not the type of error I would expect to see from a Y2K problem. Those problems involved the use of 2 digits instead of 4 to denote the date. Nothing similar occurred in 1900. I was involved peripherally in a Y2K project as a consultant but do not have the IT knowledge of many who post here. I am curious as to whether the more informed here would think this error is of the type one would expect from a Y2K one.

    I also would have a bit of a difficult time understanding how such an error (as opposed to a purposeful change or from measured data) would elude the attention of those responsible for the measurements. That leads me to believe that it was a purposeful change or from measured data and that makes it even more perplexing for me.

  31. JerryB
    Posted Aug 4, 2007 at 2:09 PM | Permalink

    Steve McI,

    Gobsmacking good catch!

    Some checking of a few stations indicates that whether the TOB adjustment
    is positive, or negative, GISS ignores it starting with data for the
    month of Jan 2000, and continues to ignore it thereafter, and uses “raw”
    USHCN numbers.

    Adjustments below are annual, and in degrees F.

    Station Name     Number   TOBA   OthAdj
    CHEYENNE WELLS   051564  -1.82    0.00
    EADS 2S          052446   0.73    0.06
    LAKIN            144464   0.71    0.06
    HAMMON 3SSW      343871   0.68    0.07
    ENOSBURG FALLS   432769  -1.52    0.00
    HOPEWELL         444101  -1.48    0.00

    GHCN Number
    42572401001 HOPEWELL
    42572465001 CHEYENNE WELLS
    42572612001 ENOSBURG FALLS
    42574530001 LAKIN
    42574530005 EADS 2S
    42574640002 HAMMON 3SSW
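    A rough sketch of that check, assuming hypothetical per-station files named <USHCN number>_giss_raw.csv, _ushcn_raw.csv and _ushcn_tobs.csv, each with “date” and “temp” (deg F) columns; the naming and layout are illustrative, not the actual GISS or USHCN formats.

        import pandas as pd

        def monthly(path):
            # Hypothetical layout: columns "date" (YYYY-MM) and "temp" (monthly mean, deg F).
            return pd.read_csv(path, parse_dates=["date"], index_col="date")["temp"]

        stations = ["051564", "052446", "144464", "343871", "432769", "444101"]
        pre, post = slice("1951-01", "1999-12"), slice("2000-01", None)

        for sid in stations:
            giss = monthly(sid + "_giss_raw.csv")
            raw = monthly(sid + "_ushcn_raw.csv")
            tobs = monthly(sid + "_ushcn_tobs.csv")
            # Mean absolute difference between the GISS input and each USHCN version;
            # whichever version GISS is actually tracking should sit near zero.
            print(sid,
                  "|GISS-TOBS| pre-2000:", round((giss.loc[pre] - tobs.loc[pre]).abs().mean(), 2),
                  "|GISS-TOBS| post-1999:", round((giss.loc[post] - tobs.loc[post]).abs().mean(), 2),
                  "|GISS-raw| post-1999:", round((giss.loc[post] - raw.loc[post]).abs().mean(), 2))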

  32. Steve McIntyre
    Posted Aug 4, 2007 at 2:29 PM | Permalink

    That Hansen has made this type of gross programming error in something as extraordinarily simple as a program to merely calculate average temperatures should give pause to anyone assuming that Hansen’s climate model is error-free. A proper arm’s-length audit of the GISS climate model (or some other equally important model) is obviously a major challenge and, in my opinion, something of very high priority for anyone wanting to rely on these models for policy.

    It will be interesting to see whether GISS apologist Eli Rabett comments on this matter.

  33. JerryB
    Posted Aug 4, 2007 at 2:52 PM | Permalink

    Steve,

    You might consider a nice note to Hansen mentioning that you noticed an apparent oddity, describing it briefly, and seeing what he replies.

  34. BarryW
    Posted Aug 4, 2007 at 3:47 PM | Permalink

    RE 30

    In rethinking the matter, if it were because of a two-digit year you would see two regions affected: pre-1900 and post-2000. 1898 would get the correction for 1998 and 2001 would get 1901’s, unless the code expected the data to be in sequence.
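    A toy illustration, purely hypothetical and not GISS code, of why adjustments keyed on a two-digit year would corrupt both ends of the record in just that way.

        # Hypothetical per-year corrections, keyed on the last two digits of the year.
        corrections = {1998: -0.8, 2001: 0.0, 1901: -0.5}  # deg C, made-up values

        by_two_digits = {}
        for year, adj in corrections.items():
            by_two_digits[year % 100] = adj  # 2001 and 1901 collide on key 1; the last one loaded wins

        for year in (1898, 1998, 1901, 2001):
            print(year, "-> key", year % 100, "-> correction", by_two_digits.get(year % 100))

        # 1898 picks up the 1998 correction (key 98), and 2001 picks up 1901's (key 1).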

  35. Steve McIntyre
    Posted Aug 4, 2007 at 4:04 PM | Permalink

    Just for clarification, I’m not saying that the error is specifically due to 2-digit rather than 4-digit date formats, but only that the error commenced in exactly Jan 2000.

  36. BarryW
    Posted Aug 4, 2007 at 4:16 PM | Permalink

    Re 35

    I realize that was just a conjecture. I was trying to see if there were any other anomalies that would be apparent if it were the two-digit problem. As usual, having the code would resolve the issue almost immediately, sigh.

  37. JerryB
    Posted Aug 4, 2007 at 4:33 PM | Permalink

    Let me mention a distinction regarding the start date of the
    change.

    The change from using USHCN adjustments to using USHCN “raw”
    data occurs when the dates of the data change from before
    2000 to after 1999. But when the programming change occurred,
    we do not know.

    This is a particular instance of a common problem when
    discussing timing of subsequent actions regarding historical
    data.

  38. Hank Roberts
    Posted Aug 6, 2007 at 7:25 PM | Permalink

    You got both the first name and the last name of the man you’re describing wrong, Sadlov.
    Check your attribution and correct your mistake please.

    “RE: #2 – There is a fired / disgruntled NWS employee … Hank somethingorother,…”

  39. Kenneth Fritsch
    Posted Aug 6, 2007 at 8:33 PM | Permalink

    Just for clarification, I’m not saying that the error is specifically due to 2-digit rather than 4-digit date formats, but only that the error commenced in exactly Jan 2000.

    Actually there was a considerable amount of new code added to programs as part of the Y2K effort that was not related to the two-digit problem. I think most of these programs were put online before the year 2000.

    If the evidence points strongly to a programming error and it has not been caught to date, that says something worse about quality control than the fact that the error was originally made. I am not sure what it is that JerryB is showing in post #31. Could you explain in more detail?

  40. steven mosher
    Posted Aug 6, 2007 at 10:24 PM | Permalink

    SteveS,

    please don’t pester the jazz cellist.. the wiggy dog boy

    http://en.wikipedia.org/wiki/Hank_Roberts

    http://www.hankrobertsmusic.com/

  41. Vincent Gray
    Posted Aug 8, 2007 at 3:51 PM | Permalink

    If there is all this doubt about the US sites, what about the rest of the world? Why should you believe the “global mean temperature anomaly”?

  42. gdn
    Posted Aug 8, 2007 at 4:31 PM | Permalink

    Just to be clear, is the above “Vincent Gray” THE Vincent Gray, A Vincent Gray, or just someone who thinks it’s a neat name?

8 Trackbacks

  1. […] example started with this, which led to this, and leading finally to […]

  2. By Hoystory » Blog Archive » Oh, the irony on Aug 10, 2007 at 2:56 AM

    […] Statistician Steve McIntyre, the guy who famously exposed the global warming hockey stick fraud, has done the world another service by pointing out an error in NASA climate expert James Hansen’s temperature adjustment algorithms that had the effect of causing a false jump in official temperatures. […]

  3. By Giochiamo con i numeri | Climate Monitor on Nov 5, 2008 at 1:33 PM

    […] (anthropogenic global warming), a certain Steve McIntyre discussed on his blog some apparently inexplicable “errors” in the new processing software […]

  4. […] for example a blog. Undernews can later become news. A nice example is McIntyre’s discovery in 2007 that there was a sort of millennium bug in NASA’s American temperature series, as a result of which […]

  5. […] yesterday I discussed this incident as an example of undernews becoming news. McIntyre first brought this news on August 3, 2007, and the e-mails in the released document also begin on that […]

  6. […] observed recently that Hansen’s GISS series contains an apparent error in which Hansen switched the source of […]

  7. By Michelle Malkin » Hot news: NASA quietly fixes flawed temperature data; 1998 was NOT the warmest year in the millenium on Sep 5, 2011 at 9:54 PM

    […] Here is one of his first posts where he begins to understand what is happening. “This imparts an upward discontinuity of a deg C in wintertime and 0.8 deg C annually. I checked the monthly data and determined that the discontinuity occurred on January 2000 – and, to that extent, appears to be a Y2K problem. I presume that this is a programming error.” […]

  8. […] Y2K controversy in August 2007. On August 3 (10:46 am Eastern), I had published a post entitled Hansen’s Y2K Error in which I observed a previously unreported “Y2K error” in GISS USHCN conclusively […]