Should NASA climate accountants adhere to GAAP?

Shortly after NASA published their source code on Sept 7, we started noticing puzzling discrepancies in the new data set. On Sept 12, 2007, I wrote to Hansen and Ruedy inquiring about the changes, observing that there was no notice of them at their website:

Dear Sirs, I notice that you’ve changed the historical data for some US stations since Sep 7, 2007. In particular, I noticed that temperatures for Detroit Lakes MN in the early part of the century were reduced by nearly 0.5 deg C. These changes are subsequent to your changes in August 2007 for the changing versions. To my knowledge, there is no explanation for this most recent change and I was wondering what the reason is.


Figure 1. Difference between Sep 10, 2007 version of Detroit Lakes MN and Aug 25, 2007 version.

Thank you for your attention, Steve McIntyre
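
For readers who want to replicate this sort of comparison, here is a minimal sketch in Python; the file names are hypothetical stand-ins for archived scrapes of the GISS dset=1 pages, not actual files:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical CSVs scraped from the GISS dset=1 page for one station,
    # each with columns: year, temp (annual mean, deg C)
    old = pd.read_csv("detroit_lakes_2007-08-25.csv", index_col="year")
    new = pd.read_csv("detroit_lakes_2007-09-10.csv", index_col="year")

    # Difference between the two archived versions, aligned on year
    diff = (new["temp"] - old["temp"]).dropna()

    plt.plot(diff.index, diff.values)
    plt.ylabel("Sep 10 minus Aug 25 version (deg C)")
    plt.title("Detroit Lakes MN: difference between archived versions")
    plt.show()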

I posted on the topic on Sept 13 observing:

Since August 1, 2007, NASA has had 3 substantially different online versions of their 1221 USHCN stations. The third and most recent version was slipped in without any announcement or notice in the last few days – subsequent to their code being placed online on Sept 7, 2007. (I can vouch for this as I completed a scrape of the dset=1 dataset in the early afternoon of Sept 7.)

The impact of the unreported changes was illustrated at Detroit Lakes MN using the same graphic as sent to Hansen and Ruedy. The post included the following prediction:

As you can see, Hansen has clawed back most of the gains of the 1930s relative to recent years – perhaps leading eventually to a re-discovery of 1998 as the warmest U.S. year of the 20th century.

This prediction came true quite quickly. On Sept 15, Jerry Brennan observed that the NASA U.S. temperature history had changed and that 1998 was now co-leader atop the U.S. leaderboard.

By this time, we’d figured out exactly what Hansen’s group had done: they’d switched from the SHAP version – which they had used for the past decade or so – to the FILNET version. The impact at Detroit Lakes was relatively large, which was why we’d noticed it, but across the network as a whole the change increased the trend slightly – evidently enough to make a difference between 1934 and 1998 – even though this ranking supposedly was of no interest to anyone.

Figure 2. Average impact of changing from SHAP to FILNET accounting.
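
The network-wide average shown above can be reproduced the same way: difference each station’s two versions, then average across stations by year. A sketch, again with a hypothetical file layout (one CSV per station per version):

    import glob
    import pandas as pd

    # Hypothetical layout: shap/<station>.csv and filnet/<station>.csv,
    # each with columns: year, temp
    diffs = []
    for shap_file in glob.glob("shap/*.csv"):
        filnet_file = shap_file.replace("shap/", "filnet/")
        shap = pd.read_csv(shap_file, index_col="year")["temp"]
        filnet = pd.read_csv(filnet_file, index_col="year")["temp"]
        diffs.append(filnet - shap)

    # Average FILNET-minus-SHAP difference across all stations, by year
    avg_impact = pd.concat(diffs, axis=1).mean(axis=1)
    print(avg_impact)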

Later on Sept 15, I observed:

This new leaderboard is really something else. I’m going to post on this: but if the SHAP version was what they used for the past decade, it’s a little – shall we say – “convenient” to decide in Sept 2007 that they are going to switch to the FILNET version (without announcing it on their website) and then, surprise, surprise, 1998 is now tied for the warmest year. This is going to send shivers up the spine of any readers familiar with accounting principles.

I’d been planning to write a post on this. There are undoubtedly more readers familiar with GAAP (Generally Accepted Accounting Principles) at Climate Audit than at other climate websites, but it’s worth re-stating one of the fundamental GAAP principles:

Principle of the permanence of methods: This principle aims at allowing the coherence and comparison of the financial information published by the company.

Now you may say that this is “science” and accounting principles don’t apply. And my response would be that I’d expect GAAP principles to be a minimum standard for the type of climate statistics being carried out by NASA. Even if NASA climate statisticians are unaware of GAAP per se, they should be adhering to the principles. Sharp practice is sharp practice, however it is gussied up.

Hansen said that the difference between 1998 and 1934 was “statistically insignificant”. But business accountants are familiar with situations where a lot of attention is paid to numbers that may be “statistically insignificant”. I’ll give you an example. For a large corporation, the difference between a small profit and a small loss can be “statistically insignificant”, but there is a big difference in how they are perceived by the public. In some cases, unscrupulous corporations (and you can think of a few, including the most famous recent U.S. bankruptcy) will do whatever they can in terms of deferring expenses or recognizing revenue to change a reported loss into a reported profit. Accounting changes are a red flag to analysts for brokerage companies; there may be “good” reasons but the analyst needs to be right on top of the situation and they will be VERY unimpressed if a company tries to slip a change in without reporting it.

So while the difference between 1934 and 1998 may have been “statistically insignificant”, Hansen was obviously quite annoyed by the attention paid to 1934 being called the “warmest year”, even if only in the U.S., and the change in rankings must have stuck in his craw. Was that the motivation for the change from SHAP to FILNET accounting? I certainly hope not. Perhaps, long before the Y2K error re-arranged things, NASA had already made a long-standing plan to shift from SHAP accounting to FILNET accounting. But if that was not the case, then the timing of the change, especially with the all too “convenient” restoration of 1998 to the top of the leaderboard, is certainly unfortunate.

This is precisely the type of situation that would have been avoided by NASA adhering to GAAP principles. Companies cannot change accounting procedures on a whim. Auditors will not permit companies to change methods merely to enhance reported earnings. And if a company changed accounting procedures without any disclosure, it would be viewed very seriously by regulatory agencies – whether or not the company said that it “mattered”. If the change from SHAP to FILNET accounting didn’t “matter”, then Hansen shouldn’t have done it. If it did matter, he still shouldn’t have done it right now just when he was archiving source code for the first time – and to do so without either formal disclosure or a re-statement of prior results simply boggles the imagination.

On Sept 17, Ruedy replied to my email asking that they disclose their changes, more or less refusing on the basis that the new data source could be detected in the “description of input files” in the source code.

Dear Sir,
As indicated in the description of our input files, we switched from the old year 2000 version of USHCN to the current version. The differences you noticed reflect corrections that were made by USHCN within the last six years.
Reto A. Ruedy

But this is not the same as a change statement. There’s no hint in the input file itself that they had changed the input file from what had been used previously, or that the code archived on Sept 7 was NOT the code used to produce NASA results prior to Sept 7. They had not merely “simplified” the code; they had changed from SHAP to FILNET accounting. It’s also not good enough to simply slip the accounting change in with the source code. It should have been formally disclosed when the change was instituted, rather than leaving us to figure it out ourselves and disclosing it only after the change had already been discovered.

In addition, his last sentence here – that the changes “reflect corrections that were made by USHCN within the last six years” – is not correct. Both the SHAP and FILNET versions existed when Hansen et al 2001 was written. Hansen decided, for whatever reason, to use SHAP accounting; he could have used FILNET accounting. And he decided to change in mid-September 2007. Did it “matter”? Well, it mattered enough to go to the trouble of making the change. It also – and perhaps this is sheer coincidence – mattered to the “statistically insignificant” leaderboard, as 1998 is now your new co-leader.

Today NASA has attempted to cooper up this mess. At their website, they finally reported the change in accounting that we had already picked up and publicized. They state:

September 2007: The year 2000 version of USHCN data was replaced by the current version (with data through 2005). In this newer version, NOAA removed or corrected a number of station records before year 2000. Since these changes included most of the records that failed our quality control checks, we no longer remove any USHCN records. The effect of station removal on analyzed global temperature is very small, as shown by graphs and maps available here.

This seems like a pretty odd description of what they appear to have done, and perhaps I’ll re-visit it on another occasion. Hansen includes the following account of the Y2K error (conspicuously deleting his prior recognition of my role in identifying the error) and adds a reference to “Usufruct & the Gorilla” at the NASA website:

August 2007: A discontinuity in station records in the U.S. was discovered and corrected (GHCN data for 2000 and later years were inadvertently appended to USHCN data for prior years without including the adjustments at these stations that had been defined by the NOAA National Climate Data Center). This had a small impact on the U.S. average temperature, about 0.15°C, for 2000 and later years, and a negligible effect on global temperature, as is shown here.

This August 2007 change received international attention via discussions on various blogs and repetition by some other media, with no graphs provided to show the magnitude of the effect. Further discussions of the curious misinformation are provided by Dr. Hansen on his personal webpage (e.g., his post on “The Real Deal: Usufruct & the Gorilla”).

Obviously his claim that “no graphs had been provided to show the magnitude of the effect” is false. In one of my original posts on the matter, I showed graphics estimating the impact of the error on the U.S. temperature record and the distribution of errors across USHCN stations. I sent the following letter to Hansen and Ruedy today, notifying them that the statement was incorrect:

Dear Sirs,

I see that you have decided to report the change in methodology as requested in my previous email. While you should have reported the change in methodology when it was made, it is better late than never.

In your new webpage, you state: ” This August 2007 change received international attention via discussions on various blogs and repetition by some other media, with no graphs provided to show the magnitude of the effect.” This is incorrect and I request that you correct this statement. On Aug 6, 2007, at Climate Audit, http://www.climateaudit.org/?p=1868 , the two graphs below were provided to estimate the magnitude of the effect. The first graph shown below estimated the impact on the U.S. temperature history at a little more than 0.15 deg C. Despite having no access to your source code, this proved to be an accurate estimate.

The next graph shown below shows the distribution of changes over the 1221 U.S. stations, which are very substantial in individual cases. Despite your professed concern for illustrating the impact of changes, you did not yourself provide any graph to show the magnitude of the changes on individual stations, nor did you even provide explicit notice on your webpage that any changes had been made.

Would you please correct the incorrect information on your webpage. This request is made pursuant to the Data Quality Act.

Yours truly,
Stephen McIntyre

A last point: as I’ve noted previously, as the classification of U.S. sites comes in, the actual GISS methodology for estimating U.S. temperatures looks a lot better than (say) the NOAA methodology. If NASA’s U.S. estimates stand up to scrutiny, that’s fine: that wouldn’t bother me a speck. I’m just trying to understand what weight can be put on which estimates. And regardless of what people may think, in a quick review of my posts, I haven’t located any in which I am particularly critical of NASA’s methods in the U.S., aside from the Y2K error. My position has been more: if NASA’s adjustments are right, then Parker 2006 and Jones et al 1990 etc. are wrong. I have not personally criticized their lights methodology for classifying stations, preferring to see how station evaluation turned out. I have criticized poor and inaccurate disclosure and some of Hansen’s public comments, and I have surveyed some of the data issues in the ROW (where’s Waldo?).

But I don’t think that I’ve been particularly critical of their U.S. methodology and, if the lights on-lights off criterion is a useful one for urban adjustments, that’s fine with me and I’ll be happy to acknowledge it. As noted elsewhere, that would leave many other open questions pertaining to the ROW: why there are discrepancies between NASA and NOAA, why NASA’s overall results are so similar to CRU’s if the individual stations are adjusted so differently, etc.

But these matters are all quite different than (a) changing accounting systems; (b) doing so without notice; (c) archiving source code where the input file had been changed from what had been previously used; (d) making false statements on a NASA website.

UPDATE Sept 17 afternoon: Ruedy responded to my email as follows:

Thanks for bringing to our attention that the term “magnitude of effect” might be interpreted as “size” rather than “relevance”, our obvious intent. We clarified our formulation correspondingly.

They changed their website to read as follows (replacing magnitude with relevance):

This August 2007 change received international attention via discussions on various blogs and repetition by some other media, with no graphs provided to show the relevance of the effect.

Needless to say, this claim remains untrue. I sent the following letter (repeating the graphics shown above) requesting that the webpage be corrected, this time copying the Info Quality person at NASA:

Your revised webpage http://data.giss.nasa.gov/gistemp/ contains the following incorrect statement: “This August 2007 change received international attention via discussions on various blogs and repetition by some other media, with no graphs provided to show the relevance of the effect.”

This is incorrect and I request that you correct this statement. As I advised you previously, on Aug 6, 2007, at Climate Audit, http://www.climateaudit.org/?p=1868 , the two graphs below showed the relevance of the effect to U.S. temperature history and to U.S. stations.

The first graph shown below showed that the error was relevant to U.S. temperature history – a topic specifically considered in Hansen et al 2001.

The NASA website provides individual station histories, as well as U.S. and global estimates. The graph below showed the error was relevant to individual U.S. station histories.

The claim that “no graphs provided to show the relevance of the effect” remains incorrect. Once again, please correct the false statement on the NASA webpage http://data.giss.nasa.gov/gistemp/ . This request is made under the Data Quality Act.

Yours truly,
Steve McIntyre

210 Comments

  1. Posted Sep 17, 2007 at 9:22 AM | Permalink

    If you smooth out Hansen’s 9/10 adjustments, it becomes an inverse hockey stick!

  2. RomanM
    Posted Sep 17, 2007 at 9:36 AM | Permalink

    The shape of the histogram is very sharply bimodal. This sort of bimodality is a strong indicator of a sample that is a mixture of two distinct populations. Is there something that would make the stations with values less than or equal to about 0.15 different from stations whose values are greater than that?

  3. Posted Sep 17, 2007 at 9:41 AM | Permalink

    This is rank foolishness on the part of Hansen et al.

    Congress is paying attention. Suppose they get fed up and call for a full government audit?

    NASA will not like that.

    As I have stated: lawyers may not understand the ins and outs of thermodynamics, but accounting they get.

  4. Steve McIntyre
    Posted Sep 17, 2007 at 9:41 AM | Permalink

    It’s all to do with time-of-observation adjustments – going from afternoon to morning or from morning to afternoon. The changes are chunky; there are no changes of 5 minutes or half an hour. The point here is that, because of the bimodal distribution, the average absolute value of the change was considerably higher than the average change.
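
    To illustrate the point with toy numbers (not the actual adjustments): with a sharply bimodal distribution of changes, the average absolute change dwarfs the net average change.

        import numpy as np

        rng = np.random.default_rng(0)
        # Toy bimodal sample: roughly half the stations shifted down ~0.3,
        # the rest shifted up ~0.4 (deg C) -- illustrative values only
        changes = np.concatenate([
            rng.normal(-0.3, 0.05, 600),
            rng.normal(+0.4, 0.05, 621),
        ])
        print(np.mean(changes))          # net average change: ~ +0.06
        print(np.mean(np.abs(changes)))  # average magnitude: ~ 0.35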

  5. jae
    Posted Sep 17, 2007 at 9:57 AM | Permalink

    Very bad PR moves by Hansen, especially since his shenanigans will affect the public’s view of all of NASA. NASA does not need this problem, too.

  6. Posted Sep 17, 2007 at 10:14 AM | Permalink

    By my calculation, Hansen is 66 years old. Given his erratic behavior and intemperate remarks in the past year, perhaps it’s time he consider retirement.

    The down side to that is – to paraphrase Nixon – we won’t have Hansen to kick around anymore.

  7. stan palmer
    Posted Sep 17, 2007 at 10:29 AM | Permalink

    I wonder if Hansen et al could be convinced to change their tactics if they can be shown that they are counterproductive. Apple recently changed its marketing strategy for the iPhone. They had already harvested the true believers who were willing to wait hours outside of a store for the chance of buying a rather ordinary cell phone. To address the larger market of people without the same affiliation with the Apple brand, they recently reduced their prices by a large percentage. The Apple true believers were very unhappy but Apple already had their money.

    Hansen et al have been using a scientific authority strategy of infallibility in which statements are made ex cathedra. They have achieved significant success with this but have met resistance from people who are unimpressed by unqualified statements even from august scientists. Many of these people have advanced degrees of their own and are not to be convinced that peer reviewed scientific papers are without error. It would seem better to address the concerns of this group directly by admitting that there are doubts about AGW and inviting these people to become involved in solving them.

    Hansen’s Y2K blunder is a clear example of this. He is loath to admit that he made any error. I presume that this is because it is contrary to his ex cathedra strategy of scientific infallibility. It may also be because he was always the smartest kid in the class and expects to be the smartest person in the room, but the effect is the same. Instead of denying the blunder he should embrace it and use it to show that even astounding blunders of that magnitude cannot shake the case for AGW. The AGW believers won’t like this but Hansen already has them and they have nowhere else to go.

  8. Frank K.
    Posted Sep 17, 2007 at 10:37 AM | Permalink

    Hi Steve McIntyre,

    Could you try to summarize what NASA’s current calculation procedure actually is for the USHCN? Here is an explanation of SHAP vs FILNET at the NOAA web site:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    My understanding of NOAA’s procedure is:

    (1) Raw data
    (2) Remove outliers
    (3) Correct for TOBS
    (4) Correct for station history issues (moves) -> SHAP
    (5) Correct for missing data using surrounding stations -> FILNET
    (6) Correct for UHI

    So, is GISS now using some different procedure for step (5)? This is confusing to me.

    Thanks in advance,

    Frank K.

  9. Steve McIntyre
    Posted Sep 17, 2007 at 10:46 AM | Permalink

    #8. Previously they used data with Stage 4 adjustments; they changed to Stage 5 adjusted data. Both Stage 4 and Stage 5 adjustments have been done for years – so they could have used Stage 5 adjusted data in Hansen et al 2001, but for reasons that no doubt seemed valid at the time, chose to use Stage 4 adjusted data.

    I’m not arguing that SHAP is better or worse than FILNET – just that Hansen changed methods without providing any notice and then only after the switch had been spotted here and publicized here.

  10. crosspatch
    Posted Sep 17, 2007 at 10:49 AM | Permalink

    So, is GISS now using some different procedure for step (5)? This is confusing to me.

    I would speculate that they have lost confidence in their own method of filling in missing values and have “punted” the issue to NOAA, using NOAA’s filled data. Not sure which comes first, the chicken or the egg. By that I mean, have they selected data that more closely validates their belief, or is that simply happenstance now that they have decided to abandon their method of filling in the missing data?

  11. cce
    Posted Sep 17, 2007 at 10:52 AM | Permalink

    It’s pretty obvious (to me) that they decided to incorporate these changes to their methodology since they were in the process of updating the code for public consumption anyway.

    Back when all this started, they said that they would document the changes in their next paper on temperature analysis, and in their end-of-year summary. For those hanging on every thousandth of a degree change, this must be a big deal, and waiting precious months for the proper description of the changes and justification must be excruciating. For the other ~100% of the population, it won’t be such a problem.

  12. Posted Sep 17, 2007 at 11:01 AM | Permalink

    Re #8

    I pointed out previously on another thread here that the game has changed.

    While this happy crew of auditors is stuck auditing the past, Hansen has moved on to a new game with new rules, one of which is to throw surprises out to keep the auditors running in circles. Cheap entertainment for him!

    Rather than play Hansen’s new game, perhaps this happy crew needs to come up with its own game? Perhaps building a well audited, transparent climate model? There’s been enough reverse engineering of the code to enumerate the categories of algorithms required, data types to be used, etc… An auditable, transparent climate code built to the rigour of professional software development.

  13. Larry
    Posted Sep 17, 2007 at 11:06 AM | Permalink

    12, that’s a big job. That might be doable with some funding, but without some way to keep the pretend physicists out, it would become a mess trying to referee the effort. Too many people who think that they just discovered something that eluded Singer and Lindzen.

  14. JerryB
    Posted Sep 17, 2007 at 11:11 AM | Permalink

    When the FILNET step inserts estimates for missing data, it adds a flag to indicate it has done so: the letter M. In such cases, the GISS code does not use that data, and will use its own method for handling missing data.
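
    A minimal sketch of that kind of screening, assuming a simplified (value, flag) record format (illustrative; not the actual GISS reader):

        # Each entry is (monthly value, flag); FILNET marks its infilled
        # estimates with the flag "M"
        record = [(12.3, " "), (11.8, " "), (12.5, "M"), (13.0, " ")]

        # Drop FILNET's infilled estimates so GISS's own missing-data
        # method can be applied instead
        screened = [value if flag != "M" else None for value, flag in record]
        print(screened)  # [12.3, 11.8, None, 13.0]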

  15. David
    Posted Sep 17, 2007 at 11:12 AM | Permalink

    #12: It seems to me that if a model is bad or insufficient then:

    1. Its predictions will fail to materialize.

    and

    2. The predictions that do materialize will be able to be explained by other models.

  16. jae
    Posted Sep 17, 2007 at 11:12 AM | Permalink

    13, I don’t think Leon is talking about the same type of climate model that you are.

  17. Phil
    Posted Sep 17, 2007 at 11:26 AM | Permalink

    #14 So what method DOES GISS use then, if not FILNET? Isn’t that what releasing the code was all about? Now that they’ve changed the data on which much of the analysis at CA was done, it seems to me it makes it much more difficult, if not practically impossible, to verify WHAT method they actually used to fill in missing data. If you have any knowledge, please share it.

  18. Frank K.
    Posted Sep 17, 2007 at 11:34 AM | Permalink

    #14

    Thanks JerryB. That’s what I suspected – basically, as crosspatch noted, GISS have dispensed with their own FILNET algorithm (why?) in favor of NOAA’s. So, why didn’t they just say as much at the GISS website?! Why all the obfuscation? It makes no sense whatsoever…

    I really think the GISS climate group has become dysfunctional to the point that some reorganization is in order…

  19. Jean S
    Posted Sep 17, 2007 at 11:38 AM | Permalink

    #14 (JerryB): Interesting, but isn’t the difference between SHAP and FILNET in those infilled values? If GISS is not using those infilled values, where is the difference coming from? Is it possible that these differences are actually due to differences in USHCN corrections (old and new versions)?

  20. JerryB
    Posted Sep 17, 2007 at 11:41 AM | Permalink

    Re #17,

    Phil,

    The answer, presumably, is in the code that they released.

  21. Steve McIntyre
    Posted Sep 17, 2007 at 11:46 AM | Permalink

    Ruedy responded to my email as follows:

    Thanks for bringing to our attention that the term “magnitude of effect” might be interpreted as “size” rather than “relevance”, our obvious intent. We clarified our formulation correspondingly.

    They changed their website to read as follows (replacing magnitude with relevance):

    This August 2007 change received international attention via discussions on various blogs and repetition by some other media, with no graphs provided to show the relevance of the effect.

    Needless to say, this claim remains untrue. I sent the following letter (repeating the graphics shown above) requesting that the webpage be corrected, this time copying the Info Quality person at NASA:

    Your revised webpage http://data.giss.nasa.gov/gistemp/ contains the following incorrect statement: “This August 2007 change received international attention via discussions on various blogs and repetition by some other media, with no graphs provided to show the relevance of the effect.”

    This is incorrect and I request that you correct this statement. As I advised you previously, on Aug 6, 2007, at Climate Audit, http://www.climateaudit.org/?p=1868 , the two graphs below showed the relevance of the effect to U.S. temperature history and to U.S. stations.

    The first graph shown below showed that the error was relevant to U.S. temperature history – a topic specifically considered in Hansen et al 2001.

    The NASA website provides individual station histories, as well as U.S. and global estimates. The graph below showed the error was relevant to individual U.S. station histories.

    The claim that “no graphs provided to show the relevance of the effect” remains incorrect. Once again, please correct the false statement on the NASA webpage http://data.giss.nasa.gov/gistemp/ . This request is made under the Data Quality Act.

    Yours truly,
    Steve McIntyre

  22. Posted Sep 17, 2007 at 11:49 AM | Permalink

    GAAP always seems to be a moving target, and then there are the Financial Accounting rules vs. the Tax Accounting rules, and it seems more and more common to have to go back and restate prior years’ statements. The accountants are paid by those for whom they are accounting, so there would seem to be motivation to keep the clients happy. Obviously Climate Auditing has a slightly different set of incentives and no GACAPs.

    Are there now stations to be added to the 1221 that AW is having surveyed?

  23. Steve McIntyre
    Posted Sep 17, 2007 at 11:49 AM | Permalink

    #19, 20. There’s more to the FILNET changes relative to SHAP than simply filling data (although it does also fill data). For example, there’s obviously more going on with the Detroit Lakes changes than filling a few missing years.

    The explanation does NOT lie in the released code. These adjustments are made at NOAA, and we haven’t even begun to explore that set of adjustments yet.

  24. JerryB
    Posted Sep 17, 2007 at 11:54 AM | Permalink

    Re #18,

    Frank K,

    No, they continue to dispense with FILNET’s estimates of missing data, and use their own method.

    Re #19,

    Jean S,

    According to one USHCN description:

    “The FILNET program also completes the data adjustment process for stations that moved too often for the SHAP program to estimate the adjustments needed to debias the data.”

    However, my interpretation is that most of the differences are as you surmise, and as I discussed in my post last night in http://www.climateaudit.org/?p=2049 comment 79.

  25. Frank K.
    Posted Sep 17, 2007 at 12:01 PM | Permalink

    #24

    OK – Thanks JerryB. Sorry for the confusion. So the differences seen recently are from “other corrections” – and presumably these “corrections” are reflected in the newly-released GISS codes? It is all becoming clear now…not ;^)

    Frank K.

  26. JerryB
    Posted Sep 17, 2007 at 12:08 PM | Permalink

    Frank K,

    To save typing, let me refer you to the comment in the other thread to which I referred Jean S.

  27. Steve McIntyre
    Posted Sep 17, 2007 at 12:10 PM | Permalink

    #25. Nope. As I said above, the most recent differences come from changing sources – from SHAP to FILNET. There’s nothing on this in the NASA codes.

    Jerry, I agree with this diagnosis of yours:

    “The FILNET program also completes the data adjustment process for stations that moved too often for the SHAP program to estimate the adjustments needed to debias the data.”

    Many USHCN stations are identical between SHAP and FILNET. Detroit Lakes (as Jerry observed first) has, by chance, a large adjustment. But it illustrates that the clause identified by Jerry is not merely passive. What the adjustment actually does is another story.

  28. John A
    Posted Sep 17, 2007 at 12:30 PM | Permalink

    Let me ask an ignorant question: What does the Data Quality Act actually specify as an improper change to fundamental data and what are the avenues for getting the data producers to revoke the improper change?

  29. Bob Koss
    Posted Sep 17, 2007 at 12:32 PM | Permalink

    This may be unimportant, but I noticed a change in their description of their two-stage modification.

    9-17 version.

    In step 1, if there are multiple records at a given location, these are combined into one record; in step 2, the urban and peri-urban (i.e., other than rural) stations are adjusted so that their long-term trend matches that of the mean of neighboring rural stations. Urban stations without nearby rural stations are dropped.

    9-12 version.

    in stage 1 we try to combine at each location the time records of the various sources; in stage 2 we adjust the non-rural stations in such a way that their longterm trend of annual means is as close as possible to that of the mean of the neighboring rural stations. Non-rural stations that cannot be adjusted are dropped.

    9-12 says they try to combine stations; 9-17 says they do combine them. Was there a rule change about this, or was it just reworded?
    9-12 has two categories, non-rural and rural, with non-rural dropped if there is no close rural station. 9-17 has three categories, urban, peri-urban and rural, with urban being dropped if there is no close rural station, but no mention of peri-urban being dropped.

  30. steven mosher
    Posted Sep 17, 2007 at 12:57 PM | Permalink

    Someone needs to let folks know that the class 1 & class 2 classifications will be changing in the next two days.

    Just kidding.

    Now, imagine what people would say if Anthony suddenly changed the input to John V.’s analysis.

  31. Jean S
    Posted Sep 17, 2007 at 1:14 PM | Permalink

    Thanks Jerry (#24), I had missed that. Since this is still slightly confusing, please confirm whether I understood everything correctly. You say that there are some differences between the SHAP and FILNET versions other than the infilled values (which are not used by GISS anyway), but your estimate is that the differences we are seeing in the GISS versions are mainly due to the different SHAP (and possibly TOB) corrections (1999 vs. 2007) and only partly due to the switch from the SHAP to the FILNET version?

  32. Frank K.
    Posted Sep 17, 2007 at 1:16 PM | Permalink

    #26, #27

    Thanks again Jerry and Steve. I read the link in Jerry’s thread (see #24) and it has now become clearer what was done.

    I suppose that as long as you have access to the raw data, you can judge for yourself whether the adjustments for a given station appear to be appropriate.

  33. Posted Sep 17, 2007 at 1:17 PM | Permalink

    Is this paper’s conclusion (use the raw USHCN data set) valid?

    “An Introduced Warming Bias in the USHCN Temperature Database”
    Reference: Balling Jr., R.C. and Idso, C.D. 2002. Analysis of adjustments to the United States Historical Climatology Network (USHCN) temperature database. Geophysical Research Letters 10.1029/2002GL014825.

    “What was done
    The authors examined and compared trends among six different temperature databases for the coterminous United States over the period 1930-2000 and/or 1979-2000.

    “What was learned
    For the period 1930-2000, the RAW or unadjusted USHCN time series revealed a linear cooling of 0.05°C per decade that is statistically significant at the 0.05 level of confidence. The FILNET USHCN time series, on the other hand – which contains adjustments to the RAW dataset designed to deal with biases believed to be introduced by variations in time of observation, the changeover to the new Maximum/Minimum Temperature System (MMTS), station history (including other types of instrument adjustments) and an interpolation scheme for estimating missing data from nearby highly-correlated station records – exhibited an insignificant warming of 0.01°C per decade.

    “Most interestingly, the difference between the two trends (FILNET-RAW) shows “a nearly monotonic, and highly statistically significant, increase of over 0.05°C per decade.” With respect to the 1979-2000 period, the authors say that “even at this relatively short time scale, the difference between the RAW and FILNET trends is highly significant (0.0001 level of confidence).” Over both time periods, they also find that “the trends in the unadjusted temperature records [RAW] are not different from the trends of the independent satellite-based lower-tropospheric temperature record or from the trend of the balloon-based near-surface measurements.”

    “What it means
    In the words of the authors, the adjustments that are being made to the raw USHCN temperature data “are producing a statistically significant, but spurious, warming trend in the USHCN temperature database.” In fact, they note that “the adjustments to the RAW record result in a significant warming signal in the record that approximates the widely-publicized 0.50°C increase in global temperatures over the past century.” It would thus appear that in this particular case of “data-doctoring,” the cure is worse than the disease. In fact, it would appear that the cure IS the disease.

    “Our prescription for wellness? Withhold the host of medications being given and the patient’s fever will subside.

    “Reviewed 29 May 2002”

  34. Posted Sep 17, 2007 at 1:40 PM | Permalink

    JohnA,

    The Data Quality Act mandated that procedures be established by the agency providing the data. NASA guidelines provide for the following:

    D.1. Requesting Correction of Information by NASA If an affected person believes that information disseminated by NASA does not meet the guidelines for quality (utility, objectivity, and integrity), he or she may seek correction of the information. Requestors wishing to seek correction of information under NASA’s information quality guidelines must follow the procedures outlined below:
    • Requests must be in writing, and may be submitted by regular mail, electronic mail, or fax. [Final guidelines will include explicit submission mechanisms]
    • Requests must indicate that the correction of information is requested under NASA’s information quality guidelines
    • Requests must include the requestor’s name, phone number, preferred mechanism for receiving a written response from NASA (fax, e-mail, regular mail) with applicable contact information, and organizational affiliation (if any.)
    • Clearly describe the information that the requestor believes needs correcting, and include the name of the report or information source, the location if electronic, and the date of issuance.
    • State specifically what information should be corrected and what changes to the information, if any, are proposed. If possible, provide supporting evidence to document the claim.

    Steve appears to be following those rules and the NASA response is in accord with the guidelines.

    In the event that NASA declines to “fix” something, there is an appeal process outlined within the standards.

  35. Posted Sep 17, 2007 at 1:58 PM | Permalink

    re # 12

    Sorry, I was referring to an auditable, transparent “Hansen”-like code, not a GCM code. Since a lot of people have put a lot of effort into understanding what it does and, more importantly, what it should do, documenting the functionality of an auditable, transparent algorithm flow, and building it cleanly to CMMI level 3 (or an applicable software development quality standard), might be the eventual goal of this happy crew.

    Now back to lurking.

  36. JerryB
    Posted Sep 17, 2007 at 2:00 PM | Permalink

    Re #31,

    Jean S,

    Yes. By the way, TOB adjustments (for any given station/month) have been virtually constant through successive USHCN editions, unlike SHAP adjustments.

  37. Steve Reynolds
    Posted Sep 17, 2007 at 2:11 PM | Permalink

    NASA: This August 2007 change received international attention via discussions on various blogs and repetition by some other media, with no graphs provided to show the relevance of the effect.

    My guess as to their next change: ….with no graphs provided to show the relevance of the effect on global temperatures.

  38. Boris
    Posted Sep 17, 2007 at 3:09 PM | Permalink

    What a waste of time this was.

  39. jae
    Posted Sep 17, 2007 at 3:15 PM | Permalink

    Boris, I guess auditing isn’t your thing. Maybe you could loan me some money for an oil well project, on the promise that I’ll pay you double within a year? Of course, I can’t give you any more details, because you might beat me to the punch.

  40. moptop
    Posted Sep 17, 2007 at 3:16 PM | Permalink

    CMM Level 3? I thought NASA was CMM Level 5

  41. Steve McIntyre
    Posted Sep 17, 2007 at 3:17 PM | Permalink

    I agree that Hansen has wasted our time. On day 1, at my initial request, he should have provided source code, so that we did not have to try to solve a variety of crossword puzzles. When he made each change, he should have reported the change together with its impact, rather than only reporting the changes after the changes had been identified here. Hansen should not have changed his accounting methodology especially since the only purpose of the changes seems to be to accomplish a statistically insignificant rearrangement of 1934 and 1998.

    It’s not just us here. Anthony Watts reports the frustration of one of his volunteers trying to figure out what was happening with Walhalla, as the data kept changing. It never occurred to him that Hansen was changing the books.

    Sorting out this nonsense has prevented even getting to the starting line of a statistical assessment of the ROW data. Yes, Hansen’s wasted a lot of people’s time. That’s who you should be blaming, Boris.

  42. harold
    Posted Sep 17, 2007 at 3:22 PM | Permalink

    Yas Comrade Boris.
    You and your Masters are very clever.

  43. Boris
    Posted Sep 17, 2007 at 3:28 PM | Permalink

    Hansen should not have changed his accounting methodology especially since the only purpose of the changes seems to be to accomplish a statistically insignificant rearrangement of 1934 and 1998.

    Yes, because Hansen has continually hawked 1998 as the warmest year in the US, right? Oh, no. As you well know, he talked about 1998 vs. 1934 as a statistical tie. In fact, the only people who cared about 1934 vs 1998 in the U.S. were right-wing blogs, which seemed, en masse, to confuse global and US temps. Can you find me an example of anyone, anywhere touting the fact that 1998 was the warmest year in the United States? Hansen did not do so, so your theory that he must be trying to get 1998 back in first place makes no sense. He’s been consistent in pointing out the statistical tie between the two years.

  44. Steve McIntyre
    Posted Sep 17, 2007 at 3:32 PM | Permalink

    #43. Boris, I agree 100% that Hansen’s wasted a lot of people’s time.

  45. John Lang
    Posted Sep 17, 2007 at 3:35 PM | Permalink

    Hansen releases his code and then, promptly, starts using a completely different dataset and algorithm.

    What does that say?

    1. He purposely wasted everyone’s time in trying to analyze the code.
    2. His code always was so full of errors and bias that he now must use another code set-up and algorithm.

    Solution?

    1. Declare Victory. Declare the GISS code dead and unworthy of further analysis since it was full of so many errors anyway.

    2. Move onto the NOAA dataset and adjustments.

  46. Anthony Watts
    Posted Sep 17, 2007 at 3:39 PM | Permalink

    RE43, Boris says: “Can you find me an example of anyone, anywhere touting the fact that 1998 was the warmest year in the United States?”

    Wow, such an easy mark. Sure thing Boris, here are NOAA’s press releases:

    1998 WARMEST YEAR ON RECORD, NOAA ANNOUNCES
    http://www.publicaffairs.noaa.gov/releases99/jan99/noaa99-1.html

    1999: U.S. EXPERIENCES SECOND WARMEST YEAR ON RECORD; GLOBAL TEMPERATURES CONTINUE WARMING TREND
    http://www.publicaffairs.noaa.gov/releases99/dec99/noaa99083.html

    and before that:

    1997 WARMEST YEAR OF CENTURY, NOAA REPORTS
    http://www.publicaffairs.noaa.gov/pr98/jan98/noaa98-1.html

  47. Posted Sep 17, 2007 at 3:39 PM | Permalink

    I believe it is human nature to want to trust people, especially those who we think should be above reproach. Even when something looks like a duck and walks like a duck, we often continue to see something other than the duck. If this issue were not so important, we would have no issue with recognising the duck as a duck, pure and simple.

    The behaviour of not just Hansen, but Jones and Mann and others when confronted with what is a perfectly reasonable request for information should alert us to the fact that there is a real problem here.

    Is it a conspiracy? No. These people never understood that their methods would ever come under such scrutiny and they have slowly but surely realised that their work is not up to scrutiny and so they defend their reputations.
    The institutions that have taken their work and used it to convince governments that this is a serious problem are also defending their reputations, having accepted the work of these scientists as being of good quality. Governments are also defending their reputations in having taken the word of institutions like the IPCC.

    It is a house of cards.

  48. Posted Sep 17, 2007 at 4:06 PM | Permalink

    re 40

    Just because one can do CMMI level 5 doesn’t mean level 5 quality is required for this use. Anyhoo, NASA has different ways of rating software for reliability; manned spaceflight-rated software gets the most rigorous testing, etc., compared with prototype code (a la Hansen 🙂

    So what is the appropriate quality of code (in the appropriate rating scale) for “spaceship earth”?

  49. steven mosher
    Posted Sep 17, 2007 at 4:27 PM | Permalink

    RE 48.

    Bajingo! Leon wins. What Hansen is proposing, folks, is a “flight control” system for climate earth: a CO2 limiter. That control system needs the same kind of testing that the typical fly-by-wire quad-redundant FCS has.

  50. Posted Sep 17, 2007 at 4:27 PM | Permalink

    Leon Palmer September 17th, 2007 at 4:06 pm,

    How many lives could you save with $1 Gigabucks?

    That is how well rated the software ought to be, i.e. at least to FDA/FAA standards. The standards are clumsy and expensive. They do assure a certain level of quality.

  51. Pat Frank
    Posted Sep 17, 2007 at 4:35 PM | Permalink

    #41 “Sorting out this nonsense has prevented even getting to the starting line of a statistical assessment of the ROW data.”

    Maybe that was the point.

    Not wanting to put you on the spot, Steve M., but wearing your auditor hat, what series of accounting stratagems would alert you to a conscious attempt at fraud? Can you provide a short list of what an auditor would look for?

    Paul, #47, I think you’re being too kind. Hansen and Mann are trained physicists. They each know just exactly what their mathematical methods do. It’s not a formal conspiracy, no. But I’ve pretty much abandoned the thought that they’re innocently negligent. Instead, I think they decided early on that they know what the answer is, and then jimmied their methods to produce that answer, confident that they’d be rescued by confirmatory facts down the line. But nature is cruelly indifferent to occult knowledge claims, and the passionate interest of other people means potentially embarrassing scrutiny always happens. Honest mistakes don’t survive the scrutiny, but they survive the embarrassment. Dishonest mistakes don’t survive either eventuality. Hansen and Mann are being hoisted by their own petard, but the fuses were lit by Steve McIntyre.

  52. Posted Sep 17, 2007 at 4:39 PM | Permalink

    BTW for true rigor data must be traced all the way back to NIST.

    Love to see those error bars. Heh.

  53. Posted Sep 17, 2007 at 4:47 PM | Permalink

    I’m a little thick here so maybe some one could help me.

    If rural stations determine the slope of the curve, what is the purpose of including all the other stations if you want to know the slope of the curve?

  54. Larry
    Posted Sep 17, 2007 at 4:48 PM | Permalink

    51,

    confident that they’d be rescued by confirmatory facts down the line

    I think that’s correct. According to theory, temperatures are supposed to be taking off robustly. That that didn’t happen has required the construction of delayed-response theories.

    I’m sure that in 1990, they were cocksure that things would be taking off exponentially right now. The funny thing about the usufruct memo is that the less the earth obeys the predictions, the more upset they seem to get.

  55. dirac angestun gesept
    Posted Sep 17, 2007 at 4:55 PM | Permalink

    #12: While this happy crew of auditors is stuck auditing the past, Hansen has moved on to a new game with new rules, one of which is to throw surprises out to keep the auditors running in circles.

    That strongly reminded me of another eerily similar quote:

    “The aide said that guys like me were ‘in what we call the reality-based community,’ which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality.’ I nodded and murmured something about enlightenment principles and empiricism. He cut me off. ‘That’s not the way the world really works anymore,’ he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors . . . and you, all of you, will be left to just study what we do.'”

  56. Posted Sep 17, 2007 at 5:14 PM | Permalink

    Re. #46

    First paper declares a tie between 1934 and 1998, so that one is a wash, though technically it counts because it was as warm as the warmest year on record.

    Third paper doesn’t have any mention of US temps….

    But, second paper gets the prize!!! Quote:

    For the calendar year 1999, the Commerce Department’s National Oceanic and Atmospheric Administration projects that the United States will have experienced its second warmest year on record since 1900 with an average for 1999 of 55.7 degrees F. This follows 1998’s all time record of 56.4 degrees. The values for both years exceed those of the warm decade of the 1930s.

  57. Anthony Watts
    Posted Sep 17, 2007 at 5:25 PM | Permalink

    RE57 Sonic, I purposely posted both the 1998 and 1999 PRs to illustrate that something “morphed” within NOAA’s PR engine. They did not actually issue a PR with 1998 at the top of the US leaderboard as such, but in 1999 it somehow became that way in the release talking about 1999.

  58. Posted Sep 17, 2007 at 5:46 PM | Permalink

    Yep, and it has been the alarmist talking point du jour ever since.

  59. Robert Wood
    Posted Sep 17, 2007 at 6:05 PM | Permalink

    #57 Just for fun:

    Hey, it’s getting colder 🙂

    Sorry couldn’t resist.

    I still haven’t had explained to me how the seasonal variations in CO2 concentration create the seasons.

  60. Posted Sep 17, 2007 at 6:06 PM | Permalink

    AW, I figured you had a reason for posting a link with no mention of the supposed burning hell that was 1998, I just didn’t know what it was.

    PS. Sorry I haven’t got the Fresno Airport pics yet. I’m lame.

  61. Robert Wood
    Posted Sep 17, 2007 at 6:21 PM | Permalink

    Ode to Hansen, Jones and Mann:

    So much data,
    So little information

  62. Alan D. McIntire
    Posted Sep 17, 2007 at 6:30 PM | Permalink

    Regarding #51: Use a search engine to look up “Benford’s rule fraud”. In the real world, numbers follow a roughly logarithmic distribution. When the accounting books deviate significantly from this distribution, there’s an indication of possible fraud.

    I don’t know how Benford’s rule could be applied in this case, since we’re dealing with temperature differences of a few tenths of a degree.
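
    For the curious, here is a quick sketch of checking leading digits against Benford’s distribution (stand-in numbers; purely illustrative):

        import math
        from collections import Counter

        def first_digit_freqs(values):
            """Observed frequency of each leading digit 1-9."""
            digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
            counts = Counter(digits)
            return {d: counts[d] / len(digits) for d in range(1, 10)}

        # Benford's law: P(d) = log10(1 + 1/d)
        benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

        sample = [0.23, 0.41, 0.12, 0.35, 0.18, 0.29]  # stand-in figures
        obs = first_digit_freqs(sample)
        for d in range(1, 10):
            print(d, round(obs[d], 2), round(benford[d], 2))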

  63. PabloM
    Posted Sep 17, 2007 at 6:41 PM | Permalink

    I remember that – Benford’s Law and Zipf’s Law. Supposedly you can detect fraud through the distribution of the digits.

  64. Boris
    Posted Sep 17, 2007 at 6:58 PM | Permalink

    46:

    Did you read your first link?

    “The United States average temperature in 1998 was 54.62°F (12.57°C), which placed the year in a virtual tie with1934 as the warmest year in records dating to 1895.”

    Your second link is a much better example. So 1 for NOAA, 1,763,319 (est.) right wing blogs and radio 🙂

  65. Boris
    Posted Sep 17, 2007 at 6:59 PM | Permalink

    Oh, and 0 for Hansen.

  66. Sam
    Posted Sep 17, 2007 at 7:04 PM | Permalink

    Hansen’s recalculation of temps during the brief interlude since admitting the Y2K issue, where he has, once again, changed the leaderboard, leads me speechless (quite a feat for anyone who knows me). How can any professional, regardless of his/her field of specialty, perform in this manner and expect to continue to be perceived as being at the top of his/her discipline? In addition to GAAP, what about the requirements placed upon the private sector to observe the strictures of Sarbanes-Oxley? As one who works in an industry intensely under the SOX hammer, I am left incredulous that Hansen has the temerity to try and pull this off.

    This should immediately bring sanctions against NASA. A full-blown audit of posted results and analysis of methodology is in order. If this doesn’t occur, then anything subsequent that comes out of this agency or Hansen should be completely disregarded.

  67. Sam
    Posted Sep 17, 2007 at 7:07 PM | Permalink

    #70 Sorry, “leads me speechless” should be “leaves me speechless.”

  68. aurbo
    Posted Sep 17, 2007 at 7:09 PM | Permalink

    Re #57:

    Hansen didn’t have to hype 1998, NOAA was doing it for him. I’m curious where Boris gets his information that anyone critical of Mann or Hansen is a member of the right wing? Apparently his view on science is colored by politics. And on the subject of Hansen discounting the US’s lack of a strong positive temperature trend because the US represents a relatively small fraction of this same non-representative of the ROW area, the western US? How come he’s so eager to accept the Mann (hockey stick) analysis when the Briffa proxies represent even a smaller fraction of the ROW? That data seems to have been harvested not from

  69. Larry
    Posted Sep 17, 2007 at 7:21 PM | Permalink

    72

    I’m curious where Boris gets his information that anyone critical of Mann or Hansen is a member of the right wing?

    Three guesses.

  70. aurbo
    Posted Sep 17, 2007 at 7:22 PM | Permalink

    Re #43:

    Hansen didn’t have to hype 1998, NOAA was doing it for him. I’m curious where Boris gets his information that anyone critical of Mann or Hansen is a member of the right wing? Apparently his view on science is colored by his politics. And on the subject of Hansen discounting the US’s lack of a strong positive temperature trend because the US represents a relatively small fraction of the ROW, how come he’s so eager to accept the Mann (hockey stick) analysis when the Briffa proxies represent even a smaller fraction of that same non-representative area…the Western US? Mann’s data seems to have been harvested not from Pinus longaeva but from Prunus cerasus. AGW should be based on science, not faith.

  71. Posted Sep 17, 2007 at 7:52 PM | Permalink

    Boris, re #69:

    Oh, and 0 for Hansen.

    Here is your challenge:

    Can you find me an example of anyone, anywhere touting the fact that 1998 was the warmest year in the United States?

    You didn’t specify it had to be Hansen.

  72. Steve McIntyre
    Posted Sep 17, 2007 at 8:14 PM | Permalink

    #72. Well, one reason why Hansen wasn’t touting 1998 when it happened was that his results then had 1934 as about 0.6 deg C warmer than 1998. See Hansen et al 1999. Between 1999 and 2001, Hansen made some adjustments and 1998 warmed about 0.6 deg C relative to 1934, creating a virtual tie in 2001. Hansen was for 1934 before he was against it.

  73. John Goetz
    Posted Sep 17, 2007 at 8:20 PM | Permalink

    It just occurred to me (and maybe I am slow) that Hansen’s bias method also creates a moving target, although a bit more subtle than what we see here. This is because the quarterly and annual estimates he calculates for the latest scribal record depend on averages across the record, which is of course changing as new data is added to the record.

    Clever…
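
    A toy illustration of that moving target (not GISS’s actual bias method, just the general effect): if a past year’s estimate is filled in from record-wide monthly means, appending a new year of data silently changes the estimate for the past year.

        import numpy as np

        def fill_annual(monthly):
            """Annual means, with missing (NaN) months replaced by that
            calendar month's mean over the whole record."""
            monthly = np.asarray(monthly, dtype=float)   # shape (years, 12)
            climatology = np.nanmean(monthly, axis=0)    # per-month means
            filled = np.where(np.isnan(monthly), climatology, monthly)
            return filled.mean(axis=1)

        rng = np.random.default_rng(1)
        data = rng.normal(10, 2, size=(30, 12))
        data[5, 7] = np.nan                      # a missing month in year 5

        before = fill_annual(data)[5]
        data = np.vstack([data, rng.normal(10, 2, size=(1, 12))])
        after = fill_annual(data)[5]
        print(before, after)  # year 5's estimate shifts when a year is appended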

  74. D. Patterson
    Posted Sep 17, 2007 at 8:20 PM | Permalink

    Re: #77

    [QUOTE]
    NASA Home > Life on Earth > Looking at Earth…
    Earth Gets a Warm Feeling All Over…02.08.05
    [….]
    Globally, 1998 has proven to be the warmest year on record, with 2002 and 2003 coming in second and third, respectively. “There has been a strong warming trend over the past 30 years, a trend that has been shown to be due primarily to increasing greenhouse gases in the atmosphere,” Hansen said.
    http://www.nasa.gov/vision/earth/lookingatearth/earth_warm.html
    [UNQUOTE]

  75. jws
    Posted Sep 17, 2007 at 8:34 PM | Permalink

    “The five warmest years over the last century occurred in the last eight years,” said James Hansen, director of NASA GISS. They stack up as follows: the warmest was 2005, then 1998, 2002, 2003 and 2004.

    http://www.nasa.gov/vision/earth/environment/2005_warmest.html

  76. Posted Sep 17, 2007 at 8:35 PM | Permalink

    And why did the data change, John G.? Is it a Hansen conspiracy? Or did it change because it is FUBAR? In the ’90s the AMS was finding 5 to 16 degree errors in the automated systems. In ’04 they were going to correct for errors of 1-plus degrees. This is an audit, right? Where is the raw data to follow the corrections, and the corrections to the corrections? There has been more error in the digital age than in the old days. Corrections are being made from the wrong end.

  77. Posted Sep 17, 2007 at 8:38 PM | Permalink

    I’m curious where Boris gets his information

    Natasha mostly. Bullwinkle in a pinch.

  78. D. Patterson
    Posted Sep 17, 2007 at 8:44 PM | Permalink

    Re: #78

    Fooled me. I thought it was Fractured Fairy Tales. The pea under the mattress was always a cliffhanger.

    You don’t suppose Dr. Peabody is still around with his faithful Boy?

  79. Steve McIntyre
    Posted Sep 17, 2007 at 9:00 PM | Permalink

    In 2000, Hansen said:

    And we can predict with reasonable confidence that the record annual and decadal temperatures for the contiguous 48 U.S., set in the 1930s, will soon be broken.

    It only took about one year for Hansen’s prediction to come true, even though 2001 wasn’t very warm. Hansen et al 2001 did a major adjustment to US records and changed 1998 from being about 0.6 deg C cooler than 1934 to a statistical dead heat. 1998 continued to make small gains, and by 2005 the 1934 record was finally broken – by a rejuvenated 1998.

    Left – from Hansen et al 1999; right – from Hansen et al 2001.

    And oh yes, help me here, who is it that is talking about the “record annual” temperatures, now said to be of no interest?

  80. D. Patterson
    Posted Sep 17, 2007 at 9:28 PM | Permalink

    Re: #80

    Global Warming, Playing Dice, and Berenstain Bears
    By James Hansen — January 2000

    In the summer of 1988, I testified to the U.S. Senate that the world was getting warmer and that the dominant cause was probably human-made greenhouse gases.

    The Senate, and the public, wanted to know the cause of parched conditions in the Midwest, where the Mississippi had practically dried up. I said that our numerical climate model indicated a tendency for more frequent and severe droughts as the world became warmer, but a specific drought was a matter of chance, dependent on fluctuating meteorological patterns.

    Although that testimony increased public awareness of global warming, it was soon evident that I had communicated poorly. On a Jeopardy quiz show the “answer” was that I had said the Midwest drought was caused by the greenhouse effect.

    People have a predilection for deterministic explanations of climate fluctuations. Even Albert Einstein abhorred the notion of chance in nature, saying, “God does not play dice”. But the science of quantum mechanics, with Einstein a major contributor, proved that uncertainty plays a big role in physics and in the world.

    One result is chaos in weather and climate. Temperature and precipitation patterns fluctuate in ways unpredictable beyond a few weeks at most. Yet climate, the average weather, can be changed in a deterministic way by a “forcing”, such as an increase of atmospheric gases.

    I tried to explain forcings and chaos with colored dice. One die represented normal climate for 1951-1980, with equal chances for warm, average and cool seasons. The other die was “loaded” due to forcing by greenhouse gases, such that the chance of an unusually warm season increased from 33 to about 60 percent, as calculated by our climate model for the late 1990s.

    When Sen. Albert Gore asked me to testify before the Senate again, in 1989, I wanted to explain the greenhouse effect better. I held up a one-watt Christmas tree bulb, saying that the human greenhouse effect is heating the Earth by an amount equal to two of these bulbs over every square yard of the Earth’s surface. In 100 years, this heating could double or quadruple, depending on how fast we put greenhouse gases into the air.

    This added heating intensifies dry conditions, when and where it is dry. However, over oceans and wet land, added heating increases evaporation, which eventually falls as rain. So my testimony was that global warming, paradoxically, increases both extremes of the hydrologic cycle. It causes more intense droughts and forest fires, but, at other places and times, it causes heavier rainfall, more intense storms fueled by latent heat of water vapor, and greater flooding.

    Unfortunately, this discussion was lost in a tempest caused by alterations to my testimony inserted by the White House Office of Management and Budget. The brouhaha may have helped keep attention on the global warming topic, but it failed to illuminate the scientific issues and uncertainties. And the public global warming “debate” continues to contrast opposite intransigent positions, rather than exemplifying how science research really works.

    I suggest to students that they view the debate in the media the way young Berenstain Bear viewed the botched bicycle lessons of Papa Bear: “this is what you should not do”. A good scientist does not act like a lawyer defending the position of a client.

    The fun in science is its objectivity. First exhilaration occurs when a young scientist compares alternative ideas or models with observations and discovers how something works. When the observations are of the Earth’s climate, it is awesome to think that our models can capture and predict the effect of the sun, volcanoes and greenhouse gases. But awe is tempered by realization that the “laboratory” is home to billions of people and wildlife.

    What have we learned about the greenhouse effect in 10 years? Bad news and good news. The bad news is that the world is warming, as predicted. The frequency of unusually warm seasons has increased to about 60 percent. Record warm temperatures occur more often than record cold. The year 1998 was, on global average, the warmest year in the history of instrumental data.

    Remarkable climate extremes have occurred recently: the Chicago heat wave of 1995, a run of 29 days of 100°F temperatures in Dallas in 1998, floods in the Midwest in 1993 and 1997 and in the Southeast in 1999. The high natural variability of climate prevents unique association of these events with global warming. But a quantitative index of temperature and moisture changes reveals that climate extremes are increasing at most places in the sense predicted for global warming. And we can predict with reasonable confidence that the record annual and decadal temperatures for the 48 contiguous United States, set in the 1930s, will soon be broken.

    The good news is that the growth rate of greenhouse gases has slowed. In the 1980s the rate was four more light bulbs per square yard in 100 years. Despite increased population and energy use, the rate has slowed to three more light bulbs per 100 years, rather than increasing to the five bulbs that were in the most popular climate forcing scenarios. Credit for the slowdown belongs in part to the public, legislatures, and businesses that phased out chlorofluorocarbons. Also methane and carbon dioxide growth rates slowed, for reasons that are not well understood and are perhaps only temporary.

    What’s to be done? First, we must avoid providing “lessons in what not to do”. Immediate, economically-wrenching constraints on energy use have negligible effect on climate forcings. But the other extreme, denial of the greenhouse problem, is equally foolish. Climate change is real, and it is a complex problem.

    Climate will change in the next few decades, regardless of our actions. But we can slow the planetary experiment as we develop better understanding. We need bi-partisan common sense strategies to encourage greenhouse benign technologies that continue the positive changes in our long term energy use trajectory. This is good for business and it will provide us with the option to eventually stabilize climate, thus maintaining a healthy planet for humans and bears.

    References
    Hansen, J.E., M. Sato, A. Lacis, R. Ruedy, I. Tegen, and E. Matthews 1998. Climate forcings in the industrial era. Proc. Natl. Acad. Sci. 95, 12753-12758.
    Hansen, J. 1999. Climate change in the new millenium. Columbia University State-of-the-Planet Conference, November 15, 1999.

    http://www.giss.nasa.gov/research/briefs/hansen_08/

    What, or who, is loading the dice? Is it the anthropogenic forcing of the climate, a loading of the statistical records themselves, AGW, a combination, or something else? It is rather interesting to note that Hansen himself suggested the climate records he is working upon can be described as loaded dice, and Hansen has been observed adjusting those same records in a direction which makes his predictions come true in those records.

  81. Posted Sep 17, 2007 at 9:44 PM | Permalink

    I love it when people quote Einstein! There is always a rebuttal.

    “The secret to creativity is knowing how to hide your sources.” Albert E.

  82. Anthony Watts
    Posted Sep 17, 2007 at 9:57 PM | Permalink

    Re64 Boris did you read 57?

    “So 1 for NOAA, 1,763,319 (est.) right wing blogs and radio” So no left wing blogs ever mentioned that? Wow, your powers of observation are truly incredible.

    BTW, I suppose you really don’t understand how press releases work, do you? All that independent thought going on in the media – they wouldn’t just repeat a press release verbatim, would they?

  83. henry
    Posted Sep 17, 2007 at 10:01 PM | Permalink

    A question about the Data Quality Act: shouldn’t an untouched, unbiased, un-adjusted historical record be available (so that if the “new” adjustments are found to be inaccurate, we can return to it)?

    How can any future scientist, student, or researcher be sure that the history is good? How could someone who writes an article, or bases their thesis on the data be sure that their conclusions are built on solid data?

  84. dirac angestun gesept
    Posted Sep 17, 2007 at 10:41 PM | Permalink

    #59: I still haven’t had explained to me how the seasonal variations in CO2 concentration create the seasons.

    Well, first of all you have to forget all that stuff about the spinning Earth being tilted at an angle of twenty-something degrees to the ecliptic, and cooling during its Northern winters, warming during its summers. Forget about the Sun. Or better still, think of the Earth as being flat.

    Think instead about CO2 in the atmosphere. During summer, when plant photosynthesis is maximum, plants absorb CO2 from the atmosphere, and reduce its levels, and thus reduce global warming, resulting in a subsequent cold winter. During winter, when plant photosynthesis is at a minimum, CO2 levels tend to rise, resulting in global warming – and the subsequent warm summer.

    See. Quite easy to explain.

    You’ll probably want to know about anthropogenic seasonal variation too. That is, how humans manage to create the seasons. And this happens because, during winter, humans tend to light fires to keep warm, and these fires generate CO2, which causes global warming, and results in warm summers. During these warm summers, humans stop burning fires, and the excess CO2 is absorbed by plants, reducing atmospheric CO2 concentrations, and bringing global cooling, and the subsequent winter.

    The result, as I’m sure you’ll see, is that the seasonal cycle of spring-summer-autumn-winter is entirely created by human activity, and if humans would simply stop burning fires in winter, this seasonal variation would vanish, and terrestrial surface temperatures would remain more or less constant.

    Convinced? I’m sure you are. If you want to save the world from the endless cycle of the destruction of the creation, all you have to do is to not turn on your heating system when temperatures fall 10 or 20 degrees C below zero. It would also help if you stayed outside, and didn’t wear any clothes, or ate anything. You know by now that it makes no sense to do stupid things like that, right?

  85. Taipan
    Posted Sep 18, 2007 at 12:13 AM | Permalink

    Re 12. Leon, your point is valid to a certain extent.

    However, the issue then becomes one of credibility.

    Climate Audit is essential because it is one of the few places where papers are examined and looked at in detail.

    As a lot of the detail comes down to statistics, there are few better places.

    As Steve says, completely correctly – it doesn’t matter where things fall, only that they are right.

    The financial markets are brutal on failures of credibility. Try getting a loan when previous prospectuses have proved to be false, whether through error or recklessness.

    At this point we are starting to see financial markets factor in these issues based on these studies. The markets will bite back if the underlying work is incorrect.

    It is one thing where forecasts are made correctly and the outcome ends up different; another thing entirely where forecasts are made erroneously, or to achieve a certain outcome.

  86. Posted Sep 18, 2007 at 12:18 AM | Permalink

    Re: #33

    I haven’t noticed any comments responding to comment #33, which I think raises a very valid question. The following figure is from the Balling / Craig paper referred to in #33:

    This is what the authors say about the figure:

    The annual difference between the RAW and FILNET record (Figure 2) shows a nearly monotonic, and highly statistically significant, increase of over 0.05 C/decade. Our analyses of this difference are in complete agreement with Hansen et al. [2001] and reveal that virtually all of this difference can be traced to the adjustment for the time of observation bias. Hansen et al. [2001] and Karl et al. [1986] note that there have been many changes in the time of observation across the cooperative network, with a general shift away from evening observations to morning observations. The general shift to the morning over the past century may be responsible for the nearly monotonic warming adjustment seen in Figure 2. In a separate effort, Christy [2002] found that for summer temperatures in northern Alabama, the correction for all contaminants was to reduce the trend in the raw data since 1930, rather than increasing it as determined by the USHCN adjustments in Figure 2. It is noteworthy that while the various time series are highly correlated, the adjustments to the RAW record result in a significant warming signal in the record that approximates the widely-publicized 0.50 degree C increase in global temperatures over the past century.

    It seems that it is time to audit the adjustments made to the USHCN data.

  87. MarkR
    Posted Sep 18, 2007 at 12:24 AM | Permalink

    #85 Love it!

  88. R John
    Posted Sep 18, 2007 at 12:55 AM | Permalink

    #85

    Sarcasm well played my friend!

  89. Posted Sep 18, 2007 at 1:33 AM | Permalink

    Re: #88:

    The authors refer to Christy [2002] and temperatures in northern Alabama. The following figure compares USHCN temperature graphs (left – from the CO2 Science site – which according to previous correspondence with them, they say is the unadjusted data) and GISS graphs (right – from the NASA site – dset=1, as of Sep 17) for two stations in northern Alabama. There is definitely a difference showing up between the data sets in this Waldo-less area.

  90. PaulM
    Posted Sep 18, 2007 at 1:52 AM | Permalink

    If the theory and the observational data don’t match, then clearly something is wrong – with the data. The data has to be fiddled until it gives the ‘right’ result. That is how (climate) science works.

  91. PaddikJ
    Posted Sep 18, 2007 at 2:00 AM | Permalink

    No. 47 paul says on September 17th, 2007 at 3:39 pm:

    It is a house of cards.

    Pyramid of Cards would be more on the money.

  92. MarkW
    Posted Sep 18, 2007 at 5:31 AM | Permalink

    From stray comments it is fairly obvious that SteveMc and, when he was still around, JohnA are well left of center in their politics.

    But I’m sure Boris counts this as another conservative site.

  93. MarkW
    Posted Sep 18, 2007 at 5:33 AM | Permalink

    Boris has an interesting double standard.

    On the skeptics side, every blogger or radio host is counted.

    On the alarmist side, it’s Hansen, and only Hansen. (When forced, he reluctantly adds NOAA.)

    Hey Boris, if we are going to count every two-bit blog on one side, why not count every two-bit blog on the other?

  94. Carl Gullans
    Posted Sep 18, 2007 at 5:41 AM | Permalink

    You guys (most of you) need to stop hurling around irrelevant ad hominem comments and leave space here for analysis of the data.

  95. CO2Breath
    Posted Sep 18, 2007 at 5:45 AM | Permalink

    Re: 79 (re Boris)

    There’s also the possibility that Boris is Beaker (Dr. Bunsen Honeydew’s able assistant on the Muppets).

  96. CO2Breath
    Posted Sep 18, 2007 at 6:06 AM | Permalink

    Monthly Trends at one Station: http://gallery.surfacestations.org/main.php?g2_itemId=27418&g2_imageViewsIndex=3

    Another “nearby” station

    I seem to be having trouble linking images and multiple links that worked before, but the above should be sufficient.

    OK “Mr. where’s the data?” Please explain why some type of more detailed analysis shouldn’t be performed on the data of nearby urban and rural stations to see if it makes ANY sense to use one to adjust the other. Homogeneity indeed.

  97. CO2Breath
    Posted Sep 18, 2007 at 6:09 AM | Permalink

    Maybe preview doesn’t work so swell under Ubuntu/firefox 2.0.0.6?

  98. CO2Breath
    Posted Sep 18, 2007 at 6:11 AM | Permalink

    One last shot at hand editing.

  99. CO2Breath
    Posted Sep 18, 2007 at 6:19 AM | Permalink

    “If the theory and the observational data don’t match, then clearly something is wrong – with the data. The data has to be fiddled until it gives the ‘right’ result. That is how (climate) science works.”

    The way I understood things is that scientists and engineers (and financial analysts) were supposed to explain why the data changed (varied, trended, etc.), not why to change the data.

  100. welikerocks
    Posted Sep 18, 2007 at 6:37 AM | Permalink

    link

    This is not the first time Hansen has aligned himself with officials from the Democratic Party. As Cybercast News Service previously reported, Hansen publicly endorsed Democrat John Kerry for president in 2004 and received a $250,000 grant from the charitable foundation headed by Kerry’s wife.

    In addition, he acted as a consultant earlier this year to former Democratic Vice President Al Gore’s slide-show presentations on “global warming.”

    Hansen, who also complained about censorship during the administration of President George H. W. Bush in 1989, previously acknowledged that he supported the “emphasis on extreme scenarios” regarding climate change models in order to drive the public’s attention to the issue.

    The scientist touted by CBS News’ “60 Minutes” as arguably the “world’s leading researcher on global warming”… But Scott Pelley, the “60 Minutes” reporter who profiled Hansen and detailed his accusations of censorship on the March 19 edition of the newsmagazine, made no mention of Hansen’s links to Kerry and Gore and none of the fact that Kerry’s wife — Teresa Heinz Kerry — had been one of Hansen’s benefactors.

    link

    “The fun in science is to explore a topic from all angles and figure out how something works. To do this well, a scientist learns to be open-minded, ignoring prejudices that might be imposed by religious, political or other tendencies (Galileo being a model of excellence). Indeed, science thrives on repeated challenge of any interpretation, and there is even special pleasure in trying to find something wrong with well-accepted theory. Such challenges eventually strengthen our understanding of the subject, but it is a never-ending process as answers raise more questions to be pursued in order to further refine our knowledge.”

    James Hansen, The GW Debate, 1999

    The highest global surface temperature in more than a century of instrumental data was recorded in the 2005 calendar year in the GISS annual analysis. However, the error bar on the data implies that 2005 is practically in a dead heat with 1998, the warmest previous year.

    GISS Surface Temperature Analysis

  101. CO2Breath
    Posted Sep 18, 2007 at 6:40 AM | Permalink

    This would be for the adherents of Eliwabbet (and a test of html on this website)

  102. CO2Breath
    Posted Sep 18, 2007 at 6:43 AM | Permalink

    Test Failed: New test.

  103. CO2Breath
    Posted Sep 18, 2007 at 6:54 AM | Permalink

    “Are there spurious temperature trends in the United States
    Climate Division database?”

    Seems to depend on whom you ask.

    Click to access Keim_GRL2003.pdf

  104. Tony Edwards
    Posted Sep 18, 2007 at 7:01 AM | Permalink

    Just an observer, not much of a participant, but what a great site. Off topic: dirac angestun gesept, what a great book “Wasp”, by Eric Frank Russell, was.

  105. Jim C
    Posted Sep 18, 2007 at 7:23 AM | Permalink

    RE:#28

    Let me ask an ignorant question: What does the Data Quality Act actually specify as an improper change to fundamental data and what are the avenues for getting the data producers to revoke the improper change?

    Ha!

    My first visit here left me wondering how NASA/NOAA et al have gotten around this law for so long. It appears this law would require full disclosure and Congressional inquiry into the matter, since the information disseminated unquestionably is “influential”.

    Read the entire Act, people. Here is a snippet.

    “Influential” information
    The OMB guidelines apply stricter quality standards to the dissemination of information that is considered “influential.” In regard to scientific, financial, or statistical information, “influential” means that “the agency can reasonably determine that dissemination of the information will have or does have a clear and substantial impact on important public policies or important private sector decisions.” Each agency is authorized “to define ‘influential’ in ways appropriate for it, given the nature and multiplicity of issues for which the agency is responsible.”

    If an agency disseminates “influential” scientific, financial, or statistical information, that information must meet a reproducibility standard. Analytic results related to influential scientific, financial, or statistical information, must generally be sufficiently transparent about data, methods, models, assumptions, and statistical procedures that an independent reanalysis (or more practically, tests for sensitivity, uncertainty, or robustness) could be undertaken by a qualified member of the public. The guidelines direct agencies to consider in their own guidelines which categories of original and supporting data should be subject to the reproducibility standard and which should not.

    In cases where public access to data and methods cannot occur because of privacy or proprietary issues, agencies are directed to apply especially rigorous robustness checks to analytic results and document what checks were undertaken.

    If agencies wish to rely for important and far-reaching rulemaking on previously disseminated scientific, financial or statistical studies that at time of dissemination were not considered “influential,” then the studies would have to be evaluated to determine if they meet the “capable of being reproduced” standard.

  106. Bill Drissel
    Posted Sep 18, 2007 at 8:04 AM | Permalink

    Steve,
    The paper mentioned in Comment #33 is hidden behind a subscription at GRL. Do you suppose they would permit you to publish it? I’m a data-oriented engineer but not a climatologist. Is this paper as important as it seems to me? Is global warming hanging by such slender threads as paint on the instrument shelters, data adjustment and nearby air conditioners? Is this the basis on which some would have us sitting shivering in the dark?

    Regards,
    Bill Drissel

  107. CO2Breath
    Posted Sep 18, 2007 at 8:23 AM | Permalink

    There’s a link to the paper at Watts UP:

    http://www.norcalblogs.com/watts/

    or direct:

    Click to access USHCN_Balling_2002GL014825.pdf

    Also it’s from CO2Science.org:

    Click to access USHCN_Balling_2002GL014825.pdf

  108. CO2Breath
    Posted Sep 18, 2007 at 8:25 AM | Permalink

    Big Oops.

    Here’s CO2 Science:

    http://www.co2science.org/scripts/CO2ScienceB2C/data/ushcn/ushcn_des.jsp

  109. JerryB
    Posted Sep 18, 2007 at 9:01 AM | Permalink

    Jean S

    A followup to my comment (#36) about TOB adjustments. I subtracted the TOB-adjusted temps of the 1999 edition from those of the 2000 edition. Most of the differences were zero, most of the rest were 0.01 F, and the remaining three were 0.02 F.
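
    The comparison described above is easy to sketch (the file names and two-column format here are hypothetical; real USHCN editions would need their own parser):

        # Sketch of the edition-to-edition comparison described above: subtract
        # TOB-adjusted temps of one edition from another and tabulate the
        # differences. File names and the two-column format are hypothetical.
        from collections import Counter

        def load_temps(path):
            """Read hypothetical 'station_id temp_F' pairs into a dict."""
            temps = {}
            with open(path) as f:
                for line in f:
                    station, value = line.split()
                    temps[station] = float(value)
            return temps

        old = load_temps("tobs_1999_edition.txt")  # hypothetical file name
        new = load_temps("tobs_2000_edition.txt")  # hypothetical file name
        diffs = Counter(round(new[k] - old[k], 2) for k in old.keys() & new.keys())
        for diff, count in sorted(diffs.items()):
            print(f"{diff:+.2f} F: {count} stations")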

  110. CO2Breath
    Posted Sep 18, 2007 at 9:34 AM | Permalink

    Re #96

    Carl,

    Here’s a similar plot from a station sorta between the other two:

    http://gallery.surfacestations.org/main.php?g2_itemId=27408&g2_imageViewsIndex=2

    So, what is going on here? Two of the three stations, to me at least, are more like each other than the third. What objective means should be used to decide whether, from a climatological viewpoint, any of these three stations are homogeneous enough to use in comparing annual mean temperature data, much less in using them to adjust each other?

  111. VG
    Posted Sep 18, 2007 at 9:35 AM | Permalink

    The message below, in quotes, was posted yesterday at Cryosphere Today by W. Chapman. Is anybody at Climate Audit interested in an audit? (It changes the March 2007 to current yearly data for Antarctic ice extent by approximately minus one half to minus one million sq km.)

    “Correction: we had previously reported that there had been a new SH historic maximum ice area. Unfortunately, we found a small glitch in our software. The timeseries have now been corrected and are showing that we are very close to, but not yet, a new historic maximum sea ice area for the Southern Hemisphere”.

    This seems extraordinary and again without explanation. Is it possible to demand data quality info?

  112. steven mosher
    Posted Sep 18, 2007 at 9:39 AM | Permalink

    RE 111.

    JerryB you are the TOBS god.

    I’ve started reading Karl.

    Click to access i1520-0450-25-2-145.pdf

    A couple of thoughts.

    1. This would be a very nice paper for SteveMc and/or yourself to hold court on, especially now.

    2. Time series are adjusted using this model in order to remove bias. The adjustments, the argument would go, should recover the true mean. However, the adjustment is an estimate with an error, and this error does not make its way into the final error terms of the temperature series. Do you think this is an issue when people want to make claims about the “hottest year on record”?

    3. It might be a ripe time to revisit Karl’s work, especially with some CRN sites producing continuous data from 3 sensors. A TOBS validation of sorts.

  113. SteveSadlov
    Posted Sep 18, 2007 at 9:56 AM | Permalink

    RE: #113 – That pales in comparison with what he did with NH “data” earlier this year. It was like the result of some sort of brainstorming session between him, Jones and Hansen.

  114. steven mosher
    Posted Sep 18, 2007 at 10:26 AM | Permalink

    JerryB..

    The plots of the errors in the TOBS model look kinda substantial… bigger than the instrument errors. Am I reading that right? If so, then you have a time series with an instrument error of ‘e’, and then an adjustment made to that record using a model that has an error of ‘2e’ – but when the final calcs are done, somebody pretends that the error in the adjustment model vanishes. Maybe I’m misunderstanding.

    Anyway. Other folks out there go ahead and read

    Click to access i1520-0450-25-2-145.pdf

    if you want to see how USHCN does its TOBS adjustment to raw.

    ( opens can of worms)

  115. JerryB
    Posted Sep 18, 2007 at 10:38 AM | Permalink

    Re #144,

    steven,

    Watch your language; this is a family friendly blog.

    I’ll be brief, partly because we’re off topic for this thread.

    The year to year fluctuations are such as to be a problem for hottest-year claims at any one location; I can’t assess such a claim about averages of several locations.

    I don’t have the statistical, or other, background to be holding court on that paper. Steve may be up to his elbows with other stuff that he would like to do.

  116. CO2Breath
    Posted Sep 18, 2007 at 10:42 AM | Permalink

    A real quick TOB analysis goes like this:

    At ~45 deg lat, in spring and fall the mean temperature changes on the order of a degree F every three days (the maxes and mins differ from spring to fall, likely due to earth heat capacity and ground temperature). So, if one shifted the measurement off by half a day, all the values for the month would be off by half a day’s drift -> 0.18 F, or 0.1 C, for the month. Since the mean of the hourly temperature measurements is typically different from the mean of min-max, the adjustment could be different, but it wouldn’t seem to be nearly as much as the difference between the hourly mean and the min-max mean (which I understand to be ~1.5 deg; F or C, I don’t remember). The corrections should be lower for January and July (mean min and max, typically).
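
    A back-of-envelope check of those figures, taking the stated ~1 deg F per 3 days spring/fall drift as a given:

        # Back-of-envelope check of the half-day shift figure quoted above.
        # The drift rate is the comment's assumption, not a measured value.
        drift_f_per_day = 1.0 / 3.0   # ~1 deg F per 3 days in spring/fall
        shift_days = 0.5              # observation time moved by half a day
        offset_f = drift_f_per_day * shift_days
        print(f"{offset_f:.2f} F = {offset_f * 5 / 9:.2f} C")
        # -> 0.17 F = 0.09 C, in line with the ~0.18 F / 0.1 C figure above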

  117. Boris
    Posted Sep 18, 2007 at 10:43 AM | Permalink

    And oh yes, help me here, who is it that is talking about the “record annual” temperatures, now said to be of no interest?

    Nice strawman. We “alarmists” tout global records, yes, and records that are significant in relation to, say, the 1930s. It is absurd to say that alarmists were touting 1998 US temps when it’s far more logical and significant to tout 1998 global temps. Record temps do matter in public relations, and it is not dishonest to use them to underscore the FACT that the earth has warmed, and the FACT that climate scientists are increasingly convinced that human actions are responsible.

    And you didn’t tell everyone what Hansen said in 2001: that 34 and 98 were a tie.

  118. MarkW
    Posted Sep 18, 2007 at 11:01 AM | Permalink

    Interesting how Boris can’t admit that he was wrong.

    First he was full bore that only right wing wackos ever talked about which years were records.
    Now he is full bore in proclaiming that the “alarmists” were touting records, but they were justified in doing so for publicity purposes.

    Who cares that the data has no validity; it’s getting people sufficiently panicked that matters.

  119. Michael Jankowski
    Posted Sep 18, 2007 at 11:13 AM | Permalink

    Record temps do matter in public relations, and it is not dishonest to use them to underscore the FACT that the earth has warmed, and the FACT that climate scientists are increasingly convinced that human actions are responsible.

    Record temps underscore the “FACT that climate scientists are increasingly convinced that human actions are responsible?”

    If you seriously believe what you wrote, then you are pretty hopeless.

  120. CO2Breath
    Posted Sep 18, 2007 at 11:15 AM | Permalink

    Re: 120

    Picture of Boris or not? You decide.

  121. Steve McIntyre
    Posted Sep 18, 2007 at 11:15 AM | Permalink

    Boris, you said:

    In fact, the only people who care about 1934 vs 1998 in the U.S. were right wing blogs who seemed, en masse, to confuse global and US temps.

    As others have observed, NOAA issued “warmest U.S. year” press releases. And Hansen said as noted above:

    And we can predict with reasonable confidence that the record annual and decadal temperatures for the contiguous 48 U.S., set in the 1930s, will soon be broken.

    Hansen and NOAA raised the issue.

    And you didn’t tell everyone what Hansen said in 2001: that 34 and 98 were a tie.

    I’ve talked very specifically about Hansen. In 1999, Hansen said that 1934 was 0.6 deg C warmer than 1998. I agree with you – and I’ve said so on more than one occasion – that Hansen was for 1934, before he was against it.

  122. Scott
    Posted Sep 18, 2007 at 11:17 AM | Permalink

    #119 Boris:

    and it is not dishonest to use them to underscore the FACT that the earth has warmed

    It’s context, Boris, context. Warmed as compared to what period? Since the so-called “Little Ice Age”? Or in the last ~1600 years BPA? Since the last major Ice Age?

    and the FACT that climate scientists are increasingly convinced that human actions are responsible.

    And that is unadulterated hogwash.

  123. Wayne Holder
    Posted Sep 18, 2007 at 11:47 AM | Permalink

    While slightly off topic, I thought an article in today’s WSJ seemed relevant:

    Most Science Studies Appear to Be Tainted By Sloppy Analysis

    From the article: “Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets.” “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual,” Dr. Ioannidis said.

    • Geoff Sherrington
      Posted Nov 12, 2010 at 12:35 AM | Permalink

      Highly recommended. Also, dig and read some of the original papers by Prof. Ioannidis.

  124. Boris
    Posted Sep 18, 2007 at 11:54 AM | Permalink

    For it before he was against it? Clever, Steve.

    Tell me, when was he against it? You don’t seem to want to talk about the fact that Hansen never touted 1998 as the hottest year for the US. Please provide evidence he was “against it”, as you say.

  125. Paul Linsay
    Posted Sep 18, 2007 at 11:58 AM | Permalink

    #118,

    If your description of TOB adjustment is correct then it is seriously flawed. It is inserting a historical trend into data that isn’t necessarily following that trend.

  126. Boris
    Posted Sep 18, 2007 at 12:01 PM | Permalink

    First he was full bore that only right wing wackos ever talked about which years were records.
    Now he is full bore in proclaiming that the “alarmists” were touting records, but they were justified in doing so for publicity purposes.

    There is a difference between touting a virtual tie in regional temps and touting a clear record in world temps. If you can’t see a difference, perhaps you should join the host of commentators who deny even that there is a consensus.

  127. Boris
    Posted Sep 18, 2007 at 12:04 PM | Permalink

    122:

    I’ll ignore your immature little jab to say that my favorite episode was when Beaker was cloned. Good stuff.

  128. Larry
    Posted Sep 18, 2007 at 12:18 PM | Permalink

    Boris demanding evidence. This is rich.

  129. Posted Sep 18, 2007 at 12:29 PM | Permalink

    CO2Breath September 18th, 2007 at 6:06 am,

    Perhaps you can answer my question.

    If climate change (slope of delta T) is what we are looking for, and the slope is determined by rural stations, why in the heck are we doing all this adjustment fiddling on non-rural stations?

    If slope is what we want, why not just find the slope for consistent sections of each record and combine them?

    That still leaves other problems: UHI/microsite/instrument etc. But at least it gives us some consistent way to use noisy signals of unknown consistency between different regimes (like a move or a change in instrumentation) to figure out what the slope might be.
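
    One sketch of what “slope per consistent section, then combine” might look like (the known breakpoints and the length-weighted combination are assumptions for illustration, not anyone’s published method):

        # Sketch: fit an OLS slope to each internally consistent segment of a
        # station record (segment boundaries, e.g. station moves, assumed
        # known), then combine segment slopes weighted by segment length.
        # An illustration of the idea above, not any agency's published method.
        import numpy as np

        def segment_slopes(years, temps, breakpoints):
            """Return (slope, n_points) for each segment between breakpoints."""
            edges = [years[0]] + list(breakpoints) + [years[-1] + 1]
            segments = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (years >= lo) & (years < hi)
                if mask.sum() >= 2:
                    slope = np.polyfit(years[mask], temps[mask], 1)[0]
                    segments.append((slope, int(mask.sum())))
            return segments

        def combined_slope(segments):
            """Length-weighted average of the segment slopes."""
            total = sum(n for _, n in segments)
            return sum(s * n for s, n in segments) / total

        # synthetic record: 0.005 deg/yr trend plus a 1 deg step at a 1930 move
        years = np.arange(1900, 1960)
        rng = np.random.default_rng(0)
        temps = 10 + 0.005 * (years - 1900) + rng.normal(0, 0.2, years.size)
        temps[years >= 1930] += 1.0
        print(combined_slope(segment_slopes(years, temps, breakpoints=[1930])))

    A naive fit over the whole record would fold the 1 deg step into the trend; the segment approach should recover something near the underlying 0.005 deg/yr, up to noise.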

    • Geoff Sherrington
      Posted Nov 12, 2010 at 12:43 AM | Permalink

      I’ve taken a number of truly rural stations in Australia and evaluated them from daily max and mins 1988-2008. There remains a problem. Some stations show no change, some show an increase, some show Tmax and Tmin converging, others parallel. The problem is that the noise envelope is so large that you can author any story that appeals to you.

  130. Sam Urbinto
    Posted Sep 18, 2007 at 12:30 PM | Permalink

    You really don’t get the Kerry ref, B? lol

    The thing is that, depending on context and year, the story and PR about “warmest year” and expectations kept changing, and the adjustments kept moving things, and what was said by one group doesn’t have to be said by the other. The old “Hey, I didn’t say that, they did” game.

    I’ve thought of an interesting experiment. What is the temperature of a room? Get a thermometer accurate to 0.01 C, and measure a room at the four corners and at the wall centers, 1 foot from the wall, at the top, middle and bottom of the height of the room. Average them. Do that every hour. Average them over a day. Do that for a week. Average the days. Then you could come out with “warmest day ever!” or “coldest day ever!” and the same for weeks: “week 7 has beat all records!” Then get the trends and start calculating and modeling. Or even better, do multiple rooms and compare them too! “Room 3 is trending down in week 22, and will soon exceed room 7 in week 15 as our all time low!”

    Of course, if you were doing 8 rooms, and 2 of them were on the second floor, you’d have to adjust for height and temperature biases. And of course get ratios of the size of the rooms and the heights of the ceilings and adjust for them too. Oh, and rooms with water would have to be treated differently than those without.

    Then for fun, different houses or neighborhoods could be averaged and compared, if enough people were doing it.

    I’m still trying to figure out the code for central versus window AC bias, oven bias in the kitchen, humidity bias in the shower, and door bias in the garage (and insulated vs not). Do you think the people next door will share their code with me?
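
    For what it’s worth, the bookkeeping in that protocol is only a few lines to audit (all readings invented, of course):

        # Playful sketch of the room-temperature "audit" described above:
        # average 12 fixed points each hour, hours into days, days into weeks.
        # All readings are invented.
        import numpy as np

        rng = np.random.default_rng(42)
        weeks, days, hours, points = 4, 7, 24, 12
        readings = 21 + rng.normal(0, 0.5, (weeks, days, hours, points))  # deg C

        hourly = readings.mean(axis=3)   # average the 12 points each hour
        daily = hourly.mean(axis=2)      # average the 24 hours each day
        weekly = daily.mean(axis=1)      # average the 7 days each week
        print("warmest week ever:", int(weekly.argmax()) + 1)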

  131. Boris
    Posted Sep 18, 2007 at 12:36 PM | Permalink

    You really don’t get the Kerry ref, B? lol

    That’s what made it clever, SU. lol.

  132. Jaye
    Posted Sep 18, 2007 at 12:41 PM | Permalink

    deny even that there is a consensus.

    A consensus of what, meta-magical-science where the result is determined prior to the method being applied?

  133. Posted Sep 18, 2007 at 12:48 PM | Permalink

    Well sure there is a consensus. Why not?

    I seem to recall there was a consensus in 1904 with a few minor problems. Soon to be ironed out.

    And never forget. Galileo was a denier.

    So yeah. All hail the crowned and conquering consensus. If almost everyone believes, it must be true. Just do a Bayesian analysis. That will iron out the kinks and put paid to the deniers.

  134. Larry
    Posted Sep 18, 2007 at 12:51 PM | Permalink

    Of course there’s a consensus. And Hansen put the ‘con’ in consensus.

  135. jae
    Posted Sep 18, 2007 at 12:54 PM | Permalink

    Is Boris Jousting or Jesting?

  136. MarkW
    Posted Sep 18, 2007 at 1:01 PM | Permalink

    I’d say obfuscating

  137. Posted Sep 18, 2007 at 1:09 PM | Permalink

    Steve,

    You’re saying you want scientists to be more like accountants? Is that what you’re saying? You don’t think this would lead to even more people taking up basket weaving courses in college?

  138. aurbo
    Posted Sep 18, 2007 at 1:13 PM | Permalink

    Boris seems now to be interested in global warming rather than US warming, since the US data doesn’t quite fit his biases. But Boris, a master of the mobile goalposts, is perfectly happy to accept the hockey stick as representing global climate when Mann’s principal source, his bristlecone proxies, all came from a very small portion of the US. Situational science is a little like situational ethics: it’s only valid when one wants it to be valid.

    Re #114:

    I’m glad to see somebody else is finally reviewing Karl’s work. The validity of TOB can be determined easily by anyone who has access to hourly data files. Simply test the differences between the 24-hour averages and the 24-hour mean as determined by taking once-daily max-min readings: select any hour of the 24-hour period and record the max-min derived mean from the prior 24 hourly observations. One will clearly see the differences obtained by taking 24-hour max-mins at 7AM local time (which double counts the mins) versus taking those obs at about 3PM local time (which double counts the maxes). The idea is to derive a formula to estimate the differences between once-a-day means taken at specific times during the day and means based on calendar-day observations taken once a day at local midnight. These formulas should not be applied to stations with available hourly observations, as the real differences can be determined directly.
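
    A minimal sketch of that test, with a synthetic diurnal cycle standing in for real hourly files (the temperature model is an assumption; the real test would read an hourly archive). With this setup the morning observer should typically show a cool bias and the afternoon observer a warm one, via the double counting described:

        # Sketch of the TOB test described above, on synthetic data: compare
        # the mean of max/min readings reset at a chosen observation hour with
        # the same statistic for a midnight (calendar-day) observer. The
        # sinusoid-plus-noise "weather" is a stand-in for real hourly obs.
        import numpy as np

        HOURS, DAYS = 24, 365
        rng = np.random.default_rng(1)
        t = np.arange(DAYS * HOURS)
        temps = (10
                 - 8 * np.cos(2 * np.pi * ((t % HOURS) - 4) / HOURS)  # min ~4 AM
                 + 10 * np.sin(2 * np.pi * t / (DAYS * HOURS))        # seasonal
                 + rng.normal(0, 1.0, t.size))

        def maxmin_mean(temps, ob_hour):
            """Mean of (max+min)/2 over 24-h windows ending at ob_hour."""
            windows = np.roll(temps, -ob_hour).reshape(-1, HOURS)[:-1]
            return ((windows.max(axis=1) + windows.min(axis=1)) / 2).mean()

        midnight = maxmin_mean(temps, 24)   # calendar-day max/min observer
        for ob_hour in (7, 15):             # morning and afternoon observers
            print(f"ob at {ob_hour:02d}00: bias vs midnight "
                  f"{maxmin_mean(temps, ob_hour) - midnight:+.2f} deg")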

    BTW, Tom Karl is one of the two candidates for President-Elect of the AMS. The other is Sandy McDonald, who is the Director of the Earth System Research Laboratory and Deputy Assistant Administrator for NOAA Research Laboratories and Cooperative Institutes.

  139. UK John
    Posted Sep 18, 2007 at 1:14 PM | Permalink

    Ah well !

    Comment on Hansen et al. There is always a ready market for bullsh*t, and there is plenty around to sell.

    What I have learnt from visiting this site is that the interpretation of the observed record is just as accurate as the predictive climate models.

    You either believe or you don’t !

  140. Jaye
    Posted Sep 18, 2007 at 1:21 PM | Permalink

    You’re saying you want scientists to be more like accountants? Is that what you’re saying? You don’t think this would lead to even more people taking up basket weaving courses in college?

    Ok, THAT is either purposeful, malicious misconstruing of the most egregious kind or a complete absence of grokking so profound as to be embarrassing to the utterer.

  141. MarkW
    Posted Sep 18, 2007 at 1:21 PM | Permalink

    If being “like an accountant” means that scientists should expect to have other people go over their work looking for mistakes, then yes, scientists should be more like accountants.

    Those who don’t like having other people go over their work can go into ….
    I guess they would be out of luck; there are no other professions in which nobody goes over your work.

  142. SteveSadlov
    Posted Sep 18, 2007 at 1:25 PM | Permalink

    RE: #139 – virtually all engineering fields and a number of science fields already insist on design-for-quality. In engineering, that refers to designs of things. In science, that refers to designs of experiments, analytical techniques, frameworks of understanding and, in some cases, things. Why this resistance to quality? Wouldn’t improved quality help climate science to argue in the strictly factual realm and to take emotions and politics out of the discussion?

  143. Boris
    Posted Sep 18, 2007 at 1:27 PM | Permalink

    Ok, THAT is either purposeful, malicious misconstruing of the most egregious kind or a complete absence of grokking so profound as to be embarrassing to the utterer.

    AKA, a typical Climate Audit comment.

  144. Mark O
    Posted Sep 18, 2007 at 1:50 PM | Permalink

    I have spent my life working on and testing rockets and spacecraft, neither of which can tolerate any significant failure, or they will not complete their mission. There is an adage in the aerospace business about testing: “Test like you fly.” In other words, perform tests that simulate as much as possible the conditions the vehicle will see while in use.

    In aerospace, a lot of time is spent coming up with good ways to put this adage into practice. However, the tests will mean nothing if you don’t take good data during them. You must use calibrated instruments, understand all the phenomena that will affect the measurements, rigorously eliminate any outside influence that might affect the data, perform an end-to-end error analysis, etc.

    The reason I mention this is that if somebody who worked for me showed me the data used in global warming calculations, I would fire him on the spot. All the money we have spent measuring temperatures and analyzing the data has pretty much been wasted, as far as I can tell. There is no basic understanding of how microsite effects skew the measurements; the sites picked to measure the temps rarely meet the requirements laid out (and where is the rigorous study that laid out those requirements?); the majority of the data is adjusted by some unknown, unproven algorithm; and on and on.

    In short, these global warming papers and studies cannot be used to conclude anything with any certainty.

  145. Mike B
    Posted Sep 18, 2007 at 1:53 PM | Permalink

    BCL #139

    You’re saying you want scientists to be more like accountants? Is that what you’re saying? You don’t
    think this would lead to even more people taking up basket weaving courses in college?

    I would say that I expect U.S. Government Employees to follow regulations regarding data quality.

    And that goes for Scientists, Accountants, and everyone in between.

  146. CO2Breath
    Posted Sep 18, 2007 at 2:17 PM | Permalink

    Re: 129 “little.. immature…jab”

    LOL. When I saw that picture of Beaker with the huge flame in front of him, I started laughing hysterically. It seems so appropriate to describe my take on Gorebal Warning Alarmists. I’m actually still laughing, just thinking about it. I’d forgotten just how funny some of the Muppets stuff was (is). The fact that “Bunsen” is close to “Hansen” and is a “Scientist” is just icing on the cake. I had to keep beating that poor dead horse until somebody complained (though I thought Carl might rise to the bait).

  147. Mark O
    Posted Sep 18, 2007 at 2:19 PM | Permalink

    Re: #147

    I believe bigcitylib meant his comment to be tongue-in-cheek in #139. However, if you do not follow standard data quality procedures, no matter what field you are in, you can be sure it will come back to bite you later.

    Another aerospace saying is that data is mainly used during a failure investigation. And believe me, any and all errors in your data, methodology, conclusions or anything else you did not do a good job on will be found in a failure investigation, so you better make sure you do a good job up front.

  148. CO2Breath
    Posted Sep 18, 2007 at 2:27 PM | Permalink

    Paul Linsay says:
    September 18th, 2007 at 11:58 am

    Re:127

    I’m just starting to try to figure out TOB, but given that the concern is primarily about when the TOB changes in the data stream, that simple understanding is the first thing that makes some sense to me. Over a calendar year it should average out, except that there are strange days when the temp drops all day from midnight to midnight, or rises all day. There are also places with Chinook winds that likely have frequent non-standard temp-vs-time-of-day curves, where corrections could be problematic.

    It would seem that those irregular events are impossible to correct for in the historical record.

  149. Steve McIntyre
    Posted Sep 18, 2007 at 2:29 PM | Permalink

    #91. Was any anomaly observed during Talladega Nights?

  150. Bob Meyer
    Posted Sep 18, 2007 at 2:37 PM | Permalink

    Re: 140

    Mark O said “…if somebody who worked for me showed me the data used in global warming calculations, I would fire him on the spot.”

    I would too, unless he first said “Look at this crap data”.

    Seriously, I agree that the data is corrupted and I don’t know why people continue to use it at all. It reminds me of the gambler who went to a poker game knowing that the game was crooked and he couldn’t possibly win. When asked why he went he replied “Because it is the only game in town”.

    When the only game in town is rigged you have to decide if you want to play. When the data in town is corrupt you have to decide whether you want to try to process it anyway even though it can’t be used to justify any theories at all.

    Until someone can find a way to “de-corrupt” the data it is pointless to process it. The best that can be claimed is what Steve Mc is doing – proving that whether or not the data are corrupt, the method of adjusting it is not valid.

    BTW I don’t believe that the data can be “de-corrupted” because there are far too many unknowns involved. Adjusting this data is as likely to make it worse as it is to make it better because you have no way of knowing if you are closer to the truth. You have no standard to use to determine what constitutes an improvement. How do you know what the measurement “ought” to be? You need an independent standard to evaluate the data and that doesn’t exist.

    The orthodox AGW people believe that their theory is the standard, i.e. adjusting the data is good to the extent that it brings the data into conformance with the theory. While they don’t state this explicitly, the adjustments seem to be consistently moving the data towards the theory. It also seems that many of the contrarians use their pet theories as the standard for judging the validity of the temperature data.

    If the data is bad then it can’t back up any theory so if you’re in this to prove or disprove a theory then you are probably wasting your time.

  151. Steve McIntyre
    Posted Sep 18, 2007 at 2:37 PM | Permalink

    At the NAS Panel hearings, Mann said that the bristlecones came from a “sweet spot” for estimating world temperatures. Nychka and Bloomfield, the statisticians, accepted this like bumps on a log.

  152. CO2Breath
    Posted Sep 18, 2007 at 2:45 PM | Permalink

    M. Simon says:
    September 18th, 2007 at 12:29 pm

    “If climate change (slope of delta T) is what we are looking for and the slope is determined by rural stations, why in the heck are we doing all this adjustment fiddling on non rural stations?”

    That’s one of my basic questions, and so far the answer seems to be: who knows?

    Gavin Schmidt of RC and GISS seems to believe that he only needs the data from six good stations to cover the lower 48. Perhaps we can find six such stations as part of Anthony Watts’ surface stations exercise.

    I’m troubled by the seeming facts that the CO2 concentration is rather uniform geographically and temporally, but the monthly temp trends at the few stations examined are all over the map. My question is: if there is “global” (ubiquitous) warming, why is it not roughly the same everywhere (or even 100 mi apart, or even from month to month, or from day to night)?

    It’s becoming clear that this auditing needs to start with the individual station daily records. If we were to find a few geographically dispersed “good” stations, it might not be too difficult, with many hands, to get the whole chain of analysis out in the open and reduce the skepticism of the many.

  153. Sam Urbinto
    Posted Sep 18, 2007 at 2:48 PM | Permalink

    I don’t think you can compare real engineering or real science to climate science.

    If there is unknown microsite contamination or messy programming, nobody’s going to die or get hurt badly, like you do if you don’t build a plane correctly or mix acids and bases. Or put in capacitors that explode. It’s just temperature trends as a meaningless overview average, come on. Why bother doing it correctly?

    They aren’t the same thing. At least not in operation so far. I make no comment on what it means, but it seems pretty clear.

  154. JerryB
    Posted Sep 18, 2007 at 2:50 PM | Permalink

    re #150,

    CO2Breath,

    Some TOB stuff.

  155. aurbo
    Posted Sep 18, 2007 at 2:53 PM | Permalink

    Re #150:

    The errors (differences between means derived from the actual time of observation and the calendar-day means) do not balance out over time. The reason is that, for whatever time of day the observations are taken, the current temperature is counted twice if the obs are taken at or near the max, or at or near the min. Let’s say that the min occurs at 7AM, which is also the ob time. The min shows up on that day’s report and also on the next day’s ob, if the temperatures in the subsequent 24 hours are not any lower than the current day’s 7AM min. Similar double counting will occur with the maxes if the time of observation, say in the afternoon, is close to the daily max. This double counting does not wash out with time.

    If, let’s say, the daily min did not occur at or close to the 7AM ob time, but instead happened in the afternoon or following evening, then the min would be counted just once – but counted nevertheless. So with once-daily observations of max and min temps, the time of day is very important, and the resulting means can differ considerably from means observed at midnight (calendar day) and from average temperatures derived from 24 separate hourly observations.

  156. Bob Meyer
    Posted Sep 18, 2007 at 2:53 PM | Permalink

    Sam Urbinto said:

    “I don’t think you can compare real engineering or real science to climate science.”

    Since they have alternative medicine then why don’t we call it alternative science?

  157. jisc
    Posted Sep 18, 2007 at 2:54 PM | Permalink

    Steve and Anthony: I believe the comparison of the temperature trends of ‘high quality’ rural and ‘high quality’ urban stations will be interesting. I had been assuming, and I’m still assuming, that rural stations have provided the better data over time. However, a few days ago I caught a few minutes of a talk by Professor Jim O’Brien of COAPS at Florida State. He showed temperature trend examples (raw data, he said) for three stations in Florida. I don’t know if the temps were highs, lows or means for each station; I think they were lows.

    The two in the panhandle (one was DeFuniak Springs, I remember) showed downward trends – but he said the main driver they reflected was the ongoing reduction of wetlands around each station (and therefore a less moderated temperature drop overnight?). The one station in southern Florida showed an upward trend – but I believe he said the main driver was the gradual introduction of the sugar cane crop to the area, and that sugar cane needs and/or holds more surface water/moisture than the crop it replaced (and therefore moderates the temp drop overnight?).

    If such gradual land use changes (even some distance away from rural weather stations) do indeed change or reverse long term temperature trends, is there a filter to separate those stations from the rural stations that have always been surrounded by natural, or at least stable, landscapes?

  158. Larry
    Posted Sep 18, 2007 at 2:58 PM | Permalink

    Mann said that the bristlecones came from a “sweet spot” for estimating world temperatures.

    Yep. Land of fruits and nuts (with apologies to Anthony).

  159. Larry
    Posted Sep 18, 2007 at 3:00 PM | Permalink

    BTW, how does a location become a “sweet spot”? Care to expand on that, Dr. Mann?

  160. steven mosher
    Posted Sep 18, 2007 at 3:00 PM | Permalink

    RE 139.

    BCL, amuse us and post some of your fiction here. I was especially fond of your pathetic Dinosaur piece.

  161. Mark O
    Posted Sep 18, 2007 at 3:14 PM | Permalink

    RE# 155

    I don’t think you can compare real engineering or real science to climate science.

    If there is unknown microsite contamination or messy programming, nobody’s going to die or get hurt badly, like you do if you don’t build a plane correctly or mix acids and bases. Or put in capacitors that explode. It’s just temperature trends as a meaningless overview average, come on. Why bother doing it correctly?

    They aren’t the same thing. At least not in operation so far. I make no comment on what it means, but it seems pretty clear.

    If the debate was just of academic interest, I would agree with you. However, governments are talking about spending billions and billions of dollars, regulating both businesses and individuals, and making major decisions that will have far-reaching effects on world economies. That is why it is so important to say: stop, we do not have good enough data to be talking about such drastic measures.

    Instead what should be done is to go back and start doing basic science like figuring out how to get good temperature readings.

  162. Mike B
    Posted Sep 18, 2007 at 3:40 PM | Permalink

    Regarding TOB: I’m making some progress in my analysis. The Karl paper helped me understand why my previous simulations failed to show a time of observation bias. My simulations were too simple, and didn’t include the crucial aspects of hourly temperature data that lead to TOB. Thanks to Jerry B for his patience in putting up with someone who is learning.

    As usual, the devil will be in the details of how NOAA is applying the adjustment. I recall a link that I didn’t keep that explained how NOAA applied the adjustment, breaking the day into morning observations, midday, and evening or something like that. Can anyone provide that link?

    This aspect of the adjustment is crucial, because of how fast the adjustment changes at certain times of day.

    Also, if anyone can give me a link to meta-data that would contain a historical record of time-of-observation by station, and any information (even anecdotal) on the reliability of the time of observation (does 0600 mean 0600 sharp, or does it mean sometime between 0545 and 0700?). Thanks in advance to the experts.

  163. Jaye
    Posted Sep 18, 2007 at 3:48 PM | Permalink

    AKA, a typical Climate Audit comment.

    That statement is foolish.

  164. Jaye
    Posted Sep 18, 2007 at 3:53 PM | Permalink

    The orthodox AGW people believe that their theory is the standard, i.e. adjusting the data is good to the extent that it brings the data into conformance with the theory. While they don’t state this explicitly, the adjustments seem to be consistently moving the data towards the theory.

    Yep, that is how meta-magical-science works. Start with a politically motivated, predetermined outcome and work backwards to make the science fit.

  165. Sam Urbinto
    Posted Sep 18, 2007 at 4:01 PM | Permalink

    Ah, but, Mark O, that’s the policy side of it. Not the same issue (but of course related).

    I’m just saying the adjusting of stations, the performing of organic chemistry experiments, and the designing of buildings are not really the same kind of things when it comes to what results we should expect, and what level of exactness they require or can provide.

  166. JerryB
    Posted Sep 18, 2007 at 4:06 PM | Permalink

    Re #164,

    Mike B,

    We’re spending too much time off topic in this thread. I’ll reply to you
    in the current unthreaded thread.

  167. Christopher Alleva
    Posted Sep 18, 2007 at 4:08 PM | Permalink

    You have been using the term GAAP to describe problems with the provenance and integrity of the data sets being used by NASA. GAAP is a set of standards applied in the preparation of financial statements, period. Yes, there are analogous principles whenever one is using quantitative methods; nevertheless, using GAAP as an adjective in this case is probably not right.

    What you have here is a failure or lack of internal controls. Internal control is a much broader concept. It can and should be properly applied to all activity of an organization. To use a simple example, a liquor pourer has nothing to do per se with accounting principles, but using one assures proper portion control. Together with inventory procedures, the organization (a tavern) can minimize loss by cross-checking receipts with the inventory drawdown. Steve, this looks to me like a rudimentary bookkeeping problem.

    A study of this magnitude, with a lot of personnel, would properly require a massive amount of documentation: who has the authority to change or adjust what data, when, and how. Typically, the most profound weaknesses in internal control are centered in the officers and/or directors who are in a position to short-circuit the controls, someone like Hansen.

    In this instance, having the code is helpful, but you need to know the policies and procedures used to adjust data. They must have this kind of documentation; if they don’t, the study is effectively useless. The raw data needs to be tested and verified with substantive auditing procedures before you can test the algorithms.

  168. Steve McIntyre
    Posted Sep 18, 2007 at 4:19 PM | Permalink

    #169. There are layers of issues. The “GAAP” issue was the change in accounting policy – going from SHAP accounting to FILNET accounting without disclosure, without a change statement, and probably without any approval or review other than Hansen’s fiat. GAAP is an analogy, but the concept of “Permanence of Method” is a useful principle here.
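    [For readers who want to run this kind of check themselves, here is a minimal sketch of flagging an undisclosed revision between two scraped versions of a station series. The file names and the two-column layout are hypothetical placeholders for whatever your own scrapes look like, not NASA’s actual format.]

```python
# Hedged sketch: diff two scraped versions of a station series and flag
# changed years. File names and the "year temp" layout are hypothetical.
import pandas as pd

def load_version(path):
    # Assumed layout: whitespace-separated "year temperature" rows
    return pd.read_csv(path, sep=r"\s+", names=["year", "temp"],
                       comment="#", index_col="year")

old = load_version("detroit_lakes_2007-08-25.txt")  # hypothetical scrape
new = load_version("detroit_lakes_2007-09-10.txt")  # hypothetical scrape

diff = (new["temp"] - old["temp"]).dropna()
changed = diff[diff.abs() > 0.005]  # ignore sub-rounding noise
if changed.empty:
    print("No changes between versions.")
else:
    worst = changed.loc[changed.abs().idxmax()]
    print(f"{len(changed)} of {len(diff)} years changed; largest shift {worst:+.2f} C")
```

    Nothing here is specific to GISS; the point is the bookkeeping discipline, i.e. keep dated copies and difference them, so that a change of method shows up whether or not it is announced.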

  169. Posted Sep 18, 2007 at 4:29 PM | Permalink

    Steve and Christopher:

    Here in New Zealand we have a legal framework which allows people to know the policies and procedures of any government department or government-funded organisation that makes decisions directly affecting us as individuals. ALL such policies and procedures have to be made available on request. I would be very surprised if NASA does not have such policies and procedures laid out and documented. Without such policies and procedures, large organisations become chaotic and prone to the whims of individual employees.

  170. cytochrome sea
    Posted Sep 18, 2007 at 4:31 PM | Permalink

    Re: #153,

    Did the NAS panel find the work was robust to the presence/absence of “sweet spots”? 😉

  171. John M
    Posted Sep 18, 2007 at 4:37 PM | Permalink

    Re: Anthony Watts #83

    With all that independent thought going on in the media, they wouldn’t just repeat it verbatim, would they?

    Interesting treatment of the same thing (not climate science) here.

    This site is a sort of irreverent look at the IT industry, but they seem to have a pretty good handle on “how things are done”. They slam “the Beeb” pretty good.

  172. Posted Sep 18, 2007 at 4:39 PM | Permalink

    “What you have here is a failure or lack of internal controls.”

    If you added “within a fiefdom which has become an embarrassment to NASA” it would be completely accurate. I find application of elementary GAAP principles to be an excellent analogy considering the enormous sums likely to be misspent due to ill-advised policy decisions.

    Would NASA try to generate hardware design based upon data as malleable as that which Dr Hansen manipulates so adroitly?

  173. Mark O
    Posted Sep 18, 2007 at 5:13 PM | Permalink

    Re#167: Sam Urbinto

    Ah, but, Mark O, that’s the policy side of it. Not the same issue (but of course related).

    I’m just saying the adjusting of stations, the performing of organic chemistry experiments, and the designing of buildings are not really the same kind of things when it comes to what results we should expect, and what level of exactness they require or can provide.

    Agreed. In an ideal world, the quality of the data would dictate what real-world uses it had. However, with the current state of the climate science debate it is impossible to separate the science from the politics. And I really have no use for people on the left

    or

    right who politicize science.

  174. Scott Finegan
    Posted Sep 18, 2007 at 5:44 PM | Permalink

    Rick Ballard 174

    “Would NASA try to generate hardware design based upon data as malleable as that which Dr Hansen manipulates so adroitly?”

    Uh yeah,

    The space shuttle foam problem allegedly stems from the removal of CFCs from the application process. NASA went “green” without weighing the consequences of burning up a shuttle high in the atmosphere.

  175. Kenneth Fritsch
    Posted Sep 18, 2007 at 5:48 PM | Permalink

    In post #105, CO2Breath links a paper comparing the temperature data sets of NCDC and USHCN (FILNET version) for regions in the NE US. The authors use the USHCN data set as their control for critiquing the NCDC data set, citing it as well adjusted by methods described in the peer-reviewed literature. They claim, by regression analysis, that the differences in the data sets are primarily caused by the NCDC stations migrating in latitude, longitude and altitude while the USHCN stations were spatially constant over time. The unexplained part of the difference they attribute to the more adjusted status of the USHCN data set.

    Regardless of the validity of the main arguments presented in this paper, it does show the same large differences in adjacent and nearby regions for temperature trends that I saw when I used the USHCN (fully adjusted version) dataset to look at local temperature trends in IL. I have listed below some of these differences from the NE.

    Temperature trends from 1931-2000 for NCDC for 14 regions in the states of VT, NH, ME, MA, CT and RI:

    Range for all regions = +1.3 to -0.7.

    Largest 5 trend differences in adjacent regions => 1.9, 1.8, 1.7, 1.5, and 1.4.

    Temperature trends from 1931-2000 for USHCN (FILNET version) for 14 regions in the states of VT, NH, ME, MA, CT and RI:

    Range for all regions = +0.8 to 0.0.

    Largest 5 trend differences in adjacent regions => 0.8, 0.7, 0.6, 0.5, and 0.4.

    The questions arise as to how much of these differences in trends are measurement error and how much is mitigated by the USHCN adjustments, which are aimed more at getting large-area trends correct. Finally, what do global and regional temperature trends mean when we see such large local differences?
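    [A sketch of the arithmetic behind the tables above, for anyone who wants to repeat the comparison on either data set. The (region, year, temp) layout and the adjacency pairs are assumptions; the trend is an ordinary least-squares slope, scaled here to degrees per century, which may not be the unit convention Kenneth used.]

```python
# Sketch of the comparison above: per-region OLS trends over 1931-2000 and
# the gaps between bordering regions. The (region, year, temp) layout and
# the adjacency pairs are assumptions, not the paper's actual format.
import numpy as np
import pandas as pd

def region_trends(df, start=1931, end=2000):
    """OLS slope per region, scaled to degrees per century (a unit choice)."""
    sub = df[(df["year"] >= start) & (df["year"] <= end)]
    return sub.groupby("region").apply(
        lambda g: 100.0 * np.polyfit(g["year"], g["temp"], 1)[0])

def largest_adjacent_gaps(trends, adjacency, n=5):
    """adjacency: iterable of (region_a, region_b) pairs for bordering regions."""
    gaps = sorted((abs(trends[a] - trends[b]) for a, b in adjacency), reverse=True)
    return gaps[:n]

# Usage, with a hypothetical frame of one row per region-year:
# trends = region_trends(df)            # df columns: region, year, temp
# print(largest_adjacent_gaps(trends, [("VT1", "NH1"), ("NH1", "ME1")]))
```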

  176. CO2Breath
    Posted Sep 18, 2007 at 6:12 PM | Permalink

    Re: 177

    I am interested in why the temperature plots in each graph in that paper appeared identical except for an offset and tilt. How could the shape of the curves look so identical if they were from (at least partly) different station sets?

    I’m also curious if in your IL study you looked at daily, monthly or just annual temperature trends? If you looked at daily or monthly, was there strong similarity among stations or not?

    If you have a few minutes, would you look at these three plots and see if you see any reason for the patterns that might exist?

    http://gallery.surfacestations.org/main.php?g2_itemId=27418&g2_imageViewsIndex=3

    http://gallery.surfacestations.org/main.php?g2_itemId=27453&g2_imageViewsIndex=2

    http://gallery.surfacestations.org/main.php?g2_itemId=27408&g2_imageViewsIndex=2

    Thanks.

  177. SteveSadlov
    Posted Sep 18, 2007 at 6:31 PM | Permalink

    RE: #153 – A xerophile, slow-growing, strange tree, in an extreme 14K-foot alpine desert, at the eastern edge of a semi-permanent oceanic high-pressure system, but with maritime air blocked by a 14K-foot mountain range, subject in the winter to outbreaks of Yukon air and, in the summers of some years, to quite a bit of cT air from Mexico, with radically variable moisture availability and temperature on a day-to-day, week-to-week, month-to-month, year-to-year and even possibly decade scale (as a normal part of the characteristics of its climate zone), on a substrate of intrusive igneous rock with minimal soil, as a sweet spot representative of “global average temperature”? Ah, yeah, right …..

  178. SteveSadlov
    Posted Sep 18, 2007 at 6:37 PM | Permalink

    RE: #155 – Leaving aside any discussions regarding economic, political or social impacts, consider this. Let’s say that Hansen got his way. Let’s imagine that we had a “sequestration fever” scenario, whereby sequestration of CO2 became an obsession for, at a minimum, the major Western industrial nations, and perhaps the whole world. How would we stop it, and, assuming we had such control, where? What if the figure for “where” was wrong? What if we took out too much CO2? Honestly, I fear the potential impacts of that possibility more than I fear doing absolutely nothing. We could literally suffocate the biosphere by drawing down the CO2 too far and wiping out all the chlorophyll-bearing flora, thereby leading to mass hypoxia of all fauna.

  179. Christopher Alleva
    Posted Sep 18, 2007 at 6:42 PM | Permalink

    Layers of issues? Well, that’s an understatement. I concur: “Permanence of Method” is a reasonable use of this term of art.

    AFP is reporting that disgraced South Korean cloning scientist Hwang Woo-Suk has fled to Thailand to escape controversy and continue his research.

    The South Korean government banned Hwang from research using human eggs after his claims that he created the first human stem cells through cloning were ruled to be bogus last year.

    Hwang remains on trial for embezzlement and fake research but has insisted in court that he could still prove he created the first cloned human stem cells.

    With his claim to have paved the way for treatment of incurable diseases by creating stem cells through cloning which would not be rejected when inserted into a patient’s body, Hwang ignited a world-wide political and ethical debate on the use of embryonic stem cells in research.

    His results could not be repeated by others, failing what AFP described as a “key test for scientific method.” His research was called into question after local media and other scientists raised the possibility that the data and photos of stem cells used for his scientific papers were fabricated.

    Sound familiar?

  180. Jim C
    Posted Sep 18, 2007 at 6:48 PM | Permalink

    RE: #171
    Paul,
    Yes, NASA has a policy entitled “NASA POLICY ON THE RELEASE
    OF INFORMATION TO NEWS AND INFORMATION MEDIA”

    Click to access 145687main_information_policy.pdf

    The five principles at the heart of NASA’s new disclosure policy include commitments to:

    * Maintain a “culture of openness with the media and public” and that information will be “accurate and unfiltered.”

    * Provide the widest practical and appropriate dissemination of prompt, factual and complete information.

    * Ensure timely release of information.

    * Allow employees to speak to the press or public about their work.

    * Comply with other laws and regulations governing disclosure of information, such as the Freedom of Information Act or Executive Orders.

  181. Larry
    Posted Sep 18, 2007 at 6:52 PM | Permalink

    182, #4 is a gift to Hansen. He was coming under some criticism for spending so much time doing radio interviews on NASA time, and it appears that the new policy gives him cover. That’s not a good item.

  182. Kenneth Fritsch
    Posted Sep 18, 2007 at 6:59 PM | Permalink

    Re: #178

    The pattern in the Bozeman graph shows a late-winter warming trend and a significantly lesser one for the summer. The reduced warming in November relative to the adjacent months is surprising. Hebgen, on the other hand, shows an almost opposite trend to Bozeman, while Norris shows almost the same pattern as Bozeman, including the sharp dip in November. I cannot explain these differences within monthly trends, nor estimate whether some of it is due to measurement error. I can only wonder what global and regional temperature trends mean in terms of local differences (as I stated above) and, as you recall to my attention here, significant month-to-month variations.

    I know for IL in the past 100 years or so, warming trends as you go from north to south tend to go from larger to smaller, while at the same time showing significant differences in trends for the same latitudes. The warming trend in the Chicago area is primarily from warmer winters. In fact, in past decades the extreme maximum temperatures of summer have occurred less frequently in this area. One should, I suppose, do some research and see what the climate models predict for monthly and local variations – if they have that capability. For my IL analysis I looked at annual anomalies, but your graphs have tempted me to go back and do some looking at monthly anomalies.

  183. Posted Sep 18, 2007 at 7:27 PM | Permalink

    Re #186 Anyone interested in regional or local US temperature trends, for months or seasons or years, can generate their own time series here.

  184. John Bowers
    Posted Sep 18, 2007 at 8:41 PM | Permalink

    Have just latched on and read some fantastic comments re the climate debate…..excellent.
    What put me on to your site was that a comment from your site was published on Free Republic.com. It was comment #85 by dirac angestun gesept. I thought this was so good, I pasted it into an email and it is going to be moving around …globally of course.
    In doing so, I hope I didn’t infringe on someone’s rights.
    Mahalo, John

  185. _Jim
    Posted Sep 18, 2007 at 9:29 PM | Permalink

    John, #190, the last time I engaged on the subject of AGW on FR, the tide seemed to have turned toward ‘believers’ instead of ‘deniers’; there were those, it seemed, who had begun to sip the koolaid and buy into GW (if not outright AGW), so effective has the ongoing drumbeat in the ‘press’ been at eroding any healthy skepticism or curiosity about the mechanics or basis for ‘claims’ on this subject. Steve has done yeoman’s work to change this: to audit the process, examine the numbers and bring the light of day to an otherwise ‘closed’ scientific process.

    And don’t forget to hit the tip jar on the main page either (this includes you lurkers out there); the ‘counter’ to this site (RC or ‘realclimate.org’) on this subject/issue some say is funded by the dark or non-light side …

    Steve, et al: Mahalo nui loa (translating for those in Rio Linda: that was Hawaiian for “thank you very much”), _Jim

  186. DocMartyn
    Posted Sep 19, 2007 at 4:26 AM | Permalink

    #178 I suggest that the differences are due to the changes in water vapor. The more water, the higher the Tmin, the less water the lower the Tmin. Want to bet that the biggest change in all three stations is in Tmin and not Tmax?

  187. CO2Breath
    Posted Sep 19, 2007 at 4:37 AM | Permalink

    RE: 185

    David Smith

    says:
    September 18th, 2007 at 7:27 pm

    Re #186 Anyone interested in regional or local US temperature trends, for months or seasons or years, can generate their own time series here.
    ( http://lwf.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html )

    I’ve been fooling with data for individual stations from here:

    http://www.co2science.org/scripts/CO2ScienceB2C/data/ushcn/ushcn.jsp

    I’ll have to look around the NCDC site more to figure out just which set of data they are using.

  188. CO2Breath
    Posted Sep 19, 2007 at 4:45 AM | Permalink

    DocMartyn says:
    September 19th, 2007 at 4:26 am

    #178 I suggest that the differences are due to the changes in water vapor. The more water, the higher the Tmin, the less water the lower the Tmin. Want to bet that the biggest change in all three stations is in Tmin and not Tmax?

    I believe that the monthly min/maxs plotted are means of daily data for the stations.

    Hebgen is just below an impoundment that freezes in Winter, Norris in a canyon with water running all Winter and Bozeman at the eastern end of a large high valley.

  189. Posted Sep 19, 2007 at 6:11 AM | Permalink

    Re: # 81

    “The Senate, and the public, wanted to know the cause of parched conditions in the Midwest, where the Mississippi had practically dried up. I said that our numerical climate model indicated a tendency for more frequent and severe droughts as the world became warmer, but a specific drought was a matter of chance, dependent on fluctuating meteorological patterns.”

    This is an egregious error, revealing appalling ignorance of climatic history. Droughts are associated with cooling. Has he never read the literature? It really shows that these are second-class math wonks just playing with numbers, with no care for where they come from or their quality.

    It is a consequence of big, bureaucratic science. The good scientists tend to stay in the lab. The second-class types who can’t do science but have ambition and skills in bureaucratic games rise to the top of the institutions. There are exceptions, e.g. Fred Singer, but in general I think it is the rule. King in the UK is a glaring example of the rule at work.

    It was the replication of this canard of warming=drought by the early GCMs that first alerted me to their inaccuracy. They continue to repeat it for the most part (one run of the Hadley model has a neutral effect of warming for precipitation on the Great Plains).

  190. CO2Breath
    Posted Sep 19, 2007 at 6:20 AM | Permalink

    Clearly, climate consists of more than temperature.

    I’ve been looking for concurrent precipitation (there must be plenty of data as many temp stations I’ve visited had a rain pot associated) plots along with all the temperature plots and then for precip vs temp cross plots. Guess that I’ll have to look harder for some data to plot up myself. If we could reach consensus on which database to draw from, that’d be great.

  191. dirac angestun gesept
    Posted Sep 19, 2007 at 6:26 AM | Permalink

    In doing so, I hope I didn’t infringe on someone’s rights.
    Mahalo, John

    Be my guest.

  192. Jaye
    Posted Sep 19, 2007 at 7:33 AM | Permalink

    Would NASA try to generate hardware design based upon data as malleable as that which Dr Hansen manipulates so adroitly?

    Here is the short answer: No.

  193. Sam Urbinto
    Posted Sep 19, 2007 at 7:42 AM | Permalink

    Mark O: “the current state of the climate science debate it is impossible to separate the science from the politics” Which is why I said it’s not “real” science. It’s some kind of hodgepodge…. Models, proxies, and some kind of odd situation where we think we know what’s going on from sampling air temps and reading the top of the oceans, combining them into some giant whole, and then treating it as if it’s some fact or proof rather than a general idea. However, it’s pretty clear we do have some impact upon the system (of course) with our cities, our farms, our own numbers, our livestock, and the release of particulates and pollution on both the atmosphere and land. But it’s fairly impossible to quantify, and I don’t know if you can call it science if you can’t get a better handle on it than we do. Once we get past the actual science of climatology and start talking about what it means and what to do about it, it goes out of being science into politics and public policy.

    SteveS: That’s one of the issues; without replicable results and a fairly clear idea of the risk/reward (or “unintended consequences”) we’re just guessing, hence the reason I believe there is inaction; Hansen can’t “get his way”, because the folks making the decisions are going to argue about it, because the issue is polarized in so many ways. Because of the uncertainty. If everything were as clear as some make it out to be, “more” would be “getting done”. Plus, not everyone is oblivious to the fact that there are many unknown impacts. A “The operation was successful, but the patient died” kind of thing. Cure worse than disease. That’s even if you assume that any attempt to “fix” things would have any major impact at all, even if the major Western industrial nations were willing to fund “enough” money to “take action”. Although I do wish that budgets for R&D would be looked at and increased if needed; there doesn’t seem to be enough funding in a number of areas. Those trying to politicize it perhaps don’t know how to ask for more, or at least not effectively.

    Ah, water. It all comes down to water. Water vapor, clouds, glaciers, oceans and seas, rivers/lakes/streams, rain, aquifers, irrigation systems, yard of the month.

  194. CO2Breath
    Posted Sep 19, 2007 at 10:50 AM | Permalink

    Ah, water. It all comes down to water. Water vapor, clouds, glaciers, oceans and seas, rivers/lakes/streams, rain, aquifers, irrigation systems, yard of the month.

    So where are all the precip (and other forms of H2O) studies in the Gorebal Warning Science Journals?

  195. Sam Urbinto
    Posted Sep 19, 2007 at 11:03 AM | Permalink

    Nobody I’ve seen focuses on the fact that water in its various forms is the largest factor in play here. Relative humidity and temperature anyone?

  196. CO2Breath
    Posted Sep 19, 2007 at 11:26 AM | Permalink

    I’m partial to partial pressures (and absolute Temperature fractions).

  197. Andrey Levin
    Posted Sep 19, 2007 at 12:38 PM | Permalink

    CO2breath:

    Fresh from the press: Increase in atmospheric moisture tied to human activities

    “When you heat the planet, you increase the ability of the atmosphere to hold moisture,” said Benjamin Santer, lead author from Lawrence Livermore National Laboratory’s Program for Climate Modeling and Intercomparison. “The atmosphere’s water vapor content has increased by about 0.41 kilograms per square meter (kg/m²) per decade since 1988, and natural variability in climate just can’t explain this moisture change. The most plausible explanation is that it’s due to the human-caused increase in greenhouse gases.”

    “Using 22 different computer models of the climate system and measurements from the satellite-based Special Sensor Microwave Imager (SSM/I), atmospheric scientists from LLNL and eight other international research centers have shown that the recent increase in moisture content over the bulk of the world’s oceans is not due to solar forcing or gradual recovery from the 1991 eruption of Mount Pinatubo. The primary driver of this ‘atmospheric moistening’ is the increase in carbon dioxide caused by the burning of fossil fuels.”

    As far as I know, the day job of Livermore’s scientists is nuclear weaponry. Now that is scary.

  198. Andrey Levin
    Posted Sep 19, 2007 at 4:40 PM | Permalink

    Oops, here is the link:

    http://www.llnl.gov/pao/news/news_releases/2007/NR-07-09-01.html

  199. SteveSadlov
    Posted Sep 19, 2007 at 5:03 PM | Permalink

    RE: #199, 200 – Even at a national lab, a hopelessly infantile and statically oriented view – “I heat the flask with water and air in it ….. I measure the RH ….. the RH increases ….. nothing moves …. no convection …… no fronts ….. no baroclinic action ….. etc ….. but GHGs increase …. darefore, it muss be a pawsative feedback thingey”

    If such thinking was not causing so much of a headache for so many, it would actually be rather entertaining.

  200. CO2Breath
    Posted Sep 20, 2007 at 8:43 AM | Permalink

    It would seem that a decent Global Climate Model should be able to isolate and measure all of the drivers of the seemingly large annual variation in the global air/sea temperature mean.

    If the temperature measurements are precise enough from one year to the next, and the oceans, land and air have enough heat capacity to have time constants of total heat on the order of decades, why do the annual numbers vary so much from one year to the next?

  201. JerryB
    Posted Sep 20, 2007 at 10:43 AM | Permalink

    RE #202,

    CO2Breath,

    There are several major “oscillations” that are not annual, among the most
    widely known of which would be ENSO, El Nino/Southern Oscillation.
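    [To put a number on that: even a system with a multi-decade thermal time constant passes fast, ENSO-like forcing straight through to the annual means; the slow memory smooths the long-term trend, not the year-to-year wiggles. A toy one-box model, every parameter invented for illustration:]

```python
# Toy one-box model, all parameters illustrative: a multi-decade relaxation
# time does not prevent fast (ENSO-like) forcing from showing up directly
# in the annual means.
import numpy as np

rng = np.random.default_rng(1)
tau = 30.0                             # assumed relaxation time, years
forcing = rng.normal(0.0, 0.1, 200)    # fast interannual forcing, deg C scale
T = np.zeros(200)
for t in range(1, 200):
    T[t] = T[t - 1] * (1.0 - 1.0 / tau) + forcing[t]

print("std dev of year-to-year change:", round(float(np.std(np.diff(T))), 3))
print("std dev of the series itself:  ", round(float(np.std(T)), 3))
```

    With a 30-year memory the series wanders slowly, yet the year-to-year changes remain about the size of the fast forcing itself.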

  202. trevor
    Posted Sep 20, 2007 at 9:21 PM | Permalink

    It is interesting to do a search on Google News using – “James Hansen” NASA. You will see that the issue is getting at least some attention in alternative media, if not mainstream press – yet!

    This must be very embarrassing for NASA. Their whole credibility is being jeopardised by James Hansen and his antics. CA participants might like to send a comment to this effect to NASA at public-inquiries@hq.nasa.gov. I’m sure that Administrator Michael Griffin will be pleased to hear from you.

  203. nilram
    Posted Sep 23, 2007 at 10:00 PM | Permalink

    Is the graph of the temperature adjustments correct? I downloaded NASA’s data adjustments from http://data.giss.nasa.gov/gistemp/graphs/US_USHCN.2005vs1999.txt, took the difference and graphed it, but got a graph that crossed the zero line circa 1960, though the shape was about the same. So I wonder if the mistake is in the graph on this page or in NASA’s reported changes.
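    [A rough reconstruction of the check nilram describes, for anyone who wants to reproduce it. The three-column layout (year, 2005 series, 1999 series) is a guess from the file name; inspect the file’s header lines before trusting the plot.]

```python
# Hedged sketch of nilram's check: fetch the NASA file, difference the two
# versions, plot. The column order (year, 2005 value, 1999 value) is an
# assumption; inspect the file's header lines before relying on it.
import urllib.request
import numpy as np
import matplotlib.pyplot as plt

url = "http://data.giss.nasa.gov/gistemp/graphs/US_USHCN.2005vs1999.txt"
text = urllib.request.urlopen(url).read().decode()

rows = []
for line in text.splitlines():
    parts = line.split()
    if len(parts) >= 3 and parts[0].isdigit():   # skip headers and comments
        rows.append([float(p) for p in parts[:3]])
data = np.asarray(rows)

year, v2005, v1999 = data[:, 0], data[:, 1], data[:, 2]
plt.plot(year, v2005 - v1999)
plt.axhline(0.0, linewidth=0.5)
plt.ylabel("2005 minus 1999 version (deg C)")
plt.title("US_USHCN.2005vs1999.txt: where does the difference cross zero?")
plt.show()
```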

  204. M. Jeff
    Posted Sep 24, 2007 at 10:14 AM | Permalink

    Re: The McIntyre factor [Deltoid] · Articles. September 24th, 2007 at 9:07 am

    Even generally reputable sources, such as the New York Times (see below), may on occasion distort reality, but they are honest in comparison to the source referenced above.

    Excerpt from the August 26 New York Times article: … Mr. McIntyre and Dr. Hansen also agree that the NASA data glitch had no effect on the global temperature trend, nudging it by an insignificant thousandth of a degree. … This NYT article seems more like an op-ed than a science report, and could be considered misleading in the way it presents information out of context. Reality might better be described as follows:
    http://www.climateaudit.org/ amply indicates that McIntyre has serious doubts about US and world data and methods of analysis and measurement. The most important part of the 0.15 C error is that the programs used by NASA did not detect it, and an error of 0.15 C over a 6-year period is not trivial when applied to the US.

  205. SteveSadlov
    Posted Oct 5, 2007 at 12:42 PM | Permalink

    Reminder of Hansen’s 9/10 adjustment, see related chart above.

  206. Posted Oct 8, 2007 at 5:09 AM | Permalink

    Speaking of Enron:

    Here is a bit I wrote on Enron and carbon trading. The following is excerpted from a link provided in that piece.

    Enron commissioned its own internal study of global warming science. It turned out to be largely in agreement with the same scientists that Enron was trying to shut up. After considering all of the inconsistencies in climate science, the report concluded: “The very real possibility is that the great climate alarm could be a false alarm. The anthropogenic warming could well be less than thought and favorably distributed.”

    One of Enron’s major consultants in that study was NASA scientist James Hansen, who started the whole global warming mess in 1988 with his bombastic congressional testimony. Recently he published a paper in the Proceedings of the National Academy of Sciences predicting exactly the same inconsequential amount of warming in the next 50 years as the scientists that Enron wanted to gag.

    They were a decade ahead of NASA. True to its plan, Enron never made its own findings public, self-censoring them while it pleaded with the Bush administration for a cap on carbon dioxide emissions that it could broker. That pleading continues today – the remnant-Enron still views global warming regulation as the straw that will raise it from its corporate oblivion.

    ”Enron stood to profit millions from global warming energy-trading schemes,” said Mike Carey, president of the Ohio Coal Association and American Coal Coalition. The investigation into the collapse of Enron will reveal much more about the intricacies of the Baptist-bootlegger coalition which was promoting the Kyoto cause within the Republican Party and within US business circles. Coal-burning utilities would have had to pay billions for permits because they emit more CO2 than do natural gas facilities. That would have encouraged closing coal plants in favor of natural gas or other kinds of power plants, driving up prices for those alternatives. Enron, along with other key energy companies in the so-called Clean Power Group – El Paso Corp., NiSource, Trigen Energy, and Calpine – would make money both coming and going – from selling permits and then their own energy at higher prices. If the Kyoto Protocol were ratified and in full force, experts estimated that Americans would lose between $100 billion and $400 billion each year. Additionally, between 1 and 3.5 million jobs could be lost. That means that each household could lose an average of up to $6,000 each year. That is a lot to ask of Americans just so large energy companies can pocket millions from a regulatory scheme. Moreover, a cost of $400 billion annually makes Enron’s current one-time loss of $6 billion look like pocket change.

    from: Investigate Magazine March 2006

  207. Smokey
    Posted Nov 1, 2007 at 7:01 PM | Permalink

    Are people actually still discussing a global warming “consensus” among scientists??

    I remember a big ‘consensus’ issue – which is still going on. My memory may not be 100%, but this is what I recall:

    In the early ’70s, physicist Alan Guth of M.I.T. postulated that the proton decays over time. Dr Guth persuaded a ‘consensus’ of physicists that proton decay was real, and he proposed an elaborate experiment to prove it. Dr Guth certainly didn’t expect the ‘consensus’ opinion to be falsified.

    [Falsification is essential to the scientific method; if a conjecture is proven false (falsified), it’s not scientifically valid. But if it withstands attempts to prove it false, it can become an accepted scientific theory, like the theory of gravity]. See: http://xxx.lanl.gov/abs/0707.1161v2

    So in 1982, in order to test Dr Guth’s proton decay conjecture, physicists built a huge [and very expensive] detector thousands of feet underground called the Kamiokande. But the Kamiokande detector failed to prove that the proton decays, as predicted by scientific ‘consensus.’

    Governments and scientists did not give up. Next, they built Kamiokande II in 1985. The scientific consensus was overwhelming that this new, 10X more sensitive detector would prove that the proton decays over time [the lifetime of a proton was assumed – by consensus – to be about 10^36 years].

    Kamiokande II failed to find any evidence of proton decay. But many physicists were certain that the proton decays [since they had staked their reputations on it]. They prevailed on the government to spend more $billions, and Superkamiokande [SuperK] was completed in 1996. Why? Because the overwhelming scientific consensus was still that the proton decays over time into lighter subatomic particles.

    But the ultra-sensitive SuperK failed to show any evidence of proton decay.

    The consensus for proton decay was getting a little shaky after so many $billions were spent [although a few scientists were awarded the Nobel Prize for the discovery of neutrinos from supernovas, by using the K and SuperK detectors and studying the results over many years – at a time when the Nobel Prize was more respected than it is now].

    Even though the consensus for the proton decay conjecture was finally eroding [after burning through much of the U.S. science budget, thereby starving many other programs], the search for proton decay had taken on an inertia of its own.

    In 2006, the latest and greatest detector was put on-line: the SuperK II. As you can probably guess, the SuperK II has shown zero evidence of proton decay. More than $20 billion has been spent so far on the proton decay conjecture – based on the ‘consensus’ of physicists.

    It hasn’t been money completely wasted. The purpose of the scientific method is to show whether a conjecture can be falsified. In the case of proton decay, the conjecture was falsified — forcing scientists to acknowledge that they needed a new and entirely different theory, to provide a hypothesis as to why the proton does not decay.
    [snip]

  208. Smokey
    Posted Nov 1, 2007 at 7:05 PM | Permalink

    Correction: the proton’s half-life was assumed to be 10,000,000,000,000,000,000,000,000,000,000,000,000 years [apparently my exponent didn’t get through the filter].

4 Trackbacks

  1. […] of Enron. (Noel Sheppard of NewsBusters.org points us directly to Steve McIntyre’s blog Climate Audit.) We’re talking about fudging data here – and in no insignificant way as Dr. Hansen would […]

  2. By The McIntyre factor [Deltoid] · Articles on Sep 24, 2007 at 9:07 AM

    […] In the US, 1998 and 1934 each changed by just 0.01 degrees C. So naturally Steve McIntyre wrote a 2,500 word post about how the changes were part of some NASA conspiracy to … ah, heck, if I summarized him you might wonder if I was being unfair, so in his own words: […]

  3. […] time someone serious looks at the data, it’s wrong.  They are constantly tweaking the numbers.  Who in their right mind can […]

  4. By Y2K Re-Visited « Climate Audit on Nov 11, 2010 at 10:54 AM

    […] summarized the puzzling changes in a post asking “Should NASA climate accountants adhere to GAAP?”[Generally Accepted Accounting […]