FOIed Emails on Hansen Y2K

If anyone is wondering whether emails by U.S. government employees are “private” and “personal” – an assertion sometimes made in respect to emails at CRU, an institution subject to UK FOI – the answer in respect to NASA GISS appears to be no.

Link: 

Judicial Watch announced today that it has obtained internal documents from NASA’s Goddard Institute for Space Studies (GISS) related to a controversy that erupted in 2007 when Canadian blogger Stephen McIntyre exposed an error in NASA’s handling of raw temperature data from 2000-2006 that exaggerated the reported rise in temperature readings in the United States.

242 Comments

  1. Steve McIntyre
    Posted Jan 14, 2010 at 5:38 PM | Permalink

    CA used to have a widget that enabled comments to be moved, but it didn’t survive the move to WordPress.

  2. Jonathan
    Posted Jan 14, 2010 at 5:47 PM | Permalink

    Gavin comes over as quite reasonable in most of these emails. I wonder where it all went wrong?

    • P Gosselin
      Posted Jan 15, 2010 at 7:04 AM | Permalink

      A deal from the devil is always difficult to refuse, even for the most virtuous among us. Anyone can be tempted. I mean look at how his life has been over the last 10 years: high profile, influential, prestigious…all really hard, if not impossible, to turn down when served up on a platter of gold.
      Before snipping, I’m not saying this is what happened…just speculating. Everybody has to make their own call.
      Like that Rolling Stones song:

    • Calvin Ball
      Posted Jan 15, 2010 at 6:04 PM | Permalink

      IMO, where it all went wrong, with Gavin and with several others who will remain unnamed, is when they started adopting a bunker mentality. One of the most remarkable things to observe is how these people will sound reasonable and rational until someone mentions ExxonMobil or the Heritage Foundation or some other trigger that sets them off like the Susquehanna Hat Company.

      • Sean Peake
        Posted Jan 15, 2010 at 6:19 PM | Permalink

        An obscure but very funny reference. LOL

        • Posted Jan 15, 2010 at 9:46 PM | Permalink

          I saved this one up till just before bed. I hadn’t heard of it either. Brilliant. In its own right and as an analogy.

      • Jimchip
        Posted Jan 15, 2010 at 7:46 PM | Permalink

        Re: Calvin Ball (Jan 15 18:04),

        When did Gavin know that this (Hansen Y2K) batch of emails was scheduled for release? (hmm)

      • Paul Linsay
        Posted Jan 16, 2010 at 10:53 AM | Permalink

        ExxonMobil or the Heritage Foundation or Niagara Falls

      • Steven Mosher
        Posted Jan 16, 2010 at 11:42 PM | Permalink

        What amazes me, given my PR training and experience, is the way in which they absolutely squandered a great opportunity when they released the code. And the way they have squandered opportunity going forward. Every time the clearclimatecode guys find a minor bug, RC should do a post and highlight it. Sure, it would put GISS in stark contrast with HADCRU, but it would give them tremendous talking points. In the end, and they know this, the code for other projects will all come out. The harry cat is out of the bag. They can continue to fight a losing battle or seize the transparency battlefield. To date their meager effort (oh, here’s a page of data and code) leaves a lot to be desired. Imagine a “green” army of dedicated undergraduates….

        • Posted Jan 17, 2010 at 6:55 AM | Permalink

          Even if mainstream climate science can never bring itself to agree fully with ‘sceptics’ or ‘lukewarmers’ (and who would want that anyway?), this is something that they can do and must do. Is there yet a name for this movement, for the global campaign for openness of climate data and code, within the ‘Peer to Peer Review’ network? I’ve suggested Open Climate Initiative – but I’ve not registered any .org’s. The emphasis on clear code is of course a very desirable side-effect, but the key thing is complete openness. As you point out, every little glitch discussed could become great PR for your specific area of endeavor – just as it has for Firefox’s olde Bugzilla and Linus Torvalds’ amazing Git-powered future for Linux.

          Tim Berners-Lee, having been appointed by Gordon Brown to help the UK out, in desperation for more technical cred, and in his general role as national treasure, was talking in Downing Street recently about the general principles of openness with government code and data. What date was that again? Oh yes, 17 November 2009. Where did I hear that one before?

          It was quite a day from my perspective, a little timeline of its own I haven’t fully written up anywhere. But you can see my comments on the subject that evening on Glyn Moody’s blog – not as well visited as this one but Glyn’s book Rebel Code remains for me a cool contribution on the history of the open source movement. This first ever comment on the climate situation to a well-known IT openness geek – and his immediate agreement, despite having bought the AGW disaster story – seemed just a little prophetic to me later, given the activities of ‘RC’ on CA around midday our time, and what was going to break on WUWT a few hours later, starting with Charles and your good self!

        • C. Baxter
          Posted Jan 17, 2010 at 2:07 PM | Permalink

          I suggest a little less computing and more science might be the way forward.

        • Posted Jan 18, 2010 at 9:03 AM | Permalink

          I think I understand what you mean and am sympathetic. But perhaps a better way to frame what the world desperately needs now is: we require top class standards in all areas. Given that software is being used throughout climate science – of course – all the good things that have been learned from the open source movement (including the test first and regression testing emphasis within agile methods) should be applied in that area. We should also of course do better, more honest science and not use computing as a fig leaf to try and cover up crud. We need it all.

        • Jimchip
          Posted Jan 17, 2010 at 8:08 PM | Permalink

          Re: Steven Mosher (Jan 16 23:42),

          You say “squandered”. Another aspect of the tragedy. For example, take an intro engineering programming class, point them at Gavin’s directories. Let them go beg the Dean for the compilers they want. Start untarring and compiling. A top NASA scientist sends them an encouraging email once in awhile. A “green” army of dedicated undergraduates… has themselves “a rock star”…in some parallel universe.

      • James of Perth
        Posted Jan 18, 2010 at 12:11 AM | Permalink

        I had to look it up too, but it is right on point. Susquehanna Hat Company. I will have to remember that!

  3. Posted Jan 14, 2010 at 5:48 PM | Permalink

    It’s great to see FOI working. Something we in the ‘mother country’ might just like to emulate before too long.

  4. mpaul
    Posted Jan 14, 2010 at 5:56 PM | Permalink

    Steve, you need to close your link tag.

  5. bill-tb
    Posted Jan 14, 2010 at 6:02 PM | Permalink

    I guess Judicial Watch knows the judges …

    It’s an interesting read; you can tell who is holding out for the truth.

    • LMB
      Posted Jan 16, 2010 at 10:17 AM | Permalink

      > I guess Judicial Watch knows the judges …

      Judicial Watch should know the judges; it is the most litigious FOIA organization in America. They file these types of FOIAs all the time. They have the legal experience and legal muscle to get just about anything they want. If anyone here wants something, but doesn’t have the time or money to file the FOIA request, I suggest you contact JW.

      “For help with Freedom of Information and other Open Records Requests – openrecords@judicialwatch.org”

      http://www.judicialwatch.org/contact

  6. HotRod
    Posted Jan 14, 2010 at 6:09 PM | Permalink

    Steve, I don’t know what to say. Another set of amazing emails.

    I’ve written a piece for a UK monthly on temperature record-keeping and reconstruction post Climategate – despite the lack of coverage in MSM I fear by the time it comes out it will be old hat. Oh well.

  7. Vito
    Posted Jan 14, 2010 at 6:10 PM | Permalink

    As a U.S. government employee (I am a Foreign Service Officer in the State Dept), my emails are all subject to FOI requests. There is “no expectation of privacy” for anything done on my Dept of State computer. Any emails sent from my .gov address are not private in any way, shape or form.

    • Erik in Cairo
      Posted Jan 15, 2010 at 12:24 PM | Permalink

      Vito, as you no doubt know, the same is true of emails sent from DoD computers or .mil addresses. To my knowledge, there should never be an expectation of privacy within any federal government workplace. In the Clinton Tapes, Taylor Branch speaks at great length about the president’s fear of even making oral diary entries, since they were potentially subject to subpoena. As a historian, with some experience in presidential archives, I am aghast at the notion of a small set of midlevel government functionaries, who presumably have no access to any classified information whatsoever, appearing to consider themselves to be above the rules by which everyone else in the government lives, including the chief executive. Their sense of privilege is, for lack of a better word, amazing.

    • Darrin Sideman
      Posted Jan 19, 2010 at 5:13 PM | Permalink

      I second that amazement and concur w.r.t. zero expectation of privacy when working for the gubberment. In my case, it’s DOE, and there is no expectation of privacy proffered, implicitly or explicitly. The mundane task of logging on to one’s government-owned computer is a constant reminder. That anyone working at NASA or any other government agency would believe otherwise is astonishing, to say nothing of the very clearly delineated rules against using agency resources or work time to conduct personal business of any sort.

  8. Anthony Watts
    Posted Jan 14, 2010 at 6:18 PM | Permalink

    I fixed the link for Steve

  9. stansvonhorch
    Posted Jan 14, 2010 at 6:33 PM | Permalink

    hey there was a guest username/password for makiko santo’s data site in the PDF’s….

    hooray for wget!
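    For readers wondering what a wget session against a guest-login data site involves, here is a minimal sketch using Python’s standard library instead. The host, path, and credentials below are placeholders, not the ones that appeared in the PDFs:

```python
import urllib.request

def make_opener(base_url: str, user: str, password: str) -> urllib.request.OpenerDirector:
    """Build an opener that answers HTTP Basic auth challenges from base_url."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, base_url, user, password)
    return urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

# Placeholder host and credentials -- illustrative only.
opener = make_opener("http://data.example.org/", "guest", "guest")
# opener.open("http://data.example.org/station_data.txt")  # would fetch one file
```

    wget does the same job recursively with its `--user`/`--password` flags and `-r`; the point is simply that a published guest login makes bulk retrieval a one-liner.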

  10. frost
    Posted Jan 14, 2010 at 7:11 PM | Permalink

    There was some question about whether emails from non-NASA accounts would be subject to FOI. Note that the first email in the PDF linked by Judicial Watch was from Gavin Schmidt using his columbia.edu email account.

  11. DS
    Posted Jan 14, 2010 at 7:33 PM | Permalink

    “Do we want to lower ourselves to debating with a court jester?”

    — James Hansen, 16 Aug 2007

    Your tax dollars at work.

  12. Greg F
    Posted Jan 14, 2010 at 7:37 PM | Permalink

    Page 201

    Jim gets many of these kinds of responses – a change whose effect we described as well within the margin of error has become an “astonishing change”.

    I guess the best thing is to ignore it and – if at all – set matters straight in a place like RealClimate.

    Reto

    • justbeau
      Posted Jan 14, 2010 at 11:12 PM | Permalink

      If Reto is a NASA guy, then this is a helpful quote for how NASA saw Real Climate, as a backdoor way of communicating. They are not trying to set the record straight through a NASA.gov site. Why not?

      • Steve McIntyre
        Posted Jan 14, 2010 at 11:17 PM | Permalink

        Communications through a NASA site have to be peer reviewed. Realclimate is a way of avoiding NASA regulations and policies – off-balance sheet financing, so to speak.

        • justbeau
          Posted Jan 15, 2010 at 12:08 AM | Permalink

          I expected the answer would be along these lines.
          The hockey team does not seem to have a deep bench of players. Hardworking Gavin seems to communicate closely with Hansen, so seems like Jim’s front man.

  13. Posted Jan 14, 2010 at 7:37 PM | Permalink

    Well, if nothing else this seriously weakens UEA’s objection to meeting FOI requests under the S.32(2)(B)(ii) exemption, under which they claim release would inhibit free and frank exchanges.

    Regardless of the fact these were released under the American FOIA rather than ours, any scientist from now on has to be aware that emails may be released. So the possibility of release here creates no more inhibition than that already established by the possibility of release there.

    • justbeau
      Posted Jan 14, 2010 at 11:17 PM | Permalink

      There may be two kinds of differences. First, UEA is a private institution and not as obliged to be forthcoming as is the government itself. Second, the right of citizens to see government information is probably stronger in the US.

      • George M
        Posted Jan 14, 2010 at 11:41 PM | Permalink

        The University of East Anglia is a private institution? Not funded by the government of some subset of Great Britain? In any event, CRU is government funded. International governments IIRC. Next someone will claim that Penn State is a private university.

        • justbeau
          Posted Jan 15, 2010 at 12:15 AM | Permalink

          FOIA’s are going to be most powerful in relation to US government agencies and their employees, as proven by the Judicial Watch FOIA.
          Even if a university gets government funds, it is regarded as independent. Universities will tend to shield employees from FOIAs, my guess, given cultural traditions of academic freedom.

        • Jonathan
          Posted Jan 15, 2010 at 2:53 AM | Permalink

          Re: George M (Jan 14 23:41), the public/private status of UK universities is messy. They are public bodies for FOI purposes.

        • Jimchip
          Posted Jan 15, 2010 at 3:54 AM | Permalink

          Re: George M (Jan 14 23:41),

          CRU got approximately $20 million from the US. That should give the US some leverage. The team has to learn to quit trying to hide behind the law only when it’s convenient. They always want it both ways. Basically, they do anything they want, legit or not (no rules for the big boys–They make the rules), and whenever challenged they quickly decide what they can hide behind. Gavin turns into a lawyer pretty darn quick.

        • justbeau
          Posted Jan 15, 2010 at 7:57 AM | Permalink

          CRU may have received money, but US FOIA laws do not extend to Britain.
          US FOIA laws do apply to NASA. This is why Judicial Watch has gotten NASA emails.
          As an academic institution, outside the US, East Anglia’s administration and attorneys may have more leeway to shield their workers from information inquiries.

          The East Anglia leak lays bare the close collaboration with US government scientists. One of Jim Hansen’s most trusted associates, Gavin, is providing warnings to bloggers (via Mike Mann’s web site) to ignore a leak of information, not formally from NASA, but superficially from another country.
          Given Gavin’s involvement at the time of discovery of the leak, it’s almost like he regarded East Anglia as another Real Climate-like innovation, a sub-rosa subsidiary of NASA. Interesting.

  14. RichG
    Posted Jan 14, 2010 at 7:41 PM | Permalink

    This is another 10.6 MB of pure gold.

    You can get a direct link to the Judicial Watch FOI file from their site here:

    Click to access 783_NASA_docs.pdf

    Best line so far: Hansen, in a thread where they are discussing how to spin Steve’s questioning of their opaque adjustment methodology:

    “Better not argue with him about whether we fix data; we do an urban adjustment, for example”.

    R

  15. Kasmir
    Posted Jan 14, 2010 at 7:56 PM | Permalink

    I just wish an investigative journalist would look into the cozy relationship between RealClimate, Fenton Communications, Connelly/Wikipedia, and the Green NGOs funding it all. Fenton is an extremely effective PR firm using state-of-the-art methods. It’s far beyond astroturfing, which is one of their specialties; their fingerprints are all over the brilliant Connelly/Wikipedia campaign, the “denier” re-labeling of sceptics, and the “science is settled” campaign. That kind of successful media management doesn’t happen accidentally. Just brilliantly done by Fenton; hell, I’d hire them if they’d take my business. There’s quite a story in there for an enterprising journalist.

    • Jimchip
      Posted Jan 14, 2010 at 9:50 PM | Permalink

      Re: Kasmir (Jan 14 19:56),

      Environmental Media Services (founder of RC) was physically co-located at Fenton’s, I think.

      • D. Patterson
        Posted Jan 14, 2010 at 11:34 PM | Permalink

        5 U.S.C. § 3107 : US Code – Section 3107: Employment of publicity experts; restrictions
        Appropriated funds may not be used to pay a publicity expert unless specifically appropriated for that purpose.

        Other former US Code sections relating to criminal propaganda funded by foreign entities and their agents in the United States have been repealed.

        • Jimchip
          Posted Jan 15, 2010 at 3:56 AM | Permalink

          Re: D. Patterson (Jan 14 23:34),

          It might have been the other way around. Arnie Schardt (EMS) donates the services to them, for the good of the cause.

    • JPeden
      Posted Jan 16, 2010 at 1:42 AM | Permalink

      “Just brilliantly done by Fenton”

      What’s so brilliant about lieing? I’m serious, anyone could make up this kind of stuff. But who wants to?

      • Posted Jan 16, 2010 at 4:50 AM | Permalink

        It should be lying. But I think this way makes your point even better.

  16. Dennis Wingo
    Posted Jan 14, 2010 at 8:05 PM | Permalink

    All things considered, it is strange there have not been legal challenges to Hansen and other Global Warming scientists, to verify their data and ensure transparency of their methods.

    The first legal challenge to the EPA, based on climategate emails, has happened.

    http://www.capitalpress.com/lvstk/TH-beef-appeal-011510

    • David L. Hagen
      Posted Jan 14, 2010 at 8:48 PM | Permalink

      Thanks Dennis

      Under the Clean Air Act, parties have 60 days after a rule has been promulgated to appeal to the D.C. Circuit Court of Appeals. Once the appeal period for this ruling ends Feb. 15, the parties will likely decide on a briefing schedule and a hearing date will be set, Thies said.

      Any other parties wanting to sue have 1 month left before the deadline.

    • ianl8888
      Posted Jan 15, 2010 at 12:20 AM | Permalink

      I’m from Aus, so I’m confused about US procedures

      Does this EPA regulatory push mean that Obama need not get legislative approval for AGW action?

      • Sean Peake
        Posted Jan 15, 2010 at 1:04 AM | Permalink

        Yes.

      • jim edwards
        Posted Jan 15, 2010 at 1:34 AM | Permalink

        The short answer is yes. There’s a little US Constitutional history here.

        Environmental regulation was originally accomplished on a case-by-case basis through nuisance lawsuits in state courts. If a neighbor polluted your air or water, you went to court to get an injunction. Individual states later developed prospective laws passed through their legislatures. Pollutant spillover from one state to another was meant to be handled through suits between states – heard in the US Supreme Court.

        Under our Constitution, ALL national law-making power resides in Congress. Only Congress has power to make prospective environmental laws on a national scale. The President has zero power to make laws; he gets to enforce Congress’s choice of laws.

        Congress can only write so many laws per year, so this system virtually mandates a small central government [by design…]. FDR found this to be an impediment to his “New Deal”; his allies pushed through legislation supporting the concept of “delegation” of power to the President’s administrative agencies.

        Using delegation, agencies like the EPA are granted some of Congress’s law-making authority. It’s blatantly contrary to the Constitution, but FDR’s Supreme Court approved the concept – as long as Congress provided the agency with sufficient guidelines to develop new regs.

        So, when the Clean Air Act was passed ~40 years ago, granting the EPA authority to regulate air pollution, that was all the authority EPA needed to perpetually write unlimited regulations on emissions. Agency regulations have the same force of law as Congressional statutes. The only limits are that the agency has to use some discretion and its determinations must be reasonable interpretations of Congress’s authorizing statute(s).

        So, if Congress does nothing, EPA gets to regulate CO2 to its heart’s content. A business or other party could sue EPA to stop the regs, but to win EPA need only show they made a good faith effort and their determination wasn’t irrational. One possible point to challenge is where EPA fails to do its own legwork and relies on CRU / IPCC materials – especially if flaws are found in the international materials.

        One side effect of the proliferation of all of these agencies is that the important business of governing is passed from elected officials to civil servants, and our political debates become centered around issues of morality like abortion, prayer in school, and drug use.

        • Craig Loehle
          Posted Jan 15, 2010 at 8:22 AM | Permalink

          Legally you are exactly right. The problem with the GHG regs is that on many grounds they might violate the Clean Air Act statutes, not least because EPA has no grounds for giving a pass to sources which emit modest amounts of CO2 such as factories, hospitals, and even farms. There are other problems as well which will make this a bonanza for lawyers.

        • Doug Badgero
          Posted Jan 17, 2010 at 2:54 PM | Permalink

          Exactly:

          EPA’s regulations, as proposed, apply to stationary sources emitting greater than 25,000 tons per year. The CAA states that it applies to any stationary source emitting greater than 250 tons. It would of course be politically untenable to apply the regulations based on the language in the CAA.

        • Sean
          Posted Jan 15, 2010 at 10:14 AM | Permalink

          Nice summary.

        • QBeamus
          Posted Jan 15, 2010 at 6:03 PM | Permalink

          From a rigorous theoretical standpoint, I believe Jim E. has it right. The standard of review of the exercise of regulatory discretion is very low (Chevron deference). This is because the theoretical reason for creating regulatory bodies is that they are more institutionally competent in resolving fact-sensitive issues in which the facts are rapidly changing. Issues governed by evolving science and technology are the poster child for this rationale. So it will be very hard for anyone to win this kind of suit on the scientific merits.

          A procedural attack is a far easier line of approach. I would expect the cattle board people to cut and paste from Steve’s excellent paper laying out how the adoption of the IPCC’s reports violates the EPA’s own promulgated rules on the adoption of the work of others. Such procedural attacks call into question the legitimacy of a regulatory agency’s use of delegated power. Furthermore, they don’t require judges – who are notoriously scientifically illiterate – to second-guess people whose life’s work involves a particular kind of technical issue.

          Having said all that, it may still come down to a lottery, at least in the first instance – luck of the draw of judges. Unfortunately, judging has become a much more political process. It’s reached the point where law schools are actually teaching young lawyers that it’s bad for judges to take the text of the laws they’re supposedly applying too seriously. My personal opinion is that, since the 60s or so, the courts have been the principal tool by which the Left has succeeded in implementing its preferred policy changes. That’s an entire generation – long enough to develop the cultural norm that accepts this as natural and proper.

          As a practical matter, the appellate process does have a substantial moderating effect on this behavior. It’s far worse at the trial level, not too bad at the circuit court level, and reduced to the now-familiar trench warfare at the Supreme Court level. Still, at one time it was commonly said that “the single largest function of the Supreme Court is to reverse the 9th Circuit.”

      • David L. Hagen
        Posted Jan 15, 2010 at 3:13 PM | Permalink

        The key change is that the Supreme Court allowed the EPA to make an “Endangerment Finding” on CO2 which then allows EPA to control CO2 under the Clean Air Act – which was never debated or anticipated by Congress.

        • ianl8888
          Posted Jan 15, 2010 at 3:42 PM | Permalink

          Thanks for the replies – I’d posted this before but it was deleted on some whimsy or other

          The point of my initial question, of course, was to ask if the gladiatorial legislature process could be bypassed. The answer is that it obviously already has

          Aus has State-based Land & Environment Courts that are engaging in a similar bypass process

  17. Posted Jan 14, 2010 at 8:07 PM | Permalink

    It seems odd that the documents are supplied in a PDF of scanned email messages. This seems pretty archaic. It’s not searchable. Perhaps this form was supplied since it’s not as easy to turn it into a searchable database.

    • bender
      Posted Jan 14, 2010 at 8:25 PM | Permalink

      Quel surpris.

      • Posted Jan 14, 2010 at 8:36 PM | Permalink

        Someone will OCR it if it proves interesting

      • Gardy LaRoche
        Posted Jan 14, 2010 at 10:24 PM | Permalink

        Re: bender (Jan 14 20:25),
        I presume you meant ” Quelle Surprise?”
        🙂

    • rxc
      Posted Jan 15, 2010 at 7:27 AM | Permalink

      The reason the emails are printed and scanned is that the email systems used in most of the government do not have the ability to organize the emails of interest into complete electronic page files, with all of the header info and the contents together. I used to work at one such agency, and it was amazing to me that no one had ever written a program to make FOIAs easier to respond to.

      The FOIA lawyers require that emails in FOIAs have all the relevant info, such as the complete header info, printed out for the FOIA response. So, in order to move that info around electronically, it has to be scanned back into a .pdf file.

      Ugly, inefficient, crazy, but required by the law.
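      For what it’s worth, the flat layout the lawyers want (full headers plus contents, together) is easy to produce from a raw message file. A sketch using Python’s standard email package, on a made-up message:

```python
from email import message_from_string
from email.policy import default

# A made-up RFC 5322 message, standing in for a file pulled off a mail server.
raw = """\
From: sender@example.org
To: recipient@example.gov
Subject: FOIA demo
Date: Thu, 14 Jan 2010 17:38:00 -0500

Body of the message.
"""

msg = message_from_string(raw, policy=default)

# Emit every header followed by the body: the complete flat "page"
# a FOIA response needs, with no print-and-rescan round trip.
for name, value in msg.items():
    print(f"{name}: {value}")
print()
print(msg.get_content(), end="")
```

      Nothing here is exotic; the surprise in rxc’s account is that agency mail systems apparently never got even this far.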

      • dp
        Posted Jan 15, 2010 at 11:18 AM | Permalink

        A technical point – the headers in the messages I’ve seen in the pdf file are far from complete – they are in fact technically considered an envelope. The machine-to-machine headers provide the actual times and the path the message takes from submission to delivery at the end point. Without them you have no way to know who sent the message nor from where. Specifically, the “who” is never known; only the domain of the sender is known.

        This is why it’s important – all the contents of the envelope are untrustworthy. It can all be changed at will by the sender before sending. Need to make the From: address a .edu address? Just make that change in your mailer. Need to really obfuscate things? Don’t use a mailer at all. Submit the message directly to the mail transport agent (server) via any number of simple means.

        The “From” address is what you want it to be. This is well known to spammers. There is also no way to know if the envelope has been modified after the message was posted. And of course there is no way to know if any modifications or spoofing acts have occurred.
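        dp’s point is easy to demonstrate: in a standard RFC 5322 message, the From: line is ordinary text written by the sender’s own software, and nothing in the message format verifies it. A minimal Python sketch with placeholder addresses:

```python
from email.message import EmailMessage

# The From: header is whatever the sending software writes; the message
# format itself performs no check against the real account.
msg = EmailMessage()
msg["From"] = "anyone@faculty.example.edu"  # arbitrary, spoofable at will
msg["To"] = "recipient@example.org"
msg["Subject"] = "demo"
msg.set_content("Headers inside the message are untrustworthy on their own.")

print(msg["From"])  # prints the spoofed address, no questions asked
```

        Only the Received: trace headers added by each relay, which are outside the sender’s control, pin down the actual path a message took.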

        In many cases the only copy of a delivered message is on the hard drive of the recipient’s workstation, making recovery difficult, and there is no way to know if the message has been modified by the recipient. Mail systems that use a central “post office” exist and are less vulnerable to end-user manipulation, but not everyone uses such systems.

        End users can also redirect all incoming mail to an off-site location where discovery is impossible. A number of my email customers have all their mail redirected to other services such as gmail or hotmail, for instance.

  18. WillR
    Posted Jan 14, 2010 at 8:23 PM | Permalink

    I read every email. I have seen this sequence and set of reactions before.

    Sometimes when a person is honest, guileless and direct, the person (group) on the other side of the situation begins to react as if they are faced with an incredibly devious person of ill intent. They begin to look for all the hidden traps, the tricks, the insults etc. — but there are none. That becomes evidence that they simply have not figured out the game — so they go on the attack to deter further attack — even though there is none. Seems crazy, but I have seen this a few times.

    Also, some of the people in the email addresses seem unaware of the political issues at work. That’s at least possible. For James Hansen — it’s difficult to believe — for Gavin? hmmm

    Were they that unaware of how their data was being used to promote the cause of AGW? Hard to believe. Maybe there is an ivory tower… some of them may have been in that space — but some appeared to be politically aware.

    Beyond that… I dunno…

  19. Jimchip
    Posted Jan 14, 2010 at 8:36 PM | Permalink

    My comments in []

    8/7/2008 p.8
    any attempts to teach or outsmart Steve are counter productive and a total waste of time.

    [I agree with the outsmart part]

    7 Aug 2007 p. 11
    BTW, your note to McIntyre perhaps should include a statement such as. This change and its effect will be noted in our next paper on temperature analysis submitted for publication and in our end-of-year temperature summary.

    [They are hiding corrections in later pubs. They don’t want an addendum to the original (which they should) since it would stand out]

    9 Aug 2007 p. 16
    as an alternative to attempting to reconstruct the origins of all station records…is it easier to use current data per se and that the difference to the global result is negligible?

    [We used the wrong data but see it doesn’t matter. Garbage in, correct answer out].

    09 Aug 2007 p. 17
    …using GHCN data (which would reduce the 1900-1999 warming over the US by .3 C and have no noticeable effect on global means.

    [As long as global means don’t change, who cares whether the US is cooler than we said before?. Americans, maybe]

    11 Aug 2007 p. 19 Hansen to Revkin

    [Hansen’s 10 year ‘forecast’ for the NYT is a classic]

    10 Aug 2007 p.36
    The blog you attached is a primes example of what gives bloggers a really bad name; somebody with no idea what he is talking about is spouting absolute nonsense

    [Mr. McIntyre, Reto Ruedy is saying bad things]

    8/10/2007 p. 39 re: National Geographic Temp. map
    I checked what this correction does to your map and it does change the colors somewhat over parts of the US; the rest of the world is unaffected…So there is little need to make any changes.

    The timing is a bit awkward, though. Sorry…

    Aug 14, 2007 p.41

    These are some desperate characters…Finally, if one wished to be scientific, as is the obvious

    intend [sic] of these critics/contrarians…

    13 Aug 2007 p.52
    p. 54 … Who is this man who understands American climate data so much better than the National Aeronautics and Space Adminstration? Well, he’s not even America [sic]: He’s Canadian. Just another immigrant doing the jobs the Americans won’t do…

    [except for “immigrant” he’s right. Getting the data right because the Americans won’t do it.]

    15 Aug 2007 p. 70
    …This is even more speculative, some people still try to deny in spite of the data that it is warm at all. To observe that warming accelerates would take even longer observation times, another 50-100 years…The frightening thing about today’s temperature rise…

    [but the hockey stick shows acceleration…]

    p.71 The annual US mean-changes are still large compared to any CO2 effect…

    [But, I thought CO2 effects were shown to…I’m confused (not)]

  20. Bo
    Posted Jan 14, 2010 at 8:42 PM | Permalink

    Seriously, I come here from time to time hoping to find news of newly discovered findings around the world that downplay the notion of AGW, and almost always I get news or commentary about the Climategate letters. I know it’s an important topic, and it could be one of the most important findings in the debate, but don’t forget all the new discoveries and new data compilations made every day.

    Thanks

    • deech56
      Posted Jan 16, 2010 at 1:50 PM | Permalink

      Well, according to NASA-GISS, 2009 is tied for the second warmest year since the records began, but I don’t think that’s the kind of new finding you were looking for.

    • dp
      Posted Jan 18, 2010 at 2:17 AM | Permalink

      The site is called Climate Audit because this is where you find process checks and errors in the math – and, I’d presume, approval where appropriate. A better place to focus on the science would be at NASA or RealClimate. But check back here often to see if you know all you can about the numbers you see there. In terms of climate websites, I don’t know that this one has any peers in its area of study.

      I have noticed, however, a number of posts like yours that seem to encourage changing the subject. Just an interesting data point.

  21. Anand Rajan KD
    Posted Jan 14, 2010 at 9:42 PM | Permalink

    Each and every one of the climategate emails has something abusive about Steve M. You must really have a strong stomach!!

    There is a price to pay – for everything. I hope someone from the ‘Team’ comes forward with an apology at some time.

  22. Jimchip
    Posted Jan 14, 2010 at 9:43 PM | Permalink

    My comments in []

    03 Aug 2007 p. 146 (Gavin to Reto)
    I think that the suggestion you have for fixing it is a better idea than what is being done now, though possibly it might make more sense to correct the later GHCN data rather than the earlier USHCN numbers (that doesn’t make a difference to the trend, 0f course)…

    [followed by more discussion of adjustments. I thought they didn’t fix data, hmmm. All of these fixes never make a difference]

    16 Aug 2007 p.92 re: Steve’s ‘robot’ and the IP block

    …I think we should just make the point clear that McIntyre’s story is a fabrication in a very generic was [sic]…Do we want to lower ourselves to debating with a court jester…

    August 15 2007 p. Steve’s interview

    16 Aug 2007 p.100 Schmidt, Hansen et al response

    16 Aug 2007 p.101 [Reto doesn’t like Steve]

    16 Aug 2007 p.104 (Gavin):
    …It does however highlight the power of saying that the code is secret and things are being kept from the public. It may still be worth putting up a clean version of the adjustment program on the website in order to have something to point to…

    8/17/07 p. 124
    Technical arguments with a jackass or a jester…Tom Karl’s group – they “fix”, we don’t…

    [they merely use an urban correction]

    24 Aug 2007 p.113
    Of course Reto thinks the ranking that shows which year was warmer by 0.01 degree is stupid…global temperature is much less noisy…

    [I think it’s false reporting of data. Too many significant digits, implying measurements in the 1/100th of a degree range. With those surface stations?]

    24 Aug 2007 p.131 (Revkin to Reto): is there a simple way to determine which shifts are NOT statistically significant?

    [They’re all pressed for time but Reto helps Revkin with the data…For the NYT? Tax dollars at work]

    p.136 (Reto): …We really should round to the nearest tenth of a degree as some other groups do) rather than showing 2 digits…

    [I question the precision of the nearest tenth]

  23. Posted Jan 14, 2010 at 9:43 PM | Permalink

    I don’t have much, if anything to add, except to boast that I got to the end of the PDF. 🙂 Lots of repetition in there, and identifying the details amid the Replies, Forwarded and CC’d is a little challenging.

    I think that this is significantly less damaging to the Team than the CRU leak. The behaviour of the GISS gang is significantly more professional than that of the CRU gang. One has to wonder if that’s principally due to the absence of Mann in this particular mix. But – dammit – to suggest that SteveM’s upstairs lightbulb is blown, after he’s identified an error in NASA’s own calculations, is damned cheeky.

    One thing I most certainly come away with is a sense that these guys firmly believe in the theories of AGW, despite working principally on data that don’t, frankly, contribute anything at all to reinforce the idea of an exponential warming trend as a result of AGHG. Belief in a thing, in the absence of evidence, is an act of faith that has no place in science.

    • johnl
      Posted Jan 15, 2010 at 5:16 PM | Permalink

      Nobody likes to get caught making a mistake. So calling the person who found it stupid can help the person who made it cope.

      • Rhoda R
        Posted Jan 17, 2010 at 3:53 PM | Permalink

        There is always the “Kennel Blindness” factor: Where you see your dog as being the epitome of the breed standard even when they are totally off. I.e. they are seeing what they WANT to see.

  24. Robert
    Posted Jan 14, 2010 at 9:45 PM | Permalink

    I work for government here in BC. As far as I know, all e-mails received and sent through my work are public information. When you keep this in mind you also keep your manners and remember that you are serving the public, who is your client or customer. Methinks some of our ivory tower brethren need to be reminded of this.

  25. Geoff Sherrington
    Posted Jan 14, 2010 at 10:25 PM | Permalink

    On the same page of “Judicial Watch” there is material on Wurzelbacher v. Jones-Kelley, et al., a.k.a. the well known “Joe the Plumber” case.

    In short, it is alleged that Joe asked a public question of President-to-be Obama, and was then subjected to illegal personal record searches aimed at finding dirt on him.

    If Steve was a US citizen, and if NASA officials are under the same relevant obligations as politicians (which I do not know), a similar type of avenue might be open. The depictions of deniers by Jim Hansen in particular are hardly designed to encourage free speech and the advancement of understanding.

    Maybe there is a volunteer USA denier who feels strongly about statements like those quoted above, “Hansen, clearly frustrated by the attention paid to the NASA error, labeled McIntyre a “pest” and suggests those who disagree with his global warming theories “should be ready to crawl under a rock by now.” Hansen also suggests that those calling attention to the climate data error did not have a “light on upstairs.””

    My post here might be impractical or beyond USA law, but it expresses frustration at the actions of officials who disregard the First Amendment, and whose retaliation procedures include adverse action likely to “chill a person of ordinary firmness from further participation in that activity”.

    Frequent readers here might recall even stronger statements designed to chill people.

  26. Mark T
    Posted Jan 15, 2010 at 1:14 AM | Permalink

    RealClimate is not a NASA web site, so it’s a stretch to call his work on behalf of RC internal NASA work product.

    Except when he posts during work hours, using a work computer.

    Mark

    • justbeau
      Posted Jan 15, 2010 at 7:24 AM | Permalink

      If Gavin uses government owned equipment during work hours to post prodigiously and to say ominous things to bloggers, then this could constitute:
      –misuse of government property, because Gavin would not be doing work to communicate properly via a NASA web site, but serving another one, in the private sector.
      –misuse of work time, working on behalf of the private sector’s Real Climate, instead of doing work Congress funds NASA to perform.
      –if NASA wants to communicate views to the public, it should do so via NASA web sites. Gavin already has his face pictured on a NASA web site, so apparently NASA web sites exist.

      An outfit like Judicial Watch could render service by examining the personal conduct of NASA scientists, serving Mike Mann’s web site and spewing out their personal views, including being rude to members of the public they are duty bound to serve.

      On weekends and after working hours, if Gavin wants to spew his personal thoughts out on the Web via his own web site or one of Professor Mann’s, then fine. But given Gavin’s prolific posting, it’s hard to imagine none of this was on taxpayer-paid time. And during those 40 hours per week, he should only be sharing his thoughts via a NASA.gov web site in compliance with appropriate procedures governing such a site.
      The root problem for these NASA guys is they became eco-celebrities and lost track of proper behaviors for government employees. The vital importance of “saving the earth” via Real Climate must have led them to think that rules governing Federal workers simply did not apply to them. Maybe a Congressman slipped an exemption for Hansen and his boys into the fine print of a law.

      • snowmaneasy
        Posted Jan 15, 2010 at 10:01 AM | Permalink

        This is exactly what happened…they became eco-celebrities…a case of too much too soon….

        • Alan F
          Posted Jan 15, 2010 at 1:15 PM | Permalink

          The very same mentality kept the IRA and UVF stirring the pot for decades in Belfast. When little fish get to become big fish, it is in itself an addiction to surpass all others.

    • Posted Jan 15, 2010 at 10:45 AM | Permalink

      That doesn’t make his RC stuff the property of NASA. It just means he misappropriated his work time.

      Now, if he wrote a program on his NASA computer, then tried to claim it as his own, that would be considered NASA work product.

      • boballab
        Posted Jan 15, 2010 at 12:05 PM | Permalink

        Jeff, you are creating a strawman. Just go to the NASA ethics FAQ webpage, where they state it very clearly:

        Misuse of Position

        What’s the basic rule?
        The rule is simple: we may not use our public office for private gain. This includes our own private gain, or that of anyone else. However, the rule sweeps quite broadly, and applies in a wide variety of circumstances. The rule also covers misuse of non-public information, and misuse of Government property or time.

        I have a business at home, and my computer here at work has exactly the software package I need to keep my mailing lists.
        Stop right there. Employees may not use Government property for other than authorized purposes. NASA computers and networks are provided to employees for conducting official business only. Official business means internal and external communication and preparation and delivery of products or services which are part of one’s duties and require the use of NASA’s computer equipment, software and networks.

        Supervisors may permit limited personal use of Internet services (World Wide Web) provided the use does not interfere with the employee’s work or the work of others, and provided this privilege is not abused. It is not permissible to access, download, or print material which would offend others or create a hostile work environment. Access to the Web should be limited to brief periods when it can reasonably be assumed by supervisors, other employees, and the public, that the employee is in a non-duty status, such as during the lunch break.

        Expressly prohibited use of NASA computers and networks is that which is clearly not related to official business, such as conducting commercial or non-profit personal business; performing personal work (finances, investments, purchases, legal correspondence); performing work for non-work related organizations (social, political, religious); sending chain letters or social messages; playing computer games; or engaging in partisan political activity.

        http://www.nasa.gov/offices/ogc/general_law/ethicsfaq.html

        Now unless Gavin can show that he only did his blogging on his lunch break and only for a very short time span, he is in clear violation of NASA Ethics policies.

        • Skip Smith
          Posted Jan 15, 2010 at 2:41 PM | Permalink

          I’m not comfortable with this “gotcha” game on matters not directly related to the science. We’re losing focus …

  27. David S
    Posted Jan 15, 2010 at 2:33 AM | Permalink

    “13 Aug 2007 p.52
    p. 54 … Who is this man who understands American climate data so much better than the National Aeronautics and Space Adminstration? Well, he’s not even America [sic]: He’s Canadian. Just another immigrant doing the jobs the Americans won’t do…”

    But that doesn’t apply to Gavin. “Team” immigrants good, yours bad.

    • JBean
      Posted Jan 15, 2010 at 4:48 AM | Permalink

      The “immigrant” remark is from a column by political humorist Mark Steyn, not NASA. Steyn himself is a US immigrant from Canada, and was/is very supportive of Steve’s work. Read the whole column, as repeatedly quoted in the emails. It’s funny, and the line about “doing the jobs Americans won’t do” is taken from a US politician’s arrogant gaffe a few years back.

  28. Stephan
    Posted Jan 15, 2010 at 3:01 AM | Permalink

    By far the most significant news is video number 4:
    http://www.kusi.com/weather/colemanscorner/81583352.html
    It will have more of an effect than the UEA story because it’s in the USA.

  29. R.S.Brown
    Posted Jan 15, 2010 at 3:23 AM | Permalink

    Steve,

    Here’s a link to a FOIA request that brought out the NASA logs
    of Freedom of Information Act requests from October 25, 2004
    (the start of FY2005) to December 4, 2007, (the early months
    of Fiscal Year 2008) in a 133 page PDF presentation:

    Click to access FOIA_Logs_NASA-HQ_FY2005-07.pdf

    Highlights:

    These have an inverted loading by Fiscal Year. In general, the
    higher the page number the earlier the request.

    You can move through the first 28 pages of the logs before you
    come to a request for a document on “Climate Change”, or anything
    close to that subject.

    On page 34 is a NPR request for target words Administrator Griffin
    and “global warming” and “Climate change”.

    On page 46 is an August 28, 2007, request from Christopher Horner,
    “Requesting all email sent to James Hansen and/or Reto A. Ruedy
    from Steven McIntyre”.

    On the same page is a second request from Mr. Horner for the “records,
    documents, and internal communications and other relevant covered
    material created by, provided to and/or sent by Goddard Institute for
    Space Science (GISS) relating to the…” [the rest is missing on this
    version of the form] Please see:

    Click to access hansen-giss-correction-foi-request.pdf

    for what may be the original request letter.

    On page 73 for March 27, 2006 is a “Request for access to source
    files” … with no detail entered by the NASA FOIA personnel as to
    exactly what source files were being requested.

    Also on page 73 and logged for March 31, 2006 is a request for
    “Report regarding climate change”.

    On page 78 NASA staff logged a May 5, 2006 request from Seth
    Borenstein for “All coresspondence, documents, memorandums, emails,
    meeting notes, responses, agenda items and phone and meeting
    transcripts regarding a December 6, 2005, FOIA…” [The rest of the
    request summary is missing on this version of the form]

    Page 125 has an entry dated July 20, 2005, asking for, “Global
    modeling and assimilation, how the land models work”.

    On August 1, 2005, page 126, the same individual requested the,
    “Global Modeling and assimilation office contact”.

    That’s about it. As of December 2007, the good scientists at NASA
    seem to have been burdened by even fewer FOIA weather data/program
    requests than NOAA had over the same period.

    Please take a couple hours and wade through the NASA FOIA requests
    for your own edification and enlightenment.

  30. kan
    Posted Jan 15, 2010 at 3:29 AM | Permalink

    In the general population of people, there are 3 generally accepted beliefs about the practice of climate science that these emails and the CRU emails are beginning to change.

    1) Even though the regular Jane/Joe can use a thermometer, they know theirs is cheap. But climate scientists must have thermometers that can read to the .0001 degree, probably costing $1M each. So when the news reporter says that AGT has risen 0.8C, that means it has risen exactly 0.8C (they know there is an error rate, but assume it is on the order of .00001). The reporters never say “with an error of + or – 0.5C”.

    It is not that Jane/Joe doesn’t understand “+ or -”; they hear that with every political poll. In the case of climate change, the reporter never says it. If you tell someone “80% of the people love candidate X, with +/- 50% error, and X is going to win”, they will laugh at you.

    Looking through the emails with reporters, you never see the two numbers together. In fact, I wonder if Andy Revkin knows that the 0.8C rise in U.S temperatures since 1900 comes with a 0.5C possible error? If he does, has he reported it?

    2) Peer review process. To Jane/Joe this means that when a paper is reviewed, the same experiment is run by others and the results come out the same. In the case of papers on mathematics or statistics, that means someone else sat down, did the calculations, and got the same results. If they did not get the same results, then the paper would NOT GET published, because it was wrong.

    In the CRU emails, we are learning that the review process means “this paper agrees with me, so it is good. That paper (which I have not fully read) disagrees with me, so it is not real science, and is terrible.”

    3) Jane/Joe understands the climate is complex. They live in it everyday. They also know that computers can calculate complex mathematics. They know that computer modeling, when used by scientists, can model things from the real world very well (like nuclear bombs – why did the US agree to stop testing real nukes?). However, they also know that computer modeling, when used by economists, cannot model the real world. Furthermore, they know that there are lies, damn lies, and statistics.

    Climate researchers are called climate scientists, not climate economists. The lack of any accurate prediction from a climate computer model will eventually lead to the latter. Jane/Joe will listen to them, but they won’t believe them.

    • Jimchip
      Posted Jan 15, 2010 at 4:15 AM | Permalink

      Re: kan (Jan 15 03:29),

      wrt 1) They assume at least that any thermometer is situated correctly, instead of being next to the air conditioner exhaust or a hot water pipe, about 4 feet above the asphalt.

      2) The difference between peer-review and reproducibility. Non-reproducibility always trips the alarm. Also, “That paper (which I have not fully read)” in many cases is ‘that paper by HIM will never see the light of day. I don’t have to bother reading it. I’ll tell Mike to write a bad review.’

    • Posted Jan 16, 2010 at 4:56 AM | Permalink

      That is an extremely helpful summary, thanks. Especially the point about computer modeling. Hadn’t seen it laid out that way. Climate statisticians?

  31. Stephan
    Posted Jan 15, 2010 at 4:20 AM | Permalink

    J. Hansen has already issued an immediate reply:
    NASA has issued the following statement in response to the KUSI Special Report. This statement is from Dr. James Hansen, Director of the NASA Goddard Institute for Space Studies in New York City:

    “NASA has not been involved in any manipulation of climate data used in the annual GISS global temperature analysis. The analysis utilizes three independent data sources provided by other agencies. Quality control checks are regularly performed on that data. The analysis methodology as well as updates to the analysis are publicly available on our website. The agency is confident of the quality of this data and stands by previous scientifically based conclusions regarding global temperatures.” (GISS temperature analysis website: http://data.giss.nasa.gov/gistemp/)

    other agencies hadley etc please…

    • Jimchip
      Posted Jan 15, 2010 at 4:56 AM | Permalink

      Re: Stephan (Jan 15 04:20),

      NASA has not been involved in any manipulation of climate data used in the annual GISS global temperature analysis.
      ———————
      03 Aug 2007 p. 146 (Gavin to Reto)
      I think that the suggestion you have for fixing it is a better idea than what is being done now, though possibly it might make more sense to correct the later GHCN data rather than the earlier USHCN numbers (that doesn’t make a difference to the trend, 0f course)…
      ———————

      He probably means manipulation of climate results. No matter what the data, same results, so it would be hard to manipulate. Maybe a ‘tricky’ term would help: “….any manipulation of….” but we tweak it all the time. [Can’t make a difference in the trend. They always add the term “global” when a statement like that is made.]

      I said ‘Global’, see, see. I wasn’t talking about all the other fixing-up we do.

      • JBean
        Posted Jan 15, 2010 at 5:36 AM | Permalink

        On 8/10/2007, Andy Revkin asks, “should people have always paid less attention to US (48 state) trend as a meaningful signal of AGW? (now that those earlier warm years intrude, it certainly makes the case that regional data can be a red herring).”

        (Er, or now that it looks like the data is a bit messy?)

        On 11 Aug 2007, Hansen responds that “the ‘reanalysis’ has not changed anything” and besides, “the contiguous U.S. is only 2% of the global area.”

        (Posh, 2%! Why bother with such an insignificant area, when they have the globe to save?)
        snip – policy

  32. Ryan O
    Posted Jan 15, 2010 at 9:19 AM | Permalink

    Agree with some prior comments – the GISS team seems to behave more professionally than the CRU team – at least in this set of emails. I find it quite interesting that they like to shoot the messenger (Steve Mc) for pointing out an error *because the press subsequently got really interested in the error*.

    Nothing as bad in here as the CRU emails – though it does show that GISTEMP is more of a scientific work-in-progress than a rigorous index (like the Dow or other financial indices) . . . indicating that top brains in science probably aren’t the best folks to keep up on the day-to-day drudgery of quality control on a bunch of numbers. They really need to figure out how to transition the index from being a scientific toy to being a no-kidding real index.

    • Harold Vance
      Posted Jan 15, 2010 at 11:05 AM | Permalink

      Amen, Ryan O!

    • Richard
      Posted Jan 16, 2010 at 12:27 PM | Permalink

      Ryan

      Please be careful in citing the “Dow” as an exemplary time series of data. As you may not know, the individual components of the various Dows change regularly (i.e. swapping in Microsoft for some horse-and-buggy company in the Dow Industrial “Average”), so comparing the composite averages over time is just an exercise in guesstimation. On second thought, maybe this is where the hockey team got the idea of switching between data sets to get what they may have thought were more “realistic” results. No?

      • Bernie
        Posted Jan 16, 2010 at 12:48 PM | Permalink

        Almost all economic indicators suffer from the cumulative error type issues that Hansen blithely ignores. Much time and effort is spent trying to “adjust” such economic indicators, largely with limited success. For example, the Consumer Price Index is very problematic, as is the definition of the Poverty Level. Indeed, “price” becomes a fuzzy measure when you try to compare prices over long time periods.

      • Jimchip
        Posted Jan 16, 2010 at 1:29 PM | Permalink

        Re: Richard (Jan 16 12:27),

        There used to be a corporation called General Motors that used to be a ‘part of’ the DJIA. At least one knows why the change was made. It wasn’t just, ‘I like Cisco more than GM’, or whatever.

  33. Kenneth Fritsch
    Posted Jan 15, 2010 at 9:49 AM | Permalink

    Although these emails were god awful monotonous to read, after a slog through them I have the following observations:

    1. Absolutely no science input.
    2. Lots of political thoughts and spin.
    3. Nothing about what finding the error revealed about the sloppiness of the GISS data set owners and their lack of transparency, and, as a result, about how much trust can be placed in their data and data manipulations. Instead we get the mantra that the error does not matter to their precious global average warming.
    4. Something interesting I have observed when temperature data set owners are asked about uncertainty in their data: they point to the hard statistical calculations that can determine uncertainty due to (lack of) spatial coverage, but then hand-wave away any other sources of uncertainty by stating that those sources will be small. That situation has set me onto a longer term study of those uncertainties. One of the NASA emailers even points to individual station quality control (as exposed by the Watts team) as not being an issue with uncertainty, obviously coming to that conclusion without studying the Watts team results.
    5. Andy Revkin appeared to go hat in hand to consensus scientists for talking points for his former column in the NYT.
    6. Do I find it unusual that an organization like NASA would spend much more effort covering its tracks and defending past actions than improving its systems? In a word, no.

    • Jimchip
      Posted Jan 15, 2010 at 11:00 AM | Permalink

      Re: Kenneth Fritsch (Jan 15 09:49),

      4. is the one for me. Raw temp data with (+/-) 2 °C of error (as an example) can’t turn into (+/-) 0.01 °C.

      • Dominic
        Posted Jan 15, 2010 at 2:16 PM | Permalink

        Re: Jimchip (Jan 15 11:00), I am not sure about this. Surely some weak variant of the central limit theorem kicks in here, which should imply that the standard error of the average is less than the error in the individual readings. I have discussed this elsewhere without satisfaction, so I would be interested to read any references to this precise question which someone may have.

        • Henry
          Posted Jan 16, 2010 at 4:58 PM | Permalink

          The central limit theorem requires the errors to have a zero expected value and be independent of each other. But errors due to equipment or observation methods could be more systematic. Hence the Surfacestations project.

        • bender
          Posted Jan 17, 2010 at 12:16 AM | Permalink

          No it doesn’t! A sample mean converges to a population mean given increasingly large samples. That mean does not need to be zero.
          .
          You can have a bias in an instrument and lack of precision; they’re different things. The lack of precision will indeed vanish in the limit. The bias will not. But no one is saying the CLT irons out the bias. (This only happens if biases among replicate instruments have a mean of zero.)
          .
          However instrumental bias will vanish if what you’re interested in is trends in anomalies across repeated measures. As long as this bias is fixed, it cancels. Sure, some instruments drift in bias, hence the need for periodic recalibration. A mercury-filled thermometer probably does not drift in its bias and probably doesn’t need a whole lot of recalibration over time.
          .
          Failing to distinguish between precision and bias leads to erroneous conclusions about the role of CLT in increasing measurement precision.
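bender’s distinction between precision and bias can be illustrated with a short simulation. This is just a sketch with made-up numbers (the temperature, bias, and noise values are hypothetical, not actual station data): averaging many noisy readings shrinks the random error, as the CLT says, but a fixed instrument bias survives the averaging and only cancels when you work in anomalies.

```python
import random
import statistics

random.seed(42)
TRUE_TEMP = 15.0   # hypothetical true temperature, degrees C
BIAS = 0.7         # fixed instrument/siting bias, degrees C (made up)
NOISE_SD = 0.5     # random reading error (1 sigma), degrees C

def reading():
    """One thermometer reading: truth + fixed bias + random noise."""
    return TRUE_TEMP + BIAS + random.gauss(0.0, NOISE_SD)

# Averaging many readings shrinks the random error (the CLT effect)...
avg = statistics.fmean(reading() for _ in range(100_000))
print(avg)  # near 15.7, not 15.0: the bias does not average away

# ...but a fixed bias cancels entirely in anomalies (departures from a
# baseline mean), which is what trend analyses actually use:
warming = [0.01 * yr for yr in range(50)]          # a made-up 0.01 C/yr trend
biased = [TRUE_TEMP + w + BIAS for w in warming]   # biased, noise-free series
anoms = [t - statistics.fmean(biased[:30]) for t in biased]
true_anoms = [w - statistics.fmean(warming[:30]) for w in warming]
print(max(abs(a - b) for a, b in zip(anoms, true_anoms)))  # ~0 (float epsilon)
```

With independent noise alone, the standard error of the mean falls as 1/sqrt(n), which is how a 0.5 C instrument can support ~0.1 C anomaly precision – but only if the bias is fixed and the errors are independent.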

      • bender
        Posted Jan 16, 2010 at 3:39 PM | Permalink

        +/-0.5C instrumental error can turn into 0.1C anomaly precision quite easily. I am prepared to debate you on it. And win.

        • Neil Fisher
          Posted Jan 16, 2010 at 5:25 PM | Permalink

          I’m sure you can create such precision easily enough, but the real question is whether or not it is meaningful. And even if it is meaningful in the mathematical sense, is it meaningful in the physical sense? For example, is it reasonable to infer the heat content of the atmosphere from a temperature set that is collected almost solely in the boundary layer? Pielke Snr has argued cogently in the litchurchur that it is not. Then there’s humidity, cloud cover….

        • bender
          Posted Jan 17, 2010 at 12:18 AM | Permalink

          That’s not “the” real question. It’s *a* question, and one that Steve has repeatedly said he’s not interested in debating at CA.

        • Neil Fisher
          Posted Jan 17, 2010 at 4:34 PM | Permalink

          Whilst I certainly agree that Steve has previously said he doesn’t want to debate these points, I have to disagree with you about it being “*a* question” vs the “real question”. We can calculate many things: some simple, some esoteric, some pertinent, others not. We should not waste time calculating – and then arguing about – things that are not pertinent, IMO. If a particular metric does not answer the questions we are asking, what value is there in calculating it? In terms of answering our question, the answer is obviously “none”. Of course, that’s not to say such metrics don’t have value in answering questions we haven’t asked or even thought of yet, or that there are not interesting analyses to do on said data. The point being, we need to focus on what the data *can* tell us as much as on what it *does* tell us – trying to work out what’s happening is hard enough with relevant data, but damn near impossible with irrelevant data, even if it’s “all we have”. We should not be afraid to suggest that we do not have the data we require to answer the questions we are asking, and neither is it inappropriate to argue about what is needed to answer any particular question. Although this is probably not the right place to do so…

        • Posted Jan 18, 2010 at 9:05 AM | Permalink

          Well put, even if OT.

        • EdeF
          Posted Jan 17, 2010 at 9:32 AM | Permalink

          What are we trying to measure? I thought the main exercise was to measure the increased temperature over the last century of the instrumented age due to the linear increase in greenhouse gases, specifically CO2. Averaging 1000 stations spread out over the earth, mainly in urban areas, will include CO2 along with radiant heat, albedo changes, de-forestation or re-forestation, forest fires, volcanoes (underwater kind too), etc. But we can correct for the urban heat island effect. Really? Haven’t seen it done much. The best way to correct for UHI is to have a close-by rural or sea station with the same elevation, geography, etc. and compare the two. But if you had that data, you are better off just using the coastal, sea or rural data itself. The 1000 stations have great records of movements and changes that we can correct for. Huh? Like that LA station right next to the 101 freeway?

          The best way of seeing CO2 changes in the temperature data is to exclusively use only good rural, mountain, desert, seaside or ocean station data. But those stations have the most spotty records; they contain gaps…Joe the Post Office guy who read the temp was out sick for two weeks in ’54. I am sure those 1930s Soviet records in Yamal were a high priority. Still, it’s the best we have, so let’s try to make sense of it. From processing dozens of the temperature records I have observed that the non-urban temperature anomaly will be significantly lower than the full set including urban. It has still warmed, but not by as much. The question then is: is this really due to CO2 or some other forcing (the s-word)? Happy 2nd month anniversary of ClimateGate.

        • Posted Jan 17, 2010 at 10:08 AM | Permalink

          As there’s a tiny bit of friendly nit-picking going on on this thread, anyone else enjoy the familiar non-sequitur in

          2nd month anniversary

          Yes, on CA time-series is our speciality, year in, year out – and yet they have the gall to say we’re lunatics.

        • Jimchip
          Posted Jan 17, 2010 at 12:58 PM | Permalink

          Re: Richard Drake (Jan 17 10:08),

          There’s nothing wrong with Bender’s nitpicking. I hope people pick up on his little statement wrt temporal analysis vs. spatial analysis. There are terms being used frequently, like bias vs. precision. In the ol’ school days a discussion might start out as ‘accuracy vs. precision’. Inaccuracy = bias. Precision meant good reproducibility.

          Two different concepts. Errors in either can be corrected for after the fact – as long as a proper analysis has been done and one knows the cause, or source, of the errors. The most common spatial analogy used was the ol’ game of darts.

          Bullseye was the goal. One person: all over the place, they got some points. Step back and look at it, take an average… they were accurate! The average was on the bullseye. Recommendation: more practice, up the precision.

          Another person: Always, precisely, hitting at 9 o’clock. They got some points. Tight group. Inaccurate. Recommendation: “Move three paces to your right”. Recalibration.

          It’s way beyond that, now.
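
          The darts analogy above is easy to put numbers on. Here is a minimal, hypothetical simulation (the players, spreads and offsets are all invented) showing that averaging rescues the imprecise-but-unbiased player, while the precise-but-biased player needs a recalibration, not more throws:

```python
import random

random.seed(0)

def throw_darts(bias_x, spread, n=1000):
    """Simulate n throws at a bullseye at x = 0 (one dimension is
    enough): bias_x is a systematic offset (inaccuracy), spread is
    random scatter (imprecision)."""
    return [random.gauss(bias_x, spread) for _ in range(n)]

def mean(v):
    return sum(v) / len(v)

# Player A: accurate but imprecise -- no bias, lots of scatter.
a = throw_darts(bias_x=0.0, spread=3.0)

# Player B: precise but inaccurate -- tight group stuck off to the left.
b = throw_darts(bias_x=-3.0, spread=0.3)

print(mean(a))  # near 0: the average lands on the bullseye anyway
print(mean(b))  # near -3: averaging cannot remove a systematic offset
```

          The point being: more throws (more data) fix imprecision, but only a correction for the known cause – “move three paces to your right” – fixes bias.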

          Jeff Id recently posted a link and I’ll paraphrase even the question being asked: Anomaly first, mean second? or mean first, anomaly second? Seems nitpicky but these are the questions that need to be asked first. There’s even answers over there. One needs the raw data. If one gets ‘pre-anomalized’ or whatever then, maybe, that’s one bad step already.
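
          On the “anomaly first, mean second? or mean first, anomaly second?” question, a toy example (two invented stations, one of which stops reporting) shows why the order of operations matters once there are gaps in the record:

```python
# Toy example: two hypothetical stations with different absolute
# climates, no real trend, and the cold station dropping out halfway.
years = list(range(2000, 2010))
station_warm = {y: 20.0 for y in years}      # reports every year
station_cold = {y: 0.0 for y in years[:5]}   # stops reporting after 2004

def mean(v):
    return sum(v) / len(v)

# Mean first: average whatever absolute readings exist each year, then
# express the series as an anomaly vs its own 2000-2004 mean.
raw = [mean([s[y] for s in (station_warm, station_cold) if y in s])
       for y in years]
base = mean(raw[:5])
mean_first = [t - base for t in raw]

# Anomaly first: convert each station to anomalies vs its own
# 2000-2004 baseline, then average whichever anomalies exist.
def anomalies(station):
    b = mean([station[y] for y in years[:5] if y in station])
    return {y: station[y] - b for y in station}

a_warm, a_cold = anomalies(station_warm), anomalies(station_cold)
anom_first = [mean([a[y] for a in (a_warm, a_cold) if y in a])
              for y in years]

print(mean_first)  # spurious +10 C step when the cold station vanishes
print(anom_first)  # flat, as the (trendless) truth requires
```

          With complete records the two orderings agree; it is the gaps and the differing station climates that drive them apart – which is why receiving ‘pre-anomalized’ data means one processing choice has already been made for you.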

          Specifically, wrt to Hansen’s Y2K FOI release but also the Crutapes: IMO, they’re not being nitpicky, at all.

        • Posted Jan 17, 2010 at 3:44 PM | Permalink

          I’ve already expressed my warm appreciation of bender’s expert guidance in this thread. This was totally light-hearted. You can’t actually have a two month anniversary. It wasn’t important, it was a joke.

        • Jimchip
          Posted Jan 17, 2010 at 5:15 PM | Permalink

          Re: Richard Drake (Jan 17 15:44),

          Nitpicker = stickler. He really can be that way 🙂 Neat

        • EdeF
          Posted Jan 17, 2010 at 5:35 PM | Permalink

          Two month lunaversary.

        • Jimchip
          Posted Jan 17, 2010 at 8:56 PM | Permalink

          Re: EdeF (Jan 17 17:35),

          Dec 2009 was once in a blue moon. That’s about the time frame of a FOI request and finally getting the documents.

          Nitpicky would be luna=28 month=28,30,31… maybe not exactly “2” month 🙂

      • harold
        Posted Jan 16, 2010 at 4:50 PM | Permalink

        Of course it can go from 2C to .01C, given a large enough data set.

        • Dominic
          Posted Jan 17, 2010 at 4:25 AM | Permalink

          Re: harold (Jan 16 16:50), Thanks for the replies. To get to an error of 0.01C would need a very large data set, since even if the CLT is obeyed (it requires the errors in temperature readings to be independent and identically distributed, which is not the case in practice), going from an error of 0.5C to 0.01C would require on the order of (0.5/0.01)² = 2,500 readings – and we know that GISS no longer has this many stations in the dataset they use.

          If the CLT conditions are not obeyed, many more readings would be required to reduce the error to this size. (Also, there are constants of proportionality here which we do not know in advance).

          There is a simple experiment which can be done to test for this and I hope Steve will indulge me as I am veering OT here. However as I said before, I have tried to discuss this elsewhere but hit a brick wall. So here goes:

          1) Buy 2N thermometers: N standard mercury ones and N electronic ones, calibrated against the mercury ones, with say 0.01C accuracy (no idea if this is possible, but let’s assume it is).
          2) Distribute them in pairs around the world, and have one volunteer at each site do the daily readings for the mercury one.
          3) Each day the observer reports a temperature reading by eye to the nearest 0.5C. The accurate thermometer reports electronically. Do this for all N locations.
          4) Each day compute the average of the mercury thermometer readings and the average of the electronically generated temperatures. Call the difference the “daily error”.
          5) Repeat the process over several months and look at the standard deviation of the daily error.
          6) Repeat for N=100, 500, 1000, 2000 (actually, just do it for 2000 and look at subsets).

          This process will take into account observer error as well as the error due to the precision of the mercury thermometer.

          We should find that the average daily error should be close to zero (with some possible bias due to an imperfect calibration, or some parallax in the reading or something else – this bias may also move over time). However the standard deviation of the daily error over time should be much much less than 0.5C.

          Seems so simple there must be a paper on it out there somewhere?
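
          There may well be such a paper, but the reading-and-averaging part of the proposed experiment is cheap to rehearse in software. A rough Monte Carlo sketch, with assumed true temperatures drawn uniformly and eye-reading modelled as rounding to the nearest 0.5C:

```python
import random

random.seed(42)

def daily_error_sd(n_stations, n_days=200):
    """Std dev over n_days of the 'daily error': the mean of readings
    rounded to the nearest 0.5 C minus the mean of the true values."""
    errs = []
    for _ in range(n_days):
        true = [random.uniform(-10.0, 30.0) for _ in range(n_stations)]
        read = [round(t * 2) / 2 for t in true]  # eye-read to 0.5 C
        errs.append(sum(read) / n_stations - sum(true) / n_stations)
    m = sum(errs) / len(errs)
    return (sum((e - m) ** 2 for e in errs) / len(errs)) ** 0.5

for n in (100, 500, 2000):
    print(n, daily_error_sd(n))
```

          A single rounding error is uniform on (−0.25, 0.25), sd ≈ 0.14C, so if the stations’ errors really are independent the daily mean error shrinks like 0.14/√N – roughly 0.003C at N = 2000, “much much less than 0.5C” as predicted. Whether real observer errors are in fact independent and unbiased is, of course, the open question.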

        • Dominic
          Posted Jan 17, 2010 at 4:49 AM | Permalink

          Re: harold (Jan 16 16:50),

          Quick follow-up. This is not GISS but Jones at CRU talking about his global temperature data. Haven’t found anything at GISS yet.

          Assessing the accuracy of the resulting global and hemispheric temperature anomalies has always been of critical importance in the work of Jones et al. Annual values are approximately accurate to +/- 0.05°C (two standard errors) for the period since 1951. They are about four times as uncertain during the 1850s, with the accuracy improving gradually between 1860 and 1950, except for temporary deteriorations during data-sparse, wartime intervals. Estimating accuracy is far from a trivial task, as the individual grid boxes are not independent of each other and the accuracy of each grid box time series varies through time (although an employed variance adjustment has reduced this influence to a large extent). The issue is discussed extensively by Brohan et al. (2006), Folland et al. (2001a,b), and Jones et al. (1997). Brohan et al. (2006) and Folland et al. (2001a,b) extend discussion to the estimate of accuracy of trends in the global and hemispheric series, including the additional uncertainties related to homogeneity corrections.

          The global and hemispheric averages are now given to a precision of three decimal places to enable seasonal values to be calculated to ±0.01°C. The extra precision implies no greater accuracy than two decimal places.

          FYI – The referenced paper which I am now looking for is Folland, C.K., N.A. Rayner, S.J. Brown, T.M. Smith, S.S.P. Shen, D.E. Parker, I. Macadam, P.D. Jones, R.N. Jones, N. Nicholls, and D.M.H. Sexton, 2001a. Global temperature change and its uncertainties since 1861. Geophysical Research Letters 28, 2621-2624.

          Unfortunately I don’t have a subscription to GRL.

  34. Marcel Crok
    Posted Jan 15, 2010 at 10:06 AM | Permalink

    It’s a pity that the released e-mails end around August 15, 2007. We now read nothing about the release of NASA’s source code, which took place on Sep. 8; see

    Hansen Frees the Code

  35. Kevin
    Posted Jan 15, 2010 at 10:15 AM | Permalink

    Good work. It’s unfortunate that when you do good science, you are labelled a “pest”. Strange. Way to maintain good standards.

  36. Posted Jan 15, 2010 at 10:23 AM | Permalink

    I’ve read all 215 pages of NASA GISS emails at Judicial Watch. Thanks Steve McIntyre for finding the original error in 2007 and writing courteous emails with very specific and reasonable requests. James Hansen and others at GISS, in their internal emails and emails to friendly reporters, call you a “court jester” and question if you have “a light on upstairs?” Your light is BRILLIANT! The best court jesters let the King know the truth in a valuable way that didn’t get their heads cut off :^)

    Makiko Sato, the author of the email with the seven versions of the 1934 vs 1998 temperature anomaly data I graphed and mentioned by Jim Hansen in an email excerpt below, appears to be the innocent truth teller at GISS. She may turn out to be the heroine of this story! (Along with the hero “court jester”, Steve!)

    Jim Hansen writes [2007-08-10 at 11:59 -500]: “The appropriate response is to show the curves for U.S. and global temperatures before and after (before and after McIntyre’s correction). Makiko doubts that this is possible because the earlier result has been “thrown away”. We will never live this down if we give such a statement. … By the way, I think that we should save the results of the analyses at least once per year …”

    (If any of you downloaded my PowerPoint Show on Explaining Climategate (http://sites.google.com/site/bigira/climate-related-pps), I’ve just updated it with the info just made available yesterday, so you may want to download the newer version.)

    • WillR
      Posted Jan 15, 2010 at 11:08 AM | Permalink

      Re: Ira (Jan 15 10:23),

      That’s it — now I understand the “Court Jester” reference. They are always in the presence of royalty aren’t they?…

      Somehow I see this as pointing to the heart of the problem. I just can’t seem to put my finger on it yet…

    • Phillip Bratby
      Posted Jan 15, 2010 at 2:30 PM | Permalink

      That throw-away line of Hansen’s – “By the way, I think that we should save the results of the analyses at least once per year” – just about sums up the whole shoddy state of NASA/GISS science. They have no quality control, so the boss, in a throw-away statement, suggests they save things once a year.

      Truly unbelievable! No QA, no procedures, no archiving.

      Results therefore worthless.

      • WillR
        Posted Jan 15, 2010 at 4:59 PM | Permalink

        Re: Phillip Bratby (Jan 15 14:30),

        This is a problem with a lot of data collection systems. It’s not just CRU/NASA, “The Team”.

        They need statisticians on staff to help design experiments/data collection, and they need IT professionals who will safeguard and distribute the data.

        The data is paid for by tax dollars, mostly via government grants. That it should be freely and widely available is, I think, an issue for all countries to address. We could still end up with trillions of dollars spent on solutions looking for a problem.

  37. sbarron
    Posted Jan 15, 2010 at 10:38 AM | Permalink

    I think Hansen’s and Ruedy’s immediate insistence that this error does not affect the global average is misplaced. It clearly demonstrates their adherence to an ideology rather than the pursuit of good science.

    The temperature data was flawed. They admitted that right away. Fixing flawed data should be considered sacrosanct by every scientist, particularly scientists who work for the government, as these all do. That the issue of the global average even comes up in this discussion shows the ideological battle these guys feel they have to fight.

    Basically, they respond to Steve’s request with “Yes, the data was flawed. But that doesn’t mean my pet theory is wrong!” And near as I can tell, neither Steve’s email, nor his early blog entries on Hansen’s Y2k Error, attempt to use this error as proof of anything, other than sloppy science.

  38. James Chamberlain
    Posted Jan 15, 2010 at 11:25 AM | Permalink

    The strangest, most obvious thing to me when reading the mails is: when is the last time that Hansen did any science?

    He appears to be a PR manager at this point.

    • Anand Rajan KD
      Posted Jan 15, 2010 at 12:43 PM | Permalink

      Most lab ‘directors’ are like this at this stage of their careers. They generally spew generic wisdom to incoming post-docs and junior faculty, give inspiring talks to audiences outside their parent institutions. They claim they are taking care of the big picture when the devil is in the details.

      Notice how Hansen repeatedly claims that 1934 is warmer than 1998, if only by the merest fraction, but remains blissfully oblivious to the fact that, before the error was pointed out, 1998 was “warmer” than 1934 by an amount of more or less similar magnitude.

      • James Chamberlain
        Posted Jan 15, 2010 at 2:43 PM | Permalink

        I understand and agree and have seen what you say over and over in my science career, but most “good” or interested scientists still get their hands wet and help with the results.

        • CentralCoastRick
          Posted Jan 15, 2010 at 8:08 PM | Permalink

          I’m not commenting to defend Hansen – but his bio page at http://pubs.giss.nasa.gov/authors/jhansen.html lists 3 refereed publications his name is on in 2009 and one submitted. The published ones are downloadable there.

        • Craig Loehle
          Posted Jan 15, 2010 at 8:33 PM | Permalink

          If you take out the papers that are proclamations of climate doom, and the ones where he is 20th author (or 5th), there isn’t much there in the past 6 yrs or more.

  39. Bernie
    Posted Jan 15, 2010 at 12:13 PM | Permalink

    I just read through the emails. Two things I have not seen commented on to date are (a) the strange relationship and/or almost total absence of emails between GISS and Tom Karl’s group and (b) James Hansen’s odd and largely dismissive response to a question from Andy Revkin (page 115 in the PDF file).

    On August 23, 2007 after the initial flurry of action and response about the Y2K issue, Andy Revkin asked James Hansen a very pointed question:
    “finally, do you agree that generally we (globally) should be doing a lot more to improve temperature tracking? I never, til today, visited http://www.surfacestations.org and found it quite amazing. If our stations are that shoddy, what’s it like in Mongolia?”

    On the same day, James Hansen replied to this question:
    “Of course it is good to improve the station data. Temperature is an absolute measurement, however, so errors over time are not cumulative. When there are several thousand stations it is easy to find what seem like a huge number of stations with problems.”

    This is clearly non-responsive and – like the reaction to the Y2K error – suggests either no concern about, or tremendous confidence in, the quality and integrity of the underlying temperature data. Hansen and his team do not see it as an issue. This fundamental lack of curiosity is quite amazing to me.

    I personally would have expected to see an effort to ensure that the data was indeed “clean” since another such error could prove devastating.

    • Kenneth Fritsch
      Posted Jan 15, 2010 at 1:23 PM | Permalink

      And did Andy Revkin ever write a column about being stonewalled by Hansen on this question? I might have missed that day.

      • Bernie
        Posted Jan 15, 2010 at 1:39 PM | Permalink

        Kenneth:
        I do not recall how Andy wrote this up. He seems to have dropped this question in his follow-up email of August 24. However, my point is that James Hansen appears oblivious to the issue. The whole Y2K issue is driven by a response to correcting errors in the old records. It seems as though Tom Karl was operating a separate fiefdom and the GISS folks do not want to know.

      • Posted Jan 16, 2010 at 5:29 AM | Permalink

        We now know the precise day Revkin first visited surfacestations.org and was amazed. Did even that significant fact ever make it into the NYT?

        If not, Hansen, by showing no interest, was in effect snuffing out the intelligent interest and genuine journalistic instincts of the younger man. Signals can be very subtle when such asymmetries of power and prestige are involved.

        Either way, hats off to the surfacestations effort. It’s the underlying intellectual battle that matters, as Richard North argued eloquently on Monday in Awaiting the Berlin Wall moment, prompted by Brian Micklethwait.

        • Jimchip
          Posted Jan 16, 2010 at 7:54 AM | Permalink

          Re: Richard Drake (Jan 16 05:29),

          Revkin had biases but journalists, too, need to trust the ‘experts’ and then report on what they say. He probably did report the way he did by being selective. He may have, finally, woken up after Michael Schlesinger threatened, “I sense that you are about to experience the ‘Big Cutoff’ from those of us who believe we can no longer trust you.”

          I think he’s teaching now.

        • Bernie
          Posted Jan 16, 2010 at 9:42 AM | Permalink

          Revkin is teaching but still blogs at dotearth. Confirmation bias affects us all – as can be seen by the over-reaction to pieces of evidence that support a particular position. The recent Coleman/Aleo/Smith piece is a good example.
          The issue revealed in the emails is not really Revkin’s lack of follow-through but the strange lack of critical thinking by Hansen wrt the quality of the underlying temperature data. His most recent pronouncement in response to the Coleman et al piece is hardly surprising.

    • Navy Bob
      Posted Jan 15, 2010 at 1:28 PM | Permalink

      Mongolian stations may be superior to ours. I vaguely recall a paper (I think by a Pielke) analyzing Mongolian weather stations. From the pictures, their locations at least looked great compared to our barbecue-, rooftop- and air-conditioner-infested sites. Although I recall the paper being critical of the Mongolian stations, most, if not all, seemed to be in grassy fields (maybe the only topography available in Mongolia) away from heat-generating contrivances and on more or less level ground. To my untutored eye, they looked like CRN #1s.

    • Jimchip
      Posted Jan 16, 2010 at 7:47 AM | Permalink

      Re: Bernie (Jan 15 12:13),

      “Of course it is good to improve the station data. Temperature is an absolute measurement, however, so errors over time are not cumulative. When there are several thousand stations it is easy to find what seem like a huge number of stations with problems.”

      Might as well parse Hansen: “so errors over time are not cumulative”. That is probably true in the short term (cumulative: building up over time); however, they could be using a constant, high baseline to launch from. My take is they keep the stations high, and GISS takes care of the ‘necessary’ cumulative increases. Also, since it doesn’t affect the global mean, who cares? Over the longer term, there can be cumulative errors. Even the definition of rural vs urban can change. They don’t want to pay attention to that.

      “When there are several thousand stations it is easy to find what seem like a huge number of stations with problems” – except Anthony Watts found problems at a huge percentage of stations. Hansen is definitely fudging here. He’s assuming ‘math illiteracy’: because there are thousands of stations, the problem ones only seem like a huge number, implying a ‘small’ percentage, when it’s not.

      • Bernie
        Posted Jan 16, 2010 at 10:00 AM | Permalink

        For me, most of this comes back to control of temperature trends that are due to other factors besides GHGs. This is the essence of Pielke Snr’s position. Other human driven factors influence temperature and these really need to be accounted for if you want to propose a strong CO2 effect. This is also what McKitrick and Michaels were trying to illustrate.
        What is disconcerting is that Hansen seems to see all these other issues as of marginal empirical relevance compared to his “Grand GHG Theory”. The quality of stations in Mongolia, the limited number of rural stations in Latin America and Africa, and the incomplete SST data can all be ignored if CO2 is the dominant forcing.

        • Jimchip
          Posted Jan 16, 2010 at 12:48 PM | Permalink

          Re: Bernie (Jan 16 10:00),

          I’ll add Christy, too. I surmise Pielke, Sr. and Christy disagree on some important hypotheses but they still co-author papers. I’ve even heard Mosher called a ‘lukewarmist’; I think he’s hot, being an author and all that. End Digression.

          My one point would be re: marginal empirical relevance compared to his “Grand GHG Theory”

          It’s not Grand.., It’s Global. Less noise in the global; doesn’t affect global, so who cares; global is all that really matters; wah, wah, wah. the Global Computer Model says: “I am of such low resolution wah wah wah” that anything goes. I was looking at some GCM output, wondering how they gridded some places.

          I stared, and stared, and stared, and then, I started seeing unicorns.

        • bender
          Posted Jan 16, 2010 at 1:26 PM | Permalink

          1. mosher not only calls himself a lukewarmer, he’s the original self-described lukewarmer
          2. GCM = general circulation model

          Whether “errors over time” are “cumulative” or not is a question. I don’t see any camp winning that debate right now. The argument of Hansen et al is that convective anomalies don’t matter in the long run; over some sufficiently long time-scale the radiative forcings dominate. They may be right. I don’t see that it helps to misread or misrepresent what Hansen, Schmidt et al think about the GCMs.

          Also, Hansen’s point about station number is well taken. Related: I have seen many times this foolish argument that GMT can’t be expressed to anything more precise than a half degree because that’s the fundamental “instrumental error”. This belief ignores the CLT. Related: if you have a hundred observers read a single thermometer and record the observed temp, the “instrumental” error of 0.5C is no longer the relevant measure of imprecision for the temperature estimate. There is a misnomer at play here. The 0.5C error is not due to the “instrument”. It’s due to the observer interacting with the instrument. More observers yields a more precise reading. Similarly, if you have a thousand thermometers, the “instrumental error” on each reading may be 0.5C, but the error of the ensemble is much lower. Exactly how much lower is not known without knowing exactly how much the observer contributes to the observer-instrument imprecision. For a set of well-calibrated thermometers, it might be 0.1C, or even 0.01C. It depends on how many observers there are and how skilled they are at mentally constructing a grid of ticks between the instrumental ticks and interpolating from that image. Has such a basic question ever been addressed in the literature? I don’t know. But there are a lot of boneheaded arguments out there being made by people who would have you believe GMT is not as precise as it is. Just supposing a precision of 0.1C seems totally reasonable for 1000+ well-calibrated thermometers.

          The substantive issue is corrections made to data to improve observation accuracy/representativeness. e.g. Station moves and UHI. I would have hoped this thread would focus on those kinds of issues. They’re important.
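
          The hundred-observers point can be sketched in a few lines. Assuming – and it is only an assumption, the very one under dispute – that each observer’s interpolation error is independent and unbiased, the average of many readings of the same thermometer is far more precise than any single reading (the temperature and error range below are invented):

```python
import random

random.seed(1)

TRUE_TEMP = 17.3  # hypothetical true temperature at one station

def observer_reading(t):
    """One eye-read of an analog thermometer: the true value plus a
    random interpolation error of up to a quarter degree, assumed
    independent and unbiased across observers."""
    return t + random.uniform(-0.25, 0.25)

def mean_of(n_observers):
    readings = [observer_reading(TRUE_TEMP) for _ in range(n_observers)]
    return sum(readings) / n_observers

for n in (1, 10, 100, 1000):
    print(n, abs(mean_of(n) - TRUE_TEMP))  # error of the averaged reading
```

          If instead every observer shared the same habit – a parallax bias, say – no number of observers would remove it; that is the accuracy/precision distinction from the darts discussion again.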

        • Bernie
          Posted Jan 16, 2010 at 1:45 PM | Permalink

          bender:
          Nicely summarized. But the way you frame the argument it sounds like Hansen does not need any reliable measure of global temperature? Surely in order to test his theory about the impact of CO2 on climate, he needs an accurate way of assessing the climate sensitivity – which requires the partialling out of other factors that impact radiative forcing and/or temperature?

        • Jimchip
          Posted Jan 16, 2010 at 2:02 PM | Permalink

          Re: bender (Jan 16 13:26),

          Skip 1. wrt to 2. I was not trying to confuse the issue of the acronym GCM. I was careful about my acronym (by letting my greasemonkey bold my letters).

          Bender Bender (oops, don’t want to go to kansas). Since you wish, I rename my GCM to GUT: Global Unicorn Theory. OK? And I figured out a static experiment but ya better watch out: does GMT mean Global Mean Temp. or that observatory in Greenwich, Conn.? (I included UTC…)

          (sheesh, he can be such a stickler, sometimes)

        • Posted Jan 16, 2010 at 10:36 PM | Permalink

          Skip 1. wrt to 2. I was not trying to confuse the issue of the acronym GCM. I was careful about my acronym (by letting my greasemonkey bold my letters).

          Bender Bender (oops, don’t want to go to kansas). Since you wish, I rename my GCM to GUT: Global Unicorn Theory. OK? And I figured out a static experiment but ya better watch out: does GMT mean Global Mean Temp. or that observatory in Greenwich, Conn.? (I included UTC…)

          GMT can mean either, and neither. The difference between your GCM acronym and your GMT analogy is that there is no such thing as a “global computer model” that I’m aware of.

          If you’re just being silly, then never mind.

        • Jimchip
          Posted Jan 17, 2010 at 12:03 AM | Permalink

          Re: Jeff Alberts (Jan 16 22:36),

          Just briefly, I do get a little silly talking to bender (he has a way), however I made up GCM (I knew it ‘conflicted’) and GMT was a segue to my “static” experiment. Just so you know, I’ll leave it there.

        • DeWitt Payne
          Posted Jan 16, 2010 at 2:28 PM | Permalink

          Re: bender (Jan 16 13:26),

          The substantive issue is corrections made to data to improve observation accuracy/representativeness. e.g. Station moves and UHI. I would have hoped this thread would focus on those kinds of issues. They’re important.

          In analytical chemistry there’s a rule of thumb that 80% of the error in a measurement is due to sampling problems. The rest comes from sample prep, calibration and instrument noise. The equivalent in global temperature measurement would be siting problems, IMO. It’s not at all clear to me that this source of error has been included in the overall error estimates.

          Also, corrections for systematic error, while they may improve accuracy, also degrade precision, because the corrections themselves are not error free. The analogy from my field, emission spectroscopy, would be line overlap corrections.
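
          That trade-off is just variances adding. A hypothetical sketch, with invented numbers for the bias, the noise and the correction uncertainty:

```python
import random

random.seed(7)

N = 100_000
TRUE_BIAS = 1.0  # a real systematic error, e.g. from a station move
SD_NOISE = 0.2   # random measurement noise
SD_CORR = 0.3    # uncertainty in the *estimated* correction

def sd(v):
    m = sum(v) / len(v)
    return (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5

# Errors of the raw readings: biased but fairly tight.
raw_err = [TRUE_BIAS + random.gauss(0, SD_NOISE) for _ in range(N)]

# Subtract an estimated correction, which is itself uncertain:
corr_err = [e - random.gauss(TRUE_BIAS, SD_CORR) for e in raw_err]

print(sum(raw_err) / N, sd(raw_err))    # biased (~1.0) but precise (~0.2)
print(sum(corr_err) / N, sd(corr_err))  # unbiased (~0) but less precise
```

          The corrected series is centred on the truth, but its scatter grows to √(0.2² + 0.3²) ≈ 0.36 – accuracy bought at the price of precision, exactly as described.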

        • Phil
          Posted Jan 16, 2010 at 3:51 PM | Permalink

          Re: DeWitt Payne (Jan 16 14:28), Isn’t there also an assumption that samples are to be chosen randomly or that they are to be representative of the population? If what Chiefio is finding is true, then the assumption that stations are chosen randomly or so that they are representative may be violated.

        • Carrick
          Posted Jan 17, 2010 at 12:31 AM | Permalink

          DeWitt, I happen to think that the biggest source of error is the coarse sampling of the global temperature field. I believe this introduces noise into the global mean temperature, and probably a time-dependent bias (which in turn is related to the change in the total number and spatial distribution of stations).

        • harold
          Posted Jan 16, 2010 at 3:29 PM | Permalink

          Count me as a bonehead. I used to deal with metrology all the time. If I needed accurate temperature measurements, I started by establishing NIST traceability and calibration methods. Calibration methods had to be checked to see how sensitive the instrument was to changes in calibration procedures. Tests had to be done to quantify how sensitive the instrument was to how it was used in taking temperature readings and, in the case of analog instruments, how the instrument was actually read by people. It turns out that people tend to do things in a few different ways at each of these steps that can cause bias, artificially low variation, and artificially high variation. Six months to two years is what I would expect a program like this to take, using 1-4 people, for a single type of temperature measurement instrument. Normality isn’t necessarily the norm for the variation that is found. Accurate temperature measurement is not as trivial as most people think.

          I haven’t seen any of this done in the peer reviewed literature, and I’ll believe it exists only when I see it. In the meantime, I consider the data to be data with unknown calibration status, underlying biases, etc. Perhaps not an important issue, but a very basic one.

        • harold
          Posted Jan 16, 2010 at 3:37 PM | Permalink

          On your last point, corrections being more important, I agree it’s likely they are. Certainly the information is theoretically more likely to be available, so it should be a more fruitful area to pursue.

        • bender
          Posted Jan 16, 2010 at 4:51 PM | Permalink

          Well, it’s important if boneheaded arguments to the contrary elevate it to that level of importance!
          .
          Thermometers may be miscalibrated from the start and never re-calibrated through their lifetime. This is a real source of inaccuracy; I don’t mean to suggest instruments do not vary in their manufacture. But if the miscalibrations/inaccuracies are random (say, due to random manufacturing errors), then they vanish in the limit, via the CLT. High individual error can thus lead to arbitrarily high collective precision, esp. if the inaccuracies are iid with mean 0. “Refutations” to the contrary are bogus.
          .
          This is probably better discussed at surfacestations. Maybe there are whole threads devoted to the topic.

        • Doug Badgero
          Posted Jan 17, 2010 at 10:23 PM | Permalink

          Bender,

          Isn’t it possible that inaccuracies are not random? In particular, that they may be caused by common mode time dependent effects and these effects predominately lead to low, or high, readings? Do we know if this is the case here? Finally, are the surface station thermometers on a calibration schedule or are they put in service and never touched again?

        • Bernie
          Posted Jan 16, 2010 at 1:32 PM | Permalink

          Jim:
          The “Grand” was an allusion to a class of theorists who develop theories with scant attention to the need to specify and account for the details that are associated with the phenomena. They proliferated before Bacon and are generally associated with emerging but poorly developed disciplines that have weak measures.

        • Jimchip
          Posted Jan 16, 2010 at 2:15 PM | Permalink

          Re: Bernie (Jan 16 13:32),

          Not ‘Grand Illusion’? Meh. Of course, Arrhenius’ GHG theory could have taken off except for the cold spell that occurred at the time. Maybe that’s just an urban legend that needs a correction.

    • WillR
      Posted Jan 16, 2010 at 10:05 AM | Permalink

      Re: Bernie (Jan 15 12:13),

      Bernie:

      If the discussion is restricted to a point-in-time “global average temperature” – Hansen is right (imo). However, if we are talking about using that data to create a dynamic model that predicts temperature trends, it’s another issue, and Hansen is wrong (imo). Again, one error means nothing; a bias or series of related errors should be seen by a good model as a trend – and it will (and should) take the predictions in that direction. This is hand waving, I know – but this blog is about stats and presentation of data etc., so I am just trying to express an opinion, not offer proof. I would love to discuss dynamic models, but I don’t think this is the place. So for point data – world temperature this year – he’s right; use the data for predictive moving (time) models – I say he’s wrong – the mistakes are critical.

      I only wanted to point out how easy it is to make a statement like Hansen made – clarify later – and then say “I was telling the truth”, while having misled many people as to the effects of the error. So I agree with you – his statement was wrong.

      • Bernie
        Posted Jan 16, 2010 at 11:35 AM | Permalink

        WillR:
        Unless one is clear about what one is measuring Hansen’s statement is clearly at least incomplete. When it comes to temperature measurements in areas impacted by human or natural activities that are cumulative in their effects his statement is simply wrong – the “thing” being measured at time t2 is a different “thing” than is being measured at time t1. This is the reason why there are standards for choosing and positioning instruments and measuring temperature – to hold other factors standard. Put simply, 100 years ago we measured the air temperature 4 or 5′ above the local dominant unshaded natural ground cover. Today we measure the air temperature 4 or 5′ above the ground that may or may not be effected by ongoing human activities.

        • WillR
          Posted Jan 16, 2010 at 12:24 PM | Permalink

          Re: Bernie (Jan 16 11:35),

          Bernie:

          I do agree with you. I’m pretty good at designing dynamic, predictive models (and not academic ones) — at least imo. But any model I designed, used under the circumstances you describe, would just get wronger, and wronger and… — because of the biased data, and the changes in collection methods, the Anthony Watts factors. That type of error is cumulative — and it takes more than a few sentences to show that you (and I) are right … as in “follow the trend”. Hopefully people can see through his comments.

          Please don’t think I disagree — collecting unbiased accurate data is difficult — so is dynamic modeling. I guess I’ve lost my trust in some climate scientists after CRU and NASA revelations.

        • bender
          Posted Jan 16, 2010 at 5:01 PM | Permalink

          If you measure a temperature with inaccuracy e at time t and then again at time t+1 with the same inaccuracy e, then when you take the difference, the inaccuracies cancel.
          .
          Random sources of imprecision do not cancel. But systematic sources of bias do.
          .
          But what happens through time is not the same thing as what happens through space (with replicate instruments/observers). In part because replication is different from a repeated measure. In part because the arithmetic & statistics of spatial averaging is different from the arithmetic & statistics of trend/differencing.
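          A minimal numerical sketch of this point (all numbers hypothetical): a constant instrument bias drops out of a temperature difference, while random imprecision does not.

```python
import random

random.seed(0)

# Hypothetical true temperatures at time t and t+1, and a constant instrument bias
true_t1, true_t2 = 14.0, 14.5
bias = 0.8

# Systematic bias only: it cancels when differencing repeated measures
m1, m2 = true_t1 + bias, true_t2 + bias
trend_from_biased = m2 - m1          # 0.5 (up to float rounding), the true change

# Random imprecision does not cancel: a single differenced pair misses the true change
n1 = true_t1 + bias + random.gauss(0, 0.3)
n2 = true_t2 + bias + random.gauss(0, 0.3)
trend_from_noisy = n2 - n1           # generally not equal to 0.5

print(round(trend_from_biased, 3), round(trend_from_noisy, 3))
```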

        • Carrick
          Posted Jan 17, 2010 at 12:45 AM | Permalink

          bender:

          Random sources of imprecision do not cancel. But systematic sources of bias do.

          I agree with you…in principle, systematic sources of errors can be nearly canceled out (to first order). The question at hand, though, is whether in practice the codes are careful enough to prevent changes in systematic bias over time from creeping back into the measured quantities.

          And for myself, while I think they are sufficient, I don’t give these codes many points for innovation or sophistication. Really a new code is needed that uses more modern methods for avoiding e.g. systematic bias effects from temporal shifts in geographical sampling to affect the GMT.

        • Posted Jan 16, 2010 at 10:31 PM | Permalink

          Today we measure the air temperature 4 or 5′ above the ground that may or may not be effected by ongoing human activities.

          “Affected”, not “effected”. It’s just as easy to get these things right…

        • Bernie
          Posted Jan 17, 2010 at 9:19 AM | Permalink

          Jeff:
          Thanks

        • harold
          Posted Jan 16, 2010 at 4:06 PM | Permalink

          I’m not sure the thing to be measured is all that well defined. I think how temperature is sampled and measured is very closely related to the question to be answered. If the temperatures somewhat remote from population centers show one thing and the population centers show a slightly different thing, from a climate perspective I’d probably exclude the population centers completely and treat population centers as their own microclimate. It seems a lot of the studies are geared toward answering the question “What numbers do I need to put in my model?”, but the results are used as if they answer very different questions.

          In any event, I’m not sure that mean global temperature is a particularly useful metric – the question to be answered seems underspecified to me.

        • bender
          Posted Jan 16, 2010 at 5:03 PM | Permalink

          I’m not sure that mean global temperature is a particularly useful metric

          Do you know how many have been snipped in the past for this very unproductive line of argumentation? My guess is more than three dozen, less than a hundred.

        • Neil Fisher
          Posted Jan 16, 2010 at 5:46 PM | Permalink

          Yes, because it’s off-topic. I’ve been waiting for a thread where it’s on-topic, but I haven’t seen one yet – alas!

        • Posted Jan 16, 2010 at 7:35 PM | Permalink

          Thanks bender for the excellent summary of GMT issues in this thread – and the ground rules here, in effect, because Climate Audit doesn’t buy Essex & McKitrick’s argument in Taken By Storm against the validity of GMT, or at least it thinks such arguments are unproductive, as you say.

          A couple of very basic questions. Does what Steve said on the subject back in 2005 still reflect his view?

          In notes in my private wiki (I know, it’s sad) I’ve moved from the term GMT to GATA, which I think is more precise – globally averaged temperature anomaly. I picked that up off Richard Lindzen, I think in a recent presentation in Washington. It helped me to begin to get a grip on the subject. But is the term used more widely than by Lindzen? Is it interchangeable with GMT?

          Thanks for pointing to station moves and the urban heat island effect as what we should be putting most of our focus on. As I said already, very helpful.

        • Harold
          Posted Jan 16, 2010 at 11:31 PM | Permalink

          You’re obviously right – nothing to do with auditing at all.

          Did you see this post?

          Punxatawney Phil
          Posted Jan 16, 2010 at 2:33 AM

      • Jimchip
        Posted Jan 16, 2010 at 1:19 PM | Permalink

        Re: WillR (Jan 16 10:05),

        I have a static experiment… I’ve heard that a neat time to catch Tmin is just before sunrise, some places. Tmax is 12 hours later. All agree to really do it at the same time relative to UTC/GMT (in ‘honor’ of UEA). There’s problems with “sunrise”, of course, some places, but…same UTC, 12 hour intervals, don’t even think Tmin/Tmax. Take the snapshot. Repeat, ad infinitum.

        Analysis: Go back before movies, start flipping the snapshots. It’s like those ‘once a day’ plant growth movies. Then, maybe, I’ll want to take an average of two, distant, temperatures. Global T(ave), 12 hr, same time, might say something.

        Dynamically-speaking, if it’s not at the same time then those temperatures and pressures have started to move and average=unicorn. Just mo. Like you said.

        • WillR
          Posted Jan 16, 2010 at 1:38 PM | Permalink

          Re: Jimchip (Jan 16 13:19),

          JimChip:

          Static? Uh-huh! 🙂

          I like your experimental design though — maybe you could apply to NASA. It would be no worse than some efforts… 🙂 That’s not fair of me though — I know they do a lot of good work.

  40. boballab
    Posted Jan 15, 2010 at 12:29 PM | Permalink

    Well here is a scary thought. First Willis E emailed NCDC back in Dec about Darwin. Dr. Peterson responded back to him with a very long email. Unlike what has been seen in the Climategate emails, Dr. Peterson was very helpful and forthcoming. At the time of the exchange Dr. Peterson couldn’t look into why Darwin was the way it was (it was on Dec 23rd) but he did pass on some very new information that I believe some may be interested in, and it does bear on the changing of temperatures in the past.

    Not too long ago NCDC came up with a new homogenization method and applied it to the USHCN dataset. Well, they are in the process of applying it to the GHCN dataset. This means a re-compute from GISS.

    What is very encouraging from Dr. Peterson’s email is that NCDC is going to release the computer code and the intermediate files for this.

    We currently expect to release the new version of GHCN in February or March along with all the processing software and intermediate files which will dramatically increase the transparency of our process and make the job of people like you who evaluate and try to duplicate surface temperature data processing much easier.

    Darwin Zero Before and After

    There is a lot more in that email and I encourage anyone that is interested to go read the full email at WUWT. It is in one of the last comments in the thread.

  41. thefordprefect
    Posted Jan 15, 2010 at 12:37 PM | Permalink

    The McIntyre “I’m banned” from GISS reality check.

    On about May 16, around 10:30 or 11:00 p.m., as I was getting ready to leave GISS for the night, I belatedly checked the error logs on the two web servers and discovered that there were several thousand errors in the log on Web2. On a normal day there would be about 500. The errors in question were all for addresses which didn’t exist in either the CGI area or in the “work space” area for the GISTEMP station data script. Further investigation revealed that someone had been firing off requests to Web2 since about 2:00 that afternoon for the station data, and by the time I looked into the situation there had been at least 16,000 requests. Perhaps half of these had gone to addresses in the CGI directory, which means activating CGI scripts to extract data, etc.

    The identity of the computer making the requests was consistent, and as best I recall was something in the domain of Rogers Communications, a Canadian phone company and ISP. Plainly this activity was from an “automated” agent, which in rough parlance is usually called a “robot”. Many robots have legitimate purposes, e.g. search engines such as Google or Yahoo, but others do not (spambots), and others one just doesn’t know.

    As the robot on May 16 came from a generic ISP address rather than, say, an academic address, and further because its “user-agent” tag provided no further information about who was running it, and _also_ because the GISS websites have “robots.txt” files which instruct all well-behaved web robots to stay out of the CGI directories, I cut off access from the ISP in question to the websites on Web2. The next day I received e-mail from McIntyre asking what was up. He did not identify himself or on whose behalf he was acting.

    At some point Reto got involved in the communications, and he must have mentioned to Jim what was up. Later on Reto indicated to me that Jim had said to go ahead and re-grant McIntyre access to the material. I do not know if at any point McIntyre actually asked Jim or Reto if it was possible to obtain the GISS copy of the station data in a single or small number of files. All I know is that my first contact with him came because he was blasting umpteen thousand requests at the webservers.

    I have no idea how much traffic McIntyre’s website gets, and I don’t know that I have ever even looked at it. His tone in his e-mail was on the arrogant side, so I had no desire to prolong communication with him any longer than was necessary.

    • Keith W.
      Posted Jan 15, 2010 at 1:38 PM | Permalink

      Ford, where are you finding this? I searched the PDF files for various keywords from your report here, and found no matches. No Rogers, no CGI.

      Plus, the only time I can find Steve saying anything about being blocked from GISS was in January of 2009, not May. Here’s the post from then.

      NASA GISS Withdraws Access Blocking

      • Keith W.
        Posted Jan 15, 2010 at 1:47 PM | Permalink

        Found it, never mind.

    • Steve McIntyre
      Posted Jan 15, 2010 at 2:20 PM | Permalink

      Ford, surely you should realize by now that climate scientists frequently ratchet up comments from what was originally said and then take offence at the ratcheted-up comments. Here is the post in which I reported the original incident: https://climateaudit.org/2007/05/17/giss-blocks-data-access/. Please tell me what you find offensive or inappropriate about my characterization of the incident. I think that my comments are sensible.

      Just because Schmunk says something doesn’t mean it’s true. For example, his timeline is incorrect on several particulars. Our first contact was late May 16 – not the next day. I identified myself and said that I was downloading the data for legitimate research purposes. I did ask them for a single file on May 17 as an alternative to scraping the website; they refused and allowed me to continue scraping. My original scraping was on a Sunday and was blocked late Sunday evening. The scraping did not interfere with service – we tested it at the time.

      My account at the time included the relevant emails and documentation, and you should compare the contemporary account with Schmunk’s account.

      • thefordprefect
        Posted Jan 15, 2010 at 3:40 PM | Permalink

        schmunk?

        In the post I have quoted Schmunk is in the CC the real name has been redacted.

        Your blog basically should say I was blocked because the access exceeded a limit. It was eventually restored.

        After a big song and dance about email exchanges you say:

        I have no particular objection to the webmaster blocking access until he was assured that the inquiry was legitimate or even that the webmaster referred the matter to his bosses. I also have no objection to how long GISS took to remove the block. If all climate data access issues were resolved this quickly, it would be great. Reasonable people can differ about whether they would have been so responsive in the absence of blog publicity. I happen to think that the publicity to the issue facilitated resolution of the matter, but I can’t prove that they wouldn’t have resolved it anyway. On the other hand, I don’t think that any of my actions were unreasonable.

        But your actions were unreasonable. Any webmaster must protect their website from attack – you must agree with this?

        As a matter of interest, I managed to get myself excluded from GISS through downloading data via Excel. It is not just you.

        • Posted Jan 15, 2010 at 4:27 PM | Permalink

          Your blog basically should say I was blocked because the access exceeded a limit

          What was the limit? Was it defined and published?

        • WillR
          Posted Jan 15, 2010 at 4:54 PM | Permalink

          Re: TAG (Jan 15 16:27),

          I checked.

          The webmaster did an arbitrary cut-off — that is clearly stated in the PDF file. So when I checked that I assumed the rest of the post was wrong.

        • jim edwards
          Posted Jan 15, 2010 at 5:34 PM | Permalink

          fordprefect:

          If the NASA site existed so that people could download data, how is it an “unreasonable” “attack” to download data from the site ?

          That’s using the site for its intended purpose.

          The NASA administrator didn’t say that site performance was compromised, only that traffic [measured by errors] was higher than normal and Steve was using a “scraping” technique that NASA generally doesn’t allow. Even though there is no evidence of site degradation (and none when we tested), one of the NASA emails prepared for press relations asserts that I caused site degradation.

          The fact that NASA’s initial action may have been prudent / reasonable doesn’t mean that Steve’s action was unreasonable.

          Steve: My action obviously wasn’t unreasonable since NASA consented to my continuing.

  42. Posted Jan 15, 2010 at 12:50 PM | Permalink

    I’ve got a plot from another blog on the data in these emails. It shows GISS’s gradual adjustment of 1998 upward and 1934 downward over time.

    1934 – 1998 Gissmatic

  43. thefordprefect
    Posted Jan 15, 2010 at 12:55 PM | Permalink

    The warmest temperature (the bulk of the hundreds of pages)

    we’re talking 0.01 degree here remember – ludicrous

    James Hansen
    Subject: Re: just to be sure..
    Date: Thu, 23 Aug 2007 20:51:59 +0200 (14:51 EDT)
    We can add an uncertainty, indeed we already include a bar at several points on our temperature curve, but we note that it only includes the largest source of uncertainty in the temperature change (incomplete spatial coverage).
    As far as I know we do not make such a list. We don’t like such lists, because the results are not significant and are certain to differ from one group to another. It is generally the media that makes a list. We look for a new record high, but note that it is a virtual tie if the difference is small.
    Just look at our published paper. It has 1934 as the warmest year, by an insignificant amount, with 1998 second. The same result that we have now. This ranking was not affected by the flaw in post 2000 data.
    Of course it is good to improve the station data. Temperature is an absolute measurement, however, so errors over time are not cumulative. When there are several thousand stations it is easy to find what seem like a huge number of stations with problems

    • Kenneth Fritsch
      Posted Jan 15, 2010 at 1:29 PM | Permalink

      TFP, I hope you’re not attempting to emulate Hansen by changing the subject here. It is about correcting sloppy work, a lack of transparency, and what that means for the other aspects of temperature data handling – and further, the attitude of the NASA people towards striving for those goals.

  44. Gord Richens
    Posted Jan 15, 2010 at 3:59 PM | Permalink

    Steve expends a great deal of effort to cull/discourage discussions that impute motive and policy. Insufficient data, fuzzy parameters and the existence of far more interesting questions to explore are reasons I can think of for his doing so.

    (Sunstein said government agents) “might enter chat rooms, online social networks, or even real-space groups…” Perhaps those trying to drag motive into the discourse here should be looked upon with suspicion. Whoops.

    • David L. Hagen
      Posted Jan 15, 2010 at 4:04 PM | Permalink

      Understand – that’s why I thought a “peer reviewed academic” paper against “deniers” was such a chuckle! Apologies if it was too much tongue in cheek.

      • Punxatawney Phil
        Posted Jan 16, 2010 at 2:33 AM | Permalink

        I was really stressed by hate mail from the Hockey Team calling me a warming denier and climate criminal, which they told me was VERY illegal.
        But now I shall speak my mind freely on February 2. I won’t fudge the data!
        Cheers, fellows- all the best to you!!

        • harold
          Posted Jan 16, 2010 at 4:59 PM | Permalink

          interesting comment

        • harold
          Posted Jan 16, 2010 at 5:02 PM | Permalink

          meant to reply to Punxatawney a couple messages above. Maybe it’ll be a short winter.

    • justbeau
      Posted Jan 16, 2010 at 9:04 AM | Permalink

      I can see the value of excluding motive and focusing on data and its interpretation. It’s a good approach, in general. Too many people focus only on guessing motives and not on carefully looking at actual data.

      At the same time, when the topic of a thread is NASA emails, it seems salient context to bear in mind that the scientists involved have professional responsibilities and codes of conduct. They also work within an organization and at the top of NASA-Goddard sits Dr. Jim Hansen. Gavin seems to work closely with Jim. NASA emails suggest NASA viewed Real Climate, though a non-NASA web site, as their favored way to communicate their views, maybe because it was against rules to communicate their views via NASA. And Dr. Hansen does have oft and powerfully expressed views about what the data show. His workers well know his expectations.

      • justbeau
        Posted Jan 16, 2010 at 1:22 PM | Permalink

        While this hockey team may have millions of fans who love and admire them and root earnestly for them, the team itself may not be all that large.
        The coach seems to be Jim Hansen. Jim sits atop NASA-Goddard. Gavin must have been hired and developed, during Jim’s tenure. Gavin seems Jim’s mouthpiece via Real Climate.
        There may not be a lot of complexity, inside the team. Not a lot of clever Swiss bank accounts or Liberian flagged ships, with layers of phony companies to mask true ownership. At Goddard, it seems Gavin (who appears to be a player), serving Coach, via Real Climate in collaboration with another player, Dr. Mann.
        If you are the Coach, who knew about the awful peril of Global Warming, long ago, it has to be a little nettlesome to hear a discordant note. The temperatures are going up, me hearties, says the skipper. Who are these Canadian nobodies, with their empty teapot domes, who dare doubt ME? The President has just four years to save the earth. What do you mean 1934 is actually the hottest year, after I have been busy telling all our legions of fans that it is getting hotter and hotter, every year.

        Of course, the loyal crew-members do not want to cause the Captain such angst. They love the skipper, because he is going to save the earth. Don’t worry boss, they tell him, the trend is still ok. Don’t worry about the two Canadian loons.

        The crew placate his furrowed brow: you and you alone know what constitutes good scientific procedures, data quality, data interpretation, data archiving, and transparency, oh infinitely wise one, our Coach. And don’t worry, your prescient views are confirmed by totally independent scientists over in East Anglia and by the brilliant Professor Mann.

        • deech56
          Posted Jan 16, 2010 at 2:22 PM | Permalink

          Brilliant, simply brilliant. I salute you, sir.

        • justbeau
          Posted Jan 16, 2010 at 4:57 PM | Permalink

          We shall have to see how things turn out in 2010 for Dr. Hansen. He may get a chance to argue his opinion. He has a press release today with three colleagues, showing their best thinking, presumably.

          I fear Dr. Hansen may be nearing a situation like that encountered with J. Edgar Hoover or Admiral Hyman Rickover. He has rendered public service for many years. Sometimes it is good to introduce some fresh perspective and change.

          Several Presidents contemplated firing Hoover, but decided he might know too much dirt. Hoover died in office. His deputy W Mark Felt became a famous leaker to Woodward and Bernstein.

          Interestingly, Admiral Rickover was replaced by President Reagan, a big fan of the Navy. Three decades of running the Navy’s nuclear energy program was deemed sufficient honorable service to his nation.

        • justbeau
          Posted Jan 16, 2010 at 6:47 PM | Permalink

          The root issue: data assemblers and keepers have to be neutral honest brokers. This is what society needs from them. As such, they should be the first ones to want all their data and their methods to be available for review and analysis by anyone.
          Unfortunately, whether he be right or wrong about the hockey stick, Dr. Hansen has been a remarkably outspoken advocate about his agency’s data. His outspokenness and vested personal views about what it means eliminate him as a disinterested keeper of data. His close colleague Dr. Schmidt works in ways unconventional for government employees; the separation between advocacy and science seems blurred. Emails indicate any critic of Dr. Hansen’s views is assumed to be corrupt. The East Anglia leaked emails reveal disrespect for alternative explanations and honest peer review. It is fine for an individual scientist to think this way, rightly or wrongly, as a personal choice, but it seems questionable for heading an organization (NASA-Goddard) that should be transparently making data available, in service to science and all scientists, in all nations.

        • Dave McK
          Posted Jan 17, 2010 at 12:59 AM | Permalink

          These people have committees to plan conferences to prepare roadmaps. They are all well rehearsed and on the same page before they get there. This is a culture of self validating delusions. It is the life style.

        • Dave McK
          Posted Jan 17, 2010 at 1:17 AM | Permalink

          But for the grace of god, Gavin could have been the next Carl Sagan.
          Carl saved the world from nuclear winter as I recall hearing from multiple sources.
          (he vanished after the Kuwaiti oil fires, though- maybe Al is doing rehab at the same place?)
          He must be really shattered. Sheeubie.

          Oh- Punxsatawney Phil – has only predicted an early spring one time- in 1999. It appears the unpublished marmot betters the met for his entire career.

          Hammer these guys. You’re doing the greatest job – and you really are saving the world from a catastrophe- the one they were making.

        • Craig Loehle
          Posted Jan 17, 2010 at 9:29 AM | Permalink

          Carl Sagan vanished because he is dead. But his nuclear winter hysteria was similar to the climate change scare–even worse. His models were hilariously wrong and fake. The similarity is more apt than you guessed.

        • Dave McK
          Posted Jan 17, 2010 at 2:05 PM | Permalink

          Guess what else.
          Carl perceived global warming as a growing, man-made danger and likened it to the natural development of Venus into a hot, life-hostile planet through a kind of runaway greenhouse effect.

          Fred Singer debated Carl over his ‘Kuwaiti winter’ back in 1991 while the fires were burning.
          Carl (much later and not entirely gracefully) admitted to error on that.
          The smoky air absorbed 80% of the solar radiation. It was filthy. Local temperatures dropped. No greenhouse effect. There was no significant global effect from open burning of 6 million barrels of crude a day for months.

  45. Adam Soereg
    Posted Jan 15, 2010 at 4:08 PM | Permalink

    OT, but I think it can be relevant:

    John Coleman's hourlong news special "Global Warming – The Other Side" now online, all five parts here

    Although I have found this program outstanding, I have to admit that some statements made by E.M. Smith and D’Aleo regarding the accuracy of the global surface temperature record were obviously wrong. The reduction in the number of temperature measuring sites could have a significant effect on the most recently observed global trends, but not because of changes in spatial distribution. The mentioned temperature series are based on anomalies and not on absolute temperatures. So the dropout of stations sited in cold places has nothing to do with the estimated monthly anomalies, although this event should produce a significant increase in uncertainty levels due to more limited coverage and therefore larger sampling errors.

    In our case, the real culprit is the exact percentage of sites classified as ‘rural’, ‘semi-urban’ and ‘urban’ stations. Jeff Id pointed out that the GHCN raw station data is contaminated by the well-known but always belittled UHI effect. On a global scale, urban locations have shown almost 3 times more increase in annual mean temperatures than the rural sites in the last 30 years. These findings confirmed my earlier suspicions. After 1990 the percentage of stations which are located in cities or at airports has increased dramatically. The effects of this change may not have been negligible; even Tom Wigley admitted it privately in one of his emails to PD Jones:

    We probably need to say more about this. Land warming since 1980 has been twice the ocean warming — and skeptics might claim that this proves that urban warming is real and important.

    So true, urban warming is really-really important in order to get the pronounced 0.7-0.8°C warming in 150 years. PD Jones of CRU relied on the following argument in his UHI assessment paper in 1990 (available here):

    …in any gridded temperature data set, a single affected station is unlikely to have a large influence on the time series of the nearest grid point, because this is generally the weighted average of between 5 and 20 station records.

    The often cited Jones et al (1990) is a very doubtful paper now because of some obvious fabrications regarding the reliability of their rural reference networks, especially the Chinese one. However, the statement cited above received almost no attention, despite the fact that it contains an enormous flaw. The CRUTEM3 and HadCRUT gridded datasets are available on a 5×5 lat/lon grid; this means that we have 2592 grid boxes globally. The number of GHCN stations with continuously available data is below 1500, see here. A majority of these sites are located in Europe or in the United States, so the coverage on other continents is even smaller than it would be in the case of a homogeneous spatial distribution. Even a ‘homogeneous’ case means only an average of 1.9 stations per grid box (if 29 percent of the grid boxes are located over land). In fact, the spatial coverage of landmasses excluding Europe and the USA is far worse than this number – for example the coverage of Siberia is around 0.5 to 0.7 stations per gridcell, light-years away from the 5 to 20 interval.
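    The stations-per-gridbox arithmetic above can be checked directly. The 29 percent land fraction and the "below 1500" station count are the comment's own figures; a count of 1450 is assumed here purely for illustration.

```python
# 5x5 degree grid: 360/5 longitude bands x 180/5 latitude bands
n_boxes = (360 // 5) * (180 // 5)

land_fraction = 0.29   # comment's estimate of gridboxes over land
n_stations = 1450      # assumed: "below 1500" per the comment

land_boxes = n_boxes * land_fraction
stations_per_box = n_stations / land_boxes

print(n_boxes)                      # 2592
print(round(stations_per_box, 1))   # ~1.9, matching the comment's figure
```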

    Now we can conclude that the argument used by Dr. Jones regarding the reliability of gridded temperature datasets is untrue. Any urban station can have a significant effect on a single gridcell, and an increasing percentage of these contaminated sites due to station dropout can alter the observed global temperature anomaly significantly. It could have a very strong effect at the early part of the record, especially before 1900. Numerous observation sites started as small towns or rural locations, and they were encircled by urbanisation in the last 100-150 years. CRU calculates the temperature anomaly wrt. 1961-90, and it is quite obvious that analysis with the UHI-contaminated 1961-90 base period will produce cooler anomalies in the 19th century, when population and energy consumption in the vicinities of the measurement sites were lower.

    Another common pro-AGW argument is the observed ocean warming. The very first paper documenting global temperature changes was published by Jones, Wigley and Wright in Nature in 1986, titled Global temperature variations between 1861 and 1984 (see: http://adsabs.harvard.edu/abs/1986Natur.322..430J ). The authors admitted several uncertainties and difficulties with the ocean (SST) data and described the comparison and adjustment method of their SST records. They chose several coastal regions and examined the difference series of the local land-based and SST data. In most of the cases the raw SST data showed less temperature increase than the warming signal in the surface record, so they adjusted the SST data until its linear trend was equal to that of the land data… After a statement like this in a peer-reviewed article, I have found this argument completely irrelevant. Personally I possess a copy of this paper, but I think anyone can find this ‘gem of climate research’ on the Internet because of its high relevance and numerous citations.

    • Carrick
      Posted Jan 15, 2010 at 7:52 PM | Permalink

      Adam:

      Although I have found this program outstanding, I have to admit that some statements made by E.M. Smith and D’Aleo regarding the accuracy of the global surface temperature record were obviously wrong. The reduction in the number of temperature measuring sites could have a significant effect on the most recently observed global trends, but not because of changes in spatial distribution. The mentioned temperature series are based on anomalies and not on absolute temperatures. So the dropout of stations sited in cold places has nothing to do with the estimated monthly anomalies, although this event should produce a significant increase in uncertainty levels due to more limited coverage and therefore larger sampling errors.

      There is a well known variation of trend in global mean temperature with latitude (compare 0-30°N versus 60-90°N). Land also warms and cools faster than ocean, so changing the spatial distribution has an effect for that reason too. Finally (related to the above) you can see an effect if you compare the Northern Hemisphere to the Southern.

      Start here.

      Zonal annual data here.

      They basically are correct on this point. I haven’t watched the show so I can’t comment on any of the other particulars.
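      A toy calculation makes the distinction concrete (two invented stations, nothing here is real data): anomalies remove the level difference between cold and warm sites, so dropping a cold station produces no step in the series, but it still changes the trend when warming rates differ by latitude:

      ```python
      import numpy as np

      years = np.arange(1950, 2010)

      # Two invented stations: a cold high-latitude site warming quickly,
      # a warm low-latitude site warming slowly (rates are made up).
      arctic = -20.0 + 0.03 * (years - 1950)
      tropic = 25.0 + 0.01 * (years - 1950)

      def anomaly(series, base=(1961, 1990)):
          """Departures from the station's own 1961-90 mean."""
          mask = (years >= base[0]) & (years <= base[1])
          return series - series[mask].mean()

      both = (anomaly(arctic) + anomaly(tropic)) / 2
      tropic_only = anomaly(tropic)

      # No step from losing the cold station: both series sit near zero in
      # the base period.  But the endpoints differ, because the trend
      # depends on which stations are in the mix.
      print(both[-1], tropic_only[-1])
      ```

      The 45 C level difference between the stations never enters; only the difference in warming rates does, which is Carrick's point about spatial distribution.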

    • Carrick
      Posted Jan 15, 2010 at 7:57 PM | Permalink

      This figure is particularly apropos. So yeah, the distribution of the sampling of the global mean temperature matters, even if you are computing anomalies.

    • David S
      Posted Jan 17, 2010 at 1:16 PM | Permalink

      Can someone please explain how the lat/lon grids work, particularly near the poles? Are they like a physical representation of a Mercator projection, with much more density of coverage near the poles (in spite of relatively few stations), or are they somehow constructed to give cells of equal area? Sorry for the dumb question.
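      For what it's worth, a regular lat/lon grid is not equal-area: cells shrink toward the poles, so an area-weighted global mean multiplies each cell by the cosine of its latitude (GISTEMP reportedly uses roughly 8000 equal-area boxes instead, which achieves the same end by construction). A minimal sketch of the cosine weighting, with `cell_weight` a made-up helper name:

      ```python
      import math

      # Relative surface area of a lat/lon gridcell centred at a given
      # latitude: proportional to cos(latitude), so polar cells count for
      # much less in an area-weighted mean than equatorial cells of the
      # same angular size.
      def cell_weight(lat_deg):
          return math.cos(math.radians(lat_deg))

      for lat in (0, 45, 80):
          print(lat, round(cell_weight(lat), 3))
      ```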

  46. stansvonhorch
    Posted Jan 15, 2010 at 4:16 PM | Permalink

    Get it while it’s hot – here are some files from a protected folder at GISS. The login was buried in the FOIA’d PDF. Incidentally, the folder has been taken down since yesterday…

    Mostly just graphs, data, and work logs, might be some useful info once the dates are cross-referenced and such. Snip away if this shouldn’t be posted:

    http://www.mediafire.com/?nwexrent2jy

    • stansvonhorch
      Posted Jan 15, 2010 at 5:39 PM | Permalink

      Volume 2 here, a bit more:

      http://www.mediafire.com/?2z0zdg4jj4m

      Still not sure if any of this is really useful (other than for cross-referencing dates and such)

      Here’s the beginning of an interesting table in the first archive, GLB_USHCN.2005vs1999.txt:
      ——————————
      Global Temperature Anomaly (C)
      (Land-Ocean Index)
      ——————————
      USHCN Versions Note: Current version not only
      Year 2000 Current has data extended to 2005
      (data through but also data for 1880-1999 have
      1999 2005) been cleaned by NOAA.
      ——————————
      1880 -.25 -.25
      1881 -.20 -.20
      1882 -.22 -.23
      1883 -.24 -.24
      1884 -.30 -.30
      1885 -.30 -.31
      1886 -.25 -.25
      1887 -.35 -.35
      1888 -.26 -.27
      1889 -.15 -.15
      1890 -.37 -.37

      Might have to do with the 1934 vs 1998 issue, but i’m not sure if this data is public or not. Does this seem relevant at all, Steve?

      • stansvonhorch
        Posted Jan 15, 2010 at 5:44 PM | Permalink

        sorry for the triple post, but yeah it seems that the above file is public somewhere else. oh well. the logs are probably the most valuable part of these, but i’ll keep looking

        “Target CO2” Paper
        Proof received, awful quality, I checked fig caps & ref’s 08’10’06
        Evelyn faxed, FedExed & e-mailed Jim’s new version 08’10’07
        Figure 6+S13 for press release PDF Target/S13&DearPM 08’10’09 08’10’14
        Schmunk uploaded on arXiv Main, Appendix 08’10’15 08’10’15
        Larry/Evelyn paid $800 for publication by fax 08’10’21 08’10’21
        2nd Proof, Jim sent e-mail and faxed re figure locations 08’10’27 08’10’27
        Jim sent pre-press release and Q&A to reporters in list 08’10’27 08’10’27
        3rd Proof, OK to Saima 08’10’29 08’10’29
        Published on-line, with chopped up words. Jim sent e-mail. 08’10’31 08’10’31

        • mpaul
          Posted Jan 15, 2010 at 10:57 PM | Permalink

          Just because someone loses a password does not mean that you have permission to use it. Don’t be stupid. You’re breaking the law.

          Steve, I would not let this user post stuff that he implies he got through such means.

        • Carrick
          Posted Jan 15, 2010 at 11:32 PM | Permalink

          This is a bit of an odd case…. the password in question was part of a FOI release. Normally I would agree 100% with you, but this one is trickier to call. I noticed the open password and user name in the released emails last night; now I feel bad I didn’t notify the individual in question at the time.

        • stansvonhorch
          Posted Jan 16, 2010 at 1:07 AM | Permalink

          hence the “snip away”

          i don’t really have any idea if the FOIA makes this ok, either. i might be a bit over-zealous to consider anything i can get my hands on fair game, though… especially after everything steve and co have gone through to get data to work with.

        • Andy
          Posted Jan 16, 2010 at 10:28 AM | Permalink

          I asked my lawyer once if it was ok to continue logging into a system that I had previously had legal access to, but didn’t now. He said that was like walking into a buzzsaw. What you are doing is an even bigger buzzsaw.

      • Rattus Norvegicus
        Posted Jan 16, 2010 at 12:15 PM | Permalink

        The law regards “unauthorized” use of a computer. Even though you got the login/password from the FOI’d emails, you were not authorized to use the computer.

        I am thinking of notifying GISS of your illegal access. You could be in for some trouble.

        • Dave Dardinger
          Posted Jan 16, 2010 at 1:20 PM | Permalink

          Re: Rattus Norvegicus (Jan 16 12:15),

          And you could be in for being banned from this blog for making threats. But I suppose not. I’m sure GISS has one or more people who follow things here. (For all I know you could be one of them.) [And to make things even more on-topic, anyone who posts here has to be even more certain than a GISS or CRU employee that what they say is in the public domain.]

        • stansvonhorch
          Posted Jan 16, 2010 at 1:58 PM | Permalink

          i’d love to get the chance to cross examine jimmy hansen and his cronies… maybe file my own suit for “unauthorized” use of my tax money while i’m at it.

        • David S
          Posted Jan 17, 2010 at 1:11 PM | Permalink

          Rattus can you give us your considered legal opinion on Gavin Schmidt’s use of NASA computers for RealClimate activities?

  47. Sean Peake
    Posted Jan 15, 2010 at 5:54 PM | Permalink

    OT but Mann has just been criticized on Fox for receiving $540K in stimulus funds.

  48. John From MN
    Posted Jan 15, 2010 at 7:16 PM | Permalink

    Something smells in Denmark, to coin a phrase. We all know the northern hemisphere has been the coldest in many years during early January. The stories abound everywhere (I, in southern MN on the Iowa border, have yet to see the temperature above freezing this year and spent most of the first two weeks below zero F). Now, just as us skeptics (realists) are gloating, it just so happens the people behind the curtain come out and say we had the warmest January day in recorded history during the frigid plunge (some global temperature recording site, UAH)? Well excuse me, I wasn’t born yesterday. I would like to add, I found this site http://data.giss.nasa.gov/gistemp/station_data/ and typed in my local rural city and asked for raw data graphed since 1886 (real temp data, I assume). Anyhow, here is the result: http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425744400040&data_set=0&num_neighbors=1 Amazingly, there is not any Hockey Stick. My city is nearly as cold in 2009 as it has ever been in recorded history! No wonder I do not buy into the GW frenzy. I am a farmer and wish it would warm up so I could consistently raise better crops, extend my growing season and have less fear of severe crop loss from sudden freezes in the spring and fall… and I really do have Global Cooling here in rural MN………..John
    PS Why don’t some of you guys put in some pertinent cities and see if you also have Global Cooling, or at least zero warming, over the last 120 years like I have…….John………

  49. Posted Jan 15, 2010 at 11:35 PM | Permalink

    Speaking of possible corrections as needed:

    http://www.informationisbeautiful.net/visualizations/climate-change-deniers-vs-the-consensus/

    I assume some of you have seen the above graph at that link.

    And at that, you might have some input about it, as well as about the comments section of that handy graph at the Information is Beautiful site, regarding the notion that if the skeptic position were correct and outside sources like the Sun were causing the warming (as shown; no one here or in that forum denies the warming, just the anthropogenic cause), the other layers of the Earth’s atmosphere would also be heating. Alas, they are not.

    Just….curious….

    I see most everything addressed here other than the fact that the consensus is strong for AGW, and the info backs this up, AND the AGW Skeptic community consists primarily of about 12% out of the total number of climate scientists, and the others are primarily ideological pundits and engineers funded by the AEI and oil barons.

  50. Posted Jan 16, 2010 at 12:36 AM | Permalink

    Just my two cents, but I was told very explicitly over a decade ago that any email I wrote belonged to the folks that owned the computer, i.e., my employer. Why is this news to others?

  51. Stephan
    Posted Jan 16, 2010 at 1:27 AM | Permalink

    Surely information below is grounds for legal restitution by the people?

    “E.Michael Smith notes “When doing a benchmark test of the program, I found patterns in the input data from NCDC that looked like dramatic and selective deletions of thermometers from cold locations.” Smith says after awhile, it became clear this was not a random strange pattern he was finding, but a well designed and orchestrated manipulation process. “The more I looked, the more I found patterns of deletion that could not be accidental. Thermometers moved from cold mountains to warm beaches; from Siberian Arctic to more southerly locations, and from pristine rural locations to jet airport tarmacs. The last remaining Arctic thermometer in Canada is in a place called ‘The Garden Spot of the Arctic,’ always moving away from the cold and toward the heat. I could not believe it was so blatant and it clearly looked like it was in support of an agenda,” Smith says”

  52. Geoff Sherrington
    Posted Jan 16, 2010 at 4:52 AM | Permalink

    For Adam Soereg
    Posted Jan 15, 2010 at 4:08 PM

    Can you please explain without doubt whether

    (a) the anomaly base period value was recalculated, itself from fewer stations, when the station numbers diminished; or

    (b) the original base period values were retained, so that the subsequent dropping of cold stations would have an effect on the global calculation.
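    A synthetic example of why the two conventions differ (invented stations and rates; the point is only the arithmetic): if absolute temperatures are averaged against a base-period mean computed from the full network, dropping a cold station introduces a spurious step, whereas anomalizing each station against its own base period first does not:

    ```python
    import numpy as np

    years = np.arange(1951, 2011)

    # Invented network: a cold station dropped after 1990, a warm station
    # kept throughout; both warm at exactly the same rate (0.02 C/yr).
    cold = -10.0 + 0.02 * (years - 1951)
    warm = 15.0 + 0.02 * (years - 1951)

    base = (years >= 1961) & (years <= 1990)

    # Retained full-network base period, absolute temperatures averaged:
    # the dropout makes the network mean itself jump warmer.
    absolute_mean = np.where(years <= 1990, (cold + warm) / 2, warm)
    anom_network = absolute_mean - ((cold + warm) / 2)[base].mean()

    # Per-station anomalies: each station is referenced to its own
    # base-period mean before averaging, so the dropout produces no jump.
    anom_station = np.where(
        years <= 1990,
        ((cold - cold[base].mean()) + (warm - warm[base].mean())) / 2,
        warm - warm[base].mean(),
    )

    print(anom_network[-1] - anom_station[-1])  # spurious offset from dropout
    ```

    In this toy setup the offset equals half the level difference between the two stations, even though neither station's warming rate changed; which convention a given index actually uses is exactly Geoff's question.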

  53. justbeau
    Posted Jan 16, 2010 at 8:45 AM | Permalink

    A recurrent theme in literature is what should the crew do, when the captain is obsessed? Herman Melville wrote the great novel, Moby Dick, whose Captain Ahab is obsessed with pursuit of a white whale. The Caine Mutiny (Captain Queeg), Mutiny on the Bounty, perhaps Mister Roberts. A ship is out there on the water and doubts start to arise in the minds of the crew about the skipper.

    On the good ship NASA-Goddard, Jim Hansen is skipper. Decades ago, Jim caught a glimpse of the white whale or thinks he did. He feels obliged to save the world. Dr. Hansen utters over-the-top, hyperbolic opinions to exhort society to fix the problem of the hockey stick, which he claims is there, in the NASA-Goddard data. Every year is getting hotter, Captain Jim says. What, 1934 was hotter than 1998? Somebody must not have a light on in their tea-pot dome, fusses the Captain, in an email.

    Gavin Schmidt seems to be the Captain’s trusty first mate for communications to the public. Gavin’s a smoothie, has some manners, hard-working, maybe he believes the albino whale is out there.

    Imagine working for Captain Ahab or Captain Queeg or another boss, someone who seems inspired, but is he tetched? For these NASA workers, the boss wants each year to be a hotter one. Dr. Hansen has prophesied this will be so and that it can be found in the data. They work for Him. It’s their job and duty to serve him and to find ways of finding the rising temperatures in recent data.

    There may be a cover illustration on the next IPCC report for the next loyal sailor-NASA-scientist who can conjure up another hockey stick for our noble captain!

    • David S
      Posted Jan 17, 2010 at 1:08 PM | Permalink

      The emails suggest the crew’s professional practices had more in common with the Pirates in Gideon Defoe’s novellas, than with Captain Ahab’s crew.

  54. RichG
    Posted Jan 16, 2010 at 12:22 PM | Permalink

    Looks like these institutions have coordinated the release of these FOIA requests. Steve Horner reports that he finally got his reply here: http://planetgore.nationalreview.com/post/?q=Y2UzMmZhYzE3Nzc2ZWU0YThhYzA0YTMyN2YzODgzMmM=

    It looks like he got some of the same emails, as he comments on the same Andy Revkin emails we saw here. He also teases about documents that were included that were sent in response to queries about RC and how Gavin spends his (taxpayer funded) time.

    R

  55. Syl
    Posted Jan 16, 2010 at 3:44 PM | Permalink

    According to NASA’s own admission, their margin of error is 0.5 C!

    In a 2007 letter to Kris French of the National Geographic Maps, Reto Ruedy (Responsible for NASA software) writes:

    “Hi Kris

    Steve McIntyre, a former mining executive, now a blogger and global warming denier, is blowing a small correction [out of proportion, which] has absolutely no impact on the global mean temperature time series; over the US it made a difference of .15 C.

    I checked what this correction does to your map and it does change the colors somewhat over parts of the US; the rest of the world is unaffected. Even the change over the US is way within the margin of error (0.5C). So there is little need to make any changes.

    The timing is a bit awkward, though. Sorry for that.

    Reto.”

    It seems that we learn later on that “maps” had already been published and printed, so they would not change them (because of the extra cost and logistics, I suppose).

    This is probably in response to the hoopla regarding the fact that 1934 was warmer than 1998 due to the discovery of an error by Steve in NASA data. In scientific terms the 0.01 C difference between 1934 and 1998 is insignificant. However, politically it is not.

  56. justbeau
    Posted Jan 16, 2010 at 4:25 PM | Permalink

    Maybe Mr. McIntyre has done this already, but if not, it would be of interest to read a scouting report about the roster members on the Hockey Team.

    Dr. Hansen seems its player/coach.

    Professor Mann used to be a leading player, but his work has been discredited in two NAS reviews, for bad methodology yielding bad science. He further shared some ill-advised thoughts via emails leaked out of East Anglia. Mann seems on the DL, owing to a credibility injury, reduced to writing unkind disparagements of Sarah Palin.

    Gavin Schmidt is a player who works closely for the coach. As a spokesperson for coach, his credibility rises and falls with coach’s.

    Dr. Phil Jones was a player, but his email output indicates some problems in relation to respecting normal conventions of scientific method. His credibility has taken a lasting hit.

    Another leading player must have been Dr. Trenberth. He is happy saying the science is settled in public, but when chatting with team-mates, admits their understanding is a travesty. Not sure of his playability right now.

    An East Anglia guy like Briffa seems a substitute player. He did not understand Mann’s work, but signed onto the discredited collaboration. He is irrelevant.

    There must be legions of guys like Dr. Holdren, someone who supports the team with all his heart, but I doubt he is a player who can explain convincingly, with defensible and transparent documentation, that there is historically unusual warming, to buttress the intuitive fear of coach that the end of Creation is nigh.

    There are lots and lots of fans, cheerleaders, and groupies, but does Coach have any uninjured and capable players who can be called upon, to replace the fallen?

    We hear about the thousands of scientists who are willing to believe coach’s prophecies. I am willing to believe they exist, but many are followers. Are any of them able to play for the team, at its core, and help out coach?

    • Bruce
      Posted Jan 17, 2010 at 1:39 AM | Permalink

      Wm Connolly, the CAGW bullhorn and bully at Wikipedia, is a significant player in the propaganda dept. Every climate and weather related article there should be reviewed for his overwhelming bias.

    • Jimchip
      Posted Jan 17, 2010 at 8:28 PM | Permalink

      Re: justbeau (Jan 16 16:25),

      There are owners, management (including coaches), and players. I’m still playing ‘Who’s who’.

  57. John From MN
    Posted Jan 16, 2010 at 5:09 PM | Permalink

    This is amazing http://www.rockyhigh66.org/stuff/USHCN_revisions.htm somebody has some ’splainin’ to do

    • charles the moderator
      Posted Jan 17, 2010 at 6:56 AM | Permalink

      Re: John From MN (Jan 16 17:09),

      You may want to check this out.

      • thefordprefect
        Posted Jan 17, 2010 at 10:07 AM | Permalink

        Are you therefore saying that once a record of temperature is made it should never be changed – even if recording errors, digitising errors, UHI errors, time errors, measuring instrument change errors etc., etc. are discovered later?

        So it is therefore wrong for GISS to have responded to McIntyre’s Y2000 error discovery?

        • RomanM
          Posted Jan 17, 2010 at 1:17 PM | Permalink

          tfp, you are either being disingenuous or you are incapable of distinguishing between necessary and appropriate corrections to a data set and manipulation of the data under the guise of “adjustment” or “homogenization”.

          Exactly how many times can you keep discovering “errors” in the same data, correcting the previous “corrections” year after year after year? You would think that they would get it right the first time! 😉 These changes are based on other reasons which need to be overtly stated and explicitly justified. As it is, it is like watching a shell game and trying to figure out which shell the pea is under.

          By the way, IMHO, UHI should not be classified as an “error” – the measured temperatures are not necessarily incorrect – but rather a phenomenon which creates an increase in temperature that does not depend on external forcings. This effect negates the ability of the temperature record to inform us on how those external forcings behave quantitatively.

    • Posted Jan 17, 2010 at 6:04 PM | Permalink

      Re: John From MN (Jan 16 17:09), to quote Steve “words fail me”.

      Surely this is the US Climategate.

      Hansen can be a “perfect gentleman” because his records are public. Yes, but only IF you take the trouble to scrape enough files and enough google archives, graph them and turn them into blink comparators, can you see it’s obviously an apocalyptically, globally, humungously, insanely corrupted record.

  58. Puggs
    Posted Jan 16, 2010 at 5:09 PM | Permalink

    Met Office mull over withdrawing long term weather forecasts:
    http://news.bbc.co.uk/1/hi/sci/tech/8462890.stm

  59. F. Patrick Crowley
    Posted Jan 16, 2010 at 8:48 PM | Permalink

    If anyone is wondering whether emails by U.S. government employees are “private” and “personal” – an assertion sometimes made in respect to emails at CRU, an institution subject to UK FOI – the answer in respect to NASA GISS appears to be no.

    As a retired State employee (Montana) we were always explicitly instructed by our IT department that ANYTHING you say or communicate using Government computers is PUBLIC business. There is no such thing as a private email on a gov’t computer. And they keep backups of all emails. So if they say it is “lost” or “deleted” it might not be. And everything you do on a gov’t computer is the same, public business, and subject to subpoena.

  60. vg
    Posted Jan 17, 2010 at 3:18 AM | Permalink

    Steve totally OT but might lighten the atmosphere
    On Monbiot’s blog. The comments are hilarious, 99 to 1 against. I admire Monbiot for hacking it, though….
    http://www.guardian.co.uk/environment/blog/2010/jan/06/cold-snap-climate-sceptics
    This is one of the funniest ones:

    I was wondering when the climate change lobby was going to get its mitts off and explain away the cold weather.

    I have nothing to say other than that I keep warm by tearing up climate change books and articles and stuffing them under my jumper.

    I knew they must be good for something and it works for me.

  61. justbeau
    Posted Jan 17, 2010 at 1:45 PM | Permalink

    Gavin Schmidt co-authored “Picturing the Science.” Acknowledgement: I have NOT read it (nor looked at the pictures).
    It’s odd that there is this nexus between enormous masses of temperature readings, looking for infinitesimal changes in averages, amid many uncertainties that probably cannot be adjusted for by anyone, leading to supposedly clear findings that temperatures are heating up globally, versus taking pictures of “climate” around the globe. Changes in climate in any location are going to be infinitesimal (like the US difference between 1934 and 1998), if they exist at all (for the reasons Gavin believes), so how does someone snap pictures that display real climate changes?

    Gavin tries to do too much. It hurts his overall credibility. He coaches coach about how to deal with information requests from the Canadian teapots (remember to thank them); writes for Real Climate; tries to plug CRU leaks; and co-authors picture books on “change” (Gavin’s equivalent of going on Letterman, since Jim must keep primetime gigs for himself). He is hard-working, but with a public outreach emphasis. It would seem more credibility-enhancing if NASA bureaucrats would just focus on data collection, integration, and transparency, and let academics and Greenpeacers battle about what the data mean. So much advocacy and petty shadow-boxing against imagined enemies.

    • Jimchip
      Posted Jan 17, 2010 at 8:38 PM | Permalink

      Re: justbeau (Jan 17 13:45),

      “Gavin tries to do too much.” There’s a CRUTape email wherein Gavin thought he might have found a mistake in the Team’s calculations. (At least he was trying, then.) The answer was along the lines of ‘nothing to see here, move on’.

      Books, Realclimate, etc. were a big part of the media campaign. And, I agree, it must have been difficult trying to do work while being a drone for the campaign.

  62. G. E. Lambert
    Posted Jan 17, 2010 at 3:25 PM | Permalink

    If this has been covered, apologies. I was interested in the “tone” of the emails from Bloomberg and the NYT to “Team” members. Did McClean and Revkin contact Mr McIntyre for his side of this story?

  63. JT
    Posted Jan 17, 2010 at 6:31 PM | Permalink

    @justbeau If you are the Coach, who knew about the awful peril of Global Warming, long ago, it has to be a little nettlesome to hear a discordant note. The temperatures are going up, me hearties, says the skipper. Who are these Canadian nobodies, with their empty teapot domes, who dare doubt ME?

    Can’t resist. Everyone knows Canadians are the best hockey players. They never should have taken up a stick they didn’t know how to use.

  64. Syl
    Posted Jan 18, 2010 at 6:07 AM | Permalink

    Here’s a French-language article stating that a new study shows that Alaskan glaciers are melting less quickly than previously thought. This is actually from a usually pro-AGW newspaper.

    http://www.cyberpresse.ca/environnement/dossiers/changements-climatiques/201001/17/01-940047-les-glaciers-dalaska-fondent-moins-vite-que-prevu.php

  65. Slabadang
    Posted Jan 18, 2010 at 8:05 AM | Permalink

    Hi Experts!

    I checked the official Arctic ice history on Cryosphere for the year 1922
    and compared it with an article from 1922 where they sailed up to
    81°29′N on ice-free Arctic waters. So what’s the conclusion?

    Official icecover history.

    The article from 1922,

    Click to access mwr-050-11-0589a.pdf

    Isn’t this another proof of the systematic biases in the historical

    data of the “teams”?

  66. Adam Soereg
    Posted Jan 18, 2010 at 4:58 PM | Permalink

    Please let me ask this:

    How could the folks at GISS explain the necessity of an adjustment like this? http://www.rockyhigh66.org/stuff/olney_raw_anim.gif Changes in instrumentation and/or location? TOB adjustment? Neither is possible.

    Maybe it was a correction for an ‘exceptional urban cooling’…

8 Trackbacks

  1. By Climategate, what is going on? - EcoWho on Jan 14, 2010 at 6:09 PM

    […] Judicial Watch Uncovers NASA Documents Related to Global Warming Controversy, NASA Scientists Go on Attack After Climate Data Error Exposed    climateaudit coverage […]

  2. […] FOI, Climategate, NASA and transparency, […]

  3. […] heeft vooralsnog alleen een korte aankondiging hier. De e-mails zijn interessant maar bevatten op het eerste gezicht geen explosief materiaal, zoals […]

  4. By Top Posts — WordPress.com on Jan 15, 2010 at 7:24 PM

    […] FOIed Emails on Hansen Y2K If anyone is wondering whether emails by U.S. government employees are “private” and “personal” […] […]

  5. […] before, but it’s one of the best Abbott and Costello bits I’ve ever seen. Hat tip to a comment over at Climate Audit.  ..bruce w.. Share and […]

  6. […] a comment » Some interesting material over at Climate Audit around the NASA FOI request for emails surrounding McI’s discovery of a mistake in the GISS […]

  7. […] information, data, and code from Steve McIntyre, founder of the climate disruption-denier website ClimateAudit.org. Clearly there was no metaphorical “smoking gun” in the emails, because the CEI […]

  8. […] news was linked later the same day by McIntyre himself. With 215 pages to go through, no wonder it took around a week for somebody to notice some peculiar […]