Mann: "Dirty Laundry" from MBH98-99

The Mann et al 1998-99 reconstruction had “steps” (grandiosely called “experiments” by Mann), but the results of the individual steps were never archived, only the splice of the 11 steps. For statistical analysis, one needs the residuals, which we requested in 2003. Mann refused. At this point in 2003, contrary to North’s allegations, Mann hadn’t provided any source code, computer programs or interpretations of computer programs. All that he had provided us was a URL on his FTP site for the proxy data supposedly used in MBH – a data set that he later claimed was the “wrong” data set, which he deleted in November 2003, replacing it with a new public version of the MBH proxy data. At the point that the residuals were refused, Mann hadn’t spent more than a few minutes responding to our requests.

The MBH residual series crop up in the Climategate Letters. In July 2003, Mann had sent CRU the very data [Sep 4, 2014 – actually only the 1902-1980 residuals] that he later refused to provide to us. (BTW Osborn also asked Mann for source code and interpretation of matters that he did not understand.) Mann made it very clear to Osborn that the residual series were provided in total confidence, that they were “dirty laundry” which he did not want to “fall into” the wrong hands.

Did Mann refuse to provide the residual series to us because answering our inquiries had taken up so much of his time that he stopped answering? Or was there some other reason?

Michael Mann to Tim Osborn, CRU, July 2003

Attached are the calibration residual series for experiments based on available networks back to: AD 1000, AD 1400, AD 1600… You only want to look at the first column (year) and second column (residual) of the files. I can’t even remember what the other columns are! mike
p.s. I know I probably don’t need to mention this, but just to insure absolutely clarify on this, I’m providing these for your own personal use, since you’re a trusted colleague. So please don’t pass this along to others without checking w/ me first. This is the sort of “dirty laundry” one doesn’t want to fall into the hands of those who might potentially try to distort things…

McIntyre to Mann, December 2003 cc NSF

In MBH98 and MBH99, you refer to analyses of residuals carried out in these studies. Could you please provide me with (a) preferably, a FTP location for the residual series, together an FTP reference for the program generating the residuals; or, (b) in the absence of such FTP location, an email enclosing this information. Your analysis of these residuals was used to estimate confidence intervals in an influential scientific paper.

David Verardo Director, Paleoclimate Program, US National Science Foundation, Dec. 17, 2003 preemptively permitting Mann not to disclose his “dirty laundry”:

Dr. Mann and his other US colleagues are under no obligation to provide you with any additional data beyond the extensive data sets they have already made available. He is not required to provide you with computer programs, codes, etc. His research is published in the peer-reviewed literature which has passed muster with the editors of those journals and other scientists who have reviewed his manuscripts. You are free to your analysis of climate data and he is free to his.

McIntyre to Ziemelis of Nature, August 2004

we are writing to reiterate long-standing requests for data and results from MBH98, which we have already communicated on several occasions. You had stated that these requests would be resolved in the new SI, but unfortunately this is not the case. While you are undoubtedly weary of this correspondence, our original request for disclosure was reasonable and remains reasonable. It is only the unresponsiveness of the original authors that is placing a burden on you and your associates. Some of these items have been outstanding for 7 months. They were not attended to in the new SI and need to be dealt with promptly. … In particular, we still seek … the results of the 11 “experiments” referred to in MBH98, including: (b) the NH temperature reconstruction (11 series from the start of each calculation step to 1980); (c) the residuals (11 series from the start of each calculation step to 1980)… Since their claims of skill in reconstructing past climates depend on these “experiments” and their estimation of confidence intervals is based on the residual series, it is unnecessary to explain why these data are of interest. Again, we have repeatedly requested this data.

Ziemelis to McIntyre, Sept 2004
And with regard to the additional experimental results that you request, our view is that this too goes beyond an obligation on the part of the authors, given that the full listing of the source data and documentation of the procedures used to generate the final findings are provided in the corrected Supplementary Information. (This is the most that we would normally require of any author.)

The following paragraphs were moved from the first two paragraphs of the post to this appendix on Sep 4, 2014.

Here is Gerald North, Dec 1, 2009 purporting to justify data refusals by the Team:

McIntyre entered the fray by asking for data from Mann and his coauthors in about 2000 [actually 2003, but who expects accuracy from climate scientists]. As I understand it they complied, but the more they complied the more he wanted. He began to make requests of others. He sometimes not only wanted data, but computer programs. When he could not figure out how the programs worked he wanted help. From what they tell me this became so irritating that they stopped answering his emails.

North is giving public credence to this increasingly popular meme among climate scientists. Virtually everything in North’s meme is untrue. Rather than provide a detailed rebuttal, let me suggest another and far more plausible alternative: Mann and others didn’t want to give data to potential critics. Let me show an actual situation, one where the Climategate Letters provide much new information.


  1. Posted Dec 1, 2009 at 4:57 PM | Permalink

    I was hoping you would get to the ‘residuals’ before too long. They spent more time saying no than it would have taken to provide the data.

    I wonder why?

  2. stevemcintyre
    Posted Dec 1, 2009 at 5:01 PM | Permalink

    It’s hard to know where to begin in these Augean stables.

  3. Justin
    Posted Dec 1, 2009 at 5:13 PM | Permalink

    I wish I could tell my clients that their repeated requests for information (which I ignore) are becoming “irritating”.

  4. wang
    Posted Dec 1, 2009 at 5:34 PM | Permalink

    Dear Mr McIntyre,

    as a relative “newbie” posting on this site, I must begin by commenting on how much I appreciate the moderation and calm perseverance which is apparent in your posts. Oh, if only others on both sides of the debate kept to the same tone…

    My request is whether, given the enormous public interest in the whole “climategate” phenomenon, you could not spend a bit of time putting together a short, easy-to-understand updated explanation of the whole controversy surrounding the hockey stick graph. Currently, I notice quite a lot of misunderstanding and confusion concerning the “bare bones” of this issue.

    Rather than the FOIA questions (which are indeed important, and possibly with legal implications), it would be very interesting to show and explain in layman’s terms the original 1998 MM graphs and how they were gradually modified, including via your efforts, together with other relevant points (e.g. the infamous “upside down” segment, the apparent ability of the model to show an uptick at the end whatever the data, the tiny amount of relevant tree samples, the “hidden” final part of the graph etc.).

    All graphs would ideally be accompanied by short text to put them into context, rather than assuming the reader can understand the implications of what is being shown.

    I am sure you have gone into these individual issues many times in your blog before, but an updated two- or three-pager gathering together all the essential information about this “saga” would be extremely useful to clarify what has actually happened.

    I would even go as far as to say that this explanation would be perfect for a press release, but I wager that this would probably not be your style…

    I for one would be inclined to believe your recollections on this particular topic, leaving me to then draw my own conclusions on the relevance of this issue on the whole MMGW and science credibility debate.

    Thank you.

  5. Craig Loehle
    Posted Dec 1, 2009 at 5:37 PM | Permalink

    It seems to me that places like Science seem unaware that providing “some” code and “some” data may be completely useless unless it is ALL provided because you can’t compute anything with “some” data. Providing “some” data is purely symbolic.

  6. Joe
    Posted Dec 1, 2009 at 5:47 PM | Permalink

    The question I have is are these institutions and journals strong enough to get to the bottom of this and clean house and get back to real science?

    Unless they are completely ignoring everything, they must have at least a clue as to what has happened here? Is Penn State’s investigation going to be just a joke, or is it possible they will go after Mann? I mean, it’s their reputation too. Are they as an institution promoting these tricks too?

  7. Al S.
    Posted Dec 1, 2009 at 5:57 PM | Permalink

    For Wang, and others new to Climate Audit:
    There is a great deal of information on the old web site, and it can be conveniently searched; or you may use the links on that site to various threads, such as MBH98 etc., the “divergence” problem, and so forth. There are also links to other sites of interest: JeffID, for example, has some important posts on the “hockey stick” graphs from an entirely different angle.
    Al S.

  8. Pat Frank
    Posted Dec 1, 2009 at 6:11 PM | Permalink

    So let’s see: Dr. Ziemelis of Nature writes a reply indicating that he thinks you and Ross have been supplied with “the full listing of the source data and documentation of the procedures used to generate the final findings are provided in the corrected Supplementary Information,” which is, “the most that we would normally require of any author.”

    Dr. Ziemelis’ letter sounds reasonable. You’ve got what every scientist says is necessary, and so you’re just being unreasonable in your (badgering) requests. QED.

    But then one goes and reads the original letter, laying out what it was you and Ross wanted to see. There, we discover that all the usual stuff experimental scientists would require in a ‘Materials and Methods’ section of a published peer-reviewed paper was missing from MBH 98/99. Things like what they actually did, and which data they actually used. Little things. Crucial little things. Crucial little things without which a published paper is not a science paper.

    Reading your letter with Ross, one is hard-pressed to know what Dr. Ziemelis was thinking when he decided that all was in order.

    Clearly, the core of Mann’s materials and methods were missing from his published work, including from the corrigendum. Your letter with Ross was entirely reasonable. And expected, from any responsible scientist centrally interested in knowing what Mann had done, so as to build upon it.

    A published word from Dr. Ziemelis explaining his thinking would be very much in order, here preferably, or elsewhere.

  9. TerryMN
    Posted Dec 1, 2009 at 6:11 PM | Permalink

    Dr. North (yes, I know you’re reading this, as are the rest of the gang),

    You can’t bring yourself to read the e-mails, but have no problem commenting on their content and (in)significance. This doesn’t inspire a lot of trust in what you’re saying, Doc – I believe the “I’m a scientist, trust me” card may have been played a time too many. You should be ashamed.

  10. CWELLS
    Posted Dec 1, 2009 at 6:20 PM | Permalink

    The major question regarding ‘peer review’ in my mind (notwithstanding the Crutapes, as Lucia has dubbed them) is: how can such technical articles be ‘reviewed’ with any substance in any way WITHOUT the underlying RAW data (A), the processes (code – B) that were used to analyze it, and the outcome data (C)? C proceeds from A thru B.
    If the ‘peer review’ process DOES involve the transmittal of this information to reviewers and it is REVIEWED and VERIFIED, then it was reviewed. If NOT, it is a waste of ink.
    Calling this stuff ‘peer reviewed’ and thus somehow giving it standing is just ridiculous.

  11. Hoi Polloi
    Posted Dec 1, 2009 at 6:22 PM | Permalink

    In other words, Prof.North is either hopelessly ignorant or lying through his teeth. And worse, he thinks he can get away with it, because he refuses to read the emails and the Harry Files.

    It gives me the impression that they’re frantically trying to block the water that’s seeping through the wall but the cracks are appearing everywhere and they really don’t know where to start. Some storm in a teacup…

  12. Bob
    Posted Dec 1, 2009 at 6:58 PM | Permalink

    To Wang: rather than bother Steve on this, you should read Monckton, who wrote a forty-something page review yesterday

  13. Bill Jamison
    Posted Dec 1, 2009 at 7:11 PM | Permalink

    In the comments Dr. North acknowledges that he “had misinterpreted the ‘trick’ “. Once again Steve is vindicated.

  14. Haze
    Posted Dec 1, 2009 at 7:16 PM | Permalink

    I also think that the most natural explanation is that Jones et al. did not want to share their data with skeptics. That seems to be confirmed in the emails.

    However, it would be very nice to see a detailed rebuttal of North’s claim. Several of my physicist colleagues take the position that Steve McIntyre was simply obnoxious, disruptive, and unreasonable in his requests.

    It would be very good to exhibit in more detail the behavior of Jones et. al. so that it becomes obvious that they were simply hiding from scientific critique.

    Thanks for your skepticism!

  15. geo
    Posted Dec 1, 2009 at 7:28 PM | Permalink

    “actually 2003, but who expects accuracy from climate scientists”

    You do, Steve, and give them heck when they don’t attain a reasonable standard of it.

    I understand your indignation on this particular subject, but that kind of catty is not helpful.

  16. RomanM
    Posted Dec 1, 2009 at 7:37 PM | Permalink

    Carl Gullans, one of the byproducts of climategate has been the identification and separation of the honest scientists from the “defenders of the consensus”.

    The former have been able to rise above their personal feelings, understand what wrongdoings have transpired and condemn those actions. The latter have denied their “lying ears and eyes” and jumped to the defence of the “victims” whose reprehensible private actions have been exposed, using all means of misdirection – lack of truth in the responses is no barrier.

    It is sad to see the lack of professional integrity in such people and the corresponding lessening of confidence in whatever else they may do or say. I guess we may have suspected this all along, but it’s worse than we thought…

  17. John
    Posted Dec 1, 2009 at 7:42 PM | Permalink

    As one who works in the science field, I’m truly amazed. At the end of the day, I’m getting the feeling we will find that there is one magic tree with rings, one beautiful ice core and one piece of coral that “settled” the global warming debate. Why don’t we title this project what it really is: “The raw data that is too embarrassing to release.” I’ve s-canned so many of these projects over the years in the biotech field, I’m amazed this one has gotten as far along as it did. Here’s an addendum to the scientific method: Please Bring Raw Data or Go Home.

  18. Shallow Climate
    Posted Dec 1, 2009 at 7:57 PM | Permalink

    When I feel like sticking needles into my eyeballs, I go over and take a gander at RC. And so I did that once during the Yamal revelations by Steve M. And I swear, in response to one commenter Gavin actually said, “Doesn’t anyone trust us?” Answer: No. Science totally is about NOT trusting ANYONE. Show us the data; show us the methods. So here again, as explicitly shown, more stonewalling by the in-crowd. It’s sad.

  19. JasonR
    Posted Dec 1, 2009 at 8:26 PM | Permalink

    For a layman’s guide to the hockey stick issue, Wang should seek out the blogger, Bishop Hill.

  20. Doug Badgero
    Posted Dec 1, 2009 at 8:49 PM | Permalink


    I have only been following the AGW debate with any dedication for about 6 months. I suggest you read Steve’s paper ohio.pdf for a discussion on the issues with the Mann hockey stick. It was something he presented at Ohio State University so it seems to be for an educated, but not expert, audience. It helped me understand that particular issue. It does however pre-date the climategate letters. The following link is also useful if you want to understand some about what they did with statistical analysis – this will help to understand the famous Mark Twain quote:

    Click to access McKitrick-hockeystick.pdf

    Of course, they are only lies if misused.

  21. curious
    Posted Dec 1, 2009 at 9:57 PM | Permalink

    Wang – try this:

    Click to access MM-W05-background.pdf

  22. curious
    Posted Dec 1, 2009 at 10:15 PM | Permalink

    Wang – and this:

    Click to access Climate_H.pdf

  23. David Jay
    Posted Dec 1, 2009 at 10:47 PM | Permalink

    My new credo:

    In God we trust, all others provide code and data!

  24. LMB
    Posted Dec 2, 2009 at 7:04 AM | Permalink

    “give them heck when they don’t attain a reasonable standard of it. I understand your indignation on this particular subject, but that kind of catty is not helpful.”

    I think the hacker got exasperated by all the years of stonewalling and decided to take matters into his/her own hands.

    The tragedy of Climategate was the willingness of so many to put their trust in so few.

    I still don’t understand why these experts kept emailing their dirty laundry when they knew or must have known the server could get hacked. It never occurred to them that the frustration they were causing could lead to a hack?

    Is it possible there is more dirty laundry which could be found by forensic computer investigations? Things they ‘deleted’ which can be recovered?

  25. MarkB
    Posted Dec 2, 2009 at 9:35 AM | Permalink

    Regarding this:

    “Dr. Mann and his other US colleagues are under no obligation to provide you with any additional data beyond the extensive data sets they have already made available”

    I am reminded of this:

    “Gentlemen don’t read other people’s mail.”

    It is obvious that editors of journals in this field consider it “bad form” to not trust the authors of papers they have already published. There’s an aristocratic assumption in it – “How dare you doubt your betters?”

  26. Sean Inglis
    Posted Dec 2, 2009 at 10:20 AM | Permalink

    Not certain if this is the right place for a “statistics 101” question, but here goes.

    When I see a graph that shows a moving average, and that graph also includes the confidence intervals, typically this ends up looking like a segment of road with a line down the middle

    As I understand it, the width of the road tells me something about the accuracy of the data used to create the graph; the wider the road, the less confidence I have that the measurement or derived value is accurate.

    And in fact, the *actual* moving average / fitted curve / whatever may bounce around within the constraints of the width of the road; there are an infinite number of paths the curve may take. As I become more accurate with my measurement, the width of the road narrows, and the path the curve can take also becomes more constrained.

    Am I following it correctly?

    If this is the case, is there any special significance to the curve that’s actually drawn at the midpoint of the confidence interval? In other words, is the curve shown likely to be the most accurate rendering overall, or does it share equal status in terms of the probability of it being “right” with every other random curve that can be drawn within the constraints?

    Sorry if this is painfully ignorant and I’m making attempts to get to grips with it elsewhere, but some guidance would be appreciated.

  27. mbabbitt
    Posted Dec 2, 2009 at 10:38 AM | Permalink

    David Jay: “In God we trust, all others provide code and data!”

    I imagine you may already know this but W. Edwards Deming, the father of the modern Quality movement and statistical process control is known for his assertion: “In God We Trust, all others bring data.”

    From Wikipedia: “William Edwards Deming (October 14, 1900 – December 20, 1993) was an American statistician, professor, author, lecturer, and consultant. Deming is widely credited with improving production in the United States during the Cold War, although he is perhaps best known for his work in Japan. There, from 1950 onward he taught top management how to improve design (and thus service), product quality, testing and sales (the last through global markets)[1] through various methods, including the application of statistical methods.”

    I am not a scientist but a Quality Assurance/Software testing engineer, and what strikes me is that there seems to be no real accountability or true measurement of the predictive power and success of global warming assertions concerning CO2. As long as other scientists agree with you, that seems to be the predominant proof today. Consensus equals “trust us” – in spite of the fact that assertions of atmospheric temps and other factors that would provide true validation (the fingerprint) don’t seem to be present as predicted. Could anyone imagine building a product where people’s lives were at risk or running a business like this?

  28. Alan S. Blue
    Posted Dec 2, 2009 at 12:03 PM | Permalink

    Sean Inglis,

    If you pick any vertical line on your plot, and traverse it from bottom to top, the histogram of the data should look like a standard Gaussian Distribution (bell curve) with the peak occurring right on “the line.” (Assuming enough data at each x-value and ignoring complicating factors like autocorrelation.)

    Picture this Gaussian Distribution as being perpendicular to the plane of your original chart. The line labeled mu is exactly on the best fit line. Now stretch or shrink (without warping) until the points labeled “one sigma” and “minus one sigma” coincide with the confidence limits. If your plot has “two sigma” confidence limits, they’re going to be wider than the equivalent one sigma limits – but the idea is the same.

    Yes, it should be more likely to be “right near the line” than any other equally sized area. But all of this assumes your data and model have passed various sanity checks in the first place. You can easily have a chart of data with a best-fit line and calculated confidence limits where the actual line has a near-zero value on a similar histogram.
    Picture a high school with 20 national merit scholars and 20 literal Neanderthals. You have one group clustered right around 95% right all the time, and the other right near zero. The fit for “test averages by date” will hover right near 47%. But the chances are very slim that anyone in the class will actually get a 47%.

    This would be a bi-modal distribution, which is decidedly non-Gaussian. If we constructed the histogram here, it would look like there were two completely separate bell curves. There is a distribution that would have the feature where the points on the line have equal probability with any other point between the confidence limits – think of what would happen if you charted “rolls on a twenty-sided die.” Each result is equally probable. But you can still determine an average and confidence limits.
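    [Editorial note: the Gaussian picture described above can be sketched numerically. This is a minimal illustration on synthetic data – nothing here is MBH data or the MBH method – fitting a line by least squares and checking what fraction of points fall within one and two standard deviations of the fitted line.]

    ```python
    import numpy as np

    # Synthetic illustration only: a straight line plus Gaussian noise.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 2000)
    y = 3.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

    # Ordinary least-squares fit, then compute the residuals.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    sigma = residuals.std()

    # For well-behaved Gaussian residuals, roughly 68% of points should
    # lie within one sigma of the fitted line, and roughly 95% within two.
    within_1 = np.mean(np.abs(residuals) <= sigma)
    within_2 = np.mean(np.abs(residuals) <= 2 * sigma)
    print(f"within 1 sigma: {within_1:.2f}, within 2 sigma: {within_2:.2f}")
    ```

    The “road” in the analogy above is then simply the band fitted-line ± 2·sigma, which is how the ~95% confidence envelope around a reconstruction is conventionally drawn when the residuals look Gaussian.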

  29. Gerry
    Posted Dec 2, 2009 at 2:51 PM | Permalink

    Steve found this quote among the e-mails from Tom Wigley, thought you might be interested:

    I have just read the M&M stuff critcizing MBH. A lot of it seems valid to me.
    At the very least MBH is a very sloppy piece of work — an opinion I have held
    for some time.
    Presumably what you have done with Keith is better? — or is it?
    I get asked about this a lot. Can you give me a brief heads up? Mike is too
    deep into this to be helpful.

  30. mrsean2k
    Posted Dec 2, 2009 at 3:07 PM | Permalink

    Alan, thanks for the explanation – a bit more reading required on my part, but now I can be a lot more focused. I assume that the “residuals” that are referred to here are intimately connected to determining confidence intervals for a number of “headline” graphs?

  31. Brian B
    Posted Dec 2, 2009 at 5:18 PM | Permalink

    I imagine you may already know this but W. Edwards Deming, the father of the modern Quality movement and statistical process control is known for his assertion: “In God We Trust, all others bring data.”

    I hope this survives a motive snip but I think the Team is more on the great Jean Shepherd’s team; In God We Trust All Others Pay Cash.

  32. Harry Eagar
    Posted Dec 2, 2009 at 6:14 PM | Permalink

    mbabbit: Deming also liked to say, ‘You’ve got to have a theory. If you don’t have a theory, how do you know when you are wrong?’ (I may not have the words exactly, but I heard him say something like this at two conferences.)

    I do think this is at least as relevant to climate science, as practiced, as your also excellent Deming quotation.

  33. Kenneth Fritsch
    Posted Dec 2, 2009 at 6:50 PM | Permalink

    I have noted that the defenders of the “emails” and those doing damage control (including the popular Judith Curry) tend very much to the making of generalized statements on the critical issues involved. Now why would scientists who are versed (or at least are supposed to be) in the details of the matter use the tricks of the trade of what we call politics?

  34. Steve F
    Posted Dec 2, 2009 at 9:22 PM | Permalink

    Mc, you need to contact the WAPO and be allowed to post a rebuttal. This is a direct attack on your credibility and borders on libel. Sure, the WAPO is a liberal shill for the warmers, but if they printed this in their paper, not just on the website, then you should be allowed to clean this particular stable.

  35. hswiseman
    Posted Dec 2, 2009 at 10:58 PM | Permalink

    The residuals may turn out to be Mann’s 18-minute gap. I am sure all of the frequent participants here have searched the emails to see if they made the “enemies list”. Does Tom Wigley get to play John Dean in this drama? Are Redford and Hoffman available to play M & M in the movie?

  36. JohnM
    Posted Dec 3, 2009 at 12:47 AM | Permalink

    Steve M:

    It may be useful to go over the status of your FOIA requests and then consult with an English solicitor with experience with this law in the UK. I am sure that if you put out a call, you will find strong financial support for any associated costs.

    A polite, comprehensive and focused renewal of your requests in the current environment may produce useful results. Lord Monckton may have some ideas as to which UK MPs may be best to copy on the renewed requests.

    The stonewalling needs to end and end now.
    snip – over the top

  37. Dr. Dweeb
    Posted Dec 3, 2009 at 9:56 AM | Permalink

    This is the sort of link I send to “joe sixpack” types. Visual, simple and quite damning.

  38. Alan S. Blue
    Posted Dec 3, 2009 at 11:42 AM | Permalink


    A residual is just a measure of how far measurements are from the model (r = actual – model). When discussing “the residuals,” people generally mean plots examining the values of the residuals for any irregularities.

    Using your road analogy, you’re plotting a straight line – the road’s centerline – at a distance of ‘zero from the model.’ When you put all the rest of the residuals on the same plot, you have certain expectations for a normal distribution. You expect 68% of the data to be inside “the lanes adjacent to the centerline.” The one-sigma lanes. You expect 95% of the residuals to be within the two-sigma region – think two lanes both ways. There should be some (5%) of the data way out on the shoulders.

    But the key piece about residuals is often just a visual inspection. Does the data visually curve relative to the line? Does it look more like a cone than a cylinder? Are the points on the shoulder way off in the ditch? Are the points on the shoulder all grouped together?

    There is a long list of experimental fiascoes that can be caught by just a glance at the residuals plot. Choosing an inappropriate model or having non-normally distributed data are just a couple.
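    [Editorial note: the residual checks described above can be given a crude numerical stand-in. This sketch uses synthetic data only – it is not the MBH calculation – fitting a straight line to data that actually curves, and showing that the misfit announces itself as strong lag-1 autocorrelation in the residuals, whereas an adequate model leaves residuals that look like white noise.]

    ```python
    import numpy as np

    # Synthetic "truth" that curves, plus Gaussian noise.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 500)
    y = 0.5 * x**2 + rng.normal(scale=1.0, size=x.size)

    def lag1_autocorr(r):
        """Correlation between consecutive residuals; near zero for white noise,
        large and positive when the residuals trace a systematic pattern."""
        return np.corrcoef(r[:-1], r[1:])[0, 1]

    # Wrong model (straight line) vs. adequate model (quadratic).
    bad = y - np.polyval(np.polyfit(x, y, 1), x)
    good = y - np.polyval(np.polyfit(x, y, 2), x)

    print(f"lag-1 autocorrelation, linear fit:    {lag1_autocorr(bad):.2f}")
    print(f"lag-1 autocorrelation, quadratic fit: {lag1_autocorr(good):.2f}")
    ```

    The linear fit leaves residuals that “curve” together along x (high lag-1 autocorrelation), which is exactly the kind of structure a visual inspection of a residuals plot would flag – and which is invisible if the residual series itself is withheld.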

  39. AC
    Posted Dec 3, 2009 at 4:17 PM | Permalink

    Curious about this quote from Phil Jones – what does “we’ll be changing the SSTs” imply?

    “One final thing – don’t worry too much about the 1940-60 period, as I think we’ll be
    changing the SSTs there for 1945-60 and with more digitized data for 1940-45. There is also
    a tendency for the last 10 years (1996-2005) to drift slightly low – all 3 lines. This may
    be down to SST issues.”

    in this email. There’s also a comment from Gil Compo:
    “I thought that correlations of 0.8 to 0.85 were high for an independent dataset this
    long. I think that these are higher than the proxies?” – not clear which datasets are under discussion, as there are three being compared.

  40. Posted Dec 3, 2009 at 9:53 PM | Permalink

    In the words of the great Dr. Peter Venkman: Back off man, I’m a scientist.

  41. nvw
    Posted Dec 3, 2009 at 10:32 PM | Permalink

    Nature has published a very strong editorial on climategate.

    One should compare the editorial to the facts presented in this post. Notice that the Nature editor (Ziemelis) in Aug 2004 suggests that all the information was provided above and beyond the call of duty. The same approach is extended in the current Nature editorial; however, when you look at the facts, the assertions do not hold true.

    If all you had to go on were the two sides, on one side would be the editorial staff of (what used to be) one of the most important scientific journals in the world. A published article in Nature would go at the top of the list any young professor would submit to his tenure committee. Why would you question Nature? Any mainstream academic doing so would be committing career suicide. It appears that Nature got so used to the god-like status afforded it that it made the mistake of thinking it was true. On the other side you have Steve M daring to question authority. Nature would have you believe that because Steve is an outsider, a non-academic, he is unqualified to participate in the debate. But what they fail to realize is that it is because Steve is an outsider that he is free to ask these questions. They cannot destroy his career, but they can deny him access to data and use the pulpit to belittle him.

    And these emails, as described in this posting, are a complete vindication of those who have been asking for years for the simple scientific propriety of supporting data. Which brings us back to the current Nature editorial. You will see in the days ahead two approaches from those defending “the debate is over” climate science. One will be to continue to stonewall, cover up and resort to authority. The Nature editorial epitomizes this approach. The other will be the Monbiot approach, where staunch defenders will realize that they are on the wrong side of what they love about science and publicly apologize for their complicity in restricting the debate.

  42. Andrew Russell
    Posted Dec 4, 2009 at 1:32 PM | Permalink

    I wonder if the genesis for “hiding the data” came from a couple of very public embarrassments of Ben Santer and Tom Wigley in 1996.

    First was the revelation that Chapter 8 of the 1996 IPCC report had been doctored after peer review in Madrid to hype a claim of a “discernible human influence” on the atmosphere. The nature of that doctoring was made public by Frederick Seitz in a WSJ op-ed: “A Major Deception on Global Warming”

    The second was after publication of a paper in Nature, “A Search For Human Influences on the Thermal Structure of the Atmosphere”, which claimed that observed radiosonde data confirmed the global warming computer models. That was subsequently blown up by Pat Michaels and Paul Knappenberger, who showed that Santer and crew had used a subset (can I say “cherry-picked”?) of the radiosonde data, and that the full data set showed their claim was bogus. See

    Lead Authors of 1996 IPCC AR2 Chapter 8: B. Santer, T. Wigley, T. Barnett, E. Anyamba. Authors of 1996 Nature paper: B. Santer, T. Wigley, P. Jones, J. Mitchell, A. Oort, R. Stouffer. Any of these look familiar?

  43. Carlo
    Posted Dec 5, 2009 at 10:10 AM | Permalink

    Michael E. Mann wrote:

    Dear Phil and Gabi,
    I’ve attached a cleaned-up and commented version of the matlab code that I wrote for
    doing the Mann and Jones (2003) composites. I did this knowing that Phil and I are
    likely to have to respond to more crap criticisms from the idiots in the near future, so
    best to clean up the code and provide to some of my close colleagues in case they want
    to test it, etc. Please feel free to use this code for your own internal purposes, but
    don’t pass it along where it may get into the hands of the wrong people.
    In the process of trying to clean it up, I realized I had something a bit odd, not
    necessarily wrong, but it makes a small difference. It seems that I used the ‘long’ NH
    instrumental series back to 1753 that we calculated in the following paper:
    * Mann, M.E., Rutherford, S., Bradley, R.S., Hughes, M.K., Keimig, F.T., [1]Optimal
    Surface Temperature Reconstructions using Terrestrial Borehole Data, Journal of
    Geophysical Research, 108 (D7), 4203, doi: 10.1029/2002JD002532, 2003.

    (based on the sparse available long instrumental records) to set the scale for the
    decadal standard deviation of the proxy composite. Not sure why I used this, rather than
    using the CRU NH record back to 1856 for this purpose. It looks like I had two similarly
    named series floating around in the code, and used perhaps the less preferable one for
    setting the scale.
    Turns out, this has the net effect of decreasing the amplitude of the NH reconstruction
    by a factor of 0.11/0.14 = 1.29.
    This may explain part of what perplexed Gabi when she was comparing w/ the instrumental
    series. I’ve attached the version of the reconstruction where the NH is scaled by the
    CRU NH record instead, as well as the Matlab code which you’re welcome to try to use
    yourself and play around with. Basically, this increases the amplitude of the
    reconstruction everywhere by the factor 1.29. Perhaps this is more in line w/ what Gabi
    was estimating (Gabi?)
    Anyway, doesn’t make a major difference, but you might want to take this into account in
    any further use of the Mann and Jones series…
    Phil: is this worth a followup note to GRL, w/ a link to the Matlab code?
    p.s. Gabi: when do you and Tom plan to publish your NH reconstruction that now goes back
    about 1500 years or so? It would be nice to have more independent reconstructions
    published in the near future! Maybe I missed this? Thanks…

    % (c) 2003, M.E. Mann
    % Jones, P.D., Mann, M.E., Climate Over Past Millennia, Reviews of Geophysics,
    % 42, RG2002, doi:10.1029/2003RG000143, 2004
    % Mann, M.E., Jones, P.D., Global Surface Temperatures over the Past two Millennia,
    % Geophysical Research Letters,
    % 30 (15), 1820, doi: 10.1029/2003GL017814, 2003
    % Read in CRU instrumental NH mean temeperature record (1856-2003)
    load nh.dat;
    % calculate both warm-season and annual means
    % use annual mean record in this analysis
    % Read in Mann et al (1998), Crowley and Lowery (2000), and Jones et al (1998)
    % NH temperature reconstructions
    load nhem-millennium.dat;
    load crowleylowery.dat;
    load joneshemisrecons.dat;
    % since some reconstructions are only decadally resolved, smooth each on
    % decadal timescales through use of a lowpass filter with cutoff at
    % f=0.1 cycle/year. Based on use of the filtering routine described in:
    % Mann, M.E., On Smoothing Potentially Non-Stationary Climate Time Series,
    % Geophysical Research Letters, 31, L07214, doi: 10.1029/2004GL019569, 2004.
    % using ‘minimum norm’ constraint at both boundaries for all time series
    % Mann et al (1998) already calibrated in terms of hemispheric annual mean temperature, but
    % reference mean has to be adjusted to equal that of the instrumental series
    % over the 1856-1980 overlap period (which uses a 1961-1990 reference period)
    % need to adjust and scale Jones et al (1998) and Crowley and Lowery (2000)
    % reconstructions to match mean and trend of smoothed instrumental series
    % over 1856-1980
    [yc,t,trend0,detrend0,xm,ym] = lintrend(x, y);
    [yc,t,trendcl,detrendcl,xm,ym] = lintrend(x, y);
    [yc,t,trendjones,detrendjones,xm,ym] = lintrend(x, y);
    load 'china-series1.dat'
    load 'itrdb-long-fixed.dat'
    load 'westgreen-o18.dat'
    load 'torny.dat'
    load 'chesapeake.dat'
    load 'mongolia-darrigo.dat'
    load 'dahl-jensen-gripbh1yrinterp.txt'
    load 'dahl-jensen-dye3bh1yrinterp.txt'
    % read in years
    % read in proxy values
    % Store decadal correlation of each proxy record with local available
    % overlapping CRU gridpoint surface temperature record (see Mann and Jones, 2003)
    % Estimate Area represented by each proxy record based on latitude of
    % record and estimated number of temperature gridpoints represented by record
    for j=1:M
    % determine min and max available years over all proxy records
    minarray=[min(x1) min(x2) min(x3) min(x4) min(x5) min(x6) min(x7) min(x8)];
    maxarray=[max(x1) max(x2) max(x3) max(x4) max(x5) max(x6) max(x7) max(x8)];
    % initialize proxy data matrix
    notnumber = -9999;
    for j=1:M
    for i=1:minarray(j)-1
    for i=minarray(j):tend
    for i=minarray(j):maxarray(j)
    if (j==1) mat(i,j)=y1(i-minarray(j)+1);
    if (j==2) mat(i,j)=y2(i-minarray(j)+1);
    if (j==3) mat(i,j)=y3(i-minarray(j)+1);
    if (j==4) mat(i,j)=y4(i-minarray(j)+1);
    if (j==5) mat(i,j)=y5(i-minarray(j)+1);
    if (j==6) mat(i,j)=y6(i-minarray(j)+1);
    if (j==7) mat(i,j)=y7(i-minarray(j)+1);
    if (j==8) mat(i,j)=y8(i-minarray(j)+1);
    % added in Jones and Mann (2004), extend series ending between
    % 1980 calibration period end and 2001 boundary by persistence of
    % last available value through 2001
    for i=maxarray(j)+1:tend
    if (j==1) mat(i,j)=y1(maxarray(j)-minarray(j)+1);
    if (j==2) mat(i,j)=y2(maxarray(j)-minarray(j)+1);
    if (j==3) mat(i,j)=y3(maxarray(j)-minarray(j)+1);
    if (j==4) mat(i,j)=y4(maxarray(j)-minarray(j)+1);
    if (j==5) mat(i,j)=y5(maxarray(j)-minarray(j)+1);
    if (j==6) mat(i,j)=y6(maxarray(j)-minarray(j)+1);
    if (j==7) mat(i,j)=y7(maxarray(j)-minarray(j)+1);
    if (j==8) mat(i,j)=y8(maxarray(j)-minarray(j)+1);
    data=[time mat];
    % decadally lowpass of proxy series at f=0.1 cycle/year as described earlier
    for j=1:M
    for i=1:minarray(j)-1
    for i=minarray(j):tend
    % standardize data
    % first remove mean from each series
    for j=1:M
    for i=1:tend
    if (filtered(i,j)>notnumber)
    % now divide through by standard deviation
    for j=1:M
    for i=1:tend
    if (filtered(i,j)>notnumber)
    for i=1:tend
    if (mat(i,j)>notnumber)
    % 4. Calculate NH mean temperature reconstruction through weighted (and
    % unweighted) composites of the decadally-smoothed proxy indicators
    % impose weighting scheme for NH mean composite
    for j=1:M
    % weighting method 1: weight each proxy series by approximate area
    % weighting method 2: weight each proxy series by correlation between
    % predictor and local gridpoint series over available overlap period
    % during calibration interval
    % weighting method 3: weight each proxy series by correlation between
    % predictor and NH mean series over calibration interval:
    % weightlong(j)=lincor(nhlong,standardized(1856:1980,j));
    % weighting method 4: combine 1 and 3
    % weighting method 5: combine 1 amd 2 (this is the ‘standard’ weighting
    % scheme chosen by Mann and Jones (2003)
    % use standard weighting scheme
    % perform reconstructions based on:
    % (1) the 6 proxy temperature records available over interval AD 200-1980
    % (2) all 8 proxy temperature records available over interval AD 553-1980
    for j=1:M
    if (istart1>=minarray(j))
    if (istart2>=minarray(j))
    % calculate composites through 1995 (too few series available after that date)
    % As discussed above, persistence is used to extend any series ending
    % between 1980 and 1995 as described by Jones and Mann (2004).
    for i=istart1:tend
    for j=1:M
    if (istart1>=minarray(j))
    if (istart2>=minarray(j))
    % scale composite to have same variance as decadally-smoothed instrumental
    % NH series

    % Mann and Jones (2003) and Jones and Mann (2004) used for this purpose
    % the extended (1753-1980) NH series used in:
    % Mann, M.E., Rutherford, S., Bradley, R.S., Hughes, M.K., Keimig, F.T.,
    % Optimal Surface Temperature Reconstructions using Terrestrial Borehole Data,
    % Journal of Geophysical Research, 108 (D7), 4203, doi: 10.1029/2002JD002532, 2003.
    % That series has a decadal standard deviation sd=0.1123
    % If instead, the 1856-2003 CRU instrumental NH mean record is used, with
    % a decadal standard deviation of sd=0.1446, the amplitude of the reconstruction
    % increases by a factor 1.29 (this scaling yields slightly lower verification
    % scores)
    load nhem-long.dat
    % use weighted (rather than unweighted) composite in this case
    % center composites on 1856-1980 calibration period
    % scale composite to standard deviation of instrumental series and re-center
    % to have same (1961-1990) zero reference period as CRU NH instrumental
    % temperature record
    % estimate uncertainty in reconstruction
    % nominal (white noise) unresolved calibration period variance
    % note: this is the *nominal* white noise uncertainty in the reconstruction
    % a spectral analysis of the calibration residuals [as discussed briefly in
    % Mann and Jones, 2003] indicates that a peak at the multidecadal timescale
    % that exceeds the white noise average residual variance by a factor of
    % approximately 6. A conservative estimate of the standard error in the
    % reconstruction thus inflates the nominal white noise estimate “sdunc” by a
    % factor of sqrt(6)
    sdlow = sdunc*sqrt(6)
    % calculate long-term verification statistics for reconstruction
    % use composite of Mann et al (1998)/Crowley and Lowery (2000)/Jones et al (1998)
    % and AD 1600-1855 interval
    % work with longer reconstruction (back to AD 200)
    %calculate verification R^2
    % calculate verification RE
    % insure convention of zero mean over calibration interval
    for i=857:981
    for i=601:856
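
    The amplitude rescaling and uncertainty inflation described in the comments of the quoted code can be sketched as follows. This is an illustrative Python sketch, not Mann’s actual code; the only numbers taken from the source are the two decadal standard deviations (0.1123 and 0.1446) and the sqrt(6) inflation factor, while `sdunc` is a hypothetical placeholder value:

```python
# Illustrative sketch of the scaling arithmetic in the quoted Matlab
# comments (not the original code).
import math

# Decadal standard deviations quoted in the code comments:
sd_long = 0.1123  # extended 1753-1980 NH series (Mann et al. 2003)
sd_cru = 0.1446   # 1856-2003 CRU instrumental NH mean record

# A composite scaled to sd_long, if rescaled instead to sd_cru,
# increases in amplitude by this factor:
rescale = sd_cru / sd_long
print(round(rescale, 2))  # -> 1.29, the factor cited in the email

# Nominal white-noise calibration uncertainty inflated by sqrt(6) to
# allow for the multidecadal peak in the calibration residuals:
sdunc = 0.05  # hypothetical nominal value, for illustration only
sdlow = sdunc * math.sqrt(6)
print(round(sdlow, 3))
```

    The point of the sketch is simply that the choice of scaling series sets the amplitude of the entire reconstruction, which is why swapping one similarly named series for another changes every value by the same multiplicative factor.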

  44. Barclay E. MacDonald
    Posted Dec 5, 2009 at 8:13 PM | Permalink

    Carlo, nice post. Aside from the manifest manipulation without detailed explanation (other than that it gave the wrong result), I particularly like the following:

    p.s. Gabi: when do you and Tom plan to publish your NH reconstruction that now goes back
    about 1500 years or so? It would be nice to have more independent reconstructions.

  45. boballab
    Posted Dec 6, 2009 at 3:12 AM | Permalink

    Well, check out this piece by Marc Sheppard on Mike’s Nature trick. It doesn’t do a bad job of showing how the trick was used, what was hidden, and why it is important.

    Steve: I wish that he’d paid more attention to the analyses by Jean S and myself which are more precise.

  46. Antony
    Posted Dec 7, 2009 at 5:59 AM | Permalink

    Briffa to Cook complaining about Mann’s work in 1024334440.txt:

    >I have just read this lettter – and I think it is crap. I am sick to
    >death of Mann stating his reconstruction represents the tropical
    >area just because it contains a few (poorly temperature
    >representative ) tropical series. He is just as capable of
    >regressing these data again any other “target” series , such as the
    >increasing trend of self-opinionated verbage he has produced over
    >the last few years , and … (better say no more)

    Cook to Briffa in 1051638938.txt:

    “I come more from the “cup half-full” camp when it comes to the MWP, maybe yes, maybe no, but it is too early to say what it is. Being a natural skeptic, I guess you might lean more towards the MBH camp, which is fine as long as one is honest and open about evaluating the evidence (I have my doubts about the MBH camp). We can always politely(?) disagree given the same admittedly equivocal evidence.
    I should say that Jan should at least be made aware of this reanalysis of his data.
    Admittedly, all of the Schweingruber data are in the public domain I believe, so that should not be an issue with those data. I just don’t want to get into an open critique of the Esper data because it would just add fuel to the MBH attack squad. They tend to work in their own somewhat agenda-filled ways. We should also work on this stuff on our
    own, but I do not think that we have an agenda per se, other than trying to objectively understand what is going on.”

  47. TKl
    Posted Dec 9, 2009 at 4:48 AM | Permalink

    Interesting CRU files: in \FOIA\documents there is a file ‘’. When unzipped, it delivers under
    ‘FOIA\documents\mbh98-osborn\TREE\ITRDB\NOAMER’ these directories and other files:

2 Trackbacks

  1. […] who have disputed the man made warming claims have been derided by this new establishment, stonewalled and discriminated against. Yet actual temperature records as has recently been exposed by […]

  2. […] to Mann, December 2003 cc NSF (from the Climate Audit mirror […]
