Mann Misrepresents NOAA OIG

In today’s post, I’ll consider a fifth investigation – by the NOAA Office of the Inspector General (OIG), report here – and show that, like the other four considered so far, Mann’s claims that it “investigated” and “exonerated” Mann himself were untrue. In addition, I’ll show that Mann’s pleadings misrepresented the findings of this investigation both through grossly selective quotation and mis-statement. Finally, the OIG report re-opened questions about Mann’s role in Eugene Wahl’s destruction of emails as requested by Phil Jones. In Mann’s pleadings, Mann claimed that each of the investigation reports was “commented upon in the national and international media”. But, in this case, much of the coverage focused on renewed criticism of the apparent obtuseness of the Penn State inquiry committee. The episode even included accusations of libel by Mann against CEI’s Chris Horner, as well as a wild and unjustified accusation of “dishonesty” by Mann against me.

Terms of Reference
The OIG investigation was triggered by a letter from Senator James Inhofe on May 26, 2010. The OIG interpreted its terms of reference as follows:

Pursuant to your request, we conducted an inquiry to determine the following:
1. Whether NOAA carried out an internal review of the CRU emails posted on the internet;
2. The basis for Dr Lubchenco’s above testimony statement before the House Select Committee on December 2, 2009;
3. Whether NOAA has conducted a review of its global temperature data comprising the GHCN-M dataset, which is maintained by NOAA’s National Climatic Data Center;
4. Whether any CRU emails indicated that NOAA:
a. Inappropriately manipulated data comprising the GHCN-M temperature dataset;
b. Failed to adhere to appropriate peer review procedures;
c. Did not comply with federal laws pertaining to information/data sharing, namely the Federal Information Quality Act, the Freedom of Information Act and the Shelby Amendment.

Nowhere do these terms of reference require the OIG to investigate Mann’s conduct, and it did not do so.

GHCN-M(onthly)
As requested by Inhofe, the OIG report included a section on the GHCN-M monthly historical temperature dataset maintained by NOAA, which is common to the three major temperature indices (CRU, GISS, NOAA).

The OIG summarized NOAA’s procedures for preparing GHCN-M data and reported:

We found no evidence in the CRU emails that NOAA inappropriately manipulated data comprising the GHCN-M dataset.

Given that the Climategate dossier contained very few references to CRU’s own CRUTEM data, it is unsurprising that the emails contain no evidence of inappropriate manipulation of GHCN-M data – a product with which CRU was not involved – though it was a reasonable enough point to crosscheck. (As a further editorial comment, temperature data has never been a major theme at Climate Audit, though it has been at other blogs.)

But watch carefully here. The phrases in the above quotation – “no evidence” of “inappropriate manipulat[ion]” – will crop up in an entirely different context in the Mann pleadings, applied not just to GHCN-M in the light of Climategate emails, but much more widely.

The OIG on the Climategate Emails
The OIG report stated that it had examined the entire CG1 dossier, from which it extracted eight emails that concerned NOAA employees. Three of these emails also mention Mann (identified as a “researcher at Pennsylvania State University”).

The OIG says that it interviewed the relevant NOAA scientists and “summarized their responses and explanations”. (But again watch carefully: “summariz[ing] their responses and explanations” means exactly that: it does not entail that the OIG either endorsed or rejected their explanations and, in at least one case, the OIG left a matter of contention completely unresolved – see contemporary CA discussion here of the inconsistency between statements by Susan Solomon and by NOAA lawyers on advice supposedly given to Solomon on whether documents were the property of NOAA or IPCC.)

In our own review of all 1073 CRU emails, we found eight emails which, in our judgment, warranted further examination to clarify any possible issues involving the scientific integrity of particular NOAA scientists or NOAA’s data. As a result, we conducted interviews with the relevant NOAA scientists regarding these eight emails and have summarized their responses and explanations below.

The first of the three emails considered by the OIG in which Mann was involved was a 2006 email (CG1 1140039406), in which Briffa conceded to Chapter Coordinating Lead Author Overpeck that the “real independence” of the paleoclimate analyses subsequent to the TAR was “minimal” (this is obviously a longstanding Climate Audit position) and urged Overpeck not to let co-Chair Susan Solomon (a NOAA employee) or Mann “push” them beyond what they thought was right.

Peck, you have to consider that since the TAR, there has been a lot of argument re “hockey stick” and the real independence of the inputs to most subsequent analyses is minimal. True, there have been many different techniques used to aggregate and scale data – but the efficacy of these is still far from established. We should be careful not to push the conclusions beyond what we can securely justify – and this is not much other than a confirmation of the general conclusions of the TAR. We must resist being pushed to present the results such that we will be accused of bias – hence no need to attack Moberg. Just need to show the “most likely” course of temperatures over the last 1300 years – which we do well I think. Strong confirmation of TAR is a good result, given that we discuss uncertainty and base it on more data. Let us not try to over egg the pudding. For what it worth, the above comments are my (honestly long considered) views – and I would not be happy to go further. Of course this discussion now needs to go to the wider Chapter authorship, but do not let Susan [Solomon of NOAA] (or Mike [Mann]) push you (us) beyond where we know is right.

The OIG reported Solomon’s answer as follows (without additional comment):

The Co-Chair explained to us that she had only requested that these scientists cite the evidence that they contended “reinforced” the TAR’s conclusion regarding the “exceptional warming of the late 20th century relative to the past 1000 years.” She told us her goal as Co-Chair was not to push a particular outcome but to ensure that the scientists provided “more clarity as to what the reasoning was for [the] particular statement.”

The OIG did not seek an explanation from Mann as to why IPCC authors were worried that he might push them beyond “what was right”, nor did it address why Mann, who was not an AR4 author, would be mentioned together with Co-Chair Solomon as pressing IPCC authors.

The NOAA OIG also considered a second Briffa email involving Mann and Solomon, in which Briffa told Mann that he had tried to get a (seemingly self-congratulatory) statement about paleoclimate into the Summary for Policy-makers, but had been rebuffed by Solomon. The email contains the disquieting admission that the needs of the science and of the IPCC were “not always the same”.

I tried hard to balance the needs of the science and the IPCC, which were not always the same. I worried that you might think I gave the impression of not supporting you well enough while trying to report on the issues and uncertainties. Much had to be removed and I was particularly unhappy that I could not get the statement into the SPM regarding the AR4 reinforcement of the results and conclusions of the TAR. I tried my best but we were basically railroaded by Susan [Solomon].

Commenting to the OIG, Solomon denied that Briffa had made any such request to her (it is quite possible that Briffa was not being entirely candid with Mann) and stood by the language in the SPM.

The third email was the notorious email in which Jones had asked Mann to forward his deletion request to Eugene Wahl, who, at the time of the OIG report, was employed by NOAA. The OIG characterized the email as follows:

CRU email 1212073451 dated May 29, 2008 in which the Director of the CRU requested a researcher from Pennsylvania State University to ask an individual, who is now a NOAA scientist, to delete certain emails related to his participation in the IPCC AR4.

The OIG reported that the incident had taken place before Wahl became an employee of NOAA and was therefore outside NOAA’s jurisdiction:

This scientist explained to us that he believes he deleted the referenced emails at that time. We determined that he did not become a NOAA employee until after the incident, in August 2008, and therefore did not violate any agency record retention policies. Further, this individual informed us that in December 2009 he received a letter from Senator Inhofe requesting that he retain all of his records, which he told us he has done.

The Mann Pleadings
The NOAA OIG investigation (“the Inspector General of the U. S. Department of Commerce”) is listed in paragraph 21 of the Mann Complaint as one of the five governmental agency investigations (and four university investigations) that supposedly “conducted separate and independent investigations into the allegations of scientific misconduct against Dr. Mann and his colleagues” and is included in the paragraph 24 claim that “all of the above investigations found that there was no evidence of any fraud, data falsification, statistical manipulation, or misconduct of any kind by Dr. Mann”.

Its report is cited in Mann’s Reply Memorandum as one of “many other inquiries by various organizations within the United Kingdom and the United States that reached the same conclusion [Dr. Mann [had not] committed fraud or manipulated the data]” (page 2), and the report is attached as an exhibit to the Reply Memorandum, which accused CEI and National Review of “obfuscat[ing] and misrepresent[ing] the findings of those panels [including NOAA OIG, Muir Russell, Oxburgh, Commons Committee, DECC], in an effort to suggest (erroneously) that those inquiries did not exonerate Dr. Mann of fraud or misconduct”.

In his exhibits, Mann included nine reports: four reports from the two universities and five reports from government agencies. In his Reply Memorandum, he stated that “two universities and six governmental agencies independently investigated the allegations of fraud and misconduct” and that he had “been exonerated of fraud and misconduct no less than eight separate times”. While the NOAA OIG is not itemized on each occasion, the only reasonable interpretation of these claims is that the NOAA OIG (and Muir Russell, Oxburgh, Commons Committee and DECC) are included.

However, the NOAA OIG investigation was limited to NOAA employees, and only in respect of their conduct while they were NOAA employees. The NOAA OIG neither investigated Mann’s conduct nor “exonerated” him. That makes the NOAA OIG the fifth misrepresented investigation of the five that I have re-examined thus far.

Out-of-Context Quotation
Mann’s pleadings in connection with the NOAA OIG investigation contain yet another egregious example of out-of-context quotation that misrepresented the record. They stated that the NOAA OIG investigation had found “‘no evidence’ of inappropriate manipulation of data”, citing pages 11-12 of the report, pages dealing with GHCN-M:

[noaa quotation 1 – screenshot of the passage from Mann’s pleadings]

However, the NOAA OIG had commented only on whether the CRU emails contained evidence that NOAA had “inappropriately manipulated data comprising the GHCN-M dataset”. Mann’s quotation was wrenched out of its original context and used to support assertions that go far beyond this narrow finding.
[noaa quotation 3 – screenshot]

In addition, Mann’s claim that the NOAA OIG “examined all of the CRU e-mails, including the November 16, 1999 e-mail referenced above in which Professor Jones used the words ‘trick’ and ‘hide the decline’” is, to say the least, highly misleading. The NOAA OIG report does not mention or address the “trick” email, with which NOAA scientists were not involved. The report clearly stated that the NOAA OIG selected eight emails “which, in [their] judgment, warranted further examination to clarify any possible issues involving the scientific integrity of particular NOAA scientists or NOAA’s data”. The “trick” email was not one of the eight.

Contemporary Coverage
Mann’s pleadings stated that “all of the above reports and publications were widely available and commented upon in the national and international media”, that such coverage was read by the defendants and “laid to rest” “any question regarding the propriety of Mann’s research”.

There was negligible coverage of the NOAA OIG report in traditional media, but it was extensively covered on climate blogs (on both sides of the aisle), where the coverage was almost entirely devoted to the new information on Wahl’s admission that he had destroyed his email correspondence with Briffa about IPCC AR4, following Mann’s transmission of Jones’ deletion request, and to the negligence of the Penn State Inquiry Committee in failing to interview Wahl. Climate Audit played a role in the coverage: the transcript of Wahl’s interview with the OIG was first released at CA here, and later linked in an online article at Science.

The OIG report had been released on February 23, 2011, which (by coincidence) was one day after the NSF OIG had visited me in Toronto. I published several posts on the report: a first post on the news that the OIG had asked Eugene Wahl whether he had deleted emails and on Wahl’s admission that he had, linking this to the seemingly wilful blindness of the Penn State Inquiry Committee report; a second on the inconsistency between statements by NOAA lawyers and by Susan Solomon; and a third on the misrepresentation of the OIG findings in NOAA’s press release.

A couple of weeks later, I was sent a transcript of Wahl’s evidence to the OIG on the incident, which I published as follows:

Q. Did you ever receive a request by either Michael Mann or any others to delete any emails?
A. I did receive that email. That’s the last one on your list here. I did receive that…

Q. So, how did you actually come about receiving that? Did you actually just — he just forward the — Michael Mann — and it was Michael Mann I guess?
A. Yes
Q. — That you received the email from?
A. Correct …
A. To my knowledge, I just received a forward from him.
Q. And what were the actions that you took?
A. Well, to the best of my recollection, I did delete the emails.

This story was covered by Chris Horner, a CEI associate, who made the obvious observation that the Penn State Inquiry Committee had neglected to interview Wahl, even though its terms of reference required it to examine whether Mann “directly or indirectly” participated in “any actions” with the “intent to delete” emails:

Did you engage in, or participate in, directly or indirectly, any actions with the intent to delete, conceal or otherwise destroy emails, information and/or data, related to AR4, as suggested by Phil Jones?

Horner pointedly asked:

So, were Penn State’s investigators staggeringly incompetent, willfully ignorant, or knowingly complicit?

Horner continued:

This begs the same questions of PSU as it does of the UK’s two supposed inquiries into ClimateGate, which were also cited as “clearing” the participants. Obviously we know that’s not possible because, if either had bothered to interview Wahl, they’d know what we now know. Wahl says Mann did indeed ask Wahl to destroy records, and Wahl did.

To many third parties, the most plausible interpretation of events was that Mann had forwarded Jones’ deletion request in the expectation that Wahl would act on it (as he did). However, although Mann admitted that he had forwarded Jones’ deletion request to Wahl, he claimed that he had sent the email to Wahl only because he felt Wahl “needed to see it”. Mann said that Horner’s (supposed) claim that Mann had “told” Wahl to delete emails was “a fabrication, a lie and a libelous allegation”, a “despicable smear” that spoke to the “depths of dishonesty of professional climate change deniers” like Horner and others (including me):

The claim by fossil fuel industry lobbyist Chris Horner in his “Daily Caller” piece that I told Eugene Wahl to delete emails is a fabrication –a lie, and a libelous allegation. My only involvement in the episode in question is that I forwarded Wahl an email that Phil Jones had sent me, which I felt Wahl needed to see. There was no accompanying commentary by me or additional correspondence from me regarding the matter, nor did I speak to Wahl about the matter. This is, in short, a despicable smear that, more than anything else, speaks to the depths of dishonesty of professional climate change deniers like Chris Horner, Marc Morano, Stephen McIntyre, and Anthony Watts.

Mann did not identify any actual mis-statements on my part in relation to this incident, nor, to my knowledge, were there any.

Horner refused to back down, pointing out, among other things, that the words of Mann’s accusation did not correspond to the language of Horner’s original post:

Mann’s response is typically off point from the question

Please state where I “claim . . . that [Mann] told Eugene Wahl to delete emails,” and also what is libelous, Mr. Mann. If you do the latter, I am happy to retract it.

But, “Wahl says Mann did indeed ask Wahl to destroy records, and Wahl did” doesn’t do it, unless you want to crop off one end of the sentence (“Wahl says”) and replace it with something more appealing to your thesis (an inside joke for those familiar with the whole Hockey Stick saga). Chuckle.

Your allegation is false until you somehow demonstrate otherwise, and your problem lies with the NOAA inspector general whose transcript indicates these events transpired.

Mann didn’t follow through on his libel claims against Horner but, two weeks later, did file a libel suit against Tim Ball and the Frontier Centre for making the very old Penn State–state pen joke about him.

Gavin Schmidt and others have argued that Wahl’s destruction of emails, however unseemly, was not an offence and that Mann’s role in forwarding Jones’ deletion request was therefore likewise not an offence. Perhaps, perhaps not. But given the question before the Penn State Inquiry Committee and the evidence available to it – “Did you engage in, or participate in, directly or indirectly, any actions with the intent to delete, conceal or otherwise destroy emails, information and/or data, related to AR4, as suggested by Phil Jones?” – the Inquiry Committee was not justified in its finding that “there exists no credible evidence that Dr. Mann had ever engaged in, or participated in, directly or indirectly, any actions with intent to delete, conceal or otherwise destroy emails, information and/or data related to AR4, as suggested by Dr. Phil Jones”. Mann’s act of forwarding Jones’ deletion request to Wahl was such evidence, and the Inquiry Committee ought to have forwarded this question to the Investigation Committee, where Mann could have presented his defence, such as it was.

As an editorial comment, surely Mann’s proper course of action, upon receiving Jones’ request that he destroy documents, was to reply to Jones immediately, stating (1) that he refused to do so and (2) that he would not forward the request to the much more junior Wahl, and (3) urging Jones and Briffa to reconsider their plans to destroy documents. It is disquieting that none of the investigations set out the standards that would be expected outside of climate science.

In the event, Wahl understood that he was expected to destroy documents, and he did so. We’ll never know whether the Penn State Investigation Committee would have accepted Mann’s defence, because the Penn State Inquiry Committee did not refer the matter for investigation. (According to information provided to me by a member of the committee, the proceedings of the Inquiry Committee were compromised by the continued involvement of a Penn State faculty member who was obliged to recuse himself, but did not do so, despite a statement in the report that he had.)

In passing, I’ll refer readers to an interesting obstruction of justice case involving Frank Quattrone, a securities executive. At the time, Quattrone’s firm was under SEC investigation for its handling of IPO trading. It was then relatively late in the calendar year, and an administrator at his firm had sent a standard memo to all staff reminding them of the firm’s document retention policies and procedures. A few hours later, after learning that news of the investigation had leaked to the Wall Street Journal and being told to retain counsel, Quattrone sent an email to his staff endorsing the seemingly routine instruction as follows:

having been a key witness in a securities litigation case in south texas (miniscribe) i strongly advise you to follow these procedures

Quattrone’s email was countermanded the following day by the firm’s legal department and no documents were deleted. Nonetheless, Quattrone was charged with obstruction of justice under pre-Sarbanes-Oxley section 1512. The case had a complicated history and went on for years. The key point for the present discussion is that an appeal court ruled that a trier of fact could have concluded that Quattrone acted with a “corrupt intent”, an element of the offence, even though, on its face, Quattrone was endorsing a legitimate request.

Conclusion
As with the four investigations considered previously, Mann’s claim that the NOAA OIG (Department of Commerce) “investigated” and “exonerated” Mann himself was untrue. In addition, Mann’s pleadings contained further gross misrepresentations of the investigation through selective misquotation or misleading statements. The OIG report re-opened questions about Mann’s role in Eugene Wahl’s destruction of emails as requested by Phil Jones, renewing criticism of the seemingly wilful obtuseness of the Penn State Inquiry Committee. The episode even included accusations of libel by Mann against CEI’s Chris Horner as well as wild accusations of “dishonesty” by Mann against various critics, including me.

285 Comments

  1. JCM
    Posted Feb 27, 2014 at 9:51 AM | Permalink

    Mann is ‘condemned by silence’. Forwarding an email and not making a comment implies agreement.

    • Will J. Richardson
      Posted Feb 27, 2014 at 1:00 PM | Permalink

      Correct, silence does imply consent. By passing the “deletion” email along to Wahl without comment, the law infers that Mann approved of the deletion. The maxim is:

      Qui tacet consentire videtur, ubi loqui debuit ac potuit. (“He who is silent, when he ought to have spoken and was able to, is taken to agree.”)

    • Will J. Richardson
      Posted Feb 27, 2014 at 1:11 PM | Permalink

      I was wrong. Mann was not entirely silent. In response to the email from Jones, Mann replied that he would forward the email to Wahl and provided Jones with Wahl’s new email address.

      Mr. McIntyre quotes and discusses the email exchange here: New Light on “Delete Any Emails”

  2. Chuck Norcutt
    Posted Feb 27, 2014 at 10:12 AM | Permalink

    The conclusion section is missing the word “neither” in this sentence fragment:
    “…NOAA OIG (Department of Commerce) “investigated” nor “exonerated” Mann…”

  3. pottereaton
    Posted Feb 27, 2014 at 10:57 AM | Permalink

    In the course of its inquiry, the department examined all of the CRU e-mails, including the November 16,1999 email referenced above in which Professor Jones used the words “trick” and “hide the decline.” The department found “no evidence” of inappropriate manipulation of data.

    Henceforth to be known as “Mike’s NOAA trick.”

    • Don B
      Posted Feb 27, 2014 at 4:18 PM | Permalink

      Don’t forget “Mike’s AGU trick,” during which he hid the hiatus.

      Mike’s AGU Trick

  4. David P
    Posted Feb 27, 2014 at 11:00 AM | Permalink

    While I recall concluding at the time the PSU inquiry was a whitewash, I had forgotten they didn’t even try to put a diligent face on it. Wow.

    • KNR
      Posted Feb 28, 2014 at 12:53 PM | Permalink

      To be fair, they did actually ask Mann if he had done anything wrong; you cannot expect them to then check out the answer he gave, can you!

  5. Posted Feb 27, 2014 at 11:09 AM | Permalink

    By way of background, readers should note that NOAA manages the GHCN-M data set. CRU manages the CRUTEM dataset. The “hide the decline” business refers to deleting a portion of a 3rd data set, Briffa’s tree ring data, and replacing it with some CRUTEM data.
    Just so I understand, the discrepancy in the quotes is as follows (if I have followed the discussion correctly). The OIG report says, emph added:

    4. Determine whether any of the CRU emails indicated that NOAA:
    (a) inappropriately manipulated data comprising the GHCN-M temperature dataset.
    [response:] We found no evidence in the CRU emails that NOAA inappropriately manipulated data comprising the GHCN-M dataset.

    which I take to mean they found no evidence in the CRU emails that NOAA fiddled with the NOAA data base; not surprisingly since the CRU emails didn’t talk about that issue.
    Mann’s legal team cited the above as follows:

    In the course of its inquiry, the department examined all of the CRU emails, including the November 16, 1999 e-mail referenced above in which Professor Jones used the words “trick” and “hide the decline.” The department found “no evidence” of inappropriate manipulation of data.

    By leaving out the qualifiers “NOAA” and “GHCN-M dataset”, and by priming the reader with a reference to the unrelated deletion of Briffa’s post-1960 tree ring data in the WMO diagram, this rendering insinuates a much wider investigation and exoneration than actually took place.

    • bernie1815
      Posted Feb 27, 2014 at 11:41 AM | Permalink

      Ross: Nicely done. Yet another nail…

      • tomdesabla
        Posted Feb 27, 2014 at 8:52 PM | Permalink

        Thank God for economists and statisticians and their meddling in other disciplines. When it comes to “leaving it to the experts” these folks just didn’t get the memo. Without them we would be lost.

    • D.J. Hawkins
      Posted Feb 27, 2014 at 5:46 PM | Permalink

      Mann’s pleading is even more egregious; he makes the claim that “all” the e-mails were examined, when in fact it was only eight. How defective does a pleading have to be before the court clamps down on the offender?

    • MikeN
      Posted Mar 1, 2014 at 3:47 PM | Permalink

      Hmm, in his reply to the PNAS comment, Mann insinuates to casual readers that McIntyre is ignorant of multivariate regression algorithms.

  6. Posted Feb 27, 2014 at 11:33 AM | Permalink

    “the very old Penn State-state pen joke”
    Reminded me of this, from 1982:

    • Posted Feb 27, 2014 at 11:34 AM | Permalink

      Joke is at 8:50.

    • David L. Hagen
      Posted Feb 27, 2014 at 1:47 PM | Permalink

      Penn. State – State “Pen” comparisons/jokes go back at least to 1918. e.g.,

      . . .we passed the Pennsylvania State Penitentiary, only a few miles separating Penn State College from the Penn. State “Pen.”.

      The Carlisle Arrow and Red Man, Vol. XIV, Feb. 1, 1918, No. 20
      Anyone find an earlier ref?

      • pottereaton
        Posted Feb 27, 2014 at 4:07 PM | Permalink

        It was given the name “Pennsylvania State College” in 1874, so you probably won’t find one earlier. (It became PS University in 1953.)

    • Tom
      Posted Feb 28, 2014 at 11:32 AM | Permalink

      As I have said before it is quite troubling that it is only when people go after Penn State that Mann seems to sue. He has had much worse things said about him than the old State Penn joke. But yet that is what causes him to sue. IMHO he wants to sue everyone as his e-mails dictate but Penn State holds him back. It is only when they are insulted that he is given the go ahead.

      Dr. Mann has to be very careful when it comes to Penn State’s involvement in this. They are an interested party in the outcome of these cases, and if they are working behind the scenes to help Mann then, as I have stated before, all of his legal aid isn’t a gift. It’s income and it’s taxable. With Mandia being a Penn State grad, this “dark money” going to Mann has Penn State’s fingerprints all over it.

      I would hope that after the legal cases are all said and done the IRS takes a good hard look at who was behind the organizing of Mann’s legal funds.

      This case will run into the millions when it is all said and done. If Penn State were found to be working behind the scenes then Mann would be subject to income tax on all of it, federal and state. Quick, what is 42% of $2,000,000?
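
      [For reference: a worked version of the commenter’s rhetorical arithmetic, assuming his 42% figure is intended as a combined federal and state rate: $0.42 \times \$2{,}000{,}000 = \$840{,}000$.]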

  7. Dave L.
    Posted Feb 27, 2014 at 11:54 AM | Permalink

    It is ironic that the NOAA OIG investigation limited the scope of its interviews to NOAA employees and did not include NOAA contractors such as Mann. (The investigation was very critical of NOAA contracting practices.)

  8. DayHay
    Posted Feb 27, 2014 at 12:58 PM | Permalink

    I believe you are missing the point. All the investigations that did not concern Mann, DO NOT CONCERN MANN… sheesh.

    • Brad R
      Posted Feb 27, 2014 at 2:32 PM | Permalink

      “All the investigations that did not concern Mann, DO NOT CONCERN MANN”

      Which means all those investigations did not “exonerate” Mann, and it is inappropriate for Mann to claim that they did.

  9. Steven Mosher
    Posted Feb 27, 2014 at 1:08 PM | Permalink

    One of the things that was absolutely maddening was people’s insistence that Climategate was about temperature reconstructions. Skeptics did not help themselves by misrepresenting the centrality of that issue.

    The upshot is people made that the focus of investigations and of course found nothing. The real issues received short shrift.

    • Posted Feb 27, 2014 at 1:35 PM | Permalink

      Maddening yes – but attributing any blame to sceptics, however stupid, doesn’t sit well with me. Of course there are stupid sceptics but anyone who looked at Climategate at all diligently would arrive here at Climate Audit and read the intelligent concerns of the host. To always ignore Steve and pick up on the ignorance of a two-bit ‘sceptic’ of choice was willful misdirection. There, I’ve attributed motive. But by now it’s beyond doubt.

      • thisisnotgoodtogo
        Posted Feb 27, 2014 at 2:32 PM | Permalink

        The official response should never be predicated on public rumour, but should be seen in light of mandated codes of conduct, and law.
        One exception to the blame-laying on the agencies which I might admit is Monckton’s acceptance at face value of one item proffered as an excuse. He was apparently there as the token skeptic.

      • Pat Frank
        Posted Feb 27, 2014 at 6:00 PM | Permalink

        Good point, Richard. Diverting the blame is opportune for one who has no legitimate defense. The problem with the paleo-temperature reconstructions, of course, is that they’re pseudo-science. Splicing the instrumental record onto them can’t delegitimize them any further.

      • Orson
        Posted Feb 28, 2014 at 6:47 AM | Permalink

        Yes, I think Richard Drake is off base here. CG occasioned the media’s knee-jerk awakening, and thereby included previously neglected Hockey Stick scrutiny. Once that was included in the CG narrative – and the HS was an important precipitating event for the FOIA/FOA route to get data under-wraps that did eventuate in CG itself.

        Thereafter, skeptics had to answer old questions and the media, finally, began reporting the back-logged criticisms, especially in the frenzy that is the London press.

        Contrary to Richard, I believe the reinforcing synergies were already set up by December of 2009, in the wake of Copenhagen Climate Change Conference. COP15.

        • Posted Feb 28, 2014 at 9:26 AM | Permalink

          Orson: Thank you for seeking to correct where I was ‘off base’. Unfortunately I don’t fully follow what you have written, including your first word. Who were you indicating that you were agreeing with when you wrote ‘Yes’?

      • pottereaton
        Posted Feb 28, 2014 at 11:04 AM | Permalink

        Richard’s point is valid, as is Mosh’s. Among skeptics there were the equivalent of conspiracy theorists who would make charges that could not be substantiated, and the media would often pick up on them rather than on qualified commentators such as the proprietor of this website. This was particularly true of warmist blogs, which would use the statements of indignant and often uninformed commenters on skeptic sites as a brush to paint all skeptics.

    • dfhunter
      Posted Feb 27, 2014 at 8:15 PM | Permalink

      O/T to main post but you’re correct Steve

      it’s easy to forget the risks with little/no reward this guy/girl took, but worth restating his/her reasons every now and again –

      “It’s easy for many of us in the western world to accept a tiny green inconvenience and then wallow in that righteous feeling, surrounded by our “clean” technology and energy that is only slightly more expensive if adequately subsidized.

      Those millions and billions already struggling with malnutrition, sickness, violence, illiteracy, etc. don’t have that luxury. The price of “climate protection” with its cumulative and collateral effects is bound to destroy and debilitate in great numbers, for decades and generations.

      Conversely, a “game-changer” could have a beneficial effect encompassing a similar scope.

      If I had a chance to accomplish even a fraction of that, I’d have to try. I couldn’t morally afford inaction. Even if I risked everything, would never get personal compensation, and could probably never talk about it with anyone.

      I took what I deemed the most defensible course of action, and would do it again (although with slight alterations — trying to publish something truthful on RealClimate was clearly too grandiose of a plan ;-).

      Even if I have it all wrong and these scientists had some good reason to mislead us (instead of making a strong case with real data) I think disseminating the truth is still the safest bet by far.

      …..

      Over and out.

      Mr. FOIA

      • Will J. Richardson
        Posted Feb 27, 2014 at 10:38 PM | Permalink

        Thank you, Mr. FOIA

      • pottereaton
        Posted Feb 27, 2014 at 10:50 PM | Permalink

        Yes, I also thank you, Mr. FOIA. In perhaps the most crucial and misunderstood issue of our time, you were a “game-changer.”

        Can anyone imagine what this defamation case would look like if it were not for the publication of those emails? And beyond that, what the entire controversy would look like?

        • stan
          Posted Feb 28, 2014 at 2:14 PM | Permalink

          Long before CG there was already a huge pile of evidence to support Steyn’s statements re: Mann. E.g. read CA prior to Nov. 2009.

      • kim
        Posted Feb 28, 2014 at 8:28 PM | Permalink

        It’s for the grandchirruns; they thank you, too.
        ==============

    • NikFromNYC
      Posted Feb 27, 2014 at 9:28 PM | Permalink

      Splitting hairs in the face of outright corruption much?

      “For your eyes only…Don’t leave stuff lying around on ftp sites – you never know who is trawling them. The two MMs have been after the CRU station data for years. If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send to anyone….Tom Wigley has sent me a worried email when he heard about it – thought people could ask him for his model code. He has retired officially from UEA so he can hide behind that.” – Phil Jones to Michael Mann

    • Robert Austin
      Posted Feb 27, 2014 at 11:53 PM | Permalink

      Steven,
      As I see it, the investigations were not carried out by stupid people that would have been led astray by the “babel” from the unwashed skeptics. Ergo, focus of the investigations on the temperature records rather than the proxy reconstructions was deliberate, willfully obtuse misdirection with the goal of tilting the scales towards exoneration. A variation on the drunk searching for his keys under the streetlight analogy, but in this case the drunk isn’t searching in the light where it is easier, he’s searching where he knows there are no keys.

    • Willis Eschenbach
      Posted Feb 28, 2014 at 12:59 AM | Permalink

      Steven Mosher Posted Feb 27, 2014 at 1:08 PM

      One of the things that was absolutely maddening was people’s insistence that Climategate was about temperature reconstructions. Skeptics did not help themselves by misrepresenting the centrality of that issue.

      The upshot is people made that the focus of investigations and of course found nothing. The real issues received short shrift.

      Which un-named “skeptics” are you referring to? I wrote extensively about the issues from my perspective as a participant. I was clear that the issue wasn’t temperature reconstructions. I did my best to point to the real malfeasance exposed by the emails, lying and cheating and destroying evidence and the like.

      And I don’t recall Steve’s discussions about climategate focusing on temperature reconstructions. He has focused on the reconstructions separately, but not as a part of the climategate discussion that I recall …

      w.

      • Steven Mosher
        Posted Feb 28, 2014 at 2:22 PM | Permalink

        you are not a skeptic willis. you by your own admission are a heretic.
        or did you forget that?. or are you now claiming skeptic status just to disagree with me?

        you want names? you asking for names? perhaps I should publicize the mails from some prominent types suggesting that they need to find a way to tie climategate to NOAA. really? you want names?

        Be careful what you wish for.

        http://pjmedia.com/blog/climategate-noaa-and-nasa-complicit-in-data-manipulation/

        you know in the OJ trial Chris Darden made a critical error in asking a question
        when he didnt know the answer. you wanted names.

        This is your Darden moment

        “Recent revelations from the Climategate emails, originating from the Climatic Research Unit at the University of East Anglia, showed how all the data centers — most notably NOAA and NASA — conspired in the manipulation of global temperature records to suggest that temperatures in the 20th century rose faster than they actually did.

        This has inspired climate researchers worldwide to take a hard look at the data proffered, by comparing it to the original data and to other data sources. An in-depth report, co-authored by myself and Anthony Watts for the Science and Public Policy Institute (SPPI), compiles some of the initial alarming findings with case studies included from scientists around the world.”

        That report went on to accuse NOAA of fraud.

        Some good people had to waste a lot of time because of the weak attempt to tie
        climategate to NOAA. This attempt was wrong headed. It was deliberate.
        and I personally would have NONE OF IT. want the mails? or does this satisfy you?
        there’s more……

        Steve Mc: mosh, as you are aware, this line of argument was never raised at Climate Audit, where I’ve observed over and over that the Climategate dossier says virtually nothing about the temperature data. Nor have I taken particular issue with the temperature data, warning readers that there was unlikely to be any smoking gun in the CRU data other than how little work they did on it. The blog article that you cited coat-racked an essentially unrelated prior dispute about temperature data onto Climategate. As you and I have discussed, such coatracking and mischarging contributed to the failure of the various investigations to investigate the actual issues. But while it may have contributed to the failure, by far the major responsibility for the inadequacy of the investigations must lie with the investigations themselves, which not only did a poor job but even insolently flouted sensible recommendations from the Commons Committee. I also think that the detailed chronology shows that the diversion into temperature data had already occurred well before this particular blog article and thus you’re being overly dramatic in characterizing this particular blog post as a “Chris Darden” moment. My impression at the time was that concerns over temperature data arose first among establishment types (John Beddington types, though I haven’t parsed the exact history) and that this had occurred long before the blog article in question.

        • bmcburney
          Posted Feb 28, 2014 at 5:47 PM | Permalink

          Publish and be d—-d, Mosher. I, for one, will be very happy to read any e-mail you may wish to share.

        • Steven Mosher
          Posted Mar 3, 2014 at 11:16 AM | Permalink

          I’m not talking about “this post” being a darden moment. I’m talking about Willis’s claim being a darden moment. You and I both know that the mischaracterization BY OTHERS of climategate as a smoking gun on the temperature series afforded the investigative committees all the cover they needed to do the shitty job they did.
          In the end of course the committees bear responsibility for their actions. However, I would hope people can look back at this and learn a lesson: don’t overcharge a case. Don’t go a bridge too far. If you have 5 good arguments and 1 lousy one, drop the lousy one because your opponent will seize on the lousy one and ignore the good ones.

          Steve: yup. the odds of us being on a different page were pretty small.

    • MikeN
      Posted Mar 1, 2014 at 4:41 PM | Permalink

      Steve, ClimateGate could be about Yamal. That’s what Phil Jones thought.

      • Posted Mar 3, 2014 at 12:09 PM | Permalink

        I’ve always had a lot of time for that part of the Jones ‘oeuvre’.

  10. David L. Hagen
    Posted Feb 27, 2014 at 1:36 PM | Permalink

    Does Mann come with clean hands?

    . . .a defendant might claim the plaintiff (party suing him/her) has a “lack of clean hands” or “violates the clean hands doctrine” because the plaintiff has . . .done something wrong regarding the matter under consideration.

    Compare: “a transcript of Wahl’s evidence to the OIG on the incident . . .

    “Q. Did you ever receive a request by either Michael Mann or any others to delete any emails?
    A. I did receive that email. . . .”
    “Q. And what were the actions that you took?
    A. Well, to the best of my recollection, I did delete the emails …”

    With Mann:

    “I forwarded Wahl an email that Phil Jones had sent me,. . .This is, in short, a despicable smear that, more than anything else, speaks to the depths of dishonesty of professional climate change deniers like Chris Horner, Marc Morano, Stephen McIntyre, and Anthony Watts.”

  11. Stacey
    Posted Feb 27, 2014 at 1:55 PM | Permalink

    “(2) he would not forward the request to the much more junior Wahl;”

    A controlling mind does not necessarily need to ask.

    Mr McIntyre I’ve resisted for as long as I could, some coin of the realm is heading to CA. 🙂

  12. Posted Feb 27, 2014 at 2:48 PM | Permalink

    Are we sure Mann is Mann’s real name?

  13. William Larson
    Posted Feb 27, 2014 at 3:02 PM | Permalink

    “…been exonerated of fraud and misconduct no less than eight separate times.” – Mann. Then there is Big Jule (“from Cicero, Illinois”), in “Guys and Dolls”, showing off his “innocence”: “Thirty-two arrests, NO convictions!” So I was hoping that Mann could get his number up to thirty-two as well, but Mr. McIntyre keeps whittling it back instead.

  14. Posted Feb 27, 2014 at 3:47 PM | Permalink

    I have a certain sense of deja vu about Mann’s claims to be “exonerated” by so many authorities and, for example, the longstanding claims of a 97% consensus based on spurious surveys.

    In both cases the relevant questions were never asked. In both cases the alarmists pretend that certain issues were addressed and distort what was actually said.

  15. Posted Feb 27, 2014 at 3:54 PM | Permalink

    Come to think of it, there is also a parallel with Mann’s Nobel claims.

    As the EU has been awarded a Nobel Prize and as I am a citizen of and contributor to the EU, I can, according to Mann’s logic, claim to have shared in a Nobel Prize.

    And as these various Climategate investigations found no evidence of any wrongdoing by me (because they didn’t look), I too, according to Mannian logic, can claim to have been “exonerated” by them.

    • pottereaton
      Posted Feb 27, 2014 at 4:13 PM | Permalink

      Steyn on Mann’s Nobel Prize on October 24th:

      In the same spirit, I see that I’ve just been awarded the 2012 Nobel Peace Prize. Under Ireland’s citizenship law, I’m an Irish national (through my father). Ireland is a member of the European Union. The EU has just been given the Nobel Peace Prize. QED. Come to think of it, my mother’s Belgian, so I’ve been awarded two Nobel Peace Prizes.

      I defer to the expertise of my colleague Jay Nordlinger in these matters, but I believe this will be an historic trial: The first time one Nobel Peace Prize winner has sued another Nobel Peace Prize winner — at least until Obama sues Rigoberta Menchu over who’s got the fakest fake memoir. I’ll bring my Nobel medal if Dr. Mann brings his.

  16. Fred
    Posted Feb 27, 2014 at 4:00 PM | Permalink

    One can only wonder if Dr. Mann knows the definition of the word exonerate.

  17. Posted Feb 27, 2014 at 4:18 PM | Permalink

    You guys just don’t appreciate how far Mann will go to deliver a joke.

    • thisisnotgoodtogo
      Posted Feb 27, 2014 at 8:27 PM | Permalink

      Thanks for the giggle!
      I imagine the significance of the video hi-jinks is not quite “there” for some of the younger readers.

  18. CaligulaJones
    Posted Feb 27, 2014 at 4:53 PM | Permalink

    We’ve had a similar case here in Ontario: a government was caught in a scandal, and someone tried to delete some emails. Seems our cops are a bit better at their jobs than some others one could mention:

    http://www.thestar.com/news/queenspark/2014/02/27/opp_weighing_mischief_charge_in_deleted_liberal_emails_probe.html

  19. mpaul
    Posted Feb 27, 2014 at 4:57 PM | Permalink

    It’s extremely important to note (as Steve points out) that Wahl was much more junior than Mann or Jones.

    Imagine that you (in this scenario, your name is Tony) are a mid-level employee in a large company. The CEO, whose name is Bob, forwards you an email that had been written by another exec at the company. The email says, “Bob, can you ask Tony to call customer X to understand the problem they are having with our product”. The email is simply forwarded by Bob with no additional text from Bob. What are you going to do:

    A. Just file it away assuming that Bob is just sending it to you for your information
    B. Call the customer

    If I asked this question of 1,000 candidates in a job interview, how many people would say ‘A’? I bet 0. Mann’s explanation fails the ‘reasonable person’ test.

  20. Ben
    Posted Feb 27, 2014 at 5:23 PM | Permalink

    RE: “all of the above investigations found that there was no evidence of any fraud, data falsification, statistical manipulation, or misconduct of any kind by Dr. Mann”.

    Sounds race-horsey, doesn’t it?

    By logical extension, we can also say…
    “all of the above investigations found that there was no evidence of any fraud, data falsification, statistical manipulation, or misconduct of any kind by Steve McIntyre, Ross McKitrick, Albert Einstein, Elvis Presley, Mitch Hedberg et al…”.

    • Donn Armstrong
      Posted Feb 27, 2014 at 5:53 PM | Permalink

        The O.J. trial didn’t find “evidence of any fraud, data falsification, statistical manipulation, or misconduct of any kind by Dr. Mann” either.

      • JEM
        Posted Feb 28, 2014 at 12:38 AM | Permalink

        I assume you’d like credit if Steyn stirred that tidbit into his pleadings…

    • Steven Mosher
      Posted Feb 28, 2014 at 11:47 AM | Permalink

      None of the investigations found evidence of Elvis’ death

    • Skiphil
      Posted Feb 28, 2014 at 1:44 PM | Permalink

      Since none of the investigations listed by Mann found anything wrong with Bernie Madoff, Enron, Bre-X, Lance Armstrong, the Nixon White House, Teapot Dome, etc. etc. all related parties are now “exonerated” by the 8 investigations….

  21. Gary Pearse
    Posted Feb 27, 2014 at 6:20 PM | Permalink

    “all of the above investigations found that there was no evidence of any fraud, data falsification, statistical manipulation, or misconduct of any kind by Dr. Mann”.

    You don’t suppose that he’s deviously exonerating himself – “..there was no evidence of any fraud…by Mann” – because there was no mention of Mann? No, that would be too far out! er..

    If this is so, it would appear to be the same kind of slippery obfuscatory footwork as is widely alleged of his scientific product.

  22. Posted Feb 27, 2014 at 7:41 PM | Permalink

    This series of articles is not helping Mann to fulfill a life dream of being the replacement for Pachauri or Hansen.

    Thankfully.

    John

  23. Ed Barbar
    Posted Feb 27, 2014 at 8:51 PM | Permalink

    I’m wondering how much it matters to the lawsuit if one is exonerated by 8 claimed investigations vs only 1 investigation from a legal perspective. That is, if there is 1 investigation that does exonerate, and the other 7 are silent one way or the other, that’s good enough.

    Here I’m thinking that it takes time and energy to make the case for the negative proposition “These investigations did not exonerate Mann,” thereby consuming time and thought.

    In other words, claiming 8 could be a Machiavellian tactic that causes the defense to work harder and expend more money.

    • tomdesabla
      Posted Feb 27, 2014 at 9:01 PM | Permalink

      I don’t know…If I claim 8 pieces of evidence are there, and it ends up that there is only one – I don’t look too good to the judge. Besides, Steve isn’t done yet; there might not even be one credible exoneration.

      • Ed Barbar
        Posted Mar 1, 2014 at 3:08 PM | Permalink

        I’m not suggesting there is any actual exoneration of Mann, and it’s not material to what I’m suggesting. Mostly just navel gazing and trying to understand a bit more about the process.

        Even if all 8 show no exoneration, . . ., including so many simply causes more work for the defense. That’s my basic point. From other threads, the lawyers have suggested there isn’t much in the way of penalties for writing up these kinds of accusations. All the plaintiff need do is, upon complaint about factual errors from the defense, respond within some time period with an amended complaint.

        I even wonder if it would be better as a general rule to leave the complaint as is for easily provable inaccurate statements, from a tactical perspective, in the hopes that the plaintiff perjures himself (or herself), since the complaint isn’t sworn testimony (again, from the lawyers commenting). Then you get it during a trial, or under oath, and it becomes a big deal in terms of the law.

        Or perhaps merely embarrassing the plaintiff would be good.

        Meanwhile, I’m enjoying a lot of popcorn reading through Steve’s meticulous posts.

        • tomdesabla
          Posted Mar 2, 2014 at 10:18 PM | Permalink

          Well, if Steyn could get him to repeat his claims at deposition, then that would not be amendable and would be under oath.

    • mpaul
      Posted Feb 28, 2014 at 1:04 AM | Permalink

      Mann’s argument is that Steyn exhibited actual malice since 8 official investigations cleared him of all wrongdoing and all media coverage declared him a saint, and therefore Steyn had no basis for alleging that Mann’s work was suspect other than out of antipathy for Mann. What Steve’s work does is to show that there was ample reason for someone to be suspicious of Mann’s work.

      • Posted Feb 28, 2014 at 4:08 AM | Permalink

        Your insights plus Josh’s convince me that the hockey stick is no longer mere settled science but holy relic.

      • Unscientific Lawyer
        Posted Feb 28, 2014 at 5:51 PM | Permalink

        mpaul:

          “Mann’s argument is that Steyn exhibited actual malice since 8 official investigations cleared him of all wrongdoing…”

          True, but before Mann even gets to malice, he has to get over the hurdle of showing that Steyn et al.’s statements are verifiably false. If he can’t demonstrate that, the statements can’t be defamatory. That is why he is also pointing to the investigations as verification that the statements are false.

        Although Mann’s claim that the investigations “exonerated” him was enough to convince the judge to deny the motion to dismiss (because the judge had to accept Mann’s claims as true), it’ll be much harder to survive a summary judgment because, as has been pointed out, it doesn’t appear that the investigations actually did what Mann claims.

          I also can’t help but wonder if Mann’s attorneys will use the investigations to block discovery, the argument being that since the investigations establish the falseness of Steyn’s statements, no discovery into Mann’s data, etc., should be allowed. If that argument doesn’t work for Mann, and the judge allows such discovery, expect a voluntary dismissal and a press release from Mann stating that, since the court basically vindicated him by refusing to dismiss the case and he has thereby cleared his name, he has decided to drop the suit.

        • Posted Feb 28, 2014 at 6:12 PM | Permalink

          Re: Unscientific Lawyer (Feb 28 17:51),

          If Mann drops the case for the reasons you stated, what would be the ramifications for Steyn’s countersuit?

        • MrPete
          Posted Feb 28, 2014 at 7:02 PM | Permalink

          Re: Unscientific Lawyer (Feb 28 17:51),
          Ignorant question: is there any means by which Mann can be forced to go through discovery, once his suit begins to move forward? Or (alternatively) is he in the driver’s seat in that regard, able to drop the suit at any point where discovery threatens to expose something embarrassing?

        • Unscientific Lawyer
          Posted Feb 28, 2014 at 7:03 PM | Permalink

          charles the moderator:

          Steyn would still be able to pursue his counterclaim, but I seriously doubt it would survive a motion to dismiss from Mann. There is such a thing as a “constitutional tort,” but as far as I know, such a claim can only be asserted against governmental actors. I read Steyn’s counterclaim, but I can’t figure out the basis for it. More than likely, Steyn’s counterclaim would be dismissed long before Mann ever gets to the point of deciding to pull the plug.

        • Unscientific Lawyer
          Posted Feb 28, 2014 at 7:31 PM | Permalink

          MrPete:

          Under DC’s SLAPP statute, there are some special rules that prevent any discovery up until the motion to dismiss is disposed of, but I’m not versed on the details of that. But in a typical case (and eventually in this one), each party can send requests to the other side to produce documents, answer written questions under oath, or provide sworn testimony during an oral deposition.

          Once discovery requests are made, the other side has a certain amount of time to respond (usually 30 days). This deadline is frequently extended out of professional courtesy, however. Invariably, the responding party views the discovery as unduly burdensome or outside the proper scope of discovery and objects to producing some or all of the information requested. The parties then try to work out an agreement and when that fails, a motion to compel is filed and heard by the judge who makes a ruling on what has to be produced. The party who objected then produces what he thinks the judge told him to produce, the other side disagrees, and they fight about it some more. Then, based on a review of the discovery responses, more discovery requests are made and the whole thing starts all over again. All of this frequently takes a surprisingly long time and shocking amount of money.

          So the short answer to your question is “yes.” He can be made to go thru the discovery process described above. In the absence of a counterclaim, however, Mann’s voluntary dismissal of his suit would end all discovery. However, as long as Steyn’s counterclaim survives, Mann would be obligated to respond to discovery requests. It’s possible that Steyn’s counterclaim was filed expressly for that purpose, but if so, I’m not so sure it will succeed.

        • MrPete
          Posted Feb 28, 2014 at 8:29 PM | Permalink

          Re: Unscientific Lawyer (Feb 28 19:31),

          GREAT description, thanks! That fits everything I’ve seen and heard over the years 🙂

          What a crazy battle.

        • Skiphil
          Posted Feb 28, 2014 at 9:05 PM | Permalink

          fwiw, Steyn says at the end of his Feb. 26 column that initial discovery requests are imminent:

          “When I’m back from my trip to Ottawa, we’ll be serving Dr Mann with my initial discovery requests.”

        • thisisnotgoodtogo
          Posted Mar 1, 2014 at 12:28 AM | Permalink

          “Steyn would still be able to pursue his counterclaim, but I seriously doubt it would survive a motion to dismiss from Mann. There is such a thing as a “constitutional tort,” but as far as I know, such a claim can only be asserted against governmental actors”

          Would it be possible that previous rulings by the judge place Mann’s work in the same category as a Government actor’s work?

        • pottereaton
          Posted Mar 1, 2014 at 1:15 AM | Permalink

          Unscientific Lawyer wrote:

          Although Mann’s claim that the investigations “exonerated” him was enough to convince the judge to deny the motion to dismiss (because the judge had to accept Mann’s claims as true), it’ll be much harder to survive a summary judgment because, as has been pointed out, it doesn’t appear that the investigations actually did what Mann claims.

          At this point, I think Mann will be relieved if the judge grants summary judgment dismissing the complaint. I just can’t see that Mann has anything like a compelling case.

          That will disappoint a lot of people who thought Mann would be dragged into discovery.

          My question to you is: is there any way the plaintiff’s counsel can, ummmm . . . facilitate the dismissal of the suit because he himself is not convinced he will be successful in prosecuting it? And do you think that is a possibility or perhaps likely?

        • kim
          Posted Mar 1, 2014 at 3:57 AM | Permalink

          So, will it be the Lady of Appeal, the Tiger of Settlement, or the Egrets of Dismissal? What a zoo this arena’s become.
          ==================

        • Unscientific Lawyer
          Posted Mar 1, 2014 at 11:26 AM | Permalink

          thisisnotgoodtogo:

          “Would it be possible that previous rulings by the judge place Mann’s work in the same category as a Government actor’s work?”

          No. As far as I can tell, the basis of Steyn’s counterclaim is that Mann sued him. Mann is suing as an individual, not as a state actor.

          pottereaton:

          “My question to you is: is there any way the plaintiff’s counsel can, ummmm …facilitate the dismissal of the suit because he himself is not convinced he will be successful in prosecuting it? And do you think that is a possibility or perhaps likely?”

          I don’t think that’s likely or even a possibility. First, not being convinced of success isn’t going to deter an attorney; there are very few cases that are slam dunks. Mann’s lawyers have already advised him of the reasons why his suit might not succeed, because that’s part of the lawyer’s job, too. Second, Mann’s lawyer isn’t going to do anything adverse to his client’s interests. If he were to suggest to Mann that the case be dismissed, I don’t think his client would go along with that anyway. Third, Mann has survived a motion to dismiss. That’s a big deal. His lawyers now get to do all sorts of things that will force the defendants to spend money. After a defendant has paid a few legal bills, his view of things changes and he becomes more open to once-unthinkable settlement proposals. Fourth, assuming Mann’s attorneys are not handling his case pro bono and that the legal bills are getting paid, it’s a great billing opportunity, win or lose; his lawyers wouldn’t want to stop now.

        • mpaul
          Posted Mar 1, 2014 at 1:50 PM | Permalink

          Unscientific Lawyer,

          I read Steyn’s counterclaim, but I can’t figure out the basis for it.

          Can Steyn make a civil rights claim under 42 U.S. Code § 1983 and/or 18 U.S.C. § 241? I don’t know how the courts are currently interpreting “…or other person within the jurisdiction thereof…” but I think Steyn would argue that while he is not a citizen, he publishes in the US and Mann’s action against him is based on speech published within the US and therefore he has standing as a person within the jurisdiction.

        • Unscientific Lawyer
          Posted Mar 1, 2014 at 2:43 PM | Permalink

          mpaul:

          “Can Steyn make a civil rights claim under 42 U.S. Code § 1983 and/or 18 U.S.C. § 241?”

          I doubt it. Section 241 is a criminal statute and can’t form the basis of a private cause of action. An allegation that a private litigant improperly used the courts to sue someone isn’t state action, so § 1983 is likewise inapplicable.

        • Bill
          Posted Mar 3, 2014 at 7:52 PM | Permalink

          So once again Mann relies on an appeal to authority instead of the data?

        • Fred Zarguna
          Posted Mar 17, 2014 at 5:00 PM | Permalink

          Unscientific Lawyer,
          42 USCS § 1983 says
          “Every person who, under color of any statute, ordinance, regulation, custom, or usage, …”
          Do “custom” or “usage” have specific meanings as terms of art? A lay reading of the code does not suggest to me that an improper restraint of free speech, as a denial of the privileges and immunities of US persons, is limited to government actors or to the application of statute, ordinance or regulation. Steyn, though not a citizen, is living in the US, so he is a US person and clearly protected by the statute if the application of “custom” or “usage” is as broad as it seems…

    • climatebeagle
      Posted Mar 3, 2014 at 11:23 PM | Permalink

      If the one investigation left is Penn State’s, then it has the potential to help Steyn, since that was really the point of the original article.

  24. AndrewS
    Posted Feb 27, 2014 at 9:19 PM | Permalink

    It might have been a better use of your time, Steve, to write up the investigations that actually did exonerate Mann!

    • Will J. Richardson
      Posted Feb 28, 2014 at 4:47 AM | Permalink

      Mr. McIntyre wrote up his comments on the investigations that exonerated Mann in the post between this one and the February 25th post entitled “Mann Misrepresents the UK Department of Energy and Climate Change”.

    • Skiphil
      Posted Feb 28, 2014 at 9:08 PM | Permalink

      If by “exonerate” you mean genuine substantive exoneration, rather than spurious Mannian “exoneration”, then there are exactly zero investigations which exonerate Mr. Mann.

  25. John
    Posted Feb 27, 2014 at 9:23 PM | Permalink

    I noticed that reference number 36 in Michael E. Mann’s Wikipedia entry doesn’t go to the PDF of the final investigation report as referenced in the entry. Instead it goes to Penn State news.

    Just saying

  26. Political Junkie
    Posted Feb 27, 2014 at 9:29 PM | Permalink

    Mr. McIntyre,

    I sincerely hope that you are having as much fun writing as I am reading!

  27. Bill Jamison
    Posted Feb 27, 2014 at 11:53 PM | Permalink

    I can’t think of anyone else who could lay this out as concisely as Steve does. Bravo! Very nice work as always.

  28. onwyrdsdream
    Posted Feb 28, 2014 at 2:26 AM | Permalink

    Lazy/sloppy/erroneous use of information in order to reach predetermined results by a climate change researcher. Shocking.

  29. Posted Feb 28, 2014 at 4:23 AM | Permalink

    The NOAA summarized NOAA’s procedures for preparing GHCN-M data and reported:

    “We found no evidence in the CRU emails that NOAA inappropriately manipulated data comprising the GHCN-M dataset”

    Unless there is a word missing from “The NOAA summarized NOAA’s …and reported:” (i.e. perhaps it should read “The NOAA OIG summarized …and reported”), it seems to me that “Well, they would say that, wouldn’t they?” might be the appropriate response!

    But, speaking of “appropriate” and “inappropriate” – and setting aside for a moment Steve’s observation that one wouldn’t expect to find such evidence in the CRU emails, anyway …

    In parsing that which was reported, it seems to me that the phrase “inappropriately manipulated” leaves open the possibility that (although not evident in the CRU emails which were – or were not – examined) the OIG might well have found evidence elsewhere of “manipulated data”. But who knows whether such “manipulation” was deemed to be “appropriate” or “inappropriate”?!

    So, “inappropriately manipulated data” strikes me as being a very Sir Humphrey-ish turn of phrase. Which reminded me of a similar Sir Humphrey-ish choice of wording in (at least one of) Muir Russell’s “conclusions”:

    […] We find that neither Jones nor Briffa behaved improperly by preventing or seeking to prevent proper consideration of views which conflicted with their own through their roles in the IPCC.

    If Muir Russell truly found that neither Jones nor Briffa “prevented or sought to prevent proper consideration…”, why not just say so, eh?! So, considering that those of us who have read the emails – and other relevant material – know damn well that Jones and Briffa did, in fact, do such deeds, it would appear that in Muir Russell’s view there was nothing “improper” in their doing so ;-)

    But I digress … I really want to thank you, Steve, for providing further lines of evidence in support of my thesis that (now former) faux Nobel Laureate, Michael Mann is the (faux historian) David Irving of climate science.

    And here’s another similarity that occurs to me. As I had noted in my recent post, historian Richard J. Evans (who, incidentally, spent some time on the faculty of UEA) was an Expert Witness at the 2000 Irving vs Penguin Books and Deborah Lipstadt libel trial. Evans wrote a 740-page Report for the defence, in which he noted:

    […] Unpicking the eleven-page narrative […] in Irving’s book […] and tracing back every part of it to the documentation on which it purports to rest takes up over seventy pages of the present Report.[…]

    The view from here, so to speak, is that by the time Steve McIntyre has finished “unpicking the … narrative” contained in Mann’s misleading pleadings, not only will he have replicated Evans’ feats, but he may well justifiably conclude (to slightly paraphrase Evans) that he has:

    not suppressed any occasion on which [Mann] has used accepted and legitimate methods of … research, exposition and interpretation: there were none

    Which leads me to a corollary to my thesis: Steve McIntyre (in addition to all his other contributions and accomplishments) may well come to be known as the Richard J. Evans of “climate science”. Although, I’m inclined to think that he’s been having far more fun than Evans did … and he deserves every minute of it!

    At this point, I am increasingly leaning towards a conclusion that the only way that Mann could possibly stand even an outside chance of “proving” his case against Steyn, would be if he were to add “exonerate” and “reasonable” to the words that must be “redefined” if one is to understand the lexicon of “climate scientists”. However, I have extremely low confidence that such redefinitions can be … sustained;-)

  30. Robin
    Posted Feb 28, 2014 at 4:52 AM | Permalink

    Regarding the selection of the eight emails, the omission of the “hide the decline” email can be interpreted as implying that the investigation thought it wasn’t important. A skilled lawyer would likely argue this.

    It would be useful if standard guidelines for an inquiry were highlighted. The inquiries’ failures to follow good practice (e.g. interviewing all parties – both relevant critics and the “accused” – in separate interviews, etc.) further devalue the conclusions of the reports.

    Steve: the NOAA OIG didnt investigate the “hide the decline” email because it didn’t involve NOAA scientists. I don’t fault them for that. The fault lies only with the attempt of Mann and his lawyers to claim that the NOAA OIG report had “exonerated” Mann and their highly misleading reference to the hide-the-decline email in their synopsis.

    • Steven Mosher
      Posted Feb 28, 2014 at 11:44 AM | Permalink

      you depose the investigators and ask them.

      1. Did you review this email (“hide the decline”)?
      2. Why not?
      3. Did your search terms through the emails include the word “Mann”?
      4. Why not?
      5. Are you aware of the controversy over the Tiljander proxy?
      6. Did you investigate that?
      7. Are you aware of the controversy over Gaspe?
      8. Was your investigation limited to NOAA employees?
      9. Has Mann ever worked for NOAA?

      etc.

      The investigators are not Nick Stokes. They will testify clearly and accurately.
      Shading the truth under oath is not a career-advancing move.

    • Bob Denton
      Posted Feb 28, 2014 at 2:17 PM | Permalink

      The source of many contentious papers has been discovered:

      http://www.theguardian.com/technology/shortcuts/2014/feb/26/how-computer-generated-fake-papers-flooding-academia

      The programmers are currently working on some bugs in the module that generates the exonerations. Free upgrades will be made available to climate scientists.

  31. Neil Fisher
    Posted Feb 28, 2014 at 5:23 AM | Permalink

    “We found no evidence in the CRU emails that NOAA inappropriately manipulated data comprising the GHCN-M dataset”

    One *could* read this to say that the GHCN-M dataset itself was not inappropriately manipulated – that it still contains the same data it always did. Certainly, if evidence were to come to light that data originating from said dataset had been inappropriately manipulated, this interpretation of the statement could be invoked in defense of the authors of the report.

    I only point this out because similar “pea under the thimble” obtuse parsing of statements appears to have been used before.

    • Posted Feb 28, 2014 at 8:35 AM | Permalink

      You can twist it into any shape you like. But if you are at all bound by the plain meaning you can’t make it say that the OIG was claiming to have found no evidence that Mann manipulated data and/or methods to make his hockey stick look more striking and robust than was warranted by the underlying data. If you can make it say that, then words have come to mean anything at all, or nothing. And since that applies equally to the defendants, there’s no basis for libel, since the defendants could equally claim that their words actually mean anything and nothing. Sauce for the goose, etc.

      • JT
        Posted Feb 28, 2014 at 10:42 AM | Permalink

        You know, that might be a good line of defence if there were good evidence that Mr. Mann had repeatedly re-defined settled terminology out of its usual denotations for rhetorical effect. In that case surely Mr. Steyn would have as much latitude as Mr. Mann to play Humpty Dumpty.

      • Bill Hartree
        Posted Mar 1, 2014 at 4:19 AM | Permalink

        Ross,

        As Nick Stokes points out in his latest posting at Moyhu, the code used in McKitrick and McIntyre 2005 excises 50 years of Gaspe data, with a consequent “more striking” divergence with respect to the Mann et al results. Does this mean that there has been a deliberate manipulation of the sort that you accuse Mann of performing?

        Steve: categorically not. We had attempted to replicate Mann’s stepwise results, following his method of constructing networks in steps using proxies available in the first year of each step. We got different results than he did. We then tried to diagnose the differences, finding that there were two main reasons: 1) Mann’s bizarre principal components method; 2) Mann’s unique and undisclosed extension of the Gaspe series. It was Mann who departed from the methodology that he had used in all other steps. It seems plausible that he did so intentionally. There are many issues with the validity of the Gaspe series that we discussed at the time. In its early portion, it doesn’t meet Mann’s own standards for the minimum number of cores to constitute a chronology and shouldn’t have been used back to 1404 anyway. A later attempt to replicate the chronology at Gaspe did not yield a chronology anything like the one used by Mann, but this data was withheld by Jacoby and D’Arrigo for more than 20 years. Plus other cedar chronologies look nothing like the Gaspe chronology.

        • Jean S
          Posted Mar 1, 2014 at 7:58 AM | Permalink

          Re: Bill Hartree (Mar 1 04:19),
          no, it means that Nick Stokes just confirmed what Steve and Ross have been saying for the last 10 years: if you (a) exclude the Gaspe series from the AD1400 network (as one should, since it starts in AD1403) and (b) use PCA correctly (which downweights the few bad apples (bristlecones)), you obtain a “striking divergence” from the MBH98 results.

          Other than that, I suggest that you actually try to study and understand the issues before making an accusation of that calibre.
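
          To make the centering point concrete, here is a minimal synthetic sketch in Python. It is not the MBH98 or MM2005 code and the inputs are invented red noise; it only illustrates the mechanism Jean S refers to: a PC1 computed after “short-centering” (subtracting the 1902-1980 mean rather than the full-period mean) tends to acquire a hockey-stick shape even from trendless series.

          import numpy as np

          rng = np.random.default_rng(0)
          years = np.arange(1400, 1981)

          def red_noise(n, rho=0.3):
              """AR(1) stand-in for a trendless proxy series."""
              x = np.zeros(n)
              for t in range(1, n):
                  x[t] = rho * x[t - 1] + rng.normal()
              return x

          # 50 invented "proxies", none carrying any real 20th-century signal
          X = np.column_stack([red_noise(years.size) for _ in range(50)])
          cal = years >= 1902  # "calibration" rows

          def pc1(data, center_rows):
              centered = data - data[center_rows].mean(axis=0)
              u, s, _ = np.linalg.svd(centered, full_matrices=False)
              return u[:, 0] * s[0]

          def hockey_stick_index(series):
              """Offset of the calibration-period mean from the overall mean, in sds."""
              return abs(series[cal].mean() - series.mean()) / series.std()

          print("HSI, full centering :", hockey_stick_index(pc1(X, slice(None))))
          print("HSI, short centering:", hockey_stick_index(pc1(X, cal)))

          In most random draws the short-centered HSI is several times the fully centered one; this is also one answer to MikeN’s question below about how a “centered” PC can have a nonzero mean.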

        • Posted Mar 1, 2014 at 9:48 AM | Permalink

          Bill, you can’t think why we would be curious about the effect of removing Gaspe from the AD1400 roster?

          Under MBH98 methods, a series had to be present at the start of a calculation step in order to be included in the interval roster. In only one case in the entire MBH98 corpus was this rule broken – where the Gaspé series was extrapolated in its early portion, with the convenient result of depressing early 15th century results. This extrapolation was not disclosed in MBH98, although it is now acknowledged in the Corrigendum [Mann et al., 2004c]. In MBH98, the start date of this series was misrepresented; we discovered the unique extrapolation only by comparing data as used to archived data. There are other considerations making this unique extrapolation singularly questionable. The Gaspé series is already included in the NOAMER principal components network (as cana036) and thus appears twice in the MBH98 data set, and the extrapolation, curiously, is only applied to one of the columns. The underlying dataset is based on only one tree up to 1421 and only 2 trees up to 1447.
          Jones and Mann [2004] point to the need for “circumspect use” of tree ring sites with few early examples. The early portion of the series fails standard minimum signal criteria [e.g. Wigley et al. 1984] and indeed fails the data quality standards Mann et al. themselves listed elsewhere. The early portion of the series was not used by the originating authors [Jacoby and d’Arrigo, 1989; D’Arrigo and Jacoby, 1992], whose analysis only begins effective 1601. In fact, Jones and Mann [2004] do not use the Gaspé series as an individual proxy and only use the Jacoby-d’Arrigo northern treeline composite when it is adequately replicated after 1601.

          McIntyre and McKitrick (2005)

          [PDF: M&M.EE2005.pdf]

          Hope that clears it up.

          Steve: as I mentioned in another inline note, in our first replication, we used original versions of series. In 2003, I didn’t “know” that Gaspe was in Mann’s AD1400 network: it didn’t go back to AD1400. I didn’t “know” that it mattered to Mann’s results. We subsequently discovered Mann’s undisclosed alteration of the data when we were trying to reconcile results.

        • Carrick
          Posted Mar 1, 2014 at 1:37 PM | Permalink

          Bill Hartree, Nick was taken to task on Lucia’s blog for the misleading nature of his post.

          Had Nick actually read the paper he was criticizing, he would have realized that M&M 2005 was emulating MBH98, and testing the effect of the undocumented padding of data in the Gaspe series from 1400-1403. Mann in various comments has implicitly admitted to the manipulation. Actually, it turns out Nick managed to confirm M&M 2005.

          I suggested to Nick that he read the bloody manuscript next time. He claims he has, but if so, he was reading it with his eyes shut. The issues that he totally missed are front and center at the start of the paper. Not exactly a subtle thing that would be easily missed.

        • MikeN
          Posted Mar 1, 2014 at 4:57 PM | Permalink

          I saw an e-mail, I think from Esper, about Fig 2 in that paper. How can a centered PC have a nonzero mean?

        • Posted Mar 1, 2014 at 6:00 PM | Permalink

          Ross,
          In Fig 1 you showed three panels. The top was the MBH98 emulation. The second was described as “using archived Gaspé version”. The third had centered differencing. Of the difference between the top two panels, you say “The only difference between the two series is the extrapolation of the first four years in MBH98.”
          Then follows the quote you have here.

          I looked into it because I found people saying – Mann extrapolated over four years, and it had this serious effect. And I found that hard to believe, because it seemed to be a fairly modest degree of infilling for missing values.

          I looked into the code and found that the only difference between the two series wasn’t the extrapolation; it was the entire removal of Gaspé between 1404 and 1450. And this was done by claiming that there was a rule requiring it “under MBH methods”. But I can see no such requirement stated by Mann anywhere.

          It’s a matter of arithmetic, of course, that if there are missing values you have to decide what to do about them. A very common problem. In this case, Mann decided to infill with the neighboring value. That will have an effect, and as I showed in the post, it was local and modest. To describe the consequence of your decision to then remove Gaspe entirely as being the effect is misleading. And people have been misled, including apparently Wegman.

          Nick – you are now starting to become dishonest. Mann had a stepwise procedure. I agree that Mann’s methodological description was poor. However, in all other steps, Mann used those proxies available in the opening year of the step and did not extrapolate. E.g. as Jean S pointed out to you for the Fritts series that started in 1602. We did not “remove” Gaspe from the AD1400 network. We applied Mann’s procedure of using proxies available in the first year of the step. It’s one thing for you to Racehorse, but please stop misrepresenting what we did.
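
          A toy sketch may make the two treatments concrete. The values below are invented, not the Gaspé chronology, and this is not the MBH98 or MM2005 code; it assumes, per the discussion above, that the data begins in 1404 and that the roster rule requires a value in the step’s opening year.

          import numpy as np

          years = np.arange(1400, 1981)

          # Invented stand-in for the chronology: no data before 1404
          gaspe = np.full(years.size, np.nan)
          gaspe[years >= 1404] = np.sin((years[years >= 1404] - 1404) / 30.0)

          def in_roster(series, step_start):
              """Roster rule: the series must have a value in the step's opening year."""
              return not np.isnan(series[years == step_start][0])

          print(in_roster(gaspe, 1400))   # False -> excluded from the AD1400 step

          # The disputed treatment: pad 1400-1403 with the 1404 value
          padded = gaspe.copy()
          padded[(years >= 1400) & (years <= 1403)] = gaspe[years == 1404][0]
          print(in_roster(padded, 1400))  # True -> now qualifies for AD1400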

        • Ed Snack
          Posted Mar 1, 2014 at 7:11 PM | Permalink

          Nick, if Gaspe doesn’t qualify to be included in the 14th century step, then it can’t be included, period. You want to allow the balance of the years to be included even though it is not qualified, since it starts after the initial date. Are you being deliberately obtuse about this? The necessary condition for inclusion in the step was that there were values for the first year, which was the whole point of the padding!

          I’d also like to add that Gaspe has other issues, such as the question of where the complete original records are.

        • Posted Mar 1, 2014 at 7:36 PM | Permalink

          Ed Snack
          “Nick, if Gaspe doesn’t qualify to be included in the 14th century step, then it can’t be included, period.”
          hard to argue with that, I guess. But where do you get these rules?

          The paper says “Under MBH98 methods”. But MBH were using MBH methods. And this is what they did.

          When there is missing data, you have to decide whether to replace some of the missing values with an expected value, or ditch the lot. A common issue, and people generally try to work out whether the possible error introduced by infilling balances the loss of information in the data that would be rejected.

          What Fig 1 showed was the effect of switching the decision from infilling to ditching the lot. That wasn’t made clear. And the decision was attributed to “MBH methods”. But MBH were implementing MBH methods, and they infilled. M&M used an M&M method. And while it produces a different result, it’s not at all clear that it is better. Gaspe trees might have been telling us something.

        • Posted Mar 1, 2014 at 8:15 PM | Permalink

          Craig
          “better by what criterion”?
          It’s better to use all the available data. And it’s worse if infilling introduces error. If you care about the data, you’ll seek a balance, and that isn’t necessarily knee-jerk rejection.

        • Posted Mar 1, 2014 at 9:36 PM | Permalink

          Craig,
          “why infill sometimes and sometimes not, as Mann did?”
          What he’s done may well be inconsistent. And there’s likely ad-hoccery. He probably paid more attention to infilling when data was running low.

          But what M&M are saying is “Drs MBH, you’re not following MBH methods. You’ve been inconsistent. We’ll fix that for you”. Not a literal quote.

          “And how do you know that infilling “introduces error” — where is your error meter?”
          In my post. There were complaints that Mann infilled with the 1404 value, which was low. I tried the 1404-1450 max value. Those two pretty much limit the range of what the unknown values could do. One could be more quantitative with a noise model.

          I think MBH were doing the right thing with the Gaspé extrapolation. It’s better to estimate 4 years than discard 46, though there are more sophisticated means. But yes, they should document a policy and apply it consistently. Then it would be their policy, not yours.

          Steve: why do you think that they were doing the “right” thing given that: (1) they already used the Gaspe series in their NOAMER PC calculation, where it was not extrapolated; (2) under minimum core count criteria, the 46 years should not be used anyway; (3) when further samples were taken, a very different chronology was obtained. Nick, the underlying problem is that any of these versions is little more than a squiggle. One squiggle is not “right” relative to another squiggle.
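
          Nick’s bracketing exercise above is easy to sketch. The numbers are invented stand-ins, not the Gaspé data; the point is only the mechanics of bounding what the four infilled years could contribute downstream.

          import numpy as np

          rng = np.random.default_rng(1)
          vals_1404_1450 = rng.normal(size=47)  # stand-in values for 1404-1450

          # Infill 1400-1403 once with the 1404 value, once with the period maximum
          low  = np.concatenate([np.full(4, vals_1404_1450[0]),    vals_1404_1450])
          high = np.concatenate([np.full(4, vals_1404_1450.max()), vals_1404_1450])

          # Any downstream statistic can be run on both versions; the spread
          # between the two results bounds the influence of the infilled years.
          print("low-infill mean :", low.mean())
          print("high-infill mean:", high.mean())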

        • Posted Mar 1, 2014 at 10:01 PM | Permalink

          Steve,
          2 and 3 are different issues. I’m focusing on how to handle missing data, assuming the data is valid.

          On 1, their NOAMER PC calculation would accept missing values. No intervention needed. It wouldn’t have the same stepwise process.

          If it’s all just squiggles, then there’s no point in discussing it anyway.

          Steve: 2 and 3 are not different issues. you also just keep making stuff up. Their NOAMER PC calculation did not accept missing values. Why do you continually fabricate assertions?

        • Posted Mar 1, 2014 at 10:15 PM | Permalink

          Nick, your use of the quote is misleading. You say (emph added):

          In Fig 1 you showed three panels. The top was the MBH98 emulation. The second was described as “using archived Gaspé version”. The third had centered differencing. Of the difference between the top two panels, you say “The only difference between the two series is the extrapolation of the first four years in MBH98.”.

          You make it sound like the quoted sentence refers to the two panels. But you left out the previous sentence which makes it clear that that is not what we were referring to. Our paragraph in the paper says (emph added):

          The middle panel (“Archived Gaspé”) shows the effect of merely using the version of the Gaspé series archived at WDCP, rather than the version as modified by MBH98, accounting for a material change in the early 15th century. The only difference between the two series is the extrapolation of the first four years in MBH98.

          The “two series” to which we refer are, obviously, the 2 versions of Gaspe, which differ only by the extrapolation, just as we said. We go on to explain the difference between the two panels with reference to the extrapolation, the duplicate usage of Gaspe and the overlooking of the inadequate sampling prior to 1447, all of which allows the series to be introduced at the AD1400 step rather than the AD1450 step.

          I find it weird that you have spent so much time here lately bending and twisting the interpretation of Mann’s pleadings beyond their plausible meaning to try and put the disputed quotes into the best possible light, and now you have bent and twisted our words beyond their obvious meaning to try and put them into the worst possible light. It’s tedious watching this display of hyper-partisanship.

        • Posted Mar 1, 2014 at 10:19 PM | Permalink

          Steve,
          “Their NOAMER PC calculation did not accept missing values.”
          So did it discard Gaspé then?

          Steve: Mann did “stepwise” principal components. But he didn’t recalculate for each of his reconstruction steps, only for some steps. I have no idea what protocol was used. He recalculated for the NOAMER 1400 and 1450 steps. Gaspe (cana036) was not used in the AD1400 PC step, but it was included in the AD1450 step. Mann may not have realized that cana036 was the same series as Gaspe.
          Nick – have you read the MM2005 papers, especially MM2005(EE)? You seem to be unaware of things that are not at issue. It would be better if you re-read the papers before commenting further.

        • Posted Mar 1, 2014 at 10:40 PM | Permalink

          Ross,
          “But you left out the previous sentence which makes it clear that that is not what we were referring to”
          I can’t see that it has that effect at all. It just spells out what series you are talking about. The operative observation is that they differ by the four extrapolated values.

          I think I’m just using your words as they would be understood, and as I think Wegman understood them, for example. Making a decision to scrap 46 years of data is what determines the difference, and should have been stated explicitly.

          Yes, I tend to dwell on what you’ve done wrong, and what Mann hasn’t done wrong. It’s a counter-cyclical policy. There’s plenty of criticism of Mann here (you hardly need more from me), and not much attention to what might be good (or even not bad) about the things he has done.


          Steve: Nick, you are once again fabricating accusations. We explicitly discussed the effect of the Gaspe series in MM2005(EE) in meticulous detail. Your accusations that we weren’t “explicit” are false. Read the article. Nor did we “scrap” or “remove” data. We analysed the effect of Mann’s extrapolation versus the effect of using the policy used for all other series. It’s one thing for you to do your usual Racehorse stuff to try to justify Mann, but I request that you stop making bogus accusations against us.

        • MrPete
          Posted Mar 1, 2014 at 11:28 PM | Permalink

          Re: Nick Stokes (Mar 1 22:40),
          Nick, as I’ve charted, Mann himself “scrapped” thousands of years of data. The total (based on the assumptions I made; I’m not sure anyone can truly replicate his secret steps): 6559 years “lost”.

          So you are suggesting that, for exactly ONE case out of hundreds, Mann’s decision to uniquely extend a data set to “save” 46 years of data was quite reasonable?

          That’s more than a bit of a stretch.

          This whole thing is extensively discussed in MM05. That you suggest they should have “discussed it explicitly” is sad. They did, but in the proper context: Mann’s extension/inclusion was the anomalous case.

    • Posted Feb 28, 2014 at 12:48 PM | Permalink

      Re: Neil Fisher (Feb 28 05:23), “We found no evidence in the CRU emails that NOAA inappropriately manipulated data comprising the GHCN-M dataset”

      Or you could say that the focus was on the word inappropriate. i.e. there was manipulation — but we’re OK with it.

      As Steve so often reminds his viewers – pea, thimble, under, watch – express as appropriate in a sentence.

  32. bernie1815
    Posted Feb 28, 2014 at 10:01 AM | Permalink

    Steve: Besides the Penn State inquiry into Mann’s conduct, in how many of the inquiries Mann cites is he actually mentioned by name?

  33. minarchist
    Posted Feb 28, 2014 at 3:21 PM | Permalink

    Mark Steyn has discovered additional evidence of Mann’s exoneration which I think is very noble of him to share. Perhaps the plaintiff’s lawyers will wish to amend their complaint:

    “An investigation by President Lincoln of “four score and seven” emails concluded that Dr Mann’s research “brought forth a new birth of freedom”. An investigation by Sir Winston Churchill concluded that “Mike’s Nature trick” was “our finest hour”. An investigation by Judy Garland concluded that Dr Mann’s research demonstrated that global warming was causing troubles to “melt like lemon drops away above the chimney tops”.

    http://www.steynonline.com/6134/every-quote-ever-uttered-by-anyone-exonerates

  34. Beta Blocker
    Posted Feb 28, 2014 at 5:22 PM | Permalink

    Here are three questions that I posted on Lucia’s blog this afternoon, which assume that the purported Mann exonerations will be admitted into evidence:

    (1) If you are a lawyer for the plaintiff, is it more important to put greater emphasis on directly defending the substance of Mann’s science, and use the purported exonerations as a secondary and supporting line of argument; OR, is it more important to focus on the purported exonerations first, at the expense of spending time and effort directly defending the validity of Mann’s scientific work?

    (2) If you are a lawyer for the defense, is it more important to put greater emphasis on directly challenging the substance of Mann’s science, and use the issues surrounding the purported exonerations as a secondary and supporting line of argument; OR, is it more important to focus on the exonerations first, at the expense of spending time and effort directly challenging the validity of Mann’s scientific work?

    (3) If you are Mark Steyn and you don’t have a competent lawyer on your team to manage your defense, does it matter to you either way what the answers to questions 1 and 2 actually are?

    • pottereaton
      Posted Feb 28, 2014 at 10:20 PM | Permalink

      BB: I think the most obvious question now, is how can Plaintiff’s counsel possibly defend the so-called “exonerations.” Steve hasn’t exposed them all yet, and I don’t know if he will, but who will they call to say Mann was exonerated? Oxburgh? Muir Russell? The IG? Who will be a credible witness to say that Mann has been exonerated by all those investigations as claimed in the complaint? Graham Spanier?

      And it’s interesting that you’ve already put Mann on the defensive in (1). Yes, he will have to defend his science because the defendants’ counsel will be attacking it like the Japanese attacked Pearl Harbor.

      Re (2): they will have ample time to do both with equal vigor, should it get that far.

      Re (3): I wouldn’t worry about Steyn.

    • Bob Denton
      Posted Feb 28, 2014 at 11:39 PM | Permalink

      Beta Blocker.

      The next stage in the preparation for trial is discovery. It’s intended, amongst other things, to forewarn the other side of the details of your case so they’re not ambushed at trial. Lawyers will develop a strategy in the light of what’s discovered.

      For instance, at the moment the allegations complained of are very non-specific – fraud, fraudulence – but discovery will reveal specifics. Then it’s only the specifics either side will need to deal with. If the Defendants’ case is that fraud/fraudulence, in context, meant only unsound/unreliable, but expressed in an exaggerated fashion – and they generally run a non-defamation defence – then the Plaintiff won’t need to disprove fraud/fraudulence because it’s not alleged. The exonerations would cease to be relevant.

      If they run a defence of justification – that there was a defamatory allegation of fact – and it was true – the Plaintiff won’t be able to pray in aid the exonerations to disprove fraud, they’re not admissible for that purpose. However, the Plaintiff will also have to show actual malice, a state of the Defendant’s knowledge, and, insofar as it can be shown that the individual Defendants were aware of the exonerations, they can be used to that end. Far more people have read commentary on the exonerations than have read the exonerations, and the Defendants may have been aware of their existence but never read them. In that case, they’d have less relevance than the commentary on them that had been read. You can imagine which lawyers would try to big-up which documents.

      The Defendants would need to say something along the lines that they formed the belief that the Plaintiff was, in the specified way, fraudulent – possibly as a result of Climategate – and the subsequent exonerations appeared defective and did not change their minds.

      Steyn has put in an affirmative defence of justification (Truth).

      I don’t know what defences he’ll run, he’s entitled to run defences in the alternative, but it’s difficult to ride two horses going in opposite directions.
      He’ll need to say to the jury in evidence, “It’s true, the Plaintiff’s a fraud (justification), I genuinely believed he was a fraud when I said it (absence of malice) but, when I said it, though I believed it, I didn’t mean it, I meant something else (hyperbole).”

      He’d probably find the help of a lawyer useful in running inconsistent defences.

  35. bernie1815
    Posted Feb 28, 2014 at 9:16 PM | Permalink

    My apologies if this is somewhat tangential to Steve’s dissection of Mann’s pleading, but a comment earlier about clean hands reminded me of the rather juvenile and nasty comments frequently posted by Michael Mann on his Facebook page. I hope Steyn has someone monitoring it and compiling comments. On February 23 he posted from the vault: “The Most Heinous #Climate Villains” #StephenMcIntyre #MarkMorano #SteveMilloy #RoySpencer.

  36. Stacey
    Posted Mar 1, 2014 at 5:28 AM | Permalink

    Mann says:
    “There was no accompanying commentary by me or additional correspondence from me regarding the matter, nor did I speak to Wahl about the matter.”

    Sometimes people say too much.
    Is it believable that Mann did not speak to Wahl “about the matter”?
    Can anyone on this blog honestly say that, having received an email from a collaborator asking you to delete emails and to ask a colleague to do the same, and having sent that colleague a copy of the email, you would not discuss the matter with him?
    Much more often than not, if something is unbelievable, that’s because it’s unbelievable.

  37. MrPete
    Posted Mar 1, 2014 at 11:18 AM | Permalink

    Re: Ross McKitrick (Mar 1 09:48),

    In MBH98, the start date of this series was misrepresented; we discovered the unique extrapolation only by comparing data as used to archived data. There are other considerations making this unique extrapolation singularly questionable. The Gaspé series is already included in the NOAMER principal components network (as cana036) and thus appears twice in the MBH98 data set, and the extrapolation, curiously, is only applied to one of the columns. The underlying dataset is based on only one tree up to 1421 and only 2 trees up to 1447.

    Jones and Mann [2004] point to the need for “circumspect use” of tree ring sites with few early examples. The early portion of the series fails standard minimum signal criteria [e.g. Wigley et al. 1984] and indeed fails the data quality standards Mann et al. themselves listed elsewhere.

    Take your pick: shoddy work, misrepresentation, obfuscation, “hide the decline.” No matter what you call it, this is nowhere close to good science.

    What ought to happen is this: it should be published in every basic science textbook as literally a textbook case of what everyone should avoid.

    For generations to come, Mann should be appreciated for his great contributions to what should never be done in science.

  38. MrPete
    Posted Mar 1, 2014 at 7:55 PM | Permalink

    Nick Stokes (Mar 1 19:36),
    I took a few minutes to gather a set of starting dates of the MBH98 series. I probably don’t have the correct or complete data series. But if this is even part of the full picture, I think it’s helpful. The chart shown below demonstrates the extent to which Mann’s methodology caused data to be “lost” for various periods of time. My method is described below the graph. Feel free to put this together yourself; it was not all that hard.

    As you can see, “lost” years were not avoided except perhaps for the older series.


    1) My understanding of the stepwise procedure: for each series (other than Gaspe), the starting date used was the next 50-year boundary >= the starting year of the series. So a series starting in 1599 or 1600 is used beginning in 1600, and a series starting in 1601 or 1602 is used beginning in 1650. (Changing these boundaries slightly but consistently won’t change the look of this chart all that much.)
    2) For a given series, the number of data years “lost” is the difference between the series’ starting date and the base date of the data as used. Thus a 1599 series “loses” one year, and a 1602 series “loses” 48 years. (A code sketch of this tally follows below.)
    3) The size of each bubble is the number of series for that base year that had that many “lost” years. All were 1, 2 or 3.
    4) The only date/lost combination with three series was 1400/0.

    This is all derived from the data in the first part (before ITRDB begins) of http://www.nature.com/nature/journal/v430/n6995/extref/PROXY/mbh98datasummary.txt
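
    Here is a short sketch of the tally under the assumptions above. The start years are invented examples, not the values from mbh98datasummary.txt.

    import math
    from collections import Counter

    def base_year(start_year, step=50):
        """Next step boundary >= the series start year: 1599 -> 1600, 1601 -> 1650."""
        return math.ceil(start_year / step) * step

    start_years = [1400, 1400, 1400, 1403, 1472, 1599, 1601, 1602]  # hypothetical

    lost = [(base_year(y), base_year(y) - y) for y in start_years]
    print(lost)            # e.g. a 1602 series enters at 1650 and "loses" 48 years
    print(Counter(lost))   # bubble size = series count per (base year, lost years)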

    • MrPete
      Posted Mar 1, 2014 at 8:15 PM | Permalink

      Re: MrPete (Mar 1 19:55),
      Here’s the same chart for the rest of the data (ITRDB) from that source. If anything, it strengthens the case that Mann’s procedure “lost” quite a lot of data.

      Nick, why would he make an exception for exactly one series, to the process that he applied consistently to hundreds of others?

      • John Archer
        Posted Mar 3, 2014 at 11:55 PM | Permalink

        Mr Pete,

        The “size of each bubble”: is it proportional to its radius, the area of its circular 2-D projection, or its volume as a spherical object? Just asking.

        Nice graphic.

        • MrPete
          Posted Mar 4, 2014 at 6:11 AM | Permalink

          Re: John Archer (Mar 3 23:55),
          Diameter. Could have done area but I was just doing something quick. This is from Excel, believe it or not. The first graph range is 1-3 for size; second is 1-5.

    • Posted Mar 1, 2014 at 10:12 PM | Permalink

      MrPete,
      The intervals were not all 50 years. But yes, segmentation must lose data. If you have 25 years of data in a 50-year segment, you can’t extrapolate the remaining 25.

      Yes, I’m sure he’s been inconsistent in his use of infilling. But infilling can be good.

      • MrPete
        Posted Mar 1, 2014 at 10:20 PM | Permalink

        Re: Nick Stokes (Mar 1 22:12),
        “Inconsistent” is quite the understatement.

        As I understand it, he failed to infill over 300 times. He did infill once.

        Steve: in financial statements, this sort of practice would set off alarm bells for any auditors. Why would Mann do this in this one case?

        • Posted Mar 2, 2014 at 12:38 AM | Permalink

          MrPete,
          I think you’re counting every fragment. Usually infilling is not appropriate – too much missing.

          But yes, I’m sure there are cases where infilling would be appropriate and he didn’t. He should have, but it won’t always matter.

        • Carrick
          Posted Mar 2, 2014 at 3:01 AM | Permalink

          Correction:

          If the extrapolation (I think infill is the wrong word here) had been done systematically, it might not have mattered. Because this data manipulation was only done to a single series, it was clearly done in an ad hoc manner–something that opens you to the insertion of bias, as I believe happened here.

          It’s very clear why the extrapolation was done for the Gaspe series: That series had an obvious hockey stick shape. Mann needed to preserve that series in order for the reconstruction to pass the verification statistic he was using for his 1400-1450 period.

          One result of keeping this series is the flattening of the end of the hockey stick, effectively removing the upwards slope seen in more recent reconstructions, including Mann’s own 2008 EIV (the one he recommends in his 2008 paper).

          This in turn demonstrates why it’s important to inform the reader that you had done this particular manipulation. It’s this failure to report this ad hoc manipulation of data that makes what he did misconduct, and it’s the dramatic effect on the curve (producing an artificially flattened curve, as we know by comparing the results of this reconstruction to more modern ones) that makes the result of this misconduct substantive in nature. He obtained what must have been a pleasing result to him at the time, but it was wrong.

          Steve: “the upwards slope seen in … Mann’s own 2008 EIV” is because Mannian EIV replaces the proxy reconstruction in the instrumental period with instrumental data. It is a “perfect” reconstruction of instrumental data because it is instrumental data. We (Jeff Id also) discussed this when Mann et al 2008 was being discussed.

        • Posted Mar 2, 2014 at 3:46 AM | Permalink

          Carrick,
          “it’s the dramatic effect on the curve”
          Well, I think you’re saying that the modified curve would not have passed verification, so wouldn’t have been published anyway.

          Which means he would have had to start the recon in 1404, with a result almost identical to what he published.

        • AndyL
          Posted Mar 2, 2014 at 6:53 AM | Permalink

          “Well, I think you’re saying that the modified curve would not have passed verification, so wouldn’t have been published anyway. Which means he would have had to start the recon in 1404, with a result almost identical to what he published.”

          That is the point everyone is making. Mann should have stated that he handled the dates differently, and also explained why he made an exception for the number of cores. Instead he handled this one series differently from all the hundreds of others and did not mention it or justify why.

          M&M simply showed what the outcome would have been if he had treated this case exactly the same as all the others.

        • Posted Mar 2, 2014 at 8:05 AM | Permalink

          AndyL,
          You’re missing my point. If he had started the recon in 1404, then this rule you want to apply would have said, Gaspe is in, and he would have got the curve in panel 1.

          If he had started in 1403, the rule would have said, Gaspe is out, and you get the curve in panel 2. Not very robust.

          The salvation here is that starting in 1403 would also probably have failed verification. So you actually can’t publish the curve in panel 2. If you follow your rule, he has to start in 1404.

          Nothing much wrong with that. The recon is a little less informative.

          The extrapolation allowed him to go back to 1400 – a small gain (round number) at a small price.

          Infilling smooths out these abrupt changes by making better use of the information, and I think he should have used it consistently. But it’s probably only in these very early stages with sparse data that it would make a significant difference.

        • Steve McIntyre
          Posted Mar 2, 2014 at 9:58 AM | Permalink

          Nick, step back for a minute if you can and think of Mann as producing financial statements. I know that many people do not accept the comparison, but, from my perspective, I expect scientists to be more honest than business promoters, not less honest. Accountants are required to use GAAP – “generally accepted accounting principles”. Financial statements have detailed notes, which are an integral part of the statements. Ad hoc policies raise red flags. For example, the notes to the Enron financial statements mattered.

          If Mann had a general policy of extending series for a few years to include them in a step, then this single case would not have attracted attention. But it was unique. When someone less pollyannish than you encounters this sort of ad hoc accounting, it ought to raise questions about the purpose of the ad hoc exception and why it was done.

          If the analysis was as “robust” as claimed, then this sort of ad hoc exception shouldn’t matter.

          It strongly suggests that Mann had previously done an analysis without extending Gaspe and got different results which he did not report.

          What becomes very clear is, as we reported in MM2005(EE), that there are only two HS-shaped series among the 22 series in the AD1400 step: Gaspe and the bristlecone PC1. And rather than the HS being a “robust” property of the proxy network, only a couple of series have the property, and therefore the HS-ness of the network depends on the presence of the Gaspe and bristlecone PC1 series.

          Mann had an obligation to clearly report this, rather than persist with his false “robustness” claims. Mann’s experiments with Gaspe and with the censoring of the bristlecone data show that he was well aware of the non-robustness of his reconstruction.

          Nick, you also talk of “validation” but ignore the issue of the failed verification r2 (and CE) statistics. If “proxies” actually are proxies, i.e. temperature plus low-order red noise, a reconstruction will necessarily have a significant verification r2 statistic. The combination of a high RE and a low verification r2 statistic can be obtained from a variety of spurious data. The non-robustness also impacts RE statistics – a point that was underdiscussed at the time, but one which would have been on Mann’s mind.
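
          For readers unfamiliar with the two statistics, here is a minimal sketch with invented numbers (not MBH98 output) of how a high RE can coexist with a near-zero verification r2: a flat “reconstruction” that captures only a mean shift scores well against the calibration-mean benchmark while matching none of the year-to-year wiggles.

          import numpy as np

          rng = np.random.default_rng(2)
          obs = rng.normal(size=50)            # verification-period "temperatures"
          cal_mean = 1.5                       # calibration-period mean (RE benchmark)
          recon = 0.05 * rng.normal(size=50)   # flat line near the verification mean

          def reduction_of_error(obs, recon, cal_mean):
              """RE = 1 - SSE(reconstruction) / SSE(calibration-mean benchmark)."""
              return 1 - np.sum((obs - recon) ** 2) / np.sum((obs - cal_mean) ** 2)

          def verification_r2(obs, recon):
              return np.corrcoef(obs, recon)[0, 1] ** 2

          print("RE :", reduction_of_error(obs, recon, cal_mean))  # high: benchmark is poor
          print("r^2:", verification_r2(obs, recon))               # near zero: no wiggle match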

        • MrPete
          Posted Mar 2, 2014 at 10:14 AM | Permalink

          Re: Nick Stokes (Mar 2 00:38),

          But yes, I’m sure there are cases where infilling would be appropriate and he didn’t. He should have, but it won’t always matter.

          (As noted by Carrick, it should be “extrapolation” so I’ll go with that)
          Nick, pick your extrapolation cutoff point. ANY cutoff point. You can see from the graph that there were plenty of opportunities to extrapolate other series. Mann didn’t extrapolate ANY of them.

          Perhaps your final sentence is the key, and you are MOST certainly correct: “it won’t always matter.”

          It certainly mattered in this case. The extrapolation (and thereby the inclusion of this data set in the 1400 segment) makes a big difference.

          If one only extrapolates “when it matters”, that’s not science. How can you know that in this one case that’s the correct answer, but in all the other cases it was OK to ignore?

          You’re defending a completely invalid methodology. I honestly cannot fathom why someone who cares about good science and good data handling would do such a thing.

        • Don Monfort
          Posted Mar 2, 2014 at 2:47 PM | Permalink

          Mr. Stokes is ludicrously defending the cause, at the expense of his own integrity. Meanwhile, the pause is killing the cause.

        • Spence_UK
          Posted Mar 2, 2014 at 5:54 PM | Permalink

          It often amuses me how Mann defenders don’t really understand robustness.

          Nick: if the method ends up with results that are hugely sensitive to one year’s difference, it means the method is not robust. It does not mean you get to tinker with the data until you get the answer you first thought of again.

          This reminds me a bit of when Steve first noted that trivial changes resulted in completely different histories (a point emphasised in the peer reviewed lit by Burger and Cubasch). The RealClimate brain dead answer was that if you continue to tinker further in different ways then you get the original answer again.

          This doesn’t recover robustness. It continues to underline the lack of robustness of the method – that trivial changes in the input give completely different answers. Which means you can always tinker and get your preferred result. Which is why hanging any conclusions on it is a direct route to self-delusion.

        • Posted Mar 2, 2014 at 8:10 PM | Permalink

          Steve,
          “Nick, step back for a minute if you can and think of Mann as producing financial statements. I know that many people do not accept the comparison, but, from my perspective, I expect scientists to be more honest than business promoters, not less honest.”

          Yes, the comparison is not popular, and there is a reason. Scientists rely on replication rather than auditing, and for good reasons, some of which are shown by this Gaspe extrapolation issue. They rely on replication because scientists try to establish principles and understanding, rather than numbers. So when people say that Mann’s result has been replicated, they are not talking about pixels between 1400 and 1450. They are talking about a general picture of how temperatures varied over the millennium, and the general mechanisms for gathering that picture and determining its reliability.

          Auditors and accountants occupy skyscrapers in Manhattan, and all over the world. Scientists do not. The effort consumed by financial accounting (and evasion) is huge. Science cannot proceed that way. Replication works, auditing does not. The classic failure is MBH98. Auditors are still litigating it sixteen years later. Meanwhile, it has been replicated many times. And the effort put into auditing climate paleo is peculiar. It can’t even be extended to the rest of climate science, let alone to science in general.

          I said Gaspe was an illustration. You say, we have to have an auditable rule. If Mann doesn’t state it, we’ll state it for him. And the rule says, throw out that data.

          But scientists want to know what those trees have to say. I know you have reasons why you think the data should not have passed screening, but if it has been accepted, you have to analyse it on the basis that it is real. So Mann says, what does all the data we have tell us about 1400-1450. He knows he can extrapolate and use the information with little error penalty. If there is an objection to that, he would probably fall back to starting recon in 1404, preserving the use of the information. He would not discard it.

          You throw it out and produce a curve which satisfies audit requirements, but is much less useful to scientists. And they have a way of getting back, which is the verification requirement. Discarding information will likely cause that to fail.

          Replicators do not care whether Mann wrote his extrapolation into a file or implemented it in code. They have the same aim – to use the data as best they can. And they, like Mann, will see that Gaspe cedars are telling them something, and will find a way to make use of it. And doing that, they will replicate his result, not yours.

          I agree that Mann in 1998 took short cuts. He should have declared and followed an infill policy, implementing it in his code. That would be better practice from both an audit and scientific point of view.

          “It strongly suggests that Mann had previously done an analysis without extending Gaspe and got different results which he did not report.”
          Yes, quite likely. It would have failed verification pre 1450. I’m sure that he was checking verification as he got his dataset together. He’d have to. And many experiments would have failed. Go out and get more data.

          Steve: Nick you say: “Yes, quite likely. It would have failed verification pre 1450.” That’s the classic definition of “data torture”. It looks like you and I agree that Mann’s work included data torture. I do not think that “data torture” and “fraud” are equivalent (the judge was wrong on this).

        • Steve McIntyre
          Posted Mar 2, 2014 at 8:55 PM | Permalink

          Nick, you say “scientists try to establish principles and understanding, rather than numbers.”

          If that were the case, I don’t think that there would be any issue. While that may be what “scientists” are interested in, I see negligible evidence that the recent reconstructions try to “establish principles and understanding”. On the contrary, their entire interest seems to be in showing that the modern warm period is warmer than the medieval warm period.

          If the “scientists” were interested in “principles and understanding”, then, once Gaspe and the bristlecones were known to be idiosyncratic hockey sticks, their first instinct would have been to understand the validity of these proxies. To see why they are magic thermometers. To look at the peculiarities of these proxies – as we did in MM2005(EE). But you don’t see such analysis.

          If you look at the multiproxy analyses cited in AR5, few of them, in my opinion, make any real attempt at “principles and understanding”. They are nearly all about getting an answer.

        • Steve McIntyre
          Posted Mar 2, 2014 at 9:04 PM | Permalink

          Nick, you say: “Meanwhile, it has been replicated many times.” I disagree. There are many squiggles, but there is surprisingly little consistency from one squiggle to another.

          In an email among NAS panel members, Jerry North, in response to criticism of his comments at the NAS press conference said:

          …it is hard to deny that our spaghetti curve collection is really very different from MBH.

          So North didn’t agree with you.

          And such consistency as does exist is not an honest consistency, because of repeated use of bristlecones and overused proxies. In Wagenmakers’ protocol to avoid data torture, replication requires new data, not the re-use of the old data used to formulate the hypothesis.

        • Posted Mar 2, 2014 at 8:20 PM | Permalink

          SpenceUK,
          “Nick: if the method ends up with results that are hugely sensitive to one year’s difference, it means the method is not robust. It does not mean you get to tinker with the data until you get the answer you first thought of again.”

          Again that comes back to “whose method”? But do statisticians always reject data with missing values? Or do they find ways to use the real data while minimising the harm caused by the gaps?

        • bernie1815
          Posted Mar 2, 2014 at 8:28 PM | Permalink

          Nick: Generally people in the survey business would substitute a “neutral” value if there was a need to preserve “N”, but what should never be done is substitute values for missing values that (a) are not clearly stated and (b) become critical to or influence the solution. It is like removing troublesome outliers simply because they are outliers.

        • MrPete
          Posted Mar 2, 2014 at 8:54 PM | Permalink

          Re: Nick Stokes (Mar 2 20:20),
          If I’m doing a woodworking project with valuable bits of rare hardwood, I may use some wood filler in the gaps to ensure that the final product I envisioned is complete, solid, smooth, etc.

          However, if I’m following a predefined plan to create an analytical model of physical reality, it would be dishonest for me to fill in the “gaps” with my idea of what the model “ought” to look like. I certainly can’t ignore the plan and fail to tell anyone what I did.

          Reconstructing the climate of the past is not supposed to be a woodworking project.

          You seem to think that Mann had every right to fill in or sand down the data as desired to create the hockey stick he envisioned. But that’s not science.

        • TAG
          Posted Mar 2, 2014 at 9:25 PM | Permalink

          Nick Stokes writes:

          So when people say that Mann’s result has been replicated, they are not talking about pixels between 1400 and 1450. They are talking about a general picture of how temperatures varied over the millennium, and the general mechanisms for gathering that picture and determining its reliability.

          Lord Kelvin wrote

          “In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.” [PLA, vol. 1, “Electrical Units of Measurement”, 1883-05-03]

        • Skiphil
          Posted Mar 2, 2014 at 10:04 PM | Permalink

          Nick at 8:10 pm

          “But scientists want to know what those trees have to say.”

          Doesn’t this entirely beg the question of whether those (extremely few) trees do have something scientifically significant “to say”???

          It hardly seems sound by any truly “scientific” method to assume that a very few trees have something “to say” when they do not form a statistically valid sample analyzed by a thoroughly justified and validated method.

          Nick, like many I appreciate that you are consistently polite and willing to engage in such contentious discussions with people who disagree with you, but this seems to be a case of assuming what needs to be scientifically demonstrated, that those few trees have anything at all “to say” for this kind of problem.

        • Spence_UK
          Posted Mar 3, 2014 at 3:36 AM | Permalink

          Nick, “whose method” – I already explained this – please read “Are Multiproxy Reconstructions Robust?” by Bürger and Cubasch:

          http://onlinelibrary.wiley.com/doi/10.1029/2005GL024155/abstract

          This is an entire class of paleoclimate reconstructions (which includes MBH98/99 amongst its number) which essentially allows the researcher to arrive at their preferred conclusions by tweaking seemingly trivial choices.

          This type of thinking was clear from Mann’s communications with Gergis on the infamous withdrawn paper. Having stated a simple principle as the basis for their reconstruction, when they realised it gave the wrong answer, Mann advised to just ditch that principle, get the answer that was first wanted, and justify it post hoc. It’s science Jim, but not as we know it.

        • Posted Mar 3, 2014 at 2:51 PM | Permalink

          Here’s a theoretical for everyone…

          What if all scientific research was subjected to McIntyre-style auditing?

          I would put forth that such a system would bring all scientific research to a grinding halt, specifically because of the nature of research being, as Nick points out, about “principles and understanding” rather than knowing where every nickel and dime has gone.

          In fact, if you went back and were to audit all previous research we could likely erase nearly all the advancements of the past 250 years.

          The Ken Hams of the world might be highly supportive of such an effort.

        • Carrick
          Posted Mar 3, 2014 at 5:04 PM | Permalink

          Steve McIntyre:

          Steve: “the upwards slope seen in … Mann’s own 2008 EIV” is because Mannian EIV replaces the proxy reconstruction in the instrumental period with instrumental data. It is a “perfect” reconstruction of instrumental data because it is instrumental data. We (Jeff Id also) discussed this when Mann et al 2008 was being discussed.

          I missed your comment… and realized I was being clear as mud. I was talking about the early period (for MBH98), namely 1400-1500. By “upwards slope” I actually mean increasing temperature as you went further back in time. In MBH 98/99, this region had a curiously flat temperature change (in fact for the entire reconstruction period, the temperatures were remarkably flat).

          So we’re talking about very different things here.

          I agree with the problems of Mann 2008 and the treatment of the post 1960 MXD data—the replacement of problematic MXD data with temperature data.

        • Carrick
          Posted Mar 3, 2014 at 5:12 PM | Permalink

          Nick Stokes:

          So when people say that Mann’s result has been replicated, they are not talking about pixels between 1400 and 1450. They are talking about a general picture of how temperatures varied over the millennium, and the general mechanisms for gathering that picture and determining its reliability.

          Seriously Nick, this just isn’t how the word “replicate” gets used in science. If we chopped the so-called replications off at 1800, you’d have to ingest a heavy dose of hallucinogens to create the delusion the various curves had anything in common.

          On the other hand, if what you are really saying is people are BS’ing when they say Mann’s results were replicated, I would have to agree with that.

          You really need to be careful how far down this road into pure sophistry you’re willing to go.

        • Posted Mar 3, 2014 at 5:38 PM | Permalink

          Carrick,
          “Seriously Nick, this just isn’t how the word “replicate” gets used in science. “
          Well, here it is. But reproduce if you prefer.

        • Carrick
          Posted Mar 3, 2014 at 5:41 PM | Permalink

          Here’s the text from wikipedia. Replication redirects to reproducibility, which is a term of art with similar meaning:

          Reproducibility is one component of the precision of a measurement or test method. The other component is repeatability which is the degree of agreement of tests or measurements on replicate specimens by the same observer in the same laboratory. Both repeatability and reproducibility are usually reported as a standard deviation. A reproducibility limit is the value below which the difference between two test results obtained under reproducibility conditions may be expected to occur with a probability of approximately 0.95 (95%).[2]

          Remember that Nick is claiming that “[when] people say that Mann’s result has been replicated [… they] are talking about a general picture of how temperatures varied over the millennium, and the general mechanisms for gathering that picture and determining its reliability”. In doing so, he’s decided that apparently it’s okay for words to mean whatever he wants them to mean (the “Humpty Dumpty” fallacy, named after the famous dialog in Through the Looking-Glass).

          It would be an interesting exercise to see whether any of the other curves actually replicate in the precise sense used in science. I get the feeling they don’t, but I’ve not seen anybody actually analyze this correctly. Were I Steyn, a figure would be worth 1000 words. And 1000 words all telling the same lie that the results were truly replicated, worth a million.
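
To pin down the term of art being quoted above: here is a minimal sketch, with made-up numbers and simplified ISO 5725-style formulas, of how repeatability and reproducibility standard deviations and the 95% reproducibility limit are actually computed. Everything in it (lab values, number of runs) is hypothetical.

```python
import numpy as np

# Hypothetical replicate measurements of one specimen:
# three labs, four repeat runs each (made-up numbers).
labs = np.array([
    [10.1, 10.3, 10.2, 10.0],   # lab A
    [10.6, 10.4, 10.5, 10.7],   # lab B
    [ 9.9, 10.0, 10.1,  9.8],   # lab C
])
n = labs.shape[1]

# Repeatability s_r: pooled within-lab standard deviation.
s_r = np.sqrt(labs.var(axis=1, ddof=1).mean())

# Between-lab variance, ISO 5725-style: variance of the lab means,
# less the share attributable to within-lab scatter.
s_L2 = max(labs.mean(axis=1).var(ddof=1) - s_r**2 / n, 0.0)

# Reproducibility s_R combines both components.
s_R = np.sqrt(s_r**2 + s_L2)

# 95% reproducibility limit: two single results from different labs
# should differ by less than this about 95% of the time
# (factor 2.77 ≈ 1.96 * sqrt(2)).
print(f"s_r = {s_r:.3f}, s_R = {s_R:.3f}, limit = {2.77 * s_R:.3f}")
```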

        • Carrick
          Posted Mar 3, 2014 at 5:44 PM | Permalink

          Nick Stokes:

          Well, here it is. But reproduce if you prefer.

          So to make sure we’re clear here:

          You are using a cartoon drawn for a general audience to explain how the term of art is used in science???

          OMG. Or MOG. Mog sounds better here.

        • Donn Armstrong
          Posted Mar 3, 2014 at 5:55 PM | Permalink

          Nick, Your “reproduce” link is on a 101 level and so is your argument.

        • HAS
          Posted Mar 3, 2014 at 6:04 PM | Permalink

          Just to repeat a comment I made elsewhere today, Nick is having fun muddying things by claiming replication of results doesn’t require the use of the same method. This means he doesn’t have to worry about knowing what Mann did.

          This is rubbish – replication of the results implies replication of the method.

          Validation or verification of results would be more usual terms to apply to producing similar results using different methods, which is what he’s proposing with Mann’s work.

        • Carrick
          Posted Mar 3, 2014 at 6:37 PM | Permalink

          HAS, yes.

          Somewhat roughly: Replication refers to repeating a method; reproducibility refers to how similar the results are, based on an agreed-to metric, from multiple replications of the same method. Verification refers to demonstrating that the methods were correctly implemented. Validation refers to demonstrating that the result is empirically correct (were the right methods used?).

          M&M 2005 and Wahl and Ammann 2007 would be replications of Mann’s work. Unlike with traditional papers (where the methods and data have been properly documented so that replication can be done without a heroic effort), this replication took a lot of reverse engineering to achieve.

          The other so-called replications of MBH aren’t really replications in any sense, other than a grade school one, and I doubt most scientists would agree they are operating on that level. And more modern reconstructions strongly suggest that the results of MBH are simply not valid.

        • Posted Mar 3, 2014 at 7:03 PM | Permalink

          Carrick,
          “You are using a cartoon”
          It’s a page from Berkeley’s Science Understanding 101. That’s where these things are taught. But if you prefer the freedictionary:
          “In scientific research, the repetition of an experiment to confirm findings or to ensure accuracy.”
          As Berkeley said, it’s usually done by another lab.

        • HAS
          Posted Mar 3, 2014 at 8:02 PM | Permalink

          Carrick

          Just to be clear I was referring to verification and validation of the results, not the verification and validation of methods which is what you are referring to. We need to be careful because unlike everyone else here Nick is applying these terms to the results, not the methods.

          Nick above says of replicators:

          “They .. aim .. to use the data as best they can. And they, like Mann, will see that Gaspe cedars are telling them something, and will find a way to make use of it. And doing that, they will replicate his result, not yours.”

          So Nick is talking about replicating results, not methods, and making sure that Gaspe 1400-1450 will get used come hell or high water.

          I think in common parlance what Nick is saying is the ends will justify the means.

        • thisisnotgoodtogo
          Posted Mar 3, 2014 at 10:24 PM | Permalink

          Nick Stokes said:

          “They have the same aim – to use the data as best they can. And they, like Mann, will see that Gaspe cedars are telling them something, and will find a way to make use of it.”

          Nick, how much did they want to hear what the tree rings were saying when it didn’t help the cause?

        • Carrick
          Posted Mar 4, 2014 at 10:01 AM | Permalink

          HAS, I don’t think it matters too much whether you are looking at the methods or the results. In either case, you’re going to use verification statistics to test the skill of the replication.

          Nick’s resorting to what amounts to a “science for poets” course (one that few real science majors would take) and a dictionary definition that lacks the specificity to explain how one confirms findings.

          Truthfully, I think most people who look at the so-called replications naively are focusing on the “calibration period”, without realizing that isn’t a very strong test of skill of an algorithm. If nothing else, you self-select for reconstructions that pass during the calibration period (who would publish something where it failed?).

          Generally the methods can produce the right shape during the calibration period even using red noise. Many appear to pass verification tests by statistical error or “data torture”. I’d characterize MBH98 as doing a bit of both.

          The verification period, where you’re comparing against instrument data, also allows for a selection process–with Gaspe it’s almost certain that Mann ran his code with the original Gaspe series for 1400-1449, and saw his reconstruction was failing to verify even with his broken test. Then he worked out what he needed to change to pass verification.

          The real test of replicability is the period where you have no training data available, and here none of the curves agree with each other. There are many errors that have been noted; three of the prominent ones are scale bias, offset bias, and attenuation of high-frequency information.

          When you try and compare them against each other, by tethering them with the instrumental data, it’s unlikely that your conclusions will be particularly robust. I think a much stronger test would be to test how similar the curves are to MBH98 using standard statistical methods.
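
Carrick’s screening point is easy to demonstrate with a toy simulation — a sketch under invented assumptions, not anyone’s actual reconstruction code. Pure red-noise “proxies” screened on calibration-period correlation will average to a respectable calibration fit and essentially no verification skill:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 200, 500
calib = slice(150, 200)   # "instrumental" calibration window
verif = slice(100, 150)   # held-out verification window

# A made-up temperature target: flat noise, then a warming trend
# inside the calibration window.
temp = np.concatenate([rng.normal(0, 0.2, 150),
                       np.linspace(0, 1, 50) + rng.normal(0, 0.2, 50)])

# Pure AR(1) red-noise "proxies" -- no climate signal at all.
proxies = np.zeros((n_proxies, n_years))
for t in range(1, n_years):
    proxies[:, t] = 0.7 * proxies[:, t - 1] + rng.normal(0, 1, n_proxies)

# Screen on calibration-period correlation; keep the top 5%.
r_cal = np.array([np.corrcoef(p[calib], temp[calib])[0, 1] for p in proxies])
recon = proxies[r_cal > np.quantile(r_cal, 0.95)].mean(axis=0)

def r2(a, b):
    return np.corrcoef(a, b)[0, 1] ** 2

print("calibration r2: ", round(r2(recon[calib], temp[calib]), 2))
print("verification r2:", round(r2(recon[verif], temp[verif]), 2))
# Typically a respectable calibration r2 and a near-zero verification r2.
```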

        • Carrick
          Posted Mar 4, 2014 at 10:37 AM | Permalink

          To be clear, I meant “I think a much stronger test would be to test how similar the curves are to MBH98 using standard statistical methods outside of the period where instrument data are present.”

        • miker613
          Posted Mar 4, 2014 at 12:12 PM | Permalink

          @Rob Honeycutt: “What if all scientific research was subjected to McIntyre-style auditing?
          I would put forth that such a system would bring all scientific research to a grinding halt”

          A remarkable comment. Did McIntyre ever insist that Michael Mann sit down and answer his objections? That he had to wait to do further research until the M&M Committee was convinced? As far as I know, all they ever did was say, “We want to try to study your work; please make it available to us.” And when they did that, “The following points don’t seem to be right.”
          Mann was always at liberty to ignore those objections – obviously, at the risk of everyone else concluding that his work was garbage. What he did instead was try his best to make it impossible for them to look at his work in the first place, and then create a stream of distractions and disinformation to cover his mistake.

          How in the world do you think that this could harm science? His work was wrong. M&M caught it. You would prefer that bad work goes unexamined?

        • Steve McIntyre
          Posted Mar 4, 2014 at 12:47 PM | Permalink

          Honeycutt: “What if all scientific research was subjected to McIntyre-style auditing? I would put forth that such a system would bring all scientific research to a grinding halt”

          It is my understanding that most successful branches of science try to ensure that results are replicable and “robust”, and that replicability is one of the keys to their success. I do not think that the 1000-year temperature reconstructions can, in their present state, be said to be a successful branch of science.

          I think that our criticisms of the work of Mann and other reconstruction academics were valid. I think that the way forward in the field is through the development of better proxies rather than repeatedly averaging strip bark bristlecone ring width chronologies with uninformative data and pretending that the result is “science”.

          My original training was in math, where careful examination and understanding of the steps in a proof was both acceptable and expected. “Peer review” is said to be an important part of the scientific enterprise. When I first encountered this field, I was surprised that peer review even at eminent journals was very cursory and did not involve examination of the data. I did not regard my original examination of Mann et al 1998 as anything more than detailed peer review. In my opinion, all our comments were ones that a sufficiently well informed peer reviewer could and should have made and that a properly informed editor would have required Mann to respond to.

        • Posted Mar 4, 2014 at 1:04 PM | Permalink

          miker613
          “Did McIntyre ever insist that Michael Mann sit down and answer his objections?”
          Not directly, he didn’t himself have such powers. But Mann, Bradley and Hughes each received a long letter from the House of Representatives Energy Committee, Subcommittee on Oversight and Investigations, with these (and other) instructions:

          The authors McIntyre and McKitrick (Energy & Environment, Vol. 16, No. 1, 2005) report a number of errors and omissions in Mann et. al., 1998. Provide a detailed narrative explanation of these alleged errors and how these may affect the underlying conclusions of the work, including, but not limited to answers to the following questions:
          a. Did you run calculations without the bristlecone pine series referenced in the article and, if so, what was the result?
          b. Did you or your co-authors calculate temperature reconstructions using the referenced “archived Gaspe tree ring data,” and what were the results?
          c. Did you calculate the R2 statistic for the temperature reconstruction, particularly for the 15th Century proxy record calculations and what were the results?
          d. What validation statistics did you calculate for the reconstruction prior to 1820, and what were the results?
          e. How did you choose particular proxies and proxy series?
          …

          Steve: but note that Mann did not give straightforward answers to these questions. For example, he evaded question (c). And when subsequently asked the same question by a NAS panelist, Mann lied to the NAS panel, denying that he had even calculated the verification r2 statistic.

        • MrPete
          Posted Mar 4, 2014 at 1:25 PM | Permalink

          Re: Nick Stokes (Mar 4 13:04),
          Interesting that in each case you’ve provided, these are basic questions of important facts that were omitted from their paper.

          These are not in themselves objections to what was done or even how. These are questions regarding omissions from their report.

          AFAIK in other scientific arenas, these kinds of things would normally be openly shared, if not mandated by gov’t regulation.

        • HAS
          Posted Mar 4, 2014 at 2:53 PM | Permalink

          Carrick Mar 4, 2014 at 10:01 AM

          Just a follow-on point. Is part of the problem here that the researchers are focused on producing a temperature proxy, rather than on modelling the growth of trees, where temperature is just one of the environmental variables? Many of these methodological problems seem much easier to see and deal with appropriately through that lens (to my untutored eye).

        • Carrick
          Posted Mar 4, 2014 at 3:06 PM | Permalink

          Steve McIntyre:

          It is my understanding that most successful branches of science try to ensure that results are replicable and “robust”, and that replicability is one of the keys to their success.

          What happens in other fields, is we don’t have to each develop our own instruments for measurements (I develop instruments, but most of my colleagues do not). I think the lack of well tested and understood algorithms for converting temperature proxies into temperature measurements is the fundamental limitation in this field. I think politicization that has occurred in the field has affected the outcome somewhat, but not having the right instrumentation for measurement is the show stopper.

          So I don’t think Honeycutt’s suggestion that most other branches of science would stop is valid. In fact, it might even be welcome.

          In my case, I work as part of a large-scale multi-institutional project, where there is a substantial level of quality assurance and quality control. In spite of, or because of, this oversight from other institutions, problems are found and fixed rather than fielded.

          In fact, if I have any complaint about this project, it would be that there isn’t enough QA/QC!

          One of the “negatives” of this environment is groups who “cannot make it work” are identified relatively quickly, with funding following the competent. In the egalitarian world of academic science, where those who can can, and those who can’t still get paid, this might not be very palatable, but it’s the only model that works, when people are going to rely on the outcome of the project.

        • TAG
          Posted Mar 4, 2014 at 3:57 PM | Permalink

          Rob Honeycutt writes:

          What if all scientific research was subjected to McIntyre-style auditing?

          I would put forth that such a system would bring all scientific research to a grinding halt, specifically because of the nature of research being, as Nick points out, about “principles and understanding” rather than knowing where every nickel and dime has gone.

          In high energy physics, the detection of a new particle will be accepted only if the experimental evidence is at a level of 5 sigma. Contrast this with Stokes’ principle that research is only about principles and understanding and one can obtain a good understanding of that principle’s superficiality.

        • Donn Armstrong
          Posted Mar 4, 2014 at 6:21 PM | Permalink

          Honeycutt,

          Here’s another theoretical question. Anyone please correct me if I’m wrong, but MBH98 made the claim that current temperatures are unprecedented over the past 1000 years, which means warmer than the MWP. The hockey stick showed relatively flat temperatures over the 1000-year period, essentially eliminating the MWP and LIA which appeared in the first IPCC report. My question is, wouldn’t the evidence to overturn the original consensus view also need to be “unprecedented” or at least overwhelming, and if so, does Mann meet that standard?

        • Jan
          Posted Mar 4, 2014 at 8:28 PM | Permalink

          Rob Honeycutt,

          I started to write a heartfelt reply about why your theoretical forth-put was wrong from my perspective as a taxpaying citizen. Had a lot to do with a public right to know and the advent of modern communication affording a new degree of openness, transparency, responsibility and accountability, yadda, yadda, yadda, but realized that after getting rid of all that highfalutin’ principled stuff it basically boiled down to, “Yeah, really sucks to be audited. Deal.”

        • David Young
          Posted Mar 4, 2014 at 11:09 PM | Permalink

          Honeycutt, I am reminded of the perennial resistance to reform in any field of human endeavor. Raising standards usually improves things, and saying raising standards will bring “business to a halt” is how all corrupt cliques always react. It’s usually nonsense.

        • miker613
          Posted Mar 5, 2014 at 9:40 AM | Permalink

          @Nick Stokes ‘“Did McIntyre ever insist that Michael Mann sit down and answer his objections?” Not directly, he didn’t himself have such powers. But Mann, Bradley and Hughes each received a long letter from the House or Representatives Energy Committee, Subcommittee on Oversight and Investigations, with these (and other) instructions:’
          Okay, Nick, I concede your point. It probably would be a big problem for science if Congress investigated every detail of every study by a post-doc. Is that your point? Because I’m imagining that Congress will stick to investigating things that they think are of national concern, and McIntyre will go on investigating what he’s interested in, and most of us are fine with that.

      • thisisnotgoodtogo
        Posted Mar 3, 2014 at 11:00 PM | Permalink

        When the tree rings groaned “we are not thermometers”, did Mann listen to the rings?

  39. R Theron
    Posted Mar 1, 2014 at 10:05 PM | Permalink

    This is just tedious!!!!!!! The only thing illuminated by these posts is how pathetic the media coverage of climate change has been. I feel like I need a good shower. How is this allowed to continue?

  40. Geoff
    Posted Mar 2, 2014 at 12:20 AM | Permalink

    http://www.washingtonpost.com/news/volokh-conspiracy/wp/2014/03/01/steve-mcintyre-was-michael-mann-exonerated-by-the-oxburgh-panel/

  41. Joe
    Posted Mar 2, 2014 at 10:06 AM | Permalink

    Steve is moderating these posts – so maybe he could elaborate on one point being raised in the blogosphere regarding Mann’s release of his data.

    There is a lot of commentary in the blogosphere that Mann has refused to release his data (including his fight in the Virginia attorney general lawsuit). Obviously, there has been some release of his data and methodology, since Steve et al have provided substantial critique of his studies (though I recall seeing several articles discussing the painstaking process of obtaining some of the data and methodology used).

    Steve – can you provide a better summary of the release of data?

    Steve: many people are far too angry about life and over-charge Mann or mis-charge Mann. The record on Mann et al 2008 is quite detailed. Although it was very difficult, Mann eventually provided a great deal of information on Mann et al 1998-99, but it is not complete. Missing data includes (surprisingly) the actual reconstructions by step. Both ourselves and Wahl and Ammann can emulate these results closely (their emulation and ours reconcile exactly), but we’ve never seen the results of each step. In terms of method, Mann archived a considerable amount of code, but it wasn’t by any means complete. For example, the code demonstrating the selection of the number of principal components for a tree ring network-step has never been archived. This was a very important battleground issue in early disputes and something that I would still like to see and which is relevant to the issues of data torture and data manipulation. The code for confidence interval calculations in MBH99 has never been archived either.

    • Joe
      Posted Mar 2, 2014 at 3:48 PM | Permalink

      Steve taking this discussion a step further. In the response to the motion to dismiss, Mann’s attorneys stated to the effect that every peer reviewed study concluded that your critique of the HS was inaccurate. (though the only reasons proffered for those inaccuracies seemed to be superficial). I will add that the NSF report stated to the effect that the statistical methods used by mann were subject to scientific debate.
      FYI I concur that while the NSF investigation was the most thorough of all the investigations of Mann, it still remained somewhat superficial (some may say a charitable characterization).

      Steve: the NSF report was not an “investigation of Mann”. The House Science Committee had asked several questions of NAS that would have required them to do at least some investigation, but Cicerone, the President of NAS, removed these questions from the terms of reference of the panel and they were not asked. Nor is it true that “every peer reviewed study found that our critique was inaccurate”. McShane and Wyner strongly agreed with us. The NAS panel agreed with us on major points and did not disagree with us on any points. While Wahl and Ammann (who are sidekicks of Mann) spun things differently, their emulation code reconciled exactly to ours and they confirmed many important results e.g. the failed verification r2 and CE statistics. Wegman observed that Wahl and Ammann, read carefully, confirmed us rather than Mann.

      • Joe
        Posted Mar 2, 2014 at 5:15 PM | Permalink

        Steve
        TY
        I had forgotten about McShane Wyner (I also recall a fairly nasty response from Mann to the McShane Wyner study).
        One last request – those defending Mann often refer to the multiple reconstructions with and without the bristlecone pines that also have the hockey stick shape. Are there any such reconstructions out there and, if so, do they have the same systemic errors as Mann’s HS?
        PS I appreciate your responses.

        FWIW, while the statistical analysis is above my pay grade, other factors cause me to question the validity of the HS, such as the regional nature of the MWP being attributed to a weather cycle sitting over one single spot of the globe for 300 years, several ice cores in Antarctica indicating a MWP, and citrus fruit tree cultivation 300 miles north of the current-day range in China during the MWP.

        Steve: usually they refer to the “no dendro” reconstruction of Mann et al 2008. But it uses the contaminated upside-down Tiljander data. There are some recent new entries e.g. PAGES 2K Arctic, but the problem is that the underlying data mostly is just noise and any Stick depends on a very small number of proxies, and, examined in detail, these ones tend to be really screwed up. There are many CA posts on the “other” reconstructions. I’m not arguing that there is convincing evidence that the MWP was warmer than present – only that the multiproxy reconstructions provide negligible insight on the matter.

    • James Smyth
      Posted Mar 2, 2014 at 6:00 PM | Permalink

      The code for confidence interval calculations in MBH99 has never been archived either.

      [Sorry, if duplicate. I think Chrome ate my first]

      Revisiting the Gaspe stuff, I had a thought. As a software engineer who is a big fan of automated testing, it occurs to me that it would be trivial to automate something to find a series that, with the smallest change, had the largest effect on the output. In fact, it is so obvious to imagine automating variation of the input to find the best output (R2, RE, whatever) that it is hard to believe that there is anything original about the idea. I don’t think this is what you typically call “data mining” or “sensitivity analysis”. How do you know when a given set of assumptions are not just post-hoc explanations for those things that your automation found to be the most favorable to your results? If it’s easy to manually add or remove a single series to compare results, it is also trivial to code all kinds of variations in input.

      Steve: my take on this, as I’ve said from time to time, is that if the proxies are “proxies” i.e. temperature plus low-order red noise, then the precise choice of method and/or precise choice of network inclusion won’t matter very much. All the huffing and puffing about methodology in MBH was really just bluster to include bristlecones.
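
James’s automation idea can be sketched in a few lines. In the sketch below, `reconstruct` and `verification_stat` are hypothetical stand-ins for whatever pipeline and skill score are being probed; the data are fabricated:

```python
import numpy as np

def reconstruct(series_list):
    # Hypothetical stand-in for a reconstruction pipeline:
    # here, simply the mean of the supplied series.
    return np.vstack(series_list).mean(axis=0)

def verification_stat(recon, target):
    # Hypothetical skill score: correlation over a held-out window.
    return np.corrcoef(recon[:50], target[:50])[0, 1]

def most_influential(series_list, target):
    """Rank each series by how much dropping it moves the score."""
    base = verification_stat(reconstruct(series_list), target)
    impacts = []
    for i in range(len(series_list)):
        subset = series_list[:i] + series_list[i + 1:]
        delta = abs(verification_stat(reconstruct(subset), target) - base)
        impacts.append((delta, i))
    return sorted(impacts, reverse=True)   # biggest effect first

# Usage with fabricated data: series 3 secretly carries the "signal".
rng = np.random.default_rng(1)
target = rng.normal(size=100)
series_list = [rng.normal(size=100) for _ in range(10)]
series_list[3] = target + rng.normal(0, 0.5, size=100)
print(most_influential(series_list, target)[0])   # flags index 3
```

The same loop works just as easily over tuning knobs (start dates, padding choices, retained PC counts) as over series inclusion, which is James’s point about how cheaply such a search can be automated.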

      • bernie1815
        Posted Mar 2, 2014 at 6:31 PM | Permalink

        James: As data exploration there is nothing wrong with your approach, but you need (a) to be able to explain your hypothesized model and (b) to hold out a separate untainted sample of data. Alas there appears to be too little proxy data to validate the model, hence the potential for doing ex post facto model building – which is a no-no.

        Steve: Bernie, I’ve been reading some of the literature on “data torture”. Wagenmakers, a prominent social psychologist, emphasizes (as others have) that you can’t use the data that you used for data exploration over again for confirmatory analysis. I think that there’s an interesting application in the multiproxy studies. On Wagenmakers’ protocol, you can’t keep re-using bristlecones to “prove” a HS because you used them to create the “hypothesis”.

        • bernie1815
          Posted Mar 2, 2014 at 8:53 PM | Permalink

          Steve: You are correct. Wagenmakers is articulating what should be standard practice for model building and analysis. In social psychology we would construct hold out samples especially when it came to doing Factor Analysis on questionnaire and survey data. You could, I assume, do the same with temperature proxies, if you had enough proxies. The issue as I understand it with bristlecone pines is different and is down to determining whether the pc reflects a temperature signal or bristlecone pines themselves. In survey work you can for example get factors which essentially turn out to be driven by a demographic, say male and female, as opposed to some attribute of all respondents. In order to determine whether the factors are stable you need to split your sample into male and female and re-examine.
          The data matrix can be viewed as representing variables or respondents (climate/biological variables or proxy source), hence the similarity to cluster analysis. But you are probably much more up to speed on this than I am.

          Steve: the ex post practices of multiproxy jockeys are the antithesis of Wagenmakers. For the field to achieve credibility, it seems to me that they need to be able to get consistent results using fresh data from fresh sites, rather than yet another tube of lipstick on the bristlecones.

      • MrPete
        Posted Mar 2, 2014 at 7:09 PM | Permalink

        Re: James Smyth (Mar 2 18:00),
        If I’m understanding you correctly, what you describe is just fine in engineering, but a completely invalid process in science. Science is supposed to go from hypothesis to data collection to analysis (ie testing the hypothesis). What you’re describing is called data snooping: selecting data based on how well it matches the desired outcome.

        Before looking at the data, one is supposed to define the criteria. And in fact, one is supposed to use new data for future studies, rather than already-known data. For exactly this reason.

        Here’s how bad that is. Even if we don’t data snoop, we can’t even try out our hypothesis on a bunch of data sets to see if one or more are “significant” without radically adjusting the significance bar. As I understand it, to an approximation if one examines N data sets to see which ones produce a “significant” result, then instead of needing p=0.05 (5%/95% usual significance), the outcome must hit an approximately p/N significance figure. Thus, looking at N=10 sets, p’ becomes 0.005 ie 0.5% / 99.5% — a much tougher standard. Again, that’s just my understanding of a rough rule of thumb as a guy whose expertise is not in this arena.
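
MrPete’s rule of thumb is the standard Bonferroni correction. A minimal illustration of both the tightened threshold and why it is needed:

```python
# Bonferroni correction: examining N candidate data sets and keeping
# any that reach "significance" requires tightening the per-test
# threshold to alpha / N to hold the overall false-positive rate
# near alpha.
alpha = 0.05
for n_sets in (1, 10, 100):
    print(f"N = {n_sets:>3}: per-test threshold p < {alpha / n_sets:.4f}")

# Why: with N independent null tests at level alpha, the chance of
# at least one spurious "hit" is 1 - (1 - alpha)**N.
print(1 - (1 - alpha) ** 10)   # ≈ 0.401, i.e. ~40% at N = 10
```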

        • James Smyth
          Posted Mar 2, 2014 at 9:21 PM | Permalink

          What you’re describing is called data snooping: selecting data based on how well it matches the desired outcome.

          I think I’m imagining something a bit more involved than data snooping. I’m thinking more about (input) parameter tuning. The more I think about it, the more this seems like the kind of tuning of which people accuse the modelers.

          In other words, it is one thing to snoop for the best sets among the data; it is another thing to tune things like “acceptable padding” or “acceptable truncation” or “retained PC count”, or any of the other decisions in a complex system which can be tuned. And if your goal is something like one or more verification statistics (and not say, a larger set of data that you are trying to predict), it seems like it would be easy to set up.

          And, here’s where things get even more speculative: If you didn’t separate your automation suite from your basic code base, it might be difficult to provide that code to outsiders.

          Again, that’s just my understanding of a rough rule of thumb as a guy whose expertise is not in this arena.

          Yeah, I’m very weak on even the rules of thumb. My training is more in the underlying mathematics of probability theory; I have almost no practical stats experience.

        • MrPete
          Posted Mar 3, 2014 at 11:47 AM | Permalink

          Re: James Smyth (Mar 2 21:21),
          I think what you are describing is simply a form of calibration using “known” data which can then be followed by various tests using new data.

          That’s pretty normal.

          Trouble begins when there’s no distinction between calibration data and test data. And more trouble when everything is used for calibration. And even more trouble when the same data is used over and over and over again.

          (An easy way to think about this, at least for me: “calibration” is in essence Experiment #1 where I learn parameters of my model from my first data set. That forms an hypothesis about what the parameters are… which can then be tested in Experiment #2 — which must obviously be on different data. If that experiment fails, I need a different model or parameters or data source or something.)

        • James Smyth
          Posted Mar 3, 2014 at 3:30 PM | Permalink

          [MrPete] I think what you are describing is simply a form of calibration using “known” data which can then be followed by various tests using new data.

          Let me put it this way … Do you consider what was done by Mann to Gaspe to be “calibration”? Would you consider a more massive process (manual or automated) of detecting other favorable Gaspe-style decisions (or choices or manipulations) to be “calibration”?

        • MrPete
          Posted Mar 3, 2014 at 4:37 PM | Permalink

          Re: James Smyth (Mar 3 15:30),

          Do you consider what was done by Mann to Gaspe to be “calibration”?

          How could it be? If it were, it would not be part of the analytical result.

          Whatever we calibrate on is presumed to be “known” — we’re basically fitting our model/parameters to whatever data is “known.” Thus, the result of that calibration is input to something else, rather than output.

        • James Smyth
          Posted Mar 3, 2014 at 5:26 PM | Permalink

          You brought up the term calibration. I’m trying to point out that it’s not what I’m talking about. I’m talking about hunting around for favorable things (like the Gaspe tweak). And in particular, I’m talking about automating this hunting expedition to find things that give you favorable output like RE, R2, et al.

          I think I’ve beat this idea to death at this point. Interesting to me, but maybe not to anyone else.

        • Posted Mar 3, 2014 at 5:31 PM | Permalink

          Make it two people interested, as a lower bound.

        • MrPete
          Posted Mar 3, 2014 at 5:54 PM | Permalink

          Re: James Smyth (Mar 3 17:26),
          You may call it a “hunting expedition” but whatever you want to call it, looking at the data to determine “optimum” parameters is either:
          a) done as part of some kind of valid calibration step (before analysis using new data, and not part of the output)
          or
          b) done as part of the analysis and is invalid data snooping.

          Data snooping covers more than just direct examination of the data points. If I feed a bunch of data sets into a data-grinder and decide either which data to use or what settings to use… and all of that is supposedly part of my final analysis, then I’m up to no good scientifically speaking.

          Remember, one of the key tenets of science is to avoid fooling people… others or yourself. How can you possibly know if the data is telling you something valid, if you’re adjusting your analysis to fit the very data you’re trying to analyze?

          This is not like an engineering project where we know what is noise and what is signal, and we can tweak the circuitry to minimize noise. For all we know, the proxies may be all signal, or all noise.

        • MrPete
          Posted Mar 3, 2014 at 6:27 PM | Permalink

          Re:James Smyth (Mar 3 17:26),
          By the way, I’m a S/W architect who like you has great appreciation for test/etc automation methods. Perhaps the following will be helpful to you.

          Part of my background includes years working in demographic data arenas. Just as there are test methodologies for software development/etc processes, so too there are test methodologies for data management processes. Imagine if you will a standard data transformation process that takes data sets of a particular kind and cleans/analyzes/etc for publication or other purposes. Test automation scripts can be written for that process. Test data sets can be created to determine if the process is working as expected. And so forth.

          You may be interested in the work done by Jim Bouldin to carefully create test data sets and apply them to the typical dendroclimatology data analysis methods.

          YES, we can select data based on all kinds of interesting parameters. But this must be done very very carefully to avoid fooling ourselves.

          What you won’t find, either in demographics or in other arenas of science, is an automated method for optimizing on the fly which data sets to use in a particular analysis based on impact on the results or a statistical test. Think about it (and this has been discussed here before)… what if we tried to do that in medicine? Obviously, one could drop all the data indicating that a new medication can lead to fatalities. Or, more subtly, one could drop all data for subjects who did not complete the study regimen for whatever reason. But such methods of data selection aren’t going to give more accurate understanding of what is really going on.

          Recent studies have shown that even such seemingly benign tweaks as adding more data when the initial data set doesn’t produce a significant result… produce a much higher probability of false-positive results.

          Hope that helps!
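
The “add more data until it works” effect MrPete alludes to is straightforward to check by simulation. A sketch (with arbitrary sample sizes, under a true null of no group difference) of how topping up the data and re-testing inflates the false-positive rate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def false_positive_rate(top_up, n_trials=2000):
    """Two-sample t-test under a true null (no group difference)."""
    hits = 0
    for _ in range(n_trials):
        a, b = rng.normal(size=20), rng.normal(size=20)
        p = stats.ttest_ind(a, b).pvalue
        if top_up:
            # Not significant? Add 10 more subjects per group and
            # re-test, up to three times.
            for _ in range(3):
                if p < 0.05:
                    break
                a = np.concatenate([a, rng.normal(size=10)])
                b = np.concatenate([b, rng.normal(size=10)])
                p = stats.ttest_ind(a, b).pvalue
        hits += p < 0.05
    return hits / n_trials

print("fixed-N rate:        ", false_positive_rate(top_up=False))  # ~0.05
print("add-data-and-retest: ", false_positive_rate(top_up=True))   # noticeably higher
```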

        • James Smyth
          Posted Mar 4, 2014 at 3:45 PM | Permalink

          Mr Pete, I feel like we are talking past each other. You seem to be talking about legitimate issues around data selection. I’m talking about more nefarious attempts to drive results. And, again, just to reiterate, I’m imagining automating those attempts, rather than just manually manipulating things ad hoc.

        • MrPete
          Posted Mar 4, 2014 at 5:26 PM | Permalink

          Re:James Smyth (Mar 4 15:45),
          Ah! If all you’re suggesting is that nefarious efforts can be (semi)automated, of course you’re correct. Computers are really good at that 😉

  42. Carrick
    Posted Mar 2, 2014 at 10:43 AM | Permalink

    Nick:

    Which means he would have had to start the recon in 1404, with a result almost identical to what he published.

    This is a point that Mann makes in one of his comments. Is it possible you’re actually reading the ****ing literature now? (With your eyes open this time. I’ve found that helps.)

    Of course this is a post hoc manipulation. Like extending the series to 1400, this should be disclosed to allow replication, and correctly peer reviewed. But you might as well say “if frogs had wings, then they wouldn’t be bumping their a$$”, because we’re discussing what happened, not what could have happened.

    What was actually done was

    • the addition of four fake data points,
    • the act of which was not disclosed,
    • then the starting period for the Gaspe series was modified to 1400,
    • which gives the appearance of hiding one’s tracks,
    • and further, as even your own tests demonstrate, produces a substantive effect on the outcome.
    • Further, this manipulation was not revealed in any fashion for at least 7 years after publication of the paper, even given the controversy about this time period.

    Adding fake data points and not disclosing it was misconduct. The substantive change in outcome makes it a significant act. Some might even use the word “fraud” to describe it; I don’t agree (the outcome is not sufficiently favorable, IMO), but it is a legitimate opinion. This act and its consequences have been known since 2005. It would be very hard to argue that people couldn’t legitimately think that Mann had committed an act of fraud in this case to “hide the pre-1450 incline” in global temperature.

    That said I agree Mann could legitimately come to the same result. He could have:

    • added four fake data points,
    • disclosed that he added these points,
    • stated 1404 correctly as the start of the Gaspe data set,
    • and truncated his output at 1404.

    As I observed on Lucia’s blog, this is similar to padding a window for Fourier analysis, in order to retain the same frequency resolution in the analysis. So it’s slightly tidier. But probably Gaspe shouldn’t have been used at all, which is the bigger problem. You can’t assume good data, when you know post hoc that you have an erroneous outcome.
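
For readers unfamiliar with the signal-processing practice Carrick is comparing this to: padding a record before a Fourier transform puts its spectrum on the same frequency grid as a longer record, and the padded samples are a declared device, never reported as data. A minimal numpy sketch (all values invented):

```python
import numpy as np

fs = 100.0                          # sample rate, Hz
t = np.arange(96) / fs              # a 96-sample record
x = np.sin(2 * np.pi * 12.5 * t)    # 12.5 Hz tone

# Zero-pad the 96-sample record out to 128 samples, so its spectrum
# is evaluated on the same frequency grid as a 128-sample record.
X = np.fft.rfft(x, n=128)
freqs = np.fft.rfftfreq(128, d=1 / fs)

# The padding adds no information; it only changes the grid on which
# the spectrum is sampled, and it is never treated as observed data.
print(freqs[np.argmax(np.abs(X))])  # ≈ 12.5 Hz
```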

    Of course nobody is saying you should investigate people for things they potentially could have done. And you don’t investigate them just because they got the wrong answer.

    You investigate them for actual acts of misconduct, not for potential acts of non-misconduct. And you don’t discount acts of misconduct simply because they could legitimately have gotten to the same place.

    And if the investigation fails to address substantive examples of misconduct, then you can’t say the investigation exonerated Mann of this act of misconduct.

    That’s why this example keeps coming up. Mann is legally challenging somebody over the accusation of fraudulently arriving at his published curve. Then making erroneous claims about having been exonerated, when in fact the most detailed criticisms of him had been published prior to the so-called investigations.

    And apparently ignored.

  43. Posted Mar 2, 2014 at 2:51 PM | Permalink

    Nick/Carrick:
    Remember that it really is more than just four data points for Gaspe. Although the series starts in 1404, the data in those very early years comes from one tree. The original authors don’t use the data until much, much later in the series. So, yes, four data points were infilled/fabricated, depending on your perspective. In reality, no data for Gaspe ought to have been used before 1447 at the earliest. I’m mentioning this detail so that someone doesn’t come along and say, “It’s ONLY four data points…what’s the big deal?”, or, “It doesn’t matter.”

    Bruce

    • Posted Mar 3, 2014 at 1:17 AM | Permalink

      Bruce,
      Arguments about data validity can go on for ever. At some stage, you just have to accept what data you are going to use and analyse it. I’m talking about that stage.

      • AndyL
        Posted Mar 3, 2014 at 3:49 AM | Permalink

        Nick
        You compared auditing with science, and said that the purpose of science is to extend knowledge. Therefore (paraphrasing) the scientist can make selections and compromises to get the maximum information from the data available.

        If that is correct, then the onus is on the scientist to explain clearly what he has done so that knowledge is shared and the results can be replicated. If a scientist uses ad-hoc rules with no mention either of using them or of any justification why particular rules were used in one case and not another, then he has moved away from science and into the areas of advocacy and propaganda.

      • bdaabat
        Posted Mar 3, 2014 at 5:46 PM | Permalink

        Nick: one does not need to “accept” the data…. if one is to conduct studies using commonly accepted scientific principles, one is required to check the data to be sure it is valid and appropriate for the analysis, and to explicitly document the procedures used in handling that data. Neither was done in Mann98.

        1. Mann did not disclose that he used the data and “created” new data to extend the start period for the series.

        2. Mann did not follow the convention of the “discipline” of having cores from at least 5 trees in the proxy series in order to even consider them for inclusion. Applying the convention would have pushed the date back to at least 1447.

        3. Mann treated the Gaspe series differently than the other series. That singular approach, without documentation in the methods, strongly suggests that these adjustments were done post hoc.

        In the past, I’ve personally admired parts of your approach to contentious issues. I’ve read your comments because I wanted to get a different yet considered perspective. Your behaviour in this discussion (here and at Lucia’s) shows that you are truly not objective and you do not appear to be interested in understanding issues. You are no longer just providing a different perspective that should be considered.

        Bruce

  44. Geoff Sherrington
    Posted Mar 2, 2014 at 6:33 PM | Permalink

    Nick Stokes,
    We are about the same age, so you’ll get this.
    People who are younger or people whose minds have not been adversely affected by some of the irregular procedures of climate science – often through no fault of their own – these people seem to view certain CC procedures as part of their language, while we do not, or should not.
    Up above, you wrote that “Infilling smooths out these abrupt changes by making better use of the information”. No, no, no.
    Infilling is data fabrication. I use it, but mainly for ease of initial examination of working data. (e.g. a canned program like Excel can be used quickly by popping in some invented infillings, far easier than treating missing data). If you do use infilling, you should not use it for more than working models that don’t see publication. You should never, ever use infilling of missing values in a final, published paper. Or, if you have to, you have to spell this out clearly and precisely, neon light type prominence.
    Your words “making better use of the information” might have been expressed shorthand, but again there is no place for this type of thinking in a published paper. If you are writing about natural events, you must realise that Nature might not share your idea of what is ‘better’.
    There is quite adequate literature to support the accusation that those who invent data to get a ‘better use’ commonly do so to get an outcome preferred by them or their preconceptions or their ideologies. This type of wrong thinking is often seen in climate science. OTOH, you are familiar with procedures for estimating grades of ore deposits from sparse drill hole assays and you KNOW that you do not use invented infills in the final estimate. In this game, a game whose statistics has much overlap with, say, time series of areal measurements of rainfall or temperature, there is usually no final purpose for invented infills unless there is a bad intent afoot. I can’t imagine a reason for wanting an outcome for an ore reserve that is higher or lower than best estimate, unless there is criminal intent. There is every chance that you will be found out.
    Climate science lacks the final accountability that comes to miners who find the grade is not there on digging. You can’t invent economic ore deposits by fiddling the numbers. You should not be able to invent a hockey stick through fiddling.
    You get caught out, eventually.

    • Posted Mar 3, 2014 at 1:28 AM | Permalink

      Geoff,
      Infilling in various ways is very widely used whenever there are problems of missing data. ie in almost all practical statistics. We’ve been discussing Cowtan and Way and the Arctic, for example. Global temperatures, when you grid, have missing elements. That doesn’t mean the data is thrown away. In effect, they are usually infilled with the global average. You can look for better ways. But missing cells doesn’t mean we don’t know anything about global temperature.

      • ianl8888
        Posted Mar 3, 2014 at 2:46 AM | Permalink

        That doesn’t mean the data is thrown away. In effect, they are usually infilled with the global average

        Oh dear, oh dear …

        Firstly, please use the word interpolation rather than infill. The word “infill” is for the 5pm news, ok ?

        Secondly, and right on the point, why is the “average” of the data cells surrounding the blank area not used instead of some “global” average ?

        Your description as “usually infilled with the global average” is exactly throwing away, or considerably reducing the influence of, the surrounding data

        If the data-free area is completely outside any known data points or cells, then extrapolation is used with blazing neon-light warnings. NOTE: extrapolation is not “infill”

        Geoff Sherrington had it right – you are merely cavilling about the number of angels fitting on a pinhead, if you have described your technique correctly

        • Posted Mar 3, 2014 at 3:39 AM | Permalink

          ianl8888
          “Secondly, and right on the point, why is the “average” of the data cells surrounding the blank area not used instead of some “global” average ?”
          That’s just what Cowtan and Way did. I described here how using the latitude average rather than global accounted for a fair part of the change.
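
Nick’s “in effect infilled with the global average” claim, and the difference a latitude-band infill makes, can both be seen on a toy grid. This is a sketch with fabricated data, not Cowtan and Way’s method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy anomaly field on an 18 x 36 lat-lon grid, anomalously warm at
# high northern latitudes (crudely mimicking Arctic amplification).
lats = np.linspace(-85, 85, 18)
field = rng.normal(0, 0.3, (18, 36)) + 0.01 * lats[:, None]
weights = np.cos(np.radians(lats))[:, None] * np.ones((18, 36))

# Knock out ~90% of cells poleward of 65N (sparse Arctic coverage).
observed = np.ones(field.shape, dtype=bool)
arctic = lats > 65
observed[arctic, :] = rng.random((arctic.sum(), 36)) < 0.1

# Averaging only the observed cells is equivalent to infilling every
# missing cell with the (observed) global mean.
obs_mean = np.average(field[observed], weights=weights[observed])

# Alternative: infill each missing cell with its latitude-band mean.
filled = field.copy()
for i in range(len(lats)):
    row = field[i, observed[i]]
    filled[i, ~observed[i]] = row.mean() if row.size else obs_mean
lat_mean = np.average(filled, weights=weights)

print(f"global-mean infill (observed cells only): {obs_mean:+.3f}")
print(f"latitude-band infill:                     {lat_mean:+.3f}")
# The second runs warmer because the missing cells sit in the
# anomalously warm Arctic -- the kind of effect Nick describes.
```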

  45. Steve McIntyre
    Posted Mar 2, 2014 at 8:02 PM | Permalink

    Josh sends the following homage to Nick Stokes:
    [Cartoon: Nick_Stokes_Defense]

    • bernie1815
      Posted Mar 2, 2014 at 8:18 PM | Permalink

      If I were Nick, I would be proud that I had provided so much fodder for such a great artist and distiller of the truth (unlike our Aussie cartoonist) as Josh. A signed copy would hang in my study.

    • David Young
      Posted Mar 2, 2014 at 9:12 PM | Permalink

      Nick’s strategy is the same as that described by Walter Kaufmann for less than honest theologians. The conclusion is predetermined and the attitude is the same as a lawyer for the defense.

  46. Bob
    Posted Mar 2, 2014 at 9:33 PM | Permalink

    “Nick Stokes, Replication works, auditing does not.”

    Wrong, Nick. Is there a point to replicating that which is clearly wrong?

    • Posted Mar 3, 2014 at 1:15 AM | Permalink

      Bob,
      “Is there a point to replicating that which is clearly wrong?”

      If it’s clearly wrong it won’t replicate. That’s the point of the scientist’s way. There’s a truth to be found that doesn’t depend on how you got there.

      That’s what matters here. Scientists who actually want to know about 1400-1450 and are restricted to the data available to MBH98 will find ways to use it. Carrick has outlined some. The simplest is just to start the recon in 1404. The point is that they will get something very like Mann’s result. They will not get the M&M result, which is based on a speculation about a particular method of a particular author. It has no lasting value.

      • HAS
        Posted Mar 3, 2014 at 2:08 AM | Permalink

        Of course you can replicate something if it’s “wrong”, regardless of how one is using that term (one can for example easily replicate a wrong turn). One can’t however replicate something if one doesn’t know what was done, nor can one audit it.

        Replication and auditing deal with the quality and integrity of the work of others who have gone before. If one can’t replicate, or it fails in audit (e.g. inconsistent treatment of data), one treats the old results with caution (or even disdain if it seems there is intent to mislead). And there is no harm in pointing out the results that would have been obtained if they point in another direction (particularly if the original results were used to mislead).

        Wanting to know about 1400-1450 is a different class of problem. It raises a whole different set of issues e.g. is this data set even good enough to tell us about 1400-1450?

        Nick, you are surviving on sophistry. No one here is talking about what actually happened in 1400-1450; they are talking about what Mann did.

        I’m sure you know that. So I assume that since you are having to argue about something else you concede the points others are making about Mann?

        • Posted Mar 3, 2014 at 3:26 AM | Permalink

          HAS,
          “Of course you can replicate something if it’s “wrong”, regardless of how one is using that term…One can’t however replicate something if one doesn’t know what was done, nor can one audit it.”

          You’re mixing up auditing and replicating. In replicating, you read of a result, usually in a scientific paper, and devise your own method to get that or a similar result, helped by the information in the paper. If the result is wrong, it won’t replicate (unless you coincidentally make similar errors). That’s how science builds. And people were replicating even before CA started.

          “No one here is talking about what actually happened in 1400 – 1450, they are talking about what Mann did.”
          Yes, but what Mann did will pass (has passed); what scientists want to know is what happened in 1400-1450.

        • Carrick
          Posted Mar 3, 2014 at 3:50 AM | Permalink

          Nick:

          Yes, but what Mann did will pass (has passed); what scientists want to know is what happened in 1400-1450

          They can look at newer reconstructions for that. Start by dumping this garbage in the waste bin if you want to learn the truth.

          For the purpose of actually knowing what “happened”, you should use series that aren’t as compromised as Gaspe to start with. And probably using a newer method (e.g., EIV) is going to be superior to the uncentered PCA of MBH in any case.

          Steve: I’m far from convinced that EIV is an improvement on regionally weighted averages. But the big problem is inconsistency of proxies and how to interpret the inconsistency.

        • HAS
          Posted Mar 3, 2014 at 4:28 AM | Permalink

          Nick Stokes

          “You’re mixing up auditing and replicating.”

          Nick, you need to get out more. Replication involves repeating an experiment so you can estimate the impact of uncontrolled variables; related to that, reproducibility is the ability to do it again.

          What you seem to be talking about is the validation of results using different methods.

          As I said you can’t replicate or reproduce (or audit) unless you know the data and the method.

        • bernie1815
          Posted Mar 3, 2014 at 7:48 AM | Permalink

          Nick:
          It seems to me that the replication of an historical temperature record is constrained by the limited availability of proxies. My understanding is that if you use the available proxies but exclude the BCPs (especially strip-bark series), Gaspe cedars, Yamal and Tiljander, the historical temperature record derived from proxies again includes a distinctive MWP. Is that your understanding?

        • Pat Frank
          Posted Mar 3, 2014 at 12:03 PM | Permalink

          The big problem is that there’s no falsifiable physical theory to convert a tree ring metric into Celsius.

        • David Jay
          Posted Mar 3, 2014 at 12:33 PM | Permalink

          cute…

        • Carrick
          Posted Mar 3, 2014 at 12:54 PM | Permalink

          Steve McIntyre:

          Steve: I’m far from convinced that EIV is an improvement on regionally weighted averages. But the big problem is inconsistency of proxies and how to interpret the inconsistency.

          When you are calibrating against a time series that has measurement error, using EIV is one method for avoiding the bias associated with that calibration process. A standard least squares estimate of the calibration constant will yield a deflation in scale by the factor
          1/\sqrt{1 + (N/S)^2},

          where S/N is the ratio of the amplitude of signal to noise.

          Most standard toolbox methods (e.g., tfestimate in MATLAB) suffer from this effect when the signal to noise is poor.

          We use EIV in our group for similar signal-measurement problems. For my calibration work, I don’t typically use EIV; I use a variation on the three-microphone method proposed by Sleeman (2006).

          Undoubtedly there are bigger problems. Identifying “proxies” that have real climate signals is a big one. Identifying proxies that only respond to temperature is an even tougher problem.

          If you don’t have good data going in, then I’m afraid it’s GIGO regardless of the processing algorithm you are using.

          Only if you know your series has something to do with the quantity you’re trying to measure will newer types of processing help.

          [ed: FYI $ latex code... $ with no space after the first ‘$’ gives latex. Remember backslashes! 🙂 ]

        • Carrick
          Posted Mar 3, 2014 at 1:11 PM | Permalink

          Grr… wrong thread and still a typo. LaTeX gibberish for the win:

          1/\sqrt{1 + (N/S)^2}.

          When the S/N is large, the bias can be safely ignored. When the S/N is small, this factor results in a large deflation in scale.

          Anyway, methods like EIV or the other ones I mentioned can eliminate this bias, which is one source of a reduction in variance in the reconstruction period relative to the calibration or training period. I think the Fourier-based approaches may also help in reducing the loss of high-frequency information in the reconstruction.

          Still, I am not interested in generating an “improved method”, primarily because I’m reluctant to stick my toes into this problem without starting with the raw data. In my experience, relying on other people to consistently process the raw data generally leads to data reliability problems.
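
          [ed: a quick Monte Carlo check of that deflation factor. The sketch assumes one simple calibration scheme, scaling by the correlation coefficient, and uses synthetic data, not any actual proxy series.]

              import numpy as np

              rng = np.random.default_rng(1)
              n = 200_000
              snr = 2.0                         # amplitude signal-to-noise ratio S/N
              T = rng.normal(size=n)            # "true" signal, unit amplitude
              p = T + rng.normal(size=n) / snr  # proxy = signal + noise

              # Scaling a reconstruction by the correlation coefficient
              # deflates its amplitude by r = 1/sqrt(1 + (N/S)^2).
              r = np.corrcoef(T, p)[0, 1]
              print(r, 1 / np.sqrt(1 + (1 / snr) ** 2))  # both ~0.894 for S/N = 2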

        • HAS
          Posted Mar 3, 2014 at 2:06 PM | Permalink

          Just an additional afterthought on Nick’s sophistry.

          The discussion here is about the experimental methods, which by definition requires an understanding of the method.

          Nick wants us to talk about the experimental results, and ignore the inconsistencies in the methods. Given the poor quality of the data and the proxies this is a much fuzzier and safer place for him to be.

          Also being ignored by Nick is that inconsistent methods (all other problems aside) usually lead one to discount the weight one puts on the study.

        • Wally
          Posted Mar 3, 2014 at 7:04 PM | Permalink

          In reply to Carrick re S/N:

          This was covered in textbooks on radio receiver performance back in the 1980s: once your S/N ratio gets small, using the measure of S/N to determine other metrics gets very iffy.

        • Carrick
          Posted Mar 4, 2014 at 9:48 AM | Permalink

          Most of the methods we use don’t rely on measuring S/N; rather, they try to avoid the issues with bias introduced when the S/N is low.

      • mikep
        Posted Mar 5, 2014 at 11:01 AM | Permalink

        Replication is clearly often used to describe making an exact copy. Consider DNA replication. And when a copying mistake occurs that is copied – a replication of the mistake. Nick (and Rob Honeycutt) may like to take a look at the literature on replication in economics, for example. They could start here
        http://ineteconomics.org/blog/inet/economics-needs-replication

        And note the distinction made between narrow replication, which Nick strangely wants to call auditing, and wider replication.

  47. Bob
    Posted Mar 2, 2014 at 10:21 PM | Permalink

    Nick, you can only be described as indefatigable. But it is troubling to assess your approach to data handling. From what I can ascertain of your approach to handling data, you would be barred from practicing professional statistics if you were in the medical field.

    The gold standard in clinical trial execution is an RCT, or randomized clinical trial. First, a detailed clinical protocol is finalized that includes very detailed inclusion and exclusion criteria. This is very important because the subject has to exhibit the characteristics of the disease you are attempting to understand. Bristlecone pines, if they were not unequivocally determined to be sensitive to temperature (and not to CO2 fertilization, for example), would not be included in the trial. This is determined before randomization.

    Also, before a trial begins, a randomization code is generated (usually in block form), and that code is not known to the sponsors, the investigators, or the subjects. The code is generally given to a separate and independent DSMB, or data safety monitoring board, which has the power to break the code, but only under pre-specified conditions, such as one group suffering high morbidity or mortality.

    Also, before the trial begins, a SAP, or statistical analysis plan, is written and finalized, with specified p values and primary and secondary endpoints, and it can never be modified after the fact. Another important point: in an ITT (intention-to-treat) population, none of the patients are excluded and the patients are analyzed according to the randomization scheme. In other words, for the purposes of ITT analysis, everyone who begins the treatment is considered to be part of the trial, whether the subject finishes it or not. Once a subject is randomized, that patient is included, even if they, for some reason, never receive the intended treatment. Even if that subject did well but violated the protocol, it is counted as a failure for the purposes of analysis. The idea behind this design is to eliminate as much bias as you can. Bad data or missing data counts against the investigator, the theory being that truly efficacious treatments will manifest in the face of type I and II errors.
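
    [ed: the “block form” randomization mentioned above can be sketched in a few lines; the block size and arm names here are illustrative only.]

        import random

        def block_randomize(n_subjects, arms=("treatment", "control"),
                            block_size=4, seed=42):
            """Blocked allocation: each block holds equal numbers of each arm,
            shuffled, so the allocation stays balanced throughout the trial."""
            assert block_size % len(arms) == 0
            rng = random.Random(seed)
            schedule = []
            while len(schedule) < n_subjects:
                block = list(arms) * (block_size // len(arms))
                rng.shuffle(block)
                schedule.extend(block)
            return schedule[:n_subjects]

        print(block_randomize(10))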

  48. Jean S
    Posted Mar 3, 2014 at 2:34 AM | Permalink

    Mann is (once again) tweeting a link to a “thoughtful” review of his own book. This time we learn that he is the MVP of the Team:

    An update in this paperback version shows continued vindication of the Hockey Stick, as it continues to be confirmed by additional studies. Those studies, and the now nine investigations in which no improprieties were found, surely make him the Most Vindicated Professor (MVP) ever.

    • Posted Mar 3, 2014 at 3:28 AM | Permalink

      It seems here he is the Most Wanted Professor.

      • Jan
        Posted Mar 3, 2014 at 10:49 AM | Permalink

        It’s the Streisand Effect.

      • Don Monfort
        Posted Mar 3, 2014 at 11:26 AM | Permalink

        Nicky, do you think that the transparent sophistry that you are wielding in your futile attempts to defend your hero will fool a jury?

        • Joe
          Posted Mar 3, 2014 at 11:46 AM | Permalink

          “Nicky, do you think that the transparent sophistry that you are wielding in your futile attempts to defend your hero will fool a jury?”

          The short answer is yes – it will likely fool the jury.

          I am a bit more pessimistic than Steve and others who have posted on the topic of the makeup of the likely jury pool. Most of the population in the DC metroplex falls into the group that believes the science is settled. As such, the jurors’ reluctance to look objectively at the shortcomings in the studies will be difficult for the defense to overcome. The judge has indicated his predisposition that the science is settled via his ruling denying the motion to dismiss the amended complaint.

        • pottereaton
          Posted Mar 3, 2014 at 12:48 PM | Permalink

          Joe: it might fool a jury, but it won’t fool an appeals court, which will understand the implications of affirming a rigged defamation verdict against Steyn et al. And if they don’t do it, the Supreme Court certainly will. This is dead center a free speech case.

          And even if they get 12 average mostly liberal D.C. denizens on that jury, I would speculate that quite a few of them won’t have an opinion one way or another on climate change.

        • Don Monfort
          Posted Mar 3, 2014 at 3:30 PM | Permalink

          I believe that potter has it about right. A D.C. jury will be predisposed towards the cause, but they may be persuadable by evidence of mikey’s tricks such as have been presented here by Steve. Look at what happened when Gavin et al debated Lindzen, Crichton et al in front of an audience of NPR greenies. The crowd was turned around:

          http://www.npr.org/2007/03/22/9082151/global-warming-is-not-a-crisis

          Also, there may be reason to hope that enough jurors will take their civic responsibility to defend the first amendment more seriously than the judge has, so far. It’s also possible that the judge will feel a little anger on finding out about the faux exonerations.

          Does anyone know if mikey’s legal team has modified the stories that they have told the court, in light of Steve’s revelations of multiple misrepresentations?

        • JCM
          Posted Mar 3, 2014 at 10:32 PM | Permalink

          Your last sentence is wrong. The Weisberg decision displays no leanings towards the plaintiff or the defendant.

        • JCM
          Posted Mar 3, 2014 at 10:35 PM | Permalink

          OOPS, my comment was directed to Joe.

    • bernie1815
      Posted Mar 3, 2014 at 7:28 AM | Permalink

      Jean:
      I have been tracking Mann’s commentary as well. It is pretty amazing how he can get his followers to dump all over a negative review. Many appear to come via SkS. Though to be fair, some of the negative reviews give no indication that the reviewer has read the book – but then many of the positive reviews fall into the same category.

    • Posted Mar 3, 2014 at 11:13 AM | Permalink

      In the London software startup circles I’ve been frequenting, on and off, over the last ten years, MVP stands for Minimum Viable Product – as defined by the bestseller The Lean Startup by Eric Ries, hot out of Silicon Valley in September 2011. Google’s your friend if you want to know more. I’m sure there’s an excellent joke to be made connecting this to the esteemed Penn State professor, but I’ll leave that to the entrepreneurial creativity of others.

  49. Jean S
    Posted Mar 3, 2014 at 3:44 AM | Permalink

    Nick:

    Scientists who actually want to know about 1400-1450 and are restricted to the data available to MBH98 will find ways to use it. Carrick has outlined some. The simplest is just to start the recon in 1404. The point is that they will get something very like Mann’s result.

    That’s a bold claim. Do you have anything to back it up?

    • Posted Mar 3, 2014 at 9:55 PM | Permalink

      Jean S,
      “That’s a bold claim. Do you have anything to back it up?”
      Yes. I showed in my post that whether you extrapolate with values at the max of the range that could be expected, or near the min, there is little difference to the result. Any reasonable replication that is genuinely trying to use the available data will make some accommodation to use Gaspe in this range. With such low sensitivity, it won’t matter whether they use some other missing-value treatment, or simply move the recon start to 1404. The M&M result relies on a rule which, if they are right, would be a defect in MBH, and discards data. Others would use a better rule.
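
      [ed: the sensitivity check being claimed here can be sketched generically; the proxy matrix below is random toy data, not the MBH network.]

          import numpy as np

          rng = np.random.default_rng(0)
          # Hypothetical network: 50 years x 10 series; series 0 starts 4 years late.
          proxies = rng.normal(size=(50, 10))
          proxies[:4, 0] = np.nan

          def composite(p, fill):
              """Mean composite after infilling series 0's missing early years."""
              p = p.copy()
              p[:4, 0] = fill
              return p.mean(axis=1)

          lo = composite(proxies, np.nanmin(proxies[:, 0]))
          hi = composite(proxies, np.nanmax(proxies[:, 0]))
          # Min-vs-max infill moves the composite by at most (max - min)/10,
          # and only in the four infilled years.
          print(np.max(np.abs(lo - hi)))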

      • MrPete
        Posted Mar 3, 2014 at 10:01 PM | Permalink

        Re: Nick Stokes (Mar 3 21:55),

        Any reasonable replication, that is genuinely trying to use the available data, will make some accommodation to use Gaspe in this range.

        Please provide evidence that a “reasonable” replication would use all available data, rather than excluding time frames for which there is insufficient data to produce a statistically valid result.

        Your claim is not supported by the facts.

        • Posted Mar 3, 2014 at 11:44 PM | Permalink

          MrPete,
          As I’ve said several times, all they would have to do is start at 1404 with the current methods. Then there’s no time issue about using Gaspe. But there are better ways.

        • MJW
          Posted Mar 4, 2014 at 1:33 AM | Permalink

          The advantage of starting at 1400 is that it just seems like he’s starting at the beginning of a century; starting at 1404 reveals that the year is cherry-picked to use a dubious data set based, in its beginning years, on one lone tree.

        • MrPete
          Posted Mar 4, 2014 at 6:23 AM | Permalink

          Nick, that’s NOT a justification of your claim!

          In fact what you are attempting to justify is data snooping. It’s invalid to adjust the start date to meet the needs of a particular data set. PERIOD. That’s data snooping and you of all people ought to know that. Why are you unable to admit the truth? I know it is hard when you’ve published your accusations about Steve’s “mistake” but sometimes it pays to apologize and correct your statements. Steve does so on this blog whenever a mistake is discovered. It’s the only way to retain integrity.

          Please please stop trying to justify data snooping. That’s just awful.

        • Spence_UK
          Posted Mar 4, 2014 at 9:50 AM | Permalink

          Nick, this has been explained to you over and over again. The inverse regression step in the reconstruction leads to a non-robust statistical solution in that you can modify inputs arbitrarily and get the answer you want (rather than the objectively correct answer).

          The fact that you can endlessly recover the original result (or equally break the original result) by tweaking different parameters – the start date of the step, the infill value, the normalisation factors, detrending steps etc. etc., only highlights how flawed the methodology is. It does not show anything else. Full stop.

      • Posted Mar 4, 2014 at 2:50 PM | Permalink

        MrPete,
        “It’s invalid to adjust the start date to meet the needs of a particular data set. PERIOD.”

        Recons have to start somewhere. How do you think the start date gets determined in the first place?

        Of course a recon starts when there is sufficient data. And the only way to find that out is to try it and see which start dates pass verification. Of course you’ll try dates related to the data set.

        • MJW
          Posted Mar 4, 2014 at 3:22 PM | Permalink

          Of course a recon starts when there is sufficient data.

          Adding a single tree provides sufficient data to determine the temperature for the entire northern hemisphere? That’s quite a tree!

        • MrPete
          Posted Mar 4, 2014 at 3:55 PM | Permalink

          Re: Nick Stokes (Mar 4 14:50),
          The experiment should be designed before collecting data. In this case, others have “collected” it but we still should not be selecting data sets before the experiment has been spec’d.

          Nick, I thought you knew this stuff. Science does not involve peeking into the “data box” until you’ve specified just about everything about the experiment. I realize that dealing with the climate of the past makes it very very tempting to peek, but doing so invalidates the experiment.

          Please go back and take a course on the basics of the scientific process.

          Readers may find information on “Blind Experiments” helpful. The linked article describes how this is done in some surprising arenas, such as particle physics.

        • Posted Mar 5, 2014 at 6:24 AM | Permalink

          MrPete,
          I’m actually curious how you answer my question – where do you think start dates come from? How should they get one? Pick a random number?

          And if they pick a date, and there just isn’t enough data to pass verification with that date, what should they do? Give up? Or try a date more suitable to the data they have?

        • Gerald Machnee
          Posted Mar 5, 2014 at 8:48 AM | Permalink

          Nick:
          **Or try a date more suitable to the data they have?**

          Nick, what you are trying to say is: “A date more suitable for the results I want”.

          Will you consider real science one day?

        • MJW
          Posted Mar 5, 2014 at 3:38 PM | Permalink

          And if they pick a date, and there just isn’t enough data to pass verification with that date, what should they do? Give up? Or try a date more suitable to the data they have?

          The only difference between the data available in 1404 and 1400 is a single tree! That’s why Mann couldn’t reasonably choose to begin at 1404. He would have had to explain why he selected such a seemingly arbitrary year, and there’s no valid explanation. Beginning at a round-numbered year like 1400, on the other hand, passes with little notice.

    • Skiphil
      Posted Mar 3, 2014 at 10:24 PM | Permalink

      Nick:

      “Scientists who actually want to know about 1400-1450 and are restricted to the data available to MBH98 will find ways to use it. ”

      Nick (and Michael Mann) seem to assume throughout that it is better to embrace insufficient or grossly flawed data to “say” something. What if the only valid kind of statement to be made is that we lack adequate data on a certain time period or topic?

      Why should it always be better to assert something rather than nothing? Or as Wittgenstein famously said in a very different context, “That of which we cannot speak we must pass over in silence.”

      • bernie1815
        Posted Mar 3, 2014 at 10:30 PM | Permalink

        Well said.

      • pottereaton
        Posted Mar 4, 2014 at 10:32 AM | Permalink

        Nick serves a useful purpose on this blog and others. In the absence of Mann, his cohorts and apologists, who are apparently too cowardly to defend their work and methodology from expert criticism, he comes and argues their position for them from an informed, if mistaken, point of view.

        He is their proxy, and until those eminentoes grace us with their presence, we will have to make do with him.

  50. Posted Mar 3, 2014 at 11:15 AM | Permalink

    Posted Mar 3, 2014 at 3:26 AM | Permalink
    HAS,
    “Of course you can replicate something if it’s “wrong”, regardless of how one is using that term…One can’t however replicate something if one doesn’t know what was done, nor can one audit it.”

    You’re mixing up auditing and replicating. In replicating, you read of a result, usually in a scientific paper, and devise your own method to get that or a similar result, helped by the information in the paper. If the result is wrong, it won’t replicate (unless you coincidentally make similar errors). That’s how science builds. And people were replicating even before CA started.

    That is a VERY disingenuous statement.

    Replication is NOT, under any circumstances, the invention and devising of your own method to get a similar result.

    Replication is EXACTLY what it states: the replication of a scientist’s exact work, to produce his EXACT results; otherwise it’s ALL guesswork.

    You can tell people such as Mann don’t work in teams or code…

    You have to be able to explain ALL code you write to other people who need to know how it works for security and numerous other reasons.

    Replication through the publication of ALL code and methodology is the basis of science and professionalism.

    All Mann needed to do was publish his statistical analysis and PC code that he used to form the hockey stick.

    Maths is fairly easy to replicate. It should not be a big deal to show ALL your data and workings on something as important as this.

  51. Bob
    Posted Mar 3, 2014 at 7:26 PM | Permalink

    Nick: “In scientific research, the repetition of an experiment to confirm findings or to ensure accuracy.”

    Nick, repetition does not ensure accuracy. It may, but only if the original was accurate. I do hope you know the difference between accuracy and precision.

    • Posted Mar 3, 2014 at 9:05 PM | Permalink

      Bob, I’m quoting what the dictionary says.

      • Bob
        Posted Mar 3, 2014 at 10:04 PM | Permalink

        Nick, I thought you would have known better, but I will give it to you in grade school fashion. If we are target shooting, the bullseye represents accuracy and the subsequent groupings of shots around the bullseye represent precision. Nick, precision measures reproducibility, accuracy does not. Please, no more discussion about it.

  52. lawrence hickey
    Posted Mar 3, 2014 at 8:38 PM | Permalink

    http://phys.org/news/2014-03-climate-scientists-interact.html

    physics website comments on warming and Mann and Steve. Comments are still open.

  53. kim
    Posted Mar 4, 2014 at 12:07 AM | Permalink

    Instructions for the jury in five words: ‘I’m wondering what’s going on’.
    ===============

  54. stan
    Posted Mar 4, 2014 at 1:28 PM | Permalink

    With respect to Rob Honeycutt’s: “What if all scientific research was subjected to McIntyre-style auditing? I would put forth that such a system would bring all scientific research to a grinding halt”

    In my opinion, it would likely improve a pretty shoddy enterprise.

    1) As Amgen and Bayer have experienced, the vast majority of ground-breaking studies in two areas proved to be wrong. The story that Amgen’s Begley tells is beyond sad. http://reason.com/archives/2012/04/03/can-most-cancer-research-be-trusted

    2) I believe that Ross McKitrick has done some work in showing that a lot of academic research is badly flawed. I think he has said that the biggest problem is that doctoral programs used to teach students by having them replicate new ground-breaking studies. Now, the chase for grant money has changed everything. Grants aren’t available for replication so it never happens. I believe it was Ross who wrote that basically no one ever checks anyone else’s work.

    3) John Ioannidis and Matt Briggs have had a lot to say about the likelihood that most published research is wrong because of the stats.

    4) Doug Keenan has described his experience in demonstrating that all the scientists doing radiocarbon dating were using flawed stats, but no one in the field had any interest in correcting the problem.

    Note — this isn’t an issue of a few people making mistakes. There is a lot of evidence that the entire scientific enterprise, as currently practiced, has almost no quality control at all. If science as an institution lacks any semblance of quality control and no one seems to care, the institution is broken. Perhaps this is an overstatement. Even if so, science clearly is in desperate need of some more Steve Mc.

  55. MrPete
    Posted Mar 5, 2014 at 7:15 AM | Permalink

    Re: Nick Stokes (Mar 5 06:24),
    The problem of insufficient data happens all the time with data collection and data management. And your question’s a good one. You actually know the answer to this but may not have thought about it in this context.

    Some people feel it is important to “infill” or extrapolate to show they have at least something for the entire range of interest. But that’s actually just fooling themselves and others. The more appropriate answer is to admit we don’t know enough and use an Unknown value. In graphs and charts this can be represented in a variety of ways, from blanks to grays to light dots etc. Blank is usually best (although in some maps something else is appropriate).

    Think about it this way: if you’re going to create a gridded analysis of the surface of the earth, the typical method is to choose a resolution of some kind and create your grid. If your analysis requires data that represents 100% of any cell, then that is what determines the cells that have data. (A different “rule” can be used but it’s often far more complicated at the calculation step if one tries to deal with partial-cell data.) It would be the height of folly to modify the grid itself to fit the data. One might go to a higher resolution grid to obtain finer detail, but the general principle holds. And if there’s not enough data for a grid cell? Then you’ve got an area with Unknown values. Time to gray it out, and if the unknown area is big enough, hire an artist to draw a nice figure and label it “There Bee Dragons” 😀

    Unknown data does not need to be embarrassing. Unknown data is motivating, encouraging, even enticing… It’s what gets others to get involved. Motivates people to try something new, helps them realize that help is needed. “What, even the experts don’t know the answer to that? Wow, maybe I can do something!”

    When people proclaim that they have information about something, when they don’t, it’s misleading. Likewise, when people fail to show the valid data that they DO have, that DOES fit their analytical parameters, that too is misleading. Not sure who loses more in such cases — the scientist themselves or the audience.

    Perhaps I have a special appreciation for this, a bit more than others, because I’ve been involved in the technology for Unknown data since almost the beginning of my career. Just about any computer you can think of that knows how to do scientific calculations, and any database that can hold scientific data, has a pretty robust capability to record and calculate data sets containing Unknown values. Tracking and properly calculating with Unknowns in the mix is an important part of good science.

    But dealing well with the Unknown is nothing new and doesn’t require fancy computers. It’s been handled since before science existed. Just think about a Very Olde Mappe….
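
    [ed: a small sketch of how NaN-aware tools handle this, assuming numpy conventions: Unknowns propagate unless you explicitly reduce over the known cells, and the fraction of Unknowns can be reported alongside.]

        import numpy as np

        grid = np.array([[0.2, np.nan, 0.4],
                         [np.nan, 0.1, np.nan]])

        # Propagating Unknowns: any calculation touching NaN stays NaN.
        print(grid.mean())            # nan -- the honest full-grid answer

        # Reporting what is known, and how much is Unknown:
        print(np.nanmean(grid))       # mean over observed cells only
        print(np.isnan(grid).mean())  # fraction of the grid left Unknown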

    • Posted Mar 5, 2014 at 12:41 PM | Permalink

      Mr Pete,
      None of that answers my simple question – where do you get a start date from? I’m told you’re not allowed to look at the data. But the analysis needs a starting point.

      My reasoning goes – well, there’s a fair bit of data going back to about 1400. Gaspe is a good data set starting in 1404, so let’s start in 1404. If verification gives trouble, try something a bit later. You tell me that’s improper – I can’t see why. But OK, what’s proper?

      • HAS
        Posted Mar 5, 2014 at 1:49 PM | Permalink

        Not OK if you don’t do it consistently across the whole data set throughout the investigation, and not OK if you do it after you’ve checked the consequences.

        As you well know.

        • Posted Mar 5, 2014 at 2:05 PM | Permalink

          HAS,
          None of that makes sense. I’m asking, how do you decide a start date for the whole recon? 1400? 1404? 1066? Not looking at the data.

        • Jeff Norman
          Posted Mar 5, 2014 at 3:35 PM | Permalink

          Humans are generally drawn to nice round whole numbers. 1400. This is where Mann et al started, or so they claimed.

        • Carrick
          Posted Mar 5, 2014 at 4:36 PM | Permalink

          Nick:

          None of that makes sense. I’m asking, how do you decide a start date for the whole recon? 1400? 1404? 1066? Not looking at the data.

          HAS said “not OK to do it after you’ve checked the consequences” rather than “not OK to look at the data”.

          Look at the data, especially data quality issues, and decide on methodology without looking at the output from your algorithm, at least not the output produced using the data you are planning to use for your final published result.

          Otherwise you’re in danger of tuning your data to match your expected outcome.

      • MrPete
        Posted Mar 5, 2014 at 6:41 PM | Permalink

        Re: Nick Stokes (Mar 5 12:41),
        Nick, it does answer.

        You could easily begin with “I want to learn what we know from 1000 on.”
        Then you define your analysis methodology (eg grids/buckets/whatever) and analysis parameters, data selection criteria, etc.
        THEN you collect data or select data sets, and apply the methodology.

        In this case, AFAIK Mann would have had a result of “Unknown” from 1000 until at least 1450.

        Clearly it’s only by monkeying with the analytical methodology, parameters etc, AFTER looking at the data and output, that he rearranged things to have some kind of result for the 1400-1450 “bucket.”

        Plus — clearly it’s a bad analysis and/or bad data if the presence or absence of a single data source in such an assembly can have a significant impact on the result. Tells us the data is highly variable and therefore highly uncertain.

        • Posted Mar 5, 2014 at 11:12 PM | Permalink

          MrPete,
          “In this case, AFAIK Mann would have had a result of “Unknown” from 1000 until at least 1450.”
          And then what? Say nothing can be done? Or do a recon from 1450?
          But in fact, I think it would have been 1000-1403.

        • Gerald Machnee
          Posted Mar 5, 2014 at 11:30 PM | Permalink

          RE: Nick
          **But in fact, I think it would have been 1000-1403.**

          Sure, now that you know what you want. You are still not listening.

        • Posted Mar 5, 2014 at 11:34 PM | Permalink

          Gerald,
          I’m listening, I’m just not hearing an answer. How can you ever start a recon, under these “rules”?

        • Spence_UK
          Posted Mar 6, 2014 at 4:08 AM | Permalink

          Nick, you realise that the Gaspe Cedar is not the only proxy in MBH98/99, don’t you?

          What magical properties have you conferred on this one proxy (which is neither the longest nor the shortest) which makes it the one proxy that must define the start or end of a reconstruction?

          And why are you not bothered that special handling is required to “get” the result that you want by tweaking a non-robust reconstruction post hoc?

          None of the other steps required this special treatment. Why start here? And why are you comfortable with the idea that the presence of one tree between 1404 and 1420 should somehow completely change the validation score of the reconstruction? Does that not bother you in the slightest?

          Apparently not! Nothing to see here, move along now.

        • Posted Mar 6, 2014 at 5:03 AM | Permalink

          SpenceUK,
          “What magical properties have you conferred on this one proxy (which is neither the longest nor the shortest) which makes it the one proxy that must define the start or end of a reconstruction?”

          Simple. We currently have a recon that passes verification through the device, criticised here, of extrapolating Gaspe from 1403 back to 1400. The extrapolation introduces a small amount of error, but the result is insensitive to the numbers used, except in the 1400-1403 region itself.

          If the recon began in 1404, the calculations would be almost identical, except that the extrapolation would not be required.

          Verification is an on/off thing. Using the available data, going back year by year, there will necessarily be one year which is the first where verification fails. That will very likely be at the endpoint of one of the data sets. It looks like it is Gaspe in 1404.

          As to “one tree”, again, validation is on/off. As you run out of data going back in time, a small loss can make the difference.

          Steve: Nick, the with-Gaspe recon does not “pass verification”. As endlessly discussed, more than one statistic is required to show “verification”. It fails the verification r2 and CE statistics, indicating that the RE statistic is spurious.
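
          [ed: for readers new to these statistics, a minimal sketch of RE, CE and r2 as conventionally defined in this literature; recon and target stand for the reconstruction and the instrumental series over the verification period, and cal_mean for the calibration-period mean of the target.]

              import numpy as np

              def verification_stats(recon, target, cal_mean):
                  """RE benchmarks squared errors against the calibration-period
                  mean, CE against the verification-period mean; r2 is the squared
                  correlation, which is blind to errors of bias and scale."""
                  sse = np.sum((target - recon) ** 2)
                  re = 1 - sse / np.sum((target - cal_mean) ** 2)
                  ce = 1 - sse / np.sum((target - target.mean()) ** 2)
                  r2 = np.corrcoef(recon, target)[0, 1] ** 2
                  return re, ce, r2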

        • Spence_UK
          Posted Mar 6, 2014 at 8:44 AM | Permalink

          verification is an on/off thing

          No it isn’t, it is a confidence metric which is continuous.

          When it jumps suddenly on trivial changes it should be a warning sign that something isn’t right here.

          Clue: something isn’t right here.

        • Carrick
          Posted Mar 6, 2014 at 10:59 AM | Permalink

          Spence_UK:

          None of the other steps required this special treatment. Why start here? And why are you comfortable with the idea that the presence of one tree between 1404 and 1420 should somehow completely change the validation score of the reconstruction? Does that not bother you in the slightest?

          I think that’s a really good point, and I think it’s one that Steve McIntyre has been making too.

          When you have just a few “heroic proxies” that are doing all of the “heavy lifting”, in reality that just signals problems for the reconstruction method.

        • MrPete
          Posted Mar 6, 2014 at 12:29 PM | Permalink

          Re: Nick Stokes (Mar 5 23:12),

          Nick, I’m trying to understand why there’s such a disconnect on this, and I have an idea. Actually, a couple of ideas.

          First, it could be that you are conflating
          a) Simple cataloging of the data itself
          and
          b) Analysis based on the data
          Clearly, for (a) every single bit of data collected needs to be transparently shown. But that’s not the same as an analysis.

          Second, you seem to be pining for an annual analysis, instead of the stepwise analysis that Mann used. There’s nothing wrong with doing annual analysis of course, but the result will be different in many many ways. And it is not what Mann used.

          Even with an annual “grid” everything I said above still holds. Make an hypothesis, define data collection parameters, analysis methods, etc, THEN collect/select the data according to the experimental design. DO NOT change the experiment based on the data itself.

          And yes, it’s still true that there should not be One Tree To Rule Them All. (That’s a reference to a long-ago topic here 😉 )

  56. Solomon Green
    Posted Mar 5, 2014 at 12:05 PM | Permalink

    I found myself in the unusual position of agreeing with everything that Steven Mosher has posted on this website until I came to this.

    “The investigators are not xxxx xxxxxx. They will testify clearly and accurately.
    Shading the truth under oath is not a career advancing move.”

    From what I have read I am not convinced that the investigators would have testified clearly or accurately even under oath.

    Perhaps Mr. Mosher has too high an opinion of members of the non-skeptical climate community.

  57. pottereaton
    Posted Mar 5, 2014 at 9:07 PM | Permalink

    Steyn today

  58. MrPete
    Posted Mar 7, 2014 at 6:43 AM | Permalink

    I just realized more clearly that the underlying basis of this case has nothing to do with research results.

    Here’s what I just wrote on Nick’s blog, in response to RobH’s claim that Mann’s work was consistent with the work that has followed:

    Are you claiming that Mann’s process back then, the WAY he did his work:
    * the way he defined his experiments
    * the way he selected data
    * the way he encouraged/discouraged scientific replication in what he published
    * the way he demonstrated care (or not) to not fool himself or others in his methodology

    was consistent with that of others who followed?

    That’s factually a difficult case to make. The alarmist climate science community was undeniably opposed to the questions being asked by McIntyre. They circled their wagons. They lost under FOIA. And now they are grudgingly becoming more transparent. That’s a big improvement and is slowwwwly gaining ground to become compatible with the rest of science. But it’s hardly consistent with where Mann and the rest began.

    Just read up on best practices in science, such as Reproducible Research (that’s a specific term), to see what I mean. This is not a new thing — it goes back to the 1990s in detail, and much earlier in concept.

    These are topics that have been avoided but will come out in court if any kind of effective defense is mounted.

  59. AJ
    Posted Mar 7, 2014 at 4:59 PM | Permalink

    Hey Steve… talking about hockey sticks and dendro’s, you might find this interesting. From today’s paper in Halifax:

    180-year-old hockey stick up for sale

    The owner of a sports artifact purported to be the world’s oldest hockey stick is putting the item up for sale.

    Mark Presley of Berwick bought the nearly 180-year-old stick in 2008 from a retired barber in North Sydney, who had displayed it in his shop for over 30 years. Presley paid $1,000 for it but will be looking for much more than that when the 10-day selling window closes next week.

    The monetary value of the stick is unknown as Presley has not had it formally appraised. The bidding opened Wednesday on eBay and there was already an early bid of US$10,000.

    The amount was short of the reserve price, which is the minimum amount a seller will accept. Presley wouldn’t reveal what he thinks the stick might be worth or the reserve amount.

    “I actually think that the value I have affixed to it — in other words the number that I need to get it to feel comfortable about letting it go — is actually quite fair given the significance of the object,” Presley said Thursday in a phone interview.

    A few years ago, researchers from Mount Allison University used tree-ring aging to help determine the stick’s approximate age. It’s believed the stick was made in the mid-to-late 1830s and originally owned by W.M. Moffatt of North Sydney.

    Presley posted the university project results on his website (www.themoffattstick.com) along with pictures of the artifact and details about its history.

    The stick, which is made of sugar maple and has the initials W.M. dug into the blade, is currently being stored in a vault, Presley said.

    http://thechronicleherald.ca/sports/1191793-berwick-man-puts-180-year-old-hockey-stick-up-for-sale

  60. Skiphil
    Posted Mar 13, 2014 at 11:58 PM | Permalink

    This is not in a climate science discipline, but it is remarkable that a top US official charged with overseeing investigations of research misconduct has resigned in disgust with a strong blast of criticism of the self-serving bureaucrats who made his work nearly impossible:

    Top US official responsible for investigating research misconduct resigns, blasting dysfunctional federal bureaucracy and politicized environment

    Just imagine what people involved with the various “investigations” and research bureaucracies pertaining to climate science might have to say, if they are not too co-opted by their commitments to “The Cause”….

  61. Canman
    Posted Mar 20, 2014 at 1:19 PM | Permalink

    One issue that I haven’t seen discussed in these threads is whether Mann is cherry picking his investigations by not including (or even mentioning) the NAS panel or Wegman report. Could there be any legal ramifications?

  62. pottereaton
    Posted Mar 25, 2014 at 12:09 PM | Permalink

    Steyn has hired three fearsome lawyers: Daniel J. Kornstein, Mark Platt, and Michael J. Songer.

    http://www.steynonline.com/6201/what-kind-of-fool-am-i

    I thought this was particularly humorous: “At any rate, joining Messrs Kornstein and Platt will be Michael J Songer, co-chair of the Litigation Group at Crowell & Moring in Washington, DC. Mike won a big $919.9 million payout for DuPont over a trade-secrets theft case involving Kevlar, which I was planning to wear to court anyway.”

  63. Steve McIntyre
    Posted Mar 27, 2014 at 12:48 PM | Permalink

    BTW I had a very pleasant meeting with Mark Steyn when he was in Toronto watching the Ezra Levant trial. Lots to talk about.

    • pottereaton
      Posted Mar 27, 2014 at 9:48 PM | Permalink

      Two formidable warriors who might never have crossed paths if it were not for one culturally defining dispute over scientific truth and people’s right to express their opinion about it.

      A transcript of that conversation might qualify as litrachur, and I’d love to read it, but at this point, I suppose, the less said the better.

    • Posted Mar 28, 2014 at 8:48 AM | Permalink

      Kornstein and McIntyre has a good ring to it.

  64. pottereaton
    Posted Apr 2, 2014 at 11:31 AM | Permalink

    Below, Steyn’s response to Mann’s motion to dismiss his counterclaims (about a third of the way down). The issues are becoming clearer:

    http://www.steynonline.com/6226/union-of-settled-scientists-threatens-to-strike

  65. pottereaton
    Posted Apr 28, 2014 at 7:16 PM | Permalink

    Good, lengthy, thoughtful essay on Mann v. Steyn et al in the National Review:

    http://www.nationalreview.com/article/376574/climate-inquisitor-charles-c-w-cooke

  66. pottereaton
    Posted May 8, 2014 at 1:31 PM | Permalink

    The latest from Steyn:

    Mann Misrepresents NOAA OIG

    in which he says:

    My lawyers and I are using this period to interview potential witnesses hither and yon, and prepare for our deposition of Mann and our requests for his documents.

    I wonder if our intrepid host is being interviewed as a witness . . .

  67. pottereaton
    Posted May 25, 2014 at 10:12 PM | Permalink

    Forgive my linking an off-topic subject, but in observance of Memorial Day Mark Steyn has a fascinating column about the origins of The Battle Hymn of the Republic.

4 Trackbacks

  1. […] the comments are very entertaining. The posts are here, here, here, here, here, and here, with another post added […]

  2. By Saturday Steyn | YouViewed/Editorial on Mar 1, 2014 at 11:41 AM

    […] false assertions, the invaluable Steve McIntyre now moves on to the US inquiries, starting with the National Oceanic and Atmospheric Administration Office of the Inspector General’s report. The NOAA comes under the Department of Commerce, and, in the “Dr Mann is Exonerated” […]

  3. […] https://climateaudit.org/2014/02/27/mann-misrepresents-noaa-oig/#more-18950 […]

  4. […] impossible and indefensible with great tenacity. Steve’s patience with him is exemplary and this thread, in particular, prompted the […]