After a year of stonewalling, Mann has published an update at his “grey” Supplementary Information (not yet reported at PNAS) in which he acknowledges an “error” in his figure S8a as follows:
UPDATE 4 November 2009: Another error was found in the corrected Supplementary figure S8a from December 2008: The previously posted version of the figure had an error due to incorrect application of the procedure described in the paper for updating the network in each century increment. In the newly corrected figure, we have added the result for NH CPS without both tree-rings *and* the 7 potential “problem series.” Each of the various alternative versions where these sub-networks of proxy data have been excluded fall almost entirely within the uncertainties of the full reconstruction for at least the past 1100 years, while larger discrepancies are observed further back for the reconstruction without either tree-ring data or the 7 series in question, owing to the extreme sparseness of the resulting sub-network. The new figure can be downloaded here (PDF)
This continues the discussion from previous posts. For technical discussion of the emulation of CPS, see, for example, http://www.climateaudit.org/?p=4244, http://www.climateaudit.org/?p=4274, http://www.climateaudit.org/?p=4494 and http://www.climateaudit.org/?p=4501.
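For readers who haven't followed the earlier threads, the CPS recipe itself is short: standardize each proxy over a calibration window, average into a composite, then rescale the composite to the mean and variance of the instrumental target. Here is a minimal sketch in Python with synthetic data (my own illustration – the CA emulation scripts are in R, and the function and variable names below are mine, not Mann's):

```python
import numpy as np

def cps(proxies, instr, calib):
    """Composite-plus-scale sketch: standardize each proxy over the
    calibration window, average into a composite, then rescale the
    composite to the mean and variance of the instrumental target."""
    # standardize each proxy on the calibration period
    mu = proxies[calib].mean(axis=0)
    sd = proxies[calib].std(axis=0, ddof=1)
    z = (proxies - mu) / sd
    composite = z.mean(axis=1)
    # rescale the composite to instrumental mean/variance over calibration
    c_mu = composite[calib].mean()
    c_sd = composite[calib].std(ddof=1)
    return (composite - c_mu) / c_sd * instr[calib].std(ddof=1) + instr[calib].mean()

# synthetic data purely for illustration
rng = np.random.default_rng(0)
years = np.arange(800, 2000)
proxies = rng.normal(size=(len(years), 18))     # 18 synthetic "proxies"
instr = rng.normal(0.0, 0.25, size=len(years))  # synthetic instrumental target
calib = years >= 1850                           # calibration window
recon = cps(proxies, instr, calib)
```

The only free choices are the calibration window and the standardization convention; Mannian CPS adds the ex post correlation screen discussed below.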
The latest correction deals only partly with the most egregious issue – upside-down Tiljander – and even this isn't dealt with clearly. I've tidied my Mannian CPS script a little (in particular, adopting some of Ryan O's documentation style, which is well worth paying attention to) and will show the net impact of no-Tiljander on the AD800 network illustrated in Mann 2008. (Pretty much the same network extends to their AD1100 step.) I'll also illustrate the effect of Mannian ex post screening on the same network. Previous illustrations of the cherrypicking impact of ex post picking have been done in a red noise context (e.g. by Jeff Id, David Stockwell and Lubos), and it's interesting to see it in the context of Mannian "proxies", which are surely hard to distinguish from red noise.
No Tiljander No Dendro
The modern-medieval differential in the Mann CPS AD800/AD1000 networks (AD800 illustrated below), even with ex post correlation screening (CPS = "Cherry Pick and Scale"), swings by up to 0.7 deg C relative to the Mann base case.
Figure 1. Mann NH CPS AD800 Network. Left: black – base case; cyan – no Tiljander, no dendro. Right: difference between the Mann version and the no-Tilj no-dendro version.
No Tiljander No Bristle
While Mann and the Team petulantly show variations with "no dendro", the active ingredient in this particular sensitivity is the Graybill bristlecones, which the NAS Panel said should be "avoided". In our PNAS Comment, we noted that, although Mann 2008 purported to follow NAS panel recommendations, it flouted the recommendation to "avoid" bristlecones. Mann's response on this was no better than his handling of upside-down Tiljander. Mann said:
They [MM 2009] ignore subsequent findings (4) concerning "strip bark" records
Ref (4) here is, of course, the absurd Wahl and Ammann 2007 (see Bishop Hill's Caspar and the Jesus Paper), which Wegman famously summarized as having "no statistical integrity". It was available as a preprint, considered and cited by the NAS Panel, and the version finally published in 2007 contained no "subsequent findings" that had not been considered by the NAS Panel. That NAS Panel chairman North was a reviewer of this article shows the limited due diligence of typical peer review processes. For completeness, the graphics below show the same results as above, but for "no-bristle" instead of "no-dendro". Again, the modern-medieval differential for the relevant AD800 to AD1000 networks (AD800 shown below) – even with Mannian ex post correlation picking – is about 0.6-0.7 deg C.
No Ex Post Screening
The pernicious effect of ex post screening is simple and well understood in the blogosphere – the same point has been made more or less independently by me, David Stockwell, Jeff Id, Lubos Motl and Lucia, with Jeff Id being particularly insistent on the issue recently. Prior discussions have mostly used red noise networks. The Mann 2008 network is an interesting sample for this sort of thing simply because the proxies are so horribly inconsistent that, like the MBH network, it is a little shop of horrors, producing all sorts of interesting statistical results that one doesn't usually get to see in actual scientific literature.
Here is the AD800 network (no-Tilj and no-bristle) using Mannian CPS but without ex post screening. This version uses 34 "proxies", as compared to the 18 "proxies" surviving Mannian ex post picking. The statistical issue – totally ignored by Mann and the Team – is whether their results have any significance relative to cherry picking from red noise series.
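The mechanism is easy to reproduce from scratch. A sketch, assuming nothing but AR(1) "red noise" pseudo-proxies and a linear "instrumental" trend – the parameters (200 series, phi = 0.4, an r > 0.2 screen) are illustrative choices of mine, not Mann's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_series = 1000, 200
phi = 0.4  # AR(1) coefficient for the "red" noise (illustrative)

# generate red-noise pseudo-proxies containing no climate signal at all
noise = rng.normal(size=(n_years, n_series))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[t] = phi * proxies[t - 1] + noise[t]

# "instrumental" period: the last 100 years, with an upward trend
calib = np.arange(n_years - 100, n_years)
target = np.linspace(0.0, 1.0, 100)

# ex post screening: retain only series correlated with the trend
r = np.array([np.corrcoef(proxies[calib, j], target)[0, 1]
              for j in range(n_series)])
passed = r > 0.2
screened = proxies[:, passed].mean(axis=1)
unscreened = proxies.mean(axis=1)
```

Every series that survives the screen went up during the calibration period by construction, so the screened composite acquires a modern uptick that the unscreened composite lacks – even though no series contains any signal whatsoever.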
This issue was raised in our short comment. Mann’s “response”:
McIntyre and McKitrick's claim that the common procedure (6) of screening proxy data (used in some of our reconstructions) generates "hockey sticks" is unsupported in peer-reviewed literature and reflects an unfamiliarity with the concept of screening regression/validation. As clearly explained in ref. 2, proxies incorporating instrumental information were eliminated for validation and thus did not enter into skill assessment.
The observation in our PNAS Comment obviously didn’t reflect “unfamiliarity with the concept of screening regression/validation”. It is something that we understand thoroughly. The point is simple and easy to demonstrate. We cited David Stockwell’s note on the matter from AIG News – smiling a little – as a reference. I wonder whether Mann even bothered reading it. I’m sure that Jeff Id will be at no loss for words for this particular Mannian prevarication.
And as to Mann's statement that "proxies incorporating instrumental information were eliminated for validation and thus did not enter into skill assessment", it can readily be seen that the Luterbacher proxies were included in his "skill assessment", so this retort of Mann's is also untrue.
The idea that Mannian-style operations should have to out-perform corresponding red noise networks is something that is "supported" in the PeerReviewedLitchurchur – it is a theme of MM2005a, MM2005b and our Reply to Huybers, which contained useful additional analysis (a point conceded even by Ammann and Wahl).
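The red-noise benchmark amounts to a Monte Carlo null distribution: apply the identical screen-and-composite procedure to networks of pure red noise and ask what calibration statistics emerge by chance. A sketch under my own illustrative parameter choices (network size, AR(1) coefficient, screening threshold – none of these are Mann's):

```python
import numpy as np

def screened_calib_r2(proxies, target, calib, r_min=0.2):
    """Apply correlation screening, composite the survivors, and
    return the composite's calibration-period r^2 (0 if none pass)."""
    r = np.array([np.corrcoef(p[calib], target)[0, 1] for p in proxies.T])
    keep = r > r_min
    if not keep.any():
        return 0.0
    comp = proxies[:, keep].mean(axis=1)
    return np.corrcoef(comp[calib], target)[0, 1] ** 2

def red_noise(rng, n_years, n_series, phi=0.4):
    """AR(1) pseudo-proxy network containing no climate signal."""
    x = np.zeros((n_years, n_series))
    e = rng.normal(size=(n_years, n_series))
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + e[t]
    return x

rng = np.random.default_rng(2)
n_years, calib = 500, np.arange(400, 500)
target = np.linspace(0.0, 1.0, 100)

# null distribution: calibration r^2 achieved by screening pure red noise
null = [screened_calib_r2(red_noise(rng, n_years, 40), target, calib)
        for _ in range(200)]
benchmark = np.percentile(null, 95)
```

A screened real-proxy composite whose calibration r² does not beat the upper tail of this null distribution has demonstrated nothing.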
In the first comment below, I’ve attached a script generating the above results. The script http://data.climateaudit.org/scripts/mann.2008/utilities.txt contains an emulation function and retrieves relevant collations.
PS. Oh yes, here's the new "corrected" figure. How does it tie in with the figure that I showed above? It always takes a long time to figure out what is in any of these muddy graphics. No digital results were provided, nor was the code for the new results. It's not that hard to provide working code – I've done it for my analysis here, and this blog isn't even in the PeerReviewedLitchurchur. Shouldn't the PeerReviewedLitchurchur outdo mere blogs for transparency and documentation? I presume that Mann's diagram isn't a diagram of the relevant medieval proxies, but splices everything together – including the instrumental Luterbacher "proxies". For sensitivity analysis, the relevant comparison is the actual medieval network.