When you go through the NAS Panel report closely, it's amazing how many of our views were adopted – on principal components, statistical skill, statistical methods, bristlecones. I started out trying to make a post showing our views side-by-side with those of our opponents and those of the NAS Panel, but the post quickly got out of hand in size. I'll start with biased principal components, not because it's the most important issue (I don't think that it is), but because it's received a lot of publicity: the effect itself surprised people and it's technically interesting.
For the purposes of this post, I want to distinguish between the existence of the bias as a mathematical effect and whether the effect "matters". I think that it "matters", but the impact is a different issue from the existence of the effect itself. (I might add here that the choice between correlation and covariance PCs is irrelevant to the bias - that correlation and covariance PCs yield different results in the North American network is just an empirical oddity because of low variance in the bristlecones - but I'll return to that when I discuss the NAS Panel's view on whether the biased PC method "matters".)
In our submission to the NAS Panel, we asked them to provide an opinion on the existence of bias in the Mannian PC methodology as follows:
One fairly practical issue on which the Committee's opinion would be useful is the existence of this bias. Von Storch and Zorita and Huybers both acknowledge the existence of the bias, although they do not agree with us on its ultimate effect on a temperature reconstruction. Mann et al. have denied the very existence of the bias. The two issues are obviously separable and we would welcome a specific opinion.
As you will see below, they provided a specific opinion (although not necessarily because we requested it) and the opinion endorsed our position.
In our GRL article, we’d pointed out that the MBH principal components method was severely biased and over-weighted bristlecones, a finding previously publicized by Richard Muller in an article for the MIT Technology Review online. We expressed the bias as follows:
[MBH] carried out an unusual data transformation which strongly affects the resulting PCs. Their method, when tested on persistent red noise, nearly always produces a hockey stick shaped first principal component (PC1) and overstates the first eigenvalue.
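To make this claim concrete, here is a minimal sketch of the effect (my illustration, not our GRL code; the AR(1) coefficient and the network dimensions are arbitrary choices) comparing the share of variance assigned to the first eigenvalue under conventional full-period centering and under MBH-style short-centering of the same red-noise network:

```python
# Illustrative sketch (not the GRL code): first-eigenvalue share of variance
# under full centering vs. MBH-style short-centering of the same red noise.
import numpy as np

rng = np.random.default_rng(0)

def red_noise(n, phi=0.9):
    """One persistent AR(1) red-noise series of length n."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

X = np.column_stack([red_noise(600) for _ in range(50)])  # no common signal

def pc1_share(M):
    """Fraction of total variance captured by the first eigenvalue."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

full_centered = X - X.mean(axis=0)           # conventional centering
short_centered = X - X[-100:].mean(axis=0)   # MBH-style short-centering

print("full centering:  PC1 share =", round(pc1_share(full_centered), 3))
print("short centering: PC1 share =", round(pc1_share(short_centered), 3))
```

On typical runs, short-centering hands the first eigenvalue a noticeably larger share of the variance, which is the overstatement referred to above.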
This finding provoked a furious internet response. This was pre-climateaudit; from the excerpts below, you'll get a sense of the kind of opinion-forming that we were facing and of how quickly one could lose credibility when confronted with determined internet criticism.
One of the first responses to this claim came from William Connolley, a realclimate coauthor who has been very active on the internet, including as a Wikipedia editor, in attempting to discredit us.
But (having read their paper) I now think I understand what they think the problem is (aside: they complain about data issues with some series but I think this is beside the point: the main point they are talking about is below), and I think that they are probably wrong, …. Perhaps someone would care to go through and check this. If I haven’t made a mistake then I think M&M’s complaints are unjustified and Nature correct to reject their article.
James Annan quickly agreed with Connolley:
Having had a quick glance at this and their papers, I think I agree with you. In fact it appears that we can add not knowing the difference between multiplication and division, to the already impressive list of blunders that M&M have made. They even seem to talk about adding the mean to the time series rather than subtracting it too. I might check this more carefully over the next few days if no-one else beats me to it.
I corresponded with Annan at the time, sending him code and asking him to check it and to withdraw his remark if it didn't hold up after checking. Needless to say, I didn't hear back from him.
Tim Lambert weighed in as well:

Jury is still out, but it does not look promising for McKitrick
Brad DeLong, also cited by Lambert, added to the voices of ridicule:
Tim Lambert and William Connolley think that my colleague Richard Muller has been snookered by McKitrick and McIntyre, who believe that Mann et al.’s data normalizations artificially enhance the influence of series that show an uptrend since 1900. But Connolley argues–I think correctly–that McKitrick and McIntyre are simply confused: the normalizations diminish the influence of series that show a recent uptrend.
Connolley re-visited the matter in more detail a few days later:
Err… well there you have it. If I’m right, M&M are wrong, at least for this part of their argument. So what is M&M’s mistake? M&M think that they are right, which is unsurprising. To be fully convincing, one would have to find the error in their work. They claim to have done extensive monte-carlo simulations of blah wibble (thanks to Ian R). But they haven’t put up their code, so we can’t go through it (and remember folks, McK confused degrees and radians before! (Thanks Tim L)). And anyway who has the patience? Update: they *have* put up their code (thanks McI).
However, these criticisms were muted compared to Mann's post here at realclimate in January 2005.
Here, however, we choose to focus on some curious additional related assertions made by MM holding that (1) use of non-centered PCA (as by MBH98) is somehow not statistically valid, and (2) that "Hockey Stick" patterns arise naturally from application of non-centered PCA to purely random "red noise". Both claims, which are of course false, were made in a comment on MBH98 by MM that was rejected by Nature, and subsequently parroted by astronomer Richard Muller in a non peer-reviewed setting–see e.g. this nice discussion by science journalist David Appell of Muller's uncritical repetition of these false claims. These claims were discredited in the response provided by Mann and coworkers to the Nature editor and reviewers, which presumably formed the primary basis for the rejection of the MM comment. …
So the facts deal a death blow to yet another false claim by McIntyre and McKitrick. Despite the plain facts, as laid out here, however, their false claims have nonetheless been parroted in op-ed pieces of dubious origin and other non-peer-reviewed venues. One of the primary missions of "RealClimate" is indeed to expose the false, disingenous, and misleading claims often found in such venues.
In another contemporary realclimate article, they accused Muller of not just "parroting" our findings but "scurrilously parroting" those findings. I thought that "The Scurrilous Parrot" would be a good name for a sailboat or a pub.
The blog comments on the Mann posting were interesting as well. Dano weighed in right away, congratulating Mann for delivering this "death blow". Here's another comment and reply:
All of this technical, statistical jargon is over my head, but I get the impression that the data on which the climate reconstruction is based is so sparse and uncertain that you can’t draw any firm conclusions supporting either MM’s or Mann’s side of the debate.
[Response: Even without technical training or a statistical background, you should have an adequate basis for discerning which of the two parties is likely wrong here. Only one of the parties involved has (1) had their claims fail scientific peer-review, (2) produced a reconstruction that is completely at odds with all other existing estimates (note that there is no sign of the anomalous 15th century warmth claimed by MM in any of the roughly dozen other model and proxy-based estimates shown here), and (3) been established to have made egregious elementary errors in other published work that render the work thoroughly invalid. These observations would seem quite telling. -mike]
The "egregious elementary error" linked to another realclimate post in which Mann said (using MM04 here for Michaels and McKitrick – but with the obvious implication that I’d been involved in the radian-degree error. They said there, linking to Tim Lambert:
Perhaps even more troubling, it has been noted elsewhere that MM04 confused "degrees" and "radians" in their calculations of areal weighting factors, rendering all of their calculations incorrect, and their conclusions presumably entirely invalid.
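As an aside for readers unfamiliar with the slip being referred to: grid-cell areal weights are proportional to the cosine of latitude, and standard math libraries expect radians, not degrees. A minimal illustration (mine, purely for exposition, not the MM04 calculation itself):

```python
# Illustration only (not the MM04 calculation): how feeding degrees to a
# radians-based cosine corrupts a cos(latitude) areal weighting factor.
import math

lat_deg = 60.0  # a grid cell at 60 degrees north

correct = math.cos(math.radians(lat_deg))  # 0.500: the proper areal weight
wrong = math.cos(lat_deg)                  # about -0.952: degrees passed
                                           # to a radians-based cosine

print("correct weight:", round(correct, 3))
print("wrong weight:  ", round(wrong, 3))
```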
So what did the NAS Panel say about the bias in Mannian principal components?
McIntyre and McKitrick (2003) demonstrated that under some conditions, the leading principal component can exhibit a spurious trendlike appearance, which could then lead to a spurious trend in the proxy-based reconstruction. To see how this can happen, suppose that instead of proxy climate data, one simply used a random sample of autocorrelated time series that did not contain a coherent signal. If these simulated proxies are standardized as anomalies with respect to a calibration period and used to form principal components, the first component tends to exhibit a trend, even though the proxies themselves have no common trend. Essentially, the first component tends to capture those proxies that, by chance, show different values between the calibration period and the remainder of the data. If this component is used by itself or in conjunction with a small number of unaffected components to perform reconstruction, the resulting temperature reconstruction may exhibit a trend, even though the individual proxies do not. Figure 9-2 shows the result of a simple simulation along the lines of McIntyre and McKitrick (2003) (the computer code appears in Appendix B). In each simulation, 50 autocorrelated time series of length 600 were constructed, with no coherent signal. Each was centered at the mean of its last 100 values, and the first principal component was found. The figure shows the first components from five such simulations overlaid. Principal components have an arbitrary sign, which was chosen here to make the last 100 values higher on average than the remainder. (pp. 86-87)
NAS Figure 9-2
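For anyone who wants to reproduce the effect, here is a minimal sketch along the lines of the Panel's simulation (the authoritative R code is in Appendix B of the report; the AR(1) coefficient of 0.9 below is my illustrative choice, not the Panel's):

```python
# Sketch of the NAS-style simulation: 50 autocorrelated series of length 600
# with no coherent signal, each centered at the mean of its last 100 values.
# The AR(1) coefficient is illustrative; the Panel's R code is in Appendix B.
import numpy as np

rng = np.random.default_rng(42)

def ar1(n, phi=0.9):
    """One autocorrelated (red noise) series of length n."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n_proxies, length, calib = 50, 600, 100
X = np.column_stack([ar1(length) for _ in range(n_proxies)])

# Center each series at the mean of its last 100 values, per the MBH
# short-centering convention described by the Panel
Xc = X - X[-calib:, :].mean(axis=0)

# First principal component of the short-centered network
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
pc1 = s[0] * U[:, 0]

# A PC's sign is arbitrary; as the Panel did, choose it so the last 100
# values are higher on average than the remainder
if pc1[-calib:].mean() < pc1[:-calib].mean():
    pc1 = -pc1

# pc1 typically shows a hockey-stick shape despite no common trend
print("pre-calibration mean:", round(float(pc1[:-calib].mean()), 2))
print("calibration mean:    ", round(float(pc1[-calib:].mean()), 2))
```

Running this a few times and overlaying the resulting PC1s gives a picture much like the Panel's Figure 9-2.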
We'll forgive them for the incorrect citation – the point was made in our 2005 article. The earlier article noted "incorrect principal components" calculations in MBH, but was unable to diagnose the exact reason. The NAS Panel didn't just devote a couple of sentences to the matter. They included a diagram showing simulations, not dissimilar to ones that we'd previously shown, and even included source code for the simulations. (Dare we take some reflected credit for the NAS Panel including source code in a report? I wonder if any other NAS report has ever done this.) I guess someone on the panel was intrigued by the effect.
Again, whether this "matters" to MBH98 is a different issue, which I'll analyse separately. But I think that it's fair to say that, on the specific question of whether the bias in the MBH methodology exists, the NAS Panel has found against Connolley, Lambert, DeLong, Annan and Mann, and in favor of us. I don't believe that committees can determine "truth"; they can, however, advise on a "consensus", and the "consensus" on this particular issue is now against Mann and his associates.
It's amazing to look back and see the venom of the attacks by Mann and his sympathizers, and exactly why I had to take up blogging (at John A's timely suggestion) to prevent my reputation from being totally destroyed. Unfortunately, most climate scientists appear to have acquired their views on these issues through realclimate, though even they may be vaguely aware that there are some issues.
PS: I'm not holding my breath for Mann to issue a Corrigendum on the above post (or for Connolley or DeLong, for that matter).
More on other NAS Panel findings in the next few days.