The Team has snarled back at Wegman here. They’ve posted up an August 16, 2006 letter from David Ritson to Waxman, accusing Wegman of not responding to a request for information that had been outstanding for almost 3 weeks (?!?).
Yes, you read it right. Jeez, I’ve been waiting almost three years for data and the Team complains to Congress if they have to wait for 3 weeks.
Take a Ritalin, Dave.
I guess it’s time to re-submit a request to Mann for the actual stepwise results from MBH, how he calculated the confidence intervals, how he retained principal components… It’s all too ridiculous for words.
By the way, during this 3-week period, Wegman also had to testify at a 2nd House Energy and Commerce Committee session on July 27 (made necessary only because Mann apparently couldn’t get a babysitter on the 20th) and, if I recall correctly, was a major presenter at an American Statistical Association meeting and, I’ve been told, went to Europe. If it makes Ritson feel any better, I’ve sent some emails to Wegman during the past month and haven’t heard back from him either. I’m sure that he’ll catch up on his email and apologize for any delays.
The funnier thing is that Mann is now fighting tooth-and-nail against even admitting that his PC method is biased, based on Ritson, whose comment on our article was twice rejected by GRL (here and here). He’s going after Wegman, but, as they say over in Team-world, he’s not just fighting against Wegman, he’s fighting against an entire Team: it’s not just Wegman who has confirmed the bias in his PC algorithm, it’s the NAS Panel, von Storch and Zorita, and Huybers. Now, these latter authors don’t necessarily agree with us on the impact of the biased PC methodology on final reconstructions, but I’d have said that the bias itself was about as well-tested as anything in climate science.
Wegman started his testimony on July 27 as follows:
"The debate over Dr. Mann’s principal components methodology has been going on for nearly three years. When we got involved, there was no evidence that a single issue was resolved or even nearing resolution. Dr. Mann’s RealClimate.org website said that all of the Mr. McIntyre and Dr. McKitrick claims had been ‘discredited’. UCAR had issued a news release saying that all their claims were ‘unfounded’. Mr. McIntyre replied on the ClimateAudit.org website. The climate science community seemed unable to either refute McIntyre’s claims or accept them. The situation was ripe for a third-party review of the types that we and Dr. North’s NRC panel have done.
He stated in no uncertain terms that both their panel and the NAS panel had agreed on the decentering issue and that it should be "off the table".
“Where we have commonality, I believe our report and the [NAS] panel essentially agree….We believe that our discussion together with the discussion from the NRC report should take the ‘centering’ issue off the table. [Mann's] decentred methodology is simply incorrect mathematics… I am baffled by the claim that the incorrect method doesn’t matter because the answer is correct anyway. Method Wrong + Answer Correct = Bad Science.”
The NAS panel itself had done essentially identical simulations to replicate the biased PC calculation, which they reported as follows, together with their Figure 9-2.
McIntyre and McKitrick (2003) demonstrated that under some conditions, the leading principal component can exhibit a spurious trendlike appearance, which could then lead to a spurious trend in the proxy-based reconstruction. To see how this can happen, suppose that instead of proxy climate data, one simply used a random sample of autocorrelated time series that did not contain a coherent signal. If these simulated proxies are standardized as anomalies with respect to a calibration period and used to form principal components, the first component tends to exhibit a trend, even though the proxies themselves have no common trend. Essentially, the first component tends to capture those proxies that, by chance, show different values between the calibration period and the remainder of the data. If this component is used by itself or in conjunction with a small number of unaffected components to perform reconstruction, the resulting temperature reconstruction may exhibit a trend, even though the individual proxies do not. Figure 9-2 shows the result of a simple simulation along the lines of McIntyre and McKitrick (2003) (the computer code appears in Appendix B). In each simulation, 50 autocorrelated time series of length 600 were constructed, with no coherent signal. Each was centered at the mean of its last 100 values, and the first principal component was found. The figure shows the first components from five such simulations overlaid. Principal components have an arbitrary sign, which was chosen here to make the last 100 values higher on average than the remainder.
FIGURE 9-2 Five simulated principal components and the corresponding population eigenvector. See text for details.
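For readers who want to poke at this themselves, here is a minimal sketch of the Figure 9-2 experiment in Python (the NAS panel’s own code is in their Appendix B, per the report). The AR(1) coefficient and helper names are my own assumptions; the report says only that the series were “autocorrelated”.

```python
# Sketch of the NAS Figure 9-2 experiment: 50 signal-free autocorrelated
# series, length 600, short-centered on the last 100 values, PC1 extracted.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, calib = 50, 600, 100  # last 100 years = "calibration" period
rho = 0.5                                # AR(1) persistence; my assumption

def ar1(n, rho, rng):
    """One AR(1) series with no trend and no common signal."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

X = np.column_stack([ar1(n_years, rho, rng) for _ in range(n_series)])

# Mannian "short centering": subtract the mean of the calibration period
# (the last 100 values) rather than the full-series mean.
Xc = X - X[-calib:].mean(axis=0)

# First principal component via SVD of the short-centered matrix.
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = u[:, 0] * s[0]

# PCs have arbitrary sign; follow the NAS convention of making the
# calibration-period values higher on average than the remainder.
if pc1[-calib:].mean() < pc1[:-calib].mean():
    pc1 = -pc1
# pc1 now typically shows a hockey-stick step despite the pure-noise input.
```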
Now Mann is contesting this finding to the House Committee – criticizing Wegman but not the NAS Panel who made an identical finding. Here’s what he said:
There is another element of this question which raises a deeply troubling matter with regard to Dr. Wegman’s failure to subject his work to peer review, and Wegman’s apparent refusal to let other scientists try to replicate his work. Professor David Ritson, Emeritus Professor of Physics, Stanford University, has found error in the way that Dr. Wegman models the “persistence” of climate proxy data. Interestingly, this is the same error Steven McIntyre committed in his work, which was recently refuted in the paper by Wahl and Ammann, which was in turn vetted by Dr. Douglass Nychka, an eminent statistician. Dr. Ritson has determined that the calculations that underlie the conclusions that Dr. Wegman advanced in his report are likely flawed. Although Dr. Ritson has been unable to reproduce, even qualitatively, the results claimed by Dr. Wegman, he has been able to isolate the likely source of Wegman’s errors. What is so troubling is that Dr. Wegman and his co-authors have ignored repeated collegial inquiries by Dr. Ritson and apparently are refusing to provide any basic details about the calculations for the report (see Attachments 3 and 4 to this Response). It would appear that Dr. Wegman has completely failed to live up to the very standards he has publicly demanded of others.
Moreover, the errors that Dr. Ritson has identified in Dr. Wegman’s calculations appear so basic that they would almost certainly have been detected in a standard peer review. In other words, had Dr. Wegman’s report been properly peer-reviewed in a rigorous process where peer-reviewers were selected anonymously, it likely would not have seen the light of day. Dr. Wegman has thus unwittingly provided us with a prime example of the importance of the peer review process as a basic first step in quality control.
This is intriguing. Ritson has supposedly found errors that are "so basic that they would almost certainly have been detected in a standard peer review". The error supposedly lies in the way that we modeled the "persistence" of climate proxy data. Well, whatever the error was, it’s not just Wegman who couldn’t spot it; as I mentioned above, neither could the NAS Panel, the NAS panel peer reviewers, von Storch and Zorita, Huybers and many others. So what actually is this mysterious error?
Mann doesn’t actually say, but he says that it is the "same error" that was refuted by Wahl and Ammann, "vetted by Douglas Nychka, the eminent statistician". Presumably that is the same Douglas Nychka who served on the NAS Panel despite the conflict. Excuse me, but can anyone point me to the section in the NAS Panel report where Nychka mentions the mysterious "error"? Whatever the error is, Nychka in his capacity as NAS panelist did not see fit to mention it in the NAS report. I’ve read Wahl and Ammann about as closely as anyone else, and while they make lots of misrepresentations and accusations, about the only thing that they don’t accuse us of is making an error on "persistence". In fact, they say the exact opposite:
The method presented in MM05a generates apparently realistic pseudo tree ring series with autocorrelation (AC) structures like those of the original MBH proxy data (focusing on the 1400-onward set of proxy tree ring data), using red noise series generated by employing the original proxies’ complete AC structure.
They go on to make other criticisms of our methodology (which I reject), but they notably did not criticize the persistence properties of our pseudoproxies. (The effect applies with simple AR1 structures as well, as confirmed by the NAS panel and Wegman; see the sketch below.)
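For concreteness, here is one textbook way to simulate noise matching a prescribed autocorrelation function. This is a sketch of the general technique, not the actual MM05a code; a Cholesky factor of the Toeplitz autocorrelation matrix is an equivalent (if slower) standard construction, and the helper names are mine.

```python
# Simulating Gaussian noise with a prescribed autocorrelation function,
# via a Cholesky factor of the Toeplitz autocorrelation matrix.
import numpy as np
from scipy.linalg import toeplitz, cholesky

def acf(x, max_lag):
    """Biased sample autocorrelation of x out to max_lag (positive semi-definite)."""
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c[:max_lag + 1] / c[0]

def rednoise_from_acf(r, n, rng):
    """One length-n Gaussian series whose autocorrelation matches r.

    A tiny diagonal ridge guards against numerical trouble with
    empirical ACFs that are only just positive semi-definite.
    """
    R = toeplitz(r[:n]) + 1e-8 * np.eye(n)
    L = cholesky(R, lower=True)
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(0)
# A synthetic "proxy" with persistence, standing in for a tree ring series.
proxy = 0.05 * np.cumsum(rng.standard_normal(600)) + rng.standard_normal(600)
r = acf(proxy, max_lag=599)              # the proxy's "complete AC structure"
pseudo = rednoise_from_acf(r, 600, rng)  # one pseudoproxy with matching persistence
```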
Now you’d think that this "basic error" about persistence, if obvious to any peer reviewer, would have been mentioned in Ritson’s own submission to GRL (which was rejected twice). But again, while Ritson makes many criticisms of us in his article, he doesn’t mention anything about persistence. Since Ritson has made an issue of this, I’ve now posted up Ritson’s original submission to GRL together with our Reply (which I had previously posted up). I discussed this previously here and here. If you look at Ritson’s GRL submission, you will be unable to locate any mention of the "basic error" about persistence that Mann is now frothing about.
Let’s now turn to Ritson’s letters to Wegman posted at Mann’s website here. His first letter was copied to Mann, and his second to both Mann and Schmidt. Ritson has also been a recent realclimate contributor and a coauthor with Wahl and Ammann.
Here’s his first claim:
Any of my colleagues would have routinely checked their results to see if their derived PC1 (etc) derived from a systematic signal or from random noise. For example, for a 70-member population, all that is required is to use the extracted PC1 vector from the 70 members, and apply it to each member to project out its relative sign (and amplitude). For signal dominated results one sign will predominate and for noise dominated results both signs will be roughly equally present. Needless to say, when, a couple of years ago, I checked the M&M work, I did just that.
First of all, I agree that the distribution of eigenvector signs is different between a signal and noise. Indeed, I’d even say that the distribution of eigenvector signs might well be a test for the existence of a signal (a toy version is sketched below). But that’s neither here nor there. The Mannian method doesn’t use that information – indeed a criticism of principal component methods is that information on the orientation of the series – which is presumably known if the series is a "temperature proxy" – is not used. But the issue with the Mannian method is that it is biased towards selecting HS-shaped series. We showed that the Mannian method promoted bristlecones into the PC1, making them seem like the "dominant component of variance" when they weren’t. If there are some nonclimatic series – "bad apples" – the situation is intensified. See our Reply to VZ on this. So as to this being a gotcha, I don’t get it. What’s it got to do with the price of eggs?
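Here is a toy version of the sign check Ritson describes, with illustrative parameters of my own choosing rather than anything from his letter:

```python
# Project each series onto PC1 and look at the sign distribution of the
# loadings: one dominant sign suggests a common signal, a rough 50/50
# split suggests noise.
import numpy as np

rng = np.random.default_rng(1)
n_series, n_years = 70, 600

def pc1_loading_signs(X):
    """Signs of each series' loading on the first principal component."""
    Xc = X - X.mean(axis=0)                # conventional full centering
    u, s, vt = np.linalg.svd(Xc, full_matrices=False)
    return np.sign(vt[0])                  # one loading per series

# Case 1: a common signal plus noise: one sign should dominate.
signal = np.linspace(0, 2, n_years)
X_sig = signal[:, None] + rng.standard_normal((n_years, n_series))

# Case 2: pure noise: signs should split roughly evenly.
X_noise = rng.standard_normal((n_years, n_series))

for name, X in [("signal", X_sig), ("noise", X_noise)]:
    signs = pc1_loading_signs(X)
    print(name, "positive loadings:", int((signs > 0).sum()), "of", n_series)
# Typically close to 70/70 one sign for the signal case, near 35/70 for noise.
```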
The next point raised by Ritson is as follows:
To facilitate a reply I attach the Auto-Correlation Function used by the M&M to generate their persistent red noise simulations for their figures shown by you in your Section 4 (this was kindly provided me by M&M on Nov 6, 2004). The black values are the ones actually used by M&M. They derive directly from the seventy North American tree proxies, assuming the proxy values to be TREND-LESS noise. Surely you realized that the proxies combine the signal components on which is superimposed the noise? I find it hard to believe that you would take data with obvious trends, would then directly evaluate ACFs without removing the trends, and then finally assume you had obtained results for the proxy specific noise! You will notice that the M&M inputs purport to show strong persistence out to lag-times of 350 years or beyond.
Your report makes no mention of this quite improper M&M procedure used to obtain their ACFs. Neither do you provide any specification data for your own results that you contend confirm the M&M results.
I think that this relates to Ritson’s recent postings at realclimate about autocorrelation, which I discussed here and here. Ritson is now promoting the idea that autocorrelation in proxy series is really low and that we assumed autocorrelation that was too high in our simulations.
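For what it’s worth, the arithmetic behind Ritson’s complaint is easy to illustrate with a toy example (which, of course, says nothing about whether tree ring proxies should be modeled as trend-plus-noise in the first place). The numbers below are illustrative, not Ritson’s or our actual calculation:

```python
# Computing an ACF on trending data without detrending inflates the
# apparent persistence relative to the underlying noise.
import numpy as np

rng = np.random.default_rng(2)
n = 600
noise = np.empty(n)
noise[0] = rng.standard_normal()
for t in range(1, n):                      # modest AR(1) noise, rho = 0.3
    noise[t] = 0.3 * noise[t - 1] + rng.standard_normal()
trend = np.linspace(0, 4, n)               # a superimposed "signal"
series = trend + noise

def acf_at(x, lag):
    """Sample autocorrelation of x at one positive lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

for lag in (1, 50, 100):
    print(lag, round(acf_at(series, lag), 2), round(acf_at(noise, lag), 2))
# The raw series shows sizeable "persistence" at long lags where the
# underlying noise ACF has long since died off.
```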
Ritson’s point was considered by some of the more statistically-minded readers here, and some tried to comment over at realclimate. realclimate shut down comments on the thread within about 7 days, beating Rasmus’ previous record, and refused to post many critical comments. Our conclusion here was that Ritson had done some kind of weird home-grown Team autocorrelation calculation that didn’t convince any of us.
But even if Ritson should eventually show that he’s right, this is not something that anyone else has noticed so far or that he’s been able to persuade any non-Team people about. It’s an issue that he did not raise in his own comment on our GRL article; it’s not an issue that’s discussed in Wahl and Ammann or "vetted by Nychka".
Ironically, in Gavin’s phrase, it also doesn’t matter. Let’s say that our autocorrelation coefficients were too high (which I don’t concede for a second). The bias still exists with lesser autocorrelation; it’s just not as severe (a quick check is sketched after the figure below). But at the end of the day, the problem is the weighting assigned to bristlecones. The NAS panel ruled against bristlecones. If you take bristlecones out of the mix, the PC method "doesn’t matter". To the left below are the reconstructions with differing PC methods, showing the changing impact of bristlecones; to the right, the reconstruction without bristlecones.
In the words of the Team, sigh.
Left – WA Scenario 5, as previously described. Right – WA Scenario 6 with xxx bristlecone series excluded. Orange – MBH98 for reference. Red – with two Mannian PCs (WA Scenario 6a); magenta – with 2 covariance PCs (WA Scenario 6c); blue – one graph with 2 correlation PCs (WA Scenario 6b), one graph with 5 covariance PCs.
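As a footnote on the point above that the bias persists at lower autocorrelation: a quick way to check is to re-run the Figure 9-2 style simulation at several AR(1) coefficients and track a crude hockey-stick index for PC1. The index here is my own ad hoc metric, not a statistic from the Wegman or NAS reports.

```python
# Re-run the short-centered PC1 simulation at several AR(1) coefficients
# and measure the step between the calibration period and the remainder,
# in standard-deviation units.
import numpy as np

rng = np.random.default_rng(3)
n_series, n_years, calib, n_trials = 50, 600, 100, 200

def hs_index(rho):
    """Mean hockey-stick step of PC1 over n_trials signal-free simulations."""
    vals = []
    for _ in range(n_trials):
        X = np.empty((n_years, n_series))          # AR(1) proxies, no signal
        X[0] = rng.standard_normal(n_series)
        for t in range(1, n_years):
            X[t] = rho * X[t - 1] + rng.standard_normal(n_series)
        Xc = X - X[-calib:].mean(axis=0)           # Mannian short centering
        pc1 = np.linalg.svd(Xc, full_matrices=False)[0][:, 0]
        vals.append(abs(pc1[-calib:].mean() - pc1[:-calib].mean()) / pc1.std())
    return np.mean(vals)

for rho in (0.2, 0.5, 0.9):
    print(rho, round(hs_index(rho), 2))
# Expect the induced step to be visible at each rho and to strengthen
# as persistence increases.
```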