Here’s something amusing.
Mann has written to the House Energy and Commerce Committee, arguing that we made a fundamental and obvious mistake in how we calculated AR1 coefficients for the North American tree ring network, one which exaggerated the HS-ness of the simulated hockey sticks – a mistake that Ritson has supposedly now picked up and that any reviewer would supposedly have caught.
The irony is that the method that we used to calculate AR1 coefficients is identical to the method used by Mann in his Preisendorfer diagram for the NOAMER network submitted to Nature and posted up at realclimate, as I prove below.
Reviewing the debate a little, here’s what Mann wrote to the House Energy and Commerce Committee:
There is another element of this question which raises a deeply troubling matter with regard to Dr. Wegman’s failure to subject his work to peer review, and Wegman’s apparent refusal to let other scientists try to replicate his work. Professor David Ritson, Emeritus Professor of Physics, Stanford University, has found error in the way that Dr. Wegman models the “persistence” of climate proxy data. Interestingly, this is the same error Steven McIntyre committed in his work, which was recently refuted in the paper by Wahl and Ammann, which was in turn vetted by Dr. Douglass Nychka, an eminent statistician. Dr. Ritson has determined that the calculations that underlie the conclusions that Dr. Wegman advanced in his report are likely flawed. Although Dr. Ritson has been unable to reproduce, even qualitatively, the results claimed by Dr. Wegman, he has been able to isolate the likely source of Wegman’s errors. …
Moreover, the errors that Dr. Ritson has identified in Dr. Wegman’s calculations appear so basic that they would almost certainly have been detected in a standard peer review. In other words, had Dr. Wegman’s report been properly peer-reviewed in a rigorous process where peer-reviewers were selected anonymously, it likely would not have seen the light of day. Dr. Wegman has thus unwittingly provided us with a prime example of the importance of the peer review process as a basic first step in quality control.
Here’s the “mistake” supposedly identified by Ritson in the calculation of AR1 coefficients. (Now even if Ritson were correct, all that this would affect is the HS-ness of our illustration of the biasing effect – it doesn’t in any sense disprove the biasing effect.) Ritson:
To facilitate a reply I attach the Auto-Correlation Function used by the M&M to generate their persistent red noise simulations for their figures shown by you in your Section 4 (this was kindly provided me by M&M on Nov 6 2004 ). The black values are the ones actually used by M&M. They derive directly from the seventy North American tree proxies, assuming the proxy values to be TREND-LESS noise…Surely you realized that the proxies combine the signal components on which is superimposed the noise? I find it hard to believe that you would take data with obvious trends, would then directly evaluate ACFs without removing the trends, and then finally assume you had obtained results for the proxy specific noise! …Your report makes no mention of this quite improper M&M procedure used to obtain their ACFs.
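To see what’s at stake in Ritson’s complaint, here’s a minimal sketch – my own illustration in Python, not Ritson’s or anyone else’s actual code. For an AR1 model, the Yule-Walker estimate of the coefficient is just the lag-one sample autocorrelation of the series, and a linear trend left in the data inflates that estimate relative to what you get from the detrended residuals. Whether detrending first is the “right” procedure is, of course, exactly the point in dispute:

```python
import random

def lag1_autocorr(x):
    """Sample lag-one autocorrelation of a series
    (the Yule-Walker estimate of the AR1 coefficient)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def detrend(x):
    """Residuals from an ordinary least-squares linear fit against time."""
    n = len(x)
    t_mean = (n - 1) / 2
    x_mean = sum(x) / n
    slope = (sum((t - t_mean) * (x[t] - x_mean) for t in range(n))
             / sum((t - t_mean) ** 2 for t in range(n)))
    return [x[t] - x_mean - slope * (t - t_mean) for t in range(n)]

random.seed(0)
n = 581  # series length used in the simulations
series = [0.01 * t + random.gauss(0, 1) for t in range(n)]  # trend + white noise

raw = lag1_autocorr(series)             # trend left in
resid = lag1_autocorr(detrend(series))  # trend removed first, a la Ritson
print(raw > resid)  # should print True: the undetrended estimate is inflated
```

On this toy series the undetrended estimate comes out well above the estimate from the detrended residuals – that sensitivity is the effect Ritson is pointing at.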
Now we’ve done a variety of calculations to show the artificial hockey stick effect. The calculation that’s received the least attention, but is probably the most pertinent, is the discussion in the Reply to VZ in which we talk about the impact of 1-2 “bad apples” in an MBH context. Given the concern over potential nonclimatic effects of bristlecones, this is actually a more important issue than the red noise argument. In our red noise discussions, we did two calculations – one with ARFIMA noise and one with AR1 noise. The ARFIMA noise produced pretty hockey sticks but introduced a secondary complication, so replications have focused on AR1 examples. To set parameters for the simulation, we calculated AR1 coefficients on the North American AD1400 tree ring network using a simple application of the arima function in R:
# fit an AR1 model to each (undetrended) tree ring series;
# the "ar1" entry of the fitted coefficients is the lag-one coefficient
arima.coef = arima(x, order = c(1, 0, 0))
This is what Ritson is criticizing: he argues that applying a standard arima function to a tree ring network without first removing trends is incorrect. Now it seems to me that Ritson has recently argued that VZ’s implementation of MBH made some sort of ghastly error by removing a trend prior to regression – so it’s hard to say what Team policy is on when trends should and shouldn’t be removed – but that’s a story for another day.
However, here my point is different. Whatever the right method may be, the method that I used simply followed Mann’s own methodology. This can be proven by looking at his Preisendorfer simulations posted up at realclimate here (which were also submitted to Nature). In the SI to Mann’s revised reply to our Nature submission (all unpublished), Mann stated – posted up here for the first time:
We performed the experiments described by MM04 , producing various realizations of M=70 statistically independent red noise series of length N=581 ‘years’, using an N(0,1) Gaussian innovation forcing and the lag one autocorrelation coefficients of each of the actual M=70 North American ITRDB data for the interval 1902-1980.
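The recipe in that passage is straightforward to sketch. Here is a rough Python stand-in (mine, for illustration only): the phis below are hypothetical placeholders, where Mann would have used the 70 lag-one coefficients estimated from the actual ITRDB data.

```python
import random

def ar1_series(phi, n, rng, burn=100):
    """Generate an AR1 'red noise' series x[t] = phi*x[t-1] + e[t],
    with N(0,1) Gaussian innovations e[t], discarding a burn-in."""
    x = 0.0
    out = []
    for t in range(n + burn):
        x = phi * x + rng.gauss(0, 1)
        if t >= burn:
            out.append(x)
    return out

rng = random.Random(123)
# hypothetical placeholders for the 70 lag-one coefficients estimated
# from the NOAMER network (the actual values come from the arima fits)
phis = [rng.uniform(0.1, 0.6) for _ in range(70)]
network = [ar1_series(phi, 581, rng) for phi in phis]
print(len(network), len(network[0]))  # 70 series of 581 'years'
```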
From this calculation, Mann produced the figure shown in the left panel below, which was submitted to Nature and later posted up at realclimate in an identical form. In the right panel, I show my replication of this figure, posted up early last year here and discussed here at CA. This exact replication of Mann’s diagram was produced using the arima coefficients calculated above – had the coefficients been calculated using Ritson’s method, the diagram would have been quite different. So whether this method of calculating AR1 coefficients is right or wrong, it is EXACTLY the method used by Mann himself.
Mann’s Original Caption for Left Panel: FIGURE 1. Comparison of eigenvalue spectrum for the 70 North American ITRDB data based on MBH98 centering convention (blue circles) and MM04 centering convention (red crosses). Shown is the null distribution based on simulations with 70 independent red noise series of the same length with the same lag-one autocorrelation structure as the actual ITRDB data using the centering convention of MBH98 (blue curve) and MM04 (red curve). In the former case, 2 (or perhaps 3) eigenvalues are distinct from the noise floor. In the latter case, 5 (or perhaps 6) eigenvalues are distinct from the noise floor. The simulations are described in "supplementary information #2".
While we’re looking at these diagrams, it’s interesting to look at a couple of other examples to see that Mann’s use of the Preisendorfer criterion cannot be demonstrated in other networks, as I pointed out early last year, although nobody from realclimate has explained the discrepancies and Wahl and Ammann avoided the topic altogether. On the left is a diagram for the Stahle SWM AD1700 network – no fewer than 9 PCs were actually retained. On the right is a diagram for the Vaganov AD1600 network; in this case, only 2 PCs were retained. What system did Mann actually use to decide retained PCs? I have no idea. You can’t get the retention decisions from these diagrams. No code for these retention decisions has ever been produced. I’d love to know how the retentions were made.
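For readers wondering what a Preisendorfer Rule N-style test even looks like, here is a bare-bones Python sketch – under simplifying assumptions (a white noise null, leading eigenvalue only, toy dimensions) and emphatically not Mann’s actual retention code, which has never been produced. One simulates a null distribution of eigenvalues from noise and retains a PC only if the corresponding data eigenvalue clears the noise floor:

```python
import random

def leading_eigenvalue(mat, iters=200):
    """Largest eigenvalue of a symmetric nonnegative-definite matrix,
    estimated by power iteration with infinity-norm scaling."""
    p = len(mat)
    v = [1.0] * p
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(p)) for i in range(p)]
        lam = max(abs(c) for c in w)
        v = [c / lam for c in w]
    return lam

def cov_leading_eig(series):
    """Leading eigenvalue of the sample covariance matrix of the series."""
    p, n = len(series), len(series[0])
    means = [sum(x) / n for x in series]
    cov = [[sum((series[i][t] - means[i]) * (series[j][t] - means[j])
                for t in range(n)) / (n - 1)
            for j in range(p)]
           for i in range(p)]
    return leading_eigenvalue(cov)

rng = random.Random(1)
p, n, nsim = 10, 200, 50  # toy stand-ins for 70 proxies x 581 'years'

# null distribution of the leading eigenvalue under independent white noise
null = sorted(cov_leading_eig([[rng.gauss(0, 1) for _ in range(n)]
                               for _ in range(p)])
              for _ in range(nsim))
threshold = null[int(0.95 * nsim)]  # 95th percentile of the noise floor

# a network sharing one common signal: its PC1 should clear the noise floor
signal = [rng.gauss(0, 1) for _ in range(n)]
data = [[signal[t] + rng.gauss(0, 1) for t in range(n)] for _ in range(p)]
retain_pc1 = cov_leading_eig(data) > threshold
print(retain_pc1)  # should print True
```

Even with a recipe this simple, the retention decisions in the Stahle and Vaganov cases don’t follow from the published diagrams.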
Mann’s tactics are really pretty amazing sometimes. Why pick another fight on such a bad issue? He should have just shut up, taken his medicine and got on with it. What if Wegman takes him at his word and actually examines the issues that Mann raises here? He will conclude that the climate science community has taken leave of their senses.