A Letter to Ritson

Ritson at realclimate did not thank me for helpful discussions on autocorrelation, despite lengthy correspondence on my part with him. I thought that the histogram that I posted up earlier today looked familiar. So I looked back at my correspondence with Ritson (who posted up on autocorrelation at realclimate) and, sure enough, I’d sent the identical histogram to him in November 2004. So Ritson had seen correctly calculated autocorrelation coefficients a long time ago. The letter was interesting to re-read in the present context.

I tend to be most interested in empirical points and, from reading your paper, I see one very obvious bit of empirical information which we could helpfully report (and I think that the absence of this may have been frustrating you) – the AR1 coefficients of the North American tree ring network. In the AD1400 network used by Mann, there are 70 sites with AR1 coefficients ranging from 0 to 0.79. Below is a histogram. Obviously the AR1 persistence in these tree ring site index series is greater than in temperature series, and I think that this turns out to be very important both in principal component analysis (especially as done by Mann) and in the downstream regressions. Curiously, the AR1 coefficients seem to me to be more strongly correlated with the author than with any other variable. Series done by Stahle have little autocorrelation (and Durbin-Watson statistics around 2), while series done by Jacoby and Graybill have very high autocorrelation and Durbin-Watson statistics sometimes under 1. This is inadequately discussed in the literature.
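(For concreteness: the AR1 coefficient here is simply the lag-one autocorrelation of the site chronology, and the Durbin-Watson statistic is approximately 2(1 − r1), which is why the low-persistence series sit near DW = 2 while the strongly autocorrelated ones fall below 1. Below is a minimal sketch of both calculations, in Python on a simulated series; it is illustrative only and does not use the actual network data.)

```python
import numpy as np

def ar1_coefficient(x):
    """Lag-one autocorrelation of a (hypothetical) site chronology."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

def durbin_watson(x):
    """Durbin-Watson statistic on the mean-removed series; roughly 2*(1 - r1)."""
    e = np.asarray(x, dtype=float)
    e = e - e.mean()
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Hypothetical example: an AR(1) series with rho = 0.7, i.e. persistence
# comparable to the more autocorrelated chronologies in the network.
rng = np.random.default_rng(0)
n, rho = 600, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.standard_normal()

print(ar1_coefficient(x))  # near 0.7
print(durbin_watson(x))    # near 2 * (1 - 0.7) = 0.6
```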

Secondly, the AR1 coefficients underestimate the actual persistence at many sites. For example, the Sheep Mountain site has an AR1 coefficient of 0.76 under an ARMA(1,0) model, but its actual persistence is much greater. The ACF is shown below, with the red line showing the iterated AR1 coefficient. I’ve gotten interested in fractional processes to deal with this sort of situation, following Mandelbrot (who actually calculated Hurst parameters for some tree ring series). I suspect that what I’ve described as a fractional process (using Whitcher’s algorithm following Hosking (1981)) would be recognisable to you as your 1/f process. Interestingly, Hurst, after whom the Hurst parameter in 1/f processes is named, was a hydrologist who studied fluctuations of the Nile, a climatic series.
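(To illustrate the diagnostic being described: the sample ACF of a short-memory AR(1) series decays roughly like rho^k, whereas a long-memory series decays much more slowly, and a rescaled-range slope gives a rough Hurst parameter. The sketch below, in Python on simulated data, is only a crude stand-in; it does not reproduce the Whitcher/Hosking fractional fit.)

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation function at lags 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[: len(x) - k]) / denom
                     for k in range(max_lag + 1)])

def hurst_rescaled_range(x, block_sizes=(16, 32, 64, 128, 256)):
    """Crude rescaled-range (R/S) estimate of the Hurst parameter:
    slope of log(mean R/S) against log(block size)."""
    x = np.asarray(x, dtype=float)
    log_m, log_rs = [], []
    for m in block_sizes:
        if m > len(x) // 2:
            continue
        rs = []
        for start in range(0, len(x) - m + 1, m):
            seg = x[start:start + m]
            dev = np.cumsum(seg - seg.mean())
            s = seg.std(ddof=1)
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)
        log_m.append(np.log(m))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_m, log_rs, 1)[0]

# Simulated short-memory series with lag-one autocorrelation near 0.76.
rng = np.random.default_rng(1)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = 0.76 * x[t - 1] + rng.standard_normal()

r1 = sample_acf(x, 1)[1]
print(sample_acf(x, 10))        # compare against r1 ** np.arange(11)
print(hurst_rescaled_range(x))  # rough Hurst estimate for this simulated series
# A long-memory (fractional) series would show an ACF decaying much more
# slowly than r1 ** k and a Hurst estimate well above 0.5.
```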

Thirdly, here is a graphic showing the relationship between the weighting of a site chronology under Mann’s PC method and its AR1 coefficient. I think that you will agree that it is a very strange scatter plot. The 14 series on the right account for over 99% of the variance in the PC1. As you see, there is a strong association between the AR1 coefficient and the EOF1 weighting – which is not an effect that one would normally seek and surely points to some problem with the method. The over-printed sites are sites which Mann excluded in an unreported sensitivity study, one which gave him very different results from those he reported. As I’ve mentioned to you before, I am very struck by this omission, which would not be legal in a securities offering. When you look at the sites marked with the overprint, they are all bristlecone pine sites, mostly from Donald Graybill and reported in Graybill and Idso (1993). There are many curious features about bristlecone pines, and the growth index has never been shown to be a climate proxy. The extreme right-hand site is the one shown in the ACF figure above.
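(For readers unfamiliar with the mechanics: the weighting in question is the loading of each site on PC1. The sketch below, in Python, computes such loadings under a short-centering convention, centering each series on a late sub-period rather than the full period, and compares them with the sites’ AR1 coefficients. The network, the AR1 values and the centering window are all simulated and hypothetical; this is not the actual NOAMER calculation.)

```python
import numpy as np

def eof1_loadings(proxies, center_rows):
    """Loadings of each site on PC1 when every column is centered on the
    mean of a sub-period (short centering) instead of the full-period mean."""
    X = proxies - proxies[center_rows].mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]

# Hypothetical network: 70 AR(1) series with AR1 coefficients spread from
# 0 to 0.8, roughly matching the range reported above for the AD1400 roster.
rng = np.random.default_rng(2)
n_years, n_sites = 581, 70
rhos = np.linspace(0.0, 0.8, n_sites)
X = np.zeros((n_years, n_sites))
for t in range(1, n_years):
    X[t] = rhos * X[t - 1] + rng.standard_normal(n_sites)

# Short centering on the last 79 rows (a hypothetical calibration window).
w = np.abs(eof1_loadings(X, slice(-79, None)))
# Plotting w against rhos gives a scatter of PC1 weight versus AR1 coefficient
# analogous to the figure described above (simulated data only).
```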

I find it incredible that Ritson can seriously propose AR1=0.15 as a model for a series with an autocorrelation function looking like that of Sheep Mountain in the second figure, especially when the issue had been specifically brought to his attention.

As a passing comment, the above letter also illustrates how constructive I was in correspondence. I have probably been a little unguarded in this respect as, for example, I sent Bürger a considerable amount of detailed information, but did not receive any acknowledgement in Bürger et al. (2006).

14 Comments

  1. Posted May 25, 2006 at 3:05 AM | Permalink

    Given that the relationship between the proxy weighting and the AR1 coefficient is almost linear (certainly the lower bound anyway) perhaps the correct solution is to divide the final weights by the AR1 coefficient?

    Of course, that’s only useful if you believe the rest of the method is valid – otherwise you just end up with results which are slightly less wrong – but it would seem to be a step in the right direction to me. However, much better to throw this method away and devise a new one from scratch which does not suffer from these problems. For a start, by comparing proxies with local records rather than global ones.

    But I guess my point is, the striking linearity of that graph is evidence of a strongly biased weight choosing algorithm. But, that’s almost so obvious that it’s not worth mentioning.

  2. Steve McIntyre
    Posted May 25, 2006 at 8:50 AM | Permalink

    #1. Sure it’s evidence of a biased weight choosing algorithm. It mines for hockey stick shaped series – that’s the effect that we reported in our GRL paper.

    As we pointed out before, when you look up the proxies that are overweighted, they are the bristlecones. Principal components is usually considered to be an “exploratory” method and, in this case, attention to interpretation of the component would have led to the identification of the bristlecones as outliers. Of course, Mann did identify them as outliers, but “needed” them to “get” his results.

  3. Brooks Hurd
    Posted May 25, 2006 at 9:20 AM | Permalink

    Steve,

    Clearly you are doing a considerable amount of work on issues like this. You could submit these things for publication. That would put Ritson or others in an uncomfortable position if they attempted to play games with your correspondence.

  4. TCO
    Posted May 25, 2006 at 2:10 PM | Permalink

    Your failure to publish is a worse lapse (but of a different nature) than Burger’s failure to acknowledge. And his paper is a gem in terms of its issue disaggregation, “killer experiment” analysis, and clear writing.

    I notice a common flaw in some of the postings here (one that I bet doesn’t just frustrate JerryB-designated “dummies” like me, but also serious specialist scientists in this field): not finishing thoughts. It’s evident sometimes when you say, “look at parameter X at value Y”, that what really bugs you is implication Z. But you don’t finish the thought. I’m not sure to what extent this is just a logic lapse and to what extent it is timidity about making a clear assertion that opens the door for rebuttal.

    The remark about the percentage of variance accounted for in the PC1 continues the failure to complete the thought, and possibly overemphasizes a flaw: PC1 is NOT the reconstruction, PC1 is diluted when fed into the Mannomatic, PC2 may counterbalance some of the effects of PC1, and PCs are supposed to group variance into the lowest number of axes, so it’s not surprising that some groups are over-represented in a PARTICULAR PC.

    ****

    That said, this type of posting is far preferable to 2013 crashes and sea level kerfuffle with the hoi polloi.

  5. John Adams
    Posted May 25, 2006 at 3:16 PM | Permalink

    οἱ πολλοί

  6. TCO
    Posted May 26, 2006 at 10:08 PM | Permalink

    Airborne!

  7. Steve McIntyre
    Posted May 26, 2006 at 10:41 PM | Permalink

    #4. Look, this was just an email, not a paper. I show it because it’s where I sent Ritson the actual AR1 coefficients, the calculation of which he cocked up at realclimate.

    I don’t know why you go on about PC1 versus reconstruction so much. We’ve been over this before, but one more time: we’ve shown the impact on both the NH reconstruction and the PC1.

    In the context of the time (and the circumstances are different now than in 2004), after MM03, Mann identified the North American PC1 as the “critical” series which accounted for the difference between MM03 and MBH results. So reconciling the two results had been isolated to the NOAMER PC1 – and reconciling the results was important at the time.

    Analysis of the properties of the PC1 was critical. At that point it was a given that this one series accounted for the differences. Now this needs to be explained to people, but it was understood between Mann and ourselves at the time. It’s gotten lost over time. You don’t have a whole lot of space to work with in GRL, and we wanted to publish there as well, so we had to pick issues. I wanted to deal with RE statistics. Nobody seems to have cared, but it’s really an important topic. The concept of spurious RE was not on anyone’s radar screen at the time. It still hasn’t been assimilated, but I’m hopeful that people will catch on. Some of these things take a while.
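    (For anyone following along: the RE, or reduction of error, statistic is conventionally defined over the verification period as RE = 1 − SSE(reconstruction)/SSE(benchmark), where the benchmark is the calibration-period mean. The sketch below is only that textbook definition in Python, with hypothetical variable names; it is not the MBH or MM implementation.)

    ```python
    import numpy as np

    def reduction_of_error(obs, recon, calibration_mean):
        """RE over the verification period: 1 minus the ratio of the
        reconstruction's squared error to that of the calibration-mean benchmark."""
        obs = np.asarray(obs, dtype=float)
        recon = np.asarray(recon, dtype=float)
        sse_recon = np.sum((obs - recon) ** 2)
        sse_bench = np.sum((obs - calibration_mean) ** 2)
        return 1.0 - sse_recon / sse_bench

    # "Spurious RE": a reconstruction with no real skill can still score a
    # positive RE if, for instance, it carries an offset or trend that happens
    # to track the verification data better than the calibration mean does.
    ```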

    It’s amazing how many scientists in the field simply take their views from realclimate. As far as I can tell, the majority of them. I don’t think that the issue is exposition. While our exposition could undoubtedly have been better, our exposition has not prevented civilians like James Lane or Spence_UK from exactly grasping the points right away in a substantive way.

  8. TCO
    Posted May 27, 2006 at 10:08 AM | Permalink

    Steve:

    1. I understand that every blog post and email is not as good as a paper. All the more reason to publish more.
    2. I do think you have a tendency not to always have a rigorously logical train of thought, and to wander at times in exposition. That’s ok. You’re still scarily bright and no one is perfect. But I think it’s something to watch.
    3. I go on and on about the PC1 versus reconstruction because the “damage to the PC1” is an overstatement of “damage to the reconstruction”. That you have at some other point connected the two is irrelevant, Steve. It’s still disingenuous in discussion and sloppy in analysis. The effect on the intermediate is not the same as the effect on the overall.

  9. Posted May 27, 2006 at 12:43 PM | Permalink

    #7 “It’s amazing how many scientists in the field simply take their views from realclimate. As far as I can tell, the majority of them”. Steve, you’re probably right, but I hope not. Given Mike’s recent responses to Terry’s and my questions over at RealClimate, I would like to think most climate scientists have some critical thinking ability. Phil B.

  10. Willis Eschenbach
    Posted May 27, 2006 at 2:49 PM | Permalink

    Heck, at least you’re getting responses. I’m just getting censored when I’ve pointed out the glaring mathematical error in the Svalbard analysis …

    w.

  11. Terry
    Posted May 27, 2006 at 3:53 PM | Permalink

    Willis #10:

    What glaring errors did you point out? Could you post them here? I also had some very reasonable concerns about the analysis and got censored.

  12. Steve McIntyre
    Posted May 27, 2006 at 4:26 PM | Permalink

    Willis, do you mind/would you like if I post up your Svalbard analysis as a separate thread? I’ve had a couple of other people approach me about it offline.

  13. Terry
    Posted May 27, 2006 at 5:25 PM | Permalink

    #12:

    I would be very interested in a thread on Svalbard.

    The Svalbard analysis can be seen as a case study in the dangers of basing an analysis on extreme observations, i.e., the LEAST reliable data points.

    Basic humility tells you that when you find a 5-sigma event, either:

    1) The data is perfect and your analysis is error free and you have found a completely amazing result.

    2) There is an error somewhere in the data or your analysis.

    The more extreme the finding, the more likely that 2) is the correct interpretation — erroneous analyses regularly produce extraordinary results, while correct analyses rarely do.

  14. TCO
    Posted May 27, 2006 at 8:36 PM | Permalink

    or 3. The system has changed.