Calibration in the Mann et al 2007 Network Revisited

In a post a few months ago, I discussed MBH99 proxies (and similar points will doubtless apply to the other overlapping series) from the point of view of the elementary calibration diagram of Draper and Smith 1981 (page 49), an older version of a standard text. Nothing exotic.

One of the problems that arose in these plots is that virtually none of the individual calibrations made any sense in a Draper and Smith style plot. Since then, I've dug a little further into the problem. If a statistical relationship is not significant under a standard t-test (a t-statistic of about 2), a Draper and Smith style plot throws up nonsensical confidence intervals. It turns out that if you lower your t-standards, then you can typically "get" confidence intervals for the individual calibrations that are at least intelligible. The larger question is whether a multivariate calibration can overcome the deficiencies of the individual calibrations (the topic of Brown 1982 and subsequent articles, which several of us have been parsing).
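For readers who want the formula being used: my reading of the calibration interval in Draper and Smith 1981 (eq. 1.7.6, page 49), as coded in the script in comment 2 below, is roughly the following. Treat this as a sketch of the calculation rather than a transcription of the text. Given a calibration fit $y = \hat\alpha + \hat\beta x$ over $n$ years, with residual standard deviation $s$ and $S_{xx} = \sum_i (x_i - \bar x)^2$, the point estimate for the unknown temperature $X_0$ corresponding to an observed proxy mean $\bar y_0$ is $\hat X_0 = (\bar y_0 - \hat\alpha)/\hat\beta$. Writing $g = t_0^2 s^2 / (\hat\beta^2 S_{xx})$, the limits are

$$ \hat X_0 + \frac{(\hat X_0 - \bar x)\,g}{1-g} \;\pm\; \frac{t_0\, s}{\hat\beta\,(1-g)} \sqrt{\frac{(\hat X_0 - \bar x)^2}{S_{xx}} + \frac{1-g}{n}}. $$

(Draper and Smith also carry a $1/q$ term for the number of new observations averaged into $\bar y_0$; the script below effectively drops it, i.e. treats the verification-period proxy mean as a large-sample quantity.)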

However, before re-visiting that text, I want to re-plot calibration diagrams for all the MBH99 proxies, lowering the t-standard where required to get an intelligible confidence interval. The post title refers to the Mann et al 2007 network because it is the same as the MBH98-99 network. In all cases, I've limited my analysis to the verification period mean, since Wahl and Ammann have agreed that MBH has no "skill" at higher frequency. So we might as well see which proxies, if any, have "skill" at the verification period mean. In the MBH99 period (the AD1000 step), only one "climate field" is reconstructed, so all the piffle about lower order temperature PCs can be disregarded.
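To be concrete about the target, the calibration/verification split used throughout is the one in the script in comment 2 below (paths and object names are from that script):

    # calibration on 1902-1980; the verification target is the 1854-1901 mean of
    # the "sparse" NH temperature series (tsp 1854-1993)
    source("d:/climate/scripts/spaghetti/mbh99_proxy.txt")  # proxy: 981 x 14 (AD1000-1980)
    source("d:/climate/scripts/spaghetti/sparse.txt")       # sparse NH temperature
    xbar     = mean(sparse[(1902:1980)-1853])  # calibration-period mean (essentially zero)
    xbar_ver = mean(sparse[(1854:1901)-1853])  # verification-period mean: about -0.19 deg C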

Let me start by showing a plot for the MBH99 series with the highest t-score in calibration – the NOAMER PC2 (not the Mannian PC1). I haven't shown plots of the individual series (you can see these here – link), but the PC2 doesn't have a HS shape. Its individual "prediction" of the verification period temperature (the sparse NH series, centered on 1902-1980) is an even 0.0 deg C, with a 95% confidence interval of -0.1 to 0.16 deg C. While the confidence interval is quite tight, unfortunately the observed verification temperature of -0.18 deg C lies outside it.

[Figure: NOAMER PC2 calibration diagram (draper13.gif)]
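For reference, the individual "prediction" quoted for each proxy is just the calibration regression inverted at the verification-period proxy mean. A condensed sketch of that step, lifted from the plotf function in comment 2 (the column index i is illustrative only):

    i = 1                                    # proxy column, for illustration only
    temp = c(rep(FALSE,48), rep(TRUE,79))    # 1854-1901 verification, 1902-1980 calibration
    Z = data.frame(x = sparse[1:(1980-1853)], y = proxy[(1854:1980)-999, i])
    fm = lm(y ~ x, data = Z[temp, ])         # calibrate proxy against temperature
    ybar_ver = mean(Z$y[!temp])              # verification-period proxy mean
    X0hat = (ybar_ver - coef(fm)[1]) / coef(fm)[2]  # individual "prediction" of temperature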

The next highest t-score comes from Briffa's Tornetrask reconstruction. In this case, it verifies almost exactly. The problem with putting much weight on this particular reconstruction is that Briffa adjusted his results so that they "worked", as discussed here [link].

[Figure: Tornetrask calibration diagram (draper12.gif)]

Only one other series in the MBH99 network passes a standard t-test – Cook's old Tasmania version. Here the 95% confidence interval is a rather uninformative 0.04 to 8.81 deg C, but even this wide interval fails to bracket the observed -0.19 deg C.

[Figure: Tasmania calibration diagram (draper12.gif)]

All other series failed a simple t-test in the calibration period and yielded perverse confidence intervals that did not even bracket the point estimate. I examined some of the earlier literature, including Fieller 1954, and the most logical approach to these failed confidence intervals seems to be to lower the standard: if you can't get a 95% confidence interval that makes any sense, maybe you can get a 75% confidence interval or a 50% confidence interval. There are some interesting mathematical issues involved in this, which Ross and I have been mulling over.
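For what it's worth, the tie-in to Fieller 1954 can be stated in one line (my paraphrase, not a quotation): the interval in eq. 1.7.6 comes out as a finite interval only when $g < 1$, i.e. only when

$$ \frac{t_0^2 s^2}{\hat\beta^2 S_{xx}} < 1 \quad\Longleftrightarrow\quad \frac{|\hat\beta|}{s/\sqrt{S_{xx}}} > t_0, $$

that is, when the calibration slope passes a t-test at the critical value $t_0$. Otherwise the Fieller-type confidence set is the whole line or the complement of an interval, which is exactly the "nonsensical" behaviour in the failed plots. Lowering the confidence target lowers $t_0$ until the condition is met.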

But the best way of seeing what is going on is simply to look at a lot of plots, and the Mann et al 2007 data set (which is identical to MBH98) provides an excellent compendium of perverse cases, one that can be recommended to statistics students even if climate science "moves on". In the following plots, I'll show on the left the 95% confidence interval (where there is a breakdown) and on the right a confidence interval based on a lower standard. Notice that the left-hand plots all show both intersections on the same branch of the hyperbola; hence the nonsense intervals. By lowering the confidence target, one intersection is obtained on each branch of the hyperbola, and thus a "confidence" interval.
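As a practical matter, the "lower standard" on the right-hand side is just a smaller critical value t0, and any chosen t0 can be translated back into the two-sided confidence level it actually represents. A minimal sketch (the helper name is mine, not part of the script in comment 2):

    # two-sided confidence level implied by a chosen critical value t0,
    # with n - 2 degrees of freedom (n = 79 calibration years here)
    implied_conf = function(t0, n = 79) 1 - 2 * pt(-abs(t0), df = n - 2)

    implied_conf(1.96)  # ~0.95 - the usual standard
    implied_conf(1.50)  # ~0.86 - roughly the lowering used for NOAMER PC1 below
    implied_conf(0.02)  # ~0.02 - the lowering needed for the last proxy below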

Here is the famous NOAMER PC1. (Note: this is from the original MBH archive and I need to re-check whether this is the "fixed" version or not.) While this version fails a standard t-test, if the t-standard is lowered to 1.5, then one can say with (say) 90% confidence that the verification period mean was between -1 and -16 deg C (actual value -0.19 deg C). So this doesn't seem as helpful a standard as one might hope.

[Figures: NOAMER PC1 calibration diagrams, 95% standard (left) and lowered t-standard (right)]

The next highest t-score for NH temperature came from the Quelccaya 1 ice core (t=1.31, which is still 90th percentile). Somewhat disappointing, though, is the fact that the Quelccaya 2 ice core has a t-value of only 0.36. Now that I think of it, we've seen a tendency in some recent studies to use the Quelccaya 1 ice core as a proxy on its own, discarding the Quelccaya 2 results. Hmmm. The Quelccaya 1 results purport to show a confidence interval (t=1.31) of -0.39 to -4.8 deg C (once again not containing the observed -0.19 deg C). The Quelccaya 2 results, at an even lower confidence level (t=0.36), bracket -2 to -8 deg C, again not containing the observed -0.19 deg C. Hmmm.

[Figures: Quelccaya 1 calibration diagrams, 95% standard (left) and lowered t-standard (right)]
[Figures: Quelccaya 2 calibration diagrams, 95% standard (left) and lowered t-standard (right)]

Next in descending t-scores is Briffa's Polar Urals series. This is the older version before the update (which resulted in a warm MWP, prompting the Team to switch to Yamal). Once again, on the left we see the broken-down confidence interval at the usual 95% t-value; with a lowered confidence standard (t=1.25), a confidence interval of -0.9 to -8.4 deg C results (again not containing the observed value).

[Figures: Polar Urals calibration diagrams, 95% standard (left) and lowered t-standard (right)]

I hope you're not getting too bored, because there are some interesting examples still to come, though the next few are more of the same. Next we have the NOAMER PC3. In passing, how does application of a Preisendorfer rule result in 3 PCs for the AD1000 NOAMER network and 2 PCs for the AD1400 network? Maybe the PR Challenge will tell us. The t-value has now declined to 0.95, yielding the typical broken-down diagram on the left. By lowering the t-standard, we can get a less confident interval, this time between 0.46 and 5.8 deg C (and once again unfortunately not including the observed value).

[Figures: NOAMER PC3 calibration diagrams, 95% standard (left) and lowered t-standard (right)]

Next come two accumulation series from Quelccaya. It's intriguing that this one site accounts for 4 of 14 proxies in the network. Perhaps each individual ice core is thought to be tuned to a different channel on the teleconnection dial. The confidence interval for one accumulation series is 0.05 to 0.9 deg C at low confidence (but does not contain the observed value), while the other accumulation series, using a t-standard of only 0.5, yields an uninformative interval of 0.5 to 15.5 deg C, again unfortunately not containing the observed value.

[Figures: Quelccaya accumulation series 1 calibration diagrams, 95% standard (left) and lowered t-standard (right)]
[Figures: Quelccaya accumulation series 2 calibration diagrams, 95% standard (left) and lowered t-standard (right)]

Next are three proxies, all with t-values in the seemingly uninformative 0.6-0.65 range: an Argentine tree ring series, a French tree ring series and a Greenland dO18 series (the last one being used over and over again in these studies). At very low confidence levels, each of these "proxies" yields only very wide "confidence" intervals, none of which actually bracket the observed value.

[Figures: Argentina tree ring calibration diagrams, 95% standard (left) and lowered t-standard (right)]
[Figures: France tree ring calibration diagrams, 95% standard (left) and lowered t-standard (right)]
[Figures: Greenland dO18 calibration diagrams, 95% standard (left) and lowered t-standard (right)]

The last proxy has a bit of a place of honor. In this case, to obtain an intelligible “confidence” interval, one has to lower the t-standard to 0.02 (!!), resulting in a “confidence” interval between 12 and 19 deg C for the verification period reconstruction.

[Figures: final proxy calibration diagrams, 95% standard (left) and t-standard lowered to 0.02 (right)]

Reviewing the bidding, only one of the MBH99 proxies, considered in an individual calibration, yielded a confidence interval that contained the observed verification mean (and that proxy – Briffa's Tornetrask series – had been fudged so that it "worked").

The interesting statistical question, which hopefully the methods of Brown 1982 and the subsequent literature can help answer, is whether a multivariate calibration using proxies that have all individually failed so badly can yield an answer with a valid confidence interval using proper methods (as opposed to the methods applied by IPCC, relying on Wahl and Ammann and the rest of the Team).
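For concreteness, here is a minimal sketch of the classical GLS calibration estimator in the spirit of Brown 1982, using the objects from the script in comment 2 (the names Ycal, A, B, S, y0 and x0hat are mine). This is not a claim about how MBH or Mann et al 2007 actually weight the proxies; it only shows the mechanics of combining individually weak proxies, and whether the result carries a valid confidence interval is precisely the open question above.

    # classical (indirect) multivariate calibration, Brown 1982 style:
    # regress every proxy on calibration-period temperature, then estimate the
    # unknown verification-period temperature by GLS across proxies, weighting
    # by the inverse of the calibration residual covariance
    x     = sparse[(1902:1980)-1853]                 # calibration temperature (1902-1980)
    Ycal  = proxy[903:981, ]                         # 14 proxies, 1902-1980
    fm    = lm(Ycal ~ x)                             # one regression per proxy column
    A     = coef(fm)[1, ]; B = coef(fm)[2, ]         # intercepts and slopes (length 14)
    S     = crossprod(residuals(fm)) / (nrow(Ycal) - 2)  # residual covariance (14 x 14)
    y0    = colMeans(proxy[(1854:1901)-999, ])       # verification-period proxy means
    x0hat = solve(t(B) %*% solve(S) %*% B, t(B) %*% solve(S) %*% (y0 - A))
    x0hat                                            # GLS estimate of the verification mean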

23 Comments

  1. Pete
    Posted Jul 20, 2008 at 12:52 PM | Permalink

    Para 4 – Link missing?

  2. Steve McIntyre
    Posted Jul 20, 2008 at 3:08 PM | Permalink

    ##LOAD PROXY AND TARGET TEMPERATURE (MBH SPARSE)
    method="online"
    scale.method=FALSE #see comment 6 below: the sourced utilities appear to expect this
    source("d:/climate/scripts/spaghetti/mbh99_proxy.txt") # proxy: 981 x 14
    dim(proxy)

    source("d:/climate/scripts/spaghetti/sparse.txt");tsp(sparse) #1854 1993

    ssq=function(x) sum(x^2) #sum of squares

    ##CALIBRATE ON 1902-1980
    xbar=mean(sparse[(1902:1980)-1853]);xbar # -4.691276e-18
    xbar_ver=mean(sparse[(1854:1901)-1853]);xbar_ver # -0.1926482
    X=as.matrix((sparse-xbar)[(1902:1980)-1853]);length(X) #79
    Y=proxy[903:981,];dim(Y)
    Sxx=ssq(X);Sxxinv=1/Sxx #calibration sum of squares and its inverse; see comment 19 below

    #Draper: g should be much smaller than (say) about 0.20. A t-statistic of 2.236 would achieve this, for example

    plotf=function(i,t0=qt(df=n-2,.975)) { #default t0 is the two-sided 95% critical value (n is set below; lazy evaluation)
    par(mar=c(3,4,2,1))
    temp= c(rep(FALSE,48),rep(TRUE,79)) #1854-1901 verification, 1902-1980 calibration
    y=proxy[(1854:1980)-999,i]; x=sparse[1:(1980-1853)]
    Z=data.frame(x,y)
    fm=lm(y~x,data=Z[temp,]);coef(fm) #calibration regression of proxy on temperature
    summary(fm)

    alpha=coef(fm)[1];beta=coef(fm)[2]
    sigma=summary(fm)$sigma;sigma # 0.1939669
    n=sum(temp);n
    ybar=mean(Z$y[temp]) #-1.383277e-16
    ybar_ver=mean(Z$y[!temp]);ybar_ver # 0.1862975
    X0hat= (ybar_ver-alpha)/beta;X0hat # -0.6282972 #page 49
    #edited on July 17 2008
    D= beta/sqrt(sigma^2*Sxxinv);D #for i=8 1.314614
    #interim in g-calculation; see Draper 1.7.7
    summary(fm)$coefficients[2,]
    #Estimate Std. Error t value Pr(>|t|)
    #0.5900265 0.4488210 1.3146143 0.1925418
    #so D is the t-value here
    g=t0^2/D^2; g #9.237218
    #defined on Draper page 49 middle
    test=summary(fm)$coefficients[2,"t value"];test # 2.929039
    (t0/test)^2 # 2.294335 #from Draper 1981 1.7.7

    Q=(X0hat-xbar)^2*Sxxinv +(1-g)/n ;Q # -0.02400615
    #component of definition in 1.7.6: critical to existence of roots
    if (Q>= 0) {xrange=X0hat+(X0hat-xbar)*g/(1-g) + (1/beta)*t0*sigma* c(-1,1)* sqrt(Q) /(1-g); #this is Draper 1981 page 49 eq 1.7.6
    xlim0=range(c(X0hat,xrange))+c(-1,1)} else xlim0=X0hat+c(-3,3)
    if(i==3) xlim0=c(-20,8)
    if(i==9) xlim0=c(-2,2)
    a=seq(xlim0[1],xlim0[2],.1)

    ylim0=range(Z$y)
    plot(a,alpha+beta*a,type="l",xlab="",ylab="SD Units",ylim=ylim0,yaxs="i",xlim=xlim0)
    #fitted model of proxy versus temperature
    abline(h=ybar_ver,col=4)
    #mean of verification period
    lines(xy.coords(c(X0hat,X0hat),c(-4,ybar_ver)),col=1,lty=2)
    #marks estimated verification mean

    if (Q>=0) { #bounds exist only when the roots are real
    lines(xy.coords(rep(xrange[1],2),c(-4,ybar_ver)),col=2,lty=2) #lower bound plotted
    lines(xy.coords(rep(xrange[2],2),c(-4,ybar_ver)),col=2,lty=2) #upper bound plotted
    }

    lines(a,alpha+beta*a- t0*sigma*(1/n + Sxxinv*(a-xbar)^2)^0.5,lty=2,col=2)
    #lower band; see Figure 1.11
    lines(a,alpha+beta*a+ t0*sigma*(1/n + Sxxinv*(a-xbar)^2)^0.5,lty=2,col=2)
    title(paste(dimnames(proxy)[[2]][i]))
    if (Q>=0) axis(side=1,col="white",at=round(xrange,2),labels=TRUE,outer=FALSE,las=3,font=2,col.axis=2,line=-3)
    axis(side=1,col="white",at=X0hat,labels=round(X0hat,1),outer=FALSE,las=3,font=2,col.axis=1,line=-3)
    axis(side=2,col=4,at=ybar_ver,labels=round(ybar_ver,2),outer=FALSE,las=1,font=2,col.axis=4,line=-3)
    points(xbar_ver,ybar_ver,pch=19,cex=1.2)
    summary(fm)

    mtext(side=1,"deg C",line=2)
    text(xbar_ver,ybar_ver+.4,round(xbar_ver,2),font=2)
    text(locator(1),paste("t0=",round(t0,2),"; t=",round(summary(fm)$coef[2,3],2)),font=2,pos=4) #click on the plot to place the label

    if(Q<0) text(0,3,"Imaginary Roots",pos=4)
    invisible(summary(fm)) #return the regression summary
    }

    plotf(1)
    plotf(2,t0=.5)
    plotf(5,t0=.8)
    plotf(6,t0=.52)
    plotf(7)
    plotf(7,t0=.51)
    plotf(9)
    plotf(9,t0=.75)
    plotf(8)
    plotf(8,t0=1.1)
    plotf(10)
    plotf(10,t0=.2)
    plotf(11)
    plotf(11,t0=1.7)
    plotf(12)
    plotf(12,t0=1.0)
    plotf(13)
    plotf(13,t0=0.4)
    plotf(14)
    plotf(14,t0=0.02)

  3. Dev
    Posted Jul 20, 2008 at 3:45 PM | Permalink

    I may be way off base, but to me the statistical question posed by all the failed proxies seems to be roughly analogous to Mann telling the old business joke, “We lose money on every unit, but we make it up in volume.”

  4. Dave Dardinger
    Posted Jul 20, 2008 at 4:27 PM | Permalink

    Steve,

    Instead of the NA PC3 diagram you have a duplicate copy of the Urals diagram.

  5. Steve McIntyre
    Posted Jul 20, 2008 at 4:42 PM | Permalink

    #3. That’s an apt comment. I can envisage some circumstances where you have a bunch of proxies, none of which individually work very well, but from which you can extract a valid signal. So there’s more to it than the univariate case and I’m re-visiting these issues.

    There's a lesson here for the PR Challenge folks if they want to model realistic pseudoproxies. It's not enough to pitch slow-pitch as they do – adding a bit of white noise or low-order red noise to a signal and recovering the signal. That's easy. What they have to do is to construct "interesting pseudoproxies" that fail similarly to the ones here and show that you can recover a signal from them.

    It’s an interesting problem – but I think that it’s mainly back to statistics and math and doesn’t have much really to do with climate models or the properties of corals. It really is too bad that they decided not to involve me with PR Challenge – I’m really quite a bit more familiar with the issues than the people who were there and have thought about the problems in a more fundamental way. But hey, they’re the Team.

  6. Posted Jul 20, 2008 at 4:45 PM | Permalink

    Steve,

    This script isn’t working for me. I get this message, Error: scale.method not found. There is a supplemental script or package that I’m missing. Please help :).

    Steve: Try.

    scale.method=FALSE

    I’ll try exiting my present R session and running it from scratch some time later. Sorry bout that.

  7. Jonathan Schafer
    Posted Jul 20, 2008 at 6:47 PM | Permalink

    Steve Mc,

    Are there any proxies out there and available that you can/have tested on which do pass a standard T test? Yamal, Tornetrask, any of the speleo, other ice core, non strip bark dendro proxies, etc? I’m wondering whether this type of test would be valid on any proxy at this point.

  8. MrPete
    Posted Jul 20, 2008 at 7:49 PM | Permalink

    typo: …not getting border -> getting bored 😉

  9. MrPete
    Posted Jul 20, 2008 at 7:51 PM | Permalink

    Steve, I’ll echo cdquarles’ request. For your rather large non-statistician peanut gallery, it would help to have a few pertinent examples showing what we *should* expect.

    Right now all we can do is read and say “I dunno what this really means, but Steve says it is bad, and the measures are obviously outside the CI”… not knowing how often these kinds of results are seen, we have no basis for comparison.

  10. Steve McIntyre
    Posted Jul 20, 2008 at 8:35 PM | Permalink

    The Tornetrask example would be an example of a “good” calibration. The trouble with this as a proxy is that it’s been “adjusted” to remove the divergence problem. As far as I can tell, it was Briffa’s first bit of chiropractic, but not his last.

    I’ve been doing similar plots on the Juckes version of these proxies and the t-values in the Juckes version all seem to have improved even if the location is the same. I’ll need to order some of mosher’s crazy pills as well.

  11. MrPete
    Posted Jul 20, 2008 at 10:39 PM | Permalink

    I would love to see some **unadjusted** data (any kinds to begin with, then eventually some climate data) that calibrates well.

    At this point, it feels like this is a statistical test for discovering human intervention in data series. Should NOT be so, AFAIK… I’m just hoping that other scientific data sources can be mined to show that this test is reasonable and useful as a way of finding “good” data. I just get a bad taste when I see sooooo many adjustments…

  12. Steve McIntyre
    Posted Jul 20, 2008 at 11:01 PM | Permalink

    I’ve figured out why the later correlations improve – the temperature data has been adjusted. The comparisons shown here use the temperature data used in MBH. Subsequent revisions to the temperature have “improved” the trend and with it the correlations to upward trending proxies. It’s hard to find bottom.

  13. MarkW
    Posted Jul 21, 2008 at 4:57 AM | Permalink

    if you lower your t-standards, then you can

    Hasn’t that been the problem? The AGW’ers keep lowering their standards?

  14. bender
    Posted Jul 21, 2008 at 8:02 AM | Permalink

    It is not the confidence interval, but the prediction interval that is expected to bracket observations. Prediction intervals are wider than confidence intervals.

  15. kim
    Posted Jul 21, 2008 at 8:50 AM | Permalink

    0 (Steve) Intuitively, the answer to your interesting statistical question is ‘no’. These are not a ‘precise’ enough tool to give ‘accurate’ results. It’s like the roomfull of monkeys; you might get an accurate result, but not likely.
    =========================================

  16. Dave Andrews
    Posted Jul 21, 2008 at 9:36 AM | Permalink

    The following was recently posted on Open Thread 3 at Open Mind (‘Luminous Beauty’,July 19th, 10.41pm)

    “Mann et al. (1998, 1999) used a network of 415 annually resolved proxy data … to reconstruct temperature patterns over the past thousand years. Zhang et al. (Z. Zhang, M. Mann, S. Rutherford, R. Bradley, M. Hughes et al., manuscript in preparation) have more recently assembled a much larger network of 1232 annually resolved proxy data consisting of tree rings, corals and sclerosponge series, ice cores, lake sediments, and speleothems combined with reconstructions of European seasonal surface temperatures back to 1500 CE based on a composite of proxy, historical, and early instrumental data (Luterbacher et al. 2004) … The additional inclusion of non-annually resolved, but still relatively high (e.g., decadal), resolution proxies (e.g., nonlaminated lake and ocean sediments) with high enough resolution and accurate enough age models to calibrate at decadal resolution leads to an even larger network of 1302 proxy series.

    I make that 887 “new” proxies.”

    I wondered if you, Steve, or anyone else was aware of this new paper?

    Steve:
    Nope. But the "415 proxies" are the same old MBH98 proxies, Graybill bristlecones and all. They will probably add in the Briffa MXD network used in Rutherford et al., but I'm not sure what else will be in it. But always keep in mind that the vast majority of these records are very short and do not affect things before 1400. The issue will be what's in the MWP network – bristlecones. And there will be a lot of Graybill bristlecone chronologies, that's for sure.

  17. MrPete
    Posted Jul 21, 2008 at 6:05 PM | Permalink

    …And by only going back to 1500 they ignore the MWP completely.

    Take a look at Craig Loehle’s study for something a bit more interesting.

  18. Posted Jul 21, 2008 at 6:19 PM | Permalink

    Re #17

    I think the ‘1500 CE’ refers to the Luterbacher et al. 2004 proxies only, not the whole study.

  19. Posted Jul 22, 2008 at 9:31 AM | Permalink

    Steve,

    The script is still not working for me. Hard coding the scale.method object took care of the first object not found error, but now I get the object not found error referencing an object named Sxxinv. I have since updated my copy of R, but no joy still when using your script.

    Thanks for the R education and refreshing my statistics skills.

    Steve: Sorry bout that. I need to re-run this not in a session that’s already open. Sxxinv should be solve(Sxx) .

  20. Dave Andrews
    Posted Jul 23, 2008 at 6:03 AM | Permalink

    Re # 16,

    It turns out that the remark by LB is actually more or less a direct quote from an article written by Mann, although that wasn’t immediately obvious.

    The article is 'Climate Over the Past Two Millennia', Annual Review of Earth and Planetary Sciences, vol. 35, 111-136, publication date May 2007. The quote comes from the bottom of p113.

    http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.earth.35.031306.140042@climate.2007.1.issue-1

  21. Steve McIntyre
    Posted Jul 23, 2008 at 6:19 AM | Permalink

    #20. Another example of academic check-kiting by the Team.

  22. Robinedwards
    Posted Jul 23, 2008 at 12:52 PM | Permalink

    Steve, Looong ago you provided me with Mann et al’s 112 columns of data, 13 of which were temperatures and the remainder proxies of various sorts. I have done a /huge/ amount of processing on these data, and long ago reached my own conclusions about what might reasonably be deduced from them, taking the values Mann provided at face value. (Thus not worrying about the data inconsistencies and errors that you detected and published).

    Are we still talking about, and operating on, these data? (Presumably referred to as MBH98).

    I use a method of analysis completely different from your elegant work, making no serious attempt to derive a decent quantitative estimate of the “uncertainties”, although I can, under various assumptions, produce one.

    The method I use provides a grand overview of the data. It avoids setting up an hypothesised model, such as the very commonly used and accepted linear one, which for all we know might well be totally unrelated to the reality behind the origins of the numerical data.

    The technique derives the grand patterns in the data, and enables the behaviour of any chosen subgroups of the data to be compared directly. This yields outcomes that I find very instructive, and which incidentally seem to refute completely any suggestion of “hockey stick” behaviour, save for a few isolated single site series.

    If current thinking is that there might be a “more reliable” data set or sets, would it be possible to get a direct link to them?

    I’d much appreciate any thoughts you (or anyone else) may have.

    Robin

  23. Posted Jul 28, 2008 at 5:26 PM | Permalink

    I’m no mathematician but love the challenge of trying to understand this sort of stuff. My conclusion, as they would say in the East End of London, is “tree rings don’t prove nuffink, mate”. It gives a warm glow to a plump old man when he is able to reduce all your technical gubbins to simple English.

    Keep up the good work.