## MBH Confidence Intervals #2

At present, I’m having trouble figuring out what on earth is going on with MBH confidence intervals. Here is a figure showing the one-sigma confidence intervals in MBH98 (cyan) and MBH99 (salmon).

Figure 1. MBH98 and MBH99 one-sigma confidence intervals by calculation step. Cyan – MBH98; salmon – MBH99. Solid black – CRU std dev; dashed red – "sparse" std dev.

Why is there such a big difference between MBH98 and MBH99 levels between 1400 and 1600? I have no idea.

Why is the confidence interval for the 1000-1400 roster, with 14 proxies, narrower than for the 1400-1450 roster, with 22 proxies? I don’t know.

The solid horizontal line shows the standard deviation for the CRU NH temperature between 1851 and 1980. It’s lower than the implied standard deviation of the residuals for all reconstructions prior to 1600 according to the MBH99 confidence intervals. How is that consistent with any correlation between the reconstruction and NH temperature? (The standard deviation for the "sparse" dataset beloved of Mann, shown in dashed red, is a little higher than the implied standard error of the residuals, but doesn’t look like it’s enough to give a correlation higher than about 0.02.) It looks to me like the standard error of the residuals used in the confidence estimation is limited by the standard deviation of the temperature in the calibration period rather than any statistical skill. I don’t see how any confidence intervals can be put on the reconstruction before 1600 using the MBH99 sigmas.
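The arithmetic behind this objection can be sketched in a few lines. Under the standard regression identity, the residual variance satisfies var(resid) = (1 − r²)·var(y), so a residual sigma at or above the sigma of the target series implies zero or negative r². The sigma values below are illustrative only, not taken from MBH:

```python
def implied_r2(sigma_resid, sigma_y):
    """Calibration r^2 implied by a residual sigma, via the
    identity var(resid) = (1 - r^2) * var(y)."""
    return 1.0 - (sigma_resid / sigma_y) ** 2

# Illustrative values only (deg C), not taken from MBH:
# if the one-sigma CI equals the target std dev, the implied r^2 is zero;
# if it exceeds it, the implied r^2 is negative -- i.e., no skill at all.
print(implied_r2(0.25, 0.25))  # 0.0
print(implied_r2(0.27, 0.25))  # negative
```

This is the sense in which a confidence interval wider than the target’s own standard deviation is inconsistent with any claimed correlation.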

I’d welcome any ideas.


## 9 Comments

Thank you Steve.

Cyan… that’s the light blue one, right? 😉

My question is, how does Mann produce a one-sigma result for tree ring proxies in the 17th Century that appears to be better than the one produced by the instrumental record of the 20th Century?

Are trees a better measure of temperature than a thermometer inside a Stevenson screen?

John, I’m still not convinced that trees are a useful temperature indicator at all. In simple terms, we have 2 equations, solving for ring width and ring density, and at least 4 major variables: temperature, moisture, CO2 fertilization and sunlight. Unless someone has changed the basic rules of algebra, we do not have enough equations. Unless there are independent sources of measurement in the immediate area of each tree sample for some of the other variables, solving for any individual relationship is not mathematically possible.

The best that can be put together from any of this is an equation that defines optimum ring growth versus the sum of the effects of the 4 major variables.
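The counting argument above can be illustrated with a toy model. The response coefficients below are invented purely for illustration; the point is only that two observables cannot pin down four drivers, so very different climates can leave identical rings:

```python
def rings(temp, moisture, co2, sun):
    """Toy linear model: two observables (width, density) driven by
    four variables. Coefficients are made up for illustration."""
    width = 0.5 * temp + 0.3 * moisture + 0.15 * co2 + 0.05 * sun
    density = 0.4 * temp - 0.2 * moisture + 0.1 * co2 + 0.3 * sun
    return (round(width, 6), round(density, 6))

# A warm, dry, dull year and a cool, wet, sunny year are indistinguishable:
print(rings(temp=1.0, moisture=0.0, co2=0.0, sun=0.0))
print(rings(temp=0.0, moisture=1.3, co2=0.0, sun=2.2))  # identical output
```

With 4 unknowns and only 2 equations, the system is underdetermined: an entire family of climates maps to any given (width, density) pair, so no unique temperature can be recovered without independent measurements of the other drivers.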

Steve and Ross have done an excellent job of showing that Mann et al. did not do a proper job of sorting out the rings. Those are quibbles with the fancy math. The basic math is completely fouled up. If I were grading the math behind the hockey stick (or any paper using tree rings alone as temperature or moisture proxies) as a math paper in school, it would receive an E for Effort, where D is a fail, C is barely a pass and A+ is the top mark.

Re: #3

Michael, that appears to be the “Mind the Gap in the Logic!” question.

If I can refer back to another study, the Keigwin paper on seawater temperatures in the Sargasso, what interests me is that Dr Keigwin provides the key calibration between relative oxygen isotopes in sea water and temperature. Mann does not provide this, nor give any explanation as to what he is measuring with his tree rings.

The other part is that when I asked Dr Keigwin for his study, he replied “How detailed do you want it?”.

I’ve always wondered whether there are other places around the world’s oceans, similar to the Sargasso, where such studies could be done…

The “Hockey Stick” graph is vindicated:

http://www.ucar.edu/news/releases/2005/ammann.shtml

Re #5,

Perhaps "vindicated" is a little strong considering we don’t even know the full content of the paper. I’ll be interested, for example, in their reports of the R-squared statistic for the various reconstructions. But I won’t hold my breath, because I suspect they won’t give a straight answer to that particular question, in the usual hockey team way. I’m sure Steve will help to fill in any gaps…

Steve: Spence, I’ve been working through the code, which changes this quite a bit from the usual Hockey Team situations. They don’t report the R2 anywhere, and it isn’t even calculated in the code as far as I can tell. I’ll calculate the R2 for the AD1400 step tomorrow. The argument appears to be: they claim a higher RE with the bristlecones in than with the bristlecones out, without any other justification. I’ll post this up on the Wahl-Ammann post as well.
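For readers unfamiliar with the two statistics at issue, textbook definitions (not the MBH or WA code) can be sketched as below; `ybar_cal` stands for the calibration-period mean, and the series are made up. The point of contention is that a reconstruction can post a healthy RE simply by capturing a shift in mean while its r² stays low:

```python
def re_stat(obs, rec, ybar_cal):
    """Reduction of error: 1 - SSE / sum of squares about the
    calibration-period mean (textbook form, not the MBH/WA code)."""
    sse = sum((o - r) ** 2 for o, r in zip(obs, rec))
    ssm = sum((o - ybar_cal) ** 2 for o in obs)
    return 1.0 - sse / ssm

def r2_stat(obs, rec):
    """Squared Pearson correlation between observed and reconstructed."""
    n = len(obs)
    mo, mr = sum(obs) / n, sum(rec) / n
    cov = sum((o - mo) * (r - mr) for o, r in zip(obs, rec))
    vo = sum((o - mo) ** 2 for o in obs)
    vr = sum((r - mr) ** 2 for r in rec)
    return cov * cov / (vo * vr)

# Made-up series: the reconstruction tracks the level shift away from the
# calibration mean (-0.5) but almost none of the year-to-year wiggles.
obs = [0.1, 0.2, 0.3, 0.4]
rec = [0.25, 0.24, 0.26, 0.25]
print(re_stat(obs, rec, ybar_cal=-0.5))  # high RE
print(r2_stat(obs, rec))                 # low r^2
```

This is why the commenters press for the r² numbers: RE alone cannot distinguish genuine tracking skill from a fitted offset.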

Not sure, Stephen Berg, if your comment is ironic or not. It seems from the published results that the WA results in fact show that MBH98 (& 99) are junk. WA do a reconstruction omitting the bristlecone pine records, and the result: NO hockey stick, and no merit in the reconstruction. So MBH (& WA) must use the bristlecones to get a result. NOW, try to defend the use of the BCP records. They are clearly not responding to temperature, a fact known to M, B & H, as it is clearly stated in the original research results.

When you also add in the questionable data manipulation in the Polar Urals series to produce higher 20th century values, the doubtful provenance of the Gaspe cedar records (location lost, cherry-picked results acknowledged), and the issues raised with the Tasmanian Huon pine reconstructions, there is nothing left of these reconstructions! Yet RealClimate is out there vigorously defending the same. Isn’t it time they opened their eyes and actually examined the data?

Oh dear, to all readers, please excuse the typos in the above comment. Preview, if it were available, would be my friend.

save the trees