At present, I’m having trouble figuring out what on earth is going on with MBH confidence intervals. Here is a figure showing the one-sigma confidence intervals in MBH98 (cyan) and MBH99 (salmon).
Figure 1. MBH98 and MBH99 one-sigma confidence intervals by calculation step. Cyan – MBH98; salmon – MBH99. Solid black – CRU std dev; dashed red – "sparse" std dev.
Why is there such a big difference between MBH98 and MBH99 levels between 1400 and 1600? I have no idea.
Why is the confidence interval for the 1000-1400 roster, with only 14 proxies, narrower than for the 1400-1450 roster with 22 proxies? I don’t know.
The solid horizontal line shows the standard deviation of the CRU NH temperature between 1851 and 1980. According to the MBH99 confidence intervals, it is lower than the implied standard deviation of the residuals for all reconstruction steps prior to 1600. How is that consistent with any correlation between the reconstruction and NH temperature? If the residual standard error is at least as large as the standard deviation of the target, the implied explained variance (1 − SE²/SD²) is zero or negative. (The standard deviation of the "sparse" dataset beloved of Mann, shown in dashed red, is a little higher than the implied standard error of the residuals, but doesn’t look like it’s enough to give a correlation higher than about 0.02.) It looks to me like the standard error of the residuals used in the confidence estimation is limited by the standard deviation of the temperature in the calibration period rather than reflecting any statistical skill. I don’t see how any confidence intervals can be placed on the reconstruction before 1600 using the MBH99 sigmas.
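To make the arithmetic explicit, here is a minimal sketch of the relation between the residual standard error, the target standard deviation, and the implied explained variance. The sigma values below are hypothetical, chosen only to illustrate the point; they are not the actual MBH or CRU figures.

```python
# If the calibration residuals are as variable as the target series
# itself, nothing has been explained:
#   r^2 = 1 - (SE / SD)^2
# which is zero or negative whenever SE >= SD.

def implied_r2(se_residuals: float, sd_target: float) -> float:
    """Explained variance implied by residual SE relative to target SD."""
    return 1.0 - (se_residuals / sd_target) ** 2

# Hypothetical values for illustration only (NOT the actual MBH figures):
# a residual SE just below the target SD leaves almost no skill.
print(implied_r2(0.245, 0.2475))  # small positive value, around 0.02
print(implied_r2(0.25, 0.24))     # negative: SE exceeds SD, no skill at all
```

On these numbers, a residual sigma only one percent below the target sigma already caps the explained variance near 0.02, which is why a residual sigma at or above the calibration-period standard deviation is hard to square with any claimed skill.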
I’d welcome any ideas.