Figure 3. Stratigraphic plots of relative abundance of selected planktic foraminifers, δ18O for Globigerinoides ruber (white variety), and winter sea-surface temperature estimates in RC 12-10. Data are from the work of Dowsett et al. [2003b].

Poore, R. Z., H. J. Dowsett, S. Verardo, and T. M. Quinn, Millennial- to century-scale variability in Gulf of Mexico Holocene climate records, Paleoceanography, 18(2), 1048, doi:10.1029/2002PA000868, 2003.

If I remember correctly Moberg tested for removing any single proxy to see how much it changed the final result, and it didn’t to any significant degree.

That’s because Moberg inventively included two dodgy proxies for temperature.

Learning from Mann et al.’s “mistake” of including only one dodgy proxy (bristlecones), Moberg included two, so that he could say his results were “robust” with respect to removal of any single proxy.

If you look through Steve M’s archive page More on Moberg you will find graphics for each of Moberg’s proxies, and they include two suspect ones, #1 and #11, that both “coincidentally” have a Hockey Stick shape to them.

Leave only one in and Moberg’s process still overweights it and produces a “robust” Hockey Stick.

However, what he is doing is mining for non-normal datasets. If a distribution is fat-tailed, then normalizing (subtract the mean, divide by the standard deviation) emphasizes the tails. This *may* be correctable if the series is autocorrelated, because the lower effective “N” increases the standard deviation … but on the other hand, it may not be. I doubt that Moberg made any such correction; at least, I find no mention of it.

I suspect that in fact, this is a fundamental flaw of his method. Essentially, he is doing a Fourier analysis of sorts on the data, breaking it down into sinusoidal or quasi-sinusoidal frequencies. The problem is that a sine wave is non-normal, since the value spends more time at the extreme values. Let me go take a look …
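The claim about a sine wave’s distribution is easy to check numerically. A quick sketch (mine, not part of the original analysis): sampling a pure sine wave at uniformly distributed times gives values following the arcsine distribution, which piles up probability near the extremes rather than near the mean, unlike a normal distribution.

```python
import math
import random

# Sample a pure sine wave at uniformly distributed phases.
random.seed(0)
n = 100_000
values = [math.sin(2 * math.pi * random.random()) for _ in range(n)]

# Fraction of samples near the extremes (|x| > 0.9) vs. near the center (|x| < 0.1).
near_extremes = sum(1 for v in values if abs(v) > 0.9) / n
near_center = sum(1 for v in values if abs(v) < 0.1) / n

print(f"near extremes: {near_extremes:.3f}")  # analytically 1 - (2/pi)*asin(0.9) ≈ 0.287
print(f"near center:   {near_center:.3f}")    # analytically (2/pi)*asin(0.1) ≈ 0.064
```

Even though both bands have the same width (0.2), the sine wave spends over four times as long near its extremes as near its mean.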

… 1 hour pause for confirmation …

Yes, that’s the case. I constructed a synthetic series by averaging two different sinusoidal proxies. I had constructed the proxies previously through the addition of 4 sine waves that differed in frequency, phase and amplitude, so I knew the actual underlying functions. These two proxies had the same four underlying frequencies for both proxies, and differed only in phase and amplitude. Here are the proxies and the data.

The first proxy (blue) explains 23% of the variance, and the other explains 58%. Of course, between them they explain 100% of the variance.

Then, a la Moberg, I did the following:

1. Normalized the proxies.

2. Extracted the underlying four sine waves.

3. Averaged the corresponding frequencies.

4. Recombined the averaged sine waves.

5. Adjusted the mean and standard deviation of the reconstruction to that of the data.

Here are the results …
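The five steps above can be sketched in code. This is my own minimal reconstruction of the experiment as described, with illustrative frequencies, amplitudes, and phases (none of these numbers are from Moberg); the frequency extraction here projects onto the known sinusoids, whereas Moberg used wavelets.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)

# Four shared frequencies; each proxy gets its own random phases and amplitudes.
freqs = np.array([1/500, 1/120, 1/40, 1/11])

def make_proxy(amps, phases):
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))

proxy1 = make_proxy(rng.uniform(0.5, 2, 4), rng.uniform(0, 2 * np.pi, 4))
proxy2 = make_proxy(rng.uniform(0.5, 2, 4), rng.uniform(0, 2 * np.pi, 4))
data = (proxy1 + proxy2) / 2  # the "true" series: average of the two proxies

# 1. Normalize the proxies.
def normalize(x):
    return (x - x.mean()) / x.std()
p1n, p2n = normalize(proxy1), normalize(proxy2)

# 2. Extract the component at each known frequency (projection onto sin/cos).
def component(x, f):
    c = np.cos(2 * np.pi * f * t)
    s = np.sin(2 * np.pi * f * t)
    return (2 / len(t)) * ((x @ c) * c + (x @ s) * s)

# 3. Average the corresponding frequencies across the two proxies.
avg_components = [(component(p1n, f) + component(p2n, f)) / 2 for f in freqs]

# 4. Recombine the averaged components.
recon = sum(avg_components)

# 5. Adjust the mean and standard deviation of the reconstruction to the data.
recon = (recon - recon.mean()) / recon.std() * data.std() + data.mean()

print("correlation with data:", np.corrcoef(recon, data)[0, 1])
```

By construction the reconstruction matches the data’s mean and standard deviation exactly (step 5), yet its shape differs from the data because the normalization in step 1 reweighted the two proxies.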

The problem, of course, lies in the incorrect calculation of the standard deviations of autocorrelated time series. Since this data is extremely autocorrelated, the difficulty is clear. Without an adjustment for autocorrelation, Moberg’s method contains an error of unknown size.

w.

There’s an amusing discussion of confidence intervals in Moberg’s SI:

Due to the relative shortness of the calibration period (124 years) compared to the longest timescales of interest, there is an uncertainty in the determination of the factor f [“variance scaling factor”]. It is practically impossible to quantify this uncertainty directly from the data used in the calibration period. Although it would in principle be possible to calculate a confidence interval for the variance ratio by using the F-distribution, it turns out that the resulting confidence interval becomes very large if one also accounts for autocorrelation — which is nearly 0.9 for a lag of 1 year in both the reconstruction and the instrumental data (after removal of

This would be a useful calculation for Jean S or UC to look at.
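As a first pass at that calculation (my sketch, using only the numbers quoted in the SI: a 124-year calibration period and lag-1 autocorrelation of about 0.9), one can compare the 95% F-distribution interval for a variance ratio under the nominal degrees of freedom with the interval under the effective degrees of freedom implied by the autocorrelation:

```python
from scipy.stats import f

n = 124   # calibration period length, from Moberg's SI
r1 = 0.9  # lag-1 autocorrelation quoted in the SI

# Effective sample size for an AR(1) series, and a rough effective dof.
n_eff = n * (1 - r1) / (1 + r1)   # ≈ 6.5
df = max(int(n_eff) - 1, 1)

# 95% interval of the F distribution: the variance-ratio confidence
# interval is this wide (relative to the observed ratio).
lo_iid, hi_iid = f.interval(0.95, n - 1, n - 1)
lo_eff, hi_eff = f.interval(0.95, df, df)

print(f"nominal dof ({n - 1}):  F interval ≈ [{lo_iid:.2f}, {hi_iid:.2f}]")
print(f"effective dof ({df}): F interval ≈ [{lo_eff:.2f}, {hi_eff:.2f}]")
```

With effective degrees of freedom in the single digits, the interval spans nearly two orders of magnitude, which is consistent with the SI’s remark that the confidence interval “becomes very large” once autocorrelation is accounted for.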

If I remember correctly Moberg tested for removing any single proxy to see how much it changed the final result, and it didn’t to any significant degree.

However, with 11 proxies this is not at all surprising. Suppose we have 11 proxies. All but one of them are randomly distributed around 100, with a standard deviation of say 10, and the final proxy is distributed around 200, with a standard deviation of 20. Let’s say we’re looking at the mean and standard deviation of the average of the proxies as our variables of interest.

The mean of all of the proxies is (10 × 100 + 200)/11 ≈ 109.1. The standard deviation of this mean adds in quadrature, so it is √(10 × 10² + 20²)/11 = √1400/11 ≈ 3.4.

If we pull out a correct proxy, the mean of the remaining proxies is (9 × 100 + 200)/10 = 110. The standard deviation of the result adds in quadrature, so it is √(9 × 10² + 20²)/10 = √1300/10 ≈ 3.6.

If we pull out the incorrect proxy, the mean of the remaining proxies is (10 × 100)/10 = 100. The standard deviation of the result again adds in quadrature, so it is √(10 × 10²)/10 = √1000/10 ≈ 3.2.

Now, this is an extreme example, where one of the proxies is badly wrong, and still pulling out any one proxy doesn’t make much difference (less than 10%).
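The leave-one-out arithmetic above can be checked in a few lines (illustrative numbers from the example, not real proxies):

```python
# Ten "correct" proxies near 100 and one "incorrect" proxy near 200
# (using the stated means; the point here is about the mean, not the spread).
proxies = [100] * 10 + [200]

full_mean = sum(proxies) / len(proxies)  # 1200/11 ≈ 109.1

# Mean after removing each proxy in turn.
loo_means = [sum(proxies[:i] + proxies[i + 1:]) / (len(proxies) - 1)
             for i in range(len(proxies))]

# Largest relative change from removing any single proxy.
max_shift = max(abs(m - full_mean) / full_mean for m in loo_means)
print(f"full mean = {full_mean:.1f}")
print(f"largest leave-one-out shift = {max_shift:.1%}")  # ≈ 8.3%, under 10%
```

Even removing the proxy that is off by a factor of two only moves the average by about 8%, so a leave-one-out “robustness” check cannot detect it.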

And in the real world, things are mushier, and errors are never in quadrature, so the removal of an incorrect proxy may, paradoxically, make your results closer to reality … so in fact, finding things basically unchanged after removing any single one of eleven proxies doesn’t mean much.

w.

Given that percentage G. bulloides has a negative relationship with SST – which is what it is directly measuring – if the proxy were calibrated to SST, wouldn’t it have a negative effect?

I disagree that Moberg clearly stated that increasing percentages of G. bulloides were evidence of colder local SST – or that he clearly articulated why colder SST offshore Oman should be regarded as especially strong evidence of global warming.
