Nick, I have not followed in detail how these calculations are being handled, but is your reference to weighting here a means of proportioning the contribution of a proxy data point to a 20-year period based on the time resolution listed for that proxy in the Marcott SI?

My recon was simple: a weighted mean for each time step (after 20-year interpolation), with the CI taken as just the standard error of that mean. I was interested in doing that because of suggestions that Marcott had not included that variation, and that E was required.
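For concreteness, a minimal R sketch of that kind of calculation (the proxies list, the equal weights, and the 20-year grid are my illustrative assumptions, not Nick's actual code):

# Interpolate each proxy to a common 20-yr grid, then take a weighted
# mean and its standard error at each time step.
# `proxies`: hypothetical list of data frames with columns year, temp.
grid <- seq(0, 11300, by = 20)   # years BP
mat  <- sapply(proxies, function(p) approx(p$year, p$temp, xout = grid)$y)
w    <- rep(1, length(proxies))  # equal weights, purely for illustration

recon <- apply(mat, 1, function(x) weighted.mean(x, w, na.rm = TRUE))
# plain standard error across proxies; genuinely non-uniform weights
# would need an effective-sample-size correction
se    <- apply(mat, 1, function(x) sd(x, na.rm = TRUE) / sqrt(sum(!is.na(x))))
ci    <- cbind(lower = recon - 2 * se, upper = recon + 2 * se)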

On the sources of dating error, Marcott et al. quote a combined figure for each C14 date and use that for the MC perturbations. I think that figure is calculated by CALIB. That program asks for information on slice thickness etc., so I expect it is already including the time-of-sample uncertainty.

From my reading of the Marcott SI and of other papers on the calibration error in relating most of these proxy responses to temperature, the variation is approximately +/-3 degrees C for a spread of +/-2 standard deviations. Marcott, I think, uses that error and the time uncertainty in his Monte Carlo calculations to estimate CIs. The time uncertainty has two sources: the radiocarbon dating and the time averaging of the collected sample. The carbon dating has a theoretical uncertainty (95% CIs, I believe) of +/-16 years over 0-6000 years, on top of which are errors from the laboratory measurements that evidently vary from laboratory to laboratory and sample to sample. The literature on marine deposits and sampling indicates that the time averaging could be such that a sample represents anywhere from a 10- to 100-year average.
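A hedged sketch of what a single Monte Carlo realization might look like under those two error sources (R; the sigmas come from the numbers above, and treating the calibration error as one offset per record per realization is my assumption, not Marcott's documented procedure):

# One MC realization for one proxy: perturb the calibrated ages with the
# quoted dating uncertainty, and shift the whole record by a calibration
# error of sd = 1.5 C (i.e. +/-3 C at 2 sd). Illustrative only.
perturb_proxy <- function(age, temp, age_sd) {
  age_mc  <- age + rnorm(length(age), 0, age_sd)  # dating error per date
  temp_mc <- temp + rnorm(1, 0, 1.5)              # calibration offset per record
  ord <- order(age_mc)                            # keep ages monotone
  list(age = age_mc[ord], temp = temp_mc[ord])
}

Repeating that some thousands of times and taking quantiles of the resulting stack is presumably what yields the CIs.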

The carbon dating error is simply a matter of not knowing exactly which year the sample represents, having instead a probability distribution for it. The time averaging, on the other hand, means that the variation being compared sample to sample is between 10- to 100-year averages, not annual averages, as might otherwise be implied. Annual variations would have to be greater than those seen in 10- to 100-year averages. That smearing alone could average out most of a modern warming period.
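A quick way to see the size of that effect with synthetic data (my own illustration, not from the paper): a 1 degree C ramp over the last century loses about half its amplitude under a 100-year running mean.

# Synthetic 1 C warming from 1900 to 2000 at annual resolution, then
# smeared with a 100-yr running mean to mimic sample time-averaging.
yr    <- 1000:2000
temp  <- pmax(0, (yr - 1900) / 100)          # flat, then +1 C by 2000
smear <- stats::filter(temp, rep(1/100, 100), sides = 2)
max(temp)                  # 1.0
max(smear, na.rm = TRUE)   # ~0.5: about half the spike survives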

Further, the proxies have a median coverage of 7 (mean of 11) data points per 1000 years of reconstruction, and a number of proxies have 2 or 3 or fewer per millennium. Six of the proxies are co-located with another six. I am not sure what the net effect is of so few proxies covering such a large number of years, considering that these proxies are mostly resolved over decadal to centennial periods.
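Those coverage figures are straightforward to check given a proxy table (using the same hypothetical proxies list as above):

# Data points per 1000 yr of record for each proxy.
pts_per_kyr <- sapply(proxies, function(p) 1000 * nrow(p) / diff(range(p$year)))
median(pts_per_kyr)      # compare with the 7 quoted above
mean(pts_per_kyr)        # compare with the 11
sum(pts_per_kyr <= 3)    # proxies with ~3 or fewer points per millennium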

The Marcott SI lists the proxies and their time resolution, but does not provide what I consider a necessary further breakdown of that resolution into dating uncertainty and time averaging of the sample. I would think a proper Monte Carlo would require that breakdown as well.
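In sketch form, the breakdown argued for here would treat each sample time as a window rather than a point (the 10-100 year range is the literature figure quoted above; everything else is illustrative):

# Each sample gets a perturbed centre (dating error) plus an accumulation
# window (time averaging), rather than a single combined sigma.
mc_sample_window <- function(age, dating_sd) {
  centre <- age + rnorm(length(age), 0, dating_sd)  # dating uncertainty
  width  <- runif(length(age), 10, 100)             # slice time-averaging, yr
  data.frame(start = centre - width / 2, end = centre + width / 2)
}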

In order to compare millennial mean temperature differences for the Marcott proxies, I have gone through the variations to be expected from a temperature series at an ocean location with reasonable trends or cycles, and found that those variations are small compared to the measurement/calibration error. I have tentatively concluded that the differences I see in millennial mean temperatures between the six pairs of proxies from the same locations can be attributed to a proxy calibration error of +/-3 degrees C. I have also judged that the Marcott CIs are either too small or apply to unspecified averaging periods longer than annual.
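That tentative attribution is easy to sanity-check: if each member of a co-located pair carries an independent calibration offset with sd = 1.5 C (+/-3 C at 2 sd), the pair differences from that error alone look like this (my simulation, not derived from the actual proxy data):

# Distribution of millennial-mean differences from calibration error alone.
set.seed(1)
diffs <- rnorm(10000, 0, 1.5) - rnorm(10000, 0, 1.5)  # sd = 1.5*sqrt(2) ~ 2.1 C
quantile(abs(diffs), c(0.50, 0.95))  # typical and 95th-percentile |difference|

So pair differences of a few degrees are consistent with calibration error by itself.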

Now I compare this result with the published Marcott graph – see here. These “statistical errors” look to be about 50% larger than those derived by Marcott. Otherwise the overall agreement is rather good. I am using the re-dated proxies, although this only affects the latest couple of points.

Didn’t Craig say their anomaly base is the entire period? If so, then no, your emulation is not in the same style. And the difference accounts for much if not all of the fact that your CIs are still narrow in the base region.

Remember one lesson from the discussion over the last week (and McKitrick’s Randomness-Reducing Ray Gun): whatever you choose for your base period will (using the current calculation method) have a narrower CI. Focus tightly on one year and it will go to zero.
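That pinching effect is easy to demonstrate with random series (R sketch, all values illustrative):

# Anomalizing each realization to a base window forces agreement there,
# so the across-realization spread (hence the CI) pinches in the base period.
set.seed(2)
n_t <- 500; n_mc <- 1000
sims <- apply(matrix(rnorm(n_t * n_mc, sd = 0.1), n_t, n_mc), 2, cumsum)
base <- 200:300                                  # base window (time steps)
anom <- sweep(sims, 2, colMeans(sims[base, ]))   # subtract each run's base mean
spread <- apply(anom, 1, sd)
range(spread[base])   # smallest spread sits inside the base window
# With a one-step base (base <- 250) the spread there is exactly zero.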

As a result I am convinced that they have included between-proxy variation in steps 5-6 of their stack reduction, and that there is no major omission in their CI calculation. I have included the R code.

RomanM: I am still of the opinion that Nathan is overreacting to what I wrote

Nathan is known for his histrionics, so this is nothing new.
