“My recon was simple; the recon was a weighted mean for each time (after 20yr interpolation), and the CI was just the standard error of that mean.”

Nick, I have not followed in detail how these calculations are being handled, but is your reference to weighting here a means of apportioning the contribution of a proxy data point to a 20 year period based on the time resolution listed for that proxy in the Marcott SI?

]]>Kenneth,

My recon was simple; the recon was a weighted mean for each time (after 20yr interpolation), and the CI was just the standard error of that mean. I was interested to do that because of suggestions that Marcott had not included that variation, and that E was required.
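A minimal sketch of that kind of calculation, assuming the proxies have already been interpolated to a common 20-year grid. Everything here (array sizes, weights, random data, the effective-sample-size formula for the weighted SE) is illustrative, not the actual proxies or the actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_proxies, n_times = 73, 50
# hypothetical proxy anomalies (deg C) on a common 20-year grid
temps = rng.normal(0.0, 1.5, size=(n_proxies, n_times))
weights = rng.uniform(0.5, 1.5, size=n_proxies)  # e.g. area weights

w = weights[:, None]
mean = (w * temps).sum(axis=0) / w.sum()  # weighted mean at each time

# standard error of the weighted mean via an effective sample size
n_eff = weights.sum() ** 2 / (weights ** 2).sum()
resid = temps - mean
var_w = (w * resid ** 2).sum(axis=0) / w.sum()
se = np.sqrt(var_w / n_eff)

ci_lo, ci_hi = mean - 2 * se, mean + 2 * se  # ~95% band
```

The CI here is purely the between-proxy scatter at each time step; it carries no dating or calibration uncertainty.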

On the sources of dating error, Marcott et al. quote a combined figure for each C14 date and use that for the MC perturbations. I think that figure is calculated by CALIB. That program asks for information on slice thickness etc., so I expect it is including the time-of-sampling uncertainty.

]]>Nick, I do not know what assumption you have made in your calculations nor for that matter the Marcott authors. I do not see calculating CIs for the Marcott reconstruction as a simple proposition.

From my reading of the Marcott SI and other papers on the calibration error in relating most of these proxies' responses to temperature, the variation is approximately +/- 3 degrees C for a spread of +/- 2 standard deviations. Marcott, I think, uses that error and the time uncertainty in his Monte Carlo calculations to estimate CIs. The time uncertainty has two sources: one is the radiocarbon dating and the other is the time averaging of the sample collected. The carbon dating has a theoretical uncertainty (95% CIs, I believe) from 0-6000 years of +/- 16 years, on top of which are errors due to the laboratory measurements that evidently vary from laboratory to laboratory and sample to sample. The literature on marine deposits and sampling indicates that the time averaging of the collected sample could be such that the sample represents anywhere from a 10- to 100-year average.
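One Monte Carlo draw combining those two error sources might look like the sketch below. The control-point ages, the 100-year dating sigma, and the single per-record calibration offset are all hypothetical simplifications, not the Marcott authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

ages = np.array([200.0, 1400.0, 2600.0, 4100.0, 5500.0])  # YBP control points
temps = np.array([0.3, 0.1, -0.2, -0.1, -0.4])            # proxy temps, deg C
age_sigma = 100.0  # combined dating uncertainty (illustrative value)
cal_sigma = 1.5    # calibration error, ~ +/- 3 C at 2 sd

grid = np.arange(0, 5600, 20)  # 20-year interpolation grid

def mc_draw():
    # perturb the dated ages (sorted to keep them monotone) and add
    # one calibration offset for the whole record, then re-interpolate
    a = np.sort(ages + rng.normal(0, age_sigma, ages.size))
    t = temps + rng.normal(0, cal_sigma)
    return np.interp(grid, a, t)

ensemble = np.array([mc_draw() for _ in range(1000)])
median = np.median(ensemble, axis=0)
lo, hi = np.percentile(ensemble, [5, 95], axis=0)
```

The spread of the ensemble at each grid time then gives the CI, which is the general shape of a perturbation-based uncertainty estimate.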

The carbon dating error is simply a matter of not knowing exactly what year the sample was collected but rather having a probability distribution instead. The time averaging, on the other hand, means that the variation being compared sample to sample is between 10- to 100-year averages, not annual averages as might be implied unless otherwise noted. The annual variations would have to be greater than those determined from 10- to 100-year averages. That smearing alone could average out most of a modern warming period.
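The attenuation from that smearing is easy to demonstrate with a toy series. This is illustrative only: a hypothetical 1 C rise over the last century of a 1000-year annual series, passed through a 100-year boxcar to mimic a sample that integrates a century of deposition:

```python
import numpy as np

years = np.arange(0, 1000)
temp = np.zeros(years.size)
temp[-100:] = np.linspace(0.0, 1.0, 100)  # 1 C rise over the final century

window = 100
kernel = np.ones(window) / window          # 100-year boxcar average
smeared = np.convolve(temp, kernel, mode="valid")

peak_true = temp.max()       # 1.0 C in the annual series
peak_smeared = smeared.max() # the ramp spans one window, so ~0.5 C survives
```

A ramp that lasts exactly one averaging window loses about half its amplitude; shorter excursions lose correspondingly more.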

Further, the proxy samples have a median coverage of 7 (average of 11) data points per 1000 years of reconstruction coverage, and a number of proxies have 2 or 3 or fewer per millennium. Six of the proxies are co-located with another six. I am not sure what the net effect of so few proxies covering such a large number of years is, considering that these proxies are mostly spread over decadal and centennial periods.

The Marcott SI lists the proxies and the time resolution, but does not do what I consider a further necessary breakdown of that resolution into dating uncertainty and time averaging of the sample. I would think a proper Monte Carlo would require that breakdown as well.

In order to compare millennial mean temperature differences for the Marcott proxies, I have gone through the variations to be expected from a temperature series at an ocean location with reasonable trends or cycles and found that those variations are small compared to the measurement/calibration error. I have tentatively concluded that the differences I see in millennial mean temperatures for the six pairs of proxies from the same locations can be attributed to a proxy calibration error of +/- 3 degrees C. I also judge that the Marcott CIs are either too small or apply to unspecified time periods longer than annual.

Strangely enough, I also just completed an error analysis of the 73 proxies. In this case I calculated the standard deviations for all 73 temperature proxies individually between 5500 and 4500 YBP. By binning the data every 100 years we then know how many proxy measurements contribute to each global anomaly value, and how this varies with time. This then yields the statistical error on the global average, which typically varies from 0.1 C to 0.15 C depending on the population n (sigma/sqrt(n)). Using a 95% confidence band of 2 sigma we can then plot the result – as shown here.
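A sketch of that binning calculation, with synthetic data standing in for the proxy measurements (the point counts and the sigma of roughly 1 C are assumptions chosen to reproduce the quoted 0.1-0.15 C error scale):

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical individual proxy measurements between 4500 and 5500 YBP
ages = rng.uniform(4500, 5500, size=500)
temps = rng.normal(0.0, 1.0, size=500)  # anomalies with sigma ~ 1 C

bins = np.arange(4500, 5501, 100)       # 100-year bins
idx = np.digitize(ages, bins) - 1

means, errs = [], []
for b in range(len(bins) - 1):
    vals = temps[idx == b]
    n = vals.size
    means.append(vals.mean())
    errs.append(vals.std(ddof=1) / np.sqrt(n))  # sigma/sqrt(n) per bin
```

With sigma near 1 C and roughly 50 points per bin, the per-bin error comes out around 0.14 C, i.e. in the range quoted above.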

Now I compare this result with the published Marcott graph – see here. These “statistical errors” look to be about 50% larger than those derived by Marcott. Otherwise the overall agreement is rather good. I am using the re-dated proxies, although this only affects the latest couple of points.

]]>Re: Nick Stokes (Apr 13 06:38),

Didn’t Craig say their anomaly base is the entire period? If so, then no, your emulation is not in the same style. And the difference accounts for much, if not all, of the fact that your CIs are still narrow in the base region.

Remember one lesson from the discussion over the last week (and McKitrick’s Randomness-Reducing Ray Gun): whatever you choose for your base period will (using the current calculation method) have narrower CIs. Focus tightly on one year and they will go to zero.
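That base-period effect can be demonstrated with a toy ensemble (everything here is synthetic; the base-period placement and ensemble size are arbitrary choices). Expressing each member as an anomaly from its own base-period mean shrinks the cross-ensemble spread inside the base period, and a one-point base drives it to zero there:

```python
import numpy as np

rng = np.random.default_rng(3)

n_members, n_times = 2000, 100
ensemble = rng.normal(0.0, 1.0, size=(n_members, n_times))

base = slice(40, 60)  # a 20-step base period
anoms = ensemble - ensemble[:, base].mean(axis=1, keepdims=True)

spread = anoms.std(axis=0)
# spread inside the base period is systematically smaller than outside,
# even though the underlying variability is identical everywhere

one_pt = ensemble - ensemble[:, [50]]   # single-year base period
# one_pt.std(axis=0)[50] is exactly zero: the CI collapses at the base year
```

Nothing about the data changed; only the anomaly convention did. That is why comparing CI widths across reconstructions requires knowing the base period each one used.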

]]>As a result I am convinced that they have included between proxy variation in steps 5-6 of their stack reduction, and that there is no major omission in their CI calculation. I have included the R code.

]]>Does anyone know what time frame is assumed for the Marcott Monte Carlo estimated CIs? Is it a temperature averaged over 20 years? If so, what does that imply for CIs over shorter or longer time periods? Does a 20-year time period make sense in light of the Marcott paper showing that variability in the reconstruction under 300 years is 0? Would not CIs for a millennium be a better measure of uncertainty, implying from those limits what the CIs would be for shorter time periods?

]]>RomanM, I may have spoken too soon about the Marcott CIs. I need to think about the time frame used to calculate the CIs.

]]>RomanM, I have been doing some calculations as indicated above, and the CIs I derive for the Marcott Monte Carlo reconstruction indicate that Marcott used larger calibration regression SEs than he would have obtained from the calibration regression equations he shows in the SI for the UK37 and Mg/Ca proxies. (I did not do my Monte Carlo the way the Marcott authors did, and that might account for some of the difference.) My CIs would in fact be more in line with what you calculated for the regression SE for UK37. He does report using SEs of 1.7, 1.7 and 1.0 degrees C for TEX86, chironomids and pollen proxies, respectively, and those values are more in line with the 1.5 degrees C you calculated for UK37.

]]>RomanM:

RomanM: I am still of the opinion that Nathan is overreacting to what I wrote

Nathan is known for his histrionics, so this is nothing new.

]]>