I did some simulations of reconstructions with uncertainty envelopes and got very interesting results, without even considering the temperature-proxy relation. With a small sample size and an unknown dynamic model, we don’t even need to get to the proxies before we break the error envelopes. Not necessarily related to the MBH studies, but worth a look I think. Link here

Peter *still* doesn’t know Jack.

Suppose dendroclimatology can be represented as an inferential chain of reasoning:

1. A=>B estimation of mean chronology

2. B=>C response function calibration

3. C=>D verification of calibration

4. D=>E reconstruction / extrapolation

Given that there is uncertainty in each of the four inferential steps, what is the overall probability that A=>E? The uncertainty in each step is substantial, and the cumulative error is reflected in the product p1*p2*p3*p4. Even if the probability that each inference is correct is 0.6, which is a very generous assessment in the case of dendroclimatology, the certainty associated with the whole chain of reasoning is 0.6^4 ≈ 0.13.
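The arithmetic is easy to check directly. A minimal sketch, using the (admittedly generous) 0.6 per-step figure from the comment above and treating the four steps as independent:

```python
# Probability that a chain of independent inferential steps is jointly correct.
# The 0.6 per-step value is the illustrative figure used in the comment above.
def chain_probability(step_probs):
    """Product of per-step probabilities: P(A=>E) = p1*p2*p3*p4."""
    result = 1.0
    for p in step_probs:
        result *= p
    return result

# Four steps: chronology, calibration, verification, reconstruction.
p = chain_probability([0.6, 0.6, 0.6, 0.6])
print(round(p, 4))  # 0.1296, i.e. ~0.13
```

Note this assumes the steps fail independently; if the errors are correlated the joint probability differs, but the multiplicative shrinkage is the point.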

Why do dendroclimatologists routinely ignore the serious problem of error propagation in their work?

Imagine what the graph above would look like if they included envelopes of uncertainty around each of the strands of spaghetti. A big fat band of gray.

This is probably why the MWP is always missing from these “reconstructions” – it’s probably not reconstructable (search for “Fritts” above). In which case the only way to get it back is to force them to publish confidence envelopes.
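The widening of the gray band can be illustrated with a toy Monte Carlo – not anyone’s actual reconstruction method, just independent additive noise at each of the four steps of the chain, with all noise levels hypothetical:

```python
import random

# Toy Monte Carlo: propagate independent noise through the four-step chain
# (chronology -> calibration -> verification -> reconstruction) and measure
# the width of the resulting uncertainty envelope. The noise levels are
# hypothetical illustrations, not estimates from any actual proxy network.
random.seed(0)

def noisy_chain(true_value, step_sds, n_trials=10000):
    """Simulate the chain output with additive Gaussian noise at each step."""
    outputs = []
    for _ in range(n_trials):
        x = true_value
        for sd in step_sds:
            x += random.gauss(0.0, sd)
        outputs.append(x)
    return outputs

outs = sorted(noisy_chain(0.0, step_sds=[0.1, 0.1, 0.1, 0.1]))
lo = outs[int(0.025 * len(outs))]
hi = outs[int(0.975 * len(outs))]
# For independent steps the variances add: sd_total = sqrt(4) * 0.1 = 0.2,
# so the 95% envelope is roughly +/- 0.39 -- far wider than any single step's.
print(round(hi - lo, 2))
```

Four modest per-step errors of 0.1 combine into a total envelope about four times wider than one step alone, which is the fat gray band.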

Like the early posters, I can’t get the original article from the link to “http://data.climateaudit.org/pdf/2005.burger.pdf”.

Is it available ?

It is now that I’ve put the correct extension on the file…


There’s a considerable amount of overhead involved in replicating their method – much of it is just figuring out how it works. Commendably, Rutherford has archived code and data (but not the Briffa MXD data), so replication should be possible. His blocking me delayed my assessment, as I didn’t know that the methods had finally been archived, but now that I’m in possession of the SI, I’ll get to it some time. So much to do.

Impressionistically, my guess is that the RegEM reconstruction is simply one more linear weighting of the proxies (or very close to linear). It will undoubtedly have the same properties as MBH98 with respect to bristlecones (notably not discussed by Rutherford, Mann et al.) and with respect to cross-validation R2 statistics (notably not reported). The cross-validation CE statistics (which give similarly failed results to R2) are included in the SI, but NOT mentioned or discussed in the article itself. Pretty cute: they can say that they disclosed the adverse CE results in the SI as a line of retreat. Oddly, Mann, Rutherford, Ammann and Wahl [J Clim 2005] report cross-validation RE, R2 and CE in connection with model results – where you don’t get the R2 failures in a model world.
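For readers unfamiliar with the verification statistics at issue, here is a minimal sketch of the standard definitions (the series below are made-up illustrations, purely to exercise the formulas – not data from any of the papers discussed):

```python
# Standard verification statistics for a reconstruction. RE benchmarks
# squared error against the calibration-period mean; CE benchmarks it
# against the verification-period mean (a stricter test, so CE <= RE);
# R2 is the squared correlation between observed and reconstructed series.
def verification_stats(obs, rec, calib_mean):
    """Return (RE, CE, R2) for a verification period."""
    n = len(obs)
    obs_mean = sum(obs) / n
    rec_mean = sum(rec) / n
    sse = sum((o - r) ** 2 for o, r in zip(obs, rec))
    re = 1.0 - sse / sum((o - calib_mean) ** 2 for o in obs)
    ce = 1.0 - sse / sum((o - obs_mean) ** 2 for o in obs)
    cov = sum((o - obs_mean) * (r - rec_mean) for o, r in zip(obs, rec))
    var_o = sum((o - obs_mean) ** 2 for o in obs)
    var_r = sum((r - rec_mean) ** 2 for r in rec)
    r2 = cov * cov / (var_o * var_r)
    return re, ce, r2

# Made-up verification-period anomalies, for illustration only:
obs = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2]
rec = [0.0, -0.1, 0.2, 0.1, -0.2, 0.1]
re, ce, r2 = verification_stats(obs, rec, calib_mean=-0.3)
print(round(re, 3), round(ce, 3), round(r2, 3))
```

Because the verification mean minimizes the sum of squares in the denominator, CE can never exceed RE – which is why a reconstruction can pass RE while failing CE and R2, the pattern at issue above.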
