“There are some other frustrating aspects to this diagram. It turns out that the Jones et al [1998] version used in the compilation in Jones et al [2001] is not the same as the archived version for Jones et al [1998], but has been “re-calibrated”. I think that I’ve figured out the “re-calibration”, but it leads into more murky by-ways of the multiproxy underworld.”

I like how you explained that. Very helpful. Thanks.

Reading it, though, I was reminded of the teacher of my first class in statistical physical measurement. Before telling us about errors, he talked about **precision** and **accuracy**. As I recall, his example was: “If you ask the billion Chinese the height of their chairman (whom they have never seen), you will get a very precise, but not necessarily accurate, answer.”

His point was: be very careful when you use the term **error**, since errors can result from many different types of sources. 😉
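The chairman example can be made concrete with a small simulation. This is only an illustrative sketch with made-up numbers: a hypothetical true value of 180 cm, and guesses clustered tightly around a shared wrong belief of 170 cm, so the spread (precision) is small while the bias (accuracy) is large.

```python
import random

random.seed(0)

true_height = 180.0  # cm: the actual value, unknown to the respondents

# Many guesses clustered tightly around a wrong shared belief:
# precise (small spread) but inaccurate (large bias).
guesses = [170.0 + random.gauss(0, 0.5) for _ in range(100_000)]

mean = sum(guesses) / len(guesses)
spread = (sum((g - mean) ** 2 for g in guesses) / len(guesses)) ** 0.5

print(f"mean guess: {mean:.1f} cm, spread: {spread:.2f} cm")
print(f"bias (accuracy error): {mean - true_height:.1f} cm")
```

The spread is about half a centimetre, so any single statistical summary of the guesses looks very trustworthy, yet the answer is off by ten centimetres. No amount of averaging shrinks the bias.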

a. Is there autocorrelation?

b. How should autocorrelation be handled?

I could imagine a case, for instance, where there is agreement on the method of dealing with autocorrelation but disagreement on whether it is present. Or vice versa. Let’s drill down.
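Question (a) can be approached by estimating the lag-1 autocorrelation of the series, and one common answer to question (b) is to deflate the sample size accordingly. Here is a minimal sketch on a hypothetical AR(1) series (the 0.8 persistence and the effective-sample-size formula are illustrative assumptions, not anything from the discussion above):

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient of a series."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def effective_n(n, r1):
    """Effective sample size under an AR(1) assumption:
    n_eff = n * (1 - r1) / (1 + r1)."""
    return n * (1 - r1) / (1 + r1)

# A hypothetical persistent series: each value carries over 80% of the last.
random.seed(1)
x = [0.0]
for _ in range(499):
    x.append(0.8 * x[-1] + random.gauss(0, 1))

r1 = lag1_autocorr(x)
print(f"lag-1 autocorrelation: {r1:.2f}")
print(f"nominal n = {len(x)}, effective n ~ {effective_n(len(x), r1):.0f}")
```

With persistence near 0.8, the 500 nominal observations behave like only a few dozen independent ones, which is exactly why the two questions above can be argued separately: one can accept this correction method while disputing the estimated r1, or the reverse.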

One might reasonably ask why one would include a proxy with a larger inherent variation. The answer is that it’s not proper statistics to deliberately exclude data just because it’s going to make your fit worse for a period.
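The statistically proper way to handle a noisier proxy is to down-weight it, not to drop it. A minimal sketch of inverse-variance weighting, using made-up proxy anomalies and noise levels purely for illustration:

```python
def inverse_variance_mean(values, variances):
    """Combine estimates by weighting each with 1/variance:
    a noisier proxy gets less weight but is not thrown away."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    combined_variance = 1.0 / total
    return mean, combined_variance

# Hypothetical proxy anomalies (deg C) with differing noise levels;
# the third proxy is the "noisy" one someone might want to exclude.
values    = [0.30, 0.25, 0.60]
variances = [0.01, 0.01, 0.25]

mean, var = inverse_variance_mean(values, variances)
print(f"combined estimate: {mean:.3f} +/- {var ** 0.5:.3f}")
```

Note that the combined variance is smaller than the variance of either good proxy alone: even a noisy series carries some information, so including it (properly weighted) improves the estimate rather than degrading it.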

The point is that the most temperature-sensitive series, those near the polar or elevational end of their range, are also likely to be a long distance (both in kilometres and in temperature) from the nearest long-term ground-station temperature record.

Consider, for example, two stands of trees only 1 km apart, one on the north slope of a mountain and the other on the south slope. We all know that both the average temperature and the changes in temperature can differ significantly between the two locations.

To understand their changes in growth, we compare them to the nearest temperature record, which may be 30 km away and down in a valley.

Now me, I’m not a brave enough man to attempt to numerically estimate the underlying error inherent in this calculation, but I can tell you one thing for sure …

It’s more than a tenth of a degree, which is the claimed statistical error in the reconstructions above.

w.

A proper homogeneity correction can ONLY be carried out if there are sufficient stations within a radius of 1000 km. If that is not the case, then the confidence interval of the average annual temperature is, IMHO, as big as 1 Kelvin, the approximate average station inhomogeneity.

I would therefore imagine that the confidence interval of the surface record is inversely proportional to station density, increasing dramatically in the 19th century.
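The scaling can be sketched numerically. Under the (strong) simplifying assumption that station inhomogeneities of about 1 K are independent errors, the confidence interval on a regional mean shrinks like 1/sqrt(n) with the number of contributing stations; correlated inhomogeneities would shrink it more slowly. The numbers here are illustrative, not taken from any actual record:

```python
def regional_ci(n_stations, station_sigma=1.0):
    """Half-width of a ~95% confidence interval on a regional mean,
    treating each station's ~1 K inhomogeneity (assumed) as an
    independent error: the CI shrinks like 1/sqrt(n)."""
    return 1.96 * station_sigma / n_stations ** 0.5

for n in (1, 4, 25, 100):
    print(f"{n:>4} stations -> +/- {regional_ci(n):.2f} K")
```

A sparse 19th-century network of a handful of stations thus carries a confidence interval approaching the 1 K station inhomogeneity itself, while a dense modern network can push it well below that.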

Now, the proxies were calibrated against the surface record; was the confidence interval of the surface record ever taken into account?

In all the graphs of the surface record that I saw, I never saw an error bar….