Sorry if that was unclear. The implicit assumption in doing such a selection is that the instrumental temperature trend is the actual temperature trend, hence “correlated with assumed temperature trend.”

Again, as Steve points out and you correctly surmise, selection on the dependent variable amounts to assuming your conclusions, and in your example you could use the same methodology to arrive at opposite conclusions.
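To see how screening on the assumed instrumental trend manufactures a conclusion from nothing, here is a minimal sketch in which pure noise is screened for correlation with a rising trend. All sizes and the 0.2 screening threshold are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, cal = 1000, 200, 50  # hypothetical sizes: 1000 "proxies", 50-yr calibration
noise = rng.standard_normal((n_series, n_years))  # pure noise, no climate signal at all
trend = np.linspace(0.0, 1.0, cal)                # the assumed instrumental trend

# Screen: keep only series correlated with the trend over the calibration window
r = np.array([np.corrcoef(s[-cal:], trend)[0, 1] for s in noise])
selected = noise[r > 0.2]  # 0.2 is an arbitrary screening threshold

# The composite of the screened pure-noise series now tracks the assumed trend
# in the calibration window while staying flat before it
composite = selected.mean(axis=0)
print(len(selected), np.corrcoef(composite[-cal:], trend)[0, 1])
```

Screening on anticorrelation with the same trend would, by symmetry, deliver the opposite "result" from the same noise.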

No collegiality in the other camp for those who intend to look too closely. Science in the dark, and how they squawk when you shine some light on their doings.

Warwick has discussed these events here on his blog.

You can see from the email to Teunisson that Jones’ personal response was immediately hostile, even though his first replies to Warwick seemed cheerful enough.

First, the 95% bootstrap confidence intervals are undoubtedly based on the usual bootstrap assumption that the errors are independent. In fact, many of the 692 records are concentrated in a few regions: tree rings in western N. America and central Asia, ice cores in Greenland and Antarctica, marine sediments in the N. Atlantic, etc. (see PAGES Figure 1). This means that many of the errors are in fact highly correlated, and the confidence intervals are therefore much too small. I don’t see any way to modify bootstrapping to take this into account.
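The size of this effect can be sketched numerically. The record count, the correlation level, and the one-factor error structure below are hypothetical, not the actual PAGES setup; the point is only that resampling records as if independent understates the true sampling variability of their mean:

```python
import numpy as np

rng = np.random.default_rng(1)
n_rec, rho = 100, 0.5  # hypothetical: 100 records, pairwise error correlation 0.5

# One-factor error structure: a shared "regional" error plus idiosyncratic noise,
# so every pair of records has correlation rho
def cross_section():
    common = np.sqrt(rho) * rng.standard_normal()
    idio = np.sqrt(1.0 - rho) * rng.standard_normal(n_rec)
    return common + idio

# True sampling std. dev. of the cross-sectional mean, by direct simulation
true_se = np.std([cross_section().mean() for _ in range(5000)])

# Naive bootstrap on a single cross-section, resampling records as if independent;
# the shared error component is invisible to it
sample = cross_section()
boot = np.array([rng.choice(sample, n_rec, replace=True).mean()
                 for _ in range(5000)])
print(true_se, boot.std())  # naive bootstrap SE is far smaller than the true SE
```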

A reasonable parametric approach would be to assume that correlations die off exponentially with great-circle distance, as in “exponential kriging”. Most of the cross-sectional non-normality of the errors can be accounted for without bootstrapping by estimating the variance of each series across time, as in Loehle and McCulloch 2008. See the SI at http://www.econ.ohio-state.edu/jhm/AGW/Loehle/ for details.
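A minimal sketch of such an exponential distance-decay correlation, with an assumed 2000 km range parameter and purely illustrative site coordinates (nothing here is fitted to any actual proxy network):

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance in km via the haversine formula."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2.0 * R * np.arcsin(np.sqrt(a))

def exp_corr(d_km, range_km=2000.0):
    """Correlation decaying exponentially with distance; range_km is an assumed parameter."""
    return np.exp(-d_km / range_km)

# Two nearby tree-ring sites vs. one in western N. America and one in central Asia
d_near = great_circle(40.0, -110.0, 42.0, -112.0)
d_far = great_circle(40.0, -110.0, 45.0, 90.0)
print(exp_corr(d_near), exp_corr(d_far))
```

Nearby records remain strongly correlated under this model, while records on different continents are nearly independent, which is the behavior the confidence intervals need to reflect.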

Second, the graphs PAGES and Julien show are composites constructed, as is common in climate studies, by averaging series that have been normalized to have zero mean and unit variance. This causes no distortion if each series is observed for the entire time period under consideration. However, if some are observed only for a shorter period, normalizing them to have zero mean over their own period tends to smooth out any long-run shape that might be present. In particular, it will tend to smooth out the LIA and MWP (if present) relative to recent fluctuations.
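The smoothing effect of own-period normalization can be illustrated with a toy composite. The step-shaped long-run “signal” and the noise level are invented; the short record here covers only the later, cooler half of the period:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1000)
signal = np.where(t < 500, 1.0, 0.0)  # invented long-run shape: warm first half, cool second

long_rec = signal + 0.1 * rng.standard_normal(1000)        # runs the whole period
short_rec = signal[500:] + 0.1 * rng.standard_normal(500)  # observed only after year 500

# Normalize each record to zero mean and unit variance over its own period of
# record, then composite by simple averaging
z_long = (long_rec - long_rec.mean()) / long_rec.std()
z_short = (short_rec - short_rec.mean()) / short_rec.std()
comp = z_long.copy()
comp[500:] = (z_long[500:] + z_short) / 2.0

# Early-minus-late contrast: diluted in the composite relative to the long record,
# because the short record was forced to zero mean over the cool half alone
contrast_full = z_long[:500].mean() - z_long[500:].mean()
contrast_comp = comp[:500].mean() - comp[500:].mean()
print(contrast_full, contrast_comp)
```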

If the series are all progressively shorter, this is easily corrected by centering the longest series to zero mean over the entire period, and then centering each shorter series to have the same mean over its period of record as the longer series have over that same period. If they overlap randomly, like weather station data, it is still possible to find consistent offsets by solving a system of N+T equations in N+T unknowns, where N is the number of series and T is the length of the total period. (There are actually N+T+1 equations in N+T unknowns, but one equation is redundant, so it works out.)
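A sketch of that system for randomly overlapping series: each observation is modeled as a period mean plus a series offset, and the one redundancy is pinned down with a sum-to-zero constraint on the offsets. The variable names and toy data are mine:

```python
import numpy as np

def combine(X):
    """Least-squares composite for x[i,t] = mu[t] + a[i] + noise.
    X is an N-by-T array with NaN where a series is unobserved.
    Solves the (N+T)-unknown system, closing the one redundancy
    with the extra equation sum(a_i) = 0."""
    N, T = X.shape
    rows, y = [], []
    for i in range(N):
        for t in range(T):
            if not np.isnan(X[i, t]):
                r = np.zeros(N + T)
                r[i] = 1.0      # series offset a_i
                r[N + t] = 1.0  # period mean mu_t
                rows.append(r)
                y.append(X[i, t])
    c = np.zeros(N + T)
    c[:N] = 1.0                 # constraint row: sum of offsets = 0
    rows.append(c)
    y.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
    return sol[:N], sol[N:]     # offsets, composite

# Toy check: two records of one common signal, the second shifted up and shorter
T = 20
mu = np.sin(np.linspace(0.0, 3.0, T))
X = np.full((2, T), np.nan)
X[0] = mu
X[1, 10:] = mu[10:] + 2.0
a, m = combine(X)
print(a)  # offsets roughly [-1, +1]; m recovers the common shape up to a constant
```

With noisy data the same least-squares solve still applies; the offsets are then estimated rather than exact.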

In Loehle and McC 2008, most of the 18 proxies averaged ran for most of the 2000-year period, so the second problem was not a big issue. However, the Calvo series ends in the 15th century, so ideally it should have been centered to have the same average over its period of record as the average of the other 17 series over that period.
