"Detection and Attribution": Hegerl et al. [2003]

In some recent commentary trying to backpedal from the hockey stick, "detection and attribution" studies have been cited as alternative validations, and Hegerl et al. [2003] is cited as a key example. I have not looked in detail at these studies, but some features of Hegerl et al. [2003] struck me as so obvious that they are worth pointing out.

The most striking aspect to me is how little "explanation" of the CLH proxy reconstruction seems to be accomplished by Hegerl et al. [2003]. Look at the top panel and then look at the residuals in the bottom panel. For most of the period, the residual is about the same size as the original series, and it looks to me like virtually nothing is "explained" in statistical terms. Table 1 of Hegerl et al. [2003] states that 57% of the variance is explained by the forcing model. It sure doesn't look like it. It would be nice to see the calculations.
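The arithmetic behind a "variance explained" figure is easy to state. Here is a minimal sketch using synthetic stand-ins for the reconstruction and the forced response (the series and numbers are illustrative placeholders, not Hegerl's data): when the residual is nearly as large as the original series, the explained variance comes out near zero, nowhere near 57%.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # decades, say

# Hypothetical stand-ins: a forced response and a reconstruction that it
# only weakly explains, so the residual is nearly as large as the original.
forced = rng.normal(size=n)
recon = 0.3 * forced + rng.normal(size=n)

# Least-squares fit of the forced response to the reconstruction
slope, intercept = np.polyfit(forced, recon, 1)
residual = recon - (slope * forced + intercept)

# "Variance explained" in the usual sense: 1 - var(residual)/var(original)
explained = 1.0 - residual.var() / recon.var()
print(f"variance explained: {explained:.0%}")
```

With a residual this large relative to the original, the statistic lands in the single digits; a 57% figure would require the residual variance to be less than half the original variance, which the bottom panel does not suggest.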

[Figure 1 of Hegerl et al. 2003]

Original Caption: Figure 1. Detection results for the updated Crowley and Lowery [2000] reconstruction of decadal Northern Hemispheric mean temperature (north of 30N, calendar year average). Upper panel: paleo reconstruction (black) compared to the instrumental data (grey) and the best estimate of the combined forced response (red); middle panel: response attributed to individual forcings (thick lines) and their 5–95% uncertainty range (thin lines); lower panel: residual variability attributed to internal climate variability and errors in reconstruction and forced response. An asterisk "*" denotes a response that is detected at the 5% significance level.

A few other comments. This study is not really "independent" of the Hockey Team. Crowley is a co-author and lead author Hegerl is his wife.

The CLH reconstruction shown in the top panel is a new version of Crowley and Lowery [2000]. It is described in the text as follows:

a modified version [T. J. Crowley et al., in preparation, "CLH"] of the Crowley and Lowery [2000, hereinafter referred to as CL00] reconstruction (correlation with CLH 0.94). The latter is a weighted average of 9 long decadal or decadally averaged records over the Northern Hemisphere mid-to-high latitudes (30–90 N; the records sample both the warm and cold season, with a likely bias towards the summer half year). The weights are determined from the regression coefficients of individual records with the 30–90 N annual mean instrumental record during the period of overlap [Jones et al., 1999]. The resulting paleo time series was scaled so that the regression fit with the instrumental data from 1880–1960 had a slope of 1.0 [decadal correlations of 0.81 (with trend) and 0.66 (detrended)]. For consistency, the scaling of E02 is based on the same period and also decadally filtered data.
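As described, the recipe is straightforward to sketch. Below is a minimal reading of it with made-up data: the record count (9) and the idea of a calibration overlap come from the quote, but the series themselves, the overlap length, and all variable names are placeholders, not the actual CL00/CLH records.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 9 decadal proxy records sharing a common signal,
# plus an instrumental record over the calibration period.
n_records, n_dec, overlap = 9, 100, 12
signal = rng.normal(size=n_dec)
proxies = signal + rng.normal(size=(n_records, n_dec))
instrumental = signal[-overlap:] + 0.2 * rng.normal(size=overlap)

# Step 1: weights are the regression coefficients of each record against
# the instrumental series over the period of overlap.
weights = np.array([np.polyfit(p[-overlap:], instrumental, 1)[0] for p in proxies])
composite = weights @ proxies / weights.sum()

# Step 2: rescale the composite so that the regression of instrumental
# data on the composite has a slope of 1.0 over the calibration period.
slope = np.polyfit(composite[-overlap:], instrumental, 1)[0]
clh_like = slope * composite

# The rescaled series now regresses against instrumental data with slope 1
check = np.polyfit(clh_like[-overlap:], instrumental, 1)[0]
print(round(check, 3))  # 1.0
```

Note that the scaling step enforces a slope of 1.0 by construction; it says nothing about how much instrumental variance the composite actually tracks, which is what the quoted decadal correlations (0.81 with trend, 0.66 detrended) are reporting.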

Unfortunately, I have been unable to locate Crowley et al., in preparation. If the original data was "misplaced" during Crowley's move from Texas (see this post), it seems odd that they managed to create a new version. Perhaps they will be able to locate the missing data by the time of publication. The weighting of series is different from that in Crowley and Lowery [2000], as is the number of series, which is reduced from 15 to 9. The effect of the reduction in the number of proxies is to make the 20th century peak larger than the medieval peak (which it isn't otherwise). See my post on Crowley and Lowery [2000]. If the medieval peaks in CLH are unexplained (even in the mitigated form of CLH), how do you know that the 20th century peak isn't something similar?

REFERENCE:
Gabriele C. Hegerl, Thomas J. Crowley, Steven K. Baum, Kwang-Yul Kim and William T. Hyde, 2003. Detection of volcanic, solar and greenhouse gas signals in paleo-reconstructions of Northern Hemispheric temperature, GEOPHYSICAL RESEARCH LETTERS, 30(5), 1242, doi:10.1029/2002GL016635. Downloaded from http://www.nicholas.duke.edu/people/faculty/hegerl/2002gl016635.pdf

6 Comments

  1. Hans Erren
    Posted Feb 26, 2005 at 4:31 PM | Permalink | Reply

    At home I have a copy of the following interesting dissertation:

    Identification of Closed Loop Systems – Identifiability, Recursive Algorithms and Application to a Power Plant, Henk Aling, 1990, Dissertation Delft University.

    This highly mathematical study tries to find the constraints under which an event in a power plant, say a pressure wave, can be traced to a source fluctuation (fuel or oxygen).

    One of his conclusions:

    “In practise the estimated covariance function of the joint output/input signal obtained by a closed loop experiment will [b]never[/b] have the structural properties associated with the feedback system. This is due to the finiteness of the dataset, model structure mismatch and other circumstances by which the ideal assumptions, used in the derivation of the identifiability results, are violated.”
    (emphasis mine)

    In other words, every feedback system has signals that cannot be attributed to a given forcing. The whole effort to match the mid-20th century cooling on aerosols is an example.

    For more theoretical background – if you like heavy mathematics – the work of Hirotugu Akaike is a good start.
    http://www.ism.ac.jp/~kitagawa/akaike-epaper.html

  2. Michael Mayson
    Posted Feb 28, 2005 at 6:12 AM | Permalink | Reply

    There is a paper in Energy & Environment, 1 January 2004, vol. 15, no. 1, pp. 1-10, which has upset many in the newsgroups!

    "Using Historical Climate Data to Evaluate Climate Trends: Issues of Statistical Inference"

    It can be found here: http://www.ingentaconnect.com/content/mscp/ene/2004/00000015/00000001/art00002

    "Abstract:

    A strong case for global warming has been made based on reconstructed global climate histories. However, certain unique features of paleoclimate data make statistical inference problematic. Historical climate data have dating error of such a magnitude that combined series will really represent very long-term averages, which will flatten peaks in the reconstructed series. Similarly, dating error will prevent peaks (e.g., of the Medieval Warm Period) from multiple series from lining up precisely. Meta-analysis is proposed as a tool for dealing with dating uncertainty. While it is generally assumed that a proper null model for twentieth-century climate is no trend, it is shown that the proper prior expectation based on past climate is that climate trends over a century period are likely. Climate data must be detrended before analysis to take this prior expectation into account."

    It seems like a good way of avoiding a statistically meaningless analysis.
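The abstract's last point, that "no trend" is the wrong null for persistent climate data, is easy to illustrate with a quick simulation. The sketch below uses an assumed AR(1) null with invented parameters (persistence 0.95, 100-year windows); it is not the paper's own calculation, just the general mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)

# Under a persistent "red noise" null, sizable century-scale trends arise
# by chance, so a zero-trend null model is misspecified.
phi, n_years, n_sims = 0.95, 100, 2000
t = np.arange(n_years)
slopes = np.empty(n_sims)
for i in range(n_sims):
    x = np.zeros(n_years)
    eps = rng.normal(size=n_years)
    for k in range(1, n_years):
        x[k] = phi * x[k - 1] + eps[k]  # AR(1) with no deterministic trend
    slopes[i] = np.polyfit(t, x, 1)[0]  # OLS trend fitted to pure noise

# The spread of century "trends" generated by persistence alone
print(f"std of century trend under the null: {slopes.std():.3f} per year")
```

The fitted trends scatter well away from zero even though every simulated series is trendless by construction, which is exactly why the paper argues the prior expectation should allow for century-scale trends.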

    Steve: Thanks for drawing this to my attention. A point which definitely bothers me about Jacoby’s northern hemisphere temperature reconstruction, which I’ve mentioned here, is that he picked the 10 “most temperature-sensitive” of 36 sites studied and averaged them with an 11th selected site. If you have red noise series, this procedure will generate hockey stick shaped series. The multiproxy authors do not report how many series were canvassed before they selected their series, but if they examine 3 series for every 1 selected, I suspect that this will impart a selection bias sufficient to nearly always yield a hockey stick shaped series from red noise of proxy-type persistence.
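The selection-bias mechanism in Steve's reply can be shown in a few lines. This is a hedged sketch with invented parameters (36 candidate series, top 10 selected, AR(1) persistence 0.9; none of these are Jacoby's actual numbers or data): screen trendless red-noise series for correlation with 20th-century warming, average the "winners", and a blade appears.

```python
import numpy as np

rng = np.random.default_rng(2)

# 36 candidate "sites", 600 years each, last 100 years = calibration period
n_series, n_years, cal = 36, 600, 100
phi = 0.9  # AR(1) persistence ("red" noise)

noise = rng.normal(size=(n_series, n_years))
series = np.zeros_like(noise)
for k in range(1, n_years):
    series[:, k] = phi * series[:, k - 1] + noise[:, k]

# Screen each trendless series for correlation with a warming stand-in
trend = np.linspace(0.0, 1.0, cal)
corr = np.array([np.corrcoef(s[-cal:], trend)[0, 1] for s in series])

# Keep the 10 "most temperature-sensitive" sites and average them
top10 = series[np.argsort(corr)[-10:]]
composite = top10.mean(axis=0)

# The composite trends upward in the calibration period even though
# every input series is pure noise with no trend at all.
cal_slope = np.polyfit(np.arange(cal), composite[-cal:], 1)[0]
print(cal_slope > 0)
```

Outside the calibration period the selected series are unscreened noise and largely cancel in the average, while inside it they were chosen to rise together: a hockey-stick shape from nothing but persistence and selection.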

  3. Michael Mayson
    Posted Mar 1, 2005 at 8:04 PM | Permalink | Reply

    As a bit of light relief you might enjoy this.

    http://www.null-hypothesis.co.uk/icecream.html

  4. Hans Erren
    Posted Mar 4, 2005 at 9:28 AM | Permalink | Reply

    john, can you replace [ with < in comment 1?

  5. TCO
    Posted May 5, 2006 at 6:15 PM | Permalink | Reply

    Anything new here?

  6. TCO
    Posted Jun 11, 2006 at 12:22 PM | Permalink | Reply

    Martin, I think that to really get meaningful results from the training-type approach you discuss (and I agree that we should consider when it is useful and also what its dangers are, and that we should not conflate it exactly with cherry-picking), it's best to do at least some of these three things:

    a. clearly label the features that distinguish real proxies from non-real ones with descriptors (e.g. upper treeline, or deciduous), by category.
    b. clearly communicate the failure of the original hypothesis, and preserve and communicate all the details of the overall study (the rejected i1 series).
    c. validate the selection variables by new tests (in time or place).
