Briffa and MBH99 Smoothing

I’ve found an unlikely ally in my questioning of the smoothing of MBH99 in the original publication and in IPCC TAR: Keith Briffa.

Not that Briffa has posted a comment endorsing my observation. However, the smoothed version of MBH99 in Briffa et al 2001 (near contemporary with TAR) is virtually identical to what I got.

A CA reader has sent me a digitization of the MBH99 smooth and I’ve attempted to reverse-engineer the weights in the filter to yield the reported smooth. Even with reverse-engineering I’m unable to replicate the Swindlesque S-curve in the Mannian 20th century smooth (by “Swindlesque” here, I mean a curve showing a noticeable mid-century decline).

Reviewing the bidding, here is the graphic from IPCC TAR – we’re only discussing the MBH99 smooth for now.

Figure 2-21 from IPCC TAR.

I reported that I had been unable to replicate the upside-down S in the MBH smooth version (in the IPCC here) and also in MBH99 and Mann et al 2000. Here’s my replication using a 40-year Hamming filter with end-period mean padding (as stated in the IPCC caption), compared to the digitized version of the MBH99 smooth from the original article (the digitization is zeroed on 1902-1980 rather than 1961-1990, so the vertical scale differs a little, but that doesn’t affect the comparison below). The 40-year Hamming filter applied to the MBH99 data (red) does not produce the Swindlesque S-curve in the 20th century that appears in the MBH99 and IPCC smooth (black).

Grey – MBH99 from WDCP archive; black – digitized MBH99 smooth from MBH99; red – my smooth using 40-year Hamming filter with end-period mean padding.
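For readers who want to experiment, here is a minimal sketch of the smoothing method as I read the IPCC caption – a 40-year Hamming filter with each end of the series padded by the mean of the nearest half-window of values. The padding rule is my assumption about what “end-period mean padding” means; this is not Mann’s actual code.

```python
import numpy as np

def hamming_smooth(x, span=40):
    """Smooth a series with a normalized Hamming filter of the given span,
    padding each end with the mean of the nearest span/2 values (my reading
    of the TAR caption's "end-period mean padding" -- an assumption)."""
    x = np.asarray(x, dtype=float)
    half = span // 2
    w = np.hamming(span + 1)        # symmetric (span+1)-point window
    w = w / w.sum()                 # normalize weights to sum to 1
    left = np.full(half, x[:half].mean())    # pad with early-period mean
    right = np.full(half, x[-half:].mean())  # pad with late-period mean
    padded = np.concatenate([left, x, right])
    return np.convolve(padded, w, mode="valid")  # same length as x
```

Applied to the archived MBH99 annual series, this yields the red curve in the figure above – and no S-curve.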

It turns out that I’m not the first person to have failed to replicate Mann’s closing slalom course. Here is an excerpt of the 1400-2000 period from Plate 3 of Briffa et al 2001, summarized online here, with a large graphic version here. The MBH99 smooth is in purple and precisely matches the smooth that I obtained – without the Swindlesque S-curve in the 20th century.

Excerpt taken from Briffa et al 2001 Plate 3. MBH99 smooth in purple.

I attempted to reverse engineer the filter weights by regressing the smoothed series against the original data at plus/minus K leads and lags, experimenting with values of K between 15 and 30, all without success. Here is the fitted value using the stated bandwidth of 40 years.

Black – digitized smooth. Red – fitted by reverse engineering (linear regression with bandwidth of 40 years).
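The reverse-engineering step can be sketched the same way – regress the smoothed series on the raw series at lags −K..+K with no intercept, and read the regression coefficients as filter weights. K is the half-bandwidth being tried; as noted above, I experimented with K from 15 to 30.

```python
import numpy as np

def fit_filter_weights(raw, smooth, K=20):
    """Estimate 2K+1 filter weights by least-squares regression of the
    smoothed series on the raw series at lags -K..+K (no intercept).
    Only interior points, where the full window is available, are used."""
    raw = np.asarray(raw, dtype=float)
    smooth = np.asarray(smooth, dtype=float)
    n = len(raw)
    # each row of the design matrix holds raw[t-K .. t+K]
    X = np.array([raw[t - K:t + K + 1] for t in range(K, n - K)])
    y = smooth[K:n - K]
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights
```

If the smooth really were a fixed symmetric filter of the raw data, this recovers the weights exactly on the interior – which is what makes the failure to fit the MBH99 smooth notable.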

For comparison, here are the reverse-engineered filter weights compared against 50-year Gaussian weights and 40-year Hamming weights. Hamming weights have considerably longer tails than Gaussian weights. The Mannian weights – which remain unknown, if they even exist – have a considerable amount in common with Hamming weights, but do not match exactly. The implementation of Hamming weights used here is adapted from Meko’s lecture notes (Meko is a dendro and a useful source of anthropological information on the statistical customs of dendros) and reconciles to Meko’s example.

Black line – Gaussian; red – Hamming; points – reverse engineered from MBH99 smooth.
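To see the tail difference numerically, here are normalized Hamming and truncated-Gaussian weight generators. The Gaussian sigma of span/6 is a common convention but an assumption on my part – not necessarily the one used in Briffa et al 2001.

```python
import numpy as np

def hamming_weights(span=40):
    """Normalized (span+1)-point Hamming window."""
    w = np.hamming(span + 1)
    return w / w.sum()

def gaussian_weights(span=50):
    """Gaussian truncated at +/- span/2 with sigma = span/6 (an assumed
    convention), normalized to sum to 1."""
    half = span // 2
    t = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (t / (span / 6.0)) ** 2)
    return w / w.sum()

# The Hamming window's endpoints retain about 8% of the peak weight, while
# a Gaussian truncated at 3 sigma retains about 1% -- hence the visibly
# longer Hamming tails in the figure.
```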


  1. Posted May 14, 2007 at 6:29 AM | Permalink

    AR4 Ch6 fig 6.10 b is trickier, same kind of smooth there? BTW, can anyone explain 6.10 c ‘uncertainties’, they seem to expand from 1960-present quite remarkably..

  2. bender
    Posted May 14, 2007 at 7:21 AM | Permalink

    can anyone explain 6.10 c ‘uncertainties’, they seem to expand from 1960-present quite remarkably

    It’s not a confidence interval. Instead, it’s more of a confidence “envelope”, created as the:
    “Overlap of the published multi-decadal time scale uncertainty ranges of all temperature reconstructions identified in Table 6.1”
    It’s probably the overlap that makes it so wide.

  3. Posted May 14, 2007 at 7:31 AM | Permalink

    I am interested in knowing what is going on in the environment, and I don’t want to take the politician’s words as fact. I just don’t understand what all those graphs mean. Could someone provide some definitions for all this?

  4. Dave Dardinger
    Posted May 14, 2007 at 8:43 AM | Permalink


    You might want to start by looking at the page linked in the left-hand margin second box, “Common Acronyms used on This Blog” or CATB if you prefer. (The alternative title, “Common Acronyms of Climate Audit” unfortunately has an Acronym, CACA, which would be too enthusiastically adopted by certain resident trolls.)

  5. Posted May 14, 2007 at 11:56 AM | Permalink

    I still don’t get it.. What reconstruction makes 0..10 color down to -0.6 C? What makes it up to +0.7 C? And it seems that PS2004 uncertainties are not used, it cannot bring that up..

  6. Michael Jankowski
    Posted May 14, 2007 at 12:40 PM | Permalink

    It’s not a confidence interval. Instead, it’s more of a confidence “envelope”, created as the:
    “Overlap of the published multi-decadal time scale uncertainty ranges of all temperature reconstructions identified in Table 6.1”
    It’s probably the overlap that makes it so wide.

    And yet Crowley and Lowrey (orange) goes outside all the envelopes in the 1800s. Does that mean it’s not in Table 6.1, and if not, why is it shown graphically?

  7. Posted May 14, 2007 at 12:57 PM | Permalink

    # 6

    Different picture, I’m puzzled by IPCC AR4 Ch 6 Fig 6.10, sorry for OT. Caption says

    Overlap of the published multi-decadal time scale uncertainty ranges of all temperature reconstructions identified in Table 6.1 (except for RMO..2005 and PS2004), with temperatures within ±1 standard error (SE) of a reconstruction scoring 10%, and regions within the 5 to 95% range scoring 5% (the maximum 100% is obtained only for temperatures that fall within ±1 SE of all 10 reconstructions).

  8. bender
    Posted May 14, 2007 at 4:25 PM | Permalink

    Crowley & Lowrey (orange) fits within the Fig. 6.10 anomaly envelope at ca. 1800-1850 (assuming that’s a valid, apples-to-apples comparison). The question is why the 6.10 interval is so wide. Frankly, the caption makes no sense to me. If they supplied code, I would not have to second guess what they did. Do you understand the caption, UC?

    One thing to note on Fig. 6.10 is the high top end of the envelope at ~AD990. I’ve commented on it several times now (based on other papers) and have yet to hear a single comment back. It seems to me the thousand-year AD mark is itself cherry-picked. Warmers don’t want to talk about the potentially unprecedented trend AD910-990.

    Finally, why are anomalies computed using the 1961-1990 baseline? Shouldn’t they be using the 1970-2000 window, now that it is 2007? Isn’t that what we always used to do? Looks like the baseline might have been cherry-picked too.

  9. Posted May 14, 2007 at 11:52 PM | Permalink

    Here’s how I understood it:

    Take the 1-sigma and 2-sigma envelopes of all 10 reconstructions (*1). At a given year, go through all temperatures and count scores. If a given temperature is within only one 2-sigma envelope, the lightest color is used (0..10), as the score is 5. If a temperature does not touch any of those envelopes, no points, and the color will be white. Within a 1-sigma envelope, they give double points. Go through all points of the figure (y=temperature, x=year).

    The problem is that this doesn’t explain the width of the coloring from 1960-present. Maybe they heard me laughing when I saw MannJones2003.

    *1 Note that there are 12 reconstructions in Fig. 6.10 b …
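Read that way, the per-point score can be sketched as follows – the ±1.645 SE stand-in for the 5-95% range and the 10/5 point values are one reading of the caption, not code from the chapter.

```python
import numpy as np

def overlap_score(temp, means, ses):
    """Score one (temperature, year) point per the Fig 6.10c caption as
    read in the comments: each reconstruction contributes 10 if temp lies
    within +/-1 SE of its mean that year, else 5 if within its 5-95% range
    (taken here as +/-1.645 SE -- an assumption), else 0.  means/ses are
    per-reconstruction values for that year; 10 reconstructions give a
    maximum score of 100."""
    means = np.asarray(means, dtype=float)
    ses = np.asarray(ses, dtype=float)
    d = np.abs(temp - means)
    return int(np.sum(np.where(d <= ses, 10,
                      np.where(d <= 1.645 * ses, 5, 0))))
```

Even under this reading, a point scores zero unless it touches at least one envelope – so the rule by itself does not obviously explain the very wide coloring from 1960 on.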

