Rahm-Centering: Enhancing "Successful" Prediction

I only have time to post a quick note on this interesting aspect of Rahmstorf’s diagram – how the centering of Rahmstorf et al 2007 interacts with Rahm-smoothing (now conceded by everyone except Rahmstorf to be a simple triangular filter of 2M-1 years) to enhance “successful” prediction.

I noticed this effect when I did a plot using a standard reference period of 1961-1990 (as opposed to Rahmstorf’s unusual selection of 1990). Rahmstorf had centered on 1990 (using Rahm-smoothed values). Centering on a single year is a procedure that was severely criticized in the blogosphere a couple of years ago, and it was odd to see Rahmstorf also center on a single year, even a smoothed version. But it made me think about the impact of centering on Rahm-smoothed 1990, and the results were interesting.

I had collated A1B model information from KNMI (a large 57-run subset of the 81 PCMDI runs), presumably representative. I converted all models to 1961-1990 anomalies to match HadCRU and did an unsmoothed comparison of the model ensemble average and observations, showing 1-sigma limits as in the original. Unlike the original diagram, observations are not in the “upper part” of the models. Indeed, they weren’t even in the “upper part” when Rahmstorf et al 2007 was written.

An important difference was that 1990 model values were noticeably above observed 1990 values using a standard 1961-1990 reference period, whereas Rahmstorf centered both the model ensemble and the observations on their Rahmstorf-smoothed 1990 values. Think about what that implies, now that we’ve confirmed that Rahm-smoothing is a triangular filter of 2M-1 years.

Figure 1. Unsmoothed comparison, 1961-1990 reference period. 1-sigma limits as in the original.

Rahmstorf’s triangular filter puts a lot of weight on the edges relative to a more common Gaussian filter. For M=15, in the calculation of the 1990 value, a 2M-1 year triangular (Rahmstorf) filter places as much combined weight on the values from 2000-2004 as on the 1990 value itself! Let’s stipulate for a moment that the models were designed with an eye on history up to the early 1990s (and the model response to Pinatubo is overwhelming evidence that they were) – nothing wrong with that. Actually, AR4 models probably had their eye on history even later than that.
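To see this concretely, here is a minimal sketch (Python/NumPy; the variable names are mine, not Rahmstorf’s) of the normalized 2M-1 point triangular weights for M=15. It confirms that the combined weight on lags 10-14 (the years 2000-2004 when computing the 1990 value) equals the weight placed on 1990 itself:

```python
import numpy as np

M = 15
lags = np.arange(-(M - 1), M)                 # 2M-1 = 29 lags: years 1976..2004 around 1990
weights = (M - np.abs(lags)).astype(float)    # triangular: weight falls linearly with |lag|
weights /= weights.sum()                      # normalize so the weights sum to 1

center_weight = weights[lags == 0][0]         # weight on 1990 itself: 15/225
forward_tail = weights[lags >= 10].sum()      # combined weight on 2000-2004: (5+4+3+2+1)/225

print(center_weight, forward_tail)            # both 15/225, i.e. about 0.067
```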

But let’s take a best case: suppose that peeking was cut off after Pinatubo and everything after that was untuned modeling. Let’s further suppose for the simplicity of illustration that models tracked observations exactly up to Pinatubo and then models ran x% hotter than observations and consider the implications under Rahm-smoothing and Rahm-centering.

In the calculation of the Rahm-smoothed 1990 values, Rahmstorf looks forward. Under the above assumptions (i.e. models running hotter than observations), the Rahm-smoothed 1990 model value will be raised relative to the Rahm-smoothed observations. The use of the triangular filter will cause a noticeably larger effect than a Gaussian filter.

Rahmstorf now centers both models and observations using these values. The centering step lowers the models relative to observations, in effect, offsetting part of the divergence. The effect is large enough to make a difference on the rhetorical effect of the Rahmstorf diagram.
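The mechanism can be illustrated numerically. The sketch below (Python/NumPy; all numbers are invented for illustration, as in the stipulation above) builds a stylized observed series and a model series that runs hotter after 1991, computes the triangularly smoothed 1990 value of each, and then centers each series on its own smoothed 1990 value. Because the model’s smoothed 1990 value is pulled up by its hotter post-1991 values, the centering step lowers the models relative to observations and shrinks the apparent end-of-series divergence:

```python
import numpy as np

years = np.arange(1970, 2007)
obs = 0.015 * (years - 1970)                  # stylized observed trend (deg C)
model = obs.copy()
hot = years > 1991                            # after Pinatubo, models run hotter
model[hot] += 0.01 * (years[hot] - 1991)      # extra 0.01 deg C/yr of divergence

M = 15
lags = np.arange(-(M - 1), M)
w = (M - np.abs(lags)).astype(float)
w /= w.sum()                                  # normalized 2M-1 triangular weights

def tri_smooth_at(series, year):
    """Triangular smooth centered on `year`, looking M-1 years each way."""
    idx = np.searchsorted(years, year) + lags
    return np.sum(w * series[idx])

# smoothed 1990 values: the model's is raised by its hotter post-1991 values
obs_1990 = tri_smooth_at(obs, 1990)
model_1990 = tri_smooth_at(model, 1990)

# Rahm-centering: subtract each series' own smoothed 1990 value
obs_c = obs - obs_1990
model_c = model - model_1990

raw_gap = model[-1] - obs[-1]                 # 2006 divergence before centering
centered_gap = model_c[-1] - obs_c[-1]        # after centering: partly offset
print(raw_gap, centered_gap)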

To be clear, I am not claiming that Rahmstorf et al did this intentionally. Indeed, there is considerable evidence that they had negligible understanding of the properties of their smooth and perhaps even misunderstood what it was doing. However, scientists and reviewers need to be wary of confirmation bias and, once again, this seems to have interfered with the identification of the problem.

UPDATE: This is what the Rahm graphic would look like using IPCC smoothing.


  1. lucia
    Posted Jul 7, 2009 at 7:45 AM | Permalink

    Yes. I’ve periodically shown the AR4 models and observations with various choices of baselines. When viewed as temperature anomalies themselves (as opposed to trends), the choice of baseline makes quite a difference. If you select the ‘smooth’ temperature in 1990 as the baseline, the AR4 model anomalies aren’t too far off – now – but they can’t be. Mathematically, the two were pinned recently and it takes time for the difference in trends to manifest itself. However, while this improves agreement now, the hindcast looks bad. In contrast, if you pin the anomalies in an earlier period, the hindcast looks better, but the agreement looks bad now.

    This is not obvious when reading the AR4 which, although published in 2007, showed AR4 model comparisons from 1900-2000 and stopped. Had they slapped the projections and recent data onto those graphs, we would already have seen that the AR4 models were not high relative to the observations.

    The authors show FAR, SAR and TAR projections from 1990 to the most recent data. But those comparisons don’t show hindcast periods, so the reader can’t see whether those older projection methods were able to hindcast.

    • Steve McIntyre
      Posted Jul 7, 2009 at 8:38 AM | Permalink

      Re: lucia (#1), Lucia, I realize that you’ve done lots of analysis on baselines and regularly refer correspondents to you for insight on such matters. I’m out all day and briefly checking in, but I’ll add a link in the head post.

      The point here was not the general point that baselines matter – one that you’ve made well – but how this plays out in the specific Rahmstorf case, now that we’ve conclusively pinned down what Rahmstorf actually did (a process that was needlessly extended by Rahmstorf’s obstruction of David Stockwell).

  2. AnonyMoose
    Posted Jul 7, 2009 at 8:37 AM | Permalink

    Was Rahmstorf peer reviewed? Must have used climatology peers, not statistical peers.

    • Charlie
      Posted Jul 9, 2009 at 8:48 AM | Permalink

      Re: AnonyMoose (#2),

      The Wegman Report that analyzed the hockey stick statistical methods found that paleoclimatologists didn’t seem to communicate much with statisticians and that peer review wasn’t always independent.

      I haven’t seen such a study for climate research in general, but the fact that it is only in the last couple of weeks that Rahmstorf found out his filtering method is essentially a triangular filter leads me to believe that Wegman’s conclusions about paleoclimatologists likely apply to other climate research as well.

      snip – overeditorializing

  3. Posted Jul 7, 2009 at 9:06 AM | Permalink

    Sorry, I didn’t mean to suggest you should be mentioning my posts. I mean: I agree what you show is true. The baseline matters. I agree with you that once we know what, specifically, R did, illustrating that specifically makes people better understand that the general things that have been shown (by me, you or others) do, indeed, port over to the specific thing R did.

    In the past, I’ve only shown this generally because, of course, we did not know specifically what R did. So, in short: I think it’s great that you are showing this specifically.

  4. Posted Jul 7, 2009 at 10:09 AM | Permalink


    I think it is important for people to realize that the ssatrend algorithm pads the series with M additional points based on a linear extrapolation of the last M points. It does this both at the end and at the beginning. In the original Rahmstorf paper there is only a total of 34 years covered, and this process creates an additional 22 data points that are linear extrapolations. For this reason I don’t believe that this algorithm makes any sense for such a short period.

    Even if ssatrend were used on the full hundred-plus years of data – and I can’t tell from the description – it would have the same effect on the final points, which is where the analysis is centered.
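    If I understand the described padding step correctly, it amounts to something like the following sketch (Python/NumPy; `pad_linear` is my own name, and the least-squares fit is an assumption about how the extrapolation line is chosen, not a claim about Rahmstorf’s actual code):

```python
import numpy as np

def pad_linear(x, M):
    """Pad a series with M extrapolated points at each end, each set
    continuing a straight line fitted to the nearest M points."""
    t = np.arange(M)
    # fit the last M points, extrapolate M steps forward
    slope1, intercept1 = np.polyfit(t, x[-M:], 1)
    tail = intercept1 + slope1 * np.arange(M, 2 * M)
    # fit the first M points, extrapolate M steps backward
    slope0, intercept0 = np.polyfit(t, x[:M], 1)
    head = intercept0 + slope0 * np.arange(-M, 0)
    return np.concatenate([head, x, tail])

series = np.arange(34, dtype=float)       # 34 years, as in Rahmstorf et al 2007
padded = pad_linear(series, 11)           # M = 11 adds 2 x 11 = 22 extrapolated points
```

    On this reading, 22 of the 56 values entering the smooth near the endpoints are linear extrapolations, which is the proportion noted above.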

    I may be restating your point on the smoothing, but this seems clearer to me.

    I must admit that I had missed the effect of the centering. I had also missed the fact that the R07 illustrations didn’t back cast the IPCC scenarios before 1990.

    The quality of this paper, which is really a fluff piece, and of the Rahmstorf sea level paper seems really low.

  5. Posted Jul 7, 2009 at 11:18 AM | Permalink

    Very interesting, Steve! — Rahmstorf et al’s diagram reproduced in your July 1 post just shows the model projections from 1990 on, without hindcasting them, so that any pre-1990 divergence caused by this choice of base year is hidden from the reader. The classic c. 1950 text “How to Lie With Statistics” may have overlooked this technique!

    However, in the post, you state, “The use of the triangular filter will cause a noticeably larger effect than a Gaussian filter.” I doubt that the difference between a triangular and Gaussian filter would be noticeable, particularly if both were scaled so as to equate their interquartile ranges. But this is a small point in comparison to the reference date/period issue.
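    For what it’s worth, a quick numerical check (Python/NumPy; my own sketch, matching the filters’ standard deviations rather than their interquartile ranges, which is one plausible normalization) suggests the triangular filter does carry somewhat more weight at lags of ten-plus years, though the difference is modest:

```python
import numpy as np

M = 15
lags = np.arange(-(M - 1), M)

# normalized triangular weights, as in the Rahmstorf smooth
tri = (M - np.abs(lags)).astype(float)
tri /= tri.sum()

# Gaussian weights on the same support, with the same standard deviation
sigma = np.sqrt(np.sum(tri * lags**2))        # sd of the triangular kernel, ~6.1 yr
gauss = np.exp(-lags**2 / (2 * sigma**2))
gauss /= gauss.sum()

tri_tail = tri[np.abs(lags) >= 10].sum()      # triangular weight beyond 10 years
gauss_tail = gauss[np.abs(lags) >= 10].sum()  # Gaussian weight beyond 10 years

print(tri_tail, gauss_tail)                   # roughly 0.13 vs 0.10
```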

  6. Roger Pielke, Jr.
    Posted Jul 7, 2009 at 11:22 AM | Permalink

    Steve – What is the proper comparison graphic: from Rahmstorf et al 2007, the update in the Copenhagen report, or both?

  7. Peter D. Tillman
    Posted Jul 7, 2009 at 4:27 PM | Permalink

    Re: Gaussian filters

    For the clueless (like me), here’s what one looks like:

    A Gaussian curve is a characteristic symmetric “bell curve” shape that quickly falls off towards plus/minus infinity. You can adjust the height of the curve’s peak, the position of the center of the peak, and the width of the “bell” for the desired effect.

    http://en.wikipedia.org/wiki/Gaussian_filter has more details.

    Cheers — Pete Tillman

  8. DG
    Posted Jul 7, 2009 at 9:09 PM | Permalink

    …not sure how this study published in 2002 fits into the discussion

    Click to access Govindan_Vyushin_PRL_2002.pdf

