Over the past few days, we’ve discussed many peculiar aspects of Rahmstorf smoothing and centering: incorrect disclosure; seeming unawareness of what the smoothing did; unattractive properties of the triangular filter; the enhancement of “successful” prediction; opportunistic policy changes.

It’s not as though the IPCC hadn’t turned its mind to smoothing. IPCC AR4 enunciated a sensible policy on smoothing in AR4 chapter 3, “Observations: Surface and Atmospheric Climate Change”, Appendix A. The criteria in that chapter condemn Rahmstorf’s procedures and, unlike Rahmstorf, the chapter described its filter in unambiguous terms – no cat-and-mouse. They stated:

In order to highlight decadal and longer time-scale variations and trends, it is often desirable to apply some kind of low-pass filter to the monthly, seasonal or annual data. In the literature cited for the many indices used in this chapter, a wide variety of schemes was employed. In this chapter, the same filter was used wherever it was reasonable to do so. The desirable characteristics of such filters are 1) they should be easily understood and transparent; 2) they should avoid introducing spurious effects such as ripples and ringing (Duchon, 1979); 3) they should remove the high frequencies; and 4) they should involve as few weighting coefficients as possible, in order to minimise end effects. The classic low-pass filters widely used have been the binomial set of coefficients that remove 2Δt fluctuations, where Δt is the sampling interval. However, combinations of binomial filters are usually more efficient, and those have been chosen for use here, for their simplicity and ease of use

These are sensible policies. “Easily understood and transparent” clearly excludes Rahmstorf’s Copenhagen description of a triangular filter of length 29 as “smoothed over 15 years”. Criterion 2 – excluding ripples and ringing – excludes Rahmstorf’s triangular filter on other grounds. Criterion 4 – “as few weighting coefficients as possible” – also precludes Rahmstorf’s filter.

IPCC went so far as to provide a standard filter to “remove fluctuations on less than decadal time scales” for chapter 3, described in unequivocal terms as follows:

The second filter used in conjunction with annual values (Δt =1) or for comparisons of multiple curves (e.g., Figure 3.8) is designed to remove fluctuations on less than decadal time scales. It has 13 weights 1/576 [1-6-19-42-71-96-106-96-71-42-19-6-1]. Its response function is 0.0 at 2, 3 and 4Δt, 0.06 at 6Δt, 0.24 at 8Δt, 0.41 at 10Δt, 0.54 at 12Δt, 0.71 at 16Δt, 0.81 at 20Δt, and 1 for zero frequency, so for yearly data the half-amplitude point is about a 12-year period, and the half-power point is 16 years. This filter has a very similar response function to the 21-term binomial filter used in the TAR.
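The response numbers in the quoted passage are easy to verify. Here is a sketch in plain Python (no dependencies) that evaluates the amplitude response of the 13-weight filter at the periods the IPCC lists; note that the response is exactly zero at periods of 2, 3 and 4 sampling intervals:

```python
import math

# IPCC AR4 chapter 3, Appendix 3.A: 13-point low-pass filter, weights sum to 576
W = [1, 6, 19, 42, 71, 96, 106, 96, 71, 42, 19, 6, 1]
weights = [w / 576 for w in W]

def gain(period):
    """Amplitude response of the symmetric 13-weight filter at a given
    cycle period (in units of the sampling interval)."""
    f = 1.0 / period
    # centered indexing: weights run from k = -6 to k = +6
    return sum(w * math.cos(2 * math.pi * f * k)
               for k, w in zip(range(-6, 7), weights))

# Compare against the response values quoted in the IPCC text
for period, quoted in [(2, 0.0), (3, 0.0), (4, 0.0), (6, 0.06), (8, 0.24),
                       (10, 0.41), (12, 0.54), (16, 0.71), (20, 0.81)]:
    print(f"period {period:2d}: gain = {gain(period):5.2f}   (IPCC quotes {quoted})")
```

Run as-is, each computed gain matches the quoted value to two decimals, which also confirms that the zeros fall at 2Δt, 3Δt and 4Δt.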

Instead of simply complying with standard IPCC procedures, Rahmstorf used a filter procedure described only in the AGU newspaper – the triangular properties of which were not described in the original article; indeed, the authors say that they were unaware of this defect at the time.

As so often in climate science, Rahmstorf changed smoothing policy not just once, but twice. First, in Rahmstorf 2007, he abandoned IPCC policy in favor of an article in the AGU newspaper; then he changed smoothing parameters in the Copenhagen Report – all without explicitly stating that he had changed policy from the IPCC report, and without accompanying the change with an explicit accounting of its impact.

Here’s what happens to Rahmstorf’s results if IPCC filter procedures are followed. Rahmstorf can no longer assert that observations are in the “upper” part of the model range, with the implication that things are “worse than we thought”. R07 is looking shakier and shakier.

## 260 Comments

In fact, as you easily demonstrate, the observed data is already peaking (hehe – reverse thermometer style – hehe) below the range of the models. So clever R took what seemed a straightforward observation, added a bit of hocus pocus, and tahdah (new RC magic term) a headline for the ages was created. I think we underestimate his cleverness at our peril.

To be clear, it is NOT my opinion that the recent downturn has been prolonged enough to be significant. Maybe there will be another jump in the level with a plateau for a while. Many readers are far too quick to jump to opposite conclusions. Just because Rahmstorf overstated his case does not mean that readers should overstate things the opposite way.

Dear Steve,

well, you beat them with their own weapons! 🙂

Can you tell who devised the IPCC filter criteria? (just since there will no doubt be a credibility mud-fight)

And I also wonder what exactly the difference is between your figure in this post and figure 2 shown in thread 6440 – besides the different smoothing. May I ask which M you used in this new figure (it sounds like M=11 from the IPCC text – do I guess that correctly)?

http://www.climateaudit.org/?p=6440

As a part-time lurker I am getting somewhat confused: what is the data, and to which model should it be compared (with/without volcanoes, TAR 3, 4 or what, and why?)

It seems that at least you, Rahmstorf and Lucia use different models for the comparison and sometimes exchange them…

(This is not a critique of your work; I just want to express my confusion and hopefully you can help me understand when to pick which model)

All the best regards and keep up the good work,

LoN

Re: Laws of Nature (#3),

Forget Rahmstorf’s M. It isn’t used in this diagram. Rahmstorf’s “embedding dimension” is pointless mystification. The Rahmstorf filter is a triangular filter (a fact, it seems, unknown to the authors) which has very undesirable smoothing properties according to specialists. IPCC chapter 3 used a filter that’s more like a Gaussian or binomial filter.
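Criterion 2 – no ripples or ringing – can be made concrete. Below is a sketch in plain Python comparing the frequency response of a 21-weight triangular filter (assuming, as discussed in this thread, that the Rahmstorf M=11 filter is exactly triangular) with the 21-term binomial filter mentioned in the IPCC text. The triangular response is non-monotone – it falls to zero and then bounces back to about 5% – while the binomial response decays monotonically with no sidelobes:

```python
import math

def triangular_gain(f, M):
    """Amplitude response of a (2M-1)-weight triangular filter at frequency f
    (cycles per sample); this is the Fejer kernel, the square of a boxcar's response."""
    return (math.sin(math.pi * f * M) / (M * math.sin(math.pi * f))) ** 2

def binomial_gain(f, n):
    """Amplitude response of an (n+1)-term binomial filter: cos(pi f)^n."""
    return math.cos(math.pi * f) ** n

freqs = [i / 1000 for i in range(1, 501)]            # 0 < f <= 0.5 (Nyquist)
tri = [triangular_gain(f, 11) for f in freqs]        # 21-weight triangular
bino = [binomial_gain(f, 20) for f in freqs]         # 21-term binomial (TAR)

tri_monotone = all(a >= b for a, b in zip(tri, tri[1:]))
bino_monotone = all(a >= b for a, b in zip(bino, bino[1:]))
sidelobe = max(g for f, g in zip(freqs, tri) if f > 1 / 11)

print("triangular response monotone:", tri_monotone)    # False: it rings
print("binomial response monotone:  ", bino_monotone)   # True: no ringing
print("largest triangular sidelobe: %.3f" % sidelobe)   # roughly 0.05
```

The ~5% sidelobe is the “ringing” the Duchon-style criteria warn about; a 29-weight (M=15) triangular filter has the same sidelobe structure, just compressed toward lower frequencies.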

Steve, I’d note that R07 was written before the AR4 came out, although possibly Rahmstorf had a preview of that section.

In your plot, how are you dealing with the endpoint conditions?

Re #3, I assume you are using the 13 pt filter (M=7). In that sense it’s not to be compared directly with R-smoothing, which may well have given a similar downturn with M=7.

Re: Nick Stokes (#5),

It was more than a “preview”. Rahmstorf was an IPCC author or reviewer; he had copies of the First Draft and Second Draft. I have copies. Appendix A in the Second Draft is pretty much identical to the final version and dates well before Rahmstorf’s article. It’s probably in the First Draft as well. So your new excuse doesn’t hold up. BTW, it wouldn’t do you any harm to agree with something.

For the third time, I used the filter listed in the IPCC section that I quoted – as I said I did.

I used end-point procedures as prescribed in IPCC Chapter 3 Appendix A.

Re: Steve McIntyre (#6),

Steve, the endpoint condition in Chap 3 of the AR4 continues to bother me. It seems to fail the most basic requirement of a trend estimator, which is to correctly estimate the trend of a straight line. It bends the line to a zero slope at the end. It’s conservative, as they say, but not accurate. I believe the MRC does satisfy this requirement.

Re: Nick Stokes (#12),

This may be a trivial question but please bear with me. The Mannian doubly reflected endpoint padding is, as I understand it, a minimum roughness smoother. It would accurately extend a straight line. However, if it were applied to a sine wave just after the wave’s peak, it would produce an odd curve that extended upwards beyond the peak value of the wave.

Doesn’t MRC padding imply the assumption of a linear trend? And if so, wouldn’t the discernment of a linear trend in data padded in this way be unsurprising as this was a property of the padding and not, necessarily, the original signal?

Re: TAG (#18), I don’t believe your sine example would produce an odd curve. I think I said elsewhere what it should produce – a leading point following the unattenuated sinusoid, grading to the fully attenuated sine in the interior. It’s late where I am, but I may get a chance to calculate it tomorrow.

On your other questions – I’ve said that the use of padding does not assume the future, because it yields smoothers using known data only. But it gives a consistent result for a steady rise, which is an important property. And I think you’ll have to clarify “discernment of a linear trend”. If you apply an algorithm to find a trend, you’ll always discern one. As with linear regression, you need a measure of quality.
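The disagreement over endpoint behaviour can be settled numerically. The sketch below (plain Python, using the IPCC 13-point weights quoted in the post) takes “minimum roughness” to mean double-reflection padding, pad(k) = 2·y[n] − y[n−k], and “minimum slope” to mean simple reflection, pad(k) = y[n−k], as described in this thread. On a straight line the former reproduces the endpoint exactly, while the latter bends the smooth toward zero slope:

```python
# IPCC 13-point filter weights (sum to 1)
weights = [w / 576 for w in [1, 6, 19, 42, 71, 96, 106, 96, 71, 42, 19, 6, 1]]
H = 6  # filter half-width

def smooth_endpoint(y, pad):
    """Smoothed value at the last point of y, padding H values past the end."""
    n = len(y) - 1
    ext = list(y) + [pad(y, k) for k in range(1, H + 1)]
    return sum(w * ext[n + k] for k, w in enumerate(weights, start=-H))

mrc = lambda y, k: 2 * y[-1] - y[-1 - k]   # minimum roughness: double reflection
min_slope = lambda y, k: y[-1 - k]         # minimum slope: simple reflection

line = [0.1 * t for t in range(31)]        # straight line ending at 3.0
print(smooth_endpoint(line, mrc))          # 3.0 -- trend preserved at the end
print(smooth_endpoint(line, min_slope))    # about 2.83 -- slope bent toward zero
```

This matches both claims in the thread: MRC padding “hits the last point on the head” for any series (each symmetric weight pair collapses to 2·w_k·y[n]), while simple reflection underestimates the endpoint of a rising line.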

Re: Nick Stokes (#20),

This is specious nonsense.

Every padding method assumes a particular pattern of “future” behaviour. As has been demonstrated amply in a number of posts on this blog, not only will MRC allow you to “discern” a linear trend when it exists, but you can also create the appearance of one when it doesn’t by manipulating the parameters.

All of your comments in defence of Rahmstorf are somewhat OT since they don’t address the specific situation. However, there are much better smoothers than the triangular weighted ones – lowess comes to mind. As well, has it not occurred to you that ssatrend supposedly uses the time series properties of the sequence, but the authors of the method then ignore them completely by using a naive linear method to pad the series forward? Why would you not use those properties to do the end smoothing?

Re: RomanM (#23),

This is getting to be a real comedy of errors… UC communicated to me offline that the simplistic Monte Carlo simulation for the standard error of the trend is wrong… they add white noise to the padded sequence instead of calculating the padding from the noisy sequence… see the code.

Recall Rahmstorf had a rationale for the M=15 switch:

Well, Stefan, I have some news for you: once the stupid padding mistake is corrected, the error is much worse in the end even when calculated with your program of choice for M=15.

Re: Jean S (#31),

What is one to conclude? That the work of those with little statistical expertise should be reviewed by those with much more.

Re: Jean S (#31),

Looks quite bad. See http://www.climateaudit.org/?p=3504#comment-301724 and then:

(ssa = “What happens to (var1) white noise if it goes through this tfilt”, from the code)

RomanM, what do you think? Someone forgot the effect of padding??

(And yes, M15 is above M11 in some points 😉 )

Re: TAG (#18), Yes.

Re: TAG (#18), – I haven’t run SteveM’s script but from the double mirrored description you’ve nailed it exactly.

Re: Nick Stokes (#20),

Padding is a prediction of future data based on a reasoned guess. There are cases where this filter can be reasonable; in all cases, disclosure is the key. The underlying assumption of a double mirror about the endpoint is a continuance of trend; non-disclosure looks like a hand in the cookie jar. I make no accusations of any kind because mistakes do happen, and it took SteveM to figure out what the filter actually did. I’ve been at this for almost a year now, and an unusual mistake supporting AGW models is getting a bit unsurprising at this point.

—–

The complexity of the filter aside, it’s visually obvious that the endpoint has been tweaked. A five-year-old could pick it out. Basic filters would reveal a different story, so my questions are: why can’t a PhD climatologist pick it out when the point of the filter is to claim observations are near the top of the model trend? What is going on when nobody in the field calls them on it? And of course, how will they fix it in the near future?

In this case, we will learn more about a person’s reaction to a problem than from the problem itself.

Does anyone here think that other filter values weren’t tried?

Don’t forget, the whole plot stinks anyway because the temperature observation data is so heavily corrected.

Well, sorry to sound relentlessly contrarian, but I seem to be alone here in thinking Rahmstorf’s sins are overstated, and if I don’t say where I think that happens, probably no-one else will. But I have sometimes agreed. I’ll also agree that a triangular filter, with its discontinuous slope, is not the best.

I wanted to establish the endpoint process used, because the minimum slope constraint recommended in Chap 3 is different from the MRC, for example. The reason why Rahmstorf might not prefer minimum slope is that if you want to approximate the slope near the end, you won’t get a good result if you include it in a constraint.

Re: Nick Stokes (#7),

Perhaps you are alone in this. The question is which of the issues Steve has raised are you willing to accept as an error by Rahmstorf? For example, do you think Rahmstorf described his method adequately and appropriately?

Re: Ron Cram (#78),

I discussed that here.

Re: Nick Stokes (#80),

So your phrase “yes, the caption to Fig 3 was wrong” includes all that needs to be said regarding the fact that he did not accurately and adequately explain his method? To me, this raises lots of questions. How could he have made such a mistake? Did he not understand the effect? Has R ever offered an explanation?

You also write “The one constant is that R is a sinner. It’s pinning down the sin that varies.” No, it is not “pinning down the sin,” it is enumerating the list. Each error needs to be looked at and evaluated on its own. Then the cumulative effect can be known.

The answer to your question “Or does the scoffing apply to smoothing generally?” should be obvious now. Yes. And just because other people do it does not make it right.

Re Steve #6,

If you are not using Mann’s end-pegging, is it only by accident that your smoother appears to hit the last point on the head? Here is what IPCC Ch3 App A says:

Are you also using IPCC centering (1960-90) in your graph rather than Rahmstorf’s (1990)?

RE Nick Stokes #5,

As Steve keeps trying to make clear, “M” only relates to Rahmstorf’s nearly triangular filter of length 2M-1. The IPCC4 13-point filter is not triangular, and so does not have a Rahmstorf “M” associated with it. A 13-point triangular filter with M = 7 would not be equivalent to the bell-shaped IPCC filter.

As I pointed out at http://www.climateaudit.org/?p=6473#comment-348552, the triangular filter most similar to the 13-point IPCC filter has only 9 points (“M” = 5). But even then, it would not be equivalent in terms of the exact shape of its response function.

Steve for the purpose of clear communications to the layman, can you produce the same graph with Rahmstorf’s smoothing and another with IPCC? Or one graph with both?

While I can see from previous posts the differences, having it all in one clear graphic or side by side graphics is the clearest way to communicate the issue.

Thank you for your consideration.

These filters and smoothing became a part of the game in which they fool themselves. Needless to say, certain people including Rahmstorf start with a collection of filters and choose the filter that is most convenient for the predetermined conclusions. These things are so obvious at every step. It’s clear that they always choose the filters that amplify the warming between 1980 and 2000, and try to do everything they can to erase the lack of warming between 1940 and 1970 and after 2000.

It is enough to see the full data, including all the high-frequency noise, to conclude that it is a very open question whether the recent lack of trend is just a fluke (which is completely possible, of course) or a part of a longer process that is likely to continue. Various filters may increase the impact of various patterns at certain moments in the past but all these things simply distort the reality.

There is no “secret truth” waiting for the people who find the “right” filter. Instead, the filters will always be lossy. They eliminate some information, distort the influence of various things, and they simply cannot be viewed as universal tools towards a more accurate understanding of the reality. The exact, full records are the best information we can have. Theoretically, they reflect the collaboration of hundreds of atmospheric and other effects operating at many different frequencies. There’s no simple map. It’s not true that a particular filter completely erases all climate effects except for one, and so forth – especially because many of them have very similar frequencies or typical timescales.

Even the human effects and the natural effects have similar slopes and similar frequencies. They can’t be separated by any simple filters and whoever is trying to do so is cheating.

Hmmm, looking at the graph, I think (apart from the ending) the key trick is not the Rahm-smoothing but the Rahm-centering!

Re: Anthony Watts (#9) (and others),

here’s a way to visualize:

1) center all models (gray lines) such that the 1990 value = 0. Get rid of the pre-1990 values (I’m not sure if the model runs should also be smoothed first, but I think this has a relatively minor impact).

2) also center the smoothed temperature data to 1990, i.e., subtract the same value from the temperature data.

Notice the effect: almost all models are shifted downwards relative to the temperature smooth.

3) the Rahm-smooth (which is done on yearly data as opposed to the monthly data used by Steve) has some additional effects: the smooth gets “straighter” (as the filter length is longer) and the end sticks up more (and even more if you end in Dec 2006).
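The cost of single-year centering can be quantified with a quick Monte Carlo sketch (plain Python; the 0.1 °C white-noise level and 0.015 °C/yr trend are illustrative assumptions, not fitted values). Referencing anomalies to a single year inherits that year’s full noise, while a 30-year mean averages it down by √30:

```python
import random
import statistics

random.seed(0)
SIGMA = 0.1    # assumed interannual noise, deg C
TREND = 0.015  # assumed trend, deg C per year

def baseline_error_sd(center, trials=2000):
    """Std dev of the error a centering rule introduces into the anomaly baseline."""
    clean = [TREND * t for t in range(50)]           # noise-free series, years "1961-2010"
    errs = []
    for _ in range(trials):
        series = [y + random.gauss(0, SIGMA) for y in clean]
        errs.append(center(series) - center(clean))  # error comes purely from the noise
    return statistics.pstdev(errs)

base_30yr = lambda y: sum(y[0:30]) / 30   # 1961-1990 mean baseline
base_1990 = lambda y: y[29]               # single-year (1990) baseline

print("baseline error sd, 30-yr mean :", baseline_error_sd(base_30yr))  # ~ SIGMA/sqrt(30)
print("baseline error sd, 1990 alone :", baseline_error_sd(base_1990))  # ~ SIGMA
```

Under these assumptions the single-year baseline is about √30 ≈ 5.5 times noisier; centering on a smoothed 1990 value (as R07 effectively does) sits somewhere in between, depending on the filter length.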

Steve, could you also post the IPCC (TAR) model data you used in the graph to this site? I know the runs are available somewhere, but for testing purposes it would be a lot easier if they were available as a nice, straightforwardly downloadable collection of yours 🙂

Re: Jean S (#11),

Yes, I’ll post up a collation of scraped runs. The rest is all pretty easy.

I’ll experiment with some of the alternatives but, of the issues, like Jean S, I think that Rahm-centering on 1990 is the most important in creating the rhetorical effect – with a substantial Rahm-centering impact created only by the “remarkable” choice of a long triangular filter. I’ll do a run with the IPCC filter and linear extrapolation padding a la Moore-Grinsted-Rahmstorf. Like Jean S, I doubt that this tweak will affect things much if the IPCC filter is used, but it needs to be demonstrated.

As noted before, single-year centering was severely criticized in the blogosphere a couple of years ago and it is interesting to see the application of this method in R07 and the “most important” report since IPCC AR4.

Re: Steve McIntyre (#13),

They were against it before they were for it.

Re: Jean S (#11),

Indeed, when I first read Steve’s early posting I thought that centring on a 20-something triangular filter wouldn’t be too much worse than centring on a 30-year rectangular filter (which is all that really happens when referencing to 1961-1990).

But looking at Steve’s graph on this post highlights the dangers of doing this. It is clear that if you select a year on the smooth you can bias the result by as much as the confidence interval. (Which kind of makes sense). And eyeballing the graph, the two years which would boost the instrumental as high as possible relative to the models are probably… 1976 and 1989, I reckon. Hmmm…

Thinking about it, you need to make the standard error from the referencing of instrumental to the models much smaller than the error that you are assessing, which means having a MUCH longer reference than the smoothing being applied – otherwise your bias is as big as your CIs. Or, as so many have pointed out already, just don’t do the smooth.

I want to do a gag about being a smooth criminal, but perhaps it’s too soon.

“Nick Stokes, July 9th, 2009 at 4:41 am: ….. the endpoint condition in Chap 3 of the AR4 continues to bother me.”

What bothers me more is picking and choosing data manipulation techniques after-the-fact.

IPCC Chapter 3 has a clearly defined smoothing method and endpoint assumption method. Rahmstorf didn’t use that in 2007. He chose another method. Then he updated his chart for the Copenhagen Synthesis Report and chose yet another method. Incredibly, the Copenhagen report did NOT mention this change in method.

Do you agree with the history stated in the above paragraph? Do they “bother” you?

Re: Charlie (#15), In fact, the specification in Chap 3 of the AR4 appears far from mandatory. They say: “In this chapter, the same filter was used whenever it was reasonable to do so.” And when is that? “If there is a trend, the method is conservative in that it will underestimate the anomalies at the end.” In other words, it’s not the method to use if you want an accurate estimate of the trend at the end. And as I said above, you can’t expect it. It won’t even get it right for a straight line.

Re: Nick Stokes (#17),

You insist on accuracy at the endpoint but ignore the issue of precision. The uncertainty on the endpoints is higher, therefore the “accuracy” you seek to emphasize is simply not there. Sorry. Steve is right in #2 to emphasize the uncertain nature of the trend near the endpoint. Steadfast refusal to admit that Steve’s analysis is informative and insightful is telling.

Re: Charlie (#15), Re: lucia (#235),

There is a clash of precautionary principles here. The popular one here is that scientific investigation which might lead to spending public money should apply a high standard of certainty before publication. Another, popular in other places, and in my view also legitimate, says that climate change is a real danger, and that the data should be analysed to get the best possible estimate of where we are going, even where that estimate is uncertain. Rahm is providing that latter analysis, and whichever principle you personally choose to apply, I think it’s right that such an analysis should be done and published, as long as it gets the maths right.

By scope I meant, yes, applying a linear filter to a restricted set of data. I chose that word because, in PaulM’s example, you clearly could do better by recognising the periodic behaviour. But such attempts with temperature data have generally not succeeded. Fitting a straight line has been done many times – it gives better freedom from noise, reflected in less uncertainty in the parameters, but is affected by events in the more distant past which may be judged to be irrelevant to the current trend. R is trying, as many do, to balance the need for enough data to reduce noise effects with the need to not overweight past effects which may have little current influence.

Also Steve can you add a line where the “projections” begin (2001) and the curve fitting is done. Maybe gray it out before 2001 to emphasize the point.

Help me here. I’m just a layman trying to follow the discussion and perhaps I’m misunderstanding what most other readers take for granted. I assumed that the Rahmstorf filtering concept could only be applied to data, so that the most recent figure would be based on the 15 annual data points from 1995 to 2009. But now I’m thinking that this is so transparent – describing results that are actually arbitrarily old as a current trend – that nobody would do it.

But the alternative is that this filter is only triangular until the upper limit is now, after which the calculation is either asymmetric, hence inconsistent over time, or speculative. Neither makes any sense as a scientific result.

So while I’m observing intelligent and informed people having a discussion about whether Rahmstorf is right or wrong, I can’t get past the feeling that he’s simply meaningless. What am I missing?

Re: Rob Spooner (#19), In my opinion (contra IPCC) it is never OK to extend (reflect, extrapolate) the data past the end in the name of smoothing, because this creates the result that is your key hypothesis to test (it is a circular argument: your smoother interacts with your hypothesis). The only smoother that works close to the end is a backward-looking, e.g. 5-year, asymmetric filter that does no extrapolation.
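A minimal sketch of that backward-looking alternative (plain Python; the 5-year width is just an example). It makes no assumption about the future, at the cost of a known lag – (width − 1)/2 samples on a linear trend:

```python
def trailing_mean(y, width=5):
    """Backward-looking moving average: each smoothed value uses only
    data at or before its own time, so no padding or extrapolation is needed."""
    out = []
    for i in range(len(y)):
        lo = max(0, i - width + 1)      # window shrinks at the start of the record
        out.append(sum(y[lo:i + 1]) / (i + 1 - lo))
    return out

series = [0.1 * t for t in range(10)]   # linear trend, slope 0.1 per step
print(trailing_mean(series))            # last value 0.7 vs actual 0.9: lags 2 steps
```

The lag is the price of honesty at the endpoint: the smooth stops short of the latest value instead of guessing past it.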

Steve,

My apologies if this has been asked before, or if it’s a question so stupid as to at least give you a giggle: “What would be the effect of running the smoothing backwards (i.e. starting with the newest sample and moving to the oldest)?”. If there’s an odd artifact it’d then be on the oldest data and therefore not as much of an issue.

Statistics never was my forte :o)

Cheers

Mark.

Since we are talking smoothing and what sort is permissible for a given purpose, I want to present a suggestion which may be old hat or not. Why not do the smoothing to the point where the data runs out and then do one of two things.

1. Simply use the raw data to the present time and let the observer decide what if anything can be deduced. The point where the smoothing ends should be marked, of course.

2. Shorten the length of the filter to match the data until you reach the last point, which will then simply be the last point.

Of course, as Steve M points out, you need to be explicit and transparent as to what smoothing is used with the bulk of the data. And if it’s not a unmodified, standard smoothing, give the theory and code used to produce it.

Re: Dave Dardinger (#29),

There are many methods that might be acceptable. The real problem here is the opportunistic choosing of one method and then the other when the end data are trending up vs. down. If the community were to stick to one method, you wouldn’t have such opportunism to feed a confirmation bias.

Re: Dave Dardinger (#29),

Changing the filter length changes the range of frequencies that are filtered. Reducing the filter length reduces the amount of high-frequency “noise” that is removed.

No one has yet mentioned the problem that some of the 21st c. flatline may be attributable to natural low-frequency climate variability. The IPCC filter, being a decadal filter, isn’t going to help with the problem of naturally occurring low-frequency noise/variability.

Re Jeff Id (8:23, #24): Mmm, I like what you say, “We will learn more about a person’s reaction to a problem than from the problem itself.” Now, if you ask me, that is good, and can be applied in all kinds of situations, not just the murky and often benighted field of climate science. An excellent probe, IMO. Many thanks.

Jean S, I’ve posted a collation of all (49) A1B model runs at KNMI that have 1961-1990 reference values and don’t have gross problems (HadCM3). They are at CA/data/rahmstorf in a zip file (in ASCII), model names as header.

The script for collating the data is in the same directory for reference.

I downloaded the centigrade version and calculated anomalies using a 1961-1990 reference period.

Re: Steve McIntyre (#32),

thanks a lot!

IPCC 4 employs a short 5-period filter and a longer 13-period filter. The latter is described as follows (as quoted by Steve, in the previous thread on “The Secret of the Rahmstorf ‘Non-Linear Trend Line'”):

I’ve posted plots of these weights versus the TAR 21-term binomial filter and an even better fitting 19-term binomial filter, as well as versus 9- and 11-term triangular filters on comment 204 of the same thread. The following give the corresponding frequency response functions (as amplitude versus cycle period):

The first graph shows that the IPCC 13-weight filter is even more similar to the 19 weight binomial filter than it is to the 21-weight binomial, as is already suggested by the weights themselves in my earlier post. All give essentially 0 amplitude to cycles of period 4 or less, with no “ringing”. The amplitudes I find for the IPCC filter match those cited by IPCC above, so we are at least on the same page.

The second shows the objectionable “ringing” that occurs at high frequencies (short periods) with triangular filters. Otherwise, the 9-weight triangular filter (Rahmstorf’s M = 5) matches the IPCC filter very closely, as suggested by the weights in my earlier post.

The second graph shows that the original 21-weight essentially triangular filter (“M” = 11) of Rahmstorf, Hansen et al has a much longer half-amplitude and half-power period than the IPCC filter. Its half-amplitude occurs at about 25 years (vs about 11 years for IPCC), and its half-power (at amplitude = ~~.717~~ .707, since power is amplitude squared) occurs at about ~~35~~ 34 years versus 16 years for IPCC.

Rahmstorf’s new 29-weight triangular filter (M = 15) is not plotted, but would have roughly 50% longer characteristic periods, i.e. about 37 years for half-amplitude and 52 years for half-power.
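Hu’s half-amplitude and half-power figures can be reproduced without plotting. The sketch below (plain Python) bisects the main lobe of the triangular filter’s response, assuming the Rahmstorf filter is exactly triangular with 2M − 1 weights:

```python
import math

def tri_gain(period, M):
    """Amplitude response of a (2M-1)-weight triangular filter at a given
    cycle period (years, for annual data)."""
    f = 1.0 / period
    return (math.sin(math.pi * f * M) / (M * math.sin(math.pi * f))) ** 2

def crossing_period(target, M):
    """Period at which the gain rises through `target`; for periods > M the
    response is the main lobe and increases monotonically from 0 toward 1."""
    lo, hi = float(M), 400.0
    for _ in range(60):                  # bisection
        mid = 0.5 * (lo + hi)
        if tri_gain(mid, M) < target:
            lo = mid
        else:
            hi = mid
    return lo

for M in (11, 15):
    half_amp = crossing_period(0.5, M)
    half_pow = crossing_period(math.sqrt(0.5), M)   # gain 0.707 = half power
    print(f"M={M}: half-amplitude ~{half_amp:.0f} yr, half-power ~{half_pow:.0f} yr")
```

This yields roughly 25 and 34 years for M=11, and 34 and 47 years for M=15 – matching the corrected figures given later in the thread rather than the ~37/52 first guessed.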

Re: Hu McCulloch (#34),

Hu, can you plot up the M=15 case as well. That’s a very helpful graphic.

Re: Hu McCulloch (#34), “The first graph shows that the IPCC 13-weight filter is even more similar to the 19 weight binomial filter than it is to the 21-weight binomial, ….. ”

Just a minor quibble. The IPCC 13-weight filter has the same rejection of signals with periods of 6Δt or shorter as the 21-weight binomial filter, while having the same response to long-period signals as the 19-weight binomial filter. Best of both worlds.

A blowup of the lower left corner of the response plot of the various filters would show even more vividly the poor filter choice made by Rahmstorf (assuming that the Rahmstorf M=15 smoothing has the same ripples as the 29 triangular filter.)

Were I Rahmstorf, I’d be somewhat humiliated at this point and would be cranking a bunch of stuff through my smoothing routine to find out what its response really is rather than having to find out about it via another blog.

When I perform detailed analyses, I use very simple filtering. I remove any data point that does not support my theory or the intent of the study.

I don’t understand, however, why I am not the most famous analyst who has ever lived. And I still can’t figure why my peers seem to be laughing (or choking) when I describe my filters. I think they’re jealous, because I’ve never completed an analysis that didn’t prove my theory correct.

Per Steve #35,

The half-amplitude and half-power periods for M = 15 are a little shorter than I had guessed in #34, since the 29-period filter is not quite 50% longer than the 21-period filter. The actual values from my underlying table are 34 years and 47 years, resp.

(Note that the square root of .5, where the half-power occurs, is .707, not .717 as claimed in my earlier post!)

Re: Hu McCulloch (#37),

I suppose it’s obvious to those who’ve studied such things, but I notice that the M number is the highest point on the period axis where the amplitude = 0. The next highest such point is M/2. I think the rest are M/3, M/4, … M/n.

Re: Hu McCulloch (#37), Thank you for the informative plots. Assuming that the Rahmstorf M=15 filtering has a response essentially equal to your triangular 29 plot, then Rahmstorf and the Copenhagen Synthesis Report editor have yet another update to make.

Sometime in the last few days the caption to Figure 3 has been updated to read “Changes in global average surface air temperature (smoothed over 15 years)(corrected from 11 in the first version of this report)……..”

I’ll be watching to see if the editor will make another correction to indicate that “smoothed over 15 years” is not an accurate description.

Re: Hu McCulloch (#37),

Thanks very much for this graph. It is just what I was looking for. I don’t think that it is very intelligent to just keep on adding to the number of data points to produce a lower frequency cut-off, using a rather unimaginative distribution of the coefficients. I am fairly sure that a good digital filter designer would use the extra points to adjust the shape of the transfer function to be more like a brick wall at the same time as changing the frequency.

Of course, all the graphs only apply when smoothing/filtering with the full number of points available. The time varying response function at the end makes it a bit of a nightmare from a frequency perspective.

I just can’t see how one can use the last n/2 points in the series when using a symmetric filter without making assumptions about the, as yet, unknown future. To try to use this part of the line as confirmation of another set of assumptions built into a model is simply delusional.

There really are only three choices: dash the line and warn the reader that it is provisional and may be incorrect if the future is not as expected; put genuine error bands around the line; or, best of all, stop the line when this region is reached.

I can see some value in using frequency-sensitive filtering to try to tease out whatever cyclic processes may exist, or appear to exist, but I am sure that a linear regression with properly adjusted uncertainties is a far more powerful statistical test of trend. Naturally, there is still scope for arguing about start and stop dates, but the procedure is much better understood and transparent.

Bender

One of the great unknowns is whether any apparent cycles or trends are real in the sense that they have some deterministic cause compared to them being pulled from some random process. Even if they are deterministic, they may also be chaotic which does not help much with prediction.

As far as I see things, the modellers are telling us that we are on the upswing of a zero-frequency sine wave. Any low-pass filter that admits higher frequencies than zero is bound to be contaminated by low frequencies like 1/10,000 years up to a chosen cut-off, say 1/30 years. If there really are undulations in the data they will show up, but it may take a very long time to find out.

I think that weather noise is turning out to still have enough amplitude at frequencies lower than 1/30 years to make it hard to measure the zero frequency trend. Throwing in some uncertainty about the temperature measurement corrections makes it even harder to find the real trend. I just wish I could be around when the mystery is resolved. 🙂

Rahmstorf’s ignorance is striking. Hopefully he is learning something from this exposition.

To show the effect of smoothing using the minimum slope condition as in this post, along with the IPCC 13pt filter, I applied it to the Mauna Loa annual CO2 data (circles). Note how it represents the trend in recent years.
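For readers wanting to reproduce this kind of plot, here is a sketch of the approach. The 13-point weights below are the integer set summing to 576 commonly attributed to AR4 Appendix 3.A, and “minimum slope” is implemented as even reflection of the series about each endpoint; treat both identifications as assumptions, since the chapter does not spell out the computation.

```python
import numpy as np

# 13-point weights commonly attributed to AR4 Appendix 3.A (sum = 576);
# treat these as an assumption -- the chapter doesn't give the derivation.
w = np.array([1, 6, 19, 42, 71, 96, 106, 96, 71, 42, 19, 6, 1]) / 576.0
half = len(w) // 2   # 6 points would be lost at each end without padding

def smooth_min_slope(y):
    """Apply the symmetric filter with 'minimum slope' ends: reflect the
    series about each endpoint (even reflection), forcing zero end slope."""
    y = np.asarray(y, float)
    padded = np.concatenate([y[half:0:-1], y, y[-2:-half - 2:-1]])
    return np.convolve(padded, w, mode="valid")

# On a pure rising trend the reflection flattens the ends:
t = np.arange(30, dtype=float)   # y = t, slope 1
s = smooth_min_slope(t)
print(t[-1], s[-1])              # the smoothed end value sits below the data
```

Interior values reproduce a linear series exactly; only the last few points are dragged toward flatness, which is the distortion visible in the Mauna Loa example.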

Re: Nick Stokes (#39),

By contrast, here is the same 13pt smoother using Mann style MRC:

Re: Nick Stokes (#40),

wow !! An estimate for linear data gives a linear trend.

Ain’t Science a wonderful thing.

Re: Dan Hughes (#44), Dan, The result of #40 is unsurprising. The point is that Rahmstorf is being taken to task here for not using the scheme in #39.

Steve: Puh-leeze. #39 is an issue that I’ve barely mentioned.

Re: Nick Stokes (#52),

…on a dataset that has vastly different properties than #39 and #40

Re: Nick Stokes (#39),

IMHO, this is one case where padding (forecasting) a linear trend would appear reasonable – if it were disclosed.

Re: Nick Stokes (#39),

Are you saying that IPCC made an error? The horror.

Nick, there are multiple perils to smoothing – a sensible smooth in this case may beg the question in another.

But why would you want to smooth this series? Or has too much exposure to climate articles created an addiction?

Re: Steve McIntyre (#43),

Why would you want to smooth the series? A reasonable question one might ask is: what is the best estimate we can make of the rate of rise of [CO2] in 2008? A reasonable answer is the slope of the smooth in the second curve. A very bad answer is the slope of the first curve.

I noted above that in Chap 3 the AR4 did not say that the filtering they describe should be used in all cases. They said only that “In this chapter, the same filter was used wherever it was reasonable to do so.” It isn’t reasonable if you are looking for recent trends. It was not suited for Rahmstorf’s needs.

Re: Nick Stokes (#45),

Am I missing something? What are the details in your graphs? Which values are padded? How many are there? Indicate this on the graph. This isn’t Open Mind.

More importantly, what does an example on low-variation, pretty much linear data like this tell us about the situation we have been talking about? Not a lot.

I have a suggestion for you. Make it comparable to Rahmstorf’s situation. For example, with M=11: try deleting the last 10 values of either the GISS or Hadcrut global temperature sequences. Now run any of your 21-point smoothers using the MRC criterion. Compare the fitted to the actual values.

Re: RomanM (#53), No values on the graph are “padded” values. You could say that padded values for 2009-2014 were used to construct the asymmetric filters that calculated the smoothed points for 2003-8. Values up to 2002 just use the symmetric filter.

Steve: Nick, this is a minor semantic point. The literature in question routinely uses the term “padded” to describe the method. Yes, you can convert a given padding to an asymmetric filter, but the motivation for any particular asymmetric filter is provided by the padding. To my knowledge, the equivalence is not in dispute – the dispute is about begging the question, be it through “padding” or the equivalent asymmetric filtered results. And again, what is the purpose of smoothing for carrying out a statistical test?? Please cite a non-Team article which recommends such a procedure.

Re: Nick Stokes (#56),

The fact that it works with some data is not relevant. Why aren’t you making the comparison using the temperature data that was used by Rahmstorf? The Hadcrut data can be found here. The GISS global sequence is here.

Re: RomanM (#59),

OK, Roman, here’s Hadcrut3, with the 13pt filter. The green is unsmoothed annual. The red curve is basically Steve’s above, with minimum slope. The black is MRC. Its behaviour is dominated by the pinned end-point.
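The “pinned end-point” is easy to verify, and it also illustrates the padding/asymmetric-filter equivalence discussed above: fold Mann-style MRC padding (reflection about both axes) back into the weights and the smoothed endpoint collapses to the raw endpoint exactly. A sketch with a short triangular filter (illustrative parameters, my own code):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=50))               # any series will do

M = 5
w = np.convolve(np.ones(M), np.ones(M)) / M**2   # triangle filter, length 9
h = len(w) // 2                                   # half-width 4

# Mann-style MRC padding: reflect about BOTH axes at the end,
# pad[k] = 2*y[-1] - y[-1-k]
pad = 2 * y[-1] - y[-2:-h - 2:-1]
s_end = np.convolve(np.concatenate([y, pad]), w, mode="valid")[-1]

# Folding the pad back into the weights gives an asymmetric filter with
# weight w0 + 2*(tail sum) = 1 on y[-1] and 0 elsewhere: the endpoint is pinned.
print(s_end, y[-1])   # identical up to rounding
```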

Re: Nick Stokes (#62),

So which one do you think is the “correct” one in this case? Which one might Rahmstorf have chosen?

If he had been using minimum slope and changed to MRC without announcing the fact would you find this problematical?

Re: RomanM (#66), Rahmstorf has never used minimum slope, to my knowledge. In all the papers I’ve seen, he has used Grinsted’s variant of MRC, with either M=11 or M=15.

As I’ve said, I don’t think minimum slope can ever give a good end slope – the slope is part of the constraint.

Re: Nick Stokes (#69),

I guess you missed the point of what I said:

It’s one of those rhetorical questions like “if he had changed the parameter without telling anyone…?” which I asked in relation to your graph to illustrate the point of these posts, a point the implications of which you seem not to fully appreciate. These minor alterations stem from the apparent need to ensure that NO information from the AGW activist community allows any inkling of doubt that the entire agenda is anything but completely correct.

Which method gives “better” results is just a red herring here. The answer to that question depends on the nature of the data at hand and the intent of the smoothing. There may be wrong answers, but there is no single “best” method for every case.

Re: RomanM (#59),

I’ve run three different smoothings on the HADCRUT3 series that I’ll send you for whatever use you care to put them to.

Re: John S. (#68),

Although I can probably run some smoothings myself, I would be pleased to look at what you have done. If you like, I can drop you an email – I have ways of getting your address.

Re: RomanM (#75),

By all means do. The unilateral smoothing might interest you.

Re Nick Stokes #39,

Good graph, Nick! The IPCC’s minimum slope criterion (reflective padding) provides an unbiased estimate of the final value of a driftless random walk plus noise, but a biased estimate of the final value of a trendline plus noise. Mann’s double reflective padding (“minimum roughness criterion”) gives an unbiased (if noisy) estimate of the final value of a trendline plus noise but a very noisy estimate of the final value of a driftless random walk plus noise.
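Hu’s bias claims are easy to check by simulation. A sketch using a short triangular filter as a stand-in (illustrative parameters; the walk is taken without extra noise for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
M, n, reps = 5, 60, 4000
w = np.convolve(np.ones(M), np.ones(M)) / M**2   # triangle filter, length 9
h = len(w) // 2

def end_min_slope(y):
    """Smoothed final value under minimum-slope (even reflective) padding."""
    pad = y[-2:-h - 2:-1]                        # mirror about the endpoint
    return np.convolve(np.concatenate([y, pad]), w, mode="valid")[-1]

slope = 0.1
bias_trend = np.mean([end_min_slope(slope * np.arange(n) + rng.normal(size=n))
                      - slope * (n - 1) for _ in range(reps)])
bias_walk = np.mean([end_min_slope(np.cumsum(rng.normal(size=n)))
                     for _ in range(reps)])

print(bias_trend)   # systematically negative under a rising trend
print(bias_walk)    # near zero for a driftless random walk
```

For this triangle the trend bias is about −2 × slope × Σ k·w_k ≈ −0.16, while the random-walk estimate is unbiased but noisy, matching Hu’s description.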

Perhaps a good compromise, one that is transparent and not based on any assumption about the presence or absence of a trend or dubious padding, would be to just shorten the filter length as the end point is approached, depicting the exceptional final period with a dashed line to indicate the changing filter length. The final point, which then isn’t really smoothed at all, would just be shown by itself without even a dashed smoother. As the filter and therefore the effective sample gets shorter, the measurement error, if appropriately estimated, would become much larger than in the interior of the data.

It is trivial to shorten a triangular or centered rectangular filter (with an odd number of points) as the end is approached. If we knew what formula the IPCC filter is based on, it should be trivial to shorten it as well, but they don’t say how it was computed. Presumably it’s an integer-valued approximation to one of the many good filters that are out there on the shelf.

Showing the shortened filter period with a dashed line would have the additional advantage of providing the reader a visual image of the filter half-width.
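Shortening a centred triangular filter toward the ends is indeed trivial; a sketch of Hu’s proposal (my own construction, not a published routine):

```python
import numpy as np

def smooth_shrinking(y, M):
    """Centred triangular smooth whose half-width shrinks near the ends
    instead of padding; the final point comes through unsmoothed."""
    y = np.asarray(y, float)
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        h = min(M - 1, i, n - 1 - i)      # largest symmetric half-width available
        k = np.arange(-h, h + 1)
        tri = (h + 1) - np.abs(k)         # triangle weights 1, 2, ..., h+1, ..., 1
        out[i] = np.dot(tri, y[i + k]) / tri.sum()
    return out

y = np.sin(np.arange(40) / 5.0) + 0.1 * np.arange(40)
s = smooth_shrinking(y, M=15)
print(s[0], y[0])   # endpoints are the raw values
```

A plotting routine could switch to a dashed line wherever the half-width falls below M − 1, exactly as proposed.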

Re: Hu McCulloch (#46),

I see nothing wrong with that approach.

Re: bender (#60),

A better approach is not to smooth for the purpose of hypothesis testing. I ask once again – what’s wrong with a non-nonlinear line?

Gee.

I wonder if these results will be greeted by the press with the same acclaim as Ram’s paper. Will we see numerous stories about how new research shows that global warming is less than expected?

Now I’m a novice at applying filters to climate data (I do electronics), but wouldn’t a Bessel filter be better, since it doesn’t ring and preserves phase (flat group delay)? The roll-off is not so sharp, though.

Let’s step back for a minute and reflect on what Rahmstorf was trying to do.

Like Santer, he was trying to say whether observations were “worse than we thought” relative to an ensemble of models.

In this sense, a smoothed “nonlinear trend line” is a sort of statistic. Unfortunately, any padding scheme, however plausible, begs the question – as the result depends critically on the padding scheme. There’s a very simple way around this problem.

Instead of using “nonlinear trend lines”, use linear trend lines. They don’t depend on the future. And they go right to the present. Plus lots is known about their properties.

Plus recall Santer’s arguments about the range of uncertainty even in trend lines of satellite observations. Surely the cone of uncertainty for “nonlinear trend lines” has to be that much greater. (UC and Jean S are, as usual, a step ahead on this, as they are already peering into Rahmstorf’s uncertainty calculations. Based on experience in “simple” parts of the algorithm, there’s little reason to be confident that they’ve accurately calculated the confidence intervals of their estimates.)

If the “linear trend line” is in the upper part of the models, then that is something that is far more convincing than using a method that ends up assuming the answer.

RE Dave Dardinger #42,

I hadn’t noticed this property of the triangular filters, but checking my underlying tables, you’re exactly right. I can’t prove it’s true, but it does seem to be a valid generality.

I omitted periods less than 2 intervals, which unless I am mistaken is the Nyquist period, below which “aliasing” occurs. (Indeed the graphs go nuts if extended down there.)

It should be remembered that Rahmstorf’s filter is only approximately triangular (for reasons that were obvious to Steve, but over my head), so that my triangular filters are not true Rahmstorf filters. But from Steve’s previous post, they are a close approximation.

Re: Hu McCulloch (#50), Hu, I believe the function you are plotting is (sin(u)/u)^2, where u=M*pi/T, T=period.
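Taking Nick’s closed form on trust, the half-amplitude and half-power periods quoted in #34/#37 drop out by solving (sin u/u)² = 0.5 and (sin u/u)² = √0.5 for the period. A bisection sketch (my own code, no special libraries):

```python
import numpy as np

def gain(T, M=15):
    # Nick's closed form for the (length 2M-1) triangular filter response
    u = M * np.pi / T
    return (np.sin(u) / u) ** 2

def solve_period(target, M=15):
    """Find the period T at which gain(T) hits `target` (gain rises with T)."""
    lo, hi = 2 * M, 10 * M
    for _ in range(80):                      # simple bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gain(mid, M) < target else (lo, mid)
    return 0.5 * (lo + hi)

half_amp = solve_period(0.5)                 # gain = 1/2
half_pow = solve_period(np.sqrt(0.5))        # gain**2 = 1/2, i.e. gain = 0.707
print(round(half_amp), round(half_pow))      # approx. 34 and 47 for M = 15
```

This reproduces Hu’s 34-year and 47-year figures for M = 15, with the half-power point at amplitude 0.707 as corrected above.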

Clarification: Compare the “padded” to the actual values.

Jorge (#221 on old thread):

There’s still a point that needs to be clarified: the difference between a “one-off” transient signal, which is zero outside some finite interval, and a continuing “power signal” that manifests non-zero values indefinitely. But I do agree with you about the general lack of rigorous grounding in these matters so sadly evident in “climate science.”

Questions that Stokes is currently avoiding:

1. “Why aren’t you making the comparison using the temperature data that was used by Rahmstorf?”

2. “Please cite a non-Team article which recommends such a procedure.”

Let’s keep a running count, shall we? Have I got them all?

Re: bender (#61),

Another question Nick Stokes avoided/ignored:

Charlie (#15), “IPCC Chapter 3 has a clearly defined smoothing method and endpoint assumption method.

Rahmstorf didn’t use that in 2007. He chose another method.

Then he updated his chart for Copenhagen Synthesis Report and chose yet another method. Incredibly, the Copenhagen report did NOT mention this change in method.

Do you agree with the history stated in the above paragraph?

Does this history “bother” you?”

Re: Charlie (#63), I haven’t ignored this. It’s been amply covered above. A few points:

1. I haven’t ignored the IPCC-specified endpoint condition – I’ve been trying to show why R was well advised not to use it.

2. R et al 2007 was submitted in Oct 2006. The AR4 advice came out in about May 2007.

3. R did not choose another method in 2009. He used M=15 instead of M=11. As I’ve said above, in 2007 he wrote another paper which looked at various M values and discussed their merits. In that paper he used M=15. There’s no “right” value.

Re: Nick Stokes (#70),

I told you previously that the same advice was in the Draft Reports available long before Rahmstorf. If Rahmstorf disagreed with the IPCC recommendations, then he should have submitted Review Comments objecting to them.

Re: Steve McIntyre (#71), But Rahmstorf (or whoever) also doubtless did the work long before the 7-author paper was finally submitted. Are they supposed to spot a draft text (which is quite ambivalent), scrub what they’ve done, and rework it on that basis?

Re: bender (#61), Well, Bender, answer 1 is above. On Q2, I’ve already cited the Australian Bureau of Statistics on the use of extrapolation, and also two noted NZ statisticians here. They in turn cite a list of authors going back to 1877 on the use of extrapolation endpoint conditions.

re 70:

“As I’ve said above, in 2007 he wrote another paper which looked at various M values and discussed their merits. In that paper he used M=15. There’s no “right” value.”

Kewl. Let’s use M = 3.

I see no scientific value whatsoever in smoothing. none.

Re: steven mosher (#77)

In fact, the range he looked at started at M=2.

Re: Nick Stokes (#79), Sort of sympathetic with what you are saying but I have been reading the Rahmstorf 2007 sea level paper and wouldn’t say that his testing of M=2:x was anything more than optimizing his message, or that there was any more understanding of what the method was doing. I suspect that his results may be an artifact of the series padding and smoothing technique there too.

Re: David Stockwell (#81), David, what he actually did there was to use smoothed sea level and temperature curves to calculate rates of change, and then look at the regression of sea-level change on temp change. From that you get a single number, the slope, and he wanted to test its sensitivity to the choice of M. He looked at a range from 2 to 17, and found the slope varied from 3.5 to 3.2 (appropriate units). So it’s not just optics – that’s a numerical sensitivity test. Most of the data there is not end-range, so extrapolation is a minor issue.

And in a way SteveMo has a point there – I’d like to have seen him test M=0.

Re: Nick Stokes (#84), I don’t want to talk about it much until I do it, but the key points of highest delta SL are in the end-range. Chow.

Re: David Stockwell (#85),

Ouch ouch ouch.

It’s ciao (Italian) or tchau (Portuguese).

Re: Nick Stokes (#79), Like I said, no scientific value whatsoever. none.

I find the backward-and-forward discussion over various filters amusing because it amounts to the following: if you know what the underlying data generating process (DGP) is, you can construct an appropriate smoother that is not misleading at endpoints. The corollary, unfortunately, is that smoothing cannot help you to discern the underlying DGP, as all it reveals is your assumptions. Smoothing in this case is not an analytical tool – it is an optical tool. A smoother returns your assumptions and nothing more. If you wish to discern the underlying DGP you must look elsewhere.

OT (related to #59): what happened to http://hadobs.metoffice.com/hadcrut3/ ???

JeanS

It looks as if the whole Hadobs site is down.

Re: Bishop Hill (#89),

Yep, it seems to be up again now.

WM Briggs has written some articles on smoothing of time series in his blog:

http://wmbriggs.com/blog/?s=smoothing&submit=Go

My favorite: Do not smooth times series, you hockey puck!

To defend Rahmstorf’s use of a method he clearly didn’t understand is ridiculous. But Rahmstorf didn’t just use this method, he advertised it as something novel that others should be using. So, Nick Stokes, what are you playing at? Your position is absurd. No credible scientist would defend such clownery?

Re: bender (#93),

Really? In a published paper? Where?

His method isn’t perfect, but it’s better than the alternative that was proposed here.

Re: Nick Stokes (#99),

Quite right. It was Moore et al. 2005 – and Rahmstorf was not among those advertising this method in Eos. Still, Rahmstorf chose to employ a method without investigating its properties. Which is still a sin. He clearly did not understand what the algorithm was doing. Which is a sin.

As for the true slope of the GHG-forced trend: time will tell. Smoothing does nothing but reflect back your assumptions – as someone above suggested. At the moment it seems to me the alarmist rhetoric of 1998 (specifically MBH98) looks increasingly ridiculous with each passing day. Time will tell.

Meanwhile, why not tie mitigation and adaptation incentives to the weight of the actual evidence – as Ross McKitrick has so logically suggested?

Re: Nick Stokes (#99),

Just admit it. You are upset that the last few time points (ten years’ worth of data) don’t fit your hypothesis and you support the distortion of graphics to suit your belief-driven purpose, whatever method is required to do so.

Gavin Schmidt himself has said that 10 years’ worth of deviation from the long-term GMT trend would be a concern for his assumptions about the strength of GHG forcing. What length of deviation would it take before you had similar concerns?

Re: Nick Stokes (#99),

Are you implying that anything that is said outside of published papers is not open to question? Or is it that anything said outside of peer review should be ignored?

Are non-peer-reviewed statements like statements you make in Las Vegas? “What happens in Vegas stays in Vegas.”

Re: Patrick M. (#103), No. I was just trying to pin down where I should be looking. I wasn’t aware of anything published, but I need to bear in mind that some folks here seem to have memorized everything R has ever said on blogs, and expect that knowledge (which I lack) as part of the discourse.

Anyway, it seems bender was thinking of someone else, so it’s moot.

Re: Nick Stokes (#99),

Really? How is a nonlinear trend better than a linear trend? Can you show a reference in the statistical literature to support such a view? Absent a reference, can you show the properties that make it superior? If you are willing to try, are you convinced R understood the properties of both methods correctly? In your view, was R’s choice an enlightened one or was it merely a fortunate happenstance?

Re: Ron Cram (#105),

There’s no difference in the “linear trend” properties of the filter proposed here and R’s method. The contrasts are in the filter shape and the endpoint treatment. Are you convinced you understand the properties of both methods correctly?

R’s use of the words non-linear, which SteveM had some fun with, actually refers to the non-linear dependence of the filter weights on the data. But as Steve also points out, this doesn’t really matter, because it turns out to be pretty much a triangle filter.

I agree with Steve that R should have left SSA alone. Its merit is in providing a family of EOFs, and it has a non-parametric aspect. But to use just one EOF makes it a heavy-handed way of doing what is effectively linear filtering, and the filter isn’t ideal, although not bad. But I think his end-point treatment is the best he could have chosen, and since Mann’s paper explains the considerations quite well, his choice may well have been enlightened.

Re: Nick Stokes (#106),

What? How can you write:

If anything has been proposed, it is no smoothing at all.

Re: Ron Cram (#114), Ron, my discussion of smoothing here (#118), was intended as a response to your post too.

Re: Nick Stokes (#119),

You are confusing averaging over a field with smoothing a time series to get a prettier line. One is acceptable and one is not. Matt Briggs has written two blog posts, which have been linked here repeatedly, on why smoothing a time series is wrongheaded and why it is especially wrongheaded to use a smoothed series for further analysis. Have you bothered to read them?

“Do not smooth times series, you hockey puck!”

Do NOT smooth time series before computing forecast skill

This is exactly the kind of information climate scientists seem to be ignorant of. All of the alarmists in climate science make the same mistakes and think it is okay because everyone else does it. It is not okay. Their failure to interact with qualified and informed statisticians like Briggs and McIntyre causes them all kinds of problems, embarrassments and public dunkings like this thread.

Re: Ron Cram (#132), Yes, Ron, I have read Matt Briggs’ posts; in fact, I referred to one here. I largely agree with the second post – if you are computing a statistic like a regression slope, you should avoid smoothing as far as possible – it does nothing to help there, and will probably cause problems. I don’t agree much with his first post. Take his key statement

“The real temperature is the real temperature is the real temperature, so to smooth is to create something that is not the real temperature but a departure from the real temperature.” Not true, for the reasons that I’ve stated. Any “real” temperature that you’ll encounter in climate talk is already a highly smoothed, processed object. You make a smoothing decision when you decide to plot annual data rather than monthly data. And then monthly rather than daily.

Re: Nick Stokes (#135), Again, not the point.


Your argument reads like a lesson in the fallacy of accident (or the fallacy of absolutes):


Smoothing is bad. Annual averages are a form of smoothing. Therefore, annual averages are bad.


You provide this as a counterexample in an attempt to equate R’s actions with a well-accepted practice. While you can certainly define smoothing such that it includes annual averages, no one else here has signed on to that definition. This is because averaging is a mathematical operation that is defined independently of smoothing and – as you yourself correctly identified – requires an additional sampling frequency change that the 12-month classical moving average smoothing does not.

You don’t get to change definitions once the argument has started. And everyone with at least some statistical knowledge has stated that the problem with R’s method and paper is that he uses a smoothing technique that has no well-defined statistical theory in order to perform an analysis about the fit of model predictions to observations. You then provide an example that does not meet the problem criterion – that it is a method without a well-defined statistical theory – and you present it in such a way that your argument is logically fallacious.

R’s use of a filter he did not understand to perform a shoddily-constructed hypothesis cannot be defended. If you wish to argue the innocence of his intent, great. But there is no defense of the scientific value (or lack thereof) of the result.
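For what it’s worth, both halves of the averaging-vs-smoothing dispute above can be made concrete in a few lines: annual averaging is a 12-point boxcar filter (Nick’s point) followed by a 12:1 sampling-rate change (RomanM’s point). My own illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
monthly = rng.normal(size=120)               # 10 years of monthly anomalies

# Annual averaging = 12-point boxcar filter, then 12:1 decimation
boxcar = np.convolve(monthly, np.ones(12) / 12, mode="valid")
annual_via_filter = boxcar[::12]             # keep every 12th filtered value
annual_direct = monthly.reshape(10, 12).mean(axis=1)

print(np.allclose(annual_via_filter, annual_direct))   # True
```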

Re: Nick Stokes (#135),

You are still making the same mistake that I pointed to earlier. Simply restating your mistake does not shed any light.

Re: Dave Dardinger (#152),

I agree when you write: “Thus what’s called a “running average” is actually a smoothing, while a yearly or monthly average is not.” However, I am not sure I agree “if the proper distinctions are kept in mind, I’m sure a case for smoothings can be made.” I am willing to listen if you are willing to expound. But I want to say upfront that I do not think I will never agree smoothings are necessary for display. To think that way is to look down on your audience.

Re: Ron Cram (#161),

I think you meant “ever”, not “never”. I also think as you do about this, but that doesn’t mean that a “case” can’t be made for smoothing. As I understand it, “making a case” means producing a line of reasoning which tries to show the validity of something. Proving the case is something altogether different. And “making the case” differs from what we see in typical Team or Climate Community presentations which are normally either arguments from authority or pseudo arguments where either data or methods are missing at some critical point. If you can’t say, “we used THIS data and did THIS to it and have reached THESE conclusions about the results”, you haven’t made a case if the things in caps aren’t available. Or you might say, “We make THESE assumptions and produce THIS model (simple or computer based) and get THESE results.” Still other sorts of case making could be devised depending on the area of study, but the point is that they’re testable or at least arguable as given and not after endless attempts to get what’s needed to be tested or argued for or against.

Re: Dave Dardinger (#166),

You are absolutely correct. Sorry for the typo. I am sorry I misunderstood your other point. I assumed you meant it was possible to prove the case under certain circumstances.

Re: Nick Stokes (#99),

I haven’t “proposed” an alternative smoothing method. I observed that the IPCC smoothing method was different than Rahmstorf’s and he provided no justification for the weird method that he used. While I’m some distance from making any prescription on these things, I did observe:

You say:

What is your justification for the strange (IMO) assertion that hypothesis testing using smoothed data is “better” than testing using unsmoothed data.

Re: Steve McIntyre (#111), I wasn’t saying that it was better than using unsmoothed data – I was saying that it is better than using the procedure described in Chap 3. This you described as a sensible policy, you criticised him for “abandoning” it, and demonstrated an implementation of it. That seems to me hardly different from proposing that he should have used it.

I think you’re generally right about hypothesis testing, but that isn’t what was being done in the R et al 2007 paper.

Re: Nick Stokes (#112),

The IPCC policies that I described as “sensible” for filters were:

Instead of using an “easily understood and transparent” filter, Rahmstorf used an obscure methodology that was not understood by either the original authors or Rahmstorf, and whose interpretation required deconstructing a concatenation of sources; Rahmstorf’s description of the method remains inaccurate. He continues to substantially understate the effective period of the filter.

I endorse IPCC supporting “easily understood and transparent” filters and wish that they would adhere to transparency policies in other areas. I am baffled at your insistence that Rahmstorf’s goofy manifold embedding stuff is “better” than an easily understood and transparent filter and suspect that you are just pulling our legs here.

Re: Steve McIntyre (#113), I’m certainly not insisting that the SSA method is better than linear filtering. In fact, I said exactly the opposite earlier today. Your heading says “Rahmstorf rejects IPCC procedure”, and with the following article and demonstration, clearly implies that he shouldn’t have. The “IPCC procedure”, or the procedure the authors of Chap 3 said they used “wherever it was reasonable to do so”, included using, as you did, the minimum slope criterion.

Re: Nick Stokes (#112),

Nick, please stop digging. You are now really clutching at semantic straws. Even if a formal statistical analysis is not presented, the fundamental (and only) purpose of the paper is the comparison of observations to model predictions. If it looks like a duck, and walks like a duck…

I often write about the constant use by climatologists of the signal-noise metaphor to describe what they are dealing with – a point that Matt Briggs agrees with 100%.

Financial analysts deal with noisy series as well – that’s the point of the Rahmstorf and Associates post – but they don’t use signal-noise language to describe financial markets. They are what they are.

Most of the literature on filters is based on situations where signal-noise is involved. Smoothing a time series with a short realization is a different animal. There is no statistical purpose to the smoothing, as it eliminates data. In a graph, its only purpose is visual.

A linear trend is a form of “smooth” as well – but it’s also a statistic – and the properties of that statistic are known.

If people want to use “nonlinear trends” as a type of statistic – which is what Rahmstorf is doing, then you have to develop a statistical theory about the behavior of this statistic, which they haven’t done.

Re: Steve McIntyre (#95), One can’t develop a “nonlinear trends” statistical theory when the method of construction can change at will – there is nothing stable about it to compare anything to.

Re: Steve McIntyre (#95), I agree, and would like to ask why filtering is being used to show some underlying variation when the common theme in the end is to say weather = linear trend + noise. Seems like people just like to make pretty curves. If Rahmstorf is going to filter to show some underlying behaviour, he should state the frequency range the filter acts in and why he is using this filter, i.e. what underlying process it is showing.

Forgot? I’m seriously afraid that they might not have been aware of it in the first place.

Lucky for him, Rahmstorf didn’t need to calculate no stinkin’ error bounds for his temperature sequences. 😉

Hey I think I can help with this smoothing problem.

We stock market chartists often use simple or exponential moving averages to try to understand trends. Often traders claim that when, say, a ten-day MA crosses down below a 20-day MA, it is the time to sell, as a downturn trend has been established. It works sometimes and sometimes not. (The real secret in short-term trading, however, lies in good stop-loss management, not trend identification – but that’s another story.)

All that’s just background. My point is that charting theorists insist that moving averages should be shifted to the left by N/2 periods and that there is no legitimate way to project them further forward to today, let alone tomorrow. Should someone tell RAM that?

Incidentally, there is a way of making money out of the market. Perhaps I will tell you sometime or you could just read up on Buffett.

Re: Ausie Dan (#107),

“Should someone tell RAM that?” No. What you’re describing is the requirement for centred (zero-lag) smoothing. And R’s symmetric filter with MRC end conditions does just that. The loss of smoothing near the end is the price.

Following on from my last post #107, I think I’ve got the answer to the whole problem. If Steve could just do two runs using Ram technology, one with M=11 and the second M=15, then when the shorter-period line crosses down through the 15-period line, the global warming crisis will be over. Steve can then tell Obama and Rudd (Australian Prime Minister) that they have saved the world from disaster and can go home, victorious. It may just save the world too, from financial meltdown, as they try unsuccessfully, King Canute-like, to pull CO2 levels down.

Simple enough really if you think about it quietly for a few minutes!

Please explain in simpleton’s terms what Rahmstorf has done?

Re: davod (#110), “Please explain in simpleton’s terms what Rahmstorf has done?”

I can explain it from my (simpleton’s) viewpoint.

The Copenhagen Synthesis Report has a graph, Figure 3, showing that temperature observations since 1990 have been

towards the upper end of the IPCC Third Assessment Report projections.

This graph, its caption, and the associated text are misleading because of 1) smoothing done to the data, 2) undisclosed changes in the smoothing method since this graph was first published by Rahmstorf in 2007, and 3) the recentering of both the projections and the data on the single year, 1990.

The recentering issue is more complicated and subtle than the smoothing but has a much more dramatic effect. The other important underlying issue is the one of lack of transparency.

—————–

More detailed account …..

Rahmstorf published a peer-reviewed paper in 2007 comparing temperature observations to the IPCC Third Assessment Report projections. In this comparison he used a unique smoothing of the temperature data. This blog's author, Steve McIntyre, wrote a paper critiquing Rahmstorf's paper. He and others tried to get more details on the smoothing and data manipulation, but Rahmstorf was uncooperative.

The Copenhagen Synthesis Report recently issued has a figure that updates Rahmstorf’s 2007 graph. The caption and text related to this graph implied that it used the same smoothing technique as Rahmstorf’s 2007 paper.

It turns out that he had changed the length of the smoothing period, which significantly changes the appearance of the graph. This was discovered by people digging through the literature and trying all sorts of filtering techniques and parameters with the goal of replicating his graph. Eventually they came to the conclusion that he had changed a parameter to M=14. On the RealClimate.org blog Rahmstorf said that the parameter was really M=15. He sent a note to the editor of the Copenhagen Synthesis Report and the caption was updated to say “smoothed over 15 years” rather than the original “smoothed over 11 years”.

The reverse engineering by others of Rahmstorf's filtering method has shown that it is essentially equivalent to an averaging filter of 29-year length with weighting factors of a triangular shape (for comparison, a simple moving average can be described as having weighting factors of a rectangular shape). On his blog realclimate.org, Rahmstorf has rejected arguments that “smoothed over 15 years” is a misleading caption, saying that it is common to describe filters by their half-power widths. Unfortunately, analysis done by a commenter on this blog has shown that the half-power width of Rahmstorf's M=15 filter is 47 years! (I have a comment on realclimate awaiting moderation asking Rahmstorf what he calculates as the half-power width of his filter.)

RECENTERING

This simpleton is still working on fully understanding the recentering issue, but the net effect of first smoothing projections and data, and then setting them equal at one particular year, is a comparison that varies widely according to which single year is chosen as the one where the (smoothed) data and the (smoothed) projection are set equal to each other.

My (perhaps faulty) understanding is that recentering is the equivalent of going to the graph at the top of this blog and moving the red line upward to the location of the black line at the year 1990.

Why use smoothing at all? Several commenters have pointed out that plotting smooth lines and eyeballing them is a poor way to test projections. I agree with this as far as testing the projection goes, but it appears that the real purpose of Figure 3 in the report is to convince people that observed temperatures have been towards the upper end of projections. The graph at the top of this post, using the procedures recommended by IPCC in AR4 WG1 Chapter 3, shows a quite different picture: one where the observed temperatures are now significantly below projections.

To Steve McIntyre: putting captions on the graph in your post would greatly help many of us in understanding.

Re: Charlie (#126),

nice summary! Here's a side effect of recentering on the actual measurements relative to the models. Pay no attention to those CLs in this figure 😉 , but notice that nothing else is changed but the filter length.

Re: davod (#110),

Please explain in simpleton's terms what Rahmstorf has done?

Summary: Basically, it appears Rahmstorf has a tale he's trying to tell for the Copenhagen Report: that the global mean surface temperature is rising even faster than the average of the IPCC models predicted. See this chart from his report; the claim he's trying to support is that the temperature trends (bold lines) fall into the upper half of the grey area. Unfortunately for him, recent data doesn't really support that claim, so he applied a smoothing filter that does. Instead of using IPCC-recommended smoothing (which would look like this) or even his own methods from 2 years ago (which would look like this), he used a longer smooth so that inconvenient cooling wouldn't cause his trendlines to droop.

I'm highly skeptical that he carefully picked his methodology and then impartially applied it to the data, which is how science is supposed to be done. Choosing your methods so you can show what you want to show is advocacy, not science, and I think he's been caught with his pants down here. Unfortunately, it's unlikely that anyone formulating policy will ever see any of this; what they'll see is the Copenhagen Report, without critique, and without any charts that would diminish the “consensus” narrative.

At this point R = Rahmstorf needs to be added to the list of acronyms as an alternative to the computer language of the same name.

No, verm, it isn’t semantics. If you think that a hypothesis is being tested, there’s only one quack test – say what it is. Can you?

But the sudden conversion in this thread over the last few posts to the advocacy of unsmoothed data is spurious. There is, in climate terms, no such thing, or at least nothing that you could put on a graph. Unsmoothed temperature data is a single instrument reading in a single place. To make climate sense, or even weather sense, you have to aggregate. You aggregate over space to make a regional or global average. You aggregate over time to make a daily or an annual average, in the process, smoothing out diurnal and/or seasonal variation. So in a plot of annually averaged global temperatures there has already been a huge amount of smoothing which has removed two major cycles. And that is generally where people start.

So “no smoothing” is a totally unrealistic aim. Everyone smoothes. To what extent? Until (and if) the data makes some sort of sense.

Steve: Nick, please stop making things up. I questioned the purpose of smoothing very early on and have re-iterated this point consistently in these discussions. Your allegation of a “sudden conversion in this thread over the last few posts to the advocacy of unsmoothed data” is a fabrication. Additionally, I've queried how Rahmstorf smoothed, but the statistical purpose of smoothing has been at issue right from the outset. “Everyone smooths”? Financial analysts don't always smooth.

Re: Nick Stokes (#118),

As a layman following this debate, I may not be qualified to judge some of the technical issues, but I can spot a logical fallacy when I see it. The one you seem to employ consistently is the “straw man” argument.

The purpose of the “straw man” argument is to divert attention from the real issue by switching the focus to a debate about a distorted, false representation of the opponent’s argument.

You did it, for instance, when you made the claim that what Steve proposes is to treat the data the way you did in comment 39.

And you are doing it again in the quoted text above when you try to switch the debate by taking on a purely fictional “no smoothing, not even averaging, not even the use of multiple instruments and multiple readings” argument that no one here has advanced.

Do you really expect us to swallow the notion that since everyone uses averages, the smoothing that R did is justified?

Re: Michael Smith (#120), Michael, I'm not saying that because everyone smoothes, all smoothing is justified. I'm countering the argument that because smoothing has some issues, we should avoid it. We can't. We've had to use it just to get data down to a manageable size. And RomanM (#125), when you go from monthly averages to annual averages, that is equivalent to performing a 12-month moving average (classic smoothing) and then sampling every 12th value. The sampling does nothing to undo the effect of the smoothing.
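The equivalence Nick asserts (annual binning = 12-month moving average followed by sampling every 12th value) is easy to verify numerically. A hedged editorial sketch with synthetic data, not part of the original comment:

```python
# Check: calendar-year averaging equals a trailing 12-month moving
# average followed by decimation (keeping every 12th value).
import random

random.seed(0)
monthly = [random.gauss(0, 1) for _ in range(120)]  # 10 years of monthly data

# Calendar-year (binned) averages
annual = [sum(monthly[12 * y:12 * (y + 1)]) / 12 for y in range(10)]

# Trailing 12-month moving average, then decimate
ma12 = [sum(monthly[i - 11:i + 1]) / 12 for i in range(11, len(monthly))]
decimated = ma12[::12]   # ma12[0] ends at month 11, i.e. the first full year

assert all(abs(a - d) < 1e-12 for a, d in zip(annual, decimated))
```

The high-frequency (sub-annual) information discarded by the moving average is exactly what binning discards too; the decimation only changes which output points are kept.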

Re: Nick Stokes (#134), Nick, you missed the point. For averages, there are well-defined statistical theories that allow us to evaluate the results. Unless there is a well-defined statistical theory for R’s filter, then any results obtained cannot be subject to any rigorous test. In other words, the conclusion is in the eye of the beholder. You are missing the forest for the trees.

Re: Ryan O (#136), Ryan, smoothing using a triangle filter (which as Steve says is virtually what Rahm is doing) is just weighted averaging, and is no more resistant to statistical analysis than binned averaging. In fact, it’s just a simple moving average performed twice.

And I’m not saying that smoothing, or annual averaging, is bad. I’m just saying that smoothing is inescapable. You still have to figure out whether any particular smoothing is helpful. But you can’t go back to no smoothing.
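Nick's remark that a triangular filter "is just a simple moving average performed twice" can be checked directly: convolving a 15-point boxcar with itself yields the 29-point triangle. An editorial sketch (not Rahmstorf's actual code):

```python
# Convolving two 15-point boxcars produces the 29-point triangular filter.

def convolve(a, b):
    """Full discrete convolution of two weight sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

box = [1 / 15] * 15                 # a 15-point simple moving average
tri = convolve(box, box)            # 29 points, triangular shape

assert len(tri) == 29
assert abs(sum(tri) - 1.0) < 1e-9   # weights still sum to one
assert tri.index(max(tri)) == 14    # peak at the centre
```

This is also why the triangle's frequency response is the square of the boxcar's, which tames, but does not eliminate, the boxcar's side-lobes.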

Re: Nick Stokes (#141),

Seriously, you need to start addressing the actual argument or simply be quiet.

Did R identify his filter as a triangular filter?

Is it possible for most readers to determine what R's filter even was, especially given his later incorrect statements that no endpoint padding was being used?

Did R perform any statistical analysis of his result based on that fact?

Was it possible, prior to the identification here that the filter was a triangular filter, for anyone to have performed any statistical analysis of his result?

Could the minor difference between R's filter and a triangular filter result in one of them failing to reject a null hypothesis and the other rejecting the null hypothesis?

In short, was it possible for any reader of R's paper to be able to independently assess the apparent agreement of the models with observations (even qualitatively)?

If not, then the smoothing used was entirely inappropriate. Nothing else you've said changes that.

And with that, I believe I've had my say.

Re: Ryan O (#142), Your questions:

Did R identify his filter as a triangular filter?

No. As I said earlier, the use of SSA is cumbersome. Since it is very similar to something simpler, it might have helped to say so, but complexity doesn't make it wrong.

Is it possible for most readers to determine what R's filter even was, especially given his later incorrect statements that no endpoint padding was being used?

He referred to the paper by Moore and Grinsted, and apparently used Grinsted's code. He's writing a one-page paper in Science. You don't get a lot of space for discussing smoothing techniques.

Did R perform any statistical analysis of his result based on that fact?

No. Again, in the context, there's little space for doing that. It is just the case that in the real world, plenty of people publish smoothed curves without an associated statistical analysis. Sad, but…

Was it possible, prior to the identification here that the filter was a triangular filter, for anyone to have performed any statistical analysis of his result?

Yes. Steve was able to identify, by argument about the first eigenvector, that the filter coefficients had little dependence on the data and were effectively triangular. And Jean S apparently had the actual numerical weights.

Could the minor difference between R's filter and a triangular filter result in one of them failing to reject a null hypothesis and the other rejecting it?

Unlikely, in my view. But Rahm was not doing such a test.

In short, was it possible for any reader of R's paper to independently assess the apparent agreement of the models with observations (even qualitatively)?

Yes. The data before smoothing was also shown on the graph (and is in any case generally available).

Re: Nick Stokes (#145),

There's unlimited space in Supplementary Information. If he's not going to write a proper methodological description, then archive the code, or provide information to people who ask. Instead, Rahmstorf obstructed Stockwell and published incorrect descriptions of his methodology.

We've not started on his confidence intervals yet. Jean S has sent me some notes and figures showing that Rahmstorf's confidence interval assertions are totally bogus as well. More on this some time.

Re: Steve McIntyre (#149),

Stokes, you are so done. Steve M hit the nail on the head here, replying exactly as I was about to: there's ample room in SI to describe methods. Indeed, it's hard to imagine someone being more obfuscatory or more opportunistic in his methodology. These are “sins”. Restate for me his “sins” (your rhetoric, not mine) in your own words. No linkies please. Other than that, you are done.

Re: bender (#153), I have linked to my own statements (another here). You haven't responded to this question (#146): You said (#38)

Rahmstorf’s ignorance is striking. Hopefully he is learning something from this exposition.

What do you think he should have learnt from this exposition?

Re: Nick Stokes (#141),

Depends, I’d say… unqualified usage should not be encouraged anyway. If just a visual aid for the untrained eyeball is needed, a relaxed approach may be OK. But if conclusions are drawn from the course of the smoother, it needs objective criteria for the choice of the method and the respective settings, just as for any other method. Rahmstorf et al. did not present any reasoning as to their choices – the method seemed to be around in their environment, and the endpoint treatment was used as implemented in the software. M=11 was initially deemed good, but then M=15 really better, no quantitative considerations invoked. Bad practice!

(“Numerical recipes” p.767, William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery)

Re: Nick Stokes (#134),

Your definition of “smoothing” is so broad that it is basically useless. So now calculating summary statistics is “smoothing”. Smoothing becomes a hammer and all statistical procedures are nails.

If I have 3 groups of observations and I calculate their means, is this defined as smoothing? If I then tell you these were measurements taken before, during and after some event does that make a difference? How about in year 1, 2 and 3 of a study? Does that make a difference? Or, does it only become smoothing when I graph the results? If you see all of these situations as “smoothing”, then your definition is pretty much useless since all measures of location are then just part of “smoothing”.

In the time series context, the situation is no different than the latter two cases I mentioned above. There should be a differentiating factor to separate smoothing from the agglomeration that you mentioned earlier. That difference is the time scale of the results. If I smooth monthly data, my results should be on the same time scale as the original data – the smoothed series should be a monthly series rather than annual. Yes, I agree that in this case, the annual values are a subset of a particular smoothed monthly sequence, but that is irrelevant. And, yes, climate scientists offer series that are “smoothed on a decadal or a century basis” and then present the summarized averages of subsequences of the original series. They also present daily “averages” of temperature data which are simply the average of the maximum and minimum temperatures. Statisticians would call these “mid-ranges”, but this is climate science so maybe definitions are looser.

How about dealing with the specifics of the thread, not with misdirected side-trips? I don’t think that you have seriously looked at the specific effects of the methodology in question, yet you offer sweeping acceptance of their end result. Try going through a thread where MRC has been discussed earlier. Maybe it might be enlightening.

Re: Nick Stokes (#118),

Talk about spurious. Calculating summary statistics is not “smoothing”. It only becomes smoothing if the original values are then replaced by the new calculated values.

If I average monthly temperature values to create a plot on a yearly scale, I would likely get a less variable set of points, but that does not constitute a “smoothing” of the monthly values, nor would I call it that. If I replace each monthly value by the annual average, then it becomes a smooth.

Ok, Nick Stokes, why don't you outline for us what you think Rahmstorf's actual sins were, as compared to the ones being imagined by the rest of us. Other than that, I think you're done.

Re: bender (#121), bender, I said something about that here.

Re: Nick Stokes (#139), I find your description of his sins to miss the most important parts.

Re: Ryan O (#140), Could I bounce back a question to you or bender. Bender said (#38)

Rahmstorf's ignorance is striking. Hopefully he is learning something from this exposition.

What do you think he should have learnt?

Re: Nick Stokes (#146), Perhaps he will have at least learned the characteristics of his smoothing methods. Perhaps he will realize that centering the model and the data on a single year is poor practice.

Over on realclimate, Rahmstorf has justified leaving the caption as “smoothed over 15 years” by saying that describing a filter by its half power width is an accepted practice.

My direct question of “what is the half power width of your M=15 filter?” was submitted over 12 hours ago, but is stuck in the realclimate.org moderation black hole.

Nick,

Of course it is semantics. As Steve has pointed out, the objective was very similar to Santer 08 of comparing observations to model projections. You can read Section 4 of S08 for a discussion of appropriate tests. What do you think the objective of R07 was?

How about a Full Salute for a true defender of science? John Brignell.

Take a look at this for a typical, British-understated, completely succinct summary. Wow!

And here’s the link:-

http://numberwatch.co.uk/2009%20July.htm#filters

RE Nick Stokes, 112:

In fact, Rahmstorf, Hansen et al (Science 2007, as quoted by Steve in the 7/1 post “Opportunism and the Models”), stated:

They have clearly stated a hypothesis about climate, that it is responding more quickly than the climate models had predicted. They state that GISS and HadCRU temperatures have increased .33dC since 1990 (with no uncertainty worth mentioning), and that this is in the upper part of the range projected by IPCC, as shown in their accompanying graph:

Clearly the point estimate of temperature is above the scatter of model point estimates and is approaching the upper boundary of the projected range, evidently some sort of confidence interval. They don't claim to have rejected the implicit null of no difference yet, but are claiming that the data is almost there.

This being “Science” magazine rather than “Speculation” magazine, the reader naturally assumes that a scientific hypothesis is being stated and (almost) rejected.

PS: Is it just my computer, or is WORDPRESS incredibly slow as a result of the ongoing (?) WWW slowdown?

I am not prepared to introduce “R” as an acronym for Rahmstorf or to sully the reputation of an excellent language by the association.

Stern measures may be invoked: people who sully the term R by using it for Rahmstorf do so at peril of having their posts deleted.

Re: Steve McIntyre (#128),

Heh, heh, heh.

Re Charlie #126,

Just to clarify, 47 years is the half-power width for a pure triangular filter of length 29 (corresponding to Rahmstorf's M=15). According to Steve (“The Secret of the Rahmstorf ‘Non-Linear Trend Line’”, July 3), Rahmstorf's filter, at least when applied to this data, is almost exactly triangular, as shown in Steve's diagram below for M = 11.

So Rahmstorf's actual M=15 filter may have a half-power width slightly different from 47 years. According to Nick, the filter weights respond to the data somehow, and Steve seems to leave this open, but even Nick seems to concur that M=15 would still be essentially triangular.
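Since the 47-year figure keeps coming up, here is a minimal numerical check (an editorial sketch: it uses an ideal 29-point Bartlett triangle, not Rahmstorf's exact SSA weights, so it only approximates his filter):

```python
# Half-power period of a 29-point triangular (Bartlett) filter on annual data.
import math

def bartlett_weights(m):
    """Symmetric triangular filter of length 2*m - 1 (m = 15 gives 29 points)."""
    w = list(range(1, m + 1)) + list(range(m - 1, 0, -1))
    s = sum(w)
    return [v / s for v in w]

def amplitude(weights, f):
    """Frequency response of a symmetric filter at frequency f (cycles/year)."""
    m = len(weights) // 2          # index of the central weight
    h = weights[m]
    for k in range(1, m + 1):
        h += 2 * weights[m + k] * math.cos(2 * math.pi * f * k)
    return h

def half_power_period(weights):
    """Bisect for the frequency where the squared response falls to 1/2."""
    lo, hi = 1e-6, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if amplitude(weights, mid) ** 2 > 0.5:
            lo = mid
        else:
            hi = mid
    return 1 / lo

w = bartlett_weights(15)            # the 29-point triangle (M = 15)
print(round(half_power_period(w)))  # → 47
```

The bisection works because the squared response only exceeds 1/2 below the cut-off; beyond the first zero the ripples are far too small to matter.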

Ever wondered how the models would compare to observations over a longer period? Wonder no more! Since the amazing methods of Rahmstorf et al are now available to us simpletons too, I was able to produce a 70-year comparison since the beginning of WWII:

The trend lines are just about to go below the lowest of the models. I guess we can now cancel the Copenhagen meeting in December.

Charlie and Miku:

Thanks for the explanations. The rest makes a little more sense.

A small thank you to those statisticians here who are making climate science better.

Mike Bryant

The submission and acceptance of this article for Science Brevia were abuses of process. According to a news release on the AAAS website, Science editor Don Kennedy describes such articles as “short, focused and understandable to non-specialists” which report the “kinds of findings [that] present themselves.” The news release also says that “Brevia … features research findings that are unusually clear, that can be explained in 600 to 800 words and that can be understood by an audience that may lack extensive background knowledge.”

The reporting of “the” global mean surface temperature increase as “0.33C for the 16 years since 1990” in both the NASA GISS data and the Hadley Centre/Climatic Research Unit data, by a group of authors which included representatives from both bodies, gave these statements an authority and definitiveness which misled reporters and communicators. The BBC report referred to the paper comparing projections with “WHAT HAS ACTUALLY HAPPENED”, and the CSIRO news release said that “The team, from six institutions around the world, reviewed ACTUAL OBSERVATIONS … from 1990 to 2006 and compared them with projected changes for the same period” (EMPHASES added). In short, the intended audience was misled about the significance of the reported findings.

snip – only for editorial focus

BTW, I don't know that a good distinction between averaging and smoothing has been made here. An average is self-contained for each point in the result, while in a smoothing a given data point is used in several or many points of the result. Thus what's called a “running average” is actually a smoothing, while a yearly or monthly average is not.

A case can be made that smoothings should never be used for scientific purposes, only for display, but if the proper distinctions are kept in mind, I’m sure a case for smoothings can be made.

Re: Dave Dardinger (#152),

I agree. There is little appreciation evident here of the difference between sample averaging, with its attendant statistical properties, and smoothing and/or decimation of unique measurements that constitute a time series. Filtering a time series is always a strictly deterministic operation upon the data points, with a precisely known effect. And low-passing has its legitimate uses if one is looking for low-frequency turning points in a series, after which an appreciable run of data in the opposite direction might be expected.

BTW, if anyone has the numerical values of the near-triangular SSA filter at issue here, I'd be happy to calculate the precise frequency response function. Of course, this would shed little light on the illegitimate finagling of the overblown “end-point problem,” which is strictly one of phase lag. Asymmetric causal filters can be devised with fairly similar response characteristics that reduce that phase lag appreciably.

Re: Dave Dardinger (#152), The key thing about both bin averaging and running averaging is that high frequency information is discarded – most obviously, say, seasonal information in annual averaging. That’s a fundamental reason why you can’t describe a set of annual averages as unsmoothed. A lot of information has gone, and you can’t get it back.

Re: Nick Stokes (#159),

That’s true, but only important if you’re interested in frequencies higher than bin size. If you’re trying to look at seasonal effects, it’s senseless to look at annual averages, however smoothed. You should use monthly or daily averages.

When you're trying to look at lower frequencies, keeping the appropriate averages without smoothing is better, because you can always make calculations from the “raw” averages which smooth them. But working from the smoothed data spreads effects over time periods with no real benefit. You might be able to back-engineer your smoothed data, but it's more work. Do your calculations from the binned data and then smooth the results, if necessary, to match a lower-frequency model.

If you want to see temperatures in the last millennium you might want to average ten yearly averages (which should come out essentially the same as averaging 120 monthly averages) to reduce the data set to 100 decadal points. Statistically it's easy to deal with such a data reduction. But doing a smoothing will do strange things to the statistics, and this appears to be what Rahmstorf et al. have done without knowing the effects their smooth has on the statistical properties of the new data set. And that, of course, doesn't even get into the actual subject of this thread, which is end effects.

RE Charlie #147, I also tried asking Rahmstorf a) what the precise weights are for his M = 15 filter, and b) what half-power period he computes for it, on RC this morning at 8:16. No response so far.

Re: Hu McCulloch (#155), I've tried twice in the last 24 hours. Neither of my comments or questions related to the half-power width of his M=15 filter has made it past the realclimate moderation filter.

Stefan Rahmstorf’s inline comments to #407 and #395 of http://www.realclimate.org/index.php/archives/2009/06/a-warning-from-copenhagen/#more-690 appear to have been made before your post #37 in this thread which showed that the half power width of a 29 point triangular filter is about 47 years.

I suspect that he is, or at least up through July 9th was, unaware of the half-power bandwidth of his M=15 data smoothing process, since he is justifying the updated Copenhagen Synthesis figure 3 caption of “smoothed over 15 years” by saying that it is common to describe filters by their half power.

It is a shame that the heavy moderation of realclimate.org prevents a discussion of what should be basic, non-controversial topics such as the half power width of a filter.

Re: Hu McCulloch (#155), the numbers are slightly data dependent. Jean S has given the numbers for Hadcrut3 here (comment #15535).

RE Nick Stokes #158,

Thanks. But has Rahmstorf ever confirmed Jean S’s values? It strikes me as odd that my question (and Charlie’s) to RC hasn’t gone through yet. I can see that it might take some time to answer it, or maybe Rahmstorf just won’t ever get around to answering it, but why should there be a delay posting the question itself?

Re: Hu McCulloch (#160),

Be patient, it took 5 days from Rahmstorf to answer my simple “did you change the filter length”-question 😉

RE Jean S #162,

For what it’s worth, here’s what I tried to post on RC yesterday morning:

There haven’t been any posts there since Saturday morning. Perhaps the “Real Climate Science” is settled?

I should have mentioned that my 47 years is years per cycle. Frequency can be measured either in cycles per time unit or radians per time unit. Is there anyone who measures period in time units per radian rather than time units per cycle? This would be shorter by a factor of 2*pi. The IPCC was describing its 13-weight filter in terms of years per cycle, not years per radian.

REVISION AND VERSION CONTROL

I noticed that climatecongress.ku.dk had updated the synthesis report without any notice, errata, revision number/date, or any other way for someone looking at the report to know that it had changed. The only hint is the Properties tab in Adobe Reader, which showed a July 1 revision date.

Even stranger, I noticed that the Potsdam Institute had made further, apparently typographical, corrections and had a third revision of the synthesis report at http://www.pik-potsdam.de/news/press-releases/files/synthesis-report-web.pdf/view , with an Adobe file revision date of July 7th.

Very surprising lack of control. Apparently the simple practice of having a simple errata or note on changes, and changing the version level when changing the document is something that had not occurred to the editors.

A student assistant at climatecongress.ku.dk has told me that a newer version will be issued later this summer, with a new figure in Box 2 (apparently unrelated to S. Rahmstorf’s work). He also informed me that they will be considering some sort of system to better track and illustrate changes to the document.

I found this lack of control and documentation somewhat surprising for a report so important to public policy decisions.

Is there a similar problem in tracking and documenting revisions to IPCC documents such as AR4, TAR, etc?

Re: Charlie (#164), Did you notice this typo?

What a cowboy outfit.

Using the filter weights deduced by Jean S for the 21-point symmetric SSA smoothing used by Rahmstorf, I’ve computed the exact amplitude response. It deviates from the analytically known result for the equivalent Bartlett (triangular) filter insignificantly, staying within -.0083 and .0143 of the latter. The positive discrepancies are concentrated somewhat above the half-power cut-off at ~.03 cycles per data interval, and the negative ones extend from ~.10 cycles per data interval to the Nyquist frequency (1/2 cycle per data interval).

The vaunted SSA methodology provides scant relief from the undesirable ripples seen in the triangular amplitude response. In fact, it creates a slight negative side-lobe at ~.27 cycles per data interval, though this may be an artifact of round-off error in the deduced filter weights.

It is precisely such ripples, particularly when they go negative, that create the “strange effects” of data smoothing that have disconcerted statisticians since the time of Yule and Slutsky, who knew nothing about filter response characteristics when they wrote before WWII. And to this day, some statisticians continue to view smoothing very suspiciously, without benefit of the analytic insights developed since then. The advocacy of simple time-averaging is particularly misguided, because such averaging introduces horrendous (up to .34) negative side-lobes in the nominal stop-band of the amplitude response.

Re: John S. (#169), Re: Hu McCulloch (#173), etc

The filter coefficients were directly obtained from the ssatrend.m program (it has the output tfilt, which contains the filter coefficients). I uploaded the coefficients for M=1,..,50 (columns) using annual HadCRUT3 series (up to 2008): SSA filter coefficients.

Re: Jean S (#176),

Thank you for the reference. The precise values of the SSA weights do sum to unity, and the slightly negative (phase-flipping) values in amplitude response, which had made me hesitant about the results, disappear. The entire graph looks indistinguishable from the old one, however, with all the same ripples! Many thanks to Geoff Sherrington for posting my old graph, which incorporates the imperceptible effects of round-off errors in the weights as listed at Lucia's.

It appears that in expanding the filter from 21 pts to 29 pts, Rahmstorf shifted ~33 yrs from being the half-power period to being the half-amplitude period of the smoothed HADCRUT3 yearly series. This increases the delay in low-frequency turning points from 10 yrs to 14 yrs.

Re: John S. (#177),

Let’s see now: 1998+10 = 2008; too risky! 1998+14 = 2012; much safer!

RE John S, #169,

Thanks, but what half-amplitude and half-power periods do you get for the weights Jean S computed with M = 11 (21 points) and M = 15 (29 points)? I haven’t tried plotting response functions with Jean S’s numbers yet.

Of course the interesting thing will be to see what Rahmstorf himself says the half-power period is, given his statement quoted above in #163 that this is the best measure of effective width.

Personally, I think half-amplitude width would be more useful in the present context (if not in electrical engineering), but still that will be surprisingly long, about 34 years. IPCC4 stresses the amplitude response of their 13-point filter, and only mentions the half-power width as an aside.

When I just checked, there still had been no posts on the RC Copenhagen thread since Saturday morning. The masthead does warn, “Update: we are having some performance issues and some functionality may be disabled.”. However, the subsequent threads have had no problem with recent comments getting through. Perhaps it is Rahmstorf’s functionality that is disabled??

Re: Hu McCulloch (#170),

As I stated, the half-power “cut-off” is at ~.03 cycles per data interval. For yearly data, this occurs at a period slightly longer than 33-1/3 yrs, well in line with that of the corresponding triangular 21-point filter.

I agree with you that it is not the half-span but the half-power width that is the metric of practical interest. I haven’t computed the 29-pt SSA response for the simple reason that the precise weights are unknown to me. Nevertheless, I don’t expect any significant departures from triangular filter response.

RE John S #172,

Thanks — I didn’t realize how close to .030 cycles per interval the frequency was. As noted by Nick Stokes in #158, Jean S has the M = 15 weights as he has computed them in comment #15535 of Lucia’s “Fishy Odor” thread. Rahmstorf should either confirm or correct Jean’s calculation, but meanwhile the best we can do is use these weights.

Re: Hu McCulloch (#173),

Not wishing to become a contributor to the Journal of Irreproducible Research, I had refrained from computing the 29-pt SSA response from weights that do not add up precisely to unity (which is also the case for the 21-pt filter).

BTW, have you received my calculations kindly forwarded by Craig Loehle? If so, please feel free to post the graph, as I am not adept at that. I only ask that you correct the heading to “SSA” instead of “SAS,” which I fly apparently much too frequently.

Re: John S. (#174),

This is a graph that John S puts forward.

John S. has pointed out that the correct title of this graph should read “SSA 21 point symmetric smoothing” rather than “SAS 21 point symmetric smoothing.” — HM

Here are the response functions for M = 11 and M = 15, versus the corresponding triangular filters and IPCC4’s 13-weight filter. I used Jean S’s 4-decimal weights from Lucia’s site, but then divided by their sum (1 plus or minus .0002 due to rounding error) to make them sum exactly to unity.

The half-amplitude and half-power widths, in years, are:

IPCC4 13: 11.3, 15.8

SSA M = 11: 24.3, 33.8

Triangular 21: 24.7, 34.4

SSA M = 15: 33.0, 45.9

Triangular 29: 33.8, 46.9
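The triangular rows of this table can be checked against the analytic (Fejér-kernel) amplitude response of a (2M-1)-point Bartlett filter. A sketch using plain bisection (function names are mine):

```python
import math

def fejer(f, M):
    """Amplitude response of the (2M-1)-point triangular filter
    at frequency f in cycles per data interval."""
    if f == 0:
        return 1.0
    return (math.sin(M * math.pi * f) / (M * math.sin(math.pi * f))) ** 2

def crossing_period(M, level):
    """Period (data intervals; years for annual series) at which the
    response first falls to `level`."""
    lo, hi = 1e-9, 1.0 / (2 * M)  # response is monotone down to well past the crossing
    for _ in range(100):          # bisection
        mid = 0.5 * (lo + hi)
        if fejer(mid, M) > level:
            lo = mid
        else:
            hi = mid
    return 1.0 / lo

for M in (11, 15):
    half_amp = crossing_period(M, 0.5)            # amplitude halved
    half_pow = crossing_period(M, math.sqrt(0.5)) # power halved
    print(f"{2*M-1}-point triangular: half-amplitude {half_amp:.1f} yr, "
          f"half-power {half_pow:.1f} yr")
```

Run on M = 11 and M = 15 this reproduces the 24.7/34.4 and 33.8/46.9 figures above to within a tenth of a year.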

Still no posts on the RC Copenhagen thread since Saturday morning.

Re: Hu McCulloch (#178),

Renormalizing strongly rounded weights is simply a linear scale change. It can ensure unity response at zero frequency, but it cannot eliminate negative lobes (or, equivalently, a 180-degree phase shift) created by the rounding in the frequency-response function.

I’m curious: why do you plot your results as a function of period?

Re: sky (#180),

You’re quite right. Renormalization of filter weights is simply a scale change, incapable of removing (or introducing) any features of amplitude response, aside from purely numerical effects. And this holds true regardless of the character of the weights. The negative side-lobes unexpectedly encountered in the cosine transform of the symmetric SSA kernels, however, turn out to be a product not of strong rounding, but of wrong rounding. The third-highest weights in the 21pt kernel should be .0757 rather than .0758. This correction takes care of both problems.

The display of results is often a matter of personal preference. There is a much more basic reason, however, why frequency is the usual choice for the x-axis in displaying filter responses and power spectrum estimates: both are mathematically defined as functions of frequency.

In particular, the simple analytic form of all FIR filter responses is a trigonometric polynomial in a frequency metric. The response function of simple moving averages resembles the sinc(x) = sin x/x function, and that of triangular smoothers resembles the square of that function. These analytic forms become quite unrecognizable functions of period to those of us who cannot visualize u*sin(1/u) in a flash.

Moreover, there’s the practical consideration of displaying the results over the entire baseband range (0 to the Nyquist frequency) in one graph. Log-log graphs are the standard in electronics, where the baseband reaches into the MHz range and the monotonic roll-off of IIR filters is very steep. To my mind, they are ill-adapted to the geophysical setting, where the baseband seldom extends beyond 1Hz and the filter response functions often contain zeros. The only personal preference that I would plug here is to delineate the baseband range in fractions of Nyquist, rather than fractions of the sampling frequency. This would avoid much cumbersome labeling of the x-axis.

Re: Hu McCulloch (#178),

I find the concept of width slightly odd for a low pass filter, as it is kind of obvious that we are talking from DC up to the cut-off frequency. This frequency is the main feature of the filter. It is interesting to see how this frequency came to be defined as the half-power point.

It comes from the granddaddy of all low pass filters in electronics. Basically it is a potential divider where the upper part is a resistor and the lower part is a capacitor. If both parts were resistors, there is the interesting case where the two resistors have equal values which gives an output at half the input amplitude – at all frequencies.

Where the lower part is a capacitor, the interesting case is the frequency where the capacitive reactance is equal to the resistance. It turns out that the magnitude of the output is .707 of the input when the proper real and imaginary values are used in the computation of Z2/(Z2 +Z1) where Z2 is a pure reactance and Z1 is a pure resistance.

One can of course also say that the .707 factor in the voltage is a halving of power. Naturally it is not the job of electronics people to impose their views on statisticians or climate scientists, but this particular definition of corner or cut-off frequency is certainly not an arbitrary one. This definition also works when higher order filters are used, whereas the half amplitude tells you little unless the slope of the roll-off is also given.
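This potential-divider derivation can be verified numerically with complex arithmetic (a sketch; the function name is mine):

```python
def rc_response(w, R=1.0, C=1.0):
    """Complex gain Z2/(Z1 + Z2) of the classic RC divider:
    Z1 = R (pure resistance), Z2 = 1/(j*w*C) (pure reactance)."""
    Z1 = R
    Z2 = 1 / (1j * w * C)
    return Z2 / (Z1 + Z2)

# At the corner frequency w = 1/(R*C) the reactance equals the resistance:
H = rc_response(1.0)
print(abs(H))        # 0.7071...: output amplitude is 1/sqrt(2) of input
print(abs(H) ** 2)   # 0.5: power is halved, i.e. the -3 dB point
```

The magnitude .707 and the power factor .5 drop straight out, exactly as described.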

I would probably have changed your X axis to frequency and used a log scale for both X and Y but it is really a matter of taste. I am still waiting to see how our fish is going to wriggle off the hook.

Re sky #180,

Electrical engineers (who did most of this filtering stuff originally) like to think in terms of frequencies, either cycles per second or radians per second, and so plot the response function as in John S’s graph in #175 above. But climatologists (eg IPCC, Rahmstorf) prefer to think in terms of easily visualized time units per cycle instead. Thus John’s graph and mine above give the same information (for M = 11 or 21 weights), just presented differently.

Plus, it’s easier to eyeball the half-amplitude and half-power periods from an appropriately scaled period graph than it is from a frequency graph that has the standard scaling from 0 to the Nyquist frequency. (2 time units per cycle in my figure and .5 cycles per time unit in John’s.) The full horizontal axis is infinite on a period diagram, but the upper portion isn’t very interesting, and so can be truncated appropriately to something finite.

Renormalizing can’t generate negative lobes, so long as the weights are all non-negative and have been centered on 0. All it does is force the response to be exactly 1 at 0 frequency (or, equivalently, infinite period).

Thank you, Hu McCulloch and John S. for your points of view.

I posted the following comment this morning on the RC Copenhagen thread, which has had no new posts clear “moderation” since 8 days ago, including the 7-day-old query I mentioned in #163 above:

Despite a headline notice that new software has been causing some functionality problems, RC has had plenty of activity on its newer posts, right up to today. The problem seems to be that Rahmstorf is just evading the issue.

Re: Hu McCulloch (#186),

They sure don’t do themselves any good with this sort of behavior.

I’m not experienced with characterizing filters by the half width in an assumed fashion. What field does that regularly?

Re: Jeff Id (#188),

realclimatescientists except when they take another position

AchTung.. Klimatoknowledists.

RE Jeff Id #188,

Rahmstorf (as quoted in #186 etc above) refers to the half-power width, not the half width. The former is (I gather) quite common in electronics to characterize the wavelength below which the filter attenuates the power by a factor of .5 or more. Or, if more convenient for the application, this can be expressed in terms of frequency (cycles or radians per second). Since power is proportional to the square of amplitude, this is the wavelength at which the filter attenuates the amplitude by a factor of .293 or more (leaving 1 - .293 = .707 = sqrt(.5) or less). So the half-amplitude width is the same as the quarter-power width. IPCC4 focusses on the amplitude response rather than the power response, but this is reasonable in terms of climatology. [Corrected thanks to Jorge #194 below]

“Half width” would be ambiguous, but this is not what Rahmstorf said in this particular quote. If the “width” is taken to mean the number of non-zero points, his M=15 filter has width 29 (which was PaulM’s point over on RC and Steve’s as quoted below), and half-width 14.5 or roughly 15. But someone else might use “width” to mean the half-power period or half-amplitude period. In #186 I was careful to use “period” rather than “width” to avoid this kind of confusion.

Of course, this is not to say that Rahmstorf isn’t completely muddled on this issue. In comment #395 on the RC Copenhagen thread, he said, in reply to a comment of Steve’s that somehow made it through “moderation”,

Evidently Rahmstorf thinks that the “characteristic response”, i.e. half-power period according to his later post, of a filter will roughly be the reciprocal of its peak weight. It’s true that his M = 15 filter has max weight of roughly 1/15 (this is true exactly for a 29-point triangular filter), but its half-amplitude period is 33.0 years, using Jean S’s calculation of the weights, and its half-power period even longer yet at 45.9 years.

Still no posts have gotten through “moderation” on the RC Copenhagen thread since the morning of 7/11. Perhaps Nick can find out what is amiss with the “real climate scientists”?

Re: Hu McCulloch (#191),

Isn’t this what is normally called the “impulse response” of a filter? Rahmstorf explains:

He is saying that the response of a single unit input to the filter will last for 30 years and half of that is 15. This is a way of looking at things that I have never heard of. I am not that experienced in signal processing, but has anyone else ever heard of this way of characterizing a filter? Rahmstorf seems to be confusing the frequency response with the time domain impulse response. It just seems confused to me, but I’ll defer to someone more experienced.

Re: Hu McCulloch (#191),

I see your point about half-power width. I could see them blocking me, after all there have been a few deservedly mean things said about Mann on tAV. They can pretend whatever they want about my id and occasional youthful and self-healing vent ;). However, blocking discussion by you strikes me as nothing but ostriching. With more energy, I’d go look up the ad-hoc criteria for moderation put in one of the recent threads by gavin.

Re: Hu McCulloch (#191),

I think there is a small mistake here.

“Since power is proportional to the square of amplitude, this is the wavelength at which the filter attenuates the amplitude by a factor of .75 or more (leaving .25 = .5^2 or less). So the half-power width is the same as the quarter-amplitude width.”

Using Pout = 0.5 * Pin we get Aout^2= 0.5 * Ain^2 or Aout = (sq rt 0.5) * Ain = 0.707 * Ain

So at the cut-off frequency the output is 0.707 of the input amplitude or the output power is 0.5 of the input power. At higher frequencies or shorter periods the fraction of the input that appears as output will reduce. If the amplitude is reduced to 0.5 the power will be reduced to 0.25.

I strongly support the view that the cut-off frequency should be defined as the half power frequency. This, of course, also defines the period where values shorter than this lead to increasingly smaller fractions of the input appearing in the output.

Within the limitations of the filter, it seems to me that Rahmstorf is saying that fluctuations with a period of over 45 years are the climate change signal he wants to keep and that he wishes to progressively suppress the remainder of the signal with shorter periods. Clearly, the shorter the period, the more it will be smoothed away in the output.

Once again, the entire vocabulary of signal processing is something that I repeatedly question in connection with these short series characterized by red noise.

The term “climate change signal” is a metaphor transposed from radio communications. It’s not a metaphor that is common language in financial market analysis where the lengths of record are similar. The term sticks in my craw. All too often people use this term to describe what is merely a smooth of a series.

Re: Steve McIntyre (#195),

And one finds that the vocabulary of signal processing is used in a confusing and non-standard manner.

Re: Steve McIntyre (#195), It is an awkward carryover of a technology. In signal processing they never have time-errors (due to dating uncertainty in proxies), and you know the frequency cutoff for your real signal (above which it is white noise, or above which you are willing to lose high-frequency signal for the sake of reducing hiss). For the Hadley data, some of us would like to KEEP the evidence for a recent downturn in temperature, and not smooth it away. I would bet also that in real signal processing you don’t try to use filters right up to the end of the data.

Re: Craig Loehle (#197),

This isn’t quite the same issue, but I am going to ask a question that I have been wondering about. Nick Stokes indicates that the smoothing filter has zero lag. However, as the smoothing window moves to the end of the sample, the filter changes, as Nick Stokes indicates, due to the padding. At the end of the sample, the filter becomes FIR, as Stokes indicated, since the signal being presented to the filter is derived from the preceding samples due to the padding.

Does this mean that the filter changes from one with zero lag to one with some lag?

Re: TAG (#202),

All filters have a lag. He may be referring to what is known as a zero-phase filter, but like many Wiki-experts, he does not know the proper terminology. Zero-phase filters are non-causal (with one exception), i.e., the current output depends upon future inputs. Zero-phase filters have an impulse response that is symmetric about zero, i.e., h(n) = h(-n) for real filters, and Hermitian for complex filters. The previous statement implies the filter has an even length impulse response, except the noted sole exception which is a simple single pole, constant gain filter h(n) = g*del(n).
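Mark T’s point about symmetric impulse responses can be verified numerically: for centered weights with h(n) = h(-n), the frequency response has no imaginary part, hence zero phase at every frequency (or a 180-degree flip wherever the real response goes negative). A sketch (function names are mine):

```python
import cmath, math

def freq_response(weights, f):
    """Complex frequency response of weights centered on zero,
    i.e. w[-m..m], at frequency f in cycles per data interval."""
    m = (len(weights) - 1) // 2
    return sum(w * cmath.exp(-2j * math.pi * f * k)
               for k, w in zip(range(-m, m + 1), weights))

tri5 = [1/9, 2/9, 3/9, 2/9, 1/9]   # symmetric: h(n) = h(-n)

# The imaginary contributions at +k and -k cancel pairwise, so the
# response is purely real at every frequency.
worst_imag = max(abs(freq_response(tri5, i / 100).imag) for i in range(51))
print(worst_imag)   # ~0, round-off only
```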


I’m not sure what you’re saying in the rest of the post. The filter “becomes FIR?” FIR = finite impulse response, which implies no feedback. If anything, since the padding uses past values for the current inputs, that would imply the filter becomes IIR (infinite impulse response), sort of.


Oh, and for those that want to brave the details, here’s a definition of “signal” (he uses integers, which is true if the data are quantized, just substitute the word “reals” and use R instead of Z). Noise of course, can be classified as a signal, too, but that’s not the problem Steve is getting at.


Mark

Re: Mark T (#204), As I consider this, perhaps Nick is referring to the situation in which the current input is centered on the filter and is used to produce the current output… a non-causal implementation of a linear-phase filter.

Mark

Re: Mark T (#205), Yes, that sounds right. With this kind of analysis, you don’t need strictly causal filters – at each point except the last, you know at least some “future” data. The limitation is that you don’t have data beyond the endpoint.

On your earlier point, the fact that padding uses past data doesn’t make it IIR, because the padding, at least in the variants of MRC, is created from data within the range of the original central filter.

Re: Mark T (#205),

That is phrasing it much better than I could. Nick Stokes pointed out that at the end of the signal the effect of the MRC (minimum roughness) padding is to change the filter weights. The filter is no longer centred since some of the effect of the padding uses a past sample in the place of one of the future samples. This, in effect, is creating a new non-centered filter with new weights. At the end, the padding causes all past samples to be used.

Is this having an effect on the delay produced by the filter?

Re: TAG (#208), No, the filter remains centred, with no delay. In Mann-style MRC, the weights are required to remain positive, and so the fewer remaining future values (before the endpoint) are upweighted. At the end, you can’t do this any more, which is why Mann has a pinned endpoint. Grinsted’s method, which Rahm used, allows negative weights. These are applied to the distant past values, which preserves centering.

At Lucia’s, I worked out a simple example with Mann MRC (#16214) with a triangular filter, and gave an R function (#16223) for the general least squares MRC (Lucia’s version, not quite Grinsted’s) weights for again a triangular filter. This method does not pin the endpoint.

Re: Nick Stokes (#209), Re: TAG (#208),

Thanks for your patience. However, one question to try your patience yet again. The filter remains centered on the padded signal. It doesn’t remain centered on the physical signal. There would be no delay on the padded signal. Is there a delay on the physical signal?

Re: TAG (#210), No. Here is the 3pt Mann MRC example with the 5pt triangle filter (1 2 3 2 1)/9. We have y0, y1, y2, y3 physical data. Extrapolating, we have y4 = 2*y3 - y2 and y5 = 2*y3 - y1. Call the smooths s1, s2, s3. In this case, s2 is the only interesting one: s2 = (y0 + 2*y1 + 3*y2 + 2*y3 + (2*y3 - y2))/9. The weights on the physical data are thus (1 2 2 4 0)/9, and this is centred: -2*1 - 1*2 + 0*2 + 1*4 = 0.

You can work out more general cases. But the general principle is that the weights must be centred, because the method of construction means the filter leaves a line unchanged.
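Nick’s bookkeeping is easy to reproduce by feeding a unit impulse at each physical point through the padding (a minimal sketch; the function name is mine, and it covers only this one 3-point example):

```python
def effective_weights():
    """Weights that Nick's smooth s2 places on the physical data y0..y3,
    found by feeding a unit impulse at each point through the
    minimum-roughness padding y4 = 2*y3 - y2 and the (1 2 3 2 1)/9
    triangular filter."""
    tri = [1, 2, 3, 2, 1]
    eff = []
    for i in range(4):
        y = [0.0] * 4
        y[i] = 1.0
        y4 = 2 * y[3] - y[2]            # MRC padding (linear extrapolation)
        padded = y + [y4]
        s2 = sum(w * v for w, v in zip(tri, padded)) / 9
        eff.append(s2)
    return eff

w = effective_weights()
print([round(9 * x) for x in w])        # [1, 2, 2, 4]
# Centred on y2: the first moment of the weights about index 2 vanishes,
# so a straight line passes through unchanged.
print(sum((k - 2) * x for k, x in enumerate(w)))   # 0.0
```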

Re: Nick Stokes (#211),

If you want to project a linear trend from the point where legitimately centered filter output stops in off-line processing, then do it simply and honestly, instead of cloaking it in nonsensical terms of filter response.

Your claim that the five asymmetric weights (0,4,2,2,1)/9 represent a zero-delay filter whose output is centered is entirely specious. The fact that the filter OUTPUT is zero in your contrived example proves absolutely nothing of the kind. And the only reason you get zero output is because the central DATA point is zero. Yes, there’s zero phase shift at zero frequency, but that’s a trivial result, because at that frequency there can be no phase shift.

The z-transform of this filter shows that, in the general case, the output lags by ~1/10 of a cycle at 0.10 Nyquists (20yr period in yearly series), by ~1/6 of a cycle at 0.17 Nyquists (~12yr period) and by exactly half a cycle at Nyquist, where the amplitude response is exactly one third. Both the amplitude and phase responses ripple beyond half a Nyquist. That’s a terrible low-pass filter!
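sky’s Nyquist figures for the terminal weights (0,4,2,2,1)/9 can be checked directly by evaluating the transfer function on the unit circle (a sketch; the function name is mine):

```python
import cmath, math

wts = [0, 4, 2, 2, 1]   # terminal weights, to be divided by 9

def H(f):
    """Transfer function evaluated at z = exp(2j*pi*f) on the unit circle."""
    return sum(w * cmath.exp(-2j * math.pi * f * k)
               for k, w in enumerate(wts)) / 9

h_nyq = H(0.5)                   # Nyquist: f = 1/2 cycle per data interval
print(abs(h_nyq))                # 1/3: amplitude response exactly one third
print(abs(cmath.phase(h_nyq)))   # ~pi: exactly half a cycle of lag
```

At Nyquist the alternating signs give (0 - 4 + 2 - 2 + 1)/9 = -1/3, i.e. amplitude one third with a half-cycle phase lag, exactly as stated.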

If you want a legitimate output at the last data point in real-time processing, then the inescapable mathematical fact the output will lag has to be accepted. Professionals never use symmetric filters in such cases, because the phase lag can be appreciably reduced by asymmetric filters, either FIR or IIR. But “climate scientists,” whose Wiki-expertise in these matters was aptly noted by Mark T (#204), have a true knack for doing things quaintly. If their claimed “advances” in digital filtering were ever submitted to professional review, the unanimous response would be ROTFL.

Re: John S. (#221), I don’t think you’ve been following. I said in (#206),

It’s not really appropriate to talk of phase shifting with these end averages, because the same average is not applied to a whole sinusoid, and I’ve tried to avoid referring to them as filters, but to stick to the term weighted average. I’ve spoken of the triangular filter, because it is used for convolution, and because everyone else has been using that terminology there.

My point has been simply that the weighted averages are centred (which you’ve confused somehow with zero output), and that means that a line is passed unchanged. That is an important property when you’re trying to find a trend.

Re: Nick Stokes (#222),

So the delay does change as the filter becomes more and more modified with the padding.

Re: TAG (#224), Where do you get that from? No, again each of the moving averages is centred, and a straight line passes the whole smoothing process unchanged. That would not happen (with nonzero slope) if there was delay. It is, as I said in #203, not really meaningful to talk of a phase shift for the near-end weighted averaging, since it would never be simultaneously applied to a whole sinusoid.

Re: Nick Stokes (#222),

An essential point keeps escaping you. The fact that a set of weights passes a straight line unchanged IN THE ABSENCE of other components is not the practical issue. It’s what happens in the realistic, rather than trivial, case that matters. The z-transform shows that explicitly in the general case for any set of weights, including the terminal weights that you presented. Lack of a consistent set of weights used throughout the record does not circumvent that issue, it only obscures and magnifies it. The fundamental difference between actual data and that projected ad hoc beyond the endpoint remains. And that’s precisely the heart of the problem, which cannot be danced around lightly.

Your relentless grasp of the irrelevant is uncanny. It makes any further discussion pointless.

Re: Nick Stokes (#222),

It’s useful when you are trying to find a trend where none exists, which is presumably why the ‘climate scientists’ love it. Take the half-Nyquist mode, ie … -1 0 1 0 -1 0 1.

MRC with the 5-point triangular filter people have been using as an example gives you … -1/9 0 1/9 0 -1/9 2/9 1, showing a completely bogus sharp upward trend.

If you increase the length of the filter (as Rahmstorf did) the effect gets worse, ie the bogus trend shows up over a longer period.

More generally, this bogus trend will appear whenever you have data that increases for a while and then levels off – exactly as has occurred with recent temperatures. Again, this is why it’s used by the likes of Rahmstorf and Mann.
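PaulM’s numbers can be reproduced with a short script (a sketch of Mann-style minimum-roughness padding with the 5-point triangle; the function name is mine):

```python
def mann_mrc_smooth(y):
    """Smooth y with the (1 2 3 2 1)/9 triangle, padding beyond the end
    by Mann-style minimum roughness: reflect the data through the final
    point, y[n+k] = 2*y[n] - y[n-k]."""
    tri = [1, 2, 3, 2, 1]
    pad = y + [2 * y[-1] - y[-2], 2 * y[-1] - y[-3]]
    out = []
    for c in range(2, len(y)):              # centred positions only
        window = pad[c - 2:c + 3]
        out.append(sum(w * v for w, v in zip(tri, window)) / 9)
    return out

# A pure half-Nyquist oscillation ... -1 0 1 0 -1 0 1
y = [-1, 0, 1, 0, -1, 0, 1, 0, -1, 0, 1]
s = mann_mrc_smooth(y)
print([round(v, 4) for v in s[-7:]])
# [-0.1111, 0.0, 0.1111, 0.0, -0.1111, 0.2222, 1.0]
# Interior points shrink to +/-1/9, but the last two smooths jump to
# 2/9 and 1: the sharp, spurious uptrend at the endpoint.
```

The output matches PaulM’s sequence … -1/9 0 1/9 0 -1/9 2/9 1, with the final point pinned at the raw value.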

Re: PaulM (#227), The steep trend at the end arises because you then look at just the last 3 points, because of the shortness of the filter. It is associated with high uncertainty, as I said above. Two timesteps later, there’s an equal downtrend, in this oscillatory case. A longer filter reduces the trend by smoothing the rise. 7pt would give (1 0 -1 0 1 0 1 9 16)/16. Again, 2 steps later it’s all reversed.

There’s merit in the view that these end points are too uncertain to rely on. But if you really do need to make a decision based on late info, there’s nothing better within that scope.

With your late levelling example, no, the method emphasises whatever has happened recently. The recent temperature example in (#62) showed that. If there is a “bogus” trend there, it’s down.

Re: Nick Stokes (#228),

If only these researchers just said that adequate data are hard to obtain, and that firm conclusions are not possible, but that the data are not incompatible with AGW with very serious consequences. If they just admitted that. However, they refuse to say that. Instead they don a mantle of omniscience and hide behind assertions in obscure mathematical language. When it is revealed that they do not have a deep understanding of the mathematics and have misused it, they refuse to admit it.

Scientific American has a column this month wondering why the US public is increasingly skeptical of AGW claims. This is a real reason.

Re: TAG (#229),

What Nick doesn’t seem to grasp is that Stefan didn’t “need” to make the decision to write a paper on this topic for Science. He could have recognized that the behavior of his curve at the endpoints of his filter was too uncertain to rely on and then… get this… not written the paper, or chosen another method.

As for the notion that there is nothing better than MRC within the scope… which scope? The scope of filtering? Or the scope of describing how recent data behave? Recall that no one put a gun to Stefan’s head and said “You must filter. You may only filter. You are not permitted to use any analytical technique other than filtering.” So, presumably, the “scope” for his work is not “application of filters”; it is “trying to diagnose the progress of surface temperatures”.

If his scope is trying to diagnose the progress of observed surface temperatures, he had other choices. One of them is: he could have just fit a least squares line to the data. That smooths, in the sense of providing a line. He also could have easily computed uncertainty intervals. The method would not have been perfect, but it would have fewer pitfalls than filtering using MRC for the end points and then diagnosing agreement between models and data based on what was happening in the region affected by his choice of end point treatments.

It may well be (as Nick suggests) that no other filter would have provided better results than the one selected by Rahmstorf. But that’s irrelevant because no one put a gun to Rahmstorf’s head and said he had to use a filter. Stefan chose to use that filter. Others observed that it happened that his choice emphasized warming, whereas other widely known, conventional choices would not have done so. Then temperatures cooled, and his choice turned around and bit him in the tuckus. Had he just fit a straight line, his tuckus wouldn’t be missing a big chunk.

Re: lucia (#235),

I agree wholeheartedly with your comments about Rahmstorf’s choices.

To be sure, every symmetric filter will produce an “end-point” problem when its output is centered. He could have simply stopped at the last legitimate such date. Or, he could have chosen to accept the phase lag of asymmetric causal filters and produced a legitimate output for the current date. In the case of superior linear-phase filters (e.g., Butterworth) that phase lag amounts to a constant delay much less than that of symmetric filters for equivalent smoothing, but not necessarily an integral multiple of the sampling interval. That delay can be easily reported. Instead, Rahmstorf chose a jury-rigged procedure that fabricates a continuing upward trend. Meanwhile, adept low-pass filtering of HADCRUT3 shows a downturn underway.

Re: John S. (#236), Rahm, Mann etc can be criticised for not making proper use of the statistical literature. But theirs isn’t a jury-rigged process. I listed here (#67) references from regular statistical authorities, describing the use of such methods since at least 1877.

It’s certainly possible to use a causal filter with lag. But then you should plot the smooth estimate at its true centre. That’s then a version of just not smoothing to the end.

“adept lowpass filtering” – well, I did show, that MRC shows such a downturn, and could be criticised for exaggerating it.

Re: Nick Stokes (#239),

I just knew that as a final resort in this debate, an appeal would be made to “statistical authorities,” who merely describe ad hoc ways around the “end-point” problem, instead of rigorously solving it. Yessiree, there were real experts in digital filtering in 1877! And how do you plot the output of a non-linear-phase causal filter? Maybe we should ask Neanderthals who painted in caves.

Re: Nick Stokes (#228),

Rubbish. The MRC method forces the 2nd derivative to be zero. So any data that is rising and then levelling off (and therefore has negative second derivative) is not allowed to do so under MRC, and the levelling off is replaced by a fake linear trend.

Re: PaulM (#230), Wrong about the levelling off. Mann’s MRC, for example, requires the smooth to pass through the last point. It can’t ignore the levelling. As in the Hadcrut example, a more valid criticism is that it makes an exaggerated response.

It’s true that the second derivative is forced to be zero at the end. But the correct value is almost impossible to estimate from data anyway, so the requirement creates little distortion. As with your half-Nyquist example: there’s no way a correct second derivative could be estimated.

Re: Nick Stokes (#231),

No, you are wrong. Passing through the last point has nothing to do with levelling. Don’t you know what a second derivative is? I thought you were a scientist. A levelling-out curve with -ve second derivative is forced by MRC to have 0 second derivative. But I’m repeating myself. As John S has said, further discussion with you is pointless.

Re: PaulM (#233), I’m very well aware of the role of higher derivatives. Most smoothing methods are based on truncating (eg setting to zero), in some way, higher derivatives. Linear regression sets all derivatives above the first to zero, everywhere. MRC sets the second derivative to zero at one point (the end). You have to do something there. The method mentioned in the OP set the first derivative to zero at the end, with far more intrusive effect (#39).

Re: TAG (#202),

All these filters are FIR, in the sense that each point of the smooth is a weighted average of a finite number of data points (eg 21 or 29, as much discussed above). The number of data points spanned by the averages as the end is approached diminishes, as data is replaced by padding. It’s not really appropriate to talk of phase shifting with these end averages, because the same average is not applied to a whole sinusoid. The important property is that, for MRC in its various forms, the average is centred. That is, the time value you attribute to the smooth point is at the centre of mass of the weights.

It’s important because it means that each of the averages, when applied to a straight line, returns exactly the corresponding point on the line as the smooth point. MRC smoothing returns a line unaltered. That’s important, not just because it seems that it should be. The point of smoothing is to try to diminish “noise” so that an underlying pattern becomes more evident. That pattern may include a trend. You don’t want to alter the trend in the process of trying to reveal it.

As I sought to show above (#39), minimum slope does not have that property, with undesirable results. MRC has it because the padding process, if applied to a line, would continue the line unchanged.

Re: Steve McIntyre (#195),

Smoothing as used by statisticians is just another name for low pass filtering as used by electronics guys. All the classic smoothers from least squares straight lines or polynomials up to cunning ones like Spencer's are low pass filters when the transfer function is plotted against frequency. When viewed from the frequency perspective one can clearly see how the filter is reducing the amplitude of any sine waves in the input time series.

It is quite true that in communications it is known which frequencies are of interest and which are not. It is also true that when a smooth is applied to a climate series the design of the smoother reveals which frequencies the designer considers interesting and which are not. This is the case whether the designer realised it or whether he didn't.

The person who does the smoothing is the person who decides what is signal and what is noise. This may be total rubbish in any physical sense but that choice is made whenever a smoother is applied.

Steve: “Smoothing as used by statisticians” – puh-leeze. Statisticians dislike smoothing. It’s realclimatescientists that love to smooth. And yes, the filters are the same, but the context is different. In signals, there are real frequencies to be recovered. Not so in climate.

Re: Jorge (#199), Steve’s comment:

That’s a bit too strong, I’m afraid.

[The comment's supporting equations appeared as images and have not been preserved.]

Re: Steve McIntyre (#195),

Well, time series in general, but clearly signal processing folks (comm or otherwise) use the term “signal” to refer to something that is not noise (even interference is a signal). This is, as you note, incorrect usage by the climate folks. They read Wikipedia to learn about signal processing and control theory concepts then declare themselves experts and “move on.”

Re: Craig Loehle (#197),

Yes and no, but generally speaking, the uncertainty is typically much, much less than a single sample period, e.g., a 100 MHz record won't have a microsecond of missing data, though there may be several nanoseconds of uncertainty in the true edge timing (depending upon a variety of factors). Signal to interference and noise ratio (SINR) is a function of, among other things, the rms clock jitter.

You generally have a good idea of the signal bandwidth, and where it lies within the noise bandwidth, allowing you to selectively filter out the components you need. With any luck, the signal bandwidth is much less than the noise bandwidth and you can get some processing gain by baseband downconversion, filtering, and decimation.

Mark

Re: Steve McIntyre (#195),

And then ‘noise’ is defined as observation minus smooth… Who cares about distortion anymore 😉

Moore writes:

re “Statisticians dislike smoothing”, not necessarily; at least engineers can use smoothers, after the signal is defined, e.g. in :

http://www.amazon.com/Extrapolation-Interpolation-Smoothing-Stationary-Time/dp/0262730057

Re: Hu McCulloch (#203),

Ah, apart from silly padding, all we have here is a two-pass M-year moving average:

s=ssatrend(HadC,15);
b=ones(15,1)/15;
s2=filter(b,1,flipud(filter(b,1,flipud(HadC))));
plot(s2-s)
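UC's observation, that forward-backward filtering with a boxcar is one pass of a triangular filter, follows from the associativity of convolution; a quick Python sketch of the MATLAB above:

```python
import numpy as np

M = 15
box = np.ones(M) / M               # M-year moving average weights
tri = np.convolve(box, box)        # applying it twice = one triangular filter

# the result is the 2M-1 = 29 point triangle discussed above
print(len(tri))                    # 29
print(np.isclose(tri[M - 1], 1 / M))   # True: apex weight is M/M^2 = 1/M

# on a long series, boxcar-twice and triangle-once agree exactly
# (away from the ends, where padding choices take over)
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
twice = np.convolve(np.convolve(x, box, mode='full'), box, mode='full')
once = np.convolve(x, tri, mode='full')
print(np.allclose(twice, once))    # True: convolution is associative
```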

RE #186,

Eureka — my query finally made it onto the RC Copenhagen thread!

The problem may in fact have been their new software, which they say has some technical problems. Their new software lists comments both at the end of posts (as on CA) and now in a separate pop-up window. Each has a “submit comment” window at the end. The posts I tried on 7/12 and 7/19 were entered on the traditional window at the end of the post, but on a hunch yesterday I tried resubmitting my 7/19 inquiry on the popup window instead. This is the version that went through.

I have some further comments relating to the “cutoff frequency” and the sinc function that I will post soon.

Re: Hu McCulloch (#198),

Seems like your query was left unanswered, and now the comments are closed. They’ve moved on.

Re Jorge #194,

So true — now corrected. Thanks!

Here are amplitude response functions for various centerable rectangular filters:

Note that each one has a cutoff period (0 amplitude) at its width, since a rectangular filter exactly annihilates a sinusoid whose period equals its length. It also annihilates any multiple of this frequency, whence the zeroes at 1/2 width, 1/3 width, etc.
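The annihilation property is easy to confirm numerically. The following sketch (Python, for illustration only) checks the zeroes at the filter width and its submultiples:

```python
import numpy as np

M = 15
box = np.ones(M) / M          # the 15-point rectangular filter
t = np.arange(100)

# the filter exactly annihilates sinusoids whose period equals the
# filter width or any submultiple of it (width/2, width/3, ...)
residuals = []
for period in (M, M / 2, M / 3):
    wave = np.sin(2 * np.pi * t / period + 0.3)    # arbitrary phase
    out = np.convolve(wave, box, mode='valid')
    residuals.append(np.max(np.abs(out)))

print(max(residuals) < 1e-10)   # True: all three are zero to rounding
```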

Although it is common and not confusing to characterize a rectangular filter by its width and therefore longest cutoff period, the half-amplitude (or half-power) period is in fact more informative. For widths 5, 9, 15, 19, and 29 with annual data, these are 8.2, 16.8, 24.8, 36.5, and 48.0 years, respectively.

Since a triangular filter of length 2M-1 is just the outcome of applying a rectangular filter of length M twice (or “convolving” it), it has exactly the same zeroes, whence the same regularity in #34 noted by Dave Dardinger in #42 above.

However, since the amplitude resulting from applying a filter twice is just the square of that from applying it once, the response is always lower, and hence the half-amplitude (or half-power if you prefer) period is considerably longer than for the underlying rectangular filter. For a triangular filter it would therefore be very misleading to characterize its effective length by its longest cutoff period, even though this may be what Rahmstorf has in mind.
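Since convolving a filter with itself squares its transfer function, the triangle's response can be checked against the rectangle's directly; a small Python sketch (the language choice is incidental):

```python
import numpy as np

M = 15
box = np.ones(M) / M
tri = np.convolve(box, box)    # the 2M-1 point triangular filter

# amplitude responses on a fine frequency grid (cycles per sample)
f = np.linspace(0.001, 0.5, 500)

def response(w, f):
    n = np.arange(len(w))
    return np.abs(np.exp(-2j * np.pi * np.outer(f, n)) @ w)

Hbox = response(box, f)
Htri = response(tri, f)

# the triangle's amplitude response is the square of the rectangle's,
# hence everywhere lower (the rectangle's response never exceeds 1)
print(np.allclose(Htri, Hbox ** 2))    # True
print(np.all(Htri <= Hbox + 1e-12))    # True
```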

With continuous sampling, the response function of a rectangular filter becomes the cardinal sine or sinc = sin(x)/x, as noted by Nick Stokes in #57 above and by Martin Vermeer in #398 of the RC Copenhagen thread. With discrete sampling, however, this is only an approximation. When sinc(15*pi/P) is plotted versus period P along with the Rectangular 15 response function, the two graphs are almost indistinguishable, differing only by a pixel or two here and there. However, the difference between the two functions, plotted below, shows clearly that these are two distinct functions:

Re: Hu McCulloch (#203), Hu, yes, if you have a continuous filter with frequency response function F(f), but you actually digitally filter with sampling frequency s, then the response function of the digital filter is:

F_d(f)=F(f) + F(f+s) + F(f-s) +F(f+2s) + F(f-2s) + …

In your case the difference function you are plotting is approx F(f+s) + F(f-s) = 2* sinc(15*pi/P)/(P^2)

This issue is of practical importance when trying to use design ideas from finite-width analogue filters. They try to use very smooth transitions to zero to get good high frequency roll-off. But as soon as you get up to a significant fraction of the sampling frequency, a digital filter loses those gains. That is why the IPCC filter mentioned here makes only a modest effort to merge smoothly to zero. It's just not worth using extra data points (which you may not have) with a lot of small weights in a smooth taper. It's also why a triangular digital filter is not too bad.

“Steve: ” Smoothing as used by statisticians” – puh-leeze. Statisticians dislike smoothing.”

Sorry Steve, clearly I was mistaken in thinking that smoothing was used in statistics, presumably by statisticians.

It seems a bit odd that someone would write a whole book on the subject of smoothing methods in statistics.

Oh well. I am not a statistician so I will assume it is a private fight. 🙂

http://books.google.es/books?id=wFTgNXL4feIC&dq=Smoothing+methods+in+statistics&lr=&hl=en&source=gbs_navlinks_s

Re: Jorge (#213),

That book overview made me LOL:

After all, who would want statistical theory corrupting a statistics textbook! I wonder if that book ever made the reading list for a climate science statistics class?

Re: Spence_UK (#216),

I saw the bit about data analysts but guessed they were statisticians by training. Seems like I was wrong again! I'm none too sure about J. Spencer being a statistician either. He seems to have published his smoothing formula in the Journal of the Institute of Actuaries back in 1904.

The sort of comment that I had in mind was Matt Briggs's (which is very much in line with the above quotation from Press).

Sure there are circumstances in which smoothing is helpful for exploratory data analysis (Tukey's title) or for exposition – I myself often employ smooths for that purpose. But the reification of the smooth as the "signal" is a repugnant metaphor and annoys me whenever I see it. Thermometers are not radio receivers trying to pick up a radio station.

RE Nick Stokes #211,

A (to me) more obvious way to obtain a triangle-based smoother for s2 when y0, … y3 but not y4 have been observed is just to reduce the width of the triangle to 3 there, as I suggested in #46, so that the weights become (0 1 2 1 0)/4, rather than the (1 2 2 4 0)/9 that you derive with Mannian double-flip extrapolative padding. My weights are symmetrical about y2, give maximal weight to y2, and don’t require any assumption about the presence or absence of a trend.

Mann’s method, on the other hand, extrapolates a trend that may or may not be present, and gives maximal weight to y3 instead of y2. Rahmstorf’s trendline extrapolative padding has a similar problem.
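Incidentally, both sets of weights above are centred on y2 (their centres of mass coincide); what differs is where the maximum weight falls. A quick check, in Python for convenience:

```python
import numpy as np

idx = np.arange(5.0)                     # positions y0..y4
hu = np.array([0, 1, 2, 1, 0]) / 4.0     # shrunk-width triangle
mann = np.array([1, 2, 2, 4, 0]) / 9.0   # double-flip padded triangle

# each weight vector's centre of mass sits at index 2, so each
# reproduces a straight line exactly at y2
print(round(float(hu @ idx), 12), round(float(mann @ idx), 12))   # 2.0 2.0

# but the padded weights put their maximum on y3 rather than y2
print(int(np.argmax(hu)), int(np.argmax(mann)))                   # 2 3
```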

Re: Hu McCulloch (#219),

Hu, yes, I agree with that characterisation. But the MRC behaviour has some justification. Look at the response to a unit pulse. When that data point is first encountered, MRC passes it unfiltered. The result does look like an uptrend, as it should. Of course, the trend should also be given a low confidence measure. At the next time step, you would report a symmetric distribution – no trend. MRC would report the (1 2 2 4)/9 – still reporting a trend, though reduced. That seems reasonable – you have a lot of zeroes on one side, and only one on the other. Of course, at the next step, MRC says no trend. Within its M=2 window, there is now balance.

With a wider window, the difference would be more noticeable. MRC attenuates the initial estimate of an uptrend (or down) more slowly, which gives a smoother response to incoming information.

On another topic (#214), I think the exact frequency response function for your 15pt moving average digital filter (#203) is sin(15*pi/P) / (15*sin(pi/P)).
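Nick's closed form is the Dirichlet kernel, and it can be checked against a direct evaluation of the 15-point moving average's transfer function; a Python sketch (the grid of periods is arbitrary):

```python
import numpy as np

M = 15
P = np.linspace(2.2, 100, 400)    # periods in samples
f = 1.0 / P

# direct evaluation of the 15-point moving average transfer function
n = np.arange(M)
direct = np.abs(np.exp(-2j * np.pi * np.outer(f, n)) @ (np.ones(M) / M))

# the closed form sin(M*pi/P) / (M*sin(pi/P))
closed = np.abs(np.sin(M * np.pi / P) / (M * np.sin(np.pi / P)))

print(np.allclose(direct, closed))    # True
```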

re john moore

But the trend line errors are incorrect as well, as shown in my comment here.

The line

epleft=sqrt(max(sE_x(idx))^2+std(detrend(x(idx)))^2);

is not a very good attempt; and Monte Carlo is not required at all, as the filter (tfilt) is linear.

I prefer to write to CA (Rapid Correspondence): the original article is misleading, and there are serious errors in the code.

RE Jean S, #232,

There’s no way for an outsider to leave a new comment, but Rahmstorf is still free to reply inline to my comment — or not, if he wants to evade the issue.

At the end of the RC Copenhagen post, Rahmstorf remarks,

Like he says!

Re #235 & #236: I agree, but

Re: John S. (#236),

nah (at least they should check how uncertainties propagate)

Re: UC (#237),

I’m not sure what to make of your cryptic comment. Ordinarily, “how uncertainties propagate” applies to model outputs, not to FIR filter outputs obtained from observational series, such as HADCRUT3 purports to be. Please clarify.

Re: John S. (#242),

Hey, Brohan’s comprehensive error bars are much more cryptic than my posts ;)

But error bars in the smoothed sets should anyway get wider near the end-points (see for example http://www.climateaudit.org/?p=2541#comment-187403 ). In HadC data they don't, so something must be wrong. They are aware of the problem, http://hadobs.metoffice.com/hadcrut3/smoothing.html , but haven't solved it yet.

With their definitions for signal and noise (something along http://www.climateaudit.org/?p=2541#comment-188580 ) , the solution is quite straightforward (see http://www.climateaudit.org/?p=6533#comment-348882 and the link to Roman’s post )

(..comments seem to disappear, trying once again..)

RomanM: The new spam filter that was installed last weekend seems to be working overtime. It has to get used to our ways of doing things, I guess.

The graph titled “Rahmstorf with IPCC Smooth” stimulated a couple of questions for me:

1) Why is the gray area different in that graph than in the graphs in Rahmstorf’s or Stockwell’s papers?

2) What was the point of fitting polynomials to smoothed data when discussing the validity of a trend calculation? Interpolation and regression are not equivalent.

Update — Rahmstorf still hasn’t replied to my 20 July query (#416) on the RC Copenhagen thread. Although new comments are closed, I doubt that he is barred from answering existing comments.

Maybe I should stop holding my breath?

Re: john moore (#134),

I guess now is a good time to go back to this issue. It seems to me that the SEs ssatrend.m reports are incorrect (see also http://www.climateaudit.org/?p=6533#comment-348882 ). And the reason seems to be that the effect of noise on the padded values is not taken into account. Or am I missing something? Let me try a simple code so you can point out where I go wrong:

Re: UC (#246),

The discussion of the issue is too fragmented for me to follow it. But perhaps this update that I just hacked out today fixes it: ssatrend page. Feel free to contact me by email if you wish to discuss the issue or have ideas for better approaches. Now that the code is in the public domain, I think it is in everybody's interest to make it as robust as possible.

[ed: link fixed]

Re: Aslak Grinsted (#247),

I think that your link has a slight error in it. The following is more likely the correct link.

Re: Aslak Grinsted (#247),

Thanks for the comment, it really helps to have the author of the code here!

I tried to defragment it a bit with the m-script in #246; it seems that sE_R is too small and does not represent a reliable measure of the uncertainty in R (except in the middle, where padding is not an issue). I ran my script with the new version, and the problem is still there. With RomanM's help I think I've figured out a way to fix the code so that MC is not needed and the filter no longer believes itself to be more accurate than it is in reality. I'll try to compress this into an m-script and post it here.

Re: UC (#249),

I've updated the code once again. I think the new bootstrapping method should be very robust under most conditions. It results in slightly larger uncertainties because the "padding noise" is no longer assumed to be independent.

The quality of the padding is assessed by checking how well the linear regression performs inside the domain where there is data.

—

I’ve also added the possibility that the measurement noise model is more complicated than gaussian white noise. Example using a somewhat bizarre noise model:

Re: Aslak Grinsted (#250),

I don't have time right now to go through the new code very carefully, but I don't understand why you don't estimate uncertainties simply by repeating the padding process for the noisy series (you are using Monte Carlo anyway). Something like this (cut and paste into your original code):

%What happens to whitenoise if it goes through this tfilt?
if noerrorbar
    sE_R=[];
else %%% begin Jean S modification
    %MONTE CARLO WAY
    mcreal=zeros(length(x),1000);
    for ii=1:size(mcreal,2)
        % add white noise
        xPlusNoise=x+randn(size(x)).*sE_x;
        % padding (exactly the same way as done for the trend)
        switch Args.ExtrapMethod
            case {'minimumroughness','minrough','mr'}
                idx=(1:mp)';
                pleft=polyfit(idx,xPlusNoise(idx),1);
                idx=(n-mp+(1:mp)');
                pright=polyfit(idx,xPlusNoise(idx),1);
                % add noise to noisy estimated padding
                paddedNoisyX=[polyval(pleft,(-(M-1):0)')+randn(M,1)*sE_x(end);xPlusNoise;polyval(pright,n+(1:M)')+randn(M,1)*sE_x(end)];
            case {'minimumslope','minslope','ms'}
                idx=(1:mp)';
                pleft=mean(xPlusNoise(idx));
                idx=(n-mp+(1:mp)');
                pright=mean(xPlusNoise(idx));
                % add noise to noisy estimated padding
                paddedNoisyX=[pleft+zeros(size(idx))+randn(M,1)*sE_x(end);xPlusNoise;pright+zeros(size(idx))+randn(M,1)*sE_x(end)];
        end
        temp=conv(paddedNoisyX,tfilt);
        %%% end Jean S modification
        mcreal(:,ii)=temp(floor((length(temp)-n)/2)+(1:n));
    end
    sE_R=std(mcreal')';
end

Of course, that's only first aid, and we have to wait for UC's real cure 😉

Re: Jean S (#251),

We need to estimate the errors associated with the padding algorithm. Even if x is noise free there will still be a padding prediction error, and that is why I don't just look at the noisy series. Have you tried your code on a noise-free series?

My new code estimates the padding error by testing how well the padding-algorithm performs in the interior of the series. It does that by making a linear fit in an mp-point wide window in the interior. It then uses the residuals of an mp-point linear extrapolation. If the measurement noise process is white, then these residuals will contain both the measurement error and the signal prediction error. (For a red noise process my new code might not work, because the padding algorithm will be able to predict some of the noise. I.e. the residuals will be too small – to fix that I probably will have to test the effect of the padding algorithm on the noise process).

Re: Aslak Grinsted (#252),

Let's first answer correctly the question

“What happens to whitenoise if it goes through ssatrend?”

and then move on to signal distortion issues. Jean's modification seems to pass my test in #246

Re: Aslak Grinsted (#252),

yes, that's why I said my code is "first aid". I simply used, very unrealistically, the last value used for actual observations as the error associated with the difference between the prediction (padding), based on noise-free observations, and the true (future) values. But, as UC hints, that's not the issue of main concern here. The issue is that you make your predictions based on noisy observations, i.e., you have an additional uncertainty associated with the prediction itself. This is assessed in my code by the noisy padding process within your Monte Carlo simulation.

Re: Jean S (#254),

I do not believe that it is assessed correctly in your code. That is why I ask you to consider the case sE_x=0. That would result in sE_R=0, if I read your code snippet in #251 correctly. And sE_R=0 is clearly wrong, because even with sE_x=0 there would still be a prediction error. To test it you can try: ssatrend(cos(1:200),11,0).
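Aslak's point, that a noise-free series still produces a padding prediction error, can be illustrated outside of ssatrend.m. This Python sketch mimics minimum-roughness (linear-fit) padding on a cosine; the window and pad lengths are illustrative, not ssatrend.m's:

```python
import numpy as np

# a noise-free series: padding still mispredicts the continuation
t = np.arange(1, 201)
x = np.cos(t)

mp, M = 11, 5                          # illustrative window / pad lengths
slope, intercept = np.polyfit(t[-mp:].astype(float), x[-mp:], 1)

future_t = np.arange(201, 201 + M)
padded = slope * future_t + intercept  # linear extrapolation (the padding)
truth = np.cos(future_t)               # what the series actually does next

err = np.abs(padded - truth)
print(float(err.max()) > 0.1)   # True: large prediction error with zero
                                # measurement noise, so sE_R = 0 can't be right
```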

Re: Aslak Grinsted (#256),

ssatrend.m:

% SE_R: standard error of the R (how much of sE_x is remaining in R).

how much of 0 is remaining in R?

Re: Aslak Grinsted (#256),

It seems that you want even wider CIs near the endpoints. Once again, let's first answer correctly the question "What happens to whitenoise if it goes through ssatrend?" Moore05 needs to be fixed, and Rahmstorf's conclusions are invalid. Shouldn't make much difference to the big picture, right?

Re: Aslak Grinsted (#250),

The new version seems to output a different R than the previous one. But here's the approximate difference between Jean's and ssatrend.m ver. 4 uncertainties

( changed the line

CI=prctile(mcreal',[5 95])';

to

CI=prctile(mcreal',[2.5 97.5])';

to match better with the sE_R*2 used in earlier versions )

To avoid MC one can do the following: replace the original ssatrend.m lines

R=filter(tfilt,1,paddedX);
R=R((1:n)+M*2-1);

with

%% start UC mod.
%% paddedX (minimumroughness) can be thought of as a matrix B times the input series:
idx=(n-mp+(1:mp)');
Xi=[idx ones(M,1)];
Xi2=[n+(1:M)' ones(M,1)];
Be=Xi2*pinv(Xi);
idx=(1:M)';
Xi=[idx ones(M,1)];
Xi2=[(-(M-1):0)' ones(M,1)];
Bs=Xi2*pinv(Xi);
BsZ=[Bs zeros(M,n-M)];
BeZ=[zeros(M,n-M) Be];
Ii=eye(n);
B=[BsZ;Ii;BeZ];
%% now B*x equals paddedX ( x is the input series )
%% next, filter(tfilt,1, .. in 'matrix times vector' format:
tt=[tfilt;zeros(length(paddedX)-length(tfilt),1)];
F=tril(toeplitz(tt));
%% and finally, chop the padded values
Z=[zeros(n,M*2-1) Ii zeros(n,1)];
%% now, Z*F*B*x equals R ..
CC=Z*F*B;
R=CC*x;
%% .. and a unity-variance white noise process covariance (I) propagates through the filter:
Cout=CC*CC';
%% end UC mod.

then we can run the MC-free code with different noise models; for example

cumsum(randn(300,1)*.2)+randn(300,1);

turns into

Hhh=tril(ones(300));
Iii=eye(300)*0.2;
Ccc=(Hhh*Iii)*(Hhh*Iii)'+eye(300);
Cout=CC*Ccc*CC';

and

noisemodel=@()smooth(randn(300,1),5)*2;

(hmm, not sure how smooth.m handles endpoints, let's try Roman's method)

S5=zeros(300);
for i=1:300
tt=zeros(300,1);
tt(i)=1;
S5(:,i)=smooth(tt,5);
end
Ccc=S5*4*S5';
Cout=CC*Ccc*CC';

and then sE_R=sqrt(diag(Cout)) gives values that will pass the test the little frequentist in me wrote in #246.
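For readers without MATLAB, the whole Z*F*B idea can be sketched by building the smoother matrix column-by-column from unit impulses (Roman's method, as tried on smooth.m above) and reading the endpoint variances off the diagonal. The filter and window lengths here are illustrative, not those of ssatrend.m:

```python
import numpy as np

def smoother_matrix(n, M, mp):
    """Matrix CC with R = CC @ x for a (2M-1)-point triangular filter
    applied after minimum-roughness (linear-fit) padding of M points
    at each end; built by passing unit impulses through the pipeline."""
    w = np.convolve(np.ones(M), np.ones(M)) / M**2   # triangular weights
    CC = np.zeros((n, n))
    t = np.arange(n, dtype=float)
    for j in range(n):
        x = np.zeros(n)
        x[j] = 1.0
        # linear fits to the first / last mp points supply the padding
        pl = np.polyfit(t[:mp], x[:mp], 1)
        pr = np.polyfit(t[-mp:], x[-mp:], 1)
        left = np.polyval(pl, np.arange(-M, 0, dtype=float))
        right = np.polyval(pr, np.arange(n, n + M, dtype=float))
        padded = np.concatenate([left, x, right])
        full = np.convolve(padded, w)
        CC[:, j] = full[2 * M - 1 : 2 * M - 1 + n]   # chop the padding
    return CC

n, M, mp = 100, 8, 15
CC = smoother_matrix(n, M, mp)
sE = np.sqrt(np.diag(CC @ CC.T))   # unit white noise propagated through

# the standard error of the smooth is markedly larger near the ends,
# which is exactly the widening UC says the HadC error bars lack
print(sE[0] > 1.2 * sE[n // 2])    # True
```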

ssatrend.m is used extensively in this new sea level paper

http://www.realclimate.org/index.php/archives/2010/04/science-story-the-making-of-a-sea-level-study/

Are the endpoint issues solved now?

Not sure if they matter at all in Vermeer et al, but I think the conversation should continue; this is an interesting topic!
