Since the publication of Rahmstorf et al 2007, a highly influential comparison of models to observations, David Stockwell has made a concerted effort to figure out how Rahmstorf smoothing worked. I now have a copy of the program and have worked through the linear algebra. It turns out that Rahmstorf has pulled an elaborate practical joke on the Community, as I’ll show below. It must have taken all of Rahmstorf’s will-power to have kept the joke to himself for so long and the punch-line (see below) is worth savoring.

Rahmstorf et al oracularly described Rahm-smoothing using big words like “embedding period” and the interesting concept of a “nonlinear…line”:

All trends are nonlinear trend lines and are computed with an embedding period of 11 years and a minimum roughness criterion at the end (6 – Moore et al EOS 2005) …

David’s efforts to obtain further particulars were stonewalled by Rahmstorf in a manner not unreminiscent of fellow realclimatescientists Steig and Mann. I won’t recap the stonewalling history on this occasion – it’s worth doing on another occasion but I want to focus on Rahmstorf’s practical joke today.

In April 2008, a year after publication, Rahmstorf stated at landshape.org and RC that:

The smoothing algorithm we used is the SSA algorithm by Aslak Grinsted (2004), distributed in the matlab file ssatrend.m.

Unfortunately, Matlab did not actually “distribute” the file ssatrend.m, nor were David Stockwell and his commenters able to obtain the file at the time. When asked by UC (a highly competent academic statistician) where the file could be located, Rahmstorf, following GARP procedures, broke off communications.

A year later, I’m pleased to report that ssatrend.m and companion files (Aslak Grinsted) may be downloaded from CA here – see the zip file. I’ve placed the supporting article Moore et al (EOS 2005) online here.

I transliterated ssatrend.m and its various functions into an R script here – see the function ssatrend.

Underneath the high-falutin’ language of “embedding dimension” and so on is some very elementary linear algebra. It’s hard to contemplate a linear algebra joke but this is one.

First of all, they calculate the “unbiased” autocovariances up to the “embedding dimension” – the “embedding dimension” proves to be nothing more than a big boy word for the number of lags (M) that are retained. This is turned into an M×M symmetric (“Toeplitz”) matrix and the SVD is calculated, yielding M eigenvectors.

Another way of looking at this calculation step is to construct M copies of the time series of interest, with each column lagged one time step relative to the prior column, and then do an SVD of the rectangular matrix. (Some microscopic differences arise, but they are not material to the answer.) In this sort of situation, the first eigenvector assigns approximately equal weights to all M (say 11) copies. So the first eigenvector is very closely approximated by the constant vector (1, 1, …, 1)/√M.
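The lagged-copy view is easy to check numerically. Below is a small Python/numpy sketch of the idea (an illustrative translation of the procedure described above, not the ssatrend.m code itself); the function name and the test series are my own choices:

```python
import numpy as np

def first_eigenvector(x, M=11):
    """Top right singular vector of the matrix of M lagged copies of x.

    Illustrative sketch only -- not the ssatrend.m code itself."""
    n = len(x)
    # Column k holds the series lagged by k steps: x[k : n-M+1+k]
    X = np.column_stack([x[k:n - M + 1 + k] for k in range(M)])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    v = Vt[0]
    return v * np.sign(v.sum())   # fix the arbitrary sign of the SVD

# For a smooth, strongly autocorrelated series, the first eigenvector
# assigns nearly equal weight (about 1/sqrt(11) ~ 0.302) to each copy.
x = np.linspace(0.0, 1.0, 200)   # a simple trending series
v = first_eigenvector(x, M=11)
print(np.round(v, 3))            # entries all close to 1/sqrt(11)
```

For a noisy but autocorrelated series the entries wobble slightly, but the first eigenvector stays close to the equal-weight vector, which is what drives the triangular-filter result below.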

They ultimately retain only one eigenvector (E in the ssatrend code), so the properties of lower order eigenvectors are immaterial. Following through the active parts of the code, from the first eigenvector (which is very closely approximated by the above simple form), they calculate a filter for use in obtaining their “nonlinear trend line” as follows (in Matlab code):

tfilt=conv(E,flipud(E))/M;

Convolving a vector of equal weights with its (identical) mirror image yields a triangular filter, which in this case has the form:

(1 2 3 … M … 3 2 1)/M²
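Taking E as the equal-weight eigenvector, the Matlab one-liner can be checked in a few lines. A Python/numpy sketch of the same computation (not the original code):

```python
import numpy as np

M = 11
E = np.ones(M) / np.sqrt(M)          # the (approximate) first eigenvector
tfilt = np.convolve(E, E[::-1]) / M  # Matlab: conv(E, flipud(E))/M

# The result is exactly the triangular filter (1 2 ... M ... 2 1)/M^2,
# whose weights sum to M^2/M^2 = 1, i.e. a proper smoothing filter.
triangle = np.concatenate([np.arange(1, M + 1),
                           np.arange(M - 1, 0, -1)]) / M**2
print(np.allclose(tfilt, triangle))  # True
print(round(tfilt.sum(), 6))         # 1.0
```

Each convolution entry counts the overlap of the two constant vectors, so the exact triangle drops out with no approximation at all once E is exactly constant.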

In the figure below, I’ve calculated actual values of the Rahmstorf filter for the HadCRU temperature series with M=11, demonstrating the virtual identity of the Rahmstorf filter with a simple triangular filter (™ – The Team).

Figure 1. Comparison of Triangular and Rahmstorf Filters

In comments at RC and David Stockwell’s, Rahmstorf categorically asserted that they did “not use padding”:

you equate “minimum roughness” with “padding with reflected values”. Indeed such padding would mean that the trend line runs into the last point, which it does not in our graph, and hence you (wrongly) conclude that we did not use minimum roughness. The correct conclusion is that we did not use padding.

Parsing the ssatrend.m code, it turns out that this statement is untrue. The ssatrend algorithm pads with the linear projection using the trend over the last M points – terms like “minimum roughness” seem like rather unhelpful descriptions. Mannian handling of endpoints is a little different – Mann reflected the last M points and flipped them centered on the last point. I observed some time ago (a point noted by David Stockwell in his inquiry to Rahmstorf) that this procedure, when used with a symmetric filter, pinned the smooth on the end-point ( a procedure used in Emanuel’s 2005 hurricane study, but not used when 2006-7 values were low.)

Rahmstorf padding yields results that are a little different from Mannian padding, because the symmetric (very near-)triangular filter applies to actual values in the observed period, but uses linear trend values rather than flipped values in the padding period. If the linear trend “explains” a considerable proportion of the last M points, then, in the padding period, the linear projection values will approximate the flipped values and the smooth from one filter will approximate the smooth from the other filter. Rahmstorf’s statement that he did not use “padding” is manifestly untrue.

Given that the filter is such a simple triangular filter, you can see how the endpoint smoothing works. M years out from the end, it is a triangular filter. But at the last point, in Mannian padding, all the filter terms cancel out except the loading on the last year – hence the end point pinning. The transition of coefficients from the triangular filter to the one-year loading can be calculated in an elementary formula.
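The pinning is easy to verify numerically: with reflect-and-flip padding, every symmetric-filter term cancels except the loading on the last point. Below is a hedged Python sketch of the two padding schemes as operationally described above (my own reading, not Mann’s or Rahmstorf’s actual code; the trend window in the linear branch is an illustrative choice):

```python
import numpy as np

def smooth_with_padding(x, w, pad):
    """Apply a symmetric filter w (odd length) to x, padding the right end.

    pad='flip'  : Mannian reflect-and-flip, x[n+j] = 2*x[n] - x[n-j]
    pad='linear': Rahmstorf-style linear projection of the end trend
    Illustrative sketch only."""
    h = len(w) // 2            # half-width of the filter
    n = len(x)
    if pad == 'flip':
        right = 2 * x[-1] - x[-2:-h - 2:-1]
    else:
        # linear projection of the trend over a trailing window
        # (window length 2h+1 chosen here for illustration)
        a, b = np.polyfit(np.arange(n - 2 * h - 1, n), x[-2 * h - 1:], 1)
        right = a * np.arange(n, n + h) + b
    xp = np.concatenate([x, right])
    # 'valid' convolution smooths up to (and including) the last real point
    return np.convolve(xp, w, mode='valid')

M = 11
w = np.concatenate([np.arange(1, M + 1), np.arange(M - 1, 0, -1)]) / M**2
x = np.cumsum(np.random.default_rng(0).standard_normal(100))

s_flip = smooth_with_padding(x, w, 'flip')
# With reflect-and-flip padding the smooth is pinned to the raw end value:
print(np.isclose(s_flip[-1], x[-1]))   # True
```

The cancellation is exact: at the last point the padded value x[n-j] terms cancel pairwise, leaving the full weight on x[n], regardless of the data.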

Turning now to the underlying reference, Moore et al 2005, I note that EOS is the AGU newsletter and not a statistical journal. [Update note: its webpage states: “Eos is a newspaper, not a research journal.” I receive EOS and often find it interesting, but it is not a **statistical** journal.]

Given that we now know that the various and sundry operations simply yield a triangular filter, it’s worthwhile re-assessing the breathless prose of “New Tools for Analyzing Time Series Relationships and Trends”.

Moore et al begins:

Remarkable progress recently has been made in the statistical analysis of time series…. This present article highlights several new approaches that are easy to use and that may be of general interest.

Claims that a couple of Arctic scientists had discovered something novel about time series analysis should have alerted the spidey-senses of any scientific reader. They go on:

Extracting trends from data is a key element of many geophysical studies; however, when the best fit is clearly not linear, it can be difficult to evaluate appropriate errors for the trend. Here, a method is suggested of finding a data-adaptive nonlinear trend and its error at any point along the trend. The method has significant advantages over, e.g., low-pass filtering or fitting by polynomial functions in that as the fit is data adaptive, no preconceived functions are forced on the data; the errors associated with the trend are then usually much smaller than individual measurement errors…

The widely used methods of estimating trends are simple linear or polynomial least squares fitting, or low-pass filtering of data followed by extension to data set boundaries by some method [e.g., Mann, 2004]. A new approach makes use of singular spectrum analysis (SSA) [Ghil et al., 2002] to extract a nonlinear trend and, in addition, to find the confidence interval of the nonlinear trend. This gives confidence intervals that are easily calculated and much smaller than for polynomial fitting….

In MC-SSA, lagged copies of the time series are used to define coordinates in a phase space that will approximate the dynamics of the system…

Here, the series is padded so the local trend is preserved (cf. minimum roughness criterion, [Mann, 2004]). The confidence interval of the nonlinear trend is usually much smaller than for a least squares fit, as the data are not forced to fit any specified set of basis functions.

Here’s a figure that probably caught Rahmstorf’s eye. The “new” method had extracted a bigger uptick at the end than they had got using Mannian smoothing. No wonder Rahmstorf grabbed the method. Looking at the details of the caption, the uptick almost certainly arises simply from a difference in filter length.

Fig. 2. Nonlinear trend in global (60°S–60°N) sea surface temperature anomaly relative to 1961–1990 (bold curve) based on the 150- year-long reconstructed sea surface temperature 2° data set (dotted [Smith and Reynolds 2004]), using an embedding dimension equivalent to **30 years**; errors are the mean yearly standard errors of the data set. Shading is the 95% confidence interval of the nonlinear trend. The curve was extended to the data boundaries using a variation on the minimum roughness criterion [Mann, 2004]. For comparison the thin curve is the low-pass trend using Mann’s low-pass filter and minimum roughness with a **60-year** cutoff frequency.

At the end of the day, the secret of Rahm-smoothing is that it’s a triangular filter with linear padding. All the high-falutin’ talk about “embedding dimension” and “nonlinear … lines” is simply fluff. All the claims about doing something “new” are untrue, as are Rahmstorf’s claims that he did not use “padding”. Rahmstorf’s shift from M=11 to M=15 is merely a shift from one triangular filter to a wider triangular filter – it is not unreasonable to speculate on the motive for the shift, given that there was a material change in the rhetorical appearance of the smoothed series.

Finally, I do not believe that the Team could successfully assert a copyright interest in the triangular filter (™ – The Team).

## 221 Comments

Congratulations Steve. In the interests of full acknowledgment, it must be remembered that Rahmstorf et al 2007 is coauthored by a number of lead authors of the IPCC reports, including one Hansen, J.E. Wouldn’t want them to miss out on the credit for this major innovation.

I’m surprised that you don’t get sued for these accusations. Basically you are calling them liars. This is a very serious accusation…

Steve: please do not put words in my mouth. I said “Rahmstorf’s statement that he did not use ‘padding’ is manifestly untrue.” I commented about the statement, not the person.

Re: Luis Dias (#2),

Unlike this:

The correct conclusion is that we did not use padding

Right?

I know he has a matlab class, but does Dr. Steig also teach linear algebra? Would a stint at UW be a productive use of your and Dr. Rahmstorf’s time?

The concept of a nonlinear line is apparent in referring to a “line graph” when one means a scatter plot of a series of consecutive points, connecting each point to the next in the series with a line segment. I’m a pedant but I don’t think their a-mathematical phrase is that bad, at least by the standard of how “line” tends to be used in the graphing community.

Very good job Steve! You wonder why the “Team” doesn’t like to release information. Have you heard it’s not air temperatures going up? It’s ocean temperatures the AGW group is centering on now!

What’s odd is that Stefan seems to have such little knowledge about the specific features of the filters he uses. As so many possibilities exist, you would think he would look at the math and understand the features of a variety of filters he might have used, be able to describe the guts of each, and explain why he picked one filter rather than another.

Re: lucia (#9),

I certainly would think that *you* would. I’m not at all surprised that *he* didn’t.

Words to live by, even if, or especially if, the claim supports something you believe to be true.

ROTFL — hahahahha. Wow, nice work. Waste of a perfectly acceptable Friday but otherwise, whew!!

In ’94 I worked for a weekend on calculation of a perfect lens for a project; after two 15-hour days of mangling equations in spreadsheets it was finished — I fit a 4th order polynomial to the result, which turned out to be — a perfect circle!

Maybe I’m just pedantic, but I like to program my calculations myself from the theory/equations for a problem. Even the maximum likelihood functions. That way I am sure I understand what I’m doing. Silly me. I didn’t know I could get away with handwaving if I used fancy language and hid my calculations.

Re: Craig Loehle (#10), Yah . . . I just got schooled by Nic L on the only part of a script I thought I understood well enough that I didn’t bother to rewrite it. There’s no substitute for doing it yourself – even if you have a ready-made script – just to make sure you understand what it is that you’re doing.

without having to call themselves liars or incompetent.

This is what I’m trying to figure out. Given his statement/refutation about not using padding, Rahmstorf had to either knowingly lie, or he had/(has?) absolutely no idea how that (very simple) smoothing algorithm works. Can anyone come up with a 3rd option?

I think this explains why the linearization about a trend works so well ( http://www.climateaudit.org/?p=3504#comment-301724 )

Here’s the middle point impulse response:

and the endpoint impulse response:

Coldish -08 is treated a bit differently in the new version 😉

I had some problems in understanding Moore’s article earlier,

http://signals.auditblogs.com/2008/09/25/moore-et-al-2005/

BR,

UC (Let’s say academic engineer. Not a statistician 😉 )

Re: UC (#26),

Coldish -08 is treated a bit differently in the new version

Counting back, so is 1998. How very convenient. 🙂

Re: Terry (#13),

Careful, this filter is not time-invariant. The -98 effect is interesting, though; let me re-check.

Well done, Steve. Yet another black eye for Science mag (5/4/07).

The full list of the authors who share this co-discovery of the triangle is: Stefan Rahmstorf, Anny Cazenave, John A. Church, James E. Hansen (of GISS), Ralph F. Keeling, David E. Parker, and Richard C.J. Somerville.

There appears to be a trivial typing error in the link to the EOS paper:

It reads:

http://data.climateaudit.org/pdf/statistics/grinsted.2005.pef

when it should be:

http://data.climateaudit.org/pdf/statistics/grinsted.2005.pdf

(change pef to pdf)

John A writes: Quite correct. I’ve changed the link to the correct file.

No more legal discussion please. I’ve deleted a number of comments mostly for editorial reasons – that they distract from the thread. Sorry bout that.

Steve, typo in the path to your R, should be scripts/rahmstorf/functions.rahmstorf.txt

I know I’ll be snipped (yet) again, Ian

Steve: yep, no point in this sort of discussion. Too much of it already.

Do not knock the technique. After a while with the current trend, the figures will flip violently the other way.

Nick

Re: nick (#18),

A triangular filter is what it is. I didn’t say that this yielded “wrong” results. If you use a triangular filter, say so. And they shouldn’t claim that they’ve made a big discovery in time series analysis when they haven’t.

William Briggs’ standard comment about smoothing is: “don’t”. Is there any valid purpose to smoothing the HadCRU data here, or is it merely rhetorical?

The other issue is a sort of accounting issue. If you’ve established an “accounting” policy for how you smooth series in presentations, you’re stuck with it. You shouldn’t change it opportunistically; you shouldn’t change it without clearly disclosing the change to your readers, together with a statement of the impact of the accounting change.

The following is from John Brignell at Number Watch for July. It says it all!

“It is impossible to provide a reasonable estimate of the local slope at the last data point. It all depends on what happens next, which is the great unknown. Various methods have been applied, but in essence they are all cheats. Some of the dishonest tricks used are quite gross, as happened with the notorious “Hockey Stick”. This predicted that temperatures were rising sharply, when in fact they were declining. Among other enormities was packing a smoothing routine beyond the end-point with manufactured data that assumed the desired outcome. The moral is: treat any smoothed data with care and, if they are smoothed up to the end point, with disdain.”

As I recall from discussion on CA way back, his comment was in effect, “don’t use it for statistical analysis.”

It’s generally OK to look at smoothed data (which averages out noise in an ad hoc way), but if it’s carried out to the end point, the extension should at least be indicated as a dotted line or with some other indication of incompleteness.

Re #14, Hu McCulloch:

The paper was initially published in ScienceXpress online on 1 February 2007, on the eve of the release of the AR4 scientific report in Paris. It was used in media reports on the AR4, to reinforce the message that “it’s worse than we thought.” According to a media release by Australia’s CSIRO on 2 February:

“An international team of climate scientists has cautioned against suggestions that the … IPCC has previously overestimated the rate of climate change. The team, from six institutions around the world, reviewed actual observations of carbon dioxide, temperature and sea level from 1990 to 2006 and compared them with projected changes for the same period. In a review published in the journal Science today, the authors found that … global-mean surface temperatures were in the upper part of the range projected by the IPCC… The global average temperature estimates are collated separately by NASA’s Goddard Institute for Space Studies in the USA and the Hadley Centre and Climatic Research Unit in the UK.”

The CSIRO media statement named all seven authors and their institutions – as David Stockwell says, they should all now share the credit.

Steve, you really need to read Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science. There are some fascinating parallels between the use of language you describe above and the “post modernist” intellectuals’ use of language in their papers. The irony, of course, is that in this case it’s a *scientist* making these incomprehensible claims ;).

I authored the original matlab file. When I have sent people this file I usually always point out that it is very similar to the triangular/bartlett filter. (In some rare cases it does can yield something different.) The term “non-linear trend” is used very commonly in SSA papers to denote the lowest frequency component. So, it is not something completely new – the new idea is to think of the ssa-reconstruction as a FIR filter which makes it much more clear exactly what padding is going on. I coded it with linear padding because I thought that this gave more transparent results than the mirroring. I like that I can tell by eye directly from the smooth where the results are affected by padding.

Re: Aslak Grinsted (#26),

Thanks for checking in. I’m as big of an offender as anyone on rewriting and creating ambiguity, but can you clarify what you meant by:

I usually always point out

and

it does can yield

The second, I suppose, since being qualified with “In some rare cases”, can be assumed not to matter, but I’m not sure which you meant by the first.

And – do you remember if you pointed out to Dr. Rahmstorf that the method used linear padding?

Just curious – thanks!

Re: Aslak Grinsted (#26),

Aslak, thanks for checking in. Wouldn’t it have made more sense to have pointed this out in the article itself? Rahmstorf doesn’t seem to have appreciated it.

As a mathematical exercise, I’m intrigued by the cases in which the results differ from a triangular filter. Can you post up/send me an example data set which yields a different result? It would be interesting to work through. Thanks, Steve Mc

Re: Aslak Grinsted (#26),

Aslak, I notice that you are a coauthor of the study discussed here: http://www.sciencedaily.com/releases/2009/07/090701102900.htm. I am unable to identify an archive for the data used in this study. Could you please post up the ice core, tree ring and sea ice data used for the regression in a publicly accessible location? Thanks, Steve McIntyre PS I’ve separately emailed the lead author.

Re: Aslak Grinsted (#26),

Aslak, one more thing. In your Moore et al 2005, you state:

Do you now agree that the “new” approach is itself simply a version of “low-pass filtering of data followed by extension to data set boundaries by some method” -since your method proves to be a low-pass (triangular) filter with extension by a linear projection of a short segment at the end?

A little OT. The errors shown by the grey shading on the SST graph (Figure 2 at end of post) are a mystery to me. Are we actually being told that SST measurements in the 1800s have an error of less than +/- 0.1 degree? And the number of ocean buoys deployed in the period was… zero? Get out of here!

snip – no need to pile on

Wow. Is it this easy to discredit all these {Stefan Rahmstorf, Anny Cazenave, John A. Church, James E. Hansen (of GISS), Ralph F. Keeling, David E. Parker, and Richard C.J. Somerville.} scientists?

As I can find no scientific reply from this clan justifying their methods, it’s all over except for the reports that relied on this analysis to amend the record with the disclosure about the undocumented change to the smoothing length.

Thanks Steve. Thanks to all the others who also spent untold toil on this issue. You all should send a bill to the clan at your billable rates, demanding payment from them for their “stonewalling history”. If they simply provided data and methods, they could have avoided this completely.

I have to also agree that this issue has to break down to either incompetent science or intentional and biased rhetoric.

Re: EJ (#31),

Folks, keep this in perspective. There’s nothing necessarily “wrong” with a triangular filter – it is what it is. Rahmstorf’s obstruction of Stockwell’s effort to find out what he did is inappropriate. Rahmstorf’s obstruction earned a bit of snark when it turned out that he doesn’t appear to have a very clear understanding of this aspect of his methodology.

Having said that, I’m not convinced that there’s any pressing need to smooth the data. Maybe Rahmstorf’s point – that observations are in the “upper” end of IPCC models – holds up with unsmoothed data, but that’s an exercise for another day.

In the meantime, we can draw some amusement at the contrast between Rahmstorf’s pomposity and his performance – but no more than that.

Re: Steve McIntyre (#34),

Then I’m confused. I thought that Rahmstorf was cited repeatedly as critical evidence that the last few years of data showed that the climate was getting worse than anyone had predicted. The reality is that the last few years of data in his graph were not what most people were led to believe they were. Most people assumed (or were told) that the data was simply the real, raw data showing climate history. Instead, the years from the past had the value of their datapoints influenced by his guesstimates of the future. The most recent years leading up to the study were influenced the most.

I doubt very seriously that the general public understood that guesses about the future were used to “adjust” data about the past (much less how extensively). I’m confident that any reasonably competent trial lawyer who cross-examined any of the authors of this study would have a jury absolutely convinced within a few minutes that the authors were deliberately using statistics to mislead the general public.

That’s pretty serious.

[side question — would it be accurate to say that the two studies most often cited by AGW proponents to the general public would be Mann’s hockey stick and Rahmstorf? With perhaps Jones on UHI also ranking high on the list?]

Re: stan (#37),

Please do not extrapolate beyond the point made here. The issues that you mention are not ones that I raised in this post.

IPCC reports are what are influential – not the studies that you mention.

The url to the pdf file contains an error. It should be .pdf not .pef.

Besides the ‘seemingly’ contradictory statements from Rahmstorf, it seems very strange to me to publish – in a newspaper – a newly discovered, simpler method of data analysis to the community and then, when the community asks for greater detail, obfuscate that very discovery.

Am I wrong?

Where is the integrity in that?

Re: Nut (#36),

The article in EOS was published by Moore, Grinsted et al, not by Rahmstorf – though Rahmstorf’s use of it was prominent.

Steve,

I wholeheartedly agree with your last note that things should be kept in perspective. Rahmstorf may be inaccurately describing what is done in the code. But there isn’t necessarily anything wrong with the smoothing. We’ve been debating at some length at Lucia’s about the legitimacy of extrapolation. I quoted this relevant advice from the Australian Bureau of Statistics:

They could have said “Alternatively and equivalently”. Any linear extrapolation, whether Grinsted’s or MRC, gives the padded points as a linear combination of values within the range. Then the application of the symmetric filter gives another linear combination for the smoothed points. The result is just an asymmetric filter using points within the known range.

While this method of construction does use padding, the same filter could have been used without invoking extrapolation. Mann’s 2004 paper says that the asymmetric filters in MRC can be derived by minimising the second derivative of the smooth. He doesn’t give proper details, but it is a reasonable criterion, which doesn’t rely on extrapolation.

In fact, I think MRC and/or Grinsted end filters do have a particular property – they are asymmetric, but have zero lag. This is ensured by the method of construction. The extrapolation would continue any straight line exactly. The symmetric filter leaves a straight line invariant. Therefore MRC or Grinsted do the same. Smoothing which leaves any straight line invariant is zero-lag, which is useful.

You remarked on how this pins the smooth on the endpoint. This is a necessary consequence of zero lag. Near the end, you have to upweight the few available future points to balance the past. But at the end, there are no future points available, so the filter can’t use any past points.
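The zero-lag claim in the preceding comment is straightforward to check numerically: a symmetric filter combined with linear extrapolation leaves any straight line invariant. A small Python sketch (illustrative only; the filter and test line are my own choices):

```python
import numpy as np

M = 11
# symmetric triangular filter (1 2 ... M ... 2 1)/M^2; weights sum to 1
w = np.concatenate([np.arange(1, M + 1), np.arange(M - 1, 0, -1)]) / M**2
h = len(w) // 2                    # half-width of the filter

x = 3.7 * np.arange(80) - 12.0     # an arbitrary straight line

# pad both ends by linear extrapolation of the line's own slope
slope = x[1] - x[0]
left  = x[0] - slope * np.arange(h, 0, -1)
right = x[-1] + slope * np.arange(1, h + 1)
s = np.convolve(np.concatenate([left, x, right]), w, mode='valid')

# the smoother reproduces the straight line exactly: zero lag
print(np.allclose(s, x))   # True
```

The invariance follows because the weights sum to 1 and, by symmetry, the first moment of the weights is zero, so any affine trend passes through unchanged.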

Re: Nick Stokes (#40),

Nick, again, to keep things in perspective, you should not let Rahmstorf off the hook on his opportunistic change of accounting policies, his misunderstanding of the filter properties and his obstruction of David Stockwell’s efforts to determine what he did. I’ve been quite objective in distinguishing between these problems and their impact on the “results” – which I’ve not yet got to – but equally it would be appropriate if you conceded these points before defending him on alternative grounds.

The end point pinning problem has a history that you may not be aware of, arising out of Emanuel 2005, with Emanuel agreeing that end point pinning was an “error”.

Emanual had applied a smoothing filter that pinned the end-point on a high value.

Landsea commented:

Emanuel replied:

In a 2007 submission to GRL, Roger Pielke and I showed (in passing) the effect of Emanuel’s pinning using the then most up-to-date data (with a low end point.) A referee stated that it was now “well established” that end point pinning was “outlandish” (we had noted that was not the way to do things) and rejected even an illustration.

In a 2007 CA post, I observed http://www.climateaudit.org/?p=1681 that Mannian “reflect-and-flip” padding (an operational description of the method) resulted in a pinned end-point.

In climate science, it seems that people often try to take inconsistent positions. Is your position that Emanuel confessed to an “error” that he shouldn’t have?

Re: Nick Stokes (#40),

Zero lag at zero frequency is by no means an adequate criterion in judging the response of a realizable (unilateral) low-pass filter operating on a broadband signal, such as temperature. Invariably, the response of such filters *lags* the signal components at nonzero frequencies. Claims that this lag can be overcome in any rigorous fashion are simply another example of trying to substitute assumptions for data – an egregiously common practice in “climate science.”

Re: John S. (#153), Time series analysis is not subject to strict realizability, which would indeed require a lag. For most of the data, you are smoothing knowing what came before and after, and people can and mostly do use a centred filter. Realizability only becomes an issue as you approach the endpoint. In this method, smoothing degrades until, at the end, there’s no smoothing at all. That is realizable. It’s the price you pay.

A desire to smooth to the end is not exclusive to climate science. Probably the most enthusiastic practitioners are would-be moneymakers studying various market prices. And it seems to go back at least to 1877.

Re: Nick Stokes (#154),

You’ve missed the entire point about the desire to overcome “the end-point problem” in rigorous smoothing being an empty one.

Re: Nick Stokes (#154), Nick, your comment about would-be moneymakers piqued my interest.

I too have reflected on the similarity between financial technical analysis and some areas of climate science. Both make extensive use of smoothing and filtering techniques, and both regularly grapple with issues such as sample period length, underlying “signal”, endpoints, etc.

Here in part is what Wikipedia has to say about technical analysis:

“In the 1960s and 1970s it was widely dismissed by academics. In a recent review, Irwin and Park reported that 56 of 95 modern studies found it produces positive results, but noted that many of the positive results were rendered dubious by issues such as data snooping so that the evidence in support of technical analysis was inconclusive; it is still considered by many academics to be pseudoscience”.

I would suggest to the Society of Technical Analysts they might find useful the proven skills of a Mr William M Connolley.

Re: Ian Sims (#196), Ahhh – it’s amazing what jumps back at you on re-reading your own submissions. To be clear, I intended my reference to Mr Connolley solely as a humorous take on his Wikipedia efforts on behalf of the hockey stick – no other angle intended whatsoever.

this is a link to “Rules of Good Scientific Practise in the Leibniz Association” (Rahmstorf’s employer PIK is a member institute of the Leibniz Association)

I think the questions that arise are:

– did Rahmstorf violate the code of ethics, by changing parameters, endpoints without telling, making an untrue statement about padding, not releasing source code etc. ?

– does his employer PIK tolerate his conduct ?

– does the Rahmstorf group or PIK interact with the statistical community ?

– how does the Rahmstorf group select statistical methods (following statistical research or is the selection data or even result driven) ?

Steve: again, I urge readers not to go a bridge too far. This happens far too often. People need to learn not to hyperventilate. Aside from being a waste of energy, it’s counter-productive. What happens is that an opponent always rebuts the weakest and most hyperbolic claim and avoids dealing with the narrower and more problematic issue. Please stop going beyond what’s on the table.

forgot the link:

http://www.wgl.de/?nid=gwp&nidap=&print=0

Steve,

When trying to smooth to the end of the data range, there must be some negative trade-offs. Most asymmetric filters create a lag. The smoothed value is actually an estimate whose expected value relates to some previous point in time. MRC insists on zero lag; the price is that as the endpoint is approached, the endpoint is up-weighted, with the weighting reaching 1 at the actual end. So you sacrifice smoothing effectiveness to maximise use of current data. The result is that the end point is very mobile as new data is acquired.

I don’t think that is an error; it’s a choice. Emanuel may not have been aware that he was making it.
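The lag trade-off described here can be seen in a toy calculation (a sketch of mine, not from any of the papers under discussion): a one-sided trailing moving average applied to a pure linear trend lags the data by (m-1)/2 steps, which is exactly what a zero-lag endpoint rule is designed to avoid.

```python
# Illustrative sketch (not from the papers discussed): a trailing
# m-point moving average of a linear trend lags by (m - 1) / 2 steps.
def trailing_ma(y, m):
    """One-sided moving average using only current and past values."""
    return [sum(y[i - m + 1:i + 1]) / m for i in range(m - 1, len(y))]

y = [float(t) for t in range(20)]   # pure linear trend y_t = t
sm = trailing_ma(y, 5)

# At time t the smoothed value equals t - 2, i.e. a lag of 2 steps;
# pinning the endpoint (weight 1 on the last value) would give 19 here.
print(sm[-1], y[-1])                # 17.0 19.0
```

A zero-lag endpoint rule avoids this lag, but only at the price described above: the endpoint weight climbs toward 1 and the smoothing effect disappears there.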

Re: Nick Stokes (#45),

Nick, if someone wants to use an asymmetric filter for end-point purposes, their obligation is to state what the filter is and provide a reference to the statistical literature where its properties can be consulted. No useful purpose is served by climate scientists purporting to invent triangular filters.

You say that end-point pinning is a “choice”, not an “error”. On this basis, it is then your position that Emanuel confessed to an “error” that he did not commit. Can you confirm that you agree with this? Emanuel may not have been aware of the issue when he wrote the article, but he was aware of it in his reply to Landsea.

The other implication of your statement is that the GRL referee’s statement that this method was “outlandish” is also incorrect.

Whichever answer is “right”, these positions cannot all be reconciled, and it would do you no harm to clearly concede points that you do not dispute. Otherwise such discussions are not of interest.

Inconsistent positions.

Dang, can’t we call it better?

Steve,

I haven’t sought to discuss the SSA issue. But SSA is itself a well established statistical procedure (you can get an R package). There must be more to it than just approximating a triangular filter. But in fact, I think here it is what is done near the endpoint, rather than the interior filter used, that is controversial.

I don’t think the issue of smoothing effect vs zero lag can be resolved by reference to who said what when. I can’t really comment on why the referee thought Emanuel’s effort was outlandish. In fact, I don’t know myself whether it was right or wrong. Emanuel seems to be saying that he should have not shown the endpoint, rather than that he should have used some other smooth approximant there.

Again, all I can do is put my argument that the only lag-free smoothing estimator at the endpoint is the value itself. Whether you should demand greater smoothing at the cost of a lagged estimator is debatable. Maybe most people would say that you should.

I agree that there is probably a discussion of the endpoint issue (lag vs smoothing) in the statistical literature that Mann and others should have referred to. But I haven’t, on a cursory search, been able to find it.

Re: Nick Stokes (#48),

We are discussing Rahm-smoothing, not SSA. While there may well be more to SSA than approximating a triangular filter, I’ve analyzed the code for Rahm-smoothing and IMO I’ve convincingly shown that it is simply a virtually triangular filter plus linear padding – contrary to the overblown language of the authors.

The R package for SSA doesn’t work for Windows. (I’ve corresponded with the package maintainers.)

I’m going to discuss the connection between SSA and triangular filters a little more today. There’s another penny to drop.

Re: Steve McIntyre (#59),

There is a package that does ssa in R. It is called simsalabim and it runs in Windows. It downloads in the usual fashion from within the R program.

The two main functions are decompSSA and reconSSA. I have been playing with it over the last several days and it seems reasonably straightforward to use.

Re: RomanM (#65),

Roman, whatever the merit or otherwise of the SSA package, ssatrend.m doesn’t use it. I replicated ssatrend quite quickly without using any package – simply using svd. My replication matches Matlab results.

Re: Nick Stokes (#48),

Using a “linear line” is one plausible alternative.

Re: Nick Stokes (#48),

Nick, you say:

Interestingly, a couple of years ago, I was sent a copy of a submission on this very topic submitted to GRL – by a highly competent author with many academic publications.

Reviewer #1 ranked the paper:

Reviewer #1 stated:

The paper was rejected.

The story is pretty interesting.

Re: Steve McIntyre (#87), Steve, I looked a bit further in the statistical literature. I found this paper which said:

Re: Nick Stokes (#89),

Nick, I haven’t argued one way or the other about what is a “good” way of constructing a smooth – you’re arguing with someone else here. I haven’t opined on the issue. My original point was simply to determine what Rahmstorf did as opposed to what he said he did. After we agree on what he actually did, it will be time to consider whether it makes any sense.

Determining what he actually did has not been easy for a variety of reasons – primarily arising out of GARP.

Rahmstorf said (1) that he didn’t use padding and (2) that his filter was only M=11 long, affecting only the last 5 values. We’ve shown (1) that he used padding and (2) that his filter is 2M-1 = 29 long (for the M=15 used in the Copenhagen report).

Further, Rahmstorf used a generally unavailable method (ssatrend.m was not available to non-Team members until recently), a poorly documented method that was only published in the “AGU newspaper, not a research journal”, that was poorly peer reviewed, if indeed it was reviewed at all, and where Rahmstorf was uncooperative with inquiries. It is a fundamental position here that important applied studies should use well understood methods and not home-made, ad hoc methods that are poorly described.

For some reason, you seem unwilling to agree with this. I don’t understand your apparent opposition to supporting those simple principles – adoption of these principles by the Team would mitigate much criticism.

Re: Nick Stokes (#89),

Your reference does NOT support your overall attempt to launder Rahmstorf.

Rahmstorf wants to argue that observations are in the upper part of the IPCC cone. I agree with Lucia that this has to be supported based on observations to date. I fail to see how extrapolating data – by Mann-smoothing, Rahm-smoothing or any other method – is not relevant to this assessment. This is a different issue than smoothing methodology. (You also have to realize that weird Team smoothing has a history here – Mann 2008 is an extended riff on spurious correlation arising out of excessive smoothing, and this sort of thing affects our consideration of new Team emanations.)

An on-point reference has to do more than your citation does – it has to demonstrate circumstances in which a smoothed series with projected data provides a more effective statistical test than the original data. I don’t think that you can find one.

Re: Steve McIntyre (#91)

Steve, let me work an example. Suppose we smooth with a 7-point version of your triangular filter and Mann-style extrapolation. I’ll call the data points y and number them backwards, so the endpoint is y1. We need three extrapolates:

… y4 y3 y2 y1 2y1-y2 2y1-y3 2y1-y4

Then the smooths corresponding to y4, y3, y2 and y1 are:

(y7+2y6+3y5+4y4+3y3+2y2+y1)/16, (y6+2y5+3y4+4y3+2y2+4y1)/16, (y5+2y4+2y3+2y2+9y1)/16 and y1

The first is of course the interior filter, and then they grade down. My point is that whether they were constructed by extrapolation or, say, by a minimum second derivative criterion, they are just normal asymmetric smoothers with the zero lag property.

To check zero lag, put y1=1, y2=2, y3=3 etc. You can substitute to see that the smooths take the same values.
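This arithmetic can be checked mechanically. Here is a sketch (mine, using the same backwards numbering with y1 as the endpoint) that reflects about the endpoint, applies the symmetric 7-point triangular filter, and confirms the zero-lag property on linear data:

```python
def smooth_at_end(y):
    """y given oldest-first; return the smooths of the last 4 points (y4..y1)."""
    w = [1, 2, 3, 4, 3, 2, 1]                 # triangular weights, sum 16
    y1 = y[-1]
    # Mann-style extrapolates: reflect about the endpoint, 2*y1 - y_k
    padded = y + [2 * y1 - y[-2], 2 * y1 - y[-3], 2 * y1 - y[-4]]
    out = []
    for c in range(len(y) - 4, len(y)):       # centres at y4, y3, y2, y1
        window = padded[c - 3:c + 4]
        out.append(sum(wi * yi for wi, yi in zip(w, window)) / 16)
    return out

# Zero-lag check: on linear data the smooths reproduce the data exactly.
y = list(range(1, 11))                        # y1=10 at the end, y2=9, ...
print(smooth_at_end(y))                       # [7.0, 8.0, 9.0, 10.0]
```

The printed values equal y4, y3, y2 and y1 themselves, exactly as the substitution check in the comment predicts, and the last smooth is the pinned endpoint value.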

Re: Nick Stokes (#94), I don’t think you understand the implications of what you are saying, Nick. When recent years start to show a downturn in temperature, and they do, using the past few years of data to forecast the future removes the downward trend at the endpoint. Try it.

Re: Craig Loehle (#103), Yes, of course new data affects the smooth curve in the fringe region. That doesn’t affect the fact that the smooth as drawn is the best estimate based on the information available at the time, using only known data. There is no “making up results”.

Re: Nick Stokes (#111),

It may use known data, but the ability to choose how that data is used – through the choice of padding method and smoothing parameters – offers ample opportunity to “make up results”. The folks at Hadley demonstrated this when the method they had been using to show how steadily the temperature had been rising took an equally sudden downturn. They quickly found the “error” in what they had been doing and changed methods.

Re: Nick Stokes (#111), If choosing different smoothing periods and methods for extrapolation (like Mannian reflection vs linear extrapolation vs whatever) gives you different results at the end period, and it does, then you can choose any wag of the dog’s tail you like, and that is not science, it is propaganda.
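To make the point concrete, here is a sketch (mine, not any author’s actual code; the padding labels are informal) showing that the same data and the same interior filter give three different endpoint values under three padding rules:

```python
def smooth_endpoint(y, pad):
    """Smooth the last point of y with the 7-point triangular filter."""
    w = [1, 2, 3, 4, 3, 2, 1]
    z = y + pad(y)                   # append three padded values
    c = len(y) - 1                   # centre the filter on the endpoint
    return sum(wi * zi for wi, zi in zip(w, z[c - 3:c + 4])) / 16

def pad_constant(y):                 # repeat the last value
    return [y[-1]] * 3

def pad_reflect(y):                  # mirror the values ("minimum slope" style)
    return [y[-2], y[-3], y[-4]]

def pad_min_rough(y):                # mirror about the endpoint ("minimum roughness" style)
    return [2 * y[-1] - y[-k] for k in (2, 3, 4)]

y = [0, 1, 2, 3, 4, 3, 2, 1]         # a series that turns down at the end
for pad in (pad_constant, pad_reflect, pad_min_rough):
    print(pad.__name__, smooth_endpoint(y, pad))
```

With this data the three rules give 1.625, 2.25 and 1.0 respectively; the mirror-about-the-endpoint rule pins the smooth exactly to the last observation.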

Re: Nick Stokes (#111),

Trying to recall my lessons in probability theory, doesn’t the

“best estimate based on the information available at the time”

require a

– data model

– noise model

– and a metric to quantify the estimate?

I really don’t see this requirement fulfilled by Rahmstorf’s unskilled and untrained usage of somebody else’s function, with ad hoc determination of the number of coefficients used.

A little history about the building, where these great moments in climate science happen:

“The main building of the PIK, today called Michelson House, was opened in 1879 as the first astrophysical observatory in the world…

The famous Michelson experiment took place in the basement of the eastern cupola in 1881, by which Albert Michelson tried to measure the velocity of the “ether” relative to the earth.

In the same building, Karl Schwarzschild (1873–1916), the director of the former observatory, in 1915 found the first exact solution of the field equations of Albert Einstein’s general theory of relativity.”

http://de.wikipedia.org/wiki/Potsdam-Institut_f%C3%BCr_Klimafolgenforschung

Re: Nick Stokes (#94)

So with the 7-point version, y2 (the second-last point) is smoothed to

(y5+2y4+2y3+2y2+9y1)/16

which means that the second-last smooth point has y1 weighted 9/16 and y2 weighted only 2/16. How can such biased weighting be justified?

Re: Sara Chan (#143), Sara, as I’ve said elsewhere, the biased weighting is necessary to ensure zero lag. The center of mass has to be at y2. That means -3*1 - 2*2 - 1*2 - 0*2 + 1*9 = 0, as it does. That is only possible if y1 is heavily weighted. That’s why the filters deteriorate, as smoothers, as you approach the ends.
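The centre-of-mass arithmetic can be written out as a short check (mine):

```python
# Offsets are measured in time steps relative to y2 (the point being
# smoothed); a zero first moment of the weights means zero lag.
weights = [1, 2, 2, 2, 9]        # on y5, y4, y3, y2, y1 (sum 16)
offsets = [-3, -2, -1, 0, 1]
first_moment = sum(w * o for w, o in zip(weights, offsets))
print(first_moment)              # 0
```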

Perhaps here is a systematic way to look at it. From the International Journal of Forecasting, “Real time representation of the UK output gap in the presence of model uncertainty” tackles an even more general case – GDP estimates where past data are revised as well.

Model uncertainty in this case could be realized in the type of padding extension. The result would be a flared CI around the endpoints, larger than expected from estimates without the model uncertainty.

I note that the well-known web comic XKCD has just put up a comment about obviously well-justified graphical extrapolation… http://www.xkcd.com/

Agree with those who don’t lightly use “canned” algorithms — at least when the limitations of the algorithm may have a significant effect. Like others here, I would start with first principles and work things through for myself — so I would know the assumptions and limitations.

In the subject case, since the “point” was use of a “new algorithm”, it borders on inexcusable “if” the authors used the algorithm as a black box without fully understanding it.

After a few drinks as a student, I’ve often walked a non-linear line home.

Hey Steve, I posted a quick piece about the evils of smoothing today.

http://wmbriggs.com/blog/?p=735

Re: William Briggs (#54),

Thank you for a fine illustration.

Many weather stations now record data at one-minute intervals. It is not uncommon to derive daily indices from these, then to combine the daily into annual. The annual are then correlated with station separation distance and the results used for in-filling (guesswork) and distance interpolation to make multidimensional maps.

Can we interpret from your illustration that interpolation of this type has traditionally been done incorrectly because of the influence of smoothing? How does one construct error envelopes properly through these stages? Is the so-called Law of Large Numbers applicable?

Re: William Briggs (#54),

I posted the following reply to your blog:

I wonder if maybe you haven’t just demonstrated a limitation to the usefulness of R**2 rather than the uselessness of smoothing per se. Or, a variation on the same thing, a reminder that correlation is not causation, and that if you torture the data long enough it will confess, even to a crime it did not commit.

As I look at your graphs, I find the loess lines highly informative, especially with the lower levels of smoothing. When I look at the raw data, I imagine that I can see a downward trend, but that the variations around the trend do not appear to be strictly random, but periodic. And your loess line with smoothness = 0.05 seems to bear this out.

I think that non-linear smoothing is important here, because it shows the limitations of linear smoothing as indicative of anything characteristic of this kind of data.

As for computing forecasting skill, I can see your point, but only because the forecast in question is not derived from the data itself. If I were to develop a deterministic non-linear forecast from the data, the projection from the smoothed time series would be the appropriate basis for computing forecast skill. But that is not what we are discussing. We’re discussing a test of the forecasting skill of GCMs, and unless they forecast non-linear behavior of the kind embodied in the smoothing, then it is opportunistic to use a smoothing that they didn’t forecast to claim higher forecasting skill.

But that is a condemnation of an inappropriate use of smoothing, not a condemnation of smoothing per se.

RE Nick Stokes, #40, what is MRC? It’s not in the CA Common Acronym list. The Mann (2004) article linked doesn’t explicitly list an MRC, though there is a Mann et al 2003. Is this it?

Re: Hu McCulloch (#55), Hu, MRC stands for Minimum Roughness Criteria (I don’t know why the plural). Functionally it means minimising the second derivative of the smooth near the endpoint.

Re: Nick Stokes (#71),

“MRC stands for Minimum Roughness Criteria”

See http://rankexploits.com/musings/2009/more-fishy-how-doesdid-mann-guess-future-data-to-test-projections/

Added to http://climateaudit101.wikispot.org/Glossary_of_Acronyms

My usual plug to Steve: please substitute this for the old one in your masthead!

Cheers — Pete Tillman

I notice that the comment period over at the EPA has been extended in an informal way:

“Late comments may still be submitted on the proposed rule; however, the Clean Air Act does not require that the Environmental Protection Agency consider comments submitted past the end of the official comment period June 23, 2009, when developing the final rule. Nonetheless, we will continue to consider comments received after the close of the comment period, to the extent practicable.”

http://epa.gov/climatechange/endangerment.html

Steve: Why wouldn’t you post this comment on the EPA thread rather than this one?

Re: Hank (#56), My mistake. So sorry. I’m thumping a drum, aren’t I? I’ve put it at your June 23 thread, “Climate Audit submission to EPA”

RE #21, 54,

William Briggs’ earlier article, dated 9/6/08, was entitled, “Do not smooth time series, you hockey puck!”:

If the data is some meaningful signal (like “climate”) measured with white noise error, smoothing can give a better estimate of the signal. The peril in subsequent analysis is that the estimation errors are now highly serially correlated, so that the effective sample size becomes much smaller than the nominal sample size. If the smoother was an m-period MA, one simple solution would be to only use every m-th smoothed value in subsequent analysis.
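The serial-correlation effect is easy to demonstrate numerically (a sketch of mine, not from Briggs’ post): an m-period moving average of white noise has lag-1 autocorrelation near (m-1)/m, while keeping only every m-th smoothed value, as suggested above, brings it back near zero.

```python
import random

def lag1_corr(x):
    """Sample lag-1 autocorrelation."""
    mx = sum(x) / len(x)
    num = sum((x[i] - mx) * (x[i + 1] - mx) for i in range(len(x) - 1))
    den = sum((v - mx) ** 2 for v in x)
    return num / den

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(5000)]
m = 10
ma = [sum(noise[i:i + m]) / m for i in range(len(noise) - m + 1)]

print(round(lag1_corr(noise), 2))    # close to 0
print(round(lag1_corr(ma), 2))       # close to (m - 1) / m = 0.9
print(round(lag1_corr(ma[::m]), 2))  # close to 0 again
```

The effective sample size of the smoothed series is correspondingly much smaller than its nominal length, which is the peril Hu describes for subsequent analysis.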

Re: Hu McCulloch (#57),

Thanks for this link – I knew that Matt had written forcefully on the topic. CA readers, check out Matt Briggs’ new and old postings on this topic.

Re: Hu McCulloch (#57),

Hu,

Thanks for this countervailing view to the mantra of “no smoothing, never, ever.” There is nothing wrong with smoothing per se. A linear regression is a kind of smoothing. FFTs are a kind of smoothing. The only real issue with smoothing is to acknowledge the loss of information (or degrees of freedom) that results from smoothing, and thus the corresponding increase in uncertainty (or the confidence interval).

I’m in engineering and so don’t know the answer to this question. However if a new method is proposed in an engineering paper, the referees usually require that it be compared to previous methods. They would expect the paper to identify in what ways the previous methods were deficient and how the new method addresses these deficiencies. They would expect that the limitations of the new method be identified and discussed. They would expect that these discussions be numerical.

This does not appear to be the case in the papers that are discussed on this blog. Is this the usual practice in the publishing of scientific papers in archival journals?

Re: TAG (#61),

Yes, I think so. I don’t believe most scientists are mathematically literate enough to do what you propose, nor does their work actually require them to be. We rely on people like you to do that for us and package up the results so we can stick data in one end and get significance etc. out of the other. The “skills” are then in choosing the correct analysis for the job (not just the analysis that gives you the results you’re looking for!) and hoping the guy who wrote the analysis software knew what he was doing!

Re: TAG (#61),

One of my consistent objections to many climate science articles is their use of “novel” statistical methods to derive important applied results without previously establishing the “novel” method in a statistical journal.

The code for ssatrend was never published. The only reference is an article in EOS, which AGU describes as a “newspaper”, not a research journal. I say this not to disparage EOS, but simply to observe that this is not a sensible way to establish the methodology.

Having said that, it’s not that there’s necessarily anything “wrong” with a triangular filter – but it is surely wrong even in climate science terms to describe the length of a triangular filter as an “embedding dimension”. This makes everything sound much grander than it really is.

Nick,

You are clearly a talented mathematician, but you would greatly enhance your credibility if you showed some balance. Two questions:

– are you not at all troubled that respected scientists published a significant and oft-quoted study in Science without understanding the methodology behind their analysis? (the evidence seems quite incontrovertible on this point).

– are you not troubled by the clear stonewalling of the lead author in response to queries about the analysis?

Re: verm (#70), verm, I spent some years as a mathematical and statistical consultant to scientists in a major research institution. So I might be troubled, but not surprised. We’d aim for understanding that is functional rather than insightful. On that criterion, I’m not sure that this paper falls way short.

I think that subject to demands of time etc (which people do have), they should be as helpful as possible in response to queries.

Re: Nick Stokes (#72),

“We’d aim for understanding that is functional rather than insightful.”

If you are saying that “functional” is to convey a particular interpretation by the reader, rather than an accurate (i.e. insightful) interpretation, then I would agree that the particular smoothing under discussion was done with that in mind. Doesn’t seem quite honest to me.

The Team is such a bunch of children. They are smoothing a time series for crying out loud. How difficult is that? They use a simple-minded weighting where lags are given progressively less weight. Big duh. It is triangular — completely arbitrary, but the kind of assumption you would make in the first fifteen seconds of thinking about the problem.

But then they dress it up with all their made-up terms (as if time-series statistics haven’t been understood by real statisticians for many decades now).

Ya gotta love:

“a data-adaptive nonlinear trend”

“no preconceived functions are forced on the data” (a bald-faced lie — the maximum number of lags is “preconceived”)

Good grief.

There is a vast literature in econometrics about smoothing. Problems arising from smoothing date back to at least Yule and Slutsky. See the following comment from Judy Klein’s history, Statistical Visions in Time (excerpt here), describing limitations:

It would be interesting to locate her references for the second point. Readers – please do not use this reference as a reason to pile on.

Re: Steve McIntyre (#76),

Sounds like a really bad (or really good) Christmas party.

One reason for smoothing is that an occasional outlier can give a false impression to the eye of what the behavior (signal) is. So I do not find smoothing per se objectionable. As an example, the 1998 El Nino extreme warm period (for UAH, for example) was very brief and does not affect linear trends as much as might be assumed. HOWEVER: smoothing up to the endpoint that involves extrapolation, reflection, or whatever is making up results IMO. Further, being coy about your smoothing method (length, filter type, etc.) prevents people from seeing what effect the smooth might have.

Re: Craig Loehle (#77), It just isn’t true that the use of extrapolation in constructing a filter involves making up results. The extrapolation step (from known data) is followed by applying a symmetric smoother, and as a matter of linear algebra, you can write down the resulting asymmetric filter that is applied at each step, and it uses only known data.

You can follow the progress of the smooth at a point in time. When you first reach that point, it is the endpoint, and the smoother is … nothing at all. No made-up data there, but no smoothing either. One timestep later, and you know a data point on the “future” side, and you can use a slightly better filter, with the new point balanced by some past data. And so on, with the filter eventually developing to the full symmetric filter used in the interior.

So you have a period where you can’t smooth as much as you’d like. But there’s no use of made-up data.

There is no end point problem.

We are talking about physical properties of the world at a point in time. I don’t know of any physical property that at one point in time depends on its future unknown values.

If you average the property over a time period and plot it, you are only plotting one point for that period. Plot daily average temperature against days, yearly average temperature against years, and so on.

If you calculate a weighted average over 11 years, you plot it against 11-year periods. Then of course you have to decide how to label your x axis; one example could be “End year of 11-year averaging period”.

If you change the averaging period dynamically in the plot, e.g. shortening it at the end, you don’t plot the same thing everywhere, which will make it very difficult to decide on a label for the y axis.

If you at some point include guesses of future values, you are not plotting a measured value but a prediction, and should clearly state that.

It’s that simple.

Steve, could you confirm whether you agree with my earlier comment (really from Jean S) that Rahmstorf seems to have confused the ‘embedding period’ with the ‘smoothing period’, and that the Copenhagen report Fig. 3 should say smoothed over 29 years, not 15 or 11?

Yes ’embedding dimension’ is just jargon to make something very simple sound highly technical and sophisticated.

We must recall that there are 7 authors of R et al 2007 and it may not have been R who did the averaging. Also, I think we should not be too hard on Aslak Grinsted, who has been dragged into this.

Re: PaulM (#79),

Yes, ssatrend.m with M=15 creates a (virtually) triangular filter of length 29.

Re: PaulM (#79), Re: Ian Castles (#93),

Yes, the smoothing (the filter length) is 2m-1 years, and they should further revise the caption. You can see it from the equation for the filter (tfilt): it’s the convolution of a vector E (an eigenvector of length m) and its reverse (thus also of length m). Hence, the filter length 2m-1. If E has equal coefficients, the convolution followed by normalization leads to a triangular filter.
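The convolution argument is easy to verify numerically (a sketch of mine): convolving a length-m vector of equal coefficients with its reverse gives triangular weights of length 2m-1.

```python
def conv(a, b):
    """Full linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

m = 4
E = [1.0] * m                      # eigenvector with equal coefficients
tfilt = conv(E, E[::-1])           # length 2m - 1 = 7
print(tfilt)                       # [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]
total = sum(tfilt)
print([w / total for w in tfilt])  # the triangular filter, normalised
```

So with M=15 the convolution is 29 values long, which is why the caption’s “15 years” understates the true filter length.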

Re: Aslak Grinsted (#95),

One should find a series such that the operation

does not pick an eigenvector (from a symmetric Toeplitz autocorrelation matrix) with (relatively) equal coefficients. Not an easy task.

Nick Stokes (#72),

In Rahmstorf, the meaning of the smoothed curve is misunderstood and incorrect conclusions were reported. This should be seen as dysfunctional with respect to recognizing the useful applications of smoothing and limiting conclusions to those applications.

If all you mean by functional is “people learned how to check boxes in a program and create a smooth curve, while having no clue what smoothing should be used to learn”, then maybe one can say the use was “functional”. In principle, peer-reviewed articles are supposed to communicate something well beyond “Look ma! I got a program from Aslak Grinsted, and I used it to create a smooth curve! Whooo hooo!”

snip

Steve: you’re attributing motives

Re: John A (#81),

Really? I was making a general comment about how peer reviewers wave through papers like this without being able to critically comprehend the methodology used. I remarked that this is a game where everybody loses if the taxpayers realise what’s been going on.

And if attributing motives is to be disallowed, how come you get to make direct statements like the following which imply motives (and impure motives at that)?

I’d call that “attributing motives”. What’s your excuse?

Re: John A (#84),

I don’t think that I attributed a motive there. You may think that I did, but I did not do so categorically, as you did. However, I’ll reconsider whether I have, now that you’ve brought it to my attention. And even if I did step somewhat over blog rules, that should not be construed as licence to abandon the blog rule; rather, someone should point it out to me so that I can consider the situation.

Re: Steve McIntyre (#85),

Oh really? So it depends upon whether I categorically stepped into the hidden blog mantrap or whether it was inadvertent, as you did. These transgressions appear to be mortal sins when I make them and only venial when you make them. I wonder if I should check out my heart for purity before making such innocuous comments, lest they get peremptorily snipped by the Saintly One.

It’s rather like the distinction between whether Rahmstorf made a categorical statement about his methodology which was demonstrably untrue and whether he lied about it. Maybe he was unconscious at the time he wrote it and therefore is not culpable for his writings.

It all depends on the state of mind, the intention to deceive and dissemble. Climate science grows more like theology every day.

Neither the method, Singular Spectrum Analysis, nor the terminology, e.g. embedding dimension, is new. To my knowledge SSA has been applied in climate science since the mid-to-late 1980s. At that time it was rather known as Extended Empirical Orthogonal Function (EEOF) analysis – you can google that term and you will get quite a long list of papers. It is essentially a principal component analysis of an augmented field, in which additional ‘dimensions’ are created by lagging the data in time.

This method is most useful in the multivariate case to detect oscillations in large data sets. Many would agree that it does not make much sense to apply it to a single time series and keep just the leading vector – it is indeed very similar to a simple smoothing.

The terminology ’embedding dimensions’ also stems from those first applications of the method, and it is reminiscent of another method to estimate Lyapunov exponents from observed time series of chaotic systems – also augmenting the dimension by creating lagged replicates of the observed series.

So, actually Rahmstorf et al. just used the terminology that had been used for decades. The caveat is that, while in the multivariate case SSA really yields useful insights and the terminology may be justified, it sounds indeed too arrogant for the simple univariate case.

The end-point smoothing is another matter.
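Putting this description together with the filter discussion above, the univariate SSA-trend recipe can be sketched as follows (my reconstruction of the general idea, not ssatrend.m itself; the autocorrelation values, the power-iteration eigensolver and all names are mine): take the leading eigenvector of the m x m Toeplitz lag-autocorrelation matrix and convolve it with its reverse to get the smoothing filter.

```python
def leading_eigvec(A, iters=500):
    """Leading eigenvector of a symmetric positive matrix, by power iteration."""
    v = [1.0] * len(A)
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def conv(a, b):
    """Full linear convolution."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

m = 5
acf = [1.0, 0.9, 0.7, 0.5, 0.3]      # assumed decaying autocorrelations
A = [[acf[abs(i - j)] for j in range(m)] for i in range(m)]  # Toeplitz
E = leading_eigvec(A)                # all entries positive for this matrix
raw = conv(E, E[::-1])               # filter of length 2m - 1 = 9
s = sum(raw)
tfilt = [w / s for w in raw]
print([round(w, 3) for w in tfilt])  # symmetric, single-peaked weights
```

When the leading eigenvector has nearly equal coefficients, as the thread shows it does for trending series, these weights are nearly triangular, which is the whole of the “joke”.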

Re: “Remarkable progress:”

A giant leap for Mann-kind, but a small step for McIntyre?

A summary chronology:

(1) According to the first sentence in the caption to Figure 3 of the Copenhagen Synthesis Report, as released on 18 June,

the Figure showed “Changes in global average surface air temperature (smoothed over 11 years) relative to 1990.”

(2) On 25 June, Jean S queried Stefan Rahmstorf at RealClimate about the “smoothed over 11 years”, asking “Did you change the filter length from M=11 to M=14 in the temperature graph (Figure 3)?”

(3) On 30 June or thereabouts, Dr Rahmstorf replied “Almost correct: we chose M=15. In hindsight the averaging period of 11 years that we used in the 2007 Science paper was too short to determine a robust climate trend…”

(4) On 3 July or thereabouts, the first sentence of the caption to the Figure was amended so as to read: “Changes in global average surface air temperature (smoothed over 15 years) (corrected from 11 in the first version of this report) relative to 1990.”

This could be taken to mean that the smoothing in Figure 3 shown in the first version of the Copenhagen Report was over 11 years, but that this had now been corrected to 15 years. But in fact, it is only the caption that has been corrected, not the Figure itself.

(5) In the light of the postings of 4 July at #79 and #88 above, it now seems that the caption should be further revised to say that the smoothing is over 29 years.

Re: Ian Castles (#93), I guess the caption will stay… and we have a new definition what “smoothed over 15 years” means. Dr. Rahmstorf explains:

[self snip]

Re: Jean S (#168),

Rahmstorf: they look the same to “non-experts”.

Re: Jean S (#169),

R also states at Realclimate

Does the 15 year running average response, as shown in 169, have a half-width of 15 years? It looks to me that it has a width of 15 years and a half-width of 7.5 years. Does the term “width” have a specialist meaning so that the impulse response shown has a “half-width” of 15 years?

Re: Jean S (#168),

An interesting defence – Rahmstorf is now trying to argue that this thing really is a M-year filter – arguing that the term “x year filter” is ambiguous and that his terminology can be justified.

Unfortunately neither Rahm nor his reader cite any authority or precedent for their usage. There is an interesting discussion of filters in the Appendix to chapter 3 of IPCC AR4 – I’m sure that Rahm is familiar with this.

In addition, the term “20-year filter” is specifically used in AR4 chapter 6 Box 6.4. I have not inconsiderable experience with the data sets in AR4 chapter 6 Box 6.4 and can operationally reproduce what IPCC describes as a “20-year filter”.

Re: Steve McIntyre (#181),

I have just posted this at RC:

Stefan, the caption is incorrect. It says ‘smoothed over 15 years’. There is no possible ambiguity of interpretation of this. 29 years are involved in your smoothing, so it should say ‘smoothed over 29 years’.

Re27: I do not remember any correspondence with Rahmstorf regarding ssatrend & padding – I think he may have had the code through a 3rd party, though I might just have lost the emails. I pointed out the similarity to the simple Bartlett filter to e.g. Nicolas Nierenberg (and that is how it ended up on this blog, I suppose). Here’s the quote from the email I sent to Nicolas: “In the end the SSATREND is almost equivalent to applying a bartlett (or triangle) FIR filter”

Re28: I am about to leave for the field (Wednesday) so I am too busy to find an example at the moment. But try with something like the sunspot series – I have not tried it myself so I am not sure what it would give.

I’d just like to address a couple of points about the Moore et al 2005 article and the origins of the ssatrend code that may give your readers a clearer understanding of why we wrote what we did.

Re 90

We have sent out the ssatrend.m file to anyone who wanted it if they contacted us by email. All my papers are available as PDFs from my website. I strongly believe in making data and code available when the results are published – sometimes we work on data that others control who may not, unfortunately, want it distributed.

Not sure who you mean by “Team members” – we are not in Mann’s or Rahmstorf’s groups, and have never worked with any of their close associates. And if they use our code, they have not got it directly from us either, as far as we remember; indeed they may well have written a version themselves – it’s not, as you correctly point out, very difficult.

Eos articles of the type we published are peer-reviewed by 2 anonymous referees. The quality of the refereeing is not something we as authors have any control over.

Our motivation in publishing the short Eos article was to highlight the enormous recent progress in time series analysis for the wider geophysics community that reads Eos – it’s delivered to all AGU members. Many of the readers have no statistics knowledge beyond what they picked up in university years ago. In the first paragraph we say:

This present article highlights several new approaches that are easy to use and that may be of general interest.

We were not suggesting we had any great breakthroughs – if we had we would not be writing them in Eos. The next sentence that followed the quote in the article here was:

Ghil et al. [2002] presented a general review of several advanced statistical methods with a solid theoretical foundation.

We were showing some methods that we think are better than those regularly used – a lot of the paper was actually about wavelet coherence and the website where code could be downloaded.

Our ssatrend code is unsupported and it’s not on the website, but we think it’s handy and has a better basis than an arbitrary low-pass filter. SSA itself is an established method and we invented no terminology in our paper. One nice thing about the SSA nonlinear trend is that it gives an estimate of the errors for the trend component as well as the line itself. This is very handy when the data have time-varying errors, as you can see in the fig 2 you reproduced.

re 90:

Of course, as you and others here show, it’s very easy to find a reason why any smoothing could be criticized near the data boundaries – some of the various choices available were shown by Mann (2004) (in GRL). But unless you only want to fit by eye or stop the smoothing curve well before the end of the data, you must make some choice. To my eyes our method gives a nicer representation than that used by Mann, and certainly nicer than polynomial smoothing.

Re: john moore (#97),

John Moore stated above:

I requested the data for Macias Fauria (Moore, Grinsted) et al. This is a reconstruction using Finnish tree rings and Lomonosovfonna (Svalbard) isotope data. Macias Fauria refused to provide the critical Lomonosovfonna (Svalbard) isotope data:

He added:

I also note that Moore included a caveat that they provided data except when they didn’t provide data:

John Moore

The difficulty is that Rahmstorf made conclusions about the agreement between the smoothed curve and data in the region of the end points. (In his conclusions, the “end region” spans almost the entire region corresponding to the projections he was and is still testing using this method.)

R provided no uncertainty associated with the smoothed values away from the end regions, and certainly none in the end regions.

Everyone agrees that if you feel you must draw your smooth line to the end points, then one “must” do something. But given the highly tentative nature of the behavior of the endpoints, and the fact that the analyst can adjust the appearance by arbitrarily selecting parameters, it’s clear that one ought to be extremely cautious when making conclusions based on the endpoints of the smoothed graph. All in all, with respect to testing projections (particularly near end points) it would be wiser to simply avoid using this smoothing to do much more than make pretty graphs.

If the climate community does not understand this endpoint issue, it would be wiser if they cultivated the habit of distinguishing the region further than M/2 from the boundaries from the region closer than M/2 to the boundaries. (Hadley now does this by changing their solid line to a dashed line.) That way, even if they believe they “must” do something at the end points, they would at least remind themselves to be cautious about making conclusions based on the appearance of data in those end regions.

Dr Moore, thank you for your comments.

I’ll take you at your word that you made the code available to anyone who asked and defer discussion of the availability of the code.

Let me also defer responding to you on the EOS publication. However I note that the use of “novel” statistical routines (i.e. ones that are not well investigated in standard statistical literature) for important applied results is a practice that is regularly criticized at this blog. I’d be interested in chatting with you on this topic if you have some time, but, assuming that there are constraints on your attention, I’d like to try to reconcile a few relatively empirical points.

In the thread, I observed that the first eigenvector in all relevant (and perhaps all) cases is going to have relatively equal weights. As a result, I deduced that the convolution of the first eigenvector will result in a (virtually) triangular filter of length 2M-1 (I note that this point had not been pointed out in Rahmstorf et al 2007, nor did Rahmstorf seem aware of the point in prior online discussion at realclimate and landshape.org. Subsequent to my making this observation a couple of days ago, Aslak Grinsted commented that he generally included such a warning when he sends out copies of the software, though it now seems that Rahmstorf may have got the software through a third party without such a warning.)
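Steve’s deduction above is easy to check numerically. Here is a small sketch (Python/NumPy for convenience, written from scratch for illustration – it is not ssatrend.m, and the trending series is a made-up toy): build the M×M lag-covariance matrix with M=11, take the leading eigenvector, and self-convolve it. The eigenvector weights come out nearly equal, and the implied filter is a length-21, near-triangular one:

```python
import numpy as np

# Illustrative sketch (not ssatrend.m): for a trending series, the leading
# eigenvector of the M x M lag-covariance matrix has near-equal weights,
# so its self-convolution is a ~triangular filter of length 2M-1.
M = 11
N = 200
t = np.arange(N, dtype=float)
rng = np.random.default_rng(0)
x = 0.01 * t + 0.05 * rng.standard_normal(N)   # toy trend plus small noise
x = x - x.mean()

# Trajectory matrix of lagged windows, and its lag-covariance matrix
X = np.array([x[i:i + M] for i in range(N - M + 1)])
C = X.T @ X / len(X)

w, V = np.linalg.eigh(C)                        # eigenvalues in ascending order
e1 = V[:, -1]                                   # leading eigenvector
e1 = e1 * np.sign(e1.sum())                     # fix the arbitrary sign

f = np.convolve(e1, e1[::-1])                   # the implied FIR filter
f = f / f.sum()

tri = np.bartlett(2 * M + 1)[1:-1]              # exact triangle, length 2M-1
tri = tri / tri.sum()

print(len(f))                                   # 21, i.e. 2M-1
print(np.abs(f - tri).max())                    # small: nearly triangular
```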

Do you agree that, in the case at hand, ssatrend is virtually identical to a triangular filter of length 2M-1? I request that you confirm the length as well, since that is an issue for the Copenhagen Synthesis Report and Rahmstorf et al 2007.

If we can settle this question, doubtless we can agree on many other points.

Re: 100. Yes, you are correct – usually, as Aslak Grinsted [#26] says, the nonlinear trend filter is close to triangular, and is very similar to the result of low-pass filtering with a period of 2M-1.

You asked why we did not point this out in the original paper – the answer is we were not aware (in 2005) that it was so commonly essentially a triangular filter. Normally we use many SSA components, not just the lowest-frequency component, as it’s often the quasi-periodic oscillations that we have been more interested in than the trend when investigating climate mechanisms. We did, if you examine the fig 2 caption in Moore et al., 2005, show how similar it was to low-pass filtering with cut-off at 2M.

While I appreciate your confirmation of the main point, your language here remains inexact. The filter is not just “very similar” to the result of low-pass filtering with a filter of length 2M-1; it is identical to one (up to the padding period). It is “very similar” (virtually identical) to a simple triangular filter.

I acknowledge your point that you were unaware of this in 2005. However, regardless of whether you were aware of this or not, it implies the incorrectness of your claim that your method was “new” as compared to low-pass filtering.

That sort of thing happens, but I think that you should have published a Correction when you became aware of this. Similarly, when you became aware of the virtual identity of the method to a simple triangular filter of length 2M-1, you should have included this information in a correction notice. It’s nice that Grinsted attached a warning when he sent out the program by email, but obviously the program fell into unwary hands such as Rahmstorf’s.

In addition, now that you’re aware of the debate and the point, I think that you have an obligation to communicate to the publishers of the Copenhagen Synthesis Report that their caption remains incorrect through Rahmstorf’s continued misunderstanding of the method. I gather that they have corrected the caption to “Changes in global average surface air temperature (smoothed over 15 years) relative to 1990″, but since it is derived with a triangular filter of length 2M-1 (not M), due to Rahmstorf’s misunderstanding of the algorithm, I think that you have an obligation to notify the authors, now that you’re aware of the matter. Similarly, Rahmstorf et al 2007 had an effective filter length of 21 years, not 11.

Re 102. I just looked at the synthesis report and I am not sure that they have even used the method we are talking about – the caption just says 15 year smoothing was used. The Eos paper was not referenced at all in the report. If you are so sure that he used the SSA method with M=11, why don’t you contact the publishers yourself? I certainly hope that if they have used M=11 that they change the caption to 21 year smoothing.

Sorry but I cannot agree that the SSA method is identical to low-pass filtering with all data (and I have not done an analysis of the data in the synthesis report myself); Aslak Grinsted seems to think that for some time series it looks different. So the language I used in #101 is as far as I am willing to go.

This also impacts on whether a correction in Eos should have been made or not – the only thing you seem to think needs correcting is the claim that the method is “new” – you say it is identical to simple low pass filtering. However, the SSA method used is different from low pass filtering, and there is nothing so far as I know that means it is actually identical in all cases – though I accept it could be in practice. I don’t think that means there should be a correction, since the method is not incorrect, and it produces other things – such as errors for the trend line – which are useful. As far as I know low pass filtering is not used on the errors (of course I could be wrong).

Unlike you, we have simply used the method when we want to produce a trend; we slowly became aware of how often it looked like low pass filtering – it was not a moment of discovery.

Of course, if you think the article needs correcting, then I think the onus is on you, as the person who claims this discovery, to write the article for Eos – or wherever you like – arguing that the method is identical to something else and hence not new, or more importantly that the method is wrong or the original article was misleading. We supply a fair warning with the code we send out now, and anyone can see from the original figures how close to low pass filtering it is with appropriate cutoffs and windows. There is also the good chance that Rahmstorf or co-workers wrote their own code rather than using our ssatrend.m. I simply have no idea, as we have never communicated on this topic with them.

I don’t know what else I can do on this thread – I am leaving for summer holidays later today, but I hope that my input has been in some way helpful.

Re: john moore (#104),

thanks for your comments and enjoy your holiday.

You say:

I did not claim a “discovery” – I reported what I observed without any assertion of priority. Both you and Aslak have already asserted priority to your awareness of the problem and say that you have warned users to whom you sent copies. It’s your obligation to report the problem in the original outlet upon becoming aware of it; I have no obligation to do so. If, as you say, you have been aware of the problem for some time, feel free to report the problem without fear of complaint from me as to competitive priority.

However I did spot the problem very quickly once I got a copy of the code and, in my opinion, this rather proves my point about the weakness of the original reviewing. If I, as a mere “amateur”, as I’m often described by realclimatescientists, could identify the problem so quickly, I guess that a “professional” would identify the problem even faster. 🙂

Re: john moore (#104),

one other question – in ssatrend, you convolve the first eigenvector to obtain your linear filter. I didn’t see any mention of this convolution operation in Ghil et al 2002 and it doesn’t arise from first-principles PCA. What is the authority for convolving the first eigenvector as an SSA operation?

Aslak Grinsted stated above:

I have received an email offline from a reader who obtained the code from Grinsted within the past year without receiving such a warning. This does not prove that warnings are not “usually” attached but does show that the warning is not “always” attached.

It also shows that sometimes attaching the warning in emails is not an adequate substitute for publishing a proper notice.

sheesh talk about not making friends

My field is mechanical engineering, not statistics, but this discussion is both interesting and informative. Thank you all.

P.S. What, exactly, is an eigenvector, in terms of this thread?

Rahmstorf says he changed from the 11 year filter to a 15 year filter because 11 years was “too short to determine a robust climate trend.” Now that he knows the original filter length was actually 21 years — 6 years longer than he thought he needed to change it to for a robust trend — will he return to the original filter?

Re: MJW (#115),

Actually, there has been a lot of misunderstanding on this thread and elsewhere on the width of the filter. What Rahmstorf says in the 2007 paper is that the “embedding period” is 11 years. In this 2007 paper he has shifted to 15 years (he compares a range of figures), which he calls the “embedding dimension” (SSA terminology). Now the embedding dimension is the dimension (number of rows) of the autocorrelation matrix. Presuming embedding period means the same, which I’m sure it does, then his paper was indeed saying that the filter width was 21 years, and people just didn’t read it carefully.

Re: Nick Stokes (#118), So when Rahmstorf said,

he was dividing 20 by 2 and getting 5?

Re: MJW (#119), Yes, it looks like R was getting “embedding period” and filter width mixed up in that blog response on Niche Modelling. Maybe he is confused about the difference. But the description in the paper was correct.

Re: Nick Stokes (#122), The description of the paper may have been correct, but Rahmstorf’s understanding of the description was incorrect. You say,

Given that Rahmstorf said the filter half-width was 5, and that the averaging period was 11 years, I’m not suggesting he believed the filter width was 11 years, I’m saying he believed it was 11 years. And because he believed it was 11 years, and therefore believed the new filter’s width was 15 years, my original question stands: Now that he knows the original filter width was actually 6 years longer than he thought the new filter’s width was, does he believe it was long enough to “determine a robust climate trend”?

Re: MJW (#123), Note that in the sentence you quoted I referred to R et al. There were seven authors, and since the method seems to be correctly described in the paper, someone else may have been responsible for this part.

Re: Nick Stokes (#126),

But Rahmstorf was the lead author, and he was the one who claimed a different filter length was justified because 11 years wasn’t long enough to establish a robust climate trend.

Re: Nick Stokes (#118),

Here we find one of those who, I presume, didn’t read the paper carefully:

Try to spin that, too.

Re: Jean S (#120),

Yes, in that RC response he used the words carelessly in that sentence. In the previous sentence, he correctly speaks of M=15, so it is clear what he means.

Incidentally, I don’t include you among the careless readers. You simply pointed out that an 11-year embedding period implies a 21-year filter width. I was responding to a post clearly suggesting that R et al had believed their filter width was 11 years.

Re: Nick Stokes (#118),

Nick, you say:

In an earlier comment, I asked Dr Moore (without a reply) for a citation within the SSA literature for the step in ssatrend.m in which there is a convolution of the first eigenvector, resulting in the filter going from M to 2M-1 in length. The first eigenvector has length of only M (say 11), and a smoothed reconstruction from SVD first principles would end up with more or less a moving average rather than a triangular filter. (First-principles SVD-type analysis being something that I’m not unfamiliar with. 🙂 )

There is definitely no mention of a convolution operation in Moore et al 2005.

There is no explicit mention of the widening of the filter in Ghil et al 2002. It looks like equation (11) of Ghil et al 2002 http://www.atmos.ucla.edu/~kayo/data/publication/ssa_revgeophys02.pdf may imply such a widening – I haven’t parsed it yet. If so, the effect is definitely not articulated and Rahmstorf seems to have had no comprehension of the point (if indeed this is the consequence of equation 11 which I’ll investigate over the next day or so.)
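Whatever equation (11) turns out to imply, the widening itself is elementary convolution algebra: convolving two length-M vectors always yields a length 2M-1 vector, and for an equal-weight vector the result is an exact triangle. A minimal check (Python, purely for illustration; the equal-weight “eigenvector” here is an idealization, not output from ssatrend.m):

```python
import numpy as np

M = 11
e = np.ones(M) / np.sqrt(M)     # idealized equal-weight, unit-norm eigenvector
f = np.convolve(e, e[::-1])     # self-convolution, as in ssatrend.m

print(len(f))                   # 2*M - 1 = 21
# The result is an exact triangle: weights (1, 2, ..., 11, ..., 2, 1)/11
tri = np.minimum(np.arange(1, 2 * M), np.arange(2 * M - 1, 0, -1)) / M
print(np.allclose(f, tri))      # True
```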

Re: Steve McIntyre (#128),

Vautard et al 1992 (cited by Ghil et al 2002 – the coauthors are merely reordered) stated:

They solve this by linear regression. This might be where the convolution comes in.

Re: Steve McIntyre (#128), Steve, The embedding dimension M is, as you say, the dimension of the truncated lag-covariance Toeplitz matrix used. The first eigenvector is then of dimension M. This is set out in the Wiki entry, which closely follows this book by Elsner and Tsonis.

Like you (and I think Jean S) I then inferred from the code that the actual filter was formed by convolving the principal eigenvector with itself, so that it would be of length 2M-1. What the Wiki entry says is that the principal component is formed by convolving the time series with just the eigenvector. It’s again following the book by Elsner and Tsonis, but what E&T go on to say (and Wiki doesn’t seem to) is that to recover the phase information, you have to convolve again with the eigenvector. I presume that the convolutions commute, thus justifying the self-convolution of the eigenvector that appears in the code.
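The presumption that the convolutions commute can be checked numerically: by associativity of convolution, filtering the series with the eigenvector and then again with the reversed eigenvector (the phase-recovery step) equals a single pass with the self-convolved, length 2M-1 filter. A sketch (Python; a random length-M vector stands in for the eigenvector, and “valid”-mode convolutions keep the comparison away from the data boundaries):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 11
x = rng.standard_normal(300)       # toy series
e = rng.standard_normal(M)         # stand-in for the leading eigenvector

# Two passes: project the series onto e, then convolve back with e
two_pass = np.convolve(np.convolve(x, e[::-1], mode="valid"), e, mode="valid")

# One pass with the self-convolved (length 2M-1) filter, as in the code
one_pass = np.convolve(x, np.convolve(e, e[::-1]), mode="valid")

print(np.allclose(two_pass, one_pass))   # True: the two routes agree
```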

Coming back to R et al 2007, I’m assuming, as I said, that by embedding period they mean embedding dimension.

BTW it’s rather late here down under, so I hope I’ve got that right.

Re: Nick Stokes (#131), Apologies to Wikipedia. They do go on to show the second convolution (of X with A – last equation with k=1 only).

Re #104, John Moore.

“I just looked at the synthesis report and I am not sure that they have even used the method we are talking about – the caption just says 15 year smoothing was used. The Eos paper was not referenced at all in the report.”

The Copenhagen Synthesis Report cites Rahmstorf et al (2007) as the source of Figure 3 on p. 9 (see Reference 3 on p. 37). Rahmstorf et al (2007) in turn says that “All trends are nonlinear trend lines and are computed with an embedding period of 11 years and a minimum roughness criteria at the end.” The source cited for this statement is “J.C. Moore, A. Grinsted, S. Jevrejeva, Eos 86, 226 (2005).”

In response to Willis Eschenbach’s post #192 of 23 June on the “A Warning from Copenhagen” post on the RealClimate website, Stefan Rahmstorf replied, in part:

“This is shown in detail in our Science paper of 2007, the results of which are shown and updated in the Copenhagen Synthesis Report. You could have looked it all up in the report. -stefan]”

For the subsequent history, see my #93 above.

Nick, your attempts to defend Rahmstorf’s incompetence are becoming ridiculous. There is, as you say, ‘a lot of misunderstanding’, but this originates entirely from Rahmstorf. His mistake was pointed out by David Stockwell in April 2008, but he has repeated the same mistake on his own blog and now in the Copenhagen report. In the paper you cite he says “We checked that this analysis is robust within a wide range of embedding periods (i.e., smoothing)” without explaining the difference.

Re: PaulM (#124), I am not concerned to defend Rahmstorf’s personal understanding of time series analysis. It does seem, in blog responses, to be shaky. And clearly the caption to Fig 3 in the Copenhagen report is inaccurate. My concern is that on other blogs the discussion has gone on to suggest that the method used in Rahmstorf et al 2007 of smoothing over the whole data range is bogus. And I don’t believe that is so. It is, as far as I can tell, correctly described there, and is a method with an honorable statistical history, as my references show. It is well known that making inferences about a time series at the end of the range has difficulties, and these lead to high sensitivity to new data points. That reflects the fact that the effectiveness of smoothing tapers to zero at the end.

Many have said that the sensitivity should have been indicated on the graph, and I agree that this would be desirable. But it isn’t so clear how to do it informatively, and I haven’t seen that problem solved elsewhere.

Incidentally, R was not an author of the Copenhagen report, although he may have supplied the information they used.

Re: Nick Stokes (#125),

Nick, you continue (deliberately, I fear) to miss the key issue with the Rahmstorf analysis. The analysis is used to justify the claim that: “The data now available raise concerns that the climate system … may be responding more quickly than climate models indicate.” Irrespective of whether the padding or end point smoothing is appropriate, do you think this analysis can reasonably support that particular claim (which is after all the key finding of the Science paper)?

Re: verm (#130), What you have (or had) is a plot which shows a smoothed curve apparently creeping upwards. The pattern is known to be fragile, but is arguably the best estimator. What you make of that then really depends on what precautionary principle you apply. One view, richly represented here, says that getting it wrong would be very costly, so to be on the safe side you should probably omit the doubtful region completely. The alternative view, represented sparsely here but richly at Copenhagen, says that AGW is dangerous, and you should use that best estimator to see where we’re going, even knowing that it might change a lot (up or down) with new data.

I see both sides of that. I just believe that if people want to take the second view, they should make sure that they have the maths right. And as far as I can see, they have.

Re: Nick Stokes (#135),

Nick, With respect, you are now just spinning. It is R et al who made the claim that “the data now available raise concerns that the climate system … may be responding more quickly than climate models indicate.” In order to make that claim, the authors need to show evidence. Given the uncertainties and error bars associated with the smoothing approach, it seems clear that this cannot be said with any degree of statistical (or any other) significance. You, of course, understand this but refuse to concede it. For R et al not to mention these uncertainties is beyond sloppy, particularly seeing they include a discussion on potential reasons why the “data” exceeds the model projections.

But apart from that, it’s great.

Nick

Who suggested smoothing over the whole data range is bogus? Most people are specifically discussing smoothing at the endpoints. R’s conclusions were based on comparing the endpoints of smoothed observations to projections, so it’s the end points that are the issue. R’s notion of what one can learn by comparing the smoothed endpoints of smoothed data to projections is bogus. The reason he promulgated bogus interpretations appears to be that he may not understand the thought processes and assumptions underlying the filter output.

I don’t know why you think suggesting that smoothing has some uses in situations totally distinct from those used by Rahmstorf helps Rahmstorf out.

This is a trivial problem that anyone with even an iota of creativity can solve after thinking for 30 seconds. Hadley figured out how to do this last year when they discovered, to their apparent consternation, that their endpoints were turning down. They very quickly realized they can draw a solid curve up to the final N/2 points and then show a dashed curve afterwards. One could also demark the end region by placing a vertical line at N/2 years from the end of the graph. One could change the colors of the trace. One could show uncertainty intervals computed based on the statistical models used to guess the future values used to smooth data in the end point region (these uncertainty intervals would explode near the end points, but that’s the benefit).

Re: lucia (#132), Lucia, by “whole data range” I wanted to emphasise up to and including the endpoints. Yes, smoothing on the interior hasn’t been challenged.

And yes, you can avoid trouble by dashing the end region, or even omitting it. But this is a rough way of indicating the gradual loss of smoothing as you approach the end. If you want to give a quantitative measure, you have to find some way of showing a kind of growing error region.

Hi Steve et al.,

I just checked this thread as I have a long wait for a plane. To answer the question on convolution:

I am not sure off-hand if the convolution approach is referenced in the standard SSA literature; I have a vague memory of it somewhere – I think it’s not our invention, but perhaps I am wrong. We found it simply gives the same results as the standard method in the region where there are no data boundary effects – at least when we tested it.

The advantage of the convolution method is that it allows the padding to be done, and then the Monte Carlo calculation of the confidence limits on the nonlinear trend to be carried out. If we only found this convolution method empirically, then of course it may not be mathematically correct – maybe Aslak can comment some more on this, but he is almost leaving for Greenland.

Regarding the lack of warning about the m-file looking like a triangular filter: we will put it into the help text for the file so it cannot be forgotten.

Ian Castles #117

About the data in the CPH synthesis, it’s not clear to me that the method is the same as the one we used, especially as the caption says

Changes in global average surface air temperature (smoothed over 15 years) (corrected from 11 in the first version of this report) relative to 1990. The blue line represents data from Hadley Center (UK Meteorological Office); the red line is GISS (NASA Goddard Institute for Space Studies, USA) data. The broken lines are projections from the IPCC Third Assessment Report, with the shading indicating the uncertainties around the projections3 (data from 2007 and 2008 added by Rahmstorf, S.).

I read this as saying the projections come from reference 3. The data themselves are different from R’s 2007 paper, so why not the smoothing method as well? I can’t understand why he would change from 11 to 15 years in the caption if he is still using M=11. If you strongly think it’s actually 21 years then please do go ahead and contact the editors. I have not followed this argument at all, so if I wrote I would simply be quoting your own findings anyway.

Lucia #98

Thanks for this – I agree with you, and will try to ensure we do it like that in future.

Re: john moore (#134),

The data are only different from R’s 2007 in the sense that they have updated the HadCRUT and GISS series. I don’t think that justifies changing the filter length.

He was not. He was using M=15, as you can read here, and he was doing it with your (Aslak’s) code, as you can read from here (see comments). You can easily check with the code that M definitely refers to your “embedding dimension”, and therefore the actual filter length is 2M-1. So R et al (2007) smoothed over 21 years, and now the Copenhagen Report has smoothing over 29 years. I don’t see there is any doubt about that, despite people like Nick trying to obfuscate.

Re: Jean S (#138), Jean, I don’t know why you say I am trying to obfuscate. I completely agree with you. R et al 2007 say that they are using an embedding dimension (M) of 11. That means smoothing over 21 years. And yes, M=15 means smoothing over 29 years.

I’ve mentioned in #118 another Rahmstorf 2007 paper in which he discusses various M values, and settles on M=15. I’m not as conspiratorial as you, and my guess is that he used the programs from this paper for the Copenhagen calcs. Incidentally, that 2007 paper was not rigged to improve the look of the 2008 data.

Re: Jean S (#138), Thanks for the excellent animation.

Re: john moore (#134),

ssatrend.m does MC after the padding… Let’s see.. How that affects the confidence limits..

Re100: I am sure you are right that I have not pointed it out to that particular individual. But as I said, the similarity does not always hold. I think I have been very helpful to individuals with questions about the code. But it cannot be my job to control how people wield the tool, especially considering that I have never released it publicly and only provided the code upon request. Now it is in the public domain here, out of my reach. I just hope that people won’t apply it blindly.

It seems that Nick Stokes (#131) has some references that tie the SSA reconstruction to convolution, so I will not look for any. But you can verify that the FIR filter approach works easily like this (Matlab syntax, using the Breitenberger SSA functions):

x=randn(100,1);
x=x-mean(x); %!!! (the series must be zero-mean)
[E,V,A,R,C]=ssa(x,10); % eigenvectors E, reconstructions R, embedding dimension 10
t=(1:length(x))';
ix=5; %pick a component to compare
% standard reconstruction vs. the self-convolved-eigenvector FIR filter
plot(t,R(:,ix),t,filter2(conv(E(:,ix),flipud(E(:,ix)))/length(E),x));

This code compares the normal SSA reconstruction method with the FIR filter approach. You can see that it gives the same result except near the boundaries.
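For readers without Matlab, the same check can be sketched in Python/NumPy. The SSA reconstruction below is my own minimal implementation of the standard diagonal-averaging formula, not a port of the Breitenberger functions, so treat it as illustrative only. In the interior, away from the data boundaries, the reconstruction and the FIR route agree to machine precision:

```python
import numpy as np

def ssa_rc(x, M, comp=0):
    """Reconstruct one SSA component by diagonal averaging (minimal sketch)."""
    N = len(x)
    X = np.array([x[i:i + M] for i in range(N - M + 1)])  # trajectory matrix
    w, V = np.linalg.eigh(X.T @ X / len(X))               # lag-covariance eigen-decomp
    e = V[:, np.argsort(w)[::-1][comp]]                   # chosen eigenvector
    Y = np.outer(X @ e, e)                                # rank-1 piece: PC x EOF
    rc, cnt = np.zeros(N), np.zeros(N)
    for i in range(len(Y)):                               # diagonal averaging
        rc[i:i + M] += Y[i]
        cnt[i:i + M] += 1
    return rc / cnt, e

rng = np.random.default_rng(1)
x = rng.standard_normal(100)
x -= x.mean()                                             # the %!!! step above
M = 10
rc, e = ssa_rc(x, M, comp=4)                              # ix=5 in Matlab, 0-based here

# FIR route: convolve with the self-convolved eigenvector, as in the Matlab code
fir = np.convolve(x, np.convolve(e, e[::-1]) / M, mode="same")

interior = slice(M, len(x) - M)                           # away from boundaries
print(np.abs(rc - fir)[interior].max())                   # ~machine precision
```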

Re28: normally distributed white noise very often gives trend components with FIR filters that are quite different from triangular. That is because the input series has only a very weak (or non-existent) trend. For example, I just got a trend component with this EOF (from the above Matlab code):

[0.0087 -0.0383 -0.1936 -0.3607 -0.5752 -0.5752 -0.3607 -0.1936 -0.0383 0.0087]

That results in something close to Gaussian after convolution.

I believe that it would also happen for series where the background trend is dominated by oscillatory components (which is why I suggested the sunspot series). It is in cases where there are clear oscillatory components that I think SSA may give better trends than simpler filters, because the oscillatory behaviour has already been accounted for by the leading components.

Re: Aslak Grinsted (#137),

Yes, that’s true, with the emphasis on “white”. White Gaussian noise is the “most stationary” series there can be. This does not seem to be the case for temperature series or any of the climatic series under discussion.

Nick

So what if it’s rough? The alternative – doing nothing, making it appear that the end points do not include a huge amount of uncertainty, and then having authors draw conclusions that would only make sense if the endpoints contained little uncertainty – is foolish.

R. made no attempt to indicate the difficulties associated with the endpoints and, worse, advanced conclusions that would only make any sense if these difficulties did not exist. This suggests that he was unaware of these problems. Possibly the reviewers and the other 6 authors were also unaware of these things. If people do not point out the serious problems associated with these slipshod practices that lead to bone-headed conclusions appearing in Science, the practices are bound to continue.

Re: lucia (#142),

Here, here!

And thanks again to Jean S.

Re: Jean S (#138),

for the cool animated graphic.

I know our host doesn’t like to speculate on motive, but perhaps he’ll allow this comment from distinguished physicist

snip-

Steve – nope. I’d have preferred that your comment above was: Hear, hear!!

Interesting discussion today. Thanks to the various commenters, especially to John Moore and Aslak Grinsted for standing up for themselves.

An underlying issue that Lucia’s emphasized – and I agree with this – is that smoothing seems counterproductive in the Rahmstorf situation – something that Moore and Grinsted are obviously not responsible for. Matt Briggs comments once again on smoothing. There are some useful comments from applied practitioners also condemning the use of smooths in data analysis. See http://wmbriggs.com/blog/?p=735

IMO this is really the issue in Nick Stokes’ attempts to rescue Rahmstorf’s approach – for the purpose of testing whether observations are in the “upper part” of IPCC models, conventional statistical practice requires that your test be based on observations to date. Filtering – of any form – serves no purpose in such data analysis.

Re Nick Stokes, #144,

Admittedly, keeping the smoother centered in some sense can be desirable. But then the simplest and most transparent way to do this as the endpoint is approached is just to shorten the filter width as necessary to fit. Thus in your example from #94, with a triangular filter width of 7 (extending 3 on each side of the center), the full-length filter would only be used up to the time corresponding to y4. At y3’s date, the filter would be shortened to width 5, then to 3 at y2’s date. At the endpoint y1 there would simply be end-pegging as in Mann’s method, but without the pretense of a “minimal roughness” 7-period filter. In fact, the smoothing would transparently just be shortened until there was no smoothing at all at the end.

In Sara’s example (#143), the smoother for the date of y2 would have weights on y5 – y1 of (0 0 1 2 1)/4, rather than (1 2 2 2 9)/16 as with Mann’s method. Then y2 rather than y1 would dominate the smoother for y2’s date.

The binomial filters favored by IPCC4 admittedly have some nice features relative to a rectangular or triangular filter. These could easily be shortened as above as the end point is approached (with appropriate warning like a dashed line).

Since shortening the filter width like this leads to no smoothing at all at the end point, it would be appropriate to omit the final “smoothed” point altogether, and just to show the raw data there.
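Hu’s shrink-the-filter suggestion is simple enough to sketch in code. The function below is illustrative only (the name and the toy series are mine, not from any of the programs discussed): a centred triangular smoother whose half-width shrinks near the ends instead of padding.

```python
# A sketch of the shrink-to-fit end treatment proposed in #148.
# Illustrative only; not code from any of the papers under discussion.

def shortened_triangle_smooth(y, half_width=3):
    """Centred triangular smoother whose half-width shrinks to fit the
    available data near both ends (no padding, no extrapolation)."""
    n = len(y)
    out = []
    for i in range(n):
        h = min(half_width, i, n - 1 - i)   # shrink near either end
        # triangular weights: h+1 at the centre, falling to 1 at the edges
        w = [h + 1 - abs(k) for k in range((-h), h + 1)]
        s = sum(w)
        out.append(sum(wk * y[i + k] for k, wk in zip(range(-h, h + 1), w)) / s)
    return out

y = [0.0, 1.0, 2.0, 3.0, 4.0]          # y5 ... y1, oldest to newest
sm = shortened_triangle_smooth(y, half_width=3)
# At the second-to-last point the weights on (y5, y4, y3, y2, y1) are
# (0, 0, 1, 2, 1)/4, exactly as in comment #148; at the endpoint itself
# the "smoothed" value is just the raw value.
print(sm[-2], sm[-1])   # → 3.0 4.0
```

On a straight line the output equals the input at every point, including the ends, so this end treatment adds nothing that is not already in the data.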

Re: Hu McCulloch (#148), Hu, I don’t think that works. “Minimum roughness” is a reasonable request. You’re stuck with the “pinned” endpoint jerking with each new data point. If you try to make the next point relatively unresponsive to the new data, you just create a steep derivative at the end. Not smooth.

Re: Nick Stokes (#155),

Nick, can you explain to me what purpose is served in using smoothed data for assessing whether observations are in the upper or lower end of models? See, for example, the discussion at Matt Briggs’ blog.

You seem to have studiously avoided this issue.

Re: Steve McIntyre (#156),

Yes, I have tried to stick to a particular mathematical argument. That’s never easy to make, and you can’t do it while chasing down all the side issues that arise (eg whether Rahmstorf should have been more forthcoming on some occasion etc).

But the issue you describe is the logical next step. Let me review what R et al 2007 actually said. They showed some plots of smoothed data (sea level, temp etc), compared them with some kind of mean of model scenario calculations, and showed in the background a shaded area representing the range of variation corresponding to factors like different sensitivity assumptions. And they said that the smoothed data was, by various measures of location and trend, running higher than the central measure of the model runs. In the case of sea level, they noted the curve had reached the upper limit of variation of some parameter range.

Now none of these statements are explicitly statistical. The model runs were not claimed to be random samples from a population. Perhaps they should have been, but it’s hard to see how you could set that up. The smoothed data curve, of course, still contains some dependence on the data noise. You could probably model that and calculate a probability that, had the noise gone differently, the data (or trends) would no longer appear to lie above the model measures. They didn’t do that (although they did give an error range on a sea level trend, presumably from a linear regression calc).

The post of Matt Briggs is on a somewhat different topic. He’s saying that you should not smooth before trying to identify correlation. And certainly you should be careful there, but identifying correlation is not what R et al are doing.

You asked earlier #145 whether smoothing is appropriate at all in comparing the data to, say, a model curve. Well, you could analyse some sort of statistics about whether the raw data lay predominantly above the curve. That’s hard to do by eye and varies through time. The smoothed curve generally represents a weighted mean. It’s easier to visualise whether that mean lies above the model curve. Of course, you should really measure that relative to the s.e. of that mean subject to the noise. And that s.e. rises at the end of the data range.
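That last point is easy to quantify. For white noise of variance sigma^2, a weighted mean with weights w_i has standard error sigma * sqrt(sum of w_i^2), so as the effective filter shortens toward the endpoint the s.e. grows. A sketch (the half-widths are illustrative, not Rahmstorf's):

```python
# The s.e. of a triangular weighted mean under white noise grows as the
# filter shortens toward the endpoint.  Half-widths here are illustrative.
import math

def triangle_weights(h):
    """Normalised triangular weights of half-width h (2h+1 points)."""
    w = [h + 1 - abs(k) for k in range(-h, h + 1)]
    s = sum(w)
    return [x / s for x in w]

for h in (5, 3, 1, 0):                 # filter shrinking toward the endpoint
    w = triangle_weights(h)
    # Var(sum w_i x_i) = sigma^2 * sum(w_i^2) for white noise
    se_factor = math.sqrt(sum(x * x for x in w))
    print(f"half-width {h}: s.e. = {se_factor:.3f} * sigma")
```

At the very endpoint (half-width 0) the “smoothed” value is a single observation and the s.e. is the full noise standard deviation.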

Re: Nick Stokes (#159),

One fundamental problem here is that noise is never well defined in these articles. Nevertheless, ‘some dependence’ is a clear understatement. Check

http://www.climateaudit.org/?p=6473#comment-348240 ,

amplitudes in http://www.climateaudit.org/?p=6473#comment-348240 figures ,

http://www.climateaudit.org/?p=1681#comment-114704 , etc etc

Statistical problems turned into hand-waving. And then any conclusion they want can be derived using fancy words. That’s what we have here.

Re: Nick Stokes (#159),

As a question from a layman, you mention that the smoothed output is a weighted average. Rahmstorf provides some commentary that the weighted average has to be of some length to adequately eliminate noise.

How is the adequate filter length determined?

Assume a sine wave. Now take a point that is just beyond the peak of the wave and apply a smoother to that point. The smoothed output will continue to rise beyond the peak of the original sine wave. Can we then assert that this extended rise is a valid property of the original sine wave and not the errors of the smoother?
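TAG’s thought experiment can be checked numerically. Per the discussion above, the net effect of the method is a triangular filter with padding from a linear extrapolation of the last M points; the sketch below applies that to a sine sampled to just past its peak. The sample spacing, M, and the function itself are mine for illustration, not Rahmstorf’s actual code.

```python
# Endpoint of a (2M-1)-point triangular smoother when the series is
# padded with a least-squares line fitted to the last M points --
# a sketch of the end treatment discussed in this thread.
import math

def smooth_end_with_linear_padding(y, M):
    """Smoothed value at the last real data point, after linear-
    extrapolation padding of M-1 points beyond the end."""
    n = len(y)
    # least-squares line through the last M points
    xs = list(range(n - M, n))
    xbar = sum(xs) / M
    ybar = sum(y[-M:]) / M
    slope = sum((x - xbar) * (v - ybar) for x, v in zip(xs, y[-M:])) / \
            sum((x - xbar) ** 2 for x in xs)
    pad = [ybar + slope * (x - xbar) for x in range(n, n + M - 1)]
    z = y + pad
    # centred triangular weights of half-width M-1 at the last real point
    h = M - 1
    w = [h + 1 - abs(k) for k in range(-h, h + 1)]
    s = sum(w)
    i = n - 1
    return sum(wk * z[i + k] for k, wk in zip(range(-h, h + 1), w)) / s

# sine sampled up to just past its peak at t = pi/2
t = [k * 0.1 for k in range(17)]       # last point t = 1.6, just past pi/2
y = [math.sin(x) for x in t]
end = smooth_end_with_linear_padding(y, M=5)
print(y[-1], end)   # the padded smooth at the end exceeds the raw value
```

The smoothed endpoint comes out above 1, i.e. above every value the sine can ever take, so the continued rise is purely an artifact of the padding.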

Re: TAG (#163),

“noise” – signal-noise is a metaphor and I wish climate scientists would stop using it. We’re not talking about radio receivers and static. There is no reason to think that fluctuations around a smooth are a result of measurement error.

Financial analysts also work with noisy series and would never call a smoothed series a “signal”. They’d end up unemployed.

Re: Steve McIntyre (#164),

Bravo, Steve! The au courant affectation of expertise in signal and system analysis is made laughable by all-too-frequent miscomprehension of the substance behind the terminology.

Re: TAG (#163),

According to my interpretation, the continued rise in a smoothed sine wave beyond the peak of the original sine wave is a property of the delay of the filter. Going to an MA (FIR) filter of longer length will increase the delay. This is what Rahmstorf did in his two papers. He is merely pushing the observed rise in the late 20th century out in time with his smoothed curve.

Consider the sine wave again with Mannian padding applied to a point just beyond its peak. The assumption in padding is that this padded curve is an approximation to the original sine wave. Is this curve with its doubly reflected padding a good approximation?

Re: TAG (#163),

If the last value is above the LS trend (*), select short filter (blue). If the last value is below the trend, select long filter (red).

There are, of course, non-ad-hoc methods, but then you’d need to define what you mean by signal and noise. And, as you get papers published in Science without doing so, why bother.

(*) If you run out of M’s, or the trend doesn’t look good, just take a longer set of data.

Re: UC (#171),

You really should give a keyboard alert next time you post!

8)

Re: UC (#171),

I asked the same question in the RealClimate thread with words to the effect “What criteria were use to select the averaging period?” (R used the term “averaging period”).

The question did not make it past the moderation filter.

Re: TAG (#163), Re: Tom Gray (#174),

There’s a discussion in this Rahmstorf 2007 paper on embedding dimension, focussing here on sea levels. He tries a range from 2 to 30 years. This seems to be the paper where he settled on 15 years, at least for that data set.

TAG, your point about the continued rise of the sine wave after smoothing is exactly the issue of trying to achieve zero lag that I was describing above. You’re describing the process of phase shift; MRC and Grinsted type filters are designed to prevent that.

Re: Nick Stokes (#175),

Does this zero phase shift property hold at the end where padding takes place? It has been pointed out that in that case the filter changes (with the effect of the padding) to become an FIR one.

Re: TAG (#178), Yes, that is the point of the discussion here (#144) and nearby. By construction, the filters are centred and leave a line function unchanged.

Re: Nick Stokes (#175),

And for the temperature series, he settled on 11 years (as is evident from the update in this post) until 2008 values came in, and then resettled on 15 years. Next year will probably shed some light on what is the next M value these nomads are going to settle on. Maybe R is indeed following the procedure outlined by UC in #171.

Wow, I think I am beginning to understand. If I understand this correctly, this smoothing is done over a period of time. The closer you get to the present, the less data you have so you have to fill in the blanks. All these methods you mention are efforts to extrapolate the value of what is essentially an average to years for which you have increasingly less data. Is that correct? If so I understand the “made-up” question. But doesn’t the value of the extrapolation depend greatly on the regularity/predictability of the previous data?

Steve: I don’t think that it’s helpful to try to jump to conclusions. There’s more to be said on both sides of this.

“Thanks to the various commenters” – Thanks from me also. It really is a rare treat for all the principals (-R) to chip in, and to have both sides of a controversy represented. Good job.

This is only tangentially — pun unintended — related, but in regard to Mann’s Minimum Roughness criteria, with cubic spline interpolation this criterion is called “free-end” or “natural” since (to a first-order approximation) it’s the shape assumed by a real drafting spline when the slope at the end points is unconstrained. Interestingly, it’s well known that this generally results in poorly shaped curves.

I’m fully prepared to be called out as ignorant on this, but it seems akin to pixel interpolation for images near borders. As you approach the edge of an image, the pixels from which to interpolate exist more and more on one side and less on the other. Eventually, you reach a point at which all pixels are on one side, with none at the edge. In image manipulation, you could sample the image in grid cells and come up with a percentage range of pixels that fall nearest to the average pixel color in a grid cell, or you could do it just for adjacent grid cells. But the size of those cells could dramatically change the percentage of related pixels. Another alternative would be to pad with the last x columns of pixels. However, no matter what method you choose, the results may or may not provide any usable pixels at all. It would depend entirely on the image and the method selected. This seems to me similar to the issue being discussed: while you can pad the endpoints, the values derived may or may not provide any value or insight into trends. They may simply be artifacts of the method and parameters chosen.

Am I way off base here?

Jonathan–

You are not way off. Also, the point people keep explaining to Nick is that R is being criticized because he used his smoothed edge results to test a theory.

Of course people want to smooth data (or images) for all sorts of reasons. There are many ways to do it, none entirely correct. All methods are similar in some regards, but differ according to some arbitrary assumptions by the smoother. Few would criticize smoothing if the goal is nothing more than to create a pretty picture or curve.

However, if you are trying to test a theory against data (as R did), testing against the appearance of smoothed data (or an image) whose edges are undeniably influenced by the analyst’s (or photographer’s) preferences about how the edges “should” look is not appropriate.

Probably off topic….but not sure where to post this….

http://www.overcomingbias.com/2009/07/simple-forecast-models-best.html

Re: dunbrokin (#162),

Thanks, this was a really interesting link which led to this paper

and the interesting note about the “new” theta method being the best performer in the M3 forecasting tests, which led to the other papers and even the “Handbook of Economic forecasting” (see Google books) which found that the theta method was no better and possibly worse than simple exponential smoothing for forecasting. Exponential smoothing of course puts more weight on recent observations. Quite the opposite in fact from any smoothing that likes to ignore recent data. So one might tentatively conclude (by dint of the evidence of multiple testing) that R’s method is the worst forecasting method he could possibly have chosen.

Nick

Probably? Of course you can calculate that. Read Aslak and John Moore’s discussion about their paper. But despite access to the paper and the program, R. decided not to look into uncertainty.

So, the result is: in a paper appearing in Science, the 7 authors did not make the slightest attempt to consider how uncertainty in the data or their method affected their conclusions about surface temperatures. Worse, had they spent the 30 seconds required to look at the uncertainty, they would have discovered the uncertainty was enormous.

This is why they are being criticized. People here

a) are noting that Rahmstorf appears to have not given one iota of thought to how the assumptions underlying his smoothing method and the uncertainty associated with “weather noise” affected his conclusions,

b) are showing he continues to appear to misunderstand the uncertainty in his method,

c) are pointing out that the uncertainty associated with his method has been widely known and is obvious to anyone with more than three active brain cells in their head and

d) are showing that, unsurprisingly, ‘weather’ happened, and Rahmstorf’s conclusions were shown to be unsupported not only now but were looking tenuous as soon as 2007 data trickled in.

Now, add to this the fact that

a) he changed the arbitrarily selected value of “M” in a way that masked the fact that observations do not fall in the upper bound of the TAR even using the error-prone method he used to ‘analyze’ things in the first place,

b) somehow overlooked the typo in the caption.

c) did not call out the fact that he’d changed his parameter choice M anywhere in the synthesis report and

d) says all sorts of things that perpetuate the notion that he does not have clue one about the difficulties associated with interpreting the meaning of smoothed curves near endpoints.

Of course he is going to be criticized further. This will happen even if, as you point out, smoothing is something that can be applied in some circumstances somewhere (but which all resources warn should be done with care).

The fact is: R did not simply write a blog post or Washington Post editorial doing something boneheaded with little impact. His analysis and diagnosis of the pace of climate change appeared in a peer-reviewed paper in Science; the analysis and conclusions were every bit as foolish as George Will’s “analysis”. If George Will deserved criticism for misinterpreting what we can learn from diagnosing a trend by connecting two end points, surely Rahmstorf deserves criticism for misusing what one can diagnose from M=11 smoothing with MRC endpoint treatment and reporting ill-founded conclusions in a paper published in Science.

RE 169. Somebody take away R’s shovel.

I’ve been reading these postings with fascination. As a simple ex-“scientist” the relationship between data and models has always interested me. What are climate scientists with R’s background really trying to do? Are they fitting data to models or models to data? Despite all their well-known shortcomings, discussed here and elsewhere at considerable length, it seems to me that the /data/ must be in the driving seat, and that the very best estimate of the current state of the climate is provided by the latest data, whatever the in-fashion model or smoothing technique purports to show. Surely, any type of “smoothing” is a model, and must be less informative than the actual data values. As Steve points out, climate “noise” is nothing of the sort. It is simply the typical difference (by some measure, simple or complicated) of the discrepancy between what an arbitrary model “predicts” or fits and what was actually observed.

Can anyone supply a convincing reason or hypothesis as to why climate data on time scales of months or years or decades should follow any arbitrary “smooth” pattern – apart that is from gross seasonal changes in non-equatorial regions (which can be readily approximated from abundant historical data from the same source). As I’ve written recently in CA it seems to me strange that so many who are interested in climate change seem to be compelled to hypothesise an underlying linear model when the data evidence on many time scales shows little inclination to take kindly to linear forcing. My investigations lead me to think that a linear model may well suffice (i.e. be adequate for some practical purpose) over carefully chosen time ranges of arbitrary length – provided that one is prepared to be flexible regarding acceptable excursions – and that these periods of random duration frequently seem to be associated with stable climate, not with constant change. After these stable periods a very abrupt change is usually encountered, preceding another quasi stable regime.

Am I the only person who has noticed these things? Come on! Give me some support or otherwise provide a clear refutation of my notions.

Robin, Bromsgrove UK

Re: Robinedwards (#176), note that the team have in effect defined climate as whatever is left after an arbitrary 30-year smoothing, thus avoiding the necessity for any discussion of why climate might have such properties.

No, according to the caption in Fig 2 of the 2007 sea level paper, he used M=15 then for both sea-level and temperature. I’m not sure what we’re supposed to make of your reference to Tamino’s update, dated Mar 25, 2008. It doesn’t seem to say what M value is used. I don’t know when the graph was made, but it can’t be including much 2008 data.

Re: Nick Stokes (#183),

How is this exchange at Realclimate between JeanS and R to be understood?

Additionally, the statement seems to answer my question about what criteria were used to choose the filter length. What do you think of the technique used here? It seems to be a cut-and-try method which is sensitive to the result produced.

Re: Tom Gray (#186), Well, my first comment is that changing from M=11 to M=15 reduces the 2-sigma error, but not radically. It’s just another point in the tradeoff continuum – smooth too little and the trend uncertainty is high – smooth too much and you may be losing real information.

But you’re inferring that he changed because of the 2008 result. He doesn’t say that – he just says that the 2008 result illustrates the effect of the change. I repeat my observation that he was using an M=15 filter for temperature in a Science paper in 2007.

No Nick Stokes (#183), the M=15 in the Figure 2 of the paper you linked is for “correlating” the rate of sea-level rise and temperature, not comparing temperature and projections. I’m sure you can distinguish those two.

Being so clever as you are, you surely noticed that the update included 2007 data, and was identical to the M=11 (2007) graph in #138. Hence R was using M=11 in the 2007 paper and in the update last year, and only changed to M=15 this year. Do you need a more thorough explanation, or is it clear now?

Nick, you are a [self snip] fellow.

BTW, Nick, if the thing you are trying so desperately to argue were true, it would only mean that the nomad-R first settled on 15 years, then quickly changed to 11 years, and has now settled back to the original camp. The paper you cited has this “time stamp”

while the R et al paper has this:

So pick your horses more carefully.

Re: Jean S (#185),

Received for publication doesn’t tell you when the work was done. A paper with 7 authors from 6 institutions takes a lot longer to get out the door. Each has their own internal review processes (one of which I am very familiar with).

Re: Nick Stokes (#189),

You are such a spin-master-troll-clown that I give up. This is not even funny anymore. I pity scientists relying on your advice. I’ll ignore all your comments here and elsewhere from now on.

Re: RomanM (#187),

yes, it solves the “problem”, but only partly so. In the original paper they state:

This is the statement the whole speculation in the paper is based on. Now, in the updated Copenhagen report graph (M=15), the 2006 values for GISS and HadCRU are 0.29 and 0.27, respectively, and the smooth is in the upper middle part of the projections. Of course, that’s better than M=11, which gives the inconvenient visual changes, with values (for 2006) being 0.28 and 0.26.

Continuing to use M=11 for the 2008 update clearly posed problems for Rahmstorf. Since the smoothing method uses a regression on the last M points to create the padding, the points used would be from the years 1998 (one of the highest on record) to 2008, producing a considerably flatter set of padding points. This inconvenience is solved by increasing M to 15, thereby adding the four lower-temperature years prior to 1998 back into the mix.

Rahmstorf is definitely aware of the visual changes produced by varying M. In the sea level paper (which clearly indicates the ad hoc nature of his choices for M) he observes:

In this case, for the reasons I have given above, increasing M will in fact increase the slope so that the leveling off is “progressively lost”.

So Nick Stokes claims that R. is just using good statistical practice? Give me a break!
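The mechanism RomanM describes is easy to demonstrate on made-up numbers (synthetic data below, not the actual GISS or HadCRU series): put a 1998-style spike exactly 11 points from the end of a gently trending series and compare the regression slopes over the last 11 and last 15 points.

```python
# Toy illustration of how the choice of M changes the padding slope when
# a spike year sits at the edge of the regression window.  Synthetic data.

def ls_slope(y):
    """Ordinary least-squares slope of y against 0..len(y)-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

trend = [0.01 * i for i in range(30)]   # gentle warming trend
trend[19] += 0.2                        # a large spike, 11 points from the end

slope11 = ls_slope(trend[-11:])         # window starts at the spike
slope15 = ls_slope(trend[-15:])         # four pre-spike years dilute it
print(slope11, slope15)                 # slope11 is the flatter of the two
```

With the spike anchoring the start of the 11-point window the padding slope nearly vanishes, while the 15-point window dilutes the spike and recovers most of the underlying trend.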

I have just looked again at the version on the http://climatecongress.ku.dk site. The caption to Fig 3 says

I think that ‘og’ is a word in Danish (meaning and), so maybe it doesn’t come up on their Danish spellchecker.

I wonder how long it will take them to correct the typo and to correct the smoothing period to 29 years.

Re: PaulM (#191),

I bet that it won’t be changed to include the number 29. If it is changed, it will be something like “smoothed with an embedding period of 15 years”.

Here is the policy on filters used in the relevant IPCC AR4 chapter on observations: “Observations: Surface and Atmospheric Climate Change”. See Appendix A.

They state:

These are sensible policies: “easily understood and transparent”; “as few weighting coefficients as possible in order to minimize end effects”.

They consistently used one filter to “remove fluctuations on less than decadal time scales” stated in unequivocal terms as follows:

While we’ve discussed the change in accounting policy (for smoothing) between Rahmstorf et al 2007 and the Copenhagen Synthesis Report, we haven’t so far discussed the change from IPCC accounting policy to Rahmstorf et al 2007. R07 provide a citation for their smoothing policy, but their citation is to an article in what AGU describes as their “newspaper, not a research journal”. Although R07 comments directly on IPCC AR4, they do not discuss the effect of their change in smoothing policy. Fast forward to the Copenhagen Synthesis Report, where the effect is exacerbated.

SteveM

Translation: He means the integral under a triangle with 30 years width is equal to the integral under a rectangle with 15 years width. Betcha anything.

Here’s why I think this: Fluid mechanics texts often discuss “characteristic time scales”, “characteristic velocity scales”, etc. Scaling arguments are sometimes used to explain certain physical processes. Sometimes (but not always), the “characteristic time scale” of any process is then estimated by taking the integral of the autocorrelation for that thing. This ‘characteristic time scale’ is called “the integral time scale”. So, one “characteristic time scale” for the 30-year triangular filter is the same as one “characteristic time scale” for the 15-year rectangular filter.

Whether or not this makes it correct to call the 30-year triangular filter a 15-year filter on a “pragmatic” level is, nevertheless, highly debatable. There are, after all, two big problems:

A) If the equality of the characteristic time scales of the filters is the basis for the argument Rahmstorf is making, then we could also point out that these filters cannot be characterized by one time scale. The other frequently used time scale derived from autocorrelations is the “Taylor microscale”. This is obtained by taking the second derivative of the autocorrelation at t=0 and manipulating modestly. If you compare the shapes of the filters, you will notice that the triangular filter has a discontinuous derivative at t=0, while the second derivative of the rectangular filter is zero at t=0. So, in this event, one characteristic time scale is infinite, while the other is zero.

B) It’s not at all clear that this notion of characteristic time scales based on integrals or derivatives of the functional form of the filter is anything a non-expert would use to base their understanding of the filter performance on. I suspect most non-experts would prefer to be told that R used a triangular filter with a span of 29 years. From this information, they would recognize that triangular filters weight nearby years more than later years and that the results of the smoothing will be affected by data 29 years away from the smoothed point.
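The equal-areas reading of the claim checks out: with equal peak heights, a triangle of base 30 and a rectangle of width 15 enclose the same area. A two-line check (the particular discretization is mine):

```python
# Equal peak heights: area under a base-30 triangle equals area under a
# width-15 rectangle, so the two filters share one "integral time scale".

M = 15
tri = [1 - abs(k) / M for k in range(-M + 1, M)]   # 29 points, peak 1
rect = [1.0] * M                                   # 15 points, height 1
print(sum(tri), sum(rect))                         # both areas equal 15
```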

Re: lucia (#195),

The weights of all properly scaled low-pass filters sum (or integrate) to unity, thereby preserving the zero-frequency DC component (mean) of the signal and maintaining the amplitude of oscillations in the pass-band. Scales that you describe pertain to characterizing the signal in terms of its autocorrelation function–an entirely separate matter. The practical choice of cut-off (half-power) frequency in the filter response is based upon other considerations, pertaining to what signal frequencies are of interest. Those scales need not match.

The complex-valued (amplitude and phase) frequency response of any discrete FIR filter is given by the z-transform of its weights and is independent of signal properties.

Now, when the real-valued weights are symmetric about a central point in a low-pass filter of total span N = 2M+1, the z-transform shows a linear phase-lag. This corresponds to a simple delay of M data points, which can be removed by time-shifting the filter output accordingly. When plotted that way, a symmetric loss of M output points at the beginning and end of the record is unavoidable. In such “centered” treatment, the output at any time-point depends only upon the input values within +/-M of it.

Autocorrelation of the signal does come into play, however, when ad hoc ideas of extending the available record are entertained. If the signal is narrow-band, then Wiener prediction filters might produce reliable results. Alas, the temperature record is invariably broad-band and no reliable extrapolation is mathematically possible.

For the benefit of the larger discussion, it should be pointed out that preserving the mean or linear trend (which, in the limit, is the mean of the first-difference series) is a trivial problem. What is by no means trivial is demonstrating that the time-series of interest actually contains a true linear trend. The very fact that different stretches of climatic records produce highly different estimates of regressional slope militates against such an assumption.

Re: lucia (#195), First let me say that I agree with SteveM. The filter is what one would normally think of as a 29-point filter, and R should just say so.

There is a defence of R’s view here in terms of time constants, which also relates to Jorge’s query about the frequency domain behaviour.

But there is another argument that R could put. The primary filter (the eigenvector) that SSA calculates is indeed a filter of length M, very like Jean’s rectangular filter above (#169). It then applies it twice. That is equivalent to applying the filter convolved with itself, which would be a (2M-1) pt Bartlett-style filter. And in fact, the ssatrend program does it this way, although it needn’t have.

But again, I prefer Steve’s simpler version.
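Nick’s convolution identity is easy to verify directly: applying a length-M boxcar twice equals a single pass of the boxcar convolved with itself, a (2M-1)-point Bartlett (triangular) filter. A minimal check (M=5 is arbitrary):

```python
# A length-M boxcar convolved with itself is a (2M-1)-point
# triangular (Bartlett) filter.

def convolve(a, b):
    """Full discrete convolution of two weight sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

M = 5
box = [1 / M] * M
bartlett = convolve(box, box)
print(len(bartlett))        # → 9, i.e. 2M - 1 points
print(bartlett)             # triangular weights rising to 1/M at the centre
```

This matches the post’s finding that the net effect of ssatrend is simply a triangular filter (plus the linear-extrapolation padding at the ends).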

#195 Lucia, I think you meant 15 years away from the smoothed point.

#137, Aslak and Steve

Is the ssatrend code on this site posted with the author’s permissions?

Re: MikeN (#199),

The R version of ssatrend is my script transliterating ssatrend.m and I see no reason why anyone’s “permission” is needed.

The ssatrend.m script is posted to prove that my version matches the matlab version and that the net result is merely a triangular filter.

At the present time, it doesn’t matter a speck whether the script remains up. I’ve demonstrated that the net effect of the method is simply a triangle filter plus linear extrapolation padding. The point’s been conceded by the authors.

Aslak has commented above and had the opportunity to request that the script be taken down. He didn’t.

If he did make such a request, I would take the code down according to his request and request that he place the code online at his own website. If he refused to do so, I would ask both the publishers of Moore et al 2005 (EOS) and Rahmstorf et al 2007 (Science) to require Aslak to make the code available. My guess is that Aslak doesn’t want to draw any more attention to the matter and decided not to ask that the code be taken down.

MikeN–

Yes. Sorry.

I have two questions for you clever chaps and chapesses. Could anyone show a graph of amplitude vs frequency for the 11- and 15-year versions of the triangular filter to compare with the filter Steve quoted above (#194)? I am an electronics type who would find it much easier to look at the 3dB cutoff frequency and the attenuation in dB/octave. This presentation would also show if there are any ripples in the passband or stopband.

I’m not sure if my second question is well posed! How does one handle the fact that these types of filter seem to have time-varying coefficients in the run-up to the end of the series? I can’t get my head round what this means in terms of a frequency response.

I am no expert but I thought that in principle one could turn a time series into a set of sine waves and then change the amplitude (and phase) of each one by the filter characteristic at that frequency and then add the modified sines back up to produce the smoothed times series. This procedure would seem to be thwarted if the filter characteristic is a function of time as well as frequency.

This difficulty does not arise with a causal filter, or a symmetric acausal type away from the end point. It is just one more reason why I distrust the output for the last M padded years.

Re: Jorge (#203), Jorge, you’ll find the response functions of the rectangular and Bartlett windows here and here.

On your second question: yes, if the filter is not time-invariant, you can’t quote a single frequency response. Think of an MRC-type filter: at the leading point, there is no attenuation of a sinusoid at all; M points back, you see the fully attenuated (but unlagged) version of the sinusoid. In between, the attenuation gradually increases.
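Jorge's first question — the interior amplitude response of the (2M−1)-weight triangular filter — is easy to compute directly. A sketch in Python rather than Matlab (my own illustration, not code from ssatrend.m):

```python
import numpy as np

def triangle_response(M, nfreq=512):
    """Amplitude response of a centered (2*M - 1)-weight triangular filter.

    Returns (freq, amp): freq in cycles per sample (0 up to the Nyquist
    frequency 0.5) and the corresponding amplitude response.  Plot
    20*np.log10(amp) against freq for a dB view.
    """
    w = np.concatenate([np.arange(1, M + 1), np.arange(M - 1, 0, -1)]).astype(float)
    w /= w.sum()
    freq = np.linspace(0.0, 0.5, nfreq)
    lags = np.arange(len(w)) - (M - 1)  # center the filter at lag zero
    # Frequency response: sum of weights times complex exponentials at each lag
    amp = np.abs(np.exp(-2j * np.pi * np.outer(freq, lags)) @ w)
    return freq, amp
```

Since the triangular filter is the convolution of two M-point boxcars, this response is the squared Dirichlet kernel, (sin(πfM)/(M sin(πf)))²: the first null sits at f = 1/M cycles per sample, the first sidelobe is roughly 26 dB down, and the sidelobes fall off at about 12 dB per octave.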

Re: Jorge (#203),

Your idea of turning a time series into a sum of sinusoids and then manipulating them to effect filtering is one that appeals to many novices. They call it “FFT filtering.” It’s perfectly sensible if one is dealing with strictly periodic signals and the record length is some integral multiple of the fundamental period. In that case, the “end-point problem” disappears entirely, because the extrapolation is simply the repetition of the record from its beginning. A strictly periodic signal can be predicted indefinitely from its Fourier coefficients.

The insurmountable problem with non-periodic signals is that they contain components that are not harmonics of any record length. An entire continuum of frequencies is involved. When any Fourier-series expansion of a finite record is made (FFTs are simply fast algorithms for doing so) the off-harmonic components are smeared into a wide band of frequencies. Manipulating the coefficients is by no means equivalent to digital filtering, which always has a continuous frequency response that can be established precisely. That’s why so much research has been devoted in communications engineering to developing filters with various optimal response properties. The short lengths of most climatic time series (the proxy data almost invariably are not proper time series) present practical challenges, however, in applying these filters. But the real obstacle to effective filtering often proves to be myopic concentration on the time domain.
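The leakage point above is easy to demonstrate numerically. A quick illustrative sketch (mine): a sinusoid that completes a whole number of cycles in the record concentrates in a single FFT bin, while an off-harmonic one smears across many bins:

```python
import numpy as np

n = 128
t = np.arange(n)
harmonic = np.sin(2 * np.pi * 8.0 / n * t)      # exactly 8 cycles in the record
off_harmonic = np.sin(2 * np.pi * 8.5 / n * t)  # 8.5 cycles: not a harmonic

def energy_share(x, k, width=1):
    """Fraction of spectral energy within `width` bins of FFT bin k."""
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[max(k - width, 0):k + width + 1].sum() / p.sum()

concentrated = energy_share(harmonic, 8)   # essentially all energy in bin 8
smeared = energy_share(off_harmonic, 8)    # substantial energy leaks elsewhere
```

With a rectangular window the off-harmonic energy falls off only slowly away from the true frequency, which is why manipulating the FFT coefficients of a short record is not equivalent to applying a filter with a known continuous response.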

Re: John S. (#218),

I think I do understand the difference between a Fourier series when the signal is repetitive and when it is a one-off. The single period one has to have an infinite number of sine waves that add up to zero outside the interval as well as producing the required shape within it. It is then the truncation to a finite number that leads to all sorts of artifacts that have to be handled.

I have seen FFTs for temperature series and they rarely seem to show clear frequency components. Things like solar or ENSO cycles seem to have variable amplitude and frequency modulation that distribute the signal across the frequency spectrum.

It does seem clear that most smoothing techniques invented historically by statisticians are really low pass filters when viewed from the frequency perspective. As many people have said, in the communications game one mostly knows which part of the frequency spectrum contains the signal of interest and so the filters can be designed accordingly. What I think some people don’t seem to appreciate is that all these smoothing methods do have an implied frequency that separates what they are interested in and what they are not.

It is my impression that the climate community have never really wanted to be too specific about the frequencies they wish to cast out as some kind of noise and those they consider as the longer term climate changes. Anyway, every time someone uses a specific smooth others can uncover the assumption, whether it is stated up front as in the IPCC filter Steve described or buried in Rahmstorfian obfuscations.

Re: Jorge (#221),

How does one separate the “noise” of PDO etc. from the “signal” of AGW using any filter that allows 20-yr, 30-yr variability etc. to pass through it? The decadal filters will remove much of ENSO, but will not touch the low-frequency noise of PDO and other such natural nonstationary climate behaviors.

RE Steve #194 (quoting IPCC4):

In fact, this admittedly clever 13-weight IPCC4 filter is even closer to a 19-weight binomial filter than it is to a 21-weight binomial filter, as shown below:

It’s also not much different, on average, from a 9-weight triangular filter (Rahmstorf’s M = 5):

An 11-weight triangular filter (Rahmstorf’s M = 6) is already unambiguously wider than the 13-weight IPCC filter, except for the insignificant first and last weights. Rahmstorf’s 21-weight filter (M = 11) is therefore at least twice as wide, in effect, as the IPCC4 filter.

Perhaps he and Hansen et al picked 21 weights under the mistaken impression that this would be similar to IPCC3’s 21-weight binomial filter, and therefore to IPCC4’s 13-weight filter.

[Addition: Although the “9-weight” filter hits the axis at -5 and +5, its first and last non-zero weights are at -4 and +4, so it indeed has 9 non-zero weights, and is correctly labeled, despite my brief concern that it was wrong. — HM]

Steve, Aslak’s comment suggests maybe he’s not OK with it.

Based on your quote:

“I’m pleased to report that ssatrend.m and companion files (© Aslak Grinsted) may be downloaded from…”

I assumed you had his original code. Either way, I would think posting it without his permission is not a good idea.

Re: MikeN (#205),

I’ll tell you what are worse ideas: the program itself, and Rahmstorf’s use of the program. Or the use, in the “most important” report since IPCC, of a time series program invented by two Arctic scientists and published in the AGU newspaper. Perhaps you should spend some of your energy on that.

As I said above, keeping this useless program online doesn’t matter a speck to me. The only reason that these folks wouldn’t want it online is to save themselves embarrassment.

As I said above, I’ll cheerfully take it offline if directly asked. And do what I can to require AGU, Science and the Copenhagen Report to make it public.

Not to beat a dead horse (as opposed to rooting for a dead horse, as Nick Stokes seems to be doing), but Rahmstorf’s claim that he intended to use a filter that averaged over 21 years is contradicted by his own words:

Re: MJW (#206),

Yep.

Saying it’s close to a triangular filter should be good enough. Your line above gives the impression that people are downloading the actual code, and not an R translation.

I make an issue of this since you frequently ask people for code and data. Someone who is willing to give you code, but doesn’t want it made public, won’t give it to you in the future.

Steve: I keep my agreements. If I’d agreed not to make this code public, I wouldn’t have. I got the code without agreeing to keep it confidential. Quite frankly, I’m trying to think whether I’ve ever run into a situation where people have been willing to give data to me but not to make it public. I request things that I expect to be part of the public record as a condition of publishing. I’m not interested in getting co-opted by secret Team handshakes.

Re: MikeN (#209),

Where was this published?

TAG, Steve has some code linked above.

> Especially considering that I have never released it publicly and only provided the code upon request. Now it is in the public domain here, out of my reach. I just hope that people won’t apply it blindly.

I’m not sure that that’s an endorsement of posting the code. You yourself acknowledged the code was copyrighted when you posted it. Were you just mocking Rahmstorf?

Re: MikeN (#214),

Puh-leeze. I’ve gone to great lengths to educate people so that they don’t use this silly program blindly. People such as Rahmstorf and the Copenhagen Synthesis.

I’ve gone to more trouble to educate people than the original authors have. If Moore or Grinsted were seriously worried about people using this code “blindly”, they should have spoken out against Rahmstorf. What did we hear? Silence of the lambs from Moore and Grinsted, and bleating over at realclimate.

Look, you’re making me speak out more harshly against these folks than I would have otherwise. This code has no value.

If it reassures you, I’ll be happy to insert comments in ssatrend.m that since the publication of the article, they’ve learned that the program is merely a triangular filter and that this is a filter with known undesirable properties and that accordingly the program should not be used in scientific studies – that its use should be restricted to portfolio management.

Mathematical operations and ideas are not the subject of copyright protection. While the particular expression of those ideas in the original program might be subject to protection (a matter of particular controversy) – the expression of those same ideas and mathematical operations in a different language are most definitely not. The range of copyright protection is limited, even in the original language, by the fact that to implement a particular mathematical operation may have only one meaningful expression within that language. Thus, there may be no creativity involved in fixing the mathematical expression in the fixed form of a program. No creativity of expression = no copyright.

In short, assertions of copyright with respect to scientific ideas are commonly bogus.

Steve: On other occasions, I’ve asked people not to express their views on copyright law; it’s been amply discussed before. I’m not interested in legal theory on this, as the code here is worthless in any event. I’m not arguing any legal violation.

I also don’t care what comments are in the code, or how people use it.

I just think it is better procedure to ask before publishing someone else’s code(or a translation of it), if there is any reason to think they might object.

Given that Rahmstorf was claiming he wasn’t free to distribute the code, that would give me pause before passing it on.

Ignorance exposed. And denial thereof.

Thanks, Steve.

RE Jorge #203,

See Comment 34 on the new “Rahmstorf Rejects IPCC Procedure” thread. I will add M=15 in the near future per Steve’s request at comment 35. I suggest that discussion of this aspect should move to the newer, shorter thread.

Continued at http://www.climateaudit.org/?p=6533

## 11 Trackbacks

[…] The Secret of the Rahmstorf “Non-Linear Trend Line” […]

[…] at Steve McIntyre’s Climate Audit kindly linked to an old article of mine entitled “Do not smooth series, you hockey […]

[…] The Secret of the Rahmstorf “Non-Linear Trend Line” – Steve McIntyre slams Rahmstorf. […]

[…] The Secret of the Rahmstorf “Non-Linear Trend Line“. […]

[…] was only part of the story. I have decided to tell the rest of the story after reading “The Secret of the Rahmstorf ‘Non-Linear Trend Line’“ at Steve McIntyre’s Climate […]

[…] warming theory, white studies, and so forth require extreme stupidity to fit in, as illustrated by Rahmstorf’s triangular filtering, just as modern art requires the complete lack of good taste or any artistic talent in order to […]

[…] 53. Rahmstorf smoothing-gate (see also here and here) […]

[…] The secret of Rahmstorf’s “non-linear trend line”. […]

[…] 75. Rahmstorf smoothing-gate and here and here. Rahmstorf uses statistical tricks to hide the decline. 76. Revelle-gate Gore attempts to have […]

[…] et.al. (2007) had not been criticised for its incorrect statistical analysis by Stockwell and McIntyre, Garnaut may still be using Rahmstorf’s figure to argue that changes have been underestimated, as […]