There’s really no solid ground for speculation when only the authors know (???) what they did with the data.

No matter how sharp-featured or discontinuous the signal, if it satisfies the Dirichlet conditions, the entire stretch of record can be reproduced over *continuous* time with an *infinite* Fourier series. That reproduction, however, does not extend to other stretches unless the signal itself is strictly periodic. When the signal is sampled at *discrete* time intervals, the Fourier series is necessarily truncated to a *finite* series at the Nyquist frequency. If sampling introduces no aliasing, that reproducibility survives. Otherwise, truncation of the series produces the Gibbs effect *between* the cardinal points at which the underlying signal was sampled. The exactness of reconstruction *at* the cardinal points survives in any event. Any post-sampling filtering of the *discrete* data series can operate only on frequencies at or below Nyquist. With genuine band-pass filtering, there is no need to detrend the data; the filter removes the trend by itself.
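
A minimal NumPy sketch of that truncation effect, using a unit square wave as a hypothetical sharp-featured signal (nothing here comes from any of the papers under discussion): the partial Fourier sums overshoot near the jump by a fixed fraction of the jump height, however many terms are retained.

```python
import numpy as np

def square_partial_sum(t, n_terms):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum_{k=1..n} sin((2k-1)t) / (2k-1)."""
    s = np.zeros_like(t)
    for k in range(1, n_terms + 1):
        s += np.sin((2 * k - 1) * t) / (2 * k - 1)
    return (4.0 / np.pi) * s

# The true signal equals 1 on (0, pi); the partial sums overshoot it
# near the discontinuity at t = 0 by roughly 9% of the jump (the Gibbs
# limit, (2/pi)*Si(pi) ~ 1.179), no matter how many terms are kept.
t = np.linspace(1e-4, np.pi - 1e-4, 50000)
for n in (10, 50, 250):
    print(n, square_partial_sum(t, n).max())
```

The overshoot does not shrink as terms are added; it only narrows in width, which is why it shows up *between* sample points rather than at them.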

I really can’t take any more time to clear up your persistent confusions about the basics of signal analysis/synthesis.

I didn’t understand your argument about cardinal points, but it certainly isn’t true that Gibbs oscillations are manifest only when insufficient Fourier terms are retained. Sharp-featured signals have many harmonics, extending into high frequencies. They remain sharp-featured only if all of these are retained with amplitude and phase unchanged. Any filtering changes this, and the harmonics then become evident as oscillations within the data set.

In the case of these bandpass filters, the artefacts created by the boundary treatment generate oscillations within the pass band. There would have been higher frequencies as well, but they are attenuated by the bandpass.
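
The first point is easy to illustrate (a sketch in plain NumPy, with a brick-wall low-pass standing in for "any filtering"; the step series is invented, not anyone's actual data): once the high harmonics are removed, oscillations appear throughout the record, not just at the jump.

```python
import numpy as np

n = 512
x = np.where(np.arange(n) < n // 2, 0.0, 1.0)  # sharp-featured signal: a step

# Brick-wall low-pass: discard every Fourier coefficient above bin 32.
X = np.fft.rfft(x)
X[33:] = 0.0
y = np.fft.irfft(X, n)

# Away from the edges the original is exactly 0, but the filtered
# series oscillates there: the step's high harmonics are gone, so the
# remaining ones no longer cancel between the jumps.
ripple = np.abs(y[50:200]).max()
print(ripple)
```

The ripple decays with distance from the discontinuity but never vanishes within the record, which matches the point that filtering makes the missing harmonics "evident as oscillations within the data set".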

I don’t think your explanations really fit.

Had BS09 properly detrended the data, the downturn at the end of their bandpassed series shouldn’t be as severe as it is. Furthermore, Gavin has effectively admitted to error in using cyclical padding. Lacking any code from him, nobody really knows what was actually done.

The higher-frequency ripple known as the Gibbs effect is manifest only *between* the cardinal points of a discrete series, when insufficient terms are retained in the Fourier-series expansion of a sharp-featured signal. This is not an issue here. Both wavelet and Fourier analysis provide *exact* results *at* the cardinal points. If anything, I would expect the customarily very smooth, finite-duration wavelets to smooth over discontinuities.
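
That exactness at the cardinal points can be checked numerically (plain NumPy; the spiky series is made up purely for illustration): full DFT synthesis reproduces the samples to machine precision, and trigonometric interpolation *between* the samples still hits every original sample exactly, confining any ringing to the gaps.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)
x[32] += 5.0  # a sharp feature

# Analysis + synthesis with the full set of terms: exact at every
# cardinal point, sharp feature or not.
X = np.fft.fft(x)
x_back = np.fft.ifft(X).real
print(np.abs(x_back - x).max())  # machine precision

# Zero-pad the spectrum to interpolate between the cardinal points.
# The original samples are still recovered exactly; any ripple lives
# only in between them.
up = 8
Xp = np.zeros(64 * up, dtype=complex)
Xp[:33] = X[:33]
Xp[-31:] = X[-31:]
xi = np.fft.ifft(Xp).real * up
print(np.abs(xi[::up] - x).max())  # still machine precision
```
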

Cyclic padding, of course, introduces a discontinuity, while reflection padding makes the derivative discontinuous. Both produce Gibbs effects. In this case they are not so large, because the process (D8 and D7) is at least approximately stationary and only part of the total signal.
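
The two boundary treatments can be compared directly (a toy trending series in NumPy; the numbers are illustrative only): cyclic extension jumps at the seam, while reflection is continuous in value but kinks in slope.

```python
import numpy as np

t = np.arange(100, dtype=float)
x = 0.05 * t + np.sin(2 * np.pi * t / 20)  # hypothetical trending series

cyc = np.concatenate([x, x])        # cyclic (periodic) extension
ref = np.concatenate([x, x[::-1]])  # reflection (mirror) extension

# Cyclic padding: a jump at the seam roughly equal to the trend
# accumulated over the record.
print(abs(cyc[100] - cyc[99]))

# Reflection padding: the value is continuous at the seam...
print(abs(ref[100] - ref[99]))

# ...but the first difference (the slope) flips sign there,
# i.e. the derivative is discontinuous.
print(ref[99] - ref[98], ref[101] - ref[100])
```
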

Good for you. Cleverness is not even required; it’s built into the periodic nature of Fourier series analysis.

Uncritical resort to cyclical padding has become fairly common practice, not only in filtering but in spectral estimation, where the entire record is assumed to repeat indefinitely in FFT-based analyses. When the record is very long and suitable decimation techniques are used, the results can be quite indicative despite such assumptions. That luxury is very seldom available in climate studies. My employer wisely bans all analyses that involve data padding.
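
That periodicity assumption can be made visible with a toy periodogram (plain NumPy; the trend-plus-tone series is invented for illustration): because the FFT treats the record as one period of an endlessly repeating signal, an unremoved trend becomes a sawtooth whose seam jump leaks power into every frequency bin.

```python
import numpy as np

n = 256
t = np.arange(n, dtype=float)
x = 0.01 * t + np.sin(2 * np.pi * 10 * t / n)  # linear trend + one clean tone

# Raw periodogram: the implied periodic extension of the trend has a
# jump at the seam, so power leaks far from the tone's bin (bin 10).
P_raw = np.abs(np.fft.rfft(x)) ** 2

# Detrend first (least-squares line) and the leakage collapses.
coef = np.polyfit(t, x, 1)
P_det = np.abs(np.fft.rfft(x - np.polyval(coef, t))) ** 2

# Power well away from the tone, bins 30-99:
print(P_raw[30:100].sum(), P_det[30:100].sum())
```
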

* Maybe Mark T can shed some expert light on this question.

To this econometrician, this really looks like someone desperately thrashing the data. There is no connection between the original data and the final results.

Of course, that may be the point…

I fear that I must have missed a vital point early in these discussions of smoothing, endpoint treatment and wavelets. I admit immediately that I know /nothing/ about wavelet analysis, so will have to assume that in common with other methods of smoothing it is intended to produce believable (plausible?) estimates of as yet ungathered observations. In other words, it is hoped that the future of the series can be predicted. This demands (I think) that a model must be hypothesised and then tested against known observations so that one can be reasonably convinced that model and observations are mutually “consistent”. Having achieved this the model is projected into the future and its consequences weighed in the balance of politics and science.

What I would really like to know is the provenance of the data that were used to generate the intriguing plots provided by Steve. The cyclic component is truly amazing! Where can they be found, please? To my untutored eye some of the curves appear to be creations of fantasy – though since I cannot see the individual data points I may be fantasising too.

Is anyone really going to place credence in these methods? I would like to see the result of applying them to some well-known climate data. I would suggest trying observations from Anchorage, Fairbanks and other Alaskan sites for the thirty-year period 1946 to 1976, totally ignoring data from the following years, applying the endpoint manipulation methods of the types described by Steve, and using the fitted model to forecast events from mid-1976 onward. My expectation is that a simple model will do a brilliant job on the known data, 1946 to 1976, producing forecasts for 1977 to, say, 1987 with impressively narrow confidence bands. This exercise should be repeated for the 30 years after 1976, using the data that emerged during that period. Comparison of the two fitted models and their relationships to the actual data numbers should provide some talking points.

Underlying all this is my feeling that climate for a given location or region may be essentially resistant to any “rational” attempts to forecast it. A “random” component may in practice outweigh the best-laid schemes of skilled operators in the art of climate forecasting.

Robin

Dear Dr Schmidt

In a recent comment at Lucia’s blog, you stated:

“Note we still don’t have a perfect emulation, so perhaps you guys could agitate for some ‘code freeing’ to help out. ”

As you may be aware, I’ve been trying for some time to achieve a more “perfect emulation” of MBH methodology. While some code was provided in connection with the House Energy and Commerce Committee hearings a few years ago, the code was unfortunately incomplete. It did not include the steps in which the MBH99 confidence intervals were calculated, nor those determining the number of retained principal components for the various tree-ring networks. Existing information is insufficient to permit either to be emulated. Aside from myself, some very able Climate Audit readers (Jean S, UC) have tried hard to figure out these steps and have been frustrated.

I would appreciate it if you would attempt to persuade your associates to provide the relevant code for these steps. I will do what I can to persuade Scafetta to provide a script for their calculations.

Resolving small matters like this can go a long way.

Regards,

Steve McIntyre