Off-center Butterworth Filters

Today I’m doing a first reconnaissance of Butterworth filters, used extensively in Mann et al 2008.

The comments here are notes and thoughts and I hope that some issues can be clarified by readers. As of 12.30 pm today, I think that the main problem that I’ve had pertains to a difference between R and Matlab implementations, but it illustrates an interesting effect.

We’ve had discussions of smoothing from time to time. Just before the release of Mann et al 2008, UC posted an interesting comment on Mannian smoothing here based on an earlier smoothing paper. Within 24 hours, Mann changed his code for lowpassadaptive.m, noting (but not describing) the change here, “minor errors fixed 9/2/08”, failing to cite or acknowledge Climate Audit as a source, although Penn State policies appear to require students to properly acknowledge sources under similar circumstances. Consult the interesting comments by Roman M, UC and others.

That thread was written just before the release of Mann et al 2008. UC observed shortly after its release that lowpassmin.m is used everywhere in Mann et al 2008, something that I’m now trying to come to grips with. My take on the earlier thread was that the issues mostly seemed to involve padding of future values and their potential impacts. This inter-relates to today’s topic.

I ran into serious problems trying to replicate Mann’s smoothing operations in R. Here’s the Dongge series where I encountered problems with Butterworth filters applied to uncentered data (though it’s taken a while to diagnose the problem.) What I observed was the huge displacement at the end of the series (which seems to be “zero-seeking” behavior at the end point.)

If you are so inclined, you can download Mann’s version of this series as follows:

url="http://www.meteo.psu.edu/~mann/supplements/MultiproxyMeans07/data/proxy/allproxy1209/dongge.ppd"
test=read.table(url,colClasses="numeric",skip=3)
dongge=ts(test[,2],start=test[1,1],end=1996);x=dongge
temp=!is.na(x);sum(temp);year=c(time(x))[temp]
x=ts(x[temp],start=min(year)); N=length(x); N #2003

The figure below shows the results of a simple Gaussian filter and an order-10 Butterworth filter with frequency 0.05, with padding according to lowpassmin.m (as shown in the SI) and selection among padding alternatives according to Mann’s procedures. In this example, padding was not an issue, as all the padding alternatives led to negligibly different results. The problem lay entirely with the implementation of the Butterworth filter, perhaps with the filtfilt algorithm. (The results are actually worse than indicated on this graphic as the overshoot on the right goes FAR off the chart.)

Here’s the code for my implementation of the Butterworth filter (using the signal package in R), which has an implementation of filtfilt (which I presume to do the same thing as the Matlab filtfilt, but this may be an issue.)

library(signal)
mannsmooth=dongge
bf=butter(10,.05,"low");bf
M=10
mannextend=array(rep(x,9),dim=c(N,9))
#pad each end with 3 alternatives: mean of first/last M values, reflection, reflect-and-flip
dummy1=cbind(rep(mean(x[1:M]),M),x[(M+1):2],2*x[1]-x[(M+1):2]); dummy1=cbind(dummy1,dummy1,dummy1)
dummy2=array(c(rep(mean(x[(N-M+1):N]),3*M),rep(x[(N-1):(N-M)],3),rep(2*x[N]-x[(N-1):(N-M)],3)),dim=c(M,9))
mannextend=rbind(dummy1,mannextend,dummy2);dim(mannextend) #2023 9
#the 9 columns cover all start/end combinations of the 3 padding alternatives
working=apply(mannextend,2,function(x) filtfilt(bf,x))
test=ts(working[(M+1):(N+M),],start=tsp(x)[1])
fred=apply(test-x,2,function(x) sum(x^2)) #squared deviation from the data for each alternative
index=min((1:9)[fred==min(fred)]);index #pick the closest padding alternative
mannsmooth[temp]=working[(M+1):(N+M),index]

[Note: for the subsequent graphic below, I’ve centered before filtering and then added back the mean.]

I’ve corresponded with UC and Jean S about this over the last couple of days. It seems that using Butterworth filters (and filtfilt) in R on off-center series introduces a problem that is unrelated to end-point padding. UC observes:

This might cause the difference with Matlab. filtfilt.m sets the initial state + does minimum roughness, and centering doesn’t matter. If zeros are padded, centering probably matters.

I’m pretty sure that this is the difference.
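
To see the effect in isolation, here is a minimal sketch on synthetic data (my own toy series, not Mann’s): the same off-center series run through the signal package’s filtfilt directly, and after centering and restoring the mean.

library(signal)
set.seed(1)
bf=butter(10,.05,"low")
y=5+cumsum(rnorm(500,sd=.1)) #toy series with a large non-zero mean
raw=filtfilt(bf,y) #off-center: in R, the ends get pulled toward zero
cen=filtfilt(bf,y-mean(y))+mean(y) #center, filter, add the mean back
cbind(tail(raw),tail(cen)) #compare the two near the end of the series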

Sorting this out has not been entirely easy. In an earlier post, UC noted that the R-manual says:

“In this version, we zero-pad the end of the signal to give the reverse filter time to ramp up to the level at the end of the signal.”

and in another comment on the prior thread, UC observed that the Matlab manual for filtfilt stated:

filtfilt reduces filter startup transients by carefully choosing initial conditions, and by prepending onto the input sequence a short, reflected piece of the input sequence. For best results, make sure the sequence you are filtering has length at least three times the filter order and tapers to zero on both edges.

[2 pm Eastern: Dealing here only with the zero-tapering recommendation, UC specifically reviewed this question and emailed me that his view is that the zero-tapering part of the recommendation doesn't matter given the Matlab implementation of filtfilt and he doesn't understand why it's in the manual. Arggh.]
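
For reference, the “reflected piece” that the Matlab manual alludes to can be written out in R along the following lines (a sketch based on the padding line from filtfilt.m quoted in the comments below; reflect_flip is my label, not Mathworks’):

reflect_flip=function(x,k) { #pad k reflected-and-flipped values at each end
 n=length(x)
 c(2*x[1]-x[(k+1):2], x, 2*x[n]-x[(n-1):(n-k)]) }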

There are some odd comments about filtfilt in the manuals. The manual for the R-package “signal” says of the implementation of filtfilt:

This corrects for phase distortion introduced by a one-pass filter, though it does square the magnitude response in the process. That’s the theory at least. In practice the phase correction is not perfect, and magnitude response is distorted, particularly in the stop band.

In response to an inquiry, Matlab support (worryingly) stated to a CA reader:

The filtfilt function in Signal Processing Toolbox does not implement the initial condition proposed in Gustafsson paper. However, in his paper, Gustafsson pointed out that we used to have a bug in the initial condition we implemented in filtfilt. Therefore, we put Gustafsson’s paper as a reference to the function.

Padding was raised as a problem in the previous thread. UC observed to me today that the padding in lowpassmin as used in Mann et al 2008 is only one-third of the padding used in lowpass.m as reported in Mann’s 2008 smoothing paper! The padding code for lowpass.m in the reconstruction paper is:

fn=frequency*2;
npad=1/fn;

while the corresponding padding in the smoothing paper lowpass.m is three times larger:

fn=frequency*2;
% pad far enough to remove any influence of boundaries of padded series on the interior
npad=3*round(1/fn);

This would be 30 years of padding if the code in Mann et al (GRL 2008) were used, rather than 10 years of padding for the code in Mann et al (PNAS 2008). One would really hope for a more stable software platform.
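
For concreteness, at the f=0.05 smoothing frequency used for the decadal proxies, the two quoted snippets imply (a direct reading of the code, nothing more):

frequency=0.05
fn=frequency*2
c(1/fn, 3*round(1/fn)) #10 years (PNAS version) versus 30 years (GRL version)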

There are some disquieting comments about both Butterworth filters and filtfilt in technical literature and manuals, though I’m not sure that these comments are relevant to the present situation. One caveat here says:

This filter is usually used in frequency domain work. It offers great slope and phase match, but adds lots of overshoot/preshoot errors to a transient or single shot event. A Butterworth filter is good for frequency domain analysis, but bad for time domain data acquisition.

Despite all these caveats, centering the data seems to make the problem disappear, as shown below. Here I first centered the series, then butterworthed it (there was negligible difference between 10 and 30 years of padding), then added back the mean to restore the native centering. Whatever the theoretical concerns over padding and filtfilt, this smooth looks OK for practical purposes.

Does any of this matter? Dunno. I’m not really sure why this smoothing is done before standardization. I guess the moral will be to try the CPS recon without using Mannian smoothing and see if it makes any difference. (Once this bottleneck is removed, we’ll be pretty close to being able to emulate the CPS recon, though the EIV procedure still is quite distant.)

Roman:

Now when filtfilt.m receives this padded result, it adds further padding at each end as well. In the case used by Mann, Butterworth (10, .025), it pads each end with 30 more values which are both reflected and flipped about the ends of the already padded sequence.


64 Comments

  1. Carrick
    Posted Oct 7, 2008 at 9:15 AM | Permalink | Reply

    One of the tricks I use, when time-domain filtering a previously recorded series using the Butterworth method, is I run the series forward in time and then run the first-filtered sequence backwards in time through the same BW filter. This procedure results in a net 0-delay filter (yes there are other ways of doing this, but this is the easiest way to implement), so you are guaranteed no phase distortion.

    I like the BW filter better than the truncated Gaussian because I find it typically produces less ringing when you have a 1/f noise floor (typical in meteorological and physics applications) and handles impulsive signals in noise better as well.

    It isn’t ideal for very short time sequences, I agree, though there are ways to pad the sequence to make it better behaved at the end-points.
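
    A bare-bones R version of the forward-backward trick Carrick describes might look as follows (a sketch using the signal package; it omits the extra padding and initial-condition handling that filtfilt layers on top):

    library(signal)
    bf=butter(4,.05,"low")
    #forward pass, then a pass over the reversed output: phase shifts cancel
    fwdbwd=function(filt,x) rev(filter(filt,rev(filter(filt,x))))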

    • Mark T.
      Posted Oct 7, 2008 at 10:00 AM | Permalink | Reply

      Re: Carrick (#1),

      One of the tricks I use, when time-domain filtering a previously recorded series using the Butterworth method, is I run the series forward in time and then run the first-filtered sequence backwards in time through the same BW filter. This procedure results in a net 0-delay filter (yes there are other ways of doing this, but this is the easiest way to implement), so you are guaranteed no phase distortion.

      That’s essentially what the filtfilt() function in MATLAB does. It, filtfilt(), also attempts to deal with endpoint problems, but acausal filters will always have unresolved issues in this regard.

      I don’t think the question here is really “Are Butterworth Filters a Good Idea” but “Are High-Order IIR Filters a Good Idea” coupled with “Is an Acausal Filter a Good Idea” for use with these climate series. Any high-order IIR will exhibit significant ringing since the ringing is a function of the compactness in the frequency domain (other filters, such as an elliptic, would actually be worse). Higher order low-pass filters have tighter support (more compact, sharper transitions) and hence more ringing. Oh, and Mann’s filter is not 10th order, it is 20th since he’s doing it with filtfilt() (er, maybe 19th order since it is two back-to-back filters which is a convolution).

The acausality of implementing filtfilt() is also an issue, as it requires invented data at the endpoints, and current outputs depend upon future inputs.

      Mark

  2. Steve McIntyre
    Posted Oct 7, 2008 at 9:20 AM | Permalink | Reply

    #1. How does this differ from the filtfilt operation?

  3. Mark T.
    Posted Oct 7, 2008 at 10:04 AM | Permalink | Reply

    The centering issue results in a step function at the endpoints, i.e., the data goes from zero to some offset in one sample, which is why you see magnified ringing. There’s still significant time duration in the 10th order IIR implemented with filtfilt(), however, so each output point is really a function of as many as a few hundred inputs.

    Mark

  4. Carrick
    Posted Oct 7, 2008 at 10:14 AM | Permalink | Reply

    Mark:

    That’s essentially what the filtfilt() function in MATLAB does.

    I didn’t bring that up, but Mark is correct of course. I’ll just mention that one also shifts the first point so that the series starts at zero (then adds that value back). Alternatively, you can just backfill with the initial value of the series.

    This fixes the ringing problem associated with an initial step function in the data, as would be present here if you didn’t use that procedure. Also, I would probably stick to a 4th order Butterworth for this application.

    Here are the results using my software:

    Blue line is 20-year average (fstop=0.05 yr^-1), red line is 100-year average (fstop=0.01 yr^-1).

  5. Steve McIntyre
    Posted Oct 7, 2008 at 10:17 AM | Permalink | Reply

    #3,4. I’ve re-titled this to “Off-center Butterworth Filters”, which seems to be the salient issue here. Merely centering the series gets rid of the problem. So AFAIK this is the only salient issue and the one that I’m interested in feedback on – so folks, please don’t dazzle me with electrical engineering other than on this issue. Indeed, the only salient issue is the programming one: did Mann center before butterworthing? If not, then there’s a problem here; if he did, then there probably isn’t.

  6. Carrick
    Posted Oct 7, 2008 at 10:20 AM | Permalink | Reply

    By the way, centering doesn’t work as well when you have a 1/f broad-band signal, as is typical in meteorological data. The reason is that the “local mean” for the “start up” phase of the filtering can vary dramatically from the overall mean (so the problem with ringing of the filter from the initial step function in the data isn’t solved). An alternative approach to the one I described above would be to take an average of the first n points, where n is the filter length, and use that for the offset.
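
    In R terms, Carrick’s alternative amounts to something like the following sketch (reusing x and bf from the post; n=30 is an illustrative filter length):

    n=30
    offset=mean(x[1:n]) #local mean of the start-up region, not the overall mean
    smoothed=filtfilt(bf,x-offset)+offset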

  7. Steve McIntyre
    Posted Oct 7, 2008 at 10:21 AM | Permalink | Reply

    #5. OK, we all understand that there are a variety of ways to smooth sensibly. It’s not like we ourselves were trying to figure out a sensible way of smoothing. The issues are entirely replication ones:

    1) does off-center Butterworth with filtfilt introduce the end effect noted in the first graphic? (I’m convinced that it does).

    And once again:

    2) Confirmation that Mann used Butterworth on uncentered series?

    [Update: UC reports that the comment in the Matlab manual on filtfilt about the need for zero tapering is misleading as this is not an issue in Matlab filtfilt (though it is for R filtfilt). AFAIK the issue is entirely moot as yes, he used Butterworth on uncentered series; R Butterworth and uncentered series introduces an artifact, but Matlab Butterworth on uncentered series appears not to.]

    • Mark T.
      Posted Oct 7, 2008 at 12:07 PM | Permalink | Reply

      Re: Steve McIntyre (#8),

      1) does off-center Butterworth with filtfilt introduce the end effect noted in the first graphic? (I’m convinced that it does).

      I think this would be situationally dependent, Steve. If the mean is significantly different than either endpoint, then you will have a “step” at the endpoint even after centering. In the series you have here (Dongge) it looks as if the two endpoints are roughly the same small distance from the mean, which will minimize the effect (which is sort of what Carrick was getting at in (#10)).

      Of course, the whole concept of “centering” requires stationarity but that’s a different hobby horse of mine that I doubt can ever be properly addressed. :)

      Mark

  8. Steve McIntyre
    Posted Oct 7, 2008 at 10:23 AM | Permalink | Reply

    #7. Point taken, but that seems to be a second order effect relative to O18 series where all the values are heavily negative – if the mean wasn’t subtracted somewhere along the way.

  9. Carrick
    Posted Oct 7, 2008 at 10:33 AM | Permalink | Reply

    So AFAIK this is the only salient issue and the one that I’m interested in feedback on

    Just because centering solved the problem for this series doesn’t mean that it is the appropriate method to use for these types of data. In general, for time series with 1/f spectra, it is suboptimal.

    Secondly, discussing the Matlab implementation that Mann used seems hardly “off-topic” to the original question you raised… Just saying.

    Steve:
    Sure. I was getting a bit preoccupied with what I was thinking about. Sorry to be grumpy about it.

  10. Eric
    Posted Oct 7, 2008 at 10:35 AM | Permalink | Reply

    Steve(#2),

    Filtfilt is slightly more complicated than a forwards/backwards pass with filter(). The signal is first padded (but not with zeros!) to help eliminate edge effects. The filter order determines the pad size. Then signal delay elements are added to make sure the initial/final conditions line up. These values should be dependent on your filter coefficients.

    Because of this delay section and the padding, you can get a marked difference between a forwards/backwards filter and filtfilt.

  11. Steve McIntyre
    Posted Oct 7, 2008 at 12:30 PM | Permalink | Reply

    #12. I think that displacements from the mean of the order shown here are not going to be a problem. I take the point of various commenters that it might be a problem for trending series where mean-reverting properties of Butterworth filters would come into play and create somewhat of an artifact at the end.

    Hmmmmm. Maybe Mann’s preoccupation with weird smoothing arises from his use of Butterworth filters and this zero-seeking behaviour. The Mannian flip at the end of the series would counteract this a bit. Maybe we’ve explained something different than what we started with.

    Anyway, for my emulation, I’m going to center and then Butterworth on the assumption that the Matlab implementation does this.

  12. Steve McIntyre
    Posted Oct 7, 2008 at 12:48 PM | Permalink | Reply

    Mark and Carrick, here’s a prominent series (Dasuopu) with a very strong trend. Left, a simple Gaussian filter; right, a Butterworth filter (centered before filtering). In this case, I applied Mannian fit choices as well. We see the zero-seeking behaviour at the end of the series resulting in an artifact. This illustrates a case working against his desired result (and in this case it is an unreasonable result as well). My size-up is that practitioners should heed the comments in the manuals that this smooth is designed for frequency domain work and not for time domain applications.

  13. Mark T.
    Posted Oct 7, 2008 at 2:47 PM | Permalink | Reply

    Well, in time-series applications you just have to accept the fact that the end-points aren’t going to behave as the inner portion of the series. Sometimes this isn’t a problem, for example when you’re integrating a million point radar return and the bits at the ends don’t contribute much to the total signal power. Of course, in this case you can also clearly delineate between signal and noise, which is not the case with these climate series. All that is even reasonably assumed is “low frequency = climate, high frequency = noise” which may or may not be true.

    The problem with this in regards to climate series is that the end bits are exactly what we want to evaluate. At least, that is what Mann is trying to show with his recons: that the end (now) is greater – in some statistical sense – than during the MWP (oh, what’s the new politically correct term for that?), whatever that means.

    I agree with your read on this, btw. Why does he even worry about smoothing before the last stage of analysis anyway? Smoothing correlates data over the length of the filter, which is an undesirable side-effect going into a correlation-based algorithm. It makes sense when you are decimating and you want to prevent aliasing… but smoothing for the purpose of smoothing?

    Mark

  14. Carrick
    Posted Oct 7, 2008 at 3:47 PM | Permalink | Reply

    The only reason for smoothing is that the correlated part of the signal is expected to be present only in a band of frequencies (in this case a low-pass band), and that the other (high-) frequency components should not be correlated. While it’s true that you end up with correlation between adjacent data values after smoothing, that can be corrected by decimating the data set to remove this filter-induced correlation.

    That’s the short version. But it is still very possible that this smoothing is unnecessary for this application, and in any case may produce undesirable end effects. So it certainly is worth checking to see whether smoothing is worth the headache it produces…

  15. RomanM
    Posted Oct 7, 2008 at 3:50 PM | Permalink | Reply

    Although filters are not my area of expertise, maybe I can add something to the discussion here. I will try to make it understandable for those who haven’t been looking at the programs.

    I have been looking at the Mann smoothing for a while now trying to recreate his Matlab-based routines in R. The R butter function, which generates the smoothing coefficients, produces the same results as the Matlab version. However, there are several important basic differences between the two filtfilt functions of R and Matlab.

    In the Mann lowpass.m procedure, he first pads the sequence at both ends. The amount of padding used depends on the “frequency” of the smoothing required (smoothing of interpolated data with f = .05 in the preprocessing stage and later proxy and temperature smoothing with f = .10, according to the SI). In the paper on smoothing, he pads with three times as many values as he does in the reconstruction paper, saying “The length of the series padding used here to implement boundary constraints has been increased from Δt/2 (used by Mann [2004]) to 3Δt/2 to further guard against time series end effects”. Since the smoothing paper was likely to have been written more recently, my suspicion is that he may have come to realize that there were substantial boundary effects in the process.

    The padded process is passed to the filtfilt function which pads the sequence further (reusing the previously added values) where the number of extra values is equal to the “order” of the filter, i.e. one less than the number of actual coefficients in the filters. This is 10 in the smoothing paper, but I am unaware of what order was actually used in the reconstructions. At this point, the Matlab version of filtfilt differs from the R version. As was stated earlier in this thread, the R padding is done using zeros (according to R help) whereas in Matlab, reflect-and-flip is used. This can certainly produce substantial differences at the boundaries.

    There is a second difference that appears between the two versions at the next step as well. Matlab calculates a series of “initial-conditions” using only the filter itself. The calculated numbers are then each multiplied by the leading value in the padded sequence (which comes from somewhere well away from the edge of the original unpadded sequence). The padded series is then passed to an internal Matlab function called “filter” along with the filter coefficients and the initial condition values. This result is then filtered in the same way in the opposite direction (again with the application of initial conditions). The smoothed result is obtained by removing the padding pieces. This second difference will have noticeable effects at the ends of the series.

    The stumbling block in trying to get this to work easily in R is the fact that the R version of the function, filter, does not seem to have the ability to incorporate the initial conditions, although otherwise, it does appear to work the same way. I have some ideas that I am still checking out to see how to get around that.

    Why these black box methods have to be used is beyond understanding. Everywhere you look there are missteps of all kinds (how many threads has the latest paper generated here?) – and those are just the ones on the surface. It is apparent that they can’t have two clues about what they are doing when they apply the flip-flops and the other arcane methods nor about the appropriate way to calculate error bounds (which UC has shown to be larger near the ends than the ones they usually come up with). But, hey, it’s cli-sci and it provides me with something interesting to do…

  16. Mark T.
    Posted Oct 7, 2008 at 4:04 PM | Permalink | Reply

    Yes, Carrick, I agree. The problem in this case (with these climate series) is that they don’t really know where the uncorrelated parts of the signal lie (which band, that is), so really, any smoothing potentially removes valid signal.

    Mark

  17. MC
    Posted Oct 7, 2008 at 4:08 PM | Permalink | Reply

    My colleagues and I have used filtfilt in Matlab before and this has always caused problems. I’m a bit disappointed Mann didn’t just write his own 0-delay filter in an m-file and use this. It’s a bit more long-winded but it’s clear at least. He could have used a simple linear iterative filter i.e. first-order, or just a running average forward and back. I have a distrust of Matlab functions anyway as they have too many extras in them that never give you a straight answer unless you are careful. Notably fft with 0 padding.
    The caveat about using a Butterworth for frequency domain analysis is also pertinent. If you really want to be pedantic, Steve, try creating arrays of data with varied lengths of zeros at each end and see what the filtfilt function does.

  18. Carrick
    Posted Oct 7, 2008 at 4:20 PM | Permalink | Reply

    RomanM and MC, my big worry is that filtfilt.m is proprietary code, and can change between versions. This is technical enough of a method that “getting it right” is important, and one really shouldn’t rely on canned software to do the heavy lifting.

    However, because Mann does his own padding, the additional padding of Matlab is likely to have little or no effect…. So at least that’s one less thing to worry about.

    • Mark T
      Posted Oct 7, 2008 at 6:39 PM | Permalink | Reply

      Re: Carrick (#20),

      RomanM and MC, my big worry is that filtfilt.m is proprietary code, and can change between versions.

      % Author(s): L. Shure, 5-17-88
      % revised by T. Krauss, 1-21-94
      % Initial Conditions: Fredrik Gustafsson
      % Copyright 1988-2004 The MathWorks, Inc.
      % $Revision: 1.7.4.2 $ $Date: 2004/12/26 22:15:49 $

      It’s pretty old, probably not going to change. It is copyrighted so I cannot distribute it. I was actually surprised that it contains an actual script, since many of their functions are now pre-compiled and the m-file is nothing more than the help/readme for the function.

      Mark

  19. John A
    Posted Oct 7, 2008 at 5:01 PM | Permalink | Reply

    Steve:

    I’m not really sure why this smoothing is done before standardization

    How many derivatives of the original data does the Mannian process go through before he gets to his “result”? And more importantly, with all of this smoothing, standardization and bullying of data, what can we say about error bars, goodness of fit or any metric through such a mangling process?

  20. Steve McIntyre
    Posted Oct 7, 2008 at 5:07 PM | Permalink | Reply

    #17. Roman, the Matlab reflect and flip – can you confirm that this is both horizontal and vertical?

    • RomanM
      Posted Oct 7, 2008 at 5:54 PM | Permalink | Reply

      Re: Steve McIntyre (#22),

      No problem. It is fairly obvious by looking at the filtfilt.m file:

      % Extrapolate beginning and end of data sequence using a "reflection
      % method". Slopes of original and extrapolated sequences match at
      % the end points.
      % This reduces end effects.
      y = [2*x(1)-x((nfact+1):-1:2);x;2*x(len)-x((len-1):-1:len-nfact)];

      The 2*x(1)-x((nfact+1):-1:2) and the “slopes matching” are a dead giveaway. However, at the MathWorks website, you can find a reference to filtfilt at (never saw a url this long):

      http://www.mathworks.com/access/helpdesk/help/toolbox/signal/index.html?/access/helpdesk/help/toolbox/signal/filtfilt.html&http://www.mathworks.com/support/functions/alpha_list.html?sec=3

      which says

      Algorithm

      filtfilt is an M-file that uses the filter function. In addition to the forward-reverse filtering, it attempts to minimize startup transients by adjusting initial conditions to match the DC component of the signal and by prepending several filter lengths of a flipped, reflected copy of the input signal.

      “Several” filter lengths sounds like an exaggeration to me. From the program, it looks like one at each end. One should also notice that reflection does not include the endpoint (unlike the minimum roughness of Professor Mann who reflects that endpoint as well – another innovation).

  21. Nick Moon
    Posted Oct 7, 2008 at 6:26 PM | Permalink | Reply

    Just looking at the examples Steve has posted, it looks like there is a sine-wave-ish ripple in the Butterworth that isn’t in the Gaussian-smoothed data. Perhaps looking at it by eye isn’t the best tool. But it looks like about a 50 year cycle length.

    Still, if Steve’s playing with signal processing stuff, why not stick each of the datasets through a fast Fourier transform and see what happens? The original unsmoothed data should be a fairly broad flat line when FFT’ed. Any kind of smoothing should attenuate the higher frequency end. But if the ringing that appears at the endpoints is also affecting the entire dataset – this might show up as a clear spike on an FFT.

    However, I don’t suppose any of this matters much in the big picture. It depends on what use is made of the data after going through this low pass filter. I haven’t looked at the code for this bit.

  22. Mark T
    Posted Oct 7, 2008 at 6:42 PM | Permalink | Reply

    Uh, you should probably clip the filtfilt.m code that RomanM posted. This is a pretty public website and I’m guessing none of us have permission to post their code.

    Mark

  23. Steve McIntyre
    Posted Oct 7, 2008 at 8:21 PM | Permalink | Reply

    OK…. Jean S sent me a digital version of Dongge cave that he butterworthed in Matlab. Uploaded to data/mann.2008/proxy385.txt. He reported:

    First column is year, second the original and
    third Mann-filtered. I used lowpassmin with f=0.05 (since 385 was in category 6001, i.e. a “decadal” proxy).

    Here’s my best emulation in R, which seems very good and more than reasonable enough for practical purposes. I thought that this pretty much ended the inquiry, but there’s a catch.

    The problem is that this calculation was done with f=0.1 and the differences blew up when I re-ran it with f=0.05. I hope that Jean S made a typo, but I am hoping against hope here, since this would be very uncharacteristic.

  24. Steve McIntyre
    Posted Oct 7, 2008 at 9:19 PM | Permalink | Reply

    #17. Roman, a CA reader pointed out to me by email that filtfilt.m online here has the following line:

    nfact = 3*(nfilt-1); % length of edge transients

    nfilt is the length of the filter (11 coefficients here, for the order-10 filter). So the Matlab padding is 30 years added on to the Mann padding. The reader observes that he did not observe any zeroing in the code.
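
    In R terms, for Mann’s order-10 filter (where b and a each have 11 coefficients):

    nfilt=11 #number of filter coefficients
    nfact=3*(nfilt-1) #30 extra padded values at each end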

    • RomanM
      Posted Oct 8, 2008 at 6:10 AM | Permalink | Reply

      Re: Steve McIntyre (#28),

      Thanks for the correction. I must be getting old because I actually knew that. In the document which I referred to in my comment 19 on the UC on Mann Smoothing thread, I wrote:

      Now when filtfilt.m receives this padded result, it adds further padding at each end as well. In the case used by Mann, Butterworth (10, .025), it pads each end with 30 more values which are both reflected and flipped about the ends of the already padded sequence.

      I guess nfilt and nfact look too much alike for these eyes…

      IMHO, Bob North (#30) makes the excellent point that there is really no genuine basis for applying the signal smoothing techniques to what is really a statistical problem of estimation of a series of parameters. Without an understanding of the underlying relationships between these parameters and using an analysis suited to those relationships, all that these techniques serve to do is to superimpose spurious correlational structures onto the data. These structures (and NOT the underlying information in the data) then become the basis for the reconstructions themselves and for the validation procedures justifying the precision of the results. This is one of the points made by M and M when they showed that, using the Mann-made methodology, even unrelated randomly generated data when properly selected could tell us how cold it was hundreds of years in the past.

  25. Posted Oct 7, 2008 at 9:25 PM | Permalink | Reply

    Here are my comments on Mann’s Butterworth filter from a year ago (bold added and typo corrected):

    Steve –
    Thanks for the clarification on the Gaussian filter!

    Although IPCC AR4 WGI Chapter 6 generally uses a Gaussian filter, it’s curious that Mann (2004) instead uses the relatively arcane Butterworth filter. This is an electronic filter that has a very flat passband and a monotonic (albeit slow) stopband rolloff. With infinite order, it becomes an ideal “brickwall” lowpass filter, but since all realizable circuits have some resistances that make them non-ideal, electrical engineers have to settle for imperfect filters like the Butterworth. The likewise imperfect Chebyshev filter is generally preferred, since it has faster rolloff at the expense of some passband ripple. (See Wikipedia)

    However, computer filtering of data is not subject to the same constraints as electrical engineering, and so one could just as well use the ideal filter, which in continuous time is the cardinal sine function sinc(x) = sin(x)/x, appropriately scaled and normalized. In unbounded discrete time there is presumably a discrete version of this. In bounded discrete time, one could simply truncate the weights at the boundaries and renormalize, reflect the data at the boundaries until the weights are imperceptible, or (more ambitiously) recompute optimal weights that kill, in expectation, an equally spaced set of frequencies out to the Nyquist frequency, taking the expectation over the random initial phase of the signal.
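
    A sketch of the truncate-and-renormalize option Hu describes, in R (fc is the cutoff frequency in cycles per year, m the half-width; both names are mine):

    sinc_weights=function(fc,m) {
     k=-m:m
     w=ifelse(k==0, 2*fc, sin(2*pi*fc*k)/(pi*k)) #ideal low-pass impulse response
     w/sum(w) } #truncate at the boundaries and renormalize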

    But if a precise bandwidth isn’t crucial, one could just use Gaussian, binomial, Kalman, or even plain old equal weights. Mann’s Butterworth filter serves no apparent purpose in this context whatsoever, other than to dazzle the rubes with some high-tech but inappropriate apparatus.

    IPCC’s Chapter 3 Appendix 3.A states that Chapter 3, if not Chapter 6, uses either the 5-weight filter (1/12)[1,3,4,3,1], or a likewise integer-based 13 weight filter. It’s not clear where these come from, since they are not quite binomial, but at least the short one does have the attenuations it claims, making it a clever way to achieve an almost 10-year half power bandwidth with only 5 points! They say the long one has a response function similar to the 21-term binomial filter used in TAR, despite using only 13 points.
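
    The attenuation claim for the quoted 5-weight filter is easy to check directly in R (the weights are symmetric, so the frequency response is real):

    w=c(1,3,4,3,1)/12
    f=seq(0,.5,.01) #frequencies in cycles per year, up to Nyquist
    H=sapply(f, function(fi) sum(w*cos(2*pi*fi*(-2:2)))) #response at each frequency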

    – Hu McCulloch
    Econ Dept.
    Ohio State U.
    mcculloch.2@osu.edu

  26. Bob North
    Posted Oct 7, 2008 at 9:35 PM | Permalink | Reply

    All of this discussion of various filters and smoothing techniques is very interesting, but the fundamental question still remains: is there a real-world, physical basis for using a Butterworth or any other low pass, high pass, screen pass filter on the proxy data when attempting to develop a temperature reconstruction? I would really like to understand the physical basis for such filtering efforts.

  27. Steve McIntyre
    Posted Oct 7, 2008 at 9:37 PM | Permalink | Reply

    #30. Shouldn’t your question be directed at Mann et al? We’d just be guessing as to their reasons – which, as we’ve alluded to, seem obscure to us.

  28. Steve McIntyre
    Posted Oct 7, 2008 at 9:40 PM | Permalink | Reply

    #29. Just catching up to you, Hu. I never quite saw any issues with the smoothing stuff (partly because I didn’t have a Butterworth function in R until a little while ago.) Given the corpus in which this method was included, I should have realized that it wouldn’t be as it seemed.

  29. Posted Oct 7, 2008 at 11:46 PM | Permalink | Reply

    Smooth first, ask questions later.

  30. Mark T
    Posted Oct 7, 2008 at 11:55 PM | Permalink | Reply

    Hehe… good one. I like Hu’s comment on dazzling the rubes, too.
    Mark

  31. Manfred
    Posted Oct 8, 2008 at 2:09 AM | Permalink | Reply

    A question just out of curiosity. I have a background in electronics and analogue signal processing, and the filtering names that fly around here sound so awfully familiar. Has anyone ever tried to make a spectrum analysis of the noise?

  32. Jean S
    Posted Oct 8, 2008 at 2:10 AM | Permalink | Reply

    Hu, this is not exactly related to the topic, but since you are a real expert on this, I ask it here (I’m sure you understand why I’m asking this ;) ). Suppose you have a standard Kalman filter. You can assume that your state vector is just a sequence, if that’s any help. Now you apply the filter in the following “stepwise” manner:
    1) You estimate your observation noise covariance from a known “training” set (process noise variance is assumed to be known).
    2) You apply the filter to get your state estimates
    3) Now you repeat the procedure from 1) with a new longer observation sequences such that they overlap with the ones previously used in the common period. There may be fewer observation sequences now. In covariance training, you use the known state sequence AND the sequence estimated in 2) (as they were known states)
    This can be repeated several times. Now, how would you characterize the procedure? Is the covariance estimation meaningful/OK? Or should one only use the known state sequence for the noise estimation at each step? Are there better ways to directly adapt to the changing number of observational sequences?

    Steve (#27), I indeed used the parameter frequency=0.05 as an input to lowpassmin.m (I recalculated it and checked against data/mann.2008/proxy385.txt). But did you take into account that lowpass.m (which lowpassmin is calling), the version I was using, has the lines

    fn=frequency*2
    npad=1/fn;

  33. Willis Eschenbach
    Posted Oct 8, 2008 at 3:23 AM | Permalink | Reply

    Hu, you say:

    IPCC’s Chapter 3 Appendix 3.A states that Chapter 3, if not Chapter 6, uses either the 5-weight filter (1/12)[1,3,4,3,1], or a likewise integer-based 13 weight filter. It’s not clear where these come from, since they are not quite binomial, but at least the short one does have the attenuations it claims, making it a clever way to achieve an almost 10-year half power bandwidth with only 5 points! They say the long one has a response function similar to the 21-term binomial filter used in TAR, despite using only 13 points.

    As you likely know, the full width at half maximum (FWHM) is often used to characterize the width of a smoother. This is the width between the points where the amplitude is half the maximum amplitude. A 21 point binomial filter has a FWHM of ~ 9 points.

    Since their shorter filter is 13 points wide, it is perfectly possible, by a careful choice of values, to construct a 13 point integer filter with a FWHM of 9. One possible candidate would be:

    1, 5, 10, 14, 17, 19, 20, 19, 17, 14, 10, 5, 1

    This filter gives a result somewhat similar to an 11 point Gaussian filter, which has about the same FWHM. Why you would choose it is a mystery, however – these folks seem to specialize in obscure filter choices for obscure reasons …
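
    A quick arithmetic check of the FWHM claim (reading half maximum off the weight profile itself):

    w13=c(1,5,10,14,17,19,20,19,17,14,10,5,1)
    sum(w13>=max(w13)/2) #9 points at or above half the maximum weight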

    w.

  34. Posted Oct 8, 2008 at 4:49 AM | Permalink | Reply

    The whole procedure of gridboxcps.m is unprecedented. Proxies with quite poor correlation to local temperature are smoothed with ad-hoc padding. Then variance matching, smoothed local temperature as target, is performed. Finally, the area-weighted average (represents temperature) is variance matched to hemispheric temperature (represents what, after scaling?). Do they have any idea what happens to noise in the process? Even iid noise, to begin with? What if I switch from variance matching to CCE?

  35. Posted Oct 8, 2008 at 6:42 AM | Permalink | Reply

    This is one of the points made by M and M when they showed that, using the Mann-made methodology, even unrelated randomly generated data when properly selected could tell us how cold it was hundreds of years in the past.

    Well, it would be certainly quicker and wouldn’t involve making arduous trips to stands of magic trees in Colorado and coring them. One could stay in Starbucks and not have to move at all.

    Perhaps what we’re learning is that carefully screened series of red noise really do tell us about the climate of times past and that Mann is a genius.

    Sarcasm aside, I remain baffled how much filtering, smoothing, padding and fitting needs to be done to the data before it’s seen fit to be averaged together and called something like temperature with meaningful error bars and meaningful statistical metrics for significance. How can any meaningful signal survive such pressure cooker mangling?

  36. RomanM
    Posted Oct 8, 2008 at 4:25 PM | Permalink | Reply

    Ok. I think that I have succeeded in simulating the results of Matlab’s version of filtfilt in R. I never did figure out how those pesky startup conditions were implemented so I made an extremely simple change in the process (but I had to think of it first :) ) which makes those conditions unnecessary. I wrote up a short R function to do the trick:

    # b = moving-average coefficients of an ARMA filter (normally called ‘b’)
    # a = autoregressive (recursive) coefficients of an ARMA filter
    # x = input sequence to be filtered

    # needs to have library(signal)

    filtfiltx = function( b, a, x ) {
    len = length(x)
    nb = length(b)
    na = length(a)
    nfilt = max(nb,na)
    nfact = 3*(nfilt-1) # padding size
    # If not enough data, quit
    if (len <= nfact) stop("Data must have length more than 3 times filter order.")
    b = c(b, rep(0,nfilt)) # zero-pad but not really necessary?
    a = c(a, rep(0,nfilt)) # ditto
    y = c(2*x[1]-rev(x[2:(nfact+1)]), x, 2*x[len]-rev(x[(len-nfact):(len-1)]))
    y = rev(y[1] + filter(b,a,y-y[1]))
    y = rev(y[1] + filter(b,a,y-y[1]))
    y[(nfact+1):(nfact+len)] }

    If anyone finds any problems with the script, please tell me about them.

    • IainM
      Posted Oct 9, 2008 at 6:56 AM | Permalink | Reply

      Re: RomanM (#41),

      Just FYI: filtfiltx fails with “Error in filtfiltx(bf, x) : argument “x” is missing, with no default” when dropped into the dongge script in Steve’s OP.

      Which just goes to show that somebody’s watching, if not learning :)

      • RomanM
        Posted Oct 9, 2008 at 7:21 AM | Permalink | Reply

        Re: IainM (#43),

        R is looking for three variables, not two, so the third variable x is missing. That’s because in keeping with the format of the Matlab filtfilt function, the way I programmed it was to separate out the b and a components as two variables. Try writing filtfiltx(bf$b, bf$a, x) instead.

        Alternatively, you can change the filtfiltx function to look like:

        filtfiltx = function( bf, x ) {
        b = bf$b
        a = bf$a
        len = length(x)
        nb = length(b)
        na = length(a)
        nfilt = max(nb,na)
        nfact = 3*(nfilt-1) # padding size
        if (len <= nfact) stop("Data must have length more than 3 times filter order.") # Not enough data
        b = c(b, rep(0,nfilt)) # zero-pad but not really necessary?
        a = c(a, rep(0,nfilt)) # ditto
        y = c(2*x[1]-rev(x[2:(nfact+1)]), x, 2*x[len]-rev(x[(len-nfact):(len-1)]))
        y = rev(y[1] + filter(b,a,y-y[1]))
        y = rev(y[1] + filter(b,a,y-y[1]))
        y[(nfact+1):(len+nfact)] }

        In the latter approach, it will make your drop-in work properly.

        • IainM
          Posted Oct 9, 2008 at 7:27 AM | Permalink

          Re: RomanM (#44),

          Ahhh, I see. Having read about R’s amazing abilities in parameter passing I thought R would expand bf to provide the {b,a} pair, I live and learn! Many thanks.

        • RomanM
          Posted Oct 9, 2008 at 7:54 AM | Permalink

          Re: IainM (#45),
          The abilities of an R programme depend on what has been programmed into it. In order to do that here, I would have had to put in code checking to see which version of the data you entered and had the program act accordingly.

          However, having just run the programme on the dongge script, I noticed that it should be “updated” (not corrected ;) ) for neatness’ sake. Two (unnecessary) lines should be changed:

          Current:

          b = c(b, rep(0,nfilt)) # zero-pad but not really necessary?
          a = c(a, rep(0,nfilt)) # ditto

          Updated:
          b = c(b, rep(0,nfilt))[1:nfilt] # zero-pad but not really necessary?
          a = c(a, rep(0,nfilt))[1:nfilt] # ditto

          I didn’t check into it, but my suspicion is that somewhere in the application of the Matlab filter function, it may require that b and a be of the same length and they pad the shorter filter portion with zeros. This does not appear to be needed in R. I added a bunch of zeros to both, but neglected to chop off the extras to bring the lengths back to the value, nfilt. To quote something I read somewhere, “… these errors do not influence results obtained, but they could under some circumstances.”

        • IainM
          Posted Oct 9, 2008 at 8:16 AM | Permalink

          Re: RomanM (#46),

          Thanks again Roman, I’ve ‘updated’ accordingly but I won’t be citing you as the source. In fact I’ll add your name to the ‘nameless ones’ list :p

          (That could become the sincerest form of flattery!)

        • RomanM
          Posted Oct 9, 2008 at 8:32 AM | Permalink

          Re: IainM (#48),

          I will remain nameless…

          However, since I am having fun playing with this function, I combined the previous versions into a single function (version 2.0?) which can handle both ways of specifying the filter:

          filtfiltx = function( b, a, x = NULL ) {
          if (is.null(x)) { x = a
          a = b$a
          b = b$b }
          len = length(x)
          nfilt = max(length(b),length(a))
          nfact = 3*(nfilt-1) # padding size
          if (len <= nfact) stop("Data must have length more than 3 times filter order.") # Not enough data
          b = c(b, rep(0,nfilt))[1:nfilt] # zero-pad shorter of b and a
          a = c(a, rep(0,nfilt))[1:nfilt] # ditto
          y = c(2*x[1]-rev(x[2:(nfact+1)]), x, 2*x[len]-rev(x[(len-nfact):(len-1)]))
          y = rev(y[1] + filter(b,a,y-y[1]))
          y = rev(y[1] + filter(b,a,y-y[1]))
          y[(nfact+1):(len+nfact)] }

          Now if bf consists of the pieces bf$b and bf$a, then either of filtfiltx(bf, x) or filtfiltx(bf$b, bf$a, x) should give the same results.

        • IainM
          Posted Oct 9, 2008 at 11:41 AM | Permalink

          Re: RomanM (#49),
          Thanks for V2.0, I shall now retire to the back of the cave and play with my speleothems!

          Re: Patrick M. (#51),

          I’m glad you took my meaning :)

        • RomanM
          Posted Oct 9, 2008 at 12:32 PM | Permalink

          Re: IainM (#52),

          Perhaps you didn’t guess the source of my statement in #46,

          To quote something I read somewhere, “… these errors do not influence results obtained, but they could under some circumstances.”

        • IainM
          Posted Oct 9, 2008 at 12:47 PM | Permalink

          Re: RomanM (#53),

          No I didn’t guess, I actually knew it :)

        • Patrick M.
          Posted Oct 9, 2008 at 11:07 AM | Permalink

          Re: IainM (#48),

          I won’t be citing you as the source

          Nor will whoever updates the readme’s at Mann’s site.

  37. peter_ga
    Posted Oct 9, 2008 at 3:52 AM | Permalink | Reply

    Looks like the transient response of the Butterworth filter has given him a nice hockey-stick upkick at the end.

    Without a full knowledge of the application, how about a linear-phase finite impulse response digital filter, easily developed from a Kaiser-windowed frequency brick at the cutoff; it gives up to 90 dB pass/stop performance. The linear phase produces a constant input-output delay, so the data can be time shifted by the known constant delay. The delay corresponds to half the impulse response length of the filter. Because the filter response has perfectly linear phase, transient response is perfect.

    However this means that there is no output data for half the length of the filter, both at the beginning and the end. This is quite logical, and must be nearly optimal for this application.
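
    A hedged R sketch of the approach peter_ga describes, using the signal package’s fir1 with a Kaiser window (the order n and beta are illustrative choices):

    library(signal)
    n=60 #even filter order, so the delay is n/2 samples
    h=fir1(n,.05,"low",window=kaiser(n+1,8)) #Kaiser-windowed frequency brick at 0.05
    x=sin(2*pi*(1:500)/100)+rnorm(500,sd=.3) #toy series
    z=filter(h,x) #linear phase: constant delay of n/2 samples
    zs=c(z[(n/2+1):length(z)],rep(NA,n/2)) #shift back; n/2 points at each end are not valid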

  38. Carrick
    Posted Oct 9, 2008 at 8:14 AM | Permalink | Reply

    peter_ga:

    Looks like the transient response of the Butterworth filter has given him a nice hockey-stick upkick at the end.

    I don’t agree that it has, at least in this case. Without “proper” boundary conditions, it certainly can. (And the “proper” ones are usually a bit of a black art, meaning you have to hand verify how your algorithm handles any given data set.)

    Since one could in principle create hockey-stick shapes, this certainly illustrates the danger in smoothing-first.

    • Mark T.
      Posted Oct 9, 2008 at 9:33 AM | Permalink | Reply

      Re: Carrick (#47), Yeah, a much more worthy cause for concern is that these things are data dependent.

      Mark

  39. Ashley
    Posted Oct 10, 2008 at 1:25 AM | Permalink | Reply

    I work in seismic processing of data and hence I am very familiar with the application of Butterworth filters (and indeed Ormsby filters, which are “boxcar” amplitude spectrum filters). With any filter like this you are applying a convolutional process to a time series of finite length. The filtering process assumes the series is of infinite length. This means that if you have a significant change in data values at the end of the series, then due to the truncation you will see a major excursion in the filtered response. This response is entirely spurious. The usual procedure when faced with this problem is to either:

    (a) Truncate the series after filtering to where the filtered response is valid or
    (b) apply a cosine taper to the filtered response in order to smooth the end amplitudes to zero.

    For applications such as seismic processing for oil exploration, losing the start or end of the series is academic as the response at the start or end of the time series is generally of no interest. However, when relating well log (borehole) data to the seismic response the truncation effect can be annoying and will result in the analysis having to disregard (lose or throw away) data at the end of the filtered time series.

    You cannot low pass filter a time series and expect meaningful results which can be analysed to still remain at the ends of the filtered signal.

    • Mark T.
      Posted Oct 10, 2008 at 9:02 AM | Permalink | Reply

      Re: Ashley (#55),

      (b) apply a cosine taper to the filtered response in order to smooth the end amplitudes to zero.

      Yes, but that would make the trend go down instead of up and therefore is a no-no. ;)

      Mark

      • Ashley
        Posted Oct 11, 2008 at 2:21 AM | Permalink | Reply

        Re: Mark T. (#56), That would certainly be true for a time series with a non-zero mean. I forget that in my work the mean for a seismic time series is zero because the seismic signal does not contain very low frequencies, hence the mean is naturally filtered and the cosine taper is then a good solution. However, for borehole data (and the case in point on this thread) this is not the case and the preferred method would then be to truncate the time series after filtering.

    • MikeH
      Posted Oct 10, 2008 at 12:05 PM | Permalink | Reply

      Re: Ashley (#55),

      a) Truncate the series after filtering to where the filtered response is valid or

      How would you normally determine where the valid portion of the filtered response starts and ends?

      • Ashley
        Posted Oct 11, 2008 at 2:22 AM | Permalink | Reply

        Re: MikeH (#57), At least the half length of the filter, but to be really safe, the full length of the filter.

  40. peter_ga
    Posted Oct 10, 2008 at 9:41 PM | Permalink | Reply

    I don’t understand why they use a Butterworth filter. It is maximally flat in the passband. How is this of any great value? The group delay is non-constant over the passband. For this application, the group delay is important, because it is necessary to relate each output point to a specific time, and a non-constant group delay as a function of passband frequency does not allow this.

    Also this is a digital rather than analogue filtering application. FIR digital filters allow for constant group delay. The simplest would be a moving time average. If you had say a 10 year average, obviously the first and final 5 years of the data set would be unavailable. A more complex filter is easily developed with a simple algorithm explained in most introductory texts on digital signal processing (Rabiner and Gold; Oppenheim and Schafer).

    I have recent experience with this technique because professionally I developed a real-time interpolator to allow frequency conversion enabling legacy games to replay their high quality sounds in a gaming environment. Linear phase is great, though not strictly necessary, in interpolation, because the output data can be verified by inspection.

    • Ashley
      Posted Oct 11, 2008 at 3:07 AM | Permalink | Reply

      Re: peter_ga (#58), I am not sure I understand this description of a Butterworth filter. Firstly, a filter is normally considered in terms of its amplitude and phase spectra. For the type of application we are considering here a zero phase filter will place the filtered estimate at the centre of the filter. This then becomes a non-causal estimate. This is an equivalent choice in terms of phase spectrum to a simple moving average filter, as you describe in your second paragraph, where the average result estimated at any point in the time series is comprised of the average of an equal number of samples (assuming regular sampling) from both before and after the current time position. Any time domain filter we wish to apply has a frequency domain equivalent.

      In terms of the amplitude spectrum, it is the amplitude spectrum that is being modified in order for the filter to work. There are many types of filter but the two common ones used in my area of expertise are the Ormsby (“boxcar”) filter and the Butterworth filter.

      For an Ormsby filter used as a band pass filter then you would set four parameters – F1, F2, F3 and F4 which are the filter “corner” points. The amplitude spectrum from zero frequency to F1 is set to zero. From F1 to F2 the amplitude spectrum of the filter increases linearly from zero to one. Between F2 and F3 the amplitude spectrum is flat with a magnitude of one and then again decreases linearly back to zero between frequencies F3 and F4. To run the same filter in low pass mode you would only specify frequencies F3 and F4, the amplitude spectrum of the filter then being a constant 1 from zero frequency up to F3 and then linearly tapering down to zero amplitude at F4. Diagram below has frequency along the x-axis and amplitude on the Y-axis

               F2 ____________ F3
                 /            \
                /              \
       ________/                \________
              F1                 F4

      A Butterworth filter in bandpass mode is defined by two filter points, equivalent to F3 and F4 in the above picture. However, the slope of the filter is then defined in dB/Octave (an octave is a doubling/halving of the frequency), usually referred to as the roll-on or roll-off of the filter. With a Butterworth filter the decay therefore never actually zeroes any frequency; the amplitude is simply progressively more attenuated away from the pass band. For a low pass Butterworth the filter would be defined by one frequency (F3 in the above picture). Below this frequency the signal is not modified, above this frequency the signal is progressively attenuated according to the filter slope. A gentle slope might be -6dB/Octave, more aggressive slopes might be -18dB/Octave.

      For an Ormsby filter, if the two frequencies defining the slope (i.e. F1/F2 or F3/F4) are too close together and the slope then becomes very steep, this can create an undesirable “ringing” effect in the resulting filtered signal. Butterworth filters are regarded as more gentle in this regard, although if the roll-off slope is made very steep (e.g. -72 dB/Octave) then ringing is likely to be observed in the filtered signal.

      The other aspect of filter design that has to be considered is the length of the filter (sometimes called the operator). The operator has to be of sufficient length to properly work as the required filter.

      After designing the filter then it is applied by convolution, either in the time domain (reverse order of operator then repeatedly shift, cross-multiply with signal, add products) or in the frequency domain (multiply amplitude spectra of signal and filter, add phase spectra).
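
      The Ormsby trapezoid Ashley describes is simple to write down as an amplitude spectrum (a sketch; corner frequencies f1 < f2 < f3 < f4):

      #ramp up over F1-F2, flat at 1 over F2-F3, ramp down over F3-F4, zero outside
      ormsby_amp=function(f,f1,f2,f3,f4)
       pmin(pmax((f-f1)/(f2-f1),0),1)*pmin(pmax((f4-f)/(f4-f3),0),1)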

      • Ashley
        Posted Oct 11, 2008 at 3:11 AM | Permalink | Reply

        Re: Ashley (#61), Sorry about the diagram – the non-breaking spaces worked fine in the preview pane!

  41. Carrick
    Posted Oct 13, 2008 at 9:19 AM | Permalink | Reply

    peter_ga:

    I don’t understand why they use a Butterworth filter. It is maximally flat in the passband. How is this of any great value? The group delay is non-constant over the passband.

    That’s true if you apply it in the forward direction only. However, the implementation that Mann is using (Matlab’s filtfilt function) is applied first in the forward, then that output is run backwards through the filter. The filtfilt implementation produces a zero group delay, at the cost of being non-causal of course.
