UC writes in that there’s another Mannian problem:
And GRL08 ( http://www.meteo.psu.edu/~mann/shared/articles/MannGRL08.pdf ) is the prologue. Tried this new smoother ( http://www.meteo.psu.edu/~mann/smoothing08/lowpassadaptive.m ) with HadCRUT monthly and f=0.0104,
[smoothedbest,w0,w1,w2,msebest] = lowpassadaptive(HadM, 0.0104);
got this figure,
But the weights are [w0 w1 w2]
ans =
0.4100 0 0.5500
Surprisingly, this does not agree with the text, where constraint sum(w_j)=1 is specified. I guess there’s something wrong with Mann’s exhaustive search,
for weight0=0:0.01:1
for weight1=0:0.01:1.0-weight0
for weight2=0:0.01:1-weight0-weight1
I hope he will fix this soon.
UPDATE Sept 3, 8 am: It didn’t take long. As noted in a comment below, Mann changed the online code within hours of this being posted at CA to the following (note the difference in the third line). There is no annotation of the change nor credit to CA.
for weight0=0:0.01:1
for weight1=0:0.01:1.0-weight0
weight2=1-weight0-weight1;
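The effect of the two loop structures can be checked directly. Below is a hypothetical Python translation of the two MATLAB searches (integer counters are used here so the separate floating-point stepping issue doesn't intrude): searching weight2 over its own grid admits weight triples summing to less than one, while computing weight2 as 1 - weight0 - weight1 enforces the sum(w_j)=1 constraint.

```python
def search_original():
    # original structure: weight2 is itself searched over a grid,
    # so triples with w0 + w1 + w2 < 1 are admitted
    combos = []
    for i in range(101):                  # weight0 = i/100
        for j in range(101 - i):          # weight1 = j/100
            for k in range(101 - i - j):  # weight2 = k/100
                combos.append((i / 100, j / 100, k / 100))
    return combos

def search_fixed():
    # corrected structure: weight2 is forced to 1 - weight0 - weight1
    combos = []
    for i in range(101):
        for j in range(101 - i):
            combos.append((i / 100, j / 100, (100 - i - j) / 100))
    return combos

# the original search space contains thousands of invalid triples
invalid = [c for c in search_original() if abs(sum(c) - 1) > 1e-9]
```

The MSE criterion can then happily select an invalid triple such as [0.41 0 0.55], which is exactly what UC observed.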
Yes, I tried with a signal consisting entirely of ones with the last value changed to zero … and obtained a pretty interesting “smoothing” ;)
Amazingly or not, Mann solves this rather simple optimization problem with exhaustive search … and even that is coded wrong! Seems to me that none of the reviewers bothered to check the code.
Must have been peer reviewed.
That technique has always worked before.
I’m an engineer and therefore smarter than most scientists.
But if I was a scientist and I’d come up with that graph after years of exhaustive investigation, I would have said “Geez, 1910 to 1940 has the same shape and magnitude as 1980 to 2010! Now what the hell did those two sequences have in common?”
#1
Mann:
less computationally cumbersome??
tic,[smoothedbest,w0,w1,w2,msebest] = lowpassadaptive(randn(100,1),0.025);toc
Elapsed time is 18.584496 seconds
UC (#4), you say:
Perhaps after his exhaustive search, he could no longer afford that much time…
In my own work, I care deeply about the computational efficiency of my filter calculations. But I want to be able to repeat them 10,000 – 20,000 times per second.
It seems the code has been updated now
Re: Thor (#6),
http://www.meteo.psu.edu/~mann/smoothing08/
:)
#6
And who said they don’t read CA? :)
here’s a BBC news article about Mann’s latest work:
http://news.bbc.co.uk/2/hi/science/nature/7592575.stm
I just read the 1300 year articles coming out in the papers this morning and wrote this link below on my blog.
http://noconsensus.wordpress.com/2008/09/03/climatehottestfor1300yearsthesecondcoming/
I then came here to see what was happening. I should have known it was already being addressed. What I say in my blog is different, though: while CA is forced to address the science, the selective elimination of data seems designed to make the result difficult to refute rather than to make the science better.
Dr. Mann, some weights, such as [0.95 0.05 0] are still missing. Seems to be a rounding issue, try ‘format hex’ to locate the problem.
RE #11
This should take care of it. Loop counters seem to be single precision.
w0=0:0.01:1;
for i=1:length(w0)
weight0=w0(i);
w1=0:0.01:1-weight0;
for j=1:length(w1)
weight1=w1(j);
weight2=1-weight0-weight1;
11, 12. It’s just that Mann is a terrible programmer. Any student knows that you don’t use real variables in loops, you use integers, otherwise you can get this rounding problem.
It should be
for j=0:100; weight0=j/100;
etc
Or if he knew anything at all about Matlab, he would write
weight0=linspace(0,1,101)
He doesn’t. One of these days he’ll learn, maybe not. I always chuckle when he mentions “filters available in Matlab.” There are no filters available, there are solvers available. Anyone can code the solvers in any language by simply picking up a book on filter design and implementing the appropriate solver. MATLAB just makes it easy. :)
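The missing grid points are easy to reproduce. Here is a small Python sketch, using numpy's float32 to mimic the single-precision counters RomanM diagnosed (an assumption; the MATLAB internals are not reproduced here): a grid point like 0.95 = 19/20 has no exact binary representation, so an accumulated floating-point counter never equals it, whereas integer counters divided by 100 land on every grid point exactly.

```python
import numpy as np

# accumulate a single-precision loop counter, 0:0.01:1 style
x = np.float32(0.0)
step = np.float32(0.01)
vals = []
while x <= np.float32(1.0):
    vals.append(float(x))
    x = np.float32(x + step)

# 0.95 has no exact binary representation, so the accumulated counter
# never hits it, and combinations like [0.95 0.05 0] get skipped
hit_95 = any(v == 0.95 for v in vals)

# integer counters hit every grid point exactly
grid = [j / 100 for j in range(101)]
```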
Btw, ad hoc implementations that are “not optimal in terms of its statistical properties” are quite scary. You can always find an ad hoc implementation that works in very select cases, particularly when you have a predetermined outcome (which can sometimes be useful). “More sophisticated and precise approaches” carry statistical weight.
Mark
PS: technically, MATLAB should be all caps in publications since that is the actual product name.
It is also interesting to see that the great Mann still doesn’t seem to realize that his crazy ‘minimum roughness’ smoothing (a) forces the smoothed curve to go through the final data point, and (b) gives this final point an absurdly large influence on the data. Despite the facts that Steve pointed out (a) ages ago, and that (b) is clear from Mann’s own code:
apad=2*indata(nn)-indata(nn-ipad);
This shows that the final data point influences all the padded data, with a weighting factor of 2.
From lowpass.m:
% (2) pad series with values over last 1/2 filter width reflected w.r.t. x and y (w.r.t. final value)
% [imposes a point of inflection at x boundary]
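In Python terms (an illustrative sketch, not Mann's code), the padding rule and its sensitivity to the final point look like this: every padded value is 2·x[end] minus an earlier value, so a unit change in the final observation moves every pad value by two.

```python
import numpy as np

def pad_min_roughness(x, npad):
    # reflect-and-flip: pad_i = 2*x[end] - x[end-i]
    # (the Python analogue of apad = 2*indata(nn) - indata(nn-ipad))
    x = np.asarray(x, dtype=float)
    pad = 2.0 * x[-1] - x[-2 - np.arange(npad)]
    return np.concatenate([x, pad])

x = np.array([1.0, 2.0, 4.0, 3.0])
p1 = pad_min_roughness(x, 2)   # pads: 2*3-4 = 2 and 2*3-2 = 4

x2 = x.copy()
x2[-1] += 1.0                  # bump the last observation by 1
p2 = pad_min_roughness(x2, 2)
shift = p2[-2:] - p1[-2:]      # every pad value moves by 2
```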
The same lack of understanding appears in his GRL08 paper:
“and the ‘minimum roughness’ constraint which seeks the smoothest behavior near the time series boundaries by tending to minimize the second derivative of the smooth near the boundaries.”
For those who may not have been reading CA long, the problem of smoothing up to the end of your data was discussed exhaustively here a while back. All possible methods for smoothing Mann’s data out to 2006, as in the recent paper, were discussed and shown to create artifacts, sometimes huge artifacts.
With regard to UPDATE Sept 3, 8 am, the text below had been posted by this morning on Mann’s Supplemental Information page for the paper:
No attribution to CA’s discovery. Maybe they will when they fix the other problem in the program.
In the paper, mann states:
But the program reads:
If I read “var” correctly, mse in the program calculates the variance of the differences between the weighted combination of the smoothed series and the temperature series (indata), not the sum of squares of the differences (The term in the denominator of mse is irrelevant since it is the same for all of the approximately 5000 cycles of the loop – a waste of calculation). When the means of the series S0(t) and T(t) are equal, then the variance and the mean square error are also equal. Otherwise, they are not.
I don’t think that Prof. Mann understands that his Butterworth-filtfilt smoothing produces three series whose means are in general not only all different from each other but from the mean of the temperature series as well. He is actually minimizing Σ [S0(t) − T(t) − (w1m1 + w2m2 + w3m3 − mT)]², where m1, m2, m3 and mT are the means of the four series involved in the calculation. The final weights produced by the program could differ from the weights when the mse is correctly calculated.
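The distinction is easy to demonstrate with a short numpy sketch (illustrative names, not Mann's code): when the residuals between the "smooth" and the data carry a constant offset, var() reports zero while the true mean squared error does not, and the two differ exactly by the squared mean of the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=200)     # stand-in "temperature" series
smooth = t + 0.5             # a "smooth" with a constant 0.5 bias

resid = smooth - t
mse = np.mean(resid ** 2)    # true mean squared error: 0.25
v = np.var(resid)            # var() subtracts the mean first: 0.0
# in general: mse = var + mean(resid)**2
```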
Perhaps he could also explain why the conditions he has imposed on the minimization are necessary. In fact, he might discover that the best fit (with minimum mse) is obtained by simply regressing the three series on the temperatures with no preconditions on the “weights”. It could have saved some embarrassment, because the flawed program would have been unnecessary.
Independent Verification of the coding and Verification of the calculations is required before the calculated numbers can be considered to be candidates for acceptable results.
How come I seem to kill a thread whenever I post a comment? ;)
I have done some work to determine whether the error noted by UC and the error that I noted in comment 17 actually have a material effect on the results in the paper. The answer in short is yes, on both counts. My description of that work is too long to fit in the margin … er, I mean, to be included in this post, so I have created a pdf document which can be found here.
Of the two errors, I think that the latter one (var instead of mse) is the more egregious of the two. The first is a mathematical error which relates to the author’s abilities as a programmer, but the latter indicates a lack of appreciation of the elementary statistical tools and concepts which he uses not only in this paper, but also in all of his temperature reconstructions.
Re: RomanM (#19), Have you tried posting a copy of your comments in #17 on Real Climate? You might get an answer from Prof. Mann?
Re: RomanM (#19),
You want to check the body count, I always thought I’m the threadstopper ;)
Note that lowpassmin.m is used in almost all the m-files Mann provided in the PNAS SI. var instead of mse is a serious error, but as the whole idea of padding data to minimize (smoothed – real) is silly, I guess it doesn’t change the results. As Willis points out, this kind of smoothing is really a statistical exercise, where one needs to predict future values. Mann and Brohan seem to have difficulty understanding this.
Re: RomanM (#19),
Michael “I am not a statistician” Mann does not seem to be much of a programmer either. As I said in #1, it sure seems none of the reviewers bothered to read the code. On the other hand, it seems that they did not understand the paper either; otherwise there is no way they would have given a green light to this garbage.
MarkR, you say:
BWAHAHAHA … oops, sorry, Freudian slip, what I meant to say is that your suggestion would make an interesting experiment. I and others have had little to no success in this arena … however, RomanM, give it a try and report back what happens.
w.
Mann is clueless about smoothing. He thinks the best smoother is the one that minimizes the squared difference between the smooth and the last few datapoints in the series. He originally claimed that nonsense in his GRL paper about smoothing. I wrote a reply to GRL, but they said I was being too hard on Mann … could be, I don’t think much of his statistical abilities.
Think about his claim for a minute … the theoretical smoother that would minimize the difference between the smoother and the data points themselves is the smoother which does absolutely no smoothing at all. That would have a squared difference of zero … but I don’t think that’s what we want, is it?
In fact, we want the smoothing algorithm that minimizes the error, not between the smoother and the data points, but between the smoother and the eventual smoothed line (which, of course, will only be revealed when we have more data points in hand.)
However, despite the fact that we can’t check the difference between the smoother and the eventual smoothed line at the end of the dataset, we can test and compare it anywhere in the dataset other than within half the smoother width of the end. I have never found a dataset where that test showed that any of Mann’s options (minimum slope, minimum roughness, and minimum smooth) could outperform the smoothing algorithm which I developed and use myself. This means, of course, that his latest failed attempt (average his three smoothers) will also not outperform my algorithm.
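Willis's degeneracy argument can be checked in a few lines (a toy numpy sketch, with an ordinary moving average standing in for any smoother): judged purely by squared distance to the data itself, the "best" smoother is the one that does nothing at all.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=300))   # a random-walk stand-in series

def moving_average(x, k):
    return np.convolve(x, np.ones(k) / k, mode="same")

identity_mse = np.mean((x - x) ** 2)                      # exactly 0
smoothed_mse = np.mean((moving_average(x, 21) - x) ** 2)  # > 0
# any smoothing at all scores worse than no smoothing on this criterion
```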
I probably should repolish and resubmit my paper to GRL … another project for the list.
w.
re RomanM’s pdf (good read!) ,
Yes.
As I said, lowpassmin.m, is used everywhere in the new Mann paper. Uncertainty increases, this method is extremely sensitive to measurement errors, see
http://www.climateaudit.org/?p=1681#comment114704
and
http://www.climateaudit.org/?p=2541#comment188580
A similar thing happens with real statistical smoothing,
http://www.climateaudit.org/?p=1681#comment114062
Perhaps I should explain my method for handling missing data in a centered smoothing algorithm. I apply it with a Gaussian smoother, but it can be used with any kind of smoother.

A smoother usually works by multiplying a string of N data points by a “kernel”. The kernel is a string of values, of the same length N as the string of data. The kernel values have the additional property that they sum to one (1). Value by value the data is multiplied by the corresponding kernel value. Then the resulting amounts are summed. This sum is the “Gaussian average”. It is a weighted average, with the kernel containing the weight as a percentage for each data point.
Now the question arises, how do we deal with missing data? Start with a single missing data point. I reasoned that we know how much of the total weight we are missing. Let’s say that single missing data point gets 11% of the weight (the corresponding kernel value is 0.11). From that we know that on average we’re at 89% of the eventual value. So our best guess (absent other knowledge) is whatever total we had up to that point, divided by 0.89 (1 minus the missing data weight).
That was my insight. I just keep a running total of how much data I have in hand. If I end up with 73% of the data, I do not consider whether it is from the “past” or the “future”. I do not concern myself with whether the missing 27% is in one year, or three separated years, or three missing years in a row. I just divide my result by the percentage of data weight present. That’s the best estimator I’ve found for the missing data. It does not pad the data or extend it. It just takes what it has as the best guess, and adjusts the total accordingly.
The effect of this is that at the beginning and the end of a block of data, my algorithm transitions smoothly from being an acausal smoother (a centered smoother which averages based on both the future and the past data) to being a causal filter (an uncentered smoother which considers only past and present data).
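As a concrete sketch of the method Willis describes (my own Python rendering under stated assumptions: a Gaussian kernel, NaN marking missing values, and all names illustrative): compute the kernel weight that fell on real data and divide the weighted sum by that coverage.

```python
import numpy as np

def gaussian_smooth_missing(x, sigma):
    # Gaussian kernel smoother that renormalizes by the kernel weight
    # actually landing on available data (missing values are NaN)
    x = np.asarray(x, dtype=float)
    n = len(x)
    present = ~np.isnan(x)
    half = int(4 * sigma)
    out = np.empty(n)
    for i in range(n):
        idx = np.arange(i - half, i + half + 1)   # full kernel support
        w = np.exp(-0.5 * ((idx - i) / sigma) ** 2)
        w /= w.sum()                              # kernel sums to 1
        inside = (idx >= 0) & (idx < n)
        keep = inside & present[np.clip(idx, 0, n - 1)]
        # divide by the fraction of kernel weight that hit real data
        out[i] = np.dot(w[keep], x[idx[keep]]) / w[keep].sum()
    return out

x = np.ones(50)
x[10] = np.nan                          # one missing observation
out = gaussian_smooth_missing(x, 3.0)
```

Near the series ends, part of the kernel falls off the array and is treated exactly like interior missing data, which gives the gradual acausal-to-causal transition Willis describes.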
As far as I know I invented this method … but then, I don’t know very far, so it probably already has a name.
Since the transition from acausal to causal is slow and even at the end of the dataset, the possible error also increases slowly. (Remember, this is the error between this estimate and the final future smoothed actual result.) This is particularly true with a Gaussian smoother, because much of the weight is near the center. Because of this, the error at the end is minimized.
My best to everyone,
w.
PS – I also found that it is possible to improve this algorithm marginally, by noting that if the data is rising, the estimate by this method is low, and vice versa. This means that the error is somewhat correlated with the trend of the data. However, the gains in using this method are tiny, visually indistinguishable from the basic method.
Re: Willis Eschenbach (#25),
Can’t you replace your
nonsymmetric filter + no data padding
with equivalent
symmetric filter + data padding ?
Then you could compare more easily with Mann’s filter.
One should note that Matlab’s filtfilt.m does ‘minimum roughness’ anyway,
http://www.mathworks.com/access/helpdesk_r13/help/toolbox/signal/basic19a.html
Ah – but Mann and Brohan know what the future values are; their friends Dr Hansen and Dr Schmidt informed them, and they can therefore judge which is best by eye using this special knowledge. Only card-carrying climate scientists are endowed with this special knowledge, so only their judgement is considered credible.
;)
We’ve seen the same lack of understanding in MBH98. The entire apparatus of expanding the “reconstructed” temperature principal components to gridcells and taking averages leads to weights for the RPCs which do not change. The weights can be calculated once, saving millions of calculations.
Thanks to everyone for taking time to read the document and post comments. Let me reply to several of the observations and suggestions:
Re: MarkR (#20), and Willis
I get enough of being ignored or abused elsewhere. ;) I don’t see much point in taking it to a partisan environment which in the past has been unable to admit errors. If anything, submitting a note to GRL might be more to the point. However, for me the fun is always in the chase – doing the math, solving the problem, etc. Once that is done, writing up papers has always been an onerous task, bogged down in references and other niggly publication details. As well, I am still trying to decode the Matlab filtfilt algorithm’s way of incorporating its initial calculated “steady state” conditions into the filtering process. When I have that, I can then determine the effect of each observation on the smoothed value at any other place.
Re: Willis Eschenbach (#22),
It may be part of what we want under some circumstances if the intent is to find some sort of “optimal” smooth. This is why in the document I mentioned that Mann uses a “40 year” filter without ever defining it, and why it is important that it actually be defined in some rigorous way. Obviously, in the unconstrained case, a less smooth series can have a lower mse, and the original series fits itself best with mse = 0. However, if well-defined conditions are placed on the smoothed series, it makes the problem mathematically meaningful, and minimum mse can then be a measure of optimality. E.g., these conditions could be numeric bounds on the possible variation of the sequence. The omission of such conditions in the paper is one of the major reasons that the paper has no genuine scientific value. As well, that is also why any meaningful comparison of your filter to others needs to take into account the smoothness level required and/or achieved in both results.
Re: UC (#24),
Your references to your earlier comments are both interesting and helpful. After examining the calculations involved in the Matlab filtering functions, I would be hard put to come up with an interpretation of what the “40 year” smoothed temperature for a given year actually estimates or represents. There are many properties of the Mann smoothie that I doubt the users of it fully understand regarding the potential amount of bias in using that smoothed value to estimate what the actual temperature might have been. This leads to a more drastic underestimate of the uncertainty level. Using IPCC probability jargon, it is “almost absolutely certain” that most viewers of a hockey stick graph will be misled in interpreting a Mann smoothed graph.
Re: RomanM (#30),
I think I have it! Like UC, I also like to view a filter as a matrix which tells me what weight each observation in the sequence has on a given smoothed value. The complication in trying to do so in Mann’s case was not only the “optimal initial conditions” of filtfilt and how these initial conditions are implemented in the filter function, but the padding and even more padding in the black box of the Mann methodology. It occurred to me today to instead use the linearity of the overall procedure to construct the filter matrix. If you apply the smoothing to a sequence consisting of a one in the kth position and zeros everywhere else, the resulting smooth is the kth column of the matrix. I wrote a short program in Matlab to calculate the matrix using the lowpass.m function given by Mann:
Here, n is the length of the sequence to be filtered, freq is the “frequency” of the smooth using the Butterworth filter, and be and en determine the type of padding to be used at the beginning and end of the data sequence: 0 for mean padding, 1 for reflection and 2 for reflect-and-flip. The sequence is then multiplied by the matrix to apply the smooth. The result was checked for some of the be and en values against lowpass.m applied to the 158 values of the 1850-2007 Hadcrut temperature data supposedly used in the paper, and the differences were small enough to be accounted for by calculation rounding. (If Prof. Mann chooses to use the program, I hope he gives me some credit for it. :) )
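The same impulse-response trick works for any linear smoother, in any language. A generic numpy sketch (with a plain moving average standing in for lowpass.m): smooth each column of the identity matrix and stack the results.

```python
import numpy as np

def smoother(x):
    # stand-in black-box *linear* smoother (any would do)
    return np.convolve(x, np.ones(5) / 5, mode="same")

def smoother_matrix(smooth, n):
    # column k of the matrix is the response to a unit impulse at k
    return np.column_stack([smooth(e) for e in np.eye(n)])

n = 30
S = smoother_matrix(smoother, n)
x = np.random.default_rng(2).normal(size=n)
# multiplying by the matrix reproduces the black box exactly
```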
Some of the results are particularly interesting. For the case n = 158, freq = 1/40, be = 2, and en = 2 (the “minimum roughness” case at both ends – this is the padding shown in UC’s graph (#44)), the 2007 temperature has the weights:
0.61505, 0.70775, 0.80268, 0.89909, 0.99620
in determining the smooth for the years 2003 to 2007. Seems a little like end-pinning! Another interesting observation: which year has the second largest weight for the smooth at 2006? Nope, not 2006, but 1995! It should be noted that these weights are calculated without using the observed temperature sequence. Here is a graph of the contribution for each of the final 11 years towards the smoothing of the entire sequence (a plot of the last 11 columns of the matrix):
Note the effect of the last value on the smoothing at times far removed from it. It also shows the low weight of values near 2007 in determining the smoothing for their own year. That appears to be a side-effect of the padding process. The cases be = 0, en = 0, and be = 1, en = 1, are considerably more uniform.
What I don’t get is that the point of smoothing isn’t to minimize the MSE (assuming “optimal” means MMSE) between the data points and the smooth, it is to minimize the MSE between the true state of the system, i.e., the actual temperature or proxy as it is transformed into temperature, and the calculated smooth. A Kalman filter will do this, as an example (UC has some good posts on this floating around in here, too).
Mann’s solution is by no means optimal in any sense that I know of (and he apparently admits this, but uses it anyway). Nor is it “adaptive,” at least not in the sense that I ever learned (or used). Willis is correct, what he is trying to minimize is absolutely meaningless.
Good work, btw, RomanM. The use of var() to calculate the MSE rather than the mean of the squared differences is puzzling. Any fixed offset between smoothed and indata will get removed by var(), producing an incorrect result. Very strange, even if you ignore the purpose-of-smoothing issue. The more I think about this, the more baffled I get.
Mark
Re: Mark T. (#31),
I agree with you 100% that the paper is pretty much a meaningless exercise. In a sense, I suspect what Mann is trying to do is to represent the data points as closely as possible with a sequence of a given smoothness level, where he interprets “as closely as possible” as minimum mse. My guess is that he somehow had the misimpression that the means of the three fitting series are each equal to the mean of the original series, despite the fact that this will obviously not be the case. This would then imply to him that the weights should be constrained to have a total of one (for unbiasedness), the means of the differences would equal 0, and then he could calculate his mse using the built-in function var instead of mse (which is also built-in in Matlab).
What he sees as the “true state of the system” is the smooth in the interior of a series where the endpoints play a lesser role and where all three of his fitting smooths are almost identical. He spends a lot of effort trying to demonstrate that his method is “superior” to the other three individually when you truncate the series at selected points as if that would prove something. I think that it gets particularly bizarre when he states:
Then he goes on to look at the results of a bunch of models and states:
When you don’t have the real thing, bring out the models! But hey, this isn’t science, it’s climate magic.
Ugh, he really believes this, doesn’t he? Two words: Kalman Filter. He can at the very least find the MMSE estimate of the true series mean and compare his nonsense to that. There may actually be a paper in this idea.
Mark
I know nothing about low pass or Kalman, but I am not stupid : one cannot smooth out noise up to the last data point without inserting fake data.
In Mann08, the last part of the red line (the CRUT) is at the core of their argument. If one stops it at 1986, as one should when smoothing over 40 years of data ending in 2006, current temperature is no warmer than the MWP. No more argument. It seems to me these fancy smoothing algorithms should be denounced.
Re: Manny (#34),
You shouldn’t make statements that could be at odds with what you admit you don’t understand. Besides the fact that the Kalman filter is a one-step predictor, the “smoothed output” for the current point (present data) is a function of current and past inputs. The Kalman filter also outputs a prediction for the next state, but that’s immaterial.
What Mann does is based on a 40-point FIR filter that is not lagged, for which it is not possible to get a smooth up to the current data unless some tricks are employed (which result in threads such as this one).
Mark
Manny, you say:
Actually, I described how to do that exact thing without inserting fake data in #25.
w.
Re: Willis Eschenbach (#35),
But then your filter is no longer time-invariant, and comparison with middle values is not apples to apples. You can transform your filter to a time-invariant version, but then you’ll need ad hoc padding.
Re: Willis Eschenbach (#35),
Willis, you did and you didn’t. It may appear that the two methods of smoothing a sequence of values (padding, and altering the smoothing algorithm) are different, but in essence they are two equivalent formats for describing how a smoothing algorithm handles the endpoints.
In the case of padding, new “observations” are constructed from the existing values (e.g. by reflection, with or without flipping the values, repeating the end value, using the mean of all or part of the sequence, etc.). Then the same smoothing algorithm which has been used in the center of the series (in essence, using UC’s terminology, time-invariant) is applied to the padded series in order to extend the smoothing to the end. When the smoothing is done as a linear function applied to the sequence (as in a weighted moving average) and the padding values are also linear functions of the observed values (as in each of the cases that I mentioned above), it is possible to alter the smoothing algorithm (in a unique way) near the endpoints in a fashion to appear similar to what you do with your truncated “kernel”. When the revised algorithm is applied to the original unpadded sequence, it produces exactly the same results as the unaltered algorithm did on the padded sequence.
Conversely, it is not difficult to show that when using the method you describe, a unique set of “padding” values can be calculated in a linear fashion from the sequence. When these values are appended to the end of the sequence and you apply your smoothing algorithm (unaltered without truncation of the weights), you get exactly the same result as you did without padding. Each of these two methods can be made to look like the other in its usage.
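A miniature worked example of this equivalence (my own illustration, not from the thread's code): at the last point of a 3-point moving average, the truncated-and-renormalized kernel and the full kernel applied to one particular pad value give identical answers, and that pad value can be solved for uniquely.

```python
import numpy as np

x = np.array([2.0, 5.0, 3.0, 4.0])

# truncated kernel at the endpoint: renormalize [1/3, 1/3] to [1/2, 1/2]
truncated = (x[-2] + x[-1]) / 2

# solve (x[-2] + x[-1] + pad)/3 == truncated for the unique pad value
pad = 3 * truncated - x[-2] - x[-1]
padded_result = (x[-2] + x[-1] + pad) / 3
# the two formats give the same smoothed endpoint
```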
UC’s point of “comparison with middle values is not apples to apples” applies if you do not compare the padded version of your smoother at the endpoints. As well, if you wish to examine your method’s behavior at the endpoints with that of another method, you should express them both in the same format for the comparison to make sense.
I had written the above before seeing your latest post (while I had still not submitted this one). I will add that you are correct in that the error bars for any smoothing should be calculated using the format which gives an explicit expression for the relationship between the smoothed value and the original series. That is indeed the format which you prefer.
UC, thanks for your interesting post. You say:
Not sure what you mean by “time-invariant”, could you explain that? What I do is, as the filter approaches the end, it gradually changes from an “acausal” filter (averages using data from both before and after the date in question) to a “causal” filter (averages only using data from before the date in question). Is that what you mean by “time-invariant”?
And while comparison of end values with middle values is not “apples-to-apples”, the error is small even on the last data point, and decreases rapidly away from the end. So there is a difference … but in general it makes very little difference.
Also, using my method, the error bars for the final points are easy to calculate. This means that in good scientific fashion the best estimate plus error bars can be given, which of course once again makes the comparison apples to apples … what more do we need?
w.
Re: Willis Eschenbach (#38),
Just to see if I understand this point, I think that UC means the group delay changes as you approach the end point. That makes time nonlinear.
Re: Willis Eschenbach (#38),
See RomanM’s #40. I like to think of linear filters as matrix-vector multiplication, so: a time-invariant filtering matrix is Toeplitz, http://ccrma.stanford.edu/~jos/mdft/Matrix_Multiplication.html .
And again, a causal filtering matrix is lower triangular. I really don’t know how to classify filters that use invented future values ;) But let’s define these padded filters in your way; then the filter is acausal, except for the last value. That’s the only output that doesn’t use _real_ future values.
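These classifications can be checked mechanically (a numpy sketch with illustrative filters): the matrix of a time-invariant moving average is constant along its diagonals (Toeplitz, checked here on interior rows), while a causal exponential smoother gives a lower-triangular matrix.

```python
import numpy as np

n = 8

# time-invariant 3-point moving average (interior rows only)
S = np.zeros((n, n))
for i in range(1, n - 1):
    S[i, i - 1:i + 2] = 1 / 3
# Toeplitz: entries constant along each diagonal
toeplitz_ok = all(np.isclose(S[i, j], S[i + 1, j + 1])
                  for i in range(1, n - 2) for j in range(n - 1))

# causal exponential smoother: y[i] depends only on x[j] for j <= i
a = 0.5
C = np.array([[(1 - a) * a ** (i - j) if j <= i else 0.0
               for j in range(n)] for i in range(n)])
causal_ok = np.allclose(C, np.tril(C))
```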
It is easy to get confused with this stuff, Mann has two peer reviewed journal papers that aim to validate this:
For additive measurement errors this can be done. But in the sense Mann is using these figures (1987 … 2026 was the warmest 40-yr period ever), it is quite difficult to add error bars. We’d need a statistical model for the signal. And even with additive measurement errors, it seems that there’s a great confusion (*), see
http://signals.auditblogs.com/2008/09/25/mooreetal2005/
( About a paper that Rahmstorf cites, and then shows that IPCC projections underestimate the global warming .. )
(*) Me or Moore is confused. Or both are ;)
Roman M and DeWitt, thank you for your comments.
I’ll have to think about the question of padding. I see what you mean, that my procedure can be replicated by choosing an appropriate set of padding values. However, I don’t know that that makes the two equivalent, or that it means that my procedure depends on the padded values.
For example, a “causal” filter does not depend on future values. Now, this causal filter’s results at any point could be replicated by choosing an appropriate set of future values … but does that mean that it is not really a “causal” filter, since it now depends on future values?
Like I said, I’ll have to think about that. The best procedure, it seems to me, is just to add an error bar on the end of smoothed series.
w.
Re: Willis Eschenbach (#41),
A filter is “causal” if it only depends on prior time values at ALL points and not just at a single point. In no way can a causal filter be “replicated” by future values. Your filter looks causal at the endpoint, but if you think about it so does every padded filter as well since the padding is calculated using existing prior values.
In your case, by the way, the “replication” is unique, i.e. there is exactly one set of padding values which will give you the unpadded results. They are not too difficult to calculate using R.
RomanM, many thanks. I had verified for myself that the “replication” is unique by the ancient Australian method known as “suck it and see” …
With your definition, I guess any filter which is padded at the end would be neither causal nor acausal … which doesn’t make a whole lot of sense to me. I likely look at this a little differently because of the way that I came to design my filter. Initially, I was not concerned with the endpoints, but only with possible missing values in the dataset and how to deal with them. My method treats all missing values (past, present, or future) equally, and deals with them the same way. It treats missing future values in the middle of the dataset the same as missing past values in the middle of the dataset, or missing future values at the end of the dataset. Not sure if that makes it causal, acausal, both, or neither.
The reason I use my method, however, is not theoretical but practical — of all the methods I know of, including loess and lowess smoothing, my method gives the smallest errors. When I find a better method, I’ll switch.
w.
Excellent program, RomanM. Now, from this viewpoint MannGRL08 optimization problem gets interesting..
.. but let’s first check how these smoothers respond to white measurement errors with unity variance:
This roughness thing seems to amplify noise near the first points!
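For any linear smoother written as a matrix S, the response to unit-variance white noise can be read off the rows: the output standard deviation at time i is the norm of row i of S (the diagonal of S·S'). A numpy sketch (with a reflect-and-flip padded moving average standing in for Mann's Butterworth filter) reproduces both effects: amplified noise near the boundaries, and exact end-pinning at the final point.

```python
import numpy as np

n, k = 60, 11
half = k // 2

def smooth_reflect_flip(x):
    # pad both ends by reflecting through the boundary value (weight 2)
    left = 2 * x[0] - x[half:0:-1]
    right = 2 * x[-1] - x[-2:-2 - half:-1]
    xp = np.concatenate([left, x, right])
    return np.convolve(xp, np.ones(k) / k, mode="valid")

# build the smoother matrix from unit impulses
S = np.column_stack([smooth_reflect_flip(e) for e in np.eye(n)])

# output sd at each time for unit-variance white input = row norms of S
noise_sd = np.sqrt(np.sum(S ** 2, axis=1))
# interior sd is sqrt(k)/k; the boundary values are pinned (sd = 1)
```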
Re: UC (#47),
Your graph nicely demonstrates the need for large uncertainty bars on the smoothed values when using the reflect-and-flip padding!
I am still looking at the matrices to try to understand what is going on in the smoothing process by visually examining the matrices. Using the Matlab “surf” program, you get
It sort of looks like a hybrid space vehicle! This matrix uses simple reflection at each end of the series (corresponding to be = en = 1). There is a sideways “fin” along with some “wrinkles” near both ends of the main diagonal. From their position, I am guessing that they might be an artifact of the reflection padding which magnifies the effect of some of the observations.
Re: RomanM (#48),
Yes, and it is a formal explanation of this http://www.climateaudit.org/?p=1681#comment114704
We need one more matrix for Brohan,
http://hadobs.metoffice.com/hadcrut3/smoothing.html ,
continuing the series by repeating the final value.
Re: UC (#49),
Seeing this as a good way to learn some more Matlab, I wrote up a function to calculate the smoothing matrix for an end-pinned moving average with an arbitrary set of weights (it probably can be done more easily, but I learned some things writing it) and ran it for a sequence of length comparable to the one you used in your graph in #47.
The result:
Hadley says they use a 21-point binomial filter, which with that many points is close to being Gaussian. Unlike the Mann smoothie, points far enough away have no effect on the smoothed value at a given place. As you correctly suspected, the error bar for white noise at the last value is bigger (about twice as large) than at points in the middle of the sequence.
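The “about twice as large” figure is easy to verify. A Python sketch, assuming binomial weights C(20, i)/2^20 and the Hadley-style padding that continues the series by repeating the final value:

```python
import numpy as np
from math import comb

# 21-point binomial weights, C(20, i) / 2^20
w = np.array([comb(20, i) for i in range(21)], dtype=float) / 2.0 ** 20

# Interior point: for unit-variance white noise the smoothed sd is sqrt(sum w_i^2)
sd_mid = np.sqrt((w ** 2).sum())

# Last point: the 10 "future" values are copies of the final observation,
# so their weights pile onto it
w_end = w[:11].copy()
w_end[10] += w[11:].sum()
sd_end = np.sqrt((w_end ** 2).sum())

print(sd_end / sd_mid)  # about 1.8
```

The ratio comes out near 1.8: with repeat padding, more than half of the total filter weight at the endpoint lands on the single final observation.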
Re: RomanM (#50),
And that should be clearly visible in this figure http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/annual_s21.png
But it’s not.
I think something is fixed now, see Figure 6. of the new HadCRUT4 paper,
http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf
Interesting to see when Mann et al will become aware of this; the impulse response of lowpassmin.m is considerably longer than in the HadCRUT smoothing case..
Look how long it took them to figure out that their method exaggerates the end value of the series! After an inconvenient downturn in the monthly temperatures early in the year, they suddenly realized that they were misleading the public and needed to ensure that they no longer “cause much discussion” through the use of “inappropriate” methodology. After a possible full year or two of lower temperatures, they could conceivably just as suddenly realize that the error bounds are indeed inaccurate and the upper bounds need to be raised. Perhaps we could speed this process up. The site does state
Do you think they really mean it? ;)
Re: RomanM (#52),
These error bounds are clearly calculated incorrectly, and I’m afraid that this is not the only mistake in Brohan’s paper. It’s not a typo; it is a clear misunderstanding of how errors propagate.
BTW,
Re http://www.climateaudit.org/?p=3361#comment292113
2008/08 value seems to be 0.387. My prediction was
% Year Month 2 sigma Predict. +2 sigma
2008 8 0.15206 0.34864 0.54521
Not bad. But I guess GCMs did a better job.
ssatrend.m seems to be a nonlinear filter, so RomanM’s method cannot be directly used. But I made a small modification, linearization about a trend,
With this version the resulting matrix approximates the filter very well. And sensitivity to noise is again very clear:
(n=177; embedding dimension 30, as used in http://signals.auditblogs.com/2008/09/25/mooreetal2005/ )
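The linearization idea can be sketched numerically without ssatrend.m itself: perturb each input in turn around a baseline trend and difference the filter outputs, which yields an equivalent smoother matrix whose rows can then be analyzed for noise sensitivity as before. A Python illustration with a toy nonlinear filter standing in for the real one:

```python
import numpy as np

def toy_filter(y):
    """Stand-in for a nonlinear smoother (NOT ssatrend.m):
    a 5-point moving average plus a small quadratic term."""
    k = np.ones(5) / 5
    yp = np.pad(y, 2, mode="edge")
    return np.convolve(yp, k, mode="valid") + 0.01 * y ** 2

def linearize(filt, y0, eps=1e-6):
    """Equivalent smoother matrix: column j is d filt / d y_j at y0,
    estimated by one-sided finite differences."""
    n = len(y0)
    base = filt(y0)
    S = np.empty((n, n))
    for j in range(n):
        yj = y0.copy()
        yj[j] += eps
        S[:, j] = (filt(yj) - base) / eps
    return S

n = 40
trend = np.linspace(0.0, 1.0, n)          # linearize about a trend line
S = linearize(toy_filter, trend)
noise_sd = np.sqrt((S ** 2).sum(axis=1))  # sensitivity to unit white noise
print(noise_sd[:3], noise_sd[n // 2])
```

For a filter that is already linear, this recovers the exact smoother matrix (up to finite-difference rounding), which is a handy sanity check on the implementation.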
The asymmetry in the figure of #47 seems to be due to a small programming error: lowpass.m pads the beginning and the end differently (at the beginning, the first value is copied). Something that Mann can correct in the next version, without credit to CA.
Re: UC (#54),
I can’t seem to locate ssatrend.m with google. Any hints as to where it might be available?
Re: RomanM (#55),
R version here now: http://data.climateaudit.org/scripts/rahmstorf/functions.rahmstorf.txt
Jean S wrote about the m-version here (comment #15495)
Need to try #54 with M=15, as that value seems topical now ;)
Re: RomanM (#55),
Original Matlab version also now available at
http://data.climateaudit.org/scripts/rahmstorf/
Note my other comment showing that the Rahmstorf huffing and puffing merely ends up yielding a triangular filter.
Time to update, as 2008 is available, using HadCRUT and minimum roughness:
..and just another way to look at this:
Thanks, UC and Steve. I have now downloaded these files from the CA site.
I’ve been trying to familiarize myself with the inner workings of SSA. It looks like there may be some useful aspects of the methodology, but I’m not sure that smoothing as done by the team is necessarily one of them.
4 Trackbacks
[...] introduced himself to Climate Audit soon after he started his blog (here). He began with a variety of interesting technical analyses of Mann et al 2008 – [...]
[...] readers it is clear why Minimum Roughness acts this way, see for example RomanM’s comment in here (some figures are missing there, will try to update). But to me it seems that the methods used in [...]
[...] Willis, a couple of days after the release of Mann’s smoothing article, UC observed that there was an error in Mann’s published algorithm, which was reported at CA here. [...]