Mann's New Divergence "Theory": A Smoothing Artifact

Over at realclimate, Mann has advocated a new “explanation” of the Divergence Problem, raised by one of their readers in connection with IPCC AR4 Figure 6.10. Mann says that there is no Divergence Problem; he blames the reader for failing to understand boundary constraints in smoothed series. Raising the Divergence Problem at realclimate was so blasphemous that either Mann or his website hosts, Environmental Media Services, investigated the IP address of the person who dared to ask. The Divergence Problem is, however, a real issue and not simply an artifact of IPCC smoothing.

The Divergence Problem has been discussed on many occasions at this site. Previously I’ve reported the large-scale decline in ring widths noted in passing in Briffa et al 1998 (here, here and here, among others) and criticized Briffa’s cargo cult explanation of the decline:

In the absence of a substantiated explanation for the decline, we make the assumption that it is likely to be a response to some kind of recent anthropogenic forcing. On the basis of this assumption, the pre-twentieth century part of the reconstructions can be considered to be free from similar events and thus accurately represent past temperature variability.

The poster at realclimate posed the following question, which, by the way, was a question that Kurt Cuffey of the NAS Panel posed to various presenters. (In their report, the NAS panel did not firmly grasp this nettle, more or less adopting Cook’s version of the cargo cult explanation, discussed here. At the press conference, Cuffey said that the Divergence Problem was an important consideration in withdrawing confidence from the early estimates.) Anyway, here’s the question at realclimate:

What we are interested in in this thread, however, are the error bars on the temperature reconstructions from proxies. What is striking from the IPCC chart is that the “instrumental record” starts diverging seriously upwards from the “proxies” around 1950, and is in the “10%” overlap range by about 1980.

The simple read on this, surely, is that the proxies are not reflecting current temperatures (calibration period) and so cannot be relied upon as telling what past temperatures were either?

Here’s Mann’s answer, in which he attributes the Divergence Problem (something seriously discussed and left unresolved by the NAS Panel, and obviously very much on the mind of a serious scientist like Rob Wilson) entirely to smoothing:

[Response: Actually, you have mis-interpreted the information provided because you have not considered the implications of the smoothing constraints that have been applied at the boundaries of the time series. I believe that the authors of the chapter used a smoothing constraint that forces the curves to approach the boundary with zero slope (the so-called ‘minimum slope’ constraint). At least, this is what it is explicitly stated was done for the smoothing of all time series in the instrumental observations chapter (chapter 3) of the report. Quoting page 336 therein, “This chapter uses the ‘minimum slope’ constraint at the beginning and end of all time series, which effectively reflects the time series about the boundary. If there is a trend, it will be conservative in the sense that this method will underestimate the anomalies at the end.” So the problem is that you are comparing two series, one which has an overly conservative boundary constraint applied at 1980 (where the proxy series terminate), tending to suppress the trend as the series approaches 1980, and another which has this same constraint applied far later (at 2005, where the instrumental series terminates). In the latter case, the boundary constraint is applied far enough after 1980 that it does not artificially suppress the trend near 1980. A better approach would have been to impose the constraint which minimizes the misfit of the smooth with respect to the raw series, which most likely would in this case have involved minimizing the 2nd derivative of the smooth as it approaches the terminal boundary, i.e. the so-called ‘minimum roughness’ constraint (see the discussion in this article). However, the IPCC chose to play things conservatively here, with the risk of course that the results would be mis-interpreted by some, as you have above. -mike]

[Response: Well, no, actually the proper read on this is that you should make sure to understand what boundary constraints have been used any time you are comparing two smoothed series near their terminal boundaries, especially when the terminal boundaries are not the same for the two different series being compared. -mike]

[Response: p.s. just a point of clarification: Do the above represent your views, or the views of Shell Oil in Houston Texas (the IP address from which your comment was submitted)? -mike]

Here’s the offending IPCC Figure 6.10, together with original caption.

LINK TO IMAGE

Figure 6.10. Records of NH temperature variation during the last 1.3 kyr. (a) Annual mean instrumental temperature records, identified in Table 6.1. (b) Reconstructions using multiple climate proxy records, identified in Table 6.1, including three records (JBB..1998, MBH..1999 and BOS..2001) shown in the TAR, and the HadCRUT2v instrumental temperature record in black. (c) Overlap of the published multi-decadal time scale uncertainty ranges of all temperature reconstructions identified in Table 6.1 (except for RMO..2005 and PS2004), with temperatures within ±1 standard error (SE) of a reconstruction ‘scoring’ 10%, and regions within the 5 to 95% range ‘scoring’ 5% (the maximum 100% is obtained only for temperatures that fall within ±1 SE of all 10 reconstructions). The HadCRUT2v instrumental temperature record is shown in black. All series have been smoothed with a Gaussian-weighted filter to remove fluctuations on time scales less than 30 years; smoothed values are obtained up to both ends of each record by extending the records with the mean of the adjacent existing values. All temperatures represent anomalies (°C) from the 1961 to 1990 mean.

Now the IPCC figure has already expurgated the offending parts of three divergent series, which are truncated at 1960 (Briffa et al 2001; Rutherford, Mann et al 2005; Hegerl et al 2006), an expurgation discussed on several occasions, for example here, where I showed the truncation of the Briffa series in the spaghetti graph and the impact of the deleted data on the IPCC TAR spaghetti graph (since the same series is used in IPCC AR4, the impact will be the same). So it’s not just smoothing that’s involved here.

Although Mann told the NAS panel that he was “not a statistician”, he cited his own article on smoothing, which can be used to decode his idiosyncratic terminology. The caption to AR4 Figure 6.10 says that the series were smoothed with a 30-year Gaussian filter, dealing with endpoints by “extending the records with the mean of the adjacent existing values”, a fairly straightforward methodological description. Mann however describes the situation as follows:

I believe that the authors of the chapter used a smoothing constraint that forces the curves to approach the boundary with zero slope (the so-called ‘minimum slope’ constraint)

Consulting Mann’s article on smoothing, which needless to say is not a statistical authority and should not be relied on as one, one finds:

To approximate the ‘minimum norm’ constraint, one pads the series with the long-term mean beyond the boundaries (up to at least one filter width) prior to smoothing.

Another system of dealing with endpoints is to reflect the values for smoothing. Mann describes this system as follows:

To approximate the ‘minimum slope’ constraint, one pads the series with the values within one filter width of the boundary reflected about the time boundary. This leads the smooth towards zero slope as it approaches the boundary.

Mann based his speculation on the smoothing in Figure 6.10 on the smoothing method in Chapter 3 (lead author, Jones). Chapter 3 does indeed say:

This chapter uses the ‘minimum slope’ constraint at the beginning and end of all time series, which effectively reflects the time series about the boundary. If there is a trend, it will be conservative in the sense that this method will underestimate the anomalies at the end.

BTW, while I was checking this quote, I noted that Appendix 3.B on error measurement was excluded from the pdf version of chapter 3, but it is available online as supplementary material here.

So underneath the verbiage, Mann has incorrectly described the smoothing in Figure 6.10. Yes, IPCC chapter 3 said that they dealt with endpoints by reflection, but the caption to Figure 6.10 in chapter 6 explicitly says that they padded the endpoints with the mean of adjacent values. Mann goes on to recommend yet another alternative, in which the closing values are reflected not just about the time boundary but also about the final value, a system which he describes as follows:

A better approach would have been to impose the constraint which minimizes the misfit of the smooth with respect to the raw series, which most likely would in this case have involved minimizing the 2nd derivative of the smooth as it approaches the terminal boundary, i.e. the so-called ‘minimum roughness’ constraint (see the discussion in this article).

I wondered whether IPCC might have done this and checked the TAR for it. In the case of the truncated Briffa series, it would have led to a reconstruction that could only be described as Mann-dacious. Here’s what would have happened. They truncate the bad bit of the Briffa series, in which it goes down at the end of the 20th century to near all-time “cold” in the proxy index, leaving a series that ends in 1960 and is going up at the end. Juckes, Briffa, Zorita et al (submitted to CP) is even more aggressive, truncating in 1940! After 1940, the actual series goes down. But with Mann’s recommended smoothing, the closing trend would be reflected, enabling the smoothed version to go up even though the truncated values were going down. I wonder why Mann considers this “a better approach”.
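
To see concretely what these choices do at the endpoint, here is a minimal sketch in Python (a toy series and Gaussian weights of my own choosing, not the IPCC or Mann code), approximating the three treatments described above: padding with the mean of adjacent values (per the Figure 6.10 caption), reflection about the time boundary (Mann’s ‘minimum slope’), and reflection about both the time boundary and the final value (‘minimum roughness’). On a series truncated while still rising, the ‘minimum roughness’ version carries the closing uptrend through the boundary, while the other two damp it.

    import numpy as np

    def gauss_weights(sigma, half):
        # truncated Gaussian filter weights, normalized to sum to 1
        t = np.arange(-half, half + 1)
        w = np.exp(-0.5 * (t / sigma) ** 2)
        return w / w.sum()

    def smooth_with_padding(x, w, mode):
        half = (len(w) - 1) // 2
        if mode == "mean":          # pad with the mean of the adjacent existing values
            left = np.full(half, x[:half].mean())
            right = np.full(half, x[-half:].mean())
        elif mode == "reflect":     # reflect about the time boundary ('minimum slope')
            left = x[half:0:-1]
            right = x[-2:-half - 2:-1]
        else:                       # "reflect_xy": reflect about time boundary AND final value
            left = 2 * x[0] - x[half:0:-1]
            right = 2 * x[-1] - x[-2:-half - 2:-1]
        xp = np.concatenate([left, x, right])
        return np.convolve(xp, w, mode="valid")   # same length as x, window centred on each point

    # toy 'proxy' series: long flat stretch, then a rise that is cut off while still rising
    x = np.concatenate([np.linspace(-0.3, 0.0, 80), np.linspace(0.0, 0.4, 20)])
    w = gauss_weights(sigma=10.0, half=15)
    for mode in ("mean", "reflect", "reflect_xy"):
        print(mode, round(float(smooth_with_padding(x, w, mode)[-1]), 3))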

Bottom line: Mann’s answer is a non-answer. The Divergence Problem is not simply an artifact of smoothing in IPCC Figure 6.10. It’s a real problem.

101 Comments

  1. Steve Sadlov
    Posted May 30, 2007 at 9:26 AM | Permalink

    Dr. Mann is either playing games, or is seriously delusional. How can he fail to realize the obvious?

  2. Posted May 30, 2007 at 9:29 AM | Permalink

    In the absence of a substantiated explanation for the decline, we make the assumption that it is likely to be a response to some kind of recent anthropogenic forcing. On the basis of this assumption, the pre-twentieth century part of the reconstructions can be considered to be free from similar events and thus accurately represent past temperature variability.

    Sounds like a secular version of Intelligent Design theory. There are holes in our scientific knowledge, therefore the explanation must be Man. Why is that the default position? Why not simply say “We don’t know” and try to find out?

  3. Posted May 30, 2007 at 9:32 AM | Permalink

    All right, so we have the “divergence problem” all explained away. Now it’s time to find a solution to the “delusion problem” Mann is dealing with.

    I still don’t get how any “serious scientist”, as the true believers are called, can ignore the real-time divergence issues, the attempt to hide it by cutting out the most recent data from the proxy series, and the lack of replicatability of so many of these proxy analyses.

    PS. “Replicatability” – probably not a word. But if Mann can sift through statistical methodologies and choose the one that suits his needs, then I can make up words.

  4. John A
    Posted May 30, 2007 at 9:32 AM | Permalink

    Is anyone else disturbed by the fact that Mann cites either himself or his non-statistician co-authors rather than statisticians or standard statistical texts when dealing with issues like this?

  5. Posted May 30, 2007 at 1:51 PM | Permalink

    #4

    Yes, it is a bit annoying 😉

    But now the Team is in gridlock, with the smoothing divergence problem and the real divergence problem. They probably don’t know that.

  6. Craig Loehle
    Posted May 30, 2007 at 2:22 PM | Permalink

    In the following references to the divergence problem, the problem exists in the annual tree rings, not in the 30 year smoothed reconstruction.
    (Barber et al., 2000; Briffa, 2000; Briffa et al., 1998a, b; Briffa et al., 2004; Büntgen et al., 2006; Carrer and Urbinati, 2006; D’Arrigo et al., 2004; D’Arrigo et al., in press; Driscoll et al., 2005; Jacoby and D’Arrigo, 1995; Jacoby et al., 2000; Kelley et al., 1994; Lloyd and Fastie, 2002; Pisaric et al., 2007; Vaganov et al., 1999; Wilmking et al., 2004, 2005; Wilson and Luckman, 2003; Wilson et al., in press)

  7. x
    Posted May 30, 2007 at 2:33 PM | Permalink

    Is there any way to confirm whether the question in question at realclimate was truly posted from a Shell Oil IP address?

  8. bernie
    Posted May 30, 2007 at 2:41 PM | Permalink

    Doesn’t the smoothing argument get in part resolved if they bring the proxies up to date – at least in terms of providing more data to determine the possible biases introduced by the smoothing? Does the divergence problem get better defined if they bring the proxies up to date? What is the problem with bringing the proxies up to date? Of course then there may be even more pressure on disclosing more about the data and processes for handling the data. Have I missed something?

  9. JeffB
    Posted May 30, 2007 at 2:49 PM | Permalink

    I think the comment of “shell oil” pretty much sums it up for Mann.

  10. Follow the Money
    Posted May 30, 2007 at 2:51 PM | Permalink

    Raising the Divergence Problem at realclimate was so blasphemous that either Mann or his website hosts, the Environmental Defence Fund,

    Steve, that’s intriguing news in itself. Environmental Defense, at least in the USA, has been a long-time enabler of c _ _ _ _ _ c _ _ _ _ _ t _ _ _ ing 🙂 , giving the financial industry a false face of environmentalist support for years, even before 1998 Kyoto.

    In California now the local Democratic politicos are realizing the stupid trap they painted themselves into, now blaming Ahhnold for employing the legislation they themselves proffered. Here’s an example:

    http://www.contracostatimes.com/ci_6012884?source=rss

    Note the last sentences wherein the Environmental Defense flack tries to limit damage by oblique reference that Ahhnold’s showboating is problematic, not that the underlying law itself should be reexamined.

  11. bernie
    Posted May 30, 2007 at 2:52 PM | Permalink

    #7
    Even if it was a Shell IP, the very thought of someone who commented on a scientific blog having their IP address searched for and publicized is extremely troubling. Also I never realized that RC was hosted on an EDF server. Can someone confirm this?

  12. Mark T.
    Posted May 30, 2007 at 2:53 PM | Permalink

    Actually, you have mis-interpreted the information provided because you have not considered the implications of the smoothing constraints that have been applied at the boundaries of the time series.

    Is he really so oblivious to the concept of an FIR filter that he thinks this is a true statement? More folks versed in signal processing theory need to take a look at the stuff this joker is saying. He’s crossed a line into abject idiocy.

    BTW, note to Mann, “smoothing”, if applied causally, only uses PAST inputs, therefore there are NO edge effects that would cause a droop such as the divergence. If they are applying the smoothing acausally, i.e. the current output is dependent upon future inputs, then he is openly admitting a serious flaw in any of their reconstructions. What a maroon.

    Mark

  13. Posted May 30, 2007 at 2:57 PM | Permalink

    This from Ender at RC

    “It’s written by Craig Idso and is chock full of single proxy studies that ‘prove’ the MWP was warmer than today. I actually do not get why skeptics think that it is SO important what the temperature of the MWP was. Both events, recent anthropogenic warming and the MWP, to me are independent. Even if the MWP was proved in peer reviewed studies to be warmer than today that does not mean that it will not get warmer still from our greenhouse emissions.

    It also shows the double speak of the skeptics. On the one hand decrying the unreliability and so-called dodgy stats (M&M) in the hockey stick they then use similar proxy data with the same now strangely not dodgy stats to claim that the MWP was warmer than today.

    Stats are stats – tomato / tomoto – I guess all stats and methodologies are the same as far as these people go. This person doesn’t come here often I suppose.
    Then, a few comments down, a response from eric:

    [Response:There is a large literature on this. In my view one of the clearest papers is that by Crowley and Lowry in Science a few years ago. Here’s a link to the abstract.
    The temperature variations are probably much less than 1 degree C by the way. One of the points of his paper is that we understand the response of the climate to forcing (e.g. sun, CO2, aerosols) rather well, and looking at the past variations gives us about the same for the climate sensitivity (how much temperature will change for a given forcing) as we already had calculated from first principles. –eric]

    Talk about inconsistent and doublespeak. Is the MWP important…. or not?

  14. Dave B
    Posted May 30, 2007 at 3:04 PM | Permalink

    mann said:

    [Response: p.s. just a point of clarification: Do the above represent your views, or the views of Shell Oil in Houston Texas (the IP address from which your comment was submitted)? -mike]

    how is an IP address a “point of clarification?” apparently, only “evil oil” could conceive of such a way of destroying the “consensus of serious scientists.”

  15. Mark T.
    Posted May 30, 2007 at 3:05 PM | Permalink

    So underneath the verbiage, Mann has incorrectly described the smoothing in Figure 6.10. Yes, IPCC chapter 3 said that they dealt with endpoints by reflection, but the caption to Figure 6.10 in chapter 6 explicitly says that they used end point padding.

    Ohhh my garage. They really are applying an acausal filter to the data to model a real-world phenomenon. I’m stupefied.

    Mark

  16. Mark T.
    Posted May 30, 2007 at 3:08 PM | Permalink

    [Response: p.s. just a point of clarification: Do the above represent your views, or the views of Shell Oil in Houston Texas (the IP address from which your comment was submitted)? -mike]

    Do his comments represent his views, or the millions of dollars he and his co-authors are extorting from the rest of us dupes?

    Mark

  17. Steve McIntyre
    Posted May 30, 2007 at 3:24 PM | Permalink

    #11 this came up shortly after realclimate started. They acknowledged it in one of their early posts
    http://www.realclimate.org/index.php/archives/2005/02/a-disclaimer/ It’s Environmental Media Services and Fenton Communications, which I believe are linked to Environmental Defense Fund, but I’ll edit to reflect this.

  18. Craig Loehle
    Posted May 30, 2007 at 3:28 PM | Permalink

    When the punch line (what the public most wants to see) of the reconstructions is specifically the right-hand side (most recent decades) it is just inconceivable to do any sort of padding at all beyond the range of the data. It is also simply unbelievable to drop portions of the data you don’t like.

  19. Anthony Watts
    Posted May 30, 2007 at 3:31 PM | Permalink

    RE 11, 17 I’ve done the lookup, here it is:

    Domain ID:D105219760-LROR
    Domain Name:REALCLIMATE.ORG
    Created On:19-Nov-2004 16:39:03 UTC
    Last Updated On:30-Oct-2005 21:10:46 UTC
    Expiration Date:19-Nov-2007 16:39:03 UTC
    Sponsoring Registrar:eNom, Inc. (R39-LROR)
    Status:OK
    Registrant ID:B133AE74B8066012
    Registrant Name:Betsy Ensley
    Registrant Organization:Environmental Media Services
    Registrant Street1:1320 18th St, NW
    Registrant Street2:5th Floor
    Registrant Street3:
    Registrant City:Washington
    Registrant State/Province:DC
    Registrant Postal Code:20036
    Registrant Country:US
    Registrant Phone:+1.2024636670
    Registrant Phone Ext.:
    Registrant FAX:
    Registrant FAX Ext.:
    Registrant Email:betsy@ems.org
    Admin ID:B133AE74B8066012
    Admin Name:Betsy Ensley
    Admin Organization:Environmental Media Services
    Admin Street1:1320 18th St, NW
    Admin Street2:5th Floor
    Admin Street3:
    Admin City:Washington
    Admin State/Province:DC
    Admin Postal Code:20036
    Admin Country:US
    Admin Phone:+1.2024636670
    Admin Phone Ext.:
    Admin FAX:
    Admin FAX Ext.:
    Admin Email:betsy@ems.org
    Tech ID:B133AE74B8066012
    Tech Name:Betsy Ensley
    Tech Organization:Environmental Media Services
    Tech Street1:1320 18th St, NW
    Tech Street2:5th Floor
    Tech Street3:
    Tech City:Washington
    Tech State/Province:DC
    Tech Postal Code:20036
    Tech Country:US
    Tech Phone:+1.2024636670
    Tech Phone Ext.:
    Tech FAX:
    Tech FAX Ext.:
    Tech Email:betsy@ems.org
    Name Server:NS227.PAIR.COM
    Name Server:NS0000.NS0.COM

    and, here is the kicker, in Wikipedia they say the founder of the group is a former Al Gore aide, and the article references RealClimate.org too.

  20. Mark T.
    Posted May 30, 2007 at 3:34 PM | Permalink

    It is inconceivable to me that they would represent _current_ reconstructed outputs using _future_ inputs, whether padded or otherwise. Simply using a zero-phase filter (which I know Mann and Rutherford did in their 2006 non-effort) does not solve the issue. If a tree-ring is, by some freak chance, a valid indicator of temperature, smoothing the results in such a manner is laughable.

    Mark

  21. Steve McIntyre
    Posted May 30, 2007 at 3:39 PM | Permalink

    Think about what Mann’s “recommended” method would do to hurricanes using Emanuel’s 1-4-6-4-1 filter. His code is at: http://holocene.meteo.psu.edu/Mann/tools/Filter/lowpass.m

    The method recommended here:

    % (2) pad series with values over last 1/2 filter width reflected w.r.t. x and y (w.r.t. final value)
    % [imposes a point of inflection at x boundary]

    Since 2006 hurricanes are low, this becomes the reflection point. Think how non-robust this method is: the last point is singled out for special attention. If you reflect the 2004 and 2005 hurricane values about the low 2006 value, as Mann recommends, you get hugely negative hurricane PDI and counts (or, on a log scale, very low numbers).
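
    To illustrate on made-up numbers (a Python sketch, not Emanuel’s data and not Mann’s lowpass.m), here is what reflecting the closing values about both the time boundary and the final value does when the final value happens to be low:

        import numpy as np

        counts = np.array([12., 9., 15., 28., 27., 10.])  # made-up annual "storm counts"; low final year
        b = np.array([1., 4., 6., 4., 1.]) / 16.          # 1-4-6-4-1 smoother, half-width 2

        # 'minimum roughness' style padding: reflect the last half-width of values
        # about BOTH the time boundary and the final value: pad[k] = 2*x[-1] - x[-1-k]
        pad = 2 * counts[-1] - counts[-2:-4:-1]           # gives [-7., -8.]: negative "counts"
        padded = np.concatenate([counts, pad])
        smoothed = np.convolve(padded, b, mode="valid")   # the smooth now runs right up to the final year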

  22. Anthony Watts
    Posted May 30, 2007 at 3:41 PM | Permalink

    RE20, Mark, when smoothing tree rings, a belt sander is always best 😉

  23. aurbo
    Posted May 30, 2007 at 3:46 PM | Permalink

    …it is just inconceivable to do any sort of padding at all beyond the range of the data. It is also simply unbelievable to drop portions of the data you don’t like.

    It is not at all inconceivable, nor unbelievable, if one’s motive is to justify a hokey [sic] stick.

    Could the support for Mann-made AGW be starting to crumble?

  24. Mark T.
    Posted May 30, 2007 at 3:50 PM | Permalink

    They need a belt sander to clear away the flawed “science” they’ve managed to educate themselves with.

    The filter problem is solved readily by using only past values. Take the 1-4-6-4-1 filter (assume it is normalized for unity coefficient summation). The current output, say the year 2000 is then represented by:

    y(2000) = 1*x(2000) + 4*x(1999) + 6*x(1998) + 4*x(1997) + 1*x(1996)

    Mann’s method, however, would be:

    y(2000) = 1*x(2002) + 4*x(2001) + 6*x(2000) + 4*x(1999) + 1*x(1998)

    So the question is: how did the temperature in 2000 know what the temperature in 2002, or 2001, would be in order to be as it is?

    Overall, this is probably a poor choice for a filter type since 2 years prior to the current year is weighted heaviest. I’d think maybe some exponentially weighted filter with the largest value at the current year would be more appropriate.

    Mark

  25. Posted May 30, 2007 at 4:08 PM | Permalink

    More than you wanted to know about Betsy Ensley who set up realclimate.org

    Betsy Ensley, Web Editor/Program Coordinator: Betsy joined the staff of EMS in April 2002 as a program assistant for EMS’s toxics program. Presently [2005], she manages BushGreenwatch.org, a joint EMS-MoveOn.org public awareness website, and coordinates environmental community media efforts to protect and improve environmental and public health safeguards. Before coming to EMS, Betsy interned at the U.S. Department of State in the office of the Assistant Secretary, Bureau of Educational and Cultural Affairs. Betsy graduated with honors from the University of Iowa in 2000, where she majored in Global Studies with thematic focus on war, peace and security. She minored in Asian languages. From zoominfo Business People Info

  26. bernie
    Posted May 30, 2007 at 4:16 PM | Permalink

    Steve:
    I am not sure about the Environmental Defense Fund connection, but Environmental Media Services was started by David Fenton. The following link is interesting, especially as it somehow morphs from one group to another. I wonder if Wegman is looking for another population to try out his network analysis.

  27. Posted May 30, 2007 at 4:17 PM | Permalink

    Re #13: “[and looking at the past variations gives us about the same for the climate sensitivity (how much temperature will change for a given forcing) as we already had calculated from first principles. –eric]”

    As a neo-Aristotelian philosopher, I would like to know what first principles he is talking about. How did he arrive at these first principles? Are these so-called first principles apodictically true or merely current empirical wisdom?

    Re #19: “in Wikipedia they say the founder of the group is a former Al Gore aid, and the article references RealClimate.org too.”

    Not surprising on both counts. Regarding Wikipedia, Connolley is a dominant player, as an admin and editor. He and a bunch of fellow environmentalists dominate the editing of the global warming-related articles and they have often used RealClimate as an important source.

    Re #23: “Mann-made AGW”

    ROFL!

  28. bernie
    Posted May 30, 2007 at 4:26 PM | Permalink

    I think it would be smart and responsible to remove Ms Ensley’s name from this thread at this time. She is after all just the registrant of the site.

  29. Posted May 30, 2007 at 4:33 PM | Permalink

    Apparently, the demolition of Mann’s cherished hockey stick is a concerted campaign by the fossil fuel industry.

  30. Steve Sadlov
    Posted May 30, 2007 at 4:36 PM | Permalink

    But keep the Fenton reference. That is not irresponsible at all, since it’s an overt fact.

  31. Reid
    Posted May 30, 2007 at 4:53 PM | Permalink

    Fenton is a Democratic party public relations firm. Like all public relations firms, their business is favorable press coverage for cash.

    Their big client of the last 3 years has been Cindy Sheehan. All of her appearances have been stage managed by Fenton.

  32. Joel McDade
    Posted May 30, 2007 at 4:55 PM | Permalink

    I’m gonna upgrade my stock trading chartware and replace my simple moving averages with Mann’s look-ahead filter. I’m so confident that I’m gonna switch to the futures markets, for maximum leverage.

    I’ll give you an update.

  33. Steve Sadlov
    Posted May 30, 2007 at 5:00 PM | Permalink

    RE: #31 – Interesting how RC management has tried to make a point that the ownership of the server / site is only a “coincidental convenience,” and that they claim they have no specific affinity for Fenton et al and Fenton el al’s points of view. Curiouser and curiouser.

  34. moptop
    Posted May 30, 2007 at 5:28 PM | Permalink

    In the absence of a substantiated explanation for the decline, we make the assumption that it is likely to be a response to some kind of recent anthropogenic forcing. On the basis of this assumption, the pre-twentieth century part of the reconstructions can be considered to be free from similar events and thus accurately represent past temperature variability.

    We’re here because we’re here because we’re here because we’re here…

  35. Reid
    Posted May 30, 2007 at 5:29 PM | Permalink

    Re #33, I didn’t know of a connection between RC and Fenton. If true it means RC is paying for or being subsidized by highly partisan players. Fenton doesn’t provide PR services to the entire community. Only those with Democratic party objectives.

    Google is a treasure trove of Fenton facts.
    http://en.wikipedia.org/wiki/Fenton_Communications
    “They specialize in public relations for not-for-profit organizations, and state that they do not represent clients that they do not believe in themselves.”

    I wonder if they would represent Climate Audit? They “do not represent clients that they do not believe in themselves.” Sorry Steve, No Fenton for you!

  36. David Smith
    Posted May 30, 2007 at 5:33 PM | Permalink

    Mann raised the issue of the origin of posts, which raises a counter-question:

    Gavin Schmidt is listed as a US government employee (NASA GISS). Does he post during working hours? If so, does that mean he’s speaking on behalf of the US Government?

  37. Steve McIntyre
    Posted May 30, 2007 at 7:00 PM | Permalink

    #37. I wonder what realclimate’s IP screening policy is: do they check IP addresses of all posters? If they don’t, what are the criteria for doing an IP search? What was it about Richard Jones’ post that triggered the alarm bells at realclimate to do an IP search? Who did the search? Mann? Schmidt? Someone at Environmental Media Services? Or is the search being done by a volunteer to whom realclimate turned over the IP information?

  38. Craig Loehle
    Posted May 30, 2007 at 7:46 PM | Permalink

    Regarding the justification for a smoothing function. If your model is that x is caused by the smoothed data over the past y years, looking ahead is not permitted (it is acausal). However, if you compute the values for each year (as in climate reconstructions) and then just smooth the result for presenting it, this is not inherently bad. You must smooth it to see trends. The problem is smoothing it past the end of the data. If you apply a 20 year smoothing (+ and – 10 yrs from a given date) and your data ends in 1990, you really should only compute a smoothed curve up to 1980.
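
    For what it’s worth, a minimal Python sketch of that rule (toy numbers only): with a ±10-year centred window, the smooth is computed only where the full window exists, so a record ending in 1990 gives a smoothed curve ending in 1980.

        import numpy as np

        years = np.arange(1900, 1991)                  # annual data ending in 1990
        x = np.random.randn(years.size).cumsum()       # toy series
        half = 10                                      # +/- 10 years, i.e. a 21-point centred window
        w = np.ones(2 * half + 1) / (2 * half + 1)
        smooth = np.convolve(x, w, mode="valid")       # only where the full window is available
        smooth_years = years[half:years.size - half]   # 1910 .. 1980: stops 10 years before the data end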

  39. Curt
    Posted May 30, 2007 at 8:24 PM | Permalink

    Mark W, Mark T:

    The issue of “causal” versus “non-causal” filtering is subtler than you suggest. If filtering is called for (a separate issue), non-causal filtering can be perfectly valid for an after-the-fact reconstruction. Non-causal smoothing filters are used in many fields to reduce or eliminate the effect of high-frequency noise on the data stream because they simply do a better job than a causal filter would. In digital audio and telecommunications, most anti-noise filters are non-causal, with the data stream delayed by the (half-) length of the filter.

    My own field may help illustrate the issue. My company builds positioning controllers for precision mechanical systems such as robots and machine tools. Generally, there are position sensors but not velocity sensors. There are several reasons why velocity information is needed, and it must be derived through some kind of filter from the measured position data. The first reason is for real-time feedback. Obviously, this filter must be causal. Perhaps less obviously, this filter must be short, so the feedback loop can react quickly to changes. The second reason velocity information is needed is for real-time display of velocity for an operator interface. Again, this must be causal, but we have found that a longer filter is better here, to provide smoothness at the cost of delay (which usually is not noticed). The third reason is to produce after-the-fact plots for data analysis. Here we use a non-causal filter, because it produces a better estimate of the true velocity at each time instant. Periodically, someone will use the non-causal velocity estimate to try to re-create the action of the feedback loop, and I will have to take a long time to explain why that will not work very well.

    I believe I recall a CA thread in which someone (bender?) identified an inappropriate use of non-causal filtering, but this isn’t one of them.

    (By the way, I have a patent pending in which one of the enabling techniques is applying a non-causal filter with delay where traditionally causal filters have been used.)

    So I would argue that the use of non-causal filters for after-the-fact reconstructions is not necessarily incorrect. A bigger issue is whether filtering is appropriate at all. And one issue of this post is what to do when you get close enough to one end of the record that the symmetrical filter you have been using throughout the middle of the record cannot be used any more. IMO, the best thing to do is to cut off the smoothed plot before the end(s). If the filter involves 5 years of past and 5 years of future data, the smoothed plot ideally should end 5 years before the unsmoothed.

    As this thread indicates, there are a variety of techniques that can be used to smooth the end region of the data set, and they can be used for good or evil… 😉 Actually, I just finished an algorithm this month that employed the mirroring technique that Mann explains to compute smoothed corrections in the end regions of a table.

    But I completely agree with our host that Mann’s attribution of the divergence problem to sequence-end smoothing is simply preposterous.

  40. jae
    Posted May 30, 2007 at 8:44 PM | Permalink

    Interesting thread. If you have a lag in the dependent variable, say 1 year, like tree rings, is it not possible that Mann, et. al. have a point?

  41. Jim Johnson
    Posted May 30, 2007 at 10:33 PM | Permalink

    “And one issue of this post is what to do when you get close enough to one end of the record that the symmetrical filter you have been using throughout the middle of the record cannot be used any more. IMO, the best thing to do is to cut off the smoothed plot before the end(s). If the filter involves 5 years of past and 5 years of future data, the smoothed plot ideally should end 5 years before the unsmoothed.”

    I agree with this. And would add that, if the sum-total of the ‘gotcha’ that you are trying to show (i.e. the unprecedented, life-on-earth-threatening warming of the last 30 years or so) is substantially within the smoothing interval at the end of your data, then you should either:

    1. Not smooth the data.

    2. Use an asymmetric smoothing interval of past data only for the whole plot, or

    3. Sit on your hands for 30 more years until you have enough data to use your symmetric smoothing thru the entire period of interest.

    Simply cutting off the inconvenient downtrending data at the end of your dataset and pretending that they don’t exist (Briffa) or grafting on convenient ‘data’ from an incompatible source (hokey blade ‘instrumental series’ frankengraphs) are not honest options. Ditto the whacking off of inconvenient warm periods at the beginning of your data that would negate your claim of ‘it’s the hottest it’s been in a thousand years’.

  42. Jon
    Posted May 30, 2007 at 10:39 PM | Permalink

    I must echo #39. I did some contract work for a major search/adwords provider. They were interested in removing particular period trends from their data. The effect you’re referring to in this discussion is known as “group delay”: at time t the output of the filter describes the data at t-d. Usually a large group delay is accepted in exchange for linear phase response. This preserves the qualitative shape of the data sans the filtered frequency components. Alternatively, the group delay can be minimized (minimum phase filter) at the cost of nonlinear phase shifts with respect to frequency.

    Any kind of end-period padding technique will necessarily result in distortions of some kind. The ideal situation then is to handle this with truncation.

    It’s important to realize, though, that simply using a causal filter will not eliminate group delay.

    AFAIK, it’s amazing how frequently you see people forget to shift the x-axis of their filtered data to account for group-delay effects.
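
    A small Python sketch of that bookkeeping (my own toy series, just to show the shift): a symmetric linear-phase FIR smoother of length N has a group delay of (N-1)/2 samples, so each smoothed value should be plotted at the midpoint of its window rather than at the last year it uses.

        import numpy as np

        years = np.arange(1900, 2006)
        x = np.random.randn(years.size).cumsum()   # toy annual series
        b = np.ones(11) / 11.0                     # 11-year moving average (symmetric, linear phase)
        y = np.convolve(x, b, mode="valid")        # outputs where the whole window fits
        delay = (len(b) - 1) // 2                  # group delay = 5 years
        t = years[delay:delay + y.size]            # plot y against window midpoints, not endpoints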

  43. Ralph Becket
    Posted May 30, 2007 at 10:44 PM | Permalink

    Is smoothing applied for any reason other than to visually indicate a trend in noisy data?

    If so, what are these reasons and what criteria are used to choose a smoothing method?

    Mark’s point concerning acausal smoothing strikes me as rather significant: the assumption must be that climate over preceding years is an indicator of climate in future years. Given that, why not use, say, a simple exponential moving average that does not suffer from acausality?

    Four more questions, if I may:
    (1) how well do the various reconstructions agree with one another on a year-by-year basis?
    (2) How well do they agree with the instrumental record on a year-by-year basis?
    (3) Are the reconstructions based on sufficiently disjoint data sets that considering them as a group is statistically meaningful?
    (4) If so, how should one interpret the reconstructions taken as a group?

    Ta,
    — Ralph

  44. Jon
    Posted May 30, 2007 at 11:15 PM | Permalink

    Use an asymmetric smoothing interval of past data only for the whole plot, or

    No. This does not magically render the group-delay of the filter zero.

  45. Larry Huldén
    Posted May 30, 2007 at 11:37 PM | Permalink

    “If you apply a 20 year smoothing (+ and – 10 yrs from a given date) and your data ends in 1990, you really should only compute a smoothed curve up to 1980.”
    I think that this is an important point. You can however combine the smoothed curve ending in 1980 and add the actual data 1980-90 (in different colour). The curves can be connected using the difference between the end of the smoothed part in 1980 and the actual value for 1980. Then the reader can see the data trend from 1980 to 1990 in relation to the smoothed curve. Or add a linear trend 1980-90 to the smoothed part.

  46. Mark T.
    Posted May 31, 2007 at 12:03 AM | Permalink

    Non-causal smoothing filters are used in many fields to reduce or eliminate the effect of high-frequency noise on the data stream because they simply do a better job than a causal filter would. In digital audio and telecommuncations, most anti-noise filters are non-causal, with the data stream delayed by the (half-) length of the filter.

    In these fields, the data stream has certain properties: the signal is known to continue into the future and is known to be stationary to some extent (perhaps wide-sense stationary). Nature is causal, and climate series do not have the well-behaved properties normally attributed to signals in those other fields. A filter designed to represent a temperature series like this can _only_ be appropriate if it relies only on past inputs. Sorry, but I think your assessment of this is incorrect.

    Technically, any FIR filter can be made acausal simply by placing a shift in the data output, and causal again with another delay. If we implement your example of an anti-noise filter, for example, one must account for the acausality by using a delay to shift the current output to the current input (essentially). I.e., the implementation may be acausal, but the ultimate use of the data is not. The temperature series reconstructions are designed to show you what the global mean temperature was during a particular year. Any smoothing, as you suggested in your follow-up statements, is probably inappropriate, but an acausal smoothing is even more inappropriate.

    A quick read of any of the code that Mann has written and you’ll quickly discover that he does not understand any of the concepts you and I have just debated.

    Mark

  47. Mark T.
    Posted May 31, 2007 at 12:15 AM | Permalink

    Use an asymmetric smoothing interval of past data only for the whole plot, or

    No. This does not magically render the group-delay of the filter zero.

    Actually, there is such a thing as a zero-phase filter, and it is by definition acausal. Rather than implement the ZP filter, however, in RM2006 they implement a little MATLAB function known as filtfilt(). FiltFilt() takes a signal, filters it with the original filter, then flips the resulting sequence and repeats with the same filter. This produces a zero-phase filtering function. Mann doesn’t just use this as a graphical tool, he uses this within the reconstruction code itself as I recall.
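
    For readers unfamiliar with it, a bare-bones Python equivalent of that forward-backward trick (a sketch only; the real filtfilt additionally pads the ends to reduce startup transients):

        import numpy as np
        from scipy.signal import lfilter

        def filtfilt_naive(b, x):
            # filter forward, then filter the time-reversed result and reverse back;
            # the phase shifts of the two passes cancel, giving zero net phase (an acausal result)
            y = lfilter(b, [1.0], np.asarray(x, dtype=float))
            return lfilter(b, [1.0], y[::-1])[::-1]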

    While the debate that Curt and I have just had regarding the use of an acausal filter for display purposes certainly sits in the gray area of right/wrong*, the use of something like this within the actual processing is most definitely wrong with such data.

    Mark

    *(I think it produces misleading graphics for this type of display, which is just an opinion, his opinion differs, no major scientific disagreement that I can see).

  48. Mark T.
    Posted May 31, 2007 at 12:21 AM | Permalink

    (1) how well do the various reconstructions agree with one another on a year-by-year basis?

    The correlation is fairly high during the calibration period, though I don’t have actual numbers. After the point of “divergence,” that changes.

    (2) How well do they agree with the instrumental record on a year-by-year basis?

    Up to the point of divergence, fairly well. Again, no immediate numbers. After the divergence, well, it wouldn’t be called that if the tree-rings agreed, would it? 🙂

    (3) Are the reconstructions based on sufficiently disjoint data sets that considering them as a group is statistically meaningful?

    Think of a fairly common joke you can imagine. Then they get a divorce… got the picture?

    (4) If so, how should one interpret the reconstructions taken as a group?

    Same data, same methods with different window dressings, same results sort of. Not a surprise.

    Mark

  49. Mark T.
    Posted May 31, 2007 at 12:22 AM | Permalink

    Oh, that was supposed to be “fairly common [insert midwest/southern state here] joke”

    Mark

  50. Jim Johnson
    Posted May 31, 2007 at 12:44 AM | Permalink

    “No. This does not magically render the group-delay of the filter zero.”

    Perhaps, but I am not convinced that it is necessary that it does, given the purpose of the smoothing in this instance. What is important is that the smoothing be applied consistently thru the period of interest, for all datasets used in the comparison.

  51. Posted May 31, 2007 at 1:07 AM | Permalink

    RE Steve’s comment

    BTW the digital versions of the Figure 6.10 reconstructions are available (I think for all of them, although there are a couple of variations).

    Ok, the data is available; next we need the code. Note that mike doesn’t have that code, as he doesn’t know what was done with the boundaries:
    http://www.realclimate.org/index.php/archives/2007/05/the-weirdest-millennium/#comment-34104

    The caption doesn’t indicate how many adjacent values were used in calculating the mean used to pad the series. If the number of adjacent values used was one half filter width, then the boundary constraint is essentially identical to that achieved by reflecting the series about the terminal boundary, i.e. the ‘minimum slope’ constraint. Even if few adjacent values were used, the method still suppresses any trend near the boundary

    Anyway 6.10.c is the interesting one, as it combines original CIs with smoothed reconstructions (something similar to what Brohan did). And as for those boundaries, maybe they didn’t truncate the divergent series in 6.10.c.

  52. Jon
    Posted May 31, 2007 at 1:27 AM | Permalink

    “No. This does not magically render the group-delay of the filter zero.”

    Perhaps, but I am not convinced that it is necessary that it does, given the purpose of the smoothing in this instance. What is important is that the smoothing be applied consistently thru the period of interest, for all datasets used in the comparison.

    Ah. No, you don’t get it. If you compute the moving average over M+1 years, where K is the last year of the data set, then the last output of the filter is for year K – M/2.

    Actually, there is such a thing as a zero-phase filter, and it is by definition acausal. Rather than implement the ZP filter, however, in RM2006 they implement a little MATLAB function known as filtfilt(). FiltFilt() takes a signal, filters it with the original filter, then flips the resulting sequence and repeats with the same filter. This produces a zero-phase filtering function. Mann doesn’t just use this as a graphical tool, he uses this within the reconstruction code itself as I recall.

    The group delay of FiltFilt is not zero, although I agree that filtfilt will simulate a zero-phase filter. This is mentioned directly by the MATLAB documentation: “Note The length of the input x must be more than three times the filter order, which is defined max(length(b)-1,length(a)-1). The input x should be large enough so that the impulse is correctly represented. For example, for a fifth order filter, if the input sequence is a delta sequence, the 1 value should appear within the first 15 samples.”

  53. Bob Meyer
    Posted May 31, 2007 at 1:53 AM | Permalink

    While the discussion of causal and non-causal filters is entertaining I think that we should remember that science is based on empirical data and that means data that has already been taken, not data taken in the future.

    A system of ideas based on data from the future isn’t called science, it’s called prophecy and unless Mann can throw his staff on the ground and turn it into a snake I’m not going to believe anything he says based on future data.

  54. Gaudenz Mischol
    Posted May 31, 2007 at 2:04 AM | Permalink

    @16

    It just makes me think of the famous western:

    the good, the bad and …. Mike Mann

  55. PaulM
    Posted May 31, 2007 at 2:14 AM | Permalink

    A few brief remarks –
    * Mann has now edited out his remark about #47’s IP address.
    * Why don’t more of you post your criticisms at RC?
    * The filtering/smoothing argument is very amusing, since when Friis-Christensen did this in his solar cycle length work he was slated for it by them (see here). Hypocrites?
    * And the irony of these guys criticising someone else’s reconstruction!

  56. James Lane
    Posted May 31, 2007 at 2:43 AM | Permalink

    * Why don’t more of you post your criticisms at RC?

    Because it’s a waste of time. Cogent criticisms don’t get posted. It’s the main reason that RC is such a dull blog.

  57. nevket240
    Posted May 31, 2007 at 3:51 AM | Permalink

    As an avowed Eco-Heretic I love sites like this.
    People are being deliberately led away from the basic issue.

    snip

  58. Demesure
    Posted May 31, 2007 at 4:09 AM | Permalink

    #56 I concur
    A year ago, before the NAS and the Wegman reports, skeptics could still post at RC.
    Now a scientific criticism can be posted once, but your followup will be deleted (I presume by filtering out the poster by the first numbers of his IP address). To me, that’s a sign they are increasingly desperate to defend the stick.

  59. MarkW
    Posted May 31, 2007 at 4:50 AM | Permalink

    #13,

    That goes to show the statistical illiteracy of many in the AGW camp.
    Not all statistics are created equal.
    Bristlecone pine statistics are not the equivalent of O18 statistics.
    One is a good proxy for temperature, the other is not.

    There is nothing wrong with accepting O18, while rejecting bristlecone.

  60. Posted May 31, 2007 at 7:29 AM | Permalink

    (51)

    Oops, original post went to

    http://www.climateaudit.org/?p=1515#comment-111967 ,

    my mistake, wrong window and caching..

  61. Michael Jankowski
    Posted May 31, 2007 at 8:03 AM | Permalink

    RE#55

    Why don’t more of you post your criticisms at RC?

    Does “post” technically mean submitting a comment, or having RC allow the comment to be posted?

    I haven’t posted at RC in a long, long time. But I tried yesterday, about 20 hrs ago, and it’s not to be found. My post was absolutely harmless and reasonable.

    I wrote in response to:

    #49 – valid point that the 20th century and the MWP could have two completely independent warming causes, but that the “warmer” side is also hung up on the MWP, as represented by the propping up of the hockey stick, widespread claims of “warmest in 1,000 years,” the alleged “We have to get rid of the MWP” email, etc.

    #52 – many in the US have lived through perceived “greatest threats to civilization” in the past… diseases (cured, preventable, or now very treatable), Hitler/Holocaust, WWII in general, the Cold War, “the Population Bomb,” etc., and that the US might be sort of numb; that many people see the things #52 cares about with regard to children as what is important to devote money and resources to, as opposed to flawed climate programs; that many feel adaptation is more reasonable than attempts at slowing/prevention (if even possible); and that maybe it would help if the loudest crier of “greatest threat to civilization” would practice what he preaches.

    Now if a tame, even-handed post on the above subjects doesn’t get through, how is a post calling out Mann’s mistakes and commenting on the paranoid Big Brother watch going to get through?

  62. Jeff Norman
    Posted May 31, 2007 at 8:04 AM | Permalink

    Re: #12 Mark T.,

    “smoothing”, if applied causally, only uses PAST inputs, therefore there are NO edge effects that would cause a droop such as the divergence.

    I’ve been wondering about this. When I plot a running average of temperature in a trend, should the last year averaged be the year plotted (i.e. should the average of 1990 to 1999 be plotted as 1999)?

    And while you’re/we’re at it could someone describe in very simple words what a spline filter does to a data series?

    Re: #42 Jon,

    Any kind of end-period padding technique will necessarily result in distortions of some kind. The ideal situation then is to handle this with truncation.

    If this is true, then in your opinion would it be more responsible to truncate (in the same way, using the same rules) all the other data being presented with the truncated data?

    Re: #55 PaulM,

    * Why don’t more of you post your criticisms at RC?

    I don’t think James Lane’s response truly captures the scope of what is going on. Many of the people who post here have tried to post at Real Climate only to find:

    * Our comments edited/truncated to make the comment confused or meaningless;
    * Editorial comments inserted throughout the post in a way that breaks up any argument that is being made; and finally
    * Our posting rights revoked.

    It is hard to be heard when you are being censored. IIRC SteveM originally started Climate Audit after he discovered that he could not discuss the issues (defend himself from some very outrageous assertions) at Real Climate.

    SteveM,

    The Real Climate comment about Shell Oil allegedly made by Michael Mann (it was “signed” mike) reveals what kind of person he is.

    The realisation that this was over the top and needed to be changed reveals what kind of person he could be.

    The fact that he did not apologise goes back to the first point.

    But now I am confused. I thought Shell and BP were the good oil companies and that EXXON was the bad oil company. Does this then reveal a mindset that there are in fact no good oil companies and all the pandering made by Shell and BP to the so called environmentalists will be for naught?

  63. Demesure
    Posted May 31, 2007 at 8:14 AM | Permalink

    #62, I thought Exxon was the good guys too??? That’s their official position (I know, I know, to some, the best way for Exxon to be good is to commit suicide and its executive board is pondering the move) :

    Climate remains today an extraordinarily complex area of scientific study. The risks to society and ecosystems from increases in CO2 emissions could prove to be significant – so despite the areas of uncertainty that do exist, it is prudent to develop and implement strategies that address the risks, keeping in mind the central importance of energy to the economies of the world.

  64. steven mosher
    Posted May 31, 2007 at 8:14 AM | Permalink

    Mann’s problem is easily solved. Append a temperature projection from a GCM beyond the end of the instrumental record
    until 2030. Then use the tree ring response function with some added noise to create a tree ring future,
    and then just smooth out to the future. Do a chart, terminate the smooth at 2007 and then claim a new statistical
    approach. Seriously, if you trust a GCM to project the future temperature record, then just go ahead and “extend”
    the instrument “record” forward. And if you trust your tree ring response function, do the same thing there.
    Simulated temp, simulated precip, simulated tree rings. Then these pesky smoothing issues go away.

    That’s my audition to join the team! how’d i do?

  65. KevinUK
    Posted May 31, 2007 at 8:25 AM | Permalink

    #28 bernie

    It would be very easy for Betsy to change this information if she wants to. Perhaps now that a clear link has been established between RealClimate.org, EMS, Fenton and Al Gore, then maybe, after being told about this thread, she will do so. That’s up to her.

    KevinUK

  66. Jaye
    Posted May 31, 2007 at 8:51 AM | Permalink

    A bio for David Fenton…

    David Fenton Bio

  67. Jim Johnson
    Posted May 31, 2007 at 8:51 AM | Permalink

    Jon,

    “Ah. No you don’t get it. If you compute the moving average over M+1 years. Where K is the last year of the data-set, then the last output of the filter is the K-M/2 year.”

    I get it. I’m just not convinced that it is terribly important wrt the purpose of the smoothing in this instance. Which purpose I see as strictly data visualization: smoothing out the bumps to provide a graph that is more pleasing to the human eye, that permits ocular estimation of trend and visual comparison of such between multiple datasets.

    And I think that is the only legitimate purpose of smoothing in this instance. If someone were attempting to use the smoothed data quantitatively, or to compare to other datasets that were not smoothed by the same method … or were incorporating such smoothing into their temp-from-tree-rings predicting algorithm … then both the issue you raise (time shift) and the issue Mark raises (causality) would be very important.

    But I don’t think that using these smoothed presentation graphs as input for some further analysis is kosher, so properties of the smoothing that don’t affect visualization are moot. The various truncating, end-padding and reflecting/double-reflecting strategies do affect visualization in ways that are misleading, and which can be chosen to be misleading in particular directions. And Mann, as evidenced by his expressed preference for the ‘keep flipping it until the trend goes up’ methodology, has found that to be an irresistible temptation.

  68. Steve McIntyre
    Posted May 31, 2007 at 8:54 AM | Permalink

    #55. Here’s a couple of posts on my experience on trying to post at realclimate:
    http://www.climateaudit.org/?p=389
    http://www.climateaudit.org/?p=419

    Their censorship of opposing views is notorious and shows the hypocrisy of their stated comment policy.

  69. bender
    Posted May 31, 2007 at 9:20 AM | Permalink

    re: #55

    Why don’t more of you post your criticisms at RC?

    #56 has it exactly right. Way too much rhetoric from the peanut gallery means they waste your time. Heavy-handed censorship means they can make you look weak if they want to. And they do. Heavy-handed cutting of commentary means you must always genuflect to avoid any semblance of criticizing their authority. There are so many fundamentalist alarmists commenting there you get all angles of illogical presumptuousness. The continual dodge and weave will drive you crazy. Basically, it is not an open forum for critical thought. It is a hierarchy-reinforcing mechanism for consensus-keeping. The feedback from the top, provided through the inline commentary, is fine. But you get one answer, and that’s the final word. No further questions, please. No debate.

    Problem is: these guys ain’t statisticians. They’re very weak on that point and they know it. And so they don’t like questions in that area. That’s why Wegman was so devastating. Why would I ask statistical questions to a group that can’t answer them? To expose their ignorance? They won’t let you do it. They will use their heavy editorial hand to stop you. You can’t win.

    CA regulars know this already. I just learned it for myself.

    The real question is: why don’t RC regulars post at CA? [Ans: Because they’re in love with their hypothesis.]

  70. Mark T.
    Posted May 31, 2007 at 9:30 AM | Permalink

    * The filtering/smoothing argument is very amusing, since when Friis-Christensen did this in his solar cycle length work he was slated for it by them (see here). Hypocrites?

    If it had been brought up as a topic of discussion, I would have been equally harsh. Hypocrisy requires that we are all aware of such things. Most of us have jobs, so we only have time to post on things that are at the forefront of discussion. Also, as many have mentioned, the smoothing is mostly for graphical purposes, which I think is inappropriate even though it doesn’t particularly modify real results. Used within a computation, however, it is a definite problem.

    The realisation this was over the top and needed to be changed reveals what kind of person he could be.

    No, it simply means he knows he’ll get beat up over this. In his world view, everybody that doesn’t believe in his ways must be bought by special interests. He’s not stupid, however, statistical deficiencies aside, and he realizes that getting “caught” with such a statement hurts his cause.

    Mark

  71. Mark T.
    Posted May 31, 2007 at 10:01 AM | Permalink

    The group delay of FiltFilt is not zero.

    Agreed.

    Mark

  72. Posted May 31, 2007 at 10:03 AM | Permalink

    re: 11 & others

    Mann behaves like a politician [that is, like he has something to hide] – he knows he’s a leading figure in the AGW debate, so he and the team keep up an aggressive posture to scare away the weak but also to keep the faithful. By speaking always in absolute terms regarding the science and by keeping up the personal attacks on skeptics, they keep the average environmentalist always questioning the skeptics’ motives and never engaging the science debate itself.

  73. Stan Palmer
    Posted May 31, 2007 at 10:16 AM | Permalink

    re 69

    Motivated Reasoning

    There are so many fundamentalist alarmists commenting there you get all angles of illogical presumptuousness. The continual dodge and weave will drive you crazy. Basically, it is not an open forum for critical thought. It is a hierarchy-reinforcing mechanism for consensus-keeping. The feedback from the top, provided through the inline commentary, is fine. But you get one answer, and that’s the final word. No further questions, please. No debate.

    http://mixingmemory.blogspot.com/2006/03/motivated-reasoning-i-hot-cognition.html

    The blog posting at the URL above gives a summary of the ideas behind the concept of “motivated reasoning”, a relatively new concept from cognitive science. A pertinent extract from this posting is:

    These situations imply that motivated reasoning is in fact our default mode of reasoning; the one that we revert to when we are threatened, when our cognitive resources are limited, or when we aren’t highly motivated to make an effortful attempt to come to the objectively “right” answer. Interestingly, under this theory, motivated reasoning is automatic, relatively effortless, and likely occurs below the level of awareness. This allows for what Kunda calls the “illusion of objectivity,” which is the belief that the conclusion at which we’ve arrived is the objectively right one, even though the processes through which we’ve arrived at it were biased

    This seems to be an accurate description of what you see going on at RealClimate. It is a natural form of reasoning, but it is not scientific in the strict sense of the word, however much it may describe science as it is actually practiced. Can one imagine a peer review process working well in a community with such a strong belief in the importance of its findings?

  74. Mark T.
    Posted May 31, 2007 at 10:38 AM | Permalink

    It happens in here as well, though the fact that Steve M. and John A. let most of the chaff through the gates tempers the result. I.e., there are enough opposing or different viewpoints that those stemming from “motivated reasoning” get uncovered. Another difference is that many in here tend to accept the “proof” of the error of their ways, though not always.

    Mark

  75. bernie
    Posted May 31, 2007 at 10:41 AM | Permalink

    #65
    Kevin:
    I still think that, as a matter of common courtesy and as a means of avoiding inappropriate intrusions (or charges thereof) on named persons who are not “public figures”, we should refrain from providing all but minimal information about any third parties. Not everybody on the Internet is as polite and civilized as the regular posters on this site. It only takes one troll deciding to make mischief.

    My earlier question remains unanswered:

    Doesn’t the smoothing argument get in part resolved if the proxies are brought up to date – at least in terms of providing more data to determine the possible biases introduced by the smoothing? Does the divergence problem get better defined if they bring the proxies up to date? What is the problem with bringing the proxies up to date? Have I missed something?

  76. Steve Sadlov
    Posted May 31, 2007 at 10:42 AM | Permalink

    RE: #36 – Regarding abuse of NASA IT systems, I’ve personally witnessed (e.g. by receiving the resulting chains of email, with headers and addresses intact) NASA IT systems being used for coordinating anti-war rallies, spreading disinfo and slanderous statements about the chief executive of a large English-speaking Western Hemisphere country, passing along radical and extremist literature, passing along links to web sites in countries on the US export control “restricted” and “embargoed” lists …. and it goes on and on. Gavin, if he’s using NASA IT systems, is relatively tame compared to what I’ve witnessed. Not that it excuses it … I simply wanted to state just how bad the overall problem is. Maybe the new management will clean house soon. Got to get things in order before the Orion program really starts to crank. I digress …

  77. Steve Sadlov
    Posted May 31, 2007 at 10:52 AM | Permalink

    RE: #66 – Fascinating.

  78. Ken Fritsch
    Posted May 31, 2007 at 10:55 AM | Permalink

    I have a bit of a complaint about the difference in attention paid by responders of this blog to this thread about a Mann distraction (I see no meat or new information coming out of his remarks or a discussion of them) versus the thread “New Light on Old Fudge”, where Hakan Grudd’s PhD thesis and related papers online were linked by Steve M.

    I read the Grudd link and would recommend it to the layperson seeking a more fundamental understanding of the current state of the science of the dendrochronologists/dendroclimatologists (dendros).

    Grudd explains in rather detailed fashion the complicated methods the dendros use in attempting to extract a temperature signal from a combination of TRW and MXD, and he appears less hesitant to indicate some of the uncertainties involved in the process than some other dendros I have read.

    It would appear from the work of Grudd and Rob Wilson that I have read that they have more recently found a better calibration correlation when using variables heavily weighted towards MXD, but including TRW and TRW lagged one year with significantly less weighting. Grudd’s papers/thesis discuss the divergence problem in detail and indicate that by using the heavily weighted MXD regressions with samples from younger trees in the post-1980 period, the divergence problem is eliminated. He also talks about these modified reconstructions showing several periods in the past as warmer than the current warm period, differing in this respect from the older reconstructions of Briffa. He does not offer much in the way of a deterministic explanation for why including the younger trees eliminates the divergence phenomenon.

    Grudd, evidently like other dendros, does not approach or test the divergence problem as a failure of out-of-sample results, but instead, like the stock-investing schemes I have witnessed, comes up with “new and improved” models using recent times for calibration and, in the process, further delays the out-of-sample testing of the new and improved version. His use of half the calibration period to test against the other half as verification, and then reversing the periods for a second calibration-versus-verification test, is wholly unconvincing to me, since it proves nothing if the original period used for calibration and verification was data-snooped and over-fit.

    While I continue to see much potential danger from data snooping in Grudd’s approach, I think the details of the process that he presents and the less defensive approach of his writing make the “New Light on Old Fudge” thread well worth a further look and discussion by posters and readers at this blog.

    • bender
      Posted Jun 24, 2010 at 11:05 AM | Permalink

      Grudd’s papers/thesis discuss the divergence problem in detail and indicate that by using the heavily weighted MXD regressions with samples from younger trees in the post-1980 period, the divergence problem is eliminated.

      “Eliminated”? As in “rectified”? Or as in “obscured”?

  79. Posted May 31, 2007 at 11:00 AM | Permalink

    Mann’s problem is easily solved. Append a temp. projection from a GCM beyond the end of the instrument record
    until 2030. Then use the tree ring response function with some added noise to create a tree ring future,
    and then just smooth out to the future. Do a chart, terminate the smooth at 2007 and then claim a new statistical
    approach. Seriously, if you trust a GCM to project the future temp record, then just go ahead and “extend”
    the instrument “record” forward. And if you trust your tree ring response function, do the same thing there.
    Simulated temp, simulated precip, simulated tree rings. Then these pesky smoothing issues go away.

    That’s my audition to join the team! how’d i do?

    # 64…. Not bad. Not bad at all.

  80. Mark T.
    Posted May 31, 2007 at 11:09 AM | Permalink

    I have a bit of a complaint about the difference in attention paid by responders of this blog to this thread about a Mann distraction

    Mann is just an easy target due to his own arrogance and generally flawed logic.

    Mark

  81. Roger Dueck
    Posted May 31, 2007 at 11:28 AM | Permalink

    PaulM #55 re RC post

    Astute readers will notice that there is a clear problem here. The widespread predisposition to believe that there must be a significant link and a lack of precise knowledge of past changes are two ingredients that can prove, err…., scientifically troublesome.

    This in reference to solar?! I thought the RC team was discussing the divergence problem!

  82. Michael Jankowski
    Posted May 31, 2007 at 12:41 PM | Permalink

    RE#75:

    What is the problem with bringing the proxies up to date? Have I missed something?

    http://www.climateaudit.org/index.php?p=89

    From Mann himself (quoted at the above page, and/or see the response to #4 here):

    “Most reconstructions only extend through about 1980 because the vast majority of tree-ring, coral, and ice core records currently available in the public domain do not extend into the most recent decades. While paleoclimatologists are attempting to update many important proxy records to the present, this is a costly, and labor-intensive activity, often requiring expensive field campaigns that involve traveling with heavy equipment to difficult-to-reach locations (such as high-elevation or remote polar sites). For historical reasons, many of the important records were obtained in the 1970s and 1980s and have yet to be updated.”

    Mann later links to a reconstruction graphic that he claims shows that the historical records continuing to near the present do follow the surface record. Aside from any tinkering that may have been done with divergent proxies in order to get them to follow the surface record, the spaghetti graphic is so small that you can’t tell if what he’s saying is true. Upon saving the image and zooming in 800%, this reconstruction “navy blue curve” disappears in the spaghetti.

    Anyhow, the actual thread has a lot more laughs to it. I recommend it.

  83. KevinUK
    Posted May 31, 2007 at 1:13 PM | Permalink

    #66 and #77

    Fascinating indeed! Looks like Mr Fenton is a man we should all emulate (NOT!). Are communist regimes and terrorist groups classed as non-prophet making organisations in the US?

    KevinUK

  84. Posted May 31, 2007 at 1:56 PM | Permalink

    Here’s the part we need to replicate:

    Steve, help, published CIs of those reconstructions, please 😉

    (fig url http://www.geocities.com/uc_edit/ch6.jpg)
    Information must be reliable, i.e., replicable (repeatable) as well as valid (relevant to the inquiry).

  85. steven mosher
    Posted May 31, 2007 at 3:26 PM | Permalink

    can you guys give me the cliff notes version of the difference between acausal versus causal filters.
    or better link some good stuff so we don’t jack the thread

  86. Mark T.
    Posted May 31, 2007 at 4:47 PM | Permalink

    can you guys give me the cliff notes version of the difference between acausal versus causal filters. or better link some good stuff so we don’t jack the thread

    Wikipedia is your friend.

    Short simple answer…

    causal: current output of a system is dependent upon current and past inputs and outputs only. i.e., cause precedes, or is coincident with, effect. y(n) = a1*x(n) + a2*x(n-1) + … + b1*y(n-1) + b2*y(n-2) + …

    acausal, aka non-causal: current output depends upon current, past and future inputs and outputs. the right hand side of the above equation may then contain some n+1, n+2, etc., terms.

    anticausal: current output depends only upon future inputs. the right hand side of the above equation would contain ONLY >n terms, i.e. n+1 would be the minimum.

    Nature is generally assumed causal, but relativity can muck with that (I suppose “relative causality” would be in order). An acausal filter can easily be made causal simply by sliding the output window. For example, if we had

    y(n) = a1*x(n+1) + a2*x(n) + a3*x(n-1)

    we could simply adjust y by adding a delay to the inputs for

    y(n) = a1*x(n) + a2*x(n-1) + a3*x(n-2)

    This will change the apparent outcome, particularly in feedback systems.
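
    A throwaway Python sketch of that shift (my own illustration, arbitrary coefficients, nothing from any paper):

    ```python
    import numpy as np

    a1, a2, a3 = 0.25, 0.5, 0.25                 # arbitrary filter weights
    x = np.sin(2 * np.pi * np.arange(20) / 10.0)  # toy input signal

    def acausal(x):
        # y(n) = a1*x(n+1) + a2*x(n) + a3*x(n-1): needs a "future" sample
        y = np.full_like(x, np.nan)
        for n in range(1, len(x) - 1):
            y[n] = a1 * x[n + 1] + a2 * x[n] + a3 * x[n - 1]
        return y

    def causal(x):
        # same coefficients slid one step: y(n) = a1*x(n) + a2*x(n-1) + a3*x(n-2)
        y = np.full_like(x, np.nan)
        for n in range(2, len(x)):
            y[n] = a1 * x[n] + a2 * x[n - 1] + a3 * x[n - 2]
        return y

    # The causal output is just the acausal output delayed by one sample,
    # except at the boundaries, where the acausal version needs data that
    # does not exist yet.
    print(np.nanmax(np.abs(acausal(x)[1:-1] - causal(x)[2:])))  # ~0
    ```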

    Mark

  87. Mark T.
    Posted May 31, 2007 at 4:49 PM | Permalink

    Oh, I should add that most texts list the b coefficients on the inputs and the a coefficients on the outputs. A minor nit.

    Mark

  88. steven mosher
    Posted May 31, 2007 at 7:31 PM | Permalink

    RE 86. Thanks Mark, I should always wiki before posting. Thanks for the cliff notes version.

  89. Mark T.
    Posted May 31, 2007 at 10:25 PM | Permalink

    The wiki article does not explain it that way, btw.

    Mark

  90. Posted May 31, 2007 at 11:54 PM | Permalink

    See also

    http://ccrma.stanford.edu/~jos/filters/Zero_Phase_Filters_Even_Impulse.html
    and links therein.

    My short explanation:
    A linear filter for a finite data set can be written as y = Kx, where y is the filter output vector (column vector, latest value at the bottom), the matrix K is the filter and x is the input data. If the matrix K is lower triangular, we have a causal filter. We didn’t have such a restriction on K in

    http://www.climateaudit.org/?p=1515#comment-108922

    answer

    http://www.climateaudit.org/?p=1515#comment-109168

    which completes my incomplete proof that the optimal non-causal filter is better than (or equal to) the optimal causal filter. But note that mike and others don’t talk to mainstream statisticians / signal processing people, so their non-causal filters are just for visualization / joking.
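
    To make the matrix picture concrete, a toy Python sketch (my notation, not UC’s code; the 3-point averages are arbitrary examples):

    ```python
    import numpy as np

    N = 8  # length of the finite record

    # Causal 3-point average: y(n) uses x(n), x(n-1), x(n-2)
    # (the first rows simply have fewer taps at the start-up boundary)
    K_causal = sum(np.diag(np.full(N - d, 1.0 / 3.0), -d) for d in range(3))

    # Centered 3-point average: y(n) uses x(n-1), x(n), x(n+1)
    K_centered = sum(np.diag(np.full(N - abs(d), 1.0 / 3.0), d) for d in (-1, 0, 1))

    print(np.allclose(K_causal, np.tril(K_causal)))      # True: lower triangular, causal
    print(np.allclose(K_centered, np.tril(K_centered)))  # False: needs future inputs
    ```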

  91. wxm
    Posted Jun 1, 2007 at 2:16 AM | Permalink

    Haha, so many people love climate science. Climate science will be thriving.

  92. PaulM
    Posted Jun 1, 2007 at 2:30 AM | Permalink

    # 56 58 61 62 68 69: I apologise for my naive newbie question. I had assumed RC was a genuine blog that would abide by its comment policy (“Questions, clarifications and serious rebuttals and discussions are welcomed”). Thanks for putting me straight on this. Below is what I submitted, which they did not post.

    I cannot see how Stefan concludes “Without exception, the reconstructions show that Northern Hemisphere temperatures are now higher than at any time during the past 1,000 years”. No, the reconstructions themselves show that the temperature now is about the same as it was 1000 years ago. You only draw your false conclusion by comparing instrumental measurements with the proxies – but by the same method you can conclude that the temperature now is higher than the temperature, um, now, which shows something is seriously wrong. Your attempt to explain this away at #47 is not convincing (Why do the proxies all stop at 1980? And the reflection condition halves the smoothing time so they should show the recent rise but they don’t). The proxy reconstructions simply don’t support your conclusion.

  93. Hans Erren
    Posted Jun 1, 2007 at 3:01 AM | Permalink

    re 84:
    Any idea why the uncertainty band increased recently?

  94. Posted Jun 1, 2007 at 3:08 AM | Permalink

    #93

    No ideas, only a guess (#51):

    maybe they didn’t truncate divergent series in 6.10.c

    truncated 6.10.b, not truncated 6.10.c, everything is possible 😉

  95. Philip B
    Posted Jun 1, 2007 at 4:36 AM | Permalink

    More anecdote:

    I live in Western Australia and have travelled extensively in the bush. Trees are much larger along watercourses, and away from watercourses tree size clearly declines as you move from higher to lower rainfall areas.

    My point is that in warm temperate dry climates the tree growth sensitivity is to precipitation not temperature.

  96. Posted Jun 4, 2007 at 2:48 AM | Permalink

    #84,93

    Wasn’t that hard to figure out after all: the ECS 2002 uncertainty ranges increase in the 1950-1992 period. Bootstrap CIs combined with another divergence problem. See Cook et al QSR2004 Fig. 6:

    After AD 1950, there is clear divergence, particularly between the “North” and “South” subset chronologies.

  97. jae
    Posted Jun 4, 2007 at 4:47 PM | Permalink

    95:

    My point is that in warm temperate dry climates the tree growth sensitivity is to precipitation not temperature.

    I agree that this is usually true. But the claim is that at their altitudinal or latitudinal limits, tree growth is controlled primarily by temperature, not moisture. And this may be true for some locations, but not many, IMHO. Especially in desert-like areas where bristlecone pines live.

  98. Posted Jun 5, 2007 at 2:05 AM | Permalink

    Here’s a part from Cook et al QSR2004 Fig. 6:

    ( http://www.geocities.com/uc_edit/Cook04.jpg )

    This kind of divergence, between blue (proxies 55-70 deg N) and red (proxies 30-55 deg N), causes bootstrap CIs to expand, am I right? Smaller divergence happens around year 1000, which causes a clear peak in 6.10.c. Loss of growth sensitivity, greater regional variability?
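
    As a sanity check on the mechanism only (not on the actual ECS 2002 numbers), a toy Python sketch with fabricated chronologies, half of which drift away after 1950:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1900, 1995)

    # Hypothetical chronologies: ten "northern" and ten "southern" series
    # that agree before 1950 and diverge afterwards.
    north = rng.normal(0, 0.3, (10, years.size))
    south = rng.normal(0, 0.3, (10, years.size))
    south[:, years >= 1950] -= np.linspace(0, 1.0, (years >= 1950).sum())
    chronologies = np.vstack([north, south])

    def bootstrap_ci_width(chrons, n_boot=2000):
        """Width of a 95% bootstrap CI for the mean across chronologies,
        resampling whole series with replacement."""
        n = chrons.shape[0]
        means = np.array([chrons[rng.integers(0, n, n)].mean(axis=0)
                          for _ in range(n_boot)])
        lo, hi = np.percentile(means, [2.5, 97.5], axis=0)
        return hi - lo

    width = bootstrap_ci_width(chronologies)
    print(width[years < 1950].mean(), width[years >= 1950].mean())
    # The interval is noticeably wider after 1950, when the subsets diverge.
    ```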

  99. N Leaton
    Posted Jul 4, 2007 at 3:06 PM | Permalink

    There is another explanation, and that is that there is a selection bias in the proxies.

    If the only proxies chosen are those that fit the known record, you would expect to see the following.

    The proxies are a good fit for the known data – obviously

    When the new temperature data comes out, the data diverges.

    When you look back at the historical records, the proxies diverge.

    Again, that is what is seen. The lack of divergence during the known record, and the divergence in the extrapolations forward and backward in time, is exactly the pattern that this explanation predicts.

    Nick

  100. bender
    Posted Jun 24, 2010 at 10:58 AM | Permalink

    If you estimated confidence intervals robustly, then it wouldn’t matter if you reflected or padded the endpoints, because the false information contained at the end of the series would be reflected in an appropriately ever-widening confidence envelope. So the substantive issue here is not how to terminate the series with false data, but how to reflect the appropriate level of uncertainty given lack of knowledge about the future data points beyond the series end.

    Trust Mann to dodge the real issue.
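
    One crude way to express that idea in Python (my own toy construction, not bender’s proposal): let the endpoint uncertainty grow according to how much of the smoothing window still sits on real data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(0, 1, 120)   # hypothetical annual series
    M = 20                      # (M+1)-point smoothing window
    half = M // 2
    sigma = x.std(ddof=1)

    # Reflect-padded smooth, so the curve reaches the final year
    padded = np.concatenate([x[half:0:-1], x, x[-2:-half - 2:-1]])
    smooth = np.convolve(padded, np.ones(M + 1) / (M + 1), mode="valid")

    # Count the *real* (non-padded) points under each window position and
    # let the confidence half-width grow as that count shrinks near the ends.
    n_real = np.array([min(i + half, len(x) - 1) - max(i - half, 0) + 1
                       for i in range(len(x))])
    ci_halfwidth = 1.96 * sigma / np.sqrt(n_real)

    print(round(smooth[-1], 3),
          round(ci_halfwidth[len(x) // 2], 3),
          round(ci_halfwidth[-1], 3))
    # The envelope at the very end is wider than in the interior, because only
    # about half the window there is real data; the padded half adds nothing.
    ```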

One Trackback

  1. […] to 2007. Recently I discussed Mann’s “explanation” of the Divergence Problem, that it was an artifact of IPCC […]