Over at realclimate, Mann has advanced a new “explanation” of the Divergence Problem, which one of their readers raised in connection with IPCC AR4 Figure 6.10. Mann says that there is no Divergence Problem; he blames the reader for failing to understand boundary constraints in smoothed series. Raising the Divergence Problem at realclimate was so blasphemous that either Mann or his website hosts, Environmental Media Services, investigated the IP address of the person who dared to ask about it. The Divergence Problem is, however, a real issue and not simply an artifact of IPCC smoothing.
The Divergence Problem has been discussed on many occasions at this site. Previously I’ve reported the large-scale decline in ring widths, noted in passing in Briffa et al 1998, here, here and here among others, and criticized Briffa’s cargo cult explanation of the decline:
In the absence of a substantiated explanation for the decline, we make the assumption that it is likely to be a response to some kind of recent anthropogenic forcing. On the basis of this assumption, the pre-twentieth century part of the reconstructions can be considered to be free from similar events and thus accurately represent past temperature variability.
The poster at realclimate posed the following question, which, by the way, was a question that Kurt Cuffey of the NAS Panel posed to various presenters. (In their report, the NAS panel did not firmly grasp this nettle, more or less adopting Cook’s version of the cargo cult explanation, discussed here. At the press conference, Cuffey said that the Divergence Problem was an important consideration in withdrawing confidence from the early estimates.) Anyway, here’s the question at realclimate:
What we are interested in in this thread, however, are the error bars on the temperature reconstructions from proxies. What is striking from the IPCC chart is that the “instrumental record” starts diverging seriously upwards from the “proxies” around 1950, and is in the “10%” overlap range by about 1980.
The simple read on this, surely, is that the proxies are not reflecting current temperatures (calibration period) and so cannot be relied upon as telling what past temperatures were either?
Here’s Mann’s answer, in which he attributes the Divergence Problem, something seriously discussed and not resolved by the NAS Panel, and obviously very much on the mind of a serious scientist like Rob Wilson, entirely to smoothing:
[Response: Actually, you have mis-interpreted the information provided because you have not considered the implications of the smoothing constraints that have been applied at the boundaries of the time series. I believe that the authors of the chapter used a smoothing constraint that forces the curves to approach the boundary with zero slope (the so-called ‘minimum slope’ constraint). At least, this is what it is explicitly stated was done for the smoothing of all time series in the instrumental observations chapter (chapter 3) of the report. Quoting page 336 therein, “This chapter uses the ‘minimum slope’ constraint at the beginning and end of all time series, which effectively reflects the time series about the boundary. If there is a trend, it will be conservative in the sense that this method will underestimate the anomalies at the end.” So the problem is that you are comparing two series, one which has an overly conservative boundary constraint applied at 1980 (where the proxy series terminate), tending to suppress the trend as the series approaches 1980, and another which has this same constraint applied far later (at 2005, where the instrumental series terminates). In the latter case, the boundary constraint is applied far enough after 1980 that it does not artificially suppress the trend near 1980. A better approach would have been to impose the constraint which minimizes the misfit of the smooth with respect to the raw series, which most likely would in this case have involved minimizing the 2nd derivative of the smooth as it approaches the terminal boundary, i.e. the so-called ‘minimum roughness’ constraint (see the discussion in this article). However, the IPCC chose to play things conservatively here, with the risk of course that the results would be mis-interpreted by some, as you have above. -mike]
[Response: Well, no, actually the proper read on this is that you should make sure to understand what boundary constraints have been used any time you are comparing two smoothed series near their terminal boundaries, especially when the terminal boundaries are not the same for the two different series being compared. -mike]
[Response: p.s. just a point of clarification: Do the above represent your views, or the views of Shell Oil in Houston Texas (the IP address from which your comment was submitted)? -mike]
Here’s the offending IPCC Figure 6.10, together with original caption.
Figure 6.10. Records of NH temperature variation during the last 1.3 kyr. (a) Annual mean instrumental temperature records, identified in Table 6.1. (b) Reconstructions using multiple climate proxy records, identified in Table 6.1, including three records (JBB..1998, MBH..1999 and BOS..2001) shown in the TAR, and the HadCRUT2v instrumental temperature record in black. (c) Overlap of the published multi-decadal time scale uncertainty ranges of all temperature reconstructions identified in Table 6.1 (except for RMO..2005 and PS2004), with temperatures within ±1 standard error (SE) of a reconstruction ‘scoring’ 10%, and regions within the 5 to 95% range ‘scoring’ 5% (the maximum 100% is obtained only for temperatures that fall within ±1 SE of all 10 reconstructions). The HadCRUT2v instrumental temperature record is shown in black. All series have been smoothed with a Gaussian-weighted filter to remove fluctuations on time scales less than 30 years; smoothed values are obtained up to both ends of each record by extending the records with the mean of the adjacent existing values. All temperatures represent anomalies (°C) from the 1961 to 1990 mean.
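To make the caption’s endpoint treatment concrete, here is a minimal sketch of that procedure in Python. This is my own illustration, not IPCC code: the caption does not specify the pad length or how many “adjacent existing values” are averaged, so those choices are assumptions.

```python
# Sketch of the Figure 6.10 smoothing as the caption describes it:
# a Gaussian-weighted filter removing fluctuations on time scales
# under 30 years, with each end of the record extended by the mean
# of the adjacent existing values. The pad length and the number of
# "adjacent" values averaged are assumptions; the caption does not
# specify them.
import numpy as np

def gaussian_weights(half_width):
    """Gaussian weights spanning 2*half_width + 1 points."""
    t = np.arange(-half_width, half_width + 1)
    sigma = half_width / 3.0          # assumption: ~3 sigma at the filter edge
    w = np.exp(-0.5 * (t / sigma) ** 2)
    return w / w.sum()

def smooth_mean_padded(x, half_width=15, n_adjacent=15):
    """Smooth x, padding each end with the mean of the nearest values."""
    x = np.asarray(x, dtype=float)
    left = np.full(half_width, x[:n_adjacent].mean())
    right = np.full(half_width, x[-n_adjacent:].mean())
    padded = np.concatenate([left, x, right])
    return np.convolve(padded, gaussian_weights(half_width), mode="valid")
```

Because the pad is a constant equal to the local mean, the smooth is pulled toward that mean as it approaches each end of the record.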
Now the IPCC figure has already expurgated the offending parts of three divergent series, which are truncated at 1960 (Briffa et al 2001; Rutherford, Mann et al 2005; Hegerl et al 2006), an expurgation discussed on several occasions, for example here, which showed the truncation of the Briffa series in the spaghetti graph and the impact of the deleted data on the IPCC TAR spaghetti graph (since the same series is used in IPCC AR4, the impact will be the same). So it’s not just smoothing that’s involved here.
Although Mann told the NAS panel that he was “not a statistician”, he cited his own article on smoothing, which can be used to decode his idiosyncratic terminology. The caption to AR4 Figure 6.10 says that the series were smoothed with a 30-year Gaussian filter, dealing with endpoints by “extending the records with the mean of the adjacent existing values”, a fairly straightforward methodological description. Mann, however, describes the situation as follows:
I believe that the authors of the chapter used a smoothing constraint that forces the curves to approach the boundary with zero slope (the so-called ‘minimum slope’ constraint)
Consulting Mann’s article on smoothing, which, needless to say, is not a statistical authority and should not be relied on as though it were one, one finds:
To approximate the ‘minimum norm’ constraint, one pads the series with the long-term mean beyond the boundaries (up to at least one filter width) prior to smoothing.
Another way of dealing with endpoints is to reflect the values about the time boundary before smoothing. Mann describes this system as follows:
To approximate the ‘minimum slope’ constraint, one pads the series with the values within one filter width of the boundary reflected about the time boundary. This leads the smooth towards zero slope as it approaches the boundary.
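Translated out of Mann’s terminology, the two padding rules quoted above are easy to state in code. The sketch below is mine, written directly from the quoted descriptions; the function names and the pad-length convention (one filter half-width) are not from Mann or the IPCC.

```python
# Sketches of the two padding rules as quoted above; names and the
# pad-length convention are my assumptions.
import numpy as np

def pad_minimum_norm(x, pad):
    """'Minimum norm': pad both ends with the long-term mean of the series."""
    x = np.asarray(x, dtype=float)
    fill = np.full(pad, x.mean())
    return np.concatenate([fill, x, fill])

def pad_minimum_slope(x, pad):
    """'Minimum slope': reflect the values nearest each boundary about
    the time boundary (mirror in time only); the mirrored pad makes the
    smooth approach the boundary with zero slope."""
    x = np.asarray(x, dtype=float)
    left = x[1:pad + 1][::-1]       # x[pad], ..., x[1] placed before x[0]
    right = x[-pad - 1:-1][::-1]    # x[n-2], ..., x[n-pad-1] placed after x[n-1]
    return np.concatenate([left, x, right])
```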
Mann based his speculation about the smoothing in Figure 6.10 on the smoothing method used in Chapter 3 (lead author, Jones). Chapter 3 does indeed say:
This chapter uses the ‘minimum slope’ constraint at the beginning and end of all time series, which effectively reflects the time series about the boundary. If there is a trend, it will be conservative in the sense that this method will underestimate the anomalies at the end.
BTW, while I was checking this quote, I noted that Appendix 3.B on error measurement was excluded from the pdf version of chapter 3, but is available online as supplementary material here.
So underneath the verbiage, Mann has incorrectly described the smoothing in Figure 6.10. Yes, IPCC chapter 3 said that they dealt with endpoints by reflection, but the caption to Figure 6.10 in chapter 6 explicitly says that they used endpoint padding with the mean. Mann goes on to recommend yet another alternative, in which the closing values are reflected not just horizontally (about the time boundary) but also vertically (about the final value), a system which he describes as follows:
A better approach would have been to impose the constraint which minimizes the misfit of the smooth with respect to the raw series, which most likely would in this case have involved minimizing the 2nd derivative of the smooth as it approaches the terminal boundary, i.e. the so-called ‘minimum roughness’ constraint (see the discussion in this article).
I wondered whether IPCC might have done this and checked TAR for it. In the case of the truncated Briffa series, it would have led to a reconstruction that could only be described as Mann-dacious. Here’s what would have happened. They truncate the bad bit of the Briffa series, in which it goes down at the end of the 20th century to near all-time “cold” in the proxy index, leaving a series ending in 1960 and going up at the end. Juckes, Briffa, Zorita et al (submitted to CP) is even more aggressive, truncating in 1940! After 1940, the actual series goes down. But with Mann’s recommended smoothing, the closing trend would be reflected forward, enabling the smoothed version to go up even though the truncated values were going down. I wonder why Mann considers this “a better approach”.
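To see why, consider what ‘minimum roughness’ padding does mechanically: the closing values are reflected about the endpoint in both time and amplitude, so whatever trend the series carries into its final point is extended forward into the pad. Here is a toy illustration, my own construction rather than anyone’s published code:

```python
# Toy illustration of 'minimum roughness' padding at the right boundary:
# values are reflected about the final point in both time and amplitude,
# so the padded value k steps past the end is 2*x[-1] - x[-1-k].
import numpy as np

def pad_minimum_roughness_right(x, pad):
    x = np.asarray(x, dtype=float)
    k = np.arange(1, pad + 1)
    return np.concatenate([x, 2 * x[-1] - x[-1 - k]])

x = np.array([0.0, 0.1, 0.2, 0.3, 0.4])    # rising into a truncation point
print(pad_minimum_roughness_right(x, 3))   # pad continues up: 0.5, 0.6, 0.7
```

A smooth computed over that pad necessarily keeps rising through the truncation point, even if the deleted post-truncation values were declining.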
Bottom line: Mann’s answer is a non-answer. The Divergence Problem is not simply an artifact of smoothing in IPCC Figure 6.10. It’s a real problem.