Ryan O has produced a very interesting series of Antarctic tiles by calculating Steigian trends under various settings of retained AVHRR principal components and retained Truncated Total Least Squares eigenvectors (Schneider’s “regpar”). The figure below collects the trend tiles that Ryan provided in a previous comment, arranging them roughly by increasing number of retained AVHRR PCs from top to bottom and increasing number of retained TTLS eigenvectors from left to right. Obviously, in terms of any putative regional reconstruction, the results are totally unstable under what de Leeuw would describe as “uninteresting” variations of regpar and retained PCs.
I want to review two things in today’s note. First, the instability reminds me a lot of a diagram in Bürger and Cubasch GRL 2005, which built on our prior results; there is something remarkable about the Bürger and Cubasch 2005 presentation that we’ve not discussed before. Second, I thought that it would be worthwhile to review what Steig actually said about fixing on PC=3 and regpar=3 in light of this diagram. We’ve touched on this before, but only in the context of varying regpar, not the joint variation of retained PCs and regpar.

Figure 1. Collation of Ryan O Trend Tiles. I caution readers that I haven’t verified these results. However, Ryan has built on my porting to R of the Schneider RegEM-TTLS algorithm and has placed relevant code online, in keeping with the open source analysis that we’ve all been conducting during this project.
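For readers who want to experiment with the parameter grid themselves, the sort of tile sweep behind Figure 1 can be sketched in R along the following lines. This is only a minimal illustration: do_recon() and ann_trend() are hypothetical stand-ins (toy stubs here) for the RegEM-TTLS reconstruction and trend-calculation steps in the code that Ryan has posted, not the functions themselves.

```r
# Sketch of a parameter sweep producing a tile grid like Figure 1: loop over
# retained AVHRR PCs and regpar settings and tabulate the resulting trend.
# do_recon() and ann_trend() are hypothetical stand-ins (toy stubs below),
# NOT Ryan's posted code; substitute the real reconstruction/trend functions.
do_recon  <- function(n_pcs, regpar) rnorm(50)          # toy stub: 50 "annual anomalies"
ann_trend <- function(x) coef(lm(x ~ seq_along(x)))[2]  # OLS slope per year

pc_grid     <- 3:7   # retained AVHRR principal components
regpar_grid <- 3:9   # retained TTLS eigenvectors ("regpar")

trend_tile <- matrix(NA_real_, length(pc_grid), length(regpar_grid),
                     dimnames = list(paste0("PC=", pc_grid),
                                     paste0("regpar=", regpar_grid)))
for (i in seq_along(pc_grid)) {
  for (j in seq_along(regpar_grid)) {
    recon <- do_recon(n_pcs = pc_grid[i], regpar = regpar_grid[j])
    trend_tile[i, j] <- ann_trend(recon)
  }
}
round(trend_tile, 3)
```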
Bürger and Cubasch 2005
Buried in the Supplementary Information to Bürger and Cubasch 2005 is the following graphic, which shows high early 15th century values under some “flavors” of MBH98 parameters – “flavors” corresponding to parameter variations with reduced bristlecone weights, and corresponding especially closely to similar diagrams in MM05b (EE).

Bürger and Cubasch SI Figure 2. SI readme says: This directory contains 3 additional Figures showing …, (2) the analysis for the MBH98-step for AD 1600 [ a different step than the AD1400 step discussed in MM05a,b] …[Figure] 1600.eps [shows] the 32 variants from combining criteria 1-5 (grey, with CNT=0), distinguished by worse (light grey) or better (dark grey) performance than the MBH98-analogue MBH (10011, black). Note the remarkable spread in the early 16th and late 19th century. [my bold].
Not only is this figure not presented in the article itself; it is not even referred to in the running text, which mentions the Supplementary Information only as follows:
Figure 1 shows the 64 variants of reconstructed millennial NHT as simulated by the regression flavors. Their spread about MBH is immense, especially around the years 1450, 1650, and 1850. No a priori, purely theoretical argument allows us to select one out of the 64 as being the “true” reconstruction. One would therefore check the calibration performance, e.g. in terms of the reduction of error (RE) statistic. But even when confined to variants better than MBH a remarkable spread remains; the best variant, with an RE of 79% (101001; see supplementary material), is, strangely, the variant that most strongly deviates from MBH.
Bürger and Cubasch Figure 1 is shown below. While it is somewhat alarming for anyone seeking “robustness” in the MBH quagmire, they refrained from including or even referencing the diagram that would be perceived as giving fairly direct support to our work. I don’t blame Gerd Bürger for this at all; he cited our articles and has always discussed them fairly. In 2005, the mood was such that Zorita and von Storch felt that their ability to get their 2005 Science reply to Wahl and Ammann through reviewers would be compromised if they cited us in connection with bristlecones and MBH; they therefore discussed the issue without citing us (Zorita apologizing afterwards), even though we were obviously associated with the issue and they were well aware of this. In the Bürger and Cubasch case, the diagram was buried in the SI. (We have obviously been aware of this diagram and have used it from time to time, including in our NAS presentation.)
Bürger and Cubasch 2005 Figure 1.
I apologize for the digression, but I think that there are some useful parallels between the non-robustness observed in Bürger and Cubasch 2005 and in Ryan’s tiles. The reason for such instability in the MBH network was the inconsistency between proxies – an issue that we referred to recently in our PNAS Comment on Mann et al 2008, where we cited Brown and Sundberg’s calibration approach to inconsistency – something that I’ll return to in connection with Steig.
Regpar and PC=k in Steig et al 2009
On earlier occasions, the two Jeffs, Ryan and I have all commented on the instability of trends under different regpar choices, noting that the maximum for the overall trend occurred at or close to regpar=3. It was hard to avoid the impression that the choice of regpar=3 was, at best, opportunistic. Let’s review exactly how Steig et al described their selection of regpar=3 and their selection of PC=3.
In the online version of their article (though not all versions), they say (links added by me):
We use the RegEM algorithm [11- T. Schneider 2001], developed for sparse data infilling, to combine the occupied weather station data with the T_IR and AWS data in separate reconstructions of the Antarctic temperature field. RegEM uses an iterative calculation that converges on reconstructed fields that are most consistent with the covariance information present both in the predictor data (in this case the weather stations) and the predictand data (the satellite observations or AWS data). We use an adaptation of RegEM in which only a small number, k, of significant eigenvectors are used [10 – Mann et al, JGR 2007]. Additionally, we use a truncated total-least squares (TTLS) calculation [30 – Fierro et al 1997] that minimizes both the vector b and the matrix A in the linear regression model Ax=b. (In this case A is the space-time data matrix, b is the principal component time series to be reconstructed and x represents the statistical weights.) Using RegEM with TTLS provides more robust results for climate field reconstruction than the ridge-regression method originally suggested in ref. 11 for data infilling problems, when there are large differences in data availability between the calibration and reconstruction intervals [10 – Mann et al, JGR 2007]. For completeness, we compare results from RegEM with those from conventional principal-component analysis (Supplementary Information).
The monthly anomalies are efficiently characterized by a small number of spatial weighting patterns and corresponding time series (principal components) that describe the varying contribution of each pattern… The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation, as defined independently by extrapolar instrumental data. The first principal component is significantly correlated with the SAM index (the first principal component of sea-level-pressure or 500-hPa geopotential heights for 20S–90S), and the second principal component reflects the zonal wave-3 pattern, which contributes to the Antarctic dipole pattern of sea-ice anomalies in the Ross Sea and Weddell Sea sectors [4 – Schneider et al J Clim 2004; 8 – Comiso, J Clim 2000]. The first two principal components of TIR alone explain >50% of the monthly and annual temperature variabilities [4 – Schneider et al J Clim 2004.] Monthly anomalies from microwave data (not affected by clouds) yield virtually identical results [4 – Schneider et al J Clim 2004.]
Principal component analysis of the weather station data produces results similar to those of the satellite data analysis, yielding three separable principal components. We therefore used the RegEM algorithm with a cut-off parameter k=3. A disadvantage of excluding higher-order terms (k > 3) is that this fails to fully capture the variance in the Antarctic Peninsula region. We accept this tradeoff because the Peninsula is already the best-observed region of the Antarctic.
Virtually all of the above is total garbage. We’ve seen in earlier posts that the first three eigenvector patterns can be explained convincingly as Chladni patterns. This sort of problem has long been known in the climate literature, dating back at least to Buell in the 1970s – see the posts on Castles in the Clouds. “Statistical separability” in this context can be shown (through a reference in Schneider et al 2004, by two of the coauthors) to mean the separability of eigenvalues discussed in North et al (1982) – see the sketch below. Chladni patterns frequently occur in pairs and may well be hard to separate – however, that doesn’t mean that the pair can be ignored. The more salient question is whether Mannian principal component methods are a useful statistical method when the target field is spatially autocorrelated – an interesting and obvious question that is clearly not on the horizon of Nature reviewers.
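For reference, here is a minimal sketch of the North et al (1982) “rule of thumb” that underlies the separability claim: the sampling error of the k-th eigenvalue is roughly lambda_k * sqrt(2/n_eff), and adjacent eigenvalues whose error bars overlap are regarded as degenerate, i.e. their eigenvector patterns are not statistically separable. The eigenvalues and the effective sample size n_eff below are illustrative assumptions only, not Steig’s actual numbers.

```r
# North et al (1982) rule of thumb: sampling error of the k-th eigenvalue is
# roughly lambda_k * sqrt(2/n_eff); adjacent eigenvalues whose error bars
# overlap are "degenerate" and their eigenvector patterns not separable.
# n_eff (effective number of independent samples) is assumed, not derived here.
north_separability <- function(eigenvalues, n_eff) {
  err   <- eigenvalues * sqrt(2 / n_eff)
  upper <- eigenvalues + err
  lower <- eigenvalues - err
  # TRUE where eigenvalue k is separable from eigenvalue k+1
  data.frame(eigenvalue = eigenvalues, error = err,
             separable_from_next = c(lower[-length(eigenvalues)] > upper[-1], NA))
}

# Illustrative eigenvalues only (not the AVHRR covariance eigenvalues)
north_separability(c(40, 12, 8, 7.5, 5), n_eff = 100)
```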
Obviously the above few sentences fall well short of being any sort of adequate argument supporting the use of 3 PCs. In fairness, the use of 3 PCs seems to have been developed in predecessor literature, especially Schneider et al JGR 2004, which I’ll try to review some time.
However, the regpar=3 decision does not arise in the earlier Steig–Schneider literature and is entirely related to the use of Mannian methods in Steig et al 2009. The only justification is the one provided in the sentences quoted above, repeated here:
Principal component analysis of the weather station data produces results similar to those of the satellite data analysis, yielding three separable principal components. We therefore used the RegEM algorithm with a cut-off parameter k=3.
This argument barely even rises to arm-waving. I don’t know of any reason why the value of one parameter should be the same as the value of the other. It’s hard to avoid the suspicion that they considered other parameter combinations and simply did not report the ones that yielded lower trends.
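For readers who want to see what the truncation parameter actually controls, here is a generic single-response TTLS sketch along the lines of the Fierro et al (1997) construction cited in the Steig et al quotation: the SVD of the augmented matrix [A b] is truncated at k retained eigenvectors and the regression coefficients are built from the discarded right singular vectors. This is an illustration of the TTLS step only, not the RegEM-TTLS code used in Steig et al 2009 or in Ryan’s scripts.

```r
# Minimal single-response TTLS sketch (generic illustration, not the RegEM
# code itself). The SVD of the augmented matrix [A b] is truncated at k and
# the minimum-norm solution is built from the discarded right singular vectors.
ttls <- function(A, b, k) {
  n  <- ncol(A)
  sv <- svd(cbind(A, b))                        # SVD of [A b]
  V2 <- sv$v[, (k + 1):(n + 1), drop = FALSE]   # right singular vectors beyond k
  V12 <- V2[1:n, , drop = FALSE]                # rows corresponding to A
  V22 <- V2[n + 1, , drop = FALSE]              # row corresponding to b
  drop(-V12 %*% t(V22) / sum(V22^2))            # x_k = -V12 %*% pinv(V22)
}

# Small numerical check on synthetic data; with k = ncol(A) this reduces to
# ordinary total least squares.
set.seed(1)
A <- matrix(rnorm(200), 50, 4)
b <- A %*% c(1, -0.5, 0.3, 0) + rnorm(50, sd = 0.1)
ttls(A, b, k = 3)   # coefficients with the truncation parameter ("regpar") set to 3
```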
FOI and the "Unprecedented " Resignation of British Speaker
Readers have sometimes proposed that I try to enlist the support of a British MP for efforts to get information from the various stonewalling UK climate institutions, such as Fortress CRU. In fact, it seems that British MPs have had their own personal reasons for not supporting FOI. For the past 5 years, they have stonewalled FOI requests by journalist Heather Brooke for details of their expenses (2006 comment here). Speaker Michael Martin led the stonewalling campaign. Brooke challenged the MPs in court and, in May 2008, won a notable success. But a year later, even with a court victory, she was still no further ahead.
In the last couple of weeks, the ground suddenly shifted. Using a tried-and-true method (chequebook journalism), the Daily Telegraph purchased a disk with details of MP expenses. The reasons for the stonewalling became pretty clear. MPs expensed the public for everything from repairs to the moat at one MP’s family castle to a Playboy Channel subscription for another MP’s husband. No expense seemed too large or too small to charge to the public. A climate scientist would have said that the situation was “worse than we thought”.
The worst of the trough-feeding arose over provisions entitling MPs to purchase and improve 2nd and 3rd homes at public expense, under the guise of being nearer Parliament or nearer their constituency, even if the 2nd home was only a few miles closer to Parliament than the original home. MPs as a class seem to have become small-time real estate speculators, with the public underwriting the cost of their speculations but not sharing in the capital gains. The public anger is not just about the chiseling, though the anger about the chiseling is real enough, but about the influence of these sorts of perks and benefits on MPs as a class.
The exposure of the pigs at the trough has angered the British public and amused the rest of the world (e.g. CBC in Canada here).
One of the first casualties of the affair was Speaker Michael Martin, who had administered the expense program and who had directed the prolonged litigation against revealing the expenses. Martin became the first Speaker since the Little Ice Age to resign – in climate science, this is known as an “unprecedented” resignation.
After spending five years trying unsuccessfully to get the expenses, Heather Brooke was understandably a bit sour that the Daily Telegraph had scooped her simply by buying the information, but she gamely expressed some vindication at this sorry mess finally being exposed, observing sensibly:
As CA readers know, David Holland, Willis Eschenbach and I have been given a variety of fanciful and untrue excuses by climate scientists stonewalling FOI requests. Within this reverse beauty contest, the excuses of Hadley Center executive John Mitchell for refusing to provide his Review Comments on IPCC AR4 chapter 6 are among the most colorful: first, Mitchell said that he had destroyed all his correspondence with IPCC; then he said that the comments were his personal property. David Holland then submitted FOI requests for Mitchell’s expenses for trips to IPCC destinations and for information on whether he had made the trips on vacation time, while also confronting Hadley Center with their representations to the public about how Hadley Center scientists were doing the British public proud through their participation, as Hadley Center employees, in IPCC. So Hadley Center foraged around for a new excuse – this time arguing that releasing Mitchell’s review comments would compromise British relations with an international organization (IPCC), IPCC in the meantime having informed Hadley Center that it did not consent to the review comments being made public – ignoring provisions in the IPCC by-laws that require such comments to be made public. In administrative law terms, there is unfortunately no recourse against IPCC – an interesting legal question that we’ve pondered from time to time (also see the Global Administrative Law blog here).
We’ve also tried unsuccessfully to obtain Caspar Ammann’s secret review comments on chapter 6, which IPCC failed to include in their compilation of Review Comments and which Ammann and Fortress CRU have refused to make public.
It’s hard to picture exactly what is in the Mitchell correspondence (or the Ammann correspondence, for that matter) that has caused the parties to be so adamant about not disclosing comments that are properly part of the public record. In Mitchell’s case, I suspect that the reluctance arises not so much from anything particularly bad having been said as from the record being embarrassingly empty – thereby showing (what I believe to be) an almost complete casualness about discharging any obligations as a Review Editor other than swanning off to IPCC destinations.
As long as we don’t know, it will of course be a mystery. A couple of weeks ago, MP expenses were a mystery as well.