Thanks to a couple of readers who've pointed out both a discussion of the "Hockey Stick Row" and its inclusion as a question of the week.
The question of the week was here:
A row erupted this week over the so-called "hockey stick graph". What does this graph purport to show?
A: The bending of an "ideal" hockey stick under varying degrees of pressure
B: Temperature variation in the northern hemisphere over the last 1,000 years
C: Damage to the human ear drum caused by noises of varying volume
Interestingly, the question preceding this one was the following:
What is the name coined for the defence strategy unsuccessfully employed by former WorldCom chief executive Bernie Ebbers in his fraud trial, which ended this week?
A: The Gee Whizz defence
B: The Aw Shucks defence
C: The Blow Me Down With A Feather defence
The article is here.
Ross McKitrick was interviewed for the article. Gavin Schmidt weighed in with the usual realclimate line:
"This is a tiny step in the hockey stick analysis. If you do it in different ways, you still get the answer you got before, providing you don’t throw away any significant data."
Dr Schmidt points out that McIntyre and McKitrick use a different convention but do not alter subsequent steps in their analysis to account for this. As a result, he says, McIntyre and McKitrick’s analysis removes crucial data included in the original hockey stick work.
For recent readers: what realclimate author Schmidt calls a "different convention" is a principal components calculation carried out, using conventional centered methods, according to the description in the original article. MBH98 actually de-centered the data, a procedure which had the effect of mining the network for hockey stick shaped series. realclimate thus seems to be adopting the position that MBH98 contained a misrepresentation of a vital methodology, rather than an accidental error. A method with an undisclosed data mining property is not merely an alternative "convention".
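For readers who want to see the de-centering effect for themselves, here is a minimal sketch. All the dimensions, the AR1 coefficient and the 79-year "calibration" window are illustrative choices of mine, not MBH98's actual network: the point is only that, on persistent red noise containing no climate signal at all, short centering on the closing period yields a PC1 with a pronounced closing-period excursion, while conventional full centering does not.

```python
import numpy as np

def simulate_proxies(rng, n_series=50, n_years=581, phi=0.9):
    """Red-noise (AR1) pseudo-proxies with no common climate signal."""
    eps = rng.standard_normal((n_years, n_series))
    X = np.zeros_like(eps)
    for t in range(1, n_years):
        X[t] = phi * X[t - 1] + eps[t]
    return X

def pc1(X, center):
    """First principal component of X after subtracting the given column means."""
    Z = X - center
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    pc = Z @ vt[0]
    return pc / pc.std()

def hockey_index(pc, cal):
    """Departure of the calibration-period mean from the earlier mean (in sd units)."""
    return abs(pc[-cal:].mean() - pc[:-cal].mean())

rng = np.random.default_rng(0)
cal = 79  # illustrative "calibration" window
full_idx, short_idx = [], []
for _ in range(25):
    X = simulate_proxies(rng)
    full_idx.append(hockey_index(pc1(X, X.mean(axis=0)), cal))        # conventional
    short_idx.append(hockey_index(pc1(X, X[-cal:].mean(axis=0)), cal))  # short-centered

print(f"centered PC1 mean hockey index:       {np.mean(full_idx):.2f}")
print(f"short-centered PC1 mean hockey index: {np.mean(short_idx):.2f}")
```

In these simulations the short-centered PC1 departs far more in the "calibration" period than the properly centered PC1: de-centering preferentially loads on whichever noise series happen to trend in the closing window.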
The "crucial data" is of course the compromised bristlecone pine data. In a conventional PC analysis, this goes to the PC4. One of the big selling points of MBH98 was its "robustness" – they claimed that their calculations were robust even to the exclusion of dendroclimatic records altogether. Now they imply that the bristlecones are "crucial". Of course, they’ve known all along that the compromised bristlecone proxies were "crucial" – see the calculations in the BACKTO_1400-CENSORED folder. They just didn’t tell anyone: that would have spoiled the party.
The statistical question is whether the bristlecone pine data is "significant". See our E&E article for a detailed discussion of bristlecones. Passing a Preisendorfer Rule N test does not show that bristlecone pine data is a temperature proxy. For example, if you combined the 50 non-bristlecone tree ring series with 20 stock price series for 1996-1998 dot-coms in a principal components calculation, the dot-com prices would undoubtedly generate a PC series that passed a Preisendorfer Rule N test: all that means is that they form a distinct pattern in the network. It doesn't prove that they are a temperature proxy. Preisendorfer's Rule N is at most a necessary condition for significance, not a sufficient one: a distinction often lost on realclimate authors, and obviously lost in this case. If world climate history is held to stand or fall on the bristlecones, there should be a comprehensive discussion of their properties (as we advocate in our E&E article); they should not be hidden from sight as in MBH98.
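The dot-com thought experiment can be put in code. This is a toy sketch with made-up dimensions (a 15/5 split and a linear trend standing in for the tree ring/stock price mixture): Rule N retains an eigenvalue only if it exceeds the corresponding eigenvalue from white-noise data of the same size, and a block of series sharing any common pattern, temperature-related or not, sails past that threshold.

```python
import numpy as np

def rule_n_threshold(n_obs, n_vars, n_sims=200, q=0.95, seed=1):
    """95th-percentile eigenvalue spectrum from white-noise data (the Rule N null)."""
    rng = np.random.default_rng(seed)
    eigs = np.empty((n_sims, n_vars))
    for i in range(n_sims):
        Z = rng.standard_normal((n_obs, n_vars))
        C = np.corrcoef(Z, rowvar=False)
        eigs[i] = np.sort(np.linalg.eigvalsh(C))[::-1]
    return np.quantile(eigs, q, axis=0)

rng = np.random.default_rng(2)
n_obs, n_noise, n_shared = 200, 15, 5

# 15 independent noise "proxies", plus 5 series sharing a common non-climatic
# pattern (a linear run-up standing in for dot-com prices).
trend = np.linspace(0.0, 3.0, n_obs)
X = np.hstack([
    rng.standard_normal((n_obs, n_noise)),
    trend[:, None] + 0.5 * rng.standard_normal((n_obs, n_shared)),
])

C = np.corrcoef(X, rowvar=False)
eigs = np.sort(np.linalg.eigvalsh(C))[::-1]
thresh = rule_n_threshold(n_obs, n_noise + n_shared)
print(f"leading eigenvalue {eigs[0]:.2f} vs Rule N threshold {thresh[0]:.2f}")
print("passes Rule N:", eigs[0] > thresh[0])
```

The shared run-up passes Rule N comfortably. Passing says only that the pattern is distinguishable from noise within the network; it says nothing whatever about whether the pattern is temperature.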
Additionally, one of the selling points of MBH98 was its supposedly careful proxy selection. But its results come from a flawed method mining for the most flawed proxies, creating a "perfect storm".
Phil Jones also weighs in:
"They keep going on about one data set, but there are loads of others that show the same thing."
Phil Jones is the same guy who responded to Warwick Hughes’ request for the underlying station data used to support the CRU gridded temperature calculation as follows:
Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.
I’ve sought information on the identity of the 387 sites used in Briffa, Jones et al. from both Briffa and Jones for nearly a year now. Here is my most recent (Nov. 2004) attempt:
I have not had any luck with Keith Briffa on this and perhaps you can help.
Briffa et al (JGR 2001), of which you were a co-author, refers to 387 sites, but does not provide a listing of the sites or an FTP location for the underlying data, although AGU data policies theoretically require such information. I presume that there is a convenient listing of the sites, and I would appreciate a copy. Additionally, I would appreciate information on an FTP location for the data used in the study (or a password to the SOAP location if it is located there.)
Thank you for your consideration, Steve McIntyre
Jones brings up Moberg as the new study in town. The ink is barely dry on Moberg, and it looks to me like there are many issues with it as well. It will take a little while to deconstruct their methods. I suspect that many climate scientists, quick to praise the results as a bail-out from MBH98-99, don’t know much about wavelet methods. In the simulations for our GRL article, I used a method from Brandon Whitcher’s wavelet package, so I have a bit of a leg up here. I plan to try to replicate Moberg’s results. Moberg does a much better job than most studies in data citation but, like previous multiproxy studies, throws in a couple of unarchived series, making exact replication frustrating. Moberg also continues the usual practice of using proxies ending by 1980 with a splice of instrumental records. I would very much like to see some evidence that the Moberg proxies would pick up the warm 1990s – it doesn’t seem like that much to ask. If they don’t, then the error bars may be about the size of total climate variability.
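For readers unfamiliar with the general idea behind Moberg-type methods, here is a toy sketch – emphatically not Moberg's data or code, and with a crude moving-average filter standing in for the wavelet transform. It merges the low-frequency band of one made-up "low-resolution" proxy with the high-frequency band of a made-up "tree ring" series whose slow variability is damped.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Two made-up "proxies": a low-resolution one carrying the slow variability,
# and a tree-ring-like one carrying year-to-year variability but damped
# low frequencies.
slow = np.sin(np.linspace(0, 4 * np.pi, n))       # the underlying slow signal
low_res = slow + 0.2 * rng.standard_normal(n)
tree_ring = 0.3 * slow + rng.standard_normal(n)

def low_pass(x, width=41):
    """Crude moving-average low-pass filter (stand-in for a wavelet smooth)."""
    kernel = np.ones(width) / width
    padded = np.pad(x, width // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# Band merge: low frequencies from the low-res proxy,
# high frequencies (the residual) from the tree rings.
recon = low_pass(low_res) + (tree_ring - low_pass(tree_ring))

# The merged series should track the slow signal better than tree rings alone.
err_recon = np.mean((low_pass(recon) - slow) ** 2)
err_tree = np.mean((low_pass(tree_ring) - slow) ** 2)
print(f"merged low-frequency error: {err_recon:.3f}")
print(f"tree-ring-only error:       {err_tree:.3f}")
```

The merged series recovers the slow variability that the tree rings alone suppress. Whether Moberg's actual wavelet implementation does this cleanly, and how its error bars behave, is exactly what needs checking against the archived data.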