[Climate Audit was started on Jan 31, 2005. Prior to its startup I had some notes at a prior website http://www.climate2003.com, which John A transferred to the CA blog at its start-up.]
We are sometimes asked about other multiproxy studies which are held to somehow support Mann. A couple of comments. First, if Mann’s calculations are wrong, the fact that other studies get similar results is neither here nor there. Equally, a critique of MBH98 doesn’t refute these other studies, nor have we claimed this. Second, I’m not convinced that these studies are anywhere near as mutually supporting as claimed. When I get to it, I’m going to try to quantify exactly what is supposedly being shown by the spaghetti diagrams and see if they rise statistically above spaghetti diagrams from our simulated hockey sticks (see the Oct. 25 comment for an example). Third, the data-availability record for the other multiproxy studies is, in all but one case, worse than for MBH98. Here is a brief summary:
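The kind of benchmark I have in mind can be sketched as follows (a minimal illustration only, not the actual procedure; the AR(1) coefficient, series length and closing-period window are placeholder choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, phi = 1000, 581, 0.9  # placeholder parameters

# Simulate persistent red noise: AR(1) series with coefficient phi
noise = rng.standard_normal((n_series, n_years))
series = np.zeros_like(noise)
for t in range(1, n_years):
    series[:, t] = phi * series[:, t - 1] + noise[:, t]

# "Hockey stick index": closing-century mean minus overall mean, in sd units
blade = series[:, -100:].mean(axis=1)
hsi = (blade - series.mean(axis=1)) / series.std(axis=1)

print(f"fraction with |HSI| > 1: {np.mean(np.abs(hsi) > 1):.3f}")
```

Even pure red noise yields a non-trivial fraction of series whose closing century departs markedly from the long-term mean, which is why a spaghetti diagram needs a formal null benchmark before it shows anything.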
Crowley and Lowery (2000)
After nearly a year and over 25 emails, Crowley said in mid-October that he had misplaced the original data and could only find transformed and smoothed versions. This makes proper data checking impossible, but I’m planning to do what I can with what he sent. Do I need to comment on my attitude to the original data being “misplaced”?
Briffa et al. (2001)
There is no listing of sites in the article or SI (despite JGR policies requiring that citations be limited to publicly archived data). Briffa has refused to respond to any requests for data. None of these guys have the least interest in someone going through their data and seem to be hoping that the demands wither away. I don’t see how any policy reliance can be placed on this paper with no available data.
Esper et al. (2002)
This paper is usually thought to show much more variation than the hockey stick. Esper has listed the sites used, but most of them are not archived. Esper has not responded to any requests for data.
Jones and Mann (2003); Mann and Jones (2004)
Phil Jones sent me data for these studies in July 2004, but did not have the weights used in the calculations, which Mann had. Jones thought that the weights did not matter, but I have found otherwise. I’ve tried a few times to get the weights, but so far without success. My surmise is that the weighting in these papers is based on correlations to local temperature, as opposed to MBH98-MBH99, where the weightings are based on correlations to the temperature PC1 (but this is just speculation right now). The papers do not describe the methods in sufficient detail to permit replication.
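My surmise about correlation-based weighting can be illustrated with a toy composite (entirely hypothetical; the proxy data, target series and weighting rule are stand-ins, not Mann and Jones’s actual method):

```python
import numpy as np

rng = np.random.default_rng(1)
n_proxies, n_years = 8, 150  # placeholder sizes

# Stand-in local temperature target and noisy stand-in proxies
temp = np.linspace(0, 1, n_years) + 0.2 * rng.standard_normal(n_years)
proxies = 0.5 * temp + rng.standard_normal((n_proxies, n_years))

# Weight each proxy by its correlation with the temperature target,
# normalized so the weights sum to one
weights = np.array([np.corrcoef(p, temp)[0, 1] for p in proxies])
weights = weights / weights.sum()

weighted = weights @ proxies        # correlation-weighted composite
unweighted = proxies.mean(axis=0)   # simple average for comparison

print("max |weighted - unweighted|:", np.abs(weighted - unweighted).max())
```

The weighted and unweighted composites differ, which is the point: without the weights, the calculation cannot be replicated even with the underlying series in hand.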
Jacoby and d’Arrigo (northern treeline)
I’ve got something quite interesting in progress here. If you look at the original 1989 paper, you will see that Jacoby “cherry-picked” the 10 “most temperature-sensitive” sites from 36 studied. I’ve done simulations to emulate cherry-picking from persistent red noise and consistently get hockey stick shaped series, with the Jacoby northern treeline reconstruction being indistinguishable from simulated hockey sticks. The other 26 sites have not been archived. I’ve written to Climatic Change to get them to intervene in getting the data. Jacoby has refused to provide the data. He says that his research is “mission-oriented” and, as an ex-marine, he is only interested in a “few good” series.
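The simulation runs along these lines (a stripped-down sketch; the AR(1) coefficient, the 36-of-which-10 counts from the 1989 paper, and the linear “instrumental” target are my stand-ins, not the actual emulation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_pick, n_years, phi = 36, 10, 581, 0.9
calib = 100  # "instrumental" calibration window at the end

# Persistent red noise at each of the 36 candidate sites
noise = rng.standard_normal((n_sites, n_years))
sites = np.zeros_like(noise)
for t in range(1, n_years):
    sites[:, t] = phi * sites[:, t - 1] + noise[:, t]

# Stand-in instrumental record: a warming ramp over the calibration window
target = np.linspace(0, 1, calib)

# "Cherry-pick" the 10 sites most correlated with the warming target
corr = np.array([np.corrcoef(s[-calib:], target)[0, 1] for s in sites])
picked = np.argsort(corr)[-n_pick:]
composite = sites[picked].mean(axis=0)

print("picked-site correlations:", np.round(np.sort(corr[picked]), 2))
```

Averaging only the screened sites bends the composite upward over the calibration period even though every series is pure noise — that is the hockey stick shape the screening manufactures.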
Jacoby has also carried out updated studies on the Gaspé series, so essential to MBH98. I’ve seen a chronology using the new data, which looks completely different from the old data (which is a hockey stick). I’ve asked for the new data, but Jacoby and d’Arrigo have refused it, saying that the old data is “better” for showing temperature increases. Need I comment? I’ve repeatedly asked for the exact location of the Gaspé site for nearly 9 months now (I was going to privately fund a re-sampling program, but Jacoby, Cook and others have refused to disclose the location). Need I comment?
Jones et al (1998)
Phil Jones stands alone among paleoclimate authors as a diligent correspondent. I have data and methods from Jones et al 1998. I have a couple of concerns here, which I’m working on. I remain concerned about the basis of series selection: there is an obvious risk of “cherry-picking” data and I’m very unclear what steps, if any, were taken to avoid this. The results for the Middle Ages don’t look robust to me. I have particular concerns with Briffa’s Polar Urals series, which pulls the 11th century results down (Briffa arguing that 1032 was the coldest year of the millennium). It looks to me like the 11th century data for this series does not meet quality control criteria and Briffa was over-reaching. Without this series, Jones et al. 1998 is high in the 11th century.
These studies are less “independent” than they appear. Many proxies recur in nearly all studies (e.g. Tornetrask, Polar Urals, Tasmania). If you look at all the authors, there is much overlap. Mann is in 4 of the studies; in addition to Jones et al 1998 and the two articles with Mann, Jones is a co-author in Briffa et al. 2001 and supplied much of the data to Crowley and Lowery. Bradley and Jones have been frequent co-authors.