Science – Email #39

One more attempt to extract data. This is the 39th email in my correspondence with Science and I still don’t have a complete record on either Esper et al 2002 or Osborn and Briffa 2006. And people wonder why I haven’t published more. Continues from here


Dear Dr Hanson,

Thank you for your continued efforts to obtain data from Esper et al 2002 and Osborn and Briffa 2006. I regret that their unresponsiveness has made you handle this file many more times than should have been necessary. I acknowledge that a little more information came in from Osborn in April, but the majority of the requests that we discussed remain unresolved.

Esper et al 2002
1. Esper sent 13 of 14 chronologies. The 14th chronology (Mongolia) is still missing and no progress has been made on this in 2 months. This should be trivial to supply and I request that you take steps to obtain the missing chronology.

2. To date, Esper has sent 11 of 14 measurement files (10 in March and most recently Polar Urals in April), omitting Mongolia and two Graumlich sites, Boreal and Upperwright. I realize that there is a Mongolia file at WDCP (mong003.rwl), but the dates do not match Esper’s dates. Also, since Esper does not necessarily use all the data in a file, it is necessary to see the data as used. The two Graumlich sites are definitely not at WDCP and it is my understanding that Graumlich has lost the data. I presume that Esper was using his own version obtained prior to Graumlich losing the data, but Esper’s seeming inability to provide the measurement data for these two sites is unsettling. Perhaps it’s simply that he hasn’t considered your requests important enough to respond to. If that is the case, perhaps you could write to him in firmer tones than you have done so far.

I can see no valid reason why Esper cannot respond to items (1) and (2) within 48 hours or why you should not put him on such notice.

3. I requested information on Esper’s methodology by which he (a) distinguished between linear and nonlinear sites; and (b) decided on which data to remove from a site data set. You forwarded Esper’s answer in your most recent email, but his answer remains unresponsive and, indeed, even somewhat unsettling.

In the April 2006 email forwarded by you, Esper cited Esper et al 2003 http://www.wsl.ch/staff/jan.esper/publications/TRR_2003.pdf as authority both for the supposed distinction between linear and nonlinear sites and the removal of data. Unfortunately, that article contains no discussion whatever of the distinction; indeed it does not even mention “linear” and “nonlinear” sites. The article does refer to “meta information”, but not in a way that is germane to replicating the calculation. Accordingly, I re-iterate my previous request for an operational definition of “linear” and “nonlinear” sites as used to produce the results in Esper et al 2002. Source code would be fine.

Secondly, Esper said that the purpose of removing data was to avoid a “biased chronology”, again citing Esper et al 2003 as authority. In this case, the comments in the “authority” were, if anything, more unsettling than the email. Esper et al 2003 said on the subject of removing data:

Before venturing into the subject of sample depth and chronology quality, we state from the beginning, “more is always better”. However as we mentioned earlier on the subject of biological growth populations, this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.

To the extent that this is an accurate description of the methods employed in Esper et al 2002, it does not justify removing data on the grounds of avoiding bias. Indeed, it seems to be a recipe for bias in the direction of the “desired signal” – making one wonder what was the “desired signal” and, on what grounds, a signal should be “desired”.

In responding to the issue of removal of data, Esper introduced his answer with the phrase “as described”, implying that there was some prior description of the removal of data. There is no mention of the removal of data in Esper et al 2002 and the issue came up only after I was able to inspect source data that you obtained. Again I note that Esper’s answer was unresponsive and I reiterate my request for a replicable methodology by which Esper’s removal of data can be replicated.

Osborn and Briffa
I acknowledge and appreciate the gridcell data which you obtained from Osborn and Briffa. I am unimpressed by their explanation and feel that it deserves wider attention as it may shed useful light on the quality of the HadCRU2 data set. I recognize that you requested that their answer remain confidential. I have requested a confidentiality release from Osborn and Briffa, but to no avail. I can see no valid reason for confidentiality. I request that you re-consider the basis for confidentiality and provide an answer that is not confidential.

I requested 4 measurement data sets (Polar Urals, Tornetrask, Taymir and Athabaska). Osborn and Briffa appear to have refused to provide this data and you referred me to the “original authors” or the original journals. In the case of three of these data sets (Polar Urals, Tornetrask, Taymir), the “original author” was Briffa (2000). You have presented me with a distinction without a difference. While I have contacted the “original author” (Briffa) for this information, there is little prospect that he will provide it except under a direction from Science. I see no reason why Science should countenance an author hiding behind a prior publication in a lesser journal with weaker archiving policies. I request that Science take a broad view of its data archiving policies in this case and not permit authors to take legalistic approaches to avoid supplying data. I ask that you require authors who have used results from a “non-arms-length” publication that has not been properly archived to meet Science’s policies for all non-arms-length data. I therefore request that you require Osborn and Briffa to produce the measurement data from Briffa (2000) used to produce results applied in Osborn and Briffa 2006.

Thompson
I note that no progress seems to have been made with the Thompson data and hope that you are continuing your efforts on this front as well.

Once again, I appreciate your efforts. It is too bad the various authors have responded with bits and pieces of their data, rather than with complete information.

Regards, Steve McIntyre

Update: On May 19, Hanson responded with data from Lisa Graumlich for two series. As to my requests for the measurement data used to create the Briffa (2000) chronologies used in Osborn and Briffa 2006, he stated:

If you require data from papers published elsewhere, you should contact those authors and journals.

That would be, uh, Briffa. He also stated that I was not to reproduce emails from him about data provenance, even though they came from him in his official capacity.

On May 19, I sent the following letter:

Dear Dr Hanson,

Thank you for finally supplying Esper’s version of the two foxtail series. I note that these have also been archived at WDCP in the past week, although the versions archived there differ somewhat from Esper’s versions. Oddly enough, in each case, Esper’s version includes a series that is not included in what Graumlich archived and, in the case of the Upper Wright series, Esper did not use all the series.

Osborn and Briffa

I regret the position that you’ve taken here – that Science has no obligation to require them to archive their measurement data. Below is a graphic illustrating the difference in the “chronology” archived by Graumlich this week for the two foxtail sites, as compared with the “chronology” sent in by Esper. The methodologies for calculating the “chronology” obviously lead to quite different results from the same or virtually the same data. Thus, in the tree ring field, where quite different “chronologies” can be obtained from the same measurement data, one needs to be able to evaluate the measurement data in order to test the robustness of the conclusions. This is presently impossible for the Osborn and Briffa paper, which you published. While I will maintain confidentiality on the wording of your response and appreciate your efforts, I don’t consider that any confidentiality attaches to the lack of success in my attempts to obtain this data through Science despite your considerable efforts. Here considerable blame remains with the original authors.
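The point about standardization can be made concrete with a toy sketch. The ring widths, tree counts and parameters below are all invented; this is not the foxtail data, just an illustration of how two common standardization choices produce different “chronologies” from identical measurements.

```python
import numpy as np

# Toy illustration: identical "measurement data" run through two different
# standardization methods yields two different "chronologies". The tree
# counts, growth parameters and noise levels below are all invented.
rng = np.random.default_rng(0)
n_trees, n_years = 20, 200
years = np.arange(n_years)
climate = rng.normal(0.0, 0.1, n_years)  # common year-to-year signal

widths = np.empty((n_trees, n_years))
for i in range(n_trees):
    # negative-exponential juvenile growth decline, shared climate, noise
    age_trend = 1.5 * np.exp(-0.02 * years) + 0.5
    widths[i] = age_trend * (1.0 + climate) + rng.normal(0.0, 0.05, n_years)

def individual_detrend(w):
    # Method A: divide each tree by its own straight-line fit, then average
    idx = np.empty_like(w)
    for i, series in enumerate(w):
        fit = np.polyval(np.polyfit(years, series, 1), years)
        idx[i] = series / fit
    return idx.mean(axis=0)

def rcs_like(w):
    # Method B: RCS-style, divide every tree by one common curve (here the
    # mean across trees at each age, since these toy trees are age-aligned)
    return (w / w.mean(axis=0)).mean(axis=0)

chron_a = individual_detrend(widths)
chron_b = rcs_like(widths)
# Same measurements, different standardization, different "chronology"
print(np.max(np.abs(chron_a - chron_b)))
```

The two resulting index series differ noticeably, which is exactly why the measurement data, and not just someone’s preferred chronology, is needed to test robustness.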

Mongolia
The Tarvagatny Pass version archived at WDCP is obviously similar to Esper’s version, but it is not the same. In particular, the illustration in the SI to Esper et al 2002 shows that Esper used Mongolia values back to 800, while the WDCP archive includes values only after 900. Since there are relatively few values in the 800-900 period, the Mongolia values actually used by Esper remain relevant. Once again I re-iterate my request for the Mongolia version used by Esper as well as the site chronology generated by Esper – both of which have been referred to in virtually every communication.

Esper Methodology
I take it from your last communication that you do not intend to obtain or require any clarification or further details from Esper regarding his methodology – even on the matter of how he decided which series to include or exclude. Is this correct or is this in process?

Thompson
Again, I have not heard from you regarding Thompson. Is this simply a dead topic? Should I conclude that Science will make no efforts to require Thompson to archive data beyond what is presently available?

Thank you as usual for your consideration.

Regards, Steve McIntyre

No response was ever received to this letter.

Request re Briffa 2000

Since Science said to contact the “other authors” in respect to measurement data, on April 28, 2006, I wrote to Tim Osborn as follows:

Science said that you did not directly use the measurement data for Polar Urals, Tornetrask, Yamal and Taimyr, but chronologies previously published and therefore took no responsibility for obtaining this data, directing me back to you or to the original journal. While I disagree with this decision and may pursue it with Science if necessary, to simplify matters would you voluntarily provide the measurement data used for the above sites in calculating the chronology in Briffa [2000]. Thanks, Steve McIntyre

On May 23, Osborn responded:

Steve – Science are correct to say that I “did not directly use the measurement data for” those sites. Not only did I not use them, I don’t actually have a copy of them. So I cannot help you. Tim

OK, he didn’t have the data, but that didn’t mean that Briffa didn’t have the data. So on May 23, I wrote to Briffa one more time:

Dear Dr Briffa,
On April 28, 2006, I asked Tim Osborn for the measurement data for Polar Urals, Tornetrask, Yamal and Taimyr sites, supporting the chronologies used in Osborn and Briffa [2006]. Osborn says that he does not have the data, but did not say that you didn’t have the data. Do you have the data? If so would you please comply with the request below and voluntarily provide the measurement data used in Briffa 2000, and relied upon in Osborn and Briffa 2006, for these sites.
Thank you for your attention. Steve McIntyre

On May 28, 2006, Briffa replied:

Steve these data were produced by Swedish and Russian colleagues – will pass on your message to them
cheers, Keith

That was the last I heard from him. By this time, I’d exchanged over 40 emails with Science and others and figured I’d done all that I could do.

Esper Methodology

Getting methodological information from Esper is a bit like dealing with Mann – a lot like dealing with Mann. It really makes me wonder whether there might be some clunker like Mann’s PC methodology lurking in Esper’s closet. Like Mann, instead of providing a comprehensive methodological description, ideally with code as in econometrics journals, Esper would rather provide non-responsive answers. Right now, I have two outstanding methodological questions – one that I’ve been asking for a while: how he operationally allocates tree populations into “linear” and “nonlinear” trees; the other arose out of disclosures in February and March – not all trees were used from a site, so how did he decide which trees to use and which trees not to use?

Here were the questions that were put to Esper via Science:

a) In 4 cases (Athabaska, Jaemtland, Quebec, Zhaschiviersk), Esper’s site chronology says that not all of the data in the data set is used. This is not mentioned in the original article. What is the basis for de-selection of individual cores?
b) Esper et al. [2002] do not provide a clear and operational definition distinguishing “linear” and “nonlinear” trees. As previously requested, could you please provide an operational definition of what they did, preferably with source code showing any differences in methodology.

Here is Esper’s non-responsive answer:

As described, in some of the sites we did not use all data. We did not remove single measurements, but clusters of series that had either significantly differing growth rates or differing age-related shapes, indicating that these trees represent a different population, and that combining these data in a single RCS run will result in a biased chronology. By the way, we excluded other sites because growth was too rapid, for example.

The split into linear and non-linear ring width series is shown in a supplementary figure accompanying the Science paper. The methods of this widely accepted, approach are described in the paper cited below and in the Science paper. It is possible to make this an operational approach, for example, by fitting growth curves to the single measurement series (e.g. straight line and negative exponential fits) and group the data accordingly. We didn’t do this in the Science paper, but rather investigated the data with respect to the meta information (i.e. for a particular site; data from living trees, and clusters of sub-fossil data), which I believe is a much stronger approach. This, however, requires experience with dendrochronological samplings and chronology development: Esper J, Cook ER, Krusic PJ, Peters K, Schweingruber FH (2003) Tests of the RCS method for preserving low-frequency variability in long tree-ring chronologies. Tree-Ring Research 59, 81-98.

First, consider Esper’s statement: “As described, in some of the sites we did not use all data.” I challenge anyone to locate any “description”, or even a hint, within the four corners of Esper et al 2002 that they did not use all the data, let alone any reason why. The admission came only in response to my parsing through data that took nearly two years to get.

Esper now says that cores were de-selected to avoid a “biased chronology” and cited Esper et al 2003 as a supposed authority for the procedure. However an examination of Esper et al 2003 provides no such authority. In fact, the closest thing in Esper et al 2003 to such a statement is the following, which I’ve quoted before:

Before venturing into the subject of sample depth and chronology quality, we state from the beginning, “more is always better”. However as we mentioned earlier on the subject of biological growth populations, this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.

Here Esper is talking about removing data to “enhance a desired signal”. Excuse me – that doesn’t sound like a way of avoiding a “biased chronology”; it sounds like a recipe for making biased chronologies – biased towards a “desired signal”. I think that readers are entitled to a better explanation of what Esper is doing.

As to the distinction between linear and nonlinear trees, it is simply not described in either publication. I challenge any of the people who usually disagree with me (Peter Hearnden, Steve Bloom, John Hunter) to read Esper et al 2003 and locate for me where this publication distinguishes between linear and nonlinear trees. The figure in the SI cited by Esper simply shows the number of “linear” and “nonlinear” trees. It does not explain how the distinction is made.
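As a concrete illustration of what an operational definition could look like, here is a sketch in the spirit of the curve-fitting approach that Esper himself mentions as a possibility: fit both a straight line and a negative exponential to each ring-width series and classify by which fits better. To be clear, this is a hypothetical illustration with invented series, not Esper’s actual procedure, which remains undisclosed.

```python
import numpy as np

# A hypothetical operational rule of the kind Esper mentions in passing:
# fit both a straight line and a negative exponential (y = a*exp(b*t)) to
# each ring-width series and classify by which fit has the smaller residual
# sum of squares. The exponential is fitted by linearizing with logs, which
# requires positive ring widths. All series and parameters here are invented.

def classify(series):
    t = np.arange(len(series))
    # straight-line fit
    lin = np.polyval(np.polyfit(t, series, 1), t)
    rss_lin = np.sum((series - lin) ** 2)
    # exponential fit via log-linearization
    slope, intercept = np.polyfit(t, np.log(series), 1)
    exp_fit = np.exp(intercept + slope * t)
    rss_exp = np.sum((series - exp_fit) ** 2)
    return "linear" if rss_lin < rss_exp else "nonlinear"

t = np.arange(100)
declining_line = 2.0 - 0.01 * t           # gentle linear decline
juvenile_curve = 2.0 * np.exp(-0.05 * t)  # negative-exponential age trend

print(classify(declining_line), classify(juvenile_curve))
```

Something this simple would at least be replicable; whatever Esper actually did by inspecting “meta information” is not.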

So how did Esper distinguish between linear and nonlinear trees? What effect does this classification have? An obvious question: if he didn’t make this distinction, does it affect relative MWP-modern levels? Did Esper exclude data to “enhance a desired signal”? If so, what was the “desired signal”? In fact, I’m not sure that these particular issues necessarily affect Esper’s results, but right now it’s impossible to replicate his calculations until one knows how these things were done. For all we know, maybe he used Mannian principal components.

There’s a little edge to Esper’s calculations because he has a high MWP as shown below.

The main reason why his MWP is high relative to usual Hockey Team fare is that he used the updated Polar Urals data (see my discussions of Briffa) which had elevated MWP values. This is the only Hockey Team study with updated Polar Urals data. Once Briffa realized the updated Polar Urals data set had a high MWP, he substituted the nearby Yamal data set which has a pronounced HS-shape and called it “Polar Urals”. The substituted data was quickly incorporated into subsequent HS studies – Mann and Jones 2003, Osborn and Briffa 2006, D’Arrigo et al 2006, etc.

The Foxtails and the Hounds

There is something weird going on with Esper’s delivery of data: he delivered quite a lot of material, but left out some key data. Here’s the status on the data – there are some puzzling methodological issues which I’ll return to.

Chronologies: Esper sent 13 of 14 chronologies in February. This was very interesting and very helpful. However, he failed to deliver the 14th chronology from Mongolia. Despite a couple of iterations, no progress has been made on finishing the job.

Measurement Data: In March, Esper sent 10 of 14 measurement files, omitting Mongolia, Polar Urals, Boreal and Upperwright. Why would these have been omitted? The Polar Urals measurement file as used was sent in April. Again, why wouldn’t the others have been sent at the same time? In April, Science cited the existence of the Mongolia file at WDCP. I’m quite sure that this relates to the Esper version, but the dates do not match, so the Esper version is different somehow.

Hanson said that he was pursuing the "original authors" for the foxtail data. It is unclear from the context whether he is trying Graumlich, having been unsuccessful with Esper. But what does Graumlich have to do with it? We’ve seen that Esper changes the versions, so the Esper version is the one that needs to be examined.

If he’s trying to get the measurement data from Graumlich, then he may be in for a rough go. Andrew Bunn, who published on these sites (Bunn et al, 2005, discussed last year on this site), said recently that he did not see the foxtail measurement data during the course of writing Bunn et al., even though chronologies for Boreal and Upperwright are published in Bunn et al 2005 (indeed this is the only publication of them, as the earlier Graumlich study cited in Esper et al 2002 does not report chronologies for these two sites). Bunn said that Graumlich lost the measurement data when she moved from Arizona to Montana a number of years ago – something Crowley would understand, and a good reason for archiving the data in the first place. Someone paid for it.

When Bunn told me that the data had been lost, I hypothesized that Esper might have had a grey version and the data might thus have been preserved despite Graumlich’s loss. But with every week that goes by, I’m beginning to wonder. It could be that Esper’s just being difficult, but now it’s not just me, Science is involved. Every time that Hanson has to pick up this file again, he’s going to get mad at both Esper and me. I’m sure that initially it was me, but as time goes on, and the production of a simple data set by Esper seems to be more and more problematic, Hanson’s surely going to get increasingly irritated with Esper and the Hockey Team.

The foxtail series are not just used in Esper et al 2002; they are also very prominent in Osborn and Briffa 2006. Wouldn’t it be amusing if hypothetically an IPCC lead author was using lost data?

More O.B. Confidential

Osborn and Briffa site chronologies differed from Esper site chronologies for 4 sites. Site chronologies can differ depending on the standardization method used; in order to analyze the effect, one needs to see the measurement data. Hundreds of measurement data sets have been archived at WDCP. The really weird thing is that the Hockey Team likes to use proprietary data, rather than archived data – look at mainstay Briffa studies: Yamal, Tornetrask update, Taimyr, Athabaska (Luckman version): unarchived. Since Science policy requires data, I’ve persisted in trying to get the actual measurement data used by Briffa so that I could understand what accounts for the site differences with Esper. Here is the most recent request (and we’re at 38 emails back and forth and counting):

3. In 4 cases, the Osborn site chronology differs from the Esper site chronology, although in the other cases the versions are identical. In some cases, the date ranges do not match. I do not believe that it is possible to replicate the Osborn version from the Esper measurement data in these 4 cases and surmise that Osborn used a different measurement data set. I therefore request measurement data used by Osborn for the following sites: Polar Urals, Tornetrask, Taymir and Athabaska.

This email did not contain a request for confidentiality. The voice is that of Sciencemag, not O&B.

Esper et al. was not the source for the four series in question, as stated in the SOM of the paper (see paragraphs c and d). The Athabasca series was replaced with a series from Luckman and Wilson. The other three series contain some non-identical tree-ring series derived from the same sites; thus the series they used can not be reproduced using the Esper et al. data; there are fewer tree cores in the Esper et al. data. The source for these three series is Briffa (2000). Osborn and Briffa did not use raw tree-core measurements, only chronologies that had previously been assembled by others, and these have been deposited. You may want to contact those original authors or those publications if you require their raw data.

Isn’t this cute. Osborn and Briffa are taking the position that the Science 2006 article did not directly use the measurement data, but only the chronologies calculated from the measurement data. Thus, they are taking the legalistic position that the measurement data itself is not subject to Science’s data policy – a point which Science seems to be endorsing. Given that the chronologies were calculated in Briffa (2000), I guess I should try to locate this Briffa fellow. I wonder if he’s got anything to do with the Briffa of Osborn and Briffa.

Give me a break. Science should take the position that they are the big dogs. If Osborn and Briffa want to rely on non-arms-length results published in a journal which does not adhere to Science’s data archiving policies, they should not be allowed to hide the wienie. Science should tell Osborn and Briffa to produce the measurement data.

I’ve not given up on persuading Science of this. However, in the mean time, I’ve written to Osborn and requested that they voluntarily provide the measurement data, but no answer so far.

It is so ridiculous – Briffa 2000 was published 7 years ago. The data from these 4 sites (Yamal, Taimyr, Tornetrask update, Athabaska- Luckman version) are being used over and over again in Hockey Team studies and it is still unarchived.

What if Briffa, as a lead author in IPCC 4AR, hypothetically featured results from Osborn and Briffa 2006? Wouldn’t it be objectionable if Briffa, under such circumstances, didn’t readily and enthusiastically respond to requests for supporting data?

O.B. Confidential

I’m up to 38 emails back and forth with Science now in trying to get data. A little more drifted in during April, which I’ll report on in a couple of posts. My most recent progress report was here

One of the questions pertained to a discrepancy between the correlation between gridcell temperatures and foxtail chronologies reported by Osborn and Briffa and the actual correlation. Osborn supplied the information that they used – they used data only going back to 1888, while the HadCRU2 gridcell goes back to 1870 and there was a significant difference between the correlations when 1870 was used instead of 1888. Since there appeared to be an ad hoc use of data, I requested an explanation of the discrepancy and corresponding information for other gridcells as follows:

6. I acknowledge receipt of a temperature data set from 1888-1990. The HadCRU2 data set contains temperature data for the gridcell 37.5N, 117.5W commencing in 1870. However, the gridcell information provided by Osborn commenced only in 1888 and the differences are material to the final result (0.045 versus 0.18 reported). What is the reason for commencing this comparison in 1888 rather than the available 1870? Why is there no notice of this in the SI? Since there is a material difference in this example, could you please provide the gridcell temperature sets in a comparable format for the other 13 Osborn and Briffa series.

Osborn and Briffa sent an explanation to Science, which was forwarded to me, together with the corresponding information. The explanation shed some interesting light on CRU temperature data, which many readers of this blog are interested in. (As you all know, CRU’s station data is confidential.)

Unfortunately, Science advised me that the Osborn and Briffa explanation was itself confidential and not for public posting without permission of Osborn and Briffa, which to date has not been forthcoming. Osborn and Briffa are both at CRU in England and interested readers may wish to contact them personally for a confidential explanation.
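The underlying issue, that a reported correlation can change materially with the chosen start year, is easy to illustrate with synthetic numbers. The series below are invented, not the actual HadCRU2 gridcell or foxtail data; the point is only that window choice matters.

```python
import numpy as np

# Synthetic illustration (not the actual HadCRU2 gridcell or foxtail data)
# of how a correlation can move materially with the chosen start year.
rng = np.random.default_rng(1)
years = np.arange(1870, 1991)
temp = 0.01 * (years - 1870) + rng.normal(0.0, 0.3, len(years))
# the proxy tracks temperature after 1888 but diverges before it
proxy = (np.where(years < 1888, 2.0 - 2.0 * temp, temp)
         + rng.normal(0.0, 0.1, len(years)))

r_full = np.corrcoef(temp, proxy)[0, 1]               # start in 1870
mask = years >= 1888
r_short = np.corrcoef(temp[mask], proxy[mask])[0, 1]  # start in 1888

print(round(r_full, 3), round(r_short, 3))
```

When the early years disagree with the later ones, truncating the window flatters the correlation, which is precisely why an unexplained 1888 start date deserves scrutiny.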

Mann to speak at UC Santa Cruz

As one of our commenters has helpfully pointed out, Michael Mann will be giving a presentation at the University of California, Santa Cruz, this Wednesday.

Michael Mann, director of the Earth System Science Center at Pennsylvania State University, will give a lecture on global climate change on Wednesday, May 10, at UC Santa Cruz. His talk–"Global Climate Change: Past and Future"–will take place at 7 p.m. at the Seymour Center at UCSC’s Long Marine Laboratory. The event is free and open to the public.

He’s going to be speaking on topics very close to our hearts:

Mann is one of the leading authorities on global climate change. His research has been central to establishing the growing human influence on climate and, as a result, has been the target of criticism from skeptics of global warming. Mann will present the evidence for a human influence on the climate of recent decades. Such evidence includes instrumental measurements available for the past two centuries, paleoclimate observations spanning more than a millennium, and comparisons of the predictions from computer models with observed patterns of climate change.

If, by any chance, someone is in the area and can give a report on what transpires, I’m sure Steve will be most interested. Hopefully, Dr Mann will not have to dash off for a flight back to Philly and will stick around for a few questions at the end – you never know…

Acceptance Dates

I posted up information on IPCC publication deadlines, which are presumably there to ensure that authors do not play favorites. Here are dates of submission, acceptance and publication of some studies that have been discussed recently on this blog. These should be compared against WG1 deadlines of August 12, 2005 for being supplied to TSU, December 16 for being "published or in press", and the end of February as a drop-dead date for final preprints.

Wahl, Ritson and Ammann was submitted on 3 October 2005, accepted on 27 February 2006 and published April 28, 2006. On its face, it failed to meet both the December and February deadlines – it merely got to December status in February. Purely hypothetically, IPCC reviewers might want to consider whether it met submission deadlines for even being supplied to TSU for inclusion in the First Draft. The possibility of its not being supplied for the First Draft is, of course, entirely hypothetical as this information is confidential. But since IPCC review is supposed to be done with an abundance of caution, I presume that diligent reviewers will check such things.

Hegerl et al [Nature 2006] was submitted on 8 July 2005, accepted on 28 February 2006 and published on April 20, 2006. This article does not describe the HC reconstruction. That is described in another article [Journal of Climate] which had not been accepted as at April 20, 2006, according to the Nature article. IPCC reviewers should ensure that any results attributed to Hegerl et al 2006 are actually supported in the Nature article; if they come from the Journal of Climate submission, then they should not appear. As to the Nature article, as with the Wahl et al 2006 article in Science, it did not meet the December deadline for being "published or in press", since it was accepted only in February, and did not meet the February drop-dead date, as it only then arrived at the December milestone.

Wahl and Ammann [Climatic Change], not the rejected GRL article, was submitted on May 10, 2005 and accepted on February 27, 2006. No final preprint exists currently and, as of today, it does not appear in the online publication list for Climatic Change. On an earlier occasion, we reported that there were dramatic changes between the version that existed in December and the accepted version, including the inclusion of verification r2 statistics that confirmed our findings in MM05a and MM05b. Again, this failed several milestones – it was not "published or in press" by December 16; no final preprint existed as at February 28; and substantial changes were made post-December.

Dare I observe that there seems to have been lots of activity on this front on February 27-28. Of course, this last minute flurry was pointless under IPCC WG1 policies.

Osborn and Briffa [2006] was submitted on 23 September 2005, accepted on 17 January 2006 and published on Feb 10, 2006. So it met the February deadline, but did not meet the December deadline of being "published or in press". It’s too bad, since Briffa is a lead author. Diligent reviewers should also review their First Draft records to check whether a draft version of Osborn and Briffa was made available for the First Draft, as it was supposed to be. As a lead author, I’m sure that Briffa would be expected to comply with the letter of all policies.

Again all this discussion is entirely hypothetical. No reader should conclude that any of these studies have been mentioned in the Second Draft of IPCC 4AR. That information is confidential. However, there’s no harm in saying that lead authors should not be permitted to circumvent rules in favor of their own publications, just as a general point. I’m sure that such things are unlikely to happen with IPCC, but you never know.

IPCC WG1 Publication deadlines

I’ve previously discussed IPCC WG1 publication deadlines in the context of Wahl and Ammann [2006], where the authors seemed to make last-ditch efforts to comply with IPCC WG1 publication deadlines, but ironically failed to comply with the letter of the deadlines. There have been some other high-profile "late-breaking" articles (Osborn and Briffa [2006], Wahl et al [2006], Hegerl et al 2006) which seemed to have been published with an eye on the IPCC 4AR buzzer. I thought that it would be interesting to collate IPCC WG1 publication deadlines and see whether these buzzer-beating shots got off in time. As I am bound by WG1 confidentiality, I am dealing here entirely with a hypothetical situation, continuing a discussion which I started well in advance of the release of the Second Order Draft. Readers should not conclude that any of these studies are or are not mentioned in the Second Order Draft. This is all hypothetical.

In my previous post on the matter, I linked to the IPCC WG1 policy statement on publication deadlines at the WG1-UCAR website here. The version available in February has been replaced by a new version; the previous version may be found in a web archive here. The differences are not material, but both versions are shown below. [Update: The new version mentioned here is now available only at a web archive – see here.]

Weblog update: LaTeX now available

After Jean S's comments written in LaTeX, I decided it was time to add TeX support once and for all. Since the webhost doesn't have LaTeX installed, I had to use mimeTeX, which supports just basic TeX without all the flourishes, but it should be good enough to produce equations and symbols of sufficient quality for the uses here.

If Steve wants to use TeX within posts, he can put [ tex] and [/ tex] on either side of his formulae and they will be rendered when he saves the post. (Note that I've put an extra space inside the square brackets so that they are not rendered here; you need to remove the spaces when you use them.)

For commenters, TeX is available in the same manner, but since there is no preview and you cannot go back and edit your comments, you get one shot at it. I would suggest that you pre-render your LaTeX commands on your favorite platform to make sure they will work before adding them to a comment.
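For instance (assuming a standard local LaTeX installation – the blog itself uses mimeTeX, so this is only a pre-check, not the blog's own pipeline), a formula can be verified ahead of time by compiling a minimal document such as:

```latex
\documentclass{article}
\begin{document}
% Check that the formula compiles before pasting it between [tex]...[/tex] tags.
% Only the part inside the $...$ delimiters goes into the comment.
$ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $
\end{document}
```

If this compiles cleanly, the same formula should render under mimeTeX as well, since it uses only basic TeX constructs (\frac, \sqrt, \pm) and none of the flourishes that mimeTeX lacks.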

For example:

[ tex] x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}[/ tex]

will produce the general quadratic formula

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}

Detrended in Amherst

Wahl et al [2006] fulminated as follows:

The VS04 results have been interpreted to cast serious doubt on the MBH reconstruction. … However, these results are in large part dependent on a detrending step not used by MBH, which is physically inappropriate and statistically not required. The take-away message for the climate community should be strong encouragement for more vigorous cross-comparisons of the various reconstruction implementations, based on real-world proxy series, model emulations, and simulated modifications to real-world data. Such a step would help eliminate unnecessary confusion that can distract from the crucial contributions of climate change research to important scientific and policy questions.

Quite aside from the issue of whether trending or detrending makes any difference to the VZGT results (I'm convinced that the issue is immaterial to their principal point), it grates on me no end that Mannians can, seemingly with a straight face, suggest to others that cross-comparisons are a good idea as a means of avoiding confusion, after it has taken years of quasi-litigation to gradually unpeel the details of Mannian methods. I want to go back over some of our correspondence with Nature. I started doing this because I recalled some curious issues of trended-versus-detrended arising in our Materials Complaint. If the implementation of trending versus detrending matters to anything, then Nature should step up and take its share of the blame for failing to respond to very specific requests for methodological clarification. There is also a rich irony in this, because Mann's justification for not providing proper methods was that Zorita et al 2003 had managed to replicate his results – a claim still extant in the Corrigendum SI.