Yesterday Ross and I submitted an article to IJC with the following abstract:
A debate exists over whether tropical tropospheric temperature trends in climate models are inconsistent with observations (Karl et al. 2006; IPCC 2007; Douglass et al. 2007; Santer et al. 2008). Most recently, Santer et al. (2008, herein S08) asserted that the Douglass et al. statistical methodology was flawed and that a correct methodology shows no statistically significant difference between the model ensemble mean trend and either RSS or UAH satellite observations. However, this result was based on data ending in 1999. Using data up to the end of 2007 (as available to S08), or to the end of 2008, and applying exactly the same methodology as S08 yields a statistically significant difference between the ensemble mean trend and UAH observations, and a difference approaching statistical significance for the RSS T2 data. The claim by S08 to have achieved a “partial resolution” of the discrepancy between observations and the model ensemble mean trend is unwarranted.
Attached to the article as Supplementary Information was code (in a style familiar to CA readers) which, when pasted into R, collects all the relevant data online and produces all the statistics and figures in the article. In the event that Santer et al. wish to dispute or reconcile any of our findings, we have tried to make it easy for them to show how and where we are wrong, rather than setting up pointless roadblocks to such diagnoses.
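For readers who want the gist of the H2 calculation without pasting the full SI script, here is a rough sketch in Python (the SI itself is R code; this is an illustrative reimplementation, not the SI). The function names (`trend_and_ar1_se`, `d_star`) are hypothetical, and the formulas reflect our reading of the S08-style test: the observed trend and the multi-model ensemble-mean trend are compared via a d* statistic, with the observed trend's standard error inflated by the usual lag-1 autocorrelation effective-sample-size correction.

```python
import numpy as np

def trend_and_ar1_se(y):
    """OLS trend per time step, with a standard error adjusted for
    lag-1 autocorrelation of the residuals via the common
    effective-sample-size correction n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    b, a = np.polyfit(t, y, 1)              # slope, intercept
    resid = y - (a + b * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = n * (1 - r1) / (1 + r1)
    s2 = np.sum(resid**2) / (n_eff - 2)     # residual variance, reduced d.o.f.
    se_b = np.sqrt(s2 / np.sum((t - t.mean())**2))
    return b, se_b

def d_star(b_obs, se_obs, model_trends):
    """S08-style H2 statistic (as we read it): difference between the
    observed trend and the ensemble-mean model trend, scaled by the
    combined uncertainty of the two."""
    m = len(model_trends)
    b_mod = np.mean(model_trends)
    se_mod = np.std(model_trends, ddof=1) / np.sqrt(m)
    return (b_obs - b_mod) / np.hypot(se_obs, se_mod)
```

Under the usual two-sided normal approximation, |d*| greater than about 2 would indicate a statistically significant difference at the 5% level. The trend values and standard errors fed into such a test are, of course, what the actual SI script computes from the RSS, UAH and model series.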
We only consider the comparison between the model ensemble mean trend and observations (the Santer H2 hypothesis). In our discussion, we note that we requested the collated monthly data used by Santer to develop his H1 hypothesis and that this request was refused; the correspondence is attached as supplementary information. Had the H1 data been available when the file was open, we would have analyzed them, but they weren't, so we didn't. The results for the H2 hypothesis are interesting in themselves.
We noted that an FOI request to NOAA had been unsuccessful, that the publisher of the journal lacked policies requiring the production of data, and that an FOI request to the DOE was pending. We urged the journal to adopt modern data policies. Given all the problems facing the new US administration, the fact that they actually turned their minds to issuing an executive order on FOI on their first day in office suggests to me that DOE will produce the requested data. A couple of readers have also taken the initiative of writing to DOE to express their displeasure with Santer's actions; they think the data might become available relatively promptly. Personally, I can't imagine any sensible bureaucrat touching Santer's little campaign with a bargepole. I've long believed that sunshine would cure this sort of stonewalling and obstruction, and I hope that that happens.
Update (Jan 27): Events are moving right along, as I discovered when I started going through today's email. In last week's snail mail, I received a letter dated Dec 10 from some arm of the U.S. nuclear administration (to which Santer's Lawrence Livermore belongs) acknowledging my FOI request of Nov 14 to the DOE [from memory; I'll tidy the dates, as I don't have the snail response on hand], saying that it had been placed in their queue of requests, which are considered in the order in which they are received. The snail seemed especially slow on this occasion, so I wasn't holding my breath.
Amazingly, today's email includes a letter from a CA reader saying that the Santer data has just been put online at http://www-pcmdi.llnl.gov/projects/msu/index.php (I haven't looked yet, but will). He sent an inquiry to them on Dec 29, 2008, and the parties responsible wrote back saying that they would look into the matter. They also emailed him immediately upon the data becoming available.
Surprisingly (or not), the same people didn't notify me concurrently with the CA reader, even though my request was made almost six weeks earlier.