Yesterday Ross and I submitted an article to IJC with the following abstract:
A debate exists over whether tropical troposphere temperature trends in climate models are inconsistent with observations (Karl et al. 2006; IPCC 2007; Douglass et al. 2007; Santer et al. 2008). Most recently, Santer et al. (2008, herein S08) asserted that the Douglass et al. statistical methodology was flawed and that a correct methodology showed there is no statistically significant difference between the model ensemble mean trend and either RSS or UAH satellite observations. However, this result was based on data ending in 1999. Using data up to the end of 2007 (as available to S08) or to the end of 2008 and applying exactly the same methodology as S08 results in a statistically significant difference between the ensemble mean trend and UAH observations, and a difference approaching statistical significance for the RSS T2 data. The claim by S08 to have achieved a “partial resolution” of the discrepancy between observations and the model ensemble mean trend is unwarranted.
Attached to the article as Supplementary Information was code (of a style familiar to CA readers) which, when pasted into R, will go and collect all the relevant data online and produce all the statistics and figures in the article. In the event that Santer et al wish to dispute or reconcile any of our findings, we have tried to make it easy for them to show how and where we are wrong, rather than to set up pointless roadblocks to such diagnoses.
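For readers who want the flavor of the calculation without downloading the full SI, here is a minimal sketch of a trend-difference test of the general S08 form, with an AR(1) effective-sample-size adjustment to the observed trend's standard error. This is not the submitted code: the function name, the windowing convention, and the normal approximation for the p-value are illustrative assumptions only.

```r
# Sketch (not the submitted SI code) of an S08-style trend-difference test.
# obs: a monthly ts object of observed anomalies; model_trends: vector of
# per-run model trends over the same window. All names are illustrative.
trend_test <- function(obs, model_trends, start = 1979, end = 2007) {
  w  <- window(obs, start = start, end = c(end, 12))
  t  <- as.numeric(time(w))
  fm <- lm(w ~ t)
  b_obs <- coef(fm)[2]                                # observed trend
  r1 <- acf(resid(fm), plot = FALSE)$acf[2]           # lag-1 autocorrelation
  neff <- length(t) * (1 - r1) / (1 + r1)             # effective sample size
  se_obs <- summary(fm)$coef[2, 2] * sqrt(length(t) / neff)
  b_mod  <- mean(model_trends)                        # ensemble-mean trend
  se_mod <- sd(model_trends) / sqrt(length(model_trends))
  d <- (b_obs - b_mod) / sqrt(se_obs^2 + se_mod^2)    # S08-style d1* statistic
  2 * pnorm(-abs(d))                                  # two-sided p-value
}
```

The point of the endpoint issue is visible in this form: shortening or lengthening the window changes both the observed trend and its adjusted standard error, so significance can flip depending on where the data are truncated.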
We only consider the comparison between the model ensemble mean trend and observations (the Santer H2 hypothesis). In our discussion, we note that we requested the collated monthly data used by Santer to develop his H1 hypothesis and that this request was refused, attaching the correspondence as supplementary information. Had the H1 data been available when the file was open, we would have analyzed them, but they weren’t, so we didn’t. The results for the H2 hypothesis are interesting in themselves.
We noted that an FOI request to NOAA had been unsuccessful, that the publisher of the journal lacked policies to require the production of data and that an FOI to the DOE was pending. We urged the journal to adopt modern data policies. With all the problems for the new US administration, the fact that they actually turned their minds to issuing an executive order on FOI on their first day in office suggests to me that DOE will produce the requested data. A couple of readers have taken the initiative of writing DOE expressing their displeasure with Santer’s actions as well and they think that the data might become available relatively promptly. Personally I can’t imagine any sensible bureaucrat touching Santer’s little campaign with a bargepole. I’ve long believed that sunshine would cure this sort of stonewalling and obstruction and I hope that that happens.
Update (Jan 27): Events are moving right along as I discovered when I started going through today’s email. In last week’s snail mail, I received a letter dated Dec 10 from some arm of the U.S. nuclear administration (to which Santer’s Lawrence Livermore belongs) acknowledging my FOI request of Nov 14 to the DOE [from memory, I’ll tidy the dates as I don’t have the snail response on hand], saying that it had been in their queue of requests, which are considered in the order in which they are received. The snail seemed especially slow on this occasion. So I wasn’t holding my breath.
Amazingly, in today’s email is a letter from a CA reader saying that the Santer data has just been put online http://www-pcmdi.llnl.gov/projects/msu/index.php (I haven’t looked yet, but will). He sent an inquiry to them on Dec 29, 2008; the parties responsible wrote to him saying that they would look into the matter. They also emailed him immediately upon the data becoming available.
Surprisingly (or not), the same people didn’t notify me concurrently with the CA reader even though my request was almost 6 weeks prior.
58 Comments
Bravo!
snip – I know it was in good humor, but stay away from politician’s names
I wonder if the new administration’s new policies regarding visibility are to blame? Hmmm…
Mark
Re: Mark T. (#4), Hi Mark!
Any resemblance between the posting of a link to the data by a bureaucrat and the public pronouncement of a ‘new’ policy by the POTUS (President of these United States of America) is a coincidence (see Vogon rules of bureaucratic behavior).
#4. You’d have to think so. Imagine sitting in the chair of a bureaucrat and being asked to sign off on refusing to provide climate data in the face of the new directive. You’d say – Ben, you’re on drugs.
The entire obstruction battle was misguided from the start. It was unwinnable. The climate scientists involved may have chuckled to themselves at their little repartee but it didn’t work with the public. It makes them look bad even in situations where there probably wasn’t any substantive issue other than guys being jerks.
The moral is simple: the public is worried about the issue, they want full, true and complete disclosure, they are unconvinced that federally funded employees have any personal IPR (intellectual property rights) in the data and code that is entitled to “protection” and, in particular, that any such rights are waived when the results are being used for policy purposes.
It’s pretty simple, but it’s against previous Team practices.
Agreed. Either way, it is good to see you making an attempt at publication. Not that you haven’t attempted in other instances, just that this one you have publicized. 🙂
Mark
Go, Steve, go!
My, my. Here’s what they now say: “This data available on this site was prepared as an account of work sponsored by an agency of the United States government.” Various scientists have come on this site purporting to explain why Santer shouldn’t have to produce the data. My response was that I wasn’t interested in the right and wrong of it from first principles, but in whether Santer was obliged to do so under existing policies governing his work and/or publication – and, if not, whether the responsible agencies and journals should change their policies. It appears that he was obliged to do so under at least one of these policies. In full:
Might want to correct “submited” in the title…
It’s good to see you guys publishing in journals.
What I’m curious about now is: who will the reviewers of your article be? Will it be someone from the climate science community, or someone from the statistical community?
Here’s something amusing. If you unzip the tarball,
http://www-pcmdi.llnl.gov/projects/msu/tam2.tar.gz
it places the data into a folder entitled FOIA. 🙂
Re: Steve McIntyre (#12),
Hilarious! The heat is finally on these guys, and it’s not just global warming.
Knowing how gov’t agencies, and gov’t workers, work, I don’t find this surprising in the least. And yes, I have been part of the “machine”.
“I hear that train a-comin’…“
Congrats for the successful FOI, and good luck in your paper. I’m curious at the outcome of it.
BTW, have you got a link to your paper, or are you withholding it until they approve it?
Re: Luis Dias (#16),
I don’t think it is a good idea for many reasons:
The reviewers could ask the authors to change some parts of the article, or to complete it (e.g. with an analysis of the H1 hypothesis)
The article could be rejected, and then the authors would have to revise it and submit it to another journal.
And I don’t know if that can be done either. If I am not wrong, the information described in an article is required to be original, not published in any other journal (except for review articles). I don’t know if posting an article on a blog counts as published. But I am pretty sure that if you publish an article in PLoS, an online-only journal, you cannot publish the same article in another journal.
Re: Urederra (#31), the rules (and conventions) vary greatly from journal to journal and even from field to field within the same journal. The main thing is to understand the local rules, whatever they may be.
He’s withholding it until he gets an FOI request 🙂
No responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed — Got that right, didn’t they?
Re: Jeff Id (#18),
Maybe you’re too patient Steve, would you mind snipping this one. I got a little grumpy when reading how difficult it is to get the data. I mean really, it’s just data after all. If the data works as they said, it shouldn’t be a problem. Anyway, it was just a blood pressure spike and I don’t need to contribute to making the tone any worse. Sorry for the trouble.
I look forward to your analysis of the H1 hypothesis, Steve.
#20. I’m not sure that I’ll have time for a while. I would have done it when the file was open, but I’ve got other things to do right now. I’m sure that someone else can do it.
#21 The analysis of the H1 hypothesis will surely be requested by the reviewers, now that these data are available…
You may be right. However, I think that the present results merit reporting whether or not we analyze other aspects of the Santer paper and I think that that’s the position we’ll take if the issue arises. But might as well think about it ahead of time.
Actually, it would have made sense tactically for Santer to make the other data available now, in the hopes that it muddies the refutation of the H2 situation (not that I think that that had anything to do with data marked FOIA being made available in response to an FOI request).
Re: Steve McIntyre (#24), if the reviewers do ask you to add an analysis of H1 that would be (at least in my field) a tacit acceptance of the submission on the condition that you do the analysis. But as you say, that can wait until they ask – trying to second guess reviewers can waste vast amounts of time, and it’s usually simpler just to wait and find out what they actually want!
I have no idea of what IJC embargo policy is, but at some point it might be an idea to post it at arxiv.org, either under “Data Analysis, Statistics and Probability” or “Atmospheric and Oceanic Physics”.
That’s where you have made your mistake. You should have collected all the online data and put it into your own ftp/web archive, with references as to where and when you collected it. The online data will change, so that others can say their results don’t match yours and therefore it should all be ignored. If you haven’t done it yet, collect and archive the data.
Steve: 🙂 I’d already done exactly that. We’ve seen the data change and I’ve complained about others not photographing exact data sets. So I’ve got the versions that we used. Users can use the photographed versions if they want.
Hi Steve – We also have a paper under review at the IJOC on a similar, complementary topic. It is Pielke Sr., R.A., T.N. Chase, J.R. Christy, B. Herman, and J.J. Hnilo, 2009: Assessment of temperature trends in the troposphere deduced from thermal winds. Int. J. Climatol., submitted. http://www.climatesci.org/publications/pdf/R-342.pdf.
Our abstract reads
“Recent work has concluded that there has been significant warming in the tropical upper troposphere using the thermal wind equation to diagnose temperature trends from observed winds; a result which diverges from all other observational data. In our paper we examine evidence for this conclusion from a variety of directions and find that evidence for a significant tropical tropospheric warming is weak. In support of this conclusion we provide evidence that, for the period 1979-2007, except for the highest latitudes in the Northern Hemisphere, both the thermal wind, as estimated by the zonal averaged 200 hPa wind and the tropospheric layer-averaged temperature, are consistent with each other, and show no statistically significant trends.”
Re: Roger A. Pielke Sr. (#27),
“Significant tropical tropospheric warming” as in the satellite data?
I don’t see it either.
Re: Sam Urbinto (#30),
If you squint and hold your head at the right angle, you might be able to see a slight uptrend in the first two-thirds of the image. But if you can see that, you can also see a slight downtrend in the last third.
I heard this on a Lecture series on particle physics by Professor Steven Pollock:
“To physicists, when a theory is well established, it doesn’t mean you quit thinking about it. In fact, physicists are the most skeptical, cynical and aggressively challenging conservatives that you can imagine. If you say a theory is out there and it is correct, all the experimentalists want to do is prove you wrong – AND THEY ARE WORKING REALLY HARD TO TRY TO FIND SOME DATA THAT WILL PROVE YOU WRONG.
Of course, in the process, if they keep agreeing with the theory, it just becomes stronger and stronger evidence that, in fact, the theory really is correct.”
Hopefully Steve you are pushing Climate science to be more like Particle Physics in the 1970s – 1990s (string theory excepted).
Keep soldiering on!!
Hope there’s nothing in here that will come back to bite you in the nether regions. Perfectly understandable if you snip this, but it wouldn’t surprise me to learn that supplying this data is actually a poisoned chalice – hope you have considered this and investigated it before you archive any data and offer it publicly. Wouldn’t want to see you get in trouble.
Re: Neil Fisher (#29),
The chalice with the palace holds the pellet with the poison. The vessel with the pestle holds the brew that is true…and don’t even get me started on the flagon with the dragon.
Re: jeez (#35),
From the movie “The Inspector General” which has resonance with this blog and climate science on multiple levels.
Steve McIntyre as the Inspector General
If anyone wants to look at model data, the following script creates one object with 98 time series: the object is named Model and is a list with two items Model$T2 and Model$T2LT. Each component has 49 time series, one for each run.
Model=list(NA,2); names(Model)=c("T2","T2LT")
url="http://www-pcmdi.llnl.gov/projects/msu"
name0=c("tam2.tar.gz","tam6.tar.gz")
for (k in 1:2) {
download.file(file.path(url,name0[k]),"temp.gz",mode="wb")
handle=gzfile("temp.gz")
fred=readLines(handle)
close(handle)
length(fred)
index2=grep("RTIME",fred); N=length(index2)  # length 49
index1=c(index2[2:length(index2)]-28,N)
index=data.frame(index2+1,index1); names(index)=c("start","end")
writeLines(fred,"temp.dat")
writeLines(fred[28],"name1.dat")
name1=scan("name1.dat",what="")
id=fred[grep("Data:",fred)]; id=substr(id,34,nchar(id)); id=gsub(" +$","",id)
Model[[k]]=list()
for(i in 1:49) {
Model[[k]][[i]]=read.table("temp.dat",skip=index$start[i]-1,nrow=index$end[i]-index$start[i]+1)
names(Model[[k]][[i]])=name1
Model[[k]][[i]]=ts(Model[[k]][[i]],start=c(round(Model[[k]][[i]][1,2]),1),freq=12)
}
names(Model[[k]])=id
} # k
sapply(Model[[1]], function(A) A[1,2])
g=function(A) {temp=time(A)>=1979; fm=lm(A[temp,"TROP"]~c(time(A))[temp]); fm$coef[2]}
trend=array(NA,dim=c(49,2)); trend=data.frame(trend); names(trend)=names(Model)
trend$T2=sapply(Model$T2, g)
trend$T2LT=sapply(Model$T2LT, g)
Re: Steve McIntyre (#36),
Thanks
Re: Steve McIntyre (#36), When I run this in R. I get an error message saying object “N” not found. I am new to R. Any suggestions?
Steve: Sorry about that. Nothing to do with R; it was due to my programming quickly and having information on my console that I didn’t put in my collation. N=length(index2) has been added.
Re: Steve McIntyre (#36),
I took a look at all the data (yes, all the data – R is a pretty useful tool!) and found a couple of oddities. The script which looks like this
generated the plot below
I especially like the jump of 10 degrees in 1900 (but it’s OK; by 1960, the temperatures are back to normal). Run 31 has a line of -999.90s for the year 1901 (obviously missing values).
To plot a particular run, the command is modplot(i,j) where i =1 or 2 and j = 1 to 49.
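The modplot function itself wasn’t posted. A hypothetical reconstruction consistent with the description (plot run j of level i, using the Model object built by the collation script earlier in the thread; the column choice and missing-value masking are my assumptions) might look like:

```r
# Hypothetical reconstruction of modplot(i, j): plot run j (1..49) of
# level i (1 = T2, 2 = T2LT) from the Model list built above.
modplot <- function(i, j) {
  x <- Model[[i]][[j]]
  x[x < -99] <- NA                       # mask the -999.90 missing-value codes
  plot(x[, "TROP"], type = "l",
       main = names(Model[[i]])[j], xlab = "Year", ylab = "Anomaly (deg C)")
}
```

With a function along these lines, modplot(1, 31) makes the 1901 block of -999.90s in Run 31 stand out immediately.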
Rather than do them individually, R has the ability to create pdf files containing the graphs. The script
will create two pdf files called Model1.pdf and Model2.pdf which contain a total of 2 x 49 x 10 = 980 graphs. The files are each 13 MB so I won’t post them. It takes a couple of minutes to do them and they are a great help when cleaning the data. The HL data is highly variable for all the runs.
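The pdf-generating script wasn’t shown either; something along the following lines would produce the 2 x 49 x 10 = 980 graphs described (the file names, the column layout, and the missing-value masking are assumptions on my part):

```r
# Sketch of the sort of batch-plotting script described: one pdf per tarball,
# with a page for each run x data-column combination.
for (k in 1:2) {
  pdf(paste("Model", k, ".pdf", sep = ""))
  for (j in 1:length(Model[[k]])) {
    x <- Model[[k]][[j]]
    x[x < -99] <- NA                     # mask -999.90 missing values
    for (m in 2:ncol(x))                 # skip the RTIME column
      plot(x[, m], type = "l",
           main = paste(names(Model[[k]])[j], colnames(x)[m]))
  }
  dev.off()
}
```

Opening the graphics device once per file and plotting in a loop is what makes R’s pdf() so convenient for eyeballing hundreds of series at a time.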
Jeez, you’re wrong; while Danny Kaye played in both, that line comes from The Court Jester.
I did not name the movie, but the Court Jester is correct and has equal resonance.
Re: jeez (#39),Hawkins: I’ve got it! The pellet with the poison’s in the vessel with the pestle; the chalice from the palace has the brew that is true! Right?
Griselda: Right! — but there’s been a change: they broke the chalice from the palace…
Hawkins: They broke the chalice from the palace?
Griselda: …and replaced it. With a flagon.
Hawkins: A flagon?
Griselda: With the figure of a dragon.
Hawkins: Flagon with a dragon.
Griselda: Right.
Hawkins: …but did you put the pellet with the poison in the vessel with the pestle?
Griselda: No! The pellet with the poison’s in the flagon with the dragon! The vessel with the pestle has the brew that is true!
Hawkins: The pellet with the poison’s in the flagon with the dragon, the vessel with the pestle has the brew that is true.
Griselda: Just remember that!
Oh, now I’ve done it. I’ve brought in the flagon with the dragon!
Urederra, and others on multiple publishing:
The rules for the MSM are different to non-existent. I noticed what looked like plagiarism in our local paper a while back, as an almost identical article had appeared in a previous magazine under a different author’s name. I challenged it, and quickly heard directly from the author, who uses a different nom de plume for news and for feature articles. He chastised me for bothering him, and told me in no uncertain terms that the “media” allowed an author to publish the same article in as many publications as would accept it. I replied that that was not the same as the science journals I dealt with in my day, and I still thought it bordered on dishonesty.
Go, Steve, go!
This is very good news. Those of us who nag you about submitting more papers will have to shut up for a while. I assume IJC is International Journal of Climatology.
You might be wise to cut down on the Santer-snarking for a while. Since your paper is a criticism of Santer et al, it is more or less certain that one of the reviewers will be Santer or one of his co-authors.
There is no reason not to post the paper either here or on ArXiv, other than your amusement in tantalising your readers!
Re: PaulM (#48), Roger Pielke, Sr. has his paper pending at IJC and has apparently put it on line; so I agree.
Re: R program
This step generates an error
names(Model[[k]])=id
Error: object “id” not found
Do you want id to be name0[k] ?
Steve: id=fred[grep(“Data:”,fred)];id=substr(id,34,nchar(id));id=gsub(” +$”,””,id)
This picks up the model/run identification.
FOIA requests are required by law to be answered within 10 business days of receipt. A 10 day extension may occur for good cause. Failure to comply can subject the requested agency to suit in federal district court and imposition of costs.
Nice catch, Steve! Can’t wait to see if it will get published! It seems like a pretty obvious mistake on Santer et al.’s part (though it’s not the first time Santer has done a paper with endpoint issues; skeptics will know what I mean), but no doubt they will have their say, if they don’t ignore you, which would be harder if Santer himself is a reviewer.
I neglected to mention the other oddity. Run 31 contains a bunch of -999.90 (obviously missing values) in 1901 for both Model[1] and Model[2]. The graph makes it obvious (modplot(1,31)).
Nice. I didn’t know about the pdf tool either; that’s cool. The labeling can be improved though. tam2 and tam6 are the “levels” T2 and T2LT and not the “models”.
names(Model) #[1] “T2” “T2LT”
My explanation of my collation wasn’t very clear. The 7th data set in Model[[1]] also has a name:
names(Model[[1]])[7]
#[1] “CNRM3.0/20c3m_run1/Xy/tam2_CNRM3.0_20c3m_run1_mm_xy_wf_r0000_0000.nc”
So you’ve plotted (I think) model CNRM3.0 run 1 for level T2. The model can be parsed from the name string.
Here are values from the original data. The jump up occurs precisely at 1900.0 – sort of a century version of Y2K :). The down draft occurs precisely at 1960. I wonder how they do that.
I have been able to replicate the values obtained by Santer in Table I for the multi-model means and the inter-model SDs for both the T2 and the T2LT. These are calculated simply by first finding the slopes for the 49 runs, then calculating the average for each of the 19 models from which the runs are taken, and finally taking the average of the latter 19 results (as advertised in the paper). The SDs are the sample standard deviations of the 19 model averages.
The groups are given (using the notation and group order in the paper in Figure 3A) by the script:
My scripts for extracting 1979 to 1999 and getting the regression coefficients were a bit clumsy so I won’t post them.
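The grouping script itself wasn’t posted, but the two-stage averaging described can be sketched as follows, assuming the per-run trends are in the trend data frame from the collation script above and that the model label is the first token of each run name (an assumption based on the name format quoted elsewhere in the thread, e.g. “CNRM3.0/20c3m_run1/…”):

```r
# Sketch of the Table I calculation: average runs within each model,
# then average (and take the SD of) the 19 model means.
model_id <- sub("/.*", "", names(Model$T2))       # 49 labels, 19 unique models
model_means <- tapply(trend$T2, model_id, mean)   # average runs within a model
multi_model_mean <- mean(model_means)             # multi-model mean trend
inter_model_sd <- sd(model_means)                 # inter-model standard deviation
```

tapply does the within-model averaging over the ragged run structure in one line, which is exactly the sort of job it was designed for.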
Roman, I agree with your diagnosis. Here’s how I went about getting the results. There’s a little more structure in Model than you may be using. Each item has a useful name and is a time series, not just a matrix. Also sapply and tapply are really elegant R functions for ragged arrays that take a little getting used to. The Team uses Fortran do loops even in R. (It’s like they speak R with a heavy accent).
A REAL programmer can write FORTRAN in any language!
(quote from: “Real Programmers Don’t Do Pascal”)
The year is 2025 there’s been a great deal of wobble tilt and solar activity all at once.
A man is looking for a bar in Antarctica when he comes across a sign outside a pub :-
“A pint
A pie
and a Friendly Word”
This seems fine so in he goes.
Customer: “A pint please”
Customer: “I’ll also have a pie”
Silence
Customer: “Landlord what about the Friendly Word”
Landlord: “Don’t eat the pie”
Re: Modelos climáticos y realidad. Santer. « PlazaMoyua.org (#34),
The correct spelling is agoreros, which is Spanish for ominous. Algoreros is just a witty pun.
Re: Urederra (#34),
Since I’m not a Spanish speaker, I had to look it up. But you’re right! That’s funny! But I have a twisted sense of humor. I like puns.
8 Trackbacks
[…] January 27, 2009. Climate models and reality. Santer. Posted by soil under Climate Change, algoreros, global warming | Tags: algoreros, global warming, Climate Change | Steve McIntyre has just submitted for publication in the International Journal of Climatology a paper on climate models and their relation to real temperature data [–>]. […]
[…] Yesterday, Climate Audit announced the submission of a paper on tropospheric temperature trends (see). […]
[…] Climate Audit today, Steve McIntyre writes about getting data about the Santer article: With all the problems for the new US […]
[…] I recounted this refusal and the progress of several FOI requests in several contemporary posts here here here and here. […]
[…] Jan 26, Ross and I submitted an article on Santer et al 2008, noted up the next day here ; I reported in the post that our submission included comments on the data refusal. On January 27, I […]
[…] Santer et al truncated their data at 1999, just at the end of a strong El Nino. Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data […]