Submitted Article on Tropical Troposphere Trends

Yesterday Ross and I submitted an article to IJC with the following abstract:

A debate exists over whether tropical troposphere temperature trends in climate models are inconsistent with observations (Karl et al. 2006, IPCC (2007), Douglass et al 2007, Santer et al 2008). Most recently, Santer et al (2008, herein S08) asserted that the Douglass et al statistical methodology was flawed and that a correct methodology showed there is no statistically significant difference between the model ensemble mean trend and either RSS or UAH satellite observations. However this result was based on data ending in 1999. Using data up to the end of 2007 (as available to S08) or to the end of 2008 and applying exactly the same methodology as S08 results in a statistically significant difference between the ensemble mean trend and UAH observations and approaching statistical significance for the RSS T2 data. The claim by S08 to have achieved a “partial resolution” of the discrepancy between observations and the model ensemble mean trend is unwarranted.

Attached to the article as Supplementary Information was code (of a style familiar to CA readers) which, when pasted into R, will go and collect all the relevant data online and produce all the statistics and figures in the article. In the event that Santer et al wish to dispute or reconcile any of our findings, we have tried to make it easy for them to show how and where we are wrong, rather than to set up pointless roadblocks to such diagnoses.

We only consider the comparison between the model ensemble mean trend and observations (the Santer H2 hypothesis). In our discussion, we note that we requested the collated monthly data used by Santer to develop his H1 hypothesis and that this request was refused, attaching the correspondence as supplementary information. Had the H1 data been available when the file was open, we would have analyzed them, but they weren’t, so we didn’t. The results for the H2 hypothesis are interesting in themselves.
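
For readers who want the gist without opening the SI, here is a rough sketch (not the SI code itself) of the kind of trend comparison involved. The inputs are hypothetical: y is a complete monthly observed anomaly series (e.g. UAH tropical T2LT from 1979 on) and bm is the vector of per-model average trends from the 49 runs.

trend_stats = function(y) {
  t = seq_along(y)/120                       # time in decades
  fm = lm(y ~ t)
  e = resid(fm)
  r1 = cor(e[-1], e[-length(e)])             # lag-1 autocorrelation of the residuals
  n = length(y)
  neff = n*(1 - r1)/(1 + r1)                 # effective sample size
  se = summary(fm)$coef[2,2] * sqrt((n - 2)/(neff - 2))   # AR1-inflated standard error
  c(trend = unname(coef(fm)[2]), se = se)    # trend and adjusted se, deg C per decade
}
# d = (obs trend - mean(bm)) / sqrt(se_obs^2 + var(bm)/length(bm)), referred to a normal distribution

With the observational series run out to the end of 2007 or 2008 rather than 1999, it is this sort of d statistic that becomes significant for UAH and approaches significance for RSS T2, as stated in the abstract.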

We noted that an FOI request to NOAA had been unsuccessful, that the publisher of the journal lacked policies to require the production of data and that an FOI request to the DOE was pending. We urged the journal to adopt modern data policies. With all the problems facing the new US administration, the fact that they actually turned their minds to issuing an executive order on FOI on their first day in office suggests to me that DOE will produce the requested data. A couple of readers have also taken the initiative of writing to DOE to express their displeasure with Santer’s actions, and they think that the data might become available relatively promptly. Personally I can’t imagine any sensible bureaucrat touching Santer’s little campaign with a bargepole. I’ve long believed that sunshine would cure this sort of stonewalling and obstruction, and I hope that that happens.

Update (Jan 27): Events are moving right along as I discovered when I started going through today’s email. In last week’s snail mail, I received a letter dated Dec 10 from some arm of the U.S. nuclear administration (to which Santer’s Lawrence Livermore belongs) acknowledging my FOI request of Nov 14 to the DOE [from memory, I’ll tidy the dates as I don’t have the snail response on hand], saying that it had been in their queue of requests, which are considered in the order in which they are received. The snail seemed especially slow on this occasion. So I wasn’t holding my breath.

Amazingly, in today’s email is a letter from a CA reader saying that the Santer data has just been put online http://www-pcmdi.llnl.gov/projects/msu/index.php (I haven’t looked yet, but will). He sent an inquiry to them on Dec 29, 2008; the parties responsible wrote to him saying that they would look into the matter. They also emailed him immediately upon the data becoming available.

Surprisingly (or not), the same people didn’t notify me concurrently with the CA reader even though my request was almost 6 weeks prior.

58 Comments

  1. Andy
    Posted Jan 27, 2009 at 10:02 AM | Permalink

    Bravo!

  2. Sam Urbinto
    Posted Jan 27, 2009 at 10:22 AM | Permalink

    snip – I know it was in good humor, but stay away from politician’s names

  3. Steve McIntyre
    Posted Jan 27, 2009 at 10:23 AM | Permalink

    In last week’s snail mail, I received a letter dated Dec 10 from some arm of the U.S. nuclear administration (to which Santer’s Lawrence Livermore belongs) acknowledging my FOI request of Nov 14 to the DOE [from memory, I’ll tidy the dates as I don’t have the snail response on hand], saying that it had been in their queue of requests, which are considered in the order in which they are received. The snail seemed especially slow on this occasion. So I wasn’t holding my breath.

    Amazingly, in today’s email is a letter from a CA reader saying that the Santer data has just been put online http://www-pcmdi.llnl.gov/projects/msu/index.php (I haven’t looked yet, but will). He sent an inquiry to them on Dec 29, 2008; the parties responsible wrote to him saying that they would look into the matter. They also emailed him immediately upon the data becoming available.

    Surprisingly (or not), the same people didn’t notify me concurrently with the CA reader even though my request was almost 6 weeks prior.

  4. Mark T.
    Posted Jan 27, 2009 at 10:48 AM | Permalink

    I wonder if the new administration’s new policies regarding visibility are to blame? Hmmm…

    Mark

    • Posted Jan 27, 2009 at 11:34 AM | Permalink

      Re: Mark T. (#4), Hi Mark!

      Any resemblance between the posting of a link to the data by a bureaucrat and the public pronouncement of a ‘new’ policy by the POTUS (President of these United States of America) is a coincidence (see Vogon rules of bureaucratic behavior).

  5. Steve McIntyre
    Posted Jan 27, 2009 at 11:01 AM | Permalink

    #4. You’d have to think so. Imagine sitting in the chair of a bureaucrat and being asked to sign off on refusing to provide climate data in the face of the new directive. You’d say – Ben, you’re on drugs.

    The entire obstruction battle was misguided from the start. It was unwinnable. The climate scientists involved may have chuckled to themselves at their little repartee but it didn’t work with the public. It makes them look bad even in situations where there probably wasn’t any substantive issue other than guys being jerks.

    The moral is simple: the public is worried about the issue, they want full, true and complete disclosure, they are unconvinced that federally funded employees have any personal IPR (intellectual property rights) in the data and code that is entitled to “protection” and, in particular, that any such rights are waived when the results are being used for policy purposes.

    It’s pretty simple, but it’s against previous Team practices.

  6. Mark T.
    Posted Jan 27, 2009 at 11:16 AM | Permalink

    Agreed. Either way, it is good to see you making an attempt at publication. Not that you haven’t attempted in other instances, just that this one you have publicized. 🙂

    Mark

  7. Barbara
    Posted Jan 27, 2009 at 11:26 AM | Permalink

    Go, Steve, go!

  8. Steve McIntyre
    Posted Jan 27, 2009 at 11:42 AM | Permalink

    My, my. Here’s what they now say. “This data available on this site was prepared as an account of work sponsored by an agency of the United States government.” Various scientists have come on this site purporting to explain why Santer shouldn’t have to produce the data. My response was that I wasn’t interested in the right and wrong of it from first principles, but whether Santer was obliged to do so under existing policies governing his work and/or publication – and, if not, whether the responsible agencies and journals should change their policies. It appears that he was obliged to do so under at least one of these policies. In full:

    The datasets are described in the document “Information Regarding Synthetic Microwave Sounding Unit (MSU) Temperatures Calculated from CMIP-3 Archive”. Potential users should download and read this document before downloading the datasets. No additional documentation nor additional support for these data can be provided.

    Publications using any or all of the synthetic MSU T2 temperatures and/or the synthetic T2LT temperatures should reference these datasets as follows:

    “Synthetic MSU temperatures from 49 simulations of 20th century climate change were calculated as described in Santer, B.D., et al., 2008: Consistency of modeled and observed temperature trends in the tropical troposphere. International Journal of Climatology, 28, 1703-1722, doi:10.1002/joc.1756.”

    The datasets are available as two GZIP archives of 49 ASCII files each. Open source utilities to extract the files from the archive on computers running the Windows and UNIX operating systems are available at http://www.gzip.org. File extraction was tested using the commercial software package WinZip (http://www.winzip.com) on a Windows XP computer system.

    T2 data: tam2.tar.gz
    T2LT data: tam6.tar.gz

    This data available on this site was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.

    This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Jeff Alberts
    Posted Jan 27, 2009 at 11:43 AM | Permalink

    Might want to correct “submited” in the title…

  10. Thor
    Posted Jan 27, 2009 at 11:45 AM | Permalink

    It’s good to see you guys publishing in journals.
    What I’m curious about now is: who will the reviewers of your article be? Will it be someone from the climate science community, or someone from the statistical community?

  11. Steve McIntyre
    Posted Jan 27, 2009 at 12:00 PM | Permalink

    Here’s something amusing. If you unzip the tarball,
    http://www-pcmdi.llnl.gov/projects/msu/tam2.tar.gz

    it places the data into a folder entitled FOIA. 🙂
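
    For anyone who wants to check for themselves, something like this (using the untar utility in newer versions of R; just a quick look, not the SI script) will list the archive layout:

    download.file("http://www-pcmdi.llnl.gov/projects/msu/tam2.tar.gz", "tam2.tar.gz", mode="wb")
    untar("tam2.tar.gz", list=TRUE)   # file paths come back prefixed with "FOIA/"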

    • jae
      Posted Jan 27, 2009 at 12:43 PM | Permalink

      Re: Steve McIntyre (#12),

      Hilarious! The heat is finally on these guys, and it’s not just global warming.

  12. Jeff Alberts
    Posted Jan 27, 2009 at 12:10 PM | Permalink

    Surprisingly (or not), the same people didn’t notify me concurrently with the CA reader even though my request was almost 6 weeks prior.

    Knowing how gov’t agencies, and gov’t workers, work, I don’t find this surprising in the least. And yes, I have been part of the “machine”.

  13. Jeff Alberts
    Posted Jan 27, 2009 at 12:13 PM | Permalink

    I hear that train a-comin’…

  14. Luis Dias
    Posted Jan 27, 2009 at 12:23 PM | Permalink

    Congrats for the successful FOI, and good luck in your paper. I’m curious at the outcome of it.

  15. Luis Dias
    Posted Jan 27, 2009 at 12:24 PM | Permalink

    BTW, have you got a link to your paper, or are you withholding it until they approve it?

    • Urederra
      Posted Jan 27, 2009 at 4:09 PM | Permalink

      Re: Luis Dias (#16),

      BTW, have you got a link to your paper, or are you withholding it until they approve it?

      I don’t think it is a good idea, for many reasons:
      The reviewers could ask the authors to change some parts of the article, or to complete it (e.g. with an analysis of the H1 hypothesis).
      The article could be rejected, and then the authors would have to revise it and submit it to another journal.
      And I don’t know if that can be done either. If I am not wrong, the information described in an article is required to be original, not published in any other journal (except for review articles). I don’t know if posting an article on a blog counts as published. But I am pretty sure that if you publish an article on PLoS, an online-only journal, you cannot publish the same article in another journal.

      • Jonathan
        Posted Jan 27, 2009 at 4:14 PM | Permalink

        Re: Urederra (#31), the rules (and conventions) vary greatly from journal to journal and even from field to field within the same journal. The main thing is to understand the local rules, whatever they may be.

  16. mugwump
    Posted Jan 27, 2009 at 12:29 PM | Permalink

    BTW, have you got a link to your paper, or are you withholding it until they approve it?

    He’s withholding it until he gets an FOI request 🙂

  17. Posted Jan 27, 2009 at 12:39 PM | Permalink

    No responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed — Got that right didn’t they.

    • Posted Jan 27, 2009 at 1:35 PM | Permalink

      Re: Jeff Id (#18),

      Maybe you’re too patient, Steve; would you mind snipping this one? I got a little grumpy when reading how difficult it is to get the data. I mean really, it’s just data after all. If the data works as they said, it shouldn’t be a problem. Anyway, it was just a blood pressure spike and I don’t need to contribute to making the tone any worse. Sorry for the trouble.

  18. Paul Penrose
    Posted Jan 27, 2009 at 12:57 PM | Permalink

    I look forward to your analysis of the H1 hypothesis, Steve.

  19. Steve McIntyre
    Posted Jan 27, 2009 at 1:00 PM | Permalink

    #20. I’m not sure that I’ll have time for a while. I would have done it when the file was open, but I’ve got other things to do right now. I’m sure that someone else can do it.

  20. PedroS
    Posted Jan 27, 2009 at 1:19 PM | Permalink

    #21 The analysis of the H1 hypothesis will surely be requested by the reviewers, now that these data are available…

  21. Steve McIntyre
    Posted Jan 27, 2009 at 1:40 PM | Permalink

    You may be right. However, I think that the present results merit reporting whether or not we analyze other aspects of the Santer paper and I think that that’s the position we’ll take if the issue arises. But might as well think about it ahead of time.

    Actually, it would have made sense tactically for Santer to make the other data available now, in the hopes that it muddies the refutation of the H2 situation (not that I think that that had anything to do with data marked FOIA being made available in response to an FOI request).

    • Jonathan
      Posted Jan 27, 2009 at 2:40 PM | Permalink

      Re: Steve McIntyre (#24), if the reviewers do ask you to add an analysis of H1 that would be (at least in my field) a tacit acceptance of the submission on the condition that you do the analysis. But as you say, that can wait until they ask – trying to second guess reviewers can waste vast amounts of time, and it’s usually simpler just to wait and find out what they actually want!

      I have no idea of what IJC embargo policy is, but at some point it might be an idea to post it at arxiv.org, either under “Data Analysis, Statistics and Probability” or “Atmospheric and Oceanic Physics”.

  22. TerryS
    Posted Jan 27, 2009 at 2:47 PM | Permalink

    which, when pasted into R, will go and collect all the relevant data online and produce all the statistics and figures in the article.

    That’s where you have made your mistake. You should have collected all the online data and put it into your own ftp/web archive, with references as to where and when you collected it. The online data will change, so that others can say their results don’t match yours and therefore it should all be ignored. If you haven’t done it yet, collect and archive the data.

    Steve: 🙂 I’d already done exactly that. We’ve seen the data change and I’ve complained about others not photographing exact data sets. So I’ve got the versions that we used. Users can use the photographed versions if they want.
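
    For readers who want to do the same, here is a minimal sketch of date-stamped snapshotting (a hypothetical helper, not the SI code): it freezes a copy of the source file before any analysis is run against it.

    archive_copy = function(url, dir="archive") {
      dir.create(dir, showWarnings=FALSE)
      dest = file.path(dir, paste(format(Sys.Date(), "%Y%m%d"), basename(url), sep="_"))
      download.file(url, dest, mode="wb")     # save the version actually used
      dest                                    # return the path of the frozen copy
    }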

  23. Posted Jan 27, 2009 at 2:56 PM | Permalink

    Hi Steve – We also have a paper under review at the IJOC on a similar, complementary topic. It is Pielke Sr., R.A., T.N. Chase, J.R. Christy, B. Herman, and J.J. Hnilo, 2009: Assessment of temperature trends in the troposphere deduced from thermal winds. Int. J. Climatol., submitted. http://www.climatesci.org/publications/pdf/R-342.pdf.

    Our abstract reads

    “Recent work has concluded that there has been significant warming in the tropical upper troposphere using the thermal wind equation to diagnose temperature trends from observed winds; a result which diverges from all other observational data. In our paper we examine evidence for this conclusion from a variety of directions and find that evidence for a significant tropical tropospheric warming is weak. In support of this conclusion we provide evidence that, for the period 1979-2007, except for the highest latitudes in the Northern Hemisphere, both the thermal wind, as estimated by the zonal averaged 200 hPa wind and the tropospheric layer-averaged temperature, are consistent with each other, and show no statistically significant trends.”

    • Sam Urbinto
      Posted Jan 27, 2009 at 3:24 PM | Permalink

      Re: Roger A. Pielke Sr. (#27),

      “Significant tropical tropospheric warming” as in the satellite data?

      I don’t see it either.

      • Ron Cram
        Posted Jan 28, 2009 at 11:37 AM | Permalink

        Re: Sam Urbinto (#30),

        If you squint and hold your head at the right angle, you might be able to see a slight uptrend in the first two-thirds of the image. But if you can see that, you can also see a slight downtrend in the last third.

  24. Hemst 101
    Posted Jan 27, 2009 at 3:04 PM | Permalink

    I heard this in a lecture series on particle physics by Professor Steven Pollock:

    “To physicists, when a theory is well established, it doesn’t mean you quit thinking about it. In fact, physicists are the most skeptical, cynical and aggressively challenging conservatives that you can imagine. If you say a theory is out there and it is correct, all the experimentalists want to do is prove you wrong – AND THEY ARE WORKING REALLY HARD TO TRY TO FIND SOME DATA THAT WILL PROVE YOU WRONG.

    Of course, in the process, if they keep agreeing with the theory, it just becomes stronger and stronger and stronger evidence that in fact, the theory really is correct.”

    Hopefully, Steve, you are pushing climate science to be more like particle physics in the 1970s – 1990s (string theory excepted).

    Keep soldiering on!!

  25. Neil Fisher
    Posted Jan 27, 2009 at 3:08 PM | Permalink

    …or represents that its use would not infringe privately owned rights.

    Hope there’s nothing in here that will come back to bite you in the nether regions. Perfectly understandable if you snip this, but it wouldn’t surprise me to learn that supplying this data is actually a poisoned chalice – hope you have considered this and investigated it before you archive any data and offer it publicly. Wouldn’t want to see you get in trouble.

    • jeez
      Posted Jan 27, 2009 at 7:26 PM | Permalink

      Re: Neil Fisher (#29),

      The chalice with the palace holds the pellet with the poison. The vessel with the pestle holds the brew that is true…and don’t even get me started on the flagon with the dragon.

      • Stan Palmer
        Posted Jan 27, 2009 at 8:58 PM | Permalink

        Re: jeez (#35),

        The chalice with the palace holds the pellet with the poison.

        From the movie “The Inspector General” which has resonance with this blog and climate science on multiple levels.

        Steve McIntyre as the Inspector General

  26. Steve McIntyre
    Posted Jan 27, 2009 at 8:45 PM | Permalink

    If anyone wants to look at model data, the following script creates one object with 98 time series: the object is named Model and is a list with two items Model$T2 and Model$T2LT. Each component has 49 time series, one for each run.

    # Collation script: builds Model, a two-component list (T2 and T2LT), each holding 49 runs as monthly ts objects
    Model=list(NA,2);names(Model)=c("T2","T2LT")
    url="http://www-pcmdi.llnl.gov/projects/msu"
    name0=c("tam2.tar.gz","tam6.tar.gz")
    for (k in 1:2) {
      download.file(file.path(url,name0[k]),"temp.gz",mode="wb")
      handle=gzfile("temp.gz")
      fred=readLines(handle)       # read the gzipped archive as plain text lines
      close(handle)
      index2=grep("RTIME",fred); N=length(index2)   # one "RTIME" header line per run (length 49)
      index1=c(index2[2:length(index2)]-28,N)
      index=data.frame(index2+1,index1);names(index)=c("start","end")
      writeLines(fred,"temp.dat")
      writeLines(fred[28],"name1.dat")
      name1=scan("name1.dat",what="")               # column names from the first header line
      id=fred[grep("Data:",fred)];id=substr(id,34,nchar(id));id=gsub(" +$","",id)  # run identifiers

      Model[[k]]=list()
      for(i in 1:49) {
        Model[[k]][[i]]=read.table("temp.dat",skip=index$start[i]-1,nrow=index$end[i]-index$start[i]+1)
        names(Model[[k]][[i]])=name1
        Model[[k]][[i]]=ts(Model[[k]][[i]],start=c(round(Model[[k]][[i]][1,2]),1),freq=12)
      }
      names(Model[[k]])=id
    } # k

    sapply(Model[[1]], function(A) A[1,2] )   # start date (decimal year) of each run

    # post-1979 trend (deg C per year) of the tropical series for each run
    g=function(A) {temp=time(A)>=1979; fm=lm( A[temp,"TROP"]~c(time(A))[temp]) ;g=fm$coef[2];g}
    trend=array(NA,dim=c(49,2));trend=data.frame(trend);names(trend)=names(Model)
    trend$T2=sapply(Model$T2, g)
    trend$T2LT=sapply(Model$T2LT, g)
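
    A couple of quick sanity checks on the collation (assuming the script above ran without error):

    sapply(Model, length)                     # 49 runs under each of T2 and T2LT
    plot(Model$T2LT[[1]][,"TROP"], ylab="T2LT tropical series (deg C)")
    summary(10*trend$T2LT)                    # run trends in deg C per decade (g gives deg C per year)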

    • Posted Jan 27, 2009 at 10:56 PM | Permalink

      Re: Steve McIntyre (#36),

      Thanks

    • Posted Jan 28, 2009 at 7:14 AM | Permalink

      Re: Steve McIntyre (#36), When I run this in R, I get an error message saying object "N" not found. I am new to R. Any suggestions?

      Steve:
      Sorry about that. Nothing to do with R, but due to my programming quickly and having information on my console that I didn’t put in my collation. N=length(index2) added.

    • RomanM
      Posted Jan 29, 2009 at 6:31 AM | Permalink

      Re: Steve McIntyre (#36),
      I took a look at all the data (yes, all the data – R is a pretty useful tool!!!) and found a couple of oddities. The script, which looks like this

      library(lattice)
      modplot = function(mod,run,model=Model) {
      nr =nrow(model[[mod]][[run]])
      namex = c("XGLOBAL","NH","SH","NHHL","NHML","NHLL","SHHL","SHML","SHLL","TROP")
      mo = c("tam2", "tam6")
      dat = data.frame(rep(model[[mod]][[run]][,2],10), c(model[[mod]][[run]][,4:13]), rep(namex,1,each=nr))
      xyplot(dat[,2] ~ dat[,1] | dat[,3], ylab = "Temp(C)", xlab = "Time",
      main = paste(paste(mo[mod]),"..... Run ",paste(run)),
      panel = function(x,y) {panel.xyplot(x,y, type="l"); panel.lmline(x,y, col="red")})
      }

      modplot(1,7)

      generated the plot below

      I especially like the jump of 10 degrees in 1900 (but it’s OK, by 1960 the temperatures are back to normal). Run 31 has a line of -999.90s for the year 1901 (obviously missing values).

      To plot a particular run, the command is modplot(i,j) where i =1 or 2 and j = 1 to 49.

      Rather than do them individually, R has the ability to create pdf files containing the graphs. The script

      for (mod in 1:2){
      pdf(paste("Model ",paste(mod),".pdf", sep =""))
      for (k in 1:49) {
      xp = modplot(mod,k)
      print(xp) }
      dev.off()}

      will create two pdf files called Model 1.pdf and Model 2.pdf, which contain a total of 2 x 49 x 10 = 980 graphs. The files are each 13 MB so I won’t post them. It takes a couple of minutes to do them and they are a great help when cleaning the data. The HL data is highly variable for all the runs.

  27. Vernon
    Posted Jan 27, 2009 at 9:22 PM | Permalink

    Jeez, you’re wrong; while Danny played in both, that line comes out of The Court Jester.

  28. jeez
    Posted Jan 27, 2009 at 9:26 PM | Permalink

    I did not name the movie, but the Court Jester is correct and has equal resonance.

    • darwin
      Posted Jan 28, 2009 at 10:54 AM | Permalink

      Re: jeez (#39),Hawkins: I’ve got it! The pellet with the poison’s in the vessel with the pestle; the chalice from the palace has the brew that is true! Right?
      Griselda: Right! — but there’s been a change: they broke the chalice from the palace…
      Hawkins: They broke the chalice from the palace?
      Griselda: …and replaced it. With a flagon.
      Hawkins: A flagon?
      Griselda: With the figure of a dragon.
      Hawkins: Flagon with a dragon.
      Griselda: Right.
      Hawkins: …but did you put the pellet with the poison in the vessel with the pestle?
      Griselda: No! The pellet with the poison’s in the flagon with the dragon! The vessel with the pestle has the brew that is true!
      Hawkins: The pellet with the poison’s in the flagon with the dragon, the vessel with the pestle has the brew that is true.
      Griselda: Just remember that!

      Oh, now I’ve done it. I’ve brought in the flagon with the dragon!

  29. George M
    Posted Jan 27, 2009 at 9:27 PM | Permalink

    Urederra, and others on multiple publishing:
    The rules for the MSM range from different to non-existent. I noticed what looked like plagiarism in our local paper a while back, as an almost identical article had appeared in a previous magazine under a different author’s name. I challenged it, and quickly heard directly from the author, who uses a different “nom-de-plume” for news and for feature articles. He chastised me for bothering him, and told me in no uncertain terms that the “media” allowed an author to publish the same article in as many publications as would accept it. I replied that that was not the same as the science journals I dealt with in my day, and I still thought it bordered on dishonesty.

  30. Posted Jan 28, 2009 at 6:58 AM | Permalink

    Go, Steve, go!

  31. Posted Jan 28, 2009 at 12:16 PM | Permalink

    This is very good news. Those of us who nag you about submitting more papers will have to shut up for a while. I assume IJC is International Journal of Climatology.
    You might be wise to cut down on the Santer-snarking for a while. Since your paper is a criticism of Santer et al, it is more or less certain that one of the reviewers will be Santer or one of his co-authors.
    There is no reason not to post the paper either here or on ArXiv, other than your amusement in tantalising your readers!

    • PhilH
      Posted Jan 28, 2009 at 12:57 PM | Permalink

      Re: PaulM (#48), Roger Pielke, Sr. has his paper pending at IJC and has apparently put it online, so I agree.

  32. Eddy
    Posted Jan 28, 2009 at 12:50 PM | Permalink

    Re: R program

    This step generates an error
    names(Model[[k]])=id
    Error: object "id" not found

    Do you want id to be name0[k] ?

    Steve: id=fred[grep("Data:",fred)];id=substr(id,34,nchar(id));id=gsub(" +$","",id)
    This picks up the model/run identification.

  33. brendy
    Posted Jan 28, 2009 at 1:02 PM | Permalink

    FOIA requests are required by law to be answered within 10 business days of receipt. A 10 day extension may occur for good cause. Failure to comply can subject the requested agency to suit in federal district court and imposition of costs.

  34. Andrew
    Posted Jan 28, 2009 at 3:00 PM | Permalink

    Nice catch Steve! Can’t wait to see if it will get published! Seems like a pretty obvious mistake on Santer et al.’s part (though not the first time he (Santer) has done a paper with endpoint issues. Skeptics will know what I mean), but no doubt they will have their say-if they don’t ignore you, which would be harder if Santer himself is a reviewer.

  35. RomanM
    Posted Jan 29, 2009 at 6:35 AM | Permalink

    I neglected to mention the other oddity. Run 31 contains a bunch of -999.90s (obviously missing values) in 1901 for both Model[[1]] and Model[[2]]. The graph makes it obvious (modplot(1,31)).
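
    Something like this (assuming Steve’s Model object is in the workspace) marks those sentinels as missing before plotting or fitting:

    # replace the -999.90 sentinel values with NA in every run of both levels
    Model = lapply(Model, function(lev) lapply(lev, function(A) {A[A <= -999] = NA; A}))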

  36. Steve McIntyre
    Posted Jan 29, 2009 at 10:41 AM | Permalink

    Nice. I didn’t know about the pdf tool either; that’s cool. The labeling can be improved though. tam2 and tam6 are the “levels” T2 and T2LT and not the “models”.
    names(Model) #[1] "T2" "T2LT"

    My explanation of my collation wasn’t very clear. The 7th data set in Model[[1]] also has a name:

    names(Model[[1]])[7]
    #[1] "CNRM3.0/20c3m_run1/Xy/tam2_CNRM3.0_20c3m_run1_mm_xy_wf_r0000_0000.nc"

    So you’ve plotted (I think) model CNRM3.0 run 1 for level T2. The model can be parsed from the name string.

    model_id=strsplit(names(Model[[1]])[7] ,"/")[[1]][1] # "CNRM3.0"
    run_id= strsplit(names(Model[[1]])[7] ,"/")[[1]][2] # "20c3m_run1"

  37. Steve McIntyre
    Posted Jan 29, 2009 at 10:58 AM | Permalink

    Here are values from the original data. The jump up occurs precisely at 1900.0 – sort of a century version of Y2K :). The down draft occurs precisely at 1960. I wonder how they do that.

    478 1899.7500 1197.0000 -1.1508 -1.1220 -1.1796 -1.0806 -0.8283 -1.3482 -0.5948 -0.8760 -1.5586 -1.4458
    479 1899.8334 1198.0000 -1.1223 -1.0900 -1.1544 -0.2054 -1.1610 -1.2751 -0.8686 -1.0499 -1.3076 -1.2619
    480 1899.9166 1199.0000 -0.9937 -0.9119 -1.0754 0.4406 -1.0786 -1.1522 -0.7002 -1.0025 -1.2294 -1.1892
    481 1900.0000 1200.0000 8.8407 8.4133 9.2676 3.4258 6.6583 11.0346 4.0506 7.8419 11.7094 11.7167
    482 1900.0834 1201.0000 9.1429 8.7014 9.5840 4.3542 6.5638 11.4312 4.8833 8.1336 11.9055 11.9967
    483 1900.1666 1202.0000 9.4028 8.9432 9.8619 4.0595 7.1663 11.5526 5.7646 8.3075 12.0979 12.2001
    484 1900.2500 1203.0000 9.5233 9.1686 9.8774 4.4947 7.4680 11.6661 6.1120 8.2685 12.0644 12.1947
    485 1900.3334 1204.0000 9.6344 9.6008 9.6673 5.6921 8.1642 11.7000 6.0567 8.2723 11.6562 12.0354
    486 1900.4166 1205.0000 9.7839 10.0083 9.5591 6.0204 9.0062 11.8106 6.5351 8.3135 11.2813 11.8192
    487 1900.5000 1206.0000 9.7135 10.0107 9.4157 6.0511 9.1357 11.7125 6.5922 7.8483 11.3199 11.7708
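
    If anyone wants to hunt for these steps programmatically, a quick sketch (assuming the Model collation above; run 7 of T2 is, I think, the series in question):

    A = Model$T2[[7]]                        # the run plotted by Roman above
    big = which(abs(diff(A[,"TROP"])) > 5)   # month-to-month changes of more than 5 deg C
    time(A)[big + 1]                         # should pick out the 1900.0 jump and the 1960 drop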

  38. RomanM
    Posted Jan 29, 2009 at 11:36 AM | Permalink

    I have been able to replicate the values obtained by Santer in Table I for the multi-model means and the inter-model SDs for both the T2 and the T2LT. These are calculated simply by first finding the slopes for the 49 runs, then calculating the average for each of the 19 models from which the runs are taken, and finally taking the average of the latter 19 results (as advertised in the paper). The SDs are the sample standard deviations of the 19 model averages.

    The groups are given (using the notation and group order in the paper in Figure 3A) by the script:

    modgrp = c(12,1,1,1,1,1,13,14,15,15,15,2,2,2,3,3,3,17,17,4,4,4,4,4,5,
    5,5,5,5,10,11,16,16,16,18,19,7,6,6,6,8,8,8,8,8,9,9,9,9)

    labs =c("CCSM3.0","GFDL-CM2.0","GFDL-CM2.1","GISS-EH","GISS-ER",
    "MIROC3.2(medres)","MIROC3.2(hires)","MRI-CGCM2.3.2","PCM",
    "UKMO-HadCM3","UKMO-HadGEM1","CCCma-CGCM3.1(T47)","CNRM-CM3",
    "CSIRO-Mk3.0","ECHAM5/MPI-OM","FGOALS-g1.0","GISS-AOM","INM-CM3.0","IPSL-CM4")

    modgrp = factor(modgrp,labels=labs)

    My scripts for extracting 1979 to 1999 and getting the regression coefficients were a bit clumsy so I won’t post them.
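
    For completeness, the grouping above can be fed straight to tapply. A minimal sketch (run_trends here is a hypothetical vector of the 49 per-run 1979-1999 slopes, in the same run order that modgrp assumes; my actual scripts were clumsier):

    model_means = tapply(run_trends, modgrp, mean)    # 19 per-model average trends
    c(multimodel.mean = mean(model_means), intermodel.sd = sd(model_means))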

  39. Steve McIntyre
    Posted Jan 29, 2009 at 12:21 PM | Permalink

    Roman, I agree with your diagnosis. Here’s how I went about getting the results. There’s a little more structure in Model than you may be using. Each item has a useful name and is a time series, not just a matrix. Also sapply and tapply are really elegant R functions for ragged arrays that take a little getting used to. The Team uses Fortran do loops even in R. (It’s like they speak R with a heavy accent).

    id=names(Model[[1]])
    test=sapply(id,function(x) strsplit(x,"/")[[1]])
    Info.Model=data.frame(model=test[1,],run=test[2,])
    row.names(Info.Model)=1:49

    # 1979-1999 trend of the tropical series for one run
    h=function(A) {
    temp=(time(A)>=1979)& (time(A)<2000)
    fm=lm(A[temp,"TROP"]~A[temp,"RTIME"])
    h=fm$coef[2];
    h}

    Info.Model$trend_T2=sapply(Model[[1]],h)
    Info.Model$trend_T2LT=sapply(Model[["T2LT"]],h)

    Info=data.frame(model=unique(Info.Model$model))
    Info$trend_T2=tapply(Info.Model$trend_T2,Info.Model$model,mean)
    Info$trend_T2LT=tapply(Info.Model$trend_T2LT,Info.Model$model,mean)

    # Santer 2008 Table 1 for comparison
    loc="http://data.climateaudit.org/data/models/santer_2008_table1.dat"
    santer=read.table(loc,skip=1)
    names(santer)=c("item","layer","trend","se","sd","r1","neff")
    row.names(santer)=paste(santer[,1],santer[,2],sep="_")
    santer=santer[,3:ncol(santer)]

    # inter-model standard deviations vs Table 1
    rbind(10*apply(Info[,2:3],2,sd),
    santer[c(16,12),1])
    # trend_T2 trend_T2LT
    #[1,] 0.09778996 0.09208094
    #[2,] 0.09800000 0.09200000

    # multi-model mean trends vs Table 1
    rbind(10*apply(Info[,2:3],2,mean),
    santer[c(15,11),1])
    # trend_T2 trend_T2LT
    #[1,] 0.1992421 0.2147754
    #[2,] 0.1990000 0.2150000

  40. David Jay
    Posted Jan 29, 2009 at 2:30 PM | Permalink

    A REAL programmer can write FORTRAN in any language!

    (quote from: “Real Programmers Don’t Do Pascal”)

  41. Stuart Harmon
    Posted Jan 29, 2009 at 4:21 PM | Permalink

    The year is 2025 and there’s been a great deal of wobble, tilt and solar activity all at once.
    A man is looking for a bar in Antarctica when he comes across a sign outside a pub:

    “A pint
    A pie
    and a Friendly Word”

    This seems fine so in he goes.

    Customer: “A pint please”

    Customer: “I’ll also have a pie”

    Silence

    Customer: “Landlord what about the Friendly Word”

    Landlord: “Don’t eat the pie”

  42. Urederra
    Posted Jan 27, 2009 at 4:55 PM | Permalink

    Re: Modelos climáticos y realidad. Santer. « PlazaMoyua.org (#34),

    algoreros,

    The correct spelling is agoreros, which is Spanish for ominous. Algoreros is just a witty pun.

  43. Ron Cram
    Posted Jan 28, 2009 at 11:31 AM | Permalink

    Re: Urederra (#34),

      Since I’m not a Spanish speaker, I had to look it up. But you’re right! That’s funny!! But I have a twisted sense of humor. I like puns.

8 Trackbacks

  1. […] January 27, 2009. Modelos climáticos y realidad. Santer. Posted by soil under Climate Change, algoreros, global warming | Tags: algoreros, global warming, Climate Change | Steve McIntyre has just submitted for publication in the International Journal of Climatology a paper on climate models and their relationship to real temperature data [–>]. […]

  2. […] Yesterday, Climate Audit announced the submission of a paper on tropospheric temperature trends (see). […]

  3. […] Climate Audit today, Steve McIntyrewrites about getting data about the Santer article: With all the problems for the new US […]

  4. […] I recounted this refusal and the progress of several FOI requests in several contemporary posts here here here and here. […]

  5. […] Jan 26, Ross and I submitted an article on Santer et al 2008, noted up the next day here ; I reported in the post that our submission included comments on the data refusal. On January 27, I […]

  6. […] Santer et al truncated their data at 1999, just at the end of a strong El Nino. Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data […]

  7. […] Santer et al truncated their data at 1999, just at the end of a strong El Nino. Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data […]

  8. […] Santer et al truncated their data at 1999, just at the end of a strong El Nino. Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data […]