A1B and 20CEN Models

Lucia did a recent post on the construction of IPCC Figure 9.5, which I’d also been looking at in light of the Santer model information, though with different issues in mind. The caption to IPCC Figure 9.5 says that they extended selected 20th century runs (the “20CEN” runs) with A1B runs in order to produce the graph shown below up to 2005. The splice is intriguing on a number of counts – not least of which is the first question: how’d they do it?

Here is the original version of IPCC AR4 Figure 9.5:

Original Caption: Figure 9.5a. Comparison between global mean surface temperature anomalies (°C) from observations (black) and AOGCM simulations forced with (a) both anthropogenic and natural forcings …. All data are shown as global mean temperature anomalies relative to the period 1901 to 1950, as observed (black, Hadley Centre/Climatic Research Unit gridded surface temperature data set (HadCRUT3); Brohan et al., 2006) and, in (a) as obtained from 58 simulations produced by 14 models with both anthropogenic and natural forcings. The multimodel ensemble mean is shown as a thick red curve and individual simulations are shown as thin yellow curves. Vertical grey lines indicate the timing of major volcanic events. Those simulations that ended before 2005 were extended to 2005 by using the first few years of the IPCC Special Report on Emission Scenarios (SRES) A1B scenario simulations that continued from the respective 20th-century simulations, where available. … The multi-model ensemble mean is shown as a thick blue curve and individual simulations are shown as thin blue curves. Simulations are selected that do not exhibit excessive drift in their control simulations (no more than 0.2°C per century). Each simulation was sampled so that coverage corresponds to that of the observations. Further details of the models included and the methodology for producing this figure are given in the Supplementary Material, Appendix 9.C. After Stott et al. (2006b).

I hadn’t really thought about it before, but, if I’d been asked, I would have assumed that the A1B simulations were done separately from the 20CEN simulations. If so, it’s not obvious how you’d go about splicing A1B and 20CEN simulations. For example, in many cases, there are multiple realizations of each model – how would you go about linking individual A1B runs to individual 20CEN runs?

There are other interesting aspects of this figure – including the selection of 20CEN runs: not all runs are used. AR4 Chapter 8 SI provides information on which runs were selected. I’ll return to this issue on another occasion. Today I want to walk through the splicing.

Over the past few days, I’ve scraped tropical (20S-20N) averages for all 78 20CEN runs (25 models) and all 57 A1B runs (24 models) from KNMI. (KNMI has some excellent tools, but they are still pretty labor-intensive; I’ve written a pretty little scraping program that eliminates 99% of the cut-and-paste drudgery.)

Interestingly, virtually all of the A1B runs start in the late 19th century – with the same start dates as the 20CEN runs. The two exceptions were GISS AOM and FGOALS – for these two models, A1B starts the year after 20CEN ends. This strongly suggested the possibility that individual A1B runs were associated with individual 20CEN runs and that a lexicon linking runs could be constructed. A hint exists in the AR4 Chapter 9 SI, page 9-7, where 28 20CEN runs are shown as being extended with A1B runs.

It appears that this is the case and that accordingly, there is a “natural” extension of the 20CEN runs with A1B runs.

However, there is not a one-to-one map between 20CEN and A1B models. Overall there are 25 20CEN models and 24 A1B models – the one missing A1B model is BCC CM1, which therefore cannot be extended. I wonder whether the absence of an A1B run for BCC CM1 might be a clerical miss – earlier this week, I notified KNMI that two PCM 20CEN runs at PCMDI were not on their system. They promptly responded that the runs were there, but the linking webpage hadn’t been updated (they promptly fixed things). Maybe the A1B run for BCC CM1 is around somewhere.

For 14 of the remaining 24 models, the number of 20CEN and A1B runs is the same (with the numbers ranging from 1 to 5). For each model, I did cross-correlations for the overlap period, and in every case there was one and only one A1B-20CEN map that had a correlation of 0.99 or so. In every case, the “natural” order was preserved in the map. While the values of the cross-correlation “diagonal” were around 0.99 or higher, the “off-diagonal” values were significantly lower, with the characteristics varying a lot from model to model. For example, CGCM3.1 had cross-correlations between “different” runs of around 0.9, while they were around 0.6 for CCSM3.0 and a very low 0.05 or so (with a couple of negative values) for ECHAM5.
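
For reference, here is the matching step sketched in R (my own notation: twentyc and a1b are monthly ts matrices of the runs for one model, one column per run, as returned by the scraping function posted in the comments below):

#Sketch of the run-matching step: correlate every 20CEN/A1B pair of runs
#over their common overlap period; the "natural" partner of each A1B run
#shows up as the ~0.99 entry in its column
match.runs=function(twentyc,a1b) {
both=ts.intersect(twentyc,a1b) #align the two sets on the overlap period
n20=ncol(twentyc); nA=ncol(a1b)
cc=cor(both[,1:n20],both[,n20+(1:nA)],use="pairwise.complete.obs")
dimnames(cc)=list(paste("20CEN",1:n20),paste("A1B",1:nA))
round(cc,3)
}
#cc=match.runs(twentyc,a1b)
#apply(cc,2,which.max) #the 20CEN partner of each A1B run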

Only one model (CCSM3.0) had more A1B runs (7) than 20CEN runs (6). This meant that only one out of the 57 A1B runs was left without a “natural” 20CEN link. Again, I wonder whether there might be another CCSM3.0 20CEN run somewhere.

Given the existence of this one-to-one map, it seemed odd that the correlations were 0.995 or even 0.9995, rather than 0.999999 or 1.000000.

This had an interesting explanation, which in turn confirmed the identity of the runs. My comparisons were done using “anomalies”, one of the KNMI options – and the reference period for the 20CEN and A1B datasets is different. As a result, the reference means used to create the anomalies differ between the 20CEN and A1B versions. KNMI also permits the retrieval of non-anomaly versions expressed in deg C. I spot-checked the CCCma series and these values were identical between versions to all decimal places, confirming that, in this case at least, the 20CEN and A1B runs were identical in the overlap period. The lack of perfect correlation resulted from the fact that the pattern of monthly normals was slightly different between the 20CEN version and the A1B version, resulting in a slight decorrelation.
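
To illustrate, here is a toy example (my own construction, not KNMI code): the identical monthly series, anomalized against two different reference periods, correlates at slightly less than 1, because the twelve monthly normals subtracted differ between the two versions.

#Toy demonstration: same data, different anomaly reference periods
set.seed(1)
x=ts(sin(2*pi*(1:1200)/12)+rnorm(1200,sd=0.3),start=1901,freq=12)
anom=function(x,start0,end0) {
ref=window(x,start=start0,end=c(end0,12))
normals=tapply(ref,cycle(ref),mean) #12 monthly normals from the reference period
x-normals[cycle(x)]
}
cor(anom(x,1901,1950),anom(x,1961,1990)) #close to, but below, 1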

The existence of this connection between 20CEN and A1B runs makes one scratch one’s head a little in trying to understand exactly what the IPCC authors meant by saying that the A1B runs were an “extension” of the corresponding 20CEN runs, if, as appears to be the case, they are actually alter egos of the same run. (One odd exception to the IPCC “extensions”: PCM A1B runs are available at KNMI but were not used to “extend” the corresponding 20CEN runs.)

There’s an interesting connection to Santer in this, which I’ll visit on another occasion.


43 Comments

  1. Andrew
    Posted May 15, 2009 at 6:42 PM | Permalink

    So, if I’m understanding correctly, they appear to be suggesting that the A1B scenarios of future warming are actually just the 20CEN runs continued forward with assumed emissions etc.? It’s just that one would think that they would have been done independently… Curiouser and curiouser…

    • Steve McIntyre
      Posted May 15, 2009 at 6:46 PM | Permalink

      Re: Andrew (#1),

      It sure looks that way. I don’t think that there’s anything “wrong” with that. In fact, I’d been getting increasingly annoyed about the truncation of “historical” models in 1999, so that there were seemingly no out-of-market comparanda. But it looks as though the two series can be linked quite plausibly.

      • Andrew
        Posted May 15, 2009 at 6:55 PM | Permalink

        Re: Steve McIntyre (#3), It’s not that it’s “wrong”, just odd. See, I would think that they would have run the models based off of historical data only as a diagnostic test, then run the models again with speculative emissions scenarios, which they presumably didn’t have access to when the models were being designed. It’s just contrary to my previous understanding of what they had done.

  2. Steve McIntyre
    Posted May 15, 2009 at 6:42 PM | Permalink

    For reference, here is code for scraping the KNMI 20CEN runs. This should work turnkey:

    ##FUNCTIONS
    #scrape all available runs for a given model and scenario from KNMI Climate Explorer
    read.ensemble.runs=function(model,scenario,prefix=myprefix,region="GL",suffix="&standardunits=true",method="default") {
    if (region=="TRP") {suffix="&standardunits=true&lat1=-20&lat2=20&lon1=0&lon2=360";suffix2="0-360E_-20-20N"}
    if (region=="NH") {suffix="&standardunits=true&lat1=0&lat2=90";suffix2="0-360E_0-90N"}
    if (region=="SH") {suffix="&standardunits=true&lat1=-90&lat2=0";suffix2="0-360E_-90-0N"}
    if (region=="arctic") {suffix="&standardunits=true&lat1=60&lat2=90";suffix2="0-360E_60-90N"}
    if (region=="antarctic") {suffix="&standardunits=true&lat1=-90&lat2=-60";suffix2="0-360E_-90--60N"}
    if (region=="GL") {suffix="&standardunits=true";suffix2="0-360E_-90-90N"}

    url=paste(paste(prefix,model,scenario,sep="_"),suffix,sep="");url
    my_info=download_html(url)
    #the third "raw data" link on the page points to the ensemble data file
    y=my_info[grep("raw data",my_info)][3];y
    n=nchar(y)
    loc=file.path("http://climexp.knmi.nl",substr(y,10,n-16))
    if(method=="adhoc") loc=gsub("N__a","N_na",loc)
    Sys.sleep(4) #be polite to the KNMI server
    test=readLines(loc);test[1:5]
    index=grep("ensemble",test) #e.g. 1 1817 3633 5449 7265
    K=length(index)
    if(K>0) { index=c(index,length(test)+1)
    writeLines(substr(test,1,24),"temp.dat")
    test=NULL
    #one block per ensemble member: collect the runs into a multivariate monthly ts
    for (i in 1:K) {
    working=read.table("temp.dat",skip=index[i]+1,nrow=(index[i+1]-2)-(index[i]+1))
    test=ts.union(test,ts(working[,2],start=c(floor(working[1,1]),round(12*(working[1,1]%%1),0)+1),freq=12))
    }
    dimnames(test)[[2]]=1:K
    read.ensemble.runs=test} else {
    #single-run case: no "ensemble" separators in the file
    writeLines(substr(test,1,24),"temp.dat")
    working=read.table("temp.dat",nrow=length(test)-2)
    read.ensemble.runs=ts(as.matrix(working[,2]),start=c(floor(working[1,1]),round(12*(working[1,1]%%1),0)+1),freq=12)
    }
    read.ensemble.runs
    }

    download_html=function(url) {
    download.file(url,"temp.html")
    html_handle <- file("temp.html","rt")
    html_data <- readLines(html_handle)
    close(html_handle)
    unlink("temp.html")
    return(html_data)
    }

    ###INFO
    knmi.info=read.csv("http://data.climateaudit.org/data/models/knmi.info.csv",sep="\t")

    ##SCRAPING
    email="yourname@you.ca" #
    #I USED MY OWN EMAIL WHICH IS REGISTERED
    myprefix=paste("http://climexp.knmi.nl/get_index.cgi?email=",email,"&field=tas",sep="")
    scenario="20c3m"
    prefix=myprefix

    knmi.info[knmi.info$scenario==scenario,]
    # model alias scenario Runs
    #1 BCC CM1 bcc_cm1 20c3m 4
    #2 BCCR BCM2.0 bccr_bcm2_0 20c3m 1
    #4 CGCM3.1 (T47) cccma_cgcm3_1 20c3m 5
    #9 CGCM3.1 (T63) cccma_cgcm3_1_t63 20c3m 1

    ##AN EXAMPLE
    i=1;#i=23#i=2
    model=knmi.info[knmi.info$scenario==scenario,][i,"alias"];model
    test=try(read.ensemble.runs(model,scenario,region="TRP"));
    test[1:10]

    ##SCRAPING
    target=knmi.info[knmi.info$scenario==scenario,];M=nrow(target)
    #this gets IDs of all applicable runs

    ensemble.trp=rep( list(NA),M);names(ensemble.trp)=target$model
    for(i in 1:M) { ensemble.trp[[i]]=try(read.ensemble.runs(model=target$alias[i],scenario,region="TRP"));
    Sys.sleep(2)
    }

    sapply(ensemble.trp,dim)
    # BCC CM1 BCCR BCM2.0 CGCM3.1 (T47) CGCM3.1 (T63) CNRM CM3 CSIRO Mk3.0 CSIRO Mk3.5 GFDL CM2.0 GFDL CM2.1 GISS AOM GISS EH GISS ER
    #[1,] 1597 1800 1813 1812 1680 1561 1561 1681 1681 1813 1441 2653
    #[2,] 2 1 5 1 1 3 3 3 3 2 5 9

    #save(ensemble.trp,file="d:/climate/data/models/knmi/ensemble.trp.tab")
    #uploaded to CA/data/models/ensemble.trp.tab

  3. Chad
    Posted May 15, 2009 at 8:21 PM | Permalink

    Steve,

    Interestingly, virtually all of the A1B runs start in the late 19th century – with the same start dates as the 20CEN runs.

    If you download SRES A1B for any model and there are multiple runs for 20C3M, it will not compute the ensemble mean for the 20C3M period (which is what I use). It will just take the first 20C3M run and splice it with A1B.

    As far as I know, the 20c3m runs are initialized from a control run. When they end, the next experiment begins (commit, SRES, etc…)

    • Steve McIntyre
      Posted May 15, 2009 at 9:11 PM | Permalink

      Re: Chad (#5),

      If you download SRES A1B for any model and there are multiple runs for 20C3M, it will not compute the ensemble mean for the 20C3M period (which is what I use). It will just take the first 20C3M run and splice it with A1B.

      It looks a bit different than that to me – the 5 A1B runs are matched to 5 different 20CEN runs – it doesn’t just find the “first” run for each model – it finds a different run for each A1B run.

      The explanation could be simple and obvious but the online documentation isn’t very illuminating.

      • Dave Dardinger
        Posted May 15, 2009 at 9:25 PM | Permalink

        Re: Steve McIntyre (#7),

        The explanation could be simple and obvious but the online documentation isn’t very illuminating.

        Perhaps I’m missing something. Why can’t previously run models be re-run with different endpoints? Presumably the portions of overlap would be identical unless there are random factors in the models, which wouldn’t seem to me to make much sense (though of course nature itself has randomness at the core of the physics that runs it).

      • Chad
        Posted May 15, 2009 at 9:39 PM | Permalink

        Re: Steve McIntyre (#7),

        It appears that what I said doesn’t apply to all the models as I implied. I had discovered what I stated when I was checking to see if my calculations with Matlab using gridded data matched what was on Climate Explorer. I was dealing with HadCM3 to be specific. It had two 20c3m runs and one A1B run. I was wrong to assume it applied to all the other models.

  4. Chad
    Posted May 15, 2009 at 8:56 PM | Permalink

    Here’s some interesting information from the netCDF file for the CNRM 20c3m run:

    “Experiment was initiated from year 111 of the control simulation CT1 (nominal year 1970), when equilibrium was reached (corresponds to nominal year 1860 of 20C3M XX1). Forcing agents included: CO2,CH4,N2O,O3,CFC11(including other CFCs and HFCs),CFC12; sulfate(Boucher),BC,sea salt,desert dust aerosols.”

    I had wondered myself why the 20c3m didn’t start in Jan-1900. This explains why at least this particular 20c3m run begins a few decades earlier.

  5. Posted May 16, 2009 at 5:48 AM | Permalink

    Lucia

    the 5 A1B runs are matched to 5 different 20CEN runs – it doesn’t just find the “first” run for each model

    Yes. This is my impression.

    What they do to create a “series” is this:

    1) Run one long control run. (There is only ever one of these. It’s at a constant level of forcing.)
    2) Pick a year from the control run to “start” the 20th century runs. Call that year whatever… (usually 1900).
    3) Begin varying forcings as from the new start year (i.e. 1900) and go forward. This is 20th century run “n”
    4) End at Dec 1999 (usually; a few go longer). At this year, they start applying a scenario to the 20th century run. This would be scenario run “n”, and it has to be connected to 20th century run “n”. That way any and all model “La Niñas”, “PDOs” etc. match. (I plotted all connected scenarios to make sure I didn’t have any obvious discontinuities in a download.)

    They can and do sometimes apply more than one SRES to the end of 20th century run “n”. So, 20th century run 0 may continue with SRES a1b and also SRES a2.

    Then, they repeat this for run “n+1”. But when they run a different 20th century run, they start from a different year in the control run. This randomizes the “start” point to ensure they begin during different portions of any ENSO, PDO, etc. cycles (or whatever one might call them in models).

    Sometimes, they end at step 3, so there are more 20th century runs than scenario runs.

    But you do need to connect the SRES runs to the 20th century runs correctly. PCMDI does have documentation describing this. (It is not organized stupendously conveniently. You need to read the information from each modeling group, find it and make your own convenient table. But… on the other hand, the information is there.)

    But… I just let Geert Jan do it at KNMI!

    I think the two unconnected scenarios at KNMI are just a few he overlooked connecting. (He connected them all around December, and is only funded part time on this. So, yeah. Sometimes things aren’t 100% convenient for users. But it’s awfully convenient for me, so on the balance, I gotta love KNMI.)

    I asked Geert Jan how to figure out which projection latches onto which 20th century run. He says the NetCDF files contain information that helps if you later go to PCMDI to find which 20th century run slaps onto which SRES projection. (Right now, I admit I just guessed and linked 0 to 0 and 1 to 1. But I need to check, especially for some unconnected A2 runs, because if I apply my guess, there are “jumps” in temperature at the connection point. Since I only guessed the connection, I assume my guess was wrong rather than that the model temperature misbehaved. Needless to say, I haven’t shown any of these “jumpy” models at the blog, or used them in any analysis!)
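
    For concreteness, a check along these lines would do (a sketch only; x20 and xa1b stand for a single 20th century run and its candidate SRES continuation, as monthly ts objects in deg C):

    #Sketch of the jump check: compare the step across the 20CEN/SRES join
    #to the typical month-to-month step in the joined series
    splice.jump=function(x20,xa1b) {
    joined=ts(c(x20,window(xa1b,start=tsp(x20)[2]+1/12)),start=tsp(x20)[1],freq=12)
    d=diff(joined)
    c(join.step=abs(d[length(x20)]),typical.step=mad(d))
    }
    #a join step many times the typical month-to-month step suggests a mismatched pair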

    • Steve McIntyre
      Posted May 16, 2009 at 6:34 AM | Permalink

      Re: lucia (#10),

      I agree with you that Geert’s plug-ins are very good (though I also think that my scraping plug-in is an enhancement :). Also Geert has been cheerful and responsive to any inquiries – an anti-Santer, so to speak.

  6. Posted May 16, 2009 at 6:50 AM | Permalink

    Your scraping code is definitely a plus! Geert Jan is very, very helpful.

  7. David Smith
    Posted May 16, 2009 at 7:12 AM | Permalink

    Could someone clarify how hindcasts, like the 20th Century runs, handle sea surface temperature (SST)? Do the models start with, say, only the January 1, 1900 SST and then the SST and air temperature evolve as time moves forward? Or, are the historical SST values input throughout the model run (1900, 1901, 1902, 1903, etc.)?

    I’m pretty sure that some model hindcasts used historical SST as input. Since global air temperature won’t vary widely from global SST the air temperature output trend really reflected the input historical SST trend. Perhaps that practice ended as the models evolved.

  8. Posted May 16, 2009 at 8:08 AM | Permalink

    David Smith–
    They don’t do either of the things you suggest. They actually do something conceptually much better. (Whether or not it works out better one could debate, but in fact, we can’t know.)

    Modeling groups do a variety of things. But mostly, they initiate the control runs with empirical data describing more or less the temperature they believe matched the oceans back in 18XX. Then they apply the forcings they think applied in 18XX and run that for many, many years. (I don’t know if it’s 1880, 1899 etc. But it’s pretty far back.) This is called the “control run”.

    So, in principle, after running for a long time, they have a “planet” that exhibits more-or-less equilibrium conditions under forcings in whatever far back year they began. This new solution no longer matches the historical value per se. If a model were perfect, and they ran the model long enough, then the new value would represent what the real earth might have looked like under the frozen forcings the modelers applied. (Bear in mind: the real earth did not experience frozen forcings, and even the applied forcings could be mistaken for 18XX. Still, if they ran the model long enough, it will have ‘forgotten’ the initial conditions, and the problem becomes a boundary condition problem for that model.)

    When starting the 20th century runs, they pick a year from the “control run”. (It has to be a year that is fairly far along in the series of “control runs”. That’s why you read things like the equivalent of 1990 etc. Some of these runs are very long, so it can be the equivalent of 2200 etc.)

    Once they are running the 20th century, there is zero input from measured temperature of the earth in the files used to run the models. The only way measured data can influence the results is if knowledge of the correct answer influences the modelers choice for the magnitude of forcings. There is some evidence that modelers may prefer levels of aerosol forcing that tend to make their models give better matches to observations. But, they don’t input temperatures.

  9. Julius St Swithin
    Posted May 16, 2009 at 8:10 AM | Permalink

    I was first pointed to the IPCC AR4 Figure 9.5a by “Real Climate”. When I pointed out it showed that the models badly underestimated early-20th-century warming and captured none of the mid-20th-century cooling until the Agung volcano in the 1960s, I got no further response. This figure is paired with 9.5b, which shows the 20th century simulation with no increase in CO2. The pair of figures occupies about 1/4 of a page. Given the trillions of dollars of expenditure predicated on the accuracy of the models, it is perhaps surprising that so little attention is given to their performance. For more details on how the models performed, have a look at:

    http://www.climatedata.info/Temperature/temperature.html

    One interesting thing this site shows is some simulations of temperature in degrees C rather than as anomalies relative to 1901-50. The difference between the “warmest” and “coldest” models is around 1.5 C.

    The same site also has data on precipitation:

    http://www.climatedata.info/Precipitation/precipitation.html

    Again they show that there is a difference between the “wettest” and “driest” models. This is of the order of 200 mm/year.

    • Steve McIntyre
      Posted May 16, 2009 at 8:40 AM | Permalink

      Re: Julius St Swithin (#15), while there’s lots to criticize in this field, I don’t think it’s reasonable to say that modelers pay little attention to the accuracy of their performance. In my reading, it seems like they are attentive to their performance. Whether they oversell the accuracy is a different question – one that I’m not in a position to comment on at present.

    • BarryW
      Posted May 16, 2009 at 9:38 AM | Permalink

      Re: Julius St Swithin (#15),

      The observed data appear to have a 60-year wave (peaks at 1880, 1940, 2000) that does not appear to be picked up in the simulations. The fit of the models to the latter 20th century may be due just to their trend aligning with the upslope of the sinusoid.

      Trends that are started in the troughs (say 1900 or 1970) are going to be biased high. Unfortunately, the satellite data start in 1979, which means, assuming I’m correct, that even the satellite data will show a higher trend than is really happening. This would also seem to mean that the present downturn is probably related to the negative slope of the present wave and not a change in the underlying trend.

  10. Steve McIntyre
    Posted May 16, 2009 at 8:52 AM | Permalink

    While Geert’s program is a valuable resource, it looks to me like there is some debugging still to do (a caveat that he also expresses.) For example, here is the MIROC medres A1B anomaly for the tropics (this is from the radio buttons and not from my scrape). Something is wrong with the centering.

    It looks to me like the non-anomaly version might be a little safer to work with – then make your own anomalies.

  11. Julius St Swithin
    Posted May 16, 2009 at 10:51 AM | Permalink

    Steve (No 16). It was not my intention to imply that climate modellers are not concerned with accuracy – I am sure they are. My concern is that the information provided to assess the accuracy of the models is lacking compared to the information available on the impacts. The words “accuracy” and “evaluation” do not appear in the synthesis report; chapter 8 – evaluation – is one of the shortest. This is balanced to some extent by the large amount of supplementary material to chapter 8, though in this section most of the results are in the form of contour maps relating to 1980 to 1999 for sea and 1961 to 1990 for land. Figure 9.5a (above) suggests that this was the period when the models were at their best. It would have been interesting to have seen comparable results for other periods.

  12. Steve McIntyre
    Posted May 16, 2009 at 11:16 AM | Permalink

    I sent the following note to Geert of KNMI on nits that I noticed with his software. You’d think that they’d do this sort of reconciliation themselves as it’s annoying to have to re-examine every data set to see why they don’t tie together:

    Geert, a few more nits from examining the available runs relative to PCMDI. (This is not to diminish the value of your software which is excellent.)
    20CEN
    – you have 4 BCC CM1 runs which don’t seem to have a counterpart at PCMDI
    – PCMDI has 10 CCSM3.0 runs, while you have 6
    – PCMDI has 5 GFDL 2.1 runs, while you have 3
    – PCMDI has 2 IPSL CM4 runs, while you have 1

    A1B
    – PCMDI has 9 CCSM3.0 runs, while you have 7
    – PCMDI has 4 GISS EH runs, while you have 3
    – PCMDI has 5 PCM runs, while you have 1
    Is this an indexing issue as with the PCM 20CEN runs? If so, your automated updating of your pointers needs a little tweaking.

    TOS: There are a lot of TOS datasets at PCMDI that aren’t available at KNMI.

    Anomaly series: something weird is happening with your MIROC med-res anomaly series – one of the runs gets displaced relative to the other runs. It looks like it’s not picking up the right climatology.

    Regards, Steve McIntyre

  13. Posted May 16, 2009 at 1:38 PM | Permalink

    SteveM-
    I pointed that issue out to Geert. There is something wrong with his package for creating anomalies. If you download in deg C and compute anomalies based on the individual run, you don’t see those weird splits in the temperature anomalies.

  14. Jesper
    Posted May 16, 2009 at 1:53 PM | Permalink

    Simulations are selected that do not exhibit excessive drift in their control simulations (no more than 0.2°C per century)

    This smells like another case of cherrypicking to try to demonstrate a preconceived conclusion while artificially narrowing error bars.

    We eliminate all models that show climate change from effects other than CO2, and….voila!….given the observed climate change, you see that CO2 is the only effect! Brilliant!

    • Willis Eschenbach
      Posted May 16, 2009 at 2:56 PM | Permalink

      Re: Jesper (#22), it’s not quite that bad. The “control runs” merely hold all of the inputs constant. They’re not eliminating “models that show climate change from effects other than CO2.” They are eliminating models that “drift” when the “external forcings” are held stable.

      My objection to that process is that we have little information on the natural variability of the earth in the absence of external forcings. From the claims of the AGW supporters, I deduce that “natural variability” is a) big enough to overpower any CO2 effects and to explain any model vagaries, and yet b) small enough to ignore at all other times.

      However, since we don’t know how much the earth might “drift” in the absence of changes in the known forcings, as you point out, the cutoff is both suspect and arbitrary. Curiously, they don’t require that the models give realistic temperatures, only that they don’t drift …

      w.

      • Scott Brim
        Posted May 16, 2009 at 4:43 PM | Permalink

        Re: Willis Eschenbach (#23)

        Willis Eschenbach: From the claims of the AGW supporters, I deduce that “natural variability” is a) big enough to overpower any CO2 effects and to explain any model vagaries, and yet b) small enough to ignore at all other times.

        Now, it may be just my impression, but is there not a further tendency to attribute downward patterns in observed temperatures to natural variability, while at the same time attributing upward patterns only to CO2 – i.e. natural variability can lower observed temperatures but it can’t raise them?

        Suppose one takes this approach as Standard Operating Procedure. What then are the implications for judging the validity of NOAA’s candidate replacement for the Hockey Stick, should the stick ever suffer fatal injury at the hands of the deniers?

        • Andrew
          Posted May 16, 2009 at 4:50 PM | Permalink

          Re: Scott Brim (#24),

          Now, it may be just my impression, but is there not a further tendency to attribute downward patterns in observed temperatures to natural variability, while at the same time attributing upward patterns only to CO2 – i.e. natural variability can lower observed temperatures but it can’t raise them?

          Seems that way. And the figure this post starts with is the “smoking gun” that underlies that argument.

  15. Curt Covey
    Posted May 16, 2009 at 5:08 PM | Permalink

    Steve,

    I was surprised to find that output from the climate models contributing to the IPCC AR4 is available from Geert Jan van Oldenborgh of KNMI. Speaking for myself, the surprise was a pleasant one because the database our group (PCMDI) holds was originally intended only for people familiar with the netCDF file format, and I suspect that a large fraction of our newer users find it confusing. Does Geert’s interface get around this problem?

    In any case, the PCMDI remains the only “official” repository of IPCC AR4 climate model output in uniform format. This output is now available from PCMDI to anyone promising non-commercial use (i.e. you promise to publish your results openly rather than sell them for profit). It would be a good idea for people making use of data from Geert to also check the PCMDI archive if they have questions about the data — as some of the foregoing CA bloggers have done already.

    Though I don’t have time to handle questions from a huge number of bloggers, you may e-mail me directly at my unpublished address if you think I can clarify aspects of the PCMDI archive.

    With best wishes,
    Curt

    • Steve McIntyre
      Posted May 16, 2009 at 6:33 PM | Permalink

      Re: Curt Covey (#26),

      Speaking for myself, the surprise was a pleasant one because the database our group (PCMDI) holds was originally intended only for people familiar with the netCDF file format, and I suspect that a large fraction of our newer users find it confusing. Does Geert’s interface get around this problem?

      Speaking for myself, netCDF is NOT a problem. There is an excellent R package, ncdf (from NCAR, I think), that effortlessly acquires netCDF files into R.
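
      For anyone who hasn’t used it, a minimal sketch (the file name is a placeholder; “tas” is the CMIP3 surface air temperature variable):

      #minimal sketch of reading a PCMDI netCDF file with the ncdf package
      library(ncdf)
      nc=open.ncdf("tas_A1.nc") #placeholder file name
      tas=get.var.ncdf(nc,"tas") #array: lon x lat x time
      lon=get.var.ncdf(nc,"lon"); lat=get.var.ncdf(nc,"lat")
      close.ncdf(nc)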

      What Geert’s facility provides is a usable extraction format that does not require the downloading of GB of data when you’re only interested in KB of monthly data over defined regions.

      Against his layout is the fact that, as is, it requires a lot of clerical work to cut and paste information from radio buttons. I’ve done a script that emulates the radio button hits and scrapes the data, but really this is the sort of software that the providers should be providing.

  16. Posted May 16, 2009 at 5:38 PM | Permalink

    Curt–
    Geert Jan van Oldenborgh provides data in several formats including plain text. It’s very convenient. The main conveniences are the tools that permit people to develop monthly mean time series applying a range of criteria.

    Out of curiosity, who did you envision as users of PCMDI? What sort of information did you think they would want to access? From my point of view, the main problem with the resource is that one has to download terabytes of data to learn even the simplest things. On the other hand, I guess this permits people doing oddball things flexibility. It’s also unnecessarily time-consuming to find out information, because quite a bit seems scattered all over the place.

    • Chad
      Posted May 16, 2009 at 6:21 PM | Permalink

      Re: lucia (#27),
      I wouldn’t agree that one needs to get terabytes of data to learn even the simplest things. If that were the case, it would be next to impossible for anyone not possessing an ungodly amount of memory and processing power to be able to do anything meaningful with the data. Is it a lot of data? You betcha! The variables ta, ts and tas that I’ve been archiving for the past few days probably take up about 100 GB in total (if I downloaded it for all years wrt ta). Climate Explorer’s convenience comes at a price – you’re constrained by what you can do with the data. If you want global, hemispheric, or zonal averages, it’s great. But if you’re interested in looking at very specific regions or want to (like I did) mask out data to mimic observational coverage, you’re out of luck. Great tool nonetheless.
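
      The masking itself is simple once the model and observational fields are on a common grid (a sketch, with my own names; model and obs are lon x lat x time arrays, with NA where observations are missing):

      #blank model cells wherever observations have no data, so both fields
      #average over identical coverage
      mask.to.obs=function(model,obs) {
      model[is.na(obs)]=NA
      model
      }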

      • Steve McIntyre
        Posted May 16, 2009 at 6:37 PM | Permalink

        Re: Chad (#28),

        Chad, it only took me about 10 minutes to scrape TRP tas averages for all 78 20CEN runs at KNMI using my scraping program to ping Geert’s radio buttons. That demonstrates what is feasible for PCMDI and what should be their target. The final data set of interest in my case was less than 1MB. Maybe the tas data is only 100 GB and not 1TB, but I’m not interested in downloading 100 GB to obtain less than 1MB, when the server can extract the information for me.

        • Chad
          Posted May 16, 2009 at 7:56 PM | Permalink

          Re: Steve McIntyre (#30),
          Not that it really matters, but the tas data is only 6 GB. But I don’t see why PCMDI couldn’t have an interface comparable or better than KNMI for relatively simple operations.

    • Curt Covey
      Posted May 16, 2009 at 6:38 PM | Permalink

      Re: lucia (#27), the answer to the question “who did you envision as users of [climate model output from] PCMDI?” is that we expected at most a couple hundred specialists publishing papers that would be subsequently reviewed and assessed by Working Group 1 of the IPCC. (Working Group 1 reports on the science of climate change; Working Groups 2 and 3 report on impacts of climate change and options for mitigating or adapting to climate change, respectively.) We therefore anticipated that our users would be familiar with climate models, with their large volume of output, and with the netCDF format which is commonly used in meteorology and climatology.

      In a way we became victims of our own popularity as our user base grew to several thousand people around the world.

  17. Posted May 16, 2009 at 6:50 PM | Permalink

    Curt–
    OK, that makes sense. I was aware netCDF is commonly used in climatology. As I said, I don’t think netCDF is a big issue for anyone who might want access. You learn the format, then you deal with it. Geert Jan’s decision to offer multiple formats is nice – but the key attraction is the interfaces creating monthly data.

    I agree the current users are not those running models. I was just wondering who you initially envisioned accessing the data. It does seem designed for a few hundred scientists, not “the world”. It’s not organized for “the world”– which is of course fine. It’s just a matter of what was intended when proposed and funded.

    Chad – Yes. At KNMI you can run the script to get the sort of data Geert Jan envisioned in advance. It’s fairly flexible, but not infinitely so. I’m not trying to express disapproval of PCMDI; I was just wondering what its original mission was.

  18. Geoff Sherrington
    Posted May 16, 2009 at 7:23 PM | Permalink

    At the risk of repetition, please ensure the integrity of the raw data before investing heavily in thought power and computation. Can’t make a silk purse from a sow’s ear.

    I consider that KNMI has made a major contribution, as Steve acknowledges. It has not been an easy task. I published this before (in the CA ‘Downloading KNMI Model Data’ post, Dec 23, 2008, comment 31) and was correctly rebuked by Geert Jan for posting private email content. I have apologised, but the essence of the post remains and is repeated:
    In emails, Geert Jan van Oldenborgh of KNMI is most cooperative, but is part of a team that has taken on a quite large task that no single person could possibly answer in detail off the cuff.

    He noted privately that the data mismatch I showed allegedly originated from NOAA, and that “This way the warming trend in Australia is underestimated in the GISS and NCDC estimates that depend on GHCN.”

    “It is unfortunate that there is no formal bug-tracking system in place for climate data, something like Bugzilla for programming. The chain is sufficiently complicated (BOM => NCDC => KNMI => you) to warrant a formal system IMHO. Also, dealing with these things by hand for O(10^5) time series is undoable.”
    Which more or less says what SM has been saying for some time now.

    • Geoff Sherrington
      Posted May 16, 2009 at 7:55 PM | Permalink

      Re: Geoff Sherrington (#33),

      Sorry, hit the “GO” button in midstream. Ironic that I have so many errors in a lecture about errors.

      In more than half the (almost random) temperature comparisons I have made between BOM Australian rural data and NH records in databases derived therefrom, there have been discrepancies, some of the order of 1 deg C per year for several years in a row.

      Also, some Australian southern stations like Macquarie Island, plus Casey, Davis and Mawson in Antarctica have shown nil to tiny changes in the past 40+ years and an insignificant 1998 spike. Strictly, if models cannot reconcile these differences from the global mean, they should be put in the “questionable” category. Logically, it would mean that other areas of the world had a large temperature increase to constrain the mean, and in places a huge 1998 peak. So, is there justification to call them “global” models when they do not seem to represent findings on the whole globe?

  19. Posted May 17, 2009 at 6:16 AM | Permalink

    Chad– I suspect they don’t have the interface because they weren’t funded to do that. The way government programs work is more or less like this:
    1) Someone proposes an idea and persuades a program manager to fund it.
    2) They get funded to do certain specific things. That’s their scope– they have deliverables.
    3) Once they’ve hammered out that scope, they do that.

    There can be some flexibility. There is more flexibility in a pure science program and less in other programs. But if PCMDI didn’t propose creating interfaces, they aren’t going to exist. Given who they foresaw as users, they probably agreed on a scope of just collecting certain information, storing it and providing a web interface to let people download nearly unprocessed model data.

    Of course, then the real world hit them. It turns out that loads and loads of people want to know something about what models predict.

    • Steve McIntyre
      Posted May 17, 2009 at 8:35 AM | Permalink

      Re: lucia (#36),

      Very reasonable comments. You’re right about considering what they were trying to do. And from that perspective, there’s no point objecting to their archive not meeting the demands that people subsequently placed on it.

      Having said that, one can perhaps ask whether they have adequately responded to the demands subsequently placed on the archive – leaving aside Santer’s repugnant attitude.

      Clearly climate is a major public concern and there are many people interested in the model runs who are not interested in downloading GB of data to get KB of what is relevant to them. They’ve known about this for a while. Most small companies would respond to changing market demand and arguably PCMDI hasn’t responded as nimbly as they might have. But let’s think about how they can improve things, as opposed to backbiting.

      Improvements don’t need to cost very much. I’m sure that they could implement Geert’s interface at negligible cost – tweaking it a bit while the file was open. Not that I particularly like Java interfaces.

      They could also implement something that provides the service of my scraping program in R to automate the radio buttons. Why not their own R package?

      These sorts of things would improve the quality of service to outside parties. Maybe they should spend more money on things like that – recognizing the obligation that comes with their unique franchise as a data archive – and less on carrying out their own analyses of the data (the Santer sort of thing) where there’s nothing particularly unique about their services and where they don’t do a particularly good job – sometimes, as in Santer 2008, not even a satisfactory job.

  20. Kenneth Fritsch
    Posted May 17, 2009 at 9:50 AM | Permalink

    The IPCC figure above showing 58 model runs as thin yellow lines would appear to me to be obscuring that for which a thinking person might be looking. The yellow blur implies some rather inexact method of averaging of the modeled runs. It would be an opportunistic choice if one were attempting to show a lack of difference between observed and modeled results.

    If one were instead interested in differences between individual modeled and observed results, those differences would be plotted and plotted such that one could readily distinguish individual model run comparisons. That is not to say that the IPCC graphs do not provide information – about the mind set of those in charge at the IPCC.

    • Steve McIntyre
      Posted May 17, 2009 at 10:21 AM | Permalink

      Re: Kenneth Fritsch (#38),
      Kenneth, I agree. These graphs are, in their way, worse than the spaghetti graphs. I’ve been trying to think of an appropriate name for these annoying graphs – perhaps “porridge graphs” or “mashed potato graphs”, capturing the idea of the loss of detail.

      • Andrew
        Posted May 18, 2009 at 1:52 PM | Permalink

        Re: Steve McIntyre (#39), Thinking outside the food box might help. I get “fog” as the first thing that comes to my mind.

        • Posted May 19, 2009 at 10:23 AM | Permalink

          Re: Andrew (#41), “Fog Tunnel” or “fog band” is what I get. It clouds the issue to make it almost invisible – the real temperature data with its unmistakable up-down-up over the 20th century is stuck into a fog band wide enough to suggest visually that the measurements were bad and really the trend is steadily up. And the bright yellow fog neatly hides how exaggeration of the volcanic effect of Agung is used to hide the 1940-1970 fall, while the telltale spikes of Pinatubo and El Chichon show how unwarranted the exaggeration is.

          Hey, perhaps it’s a yellow submarine… :D

  21. Steve McIntyre
    Posted May 17, 2009 at 10:42 AM | Permalink

    Santer’s article has a section on lapse rates, requiring the comparison of modeled surface temperatures and modeled T2LT temperatures.

    While PCMDI archived Santer’s T2 and T2LT work product (including the goofy CRM version), they didn’t archive the corresponding surface data.

    Although most people presume that Santer would use “conventional” temperature data sets (HadCRU, GISS, NOAA), he instead used things like ERSST sea surface and HadISST sea surface temperature. While KNMI provides air temperature (tas), its provision of SST (tos) is spotty.

    Again this can doubtless be extracted from PCMDI, but once again we’re into GB of data merely to get modeled SST data which is less than 1MB in total.

    One presumes that Santer’s results should, to some extent, apply to air temperature and this can be extracted from KNMI, as I’ve done. I can get very high cross-correlations for everything except the goofy CNRM series and an annoying problem with the CCSM3.0 series, where it looks like:
    1) KNMI is missing a 20CEN version available in A1B. There is one more A1B series than 20CEN series at KNMI. Santer b30.030b has a 0.95+ correlation to A1B run 2 but not to any 20CEN version at KNMI. So it looks like KNMI is clerically missing CCSM3.0 20CEN run 2 for some reason.

    2) Santer b30.030d does not have a 0.70+ correlation with any KNMI version. There are a few CCSM series at PCMDI that are not at KNMI. It looks like Santer used a version not archived at KNMI, making use of this run impractical without a linked surface version.

  22. sky
    Posted May 18, 2009 at 8:57 PM | Permalink

    That model runs produce artificial time series which may have little connection to reality is something that most here at CA know quite well. But those without experience with Surface Marine Observations made by ships of opportunity may not know that the marine component of HadCRUT3 is in many respects just as artificial – in fact, not even a proper time series.

    Outside well-traveled sea-lanes, largely in the NH, SMOs are so sparse and irregularly made that the idea of constructing average SST monthly time series for each 5-degree Marsden square is laughable. In many such squares there’s barely adequate data even for reliable monthly climatic summaries. QC is totally absent and it’s not uncommon to find ship reports where the longitude has been flipped in polarity. Without proper daily averages for each day of the month, strong temporal biases and aliasing are introduced. The locational bias in the coarse grid can be enormous and pass unrecognized. And then, of course, there’s the “correction” applied by Hadley to “homogenize” older data sampled by wooden buckets with more modern engine-intake measurements. Guess which way the trend of the “bucket correction” goes.

    Anyone who believes that global temperatures rose sharply from a deep low in 1900 to a peak in the 1940s, bottomed in 1956, and have been rising ever since is simply unacquainted with SMO reporting practice. Artificial series from SMOs constitute the bulk of the HadCRUT3 gridwork data.
