Rahmsmoothing and the Canadian GCM

Quite aside from the realclimatescientistsmoothingalgorithmparameterselectioncontroversy, another interesting aspect of Figure 3 of the Copenhagen Synthesis Report is the cone of model projections. Today I’ll show you how to do a similar comparison for an AR4 model of your choice. Unlike Rahmstorf, I’ll show how this is done, complete with turnkey code. I realize that this is not according to GARP (Generally Accepted Realclimatescientist Procedure), but even realclimatescientists publishing in peerreviewedliterature should be accountable for their methodology.

Here is Figure 3 from the Copenhagen Synthesis Report. I take it that the grey cone is the spread of model projections – note that the caption says that these are from the Third Assessment Report. This raises the question: why the Third Assessment Report? Wouldn’t the Fourth Assessment Report be more relevant?


Figure 1: Copenhagen Synthesis Figure 3. “Changes in global average surface air temperature (smoothed over 11 years) relative to 1990. The blue line represents data from Hadley Center (UK Meteorological Office); the red line is GISS (NASA Goddard Institute for Space Studies, USA) data. The broken lines are projections from the IPCC Third Assessment Report, with the shading indicating the uncertainties around the projections (data from 2007 and 2008 added by Rahmstorf, S.).”

The IPCC AR4 Smear Graph
In April 2008, doctorrealclimatescientist Rahmstorf discussed models versus observations, excerpting IPCC Figure 1.1, which showed both the AR4 cone and the TAR cone as shown below. As I understand it, these cones are more or less spaghetti graphs with one color – grey.


Figure 2. IPCC Figure 1.1 shown by Rahmstorf at RC in April 2008.

The Canadian GCM
I thought that it would be interesting to do my own comparison from first principles. Rather than smearing everything into one big stew a la IPCC, I thought that it would be interesting to show results for individual models. And to simplify the presentation, I’m only showing HadCRU, rather than spaghetti-ing it up with GISS. In making this presentation, I used my R implementation of ssatrend (benchmarked against the original Matlab version – more on this later). I also used my function to scrape model data from KNMI (this function constructs CGI commands at KNMI within R).
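
To give a flavor of how such a scrape can work, here is a minimal sketch of the idea only – the sourced functions.collation.knmi.txt does the real work and differs in detail, and the field/scenario query parameters below are hypothetical placeholders, not KNMI’s actual CGI names:

# Illustrative sketch only: paste a CGI query together inside R and read the
# response. The field/scenario parameter names here are hypothetical.
knmi.query = function(Email, field="tas", scenario="sresa1b") {
  paste("http://climexp.knmi.nl/start.cgi?", Email, "&field=", field,
        "&scenario=", scenario, sep="")
}
# page = readLines(knmi.query(Email)) # fetch the response as text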

Here’s a Rahmstorf-style plot for the Canadian GCM runs, chosen because I’m Canadian. (Actually, I knew from Santer studies that it runs “hot”, so it wasn’t an entirely random selection. I wanted to see what this sort of model looked like.) I’ve only examined these plots for a few models – NCAR is another one that I looked at – but I can easily modify the script to make a pdf showing similar plots for all models and might do this some time.

I invite readers to consider whether there is a “remarkable similarity” between the coherence of model and observations in this graphic and in the Copenhagen Synthesis graphic. One noticeable difference is that, unlike the realclimatescientistgraphic, this one does not truncate the hindcast performance. In this case, one is inclined to say that the proprietors of this particular model haven’t gone out of their way to tune its performance to actual 20th century history. I’m pretty sure that this model is at the upper sensitivity end, but I haven’t confirmed this.


Figure 3: Comparison of Canadian CCCMA to HadCRU, in Rahmstorf 2007 style.

Readers may easily derive this graphic for themselves using the following turnkey code. First load functions to implement KNMI scrape and Rahmstorf smoothing and plotting:

source("http://data.climateaudit.org/scripts/models/functions.collation.knmi.txt")
#function to scrape from KNMI
source("http://data.climateaudit.org/scripts/rahmstorf/functions.rahmstorf.txt")
#emulation of Rahmstorf smooth

Now load HadCRUT3v and Rahmsmooth it, using the realclimatescientistsmoothingparameter of Rahmstorf et al 2007.

source("http://data.climateaudit.org/scripts/spaghetti/hadcru3v.glb.txt") #hadcru3v, hadcru3v.ann
had=hadcru3v.ann
had_smooth= ssatrend(window(had,end=2008),M=11) #Rahmsmooth
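
As a quick cross-check on the emulation (this is not part of Rahmstorf’s method), a plain centered 11-year moving average should be broadly similar to the Rahmsmooth away from the endpoints; assuming ssatrend returns a ts, as its usage here suggests:

had_ma = filter(window(had, end=2008), rep(1/11, 11), sides=2) # centered 11-yr average
plot(had_smooth, type="l") # the ssatrend smooth
lines(had_ma, col="red", lty=2) # should agree except near the endpoints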

Make sure that you register at KNMI. Then insert your registered email address in the R code as shown below:

Email = "" # register at KNMI, then insert your registered email address here

Now log on and set the field to “tas” (temperature at surface) and the scenario to A1B (“sresa1b”). The code below will print out the available KNMI models, according to a semi-manual collation from their webpage a few months ago. (KNMI needs to have a readable list of models!)

logon = download_html(paste("http://climexp.knmi.nl/start.cgi?", Email, sep=""))
scenario = "sresa1b"
field = "tas"
Info = knmi.info[knmi.info$scenario == scenario, ]
row.names(Info) = 1:nrow(Info)
Info # lists the A1B models at KNMI

To generate the CCCMA figure in this post, look up the model’s row number in the Info table generated above and simply execute the following command. This should scrape the data from KNMI. If it doesn’t, then there’s probably something wrong with your KNMI handshake.

plotf(2) #CCCMA

Just change the number to generate other model comparisons. (I’ll use the pdf function to generate plots for all the models in one document.)
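
For readers who would rather see the shape of such a plotting function than read the sourced script, here is a stripped-down sketch – illustrative only, assuming a hypothetical helper get.run(i, j) that returns run j of model i as an annual ts (the real plotf obtains the runs from the KNMI scrape and handles the baselining details itself):

# Stripped-down sketch of a Rahmstorf-style comparison (illustrative only).
# Assumes a hypothetical helper get.run(i, j) returning run j of model i.
plotf_sketch = function(i, nruns=5) {
  obs = had_smooth - window(had_smooth, 1990, 1990)[1] # center obs on 1990
  plot(obs, type="l", lwd=2, xlab="", ylab="Anomaly rel. 1990 (deg C)")
  for (j in 1:nruns) {
    run = ssatrend(get.run(i, j), M=11) # Rahmsmooth each model run
    lines(run - window(run, 1990, 1990)[1], col="grey60") # center on 1990
  }
}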

Update: I modified the function a little and produced a pdf with all the models plotted in the above style, at http://www.climateaudit.org/data/models/models_vs_hadcru.pdf . Virtually every model performed better than the Canadian GCM.

pdf(file="d:/climate/images/2009/models/models_vs_hadcru.pdf", width=5, height=5)
for (i in 1:nrow(Info)) plotf(i, gdd=FALSE) # one page per model
dev.off()

32 Comments

  1. Andrew
    Posted Jul 2, 2009 at 12:59 PM | Permalink

    I do recall hearing once that various models were looked at for the warming they produce and that a Canadian model was an outlier, producing much more and accelerating warming while most showed fairly constant rates of change. I can’t remember where though. This may be related.

  2. Posted Jul 2, 2009 at 1:13 PM | Permalink

    In the first code snippet you seem to have a doubled “/scripts” in the path.

    This fixes the error getting the knmi functions but even with that correction the ssatrend one returns a 404 error i.e. http://data.climateaudit.org/scripts/rahmstorf/ssatrend.txt doesn’t work and neither does
    http://data.climateaudit.org/scripts/scripts/rahmstorf/ssatrend.txt

  3. Craig Loehle
    Posted Jul 2, 2009 at 1:31 PM | Permalink

    Clever of you to notice the back-truncation (non-reporting) of model values prior to 1990 in Rahmstorf. That looks like why they chose 1990. A realistic comparison would start (zero match) in 1940 or so, when CO2 levels started to rise. Such a comparison would show the models going way up over the actual temperatures very quickly (just shift the gray curves up to match the purple line in 1940 or 1950 to see what I mean). Did these guys learn their tricks at the carnival shell game table?
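
    (A minimal sketch of the re-baselining Craig describes, assuming had_smooth from the post and a smoothed model series mod, both annual ts objects covering 1940, with a plot already on the device:)

    offset = window(mod, 1940, 1940)[1] - window(had_smooth, 1940, 1940)[1]
    lines(mod - offset, col="grey60") # model shifted to match observations in 1940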

    • BarryW
      Posted Jul 2, 2009 at 3:40 PM | Permalink

      Re: Craig Loehle (#3),

      There appear to be two previous peaks in the temp data, at about 1880 and 1940 (60 yr cycle?). When I looked at some of the models they did not show that pattern and only really matched from about 1970 on.

      Steve: I’m not interested in this thread being diverted into a discussion of cycles. The topic is an interesting one and there’s no need to digress so far afield.

      • BarryW
        Posted Jul 2, 2009 at 8:12 PM | Permalink

        Re: BarryW (#5),

        My point was that there are relatively large excursions in the data that occur over large time frames. These don’t seem to be picked up by the models, and I don’t think they count as noise or weather, with swings lasting 30-some years.

  4. Chad
    Posted Jul 2, 2009 at 2:05 PM | Permalink

    Now logon and set the field to “tas” (atmospheric temperature)

    Typo. Should be “temperature at surface”. Atmospheric temperature is “ta”.

  5. pete m
    Posted Jul 2, 2009 at 4:09 PM | Permalink

    Here is Figure 3 from the Copenhagen Synthesis Report. I take it that the grey cone is the spread of model prokections

    “projections”?

    Wouldn’t the Fourth Assessment Report be more relevant?

    Yes. Unless you don’t accept the use of unpublished papers – maybe he refuses to use it for this reason? lol.

  6. DeWitt Payne
    Posted Jul 2, 2009 at 5:03 PM | Permalink

    doctorrealclimatescientist

    As long as we’re doing German style compound nouns, one should probably add Professor to Rahmstorf’s title as well: Professordoctorrealclimatescientist. The German would be Professordoktorrealenklimawissenschaftler or perhaps Professordoktorrealenklimanaturforscher.

  7. braddles
    Posted Jul 2, 2009 at 5:46 PM | Permalink

    If the Canadian model “runs hot”, it is interesting that it warms at about 2 degrees C per century in the bottom graph, but even that is too fast to match the real world. Some modellers tell us that ‘the models’ show the globe warming by up to 7.4 degrees by 2100 (MIT recently) while others like Rahmstorf say ‘the models’ match the real world quite well. They can’t be talking about the same models, surely.

    • Andrew
      Posted Jul 2, 2009 at 5:57 PM | Permalink

      Re: braddles (#8), There are so many different models they can all claim to be right. There is a lot of leeway in the uncertainty, oddly enough…

  8. Steve McIntyre
    Posted Jul 2, 2009 at 6:28 PM | Permalink

    I’ve posted a pdf of similar plots for all A1B models at KNMI. The Canadian GCM performed much worse than most of the models, and readers should not extrapolate from its dismal performance to other models:
    http://www.climateaudit.org/data/models/models_vs_hadcru.pdf .

    Inclusion in the model spaghetti graph doesn’t seem to require that the model pass an SAT test.

    • Posted Jul 3, 2009 at 1:29 AM | Permalink

      Re: Steve McIntyre (#10),

      Inclusion in the model spaghetti graph doesn’t seem to require that the model pass an SAT test

      Thanks much for comparing the individual model runs with the real world. It clearly shows that most of the models fail miserably in hindcasting and are therefore useless for forecasting. Not one of the models hindcasts really well. I had expected something like that; however, the results are actually “worse than expected” :). And how can the IPCC accept the lumping of obviously bad models with the best ones? Bad models should be excluded by objective criteria.

      Could you by any chance do the same analysis for other variables: water level? CO2? methane?

      Thanks for a great blog.

      • Craig Loehle
        Posted Jul 3, 2009 at 7:42 AM | Permalink

        Re: Karl Iver Dahl-Madsen (#16), We have tested our airplane design with multiple simulation models. Though some of the models indicate severe vibration, the wings coming off, or failure to lift off, we are happy to report that the average output of the models matches our wind-tunnel test reasonably well, so feel confident flying RealClimateAirlines!

    • Ron Cram
      Posted Jul 3, 2009 at 11:19 PM | Permalink

      Re: Steve McIntyre (#10),

      You say the Canadian GCM performed worse than others. True, but after looking at your pdf – most of the others did not do much better. All but five pretty seriously overpredicted warming as compared to adjusted temperatures of poorly sited stations.

      Thanks again for a great blog!

    • Peter D. Tillman
      Posted Jul 4, 2009 at 1:29 PM | Permalink

      Re: Steve McIntyre (#10),

      http://data.climateaudit.org/data/models/models_vs_hadcru.pdf

      Steve:
      Could you please post a legend to your plot, either (pref.) there or here?

      TIA, PT

  9. AnonyMoose
    Posted Jul 2, 2009 at 7:23 PM | Permalink

    Based on the plot of the Canadian model runs, it appears that the Canadian model runs involve running a canoe down a stream. I applaud the Canadians for having the least energy-intensive climate projection generator.

  10. Michael Jankowski
    Posted Jul 2, 2009 at 8:32 PM | Permalink

    As I understand it, these cones are more or less spaghetti graphs with one color – grey.

    Lasagna?

    • Steve McIntyre
      Posted Jul 2, 2009 at 9:21 PM | Permalink

      Re: Michael Jankowski (#13),

      An excellent graphic which should advance the nomenclature.

      Rigatoni graphs are worth considering as well. Or maybe half-farfalle and full-farfalle graphs.

  11. Posted Jul 3, 2009 at 1:16 AM | Permalink

    I see that you seem to have fixed the doubled “scripts” I noted yesterday, but I still can’t actually access the ssatrend code. Here’s what I get:

    > source("http://data.climateaudit.org/scripts/rahmstorf/ssatrend.txt")
    Error in file(file, "r", encoding = encoding) :
    cannot open the connection
    In addition: Warning message:
    In file(file, "r", encoding = encoding) :
    cannot open: HTTP status was '404 Not Found'
    >

    The other parts seem to work but I am somewhat stymied by the lack of ssatrend code. Hence I would say that this post is itself written according to GARP.

    Steve: Sorry bout that. As Jean S observes below, the file is http://data.climateaudit.org/scripts/rahmstorf/functions.rahmstorf.txt. I should have shut down my R session and double-checked the turnkey code. If something like this doesn’t work the first time, also check that the directory is readable.

  12. Posted Jul 3, 2009 at 6:13 AM | Permalink

    As I understand it, these cones are more or less spaghetti graphs with one color – grey.

    Oddly enough, the cones are not the classic spaghetti graphs. The TAR projections did not use AOGCMs directly. They used the Raper-modified Wigley UD/EB model, which contains a number of tuning parameters (at least 6) and spits out deterministic output.

    The tuning parameters were determined using Raper’s analysis of output from GCMs.

    So the AR4 has “spaghetti” realizations for each model underlying its output, but the TAR does not. It also means the TAR uncertainty range only reflects the effects of parameter variations but no “weather noise”. This makes it particularly odd that Rahmstorf does not, and has never, included “weather noise” in his placement of the temperature in 1990. He should add uncertainty bands to reflect the statistical uncertainty associated with his determination of the “true” magnitude of the temperature in 1990.

    Of course, his response to “discovering” that a 20 year smooth still contains some “weather noise” was not to add uncertainty intervals to the outside of the TAR values, but to change M=11 to M=15. Very odd.
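
    (With the emulation loaded above, the sensitivity Lucia describes is easy to inspect; a minimal sketch comparing the smoothed 1990 value under the two parameter choices, assuming ssatrend returns a ts:)

    s11 = ssatrend(window(had, end=2008), M=11)
    s15 = ssatrend(window(had, end=2008), M=15)
    window(s11, 1990, 1990) - window(s15, 1990, 1990) # shift in the 1990 baseline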

  13. BRIAN M FLYNN
    Posted Jul 3, 2009 at 7:58 AM | Permalink

    Steve:
    “Rahmstorf discussed models versus observations, excerpting IPCC Figure 1.1, which showed both the AR4 cone and the TAR cone as shown below.”

    The figure shows the ranges of error covering the IPCC projections from the First through the Third Assessment Reports. The AR4 projection and corresponding range of error are not shown but embedded in the error ranges of the first three IPCC reports.

    Rahmstorf’s use of the TAR projection and the Copenhagen Synthesis Report are matters of “framing” for the Copenhagen Conference later this year. Katherine Richardson, marine biologist at the University of Copenhagen and an organizer of the March, 2009 Copenhagen Conference, made clear that the event last March was “not a regular scientific conference [, but] a deliberate attempt to influence policy.” (see http://www.guardian.co.uk/environment/2009/feb/09/scientists-summit-climate-change)

    Although the March Conference was billed as an emergency meeting, we should be thankful. After all, comparison could have been made to the SAR projection ;>), and Rahmstorf’s “update” should remind one and all that IPCC projections have no skill and (especially in the case of AR4) should not be relied upon.

  14. Edward
    Posted Jul 3, 2009 at 9:11 AM | Permalink

    Observed Temperatures should be Adjusted Temperatures, since they have nothing to do with reality.

  15. Posted Jul 3, 2009 at 9:53 AM | Permalink

    OK, it’s the R noob again.

    Note: If I should ask these kinds of questions somewhere else, please tell me where and I’ll go there.

    When I try to run ssatrend I get an error:

    > had_smooth= ssatrend(window(had,end=2008),M=11)
    Error in ssatrend(window(had, end = 2008), M = 11) :
    object "sE_x" not found

    I’m not sure which sE_x this refers to. I assume it’s the bolded line in this code fragment, but I’m not certain:

    if ( case=="minimumroughness") {
    idx=(1:mp);
    Data=data.frame(x,t= 1:length(x))
    fmleft=lm(x~t,data=Data[idx,]) #pleft=polyfit(idx,x[idx],1); #weird way of fitting trend
    pleft= fitted(fmleft)
    epleft=sqrt(max(sE_x[idx])^2+ sd(x[idx]-pleft) ^2 );

    idx=(n-mp+(1:mp));
    fmright=lm(x~t,data=Data[idx,]) #pright=polyfit(idx,x(idx),1);
    pright= fitted(fmright)
    epright=sqrt(max(sE_x[idx])^2+ sd(x[idx]-pright) ^2 );
    paddedNew=c(predict( fmleft,newdata=data.frame(t= -(M-1):0)),x,
    predict( fmright,newdata=data.frame(t= n+(1:M)) ) )
    } #end case

    Also, I tried (for fun) to use both “minimumslope” and “mean_padded” as a third parameter, and neither works. “minimumslope” gives the same sE_x error; “mean_padded” gives a complaint about

    Error in as.ts(x) : object "paddedNew" not found

    This would seem to be because the paddedNew variable is not set in the case=="mean_padded" code as it would be in the other two cases.

    • Steve McIntyre
      Posted Jul 3, 2009 at 1:32 PM | Permalink

      Re: FrancisT (#23),

      Sorry bout that, I must have slightly different copies on my computer and online. I’ll tidy this up.

  16. Posted Jul 3, 2009 at 10:38 AM | Permalink

    OK, I commented out the epleft and epright lines which mention sE_x, and I think it works because I get the same graph when I do the subsequent steps. Since epleft and epright are not mentioned anywhere else, those two lines appear to be superfluous.

  17. Antonio San
    Posted Jul 3, 2009 at 12:40 PM | Permalink

    Is it the much-touted model from Dr. Andrew Weaver?

  18. Steve McIntyre
    Posted Jul 3, 2009 at 2:19 PM | Permalink

    I’ve edited the function, commenting out some steps in the function that aren’t used. I had set values for these variables in my console session – that’s why the execution differed. I’ve placed a script at http://www.climateaudit.org/scripts/rahmstorf/CA_post.txt

  19. Anthony Watts
    Posted Jul 4, 2009 at 3:45 PM | Permalink

    Typo – Copenhagen is misspelled in the second line.

    Also, being familiar with homemade pasta, I’d like to add a type that has been overlooked:

    The pasta starts out with a gentle curve, almost flat, but becomes bent in cooking. It’s a feature of splicing together two different blends of pasta recipe and extruding them from a single pasta die. Each segment responds differently to temperature and moisture, and thus the sharp bend develops.

    Please don’t ask for the recipe. It’s secret.

  20. Michael Jennings
    Posted Jul 6, 2009 at 12:19 PM | Permalink

    Nice Anthony, good to see you still retain your sense of humor with all the knocks you have taken for daring to be different.

3 Trackbacks

  1. […] Rahmsmoothing and the Canadian GCM […]

  2. By Dahl-Madsen » Hidsige klimaalarmister on Jul 13, 2011 at 6:13 AM

    […] Originating from: https://climateaudit.org/2009/0…ion/ […]

  3. […] of the “mannomatics” (after Mann). In this case it was “rahmsmoothing” [–>]. And a few years ago he did one of those alarmist studies according to which the sea level […]