Replicating the “Trick” Diagram

Michael Mann, Dec 2004

No researchers in this field have ever, to our knowledge, “grafted the thermometer record onto” any reconstruction. It is somewhat disappointing to find this specious claim (which we usually find originating from industry-funded climate disinformation websites) appearing in this forum [realclimate].

Phil Jones, Nov 1999

I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) and from 1961 for Keith’s to hide the decline.

Gavin Schmidt, Nov 2009

Scientists often use the term “trick” to refer to “a good way to deal with a problem”, rather than something that is “secret”, and so there is nothing problematic in this at all.

I’ve been able to make a very close emulation of Jones’ 1999 WMO diagram. Jean S at Climate Audit was the first to report on the key features of Mike’s Nature trick as used by Jones. I’ve confirmed his results and identified digital versions of all the relevant series. A turnkey script generating the diagram is in the first comment below. A few observations on issues that required some parsing. There seem to be two NCDC versions of the Jones reconstruction; the WMO diagram uses (or appears to use) a version archived in connection with Jones et al 2001. The NCDC archive for MBH has a 1902-1980 reference period while the diagram uses 1961-1990; the difference between the CRU means over the two reference periods is applied to the MBH reconstruction to recenter it. (The value ties in with a comment in the Climategate Letters.) I used an old CRU version (archived with the truncated Briffa reconstruction); otherwise the delta was larger. In order to get the endpoint of the graph to look right, I needed both the 1998 and 1999 values of the instrumental series.
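The recentering arithmetic is simple enough to sketch in a few lines of R; the series below are toy stand-ins, not the actual NCDC archives:

```r
# Recenter a reconstruction from one reference period to another, using the
# instrumental series to estimate the offset between the two baselines.
recenter <- function(recon, instr, old.ref = c(1902, 1980), new.ref = c(1961, 1990)) {
  delta <- mean(window(instr, start = old.ref[1], end = old.ref[2])) -
           mean(window(instr, start = new.ref[1], end = new.ref[2]))
  recon + delta  # a negative delta pulls the reconstruction down
}

# toy series: a warming instrumental record 1850-1999 and a flat reconstruction
instr <- ts(seq(-0.4, 0.4, length.out = 150), start = 1850)
recon <- ts(rep(0, 100), start = 1850)
shifted <- recenter(recon, instr)
```

With a warming instrumental series, the 1902-1980 mean sits below the 1961-1990 mean, so the delta is negative and the recentered reconstruction shifts down, as with the roughly -0.12 value mentioned above.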

A reader below (sensibly) proposed the following additional explanation. The green line is tree-ring data on the left. On the right it has been smoothly merged into temperature data. This is “Mike’s Nature trick”, which Mann falsely claimed was never done. The replacement of actual data with spliced temperature data “hides the decline” in the green tree-ring curve in the late 20th century, making the reconstruction seem able to model recent temperatures when it didn’t.
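The mechanics of that splice can be sketched in R with toy series standing in for the proxy reconstruction and the instrumental record (the turnkey script in comment 1 does this with the real data):

```r
# A proxy series that declines after 1960 and an instrumental series that rises.
proxy <- ts(c(rep(0.0, 11), seq(0, -0.5, length.out = 40)), start = 1950)
instr <- ts(seq(0, 0.6, length.out = 51), start = 1950)

# The splice: overwrite the post-1960 proxy values with instrumental values,
# so any subsequent smoothing tracks temperatures at the modern end
# instead of the declining proxy.
spliced <- proxy
post <- time(proxy) > 1960
spliced[post] <- instr[post]
```

After the substitution, the smoothed curve can only go up at the modern end, whatever the underlying proxy did.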


Figure 1. Emulation of Jones’ WMO 1999 Diagram.

The original diagram is shown in the Nov 24 UEA press release here.

44 Comments

  1. stevemcintyre
    Posted Nov 29, 2009 at 11:45 AM | Permalink

    ##COMPARE ARCHIVED BRIFFA VERSION TO CLIMATEGATE VERSION

    #1. LOAD BRIFFA (CLIMATEGATE VERSION)
    # archive is truncated in 1960: ftp://ftp.ncdc.noaa.gov/pub/data/paleo/treering/reconstructions/n_hem_temp/briffa2001jgr3.txt

    loc="http://www.eastangliaemails.com/emails.php?eid=146&filename=939154709.txt"
    working=readLines(loc,n=1994-1401+104)
    working=working[105:length(working)]
    x=substr(working,1,14)
    writeLines(x,"temp.dat")
    gate=read.table("temp.dat")
    gate=ts(gate[,2],start=gate[1,1])

    #2. J98 has reference 1961-1990
    #note that there is another version at ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/jones1998/jonesdata.txt

    loc="ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/jones2001/jones2001_fig2.txt"
    test=read.table(loc,skip=17,header=TRUE,fill=TRUE,colClasses="numeric",nrow=1001)
    test[test== -9.999]=NA
    count= apply(!is.na(test),1,sum)
    test=ts(test,start=1000,end=2000)
    J2001=test[,"Jones"]

    #3. MBH : reference 1902-1980
    url<-"ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/mann1999/recons/nhem-recon.dat"
    MBH99<-read.table(url) ;#this goes to 1980
    MBH99<-ts(MBH99[,2],start=MBH99[1,1])

    #4. CRU instrumental: 1961-1990 reference
    # use old version to 1997 in Briffa archive extended
    url<-"ftp://ftp.ncdc.noaa.gov/pub/data/paleo/treering/reconstructions/n_hem_temp/briffa2001jgr3.txt"
    #readLines(url)[1:50]
    Briffa<-read.table(url,skip=24,fill=TRUE)
    Briffa[Briffa< -900]=NA
    dimnames(Briffa)[[2]]<-c("year","Jones98","MBH99","Briffa01","Briffa00","Overpeck97","Crowley00","CRU99")
    Briffa= ts(Briffa,start=1000)
    CRU=window(Briffa[,"CRU99"],start=1850)
    tsp(CRU) # 1850 1999 #but starts 1871 and ends 1997
    delta<-mean(CRU[(1902:1980)-1850])-mean(CRU[(1960:1990)-1850]);
    delta # -0.118922
    #used to get MBH values with 1961-1990 reference: compare to 0.12 mentioned in Climategate letters

    #get updated version of CRU to update 1998 and 1999 values
    loc="http://hadobs.metoffice.com/crutem3/diagnostics/hemispheric/northern/annual"
    D=read.table(loc) #dim(D) #158 12 #start 1850
    names(D)=c("year","anom","u_sample","l_sample","u_coverage","l_coverage","u_bias","l_bias","u_sample_cover","l_sample_cover",
    "u_total","l_total")
    cru=ts(D[,2],start=1850)
    tsp(cru) # 1850 2009

    # update 1998-1999 values with 1998 values
    CRU[(1998:1999)-1849]= rep(cru[(1998)-1849],2)

    #Fig 2.21 Caption
    #The horizontal zero line denotes the 1961 to 1990 reference
    #period mean temperature. All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints
    # imposed by padding the series with its mean values during the first and last 25 years.
    #this is a low-pass filter
    source("http://www.climateaudit.org/scripts/utilities.txt") #get filter.combine.pad function
    hamming.filter<-function(N) {
    i<-0:(N-1)
    w<-cos(2*pi*i/(N-1))
    hamming.filter<-0.54 - 0.46*w
    hamming.filter}
    #f applies the 40-year Hamming-weights lowpass with padded ends, per the Fig 2.21 caption;
    #filter.combine.pad comes from utilities.txt (its return format is assumed here: column 2 = smooth)
    f<-function(x) filter.combine.pad(x,hamming.filter(40))[,2]
    Y=X=ts.union(MBH=MBH99+delta,J2001,briffa=gate,CRU=window(CRU,end=1999) ) #collate; gate is the Climategate Briffa series from step 1
    temp= time(Y)>1960
    Y[temp,"briffa"]=Y[temp,"CRU"]
    temp= time(Y)>1980
    Y[temp,c("MBH","J2001")]=Y[temp,"CRU"]
    smoothb= ts(apply(Y,2,f),start=1000)

    xlim0=c(1000,2000) #xlim0=c(1900,2000)
    ylim0=c(-.6,.35)
    par(mar=c(2.5,4,2,1))
    col.ipcc=c("blue","red","green4","black")

    par(bg="beige")
    plot( c(time(smoothb)),smoothb[,1],col=col.ipcc,lwd=2,bg="lightblue",xlim=xlim0,xaxs="i",ylim=ylim0,yaxs="i",type="n",axes=FALSE,xlab="",ylab="deg C (1961-1990)")
    usr <- par("usr")
    rect(usr[1],usr[3],usr[2] ,usr[4],col="lightblue") # draw the lightblue plot background

    for( i in 1:3) lines( c(time(smoothb)),smoothb[,i],col=col.ipcc[i],lwd=2)
    axis(side=1)
    labels0=labels1=seq(-.6,.4,.1);
    labels0[is.na(match(seq(-.6,.4,.1),seq(-.6,.4,.2)))]="";labels0[7]="0"
    axis(side=2,at=labels1,labels=labels0,tck=.025,las=1)
    axis(side=4,at=labels1,labels=labels0,tck=.025)
    box()
    abline(h=0)
    legend("topleft", fill= col.ipcc[c(2,1,3)],
    legend=c( "Jones 1998", "Mann 1999", "Briffa 2000"))
    title("WMO 1999 Emulation")

  2. PaulM
    Posted Nov 29, 2009 at 12:25 PM | Permalink

    Steve I really think you need to explain things more clearly for the thousands of new readers who are now reading your blog. Most of them will not understand this. I’ll have a go, please correct:

    The green line is tree-ring data on the left. On the right it has been smoothly merged into temperature data. This is Michael Mann’s trick, that he falsely claims is never done. The reason they do it is to hide the fact that otherwise the green tree-ring data curve would go down in the late 20th century rather than up, showing that the tree-ring data is useless at representing temperature. Hence “hide the decline”.

  3. an observer
    Posted Nov 29, 2009 at 12:40 PM | Permalink

    I believe you need to compare the chart in this post to the chart in the prior post. Essentially, the data to the right of the yellow line in the prior post is “fake” in the sense that it is derived from the real temp record rather than a proxy, contra what is claimed in the lead quote. Steve is illustrating that he can perform the same “trick” with a few lines of R code. The joke here is that it probably took many more lines of code and much effort to work the original illusion and Steve is rubbing it in a little.

  4. Calvin Ball
    Posted Nov 29, 2009 at 12:44 PM | Permalink

    Paul, that’s my take. And I agree that this is important for the masses to get; the popular interpretation of “hide the decline” is somehow concealing the lack of warming over the past decade. This is completely wrong, but it’s catching on.

  5. Posted Nov 29, 2009 at 12:52 PM | Permalink

    Calvin

    It is one of those weird quirks of fate, that the most damning narrative is actually not the most accurate one, but does the most good in the end.

  6. Posted Nov 29, 2009 at 12:55 PM | Permalink

    I’ve looked at some of the references that discuss divergence but haven’t found an explanation of how one can train on data from a selected period and adjust data from prior times but then ignore data from subsequent times.

    While the “tricks” are interesting, it would be helpful to describe the justification for the calibration procedure that allowed production of the curve to which the tricks were applied.

    Steve:
    you’ll have to ask Phil Jones and Mike Mann.

  7. Calvin Ball
    Posted Nov 29, 2009 at 1:04 PM | Permalink

    Plato, I think I speak for most people here when I say that I care less about what the outcome is than that it’s right. I don’t want to sound sanctimonious, but doing the best possible job of getting to the bottom of the facts is more important than any desired outcome. If we could all stay focused on that, the outcomes will take care of themselves.

  8. Matthew Drabik
    Posted Nov 29, 2009 at 1:05 PM | Permalink

    Climate Audit has undertaken many, many efforts to uncover and correct dozens (at least) of instances of poor methodology in mainstream climate science. As noted above, there is so much history and detail when it comes to auditing the shoddy work product of Mann et al. that it is easy to get overwhelmed by the minutiae. Mr. McIntyre has been very reserved in his reaction to Climategate and continues to produce documented analysis.

    However, a short paragraph in each post explaining how each post relates to the big picture would help, even for a dedicated reader like myself. Also, I would like to recommend that “Willis Eschenbach’s FOI Request” somehow be kept at the top of the home page. It seems to be the best introduction to Climategate for a newcomer.

  9. Posted Nov 29, 2009 at 1:11 PM | Permalink

    Calvin – I think we’re talking at cross purposes – I agree with you, it’s vital that the science is right. However, the media aren’t the sharpest detail cookies at times and need a simple message.

    The fact that they’re running with this story at all will allow other sceptics to feel safe enough to come out from behind the sofa.

    Politics and media are two sides of the same coin – and frequently perceptions, as we’ve seen with AGW, do have a lot of power.

  10. Lewis Deane
    Posted Nov 29, 2009 at 1:22 PM | Permalink

    Steve,
    Have you heard of the recent article in the Sunday Times where they admit explicitly that they no longer have the “raw data”? This was the final defence, if I remember rightly, that they gave to you as regards your various FOI requests – ie the “the dog ate my homework” excuse. How does this reconcile with one of Phil’s emails stating, to paraphrase, “I’ll destroy the data rather than let it out”? Was he talking about a different set of data, or could we infer fairly that he had the raw data at the time? Quaeritur. By the way, I miss the input of some of your old time companions – Jean S, Bender the ‘bulldog’, etc: has the extraordinary interest in this event – “A miracle has happened” might one day be considered a fairly historical statement – and the crash of your server swamped them out, or are they, as I prefer to think, working hard away in the background for a more considered view of this scandal? I just hope to see CA back up and running. Much regards and appreciation, Lewis.

    Steve:
    I preferred things to be a bit quieter as well. CA is operating a bit more reliably now that the overflow is mostly here; you’ll have more luck finding Jean S there.

  11. Lewis Deane
    Posted Nov 29, 2009 at 1:32 PM | Permalink

    PS Forgot to say I know the heroically busy Anthony is trying to find time for your server. What a fella!

    Steve: I have no idea how Anthony accomplishes as much as he does.

  12. Posted Nov 29, 2009 at 2:04 PM | Permalink

    “Steve: I have no idea how Anthony accomplishes as much as he does.”

    He’s modelled his day on being 100% longer : )

  13. Jean S
    Posted Nov 29, 2009 at 2:11 PM | Permalink

    Lewis, thanks for the compliment. I’m mainly nowadays spending my time snipping OT comments at CA 😉 Seriously, there are so many things to do right now, and I hardly know how to prioritize them. I’ve spent my extra time this weekend trying to get the new Mann code to work and related things. I wish Steve had more time to consider this latest Mann, but obviously he needs to spend his time with this CRU thing.

    Anyhow, the most amazing thing I find in this latest Mann piece is the fact that they have basically just recycled their old work … and they get a publication in Science! The code is mainly the same as in Mann et al (2008) and some figures and tables have been taken directly from there.

  14. dearieme
    Posted Nov 29, 2009 at 2:13 PM | Permalink

    snip – prohibited language

  15. Andrew
    Posted Nov 29, 2009 at 2:37 PM | Permalink

    OK – this answers my question posed in a previous thread. The full quote from Mann goes on to say that presenting the instrument record on the same graph is frequently done (and it was done like this in the IPCC report). But actually using the instrument & proxy records together on the same series is pretty bizarre.

  16. Lewis Deane
    Posted Nov 29, 2009 at 2:52 PM | Permalink

    Thanks, Steve and Jean. I’m now in the pub to take a quick break from the fuss. Since the new mannomatic was submitted before these revelations, I take it it is not hubris but rather the typical ‘innocence’ of this particular CRU. Anyway, keep working for those of us who are too lazy (I feel guilty!) to work it out for ourselves! Much appreciated!

  17. michaelv
    Posted Nov 29, 2009 at 3:02 PM | Permalink

    Looking over this R code, specifically these lines

    # start code snippet
    [[2]]<-c("year","Jones98","MBH99","Briffa01","Briffa00","Overpeck97","Crowley00","CRU99")
    Briffa= ts(Briffa,start=1000)
    CRU=window(Briffa[,"CRU"],start=1850)
    # end code snippet

    Question: should the last line of the code snippet be changed to:

    CRU=window(Briffa[,"CRU99"],start=1850)

    as the data window command is looking for the previously defined CRU99 data series

    Steve
    : yep. sorry about that. sometimes I make edits at the console that I forget to make in my script. Not that often, but sometimes.
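For anyone else tracing this fix: the behaviour is generic to multivariate ts objects, whose columns are addressed by the names set via dimnames, so “CRU” and “CRU99” are distinct. A toy illustration (not the Briffa archive):

```r
# Columns of a multivariate ts are selected by their dimnames labels.
m <- cbind(year = 1000:1004, CRU99 = c(0.1, 0.2, 0.1, 0.3, 0.2))
x <- ts(m, start = 1000)

ok  <- x[, "CRU99"]                        # works: this column exists
bad <- try(x[, "CRU"], silent = TRUE)      # errors: subscript out of bounds
cru <- window(x[, "CRU99"], start = 1002)  # then restrict the years
```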

  18. Posted Nov 29, 2009 at 3:12 PM | Permalink

    Calvin, that’s because some blogs (WUWT included) and news outlets have reported it that way. They don’t explain that the decline being hidden is in the reconstruction, not in the temperature record (which has its own problems.)

  19. Calvin Ball
    Posted Nov 29, 2009 at 4:11 PM | Permalink

    Jeff, that’s my point exactly. It’s almost like there’s a firewall between the press and the technical skeptics.

    snip – too much complaining

  20. george hanson
    Posted Nov 29, 2009 at 4:12 PM | Permalink

    snip – forbidden words

  21. Lewis Deane
    Posted Nov 29, 2009 at 4:18 PM | Permalink

    I think there is a general point that we see in the screaming and shouting since this incident – that we, ourselves (mankind) have become quite infantile. We desire ( the layman but also scientists) a sort of idee fixe to lead, with a golden skein, to that silly truth that will tell us how to behave. But, of course, we can’t behave. Nor can nature.
    That is to say, I wish some would show the rigour of thought in what they say as CA has done since its inception – I hope that doesn’t sound too disgustingly sycophantic!

  22. Lewis Deane
    Posted Nov 29, 2009 at 4:44 PM | Permalink

    snip – policy

  23. Lewis Deane
    Posted Nov 29, 2009 at 5:52 PM | Permalink

    Don’t publish this:
    I’m very honoured to be snipped.

  24. John Archer
    Posted Nov 29, 2009 at 8:53 PM | Permalink

    I hope this flashing GIF of Steve McIntyre’s graph v the UEA graph works. Even if it does I’m sure others could make a much better job of it.

  25. John Archer
    Posted Nov 29, 2009 at 9:02 PM | Permalink

    [Second attempt. Is there a delay between submitting and posting? Maybe comments here are closed. There again, perhaps I’m on a blacklist? Probably.]

    I hope this flashing GIF of Steve McIntyre’s graph v the UEA one works. Even if it does I’m sure others could make a much better job of it.

  26. John Archer
    Posted Nov 29, 2009 at 9:12 PM | Permalink

    TO MODERATOR ONLY (not for posting):

    Woops! Apologies for the double attempt at posting.

    I had my cookies switched off the first time and (therefore?) didn’t receive the message: “Your comment is awaiting moderation.”

    Now I see that message for both attempts.

  27. John Archer
    Posted Nov 29, 2009 at 10:58 PM | Permalink

    Oh to hell with it. Here’s the ####### link direct.

  28. Rich
    Posted Nov 30, 2009 at 3:22 AM | Permalink

    hamming.filter<-function(N) {

    I can't find the closing brace.

    Steve: Hmmm.. Dunno why not. Gremlin between my script and WordPress. Fixed anyway. I’ll post up ASCII scripts at CA as well in the future.

  29. John Hekman
    Posted Nov 30, 2009 at 12:42 PM | Permalink

    Steve
    What kind of requests are you getting for interviews regarding Climategate? Is there anything in the works that we can look for?
    Thanks.

  30. Richard Greenacre
    Posted Nov 30, 2009 at 1:49 PM | Permalink

    Another script question.

    Where is the ‘Y’ in “temp= time(Y)>1960” setup?

    Steve: Hmm, that seems to have got misplaced in my transcription as well. I’ve inserted it.

  31. Ken Harvey
    Posted Nov 30, 2009 at 1:54 PM | Permalink

    Seeing that the raw data has been disposed of, I am not going to believe any graph that is shown to me, no matter the author. What we must have now is re-gathered raw data compiled by honest scientists, supported by competent programmers. No graph is worth attention if it does not include a detailed explanation of all proxy data used and the statistical manipulations that have been employed. Until all that is sorted out, this septuagenarian is going to carry on smoking.

  32. B. Kindseth
    Posted Dec 1, 2009 at 12:55 PM | Permalink

    In the IPCC-Working group 1, The Physical Basis of Climate Change, Figure 9.5, p.684, the plot of 14 models with natural forcings only shows a decline in temperature after 1960. This is eerily similar to the Briffa tree ring decline that Mann and Jones chose to hide. Any comments on this?

    Steve:
    I think that the main issue is that tree ring proxies are not responding to temperature. Better to discuss this on a CA thread.

  33. Curt Covey
    Posted Dec 2, 2009 at 1:12 AM | Permalink

    Seems to me that Phil Jones did the scientifically correct thing in “hiding the decline” of temperature implied by post-1960 tree ring data. We have enough direct measurements of surface temperature after 1960 to know the actual trend. Global warming appears in not only Phil’s (CRU) processing of these measurements but also in Jim Hansen’s (GISS) and Tom Karl’s (NCDC) independent processing. If the tree rings imply temperatures that contradict repeated analysis of the direct measurements, then the implication from tree rings must be wrong after 1960 and that data should be rejected as incorrect. Am I missing something here?

    Steve: Of course you are. The issue pertains to the validity of the tree rings as proxies in earlier times. If they don’t record the warmth of the past 30 years, how do we know whether they would have picked up corresponding warmth in the 1000s (if there were such). The implication of your statement is that the tree ring records should be rejected as uninformative about past temperature history rather than just the post-1960 portion.

  34. Ade
    Posted Dec 2, 2009 at 10:32 AM | Permalink

    The original diagram is shown in the Nov 24 UEA press release here.

    That last link is, if I’m reading it correctly, quite amazing (and I’m trying not to “pile on” here…).

    It would seem that all three “reconstructions” (Jones, Mann & Briffa) utterly fail to show any kind of large uptick as is claimed for the instrument data. I’m slightly surprised that CRU dares to publish that statement with a straight face; it seems to be saying “no problem, nothing to see here” whereas the graphs are clearly saying “these reconstructions simply don’t work”.

    Indeed – purely from analysing the graph, it would appear that these reconstructions are only valid between approximately 1912 and 1935.

    Unless…. am I missing something that would be really obvious to anyone with a modicum of scientific knowledge?

  35. Murray
    Posted Dec 2, 2009 at 10:49 AM | Permalink

    Curt, there is also the issue of the validity of the instrumental averages. UHI and other land use change effects have not been adequately compensated for, and the “march of the thermometers” has not even been addressed. GW theory says that the lower troposphere should warm faster than the surface, but using the published data the opposite is the case. If surface warming is overstated by about a factor of 2, the theory would be validated. It seems very unlikely that tree rings are a good temperature proxy for any time period, but it seems equally unlikely that the published instrumental averages reflect reality.

  36. Posted Dec 6, 2009 at 10:21 PM | Permalink

    “The original diagram is shown in the Nov 24 UEA press release here.”

    The link to the press release at http://www.uea.ac.uk/mac/comm/media/press/2009/nov/homepagenews/CRUupdate gives me a 404 error “Sorry, but the page you requested does not exist.”
    The diagram (jpg) link still works.

  37. nealmcb
    Posted Dec 6, 2009 at 11:31 PM | Permalink

    I worked a while with the R script, using my very rusty R skills, and have something that seems like it might be what you had in mind, updated for the slightly different output of the .eastangliaemails.com site now and fixing some bugs. In order to make it available in the proper format, without wordpress or the web browser messing up the ascii quotes and dashes, I’ve put it up at github: http://gist.github.com/250498

    Download the latest version in text format via http://gist.github.com/250498.txt

    Github also has the advantage of putting it under version control so that people can compare various versions of the script, and can even “fork” the script (including the version history) to make modifications of their own after a painless signup process. I highly recommend this general technique for your pages, Steve, since it also serves as an example of the kind of methodology that facilitates replication and sharing in general.

    So e.g. you can get and compare the latest version above to what I originally took from this page, which is preserved there at the bottom of the “revision” list and available in “raw” form here:

    I ran it on Ubuntu Linux (Hardy) via “R --save < trickdiagram.r”
    and it produced this file:
    http://bcn.boulder.co.us/~neal/tmp/Rplots.ps

    Does that look right?

  38. Tuomo
    Posted Dec 7, 2009 at 10:51 AM | Permalink

    If you read the web, you see a lot of claims. Some obviously wrong, some more intriguing. My question to you: Are the following claims true?

    – Briffa’s proxy data is valid as a temperature proxy only if it correlates with the instrumental temperatures. In Briffa’s original proxy data series, the proxy series correlates positively with the instrumental series before 1940, but there’s no reliable positive correlation in the post-1940 data.

    – (Based on the email correspondence and code samples, someone could guess that) Jones deleted the post-1960 values of the Briffa series and replaced them with some materially different values.

    – Jones’s materially different values are most likely instrumental temperature measurements or other values selected to be close to the instrumental measurements. The end result was a “proxy” series that looked like an accurate reconstruction of late 20th century temperatures when compared to the instrumental measurements.

    – Jones’s graphs used the code fragment with the “highly artificial” “fudge factor.”

    – The results were published in World Meteorological Organization WMO-No. 913 (http://www.wmo.ch/pages/prog/wcp/wcdmp/statemnt/wmo913.pdf)

    I’d like to know which claims are demonstrably wrong (proven innocent), which claims could be true but that there’s not credible evidence for (not proven guilty), and which claims are demonstrably true (guilty).

    What is of particular interest to me is whether the output of the “highly artificial” “fudge factor” code was ever used in any published papers.

  39. Tuomo
    Posted Dec 7, 2009 at 2:22 PM | Permalink

    Here is the response to essentially the same question from RealClimate.org, which by the way is very good:

    Tuomo says:
    6 December 2009 at 8:28 PM
    Given that this is a comments thread to a CRU-email context post: Here’s a question about what exactly “Mike’s Nature trick” is. By Mike’s Nature trick I mean what exactly was done by Jones that he referred to by that term.

    If you read the web, you see a lot of claims. Some obviously wrong, some more intriguing. My question to you: Are the following claims true?

    – Briffa’s proxy data is valid as a temperature proxy only if it correlates with the instrumental temperatures.

    [Response: Basically true. You could also use other independent temperature proxies in earlier times. – gavin]

    – In Briffa’s original proxy data series, the proxy series correlates positively with the instrumental series before 1940, but there’s no reliable positive correlation in the post-1940 data.

    [Response: Not quite. The divergence happens post 1960. – gavin]

    – (Based on the email correspondence and code samples, someone could guess that) Jones deleted the post-1960 values of the Briffa series, replaced them with some materially different values.
    – Jones’s materially different values are most likely instrumental temperature measurements or other values selected to be close to the instrumental measurements
    – Jones then smoothed the spliced series.

    [Response: This is a misreading. The only goal was a smoothed blend of the proxy and instrumental data to indicate the long term and recent changes without it being too cluttered. The post-1960 data in the Briffa reconstruction isn’t relevant to that. But smoothing requires some decision about what to do with the end point problem (in this case starting in 1935 since it was a 50 year smooth). Jones used the instrumental data so that values from 1935 to 1985 are a blend of the proxy and instrumental data. I’m not quite sure what the criterion was at the 1999 end point of the instrumental period. – gavin]

    – The end result was a “proxy” series that looked like an accurate reconstruction of late 20th century temperatures when compared to the instrumental measurements.

    [Response: No. The end result was described as the proxies and instrumental record.- gavin]

    – The results were published in World Meteorological Organization WMO-No. 913 (http://www.wmo.ch/pages/prog/wcp/wcdmp/statemnt/wmo913.pdf)

    [Response: The smooth was used in that brochure. – gavin]

    My question to you are which ones of these claims are true and which are not? I guess with more resolution I’d like to know which claims are demonstrably wrong (proven innocent), which claims could be true but that there’s not credible evidence for (not proven guilty), and which claims are demonstrably true (guilty).

    What is of particular interest to me is whether the output of the “highly artificial” “fudge factor” code was ever used in any published papers. I have a co-author that uses “if 6==9” as a debugging toggle, and code sections in those parts do not (at least intentionally) make it to our papers. He argues that this might be a similar case.

    [Response: No. There was a draft, but it doesn’t seem to ever have been published, and is very clear about why and how this was done. – gavin]

  40. Tuomo
    Posted Dec 7, 2009 at 2:58 PM | Permalink

    I wrote on RealClimate.org: “What is of particular interest to me is whether the output of the “highly artificial” “fudge factor” code was ever used in any published papers.”

    [Response: No. There was a draft, but it doesn’t seem to ever have been published, and is very clear about why and how this was done. – gavin]

    I read the paper at http://74.125.93.132/search?q=cache:www.cru.uea.ac.uk/~timo/papepages/pwosborn_summertemppatt_submit2gpc.pdf. Here’s what I believe is the relevant section, on page 21:

    “Warm-season temperature reconstructions with extended spatial coverage have also been developed, making use of the spatial correlation evident in temperature variability to predict past temperatures even in grid boxes without any tree-ring density data. The calibration was undertaken on a box-by-box basis, and each grid-box temperature series was predicted using multiple linear regression against the leading principal components (PCs) of the calibrated, gridded reconstructions described in section 4.4. The PCs were computed from the correlation matrix of the reconstructions, so the calibration was in effect removed and similar results would have been obtained if the PCs of the raw, gridded density data had been used instead. The only difference is that the calibrated data with the artificial removal of the recent decline were used for the PCA. Using the adjusted data avoids the problems otherwise introduced by the existence of the decline (see section 4), though all reconstructions after 1930 will be artificially closer to the real temperatures because of the adjustment (the adjustment is quite small until about 1960 – Figure 5c). Tests with the unadjusted data show that none of the spatial patterns associated with the leading PCs are affected by the adjustment, and the only PC time series that is affected is the leading PC and then only during the post-1930 period. In other words, the adjustment pattern is very similar to the leading EOF pattern, and orthogonal to the others, and thus only influences the first PC time series.”

    I don’t find the argument particularly convincing as far as the substance of this hypothetical experiment goes. But that’s not the point. The point is that I think this is conclusive evidence that the “very artificial” “fudge factor” code is _not_ fraudulent.

  41. Posted Dec 7, 2009 at 8:08 PM | Permalink

    I am not very familiar with tree-ring reconstruction of temperature data, but why do all those people extend the time frames beyond known and measured temperatures? I mean, you have to validate your model first by testing if it holds up to the recent temperature data that is available?!

    Perhaps this has been done, but if so, then your discussion really shows that something went wrong?!

    If I set up models, I try to find actual real-life data and test if my model can at least replicate that; if not, then I have a problem?!

  42. Tuomo
    Posted Dec 7, 2009 at 11:37 PM | Permalink

    Max —

    My post above is about the “highly artificial” “fudge factor” code. The way it was used and documented in the paper draft satisfies me personally that it’s not fraudulent.

    What has “gone wrong” in the tree ring width and density calibrations, I believe, is the following two things. First, rainfall matters a lot, and rainfall is not independent of the temperature. Researchers have tried to find samples from areas where temperature is the constraint, not rainfall, to mitigate this. However, I don’t know how well rainfall has been ruled out.

    Second, and more importantly, most tree-ring studies don’t adjust for the age of the tree. The width and density of a tree ring are related both to how many years old that particular tree was when it grew the ring in question and to what kind of year that year was.

    Fortunately for us, at least one study adjusts for the age of the tree: http://people.su.se/~hgrud/documents/Grudd%202008.pdf

    I just read Grudd’s study, and he demonstrates pretty convincingly that the reason the tree ring density has stopped correlating with temperature in the post-1960 sample is that the trees in the previous studies were old. Grudd collected data on young trees and merged those into the data set. The presence of both old and young trees from all periods then allowed him to separate the effect of the tree’s age when the ring grew from the year effect on ring density.

    The paper suggests that Briffa / Briffa et al. (so many papers, I can’t keep track) should have collected younger sample trees, removed the effect of the age of the tree, and then stuck with the unadjusted density-based proxy. This would have eliminated the need for “Mike’s Nature trick” to “hide the decline.” There’s no decline in the density of recent rings of young trees.

    Now, what’s the punch line from Grudd’s study? When you model the tree ring density correctly, you get a relation between temperature and density that holds up well throughout the sample. Then, if you use that model to reconstruct the temperature record (blue line in Fig 12), you’ll see that Northern Europe was really warm for a couple of centuries about 1000 years ago, warmer than now.
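
    The age adjustment Tuomo describes is in the spirit of regional curve standardization: fit an expected density-versus-cambial-age curve from trees of all ages, then subtract each ring’s age-expected value so the remaining index reflects climate rather than tree age. A minimal sketch with invented data and an invented negative-exponential growth curve (not Grudd’s actual model or numbers):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate rings: density declines with cambial age, plus a climate signal.
    n = 500
    age = rng.integers(1, 300, n)            # cambial age of each ring
    climate = rng.normal(0.0, 0.1, n)        # year-to-year climate effect
    density = 1.0 * np.exp(-age / 150) + 0.3 + climate

    # Fit d(a) = A*exp(-a/tau) + B by a coarse grid search over tau
    # (a stand-in for a proper nonlinear least-squares fit).
    best = None
    for tau in np.linspace(50, 400, 36):
        basis = np.column_stack([np.exp(-age / tau), np.ones(n)])
        coef, *_ = np.linalg.lstsq(basis, density, rcond=None)
        sse = np.sum((basis @ coef - density) ** 2)
        if best is None or sse < best[0]:
            best = (sse, tau, coef)

    _, tau, (A, B) = best
    expected = A * np.exp(-age / tau) + B

    # Age-adjusted index: departure from the age-expected density.
    index = density - expected

    # The index should correlate with climate, and not with tree age.
    print(round(np.corrcoef(index, climate)[0, 1], 2),
          round(np.corrcoef(index, age)[0, 1], 2))
    ```

    The point of the exercise: once the age effect is removed, an apparent “decline” driven by sample age structure disappears, with no need to splice in instrumental data.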

  43. Posted Dec 12, 2009 at 4:19 PM | Permalink

    Could Steve or someone else with more statistical knowledge than me explain something about the temperature record, please?

    In the available data for annual anomalies, there is a clear “step change” of around 0.4 degrees in the NH between ’96 and ’98, and a similar, though smaller, step in the SH. These coincide very nicely with the unusually strong ’97 / ’98 El Nino event. Since then, there has been a fairly flat ongoing trend at about the level of the “step”. That makes it completely unsurprising that “the last decade is the warmest on record”.

    However, in the headline graphs of trends, this step is smoothed, which creates the impression of a rapid increase beginning before that time.

    Now, my statistical education stopped around A level, about 25 years ago, but I seem to remember that it was generally considered bad practice to construct a trend across an event that causes a known and explained discontinuity in your data. Indeed, that’s one of the reasons they have to homogenise all the raw data before anyone sees it!

    So, what am I missing and how do they justify treating temperatures before and after that step as a single series?
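
    The commenter’s concern can be illustrated numerically: fit a single linear trend across a series that is flat, steps up by 0.4, and is flat again, and compare it with an explicit step model. All numbers below are invented to mimic the shape described above, not real anomaly data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    years = np.arange(1979, 2010)

    # Flat before 1997, flat but 0.4 higher after: a pure step, no trend.
    anom = np.where(years >= 1997, 0.4, 0.0) + rng.normal(0, 0.05, years.size)

    # Model A: one straight line through the whole series.
    slope, intercept = np.polyfit(years, anom, 1)
    resid_trend = anom - (slope * years + intercept)

    # Model B: two-level step model (mean before / mean after the break).
    step_fit = np.where(years >= 1997,
                        anom[years >= 1997].mean(),
                        anom[years < 1997].mean())
    resid_step = anom - step_fit

    # The trend model reports a "warming rate" even though neither
    # segment has any trend at all; the step model fits better.
    print(f"apparent trend: {slope * 10:.2f} deg/decade")
    print(f"SSE trend={np.sum(resid_trend**2):.3f}  step={np.sum(resid_step**2):.3f}")
    ```

    None of this settles whether treating the record as one series is justified physically; it only shows that a straight-line fit across a known break point manufactures an apparent gradual trend, which is the statistical point being asked about.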

  44. Posted Jan 26, 2010 at 4:32 PM | Permalink

    in case anybody is looking for it,
    I found PAGES Newsletter, Vol. 7 Nº 1
    here
    http://www.pages.unibe.ch/cgi-bin/WebObjects/products.woa/wa/product?id=81

    extremely interesting tornetrask and taymir chronologies on page 6!

    Steve: Hi, Hans. These look the same as the corresponding Briffa 2000 chronologies – the measurement data for which became available in Sep 2009 as we all know 🙂

7 Trackbacks

  1. […] are totally familiar with the history of climate science and just want McIntyre to give them the actual freaking computer code to replicate Jones graph from 1999. […]

  2. […] the next-largest culprit is Michael Mann, Mr. Nature Trick, who is not to be confused with the Nature Boy or the other "Heat"-making Mann. He has had his […]

  3. By Mann’s Mad Money | GlobalWarming.org on Dec 2, 2009 at 5:45 PM

    […] the next-largest culprit is Michael Mann, Mr. Nature Trick, who is not to be confused with the Nature Boy or the other “Heat“-making Mann. He has […]

  4. By Global Climate Scam » Mann’s Mad Money on Dec 3, 2009 at 10:41 AM

    […] the next-largest culprit is Michael Mann, Mr. Nature Trick, who is not to be confused with the Nature Boy or the other “Heat”-making Mann. He has […]

  5. […] the actual temperature. So what the climate scientists did in some of the hockey stick graphs, according to the skeptics, was delete the tree ring data starting in 1960, replacing them with the actual temperatures. The […]

  6. […] matched the actual temperatures. So what the climate scientists did in some of the hockey stick graphs, according to the skeptics, was to delete tree-ring data starting in 1960 and replace it with actual temperatures. […]

  7. […] such as incorrect lat/lon values of proxy samples, upside down Tiljander sediment proxies, and truncated/switched data, is mind boggling. It’s doubly mind boggling when these errors are well known to thousands of […]