More on the MXD Data Set

On Sep 9, 2008, I sent an FOI request to the University of East Anglia, requesting a copy of the MXD data set as provided to Mann et al. Today (Oct 2, 2008), I was notified that they would provide this data and, sure enough, the data is now posted at http://www.cru.uea.ac.uk/~timo/datapages/mxdtrw.htm under the heading Rutherford et al 2008.

There are some puzzles.

The website reports the use of 341 sites in Rutherford et al 2005, while the text of Rutherford et al reports the use of 387 sites, so one or the other is incorrect. An earlier article (Briffa et al, Holocene, 2002a) used 387 sites, listed here. I presume that the website is correct and the article is wrong, and that a small corrigendum on this matter should be issued. All 341 sites said to have been used in Rutherford et al 2005 are included in the list of 387. Why were 46 sites removed from the network? Surely some sort of explanation should be provided.

The website (as of Oct 2, 2008) states:

The values after 1960 are a combination of information from high-frequency MXD variations and low-frequency instrumental temperature variations. We recommend, therefore, that the post-1960 values be deleted or ignored in any analysis that might be biased by the inclusion of this observed temperature information, such as the calibration of these data to form a climate reconstruction, or comparision of these data with instrumental climate observations for the purpose of assessing the ability of these data to represent temperature variability.

This is also a very puzzling comment as Rutherford et al 2005 nowhere mentions the blending of instrumental temperature variations back into proxy data after 1960. And if this was done, it is rather troubling. The explanation in Rutherford et al 2005 (Briffa, Osborn being coauthors with the Mann crowd) said:

Because the age-banding method requires large numbers of samples throughout the time period being studied, it has been applied only at a regional scale for the MXD network used here, rather than at the level of the 387 original site chronologies.

OSB therefore worked first with the traditionally standardized data at the individual chronology scale and gridded them to provide values in 115 5° by 5° grid boxes (26 available back to A.D. 1400) in the extratropical NH (Fig. 1b). They then developed temperature reconstructions by the local calibration of the MXD grid-box data against the corresponding instrumental grid-box temperatures.

The “missing” low-frequency temperature variability was then identified as the difference between the 30-yr smoothed regional reconstructions of Briffa et al. (2001) and the corresponding 30-yr smoothed regional averages of the gridded reconstructions. OSB add this missing low-frequency variability to each grid box in a region. After roughly 1960, the trends in the MXD data deviate from those of the collocated instrumental grid-box SAT data for reasons that are not yet understood (Briffa et al. 1998b, 2003; Vaganov et al. 1999). To circumvent this complication, we use only the pre-1960 instrumental record for calibration/cross validation of this dataset in the CFR experiments.

As I read the above, it describes a sort of coercion of the individual gridded series at “low frequency” to the corresponding “low frequency” shape of the regional ABD data (which is available at WDCP, by the way). However, it’s hard to be sure right now.
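
If I’ve read the method correctly, the adjustment amounts to something like the following sketch. This is my paraphrase in R, not their code; abd.regional and mxd.grid are hypothetical stand-ins for a regional age-banded series and the years-by-boxes matrix of gridded series for that region:

    # Sketch only: my reading of the OSB low-frequency adjustment, with hypothetical inputs.
    # abd.regional: regional age-band-decomposed (ABD) reconstruction, an annual ts
    # mxd.grid: matrix (years x grid boxes) of gridded MXD reconstructions for that region
    smooth30=function(x) filter(x,rep(1/30,30),sides=2)             # crude 30-yr running mean
    regional.avg=ts(rowMeans(mxd.grid,na.rm=TRUE),start=start(abd.regional)[1])
    missing.lf=smooth30(abd.regional)-smooth30(regional.avg)        # the "missing" low-frequency variability
    mxd.adj=mxd.grid+matrix(missing.lf,nrow=nrow(mxd.grid),ncol=ncol(mxd.grid))  # added to every box

If that reading is right, every grid box in a region inherits exactly the same low-frequency adjustment.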

Here is a plot of the average of the Briffa-Osborn gridded data, with a dotted red line showing the part deleted in Mann et al 2008 (where Osborn and Briffa are coauthors).

Note that Briffa and Osborn also archived today various data used in IPCC AR4 graphics – previously unavailable in these versions.

45 Comments

  1. Steve McIntyre
    Posted Oct 2, 2008 at 2:37 PM | Permalink

    Here is a quick unannotated retrieval script if you want to look at the data.

    source("http://data.climateaudit.org/scripts/utilities.txt")   # defines jones() used below
    url="http://www.cru.uea.ac.uk/~timo/datapages/schweingruber_mxdabd_grid.dat.sent2rutherford"
    mxd=read.table(url)
    dim(mxd) # 595 115
    mxd=ts(mxd,start=1400)

    url="http://www.cru.uea.ac.uk/~timo/datapages/schweingruber_mxdabd_locations.dat"
    fred=readLines(url)
    fred=fred[c(2,4)]                     # lines 2 and 4 hold the longitudes and latitudes
    writeLines(fred,"temp.dat")
    #[1] " 87.500 92.500 102.500 112.500
    loc=read.fwf("temp.dat",widths=c(8,rep(8,114)))
    loc=t(loc);dimnames(loc)[[2]]=c("long","lat")
    loc=jones(lat=loc[,2],long=loc[,1])   # jones() labels the 115 grid boxes from lat/long
    loc
    dimnames(mxd)[[2]]=loc

    temp=(mxd < -9);sum(temp)                         # flag the missing-value codes
    mxd[temp]=NA
    count=apply(!is.na(mxd),1,sum);plot.ts(count)     # number of grid boxes available per year
    annual=ts(apply(mxd,1,mean,na.rm=T),start=1400)   # simple average over the 115 grid boxes
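    # (added illustration, not part of the original script) mark the post-1960 portion
    # discussed in the post, i.e. the segment deleted in Mann et al 2008
    plot(annual,ylab="MXD average");lines(window(annual,start=1960),col=2,lty=2)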

    url="http://www.cru.uea.ac.uk/~timo/datapages/b02sens387_site.txt&quot;
    loc387=read.fwf(url,skip=7,widths=c(6,26,9,2,4,11,10,6,6))
    loc387=loc387[,c(1:3,5:9)]
    names(loc387)=c("number","name","id","type","long","lat","start","end")
    loc387$id=as.character(loc387$id)
    loc387$id=gsub(" ","",loc387$id)

    url="http://www.cru.uea.ac.uk/~timo/datapages/b02sens341_site.txt&quot;
    loc341=read.fwf(url,skip=7,widths=c(6,26,9,2,4,11,10,6,6))
    loc341=loc341[,c(1:3,5:9)]
    names(loc341)=c("number","name","id","type","long","lat","start","end")
    loc341$id=gsub(" ","",loc341$id)

    test=match(loc387$id,loc341$id);temp=!is.na(test);sum(temp) #341
    loc387[!temp,]   # the 46 sites in the 387-site list not used in Rutherford et al 2005

  2. Sam Urbinto
    Posted Oct 2, 2008 at 3:04 PM | Permalink

    Hopefully that robot doesn’t bring down their website and get anyone in trouble.

  3. Craig Loehle
    Posted Oct 2, 2008 at 3:18 PM | Permalink

    The post-1960 anomaly would make this the most widespread divergence yet seen. How convenient to not count this in their calibration/verification stats!!

  4. Posted Oct 2, 2008 at 3:31 PM | Permalink

    This should be fun when I get some time later. Thanks Steve.

    I wonder why he got rid of that little inconvenient end section. Hmmm.

    There’s no way any serious scientist could accept this paper.

  5. Steve McIntyre
    Posted Oct 2, 2008 at 3:46 PM | Permalink

    #4. The deletion of the post-1960 data was an issue in IPCC AR4. I objected vehemently to it in a Briffa study (Briffa is a coauthor of Mann et al and was the IPCC author of this section). All they said was that it would not be “appropriate”. So let’s not personalize this to Mann, though we tend to use him as a shorthand. We’re talking Briffa and the IPCC chapter 6 editors and review editors as well. “Serious” scientists.

    Another oddity. Mann has excluded EVERY Schweingruber ring width series from his data set. This is the large data set collected from sites said ex ante to be temperature sensitive, in which ring width (not just MXD) declines. Not a single Schweingruber series is used. Is this because of the MXD or some other reason? Who knows.

  6. Posted Oct 2, 2008 at 3:59 PM | Permalink

    Ok, sorry.

    The script didn’t run out of the box for me. I got a “could not find function jones” error, right after an unrecognized symbol in the line before. A couple more errors later, it coughs up the names of about 51 series.

    I am unfamiliar with R syntax but am beginning to be able to read it. Is it possible to write this data to a text file?

  7. Posted Oct 2, 2008 at 4:09 PM | Permalink

    I got it worked out another way.

    • Peter O'Neill
      Posted Oct 3, 2008 at 5:05 AM | Permalink

      Re: Jeff Id (#6, #7),

      For anyone trying to run this script, a closing parenthesis is missing in the 10th line (excluding any spacing lines),
      and the source for the function jones is needed. Replacing that 10th line by these two will fix both problems:

      loc=read.fwf("temp.dat",widths=c(8,rep(8,114)))
      source("http://data.climateaudit.org/scripts/utilities.txt")


      Steve:
      Thx. Changes made in online version.

  8. thefordprefect
    Posted Oct 3, 2008 at 4:07 AM | Permalink

    Perhaps someone could help me here. Unfortunately, I am only an electronics engineer and so not up with the “logic” here.

    I thought there were thermometer readings of temperature going back to about 1800 AD (initially readings taken at ±1°C, which at some point became an order of magnitude more accurate).

    Tree-ring proxy data for temperature must be incredibly difficult to extract, considering all the variables (much more difficult than making an allowance for urbanisation of thermometry).

    Logic dictates that the measured temperature must be a VALID record of temperature at that location.
    Logic dictates that proxied temperature should agree with REAL temperature where this is available (there is obviously a problem with location mismatch).
    Logic dictates that if this is not the case then the proxy is invalid.

    So if the proxy is wrong for a portion of the thermometered range, say 1960 onwards, but valid from 1800 to 1960, then surely it is valid to chop the proxy at 1960 and discard the 1960-2008 portion? Although I must admit I would have binned the whole record.

    If there is no match with any of the thermometered range then the proxy is wrong and should be discarded.

    The only exception would be if it were the same as hundreds of other proxies (but including it would add little to the reconstruction of past temperatures).

    So my question is:
    What is wrong with chopping data that is proved invalid by incontrovertible evidence?

    Steve: If the proxy fails to record the warm last half of the 20th century, how do you know that it would have recorded prior warm periods? You can’t. IF they had “binned the whole record”, that would have been consistent – but how can they claim any knowledge of past warm periods from such data?

    • James S
      Posted Oct 3, 2008 at 5:07 AM | Permalink

      Re: thefordprefect (#8),

      My answer would be that there is nothing wrong with chopping data, provided that you can reasonably prove that something happened at the date where you chopped it which means that the data is no longer a temperature proxy (an example would be the Finnish lake varves, which appear to be a reasonable proxy for temperature up until a point where they have an anthropogenic influence – I would say that it would be reasonable to chop these).

      If you do chop up your data you should also spend extra time informing people as to why you made the decision and make sure that any fellow researchers understand your reasoning fully.

      • thefordprefect
        Posted Oct 3, 2008 at 5:18 AM | Permalink

        Re: James S (#10),
        So the criticism is not the removal of data from 1960 onwards but the lack of openness as to why this was done?

        If so, some of the barbed comments seem a bit over the top!:

        The post-1960 anomaly would make this the most widespread divergence yet seen. How convenient to not count this in their calibration/verification stats!!

        I wonder why he got rid of that little inconvenient end section. Hmmm.

        etc.

        Steve: There’s a long history to the deletion of the post-1960 Briffa data. It was done without any notice in the IPCC 2001 spaghetti graph, giving a false coherence to the illustrated proxies. It was done again in IPCC 4AR despite specific requests from one reviewer (me) that the deleted data be shown and any divergence explained. They refused.

        • James Lane
          Posted Oct 3, 2008 at 6:29 AM | Permalink

          Re: thefordprefect (#11),

          Ford, as you point out, there is an instrumental record going back to the 19th century. So we don’t need proxies for that period. However, the reconstructions of Mann, Briffa and company extend back 1000 years. If tree rings are a poor proxy for temperature in the second half of the 20th century, how do we know they are any good for any period prior to the instrumental record?

          A common misconception is that the hockey stick debate is about the “blade” of the stick. It’s not – everyone agrees global temperature rose during the 20th century, we know that from the instrumental record. The debate is about the “shaft” – the 1000 odd years prior to the instrumental record, and the claim that recent warming is unprecedented.

          To my mind the Briffa truncation is manifestly absurd. Tree rings sort-of track temperature for about 100 years, and then for the next 40 years they don’t, so we’ll just ignore the data post 1960, call it the “divergence problem”, recommend further study, and then not do any further study.

          Most of the proxy series end about 1980, and despite it being a global emergency, few seem inclined to update the proxies, it being all too hard apparently. On the odd occasion that they are updated, Mann et al prefer to use the old versions. It would be a joke, if it were funny.

          All this is old news. Search CA for the “divergence problem”.

        • thefordprefect
          Posted Oct 3, 2008 at 6:54 AM | Permalink

          Re: James Lane (#12),
          A common misconception is that the hockey stick debate is about the “blade” of the stick. It’s not – everyone agrees global temperature rose during the 20th century, we know that from the instrumental record. The debate is about the “shaft” – the 1000 odd years prior to the instrumental record, and the claim that recent warming is unprecedented.
          If the dendro proxy data is wrong, then why are you debating how it should be handled/why it has been mishandled? It is wrong. It should not be used.
          Looking at data from the EPICA Dome C ice core, there is a bit of detail in the last 2000 years. This shows excursions of ±1°C over this period (there is a positive spike at the latest dates which needs an explanation), certainly not grossly different from Mann’s data. There even appears to be a medieval warm period (+1°C) and a little ice age (-0.5°C).
          Is this data invalid?
          Mike

        • Dave Dardinger
          Posted Oct 3, 2008 at 7:46 AM | Permalink

          Re: thefordprefect (#13),

          Is this data invalid?

          Maybe, maybe not. The trouble with the ice cores is that a lot of the data has never been archived, and the authors have refused to give Steve M the data. If you’ll look at old ice-core threads here, you’ll see that there are some problems with them, including what should be expected from particular cores given their locations and how well the cores line up.

    • Gerald Machnee
      Posted Oct 3, 2008 at 7:13 AM | Permalink

      Re: thefordprefect (#8),
      **So my question is:
      What is wrong with chopping data that is proved invalid by incontrovertible evidence?**
      The answer is simple – it is called cherry picking. You cannot pick the parts of the proxy that suit your purpose. If you chop post-1960, then you have to prove the rest is valid. This has not been done.

      • thefordprefect
        Posted Oct 3, 2008 at 7:50 AM | Permalink

        Re: Gerald Machnee (#14),
        I’m becoming more confused!
        “You cannot pick parts of the proxy that suit your purpose. If you chop post 1960, then you have to prove the rest is valid.”
        What are you debating here then? The proxy is wrong; delete it. Forget the hockey stick!!!!
        You will never be able to prove any proxy.
        There is no REAL data to prove it against.
        It may match 1960 to 2008 but pre-1960 may be totally loopy – PROVE it is not – it is impossible. You could compare many errored proxies and take some statistical average, but this, to an engineer’s mind, does not give you valid data.
        Trying to update dendro temperatures to the present day is, I would suggest, virtually impossible. You would have to return to the same trees and know the changes in their environment. How many trees would this involve – there are over 400 studies, each using more than one tree? Were trees marked for the record?
        The EPICA proxy I plotted shows an increase in temperature around 1950 (I think – no data currently to hand). This does not agree with any peak in the measured temperature. Thus this proxy is now discredited!!

        • Craig Loehle
          Posted Oct 3, 2008 at 8:09 AM | Permalink

          Re: thefordprefect (#16), Here is the problem: there is not, in most cases, a reason given for chopping the data that show divergence (post-1960 or post-1980). Some series that show divergence are used and some are chopped. If there is a global something affecting proxies after some date, then no proxy should be used after that date. The MXD series are much too widespread geographically to invoke some special case like a local insect outbreak or disease. As said above, if the trees match temperature for the period up to 1960 but not after that (for no reason that anyone knows), what makes one believe they are any good for the preceding 1000 years? You can also ref my new paper:
          Loehle, C. 2008. A Mathematical Analysis of the Divergence Problem in Dendroclimatology. Climatic Change DOI 10.1007/s10584-008-9488-8

  9. Demesure
    Posted Oct 3, 2008 at 8:05 AM | Permalink

    @jeff and other R beginners (or not): a nice animated tutorial on how to use R efficiently with a syntax-highlighting editor.

  10. Posted Oct 3, 2008 at 9:13 AM | Permalink

    I just did a post on this subject

    http://noconsensus.wordpress.com/2008/10/03/the-hockey-stick-data-hoax/

    Thanks for digging this up Steve.

  11. Steve McIntyre
    Posted Oct 3, 2008 at 9:42 AM | Permalink

    #13. First, you have to look at multiple ice cores, not just one. Can you give a link to the data set you are referring to here? A Nov 2007 version of EPICA Dome C delD ends in 1911 – is that the data that you mean?

    • thefordprefect
      Posted Oct 3, 2008 at 1:09 PM | Permalink

      Re: Steve McIntyre (#20),
      Apologies – memory failure; the data finishes in 1912 for EPICA. The anomaly peak is the next point back, 1895, with a temp offset of +1.8°C.

      OT: has anybody here thought of doing an FFT on a set of yearly data? If there are any cyclical forcings, these could show up. I’ve tried it on Beijing stalagmite data and it does show a bit of a peak at a 24-25 year period, but this was done using Excel.
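
      For anyone who wants to try that in R rather than Excel, a minimal periodogram sketch (assuming x is a plain numeric vector of annual values with no gaps; spectra from short noisy series should be read cautiously):

      x=x-mean(x)                                   # remove the mean first
      sp=spec.pgram(ts(x,frequency=1),taper=0.1,plot=FALSE)
      plot(1/sp$freq,sp$spec,type="l",log="x",xlab="Period (years)",ylab="Spectral density")
      # a 24-25 year cycle would show up as a bump near period 24-25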

  12. Carl Gullans
    Posted Oct 3, 2008 at 9:49 AM | Permalink

    #16: This is the purpose of a validation period. Out of 100 Proxy-X samples, 15% correlate well with temperature in part of the instrumental record (calibration). These 15% are declared “temperature thermometers”, with the others being spoiled by some unknown factor. Fine. You then must show that a very high % of those 15% also predict temperature in the other half of the instrumental record (validation); if they do not, the proxy is almost certainly worthless.

    This does not prove any individual proxy to be real or false records of temperature, but it does give or take away confidence (with added caveats, namely that the proxies are assumed to respond to temperature in the past the same way that it did during the calibration/validation periods). This assumption may also be false and a reason to challenge statistical results.

    I don’t think the two of you are disagreeing on anything.
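
    To make that concrete, here is a toy version of the calibration/verification split, as a sketch only (prox, a years-by-proxies matrix, and inst, the instrumental series over the same years, are hypothetical):

    yrs=1850:1980
    cal=yrs<=1915; ver=!cal                                  # calibration half / verification half
    r.cal=apply(prox,2,function(p) cor(p[cal],inst[cal],use="complete.obs"))
    pass=r.cal>0.3                                           # "passing" proxies; threshold arbitrary
    r.ver=apply(prox[,pass,drop=FALSE],2,function(p) cor(p[ver],inst[ver],use="complete.obs"))
    summary(r.ver)    # if these collapse toward zero, the screened network is suspect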

  13. Steve McIntyre
    Posted Oct 3, 2008 at 10:01 AM | Permalink

    Here is my comparison of the average of the Mann MXD versions with the versions posted yesterday at CRU – this is the same as Jeff’s, up to smoothing differences. Mine here is an 11-year Gaussian smooth.

    For people who may wish to confirm this discrepancy for themselves, here’s a script.
    #http://www.cru.uea.ac.uk/~timo/datapages/mxdtrw.htm
    #MXD from tree-cores sampled at 341 sites were used, assembled into 341 site chronologies.

    source("http://data.climateaudit.org/scripts/utilities.txt")
    library(gplots)   # for smartlegend() used at the end (see follow-up comment below)
    url="http://www.cru.uea.ac.uk/~timo/datapages/schweingruber_mxdabd_grid.dat.sent2rutherford"
    mxd=read.table(url)
    dim(mxd) # 595 115
    mxd=ts(mxd,start=1400)

    url="http://www.cru.uea.ac.uk/~timo/datapages/schweingruber_mxdabd_locations.dat"
    fred=readLines(url)
    fred=fred[c(2,4)]
    writeLines(fred,"temp.dat")
    #[1] " 87.500 92.500 102.500 112.500
    loc=read.fwf("temp.dat",widths=rep(8,115))
    loc=t(loc);dimnames(loc)[[2]]=c("long","lat")
    loc=jones(lat=loc[,2],long=loc[,1])
    loc; dimnames(mxd)[[2]]=loc

    temp=(mxd < -9);mxd[temp]=NA                      # as in the script in comment 1
    annual=ts(apply(mxd,1,mean,na.rm=T),start=1400)
    # [part of the script was garbled when posted; the lost lines read the Mann et al 2008 MXD
    #  versions into a list "mann" and collated them into "chron" via g() - see the exchange
    #  further down the thread - apparently with a window (time(chron)>=1800)&(time(chron)<=1950)]
    mannavg=ts(apply(chron,1,mean,na.rm=T),start=tsp(chron)[1])

    Y=ts.union(annual,mannavg)
    f=function(x) filter.combine.pad(x,truncated.gauss.weights(11) )[,2]
    plot(c(time(Y)),f(Y[,2]),col=2,type="l",lwd=2,las=1,ylab="",xlab="")
    lines(c(time(Y)),f(Y[,1]) )
    smartlegend(x="left",y="bottom",inset=0,fill=1:2,legend=c("Mann 2008","Archived") ,cex=.7)
    title("Average of Gridded MXD")
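
    For readers without utilities.txt handy, an 11-year Gaussian smooth can be approximated in base R along the following lines; this is only a rough stand-in for filter.combine.pad and truncated.gauss.weights, and the endpoint handling will differ:

    gauss.smooth=function(x,n=11,sigma=n/6) {
      k=(-(n%/%2)):(n%/%2)
      w=exp(-0.5*(k/sigma)^2); w=w/sum(w)     # truncated, normalized Gaussian weights
      as.numeric(filter(x,w,sides=2))         # NA at the ends instead of padded values
      }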

    • IainM
      Posted Oct 3, 2008 at 2:58 PM | Permalink

      Re: Steve McIntyre (#22),

      Been trying to get this script to run but it’s coming up with errors (newbie R user here). Anyone else been successful?

      Steve: When I post up a script, I don’t usually clear my workspace, so there are sometimes some references to things that are on my workspace. Or sometimes I’ve corrected a bracket on the console and forgotten to do so in the script. Let me know and I’ll fix it. I’m sorry to be careless about this, but I’m doing stuff pretty fast and trying to show the workings as I do it.

      • IainM
        Posted Oct 4, 2008 at 4:31 AM | Permalink

        Re: IainM (#35),

        Thanks for responding, Steve, but I don’t mean to take you away from the work you’re doing. I was hoping that someone else might come back with a fix. Keep up the good work !

  14. Posted Oct 3, 2008 at 10:33 AM | Permalink

    I think that some gratitude might be in order, for making their tree-ring and AR4 data available.
    THANK YOU TIM!

    The data for AR4 fig 6.10b is useful. It has always seemed to me that this fig shows good evidence for the MWP. So I took the 6 AR4 data sets that start before 900 AD, in unsmoothed form (smoothing is always bad according to statistician William Briggs, and introduces the end-points problem), and averaged them, giving the following picture:

    This ends in 1960, since the HCA2006 series ends then. If we omit HCA2006 and just average the remaining 5 we can go up to 1979, and we get a similar picture:

    So, the IPCC’s own data shows a clear MWP around 1000 AD, and shows that the MWP was as warm as today (well, at least 1979). The hilarious irony is that on the following two pages of AR4, box 6.4, they attempt to argue that this is not the case (“the evidence is not sufficient to support a conclusion that hemispheric mean temperatures were as warm, or the extent of warm regions as expansive, as those in the 20th century as a whole, during any period in medieval times“).
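
    For what it’s worth, that sort of averaging takes only a few lines of R; a sketch, with hypothetical file names and column layout rather than the actual AR4 archive format:

    files=c("recon1.txt","recon2.txt","recon3.txt")              # etc.
    series=lapply(files,function(f) {d=read.table(f,header=TRUE); ts(d$anomaly,start=d$year[1])})
    combined=do.call(ts.union,series)                            # align on a common time axis
    avg=ts(rowMeans(combined,na.rm=TRUE),start=tsp(combined)[1])
    plot(avg,xlab="Year",ylab="Anomaly")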

    • Mark T.
      Posted Oct 3, 2008 at 10:51 AM | Permalink

      Re: PaulM (#23), I like Briggs and his writing style always makes me smile a bit. However, I must disagree that you never smooth time series. When you have a priori knowledge of the desired signal bandwidth (or actual structure, pdf, etc.), it is OK, since you know exactly where to place the cutoff frequency to prevent loss of relevant data. In general, these cases are not what he is referring to, so my point is really a nit. What is being discussed here, in particular, certainly falls into the no-a-priori-knowledge realm.

      Mark

  15. M. Jeff
    Posted Oct 3, 2008 at 10:37 AM | Permalink

    #22, are the graph legends reversed?

    Steve: Fixed

  16. Soronel Haetir
    Posted Oct 3, 2008 at 10:47 AM | Permalink

    thefordprefect,

    Another problem is that in at least some cases made-up data is appended to the series after the actual values are truncated.

    At least that is my understanding of how this data is used in the multiproxy reconstructions.

    Steve: While this present situation with Mann’s use of MXD is troubling, this is a bit unusual in the recon corpus. The main issues lie elsewhere.

  17. Mark T.
    Posted Oct 3, 2008 at 11:01 AM | Permalink

    I should add that, reading some of his comments, he does differentiate some things a bit by clarifying the difference between noise and signal. In my scenario, I know where the signal is and where the noise is (any interference outside of the signal can be considered noise); in the proxy world, you don’t.

    Mark

  18. AndyL
    Posted Oct 3, 2008 at 11:04 AM | Permalink

    OT (a bit)

    How widely is the Mann 08 paper being used? Are scientists promoting / defending / citing / publicising the paper or is it being quietly ignored?

    I ask because there seem to be few sites or articles defending the paper or rejecting the analysis here – which they would if they spotted any errors.

  19. Mark T.
    Posted Oct 3, 2008 at 11:09 AM | Permalink

    Gavin seems to be defending it.

    Mark

  20. Steve McIntyre
    Posted Oct 3, 2008 at 11:41 AM | Permalink

    #28. It was widely announced and publicized when it came out (BBC and others). A few sites immediately jumped on board (realclimate, Tamino); they jumped so quickly that it was impossible that they could have done any critical analysis; they are hardly “independent” though.

    As to sites “spotting” errors: can you give me any example of any site (other than this one) ever spotting and reporting errors in Team reconstructions? For some reason, this sort of analysis hasn’t been done elsewhere in the past, to my knowledge (Jeff Id has now joined the fray and is very critical of Mann et al 2008).

    • AndyL
      Posted Oct 3, 2008 at 4:09 PM | Permalink

      Re: Steve McIntyre (#30),
      Steve,
      As Mark T said in 31, what I meant was that no-one seems to be jumping on any errors of yours (or Jeff Id’s)

      There was an enormous burst of publicity when the paper came out, but relatively little since. Maybe it’s too early to tell, but I’m interested in whether M08 is being accepted and publicised, or whether your analysis is already causing people to avoid citing and promoting it (or at least to be careful about doing so)

      • Gerald Machnee
        Posted Oct 3, 2008 at 5:00 PM | Permalink

        Re: AndyL (#36),
        **As Mark T said in 31, what I meant was that no-one seems to be jumping on any errors of yours (or Jeff Id’s)**
        Errors are hard to find as they seldom occur. If they do, they are corrected.
        What is quoted is the following worn-out statement: “It has been discredited”, but no proof is ever offered.

      • Mark T.
        Posted Oct 3, 2008 at 5:10 PM | Permalink

        Re: AndyL (#36),

        Maybe it’s too early to tell, but I’m interested in whether M08 is being accepted and publicised, or whether your analysis is already causing people to avoid citing and promoting it (or at least to be careful about doing so)

        That’s a good question. The blog-world, however, is much different than the television/media world, and it is plausible to assume that being roundly discredited here will have zero impact there.

        Mark

  21. Mark T.
    Posted Oct 3, 2008 at 11:58 AM | Permalink

    I think AndyL was referring to errors in your analysis, indicating that perhaps they don’t even have “soundbite” (i.e., irrelevant) errors to capitalize on. 😉

    Mark

  22. Posted Oct 3, 2008 at 12:42 PM | Permalink

    Andy,

    It’s very difficult to miss this problem in the data because the authors actually admit it in the M08 paper.

    They just don’t demonstrate it in graphical format. In the link above, I quoted the section which admits the technique so people could see the cause and effect on the same page.

  23. Posted Oct 3, 2008 at 1:27 PM | Permalink

    Thefordprefect #33

    I also would like to point out that sorting the data according to a trend (by deleting series that don’t conform) produces a statistical amplification of the recent, calibration-period data when compared to historic trends. This is something I have very clearly demonstrated on my blog.

    Any time you sort noisy data (really any noise level at all) looking for a trend, you get this effect.

    The effect is a bit subtle apparently because it is such an accepted technique in paleoclimatology but the result is quite large. From my reconstruction of 08, the temperature data in the historic period prior to 1850 is de-magnified to 62% of the calibration range data. I haven’t figured out how to calculate the offset yet.
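
    The effect is easy to demonstrate with synthetic data. A rough sketch, using pure red noise with no signal at all, screened on a rising calibration-period target (all parameters arbitrary, not taken from M08):

    set.seed(1)
    nyr=600; nser=1000
    noise=replicate(nser,arima.sim(list(ar=0.5),n=nyr))          # red-noise pseudo-proxies
    target=seq(0,1,length.out=100); cal=(nyr-99):nyr             # rising "temperature" in the last 100 yr
    r=apply(noise,2,function(p) cor(p[cal],target))
    picked=noise[,r>0.25]                                        # keep only series that "pass" screening
    plot(rowMeans(picked),type="l",xlab="Year",ylab="Average of screened series")
    # the average acquires the calibration-period trend while the earlier portion
    # averages toward zero, i.e. the history is damped relative to the blade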

    • Kenneth Fritsch
      Posted Oct 4, 2008 at 8:08 AM | Permalink

      Re: Jeff Id (#34),

      The effect is a bit subtle apparently because it is such an accepted technique in paleoclimatology but the result is quite large. From my reconstruction of 08, the temperature data in the historic period prior to 1850 is de-magnified to 62% of the calibration range data. I haven’t figured out how to calculate the offset yet.

      JeffId, I have seen rather vague references to this effect in reconstructions, and I believe they were pointed to as a mild warning in a review of the AGW consensus literature. It could have been in an IPCC review. I would like to see you push this point because, as stated, it has to be an important aspect of interpreting reconstruction results.

  24. Geoff Sherrington
    Posted Oct 4, 2008 at 6:35 AM | Permalink

    I try to resist suggesting that Steve should do this or that to gain more publicity. But times change. Some of these recent “errors” are so easy to see that they are serious candidates for youtube type short clips. Anyone gone down that road? It’s usually pretty vital to catch the populace when they are young, not old and settled. Steve, please snip if inappropriate.

  25. Posted Oct 4, 2008 at 2:20 PM | Permalink

    I have been working on running the above script for hours now. I am really stuck.

    for ( i in 1:K) chron=ts.union(chron,g(mann[[index [i] ]]) )

    The g in the line above is an unrecognized function. I tried removing it and have spent quite a bit of time studying time series but no luck yet.

    #41 I will. I want to learn R first but have been studying some of the characterization techniques for matching red noise to measured data. I am hoping for a more general answer to offset and magnification than just trying to match a single paper, though. I found some references stating that this was the method used to criticize our host’s past work on this subject.

    Steve: If I throw up an illustrative script, PLEASE don’t waste time trying to figure out a hangup. Just ask me. It will 99% of the time be something that I’ve got on my console. Here is simply a little function to make the data into a time series.

    g=function(X) ts(X[,2],start=X[1,1])
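
    As a quick illustration with made-up data, g applied to a two-column year/value table returns an annual time series:

    g(data.frame(year=1400:1410,value=rnorm(11)))   # an annual ts starting in 1400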

  26. Posted Oct 4, 2008 at 2:49 PM | Permalink

    I don’t like to bug you too much. Besides, reading and trying is a good way to learn.

    I changed
    g=function(X) ts(X[,2],start=X[1.1])

    to

    g=function(X) ts(X[,2],start=X[1,1]) — a comma in 1,1, which makes sense to me now.

    and the only problem left is smartlegend is an unrecognized function. I comment it out and get your graph.

    Steve: Sorry bout that. smartlegend is in the gplots package. So you have to install gplots. Then insert library(gplots) in the script. I’ve got too many things in my console right now and need to close it down and see what I’ve got loaded for these little scripts.

  27. Craig Loehle
    Posted Oct 5, 2008 at 7:41 AM | Permalink

    When is it valid to drop some data? When I was a post-doc, a fellow post-doc dropped by with a question. He had a nice allometric relationship between leaf length and leaf area, but some points were messing up the graph. I asked: was there anything special about those points? Well, those were the ones where bugs had taken a big bite out of the leaf and he had estimated the missing piece. Clearly, I told him, you goofed on the extrapolation and should drop those points. He would not have been entitled to drop them just because they messed up his graph…

  28. ChrisJ
    Posted Oct 5, 2008 at 2:55 PM | Permalink

    Programming style. Hi Steve: I have found that single-letter variable names are really hard to search for with the computer. However, if you triple them up it is much easier to follow/find/tweak them later. For example…

    ggg() instead of g()
    xxx instead of x
    yyy instead of y

    etcetera.

    This one small change can make a tremendous difference in readability for others. Hang in there! Thanks. best regards, -chris

    Steve:
    Good point. If I use a function with an uninformative name like this, I’ll typically define it on one line and use it in an adjacent line. Because R is so concise, I’m increasingly being a little redundant in my programming and re-defining smoothing functions and things like that as I use them.

One Trackback

  1. By The FOI Myth #2 « Climate Audit on Dec 29, 2009 at 8:37 PM

    […] and expeditiously by CRU placing the requested information on a webpage (see CA discussions here […]
