Cook et al. [2004]

From a recent posting at realclimate:

Their [M&M] second criticism is of the statistic employed by MBH98 as diagnostic of statistical skill, the "Reduction of Error" or "RE" (note that this statistic was favored as a skill diagnostic in prominent recent studies by Cook et al (2004) and Luterbacher et al (2004) in Science)… MM instead promote the use of a simple linear correlation coefficient ("r") in its place.

Obviously we do no such thing. We’ve advocated the use of more than one verification statistic, showing in our GRL article that spuriously high RE statistics can be generated by data-mining procedures, such as the erroneous MBH98 PC procedure. The irony in Mann’s citing Cook as authority that paleoclimatologists need not provide a range of verification statistics is that Cook et al. [2004] actually provide a suite of verification statistics, headed by the R2 statistic to which Mann et al. now object. Cook et al. [2004] state:

The calibration and verification statistics used to assess the goodness-of-fit and validity of the PDSI reconstructions are (i) the calibration period coefficient of multiple determination or regression R2 (CRSQ), (ii) the verification period square of the Pearson correlation coefficient or r2 (VRSQ), (iii) the reduction of error (RE), and (iv) the coefficient of efficiency (CE) …When these statistics are calculated for the 103–grid point West regional average PDSI reconstruction, CRSQ, VRSQ, RE, and CE are 0.86, 0.73, 0.78, and 0.72, respectively, for the most highly replicated post-1800 period of the reconstruction, based on a median of 41 tree-ring predictors per grid point reconstruction, and 0.68, 0.54, 0.64, and 0.53, respectively, when based on the smallest subset of tree-ring predictors available at the start of each grid point reconstruction (a median of 2 per grid point) (fig. S3).

Some online discussion is in the SI to Cook et al. [2004]. In this case, Cook et al. show a higher R2 than RE. This is obviously a completely different situation from MBH98, where spuriously high RE values arise from data mining alongside insignificant R2 values in the AD1400 step (estimated at about 0.0). As a point of follow-up, Cook et al. [2004] is a study of drought in the U.S. from tree rings. Cook et al. report the use of 835 tree-ring chronologies, of which 602 are in the "West". They report the use of 17 chronologies in AD800.
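For readers unfamiliar with these statistics, here is a minimal sketch of how r2, RE, and CE are conventionally computed over a verification period. The function name and interface are illustrative, not taken from Cook et al.; the formulas follow the standard dendroclimatic definitions (RE benchmarks against the calibration-period mean, CE against the verification-period mean):

```python
import numpy as np

def verification_stats(obs, est, calib_mean):
    """Compute r2, RE, and CE for a verification period.

    obs        -- observed values in the verification period
    est        -- reconstruction estimates for the same period
    calib_mean -- mean of the observations over the calibration period
    """
    obs = np.asarray(obs, dtype=float)
    est = np.asarray(est, dtype=float)
    sse = np.sum((obs - est) ** 2)
    # r2: squared Pearson correlation between observed and estimated
    r2 = np.corrcoef(obs, est)[0, 1] ** 2
    # RE: skill relative to always predicting the calibration-period mean
    re = 1.0 - sse / np.sum((obs - calib_mean) ** 2)
    # CE: skill relative to always predicting the verification-period mean
    ce = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)
    return r2, re, ce
```

Since the verification-period mean minimizes the benchmark sum of squares, CE can never exceed RE; and because RE can be high even when r2 is near zero, reporting RE alone (as opposed to the full suite above) conveys less information about reconstruction skill.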

I requested a listing of the sites from Connie Woodhouse of NOAA, one of the co-authors, to try to see if these sites overlapped with the sites used in MBH98 as "temperature" proxies. The map in Figure S2 of the SI sure looks like it overlaps with MBH98. I’m into the typical song-and-dance for the simplest such request. Woodhouse says that I have to get the information from Cook. I’ve had no luck getting data from Cook – I’ve been trying to get their updated Gaspé information (which does not have a hockey stick shape) for nearly a year; they’ve refused to provide it. I’ve tried to get the exact location of the Ste Anne River, Gaspé site in order to commission re-sampling; they’ve refused to provide that too. I suspect that it will be a long time before I get the list of sites in this study.


  1. Posted Aug 8, 2005 at 3:08 PM | Permalink

    I commented on the debatable claim that 2003 is the hottest summer of the last 500 years here:

  2. TCO
    Posted Sep 11, 2005 at 3:50 PM | Permalink

    Whiskey Tango Foxtrot? Why do you need the damn same location? Resample in the general area. Use a colleague who knows how to pick good trees to advise you. Is the goal to detect fraud or poor work by a specific investigator or to determine if the conclusions of the research are valid?

  3. Steve McIntyre
    Posted Sep 11, 2005 at 7:39 PM | Permalink

    There’s a reason for exactness – they did an updated version at Gaspe which did not have a hockey stick shape. They then said that it wasn’t EXACTLY the same site so it didn’t count and they didn’t have to report it. They refuse to locate both.

    Given the Hockey Team position on other matters, I’m sure that they’d say that the original Jacoby site was uniquely chosen for temperature sensitivity and my site, although nearby, was not well chosen.

    In your reading of older posts, have you picked up on Briffa’s deletion of post-1960 values in his MXD compilation when they went against him? You’ll like that. It even got cited by Roger Pielke.

  4. TCO
    Posted Sep 15, 2005 at 9:38 PM | Permalink

    Nope…not going to let you wiggle away on this. The STANDARD way of taking care of these types of errors is to redo the experiment. Do I need to figure out how Pons and Fleischmann screwed up to show that in my lab the damn palladium doesn’t promote fusion? Your way is not “wrong”, but it’s NOT the only way (or the preferred way…if you have the dollars, time). And if you aren’t getting it done your way, do it the normal way. And I already said take a dendro guy and pick the right sort of site. Who cares if they contest it, if you know that you picked the right sort of site anyhow? To be honest, this goes to the point that an individual site might have fertilizer effect or avalanches or whatever. You’re better off hitting another site, since you don’t really trust the sparsity of sites anyhow.

  5. TCO
    Posted Sep 15, 2005 at 9:39 PM | Permalink

    No, I haven’t seen that stuff nor do I understand what you are driving at.

  6. Steve McIntyre
    Posted Sep 15, 2005 at 9:52 PM | Permalink

    The Gaspe chronology is data mining. They were unable to replicate the result in their repeat sampling (which they neither reported nor archived).

    Cedar sites elsewhere don’t show the upspike of Gaspe.

    There are two issues. One is what the Gaspe cedar growth actually is. But even if I went there and didn’t get a hockey stick in the chronology, that’s only a small part of the story. The bigger issue is the non-reporting of adverse data by the Hockey Team. If mining promoters didn’t report bad holes, you’d call for the securities commission or even the police. I mean that literally. The NSF needs a wake-up call on Jacoby from someone. I tried, but they blew me off.

  7. TCO
    Posted Sep 15, 2005 at 10:04 PM | Permalink

    It’s the same size issue either way. Some forms of fraud you will likely never catch, Steve. Resampling in the general vicinity would be a killer move. It may be weaker in showing fraud…but it’s stronger in terms of understanding the true situation.

  8. TCO
    Posted Sep 15, 2005 at 10:05 PM | Permalink

    I meant, I haven’t seen the Briffa stuff and the Pielke comment, or it didn’t stick out.

  9. Ed Snack
    Posted Sep 15, 2005 at 10:33 PM | Permalink

    TCO, sometimes I think you don’t get it. If Steve goes and samples cedars somewhere near the Gaspe site (it can’t be the same one, since no one knows where it is) and gets completely different findings, the AGW crowd will just blow it off. Incorrectly sampled, ignores appropriate sites, not properly treated, etc. etc. and etc. Then finally, they’ll go on using Jacoby’s series and ignore anything else. Why not? No one in their professional field is calling them on it, and there is a legion of people who think like Mitch, John Hunter, and others, who even think it wrong to check on these kinds of details, preferring to believe in the honesty and probity of the current researchers who have uncovered the truth.

    All that said, I agree that it is definitely worth going and doing at least one original piece of research like this. Perhaps Steve could get a real dendro person to co-author and help publish. Why, perhaps Steve could ask Danzero to assist. Steve, how about it? Sadly I am too far away to help out, unless you want some Southern Hemisphere sampling?

  10. Steve McIntyre
    Posted Sep 16, 2005 at 5:00 AM | Permalink

    The issue is not that the original data were fraudulent. I’m sure that they were what was reported. The issue is not reporting the later data.

  11. TCO
    Posted Sep 16, 2005 at 7:24 AM | Permalink

    That’s fine. An independent reassessment with new sampling is still GREAT. That way, you really know what the behavior across the general area is and whether it is highly variable.

  12. Steve McIntyre
    Posted Sep 16, 2005 at 7:26 AM | Permalink

    If you want to see another report from the same area, see here:

  13. TCO
    Posted Sep 16, 2005 at 7:45 AM | Permalink

    So what’s the next step on that study, Steve? How about writing up the comparison for a journal of record?
