New CPD Paper on Reconstructions

Here is the paper from MITRIE on climate reconstructions:

M. N. Juckes, M. R. Allen, K. R. Briffa, J. Esper, G. C. Hegerl, A. Moberg, T. J. Osborn, S. L. Weber, E. Zorita, Millennial temperature reconstruction intercomparison and evaluation, Climate of the Past Discussions, 2, 1001-1049, 2006

There has been considerable recent interest in paleoclimate reconstructions of the temperature history of the last millennium. A wide variety of techniques have been used. The interrelation between the techniques is sometimes unclear, as different studies often use distinct data sources as well as distinct methodologies. Recent work is reviewed with an aim to clarifying the import of the different approaches. A range of proxy data collections used by different authors are passed through two reconstruction algorithms: firstly, inverse regression and, secondly, compositing followed by variance matching. It is found that the first method tends to give large weighting to a small number of proxies and that the second approach is more robust to varying proxy input. A reconstruction using 18 proxy records extending back to AD 1000 shows a maximum pre-industrial temperature of 0.25 K (relative to the 1866 to 1970 mean). The standard error on this estimate, based on the residual in the calibration period is 0.149 K. Two recent years (1998 and 2005) have exceeded the estimated pre-industrial maximum by more than 4 standard errors.
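
As a toy illustration of the contrast drawn in the abstract (synthetic data and my own sketch, not the paper's code): inverse regression fits one weight per proxy, so it can load heavily on whichever series happens to correlate with the target, while compositing weights every proxy equally and only rescales the average afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 120, 10

# Toy "instrumental" target; proxy 0 tracks it, the rest are noise
target = np.cumsum(rng.normal(size=n_years)) * 0.1
proxies = rng.normal(size=(n_years, n_proxies))
proxies[:, 0] += 2.0 * target

# Standardize each proxy to zero mean, unit variance
Z = (proxies - proxies.mean(axis=0)) / proxies.std(axis=0)

# Method 1 - inverse (multiple) regression: one fitted weight per proxy
w_reg, *_ = np.linalg.lstsq(Z, target - target.mean(), rcond=None)

# Method 2 - compositing plus variance matching: equal weights, then rescale
composite = Z.mean(axis=1)
recon_cvm = composite * (target.std() / composite.std())

print("regression weights:", np.round(w_reg, 2))   # dominated by proxy 0
print("composite weights : 1/%d each, average rescaled to target sd" % n_proxies)
```

With this setup the regression weight on the one correlated proxy dwarfs the others, which is the sense in which the abstract calls compositing "more robust to varying proxy input".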

Update – Oct. 26

Some quick thoughts on the proxy selections. I’m sorting my way through this. It’s a full European Hockey Team roster: Briffa, Moberg, Esper et al develop a “Union” reconstruction with 18 proxies, listed in their Table 1 with an asterisk. The proxies are pretty familiar.

Let’s compare the selections to my Hegerl predictions, using my prediction numbering based on the least-independence principle. The Briffa-Moberg-Esper et al reconstruction contains the following:

1. Yang composite – this is included. It’s listed twice in Table 1 under different alter egos, which BME did not identify as being the same; they correctly identify a couple of other alter egos, but are not very accurate.

2. Taymir – this is in. Again it’s listed twice in their Table 1.

3. Polar Urals – this is used TWICE in their reconstruction: once as the Briffa MXD version and once as the Yamal version of Briffa 2000. The Polar Urals update of Esper et al is NOT used, nor is the version from Hegerl et al 2006 (which seems to be an average of the Yamal and Polar Urals update).

4. Mongolia – surprisingly, this is not used.

5. Tornetrask – again, this is unaccountably used TWICE in their reconstruction: once in the Briffa MXD version and once in the Esper RCS version. Their Table 1 attributes a series to Norway in Hegerl et al, but this is an alter ego for Tornetrask and another goof.

6. van Engeln – this is not used, as it does not go back far enough.

7. Greenland dO18 – this is used. Table 1 mentions three seemingly related versions, all referenced to Fisher et al 1996, using the version from Jones 1998. There is no archive for Jones 1998, and I’d previously concluded that the version used in Jones et al 1998 is identical to MBH. I had previously thought that the versions were all the same, but will need to examine this.

8. Jasper – somewhat surprisingly, this is not included, but the Luckman version does not go back to AD 1000, although it goes back nearly that far.

9. Bristlecones/foxtails – Briffa-Moberg-Esper et al use no fewer than FOUR bristlecone/foxtail series, including the two foxtail series used in Esper (and again in Hegerl). Their Table 1 fails to mention the use of these foxtails by Hegerl (or her use of Mann’s PC1). The other two are series from Moberg (Methuselah Walk, Indian Garden), which do not have big growth pulses. Moberg inadvertently used the Methuselah Walk version twice, but it’s only used once here.

So the above series, which were my prime predictions for Hegerl, contribute 11 of 18 Briffa-Moberg-Esper series.

Other series in the Union reconstruction are:

Chesapeake Mg/Ca – used in Mann and Jones 2003, Moberg, Osborn and Briffa 2006

Quelccaya (accumulation and dO18 from Core 2) – thus 2 series. Core 1, also used in MBH99, is not mentioned. The dO18 series also contributes to Thompson’s tropical dO18 average. Guliya and Dunde dO18 from Thompson are important ingredients in the Yang composite.

GRIP borehole temperature – this has a very high MWP value

Morocco morc014 – this is a tree-ring series from MBH98-99. It’s a functional equivalent of noise (and a precipitation proxy, if anything).

Arabian_Sea:Globigerina_bulloides – we’ve talked lots about this. It measures the percentage of G. bulloides foraminifera, which are associated with upwelling cold water. Comes from Moberg. Very non-normal.

Shihua Cave, China stalagmite – another Moberg series.

Moberg Exclusions:
The following are excluded – I haven’t checked explanations yet:
Agassiz ice melt – perhaps a little surprisingly since it’s very HS;
Conroy Lake sediments – these have a high MWP and were in my apple-picking reconstruction
Sargasso Sea – also has a high MWP and was in my apple-picking reconstruction
Caribbean dO18 –
Tsulmajavri, Finland sediment
Norwegian stalagmite
Indigirka ring widths – this has a high MWP and was in my apple-picking reconstruction

66 Comments

  1. Hans Erren
    Posted Oct 26, 2006 at 4:49 PM | Permalink

    “Our committee believes that the assessments that the decade of the 1990s was the hottest decade in a millennium and that 1998 was the hottest year in a millennium cannot be supported by the MBH98/99 analysis. As mentioned earlier in our background section, tree ring proxies are typically calibrated to remove low frequency variations. The cycle of Medieval Warm Period and Little Ice Age that was widely recognized in 1990 has disappeared from the MBH98/99 analyses, thus making possible the hottest decade/hottest year claim. However, the methodology of MBH98/99 suppresses this low frequency information. The paucity of data in the more remote past makes the hottest-in-a-millennium claims essentially unverifiable.”

  2. bender
    Posted Oct 26, 2006 at 10:01 PM | Permalink

    Tell me something I didn’t know already.

  3. Dave Dardinger
    Posted Oct 26, 2006 at 10:35 PM | Permalink

    Here’s an interesting quote:

    McIntyre and McKitrick (2003) [MM2003] criticise MBH1998 on many counts, some related to deficiencies in the description of the data used and possible irregularities in the data itself. These issues have been largely resolved in Mann et al. (2004).

    Having followed much of this stuff here, I assume “these issues” refers to the description and data issues, and that may indeed be correct. But note that these are only “some” of the deficiencies. The rest, of course, have not been resolved and won’t be until Mann et al. admit what everyone already knows: that the methods used were shoddy.

    Of course others, who don’t know how the Team prevaricates and misleads, may think that “these issues” refers to all the criticisms, which is why it’s worded so misleadingly. My contempt for Mann and his team continues to grow. It’s a shame Eduardo allowed himself to be associated with them.

  4. Willis Eschenbach
    Posted Oct 27, 2006 at 5:21 AM | Permalink

    Steve, you say they used “Greenland dO18” from Fisher. As I mentioned elsewhere, Fisher used the ice core dO18 as a proxy for precipitation.

    w.

    … but wait, there’s more! Order now, and at no extra cost,
    get a special attachment for the Mannomatic
    that lets you predict rain and sun with the same proxy! …

  5. Posted Oct 27, 2006 at 6:42 AM | Permalink

    In what sense is the composite-plus-variance-matching (CVM) method optimal?

  6. Paul Penrose
    Posted Oct 27, 2006 at 7:28 AM | Permalink

    Which one of the authors is the statistician?

  7. bender
    Posted Oct 27, 2006 at 7:41 AM | Permalink

    From Juckes et al.:

    Briffa and Osborn (1999) and MM2005c suggest that rising CO2 levels may have contributed significantly to the 19th and 20th century increase in growth rate in some trees, particularly the bristlecone pines, but though CO2 fertilisation has been measured in saplings and strip-bark orange trees (which were well watered and fertilised) (Graybill and Idso, 1993, and references therein) efforts to reproduce the effect in controlled experiments with mature forest trees in natural conditions (Korner et al., 2005) have not produced positive results.

    This is not convincing to me, and maybe Drs. Zorita or Wilson could comment. If the growth response to temperature, moisture, and CO2 is synergistic, then the dendroclimatologists are mis-specifying the response model.

    i.e. They’re trying to cram this reality:

    G = T + M + C + T*M + T*C + M*C + T*M*C + e

    into this model:

    G = T + e

    Anyone care to comment on the possibility of response model mis-specification?
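
    To make the question concrete, here is a toy simulation (my own sketch with made-up coefficients, not anyone’s published response model): generate growth from a synergistic model, then fit the G = T + e form that the reconstructions implicitly assume.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Standardized "climate" drivers: temperature, moisture, CO2
    T = rng.normal(size=n)
    M = rng.normal(size=n)
    C = np.linspace(-1, 1, n)          # CO2 trends upward over the record

    # True growth: additive terms plus synergistic interactions
    G = (T + M + C + 0.5 * (T*M + T*C + M*C) + 0.25 * T*M*C
         + rng.normal(scale=0.5, size=n))

    # Misspecified model: G = a + b*T + e
    b, a = np.polyfit(T, G, 1)
    resid = G - (a + b * T)
    print(f"R^2 of the T-only model: {1 - resid.var() / G.var():.2f}")

    # The omitted M and C terms end up in the residual; since C trends,
    # the "noise" is trended - exactly what biases a reconstruction
    print(f"correlation of residual with the CO2 trend: "
          f"{np.corrcoef(resid, C)[0, 1]:.2f}")
    ```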

  8. bender
    Posted Oct 27, 2006 at 7:41 AM | Permalink

    Which one of the authors is the biologist?

  9. Steve McIntyre
    Posted Oct 27, 2006 at 7:59 AM | Permalink

    Sometimes the mendaciousness of the Team astonishes me. I am disappointed that Eduardo has associated himself with this. The account of our work is quite troubling, and I get really tired of picking spitballs off the wall. For example, here’s a particularly saucy statement:

    The code used by MM2005 is not, at the time of writing, available, but the code fragments included in their text ….

    Now this is troubling on two different counts. First, it is simply untrue. The code used in McIntyre and McKitrick 2005 (the GRL article) is available in the SI to the article, which is cited in the publication. It is at ftp://ftp.agu.org/apend/gl/2004GL021750. (The code for MM05 (EE) is at http://www.climate2003.com/scripts/MM05_EE/ee2005.backup.txt.) They must have consulted this site, as they specifically discuss aspects of MM03 discussed there.

    This incorrect statement is made in a context that implies that code is otherwise generally available, while coauthor Briffa refuses even to identify the sites in Briffa et al 2001 (and numerous other studies) or to provide the measurement data for the Yamal, Tornetrask update or Taymir sites. Briffa has never made code available. Esper refused to provide data; after dozens of emails and the intervention of Science, an incomplete file was arranged, and Esper refused to provide a reproducible explanation of his methodology. Moberg’s data supply was better, but he refused to provide all series until a formal complaint was made to Nature. Nanne Weber didn’t say boo to a goose when I was at KNMI.

    They are an astonishingly cheeky bunch. I wish Eduardo travelled in better company.

  10. Steve McIntyre
    Posted Oct 27, 2006 at 8:05 AM | Permalink

    #6. It’s definitely not Nanne Weber. She told me at KNMI that she didn’t like statistics; she didn’t know r2 from RE. Hey, it’s climate science – I guess that is one of her qualifications.

  11. bender
    Posted Oct 27, 2006 at 8:10 AM | Permalink

    I saw that statement on code availability and choked on my coffee. They should be asked to issue a corrigendum.

    But before tarring Zorita with that brush, recognize that a junior author on a many-authored paper does not have much control over content and tone. Writing a paper with the team does not mean you are part of the team. Not anymore, at least – i.e., not since Wegman/NAS.

  12. bender
    Posted Oct 27, 2006 at 8:20 AM | Permalink

    I’ve never written one of these many-authored “consensus” papers before. I’m starting my first just now. And it’s interesting to see first-hand how the strong personalities leading the enterprise are very eager to forge a consensus where there is almost none. I can see how, unless there is push-back from the co-authors, the lead authors will have their way with the text. At the end of it all, as a junior author you have two choices: take your name off the paper and have nothing to show for your efforts, or keep it on and risk being tarred with the brush. But who can afford option 1?

  13. bender
    Posted Oct 27, 2006 at 8:56 AM | Permalink

    Re #7, on response-function model misspecification:
    Here is an interesting discussion of a paper showing how increased CO2 reduces stomatal opening, preventing water loss and thus increasing water-use efficiency. Brilliant. The plants are “watering” themselves … by preventing excessive dehydration, which is probably pretty severe in these very exposed alpine environments.

    All of a sudden it makes sense why the strip-bark bcps might respond more than the full-barks. Those trees are under severe hydric stress. That’s why they’re strip-barked in the first place!

    Purely additive response models are therefore misspecified models, because C and M interact synergistically.

    QED

    Note that this effect would not be restricted to just bcps. All treeline conifers used in “temperature” reconstruction should respond this way; the more extreme the hydric stress the stronger the synergistic response.

  14. Steve McIntyre
    Posted Oct 27, 2006 at 9:05 AM | Permalink

    bender, some other references on this topic at this post http://www.climateaudit.org/?p=329

  15. bender
    Posted Oct 27, 2006 at 9:21 AM | Permalink

    Re #14
    Just when I thought I’d read the whole blog, out comes a thread from the past like this one that just nails it. (My #7/#13 comments were completely independent.)

    Someone has got to be working on the physiology of this problem, because it is just too tantalizing (and obvious) and workable for that not to be the case. The experiments required to nail it down (i.e. parametrize the model and quantify the misspecification effect) are pretty simple. But the tree-ringers wouldn’t do that kind of work. It would have to be up to a real physiologist to do it. Therefore it is possible that it is an open problem, just begging to be solved, with many papers in Nature and Science as the reward.

    Which ambitious young tree physiologist out there wants to have some fun with me overturning the scientific establishment?

  16. Jean S
    Posted Oct 27, 2006 at 9:24 AM | Permalink

    But before tarring Zorita with that brush, recognize that a junior author on a many-authored paper does not have much control over content and tone.

    I agree. And I hope that Eduardo understands that whatever I may say about this paper… it’s nothing personal. I have to say, though, that I’m quite disappointed, as this paper goes against pretty much everything I’ve been saying, e.g., here. 😦

    A couple of questions:

    1) What does “standardized proxy records” mean in the first line of Appendix A1 – zero mean and unit variance?
    2) Do I understand this CVM correctly: take the mean of the (standardized) proxies and scale by the ratio of the instrumental std to the std of the proxy mean?!!?

  17. Steve McIntyre
    Posted Oct 27, 2006 at 9:34 AM | Permalink

    Tang et al is mentioned by the NAS Panel – we drew it to their attention.

    Notice that the Euro Hockey Team don’t mention either the NAS Panel or Wegman. Hegerl testified to the NAS Panel. The NAS Panel said: don’t use bristlecones. So the Euro Team go ahead and use 4 bristlecone/foxtail series.

    Aren’t academic publications supposed to reference and discuss the most up-to-date literature?

  18. Ross McKitrick
    Posted Oct 27, 2006 at 9:35 AM | Permalink

    I’ve only given the paper a quick read so far. Despite the various juvenile asides and grudging treatment of our work, they do quietly concede that removing the bristlecone pines removes the 15th-century skill. Their defence, I guess (they don’t press it very far), is that the bcp’s aren’t actually CO2-fertilized. But it’s a straw-man argument. They imply that claims of spurious growth in the bcp’s arise from a priori views on CO2 effects. No, it is based on the lack of correlation with local temperature; CO2 fertilization is one of several candidate explanations, but the matter apparently remains a “mystery”, as Hughes put it.

    And they don’t mention that if skill hangs on the use of bcp’s, the results are not robust; nor do they acknowledge that, even with the bcp’s, the r2 and CE test scores show no skill up to the 1700s, even though they cite Wahl and Ammann, who computed these results and thereby confirmed and extended our own earlier assertions on the point. And unless I missed it, they totally ignore the problem of spurious RE scores. All these issues are out there and well understood. The paper doesn’t present any advance on them. Far from having ‘moved on’, some of these guys have yet to ‘catch up’.

  19. Mark T
    Posted Oct 27, 2006 at 9:38 AM | Permalink

    I’ve never written one of these many-authored “consensus” papers before. I’m starting my first just now.

    Neither have I. Just about everything out there from me has only my name on it (except the patent I led, and a paper my MS thesis advisor wrote based on my research). As a matter of fact, what my current advisor and I intend to do in the next year or so will involve only the two of us, which is the norm for student/teacher papers and, in general, for most journal papers I’ve seen from the IEEE (occasionally there are 4 or 5 authors, but they usually all work in a lab together).

    I’m not sure how I would feel about being stuck with such a conundrum, bender.

    Mark

  20. Jean S
    Posted Oct 27, 2006 at 9:46 AM | Permalink

    I’ve seen from the IEEE (occasionally there are 4 or 5 authors, but they usually all work in a lab together).

    If there are four authors, the rule of thumb in engineering papers is the following: the first author had most of the ideas, plus did the actual writing and ran the simulations; the second author had some of the ideas; the third author is the boss of the first author (paid the salary); and the fourth author is the boss of the second author (paid the salary). 😉

  21. Steve McIntyre
    Posted Oct 27, 2006 at 9:50 AM | Permalink

    My two cents worth of advice – a wise old businessman once told me when I was young: lie down with dogs and you catch fleas. If you get involved with tricky people, there always ends up being a problem. When you’re younger, you often think that you can handle it, but it will blow up somewhere.

    Another old saying – the first loss is sometimes the best loss. Be prepared to walk away. What’s a little time invested in a paper? If you don’t like how it’s going, walk away.

  22. Mark T
    Posted Oct 27, 2006 at 9:51 AM | Permalink

    I’ve always assumed that teachers get listed first regardless, btw. True or no?

    Mark

  23. bender
    Posted Oct 27, 2006 at 9:54 AM | Permalink

    Re #18
    I think we may have the “mystery” by the throat, in the form of #7. And if this hypothesis is correct, then the effect is problematic in all high-elevation conifers, not just the two bcp forms. What would NAS say then?

    Say, University of Guelph has a good agricultural program. Surely there are plant physiologists there with an interest in biology-free, misspecified empirical models that are being used to make controversial statements about how the planet’s climate system behaves?

  24. Mark T
    Posted Oct 27, 2006 at 9:54 AM | Permalink

    Agreed, Steve. Part of the problem, however, is that the inexperienced often do not realize they’ve made their bed with flea-ridden bedding. One of the benefits of being an experienced engineer taking his first steps into the world of academia is that I am fully aware of what happens when I get involved with a sinking ship. I hope this pays off.

    Mark

  25. Jean S
    Posted Oct 27, 2006 at 9:58 AM | Permalink

    Far from having ‘moved on’, some of these guys have yet to ‘catch up’.

    Once again, they first fit their proxy data to the instrumental data (this time the full period 1856-1980) and then calculate their statistics from the same instrumental data. And they see no problem with this.
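
    A toy sketch of why that matters (pure synthetic data, my own code): even proxies that are nothing but noise, once fitted to the calibration target, show apparent in-sample skill, and it vanishes on held-out data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_cal, n_ver, n_proxies = 125, 50, 30

    target_cal = np.cumsum(rng.normal(size=n_cal)) * 0.1
    target_ver = np.cumsum(rng.normal(size=n_ver)) * 0.1

    # Pure-noise "proxies": no climate signal whatsoever
    X_cal = rng.normal(size=(n_cal, n_proxies))
    X_ver = rng.normal(size=(n_ver, n_proxies))

    coef, *_ = np.linalg.lstsq(X_cal, target_cal, rcond=None)

    def r2(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    # Scored against the SAME data used for fitting vs. held-out data
    print(f"calibration r2 : {r2(target_cal, X_cal @ coef):.2f}")   # looks skilful
    print(f"verification r2: {r2(target_ver, X_ver @ coef):.2f}")   # ~0 or negative
    ```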

  26. Mark T
    Posted Oct 27, 2006 at 9:59 AM | Permalink

    It has long been my supposition, Ross, that the entire proxy realm is based on flawed assumptions that form the basis of a circular argument. Just once, I’d like to see some sort of definitive proof that any proxy is primarily driven by temperature and not by any one of the other known confounding factors. They start out assuming temperature = proxy measurement criteria, then show some (weak) correlation, and voilà!, temperature = CO2 = man-made.

    Mark

  27. Mark T
    Posted Oct 27, 2006 at 10:02 AM | Permalink

    Interesting description, Jean. I had never read that before. They mention cross-validation, too. 🙂

    Mark

  28. bender
    Posted Oct 27, 2006 at 10:08 AM | Permalink

    Re #21
    All dogs have fleas. Choose your dogs, choose your fleas. Change your choice, change your fleas.

  29. Mark T
    Posted Oct 27, 2006 at 11:01 AM | Permalink

    Fortunately, being a project lead typically allows me to pick my fleas.

    Mark

  30. Steve Sadlov
    Posted Oct 27, 2006 at 12:50 PM | Permalink

    RE: #12 – I was once on a many-authored paper (you won’t find it easily, pre-internet days) where I simply had lunch with a core member.

  31. bender
    Posted Oct 27, 2006 at 1:29 PM | Permalink

    Re #30
    Doesn’t mean you didn’t have a tremendous influence on the evolution of the paper during that lunch.

  32. bender
    Posted Oct 27, 2006 at 1:42 PM | Permalink

    Re #4

    get a special attachment for the Mannomatic that lets you predict rain and sun with the same proxy

    Willis, that’s exactly what Salzer & Kipfmueller (2005) purport to do with bcps.

  33. Mark T
    Posted Oct 27, 2006 at 1:42 PM | Permalink

    One of the guys I included on my (sole) patent was there simply because he had a hand in some semi-related front-end work with the receiver I was designing (the patent was for an Automatic Gain Control algorithm). He and I always had a tense relationship, so I sort of tossed him a bone since we had worked out our differences (a bit at least). I also asked him to review the work, which he did.

    Mark

  34. Steve McIntyre
    Posted Oct 27, 2006 at 1:49 PM | Permalink

    #32. The Arabian Sea G. bulloides percentage does the same. It’s a precipitation proxy in Treydte et al (Nature 2006) and a temperature proxy in Moberg et al (Nature 2005). As long as it’s a stick, it’s Natur-al.

  35. bender
    Posted Oct 27, 2006 at 1:55 PM | Permalink

    Going out on a limb here, but what’s the chance that the too-sharp G. bulloides response is due to multivariate synergies not captured in a mis-specified, additive model?

  36. Steve McIntyre
    Posted Oct 27, 2006 at 2:04 PM | Permalink

    The G bulloides wasn’t even calibrated to temperature. It’s a percentage series and looks more like a uniform distribution.

    It’s funny to look at the mau-mauing of Gray about upwelling and then the Team uses an upwelling index as a proxy for global warming. It is all bizarre beyond words.

    This proxy is listed as one of the Euro Team All-Star proxies in their Table 1 but doesn’t seem to be in the listing of proxies in their SI.

  37. Steve McIntyre
    Posted Oct 27, 2006 at 3:23 PM | Permalink

    Wullschleger 2002 is online here

  38. Brooks Hurd
    Posted Oct 28, 2006 at 4:20 AM | Permalink

    From Juckes et al 2006, Conclusions, page 1026:

    The IPCC2001 conclusion that temperatures of the past millennium are unlikely to have been as warm, at any time prior to the 20th century, as the last decades of the 20th century is supported by subsequent research and by the results obtained here. Papers which claim to refute the IPCC2001 conclusion on the climate of the past millennium have been reviewed and some are found to contain serious flaws. Our study corroborates the IPCC2001 conclusions.

    What they are saying in the conclusions is that, since some of the papers which run counter to IPCC2001 had serious flaws, let’s forget all of the papers.

    Did all who peer reviewed this paper flunk Freshman Logic? This sort of logical error befits politicians, not scientists.

    To me, this casts yet more serious doubt on the current peer-review process. This is not an error in esoteric statistics, which might be excused if the peer reviewer were not well versed in the field. This is a basic logical error which was purposely inserted in the conclusions to mislead readers.

  39. Jean S
    Posted Oct 28, 2006 at 4:46 AM | Permalink

    Did all who peer reviewed this paper flunk Freshman Logic?

    It is not peer reviewed yet. It is under review. If you want, you can participate in the review (Open Discussion); see the instructions here.

  40. Brooks Hurd
    Posted Oct 28, 2006 at 5:11 AM | Permalink

    Thanks Jean

  41. bender
    Posted Oct 28, 2006 at 6:42 AM | Permalink

    That a paper contains a flaw does not mean it came to an incorrect conclusion. It means its conclusions are unsupported. There’s a difference, and in this case that matters.

    e.g. Is IPCC2001 correct? There are several studies which claim to have refuted it, but they contain flaws. That doesn’t mean IPCC2001 is correct. It means it just hasn’t been refuted. Yet.

    I would be careful about entering into the commentary without thinking some more about this. Accusing these guys of “flunking Freshman Logic”, well – them’s fightin’ woids.

  42. welikerocks
    Posted Oct 28, 2006 at 8:02 AM | Permalink

    I’ll bite.

    Bender says:

    That a paper contains a flaw does not mean it came to an incorrect conclusion. It means its conclusions are unsupported. There’s a difference, and in this case that matters.

    If your conclusions are NOT supported by your paper’s contents under the scientific method as understood [and you should state your standards, i.e. “I will be convinced when A, B, and C happen” (this does not change) “and this is the method I used”], and you still think your conclusions are correct “somehow”, isn’t that called Faith?

    Is IPCC2001 correct? There are several studies which claim to have refuted it; but they contain flaws. That doesn’t mean IPCC is correct. It means it just hasn’t been refuted. Yet.

    So no one has any idea exactly what is going on and they are jumping to conclusions.

    That’s my logic.

  43. welikerocks
    Posted Oct 28, 2006 at 8:29 AM | Permalink

    Hey wait Bender:

    Is IPCC2001 correct? There are several studies which claim to have refuted it; but they contain flaws.

    That’s not what it said.
    It said some contain flaws, not that they all contain flaws.

    *** some are found to contain serious flaws.**
    (wouldn’t you love to make them define serious? And there is some thinking here that is wierd – My flaws are not as bad as your flaws?

    It is all a big fat guess.

  44. Posted Oct 28, 2006 at 8:41 AM | Permalink

    #39

    Good to know, thanks 😉 Let’s see if CVM gets through. (They refer to Mann and Jones 2003, but I found no CVM in that paper.)

  45. Steve McIntyre
    Posted Oct 28, 2006 at 7:17 PM | Permalink

    I’ve been wading through this paper. There are some aspects of it that are really pathetic. It’s hard to imagine that anyone, after the NAS report and Wegman Report, would examine permutations of the Mannian pseudo-principal-components method – I call it a pseudo-method because, due to the decentering, it is not a principal components method. Anyway, they experiment with different permutations of the Mannian pseudo-method: using a short segment of 125 years; using short-segment undetrended standard deviation instead of short-segment detrended standard deviation. I’ll write this up. This is like being in a time machine.
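
    For readers who haven’t followed the decentering issue, here is a minimal sketch of the difference (toy red noise and my own code, not anyone’s script): the Mannian step centers and scales on the short calibration segment rather than the full series, which rewards series whose calibration mean departs from their long-term mean.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_years, n_series, cal = 581, 50, 79   # e.g. AD1400-1980, 1902-1980 calibration

    # Red-noise "tree ring" series with no common climate signal
    X = np.cumsum(rng.normal(size=(n_years, n_series)), axis=0) * 0.1

    def pc1(data):
        # First principal component score series via SVD
        u, s, vt = np.linalg.svd(data, full_matrices=False)
        return u[:, 0] * s[0]

    def hsi(v):
        # Hockey-stick index: offset of the calibration-period mean from
        # the pre-calibration mean, in units of full-series sd
        return abs(v[-cal:].mean() - v[:-cal].mean()) / v.std()

    # Conventional PCA: center and scale on the full period
    X_centered = (X - X.mean(axis=0)) / X.std(axis=0)

    # Mannian variant: center and scale on the calibration segment only
    X_decentered = (X - X[-cal:].mean(axis=0)) / X[-cal:].std(axis=0)

    # Decentered PC1 tends to inherit a hockey-stick shape from pure noise
    print(f"HSI, centered PC1  : {hsi(pc1(X_centered)):.2f}")
    print(f"HSI, decentered PC1: {hsi(pc1(X_decentered)):.2f}")
    ```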

    It looks to me like they had a bunch of time and money tied up in this report, got sidetracked on irrelevant issues, and then probably had it mostly in the can when the NAS and Wegman reports came out.

    They also spend a lot of time experimenting with networks using trees that were not extended to 1980. In MM03, we observed that a lot of trees had been extended, but it was not an issue that we dwelled on or placed much weight on. In the NOAMER network, the extended trees tend not to have HS shapes – Graybill’s bristlecones were collected in the mid-1980s. It appears that the Euro Team has done most of its calculations truncating the NOAMER network by the 14 series that needed extension.

    My jaw is dropping further and further as I pursue this article.

    They spend time on the Stahle/SWM network, which Mann raised as an issue in their Internet response in Nov 2003, but which both Mann and we agreed was irrelevant to any reconstruction issues. The Euro Team has waded into the Stahle/SWM network; it’s hard to imagine why. However, maybe I’ll get one loose end cleared up that I’d been unable to resolve with Nature. A couple of sites in this network have identical values for the first 120-125 years or so. Mann refused to identify one of them, saying it was only a “Stahle (pers comm)”. I’m sure that he used near-duplicate versions of the same site. Maybe the Euro Team can sort this out, now that they’ve raised it as an issue.

  46. Posted Oct 30, 2006 at 2:26 AM | Permalink

    # 16

    2) Do I understand this CVM correctly: take the mean of the (standardized) proxies and scale by the ratio of the instrumental std to the std of the proxy mean?!!?

    I found something similar here:

    http://www.climateaudit.org/?p=530

    After the calculation of RPCs, Mann re-scaled the variance of each RPC in the calibration period to the variance of the “observed” RPC.

    Have to admit, this is something I don’t understand. If calibration residuals are used to compute 2-sigmas, and before that the reconstruction is scaled so that the variances match… that scales both signal and noise, then? I think I’ve missed something.

  47. Steve McIntyre
    Posted Oct 30, 2006 at 7:17 AM | Permalink

    #46. Reviewing the bidding on the CVM approach, they:

    1) standardize the selected proxies to standard deviation units;
    2) average them – this will be in standard deviation units and have reduced variance;
    3) re-scale the average to match the standard deviation of NH temperature (or other target)

    Mann did something a little different. He decomposed NH temperature into temperature PCs and (according to my analysis) regressed (Partial Least Squares regression) each of the temperature PCs against the many proxies. Partial Least Squares regression can have a re-scaling step. The issue here is really overfitting, as Mann does many calculations that cannot be assigned any physical meaning and gets a brute-force fit in the calibration period (high calibration r2, low calibration residuals) – with catastrophic verification r2 values.

    In some ways, the matter is easier to quantify in the Mannian framework, where you have hundreds of proxies, than when there has been manual cherrypicking, as in the aw-shucks collections of the more recent studies.
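
    For concreteness, here are the three CVM steps above in a few lines of toy code (synthetic series, not the Juckes selection):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    proxies = rng.normal(size=(150, 18))        # 18 toy proxy series
    nh_temp = rng.normal(scale=0.25, size=150)  # toy NH temperature target

    # 1) standardize each proxy to mean 0, sd 1
    Z = (proxies - proxies.mean(axis=0)) / proxies.std(axis=0)

    # 2) average - for near-independent proxies the sd of the mean
    #    shrinks toward 1/sqrt(18) of a single proxy's sd
    composite = Z.mean(axis=1)
    print(f"sd of composite: {composite.std():.2f} (each proxy has sd 1)")

    # 3) rescale the average to the target's sd
    recon = composite * (nh_temp.std() / composite.std())
    print(f"sd of reconstruction: {recon.std():.2f} (matches target by construction)")
    ```

    Note that step 3 inflates signal and noise by the same factor, which bears directly on the question raised in #46.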

  48. per
    Posted Oct 30, 2006 at 12:07 PM | Permalink

    this paper is hilarious.

    [MM2003] criticise MBH1998 on many counts, some related to deficiencies in the description of the data used and possible irregularities in the data itself. These issues have been largely resolved in Mann et al. (2004).

    so that is the corrigendum, essentially accepting that the criticisms in MM2003 are correct.

    They attribute the failure of this attempt to errors in the MBH1998 methodology, but a major factor was their misunderstanding of the stepwise reconstruction method in relation to stage (1) (Jones and Mann, 2004; Wahl and Ammann, 2006) …. unlike the discredited MM2003 result, …

    errr, weren’t the relevant details only published in 2004?

  49. Steve McIntyre
    Posted Oct 30, 2006 at 12:39 PM | Permalink

    per, nice to hear from you. You’ve followed this story from day 1. It’s amazing to see them try to re-hash the stepwise reconstruction stuff. It’s like we’re back in November 2003.

  50. per
    Posted Oct 30, 2006 at 1:44 PM | Permalink

    it is like reading usenet! There was a materials complaint to Nature, and MBH were forced to issue a corrigendum – which amounts to a fairly clear vindication of your claims of “deficiencies in the description of the data used and possible irregularities in the data itself”.

    It is surprising to see a garbled and misleading claim like “these issues were resolved” – the claims were either right or wrong, so why not state the truth?

    It is difficult to see a simple explanation of how someone came to write these lines.

    per

  51. Steve McIntyre
    Posted Oct 30, 2006 at 1:58 PM | Permalink

    Actually, the issues weren’t “resolved” in the Corrigendum. The Corrigendum was not peer-reviewed; it was simply something that Mann and the Nature editors agreed on. After the Corrigendum, I re-submitted my request for things like the actual results for the AD1400 step, which Nature said was up to the author to provide. Mann refused, and it’s still unavailable. One can sort of replicate it, but both Wahl and Ammann and ourselves (whose replications are essentially identical) replicate a little warmer than the archived MBH results from 1400-1450. After all this time, I’d still like to see Mann’s actual results for this step.

  52. Posted Oct 31, 2006 at 3:46 AM | Permalink

    #47

    std matching combined with CIs from calibration residuals would be an interesting combination… 2-sigmas would never exceed 4σ_T, where σ_T is the sample std of the calibration data.

  53. TAC
    Posted Nov 5, 2006 at 5:14 AM | Permalink

    This is likely OT, but I think it is of interest. The CPD paper cites Hosking (1984), a very important paper on how to use fractional differencing to model long-term persistence (aka “long-range memory”, the “Hurst phenomenon”, “fractional Gaussian noise”, “FARIMA”, “arfima”, etc.) for the purpose of computing statistical significance. Employing this sort of approach would constitute a welcome advance: proper concern for LTP, and statistical treatment of LTP, has been sorely lacking. I was excited about seeing this reference.

    However, it turns out Hosking is cited once in the paper, and it is in one of the most bizarre paragraphs I have ever encountered. Here it is in full:

    Following Hosking (1984), a random time series with a specified lag correlation structure is obtained from the partial correlation coefficients, which are generated using Levinson-Durbin regression. It is, however, not possible to generate a sequence matching an arbitrarily specified correlation structure and there is no guarantee that an estimate of the correlation structure obtained from a small sample will be realizable. It is found that the Levinson-Durbin regression diverges when run with the lag correlation functions generated from the Jones et al. (1986) northern hemisphere temperature record and also that from the HCA composite. This divergence is avoided by truncating the regression after n=70 and 100 years, respectively, for these two series. The sample lag-correlation coefficients are, in any case, unreliable beyond this point. Truncating the regression results in a random sequence with a lag correlation fitting that specified up to the truncation point and then decaying.

    I can’t say for sure, but it appears that the approach taken in this paper – an inevitably unsuccessful effort to fit a huge ARMA process – was precisely what Hosking was arguing against. The whole point of Hosking’s work is that you can model this sort of long-range correlation structure parsimoniously (say with 1, 2, or maybe 3 parameters) if you recognize the underlying structure. The fitting problems reported above are well understood; Hosking introduced fractional differencing precisely to solve them.

    This is frustrating. It makes me wonder if the authors even bothered to read Hosking.

    Hosking, J. R. M.: Modeling persistence in hydrological time series using fractional differencing, Water Resour. Res., 20(12), 1898-1908, 1984.
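
    For anyone curious what Hosking’s one-parameter alternative looks like in practice, here is a minimal sketch (my own toy code, nothing from the CPD paper): FARIMA(0,d,0) noise generated from the MA(∞) fractional-differencing weights, with no 70-lag pacf fitting anywhere.

    ```python
    import numpy as np

    def farima_0d0(n, d, rng, burn=1000):
        # FARIMA(0,d,0) via a truncated MA(infinity) representation:
        # psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k, for 0 < d < 0.5
        k = np.arange(1, n + burn)
        psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
        eps = rng.normal(size=n + burn + len(psi) - 1)
        return np.convolve(eps, psi, mode="valid")[-n:]

    rng = np.random.default_rng(5)
    x = farima_0d0(2000, d=0.4, rng=rng)

    # Long memory shows up as slowly decaying autocorrelation (~ k^(2d-1))
    for lag in (1, 10, 100):
        print(f"lag {lag:3d}: acf = {np.corrcoef(x[:-lag], x[lag:])[0, 1]:.2f}")
    ```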

  54. Willis Eschenbach
    Posted Nov 5, 2006 at 6:41 AM | Permalink

    TAC, thanks for pointing out the strangeness of that paragraph. I puzzled over it ’til my brain exploded, and I couldn’t make any sense out of it either, but I figured that was just my lack of understanding.

    w.

  55. TAC
    Posted Nov 5, 2006 at 10:45 AM | Permalink

    #54 Willis: hermeneutically speaking, that paragraph really is something to behold. I think nearly every sentence includes at least one error; some sentences manage to incorporate two or three. Or at least that’s my interpretation; it is extraordinarily confusing. I suppose it must have meant something to somebody — it passed peer review, right?

  56. Steve McIntyre
    Posted Nov 5, 2006 at 10:53 AM | Permalink

    #55. I think that it’s in open peer review.

    I’m of two minds about whether to wade into the debate over at CPD or see how the climate guys manage their peer review all by themselves (even with the benefit of substantive commentary over here).

  57. Willis Eschenbach
    Posted Nov 5, 2006 at 2:25 PM | Permalink

    As of this morning, I’m the only person to have commented on the MITRIE paper at CPD. Go figure.

    I asked a couple of pointed questions of Martin; no reply. Likely he’s busy … but I sure hope the “Discussion” part of CPD is more lively than it has been to date …

    w.

  58. MarkR
    Posted Nov 5, 2006 at 3:11 PM | Permalink

    It’s the weekend, some people have other non-work related things on their minds.

    Over here it’s Bonfire Night.

    Penny for the Guy Gov?

  59. KevinUK
    Posted Nov 5, 2006 at 3:21 PM | Permalink

    #58, Mark R

    Shouldn’t that be, post the Stern review:

    “Several pennies for the Government, guy?”

    Given that bonfires contribute to CO2 in the atmosphere, and therefore to global warming, I wonder how long it will be before our eco-infiltrated, nannying, politically correct government introduces a ban on celebrating ‘bonfire night’?

    KevinUK

  60. MarkR
    Posted Nov 5, 2006 at 3:51 PM | Permalink

    a ban on celebrating ‘bonfire night’?

    Hi Kevin, I think they already banned burning the Guy in some places.

  61. Willis Eschenbach
    Posted Nov 5, 2006 at 4:18 PM | Permalink

    Yeah, it’s the weekend, but the site’s been open for comments for ten days now.

    w.

  62. TAC
    Posted Nov 6, 2006 at 7:41 AM | Permalink

    Martin Juckes: As I mentioned in #53, citing Hosking (1984) was welcome, but the discussion in the CPD paper is entirely unsatisfactory. My suggestion would be to read with care some of the recent work on this topic: P Craigmile et al. [“Trend assessment in a long memory dependence model using the discrete wavelet transform,” Environmetrics 15, 313-335, 2004]; M Kallache et al. [“Trend assessment: applications for hydrology and climate research,” Nonlinear Processes in Geophysics 12:201-210, 2005]; D Koutsoyiannis [“Nonstationarity versus scaling in hydrology,” Journal of Hydrology 324, 239-254, 2006], and the various papers cited therein.

    Here is the gist: most papers on climate trends employ demonstrably inappropriate methods. The conventional assumptions of iid data, or at most AR(1) or ARMA(p,q) [p and q small], are inconsistent with what one sees in virtually any long time series of climate data (proxy or real). This has been known for over 50 years [Hurst, 1951]. One consequence is that the standard trend tests greatly overstate trend significance, typically with errors that can best be described as huge (e.g. here).

    I do not fully understand what was done in the CPD paper, so I am not sure how to comment on it. However, I can state that what appears in the appendix does not inspire confidence. Specifically, the right way to handle this situation does not involve reproducing the sample pacf out to lag 70 or 100.

    There really is a better way; see references above, or, even better, consult a statistician familiar with this topic.
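
    To put a rough number on “huge”, here is a toy Monte Carlo (my own sketch, with assumed parameter values): generate trend-free long-memory noise and count how often a naive OLS trend test rejects the no-trend null at the nominal 5% level.

    ```python
    import numpy as np

    def farima_0d0(n, d, rng, burn=500):
        # FARIMA(0,d,0) via truncated MA(inf) weights (see #53)
        k = np.arange(1, n + burn)
        psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
        eps = rng.normal(size=n + burn + len(psi) - 1)
        return np.convolve(eps, psi, mode="valid")[-n:]

    rng = np.random.default_rng(6)
    n, trials, d = 150, 500, 0.4
    t = np.arange(n)
    tc = t - t.mean()

    rejections = 0
    for _ in range(trials):
        y = farima_0d0(n, d, rng)            # no trend by construction
        beta = (tc @ y) / (tc @ tc)          # OLS slope
        resid = y - y.mean() - beta * tc
        se = np.sqrt(resid @ resid / (n - 2) / (tc @ tc))
        rejections += abs(beta / se) > 1.96  # naive iid-based test

    print(f"nominal size 5%; actual rejection rate at d={d}: "
          f"{rejections / trials:.0%}")       # typically far above 5%
    ```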

  63. bender
    Posted Nov 17, 2006 at 9:17 AM | Permalink

    Dr. Juckes,
    Your attention is requested on #7.

  64. jae
    Posted Nov 17, 2006 at 11:06 AM | Permalink

    Bender: Don’t your comments in #7 suggest that the use of tree rings as temperature proxies is futile? Too many variables and interactions? And you have listed only a few of the many variables.

  65. bender
    Posted Nov 17, 2006 at 11:42 AM | Permalink

    Re #64
    1. No, because I am not stating a proven fact. I am asking a provocative question.
    2. If one could identify the top 4 factors and quantify their influence (including interactions), reconstruction might be possible through a simulation approach, rather than a statistical approach. (If the top 4 factors account for much of the variability in an experimental setting, the reconstruction might be quite good.)
    3. However, you are definitely catching my point. If a four-factor model with interactions is what is required to explain 80% of the variation in growth, then this might explain why a one-term linear model that “explains” 10% of the variability is inadequate. That is why it is important that the Rob Wilsons, the Martin Juckeses & co-authors address this point.
    4. My question is directed specifically at California bcps, not all trees in general (although it may have broader relevance for other species in other systems).

  66. jae
    Posted Nov 17, 2006 at 11:59 AM | Permalink

    65: Thanks. The problem is that there is little or no data for the other variables. In fact, as I understand the situation, they are not even using the correct (local) data for the temperature variable.
