Bürger and Cubasch

I quickly mentioned Bürger and Cubasch [2005], which was published in December, and meant to post up some further comments at the time, but forgot to do so. I was reminded of this by the question of a reader at Daily Kos. Let me mention first that Bürger and Cubasch is a really interesting and well-conceived paper and that Cubasch was an IPCC TAR heavyweight. Here's a tease: the following diagram is in their Supplementary Information and is explained later in this post.

Anyway, Jack asked Mann at Daily Kos:

Response to Burger and Cubasch 2005 needed
Thanks for the response here and the direct response on RealClimate! I have just posted a reply on RealClimate that is summarized in the Subject line of this post. Outdated? Not when a critical (i.e., important) paper was published two months ago.

I looked for the post about Burger and Cubasch at RealClimate that Jack mentioned and, surprise, surprise, it was censored. (The first post by Jack was allowed, but not the one on Bürger and Cubasch.) I wonder what Gavin's excuse is this time. They really are pathetic. Anyway, they couldn't censor at Daily Kos, so Jack's question survived there and prompted the following reply from Mann:

Burger and Cubasch: Would have been a useful contribution to the literature about 10 years ago, when Mann et al ("MBH98") and other groups were using simple EOF-based approaches. The primary criticism of Burger and Cubasch is that such approaches lack regularization or an explicit model of the error covariance structure of the data. This is fair enough. However, the method used by Mann and Colleagues for roughly the past 6 years now, Regularized Expectation-Maximization, is not subject to either criticism. This method yields essentially the same reconstruction when applied to the same proxy data [Rutherford, S., Mann, M.E., Osborn, T.J., Bradley, R.S., Briffa, K.R., Hughes, M.K., Jones, P.D., Proxy-based Northern Hemisphere Surface Temperature Reconstructions: Sensitivity to Methodology, Predictor Network, Target Season and Target Domain, Journal of Climate, 18, 2308-2329, 2005], http://www.meteo.psu.edu/~mann/shared/articles/RuthetalJClimate05.pdf, indicating that the original approach was robust in practice, despite the legitimate theoretical limitations of using a truncated EOF basis.

This method furthermore has been demonstrated to accurately reconstruct multi-century timescale variability based on applications to model simulation data (which rebuts another criticism that has been leveled against the MBH98 method).
Mann, M.E., Rutherford, S., Wahl, E., Ammann, C., Testing the Fidelity of Methods Used in Proxy-based Reconstructions of Past Climate, Journal of Climate, 18, 4097-4107, 2005.

So Burger and Cubasch is effectively pre-empted by this more recent work, which was highlighted by Science last November and in the February issue, to appear, of the Bulletin of the American Meteorological Society. More information can be found here.

Once again, the nomads appear to have de-camped. Whenever anyone criticizes one of their papers, they've moved on. Never a place to lay one's head even for one night. In this case, the nomads must have moved on just before Bürger and Cubasch arrived. Rutherford, Mann et al. was not published 10 years ago, but in mid-2005. How would anyone know that the nomads had already decamped? I might mention here that, in early 2005, I requested details from Journal of Climate on the RegEM method described here; they refused to provide them prior to final publication, which seems to have occurred in mid-summer. As I mentioned before, I'm blocked from Rutherford's website, but I have been provided with its contents, which I haven't yet analyzed in detail.

First, as we've become used to with respect to our articles, no matter how many problems are itemized in a critique, the Hockey Team always picks one point, which they label the "primary criticism", and only deals with that. Here Mann argues that the "primary criticism" of Bürger and Cubasch is that "simple EOF-based approaches…lack regularization or an explicit model of the covariance structure of the data" – defects which Mann seems to admit for MBH, but not for the "regularized expectation-maximization" methodology of Rutherford et al [2005]. I'll return to this supposed "primary criticism" after giving a little exposition of the actual Bürger and Cubasch claims, as it doesn't seem to me that the criticism that Mann chooses to mention is "primary" in Bürger and Cubasch. So let's look at what Bürger and Cubasch actually said (the paper is posted up here.)

The main empirical work carried out by Bürger and Cubasch is to analyze the cumulative impact of 6 seemingly innocuous methodological decisions, both in the context of their emulation of MBH98 and in climate simulations (discussed primarily in a companion article which I have not yet seen). They pose the problem as follows:

For instance, assertions made by MBH98 and later about certain steps (such as rescaling) being "insensitive" to the method were hard to quantify and thus of little help. Bürger et al. [2005] showed that the method is, on the contrary, highly sensitive to the variation of 5 independent standard criteria (as we call the steps here), resulting in an entire spectrum of possible climate histories. Those experiments were conducted in the synthetic world of a climate model, with noise-disturbed temperature grid points serving as pseudo-proxies, and it turned out that the amplitude of the reconstructions ranged between about 20% and 100% of the true (simulated) millennial history. Whether or not these results extend to the real-world case, i.e. whether or not the MBH98 and related approaches are robust, including the predictor selection issues as argued by McIntyre and McKitrick [2005a], is the subject of the current study.

It’s nice to be cited in such an interesting article. The "independent standard criteria" – 5 in their simulations and 6 in their MBH emulation – are all seemingly innocuous methodological choices, which yield 2^6 different reconstructions. They point out that:

No a priori, purely theoretical argument allows us to select one out of the 64 as being the "true" reconstruction.

and argue that "if it [the MBH98 reconstruction] is robust certain refinements such as rescaling should not affect the essence of the final result."

Music to our ears. The 6 binary choices pertain to: detrending; use of PCR in the regression step; using global or temperature PCs as a target; inverse or direct regression; re-scaling; centering of tree ring PCs. Here's a quick synopsis, edited slightly:

The following 6 criteria were considered, all belonging to the standard toolbox of empirical climatology. The model nomenclature is binary (1/0).

TRD – Trended (1) or detrended (0) data in the calibration period.

PCR – Before estimating the regression model, the proxy predictors undergo a PC transformation (PC regression). PCR – 1; no PCR – 0. SM note: this is different from PC analysis applied to tree ring networks or to temperature gridcells. It would be a further PC step once the network of 22-112 proxies is assembled.

GLB – One can use either the single predictand NHT (1) or, alternatively, a set of leading principal components (0) so that spatial detail is simulated as well. But note that like MBH98 we use just one PC. (SM note – this presumably refers to the AD1400 step in controversy (or the AD1000 step in MBH99), which is the step with only one PC.)

INV – Direct (0) or inverse (1) regression. "Direct regression is the kind of regression that is normally applied, here, as a regression of the instrumental temperature fields (predictand) on the proxies (predictor). Inverse regression goes vice versa, first, by regressing the proxies on temperature and, second, by finding for a given proxy the temperature field with the closest (in a least squares sense) image to the proxy under the regression map. This is the same as inverting the regression map using the pseudo inverse. It is noteworthy that the simulated amplitudes of a multiple direct regression are scaled by the canonical correlations between predictor and predictand field, while the inverse form is scaled by the inverse of those correlations [see Bürger et al., 2005]."

RSC – Rescaling (1) or not re-scaled (0). "To match simulated and original variability, rescaling of the predictand is sometimes applied with scaling factors taken from the calibration period. This ensures adequate variability at least for that period, but introduces uncontrollable results if that domain is left. RSC is frequently encountered in statistical downscaling under the name inflation [cf. Karl et al., 1990]. Note that if either one of INV and RSC is applied the simulated amplitude is increased relative to observations; this is in conflict with the damping arguments given in [von Storch et al., 2004]. We have not found any reference regarding the effect of rescaling on model uncertainty."

CNT – Tree ring PCs centered (1) or uncentered (0). "The MBH98 choice of calculating the PCs of some proxy clusters from anomalies of the 20th century climate has been criticized for reducing off-calibration amplitudes and favoring hockey stick shaped results [cf. McIntyre and McKitrick, 2005a, 2005b]. Under the CNT criterion those PCs are determined from the full period to temper the impact of a strong positive 20th-century trend. We applied Preisendorfer's rule N for selecting the PCs." SM note – it would be a little more precise to show 0 – uncentered; 1 – correlation; 2 – covariance. I'm not sure which they used (although it doesn't matter much in terms of the argument.)
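The INV criterion is easiest to see in one dimension. Here is a minimal sketch on synthetic data (my own construction, one "proxy" and one "temperature" series, not B&C's multivariate setup): direct regression damps the reconstructed amplitude by the correlation r, while inverse regression inflates it by 1/r, just as the quoted passage says.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = rng.normal(size=n)               # "temperature" over the calibration period
p = 0.6 * t + rng.normal(size=n)     # noisy "proxy" responding to temperature

# Direct regression: temperature on proxy.
b_dir = np.cov(t, p)[0, 1] / np.var(p, ddof=1)
t_dir = b_dir * p

# Inverse regression: proxy on temperature, then invert the fitted map
# (the 1-D analogue of the pseudo-inverse described above).
b_inv = np.cov(t, p)[0, 1] / np.var(t, ddof=1)
t_inv = p / b_inv

r = np.corrcoef(t, p)[0, 1]
print(np.std(t_dir) / np.std(t))     # ~ |r|: direct regression damps amplitude
print(np.std(t_inv) / np.std(t))     # ~ 1/|r|: inverse regression inflates it
```

The true amplitude sits between the two estimates, which is one way of seeing how supposedly innocuous choices can span such a wide range of reconstructions.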

They summarize the situation as follows:

Note that each single criterion is a priori sound, with numerous applications elsewhere, and can hardly be dismissed purely on theoretical grounds. Note further that all of the above criteria are independent, mutually consistent and can thus arbitrarily be mixed, so that any combination thereof defines one of 2^6 = 64 reasonable "flavors" of the regression model. Following Table 1 we identify a flavor using a binary code of length 6, indicating whether any of the 6 criteria is valid or not. For example, 100110 refers to an inverse regression with rescaling, trend, and spatially explicit predictands, and without using PCR; this is the variant used by MBH98, and we denote it by MBH.

I disagree with their acquiescence in the MBH uncentered principal components method as "being a priori sound, with numerous applications elsewhere", but this is not central. In light of other disputes, one could probably identify 3 PC alternatives, with both covariance and correlation PCs. In the above binary nomenclature, the case that we illustrated in our E&E article would be a further variation of 100111. Since the handling of the Gaspé series is a 7th choice and has a material impact, this would increase the population of reconstructions to 3 × 2^6.
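The combinatorics here are trivial but worth making concrete. A sketch of the flavor bookkeeping in Bürger and Cubasch's binary nomenclature (the bit-to-criterion assignment follows their Table 1, which I take on faith):

```python
from itertools import product

criteria = ["TRD", "PCR", "GLB", "INV", "RSC", "CNT"]

# Every on/off combination of the 6 criteria is one "flavor" of the
# reconstruction: 2^6 = 64 variants, each named by a 6-bit code.
flavors = ["".join(str(b) for b in bits)
           for bits in product((0, 1), repeat=len(criteria))]

mbh = "100110"   # the MBH98 variant, per Buerger and Cubasch
print(len(flavors), mbh in flavors)

# Replacing the binary centering choice with 3 alternatives (uncentered /
# covariance / correlation) and adding Gaspe handling as a further binary
# choice gives 3 * 2^6 = 192 variants, as suggested above.
print(3 * 2 ** 6)
```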

Their first empirical finding is that the spread of reconstructions is immense. Bürger and Cubasch:

Figure 1 shows the 64 variants of reconstructed millennial NHT as simulated by the regression flavors. Their spread about MBH is immense, especially around the years 1450, 1650, and 1850. No a priori, purely theoretical argument allows us to select one out of the 64 as being the "true" reconstruction. One would therefore check the calibration performance, e.g. in terms of the reduction of error (RE) statistic. [SM note – the "calibration RE" statistic is really just the "calibration r2" statistic; when I use the "RE" statistic, I'm nearly always referring to the "verification RE" statistic. Watch what he's doing here – he's using the calibration statistic to pick the model and the verification statistic to check what he calls "over-fitting", what I'd call "spurious regression".] But even when confined to variants better than MBH a remarkable spread remains; the best variant, with an RE of 79% (101001; see supplementary material ftp://ftp.agu.org/apend/gl/2005GL024155), is, strangely, the variant that most strongly deviates from MBH.

Figure 1. 2^6 = 64 variants of millennial NH temperature, distinguished by smaller (light grey) and larger (dark grey) calibration RE than the MBH98 analogue (MBH, black). Instrumental observations are dashed. All curves are smoothed using a 30y filter.
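My bracketed note about calibration RE versus verification RE can be made concrete. A toy sketch on synthetic data (my construction; in the MBH context the split would be the 1902-1980 calibration and 1854-1901 verification periods): for an ordinary least-squares fit, the calibration RE is identically the calibration r², so only the verification RE carries independent information.

```python
import numpy as np

def re_stat(obs, est, ref_mean):
    # Reduction of error: skill relative to simply predicting the
    # calibration-period mean everywhere.
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - ref_mean) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=120)             # "proxy"
y = 0.5 * x + rng.normal(size=120)   # "temperature"

cal, ver = slice(0, 80), slice(80, 120)   # calibration / verification split
coef = np.polyfit(x[cal], y[cal], 1)      # fit ONLY on the calibration period
yhat = np.polyval(coef, x)
ref = y[cal].mean()

re_cal = re_stat(y[cal], yhat[cal], ref)
re_ver = re_stat(y[ver], yhat[ver], ref)
r2_cal = np.corrcoef(y[cal], yhat[cal])[0, 1] ** 2

print(re_cal, r2_cal)   # identical for an OLS fit with intercept
print(re_ver)           # the only out-of-sample check of skill
```

Selecting a model on re_cal and then merely "checking" re_ver, as B&C note, effectively extends the calibration over the verification period.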

I draw attention to the much better statistical practice used by Bürger and Cubasch than by the Hockey Team in carefully distinguishing calibration and verification. They state the following:

It may be important to stress the following: On the basis of the validation RE one might be tempted to prefer the (most simple) variants 100000 or 101000, or also MBH, to the others. But that statistic must not be used to select a model; it can only serve as a check of a model, e.g. for overfitting, after it has been selected. To do otherwise amounts to extend the calibration over to the validation period. In that case, i.e. using the calibration 1854–1980, the simulations look remarkably different (not shown).

It would be very interesting to see the results for the 1854-1980 calibration period and it's too bad that they are not shown. I'll see if I can get them. I sent Bürger my reconstruction code early last year, so he should be receptive to the request.

Bürger and Cubasch go on to show a large spread in their simulation results as well, reported as follows for the AD1600 network – a network for which MBH claimed much closer confidence intervals. Note that the figure below is for only 32 cases and only uses uncentered PCA:

We have analyzed the influence of each of the criteria on the overall behavior of the simulation…. We have nevertheless conducted the same experiments under the setting of the AD 1600 step where more proxies (57) are available. The variations are comparable to those seen in Figure 1. The spread is particularly large in the earliest part of the simulations, especially among those with a calibration RE higher than MBH (cf. SM). But they have a negative validation RE, which indicates overfitting.

Supplementary Figure 2. 1600.eps. The 32 variants from combining criteria 1-5 (grey, with CNT=0), distinguished by worse (light grey) or better (dark grey) performance than the MBH98-analogue MBH (10011, black). Note the remarkable spread in the early 16th and late 19th century. ftp://ftp.agu.org/apend/gl/2005GL024155/2005GL024155-fs02.jpg

Bürger and Cubasch attempt to explain this remarkable lack of robustness by noting that proxy values are being extrapolated well outside of the range for the proxies in the calibration period, sometimes far outside the calibration range. They say:

But as Fritts [1976, p. 15] continues: "In order to make this kind of inference, however, it is important that the entire range of variability in climate that occurred in the past is included in the present-day sampling of environment." This is, in fact, the basic condition of statistical regression – but only one half of it. The other half applies to the tree ring variations: They also must lie in a range that is dictated by the calibrating sample. This, however, is not the case here. For almost all of the 24 proxies, the range of the millennial variation is considerably larger than the sampled one, with numerous cases of proxies exceeding 7 and more calibration standard deviations (cf. SM). As a consequence, the regression model is extrapolated beyond the domain for which it was defined and where the error is limited.
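The "exceeding 7 calibration standard deviations" point is mechanical to check for any archived proxy series. A toy sketch on synthetic data (the array `full` is a stand-in for a real millennial proxy series):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy millennial "proxy": larger swings in the early record, mild
# variability in the final 100-"year" calibration window.
full = np.concatenate([rng.normal(0.0, 3.0, 900), rng.normal(0.0, 1.0, 100)])
cal = full[-100:]

# Standardize the whole record by the calibration mean and SD; any value
# beyond the calibration extremes forces the regression to extrapolate.
z = (full - cal.mean()) / cal.std(ddof=1)
z_cal_max = np.abs((cal - cal.mean()) / cal.std(ddof=1)).max()
out_of_range = np.abs(z) > z_cal_max

print(out_of_range.sum(), "of", full.size, "values outside the calibration range")
print(np.abs(z).max(), "max excursion in calibration SDs")
```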

Bürger and Cubasch conclude with some very strong caveats about fitting models outside their calibration range as follows:

Any robust, regression-based method of deriving past climatic variations from proxies is therefore inherently trapped by variations seen at the training stage, that is, in the instrumental period. The more one leaves that scale and the farther the estimated regression laws are extrapolated the less robust the method is. The described error growth is particularly critical for parameter-intensive, multi-proxy climate field reconstructions of the MBH98 type. Here, for example, colinearity and overfitting induce considerable error already in the estimation phase. To salvage such methods, two things are required: First, a sound mathematical derivation of the model error and, second, perhaps more sophisticated regularization schemes that can keep this error small. This might help to select the best among the 64, and certainly many more possible variants. In view of the relatively short verifiable period not much room is left.

So after this devastating critique of MBH98-type reconstructions (entirely in the spirit of our articles), Mann concluded that the "primary criticism" of Bürger and Cubasch was that "EOF-based approaches … lack regularization or an explicit model of the error covariance structure of the data", and, conveniently, claims that Rutherford, Mann et al., which is hot off the press, somehow avoids these seemingly intractable problems. (BTW I was unable to locate any use of the term "explicit model of the error covariance structure of the data" or any apparent synonym. As so often, the Hockey Team does not quote the critic, but re-writes the claim into one that is perhaps easier to respond to, regardless of the original criticism.)

As I mentioned before, I've not gone through Rutherford, Mann et al. to see if they've avoided the Bürger and Cubasch criticisms – it would amaze me if they did. For example, Rutherford, Mann et al. astonishingly use the original MBH98 tree ring principal components series in their primary analyses. In fact, I doubt that Rutherford, Mann et al. actually avoid any of the Bürger and Cubasch critiques, but, hey, it's in Journal of Climate. Andrew Weaver edited it, so what further proof would anyone require?


  1. Mike Rankin
    Posted Jan 26, 2006 at 3:24 PM | Permalink

    The link to the Bürger and Cubasch paper did not work for me.

  2. hans kelp
    Posted Jan 26, 2006 at 4:29 PM | Permalink

The link to the Burger and Cubasch paper did not work for me either.

  3. Jack
    Posted Jan 26, 2006 at 4:41 PM | Permalink

    Hey, I’m the same Jack. RC invited me directly to repost my reply (it failed their filter the first time), but I declined because the essential content was covered in their reply at DailyKos. (I didn’t want to force parallel blogging.) I think that the issue has gained enough traction here; thanks for the in-depth treatment. I ascertain that this promises to be a very interesting year for this particular subheading of the climate change topic.

  4. Hans Erren
    Posted Jan 26, 2006 at 5:13 PM | Permalink

    Hi Jack,

    I noticed DailyKos has a 24 hour embargo before you are allowed to post after registering. Keeps the spammers out.

    I noticed the little graph with the climate model: the righthand bottom corner says 2400(!)

    Some climate models predict a dramatic warming trend over the next few hundred years as a result of increased greenhouse gases. This phenomena could lead to more powerful tropical storms.

    Talk about alarmism!

  5. Tony Knudson
    Posted Jan 26, 2006 at 5:41 PM | Permalink

Extrapolating outside of your data range is fraught with problems. Extrapolating 400 years is ridiculous…and worthless.

  6. John A
    Posted Jan 26, 2006 at 5:43 PM | Permalink

    Burger and Cubasch: Would have been a useful contribution to the literature about 10 years ago, when Mann et al (“MBH98”) and other groups were using simple EOF-based approaches.

I love this. "It may only have been published recently but it's already 10 years behind the new bold approaches that we are now using".

I don't know if you've ever seen a 1980s British comedy series called "Yes Minister", which was about senior civil servants conniving to undermine their political masters' wishes, usually by initially praising them: "That's an extremely brave policy, Minister… but have you thought about the consequences…".

    One of the quotes from the senior civil servant stuck particularly in my mind regarding critical reports or inquiries into some failure by the Government and delivered to Parliament by experts. The advice was never to attack the author or the contents but to thank them for their contribution, and then “put out a press release to the effect that the critical report is out-of-date and its conclusions have already been incorporated into new guidelines which have been operating for some time…”

Thus the critical work has already been outflanked, and nobody, the Press included, would wish to flog a dead horse of an issue. Then the advice would be to announce some bland new initiative based on these supposed new guidelines which would deal with the same problem in the future. The Press trumpets a forward-looking politician with a bold new agenda, and all will be well.

Get the idea? It's Roger Pielke's suggestion that the "science has moved on from the Hockey Stick". Treat all references to the Hockey Stick with fake credulity about how this one little study should have been noticed by so many people, and it's really no one's fault and only small-minded obsessives would go on about it after all this time. Then a few yawns and audible sighs later, the interviewer asks "so what new bold exciting papers are you about to publish?"

    And all is well in “climate science world”

  7. Steve McIntyre
    Posted Jan 26, 2006 at 6:57 PM | Permalink

    John A, I did watch many episodes of Yes, Minister and found it very amusing. It looks like Sir Humphrey is coaching the Hockey Team.

  8. Dave Dardinger
    Posted Jan 26, 2006 at 10:13 PM | Permalink


You need to be careful about beginning a message "Hi, Jack!" You're liable to find yourself on an NSA list. Especially with other words like "Dramatic" and "Powerful" in the message.

  9. Louis Hissink
    Posted Jan 26, 2006 at 11:33 PM | Permalink

They say African gorillas never camp in the same spot twice either, but that is for an entirely different reason – it's their own mess they avoid each following day.

  10. James Lane
    Posted Jan 26, 2006 at 11:57 PM | Permalink

    That’s an excellent presentation of Bürger and Cubasch. Very clear. Mann’s response is pitifully inadequate.

  11. Larry Huldén
    Posted Jan 27, 2006 at 1:35 AM | Permalink

    RE #9
I think Louis Hissink's comment on gorillas can be applied to the Hockey Team.

  12. Peter Hearnden
    Posted Jan 27, 2006 at 3:03 AM | Permalink

    ‘Jack’. I wonder who he is…

  13. John A
    Posted Jan 27, 2006 at 4:08 AM | Permalink

    Re: #12

Well Peter, maybe it's because you clearly don't know Jack…

  14. Peter Hearnden
    Posted Jan 27, 2006 at 5:43 AM | Permalink

    Re #13, so, you know Jack? Or don’t you know Jack either? Me, I’ve never met him.

  15. John A
    Posted Jan 27, 2006 at 6:32 AM | Permalink


    Why don’t you write to the owner of the Daily Kos for a similar interview to RealClimate?

  16. Brad H
    Posted Jan 27, 2006 at 6:53 AM | Permalink

    Speaking of Yes, Minister (one of my all time favourite shows), here are some further, pertinent quotes:-

    “If people don’t know what you’re doing, they don’t know what you’re doing wrong.”

    “Almost anything can be attacked as a failure, but almost anything can be defended as not a significant failure.”

    “No one really understands the true nature of fawning servility until he sees an academic who has glimpsed the prospect of money or personal publicity.”

“The surprising thing about academics is not that they have their price, but how low that price is.”

    “It is axiomatic in government that hornets’ nests should be left unstirred, cans of worms should remain unopened, and cats should be left firmly in bags and not set among the pigeons. Ministers should also leave boats unrocked, nettles ungrasped, refrain from taking bulls by the horns, and resolutely turn their backs to the music.”

    I could go on and on, as virtually every line was a pearl. Those who missed it have missed a modern classic.

  17. John A
    Posted Jan 27, 2006 at 9:55 AM | Permalink


    Indeed it is a classic. That show taught more about the reality of governance than many a political science course.

I'm pretty sure that there must be a line about making sure that the public and the press were all in a tizzy about some inconsequential thing, so as to take attention away from the things of great importance that they should be worried about (or it may have been Machiavelli).

  18. nanny_govt_sucks
    Posted Jan 27, 2006 at 1:09 PM | Permalink

    “It is axiomatic in government that hornets’ nests should be left unstirred, cans of worms should remain unopened, and cats should be left firmly in bags and not set among the pigeons. Ministers should also leave boats unrocked, nettles ungrasped, refrain from taking bulls by the horns, and resolutely turn their backs to the music.”

    Excellent quote. Sounds quite libertarian, along the lines of Thomas Paine’s “That government is best which governs least”.

  19. Steve Bloom
    Posted Jan 27, 2006 at 6:03 PM | Permalink

    Actually this blog reminds me of Fawlty Towers, with John A. as Basil.

  20. Hans Erren
    Posted Jan 27, 2006 at 6:26 PM | Permalink

    Don’t mention the war?

  21. ET SidViscous
    Posted Jan 27, 2006 at 9:04 PM | Permalink

    Does that make Tim Little Basil?

  22. Steve McIntyre
    Posted Jan 27, 2006 at 11:25 PM | Permalink

    there’s a good question at Kos about regEM – the sort of question that Gavin would censor at RC.

  23. Dave Dardinger
    Posted Jan 27, 2006 at 11:59 PM | Permalink

    re#20 It was a joke, Hans (assuming your remark was to me). If you didn’t get it, it’s not worth explaining. An example of dry humor from the dry Southwest where we’re about to set a record for the longest period without measurable rain since they started keeping records.

  24. ET SidViscous
    Posted Jan 28, 2006 at 1:03 AM | Permalink

Yes, and Hans' was a joke referring to Farty Towels. Nothing to do with you. One of the more famous quotes from the show.

  25. Ed Snack
    Posted Jan 28, 2006 at 1:18 AM | Permalink

And Steve Bloom is a dead ringer for Manuel, or maybe Basil the "Siberian hamster"? ¿Qué, Steve?

  26. James Lane
    Posted Jan 28, 2006 at 1:54 AM | Permalink

    Or over at RC, “don’t mention the bristlecones”!

  27. John A
    Posted Jan 28, 2006 at 3:12 AM | Permalink

    Re #19

    That’s the problem with alarmists: they just don’t have a sense of humor.

  28. Dave Dardinger
    Posted Jan 28, 2006 at 7:52 AM | Permalink

    Re #24 Ah, I see. Unfortunately I’ve never seen the show. Though I do like British humor. We were just watching an early Jeeves & Wooster last night; though it wasn’t as funny as some of the later ones. They apparently hadn’t hit their stride that first season.

  29. JerryB
    Posted Jan 28, 2006 at 9:50 AM | Permalink

    It seems that withstanding “the test of time” is not to be expected of some people’s paleoclimate studies. 🙂

  30. Ken Robinson
    Posted Jan 28, 2006 at 10:00 AM | Permalink

This thread (actually, the entire issue of climate change) reminds me of a classic Monty Python skit about a man seeking a debate. Paraphrasing slightly:

    “Hello. Is this the right room for an argument?”
    “I told you once.”
    “No you didn’t.”
    “Yes I did.”
    “Just now.”
    “No you didn’t.”
    “Yes I did.”
    “Oh look this isn’t an argument, it’s just contradiction.”
    “No it isn’t.”
    “Yes it is! An argument is a connected series of statements intended to establish a proposition whereas contradiction is just the automatic gainsaying of whatever the other person said.”
    “No it isn’t.”
    “Oh, I’ve had enough of this.”
    “No you haven’t.”

    Sound familiar to anyone?


  31. ET SidViscous
    Posted Jan 28, 2006 at 11:58 AM | Permalink

    I still prefer the dead parrot sketch.


    All I did was take the original sketch and do a universal replace with McIntyre, Mann and hockeystick. Very funny coincidences.

  32. Terry
    Posted Jan 28, 2006 at 3:33 PM | Permalink

    Mann says:

Burger and Cubasch: Would have been a useful contribution to the literature about 10 years ago, when Mann et al ("MBH98") and other groups were using simple EOF-based approaches.

    Isn’t this a tacit admission that MBH98 was fundamentally flawed? When you have something that is true, you don’t “move on” from it, you build on it and extend it. To my ears, he is basically saying, “yes, MBH98 proved nothing, but today (trust me) we have it right.”

  33. john lichtenstein
    Posted Jan 28, 2006 at 9:22 PM | Permalink

    Yes Minister and the also good (but not as good) Yes Prime Minister were released on DVD in 2000. Let’s see if my karma survives:

  34. Michael Jankowski
    Posted Jan 30, 2006 at 7:55 AM | Permalink

    10 yrs ago? Is it 2008 in Mann’s world? I would like to ask him who wins the Super Bowl, KY Derby, etc, in 2006 and 2007.

  35. Steve McIntyre
    Posted Jan 30, 2006 at 8:35 AM | Permalink

    #34 – it would be worth checking what the Jones and Mann [2004] survey said about RegEM – I don’t recall offhand, but I’d be surprised if they explicitly reported that the nomads had moved on. I’ve looked a little more at RegEM. I don’t see why it would be a magic bullet; it’s just another multivariate method. It seems inherently perilous for climate proxy users to be relying on statistical methods which are not used by the general run of applied statisticians. RegEM may be a dandy method – but it’s a little alarming when it’s hard to show non-Hockey Team and non-climate applications.

There's a considerable amount of overhead involved in replicating their method – much of it is just figuring out how it works. Commendably, Rutherford has archived code and data (but not the Briffa MXD data), so replication should be possible. His blocking me has delayed my assessment, as I didn't know that the methods had been finally archived, but now that I'm in possession of the SI, I'll get to it some time. So much to do.

Impressionistically, my guess is that the RegEM reconstruction is simply one more linear weighting of the proxies (or very close to linear). It will undoubtedly have the same properties as MBH98 with respect to bristlecones (notably not discussed by Rutherford, Mann et al) and with respect to cross-validation R2 statistics (notably not reported). The cross-validation CE statistics (which give similar failed results as R2) are included in the SI, but NOT mentioned or discussed in the article itself. Pretty cute – they can say that they disclosed the adverse CE results in the SI as a line of retreat. Oddly, Mann, Rutherford, Ammann and Wahl [J Clim 2005] report cross-validation RE, R2 and CE in connection with model results – where you don't get the R2 failures in a model world.
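If the RegEM output really is (close to) a linear weighting of the proxies, that's testable without dissecting the algorithm: perturb one proxy at a time and read off the effective weights. A sketch, with `reconstruct` as a hypothetical stand-in for whatever archived reconstruction routine one is probing (here it is linear by construction, so the probe recovers the weights exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
proxies = rng.normal(size=(50, 5))              # 50 "years" x 5 proxies
w_true = np.array([0.5, 0.2, -0.1, 0.3, 0.0])   # hidden linear weights

def reconstruct(P):
    # Hypothetical stand-in for a black-box reconstruction method.
    return P @ w_true

base = reconstruct(proxies)
weights = np.empty(proxies.shape[1])
for j in range(proxies.shape[1]):
    bumped = proxies.copy()
    bumped[:, j] += 1.0                         # unit bump on proxy j alone
    weights[j] = (reconstruct(bumped) - base).mean()

print(weights)   # equals w_true when the method is linear in the proxies
```

For a genuinely linear method the recovered weights are exact; for RegEM, the size of the departures from linearity would measure how different it really is from one more weighted average of proxies.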

  36. Michael Jankowski
    Posted Jan 30, 2006 at 9:08 AM | Permalink

    Maybe I’m beating a dead horse, but what about Mann and Jones (2003b) http://www.ncdc.noaa.gov/paleo/pubs/mann2003b/mann2003b.html … RegEM, or is that a publication that, “Would have been a useful contribution to the literature about 10 8 years ago?”

  37. fFreddy
    Posted Apr 30, 2006 at 3:04 AM | Permalink

    Like the early posters, I can’t get the original article from the link to “http://data.climateaudit.org/pdf/2005.burger.pdf”.
    Is it available ?

  38. John A
    Posted Apr 30, 2006 at 3:18 AM | Permalink

    Like the early posters, I can’t get the original article from the link to “http://data.climateaudit.org/pdf/2005.burger.pdf”.
    Is it available ?

    It is now that I’ve put the correct extension on the file…

  39. fFreddy
    Posted Apr 30, 2006 at 3:40 AM | Permalink

    Thanks John.

  40. bender
    Posted Aug 7, 2006 at 7:09 AM | Permalink

    On the propagation of error in inferential dendroclimatology

    Suppose dendroclimatology can be represented as an inferential chain of reasoning:

    1. A=>B estimation of mean chronology
    2. B=>C response function calibration
    3. C=>D verification of calibration
    4. D=>E reconstruction / extrapolation

Given that there is uncertainty, ε, in each of the four inferential steps, what is the overall probability that A=>E? The uncertainty in each step is substantial, and the cumulative error is reflected in the product: ε1*ε2*ε3*ε4. Even if the probability that each inference is correct is 0.6, which is a very generous assessment in the case of dendroclimatology, the certainty associated with the whole chain of reasoning is ~0.13.
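The ~0.13 figure is just the product of the four link probabilities, assuming the errors at each step are independent:

```python
# Certainty of a 4-link inferential chain A=>E when each link
# holds with probability 0.6 and the errors are independent.
p_link = 0.6
p_chain = p_link ** 4
print(round(p_chain, 4))   # 0.1296
```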

    Why do dendroclimatologists routinely ignore the serious problem of error propagation in their work?

    Imagine what the graph above would look like if they included envelopes of uncertainty around each of the strands of spaghetti. A big fat band of gray.

This is probably why the MWP is always missing from these “reconstructions” – it’s probably not reconstructable (search for “Fritts” above). In which case the only way to get it back is to force them to publish confidence envelopes.

  41. bender
    Posted Aug 7, 2006 at 7:52 AM | Permalink

    Re #13 LOL
    Peter *still* doesn’t know Jack.

  42. UC
    Posted Aug 9, 2006 at 1:21 PM | Permalink

    # 40

I did some simulations of reconstructions with uncertainty envelopes and got very interesting results, without even considering the temperature–proxy relation. Small sample size, unknown dynamic model: we don't even need to go to proxies before we break the error envelopes. Not necessarily related to MBH studies, :), but worth a look I think. Link here

  43. Posted Aug 9, 2006 at 7:04 PM | Permalink

#42 Wow, how can the nred method be so far off? Did you check everything twice :0

  44. UC
    Posted Aug 10, 2006 at 11:32 AM | Permalink

    43: nred goes off if the signal is not smooth. Didn’t check twice, but you can check this example code in http://www.climateaudit.org/?p=682#comments , # 82.
