Preisendorfer on Centering "Conventions"

In our GRL article, we pointed out that MBH98 had misrepresented their principal components methodology as being "conventional", when it wasn’t.

At realclimate, they argued that it was an alternative centering "convention".

Since they elsewhere rely on Preisendorfer, it’s interesting to see what Preisendorfer has to say about centering.

On page 26 of his opus Principal Component Analysis in Meteorology and Oceanography, Preisendorfer says:

t-centering the data set

The first step in the PCA of [data set] Z is to center the values z[t,x] on their averages over the t series… Using these t-centered values z[t,x], we form a new n x p matrix.

Preisendorfer adds:

If Z in (2.56) is not rendered into t-centered form, then the result is analogous to non-centered covariance matrices and is denoted by S’. The statistical, physical and geometric properties of S’ and S [the covariance matrix] are quite distinct. PCA, by definition, works with variances i.e. squared anomalies about a mean.

According to Preisendorfer, centering is not a "convention"; it is integral to PCA.
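
To make the distinction concrete, here is a minimal sketch in Python/numpy – illustrative only, with random data, not Preisendorfer's notation or anyone's actual code – of the t-centered covariance matrix S versus the uncentered analogue S′:

```python
# Minimal sketch: t-centered covariance S versus the uncentered analogue S'.
# The data matrix Z is n x p (n times, p sites); all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(loc=5.0, scale=1.0, size=(200, 10))   # nonzero means on purpose

# t-centering: subtract each column's average over the t (time) series
Z_centered = Z - Z.mean(axis=0)

S = Z_centered.T @ Z_centered / (Z.shape[0] - 1)     # covariance matrix S
S_prime = Z.T @ Z / (Z.shape[0] - 1)                 # uncentered matrix S'

eig_S = np.linalg.eigvalsh(S)[::-1]        # eigenvalues, largest first
eig_Sp = np.linalg.eigvalsh(S_prime)[::-1]
print("leading eigenvalue of S :", round(eig_S[0], 2))
print("leading eigenvalue of S':", round(eig_Sp[0], 2))
```

With uncentered data, the leading eigenvector of S′ lines up with the column means rather than with the variance about those means, which is why Preisendorfer treats the statistical and geometric properties of S and S′ as quite distinct.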

I doubt that many paleoclimatologists have read Preisendorfer’s book. It’s not easy. I don’t know his background, but, from reading this book, Preisendorfer seems like a very capable mathematician who happened to end up in meteorology. For example, he has a section on abstract PCA and is attentive to dual spaces, a fundamental concept in abstract linear algebra, but not one that you see a whole lot in paleoclimate. The book has lots of interesting digressions – for example, there’s one on the distribution of eigenvalues in random matrices that leads to very difficult mathematics. (Persi Diaconis has written on these problems and has a survey which is accessible on the internet.)

27 Comments

  1. JerryB
    Posted Aug 1, 2005 at 7:33 AM | Permalink

    Steve,

    You referred to page 26 of some document, but left out the name of that document.

    This comment becomes obsolete once you have read it. 🙂

  2. Posted Aug 1, 2005 at 7:29 PM | Permalink

    Steve, I found this interesting paragraph in a Toledo Blade editorial (shortened URL: http://toledoblade.notlong.com).

    “Granted, Mr. Mann admits the program has a bias toward generating hockey stick-shaped graphs even when none exists in the data. But he insists his conclusions are still correct. And like other climate change research, his data and methodology have never been fully scrutinized.”

    This is news to me. Does anybody know if it’s been documented elsewhere? Has it been acknowledged at realclimate.org?

    Pete

  3. JerryB
    Posted Aug 1, 2005 at 7:39 PM | Permalink

    US Congressman Joe Barton expressed some views at:

    http://www.dallasnews.com/sharedcontent/dws/dn/opinion/viewpoints/stories/080105dnedibarton.7b0a5fb.html

    Need I mention that he did not comment on Preisendorfer’s views on the virtues of centering.

  4. TCO
    Posted Aug 12, 2005 at 8:34 AM | Permalink

    If Mann gets the hockey stick with a normal centering mechanism as well as with the off-center centering, why doesn’t he publish with that? And if there is a difference, then to the degree that there is, it invalidates his rebuttal, no?

    For Steve: have you tried doing a “normal centered PCA” of random noise, and does that generate hockey sticks as well, or only the strangely centered one?

    Regarding the number of PCs (I am thinking of them as coefficients in a regression): take the 4th one (with his expanded number of PCs in normal centering). Well, if it only shows 8% of the effect, then when you graph it do you reduce the hockey-stickishness to 8% of the value? Sorry, I’m not being precise in my question.

  5. Steve McIntyre
    Posted Aug 12, 2005 at 8:49 AM | Permalink

    First, Mann’s method is not a principal components method as defined by Preisendorfer (see the quote above). The PC methodology issue is intertwined with the bristlecones. Under his method, the bristlecone growth (which has a hockey stick shape) appears to be the “dominant component of variance” rather than a minor and possibly contaminated effect. There’s a second stage of data mining in his regression module, which we’ve not discussed much, that then picks out the NOAMER PC1 and Gaspe series for over-emphasis.

    I don’t assert that “normal” PC is a correct way of processing the dataset. In my opinion, the NOAMER tree ring dataset is a dog’s breakfast and no unsupervised algorithm can be relied on to produce a meaningful temperature proxy. But you certainly shouldn’t let loose a biased data-mining algorithm into this dog’s breakfast. If you do a calculation with a usual PC algorithm and include the NOAMER PC4, you get a hockey stick (which we point out in our EE article). Any combination with the PC4 yields a hockey stick temperature reconstruction; any reconstruction without the PC4 does not. If you exclude the bristlecones, you don’t get a hockey stick even if you use the biased MBH PC method, and certainly not with a proper method.

    The realclimate argument is that not using the bristlecones is “throwing out” data – they try to avoid using the term bristlecones. Well, if the 20th century bristlecone growth rate is contaminated by fertilization, whether from CO2 or Owens Lake or something else, then the data should be excluded. What’s particularly devastating is that they did calculations without bristlecones (the BACKTO_1400-CENSORED file) showing no hockey stick, but did not report these calculations. Not only did they not report these calculations, they claimed that their reconstruction was “robust” to the presence/absence of dendroclimatic indicators altogether. It is incomprehensible to me how they could make that claim, knowing the impact of their CENSORED calculations without bristlecones – or why people don’t jump on this misrepresentation, since their entire defence now relies on getting the bristlecones in through the back door.

    But any reconstruction with the bristlecones has a cross-validation R2 of ~0, which they have withheld. This withholding took place in the original paper and has been carried on by Ammann and Wahl.

  6. TCO
    Posted Aug 12, 2005 at 9:08 AM | Permalink

    1. Is R2 “r-squared”?

    2. Your more serious complaint is the bristlecones then, no? The centering argument is not critical?

    3. Any efforts to add more data to the set so that you would have less bristlecone effect? Surely there is more than one 20th century study published to refer to, no? If you can’t throw out data, you can dilute it, no?

    4. What happens if you just fit the data to a polynomial – not an extreme one, but one with enough twistiness that, if there were a MWP and LIA, they would show up? What does that best fit look like?

    5. Are you more strongly fighting (arguing, trying to correct the record, etc.) to show that the hockey stick’s shaft is too flat? This would move one back towards something more like the 1990 graph. Or to show that the blade is too steep (which would imply less current GW, from whatever cause)? Or to show that no meaningful charts can be made and that you can generate whatever you want (IOW, low statistical significance of the solution)?

  7. TCO
    Posted Aug 12, 2005 at 9:19 AM | Permalink

    Pursuant to 6-3, you seem to have alluded to possible bias in the selection of the substudies for the meta-analysis (the CENSORED discussion). A meaningful review of all the literature and a look at what was left out/in would seem to address this issue. Do MBH give a fair overview of all the proxy studies done in the literature? And is their selection process consistent and reasonable (not biased)? Have you done your own lit search to check this?

    If you are just finding fault with an individual study, couldn’t that be done with all of them (not just the ones that you want to get rid of to eliminate the hockey stick)? And how could one expect random bias (perhaps not, but then that argument should be made in toto for 20th century studies…the specific bristlecone study not being the issue)? Also, how can a meta-analysis of 200+ studies be driven so much by one study? That just feels wrong and at odds with the principles of meta-analysis (which recognizes that you are doing more than combining a bunch of surveys numerically – you are combining things done with different methods, different likelihood of care, exogenous contamination, etc.).

  8. Steve McIntyre
    Posted Aug 12, 2005 at 9:28 AM | Permalink

    1. R2 is r-squared in a univariate case. R2 is used in econometrics; paleoclimatologists generally use “r2”. Mann made a very pretentious comment to the Barton Committee about this.

    2. I’m not trying to run a beauty contest between the issues. There are a lot of issues, and the two strongly interact: the flawed method picks the most flawed proxies. The method is a disaster. If you have 1-2 hockey stick shaped bristlecone series out of over 50, the MBH PC method will pick out the bristlecones as the PC1 or PC2 and they will be Rule N “significant”. So the method is no good. But the bristlecones are also no good. We became aware of them by seeing what the flawed method did. But once you’re aware of them, you can’t just ignore them. MBH99 purported to adjust for bristlecones, but the adjustment was a farce.

    3. There are a few different issues. If you data mine red noise series and pick out hockey stick shaped series, you can get hockey stick shaped results (a rough sketch of this follows at the end of this comment). The way to test the proxies is to see what happened to the series after 1980. If the proxies are any good, you’d expect to see 1990s values off the charts. I haven’t seen any evidence of this. If there was such evidence (e.g. in the Sheep Mountain bristlecones collected by Hughes in 2002), I’m sure we’d have heard about it. The silence is deafening – it’s like mining promoters not reporting drill results: you know that they were duds. We don’t hear about “proxies” any more; now we hear about glacier retreats – what about tree rings?

    4-5. I’m simply saying that Mann can’t claim that the 20th century was unique on his data and his methodology. I think that the other Hockey Team studies don’t prove their point either. You can browse my comments about Briffa and Crowley to get a flavour – google “climateaudit Briffa” or “climateaudit Crowley” and you’ll get a sense of what I posted on these guys a few months ago.
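
    A rough sketch of the red-noise point in item 3 (illustrative parameters and random data only – this is not the MM05 simulation code): persistent red noise run through a decentered PC step, which subtracts only the mean of the final “calibration” segment, tends to yield a PC1 with a hockey stick shape, while conventional full-period centering does not share that bias.

    ```python
    # Sketch only: decentered vs. centered PC1 on AR(1) "red noise".
    # Dimensions and the AR(1) coefficient are illustrative, not from MBH98.
    import numpy as np

    rng = np.random.default_rng(1)
    n_years, n_series, calib = 581, 70, 79    # e.g. 1400-1980 with a 79-year window

    def ar1_noise(n, rho, rng):
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + rng.normal()
        return x

    X = np.column_stack([ar1_noise(n_years, 0.9, rng) for _ in range(n_series)])

    def pc1(data, short_center):
        if short_center:
            centered = data - data[-calib:].mean(axis=0)   # decentered: short-segment mean
        else:
            centered = data - data.mean(axis=0)            # conventional full centering
        u, s, _ = np.linalg.svd(centered, full_matrices=False)
        return u[:, 0] * s[0]                              # leading PC time series

    def stick_index(series):
        # crude index: calibration-period mean minus earlier mean, in earlier-period SDs
        return abs(series[-calib:].mean() - series[:-calib].mean()) / series[:-calib].std()

    print("decentered PC1 stick index:", round(stick_index(pc1(X, True)), 2))
    print("centered PC1 stick index:  ", round(stick_index(pc1(X, False)), 2))
    ```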

  9. John Hekman
    Posted Aug 12, 2005 at 9:36 AM | Permalink

    TCO: what Steve said. Also, look under “Favorite Posts” on the main page. The post entitled “McKitrick: what is the hockey stick debate about” has a link to a non-technical paper Steve did that spells out in more detail what he has said in comment 8 here. Have you read this? Also, there are lots of studies that contradict MBH: ice borehole studies, studies finding a Medieval Warm Period, etc. This debate will not be won by being reasonable about looking at everyone’s results. It seems to be necessary to show in excruciating detail that MBH have not shown what they claim to have shown. Doing alternative studies that find no hockey stick will be brushed off and have no effect.

  10. John Hekman
    Posted Aug 12, 2005 at 9:45 AM | Permalink

    I can’t resist saying more. You know how the radicals in the 60s used to say “the world is not like your high school civics class”? Well, we are seeing that science is also not like the simple paradigm of “good science will win out.” There is no omniscient judge who looks at the science from both sides and says “this one is right; this one is wrong.” Instead, there is a cacophony of voices yelling different results. And there is big-time politics. In this world, the “truth” about MBH will not be accepted until they have finally been cornered into admitting that they cannot get their results without the bristlecones.
    By the way, Steve, their statement is that the results are robust to the presence/absence of “dendroclimatic indicators.” This means leaving out all tree ring data. Maybe they have a series of ice cores or other proxies that generates a hockey stick.

  11. TCO
    Posted Aug 12, 2005 at 9:48 AM | Permalink

    Yes, I did read it. It’s still helpful to process things to understand, wrt the different criticisms: (1) the effect of the issue on the final solution, and (2) the level of certainty that the issue is an error.

    IOW, for stat adjustment A, we are 100% certain it is an error and it has 20% impact on the answer. Or substudy X, we have 50% questions about its proper performance in the field, and it has 90% impact on the answer. Or little kvetch D which has 2% impact on the answer, but is still something to fix.

    I thought you might say the method and the study are confounded. Can you at least say how a “normal” meta-analysis would work out? Normal methods and inclusive selection of studies?

  12. TCO
    Posted Aug 12, 2005 at 9:53 AM | Permalink

    You know, you really ought to think more about the conceptual presentation of these arguments to an intelligent, but not stat-jock, audience (people like me). What are you going to do if you get called to talk to Barton? I hope you won’t make off-hand remarks about ARMA (or the equivalent). AND YES, you need the solid proofs to back it up and should be ready to fight it out with the specialists. But you also need to boil it down and communicate it to an “upper management”* decision-making body.

    *P.s. GW, Canada, mining, finance…I’m starting to see an evil oil company in your past, maybe a little “consulting” now? Not that there’s anything wrong with that, yadayada. 😉

  13. TCO
    Posted Aug 12, 2005 at 9:59 AM | Permalink

    Look…if you go into a discussion with congresscritters or the New York Times or anyone with better use of their time than to read this website, you ARE NOT going to win if “bristlecone conditionality” is your mantra (or if you expect to ever get Mann to repudiate his own work). You’re much better off going for the larger audience and using understandable concepts, like: the compilation of 200 studies works only with the inclusion of one study (or something similarly simple). That doesn’t stop you from fighting it out on another plane as well.

  14. Steve McIntyre
    Posted Aug 12, 2005 at 1:52 PM | Permalink

    TCO, we’ve got different levels of presentation depending on the audience. I know how to present to lay audiences if that’s the audience – that’s what I’ve dealt with all my life in business. In fact, I’ve never made a presentation to an audience that was not a lay audience. I have never, for example, given a lecture or presentation at a university, which is a little surprising given all the attention that this has received and the number of people working at disproving us. As to current post topics, I’m doing some statistical work right now, so I’m posting up some statistical stuff. As to the blog, it’s getting a huge number of hits relative to what I expected – it will pass 500,000 hits this week, having started in mid-Feb 2005 – so it’s finding some sort of audience. This is a pretty idiosyncratic blog. I’m assuming that people will dip into things that interest them and, if something doesn’t interest them, they have no obligation to read it.

  15. TCO
    Posted Aug 12, 2005 at 2:12 PM | Permalink

    It’s OK, man. Just an input. You do a great job. And taking the time for the “so what” can be a pain with how much else you have to do. I’m sure you do better than me. Still…it is devilishly important.

    On the other thing. You really ought to go give some talks, go to a conference, present a paper, have a few brews with the guys. Maybe you can collect some intel to guide your work. Seriously.

  16. TCO
    Posted Aug 12, 2005 at 2:16 PM | Permalink

    Still, even when I wrote for specialists (for publication of experimental studies), I always tried to have a common-sense introduction – something as small as a couple of sentences. Then the reader can decide if he wants to dig into the details.

    Please don’t get mad at me or sic the spambot on me. Have fun drinking with the tree-huggers. 😉

  17. TCO
    Posted Aug 12, 2005 at 2:18 PM | Permalink

    Oh…and you might think about trying to get some play for your blog, or for the popular article that you wrote, from the conservative bloggers (they will be the only ones interested). The more traffic, the better. A lot of those guys are content assemblers. P.S. Don’t send them to the ARIMA post. 😉

  18. TCO
    Posted Sep 21, 2005 at 9:01 PM | Permalink

    If you use a normal centered method (but the same data), how much does the “stick index” change? I’m not giving up on disaggregating the issues.

  19. TCO
    Posted Sep 22, 2005 at 1:57 PM | Permalink

    bump

  20. Mark Frank
    Posted Oct 9, 2005 at 5:04 AM | Permalink

    This thread appears to address a question I have been trying to answer for the last few days.

    Can Steve or someone confirm whether you still get a hockey stick shape using centred PCA? The dialogue above suggests you do. And if that is true, isn’t TCO right, that the real issue is the Bristlecone pine series? The use of non-centred PCA might exacerbate the problem but the hockey stick still exists without it.

    Is there a diagram somewhere showing what the hockey stick looks like using centred PCA? Presumably it is different.

    Thanks

  21. Dave Dardinger
    Posted Oct 9, 2005 at 6:56 AM | Permalink

    Mark,

    As far as I remember, it’s using a centered PCA that results in the hockey stick moving from PC1 to PC4 (or is it PC5?), and that’s what resulted in Preisendorfer’s rule entering the picture. IOW, most of the variance with a centered PCA gets explained by other proxies, but it still takes the bristlecones to explain the hockey stick. This either means that the other proxies aren’t very good temperature measurers, or that the instrumental record, with its pronounced hockey-stick-blade shape, is flawed and it takes a flawed proxy like the bristlecones to reproduce it.

  22. Mark Frank
    Posted Oct 9, 2005 at 9:20 AM | Permalink

    #21 Dave – I think what you are saying is “yes” you still get a hockey stick shape with centred PCA and the real issue is the presence of the bristlecones.

    So – simple question that has probably been answered a thousand times – but I can’t find it. What happens if you use centred PCA without the Bristlecones?

    Cheers

  23. Steve McIntyre
    Posted Oct 9, 2005 at 9:37 AM | Permalink

    Re: #22 and others: if you do a run with centered (covariance) PC calculations and retain 2 PCs (as in MBH98), you get high 15th century results (see MM05 (GRL)). If you do it without bristlecones, you get high 15th century results regardless of the PC method. If you do centered PC and retain the PC4, you get low 15th century results. We survey various permutations and combinations in our E&E article as sensitivity comments, not as saying that any method is the “right method”. Within “centered PC methods”, there are also covariance and correlation PC methods.

    We have tried to maintain attention to both data and methodology issues. I don’t like the idea of a “beauty contest” (or an “ugliness contest”) to say which is the REAL problem. We don’t say that their results are “simply” an artifact of their PC method, since you can get low 15th century results with a centered PC method. A phrase that I like is: the flawed method interacts with the flawed proxies to create a “perfect storm”. We discovered the role of the bristlecones by actually following the flawed method to see what it did: it highlighted the bristlecones. I don’t think that a sensible answer can or should depend on how you apply Preisendorfer’s Rule N (and I’m not persuaded that it was ever used in the tree ring networks in the first place – I think that it’s been proposed ad hoc to try to salvage inclusion of the PC4). A minimal sketch of Rule N is appended at the end of this comment.

    In MM05 (EE), we characterized these results in terms of robustness rather than correctness – although this nuance seems to have been totally lost in the debate, largely because of realclimate disinformation. One of the warranties of MBH (and I like to use contractual terms like warranty) was that the reconstruction was robust to the presence/absence of dendroclimatic indicators. (I’ve cited the reliance of Bunn et al [2005] on exactly this claim.) But its results are obviously not robust to the presence/absence of bristlecones.
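
    For reference, a minimal sketch of Preisendorfer’s Rule N – assuming a Gaussian white-noise null and made-up dimensions; this is not the MBH98 implementation. Each observed eigenvalue, as a fraction of total variance, is compared with the same quantity from Monte Carlo realizations of random data, and a PC passes only if it exceeds (say) the 95th percentile of that null distribution.

    ```python
    # Sketch of Rule N: keep PCs whose eigenvalue fraction beats random data.
    import numpy as np

    def rule_n(data, n_sims=500, quantile=0.95, seed=0):
        """Boolean mask of which PCs pass Rule N against a Gaussian noise null."""
        rng = np.random.default_rng(seed)
        n, p = data.shape
        centered = data - data.mean(axis=0)
        obs = np.linalg.svd(centered, compute_uv=False) ** 2
        obs = obs / obs.sum()                    # observed eigenvalue fractions

        null = np.empty((n_sims, p))
        for i in range(n_sims):
            noise = rng.normal(size=(n, p))
            noise -= noise.mean(axis=0)
            ev = np.linalg.svd(noise, compute_uv=False) ** 2
            null[i] = ev / ev.sum()              # eigenvalue fractions under the null

        return obs > np.quantile(null, quantile, axis=0)

    # Example with pure noise: at the 95% level, few PCs (if any) should pass by chance
    rng = np.random.default_rng(42)
    print(rule_n(rng.normal(size=(200, 20))))
    ```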

  24. Mark Frank
    Posted Oct 9, 2005 at 11:04 AM | Permalink

    Steve, thanks, and excuse me for making you go over ground you have clearly already covered.

  25. fFreddy
    Posted Oct 9, 2005 at 11:12 AM | Permalink

    Steve, isn’t it true that if the bristlecones had turned down in the 20th century, then that is the shape that the MBH procedure would have shown (an upside-down hockey stick)?

    And more generally, isn’t it right that the MBH procedure will mine for any series where the data points during the 20th century “training period” (or did they call it the “validation period”?) are markedly offset from the remainder of the data series? It seemed to me that you could replace the bristlecones with a step function with the discontinuity in the year 1901, and that is what the end result would look like.

    Or did I get hold of the wrong end of the (ahem) stick?

  26. Steve McIntyre
    Posted Oct 9, 2005 at 11:45 AM | Permalink

    MBH98 doesn’t care whether a series goes up or down. It just changes the sign. PC series don’t actually have a sign – in group theory terms they are a “coset”, but I don’t suppose that image will help anybody (a small sketch follows at the end of this comment). The MBH99 PC1 points down, although the bristlecones point up. In the second step, it gets regressed against a trend and signs get flipped.

    A step function would have a different impact on the tree ring PC calculations and on the regression step. I’m offline now.
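
    A small sketch of the sign point above (random data, illustrative only – not MBH code): an eigenvector v and its negative -v explain exactly the same variance, so a PC series has no intrinsic orientation until a later step, such as a regression against a temperature trend, fixes the sign.

    ```python
    # Sketch: a PC's sign is arbitrary until something orients it.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(100, 5))
    X -= X.mean(axis=0)

    cov = X.T @ X / (X.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, -1]                         # leading eigenvector (sign arbitrary)

    pc1_up, pc1_down = X @ v, X @ (-v)      # equally valid "PC1" series
    print(round(np.var(pc1_up), 6), round(np.var(pc1_down), 6))   # same variance

    # Orienting against a target (e.g. a temperature trend) flips the sign if needed
    trend = np.linspace(0.0, 1.0, 100)
    sign = np.sign(np.corrcoef(pc1_down, trend)[0, 1])
    oriented = sign * pc1_down              # now positively correlated with the trend
    ```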

  27. Brooks Hurd
    Posted Oct 9, 2005 at 11:56 AM | Permalink

    Steve and Ross,

    Happy Thanksgiving!

One Trackback

  1. By "Mannian" PCA Revisited #1 « Climate Audit on May 28, 2013 at 11:02 AM

    […] observed previously at CA here, the first step in Preisendorfer’s methodology is “t-centering”, i.e. removing […]