Tamino and the PC4

As noted previously, in his post Tamino did not quote, cite or discuss how our articles reported the key issues – an omission that leaves our research inaccurately represented in the record at his site. I'll discuss a couple of examples. It's unfortunate that time has to be spent on such matters before dealing with issues like short-centering.

The PC4
Tamino reports breathlessly that the hockey stick pattern can be observed in the PC4 of the North American tree ring network, because it’s “a pattern in the data” and not an “artifact”.

the hockey stick pattern is still there even with centered PC — which is no surprise, because it’s not an artifact of the analysis method, it’s a pattern in the data.

and later asks, with this illustration:

Did MM really not get this? Did they really discard the relevant PCs just to copy the bare number of PCs used by MBH, without realizing that the different centering convention could move the relevant information up or down the PC list?

You betcha. When done properly on the actual data, using 5 PCs rather than just 2, the hockey stick pattern is still there even with centered PC — which is no surprise, because it’s not an artifact of the analysis method, it’s a pattern in the data. Here’s a comparison of PC#1 for the North American ITRDB (international tree ring data base) data using the MBH method (red), and PC#4 from using the MM method.

Tamino has misrepresented the research record, as both MM2005 (GRL) and MM2005 (EE) report the occurrence of the hockey stick pattern in the North American PC4, attributing it to the bristlecones.

In MM2005 (GRL), we stated:

Under the MBH98 data transformation, the distinctive contribution of the bristlecone pines is in the PC1, which has a spuriously high explained variance coefficient of 38% (without the transformation – 18%). Without the data transformation, the distinctive contribution of the bristlecones only appears in the PC4, which accounts for less than 8% of the total explained variance.

In MM2005 (EE), we reiterated the same point at a little more length:

In the MBH98 de-centered PC calculation, a small group of 20 primarily bristlecone pine sites, all but one of which were collected by Donald Graybill and which exhibit an unexplained 20th century growth spurt (see Section 5 below), dominate the PC1. Only 14 such chronologies account for over 93% of the variance in the PC1, effectively omitting the influence of the other 56 proxies in the network. The PC1 in turn accounts for 38% of the total variance. In a centered calculation on the same data, the influence of the bristlecone pines drops to the PC4 (pointed out in Mann et al., 2004b, 2004d). The PC4 in a centered calculation accounts for only about 8% of the total variance, which can be seen in calculations by Mann et al. in Figure 1 of Mann et al. [2004d].

In MM2005 (EE), we further reported the effect of carrying out an MBH-type reconstruction under many permutations (most of which were re-stated by Wahl and Ammann without citing our findings), including the use of both 2 and 5 North American covariance PCs.

If a centered PC calculation on the North American network is carried out (as we advocate), then MM-type results occur if the first 2 NOAMER PCs are used in the AD1400 network (the number as used in MBH98), while MBH-type results occur if the NOAMER network is expanded to 5 PCs in the AD1400 segment (as proposed in Mann et al., 2004b, 2004d). Specifically, MBH-type results occur as long as the PC4 is retained, while MM-type results occur in any combination which excludes the PC4. Hence their conclusion about the uniqueness of the late 20th century climate hinges on the inclusion of a low-order PC series that only accounts for 8 percent of the variance of one proxy roster.

These are not exotic references; the points at issue in Tamino's post are specifically and clearly discussed in both articles.
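For readers who want to see the mechanism rather than take either side's word for it, here is a minimal toy sketch in Python (numpy only). The network size, noise settings and amplitudes are invented for illustration; this is not the MBH98 algorithm or our emulation code, and the real MBH98 transformation also rescales each series by its calibration-period standard deviation, which the sketch omits. The point it illustrates is the one quoted above: a minority of series sharing a 20th-century "blade" dominates PC1 under short-centering, while under conventional centering the same pattern drops to a lower-order PC with a much smaller share of the explained variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies, n_blade = 581, 70, 79      # "1400-1980", blade window = last 79 "years"
t = np.arange(n_years)

def ar1(rho=0.3):
    """One AR(1) noise series of length n_years."""
    x = np.zeros(n_years)
    e = rng.normal(size=n_years)
    for i in range(1, n_years):
        x[i] = rho * x[i - 1] + e[i]
    return x

# 56 "ordinary" proxies: noise plus two shared slow wiggles with modest random loadings
common1 = np.sin(2 * np.pi * t / 200.0)
common2 = np.cos(2 * np.pi * t / 350.0)
ordinary = [ar1() + 0.4 * rng.normal() * common1 + 0.4 * rng.normal() * common2
            for _ in range(56)]

# 14 "bristlecone-like" proxies: noise plus a shared 20th-century ramp (the blade)
blade = np.zeros(n_years)
blade[-n_blade:] = np.linspace(0.0, 2.5, n_blade)
hockey = [ar1() + blade for _ in range(14)]

X = np.column_stack(ordinary + hockey)          # years x proxies

def pca(X, short_center):
    """PCs via SVD, centering each column on the full period or on the blade window only."""
    mean = X[-n_blade:].mean(axis=0) if short_center else X.mean(axis=0)
    U, s, _ = np.linalg.svd(X - mean, full_matrices=False)
    return U * s, s**2 / np.sum(s**2)           # PC time series, explained-variance shares

for label, short in [("short-centered (MBH-style)", True), ("centered", False)]:
    pc, share = pca(X, short)
    # find which of the leading PCs lines up with the blade pattern
    corr = [abs(np.corrcoef(pc[:, k], blade)[0, 1]) for k in range(10)]
    k = int(np.argmax(corr))
    print(f"{label:28s} blade pattern strongest in PC{k + 1}, "
          f"explained variance {share[k]:.1%}")
```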

Wegman also stated that the hockey stick from the bristlecone/foxtails occurred in the PC4 (see Question 10b):

Without attempting to describe the technical detail, the bottom line is that, in the MBH original, the hockey stick emerged in PC1 from the bristlecone/foxtail pines. If one centers the data properly the hockey stick does not emerge until PC4. Thus, a substantial change in strategy is required in the MBH reconstruction in order to achieve the hockey stick, a strategy which was specifically eschewed in MBH. In Wahl and Ammann's own words, the centering does significantly affect the results.

“In the Data”
Here’s a related Tamino straw man. Tamino states:

PCA (centered or not) doesn’t create patterns at all, they have to be there already even to “exhibit a larger variance.”

No one disagrees with this. We stated that the MBH algorithm “mined” for hockey stick patterns; we did not say that it “manufactured” them.

In effect, the MBH98 data transformation results in the PC algorithm mining the data for hockey stick patterns.

Wegman (see question 9b) expressed the point in similar terms:

If the variance is artificially increased by decentering, then the principal component methods will “data mine” for those shapes. In other words, the hockey stick shape must be in the data to start with or the CFR methodology would not pick it up… Most proxies do not contain the hockeystick signal. The MBH98 methodology puts undue emphasis on those proxies that do exhibit the hockey-stick shape and this is the fundamental flaw. Indeed, it is not clear that the hockey-stick shape is even a temperature signal because all the confounding variables have not been removed.

What is Tamino’s point of disagreement on this issue with either Wegman or ourselves?
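To put the "mining" point in concrete terms, here is a rough Monte Carlo sketch in the spirit of the red-noise experiments reported in MM2005 (GRL), though with an arbitrary AR(1) persistence and a simple "hockey stick index" rather than the published benchmark noise models. With persistent red noise containing no signal at all, the short-centered PC1 typically shows a much larger offset between the centering window and the rest of the series than the centered PC1 does; that selective promotion of blade-like noise is what "mining" refers to here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies, n_blade, rho = 581, 70, 79, 0.9

def red_noise_network():
    """A network of independent, persistent AR(1) series with no signal at all."""
    e = rng.normal(size=(n_years, n_proxies))
    x = np.zeros_like(e)
    for i in range(1, n_years):
        x[i] = rho * x[i - 1] + e[i]
    return x

def pc1(X, short_center):
    """Leading PC after centering on the blade window (short) or on the full period."""
    mean = X[-n_blade:].mean(axis=0) if short_center else X.mean(axis=0)
    U, s, _ = np.linalg.svd(X - mean, full_matrices=False)
    return U[:, 0] * s[0]

def hs_index(series):
    """|blade-window mean minus full-period mean| in standard-deviation units."""
    return abs(series[-n_blade:].mean() - series.mean()) / series.std()

trials = 100
hsi = {"short-centered": [], "centered": []}
for _ in range(trials):
    X = red_noise_network()
    hsi["short-centered"].append(hs_index(pc1(X, True)))
    hsi["centered"].append(hs_index(pc1(X, False)))

for label, values in hsi.items():
    print(f"{label:15s} mean hockey-stick index of PC1: {np.mean(values):.2f}")
```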

As to the PC4, we stated clearly that the hockey stick shape was a distinct pattern in the North American tree ring data set, observable in the PC4 under a centered calculation (as Mann et al. had also done in their Nature submission placed online and in a post at realclimate). We reported that the pattern could be traced back to the bristlecones and spent a considerable amount of time analyzing bristlecones. For Tamino to present the PC4 without any citation of, or reference to, our comments on the matter, and then to snottily ask "Did MM not get this?", results in our research not being accurately represented in his posting.

98 Comments

  1. kim
    Posted Mar 11, 2008 at 11:24 AM | Permalink

    Leon Festinger's 'When Prophecy Fails' is appropriate reading here and now and a lot of elsewheres.
    =========================================================

  2. Onimat
    Posted Mar 11, 2008 at 11:37 AM | Permalink

    Maybe Tamino was relying on this: http://www.realclimate.org/index.php?p=8

    Curiously undisclosed by MM in their criticism is the fact that precisely the same ‘hockey stick’ pattern that appears using the MBH98 convention (as PC series #1) also appears using the MM convention, albeit slightly lower down in rank (PC series #4) (Figure 1). If MM had applied standard selection procedures, they would have retained the first 5 PC series, which includes the important ‘hockey stick’ pattern. The claim by MM that this pattern arises as an artifact of the centering convention used by MBH98 is clearly false.

  3. Kenneth Fritsch
    Posted Mar 11, 2008 at 11:49 AM | Permalink

    Tamino has discovered the truth — he unfortunately does not realize it yet or what it is. Any bets on whether he ever will? Or is inclined to?

  4. MarkR
    Posted Mar 11, 2008 at 12:06 PM | Permalink

    Just posted on Tamino:

    Where is the undisputed professional statisticians' recommendation to use non-centered PCA for the task in question?

    Why use a non-standard, experimental, unproven method which gives vastly inflated weighting to the least reliable temperature proxies?

    Why isn't the Mann methodology and data fully published after 10 years?

    Why is there no recognition that none of the Hockey Team's published studies have passed all the statistical tests needed to avoid spurious regression?

    Why is updated data from Sheep Mountain, which shows a flaw in the suitability of Bristlecones as proxies, being withheld "on legal advice"?

    Why no mention that both the Republican and Democrat sides on the Barton Committee recognised that the Mann Hockey Stick had been discredited? (Look up the transcript.)

  5. steven mosher
    Posted Mar 11, 2008 at 12:48 PM | Permalink

    re6 just be glad I spared you the you tube of her singing. Note also, I made no bad jokes
    about having to take tamiflu, no jokes about tammy faye baker, and I refrained until
    now debbie reynolds

    No more. I promise. the humour usually works best to break up a too long fight, so at the start
    of a thread it just tends to distract.

  6. Steve McIntyre
    Posted Mar 11, 2008 at 12:53 PM | Permalink

    #5 and such. Regardless of being funny, remember such posts are going to have a short life span.

  7. steven mosher
    Posted Mar 11, 2008 at 1:15 PM | Permalink

    re 6. as they should

  8. Jeremy
    Posted Mar 11, 2008 at 1:23 PM | Permalink

    Steve, I've read all this before. I would say stop spending time on MBH98, you've done enough to defend your work in attempting to replicate their work. In fact, when I first started visiting this blog, I read most of the story on how Mann et al stonewalled you on data, methods, etc.

    That was enough to convince me they had plenty to hide. Real science has never worked like that. They continue to convince me (by not indulging your curiosity and discussing their work with you) that they are not real scientists. That’s harsh, I know, but all they have to do is open up and talk about their methods to stop me from saying that.

  9. VG
    Posted Mar 11, 2008 at 1:29 PM | Permalink

    At the rate temps, sea ice, SST, solar etc. are going, this may not even be an issue in 1-2 years.

  10. steven mosher
    Posted Mar 11, 2008 at 1:39 PM | Permalink

    RE 8. Jeremy, sometimes it's good to retill the soil. I'd hazard that a bunch of people who are new to the site, and some that are old, will garner an appreciation for Mc's work. At first, I thought that the whole PCA thing might be a diversion from some of the work that was being done on GISS, but I was glad to see Mc explain this stuff again, and I even found old posts I had forgotten about. The other thing I find interesting is that people still think it's about the BLADE, when the shaft is the real story.

  11. John A
    Posted Mar 11, 2008 at 1:45 PM | Permalink

    Is it just me, or is it Groundhog Day again?

    What new information is Tamino actually presenting? None as far as I can see.

  12. fFreddy
    Posted Mar 11, 2008 at 1:55 PM | Permalink

    As Tamino admits :

    As an earlier comment points out, my rebuttal isn’t really my own creation, it borrows heavily from the response on RealClimate by Mike Mann himself, here and here. My main contribution was the exposition.

  13. MarkR
    Posted Mar 11, 2008 at 1:57 PM | Permalink

    #4 Didn’t make it past the censor. Shame. But at least I know he knows.

  14. Jeremy
    Posted Mar 11, 2008 at 2:14 PM | Permalink

    Re:#10… Yeah, maybe I am whining. I’m just a bit of a purist. In my head I find little value in continuing to rehash details of methods that we’re discussing with other researchers via smoke-signals. That is not such a bad analogy to me. Fellow researchers have failed to disclose their data, their methods, and their reasoning, so the community here has to effectively invent their own method to match the results of the silent ones. When they accomplish this and discover the exact error of the silent ones, they are blasted for being liars.

    To me, this completely misses the real issue here. The real issue here is that science is for *everyone* who is curious, and curiosity denied is akin to murder; it is the gravest of sins. When you make a claim, and someone is curious how you arrived at that claim, you're supposed to talk about it. You are supposed to TALK ABOUT IT. If you don't, that's like saying, "I turned 3 pancakes into monkeys!" and then just claiming you used magic to do it.

  15. Craig Loehle
    Posted Mar 11, 2008 at 2:26 PM | Permalink

    “I turned 3 pancakes into monkeys!”

    I’m guessing that turning monkeys into pancakes might be easier to explain…

  16. John A
    Posted Mar 11, 2008 at 2:30 PM | Permalink

    I submitted the following as a comment on Tamino’s blog, but since I’m skeptical about prior moderation, I’ll copy it here just in case:
    ================================

    Tamino,

    You’d avoid a lot of mistakes if you actually would take time to read M&M(GRL, 2005) and M&M(E&E, 2005), because then you wouldn’t make so many stupid mistakes in your characterization of their work.

    Suffice it to say that to claim that

    “Centering is the usual custom, but other choices are still valid; we can perfectly well define PCs based on variation from any “origin” rather than from the average. It fact it has distinct advantages IF the origin has particular relevance to the issue at hand. You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject.”

    is simply wrong and Jolliffe does not endorse the Mannian method.

    MBH claimed to be using “conventional PCA” but did not center the data series. That produces Hockey Stick reconstructions even when using red-noise pseudoproxies more than 99% of the time. So look up “spurious regression” in the literature.

    You also fail to grasp that the central proxy, the bristlecone pines of Graybill & Idso were not and are not temperature proxies (as they themselves noted).

    You fail to point out that without that one proxy in the network, the decentered method has no hockey stick shapes to mine, and no hockey stick results – which clearly means that the method is not robust, but is wholly dependent upon one dubious dataset.

    All of these sorts of stupid mistakes would be avoided if you actually read what M&M actually said and quoted from them directly.

    By the way, Steve McIntyre replies directly to these mistakes in Mannian PCA Revisited #1 and Tamino and the PC4

    As it is you appear to be digging yourself into a hole and your “friends” are helping you dig even further.

  17. Alan S. Blue
    Posted Mar 11, 2008 at 2:33 PM | Permalink

    “The” killer point is how Mann’s Mining technique turns up hockeysticks in random noise.

    That points out how much this is a goal-seeking method as opposed to a data analysis method. Carrying out such an examination on a batch of random data – in excruciating detail – would seem to put this to rest. (Again.)

  18. Mike B
    Posted Mar 11, 2008 at 2:43 PM | Permalink

    Tamino has discovered the truth — he unfortunately does not realize it yet or what it is. Any bets on whether he ever will? Or is inclined to?

    I don’t know that he’ll ever get it. On the one hand he snidely remarks about how visitors from CA “only want to talk about distractions like Bristlecone Pines.” Yet he seems to be unable to recognize, no matter how well he understands the math, that the hockey stick IS the bristlecone pines, and that the bristlecone pines ARE the hockey stick PC.

    Did Mike Mann manufacture the Hockey Stick? No. But he did sort through a large pile of branches and roots until he found one that looked like a Hockey Stick, tossed out the rest, hoisted the HS above his head and then ran through the streets of Athens yelling “Eureka!”

  19. steven mosher
    Posted Mar 11, 2008 at 2:45 PM | Permalink

    re 14. Agreed on everything. Especially the pancake to monkey transformation.

    I’ll never forget the first B I got on a math test. I got every question correct.

    I didn’t show my work.

  20. jae
    Posted Mar 11, 2008 at 2:46 PM | Permalink

    Even us average guys can plainly see the logical pitfalls in relying on one set of trees to dictate the shape of the curve. Those folks must really NEED that hockey stick for some reason.

  21. Jeremy
    Posted Mar 11, 2008 at 3:23 PM | Permalink

    Re 19:

    Likewise, I remember one of the first theoretical physics lectures I ever attended. I got almost nothing out of it until after it was over, when one of the best professors I've ever known complained to me (an undergraduate) about the complete lack of discussion of methodology in the talk and the total focus on achieved results. His point struck me as odd at the time, but he was completely correct. The process is the thing, not the conclusion. The conclusion changes like a flag in a changing breeze (particularly when it comes to results obtained by computer). The methods are where the true value resides. It is in the methods that you understand your colleagues' thinking, where you learn new ways of seeing physical reality, and where you learn to work your own reasoning into the community to add to knowledge. It is there that you gain an understanding of the thought process of your peers, and more importantly a grasp of the limitations and uses of everyone's reasoning.

    I must stop, I’m veering way off topic for this blog, and I’m being too militant about this.

  22. David Holland
    Posted Mar 11, 2008 at 3:28 PM | Permalink

    Jae, the TAR “attribution” chapter says why they need it on page 702. I posted it at http://www.climateaudit.org/?p=2841#comment-223089.

    Put simply, if it could be as warm as now, or warmer, in the MWP with lower GHG concentrations, it could just be that GHGs may not be as important as we are led to believe, or of course the sun could have been hotter then, or something else like random natural internal variation caused it. By definition we can't predict random.

    Much simpler just “to get rid of the MWP” as one lead author of AR4 WGI Ch6 is alleged to have suggested.

  23. Raven
    Posted Mar 11, 2008 at 4:15 PM | Permalink

    This argument showed up on Tammy:

    “It’s worth repeating, once again, that the Hockey Stick emerges if you just averaged the proxies together, or if you remove the BCPs entirely. The BCPs are included because they strengthen the verification of the early part of the reconstruction.”

    It seems to contradict everything said here. Is there some element of truth being stretched beyond recognition or is it simply an outright falsehood?

  24. Sam Urbinto
    Posted Mar 11, 2008 at 4:38 PM | Permalink

    Here's my take on this, Raven. I'm not a statistician (some calculus, and various computer science and electronics math, etc.), but maybe I can boil it down to the 50,000 foot view (!!!)

    If things aren't properly centered, a small number (14) of the 70 proxies swamp out the signal of the bulk (56). Therefore, step 1 is incorrectly weighted, and shows something in it that actually shouldn't show up until step 4.

  25. Posted Mar 11, 2008 at 4:56 PM | Permalink

    #23 Raven,

    What does that even mean, "just average the proxies together"? Most of them are not locally calibrated to temperature. So if we have a tree ring series with ring widths in millimetres, and a foraminifera series with percentages, we can just average them together?

    If you do a simple (or area-weighted) average of locally calibrated temperature proxies, my impression is that you get something like Loehle 2007?

  26. jae
    Posted Mar 11, 2008 at 5:01 PM | Permalink

    Raven:

    “It’s worth repeating, once again, that the Hockey Stick emerges if you just averaged the proxies together, or if you remove the BCPs entirely. The BCPs are included because they strengthen the verification of the early part of the reconstruction.”

    Well, Steve will probably answer, but I don’t think this is true. Also, there sure was no hockey stick when Craig Loehle averaged together 18 non-tree ring proxies.

  27. jae
    Posted Mar 11, 2008 at 5:08 PM | Permalink

    Raven, even if it is true, if they ever bring their proxies up to date, they might have a hockey stick that points the wrong way (the “divergence” problem)!

  28. Sam Urbinto
    Posted Mar 11, 2008 at 5:41 PM | Permalink

    Makes you wonder why the proxies haven't been updated, and why the old ones that don't, probably don't, or aren't known to reflect temperature are being used so much.

    Almost makes you wonder.

  29. SteveSadlov
    Posted Mar 11, 2008 at 5:47 PM | Permalink

    According to the Team, Cali BCPs are not a direct T proxy, but a teleconnected one, and therefore, MM does not matter. Yes, that is some mighty fancy tap dancing.

  30. Gerald Machnee
    Posted Mar 11, 2008 at 5:55 PM | Permalink

    For those who wondered why Steve replied to the Bull. Sooner or later many of those reading Tamino’s site will come here and will find the real answers. Tamino is just another shill for the stick and bull department that keeps cropping up.

  31. Steve McIntyre
    Posted Mar 11, 2008 at 6:07 PM | Permalink

    Actually I haven’t responded to his main point yet. I’m doing a little work on that. I’ve been busy the last while with a squash tournament plus we had an out-of-town guest for a few days last week.

  32. Curt
    Posted Mar 11, 2008 at 6:25 PM | Permalink

    Raven #23:

    This argument showed up on Tammy:

    “It’s worth repeating, once again, that the Hockey Stick emerges if you just averaged the proxies together, or if you remove the BCPs entirely. The BCPs are included because they strengthen the verification of the early part of the reconstruction.”

    It seems to contradict everything said here. Is there some element of truth being stretched beyond recognition or is it simply an outright falsehood?

    See Figure 6 in Ross McKitrick’s exposition here:

    Click to access conf05mckitrick.pdf

    The second plot is the simple mean of the proxies. The fourth is Mann’s PC1 without bristlecones.

  33. Ross McKitrick
    Posted Mar 11, 2008 at 6:50 PM | Permalink

    The hockey stick does not emerge if you average the proxies together. Look at Figure 2 in our NAS presentation http://data.climateaudit.org/pdf/NAS.M&M.pdf.

    Also, if you take out the BCPs the hockey stick disappears. Mann himself discovered this in his CENSORED folder. See page 21 in our NAS presentation.

  34. steven mosher
    Posted Mar 11, 2008 at 6:53 PM | Permalink

    RE 32. They still think it’s about the BLADE. It’s the shaft, the flat shaft.

  35. aurbo
    Posted Mar 11, 2008 at 9:05 PM | Permalink

    Re #34:

    I agree. My inclination is to accept the blade and give them the shaft.

  36. Phil.
    Posted Mar 11, 2008 at 11:29 PM | Permalink

    Tamino said:
    “Did MM really not get this? Did they really discard the relevant PCs just to copy the bare number of PCs used by MBH, without realizing that the different centering convention could move the relevant information up or down the PC list?

    You betcha. When done properly on the actual data, using 5 PCs rather than just 2, the hockey stick pattern is still there even with centered PC — which is no surprise, because it’s not an artifact of the analysis method, it’s a pattern in the data. Here’s a comparison of PC#1 for the North American ITRDB (international tree ring data base) data using the MBH method (red), and PC#4 from using the MM method.”

    To which Steve McI responded:
    “Tamino has misrepresented the research record, as both MM2005 (GRL) and MM2005 (EE) report the occurrence of the hockey stick pattern in the North American PC4, attributing it to the bristlecones.

    In MM2005 (GRL), we stated:

    “Under the MBH98 data transformation, the distinctive contribution of the bristlecone pines is in the PC1, which has a spuriously high explained variance coefficient of 38% (without the transformation – 18%). Without the data transformation, the distinctive contribution of the bristlecones only appears in the PC4, which accounts for less than 8% of the total explained variance.”

    In MM2005(EE), we re-iterated the same point at a little more length:……………..”

    However, weren't both those statements made after the point had already been made by others?

    "Our critics, including Mann himself, have mounted several counterarguments which are more fully canvassed and dealt with in the Environment and Energy paper (vol. 16(1)). The main response is that if the PC algorithm is corrected, but instead of only using 2 PCs from the NOAMER group we use at least 4 PCs, a hockey stick shape can be partly recovered." – Ross McKitrick

  37. Geoff Sherrington
    Posted Mar 11, 2008 at 11:55 PM | Permalink

    Re 19 Steve Mosher

    re 14. Agreed on everything. Especially the pancake to monkey transformation.

    I’ll never forget the first B I got on a math test. I got every question correct.

    I didn’t show my work.

    I’ll never forget the first A I got on a math test. I got every answer correct.

  38. John A
    Posted Mar 12, 2008 at 3:40 AM | Permalink

    As I suspected, Tamino deleted my earlier comment, so here’s the next one:

    Tamino:

    Response: I think you need to look at Wahl & Amman. When you do straight PCA you *do* get a hockey stick, unless you make yet another mistake as MM did. When you do NO PCA you get a hockey stick

    You *do* get a Hockey Stick if the BCPs are included. If they are excluded then even a decentered Mannian PCA can’t produce a Hockey Stick.

    All of this is covered in M&M (E&E, 2005).

    Oh, and Wahl & Ammann confirmed that the verification R2 was ~0, meaning the model had no skill.

    So where’s the beef, Tamino?

    The BCPs are central to the Hockey Stick shape and without them, even the Mannian PCA won’t find a Hockey Stick shape to mine. With the BCPs with proper centering, the Hockey Stick appears in the PC4. So what? The BCPs produce the Hockey Stick shape, but they are not temperature proxies – therefore the correlation between the Hockey Stick and the instrumental “global mean temperature” is a spurious correlation and meaningless.

    Back to you Tamino. We’re still waiting for some insight.

  39. Stan Palmer
    Posted Mar 12, 2008 at 5:57 AM | Permalink

    re 36

    However, weren't both those statements made after the point had already been made by others?

    This AGW debate is much more than a trivial dispute among academics about priority. It doesn’t matter who said what first. The important part of this is the science about the global environment and its effect on the global economy.

  40. Boris
    Posted Mar 12, 2008 at 6:16 AM | Permalink

    Perhaps, John A, you should refine your statement to say that BCP (strip bark) are not only temperature proxies. You know, what dendros actually think about them. Otherwise, you are only stating your opinion and not a fact.

  41. Glacierman
    Posted Mar 12, 2008 at 6:20 AM | Permalink

    #23, and #25,

    As we have already learned, you can get a hockey stick from red noise. Averaging proxies that are unrelated to each other and are not temperature proxies, as Tamino suggests, seems to me like it would produce numbers that are basically just noise. It doesn't even make sense. Averaging proxies? They can't even show the relationship to temperature, yet use the numbers to reconstruct paleo climate? The proof is in the method, and it proves that the reconstruction has no meaning.

  42. Stan Palmer
    Posted Mar 12, 2008 at 6:26 AM | Permalink

    re 40

    Perhaps, John A, you should refine your statement to say that BCP (strip bark) are not only temperature proxies. You know, what dendros actually think about them. Otherwise, you are only stating your opinion and not a fact.

    Isn't the argument that John A is referring to that no hockey stick is present without the BCP? So the entire hockey stick argument is that a group of trees in the south western US can be sued as a proxy for world temperature for the last thousand years. Isn't this what SteveMc means when he indicates that the entire Mann exercise is a method of assigning differing weights to various proxies and that the error is that it preferentially assigns large weights to the BCP?

    To me Mann's mistake boils down to creating a tacit assumption that the BCP are a world (not just, or even, a local) temperature proxy. The arguments about the validity of this or that PCA method are spurious in this regard. The PCA finds the BCP shape at some level or other and then the claim is made that this is the world temperature.

  43. James Lane
    Posted Mar 12, 2008 at 6:55 AM | Permalink

    Re #42

    Stan, that’s pretty much exactly what’s going on.

  44. MarkW
    Posted Mar 12, 2008 at 6:57 AM | Permalink

    Boris,

    There is no evidence that BCP’s are a temperature proxy, either in full, or in part.

  45. Jean S
    Posted Mar 12, 2008 at 7:27 AM | Permalink

    So the entire hockey stick argument is that a group of trees in the south western US can be sued as a proxy for world temperature for the last thousand years.

    No, you have to sue also the Quebecian cedar trees (Gaspe series). That series is the backup for the bristlecone pines: if you do PCA correctly, then the Mannomatic puts more weight on that series. If you do PCA correctly and remove the Gaspe series, this is what essentially happens (image from RC!), yellow/orange curve:

  46. Steve McIntyre
    Posted Mar 12, 2008 at 7:39 AM | Permalink

    The Gaspe backup that Jean S mentions above is interesting, because, as we observed in our articles, out of 415 series used in MBH, there is only ONE series that is extended at the beginning so that it can be included in an earlier period. This one-off handling of the data was not mentioned in MBH, needless to say. This sort of one-off handling is something that sets off alarm bells for financial auditors, but hasn't bothered any climate scientists that I'm aware of. Yes, you can guess which series is involved.

    In our 2003 calculation, we didn't know of Mann's "adjustment" to the Gaspe series to make it longer and thus did the calculation without Gaspe in the AD1400 network (Gaspe only had one tree in its early portion anyway). Mann hyperventilated that we had omitted critical data, when the real issue was that he'd adjusted the data for no good reason and without disclosure.

    And BTW I was sent a graph of an update of the Gaspe series that didn’t have a HS shape. When I asked Jacoby and d’Arrigo for the data, they refused. They also refused to provide a map or directions to where the sampled trees were located. Only the old data has been archived; the update (which was done in 1991 or so) which didn’t have an HS hasn’t been archived. Newish readers should go back and re-read Jacoby’s letter about a “few good men” in an early post (Jacoby category.)

  47. Stan Palmer
    Posted Mar 12, 2008 at 7:48 AM | Permalink

    re 46

    Isn't it true that the Mann method is not robust to the absence of certain proxies? The mathematics cloaks this with obscurity, but the weakness of the method is that it takes proxies with very narrow relevance and uses them to represent a global value.

  48. Jean S
    Posted Mar 12, 2008 at 8:17 AM | Permalink

    Newish readers should go back and re-read Jacoby’s letter about a “few good men” in an early post (Jacoby category.)

    The post is here although other posts in the Jacoby category are also worth reading. See also MM05(EE).

  49. Steve McIntyre
    Posted Mar 12, 2008 at 8:29 AM | Permalink

    #36. Phil, the PC4 point was first raised in Mann’s reply to our Nature submission. In MM2005(GRL) and MM 2005 (EE), we did not omit a discussion of the PC4 as Tamino alleges. The quotes show otherwise. Tamino’s assertion in 2008 that we failed to consider the matter is an inaccurate representation of the research record.

    There are some interesting issues related to the interconnection of Preisendorfer’s Rule N and the PC4, which I’ll discuss in a post.

  50. Stan Palmer
    Posted Mar 12, 2008 at 8:36 AM | Permalink

    re 46

    And BTW I was sent a graph of an update of the Gaspe series that didn’t have a HS shape. When I asked Jacoby and d’Arrigo for the data, they refused. They also refused to provide a map or directions to where the sampled trees were located. Only the old data has been archived; the update (which was done in 1991 or so) which didn’t have an HS hasn’t been archived. Newish readers should go back and re-read Jacoby’s letter about a “few good men” in an early post (Jacoby category.)

    A program on Canadian public radio ("As It Happens" on CBC) recently reported an effort by the US Congress to overcome "publication bias" in the reporting of the results of clinical trials. "Publication bias" results from the fact that trials or research that fails to produce an interesting result is not attractive for publication. Authors tend not to write papers and journals prefer positive rather than negative results. Thus the peer-reviewed record is biased in that it fails to report the extent of negative results. Jacoby's "few good men" comment can be taken as an alternative way of describing publication bias.

    The US Congress has mandated the creation of a registration system for all clinical trials. Follow-on researchers will have the benefit of all results, both positive and negative, on drugs and treatments.

  51. Ross McKitrick
    Posted Mar 12, 2008 at 8:39 AM | Permalink

    #36: Phil, as a historical matter you are correct. In our exchange at nature we were focused on diagnosing the effect of the decentered PCA. We showed that using centered PCA dropped the BCP’s out of the MBH proxy roster, which only included the first 2 North America PC’s. Mann responded that the BCP’s reappeared in PC4, and argued the roster should be expanded to include PCs 1-5. We concurred that the BCP’s appeared in PC4. We didn’t try to argue that PC4 should or shouldn’t be included, we argued that results which turn on the decision cannot be considered robust. Also, we argued that if original readers had known about the issue then the reception of the findings would have been different. There’s a big difference between advertizing the BCP pattern as the dominant pattern in the North American proxy network, accounting for 37% of the variance, versus showing it is only in the 4th PC, accounting for 8% of the variance, and associated only with one controversial proxy from one region. The regression module doesn’t care which PC number the BCP’s appear in, but the scientific-minded reader does, if the overall conclusions pivot on their inclusion. Had the original paper correctly quantified their contribution to the North American network, people would have appropriately discounted the robustness of the final hockey stick result.
    Mann didn’t engage this debate, instead he argued that there are rigorous protocols and tests for inclusion of PCs and they prove the BCPs belong in the stew. I have never found this a very interesting argument, and I think it would only be convincing to beginners who think statistical modeling amounts to applying formulas out of a cookbook. With a bit of experience you learn that formulas can’t substitute for actually understanding what’s driving your data and model. Especially with regards to PC model specification, judgment still matters. Even if we granted the PC inclusion argument, the robustness and significance of the final results are still the real issues. But with regard to the PC inclusion argument, we pointed out that Mann abandons the Rule N when it suits him elsewhere, he isn’t doing the conventional PCA that Preisendorfer was referring to when he derived the rule N, and it is just prima facie implausible that a bristlecone pine is some magical ecological antenna that can pick up a global climate signal which the rest of his data set missed.
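    For readers who haven't run into Preisendorfer's Rule N: as usually described, each observed eigenvalue, expressed as a fraction of total variance, is compared with the distribution of the same-rank eigenvalue from many random matrices of the same dimensions, and PCs are retained only while their fractions beat, say, the 95th percentile of that null distribution. The sketch below is a generic white-noise version of the rule, not Mann's code, and whether white noise is even the appropriate null model for a tree ring network is part of what is disputed above.

```python
import numpy as np

def eig_fractions(X):
    """Eigenvalues of the centered data matrix as fractions of total variance."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    lam = s**2
    return lam / lam.sum()

def rule_n(X, n_sim=500, q=0.95, seed=0):
    """Number of leading PCs whose variance fraction beats the q-quantile of the
    same-rank fraction from white-noise matrices of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = eig_fractions(X)
    null = np.array([eig_fractions(rng.normal(size=(n, p))) for _ in range(n_sim)])
    thresh = np.quantile(null, q, axis=0)
    keep = 0
    for o, t in zip(obs, thresh):       # stop at the first PC that fails the test
        if o <= t:
            break
        keep += 1
    return keep, obs, thresh

# toy usage: 200 observations of 20 variables, 3 of which share a common signal
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
X[:, :3] += 2.0 * rng.normal(size=(200, 1))
k, obs, thresh = rule_n(X)
print(f"Rule N retains {k} PC(s); leading fractions {np.round(obs[:4], 3)} "
      f"vs thresholds {np.round(thresh[:4], 3)}")
```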

  52. Francois Ouellette
    Posted Mar 12, 2008 at 9:08 AM | Permalink

    #50

    The problem of which data to chose and what to publish is a recurrent one in all sciences. The “classic” study is that of Millikan’s experiments to determine the charge of the electron. His lab books, studied by historians of science, clearly show that he deliberately chose not to use his “poor” data, i.e. those that didn’t agree with his integer charge hypothesis. Was that fraud? The question is still open… In the end, he was right… But anyone who has done experimental research knows that you end up with tons of measurements, most of which is crap. You can’t use the very limited publication space to describe all of that data and why it was left aside. Often, you just “know” that it’s no good, and won’t spend the time figuring out or explaining why. There is a sort of “intimate” contact between an experimentalist and his setup, and you just feel it when things aren’t right. It can take months of failed measurements to finally get the result you’re looking for. For those here who are not scientists, you should realize that that is what experimentalists spend 99% of their job at. Not very exciting, except on that glorious day when it finally works. That is, in essence, Jacoby’s point referred to above. In that sense, I agree with him.

    This is however not an excuse not to make the raw data available. Lab books exist precisely for that reason: they are a record that anyone should have the right to access. In a well-run lab, lab books are required to be signed every day by a witness (all right, I admit, I never did it…). Nowadays, with the internet and the huge electronic storage capabilities, there is no excuse not to archive all data.

  53. Glacierman
    Posted Mar 12, 2008 at 9:22 AM | Permalink

    #51

    “and it is just prima facie implausible that a bristlecone pine is some magical ecological antenna that can pick up a global climate signal which the rest of his data set missed”

    Thanks for that, it’s been a long morning.

    I wonder how well that signal is calibrated to recent data. I am sure there is a good explanation why the magical ecological antenna worked so well in the past, but now the signal isn’t quite getting the reception it once did.

  54. Sam Urbinto
    Posted Mar 12, 2008 at 9:45 AM | Permalink

    Things that have some temperature signal in them that can't be discerned or reliably extracted or skillfully separated are not temperature proxies.

  55. Boris
    Posted Mar 12, 2008 at 10:00 AM | Permalink

    There is no evidence that BCP’s are a temperature proxy, either in full, or in part.

    This is false. Ask Rob Wilson.

  56. Steve McIntyre
    Posted Mar 12, 2008 at 10:07 AM | Permalink

    Some of the medical debate is familiar to me. Nancy Olivieri’s experience was instrumental in New England Journal of Medicine changing its policies. One of my best friends (another Mc) was her lawyer. Perhaps some Scottish traditions (think David Hume) linger on in the Scottish diaspora in southern Ontario. Some of the “evidence-based medicine” articles come from the Toronto area – Guyatt, one of the authors, is even from a squash playing family.

  57. Sam Urbinto
    Posted Mar 12, 2008 at 10:35 AM | Permalink

    Do BCP have a signal or are they a proxy? Signal does not equal proxy. If it’s not calibrated carefully to reliably extract the temperature signal, it’s not a temperature proxy.

  58. Posted Mar 12, 2008 at 10:37 AM | Permalink

    http://www.climateaudit.org/?p=2844#comment-223524 Jean S, yellow result without variance matching, I guess? I get almost the same result, but verification REs with variance matching as

    1400 -0.26 -0.22

    1450 0.44 0.40

    1500 0.51 0.46

    (first with full recon, second with sparse recon )

    and without variance matching

    1400 -0.75 -0.63

    1450 0.34 0.09

    1500 0.41 0.079

    Note that at RC Mann mentions RE score ( -0.76), and interestingly those two other steps show acceptable verification REs 🙂 🙂

  59. MarkW
    Posted Mar 12, 2008 at 10:40 AM | Permalink

    If you sample 20 trees, then take temperature records from the 20 nearest weather stations and compare them to each other, it's often possible that one of the trees will match up (in Erik's words) pretty well with one of the stations.

    That’s not evidence that trees are temperature proxies. It’s just evidence that if you have enough data points, the chances are some of them will match up.

  60. matt
    Posted Mar 12, 2008 at 10:50 AM | Permalink

    There is a very interesting legal stage called "findings of fact" where, after much argument by both sides, a document is constructed that highlights the points over which both parties agree. This then forms the basis for future debates, and it ensures that you don't have to keep covering the same ground over and over and over. Simply cite the graph in the FoF and then build an argument atop that fact, and you don't have to worry about the other side attacking the foundation again.

    It’d be very, very interesting for Tamino and Steve to attempt to formally highlight their biggest disagreements, in priority order, and in the process they’d likely find out where they also agreed (FoF). And from that, it’s a straightforward exercise to find your experts to attack the unsettled bits.

    As it stands, much of this is just he-said/she-said back and forth with much of the same ground being covered over and over until folks get exhausted and move onto something else.

    Steve: Two years ago, I proposed such a document to Caspar Ammann on the basis that our codes coincided and we should be able to state an agreed set of facts. My concept was virtually identical to yours. He said that it would interfere with his career advancement.

  61. Michael Jankowski
    Posted Mar 12, 2008 at 11:16 AM | Permalink

    This is false. Ask Rob Wilson.

    Technically, yes, Rob Wilson says they can be a temp proxy. There’s no need to ask him. He posted the following on this site, (http://www.climateaudit.org/?p=610, #25)

    I stated “Take home message – I do not think the BP data are as bad as Steve would have us believe”. However, I never said that they were GOOD either. With a correlation of only 0.38 (explained variance =14%), these data can hardly be defined as a strong temperature proxy. There are certainly issues with BP data, but I only added the post to add a little balance to the argument against them. The work by Salzer and Kipfmueller shows that BP data can express a valid temperature signal.

    So he says they aren't a "strong temperature proxy." He says that if worked properly they "can express a valid temperature signal." He admits he "never said that they were GOOD" and that he "only added the post to add a little balance" (which to me is akin to playing Devil's advocate).

    So, technically, Rob Wilson said BCP can be a temperature proxy if worked properly…but not necessarily a good one. What exactly is your justification for a temperature proxy that Rob Wilson can’t say is “good,” let alone something like “strong” (or better yet, accurate, excellent, or exceptional)?

  62. Steve McIntyre
    Posted Mar 12, 2008 at 11:27 AM | Permalink

    Let’s keep in mind that Rob Wilson hasn’t sampled BCPs. Biondi, a member of the NAS Panel, who has published on them with Hughes said that they can’t be used after the mid-19th century – which is exactly the period where their trend is used by MBH.

    But aside from anything else, Ababneh's results differed from Graybill's. Neither her results nor our Almagre results show all-time record growth in the late 20th century. Just the opposite. So whatever anyone may have thought in the past: out-of-sample testing has shown that they do not record late 20th century warmth in the form of record wide ring widths.

  63. Sam Urbinto
    Posted Mar 12, 2008 at 11:40 AM | Permalink

    All of that is quite different from Rob saying "BCP are temperature proxies."

    Perhaps it would be better phrased "BCP can sometimes be temperature proxies, under certain conditions. However, real world experience has shown us that they are unreliable and contradictory, and so in general are not temperature proxies and shouldn't be used as such. They are not temperature proxies is a more accurate statement than they are."

    Or maybe even more to the point: if the NAS says "You can't use BCP as temperature proxies after the mid-1850s", that sounds to me like the consensus position 🙄 So if we're talking about recent BCP, it's "They are not temperature proxies." versus "Under tightly controlled circumstances, pre-1850 BCP can sometimes be temperature proxies."

    There, problem solved.

  64. Steve Viddal
    Posted Mar 12, 2008 at 12:15 PM | Permalink

    I guess it is fair to assume that the Norwegian site forskning.no is not widely read by visitors and contributors here. It is a web based popular science news service, and has received some attention because it is one of the few Norwegian media that publish news that question IPCC dogma. After all, our glorious Nobel committee saw fit to award Al Gore the Nobel Peace Prize.

    Lately a Norwegian statistician and PhD in forecasting has published some op-eds where he has questioned IPCC prognoses (along with Green and Armstrong), as well as the apparent lack of statistical competence in the IPCC assessments, exemplified by the Wegman/NAS criticism of MBH.

    It goes without saying that Norway's contribution to RealClimate, Rasmus Benestad, feels compelled to come to MBH's rescue:

    http://www.forskning.no/Artikler/2008/februar/1204153165.64

    It is unfortunately in Norwegian only. But as discussion of MBH methods has once again raised its head, it might be of interest to see Benestad's defense of MBH. I have thus made an attempt to translate the relevant section, and beg for forgiveness beforehand, as English is only my second language, I am not very skilled in the required linear algebra and, not least, even in Norwegian Dr. Benestad's reasoning and arguments can be a challenge to follow. Anyway here goes, in the (translated) words of Rasmus Benestad:

    "As for the Wegman report, Stordahl refers to a political report (signed by a Republican from Texas). However, it should also be noted that the European Geophysical Union published a statement criticising this hearing.
    The National Academy of Sciences (NAS) has also performed an independent assessment of the work by Mann, Bradley and Hughes (MBH).
    A quote by Dr. North (The House Energy and Commerce committee hearing, US House of Representatives 19.07.2006) says a great deal: "We also question some of the statistical choices made in the original papers by Dr. Mann and his colleagues. However, our reservations with some aspects of the original papers by Mann et al. should not be construed as evidence that our committee does not believe that the climate is warming, and will continue to warm, as a result of human activities."
    This does not mean that the work MBH did was erroneous, but that their method could be discussed, and that there might not be a fixed answer as to which method is most appropriate.
    Another point is that critique of one single paper should not be interpreted as casting doubt on the theory of anthropogenic warming. Regarding this it should be pointed out that a number of highly respected scientific bodies support the IPCC (Link to Royal Society).
    MBH's work was in its day groundbreaking and because of this they discussed all uncertainties in their analysis – indeed, the uncertainties were an important part of the analysis that has since been neglected by critics of MBH. MBH was published in one of the most prestigious scientific journals, Nature, and has not been rebutted or corrected since.
    The MBH graph (hockey stick) is still present in the last IPCC report along with several other similar analyses, and they are to a large degree consistent. McIntyre & McKitrick, who to a large degree motivated the Wegman hearing, based their criticism mainly on what level of reference should be used for a so-called Principal Component Analysis.

    Rebutted Criticism

    Later, several papers have been published that rebut the validity of M&M's arguments. It has been shown that MBH's choice of reference period has little effect on the final temperature curve, because it was only one of several initial values in a preparatory stage (procedure?) prior to the so-called regression analysis.
    The regression analysis ensures that the initial values receive credible weighting. M&M also made a mistake when they compared the verification of an analysis of purely global means with an analysis that also includes geographic (spatial?) patterns, as if the two were comparable.

  65. Wondering Aloud
    Posted Mar 12, 2008 at 12:29 PM | Permalink

    I spent much of yesterday reading the Tamino blog and reasoning through the comments and claims on this issue. To me it looks like a deliberate attempt to misrepresent and misunderstand what was said here.

  66. Boris
    Posted Mar 12, 2008 at 12:52 PM | Permalink

    So, technically, Rob Wilson said BCP can be a temperature proxy if worked properly…but not necessarily a good one. What exactly is your justification for a temperature proxy that Rob Wilson can’t say is “good,” let alone something like “strong” (or better yet, accurate, excellent, or exceptional)?

    The point is to accurately portray the issue with BCP (strip bark). They are proxies for temperature. They do have CO2 fertilization problems. MBH accounted for CO2 fertilization.

    Steve: There are issues with BCPs – CO2 was one issue, but strip bark appears to be a very different problem. They don’t become proxies for temperature merely by asserting it over and over. They are in very arid locations and much affected by precipitation. MBH adjusted the PC1 in the AD1000 step in MBH99 but didn’t touch the AD1400 step. I’ve discussed their “adjustment” on another occasion and it hardly meets any sort of scientific standards for ensuring that BCP chronologies are correct. Then you run into the Ababneh problem – what’s going on with the Graybill chronologies in the first place? How can “science” be based on this sort of foolishness?

  67. Stan Palmer
    Posted Mar 12, 2008 at 12:58 PM | Permalink

    The point is to accurately portray the issue with BCP (strip bark). They are proxies for temperature. They do have CO2 fertilization problems. MBH accounted for CO2 fertilization.

    Some questions:

    a) What areas of the world are they temperature proxies for?

    b) How do we know that the BCP have CO2 fertilization problems?

    c) What is the accuracy of Mann’s method of dealing with the CO2 fertilization problems of the BCP?

  68. Pat Frank
    Posted Mar 12, 2008 at 12:59 PM | Permalink

    #66 point me to the physical theory that analytically extracts temperatures from tree rings, Boris.

  69. Posted Mar 12, 2008 at 1:35 PM | Permalink

    I removed treeline11.dat and ITRDB PCs from AD1500 step, and didn’t use variance matching (it’s wrong anyway). Result in green:

    Verification RE (full recon) is 0.41. Verification R2 0.014, and calibration R2 0.49 – a bit overfit-sounding, I wouldn’t buy this reconstruction. But verification RE is positive, in RC world this should go?

  70. Posted Mar 12, 2008 at 1:35 PM | Permalink

    From Press et al., Numerical Recipes, Chapter 13, page 454 of the first edition of 1986:

    “The analysis of data inevitably involves some trafficking with the field of statistics, that gray area which is as surely not a branch of mathematics as it is neither a branch of science.”

    And another useful reminder: "Stats in the absence of causality is not science."

    I think these recent discussions are proof of both these. If mathematics was the focus, the true correct conclusion would have been available in a matter of minutes. If causality had been determined prior to jumping into ‘not math’ stats, again true correct conclusions would have been available in minutes.

    But we need people who will traffic with stats. I’m just glad that I’ve not had to be one of them.

  71. Posted Mar 12, 2008 at 2:56 PM | Permalink

    Steve Viddal –
    Is that Rasmus's photo in the article? He's a cutie! 🙂

  72. Mike B
    Posted Mar 12, 2008 at 3:21 PM | Permalink

    #69 UC –

    Are these Principal Components? If so, which centering method was used, and which PC do these represent?

    What do the other PC’s look like?

  73. Jean S
    Posted Mar 12, 2008 at 4:02 PM | Permalink

    #72 (Mike B): No, I think those are UC's emulation of the MBH98 AD1500 step, and the same without the Gaspe series, no NOAMER tree rings (PCs), and a specific part of the later MBH algorithm (variance matching).

  74. Joe Crawford
    Posted Mar 12, 2008 at 4:52 PM | Permalink

    Anyone still believing in tree rings as proxies for temperature should look at the picture of a tree slice from Arizona over at RC: (www.realclimate.org/index.php/archives/2008/03/536-ad-and-all-that/#more-535)

    I don't know what type of tree is pictured, but if ring width somehow represents temperature, it looks to me like it would depend on which side of the tree you are standing whether you are hot, cold or comfortable in any particular year. I'm not sure how many samples (borings), and from which directions, you would need to take in order to get an accurate ring width for determining temperature.

    Joe

  75. LadyGray
    Posted Mar 12, 2008 at 5:10 PM | Permalink

    As John Brignell of Number Watch says: They used to disembowel animals in an attempt to forecast the future. Now they disembowel trees in an attempt to aftcast the past. Funny old world!

  76. Jean S
    Posted Mar 12, 2008 at 5:25 PM | Permalink

    #72 (Mike B) (addition): Sometimes UC's writing is so compressed that maybe only a few of us can follow… let me try to guess what he's saying this time 🙂

    The figure I linked in #45 is from this RC post. UC probably tried to replicate it, but noticed that doing exactly as they claim does not produce the desired figure. However, notice in the figure that the variance of the yellow curve is much larger than that of the actual curve.

    Over RC, mike writes:

    In stark contrast, our reproduction of the MM reconstruction demonstrates that their reconstruction dramatically fails statistical verification (see Figure 5) with an RE score ( -0.76) that is statistically indistinguishable from the results expected for a purely random estimate (as a reminder, RE

    So UC tried to reproduce the yellow curve with slightly different parameters (removed "variance matching") in order to get the reported verification RE (for the AD1400 step). These are the ones he reported in #58. Now the only one even close is the value -0.75, which he obtained without "variance matching" and when the RE value is calculated with respect to the full set of grid cells (in MBH, as I reported in the other thread and as Steve was aware, the verification REs are calculated with respect to the so-called "sparse reconstruction"). In #69 UC is just showing his results for the AD1500 step.

    So what does this mean? Well, not much, but:
    1) it seems that mike has, in his own defense, replicated his own algorithm wrongly on two counts (no variance matching and the wrong way of calculating verification REs). He should have calculated and reported the right column of the first set of values in #58, but instead he did the left column of the second set of values.
    2) the reconstruction obtained by throwing out the NOAMER and Gaspe series is completely fine by their own standards except for the AD1400 step.
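    For anyone following the RE and R2 numbers above, a minimal sketch of the standard definitions may help (this is not the MBH98 code; the toy numbers below only illustrate how a reconstruction can post a positive verification RE while its verification r2 stays near zero, which is the pattern UC reports in #69):

```python
import numpy as np

def re_score(obs, est, cal_mean):
    """RE = 1 - SSE(reconstruction) / SSE(just predicting the calibration-period mean)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - cal_mean) ** 2)

def r2_score(obs, est):
    """Squared Pearson correlation over the verification period."""
    return np.corrcoef(obs, est)[0, 1] ** 2

# toy verification period: an estimate that sits near the right mean but tracks
# none of the year-to-year wiggles can score RE > 0 while its r2 stays near zero
rng = np.random.default_rng(3)
obs = rng.normal(loc=0.2, scale=0.25, size=48)          # "observed" verification values
est = 0.15 + 0.05 * rng.normal(size=48)                 # nearly flat estimate near the mean
print(f"RE = {re_score(obs, est, cal_mean=-0.1):.2f}, r2 = {r2_score(obs, est):.3f}")
```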

  77. Max
    Posted Mar 12, 2008 at 5:28 PM | Permalink

    This whole subject has become like looking at an Etch A Sketch that everyone has done a random twist of the knob on while blindfolded, then having everyone try to take credit for the random lines that kinda resemble a bunny rabbit.
    The math has gone past most of us, so now we are at a stage where we have to just decide with personal faith. Something this important to humanity and the current economy shouldn't be mired in this kind of grey fog where the public just gets steered by the loudest foghorn; in the end we are still directionless.

  78. Jean S
    Posted Mar 12, 2008 at 5:28 PM | Permalink

    Steve, could you fix the block quote in #76. Thanks!

  79. matt
    Posted Mar 12, 2008 at 6:39 PM | Permalink

    Steve said: Two years ago, I proposed such a document to Caspar Ammann on the basis that our codes coincided and we should be able to state an agreed set of facts. My concept was virtually identical to yours. He said that it would interfere with his career advancement.

    You should see if Tamino would go for this. If someone is interested in hit-and-run science I can see them not wanting this at all. But if two people are really sincere about getting down to the root issue, and if they are both confident in their stance then I’d think both would welcome it.

    Peer review is wonderful but slow. In this day and age of rapid communication, it seems like there is an opportunity for someone to build a wiki (I hate the word, but everyone knows what it means) of scientific statements and their certainty, where new directions can build on previously established ones. And if a previously established fact is proven wrong or suspect, then all the subsequent science that relied on that foundation automatically becomes suspect to the degree it relied on the now-fallen fact.

    Some on Real Climate love to talk about how many strands of evidence there are supporting global warming, but some of those strands are merely selected observations that support a preconceived notion. Those that don’t (e.g. southern hemisphere cooling) seem to fall by the wayside. Imagine if there were a way to look at all these strands of evidence side by side and drill down into each and see expert opinions along the way. There has to be a way to harness all this disjoint expertise, ’cause these slugfests on the blogs are huge timesinks that accomplish little. I’ve always found it fascinating that some feel they are qualified to talk about bristlecone pines, ice shelves, complex software development and validation, climate dynamics, chemistry, stats, etc., and claim to understand how all that works, but God forbid the moment a real statistician sticks his nose under the tent to offer some friendly advice.

    Of course this might not be so interesting for those who do science daily and who relish the notion of publishing. But it would be SO COOL for amateur scientists to be able to very quickly look at mainstream theories, see the foundations they are built upon, and potentially contribute to fortifying or falsifying those foundations based on very narrow expertise acquired somewhere along the line in life (school, job, independent study).

  80. Steve McIntyre
    Posted Mar 12, 2008 at 7:26 PM | Permalink

    #79. I’m in a very odd situation where the academic viewpoint on these matters has been almost entirely formed by non-peer reviewed blog postings. Mann himself never published a peer reviewed response to our criticisms, instead replying at realclimate (his 2004 submission to Climatic Change was rejected). Academic opinions were being set in stone through realclimate postings, rather than journal publications.

    The first journal-published response from the Mann camp came only last fall. Mann’s realclimate responses were worked up by his associates, Wahl and Ammann, who, interestingly, fail even to acknowledge Mann’s priority for their arguments. Their submission was available in draft form in early 2006, but the final version differs on some important points, primarily arising from the rejection (on two occasions) of the Ammann and Wahl GRL submission.

  81. Phil.
    Posted Mar 12, 2008 at 9:39 PM | Permalink

    Re #80

    #79. I’m in a very odd situation where the academic viewpoint on these matters has been almost entirely formed by non-peer reviewed blog postings. Mann himself never published a peer reviewed response to our criticisms, instead replying at realclimate (his 2004 submission to Climatic Change was rejected). Academic opinions were being set in stone through realclimate postings, rather than journal publications.

    Steve, your position has been principally advanced in the same manner, so I don’t see what your problem is.

  82. Posted Mar 13, 2008 at 1:15 AM | Permalink

    Mike B, Jean S (#72,73)

    Sorry about the self-featuring information compression ;) It is the MBH98 reconstruction, AD1500 step (blue), and my emulation of the MBH98 AD1500 step (without NOAMER PCs and Gaspe, no variance matching; green). I plotted this because Mann at RC seems to make quite a noise about the negative RE, -0.76, of the MM reconstruction. This one has a positive RE. And Mann’s method of calibration-residual-based CIs shows that these two versions are equally accurate.

    #76

    1) it seems that mike has in his own defense replicated his own algorithm wrongly on two accounts (no variance matching and wrong way of calculating verification REs). He should have calculated and reported the right column of the first values in #58, but instead he did the left column of the second set of values.

    Yes, imagine how many degrees of freedom they have. Any conclusion they please can be derived from the MBH algorithm, because it is not well defined.

    2) the reconstruction obtained by throwing out NOAMER and Gaspe series is completely fine in their own standards except for the AD1400 step.

    Yes. The MBH standards are ‘anything with a positive RE will do’ and ‘it is OK to derive CIs from calibration residuals’. Amazing; who was the referee of MBH98??

    (*) Figure corrected, original was MBH with all steps ( http://signals.auditblogs.com/files/2008/03/no_na_bu.png )

  83. Posted Mar 13, 2008 at 1:36 AM | Permalink

    66, Steve

    MBH adjusted the PC1 in the AD1000 step in MBH99 but didn’t touch the AD1400 step.

    And, by accident, this procedure leads to visible astronomical cooling, http://www.climateaudit.org/?p=2344#comment-159821

  84. MarkW
    Posted Mar 13, 2008 at 4:41 AM | Permalink

    He said that it would interfere with his career advancement.

    That has to be one of the saddest quotes in science.

  85. John A
    Posted Mar 13, 2008 at 5:57 AM | Permalink

    Boris #40:

    Perhaps, John A, you should refine your statement to say that BCP (strip bark) are not only temperature proxies. You know, what dendros actually think about them. Otherwise, you are only stating your opinion and not a fact.

    No Boris. The people who sampled them, Graybill and Idso, stated that they were not temperature proxies. So it is up to Mann, or anyone who defends him, to establish that they are temperature proxies after all.

    So it’s not just my opinion. It’s also the opinion of the NAS Panel that they are not temperature proxies, for what it’s worth.

    The notion that they are mysteriously “teleconnected” to the global mean temperature is so risible that it doesn’t deserve comment. It’s like someone discovering that the price of a tin of baked beans in Thailand correlates to the Dow Jones Industrial Average so well that the Dow can be reproduced from Thai baked bean prices.

    The Global Mean Temperature is an index, just like the Dow. It has no physical meaning and it is impossible for trees to grow in close correlation to it except by chance.
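
    A toy illustration of the chance-correlation point (my own sketch, nothing to do with actual bean prices): two independent random walks routinely show sizeable correlations purely by chance.

    set.seed(1)
    cors <- replicate(1000, {
      dow   <- cumsum(rnorm(100))   # stand-in for an index
      beans <- cumsum(rnorm(100))   # stand-in for an unrelated trending series
      cor(dow, beans)
    })
    mean(abs(cors) > 0.5)           # a sizeable fraction exceed |r| = 0.5 by chance alone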

  86. BoulderSolar
    Posted Mar 13, 2008 at 12:07 PM | Permalink

    I hear comments like this as a rebuttal to criticisms of MBH:

    “There are 14 subsequent papers that take different approaches to the same question and data, using different statistical techniques and different selection criteria for data inclusion and so on – and arrive at very close to the same answers.”

    I’d like someone to respond to this. Is it true?

  87. fFreddy
    Posted Mar 13, 2008 at 12:19 PM | Permalink

    Bouldersolar, you might like to have a look at the first few posts in this category.

  88. Michael Jankowski
    Posted Mar 13, 2008 at 1:46 PM | Permalink

    Re#86, fFreddy’s direction is a good one to take.

    same question and data

    Garbage data in = garbage results out, regardless of methods

    using different statistical techniques

    How many of these “statistical techniques” have been verified as legitimately applied, such as being peer-reviewed and published as a method prior to being used for a reconstruction? How many statistics experts (as opposed to the admitted “I am not a statistician” Mann) were involved in those papers?

    and different selection criteria for data inclusion

    There seems to be a lot of overlap in the data sets used, so it’s hard to say there are clear-cut “different selection criteria.”

  89. jae
    Posted Mar 13, 2008 at 2:26 PM | Permalink

    and different selection criteria for data inclusion

    LOL, I doubt that they are THAT different. For example, find me one of those papers that does not “select” BCPs.

  90. frost
    Posted Mar 14, 2008 at 7:01 AM | Permalink

    Are there any simple programs that will do PCA? I would guess that R would be able to do that but I’m hoping for something that is more beginner friendly. This is for a non-climate related problem.

    Steve:
    I started using R from scratch. Let’s say that you have a matrix X and you want to do a PC analysis. Here’s all you do:

    pca0 <- prcomp(X)                 # covariance PCs
    # scores ("left") in pca0$x; square roots of the eigenvalues in pca0$sdev;
    # loadings ("right") in pca0$rotation
    pca1 <- prcomp(X, scale = TRUE)   # correlation PCs

    Or if you want to do an SVD, it’s just as easy:

    svd0 <- svd(scale(X, scale = FALSE))   # covariance version (centered only)
    svd1 <- svd(scale(X))                  # correlation version (centered and scaled)

    What could be easier?

    IMHO no one should consider doing statistical work other than via R these days.
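
    A quick toy check of the covariance-PCA/SVD equivalence above (made-up data; the object names are just illustrative):

    set.seed(123)
    X    <- matrix(rnorm(200), nrow = 20, ncol = 10)
    pca0 <- prcomp(X)                     # covariance PCA (prcomp centers by default)
    svd0 <- svd(scale(X, scale = FALSE))  # SVD of the centered matrix

    # PC scores agree with U %*% diag(d), up to column sign flips
    max(abs(abs(pca0$x) - abs(svd0$u %*% diag(svd0$d))))
    # eigenvalues of the covariance matrix are sdev^2 = d^2/(n-1)
    all.equal(pca0$sdev^2, svd0$d^2 / (nrow(X) - 1))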

  91. Posted Mar 14, 2008 at 7:29 AM | Permalink

    Frost says in #90,

    Are there any simple programs that will do PCA? I would guess that R would be able to do that but I’m hoping for something that is more beginner friendly. This is for a non-climate related problem.

    EViews 6.0 has a newly expanded PCA capacity that is quite easy to use. It’s a little expensive, though, unless you’re at a university with a site licence.

  92. Mike B
    Posted Mar 14, 2008 at 11:28 AM | Permalink

    For UC, Jean S., Steve, and whoever else would like to take a shot at it…

    I thought I knew the answer to this question, but now I’m not so sure.

    What exactly is the MBH98 Reconstruction?

    Is it PC1 of the partially centered tree ring data?

    Or is it a linear combination of PC’s from several data sets?

    Does it involve any time domain splicing of “reconstructions” or PC’s?

    Or does the “reconstruction” not involve PC’s at all?

    Thanks in advance.

  93. Craig Loehle
    Posted Mar 14, 2008 at 2:50 PM | Permalink

    BoulderSolar asks: “I hear comments like this as a rebuttal to criticisms of MBH:

    “There are 14 subsequent papers that take different approaches to the same question and data, using different statistical techniques and different selection criteria for data inclusion and so on – and arrive at very close to the same answers.”

    I’d like someone to respond to this. Is it true?”

    If you use proxies that are not really temperature proxies (i.e., your assumptions are wrong), you will get white noise out of your model, and the mean of that white noise will be a fairly flat series over the simulated period (the shaft of the hockey stick). Then, by picking out the few proxies that correlate with your temperature “signal”, you get the blade, and you have your hockey stick. That is, a failed temperature reconstruction will look just like the spaghetti graph in the IPCC up to 1900. They have never shown that their approach works with out-of-sample data.
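
    To make the mechanism concrete, here is a toy simulation of that screening effect (my own sketch, not any published reconstruction method; all names invented):

    set.seed(42)
    n_years   <- 1000
    n_proxies <- 500
    proxies   <- matrix(rnorm(n_years * n_proxies), n_years, n_proxies)  # pure noise
    cal       <- 901:1000                                                # "20th century"
    temp_cal  <- seq(0, 1, length.out = 100) + rnorm(100, sd = 0.2)      # rising target

    # keep the 20 noise series that happen to correlate best with the target
    r     <- apply(proxies[cal, ], 2, cor, y = temp_cal)
    keep  <- order(r, decreasing = TRUE)[1:20]
    recon <- rowMeans(proxies[, keep])

    plot(recon, type = "l")                               # flat, noisy shaft...
    lines(stats::filter(recon, rep(1/20, 20)), lwd = 2)   # ...with a blade inside the calibration window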

  94. Steve McIntyre
    Posted Mar 14, 2008 at 4:20 PM | Permalink

    Also, as I’ve said on many occasions, the studies rely heavily on “data snooping”, i.e. re-use of proxies whose shapes are known (e.g. bristlecone/foxtails, Yamal, Yang China), plus opportunistic choices of which version to use, e.g. Yamal rather than the Polar Urals update. Very slight variations in selections lead to a reversal of the medieval-modern relationship. The difference is very sensitive to a few accounting choices.

    The authors of these studies are hardly “independent” as this term is understood in public businesses. Bradley and/or Jones have coauthored with most of them. The proxies aren’t independent (See Wegman on this as well) nor are the authors.

  95. Otto
    Posted Mar 14, 2008 at 8:16 PM | Permalink

    steve said:

    “Also, as I’ve said on many occasions, the studies rely heavily on “data snooping

    Isn’t this an accusation of fraud?

    Steve: “Data snooping” is a statistical concept used in econometrics and contains no such implication. It occurs in other fields as well. Proponents typically believe (incorrectly) that their decisions are good ones. See posts on Greene and references there. If there has been data snooping, then normal statistical tests don’t apply.
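
    For anyone who wants to see why the tests break, here is a rough toy version of the problem (mine, not taken from any of the papers in question): pick the best-correlated of many noise “proxies” and then quote the ordinary p-value for it.

    set.seed(7)
    snooped_p <- replicate(2000, {
      y <- rnorm(50)                          # "temperature"
      X <- matrix(rnorm(50 * 30), 50, 30)     # 30 candidate noise "proxies"
      best <- which.max(abs(cor(X, y)))       # data snooping: keep the winner
      cor.test(X[, best], y)$p.value          # nominal p-value ignores the selection
    })
    mean(snooped_p < 0.05)                    # far above 0.05 under pure noise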

  96. Posted Mar 15, 2008 at 12:07 AM | Permalink

    Is it PC1 of the partially centered tree ring data?

    No. The PC1, along with the other proxy data, is fed to a classical calibration algorithm (with some modifications), with temperature PCs as targets. The proxy PC1 gets the second largest weight in the AD1400 step; the largest is assigned to the Gaspe series. Without those two, you’ll get a warm 15th century (#45).

    Or is it a linear combination of PC’s from several data sets?

    Linear combination of proxy PCs + raw proxy records.

    Does it involve any time domain splicing of “reconstructions” or PC’s?

    11 steps.
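
    For concreteness, a bare-bones sketch of a regression-based calibration step of the general kind described above. This is NOT MBH’s actual algorithm (no temperature PCs, no rescaling, no changing proxy rosters), and every name is invented:

    calibrate_step <- function(proxy, temp, cal_idx) {
      # fit the calibration-period relationship between the target and the proxies...
      fit <- lm(temp[cal_idx] ~ ., data = as.data.frame(proxy[cal_idx, ]))
      # ...then apply the fitted weights over the full proxy record
      predict(fit, newdata = as.data.frame(proxy))
    }

    In MBH98 the analogous operation is repeated for each of the 11 step networks, and the resulting segments are spliced together in time.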

  97. frost
    Posted Mar 15, 2008 at 8:32 PM | Permalink

    Re: 91, Hu
    This is for a hobby project so spending money is out. Thanks for the tip, tho.
    Re: Steve McI’s reply
    I’m a rank beginner in statistics (actually that overstates my capabilities) so I was hoping for something really easy to use. After all, the fate of the globe does not hang on this investigation. However, since you’ve given me a recipe, I’ll give R a try. Thanks for taking the time to respond.

  98. Christopher
    Posted Mar 17, 2008 at 3:23 PM | Permalink

    Has Tamino taken up your idea here? Have you emailed him offblog etc? Any communication at all?