A Challenge to Tamino: MBH PC Retention Rules

Today I wish to revisit an issue discussed in one of the very first CA posts: did MBH98 actually use the PC retention rules that were first published in 2004 at realclimate here, and described both there and by Tamino as “standard” selection rules?

Much of the rhetorical umbrage in Tamino’s post is derived from our alleged “mistakes” in supposedly failing to observe procedures described at realclimate. As observed a couple of days ago, Tamino has misrepresented the research record: both our 2005 articles observed that the hockey stick shape of the bristlecones was in the PC4. In MM 2005 (EE), we observed – citing the realclimate post as Mann et al 2004d:

If a centered PC calculation on the North American network is carried out (as we advocate), then MM-type results occur if the first 2 NOAMER PCs are used in the AD1400 network (the number as used in MBH98), while MBH-type results occur if the NOAMER network is expanded to 5 PCs in the AD1400 segment (as proposed in Mann et al., 2004b, 2004d ). Specifically, MBH-type results occur as long as the PC4 is retained, while MM-type results occur in any combination which excludes the PC4. Hence their conclusion about the uniqueness of the late 20th century climate hinges on the inclusion of a low-order PC series that only accounts for 8 percent of the variance of one proxy roster.

Mann, M.E., Bradley, R.S. and Hughes, M.K., 2004d. False Claims by McIntyre and McKitrick regarding the Mann et al. (1998) reconstruction. Retrieved from website of realclimate.org at http://www.realclimate.org/index.php?p=8.

In MM2005 (EE), we observed correctly that you got a HS-shaped reconstruction if you use 5 PCs (including the bristlecones in the PC4), while you don’t get one if you use 2 PCs. (We considered many other permutations in MM2005 (EE), including correlation PCs; Tamino’s not the first person to criticize us without properly reading our articles.) Despite the above clear statement, Tamino distorted the research record by falsely alleging that we had failed to consider results with 5 PCs:

When you do straight PCA you *do* get a hockey stick, unless you make yet another mistake as MM did… When done properly on the actual data, using 5 PCs rather than just 2, the hockey stick pattern is still there even with centered PC — which is no surprise, because it’s not an artifact of the analysis method, it’s a pattern in the data.

Here Tamino, as he acknowledges, relies heavily on Mann’s early 2005 realclimate post, where Mann stated:

MM incorrectly truncated the PC basis set at only 2 PC series based on a failure to apply standard selection rules to determine the number of PC series that should be retained in the analysis. Five, rather than two PC series, are indicated by application of standard selection rules if using the MM, rather than MBH98, centering convention to represent the North American ITRDB data. If these five series are retained as predictors, essentially the same temperature reconstruction as MBH98 is recovered (Figure 2).

So what exactly is this “mistake” that we are supposed to have made? We said that you got a HS reconstruction with 5 PCs; so did Mann and Tamino. We said that you didn’t get a HS reconstruction with 2 PCs; so did Mann and Tamino. Their argument, as I understand it, is that it is a “mistake” to do a reconstruction with 2 PCs rather than 5 PCs, as 5 PCs are mandated by “standard selection rules”.

I’ve reviewed what Preisendorfer and others have said about determining “significance” of PCs – that PC analysis is merely exploratory and Rule N (or similar rules) merely create a short list of candidate patterns; they do not themselves establish scientific significance. As Preisendorfer said, there is no “royal road” to science. Someone somewhere has to do the grunt work of showing that the PC4 has scientific validity as a temperature proxy, a Rule N analysis can’t do that.
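For readers who want to see what a Rule N calculation actually involves, here is a minimal sketch in R. This is illustrative only — the simple white-noise form of the rule, with dimensions loosely matching the AD1400 NOAMER network — and is not Mann’s implementation:

```r
# Illustrative Rule N (white-noise form), not the MBH98 implementation.
# Compare observed eigenvalue fractions to those from random matrices of
# the same size; a PC is merely a *candidate* if it clears the benchmark.
set.seed(123)
N <- 581; M <- 70; nsim <- 100          # "years" x "series", number of simulations
sim <- replicate(nsim, {
  X <- matrix(rnorm(N * M), N, M)
  lambda <- svd(scale(X, scale = FALSE))$d^2
  lambda / sum(lambda)                  # eigenvalue fractions under the null
})
benchmark <- apply(sim, 1, quantile, probs = 0.95)  # 95th percentile "noise floor"
# Rule N short-list for an observed spectrum: which(observed_fractions > benchmark)
```

As noted above, clearing the benchmark only puts a PC on a short list of candidate patterns; it does not establish that the pattern is a valid temperature proxy.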

But today I want to discuss something quite different and something that has really annoyed me for a long time. For all the huffing and puffing by Mann and Tamino about Preisendorfer’s Rule N being a “standard selection rule” or a “correct” way of doing things, Mann failed to produce the source code for the Preisendorfer tree ring calculations when asked by the House Energy and Commerce Committee for MBH98 source code, even though it was a highly contentious issue where implementation errors had already been alleged.

Worse, as shown below (re-visiting a point made in early 2005), it is impossible to reproduce the observed pattern of retained PCs shown for the first time in the Corrigendum SI of July 2004. MBH98 itself made no mention of Rule N in connection with tree ring networks, referring instead to factors such as “spatial extent” which have nothing to do with Rule N:

Certain densely sampled regional dendroclimatic data sets have been represented in the network by a smaller number of leading principal components (typically 3–11 depending on the spatial extent and size of the data set). This form of representation ensures a reasonably homogeneous spatial sampling in the multiproxy network (112 indicators back to 1820)

In fact, when one re-examines the chronology of when Rule N was first mentioned in connection with tree rings, it is impossible to find any mention of it prior to our criticism of Mann’s biased PC methodology (submitted to Nature in January 2004) and Mann’s subsequent realization that the hockey stick shape of the bristlecones was in the PC4 – a point first made in Mann’s Revised Reply to our submission, which was presumably submitted around May 2004. The Supplementary Information 1 to that submission was substantially identical to the later realclimate post (itself one of the very first dated posts, actually preceding the start-up of realclimate on Dec 1, 2004 – evidencing, perhaps, a little too much interest in the matter on their part).

I’ve been able substantially to replicate the methodology illustrated in the realclimate post, and I’ve applied it to all other MBH98 network/step combinations, with some disquieting inconsistencies as a result. The figure below shows on the left the realclimate Rule N calculation for the AD1400 NOAMER network. The 4th red + sign marks the eigenvalue of the bristlecone PC4 upon which the MBH98 reconstruction depends in this period. The red + signs show eigenvalues using a centered (covariance) calculation; the blue values use Mannian PCs. The two lines show simulated benchmarks from random matrices generated with AR1 coefficients matching the actual data. On the right is my emulation of this calculation – which, as you can see, appears to be identical. The eigenvalue information shown here is completely consistent with the information in MM2005 (GRL) and MM2005 (EE).

The blue arrow shows the eigenvalue of the bristlecone-dominated PC using Mannian methods; the red arrow shows the eigenvalue using a centered calculation. These results were reported in MM2005 (GRL), where we stated that the explained variance of the bristlecones was reduced from 38% (the blue arrow) to 8% (the red arrow) and that the bristlecones were not the “dominant component of variance” in the North American network, as previously claimed by Mann et al – an impression that they would obviously have been given by their incorrect PC calculation. So when they say that the error didn’t “matter”, it certainly mattered when they said that this particular shape was the “dominant component of variance” or the “leading component of variance” – claims that they made in response to our 2003 article.

[Figures: preise1.gif | tamino50.gif – realclimate Rule N calculation for the AD1400 NOAMER network (left) and my emulation (right)]

Caption to realclimate figure and MBH 2004 Nature submission: FIGURE 1. Comparison of eigenvalue spectrum for the 70 North American ITRDB data based on MBH98 centering convention (blue circles) and MM04 centering convention (red crosses). Shown is the null distribution based on simulations with 70 independent red noise series of the same length with the same lag-one autocorrelation structure as the actual ITRDB data using the centering convention of MBH98 (blue curve) and MM04 (red curve). In the former case, 2 (or perhaps 3) eigenvalues are distinct from the noise floor. In the latter case, 5 (or perhaps 6) eigenvalues are distinct from the noise floor. The simulations are described in “supplementary information #2”.
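To see why the centering convention matters so much to the eigenvalue assigned to a hockey-stick shaped series, here is a toy R sketch – my own construction on synthetic data, not the MBH98 data or code. One series carries a late-period offset among otherwise white-noise series; the 79-row “calibration” window and the offset size are arbitrary choices for illustration:

```r
# Toy example: short-centering (centering on a late "calibration" period)
# inflates the variance share assigned to a series with a late-period offset.
set.seed(42)
N <- 581; M <- 70                                  # "years" x "series"
X <- matrix(rnorm(N * M), N, M)
X[, 1] <- X[, 1] + c(rep(0, N - 79), rep(3, 79))   # hockey-stick shape in series 1
pc1_share <- function(Y) { d <- svd(Y)$d^2; d[1] / sum(d) }
centered <- pc1_share(scale(X, scale = FALSE))     # full-period centering
short <- pc1_share(scale(X, center = colMeans(X[(N - 78):N, ]), scale = FALSE))
round(c(centered = centered, short = short), 3)    # short-centered share is much larger
```

The direction of the effect in this toy case matches what is described above: centering on the late period makes the offset series look like the dominant pattern, while full-period centering demotes it to a small share of the variance.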

Now for some fun. The figure below shows the calculations for two other MBH98 network/step combinations, here showing Mannian PC results for comparison to the actual retention. On the left is a calculation for the Vaganov AD1600 network, where 2 PCs were retained (retained PCs are circled). PCs 3, 4 and 5 were not retained, but all are “significant” according to the Rule N supposedly used in MBH98. Indeed, the unused PC3 here is surely more “significant” than the covariance PC4 that Mann now claims under the algorithm illustrated at realclimate.

On the right is another network – Stahle/SWM AD1750 – showing the opposite pattern. In this case, MBH retained nine PCs although only three are “significant” under the realclimate version of Rule N. This is also a relatively small network (located in the southwestern U.S. and Mexico, and inexplicably excluded from the NOAMER network, with which it somewhat overlaps).

[Figures: tamino48.gif | tamino49.gif – Vaganov AD1600 (left) and Stahle/SWM AD1750 (right)]

Note: The dotted line additionally shows the 2/M “rule” that was noticed during examination of the source code, in connection with Mann deciding how many “climate fields” to retain. It is not mentioned anywhere in MBH98 but is illustrated here for convenience.
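For what it’s worth, the dotted line is trivial to compute. My reading of the rule – retain a PC if its eigenvalue fraction exceeds 2/M, i.e. twice the average share – is an inference from the source code, not a documented procedure, so treat this sketch accordingly:

```r
# "2/M rule" as I read it (an inference from the source code, not a
# documented MBH98 procedure): retain PC k if its eigenvalue fraction
# exceeds 2/M, twice the average share 1/M.
retained_2M <- function(lambda) {
  frac <- lambda / sum(lambda)
  which(frac > 2 / length(lambda))
}
retained_2M(c(5, 4.5, 0.3, 0.1, 0.1))  # toy spectrum: only fractions 0.5 and 0.45 exceed 2/5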

It seems quite possible to me that the Rule N version illustrated at realclimate was not actually used in MBH98. The first surfacing of the realclimate algorithm in the present form came only after Mann realized that the bristlecone hockey stick really did occur in the PC4 and was not the “dominant component of variance”. There is no contemporary evidence that it was used and there is no source code proving its use. Both Mann and Nature refused to provide details back in 2003 and 2004. The actual pattern of retained PCs cannot be reproduced according to the way that I’ve implemented the realclimate algorithm. As I said earlier, I’m not sure why Tamino has decided to re-open these particular scabs. In Mann’s shoes, I would have left this stuff alone.

The question for Tamino. Which is incorrect: the information on retained PCs in the Corrigendum SI? Or the claim that the algorithm illustrated at realclimate was used in that form in MBH98? If there is some other explanation – some way of deriving the Vaganov AD1600 and Stahle/SWM AD1750 retentions using the realclimate algorithm – please show how to do it. I’ll post up data and code for my implementation to help you along. C’mon, Tamino. You’re a bright guy. Show your stuff.

UPDATE: Willis Eschenbach reports in a comment below that he has also examined these calculations and results, and likewise concludes that MBH98 did not use Rule N.

NOTE: Just in case Tamino says that …sigh… it’s too much work, here’s my script. The functions used for the calculations are in http://data.climateaudit.org/data/mbh98/preisendorfer.functions.txt and can be analyzed there in case there are any defects; the collated information is http://data.climateaudit.org/data/mbh98/preisendorfer.info.dat and the tree ring networks are in http://data.climateaudit.org/data/mbh98/UVA. These can be read into an R session as follows:
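The commands themselves appear to have been eaten by the formatting; the following is my reconstruction of the load step. The URLs are the ones given above, but the read options (headers, separators) are guesses on my part and may need adjusting against the actual files:

```r
# Reconstructed load step -- URLs are from the post; the read.table options
# are my assumptions and may differ from the actual file layouts.
source("http://data.climateaudit.org/data/mbh98/preisendorfer.functions.txt")
info <- read.table("http://data.climateaudit.org/data/mbh98/preisendorfer.info.dat",
                   header = TRUE)
# the individual tree ring networks are files under
# http://data.climateaudit.org/data/mbh98/UVA
```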


The Stahle 1750 graphic can be produced using this command (and the functions show the calculation):


The Vaganov 1600 example can be produced as follows:


The North American comparison can be done as follows:

#Do Plot illustrated at CA
# (target, x and i are set up by the functions in preisendorfer.functions.txt)
title(main=paste(target$network[i], ": AD", target$period[i], sep="") )
temp=( (x$lambda-x$preis)>0); target$preis[i]=sum(temp)  # count PCs above the Rule N benchmark
arrows(x0=2.2, y0=.382, x1=1.15, y1=.382, length = 0.1, angle = 30, code = 2, col = 4, lty = 1, lwd = 4)
arrows(x0=5, y0=.12, x1=4.15, y1=.08, length = 0.1, angle = 30, code = 2, col = 2, lty = 1, lwd = 4)
legend(5.5, .4, fill=c(4,2), legend=c("Mannomatic", "Centered (Cov)"))

Results from my simulations are stored in the R-object http://data.climateaudit.org/data/mbh98/preis_mannomatic.tab which contains results for 20 network/step cases shown in the preisendorfer.info.dat file.


  1. Gerald Machnee
    Posted Mar 14, 2008 at 7:49 AM | Permalink

    Maybe it is time for Tamino to emulate Mann and state “I am not a statistician”.
    Great work, Steve.

  2. Michael Jankowski
    Posted Mar 14, 2008 at 7:57 AM | Permalink

    Let’s hope it’s more than the usual “McIntyre can’t follow the MBH procedure and reproduce it” junk.

    Tamino (and others) might want to revisit past CA threads on this interesting issue:

    Feb ’05 http://www.climateaudit.org/index.php?p=34
    Aug ’05 http://www.climateaudit.org/?p=291

  3. Johan i Kanada
    Posted Mar 14, 2008 at 8:00 AM | Permalink


    I wouldn’t expect a fair and substantive answer, if I were you. After all, only Mann, Tamino, Gore et al. understand climate science, so comments/criticisms from a poor statistician mean nothing to them. [snip]

  4. Jean S
    Posted Mar 14, 2008 at 9:25 AM | Permalink

    Corrigendum SI:

    The number of PCs retained in various proxy sub-networks were determined for each independent interval of the stepwise reconstruction based on a combination of objective criteria (modified Preisendorfer rule N and scree test)

    Hmmm, looking at those graphs, I think we can also rule out the scree test.

  5. Akseli
    Posted Mar 14, 2008 at 9:34 AM | Permalink

    I think Tamino is just trying to keep you busy in this proxy issue. He and other RealClimate people are afraid that you can find out something else more shocking to public.

    More focus should be put on reliability of surface station temperature records of past 100 years because it’s something public understands. Clear numbers.

    Temperature records of rural USA and Finland stations show that:
    – temperature increased as much in 1910-1940 than 1975-2005
    – temperature was decreasing 1940-1980
    – temperature is now about the same as 1940

    Is it possible to create 100 years temperature charts which are using only rural classified stations and stations where data is available in all years without any black holes.

    Even this might not give correct answer because some rural stations are in airports and such places.

  6. Steve McIntyre
    Posted Mar 14, 2008 at 9:34 AM | Permalink

    In August 2004, we wrote to Nature as follows: http://data.climateaudit.org/correspondence/nature.040810.htm

    Mann et al. state that they use an “objective criterion” to decide how many principal component series to retain for each region and each calculation step. In the SI, they refer to consideration of both Preisendorfer’s Rule N and to a Scree Test but do not state their “objective” criterion. Preisendorfer’s Rule N describes simulations from white noise series. The Supplementary Information to Mann et al.’s second reply describes a simulation process based on red noise modeled with lag-one autocorrelation – a quite different procedure. Can you obtain and provide an exact and replicable description of the procedure used to decide the retention of principal component series?

    They refused this information http://data.climateaudit.org/correspondence/nature.040907.htm saying:

    And with regard to the additional experimental results that you request, our view is that this too goes beyond an obligation on the part of the authors, given that the full listing of the source data and documentation of the procedures used to generate the final findings are provided in the corrected Supplementary Information. (This is the most that we would normally require of any author.)

    I do not want to give you the impression that we are dismissing your concerns out of hand. From the outset, our main concern has been to rectify the potential errors in the original MBH98 publication and to provide the data and materials used on our permanent website – the root causes of your initial frustrations. Indeed, I hope that you are at least in part reassured by the efforts that we (and Professor Mann) went to to rectify these problems in the form of the Corrigendum and the extended Supplementary Information. But having now rectified these problems, we feel that our role in the matter has concluded.

  7. Patrick M.
    Posted Mar 14, 2008 at 9:56 AM | Permalink

    Does AGW have no champion who will pick up this gauntlet?

  8. timothy
    Posted Mar 14, 2008 at 10:11 AM | Permalink

    This may be too trivial to bother with, but isn’t parallel analysis a better way to choose factors, compared to Rule N or a scree test?

  9. Anthony Watts
    Posted Mar 14, 2008 at 10:18 AM | Permalink

    RE 5 Akseli

    I think Tamino is just trying to keep you busy in this proxy issue. He and other RealClimate people are afraid that you can find out something else more shocking to public.

    More focus should be put on reliability of surface station temperature records of past 100 years because it’s something public understands. Clear numbers.

    I would agree, particularly about the part of “…it’s something public understands.”

    Steve Mc. The best service you could do for your own work on MBH 98 is to reduce in a crucible the essence of your discoveries to a nugget, a “blog sound bite”, and grant unfettered republishing to foster better understanding.

    Steve, if you could condense all your MBH 98 work into a 100-200 word “blog bite”, what would you say?

  10. Posted Mar 14, 2008 at 10:30 AM | Permalink

    I agree. I read your site every day and being a layman I understand very little of it, to be honest. In this instance, regarding the current post, I’m completely at a loss as to what exactly is wrong and what you’re trying to say. Thanks.

  11. kim
    Posted Mar 14, 2008 at 10:33 AM | Permalink

    Well, EJD, Mann was trapped in a thicket of confusion and when Steve carved him a path out of it, Mann took it, then repudiated his saviour.

  12. Wm. L. Hyde
    Posted Mar 14, 2008 at 10:58 AM | Permalink

    Actually, Steve has to be talking to scientists interested in this. As a layman, I too have been following this avidly for years now, and I must say that my old brain has taken in some of it, but still, limited as my understanding is, I don’t comment because I have nothing to add. CA is one of the more opaque blogs I read, and one of the most interesting. I know I will never ‘get it’ completely, but scientists in the field, who too often have felt muzzled, will certainly understand, and possibly be moved to investigate further on their own. Steve knows what he’s doing!

  13. Gerald Machnee
    Posted Mar 14, 2008 at 11:25 AM | Permalink

    Re last few comments: Steve’s work may be complicated, but it has to be done. Note that the misconceptions that Tamino created are not easy to understand either or pick out as nonsense except by the few knowledgeable contributors to this site as well as Steve. I also take comfort in the fact that Steve will correct any errors.

  14. Yorick
    Posted Mar 14, 2008 at 11:52 AM | Permalink

    I think Tamino is just trying to keep you busy in this proxy issue

    I have to wonder, since they seem to have gone so over the top, whether it is a “duck with a broken wing” strategy. You know, where the mother duck fakes a broken wing to draw a predator away from her nest of helpless ducklings?

  15. Tom C
    Posted Mar 14, 2008 at 11:58 AM | Permalink


    While I understand your motivation to correct the record every time it is distorted by Tamino and other propagandists, it is not a good idea to engage them. They are creating the illusion that there are still issues in play, that the accuracy of the MBH work is still an open question. It would be better to treat it as the closed question that it is. It would be better to describe the whole episode in a verbal formula something like:

    Mann and colleagues used a sophisticated statistical method incorrectly to try to claim that global temperatures for the past thousand years were accurately captured in the ring widths of one tree species in California. This despite the fact that ring widths of the same species do not correlate with temperature over the last 30 years. A panel of eminent statisticians reviewed their work and confirmed that the method was incorrect and that the results could not be trusted.

    Post something like that and be done with it.

  16. Pierre Gosselin
    Posted Mar 14, 2008 at 12:03 PM | Permalink

    I’ll second that motion. Steve’s stuff is over my head too, and I don’t have the time to invest
    to sort through it and figure it out.
    Perhaps Steve, if you have the time, you could conclude your entries in the future with a SUMMARY and conclusion. That would be a real clarifier and time-saver for us laymen and journalists.
    In general I think there’s overwhelming proof and consensus that MBH98 got an F.

  17. deadwood
    Posted Mar 14, 2008 at 12:07 PM | Permalink

    re GM at 13:

    It is certainly well beyond the two undergrad courses in stats that I took, and quite unlike geostats which I took as a grad student, but I can follow Steve’s logic.

    From my cross visits to “Open Minds” (Closed?), I believe I can see what the magic flute guy is trying to do here. The AGW crowd is worried by the apparent lack of warming in the last 10 years and are grasping at the “unprecedented” straw that the “Hockey Stick” provided them.

    They have a big hill to climb, as far as I can tell, but Steve is a better judge of that than I am.

  18. Steve McIntyre
    Posted Mar 14, 2008 at 12:35 PM | Permalink

    #15 and others. If people aren’t interested in statistical issues, then they’ve come to the wrong place. That’s what I like to write about. I also happen to be interested in statistical issues and methodological issues and again, people who just want answers have come to the wrong place.

    As to the issue here, it’s one that I’ve wondered about for a few years. Since Tamino has raised it, and since he has access to Mann, perhaps the matter can be clarified once and for all. Tamino said that this method was used in MBH98; let’s see the evidence. IF he does, then that will fill in a few squares of the crossword puzzle and the post will have accomplished something – we’ll know how the PC retention was actually done.

    IF Tamino can’t produce any evidence, then people can reasonably conclude that the method was not used in MBH98 after all and can consider that information in assessing Tamino’s and Mann’s assertions about what they did. That’s useful information as well.

    Either way represents progress so I think that this is worthwhile.

    And BTW I do not agree that MBH used a “sophisticated statistical” method. They used a complicated method but, in a statistical sense, it was actually pretty naive. I think that there is a decent chance of MBH becoming a classic example in statistics texts of how not to do things and so the time invested in it remains worthwhile.

  19. MrPete
    Posted Mar 14, 2008 at 12:50 PM | Permalink

    Perhaps a slight realignment of the graphics will help us in the peanut gallery understand. Experts: is this about right?

    The issue, described verbally:
    * Steve originally showed that the “most important” data point (BCP’s) was actually in
    fourth place for North America.
    * Mann (and Tamino) claim a specific procedure was used to pick out the top N data bits
    from each proxy… everything “above the line.” And so even if in 4th place, the BCP’s
    still belong.
    * Steve has now demonstrated that this procedure was not followed: sometimes they kept a lot
    more data, sometimes a lot less. No rhyme or reason.

    The issue, described visually:

    A) The graphs show how much each item contributes to the total value of
    the data set. So all the data points add up to a value of 1.0 (or 100%)

    B) The question raised is whether Mann picked his “important” data elements
    in a scientific and appropriate way.

    C) Steve has shown that Mann was inconsistent at first, and inconsistent
    later. Inconsistent period.

    D) Why it’s important: the data you keep or toss determines the outcome you get.


    I. The Original Inconsistency

    Blue dot/arrow: Mann claimed BCP’s were in #1 position (highest)
    Red dot/arrow: Oops. Steve proved it was in #4 position. [Mann/Tamino claim:
    the procedure keeps everything above the dotted stats-line.]

    II. Newly Revealed Inconsistencies

    Steve has demonstrated that Mann made both kinds of errors: keeping fewer than his procedure required, and keeping more than his procedure allowed. So where’s evidence supporting the idea of a valid procedure used for selection? It appears arbitrary.

    a) Keeping too little: he kept only two Vaganov points, should have been more.

    b) Keeping too much: he kept nine Stahle points, should have been far fewer.

  20. Alan S. Blue
    Posted Mar 14, 2008 at 12:52 PM | Permalink

    I’m with #14.

    The ‘blade’ is a piece of the hockeystick that is, essentially, a dry well. Excessive data outside the uncertainty won’t be sufficient. The blade will just be shifted or tilted. But it will still be “unprecedented.” Just with new and improved models.

    The ‘stick’ is the key piece moving AGW from a sideline to a critical issue. Can this technique be applied to determine which proxies most heavily contribute to the elimination of the Medieval Warm Period? Because this is the other side of the “unprecedented” position. It appears that a localized phenomenon involving bristlecones is being used (through teleconnection) to assert the blade. But another phenomenon – well established historically – which might be localized to just half the Northern Hemisphere, is not well represented in the hockeystick.

  21. Wm. L. Hyde
    Posted Mar 14, 2008 at 1:24 PM | Permalink

    Yes Steve, I agree with you. Your work output is colossal and I don’t see how you can accomplish so much work and still have time for squash tournaments. Your outline of strategy shows you would be good at chess, as well. My squash days are over because of my arthritis, but I can still play chess. Lay people should just go along with Steve’s methods, as he’s made it clear for years that he does what he does because it interests him. Keep reading! More and more sinks in all the time.

  22. mccall
    Posted Mar 14, 2008 at 1:55 PM | Permalink

    re: 9 “Steve Mc. The best service you could do for your own work on MBH 98 is to reduce in a crucible the essence of your discoveries to a nugget, a “blog sound bite”, and grant unfettered republishing to foster better understanding.

    You mean like, “MBH9x mines for hockey sticks!” Works for me.

  23. GTFrank
    Posted Mar 14, 2008 at 2:09 PM | Permalink

    15, 16, et al
    Summary? In a nutshell, for non-stat-expert folks like myself, this is the big picture. I always keep this in mind as I follow these topics, and see where the pieces fit in. I may have missed it, but I have not seen any posts on RC or Tamino that directly confront, accept, or explain why this is so, except to show that the output is correct. I look on those sites for a scientific explanation.
    BTW, I am here for the math and the methods. I hope RC and Tamino will explain the experimental scientific theories supporting these methods. I expect a physical scientist to describe how those statistics relate to the physical world.

    “Mannian” PCA Revisited #1

    “Whenever there is any discussion of principal components or some such multivariate methodology, readers should keep one thought firmly in their minds: at the end of the day – after the principal components, after the regression, after the re-scaling, after the expansion to gridcells and calculation of NH temperature – the entire procedure simply results in the assignment of weights to each proxy. This is a point that I chose to highlight and spend some time on in my Georgia Tech presentation. The results of any particular procedural option can be illustrated with the sort of map shown here – in which the weight of each site is indicated by the area of the dot on a world map. Variant MBH results largely depend on the weight assigned to Graybill bristlecone chronologies – which themselves have problems (e.g. Ababneh.)”

  24. Patrick M.
    Posted Mar 14, 2008 at 2:23 PM | Permalink

    Amazingly this is one of the first threads where I DO understand some of what’s being discussed.

    Hint: I wouldn’t try to understand PCA (Principal Component Analysis) from a statistical viewpoint, at first. Start from Linear Algebra. If you can get a grasp of what eigenvectors and eigenvalues *are* (not necessarily how to compute them), you will be half way to understanding PCA.

    Simple example: Eigen systems have been used to “compress” finger print images for organizations like the FBI.

    Basically you can compute the eigen values and eigen vectors from the image. The eigen vectors and values come in pairs where the eigen value describes how much of the variance the corresponding vector accounts for.

    So say we get 8 eigen vectors and their corresponding values from the image. The relative magnitudes of the eigen values will determine the percentage of the variance the vector accounts for.

    So our pretend vectors might show:
    vect1 accounts for 60% of the variance
    vect2 accounts for 20% of the variance
    vect3 accounts for 10% of the variance
    vect4 accounts for 8% of the variance
    vect5 accounts for 1% of the variance
    vect6 accounts for 0.5% of the variance
    vect7 accounts for 0.3% of the variance
    vect8 accounts for 0.2% of the variance

    So if you wanted to recreate the fingerprint and you only used vect1 you would get a fuzzy picture. Add in vect2 and the picture gets clearer. Add in vect3 and you get 90% of the variance of the original, (i.e pretty clear). Add up the “top” 5 vectors and you get 99% of the quality of the original. Since the last 3 vectors don’t account for much of the variance you can get rid of them and not lose much image quality, (that’s the compression).

    So when people are saying PC4 they are referring to the 4th largest eigen value. The 8% they are referring to is the amount of the variance PC4 describes.

    Steve: Feel free to snip this if I’m way off base.

  25. M.Villeger
    Posted Mar 14, 2008 at 2:29 PM | Permalink

    Wm. L. Hyde, I too am not a statistician and thus cannot argue on that subject the way Steve does. However, Steve McIntyre, by his demeanor, his detailed analysis and his willingness to get to the bottom of the questions, is showing the qualities of a scientist who has integrity and who could face vindication or apologies with the same stance. That’s credibility many others do not display. That is why I am fascinated with this blog. Also, for the non-specialist scientist, the overall discussions show the intricate processes behind some results and, just as in any science field, the extent of some methods’ domain of application. One gets a flair for the subject based on real science and not politics. A hard copy of the McIntyre et al. blog achievements in the style of the Springer Praxis series (Leroux, Kondrayshev) would be a reference.

  26. Steve McIntyre
    Posted Mar 14, 2008 at 2:30 PM | Permalink

    I prefer to think of all this in terms of establishing weights as so much cancels out when you calculate the NH temperature average. How much weight does one assign to bristlecones? You can almost compare it to a portfolio algorithm – there are some interesting mathematical similarities.

  27. Bob Mc
    Posted Mar 14, 2008 at 3:09 PM | Permalink

    Steve Mc

    Like it or not, you are the face on the skeptic side of AGW. It is not an honor you expected or even desire, but right now, for the “un-climate-science-educated” masses, you are the leader. People do look to you to investigate, critique, and question as necessary, the science behind Global climate predictions.

    You supply the kicking and screaming, we’ll supply the horses to drag you……………

  28. Kenneth Fritsch
    Posted Mar 14, 2008 at 3:41 PM | Permalink

    It seems quite possible to me that the Rule N version illustrated at realclimate was not actually used in MBH98. The first surfacing of the realclimate algorithm in its present form came only after Mann realized that the bristlecone hockey stick really did occur in the PC4 and was not the “dominant component of variance”. There is no contemporary evidence that it was used and there is no source code proving its use. Both Mann and Nature refused to provide details back in 2003 and 2004. The actual pattern of retained PCs cannot be reproduced by the realclimate algorithm as I’ve implemented it. As I said earlier, I’m not sure why Tamino has decided to pick at these particular scabs. In Mann’s shoes, I would have left this stuff alone.
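    For readers trying to follow the Rule N discussion, here is a rough sketch of the general idea (my paraphrase, with made-up data and a white-noise null, not the realclimate listing; a red-noise null, as would suit proxies, would raise the bar): retain PC k when its eigenvalue beats the 95th percentile of the k-th eigenvalue from Monte Carlo trials on random data of the same shape.

```python
# Hedged sketch of a Preisendorfer-style Rule N significance test.
# Synthetic data and a white-noise null; NOT the MBH98 or realclimate code.
import numpy as np

def rule_n(X, n_trials=200, pct=95, seed=0):
    """Count PCs whose variance fraction beats the Monte Carlo threshold."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    ev = np.sort(np.linalg.eigvalsh(Xc.T @ Xc))[::-1]
    ev = ev / ev.sum()                         # variance fractions, descending
    null = np.empty((n_trials, p))
    for i in range(n_trials):
        R = rng.standard_normal((n, p))        # white-noise null; red noise
        R = R - R.mean(axis=0)                 # would raise these thresholds
        e = np.sort(np.linalg.eigvalsh(R.T @ R))[::-1]
        null[i] = e / e.sum()
    thresh = np.percentile(null, pct, axis=0)  # 95th percentile per rank
    return int(np.sum(ev > thresh))

# Two strong signals buried in noise: the rule should keep roughly 2 PCs
rng = np.random.default_rng(42)
signal = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 10))
X = 3 * signal + rng.standard_normal((100, 10))
print("PCs retained:", rule_n(X))
```

    The point at issue in the thread is not whether such a rule can be written down, but whether this particular rule, applied consistently, reproduces the PC counts actually retained in MBH98.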

    For those who claim not to completely comprehend the technical analysis that Steve M is doing here, I think you have to first of all appreciate the fact that Steve M is first and foremost a puzzle solver and secondly that he is sufficiently polite and laid back not to jump to any conclusion about the motivations of those whose work he is analyzing.

    In fact, in this case motivation is not critical to the puzzle being solved here. The important point is whether Mann went into the PCA with his a priori criteria in hand for selecting the meaningful PCs, or whether they were artfully crafted after the fact (and in answer to the MM analyses). At this point one need not point fingers, but only take into account Steve M’s analysis. I like this one because the technical aspect is given in bite-size portions and the non-technical aspects are rather obvious.

  29. Steve McIntyre
    Posted Mar 14, 2008 at 4:14 PM | Permalink

    #28. Some method was used to select the number of retained PCs before the fact. It just looks to me like it was a different method than the one illustrated at RC. Maybe Mann merely forgot how he did it originally when asked about it in 2003 and 2004 and, when pressed, inadvertently produced a different method that he’d used elsewhere that had the purely coincidental benefit of purporting to justify the inclusion of the bristlecone PC4.

  30. kim
    Posted Mar 14, 2008 at 4:18 PM | Permalink

    William Ockham salutes #29.

  31. LadyGray
    Posted Mar 14, 2008 at 4:27 PM | Permalink

    Maybe Mann merely forgot how he did it originally when asked about it in 2003 and 2004 and, when pressed, inadvertently produced a different method that he’d used elsewhere

    Formality of Operations, Laboratory Manuals, and Written Procedures. As much as I have always hated these things, they are essential to good science. Hopefully everyone can learn from this. If your work won’t ever be audited or published, then it won’t matter. But if someone might someday look over your shoulder and ask questions, it might behoove you to be able to rationally explain what you’ve done.

  32. Steve McIntyre
    Posted Mar 14, 2008 at 4:34 PM | Permalink

    #29, 31. Of course, if this is what happened, it wouldn’t permit them to claim that I had made a “mistake” in implementing their precious method (a “mistake” that, if I made, being one that I had diligently sought to avoid by formally asking how they did the calculation, with the request being stonewalled.)

  33. Kenneth Fritsch
    Posted Mar 14, 2008 at 5:29 PM | Permalink

    Re: #29

    Maybe Mann merely forgot how he did it originally when asked about it in 2003 and 2004 and, when pressed, inadvertently produced a different method that he’d used elsewhere that had the purely coincidental benefit of purporting to justify the inclusion of the bristlecone PC4.

    The use of BCPs, the PC4 and Rule N all have a major bearing on how the a priori criteria were handled and thus on the resulting statistical conclusions. It is that part of the equation that comes through loud and clear in the blog exposition but not necessarily in the published scientific papers. Unfortunately, I think some readers and posters here do not fully appreciate those expositions without getting involved in the murky waters of motivation.

  34. Michael Jankowski
    Posted Mar 14, 2008 at 5:29 PM | Permalink

    Maybe Mann merely forgot how he did it originally when asked about it in 2003 and 2004 and, when pressed, inadvertently produced a different method that he’d used elsewhere

    All the more reason to be upfront with methods, either in the SI or in the…ahem…“methods,” “procedure”, etc. section of a paper (depending on what the journal’s format is). All the more reason that these methods should be available during the peer review process so that the reviewers can determine what was actually done.

    Still, it would be funny, and not just because it’s a pathetic excuse along the lines of “the dog ate my homework.”

    There’s an ardent MBH supporter and MM hater (Eli Rabett) who is a prof. He posted on another board a story about a student who didn’t get a good grade on a paper, project, or exam (I can’t remember which) in Rabett’s class because he did something wrong. The student approached Rabett with a textbook, pointing out that he had done exactly what was in the textbook, with no way of knowing the textbook was wrong. The professor smugly bragged about how he told the student that he should contact the publisher of the book and ask them for a better grade.

    If the aforementioned hypothetical “I cannot recall how I did it, but I did it right” excuse arises, it would be interesting to see how the hardline Dr. Rabett reacts.

  35. Ross McKitrick
    Posted Mar 14, 2008 at 6:53 PM | Permalink

    For the relative newbies bewildered by the post, the background at http://www.climateaudit.org/index.php?p=166 might help. The issue in this post is in one sense minor, but in another sense important, because during the detailed work on the hockey stick we found, at every turn, that Mann would throw out an argument that, on close inspection, turned out to be false. These things have to be recorded.

    In this case, we showed that the original basis for using the bristlecones as the dominant component of the data was based on an incorrect PC method. Mann could simply have argued that, upon correcting the PC method, yes the hockey stick pattern drops to PC4 and is thereby shown to be a minor regional pattern, not the dominant continental pattern as he first claimed, but he’d like to include the PC4 anyway. Instead, he got all huffy, claimed that there is some “rigorous” rule for retaining PCs that rigorously shows the PC4 has to be included in the data base, and that he rigorously used this rigorous test in his rigorous analysis, and that it is a kind of cookbook standard that everyone who knows anything always uses. Tamino has taken this propaganda at face value and chortles that Steve and I are dingbats for not knowing all this. Steve’s post simply shows that, in fact, Mann didn’t use the supposedly “rigorous” rule anywhere else in his analysis. His rules for retaining PCs were apparently haphazard and arbitrary. Either he had a rigorous rule and he hasn’t disclosed it (specifically refusing our request via Nature), or he didn’t actually follow any formal PC retention criteria and his later claims to have done so are false. The one thing he cannot claim is that he consistently used the Preisendorfer Rule N to decide on PC retention in his various proxy networks.

  36. George M
    Posted Mar 14, 2008 at 8:39 PM | Permalink

    Let’s see if this describes the disconnect which some readers suffer. The General Circulation Models (GCMs) of Climate Science require a great quantity of statistical operations to reach their conclusions. Steve Mc is an experienced and accomplished statistician, while the inference from external examination of GCMs and other pronouncements of the entrenched Climate Science community is that their statistical training may be incomplete. So you have non-statisticians arguing with a statistician and, from what I read here, not coming off too well, so they tend to dissemble.
    The larger picture, which Steve eschews, is that these shaky mathematical manipulations form a large part of the basis for expensive, world-changing proposals from the AGW proponents. The BCP statistics are interesting in their own right, but proxies and then pseudo-proxies, on top of the absence of any scientifically supportable connection between BCP rings and temperature, make their application to the AGW debate somewhat murky. And how on earth the concept that BCP analysis is good historically but not recently could be believed simply boggles the mind.

  37. John A
    Posted Mar 14, 2008 at 8:43 PM | Permalink

    I was for a long time baffled about posts regarding Preisendorfer Rule N, retention rules and what was done in MBH98.

    Now I’m a lot less baffled, and more intrigued, because each “assault” upon Steve and Ross’ work has allowed the rest of us non-statisticians to get an insight into statistics, and into how truly meaningless results like MBH98 (aka “The Hockey Stick”) can impact all of our lives.

    So I’d encourage everyone who sits on the sidelines and wonders about these issues to not be discouraged or dismayed, but ask simple questions both here on the blog, and on the message board ( http://www.climateaudit.org/phpBB3 ).

    I’d also recommend Ross’ overview papers on this and a whole host of related topics at http://ross.mckitrick.googlepages.com which take you very gently into the bizarre world of reconstructions of past climate, why integrity in statistical methodology is so important and why we’ve all been betrayed by the actions of a relatively small group of people.

    As I say these types of posts seem arcane, but trust me, persistence and reading around the subject are good strategies.

  38. John A
    Posted Mar 14, 2008 at 8:50 PM | Permalink

    Oh and Mr Pete (#19):

    Thank you very much for clarifying and simplifying what baffled me about those sorts of plots. Now I’m a little less ignorant and a little more intrigued about retention rules in MBH98.

  39. Yancey Ward
    Posted Mar 14, 2008 at 10:13 PM | Permalink

    I agree with John A- Mr. Pete made it quite clear. Too bad I can’t buy him a beer.

  40. tetris
    Posted Mar 14, 2008 at 10:18 PM | Permalink

    Steve Mc
    Have to agree with Anthony W in #9. Given your predilection for the finer points, Tamino/RC and Co are managing oh so nicely to get you to studiously get lost amongst the green washers, red bolts and blue nuts of the Mannian opus.

    You probably enlightened more people last year with the changes in the US temp record forced on GISS by your work than with anything else.

    Incidentally, where IS Waldo? That key question appears to have been lost amongst the green washers, blue bolts, etc., of proving the Hockey Team wrong, yet again. Even the MSM [aka: mainstream media] appear to have understood that the problem with Mann et al. is not the blade but the handle. For all practical purposes Mann and Co have made their “exit left”. Move on.

    Although Anthony didn’t put it that way, consider the possibility you may be losing traction. The finer points of statistics have only so much media “hang time”.

  41. Schlew
    Posted Mar 14, 2008 at 11:31 PM | Permalink

    Just gotta say this whole discussion is really cool. As an engineer pretty heavily involved in signal processing, a lot of this is beginning to make sense to me. As for the hockey stick, I couldn’t care less as to whether it’s validated or debunked. It’s the process and methodology that fascinates me.

    Carry on…

  42. J. Peden
    Posted Mar 14, 2008 at 11:43 PM | Permalink

    tetris #40

    Although Anthony didn’t put it that way, consider the possibility you may be losing traction. The finer points of statistics have only so much media “hang time”.

    Naw. On the contrary, Steve is rather uniquely finding out about, describing and displaying what is done in Climate Science’s [mainly Mann’s] method by auditing it, and in a completely scientific way, imo. This appears to be necessary and also interesting, because the CS method starts to look, and in some respects has even been proven to be, unscientific – and in some very major respects, starting right at the basis for proving or disproving a hypothesis: the data and the handling of the data, in this case the data forming the base of AGW claims.
    If the whole of AGW is a house of cards, simply taking out one card, especially at the base, can cause the whole thing to collapse, though this is not Steve’s intent, as he is approaching it scientifically by simply trying to ascertain what Mann has done, as a valid phenomenon within reality himself. If it happens to threaten Mann etc., so be it, and so what?

    How Steve’s work supposedly does not relate to Climate Science’s method and the AGW issue is beyond me. I can’t see what more anyone could want, even though I’ve been frustrated myself in trying to catch up reasonably well with the statistical analysis involved. But that’s my problem, and I like it.

    Also, the related subjects covered here on CA are very broad, and the minds I see at work here are literally awesome. And I am seeing, further, a gigantic ripple effect in the broader scientific and political worlds. I’m watching them, too.

  43. David Jay
    Posted Mar 15, 2008 at 6:29 AM | Permalink


    Too bad I can’t buy him a beer

    Got Paypal? 😉

  44. Patrick M.
    Posted Mar 15, 2008 at 7:48 AM | Permalink

    Even if the current established AGW crowd doesn’t respond to CA’s challenges, the next crop of climate scientists will certainly be warned about CA and will have its criticisms in mind when they do their research. Whether Steve McIntyre gets credit for it or not, he’s making the next generation better.

  45. peter
    Posted Mar 15, 2008 at 8:34 AM | Permalink

    To the people who want a non-technical summary, it’s already been done: it’s Ross McKitrick’s
    “What is the Hockey Stick Debate About?” Easy to follow. All you have to read is the part about the HS itself. The framing portions you can take or leave as you like. There is also a little tutorial on PCA, “A Tutorial on Principal Components Analysis” by Lindsay I Smith, which is fairly easy to follow. Then there are the rather shorter M&M notes, “M&M Backgrounder Winter 2005: PCA Supplement”. Not sure where exactly I got that, but it was probably someplace on the CA site.

    This is probably enough to enable one to at least follow the main lines of the discussion without being totally swamped.

  46. Posted Mar 15, 2008 at 8:46 AM | Permalink

    The link is here: “What is the Hockey Stick Debate About?” by Ross McKitrick

  47. Marine_Shale
    Posted Mar 15, 2008 at 8:58 AM | Permalink

    Steve McIntyre

    Thank you once again for your blog and the opportunity to comment.
    I cannot agree with many of the comments on this thread regarding “summarising” and “blog sound bites”, simply because that is the whole problem with “climate science” at the moment: too much argument from authority and not enough exposition of the basis of various claims.
    It is interesting that some people have suggested that there may be an appetite in some quarters to divert your attention from the issue of the surface temperature record. I would argue that one thing is quite nicely linked to the other.

    If you hadn’t gone into an in-depth analysis of various proxies and PCs (and weightings of same) in previous threads, I would never have researched the Tasmanian series that was done by Cook in 1991. The interesting thing about Cook’s Huon Pine series, as you know, was the fact that the sampling was done at Mount Read in the western half of the island but the temperature calibration was done with surface stations located in the eastern half of the island. The stations were Hobart, Launceston and Low Head lighthouse. Not only were all the “calibration” stations in a different climatic zone (temperature and precipitation) from the sample zone, they all had issues with UHI effects.

    To make things even more interesting; the station data has subsequently been “adjusted”, first by Torok and Nicholls in 1996 and again, I think, by Della-Marta in 2004.

    So where does that leave that particular proxy record, has the proxy record been adjusted as well?
    Divert attention from the surface record?- perhaps not.

    Steve, regarding Cook’s series, are you able to tell me why Dr Mann only used the data from 900 AD when the original series went back to 1600 BC?

    Also, the data Mann has archived for his Tasmanian series is different to Cook’s data archived at NOAA (which was also adjusted to account for summer freeze events that suspended growth for five to eight years). http://gcmd.nasa.gov/records/GCMD_NOAA_NCDC_PALEO_1998-040.html

    Thanks again

    Steve: Mann often uses obsolete versions of proxy data, as we observed in MM2003. In this case, he used a 1991 version and has even continued to use this in Mann et al 2007.

  48. Posted Mar 15, 2008 at 9:14 AM | Permalink

    Although Anthony didn’t put it that way, consider the possibility you may be losing traction. The finer points of statistics have only so much media “hang time”.

    As my Nana used to say: “It’s a dirty job, but somebody’s gotta do it”. (Actually, I don’t recall her ever saying that to me, but I’m sure she said that to somebody).

    Like many here, I get all blurry-eyed when Steve’s analysis climbs into the numbers-and-methodology atmosphere. But I usually get the gist of the technical criticism.

    Steve, thanks for your hard work.

    P.S. I tend to live by my inverse version: “It’s a dirty job, so why should I have to do it?”

  49. Tom C
    Posted Mar 15, 2008 at 9:32 AM | Permalink


    I feel pretty silly trying to give you advice, since it is your blog, your energy, your accomplishments, etc. All I was trying to say in #15 above is that Tamino et al. are not arguing in good faith. They are propagandists. You will never “win” the argument any more than you already have. I’d say it is a waste of time to engage them; it only gives them a chance to say that you are obsessing about a ten-year-old paper. Anyway, many thanks for your fantastic work and impressive temperament.

  50. timothy
    Posted Mar 15, 2008 at 9:44 AM | Permalink


    Oh wait…

  51. tetris
    Posted Mar 15, 2008 at 10:04 AM | Permalink

    Re: 42 and 48
    I have previously made it clear that I think that Steve Mc is doing a superb job. No changes there.

    As #45 and #46 point out, we already have Ross Mc’s summary of all that’s wrong with the Hockey Stick. My concern is that Tamino et al. are cleverly getting Steve to come back to the Mann opus time and again, because they know that is one of his hot buttons.

    Meanwhile, the debate is steadily moving on elsewhere, with Pielke, Lindzen, Watts and a few others now analyzing and commenting on data that appears to indicate that global temperatures have stalled and are showing downward trend lines since anywhere from 1997 to 2003. We have Lucia who has started to take a closer look at the innards of IPCC projections, with some puzzling first conclusions. And the list goes on.

    It seems to me that it’s becoming pretty clear that the entire AGW/ACC debate is rapidly moving on, and at the political level has started to produce significant shifts towards “reality”. As I said above, even the mainstream media [from the NYT, the Boston Globe, the Times of London, etc., take your pick] have understood that Mann’s story is fundamentally flawed [as in: forget the blade, it’s the shaft that’s wrong] and have moved on.

    As much as I admire Steve Mc’s work and tenacity, as far as the Mann opus is concerned we’re dealing with the law of diminishing returns. I believe that, for instance, getting back to stringent analysis of the various significant shortcomings of the ROW temp record would be much more useful. It is that sort of work that caused GISS to amend its US temp record, and the media did take note. I rest my case.
    That said, keep up the good work Steve.

  52. Gerald Machnee
    Posted Mar 15, 2008 at 10:29 AM | Permalink

    Re #49 and #51, etc.
    While others are doing what they know best, so is Steve. The challenges issued by Steve are interesting, as they request a direct reply, which may not be forthcoming. There is a lot more to be done, even if some of it has to be repeated. Watts and Steve have not even touched the stations outside North America and their continuity and moves. Whether it is ten-year-old studies or not, it may have to be repeated until the IPCC and other agencies see the light. I have referred many to this blog. Statistics is Steve’s party, and I am also having fun (without the rum).

  53. Steve McIntyre
    Posted Mar 15, 2008 at 10:30 AM | Permalink

    I have a considerable amount of unfinished business in terms of proxies. There are some interesting points that need to be written up formally, and I notice new things whenever I revisit these topics. I’ve noticed some things in this visit to proxy land that I’m going to post up.

    Please understand that I really am not studying these things with a view to changing policy.

    As to the current status of the HS debate, the GA Tech presentation is more up to date than Ross’ early 2005 summary.

  54. MrPete
    Posted Mar 15, 2008 at 10:58 AM | Permalink

    Seems to me that nurturing better science is more important in the long run than impacting policy decisions in the short run.

  55. Enochson
    Posted Mar 15, 2008 at 11:08 AM | Permalink

    I am a long time CA lurker. I am interested in the statistics and the method but do not have the sophistication to follow all of the arguments. Fortunately, some of the comments help. I agree with Watts that the meaning of the work could be better explained. Perhaps Watts can do that on his site?

  56. steven mosher
    Posted Mar 15, 2008 at 11:12 AM | Permalink

    it’s a gift that keeps on giving, folks.

    The best thing for them to do is to admit that the hockey stick was in error and do
    the science right. Their sole concern is a marketing one.

  57. Smokey
    Posted Mar 15, 2008 at 12:59 PM | Permalink

    It appears that the skeptics’ view is getting more traction in the media.

  58. hswiseman
    Posted Mar 15, 2008 at 1:52 PM | Permalink

    I am afraid that there is no choice but to respond to the aspersions cast by the likes of Tamino. Any unchallenged assertion will be treated as an admission. I remember a recent instance where SM had the temerity to go out for dinner before answering some accusation or another. A federal case was immediately mounted by the troll du jour. Today’s piece of analysis happens to be a nice distillation of some of the fundamental shortcomings in MBH. If the phone doesn’t ring, you’ll know it’s Tamino.

  59. Geoff Sherrington
    Posted Mar 15, 2008 at 7:24 PM | Permalink

    This is vital work. It is not the time to wrap it up as a polite intellectual exercise.

    The widespread use of statistics in climate (and some other) sciences has 3 fundamental hurdles:

    1. Are the data amenable to stats investigations? (The answer usually requires stats pilot runs that can be quite long)

    2. What is the optimum stats methodology for the investigation? (There is often no objective answer to this, though corroboration from several methods is helpful.)

    3. Is the stats methodology done correctly by the book? (New books on stats, not new experiments within papers on other topics, are better housekeeping).

    Using the work cited above, Steve and Ross have shown deficiencies on all points. Severe ones, distorting ones for policy makers.

    Having shown the ability to find and express deficiencies on this data set, Steve and Ross have established credentials to move to other data sets. After all, I guess that many CA regulars take interest because official conclusions sometimes appear to be wrong.

  60. steven mosher
    Posted Mar 15, 2008 at 7:34 PM | Permalink

    RE 58.

    tamino is a surrogate. tamino is a PROXY warrior. Mann will not and cannot come out to fight.


    1. He cannot risk losing. It would undermine the IPCC, AIT and “climate science”;
    even the perception of losing a debate is catastrophic for the warmers.

    2. The party line is “there is no debate”. So he personally hates being attacked and
    being called a [self snip] and compared to [self snip], BUT the talking points he has
    been issued say: don’t debate sceptics. The issue is settled.

    THUS the need for surrogates and Proxies. Surrogate and proxy warriors work best
    when they are anonymous, when they can’t be tracked back to the people who really
    want to make the argument.

    I hesitate to use the words “proxy warrior” because the term is pretty inflammatory and people
    might think I am comparing them to terrorists or such. That’s not the point. The point is
    this: AGWers have chosen a media strategy that says “thou shalt not debate”; the debate is over.
    So, in those cases where “mistakes are made” they need a way to answer the charges
    without ruining their reputation and authority. This is handled through surrogates and proxies.

    There is nothing out of the ordinary about this media strategy. Guerrilla 101.

  61. tetris
    Posted Mar 15, 2008 at 7:43 PM | Permalink

    Re: 53
    Steve Mc
    I understand your statement that “I’m not studying these things with a view to changing policy”.
    As a long-time CA supporter and occasional contributor [snipped/deleted more than once, but that’s your prerogative], I have tried to point out that while you may not be doing this explicitly to change policy, the undeniable fact is that the results of your work [alone or with Ross Mc] have had a noticeable impact on AGW thinking in general and, more to the point, in the world of day-to-day politics around the world.

    Whether this was your intention or not or even whether you like this outcome or not, is immaterial. I broadly track [in 4-5 languages around the world] the “politics” of the AGW/ACC story as diligently as you track the “details”. Without a doubt, your work has had a very considerable impact in that it [as in last summer’s GISS episode] causes the “mainstream” media folks to actually think again. To achieve this in a media world that by and large supports the notion of a AGW “consensus” is highly commendable. It is hard to argue with GISS re-organizing its US temp history based on your flagging the “UFA” and in the process pointing out 1934 and 1921 as # 1 and 3 and 1998 as #2. In part because of your work, the “science is settled/there is AGW consensus” myth has been broken, once and for all.

    Any study that tells us it is based on “95% certainty” also tells us there is in fact very little statistical “certainty”, and it should be treated as such. What the IPCC has very cleverly done in its latest epistles is to bias the “levels of confidence” scale heavily in its favour, so that 95% is now “near certainty” and not merely “possible”.

    As I suggested above, pls revisit the ROW temp data issue. The question marks you have already highlighted, further examined and, say, linked to what Anthony Watts has been doing with the US database, will help more and more people to understand that the underpinnings of the AGW/ACC story contain more “we don’t know/understand” and “the data are open to serious interpretation” than “the science is settled”. As the politicians get this message more and more often, we may see even more “reality checks” from that quarter as well.

    Again, pls keep up the good work. What you’ve done so far and what you’re doing is very important.

  62. theduke
    Posted Mar 15, 2008 at 9:28 PM | Permalink

    Mosh 60: Exactly. Proxy warrior, proxy warrier, proxy worrier, whatever.

    What Steve Mc’s post proves is that Mann doesn’t debate because the hole is already too deep. The law of holes may not be a scientific concept, but it’s good to observe it nevertheless.

  63. Michael Hansen
    Posted Mar 16, 2008 at 7:50 AM | Permalink

    Tamino is Grant Foster. Together with Annan, Gavin Schmidt and Mann he co-authored the hit job on Stephen Schwartz, and also wrote a guest post on this issue at Real Climate, now using the Tamino pseudonym. Talk about split personalities here…

    Anyway, it’s fair to say that Tamino, aka Grant Foster, is affiliated with the team, and therefore cannot be considered neutral in this dispute.

  64. Steve McIntyre
    Posted Mar 16, 2008 at 7:55 AM | Permalink

    #63. The Stephen Schwartz article warranted analysis – I’m far from sold on his handling of autocorrelation; I worked up some thoughts at one point and meant to pull a post together but didn’t finish it.

  65. Bernie
    Posted Mar 16, 2008 at 10:47 AM | Permalink

    Steve McIntyre, to my knowledge, has never indicated where he stands on the strong AGW hypothesis promulgated by IPCC. He has focused on addressing what he believes are defects in the way some climate scientists “practice” their science. This thread is an example of the poor/flawed statistical methodologies used by Mann. Other threads focus on the quality of the temperature data.
    Some of us may have much firmer positions on the strong AGW hypotheses – but it is inappropriate to assume that Steve shares these views even if what Steve has done has to some extent supported the non-strong AGW position. It is Steve’s objectivity that I believe persuaded Judith Curry that Steve was not only highly knowledgeable and skilled, but also open minded on climate issues, perhaps more open minded than some strong AGW scientists.
    For me, this work is an extremely useful exercise in sharpening my thinking about statistical and measurement issues. At a broader level, Steve’s blog and the tone he has set illustrates the power of the internet to educate and inform. I am very impressed by the number of “old dogs” on this site who are intent on learning some “new tricks”.

  66. MarkR
    Posted Mar 16, 2008 at 11:44 AM | Permalink

    Mosher #60 Mann has already lost according to both Republicans and Democrats on the Congressional Record. It’s just that no-one seems to be able to publicise the fact.

    Mann will not and cannot come out to fight.EVER.WHY?
    1. He cannot risk losing.

    JULY 19 AND JULY 27, 2006

    MS. SCHAKOWSKY (Democrat). –that point. I know, but your question wanted
    to reinforce the notion that this was based on this false or
    inaccurate Dr. Mann study

    MR. STEARNS. Well, I think–
    MS. SCHAKOWSKY. –and it is not.


  67. steven mosher
    Posted Mar 16, 2008 at 12:21 PM | Permalink

    re 63. I usually refer to him as ‘sunglasses.’ Foster, Grant. The whole point of grant foster
    hiding behind foster grants is as follows.

    1. he can talk about time series statistics on his web site without having to actually present
    his credentials. This fools everyone but professional statisticians who actually work in the field.

    2. he can hide his connections to nasa. Other proxy warriors do the same thing. Confederacy is
    turned into consensus.

    3. he can be a bully without impugning climate science’s reputation or his own.

    I have no problem with any of this. It’s all standard stuff. get used to it. it doesn’t go to the truth
    of the matter.

  68. Moped
    Posted Mar 16, 2008 at 12:46 PM | Permalink

    What’s his connection to NASA other than co-authoring with Gavin Schmidt?


  69. Aaron Wells
    Posted Mar 16, 2008 at 4:27 PM | Permalink

    re 68. Moped, I followed your link and read up on Grant Foster. LOL!! It describes 2 programs he was responsible for, both of which were written in BASIC and running under MS-DOS. I hope he has more than that.

  70. steven mosher
    Posted Mar 16, 2008 at 4:42 PM | Permalink

    Guys. NONE of this frees you from the obligation of pointing out tammy errors. NONE OF IT.

    the errors are the errors. Stick to the errors.

    However, when you get CURIOUS about why a smart man would make these errors, as we all do, then
    you need an explanation.

    1. he had a brain fart, and YOU were wrong to think he was smart.
    2. Something else. Global Warming induced obsequious sucking up.

  71. Raven
    Posted Mar 16, 2008 at 4:45 PM | Permalink

    From the link in #67
    “Since working for the AAVSO, Grant has spent much of his time pursuing his musical career. Playing Irish music festivals and local venues whenever he has the chance, Grant plays guitar and sings Irish ballads.”
    I guess that would explain the choice of an opera character for a pseudonym.

  72. steven mosher
    Posted Mar 16, 2008 at 4:46 PM | Permalink

    that’s enough tinder for today’s fire

  73. John Lang
    Posted Mar 16, 2008 at 6:36 PM | Permalink

    Given that Tamino is an astrophysicist and a trained statistician, he does not get a pass from me on the conduct he has pursued on his website.

    His background should indicate a higher degree of integrity than the quoting out of context, the manipulation of basic data, the censoring, and the lack of objectivity which is evident on the blog.

    It is clearly a waste of time to post there or debate Tamino at all.

  74. Bernie
    Posted Mar 16, 2008 at 7:27 PM | Permalink

    Ah, but why not Monostatos?

  75. deadwood
    Posted Mar 16, 2008 at 7:53 PM | Permalink

    #74 – It’s the “Magic” that makes it appropriate for his pseudonym.

  76. Ron Cram
    Posted Mar 16, 2008 at 10:41 PM | Permalink


    Thank you for responding to Tamino’s post. If you don’t respond, certain people may think he is correct. It is unfortunate Tamino will not allow any debate on his site.

    Keep up the good work, Steve.

  77. MrPete
    Posted Mar 17, 2008 at 4:22 AM | Permalink

    Ron Cram, it’s incorrect to say that Tamino doesn’t allow debate. So far, he has yet to censor any of my comments, and I’ve posted quite a few.

    Sure, he has his buttons, and has his own biases. And that almost certainly means certain posters or ways of saying things will get snipped. But that happens everywhere.

    Our host here also has off-limits topics and attitudes.

    My sense about Tamino is, if you want to vigorously discuss the science, great. As soon as you impugn motives or integrity of people, you’re in shaky territory — and he is more likely to snip a skeptic than an alarmist once you’re in that territory.

    And right now he’s sick of the name calling that’s been going on, so he’s heading into a “quiet period” where he’s gonna try to tone down the level of debate. I dunno how that will turn out.

  78. MrPete
    Posted Mar 17, 2008 at 4:25 AM | Permalink

    (Whether and when you think it worthwhile to try to say something over there is an entirely separate topic. Personally, I think John Lang’s statement above has a lot of merit. But I’m also the eternal optimist so every once in a while I try to call people to step back and look at the picture with fresh eyes… 🙂 )

  79. Willis Eschenbach
    Posted Mar 17, 2008 at 4:48 AM | Permalink

    Well, after much procrastination, I finally copied the R code and ran the calculations. I wrote a loop to look at all twenty of the steps … man, there’s not even a pretense of adhering to Rule N. Only a few of the twenty adhere to Rule N. If you drew random lines you’d likely do better. On some of them, all ten PCs were above the line according to Rule N … but not all ten were used. On another, three were distinguished by Rule N … but nine were used. Go figure …

    Good to see another part of the perpetual bob and weave about MBH98 bite the dust. Rule N wasn’t used. Simple as that.


  80. fFreddy
    Posted Mar 17, 2008 at 10:15 AM | Permalink

    Willis, would it be possible to generate a table showing, for each step, the Rule N number and the number actually used ?

    Steve: My calculations are here: http://data.climateaudit.org/data/mbh98/preisendorfer.info.dat . The column headed “preis” shows what results from my Preisendorfer Rule N calculation. The third column shows the MBH number. Nothing matches. Actually even the AD1400 example at realclimate doesn’t precisely match.

  81. Posted Mar 17, 2008 at 12:13 PM | Permalink

    This is surely a dumb question, but what exactly is the formula for Rule N? It’s not mentioned in the post, that I can find, nor in the Realclimate post linked. There’s some discussion in Steve’s 8/05 post at http://www.climateaudit.org/?p=291, but even there I don’t quite get it.

    If there are N unit variance series, the N eigenvalues must sum to N. Does Preisendorfer’s rule call for excluding all PC’s but those whose eigenvalues exceed 1/N of the total variance, ie all but those whose eigenvalues exceed 1?

  82. Steve McIntyre
    Posted Mar 17, 2008 at 12:40 PM | Permalink

    It’s a simulation procedure described in chapter 5 of Preisendorfer 1988. The realclimate variation does a simulation in which (say) 70 synthetic series are generated with AR1 coefficients determined somehow (I’ve calculated AR1 coefficients using arima in R for the sites in each network.) An SVD is then done on the synthetic network and the eigenvalues retained. The Preisendorfer Rule N “exploratory” test is that the eigenvalue in the actual network must be greater than the 95% eigenvalue from the synthetic network. This turns out, in practice, to allow somewhat fewer PCs than a 1/N rule.

    It’s not that this is a bad procedure for exploratory purposes. My issues are: 1) that I am unconvinced that it was actually used in MBH. Perhaps they now wish that they’d used it, but the evidence is against its actual use; 2) passing a Rule N test is the beginning of scientific testing, not the end. Bristlecones are a distinct pattern in the NOAMER network; that doesn’t mean that they can detect worldwide temperatures.
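    [Editor’s note: the simulation procedure described in #82 can be sketched in code. This is a minimal illustration in Python (rather than the R used in the posts), not the MBH or realclimate implementation; the network size, series length, AR1 coefficients and replication count are all placeholder values.]

```python
import numpy as np

def ar1_series(n, phi, rng):
    """One synthetic AR(1) noise series of length n with coefficient phi."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def eigenvalues(data):
    """Eigenvalues of a (years x series) matrix as fractions of total
    variance, via SVD of the standardized columns (descending order)."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    s = np.linalg.svd(z, compute_uv=False)
    ev = s ** 2
    return ev / ev.sum()

def rule_n_cutoffs(nyears, phis, nsim=1000, pct=95, seed=0):
    """For each eigenvalue rank, the pct-th percentile eigenvalue across
    nsim synthetic networks of independent AR(1) series -- the cutoff
    an actual eigenvalue must exceed to pass the exploratory test."""
    rng = np.random.default_rng(seed)
    sims = np.empty((nsim, len(phis)))
    for i in range(nsim):
        synth = np.column_stack([ar1_series(nyears, p, rng) for p in phis])
        sims[i] = eigenvalues(synth)
    return np.percentile(sims, pct, axis=0)
```

    A PC in the actual network passes the exploratory test if its eigenvalue exceeds the cutoff at the same rank; because the simulated spectrum is already ordered, these cutoffs sit above the flat 1/N level for the leading ranks, which is why the rule tends to admit fewer PCs than a 1/N rule.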

  83. Posted Mar 21, 2008 at 11:47 AM | Permalink

    Steve — in the original post above you wrote,

    In fact, when one re-examines the chronology of when Rule N was first mentioned in connection with tree rings, it is impossible to find any mention of it prior to our criticism of Mann’s biased PC methodology (submitted to Nature in January 2004)

    Yet in MBH 98 (p. 786, column 1 near middle), they wrote, “Preisendorfer’s selection rule ‘rule N’ was applied to the multiproxy network to determine the approximate number N_eofs of significant independent climate patterns that are resolved by the network…”

    So I think it can be said that rule N was at least on the table as early as 1998. (Of course, this is not to say that they really used it, or that they used it correctly, etc.)

    Hu, I’m aware of that comment and I think that I’ve quoted it in the past. This comment describes how they determined the number of temperature PCs to reconstruct. This step can be seen in the source code. This is a different step from the determination of retained PCs in tree ring networks, where the only thing that they say was:

    Certain densely sampled regional dendroclimatic data sets have been represented in the network by a smaller number of leading principal components (typically 3–11 depending on the spatial extent and size of the data set).

    This particular statement doesn’t preclude the use of Preisendorfer’s Rule N, but Rule N has nothing to do with “spatial extent”, and the mention of it indicates that something else is involved. But perhaps not. Mann didn’t produce the relevant section of source code for tree ring networks, so no one really has any idea right now how the calculation was actually done and, as noted elsewhere, it seems impossible to figure out a rule that yields both the AD1600 Vaganov and AD1750 Stahle SWM retentions.

  84. hswiseman
    Posted Mar 29, 2008 at 5:57 PM | Permalink

    Re: #84

    JFK or ORD and the like have ASOS stations arguably located in the middle of 24/7/365 blast furnaces. But as you correctly point out, the instrumental record is probably as accurate as you are going to get. That would make these sites suitable longitudinally as long as you acknowledged that airport heat island effects are baked into the nominal temperature record. ASOS also would allow you to compare ASOS TAN (Taunton MA), a prop-job, daylight, onesy-twosey airstrip, to nearby KBOX (NWS Taunton) and to nearby BOS and PVD. With a little work, you might tease out a valid AHI factor. It is the absence of conventions in measurements and phony-baloney adjustment factors that have transformed the entire ground-level temperature exercise into a huge waste of time, money, oxygen and bandwidth. There are a lot worse starting points than ASOS. Maybe that’s why they are the stations making it into the grids.

  85. Posted Apr 1, 2008 at 9:26 PM | Permalink

    Re #81, 82, I’ve finally gotten ahold of a copy of Preisendorfer, through interlibrary loan since it’s rather hard to find, and find his Rule N to be very reasonable.

    What I didn’t get from Steve’s #82 is that a large number of Monte Carlo replications of the data set are generated with unit variances and zero correlations across series, so that the true eigenvalues for each replication are equal and the computed eigenvalues differ only randomly. Then if the actual first eigenvalue (with normalized data) is greater than the 95th Monte Carlo percentile, it is retained for further investigation. If then the second is greater than its (slightly lower) Monte Carlo percentile, it is retained, and so forth until the eigenvalues are no longer significant by this test.

    Preisendorfer points out that a correction must be made if the series are in fact serially correlated, since this reduces the effective independent sample size. Rather than make his indirect sample size correction, it would seem easier just to generate series with the empirical autocorrelation whatever that may be.

    I would quibble with Preisendorfer on three small points.

    The first is that he in fact says to retain all eigenvalues that are greater than their respective 95th percentiles. But under the null of equal (and therefore uninteresting) eigenvalues, about 5% of the eigenvalues should be greater than their 95th percentiles, so that with eg 100 series, this rule might randomly pick eg PC4, 39, 52, 63, and 92. The rule I set out above would not stoop to PC92 (or PC4 for that matter) unless all of PC1 – PC91 (or PC1 – PC3) are also significant.

    Of course having passed Rule N just means that the PC should be considered in a next-step calibration equation, and does not imply that it will have any significant correlation with temperature or whatever.

    My second quibble is that he doesn’t direct you to normalize the replications to unit sample variance. Even though they were drawn with unit population variance, they will never have unit sample variance, and therefore will have different properties than your sample under the null, if the sample has been normalized.

    My third quibble is that he erroneously states that with say 100 replications, the 95th largest realization should be your 95% critical value. In fact, this is the 95/101 critical value. To obtain a true 95% critical value without interpolating, you should use say 99 replications, and then take the 95th of these as the 95% critical value. Under the null, your sample is then the 100th realization of the process, and the probability is exactly 5% that it is one of the 5 largest draws, ie greater than the 95th largest of the 99 artificial samples. Jean-Marie DuFour at U. Montreal is very fond of this point.

    (In fact, DuFour insists that you only need to take the largest of 19 replications. It’s true this gives an exact 5% test size, but it lacks the desirable property of replicability. I would instead go for at least 999 replications, and take #950.)

    It’s unclear where the name “Rule N” comes from, since his sample size is n, not N.
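    [Editor’s note: the stepwise retention rule Hu proposes in his first quibble is easy to state in code. A minimal sketch in Python, with made-up eigenvalue and cutoff numbers purely for illustration:]

```python
def stepwise_retain(actual, cutoffs):
    """Hu's stepwise variant of Rule N: retain PC1..PCk only while each
    eigenvalue exceeds its Monte Carlo percentile cutoff, stopping at
    the first failure -- so an isolated 'significant' low-order PC
    (the PC92 of his example) is never picked up on its own.

    Per the Dufour point in the third quibble, the cutoffs themselves
    should come from, say, 999 replications with the 950th-smallest
    taken as the critical value, for an exact 5% test size."""
    k = 0
    for a, c in zip(actual, cutoffs):
        if a <= c:
            break
        k += 1
    return k

# Illustrative numbers: PC3 fails its cutoff, so retention stops at 2
# even though PC4 exceeds its own cutoff.
n_retained = stepwise_retain([0.50, 0.30, 0.10, 0.20],
                             [0.40, 0.20, 0.15, 0.10])  # -> 2
```

    Note the contrast with applying the 95th-percentile test to each rank independently, which (as the comment observes) would be expected to flag about 5% of the ranks by chance alone even under the null of equal eigenvalues.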

  86. Posted Aug 26, 2009 at 10:54 AM | Permalink

    Per hswiseman’s comment at #84:

    I’ve done a data analysis that simply looks at the addition of thermometers to airports, by latitude, over time. Your point is well made. Not only is the AHI important, but it grows dramatically over time in the bulk record as more and more of the thermometer records come from airports (and more of them in the Tropics).


2 Trackbacks

  1. […] Matter #3: Preisendorfer’s Rule N (Feb 13, 2005 ) A Challenge to Tamino: MBH PC Retention Rules (Mar 14, 2008) Gavin and the PC Stories (Mar 3, 2009) Steig’s “Tutorial” (Jun 2, […]

  2. […] stick was made, it wasn’t supported by any of the program code released my Mann, and the evidence says it couldn’t have been […]
