Sliming by Stokes

Stokes’ most recent post, entitled “What Steve McIntyre Won’t Show You Now”, contains a series of lies and fantasies, falsely claiming that I’ve been withholding MM05-EE analyses from readers in my recent fisking of ClimateBaller doctrines, falsely claiming that I’ve “said very little about this recon [MM05-EE] since it was published” and speculating that I’ve been concealing these results because they were “inconvenient”.

It’s hard to keep up with ClimateBaller fantasies and demoralizing to respond to such dreck.

Far from regarding the MM05-EE results as “inconvenient”, I tried very hard to draw further attention to them by seeking publication of a joint statement with Wahl and Ammann on points of agreement and disagreement – a story that I’ve told on numerous occasions.

Within a day or two of Wahl and Ammann’s announcement in May 2005, I had determined that our code and the Wahl-Ammann code reconciled to approximately seven 9s – Wegman waggishly said that Wahl and Ammann had replicated McIntyre and McKitrick rather than Mann.  I had a major dispute with them over their refusal to report the controversial verification r2 results – I knew that they got the same answer as us, because our codes reconciled exactly. Our disagreement with Wahl and Ammann came from the characterization of the various reconstructions, not on the squiggles themselves.  As readers are aware, I was anxious to settle as many technical issues as possible and made a formal offer to Caspar Ammann to publish a statement of points agreed and points disagreed. Ammann refused because it would be “bad for his career”.  A contemptible excuse even for climate academics.  
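For readers unfamiliar with the statistic in dispute, verification r2 is simply the squared correlation between a reconstruction and the instrumental series over a withheld verification window. The following is a minimal sketch on made-up series – the helper name, the verification window and the stand-in data are illustrative assumptions, not the MBH98 or Wahl-Ammann implementation:

```python
import numpy as np

def verification_r2(recon, instrumental, verification_mask):
    """Squared Pearson correlation over a withheld verification window --
    the statistic whose non-reporting is discussed above.
    Illustrative helper only, not the MBH98 or Wahl-Ammann code."""
    r = np.corrcoef(recon[verification_mask], instrumental[verification_mask])[0, 1]
    return r ** 2

# Made-up stand-in series on an annual grid. The 1902-1980 calibration convention
# is MBH98's; the 1854-1901 verification window here is just an example.
years = np.arange(1400, 1981)
rng = np.random.default_rng(1)
recon = np.cumsum(rng.standard_normal(years.size)) * 0.02
instrumental = recon + 0.3 * rng.standard_normal(years.size)
verification = (years >= 1854) & (years <= 1901)
print(f"verification r2 = {verification_r2(recon, instrumental, verification):.3f}")
```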

I wanted to publish a joint statement of agreed results to avoid wasting time on stupidity like we’re seeing nine years later from Stokes and the ClimateBallers. Unfortunately, the career greed of Wahl and Ammann won out or else these matters would have been put to rest years ago, as they ought to have been.

Having tried so hard to publish a joint statement of agreed results with Wahl and Ammann, it is ludicrous for Stokes to claim that they are somehow “inconvenient” for me.  I wrote detailed blogposts in 2006 showing the exact parallels between Wahl and Ammann variations and the previously published variations in MM05-EE, posts conspicuously ignored by Stokes, though they are both categorized and tagged.

For example, a CA post here showed graphs demonstrating the equivalence of MM05-EE and WA Scenario results.   The figure showed actual MM05 and WA digital results plotted in a consistent style and scale. Results using two covariance PCs were shown in pink; two Mannian PCs in red; two correlation PCs in blue; and MBH98 in orange.

Figure 1.  From here. Original caption: MBH-style reconstructions. Left: Archived results from MM05(EE) Figure 1. Right – WA Scenario 5 results (emulation). Pink – 2 covariance PCs; blue – 2 correlation PCs; also 4 covariance PCs; red – WA case with Mannian PCs; orange – MBH98.

In the same blogpost, I compared WA scenario 6 (no bristlecones) to the MM05 scenarios shown in WA Scenario 5.  Without bristlecones, results were essentially identical regardless of PC methodology – a finding that Mann had presumably encountered when he did calculations from his CENSORED file.


Figure 2. Original caption: Left – WA Scenario 5 as previously described. Right – WA Scenario 6 with bristlecone series excluded. Orange – MBH98 for reference. Red – with two Mannian PCs (WA Scenario 6a); magenta – with 2 covariance PCs (WA Scenario 6c); blue – one graph with 2 correlation PCs (WA Scenario 6b); one graph with 5 covariance PCs. As I recall, some differences arose because WA did not exclude all the stripbark series in the CENSORED directory, but I’d need to fisk the file to crosscheck.

These results, including the above diagram, were later applied in rebutting untrue assertions in Juckes et al.

Long before the publication of Wahl and Ammann, we had already reported (MM05-EE) that, despite Mann’s bluster and unsupported claims, we were in substantial agreement on the impact of various permutations on the reconstruction, though not on the implications, stating:

While we differ with Mann et al. on the issue of which methodological assumptions are “correct”, if the assumptions are specified sufficiently precisely, there is surprising consensus on the actual effects…

For example, here is a figure from Mann’s submission to Climatic Change in early 2004, showing calculations from a couple of permutations that relate fairly closely to subsequent presentations by ourselves and Wahl and Ammann.

Figure: Mann 2003 submission to Climatic Change, Figure 2.

 

A few months later, in Mann’s reply to our 2004 submission to Nature (and also in Realclimate posts), Mann raised nearly all the lines of argument that Stokes attributes to Wahl and Ammann. For example, the following paragraph from Mann’s submission to Nature raises topics subsequently presented by Wahl and Ammann without citation or credit:

The choice of centering of the data in PCA simply changes the relative ordering of the leading patterns of variance. Application of Preisendorfer’s selection rule “Rule N” (MBH98) selects 2 PCs for the MBH98 centering (1902-1980), but 5 PCs for the MM04 centering (1400- 1971). Although not disclosed by MM04, precisely the same ‘hockey stick’ PC pattern appears using their convention, albeit lower down in the eigenvalue spectrum (PC#4) (Figure 1a). If the correct 5 PC indicators are used, rather than incorrectly truncating at 2 PCs (as MM04 have done), a reconstruction similar to MBH98 is obtained. Moreover, similar results are obtained whether or not proxy networks are represented in terms of PCs.

In MM05-EE, we commented on this discussion (Mann’s reply to our Nature submission being online by then)  as follows:

  • In a centered calculation on the same data, the influence of the bristlecone pines drops to the PC4 (pointed out in Mann et al., 2004b, 2004d). The PC4 in a centered calculation accounts for only about 8% of the total variance, which can be seen in calculations by Mann et al. in Figure 1 of Mann et al. [2004d].
  • If a centered PC calculation on the North American network is carried out (as we advocate), then MM-type results occur if the first 2 NOAMER PCs are used in the AD1400 network (the number as used in MBH98), while MBH-type results occur if the NOAMER network is expanded to 5 PCs in the AD1400 segment (as proposed in Mann et al., 2004b, 2004d). Specifically, MBH-type results occur as long as the PC4 is retained, while MM-type results occur in any combination which excludes the PC4. Hence their conclusion about the uniqueness of the late 20th century climate hinges on the inclusion of a low-order PC series that only accounts for 8 percent of the variance of one proxy roster.
  • If de-centered PC calculation is carried out (as in MBH98), then MM-type results still occur regardless of the presence or absence of the PC4 if the bristlecone pine sites are excluded, while MBH-type results occur if bristlecone pine sites (and PC4) are included. Mann’s FTP site [Mann, 2002-2004] actually contains a sensitivity study on the effect of excluding 20 bristlecone pine sites in which this adverse finding was discovered, but the results were not reported or stated publicly and could be discerned within the FTP site only with persistent detective work.
  • If the data are transformed as in MBH98, but the principal components are calculated on the covariance matrix, rather than directly on the de-centered data, the results move about halfway from MBH to MM. If the data are not transformed (MM), but the principal components are calculated on the correlation matrix rather than the covariance matrix, the results move part way from MM to MBH, with bristlecone pine data moving up from the PC4 to influence the PC2. In no case other than MBH98 do the bristlecone series influence PC1, ruling out their interpretation as the “dominant component of variance” [Mann et al, 2004b].
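For readers trying to keep the permutations straight, here is a minimal sketch of the three PC conventions compared in the excerpt above. The helper name and the calibration window are illustrative assumptions, and this is not MM05 or MBH98 code (the full MBH98 recipe also rescales by calibration-period statistics, omitted here):

```python
import numpy as np

def proxy_pcs(X, cal=slice(-79, None), convention="centered_cov"):
    """
    PC time series of a proxy matrix X (years x series) under the conventions
    compared in the excerpt above. Illustrative helper, not MM05/MBH98 code.
      "short_centered": subtract the calibration-period mean only ("de-centered")
      "centered_cov"  : subtract the full-period mean (PCs of the covariance matrix)
      "centered_corr" : also divide by the full-period std (PCs of the correlation matrix)
    """
    if convention == "short_centered":
        Xc = X - X[cal].mean(axis=0)
    elif convention == "centered_cov":
        Xc = X - X.mean(axis=0)
    elif convention == "centered_corr":
        Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    else:
        raise ValueError(f"unknown convention: {convention}")
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U * s, s**2 / np.sum(s**2)   # PC time series, variance shares
```

On the actual NOAMER network, the excerpt reports that the bristlecone-dominated pattern turns up as the PC1 under de-centering, influences the PC2 under the correlation convention, and drops to the PC4 (roughly 8% of variance) under centered covariance PCs – and that retaining or excluding that one pattern (or the bristlecone sites themselves) is what separates “MBH-type” from “MM-type” results.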

All of these issues later became “scenarios” in Wahl and Ammann, who cited our articles, but did not refer to our prior discussion and analysis of the various permutations and combinations (or Mann’s even earlier discussion), thus leaving subsequent readers, like Stokes, with the incorrect belief that Wahl and Ammann had originated analysis of these permutations. In one of my blogposts, I commented:

There is virtually no difference between what we said in MM05b and the WA Scenario 5. It is exceedingly annoying that WA did not discuss the close relationship between what we had said in MM05b and what they said. Indeed, their failure to reconcile the results arguably rises to a distortion of the record – a point that I made as a reviewer, but which Climatic Change and WA ignored.

Plagiarism is not simply the use of sentences without attribution, but the use of ideas without proper attribution and credit. By failing to properly cite either Mann or ourselves on many of these issues, Wahl and Ammann left subsequent readers – Nick Stokes, as we have seen – with a false sense of their priority, while at the same time distorting the research record by failing to discuss prior analyses of these issues.

Conclusion

I’m sure that Nick Stokes was somewhat stung by my post entitled “What Nick Stokes Wouldn’t Show You”, but the disturbing fact was that my title was true. Brandon Shollenberger had challenged Nick in multiple venues to show a panelplot of randomly selected PC1s in consistent orientation (as the NAS panel had done), but Nick had refused.  So my title was cutting, but justified.    I’m sure that Nick wanted to give me a taste of my own medicine, but, if he wants to do so, he should not resort to fabricated headlines and deceptions that are no better than National Inquirer scumminess, such as the following:

Figure: excerpt from Stokes’ post (screenshot).

In his effort to falsely portray me as withholding results, Stokes claimed that I’ve “said very little about this recon since it was published” and speculated that I found its results “inconvenient”. I obviously presented these results both in the academic literature and in multiple Climate Audit blogposts. I also showed them in presentations. I tried hard to present them in a joint statement with Wahl and Ammann, and have complained on multiple occasions about their refusal. Stokes’ suggestion and implication to the contrary are both unfounded and contemptible.

I’m not going to bother commenting on his supposed gotcha, challenging why issues with Mannian PCs appear to have relatively less impact in the modern period where the data mining is occurring than in earlier periods.  Given that there is no dispute between ourselves and Wahl and Ammann (or Mann) on this point, Nick might think for a while on why this is not an issue among us, before falsely claiming a gotcha.

 

166 Comments

  1. kim
    Posted Oct 1, 2014 at 12:14 AM | Permalink

    If I’d bet on Ol’ Steveball, I’d be a free one today.
    ================

  2. Posted Oct 1, 2014 at 2:15 AM | Permalink

    Racehorse, once a thoroughbred, deep in slime.
    Balling isn’t a victimless clime.

    • Streetcred
      Posted Oct 2, 2014 at 2:23 AM | Permalink

      He’s just an old hack now … aimlessly wandering the back paddock dreaming of years gone by. The glue factory beckons but he resists, tossing his head, a few silver strands of what was once his glorious mane fall across his cloudy eyes; his once racehorse legs quiver under the effort of the mane toss …

  3. Alexej Buergin
    Posted Oct 1, 2014 at 2:23 AM | Permalink

    Calling the Enquirer Inquirer is a clever way to not offend these respected journalists by comparisons with our racehorse fellow.

  4. AntonyIndia
    Posted Oct 1, 2014 at 2:39 AM | Permalink

    What surprises me is that emotions overrule brains frequently in climate science. Not only on the individual level but also on the collective. Groupthink and “Us vs them” overrule rational expositions of data processing. Maybe it is time for scientists from other fields to call a spade a spade here, and blow the whistle on illogical or biased faults. They don’t have to fear for their careers, I hope, although these days just about anything is caused by “Climate Change”™.

  5. Jean S
    Posted Oct 1, 2014 at 3:15 AM | Permalink

    Before this post was up, I uploaded a couple of pictures in order to answer Nick’s latest. Let’s put them here. Here’s Nick’s graph:

    and here’s the corresponding figure from WA (Figure 3):

    Exactly the same discrepancy of about 0.3 in the AD1400 and AD1450 steps! Since Nick wanted to bring up that figure, I may as well point out WA’s dishonest visual “trick” of using the (land!) instrumental record (not even plotting the recons!) in the right panel (“latest CRU”) in order to artificially inflate the dynamic range (y-axis).

    Here’s also a money quote from WA:

    When two or three PCs are used, the resulting reconstructions (represented by scenario 5d, the pink (1400–1449) and green (1450–1499) curve in Figure 3) are highly similar (supplemental information). As reported below, these reconstructions are functionally equivalent to reconstructions in which the bristlecone/foxtail pine records are directly excluded (cf. pink/blue curve for scenarios 6a/b in Figure 4).

    Confirming exactly the main message of MM!

    • Nick Stokes
      Posted Oct 1, 2014 at 4:06 AM | Permalink

      No, that is not the corresponding figure. It covers part of the 15th century only, and includes the effect of excluding Gaspe. The figure I referred to is Fig 2 from Ammann and Wahl. My comment, which simply gave the fixed link, has now been awaiting moderation for twelve hours.

      • Jean S
        Posted Oct 1, 2014 at 4:33 AM | Permalink

        You never stop, do you?

        The curve 5d in Figure 3 is the same scenario as your “Centered PCA”. Figure 2 in WA does not even have PCA! This would have been clear to you if you had bothered to read the first 1.5 lines in the caption:

        Northern Hemisphere annual mean temperature reconstructions using only individual proxy records – without principal component summaries of proxy data

        And the effect should be visible in the 15th (and a little bit in the 16th) century, as I explained to you last time about 12 hours ago. And it is Wahl/Ammann not Ammann/Wahl which is referring to another paper. And I also told you that I don’t care about your comments in moderation (and explained why).

        • Nick Stokes
          Posted Oct 1, 2014 at 5:10 AM | Permalink

          “This would have been clear to you if you had bothered to read the first 1.5 lines in the caption”
          You still have the wrong picture. It’s from Ammann and Wahl, 2007, Fig 2. It’s headed “MBH-type reconstruction of the last millenium” and the caption is quite different to what you quote.

          Since you can’t get it right, I think it would help if you would release the comment with the link so others can.

        • Jean S
          Posted Oct 1, 2014 at 6:08 AM | Permalink

          How in the heck could I have known that you meant Ammann & Wahl Fig 2 when you were (and still are) linking to the Wahl & Ammann paper on your web page? And no, that is not the same as your “Centered PCA” but corresponds to WA scenario 5b. But thanks for bringing that figure up – I noticed something funny/interesting I had not noticed before (Steve and others, take a look at footnote 1)!

        • Posted Oct 1, 2014 at 6:14 AM | Permalink

          “How in the heck could I have known that you meant Ammann & Wahl Fig 2.”
          Because I referred to it as
          “here is the Ammann and Wahl emulation for the millennium”. And gave a link to the image, which you still have in moderation, now 13 hours later.

        • Nik Stokes
          Posted Oct 1, 2014 at 6:29 AM | Permalink

          Another clue is that it is shown at the bottom of the post that this thread is about.

          Jean S: I checked it last evening (my time) and all I saw was an “Update” with a link to the Wahl & Ammann paper. I didn’t check there today, nor did I bother to look at your comment in moderation. And this thread is not about the Ammann & Wahl figure, but about your original post, whose main figure is equivalent to Wahl & Ammann Figure 3.

      • AndyL
        Posted Oct 1, 2014 at 7:44 AM | Permalink

        Jean:
        It has been long-standing practice at this site to give greater tolerance to critics than to supporters of CA. When Nick Stokes is the direct topic of discussion, he should have the right to reply and his posts should all be let through.

        Nick adds value to this site, even if most people strongly disagree with him and say so forcefully. It would be helpful if somehow Nick could be white-listed, so his posts never go to moderation.

        Jean S: IMO over the years Nick Stokes has added almost nothing valuable. IMO he has been given far too much tolerance, but that’s Steve’s decision. I have no control over the moderation settings here; the only thing I can do is edit/delete/approve/disapprove comments. And the only thing I’ve said/done about moderating Nick is that I’m not going to spend any time (anymore) releasing his comments if they happen to go to moderation. Steve is going to do whatever he sees as best. BTW, now most of Nick’s comments seem to come through directly.

        • AndyL
          Posted Oct 1, 2014 at 10:00 AM | Permalink

          Jean:
          Responding to your in-line comments. I understand your position, though I don’t agree with it. However, when Nick is the subject of the discussion, I think you should be prepared to go the extra mile to release his comments.

      • Salamano
        Posted Oct 1, 2014 at 9:19 AM | Permalink

        I really hope this whole exercise isn’t boiled down to a misunderstanding of dueling “Wahl & Ammann” vs. “Ammann & Wahl” challenges…

        Somebody step in and help me out here… my head is spinning!

        Does “I’ll show you mine if you show me yours” work here..?

    • Nick Stokes
      Posted Oct 1, 2014 at 4:12 AM | Permalink

      Since you think it is important, I think you should include that Fig 2 in the text.

    • Carrick
      Posted Oct 1, 2014 at 5:46 AM | Permalink

      I should point out that this figure:

      demonstrates that there is a serious problem for MBH’s reconstruction method.

      Note how much more low-frequency information there is using the centered PCA.

      [I’m referring to the period outside of the calibration period during which the reconstruction is forced by the algorithm to follow the temperature series. Agreement with the temperature series over the calibration period is important for internal validity of the reconstruction, but, if properly done, is automatic and provides no information about the validity of the temperature reconstruction outside of the calibration period. ]

      It has been recognized since circa 2004 that MBH did not have enough low-frequency variation – which of course is a completely fatal flaw for authors who are trying to use this reconstruction to claim “unprecedented temperatures”. In fact “not enough” is really “almost none”: the handle of MBH98 is absurdly flat-lined.

      Newer reconstructions, like Moberg 2005, were even met with enthusiasm at Real Climate, though the practical implication – that MBH98/99 was shown to be invalid – seems to have been carefully left unexplicated.

      If you know the original MBH98/99 reconstruction fails to replicate the multidecadal variability shown in newer and presumably higher-fidelity reconstructions, then the fact that this reconstruction shows recent temperatures to be unprecedented has zero evidentiary value. Instead of noting this, the RC group credulously states:

      We hope that press reports about this paper that mention the increased variability will also emphasize the other key result: that there is “no evidence for any earlier periods in the last millennium with warmer conditions than the post-1990 period – in agreement with previous similar studies (1-4,7)” where (1)is MBH98, (2) is MBH99, (7) is Mann and Jones ’03.

      Showing that you can “lift the handle” from this flat-lined posture by using a centered PCA simply confirms that short-centered PCAs are partly responsible for this flat-lined and certainly invalid reconstruction.

      In my opinion, the appropriate pattern here is “wrong result, wrong method = retraction”, even if the “retraction” is an [in fact non-existent] discussion in, e.g., Mann 2008, which explains why the older results cannot be viewed as reliable.

      Because this retraction has steadfastly not been done, argument continues over the validity of what is factually a worthless paper. The results are wrong. The methods are wrong and long since abandoned. The only point that people can raise as valid is something you could have written down (whether it is true or not) without having to perform a messy reconstruction. Simply supposing it was not warmer then is as valid as what we’ve learned from this useless paper.

      Maybe if we all agreed to give this paper a decent burial, discussion of it would subside. Though still, I find the discussion of the effect of erroneous methodologies on a reconstruction to be an interesting topic in its own right. Discussion of the paper has been made more interesting than it needs to be, simply because some people seem to have great trouble speaking basic truths, but that’s where we are today as I see it.

      • ehac
        Posted Oct 1, 2014 at 6:03 AM | Permalink

        LIA in Moberg is lower than MBH…. LIA in MM is warm. Warmer than MBH. According to Carrick that is a confirmation of MM. If you can lift the handle… Moberg does the opposite.

        Interesting twist.

        • Carrick
          Posted Oct 1, 2014 at 8:08 AM | Permalink

          ehac, the point relating Moberg to MBH98 is that MBH98 (and MBH99) does not have enough low-frequency variability (almost none). This point has been acknowledged by RealClimate and others without any discussion, but is made even more forcefully if we compare MBH98 to the ensemble of modern reconstructions, including Mann 2008 EIV (this is Mann’s favored reconstruction from his 2008 paper):


          The effect of using a centered PCA is to increase the low-frequency variability. This is called a “sensitivity test”. The result of this test confirms that centered PCA reduces bias. This confirms a claim of M&M.

          Using centered PCAs is not the “M&M” method, it is standard practice. Nobody, including Mann, would now say that using a short-centered PCA is an improvement in practice over the standard method. Since we find that using a centered PCA

          The comparison of MBH98/centered PCAs to Moberg that you make simply tells us that, even using the standard methodology, MBH98 still fails validation testing against modern reconstructions.

          That has nothing to do with the point that using a short-centered PCA reduces low-frequency variability. So, no, not a twist, just confusion on your part about what the issues at play are.
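One way to make the sensitivity comparison above concrete is to quantify the low-frequency variability of each emulated reconstruction and compare the two numbers. A minimal sketch – the moving-average window and the stand-in series are illustrative assumptions, not anyone’s published choice:

```python
import numpy as np

def low_freq_variance(series, window=51):
    """Variance of a centered moving average: one crude measure of the
    'low-frequency variability' being compared above."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid").var(ddof=1)

# Stand-ins only: in practice these would be two emulated reconstructions
# (e.g. Mannian PCs vs. centered PCs) on a common annual grid.
rng = np.random.default_rng(0)
recon_decentered = 0.2 * rng.standard_normal(581)
recon_centered = recon_decentered + 0.1 * np.sin(np.linspace(0.0, 6.0, 581))
for name, rec in [("de-centered", recon_decentered), ("centered", recon_centered)]:
    print(f"{name:12s}: low-frequency variance = {low_freq_variance(rec):.4f}")
```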

        • Steve McIntyre
          Posted Oct 1, 2014 at 8:42 AM | Permalink

          Carrick,
          in 2006, I did a good post with original analysis comparing and diagnosing the ability of different multivariate methods to preserve low-frequency variability on a benchmark network of pseudoproxies from a climate model (from von Storch and Zorita). In my opinion, it deals with this issue more effectively than anything currently in climate academic literature.

          When you have non-contaminated networks, as in the VZ setup, “simple” methods do better than more complicated methods. As long as the methods do not flip the pseudoproxies over, any form of positive weighted average among the pseudoproxies will have the central limit theorem working in its favor.

          OLS (inverse on the proxies) performs very poorly in preserving low frequency because it generates a lot of negative coefficients (flipping the series), quickly reducing the grinding power of the Central Limit Theorem.

          Any methods which don’t take advantage of knowing the sign of the relationship of the proxy to temperature will flip over some of the series and lose low frequency even in an ideal case.

          In a toy case, the coefficients of a PC1 (other than Mannian) are typically all of one sign (and can be set to be positive for ease of interpretation as a weighted average). Adding lower order PCs introduces contrasts, which makes the eventual weightings more uneven and can make some weightings negative. Preisendorfer’s Rule N was proposed in an entirely different context (plus Preisendorfer demanded that the significance of identified patterns be separately proven) and its application for identification of a temperature signal in a tree ring network (as you well understand) is surely something that ought to be proven by specialists, rather than asserted unilaterally as a law of nature.
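A toy illustration of the sign-flipping point – this is not the von Storch/Zorita pseudoproxy benchmark, just white noise around an invented slow signal, comparing an all-positive weighted average with the same average after flipping the sign of a subset of series (mimicking negative regression coefficients):

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_proxies = 1000, 30
t = np.arange(n_years)

# Toy pseudoproxy network: a shared slow "climate" signal plus heavy white noise.
signal = np.sin(2 * np.pi * t / 400) + 0.5 * np.sin(2 * np.pi * t / 150)
proxies = signal[:, None] + 2.0 * rng.standard_normal((n_years, n_proxies))

def recovered_correlation(weights):
    """Correlation of a weighted proxy average with the true slow signal."""
    estimate = proxies @ weights / np.abs(weights).sum()
    return np.corrcoef(estimate, signal)[0, 1]

equal = np.ones(n_proxies)
flipped = equal.copy()
flipped[rng.choice(n_proxies, size=12, replace=False)] *= -1.0  # mimic negative coefficients

print(f"all-positive weights : r = {recovered_correlation(equal):.2f}")
print(f"12 of 30 sign-flipped: r = {recovered_correlation(flipped):.2f}")
```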

        • Posted Oct 1, 2014 at 8:33 AM | Permalink

          Carrick, PAGES 2000 also has much less high frequency variation. Moberg sampled from a much smaller region and range of proxies, and you would expect that Northern Hemisphere or global reconstructions would have less “fast” variation.

          Cast your carrots upon the water.

        • kim
          Posted Oct 1, 2014 at 10:17 AM | Permalink

          Hah, note the naive Bender(11/20/07) late in that ’06 post and comment thread:

          ‘How could such a nice opening post get overrun by such trollishness? Hopefully, those days are over.’
          ===================

        • ehac
          Posted Oct 1, 2014 at 10:53 AM | Permalink

          Carrick did not quite observe the problem using Moberg to confirm M&M. Moberg’s LIA is colder than MBH. And M&M’s LIA is warmer than MBH. Is there a LIA at all in M&M?

          And of course: Steve will not use data he doesn’t like. As in preparing the “sample” of simulation runs for the SI.

        • Sven
          Posted Oct 1, 2014 at 11:07 AM | Permalink

          “And of course: Steve will not use data he doesn’t like. As in preparing the “sample” of simulation runs for the SI.”
          And the source for this claim?

        • ehac
          Posted Oct 1, 2014 at 11:40 AM | Permalink

          The smokescreen is definitely working for Sven. Unaware of the selected “sample” in the SI.

        • miker613
          Posted Oct 1, 2014 at 11:46 AM | Permalink

          “Unaware of the selected “sample” in SI.” Why would anyone listen to these drive-by slurs? Post a link if you have anything to claim, or if you want anyone to know what you’re talking about.

        • Carrick
          Posted Oct 1, 2014 at 11:47 AM | Permalink

          Eli: Carrick, PAGES 2000 also has much less high frequency variation

          I was referring to low-frequency variability here, not high. Also, I wasn’t endorsing Moberg as the most optimal high-frequency reconstruction. (I lean towards Ljungqvist for that.)

          In terms of low-frequency reconstructions, all of the modern ones appear to agree with each other. What that means practically is a different question. But what is also true is that MBH98 does not agree with any of the modern reconstructions in terms of its low-frequency content.

          To get to the question you are raising, one needs to look at the spectral content of the reconstructed series. I agree that for high frequency, Moberg is an outlier. But it doesn’t seem to just be an averaging issue: the slope is shallower, and the fraction of high-frequency energy to low-frequency energy is much larger than in comparable reconstructions.
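For concreteness, the comparison described above can be sketched as a ratio of high-frequency to low-frequency spectral energy. The 50-year split period, the helper name and the stand-in series are illustrative assumptions, not Carrick’s actual procedure:

```python
import numpy as np

def band_energy_ratio(series, dt=1.0, split_period=50.0):
    """Ratio of high- to low-frequency spectral energy, split at an
    (arbitrary, illustrative) period of 50 years."""
    x = series - series.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(x.size, d=dt)          # cycles per year
    split = 1.0 / split_period
    return power[freq > split].sum() / power[(freq > 0) & (freq <= split)].sum()

# Stand-ins only: a "redder" series vs. a "whiter" one on a common annual grid.
rng = np.random.default_rng(7)
recon_a = 0.02 * np.cumsum(rng.standard_normal(1000))
recon_b = 0.2 * rng.standard_normal(1000)
print(f"recon_a high/low energy ratio = {band_energy_ratio(recon_a):.2f}")
print(f"recon_b high/low energy ratio = {band_energy_ratio(recon_b):.2f}")
```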

        • Carrick
          Posted Oct 1, 2014 at 11:58 AM | Permalink

          ehac:

          Carrick did not quite observe the problem using Moberg to confirm M&M. Moberg’s LIA is colder than MBH. And M&M’s LIA is warmer than MBH. Is there a LIA at all in M&M?

          I directly addressed this point. Centered PCA is a correction to MBH98, not a separate reconstruction of M&M. Centered PCA produces a reconstruction with more low-frequency energy, but not necessarily a valid reconstruction.

          And of course: Steve will not use data he doesn’t like. As in preparing the “sample” of simulation runs for the SI.

          If you want anybody to take you seriously (instead of treating you as just another spiteful troll), you’ll need to do better than making unsubstantiated allegations.

          Based on your inability to follow even simple explanations, I really doubt you know what you’re talking about here in any case.

        • Bob K.
          Posted Oct 1, 2014 at 12:50 PM | Permalink

          ehac: “And of course: Steve will not use data he doesn’t like.” Are you seriously delusional or are you trying to toss a fish in a barrel so that everyone can have fun shooting it?

        • ehac
          Posted Oct 1, 2014 at 3:07 PM | Permalink

          miker needed a link.

          http://onlinelibrary.wiley.com/doi/10.1029/2004GL021750/suppinfo

          Enjoy. Carrick and Sven too.

        • Carrick
          Posted Oct 2, 2014 at 11:55 AM | Permalink

          lame.

        • Carrick
          Posted Oct 10, 2014 at 5:45 PM | Permalink

          Eli Rabett, a bit late in the game here, but I thought I’d link to some results that were driven by your comments about variance.

          If I scale the spectra so that the low-frequency aligns with Loehle (I think I have a better method now, due to a suggestion by “A Fan”), here’s what I get:

          As I said on Brandon’s blog:

          While the reconstructions have been scaled to match the low-frequency portion of Loehle & McCollough, the two temperature series are shown with no scale adjustment. Given uncertainties in the relationship of the pseudo-temperature scale to the global temperature scale, the level of agreement was a bit surprising to me.

          Also, note that Moberg does not agree well with the other high-frequency reconstructions. Loehle predictably rolls off steeply below 50 Hz.

          Ljungqvist and Mann 2008 EIV appear to agree well with each other (given uncertainty) and with the global temperature series, both in slope and in magnitude.

          There seems to be an issue with Moberg. You were right about that, but it seems to be more substantive than just sample size.

      • Frank
        Posted Oct 1, 2014 at 8:44 AM | Permalink

        Carrick: With all of the venom directed at M&M, some readers may have forgotten or do not know about von Storch showing that the MBH98/99 methodology suppresses variability in the shaft. The ultimate in deception is to overlay the instrumental record (with full variability) on top of a hockey stick with an artificially-straightened shaft and an off-centered-PCA-enhanced blade.

        Click to access vonStorchEtAl2004.pdf

        http://www.sciencemag.org/content/306/5696/679.abstract

        Empirical reconstructions of the Northern Hemisphere (NH) temperature in the last millennium based on multi-proxy records depict small-amplitude variations followed by a clear warming trend in the last two centuries. We use a coupled atmosphere-ocean model simulation of the last 1000 years as a surrogate climate to test the skill of these methods, particularly at multidecadal and centennial timescales. Idealized proxy records are represented by simulated grid-point temperature, degraded with statistical noise. The centennial variability of the NH temperature is underestimated by the regression-based methods applied here, suggesting that past variations may have been at least a factor of two larger than indicated by empirical reconstructions.

        • ehac
          Posted Oct 1, 2014 at 11:00 AM | Permalink

          Not an effect of PCA.

        • Carrick
          Posted Oct 1, 2014 at 12:01 PM | Permalink

          But then, nobody has claimed that short-centered PCA is the only problem with MBH98.

  6. Nick Stokes
    Posted Oct 1, 2014 at 3:55 AM | Permalink

    Yes, my headline was a companion piece to yours. And I think it is well justified. There has been an endless focus on a mathematically irrelevant alignment of PC1, conveying a message of “mining for hockey sticks”. Pictures in the Wegman report, cartoons by Josh etc. Yet almost nothing written on what this actually means for the result. What it does to reconstructions. And there is your 2005 calculation showing no hockey stick effect at all from decentering. And of course, Wahl and Ammann’s plot showing negligible difference over the whole range. How often do they appear in the discussions?

    I mentioned in my piece surprising evidence of this lack of attention. The Wegman Report, just one year later, had much material about PC1 from your GRL paper. But he said almost nothing about your E&E reconstruction, which was clearly relevant. A very oblique discussion in the specific MM05EE section, with nothing about the centering/decentering effect comparison. So, in the congressional committee hearing, Rep Stupak asked Wegman, saying:

    “Does your report include a recalculation of the MBH98 and MBH99 results using the CFR methodology and all the proxies used in MBH98 and MBH99, but properly centering the data? If not, why doesn’t it?”

    Wegman responded saying:

    “Ans: Our report does not include the recalculation of MBH98 and MBH99. We were not asked nor were we funded to do this. We did not need to do a recalculation to observe that the basic CFR methodology was flawed. We demonstrated this mathematically in Appendix A of the Wegman et al. Report. The duplication of several years of funded research of several paleoclimate scientists by several statisticians doing pro bono work for Congress is not a reasonable task to ask of us. “

    And yet, he had reviewed MM05E&E. He had access to your code. He claimed to have run, at least, the GRL code, and should surely have looked at the EE code. He should have been aware that he did not need to duplicate several years of funded research. It didn’t take me long to get it running. But he said nothing. And so the talk went on to Wahl and Ammann
    “The Wahl and Ammann paper came to our attention relatively late in our deliberations,…”

    When you yourself spoke to the Committee you gave a presentation which included a five-page prepared statement and 14 slides. You gave a two-page introduction. But you did not once tell them that you had actually emulated, with and without decentering, the results of MBH, the work in question. Nor did you say anything about it in your responses for the record, despite questions relating to difficulties that you had had in being able to replicate Mann’s work.

    Your response to one earlier question from Mr Whitfield:

    MR. MCINTYRE. Well, let me answer that. That is true, and that is the one specific item that was verified by both panels, and both the NAS panel and the Wegman report specifically confirm that his methodology would produce a hockey stick from random data.

    No qualification that you were talking about PC1, rather than a reconstruction, and no context to suggest that. You had already reproduced the actual reconstruction – the well-known hockey-stick result that they would have believed you were referring to – and shown that the result was no more HS-like than with centered PCA.

    In your second address to the committee, you even managed to talk about Wahl and Ammann, and to say that your code reconciled to theirs, without making the obvious connection that both reconciled, or at least related, to MBH.

    The nearest it came was in this late exchange, which followed Wegman’s statement about not having done the calculation without decentering. It goes:

    “MR. STEARNS. Well, you know, I just want to give you your due here. We have heard in testimony that Drs. Wahl and Ammann have reproduced Dr. Mann’s work and shown your criticism to be invalid, and I guess–is this true and were your criticism erroneous? I will give you an opportunity to respond to that.

    MR. MCINTYRE. Well, a couple of points. First of all, the code that we used to emulate Dr. Mann’s work reconciles almost exactly with that of Wahl and Ammann. And so any conclusions that differ are not because of differences in how we have emulated the reconstruction. They think that certain steps are fine, we don’t. They have in my opinion not carefully considered the implication of bristlecones. Our codes reconcile so right now I am confident in our conclusions that if you remove the bristlecones you have a major impact on the final results.

    Last December, I met with Ammann in San Francisco and suggested to him that since our codes reconciled so closely that it would make sense if we co-authored a paper in which we set down the points that we agreed on, set down the points we disagreed on in an objective way so that we didn’t seem to be launching missiles at one another and creating more controversy. I said that we could declare an armistice for 6 weeks until we accomplished this, and if we didn’t get to conclusion everybody would go back to square one and that each of us could write separate appendices, say where we disagreed.”

    Well, you had time for that anecdote about Ammann, and even to say that your codes reconciled. But again, just the briefest mention of “emulating” MBH. Nothing about how well it corresponded to MBH, or what effect the decentering had.

    The question of whether decentering made a material difference was of obvious interest to the committee, and Stupak asked about it directly. Wegman replied that he had not done a comparison. In fact your code had that capability. You had an opportunity to point that out, and even volunteer your own results. You did not.
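For readers new to the “hockey stick from random data” point referenced in the testimony above, here is a minimal sketch of the centering effect on pure noise. It uses AR(1) noise rather than the fractionally-differenced noise of the MM05-GRL simulations, and the network size, persistence and “hockey-stick index” definition are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_series, n_cal = 581, 70, 79   # toy network; last 79 "years" play the calibration window
phi = 0.95                               # AR(1) persistence standing in for "red noise" proxies

def ar1_network(n, m):
    """m independent AR(1) series of length n with unit marginal variance."""
    x = np.empty((n, m))
    x[0] = rng.standard_normal(m)
    eps = np.sqrt(1.0 - phi**2) * rng.standard_normal((n, m))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def leading_pc(X, short_center):
    """PC1 after full-period or calibration-period-only ("short") centering."""
    mu = X[-n_cal:].mean(axis=0) if short_center else X.mean(axis=0)
    U, s, _ = np.linalg.svd(X - mu, full_matrices=False)
    return s[0] * U[:, 0]

def hs_index(pc):
    """Hockey-stick index: |calibration-period mean minus full mean| in SD units."""
    return abs(pc[-n_cal:].mean() - pc.mean()) / pc.std()

for short in (False, True):
    scores = [hs_index(leading_pc(ar1_network(n_years, n_series), short)) for _ in range(100)]
    label = "short-centered" if short else "fully centered"
    print(f"{label}: mean hockey-stick index of PC1 over 100 noise networks = {np.mean(scores):.2f}")
```

With persistent noise, the short-centered PC1s should come out with a systematically larger index – the PC1 effect which, per the testimony quoted above, both the NAS panel and Wegman confirmed; what that effect does or does not do to the full reconstruction is the separate question being argued in this thread.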

    • Sven
      Posted Oct 1, 2014 at 4:32 AM | Permalink

      Yes, Nick. Me too. If I had something to hide, the first thing that would come to my mind would be to propose to my opponents that we write a joint paper. And I would do it publicly, in front of the Committee. The second thing right after that, when I saw that I was wrong, would be to say that this whole thing is old and not worth bothering about, like your pal ATTP. And thus the whole issue would be solved. How nice!

    • Posted Oct 1, 2014 at 6:23 AM | Permalink

      If your intent was to garner converts, you have failed. If your intent was to demonstrate the duplicity of your efforts, you have succeeded. I actually respected your opinion somewhat before this episode. I will not make such a mistake again.

    • TimTheToolMan
      Posted Oct 1, 2014 at 6:51 AM | Permalink

      Nick writes “There has been an endless focus on a mathematically irrelevant alignment of PC1, conveying a message of “mining for hockey sticks”.”

      I was always under the impression that the mining for hockey sticks was primarily due to the data-dredged proxy selection, and that the decentred PCA enhanced the effect. I guess the message wasn’t “conveyed” to me properly.

    • miker613
      Posted Oct 1, 2014 at 10:39 AM | Permalink

      I have no clue why you think that McIntyre or Wegman were obligated to mention your particular issue in response to those questions, or why it is telling that they didn’t. As I think McIntyre has made clear here, the point of disagreement with Wahl and Ammann is not on that issue, but on an entirely different one. Just because you’re fascinated with this point doesn’t mean that everyone is.

      • Nic Stokes
        Posted Oct 1, 2014 at 7:02 PM | Permalink

        “I have no clue why you think that McIntyre or Wegman were obligated to mention your particular issue in response to those questions”
        It’s not my particular issue. It’s Fig 1 in MM2005E&E, one of the papers on which the Wegman report to Congress was based. And Stupak specifically asked about such a comparison.

        • miker613
          Posted Oct 1, 2014 at 9:41 PM | Permalink

          Yeah – so Wegman said he didn’t make the comparison. Which makes sense, assuming he didn’t. McIntyre didn’t mention it, since he wasn’t asked. Which makes sense…

          Steve: it wasn’t the sort of hearing where you could just butt in or interrupt. You spoke when spoken to. The congressmen were interested in what North and Wegman had to say – not in what I or Crowley or Mann had to say. And while the ClimateBallers criticize Wegman for not analysing the reconstruction step, the NAS panel didn’t study it either, and they had a staff. At the time, I was critical of both academic panels as lacking due diligence. The House Science Committee got angry with the NAS for not answering the questions that they had been asked, even after a staffer pleaded with them at the workshop to deal with the committee questions. They eventually refused to pay them (pers. comm., Ralph Cicerone :))

    • Layman Lurker
      Posted Oct 1, 2014 at 10:56 AM | Permalink

      Nick:

      There has been an endless focus on a mathematically irrelevant alignment of PC1, conveying a message of “mining for hockey sticks”.

      Nick, since the bristlecones have high persistence and hockey stick shapes, the MBH short centering algorithm promotes them (and their dubious hockey stick signal) to PC1 instead of PC4. This artefact is hardly “mathematically irrelevant”, since it is incumbent on the researchers to demonstrate that lower order PCs are contributing signal rather than noise to the reconstruction. Not “irrelevant” because a diligent researcher who innocently included the dubious proxies should be directed by due process to examine the lower order PC4 critically, and upon realizing that PC4 is composed of bristlecones would then look at the correlation with the local observed temperature record. Are the bristlecones in Mann’s network correlated with regional temperatures?

      • Tom T
        Posted Oct 1, 2014 at 2:26 PM | Permalink

        Your post suggests that Mann didn’t do his due diligence. As we know, Mann did examine the bristlecones critically. Upon realizing that his reconstruction was dependent on their inclusion, he decided to ignore it and make false claims of robustness in his paper. However, he is a bit absent-minded and forgot that he had left the runs where he specifically excluded the Graybill series sitting on his FTP server in the ironically named “Censored” folder. An oversight that, as the Climategate e-mails revealed, he greatly regretted.

        It is this point, more than any other, that Steyn will tattoo Mann with, as he has no explanation for why he specifically excluded those proxies or why he later claimed robustness when he knew it was false.

      • Posted Oct 1, 2014 at 5:15 PM | Permalink

        LL,
        “This artefact is hardly “mathematically irrelevant” since it is incumbent on the researchers to demonstrate that lower order PC’s are contributing signal rather than noise to the reconstruction.”
        It’s important to have good data. But PCA can’t help you there. You may think bristlecones are dubious, but you don’t want the PCA deciding that. If there is an apparent signal, then it should have been included either way, whether as PC1 or PC4. I don’t know about regional temperatures and bristlecones.

        • Black Dow
          Posted Oct 2, 2014 at 4:57 PM | Permalink

          I am not a statistician. But can someone who is please explain to me why this statement isn’t just the most utterly contradictory logic, because I can’t see it.

          In my layman’s understanding, the entire point, the raison d’être, of principal component analysis is to try and discern signal from noise.

          As I said, I am no statistician. But I do know Shannon’s information theory. And I do know that the ONLY difference between ‘good’ data and ‘bad’ data is the ratio of signal to noise.

          So please can someone…Nick…please…explain how the following statement makes sense:

          Its important to have good data [couldn’t agree more].
          But PCA can’t help you there [then WTF can it help you with Nick?]
          You may think bristlecones are dubious, but you don’t want the PCA deciding that[ok Nick, please tell me then. What technique or methodology, precisely, other than PCA do you recommend to decide whether a given data set contains a ‘dubious’ ratio of signal to noise?]
          If there was an apparent signal [apparent to WHAT?????? Other than the PCA you have just said can’t be relied on to detect apparent signal. Please Nick, please tell me, how is the signal in the bristlecones “apparent” to you?]
          then it should have been included either way, whether as PC1 or PC4 [so here is again where my lack of statistical knowledge leaves me grasping – my amateur understanding is that a PC1 is a PC1 precisely because it possesses a higher signal to noise ratio than a PC2, or a PC3, or a PC4. Someone please explain to me why that isn’t true, because otherwise what I think Nick has just said is “If there is an apparent signal, it should be included either way, whether it is apparent or not”].

        • Nic Stokes
          Posted Oct 2, 2014 at 5:18 PM | Permalink

          In your terms you can think of PCA like an old AM radio receiver. You can twiddle an oscillator and use an IF to align with a frequency band. You then use a rule N (filter) to separate what you regard as noise. And you’ll hear someone talking. The system won’t tell you if what they are saying is true or reliable, and you don’t want it to.

        • Steve McIntyre
          Posted Oct 2, 2014 at 8:25 PM | Permalink

          Nick’s interpretation of principal components as a radio receiver is completely contrary to the interpretation of Preisendorfer, the author of a leading text on the subject and the originator of Preisendorfer’s Rule N. Preisendorfer regarded the role of his Rule N (and similar rules) as simply exploratory. Rule N could be used to identify something that was potentially non-random, but scientific work was then needed. Preisendorfer (see CA tag “preisendorfer” for selections and comments):

          The null hypothesis of a dominant variance selection rule [such as Rule N] says that Z is generated by a random process of some specified form, for example a random process that generates equal eigenvalues of the associated scatter [covariance] matrix S…

          One may only view the rejection of a null hypothesis as an attention getter, a ringing bell, that says: you may have a non-random process generating your data set Z. The rejection is a signal to look deeper, to test further.

          There is no royal road to the successful interpretation of selected eigenmaps e[i] or principal time series a[j] for physical meaning or for clues to the type of physical process underlying the data set Z.

          In our case, if bristlecones are identified as non-random under Rule N, then the next scientific step is to examine them to see if there is a scientific explanation for their non-randomness. That was the procedure we adopted at the time: MM05-EE looked at the literature on stripbark bristlecones and whether specialists had established them as temperature proxies: it turned out that they hadn’t. At the time of MM05-EE, we were unaware of the importance of mechanical deformation as a potential explanation: this now seems much more likely as an explanation. And, as Climategate shows, both Briffa and Esper were aware of this explanation, but reluctant to discuss it in public for fear of opening “Pandora’s box” “at their peril”.

          Mann and now Stokes completely distort the scientific meaning of Preisendorfer’s Rule N and completely betray Preisendorfer’s scientific approach.

        • Black Dow
          Posted Oct 2, 2014 at 5:34 PM | Permalink

          In my terms. Gee, thanks for dumbing it down to the level a lil old hick like me can understand Nick.

          Radio reception is an excellent example of information theory though.

          If you turn on a radio, as you say, it will always make a noise. The question we seek to answer (by twiddling the knob if you like) is whether the noise you hear is static, or whether it has meaning.

          If you “hear someone talking”, Nick, it’s not f*cking noise. It’s signal. It doesn’t matter what they are saying, or how “true” it is.

          So I’ll ask you again – if you are NOT using PCA to determine whether a given data set has a significant signal to noise ratio, WTF are you using it for?
          And if you ARE using PCA, as you say, to identify the signal (talking) in the noise (radio waves) then please, how can you say with a straight face that “you don’t want the PCA deciding that” a certain data set is “dubious” (ie has a low signal to noise ratio).

        • Spence_UK
          Posted Oct 2, 2014 at 5:44 PM | Permalink

          The problem with Nick’s radio analogy, is that the decentred PCA radio only plays country, even when it is tuned into a rock or pop station (or even when there is only static). The thing just keeps playing country.

        • Spence_UK
          Posted Oct 2, 2014 at 5:45 PM | Permalink

          And Nick Stokes is the used car salesman telling you that’s exactly what it is supposed to do.

        • Nic Stokes
          Posted Oct 2, 2014 at 5:48 PM | Permalink

          BD,
          I took it that you were an EE, not a hick. Sorry if I got it wrong.

          PCA is simply providing a basis which aligns with what seem to be the most prominent patterns. It’s the rule, or similar reasoning, that identifies where to draw the line. You could say that that is quantifying something similar to S/N, but the S is multi-dimensional. But when it comes to the reliability of bristlecones etc., for PCA that is equivalent to the reliability of what you hear on the radio. PCA is not the same as calibration and verification.

        • Black Dow
          Posted Oct 2, 2014 at 6:20 PM | Permalink

          Nick,

          Do you even hear yourself?

          “PCA is simply providing a basis which aligns with what seems to the most prominent patterns”.

          I will assume that by “most prominent patterns” you mean ‘signal’.

          I will furthermore assume that when you say “providing a basis that aligns with” you mean ‘detecting’.

          So do you actually mean to say “PCA is simply detecting what seems to be signal”?

          If not, please tell me where I have misunderstood Nick.

          But if I have understood you correctly, could you please tell me (as I have now asked three times) how it is possible to say that “you don’t want the PCA deciding that [a data set has a low signal to noise ratio]”?

          PCA is CENTRAL to Mann’s assertion that he has detected a MEANINGFUL temperature signal in the proxy noise, Nick.

          It can either be the case that PCA does detect signal (in which case you have to deal with Steve’s criticism of Mann’s PCA as-is).

          Or it can be the case that PCA does not detect signal. In which case Mann’s original assertion is unfounded.

          One.

          Or the other, Nick,

          Time to get off that fence.

        • Nic Stokes
          Posted Oct 2, 2014 at 7:08 PM | Permalink

          “I will assume that by “most prominent patterns” you mean ‘signal’.”
          No, you’re thinking one-dimensionally. If you look at Fig 2 in MBH98 it shows the five EOFs that PCA identified in the instrumental temperature. They are spatial patterns common to the distributed data. Not just one signal. PCA initially returns many other EOFs, but most will be noise, and are eliminated by the application of some rule.

          But coming back to the AM receiver analogy, it doesn’t tell you S/N ratios either, though you can do extra analysis to find out. It just selects and amplifies. I think you’re conflating PCA with other parts of the analysis.

        • Black Dow
          Posted Oct 2, 2014 at 7:22 PM | Permalink

          I’m doing my best to give you the benefit of the doubt here Nick, but you’re really not helping.

          Radio reception is the PERFECT example of information theory, and you clearly don’t understand it.

          If I tune in a radio and hear something that has meaning to me, then I have signal.

          If I tune in a radio and hear nothing of significance, then I have noise.

          Tuning a radio receiver is precisely the action of discerning signal from noise. There just isn’t any way to argue that in information theory terms tuning a radio doesn’t provide feedback on signal to noise ratios. Seriously. You just can’t say that, any more than you can say that it is lighter at night than during the day.

          Again I am no expert on statistics but this much I do understand Nick.

          PC1 contains more signal than PC4.

          It IS significant that Mann detected hockey sticks as PC1, and not PC2, or PC3, or PC4. Because if the hockey stick is PC4 then it means there are three MORE significant dimensions to the data than the one you have selected.

          Please tell me what other analysis Mann did. Honestly. Please do. And because I feel sorry for you I will refrain from pointing out that the basic criticism of Mann made by Steve and Ross was that his chosen methodology selected and amplified weak signals beyond what was justified in the data.

        • Nic Stokes
          Posted Oct 2, 2014 at 8:18 PM | Permalink

          BD,
          “If I tune in a radio and hear something that has meaning to me, then I have signal.”
          You’re not distinguishing the role of a radio receiver. It won’t help if there was a useless microphone, say, degrading the S/N. It won’t know. The receiver just amplifies whatever is in the band.

          And what the PCA returns isn’t just hockey sticks. It detects a number of patterns, and reconstitutes the pattern from all of them. Promoting from PC4 to PC1 doesn’t change that, unless you had been inclined to truncate PC4.

          Warming to the AM analogy, suppose you had some small orchestras in different towns, and you wanted to play a grand symphony. So you can transmit them on different frequencies, and someone with a mixer can reconstruct a (mono) grand symphony. Now it doesn’t really matter if one channel has too many woodwinds, as long as the overall number is OK.

        • Nic Stokes
          Posted Oct 2, 2014 at 9:20 PM | Permalink

          Steve,
          “Nick’s interpretation of principal components as a radio receiver is completely contrary to the interpretation of Preisendorfer”

          Jolliffe, “PCA”, p. 128, describes it thus (transcribed):

          “Rule N, described in section 5d of Preisendorfer and Mobley (1988) is popular in atmospheric science. It is similar to the techniques of parallel analysis,… and involves simulating a large number of uncorrelated data sets of the same size as the real data set to be analysed, and computing the eigenvalues of each simulated data set. To assess the significance of the eigenvalues of the real data set, the eigenvalues are compared to percentiles derived empirically from the simulated data.”

          That’s just numerical. There’s no scientific review as part of the rule N. Of course, you’d want to do it anyway, but that is a different story.

          Steve: Nick, for crissake, I quoted directly from Preisendorfer’s text. Can’t you f-ing read? Preisendorfer said that Rule N was merely a “ringing bell” – a signal to examine scientific significance. I described Rule N as Preisendorfer did. Following Rule N, Preisendorfer’s protocol required examination of scientific significance. But when we examined the bristlecone literature, we found that specialists did not endorse bristlecone ring widths as a thermometer – quite the opposite. Being flagged on Rule N doesn’t entail scientific significance or prove that the PC is a temperature proxy, as opposed to stripbark mechanical deformation. Your position, aping Mann, is profoundly anti-Preisendorfer. Did you consult Preisendorfer’s actual text before pontificating to me about it? I suggest that you try reading Preisendorfer’s text before lecturing me on it.
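For concreteness, here is a bare-bones sketch of the purely numerical step that both the Preisendorfer and Jolliffe passages describe. Gaussian surrogates, the 95th percentile and the helper name are assumptions of the sketch; the quoted texts leave such choices to the analyst:

```python
import numpy as np

def rule_n(X, n_sims=500, pct=95, seed=0):
    """Preisendorfer-style Rule N as described in the Jolliffe quotation above:
    compare each eigenvalue of the standardized real data against the same-rank
    eigenvalue distribution from uncorrelated Gaussian surrogates of the same
    shape. Returns a boolean per PC. A bare-bones sketch; a flagged PC is only
    an 'attention getter', not a demonstration of physical significance."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    eig = np.sort(np.linalg.eigvalsh(np.cov(Z, rowvar=False)))[::-1]

    sim = np.empty((n_sims, p))
    for i in range(n_sims):
        S = rng.standard_normal((n, p))
        sim[i] = np.sort(np.linalg.eigvalsh(np.cov(S, rowvar=False)))[::-1]

    return eig > np.percentile(sim, pct, axis=0)

# Illustrative call on a made-up 581-year, 20-series network in which
# three series share a modest common trend:
X = np.random.default_rng(1).standard_normal((581, 20))
X[:, :3] += 1.5 * np.linspace(-1.0, 1.0, 581)[:, None]
print("PCs flagged by Rule N:", np.where(rule_n(X))[0] + 1)
```

As the Preisendorfer excerpt emphasizes, whether a flagged PC reflects temperature rather than, say, stripbark mechanical deformation is a separate scientific question that the rule itself cannot answer.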

        • Nic Stokes
          Posted Oct 2, 2014 at 10:56 PM | Permalink

          “Can’t you f-ing read?”
          Steve, what I write needs reading too. I was describing to BD what PCA actually does. And I wrote about what Rule N actually does. These are mechanics. That’s the receiver aspect. And I then showed Jolliffe’s definition, which seems useful in the circumstances.

          You have quoted what Preisendorfer said about how the outcome of Rule N should be used. I wasn’t writing about how radio receivers should be used.

          “The rejection is a signal to look deeper, to test further.

          There is no royal road to the successful interpretation…”

          That doesn’t sound like he’s specifying a rule.

        • Steve McIntyre
          Posted Oct 3, 2014 at 9:13 PM | Permalink

          Nick,
          it’s hard to keep up with all the ClimateBaller fantasies. You speculate that we included a panelplot of 10 PCs in our Nature submission. Our Nature submissions have been online for nearly a decade and, if you’d bothered to look at them, you’d see that they did not contain the supposed panelplot or an equivalent. See: http://www.uoguelph.ca/~rmckitri/research/fallupdate04/submission.1.final.pdf and http://www.uoguelph.ca/~rmckitri/research/fallupdate04/MM.short.pdf.

        • clays
          Posted Oct 3, 2014 at 9:19 AM | Permalink

          Nick claims: “what I write needs reading too.” No, really, it doesn’t (well, maybe for comic relief). To ignore the underlying source and quote a third party’s interpretation shows a desperation not to admit error and an unfortunate dishonest mindset.

        • Steven Mosher
          Posted Oct 3, 2014 at 1:28 PM | Permalink

          “That doesn’t sound like he’s specifying a rule.”

          what next Humpty Dumpty?

          perhaps you say Preisendorfer’s rule N is a black box.

        • Posted Oct 4, 2014 at 1:17 AM | Permalink

          Steve,
          “You speculate that we included a panelplot of 10 PCs in our Nature submission.”

          I said that panel was not in the Nature comment. My speculation that they had seen it was based on the statement in the comment:
          “Ten simulations were carried out and a hockey stick shape was observed in every simulation.”

          I thought that Nature would not let you include that without seeing the evidence. As I say, speculation, but not from nothing.

          I also noted that when you first sent the MM05GRL paper to Nature (a different submission), the referee made a very similar reference to ten simulations producing hockey sticks every time. Again surprising, if he had not seen a graph. The function to produce such a tableau (hockeysticks.eps) runs as part of your script lodged with the GRL paper.

        • Nic Stokes
          Posted Oct 4, 2014 at 1:21 AM | Permalink

          Steve,
          “You speculate that we included a panelplot of 10 PCs in our Nature submission.”
          My response is in moderation

        • Nic Stokes
          Posted Oct 4, 2014 at 1:31 AM | Permalink

          Steve,
          “You speculate that we included a panelplot of 10 PCs in our Nature submission.”

          Since I’m in moderation, I might as well link to my comment on the Nature matter. It’s here. There are some other comments in the subthread.

        • Nic Stokes
          Posted Oct 4, 2014 at 1:54 AM | Permalink

          Steve,
          “You speculate that we included a panelplot of 10 PCs in our Nature submission.”

          (This response, under my WordPress ID, dropped out of moderation, presumably sent by Akismet to spam. I’ll try this better-regarded user name)

          I explicitly said that panel did not appear in the Nature comment. My speculation that they had seen it was based on the statement in the comment:
          “Ten simulations were carried out and a hockey stick shape was observed in every simulation.”

          I thought that Nature would not let you include that without seeing the evidence. As I say, speculation, but not from nothing.

          I also noted that when you first sent the MM05GRL paper to Nature (a different submission), the referee made a very similar reference to ten simulations producing hockey sticks every time. Again surprising, if he had not seen a graph. The function to produce such a tableau (hockeysticks.eps) runs as part of your script lodged with the GRL paper.

          Steve: once again, your speculation was untrue. You say that the speculation was “not from nothing”, but, as too often, you and the other ClimateBallers prefer speculation based on the flimsiest of evidence to facts.

        • Sven
          Posted Oct 4, 2014 at 2:53 AM | Permalink

          Nick, a speculation or a suspicion? If the latter, then it’s a silly suspicion.

      • Tom T
        Posted Oct 1, 2014 at 6:12 PM | Permalink

        No, as I said in an earlier thread, the weighting should be some combination of the eigenvalue and the correlation to the dependent variable. Not one or the other.

        A strong correlation and a low eigenvalue suggest a spurious correlation.

        A weak correlation and a high eigenvalue suggest a different signal.

        As was pointed out, perhaps how we should approach the use of PCA in global and hemispheric temperature reconstructions is something that the literature should have addressed before self-proclaimed paleoclimatologists jumped in head first, not knowing the first thing about what they were doing.

        The method should have been hashed out first. Instead everyone was in a race to be first.

        • Pat Frank
          Posted Oct 1, 2014 at 8:19 PM | Permalink

          Tom, none of them apparently know that PCs have no particular physical meaning. They assign PCs to temperature by mere qualitative fiat.

        • Tom T
          Posted Oct 2, 2014 at 10:38 AM | Permalink

          I think that they do know that. The fact that they simply retain enough PCs until they get the result they want shows that they don’t care about PCA at all. They simply add it to the method to give it an air of statistical respectability when all they are really doing is a correlative model.

      • Layman Lurker
        Posted Oct 2, 2014 at 10:32 PM | Permalink

        Blind application of Preisendorfer’s Rule N without critical evaluation of the lower-order PCs is just an exercise in curve fitting.

        • Jean S
          Posted Oct 2, 2014 at 11:52 PM | Permalink

          Exactly. What seems to totally escape ClimateBallers is that Rule N (like other objective rules) uses eigenvalues (the squared singular values of the SVD) to determine how many PCs should be kept. In normal PCA, each eigenvalue is just an empirical variance, i.e. the average squared departure from the overall mean. Thus PCA decomposes the original data set into orthogonal components (PCs) in terms of the variance they explain, and you can sensibly adopt a rule such as: keep the minimum number of PCs needed to explain 50% of the variance in the data set.

          Now in Mannian PCA the eigenvalues do not represent variance; they represent the average squared departures from the calibration (short) mean. So applying any rule blindly to Mannian PCA changes its meaning. Taking the number of PCs such that the sum of normalized eigenvalues is over 50% doesn’t mean those PCs can explain 50% of the variance (usually they can’t). In other words, PCA loses its fundamental summarizing feature.
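
          For concreteness, here is a minimal R sketch of only the centering point described above, using a hypothetical proxy matrix X (years in rows, series in columns) and a hypothetical calibration-row index; it is an illustration of the effect of the reference mean, not the MBH algorithm:

            # Normalized "eigenvalue" shares under conventional centering vs.
            # short (calibration-period) centering. With full centering the
            # shares are true proportions of variance; with short centering the
            # squared singular values measure departures from the calibration
            # mean, so a "keep PCs explaining 50% of variance" rule no longer
            # means what it says.
            pc_shares <- function(X, calib = NULL) {
              if (is.null(calib)) {
                ctr <- colMeans(X)                        # overall mean (conventional PCA)
              } else {
                ctr <- colMeans(X[calib, , drop = FALSE]) # short mean (Mannian-style)
              }
              Xc <- sweep(X, 2, ctr)                      # remove the chosen reference mean
              d2 <- svd(Xc)$d^2                           # squared singular values ("eigenvalues")
              d2 / sum(d2)                                # normalized shares
            }

            # e.g. compare pc_shares(X) with pc_shares(X, calib = 503:581),
            # where rows 503:581 stand in for a short calibration window (hypothetical).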

    • MikeN
      Posted Oct 1, 2014 at 4:42 PM | Permalink

      So Nick, this decentering chart shows negligible difference, no hockey stick effect, but your PCA emulation without selection shows a big difference in hockey stickishness?

  7. Jean S
    Posted Oct 1, 2014 at 4:15 AM | Permalink

    There is a new post up by ATTP explaining exactly why it is so frustrating to argue with the ClimateBallers:

    So, I have found it somewhat interesting that Steven McIntyre has taken an interest in things I’ve said about his paper. To be fair to Steven, my understanding of his paper has changed a great deal since I made most of the comments he’s highlighted. I still think his paper is being misinterpreted by some (which was one of the points I was suggesting in one of the comments he’s highlighted) but I don’t really care.

    In other words, you first attack someone else, and when shown to be wrong, you just don’t care anymore! You don’t correct the record, and you just leave the old attack untouched for the next generation of ClimateBallers to use. And

    Sometimes people suggest that I should go and comment on posts like those written by Steve, but you just need to see JeanS and Stephen Mosher’s responses to Nick Stokes, to see why I don’t. Don’t get me wrong, there’s nothing fundamentally wrong with being a jerk. Many people are and I can be one myself at times (this might be one of them), but I certainly don’t need to interact with them.

    In other words, there is nothing wrong with “being a jerk”, but you avoid interacting with people who just might give it back in the same measure. What a coward!

    • Sven
      Posted Oct 1, 2014 at 4:35 AM | Permalink

      “Don’t get me wrong, there’s nothing fundamentally wrong with being a jerk. Many people are and I can be one myself at times (this might be one of them), but I certainly don’t need to interact with them.”

      He’s talking about who exactly? Mann? Laden? Tamino?

    • Nick Stokes
      Posted Oct 1, 2014 at 4:41 AM | Permalink

      “In other words, there is nothing wrong with “being a jerk”, but you avoid interacting with people who just might give it back in the same measure. What a coward!”

      He may have been thinking of this:
      “Last few days I’ve released quite a big portion of your sad commentary from the moderation. Since you don’t learn anything, from now on I won’t do it, and let Steve completely decide what portion of your spam is worth publishing here.”

      That’s not exactly courageous.

      • taget
        Posted Oct 1, 2014 at 6:07 AM | Permalink

        Nick, with all due respect, given your string of clear mistakes in recent threads – including your rather embarrassing claim of a “black box” – and your craven refusal to acknowledge any of your errors, any objective observer would conclude that you have been treated with kid gloves. I applaud Steve for continuing to let you post, though I am sure it must be tempting to ban your dishonest nonsense.

        In the past you have been a highly partisan yet competent and generally honest (Tiljander excepted) advocate for warmists. In recent days, you have compromised both your competence and honesty.

      • david eisenstadt
        Posted Oct 1, 2014 at 6:09 AM | Permalink

        Look, Nick:
        you claim that a routine written in R is a “black box”… you are shown, led to the documentation you wished existed, and you ignore it.
        That’s the equivalent of spam. Why waste our time with your BS?
        Now, we all know that you are more than smart enough to understand just what you’re doing when you prevaricate and dissemble as you have… why don’t you walk it back a bit, and just try being straightforward with your arguments, instead of the arm waving and red-herring pitching that you’ve engaged in thus far. Really, it’s beneath you.

      • Steven Mosher
        Posted Oct 1, 2014 at 10:08 AM | Permalink

        refusing to admit you were

        1. wrong about the black box
        2. incompetent in R
        3. Wrong about the documentation.

        when these don’t even matter is cowardly and telling

    • Szilard
      Posted Oct 1, 2014 at 4:49 AM | Permalink

      But look … getting into these blog vs blog wars is a real trap. It doesn’t matter what the subject is: science, stocks, door-knob collecting. The debates always degenerate into the same useless, endless, tail-chasing rhetoric, in the end driven by the lowest-common-denominator types who don’t have anything better to do, while everybody else leaves in disgust.

      Articulate the case, debate with those who are worth it, ignore the rest. Save emotional energy for where it means something, i.e. real life.

      Just my 2c. Would hate to see this place turn into the usual Web kindergarten …

    • MikeN
      Posted Oct 1, 2014 at 6:09 PM | Permalink

      That would put him in the same company as William Connolley, Ari Jokimaki, and Arthur Smith, who started an investigation into Tiljander, fresh off a victory over Mosher, and then said he would get back to it later if the skeptics didn’t annoy him in the meantime.

    • Posted Oct 2, 2014 at 10:44 AM | Permalink

      Apparently his attention to detail in his study of MM05 has not yet got as far as spelling the author’s name correctly. Amusingly, later in the same post he refers to someone called “Stephen Mosher”.

      • Steven Mosher
        Posted Oct 2, 2014 at 9:37 PM | Permalink

        my evil twin

  8. Posted Oct 1, 2014 at 4:16 AM | Permalink

    I think there’s an additional issue with the post you’re discussing. Yesterday I commented:

    Steve, I have a question. Nick Stokes has a (in my opinion ridiculous) post up which uses an emulation of your MM05 Figure 1. The version he posts has some differences though. Notably, his emulation begins at ~.175 whereas yours begins at slightly over .2, and his ends at a higher point than yours. The effect is his emulation shows the most recent part of the reconstructed series is unprecedented whereas yours shows the first 30 years surpass it.

    Do you know what causes this discrepancy? My assumption right now is it’s a matter of smoothing, but I’m not sure what smoothing would produce which results.

    But if you look at Jean S’s comment just above, you see that description is not currently accurate. My belief is Nick Stokes changed the figure after he saw my comment pointing out the issue and hasn’t disclosed the change. Unfortunately, I didn’t think to save a copy of the image/page when I looked at it earlier, so I can’t prove that beyond any doubt.

    I can, however, show what I believe to be fairly compelling evidence. Stokes has a previous post in which he said he had emulated MM05EE’s Figure 1. His emulation is currently shown as this image. It does not match the figure published by Steve McIntyre and Ross McKitrick, even though that figure is shown in the very same post. Even more telling, there is this figure in the same post which is a near direct predecessor to the image I discussed in the quoted comment. It also shows the same discrepancy I described yesterday.

    The difference between the two versions of the figure isn’t particularly important, but it would be rather interesting if I’m right in believing Nick Stokes replaced an incorrect version of a figure without informing anyone.

  9. Posted Oct 1, 2014 at 5:07 AM | Permalink

    I threw together a quick post highlighting what I believe to be another issue with the post being criticized on this page. In a post he titled, “What Steve McIntyre won’t show you – now,” Nick Stokes apparently posted an incorrect figure, which he has secretly replaced with a corrected version because I pointed out the problem with it. This led me to note:

    By the way, the title of the post I claim this deception happened in is, “What Steve McIntyre won’t show you – now.” It apparently included an image Nick Stokes won’t show you… now.

    I probably should have made a joke about the fact Stokes said of it:

    I think this has become a very inconvenient graph.

    I guess that might explain why he’d get rid of it.

    • Nick Stokes
      Posted Oct 1, 2014 at 5:13 AM | Permalink

      Brandon,
      Someone complained that the lines were indistinct. I replaced the 2 pixel width with 3 pixels. That is all.

      • Posted Oct 1, 2014 at 5:23 AM | Permalink

        Nick Stokes, you’re free to claim that. I believe the evidence strongly suggests otherwise. I’ve shown the discrepancy exists in past versions of your same “emulation,” including a figure which is a nearly direct predecessor to the one in question. The only way I can see your claim being believable is if you’ve posted a note somewhere informing people you’ve updated your emulation to correct for some problem.

        I don’t even know how I could be wrong about this. I only looked for figures of your “emulation” after I saw a discrepancy. If the discrepancy was purely a figment of my imagination, how did I happen to find the exact same one in the previous versions of your figures? And what, I just happened to correctly call that the figure had been changed, but somehow was completely wrong about what the change was?

        That’s pretty implausible. Show us your updates which informed your readers you changed your “emulation” so they should focus on the new results, not the old ones. Then maybe it will seem a bit more believable.

  10. Posted Oct 1, 2014 at 6:08 AM | Permalink

    Brndon,
    I’ve checked. I think you’re right and the difference is due to Gaspe. The current version excludes Gaspe, in agreement with MM2005. The version you had included it. The code is from a post where I was comparing.

    I’ve linked the earlier version and more details in a comment at your blog.

    • sue
      Posted Oct 1, 2014 at 8:26 AM | Permalink

      Nick, does your ‘current’ version also end earlier than your ‘earlier’ version? Seems to me part of it has been cut off. And why wouldn’t you note this change of versions in your post rather than posting here that you’ve posted a link at Brandon’s?

      • Posted Oct 1, 2014 at 8:44 AM | Permalink

        sue,
        Yes, I will note it. I’ve been responding to some comments at my blog.

  11. Stephen Richards
    Posted Oct 1, 2014 at 6:11 AM | Permalink

    Nick: Stop digging before the sides collapse in on you!!

    • jorgekafkazar
      Posted Oct 1, 2014 at 6:56 PM | Permalink

      “It’s just a flesh wound.”

    • angech
      Posted Oct 10, 2014 at 6:35 AM | Permalink

      Would it not be better if he keeps digging? IMHO

  12. knr
    Posted Oct 1, 2014 at 6:55 AM | Permalink

    It’s hard to keep up with ClimateBaller fantasies and demoralizing to respond to such dreck.

    To be frank it’s pointless too; there is no way hard-core believers can deal with the challenge you offer them except to resort to insults. No matter how much you try, no matter how much time you spend, no matter how many you kill, there will always be flies to buzz around you.

    • Gary
      Posted Oct 1, 2014 at 7:42 AM | Permalink

      I’m curious to understand why there’s continuing response to Stokes’ trolling behavior. It’s obvious by now he won’t change so why not just ignore him? No need for banishment; just roll your eyes and move on.

      • Posted Oct 1, 2014 at 7:58 AM | Permalink

        Gary, look at what happened with Nick Stokes’s depiction of the Wegman Report. It became a meme spread amongst a group which has made an issue of it whenever they could. Had the current discussion been held when he first posted about the Wegman Report, that might have been headed off. If not, at least more people who heard the meme would recognize how it is wrong.

        There may be times when ignoring him is a good idea. There are definitely times when it isn’t though.

        • William Larson
          Posted Oct 1, 2014 at 12:35 PM | Permalink

          As I see it, it’s like that scenario in which a kid throws a rock through a window of your factory. OK, it’s for the kid to fess up and get it fixed, but nevertheless you the factory owner cannot wait for that magical day–it most likely will never come. You have to go and replace the window yourself at your expense, and keep on replacing each window every time this subsequently occurs, otherwise at the very least you send a message that it’s all right to break your windows: if it weren’t all right, you would certainly replace them when it happened. You have to do this every time, even though “it’s not your fault” and even though “it’s not fair”. So too here: SM needs to keep dealing with this “dreck”, regardless, otherwise he sends a message that he doesn’t care about the truth and about the validity and integrity of his work.

          And again as I see it, Mr. Stokes is attempting here what might legitimately be called “intellectual bullying”. One can’t give in to bullies. Plus, it is axiomatic that “all bullies are cowards”. I think that describes Mr. Stokes here–for example, he hasn’t the courage to admit when he is wrong.

        • MattK
          Posted Oct 1, 2014 at 3:01 PM | Permalink

          Not commenting on it gives them more ammo. How many times have you seen Stokes mention that Steve is ignoring something or that Steve finds it inconvenient to talk about?

          Basically, it is: I found a problem, Steve won’t talk about it, so that proves I am right.

          Then after Steve deals with what is claimed to be the problem, they come back later with a subset of the problem that wasn’t explicitly covered and show that he is still ignoring something.

          Once it has finally been revealed that it was all a bunch of crap, the people claiming it will say it doesn’t matter anyway while refusing to change anything that they claimed in the first place.

        • Gary
          Posted Oct 1, 2014 at 3:08 PM | Permalink

          Brandon, the operative word is “continuing.” Sure, refute, and once again if you have to, but then be done with the troll on any iteration of the same issue. He’s only into getting attention.

  13. PhilH
    Posted Oct 1, 2014 at 8:51 AM | Permalink

    Hey Nick, you left the “a” out of Brandon.

  14. pottereaton
    Posted Oct 1, 2014 at 9:11 AM | Permalink

    It appears that Brandon has kept his eye on the pea and chosen the correct thimble.

  15. Kenneth Fritsch
    Posted Oct 1, 2014 at 10:06 AM | Permalink

    I have made a standardized composite of the 212 North American Tree Ring proxies used in MBH 98. There are 70 proxies back to 1400 and 142 that start later. The 70 proxies are the ones discussed most often and of late at CA. I standardized by subtracting a mean for each proxy (I used 1620-1970 and the entire 1400-1980 period and saw no perceptible difference in results) and dividing by the anomaly standard deviation. I then made a composite of all the standardized proxies by taking a mean for each year from 1400-1980.

    I fit the best arfima model of ar = 0.136, ma = 0, d = 0.232 and sd = 0.28 and did simulations (with fracdiff in R), which I show with the composite series in the first link below. Using an ARMA model, the best fit was ar = 0.71, ma = -0.38, sd = 0.28. I attempted to compare these models using fracdiff and the log-likelihoods generated and found a slight preference for the arfima model, but not by much. The simulations of the arfima and ARMA models look much alike. The composite series and the simulations of it show meandering series with no long-term secular trends but rather shorter-term trends that appear randomly, as noise in a series. It would be difficult to impossible to see how one could derive a hockey stick from this composite without some selection process that biases towards that shape.

    It is important to note that my exercise here makes no claim or attempt to provide a geographically weighted look at proxy responses, but rather aims to determine what shape a random composite of proxy series would produce. I have a second link below to an Excel file with 7 worksheets that contains data, graphs and R code for this exercise. I have plotted all 212 proxy series and the eigenvector combinations of #1 and #2 and of #1, #2, #3, and #4 from Singular Spectrum Analysis. One can readily see from these plots that the composite of 212 proxies is made up of individual proxies that have various levels and combinations of white and red noise, and some with secular trends that overall appear random. It is interesting that these random-appearing trends are not always random in time location or by proxy type/species, and thus the 70 proxies can have more upward-ending trends than downward, while in the other 142 proxies the opposite is seen. In other words, one needs a goodly number of proxies to see these trends cancel out or appear as noise in the composite series.

    I think one could argue that the starting premise to be tested with these proxies as thermometers should be what we would expect from a composite if the proxies were poor or totally invalid thermometers, and not the premise used by those doing temperature reconstructions, which apparently is that some proxies are reasonably good thermometers and we only need to select those ones ex post facto.
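
    For readers who want to reproduce the flavour of this exercise, a condensed R sketch follows. The years-by-series matrix proxies is a hypothetical stand-in for the NATR series, and the sketch covers only the standardize, average, fit and simulate steps described above:

      # Standardize each proxy as anomalies from a base period, scaled by the
      # anomaly standard deviation; average across proxies; fit an ARFIMA model
      # with fracdiff; simulate comparison series.
      library(fracdiff)

      yrs  <- as.numeric(rownames(proxies))
      base <- yrs >= 1620 & yrs <= 1970                 # base period for anomalies
      std  <- apply(proxies, 2, function(x) {
        a <- x - mean(x[base], na.rm = TRUE)
        a / sd(a, na.rm = TRUE)
      })
      composite <- rowMeans(std, na.rm = TRUE)          # yearly composite in SD units

      fit  <- fracdiff(composite, nar = 1, nma = 1)     # estimate d, ar, ma
      sims <- replicate(9, fracdiff.sim(length(composite),
                        ar = fit$ar, ma = fit$ma, d = fit$d)$series)
      # (Innovation scale defaults to 1 and would be matched to the fitted
      # residual scale before plotting against the composite.)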

    • tty
      Posted Oct 1, 2014 at 12:09 PM | Permalink

      This could be the basis for a new parlor game: “Pick the real proxy”

      • Beta Blocker
        Posted Oct 1, 2014 at 12:41 PM | Permalink

        tty: This could be the basis for a new parlor game: “Pick the Real Proxy”

        As the late Dr. Gall, inventor of the Gall Bladder, once remarked, “Life is like a sewer — what you get out of it depends on what you put in to it.”

    • Kenneth Fritsch
      Posted Oct 1, 2014 at 3:47 PM | Permalink

      I am hoping that the link below will connect interested parties to the Excel file noted above and titled: MBH1998_Proxy_Analysis

      https://www.dropbox.com/home?select=MBH1998_Proxy_Analysis.xlsx

      • MikeN
        Posted Oct 1, 2014 at 4:43 PM | Permalink

        Can you drop the x at the end?

        • Kenneth Fritsch
          Posted Oct 1, 2014 at 6:21 PM | Permalink

          Does the x at the end cause a problem for you? The link works for me.

        • MikeN
          Posted Oct 1, 2014 at 9:21 PM | Permalink

          I read somewhere that the purpose of the x is to make it incompatible with OpenOffice.

        • Posted Oct 3, 2014 at 9:28 AM | Permalink

          Actually it’s just Microsoft trying to force people to upgrade. But I would not be surprised if they had that as an unstated motive (if they stated it, they would get in trouble over monopoly issues).

        • TimTheToolMan
          Posted Oct 1, 2014 at 10:42 PM | Permalink

          an xlsx extension means Office 2007 (or higher)

        • svs
          Posted Oct 3, 2014 at 10:00 AM | Permalink

          The xlsx (and xlsm and xlsb) files are actually different formats internally from the older xls format. Just renaming one to xls will only confuse Excel.

        • Posted Oct 3, 2014 at 3:31 PM | Permalink

          Just do a “Save as” and select the older XLS format.

        • Duke C.
          Posted Oct 3, 2014 at 10:40 AM | Permalink

          Zamzar will convert xlsx to xls. It’s free.

          http://www.zamzar.com/

      • Harold
        Posted Oct 2, 2014 at 11:45 AM | Permalink

        “X” means a pseudo-XML format instead of binary. If you have an old version of Excel that won’t read xlsx, download LibreOffice (free) and use its Calc spreadsheet. It’ll read and write anything.

    • Kenneth Fritsch
      Posted Oct 6, 2014 at 3:09 PM | Permalink

      After doing a composite plus scale (CPS) for the 212 North America tree ring (NATR) proxy series from MBH 1998, with no intention of yielding a weighted geographic regional temperature but rather of testing the assumption that these proxies contain very little or no temperature signal that can be separated from the noise, I wanted to do the same with all 1209 Mann 2008 proxy series. Recall that the NATR CPS showed no discernible temperature signal but rather red/white noise, due to autocorrelations and shorter-term random secular trends in the underlying proxies, that could be fit well with an ARMA or ARFIMA model.

      The Mann 2008 temperature reconstruction includes 71 Luterbacher proxies which in turn include the modern instrumental record. These proxies needed to be eliminated from the composite in order to look at the true proxy responses. The 4 Tiljander proxies, which by consensus should not be used as temperature proxies, were removed also. The CPS for the 1134 proxies was calculated by standardizing the individual proxy series on the baseline period of 1799-1930, where all the proxies had data, and taking a mean of the standard deviation units of all the individual standardized proxies. The composite covered the period 1400-1995.

      It should be noted here that the 105 Schweingruber MXD series (SMXD) were truncated at 1960 because the proxies exhibited divergence. I attempted to track down which of these proxies corresponded to any in the International Tree Ring Data Base and was only able to find a small portion. If I had found the full SMXD series I would have used those in place of the truncated ones. Failing to find many of these proxies, I simply left the truncated ones in the composite, with the proviso that, had the diverging data been included for the last 30-35 years or so (depending on when each SMXD series ended), the composite would have tended to be more downward sloping.

      Also to be noted is that I used the original Mann 2008 proxy data and not the in-filled version, where all proxies were in-filled by an unknown process from their end date to 1995.

      The composite and the ARMA and ARFIMA model simulations, with a spline smooth, are shown in the link below. It is important to note here that the y axis is in units of standard deviation and not degrees centigrade. I show the GISS global temperature series and a spline smooth in standard deviation units for comparison. An eyeball comparison would put one standard deviation unit for the composite at something around 0.25 degrees C and therefore the range of the composite series at something like -0.1 to 0.1 degrees C. That appears to be a shockingly low variation for the composite and means that the upward trend at the end is very insignificant, even when discounting the effects of the truncation of the 105 SMXD series. I have double-checked these results and found no errors – at least to this point.

      Whatever the degree C units should be for the composite, it is readily shown, as in the case of the 212 NATR composite, that a calculation based on the assumption that the proxies have no discernible temperature signal bears out that assumption.
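
      A minimal sketch of the exclusion and baseline-standardization steps described above, assuming a hypothetical years-by-series matrix prox (rows 1400-1995) and a hypothetical character vector grp giving each series’ group label:

        # Drop the instrumental-based Luterbacher series and the Tiljander
        # series, then standardize on the common 1799-1930 baseline and
        # average into a composite-plus-scale (CPS) series in SD units.
        keep <- !(grp %in% c("Luterbacher", "Tiljander"))
        P    <- prox[, keep]

        yrs  <- as.numeric(rownames(P))
        base <- yrs >= 1799 & yrs <= 1930               # baseline period with full coverage
        stdz <- apply(P, 2, function(x) {
          a <- x - mean(x[base], na.rm = TRUE)          # anomaly vs the baseline mean
          a / sd(x[base], na.rm = TRUE)                 # SD units over the baseline
        })
        cps <- rowMeans(stdz, na.rm = TRUE)             # composite, 1400-1995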

      • Layman Lurker
        Posted Oct 6, 2014 at 4:05 PM | Permalink

        Thanks for that Kenneth. You are aware of my views of LF distortion in the Schweingruber MXD – possibly even accounting for the divergence. Curious (if you are inclined) what the composite would look like if those series were excluded.

        • Kenneth Fritsch
          Posted Oct 6, 2014 at 5:07 PM | Permalink

          Layman, I was contemplating excluding all 105 Schweingruber proxies, but I wanted to make the point of the truncation at 1960. I can easily do a CPS without those proxies. Actually your analysis, as I recall, found the divergence could have started in the 1920s. That is a critical point because that would tend to counter the arguments made by those who go to great lengths to connect the divergence to something in the modern warming period and thus claim that the cause of divergence was less likely to occur in the past and is somehow unique to the warming period.

      • Kenneth Fritsch
        Posted Oct 7, 2014 at 9:48 AM | Permalink

        I did a composite plus scale (CPS), by the methods described in the post above, on the Mann 2008 temperature reconstruction proxies (before in-filling), after excluding the 71 Luterbacher proxies (instrumental), the 4 Tiljander proxies (not a temperature proxy for the modern era) and the 105 Schweingruber MXD proxies (truncated at 1960 in Mann 2008 due to divergence). I show the composite for 1400-1995 with a spline smooth and 9 simulations using a well-fitting ARFIMA model in the first link below.

        In the second link below I show the 2 sigma confidence intervals (CIs) for the composite and the number of proxy data used each year (1400-1995) in calculating the composite. I was shocked by the very large range of the CIs compared to the mean composite series and the spline smooth. I have not done CIs for CPS before, but I could not find any basic error in the method used. The method was merely calculating the row standard deviations from the standardized proxy matrix, where the columns were the individual proxy data standardized in units of standard deviations.

        The resulting CIs would be expected from individual proxies which gave very random responses each year around a mean close to 0. I want to do some more simulations in the near future, but I cannot see where there is any discernible temperature signal in this group of proxies. I judge that it takes numerous manipulations of the proxy data (tricks) used in Mann 2008 to make the data appear to have a signal.
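
        For reference, the per-year 2-sigma band for a composite mean uses the per-year standard deviation divided by the square root of the number of reporting proxies; the 12 Oct follow-up below notes that this square-root division was inadvertently omitted from the CIs plotted here. A minimal R sketch, assuming stdz is the standardized years-by-proxies matrix from the earlier sketch (hypothetical name):

          # Per-year confidence band for the composite mean.
          n_yr  <- rowSums(!is.na(stdz))                           # proxies reporting each year
          mu    <- rowMeans(stdz, na.rm = TRUE)                    # composite mean per year
          se    <- apply(stdz, 1, sd, na.rm = TRUE) / sqrt(n_yr)   # standard error of the mean
          ci_lo <- mu - 2 * se                                     # approximate 95% band
          ci_hi <- mu + 2 * se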

        • Posted Oct 7, 2014 at 2:24 PM | Permalink

          Kenneth,

          What, if any, survivor bias is introduced by that large drop-off in the number of proxies?

        • Kenneth Fritsch
          Posted Oct 7, 2014 at 4:47 PM | Permalink

          “What, if any, survivor bias is introduced by that large drop-off in the number of proxies?”

          Not sure I understand the question. The drop-off starts at about 1975 and this is the original, not the in-filled, data. Remember the authors of Mann 2008 felt compelled to in-fill every proxy with missing data up to 1995. That action makes a farce of the reconstruction, particularly since the time period it covers is the modern warming period, where we most want to know how the proxies respond. As the data become more sparse at the end of the reconstruction, a few proxies can have more influence.

        • Posted Oct 8, 2014 at 12:51 PM | Permalink

          In medical studies, knowing how survivor bias can influence a study affects whether it is taken seriously or not (think what’s different between those who died versus those who didn’t and if that affects the inferences or conclusions made in the study). It would be good to know, if possible, if there are systematic differences between the proxies that survive to the end and those that don’t. Is there enough data to do a t-statistic (difference between the means) or other methods to test for survivor bias?

        • Kenneth Fritsch
          Posted Oct 12, 2014 at 10:16 AM | Permalink

          As a follow-up to showing the original Mann 2008 proxy series, without in-filling and without the proxy groups Luterbacher, Schweingruber and Tiljander (excluded for reasons already discussed in detail above), and a composite plus scale (CPS) of the individual series, I want here to show these 1029 proxies divided into 40 groups: 39 groups formed by proxy type and locality/region with at least 5 proxies per group, and a 40th group made up of the remainder of the proxies not fitting into those 39. The remainder group was drawn from different types of proxies (187) dispersed around the globe. As a result, the 39 groups should give a picture of local historical temperature if the proxies had a discernible temperature signal, while the 40th group should do the same for historical global temperatures.

          In the first 4 links/graphs below I show the series for the 40 groups, each labeled such that the group can be identified from the Mann 2008 SI, with the 40th group identified as Remain. The picture presented by these group series is that the groups as a whole show no discernible temperature signal, particularly in comparison to the earlier-shown simulated ARMA and ARFIMA models. It should also be remembered that the y axis is in standard deviation units and converting those units to degrees C would reduce the scale.

          I calculated a CPS of the 40 proxy groups for comparison with the earlier-presented CPS of the individual series, and those graphs are presented in the last link/graph below. This comparison should show any discrepancies that might arise from the lack of area weighting in the CPS for individual proxies. I show the upper and lower 95% confidence intervals outlined in red and shaded grey. If I try to show the actual series with the CIs, the graph becomes difficult to read. I have also included a spline smooth of the CPS series in both graphs. The two graphs are very similar and are in line with what one would expect from combining proxies that respond to variables other than temperature, in a manner that tends very much toward a noisy series without a signal, and particularly without a discernible temperature signal. The variability of the CPS series should be evaluated against the simulations of ARMA and ARFIMA models shown previously.

          It should be noted that the CIs for the CPS from individual proxies shown here are much reduced from those shown previously. In the previous graph I inadvertently did not divide by the square root of the number of data points used in determining the mean. Making this error doubly egregious was my exclaiming surprise that the CI ranges were so wide.

        • Layman Lurker
          Posted Oct 12, 2014 at 2:24 PM | Permalink

          Interesting work Kenneth. Thanks for posting.

  16. Posted Oct 1, 2014 at 11:30 AM | Permalink

    Nick Stokes has added an update which misrepresents things:

    Update – this graph changed slightly, as Brandon noted. It comes from an earlier post comparing what happens in 1400-1450 with and without Gaspe cedars. The first posted was with, which corresponded to MBH, and so is more appropriate, in my view. The second agrees, as you can see, with M&M. Since the difference doesn’t affect my point, I’ll let that stand. The change was not intended – I was widening the smoothed curves for visibility. The first version is here.

    The earlier post does not just compare what happens “with and without Gaspe cedars.” It does give focus to that, but it also shows another issue:

    The bottom panel shows the effect of changing the offest mean to centered. I’ll say more about that in a future post.
    Now I’ll plot the same data but superimposed….
    I’ve made the panel 3 plot faint, since it isn’t the current subject.

    Stokes claims to have emulated the figure by Steve McIntyre and Ross McKitrick, but his third panel is clearly different than theirs. He knows this as he’s acknowledged the point in reference to his recent post. He offers no explanation as to why, despite knowing his earlier post has errors in it, he’s not adding any sort of note or disclaimer to the post. Instead, he pretends the old post simply didn’t cover the issue where he messed up.

    Additionally, Stokes says:

    The first posted was with, which corresponded to MBH, and so is more appropriate, in my view. The second agrees, as you can see, with M&M.

    Which is difficult to interpret as there’s apparently a word missing after “was with.” While we may not be able to tell just what Stokes was trying to say, we can be fairly certain it’s not true as there was nothing “which corresponded to MBH” in his first version. All he did was show the effect of fixing the PCA calculation and not extending the Gaspe series separately, choosing not to show the combined effect.

    • MikeN
      Posted Oct 1, 2014 at 2:48 PM | Permalink

      Brandon, he clearly means was with Gaspe.

      • Posted Oct 1, 2014 at 2:55 PM | Permalink

        MikeN, I’d like to assume that’s what he meant, but it’s hard to, given that he went on to refer to it as that “which corresponded to MBH.” I don’t see how one can claim the centered PCA, with the duplicated and extended Gaspe series, “corresponded to MBH.” I don’t even know what that would mean.

    • thisisnotgoodtogo
      Posted Oct 1, 2014 at 3:58 PM | Permalink

      I appreciate Brandon’s efforts to keep an eye on the underhanded tactics of some of the lesser figures whose stars are apparently rising as Team Defenders – rising due to misrepresentations they promote.

  17. Posted Oct 1, 2014 at 12:53 PM | Permalink

    Steve,

    You continue the good fight with reason and rationality. Your opponents continue to name-call and attack. Reasonable people can see what is going on.

    I hope you will soon feel less pressure to defend yourself. This remarkable website tells the story more than adequately. Anyone who is honestly curious, who has no personal or financial investment in furthering “scientific lies” has sufficient material here to answer every detail.

    Many people are deeply grateful to you for your selfless work.

  18. Tom T
    Posted Oct 1, 2014 at 2:31 PM | Permalink

    Is it just me or is Nick’s claim of

    (McIntyre) “said very little about this recon [MM05-EE] since it was published” and speculating that I’ve been concealing these results because they were “inconvenient”.”

    Remarkably similar to his earlier claim of

    “The thing is, hosking.sim is the blackest of boxes. You put in the data vec, you get back one simulated vec. No further information about what acf it calculated, the arfima, nothing. And there is virtually no documentation except for Hosking’s 1984 paper. Hosking didn’t write the code.”

    Nick, stop digging. Your pattern is all too apparent to anyone who isn’t brain dead. You aren’t paid to convert the undecided. You are paid to keep the faithful with the flock. When will you realize that you are the rear guard of a retreating force?

    • Steven Mosher
      Posted Oct 1, 2014 at 3:15 PM | Permalink

      yes, steve “said” very little. another climateballer move

  19. MikeN
    Posted Oct 1, 2014 at 2:50 PM | Permalink

    Nick, if you want to compare the effect of centering, then why not overlay plots 2&3, removing the extra Gaspe influence?

    • Nic Stokes
      Posted Oct 1, 2014 at 5:30 PM | Permalink

      MikeN,
      “why not overlay plots 2&3”
      Well, I think a comparison is meaningless unless it actually includes the MBH98 result. I think the right thing to do is compare decentering on the same datasets – i.e. with Gaspe in both cases. Somewhat inadvertently, I am now doing it the M&M way, with Gaspe removed 1400-1450. That may be better in terms of faithfulness to Fig 1, so I’m staying with it. It’s irrelevant to my point about hockey sticks.

      • MikeN
        Posted Oct 1, 2014 at 6:13 PM | Permalink

        It’s not irrelevant if you are lumping in two factors at once. You have simply declared that Gaspe is an issue for pre-1500, so ignore that, when that is where the difference appears.
        MBH98 vs. MBH98 with proper centering is also a good comparison, though then you have to get into the question of which PCs.

  20. kim
    Posted Oct 1, 2014 at 4:40 PM | Permalink

    The warhorse stokes his
    Sentimental siege gun fires
    Smoke obscures the wall.

    H/t Plum.
    ======

  21. Posted Oct 1, 2014 at 4:42 PM | Permalink

    I applaud Steve’s manner and technique, for to me he is channeling Sun Tzu, who said “Build your opponent a golden bridge to retreat across.” @Jean S, I suggest that your frustrations with Nick need to be carefully tempered lest you become what you recoil against. Nick has an audience and your impatience will only provide evidence to them of the malfeasance in us.

  22. Posted Oct 1, 2014 at 7:19 PM | Permalink

    Some comments over at Anders’s blog amuse me too much to not respond, so I’ve written a post you guys might find interesting. The wrap up:

    So Michael Mann creates test series by using the autocorrelation structure of the actual data without detrending. McIntyre and McKitrick respond by doing the same thing. KR thinks that means McIntyre and McKitrick are horrible people who don’t know what they’re doing. He doesn’t have a problem with Mann though.

    And that is how he proves himself to be a Climateballer. It’s not hard. You can do the same. Many others have.

  23. willnitschke
    Posted Oct 1, 2014 at 8:14 PM | Permalink

    This is a technical blog and many who read it, like myself, are not familiar with the technical background to assess the various merits of such a debate. I also enjoy reading counterpoints, but I’ve ignored Nick Stokes comments for a long time now. The ignoring started when he cited a comment by Roy Spencer in support of one of his claims. I looked up the citation and noted that the full quotation contradicted, rather than supported, his claim. I like to give people the benefit of the doubt, and my assumption was that he had googled around until he found a quote that supported what he already believed and, as soon as he found something, he stopped reading. No problem, and when I pointed this out to him a simple “my bad” would have sufficed. But that was not to be. Stokes actually dug his heels in, even though I provided the full citation, which he never addressed. He changed topics, went off on irrelevant tangents, obfuscated and rambled, and indulged in copious nonsense on an issue that was, frankly, black and white. And that’s why I ignore Stokes. Counterpoints are one thing. Intentional misinformation in the service of propaganda is another matter entirely.

    • mpainter
      Posted Oct 1, 2014 at 10:07 PM | Permalink

      A very peculiar disability this man is afflicted with. I am always amazed when I see people who know him well lose their patience with him, as if they expected him to respond in a reasonable manner, like the rest of us.

  24. Posted Oct 1, 2014 at 8:39 PM | Permalink

    Reblogged this on I Didn't Ask To Be a Blog.

  25. miker613
    Posted Oct 1, 2014 at 10:47 PM | Permalink

    Mentioned a number of times recently at ATTP:
    “Indeed. McIntyre and McKitrick should have done what Michael Mann did and used an objective rule to calculate how many PCs to keep.” “In the former case, 2 (or perhaps 3) eigenvalues are distinct from the noise eigenvalue continuum. In the latter case, 5 (or perhaps 6) eigenvalues are distinct from the noise eigenvalue continuum.”

    Is this the same thing as the “Preisendorfer Rule N” mentioned here recently? What does all this mean?

    Jean S: Yes, “Rule N” is the “objective rule” Mann claimed in 2004 he had used in MBH98. See here, here, and the end of this recent post.

  26. TAG
    Posted Oct 2, 2014 at 6:15 AM | Permalink

    As a layman, I’ve read and re-read these discussions about principal components, selection rules, hockey sticks and temperature reconstructions. However, isn’t the result of all of this mathematics that a set of trees in the southwestern US somehow serves as a proxy for the temperature history of North America? This occurs despite the fact that the trees lack significant correlation with the temperature history of their local area.

    I know that I am just a layman and would be well advised to listen and not speak on these matters, but wouldn’t this singular property of these trees give one pause as to the utility of the method used?

    Steve: yes, you’re 100% right. Mann and his defenders try to turn the topic to Preisendorfer and other pseudo-mathematical things but avoid talking about whether the stripbark bristlecones are valid temperature proxies that are adequate to measure world temperature, as Stokes in his turn has done.

    • Beta Blocker
      Posted Oct 2, 2014 at 2:05 PM | Permalink

      Re: TAG (Oct 2 06:15),

      Steve, if one examined all of the independent temperature reconstructions which purport to verify the basic hockey stick shape of Mann’s reconstructions, how many of those reconstructions follow the same fundamental approach?

      That is to say, how many of these reconstructions bury a subset collection of proxy data already having a basic hockey stick shape inside of a thick groundmass of other supposed proxy data, and then dig that basic hockey stick shape back out of the groundmass using a methodologically deep, complex, and largely opaque process — albeit with an end result which is somewhat reshaped in comparison with the original as a result of having been buried and then re-extracted?

    • William Larson
      Posted Oct 3, 2014 at 9:01 PM | Permalink

      Plus, there was that thesis/dissertation by What’s-her-name, who went and re-cored the Sheep Mountain bristlecones and could not replicate Graybill’s results. (This was posted at CA at the time, but I am too lazy to go find it in order to cite it properly, plus shame on me for forgetting her name.) So even those “all-important” bristlecone tree rings could not withstand the test of reproducibility.

      Steve: Ababneh. Her data remains unarchived. I was contacted offline by someone who knew her who said that she was a fine person.

      • Ed Snack
        Posted Oct 4, 2014 at 3:39 AM | Permalink

        It’s Linah Ababneh, she’s currently at the Cornell Tree Ring laboratory. Her work was also withheld at some time in the past (not sure if still correct) for “legal reasons”. All that I have read about the work is that she did a fairly thorough job of sampling bristlecones and specifically avoided over-sampling strip bark examples; and the resulting chronologies contain no “hockey-stick”. So it is “not in the public interest” to reveal her work to the general public “in case they get the wrong ideas”.

        • barn E. rubble
          Posted Oct 4, 2014 at 8:39 AM | Permalink

          RE: Posted Oct 4, 2014 at 3:39 AM | Permalink | Reply
          “It’s Linah Ababneh, she’s currently at the Cornell Tree Ring laboratory.” . . .

          They are doing some interesting work. Particularly in the area of ‘Dendro Data Standards’.

          http://dendro.cornell.edu/projects/datastandards.php

          “As part of a wider community initiative Cornell has contributed to the development of a universal tree-ring data standard – TRiDaS.”

        • Posted Oct 4, 2014 at 8:54 AM | Permalink

          The Ababneh Thesis was the first CA post in October 2007. Or try /tag/ababneh after the climateaudit.org

        • barn E. rubble
          Posted Oct 4, 2014 at 9:46 AM | Permalink

          RE: Posted Oct 4, 2014 at 8:54 AM | Permalink
          “The Ababneh Thesis was the first CA post in October 2007. . .”

          Thanks for the link. An interesting read, particularly since Willis Eschenbach (whom I’ve since enjoyed reading but didn’t know of then) has the last comment, posted Oct 22, 2007: “I loved the Abaneh quote: . . .”

          I note from their web site: (http://www.tridas.org/download.php) the first item under, “Information about TRiDaS” is the following:

          “Article describing TRiDaS: JANSMA, E., BREWER, P. W. & ZANDHUIS, I. (2010) TRiDaS 1.1: The tree ring data standard. Dendrochronologia 28, 99-130.
          Note that the version of the standard described in the document has now been superceded, however, much of the discussion is still valid.”

          I haven’t had time to find out what exactly has been superseded yet . . . were there any other follow-up CA posts? I noticed one trackback to a WUWT post.

        • Posted Oct 4, 2014 at 11:29 AM | Permalink

          Tag Archives: Ababneh shows five subsequent posts and isn’t guaranteed to be comprehensive.

  27. PaddikJ
    Posted Oct 2, 2014 at 12:27 PM | Permalink

    Careful mate, lest they discharge their noses in your general direction.

  28. Willis Eschenbach
    Posted Oct 2, 2014 at 2:43 PM | Permalink

    Dang, I gotta admit, while it’s been fun watching the Stokesworm wriggle on the hook, my previous opinion of him (which was bad) has been shown to be far too optimistic …

    Nick, dear fellow, do stop digging. Your constant attempt to prove yourself right when you are wrong taints even those rare few times when you have been right …

    w.

    • Nic Stokes
      Posted Oct 2, 2014 at 4:28 PM | Permalink

      Willis,
      “Nick, dear fellow, do stop digging.”
      Thanks for the affection. This post is about my “sliming” Steve by suggesting that he has been vocal about decentered PCA causing perturbations in PC1, but quiet about his own demonstration that the end effect did not add hockey sticks to the reconstruction. I responded here with chapter and verse on how, in two days of Congressional Committee hearings devoted to the Wegman report, which, based on MM05, extensively pictured the changes to PC1, nothing was said about the MM05 E&E comparison. Not even when Stupak asked directly. Steve was there, made presentations, testified (twice), answered questions. Nothing.

      This has not been answered. Do you have an answer?

      Steve: Nick, as I wrote in the post, I presented the reconstruction results in an article in an academic journal, in blog posts at Climate Audit and in contemporary presentations, and I also attempted to get Ammann and Wahl to jointly present these results. Your assertion that my efforts in this respect were insufficient is incorrect and your effort to suggest bad faith on my part is contemptible.

      In my contemporary trip report, I was critical of the lack of due diligence by both Wegman and the NAS panel, especially the NAS panel’s use of bristlecone-based reconstructions after saying they should be avoided. I also agreed with the criticism that the impact on reconstructions should have been checked – a criticism that applies to NAS as well:

      Speaking of both the NAS panel and Wegman report, it amazes me how little due diligence is actually done in these sorts of reports. The only due diligence that the NAS panel did was to check the tendency of the PC method to make hockey sticks. Everything else is just a literature review – by slightly less biased people than usual, but still just one more literature review. For example, as I point out, they reject bristlecones as a proxy, but do not assess the impact of this.

      Wegman did more due diligence – they checked the biased PC method and also checked its impact on the North American network. It’s too bad that they didn’t also express opinions on the impact on reconstructions – I agree with Gulledge on that. However, Gulledge should then have been equally mad at the NAS panel who didn’t independently check this either. Wahl and Ammann and Rutherford et al cannot be used as evidence.

      As a panelist (and one with no prior experience at this sort of activity), I answered the questions that I was asked. The witness panels were separate – on the first day, when there was most of the action, North and Wegman were on the first panel; the committee was very interested in them and most of its interest was gone by the time that my panel was called up. They asked me a few questions, but their interest had been in what the two chairs had made of the dispute.

      The panelists realized very quickly that there was some kind of cock-up in Mann’s work – even Crowley didn’t really defend Mann – but also realized the issue of AGW didn’t disappear just because Mann might have cocked up. They were very attentive until they made up their minds on this question and then their interest moved on to other problems.

      I don’t recall my reactions to particular answers, but if, for example, I thought that North or Wegman gave an unsatisfactory answer, it would have been unthinkable for me to hurtle up to the panelist table and say that Wegman or North should have given some other answer.

      If you think otherwise, you’re on drugs.

      Another point: looking at my 2006 AGU Union presentation as a followup to the 2006 hearings, my focus was entirely on reconstructions, showing the effect of minor variations in assumptions on reconstructions, with only a passing mention of PC issues.

      You weren’t following the dialogue at the time and you haven’t read the contemporary exchanges. Why don’t you read the contemporary discussion?

      • Steve McIntyre
        Posted Oct 2, 2014 at 8:16 PM | Permalink

        I took a look at my presentation to the House Committee and it was to a very considerable extent focused on availability of data, the failure of scientists to live up to obligations of NSF grant requirements and the failure of NSF to enforce their policies.

      • Laws of Nature
        Posted Oct 4, 2014 at 1:03 PM | Permalink

        Dear Nick and others,
        “[..]Steve by suggesting that he has been vocal about decentered PCA causing perturbations in PC1, but quiet about his own demonstration that the end effect did not add hockey sticks to the reconstruction.[..]”

        I was looking into that old example where Mann cited Jolliffe as a backup (apparently misspelling his name) and then Jolliffe answered him. (I think it might be valuable to point it out to M. Steyn if he is not already aware of it.)

        I think this was the claim by Mann:

        http://www.realclimate.org/index.php/archives/2005/01/on-yet-another-false-claim-by-mcintyre-and-mckitrick/
        “[..]For specific applications of non-centered PCA to climate data, consider this presentation provided by statistical climatologist Ian Jolliffe who specializes in applications of PCA in the atmospheric sciences, having written a widely used text book on PCA. In his presentation, Jollife explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand.[..]”

        And here is Jolliffe’s answer to it if I am not mistaken:

        Ian Jolliffe Comments at Tamino


        “[..]It [the presentation] certainly does not endorse decentred PCA. Indeed I had not understood what MBH had done until a few months ago. Furthermore, the talk is distinctly cool about anything other than the usual column-centred version of PCA. It gives situations where uncentred or doubly-centred versions might conceivably be of use, but especially for uncentred analyses, these are fairly restricted special cases. It is said that for all these different centrings ‘it’s less clear what we are optimising and how to interpret the results’.[..]”

        If I read this correctly, Jolliffe is saying that of course you can decenter as a valid mathematical tool, but you lose your point of view by doing it, and the meaning of your result becomes very unclear.
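        To make the decentering point concrete, here is a minimal sketch of my own (it is not MM05, Wahl-Ammann or MBH code, and the AR(1) persistence, series count and 79-step “calibration” window are purely illustrative assumptions). It runs PCA on trendless red noise twice, once centered on the full-period mean and once decentered on the mean of the last 79 steps, and reports a crude hockey-stick index for PC1 in each case.

        import numpy as np

        rng = np.random.default_rng(0)
        n_steps, n_series, phi = 581, 70, 0.9        # illustrative sizes and AR(1) persistence

        # trendless, informationless AR(1) "proxies"
        X = np.zeros((n_steps, n_series))
        for t in range(1, n_steps):
            X[t] = phi * X[t - 1] + rng.standard_normal(n_series)

        def pc1(data, center):
            """Leading principal component after subtracting the supplied center."""
            A = data - center
            _, _, vt = np.linalg.svd(A, full_matrices=False)
            return A @ vt[0]

        calib = slice(n_steps - 79, n_steps)                # short late "calibration" window
        pc1_centered = pc1(X, X.mean(axis=0))               # conventional full-period centering
        pc1_decentered = pc1(X, X[calib].mean(axis=0))      # short-segment decentering

        def hs_index(series):
            """|late-window mean minus earlier mean| in sd units (PC sign is arbitrary)."""
            return abs(series[calib].mean() - series[:calib.start].mean()) / series.std()

        print("hockey-stick index, centered PC1:  ", round(hs_index(pc1_centered), 2))
        print("hockey-stick index, decentered PC1:", round(hs_index(pc1_decentered), 2))

        The comparison is only meant to illustrate the interpretive point in the quote above: once the origin is moved to a short late segment, PC1 is organized around departures from that segment rather than around variance in the usual sense.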

        • Nick Stokes
          Posted Oct 4, 2014 at 3:14 PM | Permalink

          “Mann cited Jolliffe as a backup (apparently misspelling his name) and then Jolliffe answered to him”
          I think the citer was Tamino, not Mann.

          Steve: Nick, why don’t you check? Mann cited him long before Tamino. Ironically, as a reviewer for Nature, Jolliffe was quite favorable to us.

        • AndyL
          Posted Oct 4, 2014 at 3:45 PM | Permalink

          Nick
          Simply look at the references quoted and the dates.

          Mann was quoted in 2005 in the RealClimate article. Tamino built a 4-part series of posts on it in 2008 and Jolliffe responded to Tamino’s points.

          It was unfortunate that Jolliffe only responded 10 years after the original article and after Mann08 had been published.

        • Steve McIntyre
          Posted Oct 4, 2014 at 4:33 PM | Permalink

          In Mann’s Hockey Wars book, he ignored Jolliffe’s comments and continued to assert that Mannian principal components were simply an alternative “convention”.

        • Nick Stokes
          Posted Oct 4, 2014 at 4:00 PM | Permalink

          Reading what is quoted from what Jolliffe wrote, I can’t see any reference to what Mann wrote in 2005 in RC. He’s complaining at Tamino’s about what Tamino wrote.

        • Nick Stokes
          Posted Oct 4, 2014 at 4:31 PM | Permalink

          “Nick, why don’t you check?”
          I did check. Yes, Mann cited Jolliffe earlier, but that isn’t what Jolliffe was answering. He was answering Tamino. He didn’t mention that RC citation.

          Jean S: You are right Nick, he didn’t mention it. Let’s see what Jolliffe was responding to over at Tamino’s:

          Centering is the usual custom, but other choices are still valid; we can perfectly well define PCs based on variation from any “origin” rather than from the average. In fact it has distinct advantages IF the origin has particular relevance to the issue at hand. You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.

          Tamino was simply plagiarizing Mann, who was (still is) referring to the same presentation as follows:

          For specific applications of non-centered PCA to climate data, consider this presentation provided by statistical climatologist Ian Jolliffe who specializes in applications of PCA in the atmospheric sciences, having written a widely used text book on PCA. In his presentation, Jollife explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand.

          IMO, that is an even more blatant misrepresentation of Jolliffe’s view.

        • Nick Stokes
          Posted Oct 4, 2014 at 5:43 PM | Permalink

          Jean S: You are right Nick
          Thank you Jean, I’ll frame that. But why is it so hard to get simple facts right here? The entirety of my comment was:
          “Mann cited Jolliffe as a backup (apparently misspelling his name) and then Jolliffe answered to him”
          I think the citer was Tamino, not Mann.

        • Steve McIntyre
          Posted Oct 4, 2014 at 7:29 PM | Permalink

          Nick Stokes wrote:

          “Mann cited Jolliffe as a backup (apparently misspelling his name) and then Jolliffe answered to him.” I think the citer was Tamino, not Mann.

          and
          Nick says:

          Reading what is quoted from what Jolliffe wrote, I can’t see any reference to what Mann wrote in 2005 in RC. He’s complaining at Tamino’s about what Tamino wrote.

          Then:

          Jolliffe answered Tamino, not Mann. The statement was wrong and needed correcting. Why can’t the site just accept that?

          Mann originally mischaracterized Jolliffe at realclimate long before Tamino’s post (as Jean S observed). In 2005, Mann stated:

          For specific applications of non-centered PCA to climate data, consider this presentation provided by statistical climatologist Ian Jolliffe who specializes in applications of PCA in the atmospheric sciences, having written a widely used text book on PCA. In his presentation, Jollife explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand.

          Ross and I repudiated Mann’s misrepresentation of Jolliffe in early 2005 here:

          10. Mann et al. state that “the use of non-centered PCA is well-established in the statistical literature and, in some cases is shown to give superior results to standard, centered PCA”
          http://www.realclimate.org/index.php?p=98. They go on to cite two studies.

          First, we note that MBH98 stated that they used “conventional” PCA. We have elsewhere pointed out that “conventional” PCA calculations are “centered” – a position acknowledged here by Mann. Mann et al. are obviously quite free to argue the merits of non-centered PCA for this particular calculation – an argument which we believe will be unsuccessful – but they should have stated that this was what they were doing in the first place, and perhaps issue another Corrigendum in which they report that the description of their PCA method in MBH98 was inaccurate. …

          The second presentation cited by Mann is a Powerpoint presentation on the Internet by Jolliffe (a well known statistician).

          Jollife explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand. In the case of the North American ITRDB data used by MBH98, the reference means were chosen to be the 20th century calibration period climatological means. Use of non-centered PCA thus emphasized, as was desired, changes in past centuries relative to the 20th century calibration period. (http://www.realclimate.org/index.php?p=98)

          In fact, Jolliffe says something quite different. Jolliffe’s actual words are: “it seems unwise to use uncentered analyses unless the origin is meaningful. Even then, it will be uninformative if all measurements are far from the origin. Standard EOF analysis is (relatively) easy to understand – variance maximization. For other techniques it’s less clear what we are optimizing and how to interpret the results. There may be reasons for using no centering or double centering but potential users need to understand and explain what they are doing.” Jolliffe presents cautionary examples showing that uncentered PCA gives results that are sensitive to whether temperature data are measured in Centigrade rather than Fahrenheit, whereas centered PCA is not affected.

          Jolliffe nowhere says that an uncentered method is “the” appropriate one when the mean is “chosen” to have some special meaning; he states, in effect, that having a meaningful origin is a necessary but not sufficient ground for uncentered PCA. But he points out that uncentered PCA is not recommended “if all measurements are far from the origin”, which is precisely the problem for the bristlecone pine series once the mean is de-centered, and he warns that the results are very hard to interpret. Finally, Jolliffe states clearly that any use of uncentered PCA should be clearly understood and disclosed – something that was obviously not the case in MBH98. In the circumstances of MBH98, the use of an uncentered method is absolutely inappropriate, because it simply mines for hockey stick shaped series. Even if Mann et al. felt that it was the most appropriate method, it should have had warning labels on it.
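          As an aside, Jolliffe’s Centigrade/Fahrenheit caution is easy to check numerically. The sketch below is only an illustration (it is not taken from Jolliffe’s slides; the station count, station means and noise level are made-up assumptions): it compares the leading loading vector from centered and uncentered PCA on the same fake temperature data expressed in deg C and in deg F.

          import numpy as np

          rng = np.random.default_rng(1)
          means_c = rng.uniform(-5.0, 10.0, 8)                  # made-up station climatologies, deg C
          temps_c = means_c + 3.0 * rng.standard_normal((200, 8))
          temps_f = temps_c * 9.0 / 5.0 + 32.0                  # identical data expressed in deg F

          def leading_loading(data, center):
              """PC1 loadings (first right singular vector), with or without centering."""
              A = data - data.mean(axis=0) if center else data
              _, _, vt = np.linalg.svd(A, full_matrices=False)
              v = vt[0]
              return v * np.sign(v[np.argmax(np.abs(v))])       # pin down the arbitrary PC sign

          for center in (True, False):
              diff = np.max(np.abs(leading_loading(temps_c, center) - leading_loading(temps_f, center)))
              print("centered" if center else "uncentered", "PCA, max loading change C -> F:", round(diff, 3))

          Centered loadings are unaffected by the change of units and origin; uncentered loadings move with them, which is the kind of sensitivity Jolliffe was warning about.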

          Subsequently, Tamino revived this issue at his blog, taking exactly the same line as Mann had done in 2005, leading to Jolliffe’s repudiation. Nick argues that Jolliffe only directly repudiated Tamino’s post. This would matter if there were material differences between Tamino’s post and Mann’s earlier realclimate post, but there aren’t. Jean S described Tamino’s post as “plagiarized”. If Nick can identify a relevant difference, then there’s something to discuss, but he hasn’t.

          But there’s more that Nick hasn’t told you about.

          Following his intervention at Tamino’s, Jolliffe emailed me directly (reported at CA) and stated, over and above that intervention:

          I looked at the reference you made to my presentation at http://www.uoguelph.ca/~rmckitri/research/MM-W05-background.pdf *after* I drafted my contribution and I can see that you actually read the presentation. You have accurately reflected my views there, but I guess it’s better to have it ‘from the horse’s mouth’.

          So when Nick says: “Jolliffe answered Tamino, not Mann”, Nick is either not in possession of all the facts or misrepresenting them by not telling the full story. Jolliffe went an extra mile and directly endorsed our criticisms of Mann’s realclimate comments about Jolliffe’s work that was later discussed by Tamino. So Jolliffe “answered” Mann on both counts. Had Stokes bothered to examine the record before pontificating, he would have known this.

          Jolliffe also reiterated in the email observations that he had made at Tamino:

          I had not understood what MBH had done until a few months ago…

          Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA….

          It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.

          As to the substantive point, Nick, do you concede that Jolliffe unequivocally rejected Mannian principal components?

        • Nick Stokes
          Posted Oct 4, 2014 at 9:16 PM | Permalink

          “As to the substantive point, Nick, do you concede that Jolliffe unequivocally rejected Mannian principal components?”
          Well, Steve, do you concede that Jolliffe did not, as a matter of simple everyday fact, respond to Mann in the quote linked? LoN got that from your post.

          Steve: As I pointed out to you before in the CA thread, Jolliffe responded to Mann’s misuse. Jolliffe notified us that he endorsed our rebuttal of Mann’s realclimate post as follows:

          I looked at the reference you made to my presentation at http://www.uoguelph.ca/~rmckitri/research/MM-W05-background.pdf *after* I drafted my contribution and I can see that you actually read the presentation. You have accurately reflected my views there, but I guess it’s better to have it ‘from the horse’s mouth’.

          I don’t know how much more unequivocally Jolliffe could have responded to Mann’s misuse. That he also responded to Tamino’s misuse does not derogate from him responding to Mann’s misuse.

          But yes, Jolliffe certainly didn’t like being cited by Tamino in support. He said:
          “my main concern is that I don’t know how to interpret the results when such a strange centring is used? Does anyone? What are you optimising? A peculiar mixture of means and variances? An argument I’ve seen is that the standard PCA and decentred PCA are simply different ways of describing/decomposing the data, so decentring is OK. But equally, if both are OK, why be perverse and choose the technique whose results are hard to interpret? Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.”

          He says there are theoretical problems. “why be perverse and choose the technique whose results are hard to interpret?” I can agree with that. I think he made a mistake.

          But practical people, like Congressional Committees, will want to know what is the practical effect on a reconstruction. What difference does it make? This information was not volunteered to the Committee.

          Steve: again, you keep saying things that are untrue. I double-checked the proceedings and, once again, you are making a fabricated claim, as can be readily seen by reviewing the record and not removing statements from their context, as you do too often.

          “So when Nick says: “Jolliffe answered Tamino, not Mann”, Nick is either not in possession of all the facts…”
          No. I’m just trying to get a simple factual error corrected. Jolliffe answered Tamino, not Mann.

          Steve: you originally contradicted a commenter by saying “I think the citer was Tamino, not Mann”. You have been repeatedly shown that Mann cited (and misused) Jolliffe long before Tamino – as I said. The premise of your original statement was wrong. Do you finally admit that Mann also cited Jolliffe?

        • Nick Stokes
          Posted Oct 5, 2014 at 5:55 PM | Permalink

          Steve,
          “Do you finally admit that Mann also cited Jolliffe?”
          Yes, of course Mann also cited Jolliffe. And in terms to which Jolliffe would probably have responded similarly, had he read them. He didn’t, so we don’t know. But again, LoN said:
          ““Mann cited Jolliffe as a backup (apparently misspelling his name) and then Jolliffe answered to him”
          I think the citer was Tamino, not Mann.”

          The citer whom Jolliffe answered was Tamino, not Mann. That’s the simple fact I was trying to get right.

          Steve: You stated “I think the citer was Tamino, not Mann.” I observed that Mann had also cited Jolliffe long before. That is incontrovertible and was my point. As to whether Jolliffe ever read the realclimate post, Jolliffe wrote to me personally, endorsing our 2005 criticism of Mann’s realclimate post (which was linked in our criticism). Are you saying that Jolliffe endorsed our criticism of Mann’s post without reading Mann’s post? If so, you seem to hold Jolliffe in very low regard, undeservedly so, in my opinion.

        • RomanM
          Posted Oct 5, 2014 at 8:25 PM | Permalink

          So you are saying that Prof. Jolliffe did not address the problems with the Mann PC methodology? Did he have to call Mann on the telephone or send him a personal email to answer him? I would think that correcting one of Mann’s apologists would constitute an “answer” particularly when Steve indicated that there was more to Prof. Jolliffe’s involvement than that sole encounter.

          I really cannot understand your persistence on such an obvious matter.

        • Nick Stokes
          Posted Oct 5, 2014 at 9:07 PM | Permalink

          “So you are saying that Prof. Jolliffe did not address the problems with the Mann PC methodology?”
          No, I am stating a simple fact. Jolliffe answered to Tamino, not to Mann.
          Steve: You stated “I think the citer was Tamino, not Mann.” I observed that Mann had also cited Jolliffe long before. That is incontrovertible and was my point. As to whether Jolliffe ever read the realclimate post, Jolliffe wrote to me personally, endorsing our 2005 criticism of Mann’s realclimate post (which was linked in our criticism). Are you saying that Jolliffe endorsed our criticism of Mann’s post without reading Mann’s post? If so, you seem to hold Jolliffe in very low regard, undeservedly so, in my opinion.

          And resisting the endless stretching that goes on here. Steve says:
          “Nick argues that Jolliffe only directly repudiated Tamino’s post. This would be relevant if there were relevant differences between Tamino’s post and Mann’s earlier realclimate post, but there aren’t.”
          That’s his interpretation of the lack of difference. I don’t dispute it. But it’s just not true that Jolliffe answered to Mann. There’s no indication, in all that Steve has written, that Jolliffe ever read Mann’s earlier realclimate post. There’s a difference between what Jolliffe might have said and what he actually did say.

        • RomanM
          Posted Oct 5, 2014 at 9:25 PM | Permalink

          And what is so important about the possible existence of such a difference? Does it save Mann the embarrassment of being corrected on an error? Does it make his methodology correct because nobody told him directly that inventing and using techniques before they are fully analyzed for their efficacy may lead to unforeseen negative consequences in the results? What is your point in this?

          Don’t bother answering… This is a waste of time as usual.

  29. DocMartyn
    Posted Oct 2, 2014 at 8:18 PM | Permalink

    Can I ask a simple technical question about the calibration period and the ‘divergence’ of many proxies?

    If we have a number of informationless proxies that just have the property of auto-correlation, what is the difference in the number of proxies one can fish for if one searches for this shape _/ followed by either /, _, ^ or \?
    It appears to me that if the shape is not constrained to occur at the end of the series – in effect, if there is buffering on both sides – you will pick out more hits than if it has to sit at the end of the series.
    Is it correct that you have more freedom to pick out this shape, _/, if it occurs prior to the end of the series than if it must occur at the end?
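    One way to put a rough number on that asymmetry is a toy count like the sketch below (my own construction, not anyone’s published test; the AR(1) coefficient, window length, rise length and threshold are all arbitrary assumptions). It counts how many informationless autocorrelated series contain a flat-then-rise window somewhere in the record, versus how many contain one in the terminal position only.

    import numpy as np

    rng = np.random.default_rng(2)
    n_series, n_steps, phi = 1000, 400, 0.6            # arbitrary sizes and persistence
    window, rise_len, thresh = 120, 40, 1.0            # shape template and threshold (sd units)

    x = np.zeros((n_series, n_steps))
    for t in range(1, n_steps):
        x[:, t] = phi * x[:, t - 1] + rng.standard_normal(n_series)

    def rise_score(seg):
        """Mean of the final rise_len points minus the mean of the rest, in sd units of the window."""
        return (seg[-rise_len:].mean() - seg[:-rise_len].mean()) / seg.std()

    hits_anywhere = hits_at_end = 0
    for s in x:
        scores = [rise_score(s[i:i + window]) for i in range(0, n_steps - window + 1, 10)]
        hits_anywhere += max(scores) > thresh            # '_/' allowed anywhere in the record
        hits_at_end += rise_score(s[-window:]) > thresh  # '_/' required at the very end

    print("flat-then-rise found somewhere:", hits_anywhere, "of", n_series)
    print("flat-then-rise at the end only:", hits_at_end, "of", n_series)

    With many candidate positions a series gets many chances to clear the threshold; with the rise pinned to the end it gets only one, which is the intuition behind the question: the fishing is easier when the shape is allowed to sit in the interior of the series.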

  30. Posted Oct 5, 2014 at 7:55 AM | Permalink

    Steve McIntyre (7:29 PM):

    So when Nick says: “Jolliffe answered Tamino, not Mann”, Nick is either not in possession of all the facts or misrepresenting them by not telling the full story. Jolliffe went an extra mile and directly endorsed our criticisms of Mann’s realclimate comments about Jolliffe’s work that was later discussed by Tamino. So Jolliffe “answered” Mann on both counts.

    Thanks Steve.

One Trackback

  1. […] topic of discussion has moved away from the parsing of the Michael Mann defamation suite and the shenanigans of blog commenter Richard Stokes towards a multi-part discussion of the publication of the recent […]