Some more on von Storch and Zorita

A number of people are citing the von Storch and Zorita paper as somehow showing that the erroneous MBH98 method did not "matter". I stated previously that the VZ paper indicated that they had inaccurately replicated the hockey stick algorithm – which made their results irrelevant. I have since had some correspondence with von Storch, whom I do not regard as a member of the Hockey Team and who has always been very civil to us in both correspondence and public comments.

von Storch has clarified some points on the methodology in the VZ paper. However, the correspondence has merely confirmed my previous view that they incorrectly replicated the hockey stick methodology of MBH98. I have attached a short script in R here to illustrate the following discussion.

First, we had stated in our Reply that VZ seemed to have done PC calculations on the correlation matrix rather than the decentered data matrix and that this mattered. In our recent correspondence, von Storch stated that they calculated principal components using covariance matrices and did not do SVD on the de-centered data matrix. He then asserted that the results are identical under either methodology, referring me to his text, von Storch and Zwiers [1999].

While one hesitates to disagree directly with von Storch, his text simply does not confirm the point that he requires. Von Storch and Zwiers [1999] (page 301) states that the “SVD of the conjugate transpose of the m×n centered data matrix” [their italics, my bold] extracts the “same information from the sample as a conventional EOF analysis”. Obviously, I agree with this. However, contrary to the claim in his email, this is not the case if the data matrix is uncentered, as you can determine from a simple experiment. The first part of the short script here contains an example showing this; the point can be seen instantly.
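
For readers who don’t want to download the attached script, here is a minimal stand-alone sketch (with made-up data; the dimensions and seed are arbitrary) showing that conventional PC/EOF analysis via the covariance matrix matches the SVD of the centered data matrix, but not, in general, the SVD of an uncentered one:

# Sketch with made-up data: eigenvectors of the covariance matrix agree with the
# SVD of the CENTERED data matrix, but not in general with the SVD of an
# uncentered data matrix.
set.seed(1)
X <- matrix(rnorm(50 * 5, mean = 3), nrow = 50)      # 50 x 5 uncentered data

eof1 <- eigen(cov(X))$vectors[, 1]                   # conventional EOF/PC analysis
v1.centered   <- svd(scale(X, center = TRUE, scale = FALSE))$v[, 1]
v1.uncentered <- svd(X)$v[, 1]

# eof1 and v1.centered agree (up to sign); v1.uncentered generally does not
round(cbind(eof1, v1.centered, v1.uncentered), 4)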

Second, he suggested that the covariance matrix of a data set is changed by re-centering. I am puzzled by this suggestion, as this is not the case. Again, I have provided an example in the script.
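
A minimal stand-alone sketch of the same point (again with made-up data and an arbitrary sub-period) is as follows:

# Sketch with made-up data: the covariance matrix is unchanged by re-centering,
# i.e. by subtracting a constant (here, a sub-period mean) from each column.
set.seed(2)
Y <- matrix(rnorm(50 * 5), nrow = 50)
Y.recentered <- scale(Y, center = colMeans(Y[40:50, ]), scale = FALSE)
max(abs(cov(Y) - cov(Y.recentered)))                 # zero to machine precision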

Third, von Storch said that they did get different results under the two different methods. Right now, it’s a bit of a guess as to what they did, since the stated use of covariance matrices is at odds with the mention of correlation matrices in the article. My guess is that they also did a re-scaling step together with the re-centering step – presumably dividing by the full-period standard deviation in one case and the short-period standard deviation in the other. If so, this would explain the reference to a “correlation matrix” in their article and could also be reconciled with the use of the term covariance matrix in the email. If this surmise as to what they did is correct, the extra hockey-stick-ness in the short-centered version is only microscopically greater and could reasonably be described as merely a “glitch”: there is a correlation of 0.99 between the two PC1s and only a slight increase of the “hockey stick index” in the NOAMER case. However, this methodology does not replicate the actual MBH98 algorithm, and it will not in general lead to the biased creation of hockey-stick-shaped PC1s as we observed with the MBH98 algorithm.
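
To make the distinction concrete, here is a rough stand-alone sketch – not the attached script, and not MBH98 or VZ data, just persistent red noise with an illustrative AR(1) coefficient of 0.9 and a notional 81-year “calibration” sub-period. It contrasts PCs computed from the covariance matrix of the de-centered, re-scaled data (where the de-centering washes out and only the re-scaling can matter) with an SVD applied directly to the de-centered matrix, MBH98-style:

# Rough sketch with synthetic red-noise "proxies"; dimensions, AR coefficient and
# calibration window are illustrative assumptions, not MBH98/VZ values.
set.seed(3)
n <- 581; m <- 70; calib <- 501:581
proxies <- replicate(m, as.numeric(arima.sim(list(ar = 0.9), n = n)))

short.mean <- colMeans(proxies[calib, ])
short.sd   <- apply(proxies[calib, ], 2, sd)
X.dec <- scale(proxies, center = short.mean, scale = short.sd)   # de-centered, re-scaled

# (a) PCs from the covariance matrix of the de-centered data: cov() re-centers,
#     so the de-centering washes out and only the re-scaling matters
v.cov   <- eigen(cov(X.dec))$vectors[, 1]
pc1.cov <- scale(proxies, center = TRUE, scale = short.sd) %*% v.cov

# (b) MBH98-style: SVD applied directly to the de-centered data matrix
pc1.mbh <- X.dec %*% svd(X.dec)$v[, 1]

# crude "hockey stick index": calibration-period mean vs full-period mean,
# in standard-deviation units (absolute value, since PC sign is arbitrary)
hs <- function(x) abs(mean(x[calib]) - mean(x)) / sd(x)
c(cov.matrix = hs(pc1.cov), decentered.svd = hs(pc1.mbh))

On persistent red noise like this, the de-centered SVD typically returns a PC1 with a noticeably higher index than the covariance-matrix version, which is the bias at issue.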

Fourth, VZ did not use the additional MBH98 step of scaling by the detrended standard deviation. This step enhances the hockey-stick-ness of the MBH98 PC1, again as shown in the attached script.
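
The detrending step can be tacked onto the red-noise sketch above (it re-uses proxies, calib, short.mean, pc1.mbh and hs from that sketch, and assumes “detrended standard deviation” means the standard deviation of residuals from a linear trend fitted over the calibration period):

# Continues the red-noise sketch above. Series with a strong calibration-period
# trend have a small detrended sd, hence get a larger weight after re-scaling.
detrended.sd <- apply(proxies[calib, ], 2,
                      function(x) sd(residuals(lm(x ~ seq_along(x)))))
X.det   <- scale(proxies, center = short.mean, scale = detrended.sd)
pc1.det <- X.det %*% svd(X.det)$v[, 1]
c(short.sd = hs(pc1.mbh), detrended.sd = hs(pc1.det))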

Thus, in addition to the various problems with the pseudoproxies itemized in our Reply and the VZ failure to deal with bristlecones, I can say confidently that the VZ methodology definitely failed to replicate the hockey stick algorithm and that this failure is material to their findings.

8 Comments

  1. TCO
    Posted Oct 31, 2005 at 4:18 PM | Permalink

    Given that you have found a new issue with VZ/Storch, how will this be handled? If you can resolve the problem with him, that would be best. If he needs to do a retraction, you should allow him to do so. If he refuses and you still think there is an issue, you should publish (as tedious as that sounds).

  2. Steve McIntyre
    Posted Oct 31, 2005 at 4:36 PM | Permalink

    No idea what we can do or will do.

  3. TCO
    Posted Oct 31, 2005 at 5:20 PM | Permalink

    I think that discussion off-line would be useful. The normal method when you find a mistake in someone’s work is to tell them and have them correct it. Since the issue is non-intuitive, they may need a little time to process the info, double-check, etc. And they may have something you have not thought of. This is normal.

    I assume that if they refuse to acknowledge the issue and you still think you are right, then you should publish. You could even publish on the general issue of what makes a difference and what doesn’t (in the theoretical case) rather than addressing (only) this issue. Could do it in a stats methods journal or such.

    BTW, your sharing of the code is nice and all, but will not be followed by very many people. They need figures and such. (Not saying you should do all that work…just want you to know.)

    Hang in there. Doing good.

  4. Paul McGinnie
    Posted Nov 1, 2005 at 6:43 AM | Permalink

    Re #4 – only on the code-sharing point.

    I would like to respectfully disagree with a possible interpretation (or maybe just my misinterpretation) of TCO’s point about code sharing.
    The publication of code for the production of results on this site enormously improves the credibility of the claims made here. As most people will accept arguments from authority (at least from “a scientific consensus”), it is absolutely vital that those going against such a “consensus” do so in as open and verifiable a fashion as possible. It is a powerful argument to be able to say, “if you disagree with statement X, just check it” – i.e. download and install R, cut and paste this script into the GUI on your own PC at home, and then sit and think about the results.

    Another benefit of regularly posting scripts is that it will keep this site honest through those times when the temptation is not to admit or address errors, and it will allow for rapid contributions by others in such a case – better to catch errors quickly (they are inevitable)!

    TCO may well be correct about the number of R coders out there. However, the bar is much lower when one is following another’s script than when trying to write it oneself (especially if it is commented and structured as Steve’s scripts are). I think it is enormously helpful to show such details, as it makes the whole debate accessible in a totally new way – one does not have to take sides in an esoteric debate between rival authorities for which one is ill-equipped; you can just run the numbers to see what is being discussed. That will help clarify many people’s thinking, and might even prompt the other side to open up and share more data.

    I would amplify this point by suggesting that as much code be posted as possible, even down to the scripts for producing the graphs.

  5. Steve McIntyre
    Posted Nov 1, 2005 at 8:11 AM | Permalink

    I’m going to try to be more systematic about putting scripts up even for little comments. While I’m pretty organized with respect to publication scripts, I’m not for incidental comments on the blog or for work in progress. So now I’ve generated hundreds of little scripts which are hard to keep track of. If I follow my own prescriptions, it takes only a little more time to pretty them up when I post something up and then it’s there for my reference as well.

    R’s not that hard, everyone. I just stumbled across it and started using it. It improved my productivity by about 1000% almost as soon as I started.

  6. Paul McGinnie
    Posted Nov 1, 2005 at 11:17 AM | Permalink

    I can echo Steve – I started using R three months ago, and now use it daily for a lot of numerical work. It is not hard to learn if you have any previous programming experience.

  7. TCO
    Posted Nov 1, 2005 at 4:16 PM | Permalink

    Would R do anything for me if I were to have (can I make this more subjunctive?) a job that involves bizness weenie stuff like helping people think about strategy and such?

  8. jeff id
    Posted Aug 27, 2009 at 6:52 PM | Permalink

    It’s ironic that VZ apparently made the mistake of doing the math correctly.

    Am I understanding correctly that their finding – that multi-proxy studies deflate the historic variance – was right (though less so than they indicated), but that they got an exaggerated answer because they didn’t successfully replicate the failure of the Mann98 decentered PCA?
