https://climateaudit.org/2007/09/14/is-juckes-et-al-2006-peer-reviewed/#comment-428161

The PowerPoint presentations associated with the book can be found here:

http://www.oup.com/uk/orc/bin/9780199280964/01student/ppts/ch13/

The one on spurious regression gives a clear, straightforward explanation.
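
For readers without the slides, the effect is easy to reproduce: correlate one random walk with another, independent one, and you will routinely get a large R² even though there is no relationship at all. A minimal sketch in Python (synthetic data, numpy only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two independent random walks: unrelated by construction.
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))

# Regressing one trending (integrated) series on another routinely
# yields a large R^2 -- the classic spurious-regression effect.
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"R^2 between two independent random walks: {r2:.3f}")

# Differencing removes the stochastic trends and the effect vanishes.
r_diff = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
print(f"correlation of the differenced series: {r_diff:.3f}")
```

Persistent proxy and temperature series are much closer to the first case than the second, which is why naive calibration-period correlations overstate significance.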

If you look at the individual charts of all the Proxies used, you will see that they do not correlate with each other. There is no way any proper statistical process can fix that.

Using novel and untested methods that have not been vetted by qualified Statisticians certainly doesn’t.

Please talk to a qualified Statistician.

You replied that white noise is uncorrelated, and used the R-function rnorm. So you really don’t know about multivariate normals with a given covariance structure, generated, e.g., by the mvrnorm function? Come on Steve, this I just can’t believe, as a more basic statistical concept is hard to find.

If Steve says he used rnorm, that answers your question completely: samples from a multivariate normal distribution whose covariance matrix has all off-diagonal entries equal to zero. He didn’t use mvrnorm.
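
For anyone following along without R: numpy makes the distinction concrete. `standard_normal` plays the role of R’s `rnorm` (independent draws, identity covariance), while `multivariate_normal` plays the role of MASS’s `mvrnorm` (draws with a prescribed covariance matrix). A quick sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# rnorm-style draws: independent standard normals, so the population
# covariance matrix is the identity (all off-diagonal entries zero).
white = rng.standard_normal((n, 2))

# mvrnorm-style draws: multivariate normal with a prescribed covariance
# structure, here correlation 0.8 between the two components.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
correlated = rng.multivariate_normal([0.0, 0.0], cov, size=n)

print("sample corr, independent draws:", round(np.corrcoef(white.T)[0, 1], 3))
print("sample corr, correlated draws: ", round(np.corrcoef(correlated.T)[0, 1], 3))
```

The first sample correlation hovers near zero, the second near the prescribed 0.8, which is the whole point of the rnorm/mvrnorm distinction.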

BTW, in the mainstream multivariate calibration papers, noise is generally allowed to be correlated spatially but not in time (*). In these climate proxy reconstruction papers it is admitted that proxy noise is red (in time), maybe even as red as the temperature signal itself, as with the model *proxy = temperature times zero plus noise*.

(*) See, e.g., Philip J. Brown and Rolf Sundberg, “Confidence and Conflict in Multivariate Calibration”, J. R. Statist. Soc. B (1987) 49, No. 1, pp. 46–57.

But hey, climate science really progresses: now we know that you don’t even need 18 proxies to reconstruct the Northern Hemisphere temperature; 13 are enough! So all of you working with the instrumental US48 data can stop now: since the contiguous U.S. represents only about 4% of the NH area, one instrumental series should definitely be plenty. Pick your favorite.

Or just pick one long instrumental series, scale it using CVM (with any calibration period you prefer) and you’ll get a pretty accurate global temperature series. That’s how this ‘science’ works.
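
To see why CVM-style scaling can make anything look like a temperature series, here is a minimal sketch. The function name and the synthetic data are mine; the point is only that variance matching forces the calibration-period mean and variance to agree exactly, regardless of what the ‘proxy’ actually contains:

```python
import numpy as np

def cvm_scale(proxy, target, calib):
    """Variance matching: rescale `proxy` so that its mean and standard
    deviation over the calibration indices `calib` match the target's."""
    p, t = proxy[calib], target[calib]
    return t.mean() + (t.std(ddof=1) / p.std(ddof=1)) * (proxy - p.mean())

rng = np.random.default_rng(1)
years = 200
target = np.linspace(0.0, 1.0, years) + 0.2 * rng.standard_normal(years)

# A 'proxy' that is pure red noise -- zero true relation to the target.
proxy = np.zeros(years)
for i in range(1, years):
    proxy[i] = 0.9 * proxy[i - 1] + rng.standard_normal()

calib = np.arange(150, years)              # calibrate on the last 50 'years'
recon = cvm_scale(proxy, target, calib)

# The reconstruction matches the calibration-period mean and variance
# exactly, whatever the proxy contains.
print(round(recon[calib].mean() - target[calib].mean(), 6))
print(round(recon[calib].std(ddof=1) - target[calib].std(ddof=1), 6))
```

The match is an algebraic identity of the rescaling, not evidence of skill, which is exactly the complaint above.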

For those who demand some mathematical rigor, the August 1989 issue of *Technometrics* has two very interesting papers:

1. Multivariate Calibration With More Variables than Observations (Sundberg and Brown)

2. Small-Disturbance Asymptotic Theory for Linear-Calibration Estimators (Srivastava and Singh)

In 1. it is noted that

when the assumption n − p ≥ q is not satisfied, the residual covariance matrix S is singular.

In reconstruction terms, n is the number of calibration years, p is the number of unknowns per year and q is the number of proxies per year. It is quite easy to see that in MBH98, from the AD1700 step onwards, the residual covariance matrix S is singular. But Mann doesn’t really want to estimate uncertainties, so he doesn’t need such a matrix.
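
The singularity claim is easy to verify numerically: when q response series are regressed on p predictors over n observations, the residuals lie in an (n − p)-dimensional space, so the q × q residual covariance estimate has rank at most n − p and is singular whenever q > n − p. A sketch with illustrative numbers (not the actual MBH98 counts):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 79, 1, 112          # calibration years, unknowns/yr, proxies/yr
                              # (numbers illustrative, not the MBH98 counts)

X = np.ones((n, p))                      # design matrix with p columns
Y = rng.standard_normal((n, q))          # q proxy series over n years

# OLS residuals: project Y off the column space of X.
H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix, rank p
E = Y - H @ Y                            # residuals span at most n - p dims

S = E.T @ E / (n - p)                    # q x q residual covariance estimate
rank = np.linalg.matrix_rank(S)
print(f"q = {q}, rank(S) = {rank}, n - p = {n - p}")
# rank(S) <= n - p < q, so S is singular
```

No amount of data massaging changes this rank bound; with more proxies than calibration degrees of freedom, S cannot be inverted for uncertainty estimation.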

With respect to the Juckes et al. CP paper, article 2. might be more interesting. In the new Appendix A2 it is said that

The forward regression technique damps variance (Burger et al., 2006), while the inverse regression model amplifies variance. A compromise between these two, for univariate regression (provided x′y′ > 0), is the variance matching approach

Srivastava and Singh are also after a compromise between the two known calibration methods

In this article we consider a weighted average of the classical and inverse calibration estimators X, which leads us to a family of calibration estimators…

It seems to me that CVM is a member of that family, and thus this paper should be on the ‘must-read’ list. Some caution is needed, as the discussed small-disturbance asymptotic theory assumes that the random errors in the calibration are not large. In proxy vs. temperature calibration, it seems that random errors are all we have…
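
One way to see why CVM plausibly belongs to such a family: in the univariate case, the variance-matching slope is exactly the geometric mean of the two slopes it is supposed to compromise between (the damped slope from fitting temperature on the proxy, and the amplified slope from inverting the proxy-on-temperature fit). A quick numerical check on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.standard_normal(n)                    # 'temperature', calibration period
y = 0.6 * x + 0.8 * rng.standard_normal(n)    # 'proxy' = signal + noise

sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]

slope_damped    = sxy / syy                          # fit x ~ y directly
slope_amplified = sxx / sxy                          # invert the fit y ~ x
slope_cvm       = np.sign(sxy) * np.sqrt(sxx / syy)  # variance matching

# CVM's slope is the geometric mean of the other two, so (for positive
# correlation) it always lies between them.
print(round(slope_cvm, 6))
print(round(np.sign(sxy) * np.sqrt(slope_damped * slope_amplified), 6))
```

Since |r| ≤ 1 forces the damped slope below the CVM slope and the amplified slope above it, CVM is an interior compromise of exactly the kind Srivastava and Singh parametrize.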

I really do need to work on the multivariate issues

Yes, and until then claims that multivariate calibration/inverse regression, with or without rescaling, equals PLS are unfounded.

On the white noise query #21 though, I replied and I’m not sure what else I can say.

You replied that ‘white noise is uncorrelated’, and used the R-function ‘rnorm’. So you really don’t know about multivariate normals with a given covariance structure, generated, e.g., by the mvrnorm function? Come on Steve, this I just can’t believe, as a more basic statistical concept is hard to find.

In this, as you agree, Goosse failed to ensure that Juckes answered the issues and from then on, it was impossible to adhere to CP policies. The referees were all changed and the process changed from open review under CP and EGU policies to closed review.

No, I agreed to ‘Juckes didn’t answer’, not to ‘Goosse failed to ensure that Juckes answered’. Again, the main problem for the Editor was (I guess) that besides the wealth of ‘short comments’ only one reviewer responded in the first round, so he approached 3 additional reviewers for a second round. Closing this round was probably not the best idea, but it was still consistent with CP policies. Open review would have revealed all the differences between the editor and the reviewers, and Juckes’ reaction (or non-reaction) to them.

does not generalize to a “pattern of unresponsiveness”, which is a very unfair statement.

Sorry, maybe that was a bit too harsh. But would you agree that your enterprise here is slightly more complex than simple ‘puzzle solving’, as you like to characterize it?

#287:

I can only wonder what went through the heads of these additional reviewers as for instance the above mentioned things were explicitly spelled out in e.g. Willis’ review.

Jean, did you read their reviews?

Seems to me that there are nothing but cosmetic changes. The meaningless upper-bounded “standard error” estimate is still there, along with the most ridiculous “significance” estimation probably ever published. How hard is it for these guys to understand the simple point that if your proxies correlate positively with the target series, so should your “random sequences”? Otherwise the statistics simply do not match and you are comparing apples to oranges. I can only wonder what went through the heads of these additional reviewers as for instance the above mentioned things were explicitly spelled out in e.g. Willis’ review.
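
The apples-to-oranges point is easy to demonstrate: against a trending target, the 95% threshold for |correlation| computed from white-noise “random sequences” is far lower than the one computed from sequences with realistic persistence, so benchmarking persistent proxies against white noise hands out “significance” for free. A sketch with synthetic data (AR(1) with φ = 0.9 as the persistent null; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
n, trials = 150, 2000
target = np.linspace(0.0, 1.0, n)          # a trending 'temperature' target

def ar1(phi):
    """One AR(1) series of length n with coefficient phi (phi=0: white noise)."""
    e = rng.standard_normal(n)
    z = np.empty(n)
    z[0] = e[0]
    for i in range(1, n):
        z[i] = phi * z[i - 1] + e[i]
    return z

def threshold(phi):
    """95% quantile of |corr(target, null series)| over many null draws."""
    rs = [abs(np.corrcoef(target, ar1(phi))[0, 1]) for _ in range(trials)]
    return float(np.quantile(rs, 0.95))

thr_white = threshold(0.0)   # benchmark built from white noise
thr_red   = threshold(0.9)   # benchmark matching strong persistence

print(f"95% |r| threshold vs white-noise null: {thr_white:.2f}")
print(f"95% |r| threshold vs red-noise null:   {thr_red:.2f}")
```

A proxy correlation that clears the white-noise bar can sit comfortably below the persistence-matched bar, which is exactly the mismatch complained about above.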

The most obvious error in the original A2 has been corrected, but CVM is still there. Seems that this science progresses slowly, hopefully soon they will learn how to compute uncertainties.

Still no cross-correlation of individual Proxies.

That would be a logical first check for robustness, wouldn’t it?

Or: How well does one proxy predict another?
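
The check itself is a few lines. With real proxy series in the rows of a matrix, the pairwise cross-correlations (and the implied “one proxy predicts another” R²) drop straight out of `corrcoef`; here independent red noise stands in for the actual data:

```python
import numpy as np

rng = np.random.default_rng(5)
years, n_proxies = 100, 6

# Stand-in 'proxies': independent red noise, so any common signal is
# absent by construction (illustrative data, not the actual proxies).
proxies = rng.standard_normal((n_proxies, years))
for j in range(n_proxies):
    for i in range(1, years):
        proxies[j, i] += 0.7 * proxies[j, i - 1]

# Pairwise cross-correlations: proxies sharing a common temperature
# signal should show systematically positive off-diagonal entries.
R = np.corrcoef(proxies)
off_diag = R[~np.eye(n_proxies, dtype=bool)]
print("mean off-diagonal correlation:", round(float(off_diag.mean()), 3))

# Equivalent check: how much of proxy 2's variance does proxy 1 explain?
print("R^2 of proxy 1 vs proxy 2:", round(float(R[0, 1] ** 2), 3))
```

If the off-diagonal entries of R for the real proxies scatter around zero, the proxies carry no detectable common signal, and no downstream multivariate method can manufacture one.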