What is happening in this thread, and from many Hockey Team members on other websites, is that they are using their “paid” positions and workplace titles as a source of authority. For them, misquoting and making false claims are as routine as going for coffee. Steve has to rely on his expertise, and he has demonstrated it here over and over again, showing that there are many who do not have enough statistics to do the papers.

Once one starts looking at the topic from the point of view of multivariate analysis, one gets a cleaner perspective and much less sympathy for claims that any one method is the “right” method.

A proper statistical model would help in selecting the method.

I think that overfitting in multivariate calibration would be a good topic for a paper, and it could be submitted to a non-climate-science journal as well (if someone is afraid of publication bias in climate science).
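A toy simulation of my own (the sizes and the pure-noise setup are invented for illustration, not anyone's actual proxy network) shows why overfitting is a real danger here: regressing a “temperature” series on many noise “proxies” over a short calibration period produces an impressive in-sample fit that evaporates out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cal, n_proxies = 100, 50          # short calibration period, many proxies
temp = rng.normal(size=n_cal)       # "temperature" target
proxies = rng.normal(size=(n_cal, n_proxies))  # pure noise: no real signal

# In-sample: OLS of temperature on all proxies (plus intercept)
X = np.column_stack([np.ones(n_cal), proxies])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
fit = X @ beta
r2_in = 1 - np.sum((temp - fit) ** 2) / np.sum((temp - temp.mean()) ** 2)

# Out-of-sample: apply the fitted coefficients to fresh noise
temp2 = rng.normal(size=n_cal)
proxies2 = rng.normal(size=(n_cal, n_proxies))
pred = np.column_stack([np.ones(n_cal), proxies2]) @ beta
r2_out = 1 - np.sum((temp2 - pred) ** 2) / np.sum((temp2 - temp2.mean()) ** 2)

print("in-sample R^2: ", r2_in)   # substantial, despite zero true signal
print("out-of-sample R^2:", r2_out)  # near zero or negative
```

With 50 noise predictors and 100 observations, the expected in-sample R² is roughly p/(n-1) ≈ 0.5 even though there is nothing to find, which is the whole danger.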

Actually, the entire topic linking Partial Least Squares regression and inverse regression deserves an article by itself.

Not a bad idea. Keywords: multivariate calibration, overfitting, spurious correlations.

I downloaded some of P.J. Brown’s papers on multivariate calibration (1, 2). If I understood them right, INVR (Juckes et al.) is a solution to the *controlled calibration* problem (though Brown adds sample residual covariance weighting).
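A minimal univariate sketch of the distinction, as I understand it (my own toy example with made-up coefficients and noise levels, not Brown's data): the classical, controlled-calibration estimator regresses the proxy on temperature and inverts the fitted line, while inverse regression regresses temperature on the proxy directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: proxy = a + b * temp + noise (a, b, noise level invented)
n = 200
temp = rng.normal(0.0, 1.0, size=n)
proxy = 0.5 + 2.0 * temp + rng.normal(0.0, 1.5, size=n)

# Classical (controlled-calibration) estimator:
# regress proxy on temperature, then invert the fitted line.
b_c, a_c = np.polyfit(temp, proxy, 1)
classical = lambda p: (p - a_c) / b_c

# Inverse regression ("direct regression" in this thread):
# regress temperature on the proxy and predict directly.
b_i, a_i = np.polyfit(proxy, temp, 1)
inverse = lambda p: a_i + b_i * p

# Inverse regression shrinks predictions toward the calibration-period mean,
# so the two disagree most for proxy values far from that mean.
p_new = 6.0
print(classical(p_new), inverse(p_new))
```

The shrinkage of inverse regression toward the calibration mean is exactly why extrapolation outside the calibration range is where the choice of method bites.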

For the *natural calibration* problem, Brown uses the term ‘inverse regression’, and it seems to be the method we (at least I) have referred to here as ‘direct regression’. Some random thoughts:

1) Temperature-vs.-proxy calibration is not controlled calibration, in the sense that we can’t make sure the calibration temperatures cover the whole range of temperatures. However, I think Brown’s ‘inverse regression’ would be more problematic in the proxy case. (I did some simulations.)

2) Should we use multivariate calibration or univariate calibration, i.e. Juckes et al. A1 or A2 (before the CVM scaling!!)? Not much difference, except that in the multivariate case there is a clear danger of overfitting. (This result is based on my simulations, not a proven fact.)

3) There seems to be a general assumption that the calibration data are very accurate. For example, in (2):

The motivation for this ‘model’ is that the GLS and the BLP are defined as optimal choices under the assumption that calibration data yield accurate parameters.

GLS=Generalized Least Squares

BLP=Best Linear Predictor
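Point 2 above can be sketched with a toy simulation (my own, with an invented proxy model and sizes; “A1/A2” here only loosely mimic the Juckes et al. variants): a univariate regression on a simple proxy composite versus a full multivariate regression, compared out of sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cal, n_test, p = 40, 500, 20   # short calibration period, many proxies

def make_data(n):
    temp = rng.normal(size=n)
    # each proxy = weak common signal + noise (illustrative only)
    proxies = 0.4 * temp[:, None] + rng.normal(size=(n, p))
    return temp, proxies

t_cal, X_cal = make_data(n_cal)
t_test, X_test = make_data(n_test)

# "A2"-style univariate calibration: average the proxies, then one regression
c_cal, c_test = X_cal.mean(axis=1), X_test.mean(axis=1)
b, a = np.polyfit(c_cal, t_cal, 1)
rmse_uni = np.sqrt(np.mean((a + b * c_test - t_test) ** 2))

# "A1"-style multivariate calibration: one coefficient per proxy
A = np.column_stack([np.ones(n_cal), X_cal])
beta, *_ = np.linalg.lstsq(A, t_cal, rcond=None)
pred = np.column_stack([np.ones(n_test), X_test]) @ beta
rmse_multi = np.sqrt(np.mean((pred - t_test) ** 2))

print(rmse_uni, rmse_multi)  # multivariate typically worse out of sample here
```

In this toy setup both methods target the same optimal predictor, so the out-of-sample gap is pure estimation error: fitting 21 coefficients on 40 calibration points is where the multivariate overfit comes from.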

REFS:

1) P.J. Brown (1982). Multivariate Calibration. Journal of the Royal Statistical Society, Series B, Vol. 44, No. 3, pp. 287-321.

2) R. Sundberg and P.J. Brown (1989). Multivariate Calibration with More Variables than Observations. Technometrics, Vol. 31, No. 3, pp. 365-371.

I read this blog extensively, and in fact have read all the threads you are involved in completely from top to bottom. I did not make a snap judgement, but observed over a long time period before coming to a conclusion. I then stated my opinion clearly, and without malice. I did not “lash out” as you claim. In fact, your one-sentence response, if anything, proves my point.

Fortunately for you and everyone else on this blog, I do not beat dead horses. There will be no further direct communications from me to you, as that would be pointless. I may from time to time refer to you in the third person; however, I will continue to use the honorific ‘Dr.’ instead of the inappropriately (to me) familiar ‘Martin’ or the brusque (to me) ‘Juckes’.

But which is which and which is when?

Assume E(ε_t) = 0 for all t? Proof by assumption!

In my professional experience, the brightest PhDs with whom I have worked played down the fact that they had a doctorate.

That is because any PhD with a modicum of intelligence realizes that he is an expert on only that tiny, tiny part of science which he has studied in depth. A PhD generally indicates a person who has a lot of persistence, but not necessarily anything else. Anyone who wants to test their general intelligence should try to outwit a Southern tree-farmer on a log purchase.
