> With nothing but a low r2 you doubt the fundamental validity of the proxy?

After consulting Dr. Ben, I was led to believe that calculating r^2 would be ridiculous and that an RE statistic with a value of 0 or higher would be more appropriate — so forget everything I said, as I must have been temporarily blinded by reason.

> How would a large unexplained variation and the uncertainty associated with it be transposed back into time for the reconstruction where the factors of the unexplained variation could conceivably obliterate the explained variance in the calibration/validation period?

Huh? Ken, the larger the unexplained variation, the smaller the magnitude of the calibration coefficient, the larger its standard error, and the lower its significance.

But like I said, this IS assuming that the unexplained variation does NOT “obscure” the proxy response signal. Don’t forget that the signal can be very noisy yet still be estimated correctly.
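This point can be illustrated with a quick synthetic sketch (all numbers here are assumed for illustration, not taken from any actual proxy): a regression in which the noise dwarfs the signal, yet the calibration coefficient is still recovered accurately and significantly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                        # e.g. calibration-period temperature
y = 0.5 * x + rng.normal(scale=3.0, size=n)   # proxy: weak signal, heavy noise

# Ordinary least-squares fit and the standard error of the slope
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
se = np.sqrt(resid.var(ddof=2) / ((n - 1) * x.var(ddof=1)))

r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"slope = {slope:.2f} +/- {se:.2f}, R^2 = {r2:.3f}")
```

With these assumed numbers R^2 comes out around 0.03, yet the slope estimate sits close to the true value of 0.5 with a t-statistic well above conventional significance thresholds: a very noisy signal, estimated correctly.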

This also assumes, of course, that the series in question IS indeed a proxy, i.e. the calibration is not spurious. Maybe that’s your issue? With nothing but a low r2 you doubt the fundamental validity of the proxy?

> This unexplained variation, even if it is large, is not a concern if it is independent of the thing that is being proxied/reconstructed. That is to say, the size of the confidence envelope will be suitably inflated by the huge uncertainty on the calibration parameter. Further adjustment beyond this is not required.

The unexplained variation is, of course, estimated in the calibration/validation period/process. How would a large unexplained variation and the uncertainty associated with it be transposed back into time for the reconstruction where the factors of the unexplained variation could conceivably obliterate the explained variance in the calibration/validation period?
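One way to see how the calibration-period uncertainty travels back in time is a Monte Carlo sketch (all numbers here are hypothetical): draw plausible calibration slopes and noise realizations, invert each one, and read the envelope off the spread of the reconstructed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration results: slope b with standard error se_b,
# and residual standard deviation s from the calibration fit.
b, se_b, s = 0.5, 0.1, 3.0

y_past = 1.0  # a single pre-instrumental proxy value to invert

# Propagate both the slope uncertainty and the unexplained variation:
b_draws = rng.normal(b, se_b, size=100_000)
noise = rng.normal(0.0, s, size=100_000)
x_draws = (y_past - noise) / b_draws  # classical-calibration inversion

lo, hi = np.percentile(x_draws, [2.5, 97.5])
print(f"95% envelope for the reconstructed value: [{lo:.1f}, {hi:.1f}]")
```

The envelope is wide precisely because both the unexplained variation and the slope uncertainty are carried back into the reconstruction; nothing about a low calibration R^2 is hidden by the procedure.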

And, of course, if we do not turn a blind eye to the divergence/out-of-sample period results following the calibration/validation period, we have some corroborating evidence to give concern.

I suppose as long as no one bothers to discuss these results all in the same paragraph, we can avoid the concern.

Yes, and that’s why there’s value in putting a lens on that issue.

> This unexplained variation, even if it is large, is not a concern if it is independent of the thing that is being proxied/reconstructed. That is to say, the size of the confidence envelope will be suitably inflated by the huge uncertainty on the calibration parameter. Further adjustment beyond this is not required.

If your reconstruction includes a robust estimate of uncertainty then you do not need to arbitrarily screen out proxies for which the calibration correlation is “low”. The “lowness” gets factored into the size of the confidence envelope: the lower the r, the wider the envelope.
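The r-to-envelope relationship can be written down directly. For the simple classical-calibration case (a sketch under textbook assumptions, not any particular paper's method), the slope of proxy on target is b = r*sy/sx and the residual standard deviation is s = sy*sqrt(1 - r^2), so the envelope half-width for the inverted target scales as s/|b| = sx*sqrt(1 - r^2)/|r|:

```python
import numpy as np

def envelope_factor(r):
    """Classical-calibration envelope half-width, in units of the
    target's own standard deviation: sqrt(1 - r^2) / |r|."""
    return np.sqrt(1.0 - r**2) / abs(r)

for r in (0.9, 0.5, 0.2, 0.1):
    print(f"r = {r}: envelope ~ {envelope_factor(r):.2f} x sd of target")
```

At r = 0.5 the factor is already sqrt(3), about 1.73 times the target's own standard deviation, and at r = 0.1 it is nearly 10 times; the low-r proxy is not screened out, it is simply rendered uninformative by its own envelope.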

If you have a regression correlation that, when squared, indicates you are explaining perhaps a percent or two of the variance, I would think that, regardless of the confidence envelope calculated, one might be very concerned about using that regression in a reconstruction going back in time, where all those factors unexplained by R^2 could come into play.

> If your reconstruction includes a robust estimate of uncertainty then you do not need to arbitrarily screen out proxies for which the calibration correlation is “low”. The “lowness” gets factored into the size of the confidence envelope: the lower the r, the wider the envelope.

Urederra, please look at the Steve M post here and reread what the introduction to this thread was actually quoting.

Re: Steve McIntyre (#16),

The criterion is a correlation (r) of at least 0.5 between TR core samples, and it says nothing about the overall correlation to temperature.

While I find using such a low correlation with temperature strange for a reconstruction, I find it even stranger that the reviewers of these papers never seem to object much to these methods.

Thanks for the correction. That is even worse than I thought.
