Mike showed many detailed pseudo-proxy tests of the RegEM method, and these seemed quite convincing in showing little problem with that method… it does assume equal error in both the instrumental and proxy records, so it should show less bias than other methods that wrongly put all the error in the instrumental record (i.e., “typical” regression).

I guess random-walk pseudo-proxies were ruled out. If I have this right, ‘all the error in the proxies’ would lead to CCE (minimize the errors in the direction in which they occur; Eisenhart 1939).
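As a rough illustration of the distinction (synthetic data, not the thread’s linked code): in the CCE setting all the error sits in the proxy, so one regresses proxy on temperature and inverts the fit, whereas ICE regresses temperature directly on the proxy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data with all the error in the proxy (the CCE setting)
x = rng.normal(0.0, 1.0, 200)            # known calibration temperatures
y = 0.8 * x + rng.normal(0.0, 0.3, 200)  # proxy = b*x + proxy error

# CCE (Eisenhart 1939): regress proxy on temperature, so errors are minimized
# in the direction they occur, then invert the fit for unknown temperatures
slope, intercept = np.polyfit(x, y, 1)

def cce(y_new):
    return (y_new - intercept) / slope

# ICE, for contrast: regress temperature directly on the proxy
slope_inv, intercept_inv = np.polyfit(y, x, 1)

def ice(y_new):
    return slope_inv * y_new + intercept_inv
```

The product `slope_inv * slope` equals the calibration R², which is why the ICE prediction slope is always flatter than the CCE one.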

1) Pete’s view is that high autocorrelation in the unknown X-series causes problems for RegEM. To me, the problem is not autocorrelation but the training sample not being ‘like’ the rest of the data. I’d like to see what the true underlying assumptions are for RegEM to work; the MAR vs. MCAR discussion didn’t increase my understanding enough.
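A small sketch, with made-up data, of the MCAR/MAR distinction being argued over: under MCAR, missingness is independent of everything; under MAR it may depend on observed values (the proxy) but not on the missing ones, and a naive complete-case mean is already biased.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000
temp = rng.normal(0.0, 1.0, n)                # the series with missing values
proxy = 0.8 * temp + rng.normal(0.0, 0.5, n)  # always-observed companion series

# MCAR: every temperature value is equally likely to be missing
mcar_missing = rng.random(n) < 0.3

# MAR: missingness depends only on the *observed* proxy, not on temp itself
mar_missing = rng.random(n) < np.where(proxy > 0.0, 0.5, 0.1)

mcar_mean = temp[~mcar_missing].mean()  # unbiased estimate of the true mean (0)
mar_mean = temp[~mar_missing].mean()    # biased low: warm years go missing more often
```

Under MAR a likelihood-based method can still recover the truth by conditioning on the proxy; the naive average of observed values cannot.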

2) In my example, classical calibration works much better ( http://www.climateaudit.info/data/uc/cce_regem2.png ), whether one uses Pete’s CIs or not. I’d like to know when to switch from CCE to RegEM; do we need to run GCMs before selecting a calibration method?

If you use, for example, bias-corrected CIs, it looks like this.

For this problem, the much simpler classical calibration (https://climateaudit.org/2007/07/05/multivariate-calibration/ ) clearly works better:

And as we know the process that generated the missing values, we could increase the accuracy further by using that information.

I specifically said that the missing-at-random assumption is not violated. I wouldn’t be surprised if the very high autocorrelation in your setup causes problems for RegEM.

Is this problem with autocorrelation documented somewhere?

Ryan O and I have done a lot of work on RegEM and found some mathematical issues with it, particularly the TTLS variant.

More generally, TTLS regression is not a very stable method, and predictions made with it should be treated with circumspection. Selecting the optimum truncation level is not easy even when there is only one pattern of missingness. One of the foremost experts on TTLS, Prof. Van Huffel, does not recommend using TTLS to predict unknown values, as opposed to estimating regression coefficients. Ridge-regression RegEM, which likewise allows for errors in the predictor as well as the predictand variables, is much more stable.
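A minimal sketch (toy collinear data, not the RegEM code) of the two regularization styles being contrasted: truncated TLS discards singular directions of the augmented matrix [A b], while ridge shrinks coefficients continuously.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned toy problem: two nearly collinear predictors (made-up data)
n = 100
z = rng.normal(size=n)
A = np.column_stack([z, z + 1e-3 * rng.normal(size=n)])
b = A @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

def ttls(A, b, k):
    """Truncated TLS: SVD of the augmented matrix [A b], keep k directions,
    build the minimum-norm solution from the discarded right-singular subspace."""
    p = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
    V = Vt.T
    V12 = V[:p, k:]   # predictor rows of the discarded right singular vectors
    V22 = V[p:, k:]   # response row of the discarded right singular vectors
    return (-V12 @ np.linalg.pinv(V22)).ravel()

def ridge(A, b, lam):
    """Ridge regression: shrinks coefficients, tolerates collinearity."""
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ b)

x_ttls = ttls(A, b, 1)     # truncation level k = 1 (a discrete choice)
x_ridge = ridge(A, b, 1.0) # lam = 1.0 (a continuous knob)
```

Both recover coefficients near (1, 1) here, but the TTLS answer can jump discontinuously as k changes, which is one face of the instability mentioned above.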

Comparing calibration period estimates to observations is an important point. But you obscure that point by pretending that there’s some sort of “overwriting” going on. As I said, snark over accuracy and clarity.

Mann started it, but I think we are progressing anyway. Let me cherry-pick one step from M08 and show the result with your code:

The problem with using a calibration method that suffers from ‘loss of variance’ in this context is that such methods are good tools for generating hockey sticks. Not without help, as a flat line is not a hockey stick. But if you somehow combine a flat line with instrumental data (or use various existing smoothing methods; see for example http://www.youtube.com/watch?v=pCgziFNo5Gk ), you’ll get a blade. And due to loss of variance*, we cannot really tell whether blades of this kind have existed in the past. That’s why the accompanying uncertainties need to be well derived and documented (who knows what was done in MBH99?). So there is reason to worry about combining instrumental data with the reconstruction. Worries change to a joyride once the uncertainties are handled correctly.

* Calibration example: the ICE result is a matrix-weighted average of the CCE result and zero (Brown 1982), hence the loss of variance: the less accurate the proxies, the more weight towards zero.
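A toy numerical illustration of the scalar case of that footnote (synthetic data, not Brown’s matrix algebra): the ICE slope is the CCE slope shrunk by the calibration R², so noisier proxies pull the estimate harder towards the calibration mean.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 5000)  # centred calibration temperatures (synthetic)

def shrinkage(noise_sd):
    """Ratio of the ICE slope to the CCE slope: 1 = no shrinkage, -> 0 with noise."""
    y = 0.8 * x + rng.normal(0.0, noise_sd, x.size)  # proxy at this noise level
    b_y_on_x = np.polyfit(x, y, 1)[0]  # CCE: fit y ~ x, predict via inversion (1/b)
    b_x_on_y = np.polyfit(y, x, 1)[0]  # ICE: fit x ~ y, predict directly
    return b_x_on_y * b_y_on_x         # equals the R^2 of the calibration fit

# Less accurate proxies -> more weight towards zero, exactly as in the footnote
```

Calling `shrinkage(0.1)` gives a ratio near 1 (little variance loss), while `shrinkage(1.0)` falls well below it.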

Code is still here. The latest revision does ad-hoc bias-corrected CIs.

2SE CIs are only really appropriate when your estimate is unbiased and approximately normal. Even with an MCAR setup they’d be too narrow if you used them with RegEM.

The narrow confidence intervals are the result of your use of two-standard-error confidence intervals, which aren’t appropriate in this context.

Would they be appropriate under the MCAR assumption? Or under the assumption that the training sample gives us information about the missing values prior to observing the proxy values?

If you use, for example, bias-corrected CIs, it looks like this.

This is what I’m looking for; can you update the code so I can try it with Mann’s data?

I specifically said that the missing-at-random assumption is not violated. I wouldn’t be surprised if the very high autocorrelation in your setup causes problems for RegEM.

What if we just combine the assumptions (your missing-at-random + Roman’s

*it should be possible to calculate a valid estimate of the mean annual temperature using only the observed values without further assumptions or information about the unobserved values*),

then we don’t need to worry about high autocorrelation, i.e. add further assumptions?

So RegEM gives a flat line with too-narrow confidence intervals (reminds me of hockey sticks),

The narrow confidence intervals are the result of your use of two-standard-error confidence intervals, which aren’t appropriate in this context.

If you use, for example, bias-corrected CIs, it looks like this.

but no assumptions are violated.

I specifically said that the *missing-at-random* assumption is not violated. I wouldn’t be surprised if the very high autocorrelation in your setup causes problems for RegEM.

For me, at least, these are very interesting contributions from you (although possibly OT; but let SM snip away if he so chooses, it’s ok with me). I especially like your idea of “the expansion and contraction of human civilization” as a “proxy” (note the quotation marks there) for temperature variance. Of course there could well be, or is, a temperature component in it, but how could one tease that out? Maybe it could be done to some extent. If it could be done, then, as you said, it offers a much longer proxy record (although you said “correlation period” rather than “proxy record”; I need to stay alert that “correlation period” is NOT the same as “calibration period”). Thanks.