bender, you say, “Incidentally, the more dominating the low-frequency exogenous component(s), the lower the precision on the ARMA model estimates.” I don’t believe that is true, at least as I understand the statement.

The exogenous component can be made more and more dominating, in the sense of explaining the total sum of squares, by increasing the sample variance of the exogenous variables. Thus in a model of

1) Y(t) = phi*[Y(t-1) - X(t-1)*b - e(t-1)] + X(t)*b + e(t),

i.e. an AR1 model with a mean equal to X(t)*b, an increase in the variation of X will increase the explanatory power of the model and the significance of the estimate of b.

Run a Monte Carlo with varying levels of the variance of X and you will see, I believe, that the Mean Square Error of the estimate of phi is essentially unchanged by the variance of X while the MSE of the estimate of b is inversely related to the variance of X.
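A minimal sketch of the Monte Carlo Marty describes, under my own simplifying assumption that the model is the standard AR(1)-with-exogenous-mean form Y(t) - b*X(t) = phi*(Y(t-1) - b*X(t-1)) + e(t) (dropping the e(t-1) term inside the bracket), with a two-step estimator of my own choosing: OLS of Y on X for b, then an AR(1) fit on the residuals for phi. All parameter values are illustrative.

```python
import numpy as np

def simulate(n, phi, b, sd_x, rng):
    # Y(t) - b*X(t) = phi*(Y(t-1) - b*X(t-1)) + e(t), i.e. AR(1) disturbance u
    x = rng.normal(0.0, sd_x, n)   # exogenous variable with controllable variance
    e = rng.normal(0.0, 1.0, n)    # innovations
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = phi * u[t - 1] + e[t]
    return x, b * x + u

def estimate(x, y):
    # two-step: OLS of Y on X gives b_hat; AR(1) fit on the residuals gives phi_hat
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    phi_hat = r[1:] @ r[:-1] / (r[:-1] @ r[:-1])
    return beta[1], phi_hat

rng = np.random.default_rng(0)
phi, b, n, reps = 0.5, 1.0, 500, 400
results = {}
for sd_x in (0.5, 2.0, 8.0):
    draws = [estimate(*simulate(n, phi, b, sd_x, rng)) for _ in range(reps)]
    b_hats, phi_hats = map(np.array, zip(*draws))
    results[sd_x] = (np.mean((b_hats - b) ** 2), np.mean((phi_hats - phi) ** 2))
    print(f"sd(X)={sd_x}: MSE(b_hat)={results[sd_x][0]:.5f}  "
          f"MSE(phi_hat)={results[sd_x][1]:.5f}")
```

On this setup, MSE(b_hat) falls roughly in proportion to var(X) while MSE(phi_hat) stays near (1 - phi^2)/n regardless of the variance of X, which is the pattern Marty predicts.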

Is this the model you are referring to in your quote?

Marty

]]>1. For a quasi-demonstration of how a PACF changes when a trend is removed, compare the PACF of the tropical storm count (with its 1970-2005 trend) to that of the landfalling hurricane count (without a trend), with which it is strongly correlated (r=0.62 before 1930, r=0.49 afterward). See how the partial autocorrelations at lags 1-4 drop in the detrended series?

2. A clarification for anyone who finds it necessary: the purpose of autoregression is to identify endogenous processes that are persistent through time. Exogenous processes that fade in and out tend to inhibit the estimation of the endogenous autoregressive component, because they introduce a complex nonstationary noise structure.

3. Incidentally, the more dominating the low-frequency exogenous component(s), the lower the precision on the ARMA model estimates. This is the real problem with 1/f noise: you increase your sample size over time, and you inevitably uncover some new “trend” caused by some hitherto unknown exogenous forcing agent. Consequently, it is impossible to obtain an “out-of-sample” sample. (Your new samples come from **different** populations, which thus nullifies the validation test.)

If one views the statistics in Hampel’s terms, the question is: what is the breakdown point under contamination? With Mannian methods, the breakdown point can be reached with as little as one contaminated series.
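A toy illustration of Hampel’s breakdown point idea (my own construction, not a reconstruction of any Mannian method): the mean of a set of series has breakdown point zero, so a single contaminated series shifts the composite by an amount proportional to the contamination, while a pointwise median barely moves.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(20, 100))  # 20 well-behaved series, 100 time steps
bad = clean.copy()
bad[0] += 50.0                                # one contaminated series, offset by 50

# how far does each composite move when one series is contaminated?
mean_shift = np.abs(bad.mean(axis=0) - clean.mean(axis=0)).max()
median_shift = np.abs(np.median(bad, axis=0) - np.median(clean, axis=0)).max()
print(f"mean composite shift:   {mean_shift:.2f}")    # 50/20 = 2.5 at every step
print(f"median composite shift: {median_shift:.2f}")  # stays small
```

The mean composite is dragged by exactly (contamination)/(number of series) at every time step; no fixed offset can do that to the median until half the series are contaminated.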

]]>does lead to a high AR1 coefficient. That is one of the points these guys are making: AR1 models are an improvement over AR0 models, but they are fraught with their own problems.

The problem with these HS-shaped series is that they are nonstationary, so AR coeffs do not have a straightforward interpretation. (Split any time-series at the join of the shaft and blade, compute the PACF and you will see what I mean when you compare the two.) You could take out the trend, to give the coeffs a straightforward interpretation, but then you’ve got the problem of interpreting what it is you’ve taken out, and an autocorrelation analysis certainly isn’t going to help you now.
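A quick sketch of the split-at-the-join experiment on a synthetic hockey stick (my own construction: a flat white-noise “shaft” followed by a trending “blade”), using the lag-1 autocorrelation as a stand-in for the first PAC. The trend inflates the whole-series coefficient; each segment on its own, once the blade’s trend is taken out, shows essentially none.

```python
import numpy as np

def lag1(x):
    # sample lag-1 autocorrelation
    x = x - x.mean()
    return x[1:] @ x[:-1] / (x @ x)

rng = np.random.default_rng(2)
shaft = rng.normal(0.0, 1.0, 400)                               # flat "shaft"
trend = np.linspace(0.0, 4.0, 100)
blade = trend + rng.normal(0.0, 1.0, 100)                       # trending "blade"
hs = np.concatenate([shaft, blade])                             # hockey stick

print(f"whole series lag-1 autocorr: {lag1(hs):.2f}")           # inflated by the trend
print(f"shaft alone:                 {lag1(shaft):.2f}")        # near zero
print(f"blade alone, detrended:      {lag1(blade - trend):.2f}")# near zero
```

The whole-series coefficient is high even though the generating noise is white, which is exactly why a nonstationary series gives AR coefficients no straightforward interpretation.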

The purpose of autoregression is to figure out how X(t) varies as a function of X(t-1). If they are autocorrelated only indirectly, through the action of some other forcing variable, then the autoregressive model is a bad model, and this badness will be revealed when the forcing agent fades in and out (as teleconnections are wont to do).

]]>Ritson appears to be arguing that a high AR1 coefficient is actually evidence of a trend+low AR1 coefficient.

I think he just argues that you included signal in your simulated observations (= signal + noise).

This is about to get interesting. Signal must contain ‘trends’ and no high frequencies (otherwise the Ritson coefficient would underestimate the AR1 coeff of the proxy noise, link). On the other hand, signal cannot contain trends without CO2 forcing. Trendsetters.

]]>Or he has an agenda to push.

Mind you he would make an excellent salesman for another BrEx.

Does Mann believe we have to peer-review peer review?

Who guards the guardians? I think all of us believe in assuring that the peer-reviewers have as little pro-author or anti-author bias as possible. Peer-review ought to be less anonymous and have more stature and reward than it currently does.

]]>I confess I get hung up on the statistics side of these questions. When I first tried modeling the MBH98 method, I did it with a single “bad apple” format and calculated how bad the apple had to be to turn the whole bushel to worms. With real data for many (maybe many, many) series, the bad apple has got to be a big problem (and I am not arguing “the” problem) in saying anything about the many, many.

But hey, maybe Dr. Mann has done paleoclimatology a great service: the MBH98 transformation can be used, sort of in reverse, as a general “de-wormer.”

]]>