It makes sense that summing multiple AR(1) series can generate a realistic power spectrum for a time series. I think I would be wary of using that approach for significance testing, though, since it implies different assumptions about the frequencies much lower than the minimum resolvable frequency from the series (i.e., 1/t, where t is the series length). This may not have much effect on the look of the power spectrum, which spans only the resolvable frequencies, but I suspect it could well have an effect on the apparent significance of a trend within the series.

This reminds me of the Schwartz and Scafetta example, discussed at Lucia’s place here. This relates to the “controversial” Schwartz paper which found a short time scale for climatic processes. Scafetta investigated and felt the data was best described by two different time scales. This would come as no surprise at all to Hurst fans, for the reasons described above.

To my mind, treating the problem in this way is possible, but the solution seems to have a degree of “cycles and epicycles” about it. My instinct is that generalising the scaling behaviour would be a better route to take. My instinct is not always correct, though.

So AR(1) is a Team paradigm. No wonder RC dodges questions about Hurst and 1/f.

AR(1), but probably not many AR(1)s summed, as:

Even the sum of as few as three AR(1) processes with widely distributed coefficients (e.g., 0.1, 0.5, 0.9) gives a reasonable approximation to a 1/f power spectrum (Ward 2002).
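The quoted claim is easy to check numerically. Here is a minimal sketch (my own, not Ward’s code; the series length and the frequency band used for the fit are my choices): sum three AR(1) processes with coefficients 0.1, 0.5, 0.9 and fit the log-log slope of the summed periodogram, which should come out near -1 (a 1/f spectrum) over the intermediate frequencies spanned by the three time scales.

```python
# Sketch: sum of three AR(1) processes approximating a 1/f spectrum.
import numpy as np

rng = np.random.default_rng(0)
n = 2**16  # long series so the periodogram slope estimate is stable

def ar1(phi, n, rng):
    """AR(1) series x[t] = phi*x[t-1] + e[t] with unit-variance innovations."""
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

summed = sum(ar1(phi, n, rng) for phi in (0.1, 0.5, 0.9))

# Periodogram of the sum (drop the zero frequency)
freq = np.fft.rfftfreq(n)[1:]
power = np.abs(np.fft.rfft(summed)[1:]) ** 2 / n

# Log-log slope over an intermediate band spanning the three time scales
band = (freq > 0.01) & (freq < 0.3)
slope = np.polyfit(np.log(freq[band]), np.log(power[band]), 1)[0]
print(f"spectral slope: {slope:.2f}")  # roughly -1 for a 1/f spectrum
```

Outside that band the approximation breaks down, which is the point of the comment above: below the knee of the slowest AR(1) the summed spectrum flattens to white, unlike true 1/f or Hurst-type behaviour.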

So AR(1) is a Team paradigm.

Except when it’s not.

If we assume weather noise IS AR(1) and has the lag-1 correlation that gives the variability in 8-year trends Gavin gets, then you can show the monthly weather data since 2001 is an outlier. By *a lot*. It’s just not variable enough.
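That outlier test can be sketched in a few lines. Everything below is a hypothetical illustration, not Gavin’s calculation: the lag-1 correlation (0.6), the innovation scale, and the synthetic “observed” series are placeholders standing in for real monthly data. The idea is just to Monte Carlo the AR(1) null distribution of short-series variability and ask where the observed series falls in it.

```python
# Sketch: is an observed series "variable enough" under an AR(1) null?
import numpy as np

def ar1_series(phi, sigma, n, rng):
    """AR(1) with lag-1 correlation phi and innovation std sigma."""
    x = np.empty(n)
    x[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

def variability_percentile(observed, phi, sigma, n_sims=2000, seed=0):
    """Percentile of the observed sample variance in the AR(1) null distribution."""
    rng = np.random.default_rng(seed)
    n = len(observed)
    sims = np.array([ar1_series(phi, sigma, n, rng).var() for _ in range(n_sims)])
    return 100.0 * np.mean(sims < observed.var())

# Illustration with a synthetic "observed" series that is deliberately too
# smooth: 8 years of monthly data at half the null amplitude.
rng = np.random.default_rng(1)
too_smooth = 0.5 * ar1_series(0.6, 1.0, 96, rng)
pct = variability_percentile(too_smooth, phi=0.6, sigma=1.0)
print(f"observed variance sits at the {pct:.1f}th percentile of the AR(1) null")
```

A percentile far out in either tail is the "outlier by a lot" situation: under the assumed AR(1) null, data that quiet (or that wild) is simply improbable.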