Eh, ‘and trend precision’ should obviously read ‘and LESS trend precision’.

Dear Professor Koutsoyiannis,

I’m very glad to see you are still so active in the discussion 🙂

I think we are failing to see the (common) forest for the trees. Considering the discussion over at Bart’s (in which you, too, participated 🙂), I think we are basically approaching the same critter from different directions.

All the consequences of the high Hurst coefficients you reported are the same as those of a (near) unit root process (i.e. same forest, behind different trees).

– high uncertainty in parameter estimates (also the ones describing expected change, or if you will ‘the trend’)

– very high prediction uncertainty (widening prediction intervals with increasing forecasting horizons)

– deterministic (linear) trends are bogus

Considering this, don’t you think it’s a bit of a stretch to classify a process exhibiting a Hurst coefficient of 0.99 as ‘stationary’ and thereby very different from the integrated process representation? For all practical purposes (see again your conclusions), this process can very well be described as non-stationary.
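To put a number on “for all practical purposes non-stationary”: under a stationary Hurst (fGn) model, the standard error of an n-year mean shrinks like n^(H−1), versus n^(−1/2) for independent data. A minimal stdlib sketch (the scaling law is standard fGn theory; the 150-year record length is merely illustrative):

```python
def se_ratio(n, H):
    """Standard error of the n-sample mean relative to a single
    observation: n**(H - 1) for an fGn/HK process (reduces to
    the familiar n**-0.5 when H = 0.5, i.e. independence)."""
    return n ** (H - 1)

n = 150  # roughly the instrumental record, in years
for H in (0.5, 0.7, 0.99):
    print(f"H={H}: SE shrinks by factor {se_ratio(n, H):.3f}")
# At H = 0.99 the 150-year mean is barely more precise than a
# single yearly value -- behaviour indistinguishable, in practice,
# from a non-stationary process.
```

With H = 0.99 the shrinkage factor is about 0.95, i.e. essentially no gain in precision from 150 years of averaging.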

So, in my view, the point is not whether the series *has* a unit root (nothing *has* a unit root, and I think we agree on this, philosophically), but whether it should be modeled as having one. For those unaware of the context of this discussion, see the Bart thread (linked above), where I gave an entire battery of formal arguments for why I believe the process should be modeled as containing a unit root.

As I see it, the method you propose can be mapped onto a specific ARFIMA (where FI = ‘fractionally integrated’) process. In other words, it is (very, very) long-term stationary, as physics would predict. However, the number of observations available, i.e. the instrumental record in the case of temperatures, makes estimating any such process pointless.

Think of it this way: in order to ‘measure’ long-term stationarity you need a time frame over which (at least part of) this stationarity can be observed. The instrumental record falls well short of this requirement, and the results of ARFIMA estimates can thus be described as ‘arbitrary’ and useless for any ‘trend’ inference. Needless to say, this didn’t stop climate scientists from estimating it and publishing the results as some kind of ‘proof’ of something (what exactly, has eluded me so far).
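The arbitrariness on instrumental-record lengths is easy to demonstrate by simulation. The sketch below uses the classical rescaled-range (R/S) Hurst estimator (my choice for brevity, not the estimator either party actually uses) on repeated 150-point draws from a near-unit-root AR(1); the spread of the estimates, not any single value, is the point:

```python
import math
import random
import statistics

def rs_hurst(x, sizes=(10, 20, 30, 50, 75)):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent:
    slope of log(mean R/S) against log(window size)."""
    pts = []
    for n in sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            m = statistics.fmean(w)
            cum, s = [], 0.0
            for v in w:                 # cumulative deviations from the mean
                s += v - m
                cum.append(s)
            r = max(cum) - min(cum)     # range of cumulative deviations
            sd = statistics.pstdev(w)
            if sd > 0:
                rs.append(r / sd)
        pts.append((math.log(n), math.log(statistics.fmean(rs))))
    # least-squares slope of log(R/S) on log(n)
    mx = statistics.fmean(p[0] for p in pts)
    my = statistics.fmean(p[1] for p in pts)
    num = sum((p[0] - mx) * (p[1] - my) for p in pts)
    den = sum((p[0] - mx) ** 2 for p in pts)
    return num / den

random.seed(1)
ests = []
for _ in range(20):
    # AR(1) with phi = 0.9: stationary, but near the unit root
    x, v = [], 0.0
    for _ in range(150):
        v = 0.9 * v + random.gauss(0, 1)
        x.append(v)
    ests.append(rs_hurst(x))
print(min(ests), max(ests))  # wide spread from 150 points apiece
```

Twenty series of identical stochastic structure yield a broad range of Hurst estimates, which is exactly the “useless for any ‘trend’ inference” situation described above.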

Hence, as I proposed earlier, we should model it as a full-fledged unit root process (i.e. a non-stationary one), as it resembles one so closely. Just as Kaufmann and, for example, Beenstock did when proceeding with cointegration analysis.

Also, I strongly disagree (as I did in our earlier e-mail conversation 🙂 with the notion that ARMA processes require a large number of parameters to describe. The main advantage of an ARMA/ARIMA structure is that it *is* parsimonious, owing to its enormous flexibility. Hence the modeling framework’s ability to outperform most, if not all, structural models in terms of prediction accuracy.

In the case of the temperature record I managed to describe the ARIMA process ‘governing’ the series with 3 parameters (in both versions of the stochastic trend), which can hardly be considered ‘overfitting’. One simply has to employ information criteria responsibly when performing comparative diagnostics.
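That “responsible use of information criteria” can be sketched in a few lines: fit competing AR orders by conditional least squares and compare AIC = n·ln(SSE/n) + 2k (up to an additive constant). This is a toy illustration of the model-selection step, not the commenter’s actual fit; the simulated AR(1) with φ = 0.6 is an arbitrary stand-in for the data:

```python
import math
import random

def fit_ar(x, p):
    """Conditional least-squares AR(p) fit; returns (coefs, aic)."""
    n = len(x) - p
    X = [[x[t - j] for j in range(1, p + 1)] for t in range(p, len(x))]
    y = x[p:]
    # normal equations A b = c, solved by naive Gaussian elimination
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(p)]
         for i in range(p)]
    c = [sum(X[t][i] * y[t] for t in range(n)) for i in range(p)]
    for i in range(p):
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            for k in range(i, p):
                A[j][k] -= f * A[i][k]
            c[j] -= f * c[i]
    coef = [0.0] * p
    for i in range(p - 1, -1, -1):
        coef[i] = (c[i] - sum(A[i][k] * coef[k]
                              for k in range(i + 1, p))) / A[i][i]
    sse = sum((y[t] - sum(coef[j] * X[t][j] for j in range(p))) ** 2
              for t in range(n))
    return coef, n * math.log(sse / n) + 2 * p  # AIC up to a constant

random.seed(7)
x, v = [], 0.0
for _ in range(500):                 # simulate AR(1) with phi = 0.6
    v = 0.6 * v + random.gauss(0, 1)
    x.append(v)
for p in (1, 2, 3):
    coef, aic = fit_ar(x, p)
    print(p, [round(b, 2) for b in coef], round(aic, 1))
```

Comparing the printed AIC values across orders is the comparative diagnostic in question: extra lags must buy enough SSE reduction to offset the 2-per-parameter penalty.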

***

However, it comforts me that we arrive at the same conclusion: there is much more natural variability, and much less trend precision, than is generally understood.

This convergence in conclusions is indeed what one would expect when employing different, yet consistent, methods to study the same observed process. As you can see, I completely agree with your assertion:

“In my view they are just models, i.e. abstract mathematical constructions and there is not one-to-one correspondence of the abstract world of models with the real world.”

————-

Dear Steve,

What exactly do you mean with “Steve: Different animals. The process affects the error bars”?

A complete model describes the entire process, not just a part of it. Error bars, for example, are generated by the deviations of a process from a base model. In this sense, the error bars are not ‘affected’ by a process; rather, they and their dependence structure are part of it. Can you clarify, please?

Also, what does ‘all too artificial’ mean in this context? I don’t see any reason why an HK approach would be any less ‘artificial’ than an ARIMA one. Any model, and especially one summarizing a hypercomplex system with unknown boundary conditions and a practically infinite set of determinants into a finite set of parameters (in the case of trends: fewer than 5), is by definition ‘artificial’.

In professional circles, real-world evidence is what speaks most convincingly. By ensemble-averaging 110-year-long series of annual average temperatures recorded at a score of broadly representative US stations little affected by UHI, the following sample ACF r(m) is obtained for m = 1 to 25 years by a demonstrably unbiased estimation algorithm:

 m   r(m)
 0    1
 1    0.338569
 2    0.06349
 3    0.243411
 4    0.255325
 5    0.179204
 6    0.194036
 7    0.270506
 8    0.140518
 9    0.096746
10    0.09645
11    0.077148
12    0.11199
13    0.211723
14    0.05923
15    0.012962
16   -0.086086
17   -0.017072
18   -0.02321
19   -0.039536
20    0.111982
21   -0.110693
22   -0.171983
23   -0.127144
24   -0.1171
25   -0.133591
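For reference, the standard sample autocorrelation is r(m) = Σ(x_t − x̄)(x_{t+m} − x̄) / Σ(x_t − x̄)². The “demonstrably unbiased estimation algorithm” mentioned is not specified, so the sketch below implements only the textbook (biased but consistent) form, on made-up toy data:

```python
import statistics

def sample_acf(x, max_lag):
    """Textbook sample autocorrelation r(m) for m = 0..max_lag."""
    n, mean = len(x), statistics.fmean(x)
    dev = [v - mean for v in x]
    c0 = sum(d * d for d in dev)  # lag-0 sum of squares
    return [sum(dev[t] * dev[t + m] for t in range(n - m)) / c0
            for m in range(max_lag + 1)]

# toy illustration only -- not the station data discussed above
r = sample_acf([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0], 3)
print([round(v, 3) for v in r])  # r[0] is always 1
```

This estimator keeps |r(m)| ≤ 1 at every lag; bias-corrected variants rescale each lag’s sum by its own count.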

Note the fairly regular secondary peaks spaced 6-7 years apart and the persistently negative values at the longer lags.

Applying the HK formalism means ignoring these essential features, while prescribing a monotonic decay based on the value r(1) = 0.339. But that value is strongly influenced by the intra-decadal oscillations and tells us virtually nothing about the persistence of the multi-decadal oscillations that lead to increasingly negative values at the longer lags. The latter climatic oscillations show the highest power density by far, accounting for a third of the total variance in the first few spectral bands alone. The intra-decadal oscillations that produce the “jittery” year-to-year variability are more of academic interest vis-à-vis climate change. HK tells us nothing reliable about either in this case, which is structurally not much different from what is observed around the globe.
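The “monotonic decay prescribed from r(1) = 0.339” can be made concrete. For fGn, ρ(1) = 2^(2H−1) − 1, so the quoted r(1) implies H ≈ 0.71, and the implied ρ(m) = ½((m+1)^(2H) − 2m^(2H) + (m−1)^(2H)) stays positive at every lag, never reproducing the negative sample values above. A sketch of the standard fGn autocorrelation, not of any specific published fit:

```python
import math

def fgn_acf(H, m):
    """Autocorrelation of fractional Gaussian noise at lag m >= 1."""
    return 0.5 * ((m + 1) ** (2 * H) - 2 * m ** (2 * H)
                  + (m - 1) ** (2 * H))

r1 = 0.338569                       # the sample r(1) quoted above
H = 0.5 * (1 + math.log2(1 + r1))   # invert rho(1) = 2**(2H-1) - 1
print(round(H, 3))                  # about 0.71
rho = [fgn_acf(H, m) for m in range(1, 26)]
print(all(v > 0 for v in rho))      # monotone-positive decay: the
                                    # negative long-lag values of the
                                    # sample ACF are unreachable
```

Whatever one thinks of HK otherwise, this is the precise sense in which fitting it from r(1) alone discards the oscillatory structure.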

Dear “sky”,

I tried to put an end to this exchange, which does not seem to attract the interest of anybody else, by stressing our convergence. But I fully disagree with your last comment, particularly with your use of “never” and “parsimonious”. But I still wish to put an end to it, so I will not explain my reasoning any more. You know who I am, you can read my papers (some I have indicated above), all of which are available online (at least as preprints), and you can find my reasoning there.

While the HK framework is a step up from simplistic white-noise models, it INFLEXIBLY PRESUMES a stochastic structure that is almost never exhibited by time series of climate variables. The latter almost invariably exhibit oscillatory behavior (due to spectral peaks) that is a game-changer with respect to the primitive concept of persistence. It is a simple model, to be sure. But I would not characterize HK as “parsimonious,” as if the essential features had been captured. Let’s leave it at that.

The public would misuse data, but scientists wouldn’t. Right.

Since, as you say, you are not as “modern” as I initially suspected and since you do not give definitions for these concepts different from mine, I think we have good reasons to converge. To make a step further toward convergence, my reply is yes, I am aware of the limitations you mention. Please see slide 12 of my presentation in Edinburgh, where I say: “The HK process does not provide a ‘perfect’ and ‘detailed’ mathematical tool for geophysical processes. Rather it is the most parsimonious and simplest alternative to the classical, independence-based, statistical model”.

Since, as you say, you are quite aware of the basic things I mention, I hope you will agree with me that the notions of nonstationarity and nonlinearity have been badly abused, and will understand why I insist on their correct use.

After many decades of professional experience in analyzing and modeling geophysical processes, I’m considerably less “modern” than you suspect. There’s nothing “fashionable” about the terms “nonstationary” and “nonlinear.” Both are time-honored, familiar technical terms that tell us what the process or system is NOT.

Nor did I ever remotely suggest that a nonlinear stochastic approach is necessary to obtain “trajectories” that “resemble” the real world. On the contrary, my reference to standard linear methods of signal analysis (BTW, well described by Papoulis in a book by that name) should have alerted you to that.

I’m quite aware of the basic things you mention. Are you aware of the limitations of HK when dealing with processes whose spectrum contains very strong peaks and valleys?

Re: Demetris Koutsoyiannis (Jul 24 05:41),

And, probably, exceptional reliance on, and unbridled faith in, the Freedom of Information Act.
