Question. What initially stimulated concern for and research into anthropogenic climate change?

Answer. A time series of global mean temperatures.

The theory posits that

1) there is an observable and measurable positive trend in global mean temperatures.

2) There is a causal chain from GHG emissions to GHG concentrations to temperature.

You have to start with the first point before the second has any significance here. And it sure is funny how global mean temperatures appear to be a random process (ARMA) that in itself does not have any underlying trend – it is simply a function of its statistical properties (i.e. its autoregressive and moving-average components).

The implication being that we are seeing patterns in the temperature data that don’t in fact exist. This is extremely common, especially in the area of financial markets. If this is indeed a random walk (a non-stationary series in the strict sense – i.e. excluding trend-stationary series), the only reason we seem to observe trends when we do calculations on a naive linear basis (i.e. OLS) is because we don’t have a long enough data series.
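This spurious-trend effect is easy to demonstrate by simulation. The sketch below (all parameters – series length, number of simulations, the 0.01 slope cutoff – are illustrative choices, not taken from any temperature dataset) generates driftless random walks and fits an OLS trend line to each:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(y):
    """Fit y = a + b*t by ordinary least squares; return the slope b."""
    t = np.arange(len(y))
    b, a = np.polyfit(t, y, 1)
    return b

# Simulate many driftless random walks and record the OLS trend in each.
n_sims, n_obs = 2000, 120   # e.g. 120 "months" of data
slopes = []
for _ in range(n_sims):
    walk = np.cumsum(rng.normal(size=n_obs))  # pure random walk, no trend
    slopes.append(ols_slope(walk))
slopes = np.array(slopes)

# Even though the true trend is zero, individual realisations routinely
# show large apparent trends in either direction.
print("mean slope:", slopes.mean())
print("std of slopes:", slopes.std())
print("fraction with |slope| > 0.01:", np.mean(np.abs(slopes) > 0.01))
```

The mean slope across simulations is near zero, but the spread is large: most individual realisations show an apparent "trend" that a naive OLS fit would happily report.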

In effect this is what Steve’s work seems to be showing. It is unbelievable that this simple time series analysis has never been done before. Well, it is believable when you consider the econometric abilities of climate scientists as displayed in the RealClimate article on “extreme events”. In it they show what is clearly a trend-stationary series – easily recognisable as something completely different from any available temperature time series – label it “non-stationary”, and infer that it is a valid model to explain the supposed frequency of “extreme events”.

The method discussed – the Cochrane-Orcutt method – can correct the bias in uncertainty estimates that OLS suffers from when the residuals are not ‘white noise’.
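For concreteness, here is a minimal numpy-only sketch of the iterative Cochrane-Orcutt procedure (the function name, iteration cap, and convergence tolerance are my own choices, not anything specified in the thread):

```python
import numpy as np

def cochrane_orcutt(y, x, n_iter=50, tol=1e-8):
    """Iterative Cochrane-Orcutt estimation of y = a + b*x with AR(1) errors.

    Sketch: OLS residuals give an estimate of the AR(1) coefficient rho;
    quasi-differencing both series with that rho whitens the errors, and
    the regression is re-fit until rho converges.
    Returns (intercept, slope, rho).
    """
    rho = 0.0
    for _ in range(n_iter):
        # Quasi-difference both series with the current rho estimate.
        ys = y[1:] - rho * y[:-1]
        xs = x[1:] - rho * x[:-1]
        b, a = np.polyfit(xs, ys, 1)
        # The transformed intercept is a*(1-rho); rescale to the original
        # model before computing residuals.
        resid = y - (a / (1 - rho) + b * x)
        rho_new = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return a / (1 - rho), b, rho
```

With serially correlated errors the quasi-differenced regression has (approximately) white residuals, so the usual OLS standard errors become honest again – which is exactly the bias correction being discussed.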

I think you need to finish the sentence, Steve.

And I still don’t have a good feel for what processes physically lead to the effect.

ARFIMA has more persistent correlation than ARMA – correlations decay as n^-a, where a < 1 (a hyperbolic rather than exponential decay).
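The difference between the two decay laws is stark even at moderate lags. A small numeric illustration – the values phi = 0.9 and a = 0.4 are chosen purely for display, not taken from the comment above:

```python
import numpy as np

# Compare the theoretical autocorrelation decay of a short-memory AR(1)
# process (geometric, phi^k) with the hyperbolic decay k^-a typical of
# long-memory ARFIMA processes.
phi, a = 0.9, 0.4
lags = np.array([1, 10, 50, 100, 500])

arma_acf = phi ** lags          # exponential / geometric decay
arfima_acf = lags ** (-a)       # hyperbolic (power-law) decay

for k, r1, r2 in zip(lags, arma_acf, arfima_acf):
    print(f"lag {k:4d}:  AR(1) {r1:.2e}   ARFIMA-like {r2:.2e}")
```

By lag 100 the AR(1) correlation is effectively zero while the power-law correlation is still non-negligible – that residual correlation at long lags is what “more persistent” means in practice.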

2. Still not clear on the rationale for the ARMA behavior itself. Can you give me a good example? I need some intuition for why this type of behavior occurs versus non-autocorrelated behavior.

You can check the autocorrelation of a time series. If you get AR1 coefficients over 0.9 (probably 0.8), then you’re in a red zone and will need to adjust t-statistics. This is one issue I’m pursuing with these multiproxy studies. At best, they do a goofy confidence interval calculation, assigning confidence intervals of 2 standard deviations. This methodology is based on the fact that the 95% critical t-statistic is 1.96 – hence the 95% confidence interval. If the true critical t-statistic is 5 or 8 (as seems quite possible to me, and something that I’m trying to show), then the honest confidence interval of these studies is wider than natural climate variation – which is certainly my view of them.
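One standard way to make this adjustment concrete is the effective-sample-size rule of thumb for AR(1) series – a textbook correction, not necessarily the exact adjustment Steve has in mind. All simulation parameters below are illustrative:

```python
import numpy as np

def lag1_autocorr(y):
    """Sample lag-1 autocorrelation of a series."""
    y = y - y.mean()
    return np.sum(y[1:] * y[:-1]) / np.sum(y ** 2)

def effective_n(n, r1):
    """Effective sample size under AR(1) persistence:
    n_eff = n * (1 - r1) / (1 + r1)."""
    return n * (1 - r1) / (1 + r1)

rng = np.random.default_rng(1)
n, phi = 500, 0.9
# Simulate an AR(1) series with coefficient 0.9 -- the "red zone".
e = rng.normal(size=n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

r1 = lag1_autocorr(y)
print(f"lag-1 autocorrelation: {r1:.2f}")
print(f"nominal n = {n}, effective n = {effective_n(n, r1):.0f}")
# With r1 near 0.9 the effective sample is a small fraction of n,
# so naive 2-sigma confidence intervals are far too narrow.
```

With 500 nominal observations and r1 near 0.9, the effective sample shrinks to a few dozen – and the critical t-value for the honest degrees of freedom is correspondingly larger than 1.96.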

WRT ARMA itself, what is commonly the physical rationale for ARMA? How can one figure out that one is in an ARMA situation and so needs to adjust the t-stat threshold?

But the spreads are really broad. It would fall below the 90th percentile, and even the 80th percentile, with a trend much lower than the present trend.

There must also be effects in these time series that are not ARMA(1,1) on a monthly scale, but perhaps something like ARMA(1,1) on multiple scales (e.g. decadal, centennial, etc.). I haven’t thought about how you would simulate something like that. You could probably do it with wavelets, and it would be interesting to try. Too many little baubles to collect.
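One crude way to mimic “ARMA on multiple scales” without wavelets is to aggregate AR(1) components with very different decay times; aggregating short-memory processes like this is a classic route to longer-range dependence. A sketch – the timescale coefficients (0.5 for a fast “monthly-like” component, 0.99 for a slow one) are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, phi, rng):
    """Simulate n steps of an AR(1) process with coefficient phi."""
    e = rng.normal(size=n)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

def acf(y, k):
    """Sample autocorrelation of y at lag k."""
    y = y - y.mean()
    return np.sum(y[k:] * y[:-k]) / np.sum(y ** 2)

# Aggregate a fast and a slow AR(1) component to get a series whose
# correlations persist far beyond what either component's monthly-scale
# ARMA fit would suggest.
n = 5000
fast = ar1(n, 0.5, rng)     # short timescale
slow = ar1(n, 0.99, rng)    # long timescale
mixed = fast + slow

for k in (1, 10, 100):
    print(f"lag {k:3d}: fast {acf(fast, k):+.2f}  mixed {acf(mixed, k):+.2f}")
```

The fast component alone decorrelates within a few lags, while the aggregate stays correlated out to long lags – a cheap stand-in for the multi-scale behavior described above.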
