The latest GISS readings are shown in the diagram below:

[Figure: Scenarios A, B and C Compared with Measured GISS Surface Station and Land-Ocean Temperature Data]

The original diagram can be found in Fig 2 of Hansen (2006) and the latest temperature data can be obtained from GISS. The red line in the diagram denotes the Surface Station data and the black line the Land-Ocean data. My estimate for 2008 is based on the first six months of the year.

Scenarios A and C are upper and lower bounds. Scenario A is “on the high side of reality” with an exponential increase in emissions. Scenario C has “a drastic curtailment of emissions”, with no increase in emissions after 2000. Scenario B is described as “most plausible” and closest to reality.

Hansen (2006) states that the best temperature data for comparison with climate models probably lies somewhere between the Surface Station data and the Land-Ocean data. Good agreement between Hansen’s premise and the measured data is evident for the period from 1988 to circa 2005, especially if the 1998 El Nino is ignored and the hypothetical volcanic eruption of 1995, assumed in Scenarios B and C, is moved to 1991, when the actual Mount Pinatubo eruption occurred.

However, the post-2005 temperature trend is below the zero-emissions Scenario C, and it is apparent that a drastic increase in global temperature would be required in 2009 and 2010 for there to be a return to the “Most-Plausible” Scenario B.

Will global warming resume in 2009-2010, as predicted by the CO2 forcing paradigm, or will there be a stabilisation of temperatures and/or global cooling, as predicted by the solar-cycle/cosmic-ray fraternity?

**Watch this space!**

P.S: *It would be very interesting to run an “Actual Emissions” Scenario on the Hansen model to compare it with actual measurements. The only comment that I can glean from a literature survey is that Scenario B is closest to reality; however, CO2 measurements appear to be above this scenario while, unexpectedly, methane emissions are significantly below it. Does anyone have the source code and/or input data to enable this run?*

If you use statistics to infer something about an unknown aspect of a sample, you can use the z-test to see whether the difference between the sample mean and the population mean is large enough to be significant. To satisfy the central limit theorem (enough observations of variables with finite variance will be approximately normally distributed, i.e. Gaussian or bell-curved), the observations are by default assumed beforehand to be i.i.d.
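As a sketch of the z-test just described, here is a minimal stdlib-only version; the sample values, population mean and standard deviation are all made-up numbers for illustration:

```python
import math
import statistics

def z_statistic(sample, pop_mean, pop_sd):
    """One-sample z-statistic: the distance of the sample mean from the
    hypothesised population mean, in units of the standard error."""
    n = len(sample)
    standard_error = pop_sd / math.sqrt(n)
    return (statistics.mean(sample) - pop_mean) / standard_error

# Hypothetical example: population mean 100, population sd 15,
# and a small sample of 9 observations.
sample = [104, 98, 101, 110, 97, 103, 99, 105, 102]
z = z_statistic(sample, pop_mean=100, pop_sd=15)
# |z| > 1.96 would be significant at the 5% level (two-sided).
```

Note that the z-test itself assumes the observations are i.i.d. draws; if they are autocorrelated, the standard error above understates the true uncertainty.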

A collection of random variables is i.i.d. (independent and identically distributed) if each has the same probability distribution and they are all independent of each other. If observations in a sample are assumed to be iid for statistical inference, it simplifies the underlying math, but may not be realistic from a practical standpoint.

Examples of iid:

Spinning a roulette wheel

Rolling a die

Flipping a coin

Ceteris paribus of course.

(A statement about a causal connection between two variables should rule out the other factors which could offset or replace the relationship between the antecedent (the first half of the hypothetical proposition, in this case throwing a die) and the consequent (the second half, in this case that the die will land without any influence that would make the throws non-i.i.d. in the sample, such as weighting one side of the die before throwing it).)
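The two halves of “i.i.d.” can be checked empirically for the die example with a pure simulation (sample size and seed are arbitrary choices here): identical distribution means each face comes up about 1/6 of the time, and independence means roll k tells you nothing about roll k+1, so the lag-1 sample correlation should be near zero:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]

# Identically distributed: each face's relative frequency should be close to 1/6.
freqs = {face: rolls.count(face) / n for face in range(1, 7)}

# Independent: lag-1 sample correlation of consecutive rolls should be near zero.
mean = sum(rolls) / n
var = sum((r - mean) ** 2 for r in rolls) / n
lag1 = sum((rolls[k] - mean) * (rolls[k + 1] - mean)
           for k in range(n - 1)) / (n * var)
```

A weighted die would fail the first check; a die whose outcome somehow depended on the previous throw would fail the second.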

FWIW, I think the difficulty with lining everything up in 1958 is that the initial conditions (IC) for the runs were probably midnight, Dec. 31, 1957. Hansen et al. doesn’t say this, but one must provide initial conditions to a run, and setting the IC to match that particular time is the only thing that makes any real sense.

The Annual Average temperatures in 1958 did rise, but however the model was initialized, it didn’t.

It does make complete sense to put HADCRUT and GISS on the same time basis, so what Willis does makes sense there. You need to normalize everything to the same year.

I’m actually not sure quite what is correct to do about matching or not matching start points.

I could be wrong, but it seems to me there are challenges revolving around setting initial conditions. You can’t set them for a full average year; you must set them for a precise time. How well can any modeler know everything on Dec. 31, 1957? Whatever choices are made have some effect on climate. Some choices (for example, individual storms) may have short-term effects on predicted climate; others, long-term effects on predicted climate. (Anomalously high or low amounts of stored heat in the oceans could have a quite long-term effect.)

The sensitivity to these initial conditions is not discussed in the 1988 papers. I don’t run these models, so I don’t know.

But… anyway, did anyone ever find the data for the Hansen graph online? I’m hankerin’ for the unshifted stuff, and I’d like 2006 and 2007!

#168 is consistent with #165.

I don’t agree: E[x(k)x(k+1)]=0, and that means no autocorrelation (to me). w(k) matters, that’s true; I should add E[w(k)]=0, w(k) i.i.d.

Your caution about definitions, I imagine, stemmed from this line in #158:

My caution about definitions is in #160. You added

“non-autocorrelated” and “independent” are synonymous.

which I didn’t agree with.

Last post.

OK.

Of course, the nature of w(k) matters. The correlation in x(k) will degrade as the variance in w(k) increases. That does not mean x(k) is not autocorrelated. It means the autocorrelation coefficient is a weak model for describing the autoregressive effect of alpha.

Your caution about definitions, I imagine, stemmed from this line in #158:

A scorching hot month is not usually followed by a freezing month, for example. This type of

dependence on the previous data point is called “autocorrelation”.

If you “agree with Willis” on this point, then why raise the issue about definitions, particularly the distinction between “dependence” and “autocorrelation”?

His statement is accurate enough for a blog and accurate enough to make his case. If he wanted to be more accurate he might have said:

This type of dependence on the previous data point is

~~called~~ what leads to “autocorrelation”.

But then we’re splitting hairs here. And I just don’t think it’s necessary. Last post.

UC, if x1 is dependent on x2, and so on, then xi is, by definition, autocorrelated.

What is your definition of autocorrelation? I would use ‘E(x1x2)=E(x1)E(x2) means no autocorrelation’. My point is, if we criticize Hansen about faulty stats, we should be quite accurate with our terms. (I know what Willis means and I agree with him.)

Let’s see if I can find an example of a dependent but non-autocorrelated process. Change the AR1 process x(k+1)=alpha*x(k)+w(k) to x(k+1)=alpha*x(k)*w(k); would that do?
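A quick numerical sketch of this multiplicative construction, reduced to a two-variable version of the same idea: take x and w independent standard normals and set y = x*w. Then E[xy] = E[x²]E[w] = 0, so x and y are uncorrelated, yet y clearly depends on x (its spread scales with |x|); the dependence shows up in the correlation of the squares. All sample sizes and seeds here are arbitrary:

```python
import random

random.seed(1)
n = 200_000
x = [random.gauss(0, 1) for _ in range(n)]
w = [random.gauss(0, 1) for _ in range(n)]
y = [xi * wi for xi, wi in zip(x, w)]  # y depends on x, but E[xy] = 0

def corr(a, b):
    """Sample (Pearson) correlation coefficient of two equal-length lists."""
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / m
    va = sum((ai - ma) ** 2 for ai in a) / m
    vb = sum((bi - mb) ** 2 for bi in b) / m
    return cov / (va * vb) ** 0.5

c_linear = corr(x, y)  # near zero: x and y are uncorrelated
c_squares = corr([xi ** 2 for xi in x],
                 [yi ** 2 for yi in y])  # clearly positive: x and y are dependent
```

So in the E(x1x2)=E(x1)E(x2) sense, “non-autocorrelated” does not imply “independent”, which seems to be the point at issue.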

My original post that led to this discussion was intended for a less mathematically knowledgeable blog, so I tried to simplify the math, describing the standard deviation as the “average size” of the residuals, which is not strictly true, etc.

w.
