The ensemble mean and its associated “variance” are a chimera, an abomination and a gross perversion of the CI.

It is indefensible to use the model spread for anything that a CI would be used for.

This cannot be overemphasized.

A confidence interval is a measure of uncertainty due to sampling. The ensemble spread is nothing of that kind.

I quote: “Flaws in comparisons can be more conceptual as well – for instance comparing the ensemble mean of a set of model runs to the single realisation of the real world.”

Translation: Determining the worth of combined model outputs, weighed against observations, is “a conceptual flaw”.

“Or comparing a single run with its own weather to a short term observation. These are not wrong so much as potentially misleading”

Translation: Validating a single model output against observation is “potentially misleading”.

Am I wrong here, or am I actually reading what I think I am reading between the lines?

This is true and is an expected outcome in statistics.

It would be much more remarkable if any model, or the model mean, actually faithfully [correctly] reproduced the single observation in this case, or if any single run matched the exact model mean.

Note that this virtually impossible occurrence does actually happen with extreme frequency in the real world.

As in the smartest and least smart students both turning in identical test answers, for example.

Bridge hands with 13 spades.

With models this can be seen in the inability of any model to have an overall negative trend, ever.

–

Gavin said “the formula given defines the uncertainty on the estimate of the mean – i.e. how well we know what the average trend really is. But it only takes a moment to realise why that is irrelevant. Imagine there were 1000’s of simulations drawn from the same distribution, then our estimate of the mean trend would get sharper and sharper as N increased. However, the chances that any one realization would be within those error bars, would become smaller and smaller.”
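The point in the quote can be checked with a minimal simulation; the mean, spread, and sample sizes below are invented for illustration and not taken from any actual climate ensemble. As N grows, the standard error of the ensemble mean shrinks like 1/sqrt(N), so the probability that any one realization lands inside the mean's ±2 SE bars keeps falling:

```python
# Sketch of the argument above, with invented numbers: the error bars on the
# ensemble MEAN tighten as N grows, so a single realization drawn from the
# same distribution is ever less likely to fall inside them.
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma = 0.2, 0.1        # assumed trend distribution (illustrative only)

for n in (10, 100, 10000):
    se = sigma / np.sqrt(n)        # standard error of the mean of n runs
    draws = rng.normal(true_mean, sigma, 50_000)   # many single realizations
    p_inside = np.mean(np.abs(draws - true_mean) < 2 * se)
    print(f"N={n:6d}  2*SE={2 * se:.4f}  P(one run inside the bars)={p_inside:.3f}")
```

Under these assumptions the coverage falls from roughly 0.47 at N=10 to under 0.02 at N=10000, which is exactly the behaviour the quote describes.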

–

In practice the probability would remain exactly the same [help, please, from the roomful of mathematicians]. Statistically, the probability of a particular run falling within range of the observation or the expected mean can be described by a standard normal distribution.

Thus it is far more likely that any one model run will fall within one standard deviation [68.3%], and 95.4% will fall within two standard deviations. In fact, about 50% of the possible realizations should fall below the actual observation in most model runs.
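Those coverage figures are easy to verify numerically: the canonical values are about 68.3% of draws within one standard deviation, about 95.4% within two, and half of all draws below the mean. A quick Monte Carlo check with standard-normal draws:

```python
# Monte Carlo check of the one- and two-standard-deviation coverage figures
# and the 50% point, using standard-normal draws (purely illustrative).
import numpy as np

rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)

print(np.mean(np.abs(z) < 1))   # ~0.683: within one standard deviation
print(np.mean(np.abs(z) < 2))   # ~0.954: within two standard deviations
print(np.mean(z < 0))           # ~0.500: below the mean
```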

Perhaps the inability to incorporate larger natural-variation parameters is the reason for the model divergence.

For what you are aiming to do, which is sensible, the spread is not useful: rather, you would take each model one by one, compare it to the observations (taking the uncertainty in the latter into account), and decide whether to keep it or toss it in the bin. After this, you can go back to the models and see what factors may have contributed to their fate, and learn from it.

But this has nothing to do with the spread in the ensemble and more importantly, the spread in the ensemble has nothing to do with whether “models in general” match the observations or not. And that is the point I am making.
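The keep-or-toss screening described above can be sketched in a few lines. The observed trend, its uncertainty, the ±2 sd tolerance, and the model names below are all invented for illustration, not taken from any real model–observation comparison:

```python
# Hypothetical screening: compare each model's trend to the observation,
# one by one, keeping those that lie within the observational uncertainty.
obs_trend = 0.11        # invented observed trend
obs_sigma = 0.03        # invented observational uncertainty (1 sd)

model_trends = {"model_A": 0.12, "model_B": 0.25, "model_C": 0.08}

kept, tossed = [], []
for name, trend in model_trends.items():
    # keep a model whose trend lies within +/- 2 sd of the observation
    (kept if abs(trend - obs_trend) <= 2 * obs_sigma else tossed).append(name)

print("kept:", kept)      # candidates to learn from
print("tossed:", tossed)  # candidates to examine for what went wrong
```

Note that the ensemble spread never enters this procedure, which is exactly the point being made.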

I heard from some that the heat transport by the ocean towards the Arctic is smaller than in other models, but I have not investigated or confirmed this.

I tried twice before to show how the 1200 km smoothing increases “the observed” Arctic temperature compared to station data or to the 250 km smoothing, but I was not able to get through the gatekeepers. Here it was not a direct object of analysis, and it got through.

*$40 per ton is supposed to render a host of uneconomic technologies viable through a subsidy mechanism.*

The pricier technologies, and the idea that they will limit harmful pollution, are mostly pablum for the suggestible believers, and maybe for some of the planners who can’t see the scale and ambition of the scam in which they are participating.

The much bigger money is in finding the highest price industries and people will pay **not** to alter their conduct, and $40 is what the lobbyists who lobby the EPA think it is. That pricing is “cap and trade” working at peak efficiency. The EPA will reveal the nation-wide trading mechanism soon, or has already, or the admin will force it in by “executive action” while the people are distracted by the election, I predict.