KevinUK I agree with just about everything you say on this except maybe the following:

IMO the IPCC only does this so that it can claim a “consensus” and avoid possible disputes between different national climate modelling communities throughout the world which would otherwise undermine this claimed “consensus”.

I don’t believe that there is any sort of conspiracy or that there is anything particularly sinister about the IPCC. I checked it out on the web and it looks very similar to the dozens of other international science bodies that non-scientists seldom even hear about. Such bodies usually pride themselves on their even-handed internationalism.

I think the problem lies with the peer review system. Scientists tend to form themselves into little clubs around their own specialty. Each club has its own culture which defines what is acceptable research methodology and what is not. They even have their own vocabularies, e.g. principal components are called “empirical orthogonal functions” by oceanographers and surface wave specialists talk about the “celerity” of a wave rather than its velocity.

The trouble is that these cultures tend to drift apart and their adherents lose contact with other disciplines. This is particularly true of statistical methods. Each discipline has its own ideas about what constitutes statistical evidence, and these can be at odds with the methods used by real statisticians. Many climate scientists have a background in applied mathematics, which is often an alternative to formal statistics in undergraduate courses. Consequently they tend to “wing it” when it comes to stats and throw around terms like “variance” without really understanding the underlying concept of hypothesis testing. Their isolationism leads to dysfunction. IPCC TAR Chapter 8 is a classic example of this dysfunction.

Thank you for your reasonable reply to #84 and I apologise if my reply was worded in a belligerent way.

I agree with you. I’ve posted about this previously, namely that the UK nuclear industry was founded on a lie in order to justify the UK taxpayers’ funding of it. That all changed with the ‘dash for gas’, the best example of which I can give is that BNFL built a combined heat and power (CHP) plant at Sellafield (the Fellside CHP plant) to provide steam for process heating and electricity for THORP, as well as surplus electricity to the national grid. If they had actually believed in the economics of nuclear power generation they would have built an NPP. The fossil-fuel levies, as they were called back then, have now been re-directed into the renewables industry to enrich the likes of John Selwyn Gummer, who, as a cabinet minister (with the MAFF portfolio) under Maggie T, was involved in the whole ‘dash for gas’ decision-making process. One of the reasons why I post on ‘nuclear stuff’ on this blog is to hopefully ensure that any resurgence of nuclear power generation in the UK isn’t based on another lie, namely AGW.

Other than the fact that it’s definitely off-thread, I also agree with you that it’s not worth discussing the CAP.

On the subject of T(im)L(ambert), I’ve posted this before, but I have him to thank for getting me onto the subject of global warming in the first place. As I suspect you already know, I am a big fan of John Brignell’s NumberWatch web site. Up until seeing TL’s criticism of John B (he called him a crank and wouldn’t justify why) I hadn’t taken the time to get seriously into the whole AGW debate. After reading the thread on his site, I got far more interested, as IMO (perhaps not yours) a lot of what JB posts on NumberWatch is bang on the mark. For example, read this month’s (October) NumberWatch, particularly the bit about the BSE fiasco. If TL hadn’t called him a ‘crank’ there’s a fair chance I wouldn’t have put the effort in (nor persuaded a lot of my friends to do likewise), hence my thank you to TL.

KevinUK

I’m interested in your quote

“Now that we have a really nice database of global model results”

Does this imply that you consider the output of GCM simulations to be data? If so, why? Can you explain how you consider the output of a computer model to be data? Because it’s used as input into other computer models, e.g. economic forecasts of the effects of global warming? To my mind, in the context of computer model simulations there can only be two types of data:

Input data: e.g. initial conditions, physical constants, coefficients for physical property functions (e.g. the density of CO2 as a function of temperature/pressure), etc.

Transient (forcing) data: data which varies as a user-defined function of time and ‘forces’ a change to the steady state, e.g. the speed of rundown of the gas circulators in a gas-cooled reactor, etc.

Output from a computer simulation is NOT data (even if it is input into another model); it is a result (largely determined by the initial steady state and the transient forcing conditions) of the simulation. Similarly, any perturbation to the input and/or transient data only provides an indication (not proof) of the sensitivity which the computer model (a combination of equations) has to that input/forcing parameter. The variation in the output (e.g. maximum post-trip fuel temperature reached) as a function of a perturbation in an input/forcing parameter is NOT a measure of the uncertainty of the model.
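The sensitivity/uncertainty distinction can be illustrated with a toy one-equation “simulation” (entirely hypothetical; the model, constants and forcing here are made up for illustration, not taken from any real reactor or climate code):

```python
# Toy illustration: perturbing an input measures the MODEL's sensitivity
# to that input, not the uncertainty of the model itself.
# (Hypothetical one-equation "simulation"; not any real code.)

def toy_model(initial_temp, forcing_rate, steps=100, dt=0.1):
    """Integrate dT/dt = forcing_rate - 0.05*T and return the peak temperature."""
    temp, peak = initial_temp, initial_temp
    for _ in range(steps):
        temp += dt * (forcing_rate - 0.05 * temp)
        peak = max(peak, temp)
    return peak

base = toy_model(300.0, 20.0)
perturbed = toy_model(300.0, 20.0 * 1.01)   # +1% perturbation in the forcing input

# The output shift tells us how strongly THIS set of equations responds
# to the forcing parameter -- it says nothing about whether the equations
# describe the real world.
sensitivity = (perturbed - base) / base
print(f"relative output shift for a 1% forcing perturbation: {sensitivity:.4f}")
```

However many such perturbation runs are done, they only ever probe the equations as written; the result is a property of the model, not an error bar on reality.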

The combining of the differences in predictions between several ‘similar’ computer models (à la IPCC TAR) for a given defined scenario (initial state and defined forcings) is NOT a measure of the uncertainty of the output result, e.g. predicted global mean surface temperature over 100 years. IMO it is statistically invalid to combine the outputs of computer simulations in this way. IMO the IPCC only does this so that it can claim a ‘consensus’ and avoid possible disputes between different national climate modelling communities throughout the world which would otherwise undermine this claimed ‘consensus’.
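A minimal numerical sketch of the point, with made-up numbers standing in for any real ensemble:

```python
# Toy sketch (made-up numbers): the spread across several models' outputs
# for one defined scenario measures inter-model DISAGREEMENT, not the
# uncertainty of any prediction about the real world.
import statistics

# Hypothetical 100-year warming predictions (K) from five "similar" models
# run on the same scenario (same initial state and forcings):
model_outputs = [2.1, 2.8, 3.4, 3.9, 4.5]

ensemble_mean = statistics.mean(model_outputs)
spread = statistics.stdev(model_outputs)

# Quoting ensemble_mean +/- 2*spread as a confidence interval presumes the
# models are independent random draws from a population centred on the true
# climate -- an assumption nothing in the ensemble construction justifies.
print(f"ensemble mean = {ensemble_mean:.2f} K, inter-model spread = {spread:.2f} K")
```

The arithmetic is trivially valid; it is the interpretation of `spread` as an uncertainty about the real climate that has no statistical basis.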

What do you think?

KevinUK

This is shocking. Even I am taken aback by this. I would say that the IPCC, and all the processes that feed into it, need to be audited far more thoroughly than the rather modest efforts made until now.

I decided to read the Held &amp; Soden preprint you link to, but I’m not sure why you say there are no free parameters. Consider:

If the equilibrium response of lower tropospheric temperatures to a doubling of CO2 is close to the canonical mean value of 3K, this corresponds to a 20% increase in es.

This, from early in the paper, means that we already have an important parameter fixed higher than most of us skeptics are willing to accept. This means the results may be of interest if you’re already a warmer, but otherwise they’re not that useful. Further, though I’ve only just started through the paper, they say that the starting material is runs of the various models used by the IPCC. We already know there are things all these models have in common, like positive cloud feedbacks, which we don’t accept, so why should we accept anything which they all have in common?

More, perhaps, later.

Flux corrections are not about how the model is parameterised. Certainly quantities like diffusion and surface roughness need to be parameterised for coarse grid models but flux corrections do not fall into this category. They are fudges. They are arbitrary quantities for which there is no physical justification. They are added after the event to force the model output to resemble real world observations. If a child did this in a math test we would say he was cheating. Flux corrections are a form of cheating.

Moreover, modeling does not need to be done this way. For example, take a look at Isaac’s manuscript “Robust Responses of the Hydrological Cycle to Global Warming,” [Held and Soden, 2006] (here). No free parameters! Personally, I found it both refreshing and reassuring.

Which raises a question: If all the “fudge” were eliminated, what would the OAGCM models (and all the others) report? Would the differences just be quantitative (i.e. greater, but unstructured, discrepancy between model results and observations)? Or would new, *interesting*, “artifacts” of the models emerge?

The models we provided were always stringently tested against real currents measured by current meters in known wind fields and temperature profiles. If a model did not predict the measured currents accurately it was rejected by the client. We had to be sure. People’s lives and hundreds of millions of dollars depended on getting it right.

I have just read Chapter 8, Model Evaluation of IPCC TAR. The point I want to make is that they do not “get it right”. They do not even come close to getting it right.

17 of the 34 OAGCM models evaluated used flux corrections. Let us be clear about this. Flux corrections are not about how the model is parameterised. Certainly quantities like diffusion and surface roughness need to be parameterised for coarse grid models but flux corrections do not fall into this category. They are fudges. They are arbitrary quantities for which there is no physical justification. They are added after the event to force the model output to resemble real world observations. If a child did this in a math test we would say he was cheating. Flux corrections are a form of cheating. Any model which needs to use flux corrections can immediately be dismissed as inadequate.

The authors of Chapter 8 appear to exhibit a complete ignorance of statistical reasoning and scientific method. Nowhere are confidence limits used. There is no discussion of a null hypothesis. They purport to use statistics by comparing the variance of different model ensembles with one another and with the real world. This does not imply that the models represent the real world. It only shows that the models behave similarly to the real world. Of course they do. Had they not done so they would have been tweaked until they did, or they would not have been published at all. It does not mean that the models are homologous to the real world or that they have any predictive power.

If the authors used accepted statistical methods to evaluate each model they would proceed as follows: Firstly specify a null hypothesis such as: “the model accurately predicts the real world and any differences between model predictions and real world observations are due to chance”. Secondly use ensemble methods to calculate the probability that these differences are indeed due to chance. Thirdly compare this probability with some specified threshold. If it is less than the threshold then the model is dismissed as being wrong.

I believe that the authors are in fact well aware of this methodology. They give themselves away in the introduction to Chapter 8 where they state

While we do not consider that the complexity of a climate model makes it impossible to prove the model ‘false’ in any absolute sense it does make the task of evaluation extremely difficult and leaves room for a subjective component in any assessment.

In other words we are not going to do the statistics properly because it doesn’t give us the answers we want. Eyeballing the diagrams in Chapter 8 which summarise differences between model output and real world temperatures shows that in every case there are significant hot spots or cold spots which, I believe, would preclude the model from passing any proper statistical evaluation. The “task of evaluation” is in fact quite easy: all the models fail. The conclusion must be that there are aspects of the world’s climate system that have not been properly accounted for in the models. This is hardly surprising; it is a very complex system and, as the failure of the models shows, it is not yet completely understood.

In many ways OAGCM climate modelling resembles astrology. They both make predictions using a complex and arcane methodology. They both deliver outcomes which resemble the real world in their character. They are both eager to exaggerate their successes and ignore their failures. Neither of them is the outcome of the scientific method.
