## Koutsoyiannis 2008 Presentation

Anything by Demetris Koutsoyiannis is always worth spending time on. Spence_UK draws our attention to this recent presentation at AGU asking:

How well do the models capture the scaling behaviour of the real climate, as assessed by the standard deviation at different time scales? (Albeit at a regional, rather than global, level.)

Assessment of the reliability of climate predictions based on comparisons with historical time series (click on “Presentation” to get past the abstract page)
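The diagnostic described above, comparing variability across aggregation scales, can be sketched in a few lines: average the series over blocks of increasing length k and watch how the standard deviation of the block means decays. This is a generic illustration of the idea, not the exact procedure from the presentation; the white-noise input and the list of scales are my own choices.

```python
import numpy as np

def aggregated_std(x, scales):
    """Standard deviation of the series averaged over blocks of length k,
    for each aggregation scale k. For Hurst-Kolmogorov (long-term
    persistent) behaviour, std(k) ~ k**(H-1) with H > 0.5, so the
    aggregated std decays more slowly than the 1/sqrt(k) of white noise."""
    out = []
    for k in scales:
        n = len(x) // k                      # number of complete blocks
        blocks = np.asarray(x[: n * k]).reshape(n, k).mean(axis=1)
        out.append(blocks.std(ddof=1))
    return np.array(out)

def scaling_exponent(x, scales):
    """Least-squares slope of log std(k) vs log k; H = 1 + slope."""
    s = aggregated_std(x, scales)
    slope = np.polyfit(np.log(scales), np.log(s), 1)[0]
    return 1.0 + slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
scales = [1, 2, 4, 8, 16, 32]
# White noise gives H near 0.5; a persistent climate series would give
# something noticeably higher.
print(round(scaling_exponent(white, scales), 2))
```

The presentation's point, on this sketch's terms, is that observed station series give a much larger exponent than the model output does.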


## 298 Comments

Each line in the conclusion is a SLAP in the face of any GCM’er who thinks he has a handle on the science. I am sure this presentation will get the usual wagon-circling response from the community, if it hasn’t already.

Steve

That was an incredible presentation. Aside from the religious aspects (pro or anti) of global warming, do you have an opinion about why these models are so at variance with observations? We have noticed here that the current models are diverging more and more from observations (Douglass, Christy), so there seems to be some systemic source of error in what is going on. Is it a fundamental misunderstanding of climate? Is it GIGO (Garbage In, Garbage Out)? I just read an article where European scientists are proposing funding for a 100-petaflop computer so that they can model climate more accurately, but would all that power improve the models, or would it just increase the velocity at which we get bad answers due to flawed input assumptions?

Warning spoiler!

From the conclusions.

To take Government (or any) money for climate models, on the basis that they work, while knowing that they don’t work is ……

To give Government (or any) money …..etc.

I don’t think we are allowed to use the word, but you can fill in the blanks.

Actually, if the modelers weren’t using the models to make indefensible claims, it wouldn’t be a problem. Trying to develop climate modeling programs is a reasonable scientific endeavor. As long as:

1. They admit when they are wrong

2. Are actively and properly trying to validate their results

3. Don’t make strong positivist claims until the models are proven to work

4. Stay out of the political realm

Steve

I want to second Dennis Wingo’s request for some guidance on this one. I’ve been reading the same stuff and have just put a post about it on my own blog (here) but would really appreciate your opinion.

Is it really possible that the modellers are trying to hide fundamental problems concerning initial conditions behind a plea for vastly more powerful hardware? It seems to me that Dennis could be right and that the result may be more detailed predictions, but without significantly reducing the levels of uncertainty.

Maybe I missed something, but in the presentation the observed temperatures are in principle much higher than in the model simulations. That would mean that the models systematically underestimate the rate of warming. Am I wrong?

Ivan, the models seem to be over and under and invariably wrong.

#7, Ivan: Yes, you are wrong. Your first observation is about the absolute levels of temperature, and then your inference relates to the rate of warming. Not logical. Look at the rates of warming in the various graphs, and you’ll see that, relative to reality, the models show rates of temperature change that are broadly similar (Alice Springs, Khartoum, Matsumoto, Aliartos, Colfax), higher (Manaus), or much higher (Albany, Vancouver), not lower. But it really doesn’t mean anything anyway. Given the very poor predictive power of these models, the outcomes from such comparisons are basically random.

Well! Koutsoyiannis’ work nicely corroborates Matthew Collins’ 2002 study* of the HadCM3, showing that its projection correlation with its own artificially generated climate systematically declined to zero within four seasons; the HadCM3 had no ability to model climate.

Thanks a lot for posting that here for us, Steve. IMO Koutsoyiannis’ study should become a classic in the field. I love it that he cites Popper.

*M. Collins (2002) “Climate Predictability on Interannual to Decadal Time Scales: The Initial Value Problem.” Climate Dynamics 19, 671-692.

It’s beginning to look like GIGO means not only “garbage in, garbage out,” but regarding climate models: “garbage in, Gospel out.”

I don’t believe that the initialization problem is a question of funding. It’s a much more fundamental problem in mathematical modelling when attempting to solve simultaneous differential equations, each of which could be of order or more.

In saying that there is an initialization problem (following Trenberth), climate modellers are basically saying that predicting any localized climatic event is theoretically impossible.

This is why the news is filled with stories about climate events, with an obligatory modeller saying that this is “what was expected from our models,” without dealing with the nasty reality that the model predicted nothing at all; the modellers are only expecting (a prior expectation that they put into their models) that climatic events will increase in frequency and depth as the climate generally warms.

It’s clear from the analyses of Koutsoyiannis et al., as well as the fascinating article by Pat Frank in the recent issue of Skeptic, that climate models are very, very bad at capturing even regionalized climate changes, and ensembles of climate models produce systemic errors so large that no prediction can be made even 1 year in advance. That’s the bad news.

The good news is that in the next funding round, climate modellers will be able to produce even prettier and more beguiling pictures that mask these fundamental issues than ever before.

Don’t believe me? Try this BBC News description of a climate model:

Yep. Hard science in action.

I don’t know, it’s all Greek to me. 😀

Do the models account for things like this?

re 12

Does not inspire confidence in the forecast skill.

That said, there are substantive limiting qualities in present mathematical theory that are well known to mathematical physicists, but that have been “overlooked” in the IPCC “storybooks”:

“In the field of forecasting stochastic processes, the situation is quite similar, although there are differences. Everything is all right with forecasting stationary stochastic processes. We have here the well-known Kolmogorov-Wiener method, which has been found in the framework of the well-developed system of concepts of stationary stochastic processes. The problem is practically reduced to solving an integral equation whose kernel is the autocorrelation function of the stochastic process. But in fact all or almost all really observable stochastic processes prove to be non-stationary, at least judged by the behavior of their mathematical expectation. There is no mathematical theory of non-stationary stochastic processes.”

Navilov.

Apropos of the disagreement of the Hurst exponents between models and data, I remember that there was a paper several years ago that pointed out that the GCMs did not have the same fluctuations as the actual climate over a very large range of time scales.

The most important finding of the Koutsoyiannis et al. presentation is that the GCMs in general significantly underestimate the variances in the observed series. They fail because much, if not most, of the observed variation since 1880 is likely caused by cyclical changes in ocean circulation patterns rather than by anthropogenically induced forcing. If this is indeed the case (as, in the view of this writer, will become apparent over the next decade as ocean-circulation-induced cooling overwhelms the modest anthropogenic effect), then the GCMs will be revised to account for such cyclical change. The Keenlyside et al. paper suggests that such GCM revision is now underway in response to reality’s ignorance of the current crop of models. Unfortunately, mainstream climate science has way, way too much invested in the belief that equilibrium sensitivity is high to let go.

Vic Sage (#5 above) puts it well, namely, climate scientists should have stayed out of the political realm until their models had been proven to work. Future generations will hopefully learn from the mistakes of this generation of scientists.

Can I also add that I remain baffled by the notion that climate models are highly inaccurate in the short term but become increasingly accurate as the time horizon increases and as more climate models are combined. That seems to me to encapsulate the entire problem I have with the credibility of climate models and the people who swear by them.

Richard Feynman called it a disease in science.

Steve, (off topic)

The link “Responses to MBH” at the left, is broken.

(Feel free to delete this comment)

RoD

Re # 17 John A

This is an analogy to stock-market prediction, where from day to day some people have poor scores and some have good scores, but the hustlers say that over a term of decades (and as more players have joined), the market has always risen after a fall. If the observation is correct, it is useless. Also, if it is wrong, it is useless.

Re 6 and 12 above, a BBC local news program had an item, a couple of days ago, about a large investment in hardware to enable better climate forecasting. Perhaps they’ll end up with even prettier pictures!

RE #17 John A:

“Can I also add that I remain baffled by the notion that climate models are highly inaccurate in the short term but become increasingly accurate as the time horizon increases and as more climate models are combined”

Koutsoyiannis appears to be saying the opposite. The further into the future, the less correlation to reality, to the point of uselessness.

Yes Sir! and Boy Howdee!

===============

OUCH! That’s going to leave a mark.

Ok, now that we have settled this, can we get down to some real science on what impacts we might expect from increasing CO2 concentrations?

Re 2 Dennis Wingo: “…I just read an article where European scientists are proposing that funding for a 100 petaflop computer so that they can more accurately model climate but would all that power improve the models or would it just increase the velocity at which we get bad answers due to flawed input assumptions?”

Déjà vu? I think it was in the early 80s when the huge (at least for that time) climate modeling project on the Cray-1A at NCAR was canceled – or, I believe they said, “postponed until computers got an order of magnitude faster.” The project had become the laughingstock around Boulder. It was said that it had gotten very good at predicting tomorrow’s weather; it just took seven days to do it.

After reading Koutsoyiannis, I guess things haven’t changed a lot in the last 25 or 30 years after all. Where is Steve Mosher’s General (ref; the Raobcore Adjustments thread #9) when you need him?

Joe

RE #2 and “systemic source of error”.

1. I had done CFD modeling of 2-D turbulence over a corner using the K-E turbulence model many moons ago (1981). As I recall, the need for the K-E method was to get the number of variables to solve for equal to the number of equations. There were 1 or 2 “fudge factors” that you had to guess at, and these would be set by trial and error, seeing whether the results compared closely to the observed output. Isn’t that basically what they have to do in the GCMs, and isn’t that a systematic source of error?

2. Also, if you had the full accurate profile of all the initial conditions (velocity, pressure, temp, etc.) of the entire calculation field (or maybe it was just the grid boundaries; I don’t recall), you didn’t need the K-E simplification, but you couldn’t feasibly get this profile for real-world situations. Let’s say you could get all the initial conditions of the atmosphere at the start of a run; you’d still have shortfalls in the modeling of the physics, and shortfalls in the modeling of the coupled ocean and land, as well as pure calculation rounding and grid-size issues.

All systematic sources of error.

3. I assume the GCMs are basically trying to solve Navier-Stokes equations mixed in with volume-distributed “generation” terms (like cosmic-ray-based cloud formation (are they in yet?), condensation, aerosols, etc.). Are the Navier-Stokes existence and smoothness problems discussed in the following Wikipedia excerpt a potential “systematic source of error,” or is that a different animal of a category?

“The Navier-Stokes equations are also of great interest in a purely mathematical sense. Somewhat surprisingly, given their wide range of practical uses, mathematicians have not yet proven that in three dimensions solutions always exist (existence), or that if they do exist they do not contain any infinities, singularities or discontinuities (smoothness). These are called the Navier-Stokes existence and smoothness problems. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics, and offered a $1,000,000 prize for a solution or a counter-example.”

I suppose GCMs take into account the effect of CO2 on radiative heat transfer. However, what about convective and conductive heat transfer, which will be determined by turbulence? These effects will be highly non-linear and will act over a wide range of spatial as well as temporal scales. As I understand it, these effects are assumed to be noise in GCMs, to average out to zero, and hence to affect only weather as opposed to climate. Ever since I read Prof. Tennekes’ essay some time ago (http://www.sepp.org/Archive/NewSEPP/Climate%20models-Tennekes.htm), I have struggled to understand how this can be justified.

In the last few days I have asked people to comment on the last paragraph of Prof. Tennekes’ essay at a couple of sites (RC – Butterflies, tornadoes … and WMBriggs.com/blogs). Gavin at RC has kindly done so today – he accepts that turbulence is a problem but contends that this does not mean there is no point in modeling climate at all. I can see that models can provide many insights into a complex problem, but the question is about the “predictive” skill of the models, especially over localised rather than global scales. If they don’t have much skill, it is difficult to see how they can help frame public policy – especially if very drastic and revolutionary changes in lifestyle are called for. They may not have such skill at the moment, and they are unlikely to have it in the future, because of “the horrible unpredictability problems of turbulent flows” (Tennekes).

Those are interesting results, but I suppose that people on RC or similar would argue that the authors use only 10 stations or something like that. Although spatially dispersed all around the world and climatically diverse, they amount to a small fraction of the overall number of stations, probably even of stations with long temperature records. Additionally, we already know that stations with strange, counterintuitive temperature trends exist everywhere in the world. Some of the stations used by Koutsoyiannis et al. also are strange (e.g. Khartoum exhibits a cooling trend in the 1920s and 1930s, when the world passed through a large warming, and then the same station shows a strong warming trend from 1945 onward, although much of the world (and logically most of the other stations) exhibits a cooling trend 1945-1975). I wonder to what extent an exercise like this, with probably cherry-picked (or randomly chosen, whatever) stations with non-representative trends, could be taken as a model testing procedure. I suppose a larger number of stations should be picked, or at least stations with trends similar to the global one.

It’s arguable how good a job climate models do on global mean temperature (for example the period 1940-1970 is not well captured; usually there is a big temperature anomaly in the mid 1940s); however, there is no reason to believe that regional forecasts would have much validity either for forecast or backcast. In addition to the urban effects mentioned by Sam Urbinto above (#33), there are land-use changes in general, see e.g. this. In addition, there are regional demographic patterns that are not included in backcasts – for example, the regional increases of sulfate emissions in the industrialization of third-world nations such as China are not included in most standard models; rather, the global emission levels are assumed to be a constant forcing throughout all geographical regions.

Clearly these shifts in demographic patterns are difficult to predict accurately (who in 1988, for example, would have predicted the rapid growth of China’s industry over the next 20 years?). This suggests to me that global climate models will always be of limited skill in predicting the regional effects of a changing climate. And by the way, if I am wrong about the amount of detail that is included in the global forecasting models used in the quoted study, that makes it worse for them, not better.

32 Ivan, yes. That was the first thing that occurred to me as well… seems like a method ripe for cherry picking (although I did not read all of it, and maybe he lays out a tenable method by which individual comparison points were chosen?). And yes, Gavin has already dismissed it based on the very limited coverage and the use of single points to compare to entire model grid cells (he recommended a different study which ostensibly used some sort of regional averages of station data to compare to the model output… and surely got more favorable results).

Arggg. I get so frustrated by the whole culture of GCMs. I agree wholeheartedly with the analysis by Koutsoyiannis, and I don’t see how it is even debatable.

The models do not have temperatures which statistically resemble the actual temperatures. (In this case that means the spectral behavior: many-degree temperature swings in the 1- to 30-year range.) If that is true, the physics must be wrong, and you would be insane to look for a 0.1-degree rise in the integrated signal.

You can start with first-principles physics and predict the steady-state effect of 2xCO2. That’s okay.

You can look at historical data, and infer climate sensitivity. That’s reasonable.

You can build a complicated model, tune it empirically, and use it to estimate next weeks weather. (maybe)

But you can’t use that model to extrapolate decades into the future, into conditions well outside your experience. There is no reason why that should work. If someone had offered me a million dollars to try it, I swear I would have declined, knowing it was wrong to take their money when there was no hope of solving the problem.

I need a drink.

Thanks Steve for the posting and the comment, also on behalf of my co-authors. Thanks Spence_UK for drawing attention to this presentation. And thanks to all discussers for the comments. I am impressed by the number of downloads of the presentation file from our repository, which was triggered by this posting. 2500 downloads in two days! More than the annual number of downloads of our most popular papers.

This was in fact a poster paper in the EGU session “Climatic and Hydrological Perspectives on Long-term Changes” (see information in http://www.cosis.net/members/meetings/sessions/information.php?p_id=322&s_id=5262) organized by Manfred Mudelsee, Kosta Georgakakos and Harry Lins (Vienna, April 2008). I think it was a great success and it was decided to reorganize a similar session in 2009, again in Vienna.

The session organizers had invited climatologists, hydrologists, mathematicians and other scientists to discuss together their different perspectives on long-term changes. I think the target was achieved. For instance, Rasmus Benestad, who had criticized the idea of Long Term Persistence (LTP) in climate a couple of years ago in RealClimate (http://www.realclimate.org/index.php?p=228), expressed a vivid interest in LTP in the morning oral block; by the way, this block was chaired by Alberto Montanari, the president of Hydrological Sciences of EGU. I had an oral presentation there, jointly with Tim Cohn, on LTP and climate (The Hurst-Kolmogorov pragmaticity and climate, available online, http://www.itia.ntua.gr/en/docinfo/849). Benestad, who spoke later, said that LTP is fascinating. He presented the history of climatic changes of Northern Europe, and, replying to my question, agreed that many factors that have driven or affected these climate changes are unpredictable. He has reported on the EGU session in RealClimate (http://www.realclimate.org/index.php/archives/2008/04/egu-2008/).

That’s about this EGU session – I look forward to an even greater success in 2009.

Coming back to the discussion in this thread, I have a couple of comments. I see that confidence in GCMs is deeply rooted, so any falsifying results, like ours, must be attributed to manipulations such as cherry picking. No, we did not do any cherry picking. We retrieved long data series without missing data (or with very few missing data) available on the Internet. We had decided to use eight stations and we retrieved data for eight stations, choosing them with a sample-size criterion (> 100 years – with one exception for rainfall in Australia, Alice Springs, because we were not able to find a station with sample sizes > 100 years for both rainfall and temperature) and also a criterion to cover various types of climate and various locations around the world. Otherwise, the selection was random. We did not throw away any station after checking how well it correlated with GCMs. We picked eight stations, we checked these eight stations, and we published the results for these eight stations. Not even one station gave results better than very poor. Anyone who has doubts can try any other station with a long observation series. Our experience makes us pretty confident that the results will be similar to ours, those of the eight stations. And anyone who disagrees with our method of falsification/verification of GCM results may feel free to propose and apply a better one. What we insist on is that verification/falsification should be done now (not 100 years from now) based on past data. The joke of casting deterministic predictions for a hundred years ahead, while being unable (and thus declining) to reproduce past behaviours, is really a good one, as things show. But I think it is just a joke.

In this respect, I do not think that the uncertainty of future climate can be eliminated with more petaflops in computers. Have we understood everything on climate, so that our problem is just the increase of computational speed and accuracy? Even if this were the case, it is known from dynamical systems that a fully understood system with fully known dynamics (even simple – but nonlinear) is unpredictable for long time horizons. For instance in my Toy Model of Climatic Variability (http://www.itia.ntua.gr/en/docinfo/648 or http://dx.doi.org/10.1016/j.jhydrol.2005.02.030) I have demonstrated that a caricature 2-dimensional climatic model with fully known extra simplified dynamics is unable to reproduce even its own “climate change” trajectories given slight uncertainty of initial conditions.
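The point about unpredictability under slight uncertainty of initial conditions can be illustrated with an even simpler stand-in than the two-dimensional toy model cited above. The logistic map below is my own choice of example, not Koutsoyiannis’ model: a fully known, one-line nonlinear system whose trajectories still become unrelated after a tiny perturbation.

```python
# A minimal illustration: even a trivially simple, fully known nonlinear
# system loses predictability under a tiny error in the initial condition.
def logistic_orbit(x0, n, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.4, 60)
b = logistic_orbit(0.4 + 1e-10, 60)   # perturb the 10th decimal place

d_early = abs(a[10] - b[10])          # still microscopic after 10 steps
d_late = abs(a[50] - b[50])           # typically order one after 50 steps
print(d_early, d_late)
```

The error roughly doubles every iteration (the map’s Lyapunov exponent is ln 2), so after a few dozen steps the two “climate trajectories” share nothing but the equations that generated them.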

I think the ocean circulation has a better chance of being globally modeled than the atmosphere, as my intuition says that turbulence in the atmosphere has a greater propagation “consequence” and modeling challenge than turbulence in water…. (???)

If this is true, what would hold anyone back? Just say you’re modeling the warming.

RE 32. Dr. K, thank you for your work. SteveMc, on a few occasions we have had threads that touch on Hurst and LTP. Do we have a specific thread dedicated to either? It would be a nice place for Dr. K to drop in and chat.

CA readers are an inquiring lot. That’s a lot of “lurkers” who’ve taken the initiative to look at the paper.

And a useful function for the blog in drawing attention to the paper.

#35. We’ve visited the idea of long-term persistence from time to time, more in 2005, when the audience was smaller, than recently. The data set that inspired the LTP concept was a climate data set – the Nile River series studied by Hurst, of Hurst-exponent fame. I’ve studied some interesting old papers by Mandelbrot, who also studied climate series including tree ring series, some of which are predecessor series to versions used in Mann’s PC1. Lots of angles to the topic remain unexplored here.

#32

Excellent presentation. From an engineering point of view, the paper by Richard Voss, “1/f (Flicker) Noise: A Brief Review,” 33rd Annual Symposium on Frequency Control, 1979, pages 40-46, is also a good read on this topic. 1/f noise makes predictions (given the observations) for the distant future quite inaccurate, and makes forcings and ‘weather noise’ difficult to separate. The hockey stick shows no 1/f behavior ( http://www.climateaudit.org/?p=483#comment-208708 ), but the connection between the hockey stick and reality is questionable.
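To see why 1/f noise frustrates long-horizon prediction, one can generate it and compare it with white noise. The spectral-shaping recipe below is a common textbook method, not the specific algorithm from Voss’s review, and the series lengths and block size are arbitrary choices of mine.

```python
import numpy as np

def powerlaw_noise(n, beta=1.0, rng=None):
    """Shape white Gaussian noise in the frequency domain so its power
    spectrum falls off as 1/f**beta (beta=1 gives 'flicker' noise)."""
    if rng is None:
        rng = np.random.default_rng(0)
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid dividing by zero at DC
    spec *= f ** (-beta / 2.0)       # amplitude ~ f^(-beta/2), power ~ f^-beta
    x = np.fft.irfft(spec, n)
    return x / x.std()               # normalize to unit variance

def block_mean_std(x, k):
    """Std of the means of consecutive k-sample blocks."""
    n = len(x) // k
    return x[: n * k].reshape(n, k).mean(axis=1).std(ddof=1)

rng = np.random.default_rng(1)
pink = powerlaw_noise(1 << 14, beta=1.0, rng=rng)
white = rng.standard_normal(1 << 14)

# Averaging over 64-sample blocks shrinks white-noise variability by
# roughly 1/sqrt(64), but leaves much of the 1/f variability intact:
# the low frequencies masquerade as 'trends' at any horizon.
print(block_mean_std(white, 64))     # roughly 1/8
print(block_mean_std(pink, 64))      # noticeably larger
```

This is the separation problem in miniature: in the pink series, slow wander that looks like a forced trend is indistinguishable, sample by sample, from the noise itself.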

Koutsoyiannis’ weakness:

Comparison with stations is, ultimately, the right thing to do, but I wouldn’t pretend that an interpolation (I don’t know Georgakakos’ method) of four grid points truly represents the climate of that station.

In a region with important topographic features, four grid points cannot match the local topography when the mesh size of current models is hundreds of km.

If a station is located in a vast flat area, the comparison should be fine, otherwise you should look just at trends.

Anyway, all eight selected stations, either in absolute value or in trend, show the unreliability of GCMs.

Of course, this is a short presentation and I don’t pretend that all the issues are explicitly stated.

I guess I don’t get the obsession with “initial conditions” by both the modelers and the critics. In the long run (anything longer than a week) those should not matter. For example, I could drop a foot of snow here in Virginia (an impossibility this time of year), and in a week or even just a few days, the climate will be back to normal. The comment about turbulence makes a lot more sense. In general the chaotic elements of weather are the problem; perhaps “initial conditions” is a euphemism for that?

The biggest problem I see with modeling (and the entire AGW movement) is oversimplification. Above, that is alluded to by the lack of cosmic-ray cloud formation. The resolution of models is very poor as well. An MCS can have a notable climate impact in the short and medium run, but would normally be left out or parameterized. The general problem with parameterizing is that the parameters are valid for the current climate but not for altered climates. More valid parameters can be determined from higher-resolution submodels.

If I am not mistaken, the modeling problem will be solved within 20 years or so. The “initial conditions” or chaotic-effects problems will not (IMO) affect the average climate, provided weather is modeled with full fidelity (adequate resolution and comprehensive inputs).

Do the GCM modelers have their own documented methods and techniques for validating and verifying their climate models, ones which could be compared in some fashion with Koutsoyiannis’ method?

Or is it possible that validation and verification of the GCM’s using more-or-less conventional methods is not actually possible; i.e., the question of how to go about performing conventional V&V on these models is, in some sense, an “ill-posed” question?

And if the question of how to perform V&V on a GCM is ill-posed, does this mean that the models themselves, as an amalgamation of various physical theories and assumptions, are not falsifiable in any conventional sense?

For those interested, there is a very good webpage on Hurst exponents and autocorrelation in time series (such as stock market indices) at this link

It’s not quite as scary a topic as you’d think.

The practical example of the Hurst exponent is trying to estimate the minimum size of a reservoir so that it never runs dry (which is what Hurst studied with the Nile). This is discussed here
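Hurst’s reservoir calculation can be sketched directly: the rescaled range R/S is the range of cumulative departures from the mean inflow (the reservoir level you would need), scaled by the inflow’s standard deviation, and its growth with window length gives the Hurst exponent. This is a textbook version of classic R/S analysis, with my own synthetic data, not a reproduction of Hurst’s Nile computation.

```python
import numpy as np

def rescaled_range(x):
    """Hurst's R/S statistic for one window: range of the cumulative
    departures from the mean (the 'reservoir' level), divided by the
    standard deviation of the inflows."""
    x = np.asarray(x, dtype=float)
    dev = np.cumsum(x - x.mean())
    r = dev.max() - dev.min()        # required reservoir capacity
    s = x.std(ddof=1)
    return r / s

def hurst_rs(x, window_sizes):
    """Slope of log(R/S) vs log(n): roughly 0.5 for independent inflows,
    around 0.7 or more for the persistence Hurst found in the Nile."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        chunks = [x[i : i + n] for i in range(0, len(x) - n + 1, n)]
        rs = np.mean([rescaled_range(c) for c in chunks])
        logs_n.append(np.log(n))
        logs_rs.append(np.log(rs))
    return np.polyfit(logs_n, logs_rs, 1)[0]

rng = np.random.default_rng(0)
iid = rng.standard_normal(8192)
# Close to 0.5 (classic R/S is biased slightly high at short windows).
print(round(hurst_rs(iid, [16, 32, 64, 128, 256]), 2))
```

The practical punch line is that for H > 0.5 the required reservoir grows faster than the square-root rule of independent inflows predicts, which is exactly why Hurst’s Nile result surprised engineers.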

Gavin thinks the models are just great!

http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/

Read up on simple non-linear systems like the Lorenz attractor. As for the second comment, in the 19th century many scientists thought that they had everything they needed with enough data and PDEs.

Thanks Jaye, but I thought the whole point of the attractor is that it depicts the similarity of overall system behavior even though the initial and resultant conditions are unpredictable? As for the second, are there any systems that can’t be described with sufficient data and PDE’s?

“So far modellers have failed to narrow the total bands of uncertainties since the first report of the Intergovernmental Panel on Climate Change (IPCC) in 1990.”

Actually, not since the first (USA) National Academy of Science review of the brand-new topic of global warming in 1979.

The NAS review put the climate sensitivity at 1.5C to 4.5C per doubling. The latest spread from IPCC in AR4 is 2.0 to 4.5, immediately followed by the statement that “values less than 1.5C are highly unlikely.”

Think about what computers were like in 1979 compared to today. Thinking that the next increment in computer power will make a difference does not fit the historic trend.

Eric,

It is important to know how big the attractor is – how long does it take for the system to explore the entire attractor, 1,000 years or maybe in this case 10,000,000 years? Neighboring trajectories can quickly diverge and then spend long periods of time in different quasi-periodic sections of the attractor that have very different average characteristics.

Re: #41

The take away line in that thread might be Gavin Schmidt’s comment excerpted above. It gets beyond the science but explains much with regards to policy related issues and motivations.

re 35. SteveMc, also recall that Oke used Hurst to look for changes in temp station siting.

eric,

As somebody said, just finding the attractor is the tough part.

Most natural processes would qualify. For instance, the time interval series between drips from a dripping faucet is chaotic.

Attractors are a different issue. So please don’t use this thread as a springboard.

In respect to Gavin’s observation cited in #41:

I’m certainly on record with the opinion that the fundamental issues should be explainable without GCMs. On that basis, it should be possible for Gavin or someone else to identify a clear A-to-B exposition of the problem without GCMs, if “you don’t need climate models to know we have a problem.” Perhaps someone might be able to get a reference from Gavin for such an exposition, without the distraction of whether the GCMs have introduced irrelevant issues into the calculation.

I posted the following two comments on Real Climate. The first one went thru, the second is “awaiting moderation.”

Might I suggest that the commenters on this thread read the article by Pat Frank that recently appeared in Skeptic for a scientific (not handwaving) discussion of the value of climate models and the errors induced by the simplifications used in the parameterizations. That article, along with the mathematical discussion of the unbounded exponential growth (ill-posedness) of the hydrostatic system of equations numerically approximated by the current climate models, should mathematically clarify the discussion about the value of the models.

Jerry

[Response: That’s funny. None of the models show this kind of behaviour, and Frank’s ludicrous extrapolation of a constant bias to a linearly growing error over time is no proof that they do. That article is an embarrassment to the otherwise reputable magazine. – gavin]

Gavin,

Before so quickly dismissing Pat Frank’s article, why don’t you post a link to it so other readers on your site can decide for themselves the merits of the article? What was really hilarious was the subsequent typical hand waving response that was published by one of the original reviewers of the manuscript.

I also see that you did not disagree with the results from the mathematical manuscript published by Heinz Kreiss and me that shows that the initial value problem for the hydrostatic system approximated by all current climate models is ill posed. This is a mathematical problem with the continuum PDE system.

Can you explain why the unbounded exponential growth does not appear in these climate models? Might I suggest it is because they are not accurately approximating the differential system? For numerical results that illustrate the presence of the unbounded growth and the subsequent lack of convergence of the numerical approximations, your readers can look on Climate Audit under the thread called Exponential Growth in Physical Systems. The reference that mathematically analyzes the problem with the continuum system is cited on that thread.

Jerry

Re: #49

I guess my response to Gavin’s comment, “you don’t need climate models to know we have a problem”, was manifold puzzlement, in that we have heard that we do not need temperature reconstructions and now we do not need climate models. It would be instructive for Gavin to provide the “other” way to show that global mean temperatures may increase by x +/- b for a doubling of CO2 or its GHG equivalents, but it would have been more scientific, and less like advocacy, had he left out the “to know we have a problem” part.

Going deeper into this alternative approach I am wondering how one would make the case for the detrimental effects of CO2 increases without the models looking at regional areas and local climate catastrophes. Would not the alternative exposition of warming due to CO2 doubling be limited to predicting/projecting/explaining increases in the mean global temperature — which in my opinion means little to those one might want to convince that “we have a problem”?

49, On the contrary, Douglass et al. has shown that climate models fail to show we have a problem, with regard to AGW.

Re: #51

Perhaps Gavin prefers to deal with people who think that all that they need is blind faith? There’s not much else left really.

Responses from Gavin and me

[Response: I have no desire to force poor readers to wade through Frank’s nonsense and since it is just another random piece of ‘climate contrarian’ flotsam for surfers to steer around it is a waste of everyone’s time to treat it with any scientific respect. If that is what he desired, he should submit it to a technical journal (good luck with that!). The reason why models don’t have unbounded growth of errors is simple – climate (and models) are constrained by very powerful forces – outgoing long wave radiation, the specific heat of water, conservation of energy and numerous negative feedbacks. I suggest you actually try running a model (EdGCM for instance) and examining whether the errors in the first 10 years, or 100 years, are substantially different from the errors after 1000 years or more. They aren’t, since the models are essentially boundary value problems, not initial value problems. Your papers and discussions elsewhere are not particularly relevant. – gavin]

Gavin,

It is irrelevant what journal the article was submitted to. The scientific question is whether the mathematical argument that Pat Frank used stands up to careful scientific scrutiny. None of the reviewers could refute his rigorous mathematical arguments, and thus the Editor had no choice but to publish the article. Pat has used a simple linear formula to create a more accurate climate forecast than the ensemble of climate models (the accuracy has been statistically verified). Isn’t that a bit curious, given that the models have consumed incredible amounts of computer resources? And one only need compare Pat’s article with the “rebuttal” published by a reviewer to see the difference in quality between the two manuscripts.

The mathematical proof of the ill posedness of the hydrostatic system is based on rigorous PDE theory. Please feel free to disprove the ill posedness if you can. However, you forgot to mention that fast exponential growth has been seen in runs of the NCAR Clark-Hall and WRF models also as predicted by the Bounded Derivative Theory. Your dismissal of rigorous mathematics is possibly a bit naive?

If any of your readers really wants to understand whether the climate models are producing anything near reality, they will need to proceed beyond hand waving arguments and look at evidence that cannot be refuted.

Jerry

#50 — Jerry, Gavin’s response seems to imply that he doesn’t realize that the uncertainties in intermediate results must be propagated forward along with those results, when they’re being used in sequential step-wise calculations, in order to know the uncertainty in the final result.

Your work on the initial value problem further seems to show that no model of coupled PDE systems will ever be capable of long-term accurate forecasts. I’d suspect that part of why GCMs don’t show non-convergent behavior is because the hyperviscosity they include damps out the upward cascade of instability.

Your point is also reinforced by Carl Wunsch’s observation that ocean models also don’t converge. He wrote that when he asks modelers about the meaning of non-converged output, they brush him off on the grounds that their results ‘look reasonable.’

At least Gavin realizes that there is “outgoing long wave radiation, the specific heat of water, conservation of energy and numerous negative feedbacks.”

How one models those, well, that’s another story, as is what the results mean.
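The propagation Pat describes can be made concrete with a toy calculation (my numbers, purely illustrative, not taken from any GCM or from Frank’s paper): if each step of a sequential computation carries a per-step uncertainty sigma, independent errors accumulate as sigma·sqrt(n), while a constant systematic bias accumulates as sigma·n. Either way the final uncertainty dwarfs the per-step value.

```python
# Toy propagation of a per-step uncertainty through a sequential
# calculation (illustrative numbers only).
# Independent random errors accumulate as sigma * sqrt(n);
# a constant systematic bias accumulates as sigma * n.
import math
import random

random.seed(0)
sigma = 0.1      # hypothetical per-step uncertainty
n_steps = 1000
n_trials = 500

finals = []
for _ in range(n_trials):
    total = 0.0
    for _ in range(n_steps):
        total += random.gauss(0.0, sigma)  # independent error each step
    finals.append(total)

mean = sum(finals) / n_trials
spread = math.sqrt(sum((f - mean) ** 2 for f in finals) / n_trials)

random_walk_growth = sigma * math.sqrt(n_steps)  # about 3.16
systematic_growth = sigma * n_steps              # 100.0

print(spread, random_walk_growth, systematic_growth)
```

Whether a model’s parameterization error behaves like the random-walk case or the systematic case is precisely the point in dispute in this thread.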

The hydrosphere wants better representation in all this atmospheric stuff. I hear it’s ready to go on strike.

“I suggest you actually try running a model (EdGCM for instance) and examining whether the errors in the first 10 years, or 100 years are substantially different to the errors in after 1000 years or more. They aren’t, since the models are essentially boundary value problems, not initial value problems.”

This is a bizarre statement. First, how does Gavin define “error” in this context? Is he comparing with an “exact solution” to his “boundary value problem” (unlikely)? Second, it appears that he is suggesting that you run a “boundary value problem” GCM to prove that climate is a “boundary value problem”. Really?

Maybe we can have rational discussion with Gavin when he bothers to document his own GCM and can thereby tell us what equations he is actually solving (maybe he doesn’t even know what’s in there anymore…).

#55 (Pat Frank)

Pat, I was confused by the ranges you presented because I interpreted them as the 95% confidence interval. This would make the results clearly wrong because they were way outside any physically possible outcome (i.e. the probability of a 100 degC cooling in the next 100 years is effectively zero). However, your text seemed to indicate that these unrealistic ranges indicate that the models are unstable, and that you are not claiming that the model 95% confidence intervals are +/- 100 degC.

However, I am not certain. Could you please clarify the meaning of those ranges?

Responses from Gavin and me

[Response: I quite agree (but only with your last line). Frank’s argument is bogus – you know it, I know it, he knows it. Since models do not behave as he suggests they should, perhaps that should alert you that there is something wrong with his thinking. Further comments simply re-iterating how brilliant he is are not requested. – gavin]

Gavin,

I point out that there was not a single mathematical equation in the “rebuttal”, only verbiage. Please indicate where Pat Frank’s linear fit is less accurate than the ensemble of climate models in a rigorous scientific manner, i.e. with equations and statistics and not more verbiage.

And I am still waiting for you to disprove that the hydrostatic system is ill posed, and that when the solution of that system is properly resolved with a physically realistic Reynolds number it will exhibit unbounded exponential growth.

Jerry

Hi Pat (#55),

It never ceases to amaze me how people like Gavin are able to use verbiage in order to circumvent the logic of a rigorous mathematical argument. The quote at the beginning of your manuscript says it all. These people should spend the time they waste on verbiage learning some scientific skills.

Note that when I pointed out that exponential growth has been shown in NCAR’s own well posed (nonhydrostatic) models, as predicted by mathematical theory, Gavin did not respond. Unphysically large dissipation can hide a multitude of sins and is a tool commonly misused in numerical models in many scientific fields.

If you look at the derivation of the oceanographic equations, they are derived using scaling for large scale oceanographic flows (similar to how the hydrostatic equations of meteorology are derived). The viscous terms are then added to the hydrostatic oceanographic equations in an ad hoc manner. There is no guarantee (and in fact we were unable to prove) that this leads to anything physically realistic. A better method starts with the NS equations and proceeds from there to a well posed system.

Jerry

I call it neo-medievalism. Goes along with the obsessive censorship, name calling and shouting down of somebody else’s POV. Quite common in certain circles whose polarity on a number line corresponds to negative numbers.

Like Frank K in #57, I’m scratching my head trying to figure out what Gavin means by “errors”. What is he comparing these model results to? And if he knows what the values should be, why is he running the models at all?

In the slide show from UC in #36 quoting #32 Demetris Koutsoyiannis, the latter notes:

As a natural example of related observations from Nature, you might have seen the cauliflower-related vegetable that is nicknamed here “Broc cauli”.

It is a neat example of events being repeated at different scales at different times to express self-replication. It reminds me of the power spectrum frozen in one time, similar to the slide show page, Koutsoyiannis & Cohn, The Hurst phenomenon and climate, 28.

The image shows how nature can control and replicate a shape over and over. The whole is replicated in the parts, at about 4 visible orders of scale.

This leads (past Lovelock and past DNA) to the question: does climate have inbuilt feedbacks that prevent tipping points and runaways? Plants like this broc cauli have been around for long periods by geological criteria, and the persistence of life implies control, replication and stability rather than runaway and extinction.

Are the long-term peaks in the climate power spectra replications of earlier combinations of climate which then self corrected? Is climate composed of fairly discrete combinations which repeat, as if travelling between feedback limits, much as this vegetable repeats? (after eating, also). Does climate have a constraint like the keyboard of a piano, where theme and variation can be repeated endlessly, between frequency bounds?

Those pesky geologists seem to believe that for the major portion of the last hundred million years of earth’s history, the earth’s average temperature has been from five to ten degrees centigrade warmer than it is today. Whatever the reasons, life on earth has managed, somehow or other, to survive the experience.

Re:51

Kenneth F

Gavin’s observation that we need neither temperature reconstructions nor climate models to “know we have a problem” is puzzling because it raises the intriguing question of what “problem” he is actually referring to.

At RC some time ago, when asked what might cause him to start questioning GCMs, Gavin’s comment was to the effect that we would need a decade of contrary data. Given that all relevant temperature metrics have been either flat or falling for nearly a decade now, we pretty much have that in hand, it would seem. Is that possibly his problem?

Re:65

Geoff S.

Is this an accurate representation of RC/GISS cognitive patterns?

Consider a chaotic system that has an orbit; say the orbit looks like what you get when you try to draw a circle over and over, so the orbits are bounded (close to a circle) but never quite repeat. The initial value problem diverges exponentially, but the solution always stays near the circular orbit. Thus divergence of the solution does NOT mean runaway behavior like going to infinity; it means that small errors in initial values lead to unpredictable (but bounded) trajectories. Translated to the climate system, failure of the initial value problem means that you can’t predict the weather, but does NOT mean that the model predicts 200 deg C temperatures in New York at some point, because the system is bounded by the inputs of energy at all times, which are also being dissipated. Koutsoyiannis’s results mean that the local scale orbits of the weather are wrong, and thus you should not use the models for forecasting long term impacts of the climate in Arizona or Calcutta. It does not prove that the long-term 100 year climate projection of GMT is wrong; but neither has anyone proved that this is right either.
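This picture of “exponential divergence without runaway” is exactly what the Lorenz-63 system shows. A minimal sketch (my own illustration, not a claim about any GCM): two trajectories started almost together diverge by many orders of magnitude, yet both stay on the bounded attractor.

```python
# Lorenz-63 with a simple forward-Euler step (adequate for illustration):
# nearby trajectories diverge exponentially, yet neither ever leaves the
# bounded attractor -- divergence is not runaway.
def step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)     # same start, perturbed by 1e-8
early = late = 0.0
for i in range(10000):          # 50 model time units
    a, b = step(a), step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    if i == 200:
        early = sep             # separation while still tiny
    if i >= 8000:
        late = max(late, sep)   # saturated, attractor-sized separation
    # bounded: no coordinate ever runs off toward infinity
    assert all(abs(c) < 100.0 for c in a + b)

print(early, late)
```

The separation grows enormously, but it saturates at the diameter of the attractor: the weather trajectory becomes unpredictable while the state space it can occupy stays fixed.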

RE: 66

Yes, yes…that makes sense.

As an aside, guys like Gavin need to stop using stochastic models for chaotic processes.

Re: 66 and 67

Hi Craig,

Given your comment above, why then do most (perhaps all) climate models have an “Eulerian core” which is essentially solving the atmospheric initial value problem? If climate is indeed a “boundary value” problem, then I would think that simpler models for the “weather noise” would suffice.

Of course you can’t get those colorful fluid dynamic (CFD) plots and animations without the chaotic atmosphere in the solution somewhere…

#63 — this is the fundamental problem with GCMs, IMHO. They model the climate as being a control system lacking asymptotic stability. Small perturbations in input produce unbounded outputs in the GCMs. Given all that our little planet has been through over the eons, how can any reasonable person defend a model where a tiny perturbation in one input variable (CO2 concentration) results in an unbounded output (surface temperature)? If this were the case, the planet would have had a very short lifetime.

I believe it is both an initial value and a boundary value problem. The initial value problem is needed to get it started. With a completely random initial state, it would eventually settle down to a realistic climate, but you might have to run it for thousands of (simulated) years to get the ocean to spin up correctly (with proper currents etc.). In that sense it is feasible as a boundary value problem, but it wastes resources to run it that way. The boundary value aspect arises because the system constantly loses heat, and without the constant input of solar energy (making it a pure initial value problem) it would quickly go to absolute zero temperature as heat was radiated away. The claim that the system is unstable, that it goes out of control with an input of CO2, is implausible, though Hansen pushes this. One can’t argue it is impossible, because the system was not designed per se for stability. But the earth has been through a lot over billions of years and life is still here, so negative feedbacks seem more plausible than positive.

#58 — Raven, they were only single SDs, meaning 68% confidence intervals. When the confidence interval is so much larger than the system value, it just means that one no longer knows where the system is within its bounds. That is, the limits of resolution are larger than the bounds of the system, and one no longer has any information about its internal state.

That applies to the point Craig made in #66 as well. The model doesn’t predict a 200 C climate at New York (or anyplace else), but when the physical uncertainty gets to (+/-)200 C it means the model is no longer predicting anything at all. There is no information available about the state of the system, except perhaps that it is somewhere inside the bounds. Where it is, exactly no one can know, and the projection mean is no better than a guess.

Jerry #59, thanks for your continued interest. It’s very appreciated. It’s also nice of Gavin to suppose without evidence that I’m knowingly making a bogus argument. The linear model shows that all that’s needed to reproduce the GCM global average temperature outputs that reflect GHGs is to propagate a linear forcing assumption. That being true, the cloud forcing error applied to GHG temperature propagates similarly. The SRES temperature projections are just cuts through the time-temperature plane of the GCM output phase-space. The forcing uncertainty just shows the spread of climate trajectories along the temperature axis that reflects the physical uncertainty of cloud forcing. This uncertainty can’t be neglected just because GCMs are boundary value problems. When the uncertainty becomes larger than the bound, all information is lost about the state of the climate.

It’s clear that climatologists are ignoring your work (Gavin claims it’s not particularly relevant), and so ignoring the devastating implications the unbounded exponential growth you describe has on the ability to predict future climates. It’s awful business. The physics of climate is beautiful, and modeling climate is a noble endeavor. The climate models are instruments for research into how to model climate, not engineering tools as Dan Hughes has pointed out repeatedly. The whole magnificent enterprise has been subverted by politics.

Technically the (unmodified) continuum weather and climate systems of dynamical equations are initial-boundary value problems. The initial part is the data that are used to start up the system. The boundary values are, for example, the constraint that the vertical velocity at the surface vanish. If the systems were strictly hyperbolic (no dissipation) then the initial values would play a crucial role in the long term evolution of the systems. But fluids are not inviscid, and so over longer periods the physical dissipation that drains energy from the systems and the forcing that provides energy input can become the dominant components of the long term solutions. The real problem is that the dissipation used in climate models is unphysically large and the associated parameterizations of the forcing terms are then necessarily inaccurate. Of course there is the more insidious problem of the ill posedness of the hydrostatic system and the fast exponential growth of the nonhydrostatic system, i.e. numerical methods will never converge to the solutions of the real systems.

Jerry
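Jerry’s point that dissipation and forcing come to dominate the long-term solution can be seen in the simplest possible setting. A sketch (my toy linear equation, not the climate system): with damping, solutions from wildly different initial states converge onto the forcing-determined trajectory; remove the damping and the initial condition is never forgotten.

```python
# Toy damped, forced scalar ODE, x' = -k*x + F(t), integrated with Euler.
# With damping the initial condition is forgotten; without it, the gap
# between two starting states persists forever. Illustration only.
import math

def run(x0, damping, steps=2000, dt=0.01):
    x = x0
    for i in range(steps):
        forcing = math.sin(0.5 * i * dt)
        x += dt * (-damping * x + forcing)
    return x

# strong damping: two far-apart starts end up essentially together
a = run(0.0, damping=2.0)
b = run(100.0, damping=2.0)

# no damping: the gap between the same two starts never closes
c = run(0.0, damping=0.0)
d = run(100.0, damping=0.0)

print(abs(a - b), abs(d - c))
```

This is the sense in which a dissipative system behaves like a boundary value (forcing-dominated) problem over long times; the dispute above is over whether the dissipation in the models is physically realistic, not over this mechanism.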

69 MPaul

The climate system to me looks like a cross-coupled positive-feedback oscillator. It is unstable in most of its range, but the oscillator has asymptotic stability because the voltage can’t go below V = 0 or above Vs, the supply voltage.

Something seems to stop the climate from getting colder than the bottom of the ice ages, and something else triggers a cooling move to the next ice age when it gets too warm.

Craig Loehle #66,

you say that Koutsoyiannis’s results do “not prove that the long-term 100 year climate projection of GMT is wrong–but neither has anyone proved that this is right either”.

Climate is not just T2m, and even less is it GMT.

Climate must be regarded as the ensemble of all meteorological, oceanic, etc. variables for each point (3D) of the globe, and what AOGCMs are required to do is not merely a projection of GMT but to correctly forecast the multidecadal distribution of each meteorological (etc.) variable.

So, if you prove that a model cannot forecast the distribution of just one or two variables in a station-by-station approach (I think more than 8 stations are required), then you have proved the models are wrong.

The position of those who say that models can correctly simulate just one unphysical variable (GMT) doesn’t hold.

By the way, regarding current climate models, which treat a boundary condition problem whereas the climate system, at least on a multidecadal time frame, is a nonlinear system, Roger Pielke has a related thread on today’s weblog.

But it wasn’t always that way, and won’t be forever, chances are. The current positions of the continents are probably the main reason it’s like it is now. So in 30 or 40 million years there may be no more ice ages, or nothing but ice ages.

Re: 70

I agree that the climate system is driven by boundary conditions and various “forcings”, and thus over long time periods responds as a boundary value problem (i.e. initial conditions become less relevant as time proceeds). However, one problem I see with this in a numerical context is that many of the boundary conditions are themselves functions of time, and worse yet functions of the dependent variables you are solving for (particularly temperature). Will perturbations in the boundary conditions or numerical errors in the solution grow non-linearly and cause the solution to wander unpredictably? I have noticed that many of the models employ spatial filters, “mass fixers”, “energy fixers” and the like to keep the solutions stable. How do these ad hoc treatments affect temporal accuracy over long integration times? Maybe there’s enough numerical diffusion in the algorithms to keep them from becoming unstable regardless of the initial or boundary conditions!

Frank

Sam – The “Pacific” union is already on strike, along with the “Indian” Union. The “Atlantic” Union is voting right now. The “Solar” Union was locked out and fired years ago. 😉

Danged unions. Them and their lack of reconciliation with the models.

😀

75 JeffA

I won’t argue with that. If I wanted to, I guess we would have a lot of difficulty proving the other wrong.

Paolo M.: let me give an example. Forest growth simulators model many species, with birth, growth, and death of individual trees. Even with exact initial conditions, the composition of trees on any individual plot of ground can’t be predicted, but over long time periods the average tree mass and species assemblage often can. Thus just because you can’t predict the weather (local tree composition) does not mean the long-term aggregate behavior can’t be predicted. Same with climate, as defenders of GCMs argue, such as Gavin (hear that, Gavin?). However, if your model for clouds is wrong (e.g., positive feedback instead of negative), then that is a different matter altogether.
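The forest example can be mimicked in a few lines (a toy of my own, not a real forest simulator): each simulated plot follows noisy logistic growth, so no individual plot is predictable, but the ensemble mean settles near the carrying capacity.

```python
# Noisy logistic growth toward a carrying capacity K: individual
# trajectories wander unpredictably, but the ensemble mean is stable.
# All numbers are illustrative.
import random

random.seed(2)
K = 50.0  # hypothetical equilibrium "tree mass" per plot

def plot_mass(steps=300):
    """One simulated plot: logistic growth plus per-step noise."""
    m = 5.0
    for _ in range(steps):
        m += 0.1 * m * (1.0 - m / K) + random.gauss(0.0, 0.5)
        m = max(m, 0.0)
    return m

masses = [plot_mass() for _ in range(500)]
mean_mass = sum(masses) / len(masses)

print(mean_mass)  # close to K even though single plots differ
```

The caveat in the comment above still applies: this averaging only works if the restoring dynamics (here the logistic term; in a GCM, the cloud feedback) have the right sign and magnitude.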

Here’s a recent publication that reinforces Demetris’ results. An extremely simple univariate statistical model called IndOzy does as good a job at predicting ENSO as extremely complex GCMs. The authors, Halide and Ridd, make striking observations about what this says about the utility and reliability of the current crop of climate models. Here’s the reference and the abstract:

H. Halide and P. Ridd (2008) “Complicated ENSO models do not significantly outperform very simple ENSO models” Int’l J. Climatology 28(2), 219-233. doi: 10.1002/joc.1519

Abstract: “An extremely simple univariate statistical model called IndOzy was developed to predict El Niño-Southern Oscillation (ENSO) events. The model uses five delayed-time inputs of the Niño 3.4 sea surface temperature anomaly (SSTA) index to predict up to 12 months in advance. The prediction skill of the model was assessed using both short- and long-term indices and compared with other operational dynamical and statistical models. Using ENSO-CLIPER (climatology and persistence) as benchmark, only a few statistical models including IndOzy are considered skillful for short-range prediction. All models, however, do not differ significantly from the benchmark model at seasonal Lead-3-6. None of the models show any skill, even against a no-skill random forecast, for seasonal Lead-7. When using the Niño 3.4 SSTA index from 1856 to 2005, the ultra simple IndOzy shows a useful prediction up to 4 months lead, and is slightly less skillful than the best dynamical model LDEO5. That such a simple model as IndOzy, which can be run in a few seconds on a standard office computer, can perform comparably with respect to the far more complicated models raises some philosophical questions about modelling extremely complicated systems such as ENSO. It seems evident that much of the complexity of many models does little to improve the accuracy of prediction. If larger and more complex models do not perform significantly better than an almost trivially simple model, then perhaps future models that use even larger data sets, and much greater computer power, may not lead to significant improvements in both dynamical and statistical models. Investigating why simple models perform so well may help to point the way to improved models. For example, analysing dynamical models by successively stripping away their complexity can focus in on the most important parameters for a good prediction.”

Pat, I’m sure research funding agencies might be interested in the implications of H. Halide and P. Ridd (2008), but those directly involved in the operation of GCMs won’t.
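The IndOzy idea, predicting an index from a handful of its own lagged values, is simple enough to sketch (my toy implementation on synthetic data, not the authors’ code or the real Niño 3.4 series):

```python
# Lagged least-squares prediction in the spirit of IndOzy: five delayed
# inputs of an index predict the index a few steps ahead. The "index"
# here is a synthetic quasi-periodic series, a hypothetical stand-in
# for the Nino 3.4 SSTA data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600)
series = np.sin(2 * np.pi * t / 48.0) + 0.1 * rng.standard_normal(600)

n_lags, lead = 5, 3  # five delayed-time inputs, 3-step-ahead forecast
rows = range(n_lags, len(series) - lead)
X = np.array([series[i - n_lags:i] for i in rows])  # lagged windows
y = np.array([series[i + lead] for i in rows])      # future values

split = 400  # fit on the first part, check skill on the held-out rest
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

skill = float(np.corrcoef(pred, y[split:])[0, 1])
print(round(skill, 3))
```

On this toy series a few lags already capture most of the predictable signal, which is the paper’s point: added model complexity buys surprisingly little forecast skill.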

BTW, I very much enjoyed your most recent Skeptic article.

#72 – Jerry, what happens if a constant forcing physical uncertainty, e.g., of clouds, enters into the step-wise climate calculation at every step? Essentially, an initial value-like uncertainty is present in every intermediate climate that initializes the subsequent climate, during a time-wise, step-wise climate futures projection. Wouldn’t this continual forcing uncertainty cause an increasingly large uncertainty of outcome over long periods?

Thanks for your discussion in #72. Every time you post, I learn at least one important thing — usually several.

Dover #82, thanks.

There is a big push going forward to get funding for supercomputers for climate forecasting, as you can see in today’s Nature editorial:

Nature 453, 257 (15 May 2008) | doi:10.1038/453257a; Published online 14 May 2008

and

Perhaps most importantly:

Why do I get the feeling the editorial writer has not read Halide and Ridd (2008)?

If the great GCMs are not good enough to reproduce atmospheric temperature profiles (Douglass), or regional changes (Koutsoyiannis), did not predict the current 10-year pause in global warming, and cannot predict ENSO any better than a trivially simple model (Halide and Ridd), then just what ARE they good for?

Pat Keating (May 14th, 2008 at 12:52 pm, #73):

Are you aware of the paper:

“A new dynamical mechanism for major climate shifts”

Anastasios A. Tsonis, Kyle Swanson, & Sergey Kravtsov

Department of Mathematical Sciences, Atmospheric Sciences Group, University of Wisconsin-Milwaukee, Milwaukee WI 53201-0413

Abstract: We construct a network of observed climate indices in the period 1900–2005 and investigate their collective behavior. The results indicate that this network synchronized several times in this period. We find that in those cases where the synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability. The latest such event is known as the great climate shift of the 1970s.

We also find the evidence for such type of behavior in two climate simulations using a state-of-the-art model. This is the first time that this mechanism, which is consistent with the theory of synchronized chaos, is discovered in a physical system of the size and complexity of the climate system.”

They use a neural net on coupled atmospheric and ocean circulations.

links :

http://www.agu.org/pubs/crossref/2007/2007GL030288.shtml

preprint:

http://www.uwm.edu/~kravtsov/downloads/GRL-Tsonis.pdf

Interesting.

The Tsonis paper in anna v’s #86 expresses mathematically some of the thrust that I was trying to highlight visually in #63. No doubt it will be debated, but hypothesising, one might ask

‘Can climate be represented by a manageable number of “pockets” or “states” or “synchronicities”, like the chords on a piano, composed of a mixture of notes, roaming ever between limits but repeating in a familiar melody?’

If so, it has the potential to make modelling simpler and so reduce the petaflops referenced by another Geoff in #84.

re 86

Tsonis et al. is an interesting paper. The velocity inversion (v → -v) has the same effect as time inversion (t → -t).

We see this also in Ghil et al. (2008).

The Shilnikov phenomenon is a problematic issue for the GCMs as they stand.

Craig Loehle #80,

with all due respect, what modelers say is not revealed truth, and I think we are not obliged to follow their reasoning.

As far as the multidecadal forecast is concerned, you can’t currently predict the regional state of the climate system a few decades ahead. As a consequence, you can’t predict a global state, nor the value of that strange and useless parameter named GMT.

Knowing that the system is bounded on one side by the temperature of boiling water and on the other by absolute zero doesn’t add anything worth considering.

86, 87 anna v

No I wasn’t aware of the Tsonis et al paper. Thanks for citing it.

Craig #80 and Paolo M #91

I think Craig is just saying that GCM defenders such as Gavin argue that although we can’t figure out short-term events, we can get a better handle on long term aggregate events. Or that on a macro level, certain overall behaviors can be figured out to some extent.

(Now what that macro size has to become (in both time and space) and how correct it is and what use it is to us, those are separate issues.)

And as for the bit about clouds and getting them correct: on the other hand, others question whether the bits and pieces that go into figuring out the long term aggregate are correct in their extent and sign in all cases (clouds, aerosols, particulates, water vapor, lapse rate, troposphere versus stratosphere), averaging highly variable data together into one system. Which is part of the business of qualifying and quantifying the models.

It all boils down to this: whatever ability any model has to tell us something, it will not be about details; it will be over a period of years at some large scale. I would think the shorter the period, the larger the scale must be. One only has to look at the anomaly itself to see there is often a large year-to-year difference, but clearly the long term trend is up.

What the anomaly means in reality is a different story, but the anomaly data is higher now than it was before.

Whether the models get what the anomaly will do in the future correct is another question also.

Sam: thanks. Paolo: I am NOT defending the models. Certainly when you see results like Douglass et al. and many other results that don’t match, it is no longer obvious that the models work. I am just saying that if GMT trends over 100 years are all you want, the models could miss El Niños and local weather badly and still average out OK globally. They MIGHT do this, but it has not been proved that they do it properly in the case of adding GHG and land use change in.

You’re welcome, Craig. I’d say that over a long enough time, a model or ensemble that is close to reality will average out regardless of short term trends or specific events. However, without taking land-use (and all the other “population <-> technology cycle” side effects) into account, the models have to rely upon GHG, which in my opinion is little or none of whatever the overall effect is.

We have the case where ModelE, built to reflect a world with GHGs, tells us it can’t reflect a world without them (duh). In the real, practical world, there’s water vapor, water phase changes, clouds, wind, tropospheric variation in environmental and adiabatic lapse rates, insolation variations, a magnetic field, a molten core, human-constructed albedo changing (and wind and weather pattern modifying) structures over vast areas, farms, farm animals, freeways and the like. And 7 billion people. To think just the road and highway and freeway systems all over the NH don’t modify weather patterns and change heat absorption behaviors a great deal is rather short-sighted, IMO.

And even if you model the anomaly correctly it still may not mean anything physical.

(I’m sure you’re aware that I do not take it for granted the anomaly and its anomaly trend is a valid proxy for energy levels, or the concept of a global average particularly physical or meaningful in and of itself in the first place.)

Sam Urbinto #93

I think you missed what I said, i.e. I mentioned “multidecadal distribution of each meteorological variable”, that is the definition of climate. I’m not dealing with weather, so I agree with Gavin too.

I don’t agree with modelists, and those of you who endorse modelist reasoning, when they say that, by using numerical modeling (AOGCMs), they were able to correctly simulate GMT during the last few decades and that, therefore, the models are skillful or, simply, good in their predictions.

But the climate system is not just GMT, and you can’t choose the one physical variable you claim is correctly predicted and dismiss the others.

A model predicts all variables of your physical system and your climate model is good if all “multidecadal distributions of each variable” are well simulated.

Would you accept a weatherman who makes good predictions of wind direction only and is wrong about precipitation, temperature, wind speed, cloud cover and type, etc.?

Or what about a weather model doing good prediction of fog only?

And what about a climate model that predicts GMT? Do you have to look at the other variables or not?

If the climate community wants to use AOGCMs (whose atmospheric part is represented by common weather models), all variables (their multidecadal distributions) must be correct, and on a local scale.

I’m not really impressed by how well the models predict GMT.

I find it interesting that these arguments by Koutsoyiannis, Tsonis, and Ghil are precisely the arguments I have been putting to Gavin Schmidt since 2006. Why he dodged them then I cannot say. But I would watch now for the RC spin on these papers. The pressure is mounting for them to account.

Paolo:

I promise you that the climate in August 1930 in Havana Cuba was about the same as it was in August 2005 give or take a few degrees. And that it probably didn’t snow either of those months. The tricky thing is getting any more specific than that.

Pat Frank (#83),

If the basic dynamical system is well posed (which is the case for the unmodified, compressible, Navier-Stokes system, but not the hydrostatic system), then the error equation for the perturbation satisfies a variable-coefficient linear system of equations (in this case an incomplete parabolic system) for a period of time. The length of time depends on a number of things, including the size and source of the perturbation. For example, if the error is from an external forcing such as the sun, then the error equation involves no initial conditions, but does involve an integral of the perturbation error in the forcing. If on the other hand the error is in the initial conditions, then the error equation only involves the perturbation in the initial conditions.

For longer periods of time the linear system of error equations cannot be ensured to apply unless the error is so small that the linear system continues to apply.

The thing that interested me about your results is that the perturbation in the forcing (GHG) led to a linear growth in the solution. Normally such a linear growth is due to resonance, i.e. the forcing is of a very special type that magnifies certain natural modes of the system.

Jerry
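The resonance mechanism Jerry mentions is easy to demonstrate with a toy forced oscillator (my own illustration, not anything from Pat Frank's manuscript): force x'' + w²x = ε·cos(wt) exactly at the natural frequency, and the response envelope grows linearly in time, matching the closed-form resonant solution x(t) = (ε/2w)·t·sin(wt).

```python
import numpy as np

w, eps = 2.0, 0.1   # natural frequency; amplitude of the forcing "error"

# x'' + w^2 x = eps*cos(w t): forcing exactly at the natural frequency (resonance)
def rhs(t, y):
    x, v = y
    return np.array([v, -w**2 * x + eps * np.cos(w * t)])

# Plain RK4 integrator with a small step, zero initial conditions
dt, n = 0.001, 100_000          # integrate to t = 100
y = np.array([0.0, 0.0])
ts = np.arange(n + 1) * dt
xs = np.empty(n + 1)
xs[0] = y[0]
for i in range(n):
    t = ts[i]
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    xs[i + 1] = y[0]

# Exact resonant solution: the envelope grows linearly in time
exact = eps / (2 * w) * ts * np.sin(w * ts)
print(np.max(np.abs(xs - exact)))   # tiny: the numeric run tracks t*sin(wt)
```

A non-resonant forcing frequency would instead give a bounded, oscillating error; it is only the special resonant case that produces the secular linear growth.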

Pat Frank (#83),

There is an attack on your manuscript by Anthony Kendall (#127) on RealClimate. I am responding, but thought you might also want to reply.

Jerry

Pat (#83),

Here is my response.

Anthony Kendall (#127),

Let us take your statements one at a time so that there can be no obfuscation.

Is the simple linear equation that Pat Frank used to predict future climate statistically a better fit than the ensemble of climate models? Yes or no.

Are the physical components of that linear equation based on arguments from highly reputable authors in peer-reviewed journals? Yes or no.

Is Pat Frank’s fit better because it contains the essence of what is driving the climate models? Yes or no.

Are the models a true representation of the real climate given their unphysically large dissipation and subsequent necessarily inaccurate parameterizations? Yes or no.

Does boundedness of a numerical model imply accuracy relative to the dynamical system with the true physical Reynolds number? Yes or no.

Given that the climate models do not accurately approximate the correct dynamics or physics, are they more accurate than Pat Frank’s linear equation? Yes or no?

What is the error equation for the propagation of errors for the climate or a climate model?

Jerry

Pat (#83)

My original post on RC disappeared, so I reposted, and that too seems to have disappeared. Isn’t that cute. If the post doesn’t appear, it tells me quite a bit about the “scientists” at RC.

Jerry

#102,

This kind of deletion should be gathered and posted to http://www.realclimatesecrets.org/

I mentioned verification r2 in this thread http://www.realclimate.org/index.php?p=507 , criminal activity, comment deleted (I think Jean S met the red button as well, after #46). Imagine if I had mentioned multivariate calibration inconsistency diagnostic, R, I’d be in jail!

#103,

http://www.realclimatesecrets.org/

just returns “cannot be found”

Well the good news is that there was a response to the questions I posted.

The interesting news is that they did not answer the questions with a yes or no as I requested, but with the usual obfuscation.

# Gerald Browning Says:

15 May 2008 at 11:28 PM

Anthony Kendall (#127),

Let us take your statements one at a time so that there can be no obfuscation.

Is the simple linear equation that Pat Frank used to predict future climate statistically a better fit than the ensemble of climate models? Yes or no.

[Response: No. There is no lag to the forcing and it would only look good in the one case he picked. It would get the wrong answer for the 20th Century, the last glacial period or any other experiment. – gavin]

Are the physical components of that linear equation based on arguments from highly reputable authors in peer-reviewed journals? Yes or no.

[Response: No. ]

Is Pat Frank’s fit better because it contains the essence of what is driving the climate models? Yes or no.

[Response: If you give a linear model a linear forcing, it will have a linear response which will match a period of roughly linear warming in the real models. Since it doesn’t have any weather or interannual variability it is bound to be a better fit to the ensemble mean than any of the real models. – gavin]

Are the models a true representation of the real climate given their unphysically large dissipation and subsequent necessarily inaccurate parameterizations? Yes or no.

[Response: Models aren’t ‘true’. They are always approximations. – gavin]

Does boundedness of a numerical model imply accuracy relative to the dynamical system with the true physical Reynolds number? Yes or no.

[Response: No. Accuracy is determined by analysis of the solutions compared to the real world, not by a priori claims of uselessness. – gavin]

Given that the climate models do not accurately approximate the correct dynamics or physics, are they more accurate than Pat Frank’s linear equation? Yes or no?

[Response: Yes. Stratospheric cooling, response to Pinatubo, dynamical response to solar forcing, water vapour feedback, ocean heat content change… etc.]

What is the error equation for the propagation of errors for the climate or a climate model?

[Response: In a complex system with multiple feedbacks the only way to assess the effect of uncertainties in parameters on the output is to do a Monte Carlo exploration of the ‘perturbed physics’ phase space and use independently derived models. Look up climateprediction.net or indeed the robustness of many outputs in the IPCC AR4 archive. Even in a simple equation with a feedback and a heat capacity (which is already more realistic than Frank’s cartoon), it’s easy to show that error growth is bounded. So it is in climate models. – gavin]

Jerry

Pat (#83),

Do you want to respond to the nonsense?

Jerry

Well this is probably futile, but for what it is worth here is my response to Gavin.

Well, I addressed Anthony Kendall’s comment (#127), and it appears to have been answered by Gavin. A rather interesting set of circumstances. Now let us see why the responder refused to answer the direct questions with a yes or no as asked.

Is the simple linear equation that Pat Frank used to predict future climate statistically a better fit than the ensemble of climate models? Yes or no.

[Response: No. There is no lag to the forcing and it would only look good in the one case he picked. It would get the wrong answer for the 20th Century, the last glacial period or any other experiment. – gavin]

So in fact the answer is yes in the case that Pat Frank addressed as clearly shown by the statistical analysis in Pat’s manuscript.

Are the physical components of that linear equation based on arguments from highly reputable authors in peer-reviewed journals? Yes or no.

[Response: No. ]

The references that Pat cited in deriving the linear equation are from well known authors and they published their studies in reputable scientific journals.

So again the correct answer should have been yes.

Is Pat Frank’s fit better because it contains the essence of what is driving the climate models? Yes or no.

[Response: If you give a linear model a linear forcing, it will have a linear response which will match a period of roughly linear warming in the real models. Since it doesn’t have any weather or interannual variability it is bound to be a better fit to the ensemble mean than any of the real models. – gavin]

Again the correct answer should have been yes. If the linear equation has the essence of the cause of the linear forcing shown by the ensemble of models and is a better statistical fit, the science is clear.

Are the models a true representation of the real climate given their unphysically large dissipation and subsequent necessarily inaccurate parameterizations? Yes or no.

[Response: Models aren’t ‘true’. They are always approximations. – gavin]

The correct answer is no. A simple mathematical proof on Climate Audit shows that if a model uses an unphysically large dissipation, then the physical forcings are necessarily wrong. This should come as no surprise, because the nonlinear cascade of the vorticity is not physical. Williamson et al. have clearly demonstrated that the parameterizations used in the atmospheric portion of the NCAR climate model are inaccurate and that the use of the incorrect dissipation leads to the wrong cascade.

Does boundedness of a numerical model imply accuracy relative to the dynamical system with the true physical Reynolds number? Yes or no.

[Response: No. Accuracy is determined by analysis of the solutions compared to the real world, not by a priori claims of uselessness. – gavin]

The answer should have been no, but the caveat is misleading given Dave Williamson’s published results and the simple mathematical proof cited.

Given that the climate models do not accurately approximate the correct dynamics or physics, are they more accurate than Pat Frank’s linear equation? Yes or no?

[Response: Yes. Stratospheric cooling, response to Pinatubo, dynamical response to solar forcing, water vapour feedback, ocean heat content change… etc.]

The correct answer is obviously no. All of those supposed bells and whistles in the presence of inappropriate dissipation and inaccurate parameterizations were no more accurate than a simple linear equation.

What is the error equation for the propagation of errors for the climate or a climate model?

[Response: In a complex system with multiple feedbacks the only way to assess the effect of uncertainties in parameters on the output is to do a Monte Carlo exploration of the ‘perturbed physics’ phase space and use independently derived models. Look up climateprediction.net or indeed the robustness of many outputs in the IPCC AR4 archive. Even in a simple equation with a feedback and a heat capacity (which is already more realistic than Frank’s cartoon), it’s easy to show that error growth is bounded. So it is in climate models. – gavin]

The problem is that Monte Carlo techniques assume random errors. Pat Frank has shown that the errors are not random and in fact highly biased. If you run a bunch of incorrect models, you will not obtain the correct answer.

Locally, errors can be determined by the error equation derived from errors in the dissipation and parameterizations. Given that these are both incorrect, one cannot claim anything about the results from the models.

I continue to wait for your proof that the initial-boundary value problem for the hydrostatic system is well posed, especially given the exponential growth shown by NCAR’s Clark-Hall and WRF models.

Jerry
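Jerry's point above about Monte Carlo assuming random errors can be illustrated with a tiny numeric experiment (the numbers are hypothetical, chosen purely for illustration): independent zero-mean errors average away as the ensemble grows, but a systematic bias shared by every run survives any amount of averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 1.0        # the "true" value the ensemble is trying to estimate
n_models = 1000

# Case 1: independent zero-mean errors -- the ensemble mean converges to the truth
random_runs = truth + rng.normal(0.0, 0.5, n_models)

# Case 2: every run shares a common systematic bias; averaging cannot remove it
bias = 0.4
biased_runs = truth + bias + rng.normal(0.0, 0.5, n_models)

print(random_runs.mean())   # close to 1.0
print(biased_runs.mean())   # close to 1.4, no matter how large n_models gets
```

Growing the ensemble shrinks only the random component (like 1/√n); the bias term is untouched, which is the distinction between a Monte Carlo spread and a systematic model error.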

Interesting. I assume he’s referring to model runs when he says “experiment.”

What’s this Realclimatesecrets.org? I get the “can’t display this page” message, even when I enter it manually.

#106, Jerry thanks so much for your advocacy. Your effort is greatly appreciated. RealClimate is a snakepit, and is known for lack of fairness and its censorship. I’m reluctant to subject myself to their peculiar brand of adjudication.

You can see from the conversation I’ve had here that there is little patience with my views, even on a blog where a principled editor has control

http://initforthegold.blogspot.com/2008/03/why-is-climate-so-stable.html

But I’ll take a look at RealClimate this weekend and see if I can do anything.

Re #108:

I think the suggestion of a “realclimatesecrets.org” was tongue-in-cheek to suggest a place where deleted comments from realclimate.org should go.

The real reason for the disparagement is that Gavin is too lazy to construct a proper argument and relies on “voice of god” symbolism and short putdowns to get his own way.

It doesn’t wash.

Pat Frank showed me his proposed manuscript a few months ago and I grasped the point he was making about error-propagation immediately, but I’m not a PhD so who knows what appalling process happens to them that they cannot understand simple well-posed arguments any more?

The climate models are beguiling because of the human belief engine that gives massive positive feedback to random data that “looks” or “smells” correct, and because climate modellers spend about five minutes considering errors before deciding that they’re too trivial to be concerned with.

What Pat Frank has shown in a simple, comprehensible way is that the growth of errors beyond a few months is so large that the projection is literally meaningless – there is no information in that curve. It’s then a belief system to credit that curve with predictive power, not mathematical rigor.

The climate models do to the future what the Hockey Stick does to the past: produce a beguiling curve while hiding the fact that the errors in the calculation are so large that the curve means literally nothing.
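A back-of-envelope version of this error-propagation point can be sketched numerically. The numbers below are mine and purely hypothetical (not Pat Frank's actual values): a projected trend of +0.03 units per step against a ±0.5 per-step theory error, accumulated in quadrature (the root-sum-square rule for independent per-step errors). The trend stays inside the uncertainty envelope for hundreds of steps.

```python
import numpy as np

# Hypothetical per-step numbers, for illustration only
trend_per_step = 0.03     # projected signal added each step
sigma_per_step = 0.5      # theory error introduced at each step

steps = np.arange(1, 2001)
signal = trend_per_step * steps
# Root-sum-square accumulation of independent per-step errors: grows like sqrt(n)
envelope = sigma_per_step * np.sqrt(steps)

# The projection carries information only once the signal exceeds the envelope
emerges_at = int(steps[signal > envelope][0])
print(emerges_at)
```

Analytically the crossing is at n = (sigma/trend)² ≈ (0.5/0.03)² ≈ 278 steps; until then the projected curve is smaller than its own propagated uncertainty. (If the per-step errors were systematic rather than independent, the envelope would grow linearly and the trend would never emerge at all.)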

Has anyone asked Gavin about Koutsoyiannis et al? That should be entertaining.

Pat (#109),

I hope I haven’t misrepresented your results in any way. I have tried to be careful to state the results of your manuscript in the correct way, but if I messed up blame it on me!

Jerry

Memorable phrase, but maybe a bit harsh? “Human pattern recognition engine” might be closer to the mark in the present context. It’s extremely powerful, very useful most of the time, but also extremely seductive.

I thought all Science majors had to take the “Guarding Against Pseudo-Patterns 101” course. Maybe not Climatology majors.

#110 I asked about GCM error propagation at RC more than two years ago, using the same language Pat Frank uses. The question was ignored. I was told the physics was “solid”. Which is, of course, a dodge. I have yet to read Frank’s article, but I have no doubt his conclusion is qualitatively correct: error propagation is a serious problem that has been swept under the carpet.

Michael Tobis at initforthegold:

It would appear that Michael has taken his ball and gone home.

I just made the following observation to a general board and would like a comment if somebody thinks I am off. Thanks.

There are many types of computer models, as many as the theories they are trying to check.

Climate models are learning tools, in the way that a computer model entered into a robot will be used for the robot to learn how to walk, talk, navigate the human world generally: one step at a time, continually feeding back information and adjusting the parameters. That is what the climate modelers have been doing with their programs. This is a useful program for a robot, and eventually it will learn to clean the house and open the doors etc. It will never write a competing theory of relativity. More simply, it will not be able to predict outside the door more than a step at a time.

Physics theoretical computer models (and I believe engineering ones too) never feed back and change parameters after publishing, unless it is a new publication that points out the errors in the previous version. Continuous updating would not validate the theory behind the computer programs; it would just show that the programs have too many parameters and are trivial, with no predictive power, never mind the highfalutin-sounding equations used.

Thus I am saying that climate models are useful a step at a time, because they are a learning experience for the climate community. Defining a step as bigger than a few days and turning them into predictive tools is a farce as far as rigorous testing of theory versus data is concerned, because the method is still a learning tool method and not a research tool method.

John A,

yes. The problem would be to prove that the comment in question was really submitted to RC, but gavin could always correct erroneous posts. Or someone could try to re-submit the comment, if he/she thinks that this kind of comment will not be deleted. Blog-format would be needed, naturally.

We had a thread early on for comments that failed to get through RealClimate censorship, but the server was underpowered to handle the load and we had to stop.

It’s still underpowered today.

My response (#182) to a hand waving argument by Ray Ladbury (#178) on RC.

Hank,

I would like to comment on Ray’s lack of understanding of mathematics in his following comment.

Hank, I’m aware of Browning, and what he has cannot be correctly characterized as understanding. I notice that he scrupulously avoided the issue of whether CO2 is a greenhouse gas–and all the other physics. The issue is not whether the models work–their success speaks for itself. The thing that bothers me about Browning et al. is that they completely lose sight of the physics by getting lost in the details of the models. If you are a “skeptic,” the models are your best friends–they’re really the only way we have of limiting the risk we face. That the planet is warming is indisputable. That CO2 is behind that warming is virtually beyond doubt. What is open to dispute is how much harm will come of that. If we were to limit ourselves to the worst-case from paleoclimate, the damage to human society is effectively unlimited. The models tell us where to worry. They give us ideas of how much we have to limit emissions and how long we have to do it. They make it possible to go from alarm to cost-effective mitigation. If the models were to be demonstrated unreliable, they are more likely to err on the conservative side. We still have large risks, but now they are unquantifiable. Ask Warren Buffet if he’d prefer a risk that is imperfectly bounded to one that is unbounded and see which one he’ll take.

I’m sorry, but I don’t attach a lot of value to technical prowess when it is divorced from the context (physical and societal) of what is being modeled.

It is well known in mathematics that if an initial-boundary value problem for a time-dependent partial differential equation is not properly posed, i.e. it is ill posed, then there is no hope of computing the solution in the continuum, and certainly not with any numerical method. The reasons that the ill posedness of the hydrostatic system that is the basis of all the atmospheric components of the current climate models has not yet been seen are as follows.

The climate models are only resolving features greater than 100 km in size, i.e. they are not resolving mesoscale storms, hurricanes, fronts, etc. These are certainly important to any climate. How is it possible that the climate models are able to run when not resolving these features? The answer is by using unphysically large dissipation that prevents the small-scale features from forming. Thus the model is not physically realistic as claimed by Ray, and the forcing terms are necessarily inaccurate in order to overcome the unphysically large dissipation (energy removal). Runs by Dave Williamson at NCAR have shown the inaccuracy of the spatial spectrum when using unphysically large dissipation and have also shown that the forcing terms (parameterizations) are not physically accurate (references available on request). Thus the models are not accurately describing the continuum dynamics or physics (forcing), i.e. the numerical solutions are not close to the continuum solution of the hydrostatic system.

Runs by Lu et al of the NCAR Clark-Hall and WRF models have also shown that as soon as finer numerical meshes that resolve the smaller scale features are used, fast exponential growth appears even in the well posed nonhydrostatic models (reference available on request). In the case of the hydrostatic system, meshes of this size will show the unbounded exponential growth typical of ill posedness (see numerical runs on Climate Audit under the thread Exponential Growth in Physical Systems).

Thus hydrostatic climate models are currently so far from the real solution of the hydrostatic system that they are not showing the unbounded exponential growth. And the numerical gimmick that is used to run the models unphysically removes energy from the solution at too fast a rate, i.e. it is not physically accurate.

So CO2 has increased, but climate models are not close to reality so adding forcing terms (physics) at this stage or later when the unbounded exponential growth appears is nonsense.

Climate Audit has shown that the global measurement stations are questionable (to say the least), and the numerical climate models are inaccurate and always will be. So the arguments for AGW are not scientific, but hand waving. I have nothing to gain in this argument (I am retired), but RC has lots to lose in terms of funding.

Jerry
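Jerry's dissipation argument can be caricatured in Fourier space: under a diffusion term, each wavenumber k decays like exp(-ν k² t), so an eddy viscosity chosen large enough to keep a coarse model stable wipes out the small-scale end of the spectrum almost entirely. The numbers below are hypothetical, chosen only to show the effect, not taken from any model.

```python
import numpy as np

k = np.arange(1, 65)            # wavenumbers (small k = large scales)
E0 = k ** (-5.0 / 3.0)          # an initial Kolmogorov-like energy spectrum
t = 1.0

def small_scale_fraction(nu):
    """Fraction of energy left in the small scales (k > 32) after damping."""
    E = E0 * np.exp(-2.0 * nu * k**2 * t)   # each mode's energy decays as exp(-2 nu k^2 t)
    return E[k > 32].sum() / E.sum()

for nu in (1e-4, 1e-2):   # modest vs unphysically large dissipation
    print(nu, small_scale_fraction(nu))
```

With the small ν the small-scale modes survive; with the large ν their share of the energy drops by many orders of magnitude, which is the sense in which over-dissipation "prevents the small-scale features from forming."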

114 (AW) That’s a revelatory thread Michael Tobis shut down over there. Good show, Pat.

===========================================

RE 118 (gerry)

This from Dymnikov (2006) (Mathematical problems of climate)

Judith Curry is, herself, skeptical of the vertical convection parameterizations. These, I assume, relate to features that arise within the 100km resolution mentioned by Jerry.

I am glad to see Hank Roberts and Ray Ladbury’s physics is so “rock solid”. And Gavin Schmidt’s application of statistics so skillful. Their under-representation and appreciation of model error will be their downfall. It didn’t have to be that way. Alas. Everyone’s loss.

Koutsoyiannis et al. raise questions about the uncertainty of GCMs in matching the reality of observables, both in prediction and in retrodiction (hindcasting).

The divergence from reality (from unknown quantities of “natural variation”) at present is one level of uncertainty.

The level of mathematical skill of the “proprietors” of “state of the art” GCMs is another, as Dymnikov observes.

Another problem area is the parametrization of external forcing: what is the “normal mean climatic state” and what is an “abnormal state”?

Ghil et al. also raise some interesting questions.

Near- and far-from-equilibrium states and phase transitions from LTP regimes are significant (non-trivial) mathematical problems for externally forced GCMs.

Ruzmaikin puts this quite succinctly.

We saw an early response from HADCRU in updating the ensemble (sample) SST due to more accurate data from the Argo project (late last year), and in other papers published this year. (This is a correct methodology, as the extrapolation of historical SST ensembles has high levels of uncertainty and inaccuracy.)

On the other side of the table, Humbert argues there must be a problematic issue with Argo, as yet unidentified and similar to the initial issues with MSU, but that these are unimportant, as he says.

A statement that will probably “keep the poets happy,” but not without problems, as Jaynes notes.

re 114. calvinball

#122 Does Ghil read CA?

re 124

Unsure. I recommend his Lorenz Lecture.

http://www.atmos.ucla.edu/tcd/NEWS/BDEs-LorenzLecture.pdf

http://www.atmos.ucla.edu/tcd/NEWS/Lorenz_AGU05_v6bis.ppt

It seems RC agrees: there is no consensus on ENSO variability.

Indeed the reality is quite observable in slide 21 of the Ghil Lorenz lecture, where overall the climate system returns to its previous state.

All,

A response to Gavin

Gavin (#182),

> [Response: The argument for AGW is based on energy balance, not turbulence.

So mesoscale storms, fronts, and turbulence are now classified as turbulence. Oh my.

> The argument existed before GCMs were invented, and the addition of dynamical components has not provided any reason to adjust the basic picture.

So why have so many computer resources been wasted on models if the “proof” already existed? A slight contradiction.

> As resolution increases more and finer spatial scale processes get included, and improved approximations to the governing equations get used (such as moving to non-hydrostatic solvers for instance).

I have published a manuscript on microphysics for smaller-scale motions, and they are just as big a kluge as the parameterizations used in the large-scale models. And it has also been pointed out that there is fast exponential growth in numerical models based on the nonhydrostatic models. Numerical methods will not converge to a continuum solution that has exponential growth.

> Yet while many features of the models improve at higher resolution, there is no substantial change to the ‘big issue’ – the sensitivity to radiative forcing.

Pardon me, but isn’t radiative forcing dependent on water vapor (clouds), which Pat Frank and others have shown is one of the biggest sources of error in the models?

> It should also be pointed out (again) that if you were correct, then why do models show any skill at anything? If they are all noise, why do you get a systematic cooling of the right size after Pinatubo? Why do you get a match to the global water vapour amounts during an El Niño? Why do you get a shift north of the rainfall at the mid-Holocene that matches the paleo record? If you were correct, none of these things could occur.

How was the forcing for Pinatubo included? It can be shown in a simple three-line proof that by including an appropriate forcing term, one can obtain any solution one wants, even from an incorrect differential system, exactly as you have done.

> Yet they do. You keep posting your claim that the models are ill-posed yet you never address the issue of their demonstrated skill.

There are any number of manuscripts that have questioned the “skill” of the models. I have specifically mentioned Dave Williamson’s results, which you continue to ignore. Please address any of the issues, e.g. the nonlinear cascade of vorticity that produces unresolved features in a climate model within a few days. How does that impact the difference between the model solution and reality? Or the impact of the upper boundary using numerical gimmicks? Or the use of inaccurate parameterizations as shown by Sylvie Gravel (see Climate Audit) or Dave Williamson? The ill posedness has also been shown when the mesoscale storms are resolved and the dissipation reduced, exactly as in the Lu et al. manuscript.

>In fact, you are wrong about what the models solve in any case. Without even addressing the merits of your fundamental point, the fact that the models are solving a well posed system is attested to by their stability and lack of ‘exponential unbounded growth’.

I have specifically addressed this issue. The unphysically large dissipation in the models that is preventing the smaller scales from forming is also hiding the ill posedness (along with the hydrostatic readjustment of the solution when overturning occurs due to heating – a very unphysical gimmick).

> Now this system is not the exact system that one would ideally want – approximations are indeed made to deal with sub-gridscale processes and numerical artifacts – but the test of whether this is useful lies in the comparisons to the real world – not in some a priori claim that the models can’t work because they are not exact.

And it is those exact sub grid scale processes that are causing much of the inaccuracy in the models along with the hydrostatic approximation.

> So, here’s my challenge to you – explain why the models work in the three examples I give here and tell me why that still means that they can’t be used for the CO2 issue. Further repetition of already made points is not requested. – gavin]

If you address any one of my points above in a rigorous mathematical manner with no handwaving, I will be amazed. I am willing to back up (and have backed up) my statements with mathematics and numerical illustrations. So far I have only heard that you have tuned the model to balance the unphysically large dissipation against the inaccurate forcing, tuned to provide the answer you want. This is not science, but trial and error.

Jerry
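Jerry's "three-line proof" that a forcing term can manufacture any desired solution can be checked numerically (a toy example of mine, not his actual derivation): given any target u(t) and any wrong model du/dt = N(u), define f(t) = u'(t) − N(u(t)); then u solves du/dt = N(u) + f exactly. Below, a deliberately wrong model N(u) = −u is forced to reproduce u(t) = sin t.

```python
import numpy as np

# Target "observed" solution and a deliberately wrong model N(u) = -u
u_target, u_target_prime = np.sin, np.cos
N = lambda u: -u

# Choose the forcing so the wrong model reproduces the target exactly:
# f(t) = u'(t) - N(u(t))  =>  here f(t) = cos(t) + sin(t)
f = lambda t: u_target_prime(t) - N(u_target(t))
rhs = lambda t, u: N(u) + f(t)

# RK4 integration of the forced wrong model from u(0) = sin(0) = 0
dt, n = 0.001, 20_000          # integrate to t = 20
ts = np.arange(n + 1) * dt
us = np.empty(n + 1)
u = float(u_target(0.0))
us[0] = u
for i in range(n):
    t = ts[i]
    k1 = rhs(t, u)
    k2 = rhs(t + dt / 2, u + dt / 2 * k1)
    k3 = rhs(t + dt / 2, u + dt / 2 * k2)
    k4 = rhs(t + dt, u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    us[i + 1] = u

print(np.max(np.abs(us - u_target(ts))))   # essentially zero
```

The point is that agreement with a chosen target says nothing about the model operator N being right: any N can be made to match any record once the forcing is free to be adjusted.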

Re 127

If the post-Pinatubo behavior of the climate system over a period of approximately three years is evidence for the models, is the current stable or declining temperature trend of five to eight years evidence against the models? I wish the modellers would be explicit about their claims; the sands do shift so…

#128 I believe that their answer is that the models handle such big forcings as Pinatubo (a climate forcing) and not the ten years of weather. Though I have noted that the pro-AGW press likes single events to make claims, most of the modellers and scientists do not. However, I would note that it seems more and more are agreeing that 30 years shows climate. Looking at the record, whether the CET, the US, or the world, 30 years does not seem to be such a bad claim. After all, in the records, seldom does one see 30 years of cooling. But this gets us back to the question: “If it has been warming since 1650 or so, how can you tell if it is a natural forcing or man-induced?”

Jerry #127, I found Gavin’s responses to your 218 on RealClimate interesting.

Is this comment relevant? I thought your point was the subgrid phenomena. Pinatubo was a many-grid phenomenon, correct? Should a large-scale event necessarily be proof that the micro scales are correct? Also, the time scale of Pinatubo is not the same scale as for the models that you object to (3 vs 100), or is that incorrect on my part? I did not think that optical depth was what you were objecting to. Is this actually (Pinatubo) a straw man? Do you think that

, or is this another strawman; or perhaps I just missed Gavin’s sarcasm?

Neil (#128),

I looked at the first reference that Gavin cited (Hansen et al. on Pinatubo). I have never seen more caveats in a manuscript.

After reading it I was unable to conclude anything. Note that I had asked for very specific info on how Pinatubo was simulated. This did not provide it.

Jerry

I apologise if I come across as slightly obsessed with what Prof. Tennekes wrote, but I am trying to understand the relevance or otherwise of turbulent flow phenomena to “weather” and “climate”! I would be very grateful if Gerald Browning cares to comment on my earlier comment (#27) on this thread.

#130

That is the key point, Jerry.

I do not know if you will be able to get the complete and accurate information about what the models did, and I doubt it.

My take on it would be that a simple 2D variational model (remember we are only talking about some 3 values of delta T, where delta T is the difference between a yearly average integrated over the whole sphere and some reference value) would only consider the change in overall average albedo and adjust the energy balance accordingly.

Then, as the time is too short to perturb the oceans’ systems, only the atmosphere and the surface would be perturbed.

From there one can get an order of magnitude of the global perturbation (“cooling”) that would surely be the right order of magnitude of the observed average temperatures.

An independent but not really necessary sophistication would be to connect the albedo and optical-thickness variation to the properties of the aerosols (quantity, size distribution, speed of propagation, etc.).

One could hope that a multimillion GCM can get at least energy conservation over a very short period (a couple of years) right.

As those are short signals, the problem can be solved by a perturbation method where more or less everything can be considered constant, and it wouldn’t imply anything about the skill to predict absolute values or variations over periods of hundreds of years, when oceans, sun and ice cover kick in.

So if you get the information, I am ready to bet that it doesn’t contain much more than what I sketched above, enveloped in fancy vocabulary like spectral absorptivity and such.
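Tom Vonk’s back-of-envelope argument can be sketched numerically. The following is a minimal zero-dimensional energy-balance sketch of that kind of perturbation calculation; all parameter values (effective emissivity, size of the albedo perturbation) are illustrative assumptions of mine, not taken from any model or from the thread.

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
EPS = 0.61        # effective emissivity, tuned so the baseline T is ~288 K (assumption)

def equilibrium_T(albedo):
    """Radiative equilibrium of a zero-D planet: S0*(1-albedo)/4 = EPS*SIGMA*T^4."""
    absorbed = S0 * (1.0 - albedo) / 4.0
    return (absorbed / (EPS * SIGMA)) ** 0.25

T_base = equilibrium_T(0.30)            # baseline planetary albedo
T_pert = equilibrium_T(0.30 + 0.007)    # small aerosol-driven albedo increase (illustrative)
print(f"baseline T = {T_base:.2f} K, perturbed T = {T_pert:.2f} K")
print(f"cooling = {T_base - T_pert:.2f} K")
```

A sub-1% albedo perturbation yields a cooling of a few tenths of a kelvin, i.e. the right order of magnitude for a Pinatubo-like event, which is exactly the "order of magnitude" claim in the comment above.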

Without considering the spheres (atmo, hydro, cryo, bio, litho, magneto, helio…) individually and in their interactions with each other in many different interlocking ways on various time scales, any big picture will of course be hopelessly incomplete.

Maybe just the big five, cryosphere, hydrosphere, atmosphere, lithosphere and biosphere, could be called the CHALB-sphere.

BENDER,

I have another for the list

http://www.statslab.cam.ac.uk/Reports/1996/1996-18.ps

Regression in Long-Memory Time Series

Richard L. Smith, Cambridge University and University of North Carolina

Fan-Ling Chen, University of North Carolina

ABSTRACT

We consider the estimation of regression coefficients when the residuals form a stationary time series. Hannan proposed a method for doing this, via a form of Gaussian estimation, in the case when the time series has some parametric form such as ARMA, but we are interested in semiparametric cases incorporating long-memory dependence. After reviewing recent results by Robinson, on semiparametric estimation of long-memory dependence in the case where there are no covariates present, we propose an extension to the case where there are covariates. For this problem it appears that a direct extension of Robinson’s method leads to inefficient estimates of the regression parameters, and an alternative is proposed. Our mathematical arguments are heuristic, but rough derivations of the main results are outlined. As an example, we discuss some issues related to climatological time series.

…..

Our interest in this problem was motivated, in part, by the problem of estimating trends in climatological time series, and of obtaining associated standard errors and significance tests. For discussion, see Bloomfield (1992) and Bloomfield and Nychka (1992). It is a well-known phenomenon that time series of global average temperatures exhibit a steady increasing trend over most of the last 150 years, but it is still a point of debate whether such trends are real evidence of an enhanced greenhouse effect due to human activity, or are simply long-term fluctuations in the natural climate….

Tom Vonk (#132),

A number of excellent points. I especially like the one about the long time scale of the ocean as compared to the atmosphere. That makes clear the gimmick they used to predict for a period of a few years, but not for longer periods.

Jerry

Jerry (#128) and Tom V. (#132)

With the caveats, approximations, and gross simplifications in Hansen’s treatment, including the admitted inability to predict regional effects of Pinatubo’s eruption, one wonders why the 1992 Hansen paper is seen as a test of numerical climate models at all, rather than as a test of the much simpler proposition “big volcanic eruptions that change albedo make the global average temperature go down for a while”.

Neil (#136),

Every statement that Gavin makes has a modifying descriptor (“may” is a favorite, and “close” without a quantitative amount is another). IMHO he is the most unscientific scientist I have known. Good scientists like Peter Lax, Fritz John, etc. will prove their statements without qualifiers, in rigorous scientific terms.

Jerry

I have complained bitterly in the past about getting no response (other than denial) from RC on the topic of the structural stability of EOF-derived “circulatory modes”. I have suggested these are just “eigenthingies”.

I now ask why RC (Schmidt, Roberts, Ladbury) did not point me toward Zwiers & von Storch (2004) who state:

From: Zwiers & von Storch. 2004. On the role of statistics in climate research. Int. J. Climatol. 24: 665-680.

Funny how these authors not only understood and anticipated my question, but answered it in the affirmative. (Not only that, they quote the word ‘modes’ the same way I do.) It is against blog rules to speculate why RC might have failed to point out this paper. But the paper says a number of interesting things that sound a lot like Wegman.

The relevance to the Koutsoyiannis thread is: internal LTP noise and the problem it causes for attribution exercises.

In keeping with the auditing function of CA, one wonders if this paper was cited in the IPCC docs, and if so, in what context.

You could always look at this PDF of CMIP runs and ask yourself what Hansen means by “the most trusted models produce a value of roughly 14 Celsius”, when actually it should be “CMIP control runs show a global temperature of around 11 to 17 Celsius.”

I think ensemble is a French word meaning “We have no clue, combining them will let us say we have averaged out the errors.”

Re #138. Bender, a search of all chapters of the 2007 Report of WGI (“The Physical Science Basis”) shows that this 2004 paper by Zwiers and von Storch was NOT cited in that document. It would be surprising if there were any reference in the Reports of the other Working Groups.

sam that is lovely

All,

Pat Frank has decided to post a response on RC to address the less than scientific comments on RC about his manuscript that appeared in Skeptic. Note that some of those comments came from Tapio Schneider. After reading Tapio’s comments on the history of his participation in the review process, you might want to read my response and, of course, eventually Pat’s response.

Also note that my list of issues has been dismissed by Gavin without him ever addressing them quantitatively.

Jerry

Re #142

If he had answered them quantitatively, I’m sure you would have considered that obfuscation, since you demanded a yes or no answer (#105)!

Bender (#138),

I also like the preceding sentences from the same paragraph as your quote:

Well, it seems that my response to Tapio has mysteriously disappeared.

I have asked RC if this was done intentionally. If not, I have a copy and will repost it on RC. I will also post it here.

Jerry

>Regarding the articles in Skeptic Magazine and claims made by Browning (e.g., #91, #152) and others, some (late) comments to clarify the history and some of the science.

> In September 2007, Michael Shermer, the publisher of the magazine, sent me Frank’s submitted article (an earlier but essentially similar version thereof), asking me what I think of it. This was not a request to review it but an informal email among acquaintances.

Are articles submitted to editors supposed to be sent out to random people, or only to the intended reviewers? This seems like a breach of publication rules.

What is the difference between the copy you saw and the final published article, so we can make sure that the two contain the same scientific information before proceeding?

>I pointed Shermer to some of the most glaring errors in Frank’s article. Some of these have come up in posts and comments here.

I will address the “glaring errors” mentioned in this comment and then we can proceed to any other glaring errors.

> For example, Frank confuses predictability of the first kind (Lorenz 1975), which is concerned with how uncertainties in the initial state of a system amplify and affect predictions of later states, with predictability of the second kind, which is concerned with how (possibly chaotic) internal dynamics affect predictions of the response of a system to changing boundary conditions. As discussed by Lorenz, even after predictability of the first kind is lost (”weather” predictability), changes in statistics of a system in response to changes in boundary conditions may be predictable (”climate” predictability).

You call this a glaring error, yet Lorenz has not proved anything. In particular, you are very careful to say “may be predictable”. That is not a rigorous proof, nor even a rigorous statement.

> Frank cites Matthew Collins’s (2002) article on limits to predictability of the first kind to infer that climate prediction of the second kind (e.g., predicting the mean climate response to changing GHG concentrations) is impossible beyond timescales of order a year; he does not cite articles by the same Matthew Collins and others on climate prediction of the second kind, which contradict Frank’s claims and show that statistics of the response of the climate system to changing boundary conditions can be predicted (e.g., Collins and Allen 2002). After pointing this and other errors and distortions out to Shermer–all of which are common conceptual errors of the sort repeatedly addressed on this site, with the presentation in Frank’s article just dressed up with numbers etc. to look more “scientific”–I had thought this was the last I had seen of this article.

Is there a mathematical proof in Collins’ second manuscript, or is it just a run of a climate model? As a mathematician friend of mine clearly stated, a model is not a proof of anything. That is especially true when the model is closer to a heat equation than to the system with the correct Reynolds number. In fact there have been many blind alleys in science because of exactly these types of assertions.

>Independently of Michael Shermer asking my opinion of Frank’s article, I had agreed to write an overview article of the scientific basis of anthropogenic global warming for Skeptic Magazine. I did not know that Frank’s article would be published in the magazine along with mine, so my article certainly was not meant to be a rebuttal of his (#91). Indeed, I was surprised by the decision to publish it given the numerous errors.

It has yet to be shown that there are any mathematical errors in Pat’s manuscript. In fact, when the reviewers are unable to present any rigorous arguments in rebuttal, the Editor is required to publish the article.

The Editor told Pat that someone would write a rebuttal in the same issue.

So would the questionable arguments above be your rebuttal?

>Regarding some of Browning’s other points here, the ill-posedness of the primitive equations means that unbalanced initial conditions excite unphysical internal waves on the grid scale of hydrostatic numerical models, in lieu of smaller-scale waves that such models cannot represent. This again limits predictability of the first kind (weather forecasts on mesoscales with such models) but not necessarily of the second kind. Browning is right that the models require artificial dissipation at their smallest resolved scales.

The problem has nothing to do with unbalanced initial conditions. The problem will arise from rounding errors and does not go away over a longer time period. It is only hidden by the poor resolution and unphysically large dissipation. Does this have something to do with the “glaring errors” in Pat’s article, or is it just another random potshot? It is certainly not rigorous.

If the numerical solution is not even close to approximating the continuum solution, nothing can be stated about the results. And very few modelers have shown that reducing the mesh size leads to the same or even a similar solution. In fact Dave Williamson has shown just the opposite result.

>However, from this it does not follow that the total dissipation in climate models is “unphysically large” (#152) (the dissipation is principally controlled by large-scale dynamics that can be resolved in climate models).

This is pure nonsense. Dave Williamson has shown that the cascade of vorticity happens in a matter of days and is very different when too large a dissipation is used (in fact smaller than in current climate models).

And he reduces the hyperviscosity coefficient by orders of magnitude when reducing the mesh size (but leaves the sponge layer coefficient near the top the same). You might try reading the scientific literature in your own field.

>And it does not follow that the “physical forcings are necessarily wrong” (#152). Moreover, the “forcing” relevant for the dynamical energy dissipation Browning is concerned with is not the forcing associated with increases in GHG concentrations, but the differential heating of the planet owing to insolation gradients (this is the forcing that principally drives the atmospheric dynamics and ultimately the dynamical energy dissipation). As Gavin pointed out, many aspects of anthropogenic global warming can be understood without considering turbulent dynamics.

We are not talking turbulent dynamics. We are talking about a numerical solution that is not even close to the solution of the continuum system for scales less than 100 km (mesoscale storms, fronts, hurricanes).

What prevents these from forming? Too large a dissipation. If the correct dissipation were used, the forcings would be quite different. Have you heard of microphysics for smaller scales of motion?

Jerry

#144 Yes. Many quotable, Wegmanesque quotes. So many that one wonders why it isn’t cited by the IPCC. Ironic that I should advocate for an ASA Journal of Statistical Climatology. That is precisely what these fellows understand that Gavin Schmidt doesn’t.

Re: Comment 84 and others on throwing more petaflops at climate models (without waiting for better understanding):

The scientists at NCAR are actively getting more “flops”. Here’s the story on their new 76-teraflop supercomputer, said to be one of the world’s 25 fastest. They only need 13 more to reach their first petaflop. I wonder how much one of those babies costs?

Re 139: the observational data are consistent with the control runs.

Can’t help imagining Mother Nature smiling in the background about them human beings trying to fathom her antics…

Phil (#143),

You are confusing different comments. One was a yes or no because Gavin loves to diverge from the issue at hand. But others asked quantitative questions, like what is the impact of the treatment of the upper boundary, the improper cascade of vorticity, etc.

And it is clear from comments made by RC that they don’t even know the difference between a well posed and an ill posed system. Pathetic.

Jerry

Jerry

#151 (Jerry) says:

Would it be possible to give us non-modellers a layman’s summary of the difference?

149 mosh

Not to +/-2 SEM they’re not!

😀

Actually, sadly, we have no observational data, just an anomaly offset from the mean of the low and high models from 1951-1980 or 1961-1980. 😦

Raven (#152),

There are many references that discuss properly posed problems. I will quote the definition from the book “Difference Methods for Initial-Value Problems” by Richtmyer and Morton, and then I will try to explain in simpler terms.

From page 39:

The problems we consider can be represented abstractly, using Banach space terminology. One wishes to find a one-parameter family u(t) of elements of B, where t is a real parameter, such that

(3.1) d over dt u(t)= A u(t), 0

Oh my, the comment seems to have been shortened for some reason. I must have used a bad character (probably the less-than sign). I’ll have to retype.

Jerry.

Use ampersand, lt, semicolon (“&lt;”) to get a less-than sign, Jerry:

<

Raven (#152),

You have the reference from the shortened comment so I will just write my less technical summary to save me time.

A properly (well) posed time dependent problem depends continuously on the initial data, i.e. a small perturbation of the initial data will lead to a solution that is close to the unperturbed solution for a finite time interval. This is exactly the type of solution that one expects physically.

An improperly (ill) posed problem does not depend continuously on the initial data, i.e. a small perturbation of the initial data will lead to a solution that deviates from the unperturbed solution by an arbitrarily large amount in an arbitrarily small amount of time. The solutions of these problems are not physical and any small error, whatever its source, will lead to a drastically different answer.

Jerry
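Jerry’s two definitions can be seen in a toy calculation using single Fourier modes of the heat equation u_t = u_xx, which is well posed forward in time and ill posed backward in time. (The heat equation is my illustrative example here, not one Jerry used; a mode sin(kx) evolves as e^(-k²t) forward and e^(+k²t) backward.)

```python
import math

eps = 1e-10   # tiny perturbation amplitude placed in the mode of wavenumber k
t = 0.1       # short, fixed time interval

results = {}
for k in (1, 10, 30):
    forward = eps * math.exp(-k * k * t)   # well posed: the perturbation stays small
    backward = eps * math.exp(k * k * t)   # ill posed: growth is unbounded in k
    results[k] = (forward, backward)
    print(f"k={k:2d}  forward={forward:.3e}  backward={backward:.3e}")
```

Forward in time the perturbation decays for every k, matching the continuous-dependence definition; backward in time the same 1e-10 perturbation at k=30 is amplified to astronomical size within t=0.1, i.e. an arbitrarily large deviation in an arbitrarily small time, exactly the ill-posed behavior described above.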

Hoi Polli (#150),

The computing rates are always cited in the best scenario, i.e. single-processor peak rate times the number of processors. The problem was always that the processors got bogged down in communications between processors, so the actual rate was considerably less, e.g. as little as 10% of peak. So the extra processors did not increase the computing rate linearly as the number of processors increased. I assume that this is still true? In any case, when the basic dynamical system has serious problems, the claimed increase in forecast or climate model accuracy is nonsense.

Jerry
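Jerry’s point about sub-linear scaling is the standard Amdahl’s-law effect; the sketch below is that textbook idealization (treating communication overhead as a fixed serial fraction is my simplifying assumption, not Jerry’s exact accounting).

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# If serial work plus communication is 10% of the total, adding processors
# saturates: 1000 processors deliver under 10x, i.e. ~1% of "peak" 1000x.
for n in (10, 100, 1000):
    print(f"{n:5d} procs -> speedup {amdahl_speedup(0.9, n):.2f}x")
```

This is why quoting peak rate times processor count overstates delivered performance so badly: the speedup is bounded above by 1/(serial fraction), here 10x, no matter how many processors (or petaflops) are added.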

Raven (#152),

The definitions have nothing to do with numerical models, but are for continuous partial differential equations. Once a continuum PDE is determined to be well posed, it can be approximated by numerical methods, where consistency (accuracy) and stability of the numerical method must be shown. Then the numerical method will converge to the continuum solution as the mesh size is reduced (for a finite time interval).

Jerry
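The convergence Jerry describes can be demonstrated on the simplest possible well-posed model problem; a sketch of my own (not from the thread) using forward Euler, which is consistent and stable for this problem at small step sizes, on u'(t) = -u(t), u(0) = 1, whose exact solution is e^(-t).

```python
import math

def euler_error(h, T=1.0):
    """Error at time T of forward Euler applied to u' = -u, u(0) = 1."""
    n = int(round(T / h))
    u = 1.0
    for _ in range(n):
        u += h * (-u)          # one Euler step
    return abs(u - math.exp(-T))

errors = [euler_error(h) for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print("errors:", errors)
print("ratios (close to 2 for first-order convergence):", ratios)
```

Halving the mesh size halves the error (ratios near 2, the first-order rate), which is the consistency-plus-stability-implies-convergence story in miniature; for an ill-posed continuum problem no such refinement study can converge, which is why the well-posedness question comes first.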

#157 – Thanks – that is the level of explanation I was looking for.

If I understand Koutsoyiannis et al.’s methodology, they compared modelled temperatures to recorded temperatures by downscaling GCM outputs to the NOAA station level. Is that degree of downscaling methodologically sound? The researchers say they based their techniques on Georgakakos (2003), but in that study–again, if I’m not mistaken–downscaling only went as far as the US climate division level (on the order of 10,000 km²).

Also, Albany is in Georgia, not Florida.

Well it seems that RC has also edited Pat Frank’s rebuttal to Tapio.

Jerry

I’ve been having trouble getting my reply to Anthony Kendall’s comments past the WordPress spam filter at RC, and so have just emailed them in to RC contributions, in the hopes of having them published.

But in any case, here it is:

Re. #127 — Anthony Kendall wrote, “I just finished Frank’s article, and I have to say that it makes really two assumptions that aren’t valid … 1) The cloudiness error…”

[snip]

“… he uses this number 10%, to then say that there is a 2.7 W/m^2 uncertainty in the radiative forcing in GCMs. This is not true. Globally-averaged, the radiative forcing uncertainty is much smaller, because here the appropriate error metric is not to say, as Frank does: “what is the error in cloudiness at a given latitude” but rather “what is the globally-averaged cloudiness error”. This error is much smaller (I don’t have the numbers handy, but look at his supporting materials and integrate the area under Figure S9), indeed it seems that global average cloud cover is fairly well simulated. So, this point becomes mostly moot.”

Your description is incorrect. Table 1 plus discussion in Hartmann, 1992 (article ref. 27) indicate that –27.6 Wm^-2 is the globally averaged net cloud forcing. That makes the (+/-)10.1% calculated in the Skeptic article Supporting Information (SI) equal to an rms global average cloud forcing error of the ten tested GCMs. Further, the global rms cloud percent errors in Tables 1 and 2 of Gates, et al., 1999 (article ref. 24), are ~2x and ~1.5x of that 10.1%, respectively.

Your quote above, “what is the error in cloudiness at a given latitude,” appears to be paraphrased from the discussion in the SI about the Phillips-Perron tests, and has nothing to do with the meaning of the global cloud forcing error in the article.

“2) He then takes this 10% number, and applies it to a linear system to show that the “true” physical uncertainty in model estimates grows by compounding 10% errors each year. There are two problems here: a) as Gavin mentioned, the climate system is not an “initial value problem” but rather more a “boundary value problem”…”

It’s both. Collins, 2002 (article ref. 28) shows how very small initial value errors produce climate (not weather) projections that have zero fidelity after one year.

Collins’ test of the HadCM3 has only rarely been applied to other climate models in the published literature. Nevertheless, he has shown a way that climate models can be tellingly, if minimally, tested. That is, how well do they reproduce their own artificially generated climate, given small systematic changes in initial values? The HadCM3 failed, even though it was a perfect model of the target climate.

The central point, though, is that your objection is irrelevant. See below.

“…–more on that in a second, and b) the climate system is highly non-linear.”

But it’s clear that projection of GHG forcing emerges in a linear way from climate models. This shows up in Gates, 1999, in AchutaRao, 2004 (Skeptic ref. 25; the citation year is in error), and in the SRES projections. The congruence of the simple linear forcing projection with the GCM outputs shows that none of the non-linear climate feedbacks appear in the excess GHG temperature trend lines of the GCMs. So long as that is true, there is no refuge for you in noting that climate itself is non-linear.

[snip]

“The significance of the non-linearity of the system, along with feedbacks, is that uncertainties in input estimates do not propagate as Frank claims.”

To be sure. And theory-bias? How does that propagate?

“Indeed, the cloud error is a random error, which further limits the propagation of that error in the actual predictions. Bias, or systematic, errors would lead to an increasing magnitude of uncertainty. But the errors in the GCMs are much more random than bias.”

SI Sections 4.2 and 4.4 tested that very point. The results were that cloud error did not behave like a random error, but instead like a systematic bias. The correlation matrix in Table S3 is not consistent with random error. Recall that the projections I tested were already 10-averages. Any random error would already have been reduced by a factor of 3.2. And despite this reduction, the average ensemble rms error was still (+/-)10.1%.

This average cloud error is a true error that, according to statistical tests, behaves like systematic error; like a theory bias. Theory bias error produces a consistent divergence of the projection from the correct physical trajectory. When consistent theory bias passes through stepwise calculations, the divergence is continuous and accumulates.
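The random-versus-systematic distinction Pat draws here has a simple numerical face; a toy sketch of my own (the step count and per-step error magnitude are arbitrary illustrative numbers, not Pat’s figures) contrasting how the two kinds of error accumulate through a stepwise calculation.

```python
import random, math

random.seed(1)

steps, err, trials = 100, 0.1, 2000

# Random per-step error: contributions partially cancel, growing like err*sqrt(n).
finals = []
for _ in range(trials):
    finals.append(sum(random.gauss(0.0, err) for _ in range(steps)))
rand_rms = math.sqrt(sum(x * x for x in finals) / trials)

# Systematic per-step bias: same sign every step, accumulating like err*n.
bias_total = steps * err

print(f"random error, rms after {steps} steps: {rand_rms:.2f} (~ err*sqrt(n) = {err * math.sqrt(steps):.2f})")
print(f"systematic bias after {steps} steps:  {bias_total:.2f} (= err*n)")
```

After 100 steps the random error has grown by a factor of about 10 while the same-sized systematic bias has grown by a factor of 100; this order-of-magnitude gap is why it matters so much whether the cloud error tests out as random or as theory-bias.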

“Even more significantly, the climate system is a boundary-value problem more than an initial-value problem.”

Speaking to initial value error vs. boundary value error is irrelevant to the cloud forcing error described in my article, which is neither one.

Consider, however, the meaning of Collins, 2002. The HadCM3 predicted a climate within a bounded state-space that nevertheless had zero fidelity with respect to the target climate.

[snip]

“To summarize my points:

“1) Frank asserts that there is a 10% error in the radiative forcing of the models, which is simply not true.”

That’s wrong. An integrated 10.1% difference in global modeled cloudiness relative to observed cloudiness is not an assertion. It’s a factual result. Similar GCM global cloud errors are reported in Gates, et al., 1999.

“At any given latitude there is a 10% uncertainty in the amount of energy incident, but the global average error is much smaller.”

I calculated a global average cloud forcing error, not a per-latitude error. The global average error was (+/-)2.8 Wm^-2. You mentioned having looked at Figure S9. That Figure shows the CNRM model per-latitude error ranges between about +60% and -40%. Figure S11 shows the MPI model has a similar error range. Degree-latitude model error can be much larger than, or smaller than, 10%. This implies, by the way, that the regional forcings calculated by the models must often be badly wrong, which may partially explain why regional climate forecasts are little better than guesses.

“2) Frank mis-characterizes the system as a linear initial value problem, instead of a non-linear boundary value problem.”

If you read the article SI more closely, you’ll see that I characterize the error as theory-bias.

Specific to your line of argument (but not mine), Collins, 2002, mentioned above, shows the initial value problem is real and large at the level of climate. The modeling community has yet to repeat the perfect-model verification test with the rest of the GCMs used in the IPCC AR4. One can suppose these would be very revealing.

[snip]

“Let me also state here, Frank is a PhD chemist, not a climate scientist…”

Let me state here that my article is about error estimation and model reliability, and not about climate physics.

[snip]

“There’s also a reason why this article is in Skeptic instead of Nature or Science. It would not pass muster in a thorough peer-review because of these glaring shortcomings.”

The professionals listed in the acknowledgments reviewed my article. I submitted the manuscript to Skeptic because it has a diverse and intelligent readership that includes professionals from many disciplines. I’ve also seen how articles published in more specialist literature that are critical of AGW never find their way into the public sphere, and wanted to avoid that fate.

Dr. Shermer at Skeptic also sent the manuscript to two climate scientists for comment. I was required to respond in a satisfactory manner prior to a decision to accept.

Here’s my original response to the relevant portion of Tapio Schneider’s comments at RC. Gavin edited out the first bit, but I can fully document it.

It’s interesting that Tapio, Gavin, and A Kendall all missed the point that the cloudiness error tested out as systematic, consistent with theory-bias. If that error is indeed from theory-bias, it will play havoc with the position that all climate prediction needs is a bigger, faster computer.

Anyway, here’s the original post. May it make it through the discretion filter at CA.

———————————-

Re #206, only one of the two climate scientists recruited by Dr. Shermer at Skeptic referred to type 1 and type 2 errors. That particular reviewer prefaced his remarks to Dr. Shermer with an attempt at character assassination; see SI Section 1.2 for a more discreet paraphrasing of the accusation and a discussion of it.

I cited Collins’ 2002 article (Skeptic ref. 28) merely to show from a different perspective that GCM climate projections are unreliable. Likewise Merryfield, 2006 (ref. 29). Citing them had nothing to do with validating my error analysis.

Collins and Allen, 2002, mentioned by Schneider, tests the “potential predictability” of climate trends following boundary value changes. E.g., whether a GCM can detect a GHG temperature trend against the climate noise produced by the same GCM. This test includes the assumption that the GCM accurately models the natural variability of the real climate. But that’s exactly what is not known, which is why the test is about “potential predictability” and not about real predictability. Supposing Collins and Allen, 2002, tells us something about the physical accuracy of climate modeling is an exercise in circular reasoning.

The error relevant to uncertainty propagation in the Skeptic article is theory-bias. Schneider’s claim of confusion over type 1 and type 2 errors is both wrong and irrelevant. Of the major commentators in this thread, only Jerry Browning has posted deeply on theory-bias, and his references to the physics of that topic have been dismissed.

—————————–

#147 — bender, the ASA leadership came out with a statement saying that the IPCC is telling us important stuff, AGW is a real threat, and that more statisticians should be employed in assessing and solving the problem. One wonders what Wegman, et al. think of this.

#165: Pat

Apart from the politically correct initial endorsement of the IPCC AR4 with no indication of having done any analysis themselves, it appears to me (reading the bold text, in particular) that the ASA leadership is advocating – and rightly so – that there is a need to involve more statisticians in the climate science process. This is basically the same conclusion that most of us on CA have come to independently.

#166, Roman, my mistake. Take a look at this.

Well Real Climate has started to selectively edit my responses. Would the readers on this site want me to post them here?

Jerry

Steve: Sure.

#167 Pat

The latter link is obviously a press release based on the ASA statement from the first link. The press release has the typical hyperbole which we have seen in such releases from all kinds of organizations in the past. Such an unrestrained endorsement of the AR4 would be expected – I don’t see them saying anything like “these guys have it all wrong and they desperately need us to bail them out”.

From my own personal experience, many statisticians have paid no attention to the work done in climate science (including myself, until I started hanging around CA) and don’t have a clue about the many examples of the misuse of statistics in some of the work done there. However, some of them will still arrogantly swear that the results are incontrovertible, the science is settled, and anyone not knowing those “facts” is ignorant (sort of like what one sees on RC 😉 ). I suspect that the “endorsement” was, like the AR4, the result of a small group of persons with that type of attitude. What I found interesting was how forcefully the initial statement indicated the need and possibilities for the involvement of statisticians, not only in climate research, but in the IPCC process itself. For the most part, I would trust the results from such involvement to be more intellectually honest than the multitude of refusals in the past to be open about the techniques used in the papers I have read over the past year.

Roger Pielke Sr. responded to Ray Pierrehumbert

“In this simple case, where the model noise and SST forcing matches satellite-observed statistics from CERES (for reflect SW) and TRMM TMI (for SST), a positive feedback bias of 0.6 W m-2 K-1 resulted (the specified feedback, including the Planck temperature effect, was 3.5 W m-2 K-1)……to the extent that any non-feedback radiative fluctuations occur, their signature in climate data LOOKS LIKE positive feedback. And when modelers use those relationships to help formulate cloud parameterizations, it can lead to models that are too sensitive.”

http://climatesci.org/2008/05/22/a-response-to-ray-pierrehumbert%e2%80%99s-real-climate-post-of-may-21-2008-by-roy-spencer/

Oops, that is Roy Spencer at Roger Pielke’s site.

Steve (#168),

Thank you.

RC actually posted links to four of the references I gave them (Comment #284: the two Williamson manuscripts, the Lu manuscript, and the book reference that contains the ill-posedness proof of the hydrostatic equations).

When I provided the references, I expected them to try to find some weakness in the manuscripts. Gavin immediately picked out one line of the conclusion of the Jablonowski and Williamson manuscript, saying it showed the numerical models converged and that I was wrong (see exchange #294 below).

# Gerald Browning Says:

22 May 2008 at 1:30 PM

Gavin (#291),

And you claim I am rude. Only in response to your insults. Williamson’s tests show very clearly the problem with inordinately large dissipation, i.e. the vorticity cascade is wrong and the parameterizations necessarily inaccurate exactly as I have stated. Anyone that takes the time to read the manuscripts can see the problems and can judge for themselves.

Jerry

[Response: Presumably they will get to this line in the conclusions of J+W(2006): “In summary, it has been shown that all four dynamical cores with 26 levels converge to within the uncertainty … “. Ummm…. wasn’t your claim that the models don’t converge? To be clearer, it is your interpretation that is rubbish, not the papers themselves which seem to be very professional and useful.- gavin]

I responded (and it actually went thru) with #295

# Gerald Browning Says:

22 May 2008 at 1:49 PM

Gavin (#294),

And what is the uncertainty caused by, and how quickly do the models diverge? By exponential growth in the solution (although the original perturbation was tiny compared to the size of the jet), and in less than 10 days. Also note how the models use several forms of numerical gimmicks (hyperviscosity and a sponge layer near the upper boundary) for the “dynamical core” tests, i.e. for what was supposed to be a test of numerical approximations of the dynamics only. And finally, the models have not approached the critical mesh size in the runs.

Jerry

There was no response to this.

Earlier Ray Ladbury had made a rather uninformed statement about the ill posedness, and that is where I called the scientists on the site pathetic (in a CA comment above) for arguing about this issue when they clearly did not know the difference between a well posed and ill posed system. So they evidently called in an applied mathematician to try to counter my arguments because they could not handle them. Now the applied mathematician did not use his real name, but I have a guess who it might be. Anyway here is his first substantive comment (#285) and my response.

Gerald #281

No Gerald, I have not misunderstood the definition of a well posed problem. In order to be well posed a problem should:

a) Have a solution at all.

b) Given the initial/boundary data, that solution should be unique.

c) The solution should depend continuously on the given data.

Your comments have focused on point c), but the other two are also part of the definition of well-posedness.

In fact, if you are going to object to simulations, whether the problem is well posed isn’t even the best thing to complain about. Even if the problem is well posed, it is far more important to know whether the problem is well-conditioned. Just knowing that the solution depends continuously is of little value.

Even for problems which are not well posed we can often do a lot. If you talk to the people doing radiology you will find that they have to compute inverse Radon transforms, and that problem is known to be ill-posed. However there are ways around that too.

For something like a climate model you probably don’t even need to look for classical solutions, since one is interested in long term averages of different quantities. Knowing that e.g. a weak solution or a viscosity solution exists, together with having an ergodic attractor in the phase space of the model, would give many correct averages, even if the system is ill-conditioned and chaotic within the attractor.

I had a look at what you wrote over at Climate Audit too, and calling the people who are writing at RC “pathetic” for knowing the definition of a well posed problem in mathematics isn’t exactly in the spirit of the good scientific discussion you claim to want.

While I’m at it, I can also let you know that the performance of a modern supercomputer is not given as the single processor flops times the number of processors. It is based on running a standardised set of application benchmarks. Take a look at http://www.top500.org if you want to get up to date on the subject.

# Gerald Browning Says:

22 May 2008 at 12:41 PM

Jonas (#285),

You seem to conveniently have forgotten the interval of time mentioned in Peter Lax’s original theory. You might look in Richtmyer and Morton’s classic book on the initial value problem on pages 39-41.

Jerry

Note that the above reference is exactly the one I cited here.

Because you never know what will be allowed, I did not respond any further to Jonas’ remark. But I will here.

Ill posedness is a very local phenomenon and will appear if the time dependent differential system is (nearly) hyperbolic or parabolic. Nonlinearity will not change the result. That has been illustrated by (non) convergent numerical examples on this site.

Well posedness is defined for linear variable coefficient hyperbolic and (incompletely) parabolic systems. Extensions to nonlinear equations can and have been made for finite periods of time. For a rigorous treatment of the nonlinear incompressible Navier-Stokes equations, see the minimal scale estimates by Henshaw, Kreiss, and Reyna.

Tomography (MRIs) uses a circular device to take multiple scans of the object (e.g. the brain), and the amount of X-rays is quite large. The reason for this is that the mathematics is trying to invert an integral, and this process is known to be ill posed. The problem reduces to inverting an ill conditioned matrix, but this is an entirely different problem than an ill posed time dependent PDE. And even in this case only the largest features can be determined, because smaller scale features are even more ill conditioned.
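Jerry’s distinction can be made quantitative: inverting an ill conditioned matrix amplifies errors in the data by roughly the condition number, and for discretized integral operators the condition number explodes as the resolution is refined. A toy sketch using the Hilbert matrix (a standard ill conditioned example from the numerical analysis literature, not anything from an actual scanner):

```python
import numpy as np

def hilbert(n):
    """The n x n Hilbert matrix, a classic ill conditioned
    discretized integral operator."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)

# the condition number explodes as the discretization is refined,
# so finer features of the reconstructed object are progressively
# less recoverable from noisy data
for n in (4, 8, 12):
    print(n, np.linalg.cond(hilbert(n)))
```

This is why, as the comment says, only the largest features can be determined: the fine-scale components of the solution sit in the directions associated with the tiny singular values.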

The benchmarks used to test the parallel computers, the so-called LINPACK series, are a special set of routines developed for parallel computers to perform linear algebra. That does not mean that a climate model will run as efficiently as the benchmarks, because of nonlocal communications. So this statement is misleading. In fact a number of models have reverted to finite difference methods instead of spectral methods because they employ more local communications.

AT #300 Jonas commented

# Jonas Says:

22 May 2008 at 2:58 PM

Gerald #292.

Lax is a great mathematician, but his original papers are not among my readings. R&M, on the other hand, I have read, some time ago. Unless my memory is wrong (I’m at home and don’t have the book available here), the initial part of the book covers linear differential equations and the Lax-Richtmyer equivalence theorem for finite-difference schemes.

So, if by “the interval of time” you refer to the fact that this theorem only assumes that the solution exists from time 0 up to some time T, and that for e.g. Navier-Stokes it is known that such a T, depending on the initial data, exists, then I understand what you are trying to say. However, the problem with this is that unless you can give a proper estimate of T, you cannot know that the time step you choose in the difference scheme is not greater than T. If that is the case, the analysis for small times will not apply to the computation. So long term existence of solutions is indeed important even when using the equivalence theorem.

Furthermore, the equations used here are not linear and for nonlinear equations neither of the two implications in the theorem are true.

There is no doubt that there are interesting practical, and purely mathematical, questions here, but if you are worried about the methods used, try to make a constructive contribution to the field rather than treating people here like fools and calling them names at other web sites. I’m not working with climate or weather simulation, but if I have doubts about something I read outside my own field, I will politely ask the practitioners for an explanation or a good reference rather than trying to bully them.

And my response at #301

# Gerald Browning Says:

22 May 2008 at 4:16 PM

Jonas (300),

I am willing to continue to discuss well posedness, but ill posedness of the hydrostatic system is the problem, and the unbounded growth shows up faster and faster as the mesh is reduced (more waves are resolved), exactly as predicted by the continuum theory in the above reference by Heinz and me. There are computations that show this on Climate Audit under the thread called Exponential Growth in Physical Systems. I ran those tests just to illustrate the theoretical results.

I cannot here explain the connection between linear and nonlinear theory, but there are theorems discussing this issue, especially for hyperbolic and parabolic systems. See the book by Kreiss and Lorenz on the Navier-Stokes equations.

For some beautiful nonlinear theory on the NS equations, look at the minimal scale estimates by Henshaw, Kreiss and Reyna and associated numerical convergence tests.

Jerry

I tried to respond to the bullying comment but it disappeared.

One should know that in climate science the proper term for this is “Canadian” – as in “contribution from two Canadians”

#172

“For something like a climate model you probably don’t even need to looks for classical solutions, since one is interested in long term averages of different quantities. Knowing that e.g. a weak solution or a viscosity solution exists, together with having an ergodic attractor in the phase space of the model would give many correct averages, even if the system is ill-conditioned and chaotic within the attractor.”

Huh ??

What might he possibly mean by this garbage?

Classical solution of what? Classical as opposed to what, non-classical? What makes a solution classical?

How does he know that a weak solution (of what? Of N-S?) exists? And how weak, anyway?

How does he know that there is an attractor in the phase space of the model? Of course he doesn’t know that, and it is even impossible to prove that there is one.

Besides, even if there was one, the phase space of the model is dramatically smaller than the phase space of the real Earth and would be irrelevant.

What is a “viscosity” solution?

What has ergodicity to do with attractors? What would be “non ergodic” attractors?

“A system chaotic within the attractor.” God! A system is chaotic or it isn’t. If it is, it may have an attractor, but it need not. But saying that a system is chaotic within an attractor is like saying that bubbling water has bubbles in it. If I hazard a translation of this garbage into semi-understandable terms, it gives something like this:

“As we use (temporal) averages, there is a magic that makes the behaviour of the system independent of the number of dimensions of the phase space. This magic warrants that, by using a crude model with a very low number of dimensions, if a finite number of numerical runs with a finite time horizon appears bounded (e.g. the trajectories of the modelled system lie all within a subspace of the model phase space that we call an attractor for no particular reason other than because we can), then the same result and the same bounding applies to the real system with infinite phase space dimension and infinite time horizon.”

Now despite the fact that the person obviously has no understanding of chaos theory, there is not an embryo of a beginning of a stability theory of the trajectories of the climate system in the phase space.

There is a powerful theorem, the Kolmogorov-Arnold-Moser theorem, that has been used to study the stability of planetary orbits.

Indeed, while it is known that the orbits are chaotic, that doesn’t necessarily imply that they are unstable.

The Kolmogorov-Arnold-Moser theorem says that if the system can be represented by an integrable Hamiltonian plus a perturbation Hamiltonian, then under suitable conditions the effect of the perturbation Hamiltonian is only to deform the attractor, not to destroy it.

In other words, the system is chaotic but stable (within certain bounds).

An analogous approach to the climate would be to find an integrable system that would be an idealisation of the climate, and then prove that the difference between the real climate and the idealised (modelled) climate is a perturbation.

Perturbation meaning, in this context, that the perturbation Hamiltonian is small and stays small.

Applying the KAM theorem would then prove that the attractor can (or cannot) be destroyed by the perturbation.

And it is at this point that the problem, even if formulated in the frame of chaos theory, meets Jerry’s issues.

To apply such methods to a known chaotic system, it is necessary that its idealisation be INTEGRABLE. Numerical methods qualify only insofar as it can be proven that they converge to the continuous solutions of whatever system of equations is chosen.

Climate models fail already at this stage, as Jerry shows.

Even if somebody found this idealised converged system, which would clearly be completely different from the current climate models, there would be a second, much more formidable task: prove that everything that is not contained in the idealised system constitutes a “perturbation”.

This task, supposing that the first has been solved, is way beyond our current state of knowledge, and it is not excluded that the perturbation method doesn’t apply at all.

What remains?

Evidence over the last billion years suggests that some dimensions in the infinity of dimensions of the phase space (temperature, pressure, etc.) are bounded.

This suggests that there indeed might be an attractor for the climate.

However, evidence also suggests that despite rather “narrow” bounds on some dimensions (air and water velocities, solar irradiation, temperature), the attractor is sufficiently vast that it allows extremely different dynamic states (ice Earth, tropical Earth, stormy Earth, etc.) with fast transitions between them.

Looking at that attractor on a 10-year scale would be at best useless and at worst ridiculous.

What Tom Vonk said.

The extrapolative mis-characterization of fundamental concepts is becoming breathtaking.

That was a fantastic rant Tom V. Thanks sincerely for the tour of K-A-M.

This seems strange, though:

Why should boundedness imply the existence of a dynamical attractor, rather than simply bounded responses to bounded perturbations?

Despite Jonas’ comments on well posedness (which have nothing to do with the ill posedness of the hydrostatic system), existence for quasilinear (a form of nonlinearity common to the most often used systems) hyperbolic, parabolic, and mixed systems has been proved (see the Kreiss and Lorenz text on the Navier-Stokes equations). And uniqueness follows from the basic energy estimates. Thus the continuous dependence on the data is the crucial point, and for these systems there is continuous dependence.

These proofs are for a finite time interval for hyperbolic systems because they can shock. But there are global existence results shown for other systems in special cases.

Ill posedness is a very local phenomenon (much like numerical instability) and needs no global existence theory.

Jerry
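Jerry’s point that ill posedness is local, and that the unbounded growth appears faster as more wavenumbers are resolved, can be illustrated with the textbook ill posed initial value problem, the backward heat equation u_t = −u_xx (my toy example, not the hydrostatic system itself). A Fourier mode with wavenumber k grows like exp(k²t), so refining the mesh resolves higher k and the same tiny perturbation blows up sooner:

```python
import numpy as np

def max_growth(n, t_final=0.01, eps=1e-6):
    """Advance the backward heat equation u_t = -u_xx (ill posed) with an
    explicit scheme from a tiny random perturbation on a periodic grid of
    n points, and return the largest amplitude reached."""
    dx = 1.0 / n
    dt = 0.1 * dx**2                      # the usual diffusive step size
    rng = np.random.default_rng(0)
    u = eps * rng.standard_normal(n)
    for _ in range(int(t_final / dt)):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u = u - dt * lap                  # minus sign: backward heat equation
    return float(np.abs(u).max())

# a finer mesh resolves higher wavenumbers, so the same perturbation
# grows faster -- the hallmark of an ill posed problem
for n in (16, 32, 64):
    print(n, max_growth(n))
```

No global existence theory is needed to see the pathology: the growth rate of the highest resolved mode increases without bound as dx shrinks, which is exactly why no refinement of the mesh can fix an ill posed continuum problem.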

Jerry, Tom V, Dan, et al.

I note that the budget for the U.S. Climate Change Science Program is a bit over $2B per year. I also note that the strategic plan for U.S. climate research does not even mention research into the mathematical foundations of climate models (see particularly Chapter 10 “Modeling Strategy”). I was particularly surprised to see that NSF’s program statement in Chapter 16 does not even use the word ‘mathematics’.

I realize this is a very broad question to which a comprehensive answer in a blog is impossible, but do you have any thoughts on how we should be spending the research dollars? What research questions are being overlooked? How do you think the program for climate modelling research could be recast or modified? Steve M. has mentioned adding an independent oversight/auditing function to the funding mix; Dan H. has emphasized software verification and validation for code used to drive policy; etc.

Well I responded to this bit of nonsense:

and the comment disappeared. Basically I said that if anyone reads Pat’s article it is clear he makes no such claim and that the comment only reflects badly on the commenter.

Selective editing anyone?

Jerry

Neil,

Note that there are no numerical analysts or continuum PDE scientists on the IPCC. An oversight board that included some independent specialists (no funding attachments to the game) would be very appropriate. But it might be difficult to find a specialist in these areas who could stay awake during all of the unsubstantiated hand waving.

Jerry

Tom Vonk (#172),

As usual we are in agreement. Thank you for the clarifying exposition.

Jerry

P.S. Currently I am looking at Radon transforms to learn more about the mathematical method.

bender, UC: you will like this.

I’m digging into the attribution studies. Why? Because the huge spread in the model outputs in forecasts made me wonder how in God’s name they ever did an attribution study. With ensembles that wide, lots of observations fit the data.

So, Chapter 9 of AR4, Figure 9.5:

Comparison between global mean surface temperature anomalies (°C) from observations (black) and AOGCM simulations forced with (a) both anthropogenic and natural forcings and (b) natural forcings only. All data are shown as global mean temperature anomalies relative to the period 1901 to 1950, as observed (black, Hadley Centre/Climatic Research Unit gridded surface temperature data set (HadCRUT3); Brohan et al., 2006) and, in (a), as obtained from 58 simulations produced by 14 models with both anthropogenic and natural forcings. The multi-model ensemble mean is shown as a thick red curve and individual simulations are shown as thin yellow curves. Vertical grey lines indicate the timing of major volcanic events. Those simulations that ended before 2005 were extended to 2005 by using the first few years of the IPCC Special Report on Emission Scenarios (SRES) A1B scenario simulations that continued from the respective 20th-century simulations, where available. The simulated global mean temperature anomalies in (b) are from 19 simulations produced by five models with natural forcings only. The multi-model ensemble mean is shown as a thick blue curve and individual simulations are shown as thin blue curves. Simulations are selected that do not exhibit excessive drift in their control simulations (no more than 0.2°C per century). Each simulation was sampled so that coverage corresponds to that of the observations. Further details of the models included and the methodology for producing this figure are given in the Supplementary Material, Appendix 9.C. After Stott et al. (2006b).

Now, you can go look at this graphic, but basically what you see is two charts.

In one chart you have the observations surrounded by spaghetti: 14 models, 58 runs. This is basically the hindcast experiment, and you see the observations fall within the swath of the 14 models. In this experiment GHGs are modeled.

Now comes the second chart, where GHGs are not modelled and you have only “natural” forcings.

Do they use 14 models? Nope, they use 5. Why? To reduce the spread? Do they use 58 runs? Nope, only 19. Do they run every model? Nope, only those that have no drift in the control runs.

So for the forecast they use a whole host of models. They don’t throw out those with drift in control runs; the drift gets transported into forecast uncertainty. Makes it harder to falsify. But when they want to do an attribution study, you get 5 models, 19 runs, and the less reliable models tossed off the island.

Interesting.

Again I ask the eternal question: is anyone surprised when models built (and then tuned) to reflect the reality of a climate system with GHG don’t match when they have the GHG removed? It’s like Tom said: “…bubbling water has bubbles in it.”

mosh #182: I would ask if you’re surprised about selectively choosing models and such, but I know you’re not surprised at all.

Re: #182

Is that “interesting” nuanced or hyperbole? More questions: How would this analysis have been handled if it were strictly a scientific endeavor? Or if it were marketing immediate mitigation for AGW? Does AR4 follow the RC line of “reasoning” or is it vice versa?

I offer for consideration my Very Own Complete And Logical Global Climate Model (i.e., the VOCAL-GCM): only ten years’ worth of future anomaly prediction is allowed; but hey, it’s good enough for government work. That will be $2,000,000 please, and thank you for your generous support.

steven m

FWIW, IMO this is a brilliant idea. Revisiting the attribution studies in light of the currently-convenient criterion for what is consistent with what should be worth a raft of publications. Hoist, meet Petard.

steven mosher #182

Error bars when you need’em, only one or two computer runs when you need’em. Ain’t Science wonderful.

re 183, 184, 186 and 187.

It’s funny: when the Douglass thread got going and I saw what their notion of “consistent” was, it got me thinking about the attribution studies. Partly because I was also looking at the modelE response to GHG forcing.

Hoist and petard, exactly

Then Gavin showed the huge spread of forecasts for the next twenty years.

I asked a simple question. Do you use the bad hindcasters to forecast?

Yup. hmm.

where’s beaker?

Anyway, more sources to dig through. Chapter 9 of AR4: interesting reading.

gavin (#317) on RC

>[Response: But Jerry, a) what are the high resolution models diverging from? b) the climate models don’t resolve mesoscale features, and c) what added forcing terms are you talking about? – gavin]

But Gavin, a) the models are diverging from each other in a matter of less than 10 days due to a small perturbation in the jet of 1 m/s compared to 40 m/s, as expected from mathematical theory; b) the climate models certainly do not resolve any features less than 100 km in scale, and features of this size, e.g. mesoscale storms, fronts, hurricanes, etc., are very important to both the weather and climate. They are prevented from forming by the large unphysical dissipation used in the climate models. c) any added forcing terms (inaccurate parameterizations) will not solve the ill posedness problem; only unphysically large dissipation that prevents the correct cascade of vorticity to smaller scales can do that.

Jerry

188 steven mosher says:

where’s beaker?

He got tired of saying ‘SD fits anything, SEM fits nothing’ instead of ‘you can’t compare a model ensemble with observations’

But of course, if I remember correctly, he (she, it) had a problem with the observations being acceptable. Which I agree to, but satellite is more trustworthy than air/water thermometer readings, so take that FWIW.

But, hey.
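beaker’s “SD fits anything, SEM fits nothing” point is easy to make concrete with invented numbers (nothing here comes from the actual AR4 runs): against the ensemble standard deviation, a wide ensemble makes almost any observation look “consistent”, while the standard error of the ensemble mean shrinks with the number of runs and rejects nearly everything:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical ensemble of trends (degC/decade) -- a wide spread,
# loosely in the spirit of the AR4 spaghetti plots
n_runs = 58
ensemble = rng.normal(loc=0.20, scale=0.15, size=n_runs)
observed = 0.05                 # a hypothetical observed trend

sd = ensemble.std(ddof=1)
sem = sd / np.sqrt(n_runs)      # standard error of the ensemble mean
z_sd = abs(observed - ensemble.mean()) / sd
z_sem = abs(observed - ensemble.mean()) / sem

# against the SD the observation sits within ~2 sigma ("consistent");
# against the SEM it is rejected by a wide margin
print(f"z vs SD  = {z_sd:.2f}")
print(f"z vs SEM = {z_sem:.2f}")
```

Since z_sem = z_sd × √n, adding runs makes the SEM test stricter without the ensemble itself getting any better, which is the whole dispute in a nutshell.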

Hank Roberts (RC #323),

> What kind of supercomputer did those people use? What model were they running, were they using one of those otherwise in use, or did they write their own? What’s puzzling is that of the models that are written up most often, while there are differences, they all seem quite credibly similar and none of them has had one of these runaway behaviors.

> What’s so different about the one Dr. Browning is talking about? How can it go squirrely so quickly compared to the other climate models?

For a summary read the manuscript’s abstract. Jablonowski and Williamson used 4 different models. One was the NASA/NCAR Finite Volume dynamical core, one was the NCAR spectral transform Eulerian core of CAM3, one was the NCAR Lagrangian core of CAM3, and one was the German Weather Service GME dynamical core. Note that the models were run using varying horizontal and vertical resolutions (convergence tests) for an analytic and realistic steady state zonal flow case and a small perturbation on that state. Although a dynamical core theoretically should be only a numerical approximation of the inviscid, unforced (adiabatic) hydrostatic system, the models all used either explicit or implicit forms of dissipation. One can choose just the Eulerian core to see how the vorticity cascades to smaller scales very rapidly as the mesh is refined and the dissipation reduced appropriately. This cascade cannot be reproduced by the models with larger dissipation coefficients.

As I have repeatedly stated, unphysically large dissipation can keep a model bounded, but not necessarily accurate. And because the dynamics are wrong, the forcings (inaccurate approximations of the physics) must be tuned to overcome the incorrect vorticity cascade.

Jerry
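The selective damping Jerry describes can be seen in the spectral damping factor of a ∇⁴ hyperviscosity term alone: each Fourier mode k is multiplied by exp(−ν k⁴ t), so a coefficient chosen to control the smallest resolved scales barely touches the large scales while erasing the mesoscale end of the vorticity cascade. A sketch with an invented coefficient (the value is illustrative only, not taken from any actual model):

```python
import numpy as np

nu = 1.0e9           # hypothetical hyperviscosity coefficient, m^4/s
day = 86400.0        # one day, in seconds

# wavenumbers standing in for ~1000 km, ~100 km and ~10 km features
scales = {"1000 km": 1e6, "100 km": 1e5, "10 km": 1e4}
surviving = {}
for label, length in scales.items():
    k = 2 * np.pi / length
    # a del^4 hyperviscosity term damps mode k by exp(-nu * k^4 * t)
    surviving[label] = np.exp(-nu * k**4 * day)
    print(f"{label}: fraction surviving one day = {surviving[label]:.3e}")
```

The k⁴ dependence is the point: because the damping is so scale-selective, the run stays bounded, but the ~10 km features (and whatever they feed back to the large scales) are simply gone after a day.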

re: #178

A very, very short summary:

1. Documentation, documentation, documentation … document everything in all models, methods, and software.

2. Verification that the coding is correct.

3. Verification that all numerical solution methods are performing with expected theoretical characteristics.

4. Development of Validation plans, procedures, and processes.

5. Carry out Validation processes. A continuous activity. It’s a process.

6. Develop and implement an approved Software Quality Assurance Plan for maintaining the production-level code.
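For item 3, the standard concrete check is a grid convergence study: run the method at several resolutions against a problem with a known solution and confirm the error shrinks at the theoretical rate. A minimal sketch for a second-order central difference (my own toy case, not code from any GCM):

```python
import math

def second_derivative(f, x, h):
    """Central difference approximation to f''(x), formally O(h^2)."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

# exact solution known: d2/dx2 sin(x) = -sin(x)
x = 1.0
exact = -math.sin(x)

hs = [0.1, 0.05, 0.025]
errors = [abs(second_derivative(math.sin, x, h) - exact) for h in hs]

# observed order = log2(error ratio) under mesh halving; should be ~2
orders = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print(orders)
```

A scheme that fails this kind of test at the resolutions actually used is not performing with its expected theoretical characteristics, which is precisely what item 3 is meant to catch.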

re 191

Can a layman take this to mean that the results reported for GCMs are due not to the physics that they contain but to the parameterizations that are added to them so that they will get the results expected?

Re larger computers

Is this exchange (slightly modified for typos, clarity and spacing) of any interest to you math people? Are CPUs these days big enough to avoid precision loss if interim storing on disc is needed? Is the carrying of enough significant figures in calculations and in constants like pi a source of error for climate modellers? Try seeing how many significant figures you get for pi in Excel. (This part of the note is by Geoff Snr.)

From: support@graphpad.com

Sent: Thursday, 22 May 2008 1:53 AM

Subject: Precision and Accuracy [CASE#233853]

Prism does calculations in double precision, but only stores data and results in single precision. We don’t generally share information about how it was programmed. Why are you asking?

Harvey

On 2008-05-20 23:30:03.0 you wrote:

Precision and Accuracy

How many significant figures can your software carry?

What language was it written in?

Who wrote the compiler and linker?

Do you use their compile, link and runtime libraries?

Have your team written any math or statistical routines?

Sincerely

Geoffrey James Sherrington (Geoff Jnr)
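Geoff’s single-versus-double precision question can be made concrete: an increment smaller than the accumulator’s machine epsilon is lost entirely in single precision, so a running single-precision total can stop moving altogether. A sketch (this says nothing about Prism’s actual internals, which the vendor declined to describe):

```python
import numpy as np

one32 = np.float32(1.0)
tiny32 = np.float32(1e-8)

# 1e-8 is below single-precision resolution at 1.0 (machine eps ~ 1.19e-7),
# so the addition is rounded away completely
print(one32 + tiny32 == one32)        # True: the increment is lost
print(np.float64(1.0) + 1e-8 == 1.0)  # False: double precision keeps it

# consequence for accumulation: the single-precision total never moves,
# no matter how many increments are applied
n = 100_000
total32 = one32
for _ in range(n):
    total32 = total32 + tiny32        # stays exactly 1.0 in float32
total64 = np.float64(1.0) + n * 1e-8  # 1.001 in double
print(total32, total64)
```

This is why “calculates in double, stores in single” matters: any quantity round-tripped through single-precision storage between accumulation steps inherits the float32 resolution, not the float64 one.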

#182 S Mosher

In terms of a consistent approach to assumptions, using only the best 5 for the hindcast and all for the forecast would normally invalidate stated or “normally assumed” assumptions. I remember when Gavin and others were saying that past recorded temperatures were NOT used to calibrate the models. That appears to be correct. However, I would say that the situation is worse. Models were chosen to indicate good hindcasting, and then all models were “spliced” into the forecast. Kind of a combination of “buying a pig in a poke” and “bait and switch”, with the worst properties of each.

Does this mean that they ran the models many times and not only used the models that “fit”, but also threw out runs that did not fit? Thus the 58 vs 19 simulations? Would that information be in Stott (2006b)?

http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf

re 195, John. It appears this is what they did.

1. One hindcast experiment with 14 models and 58 runs, including all forcings, run from 1850 to 2000. Then those same models do forecasts, and they tack on the years 2001-2005 from those forecasts.

2. One hindcast experiment with 5 models and 19 runs, with no human GHG forcing. In this case they only use models whose control-run drift is less than 0.2°C per century.

What they purport to show is that experiment one tracks the observations better, and thus the change is attributed to GHG.

Questions: Why tack on 5 years of a forecast onto a hindcast? Why the decision to use a smaller ensemble? Why eliminate models that drift in one case and not the other?
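For reference, the drift screen itself is mechanically simple: fit a linear trend to each control run and drop any model whose drift exceeds 0.2 °C per century. A sketch with invented control-run data (the model names and numbers are made up, with no relation to the actual CMIP archives):

```python
import numpy as np

rng = np.random.default_rng(1)

def drift_per_century(series_degc, years):
    """Least-squares linear trend of a control run, in degC per century."""
    slope_per_year = np.polyfit(years, series_degc, 1)[0]
    return slope_per_year * 100.0

years = np.arange(200)                     # a 200-year control run
# invented control runs: model_A nearly drift-free, model_B drifting warm
runs = {
    "model_A": 0.0005 * years + 0.1 * rng.standard_normal(years.size),
    "model_B": 0.0040 * years + 0.1 * rng.standard_normal(years.size),
}

threshold = 0.2                            # degC per century, as in AR4 Fig. 9.5
kept = [name for name, s in runs.items()
        if abs(drift_per_century(s, years)) <= threshold]
print(kept)                                # model_A passes, model_B is dropped
```

The questions above are about where this filter was applied, not how it works: the same screen applied (or not applied) on both sides of the comparison is what would make the attribution test even-handed.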

#196 Looks like a must-read. Might need a new thread for this one.

Dan Hughes, #192:

It is easy to presume that none of the GCMs has ever been subjected to a reasonably complete software quality assurance audit. But before an audit can be done, one has to ask the question, what software QA standards will be applied? Should climate modeling software be treated differently than other categories of software, and should this consideration be part of the software QA standards themselves?

We can predict with some certainty that the modelers will oppose the application of any reasonably tight software QA program; and if they lose that fight, they will then seek to have the QA standards themselves revised so as to reduce the impact and the effectiveness of the SQA program against their own category of software.

This is an arduous and expensive task for complicated code systems. The modelers are likely to respond, give us X more dollars please or else we won’t do it. That would be a perfectly reasonable request on their part, assuming the particular GCM in question is thought to be of some reasonably useful value for its intended purpose, and merits spending a lot of money to make it a production-grade software system.

This raises the question of where and how the underlying physics and related mathematics is documented relative to where and how the software code is documented, and how the relationships among the three are documented and maintained. There must be a traceable pathway within the documentation from the physics on through to the mathematics on through to the code base.

We can predict that the definition of “validation” will become an extremely contentious issue in the development of the SQA plan. This will become a very sticky wicket if validation is formally tied within the SQA plan to periodic real world observations.

Same comment as for Item 4. In addition, one must also implement a process of Requirements Management. If the requirements are subject to change, then the validation criteria must likely change. It all has to match up from beginning to end, from one side to the other.

One also has to mark the input and output data with the code revision number that was used to process and produce it. And one has to store it all away – the code, the data, and the documentation – as “record material in electronic format” according to accepted records management practices.

Let’s examine the topic of GCM validation and verification in another context. In the context of asking for a useful exposition of how 2xCO2 yields 2.5C global warming, Steve McIntyre has said that he views the GCMs as intellectual exercises.

Suppose such an exposition as Steve desires were actually to be written, and suppose too that the GCMs were to be thoroughly documented as to the content and implementation of their underlying climate physics.

Suppose we were then to compare the 2xCO2-yields-2.5C exposition to the various GCMs and their underlying physical and mathematical assumptions, and also the various GCMs to each other, in terms of both their overall approach to the task and their low-level technical detail.

What similarities and differences would we find among them?

Would the similarities and differences that we find be of any relevance in assessing the capability of the GCMs to do the job that is being asked of them?

#197 You mean “no drift GT 0.2C”? I wonder if the quote should actually read “no more than a positive 0.2°C per century”?

Don’t know how I messed up the link, but here is another quote that I think is interesting.

http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf

The more I read this, the more it appears to me to support the approach of the Douglass paper. Here they state that structural uncertainties have to be evaluated, to make sure that a result found by a particular model is not an artefact of its individual structure, by looking at an ensemble of different models. They are using the model ensemble to quantify and understand. In particular, to understand the

Long and short, IMO: the model ensemble is accepted because it “seem(s) to work”. Hence the question: “Is there some statistical test for a system that is accepted simply because it seems to work? If so, what is the test?”

This is circular reasoning, correct?

re 200. Ya, sorry: “GT” was meant to be “>”.

Gavin (#324 on R),

>[Response: Your (repeated) statements do not prove this to be the case. Climate models do not tune the radiation or the clouds or the surface fluxes to fix the dynamics – it’s absurd to think that it would even be possible, let alone practical.

Oh please, spare me. Did you read the second manuscript by Williamson, or look at the manuscript by Sylvie Gravel on Climate Audit? Do you read anything, or just spout verbiage?

>Nonetheless, the large scale flows, their variability and characteristics compare well to the observations.

Over what time period? That is not the case for the Jablonowski and Williamson test, which is a small perturbation of a large-scale flow.

>You keep dodging the point – if the dynamics are nonsense why do they work at all?

No, you keep dodging the point that the dynamics are not correct and the parameterizations are tuned to hide the fact.

>Why do the models have storm tracks and jet streams in the right place and eddy energy statistics and their seasonal cycle etc. etc. etc.?

Are we talking about a weather model or a climate model? Pat Frank (and others) have shown that there are major biases in the cloudiness (water cycle).

> The only conclusion that one can draw is that the equations they are solving do have an affiliation with the true physics and that the dissipation at the smallest scales does not dominate the large scale circulation.

Or that they have been tuned to overcome the improper cascade of vorticity.

By “smaller scales” I assume you mean that mesoscale storms, fronts, and hurricanes are not important to the climate, or that there is no reverse cascade over longer periods of time. That is news to me, and I would guess to many other scientists.

> It is not that these issues are irrelevant – indeed the tests proposed by Jablonowski et al are useful for making sure they make as little difference as possible – but your fundamentalist attitude is shared by none of these authors who you use to support your thesis. Instead of quoting Williamson continuously as having demonstrated the futility of modeling, how about finding a quote where he actually agrees with your conclusion? – gavin]

The tests speak for themselves. That is why I cited the references. Did you ask Dave? I worked with him for years and the only reason he got into climate modeling was because it allowed him to pressure NCAR into giving him his Senior Scientist appointment.

Jerry

My last reply to criticism has been deleted from the, “What the IPCC Models Really Say” thread at RC. I know the comment made it onto the board, so I guess the moderator found it offensive. Here it is, in its entirety.

——————————————–

# Pat Frank Says:

Your comment is awaiting moderation. 23 May 2008 at 6:25 PM

Re. #310 — B. P. Levenson wrote, “Frank’s article assumes that global warming goes away if you take out the models.” It assumes no such thing.

“But in a larger sense Frank’s argument is ridiculous.” You’ll have to demonstrate that on the merits. Unsupported assertions won’t do it. GCMs may have a million floated variables. John von Neumann reportedly said that he could model an elephant with 4 adjustable parameters, and with 5 could wave the trunk. With a million variables, GCMs can be tuned to match any past climate, and that doesn’t mean anything about predicting future climate.

“Frank’s article amounts to a lengthy argument that something we can see happening isn’t happening.” The article says nothing about climate as such. It’s about error assessment and model reliability; nothing more.

#311 — Ray Ladbury, your supposed “insidious motives” are products of your mind, not mine.

#312 — Dan, for proof see article references 13 and 40.

Other examples:

M. L. Khandekar, T. S. Murty and P. Chittibabu (2005) “The Global Warming Debate: A Review of the State of the Science” Pure Appl. Geophys. 162, 1557-1586; L. C. Gerhard (2004) “Climate Change: Conflict of observational science, theory, and politics” AAPG Bulletin 88, 1211-1230; V. Gray (1998) “The IPCC future projections: are they plausible?” Climate Research 10, 155-162; W. Soon and S. Baliunas (2003) “Global Warming” Prog. Phys. Geog. 27, 448-455.

See also Hendrik Tennekes’ “A Personal Call For Modesty, Integrity, and Balance” here: http://tinyurl.com/2jxqgs

These are far from exhaustive. Were any of them picked up by news media for public re-broadcast?

——————————————–

Honestly, I don’t see anything offensive there, unless it’s straightforward self-defense that offends.
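Pat Frank’s von Neumann point a few paragraphs up (enough adjustable parameters will fit any past record without gaining any predictive skill) is easy to sketch numerically. The data below are made up purely for illustration; nothing here comes from an actual GCM:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(10.0)
# A made-up "past climate": linear trend plus noise.
y = 0.1 * t + rng.normal(0.0, 0.2, t.size)

# Over-parameterized model: degree-9 polynomial through 10 points.
over = np.polynomial.Polynomial.fit(t, y, deg=9)
# Parsimonious model: a straight line.
line = np.polynomial.Polynomial.fit(t, y, deg=1)

in_sample_err = np.max(np.abs(over(t) - y))   # essentially zero: perfect hindcast

future = np.arange(10.0, 15.0)
true_future = 0.1 * future                    # the underlying trend, noise-free
over_forecast_err = np.abs(over(future) - true_future).max()
line_forecast_err = np.abs(line(future) - true_future).max()
# The flexible model fits the past far better and forecasts far worse.
```

The point is generic to any heavily tuned model, not specific to polynomials: a perfect hindcast says nothing about forecast skill.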

#204

Sounds familiar.

You miss the point of RC, Pat. They’re a support group, not a discussion group. But I will say I’m glad Jerry and yourself are cross-posting your responses to comments on RC here. It’s instructive, for instance, that Levenson’s deceptive misrepresentation is permitted but your reply is not.

Mosh, continue picking that scab you’ve found re attribution studies; I think there might be an infection underneath.

Pat Frank (#204),

I wrote the exact same thing in response to Levenson’s first statement, and the comment was deleted. It is very clear that RC is more than willing to show comments that make outrageous claims, but as soon as someone shows that the claims are complete nonsense, the rebuttal is deleted. In fact I have had to go to the point of saving all of my comments (because one doesn’t know what will be allowed) in order to be able to repost them on Climate Audit. Completely disgusting and unethical behavior.

Jerry

Gerald Browning and Gavin’s responses #203 CA, #324 RC:

Gavin’s responses

1. >[Response: Your (repeated) statements do not prove this to be the case. Climate models do not tune the radiation or the clouds or the surface fluxes to fix the dynamics – it’s absurd to think that it would even be possible, let alone practical.

From http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf

2. >Nonetheless, the large scale flows, their variability and characteristics compare well to the observations.

From http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf

Emphasis mine.

Gavin (#330)

So you don’t know the answer for the need to resolve the mesoscale storms, fronts and hurricanes, but then you state that a climate model is accurate without resolving them. Is there a contradiction here? Wouldn’t a good scientist determine the facts before making such a statement?

Jerry

Gavin #3: The only conclusion that one can draw is that the equations they are solving do have an affiliation with the true physics and that the dissipation at the smallest scales does not dominate the large scale circulation…

What is an “affiliation”? You join the Hansen-AGW club??? Satire intentional. Not that they are the “true physics”, but that they just kinda hang out with true physics, you know, AGW-physics homies.

NOT TRUE. Another explanation, quoted below from http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf (this is the physics that can’t), emphasis mine:

It is apparent that it is not the physics but that “they seem to work”. Cherry-picking may be a more appropriate term.

Warning: blockquotes may be off.

All,

I finally caught Gavin in a contradiction of his own making.

Gavin (#231),

> [Response: Climate models don’t in general have mesoscale storms or hurricanes. Therefore those features are sub-gridscale.

And thus all of the dynamics and physics from these components of climate are not included nor accurately modeled. Fronts are one of the most important controllers of weather and climate. You cannot justify neglecting them in a climate model, yet claim a climate model accurately describes the climate.

>Nonetheless, the climatology of the models is realistic.

Realistic and accurate are two very different terms. You are stating that fronts are not important to climate, is that correct?

>Ipso facto they are not a first order control on climate.

Please cite a mathematical proof of this affirmation.

> As far as I understand it, the inverse cascade to larger-scales occurs mainly from baroclinic instability, not mesoscale instability, and that is certainly what dominates climate models. – gavin]

If this assertion is correct (please cite a mathematical reference), the jet cannot be accurately approximated by a 100 km mesh across its width. Therefore the model does not accurately model the jet that you claim is important to the inverse cascade. Now you have a scientific contradiction based on your own statements.

Jerry

And Gavin actually responded to this remark:

# Gerald Browning Says:

24 May 2008 at 8:00 PM

Gavin (#330)

So you don’t know the answer for the need to resolve the mesoscale storms, fronts and hurricanes, but then you state that a climate model is accurate without resolving them. Is there a contradiction here? Wouldn’t a good scientist determine the facts before making such a statement?

Jerry

[Response: But the fact is that climate models do work – and I’ve given a number of examples. Thus since those models did not have mesoscale circulations, such circulations are clearly not necessary to getting a reasonable answer. I’m perfectly happy to concede that the answer might be better if they were included – but that remains an open research question. – gavin]

All,

So if a jet is to be properly approximated across its width, the mesh size will need to be less than 10 km, and the dissipation can be reduced accordingly.

This will be essentially the same as the runs Lu et al. made, or I made on this site. Then let us see what happens with the hydrostatic models.

Jerry

John (#210),

Thank you. I know that the amount of water fallout is chosen as some fraction of the size of a mesh rectangle. This is clearly not based on any physical principle, but on what works, exactly as you state. And that affects radiation and many other things.

Jerry

Gerald #213 from Gavin

[Response: But the fact is that climate models do work – and I’ve given a number of examples. Thus since those models did not have mesoscale circulations, such circulations are clearly not necessary to getting a reasonable answer. I’m perfectly happy to concede that the answer might be better if they were included – but that remains an open research question. – gavin]

from http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf

emphasis mine

So much for his physics arguments; he is saying, in agreement with the referenced pdf: I, Gavin, win simply because they (Gavin’s arguments) seem to work.

http://www.realclimate.org/index.php/archives/2008/05/freeman-dysons-selective-vision/langswitch_lang/tk

I addressed this from the perspective of an engineer of 20+ years’ experience. But it got lost, I assume. As I stated on RC, “I would rather be made a fool than censored.” I am posting a synopsis of what I posted on RC.

This is what I had severe problems with accepting.

There is a rule for engineers: without contrary evidence, when using gross, as in Gross Domestic Product, one uses a factor of 10. As in: a computer board costs you a gross of $1; one needs to make the consumer pay $10 in order to make a profit.

A few translates to about 33%. Using the economic multiplier effect, this translates to 44% (0.33 + 0.33 * 0.33 + …, expressed as a percentage). Note the consistency of the following significant figures.

So, in the USA: 46% average taxes on gross, 44% cost of CO2, and the most liberal estimate of real disposable income, after paying for food, gas, religion, and retirement plan out of the 54% left after taxes (100 – 46), is 30%, depending on taxes, government subsidies and actual taxes. We obtain 46 + 44 + 30 = 120%. This means that unless the US can provide a 20% subsidy above GROSS domestic product (a trivial impossibility), we cannot afford cutting CO2 emissions, all things being equal (120% = 100% only happens in GCMs as far as I know). Satire off.

I’m sensing collapse of a house of cards here.

Tom Vonk,

You might be interested in this reply.

# Gerald Browning Says:

24 May 2008 at 7:49 PM

Gavin (332),

Please cite a reference that contains a rigorous mathematical proof that the climate is chaotic. As usual you make statements without the scientific facts to back them up. I suggest that the readers review Tom Vonk’s very clear exposition on Climate Audit in response to this exact claim, in comment #174 of the thread called Koutsoyiannis 2008 Presentation, if you want to know the scientific facts.

Jerry

[Response: No such proof exists, I never claimed it did, and nor do I suspect it is. However, NWP models and climate models are deterministic and individual realisations have a strong sensitivity to initial conditions. Empirically they show all the signs of being chaotic in the technical sense, though I doubt it could be proved formally. This is a different statement to claiming the climate (the attractor) itself is chaotic – the climate in the models is stable, and it’s certainly not obvious what the real climate is. (NB Your definition of ‘clear’ is ‘clearly’ different to mine). – gavin]
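Gavin’s weather/climate distinction here (trajectories wildly sensitive to initial conditions, while the attractor’s long-run statistics stay stable) can be illustrated with the classic Lorenz system. This is a toy sketch only, not a claim about any GCM; the crude Euler integrator and parameter values are my own choices:

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations (crude but adequate here).
    x, y, z = s
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

def run(s0, n=20000):
    traj = np.empty((n, 3))
    s = np.array(s0, dtype=float)
    for i in range(n):
        s = lorenz_step(s)
        traj[i] = s
    return traj

a = run([1.0, 1.0, 1.0])
b = run([1.0, 1.0, 1.0 + 1e-8])   # tiny perturbation of the initial state

# "Weather": the two trajectories decorrelate completely ...
max_gap = np.abs(a - b).max()
# ... but the "climate" (long-run mean of z on the attractor) barely moves.
z_mean_gap = abs(a[5000:, 2].mean() - b[5000:, 2].mean())
```

Whether the real climate system, with its forcings and feedbacks, behaves like this toy is of course exactly the point under dispute in the thread.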

All,

Here is Gavin’s response and my final message to him.

[Response: Many things are not included in climate models, who ever claimed otherwise? Models are not complete and won’t be for many years, if ever. That is irrelevant to deciding whether they are useful now. You appear to be making the statement that it is necessary to have every small scale feature included before you can say anything. That may be your opinion, but the history of modelling shows that it is not the case. Fronts occur in the models, but obviously they will not be as sharply defined – similarly the Gulf Stream is too wide, and the Agulhas retroflection in the South Atlantic is probably absent. These poorly resolved features and others like them are key deficiencies. But poleward heat transport at about the right rate is still captured, and the sensitivity of many issues – like the sensitivity of the storm tracks to volcanic forcing – matches observations. This discussion is not about whether models are either perfect or useless, it is whether given their imperfections they are still useful. Continuing to insist that models are not perfect when I am in violent agreement with you is just a waste of everyone’s time. (PS. If A & !C => B, then C is not necessary for B. It’s not very hard to follow). (PPS, try this seminar). – gavin]

Good try. I now know that you are full of hot air, and I won’t waste any more time arguing with someone who cannot cite any mathematical proofs of any of his statements. If the models do not resolve the basic features (easily proved just by their crude mesh), then it is clear that the model solution is not near reality, or even near the solution of the differential equations. And given that fact, the dynamics are wrong, and then necessarily the physics.

Jerry

All,

Note that the reference that Gavin cites is not very rigorous and if anything supports my arguments more than his. I can pull out the relevant statements by Lynch if desired.

Jerry

OK, Jerry and John,

Since climate models are not physics we have to conclude that all engineering models are not physics.

Do you agree?

You also don’t seem to understand Tapio Schneider’s point about predictability of the first kind and the second kind.

This is already well known in turbulence research; see Pope’s book on Turbulent Flows. Turbulence models with parameterizations for the small scales diverge from the fully resolved solution on a short time scale, but the statistics of the flow will be correct if a proper dissipation term is used. Your statements about unphysically large dissipation are thus wrong: the statistics can only be correct if, by adding a dissipation term, the correct mean dissipation is obtained.

#219 (Jerry)

Would this be a reasonable layman’s summary of your criticisms of the models:

There is no mathematical evidence that the GCMs used to model the climate actually solve the physical equations that they are claiming to solve. If they did they should be able to reproduce a number of critical climate features such as weather fronts and cyclones. This means that any success at hindcasting is most likely the result of parameter tuning and not because they correctly model the physics. For this reason, any predictions made by these models must be treated with extreme caution.

#221 Your statement is incorrect. There are models that get the physics right. One such model, which ran on an old 8088 with a math coprocessor, was called MODFLOW. It modelled groundwater flow using solvable differential equations in a finite-difference matrix. The mesh could be reduced so that it could model contamination from an underground storage leak, or the mesh could be expanded until it modelled a major aquifer for an area 100,000 times larger. It was “useful”.

In fact, I modeled a site where the conductivity of the soil and the water elevations did not agree with what the model said should occur. It made no sense. I reduced the mesh and started exploring by changing the conductivity in certain cells. Once the model and the actual water elevations agreed, I had constructed in my model two cells that were not native soil, but had the conductivity of “sugar” sand. The site had been closed for about five years. We went through the files and found that the model had gotten the number, location, and size correctly. What had happened was that there were 2 tanks that had been dug up and backfilled with sugar sand. The model had the general size and location correct. It also had that there were 2 areas thus filled.

After such an actual validation, most quit arguing about the validity of my models. The one who did continue to argue, and had our engineering firm design it to his specifications rather than the ones that my modelling said should be used, faced a $1,000,000 lawsuit. I only had to go to deposition and show that it was not designed according to my specifications, after which the suit was dropped against myself and my firm.

The point of several threads has really been what is “useful”. However, one cannot simply wave hands and say you have the physics right if the statement is untrue. Jerry and Tom have been pointing out some big holes in claiming the physics is correct. This is known by modellers; see http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf. Just read the introduction (it is well written and easy to follow) and you will see that what Jerry and Tom are saying is in agreement with what the modellers say of their models. To me the question has always been that of validation, and verification of usefulness.

The modflow equations for groundwater were actually shown to be reasonable using resistors with known values and measuring voltage as a proxy for conductivity and water elevation.
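For readers unfamiliar with the finite differencing described above, here is a minimal sketch of the idea: steady-state head solved on a grid by relaxation. This is a generic Laplace-equation toy with uniform conductivity and made-up boundary heads, not MODFLOW itself (real codes carry per-cell conductivities and inter-cell harmonic means):

```python
import numpy as np

n = 21
h = np.zeros((n, n))   # hydraulic head on an n-by-n grid (made-up units: m)
h[:, 0] = 10.0         # fixed head along the left boundary
h[:, -1] = 5.0         # fixed head along the right boundary

for _ in range(5000):  # Jacobi relaxation toward steady state
    h[0, :] = h[1, :]      # no-flow (zero-gradient) top boundary
    h[-1, :] = h[-2, :]    # no-flow bottom boundary
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                            h[1:-1, :-2] + h[1:-1, 2:])

# With uniform conductivity the head falls linearly from 10 m to 5 m,
# so the value at the grid centre should converge to 7.5 m.
centre_head = h[n // 2, n // 2]
```

The resistor-network analogy in the comment above is apt: each interior cell’s head is the average of its neighbours, exactly as node voltages average in a uniform resistor grid.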

John,

The authors refer to verification and validation studies, so people are working on it! Furthermore, they discuss the usefulness of ensemble simulations, which implies that they think that predictability of the second kind is possible and that the internal variation is not arbitrarily large. And when they talk about comparisons with observations they say ‘… all those activities have helped in improving the models, and have greatly increased our confidence that the models capture the most relevant processes …’ Nowhere do they say that climate models are unphysical!

It is a good article and I recommend reading it. It points out that validating and verifying climate models is not trivial and should be done carefully (not like Douglass and Koutsoyiannis). But they don’t endorse Jerry’s and Tom’s view in any way!

Keep pounding Schmidt on chaos, attractor geometry, ergodicity, and ensemble statistics. If he understands these things, he hasn’t shown it yet. And if that’s the case, then he’s not qualified to judge GCM performance vs reality.

Bender,

These are issues that you don’t understand yourself.

#222

Jerry is saying that some aspects of the dissipation are unphysical. Gavin is granting this, but arguing that despite being unphysical, they are useful approximations.

That something is ‘unphysical’ does not mean it isn’t a useful (empirically valid*) approximation. But a fudge in this case would increase the odds that the parameterized dissipation is a poor approximation. And if it is a poor approximation, then it increases the chance that the CO2 sensitivity parameters are off (through compensatory parameterization).

[*Empirical validity, however, can only be judged in out-of-sample tests, which Schmidt & the IPCC are desperate to avoid.]

What this is about is vertical convection and cloud formation: anything mesoscale that could act as a powerful negative feedback, which might override many or all of the positive feedbacks we hear about continually in the media. IMO this is about ocean-atmosphere convection, especially in the tropical S. Pacific in DJF. Internal variability and trends in cloudiness vs. external forcing via the GHG trend.

Even if GCMs correctly reconstruct the past, they do not necessarily predict the future. Especially if there are unphysical dissipations that are poor approximations to reality.

#226

Prove it.

#226

Laughable that a blogger named ‘bender’ should be held to the same high standard as Schmidt and the entire IPCC, but ok. Bring it on.

226 (gb) That he could make such a marvelously pertinent list argues some understanding.

===================================

#230 Ironically, #227 was a crosspost with #226.

Awaiting your lecture, dr gb …

After spending a year reading the arguments and counter-arguments concerning the validity and usefulness of the GCMs for climate prediction, I have reached the conclusion that without having a significantly better understanding of the actual physical processes operating within their actual physical systems, the modelers are simply chasing computational rainbows.

What kinds of field experiments and field observations are necessary to gain a significantly better understanding of the actual physical processes operating within their actual physical systems? How much data, and of what kind, must be collected in the field, and how? What will this cost, and how much time will be needed to collect the data, to process it, and to interpret it?

#232

Great questions. Let’s ask the authority. gb?

#226 = Driveby shooting with blanks.

It is possible that the models are a useful approximation. BUT: the mechanism of negative feedback discovered by Roy Spencer operates below the grid scale, and is thus completely missed by the models; and it specifically involves convection. Thus, just because dissipation is approximated such that the past climate is simulated OK does NOT mean the models will work properly under elevated CO2.

Actually, his car stalled as he essayed the getaway and he’s out there, mute and mortally wounded.

===========================================

Useful for what? Useful in forecasting GMST? Look at the spread in the models after 20 years. As Gavin points out, some realizations show a negative temperature trend after 20 years. Is that useful?

Useful for what? Useful for saying we should ignore the past 7 years of data.

Here is what models are really useful for: fitting past data. Are they useful for projecting? I dunno. Wait 20 years. But I would not act on them until their usefulness was proved.

#235

Slight reword:

It is possible that the models are a useful approximation of the PAST.

Q: Is it possible that the models are a useful approximation of the FUTURE?

This depends on whether or not a powerful negative feedback from ocean clouds kicks in at higher GMT. Do the GCMs simulate such “mesoscale” processes? No. However, this “mesoscale” process has global-scale consequences.

The best way to deny this possibility is to deny funding to those wishing to study it.

Note: I never said in #225 that Gavin did not understand those topics. I merely suggested that his replies at RC do not communicate that understanding. To me, they suggest a misunderstanding. But I’m totally willing to be convinced otherwise. [While pounding Schmidt, be sure to pound me too. Let’s all pound each other. That’s how science works, folks.]

from http://www.iac.ethz.ch/people/knuttir/papers/tebaldi07ptrsa.pdf emphasis mine

From gb

In what way should Douglass and Koutsoyiannis have proceeded? A model’s predictive skill is usually measured by comparing the predicted outcome with the observed one; given that atmospheric temperature, precipitation, pressure, vertical profiles, ocean temperature, etc. can be compared through some metric, what metric do you advise?

I agree with the comments about the statistics that Becker posted. However, I was left somewhat taken aback in that using such a strict method basically meant that model skill could vary from a negative correlation to a near-perfect correlation and not be inconsistent with the model; that it “paid” to have as many poor or exaggerated models as possible, such that the probability distribution was as large as possible. Where is the predictive usefulness or skill of such an ensemble? And does not such an ensemble invalidate the proof?/assumption noted below of

?

And since

shouldn’t Jerry’s and Tom’s comments such as

be addressed in a form where temperature and pressure profiles match the independent realizations of the true system, with agreement to boundary conditions, without aphysical forcing by the models (modellers)?

If multi-model simple averages are widely used, what is it that makes, say, Douglass in particular wrong? Especially given that model errors tend to cancel. Is this (“model errors tend to cancel”) an assumption, or has it been proven? If it has been proven, how and where? What metric did they use, and should we not take, say, Douglass and use the same metric if possible, or at least as a starting point for our measurement of predictive skill?
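To make the metric question concrete: the two consistency tests at issue (observation vs. the ensemble spread, and observation vs. the uncertainty of the ensemble mean) can disagree on the same data. The trend numbers below are hypothetical, chosen only to show the effect:

```python
import numpy as np

# Hypothetical model-trend ensemble (deg C/decade) and a hypothetical
# observed trend; the numbers are invented to illustrate the two tests.
models = np.array([0.15, 0.18, 0.20, 0.22, 0.24,
                   0.26, 0.28, 0.30, 0.32, 0.35])
obs = 0.13

mean = models.mean()                      # 0.25
sd = models.std(ddof=1)                   # spread of the ensemble
se = sd / np.sqrt(models.size)            # uncertainty of the ensemble MEAN

within_spread = abs(obs - mean) < 2 * sd  # obs lies inside the ensemble range
matches_mean = abs(obs - mean) < 2 * se   # obs agrees with the ensemble mean
# Here within_spread is True while matches_mean is False; and making the
# ensemble wider (more mutually inconsistent models) makes the first test
# EASIER to pass, which is exactly the oddity raised in the comment above.
```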

Re #148 & #150

Since a petaflop is 3 orders of magnitude greater than a teraflop, shouldn’t the augmented number required to reach a petaflop be 924 rather than 13? Or did #148 mean that they only needed 13[.16] times the current 76 teraflops?

Slightly OT: the posted discourse between Jeff and Gavin recalls a line from Richard Feynman’s appendix to the Rogers Report on the Challenger disaster, to wit: Plus ça change, plus c’est la même chose.

Finally, if one really seeks to comprehend the economic magnitude of AGW advocacy, check this out.
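For what it’s worth, the petaflop arithmetic above checks out both ways:

```python
# 1 petaflop = 1000 teraflops (3 orders of magnitude above a teraflop).
current = 76       # teraflops currently available (figure from the comment)
target = 1000      # teraflops in one petaflop

additional = target - current   # additive shortfall: 924 teraflops
factor = target / current       # multiplicative factor: about 13.16
```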

Re garbled French in #241,

That should have been:

Plus ça change, plus c’est la même chose.

My bad HTML.

Steve McIntyre,

The seminar at this URL was posted as a response to one of my comments on RC. Could you start this as a new thread, called Energy and Enstrophy (Vorticity) Cascades and Inverse Cascades, so we can address the issues discussed in this seminar in detail? It raises many of the scientific issues that RC continues to ignore.

http://mathsci.ucd.ie/met/seminars/Cascades_files/frame.htm

Jerry

Re # 238 bender

Answer: Unequivocally NO, on logic grounds. There is nothing known against which to compare their performance.

If you think these models have utility, why not adapt their useful components and apply them to trading in stocks and shares? That way you might lose money more rapidly. Also, the chances of calibration by hindcast are better.

I have yet to see a set of objectives for modellers, a structured design for investigation, a placement of resources to lessen replicative effort, the objectives of review points along the way, or the criteria that allow continuation or cause termination of effort. In all, as the history of science has shown, an unplanned approach is less effective than a planned one; and serendipity, though sweet when it occasionally happens, cannot be built into requests for more funds.

As before: What is the question, what is the acceptable answer and what is the path mapped between them?

Re Gerry #243,

Note that the link requires IE (or the IE Tab extension in Firefox).

#218

I never doubted for one second that Schmidt’s definition of clear was special, and that he’d probably give mud as an example of clear water.

I have also noticed that he equates chaos to sensitivity to initial conditions, which shows that he has, let’s say, a rather naive vision.

Indeed, sensitivity to initial conditions is a necessary but not sufficient condition for chaotic behaviour (a^x is very sensitive to initial conditions but not chaotic).

He also associates chaos with randomness (you know, statements like “small scale things cancel out”), which is not only naive but wrong.

One can especially appreciate the use of the phrase “technical sense” – as there is no definition of chaos in a “technical sense”, it is just a convenient buzzword trying to convey the notion that the author is a “technician”, which he clearly is not.

Indeed, deterministic chaos is a PROPERTY of a set of differential equations.

The existence or non-existence of chaotic behaviour depends not only on the form of the differential equations but also on the numerical values of the parameters appearing in the said equations. There is a whole field of study called “transition towards chaos” that shows that continuous variation of some parameters (e.g. the Reynolds number) will lead a perfectly deterministic, non-chaotic system described by the same equations towards different chaotic behaviours.

In this sense one doesn’t need to “doubt” the possibility of proving chaotic behaviour; the impossibility of such a proof is directly related to the fact that the climate is NOT described by a set of differential equations, and as, per definition, that will never be the case, it will never be possible to prove anything about the properties of unknown solutions to non-existent equations.
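Tom’s a^x example (sensitivity to initial conditions without chaos) can be checked numerically. The logistic map at r = 4 is used below as a standard stand-in for a chaotic system; the contrast is between unbounded but fully predictable amplification and bounded, decorrelating behaviour:

```python
def exponential(x0, n, a=2.0):
    # x -> a*x: sensitive to initial conditions, but trajectories just
    # grow monotonically and the RELATIVE error never grows.
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1])
    return xs

def logistic(x0, n, r=4.0):
    # x -> r*x*(1-x): sensitive AND bounded in [0, 1]; nearby
    # trajectories decorrelate completely (chaos).
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a1, a2 = exponential(1.0, 50), exponential(1.0 + 1e-12, 50)
b1, b2 = logistic(0.3, 50), logistic(0.3 + 1e-12, 50)

relative_err_growth = abs(a2[-1] / a1[-1] - a2[0] / a1[0])   # stays ~0
bounded = all(0.0 <= x <= 1.0 for x in b1)                   # chaos is bounded
final_gap = max(abs(x - y) for x, y in zip(b1[-10:], b2[-10:]))
```

The exponential’s absolute gap grows without bound, yet the ratio of the two trajectories is constant, so it remains perfectly predictable; the logistic iterates stay in [0, 1] while the tiny initial offset grows to the full size of the attractor.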

Now the second part shows clearly (in a non-Schmidtian sense) that he doesn’t know what he is talking about.

So the statement about the existence of chaos is “different” from a statement about whether “the attractor itself is chaotic” … one has difficulty not bursting out laughing.

What is an attractor (and I remind you again that the existence of an attractor is NOT necessary for a chaotic system)?

When it exists, it is a subspace of the phase space that contains all the dynamical states of the system. It is unique and well defined by its topology. That this topology may be complex (e.g. fractal attractors like the Mandelbrot set) is interesting but irrelevant.

In any case an attractor is never “chaotic by itself”; it is just an invariant set of points for a given dynamical system that is itself chaotic under certain conditions. Talking about “attractors chaotic by themselves” is about as nonsensical as nonsensical goes.

Back to the climate .

Let’s suppose that God provides us with 3 456 266 coupled differential and algebraic equations and gives us the assurance that that is exactly everything that is needed to completely describe the climate for eternity.

Or He might give us only 77 777 equations and warn us that they will reasonably approximate the system for no more than 130 years; and as He is infinitely good, He would give us the error bars.

Let’s suppose further that we are able to study the behaviour of the solutions of this system and prove that it presents an attractor.

When I am talking about “solutions”, I mean of course continuous solutions in Jerry’s sense.

This attractor being an invariant of the climate system, we now have the certainty that ANY possible climate will be somewhere inside the attractor.

Of course, given an initial condition, we will not be able to select the trajectory of the system among the infinity of possible trajectories, but we’ll know that it will be continuous and stay inside the attractor.

The question now would be whether perfect knowledge of the attractor is useful for something.

Well, it depends.

If its topology is simple and its volume small (like a very thin hypertorus, for example), then yes.

Indeed, while the exact evolution of the system would be unknown, it would stay during short times in relatively small volumes of the phase space, where the dimensions of the volume would give the uncertainty of the parameters.

If its topology is complex (f.ex fractal) and its volume large then no .

Indeed such a system would present many bifurcations and fast transitions from large volumes to other large volumes of the phase space .

In other words the spectrum of possible states would be so large that it would be of no practical use .

As the past data show us that the real case is the second one , even if there was an attractor and we knew it , this knowledge would be useless because knowing that the real Earth is somewhere between an ice ball and a desert ball (not mentionning jungle and water balls) and going unpredictably to the one or the other , is of no practical value .

Tom: sorry, but the fact that the earth has gone from an ice ball to a desert does NOT mean that the earth’s climate has an attractor with very wide bounds. The sun’s output and the earth’s orbit both vary a lot on million-year timescales, and the continents move around. This means that the external forcings vary, and you can’t attribute all behaviors to internal oscillations. The rest of your discussion is right on, however. Does the earth system have a dynamics that can be described by a compact attractor? Who knows? You certainly can’t prove it with the kludgy models and short test history we currently have.

The sun’s output, over million-year timescales, is pretty steady and very gradually rising.

H/t LS.

====

#247

Craig

I have set myself rigorously in the frame of chaos theory because Schmidt argued with some pseudo-chaos vocabulary.

I have never attributed anything to “internal oscillations”; I believe that I have never even used the words “internal” and “external”.

A chaotic system generally needs 2 things: non-linear equations and energy dissipation.

But as there is energy dissipation, there must also be energy supply, because if there were none, the system would only spiral (in a chaotic or non-chaotic way) to an equilibrium.

In that case the attractor is ALWAYS only one point.

Despite the fact that it is not forbidden to have 1-point attractors, chaos theory is generally used for less trivial cases.

So sure, the energy input (solar) also plays a role.

It is certainly a necessary condition to have a chaotic system and an attractor (if it exists), because otherwise the Earth would probably already have reached thermodynamic equilibrium with the void.

And of course the size and the complexity of the attractor would depend on a billion things, the solar input, the orbital parameters, the angle of the rotation axis with the ecliptic, etc. being among them.

So now, having seen that we have an energy supply to the system over a reasonably long time, we take all that into account and look for the attractor of the climate system.

If there is one, it is “large”.

Now a theoretical question, “Among millions of parameters, which one has the biggest impact on the ‘size’ of the attractor?”, is a question which can’t be answered.

Sure, you could do some kind of sensitivity study (let’s not forget that we are supposed to know all the equations describing the system), but as we have a system where virtually every parameter is interacting with every other parameter over paths of variable lengths, you could obtain similar attractors with very different physical settings.

But as you say, it is all a question of scales.

The attractor is definitely vast, but one must not forget that the time dimension is not part of the attractor.

Observation of an attractor doesn’t say how long it takes to move from one point to another: 2 points very near (in the phase space) can be very far apart (in time), and 2 points very far apart can be joined fast.

However, when we look at small time scales where many parameters can be considered constant (like decadal or centennial), the number of interesting dimensions of the phase space can be reduced. That doesn’t mean that the attractor was reduced; the attractor is what it is.

But it means that the relevant region where the system will be for those time scales is reduced to something smaller that is not an attractor but is (by definition) within the attractor.

The whole question is then how far it was reduced, and to what time scales I can extrapolate this result that is only valid for the time scale considered (e.g. decadal).

So, for example, you could say by using a decadal scale that the ice ball Earth is far away (5 000 years).

Bad luck: you are not allowed to use the decadal-scale result on a scale 500 times bigger, and the ice ball Earth will begin to appear in 1000 years.

Or the other way round.

I guess gb took a day off. I will try to answer one of the questions I asked.

1. The nature of skill…the answer (emphasis mine)

The most telling points are “mean climatic conditions” and “simulate well-understood processes”. If “skilful” means the ability to predict the mean and simulate, then Douglass and Koutsoyiannis need to be evaluated with respect to this statement from “The use of the multi-model ensemble in probabilistic climate projections” by Claudia Tebaldi and Reto Knutti.

There are indications that Douglass’s approach should be considered a correct approach.

The real question is whether Koutsoyiannis can be ignored or should be assumed relevant.

gb said:

They are working on it. So in this sense, Douglass’s methodology represents the science in its current state. And since we are “improving” our confidence about relevant processes, the finding that the models are neither falsifiable nor verifiable, as in Koutsoyiannis, would be a direct counter to the claim that the models have captured enough of the relevant processes.

Douglass used the accepted methodology for climate models…the ensemble mean. Gavin’s response bounded this methodology, when used by Douglass, to any and all models, indirectly proving Koutsoyiannis’s point. The real argument by Gavin, as stated in Tebaldi and Knutti, is simply that the models seem to work. This is a shamanistic approach to the natural world, not a scientific one.

The result of accepting such utterances is that climate models are without known skill in the scientific sense; i.e., they just seem to work, but not necessarily too well, as shown by Douglass.

re: #241

here

Let’s stipulate that a petaflop is three orders of magnitude greater than a teraflop. My somewhat lame joke was that you would need 13 of the new “76 teraflop supercomputers” (ok, 13.157894), so about 12 more to reach 1 petaflop. Of course we know that teraflops cannot be simply added and must pass a rigorous performance test to get a rating.

My main point was that despite some important arguments being raised here about the ability of models to give good guidance about future climate, money is being spent on adding teraflops. We can expect continued momentum in this regard.

Mr. Pete(#245),

I am using SUSE Linux with Konqueror as my web browser, and the link seems to work.

Jerry

#250

chuckle

There was this:

And then there was this:

If I interpret the Tebaldi/Knutti paper correctly, then it will not be possible twenty years from now to use actual field observations of actual physical processes to validate the predictive skill of today’s climate models, including today’s prediction datasets for future climate behavior.

If this is indeed the case, then it raises the most obvious and tortuous questions, the most important being, “Is this really science?”

It is safe to say that, as far as the world’s carbon growth scenarios go, the rate of growth will likely continue and will probably even accelerate.

So in reality the Great Experiment will go forward, CO2 levels will continue to rise, and now we have to ask the question: what constructive role will the GCMs play twenty years from now in explaining what actually happened over that two-decade timeframe?

I think we would be just as well off today in the year 2008 if we recognize that the creators of these models have become, for all practical purposes, programmers for a new class of pseudo-scientific video game.

Here is what we should do. We should combine all the existing code from all the most popular GCMs into one large, graphically-driven modeling program, one with numerous parameter and pathway options which the user can choose at various critical points in the program.

What should it be called? How about …..

Grand Theft, Climate.

John F. Pittman (#240),

A good reference and many honest statements about what is really going on in climate model comparisons with reality. Maybe you could suggest Gavin and the Boys from Brazil at RC read the reference.

Jerry

Jerry

re: Lynch & Clark

I stepped through the Lynch & Clark energy and enstrophy slides that Gavin suggested. Perhaps you or someone more familiar with these things than I am could help me understand a little better?

As best I can tell from the slides, the 2D turbulence theory they reference predicts that injected energy should cause an *energy* cascade in the atmosphere upward to larger scales (smaller wave numbers) according to a power law with an exponent of -5/3. Injected energy should also (theoretically) lead to an *enstrophy* cascade to smaller scales with a rate of fall-off proportional to k^(-3).

However, actual observations (which were made more than 25 years ago!) show a -5/3 exponent at scales 600 km and below, indicative of an energy cascade dominating at smaller scales, and a -3 exponent at scales above 600 km, indicative of an enstrophy cascade dominating at larger scales (for east-west winds). This phenomenon they call a ‘kink’. The kink for north-south winds is at a scale an order of magnitude smaller. (Presumably due to disruptions by a more severe north-south temperature gradient?)
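For what it’s worth, the slope-fitting diagnostic behind the “kink” is easy to illustrate. The sketch below builds a synthetic spectrum (not the aircraft data, which I don’t have; the kink wavenumber and amplitudes are hypothetical) with k^(-3) behaviour at scales above 600 km and k^(-5/3) below, then recovers both exponents by least-squares fits in log-log space.

```python
# Synthetic illustration of the two-slope spectrum with a "kink":
# build E(k) ~ k^-3 below the kink wavenumber and ~ k^-5/3 above it,
# then recover the slopes by least-squares fits in log-log space.
import math

k_kink = 1.0 / 600.0                                # kink at ~600 km (1/km)
ks = [k_kink * 10 ** (0.05 * i) for i in range(-40, 41)]  # two decades each way

def energy(k):
    # enstrophy-cascade range (k^-3) at large scales (small k),
    # energy-cascade range (k^-5/3) at small scales, matched at the kink
    if k < k_kink:
        return k ** -3.0
    return k_kink ** (-3.0 + 5.0 / 3.0) * k ** (-5.0 / 3.0)

def slope(pairs):
    # ordinary least-squares slope of log E against log k
    xs = [math.log(k) for k, _ in pairs]
    ys = [math.log(e) for _, e in pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

spec = [(k, energy(k)) for k in ks]
low = slope([p for p in spec if p[0] < k_kink])    # scales above 600 km
high = slope([p for p in spec if p[0] > k_kink])   # scales below 600 km
print(round(low, 2), round(high, 2))               # -3.0 -1.67
```

The same fit applied piecewise to an observed or modelled spectrum is how one would check whether a model reproduces the kink.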

Questions: I am confused by the distinction here between energy and enstrophy. Isn’t enstrophy the energy in vorticity? By ‘energy’ do they mean the energy in irrotational velocity?

The model they reference (the ECMWF model) does not reproduce this kink. Lynch & Clark attribute this to excessive damping of energy. Thus, the model has too little energy at small scales.

They point to baroclinic instability as the energy-driver for the cascades and conjecture that below 600 km the k^(-5/3) slope is caused by a downscale energy cascade, probably augmented by an inverse energy cascade from storm-scale phenomena. Above 600 km scales the k^(-3) slope is dominated by an inverse enstrophy cascade.

Question: How similar is the atmospheric physics modeling in the ECMWF model to GCM models? Surely ECMWF (being used for weather modeling) is no worse than that of an average GCM?

Assuming the GCM and ECMWF models are similar in relevant respects, the significance of the Lynch and Clark discussion is that

A, Even a finely resolved weather model gets the gross energy spectrum of the atmosphere wrong;

B, Excessive damping at small scales contributes to the inaccuracy of the models;

C, The main driver for the shape of the energy spectrum occurs at a scale (baroclinic instabilities) which is below the resolution of current global climate models;

D, Energy and enstrophy cascades are operating in both downscale and upscale directions, and switching dominance, at scales which are not resolved by current global climate models.

If I have read this presentation correctly, the Lynch & Clark presentation does not seem to support the idea that the global climate models are rendering atmospheric physics with high fidelity. Nor does it seem to support the notion that Jerry is wrong to emphasize the unphysical nature of the energy damping in current models.

I would very much appreciate any corrections.

Neil,

I think it is pretty well accepted that the spectrum at scales larger than 600 km is caused by quasi geostrophic turbulence (strong influence of rotation and stratification), which is not the same as 2D turbulence. But the cause of the spectrum at scales smaller than 600 km is still a topic of research. Some say it is 2D turbulence but not all observations are consistent with this hypothesis. The kink is perhaps related to the fact that at smaller scales the influence of rotation is less important. I have seen results of other atmospheric models (with hyperviscosity!) that agree better with observations, an approx. -3 and -5/3 spectrum and a kink at about the right scale. It can depend on the numerical method and the vertical resolution.

gb,

Thank-you for the observations. By ‘rotation’ you are referring to Coriolis forces? Are you saying the shape of the larger-scale spectrum is determined by ‘quasi-geostrophic turbulence’, *not* by an inverse enstrophy cascade? Did I misread the presentation, or is this a point of disagreement? [Apologies for the elementary questions, but this is obviously not my bailiwick.]

Yes. I am not an expert on geostrophic turbulence but I believe in that case potential vorticity is an issue, which is different from vorticity. Better to read a book on geophysical flows.

Neil Haven (#256),

>Jerry

re: Lynch & Clark

I stepped through the Lynch & Clark energy and enstrophy slides that Gavin suggested. Perhaps you or someone more familiar with these things than I am could help me understand a little better?

I will give it a try.

>As best I can tell from the slides, the 2D turbulence theory they reference predicts that injected energy should cause an *energy* cascade in the atmosphere upward to larger scales (smaller wave numbers) according to a power law with an exponent of -5/3. Injected energy should also (theoretically) lead to an *enstrophy* cascade to smaller scales with a rate of fall-off proportional to k^(-3).

Recall that I provided a simple proof that one can obtain any solution one wants by an appropriate choice of forcing, even for an incorrect time dependent system. Thus one must be very careful with forcing, especially when model results are quoted. There are some very nice mathematical results called the minimal scale estimates by Henshaw, Kreiss, and Reyna for the nonlinear, incompressible 2D and 3D NS equations (reference available on request). These estimates state how many spatial waves must be resolved in order to correctly compute the continuum solution for a given kinematic viscosity coefficient.

Numerical convergence spindown tests of these estimates in 2D and 3D have shown that the estimates are bang on (references available on request). The interesting thing about these convergence tests is that once the correct number of waves is resolved by a numerical model, additional resolution provides the same solution as predicted by the theory. And incorrect size or type (e.g. hyperviscosity) of dissipation provides a different solution.

I would at least peruse these articles (especially the numerical results) to gain a good understanding of these issues.

>However, actual observations (which were made more than 25 years ago!) show a -5/3 exponent at scales 600 km and below, indicative of an energy cascade dominating at smaller scales, and a -3 exponent at scales above 600 km, indicative of an enstrophy cascade dominating at larger scales (for east-west winds).

Note Lynch is very clear about the sparsity of observations, especially around the equator and in the Southern Hemisphere. This problem is also mentioned in Roger Daley’s text. Given such sparsity, I must admit that I am somewhat skeptical of any claims about a global spectrum or one at smaller scales.

> This phenomenon they call a ‘kink’. The kink for north-south winds is at a scale an order of magnitude smaller. (Presumably due to disruptions by a more severe north-south temperature gradient?)

Questions: I am confused by the distinction here between energy and enstrophy. Isn’t enstrophy the energy in vorticity? By ‘energy’ do they mean the energy in irrotational velocity?

Enstrophy is an integral of the square of the vorticity, and this quantity is discussed in detail in the Henshaw, Kreiss, and Reyna manuscript.

Energy usually refers to the total energy, kinetic plus potential, and is a conserved quantity of the inviscid system of Eulerian equations (essentially the inviscid compressible NS equations).
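To make the energy/enstrophy distinction concrete, here is a small sketch computing both quantities for a 2D Taylor-Green vortex (my example, chosen only because the vorticity is analytic and the integrals come out in closed form): kinetic energy integrates |u|², enstrophy integrates ω², and they are genuinely different numbers.

```python
# Kinetic energy vs enstrophy for the 2D Taylor-Green vortex
#   u = sin(x) cos(y),  v = -cos(x) sin(y)
# whose vorticity is  w = dv/dx - du/dy = 2 sin(x) sin(y).
# Energy integrates 0.5*|u|^2, enstrophy integrates 0.5*w^2 over [0, 2*pi)^2.
import math

N = 128
h = 2 * math.pi / N
energy = enstrophy = 0.0
for i in range(N):
    for j in range(N):
        x, y = i * h, j * h
        u = math.sin(x) * math.cos(y)
        v = -math.cos(x) * math.sin(y)
        w = 2 * math.sin(x) * math.sin(y)      # analytic vorticity
        energy += 0.5 * (u * u + v * v) * h * h
        enstrophy += 0.5 * w * w * h * h

area = (2 * math.pi) ** 2
print(round(energy / area, 3))      # mean kinetic energy = 1/4
print(round(enstrophy / area, 3))   # mean enstrophy = 1/2
```

For periodic trigonometric fields the grid sums are exact, so the two means come out to 0.25 and 0.5 respectively: same flow, different quadratic invariants.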

>The model they reference (the ECMWF model) does not reproduce this kink. Lynch & Clark attribute this to excessive damping of energy. Thus, the model has too little energy at small scales.

Of course the excessive damping comes as no surprise to me. The amusing thing is that they talk about a random backscatter approach to pump energy into the spectrum near the bottom. I don’t find this very scientific.

Also note the resolution of the ECMWF model, i.e. T799 – the highest of any weather or climate model. But the forecast accuracy is barely better than models with much less resolution (see error plots on Canadian Meteorological site).

>They point to baroclinic instability as the energy-driver for the cascades and conjecture that below 600 km the k^(-5/3) slope is caused by a downscale energy cascade, probably augmented by an inverse energy cascade from storm-scale phenomena. Above 600 km scales the k^(-3) slope is dominated by an inverse enstrophy cascade.

Note that these are only theories with very little supporting evidence.

But if there was not an insertion of energy from some source at some scale, then the system would spin down. Thus this is the real scientific question. If there is a reverse cascade that is important and it is coming from scales less than 100 km that are not resolved by climate models, the whole current effort is nonsense.

>Question: How similar is the atmospheric physics modeling in the ECMWF model to GCM models? Surely ECMWF (being used for weather modeling) is no worse than that of an average GCM?

The ECMWF physics (tuning) is considered to be the best in the world.

The climate models must necessarily use cruder parameterizations because they cannot resolve the smaller scale features as in the ECMWF model or use smaller dissipation coefficients. Actually the dynamical core of the NCAR atmospheric component of the NCAR climate model was originally obtained from ECMWF, but ECMWF would not give out their physics. Thus the only difference is in the resolution and physics.

>Assuming the GCM and ECMWF models are similar in relevant respects, the significance of the Lynch and Clark discussion is that

A, Even a finely resolved weather model gets the gross energy spectrum of the atmosphere wrong;

B, Excessive damping at small scales contributes to the inaccuracy of the models;

C, The main driver for the shape of the energy spectrum occurs at a scale (baroclinic instabilities) which is below the resolution of current global climate models;

D, Energy and enstrophy cascades are operating in both downscale and upscale directions, and switching dominance, at scales which are not resolved by current global climate models.

Neil, I think you have a very good understanding of Lynch’s slides!

Certainly much better than Gavin.

>If I have read this presentation correctly, the Lynch & Clark presentation does not seem to support the idea that the global climate models are rendering atmospheric physics with high fidelity. Nor does it seem to support the notion that Jerry is wrong to emphasize the unphysical nature of the energy damping in current models.

Obviously I agree.

Jerry

A quick shout out to all our AGW-physics homies! (John 2 10)

I knew I wasn’t the only one sensing that, the house of cards thing. (Pat 2 17)

Jerry you said in #260

To make sure I did not misinterpret Lynch or your remarks, when I read these two, this is what I conclude: if there is not an insertion of energy from some source at some scale, the energy in the system will spin down. Because of excessive damping causing a spin down, to get the model into agreement with the data, Lynch pumped energy into the system. Is this correct?

John F Pittman (#262),

Not exactly. I will restate to try to clarify things.

In the 2D and 3D numerical convergence runs for the viscous, incompressible NS equations (those normally used for turbulence studies) various initial data with a prescribed spectrum were tried and there were no forcing terms.

These are called spin-down runs because the only vorticity (enstrophy) comes from the initial data, and the vorticity damping will eventually decrease the enstrophy to zero. And because the divergence is required to be zero in these systems, when that happens there is no velocity (no kinetic energy). Lundstrom and Kreiss also did some 3D runs with forcing placed at different points in the spectrum. The results are worth reviewing.

The inviscid, unforced Eulerian equations conserve total energy, but once dissipation is added all bets are off. Depending on what form of dissipation is added, i.e. only vorticity dissipation or vorticity and divergence dissipation, the dissipation can reduce the horizontal velocity to zero and then necessarily the vertical component of velocity that depends on the horizontal velocity in the hydrostatic system. Thus forcing terms are necessary to keep feeding enstrophy and energy into the system.
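A heavily simplified illustration of spin down: in the linear (Stokes-flow) limit, with the nonlinear transfer terms of the NS equations deliberately dropped, each vorticity mode decays as exp(-nu k^2 t), so without forcing the total enstrophy decays monotonically to zero. The mode amplitudes and viscosity below are arbitrary choices of mine, not from any of the cited runs.

```python
# Spin down in the linear Stokes-flow limit: nonlinear transfer omitted,
# so each vorticity mode decays independently as exp(-nu * k^2 * t) and
# total enstrophy falls monotonically toward zero without forcing.
import math

nu = 0.01                                   # arbitrary viscosity
modes = {k: 1.0 for k in range(1, 33)}      # unit initial amplitude per mode

def enstrophy(t):
    # enstrophy ~ sum over modes of |omega_k(t)|^2
    return sum((a * math.exp(-nu * k * k * t)) ** 2
               for k, a in modes.items())

z0, z1, z2 = enstrophy(0.0), enstrophy(10.0), enstrophy(200.0)
print(z0 > z1 > z2)       # monotone decay
print(z2 / z0 < 1e-3)     # essentially spun down by t = 200
```

The high-k modes die first (damping goes as k²), which is also why excessive dissipation preferentially empties the small-scale end of the spectrum.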

For example, in reality the sun feeds energy into the system, but it is not necessarily at the large scale because of cloudiness.

Now Lynch said that Schutt recommended inserting energy at high wave numbers in a random backscatter approach to lift up the spectrum in the ECMWF model, because it was losing too much energy due to unphysically large dissipation.

Does this help?

Jerry

Raven (#222),

Sorry I missed your comment earlier. Your summary is in agreement with my arguments.

Jerry

gb (#221),

>You also don’t seem to understand Tapio Schneider’s point about predictablility of the first kind and second kind.

I understood it perfectly, but there is no mathematical proof that the statement is true for the climate, and certainly not for a climate model that is ill posed.

> This is already well known in turbulence research, see Pope’s book on Turbulent Flows. Turbulence models with parameterizations for the small scales diverge from the fully resolved solution in a short time scale.

That is certainly true. And also for a long time scale. See the convergent numerical runs by Henshaw et al.

> But the statistics of the flow will be correct if a proper dissipation term is used.

And therein lies the game. What is the universal replacement for the NS dissipation operator?

> Your statements about unphysical large dissipation are thus wrong. The statistics can only be correct if by adding a dissipation term the correct mean dissipation is obtained.

If that term essentially changes the nature of the continuum equations that should be solved, then one can obtain anything one wants, i.e. it is just another form of tuning.

Jerry

#260

This was the part that really impressed me.

I never imagined that so little has been done and achieved on this major problem.

– the data is 23 years old and covers only a part of the 30°N – 55°N band

– the data comes from commercial flights, so without any validation or controlled experimental setting

– the basic theory is 30 years old; the only “progress” seems to be this cryptic “stochastic backscattering” process that is supposed to heal the unrealistic overdamping at small scales

– the “kink” is not yet well understood

Perhaps the strangest thing is that both the theory and the models that are supposed to have universal validity are tuned to match data that covers only a small band of the Earth’s atmosphere.

It clearly must be extremely different in the tropics, and as for the S hemisphere, all bets are off.

Where did all that research money go?

Jerry,

Thank-you for your extensive and helpful comments.

My reading notes to myself on the presentation have the line “backscatter… from what?”

Following the units from the definitions, I see now that the volume integral of enstrophy with density has units of energy. That helps me understand.

I will track down the Henshaw et al papers and glean what I can from them. Maintaining an opinion on these matters turns out to be hard work!

Tom Vonk,

These are measurements taken during 6000 flights over long distances at a height of 10 km. Can you imagine that it is extremely expensive to repeat them?

There is no reason to believe that the spectra are different on the S hemisphere.

What kind of controlled experiments are you proposing? It is not possible to create the same conditions in a lab.

Many rather smart people have been working on this problem in the last 30 years. It is not an easy one!

Please, tells us how we should solve the problem or what the solution is. You seem to think that it is easy to address.

gb: that someone criticizes the lack of progress in a field, as Tom Vonk does, is not an indication that they know how to fix it. It relates to the crucial role this science plays in the greenhouse theory, and the lack of progress does not speak well of our level of understanding. If we can’t make progress in 30 years (and clouds would be another example), then the problem is really resistant to solution and/or there has been little money spent on it. This is not an insult to the intelligence of those working on it. What it does indicate is that there is nothing “settled” about such a topic. Tom, I hope I understand your POC here.

Jerry #263

Yes, it helps. The statement where you talk about lifting up the spectrum is where I want to have a clear understanding, because my understanding is that this is central to deciding whether the model is correct in its physics or ill posed.

A list of some relevant references:

W.D. Henshaw, H.-O. Kreiss, and L.G. Reyna, “Smallest scale estimates for the Navier-Stokes equations for incompressible fluids,” Archive for Rational Mechanics and Analysis, Vol. 112, No. 1, March 1990 (Springer Berlin/Heidelberg, ISSN 0003-9527 print, 1432-0673 online), DOI 10.1007/BF00431721.

W.D. Henshaw, H.-O. Kreiss, and J. Yström, “Numerical experiments on the interaction between the large- and small-scale motion of the Navier-Stokes equations,” SIAM Journal on Multiscale Modeling and Simulation, 1, pp. 119-149, 2003. HenshawKreissYstrom.pdf (LLNL site)

G.L. Browning and H.-O. Kreiss, “Comparison of Numerical Methods for the Calculation of Two-Dimensional Turbulence,” Mathematics of Computation, Vol. 52, No. …

Jerry

There has been considerable progress both in the understanding of turbulence (see above references) and in the understanding of balanced atmospheric flows. Not from numerical models, but from rigorous mathematics and careful numerical convergence tests.

Jerry

John F. Pittman (#271),

> Yes, it helps. The statement where you talk about lifting up the spectrum is where I want to have a clear understanding, because my understanding is that this is a central idea to deciding about the whether the model is correct in physics or ill-posed.

Even though the ECMWF model is very high resolution, it still does not accurately approximate atmospheric features less than 100 km in scale.

The ill posedness will appear in full bloom when those features are resolved and the dissipation reduced accordingly. This is a mathematical problem with the continuum dynamical (hydrostatic) system, not the physics. The current ECMWF spectrum is thought to be wrong, and the pumping of energy into the high wave numbers of the spectrum is just another attempt to make the spectrum look more “realistic”. There is no theory behind the method.

Jerry

Thanks Jerry.

Confirms what I am trying to make sure that I understand.

All,

Some remarks on the GISS ModelE.

The following statements are taken from the reference

Schmidt, G.A., R. Ruedy, J.E. Hansen, I. Aleinov, N. Bell, M. Bauer, S. Bauer, B. Cairns, V. Canuto, Y. Cheng, A. Del Genio, G. Faluvegi, A.D. Friend, T.M. Hall, Y. Hu, M. Kelley, N.Y. Kiang, D. Koch, A.A. Lacis, J. Lerner, K.K. Lo, R.L. Miller, L. Nazarenko, V. Oinas, Ja. Perlwitz, Ju. Perlwitz, D. Rind, A. Romanou, G.L. Russell, Mki. Sato, D.T. Shindell, P.H. Stone, S. Sun, N. Tausnev, D. Thresher, and M.-S. Yao 2006. Present day atmospheric simulations using GISS ModelE: Comparison to in-situ, satellite and reanalysis data. J. Climate 19, 153-192.

I will cite the page for each statement.

page 157

>The runs described here use a second-order scheme for the momentum equations.

Note that it has been shown by Oliger and Kreiss that second-order numerical finite difference schemes are extremely inferior to higher-order accurate finite difference schemes. In fact, all modern atmospheric models (ECMWF and NCAR) use pseudo-spectral methods, which are the limit of finite difference methods as the order of accuracy of a finite difference method increases. The other reason that the modern models use the pseudo-spectral method is that the pole problem, i.e. the singularity of the Jacobian at either pole, can be handled in the correct mathematical manner. That is not the case for a finite difference method that uses spherical coordinates, and these methods were abandoned many years ago. The inferiority of the numerical approximation will show up in any accuracy comparisons of dynamical cores.
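The order-of-accuracy point is easy to demonstrate numerically. The sketch below (a generic textbook test of my own, not taken from Oliger and Kreiss) differentiates sin(x) with second- and fourth-order centered differences: halving the grid spacing cuts the second-order error by roughly 4x but the fourth-order error by roughly 16x.

```python
# Convergence comparison: second- vs fourth-order centered differences
# for d/dx sin(x) at x = 1.  Halving h reduces the error by ~h^2 (4x)
# for the second-order stencil and ~h^4 (16x) for the fourth-order one.
import math

def d2(f, x, h):
    # second-order centered difference
    return (f(x + h) - f(x - h)) / (2 * h)

def d4(f, x, h):
    # fourth-order centered difference
    return (-f(x + 2 * h) + 8 * f(x + h)
            - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x = 1.0
exact = math.cos(x)
for h in (0.1, 0.05):
    e2 = abs(d2(math.sin, x, h) - exact)
    e4 = abs(d4(math.sin, x, h) - exact)
    print("h=%g  2nd-order err=%.2e  4th-order err=%.2e" % (h, e2, e4))
```

At any resolution a model can afford, the higher-order stencil buys several extra digits of accuracy, which is the substance of the criticism above.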

Page 158

> We ensure that the loss of potential energy is exactly balanced by the gain in kinetic energy using a small global correction to the temperature.

This is an ad hoc method to keep the total energy in balance. There is no theory to support such an adjustment, and the impact on the solution over time can be considerable.
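To get a feel for the scale of such a correction, here is a back-of-the-envelope sketch with entirely hypothetical numbers (the energy imbalance, and the rounded atmospheric mass, are my inventions; this is not the ModelE code): whatever energy the discretization fails to conserve is put back as a uniform temperature nudge spread over the whole atmosphere's heat capacity.

```python
# Back-of-the-envelope: a "global temperature correction" of the kind
# described above.  The unaccounted energy (PE lost minus KE gained) is
# spread uniformly over the atmosphere's heat capacity.
# All specific numbers here are hypothetical round figures.

cp = 1004.0          # J/(kg K), specific heat of dry air at constant pressure
mass = 5.1e18        # kg, rough total mass of the atmosphere
pe_lost = 1.0e20     # J, hypothetical PE loss over a model step
ke_gained = 0.97e20  # J, hypothetical KE gain over the same step

imbalance = pe_lost - ke_gained       # energy the discretization mislaid
dT = imbalance / (mass * cp)          # uniform correction, in kelvin
print("global correction: %.2e K" % dT)
```

Each individual nudge is tiny, which is presumably why it looks harmless; the point made above is that there is no theory bounding what such nudges do to the solution when applied step after step.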

page 158

> The basic dynamics code has not changed substantially since SI2000; however, there have been a number of modifications that aimed to increase the computational efficiency of the dynamical core and its accuracy and stability near the Poles.

See above remark about the Pole problem.

page 158

>Occasionally, divergence along a particular direction might lead to temporarily negative gridbox masses. These exotic circumstances happen rather infrequently in the troposphere but are common in stratospheric polar regions experiencing strong accelerations from parameterized gravity waves and/or Rayleigh friction. Therefore, we limit the advection globally to prevent half of the mass of any box being depleted in any one advection step.

So one kluge (parameterized gravity waves) leads to the necessity for another. Has anyone ever heard of negative mass?
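For illustration only, here is a toy 1D upwind advection step with a limiter of the kind the quote describes (my own crude construction, not the actual ModelE scheme): capping the outgoing flux at half the donor box's mass keeps every mass positive even when the time step is far too large for the unlimited scheme.

```python
# Toy 1D donor-cell advection with a "half the box" flux limiter, in the
# spirit of the quoted ModelE fix: no face may carry away more than half
# of its donor cell's mass in one step, so masses stay positive even when
# the CFL condition is badly violated.  Not the actual ModelE code.

def advect(mass, velocity, dt, dx):
    n = len(mass)
    flux = [0.0] * (n + 1)                 # flux[i] = mass crossing face i
    for i in range(1, n):                  # closed boundaries: end fluxes 0
        donor = i - 1 if velocity > 0 else i   # upwind donor cell
        f = velocity * mass[donor] * dt / dx
        cap = 0.5 * mass[donor]            # the limiter: at most half the box
        flux[i] = max(-cap, min(cap, f))
    return [m - (flux[i + 1] - flux[i]) for i, m in enumerate(mass)]

m = advect([1.0, 0.01, 1.0], velocity=5.0, dt=1.0, dx=1.0)  # CFL-violating
print(all(x > 0 for x in m))   # limiter keeps all masses positive
```

Without the `cap` line the middle box would go strongly negative here; with it, positivity is preserved (and total mass is conserved, since interior fluxes cancel), but the transported amounts are no longer the ones the dynamics asked for, which is the sense in which this is a kluge rather than a fix.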

I can continue, but I think the point is clear. The GISS model uses the most archaic numerical method, one that was rejected long ago by other modelers. And the number of unphysical kluges in the model is a very clear indication of a rather cavalier approach to the scientific method.

Jerry

Even ecosystem models, which are rather kludgy, never get negative animals in a patch…I am proud to say! I love how they call it “unphysical” instead of “wrong”.

Jerry, a while back, I asked Gavin Schmidt about the problem you described, viz:

He assured me that the imbalance was small (but did not provide any numbers).

I was not reassured …

I also asked him about the other question, viz:

He said this happened, but was very uncommon (again no numbers). He admitted this was a kluge, but claimed it was an unimportant kluge.

Color me unimpressed …

w.

PS – what I actually asked was:

In reply, he said:

I then asked

Didn’t get an answer to that one …

Willis (#278),

Gavin always has some dodge for any rigorous scientific question.

And if not, RC selectively edits out comments or parts of comments that it does not want to address or to be seen. I have been banished from the site, and will state here that I have never seen such nonsense from people who are supposed to be scientists in my entire career.

Just to confirm my suspicions, I started to read the GISS ModelE manuscript. If any numerical analyst saw the kluges that are endemic in this manuscript, they would die laughing. And isn’t it interesting that no one has the computer resources to answer some very basic questions about the impact of these games? Pathetic.

Jerry

P.S. When heating caused a model to become nonhydrostatic due to overturning, the models arbitrarily would redistribute the heat and reset the flow to be hydrostatic. Now there is a real scientific technique. I see that in the GISS model they now have used turbulence (diffusion) to “solve” the problem. I cannot believe that Gavin can make the statements that he does knowing full well the GISS model is a complete joke.

Jerry #279 Thanks for your patience and posts. You said:

Do you mean set it to be in hydrostatic balance for a set volume/grid ? This is implied from your statement

since I assume that this would be a grid by grid “adjustment” in a finite difference method model.

I find the comparison with data from Albany, Florida to be quite interesting as a Floridian. May I offer the hypothesis that the sharp cooling there is a land use effect? I got the idea from this presentation and talk by James J. O’Brien. He says “Florida is cooling and…this cooling is man-made.”

John F. Pittman (#280),

The “convective adjustment” is done column by column. It has nothing to do with the finite difference scheme as it is an ad hoc method to keep the model in hydrostatic balance.

Jerry

John F. Pittman (#280),

I will elaborate a bit more. When physical parameterizations that are not physically accurate are used in a model that must be kept in hydrostatic balance, i.e. the model is based on a system that is not the correct system to describe the atmosphere for all scales of motion, the forcing can lead to overturning in a column. In other words, the parameterization would lead to mass that is heavier above than below in the column. In order to prevent this from happening, the forcing, e.g. latent heating in the column, is arbitrarily redistributed to keep the overturning from occurring. This is clearly an ad hoc fix because in the real atmosphere such overturning, if it were to occur, would be handled by small scale turbulence that is not resolvable in a large scale model. Let me know if this helps.
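The overturning condition Jerry describes is easy to state in code: a column is convectively unstable wherever potential temperature decreases with height. A minimal detection sketch (hypothetical, not any model's actual diagnostic):

```python
def unstable_layers(theta):
    """Indices k at which potential temperature decreases with height
    (theta[k + 1] < theta[k]), i.e. effectively heavier air sits above
    lighter air and the column would overturn. theta[0] is the lowest layer."""
    return [k for k in range(len(theta) - 1) if theta[k + 1] < theta[k]]
```

A parameterized heating that produced `theta = [300, 302, 301, 305]` would flag layer 1; the ad hoc fix Jerry describes then redistributes the heat until this list comes back empty.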

Jerry

#283, Yes it does help, Jerry, and thanks. I was not sure if it was by column; however, from my view and comprehension, a column would usually be in a grid, yielding a 3-D control volume from the way the models have been described in this thread. Not necessarily factual, just how I model the models.

The reason I wanted to be sure is that certain general statements and conclusions can be obtained from such a system. Since we are talking about heat transfer, forcing a system to hydrostatic balance when it should be turbulent is grossly inaccurate. Such an estimate can easily be off by a factor of 2 or more. Since the greenhouse effect (simplified version) is about mean optical path and delaying heat loss to space, heat transfer in the column would seem to me to be the one part that HAD to be correct. As an aside, one should not state that it is a boundary value problem and then not use boundary value phenomena, such as turbulence. I would assume such an ad hoc kludge factor for a boundary value means either the physics is wrong (unknown) or the math implementation is wrong (unsolvable).

Another question: is this re-distribution the same part of the kinetic energy as when you responded to Gavin (“Since this is about half the forcing for a doubling of CO2”), or is it another error?

How is this energy, or forced hydrostatic balance, related to the “sponge layer” or “excessive damping”, I believe you named it?

Is the hydrostatic kludge factor related to “divergence along a particular direction might lead to temporarily negative gridbox masses”?

John F. Pittman (#284),

> Another question: is this re-distribution the same part of the kinetic energy as when you responded to Gavin (“Since this is about half the forcing for a doubling of CO2”), or is it another error?

I don’t believe this was a response by me. If you do a google search on convective adjustment you obtain:

convective adjustment: A method of representing unresolved convection in atmospheric models by imposing large-scale vertical profiles when convection occurs.

As originally developed, convective adjustment was applied when modeled lapse rates became adiabatically unstable. New temperatures were calculated for unstable layers by conserving static energy and imposing an adiabatic lapse rate. If, in addition, humidities exceeded saturation, they were adjusted to saturation, with excess water removed as precipitation. A related adjustment, (stable saturated adjustment), for stable layers with water vapor exceeding saturation, returned them to saturation, also conserving energy. More recently, convective adjustments have been developed that adjust to empirically based lapse rates, rather than adiabatic lapse rates, while still maintaining energy conservation. Convective adjustment is generally applied to temperature and humidity but, in principle, can also be applied to other fields affected by convection.
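The energy-conserving adjustment in that definition can be sketched as repeated mixing of adjacent unstable layers to their mass-weighted mean, which is one simple (hypothetical) way to impose a neutral profile; real schemes adjust whole unstable layers to a target lapse rate:

```python
import numpy as np

def dry_convective_adjust(theta, mass):
    """Crude dry convective adjustment: wherever potential temperature
    decreases with height, mix the adjacent pair to its mass-weighted
    mean, conserving the column's static energy. Passes repeat until
    theta is nondecreasing with height. Illustrative sketch only."""
    theta = np.asarray(theta, dtype=float).copy()
    changed = True
    while changed:
        changed = False
        for k in range(len(theta) - 1):
            if theta[k + 1] < theta[k] - 1e-12:
                m = mass[k] + mass[k + 1]
                mean = (mass[k] * theta[k] + mass[k + 1] * theta[k + 1]) / m
                theta[k] = theta[k + 1] = mean
                changed = True
    return theta
```

For equal-mass layers `[305, 300, 310]` this returns `[302.5, 302.5, 310]`: the column sum (a stand-in for static energy) is untouched, but the unstable inversion is gone, which is the "large-scale vertical profile" being imposed.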

> How is this energy, or forced hydrostatic balance, related to the “sponge layer” or excessive damping, I believe you named it?

The excessive damping is a horizontal dissipation operator (many times hyperviscosity) typically applied at all grid points. The sponge layer is another type of dissipation applied near the upper lid of the model to artificially damp outgoing gravity waves. High wave numbers of the spatial spectrum are also chopped (set to zero) to stabilize the pseudo spectral method. This can considerably lower the stated accuracy of the pseudo spectral method.
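Both artificial dissipations Jerry names are easy to illustrate on a periodic 1-D field. The exponent, coefficient, and cutoff fraction below are hypothetical choices for illustration, not any model's actual settings:

```python
import numpy as np

def damp_spectrum(u, nu4, dt, chop_frac=0.25):
    """Apply (a) a hyperviscous filter exp(-nu4 * k**4 * dt) to every
    wavenumber and (b) spectral chopping, zeroing the top chop_frac of
    the spectrum, as pseudospectral models do for stability.
    Illustrative 1-D sketch only."""
    uhat = np.fft.rfft(u)
    k = np.arange(len(uhat))
    uhat = uhat * np.exp(-nu4 * k**4 * dt)    # hyperviscosity hits high k hardest
    cut = int(len(uhat) * (1.0 - chop_frac))  # modes at or above this index are zeroed
    uhat[cut:] = 0.0                          # spectral chopping
    return np.fft.irfft(uhat, n=len(u))
```

A smooth (constant) field passes through untouched, while a grid-scale oscillation is annihilated outright, which is why such operators can considerably degrade the nominal accuracy of the spectral method.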

> Is the hydrostatic kludge factor related to “divergence along a particular direction might lead to temporarily negative gridbox masses”?

This sounds like it is due to horizontal advection, which is a different mechanism.

Do you begin to get the feeling that the modelers resort to a myriad of games in order to extend the period of integration beyond a reasonable length of time, i.e. beyond a short term forecast?

Jerry

Here’s what may be my last post in defense at RC. Things have come to an impasse, and further debate seems relatively pointless. The claim that I’ve confused type 1 and type 2 errors doesn’t seem to stand in the face of the systematic correlations of cloud error among GCMs.

No one seemed to want to really touch the implications resulting from the success of a simple linear model reproducing GCM projections. One, for example, is that the result implies that across 80 years the GCMs dump all of the excess GHG forcing into atmospheric temperature. None of it appears to go into ocean warming, or ice-cap melting, or etc. This is very peculiar, given the coupled climate-mode feedbacks they’re supposed to accurately represent.
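To make the point concrete, a "passive" emulator of that kind is only a line or two. The toy below is a hypothetical illustration of the idea, not Frank's actual model or any fitted coefficient:

```python
import numpy as np

def linear_emulator(forcing_wm2, coeff_k_per_wm2):
    """Toy passive linear emulator: the projected temperature anomaly is
    simply a fixed coefficient times the excess GHG forcing, so by
    construction every watt of forcing shows up as air temperature and
    none is routed into ocean heat or ice melt. Both the functional form
    and any coefficient value here are illustrative assumptions."""
    return coeff_k_per_wm2 * np.asarray(forcing_wm2, dtype=float)
```

If a fit of this form tracks a GCM's 80-year global-mean projection closely, the implication is the one drawn above: the GCM is, in effect, putting the excess forcing straight into atmospheric temperature.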

Anyway, here it is. We’ll see if it survives moderation.

============================

#384 Pat Frank Says:

Your comment is awaiting moderation.31 May 2008 at 9:29 PM

#370 — Ray, with respect to an anthropogenic cause for climate warming, all one need do is show a large uncertainty. The cloud error, which is entirely independent of the linear model, fully meets that criterion. In a larger sense, what do non-unique parameterization schemes do to prediction falsification?

I’m not saying an incomplete model has no predictive power. I’m saying that a model must have resolution at the level of the predictions. That criterion is clearly not met with respect to a 3 W m^-2 forcing, as amply demonstrated by the magnitude of the predictive errors illustrated in WGI Chapter 8 Supplemental of the AR4.

In the absence of constraining data, and when the models available are not only imperfect but are imperfect in unknown ways, use of those models may be far more disastrous than statistical extrapolations. The reason is that a poor model will make predictions that can systematically diverge in an unknown manner. Statistical extrapolations allow probabilistic assessments that are unavailable from imperfectly understood and systematically incomplete models.

Any force in your last paragraph requires that all the relevant climate forcings be known, that they be all correctly described, and that their couplings be adequately represented especially in terms of the nonlinear energy exchanges among the various climate modes. None of that is known to be true.

#376 — Gavin, modeling real-world events can only be termed physically successful in science if the model is predictively unique. You know that.

If you look at the preface to my SI, you’ll see mentioned that the effort is to audit GCM temperature projections, and not to model climate. You continually seem to miss this point.

There is no confusion between an error in the mean and an error in a trend, because the cloud error behaves like neither one. Cloud error looks like theory-bias, showing strong intermodel correlations.

#377 — Ray, have you looked at the GCM predictive errors documented in the AR4 WGI Chapter 8 Supplemental? They’re very large relative to GHG forcing. Merely saying that CO2 is a greenhouse gas does not establish cause for immediate concern because the predictive errors are so large there is no way to know how the excess forcing will manifest itself in the real climate.

Just saying the models cannot reproduce the current air temperature trend without a CO2 forcing bias may as well mean the models are inadequate as that the CO2 bias works as they represent.

And in that regard, have you looked at Anthony Watts’ results at http://www.surfacestations.org? It’s very clear from the CRN site quality ratings that the uncertainty in the USHCN North American 20th C surface temperature trend is easily (+/-)2C. Are the R.O.W. temperature records more accurate? If not, what, then, is the meaning of a +0.7(+/-)2 C 20th C temperature increase?

#378 — dhogaza, if the paleo-CO2 and temperature trend sequences were reversed we can be sure you’d be bugling it, canard or no.

Honestly, I see little point in continuing discussion here. I came here to defend my work in the face of prior attacks. It’s not about climate but about a specific GCM audit, no matter Gavin’s attempts to change the focus. Claims notwithstanding, the evidence is that none of you folks have mounted a substantively relevant critique.

============================

Jerry said:

Yes, that is why I wanted to as specific in my understanding as I could reasonably be. Any one of these “games” by itself is enough to raise serious counter arguments to say what Gavin has been stating. Thank you for your help.

It was W.E. that said

It seems Willis has a few points about kludge factors.

I can only sympathize with Pat’s comment. I have found the CA folks rough but attentive. The RC folks seem to be pernicously hard-headed about straw-man and irrelevant side track arguments. Though I would hope that Pat adds to CA’s “Lost at Sea” and other threads as to whether his model will be more accurate without adjustment with the temperature corrections going on. A number of CA posters are wondering about the impact to the aerosol question.

Well, I guess I got sucked in one more time. Ray Ladbury posted this:

==============================

386 Ray Ladbury Says:

1 June 2008 at 10:32 AM

Pat Frank, you are utterly ignoring the fact that the signal and noise may have utterly different signatures! For my thesis, I extracted a signal of a few hundred particle decays from a few million events–using the expected physics of the decays. I’ve worked with both dynamical and statistical models. I’ll take dynamical any day. For the specific case of climate models, CO2 forcing is constrained by a variety of data sources. These preclude a value much below 2 degrees per doubling (or much over 5 degrees per doubling). Thus, for your cloudy argument to be correct, you’d have to see correlations between clouds and ghgs–and there’s no evidence of or mechanism for this.

In effect, what you are saying is that if we don’t know everything, we don’t know anything–and that is patently and demonstrably false. What you have succeeded in demonstrating here is that you don’t understand climate or climate models, modeling in general, error analysis and propagation or the scientific method. Pretty impressive level of ignorance.

==============================

Here’s my reply:

==============================

#388 Pat Frank Says:

Your comment is awaiting moderation.1 June 2008 at 2:13 PM

#384 — Gavin, the cloud error came from a direct assessment of independent GCM climate predictions. The linear model reproduced GCM surface temperature outputs in Figure 2, and was used to assess GCM surface temperature outputs from Figure 1. Where’s the disconnect?

#386 — Ray your example is irrelevant because you extracted a signal from random noise that follows a strict statistical law (radiodecay).

Roy Spencer’s work, referenced above, shows evidence of a mechanism coupling GHG forcing to cloud response, in support of Lindzen’s iris model. More to the point, though, in the absence of the resolution necessary to predict the distribution of GHG energy throughout the various climate modes, it is unjustifiable to suppose that this effect doesn’t happen because there is no evidence of it. The evidence of it is not resolvable using GCMs.

You wrote, “In effect, what you are saying is that if we don’t know everything, we don’t know anything–and that is patently and demonstrably false.” I’ve never asserted that and have specifically disavowed that position at least twice here. How stark a disavowal does it take to penetrate your mind? My claim is entirely scientific: A model is predictive to the level of its resolution. What’s so hard to understand about that? It’s very clear from the predictive errors documented in Chapter 8 Supplemental of the AR4 that GCMs cannot resolve forcings at 3 W m^-2.

Your last paragraph is entirely unjustifiable. To someone rational.

=================================

Pat #288. I once got into a discussion with Ray Ladbury about the precision and accuracy of what the IPCC had in their reports, and what could be attributed to natural and manmade causes. Ray believes that the signal is detectable and is manmade, and that the models provide clear confirmation, not merely derivation, of the signal. I do not see that the signal has been confirmed one way or the other. But I note that Ray has on several occasions worded his responses to indicate that the signal is known and has been measured, despite the possible inaccuracies or imprecisions, which is what I made the mistake of trying to learn about on RC. The folks here on CA actually discuss thoughts with each other.

Pat, you say:

I’ve taken this up with Gavin before. His response is that although the absolute values may be way off (they are, I’ve posted on GISS model errors before), they are still accurate for the ∆ in the modeled results of the changes in forcings. In other words, despite the fact that their cloud representation is way off (59% areal coverage modeled, 69% measured) and is parameterized (by specifying ad-hoc the conditions at which clouds are supposed to form), their model is still accurate in measuring the difference in forcing between say, a run with CO2 doubled and a run with CO2 held steady.

Now, in the simplest of models and the simplest of systems, this might be true. Even there, however, it would need to be demonstrated before it would be accepted.

Given the complex, interconnected nature of the climate system, however, I find this point of view laughably naive … but they believe it with all their being.

w.

288 (Pat Frank): Don’t be surprised if that’s the last comment you’re allowed to make. If it gets through at all. You should never suggest that the Iris Hypothesis may be correct!

289 (John F. Pittman): I suspect many RC commenters are scientists in other fields who have no understanding of the science whatsoever, but merely put their faith in their fellow scientists, no matter how counterintuitive the statements they make are. Sure, a thermometer can’t distinguish between mankind and nature, but if scientists say they can, well, scientists would never lie! Then, of course, there is the feeling that if you’re up against the “bad guys”, the right wingers, the corporations, big oil, you can’t be wrong (or it doesn’t matter if you are) because you are morally right. For myself, I find this to be an attitude that makes a mockery of science. I firmly stand by the old saying “Extraordinary claims require extraordinary evidence.”

Andrew #291

Agreed. Also, remember for business, extraordinary costs require extraordinary justification. Which usually has to proceed from the extraordinary evidence.

#290 (Willis)

“…In other words, despite the fact that their cloud representation is way off (59% areal coverage modeled, 69% measured) and is parameterized (by specifying ad-hoc the conditions at which clouds are supposed to form), their model is still accurate in measuring the difference in forcing between say, a run with CO2 doubled and a run with CO2 held steady.”

Can you please clarify Gavin’s claim for me? Is it:

a) Cloud formation doesn’t change in the presence of an external forcing from CO2.

b) The CO2-induced changes in cloud formation from a base of 59% are equal to the change(s) from a base of 69%. And since the incremental changes are the same in either case, the absolute error in baseline doesn’t matter.

c) Other?

Also, if possible, could you please post a link to the source(s) for your comparison of Planet Earth vs Planet GISS. I know you’ve posted it before, but one was a dead link, and I’m having trouble finding the other occurrences.

Best,

James

293 (James): I’m not Willis, but just to start with, I very much doubt that Gavin is suggesting option “a”. That would be untenable regardless of (or especially because of) what one believes about climate change. Clouds will either act to amplify or dampen warming, and I would think Gavin would be firmly in the amplifying camp, not some “no change” camp.

I think you want “Present day atmospheric simulations using GISS ModelE: Comparison to in-situ, satellite and reanalysis data”, available here

My understanding of what Gavin said is like your “b”. It was that, although things are in error in absolute terms, they are correct in relative terms. For example, suppose you want to model a car, and see what happens when you increase the fuel by some amount. Now, the exact modeled value for the car speed given a certain amount of fuel might be off.

But my understanding of the processes, my calculation of the conversion process of fuel to MPH, might be good enough so that the various changes from a given change in fuel would be reasonable, even though some absolute values might be off.

And in fact every model runs on just this assumption, that there are things that we can safely ignore and things that we have to pay close attention to.

I’d need a lot of supporting evidence, however, before I’d believe that a climate model with parameterized cloud formation could properly model the result of a given change in forcing. There are too many interactions, too many feedbacks, to just assume that. You’d have to prove that before I’d buy it.
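Willis's skepticism here is easy to motivate with a toy calculation: if the response is nonlinear in cloud cover, a bias in the cloud baseline does not cancel when you take differences. Everything below (the functional form and the cloud factor) is a hypothetical illustration, not any model's physics:

```python
import math

def toy_response(cloud_frac, co2_ppm):
    """Hypothetical nonlinear toy response: the standard CO2 log-forcing
    term scaled by a made-up cloud-dependent factor. Illustrative only."""
    return 5.35 * math.log(co2_ppm / 280.0) * (1.0 - cloud_frac) ** 2

# Delta for a CO2 doubling at the observed cloud fraction (0.69)...
d_obs = toy_response(0.69, 560.0) - toy_response(0.69, 280.0)
# ...versus the same delta at the biased modeled fraction (0.59):
d_model = toy_response(0.59, 560.0) - toy_response(0.59, 280.0)
# Because the cloud factor is nonlinear, the biased baseline inflates
# the doubled-CO2 delta by roughly 75% in this toy.
```

Whether real GCM responses cancel their baseline errors in the ∆ is exactly the thing that, as argued above, would have to be demonstrated rather than assumed.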

Gavin, of course, says something like “But look at how well our model handles Pinatubo”, which is true. But he thinks that means something. Me, I think that the ability to correctly represent transient events is a much simpler problem than representing long-term evolutions in what I see as a naturally equilibrated system.

w.

295(Willis Eschenbach): Not to mention the fact that cirrus clouds were produced during the Pinatubo eruption.

http://www.sciencemag.org/cgi/content/abstract/257/5069/516

And How about Douglass and Knox 2005?

http://arxiv.org/ftp/physics/papers/0509/0509166.pdf

Willis (#295),

Satellite data is not in situ and neither is reanalysis data.

The former is a blend of surface info (if it exists) and satellite radiance info. The accuracy of the transformation of radiances to model variables is a very questionable procedure, especially in the presence of clouds.

The latter is a blend of model and observational data and can be completely in error, especially near the equator where there are few obs and the physics is not understood.

Note that GISS receives money from NASA. I was once told that anyone that criticized NASA would have their funds chopped. I doubt that GISS is going to criticize the satellite data although there has been an extensive discussion of its poor quality on this site.

Jerry

Willis (#295)

I thought that readers here might be interested in the following statements from the GISS manuscript.

page 168

> Similarly, satellites that see clouds cannot generally see thru them, and this needs also to be accounted for.

> The net albedo and TOA radiation balance are to some extent tuned for, and so it should be no surprise that they are similar across models and to observations.

So despite all of Gavin’s bluster, the article clearly states that the model has been tuned using multiple gimmicks to obtain agreement with certain questionable information. Isn’t that exactly what I proved earlier, i.e. that by suitably choosing the forcing, one can obtain any solution that one wants, even for an inaccurate numerical model?

Jerry

Jerry, Gavin doesn’t deny that the models are tuned … he just claims that there’s not many tuning knobs. Six, if I recall correctly. Riiiiiight …

See the discussion on the Briggs site.

w.

PS – I love their statement about albedo. The model is more than “somewhat” tuned for albedo, it is entirely tuned for albedo and for global radiation balance … the report sez:

The interesting part to me is that when their clouds are tuned for the proper albedo (about 30%) and to achieve global radiation balance, the cloud cover is way low (58% vs 69%, see Table 3, p 159). If that were my model, I’d be disassembling it to see where the huge error comes from … but they just carry on as if everything were just fine.

#295 (Willis)

Thank you for the link, Willis. The paper contains the fact(s) I was seeking.

“There are too many interactions, too many feedbacks, to just assume that. You’d have to prove that before I’d buy it.”

Amen.

## 4 Trackbacks

[…] evaluation of models such as Douglass et al. 2007 and Koutsoyiannis et al. 2008, who conclude in an Assessment of the reliability of climate predictions based on comparisons with historical time serie… (M)odel outputs at annual and climatic (30‐year) scales are irrelevant with reality; also, they […]


[…] August 3, 2008 — climatereview: Some time ago, Steve McIntyre of Climate Audit took up a presentation by some Greek researchers dealing with the reliability of […]

[…] Koutsoyiannis has a career of work grappling with non-normal statistics in hydrological data, using models with long-term-persistence, and the difficulty of prediction. These more advanced analysis attempt to account for the fact that extremes of precipitation happen more frequently than expected from typical approaches, and are well worth the study. That is, there is no need to reinvent the wheel here. […]