Exponential Growth in Physical Systems

Gerry Browning sent in the following post:

If a time dependent equation has a solution that grows exponentially in time, then that solution is very sensitive to errors in the initial condition, i.e., any error in the initial condition will cause an exponential deviation in time from the true solution.
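
As a minimal illustration of this point (a toy scalar equation, not the atmospheric systems discussed in this post; the growth rate and initial error below are arbitrary numbers), consider du/dt = lambda*u: two solutions started a distance eps apart separate as eps*exp(lambda*t), i.e., the initial error is amplified at the same exponential rate at which the solution itself grows.

import numpy as np

# Toy illustration only: for du/dt = lam*u, solutions started from u0 and
# u0 + eps separate as eps*exp(lam*t).
lam = 1.5      # hypothetical growth rate (per hour)
eps = 1e-6     # hypothetical error in the initial condition
u0 = 1.0

for t in (0.0, 2.0, 4.0, 6.0, 8.0):
    u_true = u0 * np.exp(lam * t)
    u_pert = (u0 + eps) * np.exp(lam * t)
    print(f"t = {t:3.0f} h   deviation from the true solution = {abs(u_pert - u_true):.3e}")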

Using a mathematical perturbation of smooth, large-scale atmospheric flow in the presence of vertical shear (jet stream), Browning and Kreiss (1984) showed that the hydrostatic equations can lead to unbounded exponential growth (ill posedness) and the non-hydrostatic equations can lead to rapid exponential growth on the time scale of a few hours when the wave length resolved by a numerical model approaches 10 km.

I have developed inviscid approximations of both systems for a doubly periodic domain that verify these conclusions. The growth in the hydrostatic system becomes larger and larger in a matter of a few hours as the resolution increases, while the growth in the non-hydrostatic system remains bounded, but is very rapid. The bounded, fast exponential growth and rapid cascade to smaller scales have also been seen in the NCAR WRF and Clark-Hall viscous, nonhydrostatic models (Lu et al., 2006), but at a slower pace because of the dissipation.

These analytical and numerical results raise a number of troubling issues. If current global atmospheric models continue to use the hydrostatic equations and increase their resolution while reducing their dissipation accordingly, the unbounded growth will start to appear. On the other hand, if non-hydrostatic models are used at these resolutions, the growth will be bounded, but extremely fast with the solution deviating very quickly from reality due to any errors in the initial data or numerical method.

References:

Browning, G. and H.-O. Kreiss (1984): Numerical problems connected with weather prediction. Progress and Supercomputing in Computational Fluid Dynamics, Birkhauser.

Lu, C., W. Hall, and S. Koch (2006): High-resolution numerical simulation of gravity wave-induced turbulence in association with an upper-level jet system. American Meteorological Society 2006 Annual Meeting, 12th Conference on Aviation Range and Aerospace Meteorology.

441 Comments

  1. John A
    Posted Feb 11, 2007 at 11:07 AM | Permalink

    Therefore, those flux adjustments set by the modellers are critical to preventing this sort of instability?

  2. TAC
    Posted Feb 11, 2007 at 11:38 AM | Permalink

    I understand that sensitivity to errors in initial conditions should undermine confidence in any particular prediction — that problem arises in weather forecasting, or forecasting a vast array of geophysical phenomena. But is it also the case when trying to describe population characteristics (e.g. the mean, variance, etc.) of future events? In particular, is there some reason why you can’t predict the effect, in terms of the distribution of future events, of altering coefficients of the governing differential equation — or am I missing the point here?

  3. Ian S
    Posted Feb 11, 2007 at 12:25 PM | Permalink

    #2 It seems to me that most of the state variables of climate exhibit fractal self similarity. For example, if I were to show you a graph of temperatures over time, but not show you the timescale, could you tell me what the approximate timescale was? I don’t think so. Chaos that results in this type of scale independent fractal behavior is equally chaotic and unpredictable at all scales.

    It’s like the stock market. Can one predict population characteristics of the stock market? No. [well they can go ahead and predict if they like, but don’t put your money on them 😉 ]. Similarly with climate, it seems quite likely that long-term behavior is no more predictable than short-term behavior, going by the fractal self-similar nature of the measured state variables.

    cheers,

    Ian

  4. Ian
    Posted Feb 11, 2007 at 12:49 PM | Permalink

    In comment #3 I am referring to self-similarity within the current interglacial. At ice-age time scales self-similarity disappears and interestingly we have a half decent chance at prediction.

  5. Posted Feb 11, 2007 at 12:51 PM | Permalink

    Re 3: Ian,
    Molecular diffusivity tends to provide a cutoff frequency at small scales. So, if you expect ‘climate’ behavior to be self-similar at all scales, that’s probably not true. In climate flows, these small scales are truly small.

    Re 2: People do a lot of work trying to predict the average behavior of non-linear systems. Predicting the average long term behavior is often no easier than predicting instantaneous behavior over a long time period. ( Depending on the problem doing one may be more difficult than the other– it just depends.)

    Fluid mechanicians doing research have run DNS (direct numerical simulation) codes that fully resolve the flow behavior at all scales and then take averages to find out what happens on average. (These DNS computations are limited to relatively simple flows and take a lot of computational power.)

    Researchers have developed codes to predict the average behavior of some systems involving fluid flow, heat transfer and mass transfer fairly well without resorting to tuning; other flows can’t be predicted well.

  6. Gary Strand
    Posted Feb 11, 2007 at 12:58 PM | Permalink

    Re: 1

    The CCSM3 has no “flux adjustments”, if you’re talking about what I think you mean.

  7. Ian S
    Posted Feb 11, 2007 at 1:05 PM | Permalink

    #5 unpredictable at all scales was far too strong a phrase for me to use (the weather this second will probably be extremely similar to the weather in the next second, so obviously self-similarity does not apply at this short a time scale). How about “unpredictable at most scales of interest with regard to predicting the effects of global warming?” One has to be so extremely careful with their choice of words on this site!! All these hypervigilant auditors 😉

  8. Ian S
    Posted Feb 11, 2007 at 1:06 PM | Permalink

    #7 “.. effects of GHGs, not global warming” just to preempt that audit objection 😉

  9. Gerald Browning
    Posted Feb 11, 2007 at 1:10 PM | Permalink

    John A (#1):

    The large, nonphysical dissipation (compared to the real atmosphere) used in the atmospheric portion of the coupled climate models together with the mesh not yet resolving the smaller scales of motion under 10 km prevents the unbounded exponential growth from being seen at this stage. However, Willis has indicated that the CCSM3 is doing worse than the CCSM2 and this could be the beginning of the problem. I am waiting to hear from Willis in this regard.

    If one peruses the CCSM documentation, one sees that only a few resolutions are guaranteed to work. If the equations are well posed and no tuning of physics or dissipation is required, the convergence to the true solution of the continuous system should occur automatically for an accurate and stable numerical method, at least for short time periods. This is not the case for the unforced, hydrostatic system.

    Also note that the non-hydrostatic models are closer to the full inviscid (viscous) NS equations and once the solution is properly resolved by a stable numerical method, the solution should be closer to the correct continuous solution. These solutions have indicated severe problems with fast, bounded exponential growth and cascade to scales not resolvable by a model.

  10. TAC
    Posted Feb 11, 2007 at 1:20 PM | Permalink

    #3 Ian, your point that

    most of the state variables of climate exhibit fractal self similarity

    is entirely consistent with what I have observed, both in climate datasets and in a whole variety of other types of geophysical datasets. However, I tend to believe that the fractal nature of climate is a fundamental property of the system. I am not sure how it could be connected with errors in initial condition (or, for that matter, in how we measure the natural system).

  11. Gerald Browning
    Posted Feb 11, 2007 at 1:33 PM | Permalink

    Margo (#5):

    I agree with all of your statements. The problem with the hydrostatic system is that it is ill posed. Instead of running complicated coupled climate models for hours on end, this can be resolved by running the inviscid, unforced CAM model in the presence of a simple jet at higher and higher resolutions for less than one day of simulated time. A fairly cheap method to determine if there is a problem with the continuous system?

  12. Ian S
    Posted Feb 11, 2007 at 1:48 PM | Permalink

    #10 TAC,

    I think the fractal nature ( self-similarity) can be connected to the sensitivity to initial conditions and long-term predictions in the following way:
    – We cannot predict short-term ( one week to one-month kind of thing) state variables due to the sensitivity to initial conditions.
    – The self-similar nature of the behavior of these state variables from one week to decades or longer is very strong.
    – To me, this suggests that the long-term behavior of the state variables is equally unpredictable?

    It’s certainly not bulletproof 😉 — this is just my feeling, but it seems reasonable.

    cheers,

    Ian

  13. TAC
    Posted Feb 11, 2007 at 3:21 PM | Permalink

    #12 Ian, thanks for the reply. I find the self-similarity issue fascinating (btw, Demetris Koutsoyiannis, an occasional visitor to CA, has written a lot on this topic), even though I don’t have a good physical explanation for it. In any case, it continues to amaze me that so many studies in climate science (not to mention climate modeling) fail to recognize this seemingly ubiquitous property of natural systems.

  14. Willis Eschenbach
    Posted Feb 11, 2007 at 3:44 PM | Permalink

    Jerry B., you say:

    The large, nonphysical dissipation (compared to the real atmosphere) used in the atmospheric portion of the coupled climate models together with the mesh not yet resolving the smaller scales of motion under 10 km prevents the unbounded exponential growth from being seen at this stage. However, Willis has indicated that the CCSM3 is doing worse than the CCSM2 and this could be the beginning of the problem. I am waiting to hear from Willis in this regard.

    I answered elsewhere that I had misremembered, it was CM2.1 that was worse than CM2.0, not CCSM3 vs. CCSM2. However, the following comment on the GISS model might be of interest in that regard:

    Our experience has been that while some aspects of a simulation can be improved by increasing the resolution (frontal definition, boundary layer processes, etc.), many equally important improvements are likely to arise through improvements to the physical parameterizations.

    Indeed, some features (such as the stratospheric semiannual oscillation, high-latitude sea level pressure, or the zonality of the flow field) are degraded in higher resolution simulations, indicating that resolution increases alone, without accompanying parameterization improvement, will not necessarily create a better climate model. As models improve and computer resources expand, there will always be a tension between the need to include more physics (tracers, a more resolved stratosphere, cloud microphysics, etc.), to run longer simulations, and to have more detailed vertical and horizontal resolution. The balance that is struck will be different for any particular application and so a flexible modeling environment is a prerequisite. In this paper, we therefore show results from three different configurations that differ principally in their horizontal and vertical resolution.

    w.

  15. Francois Ouellette
    Posted Feb 11, 2007 at 4:35 PM | Permalink

    Willis et al.

    I agree. Personally, I’m not at all convinced that GCM’s are useful at all if all you want to know is the response to increased GHG’s. I mean, you’re trying to simulate a hugely complex system that is notoriously nonlinear, just to figure out what’s going to happen if you only change ONE parameter?

    On the one hand, GCM’s should be useful to understand the characteristics of a STABLE climate: if, starting from first principles, plus a limited number of parameters, you can reproduce some emergent properties of the Earth’s climate, then that helps you decipher why and how those properties come about. But we all understand that what you get there is a qualitative picture, with only modest numerical accuracy w/r to observations. Still, a useful tool. IMO, if you ever get to the point where you can reproduce the climate’s natural variability on a global as well as a regional scale, then you know you’re in business. Are we there yet? I don’t know, not being a specialist. But that’s only for a STABLE climate, i.e. forcings do not change, or they change within their known range of variability and known dynamics.

    Now you really want to know what’s going to happen if you change ONE forcing, say GHG. Two things can happen. One, if the perturbation is small enough, the response is linear. Fine, but then you don’t really need a complex GCM. All you need is a simple physical model, which will give you the actual response (say forcing + feedback), and whether you plug this into the GCM or not, you get roughly the same answer. And it has been shown here that this is exactly what happens: GCM simulations do no better than a simple linear response! IOW what you get out of the GCM is exactly what you put in there in the first place.

    But it MAY be the case that the system does NOT respond linearly. In fact, the Earth’s climate is known not to respond linearly to small changes in forcings: just look at the glacial cycle, where the minuscule changes in solar insolation result in huge global and regional responses. So we know that the climate response is full of nonlinearities and thresholds, in which case a tiny change in the parameters leads to a very different response. The paper by Didier Paillard on glacial cycles is a nice example: he shows a model that accurately reproduces the sea level for the past million years, using threshold values. But if he wants to forecast the future, he has to choose within a range of values for one threshold that give indistinguishable responses for the past, but hugely different responses for the future! So, are the GCMs useful in that case? Not at all, because if you try to capture this nonlinear behavior, you will always be limited by the resolution, but if you increase the resolution, you get right into the scale of the turbulent behavior and you’re screwed again because now you have the resolution, but not the boundary conditions! So you still have to resort to parametrization, knowing that slightly different values of your parameters will give you very different responses.

    That’s why I don’t believe that GCM’s are the right approach if all you want to know is the response to GHG’s. A better physical understanding should at least give you an approximate answer that you can roughly trust. If I were in control of the climate research budgets, I would put less money into GCM’s and more into simpler, physical models. At least in a simple model, you can figure out what’s going on. GCM’s are so complex that at some point nobody really knows what they’re doing.

  16. Posted Feb 11, 2007 at 4:42 PM | Permalink

    Hi! The general property of some differential equations that causes perturbations of the initial configuration to grow exponentially in time is referred to as “instability” of these equations. For example, all equations in field theory with tachyonic modes are unstable because the tachyons (particles with negative squared mass) grow with time exponentially for reasonable initial conditions.

    For Navier-Stokes equations, one deals with another problem, namely turbulence. The typical solution in the turbulent regime has a fractal character, and some invariants defining the distance between two configurations are divergent.

    Nevertheless, general classes of these hydrodynamic equations are unstable. This instability certainly occurs whenever viscosity becomes non-constant, etc.; see e.g.

    http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=365860

    which is a 1965 paper with 100+ citations. Incidentally, I just recently linked to the book called Apollo’s arrow by David Orrell

    http://motls.blogspot.com/2007/02/david-orrell-apollos-arrow.html

    which is not only one of the bestsellers in Canada right now but this book by a mathematician in Vancouver exactly analyzes whether the increasing errors of climate models at longer timescales are due to “chaos” as in chaos theory or due to other things. He essentially concludes that they’re due to errors in the models. But instability of equations is discussed, too.

    The book is probably fun because the author also analyzes how environmentalism fits into the evolution tree of religions – and people’s everlasting desire to predict the far future of our civilization even though these predictions have always been wrong.

  17. Bob Weber
    Posted Feb 11, 2007 at 4:43 PM | Permalink

    Is the climate deterministic? If so then a GCM can be written if we know all of the parameters and their behavior.
    Or is the climate chaotic? If so then no computer program can make reliable predictions beyond a relatively short time.

    Bob

  18. Gerald Browning
    Posted Feb 11, 2007 at 4:50 PM | Permalink

    Willis,

    Sorry about that. I hadn’t read your other post when I responded here.

    Thanks for the comment on the GISS model. If the GISS model is a hydrostatic model, parameterizations will be only one issue. The other will eventually appear if the resolution is adequately increased and the dissipation reduced, no matter how the (nondissipative) physical forcings are tuned. This is a problem in the basic dynamical system.

    I am very familiar with the so called microphysics packages (reference on request). They have their own set of mathematical and physical issues.

    It seems to me that there are two obvious questions here.

    Why are there so many different physical parameterizations for a given component of the forcing if they all purport to represent that component of forcing at a given (or any) resolution? Mathematically you would expect them all to reduce to the same function unless they are being used to tune the model? (This could explain the error bars you cited?)

    Why is NCAR proposing to insert the non-hydrostatic model into the hydrostatic model or change to a non-hydrostatic model if the hydrostatic model converges to the correct physical solution?

    Next I need to send Steve a post that supports his hypothesis.

    Jerry

  19. bender
    Posted Feb 11, 2007 at 5:04 PM | Permalink

    Re #18
    Bob, chaos is deterministic (= extreme sensitivity to initial conditions + an inability to know past conditions with absolute precision). Read Lorenz’s “Deterministic Nonperiodic Flow”.

  20. Hans Erren
    Posted Feb 12, 2007 at 3:45 AM | Permalink

    my go at a cool business as usual projection
    http://tech.groups.yahoo.com/group/climatechangedebate/message/5121

  21. Willis Eschenbach
    Posted Feb 12, 2007 at 4:04 AM | Permalink

    I’d been wondering why the climate models don’t show any negative feedback, because it always seemed to me that increasing cloud albedo (from increasing evaporation in a warming world) would lead to less sunlight hitting the earth, and that the models should show that.

    Upon closer examination, I find that despite large errors in cloud cover (~15% in the GISS E model) the albedo is quite close to the measured value. This is because the albedo is parameterized, rather than calculated. To quote from the Schmidt description of the GISS E model,

    The net albedo and TOA radiation balance are to some extent tuned for, and so it should be no surprise that they are similar across models and to observations.

    Near as I can tell, “to some extent” means they calculate the surface albedo, and “parameterize” the cloud albedo, but they’re not real clear about that.

    In any case, this means that the models are incapable of modeling albedo feedbacks, because the albedo is not tied to the physical conditions, but is just a best guess.

    Given assumptions of that size … there’s nothing the models can’t “prove”. Exponential growth damped out by ultra-high viscosity? Hey, no problem …

    w.

  22. TAC
    Posted Feb 12, 2007 at 5:32 AM | Permalink

    #16 Lubos, thank you for clarifying the issue of stability of differential equations:

    The general property of some differential equations that causes perturbations of the initial configuration to grow exponentially in time is referred to as “instability” of these equations.

    Assuming instability of the Navier-Stokes equations when applied to the climate problem, is it necessarily the case that one cannot predict a meaningful “average effect” (where the average is computed over multiple realizations) of a change to the coefficients of the differential equation?

    Specifically, given the differential equation for the “climate system” with Conc(CO2)=X, we could simulate many realizations based on random initial conditions and then compute a statistical description (mean, variance, etc.) of the climate for Conc(CO2)=X. Similarly, you could do the same experiment for a Conc(CO2)=2*X. Is it not meaningful to compare the two population averages, for example, as a way to estimate the effect of doubling CO2?
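
    A toy sketch of the experiment described above, using the Lorenz-63 system as a stand-in for the “climate system” and its parameter rho as a stand-in for the forcing being changed (all values here are arbitrary illustrations, not climate quantities): individual trajectories are unpredictable, but the ensemble statistics under the two parameter settings can still be computed and compared.

    import numpy as np

    # Ensemble of Lorenz-63 runs from random initial conditions; compare the
    # ensemble-mean time-average of z for two values of the parameter rho.
    def ensemble_mean_z(rho, n_members=200, dt=0.005, n_steps=40000, seed=0):
        rng = np.random.default_rng(seed)
        sigma, beta = 10.0, 8.0 / 3.0
        state = rng.normal(0.0, 5.0, size=(3, n_members))  # random initial conditions
        z_sum = np.zeros(n_members)
        n_avg = 0
        for step in range(n_steps):
            x, y, z = state
            dstate = np.array([sigma * (y - x),
                               x * (rho - z) - y,
                               x * y - beta * z])
            state = state + dt * dstate                     # forward Euler step
            if step > n_steps // 2:                         # discard spin-up
                z_sum += state[2]
                n_avg += 1
        member_means = z_sum / n_avg
        return member_means.mean(), member_means.std()

    for rho in (28.0, 35.0):
        m, s = ensemble_mean_z(rho)
        print(f"rho = {rho}:  ensemble mean of time-averaged z = {m:.2f} (spread {s:.2f})")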

  23. Steve McIntyre
    Posted Feb 12, 2007 at 9:49 AM | Permalink

    The net albedo and TOA radiation balance are to some extent tuned for, and so it should be no surprise that they are similar across models and to observations.

    Whether the methodology is any good or not, what a ridiculous description of the methodology? “to some extent tuned for” – I wonder what the operational tuning procedure was.

  24. Steve Sadlov
    Posted Feb 12, 2007 at 10:36 AM | Permalink

    RE: #21- The climate system obviously has multiple parasitics. These are the key. It is what the AGW fanatics miss (or, choose to paper over).

  25. gb
    Posted Feb 12, 2007 at 11:18 AM | Permalink

    GB,

    I do not understand some of the things you are saying. Why do you say that the dissipation is unphysical? If one simulates a turbulent flow but does not fully resolve all scales, the simulation should crash because the system cannot get rid of its energy (at least if there is a forward cascade of energy). In that case a subgrid model that acts as a sink of energy must be implemented, which is quite physical indeed. The rapid cascade in high resolution models can also be physically correct, and there is in principle also nothing wrong with the idea that the parameterization must be changed as the resolution is changed. To develop a subgrid model or a parameterization one has to invoke assumptions/approximations at some point. In other words, there does not exist a unique parameterization; there are several possibilities, and one can tell only after testing which one is better, but fundamentally there is nothing wrong with that. At the moment, observations are lacking and the knowledge about the dynamics of smaller/mesoscale motions in the atmosphere and oceans is rather limited, and this hinders the development of more physically correct and accurate parameterizations.

  26. Gerald Browning
    Posted Feb 12, 2007 at 12:10 PM | Permalink

    Lubos (#16):

    Exponentially growing solutions are not unstable in the mathematical sense because they can be computed for short periods of time if the initial conditions are correct or close to correct. The problem with these solutions is that they can’t be computed for long periods of time because of the exponential growth, i.e., any errors will grow at an exponential rate.

    Ill posedness is an entirely different animal. There the exponential growth depends on the spatial wave number and there is no bound on the growth in time even for infinitesimally short periods.
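
    A toy numerical contrast of the two situations (the numbers are arbitrary, chosen only for illustration): in a well-posed problem with exponential growth, the amplification of an error over a fixed time t is exp(sigma*t), independent of resolution; in an ill-posed problem the growth rate increases with the spatial wavenumber k, so refining the mesh (raising the largest resolved k) makes the amplification over any fixed time, however short, arbitrarily large.

    import numpy as np

    # Amplification of an initial error over a fixed time window t:
    #   well-posed case: growth rate sigma is fixed, independent of wavenumber
    #   ill-posed case:  growth rate is proportional to the wavenumber k
    sigma = 2.0        # arbitrary fixed growth rate
    c = 0.05           # arbitrary constant multiplying k in the ill-posed case
    t = 0.5            # a short, fixed time window

    for k_max in (10, 40, 160, 640):            # largest resolved wavenumber
        bounded = np.exp(sigma * t)             # does not change with resolution
        unbounded = np.exp(c * k_max * t)       # blows up as the mesh is refined
        print(f"k_max = {k_max:4d}:  bounded factor = {bounded:6.2f}   "
              f"ill-posed factor = {unbounded:.3e}")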

    Jerry

  27. Gerald Browning
    Posted Feb 12, 2007 at 12:24 PM | Permalink

    Lubos (#16):

    Addendum. You might want to peruse the Math. Comp. manuscript I cited earlier. There solutions of the incompressible NS equations are computed deterministically for long periods of time. There are mathematical estimates for the smallest scale that will appear for a given kinematic viscosity (see Henshaw, Kreiss, and Reyna cite in that manuscript). The reason that turbulence has not been computed correctly before is because the solution was not properly resolved or the boundaries handled incorrectly.

    Jerry

  28. Gary Strand
    Posted Feb 12, 2007 at 12:26 PM | Permalink

    Re: many

    Dr. Browning, since you assert that systems of the type you’re critiquing suffer from such rapid growth that they’re untenable for their use in climate simulation, then how do such systems avoid quick “unphysicality”, i.e., CCSM3 can be run for hundreds/thousands of years of model time and yet remain physical. Is it because the model is continually forced back to a physical state, or is it because the parameterizations and approximations are so tuned as to keep the model physical?

  29. Gary Strand
    Posted Feb 12, 2007 at 12:27 PM | Permalink

    Re: 29

    Please substitute “rapid error growth” for “rapid growth”.

    (It sure would be nice to go back and make minor changes in one’s posts!)

  30. Posted Feb 13, 2007 at 11:24 AM | Permalink

    Re 16:

    Nevertheless, general classes of these hydrodynamic equations are unstable. This instability certainly occurs whenever viscosity becomes non-constant etc, see e.g.

    http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=365860

    In that paper, Yih is discussing a special case. You don’t need non-constant viscosity to make flows unstable. Couette flow is unstable at high enough Reynolds numbers.

    Yih was looking at a low Reynolds number (Re) flow because people use Couette flow viscometers to measure the viscosity of fluids. Generally, people assume these viscometers will work– because low Re flows are stable. So, in principle, you run the test, check the viscosity, then recheck the Re number. If it’s low, the test is good. However, if the viscosity is not constant, the viscometer still won’t work. That results in an error.

  31. Ian S
    Posted Feb 13, 2007 at 11:38 AM | Permalink

    #29 Gary,

    What is meant by unphysical? I assume that you mean that the models match past temperatures (sort-of)? But isn’t this basically circular logic? Models are fine-tuned until they match past temperatures, are they not? On the other hand, when it comes to predicting future temperatures and properties they fail spectacularly in a very short time period (as has been mentioned in several other posts). It is my understanding that the models do not match reality (Antarctica temperatures, troposphere temperatures, recent ocean cooling, so on and so forth) and (I may be wrong about this one) are not even capable of predicting El Niño etc.? So, in this sense they become unphysical very quickly. Use your model to predict temperatures 5/10 whatever years from now. When the time comes we will see how ‘physical’ the model really is 😉

    By the way, one can create models to match past stock market histories very nicely and general stock behavior even more easily, just don’t kid yourself that this means they can predict the future. I believe this is what is occurring.

    Ian

  32. Steve Sadlov
    Posted Feb 13, 2007 at 11:41 AM | Permalink

    FYI – En queue at RC:

    RE: #73 – As with all systems, the bugaboo in the stability analysis is the parasitics. Do they result in an asymptotic situation, where, when you approach the “safe operating envelope,” the parasitics result in a “fold back” innately due to their increasing contribution to energy dissipation? Or are they more in the realm of something that adds to the main “control loop” and results in a “runaway” situation? The prevailing theory of the orthodoxy appears to be something like the latter, albeit on a limited scale (for example, the “runaway” in this case would not be at the total system level but only within the behavior of the parasitic terms, resulting in a new “higher equilibrium”). My own experiences with complex systems suggest that as you scale, the sorts of mechanisms that might result in a complete system runaway or even a runaway of some parasitic term across the system, become more and more implausible. That’s because with increasing scale and complexity, the opportunities for loss of efficiency and internally dissipative mechanisms increase. Consider grid lock.
    by Steve Sadlov

  33. Gary Strand
    Posted Feb 13, 2007 at 11:49 AM | Permalink

    Re: 32

    By “unphysical” I mean that the model ends up with the climate of Venus, not Earth.

    If, as Gerry asserts, error growth very rapidly makes a system completely unstable, then how do climate models manage to run for decades of model time without rapidly creating planets that are not Earth? In the old days, climate models did “flux corrections”, but many models have moved away from them and don’t rely on them at all.

  34. Ian S
    Posted Feb 13, 2007 at 12:03 PM | Permalink

    #34
    Gary — “By "unphysical" I mean that the model ends up with the climate of Venus, not Earth.”

    Oh… Well, no offense, but one can capture the ‘behavior’ of a chaotic system very easily so this is not what I would call a great accomplishment. In fact you can use completely unphysical equations to do so. For the stock market, for example, a random walk model captures behavior quite nicely.

    Being able to replicate ‘behavior’ tells you nothing about the validity of your model (other than it is not immediately invalid).

    As for your example, any model that allows the modeled climate to achieve a state of Venus (we are already near the maximum temperature — http://www.ianschumacher.com/gwc.html) would not be obeying the laws of physics at all (it is simply not a possible state of the earth) and so would be a very bad model indeed.

    Ian

  35. Dave Dardinger
    Posted Feb 13, 2007 at 12:17 PM | Permalink

    re: #34,

    I think what needs to be done is to examine exactly what is being done in a model, in a mathematical sense. Obviously we can’t look at every equation and all grid cells, but something could still be done by looking at likely critical paths.

    Let’s say a worry is that CCSM3 doesn’t allow a reasonable amount of convective heat flow to the upper atmosphere as surface temperatures increase. What equations / methods are used for this process and what is a link to the code where the equations / methods are implemented? Also is there any point elsewhere in the code which examines the results of this process and adjusts something else to keep the results “physical”?

    What I think would ease everyone’s minds here is to work through a few such examples with someone who can ease the search (such as yourself) and people who can throw pointed questions out which can be batted back and forth without letting people get upset and political. There are actually a lot of people here who’d be willing to dig in and find things if there were a specific question trying to be answered and there was a commitment to actually come to a final conclusion on the situation. More long-range, if say 5-6 such “problems” were worked through and the majority agreed that the methodology used in CCSM3 was reasonable and physical, then there’d be a greatly enhanced tendency to accept results produced by the model.

    Pending getting a buy-in by yourself or others to do this sort of thing, it might be useful for people to suggest such limited audit problems which might be solved.

  36. Bill F
    Posted Feb 13, 2007 at 12:19 PM | Permalink

    Gary,

    Unphysical doesn’t have to be related to the end state of the model. If for instance, the model accurately predicts the things it is tuned to predict, such as avg global temperature and co2 concentration, yet in the course of doing so, errors force the relative humidity to rise to close to 100% without a corresponding increase in rainfall or cloud albedo, then such a model would be unphysical. Such a model may accurately back forecast temperature and co2, but it has to resort to making the other parameters not match what is likely in the physical world to do it. I think that is where Jerry B is coming from. If the equations propagate errors that grow exponentially, yet the model is tuned to force the output to still match the observed data set, then there is no way to know how reliably the model will predict future behavior. Just because the model is able to function in a stable manner over a long period of time doesn’t mean it isn’t propagating large errors. If the tuning of the model is such that it minimizes the effect of the errors to maintain the stability of the model, then the model has no predictive capability because it has essentially removed the ability of certain errors to affect the model output.

  37. Dan Hughes
    Posted Feb 13, 2007 at 12:27 PM | Permalink

    I have some comments on numerical solution methods that I think are applicable to AOLGCM calculations here.

    All corrections and comments appreciated.

  38. fFreddy
    Posted Feb 13, 2007 at 12:42 PM | Permalink

    Re #34, Gary Strand

    In the old days, climate models did “flux corrections”, but many models have moved away from them and don’t rely on them at all.

    So, just to be clear, your CCSM model does not use flux corrections ?

  39. Posted Feb 13, 2007 at 12:51 PM | Permalink

    Re 18: Jerry, you said:

    I am very familiar with the so called microphysics packages (reference on request). They have their own set of mathematical and physical issues.

    I’m requesting!

    Re 23:

    Assuming instability of the Navier-Stokes equations when applied to the climate problem, is it necessarily the case that one cannot predict a meaningful “average effect” (where the average is computed over multiple realizations) of a change to the coefficients of the differential equation?

    One can often predict a mathematically meaningful “average effect” and it has been done. You can look up “Direct Numerical Simulation” (DNS) and find papers where people have done this in a variety of cases. They solve the NS equations with no parameterizations. To get average behavior, they calculate averages directly from many instances. (Note: Your search will not produce papers describing solutions in flows anywhere near as complicated as climate models because no computer is large enough to permit such runs. GCM’s contain parameterizations for a number of things. Also, once you introduce a parameterization, you’ve introduced the same approximation into every run. So, averaging will just tell you the average effect of a model containing that approximation. )

    Re 26:

    Why do you say that the dissipation is unphysical?

    I can’t answer for Jerry, but I’d call the dissipation in a model “unphysical” if it’s in there for any one of three reasons:
    1) the sub-grid models do not correctly describe the average effect of small scales (which can happen because it’s not easy to capture the average effect in a manner that is correct in all possible conditions.)
    2) the dissipation is introduced not to reflect any physical instability, but only to avoid a numerical instability. (This appears to be done in some GISS models near the poles where a false, and very large, viscosity is introduced to avoid problems when the Courant number for the grid cell approaches 1.)
    3) the discretization process itself introduces a false diffusivity that is in some way proportional to the grid size (or some power of the grid size.)

    These ‘unphysical’ viscosities may or may not cause noticeable inaccuracies– but I’d say the term “unphysical” would apply.

    Jerry Re 27:
    Lubos is using the term instability the way I find it meaningful.

    People in continuum mechanics traditionally call certain types of analyses "stability analyses". One finds a steady state (or sometimes just a stable time-dependent) solution– which may apply in some condition. Then, you study that solution to determine if it is stable to small deviations from the steady state. If you introduce a small disturbance to the solution, and find it decays, the solution is called "stable". If the small deviation grows exponentially, it is "unstable".
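
    A bare-bones numerical version of that procedure, using the logistic equation du/dt = u(1 - u) as a toy system (chosen only for illustration): its steady states are u = 0 and u = 1; a small disturbance is added to each and the equation is integrated to see whether the disturbance decays or grows.

    # Toy stability analysis: perturb each steady state of du/dt = u*(1 - u)
    # and integrate forward to see whether the perturbation decays or grows.
    def f(u):
        return u * (1.0 - u)

    dt, n_steps, eps = 0.01, 1000, 1e-3
    for u_star in (0.0, 1.0):                  # the two steady states
        u = u_star + eps                       # small disturbance
        for _ in range(n_steps):
            u = u + dt * f(u)                  # forward Euler
        deviation = abs(u - u_star)
        verdict = "unstable (disturbance grew)" if deviation > eps else "stable (disturbance decayed)"
        print(f"steady state u* = {u_star}: deviation after t = {dt*n_steps:.0f} is {deviation:.2e} -> {verdict}")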

  40. Posted Feb 13, 2007 at 12:58 PM | Permalink

    Dear TAC #23,

    instability implies that the precise details of the input are so important that the output – future values of quantities – can’t be viewed as a simple function of the input – the initial conditions.

    In reality, the exponential growth never continues indefinitely. At some point, new terms or regulation mechanisms become important and stop this exponential growth. The whole world is, in this sense, stable.

    This fact that you must rely on other phenomena to get rid of the instability means that the equations you started with can’t tell you anything about the long-term behavior of the system – the long-term behavior of the system is governed by the other regulating mechanisms that were neglected in the unstable equations.

    This really means that you should only trust equations that are stable within the timescale where they’re used.

    Best
    Lubos

  41. Gerald Browning
    Posted Feb 13, 2007 at 1:16 PM | Permalink

    Gary Strand (#29):

    I find it amusing that you are unable to answer basic questions about the coupled climate model that you supposedly code for IPCC, but claim to know so much about ill posedness. Large dissipation can overcome many model flaws. Please run the same set of experiments on the CAM model that Lu et al. did on the WRF. Hand waving doesn’t cut it with me.

    Jerry

  42. Gerald Browning
    Posted Feb 13, 2007 at 1:35 PM | Permalink

    Willis (#21):

    Consider an arbitrary homogeneous (unforced) nonlinear time-dependent system :

    u_t - N(u) = 0

    Suppose you want a solution v. Substitute that known solution
    into the equation to obtain the forcing

    F = v_t - N(v)

    that will produce the solution you want:

    u_t - N(u) = F

    Thus I can produce the earth’s atmosphere from any set of equations.
    Scary?
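
    A toy numerical version of this trick (the operator N and the target v below are arbitrary choices, picked only to illustrate the construction): define F = v_t - N(v) and integrate the forced equation u_t = N(u) + F; the computed u tracks whatever v you decided you wanted.

    import math

    # Manufactured-forcing sketch: choose any N and any target v(t), set
    # F = v_t - N(v), and the forced equation u_t = N(u) + F reproduces v.
    def N(u):
        return -u**3                      # an arbitrary nonlinear operator

    def v(t):
        return math.sin(t)                # the solution we decide we "want"

    def F(t):
        return math.cos(t) - N(v(t))      # F = v_t - N(v)

    dt, u = 1e-3, v(0.0)
    for i in range(10000):                # integrate from t = 0 to t = 10
        t = i * dt
        u += dt * (N(u) + F(t))           # forward Euler on u_t = N(u) + F
    print(f"computed u(10) = {u:.4f},   target v(10) = {v(10.0):.4f}")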

    Jerry

  43. Gerald Browning
    Posted Feb 13, 2007 at 1:57 PM | Permalink

    Margo (#40):

    Here is a reference that describes typical microphysics using scaling as is normally done in fluid dynamics:

    Lu C., P. Schultz, and G.L. Browning (2002):
    Scaling the microphysics equations and analyzing the variability of hydrometeor production rates in a controlled parameter space.

    Advances in Atmospheric Sciences

  44. TAC
    Posted Feb 13, 2007 at 2:13 PM | Permalink

    #41 Lubos, Thanks for the reply.

    TAC

  45. Gerald Browning
    Posted Feb 13, 2007 at 2:15 PM | Permalink

    Margo (#40):

    Well stated. Only a few questions. In number 2, I assume you (also) mean numerical instability like nonlinear instability? If a model uses a parameterization that is in error by 100%, then the forecast or error bars do not have much meaning?

    Jerry

  46. Steve Sadlov
    Posted Feb 13, 2007 at 2:41 PM | Permalink

    RE: #41 – Danged squared relationship between mass and energy again! Wormholes notwithstanding, LOL ….

  47. Gerald Browning
    Posted Feb 13, 2007 at 3:31 PM | Permalink

    Bill F. (#37):

    Very well stated. See corresponding mathematical explanation
    in #43.

    Jerry

  48. Posted Feb 13, 2007 at 3:51 PM | Permalink

    Gerald,
    I’m not sure precisely what you are asking — I assume those are two separate questions?

    For Q1:

    Some physical systems are unstable themselves. (Example, motionless salt water over fresh water.) You can sometimes create them, but if the system is disturbed, the state will change. (So, for example, some salt water will descend, fresh will rise, then the whole system will overturn.) This has nothing to do with numerics.

    In other cases, the system might be stable, and if you represent the system using continuous equations, a solution might be stable. However, if you discretize the equations, you can get numerical instabilities. (So, when central differencing convective heat transfer problems, if the grid box has a Peclet number greater than 2 — I think — the numerical solution will develop weird wiggles that have nothing to do with reality.)

    There are different ways to deal with these wiggles– some better, some worse. Sometimes those writing code will decide to just introduce a false diffusivity (thermal or viscous depending on the specifics) to solve the problem. It’s not a sophisticated way to deal with the problem, but sometimes the effect of the false diffusivity is known to be minimal and it’s ok to do it. Other times the effect is significant, and it’s not ok to do this. (Often, you don’t know what the effect is a priori– or even a posteriori, and that’s where the danger comes in.)

    In one of the GISS models, they happen to introduce a false diffusivity to deal with this type of numerical instability at the poles.

    For Q2:

    If a model uses a parameterization that is in error by 100%, then the forecast or error bars do not have much meaning?

    Well, I can only give an “on the one hand on the other hand answer”.
    If a parameterization is flat out wrong, then the forecast or error bars have no meaning– but that’s not quite what I meant.

    It’s more like this: If you flip a biased coin a zillion times, the average result will differ from the one you get using an unbiased coin.

    Here’s the issue in a more “climate” type discussion.
    Suppose you make a very simple parameterization to predict when rain will fall in a computational grid. You decide it will rain when the grid cell’s relative humidity is 80%. You’ve now picked that.

    You know in reality there isn’t a “constant” relative humidity that means rain. But you figure it will sometimes not rain unless the rel. h is 82% and sometimes it will rain when it’s 78%– but you’ve convinced yourself 80% is a good number.

    Now, you code that in. Splendid!

    But now there are at least two problems:
    1) Suppose the *real* answer is that, on average all over the world, it rains when the rel. humidity is 85%. Then, if you run your 80% model, rain will consistently fall too soon. You can run the model 100 billion times and take the average– but your average will rain too soon. There is no way to fix this by averaging. (And of course, in a climate model effects will propagate.)

    2) The second problem: Because effects propagate, other things will go wrong. For example, if your “model” planet rains too soon, there may be fewer clouds– or clouds may be in the wrong places. So, you’ll change the heat addition. Other things will happen.

    So, on average you will make mistakes in other equations. There is no magic automatic canceling of these problems.

    So, you see, you can run this over and over and over, but there is no reason to expect taking the average will result in the right answer for the “real” system. What you get is the converged solution for the model.
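
    A toy numerical version of this point (the random-walk step, the thresholds, and the reset value are all invented purely for illustration): humidity does a bounded random walk, “rain” resets it when it crosses a threshold, and averaging over more and more realizations makes the estimates converge — but the model with the wrong threshold converges to the wrong rain frequency. The bias does not average out.

    import numpy as np

    # Biased-parameterization sketch: rain frequency for a given humidity
    # threshold, averaged over an ensemble of random-walk realizations.
    def mean_rain_frequency(threshold, n_runs, n_steps=4000, seed=1):
        rng = np.random.default_rng(seed)
        rh = np.full(n_runs, 0.5)                  # relative humidity per run
        rain_count = np.zeros(n_runs)
        for _ in range(n_steps):
            rh = np.clip(rh + rng.normal(0.0, 0.03, size=n_runs), 0.0, 1.0)
            raining = rh >= threshold              # the parameterization
            rain_count += raining
            rh = np.where(raining, 0.5, rh)        # rain resets the humidity
        return (rain_count / n_steps).mean()

    for n_runs in (10, 100, 1000):
        model = mean_rain_frequency(0.80, n_runs)  # the 80% parameterization
        truth = mean_rain_frequency(0.85, n_runs)  # the "real" 85% threshold
        print(f"{n_runs:5d} runs: model rain frequency {model:.4f}, 'true' {truth:.4f}")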

  49. jae
    Posted Feb 13, 2007 at 3:53 PM | Permalink

    A lot of the mathematics discussed here is over my head, but IMHO, the climate models are nothing more than a classical example of overfitting by careful selection of parameters.

  50. Pat Frank
    Posted Feb 13, 2007 at 4:08 PM | Permalink

    About 40 years ago, John Bell wrote his classic “Speakable and Unspeakable in Quantum Mechanics.” It looks to me that Jerry and Margo should write ‘The Speakable and Unspeakable in Climate Modeling.’ It would be very timely, and bring a properly modest perspective back to a field that now looks out of control. Even just a critical paper by that name, and discussing what you folks are saying here, would have a powerful impact on the entire subject.

    I figure you two could turn that out in a month. Just in time to greet the unsuspecting WG1 Report.

  51. Posted Feb 13, 2007 at 4:19 PM | Permalink

    Pat,
    Unfortunately, I couldn’t turn it out in a month for two reasons:
    1) I have a real job that I need to attend to and
    2) I don’t know enough specifically about what’s in the models to know how close or far off they should be.

    My major difficulty with the stuff that I have found is that the few papers that I do find in peer reviewed journals give what I would call a ridiculously cursory explanation of the mathematical models in the codes. That would be ok if I also saw references to the “theory manuals” describing the mathematical models, but I don’t see those references. (In honesty, it’s the lack of citations to the theory manual that surprises me most. The full theory is generally so long you expect it in a theory manual or Ph.D. thesis. Still, you usually see such citations in papers in Journal of Fluid Mechanics, Physics of Fluids, etc. People are usually thrilled to cite their own theory manuals, Ph.D. theses, etc. So, I’m not understanding why things are this opaque in journals like J. Climate.)

    Now, mind you, I assume if I spent enough time, I’d find these theory manuals, but, unfortunately, I do have other things to do.

    But, when people here do discuss things I happen to know about, I can pipe in.

  52. Gerald Browning
    Posted Feb 13, 2007 at 5:05 PM | Permalink

    Margo (#49):

    Sorry about that. I am trying to juggle too many things at once (see new ITCZ thread) and am not the best writer in the world. 🙂 I did mean that as two separate questions.

    Q1: Another possibility is that an approximation is made to the well posed continuous system (in this case the hydrostatic approximation) and that has an impact on the reasonable mathematical properties of the unmodified original system.

    Q2: Well stated.

    I hope you continue to respond when you can. I believe your explanations
    are very well written and more helpful to the general reader than the mathematical equations, but the mathematics puts the arguments on a firm foundation that is not easy to refute.

    Jerry

  53. Pat Frank
    Posted Feb 13, 2007 at 5:54 PM | Permalink

    #52 — Margo, that’s why you’d pair up with Jerry. He apparently does know what’s in the models. Between the two of you, you’ve got everything well covered. You two would write a killer paper.

    And in that paper, you could note among all the other things, the peculiar lack of reference to theory manuals that seems unique to climate science modeling.

  54. Posted Feb 13, 2007 at 6:03 PM | Permalink

    Jerry: With regard to Q1, I didn’t mean my list was a complete list of all unphysical behaviors, approximations, or other unphysical anomalies that can be introduced into a code.

    As to 2: I usually like mixing math and words.

    Also, I do want to be careful to note that computations being what they are, sometimes, some approximations — including unphysical artificial diffusivity– may be ok. It depends on context. The difficulty is that sometimes there may be nothing in the output of the computer code to tell you when the approximation is being used outside a range where it applies.

  55. Paul Penrose
    Posted Feb 13, 2007 at 9:37 PM | Permalink

    A very interesting conversation. My question is simple: how can I even evaluate the meaning of the model outputs if there are no error bars reported with them? Reporting a temp increase of 3 degrees C +/- ? is, frankly, meaningless. So until an analysis of the internal errors and how they propagate through the simulation is performed, the models simply can’t be used to predict, or project if you wish, future temperature trends. Maybe Gary has a different perspective on this, but I’ve yet to see him address this issue.

  56. Gerald Browning
    Posted Feb 13, 2007 at 9:56 PM | Permalink

    Margo,

    We seem to be in complete agreement. 🙂

    I am asking Dan Hughes if he is interested in running the experiment in my earlier comment. I think it would be educational for everyone.

    Jerry

  57. Gerald Browning
    Posted Feb 13, 2007 at 10:11 PM | Permalink

    Pat Frank (#54):

    Thank you for the compliment to Margo and me. Obviously she is a very skilled writer
    and quite knowledgeable about the scientific issues. This blog has been an inspiration
    to me because I can discuss the issues openly and not have to battle reviewers
    with vested interests. Thanks to Steve M. for this opportunity and to all of the other posters that have discussed the issues in an open and frank manner.

    Jerry

  58. Gerald Browning
    Posted Feb 13, 2007 at 10:31 PM | Permalink

    Paul (#56),

    The first step is to make sure that the basic dynamical system is well posed (see earlier Numerical Climate Model post). If that is not the case, then all bets are off, i.e. no numerical method can converge to the correct solution. Sufficiently large dissipation might overcome this problem, but then the solution behaves very strangely. Heinz and I believe there is a flaw at this level in the hydrostatic equations. If Dan agrees to run the test, this can be resolved very quickly. If the system is well posed, then the system must be approximated by a stable and accurate numerical method. That should converge for short periods of time even for the nonlinear NS equations. None of these basic tests have been run except for those on my PC.
    The accuracy of the forcing has already been put into question, but that is down the road aways.

    Jerry

  59. Paul Penrose
    Posted Feb 14, 2007 at 9:42 AM | Permalink

    Re: #59
    Jerry,
    I’m glad you are looking into these issues, but this type of analysis should have already been performed by the modelers, if, and this is an important condition, if they wish to have their results used to bolster the AGW hypothesis and/or support advocacy for change of government policy with regard to climate and the environment. Now I’ve heard some modelers claim that their models are just research tools and that they can’t help how others use them, but this is BS. If the modelers feel that the results of their research have been used inappropriately by AGW advocates and policy makers (i.e. in the public sphere), then it’s their duty to stand up and make a public stink about it, otherwise it’s easy to take their silence as tacit approval.

  60. Gerald Browning
    Posted Feb 14, 2007 at 11:48 AM | Permalink

    Paul (#60):

    The models have not been tested in a rigorous manner and I agree with the rest of your statements, but egos and support are involved so don’t expect any miracles. The only way we have had any success revealing the flaws is thru rigorous mathematics and even that was an uphill battle.

    Jerry

  61. Dan Hughes
    Posted Feb 14, 2007 at 1:12 PM | Permalink

    Instability in Code Calculations

    The continuous equations might model physical instabilities, and the numerical solution method might reflect its own instabilities, but there are much more mundane things that will affect the stability of a calculation.

    Discontinuities in any and all dependent variables will cause instability. Many of these are the physical instabilities modeled by the continuous equations. Discontinuities in density, temperature, and velocity, for example, give the Kelvin, Helmholtz, Rayleigh, and Taylor (and combinations thereof) kinds of physical instabilities in fluids. Such discontinuities occur at the interfaces between subsystems modeled; atmosphere-ocean, for example. Some physical instabilities are governed by the critical gradient for some dependent variables; the Richardson number, for example, or gradients that are inverse to a natural stable state. Some flows might be surface-tension dominated, and discontinuities in surface tension, or even an adverse gradient in surface tension, could cause instabilities in these flows. All the physical instabilities will be bounded; mother nature just will not allow truly exponential unbounded growth of these instabilities. Linear analyses of the continuous equations applied to the onset of the motions following the instabilities might indicate exponential growth, but the analyses typically omit the nonlinear effects that will lead to bounded growth.

    The numerical solution method is required to correctly reflect the physical situation for each physical instability. It is often the case, though, that accurate spatial and temporal numerical resolution of the physical instability is not important for a particular application calculation.

    Discontinuities in the discrete geometric grid used to represent the solution domain for the problem will also cause instabilities and non-physical calculated results.

    Improper, or incorrect, boundary condition specification, in either the continuous or discrete numerical domain, can cause instabilities.

    The list is very long and each must be investigated in detail to ensure that the numerical method correctly reflects the physical situations.

    An important source of numerically-introduced instabilities is the many algebraic correlations and models that are very important parts of many models and codes developed for real-world applications of inherently complex physical phenomena and processes. The resistance to motion at an interface in such models is usually represented by correlations for the drag between the fluid and another material. I suspect that the ever-present (omnipresent?) parameterizations in the AOLGCM models are generally algebraic. All the algebraic equations must be carefully constructed so as not to introduce discontinuities. Again, generally it is a good idea to ensure that the algebraic equations are continuous functions of their independent variables, and the first derivatives should also be continuous. Spline fits will do this job, but frequently the process is not applied to the algebraic equations until it is discovered that one of them is causing problems.

    Finally, certain intrinsic functions in the language used to construct the code frequently result in discontinuities. In the FORTRAN language, as an example, the MAX and MIN statements have the potential to introduce discontinuities into the numerical solution methods. MAX and MIN constructs are frequently used to bound the range of dependent variables. Variables that should always be positive, for example, can be bounded by use of these. Another application frequently seen is to limit the change in a dependent variable between time steps, or iterations. These are very bad ideas, however, because such statements present pure step-function-like changes to the numerical methods. The variable and its derivative are both discontinuous. The discontinuities present perturbations to the discrete equations and associated numerical solution methods. Such perturbations will generally introduce non-physical instabilities into the numerical solutions.

    One thing that makes careful consideration of all possible discontinuities very important relative to the AOLGCM models/codes is the presence of sensitivity to initial conditions. The perturbations act exactly like changes in initial conditions that then initiate the chaotic response that is said to be a part of these models/codes. So far as I know, the dynamical-system properties and characteristics of equation systems that contain discontinuities and exhibit chaotic response have yet to be determined.
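
    A small sketch of the kind of MIN/MAX limiter described above (the equation, time step, and limiter value are arbitrary): clamping the per-step change in a variable makes the discrete update a non-smooth function of the state, and the limited trajectory visibly departs from the unlimited one even though the underlying ODE is perfectly smooth.

    import math

    # Integrate dy/dt = 10*(sin(t) - y) with and without a MIN/MAX clamp on the
    # per-step change in y; the clamp acts like a step-function perturbation.
    def step(y, t, dt, limiter=None):
        dy = dt * 10.0 * (math.sin(t) - y)          # smooth relaxation toward sin(t)
        if limiter is not None:
            dy = max(-limiter, min(limiter, dy))    # clamp the update
        return y + dy

    dt, n = 0.01, 400
    y_free, y_clamped = 0.0, 0.0
    for i in range(n):
        t = i * dt
        y_free = step(y_free, t, dt)
        y_clamped = step(y_clamped, t, dt, limiter=0.005)
    print(f"t = {n * dt:.1f}:  unlimited y = {y_free:.4f}   clamped y = {y_clamped:.4f}")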

  62. Posted Feb 14, 2007 at 1:34 PM | Permalink

    Variables that should always be positive, for example, can be bounded by use of these.

    Are there any variables that should always be positive in a GCM? I was reading a GCM description by the GISS team, and mass in a grid box is permitted to be negative, and occasionally is, for brief periods. Not making this up. 🙂

  63. Armand MacMurray
    Posted Feb 14, 2007 at 2:38 PM | Permalink

    Re:#63
    It seems that experimentalists looking for “new physics” should be studying the climate system… 🙂

  64. Steve Sadlov
    Posted Feb 14, 2007 at 3:19 PM | Permalink

    RE: #63 – They must have gone to one too many Star Trek conventions. Negative mass ….. dilithium crystals …. etc 🙂

  65. Posted Feb 14, 2007 at 3:53 PM | Permalink

    Well, they do know negative mass doesn’t happen. . .

    I looked for the reference. It’s Schmidt et al., J. Climate, Jan 2006, p. 158.
    Discussing tracers (like water vapor) they say:

    “Occasionally, divergence along a certain direction might lead to temporarily negative gridbox masses. These exotic circumstances happen rather infrequently in the troposphere but are common in stratospheric polar regions experiencing strong accelerations from parameterized gravity waves and/or Rayleigh friction. Therefore, we limit the advection globally to prevent more than half the mass of any box being depleted in any one advection time step.”

    So, it appears these negative water vapor masses have appeared, but they do try to fix it up somehow. (I’m not sure what they mean by saying they limit advection globally. It doesn’t sound like they mean they reduce the time step which is stated as 30 minutes. Do they mean they fudge the transport velocity of the fluid when calculating the water vapor budget? And they do this globally? That would seem very weird. Oh well. . .)
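
    A minimal sketch of how I read that sentence (my own illustration only, not the actual ModelE code; the names and numbers are invented):

      program flux_limit_demo
        implicit none
        real :: mass, outgoing, limited
        mass     = 2.0e-3   ! tracer mass in a grid box, made-up units
        outgoing = 7.0e-3   ! mass the advection step wants to remove
        ! Cap the outgoing amount at half the box mass, in the spirit of
        ! the quoted fix.  Note the MIN itself is a non-smooth intervention
        ! of the kind discussed earlier in the thread.
        limited = min(outgoing, 0.5*mass)
        mass    = mass - limited
        write(*,*) 'mass after limited step = ', mass
      end program flux_limit_demo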

  66. Bob K
    Posted Feb 14, 2007 at 11:00 PM | Permalink

    Dan,

    I’m not familiar with the fortran language. But if it handles floating point equality evaluations the way most languages do, you have your work cut out for you. Some very subtle errors can creep in without being realized. Unless they have a special math package that compensates, a lot of calculations don’t evaluate the way one would expect.

    How would fortran evaluate the following code involving two 32 bit floats?

    x = 5.1 + .3
    y = 54 / 10
    if x = y print true else print false

    Compiled on my pentium this evaluates to false even though it should return true. The error is in the least significant digit, but if I was looking for an equality, I’d never find it. Seems to me the error would continuously accumulate. Don’t know if it would ever become significant over millions of iterations though.

    If fortran evaluates the same way, you’ll have to pay close attention to any conditional statements involving floats of almost equal value. They don’t always evaluate the way you’d think they would. I don’t envy the job you’ve set for yourself.

    Conditional statements like if, select case, while loops etc. are where I’d expect to find automated flux adjustments, if they exist. I’m not familiar with what the fortran wording would be.

  67. Gerald Browning
    Posted Feb 14, 2007 at 11:52 PM | Permalink

    Steve M.,

    I will try to find a link or make my own plots and send them to you for posting so people can actually see the seriousness of these mathematical issues. Then I think the discussions will be more to the point and not drift off onto side issues.

    Jerry

  68. Gerald Browning
    Posted Feb 14, 2007 at 11:59 PM | Permalink

    Steve M.,

    That comment seemed to have disappeared so I will repeat it because it will resolve many of these questions. I will find a link or make some plots and send them to you so that people actually can see the seriousness of these mathematical issues.

    Jerry

  69. TAC
    Posted Feb 15, 2007 at 6:21 AM | Permalink

    #67 FORTRAN differentiates between INTEGER and FLOATING POINT (REAL and DOUBLE PRECISION) arithmetic. This makes a big difference when it comes to division. Thus, for REAL x, x=54/10 yields x=5.0 while x=54./10. yields x=5.4. For operators requiring two arguments (i.e. +,-,*,/), if one of the arguments is INTEGER and the other FLOATING POINT, the INTEGER will be promoted to FLOATING POINT before the operator is applied. So x=54/10. yields x=5.4.

    This is all nonsense except to people of a certain age.

  70. Greg F
    Posted Feb 15, 2007 at 8:23 AM | Permalink

    TAC,

    I think you misunderstood what Bob K is saying. What he is referring to is the least significant digit that is possible due to the limited precision. A completely fabricated example for illustrative purposes may produce the results from the 2 calculations as such:

    5.1+.3 = 5.4000000000000000

    54/10 = 5.4000000000000001

  71. Paul Penrose
    Posted Feb 15, 2007 at 10:14 AM | Permalink

    This is a well known problem with representing fractional values using binary storage. Even when the mantissa can be expressed with a single base-10 digit, like .4, that may not be the case in base 2. In this particular example .4 in base-10 is .011001100110… in base-2, a non-terminating, repeating fraction. This can lead to subtle representational errors that normally are not a problem if you have 32 or 64 bits to store the value in and are not performing an iterative calculation. However, in areas like modeling you are likely to be iterating many times, so this has to be accounted for. Do you just allow things to truncate or do you perform rounding of some sort? How does this affect the calculation? Sometimes through the use of clever scaling and rearranging the order of intermediate calculations you can reduce (and sometimes eliminate) these effects. Who knows how the modelers have handled this; they aren't saying, at least not as far as I know.

  72. TAC
    Posted Feb 15, 2007 at 12:43 PM | Permalink

    #71, #72, GregF and Paul: I also was unsure whether BobK’s issue had to do with FORTRAN’s quirky arithmetic rules or because of limited machine precision. However, the test example he provided would fail for the former reason and not the latter (modern versions of FORTRAN are somewhat tolerant wrt tests), and BobK admitted he was unfamiliar with FORTRAN.

    FWIW: Here are the results for BobK’s test example:

    c BobK's example
    [:~] t% cat zz.f
    x = 5.1 + .3
    y = 54 / 10
    if(x .eq. y) then
    write(*,*) 'TRUE'
    else
    write(*,*) 'FALSE'
    endif

    write(*,'(f20.18)') x
    write(*,'(f20.18)') y
    [:~] t% g95 zz.f
    [:~] t% a.out
    FALSE
    5.400000095367431641
    5.000000000000000000

    c repeated using "10." instead of "10" in denominator
    [:~] t% cat zz.f
    x = 5.1 + .3
    y = 54 / 10.
    if(x .eq. y) then
    write(*,*) 'TRUE'
    else
    write(*,*) 'FALSE'
    endif

    write(*,'(f20.18)') x
    write(*,'(f20.18)') y
    stop
    end
    [:~] t% g95 zz.f
    [:~] t% a.out
    TRUE
    5.400000095367431641
    5.400000095367431641

  73. Earle Williams
    Posted Feb 15, 2007 at 12:58 PM | Permalink

    Bob K.’s example in #67 is a real problem because of the way floating point numbers are represented, as Paul describes in #72. It is also something that should be taught to every scientist in their first programming class. It certainly was taught to me in CompSci 201 some 25 years ago. The solution to the “is equal” test isn’t actually to test if they are equal but to test if they are within an arbitrarily small value of each other. That value might be determined by the compiler or it might be hard-coded by the programmer.

    There shouldn't be any comparison like Bob K describes in these models. It is just so fundamentally wrong that I cannot accept that any vetted model would have such inherent issues. That said, the comparison certainly could be in there. It would be like sending your computer off for repair and getting it back with duct tape holding the side panel on.

    The comparison I was taught to test for equality of floating numbers is:
    if( absolute_value(x – y)

  74. Earle Williams
    Posted Feb 15, 2007 at 1:00 PM | Permalink

    Curses! Foiled by the less than symbol!

    Anyways, I shan’t bother with reconstructing my closing paragraphs. The test though is to see if the absolute value of the difference is less than the arbitrarily small number. If it is less, then the two values are considered equal.
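
    In Fortran the test Earle describes looks something like the sketch below; the tolerance here is arbitrary and in practice would be scaled to the magnitudes being compared:

      program tol_compare
        implicit none
        real :: x, y, tol
        x   = 5.1 + 0.3
        y   = 54.0 / 10.0
        tol = 1.0e-6
        ! Instead of testing x == y exactly, accept the two values as
        ! equal when they agree to within a small tolerance.
        if (abs(x - y) < tol) then
          write(*,*) 'effectively equal'
        else
          write(*,*) 'different'
        end if
      end program tol_compare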

  76. Paul Penrose
    Posted Feb 15, 2007 at 1:29 PM | Permalink

    Re: #73
    The results of the second version of the program are hardly surprising; both .1 and .3 have non-terminating (repeating) representations in base 2, and when added together they happen to produce exactly the same rounded bit pattern as the non-terminating .4 sequence. The more interesting case would be to create a constant of 5.4 and then compare the variables 'x' and 'y' to that constant. Those comparisons should fail, which would be quite confusing to someone not aware of this issue.

  77. Bob K
    Posted Feb 15, 2007 at 4:39 PM | Permalink

    Mea culpa. I said 32-bit floats in my comment above; they were actually 64-bit.

    I just tried comparing 64 bit x and y to a 64 bit constant value of 5.4.

    The y calculation 54/10 turns out to be equal to the constant. Not so with x = 5.1 + .3

    y > x by 8.88178419700125E-16

    It seems the compiler handles the constant the same as the division.

    Also tried 32 bit floats all around and they were all equal.

    The problem is the sheer number of people working on the coding over the past couple decades, along with the cross-pollination of code fragments between models. Pick a couple hundred people out of any profession and you're bound to find examples of the Peter Principle in action. Some V&V would definitely be appropriate.

  78. Gerald Browning
    Posted Feb 15, 2007 at 5:05 PM | Permalink

    Steve M.,

    Because this thread has gone way off course, I have sent you contour plots of the vertical component of the velocity (w) at 3 km for two very high horizontal resolution runs of a nonhydrostatic and hydrostatic model.
    The former produces the same solution at 12 hr for both resolutions as expected. The latter, although in some agreement with the nonhydrostatic
    model at the lower resolution up to 12 hr (all runs start with the same balanced large-scale initial conditions), in the higher resolution run blows up because of the ill-posedness of the hydrostatic system
    as expected from the mathematical theory in the reference cited above.
    And at finer resolutions the hydrostatic model blows up even sooner as expected from the ill-posedness. These runs were prepared as part of a
    manuscript being written by Heinz and me as a follow on to our previous work.

    Jerry

  79. TAC
    Posted Feb 15, 2007 at 6:29 PM | Permalink

    #72, #77 Paul, your point is well taken. In fact, just this past week I have learned a bit more about this topic than I’d have liked. I am not a FORTRAN programmer (aside from graduate school; we all were back then), but nonetheless had to spend the past two weeks tightening up numerical algorithms in a FORTRAN code so that, in double precision, it now reliably produces 3 digits of precision in the final result.

    IMHO, if you have any concern for civilization, FORTRAN constitutes a considerably greater threat than AGW.

  80. Jaye
    Posted Feb 15, 2007 at 9:51 PM | Permalink

    FORTRAN constitutes a considerably greater threat than AGW

    Amen

  81. Jaye
    Posted Feb 15, 2007 at 10:02 PM | Permalink

    I also was unsure whether BobK’s issue had to do with FORTRAN’s quirky arithmetic rules or because of limited machine precision. However, the test example he provided would fail for the former reason and not the latter (modern versions of FORTRAN are somewhat tolerant wrt tests), and BobK admitted he was unfamiliar with FORTRAN.

    The “error” is an implicit type conversion, that is either happening or not happening based on what the compiler sees. Has nothing to do with precision…and I would really call it “quirky arithmetic rules” but type conversion. Most languages do this sort of thing.

  83. fFreddy
    Posted Feb 16, 2007 at 2:56 AM | Permalink

    Re #82, Jaye

    …but type conversion. Most languages do this sort of thing.

    All the more reason to be disciplined, and to switch on strong type checking. Dunno if Fortran can do this, but C sure can.

  84. Chris H
    Posted Feb 16, 2007 at 6:44 AM | Permalink

    This whole floating point conversation has gone way off topic but if you want exact decimal results you don’t use binary floating point anyway. What you use is some kind of BCD (binary coded decimal) package, which stores numbers in decimal. You see these packages used in banking applications and in mathematical applications where very high precision is required.

    For the approximate solution of differential equations, which is what we are talking about here, the error in the approximation will outweigh the error caused by using finite precision floating point and it doesn’t matter whether that floating point error is in decimal or in binary. Since these solutions are approximate, it would be wrong to compare them with zero irrespective of the way the values are stored.

  85. Jaye Bass
    Posted Feb 16, 2007 at 2:47 PM | Permalink

    Dunno if Fortran can do this, but C sure can.

    Fortran has a keyword “implicit none” which forces one to declare types. I occasionally have to hold my nose and deal with legacy fortran. In the course of doing that there are a few old fogeys I’d like to have cursed into oblivion for not using “implicit none”.

    Most academic code is pretty sloppy, buggy and unmaintainable. I wouldn’t be surprised if there are tons of unknown bugs in code like ModelE.
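
    A deliberately trivial sketch of the pitfall "implicit none" guards against (nothing here is from any real model):

      program implicit_demo
        ! No "implicit none" here, so the misspelled "totl" below is
        ! silently accepted as a brand-new, undefined real variable
        ! instead of triggering a compile-time error.  Adding
        ! "implicit none" and declaring the variables makes the
        ! compiler reject the typo.
        total = 1.0
        answr = totl + 2.0
        print *, answr
      end program implicit_demo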

  86. Tom Vonk
    Posted Feb 20, 2007 at 3:41 AM | Permalink

    I am still wondering what the models do with the very basic physics.
    Specifically the following question, restricted here to the Navier-Stokes equations:
    "If it is undisputed that N-S equations exhibit chaotic behavior, does the temporal mean of a parameter also exhibit chaotic behavior?"
    Now there have been billions of papers over the course of the century dealing with the stochastic treatment of N-S.
    When you change the variable for the velocity by writing U = V + |v|, where U is the instantaneous velocity, V the temporal mean of the velocity and |v| a random velocity component with temporal mean 0, you do 2 things.

    First, you assume that this decomposition makes physical sense, in other words that the difference between the true velocity and its temporal mean is random.
    Second, the transformed N-S equations are not simpler and also exhibit chaotic behavior for the temporal mean V.
    As any model has to simulate the fluid flows (among many other things), it can't get rid of the chaos by taking temporal means.
    And I don't even mention here the number of other complicated interactions and feedbacks that should further amplify the chaos at all time scales.
    So it seems to me that all models are a self-fulfilling prophecy: by simulating numerically a system where short-term fluctuations are supposed to be random (aka "atmospheric noise"), then with a suitable adjustment preventing instabilities, you come to the conclusion that the temporal means are largely relevant and non-chaotic while everything else is random.

    Somebody here with deep N-S knowledge who could elaborate why anybody should believe that?
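
    For readers following the algebra, the standard textbook form of the decomposition Tom describes (writing the fluctuation as u' rather than |v|), applied to the advection nonlinearity, is

    u = \bar{U} + u' , \qquad \overline{u'} = 0 , \qquad \overline{u\,u} = \bar{U}\,\bar{U} + \overline{u' u'}

    The last term is the Reynolds stress: it does not vanish on averaging, and it is exactly where a closure assumption has to be introduced, so averaging by itself does not remove the unclosed nonlinearity.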

  87. Posted Feb 20, 2007 at 10:25 AM | Permalink

    "If it is undisputed that N-S equations exhibit chaotic behavior, does the temporal mean of a parameter also exhibit chaotic behavior?"

    The answer depends a bit on what you call “temporal mean”.
    However, using a definition like “Reynolds Averaged”, which can actually be a bit vague, often the mean behavior is not chaotic.

    Long before we could model behavior, we knew the mean flow behavior in many systems of engineering interest is quite predictable and not chaotic. That's why we can size pumps for water systems, predict heat transfer inside double pane windows, etc. We didn't need to run models to figure out the average behavior in these cases is not chaotic; we can tell from experiments.

    There are cases where what one might call “averaged” flow could conceivably be “chaotic”. I’m trying to think of a confirmed case that’s been observed in a lab but I can’t off hand.

    But you know what? Modeling even single phase flows with non-chaotic mean behavior is itself difficult. Modeling coupled heat transfer and flow is even more difficult.

  88. Hans
    Posted Feb 20, 2007 at 10:50 AM | Permalink

    From the New York Times: “A new book argues that nature is too complex and depends on too many processes that are poorly understood or little monitored to be modeled using computer programs.”

    The above quote is from an interesting book review in the New York Times Science section, today, Tuesday, February 20, 2007, of the book Useless Arithmetic: Why Environmental Scientists Can't Predict the Future, by Orrin H. Pilkey and Linda Pilkey-Jarvis.

    Mentions climate modelling as one example.

    Also highlights that too often model sensitivity analysis involves assessing importance of parameters "in the model, not necessarily in nature." [from the article: If a model itself is "a poor representation of reality," they write, "determining the sensitivity of an individual parameter in the model is a meaningless pursuit."]

    Please read more (may require free sign-in) at link: http://www.nytimes.com/2007/02/20/science/20book.html?_r=1&ref=science&oref=slogin

  89. gb
    Posted Feb 20, 2007 at 11:15 AM | Permalink

    Re # 89:

    ‘A new book argues that nature is too complex and depends on too many processes that are poorly understood or little monitored to be modeled using computer programs’

    What nonsense. Should we stop all science? Natural sciences are about modelling processes in nature; what else? And yes, that can be difficult. But does that imply we shouldn't try?

    Re # 87:

    Simulating chaotic or turbulent flows is perfectly meaningful; people have been doing it for more than 30 years. That is because turbulent flows, which are governed by the N-S equations, have a well-defined mean, variance of the fluctuations, and other statistical properties. For the atmosphere and the ocean it is less clear how large these fluctuations are and what their time scales are.

  90. MarkW
    Posted Feb 20, 2007 at 11:42 AM | Permalink

    gb,

    Nobody is saying that we should discontinue modeling. They are saying that until the models can accurately reflect reality,
    we shouldn’t base policies on them.

  91. Tom Vonk
    Posted Feb 20, 2007 at 12:33 PM | Permalink

    The answer depends a bit on what you call "temporal mean".
    However, using a definition like "Reynolds Averaged", which can actually be a bit vague, often the mean behavior is not chaotic.

    Thanks, yes, that's what I have been thinking about.
    I can broadly understand the maths involved with this treatment in the case of a compressible fluid, and I have played a bit with it.
    Of course, as I am doing that in my spare time, I have not the time (and probably not the knowledge) to read a century of papers concerning what the Reynolds stresses actually are and how they behave.
    However, as long as one stays at that stage there is no problem: after all, the only thing I did was a change of variable, which gives me just another expression of N-S where I transferred the original problem into another problem in which the Reynolds stresses appear but the equations are still as chaotic as they originally were.
    Where I do have a problem is when one goes a step farther and assumes that the "perturbation" element is random and eventually adds further conditions, like a normal distribution and/or mean = 0.
    That then makes the calculations a bit simpler, and from what I understood looking at a couple of ENSO models, that is precisely what they do.

    Now what I'd like to know is what kind of validity such an assumption has IN GENERAL; I mean, what influence has such an assumption on the behavior of the numerically calculated solution vs. the real behavior of the flow over a long period of time?
    I can't get rid of the impression that there is something circular in the argument: if I assume that the "fluctuation" is random, then I cannot say that the mean became more predictable because ... the "fluctuations" are random.

    Of course, being an engineer, I know perfectly well that we can calculate pressure drops in turbulent flows.
    Well, at least in suitable conditions :-)
    But that is not the question; the question is the relevance of stochastics in the general case of a compressible N-S flow.
    And I wouldn't even go into the questions concerning the smoothness and continuity of the functions that are conditions for having well-defined means.

  92. Mike T
    Posted Feb 20, 2007 at 2:12 PM | Permalink

    Before the long discussion on floating point arithmetic and implicit data type conversions, there was some discussion of unphysical values like negative mass and how could it happen. Although I don’t know the details of the climate models, I do know that these types of things show up when doing event detection for solving ODE’s.

    For example, most ODE solvers require the function and its first derivative to be continuous. In order to achieve this, we must stop the solver at exactly the point where the discontinuity happens. An example of this would be, say, an absolute value computation. The solver would actually allow the quantity inside the absolute value to go negative to avoid the discontinuity, and then at the end of the time step the solver will go back, approximate the time when the value crossed zero, and restart from that point.
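
    A bare-bones sketch of the event-location step Mike describes, using bisection on a made-up event function (an illustration of the idea only, not code from any particular solver):

      program event_demo
        implicit none
        real :: t0, t1, tm, f0, f1, fm
        integer :: i
        ! Bracket the step over which the event function changes sign.
        t0 = 0.0
        t1 = 0.5
        f0 = g(t0)
        f1 = g(t1)
        if (f0*f1 < 0.0) then
          ! Bisect to locate the crossing time; a solver would then be
          ! restarted from this point.
          do i = 1, 40
            tm = 0.5*(t0 + t1)
            fm = g(tm)
            if (f0*fm <= 0.0) then
              t1 = tm
            else
              t0 = tm
              f0 = fm
            end if
          end do
          write(*,*) 'event located near t = ', 0.5*(t0 + t1)
        end if
      contains
        real function g(t)
          real, intent(in) :: t
          ! Toy event function with a zero at t = 0.37; in a real solver
          ! this would be evaluated on the numerical solution.
          g = t - 0.37
        end function g
      end program event_demo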

  93. Steve Sadlov
    Posted Feb 20, 2007 at 2:28 PM | Permalink

    RE: #89 – Indeed, from time to time, there is discussion of earthquake prediction. Interestingly, the issues are, if not similar to climate systems, not entirely dissimilar. The mantle is a highly viscous semi-liquid, believed to be slowly flowing in large convection cells. The crust is plastic material of lighter and cooler phases floating on top of the mantle. The crust has become a series of plates due to the underlying convection. While the overall sense of plate motion is reasonably well understood, the minutiae of behavior at plate boundaries are only known after the fact. Chaos prevents anything better, at present.

  94. Gerald Browning
    Posted Feb 20, 2007 at 4:06 PM | Permalink

    Tom (#87):

    You might want to look at the manuscripts on the minimal scale estimates (Henshaw, Kreiss, and Reyna) and some of the associated numerical runs, e.g. the Math. Comp. article I cited earlier or the Browning, Henshaw, and Kreiss manuscript from Los Alamos. The viscous, incompressible
    NS equations are not chaotic, it just takes sufficient resolution to correctly compute the smallest scale for large Reynolds numbers (or small kinematic viscosities).

    Jerry

  95. Dan Hughes
    Posted Feb 20, 2007 at 6:21 PM | Permalink

    We frequently see appeals to the Navier-Stokes equations and turbulence as an analogy to the chaotic response seen in weather/AOLGCM models and calculations. The analogy is not correct for the reason Jerry gives in #95. The ‘chaotic’ response of turbulence is only seen when sufficient temporal and spatial resolution is used in numerical solution methods (DNS) applied to the Navier-Stokes equations. Additionally, I think the case is that calculated data is time-averaged and spatially-correlated in order to recover the mean-flow state. In other words the decomposition of the flow into mean and fluctuating parts as mentioned in #87 above is recovered from the very-high resolution calculations.

    The AOLGCM model equations do not apply at the scales of resolution needed to model the turbulent flow. And certainly AOLGCM calculations are not done at these levels of resolution with the discrete equations. Additionally, the decomposition, which has been very successfully applied, assumes a random model for the turbulent fluctuations, so that the time average is zero. Chaotic responses are not random and the time-averages are not zero.

    I think that it is also the case that the only two known properties of turbulent flows derived from theory by Kolmogorov (Kolmogoroff) are based on random motions, not chaotic. Finally, I think numerical solutions to the Navier-Stokes equations can be shown to converge as the sizes of the discrete spatial and temporal are refined sufficiently to resolve the smallest scales in the flow. These scales are in turn related to the two known theoretical results mentioned previously. These Kolmogorov scales are tiny and computing power to carry out actually useful and practical calculations is not yet available.

    All corrections appreciated.

  96. Tom Vonk
    Posted Feb 21, 2007 at 3:47 AM | Permalink

    You might want to look at the manuscripts on the minimal scale estimates (Henshaw, Kreiss, and Reyna) and some of the associated numerical runs, e.g. the Math. Comp. article I cited earlier or the Browning, Henshaw, and Kreiss manuscript from Los Alamos. The viscous, incompressible
    NS equations are not chaotic, it just takes sufficient resolution to correctly compute the smallest scale for large Reynolds numbers (or small kinematic viscosities).

    Thanks. I have found (too) many papers on 2D incompressible flows, and I can't access most of them because registration is necessary.
    As I am thinking about 3D compressible flows, it doesn't help much.
    Actually, as usual, one has to be very careful with terms to be sure that we are talking about the same thing.
    I have difficulty seeing what you mean when saying that the N-S equations, even in the incompressible case, are not chaotic.
    If you look at the surface of the ocean under a storm, which is governed by incompressible N-S, you typically observe deterministic chaos at all relevant time and space scales.
    The velocity fields are clearly discontinuous, and there doesn't seem to be any emerging steady state, be it local or average.
    It is neither random nor computable.
    I don't doubt that a numerical simulation could produce a kind of surface that would look like a real surface, given suitable measures to prevent numerical problems due to discontinuities.
    What I don't see is what the practical use of such a simulation would be.
    It wouldn't reproduce the real surface, it wouldn't reproduce the time evolution, it wouldn't give the right spatial averages of the velocities on small (say 1 m) or big (say 100 m) scales.
    What kind of "average" could one calculate that would be both relevant and predictable?
    Or even if one wanted to take the stochastic point of view, what probability of occurrence would a particular point in phase space calculated by a specific numerical simulation have?

  97. Dan Hughes
    Posted Feb 21, 2007 at 9:02 AM | Permalink

    Google, using ‘with all these words’, turbulence banerjee ucsb.

    The microscopic details of the fluid motions at and near the interface between the atmosphere and bodies of water determine the uptake of gases, CO2 for example, by the liquid. The problem has been studied for decades. Turbulence is of first-order importance.

  98. Tom Vonk
    Posted Feb 21, 2007 at 10:16 AM | Permalink

    Google, using 'with all these words', turbulence banerjee ucsb.

    The microscopic details of the fluid motions at and near the interface between the atmosphere and bodies of water determine the uptake of gases, CO2 for example, by the liquid. The problem has been studied for decades. Turbulence is of first-order importance.

    Yes, that illustrates my point.
    Even though my primary issue is the relevance or irrelevance of stochastic treatments of 3D compressible N-S, and not gas transfer at the liquid/gas interface, which is the case Banerjee treats.
    For instance, if one supposed here that the liquid behaved like the sum of a steady-state function and a random noise with mean 0 (or even worse, that the surface is flat), one would be off by at least 300 %, as far as I interpret the Banerjee papers.
    So not only do we have chaos at the macroscopic level (which was already clear), but this chaos is heavily interacting with other equally important parameters.

  99. Posted Feb 21, 2007 at 11:47 AM | Permalink

    I think that it is also the case that the only two known properties of turbulent flows derived from theory by Kolmogorov (Kolmogoroff) are based on random motions, not chaotic.

    The Kolmogorov time and length scale are derived by assuming that the scales of the smallest eddies are affected by only two factors: kinematic viscosity and the viscous dissipation rate. That’s pretty much all the physics in the derivation.

    Once you make that assumption, the length and time scales are obtained by purely dimensional considerations. (Actually, there is also a Kolmogorov velocity scale, but you get that by dividing the length by the time. It's not independent.)

    Experiments show the turbulent energy spectrum does drop off very rapidly near these scales. (The shape of the drop-off is understood and discussed in Tennekes and Lumley.)
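
    For reference, the dimensional argument gives (standard material, e.g. Tennekes and Lumley):

    \eta = ( \nu^{3} / \epsilon )^{1/4} , \qquad \tau_{\eta} = ( \nu / \epsilon )^{1/2} , \qquad u_{\eta} = \eta / \tau_{\eta} = ( \nu \epsilon )^{1/4}

    where \nu is the kinematic viscosity and \epsilon the viscous dissipation rate per unit mass.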

    These Kolmogorov scales are tiny and computing power to carry out actually useful and practical calculations is not yet available.

    Direct Numerical Simulation (DNS) models flows down to the Kolmogorov scales. These computations are done by turbulence modelers; use of these codes is restricted to research projects. Results of DNS computations are often presented at engineering conferences.

    Sanjoy Bannerjee wrote this book:
    Direct Numerical Simulation of Turbulence and Scalar Exchange at Gas-Liquid Interfaces.

    Engineers developing products don’t use DNS; it’s much too computationally intensive. Needless to say, climate models are not “DNS-like”.

  100. Posted Feb 21, 2007 at 11:56 AM | Permalink

    Hmmm… my comment was munched!

    These Kolmogorov scales are tiny and computing power to carry out actually useful and practical calculations is not yet available.

    Direct Numerical Simulation is a computational method that solves the NS equations down to the Kolmogorov scales. Google "Banerjee DNS". :-)

    Turbulence modelers run these codes and often call them “numerical experiments”. Some then also try to develop turbulence closures to use in production codes.

    DNS is not used in industrial applications; it’s too computationally intensive.

  101. Gerald Browning
    Posted Feb 21, 2007 at 9:38 PM | Permalink

    Dan (#96):

    DNS computations for the 2D, viscous, incompressible NS equations in a doubly periodic domain have been carried out for kinematic viscosities down to sizes typically considered reasonable for real fluids. Although not mentioned in the Math. Comp. article, Heinz and I tried without success to find a suitable closure for resolutions that are not sufficient to resolve the minimal scale for a particular kinematic viscosity. This is one of the advantages of having a convergent numerical solution, i.e., these closures can be tested to determine their impact on the real solution.

    There have been similar computations performed in 3D by Ystrom and Kreiss (without and with forcing), but the kinematic viscosity is limited by the computing power that is available. Nonetheless, there are indications that the smallest scale estimate also holds in 3D (assuming the velocities are bounded).

  102. Gerald Browning
    Posted Feb 21, 2007 at 10:02 PM | Permalink

    Tom (#97):

    Many of these publications started out as reports at Los Alamos, UCLA, KTH (Sweden) or other institutions. Do a google on Henshaw Kreiss Reyna, Browning Henshaw Kreiss or Ystrom Kreiss. If you are unable to find the manuscripts, I would be happy to send you a copy.

    Jerry

  103. Dan Hughes
    Posted Feb 22, 2007 at 9:46 AM | Permalink

    re: #96 et al. I missed a word in this sentence: “Finally, I think numerical solutions to the Navier-Stokes equations can be shown to converge as the sizes of the discrete spatial and temporal are refined sufficiently to resolve the smallest scales in the flow.” I should have included, “… discrete spatial and temporal increments are refined…” And I wanted to say after that,”Unlike the much simpler equation systems in which chaotic dynamical responses are studied.”

    The computing power needed to accurately resolve the smallest motions in turbulent flows generally goes as the Reynolds number cubed.

    I think that the dimension of turbulence as a chaotic dynamical system, again measured by the Kolmogorov scales, is much too large to be a candidate for interpretation as a low-dimensional chaotic system. The degrees of freedom for turbulent flows are measured by the macroscopic, or integral-scale, Reynolds number raised to the (9/4) power. The integral-scale Reynolds number can be calculated in terms of the Kolmogorov scales, too. The integral-scale Reynolds number is the one usually encountered in engineering analysis of turbulent flows. I think there are few, if any, methods developed to determine dynamical-system dimensions above about 10 to 11.
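
    As a rough check on those scalings (standard estimates, not specific to any climate model): since \eta / L \sim Re^{-3/4}, resolving the Kolmogorov scale in three dimensions needs about

    N \sim ( L / \eta )^{3} \sim Re^{9/4}

    grid points, and with a time step of order the small-scale time the number of steps per large-eddy turnover is another factor of roughly L / \eta \sim Re^{3/4}, which gives the overall Re^{3} cost quoted above.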

    Corrections and clarifications appreciated.

  104. Gerald Browning
    Posted Feb 22, 2007 at 11:04 PM | Permalink

    Dan,

    Explicit mathematical estimates are available that involve the kinematic viscosity. See the Henshaw, Kreiss, Reyna references or the Math. Comp. article by Kreiss and me.

    Jerry

  105. Gerald Browning
    Posted Feb 23, 2007 at 1:48 PM | Permalink

    One of the problems with both climate and weather models is that they use switches (if tests) in their parameterizations. These lead to discontinuous forcing terms that cause considerable difficulties for the numerics.
    (reference available). These problems are in addition to the inaccuracies
    of the forcing terms.

    Jerry

  106. Gerald Browning
    Posted Feb 26, 2007 at 12:13 AM | Permalink

    For those interested, the latest measure of forecast accuracy is called fuzzy forecast verification (see seminar announcement on NCAR’s list of seminars). Reading the announcement, I believe this means that instead of using the standard mathematical measure of accuracy (norms), the presenter is relaxing the measure of error to project better verification scores in the new measure. IMHO this is necessary because the data on the smaller scales is very tenuous and, as mentioned in earlier references, any errors in the data or parameterizations leads to immediate errors in the location and intensity of a storm, and exponential growth and rapid cascade to smaller scales near a jet.

    Jerry

  107. Tom Vonk
    Posted Feb 26, 2007 at 8:04 AM | Permalink

    Reading in the last IPCC report that they have put a number on the "confidence" that the author of a model has in the ability of his model to correctly predict the relevant parameters, I would like to know how EXACTLY this number is calculated.
    For instance, high confidence means 80 % and very high confidence 90 %.
    What is the process that enables one to say "My model represents reality to at least 80 %"?
    I mean, it is a very different thing to say "Given the accuracy of the data X, the accuracy of the output is Y" and to say "My model doesn't take everything into account, but with what it does take into account we can trust its predictions to 80 %".
    I mean, no author of a model would come and say "My model is a heap of steaming HS and I don't trust it at all".
    I can think of no rational process that could quantify something that I don't know, and obviously things that I don't know and don't use in a model have an unknown impact.
    Unless they organise a kind of vote where they compare outputs of different models, and if they predict similar things then they say that they are confident.
    But confident how?
    What is the difference between being confident at 80 % and being confident at 90 %?

  108. Gerald Browning
    Posted Feb 26, 2007 at 8:32 PM | Permalink

    Tom (#108):

    I think we are in agreement. :-)

    Jerry

  109. Tom Vonk
    Posted Feb 28, 2007 at 7:19 AM | Permalink

    Jerry, that was actually not a comment but a genuine question :-)
    Do you know how they REALLY do it?
    Is there a written procedure, or a kind of QC, or what?
    And if yes, where could I find it?

  110. Gerald Browning
    Posted Feb 28, 2007 at 6:20 PM | Permalink

    Tom (# 110):

    No I do not know the details and do not believe that you can put confidence levels on models that have O(1) relative errors. If the IPCC report does not state how the confidence levels are obtained, then the report should be judged accordingly.

    I am familiar with some of the traditional verification statistics used in weather forecasting, but as stated they are not standard mathematical norms. Some forecasts are judged against persistence and I don’t think this is a mathematical measure.

    Jerry

  111. Gerald Browning
    Posted Feb 28, 2007 at 11:06 PM | Permalink

    Tom (#110):

    The hydrostatic climate models are ill posed, so there is no way to have confidence in the models because as the mesh is reduced (assuming the dissipation is also reduced to more realistic values), the numerical solution will grow larger and larger in a smaller and smaller period of time as it approaches the inviscid continuous solution which is unbounded in any small amount of time.

    And a nonhydrostatic model grows exponentially fast (100 times faster time scale than the large scale solution) in time near any jet so that no confidence intervals can be placed on them if the mesh size is reduced.

    IMHO, only by keeping the mesh size and the dissipation too large can either of these models continue to run for long periods of time.

    Jerry

  112. Tom Vonk
    Posted Mar 1, 2007 at 2:26 AM | Permalink

    Jerry, as this thread is turning into a dialogue between the two of us, could you send me a mail at ultimni@hotmail.com?
    I have some more comments and ideas.

  113. Paul Penrose
    Posted Mar 1, 2007 at 6:49 PM | Permalink

    Please don’t go to email. Some of us lurkers are enjoying your conversation.

  114. Gerald Browning
    Posted Mar 1, 2007 at 11:31 PM | Permalink

    Paul (#114):

    I did respond to Tom, but mentioned that I had sent plots to Steve M.
    to illustrate the convergence problems with the hydrostatic and nonhydrostatic models. Steve and I are determining the best way to display those plots on his web site and then I will write a complete description of the convergence experiments and the individual plots. Thanks for mentioning that you are reading our discussion in the background.

    Jerry

  115. Paul Penrose
    Posted Mar 2, 2007 at 9:03 AM | Permalink

    Jerry,
    I look forward to seeing more information on your experiments. Often I learn more in one thread on this blog than I did in an entire quarter of college. Thanks for taking the time to educate us here.

  116. Gerald Browning
    Posted Mar 2, 2007 at 1:38 PM | Permalink

    Paul (#116):

    Heinz Kreiss was a wonderful mentor. He never bragged about the quality of his work, but the quality is obvious. I hope that I can pass on some of his insights thru references to his work and to collaborative manuscripts with his students.

    In this regard, the plots are visual images of complicated mathematics. But they can be explained and understood in relatively simple terms. And I will be available to answer questions in case my initial explanation is not sufficiently clear.

    Jerry

  117. John Reid
    Posted Mar 3, 2007 at 12:43 AM | Permalink

    Please don’t go to email. Some of us lurkers are enjoying your conversation.

    I agree. What a great thread!

    JR

  118. Gerald Browning
    Posted Mar 3, 2007 at 8:17 PM | Permalink

    John A.,

    Does the tex tag allow the use of the LaTeX includegraphics command?

    If so, I can include the contour plots in the LaTeX file I build at home and then move the entire file to a reply.

  119. Dan Hughes
    Posted Mar 4, 2007 at 12:50 PM | Permalink

    re: #112. Jerry, I have calculated the eigenvalues of the characteristic equation for the PDEs for a hydrostatic model. I find them to be real and distinct, and so the equations are strictly hyperbolic. I think that this means that initial-value, boundary-value problems based on the equations, given that the boundary condition specifications are consistent with the eigenvectors, are well posed. As far as I can tell, the momentum and energy equations do not account for diffusion of momentum and energy, respectively. Otherwise the equations would not be hyperbolic.

    Will you provide additional information relative to the basis of the ill-posedness of hydrostatic models?

    All these years when I have read that the AOLGCM models are based on the fundamental continuous equations of conservation of mass, momentum, and energy, I took it at face value. So far I have not found an AOLGCM model that is based on these equations. All, as I have always suspected, apply assumptions that greatly simplify the basic fundamental continuous equations. Additionally, none of the public mentions of the models and codes have ever noted that some numerical methods, applied to the conservation forms, do not in fact conserve mass and energy.

    Finally, can you tell me if I correctly understand that there is no resistance to the fluid motion in the vertical direction? I'm thinking that is potentially a source of problems. Additionally, the hydrostatic modeling, I think, means that pressure changes are instantaneously transmitted throughout the flow, thus allowing for extremely easy introduction of lots o' kinks and flow oscillations. Fluid motions in the vertical direction have no inertia.

    Thanks

  120. Gerald Browning
    Posted Mar 4, 2007 at 5:25 PM | Permalink

    Dan,

    The hydrostatic system is not hyperbolic because the time derivative of the vertical velocity is removed. A complete mathematical analysis is shown in the above reference.

    Jerry

  121. Gerald Browning
    Posted Mar 4, 2007 at 6:19 PM | Permalink

    As I am working on displaying the contour plots mentioned above, I would like to add a bit of simple mathematics to discuss ill posed and well (properly) posed behavior. One of the simplest examples that can be used to show both types of behavior is the heat equation (see my original post on Numerical Climate Models)

    u_{t} = \nu u_{xx}

    where

    u (x,t)

    is a function of space and time. The subscripts denote partial differentiation. For simplicity assume a separable solution of the form

    u = U(k,t) \sin ( kx )

    Substituting into the original equation one obtains

    U_{t} (k,t) = - \nu k^{2} U (k,t)

    where k is the horizontal wave number. The solution of this simple ODE in time for each fixed value of k is

    U(k,t) =  U(k,0) \exp (- \nu k^{2} t)

    Now if \nu  is positive (normal heat equation), then the solution decays exponentially for each k and is bounded above by the initial condition.

    But if \nu  is negative (heat equation backwards in time), then the solution is not bounded in any small interval of time because the horizontal wave number k  can be arbitrarily large for a general function.
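
    To put numbers on "arbitrarily large": from the solution above, the growth factor is

    U(k,t) / U(k,0) = \exp ( |\nu| k^{2} t )

    so with |\nu| = 1 and t = 0.01, say, the k = 10 mode grows by a factor of about e \approx 2.7, while the k = 100 mode grows by about e^{100} \approx 2.7 \times 10^{43}. Since a general initial condition contains arbitrarily high wave numbers, no bound that depends only on the initial data is possible.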

  122. Jim D
    Posted Mar 4, 2007 at 6:19 PM | Permalink

    I’ll just post some statements here that may help with the discussion, in terms of
    outlining what is known about the hydrostatic approximation.
    The hydrostatic approximation simplifies the full nonhydrostatic equations by
    primarily removing the vertical momentum equation and replacing it with the
    hydrostatic balance.
    This approximation is very valid when the horizontal scales far exceed the vertical
    scales of motion, as in all current global climate models.
    This approximation becomes inaccurate as the scales become comparable, and breaks down
    completely as the horizontal scales become less than vertical scales. In fact, growth rates
    tend towards infinity as the horizontal scale goes to zero (nonhydrostatic
    growth rates remain limited for even the smallest scales). I suspect this is what
    is meant by ill-posed, since the most unstable modes are the smallest.
    For this reason, a hydrostatic model cannot be used to model thunderstorms, for example,
    as they lack any concept of parcel theory (CAPE/buoyancy/updraft speed limits),
    and anyone in the field knows that reviewers would never accept such an approximation at those scales.

  123. Gerald Browning
    Posted Mar 4, 2007 at 6:59 PM | Permalink

    Paul Penrose, John Reid, and others,

    If you have any questions about the example or brief explanation, please ask. I will be happy to explain in as much detail as required until everyone feels they are comfortable with the example. Then when the contour plots are discussed, hopefully everyone will have a better feeling for the significance of the plots.

    Don't be shy! I am here to help. :-)

    Jerry

  124. Gerald Browning
    Posted Mar 4, 2007 at 7:30 PM | Permalink

    Jim D. (#123):

    If you read the introduction to this thread, there are problems with both the hydrostatic and nonhydrostatic equations. The global climate models are all based on the hydrostatic system and do not resolve the smaller scales of motion under 100 km. Thus, they necessarily must use unphysically large dissipation (or chopping) to artificially dissipate the rapid cascade of enstrophy to small scales (see Math. Comp. manuscript) and then must be tuned to compensate for this. In addition, they cannot resolve the smaller scales because of the above issue near a jet. If the nonhydrostatic equations are used in a neighborhood of a jet instead of the hydrostatic equations, there will be fast exponential growth of the solution. (The only reason that this growth has not been seen before is because the mesoscale models did not resolve the solution and used too large a dissipation.) So neither alternative is viable IMHO.

    Jerry

  125. Gerald Browning
    Posted Mar 4, 2007 at 7:48 PM | Permalink

    Dan (#123):

    The mesoscale models used to be based on the hydrostatic system. Only when pressure was applied because of the ill posedness of the IBVP for that system did the mesoscale models switch to the nonhydrostatic system. There are hundreds of references that support this. Give me a break.

    Jerry

  126. Jim D
    Posted Mar 5, 2007 at 10:35 AM | Permalink

    I guess I am disagreeing that the nonhydrostatic equations, which are the NS equations,
    would have any problem, unless the finite differencing techniques have an error in
    them making them inconsistent with the equations. All models have some dissipation/diffusion
    techniques to avoid a build-up of energy at the finest resolved scales, and if it works
    correctly, the model will have a decent energy spectrum at resolved scales. I’ll wait for
    your evidence of a problem with jets.

  127. Gerald Browning
    Posted Mar 5, 2007 at 11:17 AM | Permalink

    Jim (#127):

    Sorry I used the wrong name in #126. The atmospheric equations have a term that is not present in the NS equations (but it becomes less important at the smaller scales where the system starts to behave more as the NS equations), namely the gravity term. This has a very important impact (see Tellus reference by Browning and Kreiss that discusses the various scales for the atmospheric equations) and is the cause of many of the problems with the hydrostatic system.

    Jerry

  128. gb
    Posted Mar 5, 2007 at 11:23 AM | Permalink

    I agree with Jim D. It will never be possible to resolve all (turbulent) scales. This implies that there is a subgrid term which is non-zero. One has to account for that by adding a subgrid model which drains energy from the resolved scales. So, in contrast, omitting a subgrid model/dissipation would be unphysical.

  129. Gerald Browning
    Posted Mar 5, 2007 at 12:02 PM | Permalink

    Jim (# 127):

    Read the Lu et al. reference at the top of this thread. Their results are sufficient to show the problem with the nonhydrostatic equations near a jet
    (no need to wait for my plots).

    Jerry

  130. Gerald Browning
    Posted Mar 5, 2007 at 12:05 PM | Permalink

    gb (#129):

    The obvious mathematical question is whether that subgrid scale parameterization can lead to the correct solution. The answer is no, and I will deal with this issue when I return.

    Jerry

  131. Jim D
    Posted Mar 5, 2007 at 5:24 PM | Permalink

    Jerry,
    Going back to #1 and the Lu et al. reference, I see that paper as not presenting any problem
    with the nonhydrostatic equations, but just presenting results that look like a successful
    simulation of a mechanism for clear-air turbulence, in some way validating the nonhydrostatic
    equations and model.
    I am not sure when you say that the nonhydrostatic solution may depart from reality,
    whether you are suggesting that these modes won’t exist in reality, or whether they
    do exist but can’t be forecast accurately. I agree with the latter statement only.

  132. Gerald Browning
    Posted Mar 5, 2007 at 6:37 PM | Permalink

    Jim D.,

    The amount of dissipation in the Lu et al. reference is still much larger than in the real atmosphere. Once the dissipation is reduced to a level more appropriate to the real atmosphere, the cascade to smaller scales will happen even faster, i.e. on the order of hours, not days. The growth rate happens exponentially fast in time and means that any error in the observational data or model parameterizations (forcings) will lead to large errors in any numerical solution. Note that these solutions were not even seen in a model until the mesh size and dissipation were considerably
    reduced. The obvious mathematical issue is that if they have not been correctly computed before, how large an error was there in the computed solution when the dissipation was larger and the mesh coarser, especially if the dynamical mechanism is crucial to the solution?
    So we now return to my previous statement that there is fast exponential growth in the nonhydrostatic system near jets and the hydrostatic system is ill posed at smaller scales near jets. I assume that is now sufficiently clear.

    Now a bit more of mathematics to prove an earlier point.

    Consider the equation

    u_{t} + u_{x} = \nu_{s} u_{xx} + f

    where

    \nu_s

    is the real dissipation and f the real forcing.

    Now compute the solution with

    v_{t} + v_{x} = \nu_{l} v_{xx} + f

    where

    \nu_{l} \gg \nu_{s}

    Then the error e = v - u in the computed solution is given by

    e_{t} + e_{x} = \nu_{l} e_{xx} + ( \nu_{l} - \nu_{s} ) u_{xx}

    that is, a heat-type equation for the error with the large dissipation \nu_{l}, forced by the viscosity mismatch acting on the true solution u. The error therefore behaves as the heat equation with large dissipation (see example in this thread), and the only way to reproduce the real solution is by adding an unphysical forcing term as in my other example.

    Jerry

  134. Gerald Browning
    Posted Mar 5, 2007 at 7:01 PM | Permalink

    Paul and John,

    Are the mathematical examples clear or helpful? I have tried to keep them as simple as possible, but realize that the general audience might not understand the mathematics. It is hard for me to know what level of mathematical training a reader might have. Therefore I am very willing to answer any questions at any level.

    Jerry

  135. Dave Dardinger
    Posted Mar 5, 2007 at 11:13 PM | Permalink

    re #135 re #133 etc.

    I think you do a pretty good job of making the equations understandable, but I expect how people react would be all over the board. Some people will say, "Ho, hum. Why doesn't he get to something I don't know already." Others throw up their hands and wish you'd speak a language they understand, and still others are in my boat and recognize a lot of this tongue but get lost at a few places and wish we had a dictionary handy. Part of the trouble is that even if you clearly define terms as you go along, the definitions won't carry along from post to post, and that makes it hard to remember what a given equation means. For instance, I'm assuming u sub t or u sub x are general functions of some sort, but I don't know if they're supposed to be differentials or linear functions or higher order functions, etc. I'm assuming the sub t means a function of time rather than temperature since we're dealing in general terms.

    Perhaps specific examples and maybe even problems would help those interested in learning the details to get a better feel for things.

  136. Paul Penrose
    Posted Mar 6, 2007 at 8:46 AM | Permalink

    As a software engineer my immediate expertise is in the implementation of the models, ie. how well they are written and tested. In this area I can make contributions to the discussion. All of my math and physics are rusty by a good 25 years, so I’m struggling with that part, but like Dave, I think I have a handle on the core arguments. The attempts to show, in detail, which arguments are correct and which are not, are of interest to me, however I don’t feel qualified to weigh in on any of them at this time, so I’m staying mostly quiet. I’ll continue in this mode, watching the experts chew over the matter, and pipe up when I have salient point or a good question. Carry on.

  137. gb
    Posted Mar 6, 2007 at 11:06 AM | Permalink

    Sorry Jerry, I don't get what you are trying to say. My point is that if you use a (coarse) grid you are not solving the N-S equations or nonhydrostatic equations or whatever, but in fact the filtered governing equations. In any book on turbulence one can find these equations. There one can see that these filtered equations contain a so-called subgrid-scale stress/dissipation term. This term has to be taken into account, otherwise you solve a set of equations that is incomplete. In particular at places with intense shear (near jets) this subgrid term can be quite large, I suspect. Using some kind of eddy viscosity is just not unphysical, or can you tell us what is missing in the equations?
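
    For readers who have not seen them, the filtered equations carry a subgrid stress

    \tau_{ij} = \overline{ u_{i} u_{j} } - \bar{u}_{i} \bar{u}_{j}

    and the simplest classical closure (Smagorinsky) models its effect with an eddy viscosity

    \nu_{t} = ( C_{s} \Delta )^{2} | \bar{S} |

    where \Delta is the filter (grid) width, \bar{S} the resolved strain rate and C_{s} an empirical constant. Whether such an artificial viscosity, applied at coarse resolution, can leave the large-scale solution unaffected is exactly the point disputed in the following comments.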

  138. Gerald Browning
    Posted Mar 6, 2007 at 11:57 AM | Permalink

    gb (#138),

    This is a very naive statement issued by someone that does not want to understand the problem of using the incorrect (large) amount of dissipation and yet must have some mathematical training as you claim to understand turbulence equations in texts. If you actually took the time to peruse or read in detail the Math. Comp. article by Browning and Kreiss cited earlier (that you obviously have not done), you will see very clearly that the only way to run the NS equations without the correct small physical dissipation (kinematic viscosity) is by using a larger unphysical viscosity (what you call a subgrid scale sink of energy) and this alters the spatial spectrum of the solution as clearly shown in the Math. Comp. manuscript or the above mathematical argument showing the impact on the error when the dissipation is too large. Also you prefer to ignore that the solution in the Lu et al. manuscript was not seen until the mesh size and viscosity was reduced (but still too large). It doesn’t get much clearer than that. When you provide me with some more information about your background and credentials (and not hide behind initials), I think we will have a clearer view of your intentions of making such a statement. Until then I will not respond to any more of your misleading statements.

    Jerry

  139. Gerald Browning
    Posted Mar 6, 2007 at 12:30 PM | Permalink

    Dave (#136),

    I did not expect everyone to understand the mathematical examples. But those that have any basic knowledge of calculus should be able to understand a scalar function of space and time and partial differentiation. And I offered to explain any details to those that are truly interested in understanding the simple mathematical examples, but whose calculus is a bit rusty. Paul Penrose has indicated that his calculus skills are a bit rusty and that is the kind of response I would expect from an inquisitive person seeking to understand the equations and I will answer any technical questions that he might have. On the other hand, gb (hides behind initials) clearly claims to understand calculus (he cites turbulence texts) and I find his response is probably one from a person that has a political agenda.

    It is easy to move back to a previous comment on a thread. What terms would you like me to define for you to save you having to look them up?

    Jerry

  140. Gerald Browning
    Posted Mar 6, 2007 at 1:11 PM | Permalink

    Dave (#136):

    I also note that you must be familiar with the UNIX text formatting language (as am I), because of your use of sub to indicate a subscript.
    I have had to switch to LaTeX so tend to make a few mistakes (as you have seen). :-)

    Jerry

  141. Posted Mar 6, 2007 at 4:06 PM | Permalink

    Gerry B said:
    The atmospheric equations have a term that is not present in the NS equations (but it becomes less important at the smaller scales where the system starts to behave more as the NS equations), namely the gravity term.

    I haven’t read your paper, but I would think the gravity terms would be important in some instances.

    If someone does take a Reynolds-averaging or filtered approach (which is presumably what gb is thinking about), then, if we account for gravity, we'll see the scalar product of (the covariance of the density and velocity) and (gravity). (Sorry, my LaTeX is rusty, but it's sort of [rho'u'] * g.) The term may create or destroy turbulence (and should have a pretty strong tendency to create it, since low density packets of fluid will tend to rise).
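
    At the risk of mangling the TeX, and using w for the vertical velocity (since gravity is vertical), an overbar for the average, and primes for fluctuations, the term I mean is roughly

    B \approx -\frac{g}{\bar{\rho}}\,\overline{\rho' w'}

    A positive correlation between light fluid (negative rho') and upward motion (positive w') makes B positive, i.e. buoyancy creates turbulent kinetic energy, which is just the "light packets rise" effect in symbols.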

    Given the way hydrodynamic instabilities work, the effect likely happens at quite large scales and produces turbulence at larger scales.

    If someone just replaced the effect of gravity with the average hydrostatic effect, the modeled system would totally lose the physical effect of the gravity terms on the turbulence. If they kept the gravity terms but modeled them badly, they'd get a poor representation of the physical effect. And if they are using a turbulence model for (rho' u') based on the assumption that turbulence production is dominated by the interaction between the Reynolds stresses and the mean shear, when it is in fact dominated, or at least significantly influenced, by buoyancy-driven events, their model for the production term will be qualitatively incorrect. (And you could go so far as to call such a parameterization "unphysical".)

    Is it the effect of this term you are discussing?

    If gb wants to get a feel for what these would do, he can grab Tennekes and Lumley, add the gravity term, and repeat the derivation of the TKE equations for variable density fluid. The terms will pop right out.

  142. Juan Rivero
    Posted Mar 6, 2007 at 7:21 PM | Permalink

    This is a very interesting thread. The hydrostatic approximation was introduced into numerical modeling because of the Courant-Friedrichs-Lewy (CFL) requirement for numerical stability,

    \frac{\Delta x}{\Delta t} \geq c

    where c is the speed of sound and the deltas are the spatial and temporal resolution. Any decent vertical resolution would force a time step so small that the computers of that time couldn't handle it. The physical justification for the hydrostatic equation is that the horizontal length scale far exceeds the vertical; then the hydrostatic approximation comes out of the scale analysis. The early papers by Charney and coworkers, or say the GFD text by Pedlosky, have the details on this. In early models the horizontal resolution was perhaps 1000 km and the vertical tens of km, so the scale ratios worked.
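
    As a crude illustration (assumed round numbers, nothing from any particular model), the acoustic CFL limit on an explicit time step is easy to work out:

        # Sketch: explicit acoustic CFL limit dt <= (grid spacing) / c, assumed values only.
        c = 340.0     # speed of sound, m/s (approximate)
        dx = 1.0e6    # ~1000 km horizontal spacing of an early model, in m
        dz = 1.0e3    # ~1 km vertical spacing, in m
        print("dt limit from horizontal spacing: %.0f s" % (dx / c))  # ~2900 s
        print("dt limit from vertical spacing:   %.1f s" % (dz / c))  # ~2.9 s

    Filtering the vertically propagating sound waves with the hydrostatic approximation removes the second, far harsher limit.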

    Since then, the horizontal and vertical resolutions have steadily crept towards each other, and the physical argument loses its validity. It seems, however, that the hydrostatic approximation soldiers on in the GCMs. Perhaps this is due to timestep limitations. But unfortunately, the hydrostatic equation is usually on page 1, so to speak, of an average atmospheric dynamics course and many, if not most, modelers do not question it. I’m very glad that someone(s) finally is doing so.

    BTW, there is a previewer for LaTex one can use:

    http://www.forkosh.dreamhost.com/mimetexpreview.html
    It is a bit inconvenient, but much better than nothing.

    Juan

  143. Gerald Browning
    Posted Mar 6, 2007 at 10:29 PM | Permalink

    Juan (#143),

    One of the major problems with the hydrostatic approximation is that it changed the mathematical properties of the original unmodified, inviscid system (the principal part of the original system is essentially a hyperbolic system, but not symmetric for large scale motions). Oliger and Sundstrom showed that the initial boundary value problem for the hydrostatic system with any point wise lateral boundary conditions is always ill posed. This led Heinz and me to look at ways to treat the original system in an alternate manner. If you look at our 1986 Tellus manuscript, you will find a complete analysis for all scales of motion for the original system and the introduction of a hyperbolic system that has shown to accurately portray multiple scales of motion without a discontinuity in systems. If you look here on the ITCZ thread, there is a reference that shows the power of the multiscale system in handling multiple scales of motion in a continuous manner. It is well posed for both the IVP and the IBVP for all scales of motion. The reference shows how the system can recreate a mesoscale system very accurately with open boundary conditions. The second reference on the ITCZ thread shows similar results for a model with typical parameterizations.

    Jerry

  144. Gerald Browning
    Posted Mar 6, 2007 at 10:49 PM | Permalink

    Margo (#142),

    You seem to have a very good grasp of fluid dynamics (as I have seen in other threads). I will respond to your comment tomorrow.
    I may have found a way to show the contour plots and hope to do so shortly.

    Jerry

  145. Gerald Browning
    Posted Mar 6, 2007 at 11:44 PM | Permalink

    Juan (#142):

    Thanks for the tip on the previewer for mimetex!!!!
    I tried the example and that should be just what I need to help me debug my examples.

    Jerry

  146. Posted Mar 7, 2007 at 7:56 AM | Permalink

    All the contents of Tellus A, January 2007 – Vol. 59, Issue 1, Page 1-154, are free online. The main issue page is here.

    The article, "How reliable are climate models?" on pages 2-29 might be of interest. The abstract, the complete article in pdf and html.

  147. Jean S
    Posted Mar 7, 2007 at 8:16 AM | Permalink

    re #147: Yes, it is of some interest. However, when reading the article, remember to use your RealClimate filter: the author is one of the leading alarmist scientists in Finland. He was also an author of 4AR Chapter 11 ("Regional Climate Projections"). I've heard him speak a few times and read most of his publications, and I think he has problems making a distinction between a computer-model-generated climate and the true one...

  148. Juan Rivero
    Posted Mar 7, 2007 at 9:30 AM | Permalink

    Re #145: You're welcome. I think I got the tip from Lubos Motl's blog.
    However, I see the CFL equation got munged; it should be Dx/Dt.

  149. gb
    Posted Mar 7, 2007 at 10:53 AM | Permalink

    Re # 142, Margo,

    I am aware that a Coriolis and buoyancy term should be included. What I wanted to say was that atmospheric models don't resolve the dissipation scales (Kolmogorov length scale). In other words, atmospheric models don't resolve the whole spectrum of scales present in the atmosphere. As in large-eddy simulations, a subgrid model should therefore be included. I have an article by Mahalov et al., GRL (2004), vol. 31, L23111. They report a successful DNS of jet-stream turbulence, but with a vertical resolution of 1 to 10 m, much finer than what is possible with atmospheric models. A fair question is whether the subgrid models used in atmospheric models are accurate. Perhaps they are too dissipative. A constant viscosity as a subgrid model is also not very physical. But not including some kind of subgrid model or eddy mixing parameter is not correct either.
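
    Just to give a feel for the gap (order-of-magnitude values assumed here, not numbers taken from Mahalov et al.):

        # Rough estimate of the Kolmogorov scale eta = (nu^3 / eps)^(1/4).
        nu = 1.5e-5    # kinematic viscosity of air, m^2/s (approximate)
        eps = 1.0e-4   # assumed dissipation rate, m^2/s^3 (order of magnitude only)
        eta = (nu ** 3 / eps) ** 0.25
        print("Kolmogorov scale ~ %.1e m" % eta)   # a few millimetres

    With values like these the true dissipation scale is millimetres to centimetres, so a model with a grid of kilometres cannot resolve it and some representation of the unresolved energy sink is unavoidable.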

    To be honest, I am not an expert in geophysical flows, so perhaps my interpretation is not completely correct. But I have quite some experience in simulations of turbulent flows and I do know what is required for a physically realistic simulation.

  150. Gerald Browning
    Posted Mar 7, 2007 at 12:16 PM | Permalink

    gb (#150),

    I find this reply more reassuring. Thank you for that. The experiments in the Math. Comp. manuscript demonstrate that a dissipation mechanism of some sort must be included in the viscous, incompressible NS equations for long runs. In fact, the minimal scale estimates require a dissipative term for their derivation, and this is mathematically reasonable because there is a physical mechanism to remove enstrophy as it cascades down the spectrum (in other words, the mathematics and fluid dynamics are consistent). However, the manuscript also demonstrates that the solution is impacted by the type of dissipation (or subgrid scale sink of enstrophy), and this was one of the major points of the manuscript.

    I might also mention that the forcings in the atmosphere are crucial to any accurate replication of the weather or climate and there are large errors in these forcings for all weather and climate models in addition to the problem with the type and size of dissipation.

    Jerry

  151. Posted Mar 7, 2007 at 1:22 PM | Permalink

    Re 150. (gb)

    I am aware that a Coriolis and buoyancy term should be included. What I wanted to say was that atmospheric models don't resolve the dissipation scales (Kolmogorov length scale). In other words, atmospheric models don't resolve the whole spectrum of scales present in the atmosphere.

    No, they don't capture the Kolmogorov scale! :-)

    In fact, if you are thinking LES, the climate models probably don’t resolve any significant amount of turbulence. (Though I don’t know. Still, I think there is clearly a lot of turbulence at scales less than many kilometers.)

    As far as I can determine, the GCM-type climate models are more "Reynolds Average Navier Stokes" like; that is, RANS models. That is to say, pre-LES type models. (The terminology used to describe the closures I recognize is only appropriate for RANS models, not LES models.)

    The distinction is somewhat important because outside climate science, "sub-grid" is a term usually reserved to describe the sorts of closures required in LES, not Reynolds Average (RANS) type models. Reynolds Average models usually use the term "closure model" for the higher order moments that appear in the averaged equations. (There is something of a distinction, and strangely enough, the distinction matters when deriving the equations and the closures. That's why you'll see LES modelers mention "Leonard stresses" while "Reynolds Average" modelers refer to "Reynolds stresses".)
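
    For anyone following along, the RANS side is just the Reynolds decomposition (schematic, incompressible, index notation; nothing model-specific):

    u_i = \bar{u}_i + u_i', \qquad \frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \nabla^2 \bar{u}_i - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}

    The last term (the divergence of the Reynolds stresses) is the unclosed piece a RANS closure has to model; the LES analogue is the subgrid stress that falls out of spatial filtering rather than ensemble averaging.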

    As in large-eddy simulations a subgrid model should therefore be included. I have an article by Mahalov et al., GRL (2004), vol. 31, L23111. They report a successful DNS of jet-stream turbulence but with a vertical resolution of 1 to 10 m, much finer than what is possible with atmospheric models.

    Yep. DNS has been used for loads of things — at fairly low Reynolds numbers and fairly “uncomplicated” flow conditions.

    Generally, people take the output and try to develop closure for RANS and/or LES models. This can be done with some success– provided the closure is only used in places where it applies.

    That can be a trick in any complicated flow where many things might be happening. After all, you need the code to "recognize" enough about the solution to know what closure applies! Is the flow shear dominated? Buoyancy dominated? If you don't know in advance, developing the proper closure is difficult, particularly in a RANS model.

    There is a long history of people suggesting plausible closures that then didn’t work when you applied them just slightly outside the region where they were tested. That’s why RANS type models progressed from mixing length closures to k-epsilon (and similar) to full Reynolds stress modeling. It’s also why you find different closures in fluids with constant density, variable density due to thermal effect, compressible flows etc.

    The hope for LES was to resolve some of the turbulence and assume the smaller scales all look alike. That tends to work better than RANS models– but as far as I can tell, climate models are RANS like! (Or LES at such a huge scale that they resolve none of the turbulence, which means, they will suffer from all the difficulties affecting a RANS code.)

    A fair question is whether the subgrid models used in atmospheric models are accurate. Perhaps they are too dissipative.

    “Are accurate or not” is often a difficult question. In RANS like models, the issue is general: Are they accurate in flow regimes where they are applied. (LES actually gets around some issues. That’s why people proposed it as a solution to the multitude of difficulties associated with RANS like models.)

    It is clear that some GCMs turn up viscosity for no other reason than to avoid numerical instabilities. This is something that would give fluid dynamicists outside climate science the heebie-jeebies (unless you can explain why it doesn't matter in the situation under consideration).

    A constant viscosity as a subgrid model is also not very physical. But not including some kind of subgrid model or eddy mixing parameter is not correct either.

    I doubt any GCM uses a constant viscosity subgrid model.

    The papers I’ve read suggest things like k-epsilon closures. (That is, 1980s-1990s era closures used in RANS models. ) I’ve read even older closures. But generally speaking, trying to find the precise descriptions of models in the published peer reviewed papers is… well… not easy. You can pull up reference after reference, and the level of description is sort of vague. (As I told my husband, “It’s turtles all the way down!” )

    To be honest, I am not an expert in geophysical flows so perhaps my interpretation is not completely correct. But I have quite some experience in simulations of turbulent flows and I do know what is required for a physical realistic simulation.

    I also don’t do geophysical flows.

  152. Gerald Browning
    Posted Mar 8, 2007 at 11:13 PM | Permalink

    gb (#150),

    What does a successful simulation of jet-stream turbulence mean? If the dynamics involves a fast, exponential growth in time, then any small error in the numerics will produce a different result. The only way to circumvent this problem is by the use of dissipation that masks the fast error growth, as discussed in comment #139 on the above manuscript by Lu et al. Does Mahalov reference their manuscript?

    I will obtain a copy of the article, but I am quite suspicious of the claim.

    Jerry

  153. Gerald Browning
    Posted Mar 8, 2007 at 11:24 PM | Permalink

    Jim D (#123):

    One of the most recent examples of a hydrostatic model used at fine scales is the NOAA Rapid Update Cycle (RUC) model. Also the NCAR MMM limited area model was hydrostatic for many years.

    Jerry

  154. Posted Mar 9, 2007 at 6:13 PM | Permalink

    Finally, we’re getting somewhere. I’m thinking that this paper cannot be ignored. It’s peer-reviewed, after all.

    I’ll be posting discussions next week, or “real soon now.” For starters the time-step size can never be a valid parameter for generating an ensemble to obtain an average ‘predicted’ value of a physical quantity.

  155. Jim D
    Posted Mar 9, 2007 at 9:13 PM | Permalink

    Jerry,
    #154 Yes, hydrostatic models are being used close to 10 km grids where they still are valid,
    but get much finer than that and you are in trouble, except in well-behaved cases like
    non-breaking flow over topography, or highly stratified flow with little vertical motion
    or resolved turbulence. Hydrostatic assumptions break down for any kind of resolved over-turning.

    #153 Surely you are not expecting turbulent flow to be predicted deterministically.
    It can only be simulated in a gross sense of having turbulence in the right places,
    and that would be considered a success. It is somewhat like trying to predict every thunderstorm
    or tornado in the right place, which is generally considered beyond determinism, given
    the limitations of both the model and initial conditions. This is where ensembles come
    in to give better probabilities, rather than trying to do things with a single deterministic
    forecast. Fine-scale modeling is definitely going in the probability/ensemble direction now.

  156. Gerald Browning
    Posted Mar 9, 2007 at 11:21 PM | Permalink

    Jim D (#156)

    I beg to differ with you. If a hydrostatic model is being used at 10 km, then the dissipation must be much larger than in the real atmosphere in order to hide the ill posedness.

    There is a difference between turbulent flow and rapid, exponential growth in time in the solution. Isn't it interesting that the problem near a jet has only recently been seen in simulations, but has been known analytically since 1984? And how do you propose to contain the fast exponential growth with finer mesh models – evidently only with unphysically large dissipation. Once large dissipation is included, the parameterizations are necessarily not realistic. Then they must be tuned and the forecasts cannot be trusted.

    Ensemble forecasts were not started because of turbulence, but because the parameterizations did not produce the correct small scale storms. The hope was that ensembles would produce better forecasts, but that has not been the case (did any model forecast the Orlando tornado?).

    The facts are on the table now and the plots will be posted this weekend.

    Jerry

  157. Jim D
    Posted Mar 10, 2007 at 10:15 AM | Permalink

    Jerry,
    #157. On a 10 km grid, the shortest waves being resolved would be about 40 km.
    These would still be very hydrostatic, and would not have growth rates
    more than a few percent larger than the nonhydrostatic proper growth rates
    in unstable stratification. These models have not had dissipation added to
    control exponential growth, they mainly have dissipation to prevent
    build-up of energy at poorly resolved scales less than 4 grid-lengths,
    even in stable conditions, so it is a numerical rather than physical reason.
    Also smooth models tend to perform better at root mean square error scores
    than detailed models, as you might imagine, so operational centers have been
    reluctant to reduce the dissipation for that reason too, because these scores
    are the bottom line for them. Just my opinion here.

  158. Gerald Browning
    Posted Mar 10, 2007 at 2:22 PM | Permalink

    Jim D (#158),

    First you said that no reviewer would accept the hydrostatic approximation at the smaller scales (#123). Now you say that the hydrostatic approximation is good down to 10 km (#156). Am I missing something here? If there is overturning caused by numerics, physics, or ill posedness in the hydrostatic system, the model must be stabilized either by large dissipation or an ad hoc adjustment to reinstate the hydrostatic balance (and usually both). What does the latter adjustment do to the dynamics when the nonhydrostatic system is producing an entirely different solution than the hydrostatic system? And what does such an adjustment do to the numerical accuracy?

    Jerry

  159. Gerald Browning
    Posted Mar 10, 2007 at 3:34 PM | Permalink

    Jim D (#158) (continued),

    A second order method on a 10 km mesh does not resolve a 40 km wave, especially when the large numerical dissipation is damping that wave. As has been stated earlier, if a numerical model does not resolve the real atmospheric spectrum, then it necessarily must use a large nonphysical dissipation, whether that be implicit (in the numerical method) or explicit (large dissipation coefficient). Then the forcing must be tuned to produce something that looks realistic, but the dynamics is not correct.
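
    As a simple illustration of the resolution point (this is just the standard modified-wavenumber analysis for a centered second order difference, not anything specific to a particular model):

        # Effective wavenumber of a centered 2nd-order difference: k_eff = sin(k*dx)/dx.
        # A 40 km wave on a 10 km mesh is only 4 points per wavelength.
        import math
        dx = 10.0e3                 # mesh spacing, m
        wavelength = 40.0e3         # wave of interest, m
        k = 2.0 * math.pi / wavelength
        k_eff = math.sin(k * dx) / dx
        print("k_eff / k = %.2f" % (k_eff / k))   # ~0.64, i.e. ~36% phase speed error

    So even before any dissipation is applied, the advection of such a wave is badly misrepresented.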

    And doesn't it seem contradictory to refine the mesh and leave the dissipation large in order to smooth the solution, in effect reducing the resolution of the solution? What has been gained?

    Jerry

  160. Posted Mar 11, 2007 at 6:10 AM | Permalink

    Fine-scale modeling is definitely going in the probability/ensemble direction now.

    Do you mean fine-scale modeling is now done with DNS — with results averaged to learn more about the flows? In that case, they aren't inserting things like false diffusion or hydrostatic assumptions to patch things up.

    If you insert an approximation like false diffusion and then average, your averages will be the average of what would occur if that diffusion were real. There is no particular reason to expect that average to converge to what happens in a real flow.

  161. Jim D
    Posted Mar 11, 2007 at 5:49 PM | Permalink

    Addressing #159-#161.

    I meant that much below a 10 km grid the hydrostatic models become invalid, but
    they are perfectly fine at 10 km or more, or in special cases at finer scales.
    I am also of the opinion that some operational models have too much dissipation,
    for reasons I mentioned in #158, but research models tend to try to minimize
    dissipation to what is necessary for numerical reasons, or justifiable by
    physical sub-grid mixing. I agree that if a hydrostatic model was used at
    finer scales than its valid range, it may survive instability by using more dissipation
    to suppress unrealistic growth rates, but I know of no one doing this, i.e. using
    a hydrostatic model at, for instance, 1 km grid size.

    Regarding fine-scale modeling, I was referring to cloud-resolving modeling, not DNS,
    as DNS is not used in meteorology due to its limitation to neutral stratification.
    Fine-scale modeling uses the nonhydrostatic equations. In a good model, there will
    not be larger-than-natural dissipation at scales much larger than the grid scale,
    and one could argue whether that is 4 grid-lengths, or 7, or more. That said,
    these models represent realizations of the real atmosphere at some resolved scale,
    and, because the eddies (or thunderstorms), won’t be forecast in the correct phase,
    an ensemble at least can tell you something about the uncertainty in their location
    or timing. Hope that helps.

  162. Posted Mar 11, 2007 at 6:42 PM | Permalink

    Re 162:

    Since when is DNS limited to neutral stratification? Note the first paper here (http://meetings.aps.org/Meeting/DFD06/SessionIndex3/?SessionEventID=55292) involves Rayleigh-Benard convection.

    I assumed DNS is not used in meteorology because of the excessive computational burden. That’s the same reason it’s not used in zillions of other flows of practical importance.

    As to your claim the models are "perfectly fine" at 10 km or more, could you be a bit more specific? Perfectly fine when used to predict what? All sorts of approximations are "perfectly fine" when their results are limited to some small range, and become entirely deficient when used outside that range.

  163. Gerald Browning
    Posted Mar 11, 2007 at 6:43 PM | Permalink

    Jim D. (#162),

    You have not addressed the basic issue that a climate model (this is the climateaudit blog) uses very large unphysical dissipation because it is unable to handle all the atmospheric scales of motion and therefore must tune the forcings to make the spectrum appear realistic (but the dynamics is wrong). The climate models also use the hydrostatic approximation that has a number of problems, especially if the models are used at the finer resolutions that would be necessary to resolve real atmospheric features that are dynamically important.

    The so called cloud resolving research models also have serious problems as can be seen in the rapid exponential growth near jets in the Lu et al. reference that uses the NCAR WRF model. The experiments in that reference do not involve any parameterizations (forcings) and those problems won’t magically disappear using ensembles. And when the microphysics is added, those parameterizations have their own set of problems.

    The comment at the beginning of this thread summarizes these problems and IMHO so far no one has shown that the comment is incorrect.

    Jerry

  164. Posted Mar 11, 2007 at 8:21 PM | Permalink

    Jerry commented to Jim

    You have not addressed the basic issue that a climate model (this is the climateaudit blog) uses very large unphysical dissipation because it is unable to handle all the atmospheric scales of motion and therefore must tune the forcings to make the spectrum appear realistic (but the dynamics is wrong).

    This comment leads me to want to clarify my question at the end of 163. I'm getting the sense that Jim D is saying the hydrostatic approximation with excess dissipation added for smoothing may, in some sense, be "perfectly fine" for short term weather prediction when grid sizes are greater than 10 km. (I would assume he then only means that for these short term predictions, that approximation introduces errors that are no worse than other errors — like not having a perfect representation of the initial conditions.)

    Is that what Jim is trying to say? If it is what he's trying to say, and even if he's right, that still leaves open the issue of errors from using patched-up models of these sorts to predict climate.

    Oh, Jerry, I see you referring to your J. Math Comp article. Could you list the full citation? (I keep scrolling up through comments, but I can't see it.)

  165. Gerald Browning
    Posted Mar 11, 2007 at 11:17 PM | Permalink

    As promised, I am providing the links to four plots that I will discuss in great detail. These plots are the vertical velocity w at 12 hrs for a domain doubly periodic in x and y (w = 0 at the bottom and top of the 12 km atmosphere), for two resolutions, for the inviscid multiscale model and for an inviscid hydrostatic model. Increasing the resolution for the multiscale model results in essentially the same solution:

    But increasing the resolution in the hydrostatic model leads to a rapid cascade to smaller scales and the solution blows up:


    The vertical velocity is shown because it is a very sensitive variable and crucial to determining storms in the presence of moisture. Both models start from the same large scale balanced state and the lateral dimensions are 2000 km by 2000 km. There are 256 points in each lateral direction for the low resolutions and 512 for the high resolutions. Thus the lateral mesh length is approximately 7 km for the low res runs and 3 km for the high res runs. The vertical mesh is 1 km, but I have also run different heights, vertical mesh sizes, and time integration schemes.

    The lateral domain was chosen to be doubly periodic so that there would be no boundary problems for the hydrostatic model. Of course then the Coriolis term must be shut off, but it is not crucial at these resolutions.

    Jerry

  166. Gerald Browning
    Posted Mar 11, 2007 at 11:23 PM | Permalink

    Well the links didn’t work so I must not have used the link tag correctly.

    I will just list the URLs for the images for now:




    Jerry

  167. Gerald Browning
    Posted Mar 11, 2007 at 11:31 PM | Permalink

    Margo (#165),

    The article is in Mathematics of Computation. If you do a google scholar search on Browning Kreiss it is the second reference. (1989)

    Jerry

  168. Willis Eschenbach
    Posted Mar 11, 2007 at 11:52 PM | Permalink

    Jerry, thanks for the links. To use the “Link” button, first select the text that you want to show underlined as the link. Then press the “Link” button, and paste or type in the URL.

    w.

  169. Gerald Browning
    Posted Mar 11, 2007 at 11:58 PM | Permalink

    I will discuss the above plots in more detail and add additional plots in the comments to follow.

    Jerry

  170. Gerald Browning
    Posted Mar 12, 2007 at 6:33 PM | Permalink

    Margo (#165),

    If you look at my comment on Numerical Climate Models posted under modeling, you will find that Sylvie Gravel, Heinz, and I did a rather thorough study of a well known large scale weather prediction model.
    The results of that study were that all of the parameterizations could be turned off (except one that I will discuss) and the model produced the same forecast as the full model for approximately 36 hours. After that period of time both models had relative errors that were O(1). The one parameterization was the planetary boundary layer formulation, but that only had an impact on the small velocities closer to the surface and could be greatly simplified and still produce the same result.

    How was this possible? The observational data in the large scale NWP models is typically updated every 12 hours through a process called data assimilation. That is what keeps the NWP models from deviating too far from reality, not the parameterizations.

    We also looked at the data ingest and found that the main source of help was the wind data near the jet stream as that is where the major source of kinetic energy is in the atmosphere and the variable that is most crucial to periodic updating (reference by Heinz and me available on request).

    And finally, the satellite data was not very helpful unless there was a rawinsonde sounding nearby to anchor the information. Temperature is not a good variable for updating, and the vertical profiles are not that accurate, especially in overcast regions. This has been known for some time, but not stated, as the satellite data is a source of funding for many orgs.

    Jerry

  171. Gerald Browning
    Posted Mar 12, 2007 at 6:39 PM | Permalink

    Margo (#165),

    I forgot to mention that I had Sylvie make a minor correction to the data assimilation program based on the Bounded Derivative Theory, and it had a major impact in reducing the error in the forecast. In fact, the Canadian model started to outperform the US global model, even though the former uses a less accurate numerical method.

    Jerry

  172. Gerald Browning
    Posted Mar 12, 2007 at 7:03 PM | Permalink

    Willis (#167),

    Well I kinda did that, but this is my first attempt. :-(
    The good news is that now I can start to post contour plots to show what is really going on with the various systems of equations.

    Were you able to link to the plots, and how readable are the basic contour lines in the plots (except for the fourth one, which is a mess because of the ill posedness)? I will explain the text and displayed info in more detail as we proceed.

    The vertical velocity is at z=3 km and there is no viscosity in either model so that nothing is hidden by dissipation. I was unable to stabilize the hydrostatic model with dissipation and it behaves exactly as in the mathematical theory and the simple example above, i.e. as the mesh is refined the growth gets worse because there are more horizontal waves in the model.

    Note that these runs were made on my home computer. I made longer runs with the same velocity profile as in our multiscale manuscript with similar results. But so that the runs wouldn’t take too long, in the doubly periodic case, I increased the speed slightly. I will also show all of these plots and discuss the results in detail.

    I have also run a channel model with the multiscale system and will show what happens there because that model includes the Coriolis term and is closer to what happens in reality.

    Jerry

  173. Gerald Browning
    Posted Mar 12, 2007 at 7:30 PM | Permalink

    Willis et al.,

    It might be of interest to compare the vertical velocity in the low res versions of the two models. Both started from the same initial data,
    but there are already differences in the low res runs at 12 hours.

    Jerry

  174. Jim D
    Posted Mar 12, 2007 at 8:09 PM | Permalink

    Addressing #165 and #166

    Hydrostatic models do not need large dissipation at scales where hydrostatic dynamics
    is valid. This is at grid sizes > 10 km, i.e. climate model scales. Their dynamics
    are perfectly fine to deal with any waves that they resolve. You would find
    that they only dissipate energy at scales that are poorly resolved, and that would lead
    to an unrealistic spectral peak at those scales if not dissipated. So I dispute the
    premise that these models are adding dissipation to counter dynamical flaws. There
    are no dynamical flaws at their resolved scales.

    Similarly, nonhydrostatic models such as in the Lu et al. paper have dynamics valid
    at all scales, and any growth rates they produce are realistic if the model is obeying
    the basic numerical time-step limitations. I haven’t seen anything to suggest there is
    anything wrong in that paper, but may be missing the point.

    #163, Margo, I only said DNS couldn’t handle stratification because someone earlier
    said DNS didn’t have gravity. I guess I am still confused as to what defines DNS.
    Do they have buoyancy terms (gravity) or not? My knowledge in that area is not great.

    #171, Jerry, I would like to see a hurricane simulation done without parameterizations,
    specifically latent heat processes. This can't be a claim that latent heat is unimportant,
    can it? Many studies have shown you don't get much of a forecast with a dry model,
    unless perhaps you are in Saudi Arabia or the Sahara.

  175. Gerald Browning
    Posted Mar 12, 2007 at 11:23 PM | Permalink

    Jim D (#175),

    1. Please cite a quantitative scale analysis that shows that it is appropriate to use the hydrostatic approximation down to 10 km. In this regard you might want to read the Tellus 1986 manuscript by Kreiss and me. And it will be shown that the cascade of enstrophy down to the smaller scales, where the hydrostatic balance is not applicable, happens within hours, not days. The inappropriate damping of these scales has an impact on the spectrum, especially in longer runs (see the Math. Comp. article). Your statement is pure semantics, not mathematics.

    2. Please cite the type and size of dissipation in any hydrostatic model that is running at 10 km. I will be happy to add that operator to the hydrostatic model to determine its impact on the accuracy of the solution and the spectrum. Essentially this was done in the Browning, Hack, and Swarztrauber manuscript and the dissipation destroyed the spectral accuracy of the numerical method, i.e. all the cost of the extra computation of the pseudo-spectral method was wasted.

    3. Clearly you do not want to understand that any numerical model that tries to simulate a physical problem with an exponential growth in time (let alone one that is very fast) will have problems. This is well known in the numerical analysis community.

    4. I did not know that climate models (or for that matter NWP models) could accurately forecast the development and intensity of a hurricane. My argument was for the global Canadian weather model and the global US model. A cute gimmick to try to change the subject. If you look at the references on the ITCZ thread, I am fully aware of the importance of heating in smaller scales of motion. I just don’t trust the accuracy of the parameterizations, the interface conditions between the atmosphere and ocean, etc.

    Jerry

  176. Posted Mar 13, 2007 at 8:42 AM | Permalink

    Re #175 Jim D says:

    #163, Margo, I only said DNS couldn’t handle stratification because someone earlier
    said DNS didn’t have gravity. I guess I am still confused as to what defines DNS.
    Do they have buoyancy terms (gravity) or not? My knowledge in that area is not great.

    DNS is direct numerical simulation. A computation is called “DNS” if it fully models all features of a flow, resolving all scales of turbulence. The term implies the computation is “exact”– no closure models or sub-grid parameterizations of any sort. (I’m emphasizing the “sub-grid” bit because in later papers, some sorts of approximations are used. One key feature is these approximations or parameterizations have nothing to do with the existence of a grid. They are the same ones used for analytical treatments.)

    The earliest computations were for homogeneous isotropic, incompressible turbulence with no gravity. This particular flow is of great theoretical importance, though, strictly speaking, it exists nowhere. This flow was tractable because of the very simple geometry; there were also some very good experiments that permitted modelers to compare their results to reality. (Comparisons were excellent.)

    The important feature of early DNS computations is they included no “closures”, or “sub-grid parameterizations”. However, the results are limited to flows with no density variations. (There are tons of flows with practically no density variations; modelers and researchers learned a lot from these computations. )

    With time, complicating factors were added to DNS. By the mid-90s, DNS was applied to flows with heat transfer and density variations, and included the gravity terms. The very earliest used the Boussinesq approximation to model density variations. This approximation for density variations is known to be excellent in many flows of engineering importance; it's also valid for studying some geophysical flows. The Boussinesq approximation is not "sub-grid" and its use has nothing to do with the grid, or poor resolution of a computer model. It has been widely used in analytical solutions.

    So, use of this approximation, which could be called a "parameterization," is not considered a violation of DNS. It is simply accepted that the numerical experiments performed with DNS describe only those flows where that approximation is valid. (This range is actually well known from previous experimental and analytical work. The treatment of density variations could be made exact if computational power were available and modelers found flows where they needed to improve the parameterization describing density variations in some particular flow.)
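
    (Schematically, for small temperature-driven density differences the approximation is just

    \rho \approx \rho_0 \left[ 1 - \beta \,(T - T_0) \right]

    with the variation retained only in the gravity/buoyancy term of the momentum equation and the density treated as constant everywhere else; beta is the thermal expansion coefficient.)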

    You’ll also find all sorts of DNS computations for convective flows. It’s also been used to model particle laden flows and bubbly flows–where gravity is definitely important. (Though, in that case, an approximation describing the interaction between the fluid and individual particles is added. The DNS refers to the bulk of the fluid. Once again, the approximation or parameterization has nothing to do with the grid size. So, it’s not a “sub grid parameterization. )

    As far as I’m aware, the main reason DNS is not used for most geophysical flows is that the computational burden is too great. It’s not due to any fundamental limitation in dealing with density variations. Certainly, there is no issue with stratification per se.

    Their dynamics are perfectly fine to deal with any waves that they resolve. You would find that they only dissipate energy at scales that are poorly resolved, and that would lead to an unrealistic spectral peak at those scales if not dissipated. So I dispute the premise that these models are adding dissipation to counter dynamical flaws. There are no dynamical flaws at their resolved scales.

    Jim, you have me totally lost here. What do you mean by “perfectly fine”? Perfectly fine in what type of flow?

    It certainly seems to me that it won't be perfectly fine in many flows. Based on understanding of the turbulent energy cascade, it strikes me that if all you do is chop off the small scales, modeling the dynamics of the large scales would be "ridiculously bad".

    Generally speaking, in a real flow turbulent energy is always dissipated at the smaller scales – the ones that aren't resolved by the computations you seem to find "perfectly fine". In a real flow, the bulk of the energy is created as a result of interactions between the large scale structures and the mean flow. (This bit would, in principle, be captured even if you chop off the small scales.) In a real flow, practically no energy is dissipated by these large scales.

    However, that doesn’t mean the energy created at large scales dissipates at large scales. Rather, larger eddies interact with other eddies– including smaller eddies (in a way I am not going to describe in blog comments). The result is an energy cascade causing energy from large scales to pour down to smaller scales where the energy is dissipated. (For details see Chapter 8 of Tennekes and Lumley, 1972.)

    Because the dynamics of the energy cascade involve larger eddies interacting with smaller eddies, you can't model the dynamics of the large eddies "perfectly" or even "adequately" if you don't include the effect of smaller eddies somehow! If you don't include the behavior of the small eddies somehow, the dynamics of the large eddies will be "all screwed up"! The precise way in which things would be "all screwed up" is this: If you don't capture the dissipative eddies, but you do capture the energy creation, then overall the energy created at large scales would grow, and grow and grow to some horrifically large magnitude. This would happen because the creation term exists, and the dissipation term is absent.

    Since you chopped off the smaller eddies, the computation will present some sort of “barrier” preventing the smallest resolved scale from transferring energy to the smaller– non-resolved scales. Likely as not, you’ll see an unphysical pile up of energy here. (After all, on average, in a real flow, it can’t efficiently transfer energy “up” to larger scales. Numerically, it’s blocked from transferring energy to the smaller scales. On average, energy is going to build up there. )

    (Why does this sound qualitatively like what Jerry is describing happens in his paper? Energy created at larger scales keeps growing and growing and growing.)

    So, saying "There are no dynamical flaws at their resolved scales" makes no sense to me – since losing the dynamics associated with the interaction of the small eddies with the large eddies is a dynamical flaw with regard to modeling the larger eddies – even if you resolve the large scale eddies!

    So, of course models using coarse grids are adding dissipation to resolve the dynamical flaws introduced by chopping off the smaller scales. If you didn’t include some sort of model to account for the effect of the missing scales, the turbulent kinetic energy would build up to infinity.
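
    In crude budget form (schematic only, nothing from any particular code; P is production of turbulent kinetic energy E by the resolved scales and the mean flow, epsilon the dissipation that physically happens at the small scales):

    \frac{dE}{dt} \approx P - \varepsilon

    Chop off the small scales and you keep P but lose the physical epsilon, so E just grows until something, numerical or added by hand, removes it.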

    Oh– Jerry B, can you send me an email at margo -at- truthortruthiness.com. I want to ask some stuff privately!

  177. gb
    Posted Mar 13, 2007 at 10:33 AM | Permalink

    Jerry B.

    Have you read post #177? There Margo says that if you don't resolve all scales of motion (like in an atmospheric model) you need a subgrid parameterization. That's what I tried to explain a number of times. Do you agree that some kind of parameterization of the subgrid scales is required in an atmospheric model, or do you have another opinion?

    I have read your Math. Comp. article (1989) and I don’t understand it. Do you claim that a numerical model of turbulence with and without a parameterization for the subgrid scales should converge to the same solution?

    Another question. How do you know that the dissipation in atmospheric models is unphysically large?

  178. Gerald Browning
    Posted Mar 13, 2007 at 2:03 PM | Permalink

    gb (#178),

    Have you carefully read Margo’s post?

    In the Math. Comp. article, what do the mathematical estimates state for the minimal scale that will be produced?

    I can run an atmospheric model for a short period of time and obtain convergence without dissipation (see the reference under ITCZ). The problem is when one insists on running a model for long periods of time (as in a climate model). Then the enstrophy starts to cascade down to smaller scales, and a model with insufficient resolution to handle the smaller scales will have to add a dissipative term. If that term is not the correct type or size, the cascade will not be correct and will have a substantial impact on the accuracy, i.e. the model solution will start to deviate from the continuum solution with the correct type and size of dissipation.

    The climate models do not have sufficient resolution to resolve the cascade, which occurs very rapidly. Note that the large scale forecast models circumvent this problem by updating the data fairly often, as discussed.

    Jerry

  179. Gerald Browning
    Posted Mar 13, 2007 at 8:25 PM | Permalink

    This is a test as suggested by Willis:

    Multiscale Model (Lo Res)

  180. Gerald Browning
    Posted Mar 13, 2007 at 8:32 PM | Permalink

    Test 2:
    MS256

  181. Gerald Browning
    Posted Mar 13, 2007 at 8:43 PM | Permalink

    Test3


  182. Gerald Browning
    Posted Mar 13, 2007 at 9:03 PM | Permalink

    One final test

    text

  183. Gerald Browning
    Posted Mar 13, 2007 at 9:20 PM | Permalink

    Thumbnail


  184. Gerald Browning
    Posted Mar 13, 2007 at 9:32 PM | Permalink

  185. Gerald Browning
    Posted Mar 13, 2007 at 10:42 PM | Permalink

    Lets try again.

    These four plots are the vertical velocity w at 12 hrs and z = 3 km for a domain doubly periodic in x and y (2000 km by 2000 km) with rigid boundaries at z = 0 km and z = 12 km, i.e. w = 0 at the bottom and top boundaries, for two resolutions, for the inviscid, multiscale (nonhydrostatic) model and for an inviscid, hydrostatic model (all runs start from the same large scale balanced state). Increasing the resolution for the multiscale model from 256 by 256 by 12 points (7 km by 7 km by 1 km) to 512 by 512 by 12 points (3.5 by 3.5 by 1 km) results in essentially the same solution.




    But increasing the resolution in the hydrostatic model leads to a rapid cascade to smaller scales and the solution blows up.



    The vertical velocity is shown because it is a very sensitive variable and crucial to determining storms in the presence of moisture. The vertical mesh is 1 km, but different vertical mesh sizes, heights for the top boundary, and time integration schemes have also been tried with similar results.

    The lateral domain was chosen to be doubly periodic so that there would be no boundary problems for the hydrostatic model. Of course then the Coriolis term must be shut off, but it is not crucial at these resolutions.

    Jerry

  186. Tom Vonk
    Posted Mar 14, 2007 at 9:33 AM | Permalink

    Re 177

    Extremely enlightening, and that's the direction I am going too.
    Energy building up at large resolved scales can NOT be dissipated at small unresolved scales because, umm, they are not resolved.
    So to avoid the system blowing up, something artificial has to be built in.
    This something is by definition unphysical because the physical bit is happening at scales that are unresolved.
    To make things worse, it is even impossible to evaluate how very unphysical this unphysical something is, because the physical part is ignored and unresolved.
    Follows imho a "damned if you do, damned if you don't" – if you go down you'll be hit by Jerry's exponential evolutions, and if you stay up you'll have just a pile of steaming HS.

    Now, and that is the point I am trying to get deeper into, there might be a hope? assumption? approximation? that a statistical kind of treatment (like Reynolds averaging + "noise") will save some sort of validity for some sort of averages under conditions not further specified.
    My intuition is that nothing gets saved when the calculation runs long enough.
    The nice thing would be if I could find somewhere on the net a mathematical treatment of this statistical case.

  187. gb
    Posted Mar 14, 2007 at 11:15 AM | Permalink

    Re # 187

    An atmospheric model does not solve the full governing equations but in fact the filtered governing equations, whereby the small scales are removed. It is quite simple in mathematical terms to carry out a spatial filtering of the governing equations. If you do that you will find an additional term describing the flux of energy from the resolved scales to the unresolved scales. A subgrid model is meant to describe this process. You call it something artificial. However, a subgrid parameterization in fact models an existing physical process. Whether a subgrid parameterization does that correctly is another question, but it is entirely reasonable and physically correct to add a subgrid model to an atmospheric model. Climate science is not unique in that respect. In very few practical applications can turbulent flows be fully resolved. If engineers at Boeing compute the flow around an airplane, they use commercial fluid dynamics codes full of 'artificial' terms. I am working in engineering and there we develop many 'artificial' subgrid models. Numerical fluid dynamics codes with these subgrid models produce results that match experimental measurements quite well in general.
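
    Schematically, in standard LES notation with an overbar denoting the spatial filter, filtering the incompressible momentum equation leaves a residual stress

    \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j

    whose divergence appears in the equation for the resolved velocity; that is the subgrid-scale term a parameterization has to represent.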

    And it is not true that it is impossible to evaluate how unphysical/physical these subgrid models are. There are observations: spectra, wind speed, ocean currents. In some cases there are quite large differences between model outputs and observations, but in other cases the correspondence is quite good. For instance, computed spectra agree quite well with measurements. If subgrid models were so bad, that wouldn't happen.

  188. Posted Mar 14, 2007 at 2:26 PM | Permalink

    gb:

    Of course engineers at Boeing use models for these flows. But the types of flows are very different, and the modeling challenges are very different. Flows around wings were some of the earliest that resulted in decent design information. (There are, of course, difficulties in these flows. However, the geometry of wings is such that we could get things like lift and drag fairly well without having good turbulence models. In fact, flows around wings are “easier” than flows around cylinders.)

    Flows with density variations tend to be particularly difficult. Flows that involve high shear and buoyancy at the same time are especially difficult.

    No one is saying one should use no models or parameterizations. If you read Jerry's comments and papers, you'll see many of them discuss what is wrong with specific types of closure models in specific circumstances.

    Are you using LES type models or RANS models to study flow around aircraft? Also, are you modeling the flow around the full aircraft body, or running models to predict the behavior of components? (Any combination of the four is possible; I'm just trying to find out which one.)

    Also, what types of closures or subgrid models are you using?

  189. Posted Mar 14, 2007 at 3:00 PM | Permalink

    Re 187:

    To make things worse, it is even impossible to evaluate how very unphysical this unphysical something is, because the physical part is ignored and unresolved.

    Tom:
    Strangely enough, it is rarely impossible to evaluate how unphysical an approximation is. However, someone has to actually try to evaluate how good or bad a closure model is.

    The purpose of Browning and Kreiss 1989 (which I have now read :-) ) is to examine how good or bad two very specific closures worked in a specific flow system. (If I understood correctly, the case examined was the evolution of decaying homogeneous isotropic 2-D turbulence. This is a fairly "simple" flow, in the sense that the geometry is easy and the only "thing" going on is turbulence interacting with itself. So, it's a good platform for testing whether or not a closure or subgrid parameterization has even the slightest hope of replacing the behavior of small scale turbulent fluctuations.)

    Anyway, Browning and Kreiss began by calculating exact solutions, performing some grid independence tests to make sure the results of their exact computation aren't an artifact of failing to resolve scales. Because these sorts of exact solutions are now possible, the first result they present in their paper is a sort of "numerical experiment", and those results are shown first.

    Now, as a practical matter, even though B&K ran this 'exact' computation, we know that this type of computation is limited to a small class of flows (because too much computing power is required in other flows). Using approximations would reduce the computational load, and people want to know how well some approximations might work.

    So, after running the numerical experiment, B&K changed the numerical model, introducing two specific parameterizations that had been proposed and used in the literature. One “parameterization” is called “chopping” (you just throw away energy from time to time). The other is called “hyperviscosity”. (I’d honestly never heard of anyone doing this–but then, my training is in experimental fluid mechanics. )

    Browning and Kreiss showed that the two parameterizations they tested quickly gave poor results for the evolution of two-dimensional turbulence. It's a pretty nice test. (Though, it's all written in a sort of "mathy" way. I would have also computed some measurements that mimic things one could measure in a lab, and compared those. But that's because my training isn't applied math. :-) )

    Follows imho a "damned if you do, damned if you don't" – if you go down you'll be hit by Jerry's exponential evolutions, and if you stay up you'll have just a pile of steaming HS.

    The nice thing would be if I could find somewhere on the net a mathematical treatment of this statistical case.

    You really need to take courses to understand statistical fluid mechanics for a number of reasons. First: There are different statistical approaches to modeling. (You'll see I keep mentioning DNS, RANS, LES. Each is different.) The strengths and shortcomings are different in each type. Second: The "problems" are all in the closures (to use the correct term if you use RANS type averages) or "sub-grid" models (to use the correct term for LES models). DNS works. It just can't be used for flows of practical importance.

  190. Gerald Browning
    Posted Mar 14, 2007 at 3:14 PM | Permalink

    gb (#188),

    I have shown a mathematical example that any system of equations can be used to produce the desired solution, let alone a spatial spectrum fall off, by adding an appropriate forcing term. Just because the spatial spectrum has the right fall off does not mean that the dynamics is correct.
    This counterexample shows why that is not the case.

    Jerry

  191. Jim D
    Posted Mar 14, 2007 at 9:59 PM | Permalink

    Margo, thanks for the DNS explanation.
    As I understand it, DNS assumes all eddies are resolved, so they have an
    effective Reynolds number mostly defined by their grid resolution. So
    since atmospheric Reynolds numbers are about 100000, DNS can’t be used for any
    significant volume of atmosphere with current computers.
    I am somewhat agreeing with gb in that what some people consider dynamics,
    others consider physics or parameterization.
    turbulence has to be parameterized somehow.
    When I say hydrostatic models are fine at resolved scales, I am only including
    motions due to advection, pressure gradients, coriolis, and buoyancy, for which
    analysis gives the well-known dispersion relation (written out below) that
    omega squared equals (N squared times k squared) over (m squared plus k squared)
    ignoring coriolis and an extra density variation term, where
    omega is the frequency of gravity wave oscillation,
    N is the Brunt-Vaisala frequency that depends on stratification,
    k is the horizontal wavenumber (inversely proportional to wavelength)
    m is the vertical wavenumber.
    In hydrostatic models the denominator k squared is missing, severely distorting
    behavior as k increases to m in size, but fine as long as m somewhat exceeds k.
    This is what I base the validity of hydrostatic models on.
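
    Written out (ignoring Coriolis and the extra density-variation term, as above):

    \omega^2 = \frac{N^2 k^2}{k^2 + m^2} (nonhydrostatic), versus \omega^2 = \frac{N^2 k^2}{m^2} (hydrostatic).
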
    Obviously the atmosphere permits other waves, but acoustic modes have no meteorological
    significance, and larger scale waves due to rotation effects are handled better
    by hydrostatic models, so it is gravity waves that are the limitation in resolved
    dynamics. I mention this to clarify what I meant in previous e-mails when
    referring to hydrostatic versus nonhydrostatic. If we are not talking about
    gravity waves or unstably stratified resolved eddies, this may be beside the
    point of this thread.

  192. Gerald Browning
    Posted Mar 14, 2007 at 11:19 PM | Permalink

    Margo (#190),

    A good summary of most of the numerical experiments in our manuscript. But the mathematical minimal scale estimates (Henshaw, Kreiss, and Reyna) are extremely important to our manuscript and to others by Heinz and students that followed. The estimates predict very accurately what the number of waves (and hence the grid size) must be to resolve the flow you described in 2-D and 3-D for a given kinematic viscosity coefficient. Different types of dissipative operators (like the hyperviscosity used rampantly in atmospheric models) produce different estimates of the minimal scale, and their use changes the solution from that of the "standard" dissipative operator.
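
    For a rough feel of what resolution requirements of this kind imply (this is only the classical Kolmogorov-type scaling, not the sharper Henshaw-Kreiss-Reyna estimates themselves):

        # Classical scaling only: smallest dynamically active scale ~ L * Re^(-3/4) in 3-D,
        # so the number of points needed per dimension grows like Re^(3/4).
        Re = 1.0e5                      # the Reynolds number figure Jim quoted above
        points_per_dim = Re ** 0.75
        total_points_3d = points_per_dim ** 3
        print("points per dimension ~ %.0f" % points_per_dim)    # ~5600
        print("total 3-D grid points ~ %.1e" % total_points_3d)  # ~1.8e11

    The minimal scale estimates make this kind of count precise for a given viscosity and dissipative operator, which is exactly why changing the operator changes the answer.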

    Jerry

  193. Gerald Browning
    Posted Mar 14, 2007 at 11:39 PM | Permalink

    Jim D (#192),

    It is not quite that simple. I will discuss this in more detail
    when I return.

    Please quantify the relationship between k and m in terms of length scales when the hydrostatic relationship fails for the general audience.

    Jerry

  194. Tom Vonk
    Posted Mar 15, 2007 at 3:11 AM | Permalink

    You really need to take courses to understand statistical fluid mechanics for a number of reasons. First: There are different statistical approaches to modeling. (You’ll see I keep mentioning DNS, RANS, LES. Each is different.) The strengths and shortcomings are different in each type. Second: The “problems” are all in the closures (to use the correct term if you use RANS type averages) or “sub-grid” models (to use the correct term for LES models.) DNS works. It just can’t be used for flows of practical importance.

    I have mostly experience with DNS.
    And doing that, one is struck very quickly by some simple observations for which not much mathematics is needed.
    1) The general solutions of N-S are unknown; the situation is similar to that in general relativity.
    2) Not having a clue about the behaviour (even the existence, continuity and uniqueness) of general solutions, one takes an engineering stance, asking no longer after general solutions but after practical “gimmicks” that provide some design rules in EXTREMELY restricted flow cases.
    3) By doing that, one gets a catalogue of practical cases where a specific formula, a specific approximation has a limited experimental validity. In those cases it is irrelevant whether such approximations have a physical meaning; what is relevant is that they “work” in that specific case.
    4) Obviously it is not because I can design pumps working with specific fluids in specific conditions that I have understood something about N-S or turbulence, because I haven’t.
    5) Now with climate modelling I have a system that is as complicated as it can get. It is full of feedbacks, coupled PDEs, multiphase interfaces. This system is neither a pump, nor a flow around a wing, nor any of the idealised specific cases where practical formulas “work”. The mass of unphysical approximations necessary just to make the thing run is staggering. Even worse, you have no wind tunnel where you could run the climate over and over to check whether your “formulas” work at least for a limited time.
    6) It follows that you are unable to answer the most basic question: “Is the stability/predictability that my model displays an artefact of my intervention in the numerical code, or is it due to real, by definition unknown, properties of the ‘solution(s)’ of the equations governing the system?”

    That’s why I find Jerry’s approach extremely interesting, because he seems to be tackling exactly this question. Admittedly he also chooses specific cases, but I am convinced that there can be a general demonstration showing that any dissipative nonlinear system exceeding a certain complexity is subject to exponential divergence, making it essentially unpredictable, and no RANS or LES can help.
    I see it a bit like Goedel’s theorem: as soon as an axiomatic system is not trivial, it is either inconsistent or incomplete.

  195. Tom Vonk
    Posted Mar 15, 2007 at 3:50 AM | Permalink

    To illustrate the problem, let’s take ENSO.
    It is not very complicated to understand the main causalities.
    (Even saying “main” is already an assumption, because I am admitting without demonstration that poorly understood or “small” causalities are not “main”, but anyway.)
    Now it is easy to see that this (simplified) system would behave like a delayed oscillator.
    Starting from there I can build a model based on the delayed oscillator and begin to play with it.
    And indeed the model will give results that look more or less like the real ENSO, or at least like the idea I have of the real ENSO based on the very limited timeframe for which I have data I consider reliable and sufficient.
    From there I can begin to tinker with it, making other assumptions; I can refine it and I can begin to make “predictions”.
    I can even “prove” that I was right to neglect this and simplify that, because when I change this or that, the model shows a low sensitivity to such changes.
    But did I prove anything concerning the real ENSO?
    Of course not. I have proven that a delayed oscillator with N supplementary ad hoc assumptions has this and that property.
    As the real ENSO is obviously no delayed oscillator, after all this work I am still left with the unanswered question of how relevant the prediction bit is, and for how long.
    I am not saying herewith that everything was useless; by using an analogy that has a QUALITATIVE justification I might get some QUALITATIVE insights into some short-term properties of the real ENSO.
    But from there to quantitative predictions over a long period of time is a distance that might very well be infinite :-)
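
    To make the analogy concrete, here is a toy delayed-oscillator integration (Python; the coefficients, delay and initial history are illustrative choices loosely in the range used for idealised delayed-oscillator studies, and this is not any published ENSO model):

        import numpy as np

        # dT/dt = T - b*T(t - tau) - T^3 : growth, delayed negative feedback, saturation
        b, tau = 0.75, 6.0             # assumed feedback strength and delay (nondimensional)
        dt, nsteps = 0.01, 20000
        lag = int(round(tau / dt))

        T = np.zeros(nsteps)
        T[:lag + 1] = 0.55             # constant history near the fixed point sqrt(1 - b)
        for i in range(lag, nsteps - 1):
            T[i + 1] = T[i] + dt * (T[i] - b * T[i - lag] - T[i] ** 3)   # forward Euler step

        print("late-time range of the toy index:", T[nsteps // 2:].min(), T[nsteps // 2:].max())

    With these made-up numbers the delayed negative feedback sustains an oscillation that "looks like" an ENSO index, which is exactly the point above: getting something that looks right is easy, while proving that it says anything quantitative about the real ENSO is another matter.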

  196. Posted Mar 15, 2007 at 8:33 AM | Permalink

    Re 192 Jim D.

    On the parameterization issue:
    I agree that turbulence has to be parameterized in problems of practical importance. Practical importance = gives useful results to guide engineers and applied scientists doing full problems. The computational burden limits DNS both to small Reynolds numbers and to relatively trivial geometries.

    I’m an engineer, so of course I totally accept the need for closure, parameterizations etc. (In fact, from my point of view, the purpose of DNS is to try to test proposed closures in a variety of well-controlled situations. To an extent, DNS replaces the need for some physical experiments. That’s its current role.)

    I don’t think anyone is saying one may never or should never use parameterizations. Everyone knows you need to use them, but computational results are, at best, only accurate when all the parameterizations used are appropriate and have been found to be valid. When one identifies reasons why they must fail, that needs to be taken seriously.

    Now onto the gravity wave issue.
    First, I’m not sure what specific problem we are addressing. (Partly because I’m having trouble identifying precisely where to find Browning and Kreiss 1984, cited in the main article that launched this discussion. I found this: Numerical problems connected with weather prediction, but that only provides the abstract.)

    However, way up in the article, the problem mentions “shear” and “jet stream”.

    But, as to this bit that you said “that
    omega squared equals (N squared times k squared) over (m squared plus k squared)
    ignoring coriolis and an extra density variation term,…”

    I understand that by neglecting products of the “disturbance” terms (describing the wave), we can get a solution like those discussed here (www.cost723.org/school/material/lectures/KEY4-gravity_waves-vaughan.ppt). (or his companion .doc paper.)

    At least based on the discussion Jerry provided above, he’s not discussing stable gravity waves. He’s discussing instabilities. (So, can models capture what happens when they break due to shear or buoyancy. Both at the same time? Etc.)

    When things go unstable, the linearized equations describing stable behavior have broken down. So why would you conclude that the resolution required to describe the dynamics of a stable system is sufficient to describe the behavior of the unstable system? The general case is that we can’t. We can’t in pipe flow. We can’t in shear flow. We can’t in any flow I can think of. From a “math” point of view, what happens when things go unstable is that all those terms we neglected in the linear solutions become large. What happens from an engineering point of view is that any closure, solution or analytical method that worked to describe the stable case becomes entirely deficient. You can’t use it!

    If you could supply more details of the flow system you think we are discussing, maybe that would help me!

  197. Posted Mar 15, 2007 at 9:02 AM | Permalink

    Tom:
    I think you and I are in general agreement. When I said you needed classes to understand statistical fluid dynamics, I meant you need it to understand the hows, whys, plusses and minuses of RANS (Reynolds Average Navier Stokes) type codes and LES (Large Eddy Simulations.) Any engineer or scientist can get a qualitative level understanding of what’s involved in DNS. “Research tool” doesn’t always mean “harder to understand”.

    Strangely, I once characterized DNS as “the big hammer approach” to a coworker who’d just finished his Ph.D. and was switching projects as a result of a change in employment. He thought I meant “big hammer” as a bad thing. Six months later, after he’d started running DNS models for a particular project, we were talking about his results. And he said, “You know, there really is nothing like a ‘big hammer’!”. He’d finally realized that “the big hammer” is a good thing, provided you have one. He actually switched to using the “big hammer” metaphor himself.

    The problem is, we don’t usually have the big hammer.

    I disagree with you on one minor point: LES-type codes might help resolve the issues Jerry is identifying. However, I’m pretty sure those are also too computationally intensive for GCMs.
    In an LES code, your filter must be set to resolve the large scales of motion that are anisotropic, and then only model the small-scale motions that are isotropic. There isn’t a distinct boundary separating “large” and “small” scales, but one can estimate these things, and we absolutely know that GCMs don’t resolve all scales of motion that are anisotropic!

    At large Reynolds numbers, the subgrid models in LES should be more general than those used in RANS-like codes, because small-scale turbulence does look similar in many, many flows. (However, these models are computationally intensive, and as far as I can tell the GCMs are absolutely, positively not LES-like. Or, if someone claims they are LES-like, they think it’s OK to set the filter wider than anyone can justify based on known LES computations or theory.)
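
    A minimal sketch of what "setting the filter" does, on a purely synthetic 1-D signal (Python; the signal and filter widths are arbitrary and have nothing to do with a real GCM field):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4096
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        # synthetic "velocity" containing a broad range of scales
        u = sum(np.cos(2 * np.pi * k * x + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 200))

        for width in [4, 16, 64, 256]:                       # filter widths, in grid points
            kernel = np.ones(width) / width                  # top-hat (box) filter
            u_bar = np.convolve(u, kernel, mode="same")      # resolved part
            u_sub = u - u_bar                                # subgrid part, left to the model
            frac = np.mean(u_sub**2) / np.mean(u**2)
            print(f"filter width {width:4d} points: subgrid energy fraction = {frac:5.1%}")

    Widening the filter hands a growing fraction of the motion to the subgrid model, which is the worry expressed above when the effective filter of a coarse-grid model is far wider than anything LES practice would justify.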

  198. gb
    Posted Mar 15, 2007 at 10:44 AM | Permalink

    Re # 193. For 3-D turbulence the minimum length scale is the Kolmogorov length scale, nothing else. The rule is kmax*eta > 1 in spectral simulations, where kmax is the maximum resolved wavenumber and eta is the Kolmogorov length scale. Everybody applies this rule. In the case of hyperviscosity one can define a similar Kolmogorov length scale and the same rule applies. Changing the grid means changing the hyperviscosity, and that is already well known. Subgrid models like the very basic Smagorinsky model adapt themselves automatically to the grid, i.e. if the resolution changes then the eddy viscosity changes, so there is no need for adjustment.
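
    For readers who want numbers, here is a back-of-envelope version of the two rules mentioned above (Python; the dissipation rate, strain rate and grid spacings are made-up illustrative values):

        import numpy as np

        eps = 1.0e-3                                 # assumed dissipation rate, m^2/s^3
        nu = 1.5e-5                                  # kinematic viscosity of air, m^2/s
        eta = (nu**3 / eps) ** 0.25                  # Kolmogorov length scale
        print(f"Kolmogorov scale eta ~ {eta*1e3:.2f} mm")

        for dx in [1.0e3, 1.0e2, 1.0e1]:             # grid spacings, m
            kmax = np.pi / dx                        # largest resolved wavenumber
            print(f"dx = {dx:7.1f} m   kmax*eta = {kmax*eta:.1e}   (DNS needs kmax*eta > 1)")

        Cs, S = 0.17, 1.0e-3                         # Smagorinsky constant, assumed strain rate 1/s
        for dx in [1.0e3, 1.0e2, 1.0e1]:
            nu_t = (Cs * dx) ** 2 * S                # eddy viscosity shrinks with the grid spacing
            print(f"dx = {dx:7.1f} m   Smagorinsky nu_t ~ {nu_t:9.3g} m^2/s")

    The first loop shows how far atmospheric grids are from the kmax*eta > 1 requirement; the second shows the sense in which a Smagorinsky-type eddy viscosity "adapts itself" to the grid.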

  199. gb
    Posted Mar 15, 2007 at 11:39 AM | Permalink

    Re # 198. There is no strict requirement that the subgrid scales should be isotropic in LES. However, if the subgrid scales are significantly affected by rotation or stratification one can’t use a subgrid model developed for Kolmogorov turbulence but has to develop a new model. Perhaps difficult, but in principle possible.

    I have seen articles where they present spectra computed with atmospheric models, so they resolve a part of the ‘turbulent’ scales and I think we can consider them as some kind of LES. However, the turbulent motions are perhaps not resolved in the boundary layer but only above it.

  200. Posted Mar 15, 2007 at 12:22 PM | Permalink

    #200.
    gb, you’re right. I said something stupid. I agree there is no strict requirement that the smaller scales be isotropic. (In fact, to an extent what I said doesn’t make sense: if the turbulence were isotropic, you’d have no off-diagonal components to the Leonard stresses, and that makes no sense.)

    However, “universally” (or at least widely) applicable closures do require that the smaller scales have a sort of “sameness” from flow to flow. Otherwise, you have to use different closures for different flows and pick and choose among them, which means you need to validate the closure for each and every flow type.

    (Of course, historically, validating closures for each “flow type” was done. So you might tweak closures to work for channel flow. Then tweak them for flows around flat plates, wings, etc. Oops, they need tweaking. Then try those closures in free shear and, oops, tweak them again. Eventually you figure out that the closure requires tweaking for each new “type” of geometry, so you propose new ones, etc. And, of course, meanwhile, even if a closure isn’t universal, engineers still use it in flows where it is known to work. Then they supplement this with physical experiments, at least a few.)

    Could you cite the articles you’ve seen that describe spectra in the atmosphere? That way we can see if we are talking about the same sorts of things.

    I have seen LES models in climate science papers that focus on specific subfeatures. However, I haven’t seen LES in GCMs. (This doesn’t mean they don’t exist. It just means I haven’t seen them, so if you have references, I’d be interested in reading them.)

  201. Tom Vonk
    Posted Mar 15, 2007 at 12:52 PM | Permalink

    Margo

    You are right.
    It was actually my first motivation for coming here: coming from DNS (yeah, right, the big hammer) I began to look at RANS and found that it didn’t make things simpler or more “operational”.
    I also admit that my culture in the field of LES is at best approximate.
    The problem being, of course, that having a job I do not have much time to spend reading the hundreds of publications that must have been written in that field.
    Also coming from DNS , I have a very deep distrust in computers because since Turing and Goedel we know that everything is not calculable .
    A computer is good at making very large numbers of additions very fast, but that is only useful when I am perfectly sure that the additions it makes are only additions and nothing more.
    That is very obviously not the case in climate modelling, because there I ask the computer to replace my understanding (or lack thereof) by telling me what will happen in a largely ill-defined configuration.
    Modelling can be useful in engineering for the reasons you already mentioned, but it is poison and worse in theoretical science.
    My goal would be to mathematically prove that systems like the climate are not calculable in
    the Turing sense, but I am not sure that I have the time to reach it :-)

  202. Dave Dardinger
    Posted Mar 15, 2007 at 1:34 PM | Permalink

    re: #102

    I have a very deep distrust in computers because since Turing and Goedel we know that everything is not calculable .

    Good to know we can’t calculate 1 + 1…

    Ahhh, those misplaced modifiers!

  203. Posted Mar 15, 2007 at 1:41 PM | Permalink

    Re 202

    My goal would be to mathematically prove that systems like the climate are not calculable in

    Oh, I suspect they are calculable given a big enough hammer. But who knows, it may turn out they aren’t. Still, even things that aren’t strictly speaking calculable may permit very good approximations.

    Strangely enough, my “issues” are much more modest with regard to the models. I think there is something worthwhile in modeling. I also think there is a lot of overselling of many things going on, and quite strong double standards for hypothesis testing seem to have materialized. (And surprisingly to some, I perceive these things even though I think there is more than likely some AGW going on right now. I just don’t see why thinking AGW is going on should lead people to oversell models and set aside normal standards for hypothesis testing.)

  204. Gerald Browning
    Posted Mar 15, 2007 at 2:11 PM | Permalink

    Jim D (#192),

    I issued a number of very specific responses in #176 to your #175. Instead of answering with information that would be helpful to a general audience to clarify the issues, you responded with the standard meteorological equation for dispersion that IMHO does not answer the first question I asked in a clear manner, and you did not respond to my following three statements that I believe would clarify and quantify the discussion.

    1. Please state the specific lower limits on the horizontal and vertical length scales where the hydrostatic approximation is supposed to be valid and cite the appropriate reference. At that point we can discuss this issue clearly.

    2. I have offered to include the dissipation operator that is used in any atmospheric hydrostatic forecast or climate model that is running at 10 km and check its impact on the solution. This is clearly possible by comparing the convergent numerical solution of the nonhydrostatic model with the corresponding viscous, hydrostatic model. Why hasn’t this been done before? Interesting that you didn’t take me up on this offer and I will let readers of this thread decide why that is the case.

    3. You brought up the simulation of hurricanes that had nothing to do with the careful, quantitative comparisons that we made using various simplified versions of the Canadian global NWP model with Sylvie Gravel. Nor did you respond to how a small change in the data assimilation
    program had a larger impact than all of the parameterizations.
    I also pointed out I am fully aware of the importance of heating by citing several references, but there was no response to my statement.

  205. Jim D
    Posted Mar 15, 2007 at 8:22 PM | Permalink

    #205, Jerry, some responses, best I can.
    1. From the dispersion relation, the problem appears when k approaches m, i.e. when the horizontal scale
    starts to get as small as the vertical scale. This depends on the depth of the
    atmosphere, which gives the smallest possible m. If k were 0.5*m (so the horizontal
    scale is twice the vertical scale), and the vertical wavelength were say 20 km, then k would correspond to a 40 km wavelength.
    Since these appear as squares in the denominator, the error is 25% in the square of
    omega, or about 12% in omega itself. So for resolved waves of 40 km and deep modes
    the 12% error is starting to look bad, but for 100 km it becomes about 2%, much more
    acceptable, and shallower modes are even more accurate. It’s a grey area, and it depends
    on what error you are willing to accept before saying a scale is poorly represented.
    For unstable stratification N squared becomes negative, omega likewise becomes imaginary,
    and it then represents a growth rate. So hydrostatic models overestimate the growth rate as
    the approximation breaks down (due to the missing denominator term); a short numerical sketch of this is appended at the end of this comment.

    2. It is very easy to test hydrostatic along with nonhydrostatic models. Stable flow over
    topography is a prime example of how to demonstrate the hydrostatic failure,
    and has been done many times to test nonhydrostatic models
    but that shows up well only around 2 km grid-lengths with mountain widths about 20 km.
    This is not a viscous case, nor has anyone tried making the hydrostatic model viscous
    to cure this problem. Maybe I am misunderstanding.

    3. I was trying to make it clear you can’t run a weather or climate model with
    all the parameterizations turned off, using an extreme example of a hurricane to
    make the point. If you re-initialize the model every 12 hours with data assimilation,
    maybe you can get away with it, but that is like cheating by giving the model the answer
    every 12 hours. Good global models have value out to 7 days without data assimilation,
    except at the beginning,
    and I can assure you it would be much less than 7 days without parameterizations.
    I am not sure of the point of #171 if it wasn’t a jab at the value of model physics.
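
    The numerical sketch referred to in point 1 (Python; the value of N squared, the vertical wavelength and the horizontal wavelengths are illustrative assumptions, not taken from any model):

        import numpy as np

        # With N^2 < 0 the frequency is imaginary and its magnitude is a growth rate:
        #   nonhydrostatic: sigma = sqrt(-N^2) * k / sqrt(k^2 + m^2)   (bounded by sqrt(-N^2))
        #   hydrostatic:    sigma = sqrt(-N^2) * k / m                 (unbounded as k grows)
        N2 = -1.0e-4               # assumed unstable stratification, 1/s^2
        Lz = 10e3                  # assumed vertical wavelength, m
        m = 2 * np.pi / Lz

        for Lx in [100e3, 10e3, 1e3, 0.1e3]:          # horizontal wavelengths, m
            k = 2 * np.pi / Lx
            sigma_nh = np.sqrt(-N2) * k / np.sqrt(k**2 + m**2)
            sigma_h = np.sqrt(-N2) * k / m
            print(f"Lx = {Lx/1e3:6.1f} km   nonhydrostatic {sigma_nh:.2e} 1/s   hydrostatic {sigma_h:.2e} 1/s")

    The nonhydrostatic growth rate levels off at sqrt(-N^2) while the hydrostatic one keeps increasing as the horizontal scale shrinks, which is the overestimate described in point 1.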

  206. Gerald Browning
    Posted Mar 16, 2007 at 12:34 PM | Permalink

    Jim D (#205),

    Now we are getting somewhere.

    1. How is the standard dispersion relationship derived and did it predict the ill posedness of the IVP and IBVP for the hydrostatic equations?

    2. If a climate model were eventually run with a horizontal resolution of 10 km (which still does not resolve most of the major dynamical features such as the Gulf Stream, hurricanes, the ITCZ features, etc.) and a vertical resolution of 1 km (~10 km lid), is the hydrostatic approximation valid? How important is the heating in this case? What impact does the heating have on the dispersion formula?

    3. All of the NWP models update the data as often as possible. In fact the
    NOAA RUC model is called the Rapid Update Cycle model, is hydrostatic,
    and runs at very fine resolution. The reason for the updates has been made
    clear – without them the NWP models deviate from reality very quickly.
    These updates are not possible in a climate model.

    4. I believe that Sylvie’s experiments speak for themselves. It was quite interesting to watch the lower boundary layer approximation start to destroy the accuracy of the interior solution within a matter of hours, even though the velocities at the lower boundary were quite small compared with the jet. And the lack of observations off the west coast of the US was a disaster for the forecast in the western US. Go to the Canadian meteorological web site and look at the comparison of the accuracy of the Canadian and US global NWP models in the first 36 hours. There is a dramatic improvement in the Canadian model relative to the US model beginning in 2002 when the simple change to the data assimilation was performed.

    5. Clearly a test of an unforced, hydrostatic model in the simplest setting against a resolved solution of a nonhydrostatic model in the same setting is the best way to determine the impact of the hydrostatic approximation on the continuum solution. The first example of such a test is shown above and is quite illuminating.

    6. How is the supposed accuracy of the 7 day forecast measured? By L_2
    norms on each vertical level against observational data, or against the blend of observational and model (assimilation) data at 7 days? The latter is a bit incestuous.
    Sylvie interpolated the model forecast to the observation locations at later times and then used standard mathematical norms at each level over the US where the observational data is most dense. IMHO this seems to be the most reasonable scientific way to measure the accuracy of the forecast.
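
    A minimal sketch of that kind of measure (Python; the station values are made up and stand in for a forecast already interpolated to observation locations at one level):

        import numpy as np

        obs = np.array([5520.0, 5480.0, 5555.0, 5610.0, 5470.0])        # observed heights, m
        forecast = np.array([5510.0, 5495.0, 5540.0, 5630.0, 5450.0])   # forecast at the same stations

        rel_l2 = np.linalg.norm(forecast - obs) / np.linalg.norm(obs)   # relative L_2 error
        rms = np.sqrt(np.mean((forecast - obs) ** 2))                   # rms error
        print(f"relative L_2 error = {rel_l2:.4f}   rms error = {rms:.1f} m")

    Verifying against independent observations in this way, rather than against an analysis that already blends in the model, avoids the incestuousness mentioned above.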

    Jerry

  207. Jim D
    Posted Mar 16, 2007 at 7:14 PM | Permalink

    Jerry, on the points raised in #207:
    1. This is a standard textbook formula based on the linear inviscid incompressible equations
    with buoyancy and constant stratification, but ignoring the density scale height and Coriolis,
    which only slightly complicate the equation. I think the only thing it predicts to be ill-posed
    is the hydrostatic approximation as the length scale decreases: maximum growth rates occur
    at infinitely small horizontal scales. This is a fundamental problem related to the
    fact that the hydrostatic equations have no concept of “parcel theory”, which limits
    vertical kinetic energy based on the available potential energy.

    2. A 10 km hydrostatic climate model may still be OK, because it only resolves waves that
    are much longer than the maximum vertical scale. Climate model tops usually are more like
    30-50 km, however, so it starts to be questionable, based on the dispersion relation, and
    climate and weather global models look to go nonhydrostatic in preparation for 10 km grids.
    Heating is important. Radiation has a mean atmospheric cooling effect, plus surface
    temperature effects that would lead to severe biases if ignored. Latent heat is a vital
    energy source for the climate system. The dispersion relation can be modified for
    idealized heating profiles, but the simplest thing is to take convectively unstable
    cases as unstably stratified by using negative N squared.

    3. Yes, I agree all the data available are used to provide the best possible initial
    and, for non-global models, boundary conditions, but clearly
    in forecast mode you don’t have data in the
    future, so you are running the model for days without data assimilation.

    4. Clearly more data helps, and it has to be used sensibly.

    5. Yes, the results in #186 are in accord with the dispersion relation that
    overpredicts hydrostatic growth rates at smaller scales, but limits them for
    nonhydrostatic models. It is not clear whether you just have a numerical instability,
    which you can check by seeing whether reducing the timestep changes the solution.

    6. The most common verification is the anomaly correlation of the 500 mb height,
    between the forecast and an analysis,
    where a correlation of 0.6 is regarded as the threshold for skill. Skill extends beyond
    7 days (after the last data assimilation) when averaging results over many forecasts.
    Skill is shorter-lived for fine-scale things like precipitation forecasts, maybe 2-3 days at most.
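
    For readers unfamiliar with the metric in point 6, here is a minimal sketch of an anomaly correlation (Python; the climatology, analysis and forecast fields are synthetic stand-ins, not real data):

        import numpy as np

        rng = np.random.default_rng(0)
        npts = 1000
        c = 5500.0 + 50.0 * rng.standard_normal(npts)    # stand-in 500 mb height climatology, m
        a = c + 80.0 * rng.standard_normal(npts)         # stand-in verifying analysis
        f = a + 60.0 * rng.standard_normal(npts)         # stand-in forecast (analysis plus error)

        fa, aa = f - c, a - c                            # anomalies from climatology
        ac = np.sum(fa * aa) / np.sqrt(np.sum(fa**2) * np.sum(aa**2))
        print(f"anomaly correlation = {ac:.2f}   (0.6 is the usual threshold for useful skill)")

    Note that the verification here is against an analysis, not against independent observations, which is part of what is being argued about in this thread.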

  208. Gerald Browning
    Posted Mar 16, 2007 at 10:18 PM | Permalink

    Jim D. (#208),

    It is quite clear that you are knowledgeable about the models, i.e. you are not a student but most likely a modeler, judging by your statements and your avoidance of direct questions or skirting of the issues. I have seen this type of behavior many times before, so I will treat you as a modeler defending his turf until shown otherwise.

    1. I asked that you cite a text that shows the dispersion relationship and specifically asked you how it is derived. You come back stating that it is in many texts (no specific reference) and no details as to its derivation, only vague generalizations. The two good things are that you stated that the formula is for the inviscid system and that the Coriolis term is not crucial so that means the experiments above are even more reasonable. I also mention that as in all the Browning and Kreiss manuscripts, the experiments are used to illustrate the mathematical theory and the two go hand in hand to make a strong case just to stop this kind of nonsense. No one has been able to show that any of our manuscripts are incorrect. I also mention that 7 km is not infinitely small and that the nonhydrostatic equations (avoid all of the fancy lingo) have fast exponential growth so they are not going to solve the problem as you assert. I waited very patiently for the Lu et al. manuscript and the Page et al. manuscript to appear as they totally support the mathematical theory and the experiments above based on your own models.

    It has been clearly shown (two references cited on the ITCZ thread)
    that once one moves from the large scale (1000 km) to smaller scales, the vertical velocity becomes proportional to the total heating, and this is also true near the equator. Thus any errors in the mesoscale heating or the equatorial heating lead to immediate errors in any NWP or climate model. Arguments based on the simple dispersion relationship cannot fix this problem.

    2. Talk about waffling. First you said that no editor would allow the use of the hydrostatic approximation for anything but the large scales (1000 km horizontal scale), and now you state that it may be OK for the 10 km horizontal case. I seriously doubt that the climate models go to 50 km, and suspect there is a sponge layer or other dissipative mechanism at the top to absorb vertically propagating gravity waves produced by discontinuous forcing terms. This is certainly the case for the NCAR WRF model, because I specifically asked Joe Klemp this question at his seminar at CSU.

    3. First you said that no model would use updating. I also asked you very specifically how that accuracy is measured and gave specific details of how it was done by Sylvie Gravel. I also indicated the Canadian (CMC) site to support the claims of the comparison between the Canadian and US global models. You return with no details of how the accuracy is measured in a global NWP model (I think many readers would be interested in this
    formula).

    5. Good try. But the model is stable for the 7 km run,
    and when refining the mesh I cut the time step in both models. The hydrostatic model doesn’t have vertical sound waves, but the multiscale model does. Also, I have specifically asked Strand to run the same test as
    Lu et al. on the hydrostatic system in basic dynamic mode.
    Any takers?

  209. gb
    Posted Mar 17, 2007 at 7:06 AM | Permalink

    Jerry,

    The dispersion relation can be found in for example ‘Elementary Fluid Dynamics’ by Acheson, ‘Physical fluid dynamics’ by Tritton and any book on geophysical fluid dynamics.

    To come to a previous point: the smallest length scale in 3-D turbulence is the Kolmogorov length scale, see ‘A first course in turbulence’ by Tennekes and Lumley or any other book on turbulence. No difficult mathematics is required; simple and elegant dimensional reasoning will do.

    I’m a bit amazed now. Before one starts to criticise atmospheric models, shouldn’t one understand some of the basic physics of atmospheric flows?

  210. fFreddy
    Posted Mar 17, 2007 at 7:57 AM | Permalink

    Gerald Browning et al, thank you for this fascinating thread. Please keep going.

  211. gb
    Posted Mar 17, 2007 at 8:11 AM | Permalink

    Margo,

    Takahashi et al. (2006) GRL, vol. 33, L12812. There you can find a simulation of the mesoscale spectrum.

    Re # 191. Reproducing observed spectra with a model is not trivial; it puts quite a severe constraint on it. Very few scientists have been able to reproduce the mesoscale spectrum with idealised models.

  214. Jim D
    Posted Mar 17, 2007 at 5:48 PM | Permalink

    Re #209 (re-post of #213, which lost everything after a less-than symbol)

    Jerry,

    1. Yes, I am in mesoscale modeling, not climate modeling. I got onto this blog to
    make sure the arguments don’t start reinforcing any misconceptions, as they
    can tend to drift unless checked. I mainly wanted to respond to the first post,
    and have not looked yet at Browning and Kreiss, so can’t comment on that.
    The first post implied hydrostatic models have an error that is routinely
    corrected by viscous terms (as I interpret it). I dispute that, and have shown
    the reason that hydrostatic models have errors which depend on scale. If diffusion
    helps, it only does so at some specific scales where you eliminate the wavelengths
    that are causing problems.

    2. I have not said that hydrostatic models are invalid at 10 km, just that
    the approximation error starts to get significant for deep modes at that scale.
    Specifically in #123 I referred to thunderstorms as an example of something
    you can’t credibly model with a hydrostatic model. This would be less than 5 km grids
    at most.
    On the model top height issue, yes, mesoscale models usually put the top below
    20 km, but at least above the tropopause. Climate models, and global models
    such as ECMWF, have tops nearer 1 mb (about 50 km), because there are some important
    radiative things happening up there.

    3. Did I not say that the anomaly correlation of the 500 mb height is the
    standard measure of accuracy? This shows skill to beyond 7 days after the last data
    assimilation. Do I need to explain what the 500 mb height is, or the term
    anomaly correlation?

    5. I’m just saying that hydrostatic models should have unrealistic growth rates
    as the vertical and horizontal scales become comparable. If you carry on reducing
    the grid down to 1 km, the hydrostatic model would get even worse, while the
    nonhydrostatic solution would continue not to change, as is correct when there is
    no small-scale forcing. Nonhydrostatic model growth is limited by N squared
    as the scale goes to zero. This can be interpreted as having the kinetic energy
    constrained by the available potential energy, as I mentioned before. The exponential
    growth is limited realistically by energy conservation.

  215. Jim D
    Posted Mar 17, 2007 at 5:55 PM | Permalink

    Sorry for the repeats due to a technical error. Feel free to delete #213 and #214.

  216. Gerald Browning
    Posted Mar 18, 2007 at 1:24 PM | Permalink

    I am pleased to announce that I have been informed by Christian Page that the manuscript cited under the ITCZ thread is in the galley stage and should be published shortly in Monthly Weather Review. As mentioned on that thread, the manuscript implements the balance scheme for mesoscale motions suggested by Browning and Kreiss in the multiscale initialization manuscript mentioned on the same thread in a more practical setting with realistic parameterizations.

    Jerry

  217. Gerald Browning
    Posted Mar 18, 2007 at 2:02 PM | Permalink

    The Evolution of Limited Area Models

    I have found a reference that I thought might lead to some insight as to the progression of limited area models at NCAR.

    A Nonhydrostatic Version of the Penn State-NCAR Mesoscale Model: Validation Tests and Simulation of an Atlantic Cyclone and Cold Front

    Monthly Weather Review, May 1993, V 121, 1493-1513, by Jimy Dudhia

    I will quote a few lines from the introduction to the manuscript.

    “The hydrostatic mesoscale model presented by Anthes and Warner (1978) has been employed to investigate meteorological phenomena on a wide range of scales ranging from continental-scale cyclone development to local valley flows.”

    “With improvements in computer power it is becoming possible to use finer meshes. Higher resolutions reduce errors associated with finite differencing; thus it is clear that the future mesoscale model should not be constrained by the hydrostatic assumption, which effectively limits the resolution to around 10 km except in weak flow, nonconvective situations.”

    “There are currently few nonhydrostatic models designed for use with ‘real data,’ that is, with data derived from three-dimensional , observed wind, temperature, and moisture soundings. ”

    Does the second quote sound familiar?

    Note that the Browning and Kreiss manuscript in Tellus appeared in
    1986.

    Jerry

  218. Gerald Browning
    Posted Mar 18, 2007 at 3:07 PM | Permalink

    Jim D(udhia?) (#215),

    I would now like to respond to a few of your points in #215.

    1. The entire point of the discussion of the unresolved scales of motion is that the use of an incorrect size or type of numerical dissipation can have a dramatic impact on the numerical approximation of the continuum solution. The point is clearly stated and has been restated by Margo. Interesting that you keep trying to twist our words.

    2. It has been clearly stated that at the smaller scales of motion the vertical velocity becomes proportional to the total heating. The dispersion relationship has no bearing on the accuracy of the various parameterizations used in smaller scales of motion or near the equator. Please read the two references on the ITCZ thread.

    You failed to mention the “sponge layer” at the top of the WRF model. What physical mechanism is that based on, and what is its impact on the numerical approximation of the continuum solution? What inflow boundary conditions are specified at the top of the model, as there is clearly information coming down from higher altitudes?

    How are the lateral boundaries treated at inflow and outflow boundaries? Have you recreated the simple numerical experiment in the first reference in the ITCZ thread with all of the dissipative mechanisms left on in any of your models?

    Where is the top of the latest version of the NCAR CCSM model that is used for paleoclimate studies and for the IPCC model?

    3. Please provide a relative L_2 norm of the error of the geometric height at 7 days for a number of forecasts. Can you explain why only the 500 mb height is used at that time to check the accuracy (I know the answer and so does Sylvie, but I would like to have it explained to the general reader).

    As you have mentioned, the precipitation is unreliable long before 7 days time. So basically, when using large scale NWP models, the parameterizations do not help in the first few days and are inaccurate shortly thereafter.
    What impact does this have on climate models that run even coarser grids and do not use data assimilation to update the information?

    5. Someone above mentioned the Mahalov reference and I am obtaining a copy.
    But they evidently had problems much further down the scale than Lu et al.
    Do you find it rather curious that the more resolution that is applied, the
    worse the cascade of enstrophy to smaller scales?

    There is no energy conservation in numerical models that numerically damp the solution, unless it is imposed by some ad hoc procedure.

  219. Jim D
    Posted Mar 18, 2007 at 6:17 PM | Permalink

    OK, yes, Jim Dudhia here, but not representing anyone but myself.
    Not sure what you were implying with a quote from my Introduction,
    but it was a well established fact, and just a statement of the obvious, as is
    often found in Introductions. It served to justify the point of the paper which
    was to introduce nonhydrostatic dynamics into what was previously the hydrostatic MM4.

    1. Maybe I am missing the point, but atmospheric models are not normally dominated by
    dissipation unless you are using them at scales where they barely resolve your
    features of interest. The fact is that inviscid behavior is well handled
    by models with adequate resolution if the dynamics is valid for the scale.
    2. I can’t comment without reading the references, but the vertical velocity response
    to heating is understandably proportional in a linear system.

    Yes, I do know WRF too, and the upper sponge is there to prevent upward-propagating gravity-wave energy
    from reflecting back down from the model top surface, so it simulates a deeper or effectively infinite
    atmosphere, which is more realistic since in reality wave energy is not reflected from that level.
    There is no important net vertical mass flux at such heights.
    Lateral boundaries are specified from analyses when using a regional model, with a relaxation
    zone added to prevent noise. In the case of forecasts the “analyses” are actually usually
    forecasts from a larger-scale regional model or a global model.
    The top of the CCSM model is about 1 mb. I could find out, but I am guessing here.

    3. I don’t work in global verification, so I can’t provide numbers any better than you can
    research them. If, by L_2 norm, you mean rms error, there is skill if the forecast has less error
    against the verifying analysis than a randomly chosen analysis does.
    The 500 mb height is chosen because it is a smooth field representing positions of air
    masses and jetstreams reasonably well in a single map. The first single-layer models
    back in the 60’s only predicted this field, so it is traditional in some way.

    As you have mentioned, the precipitation is unreliable long before 7 days time. So basically, when using large scale NWP models, the parameterizations do not help in the first few days and are inaccurate shortly thereafter.
    What impact does this have on climate models that run even coarser grids and do not use data assimilation to update the information?

    It is not the parameterizations necessarily that are inaccurate, it is that smaller
    scales have a shorter range of deterministic prediction than longer scales, and
    precip is mostly tied to smaller scale features. Climate models do not deal with
    deterministic prediction, only mean behavior over long periods. Climate is a very
    different application that evaluates models using different metrics more
    associated with this mean behavior. Their parameterizations have to pass tests based
    on this type of evaluation before being used more extensively.

  220. Gerald Browning
    Posted Mar 18, 2007 at 7:23 PM | Permalink

    Jim Dudhia,

    First I will check the altitude of the lid of the NCAR CCSM and then respond to your other statements.

    Both models referenced in the ITCZ thread are full nonlinear models.

    Jerry

  221. Willis Eschenbach
    Posted Mar 18, 2007 at 9:26 PM | Permalink

    First, Jim D and Jerry B, thank you very much for a fascinating discussion of things computorial. I have learned much from your discussion, and I appreciate both the content and the tone of your posts.

    Next, Jim D., I have a question that I have asked many times, but have never gotten a satisfactory answer. You say:

    Climate models do not deal with deterministic prediction, only mean behavior over long periods.

    My question is, if the deterministic predictions are incorrect or unreliable, what assurance do we have that the “mean behavior” will be reliable?

    Now, I understand that if we throw a perfect die, we can predict that the number “4” will come up about one time out of six, despite the fact that we cannot predict any given throw. But this assumes that we understand the physics of the situation, that the die is perfect, that we know how many faces the die has … in other words, we not only have to have a perfect die, but a perfect understanding of the physical situation.

    But we are nowhere near that level of understanding of climate. In fact, many of the models omit a variety of known forcings (e.g. dust, sea spray, etc.), all of them omit some known forcings (e.g. cosmic rays, methane from plants, DMS from plankton), and all of them omit the unknown forcings (e.g. … well, you know).

    In addition, the models have huge known errors in various aspects of their “mean behavior”, errors which dwarf the signal we are interested in. The GISS model, for example, is off regarding DLR by ~ 15 W/m2, regarding cloud cover by ~ 15%, and has a host of other known erroneous results.

    Given all of that, what assurance do we have that the models can get the “mean behavior” right? Since they are tuned to replicate the past, being able to replicate the past proves nothing, so what facts show that they can forecast the “mean behavior” of the most complex system we have ever tried to model?

    All the best,

    w.

  222. Jim D
    Posted Mar 19, 2007 at 7:42 PM | Permalink

    Will,
    I can give my opinion, but take it that this is from a modeler, albeit a weather not climate
    one. Also I should keep it short, since we are off topic in this particular thread.
    Yes, climate models have biases, and yes, it is a very complex system, much more so
    than weather, where we don’t have to worry about ocean developments, sea-ice changes,
    vegetation feedbacks, or aerosols: a host of difficult sub-models that climate models have to include.
    Given that, they know the things of first-order importance well enough to give a credible
    representation of current climate. Even simple models show how CO2 leads to warming,
    and these models are used to add the details around that. It is a valid question whether
    the model response to a forcing that hasn’t happened before can be trusted. All the modelers can
    do is see if they understand how the model is responding, and whether it is scientifically
    sensible. On balance, they see no good reason to throw the model result away, particularly
    if they are not outliers among existing independent models. As I write this, it looks like
    a circular argument, but no one has yet come up with a credible model that won’t warm
    much as CO2 rises. This would require a large negative feedback that no one has thought of yet,
    and think of the incentive to come up with such a mechanism if it is correct. It would be on
    a par with the ozone hole science. The nature of feedbacks is that they can’t mask the original
    signal much, and I believe the warming evidence shows that nothing is kicking in to offset it.
    I’ll conclude by saying I trust that the models have all the basic mechanisms. If they can simulate
    arctic and tropical climates, which differ by 50 C, they can probably simulate a changing climate
    that changes by 2-4 C.

  223. Gerald Browning
    Posted Mar 19, 2007 at 9:34 PM | Permalink

    Jim D.,

    From your manuscript it is clear that Anthes made many runs at many different scales of motion using a hydrostatic model that is ill posed for the initial boundary value problem (now the initial value problem). How were the problems at the boundary and the propagation of the perturbations into the interior overcome?

    In your manuscript, there are dissipative terms on every equation. But on a brief reading I have yet to find the size and type of these terms. Can you tell me on which page of your manuscript these can be found?

    I would like to point out that the time split method used for many years in these models has been shown to be numerically unstable and that instability must be hidden by dissipation. In fact I believe that the WRF model no longer uses that method?

    You stated the reason that modelers use a sponge layer at the top of the atmosphere. You did not explain its impact on the continuum solution when there is information coming from above the top of the model or on the accuracy of the approximation on the continuum solution.

    The smoothing at the lateral sides of a limited area model has been employed by modelers for many years. Please run the test in the manuscript cited on the ITCZ thread to determine its impact on the continuum solution.

    I find it interesting that you stated that you wanted to be on this thread
    because you wanted to make sure things didn’t stray too far (or some such).
    Did you mean that you didn’t want too much information about all of the fudge factors in the models to be revealed?

    I do not agree with anything you said about climate models. As you go further in time in a model, things become harder, not easier. Clearly, from the simple contour plots I have shown, the enstrophy can cascade down from a balanced large-scale motion to tiny scales not resolvable by
    any model very quickly. At that point the only thing that can be done to stop the model from blowing up is to include an unphysically large dissipation that will have an impact on the spectrum.

    Jerry

  224. Gerald Browning
    Posted Mar 19, 2007 at 10:22 PM | Permalink

    gb (#210),

    I am amazed that you think that simple dimensional reasoning as used by Kolmogorov is the same as a mathematical proof. Only when certain mathematical assumptions are made does the Kolmogorov theory hold.

    Did it ever dawn on you that there is a reason I asked that Jim D. provide the details of the proof, not just cite a reference?

    Jerry

  225. bender
    Posted Mar 19, 2007 at 10:56 PM | Permalink

    Echoing fFreddy’s #211 support for this ongoing discussion. Fascinating.

  226. MarkW
    Posted Mar 20, 2007 at 5:31 AM | Permalink

    #223 states “than weather, where we don’t have to worry about ocean developments, sea-ice changes,
    vegetation feedbacks, aerosols, ”

    The problem is, from what I can tell, few if any of the climate models worry about any of these things either.

    A number of strong negative feedbacks have been found. The so-called iris effect for one. That’s where warm SSTs increase the
    efficiency of tropical storms, so that the air being transported to the upper atmosphere gets drier.

  227. Tom Vonk
    Posted Mar 20, 2007 at 8:05 AM | Permalink

    I do not agree with anything you said about climate models. As you go further in time in a model, things become harder, not easier. Clearly, from the simple contour plots I have shown, the enstrophy can cascade down from a balanced large-scale motion to tiny scales not resolvable by
    any model very quickly. At that point the only thing that can be done to stop the model from blowing up is to include an unphysically large dissipation that will have an impact on the spectrum.

    I fully agree with the above, and it applies also to the Kolmogorov argument.
    Staying “only” with the Navier-Stokes equations, it is well known that with the introduction of temporal means of the velocities the equations don’t become easier; they become more complicated instead.
    The equations cannot be solved for the temporal means any better than for the instantaneous values.
    It follows that the temporal mean of U(x,t) is as fundamentally chaotic as U(x,t) itself.
    After all, it would be very astonishing if the chaos (i.e. non-predictability and exponential divergence) magically disappeared only because somebody has chosen to make a mathematical change of variables in the relevant equations.
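
    As a short worked illustration of that paragraph (standard textbook algebra, nothing new): write u = U + u' with U the temporal mean and u' the fluctuation, and average the advective term. Using mean(u') = 0 and assuming the averaging commutes with the derivative,

        mean( u du/dx ) = U dU/dx + mean( u' du'/dx ) = U dU/dx + (1/2) d mean(u'u') / dx .

    The averaged equation therefore contains the new unknown mean(u'u') (a Reynolds stress) with no equation of its own; writing an equation for it brings in triple correlations, and so on. Averaging does not remove the nonlinearity, it only relabels it, which is the closure problem referred to above.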

    It is exactly for that reason that no climate model, afaik, is able to evaluate how legitimate it is to make predictions.
    I don’t know by what process the IPCC modelers award themselves their “virtually certains” and “extremely unlikelies”, but I suspect it is very similar to what I have seen in the evaluations of the ENSO model.
    I mean, one can’t stop laughing when seeing phrases like: “The long term simulation with uncoupled atmospheric noise is considered to be the control run (‘true’ simulation or ‘observation’) and is used as initial conditions for the prediction runs and to verify the predictions.”
    I am not joking, that’s what they really do: they analyse the model’s skill at prediction by comparing it … to itself (Kirtman and Schopf, 1998).
    You can’t get more circular than that!
    Do these people still realise the difference between reality and a numerical code?
    Before beginning to analyse irrelevancies like dispersion relations (which are already approximations), it is more than necessary to come back to fundamentals about predictability, as Jerry does.

  228. Paul Penrose
    Posted Mar 20, 2007 at 8:11 AM | Permalink

    Jim D.,
    Your support of the climate models is based, as far as I can tell, on faith. We all know that weather models, if not constantly updated with new physical data, diverge pretty quickly from reality. I’ve seen nothing presented to convince me that climate models don’t behave the same way. I’d put as much stock in the prediction of a climate model running for 100 years as in that of a weather model running for 100 days. Think about it this way: none of the climate modelers have published confidence intervals for the outputs of their models; I don’t even know if it’s possible to compute them given the way the models are written. But without this information how can one determine whether the reported warming is even significant? Without knowing what the error floor is you can’t know if the results have any meaning or not. Just because all the models are reporting similar results does not help. I’m not saying the models are not useful. They may indeed be useful to the researchers trying to understand the physical processes driving the climate, but as far as I’m concerned they can’t make any useful prediction or projection or whatever of future climate.

  229. Tom Vonk
    Posted Mar 20, 2007 at 9:52 AM | Permalink

    Think about it this way: none of the climate modelers have published confidence intervals for the outputs of their models; I don’t even know if it’s possible to compute them given the way the models are written.

    Regardless of how the models are written, that is impossible.
    By definition a confidence interval means that you can catch the real solution between two approximate solutions with a given confidence (>95%).
    Now if you take as your variable “the spatial temperature average”, which is what the models give, you don’t even know whether there is a unique solution to the (incomplete) set of equations you are trying to simulate numerically, let alone what that temperature will really be.
    Therefore there is no way you can tell what the confidence interval of your simulation is.
    The only thing you can do is to run N models with M different sets of assumptions and analyse the sensitivity OF THE MODELS (not reality) to changes in assumptions.
    Then you are on known ground and get all the usual results about ranges and standard deviations displayed by the models.
    Needless to say, none of this guarantees anything whatsoever about the behaviour of the real solution, which is still as unknown as it was before.
    Over shorter periods of time you are “saved” by the inertia of the system, which is why, statistically, the best weather prediction for tomorrow is “it will be similar to what was observed today”.
    But as soon as the time horizon gets longer you begin to be wrong, and you have no idea by how much.
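
    A toy version of that procedure (Python; the one-line "model" and the parameter spreads are entirely made up) shows what such an exercise does and does not deliver:

        import numpy as np

        rng = np.random.default_rng(1)
        n_runs = 1000
        lam = rng.normal(1.0, 0.3, n_runs)       # assumed spread of a "feedback parameter"
        forcing = rng.normal(3.7, 0.4, n_runs)   # assumed spread of a "forcing"
        response = lam * forcing                 # the toy model

        lo, hi = np.percentile(response, [2.5, 97.5])
        print(f"spread of the MODEL responses (2.5-97.5%): {lo:.2f} to {hi:.2f}")

    The bracket printed at the end characterises the sensitivity of the model to its assumptions; by itself it says nothing about where the real system falls, which is the distinction being made above.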

  230. Paul Linsay
    Posted Mar 20, 2007 at 11:43 AM | Permalink

    From a nonexpert: a question for the assembled experts here. One of the claimed successes of the climate models is the ability to “hindcast” the climate, in some cases as far back as 100 years. Yet they claim that forecasts with a short time horizon are not possible. Why can’t they simply use the hindcasts as a starting point and give us forecasts of the global temperature map for the next five to ten years? I want a map, not the mean global temperature, because the mean can hide a lot of flaws.

    Thanks–

  231. Tom Vonk
    Posted Mar 20, 2007 at 12:16 PM | Permalink

    I am certainly not a climate model expert.
    But you could very seriously question the ability of the models to REALLY hindcast the climate.
    One thing is sure: if you could initialise the model for, let us say, 1850 (granted, you do not have the data, but let’s suppose you do) and then make it run to 2000, you would not get a map for every year with the right temperatures, the right winds, the right cloudiness distribution and the right precipitation.
    What you would get would be an artefact that looks like A climate yet is not THE climate.
    Neither completely stupid, nor completely right.
    However, if what you saw was blatantly wrong, you could always play with the parametrisations to get a better fit.

    That’s what R. Lindzen called “Crude data fitting based on a hopelessly naive assumption that somebody understands the natural variability.”

  232. Bob Koss
    Posted Mar 20, 2007 at 3:59 PM | Permalink

    For those that might be interested.
    There are quite a few model budgets for the GISS Model E located here.

    Seems like they’re juggling quite a large number of variables.

    A couple of things I noticed make me question the diligence employed to ensure the greatest possible accuracy of the output. Looking at the number of days simulated for the runs, they don’t seem to be concerned with great accuracy. A simple function would return the correct number of days for the simulation, but instead they use a standard year of 365 days and therefore lose a day every 4 years. That is a minor error on a short run, but it could amount to a whole season on a 400-year run. Makes me wonder how many other minor errors are embedded in the model.
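
    A quick back-of-the-envelope check of that drift (a sketch only; the model’s actual calendar handling may differ):

        # A fixed 365-day year ignores leap days, so the model calendar drifts
        # relative to the real (Gregorian) calendar by about 0.2425 days per year.
        years = 400
        drift_days = years * 0.2425   # 97 leap days per 400 Gregorian years
        print(drift_days)             # ~97 days, i.e. roughly a full season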

    Also found this note here that seems to show them mixing output from a subsequent simulation into output from an incomplete previous simulation. Doesn’t sound like a good procedure to me.

    The original runs E3OCNf8a-dM20A were renamed as E3OCNf8a-dM20A_b .
    The standard deviations of the old run were incomplete – instead of
    recomputing them, the st.dev. of the corrected runs were substituted.
    Also, the runoff of was taken from the new set rather than recomputing
    them.

    Hope the two links come out OK. For some reason I’m not getting a preview of my comment.

  233. Jim D
    Posted Mar 20, 2007 at 8:06 PM | Permalink

    Jerry,
    Answering the questions in order.

    From your manuscript it is clear that Anthes made many runs at many different scales of motion using a hydrostatic model that is ill posed for the initial boundary value problem (now the initial value problem). How were the problems at the boundary and the propagation of the perturbations into the interior overcome?

    In your manuscript, there are dissipative terms on every equation. But on a brief reading I have yet to find the size and type of these terms. Can you tell me on which page of your manuscript these can be found?

    I would like to point out that the time split method used for many years in these models has been shown to be numerically unstable and that instability must be hidden by dissipation. In fact I believe that the WRF model no longer uses that method?

    You stated the reason that modelers use a sponge layer at the top of the atmosphere. You did not explain its impact on the continuum solution when there is information coming from above the top of the model or on the accuracy of the approximation on the continuum solution.

    The smoothing at the lateral sides of a limited area model have been employed by modelers for many years. Please run the test in the manuscript cited on the ITCZ thread to determine the impact on the continuum solution.

    I find it interesting that you stated that you wanted to be on this thread
    because you wanted to make sure things didn’t stray too far (or some such).
    Did you mean that you didn’t want too much information about all of the fudge factors in the models to be revealed?

    I do not agree with anything you said about climate models. As you go further in time in a model, things become harder, not easier. Clearly from the simple contour plots I have shown, the enstrophy can cascade down from a balanced large-scale motion to tiny scales not resolvable by
    any model very quickly. At that point the only thing that can be done to stop the model from blowing up is to include an unphysically large dissipation that will have an impact on the spectrum.

    First of all, there is nothing to hide about dissipation terms in models. It is known
    how much these terms affect the solution, and the model predicts the weather verifiably anyway.

    Boundaries are handled using the pragmatic approach of relaxing them towards analyses.
    No math is needed to prove this works, just check the results.

    The dissipation in MM5 is documented with other technical details in the Grell et al. NCAR Tech Note (1994)
    with a title something like “Description of the Penn State/NCAR Mesoscale Model (MM5)”.
    The paper only describes the difference between MM4 and MM5, so this was not in it.

    Time split methods can be unstable when used with the wrong spatial scheme,
    and the leapfrog scheme is also unstable without a time filter. Part of
    the dissipation keeps numerical instabilities from growing, while part
    represents Reynolds stresses, and in practice these are almost equal. There is also
    a method for selectively damping sound waves to prevent their instability in the
    split method. WRF also has such a sound-wave damper, but needs no additional
    dissipation to control numerical instability when odd-order advection schemes are
    used, because those have noise control built in.

    Tests with the sponge layer show that it helps reduce reflection. This is
    seen visually with comparisons of linear mountain waves against known
    non-reflecting analytic solutions, but can also be quantified.

    I am not sure I understand the test yet. Is it driven with complex real data,
    because that is what we use?

    I stated I wanted to monitor this thread to make sure the contributors
    don’t go off track because I have seen threads on sites where it looks like
    someone can say things that are known to be false by people in the field,
    but go unanswered and therefore mislead. I’m sure you’ve seen that happen too.

    I have said that large dissipation is only needed if you are doing something wrong
    like using hydrostatic dynamics at fine scales. Nonhydrostatic models can be
    run with little dissipation and give valid flows on all resolved scales. However
    there are physics processes such as convection and boundary layer schemes to
    handle some complex things occurring at unresolved scales, and that is a physics
    issue, not dynamics.

  234. Jim D
    Posted Mar 20, 2007 at 8:29 PM | Permalink

    Your support of the climate models is based, as far as I can tell, on faith. We all know that weather models, if not constantly updated with new physical data, diverge pretty quickly from reality. I’ve seen nothing presented to convince me that climate models don’t behave the same way. I’d put as much stock in the prediction of a climate model running for 100 years as in that of a weather model running for 100 days. Think about it this way: none of the climate modelers have published confidence intervals for the outputs of their models; I don’t even know if it’s possible to compute them given the way the models are written. But without this information how can one determine whether the reported warming is even significant? Without knowing what the error floor is you can’t know if the results have any meaning or not. Just because all the models are reporting similar results does not help. I’m not saying the models are not useful. They may indeed be useful to the researchers trying to understand the physical processes driving the climate, but as far as I’m concerned they can’t make any useful prediction or projection or whatever of future climate.

    Paul,
    Faith and evidence, not just faith.
    Yes, climate models don’t track the weather, system for system, any longer than
    weather models. As far as the atmosphere goes, there is no major difference in the
    processes simulated, or the equations used. The difference is that weather models
    are attempting to stay on track for as long as possible, so all the effort goes into
    improving initial and boundary conditions, as well as the model physics. Climate
    models may run for 100 years, but they are not initialized with real weather maps,
    so their forecasts can’t even be associated with any real calendar date. They will
    have realistic looking weather, however, and, like I said, only the averages matter to
    climate modelers, such as the mean daily maximum temperature over the winter season,
    or the mean spring rainfall distribution. As we know, these vary from year to year, and
    climate models are considered successful if they capture the range of variability.
    Confidence intervals are given in the IPCC reports, and can be based on the way
    models compare with the last 100 years, or how much models differ from each other.

  235. Gerald Browning
    Posted Mar 20, 2007 at 10:22 PM | Permalink

    Jim D. (#234):

    Let us address one issue at a time.

    Please state the type and size of dissipation you used in the experiments in your manuscript for every time dependent equation so the general reader will know how much Anthes and you used in the models for these equations.

    Once those values are clearly stated, the discussion can proceed to the next step.

    Jerry

  236. Willis Eschenbach
    Posted Mar 21, 2007 at 12:02 AM | Permalink

    I had asked Jim D. “My question is, if the deterministic predictions are incorrect or unreliable, what assurance do we have that the ‘mean behavior’ will be reliable?”

    Jim D., thanks for your reply. You say:

    Will[is],
    I can give my opinion, but take it that this is from a modeler, albeit a weather not climate one. Also I should keep it short, since we are off topic in this particular thread. Yes, climate models have biases, and yes it is a very complex system, much more so than weather, where we don’t have to worry about ocean developments, sea-ice changes, vegetation feedbacks, aerosols, which is a host of difficult sub-models that climate models have.

    Given that, they know the things of first-order importance well enough to give a credible representation of current climate. Even simple models show how CO2 leads to warming, and these models are used to add the details around that. It is a valid question whether the model response to a forcing that hasn’t happened before can be trusted. All the modelers can do is see if they understand how the model is responding, and whether it is scientifically sensible. On balance, they see no good reason to throw the model result away, particularly if they are not outliers among existing independent models.

    As I write this, it looks like a circular argument, but no one has yet come up with a credible model that won’t warm much as CO2 rises. This would require a large negative feedback that no one has thought of yet, and think of the incentive to come up with such a mechanism if it is correct. It would be on a par with the ozone hole science. The nature of feedbacks is that they can’t mask the original signal much, and I believe the warming evidence shows that nothing is kicking in to offset it. I’ll conclude by saying I trust that the models have all the basic mechanisms. If they can simulate arctic and tropical climates, which differ by 50 C they can probably simulate a changing climate that changes by 2-4 C.

    You mention that “The nature of feedbacks is that they can’t mask the original signal much”. In that regard, perhaps you could comment on the recent NASA study which showed that feedback, in the form of increasing clouds, almost completely masked the change in ice and snow albedo from recent polar warming.

    This is the problem I have with modelers. Your models don’t show significant feedback masking original signals, and you quite blithely extrapolate this to concluding that the planet doesn’t show feedback masking original signals … and then you state it as though it were an established fact, when it is nothing of the sort. I have been taking heat for years for saying that as the planet warms, there will be more clouds, and that will provide a strong negative feedback to any temperature rise. My own feeling is that on a planet with lots of water, it will warm until enough clouds form to where they cut down the sun enough to create an equilibrium … and of course I could be wrong, but claiming as you do that feedback can’t mask an original signal reflects what I can only call a “modeler’s view” of the world.

    Do you see why I find your protestations that yes, everything is just fine in the model universe less than reassuring? As another example, you say “Even simple models show how CO2 leads to warming” … so what? A simple model would show that if I put my feet into a bucket of hot water, my body temperature would go up … should we be convinced by that simple model as well?

    Nor is the existence of agreement between models reassuring. I’d be surprised if they didn’t agree, because as you point out, they just “add the details around” the simple model … but since we have no reason to trust the simple model, there is no reason to trust the dozens of models that generally agree with each other by adding details to the simple model. That’s what I was asking: what assurance do we have that the models are reliable? You can’t demonstrate model reliability by agreement between models; as you point out, that’s circular.

    Next, as far as I know none of the models include the following recently discovered forcings:

    1) Cosmic rays

    2) Coronal Mass Ejections

    3) Methane from plants

    4) DMS from plankton

    Given that there are no models which contain those mechanisms, and that we don’t know if any of those are of “first-order importance”, I find it curious that you say “I trust that the models have all the basic mechanisms,” and “… they know the things of first order importance well enough …”

    I don’t understand your claim that there is some incentive to discover a feedback mechanism which would show that global warming is not going to happen, since if a modeler does that, the funding for climate science and model development would dry up completely … what modeler wants that to happen?

    I also don’t understand the logic behind saying that if a model can simulate arctic and tropical climate, which are 50° apart, they can simulate a climate change of 2-4°C. That makes no sense at all. I can build a simple model which will say that winter will be different from summer by 15°C but which is totally incapable of predicting climate at all. The two are not connected logically or physically.

    Finally, I must confess that I am not surprised that modelers see no good reason to throw their beloved model results away … heck, I don’t throw my model results away … but I don’t trust them all that much either. The modeler’s claim is that their results are good enough to bet billions of dollars on, despite the fact that they can’t predict next month’s weather. That’s the reason for the question I asked, and for which I still have no answer.

    Thank you again, however, for your response.

    w.

  237. MarkW
    Posted Mar 21, 2007 at 5:07 AM | Permalink

    #235,

    you state that confidence intervals are partially based on “how much models differ from each other”.

    In other words, if two guesses are similar, it is assumed that both guesses are accurate?

    I wasn’t aware that was how science was done.

  238. Paul Linsay
    Posted Mar 21, 2007 at 7:42 AM | Permalink

    #235, Jim D.

    Climate models may run for 100 years, but they are not initialized with real weather maps, so their forecasts can’t even be associated with any real calendar date.

    So what is their connection to the physical world and how can they claim to make long range predictions? Increasing fluctuations or a growing trend is not enough, all they could be observing is the growth of errors in the models.

  239. Gerald Browning
    Posted Mar 21, 2007 at 1:10 PM | Permalink

    Paul Linsay (#239),

    This is a very valid point. I will discuss a well known example.

    When the possibility of satellite observations was first introduced, a number of so called “twin experiments” were run with NWP models (e.g. see Roger Daley’s book for a discussion of these experiments). A control “forecast” was run with a particular NWP model. Then the initial data was perturbed and the new information that was to become available from a satellite was introduced from the first run into this second run of the model to determine if the “forecast” with the satellite data insertion over a period of time would improve (and it always did). However, when the satellite data became available, the results did not match those from the twin experiments. The reason (see Periodic Updating reference by Browning and Kreiss) was that the NWP models had excessive unphysically large dissipation so the error equation behaved as the heat equation in the above simple mathematical example, i.e. the models were not a true reflection of the real atmosphere and led to completely misleading results.

    This example alone should give one pause about the use of information based on models with unphysically large dissipation, but it appears that this obvious lesson still has not been understood by a number of modelers.

    Jerry
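
    A rough sketch of the mechanism being described (not the Browning and Kreiss analysis itself): if the model error e is dominated by an unphysically large viscosity \nu, it obeys approximately a heat equation, and every Fourier mode of the error decays,
    \[
    e_t = \nu\,\Delta e \quad\Longrightarrow\quad \hat e(k,t) = \hat e(k,0)\,e^{-\nu |k|^{2} t},
    \]
    so inserted data appear to “correct” the forecast whether or not the underlying dynamics are right.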

  240. Ron Cram
    Posted Mar 21, 2007 at 5:59 PM | Permalink

    re: 223

    Jim,

    You wrote to Willis:

    As I write this, it looks like a circular argument, but no one has yet come up with a credible model that won’t warm much as CO2 rises.

    Yes, that does look like a circular argument. I am not a climate modeler but feel certain that I could make adjustments to current models that would do exactly that. Richard Lindzen and Sallie Baliunas both have expressed their belief that increases in CO2 will have a diminishing impact in the future. Call it a result of the Law of Diminishing Returns. As I understand it, it is their position this will be true regardless of possible negative feedbacks which will reduce the impact of rising CO2 even further.

  241. Gerald Browning
    Posted Mar 21, 2007 at 8:35 PM | Permalink

    Community Atmospheric Model (CAM) Parameters

    There is a table under the CAM documentation that shows the T31 model
    uses a dissipation coefficient 20 times larger than the T85 model.
    I am still searching for the height of the upper boundary, but have seen some indications in the code that the number of levels is 26 in all of the models. I will continue to search for this info, but if anyone knows the answer, please let me know. You would think that this info would not be that hard to find, but it seems to not be easy.

    Jerry
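
    One rough consistency check on that factor of 20 (a sketch only; whether this is the scaling actually used in CAM is an assumption, not something taken from the documentation): if the \nabla^{4} coefficient were scaled with grid spacing as \nu_4 \propto \Delta x^{3}, the T31-to-T85 resolution ratio of about 85/31 \approx 2.7 would give
    \[
    \left(\tfrac{85}{31}\right)^{3} \approx 21,
    \]
    which is close to the factor quoted in the table.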

  242. Paul Penrose
    Posted Mar 21, 2007 at 9:19 PM | Permalink

    Jim D.,
    Statistically speaking, those are not valid confidence intervals at all. In fact, they don’t tell us anything about the significance of the results of the models. I think the modelers should strongly consider getting a statistician on their projects to explain some of these issues to them.

  243. Jim D
    Posted Mar 21, 2007 at 10:12 PM | Permalink

    #236
    Jerry,
    The dissipation term is a fourth-order diffusion with two
    parts to the coefficient. The constant part is 0.003 * dx^4 /dt
    in units of m^4/s. The other part is proportional to deformation,
    and is of comparable magnitude in practice.
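
    For concreteness, plugging in the grid spacing and time step quoted for MM5 a little further down the thread (dx = 10 km, dt = 30 s), the constant part of that coefficient is roughly
    \[
    \nu_4 \;=\; 0.003\,\frac{\Delta x^{4}}{\Delta t}
          \;=\; 0.003 \times \frac{(10^{4}\,\mathrm{m})^{4}}{30\,\mathrm{s}}
          \;\approx\; 1\times 10^{12}\ \mathrm{m^{4}\,s^{-1}} ,
    \]
    a back-of-the-envelope figure only; since the deformation-dependent part is said to be of comparable magnitude, the total would be roughly double that.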

  244. Jim D
    Posted Mar 21, 2007 at 11:03 PM | Permalink

    OK, many points to address above.
    Willis (sorry for truncating your name before). Feedbacks by their nature have to have something
    to feed back on. If they completely mask the signal they remove their reason for existence.
    I’m not an expert, and this is just an opinion. I believe there is a cloud feedback that is
    negative too, but if it is missing in models, why is the cloud cover in the warmest part of the world,
    the tropics, not higher than can be simulated by climate models? So if you imagine the world
    just tending towards being more tropical, there is no reason to believe we don’t have the
    important feedbacks already.
    Yes, simple models show why CO2 leads to warming, and the reassuring part about complex models
    is that they agree with both simple models and data when run over the last 100 years.
    The other mechanisms you mention don’t seem to have timescales compatible with the observed
    warming, nor is there any evidence these things have changed recently, and people are watching
    them for sure.
    You seem to be in the Crichton/Inhofe school of a mass conspiracy driven by government funding. Why
    would this administration want to fund global warming proofs? They have done everything to stifle
    the evidence. We are lucky science has been funded at all in this environment.

    MarkW, confidence intervals can be dependent on the confidence in model parameters,
    and also how well the results match previous climate. The difference in models is only
    part of it.

    Paul L, weather is just turbulence to climate scientists. The climate scientists look at the
    mean behavior of the atmosphere subject to forcings, including obvious ones such as solar
    radiation, and also various CO2 increase scenarios. To predict the climate you don’t need to predict
    the weather on a daily basis. It’s like if you put a lid on boiling water in a saucepan
    on a stove, you know it will warm faster, and you don’t need to know what the bubbles are doing.
    Climate modelers look at the mean temperature, and weather modelers look at the bubbles.

    Ron, I would welcome someone finding a way to get Lindzen’s ideas into a climate model,
    if it can be done with credible assumptions, and the model can simulate current and past
    climates. I have nothing against ideas being tried out. I think such a model,
    if successful, would be good for the debate, but no one has done it yet as far as I know.

    Paul P, even with a statistician, how do you evaluate how confident you are in parameters,
    when there are “unknown unknowns” in these complex systems? We don’t have enough information
    about the degree of uncertainty to make a precise error bar. You can run the model
    thousands of times, changing each conceivable parameter, and the Hadley Center (UK) did
    that as a distributed public PC project (like SETI), but that is a massive undertaking
    for the biggest climate models.

  245. Willis Eschenbach
    Posted Mar 22, 2007 at 2:16 AM | Permalink

    Jim D., thank you kindly for your reply. You say:

    Willis (sorry for truncating your name before). Feedbacks by their nature have to have something
    to feed back on. If they completely mask the signal they remove their reason for existence.
    I’m not an expert, and this is just an opinion. I believe there is a cloud feedback that is
    negative too, but if it is missing in models, why is the cloud cover in the warmest part of the world,
    the tropics, not higher than can be simulated by climate models? So if you imagine the world
    just tending towards being more tropical, there is no reason to believe we don’t have the
    important feedbacks already.

    According to Gavin Schmidt et al., the albedo in the GISS models is not calculated from physical first principles, but is tuned for:

    The net albedo and TOA radiation balance are to some extent tuned for, and so it should be no surprise that they are similar across models and to observations.

    In addition, the cloud coverage in the GISS model is way different from reality, with the three GISS models reporting 57-59% global cloud cover and the ISCCP data showing 69%. According to the same report:

    Total cloud cover is definitely too low.

    So I’m not clear why you are claiming that the cloud cover “is … not higher than can be simulated by climate models”, since it is certainly higher than can be simulated by the GISS model, which is one of the best. And despite it being a world-class model, the report says:

    The cloud radiative forcing is again very similar across the models and, in the global mean, similar to the ERBE analysis (Fig. 10). Looking more closely, the models have SW forcing in the Tropics that is too negative, but not negative enough in the midlatitudes. For the LW forcing, model values in the Tropics are too low (by up to 20 W m-2).

    So we have a 20 W/m2 tropical error in the model (among many other errors of a similar size), but we should trust it to predict the effect of a 3.7 W/m2 increase from a CO2 doubling because it agrees with a simple model? …

    In addition, since the albedo is not calculated but “tuned for”, the GISS model cannot show cloud feedback because of its design … I certainly hope that you don’t think that the lack of cloud feedback in the models means that there is no cloud feedback in the real world.

    Finally, I think you misunderstand feedback. You say that “if [feedbacks] completely mask the signal they remove their reason for existence”. But consider the first use of feedback to control machinery, the flyball governor popularized by James Watt. It uses centrifugal feedback to limit a machine to a certain speed, thereby totally masking the signal … but it still continues to work.

    Yes, simple models show why CO2 leads to warming, and the reassuring part about complex models
    is that they agree with both simple models and data when run over the last 100 years.

    Why is it reassuring that climate models agree with data when run over the last 100 years? Climate models are tuned and re-tuned until they agree with historical data, it would be very surprising if they did not agree with historical data … but that means nothing, because as they say in the stock broker’s ads, “Past success is no guarantee of future performance” …

    Nor is it necessarily a good sign, when modeling a complex system, if a complex model agrees with a simple model … the issue is not whether the simple model and the complex model give the same result, it is whether the simple model (and thus the complex model) gives the correct result.

    The other mechanisms you mention don’t seem to have timescales compatible with the observed warming, nor is there any evidence these things have changed recently, and people are watching them for sure.

    Say what? The DMS/plankton connection has only recently been discovered, how can people have been watching that connection when we didn’t even know about it until last month? And Svensmark has recently provided more evidence that cosmic rays are a significant factor in the recent warming.

    You seem to be in the Crichton/Inhofe school of a mass conspiracy driven by government funding. Why
    would this administration want to fund global warming proofs? They have done everything to stifle
    the evidence. We are lucky science has been funded at all in this environment.

    I don’t follow this at all. If you can find one word I’ve written which suggests a “conspiracy”, I would be extremely surprised. And I think funding for climate change research is at an all-time high, so much so that people are claiming tenuous links between their research and climate change just to tap into some of that funding. Federal support for R&D in the environmental sciences has tripled in the last 20 years, with over 2,000 separate grants related to climate change given in 2002, more than $2 billion in climate change grants in 2004, and $5.5 billion spent by the US Government in all areas of climate change in 2006. In addition, much as I dislike Bush, the “Climate Change Research Initiative” was instituted under his Administration, not Clinton’s.

    Also, you seem to be confusing the amount of funding with the direction of funding. While the amount of funding is controlled by the Administration, the direction of the funding is controlled by the bureaucrats who administer the grants … and as the example of James Hansen shows, many of them are staunch global warming supporters.

    In short, I fear that my question about why we should trust climate models to give us accurate long-term forecasts when they cannot give us accurate short-term forecasts remains unanswered. To me, it is intimately connected to the problems which Jerry Browning is highlighting in this thread, which is that without artificial controls, climate models rapidly go way off the rails. These artificial controls, it seems to me, virtually guarantee that the long-term forecasts from these models will lack skill.

    In closing, let me say that I appreciate your willingness to come here and discuss these matters, and I hope you will continue to do so.

    All the best,

    w.

  246. Tom Vonk
    Posted Mar 22, 2007 at 4:18 AM | Permalink

    Paul L, weather is just turbulence to climate scientists. The climate scientists look at the mean behavior of the atmosphere subject to forcings, including obvious ones such as solar radiation, and also various CO2 increase scenarios. To predict the climate you don’t need to predict the weather on a daily basis. It’s like if you put a lid on boiling water in a saucepan on a stove, you know it will warm faster, and you don’t need to know what the bubbles are doing.
    Climate modelers look at the mean temperature, and weather modelers look at the bubbles.

    This analogy is so wrong and misleading that it really has no place in this thread.
    First, if one puts a lid on a pot, it will not warm faster; the heating rate depends only on the stove burner.
    What will happen is that the boiling temperature will increase with the increased pressure.
    Second, as a phase change happens at constant temperature depending only on pressure, not much modelling is necessary to predict a mean temperature that happens to be constant when the pressure is constant.
    So obviously turbulence has not much to do with the phase change temperature; only basic thermodynamics has.

    The right analogy would be as follows:

    If you put a lid on a pot, you modify all the principal parameters governing the dynamics of the system.
    That’s why, if the question asked is one concerning the DYNAMICS, you have to look at all relevant dynamical processes and most notably the turbulence happening in the pot.
    Indeed it is irrelevant to know that the temperature in the pot would be constant (as long as there is some liquid in it) if the question is how long it will take before there is no liquid in the pot.
    If you forget that there are bubbles in the pot, you get the heat transfers wrong, you get the velocity fields wrong, you get the whole dynamics wrong, and the pot will blow up in your face.

    Climate modellers are people who try to simulate the dynamics of the boiling pot by ignoring the
    bubbles and claiming success because the model says that the temperature in the pot is constant.
    The only difference being that their “pot” is supposed to boil for centuries and half of the equations governing the dynamics are either unknown or neglected.

  247. MarkW
    Posted Mar 22, 2007 at 4:40 AM | Permalink

    #223

    The problem is the weasel word “credible”. When you define credible as showing high sensitivity to CO2, it’s hardly surprising
    that all of the credible models show high sensitivity to CO2.

  248. MarkW
    Posted Mar 22, 2007 at 4:42 AM | Permalink

    JimD,

    That’s just it, the models don’t come close to matching previous climates.

  249. MarkW
    Posted Mar 22, 2007 at 4:46 AM | Permalink

    While it is true that a feedback cannot mask 100% of the original signal, it’s also true that it can, and sometimes does, mask more
    than 99% of it.

    In electronics it’s not hard to design feedback systems that mask all but a few parts per million of the original signal.
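
    The standard negative-feedback relation makes the electronics point concrete (a generic sketch, not tied to any climate quantity): with forward gain A and feedback fraction \beta, the residual (error) signal at the summing junction is
    \[
    e = \frac{x}{1 + A\beta},
    \]
    so a loop gain A\beta of 10^{6} leaves only about one part per million of the input x, which is the “all but a few parts per million” case described above.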

  250. MarkW
    Posted Mar 22, 2007 at 4:50 AM | Permalink

    Kind of reminds me of Hansen, giving 1,400 interviews on the job while complaining about how the Bush administration is suppressing his
    views.

    If there’s any conspiracy here, it’s the AGW alarmists who are conspiring to label anyone who disagrees with them as either not
    a scientist or bought and paid for by the oil companies.
    (You don’t need active coordination to form this kind of conspiracy, just followers who are willing to follow your lead.)

  251. Gerald Browning
    Posted Mar 22, 2007 at 12:50 PM | Permalink

    Jim Dudhia (#244),

    Is the coefficient the same for every time dependent equation at every vertical level?

    I assume that the hyperviscosity is written with a dx^4 in the denominator. Is that correct?

    If the hyperviscosity is written as indicated above, isn’t it the case that the dx^4 terms cancel and the only crucial terms are the constant in front and dt?

    What is the value of dt in your manuscript?

    If the two dissipative terms are the same size, why add the second term, i.e. what role does it play?

    What is the coefficient and formula for the second term?

    How are the dissipation terms handled near a boundary with a model that provides the boundary data?

    Is there dissipation associated with the second order finite difference scheme without any added explicit terms of the above form?

    Jerry

  252. John Baltutis
    Posted Mar 22, 2007 at 6:45 PM | Permalink

    Sorry for the interruption, but just to add to the mix, the following is from The Future of Everything [originally entitled Apollo’s Arrow: The Science of Prediction and the Future of Everything] by David Orrell, pg. 324), WRT climate models:

    Einstein’s theory of relativity was accepted not because a committee agreed that it was a very sensible model, but because its predictions, most of which were highly counterintuitive, could be experimentally verified. Modern GCMs [global climate model] have no such objective claim to validity, because they cannot predict the weather over any relevant time scale. Many of their parameters are invented and adjusted to approximate past climate patterns. Even if this is done using mathematical procedures, the process is no less subjective because the goals and assumptions are those of the model builders. Their projections into the future, especially when combined with the output of economic models, are therefore a kind of fiction. The problem with the models is not that they are subjective or objective; there is nothing wrong with a good story, or an informed and honestly argued opinion. It is that they (GCMs) are couched in the language of mathematics and probabilities; subjectivity masquerading as objectivity. Like the Wizard of Oz, they are a bit of a sham.

    All in all, an interesting read. Much to ponder.

    Now, back to the main presentation by Gerald Browning and Jim D, with support from others.

  253. Jim D
    Posted Mar 22, 2007 at 7:32 PM | Permalink

    Is the coefficient the same for every time dependent equation at every vertical level?

    I assume that the hyperviscosity is written with a dx^4 in the denominator. Is that correct?

    If the hyperviscosity is written as indicated above, isn’t it the case that the dx^4 terms cancel and the only crucial terms are the constant in front and dt?

    What is the value of dt in your manuscript?

    If the two dissipative terms are the same size, why add the second term, i.e. what role does it play?

    What is the coefficient and formula for the second term?

    How are the dissipation terms handled near a boundary with a model that provides the boundary data?

    Is there dissipation associated with the second order finite difference scheme without any added explicit terms of the above form?

    Jerry,
    Yes, it is a constant for all variables everywhere.
    dx^4 is in the denominator of the (1,-4,6,-4,1) stencil in each horizontal direction.
    The number 0.003 is non-dimensional with the term written as I have it.
    With dx = 10 km, we would typically use dt = 30 seconds.
    The first term is a background term needed for numerical stability, since low-order
    schemes typically accumulate noise at small scales without such terms. The second
    term is more of a Reynolds stress (i.e. physical sub-grid eddy) term. It goes
    like 0.08 * dx^4 * D,
    where D is a 2-D deformation term (like horizontal shear) in units of 1/seconds.
    Near the boundary, this is reduced to 2nd-order diffusion with a (1,-2,1) stencil that
    needs only one point on each side. At the boundary itself, no
    physics or diffusion is required because that point is specified from analyses.
    The boundaries have separate relaxation terms in the 4 points in from the boundary.
    The 2nd order space differencing used for advection is neutral (i.e. non-dissipative).
    Hope that helps.
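
    A minimal sketch, in Python, of the constant part of that operator (1-D only, toy data; the coefficient, stencil, and step values are the ones quoted above, everything else is invented for illustration):

        import numpy as np

        def hyperdiffusion_step(u, dx, dt, c=0.003):
            # One explicit step of du/dt = -nu4 * d4u/dx4 with nu4 = c * dx^4 / dt,
            # using the (1, -4, 6, -4, 1) stencil on interior points only.
            nu4 = c * dx**4 / dt
            d4 = (u[:-4] - 4*u[1:-3] + 6*u[2:-2] - 4*u[3:-1] + u[4:]) / dx**4
            u_new = u.copy()
            # dt and dx^4 cancel, so the damping per step is just c times the stencil sum
            u_new[2:-2] -= dt * nu4 * d4
            return u_new

        # Toy usage: the 2*dx wave is damped far more strongly per step than a long wave.
        dx, dt, n = 10.0e3, 30.0, 200
        x = np.arange(n) * dx
        u = np.cos(np.pi * x / dx) + np.cos(2 * np.pi * x / (25 * dx))
        u = hyperdiffusion_step(u, dx, dt)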

  254. Jim D
    Posted Mar 22, 2007 at 7:39 PM | Permalink

    Tom (#247),
    Thanks, I am not going to argue about the science of pot-boiling, but the
    point was, the weather is more like the bubbles that do the heat transfer, and the
    climate is more like the net result. You don’t need to know every bubble to
    predict the net effect, but if you are sitting at some location at some time
    in the pot, you would like to know when a bubble is going to go by. Weather
    modelers aim to predict each and every bubble, which is obviously a short-term
    proposition due to predictability limits (butterfly effect, Lorenz, etc.).
    Climate models can have bubbles, but not necessarily matching the real pot
    at any particular time. However, they change things, like putting a lid
    on, to see what happens. I still like this analogy.

  255. Paul Linsay
    Posted Mar 22, 2007 at 7:56 PM | Permalink

    #255, JimD. To continue your analogy with the boiling pot: all the climate models give are average global temperature, average global pressure,…? If so, why does anyone care? Or for that matter, why does it take so much compute power?

  256. Paul Penrose
    Posted Mar 22, 2007 at 8:01 PM | Permalink

    Jim D.,

    Paul P, even with a statistician, how do you evaluate how confident you are in parameters, when there are “unknown unknowns” in these complex systems? We don’t have enough information about the degree of uncertainty to make a precise error bar. You can run the model thousands of times, changing each conceivable parameter, and the Hadley Center (UK) did that as a distributed public PC project (like SETI), but that is a massive undertaking for the biggest climate models.

    You are correct, sir! That’s exactly the point I was making. Without a lot of work, which has not been done, we don’t know what the actual error bars are, and consequently we can’t know if the model predictions mean anything. That does not mean that the models are useless, it just means that we can’t use them to predict future temperature change (or any other parameter, like precipitation).

  257. Jim D
    Posted Mar 22, 2007 at 8:11 PM | Permalink

    Willis (#246),
    OK, I talk about climate models as a weather modeler, not an expert.
    Climate models are tuned to get correct radiative balances, and they have tunable
    parameters because they are at scales where sub-grid effects, like clouds,
    have complex structures that are not easily represented in terms of grid-scale
    variables (parameterized as it is called). Models rely heavily on physics
    parameterizations, which is why we are all still in the business of trying to improve them.
    If GISS has those biases, I am sure they are figuring out how to solve them.
    This is how science progresses. I don’t expect all models to have the same bias.
    If they did, that would be very interesting, and someone could write a good paper
    on solving it.

    Conspiracy theory: I based that on this, which you said:

    I don’t understand your claim that there is some incentive to discover a feedback mechanism which would show that global warming is not going to happen, since if a modeler does that, the funding for climate science and model development would dry up completely … what modeler wants that to happen?

    I disagree strongly. Funding is done based on proposed research. No one knows what
    results they will get at the proposal stage, so usually they get funded based on
    whether they are asking interesting questions. An interesting question would be
    something like how to avoid the albedo bias in GISS. If GISS solves this and it leads
    to global warming being canceled by cloud feedback, that would be worth showing.
    Such a paper would be acceptable if reasonable things are done to solve the problem.

    Feedback: If cloud feedback was as strongly negative as would be needed to cancel
    global warming, someone would be able to demonstrate that with warmer versus cooler
    years’ data in the historical record. I believe Lindzen tried, but his data analysis seems not
    to have been convincing.

    DMS from plankton: Is DMS a long-lived greenhouse gas, or, more likely, short-lived and
    confined to ocean boundary layers, where it won’t have much effect? Or perhaps this is
    a positive feedback mechanism if plankton increases with CO2 or warming? Questions to be asked.

  258. Gerald Browning
    Posted Mar 22, 2007 at 8:14 PM | Permalink

    Jim Dudhia (#254),

    Changing the dissipation operator near the boundary in the manner you describe leads to a discontinuity in the approximation to the continuum system. You have changed from a hyperviscosity-type dissipation to a regular viscosity-type dissipation term at one point inside the boundary.

    Also on both inflow and outflow boundaries, all variables should not be specified. This leads to an overspecified system. I would surmise that this is hidden by the extra smoothing near the boundary, but leads to a considerable reduction in the accuracy of the solution. It is a fairly trivial exercise to run the example in the first reference of the ITCZ thread to determine the impact on a mesoscale storm embedded in a balanced large scale solution. Until that is accomplished for MM4 and MM5, I will continue to have no confidence in your numerical methods.

    Have you run a hydrostatic model yet without any dissipation at the same resolution as in the experiment above. This should be a trivial exercise for you.

    What is the technical note reference for MM4 and is that model still available?

    Jerry

  259. Jim D
    Posted Mar 22, 2007 at 8:23 PM | Permalink

    #256 Paul L,

    To continue your analogy with the boiling pot: all the climate models give are average global temperature, average global pressure, …? If so, why does anyone care? Or for that matter, why does it take so much compute power?

    Climate models add everything thought to be important in the mix. This includes weather turbulence.
    They can give the time mean at any location, not just a global mean. Of course, this is just
    a projection based on what they put in the model, but models are the only way to add so many
    factors together.

  260. Jim D
    Posted Mar 22, 2007 at 8:32 PM | Permalink

    Paul P (#257)
    If something is too complex for error bars, you assume the error bars are
    large and the models are worthless, regardless of their successes. I just disagree.

  261. Gerald Browning
    Posted Mar 22, 2007 at 10:44 PM | Permalink

    Jim D (#257),

    There have been several references given as replies to my first discussion on this site under Numerical Climate Models. In those references cited by others, the manuscripts stated that the climate models do not reproduce the climate even for a year. If those references are valid, then the climate models cannot be called a success.

    And if, as indicated in the mathematics cited at the beginning of this thread and demonstrated in several of NCAR’s own models and in the above contour plots, the climate models are headed toward a precipice, then they cannot be considered a success.

    Also did you read the text about the disaster in twin experiments where the excessive dissipation led to completely erroneous results? I assume that cannot be considered a success?

    Yes, it is possible to add dissipation mechanism after dissipation mechanism to a model and publish many manuscripts (in some cases without even citing those mechanisms in the manuscript), and one can obtain something, but not necessarily anything close to an accurate approximation of the homogeneous (unforced) solution of the original continuum system or of the real atmosphere.

    Jerry

  262. MarkW
    Posted Mar 23, 2007 at 5:12 AM | Permalink

    #261,

    If you can’t calculate error bars, then you have no idea what you have calculated. Are you within 1%, 10%, 1000%? If you can’t
    calculate error bars, then you have no idea if you are close to the right answer, in the ballpark, or even on the same planet.

    If you can’t calculate error bars, then it would be overly complimentary to characterize your calculations as a guess.

  263. MarkW
    Posted Mar 23, 2007 at 5:15 AM | Permalink

    If all climate models can do is create global averages (and the ones that I have seen aren’t very good at doing even that much), then why are so many people making predictions regarding how AGW is going to affect regional and even sub-regional climates?

  264. Paul Penrose
    Posted Mar 23, 2007 at 8:21 AM | Permalink

    Jim D,
    Any time you analyze a stocastic process there is the possibility that the results are just random chance. The question is, what is the probability that the results are just random? A large part of statistics is in quantifying this probability, which is what confidance intervals are. Without them there is simply no way to know (or prove) that the results are likely to not be random. So I’m not assuming that the error bars are huge, I’m just saying that we don’t know they aren’t, and in fact we don’t even know what the probability is that they aren’t. This leaves you with no mathematical or statistical grounds to stand on when you say you believe the predictions of the climate models. All you have is your “gut feel”, or in other words, faith. Which I’m fine with, by the way, just don’t claim that these predictions are “scientific” or “prove” anything. Also claims that the climate model results support other work, like past temperature reconstructions, are dubious at best.

  265. Tom Vonk
    Posted Mar 23, 2007 at 11:55 AM | Permalink

    Any time you analyze a stochastic process there is the possibility that the results are just random chance. The question is, what is the probability that the results are just random? A large part of statistics is in quantifying this probability, which is what confidence intervals are for. Without them there is simply no way to know (or prove) that the results are likely to not be random. So I’m not assuming that the error bars are huge, I’m just saying that we don’t know they aren’t, and in fact we don’t even know what the probability is that they aren’t. This leaves you with no mathematical or statistical grounds to stand on when you say you believe the predictions of the climate models. All you have is your “gut feel”, or in other words, faith. Which I’m fine with, by the way, just don’t claim that these predictions are “scientific” or “prove” anything. Also claims that the climate model results support other work, like past temperature reconstructions, are dubious at best.

    Yes. I am also exactly along this line. I can spend weeks staring at the N-S equations and manipulating them, and I did, yet I can’t see a trace of randomness in them.
    Even the weak hypothesis that one can solve the N-S equations in the general case when one assumes a certain dose of randomness would probably earn the Fields medal for whoever proved it.
    Of course the strong hypothesis that the stochastic way is THE way to solve them is, as far as I know, completely out of reach.

    So we are back again where we were.
    We have a system with equations that we can’t solve, with known phenomena for which we don’t have equations, and with phenomena we don’t yet know about.
    Hovering above everything we have the ill posedness Jerry describes and the ominous interactions between resolved and unresolved scales, which are arbitrary numerical artefacts anyway due to computer power considerations.
    And the magical word should be “parametrisation”?
    Is that science anymore?
    Even the most basic test that defines science, comparing predictions to experience, doesn’t work, because the modellers themselves say that their models give no measurable local predictions.
    Local as in “when” will happen “what”, “where”.
    At least meteorology makes sense, because meteorologists know that they are forever limited by the fundamental chaos, so they don’t attempt to go too far, and they make observable predictions like “It will rain in Paris tomorrow.”

    It is indeed a matter of “faith” that, when making a computer run with some set of reasonable (but far from complete) assumptions, some sort of average result or “global” trend over an arbitrary time frame would not be too far (define “too far”) from reality.
    It will not be right in a smaller time frame and it will be unknown in a bigger time frame.
    I simply can’t take seriously something with such a massive demand for “faith”.

  266. Jim D
    Posted Mar 23, 2007 at 7:27 PM | Permalink

    Jerry (#259)

    Changing the dissipation operator near the boundary in the manner you describe leads to a discontinuity in the approximation to the continuum system. You have changed from a hyperviscosity-type dissipation to a regular viscosity-type dissipation term at one point inside the boundary.

    Also on both inflow and outflow boundaries, all variables should not be specified. This leads to an overspecified system. I would surmise that this is hidden by the extra smoothing near the boundary, but leads to a considerable reduction in the accuracy of the solution. It is a fairly trivial exercise to run the example in the first reference of the ITCZ thread to determine the impact on a mesoscale storm embedded in a balanced large scale solution. Until that is accomplished for MM4 and MM5, I will continue to have no confidence in your numerical methods.

    Have you run a hydrostatic model yet without any dissipation at the same resolution as in the experiment above. This should be a trivial exercise for you.

    What is the technical note reference for MM4 and is that model still available?

    We go with relaxation terms in the boundary zone because the analysis is typically of low space and
    time resolution, and so is far from perfect; we therefore blend what the limited area model does with those
    analyses. There doesn’t seem to be any merit in reducing the boundary smoothing and having those
    analyses be more effective.

    MM4 is described in a Technical Note by Hsie et al. Dissipation terms were the same, but I think
    they needed a shorter time-step (maybe 20 s, instead of MM5’s 30 s for a 10 km grid).
    These models can’t be run without dissipation due to using low-order numerical methods.
    My 1993 paper shows hydrostatic versus nonhydrostatic comparisons. At 10 km, you don’t
    get a lot of difference.

  267. Jim D
    Posted Mar 23, 2007 at 7:47 PM | Permalink

    I want to get it straight that I didn’t mean global climate models only give you global means
    as output. They give you time-means, such as winter average max daily temp, for each position
    depending on their resolution, and they give those means for the current and future climate.
    So, in a sense they blur the stochastic randomness of weather. They also run climate models
    several times to determine internal variability due to this randomness. You can also run
    with different amounts of CO2, and see a signal of warming above the noise of this randomness,
    and the warming is clearly related to the CO2 put in. It is incorrect to say you can’t predict
    climate because of this randomness. The signal from CO2 is strong enough to see.
    Tom, you say you can’t trust models because they use parameterizations. It would be
    a long wait if you want to do climate with DNS and bin microphysics, so we use the models we have, and they
    have shown some success with weather and climate that some people here choose to ignore.

  268. Dave Dardinger
    Posted Mar 23, 2007 at 9:48 PM | Permalink

    re: #268 Jim,

    Is the CO2 effect calculated from first principles? I.e., does it work by calculating the amount of IR absorbed in each atmospheric layer based on the amount of IR received from above and below, the water vapor content, and the estimated cloudiness, or is it entered as a parameter?

  269. Gerald Browning
    Posted Mar 23, 2007 at 9:54 PM | Permalink

    Jim D (#267),

    I have never heard of a numerical model having to use dissipation because of the low order of the numerical method.

    Please run the cases you have been asked to run. Given that you have already run 10 km meshes, the inviscid runs above for less than a day should be trivial as you have both the hydrostatic and nonhydrostatic models available from your manuscript and a supercomputer (I only have my home PC).

    There is always considerable verbiage from you, but no mathematical proofs or convergence runs. Thus I have to believe that you are more interested in a smoke screen than in science.

    You are a modeler: provide the equivalent convergence runs for the inviscid runs discussed above, or inviscid runs similar to Lu et al. that were run on NCAR’s WRF and Clark-Hall models. Enough gamesmanship. Where are the results that support the accuracy and dynamical stability claims that you have made?

    Jerry

  270. Jim D
    Posted Mar 24, 2007 at 9:02 PM | Permalink

    #270 Jerry,
    Dissipation is needed when you use second order space differencing
    and leapfrog time differencing as these models do. Low order
    means second order in this case.
    The von Neumann analysis shows that 2 grid-length waves are
    stationary, and short-wave speeds are very underestimated, so these schemes
    are poor at shape preserving near the grid scale, and therefore
    dissipation is needed to remove the poorly represented wavelengths.
    If you didn’t, you would just get short-wave noise and an ugly solution,
    if not outright instability. The dissipation I described is the minimum
    amount for a clean solution.
    I will stay with the verbiage approach, as I am only on this site
    recreationally, not for work, and also it has been many years since
    I have run MM5, let alone MM4, as we use WRF with higher-order numerics now.
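
    A sketch of the von Neumann result invoked above, for the 1-D advection equation u_t + c u_x = 0 with centred second-order differencing in space: a wave exp(ikx) moves at the numerical phase speed
    \[
    c^{*} \;=\; c\,\frac{\sin(k\Delta x)}{k\Delta x},
    \]
    so the 2\Delta x wave (k\Delta x = \pi) is exactly stationary and waves near the grid scale travel far too slowly; without some dissipation they accumulate as small-scale noise.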

  271. Jim D
    Posted Mar 24, 2007 at 9:23 PM | Permalink

    #268 Dave D,
    The IR is always done with vertical integrations as you describe,
    taking into account CO2, water vapor, ozone and clouds in each column.
    CO2 and ozone are taken as a given profile, while the water vapor and clouds
    vary as predicted variables. The theory for clear sky is well known,
    and well simulated. It is the clouds that are trickier due to their complex
    structures and compositions, and this is where the tuning goes into climate models.
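
    A toy sketch, in Python, of the kind of column integration being described; the broadband transmissivities here are invented placeholders standing in for the real spectral treatment of CO2, water vapor, ozone and clouds, and nothing in it reflects any particular model’s radiation code:

        import numpy as np

        SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def upward_lw_flux(T_layers, transmissivity, T_surface):
            # Toy upward longwave flux at the top of a column: each layer
            # transmits a broadband fraction of the flux from below and adds
            # its own emission. Real transmissivities would come from the
            # CO2, water vapor, ozone and cloud amounts in each layer.
            flux = SIGMA * T_surface**4
            for T, t in zip(T_layers, transmissivity):
                emissivity = 1.0 - t
                flux = t * flux + emissivity * SIGMA * T**4
            return flux

        # Hypothetical 5-layer column, warm surface, temperature decreasing with height
        print(upward_lw_flux(np.array([280., 260., 240., 225., 215.]),
                             np.array([0.8, 0.85, 0.9, 0.95, 0.98]),
                             T_surface=288.))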

  272. Dave Dardinger
    Posted Mar 25, 2007 at 7:33 AM | Permalink

    Jim,

    Thanks for the reply, and I’m glad that’s how CO2 is handled. Where would I go in the documentation to see just what figures were used? Am I right in assuming that X% of the available IR flux is assumed to be absorbed in each of a number of frequency bins, taking into account that CO2 will share this flux with H2O and/or other substances?

  273. MarkW
    Posted Mar 25, 2007 at 6:28 PM | Permalink

    JimD,

    So the idea is that it’s better to move forward with bad data, than risk waiting for accurate models?

  274. Gerald Browning
    Posted Mar 25, 2007 at 8:03 PM | Permalink

    Jim Dudhia (#270),

    You seem to have numerical accuracy, numerical stability, ill posedness and fast exponential growth confused.

    The leapfrog method with centered second-order differences in time and space on a nonstaggered grid is a neutral method (I would be happy to cite an elementary numerical analysis text or demonstrate this with two lines of mathematics); it is second-order accurate in space and time and numerically stable for a well posed system (e.g. a hyperbolic system) that does not have negative exponential growth.
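
    A minimal version of that two-line demonstration, using the oscillation equation u_t = i\omega u as a stand-in for a single resolved wave mode:
    \[
    u^{n+1} = u^{n-1} + 2\,i\omega\Delta t\,u^{n}
    \;\Longrightarrow\;
    \lambda^{2} - 2\,i\omega\Delta t\,\lambda - 1 = 0,
    \qquad
    \lambda = i\omega\Delta t \pm \sqrt{1-(\omega\Delta t)^{2}},
    \]
    so |\lambda| = 1 whenever |\omega\Delta t| \le 1: the scheme neither amplifies nor damps resolved waves, which is what is meant by neutral.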

    The stability of the leapfrog scheme has been compared against a proper implementation of the semi-implicit method originally proposed by Kreiss in a manuscript by Steve Thomas and myself for the mesoscale case (reference available), and the two numerical methods produced the same numerical solutions when both used the same time step.

    We are not discussing bad implementations of good numerical methods.

    When a system is ill posed for the IVP as is the hydrostatic system, no numerical method will converge to the continuum solution as can be seen in the contour plots I have shown above.

    And when there is fast exponential growth in a system, any error (numerical, initial data, parameterization, etc) will cause an exponential growth of the error between the numerical solution and the continuum solution.

    I continue to wait for an inviscid run of the hydrostatic model and the nonhydrostatic model (MM4 and MM5) following Lu et al.

    Jerry

  275. Gerald Browning
    Posted Mar 25, 2007 at 10:06 PM | Permalink

    Jim D (#273),

    It appears that suddenly you have become an expert in climate models by describing how they handle CO2. Are you an expert in climate modeling?

    And because you don’t want to run MM4 (hydrostatic and ill posed) even though Anthes ran it for years and published many manuscripts, or MM5, please feel free to run WRF. The numerics have no bearing on these continuum issues if they are accurate and stable and correctly approximate the continuum system.

    I repeat, these are not numerical issues. They are much more serious than that.

    I am still waiting for any evidence, mathematical or convergent numerical solutions.

    I assume that you can run a model on your home computer just like I can.

    Jerry

  276. Tom Vonk
    Posted Mar 26, 2007 at 2:59 AM | Permalink

    Tom, you say you can’t trust models because they use parameterizations. It would be
    a long wait if you want to do climate with DNS and bin microphysics, so we use the models we have, and they
    have shown some success with weather and climate that some people here choose to ignore.

    Jim, it is not the parameterization I do not trust.
    What I do not trust, because it is simply wrong, is the underlying assumption that there is RANDOMNESS in weather and in its temporal means, called "climate".
    There seems to be a mistaken idea that chaos = randomness, and even you write about the "randomness of the weather", while everything in the weather system is strictly determined by deterministic equations.
    Yet since Lorenz we have known that the weather dynamics have a strange attractor in phase space that prevents them from being arbitrary random noise.
    That is why the study of the N-S equations, and the direction Jerry is taking, is fundamentally what needs to be done.
    If I gave the impression that I would like to do climate with DNS, that impression is wrong.
    Long-term numerical simulation of the atmospheric system is bound to fail regardless of the method, be it DNS or the current climate approximations that use temporal means instead of instantaneous values.
    If the exact solution U(x,t) (supposing it exists and has the right continuity and differentiability properties) has chaotic behaviour, as it does, then its temporal average U'(x) ALSO shows chaotic behaviour.
    The only difference is that the time scale at which the chaotic behaviour appears seems to be different, because by making the change of variable from U(x,t) to U'(x) you have decided that U(x,t) (about which, let us remind ourselves, you know nothing) has suitable properties, in particular continuity over an arbitrary time scale T on which the temporal mean is taken, so that the mean is well defined.
    To make things worse, a still stronger assumption is currently made in climate models afaik (for example the ENSO model), namely that the "fluctuations" of the averaged variable around its mean are random with average 0.
    There is no demonstration, and there never will be, that the chaos disappears through averaging, because simple observation of the system shows that it is full of discontinuities.

    Now I would certainly not ignore any success if I were shown a single one.
    Obviously the fact that the models do not diverge (the weather models do, by the way) proves nothing, because it is a self-fulfilling prophecy.
    Also, let us absolutely avoid confusing PARTIAL results (parameterization) with a "success".
    The difficulty is not in the parts but in the whole, because it is a well known fact that in a nonlinear chaotic system you cannot separate the problem into N independent problems; that is the very reason why the system is chaotic.
    In other words, you could have a rather convincing model of, say, the evolution of the Arctic glaciers (of course with the famous assumption "all things being equal"), yet as soon as you plug it into a global climate system things stop being "equal" and the results become different and unreliable in the long run.
    So either you force them to behave as in the partial independent model and everything goes horribly wrong, or you let them run free and then the partial model on which you based everything becomes wrong.
    That is the circularity that has been mentioned here all the time.

  277. Posted Mar 26, 2007 at 5:04 AM | Permalink

    The paper "Time-step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth and Ensemble Design" addresses some of the important issues under discussion here. I have posted a couple of preliminary comments on the paper; additional information will be added as I have time away from real life. There are other posts over there that address other important mathematical modeling and numerical solution issues.

    At this time I can state that it is very likely that the small systems of nonlinear ODEs that exhibit chaotic response have never been numerically solved to convergence. All published numbers have been numerical noise that does not satisfy the continuous equations. The ramifications relative to NWPs and AOLGCMs are very likely great.
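
    As a rough illustration of the time-step sensitivity that paper is about (only a sketch with the standard Lorenz-63 parameters, not a reproduction of its experiments):

        # Two RK4 integrations of Lorenz-63 that differ only in the time step.
        # They agree for a while and then part ways, which is why pointwise
        # convergence over long windows is so hard to demonstrate.
        import numpy as np

        def lorenz(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = u
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        def rk4(u0, dt, t_end):
            u, traj = np.array(u0, dtype=float), []
            for _ in range(int(round(t_end / dt))):
                k1 = lorenz(u)
                k2 = lorenz(u + 0.5 * dt * k1)
                k3 = lorenz(u + 0.5 * dt * k2)
                k4 = lorenz(u + dt * k3)
                u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
                traj.append(u.copy())
            return np.array(traj)

        coarse = rk4([1.0, 1.0, 1.0], 0.01, 30.0)
        fine = rk4([1.0, 1.0, 1.0], 0.005, 30.0)[1::2]   # sampled at the coarse output times

        for t in (5.0, 10.0, 20.0, 30.0):
            i = int(round(t / 0.01)) - 1
            print(f"t = {t:4.1f}   |x(dt=0.01) - x(dt=0.005)| = {abs(coarse[i, 0] - fine[i, 0]):.3e}")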

  278. Tom Vonk
    Posted Mar 26, 2007 at 5:34 AM | Permalink

    Calculations of the Lorenz 3D equations with all standard numerical solution methods for ODEs will never indicate convergence. It is possible that this statement holds also for all non-linear ODEs that are assumed to exhibit chaotic behavior. The same question relative to PDEs is still very much an open issue. The numerous well-founded problems associated with attaining accurate numerical solutions to PDEs make it very difficult to separate these issues from the issues associated with chaotic response. Little of a basic theoretical nature is known about chaotic response and PDEs. It is important to note, however, that some methods of solution for PDEs depend on the same type of ODEs that are used as illustrations of chaotic response.

    Thanks, Dan!
    That is completely spot on.

  279. Tom Vonk
    Posted Mar 26, 2007 at 6:21 AM | Permalink

    Dan .
    Very enlightening. I particularly liked the remark about the Lyapunov coefficients.

    How do you explain that, despite all the peer reviewed papers showing that the numerical climate models do just that (produce numbers), there are still people who believe they show a kind of "reality"?
    I mean, if the numbers I get depend on the hardware, the software, and the time and space steps, not to mention the parameterization I choose, AND if I know that I will never be able to demonstrate convergence, then how can I get any scientific credibility by showing such numbers?

    Are such papers discussed within the IPCC, and if so, why aren't they mentioned?

  280. Tom Vonk
    Posted Mar 26, 2007 at 8:42 AM | Permalink

    Sorry for the third post, but I have a comment about the Shadowing Lemma.
    Never mind the Riemannian manifolds; from what I understood of the climate modelling discussions, the interpretation is as follows:
    "If I have a trajectory calculated in the phase space by a numerical model, then there exists a true trajectory of the system that is arbitrarily close to the calculated trajectory for any period of time."
    I am not sure whether that lemma states the obvious, like "For any graph in S, there exists a function defined on S that can be arbitrarily 'close' to the graph", or whether it has some deeper meaning in terms of dynamics.

    In any case it cannot tell us anything about the validity of the calculated trajectory, because even if the Lemma applied (which is in no way clear), it would only tell us that what I calculated is not completely stupid, i.e. is not outside the relevant subspace of the phase space (the strange attractor). The distance between what I calculated and what will happen can still be arbitrarily large, bounded only by the size of the attractor, and I suspect that this would be HUGE for the climate.
    Any additions?

  281. David
    Posted Mar 26, 2007 at 5:11 PM | Permalink

    #32 > Use your model to predict temperatures 5/10/whatever years from now. When the time comes we will see how 'physical' the model really is.

    Ian, in case you missed it, this has been done a number of times in the scientific literature lately. Probably the best example is the projections Hansen made to Congress in 1988. You can see this discussed in PNAS (at http://www.pnas.org/cgi/content/full/103/39/14288#F2), which links to the relevant figure. Rahmstorf et al. 2007 (Science Express, 1 Feb 2007) have done the same for the IPCC projections.

    The predictions look pretty good to me. Does this convince you?

  282. Posted Mar 26, 2007 at 5:57 PM | Permalink

    re: #283.

    Does this convince you?

    Nope, not me.

    Maybe somebody will run some stats on measured versus calculated values and compare them with pure random luck/guessing.

  283. Jim D
    Posted Mar 26, 2007 at 8:08 PM | Permalink

    Dave D, #274

    Thanks for the reply, and I'm glad that's how CO2 is handled. Where would I go in the documentation to see just what figures were used? Am I right in assuming that X% of the available IR flux is assumed to be absorbed in each of a number of frequency bins, taking into account that CO2 will share this flux with H2O and/or other substances?

    The longwave schemes we have in MM5 and WRF are in a couple of papers. The RRTM scheme is
    from Mlawer et al. (1997), I believe it is JGR. The CCSM scheme is in their Tech Note
    downloadable from NCAR CGD’s pages. (to Jerry, weather modelers use radiation too,
    but we keep our CO2 fixed). Only some of the frequency bins need to handle CO2 and ozone,
    but I think they all handle water vapor (I also forgot methane is handled too). You seem
    to know more about this than you are letting on, Dave. The schemes are mostly look-up
    tables (thousands of elements) that are fit to the line-by-line (exact) model.
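
    A toy sketch of the kind of bookkeeping those tables feed (all coefficients below are invented; this is not RRTM or CCSM): per band, each gas contributes a Beer-Lambert transmittance, the band transmittance is taken as their product (a random-overlap assumption), and the layer absorbs flux*(1 - T).

        # Toy per-band absorption for one layer with two overlapping gases.
        import numpy as np

        bands         = ["window", "15um_CO2", "6.3um_H2O"]
        flux_per_band = np.array([40.0, 30.0, 30.0])   # W/m^2 entering the layer (made up)
        k_co2         = np.array([0.0, 5.0, 0.1])      # made-up absorption coefficients
        k_h2o         = np.array([0.2, 0.5, 4.0])
        u_co2, u_h2o  = 0.3, 1.0                       # made-up absorber amounts in the layer

        T_band   = np.exp(-k_co2 * u_co2) * np.exp(-k_h2o * u_h2o)
        absorbed = flux_per_band * (1.0 - T_band)

        for b, T, a in zip(bands, T_band, absorbed):
            print(f"{b:9s}  transmittance = {T:.3f}   absorbed = {a:5.1f} W/m^2")
        print(f"layer total absorbed = {absorbed.sum():.1f} W/m^2")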

  284. Paul Linsay
    Posted Mar 26, 2007 at 8:08 PM | Permalink

    #283, I’m with Dan on this one. Plug in the satellite temperatures from 1980 on and the predictions are way too high. The ground based weather station data is full of errors and has corrections the size of the signal that somehow are always biased positive. My favorite weather station is the one in the back yard with the thermometer above the barbecue. There’s unprecedented global warming every time the family has friends over for steaks and beer.

  285. Jim D
    Posted Mar 26, 2007 at 8:20 PM | Permalink

    Jerry,
    Yes, leapfrog plus second-order schemes are neutral, but they behave poorly near the grid scale, making damping of those scales necessary.

    And because you don’t want to run MM4 (hydrostatic and ill posed) even though Anthes ran it for years and published many manuscripts, or MM5, please feel free to run WRF. The numerics have no bearing on these continuum issues if they are accurate and stable and correctly approximate the continuum system.

    I repeat, these are not numerical issues. They are much more serious than that.

    I am still waiting for any evidence, mathematical or convergent numerical solutions.

    I assume that you can run a model on your home computer just like I can.

    I wish I understood your model problem better. I have mentioned why I think your hydrostatic results don't look good. Also, exponential growth only occurs while systems remain linear, and it is limited by nonlinear effects in any bounded model. Neither nature nor models have exponential growth that is unbounded. Can you elucidate what this simulation is exactly (initial and boundary conditions)?
    I might consider it for WRF, but I don't have a home computer with Fortran, and would have to do that at work.

  286. Jim D
    Posted Mar 26, 2007 at 8:31 PM | Permalink

    #275 MarkW
    I’m not a Rummy fan, but only slightly modifying what he said
    “we do science with the models we have, not the ones we would wish to
    have at some future date”
    For some of us, these models have proved themselves; for others, they haven't.

  287. Jim D
    Posted Mar 26, 2007 at 9:00 PM | Permalink

    (PS sorry for the double post above, I was getting an error message)

    Tom Vonk,
    It seems like where we disagree is the extent to which the atmosphere is
    chaotic on time scales of a century. Recent history shows that there is
    no reason to expect it has multiple attractors on this time scale, unlike the longer
    scales of ice ages, Milankovitch cycles and
    continental drift, where there are some very different “equilibrium” (attractor) states.
    Changing ocean circulations certainly throw in a chaotic factor, but the experts
    don’t seem to expect that to happen in the next century because of the slow ocean time scales.
    The atmospheric state therefore moves around a single, but possibly slowly moving,
    attractor (in Lorenz terms).
    This is why it makes sense to regard the climate as the average of weather over time.
    We can define climate in the same way for the next century as we have in the past
    century, by averages because we have a slowly changing “equilibrium” state on the century scales.

  288. David
    Posted Mar 26, 2007 at 10:18 PM | Permalink

    >Plug in the satellite temperatures from 1980 on and the predictions are way too high.

    But the predictions are for the surface! Of course there are at least 3 different satellite reconstructions which have trends from (around) 0.14C/decade to more than 0.2C/decade. Some will show more and some less warming than predicted but they are all in the ball park.

    Let's face it: the request for validation was made, and when the results come back with the wrong answer, the rules are changed.

  289. Dave Dardinger
    Posted Mar 26, 2007 at 11:53 PM | Permalink

    re: #285

    Can anyone help me with links to any of Jim’s references? I don’t have direct access to journals.

    Jim, I guess what I’d like to be able to do would be look at just how an absorbance figure gets used in the model program. It’s true that I have tried figuring things out in available models a time or two, but always get hung up trying to get my hands around it. I only really learn when I understand something. I can’t just memorize bare facts for very long. But once I understand the concept involved it’s mine forever. If there’s anyone here who could work up a baby program which would illustrate the concepts it would be grand; though I’m not holding my breath. I suspect it isn’t simply a case of running a few lines of R.

  290. Gerald Browning
    Posted Mar 27, 2007 at 12:20 AM | Permalink

    Jim D (#287),

    Your group (MMM) managed to run the WRF and Clark models for the Lu et al. cases. Try running them with the same two resolutions as above and only dynamics (no damping, physics, etc.). That is called a convergence test and is standard for all numerical methods: running the homogeneous system checks that the numerical accuracy is as claimed. If the numerical methods are accurate and stable, then you should see a reduction in the error by the appropriate amount (4 for a 4th-order in time and space numerical method) for at least up to a day.
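
    For readers who have not seen one, a minimal sketch of the bookkeeping in such a grid-refinement test, using 1-D linear advection with a known exact solution and the second-order leapfrog/centered scheme (so the expected error ratio on halving the grid is about 4); this is not the WRF or Lu et al. setup, just the procedure:

        # Halve dx (and dt, at fixed Courant number) and check that the error
        # drops by ~2**p for a p-th order scheme (here p = 2, so a ratio of ~4).
        import numpy as np

        def advect(nx, t_end=1.0, c=1.0, courant=0.5):
            dx = 1.0 / nx
            dt = courant * dx / c
            nt = int(round(t_end / dt))
            x = np.arange(nx) * dx
            u_old = np.sin(2 * np.pi * x)                 # exact solution at t = 0
            u = np.sin(2 * np.pi * (x - c * dt))          # exact solution at t = dt to start leapfrog
            for _ in range(nt - 1):
                u_new = u_old - courant * (np.roll(u, -1) - np.roll(u, 1))
                u_old, u = u, u_new
            exact = np.sin(2 * np.pi * (x - c * nt * dt))
            return np.max(np.abs(u - exact))

        e_coarse, e_fine = advect(64), advect(128)
        print(f"coarse error {e_coarse:.2e}, fine error {e_fine:.2e}, ratio {e_coarse / e_fine:.2f}")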

    And frankly I don’t believe there isn’t a copy of a hydrostatic model lying around so that the same test can be run on it. You might ask Stan Benjamin for a copy of the RUC model that is hydrostatic and from the same source as the NCAR-Penn State model.

    I restate that these are continuum issues and should appear in any clean numerical approximation of the hydrostatic and nonhydrostatic systems. The problems certainly appeared as expected in the Lu et al. manuscript (and evidently in the Mahalov manuscript although I have been unable to obtain a copy) and in the above simple tests.

    And finally I point out that the new multiscale continuum system came from a rigorous mathematical proof of accuracy when compared with the original unmodified system based on the theory of hyperbolic systems.
    No such proof ever existed for the hydrostatic system (and it cannot).

    Also the reference above is a mathematical analysis of the problems with both systems. Clearly that analysis seems to be applicable to the Lu et al. manuscript and the cases shown above.

    Jerry

  291. Tom Vonk
    Posted Mar 27, 2007 at 4:59 AM | Permalink

    It seems like where we disagree is the extent to which the atmosphere is chaotic on time scales of a century. Recent history shows that there is no reason to expect it has multiple attractors on this time scale, unlike the longer scales of ice ages, Milankovitch cycles and continental drift, where there are some very different "equilibrium" (attractor) states. Changing ocean circulations certainly throw in a chaotic factor, but the experts don't seem to expect that to happen in the next century because of the slow ocean time scales. The atmospheric state therefore moves around a single, but possibly slowly moving, attractor (in Lorenz terms).
    This is why it makes sense to regard the climate as the average of weather over time. We can define climate in the same way for the next century as we have in the past century, by averages, because we have a slowly changing "equilibrium" state on the century scales.

    Jim, I am afraid we disagree about more than that, because I can't make much sense of the above.

    1) A strange attractor can NOT be determined by observation, and specifically not for a thing like the climate. You would already be hard pressed to say what the dimension of the phase space is in which you represent the "climate". If you take a climate model, its associated phase space dimension would be P x N + K, where N is the number of cells you use. That is a HUGE number, and a strange attractor would be a subspace with a similar number of dimensions. You don't even know whether there is one, let alone what properties it has.

    2) A chaotic system is not chaotic on a "time scale". It is or it is not. And if it is, you will not make it less chaotic by making arbitrary mathematical assumptions about its dynamical parameters, such as taking a time average instead of an instantaneous value. It is as if you said that by taking a daily average of the air velocities you can predict the weather in a week, while you cannot if you use quasi-instantaneous values.

    3) I suspect that you confuse the speed of divergence of the system's trajectories in phase space with the fact that they do diverge. The chaos has local causes due to the nonlinearities of the differential equations. But as soon as you pronounce the words "differential equations" you know that what matters is microscopic (dt, dx, etc.) and not global. You should read the paper that Dan linked; you will see that the trajectories depend on the time and space steps you choose. That alone is enough reason to conclude that the models only produce numbers. The shadowing lemma (even if I am not really sure that it applies to a system that is not COMPLETELY described by a set of differential equations) would only ensure that the numbers are not stupid, in other words that they look like a real climate. But the convergence proof is still completely out of reach and probably always will be, because I am firmly convinced that you cannot establish a complete set of differential equations describing the evolution of the system. The only thing you can do is neglect everything that is SUPPOSED to be constant, random, or irrelevant and run computer programs with whatever is left.

    4) Your reference to Milankovitch cycles seems to show that you consider the climate to be a sum of random short-term fluctuations that somehow "don't matter" and "slow" chaotic long-term evolutions. In other words, everything that happens on a short time scale (define how short) is not deterministic chaos but random noise. So why should we bother with differential equations if they only produce random noise?

    5) Talking about "slowly changing 'equilibrium' states at the scale of centuries" is stating the obvious. If I have a dynamic variable x(t) showing wildly chaotic yet bounded behaviour, and if I define X(T,t) as the time average of x(t) over T, then with increasing T, X(T,t) will change more and more slowly. However, the mathematical causality is x(t) -> X(T,t), and I can't conclude anything about x(t) if I only observe X(T,t). More specifically, the chaos and unpredictability did not disappear just because X(T,t) changes "slowly" for large enough T. Things get even worse, because with a large enough T you will lose most of the relevant dynamics of x(t): they will fall below the detection threshold.

  292. MarkW
    Posted Mar 27, 2007 at 5:23 AM | Permalink

    JimD,

    I don’t know of any models that have proved themselves.
    You're the one who wants to put the world's economies at risk and to drastically lower the lifestyles and even the lifespans of billions.
    I would expect better evidence from you that such drastic measures are necessary.

  293. Paul Linsay
    Posted Mar 27, 2007 at 12:14 PM | Permalink

    #290, David,

    I didn't realize that the models had the resolution to compute temperatures only 2 m off the ground. I thought their resolution was on the order of 100 km. Please reference the relevant models. The experts on this thread will be very interested in what you have to say.

  294. Jim D
    Posted Mar 27, 2007 at 8:11 PM | Permalink

    Tom,

    4) Your reference to Milankovitch cycles seems to show that you consider the climate to be a sum of random short-term fluctuations that somehow "don't matter" and "slow" chaotic long-term evolutions. In other words, everything that happens on a short time scale (define how short) is not deterministic chaos but random noise. So why should we bother with differential equations if they only produce random noise?

    I think this does reflect my view. I would answer that you do need the differential equations, and that they produce not so much random noise as non-deterministic eddies. These eddies are important for the mean poleward heat transport, and as long as the models get the mean transport right, the details of the eddies don't matter.
    I interpret chaotic as going to significantly different areas of phase space and staying there.
    Perhaps Ice Ages are an attractor, and interglacial periods are another. I think I am safe
    in saying such transitions are not expected to occur in this century, and that to achieve
    such a transition you need at least a change in ocean circulation, as the atmosphere can’t
    do it on its own. That is why I interpret the climate as non-chaotic on time scales where
    the ocean circulation or other external factors don’t change significantly. Even as CO2
    increases, the main effects have a long time scale (such as ocean warming and ice melting).

  295. Jim D
    Posted Mar 27, 2007 at 8:23 PM | Permalink

    Jerry,

    Your group (MMM) managed to run the WRF and Clark models for the Lu et al. cases. Try running them with the same two resolutions as above and only dynamics (no damping, physics, etc.). That is called a convergence test and is standard for all numerical methods: running the homogeneous system checks that the numerical accuracy is as claimed. If the numerical methods are accurate and stable, then you should see a reduction in the error by the appropriate amount (4 for a 4th-order in time and space numerical method) for at least up to a day.

    And frankly I don’t believe there isn’t a copy of a hydrostatic model lying around so that the same test can be run on it. You might ask Stan Benjamin for a copy of the RUC model that is hydrostatic and from the same source as the NCAR-Penn State model.

    I restate that these are continuum issues and should appear in any clean numerical approximation of the hydrostatic and nonhydrostatic systems. The problems certainly appeared as expected in the Lu et al. manuscript (and evidently in the Mahalov manuscript although I have been unable to obtain a copy) and in the above simple tests.

    And finally I point out that the new multiscale continuum system came from a rigorous mathematical proof of accuracy when compared with the original unmodified system based on the theory of hyperbolic systems.
    No such proof ever existed for the hydrostatic system (and it cannot).

    Also the reference above is a mathematical analysis of the problems with both systems. Clearly that analysis seems to be applicable to the Lu et al. manuscript and the cases shown above.

    WRF has a hydrostatic option, so that is not an issue. Lu et al. already used WRF, so I am not sure what I am supposed to do differently. I think I just disagree with this methodology, because how do you measure error or truth in these systems? They won't converge to a solution as the grid size is reduced, because the finer grid allows finer eddies that were not permitted by the coarser grid. The convergence tests we do with WRF use a fixed physical viscosity, so that certain small scales really are not permitted, so I guess I am disagreeing with you at a more fundamental level than just model results.

  296. Jim D
    Posted Mar 27, 2007 at 8:34 PM | Permalink

    MarkW,
    Like I say I am not a climate modeler. However climate modelers do the
    best science they can, and it is the policy makers who translate that into action
    or not. It is a risk management exercise to weigh probabilities against costs.
    The IPCC produce error estimates too, and this goes into the equation. If you don’t
    believe the error estimates either, you have to have a likely alternative scenario
    in mind, or a good reason to say why the error is higher than the consensus view.
    I know this won’t go over well here, but that is my view.

  297. David
    Posted Mar 27, 2007 at 9:52 PM | Permalink

    >I didn’t realize that the models had the resolution to compute temperatures only 2m off the ground. I thought their resolution was on the order of 100 km.

    Paul you are mixing vertical with horizontal resolution. A “typical” climate model has around 30 levels (though weather models routinely have many more). The lowest level is typically 50-100m above the surface with an additional surface level (though there are marked variations from these rules).

    A few minutes with google will turn up hundreds of papers which discuss climate models (or alternatively check out the massive bibliography attached to the IPCC reports). Groups like the Hadley Centre, CSIRO, NCAR are a good place to start. The model that Hansen used is referenced in the PNAS report.

  298. Tom Vonk
    Posted Mar 28, 2007 at 3:11 AM | Permalink

    I interpret chaotic as going to significantly different areas of phase space and staying there.
    Perhaps Ice Ages are an attractor, and interglacial periods are another. I think I am safe in saying such transitions are not expected to occur in this century, and that to achieve such a transition you need at least a change in ocean circulation, as the atmosphere can’t do it on its own. That is why I interpret the climate as non-chaotic on time scales where the ocean circulation or other external factors don’t change significantly. Even as CO2 increases, the main effects have a long time scale (such as ocean warming and ice melting).

    That is an extremely daring assumption, and one for which there is not the slightest theoretical or experimental evidence.
    I'll repeat it one more time: nobody has the slightest clue what the strange attractor of the climate could be. By observing the past, and assuming that you are able to define the phase space you are working in (I notice that you didn't try to estimate its dimension, which would be the very least), you would get only one trajectory.
    That is irrelevant for the future evolution of the system, because even the study of the rather simple Lorenz attractor shows how easily two initially slightly different trajectories move to very different places in the phase space.
    So "details" DO matter; nothing has to change "significantly" to induce a significant movement of the system in the phase space.
    You probably still live in a linear world where significant changes need significant causes and insignificant causes produce no significant changes. That is a common view in the computer modelling community, because it all boils down to linearizing everything, given that a computer can only add numbers.

    Deterministic chaos is precisely the opposite of that: small causes can amplify and induce big changes, and big changes can have negative feedbacks that cancel them, all of it happening at the same time and interacting. THAT is chaos.
    So you are not safe saying anything at all regarding "transitions", whatever that term might mean, because the evolution of trajectories is BY DEFINITION unpredictable.
    It is all the time this regrettable confusion that a qualitative understanding of a partial phenomenon enables a quantitative understanding of the whole system.
    Of course, as I already wrote (point 5 of my post above), by averaging you do eliminate the inconvenient local details that you don't want "to matter".
    But only for a time, and during that time you may well be able to say that every deviation from your smoothed prediction is only a random fluctuation due to things that you don't know but that "don't matter" anyway.
    As your calculated means deviate more and more from reality, with t becoming large compared to T (the time over which you averaged), the chaos will reappear, as it should.
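
    A minimal numerical sketch of the point above about two initially slightly different trajectories (standard Lorenz-63 parameters assumed; the Lorenz system is only a stand-in here, not a climate model):

        # Two Lorenz-63 trajectories whose initial conditions differ by 1e-8.
        # Their separation grows roughly exponentially until it saturates at
        # the size of the attractor.
        import numpy as np
        from scipy.integrate import solve_ivp

        def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = u
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        t_eval = np.arange(0.0, 30.0 + 1e-9, 3.0)
        a = solve_ivp(lorenz, (0.0, 30.0), [1.0, 1.0, 1.0],
                      t_eval=t_eval, rtol=1e-10, atol=1e-12)
        b = solve_ivp(lorenz, (0.0, 30.0), [1.0 + 1e-8, 1.0, 1.0],
                      t_eval=t_eval, rtol=1e-10, atol=1e-12)

        for t, s in zip(t_eval, np.linalg.norm(a.y - b.y, axis=0)):
            print(f"t = {t:4.1f}   separation = {s:.2e}")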

  299. Tom Vonk
    Posted Mar 28, 2007 at 5:51 AM | Permalink

    The IPCC produce error estimates too, and this goes into the equation. If you don’t believe the error estimates either, you have to have a likely alternative scenario in mind, or a good reason to say why the error is higher than the consensus view.
    I know this won’t go over well here, but that is my view.

    Might "http://www.maths.uwa.edu.au/~kevin/Papers/JAS07Teixeira.pdf" just be an excellent reason?
    And "http://www.springerlink.com/content/g28u12g2617j5021/" might very well be an alternative likely scenario.
    Not that I believe in it any more than in any IPCC computer run, but it is as likely as anything gets in climate guessing.
    As for the "IPCC error estimates"... that horse has already been beaten to death; those are opinions, not error estimates.
    It is not a matter of here or elsewhere, so please stop talking about the "consensus view": first there is none, and second it has no scientific value anyway.

  300. Paul Penrose
    Posted Mar 28, 2007 at 7:51 AM | Permalink

    Jim D. made an important point that I want to address, and it is:

    However climate modelers do the best science they can, and it is the policy makers who translate that into action or not. It is a risk management exercise to weigh probabilities against costs.

    Now this is a true statement; however, it is imperative that the scientists be brutally honest with themselves and with the policy makers about the limitations of their work. In the case of climate models this means disclosing that there are no valid statistical error estimates for the outputs of their models and that any reported estimates are really just guesses. Meaning, of course, that the model outputs could be significant, or not, but that we really can't quantitatively tell which. Now I understand why the modellers might want to omit this fact; I can just hear the politicians asking why, after spending hundreds of millions (billions?) of dollars on modeling the climate, all you can say is "we don't know".

    My point is that you can’t just foist all the responsibility onto the policy makers. The scientists are responsible for reporting all the strengths, weaknesses, and certainty levels of their analysis clearly and completely. In the AGW debate I don’t believe this is always done. The complete uncertainty of the climate model outputs is one good example. I’m sure that the modelers are doing the best they can with what they have, and I’ll give them an “A for effort”, however this does not change the fact that they are not providing any useful information to the policy makers that they can base decisions on.

  301. KevinUK
    Posted Mar 28, 2007 at 9:15 AM | Permalink

    #302 PP

    I agree with most of what you've said BUT I disagree with your assessment that the climate modellers deserve an "A for effort". I think that, like all good hired consultants, they deserve an "A" for providing the answer the policymakers have funded them to give. I think that, with perhaps one or two exceptions, the vast majority of climate modellers know that what they are doing is a complete waste of the hard earned taxes the rest of us pay in order to keep them in the style to which they have become accustomed. Most of them IMO are not in the least bit interested in applying the scientific method. On that basis I would rather not use the terms science and climate in the same sentence when referring to climate modelling. They are having their 'fifteen minutes of fame' thanks to the politicians at the moment, but I am glad to say that they are now well into their fourteenth minute. Ultimately they need to understand that you can fool some of the people some of the time, and maybe even some of the people most of the time, but you cannot fool all of the people all of the time, as eventually the scientific method will be applied by people like Steve and Ross and their claims will be refuted.

    KevinUK

  302. Posted Mar 28, 2007 at 10:46 AM | Permalink

    #302, 303

    I can’t give “A for effort” because they continue to ignore the most fundamental aspects of (1) mathematical model development and analysis, (2) numerical solution methods analysis and development, (3) computer software development, and (4) applications of all these to the intended areas of application. What is being ignored is taught at undergraduate level and those students would never get an “A”. They would be lucky to get a passing grade.

    And don’t even get me started on the fact that public policy decisions that will affect billions of people will potentially be based on the numbers.

  303. Gerald Browning
    Posted Mar 28, 2007 at 7:43 PM | Permalink

    Jim Dudhia (#297),

    Run the WRF model's fourth-order numerical approximation in space and time on both the hydrostatic and nonhydrostatic systems for the inviscid, unforced initial boundary value problem that Lu et al. ran, at the two resolutions above, as I did. That will determine how rapidly the enstrophy can cascade down the spectrum in the Lu et al. example for convergent numerical approximations of the two different continuum systems (at least in the first day or so), and how much of a role the dissipation plays in controlling that cascade.

    Jerry

  304. Jim D
    Posted Mar 28, 2007 at 7:46 PM | Permalink

    Tom #300,
    I was trying to convey that the atmosphere alone doesn’t jump to different
    climatic states, something else has to happen. Those “something else’s” involve the
    ocean, and CO2 content of the atmosphere, and possibly volcanoes, meteors, etc.
    To the extent that those externals are known, there is no reason to believe in a
    fast chaotic transition. Climate models (given their number of points and variables)
    have a dimension of about a million. Of course, the accessible phase space is a small
    subset of this because of dynamical constraints and things like energy, mass and
    water conservation. Anyway, you asked for a dimension, so I gave it.
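
    A back-of-the-envelope version of that count, with assumed grid numbers (not any particular model's):

        # Phase-space dimension ~ (prognostic variables per cell) * (number of cells).
        nlon, nlat, nlev = 144, 96, 30   # roughly a 2.5-degree grid with 30 levels (assumed)
        nvar = 6                         # e.g. u, v, T, humidity, surface/cloud fields (assumed)
        cells = nlon * nlat * nlev
        print("cells:", cells, "  state dimension ~", nvar * cells)   # order 10**6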

    In answer to other issues relating to climate scientists, I would say that
    scientists are motivated by the truth. There is nothing worse for their reputation
    than being proved wrong, so they wouldn’t be doing all these publications
    if 100 years from now they thought there was a good chance they were proved wrong.
    It is all about legacy (for sure it isn’t well paid). Climate science is a bit
    unrewarding, however, because they can’t be proved right or wrong immediately,
    while in other areas, like weather prediction, you get instant gratification
    if you improve a 24-hour forecast. Climate science falls more in the category of
    cosmology or string theory where we may not know the right answers very soon.

  305. Willis Eschenbach
    Posted Mar 28, 2007 at 10:10 PM | Permalink

    Jim D., you say:

    In answer to other issues relating to climate scientists, I would say that scientists are motivated by the truth.

    While this view is touching, it is also incredibly naive.

    If you think that truth is what has driven Michael Mann to hide his data, tell us how. If you think that's what impels James Hansen, the most widely quoted climate scientist on the planet, to not only get up on his soapbox during company time, but to claim that he is being "muzzled" while doing so, please explain it to us.

    Scientists are like everyone else, driven (in no particular order) by the usual mix of desire for recognition, fear of being proven wrong, wanting to get grants, looking for social approval, jealousy of other scientists, wish for security, aversion to going against the herd, and all the rest of the things that drive people. They fight over territory in the lab, and over honors in the public eye. And yes, somewhere in the mix, there is the truth … but unfortunately, it is elusive, doesn’t pay well, and may take decades to become evident.

    And since in all likelihood, we won’t know “the truth” until some and perhaps all of today’s scientists are dead, the door is wide open for all of the other motives to come in. In particular, the desire for truth gets subsumed into the desire to have other scientists agree with you, which is what passes for “truth” in climate science these days … which is why AGW supporters are so dismissive and abusive towards those who dare to disagree with them. It also explains why they cling so hard to the fiction that climate models are data, and thus can be proxies for the unreachable truth.

    w.

  306. John Baltutis
    Posted Mar 29, 2007 at 12:54 AM | Permalink

    Re: 300, 301, 302, 304, & 307

    Well said, by all.

  307. Tom Vonk
    Posted Mar 29, 2007 at 3:02 AM | Permalink

    I was trying to convey that the atmosphere alone doesn't jump to different climatic states, something else has to happen. Those "something else's" involve the ocean, and CO2 content of the atmosphere, and possibly volcanoes, meteors, etc.
    To the extent that those externals are known, there is no reason to believe in a fast chaotic transition. Climate models (given their number of points and variables) have a dimension of about a million. Of course, the accessible phase space is a small subset of this because of dynamical constraints and things like energy, mass and water conservation. Anyway, you asked for a dimension, so I gave it.

    Jim, so the dimension of the phase space is several million, assuming a spatial resolution of around 200 km.
    For every global spatial temperature average there is, for all practical purposes, an infinity of different dynamical states.
    The very notion of a "transition" based on a single arbitrary parameter among millions (some global temperature average) doesn't make much sense in chaos theory.
    The only things external to this system are the Sun and possibly cosmic rays.
    Now it would be very naive to believe that even for strictly constant Sun parameters the system settles into a steady state, i.e. is represented by a single point in the phase space.
    On the contrary, it moves with chaotic pseudo-periods related to the changes in the Earth's orbital parameters and the Sun's activity on all time scales, from a day to millions of years.
    So yes, the system does things "by itself" simply by virtue of receiving and dissipating energy; the Lorenz system does everything "by itself" too.
    You don't need any external changes, significant or otherwise, to make it move; only linear systems need that.
    You could make some educated statements about the neighbourhood where it moves (for a limited time) if and ONLY if you knew the topology of the strange attractor, assuming there is one.
    Yet nobody has a clue, and I dare say nobody ever will, for the simple reason that to establish this topology you would need a complete set of differential equations describing its dynamics and then to solve them, which seems out of reach.
    Sorry, but conservation of energy, momentum and mass doesn't fit the bill either, for the simple reason that there is an infinity of different dynamical states satisfying them. Lorenz's system also conserves all that, yet stays happily chaotic at all time scales.

    A computer model hopes to have picked one possible state out of an infinity at time t? Big deal, it will have to guess again at t + dt!
    I understand that you work in meteorology: do you sincerely believe that you would be 24 times better at predicting daily temperature averages at a given place than hourly temperature averages (pick your time unit: day, week, month, year, etc.)?

  308. Jim D
    Posted Mar 29, 2007 at 9:25 PM | Permalink

    Jerry, #305, I have no plan to do this unless I understand what you are getting at,
    and I think we are at an impasse because I don’t think I will now. Please don’t ask me to do work.
    Willis, it seems to be a sad impression you have of the scientific community. Everyone
    esteems scientists that have been proven right by history, and forgets those who haven’t,
    so why would anyone hide a truth to further their career? These things are found out,
    and lead to disgrace and can be career-ending. The few cases making the news, like the
    Korean cloning scandal, and cold fusion, are enough of a deterrent.
    (rant mode on) People seem to think scientists are inventing a crisis based on flimsy facts, like the
    recent record-breaking years, melting glaciers, ocean warming, etc. If they seem to
    understand why these things are happening, they are criticized. If they can’t come up
    with a reason this won’t continue into the future, they are criticized. Some say,
    OK, CO2 leads to warming, but cloud albedo feedback will come along to rescue us, so don’t call it a crisis.
    This looks like wishful thinking rather than science. There is not much that
    increases cloud albedo, other than putting aerosols in them, which nature is not going to do on its own.
    (rant mode off)
    Tom, in the UK the annual average temperature varies within about 1 degree C, but the
    daily prediction is often worse than that, so yes, it is easier to predict averages.
    I guess I am saying you don’t have to predict every temperature wiggle to predict the climate.
    Two climates can be statistically the same, even if they are different in detail, as can
    a model climate and the real climate. Climate modeling is about getting a statistically
    similar climate to the real world.
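
    A toy illustration of that last statement, using a synthetic AR(1) series as a stand-in for daily temperature anomalies (made-up persistence and noise values, no claim about the real UK record):

        # The spread of annual means is far smaller than the day-to-day spread,
        # which is the statistical sense in which averages are easier to predict.
        import numpy as np

        rng = np.random.default_rng(0)
        phi, sigma = 0.8, 3.0            # assumed day-to-day persistence and noise level
        n_years, n_days = 200, 365
        x = np.zeros(n_years * n_days)
        for i in range(1, x.size):
            x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)

        annual_means = x.reshape(n_years, n_days).mean(axis=1)
        print("std of daily values :", round(float(x.std()), 2))
        print("std of annual means :", round(float(annual_means.std()), 2))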

  309. Gerald Browning
    Posted Mar 29, 2007 at 10:54 PM | Permalink

    Jim Dudhia (#310),

    That is exactly what I expected you to do in the end. Lots of verbiage and no mathematics or convergence tests.

    If you were a scientist trying to understand what the continuum equations
    are really doing, you would either derive the appropriate mathematical details or, at the very least, run the tests I have indicated to check the behavior of the systems with convergent numerical solutions. But given that you are a modeler that has fudged the numerics with all types of dissipation just in order to obtain a result, the outcome is no surprise.
    You came to this thread mainly to hide what you have done to obtain such results. That is easily seen because every time I asked you a specific scientific question, you either changed the subject or did not provide a reasonable scientific answer. IMHO you should be ashamed of yourself.

    I have included a mathematical reference at the top of this thread and contour plots to show convergent numerical solutions that agree with the mathematics. You have not touched either result and I think that any logical person can determine who is willing to back up their statements with hard facts.

    Jerry

  310. Tom Vonk
    Posted Mar 30, 2007 at 3:39 AM | Permalink

    Two climates can be statistically the same, even if they are different in detail, as can a model climate and the real climate. Climate modeling is about getting a statistically similar climate to the real world.

    OK, Jim, so I guess I have to admit that you have the right to ignore chaotic dynamics and to have faith in things that have been proven wrong.
    I guess it is also your right to refuse to read any paper showing that numerical models are unable to correctly predict the dynamics of such systems, and this thread does not lack such references.
    You can even believe that only "statistics" (whatever you put into that word) matter, and that everything that is not an average is random and doesn't matter, despite results that prove otherwise.
    It remains fair to ask what you want to achieve in a discussion dedicated to the issues addressed here.
    You don't answer any argument or reference given here; you basically either avoid the discussion or make unproven assumptions like "the climate jumps between 2 attractors defined only by one arbitrary parameter among millions".
    Never mind that talking about similarity in a space whose topology you do not know is at best wild guessing.
    Not a very scientific attitude, if you ask me.

  311. MarkW
    Posted Mar 30, 2007 at 5:21 AM | Permalink

    JimD,

    You write:
    “CO2 leads to warming, but cloud albedo feedback will come along to rescue us, so don’t call it a crisis.
    This looks like wishful thinking rather than science.”

    How does this differ from the modelers declaring, a priori, that relative humidity will stay constant throughout the atmosphere as temperatures rise? There is no evidence to support such a claim, and increasing evidence to doubt it. Yet it is the core around which the catastrophic temperature rise predictions are built.

    BTW, it is usually considered bad form to mischaracterize someone else's argument. Nobody said that the albedo of individual clouds would get greater; the claim is that there will be more clouds.
    If you are going to try to ridicule someone else's argument, it helps to get the other person's argument right. Otherwise it is you who ends up looking the fool.

  312. MarkW
    Posted Mar 30, 2007 at 5:24 AM | Permalink

    Two climates might look statistically the same, right now, even though their details are different. That’s true.
    But one of the things about systems that are different is that they respond to changes differently.

    And how climates respond to changes is what the whole AGW debate is about.

  313. Jim D
    Posted Mar 30, 2007 at 8:27 PM | Permalink

    MarkW,
    I am actually not sure the cloud albedo fans all say the area will increase either.
    I checked Lindzen’s “iris” paper, and part of his argument is that upper clouds will decrease
    in area letting more IR out, but I confuse all those arguments, and would like to see a mechanism
    for increased negative cloud feedback that can be tried out in a model. Lindzen proposed a GCM test of his
    idea in 2001, but no results have been forthcoming.

    MarkW and Tom,
    This is where we disagree. You think of climate as an initial value problem, i.e. the final
    state depends critically on the initial state (like the weather), while I believe the climate
    state is somewhat independent of the initial state details after a few years of integration, being
    determined mostly by the forcing terms.

    Jerry,
    I came to this thread to sort out hydrostatic versus nonhydrostatic dynamics, which I know
    something about. I also was able to interpret your #186 plots as a failure of hydrostatic
    dynamics. These seem to have nothing to do with dissipation because features are well resolved.
    If you go the other direction and double your grid sizes, you should see dissipation effects.
    Similarly, I don't see a problem with the Lu et al. results, as I said. They had an unstable situation, probably due to low Richardson number, where finer scales led to resolving turbulence that otherwise was not resolved.

  314. Gerald Browning
    Posted Mar 30, 2007 at 9:45 PM | Permalink

    Jim Dudhia (#315),

    1. It appears everything has to be tried out in a climate model. But both climate and weather models have been shown to have O(1) errors. What kind of scientific response is that?

    2. Is the term "I believe" a scientific statement? Given that the forcings in climate models are unphysical and tuned to overcome the large dissipation inherent in them, how can the forcings give any reasonable prediction in a climate model, let alone of the real atmosphere?

    3. Both of my hydrostatic and nonhydrostatic models have no dissipation, so there is nothing to hide the cascade of enstrophy in the convergent numerical approximations. The hydrostatic model is behaving exactly as the mathematics predicts, i.e. it is ill posed and the growth becomes larger and larger as the number of resolved waves increases (as opposed to the nonhydrostatic model, which shows the same solution for both resolutions, as expected). I have not shown additional plots of longer runs of the inviscid, nonhydrostatic model. That is to come and will complete the agreement with the mathematics.

    I assume that the WRF model is available to the public as it was developed using public funds. I will obtain a copy and make the appropriate runs for comparison with those from the above plots since it appears to be too much of an effort for you to do.

    Jerry

  315. fFreddy
    Posted Mar 31, 2007 at 3:43 AM | Permalink

    Re #315, Jim D

    You think of climate as an initial value problem, i.e. the final state depends critically on the initial state (like the weather), while I believe the climate state is somewhat independent of the initial state details after a few years of integration, being determined mostly by the forcing terms.

    So why waste time, money and super-computers on great big complicated models with lots of grid cells ? Why not make much simpler models that only focus on the forcings, and concentrate your efforts on getting them right ?

  316. Jim D
    Posted Mar 31, 2007 at 10:30 AM | Permalink

    #316

    Jerry, WRF is a free model for anyone to download and play with. It has idealized
    cases set up to use. I expect it to give similar results to yours, so it might be
    a sidetrack to the issue, which was to discuss your results. What was it you didn’t
    like about your nonhydrostatic result? Maybe I can help understand that issue.

    2. Is the term "I believe" a scientific statement? Given that the forcings in climate models are unphysical and tuned to overcome the large dissipation inherent in them, how can the forcings give any reasonable prediction in a climate model, let alone of the real atmosphere?

    Climate runs are distinguished by the forcing. They do many different initial conditions
    for each forcing, and the results cluster by the forcing. I state this as something that
    happens, and you can draw your own conclusions, but when I say “I believe”, it refers to
    my conclusions.

    #317
    There is a whole range of models, including simple ones. The more complex ones can
    include more physical processes including feedback mechanisms, and are a way of checking that they are not missing any
    important ones.

    While I am here, I want to say more about cloud feedback. If the cloud albedo is to
    increase due to cloud area increasing, I would like to know a mechanism that causes an
    increase in global cloud area (or a paper on this issue). The only thing I could think
    of was to increase cloud lifetimes. The way to do that is to reduce droplet or ice crystal
    sizes so that the clouds don’t precipitate out so fast and the way to do that is to increase
    the number of condensation or ice nuclei, i.e. aerosols, so that is why I mentioned aerosols earlier.
    Smaller droplets also directly increase the albedo, as well as the lifetime. Aerosols are an uncertain part of the forcing in climate models because they need to account for future clean-air policies in various countries, which is an anthropogenic factor, but there are also natural aerosols due to volcanoes, blowing sand, etc.
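
    For what it is worth, a toy version of the "smaller droplets also directly increase the albedo" point, using the simple non-absorbing two-stream albedo approximation R = (1-g)*tau / (2 + (1-g)*tau) with tau = 3*LWP / (2*rho_w*r_e); all numbers are illustrative only:

        # Same liquid water path, smaller droplets -> larger optical depth -> higher albedo.
        rho_w = 1.0e6      # g/m^3, density of liquid water
        g = 0.85           # asymmetry parameter typical of cloud droplets (assumed)
        lwp = 100.0        # g/m^2 liquid water path (assumed)

        for r_e_um in (10.0, 5.0):                               # droplet effective radius in microns
            tau = 3.0 * lwp / (2.0 * rho_w * r_e_um * 1.0e-6)    # cloud optical depth
            albedo = (1.0 - g) * tau / (2.0 + (1.0 - g) * tau)
            print(f"r_e = {r_e_um:4.1f} um -> optical depth {tau:5.1f}, albedo {albedo:.2f}")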

  317. Gerald Browning
    Posted Mar 31, 2007 at 6:06 PM | Permalink

    Jim Dudhia (#318),

    I have already begun to read the WRF documentation and will make the necessary runs so that readers of this blog can see how the WRF model behaves relative to the contour plots above, and what impact the multiple forms of dissipation (in the numerical method, in the explicit dissipation terms, at the top and lateral boundaries, etc.) have on a continuum solution.

    Then you can use all of the verbiage you would like to explain why the accuracy is what it is compared to a continuum solution.

    Jerry

  318. Dave Dardinger
    Posted Mar 31, 2007 at 10:33 PM | Permalink

    re: #218

    If the cloud albedo is to increase due to cloud area increasing, I would like to know a mechanism that causes an increase in global cloud area

    I thought the whole point of the enhanced greenhouse effect was that increased CO2 caused warming which increased water vapor in the atmosphere. Well, if we have increased water vapor across the board then some areas which normally won’t have enough vapor to cause serious cloudiness to occur will then have enough and this will be the mechanism you’ve wanted. I will accept that in many cases increased water vapor will merely make the clouds rain out quicker, but in that case there is not as much extra vapor in the air as the enhanced effect would predict, so a negative feedback wouldn’t be needed anyway.

  319. gb
    Posted Apr 1, 2007 at 3:10 AM | Permalink

    Re # 319.

    I do not really understand your ideas about dissipation, continuum solutions and why you look at contour plots. Consider the more general case of turbulence governed by the Navier-Stokes equations. What should the resolution be according to you in a fully resolved simulation? Then assume we do not resolve all scales but we have a model for the subgrid scales, i.e. we add an extra dissipation term. How do we validate that the dissipation model is correct? What kind of tests should we perform?

    When we consider atmospheric models I do not understand how you can say that the models are wrong by comparing two model outputs. Since it is unknown what the dynamics is on mesoscales (does the kinetic energy go from large scales to small scales or from small to large scales? Are waves of importance or are there vortices?) the only way to validate a model is to compare model outputs with real observations.

  320. Gerald Browning
    Posted Apr 1, 2007 at 12:21 PM | Permalink

    gb (#321),

    You continue to answer replies I have made to Jimy Dudhia. Is there some connection?

    After you have read and understood the minimal scale estimates by Henshaw, Kreiss, and Reyna for the incompressible NS equations, please let me know.

    There are many ways to verify models. Don’t forget that global weather and climate models deviate from the observations in a matter of hours without updating as has been discussed on this thread and others. Therefore the use of observations to verify one of these models is not a valid argument.

    You are just repeating the same discussions again and I will better spend my time on the WRF documentation for reasons that will become clear.

    Jerry

  321. Willis Eschenbach
    Posted Apr 1, 2007 at 2:45 PM | Permalink

    Jim D, thanks for your response. You say:

    Willis, it seems to be a sad impression you have of the scientific community. Everyone
    esteems scientists that have been proven right by history, and forgets those who haven’t,
    so why would anyone hide a truth to further their career? These things are found out,
    and lead to disgrace and can be career-ending. The few cases making the news, like the
    Korean cloning scandal, and cold fusion, are enough of a deterrent.
    (rant mode on) People seem to think scientists are inventing a crisis based on flimsy facts, like the
    recent record-breaking years, melting glaciers, ocean warming, etc. If they seem to
    understand why these things are happening, they are criticized. If they can’t come up
    with a reason this won’t continue into the future, they are criticized. Some say,
    OK, CO2 leads to warming, but cloud albedo feedback will come along to rescue us, so don’t call it a crisis.
    This looks like wishful thinking rather than science. There is not much that
    increases cloud albedo, other than putting aerosols in them, which nature is not going to do on its own.
    (rant mode off)

    You said that scientists were “motivated by truth”. I said that the scientific community is motivated by all of the same emotions and drives as the rest of humanity … I hardly think this is a “sad impression” of the scientific community, but I do think this is an accurate one.

    Next, I think you misrepresent people’s position regarding the observational data. There is general agreement these days that the world has been warming since the 1700s. Now, in a period of general warming, there are some things that we would expect to see, things that would be a surprise to us only if we didn’t see them. These include:

    1) Record-breaking years, probably occurring in clumps.

    2) Melting glaciers.

    3) Warming oceans.

    You seem to think that these things taken together constitute a “crisis”, that they are somehow unusual … but in fact, in a three century warming period, it would be unusual if we did not observe those things. There is no evidence that the recent end of this warming is any different than the earlier periods, either in the size of the trend, the total temperature change, or the length of the warming. Before you can say that there is a “crisis”, you need to show that the recent climate is unusual or anomalous in some way. What is your evidence for that?

    This is science, not wishful thinking. What is wishful thinking is your claim that “there is not much that increases cloud albedo”; the simplest thing that increases cloud albedo is more clouds. For example, a recent NASA study revealed that as the Arctic snow and ice have decreased, the total albedo hasn’t decreased (as is predicted in every one of the models) but has barely changed.

    Why not? Because the cloud albedo has increased over the same period, due to the increased water content of the air. What is wishful thinking is the idea that the cloud albedo is constant. As a simple proof that it is not constant, it varies annually from summer to winter, and in addition there are a wide variety of studies showing longer-term changes in cloud albedo … how do you think that happens?

    w.

  322. Tom Vonk
    Posted Apr 2, 2007 at 4:36 AM | Permalink

    MarkW and Tom,
    This is where we disagree. You think of climate as an initial value problem, i.e. the final state depends critically on the initial state (like the weather), while I believe the climate state is somewhat independent of the initial state details after a few years of integration, being determined mostly by the forcing terms.

    Jim, this one is easy.
    The system we are talking about is described by a system of PDEs.
    Never mind that it is incomplete; let us assume that it is complete.
    Another assumption: there is a continuous solution to the system of PDEs.
    Then this solution depends on the initial conditions.
    I think everybody agrees, because that is basic.

    Now you observe that you can’t find this solution numerically or otherwise, for millions of reasons, chaos and divergence being the main ones.
    So you conclude that instead of going after THE unreachable solution, you will go for the “statistical” properties of the solution.
    In other words, you will try to find suitable functions (e.g. averages) that do not exhibit a strong dependence on initial conditions.

    To make the point I will use an analogy I know well: linear optimisation.
    You have a complicated system with known constraints and you want to know its final state after a time T, optimising for the value of one parameter.
    All your unknowns are time averages over T; you find the solution numerically and can do all kinds of statistics by varying the constraints and/or initial conditions.
    So now you have State(0), State(T) and the associated statistics.
    Does that say anything about the dynamics, about what really happens between 0 and T?
    Nothing at all, because there is an infinity of paths leading from State(0) to State(T).
    Now the 1 $ question: “Am I sure that all those paths lead to State(T)?”
    And the not so surprising answer is “I do not know,” because the question doesn’t make sense for a system where only time averages over T exist.

    Back to climate modelling.
    By choosing constraints in the form of conservation laws and applying them to averages, we get a State(T) that may depend weakly and simply on State(0).
    And we also have an infinity of dynamical paths leading from State(0) to State(T).
    The answer to the 1 $ question is also the same.
    But here we DO know that the system is highly non-linear and chaotic, so we know that there is also an infinity of States(T) that respect the constraints and can be distinguished only if I know the underlying dynamics.
    Yet that is exactly what we do not know.
    QED.
    The day there would be a demonstration (computer calculations do not qualify) that:
    a) either all dynamical paths converge to the SAME State(T),
    b) or there is a statistical law governing the States(T), independent of the dynamical paths,
    would be the day the problem would be solved.
    Until this very improbable demonstration, the evidence rather suggests that dynamics and initial conditions do matter.

  323. Jim D
    Posted Apr 2, 2007 at 10:17 AM | Permalink

    A few responses
    #320 Dave D,
    Generally it is considered that relative humidity is what is preserved as warming progresses.
    This is why vapor increases, but clouds respond to and interact with the RH, and don’t increase.
    Before people complain that this is built into the models: it isn’t. This is what is found from
    the models, and it agrees with the scientific knowledge of cloud formation processes.

    Jerry, I have many of the same questions as gb that I don’t think you have answered yet.
    One way models like WRF have been tested against observations regarding energy is
    by looking at the energy-wavelength spectrum, where the -3 exponent at large scales turns into
    the -5/3 exponent at mesoscales as is observed by aircraft data.

    Willis, the acceleration of the records is evident because prior to the 90’s we
    hadn’t had such a series of warmest-ever years. As I said, scientists understand
    why this is happening, and it would be more surprising if it didn’t happen given what is known.
    On the cloud/sea-ice issue, as we know, sea-ice melting is a positive feedback to global
    warming, so if cloud cover is increasing, it leads to a negative feedback on that
    positive feedback, but doesn’t help much with the underlying global warming. If you
    want a negative cloud feedback for global warming itself, you need to look for something
    that increases cloud cover in warmer regions.

    Tom, the models do resolve this question, but you choose not to believe them. The purpose
    of the ever-increasing complexity in models, such as including vegetation prediction sub-models
    and ocean and sea-ice coupling, is to look for responses that cannot be represented in simpler
    models, but none have yet been found that affect the conclusion that the forcing determines the
    climate.

  324. Dave Dardinger
    Posted Apr 2, 2007 at 11:59 AM | Permalink

    re: #325

    Generally it is considered that relative humidity is what is preserved as warming progresses.

    At every height?? I think you’re confusing surface, over-all or column RH with RH at each point. But look at it this way. Say we have two adjacent areas, one with a surface temperature of 18 deg C and the other 19 deg C. Assume the temperature rises one deg C in each. Then I assume you’ll agree that each has additional water vapor in the column. Convection cells are always forming over warm surfaces and they will cause the surface air to rise. When they reach an altitude where the relative humidity = 100% they will start to form clouds. Since we’re assuming more absolute humidity (i.e. constant relative humidity but warmer air), the clouds will form at a lower altitude*, which is known to result in either a negative feedback or less of a positive feedback. Now the formerly 18 deg C parcel behaves, in some sense, like the formerly 19 deg C parcel did, and the formerly 19 deg C parcel like a 20 deg C parcel, so the cloud cover will now have less tendency to give a positive feedback and more to give a negative one.

    * I say this because the increase in humidity is greater than the change in temperature, i.e. the saturation vapor pressure increases faster than linearly with temperature. So even if we assume a constant decrease of temperature with height, the rising air will reach the dew point at a lower altitude than before. In my particular example the values for water vapor pressure over water are 15.477, 16.477 & 17.535 mm Hg for 18, 19 & 20 deg C, giving differences of 1.000 and 1.058 respectively.
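    For anyone who wants to check the numbers in the footnote, here is a minimal sketch (my own construction, not from the comment) that reproduces them with the Magnus approximation for saturation vapor pressure; the formula and its constants are my choice, while the quoted values presumably come from steam tables.

    import math

    def saturation_vapor_pressure_mmHg(temp_c):
        """Magnus approximation for saturation vapor pressure over water, in mm Hg."""
        es_hpa = 6.1094 * math.exp(17.625 * temp_c / (temp_c + 243.04))
        return es_hpa * 0.750062  # 1 hPa = 0.750062 mm Hg

    es = {t: saturation_vapor_pressure_mmHg(t) for t in (18, 19, 20)}
    print({t: round(v, 3) for t, v in es.items()})
    # close to the quoted 15.477, 16.477 and 17.535 mm Hg (within a few hundredths)
    print(round(es[19] - es[18], 3), round(es[20] - es[19], 3))
    # differences of about 1.00 and 1.06 mm Hg, matching the 1.000 and 1.058 above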

  325. Gerald Browning
    Posted Apr 2, 2007 at 3:59 PM | Permalink

    Jim Dudhia (#325),

    A climate model or any time dependent continuum system can provide any spectrum or solution that you want by appropriately choosing (tuning) the forcing as shown by my earlier simple mathematical example.
    That does not necessarily tell you anything about the accuracy of the numerical approximation of the continuum system or the accuracy of the parameterizations. In fact, because a climate model cannot resolve any of the main features (resolution is still not less than 100 km and they use the ill posed hydrostatic system), the forcings are necessarily unphysical if the model produces a spectrum similar to reality. I also noticed that the CCSM uses a sponge layer at its top and a number of other questionable dissipation mechanisms. What quantitative error checks have there been on these?

    More of the same verbiage without any mathematical proof. You might buy a pair of waders.

    The number of aircraft measurements is not sufficient to determine the spectrum of the global atmosphere because the routes are not dense over the entire globe and tend to be at certain elevations. You might want to read Roger Daley’s book.

    BTW I have begun to read the WRF documentation and the number of different dissipation mechanisms is a riot (including one on the vertical velocity). From what I can tell there is not a single test showing the impact of these methods on the numerical accuracy of the approximation of the continuum system when all of the dissipative mechanisms and discontinuous forcings are active. The very fact that many different parameterizations of the same processes have been included tells us that these processes are not well understood as has been discussed many times before on this blog with references provided.

    If I am able to compile the WRF model, many of these issues will become moot.

    Jerry
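    As a concrete illustration of the forcing-tuning point (a sketch of my own, not the example in #43): for any target solution u*(t), choosing the forcing f(t) = du*/dt - N(u*) makes u* an exact solution of du/dt = N(u) + f no matter what N is, so reproducing a desired solution, and hence its spectrum, says nothing by itself about whether N represents the right physics. The model term N(u) = -u and the target sin(t) below are arbitrary stand-ins.

    import math

    def n(u):
        return -u                          # arbitrary stand-in for the "model physics"

    def target(t):
        return math.sin(t)                 # the solution we want the model to reproduce

    def forcing(t):
        return math.cos(t) - n(target(t))  # f = du*/dt - N(u*)

    dt, u, t = 1e-3, target(0.0), 0.0
    while t < 10.0:                        # forward Euler on du/dt = N(u) + f
        u += dt * (n(u) + forcing(t))
        t += dt
    print(round(u, 3), round(target(t), 3), round(abs(u - target(t)), 4))
    # the forced model tracks the prescribed target to within the O(dt) Euler error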

  326. Tom Vonk
    Posted Apr 3, 2007 at 1:23 AM | Permalink

    Tom, the models do resolve this question, but you choose not to believe them. The purpose of the ever-increasing complexity in models, such as including vegetation prediction sub-models and ocean and sea-ice coupling, is to look for responses that cannot be represented in simpler models, but none have yet been found that affect the conclusion that the forcing determines the climate.

    Would you mind elaborating on your general statement?
    1) What question exactly do the models resolve?
    2) Do the models give a proof of the unicity of the final state (some references, perhaps)?
    3) Do the models give a proof that the solution is independent of the time and space steps?
    4) Do the models give a proof that the solution is independent of the time over which an average is taken?
    5) Do the models give a proof of uniform convergence of the runs to the exact solution?

    Unless you can give some answers to the above, it is all only words and faith.
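    A minimal sketch (my own construction, not taken from any model discussed here) of the kind of quantitative check questions 3) and 5) are asking for: a linear advection equation with a known exact solution, integrated with first-order upwind differencing at several resolutions. If the scheme is converging, the error should shrink at its formal order as the grid and time step are refined together; a result that depended on dx or dt in an uncontrolled way would not show this behavior.

    import math

    def advect(n, t_end=1.0, cfl=0.5):
        """Solve u_t + u_x = 0 on a periodic unit interval with first-order upwind."""
        dx = 1.0 / n
        u = [math.sin(2 * math.pi * i * dx) for i in range(n)]      # initial condition
        for _ in range(int(round(t_end / (cfl * dx)))):
            u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(n)]  # upwind update
        # exact solution is the initial profile translated by t_end
        return max(abs(u[i] - math.sin(2 * math.pi * (i * dx - t_end))) for i in range(n))

    errors = [advect(n) for n in (50, 100, 200, 400)]
    print([round(e, 4) for e in errors])
    print([round(errors[k] / errors[k + 1], 2) for k in range(3)])
    # error ratios close to 2 under grid doubling confirm first-order convergence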

  327. Jim D
    Posted Apr 3, 2007 at 10:25 PM | Permalink

    A variety of things to respond to in #326-#328
    Dave D, the jump in your argument that I don’t follow is that a lower cloud base
    leads to more negative feedback, because for low clouds the primary radiative
    effect is cutting off surface shortwave due to their areal coverage, and I don’t think your
    argument leads to any change in that respect.

    Jerry, the w damping mechanism you mention is an option for operational forecasters who
    don’t want to be limited by the time step and vertical motion. It is normally off for
    scientific studies, and actually has very little effect on results.
    I would ask how you explain that WRF has been used for LES studies and compares
    as an equal to LES state-of-the-art models, if it is too dissipative. WRF’s numerical
    methods minimize the dissipation.
    I have also shown mathematically that you can’t call the hydrostatic option ill-posed
    when it is used at the right scales (climate model scales), as it is then a very
    good approximation to the full gravity wave frequency equation.

    Tom, sorry only words from me again on this one. The question you asked

    The day there would be a demonstration (computer calculations do not qualify) that:
    a) either all dynamical paths converge to the SAME State(T),
    b) or there is a statistical law governing the States(T), independent of the dynamical paths,
    would be the day the problem would be solved.

    I say that (b) is what is indicated by models. The future climate states (T) can be reached
    by many paths, each starting from a different initial state (0). 2) As far as unicity goes,
    that is reflected in the confidence in the warming for a given CO2 scenario. If the
    final states of their runs from different States(0) were wildly different, they would
    have no confidence in any State(T). 3) Different resolutions affect the results, as does
    using different models, but the consensus is not affected. 4) There are standard times of
    averaging, such as 3-month seasons. Obviously 3-month averages will differ from 6-month
    averages, so I don’t understand this question. 5) There is no exact knowable solution,
    except to measure current conditions, and no, not even weather models converge to the
    exact measured state, because that would require modeling every atom/butterfly, etc.
    Models can converge to exact mathematical solutions using idealized test conditions
    if that is what you mean. These mathematical solutions involve dry continuous inviscid
    flows with small (linear) perturbations. Make it any more complicated, with physics
    or nonlinearity, and the mathematical solution becomes impossible, making models the
    only hope for the real atmosphere.

  328. Tom Vonk
    Posted Apr 4, 2007 at 3:52 AM | Permalink

    Models can converge to exact mathematical solutions using idealized test conditions if that is what you mean. These mathematical solutions involve dry continuous inviscid flows with small (linear) perturbations. Make it any more complicated, with physics or nonlinearity, and the mathematical solution becomes impossible, making models the
    only hope for the real atmosphere.

    I guess that says it all and makes my point.
    We indeed completely agree here.
    So you are basically also agreeing that we do not know the exact solution for the system's behavior, that we do not know whether it is unique, and we know that it is impossible to achieve this knowledge.
    Of course we could do all of the above IF the system were linear, but it isn’t.
    Therefore the next logical step is to state the obvious: since I do not know the exact solution, I cannot talk about CONVERGENCE of numerical simulations, because I do not know WHAT they should converge to.
    Let me say it again: the stability of a numerical run is NOT, and has never been, a proof of convergence, and about every paper I know of in the domain of climate modelling talks about stability. Stability is just the first step above garbage, but stability alone can produce any amount of numerical artefacts that have no predictive value.

    The rest of your opinion is circular.
    If I assume first that the averages matter and the “details” do not, then obviously any model correctly built on that assumption will exhibit a behavior where the averages matter and the “details” don’t.
    I do not contest that it is consistent; what I contest is that it is relevant for predictions.
    Any wrong assumption correctly modelled will lead to consistent behavior, which doesn’t change the fact that it is wrong.
    That this assumption is wrong is an already known result: the averaged N-S equations have EXACTLY the same chaos in them as the non-averaged ones.

    What remains is the illusion created by the behavior of running averages, well known to stock-exchange chartists.
    While daily values exhibit chaotic behavior, a running average is much smoother.
    Indeed, while I am unable to predict tomorrow’s value, I can predict tomorrow’s 60-day running average with reasonable accuracy, because the one unknown day carries only one 60th of the weight of the series while the other 59 values are known.
    Yet if I try to predict the running average 60 days out, the chaos is back and the “prediction” is worthless.

    The aggravating circumstance for the climate, compared to the stock exchange, is that there is NOTHING statistical in it; trying to substitute statistics for the solutions of deterministic PDEs is bound to fail because it is physically wrong.
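    A minimal sketch (my own construction, with the logistic map standing in for any chaotic series) of the running-average point above: forecasting the next value of a 60-sample running mean, with 59 values known and a naive guess for the single unknown one, is accurate to roughly 1/60 of the series' spread, while forecasting the running mean 60 steps ahead the same naive way is as uncertain as the raw series.

    window = 60

    def logistic_series(x0, n, r=3.99):
        xs, x = [], x0
        for _ in range(n):
            x = r * x * (1.0 - x)          # chaotic logistic map
            xs.append(x)
        return xs

    xs = logistic_series(0.2, 500)

    def mean(vals):
        return sum(vals) / len(vals)

    err_next, err_far = [], []
    for t in range(200, 400):
        known = xs[t - window + 1:t + 1]   # the last 60 observed values
        guess = known[-1]                  # naive guess for every unknown day
        # tomorrow's running mean: 59 known values plus 1 guessed value
        err_next.append(abs(mean(known[1:] + [guess]) - mean(xs[t - window + 2:t + 2])))
        # the running mean 60 steps ahead: all 60 values unknown, all guessed
        err_far.append(abs(guess - mean(xs[t + 1:t + window + 1])))
    print(round(max(err_next), 4), round(max(err_far), 4))
    # the one-step-ahead error is bounded by (series spread)/60; the 60-step-ahead
    # error is of the order of the spread of the series itself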

  329. Dave Dardinger
    Posted Apr 4, 2007 at 10:34 AM | Permalink

    re: #329 Jim,

    for low clouds the primary radiative effect is cutting off surface shortwave due to their areal coverage

    No, for low clouds the primary radiative effect is to reflect solar radiation away. The secondary effect is indeed to block IR but this is almost by definition less than the incoming radiation blocked.

    High clouds are normally thinner and thus allow some to most of the solar radiation in, and they can then block IR from escaping (i.e. they’re more opaque in the IR). This makes them more of a positive feedback. But the low clouds are a negative feedback because they’re denser and thus little solar radiation gets through. That’s why if you see “dark” clouds you know rain is possible.

    Now, there are special circumstances to be considered, such as that if clouds don’t form during the day but form at night this could produce a positive feedback even with low clouds. It doesn’t sound to me like this is taken into consideration by models, but I could be wrong.

  330. Gerald Browning
    Posted Apr 4, 2007 at 7:36 PM | Permalink

    Jimy Dudhia (# 395),

    I don’t recall mentioning anything about LES simulations. Because you apparently are a master at changing the subject, from now on I will ask you specific questions that have one word answers.

    1. Did you claim that the leapfrog numerical method is unstable? Yes or no.

    2. Is the leapfrog numerical method stable? Yes or no.

    3. Did a correctly implemented semi-implicit numerical method produce the same solution as the leapfrog numerical method for a mesoscale case? Yes or no.

    4. Has it been shown that the vertical component of velocity (w) is proportional to the total heating for a developing midlatitude mesoscale storm? Yes or no.

    5. How many different parameterizations of cloud processes are there in the WRF code including the multiple ones in the Grell scheme? Number of schemes.

    6. Are there discontinuities in any of these schemes? Yes or no.

    7. Do these different schemes produce different latent heating profiles and thus different vertical components of the velocity (w)? Yes or no.

    8. Are different WRF boundary condition options used for LES and limited area weather prediction models? Yes or no.

    9. Did you read the mathematical example that shows that the forcing can be tuned to provide any desired solution ( and hence spectrum) from any well posed system ? Yes or no.

    Let us see if you are willing to answer very specific questions with one word answers.

    Jerry

  331. Jim D
    Posted Apr 4, 2007 at 9:43 PM | Permalink

    Jerry, #332, LES is one area of modeling that tests model dissipation the most,
    and I had the impression that this thread was about dissipation in models. At LES
    scales we have a good idea what the sub-grid scale is supposed to do, and so
    there are some ways of demonstrating these results without physics questions
    coming into the dynamics. Now to your quiz.
    1. I probably said it was unstable for the reason given in 2.
    2. Leapfrog is neutral but very noisy and unusable unless time-filtered.
    3. Probably no. Two different numerical methods can never give the same solution.
    I actually don’t know what this question means. Semi-implicit methods are used for
    sound wave sub-steps, and combined with leapfrog methods in MM5 for example.
    4. WRF has 3 cumulus schemes and 7 microphysics schemes, but Grell-Devenyi
    has up to 144 ensemble members.
    5. No, I can’t imagine why this would be true.
    6. Yes, cumulus schemes have on/off time discontinuities, and microphysics
    deals with cloud edges.
    7. Yes, they produce different latent heating profiles and results, otherwise why have
    different schemes?
    8. Currently LES is not driven by real-world boundaries, being idealized and periodic, so yes.
    9. No, what mathematical example?

  332. Jim D
    Posted Apr 4, 2007 at 9:56 PM | Permalink

    #331, Dave, probably semantics, because the way clouds cut off shortwave radiation from
    reaching the surface is by reflecting it away. Maybe I left open the possibility that
    clouds absorb some shortwave too, but that is, I would agree, a small part.
    High clouds tend to be more of a wash regarding feedback, because the positive longwave
    cloud forcing and negative shortwave forcing tend to oppose each other, and either can be larger,
    possibly depending on cloud opacity.
    Models, for sure, have a diurnal cycle, so this effect is taken into account, if that
    is your question. The diurnal timing of clouds is sometimes off in models, because
    that relies on the parameterization used, and none are perfect globally.

  333. Jim D
    Posted Apr 4, 2007 at 10:17 PM | Permalink

    #330, Tom, this is getting too philosophical for me.
    I would say that you chose a good example with the stock market,
    because that rises on the long term but is unpredictable on the short term.
    Similarly for climate there are good reasons for future temperatures to be
    warmer.
    Of course we don’t know the exact solution of the climate system behavior,
    but does that mean we give up? No, we look for approximate solutions, which
    is what models are. They give an approximate solution to system behavior,
    including the level of chaos (however you measure that). Yes, the models
    could still be missing something pertinent to the climate in the next
    hundred years, which is why no one attaches 100% certainty to climate
    predictions, but the low level of uncertainty these days reflects the degree to which
    the modelers believe they have all the major processes accounted for.

  334. Tom Vonk
    Posted Apr 5, 2007 at 2:11 AM | Permalink

    Jim, philosophy may be an interesting discipline too, but my primary issue is the behavior of non-linear systems completely described by a set of ODEs or PDEs.
    There are a lot of established results there, and it is science, not philosophy.
    If I post here it is mainly because I find that the climate modellers don’t bother at all with basic mathematical problems like unicity, convergence, continuity.
    It is similar to Steve’s approach, which shows that people use statistics for tree rings without bothering to check the validity of the assumptions they make.
    If I am facing a problem that is deemed to obey deterministic physical laws describing its dynamics by a set of PDEs, it is legitimate to look at what mathematics has to say about such systems, like e.g. the much simpler Lorenz system.
    And I find it mind-boggling to see how freely words like “approximate solutions” or “uncertainty” are used when nobody has proven what this solution “approximates” or what the measure of the “uncertainty” is.

    I am not so much saying that the models are “missing” some relevant physical processes, as that would only be speculation; I am saying that the models can’t know what is important and what is not, because everything is.
    There is a HUGE difference between a list of physical processes and the global dynamics when all of them interact.
    Should we abandon numerical “climate” models?
    Not really, because even if they are probably just a collection of numerical chaos with some statistical properties, they may lead to discoveries of unknown processes (like e.g. the Svensmark theory) or trigger mathematical proofs in the field of deterministic chaos that would otherwise stay unexplored.
    However, they should not be oversold for what they are not: they are not a solid long-term predictive tool for the dynamical parameters of the climate system, exactly as chartist methods are not a predictive tool for the evolution of a stock exchange.
    What would you think about computer models if the climate goes into a stable or cooling regime in 10 or 30 years, as the paper I linked here suggests?

  335. Gerald Browning
    Posted Apr 5, 2007 at 2:19 PM | Permalink

    Jimy Dudhia (#333),

    The heading for this thread is exponential growth in physical systems.
    The specific points are that climate or weather models based on the hydrostatic system are ill posed for the IVP near a jet, and that problem will start to show as the resolution of hydrostatic models approaches 10 km. And models based on the nonhydrostatic system have very fast exponential growth near a jet that will destroy the accuracy of any numerical approximation of the nonhydrostatic system in that neighborhood. I don’t think there is anything mentioned about LES in the lead-in to this thread.

    Now as to the specific points that you almost answered with one word as asked.

    1. The leapfrog scheme is stable.

    2. In test after test starting from balanced initial conditions, the unphysical component of the finite difference solution is not excited for long periods of time without any time filter. It can be excited by unbalanced initial conditions or discontinuous forcing. In the latter
    instance I suggest you read the manuscript entitled “The impact of rough forcing on systems with multiple time scales” by Heinz Kreiss and me that appeared in JAS. In the cases I ran above, I also used the Runge-Kutta method with similar results just to ensure that the results were not dependent on the time integration method.

    3. Read the manuscript by Steve Thomas and me that appeared in MWR.
    I had Steve implement the semi-implicit numerical method the way that method was first introduced by Heinz Kreiss (a world-renowned PDE expert
    and numerical analyst). The contour plots from the two methods when using the same time step were indistinguishable, as they should be if the numerical methods are converging to the solution of the continuum system.

    I believe you switched the answers to 4 and 5.

    4. The balanced conditions for midlatitude mesoscale storms are now well understood. Read the two manuscripts that appear on this blog under the ITCZ thread, one by Heinz and me in JAS and the other by Christian Page et al. in MWR. Christian implemented the balance in a mesoscale weather model with considerable success.

    5. The number of different parameterizations is an indication of the uncertainty in the physics (forcings). As different ones lead to different
    (latent) heating profiles, they also will lead to different solutions
    for the same dynamics.

    6. The discontinuities destroy the accuracy of the numerical method
    and lead to considerable roughness in the numerical solution (see manuscript cited in 2.)

    7. As stated above the physics is very poorly understood or else there would be no need for multiple schemes.

    8. Periodic lateral boundaries for the idealized LES case are very different from all of the smoothing incorporated for a forecasting case. The very fact that you didn’t point this out tells me loads about your intentions.
    But since you brought this up, why not run the cases in the Math. Comp. article by Heinz and me or in the SIAM journal on Multiscale Modeling and Simulation by Henshaw et al.

    9. The simple mathematical argument appears in this thread (#43).

    It appears to me that you are not reading the literature in your own area and the manuscripts cited repeatedly on this blog (e.g. at the beginning of this thread and the ITCZ thread ) or are choosing to ignore the manuscripts in the literature that point out the problems with
    the incorrect application of numerical methods to PDE’s. It is time that you do so.

    Jerry
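    For readers following the disagreement in points 1 and 2 above, here is a minimal sketch (my own construction, not WRF, MM5 or anything from either commenter) of the behavior both are describing: leapfrog applied to the oscillation equation du/dt = i*omega*u is neutrally stable; its computational (2*dt) mode stays at the truncation-error level when the first step is consistent with the continuum solution, a crude first step excites it, and a Robert-Asselin time filter removes it at the cost of damping the physical amplitude as well.

    import cmath

    omega, dt, nsteps = 1.0, 0.1, 2000

    def leapfrog(exact_first_step, gamma=0.0):
        u_prev = 1.0 + 0.0j                                        # u at t = 0
        u_curr = cmath.exp(1j * omega * dt) if exact_first_step else u_prev
        traj = [u_prev, u_curr]
        for _ in range(nsteps):
            u_next = u_prev + 2.0 * dt * 1j * omega * u_curr       # leapfrog step
            # Robert-Asselin filter applied to the middle time level (no-op if gamma = 0)
            filtered = u_curr + gamma * (u_prev - 2.0 * u_curr + u_next)
            u_prev, u_curr = filtered, u_next
            traj.append(u_next)
        # size of the 2*dt (computational) mode over the last 100 steps, and final amplitude
        noise = max(abs(traj[k + 1] - 2.0 * traj[k] + traj[k - 1]) / 4.0
                    for k in range(len(traj) - 101, len(traj) - 1))
        return round(noise, 4), round(abs(traj[-1]), 3)

    print(leapfrog(True))                # consistent start: noise at the truncation-error level
    print(leapfrog(False))               # crude start: a persistent, much larger 2*dt mode
    print(leapfrog(False, gamma=0.1))    # filtered: the mode is removed, but the amplitude decays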

  336. bender
    Posted Apr 5, 2007 at 9:59 PM | Permalink

    Gerald Browning, I really enjoy your posts, but feel I need more background to get their full meaning. What, in your opinion, are ten papers and three textbooks that any person should read if they want to understand the top issues in climate physics, dynamics, and modeling?

  337. Jim D
    Posted Apr 5, 2007 at 10:27 PM | Permalink

    Jerry,
    The area where I agree with you is that there is uncertainty in the physics,
    hence so many parameterizations (and why I have a job). The problem is by no
    means solved. It gets complex as soon as you go beyond the N-S equations to
    deal with real atmospheric physics.
    I also agree that the dynamics is well known, and pure dynamics solutions
    can be found, and models with any reasonable numerical scheme will converge to the
    same correct solution in those limited pure-dynamical cases (e.g. inviscid or
    specified viscosity).
    I disagree with the opening paragraph you had above. Being inaccurate in a highly
    turbulent case does not mean the solution is completely invalid, or that the nonhydrostatic
    equations have broken down. It just says the flow is less predictable in a deterministic
    sense. To get turbulent flow completely accurately you need a model that can represent
    all scales (like DNS), and completely accurate initial conditions. The dynamics equations are known,
    but the computers aren’t there yet. Meanwhile we use sub-grid scale representations
    of those eddies, and much of that science has been done in the LES field.

  338. Jim D
    Posted Apr 5, 2007 at 10:47 PM | Permalink

    #336, Tom, if the climate stops warming in 10-30 years it will be a consequence
    of something unaccounted for in the IPCC scenarios (asteroids, nuclear wars, major volcanoes
    come to mind). I couldn’t see the paper you suggested, but unless it has something on
    that scale, it is unlikely to counteract CO2, in my opinion. CO2 is not a subtle
    thing to do to the atmosphere, it is big, and it will not be a subtle thing that
    cancels it. You suggest climate scientists could have missed something, even
    barring a catastrophe, which is where I differ. Current climate is well understood
    in that there are no major mysteries as to why things are the way they are. Changing
    the climate changes regimes for sure, but not far beyond anything we know, in my opinion.

  339. Jaye
    Posted Apr 5, 2007 at 11:13 PM | Permalink

    It appears to me that you are not reading the literature in your own area and the manuscripts cited repeatedly on this blog (e.g. at the beginning of this thread and the ITCZ thread ) or are choosing to ignore the manuscripts in the literature that point out the problems with
    the incorrect application of numerical methods to PDE’s. It is time that you do so.

    Ouch…

  340. Tom Vonk
    Posted Apr 6, 2007 at 1:39 AM | Permalink

    No Jim .

    “http://www.springerlink.com/content/g28u12g2617j5021”

    Abstract :
    “A novel multi-timescale analysis method, Empirical Mode Decomposition (EMD), is used to diagnose the variation of the annual mean temperature data of the global, Northern Hemisphere (NH) and China from 1881 to 2002. The results show that:
    “(1) Temperature can be completely decomposed into four timescales quasi-periodic oscillations including an ENSO-like mode, a 6-8-year signal, a 20-year signal and a 60-year signal, as well as a trend. With each contributing ration of the quasi-periodicity discussed, the trend and the 60-year timescale oscillation of temperature variation are the most prominent.
    (2) It has been noticed that whether on century-scale or 60-year scales, the global temperature tends to descend in the coming 20 years.
    (3) On quasi 60-year timescale, temperature abrupt changes in China precede those in the global and NH, which provides a denotation for global climate changes. Signs also show a drop in temperature in China on century scale in the next 20 years.
    (4) The dominant contribution of CO2 concentration to global temperature variation is the trend. However, its influence weight on global temperature variation accounts for no more than 40.19%, smaller than those of the natural climate changes on the rest four timescales. Despite the increasing trend in atmospheric CO2 concentration, the patterns of 20-year and 60-year oscillation of global temperature are all in falling. Therefore, if CO2 concentration remains constant at present, the CO2 greenhouse effect will be deficient in counterchecking the natural cooling of global climate in the following 20 years. Even though the CO2 greenhouse effect on global climate change is unsuspicious, it could have been excessively exaggerated. It is high time to re-consider the trend of global climate changes.”

    I apologise to Jerry for being slightly off topic.
    But then again, not so much, because the peer-reviewed paper linked above uses the same set of assumptions (statistical analysis) as the “standard” models do and concludes that cooling is coming.
    As long as a rigorous analysis of the dynamics and of the divergence problems is not done, and despite your beliefs it is neither done nor proven, there will be no reliable answer about long-term trends.

  341. Willis Eschenbach
    Posted Apr 6, 2007 at 3:05 AM | Permalink

    Jim D., you say:

    #336, Tom, if the climate stops warming in 10-30 years it will be a consequence of something unaccounted for in the IPCC scenarios (asteroids, nuclear wars, major volcanoes, come to mind). I couldn’t see the paper you suggested, but unless it has something on that scale, it is unlikely to counteract CO2, in my opinion. CO2 is not a subtle thing to do to the atmosphere, it is big, and it will not be a subtle thing that cancels it. You suggest climate scientists could have missed something, even barring a catastrophe, which is where I differ. Current climate is well understood in that there are no major mysteries as to why things are the way they are. Changing the climate changes regimes for sure, but not far beyond anything we know, in my opinion.

    What evidence do you have that “CO2 is not a subtle thing to do to the atmosphere, it is big …”?

    There is an interesting study, Is the Earth still recovering from the “Little Ice Age”? A possible cause of global warming, by Dr. Syun-Ichi Akasofu, who is the director of the International Arctic Research Center at the University of Alaska Fairbanks. He points out that both long-term temperature records (CET, Armagh, etc.) and a variety of proxies (ice cores, ice extent, etc.) indicate that the earth has been steadily warming (plus or minus natural variation) at about half a degree per century for two centuries, and likely as much as four centuries.

    There is no reason to assume that this trend has suddenly stopped. Therefore, before we can assign any warming to CO2, we need to subtract out the long-term warming of about half a degree per century. The world warmed about 0.6°C in the 20th century … which would indicate that the CO2 contribution must be tiny.

    The recent warming (1980 – present) is not statistically any faster, longer, or larger than the 1915-1945 warming. So where is the evidence that CO2 is “big” … please bear in mind that computer models are not evidence. Evidence is data and observations.

    Where is the evidence?

    w.

    PS – if you don’t think that climate scientists have missed anything … time for a reality check. The climate is a chaotic, multistable, driven, optimally turbulent terawatt scale heat engine, with a host of both known and unknown forcings, natural oscillations, and feedbacks. It comprises five planet-wide subsystems (ocean, atmosphere, biosphere, lithosphere, and cryosphere), each of which has both natural and forced variability. Each of these subsystems has its own feedbacks and forcings, both individually and between the subsystems. The climate is the most complex system that we have ever tried to model, and we’ve only been working on the project for a few decades. We don’t have mathematics to calculate what’s happening even in simple turbulent systems, much less complex ones.

    For example, in the 1970s, the effect of CO2 doubling was estimated at between 1.5° and 4.5°C, a very wide range. But now we’re in 2007, and because the scientists are not missing anything, our understanding of the climate has been greatly improved because of our wonderful climate models that aren’t missing anything either. Because of the greater understanding we’ve gained over the last 30 years, the effect of CO2 doubling is now estimated at … between 1.5° and 4.5°C.

    Our understanding of the climate system is in its infancy. New, previously unknown forcings are discovered on a regular basis. For example, when plankton get too hot they emit DMS, which creates clouds above them and cools them down … care to guess how many climate models include that forcing?

    Finally, most of the forcings we know about are poorly understood. The IPCC rates the Level of Scientific Understanding (LOSU) of the majority of the forcings used in GCMs as either Low or Very Low … and that doesn’t include such known forcings as cosmic rays or plankton DMS, which we know next to nothing about. Even such fundamental, basic questions as “do more clouds warm or cool the earth” are not resolved.

    Major mysteries abound in climate science. Why did the earth cool from 1945-1970? (Please don’t insult our intelligence by saying “aerosols”, the data doesn’t support it.) Why has the earth warmed since the 1600s? Why was the early part of the Holocene warmer than it is today? Why has the earth not warmed since 2001? A recent NASA study showed that the reduced snow and ice in the Arctic haven’t reduced the total albedo, because as the ground albedo decreased, the cloud albedo increased … no one, not one climate scientist, predicted this hugely significant effect, and yet you think climate science is “well understood”, with no “major mysteries”?

    Like I said, just a quick reality check …

  342. Gerald Browning
    Posted Apr 6, 2007 at 1:03 PM | Permalink

    bender (#338),

    The manuscripts and texts that I would recommend obviously will depend on your area of expertise. Personally I am very familiar with climate models because of my experience working on them as a scientific programmer during the early years of my professional career, before I returned to school to obtain my advanced degree. The basic dynamical equations are quite standard. For a description of those equations, including a corresponding scale analysis that leads to a complete understanding of the slowly evolving solution in time from a mathematical (bounded derivative) standpoint, I think the Tellus 1986 manuscript by Heinz and me, and the ones cited on the ITCZ thread on this blog, lead to a very good understanding of the issues associated with the entire range of scales involved in the free atmosphere above the boundary layer. The latter manuscript can be read without going into the mathematical details to any great extent, and it includes numerical examples of the mathematical results that help one quickly obtain a better understanding of the issues.

    For a corresponding description and scaling of the microphysics equations used in the latest mesoscale models, there is a manuscript by Chungu Lu et al. that gives one a very clear picture of all of the problems inherent in the forcings (physical approximations = parameterizations).

    If you are interested in delving into the NS equations and numerical methods for time dependent PDE’s, I can highly recommend texts by Kreiss although these are quite technical. Obviously I come from more of a numerical analytic / applied math background so am biased in that direction.

    I think you might want to peruse the ITCZ manuscript first to see if the text and examples make sense (don’t go into the mathematical details). Depending on how that goes, i.e. what questions are raised, we can proceed from there. I will be very happy to answer any questions or issues that you raise. Don’t be afraid to ask. I am here to help those that want to
    understand and will do all I can to do so.

    Jerry

  343. Gerald Browning
    Posted Apr 6, 2007 at 3:35 PM | Permalink

    Tom (#342),

    You are not the problem and there is no need to apologize. :)

    Jerry

  344. John Baltutis
    Posted Apr 6, 2007 at 5:11 PM | Permalink

    Re: #344

    Could you be a bit more specific in your recommendations? Searching this site is a bit problematic (ITCZ search term yields over 10 pages, unsorted). Specifically, are these the ones you’re referring to?

    “Tellus 1986 manuscript by Heinz and me”: Browning, G. L. and H.-O. Kreiss, Multiscale bounded derivative initialization for an arbitrary domain, JAS, 59, 1680-1696

    “…the ones cited on the ITCZ thread on this blog”: Which thread? I only find two references on the Possible ITCZ Influence thread and one is the above one.

    “…there is a manuscript by Chungu Lu et al.”: do you have more details?

  345. Gerald Browning
    Posted Apr 6, 2007 at 5:59 PM | Permalink

    John (#346),

    Yes I will be happy to cite the references more specifically later this evening.

    Jerry

  346. Gerald Browning
    Posted Apr 6, 2007 at 8:37 PM | Permalink

    John (#346),

    A quicker way to find many of the references I have mentioned on various threads under modeling might be to do a google scholar search using browning kreiss similar to what Steve M. did when I first posted on this blog. It has most of the publications listed there.

    The 1986 Tellus manuscript is

    Scaling and Computation of Smooth Atmospheric Motions
    G. Browning and H.-O. Kreiss
    Tellus, 38A, 295-313

    It is cited in the 2002 Journal of Atmospheric Sciences manuscript you found and the one I recommended that bender peruse first:

    Multiscale Bounded Derivative Initialization in an Arbitrary Domain
    G. L. Browning and H. O. Kreiss
    JAS, 59, 1680-1696

    The 2002 microphysics reference is

    Scaling the Microphysics Equations and Analyzing the Variability of Hydrometeor Production in a Controlled Parameter Space
    Chungu Lu, Paul Schultz, and G. L. Browning
    Advances in Atmospheric Sciences, Vol. 19, No. 4, 619-650

    If the latter is hard to obtain, I am sure that Chungu would be happy to send you a reprint. You can reach him at the NOAA lab in Boulder.

    I also have a copy that I can reproduce and mail you if need be.

    Jerry

  347. Jim D
    Posted Apr 6, 2007 at 10:16 PM | Permalink

    Tom,
    The work you cited seems to be a frequency analysis that is then used to extrapolate
    into the future based on cycles in the data that can’t even be proven given the length
    of the historical record. It also doesn’t seem to offer any science beyond the statistics.
    Good to have faith in something, but I wouldn’t put it in quasi-cyclic extrapolations.

    Willis, ice cores are where the evidence is. CO2 hasn’t been at the current level for a
    million years, and the global temperatures go with CO2. This isn’t just making it up.
    OK, there are other mechanisms, methyl clathrates, hydrogen sulphide. You
    only have to read Scientific American to find out about these. Some of them don’t happen
    unless something else has already lowered sea-level or warmed the oceans to a tipping point,
    but here we are only focussed on the next century. Anecdotes on regional warming and cooling
    don’t prove global climate swings, while CO2 is a globally mixed gas. Regional changes may
    result from changes in ocean circulations that do have very long time scales. Solar output
    reduction may also account for the Little Ice Age, another slow change.
    Why do you say the 1945-1970 cooling can’t be explained by aerosols or pollution when it can?
    Are you invoking solar output as an alternative? I believe there is evidence to the contrary,
    because the cooling was mainly in the areas downstream of industrial regions (e.g. Russia,
    and not Australia), as I recall from the “Global Dimming” documentary.

  348. John Baltutis
    Posted Apr 7, 2007 at 12:53 AM | Permalink

    Re: #348

    Thank you for the citations and your kind offer. I’ll contact Chungu Lu for a reprint.

    Cheers

  349. John Baltutis
    Posted Apr 7, 2007 at 1:35 AM | Permalink

    Re: #350

    Better yet, I found the Chungu Lu document at: http://www.iap.ac.cn/html/qikan/aas/aas2002/200204/020404.pdf

  350. DeWitt Payne
    Posted Apr 7, 2007 at 2:24 AM | Permalink

    #349

    While it’s true that the current atmospheric CO2 concentration has increased above the level of the last million or so years, it’s also true that the level of the last million years is at an all-time low for at least the last 500 million years. Considering that photosynthesis stops working at CO2 concentrations below 90 ppmv, one could even say the minimum concentrations during recent glaciations were dangerously low.

  351. Willis Eschenbach
    Posted Apr 7, 2007 at 3:08 AM | Permalink

    Jim D, thank you for your reply to my question. I had asked where the evidence is that adding CO2 to the atmosphere would result in large, significant changes that will be hard to “counteract”, as you put it. I’m sorry if I was not clear: I was not asking for evidence that CO2 levels are high, but for evidence that this will make a difference.

    Regarding the aerosol theory, if it were correct then the Northern Hemisphere would have cooled much more than the Southern Hemisphere. But it did not do so. In addition, the drop was not slow as one would expect from increasing aerosols, but was quite rapid. Here’s the HadCRUT3 record for the period:

    Note the plunge in the Southern Hemisphere was larger and steeper than that of the Northern Hemisphere … how do aerosols explain that?

    But there is an even larger problem with the aerosol theory, which is that we don’t have measurements of aerosols for the period, so the theory is based on emissions data. However, there is no evidence that the emissions actually changed the amount of sunlight hitting the earth during that time. To the contrary, there is good evidence that there was no measurable change in atmospheric transmission during that time. Using pyrheliometer data, the authors of the paper

    Hoyt, D. V. and C. Frohlich, 1983. Atmospheric transmission at Davos, Switzerland, 1909-1979. Climatic Change, 5, 61-72.

    showed that in Switzerland there was no trend in atmospheric transmission. As you point out above, this is one of the areas where the IPCC claims the biggest trends in aerosols were occurring.

    The pyrheliometric ratioing method is so accurate and stable that it even discovered a previously unknown eruption, as described in:

    Hoyt, D. V., 1978. An explosive volcanic eruption in the Southern Hemisphere in 1928. Nature, 275, 630-632.

    But despite that accuracy, it showed no drop in received sunlight post 1945. As far as I know, this paper has never been seriously challenged by any other studies of the question. I seem to recall that Doug Hoyt posts occasionally on this forum, and could probably tell us more.

    Anyhow, that’s a couple of very strong reasons why the aerosol theory is an insult to our intelligence … because it doesn’t fit the observations.

    I await your evidence for increased CO2 levels causing big, significant changes that will be hard to counteract, as well as evidence that aerosols caused the 1945-1975 cooling.

    All the best,

    w.

  352. gb
    Posted Apr 7, 2007 at 9:16 AM | Permalink

    Here is some other (interesting) literature:

    Thunis & Bornstein (1996), J. Atmospheric Sciences, vol. 53, 380-396, discuss the different assumptions and equations used in atmospheric mesoscale models.

    The book by Pope, ‘Turbulent Flows’, discusses the length scales found in NS turbulence, including the smallest length scale. Chapter 13 presents nicely the problems of simulating a chaotic turbulent flow if one cannot resolve all scales of motion (as in atmospheric models). In particular, in 13.5 it is shown what the consequences are. It says ‘it is impossible to construct a LES (read atmospheric model) that produces filtered velocity fields that match those from DNS (fully resolved model) realization by realization’. This has important implications for the way parameterizations should be validated.

    The papers by Meneveau (1994) Physics of Fluids, vol. 6, 815-833 and by Nadiga and Livescu (2007) Phys. Rev. E, vol. 75, 046303 are good papers on the problem of parameterizations in turbulent flows and their implications. Perhaps also something for Jerry and Tom Vonk.

  353. Gerald Browning
    Posted Apr 7, 2007 at 1:25 PM | Permalink

    John (#351),

    You are very resourceful! Let me know if you have any questions about any of the manuscripts. I will be happy to try to answer them.

    Jerry

  355. Jim D
    Posted Apr 7, 2007 at 8:03 PM | Permalink

    (Sorry for the re-post, I got hit by that less-than sign again. Great that the back button works
    to recover the message)
    Willis,
    Re: aerosols
    Thanks for forcing me to read up a bit more on this.
    I found a counterpoint paper below. So aerosols may not have an easily seen global impact
    but they do have a noticeable local one.

    *****
    GEOPHYSICAL RESEARCH LETTERS, VOL. 32, L17802, doi:10.1029/2005GL023320, 2005
    Alpert et al.
    Abstract
    From the 1950s to the 1980s, a significant decrease of surface solar radiation has been observed at different locations throughout the world. Here we show that this phenomenon, widely termed global dimming, is dominated by the large urban sites. The global-scale analysis of year-to-year variations of solar radiation fluxes shows a decline of 0.41 W/m2/yr for highly populated sites compared to only 0.16 W/m2/yr for sparsely populated sites (less than 0.1 million). Since most of the globe has sparse population, this suggests that solar dimming is of local or regional nature. The dimming is sharpest for the sites at 10°N to 40°N with great industrial activity. In the equatorial regions even the opposite trend to dimming is observed for sparsely populated sites.

    *****

    I would also note that volcanic eruptions have a global impact on
    aerosol amount from measurements, and there is some evidence of a global
    temperature reduction after recent eruptions. It is just a matter of scale, but you can’t deny
    that aerosols cause an effect on surface solar fluxes, and efforts to quantify it globally
    (yes, using models) have succeeded in accounting for the non-warming period (1945-80).
    How about the red sunsets due to aerosols? Isn’t that evidence enough of an impact?
    Regarding the plots you show, I would interpret it as showing that there are global
    and local aerosol effects, and what you see there are the global ones, due to the smaller
    aerosols that stay in the atmosphere longer.
    I will get to the CO2 issue in another post.

  356. Willis Eschenbach
    Posted Apr 8, 2007 at 2:45 AM | Permalink

    Thanks for the link, Jim, it’s an interesting paper.

    The paper uses data from GEBA, the Global Energy Balance Archive. However, instead of the much more accurate pyrheliometric ratioing method used by Hoyt et al., they used direct pyrheliometer readings. I have registered to obtain the GEBA data, to see what it says.

    Taking a look at the paper you cited, I find the following curiosities:

    1. They seem to never have heard of autocorrelation, or of the problems it causes. They divide the data into highly and sparsely populated sites. Then they say:

    Moreover, the decline for highly populated sites during the
    25-year period under investigation was approximately
    2.6 times as large as the decline for sparsely populated
    sites. In particular, the slope for large cities was estimated to
    be 0.41 W/m2/yr compared to 0.16 W/m2/yr for sparsely
    populated sites (Table 1). The goodness-of-fit measure (R2)
    of 0.52 at the significance level p less than 0.001 indicates a good
    fit of the linear trend for highly populated sites.

    In fact, adjusted for autocorrelation, there is no significant difference between the trend of the sparsely and densely populated sites.

    In addition, they’re playing fast and loose with the R^2 measure in a couple of ways. While the R^2 of the linear trend of the populated sites is 0.52, adjusted for autocorrelation it is not statistically significant (p = 0.06). Also, the R^2 of the linear trend of the less populated sites is 0.18, not statistically significant (p = 0.16), and for the entire dataset it is 0.43, also not statistically significant (p = 0.07).

    2. They are assuming that the change in radiation is due to aerosols. However, this seems doubtful, because of the large year-to-year changes in radiative flux. The emission of aerosols doesn’t change a whole lot from year to year, but the radiative flux in the highly populated areas changed by as much as 13 W/m2, and in the sparsely populated areas by as much as 6 W/m2. Surely this can’t be due to aerosols alone; there would have to be other factors to make those large changes.

    3. None of the sites that they reported on are south of 15° … curious … I would think they’d want to compare the NH and the SH.

    4. They say:

    The paradox, however,
    is that the observed decline in broadband global solar
    radiation concurred with the observed temperature increases
    over land, by 0.09 K per decade, between 1951 and 1989
    [Intergovernmental Panel on Climate Change, 2001]. This
    puzzling evidence could, by all appearances, put in doubt
    the dimming trend. Nevertheless, based on climate simulations
    with the aid of a general circulation model, Liepert et
    al. [2004] argued in favour of the real existence of
    solar dimming, which they attributed to interactions of
    greenhouse gas forcing combined with aerosol effects. They
    found that reductions in surface solar radiation are only
    partly compensated by enhanced down-welling longwave
    radiation from the warmer and moister atmosphere.

    While this all sounds good, it doesn’t work out mathematically. It is truly a paradox, since the land was warming from 1964 to 1989, while the "global dimming" was going on. From their paper, insolation decreased by 4 – 7 W/m2, and CO2 forcing increased by 0.6 W/m2 (using IPCC max figure of 4.5°C/doubling). Even if we assume a further doubling of the CO2 forcing from the mythical positive feedback (to 9°C/doubling), that’s still a net reduction of 3 – 6 W/m2 in forcing … meanwhile the land warmed in the NH. Doesn’t figure … one of the non-existent major mysteries of the climate, I guess.
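
    For concreteness, a back-of-envelope sketch of that forcing bookkeeping, using the standard simplified expression 5.35*ln(C/C0) for CO2 forcing; the 1964 and 1989 concentrations are assumed round values, not numbers from the paper:

    # Back-of-envelope check of the forcing bookkeeping above. Assumes the
    # standard simplified CO2 forcing 5.35*ln(C/C0); concentrations are rough
    # assumed values, not taken from the paper.
    import math

    c_1964, c_1989 = 320.0, 353.0                  # ppm, approximate
    dF_co2 = 5.35 * math.log(c_1989 / c_1964)      # roughly +0.5 W/m2

    dimming_low, dimming_high = 4.0, 7.0           # W/m2 decline in surface solar (from the paper)

    print(f"CO2 forcing change: {dF_co2:+.2f} W/m2")
    print(f"net change in surface forcing: {dF_co2 - dimming_high:+.1f} to {dF_co2 - dimming_low:+.1f} W/m2")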

    5. Unfortunately, they do not give an area-averaged value for the global dimming. However, since urban areas are only about 0.2% of the land area, the area-averaged value over the Northern Hemisphere is likely much closer to the "sparsely populated" trend in global dimming than the "densely populated" trend. The fly in the ointment is that the "sparsely populated" trend is only -0.16 W/m2/yr, and is not statistically significant (p = 0.16!) …
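
    A one-line illustration of the area-weighting point, assuming the urban trend applies to the 0.2% urban fraction quoted above and the sparse-site trend to everything else:

    # Area-weighted dimming trend, assuming the urban fraction of land is ~0.2%
    # and the two quoted trends apply to urban and non-urban land respectively.
    urban_frac = 0.002
    trend = urban_frac * (-0.41) + (1 - urban_frac) * (-0.16)
    print(f"area-weighted trend ~ {trend:.3f} W/m2/yr")   # ~ -0.16, dominated by the sparse-site value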

    In the meantime, while I wait for the GEBA data, here’s a post from Doug Hoyt on this question.

    w.

  358. Jim D
    Posted Apr 9, 2007 at 9:25 PM | Permalink

    Willis,
    I will not argue as this is somewhat outside my field, and I only read the
    Alpert et al. abstract. I am certainly grateful for your critique and the Hoyt link,
    as I was not aware there had been a debate on this topic.
    I will obviously not, based on just this, completely dismiss aerosols as having a cooling
    effect, especially since no alternative has been put forward. Also if volcanoes have
    an admitted cooling effect, why not anthropogenic aerosols, which are surely similar, and
    what about the well known contrail studies around 9/11 (admittedly not many contrails
    in 1945, but later it could be significant)? Additionally, I mentioned the cloud/aerosol effect in
    an earlier post, which is also a real one in terms of increasing cloud albedo. I will
    also mention again that models like the recent CCSM work including aerosols, presumably by people who know
    aerosol/radiation effects, do reproduce the cooling period.

    On CO2: I believe you asked me to explain why having the most CO2 in the atmosphere in
    a million years is a “big” forcing on the atmosphere. I could go back over the
    radiative forcing argument again here, but instead I will only state that
    ice cores show that each time CO2 increased 100 ppm between the Ice Ages,
    the temperature increased by 10 C. It is not clear which is cause and effect,
    but it says something about an equilibrium because of their high correlation.
    We are already 100 ppm above the 1900 CO2 level, and the temperature is trying to catch up.
    Projected rises by 2100 are not 10 C, but that could happen over a longer time.

  359. Willis Eschenbach
    Posted Apr 9, 2007 at 10:01 PM | Permalink

    Jim D., as always, good to hear from you. A couple of comments:

    1. You say:

    I will obviously not, based on just this, completely dismiss aerosols as having a cooling effect, especially since no alternative has been put forward.

    I’ve never understood this line of reasoning. It’s the same reasoning that says “We don’t know what caused the recent warming, so it must be CO2”. If an argument is flawed, it is flawed whether it is the only explanation proposed, or whether it is one of fifty possible explanations. We should not hold on to it just because we don’t know the true answer.

    2. In the ice ages, it is clear that initially CO2 has to be the effect, since it lags the temperature changes going into the ice ages. In addition, the cessation of the CO2 increase at the end of the transition out of the ice ages also lag the temperature change.

    3. You can’t compare ppm of CO2 to temperature, the relationship is logarithmic. If CO2 were responsible for the ice ages, it would imply a climate sensitivity of 3.6°C per W/m2. The IPCC estimate is 0.4° – 1.2°C per W/m2. If that were actually the sensitivity, we should have seen about a 5.5°C temperature rise since 1900 …
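
    For readers who want to check the arithmetic, here is a rough reconstruction of that sensitivity calculation (the glacial/interglacial and modern CO2 values are assumed round numbers, and the standard simplified forcing 5.35*ln(C/C0) is used; different choices move the figures around a bit, but the order of magnitude is the point):

    # Rough reconstruction of the sensitivity arithmetic in point 3.
    # Assumed values: glacial CO2 ~180 ppm, interglacial ~280 ppm, ~10 C swing,
    # modern CO2 rise ~295 -> 380 ppm, forcing = 5.35*ln(C/C0).
    import math

    dF_ice_age = 5.35 * math.log(280.0 / 180.0)       # ~2.4 W/m2
    implied_sensitivity = 10.0 / dF_ice_age           # ~4 C per W/m2 if CO2 alone did it

    dF_since_1900 = 5.35 * math.log(380.0 / 295.0)    # ~1.4 W/m2
    print(f"implied sensitivity ~ {implied_sensitivity:.1f} C per W/m2")
    print(f"implied warming since 1900 ~ {implied_sensitivity * dF_since_1900:.1f} C")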

    w.

  360. DeWitt Payne
    Posted Apr 10, 2007 at 1:18 AM | Permalink

    It's the same reasoning that says "We don't know what caused the recent warming, so it must be CO2".

    This is, I think, a variation on the logical fallacy known as false dilemma or false dichotomy. Anyone using it in an argument gets an F in logic.

  361. Tom Vonk
    Posted Apr 10, 2007 at 2:32 AM | Permalink

    Tom,
    The work you cited seems to be a frequency analysis that is then used to extrapolate into the future based on cycles in the data that can’t be even proven given the length of the historic record. It also doesn’t seem to offer any science beyond the statistics. Good to have faith in something, but I wouldn’t put it in quasi-cyclic extrapolations.

    Ah … "faith" again!
    I have neither "faith" in the numerical models based on stochastic assumptions nor in frequency analysis using the same assumptions.
    What I am saying is that one can't have it both ways.
    IF the climate can be adequately modelled by neglecting the differential equations describing its dynamics and assuming the "long-term averages + noise" hypothesis, THEN very obviously the evolution of the "long-term averages" will show up in any serious stochastic frequency analysis.
    That is exactly what is happening in the work I quoted – the only way to dismiss this paper would be to say that the statistics are not relevant to the problem, but then the "long-term trends + noise" hypothesis would not be relevant either.
    It should be clear by now, given the evidence quoted in this thread, that neither the convergence nor the uniqueness of computer models is adequately documented – and that's an understatement: it can't actually be proven.

  362. Tom Vonk
    Posted Apr 10, 2007 at 4:52 AM | Permalink

    The papers by Meneveau (1994) Physics of Fluids, vol. 6, 815-833 and by Nadiga and Livescu (2007) Phys. Rev. E, vol. 75, 046303 are good papers on the problem of parameterizations in turbulent flows and their implications. Perhaps also something for Jerry and Tom Vonk.

    Those and others are LES approaches.
    They are often very hard to read because the authors generally take a specific case (like 2D incompressible flow) and then derive results that may be valid for this specific case under a given set of assumptions.
    It is also very rarely recalled that the original Kolmogorov work postulates some rather strong hypotheses, like isotropy, homogeneity and self-similarity.
    Getting back to basics, the principle is easy:
    1) I acknowledge that I can't find solutions to the continuous PDEs, be it analytically or numerically.
    2) If I want to continue numerically, I acknowledge the computer constraints on the resolution.
    3) Given 2) above, I will separate the real flow into resolved and unresolved flow and create an arbitrary interface between the two (arbitrary because the resolution size is imposed upon me by the computer and not by the physics; a toy sketch of this split follows this comment).
    4) Then 99% of the time and resources will be spent on analysing the relevance of different SGS (Sub Grid Scale) modelling.

    And it is at 4) that all the difficulties begin.
    – the method will by definition break down every time you get discontinuities or a significant feedback from unresolved scales to resolved scales. The real world is full of those, and Jerry has given some examples.
    – it has never been proven that the LES model is close to the true flow averages in a general case.
    – by first modelling the continuous solutions of the large eddies and then discretizing them for a computer run, you get an unsavory mix of modelling and numerical errors that are close to impossible to separate, because all the "calibrations" are done with, again, numerical methods. In other words, if you run an LES with an error A and compare it to, e.g., a DNS run with an error B, then getting similar results doesn't say much about A and B.
    – space resolution is not everything; time resolution plays an important and largely unknown role too.

    And overriding all this we have the general problem common to climate "modelling": every partial success (success?) gets published, while it is extremely rare to publish failures. Yet, as I have pointed out several times in this thread, we learn much more from the failures than from the partial successes.
    So clearly the behavior of this extremely complicated system called climate is far from being "closed and settled".
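
    As mentioned in point 3 above, here is a purely illustrative sketch of the resolved/sub-grid split (nothing model-specific is assumed; a 1-D signal is low-pass filtered at an arbitrary spectral cutoff, and everything above the cutoff is what an SGS scheme would have to stand in for):

    # Illustrative sketch of the resolved/sub-grid split: a 1-D "flow" field is
    # separated at an arbitrary spectral cutoff chosen by the available
    # resolution, not by the physics.
    import numpy as np

    n = 512
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    rng = np.random.default_rng(0)
    u = np.sin(3 * x) + 0.3 * np.sin(25 * x) + 0.1 * rng.standard_normal(n)

    k = np.fft.rfftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi   # angular wavenumbers
    u_hat = np.fft.rfft(u)

    k_cut = 10.0                                          # arbitrary "grid" cutoff
    u_resolved = np.fft.irfft(np.where(k <= k_cut, u_hat, 0.0), n)
    u_subgrid = u - u_resolved                            # what an SGS scheme must represent

    print("resolved energy:", np.mean(u_resolved ** 2))
    print("sub-grid energy:", np.mean(u_subgrid ** 2))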

  363. MarkW
    Posted Apr 10, 2007 at 4:56 AM | Permalink

    #360, Jim D:

    I'd be interested in knowing why you are so convinced that something is cooling down the planet.
    Is it not more reasonable to assume that the models are broken?

  364. Gerald Browning
    Posted Apr 10, 2007 at 12:19 PM | Permalink

    Tom (#364)

    Very well stated.

    Jerry

  365. Jim D
    Posted Apr 10, 2007 at 8:51 PM | Permalink

    Willis,

    I've never understood this line of reasoning. It's the same reasoning that says "We don't know what caused the recent warming, so it must be CO2". If an argument is flawed, it is flawed whether it is the only explanation proposed, or whether it is one of fifty possible explanations. We should not hold on
    to it just because we don’t know the true answer.

    This one will answer MarkW too.
    I know I didn’t say this well, but I am more convinced by the arguments for aerosol-induced cooling than the
    ones against it, but I also think this is a side issue that came up because I claimed everything was explainable.
    I would say that wiggles of the magnitude of this period have shown up in the past records, and the underlying
    temperature rise that started in 1900 hasn't, so the aerosol cooling mechanism isn't critical to the warming issue,
    and wiggles of this size, even if somehow natural, won’t do much to stop warming.

    2. In the ice ages, it is clear that initially CO2 has to be the effect, since it lags the temperature changes going into the ice ages. In addition, the cessation of the CO2 increase at the end of the transition out of the ice ages also lag the temperature change.

    I didn't think it was clear which one lagged, because of the time resolution of the data, but that fits
    the normal ice-age mechanism, in which orbital and precession effects force temperature,
    not CO2 directly. So when the temperature rises, CO2 is released from the ocean.
    However, this is not an argument against the possibility of CO2 leading temperature, especially
    with the radiative effect of CO2.

    3. You can't compare ppm of CO2 to temperature, the relationship is logarithmic. If CO2 were responsible for the ice ages, it would imply a climate sensitivity of 3.6°C per W/m2. The IPCC estimate is 0.4° – 1.2°C per W/m2. If that were actually the sensitivity, we should have seen about a 5.5°C temperature rise since 1900 …

    If you prefer, I could have said that a 50% increase in CO2 went with a 10 C increase, which is even more impressive.
    Indeed, this argument I used is not a “standard” one. I was trying to emphasize a correlation
    that is clearly not accidental, and it follows from this correlation alone that more CO2 leads to higher temp.
    I suspect the effect is not as big now as in the ice ages because we have lost most of the positive
    feedback from ice albedo, but the effect is unlikely to shut off completely while polar ice is still around.

  366. Jim D
    Posted Apr 10, 2007 at 9:16 PM | Permalink

    Tom, #363
    The problem with statistical approaches that look for cycles rather than exponential growth,
    when exponential growth fits the data better, should be fairly clear. They found a
    60-year cycle from 120 years of data, a record length that only permits sub-harmonics like 60, 40, 30, etc.
    to be found. If they had 140 years, they might have found 70 years. I give them credit
    for not reporting a 120-year cycle, though clearly that would have had significant energy too.
    Regarding climate, I still don’t understand what you have against averaging out the weather
    to obtain climate, whether it is in models of the future or past observed data makes no difference.
    Climate models do converge as I said. You can vary the initial conditions and the results,
    such as mean global surface temperature, cluster by the CO2 forcing, and this is the
    reason climate modelers have any faith in projections.
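
    To make the record-length point above concrete, a minimal sketch (synthetic, nothing from the cited study): with 120 years of annual values, a discrete Fourier analysis can only place power at periods of 120, 60, 40, 30, ... years, so a reported 60-year cycle is constrained by the record length as much as by the data:

    # The periods a discrete Fourier analysis can resolve are fixed by the
    # record length: with 120 annual samples they are 120, 60, 40, 30, ... years.
    import numpy as np

    n_years = 120
    freqs = np.fft.rfftfreq(n_years, d=1.0)      # cycles per year
    periods = 1.0 / freqs[1:]                    # skip the zero frequency
    print(periods[:6])                           # [120.  60.  40.  30.  24.  20.]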

  367. Jim D
    Posted Apr 10, 2007 at 9:37 PM | Permalink

    Tom, #364 on LES,
    I agree with this. There never will be a perfect sub-grid scheme. Luckily in
    weather there is a scale separation that permits us to model thunderstorms, for example
    on 1 km grids because the dominant motions are resolved. I am not saying the sub-grid scheme choice
    has no effect, but we can understand a lot about thunderstorms with even a very simple
    sub-grid scheme. [However there are now scientists who say you need more like a 200 m grid
    (basically LES) to do thunderstorms properly.]
    On the other hand, in areas like boundary-layer growth, sub-grid
    schemes are front and center, and we are still working on getting PBL schemes better.

  368. Willis Eschenbach
    Posted Apr 11, 2007 at 12:25 AM | Permalink

    Jim D., thanks for your answers. You say:

    This one will answer MarkW too.
    I know I didn’t say this well, but I am more convinced by the arguments for aerosol-induced cooling than the
    ones against it, but I also think this is a side issue that came up because I claimed everything was explainable.
    I would say that wiggles of the magnitude of this period have shown up in the past records, and the underlying
    temperature rise that started in 1900 hasn’t, so the aerosol cooling mechanism isn’t critical to the warming issue.,
    and wiggles of this size, even if somehow natural, won’t do much to stop warming.

    Heck, I might be convinced by the arguments for aerosols myself … if I could ever find out what they are. All I can find is vague handwaving, plus the claim that it works in the GCMs … but since you are more convinced by the arguments for, perhaps you could explain what those arguments are.

    The earth has been warming at about half a degree per century for the last four centuries, and seems to be continuing to do so … I’m not sure what you mean when you say that natural warming or cooling “won’t do much to stop warming”.

    Next, I had said:

    2. In the ice ages, it is clear that initially CO2 has to be the effect, since it lags the temperature changes going into the ice ages. In addition, the cessation of the CO2 increase at the end of the transition out of the ice ages also lag the temperature change.

    You replied:

    I didn’t think it was clear which one lagged because of the time resolution of the data, but that fits
    the normal ice age mechanism that it comes from orbital and precession effects which force temperature,
    not CO2 directly. So when the temperature rises CO2 is released from the ocean.
    However, this is not an argument against the possibility of CO2 leading temperature, especially
    with the radiative effect of CO2.

    Heck, even the arch AGW folks at RealClimate acknowledge that CO2 lagged temperature during the ice age transitions … don’t know how to reply to the claim that it’s not clear which one lagged. There’s been lots of studies on this.

    w.

  369. Gerald Browning
    Posted Apr 11, 2007 at 12:52 AM | Permalink

    Jimy Dudhia (#369),

    There is no rigorous mathematical proof that there is a distinct scale separation, only that there have been two distinct types of numerical models (hydrostatic for the large-scale motions and nonhydrostatic for the mesoscale, with the latter change only taking place fairly recently) with very different assumptions in each. You continue to assert things that are not proved or are folklore. If the enstrophy can cascade down from balanced large scale flow in a matter of hours because of large exponential growth near a jet, locally the scales are not separated, i.e. smaller scales can form very quickly even from a balanced large scale state.

    Jerry

  370. Tom Vonk
    Posted Apr 11, 2007 at 1:07 AM | Permalink

    Regarding climate, I still don’t understand what you have against averaging out the weatherto obtain climate, whether it is in models of the future or past observed data makes no difference. Climate models do converge as I said. You can vary the initial conditions and the results, such as mean global surface temperature, cluster by the CO2 forcing, and this is the reason climate modelers have any faith in projections.

    I have nothing against averaging. The RANS are valid equations. What you apparently can't understand is that the RANS are EXACTLY as chaotic as the non-averaged NS.
    What I do have something against is hand waving and saying that the chaos magically disappeared when I changed variables.
    That negates everything the mathematics says.
    As for the "convergence" – I have asked over and over to WHAT the models converge.
    One thing is sure: they don't converge to exact solutions of the equations governing the dynamics.
    But if you have the proof that they do, don't forget to claim the $1M prize that the Clay Institute will pay you.
    You are confusing convergence and stability all the time.
    While the latter is a necessary condition for a model to be a little bit more than numerical garbage, it is not and has never been a proof of the former.
    That the LES and similar models break down in the real world has already been proven and explained 100 times by Jerry.

  371. MarkW
    Posted Apr 11, 2007 at 5:23 AM | Permalink

    The only argument I have ever seen in favor of aerosol cooling, is that it makes the models work.
    Comparison with the real world shows that aerosols don't work. That is, we don't have a good number for how many aerosols were released during the time in question. We know very little about the distribution of types of aerosols that were released. We know very little about how aerosols actually affect climate.

    Beyond the uncertainty is the absolute fact that while most of the aerosols were being released in the NH, most of the cooling that is being blamed on aerosols occurred in the SH. Maybe it’s more of the teleconnections that magically make trees respond to temperature changes hundreds of miles away?

    Beyond that is the fact that India and China are releasing huge quantities of aerosols over the last few decades, so the claim that aerosols only cooled the climate during the 70’s still fails the real world test.

    I suspect that the reason you cling to belief that aerosols fix the problems with the models, has more to do with your attachment to the models.

  372. Jim D
    Posted Apr 12, 2007 at 9:35 PM | Permalink

    Willis and MarkW,
    Re: aerosols. If aerosols have any reflective characteristic, whether in clear air
    or in clouds, they will provide forcing with a cooling effect. It would be miraculous
    if they were perfectly transparent. Now, they could also have an IR effect, which
    would be more of a warming below and cooling above (like clouds). I would agree that
    their net effect at the surface is debatable. I need to dig into their radiative effects
    more as I can’t argue this except based on hearsay.
    RE: CO2. OK, like I said, it is understandable if temperature leads CO2 in the Ice Ages.
    It doesn’t contradict the possibility that CO2 can lead warming, and that they are highly correlated.
    In response to the claim of warming for centuries: the warming only took off after 1900. Before that,
    even if you see some kind of linear trend,
    perhaps due to solar changes, there is no way to say it is part of the current exponential trend,
    which is explainable in terms of CO2, and I haven't heard a credible alternative warming explanation,
    one that would also need a reason why CO2 isn't doing it. Why go for something that needs two mechanisms
    (one for warming, one for why CO2 isn't responsible) when one suffices?

  373. Jim D
    Posted Apr 12, 2007 at 9:51 PM | Permalink

    #371, Jerry,
    In a lot of atmospheric systems, the smallest scales act just like diffusion, which
    is simply represented in models. That is, the small scales are just a sink of energy or enstrophy,
    and contribute nothing else in those cases. Thunderstorm models and large-scale circulation
    models represent those processes fine with just diffusion representing the small scales.
    In three-dimensional turbulence energy cascades to smaller scales, unless it is an unstable
    situation, when growth from small scales does occur. Idealized models of thunderstorms are often
    initiated with a resolved bubble so that they don’t have to develop the growth from small scales
    themselves, or in forecasting with storm-scale models we rely on convergence lines to release
    the convective instability in the right place, and this works quite well. So, yes, small scales
    matter in development, but after the storm has developed they just contribute diffusion.
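
    A toy sketch of the "small scales act like diffusion" idea above (not taken from any particular model; the grid spacing, time step, advection speed and eddy diffusivity are all assumed illustrative values): a 1-D advected blob with a constant eddy-diffusion term standing in for the unresolved scales:

    # Toy sketch: 1-D advection with a constant eddy diffusivity standing in
    # for unresolved small scales (upwind advection, explicit diffusion,
    # periodic domain; all parameter values are illustrative).
    import numpy as np

    nx, dx, dt = 200, 1.0e3, 5.0            # points, 1 km spacing, 5 s step
    c, K = 10.0, 500.0                      # advection speed (m/s), eddy diffusivity (m2/s)

    x = np.arange(nx) * dx
    u = np.exp(-((x - 50.0e3) / 10.0e3) ** 2)    # initial blob

    for _ in range(1000):
        adv = -c * (u - np.roll(u, 1)) / dx                           # upwind advection
        dif = K * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # diffusion as the "sub-grid" term
        u = u + dt * (adv + dif)

    print("peak amplitude after advection + diffusion:", round(u.max(), 3))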

  374. Jim D
    Posted Apr 12, 2007 at 10:10 PM | Permalink

    Tom,
    As far as convergence goes, it is easy to show models converge to certain
    mathematical inviscid linear solutions, like flow over a hill. And anything, you
    can get mathematics to do with the NS system, you can get a model to do.
    Regarding chaos, climate models consist of many interacting systems (atmosphere,
    ocean, ice, land, vegetation, etc.), and therefore have the complexity to
    produce chaos if such existed. However, as I began this part of the argument with,
    the century time scale is short for chaotic transitions in something as highly
    inertial as the climate system, and it is quite non-chaotic, and in a sense
    boring and predictable, when you only run these models 100 years. CO2 traps
    heat in the system, and the atmosphere warms. No surprise, and what grounds are
    there for expecting a surprise?
    How else do you solve this question, other than with models?

  375. MarkW
    Posted Apr 13, 2007 at 5:13 AM | Permalink

    JimD,

    Some aerosols reflect, some absorb, some have different characteristics based on altitudes and conditions.

    Nobody has ever said that aerosols have no effect. The point is, we don't know enough about the amount, distribution and makeup of the aerosols that have been released over the years to have a solid feel for what contribution they made in any given year.

    The modelers declare that X amount of aerosols “fixes” their models, and since X is a reasonable guess, (in their opinions) then X must have been the amount released.

    You still haven't tried to explain why the aerosols had more effect in the southern hemisphere, even though most of them were released in the northern hemisphere. Nor have you tried to explain away the assumption that aerosol production has gone down in recent decades, despite the fact that it hasn't.

  376. MarkW
    Posted Apr 13, 2007 at 5:15 AM | Permalink

    JimD,

    70% of the warming occurred before 1940, yet 80% of the CO2 increase occurred after 1940. I would say that even in the modern era, warming precedes CO2.

  377. MarkW
    Posted Apr 13, 2007 at 5:17 AM | Permalink

    JimD,

    Only one explanation is needed. We have nothing that “shows” that CO2 is causing the warming. There is an assumption that it must be, since no other “acceptable” explanation has been put forward.

    The problem is, several explanations have been put forward. They are just dismissed because the “the models show us that CO2 is all the explanation we need”.

  378. MarkW
    Posted Apr 13, 2007 at 5:23 AM | Permalink

    What, you might say, are these explanations?

    The first of the two biggies is inadequately adjusted UHI. As has been discussed on this site, Jones' UHI adjustment of 0.05C per century is absurdly small. For example, a recent study out of China came to the conclusion that around 80% of the warming seen in that country is due to UHI. Another study of California reached the same conclusion for that state.

    The second is the fact that the sun is unusually active, perhaps more active than it has been at any time in the last 8000 years. This warms the earth both directly, through increased radiance, which even the IPCC has admitted, and indirectly, by reducing cosmic rays, which in turn reduces cloud cover. The problem is that the exact relationship between cosmic rays and clouds has not been established yet. That is, we do not know what Y percent change in clouds an X percent change in rays produces.

    Between these two factors alone, there is not much warming left for CO2 to be responsible for.
    Then when combined with the FACT that historically, CO2 trails temperature changes, indeed, it’s hard to find any CO2 induced temperature signal in the historical record.

  379. Jim D
    Posted Apr 13, 2007 at 8:50 PM | Permalink

    (I am going to be on travel, so this is it for about a week).

    MarkW.
    OK, various issues raised.
    Aerosols; yes, they reflect and absorb. I agree,
    it is complex, and modelers have to put some numbers in, and justify those numbers for peer review.
    I hope the review system works for issues like this.

    Warming: OK, interesting about the UHI (urban heat island effect), and maybe you answered your
    own question about early-century warming with that one.
    However, more recent warming has been confirmed by satellite (after some debate), and conforms
    with the CO2 picture of warming near the surface and cooling higher up. Satellite data
    is more global, so we can’t attribute that warming to sampling errors.

  380. Willis Eschenbach
    Posted Apr 14, 2007 at 1:06 AM | Permalink

    Jim D., you seem to misunderstand the peer review process. Even when it works well, it only is concerned with whether there are obvious errors, gross mis-statements, or unsupported allegations. It is almost never concerned, for example, with whether the assumptions in a climate model are correct or appropriate. Passing the peer review means nothing about whether the paper’s claims are correct. That judgement comes later, based on whether it stands the test of time.

    And when the peer review system works poorly … we get people reviewing papers who might be friends or past co-authors of the current paper's authors, who might have a predetermined point of view, and any paper that agrees with their point of view may be given only the most superficial review.

    Regarding UHI, the satellite record is way too short to say much of anything about UHI. Here’s the comparison:

    As you can see, the confidence intervals of the trends are far too wide to draw any conclusions. In fact, the trends of the three records are not statistically distinguishable.

    w.

  381. bender
    Posted Apr 14, 2007 at 7:55 AM | Permalink

    Willis, yes, this is exactly how the peer review process works and what its purpose is. The idea is to make sure the interpretation given to the data is coherent, that the experimentation was done correctly, and that the methods are repeatable. The correctness of the result is for posterity, not peers, to judge – and this through repeated experimentation. Peer review can not achieve that.

  382. MarkW
    Posted Apr 14, 2007 at 9:14 AM | Permalink

    JimD,

    Funny thing about uncertainty. When it comes to aerosols, the modelers admit that there is much uncertainty, which justifies them putting in exactly the amount of aerosols necessary to “fix” their models, despite the fact that real world evidence shows that aerosols don’t work the way modelers assume they do.

    On the other hand, modelers claim that since there is much unknown about the effect of cosmic rays on clouds, that justifies keeping that factor out of the models completely.

    The difference: aerosols can be made to support the conclusion that CO2 is something to worry about. Cosmic rays weaken that same claim.

    All of the models that I am familiar with state that the upper atmosphere should be warming faster than the surface.

    The satellite data definitely does not support the models.

  383. Tom Vonk
    Posted Apr 16, 2007 at 3:08 AM | Permalink

    As far as convergence goes, it is easy to show models converge to certain mathematical inviscid linear solutions, like flow over a hill. And anything, you can get mathematics to do with the NS system, you can get a model to do. Regarding chaos, climate models consist of many interacting systems (atmosphere, ocean, ice, land, vegetation, etc.), and therefore have the complexity to produce chaos if such existed. However, as I began this part of the argument with,
    the century time scale is short for chaotic transitions in something as highly inertial as the climate system, and it is quite non-chaotic, and in a sense boring and predictable, when you only run these models 100 years. CO2 traps
    heat in the system, and the atmosphere warms. No surprise, and what grounds are there for expecting a surprise?
    How else do you solve this question, other than with models?

    So some models converge to certain, linear solutions?
    As the climate is neither certain nor linear, what does that say?
    I have tons of examples where models converge to particular solutions in particular circumstances, yet such examples don't even begin to hint at what general N-S modelling would converge to.
    As for the chaotic behavior, it would be nice if you stopped repeating the same mantra and brought some mathematically supported argument, if any.
    So the climate is non-chaotic and predictable?
    What, then, would be the parameters that are so absolutely predictable with 99.9% certainty?
    The average temperature? The average humidity? The average cloudiness? The average precipitation?
    Where is that magical time-scale threshold that transforms a chaotic system into a non-chaotic one?
    Where is the proof (not that it is likely one will ever exist) that global space averages are totally independent of the distribution of local space averages and their dynamics?
    You should really look a bit more in depth at how the Lorenz system behaves to understand what chaotic means.
    Approximating the climate as a steady-state system in equilibrium is really the degree zero of physics.

    What should we do with computers in climate modelling?
    Mothball them and spend the wasted billions on the fundamental physics of non-linear chaotic systems.
    Once we understand those better, and especially understand what the computers can and cannot do, then we can make some educated choices.

  384. Gerald Browning
    Posted Apr 16, 2007 at 12:37 PM | Permalink

    bender (#383) and Tom (#385),

    When used appropriately, computers and numerical models can be wonderful
    tools. Unfortunately, in many disciplines (and I certainly include numerical weather prediction and climate modeling in these categories
    for reasons that have become quite evident on this thread), these tools have become severely abused. It is quite easy to add sufficient dissipation to almost any numerical approximation (including a numerical approximation of an ill posed system such as the hydrostatic system as discussed
    on this thread) in order to obtain colorful numerical plots that many times have no bearing on reality. These results are grouped together in a manuscript, reviewed by a good old boys and girls club of peers, and many times accepted without question. When someone is willing to question the actual science involved, new results can be obtained, but many times at the expense of the questioning scientist’s career. This has happened many times in the past and will continue to happen as long as the reviews are politically motivated and/or sloppy. It is my personal belief that the more rigorous the scientific field, the less likely these problems are to occur. But I have seen politically motivated reviews even in the most rigorous areas. A sad commentary on the current state of science abused
    by the misuse of these tools.

    Jerry

  385. Gerald Browning
    Posted Apr 17, 2007 at 7:33 PM | Permalink

    Tom (#385),

    Jim Dudhia stated:

    And anything, you can get mathematics to do with the NS system, you can get a model to do.

    What Jim Dudhia obviously did not say is that you can get a numerical model to do anything you want it to do by adding large, unphysical dissipation, fudging (smoothing) all of the boundary conditions, and tuning the myriad parameters (forcing). IMHO this is not science and a kind description might be that it is trial and error.

    Jerry

  386. Tom Vonk
    Posted Apr 18, 2007 at 5:53 AM | Permalink

    Jerry #387

    Yes, I know that.
    As has been stated several times, a model can't do anything with the simple chaotic Lorenz system except state the obvious, namely that numerical trajectories depend on unphysical parameters like the time and space steps.
    What the mathematics can do is state that there exists a strange attractor, which no numerical model would have found.
    The reason for that is easy and a very general one – questions concerning existence (or its more complicated variant, uniform convergence) are unreachable through number crunching, because it would need an infinite time.
    Could a numerical simulation ever prove the Syracuse conjecture?
    I guess not.
    The faith of some people that a numerical computer run could replace understanding or a general proof has always astounded me :-)
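
    To make the point about numerical trajectories concrete, a small sketch (standard Lorenz-63 parameters, fixed-step RK4, assumed time steps): the same initial condition integrated with two different time steps gives pointwise trajectories that eventually part company completely, even though both integrations are numerically stable:

    # The same Lorenz-63 initial condition integrated with two different time
    # steps (fixed-step RK4). The pointwise trajectories eventually diverge,
    # illustrating the dependence of chaotic trajectories on numerical parameters.
    import numpy as np

    def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_run(v0, dt, t_end):
        v = np.array(v0, dtype=float)
        for _ in range(int(round(t_end / dt))):
            k1 = lorenz(v)
            k2 = lorenz(v + 0.5 * dt * k1)
            k3 = lorenz(v + 0.5 * dt * k2)
            k4 = lorenz(v + dt * k3)
            v = v + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        return v

    a = rk4_run((1.0, 1.0, 1.0), dt=0.01, t_end=40.0)
    b = rk4_run((1.0, 1.0, 1.0), dt=0.005, t_end=40.0)
    print("state at t = 40 with dt = 0.01 :", a)
    print("state at t = 40 with dt = 0.005:", b)   # typically far from the first run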

  387. Posted Apr 21, 2007 at 1:35 PM | Permalink

    Just wanted to provide further evidence that discontinuous forcing, such as in the Grell moist convective scheme used
    in MM5 (and also WRF?), destroys numerical accuracy
    (of leap-frog and other schemes):

    Sensitivity Analysis of the MM5 Mesoscale Weather Model to
    Initial Temperature Data using Automatic Differentiation
    by C. H. Bischof, G. D Pusch and R. Knoesel
    Argonne National Labs

    They show how discontinuous forcing triggers so-called
    convective bombs or grid point storms that exhibit +/-
    oscillations. The order of accuracy is reduced to zero.

  388. Gerald Browning
    Posted Apr 21, 2007 at 3:48 PM | Permalink

    Steve (#389),

    Thanks for coming on line.
    Has the manuscript appeared in a journal or is it an Argonne tech note?

    Jerry

  389. esceptico
    Posted Apr 22, 2007 at 1:22 AM | Permalink

    Bischof, automatic differentiation and other MM5 papers

    http://www.sc.rwth-aachen.de/bischof/

    http://www.autodiff.org/?module=Applications&application=SAoaMWM

    http://www-fp.mcs.anl.gov/autodiff/tech_reports.html

    ftp://info.mcs.anl.gov/pub/tech_reports/reports/P532.ps.Z

  391. Posted Apr 22, 2007 at 6:36 AM | Permalink

    Re: # 389 and following

    It should not be necessary to use a ‘big hammer’ tool such as automatic differentiation to re-discover the obvious. All discontinuities in all aspects of both the continuous and discrete equations have the potential to reduce the order of the truncation error to zero. A simple convergence study will reveal the actual order of the discrete approximations. All functions, and at least the first derivatives, must be continuous. See # 62 above.
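
    For readers unfamiliar with the procedure, here is a minimal sketch of such a convergence study on a toy problem with a known exact solution (forward Euler on y' = -y; the observed order is estimated from errors at successively halved steps):

    # Minimal convergence study: estimate the observed order of accuracy from
    # errors at successively halved step sizes (toy ODE y' = -y, exact solution
    # exp(-t), integrated with forward Euler; observed order should approach 1).
    import math

    def euler_error(dt, t_end=1.0):
        y, n_steps = 1.0, int(round(t_end / dt))
        for _ in range(n_steps):
            y += dt * (-y)
        return abs(y - math.exp(-t_end))

    errors = [euler_error(dt) for dt in (0.1, 0.05, 0.025, 0.0125)]
    orders = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
    print("errors        :", ["%.2e" % e for e in errors])
    print("observed order:", ["%.2f" % p for p in orders])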

    For how much longer will the basic fundamentals of analysis of numerical methods continue to be ignored and simple true facts be re-discovered while at the same time ‘predictions’ using fundamentally flawed models and methods are presented?

    The 'chaotic response' seen in NWP and GCM codes is all numerical error. Until all the potential sources of such numerical errors have been eliminated, the calculated numbers do not represent solutions to even the discrete approximations. Additional discussions are given here. All comments and corrections for the material presented there are appreciated.

    The failure to calculate observed data more accurately is due firstly to numerical errors and additionally to model errors. Chaos has nothing to do with it.

  392. Posted Apr 22, 2007 at 8:53 AM | Permalink

    Re:389 – 393

    I’d be glad to post the article on my web site
    or here, if that is of interest.

    Steve

  393. Gerald Browning
    Posted Apr 22, 2007 at 12:46 PM | Permalink

    Dan Hughes (#393),

    We are in agreement. The obvious question is which agencies are continuing
    to fund such nonsense at the expense of quality scientific research.

    Jerry

  394. Jim D
    Posted Apr 23, 2007 at 4:52 PM | Permalink

    Several responses for issues raised since my last post (#381)

    CO2 and the satellite record. Yes, we can wait, but I think it is already proven.

    Mathematics and models. I was referring to math solutions of the NS equations, and
    saying that inviscid and viscous solutions are easily reproduced in models and are often
    used as numerical checks for model consistency with the NS equations.

    Chaos and climate. The climate has not been recently chaotic on the century scale,
    and while CO2 forcing is significant, it is not yet predicted that a chaotic transition
    will occur. But if the Gulf Stream stops soon, as is thought unlikely, or Greenland’s
    ice sheet slides into the ocean, who knows? We follow the consequences of
    global warming with interest. If it is chaotic, it can only be worse.

    Cosmic rays. If you can think how to put a cosmic ray effect into climate models,
    let me know, especially if you believe this is a forcing that will impact the
    future climate more than it has in the past.

    Models and discontinuous forcing. I believe Steve’s was a response to Jerry’s point
    that leapfrog is stable without time filters. This is the kind of thing time
    filters alleviate. Convective schemes provide a discontinuous forcing that cannot
    be handled well by low-order time schemes. However, convection, both in models and in the
    real atmosphere, is a source of sensitivity and leads to uncertainty that limits
    deterministic weather predictability. The best we can do is probabilities, like saying
    tonight eastern Colorado may well have severe storms, but not pinpointing the towns.
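
    For reference, here is a minimal sketch of the kind of time filter mentioned above: a Robert-Asselin filter applied to a leapfrog step, shown on the simple oscillation equation du/dt = i*omega*u (the test equation and the filter coefficient are assumed for illustration, not taken from any particular model):

    # Leapfrog with a Robert-Asselin time filter on the oscillation equation
    # du/dt = i*omega*u (a standard test problem; gamma is an illustrative value).
    import numpy as np

    omega, dt, gamma, nsteps = 1.0, 0.1, 0.06, 400

    u_prev = 1.0 + 0.0j                          # u at time level n-1
    u_now = u_prev * np.exp(1j * omega * dt)     # exact first step to start leapfrog

    for _ in range(nsteps):
        u_next = u_prev + 2.0 * dt * (1j * omega * u_now)                 # leapfrog step
        u_now_filtered = u_now + gamma * (u_next - 2.0 * u_now + u_prev)  # Robert-Asselin filter
        u_prev, u_now = u_now_filtered, u_next

    print("amplitude after", nsteps, "steps:", abs(u_now))   # the filter weakly damps the solution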

    Error and models. The error is not all numerical. A larger part is in the initial state.
    Even a perfect model cannot give a perfect forecast without a perfect initial state,
    and observations limit that accuracy. Studies have shown that independent models
    have errors that cluster more with each other than with the true state when using
    the same initial data. The initial state is crucial in weather prediction, which is
    why the data assimilation research field is an important one.

    Peer review. Yes, I agree that peer review is hit and miss, depending who gets to review
    the paper. Perhaps it is too early for the aerosol impact papers to be judged, and
    we should await more studies. I would have thought that the papers in the IPCC report
    would have had more scrutiny, however.

  395. MarkW
    Posted Apr 24, 2007 at 5:13 AM | Permalink

    YOu put cosmic rays in, the same way you put aerosols in.
    You guess.

  396. Gerald Browning
    Posted Apr 24, 2007 at 12:45 PM | Permalink

    Jimy Dudhia (#396),

    It seems that you are prone to twist others' comments to suit your own needs. Steve Thomas clearly said that rough forcing had a negative impact on the "leapfrog and other schemes". And as usual you failed to read the literature in your own field. Try reading the manuscript

    Browning, G. L., and H.-O. Kreiss, 1994.
    The Impact of Rough Forcing on Systems with Multiple Time Scales
    JAS, 51, 369-383

    that analytically shows that rough forcing has a bad impact on the continuum solution of a system.

    With regard to the currently used numerical method in WRF, read the manuscript

    Browning, G. L., and H.-O. Kreiss, 1994.
    Splitting Methods for Problems with Different Timescales
    MWR, 122, 2614-2622

    that shows that the WRF time splitting method is unstable. Note that Joe Klemp and Bill Skamarock managed to force this manuscript to appear as a note in MWR despite its rather substantial mathematical analysis and the reputation of its two authors. A sad statement on the quality of the journal, the Editor, and the political influence of numerical modeling groups.

    And finally, even if the numerical errors are discounted, assimilation will only work if the forcings are smooth and accurate:

    Browning, G. L., and H.-O. Kreiss, 1996.
    Analysis of Periodic Updating for Systems with Multiple Timescales.
    JAS, 53, 335-348.

    Lu, C., and G. L. Browning, 2000.
    Discontinuous Forcing Generating Rough Initial Conditions in 4DVAR Data Assimilation.
    JAS, 57, 1646-1656

    Clearly your scientific credibility is dubious at best given that you twist others' words to suit your response, do not read the literature in your own area (or ignore the results to suit the wishes of the WRF modeling group), and defend numerical models based on no credible scientific evidence or manuscripts. For those that want to know, the funding for this "science" is coming from NCAR (through NSF – the same institution that refused to insist that proxy data be archived), NOAA (uses ill posed hydrostatic models at fine scales with large dissipation), and the DOD.

    Jerry

  397. Jim D
    Posted Apr 24, 2007 at 10:32 PM | Permalink

    Jerry,
    Thanks for your constructive comments.
    I agree rough forcing leads to poor-looking numerical solutions.
    It would be amazing if it didn’t, and a sure sign of over-smoothing.
    The time-splitting method is unstable if not used carefully. A little
    experimentation shows that simple and selective acoustic-mode filters stabilize it, as is
    clear in the papers describing the use of sound-wave time-splitting.
    About 1000 published papers have used MM5 and WRF successfully. How can those be ignored
    when talking about the literature? I think one of those might have noticed an instability
    by now if it existed. Not only WRF and MM5, but RAMS too. Are you going to
    criticize Cotton, Tripoli, Pielke and other RAMS developers and users too? How about
    NCEP’s NMM model using time-splitting for the US national weather forecasts,
    as also do German, French and Japanese weather models? Unstable methods don’t become so popular.

  398. Gerald Browning
    Posted Apr 24, 2007 at 11:35 PM | Permalink

    Jimy Dudhia (#399),

    Spare the false politeness. Quantity and quality are two very different things.
    How many manuscripts were published based on Anthes MM4 model that is both ill posed for the IBVP and the IVP? How were these results obtained under such adverse circumstances? It is well known that the forcing parameters in the models in the majority of these publications are tuned to overcome the unphysically large dissipation to provide results similar to reality for a special case and then published (as shown in the simple mathematical example on this string). The problem with this approach is exactly as stated earlier and also seen in the NAS panel refusal to demand that the proxy data be archived, i.e. the manuscripts are sent to the same group of buddies and accepted. Then they are called peer reviewed. What a joke.
    Why don't you send the manuscripts to a numerical analyst? As Dan Hughes has
    stated, they wouldn't pass muster under that scrutiny. And why not have a
    PDE specialist, numerical analyst, and statistician look at the IPCC report? When Wegman looked at the report, he considered the statistical results to be absurd, exactly as proved by Steve M. I admire Steve M. for his honesty and have no respect for the brand of "science" advocated by your illustrious modeling group or others that play the same game. Isn't it interesting that Joe Klemp and Bill Skamarock pulled strings to prevent the publication of our splitting manuscript in MWR? Were they a bit concerned by the analysis that showed that their vaunted numerical method was unstable? Would it reveal that the multitude of manuscripts had been fudged with large dissipation and tuned forcing?

    Did you read any of the manuscripts and if so which one so I can ask the appropriate questions.

    And please describe to me how data assimilation will be used in the ill posed hydrostatic climate models to forecast the future.

    Jerry

  399. MarkW
    Posted Apr 25, 2007 at 5:02 AM | Permalink

    JimD,

    I’ve asked you this before. Why is it that you are willing to wait until everything is known about one phenomena (cosmic rays) before putting it into your models, but other phenomena (aerosol) you are eager to rush them into your models before much is known about them?

    Is it because the aerosols protect the conclusion that you want to support, while cosmic rays undercut it?

  400. Gerald Browning
    Posted Apr 25, 2007 at 12:00 PM | Permalink

    Jimy Dudhia (#399),

    BTW, Roger Pielke Sr. was the editor for several of our JAS manuscripts that were rejected by members of your lab in spite of their mathematical proofs and illustrative numerical examples. Only when the manuscripts were given to reviewers outside of NCAR and the modeling community did they pass the peer review process. In fact, one of those manuscripts later received a NOAA publication award. The upcoming manuscript by Page et al. in MWR
    was given to an editor outside of NCAR, has been accepted, and will appear shortly. I was pleased to write a letter of recommendation for this fine young scientist.

    Roger Pielke Jr. reads this site and both Steve M. and Roger have accused some members of your lab of using information from their blogs
    without proper credit. There seems to be a number of problems in your lab.

    And if Roger Pielke Jr. sees something that I have misstated, he is welcome to discuss it with his father who can bring it up on this blog. I have nothing to hide and am willing to let future generations judge your work and ours.

    Jerry

  401. Jim D
    Posted Apr 25, 2007 at 8:00 PM | Permalink

    #401 MarkW,
    If there is an equation for cosmic ray nuclei helping condensation of cloud droplets (I am guessing that is
    the proposed mechanism), it would need to be demonstrated that this is of similar magnitude to all the other
    nucleation mechanisms before it is worth including. Aerosols, on the other hand, have observable
    effects on cloud nucleation in dirty air. Is cosmic ray nucleation observable or just part of the background noise?

  402. Jim D
    Posted Apr 25, 2007 at 8:43 PM | Permalink

    Jerry,
    The papers describing the time-splitting scheme and the sound-wave damping methods
    go back to Durran and Klemp (1983), Skamarock and Klemp (1992, 1994), and mine (Dudhia 1993).
    Durran has written a definitive book on numerical methods. It really is simple to stabilize sound waves.
    I am not sure what your preferred method is, but fully implicit methods used by some Canadian
    and French models have much less selective damping because they don’t separate sound and
    gravity waves at all, and basically damp everything. Semi-lagrangian models also damp
    when used with long timesteps, but that is more hidden.
    I have not yet seen a model that treats sound waves without some implicit or
    explicit damping. The key is to damp selectively, and not affect gravity waves or
    advection processes, which is how WRF came to its current form.

    The model is not tuned in any way to give a particular result, like flow over a hill.
    The same model and dynamics parameters are used for ideal and real-data cases, just with
    different parameterizations and with several widely accepted sub-grid turbulence schemes
    for the user to choose from.

    In addition to RAMS, I also forgot to mention COAMPS, the Navy model, uses time-splitting.
    All these rival groups, in many ways, came to the same methods after studying the options
    available. The scheme is particularly suited to parallel computers because it requires
    no global solvers for Poisson or Helmholtz pressure equations, like some other methods. This use of
    only localized calculations is a major benefit on today’s larger computers.

    In regard to the data assimilation question, that is a method for improved initial conditions
    for weather forecasting. Climate models would not benefit at all from it. Climate is not an
    initial value problem in the sense that it would not matter if you started a 100-year climate run
    from today’s state or last Tuesday’s for the purposes of climate prediction. This seems
    to be a source of much confusion here.

  403. Gerald Browning
    Posted Apr 26, 2007 at 12:48 AM | Permalink

    Jimy Dudhia (#404),

    As usual you did not answer my questions. How many MM4 (ill posed hydrostatic) model manuscripts were published? Please answer that question, as then I think the general reader will have a pretty good idea of the quality of the "science" to expect from your modeling group.

    And now we design numerical methods around the type of computer and ignore their stability? There’s a new concept.

    Is Durran a numerical analyst of Kreiss' caliber? Not even close. I suggest you read the Oliger, Sundstrom, and Kreiss book on numerical methods for PDE's. These are famous and reputable applied mathematicians and numerical analysts. And there are even proofs included for all of the methods. In particular, how to properly do the semi-implicit method that will produce the correct solution.

    As I recall, the Klemp-Wilhelmson splitting method artificially damps the divergence (and requires a time filter that is not second order accurate). That is the wrong thing to do for mesoscale motions where the divergence becomes important. I had no problem seeing the instability of the method when I coded it for the MWR manuscript. And you forgot to mention what happens at open lateral boundaries where you have arbitrarily added dissipation. You are not interested in accurate numerics, just in producing some output.

    I am still waiting for you to run the well documented development of a mesoscale storm shown in our multiscale manuscript when all of your boundary games and dissipation mechanisms are active. You don’t seem too excited to do so. Is it possible that it might show exactly what I have been saying?

    I still have not heard from Roger Pielke Sr. Maybe you should call him. Or does he have a higher standard of ethics than the editor that Joe and Bill managed to influence at MWR? BTW, that same editor later wanted to work with me and confided that he was in a bad position when Joe and Bill played their little game.

    Were you able to read any of the manuscripts I cited? Evidently not.
    Page’s manuscript will appear shortly. Enjoy.

    From what I can tell, the local forecasters are looking at Doppler radar and satellite imagery to obtain a pretty good idea of how to forecast the weather in the near term and the coming day or so. Why do we need numerical models for this? Evidently mainly to support archaic groups such as yours.

    Jerry

  404. MarkW
    Posted Apr 26, 2007 at 5:16 AM | Permalink

    JimD,

    Let’s see. Since aerosols have a noticeable effect on cloud formation, we can guess what the total amount is, and put it into our models.

    On the other hand, even though cosmic rays have a provable effect on cloud formation, since we don’t know the exact formula, we must keep them out of our models.

    Every time you post, my impression that you are trying to justify the unjustifiable grows stronger.

    If anything, we know more about how cosmic rays affect clouds than we do how aerosols affect them. The only real difference is that aerosols get you where you want to go, cosmic rays don’t.

  405. Jim D
    Posted Apr 27, 2007 at 7:34 PM | Permalink

    MarkW,
    Have you observed the cosmic ray effect on cloud formation? I’m intrigued by what
    that looks like. Usually things that are not observable are also not modelable.
    How many cosmic ray particles are there per aerosol particle? Do many even make it into the
    troposphere? These are the questions.
    Aerosol effects on the other hand are seen in aircraft contrails, and even steamship plumes
    affecting low cumulus droplet densities, and obviously downstream of industrial areas,
    and in maritime versus continental cloud formation. What, if anything, about clouds is noticeable from
    cosmic ray effects, or can’t be explained by other means?

  406. Jim D
    Posted Apr 27, 2007 at 8:02 PM | Permalink

    #405 Jerry,
    MM4 was a model of the 80’s and served its purpose as one of the best of its
    generation. I am not going to defend it as obviously things have moved on in 20 years.
    Models these days, like WRF, serve their purpose too but maybe one day will be outdated,
    but I have yet to see improved methods, especially ones that would be as efficient while remaining accurate.
    If you want to see state-of-the-science simulations look at the current real-time
    WRF forecasts here http://www.wrf-model.org/plots/realtime_3kmconv.php
    Click on the SURFACE menu and check out Maximum Reflectivity in a Column by
    hitting View Forecast for an animation.
    This is being done as an experiment to display model products in a way forecasters
    are familiar with. I would say that it looks like WRF works well enough, despite
    your objections to its basis.

  407. Gerald Browning
    Posted Apr 27, 2007 at 9:27 PM | Permalink

    Jimy Dudhia (#408),

    Answer the question. How many publications were based on Anthes MM4 ill posed hydrostatic model?

    This gives the general reader a good understanding how multiple modeling publications don’t mean a thing without mathematical analysis.

    Jerry

  408. bender
    Posted Apr 28, 2007 at 8:05 AM | Permalink

    Not only do I like Gerald Browning’s papers and his style of argumentation on this blog, but I am growing increasingly sympathetic to his cause, as revealed in some of his comments:

    You are not interested in accurate numerics, just in producing some output.

    General observation: There are indeed modelers out there who are more interested in any answer than an accurate answer. Policy makers love these guys because they can deliver quickly. It’s what they’re delivering that can be problematic.

    … JAS manuscripts that were rejected by members of your lab in spite of their mathematical proofs and illustrative numerical examples. Only when the manuscripts were given to reviewers outside of NCAR and the modeling community did they pass the peer review process. In fact, one of those manuscripts later received a NOAA publication award.

    Ahh, peer review. Manufacturing consensus. Keeping the grant money away from those who might rock the boat. Wegman’s social network hard at work.

    Thanks for posting at CA, Dr. Browning. And thanks to Dr. Dudhia as well. Audit the GCMs indeed.

  409. Dave Dardinger
    Posted Apr 28, 2007 at 9:27 AM | Permalink

    re: #410 Bender

    In essence I agree, but as someone who doesn’t have the math at the tip of my fingers, I wish there were a less time-consuming way of deciding who is right. I’m a skeptic by nature, not just because of training or political inclination. So when I see someone like Dr. Browning pounding Dr. Dudhia rather mercilessly, I eventually start having thoughts like, “What if Dudhia is actually right, but just not as articulate in the blog atmosphere?” I know I should read a bunch of the important papers, but as I say, that’s time consuming.

    But it is useful for people like yourself who have read the papers to weigh in on your conclusions. It doesn’t obviate my responsibility to be informed myself, but I don’t have to feel as much doubt as to what I’d decide if I did do the reading. Thanks!

  410. Posted Apr 28, 2007 at 2:14 PM | Permalink

    Hello all again,

    I have posted a rather longish literature review on some aspects of numerical solutions of complex dynamical systems described by ODEs: Chaos Part 0: Chaotic Response is Numerical Noise. Nice catchy title, isn’t it?

    All comments, and especially corrections, are appreciated.

  411. Gerald Browning
    Posted Apr 28, 2007 at 4:56 PM | Permalink

    Dave (#410),

    I have given Jimy Dudhia an obvious way in which to show who is right, i.e.
    by running the same example in our multiscale manuscript to determine the impact on the numerical accuracy of all of the ad hoc treatments
    of the lateral and vertical boundaries, the questionable numerical method,
    and the large, unphysical dissipation using their latest model.
    No response.

    We have analyzed the time split numerical method and found it to be unstable. Jimy’s response is that, yes it is unstable, but it runs fast.
    No mention of the impact at lateral boundaries or the necessary damping mechanisms. A properly implemented semi-implicit method provided the same solution as the leapfrog system for the multiscale model.

    I have asked Jimy to run the same convergence tests that I did. No response. Others that have run similar unforced tests near a jet
    using two different NCAR models have seen the problem indicated by the mathematical theory.

    Jimy Dudhia raised the issue of quality versus quantity. So I asked how many manuscripts were published using Anthes ill posed hydrostatic MM4 model to show that quality and quantity are two different things. (Of course this also reflects on the peer review process in the corresponding journals.) No response.

    I had Sylvie Gravel (RPN) run a number of cases with the Canadian global weather model (manuscript rejected for obvious reasons but copy available). The same results could be obtained without any forcing except a simple lower order drag as the mathematics shows if the forcings have relative errors O(1). The only thing that keeps the model on track is the periodic updating (assimilation) of wind data near the jet level exactly as expected from the mathematical theory in our updating manuscript. When a simple change in the data updating (assimilation) was made according to mathematical theory, the improvement was dramatic. And now Christian Page (Canadian) has shown that our mathematical theory about the slowly evolving solution in time can be used in a practical model to provide a balanced solution. Jimy’s response?

    I have cited manuscripts that contain mathematical proofs and numerical examples to better understand the theory for every one of my points.
    Jimy’s response – change the subject or twist my words.

    Are we beyond a reasonable doubt at this point? If so, ask Jimy how the accuracy of the WRF model is measured when it is used in forecast mode.
    His response is that they now deal in probabilities. This is no surprise given that there are large uncertainties in the continuum system near a jet, large uncertainties and discontinuities in the forcings, an unstable
    numerical method with questionable damping mechanisms and ad hoc dissipation procedures near the lateral and vertical boundaries. And you ask why the local forecasters are relying on Doppler radar and satellite imagery for the near term?

    And to top this all off, intimidation of an Editor and rejection of multiple manuscripts based on politics even when the manuscripts contained mathematical proofs. IMHO I find all of this quite pathetic.

    Jerry

  412. Jim D
    Posted Apr 29, 2007 at 4:16 PM | Permalink

    Jerry,
    I show results that are clearly good, highly detailed, and realistic, as a response
    to the claims of poor, highly smoothed boundary conditions. Does the
    state-of-the-science result look reasonable or not? A lot of forecasters praise this advance.
    We don’t have to do idealized tests with toy models when full-up ones are demonstrated to work.
    I also came onto this thread to explain why the hydrostatic system is fine if used
    within its range of validity (grid size greater than 10 km, put simply), but you
    still prefer to call it ill-posed when I have taken away the mathematical grounds
    for saying that. It is flat wrong to say that the hydrostatic equation system is ill-posed
    when it is used in its range of validity. Time-splitting, too, I have said, is
    simply stabilized by damping sound waves, but that was ignored too. I feel like I
    am answering the same questions again and again. The good thing is that
    maybe some others here have understood, because they are no longer asking about these issues.

    I am sure MM4 has hundreds of references too in many journals internationally, and the results speak to the issue of
    posedness themselves. As to the issue of quantity, quantity shows quality in this case, because no one
    uses a model for publication that doesn’t give reasonable results. If it doesn’t, they either
    improve it, or go to another model.
    Which model do you prefer and why? Doesn’t the Canadian global
    model share the flaws you keep bringing up here? Yet, you still refer to its results.
    No model is perfect including that one. Semi-implicit methods like theirs need damping too,
    and there you are damping gravity (buoyancy) waves, while for WRF development we
    had a preference for not damping gravity waves. Would you prefer damping gravity waves or not?

    As we know, weather forecast models have to be efficient to be at all useful.
    They are no good if it takes more than 12 hours to give a 24 hour forecast.
    They fundamentally have to balance accuracy and efficiency, which is why I
    bring up computational efficiency a lot. This is more of an issue than in other
    areas of science and engineering because of the real-time applications.

    OK, which ones haven’t I answered? And check back on the thread before saying I didn’t.
    I am trying, but the questions keep coming back.

  413. MarkW
    Posted Apr 30, 2007 at 5:47 AM | Permalink

    JimD,

    You know as well as I do that, given the dispersed nature of cosmic rays, it is impossible to compare an individual spot to another the way you can with aerosols. Your objection is highly disingenuous.

    Likewise, you know as well as I do that cosmic rays have been shown to have an effect on cloud formation in cloud tank experiments.

  414. Gerald Browning
    Posted Apr 30, 2007 at 2:47 PM | Permalink

    Jimy Dudhia (#414),

    You do not define “good” or “realistic” or “detailed” by any quantitative measure. Exactly what I would expect. Show a color picture, but whatever you do don’t check the accuracy using a standard mathematical measure (norm) in a case where the accuracy can be checked. Same old game.

    Please answer the question – how many manuscripts were published using the Anthes ill posed (IVP and IBVP) hydrostatic model? This has a direct bearing on the quality of models that have not passed standard mathematical and numerical tests. You call MM4 a model of the 80’s, yet your manuscript suggesting a change was in 1992. (The ill posedness of the IBVP for the hydrostatic system was shown mathematically in 1976.) Given that MM4 was a quality model based on your standards of quantity, why the change? Shouldn’t it produce the same quality results as the WRF model? Are you now going to redo all of the MM4 manuscripts with the WRF model for pub count?

    When did you predict the last tornado using a model? Why are local weather forecasters using Doppler radar and satellite imagery if the models are so great?

    Reasonable by your standards means that the model produces some output.
    How accurate are the forecasts and how is that accuracy measured?
    Who checks these results quantitatively? Someone from one of the modeling groups (or from your lab, which is now the case).
    Which parameterization did you decide to use of the many different ones?

    BTW you can’t just take away a mathematical proof of ill posedness
    of a continuum PDE system or numerical instability of a numerical method.
    What you can do is add dissipation to a numerical model to hide the
    problems exactly as you have done.

    Just keep avoiding the issues. You aren’t fooling anyone.

    Jerry

  415. Jim D
    Posted Apr 30, 2007 at 8:02 PM | Permalink

    To clarify, my 1993 paper introduced MM5, a nonhydrostatic successor to MM4.
    I have no reason to suppose MM4 was any better or worse than other models of its
    generation. Likewise, in the 90’s MM5 was of a similar generation to COAMPS,
    hydrostatic Eta, NMM, RAMS, RSM, hydrostatic RUC, ARPS, and that’s just in the US.
    Was it better or worse than all those? I don’t know. They all had their advantages and disadvantages.
    I don’t know if you are complaining about the specific models NCAR has been involved in,
    or this long list. If you had a specific complaint, I could give a specific answer.

    What model has ever predicted a tornado? That capability is beyond data assimilation.
    Models have simulated tornadoes, which is different, but not predicted them. It is not
    a model limitation, because we know the equations, and we now finally have the computers.
    It is a data limitation, because we don’t know the initial state well enough in 3d.

    Regarding model accuracy, the best objective evaluation is given by the forecasters
    who use the products, and whose decisions are influenced by them. They are not modelers,
    and aren’t afraid to criticize. Their evaluation is useful, being unbiased. I am taking
    part in an annual program in Norman, Oklahoma this Spring where modelers sit with forecasters at the
    Storm Prediction Center, and these forecasters experiment with the guidance given by new model products.
    This keeps our feet on the ground regarding how models fit in the forecasting process, and
    the problems that remain to be solved, but is also encouraging because it represents a direct
    application of our work in a meaningful area.

    Dissipation: All models have dissipation, except perhaps DNS which is not yet practical for
    the atmosphere over large areas. The aim is to dissipate things that are not resolved,
    and leave things alone that are. The success of this is measured by looking at energy-wavelength spectra for
    which the correct slope is known. Dissipation is not to hide anything. Overly dissipative models are
    not respected, because they inefficiently use computer memory to resolve a grid that is much smaller than the
    effective resolution.
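
    The check just described takes only a few lines to illustrate: build a synthetic 1D field with a k^(-5/3) energy spectrum, apply an arbitrary scale-selective damping factor as a crude stand-in for model dissipation, and fit log-log slopes. The resolved range keeps its slope while the tail near the grid scale steepens, which is what an “effective resolution” coarser than the grid looks like in spectral space. Toy values throughout; this is not any model’s diagnostic code.

    # Synthetic spectrum check: -5/3 range plus an artificially damped tail.
    # Illustrative only; all coefficients are made-up values.
    import numpy as np

    n = 1024
    k = np.fft.rfftfreq(n, d=1.0) * n            # integer wavenumbers 0..n/2
    rng = np.random.default_rng(0)
    phases = np.exp(2j * np.pi * rng.random(k.size))
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-5.0 / 6.0)              # E(k) ~ |u_hat|^2 ~ k^(-5/3)
    u_hat = amp * phases

    damped_hat = u_hat * np.exp(-1e-9 * k**4)    # crude stand-in for hyperdiffusion

    def fitted_slope(f_hat, kmin, kmax):
        sel = (k >= kmin) & (k <= kmax)
        return np.polyfit(np.log(k[sel]), np.log(np.abs(f_hat[sel])**2), 1)[0]

    print("slope over the resolved range:", fitted_slope(u_hat, 4, 100))        # about -5/3
    print("slope near the grid scale    :", fitted_slope(damped_hat, 200, 500)) # much steeper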

  416. Jim D
    Posted Apr 30, 2007 at 8:11 PM | Permalink

    MarkW,
    I am fairly sure that the highly supersaturated clean conditions in cloud chambers
    don’t occur naturally very often. Those conditions are prevented by the presence of aerosols.

  417. Dave Dardinger
    Posted Apr 30, 2007 at 11:23 PM | Permalink

    re:418,

    I probably shouldn’t get into an argument I don’t have expertise in, but I have a couple of questions. 1. If supersaturated conditions don’t occur very often, then what are jet contrails all about? And 2. It’d seem to me that every rising air column, except perhaps over desert areas, will eventually reach a point of saturation, and then the question is just how many droplet-forming particles are available. If there aren’t enough to produce all the droplets possible, then some degree of supersaturation will occur. Are there studies about just how likely a paucity of droplet-forming centers is to occur, at the margins I mean? Obviously a lot of the time there are plenty. But if, say, 5% of the time droplets aren’t forming, then this has an effect.

  418. MarkW
    Posted May 1, 2007 at 5:11 AM | Permalink

    JimD,

    You are supposed to be an expert in models, yet the best you can do to defend your exclusion of cosmic rays from them is,
    “I’m fairly sure”?

    [snip- calm down please. ]

  419. Gerald Browning
    Posted May 1, 2007 at 3:44 PM | Permalink

    Jimy Dudhia (#417),

    IMHO, I must say that you are in the right group at NCAR and have been
    trained well in deception by your mentors.

    To start, I was not supporting the Canadian model. Sylvie’s group was the only global modeling group that was willing to run the tests. Kudos to her group
    for their honesty. What Sylvie’s results conclusively showed is that the forcing (parameterizations) in the Canadian global weather prediction model had O(1) relative errors and that those terms could be left out in the first 36 hours with no change in the forecast, exactly as expected from mathematical theory for forcing terms with errors of that size. Thus the parameterizations were unnecessary during that time period and constituted a misuse of computer resources, which have continually been claimed to be the limiting factor in forecast accuracy (just like you do).

    During and after that time period, the accuracy of the forecast deteriorated rapidly. The only thing that kept the Canadian global weather prediction model on track during that time period was the periodic updating (assimilation) of wind data in the jet stream. After the simple change that Sylvie made in the data assimilation program based on the Bounded Derivative mathematical theory, the Canadian global weather model outperformed the NOAA National Weather Service global weather model by an amount unheard of before that time (phrasing by others that saw the results) even though the NOAA global model used a higher order accurate (spectral) numerical method, again obviously a misuse of numerical methods and computer resources. I am more than happy to post the unpublished manuscript on this site so that all can judge for themselves what is going on in the global models.
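
    For readers unfamiliar with what the periodic updating (assimilation) of wind data does, here is a generic nudging (Newtonian relaxation) sketch using the Lorenz-63 equations as a stand-in for a forecast model. It is only meant to illustrate the idea that relaxing one observed variable toward the truth keeps an otherwise diverging run on track; the relaxation time tau is a made-up value, and this is not the Canadian assimilation scheme.

    # Nudging sketch: free run vs. a run whose x variable is relaxed toward
    # "observations" taken from a truth run. Purely illustrative toy.
    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x, y, z = s
        return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

    def step(s, dt, nudge_to=None, tau=0.02):
        ds = lorenz(s)
        if nudge_to is not None:
            ds[0] += (nudge_to - s[0]) / tau          # relax x toward the observed value
        return s + dt * ds                            # forward Euler, small step

    dt, nsteps = 0.002, 15000
    truth = np.array([1.0, 1.0, 1.0])
    free = truth + np.array([1e-3, 0.0, 0.0])         # slightly wrong initial state
    nudged = free.copy()

    for _ in range(nsteps):
        obs = truth[0]                                # "observe" the truth's x
        truth = step(truth, dt)
        free = step(free, dt)
        nudged = step(nudged, dt, nudge_to=obs)

    print("free-run error  :", np.linalg.norm(free - truth))    # grows to attractor size
    print("nudged-run error:", np.linalg.norm(nudged - truth))  # stays far smaller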

    Now these global models can be used to provide initial data and boundary conditions for limited area models such as WRF (for a clear picture of how this should be done in a mathematical manner according to theory, see the Multiscale Initialization manuscript by Kreiss and me). However, there are obviously a number of problems when doing so. The first is that the
    initial data for the limited area model does not contain any of the smaller scale features possible in the limited area model (I will go into this in detail in a later comment) and the data is from a global model that is based on a different system of equations (hydrostatic system), parameterizations (forcings) and viscosity. Thus there are discontinuities in the dynamics (hydrostatic system for the global model that is ill posed for the IVP and IBVP versus nonhydrostatic system for the limited area model), the parameterizations (forcings), viscosity, and mesh size at any lateral boundary. This is one reason for all of the ad hoc smoothing near the lateral boundaries of the WRF model. A feature that is not resolved by the global model will not be portrayed accurately in incoming flow and a small scale feature that develops inside the limited area model that is not resolved by the global model will conflict with the global model boundary conditions on outflow. Both of these problems will destroy the accuracy of the numerical solution of the limited area model very quickly. In fact,
    in many limited area manuscripts the contour plots were trimmed
    before publication to hide the problems near the lateral boundaries.

    Gravity waves are routinely suppressed in global weather prediction models
    as they are considered to be unimportant for large scale motions. But they can and will be automatically generated by latent heating (smaller scale forcing) in a limited area model (see mathematical discussion and proofs in above Multiscale reference). These waves obviously
    will also conflict with the boundary information from the global model
    that does not contain those waves. We discuss very clearly in our manuscript how gravity waves are generated and the impact of not treating them properly if they are responsible for triggering storms.

    I think it is very clear why you won’t run any of the “toy” problems as they will reveal all of the flaws in WRF as discussed above.

    Because you have claimed such forecast accuracy from WRF, please show WRF forecasts for a number of consecutive days, the observational data for those days, the differences between the two contour plots and the mathematical norm of that difference at the mandatory levels.
    I would surmise that I will be able to pick out the above problems with this info.

    Have you called Roger Pielke Sr. yet to see if I have misinformed
    the general reader?

    Jerry

  420. Ralph Becket
    Posted May 1, 2007 at 8:11 PM | Permalink

    #421: I am more than happy to post the unpublished manuscript on this site so that all can judge for themselves what is going on in the global models.
    Please do: I’d be very interested.

  421. John Baltutis
    Posted May 2, 2007 at 1:53 AM | Permalink

    Re: #422

    Better would be a link to a copy that we could download. Maybe Steve M. or John A. could provide that convenience.

  422. Tom Vonk
    Posted May 2, 2007 at 5:23 AM | Permalink

    In fact I recently posted a blog on a paper by Filippo Giorgi (see). What he is doing is a transition in thinking. He concludes there are components of a boundary problem and components of an initial value problem with respect to 30 year predictions. If it’s a combination of the two, it therefore is an initial value problem and therefore has to be treated just like weather prediction!

    What’s the difference between a boundary value and initial value problem?

    Initial value means it matters what you start your model with, what your temperature is in the atmosphere, temperature in ocean, how vegetation is distributed, etc. They say it doesn’t matter what this initial distribution is; the results will equilibrate after some time, the averages will become the same.

    The problem is that the boundaries also change with time. These are not real boundaries; these are interfaces between the atmosphere and ocean, atmosphere and land, and land and ocean. These are all interactive and coupled.

    There are two definitions of climate: 1) long term weather statistics or 2) climate is made up of the ocean, land ice sheets and the atmosphere. The latter definition is adopted by a 2005 NRC report on radiative climate forcings (see). This second definition indicates that it depends what you start your model with, e.g. if you start in the year 1950 with a different ocean distribution, you will get different weather statistics 50 years from then.

    The question is why should we expect the climate system to behave in such a linear well behaved fashion when we know weather doesn’t? In the Rial et al. paper (see), we show from the observations that, on a variety of time scales, climate has these jumps, these transitions, and these are not predicted by models. These are clearly non-linear and are clearly related to what you start your climate system with.

    That is a very nice summary of the problem at R. Pielke’s site. I will only quote the passage above, but it deserves to be read in its entirety.
    It is also useful to read http://blue.atmos.colostate.edu/publications/pdf/R-260.pdf .

    There is the well-informed and mathematical site of Dan Hughes.
    There are Jerry’s papers.
    There is any amount of relevant mathematical material.

    I am beginning to believe that the core of the problem is that the “climate modellers” are just that – modellers.
    They don’t do mathematics, they don’t care about mathematics, and they purposefully ignore questions of convergence and sensitivity.
    Most probably they don’t even understand the problem and care only for the speed of a numerical output, regardless of its relevance.

    I have shown here, and R. Pielke is also saying the same thing: a highly non-linear, complex chaotic system stays non-linear and chaotic at all time scales.
    No amount of averaging (or, to use the modellers’ slang, “long term global means”) will change that.
    Unfortunately, despite the dozens of posts, Jim has failed to bring a single argument showing that the climate is not chaotic and is thus predictable.
    The reason is simple – there is none, because the assertion “climate is non-chaotic” is false.
    Every mathematician dealing with those issues has already proven that, but for some strange reason many modellers continue to ignore the evidence.
    Arguments like “CO2 is not a subtle thing to do to the atmosphere, it is big, and it will not be a subtle thing that cancels it” show a deep misunderstanding of how chaotic systems behave.
    Obviously this rests on the wrong belief that the system responds in a linear way – small changes cause small responses and big changes cause big responses.
    From that follow three other very wrong beliefs:
    a) initial conditions don’t matter
    b) subgrid dynamics don’t matter
    c) time steps don’t matter

    So-called climate modelling is the only branch of science (is it still science?) that proposes a theory based on wrong hypotheses and whose results cannot be verified by experience.
    Who would believe a financial analyst who says, “I don’t know the DJ index for the next year. I don’t know it for the year after, nor for any other following years. Of course I also don’t know the values of the companies constituting the DJ. But you can put all your money on 2100, because I know with certitude what the DJ will be in 100 years.”?
    I wonder how many mathematical proofs it will take to stop wasting billions on meaningless “predictions” for 2100.

  423. bender
    Posted May 2, 2007 at 7:33 AM | Permalink

    #424 This IMHO is one of the most important points to be made that even smart people get wrong very frequently. If weather is chaotic at all scales then climate too is chaotic, such that structural features of the ocean-atmosphere circulation are not as invariant as modelers would have them be. It would be a useful thing to point out specific pieces of literature that get this wrong. Literature that assumes that because climatology = long-term weather, statistical averages and “confidence” intervals serve to put some bounds on the chaotic behavior of weather. If growth in physical systems is exponential, then features like jets and ocean currents are not immutable. Look at the multi-million year record of climate, which includes the PETM. It varies at all time scales. How then can one make use of the law of large numbers (central limit theorem) that is the basis for statistics? You cannot. Climatology as a statistical enterprise is an illusion resulting from the very short time over which we have been measuring weather accurately on this planet.

    I do not think RC posters understand this very basic thing about weather vs. climate. I was a reluctant convert to statistical climatology about a year ago. Now I’m recanting. I don’t think the experts have got it right. There is no central tendency in Earth’s climate.

    Disclaimer: I am not a climatologist. I’m just like you: trying to learn what climatology is.

  424. Steve McIntyre
    Posted May 2, 2007 at 9:28 AM | Permalink

    bender – Mandelbrot, the famous mathematician and inventor of fractals, commented that he was unable to identify a scale at which you could meaningfully take averages. He observed variability on all scales.

  425. Mark T.
    Posted May 2, 2007 at 9:40 AM | Permalink

    That’s a fundamental feature of chaos theory. A chaotic system exhibits the same amount of chaos at every level of observation.

    Mark

  426. Neil Haven
    Posted May 2, 2007 at 11:04 AM | Permalink

    Bender,

    It is one thing to assert the non-existence of a central tendency for a climate parameter; it is another to assert the non-existence of bounds for the variation of that climate parameter. I can agree with you when you doubt the existence of central tendencies (in the sense of ‘most probable’ values that are ‘representative’ of some familiar distribution), and, given Jerry’s arguments, it seems unlikely that numerical climate models calculate such things, but I have difficulty with the idea that weather/climate is chaotic at all scales. (For one thing, I’m not sure what you mean by that.) For example, it may be nonsense to speak of a representative temperature for Nevada in January, but surely it is a scientific claim with some merit to state that the surface temperatures in Nevada in January will range between -100 degrees C and +400 degrees C, so long as the Sun behaves itself and barring major tectonic activity?

    In the jargon of chaos-theory: although trajectories through a region of a dynamical phase-space may exhibit exponential sensitivity to initial perturbations (and thus potentially exhibit no meaningful central tendency), their divergence may still have bounded extent.

    The question for AGW climatologists may be whether the bounds on the exponential divergence of the climate system (the so-called natural climate variation) subsume the results from human-caused perturbations.
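
    To make this concrete with the usual cartoon: in the Lorenz-63 system (standing in for nothing more than “a chaotic system”), two trajectories started a tiny distance apart separate roughly exponentially at first, yet the separation saturates at the diameter of the attractor and both runs stay within fixed bounds the whole time. The step size and perturbation below are arbitrary toy choices.

    # Exponential sensitivity with bounded extent, on the classical Lorenz-63 system.
    import numpy as np

    def rhs(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x, y, z = s
        return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

    def rk4(s, dt):
        k1 = rhs(s)
        k2 = rhs(s + 0.5*dt*k1)
        k3 = rhs(s + 0.5*dt*k2)
        k4 = rhs(s + dt*k3)
        return s + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

    dt = 0.01
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-8, 0.0, 0.0])            # tiny initial-condition error

    for n in range(1, 4001):
        a, b = rk4(a, dt), rk4(b, dt)
        if n % 500 == 0:
            print(f"t={n*dt:5.1f}  separation={np.linalg.norm(a - b):10.3e}  |state|={np.linalg.norm(a):6.1f}")
    # The separation climbs by many orders of magnitude (roughly exponentially)
    # and then saturates near the attractor diameter, while the state itself
    # remains bounded throughout.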

  427. Spence_UK
    Posted May 2, 2007 at 11:16 AM | Permalink

    This has been a fascinating and informative thread, and I’d like to say thanks to those participating. It has been a very interesting read. Thanks in particular to Dr Browning for kicking the discussion off, and to Dr Dudhia for taking time to present what I assume is the “consensus” view. It would be great if you guys could agree and complete an experiment to help to resolve differences in views, as Dr. Browning suggests.

    The issues around chaos and long-term weather have been a particular bone of contention with me. Tom Vonk’s post earlier probably captures most of my view on the subject; bender and Steve also. I really wish a climate scientist would explain to me why climate isn’t chaotic; so far the explanations are usually very weak, going to great efforts to define climate, but at no point do they define chaos, nor do they attempt to define a test for chaos. The test then applied is usually to eyeball a graph and determine whether it looks “random” or not. Of course, chaos can produce patterns as well as random looking output, so this test is atrociously weak.

    The issue of self-similarity / scale invariance is an important one though, and in some ways separable from the issue of chaos. If amplitude fluctuations are greater at longer timescales, as noted above, by long-term averaging, all you do is move to a different scale. Again, this is not necessarily evidence of chaos (both chaotic and non-chaotic systems can exhibit this behaviour), but either way it renders averaging of no use for determining some underlying state.

    Cartoon examples have limited value, but can be useful. Looking at the Mandelbrot set (a simple chaotic system, albeit a discrete system, rather than continuous like the weather/climate system), it is easy to observe self-similarity at different scales, patterns in the output, and unpredictable behaviour. I would note that whilst the chaotic behaviour exists over an infinite range of reducing scales, as you move up in scale there is a finite limit at which point chaotic behaviour stops. I suspect this may well be true of climate as well; as you expand in scale, eventually some other boundary condition steps in, and averaging will begin to work. Looking at long-term proxies, my guess is this probably kicks in at around the 100-million year scale, at which point the temperature varies over its maximum likely range.

    Anything shorter than that, is anybody’s guess. Of course, I could be wrong on longer scales; it might be an illusion due to inadequate data!
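
    The cartoon mentioned above can be generated in a few lines, for anyone who wants to see the “structure at every scale” point directly: the same escape-time calculation rendered over the whole set, and then over a window about 75 times smaller centred near the boundary, still shows structure. The resolution, window and iteration count are arbitrary toy choices.

    # Escape-time rendering of the Mandelbrot iteration z -> z^2 + c, at two scales.
    import numpy as np

    MAX_ITER = 100
    CHARS = " .:-=+*#%@"

    def escape_counts(cx, cy, half_width, n=40):
        xs = np.linspace(cx - half_width, cx + half_width, 2 * n)
        ys = np.linspace(cy - half_width, cy + half_width, n)
        c = xs[None, :] + 1j * ys[:, None]
        z = np.zeros_like(c)
        counts = np.zeros(c.shape, dtype=int)
        for _ in range(MAX_ITER):
            mask = np.abs(z) <= 2.0          # only iterate points not yet escaped
            z[mask] = z[mask] ** 2 + c[mask]
            counts += mask
        return counts

    def show(counts):
        for row in counts:
            print("".join(CHARS[v * (len(CHARS) - 1) // MAX_ITER] for v in row))

    show(escape_counts(-0.5, 0.0, 1.5))       # the whole set
    print()
    show(escape_counts(-0.743, 0.131, 0.02))  # a window ~75x smaller, near the boundary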

  428. bender
    Posted May 2, 2007 at 12:25 PM | Permalink

    Neil Haven,
    Thank you for your comments. Also Steve M and Mark T, with whom I agree.

    The issue is not just boundedness, but within what bounds? No one is going to argue against Neil’s point that temperatures are constrained within the wide bounds he suggests. The issue is far more subtle than that. What is at issue is whether heat fluxes and circulatory features etc. are bounded in the way, and to the degree, assumed by the GCMs. The point is this: if the GCMs are off, then the CO2 sensitivity coefficients that are used to repair the 20th c. model-fit are off. If the CO2 sensitivity coefficient is off, then the IPCC warming projection is off. Follow?

    This helps to answer Climate Tony’s question. The greater the unexplained variability in the past, the greater the uncertainty in the future because of the lower confidence in the GCMs. The GCMs are tuned by a statistically unpunished trial-and-error process of model over-fitting. Therefore you expect them to break down in out of sample trials (i.e. real-world predictions). How much they break down is the real, quantitative issue. AGW believers have confidence that the breakdown will not be so severe as to reduce the calculated CO2 sensitivity coefficient by โ€œmuchโ€. Skeptics aren’t certain that is the case.

    Skeptics want to know how much uncertainty there is in the parameter estimates. Even non-skeptics, such as AGW economists, need to know this, in order to do a cost-benefit analysis of various mitigation/adaptation strategies.

    It’s in everybody’s interest to get this uncertainty quantified. Unfortunately you now have this religious camp of uncertainty deniers to contend with. โ€œMust take action.โ€ โ€œMust get someone other than me to pay.โ€ They’ve turned what was a scientific issue into a moral imperative. And that’s going to make it harder to take appropriate, rational action.

    Therefore audit the GCMs. Maybe no action is the best action? I dunno. I’m not a climatologist.

  429. bender
    Posted May 2, 2007 at 12:45 PM | Permalink

    Oh, and on “central tendency”.

    In a spatially extensive turbulent terawatt heat engine such as Earth you can expect many, many equilibrium states and substates. Although there may exist central tendencies, the problem is that the states toward which the system is tending are not fixed. It would be a mistake to try to think in terms of a single central limit to which a system tends. The attractor is not a point. It’s far more complex than that. Far more complex even than Lorenz’s simple 3-equation “butterfly” attractor. New states would be created and destroyed as the solar-earth-climate system goes through its machinations. On top of that you have planetary perturbations (such as mid-Atlantic CO2 expulsion) capable of shocking the system between super states. Good luck with your Gaussian ideal.

  430. bender
    Posted May 2, 2007 at 1:13 PM | Permalink

    P.S. Please be careful not to misinterpret people’s words when surgically removing phrases out of their carefully crafted context. When I said:

    There is no central tendency in Earth’s climate.

    that was said *in context*. It’s not a thesis; it’s a punctuation mark. Don’t argue the statement; argue the point.

  431. Gerald Browning
    Posted May 2, 2007 at 2:18 PM | Permalink

    I will post Sylvie’s unpublished (rejected) manuscript. I might not have the final version, but one that is sufficient to illustrate all of the main points and one that can be discussed in detail.

    Jerry

  432. bender
    Posted May 2, 2007 at 4:51 PM | Permalink

    My respect for Dr. Gravel’s work goes off the charts, almost to ’11’. If she told me to lick Gore’s boots, I would.

    I would pay to hear the story of the quashed manuscript. Especially if I could see the reviewer commentary. That way I could judge for myself if the reviewers were being objective or defensive, protecting the emperor’s wardrobe as it were.

    $100 in the CA tip jar if Dr. Gravel herself can be enticed to comment on “exponential growth in physical systems”.

  433. jae
    Posted May 2, 2007 at 6:11 PM | Permalink

    According to this treatise, there can be some predictability of climate, despite its chaotic and non-linear characteristics (or am I reading this wrong?):

    However, if, contrary to the IPCC’s attitude, the sun is taken seriously as a dominant factor in climate change, this opens up a possibility to predict climate features correctly without any support by supercomputers. A string of examples will be presented. The chaotic character of weather and climate does not stand in the way of such predictions. Sensitive dependance on initial conditions is only valid with regard to processes within the climate system. E. N. Lorenz has stressed that only non-periodic systems are plagued by limited predictability. External periodic or quasiperiodic systems can positively force their rhythm on the climate system. This is not only the case with the periodic change of day and night and the Milankovitch cycle, but also with variations in solar energy output as far as they are periodic or quasiperiodic. The 11-year sunspot cycle meets these conditions, but plays no predominant role in the practice of predictions. Most important are solar cycles which are without exception related to the sun’s fundamental oscillation about the center of mass of the solar system and form a fractal into which cycles of different length, but similar function are integrated. The solar dynamo theory developed by H. Babcock, the first still rudimental theory of sunspot activity, starts from the premise that the dynamics of the magnetic sunspot cycle is driven by the sun’s rotation. Yet this theory only takes into account the sun’s spin momentum, related to its rotation on its axis, but not its orbital angular momentum linked to its very irregular oscillation about the center of mass of the solar system (CM).

  434. bender
    Posted May 2, 2007 at 6:48 PM | Permalink

    The issue is not predictability of climate. It is about the meaningfulness of the tunings used to achieve a fit to past climate. As I understand it*, many GCM parameters are fixed by physical measurements. Others are free, and must be tuned in order to attain stable numerical solutions.

    Yes, climate is bounded over short time-scales and models can be tuned to generate fits over those short time scales. And maybe the forcings appear to work well over those time scales. That does not mean that the model fits will work as advertised when the model is asked to simulate climate in a different era long past where the known variability lies outside those limited bounds.

    The issue is not GCM performance over short time horizons. It is the meaningfulness of specific parameterizations over long time horizons. If those parameterizations are wrong, then the CO2 sensitivity estimates may be off. The more physical the model, the less the tuning is an issue because the modeling problem is not under-constrained. It’s the absence of knowledge about the physics of all those free parameters that makes you vulnerable to over-fitting.

    *I am not a climatologist, however. It would be better if someone more qualified were to address the issue.

  435. bender
    Posted May 2, 2007 at 6:57 PM | Permalink

    If climate is mostly periodically forced, then yes, predictability at all time scales is achievable – if you work out all the periodic forcings. The time-scales over which climate is not predictable are then the time-scales over which the terawatt heat engine produces deterministic non-periodic flow, in the language of Lorenz. The question is to what extent things are periodically forced and to what extent internally chaotic. It’s not a qualitative question; it’s a matter of degree.

    I know you love your periodicities, jae. I too am optimistic that we haven’t heard the last of Milankovitch, etc. But you know the problems with solar theory as it stands now, I don’t need to tell you.

  436. bender
    Posted May 2, 2007 at 7:04 PM | Permalink

    Oh, and, of course it goes without saying: the forcings need be neither periodic nor transient. They can occur in pulses and they can result in persistent shifts in dynamics. I didn’t mean to make it sound like all forcings (including solar forcings) are periodic. If a non-periodic event is predictable, then so are its consequences.

  437. Jim D
    Posted May 2, 2007 at 8:17 PM | Permalink

    I have never said climate is not chaotic, because the ice ages clearly show
    other equilibrium states that are fairly stable. I am saying that on the century
    time scale there isn’t yet a known mechanism by which we will enter an ice age
    or a runaway global warming feedback leading to more than a 10 C increase,
    and the simple reason is thermal inertia. The oceans, in particular store a
    lot of the earth’s heat energy as they have well over 100 times the heat content of the
    atmosphere (by my reckoning, someone should check). They maintain a background temperature
    which is a major counter-agent to anything that might try to pull the atmosphere
    away from its near-equilibrium with it. Warming the ocean by a few degrees will
    take centuries when forcings are the size of CO2 forcing.
    Even if you believe the temperature record is fractal, that indicates that on smaller
    timescales the wiggles also get smaller in amplitude.
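
    For what it is worth, round textbook numbers bear this out: an atmospheric mass of about 5.1e18 kg with cp of about 1005 J/(kg K), against an ocean mass of about 1.4e21 kg with cp of about 3990 J/(kg K), puts the ratio of total heat capacities near 1000, comfortably “well over 100”. These are rough figures, not precise values.

    # Back-of-envelope ocean/atmosphere heat-capacity ratio from round numbers.
    m_atm, cp_atm = 5.1e18, 1005.0   # kg, J/(kg K): whole atmosphere, dry-air cp
    m_ocn, cp_ocn = 1.4e21, 3990.0   # kg, J/(kg K): whole ocean, seawater cp

    ratio = (m_ocn * cp_ocn) / (m_atm * cp_atm)
    print(f"ocean/atmosphere heat-capacity ratio: {ratio:.0f}")   # roughly 1000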

  438. Jim D
    Posted May 2, 2007 at 8:21 PM | Permalink

    MarkW #420. OK, I am completely sure that cloud chambers like those seen in nuclear
    labs don’t occur in nature. Aerosols prevent that from happening.

  439. bender
    Posted May 2, 2007 at 8:39 PM | Permalink

    Not sure if #439 is in reply to bender, but my #436-438 were in response to jae, not anything Jim D has said.
    Jim D, would you care to have the first crack at spinning the Gravel story?

  440. Jim D
    Posted May 2, 2007 at 8:59 PM | Permalink

    Jerry,
    In #421 you said.

    What Sylvie’s results conclusively showed are that the forcing (parameterizations) in the Canadian global weather prediction model had O(1) relative errors in the forcing and those terms could be left out in the first 36 hours with no change in the forecast, exactly as expected from mathematical theory for forcing terms with errors of that size. Thus the parameterizations were unnecessary during that time period and constituted a misuse of computer resources that have continually been claimed to be the limiting factor in forecast accuracy (just like you do).

    Taken at face value, this statement would be contrary to everyone else’s results, so I probably misunderstood it.
    What it looks like you are saying is that a simulation with no physics parameterization did as well as
    one with parameterizations at 36 hours in one case. Meanwhile operational centers have done countless
    demonstrations showing how physics improves forecasts. This is how they develop their schemes, because
    new schemes don’t make it into the model unless there is a demonstrated average improvement over a large number of cases.
    How can we balance this one example against all those countless others? Would it stand up to
    multiple cases being tested?

    On the gravity wave/boundary issue, mesoscale models need gravity waves to be correct. The fact that
    the boundary conditions contain no information about gravity waves is the reason a damping zone is
    used near the boundary to minimize reflection, so I think I agree with your statement here.
    It is a pragmatic approach given limited boundary data. The initial state can reasonably be approximated
    as hydrostatic, again because we don’t have the data resolution to pick up the nonhydrostatic
    scales, and the best we can hope for is that the model develops nonhydrostatic things like
    thunderstorms, and mountain-forced waves itself, and actually it does this very well. Obviously
    we don’t get every thunderstorm in the right place, but we get the right general idea which
    is all forecasters need. It is surprising you need evidence that WRF verifies against data
    when a simple literature search will show it. This is the bread and butter of model development.
    Nothing makes it into a weather model without such verification. What do you think is the
    main criterion for physics development? It doesn’t happen in a void.
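
    The damping zone just mentioned can be illustrated with a toy 1D wave leaving the domain: a Rayleigh-damping sponge ramped up over the last stretch of grid points absorbs most of the outgoing pulse, whereas a rigid wall reflects essentially all of it back into the interior. The coefficients are made up, and this is a generic sketch, not the WRF boundary formulation.

    # Toy sponge-layer demo: 1D wave equation, rightward pulse, rigid wall
    # versus a Rayleigh-damping zone ramped up over the last ~120 points.
    import numpy as np

    nx, dx, c, dt = 500, 1.0, 1.0, 0.5
    x = np.arange(nx) * dx

    def pulse(s):
        return np.exp(-((s - 250.0) / 15.0) ** 2)

    sigma = np.zeros(nx)                              # Rayleigh damping coefficient
    ramp = x > 380.0
    sigma[ramp] = 0.3 * ((x[ramp] - 380.0) / (x[-1] - 380.0)) ** 2

    def run(use_sponge, nsteps=900):
        s = sigma if use_sponge else np.zeros(nx)
        u_old, u = pulse(x), pulse(x - c * dt)        # right-moving initial condition
        for _ in range(nsteps):
            lap = np.zeros(nx)
            lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
            u_new = (2 * u - (1 - 0.5 * s * dt) * u_old + (c * dt)**2 * lap) / (1 + 0.5 * s * dt)
            u_new[0] = u_new[-1] = 0.0                # rigid walls at both ends
            u_old, u = u, u_new
        return np.abs(u[x < 350.0]).max()             # what comes back into the interior

    print("rigid wall, reflected amplitude :", run(False))  # close to 1: full reflection
    print("sponge zone, reflected amplitude:", run(True))   # much smaller: mostly absorbed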

  441. Steve McIntyre
    Posted May 2, 2007 at 9:15 PM | Permalink

    This thread has stayed on topic for a remarkably long time. So here’s a continuation.

2 Trackbacks

  1. […] post by Willis Eschenbach Share and Enjoy: These icons link to social bookmarking sites where readers can share and […]

  2. By The Next Battlefield « the Air Vent on Feb 24, 2010 at 6:59 AM

    […] new unphysical kludges in the software to get them to run more a few days without blowing up. The Exponential Growth in Physical Systems threads at Climate Audit discusses this in detail. Here’s a quote from the post at the top of […]
