## This "Fix" in the GISS Code

Gavin Schmidt, one of the promoters of realclimate, is a climate modeler in his spare time, involved with GISS Model E. Dan Hughes, who sometimes posts here, has a new blog at which he’s posted some comments on a GISS Model E code fragment. The code contains the following interesting comment:

C**** This fix adjusts thermal energy to conserve total energy TE=KE+PE

Dan then proceeds to locate a variable *ediff* that seems to be a fudge factor. Now we’ve been told that the modern models no longer use *flux adjustment*, but, in Dan’s opinion, *ediff* looks like an unsavory type of adjustment. I hope that Dan continues parsing through GCM code. It looks like a fair amount of GCM code may now be available online and this might be something for computer-oriented people to parse through. I don’t plan to do this myself, but will be happy to host any comments and I plan to keep an eye on what Dan observes.

While I’m doing a shout out to new blogs, Margo has continued to post interesting posts at her blog.


## 51 Comments

Dan,

Do you have any access to substantial computer resources, i.e. one with large memory and fast CPU? The most telling thing you could do is to acquire one of the hydrostatic codes, throw away all nonessential junk, and run it at high resolution for a short period as suggested in one of my comments.

For me this would be more amusing and educational than reading thru the code, and I am willing to help you shut down all of the nonessential routines so that you would just need to run the dynamical core (in fact this may be separated out already in the CAM model).

At that point the code should be quite short and it should only be a matter of having enough resources to run the core. I am able to run the example I mentioned on my home PC, but theirs may take more memory depending on how many additional variables are saved in the core code.

Jerry

Dan,

I was a Computer Programmer IV at NCAR before I obtained my advanced degree so this won’t take long in terms of manpower. 🙂

Jerry

A profile of the GISS model says that it doesn’t have “flux adjustment”. Can someone explain to me the difference between the ediff adjustment that Dan’s identified and flux adjustment? Have they just moved the adjustment to a different location in the code or is there a substantive difference?

Steve M.,

It might not be necessary to get into these details if Dan is willing to run the example.

Jerry

#3 — I’d bet that previously, flux adjustments were put in by hand. Now maybe they’re just written into the code as conditional statements. Then you can claim artificial flux adjustments are no longer necessary, which, of course, is strictly true. They’re now autosynthetic flux adjustments.

Look, Dan hasn’t even identified if this is dead code or not. If they say they don’t use flux adjustments (which I believe are defined as adjustments in energy flows between adjacent vertical layers) you should take them at their word. Given my intuitive understanding of what a flux adjustment is, this doesn’t appear to be one, but Dan never even said what routine this was buried in.

You guys aren’t skeptics, you are cynics. Hey, that might be better than the hotly debated term “denialist”, “global warming cynic” probably fits better.

#6 Given the obscurantism, outright obstructionism, data adjustments, cherry-picking, and vicious character assassinations that typify the alarmist side, a little cynicism is a rational response.

I think the flux fix is a good numerical thing to do. Such techniques have been used with great success in other fields:

“John Creighton writes:

> Pierre Asselin wrote:
>
>> If you can put your equations in Hamiltonian form, you can use a symplectic integrator. I did a quick search on netlib and came up with nothing 😦 But it’s clearly the way to go. Head for the library, I guess.
>
> I am not sure how this would work. Wouldn’t each iteration of the integrator cause an error and on average after n iterations we would find the energy of the system has increased by a*n^p. Consider a ball immersed in a frictionless fluid. On average as the number of iterations goes to infinity the magnitude of the angular velocity will get faster and faster. This is clearly undesirable when modeling a system.

Symplectic integrators are magic. That overstates things, but really only very slightly. They’re designed for Hamiltonian systems, which have nice properties like conservation of energy and such, and so make use of those conservation rules.

There will be errors, but they tend not to be the ones that mess up the dynamics of the system. As one person teaching them to me explained, let’s say you have an integrator modeling the orbit of the earth for one billion years into the future. Your nonsymplectic integrator may put the Earth one A.U. off its correct location, in the center of the sun. Your symplectic one, though, if it has made an error of one A.U. in its position, will put it one radian ahead (or behind) in its orbit, but still in its correct orbit. Given the choice between mistakenly having an early spring and mistakenly having all life on earth extinguished in a fiery blaze, most people prefer the first.

A great reference for this, explained completely enough that you can write software directly from it, is “Symplectic Integration of Hamiltonian Systems,” by P.J. Channell and C. Scovel, in the journal Nonlinearity 3 (1990), pages 231 to 259. It includes examples of how very well it conserves energy for unspeakable billions of timesteps. A two-minute search of the web doesn’t turn up a copy of it online, but any university library should either have a copy or be able to get one.”

http://groups.google.ca/group/sci.math.num-analysis/browse_frm/thread/cc58eff7716f9ad/c1b8a14b49bcee34?lnk=st&q=JohnCreighton_%40hotmail.com+integration&rnum=4#c1b8a14b49bcee34
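The contrast described in the quoted thread is easy to demonstrate. Below is a minimal Python sketch of my own (an illustration, not code from any climate model) comparing explicit Euler with the simplest symplectic method, semi-implicit Euler, on a unit harmonic oscillator: the explicit method’s energy grows without bound, while the symplectic method’s energy error stays bounded.

```python
# Unit-mass, unit-stiffness harmonic oscillator: dx/dt = v, dv/dt = -x.
# The exact dynamics conserve E = (x^2 + v^2) / 2.

def euler_step(x, v, dt):
    # explicit Euler: both updates use the old state
    return x + v * dt, v - x * dt

def symplectic_step(x, v, dt):
    # semi-implicit (symplectic) Euler: update v first, then x with the new v
    v_new = v - x * dt
    return x + v_new * dt, v_new

def energy(x, v):
    return 0.5 * (x * x + v * v)

dt, steps = 0.01, 100_000          # integrate to t = 1000
xe, ve = 1.0, 0.0                  # explicit Euler state
xs, vs = 1.0, 0.0                  # symplectic state
for _ in range(steps):
    xe, ve = euler_step(xe, ve, dt)
    xs, vs = symplectic_step(xs, vs, dt)

e0 = 0.5                           # initial energy
print(energy(xe, ve) / e0)         # explicit: grows like (1 + dt^2)^steps, huge
print(energy(xs, vs) / e0)         # symplectic: stays close to 1
```

Semi-implicit Euler is far cruder than the integrators in the Channell and Scovel paper, but it already shows the qualitative difference the poster describes.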

Given that the central argument of AGW theory rests on the inability of GCMs to reproduce global surface temperature increases since 1980 without including an anthropogenic CO2 forcing component, it is essential that their code is available for audit.

I just downloaded the GISS model E code that Dan is looking at and there is a document called conserv.txt in the doc folder. Here is the relevant section.

So it looks like this code is live and an essential part of the model. Whether or not this kind of adjustment is reasonable depends on what kind of errors are being corrected here and whether the correction is biasing the results. I’ll do a bit more digging.
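For intuition only, here is a hypothetical Python sketch (not the actual ModelE Fortran, whose details haven’t been established in this thread) of what a “fix” that adjusts thermal energy to conserve TE = KE + PE could look like: compute the total-energy deficit produced by a dynamics step and deposit it into thermal energy.

```python
# Hypothetical sketch of an energy "fix": whatever mechanical energy (KE + PE)
# the dynamics step gained or lost numerically is pushed into thermal energy,
# so that total energy is conserved across the step.
def apply_energy_fix(ke_before, pe_before, ke_after, pe_after, thermal):
    ediff = (ke_before + pe_before) - (ke_after + pe_after)
    # note ediff can be either positive or negative, depending on the sign
    # of the numerical energy error in the dynamics step
    return thermal + ediff

# toy numbers: the dynamics step "lost" 0.5 units of mechanical energy
thermal = apply_energy_fix(10.0, 5.0, 9.2, 5.3, thermal=100.0)
print(thermal)  # 100.5
```

Whether this kind of bookkeeping is benign depends, as noted above, on what errors it is absorbing and whether they bias the results.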

Jerry, here is a longish answer.

My original plan was to strip some AOLGCM code down to the hydro, or as close as I could, and run some simple ‘toy’ problems to check some of the fundamental properties and characteristics of the numerical solution methods. I eventually settled on the NASA/GISS ModelE code, primarily because it is one of the US codes. I have downloaded the code and attempted to understand the parts that I need to rip out. It’s a huge job, as you are well aware, and I put that aside. The input data files are enormous and I wasn’t at all sure that I could shut down everything that I did not need. I find the documentation too sparse, dispersed, and basically unavailable to be able to know with certainty that I can correctly reduce the code to a basic framework. I also tentatively concluded that maybe I do not have the computing power needed to successfully carry out the idea.

It is also not clear to me that the ModelE coding is sufficiently clean of compiler and operating-system dependencies that I could ever be sure that I was running the same code that runs on the mainframes.

I will be happy to work with you on any code that we can get up and running. While our computing power might be limited we can let our PCs run as long as is needed. I run on a 1.6 GHz G5-based Mac with 1.5 GB of RAM. I use the Absoft Fortran compiler.

In the mean time I have been looking at numerical solutions of simple systems of ODEs that exhibit chaotic responses. This has now taken several months of work. I have tentatively concluded that such equation systems, using ‘standard’ ODE solution methods, have never been solved in the sense that the calculated numbers satisfy the continuous equations. Convergence cannot be demonstrated, and has not ever been demonstrated. Additionally, I have also reached the preliminary conclusion that all such systems result in only numerically-induced chaotic responses. All the calculated numbers are numerical noise. This led to the question of how to determine that all such numbers are numerical noise and are not solutions to the continuous equations.
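The step-size convergence problem described here can be illustrated with the Lorenz63 equations and a standard RK4 integrator (a sketch of mine, not anyone’s model code): halving the time step leaves the short-time trajectory essentially unchanged, but at longer times the two computed trajectories bear no pointwise relation to each other.

```python
# Lorenz63 system with the classic parameters.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    # one classical fourth-order Runge-Kutta step
    def nudge(state, deriv, h):
        return tuple(si + h * di for si, di in zip(state, deriv))
    k1 = lorenz(s)
    k2 = lorenz(nudge(s, k1, dt / 2))
    k3 = lorenz(nudge(s, k2, dt / 2))
    k4 = lorenz(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def integrate(dt, t_end, s=(1.0, 1.0, 1.0)):
    for _ in range(round(t_end / dt)):
        s = rk4_step(s, dt)
    return s

# halving the step agrees closely at t = 2 ...
short_diff = max(abs(a - b) for a, b in
                 zip(integrate(0.01, 2.0), integrate(0.005, 2.0)))
# ... but not at t = 25: the two runs are unrelated points on the attractor
long_diff = max(abs(a - b) for a, b in
                zip(integrate(0.01, 25.0), integrate(0.005, 25.0)))
print(short_diff, long_diff)
```

This doesn’t by itself show the numbers are “noise” in the sense argued above, but it does show that pointwise convergence with respect to step size cannot be demonstrated over long integration times.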

I have two or three reports on this subject that I am working on. I would like to finish those before re-starting the simple AOLGCM work full time.

Which code do you suggest that we start looking into and reducing to a basic hydro framework? And I would like to suggest that we move this work over to my blog if that is ok with you and everyone else who wants to get involved. It does not have to be my blog, but I would suggest that we isolate the work to help improve the focus.

re: #10. Chris H, thanks for the pointer to the information about the energy conservation fix. I’ll look into it.

Re 10: If that’s the reason for that bit of code, it sounds perfectly reasonable.

Some energy is degraded from “higher” forms (potential and kinetic) to “lower” forms (heat) when things move. If any energy is degraded that way, you do need a term in the energy equation or that energy does just vanish from the computational system.

Chris H: Do you have the full url for the conserve.txt file in the doc folder?

Some specifics of the old flux adjustments are discussed here: http://www.grida.no/climate/ipcc_tar/wg1/315.htm in section 8.4.2.

#11, Dan,

If I were you I wouldn’t be so sure about that. Contact me and I will get you in touch with people who are genuine experts in this area.

re: #13 Margo,

The question, it’d seem to me, is whether any work is done in the process. Say we’re talking about a mass of surface air moving across the ocean. Interacting with the waves et al., some of the energy is converted into heat. But in the process, it may drive mixed layer convection. Thus the thickness of the mixed layer may be dependent on the amount of energy degraded. Simply throwing in a fix may give you a wrong number for the upper ocean temperatures.

Similarly for an air mass running into a mountain. Perhaps it results in kinetic energy of the mass being degraded into heat, but perhaps in the process, some is converted into changes in direction of the air mass. The process may be loosely compared to a cue ball reflecting off another ball. The exact deflection angle will be dependent on the exact wind angle. This may in turn determine the speed of a wind through a valley and thus the amount of rain in a basin. Working backwards, the exact angle of wind contact might well have been steered by upper level winds, i.e. the jet streams. And if the methods of adjustment are incorrect, it may all show up as an increase in the degraded energy adjustment.

So just how big are these adjustments? And are we expected to assume that their effects will all just cancel out in the end?

re: #10 and 12. Yes of course there is conversion of mechanical energy into thermal energy and that accounting is present in the basic continuous equations. It is generally in a form that includes the viscosity of the fluid and a sum of the squares of various derivatives of the velocity. If the term represented the physical quantity that appears in the continuous equations I would expect to see calculations that look like the description. It is always a positive contribution to the conservation of thermal energy equations.
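For reference, the term Dan describes (viscosity times a sum of squares of velocity derivatives) is the standard viscous dissipation function for an incompressible Newtonian fluid:

```latex
\Phi \;=\; 2\mu\, e_{ij} e_{ij},
\qquad
e_{ij} \;=\; \frac{1}{2}\left(
  \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}
\right)
```

Since \(\Phi\) is the viscosity times a sum of squares, it is non-negative, which is exactly why the physical dissipation term is always a positive contribution to the thermal energy equation.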

As I read the coding, I do not see such a formulation. I see a ‘correction/fix’ based solely on the results of the numerical solution methods. The quantity ediff can be either positive or negative. Another example of where the lack of comprehensive documentation kind of leaves me in the dark.

Re # 11 and others.

‘Predictability of atmospheric and oceanic motions is fundamentally limited by a tendency for small differences in initial conditions to amplify: because measured initial conditions inevitably contain errors, prediction errors become large within a finite time. This is so even for prediction models that describe all relevant physical processes perfectly.’

Copied from an article. So you have to be careful with your requirements.

re: # 14. Will you send me literature citations in which it has been shown that ‘standard’ ODE solvers have attained convergence with respect to the step size for a nonlinear system of ODEs that exhibit chaotic response. As a matter of being specific, I would be very interested in numerical solutions of the equation systems in Lorenz63, and the Lorenz84 equations investigated in this reference.

Thanks for your assistance.

Margo (#13) I am surprised by your cavalier comment that the energy fudge factor “sounds perfectly reasonable.” Doesn’t DaveD (#15) have a point in wanting to know? Also, Parser (#9) makes an interesting observation. I am not saying that fudge factors are necessarily bad. I realize that numerical solution of PDEs is enormously difficult in many situations, and I know it is not easy to come up with schemes that simultaneously conserve mass, momentum, and energy. However, if the GCMs depend on an energy fudge factor (rather than on unalloyed physics) in order to produce a realistic model of the planet’s energy budget, just how concerned should we be that the same GCMs cannot reproduce recent warming?

Margo,

You will need to download the GISS Model E source code. Look for a link to modelE1.tar.gz on the web page http://www.giss.nasa.gov/tools/modelE/ .

Then you will need to uncompress the file and look in the doc folder.

If you use Windows, you may have a problem decompressing this file, in which case I recommend the filzip decompression tool because it’s free and easy to use.

RE: #17

The initial conditions issue has always been a weak point in the models as far as I’m concerned. If you think about it you will realize that there must be dozens of variables maintained for every cell in the simulation, and this raises the question: how are these variables initialized at the start of the simulation? For example, if you are going to start your model running at 1980, where do you get all the data to initialize all the cells to 1980 values? We obviously don’t have accurate measurements of all the pertinent climatic data for every cell in the simulation, so I’m guessing they are estimated. Now given that the real climate is a chaotic nonlinear system, and if the models are accurate simulations of it they must be too, it means that the models should be very sensitive to even small changes in the initial conditions. This leads me to have very large doubts about the outputs of these models, no matter how well they are constructed. In fact, as the models improve in accuracy this problem could very well get worse.
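The sensitivity argument can be seen in the simplest chaotic system there is. Here is a sketch using the logistic map at r = 4 (a toy stand-in, obviously not a climate model): two runs whose initial conditions differ by one part in a billion decorrelate completely within a few dozen iterations.

```python
# Chaotic logistic map x -> r x (1 - x) with r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-9   # initial conditions differing by one part in 10^9
max_diff = 0.0
for n in range(60):
    a, b = logistic(a), logistic(b)
    max_diff = max(max_diff, abs(a - b))

print(max_diff)  # order one: the tiny initial difference has saturated
```

Whether this dooms climate projection, or whether only the statistics matter (as argued below), is exactly the point in dispute in this thread.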

Re 15:

Yes. Some of the force acting on the moving water surface might result in increased kinetic energy of the water; some energy might be dissipated. But, strangely enough, it makes sense for someone writing code comments to call only the second, irreversible bit of the transfer “a fix”.

I’m not quite sure how to explain why one is likely to see the word “fix” in code comments dealing with this in a model, but it’s a sort of natural word to use.

The underlying reason is that the reversible work sort of falls out “naturally” from conservation of momentum alone. So, the information you need to describe any reversible work is then available for use in the energy equation — which the climate modelers also code up and solve.

However, you really do need to figure out how much mechanical energy was dissipated to heat and insert that energy explicitly in to the energy equation.

The comment in #10 seems to be saying they are calculating this energy at the end of DYNAM (and a few other places). They are calling this “a fix”, which may be loose terminology – but what do you expect in comments? It is sort of “a fix”, in the sense that non-dissipative changes in forms of energy are already accounted for without “doing” or “fixing” anything.

Re #21, Paul Penrose

Which implies that the consistent results of these models are a function of the constraints built into them, and that all the highly detailed supercomputer-requiring calculations are rather a waste of time.

RE: #10 – They have to use a fudge factor to account for the global sum of all the parasitic terms of the energy equation of the system. Plus, there is more EM energy than simple insolation which is incident upon the system. The more we find out about the nature of energy and matter in space in general, the more we find that there are unknown unknowns, which we continually chip away at to become known unknowns and knowns, but never fully master.

re: #22 Margo,

I realize that much of the energy will be allowed for, the question is the amount of the residual.

Let’s say we were still looking at a constant wind over an ocean. We’d ideally measure things like SST, average wave height, mixed layer thickness, wind velocity vector, etc. Then we’d have a set of equations implemented to calculate changes in these factors, but only to a certain extent. There will be constants plugged into the equations based on history and lab experiments, and so they’d account for things like the fact that higher winds push on waves more, increasing both the wave height and the mixed layer depth, and result in a small decrease in wind velocity (the decrease in momentum being taken up either by changing ocean currents or by the Earth as a whole). I’d assume that the constants for the equations would be tweaked to minimize losses as much as possible, but there are limits dependent upon how finely the grid is sampled, how precise the implementations of the equations are, and how correct the physical equations are to begin with. Admittedly a lot of the errors will cancel out, but it’s impossible that they all will. So “fixes” are made to balance the books. But how large are these fixes compared to the amount of the AGW we’d like to measure? If they’re of the same order of magnitude or above, then the attempt to measure AGW will be hopeless.

Dan (#11):

I would suggest the NCAR CAM model because it is probably the one that is likely to run on multiple machines. It is already uncoupled from the ocean model so can be run in stand alone mode. There are standard cases that can be run to test that the dynamical core is working properly at low resolutions with no input data required (see manuscript by Browning, Hack, and Swarztrauber). Once that has been demonstrated for the basic dynamical core, the dynamics resolution just has to be increased and the model run for a short period.

Note that I have been thru a similar process with the Canadian model and that led to the results discussed in the Numerical Climate Model post comment.

Jerry

#18, Dan,

References at the Maryland Chaos Group website:

E. Kostelich, I. Kan, C. Grebogi, E. Ott and J.A. Yorke, “Unstable Dimension Variability: A Source of Nonhyperbolicity in Chaotic Systems”, Physica D 109, 81 (1997).

T. Sauer, C. Grebogi and J.A. Yorke, “How Long Do Numerical Chaotic Solutions Remain Valid?”, Phys. Rev. Lett. 79, 59 (1997).

S. Dawson, C. Grebogi, T. Sauer, and J.A. Yorke, “Obstructions of Shadowing When a Lyapunov Exponent Fluctuates about Zero”, Phys. Rev. Lett. 73, 1927 (1994).

T. Sauer and J.A. Yorke, “Rigorous Verification of Trajectories for the Computer Simulation of Dynamical Systems”, Nonlinearity 4, 961 (1991).

C. Grebogi, S.M. Hammel, T. Sauer, and J.A. Yorke, “Shadowing of Physical Trajectories in Chaotic Dynamics: Containment and Refinement”, Phys. Rev. Lett. 65, 1527 (1990).

S. Hammel, C. Grebogi, and J.A. Yorke, “Numerical Orbits of Chaotic Processes Represent True Orbits”, Bull. Amer. Math. Soc. 19, 465 (1988).

H.E. Nusse and J.A. Yorke, “Is Every Approximate Trajectory of Some Process Near an Exact Trajectory of a Nearby Process?”, Comm. Math. Phys. 114, 363 (1988).

E.M. Coven, I. Kan and J.A. Yorke, “Pseudo-Orbit Shadowing in the Family of Tent Maps”, Trans. Amer. Math. Soc. 308, 227 (1988).

S. Hammel, C. Grebogi, and J.A. Yorke, “Do Numerical Orbits of Chaotic Dynamical Processes Represent True Orbits?”, J. Complexity 3, 136 (1987).

Re 25: I don’t know the magnitude of the real honest-to-goodness global viscous dissipation rate compared to the differences in heating rates anticipated as a result of a CO2 doubling. I can say this though: in many, many engineering problems, when solving the energy equation, people just set the viscous dissipation rate to zero! It’s not equal to zero (it could be estimated), but the effect on the solution to the temperature field tends to be small compared to other things. So, instead of burdening the computation with the bits of code required to estimate something very small, people just neglect it.

The fact that the climate modelers are accounting for it at all suggests the viscous dissipation has a noticeable effect on the results of the model. That means they think it’s large enough to be worth estimating. However, I would be surprised if the effect is huge.

Chris H. (#20), et al. — the problem with unzipping that tarball on a Windows system is that they’ve used aux as a directory name, and that’s a reserved virtual device name on DOS (and Windows) systems. You can create a different directory, say aux1, then untar the files in aux one-by-one into the new directory – there are 17, excluding the (pointless) CVS directory – using “tar xvfO ..\..\modelE1.tar modelE1_pub\aux\vertflux.E001.f > vertflux.f” from that directory.

Of course, having the files in aux1 instead of aux will certainly screw up their Makefile system — its unscrewing is left as an exercise…

re: #28

But in many engineering problems a temperature difference of a degree or two doesn’t matter. I know when I was doing heat-treating, we only had to calibrate our temperature controllers and recorders to +/- 5 deg F. Admittedly we could generally do it a little tighter than that, but being off a bit didn’t matter. Of course, when we’re talking rather lower temperatures the calibration, at least, should be better, but we’re still talking a temperature change comparable to our calibration uncertainty. If the models could get by with ignoring forcings of a W/m2 or so, then it may be because the effect of something like a doubling of CO2 wouldn’t have much effect to begin with.

Reed,

The tarball doesn’t contain a subdirectory called aux, thanks. I was just answering Margo’s question about how to find the docs directory.

If anyone wants to follow your instructions, they will need a copy of tar for Windows (tar is a Unix utility). I would recommend installing cygwin, which has Windows versions of tar plus all of the Unix utilities that you will need to build the model from the source code. You’ll still need to find a Fortran compiler though.

re #8:

I’m not an expert in numerical methods, but I don’t think you can use symplectic integration because we’re dealing with a dissipative system here:

Dear Steve, it sounds funny (or stupid) but it could be just some innocent correction of numerical errors.

When you write a differential equation, e.g.

m d^2x/dt^2 = -V'(x),

it preserves the total energy,

mv^2/2 + V(x).

However, when you solve the equation numerically by discrete time intervals, like “new x” = “old x” + “old v” dt, “new v” = “old v” + (-V'(x)/m) dt, then you create an inaccuracy because the “new” total energy computed in this way is not exactly equal to the old. In the long run, this error could accumulate and lead to spurious predictions, and maybe they just correct “x” or “v” by second-order terms in order to keep the total energy constant, which could be a good idea.

Let me give an example. Take a number X and subtract a small fraction of it – say 1% – and then add 1% to the new value. Subtract 1%, add 1%. Repeat it 5000 times. Naively, you could expect to get X back. In reality, you get 0.9999^5000 times X (because 1.01 times 0.99 is 0.9999), which is comparable to one-half. This kind of numerical error can accumulate in modeling whenever you discretize time and/or space, and I would agree if it were corrected.
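The arithmetic in this example is easy to verify in a few lines of Python:

```python
# Repeatedly subtract 1% and add 1% back: each pair multiplies by
# 0.99 * 1.01 = 0.9999, so 5000 repetitions give 0.9999**5000.
x = 1.0
for _ in range(5000):
    x *= 0.99   # subtract 1%
    x *= 1.01   # add 1% of the new value back
print(x)        # equals 0.9999**5000, roughly 0.61 rather than 1.0
```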

But I don’t agree that this is what the line does.

Thanks Chris!

“I don’t agree” should have been “I don’t know for sure” 😉 – tired of teaching etc. today

re # 32. John West is correct and I agree.

It is also not clear that we are dealing with hyperbolic systems, either.

Why do you think we are dealing with hyperbolic equations? I can’t remember this off the top of my head, but I think at least the Navier-Stokes equations are mixed parabolic/hyperbolic.

Also, I’ve seen a lot of people mention that we have to get the IC’s exactly right to have any chance of predicting the climate. But this would imply that very small changes would cause the Earth’s climate to change (not just in the models). Do people believe this?

#8 you are probably right. I learned of the idea long ago but unfortunately I never really learned how the symplectic integration algorithm works. I am not suggesting they use the algorithm I am just pointing to an example where knowledge about how the system works (conservation of energy) was used with great success in numerical simulation. Thus it is not unreasonable that some kind of knowledge about macroscopic behavior could help to enhance numerical stability.

When I posted the article to usenet (6 years ago) I was only far enough along in my education to know about Euler-type methods of solving differential equations numerically. Euler methods and other predictor-corrector integration methods are inherently unstable. Runge-Kutta methods however are numerically stable. I am not exactly sure what this means but I’ll research it some later. I don’t know a lot about solutions to partial differential equations. I have learned of the method of relaxation before. I am not sure how well it works in dynamic systems.

#37 I don’t think initial conditions are an issue because in my view climate is a statistical property and not a realization of a particular simulation. Take hurricanes, for instance. They vary a great deal from year to year, and to predict the number of hurricanes in the following year would take a great deal of skill. However, the statistics of hurricanes from year to year change very slowly and are very deterministic.

Dan (#36),

The basic (unforced, inviscid) nonhydrostatic system for large-scale atmospheric motions is a hyperbolic system, but it is not a symmetric hyperbolic system. These questions and issues have been discussed by Heinz and me. Do a google on Browning Kreiss and see the references I have cited on the Exponential Growth thread or our Tellus manuscript. We approached the questions from the inviscid point of view because the dissipation on these scales is supposed to be essentially negligible. And this approach has led to what are new concepts in this area, including the balances for the smaller scales and near the equator.

I believe I have a copy of the CAM model or will obtain one to determine the amount of work involved.

Jerry

#39, John,

I think that’s wrong for a very subtle reason. The climate is a nonlinear dynamical system. Such systems often exhibit hysteresis, that is, multiple states for the same system parameters, where which state you are in depends on history, i.e., initial conditions. The statistical properties of the different states can be very different. A clear sign to me that the climate does have such hysteretic states is the step-function type changes that occur every so often, like the PDO.

Hyperbolicity

The fluids of interest in AOLGCM models are viscous, heat-conducting fluids. The IPCC and many others in the climate-change-modeling community, always, without exception, advertise that the AOLGCM models are based on conservation of mass, momentum, and energy. That being the case, the momentum equations must contain accounting of diffusion of momentum. An equation for conservation of the sum of thermal, kinetic, and potential energy must contain accounting of the diffusion of energy and the rate of work done on the fluid by pressure and viscous forces. An equation for conservation of thermal energy must contain accounting of the diffusion of energy and irreversible viscous dissipation.

Models of fluid flows based on conservation of mass, momentum and energy and in which accounting of diffusion of momentum and energy are present are not hyperbolic. Models of fluid flows that do not account for diffusion of momentum and energy do not conserve momentum and energy. Thermal energy equations that do not account for viscous dissipation do not conserve energy.

Additionally, auxiliary differential-equation-based models that are functions of the dependent variables of the problem will also affect the classification into hyperbolic, parabolic, and elliptic. Auxiliary differential equation models can also affect the well- or ill-posedness of the overall equation system.

One reason that hyperbolicity seems to arise in these discussions is that hyperbolicity is required in order to ensure shadowing. At least that is my understanding. Shadowing seems to be invoked as a means to avoid all the adverse properties and characteristics of equation systems that exhibit chaotic responses. So far as I have been able to determine, the complete model equation systems used in AOLGCMs have not yet been classified into the usual hyperbolic, parabolic, and elliptic classes. Additionally, I have also been unable to find documentation in which the complete model equation systems, at the continuous-equation level, have been shown to have the potential to exhibit chaotic responses. Not all nonlinear physical phenomena and processes exhibit chaotic response. Nonlinearity is not a necessary and sufficient condition for chaos. Necessary, but not sufficient.

The properties of numerical solution methods used to obtain approximate solutions for the continuous equations also have profound effects on the overall problem. The numbers generated by the codes are what count. Consistency, stability, and convergence of the discrete equations must always be considered in detail in order to ensure the numbers represent accurate approximations to the continuous equations.

In many cases the properties and characteristics of the discrete equations can easily destroy those of the continuous equations. The discrete approximations to the continuous equations must preserve all the important properties and characteristics of the continuous equations. Especially, if the continuous equations have been rigorously shown to exhibit chaotic response, the discrete approximations must accurately preserve the properties and characteristics of those equations.

#39. Climate may be statistical, but what we get from the modelers is individual computer simulations. climateprediction.net tried to produce an ensemble of runs, but there are statistical issues in some of their methods – for example, some runs produced “cold equator” results and were rejected as being unphysical.

May I suggest that people read the provocative Holloway article on models which I discussed last year here

I have to chuckle a little at the “ediff” variable.

Years ago I used a similarly named “fudge factor” in the process to make rice krispies.

In my case it was e (empirically) d (determined) a (arbitrary) f (fudge) f (factor).

It thus follows that ediff must equal (experimentally determined intermediate fudge factor) or some such.

It was a small chuckle.

fudge factor?

Are you sure you weren’t making brownies?

John, it is not true that Euler methods are unstable and Runge-Kutta methods are stable in general. For example, backward Euler (implicit Euler) is absolutely stable, yet a standard explicit Runge-Kutta method like RKF45 has a relatively small stability region. In general, explicit methods require some restriction on the time-step size for stability. You can find most of this in any standard ODE solver text, such as Numerical Solution of Ordinary Differential Equations by Larry Shampine.
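The distinction is easiest to see on a stiff test problem (an illustrative sketch, not taken from any particular text). On y' = -1000y with step h = 0.01, forward Euler's amplification factor is 1 - hλ = -9, so the iterates blow up, while backward Euler's factor 1/(1 + hλ) has magnitude below one for any positive step:

```python
# Stiff test problem: y' = -lam * y, y(0) = 1; the exact solution decays to 0.
lam, h, n, y0 = 1000.0, 0.01, 10, 1.0

y_explicit = y0
y_implicit = y0
for _ in range(n):
    y_explicit = y_explicit + h * (-lam * y_explicit)  # y_{k+1} = (1 - h*lam) y_k
    y_implicit = y_implicit / (1.0 + h * lam)          # y_{k+1} = y_k / (1 + h*lam)

print(abs(y_explicit))   # grows like 9^n: unstable at this step size
print(abs(y_implicit))   # decays like 11^-n: stable for any h > 0
```

The explicit scheme is only stable here for h < 2/λ = 0.002; the implicit one has no such restriction, which is exactly the stability-region point above.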

I’m still not sure why this “discovery” in the code deserved its own thread. I hope the audit of the code is not searching for suspicious comments.

#43 Steve McIntyre, it may be true that the current models are not very useful for producing an ensemble of simulations. Hopefully we will get around this problem. Another possibility is to develop differential equations for the propagation of statistics. For example, the covariance can be propagated as follows:

P=A P A’+ B Q B’

Where ' denotes the transpose

P is the covariance of the states

Q is the covariance of the input

A is the state transition matrix

B is the state input matrix

The entire probability distribution can be propagated as well, via Itô calculus.

http://en.wikipedia.org/wiki/It%C5%8D_calculus
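For concreteness, here is a tiny pure-Python sketch of the discrete-time covariance propagation P = A P A' + B Q B' described above (the 2-state system and the numbers are made up purely for illustration):

```python
# Propagate the state covariance P one step at a time: P <- A P A' + B Q B'.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[1.0, 0.1],     # state transition matrix (position/velocity example)
     [0.0, 1.0]]
B = [[0.0],          # input matrix: noise enters through the velocity state
     [1.0]]
Q = [[0.01]]         # covariance of the input noise
P = [[0.0, 0.0],     # initial state covariance (perfectly known start)
     [0.0, 0.0]]

for _ in range(100):  # propagate the covariance 100 steps
    APA = matmul(matmul(A, P), transpose(A))
    BQB = matmul(matmul(B, Q), transpose(B))
    P = [[APA[i][j] + BQB[i][j] for j in range(2)] for i in range(2)]

print(P)  # uncertainty in both states grows as the input noise accumulates
```

The appeal of this route is that one equation propagates the second moment of the whole ensemble, instead of requiring many individual simulations.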

I asked Gavin Schmidt, one of the GISS modelers, about this ediff code, writing:

I will paraphrase his answers, because this was on a mailing list, so I won’t quote directly. His main points were:

• ediff is always small

• it occurs because the temperature equation doesn't have an implicit term for thermal dissipation (kinetic energy converted to heat in turbulent fluid)

• it is monitored

• it is a small percentage of total heat fluxes, usually around 2 W/m2, and doesn't vary much

• the source is probably associated with high-mass (near-surface), high-velocity wind, but it's hard to say because it's small

• any decrease in KE goes to heat; only a small amount, but it has to be accounted for

• the wet/dry difference in the conversions didn't amount to much

• heat dissipated by surface friction with the lithosphere (about 1.7 W/m2) is much larger than the momentum transferred to the lithosphere by the atmosphere.

• the Length of Day (LOD) changes are small momentum exchanges, and don't have a noticeable effect on climate.

I believe I have expressed his statements clearly, and any faults are mine, not his.

I wrote back thanking him for his answers and asking a few more questions, but he hasn't answered; he might not have seen it. I asked why 2 W/m2 was considered "small" when the forcing from a doubling of CO2 is only 3.7 W/m2 … treating 2 W/m2 as small and not worth investigating, when looking for a 3.7 W/m2 signal, does not bode well. It also highlights the small size of the signal of interest.

I also asked if there was any kind of real world evidence that the thermal turbulent loss was only 2 W/m2. This number seems very small to me, considering the amount of turbulence in the atmosphere and the ocean. Natural flows of any kind tend to speed up until further increase in speed is balanced by turbulent loss.

This type of turbulent speed limitation can be measured in, for example, an iron pipe connected to an elevated reservoir. Turbulence inside the pipe slows the outflow, so that the final speed is turbulence-limited. This type of turbulence limitation is common in nature.

Another example is a jet of smoke. It is almost completely dissipated before it crosses the room. Once again, the distance the jet travels is limited by turbulence. How much of the kinetic energy of the jet has turned into heat? Seems like it would be larger than a few percent.

Finally I asked about his statement that LOD and climate weren’t related, given the large amount of scientific research showing correlations between the LOD and climate. A Google Scholar search on LOD and climate reveals over 2,000 references, such as

Angular momentum exchange among the solid Earth, atmosphere, and oceans: A case study of the 1982–1983 El Niño event, Dickey et al.

Trend in Atmospheric Angular Momentum in a Transient Climate Change Simulation with Greenhouse Gas and Aerosol Forcing, Huang et al.

I’m hoping he’ll answer. In the meantime, that’s as much as I know about the ediff.

w.

re: # 49. That’s good information Willis. Gavin has commented on the ‘ediff’ post over here. I hope you, and everyone else interested in the subject, will join us there.

Just downloaded modelE1_pub tarball from GISS and was trying to compile with gcc. Got error message saying CYGWIN_NT-5.1 not supported. Any ideas?