I’ve been browsing through some articles on climate modeling and GCMs since even the Hockey Team no longer seems to try to base climate policy on multiproxy studies. I’m particularly interested in the approach of maximum entropy theorists, since they offer a very non-IPCC perspective on GCMs. Here are a few quotes from Holloway [2004], “From Classical to Statistical Ocean Dynamics” which is online here. Holloway observed:

In principle we suppose that we know a good approximation to the equations of motion on some scale, e.g., the Navier-Stokes equations coupled with heat and salt balances under gravity and rotation. In practice we cannot solve for oceans, lakes or most duck ponds on the scales for which these equations apply.

He likened the GCM method for climate modeling to the following:

This enterprise is like seeking to reinvent the steam engine from molecular dynamics’ simulation of water vapour. What a brave, but bizarre, thing to attempt!

Here is a longer excerpt:

In principle we suppose that we know a good approximation to the equations of motion on some scale, e.g., the Navier-Stokes equations coupled with heat and salt balances under gravity and rotation. In practice we cannot solve for oceans, lakes or most duck ponds on the scales for which these equations apply. For example, the length scales over which ocean salinity varies are often shorter than 1 mm. We try to solve for fields represented on grids (or other bases) that are far larger than the scales to which “known” equations apply. Then we are compelled to guess the equations of motion.

Guessing equations is uncomfortable, often causing us to assume without question the equations used by some previous author. When we are brave, we realize that this too is uncomfortable. It is natural to wish that, as computers grow ever more powerful, we guess less and less. What would be needed from the computer? In the oceans there are about 1.36 x 10^18 m^3 of water. If we felt that variability was unimportant within volumes of O(10^-9 m^3) then the computer should track O(10^27) volumes, each described by several degrees of freedom. Clearly one can fiddle these numbers. Today’s “big computer models”, e.g., for weather forecasting or turbulence research, may advance 10^7, 10^8 or 10^9 variables. Over time we are assured that computers will become bigger yet. Even if we imagine computer models advancing 10^12 variables (not on my desk in my lifetime!), we still face the situation that, for each one variable we track, we must guess how that variable interacts with 10^15 variables about which we are uninformed. Limiting ourselves to coastal oceans or lakes, the mismatch in degrees of freedom might reduce to 10^8 or less. Might computations for a suitably modest duck pond “someday” be possible? Maybe. This enterprise is like seeking to reinvent the steam engine from molecular dynamics’ simulation of water vapour. What a brave, but bizarre, thing to attempt! For oceans, lakes and ponds the circumstance is even worse than the dismal numbers above suggest.
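Holloway’s order-of-magnitude arithmetic is easy to verify. Here is a minimal sketch (variable names are mine; the figures are taken from the quote):

```python
# Checking Holloway's degrees-of-freedom arithmetic (my sketch, not his code).
ocean_volume = 1.36e18   # m^3, total water in the oceans, as quoted
cell_volume = 1e-9       # m^3, a 1 mm cube, the salinity scale he cites

cells_needed = ocean_volume / cell_volume   # ~1.4e27 volumes to track
model_vars = 1e12                           # his hypothetical future computer
mismatch = cells_needed / model_vars        # ~1e15 unresolved variables per resolved one
```

Dividing 1.36 x 10^18 m^3 by mm^3-scale cells gives his O(10^27) volumes, and even a 10^12-variable model leaves the quoted mismatch of ~10^15.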

He was critical of some conventional parameterizations, pointing out:

traditional geophysical fluid dynamics (GFD), with traditional eddy viscosities, violates the Second Law of Thermodynamics, assuring the wrong answers.

I do not have an independent view of whether Holloway’s comments about traditional ocean models (which form one module of coupled GCMs) are right or not. I don’t know anything about Holloway, but he seems to have published a number of technical articles in respected journals on related topics. In this particular paper, he thanks Joel Sommeria, whose mathematical credentials strike me as far more imposing than those of the Hockey Team. The issues that he raises all seem plausible ones. I’ll post up some more comments on some articles raising similar issues in the next few days.

**Reference:** Greg Holloway, 2004. From Classical to Statistical Ocean Dynamics. Surveys in Geophysics 25: 203–219. http://www.planetwater.ca/research/entropy/SurvGeophys.pdf

## 20 Comments

Way outside my field, but people should be wary of making predictions about computers. How many computers did the first chairman of IBM think the world would need? Six, if memory serves. Indeed, people should be wary of making predictions about the future of technology in any sphere. I remember being told about 20 years ago by a biology teacher that sequencing the human genome, though theoretically possible, would be like counting all the grains of sand on all the world’s beaches: for practical purposes impossible. How wrong they were.

Personal experience – the Navier-Stokes equations are ok for laminar fluid flow but become S**t when applied to turbulent flow in creeks or rivers.

We are entering into shark infested waters on this thread.

Ray Kurzweil, writing in The Singularity is Near, forecasts exponential growth in computing power. See pages 70 and 71 for some graphs of current and projected growth. Very interesting: our supercomputer capacity is doubling every 1.2 years.

yeah, the deal with high power processing is that 99% of any task takes only the last few % of the total time to do the whole job. with the sequencing, they started veeeery slowly and gradually picked up the pace until the right computers became available and suddenly, almost overnight, the project was done.

makes you wonder if it’s worth it to start on something NOW or just wait till we know we can get the answer in an instant. i suppose there are lessons learned along the slow part that pay off with more efficient algorithms and the like…

mark

I was rather hoping some heavy hitters would jump in here and discuss some of the complexities of ocean circulation, but since they haven’t I’ll say a couple things I found out a while back while doing a couple of messages for RealClimate. This is off the top of my head, so I don’t vouch for anything but would love for someone with more knowledge to correct any mistakes.

It seems the mixing of the surface waters is dependent largely on winds and density differences. The mixing depth varies, being deeper near the poles and shallower in the tropics. It occurs in the form of cells or vortexes, and they actually turn over quite quickly, as test balloons (weighted to have neutral buoyancy) sink and rise in times measured in minutes or an hour or two. The mixing stops when the relatively low density surface waters slam into the heavier waters below. I’m not sure how long an individual cell lasts, but trying to model mixing given random creation and break-up of cells, differing wind speeds, the effect of rain or evaporation on density and thus mixing depth, etc., etc. is daunting to say the least. I know it’s not too difficult to compute averages of such things, but given the various sizes of important processes, and their interactions, the possibility of any foreseeable computer being able to predict conditions very far into the future is basically nil.

Re # 4. We will never know the answer in an instant. When we get to the “last few per cent of the total time”, the available processing power is so high that some bright hot shot on his way up will decide that some of the original simplifying assumptions (made because it was impossible to find the solution within the available time with the original computing power) are no longer necessary. So we make the original problem so complicated that it will again take about the same time to find the (more exact) solution. But, when we are close to the new “last few per cent of the total time”, the available processing power is so high that …

Is this a suitable place to ask if the erudite GCMs accurately reproduce the daily and annual variations in temperature that are so obvious to us all? I ask this having looked at “Temperature response of Earth to the annual solar irradiance cycle” by Douglass, Blackman and Knox. This suggests significant negative feedback as the only way of modelling the shift in annual max and min temperatures as opposed to the positive feedback we hear so much about from the alarmists. Negative feedback seems intuitively to be more likely to fit in with “global dimming” and declining pan evaporation as well as the theory of an exceptionally active sun.

I have read opinions that solar variance is too small for the observed changes, but am I wrong in thinking that we only have reliable data for the last 20 or so years? If the current solar output is above (even slightly) that needed for radiative equilibrium are we not bound to see constantly increasing temperatures checked only by the negative feedback afforded by increased cloud cover?

#7:

I don’t know how this result was measured but, for what it’s worth, it shows the irradiance steadily increasing in a way that matches the ground-based temperature record. http://aom.giss.nasa.gov/srsun.html

The satellite measurement is here (ACRIM Composite TSI series). http://www.acrim.com/ACRIM%20Composite%20Graphics.htm It shows an increase of about 0.25 W/m^2/decade for the solar irradiance which, if accumulated over a century, would easily account for much if not all of the warming seen. However, this seems to be a difficult measurement because of the need to splice data from satellites flying at different times and to match the absolute levels of the irradiance. Read the papers on the site and make up your own mind on the validity.
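The accumulation arithmetic in the comment above can be sketched in a few lines. This is my own back-of-envelope check, not the commenter’s; the 1/4 geometry factor, the ~0.7 co-albedo, and the 3.7 W/m^2 CO2-doubling figure are standard conversions I have added, not figures from the comment:

```python
# Back-of-envelope check of the TSI accumulation claim (my sketch).
trend = 0.25        # W/m^2 per decade, ACRIM composite trend as quoted
decades = 10        # one century
delta_tsi = trend * decades              # 2.5 W/m^2 change in total solar irradiance

# TSI is measured on a disc facing the Sun; spreading it over the sphere divides
# by 4, and roughly 30% is reflected, so the equivalent global-mean forcing is
# considerably smaller than the raw irradiance change:
forcing_equiv = delta_tsi * 0.7 / 4.0    # ~0.44 W/m^2
```

Whether the raw 2.5 W/m^2 or the ~0.44 W/m^2 forcing equivalent is the right comparison is exactly the kind of question the linked papers address.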

Thanks for posting this. Essex and McKitrick made the same point in Taken by Storm. About half of the energy flow away from the Earth’s surface is carried away by turbulence, which is also governed by Navier-Stokes. Given that fact, predicting what the climate will do in response to a change in one little variable, i.e., carbon dioxide concentrations, is simply impossible. Climate modeling is nothing more than crystal ball gazing at this point.

re #7… GNU’s Not Unix! :)

mark

OT – Steve, Roger Bell,

Seems like this might be a clue.

“Empirical evidence for a nonlinear effect of galactic cosmic rays on clouds”, R.Giles Harrison and David B. Stephenson, Proceedings of the Royal Society A, DOI: 10.1098/rspa.2005.1628

Abstract:

Galactic cosmic ray (GCR) changes have been suggested to affect weather and climate, and new evidence is presented here directly linking GCRs with clouds. Clouds increase the diffuse solar radiation, measured continuously at UK surface meteorological sites since 1947. The ratio of diffuse to total solar radiation, the diffuse fraction (DF), is used to infer cloud, and is compared with the daily mean neutron count rate measured at Climax, Colorado from 1951–2000, which provides a globally representative indicator of cosmic rays. Across the UK, on days of high cosmic ray flux (above 3600 x 10^2 neutron counts h^-1, which occur 87% of the time on average) compared with low cosmic ray flux, (i) the chance of an overcast day increases by (19±4) %, and (ii) the diffuse fraction increases by (2±0.3) %. During sudden transient reductions in cosmic rays (e.g. Forbush events), simultaneous decreases occur in the diffuse fraction. The diffuse radiation changes are, therefore, unambiguously due to cosmic rays. Although the statistically significant nonlinear cosmic ray effect is small, it will have a considerably larger aggregate effect on longer timescale (e.g. centennial) climate variations when day-to-day variability averages out.

#7 David, reading the IPCC report, I was struck by the emphasis put on the number of possible “positive” feedback mechanisms on climate, and the near absence of attention given to negative feedback. This is somewhat odd because any system with so much positive feedback would be so unstable that it would run away quickly. Still, climate is fairly stable, and global average temperatures, even if they fluctuate within a couple of degrees, remain confined. Well, at least over the past few thousand years, because it has indeed been much more unstable in the past. Yet when you look on a finer scale, you quickly realize that this average stability hides a very dynamic system. CO2 intake, for example, varies enormously on a seasonal and probably regional basis. So if climate had so much positive feedback w/r to CO2 concentration, it could never be that stable. Therefore, there must be some negative feedback somewhere. But it seems to me that researchers spend too much time looking for the “worst” scenarios: runaway climate, tipping points, etc., and forget to look for stabilizing mechanisms. To me, this is the main danger facing climate research: it is so obsessed by just ONE hypothesis that it forgets to look at others. The perverse result is that you only find what you’re looking for: any fact that goes with the hypothesis is given much more attention than a fact that goes against, or that is just neutral but may be indicative that something else is going on.

This seems like a good place to post this:

Here’s some interesting evidence for tidal forcing on approximately an 1800 year cycle:

http://www.pnas.org/cgi/content/full/070047197

What this all comes down to is: Just how important is CO2 when it comes to climate variance given all the other forcing and feedback mechanisms?

And here we come to an interesting quote from realclimate.org:

http://www.realclimate.org/index.php/archives/2005/12/natural-variability-and-climate-sensitivity/

That part in bold there explains why they are so resistant to the global effects of a little ice age. Apparently it would invalidate all their models. Ergo, the little ice age must be local. Presumably the same is true in reverse, e.g. the medieval warm period.

They appear to be assuming the very point they should be trying to prove. So what about that earlier paper I quoted with an as yet little understood 1800 year oceanic cycle? If that’s right, then there was a global little ice age and a MWP, and none of their models properly account for that forcing *and* as a result they systematically overestimate CO2 as a forcing agent. Does that make sense?

Audit the GCMs?

If you thought auditing proxies was a challenge, consider this exchange – a sample extracted from a discussion on the use of GCMs to estimate climate sensitivity going on at RC. It’s telling.

Steffen Christensen 3 Aug:

I’m sure that I am misunderstanding some of the basic physics here, maybe you can clarify. You state that doubling atmospheric CO2 (presumably from 280 ppm to 560 ppm or thereabouts) adds around 4 W/m^2 to Earth’s power budget, with a naive effect of, in the steady-state, increasing the mean surface temperature by ~1 degree Celsius. You then compute the key atmospheric feedbacks as having an aggregate effect of 0.85 to 1.7 W/m^2/K or so – with a mean value of maybe 1.25 W/m^2/K or so. When I go and plug this mean value in with the 1 Kelvin rise in temperature from CO2 doubling, I get an additional effect of 1.25 W/m^2 from feedback, which with a linear temperature response, heats up Earth by an extra 0.3 K or so. We have to add the feedback from this as well and so on, so in the limit I get a temperature increase of 1.45 degrees C or so. But you state that the expected mean temperature impact of doubling CO2 is between 2.6 and 4.1 degrees Celsius increase, so I’ve done something wrong somewhere. What was it?

Jeffrey Davis 3 Aug:

The preliminary discussion posits a straight 1C rise in temps per doubling of CO2 without a consideration of feedbacks. The graph, then, has a value marked ALL which looks like it posits a 1C (or less) rise in temps due to all the forcings under examination. I presume that means all the interactions of positive forcings, negative forcings, and associated effects. So, the combined rise in temps looks to be 1C for CO2 alone and around 1C (or less) when the feedbacks are included in the equation. A total of 2C (or less). How does one get to the 2.6-4.1C rise predicted in the first sentence of the article?

This exchange between amateurs led to the discovery of a critical error in a published Figure, which was subsequently corrected. Yet discussion continued:

Leonard Evens 5 Aug:

Steffen, I did think I understood the process you used to draw your conclusion. I taught many generations of calculus students about the geometric series. It seems a plausible way to argue to me, but what do I know? What I didn’t understand was why it was at variance with the estimate of 2.6 to 4.1 K. Presumably what any given model does is hard to analyze by such methods, and the Soden Held paper is an attempt to grapple with that. I think we amateurs have to accept the 2.6 to 4.1 estimate as roughly correct whether or not it seems to square with a back of the envelope estimate we can do.

Ike Solem 5 Aug:

Let me point out that the climate sensitivity estimates are the results of GCMs, and no napkin scribbles are going to reproduce these results – if they could, this would have all been figured out 100 years ago.

I’m not saying the GCMers don’t know what they’re doing. But with amateurs correcting the ivory tower experts, this sounds like a situation begging for audit. Different models based on different assumptions producing vastly different outputs, none of it accessible. And we are supposed to trust this work? I’m not saying it’s not good work. I’m not a climatologist, so I am really unqualified to judge. But as an ordinary citizen I’m just wondering: does it make sense to trust this “as is”?

I wish the Peters would respond. But it’s fairly easy to predict what kinds of questions they will avoid.

Well it’s nice RC corrected the graphic in that instance.

They still have the wrong graphic in their “Missing piece in the Wegman Inquiry” post (from Rutherford et al 2005), despite it being pointed out to them (They wanted Rutherford’s Fig 3, not Fig 2 as posted).

Re #14, just came across this. Like you I’d say “I’m not a climatologist, so I am really unqualified to judge”, but unlike you I wouldn’t then go on to judge.

Re: #16

Like Michael Mann, I’m not a statistician so therefore I shouldn’t publish statistical studies either.

bump

Bump

and Grind