Holloway [2004] on Ocean Dynamics

I’ve been browsing through some articles on climate modeling and GCMs since even the Hockey Team no longer seems to try to base climate policy on multiproxy studies. I’m particularly interested in the approach of maximum entropy theorists, since they offer a very non-IPCC perspective on GCMs. Here are a few quotes from Holloway [2004], “From Classical to Statistical Ocean Dynamics”, which is online here. Holloway observed:

In principle we suppose that we know a good approximation to the equations of motion on some scale, e.g., the Navier-Stokes equations coupled with heat and salt balances under gravity and rotation. In practice we cannot solve for oceans, lakes or most duck ponds on the scales for which these equations apply.

He likened the GCM method for climate modeling to the following:

This enterprise is like seeking to reinvent the steam engine from molecular dynamics’ simulation of water vapour. What a brave, but bizarre, thing to attempt!

Here is a longer excerpt:

In principle we suppose that we know a good approximation to the equations of motion on some scale, e.g., the Navier-Stokes equations coupled with heat and salt balances under gravity and rotation. In practice we cannot solve for oceans, lakes or most duck ponds on the scales for which these equations apply. For example, the length scales over which ocean salinity varies are often shorter than 1 mm. We try to solve for fields represented on grids (or other bases) that are far larger than the scales to which “known” equations apply. Then we are compelled to guess the equations of motion.

Guessing equations is uncomfortable, often causing us to assume without question the equations used by some previous author. When we are brave, we realize that this too is uncomfortable. It is natural to wish that, as computers grow ever more powerful, we guess less and less. What would be needed from the computer? In the oceans there are about 1.36 x 10^18 m^3 of water. If we felt that variability was unimportant within volumes of O(10^-9 m^3) then the computer should track O(10^27) volumes, each described by several degrees of freedom. Clearly one can fiddle these numbers. Today’s “big computer models”, e.g., for weather forecasting or turbulence research, may advance 10^7, 10^8 or 10^9 variables. Over time we are assured that computers will become bigger yet. Even if we imagine computer models advancing 10^12 variables (not on my desk in my lifetime!), we still face the situation that, for each one variable we track, we must guess how that variable interacts with 10^15 variables about which we are uninformed. Limiting ourselves to coastal oceans or lakes, the mismatch in degrees of freedom might reduce to 10^8 or less. Might computations for a suitably modest duck pond “someday” be possible? Maybe. This enterprise is like seeking to reinvent the steam engine from molecular dynamics’ simulation of water vapour. What a brave, but bizarre, thing to attempt! For oceans, lakes and ponds the circumstance is even worse than the dismal numbers above suggest.
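Holloway’s degrees-of-freedom count is easy to reproduce. Here is a minimal sketch of the arithmetic as I read it; the round figures are his, and the script is only an illustration:

```python
# Back-of-envelope reproduction of Holloway's degrees-of-freedom argument.
# The round figures are the ones quoted in the excerpt above.

ocean_volume_m3 = 1.36e18   # total volume of the world's oceans, m^3
cell_volume_m3 = 1e-9       # O(10^-9) m^3, i.e. about 1 mm^3, the salinity scale

cells_needed = ocean_volume_m3 / cell_volume_m3
print(f"volumes to track: {cells_needed:.1e}")            # ~1.4e27, his O(10^27)

big_model_variables = 1e12  # his hypothetical "not on my desk in my lifetime" model
mismatch = cells_needed / big_model_variables
print(f"unresolved volumes per tracked variable: {mismatch:.1e}")   # ~1e15
```

Whichever round numbers one prefers, the gap between what the “known” equations require and what any foreseeable computer can carry is many orders of magnitude, which is Holloway’s point about being compelled to guess.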

He was critical of some conventional parameterizations, pointing out:

traditional geophysical fluid dynamics (GFD), with traditional eddy viscosities, violates the Second Law of Thermodynamics, assuring the wrong answers.

I do not have an independent view of whether Holloway’s comments about traditional ocean models (which form one module of coupled GCMs) are right or not. I don’t know anything about Holloway, but he seems to have published a number of technical articles in respected journals on related topics. In this particular paper, he thanks Joel Sommeria, whose mathematical credentials strike me as far more imposing than those of the Hockey Team. The issues that he raises all seem plausible. I’ll post up some more comments on articles raising similar issues in the next few days.

Reference: Greg Holloway, 2004. From Classical to Statistical Ocean Dynamics. Surveys in Geophysics 25: 203–219. http://www.planetwater.ca/research/entropy/SurvGeophys.pdf

20 Comments

  1. Paul Gosling
    Posted Jan 24, 2006 at 3:40 AM | Permalink

    Way outside my field, but people should be wary of making predictions about computers. How many did the first chairman of IBM think the world would need? Six, if memory serves. Indeed, people should be wary of making predictions about the future of technology in any sphere. I remember being told about 20 years ago by a biology teacher that sequencing the human genome, though theoretically possible, would be like counting all the grains of sand on all the world’s beaches: for practical purposes impossible… How wrong they were.

  2. Louis Hissink
    Posted Jan 24, 2006 at 5:20 AM | Permalink

    Personal experience – the Navier-Stokes equations are ok for laminar fluid flow but become S**t when applied to turbulent flow in creeks or rivers.

    We are entering into shark infested waters on this thread.

  3. Posted Jan 24, 2006 at 2:33 PM | Permalink

    Ray Kurzweil, writing in The Singularity is Near, forecasts exponential growth in computing power. See pages 70 and 71 for some graphs of current and projected growth. Very interesting: our supercomputer capacity is doubling every 1.2 years.

  4. mark
    Posted Jan 25, 2006 at 2:48 AM | Permalink

    yeah, the deal with high power processing is that 99% of any task takes only the last few % of the total time to do the whole job. with the sequencing, they started veeeery slowly and gradually picked up the pace until the right computers became available and suddenly, almost overnight, the project was done.

    makes you wonder if it’s worth it to start on something NOW or just wait till we know we can get the answer in an instant. i suppose there are lessons learned along the slow part that pay off with more efficient algorithms and the like…

    mark

  5. Dave Dardinger
    Posted Jan 25, 2006 at 10:07 AM | Permalink

    I was rather hoping some heavy hitters would jump in here and discuss some of the complexities of ocean circulation, but since they haven’t I’ll say a couple things I found out a while back while doing a couple of messages for RealClimate. This is off the top of my head, so I don’t vouch for anything but would love for someone with more knowledge to correct any mistakes.

    It seems the mixing of the surface waters is dependent largely on winds and density differences. The mixing depth varies, being deeper near the poles and more shallow in the tropics. It occurs in the form of cells or vortexes, and they actually turn over quite quickly, as test balloons (weighted to have neutral buoyancy) sink and rise in times measured in minutes or an hour or two. The mixing stops when the relatively low density surface waters slam into the heavier waters below. I’m not sure how long an individual cell lasts, but trying to model mixing given random creation and break-up of cells, differing wind speeds, the effect of rain or evaporation on density and thus mixing depth, etc., etc. is daunting to say the least. I know it’s not too difficult to compute averages of such things, but given the various sizes of important processes, and their interactions, the possibility of any foreseeable computer being able to predict conditions very far into the future is basically nil.

  6. J. Arbona
    Posted Jan 25, 2006 at 11:15 AM | Permalink

    Re # 4. We will never know the answer in an instant. When we get to the “last few per cent of the total time”, the available processing power is so high that some bright hot shot on his way up will decide that some of the original simplifying assumptions (made because it was impossible to find the solution within the available time with the original computing power) are no longer necessary. So we make the original problem so complicated that it will again take about the same time to find the (more exact) solution. But, when we are close to the new “last few per cent of the total time”, the available processing power is so high that …

  7. David H
    Posted Jan 25, 2006 at 3:12 PM | Permalink

    Is this a suitable place to ask if the erudite GCMs accurately reproduce the daily and annual variations in temperature that are so obvious to us all? I ask this having looked at “Temperature response of Earth to the annual solar irradiance cycle” by Douglass, Blackman and Knox. This suggests significant negative feedback as the only way of modelling the shift in annual max and min temperatures as opposed to the positive feedback we hear so much about from the alarmists. Negative feedback seems intuitively to be more likely to fit in with “global dimming” and declining pan evaporation as well as the theory of an exceptionally active sun.

    I have read opinions that solar variance is too small for the observed changes, but am I wrong in thinking that we only have reliable data for the last 20 or so years? If the current solar output is above (even slightly) that needed for radiative equilibrium are we not bound to see constantly increasing temperatures checked only by the negative feedback afforded by increased cloud cover?

  8. Paul Linsay
    Posted Jan 25, 2006 at 9:36 PM | Permalink

    #7:

    I have read opinions that solar variance is too small for the observed changes

    I don’t know how this result was measured, but for what it’s worth it shows the irradiance steadily increasing in a way that matches the ground-based temperature record. http://aom.giss.nasa.gov/srsun.html

    The satellite measurement is here (ACRIM Composite TSI series). http://www.acrim.com/ACRIM%20Composite%20Graphics.htm It shows an increase of about 0.25 W/m^2/decade for the solar irradiance which, if accumulated over a century, would easily account for much if not all of the warming seen. However, this seems to be a difficult measurement because of the need to splice data from satellites flying at different times and match the absolute levels of the irradiance. Read the papers on the site and make up your own mind on the validity.

    [A back-of-envelope sketch of this accumulation arithmetic is appended after the comments.]

  9. pj
    Posted Jan 26, 2006 at 11:00 AM | Permalink

    Thanks for posting this. Essex and McKitrick made the same point in Taken by Storm. About half of the energy flow away from the Earth’s surface is carried away by turbulence, which is also governed by Navier-Stokes. Given that fact, predicting what the climate will do in response to a change in one little variable, i.e., carbon dioxide concentrations, is simply impossible. Climate modeling is nothing more than crystal ball gazing at this point.

  10. mark
    Posted Jan 26, 2006 at 4:21 PM | Permalink

    re #7… GNU’s Not Unix! :)

    mark

  11. John G. Bell
    Posted Jan 30, 2006 at 9:02 PM | Permalink

    OT – Steve, Roger Bell,
    Seems like this might be a clue.

    “Empirical evidence for a nonlinear effect of galactic cosmic rays on clouds”, R. Giles Harrison and David B. Stephenson, Proceedings of the Royal Society A, DOI: 10.1098/rspa.2005.1628

    Abstract:

    Galactic cosmic ray (GCR) changes have been suggested to affect weather and climate, and new evidence is presented here directly linking GCRs with clouds. Clouds increase the diffuse solar radiation, measured continuously at UK surface meteorological sites since 1947. The ratio of diffuse to total solar radiation, the diffuse fraction (DF), is used to infer cloud, and is compared with the daily mean neutron count rate measured at Climax, Colorado from 1951–2000, which provides a globally representative indicator of cosmic rays. Across the UK, on days of high cosmic ray flux (above 3600 x 10^2 neutron counts h^-1, which occur 87% of the time on average) compared with low cosmic ray flux, (i) the chance of an overcast day increases by (19±4) %, and (ii) the diffuse fraction increases by (2±0.3) %. During sudden transient reductions in cosmic rays (e.g. Forbush events), simultaneous decreases occur in the diffuse fraction. The diffuse radiation changes are, therefore, unambiguously due to cosmic rays. Although the statistically significant nonlinear cosmic ray effect is small, it will have a considerably larger aggregate effect on longer timescale (e.g. centennial) climate variations when day-to-day variability averages out.

  12. Posted Jan 31, 2006 at 6:24 AM | Permalink

    #7 David, reading the IPCC report, I was struck by the emphasis put on the number of possible “positive” feedback mechanisms on climate, and the near absence of attention given to negative feedback. This is somewhat odd, because any system with so much positive feedback would be so unstable that it would run away quickly. Still, climate is fairly stable, and global average temperatures, even if they fluctuate within a couple of degrees, remain confined. Well, at least over the past few thousand years, because it has indeed been much more unstable in the past. Yet when you look on a finer scale, you quickly realize that this average stability hides a very dynamic system. CO2 uptake, for example, varies enormously on a seasonal and probably regional basis. So if climate had so much positive feedback with respect to CO2 concentration, it could never be that stable. Therefore, there must be some negative feedback somewhere. But it seems to me that researchers spend too much time looking for the “worst” scenarios: runaway climate, tipping points, etc., and forget to look for stabilizing mechanisms. To me, this is the main danger facing climate research: it is so obsessed by just ONE hypothesis that it forgets to look at others. The perverse result is that you only find what you’re looking for: any fact that goes with the hypothesis is given much more attention than a fact that goes against it, or that is just neutral but may indicate that something else is going on.

  13. Ashby
    Posted Jun 10, 2006 at 8:57 PM | Permalink

    This seems like a good place to post this:

    Here’s some interesting evidence for tidal forcing on approximately an 1800 year cycle:

    Ramifications of the Tidal Hypothesis. The details of the tidal hypothesis are complex. There is much about tidal forcing that we do not know, and there is not space here to discuss all that we do know that could contribute to proving whether it is the underlying cause of some, or all, of the events of rapid climate change. We are convinced, however, that, if the hypothesis is to a considerable degree valid, the consequences to our understanding of the ice-ages, and of possible future climates, are far from trivial.

    Should the tidal hypothesis of quasi-periodic cooling of the oceans turn out to be correct, a prevailing view that the earth’s postglacial climate responds mainly to random and unpredictable processes would be modified or abandoned. The 1,800-year tidal cycle would be recognized as a principal driver of climate change in the Holocene, causing shifts in climate more prominent and extensive than hitherto realized. The Little Ice Age would be seen to be only a lesser cooling episode in a series of such episodes. Viewed today as of “possibly global significance” (14), it would probably be confirmed as such, being linked to global tidal forcing. Other major climatic events since the glacial period, such as drought near the time of collapse of the Akkadian empire, might also be found to be linked to a global process.

    Looking ahead, a prediction of “pronounced global warming” over the next few decades by Broecker (15), presumed to be triggered by the warm phase of an 80-year climatic cycle of unidentified origin, would be reinterpreted as the continuation of natural warming in roughly centennial increments that began at the end of the Little Ice Age, and will continue in spurts for several hundred years. Even without further warming brought about by increasing concentrations of greenhouse gases, this natural warming at its greatest intensity would be expected to exceed any that has occurred since the first millennium of the Christian era, as the 1,800-year tidal cycle progresses from climactic cooling during the 15th century to the next such episode in the 32nd century.

    http://www.pnas.org/cgi/content/full/070047197

    What this all comes down to is: Just how important is CO2 when it comes to climate variance given all the other forcing and feedback mechanisms?

    And here we come to an interesting quote from realclimate.org:

    Some of these things are feedbacks like water vapor, clouds and sea-ice, which could be reasonably presumed to apply to the future as well as the past. Other forcings, including the growth and decay of massive Northern Hemisphere continental ice sheets, changes in atmospheric dust, and changes in the ocean circulation, are not likely to have the same kind of effect in a future warming scenario as they did at glacial times. In estimating climate sensitivity such effects must be controlled for, and subtracted out to yield the portion of climate change attributable to CO2. Broadly speaking, we know that it is unlikely that current climate models are systematically overestimating sensitivity to CO2 by very much, since most of the major models can get into the ballpark of the correct tropical and Southern Hemisphere cooling when CO2 is dropped to 180 parts per million. No model gets very much cooling south of the Equator without the effect of CO2. Hence, any change in model physics that reduced climate sensitivity would make it much harder to account for the observed LGM cooling.

    http://www.realclimate.org/index.php/archives/2005/12/natural-variability-and-climate-sensitivity/

    That part in bold there explains why they are so resistant to the global effects of a little ice age. Apparently it would invalidate all their models. Ergo, the little ice age must be local. Presumably the same is true in reverse, e.g. the medieval warm period. They appear to be assuming the very point they should be trying to prove.

    So what about that earlier paper I quoted, with an as-yet little-understood 1800-year oceanic cycle? If that’s right, then there was a global little ice age and an MWP, and none of their models properly accounts for that forcing *and* as a result they systematically overestimate CO2 as a forcing agent.

    Does that make sense?

  14. bender
    Posted Aug 9, 2006 at 12:18 AM | Permalink

    Audit the GCMs?
    If you thought auditing proxies was a challenge, consider this exchange – a sample extracted from a discussion on the use of GCMs to estimate climate sensitivity going on at RC. It’s telling.

    Steffen Christensen 3 Aug:
    I’m sure that I am misunderstanding some of the basic physics here, maybe you can clarify. You state that doubling atmospheric CO2 (presumably from 280 ppm to 560 ppm or thereabouts) adds around 4 W/m^2 to Earth’s power budget, with a naive effect of, in the steady-state, increasing the mean surface temperature by ~1 degree Celsius. You then compute the key atmospheric feedbacks as having an aggregate effect of 0.85 to 1.7 W/m^2/K or so – with a mean value of maybe 1.25 W/m^2/K or so. When I go and plug this mean value in with the 1 Kelvin rise in temperature from CO2 doubling, I get an additional effect of 1.25 W/m^2 from feedback, which with a linear temperature response, heats up Earth by an extra 0.3 K or so. We have to add the feedback from this as well and so on, so in the limit I get a temperature increase of 1.45 degrees C or so. But you state that the expected mean temperature impact of doubling CO2 is between 2.6 and 4.1 degrees Celsius increase, so I’ve done something wrong somewhere. What was it?

    Jeffrey Davis 3 Aug:
    The preliminary discussion posits a straight 1C rise in temps per doubling of CO2 without a consideration of feedbacks. The graph, then, has a value marked ALL which looks like it posits a 1C (or less) rise in temps due to all the forcings under examination. I presume that means all the interactions of positive forcings, negative forcings, and associated effects. So, the combined rise in temps looks to be 1C for CO2 alone and around 1C (or less) when the feedbacks are included in the equation. A total of 2C (or less). How does one get to the 2.6-4.1C rise predicted in the first sentence of the article?

    This exchange between amateurs led to the discovery of a critical error in a published Figure, which was subsequently corrected. Yet discussion continued:

    Leonard Evens 5 Aug:

    Steffen, I did think I understood the process you used to draw your conclusion. I taught many generations of calculus students about the geometric series. It seems a plausible way to argue to me, but what do I know? What I didn’t understand was why it was at variance with the estimate of 2.6 to 4.1 K. Presumably what any given model does is hard to analyze by such methods, and the Soden Held paper is an attempt to grapple with that. I think we amateurs have to accept the 2.6 to 4.1 estimate as roughly correct whether or not it seems to square with a back of the envelope estimate we can do.

    Ike Solem 5 Aug:
    Let me point out that the climate sensitivity estimates are the results of GCMs, and no napkin scribbles are going to reproduce these results – if they could, this would have all been figured out 100 years ago.

    I’m not saying the GCMers don’t know what they’re doing. But with amateurs correcting the ivory tower experts, this sounds like a situation begging for audit. Different models based on different assumptions producing vastly different outputs, none of it accessible. And we are supposed to trust this work? I’m not saying it’s not good work. I’m not a climatologist, so I am really unqualified to judge. But as an ordinary citizen I’m just wondering: does it make sense to trust this “as is”?

    I wish the Peters would respond. But it’s fairly easy to predict what kinds of questions they will avoid.

    [The back-of-envelope feedback arithmetic from the exchange above is sketched after the comments.]

  15. James Lane
    Posted Aug 9, 2006 at 2:57 AM | Permalink

    Well it’s nice RC corrected the graphic in that instance.

    They still have the wrong graphic in their “Missing piece in the Wegman Inquiry” post (from Rutherford et al 2005), despite it being pointed out to them (They wanted Rutherford’s Fig 3, not Fig 2 as posted).

  16. Peter Hearnden
    Posted Aug 9, 2006 at 3:47 AM | Permalink

    Re #14, just came across this. Like you, I’d say ‘I’m not a climatologist, so I am really unqualified to judge’, but unlike you I wouldn’t then go on to judge.

  17. John A
    Posted Aug 9, 2006 at 5:58 AM | Permalink

    Re: #16

    Like Michael Mann, I’m not a statistician so therefore I shouldn’t publish statistical studies either.

  18. Steve McIntyre
    Posted Feb 15, 2007 at 8:32 AM | Permalink

    bump

  19. Steve Sadlov
    Posted Feb 15, 2007 at 2:44 PM | Permalink

    Bump

  20. MarkW
    Posted Feb 15, 2007 at 2:59 PM | Permalink

    and Grind
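A note on the accumulation arithmetic in comment #8. The sketch below simply lays the numbers out: the 0.25 W/m^2/decade trend is Paul Linsay’s figure from the ACRIM composite, taken at face value, and the conversion from a change in total solar irradiance to a globally averaged forcing (divide by four for geometry, multiply by one minus a nominal albedo of 0.3) is a standard textbook step added here only for scale; it is not asserted in the comment.

```python
# Accumulation of the TSI trend quoted in comment #8 (illustrative only).

trend = 0.25     # W/m^2 per decade, the quoted ACRIM composite trend
decades = 10     # one century

delta_tsi = trend * decades                    # accumulated change in total solar irradiance
albedo = 0.3                                   # nominal planetary albedo (assumption)
delta_forcing = delta_tsi * (1 - albedo) / 4   # spread over the sphere, less the reflected part

print(f"accumulated TSI change over a century: {delta_tsi:.2f} W/m^2")      # 2.50
print(f"equivalent globally averaged forcing:  {delta_forcing:.2f} W/m^2")  # ~0.44
```

Whether that accumulated change accounts for much of the observed warming therefore depends a good deal on how an irradiance change is translated into a forcing, which is a separate question from the splicing issue Paul raises.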
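A note on the feedback arithmetic in the exchange quoted in comment #14. Christensen’s 1.45 K is just the sum of a geometric series, the one Leonard Evens alludes to. The sketch below uses only the round figures quoted in the exchange (about 4 W/m^2 for doubled CO2, about 1 K of no-feedback response, and about 1.25 W/m^2/K of aggregate feedback); it is the napkin calculation the commenters are debating, not a description of what any GCM actually does.

```python
# Geometric-series version of the feedback arithmetic quoted in comment #14.
# Figures are the round numbers from the RC exchange; illustrative only.

forcing = 4.0          # W/m^2, quoted forcing for doubled CO2
dT_no_feedback = 1.0   # K, quoted no-feedback response to that forcing
feedback = 1.25        # W/m^2 per K, mid-range of the quoted 0.85-1.7 aggregate

# Each kelvin of warming adds `feedback` W/m^2, which in turn produces a further
# f kelvin of warming, and so on: a geometric series with common ratio f.
f = feedback * dT_no_feedback / forcing        # 0.3125

term, total = dT_no_feedback, dT_no_feedback
for _ in range(50):                            # sum the series term by term
    term *= f
    total += term

closed_form = dT_no_feedback / (1 - f)         # exact sum, since 0 < f < 1

print(f"term-by-term sum: {total:.2f} K")        # ~1.45, Christensen's figure
print(f"closed form:      {closed_form:.2f} K")  # 1 / (1 - 0.3125) = 1.45
```

That the series gives about 1.45 K rather than the 2.6-4.1 K headline figure is exactly the discrepancy the commenters were chasing, and which led to the corrected Figure bender mentions.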
