Lucia on Model E's Viscous Dissipation

Lucia has an interesting post on how GISS Model E deals with heat from viscous dissipation here. This is an excellent and technical discussion of a specific modeling issue.

It is precisely the sort of discussion that I think is instructive and useful in this field: a specific issue about a specific model. If the models were properly documented (in an engineering sense), there would be no need to figure out (speculate on) how climate models do things, as it would be set out in long boring reference manuals, as is done in other fields. (Lucia mentions nuclear plants.) Climate science has, for the most part, eschewed reference manuals, preferring toy articles in “high-impact” journals. As a result, there is a niche for technical blogs, as discussions such as Lucia’s somewhat fill the gap left by the absence of the reference manuals that would exist in other fields.

People sometimes complain that I’m not “interested” in the physics of AGW – opining this because of the lack of coverage of physics topics here. It’s not that I’m uninterested in the “physics” – it’s that I’m uninterested in personal opinions and pet theories. I’m interested in technical discussion of articles and models relied upon by IPCC. But 99.99% of the people who want to discuss the “physics” are not interested in discussing viscous heat dissipation in Model E, they want to discuss Mickowski or Beck or Svensmark. I have no interest in such discussions.

Lucia’s topic is dry. It’s technical. I don’t personally understand the issues, but I like seeing people discuss them in the hope that maybe I will. It’s a “Climate Audit” sort of topic. At least there’s a fighting chance that people entering the discussion can end up finding some foothold of common understanding on a technical point, as opposed to merely venting opinions past one another.


69 Comments

  1. Francois Ouellette
    Posted Jan 13, 2009 at 11:48 AM | Permalink

    Steve,

    This is the second time you mention Svensmark as not being “mainstream science”. I beg to disagree. He publishes regularly in mainstream journals, and is credible enough to have an experiment set up at CERN to try to confirm his hypothesis.

    That being said, I agree that it is difficult to discuss physics on a blog, though some blogs go so far as to discuss string theory! It is true that many here have their own pet theory, or seem to argue over and over again about CO2 absorption. Being a physicist myself familiar with spectroscopy, I don’t know if I would have the patience to reply to all those comments. For example, I don’t think the physicists in charge of calculating CO2 absorption over an atmospheric column would be so dumb as not to take into account the varying concentration, pressure broadening, and spectral overlap with water vapor. On the other hand, there are much subtler issues when it comes to, say, water vapor feedback, that a non-specialist, and even a full-fledged physicist, could have a hard time comprehending without reading at least a textbook, and some literature.

    I think it’s easier to discuss statistics on a blog because many more people use statistics in a variety of fields. Physicists, on the other hand, are a rare species, and tend to stick to their narrow little field.

  2. Steve McIntyre
    Posted Jan 13, 2009 at 12:04 PM | Permalink

#1. Francois, my point is that Svensmark is not relied upon by IPCC science. Maybe I haven’t used the mot juste, but you know what I mean.

And my point about Lucia’s discussion is that IMO, if people want to talk about lines, they should look at the handling of lines in GCMs, rather than get involved in ab initio first-principles discussions.

    I think that it would be quite possible for someone to make the physics just as accessible as the paleoclimate statistical issues by working through things in bites. And taking small bites like Model E viscous heat dissipation seems to me how to do it. That was the point of the post.

  3. Chris D.
    Posted Jan 13, 2009 at 12:26 PM | Permalink

    Is this related to the “Butterfly Effect” that Pielke, Sr. and Gavin were having a disagreement over? Was last year, iirc.

  4. jae
    Posted Jan 13, 2009 at 12:52 PM | Permalink

    Dan Hughes is also working on this topic.

  5. Posted Jan 13, 2009 at 1:11 PM | Permalink

    Chris D.
The viscous dissipation issue is less esoteric than the Butterfly Effect issue!

    Right now, I just posted a thread because people want to discuss it. I’m learning as much as anyone.

At some point, I may write down some equations to explain what this viscous dissipation is. (But I’m not sure that will help many people.)

But in terms of the closest physical problem you might be familiar with: suppose you rub your hands together. You have to push a little to overcome friction. The result of the friction is heat.

Similar things happen in fluids: The fact that viscosity is not zero results in some degree of “friction” in flows. It then generates frictional heating. Dan saw terms in Model E that look like frictional heating. They may or may not represent that. Willis has seen similar terms in models. They describe some features of how this is modeled that seem odd. On the other hand, some people thought the entire existence of the term must be odd. In fact, the term does exist, and it belongs in the model.

So, different people are popping in comments, saying what they know, and at some point I’m going to read all of it and figure out what it all might wrap up to be.

    If you visit the post, you’ll see that I make no claims to reveal what actually happens in Model E!
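The frictional-heating idea above can be put in a single formula. This is a toy, back-of-envelope sketch (illustrative only, not Model E code): for a simple shear (Couette) flow, the viscous dissipation rate per unit volume is phi = mu * (du/dy)^2, i.e. viscosity times the square of the velocity gradient.

```python
# Toy sketch of frictional heating in a fluid (illustrative only --
# this is NOT code from Model E). For a simple shear (Couette) flow,
# the viscous dissipation rate per unit volume is
#   phi = mu * (du/dy)**2,
# which is always >= 0: viscosity can only turn kinetic energy into heat.

def viscous_dissipation(mu, du_dy):
    """Dissipation rate per unit volume (W/m^3) for simple shear flow."""
    return mu * du_dy ** 2

# Air has dynamic viscosity mu ~ 1.8e-5 Pa.s; a shear of 1 (m/s) per
# metre dissipates only ~1.8e-5 W/m^3 -- tiny, but never zero, which is
# why the term legitimately belongs in a model's energy budget.
phi = viscous_dissipation(1.8e-5, 1.0)
```

One useful sanity check this suggests: whatever the corresponding term in the source code looks like, it should only ever add heat, never remove it.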

  6. Dishman
    Posted Jan 13, 2009 at 1:28 PM | Permalink

    The long, boring reference manuals you’re referring to actually should be there according to NASA software standards.

    By my reading, GCM Model E should have been built according to NASA’s highest Software Assurance standards (potential losses over $200M, etc.). GISS chose not to implement those standards for either GCM Model E or GISTEMP. I am not certain that choice was lawful, but that’s the choice they made.

  7. tetris
    Posted Jan 13, 2009 at 1:53 PM | Permalink

    Re:2 Steve M
    1] I would suggest caution in setting aside Svensmark like that just because he “is not relied on by IPCC science”. Pielke Sr has several noteworthy compilations on his site outlining the significant body of [peer reviewed] quality science that the IPCC systematically opted to disregard. The IPCC’s decision to deliberately dismiss the role of water vapour/clouds in its analysis has been criticized by a good number of qualified observers.

2] Auditing and analyzing the “technical discussions of articles and models relied on by the IPCC” is certainly a worthwhile endeavour. But I would have thought that the very purpose of auditing is not just to find shortcomings, omissions or [heavens forbid] malfeasance, but to trigger consequences by identifying accountability/responsibility. Absent the latter, it is a possibly interesting but rather futile exercise.

  8. Posted Jan 13, 2009 at 2:01 PM | Permalink

    Well, I, for one, find the radiation into gases topic fascinating. There is a dichotomy there: some think the atmosphere can be handled as a simple slab; others think it has to be sliced up into layers. I suspect the resulting models, under either assumption, contain errors. The atmosphere is neither a slab nor a stack of minislabs.

  9. sean egan
    Posted Jan 13, 2009 at 2:02 PM | Permalink

The source for modelE is available for download. The source looks surprisingly small as a 1 Mbyte tarball. Have any of the posters here actually compiled it from source, and does it run to completion? Will it run on a PC, or do you need a supercomputer?

    These may sound like dumb questions, but it is a lot easier to work through code if you can run it, and I do not have a supercomputer!

    Sean

    • counters
      Posted Jan 13, 2009 at 4:39 PM | Permalink

      Re: sean egan (#9),

      You don’t need a supercomputer to run any of the open source climate models such as the ModelE or the CCSM. You do, however, need to be comfortable with shell scripting (that way you can configure the model, build it, configure the executable, line up all your data sets, and run it in a semi-coherent manner). Some understanding of your hardware also helps. The ModelE isn’t really intended as a publicly-consumed model (as in, average Joe can download it and perform climate experiments with it), but the CCSM is very accessible and has a great deal of documentation – both on the technical/scientific aspects of the model as well as on how to run it.

  10. Posted Jan 13, 2009 at 2:51 PM | Permalink

    Don’t call nuclear power plant manuals boring. They are fascinating reading. At least to some of us. I spent 12 and 16 hour days (excluding meals and bathroom breaks) reading those suckers. And I loved it. As they say YMMV.

  11. Francois Ouellette
    Posted Jan 13, 2009 at 4:05 PM | Permalink

    #8 Jorge, I guess at some point there is the issue of calculation efficiency. What I meant to say is that the physics of radiation absorption and emission is known well enough and straightforward enough that there is no question that the people involved in modeling it would know how to calculate it exactly. If they then resort to approximations to improve the speed and/or efficiency of the model, then the approximation itself may be disputable. Such algorithms would in principle be tested, and there is indeed a vast literature on new and improved algorithms to be used in models (just ask Judith Curry).

    I think what happens in blogs is that a lot of people think they can just reinvent the wheel. So there are only a couple of formulas that can work for a science blog. Either the blogger is “the” expert, distributing his/her knowledge and wisdom to readers who just opine and express their admiration (that would be RC, but then also most science blogs). Or the audience is made of other experts who can then have high level technical discussions. Or the technical level is such that a well educated and well informed audience can also contribute to the technical discussion. The latter type is well represented by CA and Lucia’s blog.

  12. Jeff Alberts
    Posted Jan 13, 2009 at 5:04 PM | Permalink

    I don’t know squat about modeling or the mathematics involved. I just have one question.

    If you run the same model 10 times with exactly the same inputs, do you get exactly the same results?

    If not, why? If so, why?

    Ok more than one…

    • joshv
      Posted Jan 13, 2009 at 5:54 PM | Permalink

      Re: Jeff Alberts (#13),

I did a bit of simple modeling back in the day. The model itself was entirely deterministic. For the same set of parameters and initial conditions, two runs would produce exactly the same output – even when randomness was incorporated into the model. Even random number generators produce the exact same sequence of numbers when given the same seed.

If I seeded the random number generator with something truly unique – like the current date/time – it would then produce truly unique output every time I ran it. But really the seed is just one component of the initial conditions, so that’s cheating. I believe most GCMs are the same – they are entirely deterministic. So to answer your question: yes, for the same set of inputs, you must get the same output.

Now I could imagine a model that incorporates real-world random inputs – perhaps via a hardware random number generator, or even keyboard events – and the output of such a model would not be deterministic. But I am not sure what the utility of such a model would be, as its results would not be reproducible. It’s pretty important to be able to make code changes to your model and validate that your change did not affect the results (for example, when performance tuning).
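The seed-determinism point above can be demonstrated in a few lines. This is a generic sketch (not GCM code): a pseudo-random generator given a fixed seed makes a “random” run exactly reproducible, and only a new seed produces a different run.

```python
# Generic sketch (not GCM code): a pseudo-random generator seeded with
# a fixed value yields an exactly reproducible "random" run; changing
# only the seed changes the run.
import random

def toy_run(seed, steps=5):
    rng = random.Random(seed)  # private generator with a fixed seed
    return [rng.random() for _ in range(steps)]

assert toy_run(42) == toy_run(42)  # same seed -> identical sequence
assert toy_run(42) != toy_run(43)  # different seed -> a different "run"
```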

      • Jaye Bass
        Posted Jan 14, 2009 at 1:08 PM | Permalink

        Re: joshv (#15),

        I wouldn’t call it cheating at all. If nothing else it tests the random number generators. Not all are created equal.

    • Mark T.
      Posted Jan 13, 2009 at 5:55 PM | Permalink

      Re: Jeff Alberts (#13),

      If you run the same model 10 times with exactly the same inputs, do you get exactly the same results?

      That depends… if there are no random processes designed into the model, then yes, since they are then deterministic, repeated runs should provide identical results. If there is some random process included, such as a noise input, then no, there will be differences though they may be minor depending upon how large the random process is compared to the deterministic process.

      Mark

      • Jaye Bass
        Posted Jan 14, 2009 at 1:13 PM | Permalink

        Re: Mark T. (#16),

        Not true, if the seeds aren’t changed then the random number generators, regardless of what type of distribution they represent, will produce exactly the same sequence of numbers – unless you get some really bad round-off error…which doesn’t happen much these days.

        • Mark T.
          Posted Jan 14, 2009 at 4:41 PM | Permalink

          Re: Jaye Bass (#50),

          Not true, if the seeds aren’t changed then the random number generators, regardless of what type of distribution they represent,

          Then it’s not very random, is it? Think before you post, then think again, that way you won’t post silly statements such as these.

          Mark

  13. Peter
    Posted Jan 13, 2009 at 5:49 PM | Permalink

    Lucia #5

    My field is financial math, specifically portfolio optimization etc. I can relate to the friction between fluids, however, because one of my fraternity brothers worked at a chemical plant during summer recess. One fall, he brought back a powder he called “floc”(sp?). We spread it on the floor of the dining hall one night, physical plant came along with mops and pails, and you can probably guess the rest. A classic.

  14. Jeff Alberts
    Posted Jan 13, 2009 at 6:03 PM | Permalink

    Thanks for the replies, folks.

    So, if any part of the chaotic model is poorly understood, and therefore poorly modeled or excluded altogether, can there possibly be any relevance with reality?

    • counters
      Posted Jan 13, 2009 at 6:51 PM | Permalink

      Re: Jeff Alberts (#17),

      Jeff, don’t forget that “chaotic” is not the same as “random.”

      The climate is obviously an incredibly complicated system, and is indeed chaotic. But even chaotic systems, over time, evolve to a somewhat steady state. In a climate model, that steady state is the long-term trend; the chaos is apparent in the fluctuations over time as that end-state is reached. Think about the real atmosphere for example. Presuming you live in the US, you can expect to have 4 seasons each year with roughly the same general weather patterns – snow and cold in winter, warm in the summer, and mild and wet in autumn and spring. Weather patterns in the short-term are chaotic, but over a long period of time we don’t expect to skip a winter all of a sudden. This is partly because the seasons are governed by forcings outside of the climate itself.

      So, in a nutshell, I think we have to re-phrase your question. The way you have asked it presumes that the models aren’t in accordance with reality. Instead, I propose you ask, “What causes the models to deviate from reality?” That’s a great open-ended question, and a constant one in climate modeling research (and research in other modeling fields).

  15. Alan S. Blue
    Posted Jan 13, 2009 at 6:49 PM | Permalink

    There could be relevance. It depends on how strongly the chaotic system is constrained.

    Picture a (frictionless, perfect) spirograph with a magnet attached to the pendulum and various magnets randomly scattered across the drawing area. We know the physics involved quite well. But the path depends on the starting conditions too precisely for us to measure. (To the point where Heisenberg says we -can’t- know precisely enough.) We have a lot of trouble predicting where, exactly, the pendulum might be pointing tomorrow at noon, let alone where it might be in a hundred years.

    But we’re pretty darn sure that the chance of the pendulum pointing straight up, or being horizontal is essentially zero. And we really didn’t need a good model at all to correctly predict that.

    A significant piece of the “Tipping Point” philosophy is that the system isn’t constrained to sane temperatures on the upper end. Essentially saying that we’re going to overpower any potential oppositional feedback and keep climbing. This would be ‘open loop unstable’ in engineering terms, and it isn’t something observed in nature much.

    • counters
      Posted Jan 13, 2009 at 6:53 PM | Permalink

      Re: Alan S. Blue (#18),

      Alan, I just want to add that the “tipping point” theory isn’t necessarily readily-accepted by climate scientists across the field.

  16. Posted Jan 13, 2009 at 6:51 PM | Permalink

    Jeff–
    That depends on how you define “poorly understood”. If some phenomenon is poorly understood, but its effect in a particular application is small, the lack of understanding doesn’t matter. What one needs to figure out is how well a model describes reality, and whether it describes it well enough to provide some level of guidance. Then you use the tool, keeping its shortcomings in mind.

    It’s not much different from finding utility in an old jalopy. Jalopies have components that don’t work anywhere near perfectly. But sometimes, it’s still preferable to drive the jalopy than walk! Just don’t pretend it’s a highly reliable, speedy, comfortable, efficient car.

  17. Andrew
    Posted Jan 13, 2009 at 7:10 PM | Permalink

    If a certain part of an old jalopy doesn’t work, (like the engine or the starter or the gas tank) the jalopy (the whole jalopy) is not worth much for getting around, even though 99 percent of it “works”.

    Just sayin’ :wink:

    Andrew

    • Jeff Alberts
      Posted Jan 13, 2009 at 7:18 PM | Permalink

      Re: Andrew (#22),

      But if the turn signals don’t work, or a fender falls off, it still runs the same. But if clouds stop forming, the climate will still “work”, just not the way we expect it to. I still think it’s a poor analogy.

  18. Alan S. Blue
    Posted Jan 13, 2009 at 7:11 PM | Permalink

    Oh, I agree. But the enthusiasts are quite… enthusiastic.

  19. Jeff Alberts
    Posted Jan 13, 2009 at 7:17 PM | Permalink

    Thanks again for the replies.

Lucia, not sure if the jalopy analogy is a good one. I can build a plastic model of a jalopy that won’t run at all, but it will look almost exactly like the real thing. And, I’m not sure how to phrase this, but it isn’t something chaotic that changes with every second of every day (except that minuscule parts oxidize and degrade).

Thanks, Counters (or do I use a small “c”?). I understand that chaotic ≠ random. I guess what I’m trying to get at is: if a climate model seems to resemble reality, and some (one, many, I don’t know) components are poorly understood and therefore can’t be adequately modeled (such as the effects of clouds, or even when/where clouds will form, what the sun will do 30 years from now, etc.), how does one know that the seemingly correct result isn’t just chance? Especially if you’re running the model(s) hundreds of times and then averaging the results? It just doesn’t make sense to me that we can confirm that “yes, this model has accurately modeled the Earth’s climate, because the temperature is the same as the model ‘predicted’.”

    I dunno, maybe I’m just being argumentative, trying not to be, but it’s difficult.

    • counters
      Posted Jan 13, 2009 at 7:47 PM | Permalink

      Re: Jeff Alberts (#24),

      Great points, Jeff. Two things, and then I’ll answer your question about why we can be confident the results aren’t due to chance alone:

For starters, ensembles of climate models aren’t averaged together. You’re probably familiar with the “spaghetti diagrams” which show a large number of results – this is as close to “averaged” as a good study is going to do. If we want an ensemble of model runs, then oftentimes what we will do is run a model with a single configuration and a single set of initial conditions. However, on subsequent runs, we perturb the initial conditions. Our chaos discussion takes over in a big way at this point; the models are deterministic, but if the data is perturbed slightly, we can get a very different answer. However, we very rarely if ever see an answer completely divergent from our first one; you’re not going to get a Venusian atmosphere with one run, perturb the ICs, and then get a Plutonian atmosphere. When we plot all the different runs together, we expect to see a trend emerge from all the results. In the case of the IPCC’s climate simulations, even when different models are run with about the same set of initial conditions, we see a cross-model trend of global warming. So essentially we’re looking at individual runs in the context of all the runs we’ve performed rather than lumping them all into one value (although there are statistical processes that are used to analyze sets of runs).

Second, I think you’re overstating our deficiency in understanding certain components of the climate system. Cloud feedbacks, for instance, have been heavily studied since the IPCC AR4, and in the new models that are just being frozen for the AR5 runs, clouds are handled in a much more complex manner. Some models even go so far as to embed weather models inside climate model grid-boxes to help rectify this problem. With respect to solar behavior, the sun really doesn’t vary that much in output. Solar cycles (what we observe in relation to sunspots and whatnot) are all modeled. But they’re really not that big of an input; orbital cycles are far more important. The sun’s output is relatively constant on a multi-century scale (it’s not going to change in such a dramatic way that it’s going to have a significant impact on the climate, at least compared to forcings from greenhouse gases, land-use changes, or volcanic eruptions).

      So why are we confident that the results aren’t due to chance alone? Well, a major reason is that as we’ve improved our methods and understanding over the years, our model results have continued to fall within the error bars previously established. For instance, the climate sensitivity derived in the 1980’s is just about the same as what has recently been derived (~ 3 K for 2x CO2). We’re using drastically more sophisticated models than we were 3 decades ago, yet we’re still seeing the same general trends. The bottom line is that no one has shown the climate to be dominated by the “noise.” It’s major things – changes in the orbit, rising greenhouse gas concentrations, large volcanic eruptions – that have the tangible effects on the climate.

      I’m not saying that we shouldn’t deal with models with a healthy skepticism and uncertainty. But I certainly believe that there are many misconceptions floating around about climate models, and a lot of the skepticism towards them is drawn from those misconceptions. Models are an incredibly important and useful tool, and we’ll continue developing them and refining them in the years to come.

      • Ron Cram
        Posted Jan 13, 2009 at 8:12 PM | Permalink

        Re: counters (#28),

        Exactly what is the difference between an ensemble run and an average? And isn’t the point of a spaghetti graph to get the eye to do a visual averaging?

        Your claim about cloud feedbacks being well understood and well represented in the models is a view disputed by a great many people. Have you read this blog post by Roy Spencer? He believes observations show cloud feedbacks are negative. I would love to read your response to Spencer’s challenge.

        Regarding the climate not being “dominated by the noise” for the last 30 years, Spencer will tell you that is because the PDO was in the warm phase for the last thirty years. But the PDO went negative in late 2007 and we had a cold winter then and a cold winter now. By the looks of things we will have 30 more years of cold winters.

        I believe models can help us learn about the climate. The main value is in explaining to us what we do not understand properly. I do not expect the models will ever have predictive value. But I am glad you are here and hope you continue to post.

        • Ron Cram
          Posted Jan 13, 2009 at 8:17 PM | Permalink

          Re: Ron Cram (#31),

          I notice I did not phrase my comment on “dominated by the noise” well. I think people will understand it, but the point is the “noise” lasted for 30 years so it didn’t look like noise. But now that the PDO has gone to its cool phase, it will begin to show up as noise and we will learn more. Alternatively, we could go back and realize the PDO was in the cool phase from 1945 to 1975 and that was the biggest reason for the Earth cooling. The 2007 Chylek paper makes it pretty clear that aerosols do not have as much of a cooling effect as once thought.

        • Ron Cram
          Posted Jan 13, 2009 at 8:21 PM | Permalink

          Re: Ron Cram (#32),

          Geez. I must be tired. Let me have one more go. Of course, it is possible the PDO is not noise at all but a driver of climate (as in the view of Roy Spencer). I do not have an opinion of whether it is a driver or not, but the fact it plays a HUGE role in the climate has been ignored by the climate models far too long.

      • Posted Jan 13, 2009 at 10:51 PM | Permalink

        Re: counters (#28),

        With respect to solar behavior, the sun really doesn’t vary that much in output. Solar cycles (what we observe in relation to sunspots and whatnot) are all modeled. But they’re really not that big of an input; orbital cycles are far more important. The sun’s output is relatively constant on a multi-century scale ( it’s not going to change in such a dramatic way that it’s going to have a significant impact on the climate, at least compared to forcings from greenhouse gases, land-use changes, or volcanic eruptions).

        If there is an integrator in the system even small long term changes can have large output effects.

        Some people think the ocean is that integrator in the Earth climate system. Now there is some evidence for this. The CO2 lag in ice cores for instance. Air temps go up and it takes a while for the oceans to respond. Classic integrator behavior.
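The integrator argument can be sketched numerically. This is a toy first-order (leaky) integrator with illustrative numbers only, not a calibrated ocean model: a small but sustained forcing eventually produces a response as large as the forcing itself, and the time constant only sets how long that takes.

```python
# Toy leaky integrator (illustrative numbers, NOT a calibrated ocean
# model): y' = (forcing - y) / tau. A small but sustained forcing
# integrates up to a full-sized response; tau only controls how long
# the build-up takes.

def leaky_integrator(forcing, tau, dt, steps):
    """Forward-Euler integration of y' = (forcing - y)/tau from y=0."""
    y = 0.0
    for _ in range(steps):
        y += dt * (forcing - y) / tau
    return y

small_forcing = 0.1          # a "small long term change" in the input
tau = 100.0                  # slow integrator (e.g. a deep reservoir)

early = leaky_integrator(small_forcing, tau, dt=1.0, steps=10)
late = leaky_integrator(small_forcing, tau, dt=1.0, steps=2000)

assert early < 0.02                         # short run: barely responds
assert abs(late - small_forcing) < 0.001    # long run: full response
```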

        However – because the system is chaotic – it has strange attractors (preferred states within limits) so that a process might work one way under certain initial conditions and another way under different initial conditions. Patterns form. We see this in weather. The surest guide to weather tomorrow is: what is the weather today. Stresses build up in the system and you don’t see gradual change: you see jumps to a new pattern.

Well this is all very general. Where it gets tricky re: modeling is: what is the time constant of the integrator? Then there is tricky tricky: there are multiple time constants. We see this sort of thing in real electronic components, especially capacitors. Keep a fixed voltage on one long enough and the capacitor can’t be discharged by shorting it out for a second. For very high voltages it can be dangerous. For an electronic integrator it means that the behavior is not ideal.

        Nature is full of all these second and third order (and higher) gotchas.

        So when you ask is the model a good representation of reality all I can answer is: it depends. How close is good enough? If you are interested in trends: simple models can be rather good. If you want to get closer than 1% then a lot more things must be taken into account at higher precision than a “if I do this where will that head?” model.

        Now when it comes to electronics I can separate the components from the circuit. Make the appropriate measurements (dielectric absorption can be measured – temperature dependence can be measured etc.) so the models can be pretty good. With climate you can’t measure a component except as it interacts with other components so you are never completely sure if you “know” a component.

        Is it good enough? Well there are some who say yes and some who would say otherwise.

  20. Andrew
    Posted Jan 13, 2009 at 7:24 PM | Permalink

    Jeff Alberts,

    Turn signals not working and fenders falling off also present problems that could wreck the jalopy and then it wouldn’t work. Actually, I like Lucia’s analogies. They are usually “close enough.” :wink:

    Andrew

  21. Alan S. Blue
    Posted Jan 13, 2009 at 7:33 PM | Permalink

    How about our model of electrons in an atom? You’ve got your choices between wave model, particle model, wavicle, just-use-the-Hamiltonian, whatever.

    Each of these models was designed to get around some limitation of a previous model. Even the best model sucks. (If only because it requires excessive math!)

    You can never say “The electrons is right there!” And yet, you have a good enough idea of how the electron might react to model chemical reactions, light absorption, light emission, magnetic effects, temperature effects….

  22. Andrew
    Posted Jan 13, 2009 at 7:58 PM | Permalink

Just a comment and questions on counters’ post – if climate models are being updated to improve them, then the climate models of yesteryear look that much more erroneous with each passing year. At what point can we say that climate models have met a standard of credibility when currently they only get “better”? At what point does a person say these models are a pile of dung?

    What quantifiable threshold are we aiming for here? Does anyone know?

    Andrew

    • counters
      Posted Jan 13, 2009 at 8:37 PM | Permalink

      Re: Andrew (#29),

Well, they will always only be “getting better.” When a generation of models comes along that completely debunks the previous generations, or a major flaw is discovered in the physics behind the model that completely changes the output, then I think we can say that everything before that point is fatally flawed. The current models aren’t doing that; they’re continuing to assert the same general pattern we’ve seen before. As a general question to those of you who are especially critical of the models – why do you suppose that the models will one day be shown to be fatally flawed? You all are aware that we do indeed study whether the models can reproduce the trends recorded over the instrument record to verify that they have some grounding in reality, right? If not, please let me know so I can try to fish up some literature explaining how this works.

      Re: Ron Cram (#31),
      An ensemble is a collection of a large number of runs. Yes, one can do a “visual averaging” with spaghetti plots, but the point isn’t to actually reduce things down to a trend. It’s to show that the evolution of the model is not dependent on subtle changes to the initial conditions. Assuming we’re studying the effect of increasing CO2, it shouldn’t matter whether we start out with the temperature at New York City being 25 C or 15 C. Spaghetti plots also illustrate the variability in our results; in a way, they represent the envelope of how we expect the climate to trend based on our experiment. When observations start deviating significantly from that envelope, something’s amiss.

      With regards to Dr. Spencer, I don’t want to sound harsh, but you’re always going to be able to point out a Dr. Spencer who will disagree. Dr. Spencer alludes to the fact that he’s publishing his work; in that article you linked, he cites a paper published here. I think Dr. Spencer is presenting a strawman, though. Is our understanding of certain feedbacks insufficient? Of course. Dr. Spencer suggests that we should continue research into the deficiencies of the models, but such research has never stopped – in fact, its pace has increased in the past few years. Without beating the dead horse of the consensus, it’s still important to recognize a simple fact: Dr. Spencer is not the only person who studies this issue. He may see things in a different light, but that doesn’t really say anything significant in itself.

      I don’t have the time to go through Dr. Spencer’s paper and elaborate on its points, but I can say that he’s not upsetting the status quo. He’s not even overthrowing any major aspect of climate modeling. At best, he’s demonstrating that the previous generation of climate models needs work – particularly in its parameterizations of clouds. That work has been and is being done, and we’ll see what Dr. Spencer brings to the table in a year or two after we consider how the models have evolved. At that point we can re-analyze and see what further work needs to be done.

      I’m curious about something: when you say that you do not expect the models to have predictive value, what exactly do you mean? Could you elaborate on this point a little bit?

  23. Paul Penrose
    Posted Jan 13, 2009 at 8:08 PM | Permalink

    If only modeling the climate was as easy as modeling the movements of electrons around a single atom.

  24. Jeff Alberts
    Posted Jan 13, 2009 at 8:24 PM | Permalink

    Thanks again, counters.

    Some models even go so far as to embedding weather models inside climate model grid-boxes to help rectify this problem.

    This doesn’t give me a warm and fuzzy. Weather modeling doesn’t seem to work, at all. That or they’re not letting the local weather guy know the results, because they sure get it wrong more often than not.

  25. Steve McIntyre
    Posted Jan 13, 2009 at 8:36 PM | Permalink

    I really don’t want to have a lot of generalized discussion of models in general. Everyone is ignoring my point about the benefits of dealing with specific issues – if you want to discuss viscous heat dissipation in Model E, then please do so – but it would be better to discuss it with Lucia, who knows about it.

  26. counters
    Posted Jan 13, 2009 at 8:38 PM | Permalink

    I just saw Steve’s comment, so I guess we’ll pick up the discussion some other time.

  27. Posted Jan 13, 2009 at 11:32 PM | Permalink

    You all are aware that we do indeed study whether the models can reproduce the trends recorded over the instrument record to verify that they have some grounding in reality, right?

    Steve is rather on point here. Without isolating out the individual components (to the best of our ability) the fact that a model can follow the past is no proof that it is correct. Suppose the ocean integrator time constant is too low but the cloud feedback term is too high (in the appropriate direction). One tends to compensate for the other. That does not make the model correct nor give you confidence that it will do well in predicting the future.

    Assume there is some important but currently discounted term (cosmic rays, solar magnetism, etc.). The model can be fixed (well, we measured x and we think y is that), but if we are not taking proper account of the effect of z, then y will be wrong and the model will diverge from reality over model time (say 100 years in the future) as z changes in ways that do not correlate with x or y.
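    Both of these failure modes – compensating errors that fit the past, and a hidden term that behaves differently later – can be sketched in a toy example (the parameter names here are hypothetical, nothing to do with any real model’s internals):

```python
def toy_model(forcings, sensitivity, feedback):
    """Toy response: temperature change = sensitivity * forcing * (1 + feedback)."""
    return [sensitivity * f * (1 + feedback) for f in forcings]

past_forcing = [0.1 * year for year in range(10)]

# Model A: low sensitivity, high feedback. Model B: the reverse.
# Their hindcasts are identical because the products match.
a_past = toy_model(past_forcing, sensitivity=0.5, feedback=1.0)
b_past = toy_model(past_forcing, sensitivity=1.0, feedback=0.0)
assert a_past == b_past  # both "fit the past" perfectly

# Now suppose the feedback behaves differently in the future (say it
# saturates): the matching hindcast guaranteed nothing about the forecast.
future_forcing = [2.0]
a_future = toy_model(future_forcing, sensitivity=0.5, feedback=0.2)
b_future = toy_model(future_forcing, sensitivity=1.0, feedback=0.0)
```

    Two models with compensating parameter errors reproduce the same record, yet their projections diverge as soon as the compensation breaks down.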

    Will it diverge enough to matter? Well there is a difference of opinion.

    I will say again: Steve is on the right track. We have to measure the components in order to model the circuit correctly. And to properly model the components we have to know EVERYTHING that has a significant effect on the measurement, i.e. the usual accounting for error sources. Once you have error bands you can Monte Carlo the climate model runs and see how consistent they are.

    But you have to be brutally honest to do it right. All the resistances (dissipaters), integrators, inertias, and variable inputs have to be known to pretty good accuracy to get a reasonably good answer. And since it is a relatively unconstrained fluid flow problem on top of everything else, your grid boxes and time increments need to be small enough that the chunkiness of the model (and all models of the sort are chunky) does not cause too big a divergence from reality. The chunks add their own chaotic behavior on top of that of the modeled system. So are the time and space chunks small enough? One way to find out is to change their size and see how much the results change. And if you don’t have the computer time to make them smaller, making them bigger will give an indication.
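    The chunk-size check described above – rerun with smaller chunks and see how much the answer moves – looks like this on a trivially simple stand-in problem (forward Euler on a relaxation equation; purely illustrative, not a climate model):

```python
def integrate(f, y0, t_end, dt):
    """Forward-Euler integration of dy/dt = f(y) with fixed step dt.
    Stands in for 'the model at one chunk size'."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * f(y)
        t += dt
    return y

# A trivial stand-in problem: relaxation toward equilibrium at y = 1.
f = lambda y: -(y - 1.0)

# Same problem, successively halved time chunks.
results = [integrate(f, y0=0.0, t_end=5.0, dt=dt) for dt in (0.5, 0.25, 0.125)]

# If the answer stops moving as the chunks shrink, the discretization
# is probably adequate; if it keeps moving, the chunks are too big.
diffs = [abs(a - b) for a, b in zip(results, results[1:])]
```

    Here the differences between successive refinements shrink as dt is halved, which is the convergence signature one would want to see from a model run at different chunk sizes.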

    Well there are some who are happy with the current chunk size and others who are less so.

    Now I’m not a deep student of the climate model literature, but I have never seen model runs done with the same parameters and different chunk sizes to test the matter. Perhaps someone has a link?

  28. Posted Jan 13, 2009 at 11:54 PM | Permalink

    One thing bothers me about the PDO. Why hasn’t its effect been subtracted out from the past yet, so that the other parameter values can be determined more closely? After all, the PDO (and other ocean oscillations) have been known for about 10 years. And it is my understanding that there are records of at least that (PDO) element going back to 1500.

    The closest I have seen is the head of the IPCC saying that the PDO is probably significant. How significant? There are no numbers attached to the statement.

  29. Nick Stokes
    Posted Jan 14, 2009 at 1:19 AM | Permalink

    I think the search for viscous dissipation is misplaced; it isn’t like the viscous dissipation you might calculate for a laminar flow. The Re in the atmosphere is orders of magnitude too high for that. What you have is an eddy diffusivity for momentum. The diffusion is mostly vertical. The effect of this is added in as part of the general force in the momentum equation (see eg this GISS 1974 paper Sec 4) and there is a resulting kinetic energy loss. This is then added in as a temperature gain in lines 1390-1403 in ATMDYN.f of the GISS Model E code.
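    The bookkeeping Nick describes – momentum is diffused vertically by an eddy diffusivity, and the kinetic energy lost in the process reappears as a temperature gain – can be sketched in a few lines (illustrative values only; this is a sketch of the energy accounting, not the actual scheme in ATMDYN.f):

```python
# Sketch of the KE-to-heat bookkeeping, with made-up numbers.
CP = 1004.0  # specific heat of air at constant pressure, J/(kg K)

def diffuse_momentum(u, k=0.2):
    """One explicit vertical-diffusion step on a wind profile u (m/s),
    with the top and bottom layers held fixed."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + k * (u[i-1] - 2*u[i] + u[i+1])
    return new

u = [0.0, 5.0, 15.0, 30.0, 30.0]     # wind speed by vertical layer
u_new = diffuse_momentum(u)

# Kinetic energy per unit mass (J/kg), summed over the column.
ke_lost = sum(0.5*a*a for a in u) - sum(0.5*b*b for b in u_new)

# The lost kinetic energy reappears as a temperature increment,
# so total energy is conserved.
dT = ke_lost / CP   # kelvin, lumped over the column in this toy version
```

    Smoothing the shear removes kinetic energy; crediting that energy back as heat is the step the Model E code lines implement.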

  30. Nylo
    Posted Jan 14, 2009 at 1:32 AM | Permalink

    From my unscientific and uninformed point of view, guided only by logic, winds everywhere are caused by hot air rising somewhere. I find it a bit ridiculous to focus on how much heat is produced by the friction of the winds without first determining precisely how much heat the rising of hot air into the upper layers of the atmosphere helps to dissipate, which I would not be surprised to discover is an order of magnitude higher. Do models correctly represent this? Or is the lower troposphere only supposed to cool by emitting long-wave radiation?

  31. Nick Stokes
    Posted Jan 14, 2009 at 1:54 AM | Permalink

    Nylo #43 Yes, thermal convection is basic, and has always been computed. In that GISS 1974 article I cited, the global picture is at Fig 30.

  32. VG
    Posted Jan 14, 2009 at 3:15 AM | Permalink

    re #36 “I don’t have the time to go through Dr. Spencer’s paper and elaborate on its points” = the answer

  33. anna v
    Posted Jan 14, 2009 at 7:22 AM | Permalink

    A basic question: Are the models cycled through the year with realistic sunshine etc variations over the globe, or is an “average” earth, atmosphere, ocean response used over an “average” sphere?

  34. Posted Jan 14, 2009 at 8:10 AM | Permalink

    AnnaV-

    A basic question: Are the models cycled through the year with realistic sunshine etc variations over the globe, or is an “average” earth, atmosphere, ocean response used over an “average” sphere?

    The current generation of GCMs used for the AR4 cycles through the whole year. Other, more simplified models exist and might be used for other reasons. But ModelE, for example, cycles through the whole year. In this post about GMST in non-anomaly space you’ll see that the models do predict that the average surface temperature varies throughout the year. This compares two models.

    When evaluating a model’s treatment of a phenomenon, it is important to ask questions like the one you asked. In some very, very simple models that left out lots of stuff, people like Dan and Willis wouldn’t be finding code lines discussing viscous dissipation, and we wouldn’t be discussing it because so many other things were so approximate that we would be sure any approximation related to viscous dissipation would be irrelevant. (If they were restricted to that type of model, modelers probably wouldn’t try to create projections resulting in curves like Figure 10.4 in the WG1 of the AR4.)

    So, the question with the viscosity isn’t – as someone suggested above – does the way they treat this term make the model totally and utterly useless? The questions are: a) How the heck do they actually treat the dissipation? And b) How much uncertainty does the treatment introduce into simulations and, specifically, projections?

    The models are certainly useful in some way. Are they jalopy-like? Are they Hyundais in their first year? Are they brand-spanking-new Ferraris? Are they Ferrari bodies with some serious engine trouble because component “X” does something weird? Jalopies can be useful. Ferraris can break down.

    Our conversation at my blog is about “component X”= viscous dissipation.

  35. Andrew
    Posted Jan 14, 2009 at 8:31 AM | Permalink

    “The models are certainly useful in some way.”

    I apologize for being a jerk, but this is a memorable quote.

    Andrew ♫


  37. Paul Penrose
    Posted Jan 14, 2009 at 7:36 PM | Permalink

    Mark,
    I think that Jaye was referring to a pseudorandom number generator (PRNG), not a true random number generator (RNG). If a good PRNG is seeded with a nonce then it can be used as an RNG in many cases, depending on the degree of entropy and the exact usage (how many values are pulled out before it is reseeded, etc.)

  38. anna v
    Posted Jan 14, 2009 at 11:44 PM | Permalink

    Paul Penrose:

    I think that Jaye was referring to a pseudorandom number generator (PRNG),

    Quite right. For most problems the cycles are so large that it is irrelevant to go to an RNG. Actually I would be curious to learn which problems really need an RNG.

  39. Mark T
    Posted Jan 15, 2009 at 12:09 AM | Permalink

    Immaterial, Paul. My statement was clear: if a random process were included in the models, which by necessity would be a PRNG, you could expect changes in the outcomes. If the same seed were chosen for every run, regardless of what that seed was, the state of the PRNG would be deterministic and unchanging, and therefore not random (or pseudo-random, as the case may be). It’s not a “distribution” if it produces the same number every time; it’s a single value.

    Mark
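    Mark’s point is easy to demonstrate: a seeded PRNG is a deterministic state machine, so fixing the seed fixes the entire “random” trajectory (toy example, not from any model):

```python
import random

def noisy_run(seed, steps=10):
    """A toy stochastic process driven by a seeded PRNG."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += rng.gauss(0, 1)
    return x

# Same seed -> identical trajectory: the PRNG is a deterministic
# state machine, so the "randomness" is fully reproducible.
assert noisy_run(42) == noisy_run(42)

# Different seeds -> (almost surely) different trajectories.
runs = {noisy_run(s) for s in range(5)}
```

    Run-to-run variation only appears if the seed (or something else in the inputs) varies.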

  40. kim
    Posted Jan 15, 2009 at 6:38 AM | Permalink

    People, yes, but how can a model have a vicious disposition?
    =======================================

  41. Ryan O
    Posted Jan 15, 2009 at 2:13 PM | Permalink

    Just to clear something up, the model runs are different each time because many of the underlying physical processes are nonlinear. Dan Hughes has some additional explanation for this on both his blog and Lucia’s. No need for PRN generators or anything like that to generate different outcomes. The different outcomes occur naturally, even when all the input parameters are identical between runs.
    .
    As a result, the models do not make deterministic predictions. They describe a range of what may happen assuming the underlying physics is right. And because the real climate is a chaotic system, extremely small changes in the underlying physics can have large impacts on the final result.
    .
    I will quote myself from Lucia’s blog (the context is a physics class where we modeled mechanical systems using electric circuits):

    The projects were meant to demonstrate the uncertainty principle and the problems with using simple physics (ideal RC circuits to represent oscillation, for example) to simulate complex systems by disregarding “small” contributors. We never abstracted the results to chaos theory or the particulars of numerical computer simulations because those were beyond the scope of the course.
    .
    The main points, though, were:
    .
    1. Many common-sense simplifications that work in one case may not work in other cases.
    .
    2. Sometimes even large errors don’t affect the simulation. Other times, incredibly small errors can. It is difficult (if not impossible) to tell this ahead of time.
    .
    3. Running simulations multiple times may not improve the predictive power of the simulation, especially in systems where oscillatory behavior is important, and this is difficult (if not impossible) to tell ahead of time.
    .
    4. The source of the divergence cannot always be found.

    .
    Just because an error source is small doesn’t mean its contribution is small. Likewise, just because it’s big doesn’t mean its contribution is big.
    .
    Now, as to the point about the models fitting the observational record as a check: this is not entirely true. Some of the parameters for the models (like aerosol forcings) are unknown. So in order to get a range for those parameters, the models are run and those unknown parameters are adjusted such that the model can be made to fit the observational record. In the case of aerosols, this yields a “likely” range of about 0.5 – 1.7 W/m2 (the numbers the IPCC used can be found in Chapter 9 of the WG1 report – but I’m too lazy to look them up). Needless to say, this is a fairly large range when you consider the effect of GHG forcing from 1890 – present is an average of 1.2 W/m2.
    .
    Are the models useful? Absolutely. Can you legitimately put a confidence level to them that they will match the behavior of the physical earth? Absolutely not. All you can say is: given this range of input parameters, this number of runs, and this timeframe, the models produce results that have X distribution with Y standard deviation.
    .
    The degree to which those results will match future observations is unknown – at least until the future arrives.
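    Ryan’s point 2 – that incredibly small errors can matter – is the classic sensitive-dependence result, easy to reproduce with the chaotic logistic map (a toy system, obviously not a climate model):

```python
def logistic_trajectory(x0, r=3.9, steps=70):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)   # perturbed in the 9th decimal

# For the first few steps the runs are indistinguishable...
early_gap = abs(a[3] - b[3])          # still on the order of 1e-8

# ...but the tiny difference grows roughly exponentially, and the two
# trajectories eventually bear no resemblance to each other.
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

    A perturbation in the ninth decimal place ends up producing order-one differences, which is why ensembles of runs, rather than single trajectories, are what get analyzed.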

  42. Ryan O
    Posted Jan 15, 2009 at 2:32 PM | Permalink

    In case you were wondering how the models can end up being non-deterministic, a description of how this is done is here:
    .
    http://eprints.soton.ac.uk/12436/01/Preprint.pdf (page 4)
    .
    I believe the first method mentioned – spinning up the models – is the one most commonly used.

    • Mark T.
      Posted Jan 15, 2009 at 4:31 PM | Permalink

      Re: Ryan O (#58), Um…

      The climate models we are using are deterministic. If we run them with the same parameters (here we
      include external inputs with the internal parameters), they will give the same answers, give or take some
      numerical “noise”. To make probabilistic predictions, we need to specify probability distributions for all
      these parameters and specify a set of initial conditions for the state variables with known uncertainty.

      Which is pretty much what I was saying… they are deterministic other than what they call numerical noise (which I can only assume means rounding issues – there are random rounding algorithms – which would present itself as a random process).

      Non-linearity by itself will not produce differing results unless there is some means for picking one possible outcome over another, which again leads to some form of randomness.

      Mark

  43. Ryan O
    Posted Jan 15, 2009 at 4:53 PM | Permalink

    Mark: I do not believe the source of the numerical noise is deliberately programmed into the model. It’s a function of the computer, not the model, and is inherent to numerical simulations. Sometimes the result of an operation is 0.199999999999999994743, sometimes it’s .2. I don’t know why this happens, but I have seen it happen when number-crunching engineering stuff . . . and I know that I did not use any random rounding algorithms.
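    No random rounding algorithm is needed to explain results like that: ordinary IEEE-754 binary floating point cannot represent most decimal fractions exactly, and addition is not even associative, so reordering a sum (as a parallelized model might between runs) perturbs the low bits:

```python
# Binary floating point cannot represent most decimal fractions
# exactly, so results like 0.19999999999999998 instead of 0.2 fall
# out of IEEE-754 arithmetic itself.
b = 0.3 - 0.1
print(b)            # 0.19999999999999998, not 0.2

# Sometimes the representation errors happen to cancel:
a = 0.1 + 0.1
print(a == 0.2)     # True, even though 0.1 itself is inexact

# Addition is not associative, so changing the evaluation order
# (as a parallelized sum might between runs) perturbs the low bits:
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False: 0.6000000000000001 vs 0.6
```

    This is the mundane source of the “numerical noise” the paper mentions; it is a property of the arithmetic, not something programmed into the model.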

  44. Mark T.
    Posted Jan 15, 2009 at 5:04 PM | Permalink

    Immaterial, Ryan. The numerical noise, as they put it, is certainly a random process whether intentionally designed in or not, though random rounding noise is hardly what we were referring to above. The point was whether or not identical inputs would result in identical outputs, and that paper clearly states that YES, they will. The whole thing about spinning up is unrelated as well (that’s where they get into discussions of input parameter distributions, which is moot when considering fixed inputs).

    I don’t know why this happens, but I have seen it happen when number-crunching engineering stuff . . . and I know that I did not use any random rounding algorithms.

    You don’t have to, processors/software often do this on their own.

    Mark

  45. Ryan O
    Posted Jan 15, 2009 at 8:13 PM | Permalink

    Mark,
    Judging by your answer I have a misconception. My impression was that, for the most part, climate model runs had three distinct phases: a spin-up period to allow the initial conditions to randomize (so that you don’t have to define a probability distribution for each state variable), a calibration period to make sure that the resulting state is stable, and an experimental period where you perturb the system and watch what happens.

  46. Mark T
    Posted Jan 15, 2009 at 11:25 PM | Permalink

    I don’t think you were really misjudging climate models per se, I think you were misjudging what was being asked (waaaay back with Jeff Alberts in post 13) and what I meant with my answers. Jeff simply wanted to know “do they produce the same outputs if the inputs are held constant.” The link you provided (thank you for that, btw) answered the question: yes. :)

    The models do include randomization, but that’s apparently on the inputs, which is really what I had guessed, though was moot since the question assumed fixed inputs. Now that we’ve really gone off the deep end of off-topic… hehe.

    Mark

  47. Lewis
    Posted Jan 16, 2009 at 4:52 AM | Permalink

    As I said on WUWT, I think if medals were going around, it would be (yes, don’t believe in it) equal first. I think someone previously intimated that you were in ‘opposition’, and that such opposition (you’ll understand this, you being Canadian, I UK) is always gracious – ‘pretending’ to be open was implied – and therefore triumph is brutal. Wrong on all counts. This is nonsense, the kind of ‘nonspeak’ that academics get involved with. That is to say, we are as we are. I’m grateful for your gracious sanity.

  48. Lewis Deane
    Posted Jan 16, 2009 at 5:11 AM | Permalink

    Steve, there’s something wrong with your comment protocol. First, I’m using a phone, an N73, to comment. So, when I tried, I got an error page, the details of which I can’t remember because I immediately flipped back and then resubmitted, getting the legend, to paraphrase, ‘I think you’ve already said that’. Am I doing something wrong?

  49. Ryan O
    Posted Jan 16, 2009 at 7:05 AM | Permalink

    @ Mark: Yes. I managed to drag us completely off topic by splitting hairs. Haha! :)

  50. Mark T.
    Posted Jan 16, 2009 at 10:11 AM | Permalink

    Can’t say I’ve never been down that road on my own doing either, hehe.

    Re: Lewis Deane (#65), If it says “you’ve already said that” it means you had a comment accepted with the same text previously. Are you sure the first “error” wasn’t a kinda-captcha page, which then takes you to what looks like an error page, but it is actually a “comment accepted” page?

    Mark

    • Lewis Deane
      Posted Jan 25, 2009 at 1:44 AM | Permalink

      Re: Mark T. (#67), (11 days later) Yes. You were right, Mark T. I think I was a bit worse for wear when I made that comment, hence the incoherence, though what I said on WUWT seemed articulate enough. Hence the 11-day delay, but for the wish not to see what I might have written! However, I do believe, at least with an N73 and using Opera Mini, there is some kind of misalliance, for even though all the required fields are filled I am still getting an error message saying otherwise, and will probably now be treated as spam because I automatically resent it!
