Gerry Browning: In Memory of Professor Heinz Kreiss

Gerry Browning writes:

The Correct System of Equations for Climate and Weather Models

The system of equations numerically approximated by both weather and climate models is called the hydrostatic system. Using a scale analysis for mid-latitude, large-scale motions in the atmosphere (motions with a horizontal length scale of 1000 km and a time scale of a day), Charney (1948) showed that hydrostatic balance, i.e., balance between the vertical pressure gradient and the gravitational force, is satisfied to a high degree of accuracy by these motions. Because the fine balance between these terms was difficult to calculate numerically, and in order to remove the fast vertically propagating sound waves and so allow numerical integration with a larger time step, he introduced the hydrostatic system, which assumes exact balance between the vertical pressure gradient and the gravitational force. This system leads to a columnar (function-of-altitude) equation for the vertical velocity called Richardson’s equation.
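As a concrete illustration of the balance involved (a sketch of my own, not taken from the cited papers), integrating the hydrostatic relation dp/dz = -ρg together with the ideal gas law for an isothermal column reproduces the familiar exponential decay of pressure with height; the temperature and surface pressure below are assumed, textbook values:

```python
import numpy as np

# Illustrative sketch only: exact hydrostatic balance dp/dz = -rho*g,
# combined with the ideal gas law p = rho*R*T for an isothermal column,
# gives p(z) = p0 * exp(-z/H) with scale height H = R*T/g.
g, R, T = 9.81, 287.0, 250.0       # gravity (m/s^2), gas constant (J/kg/K), assumed temperature (K)
H = R * T / g                      # scale height (m), roughly 7.3 km here

z = np.linspace(0.0, 20e3, 201)    # altitude grid, 0 to 20 km
dz = z[1] - z[0]
p_surface = 1.0e5                  # assumed surface pressure (Pa)

# Simple forward-Euler integration of dp/dz = -p*g/(R*T)
p = np.empty_like(z)
p[0] = p_surface
for k in range(1, len(z)):
    p[k] = p[k - 1] - p[k - 1] * g / (R * T) * dz

# Compare with the analytic solution
p_exact = p_surface * np.exp(-z / H)
print(np.max(np.abs(p - p_exact) / p_exact))  # small relative error
```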

A scale analysis of the equations of atmospheric motion assumes that the motion will retain those characteristics for the period of time indicated by the choice of the time scale (Browning and Kreiss, 1986). This means that the initial data must be smooth (have spatial variations on the order of 1000 km) and must lead to time derivatives on the order of a day. To satisfy the latter constraint, the initial data must satisfy the elliptic constraints obtained by requiring that a number of time derivatives be of the order of a day. If all of these conditions are satisfied, then the solution is guaranteed to evolve smoothly, i.e., on the spatial and time scales used in the scale analysis. This mathematical theory for hyperbolic systems is called “The Bounded Derivative Theory” (BDT) and was introduced by Professor Kreiss (Kreiss, 1979, 1980).
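The chosen scales are mutually consistent, which a line of arithmetic confirms: with a horizontal length scale of 1000 km and a typical mid-latitude wind speed of about 10 m/s (an assumed, textbook value, not a number from the post), the advective time scale L/U is indeed roughly a day:

```python
L = 1.0e6               # horizontal length scale: 1000 km, in metres
U = 10.0                # assumed typical large-scale wind speed, m/s
T_adv = L / U           # advective time scale, seconds
print(T_adv / 86400.0)  # ~1.16 days, consistent with the "time scale of a day"
```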

Instead of assuming exact hydrostatic balance (which leads to a number of mathematical problems discussed below), Browning and Kreiss (1986) introduced the idea of slowing down the vertically propagating waves rather than removing them completely, thus retaining the desirable mathematical property of hyperbolicity of the unmodified system. This modification was proved mathematically to accurately describe the large-scale motions of interest and, subsequently, also smaller scales of motion in the mid-latitudes (Browning and Kreiss, 2002). In that manuscript, the correct elliptic constraints needed to ensure smoothly evolving solutions are derived. In particular, the elliptic equation for the vertical velocity is three dimensional, i.e., not columnar, and the horizontal divergence must be derived from the vertical velocity in order to ensure a smoothly evolving solution.

It is now possible to see why the hydrostatic system is not the correct reduced system (the system that correctly describes the smoothly evolving solution to a first degree of approximation). The columnar vertical velocity equation (Richardson’s equation) leads to columnar heating that is not spatially smooth. This is called rough forcing, and it leads to the physically unrealistic generation of large amounts of energy in the highest wave numbers of a model (Browning and Kreiss, 1994; Page, Fillion, and Zwack, 2007). This energy requires large amounts of nonphysical numerical dissipation to keep the model from becoming unstable, i.e., blowing up. We also mention that the boundary layer interacts very differently with a three-dimensional elliptic equation for the vertical velocity than with a columnar equation (Gravel, Browning, and Kreiss).
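The spectral consequence of rough forcing can be seen in a toy computation (my own sketch, not taken from any of the cited papers): a discontinuous, column-like forcing has Fourier amplitudes that decay only algebraically, so a fixed fraction of its amplitude always sits in the highest wavenumbers a model resolves, whereas a smooth forcing of comparable width has essentially nothing there:

```python
import numpy as np

# Toy illustration: a spatially rough forcing -- here a discontinuous
# "column" -- has spectral amplitudes decaying only like 1/k, while a
# smooth forcing decays exponentially.  Rough forcing therefore keeps
# injecting energy into the highest resolved wavenumbers, which a model
# must then remove with artificial dissipation.
N = 256
x = np.linspace(0.0, 1.0, N, endpoint=False)

smooth = np.exp(-((x - 0.5) / 0.05) ** 2)            # smooth Gaussian bump
rough = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)   # discontinuous column

S = np.abs(np.fft.rfft(smooth))
R = np.abs(np.fft.rfft(rough))

# Fraction of total spectral amplitude in the top quarter of resolved
# wavenumbers -- the part a model has to damp artificially.
hi = slice(3 * len(S) // 4, None)
smooth_frac = S[hi].sum() / S.sum()
rough_frac = R[hi].sum() / R.sum()
print(smooth_frac)   # effectively zero for the smooth forcing
print(rough_frac)    # several percent: energy piles up at high wavenumbers
```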

References:
Browning, G. L., and H.-O. Kreiss, 1986: Scaling and computation of smooth atmospheric motions. Tellus, 38A, 295–313.
——, and ——, 1994: The impact of rough forcing on systems with multiple time scales. J. Atmos. Sci., 51, 369–383.
——, and ——, 2002: Multiscale bounded derivative initialization for an arbitrary domain. J. Atmos. Sci., 59, 1680–1696.
Charney, J. G., 1948: On the scale of atmospheric motions. Geofys. Publ., 17, 1–17.
Kreiss, H.-O., 1979: Problems with different time scales for ordinary differential equations. SIAM J. Num. Anal., 16, 980–998.
——, 1980: Problems with different time scales for partial differential equations. Commun. Pure Appl. Math., 33, 399–440.
Gravel, S., G. L. Browning, and H.-O. Kreiss: The relative contributions of data sources and forcing components to the large-scale forecast accuracy of an operational model. This web site.
Page, C., L. Fillion, and P. Zwack, 2007: Diagnosing summertime mesoscale vertical motion: implications for atmospheric data assimilation. Mon. Wea. Rev., 135, 2076–2094.


73 Comments

  1. Posted Feb 27, 2016 at 10:51 AM | Permalink | Reply

    Steve: Climatologists, particularly those with a background in physics and mathematics, need to get their heads around the notion that climate processes are inherently incomputable from the bottom up, i.e., by modelling. The modelling approach is inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine-grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4
    See Section 1 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    for more discussion.
    This does not mean that useful forecasts can’t be made using the methods discussed in section 2 of the link
    Norman Page

    • Gerald Browning
      Posted Feb 29, 2016 at 10:37 AM | Permalink | Reply

      snip – The accuracy of a numerical method in describing a system of partial differential equations is determined by a Taylor series expansion of the solution of the system.
      Thus the solution must be smooth, i.e., have a number of time and space derivatives. As soon as the forcing does not have spatial derivatives (as is the case with columnar forcing in hydrostatic models), the accuracy of the numerical method is no longer valid. This was the entire point of our article on rough forcing cited in the post. The modelers have completely ignored this point and instead use unphysical dissipation to smooth the solution after introducing the rough forcing.
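Jerry’s point about Taylor series and smoothness is easy to demonstrate numerically. In this sketch (my own illustration, not from the cited papers), a centred difference converges at second order on a smooth function but fails to converge at all at a kink:

```python
import numpy as np

def centred_diff_error(f, dfdx, x0, h):
    """Error of the centred difference (f(x0+h)-f(x0-h))/(2h) against the true slope dfdx."""
    approx = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    return abs(approx - dfdx)

# Smooth case: f = sin(x).  Taylor expansion gives an O(h^2) error,
# so the error drops by roughly 4x each time h is halved.
for h in (0.1, 0.05, 0.025):
    print(h, centred_diff_error(np.sin, np.cos(1.0), 1.0, h))

# Rough case: f = |x| has no derivative at 0; take the one-sided slope 1
# as the target.  The centred difference returns 0 for every h, so the
# error stays at 1 -- no convergence without smoothness.
for h in (0.1, 0.05, 0.025):
    print(h, centred_diff_error(abs, 1.0, 0.0, h))
```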

      Note that in our 2002 paper no dissipation was necessary because we had guaranteed the evolution of a smoothly evolving solution using the Bounded Derivative Theory, so that even a low-accuracy numerical method sufficed to accurately compute the correct solution.

      Jerry

    • Gerald Browning
      Posted Feb 29, 2016 at 11:16 AM | Permalink | Reply

      When we were preparing to develop a numerical model based on the results from the Bounded Derivative Theory, we thought it would be useful to understand what observational data and forcing terms (parameterizations) were important in determining the accuracy of current large scale weather models. Fortunately, Sylvie Gravel from the Canadian Weather Service was visiting me and agreed to run a series of tests on their operational forecast system using the current data assimilation system (periodic insertion of observational data) and numerical forecast model. As the control case she used one of the standard forecasts they used for testing any changes to their system.

      The first test case was to shut off all parameterizations and dissipation. The test case showed that the velocity at the surface increased in an unrealistic manner. So I had Sylvie run a second case where she included the boundary layer parameterization that is used to slow down the increase in the surface velocity (note that this introduces a discontinuity in the system, as a different equation is used at the surface than in the troposphere). The second case resulted in a forecast just as good as the full forecast system. Here we see the “engineering” (or tuning, as we call it) that is not physically accurate but is used to make the solution look better. The parameterization completely destroyed the accuracy of the forecast in a matter of a few days by propagating the error vertically. This was the dominant parameterization, i.e., the remaining parameterizations had little impact on the forecast accuracy.

      We then tried removing different data sets. The only ones that mattered were wind information from radiosondes and aircraft. Satellite data was unnecessary (this result agreed with our mathematical paper on periodic updating).

      Note that the errors between the control and test cases were only computed over the US, where the observational system is relatively dense. Although there are slight differences in the accuracy between different forecast models, in the first few days they are very close (as they must be to justify their existence).

      Jerry

    • Gerald Browning
      Posted May 8, 2016 at 4:14 PM | Permalink | Reply

      This link is in support of the above Memorial.
      Using actual output from the generally acknowledged best global model, i.e., the ECMWF global model, the graphics and text show that the model is using incorrect dynamics. In particular, the model has precluded interaction between the incorrect and unresolved mesoscale features and the large-scale pressure by forcing all features to satisfy the linear balance equation. The small-scale features are separately controlled by unrealistically large dissipation.

      https://drive.google.com/file/d/0B-WyFx7Wk5zLR0RHSG5velgtVkk/view?usp=sharing

      Jerry

  2. rpielke
    Posted Feb 27, 2016 at 11:15 AM | Permalink | Reply

    Hi Gerry, Great post! To add to the discussion with respect to the use of the hydrostatic approximation, these papers of ours might be useful:

    Martin, C.L. and R.A. Pielke, 1983: The adequacy of the hydrostatic assumption in sea breeze modeling over flat terrain. J. Atmos. Sci., 40, 1472-1481. http://pielkeclimatesci.wordpress.com/files/2009/09/r-38.pdf

    Song, J.L., R.A. Pielke, M. Segal, R.W. Arritt, and R. Kessler, 1985: A method to determine non-hydrostatic effects within subdomains in a mesoscale model. J. Atmos. Sci., 42, 2110-2120. http://pielkeclimatesci.wordpress.com/files/2009/09/r-52.pdf

    Hu, Qi, E.R. Reiter, and R.A. Pielke, 1988: Analytic solutions to Long’s model: A comparison of nonhydrostatic and hydrostatic cases. Meteor. Atmos. Phys., 39, 184-196. http://pielkeclimatesci.wordpress.com/files/2009/09/r-93.pdf

    Dalu, G.A., M. Baldi, R.A. Pielke Sr., and G. Leoncini, 2003: Mesoscale nonhydrostatic and hydrostatic pressure gradient forces: Theory and parameterization. J. Atmos. Sci., 60, 2249-2266. http://pielkeclimatesci.wordpress.com/files/2009/10/r-263.pdf

    I also want to point out to the readers that the hydrostatic assumption does not mean the vertical accelerations are identically zero. The assumption just means we replace the vertical equation of motion with the balance between the gravitational force and the vertical pressure gradient force. Obviously, there must be vertical accelerations, otherwise air would never move. :-) The vertical motion is just not calculated from a prognostic equation, but from mass conservation.
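Roger’s last point, that w is diagnosed rather than predicted, can be sketched in a few lines. Assuming incompressible continuity ∂u/∂x + ∂v/∂y + ∂w/∂z = 0 and w = 0 at the ground (my own toy example, with a made-up divergence profile), the vertical velocity is the vertical integral of minus the horizontal divergence:

```python
import numpy as np

# Sketch of the diagnostic: under the hydrostatic/incompressible
# simplification, w is diagnosed by integrating minus the horizontal
# divergence upward from w = 0 at the surface:
#     w(z) = -integral_0^z (du/dx + dv/dy) dz'
z = np.linspace(0.0, 10e3, 101)   # altitude (m)
dz = z[1] - z[0]

# Made-up horizontal divergence profile (s^-1): convergence below 5 km,
# divergence above, loosely mimicking a storm's inflow/outflow structure.
div_h = -1.0e-5 * np.cos(np.pi * z / 10e3)

w = np.zeros_like(z)
w[1:] = -np.cumsum(0.5 * (div_h[1:] + div_h[:-1])) * dz  # trapezoidal integral

print(w[len(z) // 2])   # upward motion peaks where div_h changes sign (~5 km)
```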

    We have also looked at the importance of compressible effects in models (and in the atmosphere) and are finding this to be a more important subject even in terms of mass and energy transfers. Mel Nicholls is leading our effort on this. Here are some of our past papers on this subject:

    Nicholls, M.E. and R.A. Pielke, 1994: Thermal compression waves. I: Total energy transfer. Quart. J. Roy. Meteor. Soc., 120, 305-332. http://pielkeclimatesci.wordpress.com/files/2009/09/r-160.pdf

    Nicholls, M.E. and R.A. Pielke, 1994: Thermal compression waves. II: Mass adjustment and vertical transfer of total energy. Quart. J. Roy. Meteor. Soc., 120, 333-359. http://pielkeclimatesci.wordpress.com/files/2009/09/r-161.pdf

    Pielke, R.A., M.E. Nicholls, and A.J. Bedard, 1993: Using thermal compression waves to assess latent heating from clouds. EOS, 74, 493. http://pielkeclimatesci.files.wordpress.com/2009/10/r-183.pdf

    Nicholls, M.E. and R.A. Pielke Sr., 2000: Thermally-induced compression waves and gravity waves generated by convective storms. J. Atmos. Sci., 57, 3251-3271 http://pielkeclimatesci.wordpress.com/files/2009/10/r-223.pdf

    Schecter, D.A., M.E. Nicholls, J. Persing, A.J. Bedard Jr., and R.A. Pielke Sr., 2008: Infrasound emitted by tornado-like vortices: Basic theory and a numerical comparison to the acoustic radiation of a single-cell thunderstorm. J. Atmos. Sci., 65, 685-713. http://pielkeclimatesci.wordpress.com/files/2009/10/r-327.pdf

    Your comments on this work would be valuable to us.

    Best Regards

    Roger Sr.

    • Posted Feb 27, 2016 at 3:44 PM | Permalink | Reply

      Roger Your last link says “Most importantly, it is shown that simulating tornado infrasound likely requires a spatial resolution that is an order of magnitude finer than the current practical limit (10-m grid spacing) for modeling thunderstorms.”
      This is exactly what I am saying in my first comment above. It applies to climate science and to complex systems in general. For useful forecasts you might consider some type of pattern recognition approach rather than numerical models. Patterns subsume or integrate billions of calculations and are thus inherently more economical.

      • rpielke
        Posted Feb 27, 2016 at 4:55 PM | Permalink | Reply

        Hi Norman

        Pattern recognition, I agree, is a useful forecast approach, particularly on the really fine scales where the needed precision and spatial resolution of observed data become difficult, if not impossible, to obtain. On larger (e.g., synoptic) scales, however, analog methods have been superseded by numerical prediction models.

        Best Regards

        Roger Sr.

    • Gerald Browning
      Posted Feb 28, 2016 at 4:36 PM | Permalink | Reply

      Roger,

      I find it interesting that the majority of your cites are of papers before our 2002 manuscript that settled the issue of the correct equations for both large-scale and mesoscale motions. In the post I only cited the large-scale motions to simplify the presentation.
      The fact that you are trying to bring in your old work to counter hard mathematics is very telling.

      Jerry

      • rpielke
        Posted Feb 28, 2016 at 5:05 PM | Permalink | Reply

        Jerry – you and I agree that with the hydrostatic (and also the anelastic or incompressible) assumptions, the equations are ill posed. So I am not sure what we disagree on???

        NWP models do as well as they do since they are adjusted back to reality using real world observations. I think we agree on that too. Model solutions will drift for the reasons you present, and also inadequate spatial resolution, inaccurate parameterizations, etc. These NWP models are the best we have for weather prediction out to a week or two. If you have an alternative for more skillful predictions, present that evidence.

        Multi-decadal climate models, in contrast, do not have observations to direct them back to reality. They have all of the problems of NWP models plus others (e.g. the need to accurately represent slow response feedbacks, sensitivity to initial conditions for the ocean, etc).

        So what do we disagree on????

        Roger Sr.

        • Gerald Browning
          Posted Feb 28, 2016 at 6:42 PM | Permalink

          Roger,

          The equations in our 2002 manuscript used for proper initialization to obtain the smoothly evolving solutions are well posed, as is the hyperbolic multi-scale system. Note that the initialization equations can be used for forecasting (with the addition of the time-dependent vorticity equation).
          That was the whole point of the Bounded Derivative Theory: to replace an ill posed system with a well posed system that accurately describes the large-scale motions of the atmosphere. That this system also describes mesoscale motions was no fluke, but the genius of Heinz.

          Also note that resolution is not the problem, despite claims by modelers that it is.
          We easily recreated the development of an evolving smooth mesoscale feature with a low-accuracy second-order method.

          Jerry

    • Gerald Browning
      Posted Feb 28, 2016 at 6:23 PM | Permalink | Reply

      snip

      Steve: not in compliance with blog comment policy

      • rpielke
        Posted Feb 28, 2016 at 7:08 PM | Permalink | Reply

        Wow – I was complimenting what you wrote. Seems you have held a grudge since 2002. Remarkable. I am going to move off this thread. This has become a counterproductive discussion with you. Sorry to see that you have gone down this road.

        Steve: I’ve snipped the offending comment as against blog policies.

        • Posted Feb 28, 2016 at 7:13 PM | Permalink

          Roger, if you could, please help me out by responding to me if Gerry said something to offend you. I see no contradiction between what you are saying and Gerry’s point. Models can have skill for some things, and it might still be that they use poor methods and can be dramatically improved. There is a very well developed field in partial differential equations (PDEs), covering both their theory and how to solve them numerically. It’s at least as big as climate science and on the whole pretty rigorous, even though there are some issues with positive-results bias.

          David Young

        • g
          Posted Feb 28, 2016 at 10:38 PM | Permalink

          Roger,

          It will become more apparent how the modelers have fooled the public when I include a copy of the proof of the ill posedness of the hydrostatic equations. It has long been known that the IBVP for the hydrostatic equations is ill posed. But it is even worse than that. In the 2002 paper we show how a numerical model based on the well posed multi-scale model and the correct initialization constraints can be used to produce an evolving mesoscale storm in a limited area.

          And you bet I have a grudge against the meteorologists that did everything in their power to prevent us from getting to the source of the problem and providing a mathematical solution.

          Did you forget that your buddy Cotton tried to have me fired?

          Jerry

        • rpielke
          Posted Feb 28, 2016 at 11:26 PM | Permalink

          dpy6629 – There is no contradiction. Apparently, Jerry had an experience that I know nothing about.

          I have said all I am going to on this thread. Interested readers can read my modelling book to see my perspective on modeling if they have that interest. It is very much an engineering problem as applied to NWP.

        • Posted Feb 29, 2016 at 3:08 PM | Permalink

          Roger, I will just say one more thing about this. “Engineering” solutions, especially in complex computational settings, can sometimes look good after sufficient tuning. That can be very misleading. We have a new paper coming out on this, destroying the fiction about eddy viscosity models that a lot of engineers naively believe. It is a shame that there is not more mutual respect between the more mathematical types, like T. J. R. Hughes or Leszek Demkowicz, and the engineering people. The problem here is that the best people in the field know the limitations of tuning inadequate models. People who run the models often do not.

          Perhaps at some point I will do a post here on it if Steve wants it.

  3. Posted Feb 27, 2016 at 3:10 PM | Permalink | Reply

    Gerry, I am assuming Prof. Kreiss recently passed away. If so, my condolences to his family and collaborators. Kreiss is rightly famous in the field of numerical solution of PDEs for his work on hyperbolic systems, and Gerry’s work fits in that tradition of technical excellence. I just wish people paid more attention to this work.

    David Young

  4. Gerald Browning
    Posted Feb 27, 2016 at 3:27 PM | Permalink | Reply

    Roger,

    As seen in our 2002 manuscript, the solution to the correct atmospheric equations for the mesoscale case (horizontal length scale of 100 km and time scale of a few hours) in a limited area is essentially not computable, for many reasons. If the mesoscale model uses boundary data from a global model that has removed gravity waves through initialization (the common practice), then the boundary data will conflict with the interior gravity waves that are generated by mesoscale storms in the limited area. The gravity waves from multiple storms can add together and become essential to the correct solution. Also note that to be computed correctly, the gravity waves would have to be observed correctly, and this is not possible with the current observing system because of their short time scale.
    Finally, the mesoscale forcing, e.g., heating and cooling, must be accurate because the vertical velocity is directly proportional to the heating, and that is what drives mesoscale storms. Again, the current observing system does not provide this information, and the parameterizations of the forcing are less than perfect.

    Jerry

  5. Gerald Browning
    Posted Feb 27, 2016 at 3:32 PM | Permalink | Reply

    David,

    Heinz passed away in December 2015. He once modestly told me that he was slow to understand a problem. But clearly once he understood it, he understood it very well. This is obvious in all of his work.

    Jerry

  6. rpielke
    Posted Feb 27, 2016 at 4:52 PM | Permalink | Reply

    Hi Jerry

    I am unclear about what you mean. Of course, even with the limitations you point out, there is useful skill in mesoscale models. In terms of the hydrostatic assumption, this has been removed in most models, which now also permit compression (albeit they slow the acoustic modes in order to permit longer time steps).

    You are correct with respect to shortcomings in parameterizations. In terms of forcings, we can capture most of this with respect to surface forced mesoscale systems. I provide numerous examples in my book:

    Pielke Sr, R.A., 2013: Mesoscale meteorological modeling. 3rd Edition, Academic Press, 760 pp. http://store.elsevier.com/Mesoscale-Meteorological-Modeling/Roger-A-Pielke-Sr/isbn-9780123852373/

    For propagating mesoscale systems and internally generated mesoscale systems, I agree this is a more difficult problem. We discuss this, for example, in our paper (e.g., see Table 3)

    Pielke, R.A., G. Kallos, and M. Segal, 1989: Horizontal resolution needs for adequate lower tropospheric profiling involved with atmospheric systems forced by horizontal gradients in surface heating. J. Atmos. Oceanic Tech., 6, 741-758. http://pielkeclimatesci.wordpress.com/files/2009/09/r-68.pdf

    The needed observational resolution and precision become an increasingly more difficult problem as the spatial scale becomes smaller.

    Best Regards

    Roger Sr.

  7. Gerald Browning
    Posted Feb 27, 2016 at 6:30 PM | Permalink | Reply

    Roger,

    Clearly you have a vested interest in defending mesoscale models; that is what your career has been based on. Unfortunately, defining the skill of a mesoscale model for forecasting has been a dubious process at best. There are few mesoscale observations, and skill should be defined as a mathematical norm of the difference between obs and model results. Given that the parameterizations are not accurate, neither are the initial conditions of the forcing of a real storm. This leads to instantaneous large errors in the forecast of any mesoscale storm. And the gravity wave problem I described cannot be handled correctly by any mesoscale model. Also note that the equations of mesoscale motion only apply above the planetary boundary layer, i.e., there is a discontinuity generated by the different equations used in the two regions.

    Do any of the large-scale models use the multiscale approach of Browning and Kreiss, or do they use some other ad hoc approach, e.g., adding a diffusive term to the vertical velocity equation without any mathematical proof of its accuracy? I certainly have not seen any meteorological manuscript references to our multiscale differential equation work; e.g., look at the cites of our 2002 manuscript.

    Jerry

  8. rpielke
    Posted Feb 27, 2016 at 7:40 PM | Permalink | Reply

    Hi Jerry

    The ultimate arbiter of model performance (models include basic physics but, since they use tuned parameterizations, are of course engineered tools) is how well they do with respect to predicting observed real-world features. Such evaluations clearly show that mesoscale and regional NWP models have skill with respect to important weather variables such as temperature, humidity, precipitation and winds. There is still more work to do, but these models have saved lives.

    Roger Sr.

  9. Gerald Browning
    Posted Feb 27, 2016 at 8:13 PM | Permalink | Reply

    Please provide cites that definitively prove that mesoscale models are accurate. You need to read the Sylvie Gravel et al. manuscript to see how inaccurate large-scale models are. The artificial boundary layer parameterization destroys the accuracy above the boundary layer in a few days. Only because new observational data is inserted every 6 hours does the model stay on track.

    BTW the ECMWF model (considered the best large scale model) is hydrostatic.

    • rpielke
      Posted Feb 27, 2016 at 11:22 PM | Permalink | Reply

      Jerry – I have an entire chapter in my modeling book on the value of mesoscale models.

      As to their inaccuracies, they certainly diverge from reality when observed data is not assimilated to force them back towards reality. This is one (of a number) of reasons that I have been so critical of multi-decadal climate predictions. They claim that these are different types of prediction (i.e., they call them “boundary forced” as distinct from “initial value” problems), but, of course, they are also initial value predictions, as I discuss in

      Pielke, R.A., 1998: Climate prediction as an initial value problem. Bull. Amer. Meteor. Soc., 79, 2743-2746. http://pielkeclimatesci.wordpress.com/files/2009/10/r-210.pdf

      As to the ECMWF using the hydrostatic relationship (as do other large-scale models): since the atmosphere is essentially hydrostatic on synoptic spatial scales, that approximation fits, in terms of the close relationship between the vertical pressure gradient force and gravity. The ECMWF, as you note, is considered the best NWP code, so we agree on that.

      Indeed, in my books on mesoscale features, I define them as large enough that the hydrostatic approximation is quite accurate, but the gradient wind relationship does not generally hold above the boundary layer (i.e. the divergent component of the wind is not necessarily a small fraction of the rotational component).

      Roger Sr.

    • Steve McIntyre
      Posted Feb 28, 2016 at 11:44 AM | Permalink | Reply

      Gerry, you say “BTW the ECMWF model (considered the best large scale model) is hydrostatic.”

      What does this mean in the context of this discussion?

      • Michael Jankowski
        Posted Feb 28, 2016 at 2:04 PM | Permalink | Reply

        “…Browning and Kreiss (1986) introduced the idea of slowing down the vertically propagating waves instead of removing them completely…”

        Hydrostatic = “removing them completely.” The vertical momentum equation (from Navier–Stokes, the latter being a distant relative of your favorite racehorse) is replaced by an equilibrium approximation: the upward pressure gradient force, i.e., the decrease of pressure with height, is balanced by the downward gravitational pull of the earth.

      • Gerald Browning
        Posted Feb 28, 2016 at 4:43 PM | Permalink | Reply

        Steve,

        Roger mentioned that some of the large-scale modelers are slowing down the vertical waves as in our work. But the operational models remain hydrostatic, so my post applies.

        Jerry

    • Steve McIntyre
      Posted Feb 28, 2016 at 11:47 AM | Permalink | Reply

      Gerry, a few weeks ago, the weather models successfully predicted a huge snowstorm around Washington DC. While they undoubtedly refreshed the weather model every 6 hours, doesn’t this give some form of mesoscale vindication?

      • Gerald Browning
        Posted Feb 28, 2016 at 4:18 PM | Permalink | Reply

        Steve,

        The models also predicted large Eastern storms in the past when nothing materialized. That is the problem.
        Being “engineered” (Pielke’s word) means they may or may not be realistic. It is a crap shoot. Please read Sylvie’s manuscript to see how the unrealistic boundary layer parameterization slows down the increase in the velocity at the surface but destroys the accuracy of the numerical approximation above the boundary layer. It is also pointed out that the obs over the midwest are more dense, and thus better data is inserted into the models as the storms move east.

        Jerry

        • rpielke
          Posted Feb 28, 2016 at 5:08 PM | Permalink

          Do you have a better forecast approach? NWP models in recent years present ensemble envelopes of individual realizations, so they are considering uncertainty due to initial conditions, etc.

          I recommend you follow @RyanMaue [Twitter] for his views on NWP model performance.

  10. Gerald Browning
    Posted Feb 27, 2016 at 8:45 PM | Permalink | Reply

    Roger,

    What has saved the most lives is Doppler radar, which can see the moisture in a mesoscale storm as it approaches.

    Jerry

    • rpielke
      Posted Feb 27, 2016 at 11:26 PM | Permalink | Reply

      Yes – that has saved lives. But so have the NWP models that alert us that a severe weather outbreak will occur, or that ships should avoid a region because the models predict a hurricane will pass a certain area. The NWS watches (blizzard, severe storm, tornado, flash flood, etc.) include NWP output in the decision to issue. On the shortest time periods, observational systems such as Doppler radar and satellite, for example, become the primary tools.

      • Gerald Browning
        Posted Feb 28, 2016 at 4:23 PM | Permalink | Reply

        At least the hurricane prediction center has the decency to show the uncertainty in the path prediction due to the parameterizations (engineering, according to your words).
        Jerry

        • rpielke
          Posted Feb 28, 2016 at 5:10 PM | Permalink

          All NWP models now present this uncertainty. We call them spaghetti plots with respect to large scale features such as 500 hPa heights.

      • Gerald Browning
        Posted Feb 28, 2016 at 5:13 PM | Permalink | Reply

        Your careful choice of the words “includes NWP” is telling. Has a mesoscale model ever predicted a tornado? It is the hook echo from Doppler radar that is used for this purpose. I will not go into the mathematical and numerical details of why a model cannot do that.

  11. bernie1815
    Posted Feb 27, 2016 at 10:53 PM | Permalink | Reply

    It is something when what I assumed was by way of a eulogy/festschrift/encomium should turn in such an odd direction.

    • bernie1815
      Posted Feb 27, 2016 at 10:55 PM | Permalink | Reply

      If we are to discuss the strengths and limitations of different modelling approaches, perhaps the title of the post should be changed.

    • rpielke
      Posted Feb 27, 2016 at 11:28 PM | Permalink | Reply

      Bernie1815 – I feel we honor Professor Heinz Kreiss by discussing science and engineering. Memorial conferences are often held for such a purpose. We are doing that here. Roger Sr.

      • David Brewer
        Posted Feb 28, 2016 at 5:26 AM | Permalink | Reply

        Couldn’t agree more. Any real scientist would love to think he could still start a robust discussion after he died. How unusual – and refreshing – to see an exchange in this field proceed from the personal to the scientific.

      • bernie1815
        Posted Feb 28, 2016 at 8:51 AM | Permalink | Reply

        Roger: I am all for that. If you guys, who apparently know each other, are OK with the tone, then so be it.

    • Steven Mosher
      Posted Feb 28, 2016 at 11:10 AM | Permalink | Reply

      Last time Gerald was here he lost the debate to Lucia and Dr. Curry.

      This time Roger will out cite him.

      Steve Mc: People do not win or lose debates by number of citations. I do not know enough of the particulars to have a properly informed opinion on the merits of either side of the discussion and doubt that Mosh is either.

      • Michael Jankowski
        Posted Feb 28, 2016 at 11:47 AM | Permalink | Reply

        Now now, we all lose when you make appearances such as this.

      • terrymn
        Posted Feb 28, 2016 at 2:52 PM | Permalink | Reply

        Just a friendly reminder Mosh, that if you can’t add anything productive to the discussion it’s ok to say nothing.

      • Gerald Browning
        Posted Feb 28, 2016 at 4:47 PM | Permalink | Reply

        I did not lose the debate. It is hopeless to debate people that know nothing about climate models or refuse to read mathematical manuscripts.

        Jerry

      • Posted Feb 28, 2016 at 6:34 PM | Permalink | Reply

        Mosher, I dislike saying it, but you are ignorant of the issues being discussed here. As far as I can see, Pielke Sr., whom I respect, is merely saying that despite any serious problems, the models have skill at some things. There is no contradiction with Gerry at all. The question here is whether we can dramatically improve models, and Gerry (along with Paul Williams and myself) says emphatically yes; it is very obvious that the answer is yes to anyone with expertise in the field. I recently looked into starting a project to write a new GCM with modern methods with perhaps the top person in the field of numerical PDEs. His response was that technically it was a great idea but politically it was a nightmare. I would respectfully suggest that adding to the political dysfunction surrounding this issue helps no one.

        • Posted Feb 29, 2016 at 4:40 AM | Permalink

          His response was that technically it was a great idea but politically, it was a nightmare.

          Thanks to Gerry and our host (as ever) for opening it up.

  12. Posted Feb 28, 2016 at 9:55 AM | Permalink | Reply

    Roger, thank you for the friendly, collegial tone of your comments. It’s much appreciated by casual readers like me.

  13. Michael Jankowski
    Posted Feb 28, 2016 at 11:49 AM | Permalink | Reply

    This is old and VERY basic: http://www.accuweather.com/en/weather-blogs/weathermatrix/why-are-the-models-so-inaccurate/18097

    But I found the cyclical nature of the accuracy of predictions – “…note the seasonal dips in the Northern Hemisphere, proving models are more inaccurate during the Summer than the Winter (not so in the southern Hemisphere…)” – to be very interesting.

    • g
      Posted Feb 29, 2016 at 11:45 AM | Permalink | Reply

      Michael,

      There is some hope for the large scale (winter storms) because the observational system partially resolves them and the periodic insertion of new observational data keeps the model on track. Summer storms are much smaller in scale and the observational system does not resolve them. Here is where Doppler radar plays a crucial role.

      Jerry

  14. Posted Feb 28, 2016 at 3:19 PM | Permalink | Reply

    Roger, I will give this a try, even though Gerry should correct any misstatements I make. I think his point is that the hydrostatic approximation gives rise to unbounded solutions. To prevent this, artificial dissipation must be added. This dissipation is not physical and always degrades accuracy. In CFD, the improvement you get when going from first-order dissipation to second-order is dramatic. Some GCMs still use methods that are effectively first-order accurate. However, the overly dissipative methods can still have skill at a lot of things; it's just that they could be dramatically improved with better methods.
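
    The effect of dissipation order can be seen in a minimal one-dimensional advection test. This is my own sketch, not taken from any GCM: first-order upwind visibly flattens a pulse that a second-order (Lax-Wendroff) scheme transports almost intact.

```python
import numpy as np

# Advect a Gaussian one full period on a periodic grid with two schemes.
# First-order upwind has O(dx) numerical dissipation; Lax-Wendroff is O(dx^2).
n, c = 200, 0.5                        # grid points, Courant number
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)   # initial Gaussian pulse, peak 1.0

def upwind_step(u):
    return u - c * (u - np.roll(u, 1))

def lax_wendroff_step(u):
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * c**2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

steps = int(n / c)                     # one full traversal of the domain
u_up, u_lw = u0.copy(), u0.copy()
for _ in range(steps):
    u_up = upwind_step(u_up)
    u_lw = lax_wendroff_step(u_lw)

print(f"initial peak:                       {u0.max():.3f}")
print(f"upwind peak after one period:       {u_up.max():.3f}")
print(f"Lax-Wendroff peak after one period: {u_lw.max():.3f}")
```

    The upwind pulse loses a large fraction of its amplitude over a single period, while the second-order result stays close to the initial peak.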

    David Young

    • Gerald Browning
      Posted Feb 28, 2016 at 4:57 PM | Permalink | Reply

      David,

      It is true that the hydrostatic equations are ill-posed (I will obtain a copy of that proof and ask Steve to post it, as it is no longer in print).
      I did not get into that issue because the problem is even more serious than that: they are using the wrong system to describe the smoothly evolving large-scale solutions.
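
      To see what ill-posedness means in Hadamard's sense, here is a toy calculation (mine, not from the proof mentioned above) for the backward heat equation, where the growth rate of a perturbation is unbounded in the wavenumber; the hydrostatic case is analogous in spirit, not in detail.

```python
import numpy as np

# Hallmark of an ill-posed problem (Hadamard): perturbation growth rates are
# unbounded in the wavenumber k.  For the backward heat equation u_t = -u_xx,
# a mode e^{ikx} grows like e^{k^2 t}, so ever-finer initial noise grows ever
# faster -- there is no fixed time interval on which solutions depend
# continuously on the data.
t = 0.01
for k in (1, 10, 100):
    print(f"k = {k:3d}: amplification at t = {t} is e^(k^2 t) = {np.exp(k**2 * t):.3e}")
```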

      Jerry

  15. Posted Feb 28, 2016 at 4:49 PM | Permalink | Reply

    Looking at the comments to date I can reasonably conclude that y’all would agree with my comment at the top of this thread – at least as far as GCMs are concerned.

  16. Gerald Browning
    Posted Feb 29, 2016 at 1:37 PM | Permalink | Reply

    Tom Holzer, a renowned plasma physicist at NCAR (now retired), once told me there are two types of scientists: those who seek to understand a scientific issue and provide a clear analytical understanding or solution to the problem (clearly Heinz fit in this group), and those only interested in building or maintaining their (funding) empire.

    Jerry

  17. Posted Feb 29, 2016 at 3:06 PM | Permalink | Reply

    Here are links to the citations in the original post.

    G. L. Browning and H.-O. Kreiss (1986), Scaling and computation of smooth atmospheric motions, Tellus, 38A, 295–313.

    ABSTRACT
    We introduce a general scaling of the inviscid Eulerian equations which is satisfied by all members of the set of adiabatic smooth stratified atmospheric motions. Then we categorize the members into mutually exclusive subsets. By applying the bounded derivative principle to each of the subsets, we determine the specific scaling satisfied by that subset. One subset is midlatitude motion which is hydrostatic and has equal horizontal length scales. Traditionally, the primitive equations have been used to describe these motions. However, it is well known that the use of the primitive equations for a limited area forecast of these motions leads to an ill-posed initial-boundary value problem. We introduce an alternate system which accurately describes this type of motion and can be used to form a well-posed initial-boundary value problem. We prove that the new system can also be used for any adiabatic or diabatic smooth stratified flow. Finally, we present supporting numerical results.

    G. Browning, A. Kasahara and H.-O. Kreiss (1980), Initialization of the Primitive Equations by the Bounded Derivative Method, Journal of the Atmospheric Sciences, 37, 1424–1436.

    ABSTRACT
    Large-amplitude high-frequency motions can appear in the solution of a hyperbolic system containing multiple time scales unless the initial conditions are suitably adjusted through a process called initialization. We observe that a solution of such a system which varies slowly with respect to time must have a number of time derivatives on the order of the slow time scale. Given a variable which is characteristic of low-frequency motions (e.g., vorticity), we can apply this observation at the initial time to find constraints which determine the rest of the initial data so that the amplitudes of the ensuing high-frequency motions remain small. Boundary conditions of the system must be taken into account in the derivation of the constraints. This procedure is referred to as the bounded derivative method.

    For a general linear version of the shallow-water equations, we prove that if the initial kth order time derivative is of the order of the slow time scale, then it will remain so for a fixed time interval. For the corresponding constant coefficient system, we compare the present initialization procedure with the normal mode approach. We then apply the new procedure to initialize the nonlinear shallow-water equations including the effect of orography for both the midlatitude and equatorial beta plane cases. In the midlatitude case, the initialization scheme based on quasi-geostrophic theory can be obtained from the bounded derivative method by certain simplifying assumptions. In the equatorial case, the bounded derivative method provides an effective initialization scheme and new insight into the nature of equatorial flows.

    Heinz-Otto Kreiss (1984), Problems with different time scales, Contributions of Mathematical Analysis to the Numerical Solution of Partial Differential Equations. Anthony Miller, ed. Proceedings of the Centre for Mathematical Analysis, v. 7. (Canberra, AUS: Centre for Mathematics and its Applications, Mathematical Sciences Institute, The Australian National University, 1984), 93–105.

    G. L. Browning and H.-O. Kreiss (2002), Multiscale Bounded Derivative Initialization for an Arbitrary Domain, Journal of the Atmospheric Sciences, 59, 1680–1696.

    ABSTRACT
    The bounded derivative theory (BDT) for hyperbolic systems with multiple timescales was originally applied to the initialization problem for large-scale shallow-water flows in the midlatitudes and near the equator. Concepts from the theory also have been used to prove the existence of a simple reduced system that accurately describes the dominant component of a midlatitude mesoscale storm forced by cooling and heating. Recently, it has been shown how the latter results can be extended to tropospheric flows near the equator. In all of these cases, only a single type of flow was assumed to exist in the domain of interest in order to better examine the characteristics of that flow. Here it is shown how BDT concepts can be used to understand the dependence of developing mesoscale features on a balanced large-scale background flow. That understanding is then used to develop multiscale initialization constraints for the three-dimensional diabatic equations in any domain on the globe.

    Christian Page, Luc Fillion, and Peter Zwack (2007), Diagnosing summertime mesoscale vertical motion: implications for atmospheric data assimilation, Monthly Weather Review, 135, 2076–2094.

    ABSTRACT
    Balance omega equations have recently been used to try to improve the characterization of balance in variational data assimilation schemes for numerical weather prediction (NWP). Results from Fisher and Fillion et al. indicate that a quasigeostrophic omega equation can be used adequately in the definition of the control variable to represent synoptic-scale balanced vertical motion. For high-resolution limited-area data assimilation and forecasting (1–10-km horizontal resolution), such a diagnostic equation for vertical motion needs to be revisited. Using a state-of-the-art NWP forecast model at 2.5-km horizontal resolution, these issues are examined. Starting from a complete diagnostic partial differential equation for omega, the rhs forcing terms were computed from model-generated fields. These include the streamfunction, temperature, and physical time tendencies of temperature in gridpoint space. To accurately compute one term of second-order importance (i.e., the ageostrophic vorticity tendency forcing term), a special procedure was used. With this procedure it is shown that Charney’s balance equation brings significant information in order to deduce the geostrophic time tendency term. Under these conditions, results show that for phenomena of length scales of 15–100 km over convective regions, a diagnostic equation can capture the major part of the model-generated vertical motion. The limitations of the digital filter initialization approach when used as in Fillion et al. with a cutoff period reduced to 1 h are also illustrated. The potential usefulness of this study for mesoscale atmospheric data assimilation is briefly discussed.

  18. Gerald Browning
    Posted Feb 29, 2016 at 4:52 PM | Permalink | Reply

    There are many sources of error in numerically approximating a system of time dependent partial differential equations and then using the numerical model to forecast reality. The total error E can be considered to be a sum of the errors

    E = D + S + T + F + I

    where D represents the error in the continuum dynamical differential equations versus the system that actually describes the real motion, S the spatial discretization (truncation) error, T the time discretization (truncation) error, F the errors in the forcing (parameterizations versus real forcing), and I the error in the initial data.

    D contains errors from inappropriate descriptions of the dynamics, be they incorrect physical assumptions or overly large dissipative terms; S the errors due to insufficient spatial resolution; T the errors due to insufficient temporal resolution; and F the errors from incorrect specification of the real forcing.

    We have shown that S and T are not dominant for second-order finite difference approximations of the multi-scale system that describes large-scale (1000 km) and mesoscale (100 km) features (also see Naughton and Browning for a similar discussion). We have shown that these scales can be computed without any dissipation, and it is known that the dissipation for these scales is negligible. It has been proved mathematically that the multi-scale system accurately describes the commonly used fluid equations of motion for both of these scales. We are left with F and I, and we have shown that F is large, e.g., in the boundary layer approximation, and I is large because of the sparse density of observations even for the large scale (Gravel et al.). Thus there is no need for larger computers until F and I are no longer the dominant terms.
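
    The claim that S and T are small for a second-order method can be illustrated by a minimal convergence test (my own sketch, unrelated to the cited papers): the spatial truncation error of a centered difference shrinks by about a factor of four each time the grid spacing is halved.

```python
import numpy as np

# Second-order centered difference for d/dx sin(x): the spatial truncation
# error shrinks by ~4x each time the grid spacing is halved, confirming
# O(dx^2) accuracy.
def max_error(n):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x)
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)  # centered difference
    return np.abs(dudx - np.cos(x)).max()

for n in (32, 64, 128):
    print(f"n = {n:4d}  max error = {max_error(n):.2e}")
print(f"convergence ratio: {max_error(32) / max_error(64):.2f}  (about 4 for second order)")
```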

    Jerry

    • Pat Frank
      Posted Mar 1, 2016 at 11:39 AM | Permalink | Reply

      Jerry, the error equation you show is exactly the approach I take in the paper on uncertainty in air temperature projections that I’ve been trying to publish for three years.

      I’ve yet to encounter a single climate modeler who understands physical error analysis, or in fact the difference between accuracy and precision. Or even the critical importance of a unique result.

      One suspects that sort of ignorance accounts for the crummy state of the field.

    • Posted Mar 3, 2016 at 9:22 AM | Permalink | Reply

      Jerry or anyone else –

      In a “typical” GCM, how many of these F-type parameterizations are there? Dozens? Hundreds? It seems to me that this is the biggest weakness of the GCMs. Just one unrealistic parameterization could lead to huge error despite accuracy in all the others.

      • Posted Mar 3, 2016 at 12:19 PM | Permalink | Reply

        Tom, I don’t know the exact number of subgrid model parameterizations, but I can comment on the eddy viscosity turbulence model they use. The “error” in these models can be large or manageable; its size is strongly case dependent. As Gerry has pointed out ad infinitum, GCMs have far too much dissipation, which will smear out regional features, for example. I also know for a fact that many GCMs use the best numerical methods of the 1960s, and this can dominate the error in some cases. Gerry’s frustration is one I share. Many scientists are fully invested in badly flawed computational models. This leads to a strong cultural trend toward mathematics denial. “The code worked” is a phrase we should ban from science. “Models help us understand the system” should also be banned. The latter is Ken Rice’s favorite, and it’s a lazy excuse to delay acknowledging and fixing problems.

  19. Gerald Browning
    Posted Feb 29, 2016 at 5:06 PM | Permalink | Reply

    Reference

    Michael J. Naughton, Gerald L. Browning, and William Bourke, 1993: Comparison of Space and Time Errors in Spectral Numerical Solutions of the Global Shallow-Water Equations. Mon. Wea. Rev., 121, 3150–3172

  20. Posted Feb 29, 2016 at 5:29 PM | Permalink | Reply

    My condolences to Professor Kreiss’ family and collaborators.

    The Continuous Equation Domain
    There should not be any questions about the mathematical results obtained from analyses of the continuous equations. If there are questions, simply re-do those straightforward, well-accepted analyses and illustrate the errors.

    The presence of complex, or imaginary, characteristics for the model equations in the continuous-equation domain for IBVPs, for which real characteristics are expected, however, always leads to long and, very frequently, spirited discussions. In one arena of engineering these discussions have been ongoing since about 1972; over four decades now, and counting.

    Generally, characteristics that do not correspond to those that have been long accepted for the classical formulations of the continuous equations arise whenever those formulations are modified in some manner. Sometimes, however, the classical formulations are modified to be applicable to new applications, and these in turn become accepted, in part because the characteristics of the new formulation are consistent with the new applications. The formulation of finite-time temperature propagation in heat conduction is an example. In that case, the parabolic nature of the heat conduction equation, for which every spatial location knows that all other spatial locations exist for all times, becomes a hyperbolic problem with a limited range of influence that is determined by the propagation speed. Fundamental modifications to the usual derivations of the Navier-Stokes approximation are presented from time to time.

    Within the linearized domain, the theoretical ramifications of non-real characteristics for IBVPs, which should have real characteristics, can be unambiguously determined, for both the continuous equations and the discrete approximations to the continuous equations. This is especially true for the case that the continuous equations do not contain algebraic terms such as those that generally occur at interfaces between the materials that occupy the solution domain. The ramifications in the physical domain are really fuzzy, including the possibility that information at future times must be specified; not as a BC at a spatial location, but at a future time value.

    The Discrete Approximation Domain
    Moving to the discrete-approximation domain introduces a host of additional issues, and these are complicated by the ‘art’ aspects of ‘scientific and engineering’ computations. Within the linearized domain the theoretical ramifications can be computationally realized by use of idealized flows that correspond to the theoretical aspects of the analyses. In particular, aphysical growth of the highest frequencies that can be resolved by the discrete approximations can be computationally realized. Within the well-accepted continuous-equation domain, these are the small-scale, dissipative frequencies that should prevent growth of perturbations.

    The adverse theoretical ramifications do not always prevent successful applications of the model equations, in part because the critical frequencies are not resolved in the applications, and in part due to the properties of the discrete approximations, which usually have inherent implicit representations of dissipative-like terms. Such dissipative terms are also sometimes explicitly added into the discrete approximations, and sometimes these added-in terms do not have counterparts in the original fundamental equations.

    Reconciliation of the theoretical results with computed results is also complicated by the basic properties of the selected solution method for the discrete approximations. The methods themselves can introduce aphysical perturbations into the calculated flows. And these are further complicated whenever the discrete approximations contain discontinuous algebraic correlations (for mass, momentum, and energy exchanges, for example) and switches that are intended to prevent aphysical calculated results. In the physical domain any discontinuity (in pressure, velocity, temperature, EoS, thermophysical, and transport properties, for example) has a potential to lead to growth of perturbations. In the physical domain, however, physical phenomena and processes act to limit growth of physical perturbations.
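
    The aphysical growth of the highest resolvable frequencies mentioned above can be made concrete with a standard von Neumann analysis. The following sketch (illustrative only) uses explicit diffusion, where exceeding the stability limit makes the shortest wave the grid can represent grow fastest.

```python
import numpy as np

# Von Neumann analysis of explicit (FTCS) diffusion, u_t = nu * u_xx:
# the amplification factor is G(theta) = 1 - 4*r*sin^2(theta/2), with
# r = nu*dt/dx^2 and theta = k*dx.  For r > 1/2 the highest resolvable
# frequency (theta = pi, the two-grid-interval wave) has |G| > 1 and grows
# fastest -- aphysical growth at exactly the scales that should be damped.
theta = np.linspace(0.0, np.pi, 9)
for r in (0.4, 0.6):
    G = 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2
    verdict = "stable" if np.all(np.abs(G) <= 1.0) else "unstable"
    print(f"r = {r}: |G| at theta = pi is {abs(G[-1]):.2f} ({verdict})")
```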

    • Posted Mar 1, 2016 at 6:30 PM | Permalink | Reply

      Yes indeed, the very fact that energy gets transferred from small to large scales is what makes it so difficult to model turbulence. The impression I have is that energy transfer happens both ways all the time; however, the net transfer is normally from large scales to small scales.

      This is why one can get the impression an eddy viscosity model (or indeed numerical dissipation) gets more or less the right qualitative behaviour for some flows.

      When one looks in more detail it becomes clear that net forward transfer (from large to small scales) can contain strong forward transfer in one direction at the same time as smaller backward transfer (from small to large scales) in the other two directions. Ignoring the effect of the backward transfer will ultimately make the flow prediction incorrect.

      • Posted Mar 2, 2016 at 2:13 PM | Permalink | Reply

        Yes, rja, what you say is broadly true. The real problem here is that the eddy viscosity assumption and Reynolds averaging make untrue assumptions, such as that the flow is stationary. In reality, we need some breakthroughs in theoretical understanding to push much further beyond the current inadequate state of the art. Running codes just distracts from the issues.

        • Posted Mar 2, 2016 at 5:14 PM | Permalink

          Although I agree with your comment dpy, I think it should be noted that Reynolds averaging doesn’t necessarily mean time averaging or that the flow is stationary. It could also be derived from ensemble averaging: for example, repeating a non-stationary experiment many times and averaging the flow field across the ensemble of experiments at each moment in time.

        • Posted Mar 2, 2016 at 8:32 PM | Permalink

          RJA, OK, but is there any proof that the average is stationary? Wang at MIT has the best work on this, and he says it’s a very distant goal even for Navier-Stokes. I would be interested in any reference you can give me.

        • Posted Mar 9, 2016 at 10:24 AM | Permalink

          averaging the flowfield across the ensemble
          =============
          This will remove some types of noise, but it will not eliminate bias. And over the long term, bias cannot be detected except as divergence from actuals, which renders the models impractical for long-term forecasting.

          You need a mechanism to recenter the models and thus eliminate long-term bias, but when dealing with the future there is no recentering solution.

          For example, take the inertial guidance problem. It is equivalent to predicting the future: the guidance system will always drift and must re-align itself with an observed position, but the future has no such observation points.
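
          A toy random-walk calculation makes the point concrete (my own illustration; the numbers are arbitrary): ensemble averaging suppresses the zero-mean noise but leaves the accumulated bias untouched.

```python
import numpy as np

# Each ensemble member below is a random walk with zero-mean noise plus the
# same small per-step bias.  Averaging over members cancels the noise, but the
# ensemble mean converges to the bias line, which drifts without bound -- and
# there is no future observation available to recenter it.
rng = np.random.default_rng(0)
steps, members, bias = 1000, 400, 0.01
noise = rng.normal(0.0, 1.0, size=(members, steps))
paths = np.cumsum(noise + bias, axis=1)   # each member: random walk + drift
ens_mean = paths.mean(axis=0)             # averaging cancels the noise...
print(f"ensemble mean at final step: {ens_mean[-1]:.2f}")
print(f"accumulated bias:            {bias * steps:.2f}")   # ...but not the drift
print(f"spread of individual members: {paths[:, -1].std():.2f}")
```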

  21. Craig Loehle
    Posted Mar 1, 2016 at 8:38 AM | Permalink | Reply

    The limitations described in the ability to solve the continuous equations mean that the GCMs are not “just physics,” as is often claimed in blog discussions (and by Michael Schlesinger in my debate with him), a claim that shows me that the speaker has never tried to simulate a physical system. The possibility of all sorts of numerical artifacts and divergences means that long simulations may diverge without bound (not to infinite temperature, but away from any correct solution). Does it “matter”? The claim that boundary conditions such as heat dissipation to space and turbulent mixing smooth out these types of numerical errors is often made (i.e., that the global solution over time is correct), but this claim is meta-scientific and has never been proven. If known numerical errors are compensated to some degree by computational tricks (such as dissipation of energy, floor and ceiling values for some variables, kludges), then the tool might be of some use for some applications such as hurricane forecasting, as Roger said, though note in this example how often the projected paths are all over the place. And this is almost a best case: a few days, a well-organized storm system. If the GCMs used the proper numerical solution methods (if known), this would increase confidence in them a lot. In the absence of this, Browning is right that the whole enterprise is on shaky ground.
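
    The divergence described here can be seen in miniature in the Lorenz-63 system (a standard toy model, not a GCM): two runs differing by one part in 10^8 end up in completely different states after a modest integration time.

```python
import numpy as np

# Lorenz-63: two trajectories differing by 1e-8 in one coordinate separate to
# the size of the attractor within a few tens of model time units -- a toy
# illustration of how tiny perturbations (numerical or otherwise) grow until a
# long simulation no longer tracks any particular true solution.
def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(s, dt, n):
    # Classical fourth-order Runge-Kutta time stepping.
    for _ in range(n):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s

a = rk4(np.array([1.0, 1.0, 1.0]), 0.01, 3000)           # integrate to t = 30
b = rk4(np.array([1.0 + 1e-8, 1.0, 1.0]), 0.01, 3000)
print(f"separation after t = 30: {np.linalg.norm(a - b):.2f}")
```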

    • Posted Mar 1, 2016 at 3:40 PM | Permalink | Reply

      Craig, your comment is a good summary in layman’s terms of the situation. I would only add that in CFD the literature is really infected by selection and positive-results bias. Basically, the literature gives a far too optimistic view of the real skill of the models. When using these models in areas where human safety is involved, correcting this bias becomes a matter of business survival.

  22. Posted Mar 9, 2016 at 10:09 AM | Permalink | Reply

    Whenever I see a numerical solution for weather, and by extension climate, I ask a simple question.

    Are we able to model the earth’s tides from first principles? Surely the tides, with a much simpler forcing, must be solvable?

    But of course the tides are not solvable from first principles. Instead, the tides are solved via astrology, from the position of the sun and moon in the sky.

    So why should climate be any different? What is unique about climate that permits it to be solved when the tides cannot be?
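
    For what it is worth, operational tide tables are indeed built by fitting astronomical harmonic constituents to observed water levels rather than by solving the fluid equations. A toy version of that fit (real M2 and S2 periods, made-up amplitudes and phases) looks like this:

```python
import numpy as np

# Toy harmonic tide "prediction": least-squares fit of two astronomical
# constituents (M2 lunar and S2 solar semidiurnal; real periods, invented
# amplitudes) to noisy water levels, then extrapolate one day ahead.
# No fluid dynamics is involved anywhere.
rng = np.random.default_rng(1)
w_m2 = 2 * np.pi / 12.4206012          # M2 angular frequency, rad/hour
w_s2 = 2 * np.pi / 12.0                # S2 angular frequency, rad/hour
t = np.arange(0.0, 720.0, 1.0)         # 30 days of hourly "observations"
true_level = 1.2 * np.cos(w_m2 * t - 0.7) + 0.4 * np.cos(w_s2 * t - 0.2)
obs = true_level + rng.normal(0.0, 0.05, t.size)

# Least-squares fit of a cos/sin pair for each constituent.
A = np.column_stack([np.cos(w_m2 * t), np.sin(w_m2 * t),
                     np.cos(w_s2 * t), np.sin(w_s2 * t)])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)

t_fut = np.arange(720.0, 744.0, 1.0)   # predict the next day
A_fut = np.column_stack([np.cos(w_m2 * t_fut), np.sin(w_m2 * t_fut),
                         np.cos(w_s2 * t_fut), np.sin(w_s2 * t_fut)])
pred = A_fut @ coef
truth = 1.2 * np.cos(w_m2 * t_fut - 0.7) + 0.4 * np.cos(w_s2 * t_fut - 0.2)
print(f"max prediction error over the next day: {np.abs(pred - truth).max():.3f} m")
```

    The forecast is accurate not because the physics has been solved from first principles, but because the forcing is periodic and the fitted harmonics capture it.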

  23. kim
    Posted Jul 10, 2016 at 12:43 AM | Permalink | Reply

    There’s a hole in my model, Dear Liza, Dear Liza, there’s a hole in my model, Dear Liza a hole.

    Well fix it, Dear Henry, Dear Henry, well fix it, Dear Henry, Dear Henry fix it.

    With what shall I fix it, Dear Liza, Dear Liza, with what shall I fix it, Dear Liza with what?

    Politics will fit it, Dear Henry, Dear Henry, politics will fit it, Dear Henry will fit.
    =============
