Continuation of Exponential Growth # 1. Consult the original thread for an interesting dialogue.

552 Comments
Thanks for allowing us to carry on, Steve! Bump!
And thanks to Jerry, Jimy, Bender and others for one of the most interesting threads I’ve seen on any climate site.
It seems to me that some of the issues being debated could be readily resolved by experiment, so what about it?
Bender [#432],
It seems to me the point about the existence of central tendencies is an important one, and one on which we are in nearly complete agreement, so I must have misunderstood your point or its context. To be explicit: my complaint with the use of statistics such as ‘average’, ‘median’, etc. in describing a multistable, aperiodic dynamical system is that, even when such numbers are calculable, they are often not useful as summary statistics. A good statistic tells us something useful about the distribution it is taken from. For an example with obvious analogy to climate, consider some measure of a bistable system oscillating aperiodically between an island of stability at −1 and one at +1. What is the average of that measure? Assuming your calculation converges, you might find it to be near 0. But that average is a queer statistic: it doesn’t tell you anything about the probability of finding the system itself near 0. I would be willing to say that such a system has no (single) central tendency. If this clashes with anyone’s opinion, I’d be interested in learning how.
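The bistable example can be sketched numerically. Here is a minimal random-telegraph model (the flip probability and noise level are illustrative choices, not tied to any climate quantity): the time average comes out near 0, yet the system is almost never found near 0.

```python
import random

random.seed(0)

def bistable_series(n=100_000, flip_p=0.01, noise=0.1):
    """Aperiodic two-state system: dwells near -1 or +1, flipping at random times."""
    state, out = 1.0, []
    for _ in range(n):
        if random.random() < flip_p:
            state = -state
        out.append(state + random.gauss(0.0, noise))
    return out

x = bistable_series()
mean = sum(x) / len(x)
near_zero = sum(1 for v in x if abs(v) < 0.5) / len(x)
print(f"mean = {mean:+.3f}")                  # close to 0
print(f"fraction near 0 = {near_zero:.4f}")   # tiny: the system is almost never near 0
```

The mean exists and converges, but as a summary of where the system actually lives it is misleading.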
Maybe some statistician would correct me (I use statistics just to analyse data, not because I have studied them deeply), but one example could be comparing, over whole periods, the functions y=0 and y=sin(x): they have the same average (obviously), and because of this I could even find a good correlation between them (here a statistician could correct me), but we know they are completely different functions.
For the matter originally leading this discussion (maybe someone pointed this out before, so I beg your pardon; I had lost it in the preceding 443+3 messages), there is also the inverse problem: from a small initial trend we can extrapolate almost any function we want; and it is very easy to demonstrate numerically how, under particular conditions, a line can lie very close to an exponential, or to a function that is growing now but will decrease at some point.
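This point is easy to demonstrate numerically. In the sketch below (the growth rate and observation window are arbitrary illustrative choices), a slow exponential observed over a short window is fit almost perfectly by a straight line, yet the two extrapolations diverge badly:

```python
import math

# A slow exponential observed over a short window, no noise added.
ts = list(range(21))                        # t = 0 .. 20 (observed window)
ys = [math.exp(0.02 * t) for t in ts]

# Ordinary least-squares line fit y = a + b*t.
n = len(ts)
tbar = sum(ts) / n
ybar = sum(ys) / n
b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) \
    / sum((t - tbar) ** 2 for t in ts)
a = ybar - b * tbar

# In-sample the two curves are nearly indistinguishable...
max_in_sample = max(abs(a + b * t - y) for t, y in zip(ts, ys))
# ...but extrapolated far outside the window they diverge.
gap_at_100 = math.exp(0.02 * 100) - (a + b * 100)
print(f"max in-sample misfit: {max_in_sample:.4f}")
print(f"gap at t=100:         {gap_at_100:.2f}")
```

A short record simply cannot discriminate between a line, an exponential, or a curve that later turns downward.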
Computational fluid dynamics is a recent and still difficult branch of science, which still needs many “field tests” and heavy computational power for (relatively) small problems; so I find it very unlikely (as much as the IPCC finds it very likely) that we can fully model, and then forecast up to a century ahead, how Earth’s atmosphere behaves, not to mention random, out-of-control influences like volcanoes or the Sun itself. Of course, we do not need a 0.01°C gradient over 1 m, nor could we complain about a 0.5°C error over 100 years for a 100 km^2 area; such precision is not required for now, but I still have my doubts.
So errors of this kind, like forecasting exponential growth even when it is not happening, are easy to make; indeed that seems to be what is happening: the growth trend in global temperature has been smaller since 1998 than in the previous 10 or even 20 years. For anyone who has studied basic mechanics, it is obvious that the global temperature rise is now slowing down (which does not mean it has stopped or is decreasing); so, even as an approximation, I cannot find signs of exponential growth for now, nor can I understand how someone can talk of an acceleration of global warming in recent years.
Eureka moment …. what triggered it was reading an analytical “one pager” by Dave Melita, a meteo dude working for a boutique firm servicing the oil and natural gas futures trading biz. What he wrote was an interesting observation regarding bias in short to mid range forecasting models used by NOAA/NWS. It seems that they consistently underforecast the impacts of southward surges of Arctic air in the US. His notion is that southerly tracking lows “spike” southbound cold intrusions. As a result, they make it intact further south than the models forecast. The flaw in the models, therefore, is that they don’t foresee the southern lows and their attractive power.
That got me to thinking about some old Geography / Meteo 101 stuff. One of the basics is the net energy balance versus latitude. Equatorward of about 36 or 37 degrees it’s positive; poleward of that, negative. I hereby surmise that any net increase in retained energy would be greater equatorward. Since we don’t see evidence of tropical tropospheric warming anywhere near what the climate models portray, I hereby surmise that the net increase in energy is not expressed in an overall rise in temperature over a large volume; instead, due to the highly uneven heating from both direct insolation and reradiation in the tropics, only certain parcels warm and are immediately transposed into convective / low pressure energetic expressions. This also helps explain southern lows that are not foreseen by shorter range “weather” models.
The kicker in all this is, since poleward of 37ish, the impact of CO2 is going to be less due to the innate negative energy balance, the cold air “factories” still produce lots of cold air. Southbound air masses get sucked even further south by the lows which are even lower due to GW.
Is this a negative feedback? Hmmmmmm ……
Steve S: Are you essentially saying that the rising hotter air in the tropics and subtropics will “suck” more cold air from the Arctic (and Antarctic)?
[snip]
RE: #6 – Rising at first, but mostly, spinning not to mention, greater turbulence. But yes, the idea would be, GW (including whatever percent is “A”) results in a larger number of more intense lows at or below 36 deg latitude. Possible corollary may be, more excursions of the jet stream into semi tropical latitudes. Seems like a possible built in negative feedback loop, right down there at the basic circulation level.
RE: #8 – plus, GW may be a misrepresentation. It could in fact be parcel-wise warming on a highly selective basis. Another note: in addition to heterogeneous tropical and semi tropical warming, I also expect higher-than-background warming in areas of higher CO2 emissions, especially during inversions. There could be 1000 PPM or greater locally, talk about reradiation! Add to that the UHI hotspot effect and you’ll have some of those “odd parcels” of warmer air over cities.
#3, I am dealing with precisely that problem at one of my current sites. We are working in an area where rainfall is highly correlated with the ENSO state in the Pacific. So while the “average” rainfall over a 30 year period is around 32 inches per year, something like 70–80% of the years have a rainfall total that is either 5+ inches above or below the average. In my case, we are trying to establish what the “typical” ecological conditions are in a creek where the flow conditions are controlled by the rainfall and by the elevation of the water table (where the level is also controlled by the rainfall). For the last 10–15 years, the rainfall has alternated between historically high and historically low annual totals, and our challenge is to determine if the creek is an intermittent stream or not. The answer is yes…and no. When there is a lot of rain, it is perennial, and when there isn’t a lot of rain, it is intermittent. So how do you decide what the “typical” conditions in the creek are when the rainfall is very rarely “typical”? With no central tendency, how do you decide what set of conditions to evaluate the stream for?
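This rainfall situation is the same pathology as the bistable example above. A hypothetical sketch (the wet-year and dry-year numbers below are invented for illustration, not the actual site data) shows how the mean can be a value that almost never occurs:

```python
import random

random.seed(1)

# Hypothetical ENSO-driven rainfall: wet years near 40", dry years near 24",
# roughly equally likely. All numbers are illustrative.
years = [random.gauss(40, 3) if random.random() < 0.5 else random.gauss(24, 3)
         for _ in range(1000)]

mean = sum(years) / len(years)
near_mean = sum(1 for r in years if abs(r - mean) < 5) / len(years)
print(f"mean rainfall: {mean:.1f} in")
print(f"fraction of years within 5 in of the mean: {near_mean:.2f}")
```

The mean lands around 32 inches, yet only a small minority of years fall within 5 inches of it; most years are firmly in the wet or dry regime.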
All,
I have found a draft of the unpublished manuscript by Sylvie Gravel,
G. L. Browning F. Caracena and H. O. Kreiss entitled
The Relative Contributions of Data Sources and Forcing Components to the Large-Scale Forecast Accuracy of an Operational Model
The manuscript contents were presented at CIRA (CSU) and later at a Canadian conference by Sylvie.
It is not the final version, but it contains the main points that I discussed earlier.
Because it is a hard copy, I will have to scan it in somehow
and then provide a link to it so that we can discuss it in detail.
Jerry
Jimy Dudhia (#442)
I was very careful how I worded the discussion about Sylvie’s results.
The results were obtained for multiple cases over an extended period of
days and used mathematical norms of the differences between the forecasts interpolated to observational locations over North America where the observational data is most dense (as opposed to incestuously comparing to model analysis at a later time). I am amused to see that you are shocked.
One of the reviewers stated that “we knew all of this”, but obviously that is not the case (this is a standard reply by someone wanting to hide something).
I also again point out the fact that a simple change in the CMC assimilation program according to “toy” theory had a sufficient impact on the forecast to surpass the accuracy of the NOAA NWS global assimilation and model package.
Why do you think that Christian Page wrote his new manuscript supporting our balancing method for mesoscale storms? The WRF model will have considerable difficulties using that balancing method because of all of the kluges at the boundaries.
As soon as I am able to scan the manuscript, you can read the results for yourself.
Have you read our discussion about gravity waves? Evidently not.
Where are the multiple cases on consecutive days for the WRF forecasts with errors computed as Sylvie did? Talk is cheap. Quantitative analysis speaks for itself.
Jerry
#7 MarkW, water supersaturation doesn’t exceed 1%, except in unusually clean air,
a very small fraction of the atmosphere for cosmic rays to do their stuff.
Has someone quantified this effect?
#12 Jerry
The technique you describe, comparing against radiosondes, is a standard one. NCEP uses this
routinely to evaluate its models. I put a link to their site at the end of this post.
Could it be that when the reviewer said “we knew all this”, he/she was referring to the
finding that adding data improved the forecast, in which case I wouldn’t be surprised either.
The findings also point to major deficiencies either in the model or initialization system
used for the control run, and the CMC should have been embarrassed by such a result. Anyway, I want
to see that paper, because it just doesn’t sound right.
I don’t know the Page method you mentioned, but I don’t see how balancing storms relates to
the boundaries unless those boundaries are far too close to the area of interest for comfort.
Usually we keep domains large enough that boundary effects are reduced in the area of interest.
http://wwwad.fsl.noaa.gov/users/loughe/projects/NCEP_verif/
Jim D’s remarkable patience is duly noted. Let’s just chill while we watch the real story here unfold.
may I suggest that bystanders who are unfamiliar with numerical analysis issues bite their tongues and remain bystanders on this thread? Go to Unthreaded.
Good suggestion. /lurk=on
#13: Henrik Svensmark, of the Danish Space Research Institute, estimated the influence of cosmic rays on recent years’ warming via a variation in cloud cover of about 3%, causing a heat flux increase of 0.8 to 1.7 W/m^2.
http://www.dsri.dk/~hsv/
James Lane (#2),
I have proposed a number of comparisons and Jimy Dudhia has avoided
every one. I ran inviscid tests of a hydrostatic and nonhydrostatic
model at fine resolutions. The results agreed with the mathematical analysis cited in a reference. Others have shown problems with two NCAR models exactly as predicted by the mathematical analysis, but Jimy will not run those cases. He refuses to run the case in our Multiscale manuscript using the WRF model when all of their smoothing is activated. This case was designed explicitly to show how to correctly forecast the development of a mesoscale storm using a limited area model and the associated problems.
Jimy will not provide the same sort of methodical comparisons
performed by Sylvie, but is critical of her results (even though a simple change in the CMC assimilation program caused the CMC surface forecast to outperform the NOAA NWP global forecast system). Jimy clearly has not kept up with the literature in his own field and does not understand the importance of the boundary conditions in the initialization process, nor the consequences of fudging the boundaries. In the WRF
documentation, there is a description of a damping of the horizontal divergence that is important to smaller scale motions and the resulting balance. Shortly an independent manuscript is due to appear in MWR by Christian Page et al. that supports all of these results in a more practical setting.
And all Jimy does is spout words. I am very familiar with this game from this group and that is the only reason that I have persisted in exposing this nonsense in a public forum (thanks to Steve M.) where everyone can judge for themselves what is happening.
Jerry
RE: #14 – “Usually we keep domains large enough that boundary effects are reduced in the area of interest.”
Please expound on this. Reduced by X in an area (more properly volume ) V. Do you know what X and V are? Sort of curious to see your characterization of what you think might be the range of magnitudes. Please advise.
RE: #14 and #20 – Please use the operational definitions from the linked site in your characterization:
http://www.du.edu/~jcalvert/tech/fluids/vortex.htm
I anxiously await your response.
Jimy Dudhia (#14),
The errors computed in Sylvie’s manuscript are relative errors, not absolute errors as cited in your link. Unless the general reader knows the magnitude of the fields at various altitudes, it is very difficult to determine what is actually happening. But with relative errors, it is trivial for the general reader to see the magnitude of the problem at each level (as you shall soon see). That is why we were able to diagnose the main contributors (forcing and data) to the forecast errors very quickly.
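The distinction between absolute and relative error can be illustrated with a toy calculation (the wind magnitudes below are invented for illustration, not taken from Sylvie’s manuscript): the same absolute error means very different things at jet level and near the surface.

```python
def relative_error(forecast, observed):
    """Relative (fractional) error of a forecast against an observation."""
    return abs(forecast - observed) / abs(observed)

# The same 2 m/s absolute error is minor against a 50 m/s jet-stream wind
# but large against a 5 m/s surface wind (illustrative magnitudes).
jet = relative_error(52.0, 50.0)
sfc = relative_error(7.0, 5.0)
print(f"jet level: {jet:.0%}   surface: {sfc:.0%}")
```

Plotted as percentages level by level, the reader needs no knowledge of the field magnitudes at each altitude to see where the forecast is going wrong.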
The lower boundary layer parameterization was the only parameterization that was crucial in the time frame indicated. It was quite interesting to watch the error in the lower boundary layer parameterization pollute the forecast as time proceeded.
Only because the winds at the lower boundary were small compared to the jet stream did the problem not have a faster impact.
I mention that Sylvie was quite surprised by the results, but at least
she was willing to run the tests and write an honest assessment. She also presented the results at CSU and at a Canadian conference so must have felt that there was substance to the manuscript. Clearly the results had an
influence on CMC thinking and they (Page, Fillion, Zwack) are moving forward.
I also mention that in Sylvie’s presentation, she showed that the CMC model produced as accurate a forecast as the ECMWF forecast during the period cited, i.e. the Canadian model is just as good as the often quoted best global model during the period cited. Another one of your bogus arguments.
If gravity waves are important to the mesoscale forecast and the
discontinuous forcings that have large relative errors are generating
incorrect gravity waves, how can the forecast be correct?
And how far must you move out the boundaries? Our analytic theory shows that gravity waves generated by mesoscale storms O(100 km) are O(1000 km). The point of a limited area model is to decrease the mesh size in a smaller area to resolve the smaller scales, e.g. a thunderstorm. If you must move out the boundaries (the new method to trim the contour plots?), what has been gained in terms of computational resources? Have you parallelized the code to make up for the waste of resources?
Jerry
Steve M (#16),
Thank you.
Jerry
FYI:
Western US 500mB T Bias and Correlation
Sorry, make that 500 hPa
Steve S. (#20),
I just read your excellent question after I had responded to Jimy
Dudhia’s message. Thank you for raising the issue and I hope my response to
Dudhia helps to answer that question. If a small scale storm or a squall line of storms intersects one of the lateral boundaries on the boundary of the forecast region, then there is a serious problem because it will not be present in the large scale data. Unless the forecast region is changed all the time to account for this problem (a pain and might not even be possible), the error in those storms at the boundary will have an impact on the forecast. Errors at the boundaries can be quantified in mathematical terms for the slowly evolving solution in time and a demonstration of their impact has been published.
Let me know if my answers have been sufficient.
Jerry
Steve S. (#24),
The RTVS group was formed at NOAA while I was there. The RUCS
modeling group complained bitterly about its formation because
it might show the exact flaws in their models that I have been discussing.
Might I say that such a group should be formed outside of the influence
and funding of modeling groups, i.e. one that cannot be influenced by those groups.
Sylvie’s tests were run at all levels and relative errors are the right
measure to show flaws.
Jerry
Steve S. (#24),
Is there a description on the RTVS site how the errors are computed?
There should be a complete description somewhere on the site discussing
that.
Why do they not plot the relative errors at all levels as a function
of time as Sylvie did? The most important variable, the vertical component of vorticity, is not even mentioned.
And it should be possible to look at a particular variable at a given level
and compare it to observational variables and other models at the same time.
Then it would be a lot easier to understand the problems.
Jerry
Steve S (#24),
Well I tried the site and as expected it controls the results
in favor of the models. It does not provide a complete
description of how the errors are computed. It only allows limited comparison with obs, e.g. conventional upper air data (radiosondes at jet
stream level?) where the
parameterizations are not important exactly as Sylvie showed. Why not with all levels of radiosondes as Sylvie did so one can watch the errors propagate? I think you can see the influence that Sylvie’s visit and manuscript had on NOAA, but the site is clearly biased to show the
best results that Sylvie obtained. And I tried to compare two of the models
(GFS and RUC) and there were problems. Is this an honest comparison
or one to support the continued funding of these groups?
Jerry
A bit off topic, but related to boundary values. Climate is supposed to be a boundary value problem and weather an initial value problem. I’ve never understood the distinction for a fluid system like the ocean or the atmosphere. Boundary value problems always mean to me solutions that are time invariant, yet all but the simplest and most viscous fluids don’t show that. I’m slightly acquainted with Taylor-Couette flow, the flow of a fluid between two rotating cylinders. At very low rotation speeds the patterns are “static.” But turn up the angular velocity some and the flow patterns become very complicated and are history dependent, i.e., depend on initial conditions as well as the boundary conditions of rotation speed and geometry. Why shouldn’t the earth’s fluid fields behave in the same way? In some sense the ocean and atmospheric currents are Couette flow on a very rough sphere.
Interesting, isn’t it? Did you try ground level?
May 1998, Sunnyvale-Los Altos CA, F2 and F1 – Food for thought?
A meaty case study. Approaching the current limits of understanding.
Paul (#30),
In mathematical lingo, the initial value problem (IVP) for a time dependent PDE means the PDE is solved over the entire spatial domain, e.g. in 2D this means the time dependent PDE is solved over the entire x, y plane using only initial conditions in time. On the other hand, the initial-boundary value problem (IBVP) means that the PDE is solved over a subset of the plane, e.g. the infinite strip problem that is bounded between two finite values of x, and requires not only initial conditions, but also boundary conditions at one or more of the boundaries in x. Let me know if this
helps.
Jerry
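Jerry’s IVP/IBVP distinction can be made concrete with a minimal upwind discretization of the 1-D advection equation on a strip (a textbook sketch with illustrative grid sizes, not any of the operational models discussed here): besides the initial data, the scheme needs an inflow value at the boundary on every step.

```python
# Solve u_t + c u_x = 0 (c > 0) on the strip 0 <= x <= 1 as an IBVP:
# initial data alone are not enough; an inflow boundary value at x = 0
# must be supplied at every time step.
c, nx, nt = 1.0, 50, 50
dx = 1.0 / nx
dt = 0.5 * dx / c                     # CFL-stable time step

u = [0.0] * (nx + 1)                  # initial condition: u(x, 0) = 0

def inflow(t):                        # boundary condition at x = 0
    return 1.0

for n in range(nt):
    new = u[:]
    new[0] = inflow((n + 1) * dt)     # the extra datum the IBVP requires
    for i in range(1, nx + 1):        # first-order upwind update
        new[i] = u[i] - c * dt / dx * (u[i] - u[i - 1])
    u = new

# The inflow signal has propagated a distance c * nt * dt = 0.5 into the strip.
print(f"u near x=0.25: {u[nx // 4]:.2f}   u near x=0.75: {u[3 * nx // 4]:.2f}")
```

Behind the advancing front the solution is set by the boundary datum; ahead of it, still by the initial data. Drop the boundary condition and the problem on the strip is simply not well posed.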
Steve S.(#31),
After seeing how the site is biased and undocumented, I did not go any further. But I will take a second look at ground level just to see what they did there. Note that Sylvie’s adjustment to the data assimilation program provided an increase over the NOAA model at ground level.
That has decreased over time as the word spread.
Jerry
Steve S (#32),
I note that the tornado was seen by radar and not predicted by any model. 🙂
Jerry
Steve s. (#31)
I tried to compare the two (GFS and RUC ) at the surface. Same problem.
Are we getting suspicious yet?
Jerry
RE: #35 – Indeed a classic.
RE: #36 – Yes.
Re # 19.
I still don’t get it. Why are you insisting on inviscid simulations? Neglecting eddy diffusion doesn’t seem physically correct to me. On this point I agree with Jim I think. Neglecting eddydiffusion in a largeeddy simulation leads to bad predictions, this has been reported many times. Parameterization of eddy diffusion is thus essential.
As it happens, I will be on travel this week at the Norman, OK, Storm Prediction Center
where they have a annual severe storm model and forecast evaluation. They have been
using WRF at high resolution for several years now, and this year several forecasts
are being run at 2–4 km grid size for the forecasters to look at. These are very useful
to modelers and forecasters. The link below is the one I gave on the first thread.
This is the NCAR contribution to the WRF forecasts, but the University of Oklahoma
and NCEP also provide forecasts for this program. Obviously this model would not have
got this far (i.e. exposing forecasters to the outputs) if it showed no promise
in the last few years, and each year we can do better because of increasing computer
power and improving physics for these scales. The Kansas tornado was definitely
given as a probable occurrence by these forecasts, which showed very dangerous conditions,
that were optimal for tornadoes in that region, and gave almost 24 hours warning of such
an eventuality. You can go to the SURFACE menu and click Max Reflectivity in a Column, then View Forecast
to get an animation. This Web page is a little tricky for the uninitiated, being for scientists
rather than the public.
A field to look at under the Severe Storm menu is the Vorticity Generation
Potential, which is a good indicator of tornado potential, when greater than about 0.3.
http://www.wrfmodel.org/plots/realtime_3kmconv.php
(note I may be not posting again for about a week as I will be on travel)
I am still confused as to what Jerry wants to show with a WRF model run.
Clearly the model is doing realistic forecasts, and it can do any idealized case too
without any tuning. What else is there to show? He seems to be casting very vague and dubious
aspersions without showing a convincing argument against the dynamics or numerical methods.
Also, if people here don’t even trust NOAA to evaluate their forecasts properly, what hope
is there? Don’t you think they have any oversight given that they provide a public
service? Why should the American weather service be any less honest than the Canadians
or Europeans?
Regarding boundaries, as you see in the link posted in my previous message, the
domain goes from the Rockies west of Utah to the east coast, but the area of interest is
Tornado Alley. Here the boundaries are sufficiently far that systems developing to the west
that might generate thunderstorms in the central US, develop mostly within the
domain rather than being fed in through the boundaries, as they would have been
if the west boundary was too far east. Boundaries are necessarily of low space and
time resolution, so we need the model to be able to develop high-res features after
the air has come into the domain.
Finally, I am confused by all the issues that are conflated around Sylvie Gravel’s paper.
I now believe, having seen what the BDT idea is about, that this is about the improvement of a poor initialization technique
that could easily have degraded a global model. Data assimilation, numerical methods, and physics are side
issues when the dynamics is poorly initialized, because you get basically a very noisy forecast
that would even be visibly rough looking at standard fields like 500 mb height. I suspect
this was the case, and the gain came from reducing the noise level.
#36 what problems?
link
Jimy Dudhia,
When you run some verification tests on the WRF forecasts exactly
like Sylvie did, then I will discuss the issues with you. Otherwise from now on I will ignore your comments because you are just wasting my time
with verbiage.
The great web site that you suggested is not documented and does not compute relative errors at all levels. Surprise, surprise. The humorous thing is that the group that developed that web site was formed when I was at NOAA and they still haven’t computed the right items (for the reasons I have indicated).
And you never did address the issue of gravity waves generated by forcings that are in error. If they are important as you stated (the reason that the
unstable numerical method you use is supposed to be so great), then what happens when the model generates the wrong gravity waves through forcing errors? Oops. And those waves have been shown to have scales on the order
of 1000 km, the size of the middle portion of the US. Small problem.
Talk to you when you provide some quantitative results. Meanwhile I need to
find out how to scan in Sylvie’s manuscript for discussion.
Jerry
gb (#38),
The viscosity in the atmosphere is much smaller than that used in
any NWP or climate model. The models (especially climate models) are closer to approximations of a heat equation than to the approximation of a hyperbolic system with small dissipation (the continuum system which is
assumed to be the correct system for the atmosphere). And this problem is
aggravated by the use of inappropriate approximations of the continuum
system, discontinuous forcing, ad hoc boundary treatments (in both the continuum and discrete cases), unstable numerical methods, errors in observations (even the largest scales are marginally observed), inappropriate initialization methods, …
Except for the very smallest (turbulent) scales, the continuum system to all intents and purposes is inviscid. Note how the theory of hyperbolic systems (see Bounded Derivative Theory references) has led to a substantial improvement in the global forecast (Sylvie’s result), to the understanding of gravity waves generated by smaller scale storms, to the balances that are necessary for smaller scale motions in the midlatitudes and near the equator for slowly evolving solutions in time, and to the discovery of exponential growth near jet streams that will destroy all numerical accuracy in the neighborhood of those features.
All of these analytical results have been demonstrated in careful numerical examples without the use of any viscous terms and have been shown to have beneficial impacts in operational forecast models.
Can a numerical model produce such analytical understanding that is so valuable? Not when the forcings and boundary conditions are tuned to produce results that obscure the dynamics.
Heinz and I clearly know how to analyze continuum equations with dissipation, and to develop stable numerical methods and stable boundary conditions for those systems. Consider, for example, the smallest-scale proofs
for the viscous NS equations by Henshaw, Kreiss, and Reyna and subsequent
numerical examples of the accuracy of the estimates in those proofs. The theory and numerical examples show that inappropriate dissipation has a considerable impact on a solution. I have shown a simple mathematical example that any unjustified large dissipation can be overcome by unphysical forcing to produce any solution or spectrum that one desires. That does not mean the result is physical. The number of parameters available for tuning the forcing in NWP models has been discussed.
When Sylvie ran careful tests of the operational model, the results
were as expected from mathematical theory. They had just been so intermingled that the lay person would never have any hope of
fully understanding what was happening. I believe that is why the manuscript was rejected, but you can judge for yourself once I
am able to scan it. It is short and to the point. The plots
speak for themselves.
Jerry
Jimy Dudhia (#41)
If you use your example and switch to 24 hours, the RUC model data is not available. Then switch to vector wind and the SRF data for the
NAM model (whatever that is) is worse than the global model.
As I have said many times, the models can be tuned to produce results, but how physical they are is open to question.
And the lay person can interpret relative errors (percentage error)
like Sylvie used. Then things can be understood.
I wait for your WRF relative error forecast plots (that will never appear).
Jerry
All,
Fedex Kinkos can scan the draft manuscript and will produce a PDF
version on a computer. I hope that will suffice. I will try it tomorrow.
Any other suggestions?
Jerry
Re # 43:
Jerry, the point is this: Assume the motions are governed by the NavierStokes equations (so we don’t have to worry about density fluctuations). In the atmosphere a broad range of spatial and temporal scales are present (be it turbulence, gravity waves etc.). Atmospheric models are never able to resolve the whole spectrum of scales (as we agree on). This implies that atmospheric models do not directly solve the NavierStokes (as in DNS) but in fact the filtered NavierStokes equations (as in LES). To see what is really solved we have to take the NavierStokes equations (or better the nonhydrostatic equations) and apply a spatial filter. So let us assume we have a spatial filter (I will denote it by [.]) which doesn’t touch the large scales (larger than the grid size) but takes an average over the small unresolved scales (the so called subgrid scales). The result of the filtering are the NavierStokes equations again in terms of the filtered velocity and pressure [u_i] and [P] plus an additional term: the partial derivative of [u_i u_j] – [u_i][u_j] which is called the subgrid stress tensor (sorry, I don’t know how to use Latex here). The filter must satisfy some conditions. All this, including the filtering, can be found in any recent text book about turbulence. The point is that the subgrid stress tensor is not zero but gives a contribution and has a clear physical meaning. It is the effect of the unresolved scales on the resolved scales. Simply putting this term to zero is not correct. It has to be taken into account, i.e. parameterized, in an atmospheric model so far I can judge. Whether or not it is parameterized in an accurate way I don’t know but it should be there in a model. Perhaps you could discuss how to deal with this term or the other unresolved subgrid motions. I can imagine that a jet in the atmosphere generates a lot of shear and commonly this produces a lot of turbulence and thus a large local subgrid contribution. 
Having quite high viscosities near a jet might be very realistic and physically correct.
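For readers who want the equations gb describes in symbols, one standard way to write the filtered momentum equation, using his bracket notation for the filter (a textbook sketch with constant density and molecular viscosity retained), is:

```latex
\partial_t [u_i] + \partial_j\!\big([u_i][u_j]\big)
  = -\frac{1}{\rho}\,\partial_i [P]
    + \nu\,\partial_j \partial_j [u_i]
    - \partial_j \tau_{ij},
\qquad
\tau_{ij} = [u_i u_j] - [u_i][u_j].
```

The last term, the divergence of the subgrid stress tensor \(\tau_{ij}\), is exactly the contribution of the unresolved scales that gb argues must be parameterized rather than set to zero.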
Jerry (#33). Thanks, but it sounds like you are describing something like finding the solution to the heat equation if I want to cool a block of warm material. I can vary the time evolution by modulating the boundary conditions, but the final state will always be the same. I don’t think that’s true for a fluid system of even modest complexity. You get very different final states depending on the past history.
All,
Kinkos has scanned the draft manuscript and converted it to pdf format on a CD (not perfectly, but it is possible to read the text and view the plots). I have included one extra plot to show the improvement of the Canadian global model compared to the NOAA global model at 36 hours. This is not during the period when the work was completed (the improvement was even more dramatic then and I also have that plot if interested), but it is sufficient to show how a bit of theory has a much larger impact than all of the other ad hoc tunings of the forcings in the models. Now I just have to find the best way to post the pdf file on the web so it can be accessed and discussed by those interested.
Jerry
Paul (#47),
A hyperbolic system for the IVP never has a steady state, i.e. any waves will just keep propagating forever. However, it can have a steady state for the IBVP (depending on the boundary conditions). Parabolic problems (heat equation) damp out the initial conditions for the IVP and the solution will die out very quickly. As you point out, the solution of the parabolic problem for the IBVP depends on the boundary conditions. Here I am assuming homogeneous problems, i.e. without any forcing terms. Once forcing terms are added, one can obtain any solution one wants, as I have shown.
Jerry
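The contrast Jerry describes can be illustrated numerically. Below is a sketch (not from the manuscript; the domain, wavenumber, and coefficients are arbitrary illustrative choices) showing that a periodic advection (hyperbolic) solution keeps propagating with undiminished energy, while the heat equation (parabolic) damps the same initial data to essentially nothing:

```python
import numpy as np

N, L = 128, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi  # integer angular wavenumbers
u0 = np.sin(3 * x)
T = 5.0
u0_hat = np.fft.fft(u0)

# Hyperbolic (advection u_t + c u_x = 0): each Fourier mode only changes phase
c = 1.0
u_adv = np.fft.ifft(u0_hat * np.exp(-1j * k * c * T)).real

# Parabolic (heat u_t = nu u_xx): each mode decays like exp(-nu k^2 t)
nu = 1.0
u_heat = np.fft.ifft(u0_hat * np.exp(-nu * k**2 * T)).real

print(np.linalg.norm(u_adv) / np.linalg.norm(u0))   # ~1: wave energy preserved
print(np.linalg.norm(u_heat) / np.linalg.norm(u0))  # ~0: solution has died out
```

The hyperbolic solution at time T is just the initial profile translated, while the parabolic one has decayed by a factor of exp(-nu k^2 T) per mode, which is the "dies out very quickly" behavior described above.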
gb (#46),
It has been shown analytically and demonstrated with convergent numerical solutions that incompressible NS turbulence models that do not properly resolve the scales of the solution will not produce the correct solution. Any subgrid-scale method can be checked against the correct solution in 2D. I know about ad hoc dissipation and am not interested in that topic. Please do not repeat the same argument until you have read the references by Kreiss and his students, e.g. in the SIAM journal Multiscale Modeling and Simulation. Then I am willing to discuss the issue in more detail. Thank you.
Jerry
Re #48 Jerry, one possibility might be to place the file on esnips (it’s a free site). It would need to go into a public folder. I believe that esnips allows up to 5GB of free space and can accommodate many forms of information. Just Google esnips and follow the instructions.
David (#51),
Will give it a shot later this evening. I look forward to your comments and a lively discussion with all interested parties on this thread.
Jerry
David (#51),
I managed to create a folder and upload the pdf file, but am having problems verifying that the link works. Will wait until tomorrow to see if John A wants to load the file on his server or if he prefers not to do that.
Jerry
David Smith (#51),
Can a link be formed to the pdf file in a folder on esnips, or must each reader download the manuscript from that site?
How does that site compare to the google document site?
I have also asked Steve if he would like to add the copy on his web server. As soon as there is a solution acceptable to everyone, I can post the manuscript.
Jerry
David Smith (#51),
Well I tried google docs and they won’t let you upload a pdf file, but will let you save a file in pdf format. I guess that is the advantage of esnips?
I am still waiting to hear from John A or Steve M.
Jerry
Jerry, what user name did you sign on to esnips with ? And did you use any tags on your upload ?
RE #54 Jerry, I think each reader must download the file, similar to what works with this Excel file I placed on Esnips ( link ). It’s not elegant but it works. I think it should also work for a pdf.
All,
Steve M. has graciously consented to post the manuscript on his site.
I sent him a copy this evening and hopefully it is not too large for the email system. Once it is posted, I will add a bit of background and answer any and all questions. I believe the manuscript will settle a number of issues and look forward to hearing your comments.
Jerry
David Smith (#57),
Google had a bit more help on their site, but with the limitations I indicated. I think that esnips will work too, but the site is not fully developed yet. If Steve M. is able to post it, then hopefully it will be easier for everyone to access. Thanks so much for the suggestion!
Jerry
All,
Steve M. has kindly posted a copy of the draft of the manuscript at
For those that want to read the manuscript right away, you can download a copy from Steve’s site. Meanwhile I will begin to comment on a number of issues related to the manuscript.
Jerry
All,
Well I still don’t have the link tag working correctly.
The link is
All,
The url is
Click to access browning.pdf
Jerry
For those that want to peruse the manuscript,
First be fully aware that this was not the final version, but an earlier draft. Although I have versions that are closer to the final draft, they were marked up with corrections and suggestions for Sylvie. However, this draft contains the essential information and I can add any other information from the final version if needed.
First, if you go to the last page of the pdf file (#27), you will see a graph that is currently on the CMC website that shows that the CMC global modeling system (assimilation and model) is performing considerably better than the NOAA NCEP global modeling system (now called the GFS). The dramatic improvement of the CMC system began in 2002 while Sylvie was visiting me at the NOAA Forecast Systems Lab (since reorganized). Although the difference plots on the current CMC web site do not go back that far, I have a copy of the plots that showed that the improvement started when stated. That improvement came from the change in the data assimilation system based on the improvement of the system using ideas from the Bounded Derivative Theory.
In the body of the manuscript, we studied which forcing terms and which data provided the largest contributions to the accuracy of the short-term, large-scale forecast. To minimize the amount of effort for Sylvie, we tried to minimize the number of changes to the codes. In particular, the relative errors that are displayed are not the standard l_2 relative errors, but are sufficiently close to be translated into percentage errors. One just needs to multiply the x-axis labels on the difference graphs in the vertical direction by 100 to obtain relative percentage errors at each vertical level.
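For readers unfamiliar with the measure being discussed, the relative l_2 error is just ||forecast − obs||_2 / ||obs||_2, and multiplying by 100 gives a percentage. A minimal sketch (the wind values below are invented for illustration, not taken from the manuscript):

```python
import numpy as np

def relative_l2_error(forecast, obs):
    """Relative l_2 error ||f - o||_2 / ||o||_2 (dimensionless)."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(obs, dtype=float)
    return np.linalg.norm(f - o) / np.linalg.norm(o)

# Hypothetical wind speeds (m/s) at one vertical level
obs = np.array([10.0, 12.0, 15.0, 20.0])
fcst = np.array([11.0, 11.5, 16.0, 19.0])
print(f"{100 * relative_l2_error(fcst, obs):.1f}%")  # relative percentage error
```

Unlike a signed bias, this measure is a genuine norm-based quantity: it is zero only when forecast and observations agree exactly, and it is directly interpretable as a percentage of the observed field's magnitude.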
As can be seen in the manuscript, all physical forcings could be turned off in the first 24–36 hours with little change to multiple large-scale forecasts. The largest contributor to the difference between the control and no-forcing cases was the boundary layer parameterization. And although the errors between the two curves at the lower levels look substantial, one must recall that the winds are much smaller at the lower levels, i.e. the majority of the kinetic energy is near the jet, where both cases were still quite similar. Out of curiosity, we had Sylvie simplify that parameterization and still obtained results quite similar to the control run for the period of time indicated.
We then proceeded to try various data sources based on the theory from our periodic updating manuscript. As in that theory, one only needs the wind data from the radiosondes and the aircraft data to provide a sufficient analysis. In the manuscript, Sylvie stated that it was also possible to use satellite data. But that statement is not quite correct. Because of pressure from someone at her organization, we allowed that statement to remain in the manuscript. But we also had Sylvie run the satellite data without the radiosondes, and the results were quite different.
Although I have a number of other comments, I will open the floor to discussion at this point.
Jerry
All,
Strawman summary of results:
a) For short-term, large-scale forecasts a very simple dynamical model suffices.
b) The radiosonde and aircraft wind data can be used in an analysis that involves only simple interpolation and provides adequate results compared with the more complicated assimilation schemes in current use.
c) Based on the above two facts, a supercomputer is not required to obtain a comparable forecast for the periods indicated.
d) Forecasts for longer time periods depend on parameterizations that are obtained from extensive (and expensive) trial and error experiments and may or may not be physical. For example, relative errors in the boundary layer parameterization grow significantly as the forecast proceeds. This particular problem and others are masked by the periodic updating of the NWP model with new observational data every 6–12 hours.
e) The large-scale forecasts can be used to provide boundary conditions for limited-area models with the following caveats: that the smaller-scale features are not sensitive to errors in the large-scale forecast (unlikely); that gravity waves in the large-scale forecast do not have an impact on the small-scale features; and that gravity waves produced by small-scale features in the limited-area model, if important in triggering other storms, are generated correctly (unlikely) and interact with the boundary data properly (unlikely).
f) This periodic updating is not possible in climate models, which use very crude parameterizations with O(1) relative errors.
Comments on any of these strawman conclusions are welcome.
Jerry
Steve Sadlow (#37)
I think you can now clearly see why the errors computed in the site quoted by Jimy Dudhia are extremely misleading and how the particular comparisons were selectively chosen.
I will wait for quantitative results similar to Sylvie’s from the WRF model before any conclusions can be reached with respect to that model.
Jerry
bender,
Thank you for being patient and waiting until Sylvie’s manuscript was made available for everyone. Hopefully this manuscript has made a number of issues clearer for everyone. I also hope that everyone can see the benefit of having quantitative results available to resolve issues without the need for excessive verbiage. Steve M.’s analysis of the MBH results was sufficient to hold up to criticism by the hockey team and the NAS panel for exactly the same reason. Steve M. is well aware of the backlash that comes from pointing out questionable claims. I admire his tenacity, sense of humor (which I lack), and his brilliance.
Jerry
I expected a number of questions about the manuscript and strawman assertions. Is the lack of response because the manuscript results are not clear, or are the quantitative results definitive?
Jerry
re: #67
Well, I read most of the paper and skimmed the rest, but I’m not experienced in modeling so I don’t have anything particularly to contribute. You may have to wait till Jim D gets back to get much of a response. Though you never know around here when someone will speak up.
RE: #67 – I’ll probably have a couple of questions after I properly read it in depth (gave it a quick go-through yesterday). No real questions about anything conceptual, more around the nitty-gritty of the calculus. Stay tuned.
Jerry,
I am now back, and will take a look at the manuscript within the next day, and get back to you on that soon.
I am not sure why you don’t trust the verification page I posted, so I will see if I understand what it is you are asking for from the paper. (The RUC forecast only runs 12 hours, so that is why you can’t get longer verifications for that model.)
As we now focus on storm scale, verification tends to be more against simulated radar echoes than global-scale RMS, and I am not in the global verification field. What I showed from NCAR is a forecast page. Verification pages are usually set up by groups independent of the modelers.
I also want to extend the comments of gb on subgrid parameterizations. Atmospheric models always have subgrid turbulence schemes. These usually activate only in turbulent areas such as the daytime boundary layer, or in highly sheared or overturning regions associated with mountain waves or convection. Methods of handling these regions range from a turbulent kinetic energy prediction equation to simpler diffusion coefficients based on local shear and stability. In addition, many models have numerical diffusion schemes to keep energy from accumulating at poorly resolved scales near the grid scale. These can be seen as a smoothing of the finer scales. WRF has numerical methods that allow us to minimize the need for this particular type of diffusion.
OK, finally, comments on Gravel et al. (#62).
Contrary to what I expected, this is a purely modeling paper: no mathematics, and nothing about initialization, gravity waves, smoothing, boundary conditions, or other discussion points on this thread.
This paper makes the claim that much of the accuracy of a global model is preserved even when taking away a lot of the physics and some of the data assimilation. The verification is carefully selected to show this, and it left me wondering what the point of the paper really was. Since it has flaws and gaps in its reasoning, it is far from a complete or convincing work. Let me enumerate some specific concerns.
1. The paper takes out convective and condensational schemes, and thus latent heating. It therefore eliminates rainfall prediction, making this something other than a weather forecast model.
2. The radiation is taken out, therefore eliminating the diurnal cycle, and further degrading its use as a forecast model for any kind of surface temperature behavior.
3. The verification is restricted to the L2 norm, which is a way of ignoring bias errors. Bias errors are likely to be very large without physics, especially as the forecast is extended, and I would like to have seen the bias errors presented.
4. The verification is only 2 days, when global models are run at least 5 days. I suspect the errors in the simplified model got too large to present beyond 2 days.
5. The verification is only at 24 and 48 hours. Showing 12 and 36 hours would have shown what the lack of a diurnal cycle does, especially at low levels. It probably would have looked bad, so it wasn’t presented. I would like to have seen it.
6. This is a winter case verified only over North America. The model would have failed somewhat more dramatically in spring/summer or in tropical regions where latent heat drives major components of the flow, so the results are far from general.
7. The data assimilation test of removing satellite data didn’t seem to show anything interesting, and I didn’t see the point of that experiment.
8. Though references were made to more mathematical papers by Browning and Kreiss, a direct link was never presented, and the connection was tenuous at best.
The paper seemed to be about how much you could degrade a weather model into a dry dynamical general circulation model and still dress up the statistics to minimize the appearance of degradation. As I say, I am not sure of the purpose of all this. Just my opinion.
Jimy Dudhia (#71)
Gee, it seems like we struck a nerve. Sylvie’s manuscript agrees with all of the “toy” theory that you have never read and, I doubt, can even comprehend. Her manuscript is more honest than anything you have said or done.
While you were gone I looked up your publications. Without a model to tune, you would be in bad shape. You have never written an analytic (model-independent) manuscript in your entire career. That is why you are so adamant in defending your heavily dissipated and smoothed models. This approach has become known as soft science and is a disgrace to the great theoreticians of the past.
I await an honest assessment of the errors in your forecasts. I surmise that it will never happen.
Have you called Roger Pielke Sr. yet?
Jerry
Patience, Dr Browning. I just downloaded the manuscript.
The discussion appears to be about weather forecasts now. However, the topic here should be climate modelling and climate projections. Calculating L2 norms or related quantities perhaps makes sense for short-term weather forecasts, but is, I think, irrelevant for long-term climate simulations because of the limited predictability of the dynamics.
gb (#74),
This manuscript was posted because of the claims by Jimy Dudhia that NWS (NCEP) tests the global forecasts against radiosondes. He posted a site that only checks against two levels (surface and upper air, which the manuscript shows were selectively chosen), but has no documentation. Poorly done and totally misleading, exactly as expected.
Sylvie’s manuscript discusses in detail how the errors are computed (relative l_2 norms are a standard in numerical analysis) at every level, and how the forcings have little bearing on the accuracy of the forecast for 24–48 hours. If that is the case, then the forcings are clearly tuned in an attempt to provide improved forecasts at longer time periods by a very expensive trial and error process. This was demonstrated in one parameterization case, namely the boundary layer parameterization.
The last page also shows that the CMC global assimilation/model is outperforming the NCEP global assimilation/ model near the surface beginning in 2002 because of a change in the assimilation system, something Jimy Dudhia refuses to believe.
I am tired of dealing with this group at NCAR and have had similar nonsensical reviews from them for many years. If you have followed this blog, then you know that they have rejected manuscripts that went on to receive awards. I have no respect for these people – most of their manuscripts are based on tuning a model for a specific case.
How this works can be seen in the number of publications NCAR produced based on Anthes’ ill-posed MM4 model. NCAR claims many publications, but many are of this exact type, i.e. based on model runs that have never been checked.
So it is no surprise that Jimy Dudhia will not run any of the “toy” problems or verify any of his forecasts.
And if the forcings are not correct over a short time period, the climate models are pure nonsense, as the parameterizations there are even cruder and the viscosity even larger (and there can be no updating). A simple error analysis of a system with the incorrect viscosity shows that this is the case.
This thread was started because the hydrostatic system (used in climate models and large-scale NWP models) is ill posed for both the IVP and the IBVP. This has been shown by theory and demonstrated using convergent numerical solutions on this site. And nonhydrostatic models have a problem near a jet that has been shown independently using NCAR’s own models. Enough nonsense.
There has been a terrible waste of taxpayers funds on these models with little or no gain. More has come from theory than any of these models.
Here I will provide a specific example. Hundreds of man-hours and considerable computer resources have been spent on the so-called nonlinear normal mode initialization (NNMI) procedure. When the Bounded Derivative Theory (BDT) was introduced, the statement was made by an NCAR Senior Scientist that he could see no advantage to the BDT. Well, one no longer hears about the NNMI procedure, because it had several basic flaws: it could not handle lateral boundaries, smaller-scale heating in the midlatitudes, or any scales near the equator. The BDT has solved all of these problems and is the most efficient method to do so because it uses fast elliptic solvers based on numerical analysis. And the theory has been applied in oceanography, plasma physics, the NS equations, and other areas of applied mathematics.
Jerry
Jerry,
I gave my opinion, and would be glad if someone else in atmospheric modeling (outside NCAR, if you want) took a look at the paper, and at my opinion, and checked for biases. I only stated well-known facts, and noticed that none of them were disputed. If I were reviewing that paper, I would have asked for those additional plots, and the main argument of the paper would have crumbled by itself. How can a claim be made that a model with neither rainfall nor a diurnal cycle is almost as good as one with them? It is mind-boggling to people in NWP (and, yes, I am firmly and unapologetically in NWP), and it is clear why an NWP-focused journal rejected it.
The data assimilation part of the paper was basically data-impact tests with nothing new proposed. I don’t see the connection with the CMC vs. US scores attached at the end, or how you claim credit for them. Surely the Canadian model uses physics to get these scores, thus disproving the point of the paper.
For gb, the connection to climate here is the physics. Jerry seems to be saying physics is not needed, I think, but I am not sure.
I am not sure why you keep referring me to Prof. Pielke Sr. I only raised his name as an example of someone involved with the RAMS model, which uses the very same NWP techniques that you seem to have rejected, and it would be more appropriate for you to settle your differences with him than with me.
Jimy Dudhia.
The review you wrote is a joke and does not even accurately portray the draft of the manuscript. And I sure as heck wouldn’t trust anyone else in the fine atmospheric modeling area, especially those that don’t document their web site and selectively choose which observations one is allowed to compare against. The increase in the errors in all of the numerical models is still shown on the CMC website for those that want to look. What is not shown are the relative errors, which are already bad at 48 hours and terrible at 120 hours. Why not have NCEP show the relative errors at all levels so that this can be settled? Why does the RUC model only run 12 hours before updating, with no forecast beyond then (I know the answer; do you)?
Please don’t bother me with more of your same old verbiage. You are not a theoretician, just a tuner.
When you provide some relative errors for any of the runs of your model,
then we can talk.
I will now go into the history of this manuscript in more detail in the next comment for the general reader.
Jerry
After many of our theoretical manuscripts on the BDT had been rejected by reviews quite similar to those above (but eventually published because the mathematics could not be overcome), and with claims that the global models and assimilation systems were doing so well, I had begun to write a global spectral model based on the multiscale system to determine how well an unforced, multiscale model would compare to the current hydrostatic spectral models. The model was already producing quite reasonable results (in particular, fronts formed by the convergence of characteristics) when Sylvie came to visit.
She saw some of the results and appeared quite concerned. Her visit was intended to learn more about the BDT, so we started down the path of the manuscript to determine what forcings and data should be initially added to a new model based on the BDT. There was obviously considerable concern at her organization, but Sylvie made the runs (she also met Heinz while here). At the end of the exercise, she presented the results at CSU (announcement available) and at a conference in Canada. Can someone explain to me why she would do that if the manuscript was incorrect, or why her organization would pay for her meeting expenses? Or why Sylvie participated in Christian Page’s new manuscript (available in draft on the American Meteorological Society web site under PTA, and to appear shortly)? Note that Page’s manuscript was not given to an Editor at NCAR but to one at NASA, and was accepted. Feel free to ask CMC or Sylvie if the manuscript is correct as it stands. If not, she spent over a year running extensive tests and using considerable computing resources, leading everyone on, including her bosses. I don’t think that Sylvie would do that. In fact, Sylvie’s significant other is Ted Shepard, the Editor of JAS. Clearly he knew exactly what we were doing and never disagreed with any of the findings. Is he also dishonest? I think we all know where that label lies.
I now plan to provide a summary of all of the current myths and problems in NWP; many of them have application to climate models.
Jerry
Re #79, Gerald Browning
Yes, please. Also, a list of acronyms would be very helpful.
Re: #78
WRT to Pagé’s paper, do you mean Diagnosing Summertime Mesoscale Vertical Motion: Implications for Atmospheric Data Assimilation (DL pdf) shown at Monthly Weather Review?
Re: #80
My best guess for those not in the acronyms list in the left-hand sidebar:
BDT – (unknown)
CMC – Canadian Meteorological Centre
CSU – Colorado State University
JAS – Journal of the Atmospheric Sciences
NCAR – National Center for Atmospheric Research
PTA – Papers to Appear
Re #81, thanks. BDT is defined in #75 as Bounded Derivative Theory. Any idea on NWP ?
I think NWP means “numerical weather prediction”. There is a fairly good summary on wiki here.
I think it has been shown (oh, dear, here we go) via social network diagrams that NCAR and The Hockey Team have massive overlap.
John (#81),
Yes. Christian has posted a copy on the PTA list so everyone can obtain a copy there.
All of the acronyms you have defined are correct and BDT and NWP as defined are also correct.
Jerry
Steve S. (#84),
Steve M. has had considerable “interaction” with a number of the “scientists” from NCAR. His experiences have essentially been the same as mine. IMHO more about unverified modeling results than great science. And lots of fine verbiage with no substance or content.
Jerry
All,
Note that in Sylvie’s manuscript the model physics has little or no impact for the first 36 hours, especially near the jet stream. Also note that the error curves on the last page are for 36 hours. Thus there is a direct link between the improvement shown at the beginning of 2002 and a change in the assimilation system. The obvious question is why the sudden improvement at the beginning of 2002, and why is NCEP so far behind?
Jerry
Jerry,
Please read a book on mesoscale modeling (Cotton and Pielke at CSU have both written them), and then explain to me what aspects of this kind of modeling you don’t like. Do you think they live by tuning the physics and inventing forcing terms? Please give your opinion on their RAMS model, which shares a lot of the same basics with the WRF and MM5 models that you have been criticizing. I have a lot of respect for these two scientists, whose life’s work has been in developing the atmospheric modeling field to where it is today.
Why do you keep saying the page I posted in #14 doesn’t verify other levels when it does? (Clue: hit the Level menu.) What is wrong with verifying against soundings as it does?
As for the RUC model, it only runs 12 hours because the purpose of that model is to provide hourly analyses in real time, and its scope is short-range to avoid overlap with the scope of the NAM, which runs out to several days.
Thanks for the history of the paper, but I remain skeptical (I am allowed to be skeptical here, aren’t I?), and all my questions stand.
Jimy Dudhia (#88),
Well, why don’t we change the subject again (your typical ploy)? Has your “review” of Sylvie’s manuscript suddenly fallen apart? Has NCEP RTVS now computed the relative l_2 errors and separated out the different forcings and data (as Sylvie did very carefully) so everyone can see if Sylvie’s results also hold for the NCEP GFS forecast system (which is doing worse than the CMC system)? Let us complete this little discussion and experiment before continuing, since you made a big issue of it. Then we can deal with the WRF model using similar quantitative comparisons.
Thank you.
Jerry
Jimy Dudhia (#88),
Do you believe the curves on the CMC website that compare the errors in the various international models at 48 and 120 hours for the wind data? (Figure 3d, with the definition of the measure given in the description of the plot):
How does the root mean square vector wind error compare with the l_2 error of the difference of the forecast and observations of the vector wind?
Jerry
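One way to make the question concrete: the root mean square vector wind error and the l_2 norm of the forecast-minus-observation vector wind differ only by a factor of sqrt(N), so they carry the same information, and neither is a percentage until it is normalized by the magnitude of the observed field. A sketch with synthetic data (all numbers invented):

```python
import numpy as np

# Synthetic forecast/observed wind components (u, v) at N stations
rng = np.random.default_rng(0)
obs = rng.normal(20.0, 5.0, size=(50, 2))        # observed (u, v) in m/s
fcst = obs + rng.normal(0.0, 2.0, size=(50, 2))  # forecast with random error

err = fcst - obs
l2 = np.sqrt(np.sum(err ** 2))                      # l_2 norm of the error
rmsve = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))  # RMS vector wind error

# rmsve = l2 / sqrt(N): the same information, differently normalized
print(np.isclose(rmsve, l2 / np.sqrt(len(obs))))

# A relative (percentage) error additionally requires dividing by the
# magnitude of the observations, which neither rms nor l2 alone supplies
rel = l2 / np.sqrt(np.sum(obs ** 2))
print(f"rmsve = {rmsve:.2f} m/s, relative error = {100 * rel:.1f}%")
```

This is why an rms error in m/s, by itself, does not tell the general reader what fraction of the wind field is being missed, which appears to be the point being pressed here.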
All,
The formulas for the bias and root mean square error are available on the CMC web site. If the bias formula is correct, it does not even satisfy the definition of a norm (the standard measure of distance between two functions). Interesting that the “tuner” would ask to see that quantity. (I believe he would have asked for any quantity that was not in the manuscript.) Who dreamed up this measure – the World Meteorological Organization? And without a relative l_2 error as in Sylvie’s manuscript, it is not possible for the general reader to know the percentage error. Is the percentage error a secret?
Mathematicians in graduate school take a year-long course in real analysis that discusses the axioms used to measure the differences between functions in Banach and Hilbert spaces. Do the meteorologists use any of these standard measures? No, they make up some measure that is not even positive for all functions.
If you look at Sylvie’s manuscript you will see that the percentage errors at various levels grow quite large very quickly. Thus the forecasts of these quantities lose all meaning in a short period at those levels.
How about a bit of honesty from RTVS, using standard mathematical measures of percentage errors, so this can be settled?
Jerry
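The objection that a signed bias is not a norm is easy to demonstrate: a mean error can vanish even when the forecast is wrong everywhere, so a small bias bounds nothing. A toy illustration (numbers invented):

```python
import numpy as np

obs = np.zeros(4)
fcst = np.array([5.0, -5.0, 5.0, -5.0])  # wrong by 5 m/s at every point

bias = np.mean(fcst - obs)                 # signed mean error
rms = np.sqrt(np.mean((fcst - obs) ** 2))  # a genuine norm-based measure

print(bias, rms)  # bias is 0.0 even though rms is 5.0
```

A norm must be positive for any nonzero error field; the bias fails that axiom, which is exactly the complaint above. (The counter-argument, made elsewhere in this thread, is that the sign of the bias is still diagnostically useful for locating missing physics.)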
Shhhhhh!
Re: #91
Bias is a useful measure for forecast verification because it tells us something more about the error. For example, a model may have a low bias in daytime surface temperature, implying some lack of heating. For a person dealing with physics, this is more useful to know than rms or the L2 norm, because those measures only say how much your model is wrong, not in which direction.
Re: #90
Of course I believe the statistics presented, and since I am no more involved with NCEP’s models than you are, I am not playing favorites among these models.
You asked a question about the L2 norm versus rms for vector wind. The rms error would show up a mean error, which you would see more often in temperature or geopotential height than wind, while the L2 norm would subtract this and just give credit for having the right pattern. If I understand it, the L2 norm would be the same if you added a constant to the whole field, so it is basically more like a correlation in giving credit for the right pattern, even if you are several degrees off everywhere in temperature. It is OK to present the L2 norm, but that information is not complete without a bias when looking for model errors.
Jimy Dudhia (#93),
Baloney. The very fact that you said that Sylvie’s manuscript would have been rejected, yet you are unable to understand the results, states very clearly that the manuscript did present something new (at least to your fine group). And the fact that the “statistic” you asked for is not even a mathematically defined measure of the error between different functions says even more. Then you have the nerve to state that the RTVS site is meaningful when a reader cannot decipher what the errors really mean, when it does not separate out the main contributors (forcing and data) to the accuracy of the forecast, and when it does not indicate that the large-scale forecast can be done on a PC. I think you have lost your scientific credibility (if you ever had any). And I would hope that the general reader on this site starts to wonder how their taxpayer funds are being spent.
Jerry
All,
Jimy wants to know the sign of the error so he can tune the “physics” (parameterizations) to fix the error. Is that not a trial and error approach? If the physics (heating terms) are well known, then they should not need to be tuned. And he wants me to read a book on mesoscale models that explains that the physics are well understood?
If he wants a list of scientists that I admire, it would include scientists like Hilbert, von Neumann, Bob Richtmyer, Gilbert Strang, Jim Wilkinson, Peter Lax, Fritz John, et al., who are unquestionably the best of the best.
Jerry
All,
I asked Jimy Dudhia if he believed the error curves on the CMC web site, not in order to choose the best-tuned model (that is well known – it is the ECMWF model), but so that he could explain to me how to interpret the percentage difference between the errors in the models at 48 and 120 hours using only the root mean square error, and also explain the percentage error in any model at either time using only the root mean square error.
Jerry
All,
One more comment for tonight. It does not take a rocket scientist to
code up a finite difference scheme for some partial differential equations and add enough dissipative terms to keep the model results bounded.
But there are a few more serious problems that must be investigated before that model can be believed. It must be shown that the original continuum system of partial differential equations accurately describes the motions of interest and is well posed (the mathematical test for well posedness for the initial value problem (IVP) is quite different and less complicated than the mathematical test for well posedness of the initial boundary value problem (IBVP)). If approximations have been made at the continuum level, e.g. the hydrostatic approximation, then the accuracy of that approximation must be independently verified for the motions of interest and then the well posedness of the IVP or IBVP proved for that system.
Once these issues have been resolved, the system can be approximated by an accurate and stable numerical approximation (the stability condition for the IVP is quite different than for the IBVP). If the numerical method satisfies these conditions, then it will converge for some period of time.
The length of the period is determined by the properties of the continuum solution, e.g. the solution of the inviscid Burgers’ equation can shock and the finite difference equation will not remain bounded without explicit or implicit dissipation terms in the finite difference approximation. On the other hand, the solution of the viscous Burgers’ equation is analytic and the numerical method will converge to the continuum solution with adequate resolution. And finally, the best test of any numerical method is to show that the results converge as the mesh size is reduced. This can be
done by choosing any reasonable bounded, differentiable continuum solution,
substituting it into the homogeneous differential system to obtain appropriate forcing terms to ensure that it is a solution of the system,
and then running convergence tests on the solution. Note that
the WRF model will not pass the latter test because of all of the ad hoc boundary smoothing and large dissipation terms, which is the reason that Jimy Dudhia will not run the “toy” problems. The forcing terms (physics) that are added have unknown errors and are discontinuous, and this has a detrimental impact on the numerical solution and on the approximation of the continuum solution of the original continuum system.
Jerry
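The forced convergence test described above (often called the method of manufactured solutions) can be sketched in a few lines. This is my own toy version on the 1-D heat equation with a chosen exact solution, not any atmospheric system; the point is only the mechanics: pick a smooth bounded solution, derive the forcing that makes it solve the equation, and check that the error falls as the mesh is refined.

```python
# Toy manufactured-solution convergence test (my construction):
# solve u_t = nu*u_xx + F on a periodic domain with F chosen so that
# u(x,t) = sin(x)*exp(-t) is the exact solution, then refine the mesh.
import numpy as np

nu = 0.1

def exact(x, t):
    return np.sin(x) * np.exp(-t)

def forcing(x, t):
    # F = u_t - nu*u_xx for the chosen exact solution
    return (nu - 1.0) * np.sin(x) * np.exp(-t)

def max_error(nx, t_end=0.1):
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    dx = x[1] - x[0]
    steps = nx * nx // 4              # dt ~ dx^2, so the time error refines too
    dt = t_end / steps
    u = exact(x, 0.0)
    for n in range(steps):
        t = n * dt
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
        u = u + dt * (nu * uxx + forcing(x, t))
    return np.max(np.abs(u - exact(x, t_end)))

errs = [max_error(nx) for nx in (32, 64, 128)]
rates = [a / b for a, b in zip(errs, errs[1:])]
print(errs)    # errors shrink as the mesh is refined
print(rates)   # each ratio should be close to 4 (second-order accuracy)
```

A scheme carrying heavy ad hoc smoothing would fail this kind of test: the extra dissipation puts an error floor under the refinement, and the ratios fall well below the design order.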
But gavin at RC told me that parameterizations were not an issue. That the problem of parameterization was not underconstrained. Implying that the various parameterizations are in no way overfits.
And yesterday gavin at RC pretends not to understand the need for calculating the overall error associated with an inferred parameter. He gets all huffy at a reader who suggests Hansen’s models, as good as they are, need improvement before he is convinced.
Sorry. I am way OT and ranting. I just wish someone would show how error propagates through a GCM parameterization, from the “known” physics to the unknown/inferred physics. Instead, I see denial and coverup. “What error?” “You don’t know what you’re talking about.” Of course I don’t know what I’m talking about. That’s why I’m asking questions!
Jerry, I laughed when I read your comment that:
The reason I laughed was because I recalled how Gavin Schmidt handwaves this problem away:
Model getting worse when you make the resolution finer? No problem, just adjust your parameters …
w.
RE: #97 – It’s been a few years (more like decades – LOL!) ….. my main use of continuum physics wayyyyy back in my own academic days, was for geophysical apps. Heat flow, seismic reflection prospecting, etc. I recall we used ERF and ERFC. I need to crack the old books. This is a fascinating thread.
RE: #98 and 99 – Gavin is so arrogant and affected by tunnel vision that he even tries to belittle people who most assuredly known what they are talking about!
RE: #101 – “Known” should have been “know.”
RE: #99 – One of the most sadness inducing things as one grows older is when one’s childhood heroes cease to command respect. As a toddler, just becoming fully aware of the world around me, I was awe struck watching the Gemini shots on TV. By the time of the Apollo program I was getting into elementary school, even built a 2 foot tall Saturn V model. Oh how it pains me to write this – given the absolute trash I have read over the past month from NASA, it’s all been shattered. All that awe and respect, all that glory of yore, into the bin. Very, very painful, personally. But also, a cathartic experience as well.
Jerry (#94–#97),
The mathematician’s perspective (which is clearly where you are coming from) is very different
from that of the physicist in looking at model outputs. The people on your list (#95) that I had heard of
were in the field of numerical methods, but none of them did physics as far as I know.
You probably know that numerical weather prediction is a combination of numerical methods with
physics, and all your complaints seem to point to a lack of understanding how physics
is implemented in models. Physics is based on equations too (thermodynamics, radiation, turbulence,
and more complex systems associated with microphysics, cumulus parameterization and vegetation effects).
An NWP model is a combination of all these mathematical subsystems. The reason we use computers
is because there is no way to study this interaction purely mathematically.
Why wouldn’t you want to know whether your forecasts have a cold or warm bias? This would be a
mathematical, not physical, perspective.
Percentage error is also somewhat less useful in atmospheric science. For example, we would rather
know that the temperature has a 1 degree error than that it has a 0.3% percentage error on the
absolute temperature scale. You just get more of a feel. Likewise with geopotential height there is
a very large background value making percentage errors quite small. Physicists prefer to see
errors with physical units. One area we use percentage error is in rainfall bias, where it
is useful to know if a model produces 10% too much rain over a domain.
On the subject of convergence tests, we’ve been over this before on this thread. The way
to do a convergence test is to know the correct converged solution as the scale gets smaller. The
real atmosphere has such a high Reynolds number that models can’t test all scales down to
molecular viscosity for convergence. To do that you would need a model that could
represent motion on scales over at least five orders of magnitude, which is not possible
on current computers. So what is done instead for these tests is to reduce the Reynolds number
so that convergence can be achieved with current computers. The method they choose is to
select a physical viscosity and keep it fixed as the grid length is reduced. This shows
convergence.
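A toy version of this fixed-viscosity procedure (my own sketch, on the viscous Burgers equation rather than any atmospheric model): hold nu fixed, shrink the grid length, and compare each run against a high-resolution reference run with the same nu.

```python
# Fixed physical viscosity, decreasing grid length (my toy sketch):
# u_t + u*u_x = nu*u_xx on a periodic domain, with each coarse run
# compared at shared grid points against a 512-point reference.
import numpy as np

nu = 0.05

def burgers(nx, t_end=0.5):
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x)
    dt = 0.2 * min(dx, dx * dx / nu)   # conservative explicit time step
    steps = int(np.ceil(t_end / dt))
    dt = t_end / steps
    for _ in range(steps):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
        u = u + dt * (-u * ux + nu * uxx)
    return u

ref = burgers(512)
errors = [np.max(np.abs(burgers(nx) - ref[:: 512 // nx])) for nx in (64, 128, 256)]
print(errors)   # errors decrease: with nu held fixed, the runs converge
```

As I read the disagreement, Jerry's objection is not that this test fails but that the fixed nu used in such tests is far larger than the atmospheric value, so what converges is a different, more viscous problem.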
bender (#98),
The parameterizations are extremely crude in the climate models.
Has anyone asked why there are so many climate models and NWP models if the
numerical errors are small? The obvious answer is that there are so many different eddy viscosity types (that are too large) and so many different parameterizations that are used to overcome the inappropriate
viscosity in order to obtain a spectrum that kind of looks reasonable.
It has been known for some time that the highest resolution global model (ECMWF) has not converged to a better forecast exactly for this reason.
This is science? 🙂
Jerry
Willis (#99),
Exactly.
Jerry
Steve (#101),
It is not arrogance. It is naivete and wishful thinking.
Jerry
Steve (#103),
There has been a general decline of science and I blame some of it on the misuse of computers. Code up a model and write a manuscript.
It is disheartening.
Heinz once said that we needed to start a journal called
Numerical Modeling without Tricks. 🙂
Jerry
Jimy Dudhia (#104),
The only part you have right is that meteorologists don’t care about percentage errors because they will conclusively show how poorly the NWP models are performing.
Have you been able to understand the significance of Sylvie’s manuscript yet? I continue to wait for an explanation of how to understand what the RMSE plots mean on the CMC website in terms of percentage errors.
If there is no connection, what is the point? And I continue to wait for
relative errors from RTVS or any run of the WRF model, but I am not holding my breath.
You are not a numerical analyst (there is not even one at NCAR because they don’t want anyone to check their models), you are not a physicist (just a glorified tuner), and you know nothing about the analysis of differential equations.
The people I mentioned are so far beyond anything you or any of your colleagues can comprehend in multiple areas of science, including fluid dynamics, that your comment is a complete riot. Have you heard of the texts called Mathematical Physics by Courant and Hilbert? Are you aware of the contributions of Bob Richtmyer to the Manhattan Project or of his contributions to mathematical physics (read his book)? I guess I just didn’t realize the low level of your educational background.
Jerry
Jimy Dudhia (addendum),
You seem to have forgotten that I worked on those “glorious” climate models at NCAR for a number of years and you might want to look at the manuscript
on microphysics that I wrote with Chungu Lu. I am well aware of the
“physics” in these models and the crudeness of the assumptions is pathetic.
Jerry
Jimy Dudhia,
I see you are starting to reply during working hours. Are we starting to use work time and resources to reply?
Jerry
#98 — right on, Bender. I’ve also been asking to see such things for a long time. I’ve searched the literature for a published study of total physical error in GCM projections, and found none. I asked Lindzen if he knew of any such study, and he replied that to his knowledge, no such study had ever been done. Everywhere in the physical sciences, errors get propagated through the relevant theory and the reliability of a prediction is judged in that light. But not in GCM science. There, prediction is latterday divine revelation. It’s prima facie true, and you’d better believe it.
For the record, Gavin Schmidt’s reply to my question about GCM parameterization:
http://www.realclimate.org/index.php/archives/2007/05/hansens1988projections/#comment33242
Jerry,
#111. First of all, I do not do this in work hours. Friday was a day off, and I use my home computer.
#109–#110. I don’t know why you want a percentage error from the CMC data, and besides,
for a vector wind error, what would you use as a mean to divide by? Obviously they
don’t produce percentage errors because that is not as useful a quantity as absolute error.
This is standard in the field so that people can compare results with each other.
If you want a percentage error from RTVS, you can just find a mean from climatology and divide by it.
As far as physics goes, the models are only getting close to real weather and climate
because they have physics, and your manuscript (Gravel et al.) would have demonstrated that
if only you showed the lack of diurnal cycle and rainfall that resulted without physics,
or tried to verify the same setup in the tropics or the US convective season where the lack
of physics would have led to more immediate dynamical errors.
If you worked in physics, you should know that the basis of physics parameterizations
is the knowledge of the processes. For this knowledge we rely on the scientists who
work in the various fields (radiation, microphysics, landsurface, turbulence and boundary layer),
and develop parameterizations. Yes, it is difficult to bring all these parameterizations
together into a model, but each modeling group has taken that on, and with success as
demonstrated by the journals that are full of verifications of these models if you just
look for them. A model that does not verify well does not survive. You are going to say
that the models are tuned to verify well, and it may surprise you that I agree, but the
disagreement is that your perception of tuning is that the physics is somehow grossly distorted,
while mine is that the tuning is minor and within the uncertainties of the processes,
requiring some physics insight, and not just blindly turning knobs. Very few parameterizations
work perfectly first time when put into a full model. So call me a tuner, but there is expertise in that,
because to get it right requires a knowledge of how everything interacts in the model, which is
50000 lines of code, as well as the counterparts in the real atmosphere, and don’t belittle the tasks
faced in developing full-physics models.
#114 Jim D
First off, thanks for taking part in the discussion here.
I noticed in your posting referenced above that you admitted that the models need to be tuned. This doesn’t surprise me; in fact I would be shocked if a model worked the first time it was run without any tuning. Some of the tuning probably relates to fixing “bugs” in the code, and some has to do with the model itself. It seems that it would be easy to verify the output of the model by just comparing it to the actual weather and atmospheric conditions. I’m sure a model often performs well, but is occasionally way off. That is one of the characteristics of digital systems. This could be caused by such things as errors that often cancel each other out, but sometimes do not.
Without this iterative refinement, I think it would be fair to say the weather models would be pretty much useless. The same would be true if you could only refine the models by verifying the predictions once or twice against actual conditions, as some errors won’t show up right away. My question is this: since this kind of iterative refinement is not possible with climate models, how can we have any confidence in the predictions they make? It seems to me that the best you can do is compare the results to past climate (problematic since even past climate is controversial – was there a MWP or not?), or to other models. As I mentioned, digital systems can perform well sometimes and horribly other times, so may “predict” past climate well, but fail miserably on future climate.
Jimy Dudhia (#114),
I apologize for the assumption that you replied from work.
Because you have hinted that it is possible to know the percentage error from the absolute error, please provide the relative (percentage) errors for the CMC curves or the RTVS information. Is the general reader supposed to know how to translate the absolute error to a percentage error?
It is trivial to provide the percentage error – why not do so?
Clearly a case of obfuscation. Note that the percentage errors in Sylvie’s manuscript are as high as 70–75% in the winds at the lower levels
at 48 hours for the operational model. Thus the vorticity for any boundary conditions for a limited area model at these levels would be nonsense. How is the small scale vertical component of vorticity determined in your WRF initial conditions when it is clearly not present in the large-scale observational data or in the initial conditions from the large-scale model? What happens if the small scale storm is already present in the limited area domain at the beginning of the WRF forecast?
Norms do not require a mean and if the model is coded properly, the mean pressure (or geopotential) should be removed from each level in order to avoid problems with steep topography. Also if the model and obs are close, the mean should cancel out unless the model is way off. By computing the relative errors at each level, the mean is allowed to change with height, i.e. an even more compelling reason to plot the percentage errors as a function of height.
Because your model is doing so well, please provide the percentage errors for it as a function of height for multiple forecasts as Sylvie did. A simple way to resolve this.
If you read our manuscript or Page’s, you will see that you only need the vertical component of vorticity and the total heating to determine the slowly evolving solution in time. Thus no model run is needed if you have the wind data from a storm – all you need to do is play with the million parameters in the well known microphysics until the remaining variables match the observations. Of course there might be more than one set of parameters that works. How many hours of computer time would that take
you in your current mode?
The errors in the tropics are terrible because the model solution is so dependent on the total heating that is poorly parameterized and there are very few observations in that area. Sylvie states very clearly that we chose the winter time because that is when the large-scale motions are typically the most important and we chose the most observationally dense
area so the errors could be computed with some reliability. If the models are having problems in this best case scenario ….
We also provided horizontal plots of the vorticity just to ensure that the percentage errors were providing correct information (as they clearly are
just as the mathematics states they will).
I continue to wait for the percentage errors as a function of height from
your model. Anyone can publish a single case study where they can tune
the physics to provide something that looks like the obs. Let us see
how it works on a tougher “toy” case or in a predictive setting for a consecutive period of days.
Jerry
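For readers trying to follow the norm dispute, here is a sketch with synthetic numbers (invented fields, not Sylvie’s data) of the two normalizations being argued about: dividing the rms by the observed mean, as suggested for RTVS, versus dividing the l2 norm of the error by the l2 norm of the observations at each level.

```python
# Synthetic per-level error statistics (made-up fields, not real data).
# For a zero-mean quantity like a wind component, rms/mean is
# ill-behaved, while ||error||_2 / ||obs||_2 is stable at every level.
import numpy as np

rng = np.random.default_rng(0)
nlev, npts = 5, 1000
obs = rng.normal(0.0, 10.0, (nlev, npts))        # zero-mean "wind" per level
fcst = obs + rng.normal(0.0, 4.0, (nlev, npts))  # forecast with errors

err = fcst - obs
rms = np.sqrt(np.mean(err ** 2, axis=1))
rel_l2 = np.sqrt(np.sum(err ** 2, axis=1)) / np.sqrt(np.sum(obs ** 2, axis=1))
rel_mean = rms / np.mean(obs, axis=1)            # divides by a near-zero mean

print(rel_l2)     # ~0.4 at every level: interpretable as a 40% error
print(rel_mean)   # large and erratic, because the level means are near zero
```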
Jimy Dudhia (#114).
The fact that you equate lines of code with scientific quality is humorous.
Clearly if each different specialty group has provided code, does that mean that you understand all 50,000 lines? And how many lines for the simple dynamics versus all of the boundary smoothing, time filtering, different microphysical parameterizations, different boundary conditions, parallelization, etc. A bit of a breakdown here might be interesting so the general reader can determine where the bulk of the code lies.
Essentially WRF has lumped together everything including the kitchen sink.
Every manuscript based on WRF should indicate every option and parameter that was used. Is that the case? Then one can readily determine if there is tuning going on in each case.
Also don’t forget all of the fine manuscripts based on Anthes’ ill-posed MM4 model. A manuscript based on a model is not a proof, but a matter of tuning.
The amount of code for the multiscale model in the manuscript by Heinz and me was trivial and completely documented on one page. The model was able to recreate a developing mesoscale storm from its beginning stage to its maximum development without any dissipative mechanisms or tricks. I am waiting for you to run that “toy” case with all of your boundary smoothing and filters turned on as you do during a forecast. And if you have the nerve to do so, the next “toy” problem will be to run the storm thru the boundary to determine the relative errors in both models for that case.
Jerry
#115 C_G_K. Thanks for your comments on tuning. I certainly concur with what you said
about weather prediction models, and we have many cases on which to base the statistics
when the models are run daily in forecast operations. For climate models, it is a valid
question that past climate represents basically one case to tune the model to, but that
case has a lot of data because it covers the last century, so in some ways it is also
quite constrained. Now the question is whether by fitting past climate we can guarantee
fitting future climate. This comes down to how much we understand about the changes
occurring, particularly with greenhouse gases, and their impact on the climate system.
The consensus (and I know people here don’t like that word) is that we do know enough
of the climate feedbacks to have confidence in the projections, and the physics of
the climate system has no surprises in store for us. This is justified because in terms
of the extra energy CO2 adds to the system, the changes are incremental rather than
destabilizing in the short term (less than a century), so the predictions of even simple linear
models are not far off.
#116–#117 Jerry,
You keep throwing so many things into your questions that I find it hard to focus on an answer.
I am not going to run a toy problem because that should not be needed to argue a case.
I don’t equate lines of code with quality, I equate it with difficulty in tuning.
At NCAR, we don’t run an rms verification system on our WRF forecasts, because we are
nowadays more focused on finescale convection and hurricanes for which rms is a
useless statistic. However, the US Air Force and operational centers in Taiwan and Korea,
as well as parts of NCEP have been running WRF operationally and in testing recently, and the
statistics are comparable with those you see on RTVS for the current NCEP operational models.
Most papers on WRF indicate which version of the code they use, and what, if anything they changed
from it. This is sufficient because WRF is freely available, so anyone can see the full code
for themselves.
Percentages. It turns out that the RTVS site (#14) gives you the information to compute the percentage.
On the Statistic 1 or 2 menu, set it to Obs Avg, which gives you the mean of the field, so
you can get the percentage by dividing the rms by the mean. You will also see Correlation is
an option.
Jerry #116,
I forgot to answer this part
We use the primitive equations, so momentum is the boundary condition, not vorticity. It is
correct that small-scale vorticity is often not present in initial or boundary analyses
if it is associated with an unresolved storm like a convective system, but usually the analysis
has enough other clues (instability, convergence boundaries) for the model to generate its own within a few hours.
This only becomes a big issue in short-term forecasting (1–6 hour range), where they
have to rely on cycling model information from previous forecasts with newer data (data assimilation).
Jimy Dudhia (#119),
You won’t run the “toy” case because you are scared to death it will show the seriousness of the problems in your model. You won’t run the hydrostatic system at fine resolution with small viscosity because it will show the unbounded growth. You won’t run the
case with small viscosity and shear that Chungu Lu did with your model because it will show the exponential growth. And you won’t run the forecast with percentage errors because they don’t exist. But you will publish hundreds of manuscripts using Anthes’ ill-posed MM4 model and claim a high pub rate for NCAR. Are we to suddenly believe this model is any better given the lack of any legitimate verification by a PDE analyst, a numerical analyst, or common sense. You aren’t fooling anyone.
Saying that WRF is as good as the current models at NCEP isn’t saying much
when the CMC global model is outperforming the NCEP global model due to a simple change in the CMC assimilation program. When RTVS provides
percentage errors at all levels over the US then everyone can judge for themselves what is happening. When I visited AFWA they thought MM4 was a great model, but I could clearly see the problems near the boundaries. Doesn’t say much for their credibility.
Have you divided up the code into its components so that the general reader can see how much is dynamics and how much fluff?
I asked my wife if she had any idea what an error of 5 m/s meant in the wind. As expected, she had no idea. But when I told her there was a 75% error in the wind at 48 hours, that made complete sense. Quit playing your foolish games and state the percentage error (you can’t, because that is not possible without the l_2 norm of the observational values at a level – that is not the same as the mean of the observations).
And provide some relative l2 errors for either the toy problem or
consecutive forecasts as Sylvie did. Your review of her manuscript was and is a joke. An attempt to cover up some scientific results that the modelers do not want known, as I stated long ago. Every one of your fallacious remarks has been refuted. The full CMC model is outperforming the NCEP model, but still has large relative errors very early on at the lower levels.
Jerry
Re #121: Jerry comments “Are we to suddenly believe this model is any better given the lack of any legitimate verification by a PDE analyst, a numerical analyst, or common sense. You aren’t fooling anyone.”
Unfortunately, he is fooling the vast majority of scientists. Willful ignorance is the order of the day.
Jimy Dudhia (#120),
As usual you sidestep the real issue. The point is that if there are large
errors in the boundary data, as indicated in Sylvie’s results,
the information coming into the limited area model at an inflow boundary is bad and no amount of smoothing will help that problem (and it will translate into a bad vertical component of vorticity as soon as the wind information enters the region). And there will be a conflict at the outflow boundary if the solution in the interior does not have the same large errors.
Excuse me, but are you saying that the correct small scale vorticity is generated in a few hours (a substantial error if the small scale vorticity is already present in the limited area domain at the beginning of the WRF forecast), or that some small scale vorticity is generated? There is a very big difference, as can be seen in our manuscript or Page’s manuscript.
And where does the small scale information for the heating (humidity, graupel, etc.) come from? It is not present in the initial data from the large-scale model.
Now we are getting down to the serious problems in the WRF model that have already been shown in the “toy” problems in the above manuscripts.
Jerry
Reid (#122),
Hopefully this thread and the one before have raised some eyebrows.
All my arguments have been backed up with mathematical proofs and supportive numerical examples using convergent numerical solutions.
Even the CMC model supports my arguments. And Jimy has done nothing but spout nonsense exactly as expected. I think you can see why the peer review system has major problems. If Jimy were able to be an anonymous reviewer
with a biased Editor, Sylvie’s manuscript would have been rejected just like many of our mathematically rigorous manuscripts would have been without major confrontations.
At least this is in a public forum where Jimy can’t make his statements without me being able to show the game he is playing.
Jerry
Jim D,
Thanks very much for commenting here. You say:
This DID surprise me.
I would like a straight answer as to how much tuning is done. How many parameters, and what is the process? I understand that tuning is as much art as science, and so the process may be complex, so please don’t respond with that sort of fluff. You say you have models with 50,000 lines of code, and that this is an indicator of how many parameters are available for tuning. This is as I expected, but is in direct disagreement with what I’m told by others – that in the physics-laden models there are only 3–4 free parameters available for tuning (see #113). I have tried to get some substantive answers from others and have been blown off. Any references to the primary literature would be greatly appreciated.
For those interested in the party line of the AMS see the draft statement
under the AMS web site called Weather Analysis and Forecasting.
Compare with the percentage errors in Sylvie’s manuscript.
Jerry
bender,
Take a look at the documentation for the WRF model on the WRF site.
In particular the document called
A Description of the Advanced Research WRF Version 2
Even on the contents page it is possible to see that there are many different parameterizations for the same physical phenomenon included in the same model and that the number of adjustable parameters is not 3 or 4
in any of the microphysics schemes, let alone the boundary layer, turbulent mixing and model filters, etc. The climate models also have many knobs,
but the parameterizations are even cruder because the climate models do not resolve any of the smaller scales of motion. Gavin is using verbiage to cover the truth just as Jimy does.
Now let us see what Jimy says about this.
If you want I can access the NCAR climate model and show you all the knobs
and I can ensure you that there are more than a few.
Jerry
Jerry
Jerry
Sorry about the multiple signatures. I always want to make sure that
my posts are signed the same way and just didn’t see the ones below the comment before I posted.
Jerry
Re #127, Gerald Browning
If it is easy to do, I would be very interested in a list of all the free variables (=parameterisations?), their maximum and minimum possible values, and the values actually used.
Parameterizations are whole sets of parameter values. Some parameters are more fixed than others because there is less uncertainty about their true values. These are the ones that can be justifiably tweaked. Gavin Schmidt assures me there are only 3–4 of these. Maybe he’s talking about the assemblage of modules, as opposed to what’s available for tweaking within individual modules. I’m not sure, but his answers are always unsatisfactory, smelling of obfuscation.
Is Gavin Schmidt honest?
Please stop it. This is so wrong that my head hurts.
As you make computer runs to do weather forecasts, please stay on that topic and don’t venture into long-term evolutions of nonlinear systems.
It has already been shown a dozen times in the previous thread that the climate system is not, I repeat IS NOT, linear!
There is no, I repeat NO, time scale at which it is linear.
It cannot, I repeat can NOT, be approximated by any linear function.
No equation entering in the long term forecast is linear. None.
The linearity hypothesis/approximation not holding, the assumption that a small perturbation has a small effect is hopelessly wrong.
Can you understand that?
The physics certainly still has many surprises in store (precisely because you can’t enter everything in a model and you can’t know what can be neglected and what not), but what has many more surprises in store is that the evolution is chaotic.
After all these weeks you still seem to live with the fiction that because time averaging has a visually smoothing effect, it represents something physical and that the phenomena actually BECOME “almost” linear because they get time averaged.
Well, if you average over a long enough time it will even become so linear that everything will be constant – very physical indeed …
Then I have a scoop for you that Jerry will confirm – if you can solve for time-averaged parameters then you can solve for the continuum solution.
If you are convinced of that and can prove that you have a time-averaged solution, then you have solved the NS equations and won the $1M prize.
Congratulations, you have just become one of the greatest mathematicians of all time.
Being able to successfully hindcast historical climates (even if the models could actually do this) is not proof that the models will be able to successfully forecast future climates.
fFreddy (#129),
Before I spend time doing so for the NCAR climate model, you might first want to peruse the manuscript by Chungu Lu et al. that discusses the typical microphysical schemes I mentioned earlier.
That manuscript shows just how many free parameters there are in a typical microphysical scheme. The schemes include different number of variables,
but the equations are essentially just advection schemes for each variable
with sink and source terms on the right hand side.
If there are errors in the velocities (which is certainly the case, as can be seen in Sylvie’s manuscript), then the advection
of a variable has large errors (discussed in the manuscript). In fact the errors in the velocities typically overwhelm many of the smaller sink and source terms.
There are discontinuities in the source and sink terms plus many parameters
based on rather crude physical parameterizations (size of droplets, etc).
Some of these are extremely touchy, especially when a storm is fully developed (e.g. latent heating completely controls the vertical
component of velocity during this stage and any tweaking of this parameter
will have a major impact on the intensity of any simulated storm).
The climate models parameterize clouds and rainfall over large areas. These parameterizations select a certain amount of rainfall based on a fraction in the parameterization list. Clearly a slight tweaking of this parameter will have a drastic impact on the amount of clouds and rainfall, and subsequently on the radiation parameterization. If you look at the documentation of a climate model, these values should be discussed in detail. One just needs to look for the particular physical phenomenon and then see what is used to control the amount of cloudiness and rainfall.
Don’t forget that the viscosity in a climate model is unphysically large so that the forcings must be tuned to overcome the excessive damping and make the spectrum behave somewhat realistically.
Jerry
Re #133, Gerald Browning
Sure. Is there anywhere I can download it from without learning Chinese?
fFreddy: http://ams.confex.com/ams/pdfpapers/100521.pdf
(am I missing a joke? call me hank)
Re: #134
A google search for Chungu Lu and Browning and further digging found this: Scaling the microphysics equations and analyzing the variability of hydrometeor production rates in a controlled parameter space, a pdf file.
Hopefully, that’s the one Jerry referenced.
bender, thank you. No joke on my part, the Chinese page was where Google Scholar led me. No problem, Hank.
John Baltutis, thank you too – I think I have to improve my Googling.
John (#136),
That is the correct one. If anyone has any questions about this manuscript, I will be happy to answer them. Scaling the microphysical equations is quite illuminating, especially when there are large errors in the winds coming from a global model. If a mesoscale storm is fully developed, there are only 1 or 2 terms that are important, and these are related to condensational cooling and latent heating. Before then, the so-called triggering terms can be completely overwhelmed by errors in the large-scale velocities. If you read my recent post discussing the lack of the vertical component of vorticity in the limited-area domain and the lack of other small-scale heating components, you can start to see the problems with a limited-area model that is attempting to develop smaller-scale features that don't exist in the large-scale initial data. There is a game that is played that I will discuss in more detail. You will be quite shocked at how this has been done.
Jerry
Jerry, I don't understand your version of a percentage error; to me it is the error divided by a mean value.
Parameters: Yes, there are hundreds of parameters in full-physics models, thousands if you count the lookup tables used in radiation schemes. Someone could in theory vary any of these except the known physical constants like gravity, the earth's rotation rate, the solar constant, and heat capacities. The results are sensitive to some parameters and not others, and some are more certain than others, so there is no definitive number of parameters you could consider free.
The tuning process is physics-based, not just playing with numbers. Sometimes parameters within schemes are changed to correct systematic biases, sometimes equations are changed, sometimes the whole scheme is replaced. This is the development process that has led to the models we have today, and by trial and error the fittest schemes survive. Work in numerical weather prediction revolves around verifying and improving, and building on the work of previous generations. It can now be regarded as fine-tuning compared to what was probably done in first-generation models back in the '60s and '70s. It is a very open process, because when someone finds a way to improve a model, they want to publish it, so I don't know why the view here is that something is hidden about this.
#123 Jerry, small-scale vorticity is usually associated with convection, and we can generate convection within a few hours in roughly the right place, even using the NCEP 40 km analysis. This indicates that there is enough information in the large scale to place the small-scale systems quite well.
#121 Jerry, I keep telling you why the hydrostatic equations fail at small scales, and you keep bringing up the Lu et al. paper. What is the point of explaining this again? Explain to me what the toy case is, and what it is supposed to show. Is this that same hydrostatic test that is supposed to fail at small scales?
#131 Tom,
This site has had people showing that linear bulk (global-average) models can predict CO2 warming to within a degree of the sophisticated models, and people respond with glee, asking what the point of a complex model is. Now you contradict them and say that can't be done. I was trying to use language people here would agree with, but there seems to be an internal conflict among CA posters on this point.
The linearity of these bulk models comes about because the energy balance change due to CO2 is small compared to the energy storage in the atmosphere/ocean system, and compared to the total forcing by solar and infrared fluxes. The storage acts like a damper, not allowing large deviations of the surface atmospheric temperature from the ocean temperature in a global average. Likewise, the upper atmospheric temperature is constrained by the surface temperature and physical laws.
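The linear bulk model Jim D alludes to can be written down in a few lines. The sketch below is a generic zero-dimensional energy-balance model, not any published model; the heat capacity, feedback parameter, and forcing are illustrative placeholders.

```python
# Zero-dimensional energy balance model: C dT/dt = F - lam * T.
# Illustrative numbers only (C: roughly a 70 m ocean mixed layer per m^2;
# lam: net feedback parameter; F: nominal CO2-doubling forcing).
C   = 3.0e8   # J / (m^2 K)
lam = 1.2     # W / (m^2 K)
F   = 3.7     # W / m^2

dt = 86400.0  # one-day timestep, in seconds
T = 0.0       # temperature anomaly (K)
for _ in range(100 * 365):          # integrate 100 years
    T += dt * (F - lam * T) / C

# The storage term C acts as the damper Jim D describes: it sets the
# relaxation time C/lam (~8 years here) but not the equilibrium,
# which is F/lam regardless of C.
print(round(F / lam, 2))    # 3.08 K equilibrium response
print(T < F / lam)          # True: the approach is from below
```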
Jimy Dudhia (#139),
You claimed you read Sylvie's manuscript, but have conveniently missed every relevant point in it. The formula Sylvie used is present in the manuscript. Duh. One always uses the same l_p (not L_p) norm in the numerator and denominator. Obfuscation again. I await your percentage errors.
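For readers following the norm exchange: the percentage error Jerry describes, with the same l_p norm in numerator and denominator, is for p = 2 just the relative l2 error. A sketch with invented numbers (the wind values below are placeholders, not data from any verification site):

```python
# Relative l_p error in percent: the same norm in numerator and
# denominator, as Jerry describes. Shown for p = 2.
def relative_error_pct(forecast, observed, p=2):
    num = sum(abs(f - o) ** p for f, o in zip(forecast, observed)) ** (1.0 / p)
    den = sum(abs(o) ** p for o in observed) ** (1.0 / p)
    return 100.0 * num / den

obs  = [20.0, 25.0, 30.0, 35.0]   # e.g. wind speeds (m/s), illustrative
fcst = [22.0, 21.0, 33.0, 30.0]

print(round(relative_error_pct(fcst, obs), 1))   # 13.1 (percent)
```

Note that dividing by a mean value instead (as suggested in #139) gives a different number whenever the field has large variations, which is exactly the case for jet-level winds.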
The tuning is not physics-based. The "physics" of the atmosphere is very poorly understood, as anyone reading the microphysics manuscript will see. The number of parameters that are relatively well known is small, and even some of those are questionable (e.g. the earth is not a perfect sphere, so the gravity constant is not really a constant). The ones that are available for tuning are many. For you to state that someone can look up the WRF documentation and determine the parameters is a joke.
Every manuscript that is published using WRF should show every constant in an appendix so that any tuning is clear and the run easily reproducible (the code should be frozen and archived, as Steve M. has said). The manuscript should also show convergence tests with the same parameters, and percentage errors if comparing to obs. Was any of this done with the hundreds of manuscripts based on Anthes' ill-posed MM4 model? This type of "science" cannot be allowed to continue. It is a waste of time and money and does more to set science backward than move it forward.
At least you have now stated that it is trial and error and not science. This is an extremely inefficient way to progress, and the dangers can be seen in Sylvie's manuscript, i.e. a simple mathematical theory had more of an impact than all of the tuning.
The number of different microphysics schemes in the index of the WRF documentation should be the first clue of a problem. And if I recall correctly, a simple heat equation is used to determine the properties of the soil. How accurate is that approximation, and how many measurements are there to supply the initial conditions for the heat equation? Does the soil vary in its properties from point to point? Does any living human being know how the soil varies from point to point? How is this questionable approximation used to tune the model?
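For concreteness, a minimal version of the kind of soil heat equation Jerry is questioning might look like the sketch below. The diffusivity, grid, and initial profile are invented placeholders; the point is that the computed profile depends entirely on a diffusivity and an initial condition that nobody measures point to point in real soil.

```python
# 1-D soil heat equation, dT/dt = kappa * d2T/dz2, explicit scheme.
kappa = 5e-7               # soil thermal diffusivity, m^2/s (illustrative)
dz = 0.05                  # 5 cm layers
dt = 0.4 * dz**2 / kappa   # explicit stability needs kappa*dt/dz^2 <= 1/2

n = 40                     # 2 m of soil
T = [283.0] * n            # guessed uniform initial profile (K)
T[0] = 290.0               # imposed (Dirichlet) surface temperature

for _ in range(200):       # ~4.6 days of simulated time
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + kappa * dt / dz**2 * (T[i+1] - 2.0*T[i] + T[i-1])
    T = Tn

# Warmth diffuses about half a metre down in this time; the deep soil
# is still whatever the (unobserved) initial condition said it was.
print(T[1] > 287.0)        # shallow layer has warmed toward the surface value
print(T[30] < 284.0)       # 1.5 m down, still essentially the initial guess
```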
Please quantify "quite well" with percentage errors. If the large scale has a 75% error in the winds and the large scale is determining the smaller scales through a cascade, isn't there a contradiction here? And several hours is a long time at these smaller scales, where a tornado can form in a matter of minutes.
I have cited and described the manuscript containing the formation of a mesoscale storm about 10 times. And you still don't know which one? More obfuscation.
And the Lu et al. manuscript is not the hydrostatic system. Do you have Alzheimer's, or is this just more obfuscation?
Jerry
#140 Jim
I began posting in this thread because the topic (exponential propagation of errors) is one issue, but not the only one, in the dynamics of chaotic systems such as weather and climate.
So far I have not taken a position on what kind of sophistication would be necessary for a numerical model to adequately predict long-term dynamics.
Whatever the sophistication, everything is garbage, or to put it in the more diplomatic terms of Dan Hughes, "numerical chaos".
In the course of the discussion I realised that you don't understand chaos dynamics, so the point is becoming somewhat moot.
Apparently you can only think in linear terms, such as "The linearity of these bulk models comes about because the energy balance change due to CO2 is small compared to…".
Such a line of thinking is hopelessly inadequate and wrong for chaos dynamics.
You could spend your life "sophisticating" numerical treatments of a simple Lorenz system (only 3 ODEs!) and be flabbergasted by how difficult it is just to sort out the problems of convergence, error propagation and sensitivity to non-physical parameters like the time and space integration steps.
Forget about long-term predictions, time averaged or otherwise, in any case.
Of course there are damping terms in the system; there always are, but they don't make a chaotic system less chaotic.
Of course some parameters may be bounded; that's what the strange attractors are there for.
But proving their existence and metrics is something that nobody has done, and I suspect that it is not feasible for such a complex system.
As Lindzen says, nobody is able to adequately simulate the "natural variability of the climate", so what is the point of pretending that it can be simulated when only one variable, namely CO2 concentration, changes?
As long as you believe that a computer run and linear thinking are adequate for the climate system, you will only be dabbling in the dark until the difference between the numerical long-term "predictions" and reality grows to a point where it is no longer possible to pretend that we know everything and only need a couple of billions because we didn't sophisticate enough.
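Tom's Lorenz remark is easy to reproduce. In the sketch below (standard Lorenz-63 parameters, a standard 4th-order Runge-Kutta integrator), the only thing changed between the two runs is the timestep, yet after 40 model time units the two solutions sit at completely different points of the attractor, while both remain bounded.

```python
# Lorenz-63: same equations, same initial condition, same RK4 scheme;
# only the "non-physical" timestep differs between the two runs.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, h):
    k1 = lorenz(s)
    k2 = lorenz(tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = lorenz(tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = lorenz(tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def integrate(h, t_end, s=(1.0, 1.0, 1.0)):
    for _ in range(int(t_end / h)):
        s = rk4_step(s, h)
    return s

a = integrate(0.01, 40.0)
b = integrate(0.005, 40.0)

# Both stay on the attractor (bounded), but pointwise they disagree:
dist = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
print(dist > 0.01)                          # True: the timestep changed the answer
print(all(abs(v) < 100 for v in a + b))     # True: both remain bounded
```

The tiny truncation-error difference between the two timesteps is amplified at the Lyapunov rate, so pointwise agreement is lost long before t = 40 even though both trajectories remain on the attractor.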
Thanks again for your continued commentary, Jim D. Regarding the number of free parameters available for tuning, you say:
The answer is: because what you are saying here:
is in direct contrast to what the folks at RC are saying about how these models are tuned. Ask this question at RC and you will get obfuscation, if not lies (as the link in #113 indicates). Honestly, it looks like the problems with these models are being covered up. If you don’t understand the perception, and you think it is false, you had better start doing something to clear it up.
I thank you for your directness and for your agreement with Gerald Browning’s remark (bolded):
This agrees with my perception. And if it is a false perception, then the modelers are going to have to start doing a better job at communicating their methods and results.
I note that the position that Gerald Browning is outlining in #141 is consistent with the one that Isaac Held put forward some months back. His tone was not as volatile, but he was saying essentially the same thing.
#140 Tom,
I agree that weather is chaotic, and this is a level of chaos that is understood from the weather record and models. Now, the jump that climate-scale chaos will somehow manifest in the next century is the part I don't go along with. The disagreement here is about the amplitude of the chaos, not whether it exists. The obvious experiment is to run a weather model for a century with CO2 changing, and this is what is done. Climate models are just low-resolution weather models with some additional features to allow for ocean, sea-ice and vegetation changes.
The results show a growth in temperature against the background weather chaos. I just state the results, and say they seem reasonable to me based on energy considerations and the atmosphere/ocean system, where we know the primary processes.
I would be very interested if someone finds short-term chaos in the climate, but no mechanism for such a thing has yet even been proposed, as far as I know. Bear in mind, such chaos would have to include the ocean, from energy considerations, and that has a large thermal inertia.
Jim D.,
What do you make of the problem of structural instabilities in AOGCMs?
Given that there is some irreducible imprecision in model formulation and parameterization, what do you reckon is the impact of this imprecision? To be clear, I'll ask in a different way: if models are tuned to certain circulatory features that are taken as fixed, when they are in fact variable, then doesn't that mean the parameterizations could be far off the mark? What would this mean for the overall uncertainty in the CO2 sensitivity estimate?
I understand that oceans have a large thermal inertia. But this does not imply that they too do not exhibit structural instabilities resulting from chaotic (albeit slow) dynamics. Does it?
Thanks as always for your patient replies.
#141, Jerry,
Do you mean the run you asked for in #305 on Part 1 of this thread? It mentions Lu et al.
and a hydrostatic test in #186. Yes, I am confused. I already explained that result for you.
#141 and #143, Jerry and bender,
The WRF model code is free to look at, so you can see the parameters there. Yes, even gravity and the shape and size of the earth are approximated in models, together with the physics, yet the models reproduce the key features of the weather and can be used to predict it. It is amazing to me how simple the model is compared to the real world, and still so successful. Uncertainties in the model are not as much of a limitation as uncertainties in initial conditions from data. Changing g by 0.1%, or using a more ellipsoidal earth, won't noticeably change weather forecasts, so these are not considered primary parameters to vary. The bigger model uncertainties relate to how to deal with subgrid convective towers, the fall speed of ice crystals, the effects of vegetation and soil on evaporation, or the effects of cloud on radiation. These parameterizations are continually evolving. This is where the research is. However, despite all this, when you see a blown forecast by a global model, it is more likely due to poor initial conditions than a fault of the model itself. So, now that the models are quite good, data assimilation has become a major focus for the further improvement of forecasts, and that wouldn't have been the case if the models were severely flawed.
#145, bender,
I am not sure I completely understand the question, but models may take some things as fixed (e.g. the Greenland ice sheet) that may actually change, or sea level or the solar constant, etc. What is kept fixed versus what is varied is usually based on being conservative about variations that are not known with certainty when doing runs for the IPCC reports, but I am sure a lot of these assumptions have been tested in other climate runs, and we would only find out about those from the climate modeling literature.
Jimy Dudhia (#146),
I await the percentage errors that back up any of these claims. Otherwise you are just spouting nonsense, as usual.
The parameterizations in the CMC model overwhelmed the assimilation process in 24-36 hours (and the CMC global model does better than the NCEP global model). I also see you didn't discuss how the soil parameterization is initialized, nor how the model is able to generate the correct vertical component of vorticity several hours into the forecast (essentially the time scale of these motions).
Just keep sidestepping the issues. You have been well brainwashed by Klemp and Skamarock.
Trial and error to the max.
Jerry
Jimy Dudhia,
When I look at the difference in the wind vector at 250 hPa for the three models (GFS, NAM, RUC), the RMSE is around 7000 and the mean around 1200. The way you want to do things, that translates to over a 15% error at the best (jet) level in 12 hours. The humorous thing is that the global model does as well as, if not better than, the NAM and RUC models.
It would be much easier if the division did not have to be approximated by eye. Also note that in Sylvie's manuscript the CMC global model does better than any of these three at 250 hPa. Similar results hold near the surface. The GFS model also outperforms the NAM model at 24 hours.
What was that argument about how well the limited-area models are doing?
Jerry
Jimy Dudhia,
error (CMC)
Jimy Dudhia,
The site does not like the less-than sign (it reads it as a tag), so in words:
error(CMC) < error(GFS) <= error(NAM, RUC)
Surprise.
Jerry
Jimy Dudhia,
Did you forget to mention this site?
http://wwwad.fsl.noaa.gov/fvb/rtvs/wrf/DWFE/model/model_verification_tool_user_manual.html#sect6.0
Evidently there is a site that plots the errors as a function of height. Where are those errors plotted, and why doesn't RTVS show them?
Jerry
Gerry,
Interesting comment re: 149.
When I read the forecast discussion from the NWS for my area (North Central TX), it's interesting to see how they take the results from the GFS, NAM, CMC, etc., compare the model runs, and then make guesstimates/adjustments to their forecasts, usually by siding with one over the others, or using a combination of several over another one.
I frequently see references to the GFS being too wet/slow with its forecast most of the time, but other times they side with the GFS over the NAM and CMC, based on the particular type of system that is approaching, because it has traditionally handled that type of system better in the past.
If the NWS can't rely on any single model for a forecast 24-48 hours out, how can anyone possibly use models, or even ensembles, for years out with any degree of confidence that they will be correct?
It's amazing the run-to-run inconsistencies these models produce over such short periods.
Jonathan
Jonathan (#153),
When I was at NOAA FSL, a daily forecast meeting was held in a similar manner. The forecasters tended to look at the global models that have some accuracy for 12-36 hours (because of data assimilation), then at the RUC limited-area model, and then would guess. It sounds like you have had a similar experience. Thank you so much for posting.
After everything that Jimy Dudhia said about the RTVS site, I come to find out that at one time they did plot the errors as a function of height, but that this is no longer done. I wonder why.
Jerry
#154,
Gerry,
Thanks for replying. Much of this thread has been completely over my head, but I have certainly enjoyed reading it and have learned a few things along the way. That's usually my #1 criterion for things I invest time in.
It would appear that the failure to produce error bars is prevalent in the climate sciences, as seen through the many posts on Steve's blog. It does make you wonder why.
Jim # 144
Still more of the same.
All your arguments go like this:
1) I run a global model for some long-term prediction (never mind that it is incomplete and wrong).
2) I change a variable and run the global model again (never mind that it is still incomplete and wrong).
3) I look at the difference of the 2 states (never mind that it is wrong too), and if it is small, I conclude that the parameter I changed has a small influence.
The above approach is not science, because it is not a comparison of theory vs experience; it is a comparison of theory vs theory.
It could have a limited validity if and only if it had been proven that this particular theory (in this case a given numerical model) adequately and precisely represents the reality in all its aspects.
My point from the beginning has been that this proof has not been given, and in my opinion never will be, precisely because the climate is chaotic on ALL TIME SCALES.
Now, as you admit that the climate is indeed chaotic, you have to unwind this thread until its end.
First, and most obviously, it is not predictable in the long term (never mind Lyapunov coefficients; you can't calculate them for this system).
Second, you have pseudo-periods at all time scales, and associated thresholds.
Third, you can never say when one of the multiple thresholds will be reached, so you can never say when a relatively fast transition will begin.
And with all that, somebody would want to make a valid long-term prediction?
Short-term chaos in the climate? It all depends on what you call short term. Off the top of my head, I can already think of several examples:
– El Nino is an obvious one
– the number of large-scale extreme events per year is another (or, in physical terms, the energy dissipated by such events per year)
– the NAO is an interesting case, probably interacting with El Nino, which would yield a certain number of pseudo-periods
– a very prominent case is the Arctic ice mass variation: chaotic, with badly understood multidecadal cycles superposed on shorter-term variations
– average yearly cloudiness variation is not understood, but certainly chaotic
To sum it up, I'll quote R. Pielke:
“The claim by the IPCC that an imposed climate forcing (such as added atmospheric concentrations of CO2) can work through the parameterizations involved in the atmospheric, land, ocean and continental ice sheet components of the climate model to create skillful global and regional forecasts decades from now is a remarkable statement. That the IPCC states that this is a “much more easily solved problem than forecasting weather patterns just weeks from now” is clearly a ridiculous scientific claim. As compared with a weather model, with a multidecadal climate model prediction there are more state variables, more parameterizations, and a lack of constraint from real-world observed values of the state variables.”
#156,
Jim has already admitted that there are hundreds, perhaps even thousands, of potential variables in the climate models.
Now, if factor B provides damping on factor A, and B is set too high, then A is over-damped, and even large changes to A will have little impact on the output.
In a system as hideously complex as the climate, or even as the hugely oversimplified climate models, to change just one variable and, from that run (I refuse to call it an experiment), declare that you know whether the variable in question is significant or not, is the height of folly.
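MarkW's damping argument can be made concrete with a deliberately trivial toy (no relation to any real climate model): if the damping parameter B has been mis-set too high, the absolute response to even a doubling of A shrinks by the same factor, and a single-parameter perturbation run would wrongly conclude that A is unimportant.

```python
# Toy over-damping example: the steady state of dx/dt = A - B*x is
# x = A/B, so B acts as a damping parameter on the response to A.
def response(A, B):
    return A / B

B_true, B_overdamped = 1.0, 50.0

change_true = response(2.0, B_true) - response(1.0, B_true)            # doubling A
change_over = response(2.0, B_overdamped) - response(1.0, B_overdamped)

# With a mis-tuned (too large) B, the same doubling of A produces a
# signal 50x smaller, which can vanish into the model's noise floor.
print(round(change_true / change_over))   # 50
```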
Fox watching the henhouse,
For those who are not aware of the Developmental Testbed Center (DTC) at NCAR, it is supposed to check new models for their improvement in forecasting. The problem is that it is funded by NOAA and NCAR, i.e. the organizations that are developing the models. I now come to find out that, on the website just above, there was a wintertime experiment with the WRF model (just like Sylvie did over 5 years ago). Note that Jimy Dudhia said it was wrong to check a winter case, but the WRF group in fact did so, without reference to Sylvie's manuscript. I call this intellectual dishonesty of the worst type.
From the website I posted, I come to find out that RTVS (the part of NOAA that develops models) clearly listed many more statistics than the current web site mentioned by Jimy Dudhia, including ones that show errors as a function of height, similar to what Sylvie did many years earlier. Jimy is also intellectually dishonest because he failed to mention the WRF test, knowing full well it had been run, and then had the nerve to criticize Sylvie's manuscript for the same sort of test. He is dishonest for failing to mention that the statistic I wanted to see (errors as a function of height) had been available before and used on the WRF model, but is no longer displayed on the RTVS site he quoted.
I will no longer respond to Jimy Dudhia, given his intellectual dishonesty. I have seen this enough times in his comments and from the same group at NCAR. Enough.
Jerry
Scenario (a real scenario): a zonal jet taking a hard right turn just west of the Rockies, with repeated forecasts depicting subsidence/offshore flow events in California lasting longer and being more intense than they end up being. Just saying…
Re #147
Thanks for the reply Jim D.
You say:
I guess I am going to have to keep working at clarifying my question, because I think you're right. Maybe my question is still too ill-posed.
You say:
But when I express concern over "things taken as fixed which are in fact variable", it's not parameters I'm concerned about being fixed, but circulatory features/scenarios against which model predictions are validated. Tom Vonk (#156) seems to understand my question, because he makes reference to the kinds of features I have in mind: ENSO, NAO, etc. I'm thinking of major features like the ITCZ and the things that characterize it (vorticity, periodicity, etc.).
So, again: are these features not structurally unstable? My very limited understanding of climate dynamics suggests some of them, to some degree, are. Next, if they are structurally unstable, then what would this mean for the overfit parameters that are derived through the trial-and-error process that you describe (basically, a genetic algorithm)? How do these seemingly irreducible uncertainties affect estimates of key parameters such as CO2 sensitivity?
That's about as straight as I can ask it.
An alternate scenario.
Factor B is a feedback affecting temperature response, in conjunction with factor A. The strength of the feedback is not known, indeed, even whether the feedback is positive or negative is not known. If the model assumes that the feedback is a moderate positive feedback, the model will find that A has a large impact on temperature.
But if B is in fact a negative feedback, then A will have a much, much smaller impact on temperature.
Thus changing A, when you don’t know the proper value of B, will tell you absolutely nothing regarding whether A is a strong or weak climate forcer.
In all of the early models, water vapor was ASSUMED to be a moderate to strong positive feedback. ALL of the recent science has indicated that water vapor is, in fact, a weak to moderate negative feedback. Yet every model that I have ever researched still assumes that water vapor is a positive feedback.
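For readers who want the arithmetic behind the feedback argument, the standard textbook relation is Delta_T = Delta_T0 / (1 - f), where Delta_T0 is the no-feedback response and f the net feedback factor. A sketch with illustrative numbers (the 1.2 K no-feedback value is commonly quoted, but treat it here as a placeholder); the sign of f alone flips the answer from amplification to damping, which is the point at issue.

```python
# Textbook feedback relation: Delta_T = Delta_T0 / (1 - f).
dT0 = 1.2   # no-feedback CO2-doubling response (K), illustrative

def with_feedback(dT0, f):
    assert f < 1.0, "f >= 1 would be a runaway feedback"
    return dT0 / (1.0 - f)

print(round(with_feedback(dT0, 0.6), 1))    # f = +0.6  ->  3.0 K  (amplified)
print(round(with_feedback(dT0, -0.6), 2))   # f = -0.6  ->  0.75 K (damped)
```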
Re: Neil Haven #3,
Yes, and yes.
I predict that we are about to find out how. For starters:
From one Timothy Chase:
http://www.realclimate.org/index.php/archives/2007/05/hansens1988projections/#comment33257
Responses to Jerry,
I am not responsible for what RTVS chooses to verify, but if you want to delve into verification sites, you need to understand the rms errors, otherwise you will have a tough time forming any kind of informed opinion on them. How about correlation? Is that not adequate either?
In verification at stations, models with smoother results (e.g. global models) tend to do better at rms, because the details in the higher-resolution models, even if only slightly out of phase, penalize them in this type of verification. I mentioned that rms is not such a useful measure for regional models.
Soil initialization is based on previous weather, to provide an estimate of soil moisture and temperature. It is important to know how dry or moist the soil is to get decent surface fluxes.
I don't understand the vertical vorticity question. Do you think models can't develop a correct vertical vorticity by themselves?
Then there is this
and the rant that followed. It is not wrong to check a winter case, as that is an important season for all kinds of weather. What I objected to was that physics tests where you turn off latent heating will not show up differences as much in the winter. I thought I said this clearly in #71, but you chose to misunderstand and distort. It does not serve you well to distort and base whole rants on that.
RE: #153, #154
Forecast models do vary in their answers, based on the methods of initialization and on the models themselves. Their variation is a measure of the uncertainty, since forecasting is not an exact science. This is far more useful than using one model, and ensemble methods are being developed to take advantage of this by attaching this type of error bar to forecasts. A similar thing goes for climate models that have been developed independently. When they agree on something, you take more notice.
Jimy Dudhia,
Go tune your piece of junk.
Jerry
#156 and #160, Tom and bender,
The way I understand it (and I am not a climate modeler), things like the NAO, El Nino, and other causes of interannual variability are checked in climate models, which have limited success in simulating them, especially El Nino, which needs a fully coupled ocean model. OK, so you say things that have been stable in the past may not be stable in the future, but you suggest models have been tuned to somehow make them stable, when they just are. Tuning would imply that some pre-tuned simulations showed them to be unstable, which is what I doubt happened. Also, climate modelers are looking for in-built assumptions, and have found some potential instabilities that have been well publicized, such as one group finding that the Amazon rainforest might disappear naturally, and another group finding that Arctic sea ice might be gone within a century. So this type of research shows that the stability assumptions are being continually questioned within the climate community.
MarkW, the parameters have physical interpretations, so if a model doesn't respond to a change the way it is expected to, that would be a red flag for the modeler. On water vapor feedback, you probably meant cloud feedback, because water vapor has a positive feedback, not just assumed, but by the laws of physics, which say that saturation vapor pressure increases with temperature, and water vapor is a greenhouse gas.
Well, I think this thread has finally run its course. I'll not be checking in here so often now, as I have found it to be a drain on my free time.
Signing out.
Jim D
For those on this thread,
Jimy doesn't get off quite that easily with his nonsense. I am working on making the runs that Jimy does not want you to see.
Jerry
Thanks very much for posting here, gentlemen. Hope to see you again when “Exponential Growth in Physical Systems #3” is open. Have a great weekend.
Since there's a break in the action, I recommend Useless Arithmetic (Pilkey & Pilkey-Jarvis, Columbia Univ. Press, 2007) for your reading pleasure, a companion piece to Orrell's The Future of Everything.
Re; #167
I eagerly await the run results.
For those on this thread,
I compiled the GNU version of g95 (Fortran 95) last night and will test it on a trial case. According to the g95 group, g95 will compile WRF (publicly available on the WRF site), and it should run on a Linux or Windows computer. So anyone who wants to can run the WRF model, with all of its kludges, on analytical or real cases to determine the impact on the accuracy of the numerics of all of the smoothing at the boundaries, filters, divergence damping (numerical instability), and other games that have been played. Any takers?
Jerry
Jerry
Jerry
All,
I can't believe I signed it three times again. I need to look a bit closer below the line before posting. 😦
Jerry
Yes, they all are in reality, Bender.
Once it has been admitted that the climate is as chaotic as the weather, you are in for the ride.
One has only to get through the smoke screen of temporal averages, scaling illusions and other not-yet-resolved-but-well-understood phenomena to find the good old chaos with all its attributes.
If you don’t admit that, what does that mean for the overfit parameters?
That you produce numbers that show what the models make them show.
That means numbers that move somewhere in the phase space and LOOK like climate while lacking the characteristic chaotic behavior.
The interesting and absurd result is that if somebody took the pains to go for a 10,000-year “simulation”, he’d probably still see a behavior without a trace of instability or unexpected transitions.
Everybody should read what people really do with ENSO simulations.
Actually, simulation is quite a strong word – what they do is assume a delayed oscillator (because a qualitative argument shows that such a system, if it is extremely simplified, would behave like a delayed oscillator), then add “atmospheric noise”, i.e. assume that everything else is random, and make computer runs.
As your basic model is both bounded and mathematically reasonable, the output stays bounded and looks reasonable.
The random part creates variability and the oscillator part creates stability.
Of course, both the model and the results have nothing to do with reality, but if you do, say, a 200-year run, you get a pseudo-periodic curve (e.g. for the pressures) that looks like an ENSO.
Now you can infinitely vary the boundary and initial conditions, the randomizing of the “noise”, couple it, uncouple it, and generally publish one paper per year.
However, all such papers do is say, in a very sophisticated and costly way, what a delayed oscillator with a random component does when certain assumptions are or are not met.
No chance that you ever get instabilities or fast transitions – a delayed oscillator doesn’t tend to do that.
Now if you were to say, “Hey guys, but ENSO is anything but a delayed oscillator with noise,” you’d only get uncomprehending stares and a “Huh? What are you talking about, and who are you in the first place?”
In a much more complicated manner, the climate models are doing more or less the same thing.
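The recipe Tom describes fits in a few lines. Below is a minimal sketch in the spirit of the Suarez-Schopf delayed oscillator; every parameter value is illustrative, chosen only so the toy runs, and nothing here is tuned to ENSO data or to any published model.

```python
import numpy as np

# A minimal "delayed oscillator + atmospheric noise" toy:
#   dT/dt = T - T^3 - alpha * T(t - delta) + noise
# All parameter values are illustrative, not tuned to ENSO.
rng = np.random.default_rng(0)
dt, alpha, delta, sigma = 0.01, 1.0, 6.0, 0.1
n_steps = 20000
lag = int(delta / dt)              # delay expressed in time steps

T = np.zeros(n_steps)
T[:lag + 1] = 0.1                  # constant history as initial condition

for i in range(lag, n_steps - 1):
    drift = T[i] - T[i]**3 - alpha * T[i - lag]
    T[i + 1] = T[i] + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal()

# The cubic term keeps the output bounded, the delay term drives the
# oscillation, and the noise makes it irregular -- exactly the three
# ingredients of the recipe, and all the output can ever tell you about.
print("min =", T.min(), " max =", T.max())
```

Whatever run length or noise realization you pick, the trajectory stays bounded and reasonable-looking, which is the point: the appearance of the output is built into the assumptions, not discovered by the computation.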
The WRF Documentation,
Well, I have read through the WRF documentation and the model is a monstrosity.
There are 9 different versions of microphysical schemes, 3 longwave radiation schemes, 4 shortwave radiation schemes, 2 subsurface layer schemes, 3 land surface schemes, 3 planetary boundary layer schemes, 3 cumulus schemes, multiple diffusion and damping options, etc.
Many of these schemes have been around in one form or another for many years. Some are relatively new. As I mentioned before the WRF group has gathered almost every possible scheme and added it to the code and this game has surely contributed to the large number of lines of code (but not necessarily to the quality of the model). I will provide more info on how much code there is in various areas as I proceed. (I have attempted the first compilation of the code with some success, but there are still some problems with the compilation as might be surmised with this much code).
There are a number of problems with this approach that I can already see.
The first is that the WRF group requires that you load the netCDF code from NCAR. Fortran allows input and output by itself, and although netCDF is supposed to make it easier to read and write data that can be transmitted between machines, it is an extra set of code to compile and to maintain on a machine (clearly this helps NCAR keep its claws on a user). A simple model does not need this code to produce and plot output on a PC.
Currently there is only one dynamical core available and it has a number of known stability problems. I believe that the average user is not going to code a new solver, even though it is relatively easy to do so. Thus I believe this approach discourages competition and creativity among young scientists, and benefits NCAR but not science.
If the model has basic problems in the dynamical solver as I believe,
then all of the rest of this is fluff and tuning parameters meant to allow the user to overcome the basic numerical problems (accuracy and stability) with the dynamical solver.
NCAR has extensive tutorials on how to use this black box. I think the time spent learning how to use a black box would be better spent coding your own simple model on a PC, so that one fully understands exactly what tricks
are incorporated in the black box. I would not risk my scientific reputation
on a piece of code this large without fully understanding the code (and I think that is an impossibility given the size of the WRF model). However, I am sure that many lazy people will use the WRF model because it is an easy way to publish questionable manuscripts that are accepted by AMS journals.
I will return after successfully compiling the code and running a test case.
Jerry
Tom Vonk #172
You may be interested in the hatchet job by William Connolley on my comment at this thread at RC:
http://www.realclimate.org/index.php/archives/2007/05/climatemodelslocalclimate/#comment33923
I had pointed out how in his essay with James Annan he said that [paraphrase] climate is not chaotic, but then was forced to backpedal here, in response to comment #27:
http://www.realclimate.org/index.php/archives/2005/11/chaosandclimate#comment5396
admitting that sometimes climate is chaotic. I’m not trying to show him up. It just seems to me he and some others may be in denial over the true nature of the global circulation. I’m not asserting this. I’m asking.
Your input in the debate there might be helpful to obtaining a resolution of the issue.
For the record, two instances where knowledgeable people at RC have distanced themselves from William Connolley’s untenable position that “climate is not chaotic”.
R. T. Pierrehumbert:
http://www.realclimate.org/index.php/archives/2005/11/chaosandclimate#comment5427
Isaac Held:
http://www.realclimate.org/index.php/archives/2005/11/chaosandclimate#comment5479
And yet Connolley persists in both his rhetoric and his heavy-handed control of the dialogue. This is non-scientific, activist behavior, bordering on paranoia. Keep the emperor clothed.
bender,
If you are not able to back up your arguments on your own, it seems to me that you should not engage in those arguments and then ask Tom to
fight your battle for you.
Jerry
Re #176
Believe me, Jerry, I did think about that before posting #174. I think those are wise words in #176, particularly if self-preservation is important to you. Unfortunately, I do not have the luxury of staying within my area of expertise, seeking only to maximize my personal credibility. I need answers fast, and so must be willing to go out on a limb on occasion, trying to provoke people into answering my questions. And, where possible, getting people smarter than me to ask the big questions for me. i.e. It’s not a defense I’m seeking; it’s answers. (My skepticism comes honestly.)
Thanks for the advice, but more importantly, thanks for the extended lesson on “exponential growth in physical systems”.
bender,
I do not think anyone knows for sure if the climate is or is not chaotic.
There are certainly areas of fast exponential growth in local solutions
that would lead to large numerical errors of any numerical solution.
In the actual continuum equations those solutions are bounded – just next to impossible to compute accurately. That is why I am so skeptical of
any numerical climate or NWP model if these local areas are triggers for smaller scale storms.
The original equations that Ed Lorenz derived were an extremely simplified version of the interaction of several spectral modes of the inviscid equations of motion and could hardly be called an accurate approximation of the full system of equations. However, those equations led to a strange attractor and an entire field of mathematics that analyzed ODEs with strange attractors. I do not agree that Ed’s simple example implies anything about the full system. On the other hand, if the full system
has something similar, it certainly raises disturbing issues. The
presence of large, fast exponential growth also raises serious
questions.
I look forward to hearing Tom’s thoughts on this text.
Jerry
Jerry
Thanks Jerry Jerry 🙂
Bender
Sorry to disappoint you, but I am almost physically allergic to “Realclimate”.
I have been there twice, long ago, and had to admit: nevermore.
It is not a place where you can lead a meaningful discussion, because people like Connolley and Gavin (to mention only 2) bring out the worst in you.
Their contributions are brimming with aggressiveness, intolerance and dogmatism – it made me think of what an Inquisition court could have looked like, with boiling lead and iron maidens included.
As my living doesn’t depend on scientific publications in the climate field, but I have some knowledge about chaotic systems, I prefer to spare myself the high adrenaline levels 🙂
That’s why I couldn’t read the links you provided.
As for the question whether the climate is chaotic, it is almost common sense.
The first difficulty is to define chaos – up to now many necessary conditions are known (all of them present in the climate), but nobody can define with mathematical rigor what a sufficient condition would be.
The second difficulty is stochastics, and that’s where many people make the jump from weather to climate.
Indeed, chaos is all about the trajectory of a system in the phase space.
While this trajectory is continuous and unique, it can’t be foreseen beyond a certain period of time.
However, some chaotic systems present one or several strange attractors, which are subspaces of the phase space with the property that all trajectories may move ONLY within those subspaces.
So if those subspaces are relatively “small”, then despite the inability to predict exactly the evolution of a trajectory, you can bound it (in a space with several million dimensions) and thus separate the possible from the impossible.
Of course, if you have e.g. 2 connected attractors (like an 8), then a system that is in the lower part (e.g. an ice age) can stay there a certain time and then unpredictably move to the upper part (e.g. a warm age) by a fast transition.
Obviously the climate models don’t do anything like that, but when the modellers talk about damping and averaging, then without knowing it they postulate that the system is moving within a “small” strange attractor.
Typically the RANS (Reynolds-averaged Navier-Stokes) is such an approach.
If you take the original NS and make a change of variable, substituting for the instantaneous variable v(t) the sum of its time average over T, V(T), and a perturbation term w(t) – which is legitimate if v(t) is continuous – you get another set of equations.
As the original NS is chaotic, the RANS are also chaotic, so you have won nothing.
But your PDE now has a time-independent term V(T), and the new variable is w(t), while v(t) = V(T) + w(t).
Again, and it is common sense: if v(t) is chaotic, then w(t) is chaotic too. BUT V(T), being time independent, is NOT chaotic.
So you see the opening – if I can say something special about w(t), like that it is “small”, it is “random”, etc., then I could get solutions where v(t) would be the sum of a stationary solution over T and another calculated factor.
Once you have that, you go from [0,T] to [T,2T] and redo the same trick.
It works in some simple systems where w(t) seems arguably to be more or less random.
That is also basically what the models do – hence formulations like ENSO = delayed oscillator + noise.
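The decomposition itself is easy to demonstrate on any chaotic signal. A minimal sketch, using the logistic map as a stand-in chaos generator (purely illustrative; nothing Navier-Stokes about it):

```python
import numpy as np

# Decompose a chaotic signal v(t) into its time average V over a
# window plus a perturbation w(t), so that v(t) = V + w(t).
# Chaos generator: the logistic map in its fully chaotic regime.
v = np.empty(10000)
v[0] = 0.2
for i in range(len(v) - 1):
    v[i + 1] = 4.0 * v[i] * (1.0 - v[i])

V = v.mean()        # time-independent: NOT chaotic
w = v - V           # carries all of the chaos

print("V =", round(V, 3), " std of w =", round(w.std(), 3))
# The decomposition is exact (v == V + w), but V alone says nothing
# about w(t); calling w "small" or "random" is an extra modelling
# assumption, not a consequence of the change of variable.
```

The single number V is perfectly well defined and perfectly non-chaotic, which is precisely why it is tempting and precisely why it proves nothing about the behavior of w(t).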
So now, even if this is only an analogy based on NS, when somebody says that climate is not chaotic, it translates to: the time averages of all parameters are not chaotic AND the “perturbation” terms are random or negligible or whatever.
To the first it can be said that it is rigorously true over [0,T] if and only if you have the right PDEs.
However, the jump to [T,2T] is an initial condition problem, and you’d have to check all that Jerry is talking about here – convergence, error propagation, etc. – while not having the right PDEs anyway.
To the second it can be said that it is a boundary problem of the famous unresolved dimensions and their interaction with the resolved dimensions.
It is an extremely complicated problem with precious few results – the least that can be said is that there is no magical minimal resolution size above which things matter and below which they either don’t or can be easily parametrised (remember: we know all the physics and there are no surprises in store, according to Jim).
On top of the above, which is a mathematical problem (preferably not ill posed), you get a numerical problem due to the fact that you can’t even begin to dream of having all the equations, let alone solving them, so you have to simulate the system numerically.
Wrapped up, that translates to: you produce numbers that are supposed to converge to an unknown exact solution of an unknown number of equations. Hardly looks like science, does it?
Last but not least, and back to chaos. Jerry has mentioned the Lorenz system, which is an easy system of 3 nonlinear ODEs producing chaos.
Even such a system exhibits unpleasant dependence on unphysical parameters like the time and space step, and stays stubbornly unpredictable.
It is not excluded, and it is my belief, that no physically feasible numerical resolution can converge to or even approximate
the exact solution. The reason being that an infinitely small integration step would imply an infinite calculation time, which is not physical.
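The step-size dependence is easy to see for oneself. A minimal sketch: the Lorenz-63 system with the classic parameters, integrated twice from the same initial condition with forward Euler at two different time steps (the scheme and step sizes are my choices for illustration, not anyone's production setup):

```python
import numpy as np

# Lorenz-63, integrated twice from the SAME initial condition with
# forward Euler at two different time steps.  Both runs stay bounded
# on the attractor, yet after a modest integration time they disagree
# pointwise: the computed "solution" depends on the unphysical
# parameter dt.
def lorenz_euler(dt, t_end, state=(1.0, 1.0, 1.0)):
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = state
    for _ in range(int(round(t_end / dt))):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return np.array([x, y, z])

a = lorenz_euler(dt=1e-3, t_end=20.0)
b = lorenz_euler(dt=5e-4, t_end=20.0)
print("dt = 1e-3 :", a)
print("dt = 5e-4 :", b)
print("difference:", np.abs(a - b))
```

Halving dt does not "refine" the answer toward anything; it produces a different trajectory on the same attractor, which is the whole convergence problem in miniature.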
For further chaos reading I strongly recommend Dan Hughes’ site: http://danhughes.auditblogs.com/
Thanks Tom
Darn emotes .
In the above read : (like an 8 .) and not (like an 8)
Jerry
I completely agree with your point of view.
I never considered the Lorenz system as something meaningfully physical, even though he derived his equations from fluid dynamics.
They certainly don’t adequately represent weather or climate or things like that.
However, it is the oldest example of a very simple ODE system that gives rise to deterministic chaos and presents a strange attractor.
It is a simple chaos generator, simple enough to be mathematically studied.
General mathematical results obtained from the Lorenz system may be generalised to other chaotic systems – that’s its value.
Everybody can test on it the usual treatments (Lyapunov coefficients, averaging, sensitivities, etc.) and more or less understand the results.
Actually rather less than more, and that is what has always mesmerized me – how only 3 lines with a couple of simple terms can be so resistant to all attempts to understand what this thing does.
Now the certitude is that IF we were able to describe the climate by equations, we’d get a hugely complex nonlinear PDE system compared to which the Lorenz system would look like a super-easy primitive toy.
And even if it is not a proof, such a system could only be chaotic at all time scales.
Tom
Agree up to the last statement, which is a leap in logic. No one knows
the answer because we do not actually know if there is a continuous system of equations that describes the dynamics, physics, etc. of the real climate at all space and time scales. And if there were, as you have stated, it would be heinously complex far beyond the simple system approximated numerically and parametrically in climate models. Over long periods of time, no one knows what simplifications, if any, would be allowed
despite the wishful thinking of the climate modelers. I think on this we are both in full agreement.
Jerry
Jerry
Tom,
One other quick point. I seem to recall that Ed used an approximation of the inviscid system. The unforced viscous, incompressible NS equations in 2D and 3D have very different properties because of the dissipation terms; i.e., it is exactly those terms that allow the Henshaw, Kreiss, and Reyna smallest-scale estimates to be made. Now the unforced viscous, incompressible NS equations are not the equations that describe the climate, but the body of work does indicate that there is a vast mathematical difference between the inviscid and viscous cases, and shows the impact of using an incorrect kind or size of viscosity in an attempt to compute the continuum solution that has the correct kind and size of kinematic viscosity.
I will see if I can manage to sign just once. 🙂
Jerry
Re #180. Tom Vonk, in #175 Bender gave links to some very thoughtful comments by Pierrehumbert and Isaac Held – no Gavin or William involved here – so to refuse to read these comments seems a bit silly. Pierrehumbert and Isaac Held are experts in chaotic systems and climate dynamics. I have seen a number of their papers and they clearly have much more knowledge than you have about turbulence and the role of chaos in atmospheric and ocean flows, and climate. You might learn something from them. There is clearly chaos on short time scales, and perhaps there is chaos on longer time scales; this cannot be ruled out if I understand their comments correctly. However, the key question is: how large are the fluctuations? There seems little evidence that chaos will bring about a global temperature change of a degree or more on a time scale of a century, surely not of the same order as what a doubling of CO2 can bring about. Think of a turbulent flow (my field of expertise). Chaos can be present on all scales. However, measurements and simulations show that beyond certain length and time scales the fluctuations are insignificant. Furthermore, the mean variance of turbulent fluctuations is bounded and in fact shows clear scaling behaviour. This is because the chaotic system still obeys physical laws, which you easily forget if you think only in terms of Lorenz equations. Measurements and simulations of turbulent flows (DNS, LES) are possible because the statistics (mean and variance) can be reproduced, in contrast to every individual realization. So it does not seem so odd to me that long-term weather predictions are impossible (of course) but climate predictions are not.
By the way, Bender and Tom, how do you think that chaos can cause a change in the global mean temperature? The chaos in the ocean and atmosphere does not create or destroy heat. Take a pan of water and stir the water. It will be very difficult to predict the flow, but I can tell you what happens with the temperature: nothing. The chaos in the atmosphere and the ocean just redistributes the heat. The only way that chaos can change the mean global temperature is by changing the albedo of the Earth, i.e. clouds or sea ice. Does that seem likely, and will that cause a large temperature change within a century? I just want to mention these points because people here seem to think that chaos can cause all kinds of magic: it doesn’t. Physical laws are still valid, and physics puts some clear constraints on the chaotic fluctuations, also in the climate. That doesn’t rule out that some surprises can occur, but that should be a reason to be even more careful about adding CO2 to the atmosphere, not less. However, I have to say that I am not a climate expert, so perhaps some people can correct me.
#186
To avoid misunderstandings: my comments extended only to my allergy to the “Realclimate” site.
Of course, independently of that, I have read papers of Pierrehumbert, Held, Annan & Co.
So I avoid the site, but not papers concerning nonlinear system dynamics.
I don’t agree with the statement that the key question is the scale of the fluctuations.
That question supposes the problem solved, namely that I am able to calculate the fluctuations, which I am generally not.
The key problem is to answer the question of the convergence of numerical methods applied to chaotic systems, and specifically of the dependence on the time and space steps.
This key problem is not yet solved even for the simple Lorenz system .
I also don’t know of any evidence that chaos would or would not change the global temperature (whatever this may mean) by X degrees in T years .
This for the simple reason that global climate models do NOT consider the system as being chaotic (see the comment about averaging).
So it is a kind of self-fulfilling prophecy to say that a model that is not chaotic by design does not exhibit chaotic behavior.
On the other hand, I agree that chaotic systems are governed by physical laws; that’s why it is a question of deterministic chaos.
Whether the bounds on the trajectories (existence and metrics of the attractor(s)) are sufficiently small to meaningfully speak of an “average” trajectory remains to be shown, and we are far, very far from it.
It is right that some turbulent systems can be meaningfully treated by stochastic methods, but it is wrong to say that every chaotic system can be meaningfully treated that way (multidecadal phenomena are clearly in this case).
Now to the point that chaos “only” redistributes energy in a non-predictable way.
That is very obviously true, because the only sources of energy for the earth are the sun and gravitation.
But it would be leaning very far out of the window to say that because energy is conserved, the way it is dissipated doesn’t matter and the system has any number of quasi-invariants, “global” temperature being one of them.
All the radiation properties that can be summed up under the general term albedo can vary wildly according to the evolution of literally thousands of parameters, all interacting with each other.
Believing that a couple of global averages (like temperature and albedo) explain the long-term variations of a chaotic system would be like believing that the million dimensions of the phase space can be reduced to only a couple of dimensions that represent 99% of the behavior.
I am right now building a dissipative chaotic system model with only radiation properties, to try to show that for different temperature distributions having the same spatial average we get EXTREMELY different dynamics and very different spatial-average temperature evolutions.
That’s one example of how chaos can very significantly change the trajectories of a system with the same averages and the same energy input.
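The static version of the point is already visible in the Stefan-Boltzmann law. A minimal sketch, with two made-up four-cell temperature "fields" sharing the same spatial mean (the cell values are purely illustrative):

```python
import numpy as np

# Two temperature fields with the SAME spatial average radiate
# different total power, because emission goes as sigma*T^4.
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

uniform    = np.full(4, 288.0)                       # mean = 288 K
contrasted = np.array([248.0, 268.0, 308.0, 328.0])  # mean = 288 K

flux_uniform    = sigma * np.mean(uniform**4)
flux_contrasted = sigma * np.mean(contrasted**4)
print(f"uniform field    : {flux_uniform:6.1f} W/m^2")
print(f"contrasted field : {flux_contrasted:6.1f} W/m^2")
# Jensen's inequality (T^4 is convex) guarantees mean(T^4) >= mean(T)^4,
# so the contrasted field radiates strictly more at the same mean T.
```

A single global-mean temperature therefore does not pin down the radiative budget; the distribution behind the mean matters, which is the static seed of the dynamical point above.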
gb,
You, like the zealots at RC, are missing COMPLETELY my point. I, unlike IPCC, am interested in the uncertainty (i.e. error bars) surrounding the estimate of the CO2 sensitivity coefficient – which is derived through parameterizations with certain types of complex models. If those models are structurally incorrect, the error in that model specification propagates through to the parameter estimate. I am NOT saying that chaos creates global heat. Someone else might have said that at some time, but PLEASE stop attributing such sophomoric statements to me. That is an RC trick. I understand the difference between redistribution and rise.
Now, you tell me what the RCers couldn’t: what caused the Arctic warming in the 1930s-40s? Hansen et al. (2007) admit that they do not know, and are ready to give up looking for a regional-scale forcing, factor X. You will notice how that issue was dodged completely at RC? Is it possible that our estimate of the current global warming trend is exaggerated by a modern recurrence of factor X? Yes, it is.
“Where is the heat coming from?” This sounds like Steve Milesworthy. Remember: you have not censused the temperature field. You have sampled it. Incompletely (and highly inefficaciously, I might add). Now, what if the heat bounces around chaotically from areas where you’ve sampled accurately (precious few) to areas where you’ve not sampled or sampled inaccurately (the great majority of land surface, sea surface, ocean depth, earth depth, upper atmosphere, etc.)? What will that do to your search for “global” forcings and your “global” parameter estimates? It will bias them. In short, I am suggesting the CO2 sensitivity parameter may be overestimated for SEVERAL reasons, inappropriate model choice and faulty parameterization being just one of these.
None of these issues are on the table however – because the issue of uncertainty surrounding parameter estimates has been taken off the table from day one. If the “experts” would put the statistical issues on the table, it could be discussed by people more qualified than myself. So why is it off the table? Why is it the unmentionable topic? Why is it left to the “unqualified” people to do what the qualified should be doing?
In conclusion: climate chaos is not a hypothesis designed to discredit the models. It is just one of the reasons why climatologists might be hooked on overfitting ad hoc models. Overfit, ad hoc models are error-prone models. Maybe there is some error in the estimation of the CO2 sensitivity coefficient? It is just a question. Why can this question not be asked without stirring up a witch-hunt?
gb (#186),
Have you read the manuscript on the smallest-scale estimates by Henshaw, Kreiss, and Reyna yet, as I asked you to do a long time ago?
You are not an expert in turbulence (and neither is Isaac Held) until you understand and can explain the mathematics behind those estimates.
Jerry
Viscous Dissipation of Fluid Motions and Conversion to Thermal Energy
In the previous discussions of chaos, dissipation, and ill-posedness in the comments listed above in this post, there was little mention of the thermal effects of physical dissipation in the fluid motions of interest. I have been looking for information on this subject without much success. I have contributed these notes to a thread at Climate Science (the site appears to be down at this time; I’ll add a link later). Viscous dissipation has also been the subject of this post. To date I have gotten very little feedback of any kind. Maybe all this is incorrect, but no one has said that either. All pointers to literature relating to the following will be appreciated.
To be clear, I am referring to the conversion of fluid motions into thermal energy via the viscous shear-stress terms in the momentum balance equations. These are momentum diffusion contributions to the momentum balance equations. The thermal energy due to viscous dissipation appears as a positive-definite contribution to the various forms of the thermal energy conservation equation. Viscous dissipation always acts to increase the thermal energy of the fluid. If a temperature representation is used for the thermal energy equation, it always acts to increase the temperature.
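For reference, the positive-definite term in question has the standard textbook form for an incompressible Newtonian fluid (this is generic fluid mechanics, not anything specific to any particular GCM):

```latex
% Viscous dissipation function Phi (always >= 0) and its appearance as
% a source term in the temperature form of the energy equation:
\Phi \;=\; \frac{\mu}{2}\sum_{i,j}
      \left(\frac{\partial u_i}{\partial x_j}
          + \frac{\partial u_j}{\partial x_i}\right)^{\!2}
\;\ge\; 0,
\qquad
\rho c_p \,\frac{DT}{Dt} \;=\; \nabla\cdot\left(k\,\nabla T\right) + \Phi .
```

The sum of squares makes the positive-definiteness, and hence the one-way conversion of motion into heat, explicit.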
Viscous dissipation is a volumetric process occurring at all times so long as fluid motions are present. The process is constantly acting to increase the thermal energy content of the fluid and thus increase its temperature.
In contrast, I am not referring to the explicit and implicit viscous-like terms that arise from, and are sometimes added to, the discrete approximations to the continuous form of the momentum equations. Somewhat ironically, these terms seem to be frequently labeled as ‘momentum dissipation’. The label momentum dissipation seems to be used in the GCM world more than in other computational fluid dynamics applications. I think it is a good assumption that the viscosity-like coefficients that are used for momentum dissipation are not used to calculate the viscous dissipation contributions to the thermal energy equations.
It is of course true that these momentum dissipation additions to the momentum balance equations have an indirect effect on the viscous dissipation, to the extent that they modify the velocity distributions and gradients in the flow. These latter are the correct terms for calculating the viscous dissipation.
Modeling and calculation of the viscous dissipation and the consequent thermal energy addition in GCM models has a somewhat checkered history. This is due in part to the evolutionary nature of the models and changing application areas. More nearly complete and comprehensive accounting of the components of, and physical phenomena and processes occurring in, the climate system has generally developed over decades. Applications to calculations of the thermal history of the planet over hundreds of years have required that the energy-conservation aspects of the modeling be fundamentally sound and theoretically correct. However, a large contribution to the problem has been the approximations made at the continuous-equation level of the modeling. The momentum balance equations used in the models are simplified versions of the complete equations. More specifically, the thin-atmosphere approximation on a spherical surface, the representation of surface drag, the no-slip condition at land-atmosphere interfaces, the corresponding boundary condition at ocean-atmosphere interfaces, and the decomposition of the velocity into horizontal and vertical fields have also contributed to the problem. While calculations and analyses with the GCM models/codes have been carried out over four or five decades, it seems that only late in the 20th century and early in the 21st century have the problems with the modeling and calculation of the viscous dissipation been corrected in some of the models/codes. Two somewhat recent discussions have been given by Boville and Bretherton, and Becker.
Again, this situation is most likely a reflection of the interest in carrying out calculations over 100s of years of time.
It is my understanding that the global-average volumetric viscous dissipation in the atmosphere is calculated to be equivalent to about 2 W/m^2 of energy, and I have seen much higher values. I do not know if there are estimates available from measured data in the atmosphere. A very wide spread has appeared in the literature over the years. This conversion of fluid motions into thermal energy has occurred for as long as the composition and motions of the atmosphere have been roughly equivalent to present-day conditions.
The standard argument is that this is a small number relative to the other energy-addition contributions to an energy balance for the planet. However, the radiative-equilibrium argument means to me that, as equilibrium is approached, few energy additions can be considered a small number and neglected. Almost all finite numbers will not satisfy attempts to make 0 = 0. As I understand the situation, the effect of a doubling of CO2 in the atmosphere is equivalent to about 1.6 W/m^2, and the effects of the consequent changes in the thermal-energy state of the planet will be easily measured and observed in time spans of only 100s of years. How can an equilibrium-based approach to descriptions of the thermal-energy state of the planet neglect internal volumetric sources such as viscous dissipation if they are of the same order as the assigned primary imbalances?
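The scale comparison being made can be written out as back-of-envelope arithmetic. The two flux values are the ones quoted above (~2 W/m^2 for viscous dissipation, ~1.6 W/m^2 for doubled CO2); nothing below is measured data, only the accumulation over a century of a constant per-square-metre flux.

```python
# Accumulate a constant flux imbalance over one century, per square
# metre.  Flux values are the ones quoted in the post, illustrative only.
seconds_per_century = 100 * 365.25 * 24 * 3600

for label, flux in [("viscous dissipation", 2.0), ("2xCO2 forcing", 1.6)]:
    joules = flux * seconds_per_century
    print(f"{label:20s}: ~{joules:.2e} J/m^2 accumulated over 100 years")
```

The two accumulated totals are of the same order, which is the post's point: over century-long integrations, a ~2 W/m^2 internal source cannot be dismissed while a ~1.6 W/m^2 forcing is being tracked.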
I think another important issue is related to the development of the continuous equations used in CGM models/codes. Taken as a whole these equations are known to be incomplete. The basicequation models for the fluid motions and thermal state are not the complete equations that describe the motions and energy conservation. The allencompassing parameterizations, many of which deal with mass and energy sources and sinks and interchanges across subsystem interfaces, are ad hoc/heuristic, bestexpertapproximations (EWAGs) and thus cannot be assured of complete accounting of the mass and energy balances for the processes that are parameterized. In summary, the continuous equations very likely do not accurately account for the mass and energy conservation that actually occur in the physical system. I suspect it is easily possible for the lack of completeness and complete understanding to be responsible for several W/m^2 difference between the model equations and physical reality.
Is it not possible that the differences between the model equations and the actual physical phenomena and processes incur errors on the order of a few W/m^2. Again this might be a small number relative to the macroscopic energy balance for the planet, but as equilibrium is approached, and the imbalance is accumulated over 100s years of time in a calculation, significant differences are very likely possible. It is an important issue that the level of incompleteness and imbalances in the modeling at the continuousequation level, relative to physical reality, must be significantly less than the physical imbalances that are driving the planet toward a new equilibrium state.
Finally we come, as we always do, to the fact that the numbers are the results of numerical solution methods. Ensuring strict accounting and conservation of the energy distributions within the system requires extremely close attention to how the numerical methods are developed and implemented. An example of how easy it is to overlook important details is given by the usual practice of numerically integrating different parts of a model system using different time steps. A related issue arises in calculations using parallel-computing capabilities via various approaches to domain decomposition. Exchanges of mass and energy at interfaces between subsystems present another opportunity to overlook mass and energy conservation requirements; generally these must be evaluated at the same time-step level in order to ensure strict conservation.
It is important to note that while there are many ways to account for the mass and energy conservation of a given calculation, this process in no way ensures that the calculations are in accord with, and reflect, the actual mass and energy balances and conservation in the physical phenomena and processes. However, it remains an important requirement that the numerical imbalances be significantly less than the physical imbalances that are driving the planet toward a new equilibrium state.
As a stable equilibrium state (the radiative-equilibrium state, for example) is approached, no imbalance can be counted as small and dismissed. The imbalance between physical reality and the continuous model equations is almost certain to be a genuine problem. The effect of viscous dissipation, constantly acting to increase the thermal energy of the atmosphere, is physical reality; I am uncertain of its actual physical value. The imbalances introduced by numerical solution methods are very likely a problem in some GCM models/codes; this problem has been discussed as recently as 2003. Small imbalances acting constantly over long periods of time cannot be ignored as equilibrium states are approached.
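The point about small persistent imbalances can be made concrete with a back-of-envelope calculation. The numbers below (a 1 W/m^2 imbalance and a 200 m ocean mixed-layer depth) are illustrative assumptions, not values taken from any model:

```python
# Back-of-envelope: how much a small, persistent energy imbalance
# accumulates over century time scales. All inputs are illustrative.

SECONDS_PER_YEAR = 3.156e7

imbalance = 1.0            # W/m^2, hypothetical persistent imbalance
years = 100

# Energy accumulated per square meter over the period
accumulated = imbalance * SECONDS_PER_YEAR * years   # J/m^2

# Spread that heat over an assumed 200 m ocean mixed layer
rho_water = 1000.0         # kg/m^3
c_water = 4186.0           # J/(kg K), specific heat of water
depth = 200.0              # m, assumed mixed-layer depth
delta_T = accumulated / (rho_water * c_water * depth)

print(f"accumulated energy: {accumulated:.2e} J/m^2")
print(f"mixed-layer warming: {delta_T:.2f} K")
```

Even a 1 W/m^2 term, steady over a century, is several kelvin of mixed-layer warming, which is why "small relative to the macroscopic balance" does not imply "negligible near equilibrium."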
Here is a curious side issue that was discussed in this old paper by H. A. Dwyer from 1973 located here. The results given in the paper indicate that the effects of his estimate of the power-generation activities of humans can easily be seen in the calculations with his model. He used 15.0 x 10^18 BTU/yr (1.58 x 10^22 Joules/yr) as the ‘heat generation’ by mankind over a period of 100 years. Energy-conversion activities by humans are another of those processes that is constantly occurring and adding energy into the climate system.
Worldwide energy consumption by the human race is over 446 quadrillion BTU at the present time. This is equivalent to 131,400 TWh, or 471,000 PJ (1 PJ = 10^15 J), per year. If we take an average efficiency of 33%, the total energy conversion is about 3 times the consumption, or 1,413,000 PJ per year = 1.413 x 10^21 J/year. This is within a factor of 10 of the value used by Dwyer. (While we consumed about one-third of the total conversion, all the energy converted will reside in the climate system until it is lost to space.) Can this be another source of internal energy conversion that cannot be ignored over long time scales as an equilibrium state is approached?
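The unit conversions above can be checked directly. This sketch simply redoes the arithmetic with the figures quoted in the comment (446 quadrillion BTU/yr, 33% efficiency, Dwyer's 1.58 x 10^22 J/yr):

```python
# Checking the arithmetic in the comment above, using its own figures.
BTU_TO_J = 1055.06                 # joules per BTU

consumption_btu = 446e15           # quadrillion BTU per year, as quoted
consumption_j = consumption_btu * BTU_TO_J     # joules per year
consumption_pj = consumption_j / 1e15          # petajoules per year
consumption_twh = consumption_j / 3.6e15       # TWh per year (1 TWh = 3.6e15 J)

efficiency = 0.33
total_conversion_j = consumption_j / efficiency  # ~3x consumption

dwyer_j = 1.58e22                  # Dwyer's assumed heat generation, J/yr
ratio = dwyer_j / total_conversion_j

print(f"consumption: {consumption_pj:.0f} PJ/yr = {consumption_twh:.0f} TWh/yr")
print(f"total conversion at 33% efficiency: {total_conversion_j:.2e} J/yr")
print(f"Dwyer's figure is ~{ratio:.0f}x larger")
```

The numbers come out close to those in the comment: about 471,000 PJ/yr consumed, about 1.43 x 10^21 J/yr converted, and Dwyer's figure roughly an order of magnitude above that.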
The thermal state of the planet, as measured by the temperature, is a strong function of the thermodynamic processes occurring within the climate system. The temperature distribution near the surface is determined by the transport and storage of the energy additions to the system; not by the energy additions alone.
Chaos and Butterflies
Dissipative systems (physical or mathematical) will have several attractors (assuming attractors exist). The typical Lorenz-like ODE systems are dissipative, and conserve energy in the dissipationless limit; thus these systems, of which the original 1963 and the later 1984/1990 systems are examples, have more than a single attractor. The dissipative and energy-conserving-in-the-dissipationless-limit properties are generally considered necessary in order for systems of ODEs to be Lorenz-like.
The oscillatory/periodic-like/aperiodic response seen in calculations with these systems remains bounded primarily because of the linear terms on the right-hand sides of the equations; the plots of the dependent variables from Lorenz-like ODEs ‘look’ like bounded numerical instabilities. These linear terms are damping terms in the equations: the resistance offered to fluid motions, for example. If the coefficients on these terms are increased slightly from the usual default values of unity, the system can be shown to go from slightly underdamped to massively overdamped, with the trajectories smoothly approaching equilibrium states. The effect is very dramatic on graphical plots.
It is possible to find values of the coefficients that produce periodic responses having almost no change in frequency or amplitude. And some values will bring the initial trajectory motion away from the initial point to a screeching halt at a new equilibrium point. In general, the chaotic response properties are lost and deterministic predictability returns.
I suspect, but haven’t yet done any calculations, that coefficient values less than unity might lead to unbounded responses. The nonlinear terms in the equations might also provide contributions that assist in maintaining boundedness. It would also be of interest to investigate the effects of modeling the momentumequation resistance as a nonlinear function such as in turbulent flows.
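The damping experiment described above can be sketched numerically. This is a minimal illustration, not any published formulation: the factor `d` scaling the linear terms is one of several possible placements, and all parameter values are the standard Lorenz-63 defaults.

```python
import numpy as np

# Lorenz-63 with a factor d scaling the linear (damping) terms.
# d = 1 recovers the classic chaotic system; larger d overdamps it
# and the trajectory settles onto an equilibrium point.

def lorenz_rhs(s, d, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - d * x),
                     x * (rho - z) - d * y,
                     x * y - d * beta * z])

def integrate(d, s0=(1.0, 1.0, 1.0), dt=0.005, steps=10000):
    """Classic RK4 integration out to t = dt * steps."""
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        k1 = lorenz_rhs(s, d)
        k2 = lorenz_rhs(s + 0.5 * dt * k1, d)
        k3 = lorenz_rhs(s + 0.5 * dt * k2, d)
        k4 = lorenz_rhs(s + dt * k3, d)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

chaotic = integrate(d=1.0)   # wanders on the strange attractor
damped = integrate(d=6.0)    # decays smoothly toward the origin

print("d=1 final state:", chaotic)
print("d=6 final state:", damped)
```

With this scaling the origin becomes a stable fixed point once d^2 exceeds rho, so for d = 6 the trajectory comes to the "screeching halt" described above, while d = 1 keeps the familiar chaotic wandering.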
The numerical solution methods used in NWP and AOLGCM models/codes have both implicit and explicit numerical damping in addition to the physical damping contained in the basic-equation models for mass, momentum, and energy for the fluid motions. These systems are dissipative and might be energy-conserving in the dissipationless limit. I do not know whether the codes could even maintain bounded responses if the ‘momentum dissipation’ terms were not present. The effects of the numerical damping on the ‘chaotic response’ properties of the continuous equations are not known, of course.
When performing an initial-value sensitivity analysis, specified initial conditions are varied and the response of the calculations is observed. It is not possible to know a priori which attractor of a dissipative system a given trajectory will approach. This property seems to mean that even long-range calculations of responses cannot be assumed or hypothesized to be reliable. Again, all this assumes that attractor(s) exist.
All corrections appreciated.
hmmm … I don’t know how #191 got here. Maybe I did the ‘a href’ incorrectly?
Dan, your link is a trackback. When you link to another blog, it submits a trackback comment on the target blog that links back to your own.
#186 gb
How can one be expert in something (chaotic systems) whose behaviour, quantification, etc. must, by definition, be unknown?
#194
Well, literally you are right – no one is an expert in chaotic systems.
However, on the mathematical side those systems call for specific treatments and exhibit specific behavior – propagation of errors, convergence, metrics of trajectories in phase space, nonlinear PDEs, etc.
Experts in those fields are by extension (kind of) experts in chaotic systems.
However, numerical analysts (LES, RANS) and people dealing with stochastic methods mostly understand nothing about chaos, because they look precisely for specific physical cases where chaos can be “eliminated” by considering small-scale phenomena (small scale to be understood here both spatially and temporally) as random.
As chaos is NOT equal to (stationary state + random fluctuations), this family of methods is wrong in the general case, and the domain of validity of the approximation is constrained to certain spatio-temporal intervals that are in most cases poorly defined – some simple cases of turbulence, which is a particular case of chaos, can be treated that way.
Deterministic chaos most probably has its origin in the nonlinear interaction between feedback loops that produce a mixture of amplifications and dampings whose relative strengths change all the time.
It is when those relative strengths change unpredictably (e.g. a prevailing amplification trend gives way to a prevailing damping trend) that the system undergoes a fast chaotic transition, which is one of the signatures of deterministic chaos.
If the interactions are weak, the system is quasi-separable and the chaos is weak; if the interactions are strong, it is not separable and the chaos is strong.
The climate is a system with an astronomically large number of interacting feedback loops, most of them interacting strongly on different typical time scales.
As such it is probably beyond any attempt at adequate simulation over a long period of time like centuries.
Tom (#195),
Can you elaborate on the type of turbulence that is chaotic?
For the unforced, incompressible NS equations, numerical solutions will converge to the continuum solution if the numerical model has sufficient resolution to resolve (adequately) the minimal scale. Thus, although the turbulence looks chaotic, when properly resolved it is not.
Jerry
re: #196
That is a very important point, Jerry. I understand that valid DNS results are independent of the discrete spatial and temporal increments for each realization. If this is not the case, of what use are the continuous equations?
This situation is in stark contrast to the GCM, and maybe NWP, case(s), in which it is known that the calculated numbers are functions of the discrete increments. What do the continuous equations provide under these conditions?
Jerry
I am not aware that turbulent regimes admit a unique continuous solution that could be approached numerically.
Typically the chaotic transition leading to fully developed turbulence (e.g. Rayleigh-Bénard, Kelvin-Helmholtz) is characterised by discontinuities in the velocity fields and instabilities extremely dependent on initial conditions.
Similarly, it is not possible to numerically simulate (predict) the form of the liquid/gas interface when the gas is in motion (e.g. the sea surface under a storm).
That’s why the only thing done when chaotic turbulence appears is to go over to stochastic methods and Kolmogorov-like theories.
But even here these are restricted to certain domains of validity.
As an example, the air flow around a plane wing has typically a Reynolds number of about 10^8.
The energy dissipation then happens on the Kolmogorov scales (on the order of 10^-5 to 10^-6 m), so to adequately simulate its dynamics you’d need on the order of 10^18 cells, which is far beyond any computer.
The flow is chaotic, but the propagation of error is sufficiently “small” that on the small space scale (the wing area) some parameters like the air pressure can be calculated with a precision that suits practical needs.
However, if one extends the space scale far beyond the wing, one gets the usual unpredictability and sensitivity to initial conditions.
LES cuts off the smaller end with stochastic treatments, which reduces the calculation time, but it is then restricted to much smaller Reynolds numbers and is obviously only an approximation that must be checked on a case-by-case basis.
Actually we are here in a domain that has been seriously studied only since the 1980s, so there is precious little firmly established knowledge about the whys and hows of chaotic instabilities in physical systems.
I happened to see a series of fifty 2-day simulations of the situation in December 1999, when one of the most severe very-large-scale storms happened in Europe.
The 3 simulations considered most probable at the time had no relationship whatsoever with what actually happened.
However, among the 50, 2 looked a bit similar to what happened.
That’s what chaos is for me – a quasi-infinity of possible final states and no possibility of defining the probability of realisation of any of them, because they depend not only on the initial state but also on the way to get there, which is in turn subject to fast-expanding chaotic instabilities at every moment.
Going from weather to climate doesn’t change the picture; it rather makes it worse by increasing the number of relevant subsystems and interactions.
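The grid-count estimate in the comment above follows from the standard Kolmogorov scaling, eta ~ L * Re^(-3/4), which gives (L/eta)^3 = Re^(9/4) cells for a fully resolved 3D simulation. A quick check with illustrative values:

```python
# Kolmogorov-scaling check of the "10^18 cells" estimate above.
# eta ~ L * Re^(-3/4)  => cells ~ (L/eta)^3 = Re^(9/4)

Re = 1e8          # Reynolds number quoted for flow over a wing
L = 1.0           # integral length scale in m (illustrative choice)

eta = L * Re ** (-0.75)        # Kolmogorov dissipation scale
cells = (L / eta) ** 3         # grid cells needed to resolve eta in 3D

print(f"eta ~ {eta:.1e} m, cells ~ {cells:.1e}")
```

For Re = 10^8 this gives a dissipation scale of about a micron and roughly 10^18 cells, matching the figure in the comment.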
Dan Hughes (#197),
The numerical solutions are not independent of the space and time increments, but as the space and time increments are refined, the solution converges to the continuum solution of the unforced, incompressible NS equations if the spatial resolution is sufficient to resolve the minimal scale indicated by Henshaw, Kreiss, and Reyna for a given kinematic viscosity coefficient and the numerical method is accurate (consistent) and stable (satisfies the CFL criterion). Convergent numerical runs have been made in 2D for kinematic coefficients that are essentially the ones that exist in real applications. The turbulence is not chaotic, but converges to the same solution for a fixed period of time. There have also been 3D resolution runs made, with no surprises, at as high resolutions as currently possible. Thus the turbulence is not chaotic, just very fine scale that must be resolved.
If the turbulence is not properly resolved, the model will blow up because of an accumulation of energy at the highest wavenumbers due to the unresolved cascade of enstrophy. Then the only way around that problem is to add either an explicit (eddy viscosity) or implicit (numerical scheme) unphysically large dissipation. Of course the latter alters the numerical solution and it will not converge to the continuum solution with the correct kinematic viscosity.
Jerry
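The minimal-scale idea referred to in this exchange can be sketched as a simple estimate. As I recall the Henshaw-Kreiss-Reyna result, the smallest active length scale behaves roughly like sqrt(nu / max|grad u|); the formula paraphrase and the numerical inputs below are my assumptions for illustration, not values from the papers:

```python
import math

# Hedged sketch of a minimal-scale estimate in the spirit of
# Henshaw-Kreiss-Reyna: lambda_min ~ sqrt(nu / max|grad u|).
# Inputs are illustrative, not taken from the cited manuscripts.

def minimal_scale(nu, max_grad_u):
    """Rough smallest scale that must be resolved (meters, given SI inputs)."""
    return math.sqrt(nu / max_grad_u)

nu_air = 1.5e-5     # kinematic viscosity of air, m^2/s
max_grad = 1.0      # assumed velocity-gradient magnitude, 1/s

lam = minimal_scale(nu_air, max_grad)
print(f"minimal scale ~ {lam:.1e} m")
```

The point of the estimate is qualitative: for realistic kinematic viscosities the scale is millimetric or smaller, so a grid that does not reach it must either blow up from the unresolved enstrophy cascade or be stabilized with unphysically large dissipation, as described above.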
Dan Hughes (Addendum),
I agree about the NWP and climate models. There are basic mathematical problems with the hydrostatic system and exponential growth of solutions for the nonhydrostatic system in the neighborhood of a jet. Thus numerical methods cannot converge to the correct continuum solution in either of these (almost) inviscid, unforced cases let alone with inaccurate forcing terms (parameterizations). As I have discussed before, this is a game of unphysically large dissipation and parameterization tuning.
Jerry
Tom (#198),
You might want to peruse the manuscript by Henshaw, Kreiss, and Reyna and the associated manuscripts by Kreiss and his students. The results are quite illuminating. Let me know if you have any problem finding those references and I can be more specific.
Jerry
Jerry
I have found rather too many than too few references, but all seemed interesting.
As I mentioned, my job is not primarily to study climate models, so I unfortunately do not have the time to read all of what I found, and I plead guilty as charged.
However, from what I read (Computers and Fluids 23, 1994), it is demonstrated that a 4th-order approximation gives stable boundary conditions. The case studied there is a particular case of NS (incompressible fluid, unforced, linearly independent boundary conditions, only 2 Pi periodic functions in y).
The charts showing some examples of numerical solutions are for low Reynolds values (Re ~ 100).
While this is not the place to discuss details of the mathematics (which are understandable and undoubtedly correct), the case chosen doesn’t go into the chaotic regions.
In other words, the results seem valid for systems having the postulated characteristics but would fail in systems where chaotic transition phenomena (e.g. Rayleigh-Bénard) are well established.
Also, I don’t know what it would give for very high Reynolds numbers, where probably both the incompressibility hypothesis and the linear independence of the boundary conditions fail.
I have also read 2 papers on interactions between large-scale and small-scale motions that take a more classical Kolmogorov-like treatment (incompressible, spectral Fourier and Laplace transforms, simplified Burgers equation, pure NS with no interactions).
Here the results are more traditional – sometimes the small scale “adapts” to the large scale and sometimes it doesn’t, and when it does, the behavior is similar to that of the simplified Burgers equation.
As such approaches are mostly model-vs-model comparisons, the problems of convergence (to a real solution, if it exists) and time-step sensitivity don’t seem to be treated, at least not in those papers.
This, however, is the key problem of deterministic chaos, as Dan has pointed out.
Tom,
Look at the reference in the SIAM journal of Multiscale Modeling and Simulation (volume 2) by Henshaw and Kreiss. There are numerical runs that support the minimal-scale estimates for very high Reynolds numbers for the unforced, incompressible NS equations in 2D. The runs are made for the doubly periodic domain so that boundary conditions do not become an issue (many numerical runs use unstable boundary conditions). And there are 3D runs by Kreiss and Enstrom. Let me know if you are not able to find the references and I will provide more specific info.
Jerry
Jerry
Googling that, I get 12 pages, and most of the proposed references are behind subscriptions.
So if you could give me a link with downloadable text, I’d appreciate it.
Thanks
Tom,
I know there was a manuscript that preceded the 2D publication available at Los Alamos. Also there were some manuscripts that preceded the 3D publications from UCLA. Look under Bill Henshaw’s site at LLNL for the references.
Jerry
Tom,
I looked on William Henshaw’s site at LLNL (see OVERTURE – related publications). He lists most of the references and many are available through publicly available sites. Let me know if you have any problems accessing those sites. I am sure Bill would be happy to assist you in obtaining reprints (his email is shown on the site). He is an extremely nice guy.
Jerry
Tom (Addendum),
Both the 2D and 3D manuscripts are available from Bill’s website. The estimates have been published in various journals and I think may also be available on an open site. Ask Bill for info.
Jerry
Re: #207
Took a bit of doing, but here are two links: Henshaw papers and other Henshaw papers. His homepage.
John (#208),
Hats off to you for tracking down the manuscripts and posting the links. If anyone has any questions about any of the manuscripts I will be happy to try to answer them, and I know Bill would also be happy to respond. I can pick out the exact manuscripts from the lists if desired.
Jerry
No problem, just takes a bit of digging. BTW, the Jacob Yström, HeinzOtto Kreiss 3D NS paper is available here in various formats.
Jerry
I no longer have your contact info but would very much like to get together to chat. I’m no longer at NCAR.
Steve
Steve (#210),
My home phone number is in the Boulder phone book.
Jerry
Allan Ames
Perhaps Allan could join this discussion as well?
#212. Mandelbrot’s view was that there was no scale at which one could define “climate”.
Re: 213 MarkR, 212 Steve
Has there been discussion of what sort of statistics we might expect from exponential, or any other type of system?
A related question is, how could we get switched over to thinking about the type of variability, rather than summary parameters of an arbitrarily assumed distribution?
The Vostok O18 data are beautifully bimodal (or more), just what you would expect from a system with exponential growth followed by collapse. (How do I include a graph without a web site?)
re #10 Bill F
Are you acquainted with Mandelbrot and his exposition on Hurst’s study of the Nile River? “Fractals” by Jens Feder has a chapter “Fractal Records in Time”. There are several others which I do not own. I don’t know if a Hurst-exponent analysis would help, but it would be a way to categorize this against other data and show or deny consistency at the fractal level.
#215. A simple way is to set up an account at esnips.com or something like that, upload there and link.
#216 Hi Allan. I use esnips.
http://www.esnips.com/SignInAction.ns
But there are others I dare say.
How did we get into fractals? This is all supposition and not based on any theory that I know.
For smaller scales of motion in the atmosphere and ocean, the slowly evolving solutions in time approach solutions of the incompressible, NS equations. Once the minimal scales for the incompressible, NS equations are properly resolved, the manuscripts cited above show that errors in the forcing of the incompressible NS equations lead to differences in the solutions of the size of the errors in the forcing. And they also show that it is possible to recreate the small scales by continually assimilating the large scales. These results have useful applications to many flows.
The problem with atmospheric models is that the kinematic viscosity is unphysically large (not even close to resolving the minimal scale for the atmosphere, if one exists) and the forcing terms have O(1) errors at all scales, i.e. the errors in the forcing terms are not small. Thus atmospheric models do not satisfy the above conditions and never will. In addition, the hydrostatic system is ill posed and the nonhydrostatic system is sensitive (displays fast exponential growth) near a jet.
Jerry
Well, one gets into fractals because they can’t be dissociated from deterministic chaos.
It goes approximately like this:
Chaotic systems (some) present a strange attractor in the phase space > the study of strange attractors shows that they are self-similar and scale-invariant > fractals are self-similar and scale-invariant > the relevant geometry for studying trajectories in the phase space is fractal geometry.
As the dimension of a fractal curve is between 1 and 2, the usual mathematics on curves of dimension 1 doesn’t work, specifically differentiation. A fractal curve can be continuous everywhere and differentiable nowhere. The metrics are complicated – a curve enclosing a finite surface can have infinite length.
Basically this reflects the fact that a chaotic system (like turbulence) has an ever larger number of ever smaller parts.
That plays havoc with statistics, among other things.
The probability density function is usually expected to be normal, with a well-defined mean and standard deviation.
The PDF of many natural data has a fractal distribution, and then the mean and standard deviation make no sense.
E.g. a tree is a fractal object; there are ever more of ever smaller branches – the average diameter of the branches and their standard deviation are meaningless.
If one increases the size of a fractal data sample, there is no convergence to some average value.
Assuming that the climate has a strange attractor (which nobody knows), the question of convergence of solutions can be translated in terms of the dimension of the relevant fractal space, and the only way to say something meaningful about this system would be to talk in terms of the topology of the attractor instead of in terms of time series.
P.S
I am now reading the Henshaw paper .
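The non-convergence of the sample mean described above is easy to demonstrate. As a stand-in for a "fractal" PDF, the sketch below uses the heavy-tailed Cauchy distribution (a choice of convenience, not anything from the thread) and compares its running mean against a Gaussian:

```python
import numpy as np

# For a heavy-tailed distribution the sample mean never settles down,
# unlike the Gaussian case. Cauchy stands in for a "fractal" PDF here.

rng = np.random.default_rng(0)
n = 200_000

normal = rng.normal(size=n)
cauchy = rng.standard_cauchy(size=n)

# Running means at increasing sample sizes: 100, 1,000, 10,000, 100,000
checkpoints = [10**k for k in range(2, 6)]
normal_means = [normal[:m].mean() for m in checkpoints]
cauchy_means = [cauchy[:m].mean() for m in checkpoints]

print("normal running means:", [f"{x:+.3f}" for x in normal_means])
print("cauchy running means:", [f"{x:+.3f}" for x in cauchy_means])
```

The Gaussian running means march steadily toward zero; the Cauchy running means keep jumping because a single new extreme draw can move them arbitrarily far, which is exactly the "no convergence to some average value" behavior described above.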
re: 219 Gerald Browning– Why fractals?
Tom Vonk gave a lovely explanation. Mine would be simpler: how do you know you are describing the proper system if you cannot match the variability? If the empirical data say something is bimodal, and you produce a Gaussian, hasn’t something been left out? Then, don’t you know, the thing you left out will turn around and bite you.
Tom (#220),
Turbulence in the incompressible NS equations is not chaotic if properly resolved. Errors in the forcings translate to reasonable errors in the solution.
Your assumption about fractals is just an assumption, i.e. that there is an attractor for the climate system equations. Until someone proves that is the case, the discussion is not on firm ground.
Jerry
Jerry
In order to avoid misunderstandings – I have never said that there is a strange attractor for the climate.
I have indeed said that it is not proven that there is one, and I even think it impossible to prove, because the system is so complex that we will never have a complete set of equations describing it.
However, there are many phenomena – be it the glacial/interglacial transition, multidecadal events like ENSO, gas absorption at the liquid/gas interface, or indeed the weather – that are definitely chaotic.
What I said was: IF there is a strange attractor, THEN its structure is fractal (see e.g. “On chaos, fractals and turbulence”, Peinke et al., Physica Scripta, Vol. T49, p. 672, or “Fractal Dimension as a Measure of the Chaotic Behavior of Taylor-Couette Flow”, Olsen et al.).
Regarding the particular case of NS, which represents a subset of the climate problem, I will come to it later when I have read the papers you linked to.
I have always seen 2 different problems with NS.
First is the existence, uniqueness, and properties of a solution of the general NS equations, and this problem is not yet solved.
Second is numerical simulation of particular cases of NS with particular assumptions (like boundary conditions, form of forcings, etc.), and there is the problem already mentioned by Dan Hughes: IF the numerical methods in those particular cases converge, to WHAT do they converge?
Tom (#223),
I understand that you were speaking of the case in which there is an attractor for the climate equations.
Now you are claiming that there are certain atmospheric phenomena that are chaotic. Is there a proof of that statement?
It is my recollection that the Lorenz equations were derived from the inviscid atmospheric equations using just the interaction of the lowest 3 spectral modes. Is that correct? If so, was dissipation added for computational reasons at a later time? If not, what dissipation was used in the derivation?
Jerry
Tom (#223),
How are fluid experiments run for Taylor-Couette flow with infinitely long cylinders?
If the flow is simulated by a computer model, what are the continuum and discrete boundary conditions at the top and bottom of the cylinders and at the lateral boundaries?
How physical is the Taylor-Couette problem?
Jerry
There is an artifact of NH polar jet behavior and the behaviors of parcels within a distance described by “r, theta, delta” of them which I am casually observing this summer. Namely, it is the accuracy of forecasting regarding heat events and marine layer events in California. This should be very interesting, considering how persistent the polar jet is this late spring / summer in terms of residing just to the north of us. Normally, it would have moved up into BC by now. The NWS is progging a heat event, especially for inland areas, between now and thursday. I’ll be checking in on this at least twice daily.
Update …. so here is the reality of the heat of today. To recap, NWS prog’d a heat event based on a progressive ridge. They prog’d a sea breeze only close to the immediate coast, and northerlies everywhere else. They prog’d a rapid heat-up to “summer like levels of high heat.” If I were to take it at face value, I would have expected us to be cracking 90 in many locations, for there to be 80s in places along SF Bay and in the LA basin to very near the shore, and for there to be many, many temps well into the 90s elsewhere. In reality? A few 90s, with a strong onshore push, especially in SoCal. Let’s see how long this “heat wave” lasts and how intense it actually gets.
NoCal actuals for 1500 PDT
SoCal actuals for 1500 PDT
Wind map for SF Bay, Delta and near coastal waters
Steve (#227),
That is quite the interpolation (extrapolation) scheme for the winds. 🙂
Has anyone done an error analysis of the scheme? My guess is that, given the few observations, the results are not unique.
At what level is the plot?
Jerry
Steve (#227),
I see that the winds are in knots at 10 m. But I was unable to switch to the site that discusses the interpolation scheme. What are the winds aloft, i.e. near the jet?
Jerry
Jerry, I don’t know if the people doing those maps do anything aloft. I can look around later today and try to find out.
{Update, mid AM PDT …. today is do or die time for what was prog’d. If the NWS prog is correct, then there will not be enough of an onshore push today to prevent the prog’d effects of the ridging. Interesting things to note, however, even now, include a possible previously unforecasted southerly surge, a possible more rapid and lower latitude track for the mid latitude cyclone off of the Columbia River mouth, and a TD down off of Baja (those can throw a wrench into the works with no warning).}
Jerry, I still have not found anything on winds aloft, other than the one off daily radiosonde run at OAK.
[In terms of the actual vs forecast of the heat event:
A STRONG RIDGE STRETCHING FROM THE PACIFIC INTO CENTRAL CALIFORNIA
WILL CONTINUE THE HOT INLAND TEMPS THROUGH THE END OF THE WEEK…
EVEN THOUGH THERE SHOULD BE SEVERAL DEGREES COOLING EACH DAY. AS
HEIGHTS LOWER AND A SEA BREEZE DEVELOPS…THE COAST SHOULD BE
SIGNIFICANTLY COOLER BY FRIDAY. A TROUGH MOVING ACROSS THE PACIFIC NW
WILL BRING A RETURN OF STRATUS TO THE COAST WITH AN INCREASING
ONSHORE FLOW FRIDAY MORNING.
So stick a fork in this one, it’s getting cut short, lowering heights under way. Gave us a day and a 1/2, pretty wimpy as these things go. NWS got it half right but as usual overplayed the warming / increasing hts card]
Steve (#230231),
The observational data are the poorest off the western coast of the US. This could be seen in Sylvie’s manuscript, i.e. the forecasts improved as one moved toward the eastern US. The errors in the CMC model were quite bad after 24-36 hours, and it appears that you are seeing similar results. All the talk of good forecasts for longer periods of time seems to be just that, and the quality of those extended forecasts is judged by nonstandard measures. If there is no upper-air data from radiosondes or aircraft, the jet is poorly described and there will be correspondingly large errors in the model forecasts. As one moves toward the middle of the US there are more radiosondes and aircraft obs. That is why the forecasts over the eastern US are better than those over California. I also believe that much of the improvement in the forecasts is because it is possible to see what is coming via satellite and the Doppler radars.
Jerry
# 224
The Lorenz eqns are viscous. They describe a simplified single overturning convection cell. Equivalently, think of the motion of air in a tyre that is heated from below. The three variables in the equations are the up-down and left-right temperature differences and the speed of the overturning. The system is dissipative (i.e. energy is lost) due to viscosity and thermal diffusion, but it is driven by the heating from below.
The Lorenz equations have chaotic solutions. Although they are not an accurate model of the atmosphere, or even of a single convection cell, more accurate simulations of convection also show chaos, so convection cells in the Earth’s atmosphere are probably chaotic. Of course it is not possible to ‘prove’ this in the precise mathematical sense.
As for ‘the climate equations’, as Tom says, the whole system is too complex even to write down or agree on, so the question is meaningless. But it would be simple to check whether GCMs are chaotic. I expect someone has done that. My guess would be yes.
Chaotic and divergent? Much earlier in this thread I challenged Jim D to apply DIV …. just to pull a possible setting out of my hat …… the part of his model describing conditions near a jet stream …. there was deafening silence.
RE: #234 – Here is a template for the sort of characterization methodology I would want to see NCAR use to certify their models. This is, mind you, only an inkling, only a snapshot of a subset of the tools and metrics (in Sigma parlance, the X and x measures). The actual, realized methodology that NCAR would have to use would clearly be much more intricate and complicated. This is simply to set the tone of my expectations:
http://www.du.edu/~jcalvert/tech/fluids/vortex.htm
re 213: MarkR
The cumbersome link should, I hope, get some excel graphs and tables from the Vostok Ice Core Data for 400,000 years on the GT4 scale. One set is as sampled, the other is interpolated to 1000 years. (All improvements gratefully accepted.)
http://www.esnips.com/doc/e3bf7ba694c94061bc40634f281975c8/DIST410.XLS
I note that financial theories have audits, scientific theories have experiments, but climate models have only statistics – and Climate Audit — to keep them honest.
Paul M (#233),
If the Lorenz equations are viscous, what was the size and type of viscosity used in the original derivation and what was the scale of motion?
A reference will suffice. Thank you.
Jerry
Jerry,
The Lorenz eqns have a dimensionless parameter, the Rayleigh number R, that measures the ratio of the thermal driving to viscosity and diffusion. Chaos occurs for R greater than about 20. Now in the real world R would be huge (10^9? 10^12?) because viscosity is such a small effect on geophysical scales, but for these values the model is not at all realistic.
There is lots of stuff on the web, for example:
http://planetmath.org/encyclopedia/LorenzEquation.html
http://mathworld.wolfram.com/LorenzAttractor.html
or you can look at Lorenz's original classic 1963 paper, J. Atmos. Sci., vol. 20, p. 130.
He ends up with some warnings about the impossibility of long-range forecasting that seem to have fallen on the deaf ears of current climate scientists.
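For readers who want to see the sensitivity PaulM describes, here is a minimal sketch (my illustration, not any commenter's code) integrating the Lorenz 1963 system at the classical parameter values sigma=10, r=28, b=8/3; the initial state and the size of the perturbation are arbitrary choices:

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz 1963 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def run(state, n_steps, dt=0.01):
    for _ in range(n_steps):
        state = lorenz_step(state, dt)
    return state

a0 = np.array([1.0, 1.0, 1.0])
b0 = a0 + np.array([1e-8, 0.0, 0.0])   # perturb x by one part in 10^8
a_end = run(a0, 3000)                  # integrate both copies to t = 30
b_end = run(b0, 3000)
separation = np.linalg.norm(a_end - b_end)
# the 1e-8 initial difference has grown until it is comparable to the
# size of the attractor itself -- the signature of sensitive dependence
```

The point is not the particular numbers but the growth: any uncertainty in the initial state, however tiny, is amplified exponentially until the two solutions are unrelated.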
Jerry #224
There are a number of necessary conditions for chaotic behavior: high dependence on initial/boundary conditions, nonlinearity, energy dissipation, thresholds and fast transitions between states, growing errors/instabilities.
The phenomena I have mentioned, and many others, present all the necessary conditions and do indeed stay unpredictable over the long term. That is not a formal proof, but as there is no proof of the contrary either, the balance of evidence points to chaos.
On the Lorenz system see: http://www.maths.uwa.edu.au/~kevin/Papers/JAS07Teixeira.pdf
That was at the core of my discussion with J.Dudia, which he has probably not understood. The numerical simulation of a chaotic system depends strongly on the time and space steps. As the time step you use is not freely chosen but depends on the hardware available, and as the exact solution is not known, there is no way to say that a numerical simulation of a chaotic system converges to anything meaningful – and indeed in the Lorenz case it does not.
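Tom's time-step point is easy to demonstrate with a toy experiment (my sketch, not his; forward Euler and the two step sizes are arbitrary choices). Same equations, same initial state, only the step differs, yet the runs end up far apart:

```python
import numpy as np

def lorenz_euler(state, t_end, dt, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler at a given time step."""
    x, y, z = state
    for _ in range(int(round(t_end / dt))):
        dx = sigma * (y - x)
        dy = x * (r - z) - y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return np.array([x, y, z])

start = (1.0, 1.0, 1.0)
coarse = lorenz_euler(start, 20.0, 0.01)
fine = lorenz_euler(start, 20.0, 0.005)
gap = np.linalg.norm(coarse - fine)
# at these step sizes the truncation error is amplified exponentially along
# the trajectory, so halving the step decorrelates the run instead of
# refining it toward a pointwise answer
```

Formally the scheme still converges over any fixed finite time as dt goes to zero, but the step required shrinks exponentially with the integration length, which is Tom's practical point.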
Jerry #225
See http://www.citebase.org/fulltext?format=application%2Fpdf&identifier=oai%3AarXiv.org%3Aphysics%2F0210009 .
The infinite cylinder is only a limiting case of mathematical interest. It is obviously unphysical, as the solution must be invariant under a Z translation while the physical solution is not.
The staggering complexity of the chaos transition in Taylor-Couette flow has been illustrated by D.J. Tritton in "Physical Fluid Dynamics", where he shows a chart giving the different states of the system in a plane defined by the rotations of the 2 cylinders. There are two dozen states, going from Couette flow to fully developed turbulence through different pseudo-stationary structures. Parts of the plane are labelled "unknown", as there is no knowledge about the behavior in those regions.
Mathematically, that translates to the NS equations having multiple solutions for a given Reynolds number and geometry.
PaulM #233
Nobody has ever checked that, for the good reason that the GCMs are not chaotic by construction.
They don't solve NS; they are not sensitive to initial/boundary conditions; they consider that unresolved dimensions are either (Gaussian) noise or parametrised; they systematically use time averages to smooth fluctuations; they impose tunings to ensure that runs converge for the particular space and time steps they choose; they neglect hundreds of interactions and feedbacks; they never analyse sensitivity to time and space steps; etc.
I would bet that in the preliminary runs they found some more or less intense chaotic behavior.
But as they can't afford that – indeed it would mean that different 100-year runs reach different final states – any trace of chaos is probably "tuned" out.
That way the runs converge, but to what, only God knows.
Paul M (#238),
So we have a simple model that is nowhere close to an accurate approximation of the real atmosphere, and all of the commotion about chaos is based on this model? As much as I would like to believe that this proves that the atmosphere is chaotic, it is in fact nowhere close to a proof.
Jerry
Tom Vonk (#239),
See my response to Paul M. I have yet to see anything that is a rigorous proof that the atmosphere is or is not chaotic. Until that is shown (and I would surmise that is not possible for the exact reasons mentioned above), this is all a moot point.
Jerry
Tom Vonk (#239),
A numerical approximation of a PDE can show sensitivity to spatial and temporal resolutions for reasons other than chaos.
I have shown a simple example that forcing can be used to produce any type of solution one wants. If the Lorenz system uses unphysical values for its parameters, as has been indicated above, it must be forced by unrealistic heating and cooling to produce anything that looks physical. Also note that the viscous, compressible NS equations are essentially a symmetric hyperbolic system for the largest scales of atmospheric motion.
If we are talking about continuum solutions of Taylor-Couette flow, what is the fluid and what is the Reynolds number? If we are talking about numerical simulations, what is the numerical method and what is the discrete approximation of the boundary conditions?
Jerry
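Jerry's recurring remark that "forcing can be used to produce any type of solution one wants" can be sketched concretely; this is my construction (essentially a manufactured-solution argument), and the cubic dynamics and sine target are arbitrary illustrative choices. Pick any target trajectory, define the forcing as the residual the equations leave behind, and the forced system reproduces the target regardless of the unforced dynamics:

```python
import numpy as np

def rhs(x):
    """Some given autonomous dynamics dx/dt = f(x); the cubic is arbitrary."""
    return -x**3 + x

target = lambda t: np.sin(t)                        # any solution we want to see
target_dot = lambda t: np.cos(t)
forcing = lambda t: target_dot(t) - rhs(target(t))  # F(t) = u'(t) - f(u(t))

# forward Euler on dx/dt = f(x) + F(t), started on the target trajectory
dt, x, t = 1e-4, target(0.0), 0.0
for _ in range(50_000):                             # integrate to t = 5
    x += dt * (rhs(x) + forcing(t))
    t += dt
err = abs(x - target(t))
# x tracks sin(t) to within the truncation error of the scheme, even though
# the unforced dynamics have nothing to do with a sine wave
```

This is why agreement between a tuned, forced model and an observed trajectory is not, by itself, evidence that the model dynamics are right.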
Tom Vonk (#239),
I see in the above reference that there is a reference to Kreiss’ theorem. It appears that Heinz (or Gunilla if plane flow) has already looked into this problem analytically? Also the equations are the viscous, incompressible NS equations to which the Henshaw, Kreiss, Reyna estimates apply for the unforced, periodic case. This case is being forced by the rotation of the cylinders and again the question arises as to the physical reality of the experiment.
Jerry
Jerry #242
As for Taylor-Couette, Couette, and Rayleigh-Bénard flows, those belong to the most studied cases, both experimentally and theoretically, since Poincaré, and that is more than 100 years ago.
The ranges of Reynolds and fluids used in experiments are big and the number of publications is beyond what you or me could read in a lifetime .
As an example showing the experimental device and results you may see: http://www.physics.ohio-state.edu/~reu/03reu/REU2003reports/Hinko_final.pdf .
In this reference you have also the diagram showing the states of the fluid in a plane defined by the main rotation parameters .
It has been established beyond any doubt that the fluid behavior can be chaotic already at relatively low Reynolds numbers.
As those physical flows also obey NS, it is established that solutions of NS (provided a unique solution exists, which is not yet proven) are chaotic over a large part of the parameter space.
Jerry #241
I don't really understand this line of argument.
Chaotic behavior can be proven experimentally, and that is the case for most of the physical flows mentioned above (see the references).
Absence of chaotic behavior can be proven by showing theoretically that a system converges to a fixed point in phase space, or that it describes a periodic trajectory, and then proving by experiment that this is indeed the case.
Proving chaotic behavior consists in showing, either experimentally or theoretically, that a system does NOT converge to a fixed point and does NOT describe a periodic trajectory in phase space.
For example, it has been proven for the 3-body gravitational system that it does not converge to a fixed point and does not describe a periodic trajectory in phase space. Hence the 3-body gravitational system is a chaotic system.
As to the climate/atmosphere, there we have 2 problems.
First, and most important, we do not have a complete set of equations describing its behavior. Lacking that, the term "solution" is itself mathematically meaningless ("solution" of what?).
Secondly, admitting that a "solution" exists (even if we don't know the equations), there is no other way than observation to guess its behavior. Knowing that the phase space has several million dimensions (increasing as the cube of the decreasing grid spacing), it will never be possible to prove either theoretically or numerically that the system is NOT chaotic. Indeed such a proof has not been given and imho never will be. Symmetrically, and for the same reason, the proof that the system IS chaotic will also never be given.
That leaves observation.
Observation of the system over short periods of time (days/weeks) shows that the system does not converge to a fixed point and does not describe a periodic trajectory.
Observation of the system over multidecadal periods (ENSO, arctic ice cover) also shows that the system does not converge to a fixed point and does not describe a periodic trajectory.
Despite the fact that this does not constitute a formal mathematical proof (which is impossible either way), by applying Occam's razor it is probable that the system is chaotic, or, if it is not, that the exact solution is beyond our understanding.
Even the staunchest IPCC supporters admit that their models can NOT predict a future state of the system, but only a difference between 2 states that they suppose to have statistical meaning. Of course that implies that all the biases and errors magically cancel out over the period of simulation, which is totally wrong for a nonlinear system, but that begins to be a bit off topic.
Tom Vonk (#244)
If each experimental apparatus provides the same flow for the same fluid and Reynolds number, why have there been so many publications? Does each publication just use a different fluid or radius? It seems to me this provides an infinite number of possibilities (publications), but not much physical insight, if the experiment cannot be related to a real-world fluid and Reynolds number.
And what about the Kreiss analysis, i.e. how does that fit in?
I also believe that the fluid is not incompressible, but rather slightly compressible. This will have an impact on any theory that is based on the apparatus. In addition, there are particles injected into the flow that have some impact on the fluid, i.e. it is not a pure fluid. I see a number of scientific issues that to me are not fully resolved, even for the manuscript you cited.
A forced (non-chaotic) PDE system does not have to converge to a fixed point or a periodic trajectory. As shown, it can produce any solution that one wants.
The modifier "probable" is not a mathematical proof, and neither is an experimental apparatus that has a number of issues related to its design.
Jerry
Jerry #244
Yes, that's exactly it.
The parameter space is virtually infinite, but that's also the case for any system described by NS, which can have an infinity of initial and boundary conditions, not to mention its dynamic parameters like viscosity etc.
Fluids used ranged from very high viscosity oils through water to gases. This is very everyday-life physics and there is no particular problem with experimental design – everything is well defined. However if you look at the chart I provided, by far the biggest complexity comes from the behavior of the fluid itself.
One also has to keep in mind that there is no need to focus on Taylor-Couette; in my opinion Rayleigh-Bénard is more interesting.
What these experiments do is provide insight into one of the most important questions – what are the reasons for chaotic transitions and how can they be explained in terms of NS solutions. Alternatively they are used to see whether numerical simulations are able to reproduce the fluid behavior and, as they are not, why.
The issues you mentioned (pure fluid, compressibility etc.) are indeed issues, and if there are so many publications it is because those issues have also been dealt with.
Any system of ODEs or PDEs whose solutions don't converge to a fixed point (equilibrium) or to a periodic orbit of whatever shape and period in phase space is unpredictable. That's the basis of chaos theory, which is well established, and strange attractors exist precisely because systems do not have only 2 choices – uniform convergence to an orbit, or divergence. They can also stay in a subspace of the phase space without converging to any particular orbit.
That's the case for many physical systems where the equations are known (double pendulum, 3-body system etc.), and experience suggests that it is also the case for fluids described by NS.
Jerry
You really are missing the main points. Lorenz is just a simple model to illustrate chaos. It is not true that 'all the chaos commotion is based on this model'. There are numerous other simple models that also show chaos. In more complicated systems (as I said before) it is impossible to 'prove' chaos in the rigorous mathematical sense, so it is a waste of time talking about this. Taylor-Couette flow is just another model to illustrate instability and transition to chaos and turbulence in fluids (it has a close analogy with convection, centrifugal force replacing buoyancy).
You say 'each experimental apparatus provides the same flow for the same fluid and Reynolds number' – again, this is confused. For small Re there are transitions to the same regular (for example periodic) flow, but for larger Re there is chaos, so the exact flow is quite different in each realisation. There are thousands of experiments and numerical simulations that show chaos (exponential divergence) at large Reynolds / Rayleigh number (regardless of how the boundary conditions are discretised). So it's bucketloads of evidence but no real proof – a bit like evolution!
As for the climate, we don't even know what the key physics is, but I am now repeating my earlier post and Tom's. In general, though, if you make a system more complicated, it stays chaotic or gets more chaotic.
Here is real world …. California, upper 30s N lat, coastal strip (0 – 20 miles from the ocean). As of yesterday AM we were experiencing a so called southerly surge, bringing a thick marine layer and good stratus intrusion. NWS progged for yesterday a shift to NW winds and mix out based on mid latitude cyclone being carried into Washington / N. Ore by the polar jet. Progged a weak progressive ridge passing over Cal. In reality ….. we got an unexpected eddy and the high fog continued to intrude, the marine layer got even thicker. Today it’s so thick that the high fog is breaking up early.
Paul M (#247),
I am not missing the point. There are many differential systems that show chaotic behavior. That is not a proof that the atmosphere is chaotic, nor the NS equations. In fact, for the periodic cases we see convergence and stability to perturbations of the numerical approximations. Thus these cases do not display chaotic behavior if the minimal scale is properly resolved. This case is just as valid as the Taylor-Couette problem.
Let us stay with a single fluid (say water) for the Taylor-Couette flow between rotating cylinders. What is the physical reality that this apparatus is supposed to describe? Is it just a problem to demonstrate that the fluid behaves very differently when forced using different cylinder heights, radii, and rotation rates? Is this a surprise? Clearly different forcing will provide different solutions for the same fluid.
And in the corresponding numerical simulations, is the flow properly resolved for the given fluid and viscosity?
Jerry
Tom Vonk (#246)
The verb "suggests" is not a proof. If the Taylor-Couette problem for water is not physical, it may well be that for certain parameters the flow will behave in a strange manner unseen in any physical case.
Jerry
Jerry # 250
I think we have lost the thread and I can't see your point anymore.
You focus on one word (suggests) within an argument that goes much farther than that one word.
What's the matter with "physical"? Water is physical, Reynolds numbers are physical, boundary conditions are physical – everything in the Taylor-Couette experiment is physical. If for some reason you don't like the idea of cylinders, take Rayleigh-Bénard – it's as physical as physical goes and there are no cylinders. Yet the results are the same.
PaulM is right that you probably missed the point – there is no surprise that even a simple physical system described by NS equations presents chaotic behavior. Deterministic chaos is the rule in fluids and steady states the exception.
Do you think that the weather can be predicted over an infinite time with accuracy, provided that it is "properly resolved"? If so, where is the proof, and what is the "proper resolution"?
Actually the understanding of chaotic dynamics in natural systems is completely beyond simple finite-difference numerical runs on the NS equations. The only relevant methods explore the metrics of the attractors, exponential separation of trajectories of solutions, fractal dimensions, Lyapunov coefficients etc. One has to stop looking for deterministic predictability, which is impossible, and look instead at patterns and topology.
A specifically challenging problem is that of systems with attractors having an extremely large number of dimensions, like typically the weather or climate. What happens when a Lorenz-like physical chaotic system with a low number of dimensions sees this number increased? That is still an open question, even if everything we know today suggests (suggests as in "nothing known suggests the opposite") that the chaos intensifies.
For a good summary see e.g.: http://www.nls.physics.ucsb.edu/papers/Ah_Benard_03.pdf
As the research in this field is rather young, there is still much work to be done, but the chaotic behavior (= aperiodic time dynamics) of fluid systems is established both theoretically and experimentally beyond any doubt.
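One of the attractor metrics Tom lists – the largest Lyapunov exponent – can be estimated with a standard Benettin-type renormalisation procedure. A rough sketch for the Lorenz system (my illustration; the parameter values, step counts, and transient length are arbitrary choices, and for these classical parameters the accepted value is roughly 0.9):

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def rk4(s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(n_steps=20000, dt=0.01, d0=1e-9):
    """Benettin-style estimate: evolve a reference and a nearby trajectory,
    renormalise their separation each step, and average the log growth rates."""
    a = np.array([1.0, 1.0, 1.0])
    for _ in range(1000):           # discard a transient to reach the attractor
        a = rk4(a, dt)
    b = a + np.array([d0, 0.0, 0.0])
    log_sum = 0.0
    for _ in range(n_steps):
        a, b = rk4(a, dt), rk4(b, dt)
        d = np.linalg.norm(b - a)
        log_sum += np.log(d / d0)
        b = a + (b - a) * (d0 / d)  # rescale the separation back to d0
    return log_sum / (n_steps * dt)

lam = largest_lyapunov()
# a positive exponent is the quantitative signature of exponential divergence
```

The same recipe works on measured time series (with embedding), which is why it appears in the attractor-metric literature Tom refers to.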
Tom Vonk (#251),
You have failed to discuss how Kreiss’ theorem (referenced in the manuscript above cited by you) applies to the case under consideration in the manuscript.
Also note that the Henshaw, Kreiss, and Reyna (HKR) minimum scale estimate is a mathematical proof (no "suggests" or other qualifier needed) for the continuum, incompressible NS system and has nothing to do with numerics. Have you read that manuscript yet? The numerical runs simply demonstrate that the proof was bang on, i.e. when the smallest scale is properly resolved, any numerical approximation will converge to the same solution, as it should. This was demonstrated for different numerical methods, i.e. all converged to the same solution, so the continuum solution is well posed (computable).
Just because there is a real (mixed) fluid between two rotating cylinders, that does not mean that the experiment will produce results seen in the real world for all values of the parameters. For example, there may be no similar situation with such high rotation rates, small gap, and solid wall boundaries in the natural universe (not man-made).
At the moment we are not discussing weather or climate equations or models (in which I have no confidence because of the problems that I have thoroughly discussed above). We are talking about chaotic systems and a proof that turbulent flows are or are not chaotic. You have raised the issue for a particular experimental apparatus (Taylor-Couette flow), and HKR have proved in a different case that the flow is not chaotic. Which is supposed to be closer to reality? I would doubt that any computer simulation of Taylor-Couette flow resolves the smallest scale for high Reynolds numbers, e.g. for air at a Reynolds number close to those of the atmosphere and for scales that match those of the real atmosphere. Such 3D computations are quite impossible with current computers.
Jerry
All,
I thought you might be interested in the abstract for the following seminar
at NCAR.
Advances in Explicit Convective Forecasting with WRF-ARW
Morris Weismann (MMM)
Wednesday, Sept 19 at 11:00 A.M.
The abstract is available in the seminar announcement and roughly states that there has not been any improvement in the quality of forecasting with the fine-mesh (1 km) WRF model and that the large scale is not generating the smaller scales properly.
Note that this is the exact problem that this thread has discussed in considerable detail, i.e. that fast exponential growth in the continuum solution will lead to large errors in any fine-scale numerical model, and the problem will become more apparent as a model starts to resolve features on the order of 10 km.
Jerry
Abstract at http://www.ncar.ucar.edu/forstaff/ under ASP Seminar: click the (website) link.
So if I may clarify what should be obvious, the problem is inherent in the equations, and until we have computers with many orders of magnitude more RAM and FLOPS (or vast arrays), these models are never going to produce accurate results?
John Baltutis (#254),
Thank you for posting the url.
Jerry
Larry (#255),
The problem has little to do with computer FLOPS or RAM. The problem is inherent in the sensitivity of the large-scale continuous equations in the presence of vertical shear. If there is any error in the large-scale observations, the sensitivity will display itself as incorrect locations of smaller-scale storms even if the physical parameterizations were perfect (which by now we know is not the case). That is why ensemble forecasts have been tried, but that does not solve the inherent sensitivity.
Using initial and boundary data from a weather prediction model with less resolution will lead to incorrect initial and boundary data, and this will also lead to incorrect smaller-scale storms.
We were able to recreate an analytic mesoscale storm under ideal circumstances, i.e. knowing the initial small-scale vertical component of the vorticity, the exact heating, and the correct large-scale boundary conditions. None of these are known in real cases, and the presence of typical vertical shear leads to fast exponential growth in the neighborhood of the shear.
Jerry
#255, Larry,
If it’s like a chaotic system, it takes exponentially more computational power to get a linear increase in resolution.
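A closely related piece of arithmetic makes Larry's scaling point quantitative. If an initial error eps grows like eps*exp(lam*t), the forecast horizon grows only with the logarithm of the accuracy; the rate lam=0.9 below is the approximate Lorenz value, used purely for illustration:

```python
import math

# If errors grow like eps * exp(lam * t), the predictability horizon --
# the time until the error reaches some tolerance -- is
#   t* = (1 / lam) * ln(tol / eps),
# so each extra decimal digit of initial-condition accuracy buys only a
# fixed, additive amount of forecast time.
def horizon(eps, tol=1.0, lam=0.9):
    """Time for an initial error eps to grow to tol at exponential rate lam."""
    return math.log(tol / eps) / lam

h6 = horizon(1e-6)
h12 = horizon(1e-12)   # a million times more accurate initial data...
# ...only doubles the horizon, since ln(1e12) = 2 * ln(1e6)
```

This is why throwing hardware at resolution yields such meager gains in forecast range: the cost of accuracy is exponential while the payoff is linear.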
[bump]
Chaotic (complex, multistable) systems may appear ergodic for a while, but then their lack of ergodicity is revealed by an unexpected change in state, such as the emergence of a novel flow. What I want to know is whether the climate system is thought (by GCMers in particular) to behave in this way, and if so, how one can possibly justify tuning to a superficially stable substate (such as a subset of 20th-century weather/climate scenarios).
If I understand what Gerald Browning has been saying for the last year, then the GCMs are damned. If that is the case, then the role of misplaced faith in propping up the IPCC/AGW conspiracy is critical. Expose that and the house of cards will come down very quickly, of its own accord.
Gavin Schmidt, please reply.
260, no it won’t. They’ll fall back on the precautionary principle. You really don’t know these people, do you?
Gerald, you might be right, if you are saying that the GCMs are useless. However, climate should not be approached as a really long weather forecast.
Larry, that's a recurring point you make that I agree with. I too recoil when folks seem to actually expect AGW people to be intellectually honest. As if they will say, "Oh, I see, you're right. Never mind then…"
I casually scorecarded the NWS forecasts in NorCal during the last couple weeks of Oct. They were calling for a record warm / offshore event. Certainly, further south from here, the combination of the (seasonally normal) Santa Ana winds and arsonists had a dramatic impact. But here? We had a couple warm (but not record-breaking) days. Then it swung cold (which was the groove for most of Oct; see Watts' blog for more on that). We got into a borderline rainy pattern with interludes of "just south of the zonal jet" dull weather. Bottom line is, I believe meteo models hype warm / dry related events and suppress cold / wet related ones. Why?
@Gerry in 249:
In response to: “Let us stay with a single fluid (say water) for the TaylorCouette flow between rotating cylinders. What is the physical reality that this apparatus is suppose to describe? ”
Taylor-Couette flow describes…. Taylor-Couette flow! This apparatus is used in viscometry, where use is restricted to low Reynolds number. So, this has always been a physical flow of interest – if for no other reason than to study itself. (It gets used for other things too.)
Still, I'm not entirely sure what larger point Tom is trying to make by pointing us to the Ohio State reference. Yes, people have observed loads of complex, pretty phenomena during the transition to full-blown turbulence. Various forms of stability analyses have been applied to describe what is seen.
Meaning… what, in terms of Global Climate Change? Or for full-blown turbulence? Or for GCMs? Maybe it means something; but unless Tom can explicitly connect these pretty pictures to computations of full-blown turbulent flows, I'm throwing my vote in with you and saying I'm not seeing how the pretty flow viz pictures relate to GCMs.
I have moved this from Unthreaded #24:
All,
As a mathematical friend of mine stated clearly, "A numerical model is not a mathematical proof of anything." And as far as I am concerned, that is what has caused all of the nonsensical debate. If a climate (or engineering) model started with accurate initial conditions, contained all of the correct forcing for the continuum dynamical system (there is already an assumption that the continuum system accurately describes the fluid of interest), the numerical method employed by the model were accurate and stable, and the numerical method were convergent, i.e. had resolved all the scales of motion that develop in the continuum solution, then you might be able to call the numerical model result a demonstration of reality, but not a mathematical proof. Note that none of these criteria are satisfied by any climate model, nor by most engineering models. Thus they are a shot in the dark and any conclusions drawn are wishful thinking.
Jerry
Gunnar (#261):
Why shouldn't climate be approached as a long-term weather forecast? That is exactly what it is. And currently NCAR is moving the WRF fine-resolution limited-area model to a global version, i.e. a long-term fine-scale weather forecast. Unless someone can prove that the smaller scales of motion in the atmosphere do not have an impact on the climate over a long period of time, those scales must be accurately computed to obtain that feedback. (The current HKR smallest-scale estimates for the incompressible NS equations indicate that only when the smallest scales are resolved by a numerical model is the solution accurately computed.) And that is certainly not the case with any global climate model, and it never will be, given the fast exponential growth near shear in the basic dynamical system. The latter result for numerical approximations of exponentially growing solutions is well known in numerical analysis.
The comment that preceded this one describes what it would take in order to have any confidence in the results from a numerical model. And as stated there, none of the conditions have been satisfied for any climate model.
Jerry
Re # 262 Steve Sadlov
I guess this is a bit like saying that engineers consistently go over budget and beyond time on large projects because the models they use do not go to the detail of counting and costing nuts and bolts, nor know the shape of the time equations needed to match a nut with a bolt and screw it.
bender, could the assumption of ergodicity be related to the assumptions inherent in the Uniformitarian Principle?
The discussion in this thread gives me great pause when considering dendroclimatology’s assumptions.
This is a terrific stinger of a PhD candidacy question.
The short answer is that the two are somewhat related, but are subtly and importantly different. The uniformitarian principle requires that all agents and processes acting in the system do not change behavior over time. In a linear system this will lead to stationarity in the system's parameters. Ergodicity is one order more stringent a requirement than stationarity, and it becomes very important in nonlinear dynamical systems. Even more so if the system is structurally unstable (emergent features exist intermittently) or metastable.
I'm not a climatologist. But it seems to me that there are some basic agents, like heat and heat exchange, water and fluid flow, etc., that you can assume do not change their behavior over time. This tends to satisfy the uniformitarian principle. But because the agents behave nonlinearly, they do not satisfy the ergodicity assumption. Emergent features (such as the Walker circulation, THC, ENSO, etc.) may or may not be structurally stable. (Some may be, others not.) In this sense, these higher-order "agents", if they are intermittent, do not conform to the uniformitarian principle. Such a system is certainly not ergodic. And if they are not ergodic, then you cannot assemble "ensemble runs" and assume they are representative of the true ensemble expectation; the mean and the expectation are never equal.
This explains why I am so interested in the weather/climate scenarios that climatologists use to generate "ensemble" runs. An ensemble has a very precise meaning in statistics. Not so in climatology.
It would be useful to seek out climatology papers that include the phrase "ergodicity". So far I have seen only one, the one pointed out to me by Nasif Nahle: Smith, L.A. 2002. What might we learn from climate forecasts? PNAS 99: 2487-2492. It seems to me that warmers are in total denial of the ergodicity problem in statistical climatology.
Regarding your last comment, tree rings are, in all likelihood, ergodic systems, that is, as long as they are also stationary (i.e. conform to the UP). Or maybe it is better to say that they are ergodic only to the extent that living systems have a way of linearizing the nonlinearities of their physical surroundings.
That is my off-the-cuff stab at answering the question. Would love to hear Wegman on the topic.
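The bistable example from the top of this thread (a system flipping aperiodically between +1 and -1) is easy to simulate, and it shows concretely why a mean can be a "queer statistic" for a multistable system. This is my toy sketch; the flip probability, sample length, and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy bistable system: the state sits at +1 or -1 and flips
# aperiodically (here, with a small random flip probability per step).
n = 100_000
flips = rng.random(n) < 0.001                    # rare, aperiodic transitions
state = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)

mean = state.mean()
near_zero = np.mean(np.abs(state) < 0.5)         # fraction of time spent near 0
# the long-run mean sits somewhere between the two islands, yet the system
# spends NO time near that value -- the mean describes a state the system
# never occupies, so it is a poor summary of where the system actually lives
```

For an ergodic system the time average would also describe the ensemble; here it does neither job well, which is the ensemble-run objection in a nutshell.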
(Obviously) I’m not an expert in any of this… I get pretty far with general education and common sense…
Am I making bad assumptions, perhaps applying UP at the wrong level/scale? I’m thinking that perturbations such as:
* Major continental airflow shifts
* External damage (nearby tree falls, destroying half the bark; fallen tree rots and is gone.)
* Root growth into new nutrient pockets
…cause havoc with these assumptions. Some might claim these are rare outliers. But standing in the middle of a BCP forest, surrounded by trees that almost all have been beaten to a pulp for centuries, it is hard to see such events as “rare” 😉
#265 >> Why shouldnt climate be approached as a long term weather forecast?
Gerald, here is my answer, and to speed things up, I’ll answer some objections to my answer as well.
Meteorology is completely different from climatology, and one is not just a long-term view of the other. On the time scales that concern it, meteorology assumes that the sun and earth are constant. Meteorology is primarily concerned with atmospheric circulation. There are so many chaotic processes involved that it's much more of an observational science. Climatology, specifically the AGW idea, is more related to thermodynamics, atmospheric physics, electromagnetics, chemistry, etc. It's a mistake to approach climatology like meteorology, which is what the AGWers have done with their "general circulation" models. For the climate, most of the chaos of "weather" is irrelevant.
Climate needs to consider many variables that are absolutely not considered in weather. Some of these are: solar dynamics, solar rotation, earth orbital variation, solar flares, solar magnetics, earth's molten core, plate tectonics, lunar effects, ocean thermodynamics, ocean chemistry, the ocean carbonic cycle, biosphere carbonic response, atmospheric physics (weather-related, but others as well, like cosmic ray effects), atmospheric thermodynamics, etc.
Gunnar, you say that climate is more concerned with things like thermodynamics, atmospherics, and chemistry, but so is weather, so that's just a question of degree.
Actually, Meteorology does not involve carbonic cycle chemistry. There is thermodynamics, but only some atmospheric Thermo, not considering ocean/crust/atmospheric interactions. There is of course lots of atmospheric physics involved, but the focus would be different than climatology. In weather, the sun and ocean dynamics are considered constant. This is a good assumption, since weather is concerned with the next 10 days.
Gunnar, you say that for climate, much of the chaos of weather is irrelevant; do you mean that climate is not chaotic?
There is still plenty of chaos, but the source is different: solar chaos replaces earth atmospheric chaos. We may never be able to predict a direct hit of a solar flare, but we should get to the point that when we measure one occurring, we can model that input, and say, ok, this is how this direct hit will affect the next 10 years.
Gunnar, how can you predict something about 100 years from now, when we can't predict the weather?
Because there are really complicated dynamics that we can ignore, and less complicated dynamics that we now must consider. Here is an example:
Dropping a leaf into a river. It can be shown that the position of the leaf cannot be predicted past, say, 30 seconds. It's just too chaotic. Someone else is studying how that same river dumps sediment into the ocean. The sediment study does not need to predict where a single leaf will end up. He deals with bigger time scales, and has a different focus.
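The leaf example is the classic picture of sensitive dependence on initial conditions. A minimal toy sketch makes the point numerically; the logistic map here is just a stand-in chaotic system, not a model of a river:

```python
# Toy sketch of sensitive dependence: two "leaves" dropped a hair
# apart, evolved with the chaotic logistic map (r = 4). Purely
# illustrative -- not a river model.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # initial positions differ by one part in a million
max_sep = 0.0
for step in range(30):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

# Within ~30 steps the trajectories have diverged to order 1,
# despite agreeing to 6 decimal places at the start.
print(max_sep)
```

No amount of refinement of the initial measurement postpones the divergence for long, which is why a hard prediction horizon exists at all.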
Gunnar, but if we cannot predict the weather (when many of the forcings can be ignored because they are nearly constant), how can we predict the climate, which as you point out, is dependent on many more known and unknown forcings?
One is not a subset of the other. In weather, we can ignore many physical phenomena, since we can safely assume they are constant in the short time scales. However, the ones we cannot ignore introduce a tremendous amount of chaos. This makes weather forecasting exceedingly difficult, and because of the inherent chaotic elements, we may never be able to extend weather forecasting much past 15 days, 30 at most. I mean: never. Climate on the other hand, can ignore the chaotic circulation issues, since they don’t matter on the larger timescales. There are other chaotic elements, but they may very well be easier to deal with.
@Bender (in italics).
I'm not a climatologist.
Nor am I. I'm a mechanical engineer by training.
Honestly, I’m a bit puzzled by any focus on ergodicity.
But it seems to me that there are some basic agents like heat and heat exchange, water and fluid flow, etc. that you can assume do not change their behavior over time. This is tending to satisfy the uniformitarian principle. But because the agents behave nonlinearly, they do not satisfy the ergodicity assumption.
From a mathematical point of view, flows in general are not proven to be ergodic.
That said, wide classes of flows have been observed in nature, studied in labs and treated analytically. They appear to be ergodic.
There are a very few situations in which solutions are not ergodic, but as an empirical matter, it takes some doing to find them. Yes– there are those who look for them. If you examine carefully, the Reynolds numbers will tend to be relatively low. Moreover, when the statistical properties of these types of flows are observed over sufficiently long periods of time, they appear to be ergodic. (Experiments that don’t collect statistical properties, or which are not examined for long times can’t be used to ‘prove’ some flows are ergodic.)
As an engineer doing casual thinking about climatology, I would not look to the NS, energy equations, etc. to find ergodicity. If the climate is ergodic, I'd be surprised if that arose out of the nonlinear nature of the Navier-Stokes equations themselves. It arises out of something else. (Possibly ice forming in northern climes, then melting. Possibly other things.)
Emergent features (such as the walker circulation, THC, ENSO, etc.) may or may not be structurally stable. (Some may be, others not.)
Is there really any disagreement that, at least in some sense, ENSO and the Walker circulation aren't entirely, totally, rock-solid, stable structures? There are analogs in transition to turbulence. In certain experiments at controlled, moderate Re numbers you see patches of turbulence within laminar flows. The patches may have short lifetimes, vanish, reappear and do all sorts of interesting things.
But does this invalidate, or cast in doubt, any assumption of ergodicity?
I may be mistaken, but I'm pretty sure that, as an empirical matter, when flows of this type are observed over sufficiently long periods of time, they appear to be ergodic. They may "flip" back and forth between a few states, but over many, many flips between states, it appears you end up with a system where you can interchange time and ensemble averaging. (At least as far as one can tell empirically. Lucky for graduate students, "a long time" in a laboratory experiment is rarely longer than 1 hour, and more commonly, no longer than 1 second. Lucky for engineers, good design generally means avoiding flows that slowly flip back and forth between various states. It would be really bad if the lift on an airplane wing varied between two values, with each lasting, say, 1 minute or more.)
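The "flipping yet still ergodic" picture can be made concrete with a hedged sketch. The two-state Markov process below is invented for illustration (it is not a flow): the system aperiodically flips between +1 and -1, yet one long time average and an average across many independent realizations converge to the same value:

```python
import random

def flip_chain(steps, p_flip=0.1, seed=None):
    """Time average of one realization of a two-state (+1/-1) process
    that aperiodically flips states with probability p_flip per step.
    An invented toy, not a model of any real flow."""
    rng = random.Random(seed)
    state, total = 1, 0
    for _ in range(steps):
        if rng.random() < p_flip:
            state = -state
        total += state
    return total / steps

# Time average: one long record of a single realization.
time_avg = flip_chain(200_000, seed=1)

# Ensemble average: the same statistic computed across many
# independent realizations of the process.
ens_avg = sum(flip_chain(2_000, seed=s) for s in range(200)) / 200

# For this (ergodic) process the two agree; both sit near 0 even
# though the system itself is almost never near 0.
print(time_avg, ens_avg)
```

Note the connection to the bistable example earlier in the thread: the mean is near 0, a value the system rarely occupies, yet time and ensemble averaging are still interchangeable.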
So, the upshot is, knowing what I know about flow, ergodicity, and applying ensemble averages, I'm not too concerned about absolute lack of ergodicity in climate arising out of the Navier-Stokes equations, though I would defer judgment on the possibility of nonergodicity arising out of other factors. (Ice sheets forming and melting? Continents moving? External radiative forcing functions?)
In this sense, these higher-order agents, if they are intermittent, do not conform to the uniformitarian principle. Such a system is certainly not ergodic.
I’d be cautious saying “certainly not ergodic”. At best, you can say “have not been proven to be ergodic either using mathematics or empirically”.
And if they are not ergodic then you can not assemble ensemble runs and assume they are representative of the true ensemble expectation; the mean and the expectation are never equal.
True… but if they are ergodic, you can interchange them.
This explains why I am so interested in the weather/climate scenarios that climatologists use to generate ensemble runs. An ensemble has a very precise meaning in statistics. Not so in climatology.
On the one hand, I defer judgement on whether climatologist application of statistics and ensemble averaging to climate is done correctly. I’d need to know more. I am certain that at a minimum, assumptions are involved.
After all, assumptions are used when applying ensemble averaging to every transport model I've ever seen. Certainly, ergodicity is assumed. In engineering problems (where we can often isolate factors, leaving only dominant ones) assumptions are often either empirically based or semi-empirically based.
I'd be surprised if climatology is an exception in this regard. (The open question is always: does a very commonplace, almost routine, assumption apply to a new physical system? The answer might be "no". The other open question: assuming the system is ergodic, is the time scale for the computations sufficiently large to approach the ensemble average?)
Still, I'd be very surprised if "ensemble" doesn't mean the same thing in climatology as in statistics. Those working in mechanics use the term the same way statisticians do, so why wouldn't climatologists?
(As for having a clue how those in mechanics or climatology might use “ensemble average”, Google Liljegren “ensemble average”; you’ll find me. Don’t confuse me with my husband though. 🙂 )
Re #269
Nonstationarity is what you get when a tree's living circumstances change. But given the external forcing on the tree's growth, that nonstationarity is going to have the same effect on every tree that could have been growing in the place of that tree. The time series of growth rings is ergodic. This ergodicity is what allows you to assume that individual trees are replicates of one another. You want a stronger forcing signal, so you bump up the sample size and the central limit theorem takes over. Your samples are indicative of both a living population, and the population of trees that might have existed in a different instance (think stochastic model run).
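The replication argument can be sketched numerically. The toy below (all numbers invented) treats each tree as a shared "forcing" signal plus its own independent noise; averaging more trees shrinks the residual noise roughly as 1/sqrt(n), which is all the central limit theorem point amounts to:

```python
import random
import statistics

rng = random.Random(0)
years = 100
signal = [0.5 * (t / years) for t in range(years)]  # weak common "forcing"

def tree(noise_sd=1.0):
    # each tree = shared signal + its own independent noise
    return [s + rng.gauss(0.0, noise_sd) for s in signal]

def chronology(n_trees):
    # site chronology: year-by-year mean across n_trees replicates
    trees = [tree() for _ in range(n_trees)]
    return [statistics.mean(vals) for vals in zip(*trees)]

def residual_sd(chron):
    # how much noise remains around the common signal
    return statistics.pstdev([c - s for c, s in zip(chron, signal)])

rs4, rs100 = residual_sd(chronology(4)), residual_sd(chronology(100))
print(rs4, rs100)   # residual noise shrinks roughly as 1/sqrt(n)
```

The sketch assumes the noise really is independent across trees and the forcing really is common to all of them, which is exactly the ergodicity-style assumption at issue in this thread.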
The continental airflow shift example you give is more like the kind of process I have in mind. One earth, no opportunity to replicate, limited ability to look back in time. In a nonergodic system (nonlinear dynamics, exponential growth), this presents a major problem. If the climate attractor comprises multiple coexisting convection schemes, then no one convection scheme is the right one to tune to. i.e. The 20th century is no more “indicative” than the 10th.
What I see in the paleoclimate literature now is people claiming that parts of the global circulation used to be different than they are now. Correct me if I’m wrong, but doesn’t this mean that some features of the circulation are not reliable (i.e. intermittent)? Doesn’t this mean that you want to avoid overfitting your models to those features?
Given that GCM tunings are nonunique (because of the redundancy among free parameters) many tunings will yield the same result. Is it not possible to find a tuning that is compatible with the data, yet predicts a cooler future in 50% of its runs? (That’s why I want to understand exactly how many parameters are free for tuning. Because the probability of overfitting is proportional to this.)
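The overfitting worry can be illustrated with a deliberately artificial two-parameter model (both the model form and all numbers are invented; real GCM tuning is vastly more complex): two "tunings" that reproduce the calibration data essentially exactly, yet diverge badly out of sample:

```python
import math

def model(t, trend, amp):
    # invented toy model: linear trend plus a fast oscillation that
    # happens to vanish at every calibration time
    return trend * t + amp * math.sin(math.pi * t)

calib = range(11)                 # "observations" at t = 0..10
tune_a = dict(trend=0.2, amp=0.0)
tune_b = dict(trend=0.2, amp=3.0)

# Both tunings reproduce the calibration data (to rounding)...
fit_gap = max(abs(model(t, **tune_a) - model(t, **tune_b)) for t in calib)

# ...but disagree badly between observations and in extrapolation.
future_gap = abs(model(50.5, **tune_a) - model(50.5, **tune_b))
print(fit_gap, future_gap)
```

With redundant free parameters, goodness of fit over the calibration period simply cannot discriminate between tunings whose projections differ, which is bender's point about the probability of overfitting scaling with the number of free parameters.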
The question is: what is the magnitude of the internal variability versus the external forcing? We are continually told by Gavin Schmidt et al. that the internal is small compared to the external. I would like some proof of that assertion. Because the higher the internal, the less the external residual available for attributing to GHGs.
Lucia #271,
Thanks for parsing my contribution. I will consider it. For folks who want to understand what ergodicity is and why it’s important, lucia’s got the nub of it: it’s about the interchangeability of time series and series ensembles. If the two have the same statistical properties, they are interchangeable, and the process is ergodic. Then statistical inference (in terms of extrapolating to the uncertain future) is greatly simplified.
My question is about how many ensembles you need and how long the series need to be before you get that interchangeability. The higher this requirement, the more likely the GCM tunings are to be in error as a result of misrepresenting the internal variability. My complaint is that the folks at RC don’t seem to understand the question. Yet, clearly, lucia does. Watt’s up with that?
RE: #268 – What an excellent topic for a new study / paper.
RE: #270 – RE: "Meteorology assumes that the sun and earth are constant."
Here’s a grey area. Look at the NWS 72 hour – 365 day outlooks. Superimposed on them is an overt “AGW” signal. The NWS assumes AGW as a “meteo factor” in models out past about 72 hours.
The difference, in a nutshell, between weather and climate, is that it's presumed that energy can be stored (or brought out of storage) over weather time scales, so conservation of energy doesn't apply to weather. Conservation of energy is assumed to apply to climate, over its time scale (whatever that is). Considering that some oceanic processes can go on for 1000s of years, that means that the time scale for climate may be that long. Over the 30 year horizon that we're all arguing about, conservation of energy doesn't apply.
This is an important point that seems to have been lost in the definition someone found in one of the other threads (from ask.com). Ergodicity allows swapping of the statistics, which is beneficial particularly when you have only a limited amount of time-series data available but data for all of the ensemble. As I recall, Mann makes this assumption in his modified RegEM method, without any explanation that I know of, which Jean S. first pointed out (his nanmean function). I have not had time to dig into this… soon.
Interestingly, the only text I have that actually provides a lucid definition of this (with most of a chapter dedicated to ergodicity) is the Papoulis book "Probability, Random Variables and Stochastic Processes" (I have the 2nd edition; the more recent edition has an additional author). Ziemer and Tranter mention the concept briefly in "Principles of Communications." None of my other signal processing texts even list it in the index.
Mark
RE: “Conservation of energy is assumed to apply to climate.”
Aha! This is the critical flaw in the GCMs! Conservation at the universe scale, yes, but not at the planetary scale!
On a slightly different note, here would be some more meteo to scorecard, applicable to an area near 125 W and between 36 and 40 N:
N UPPER RIDGE CURRENTLY WEST OF THE SOUTHERN CA COAST IS FORECAST TO
AMPLIFY AND SHIFT EAST INTO WEDNESDAY WHICH SHOULD BRING WARMER TEMPS
AND LESS CLOUD COVER TO OUR ENTIRE FORECAST AREA ON WEDNESDAY. BY
THURSDAY THE RIDGE AXIS IS FORECAST TO MOVE INLAND…ALLOWING THE
STORM TRACK TO SHIFT SOUTH TOWARDS OUR AREA. RAIN CHANCES
WILL BEGIN IN THE NORTH BAY AS EARLY AS THURSDAY NIGHT. PREVIOUS GFS
MODEL RUNS INDICATED THE UPPER JET AND MOIST WESTERLY FLOW WOULD
GRADUALLY SINK SOUTH THROUGH THE END OF THE WEEK AND INTO THE
WEEKEND AND RAIN CHANCES WOULD SPREAD SOUTH ACROSS MUCH OF THE SF
BAY AREA BY SATURDAY. BUT THE 06Z RUN OF THE GFS MAINTAINS A
STRONGER RIDGE TO OUR SOUTH AND KEEPS ALL RAINFALL NORTH OF THE
GOLDEN GATE FROM FRIDAY THROUGH SUNDAY. THIS LATEST GFS SOLUTION IS
SIMILAR TO THE 00Z ECMWF. BUT THE ECMWF IS EVEN DRIER…BUILDING A
STRONGER RIDGE OVER CENTRAL AND SOUTHERN CA DURING THE WEEKEND AND
PUSHING RAINFALL NORTH OF EVEN THE NORTH BAY. IF THE 00Z ECMWF IS
CORRECT…THE UPCOMING WEEKEND WOULD BE DRY AND MILD FOR OUR ENTIRE
FORECAST AREA. HAVE LEFT CHANCE POPS IN FOR MUCH OF THE SF BAY AREA
OVER THE WEEKEND PRIMARILY BECAUSE THE 00Z GFS ENSEMBLE MEAN PRECIP
FIELD FORECASTS RAIN SOUTH TO ABOUT SAN JOSE. THE MODELS ARE TRENDING
DRIER FOR THE WEEKEND AND POPS WILL NEED TO BE LOWERED IF THIS TREND
CONTINUES. THE AIRMASS OVER OUR REGION WILL REMAIN MILD THROUGH THE
FORECAST PERIOD AND TEMPERATURES WILL HOLD NEAR NORMAL TO SLIGHTLY
ABOVE NORMAL.
Bender: My question is about how many ensembles you need and how long the series need to be before you get that interchangeability.
I think this is the correct question. I have no idea what the correct answer is — because I have no idea what the time scales associated with the full range of relevant phenomena might be. I also have no idea how many ensembles of “weather 2007” those running a particular GCM run, as I’m not at all familiar with that literature.
Why would I know? I'm a mechanical engineer. Based on my background, the most I'd say with confidence is that I see a long time scale in the ice cores. So, the absolute upper bound for interchanging time and ensemble averages appears to be several ice ages.
But don’t get too excited and think I’m suggesting the correct answer is anywhere near the length of an ice age. If predicting climate is framed a bit differently, the correct answer may well be 1 year.
One might also be able to learn a lot by framing the problem differently. I’ll explain by way of analogy:
My husband's graduate thesis involved modeling the diffusion of passive scalars (smoke) in the boundary layer. (Think military smoke screens.) On the one hand, if you look at the problem of predicting where smoke travels on average over the course of a full day, you would realize there is an imposed time scale of at least 1 day. The "forcing" function for smoke is atmospheric conditions, and even on average, atmospheric conditions change dramatically over the course of a day. (Worse, the weather could change, and things all look different on the next day.)
Fortunately, for both modeling and, more importantly, data collection and processing– which were done in the field, there is a separation of scales. So, it was possible to separate the turbulent diffusion problem from the weather prediction problem.
Analyzing the data was more difficult than formulating a model because even if you can formulate the two issues separately, the sun still rose during the experiments, and so, trying to decide whether the turbulence had changed "too much" to test a hypothesis describing the instantaneous response of smoke particles to turbulence was difficult.
In the end, Jim had to formulate a method of partitioning the data to minimize uncertainty intervals due to both a) sample size and b) change in the statistical properties of the forcing function during the measurement period.
It turned out to be possible to make something of the data because there was a large separation of scales. The time scale of the turbulence (seconds) was very short compared to 1 day.
So, it turns out, even though the first estimate of a time scale for interchanging time and ensemble averaging sounded like "you will never graduate", the problem could be framed in a different way. It turns out that if you know some initial conditions and information about forcing functions, etc., you can predict the behavior of a smoke plume. (The ultimate method was Monte Carlo simulation, which, of course, involves tracking individual smoke particles.)
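The Monte Carlo approach can be caricatured in a few lines. Below, a plain Gaussian random walk stands in for turbulent diffusion (real dispersion models are far more involved; every number here is invented): track many "particles", and the plume width grows like the square root of time:

```python
import random
import statistics

def plume_spread(n_particles=2000, steps=100, seed=0):
    """Std. dev. of particle positions after `steps` independent
    Gaussian steps -- a caricature of turbulent diffusion, not a
    real dispersion model."""
    rng = random.Random(seed)
    positions = [0.0] * n_particles
    for _ in range(steps):
        positions = [x + rng.gauss(0.0, 1.0) for x in positions]
    return statistics.pstdev(positions)

# Plume width grows like sqrt(time): doubling the number of steps
# widens the plume by a factor of about 1.41.
w1, w2 = plume_spread(steps=100), plume_spread(steps=200)
print(w1, w2, w2 / w1)
```

The interesting part, as in the thesis work described above, is that a statistically meaningful answer emerges from the ensemble even though no individual particle path is predictable.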
@Gunnar: With regard to your sediment example… sometimes you can't predict where individual leaves go and you can't predict the average behavior either. Some of the factors that complicate the small-scale problem are retained. If you don't deal with them, they screw up your macroscale prediction. That is to say, if your subgrid parameterization fails to capture phenomena that affect particle transport on average, your macroscale model will also be wrong on average. Macroscale models that contain micro- or meso-scale closures often contain tuning parameters. The tunings can be flow-dependent; they often are.
“Can’t get the small scale right” => “Can’t develop a reliable general macroscale model that relies on subgrid parameterizations” is the rule.
“Can’t get the small scale right” => “Can develop a reliable general macroscale model that relies on subgrid parameterizations” is, unfortunately, the exception.
Back to Bender: Yet, clearly, lucia does. Watts up with that?
My training is as an experimentalist?
Anyone that believes that a climate model conserves energy is sadly mistaken. The dissipative mechanisms that climate models employ are unphysical and necessarily much larger than reality because the models cannot compute at the fine resolutions that shorter term weather forecast models do, nor correctly resolve the scales of motion in the true atmospheric solution.
To overcome these problems, the climate models are necessarily forced in an unphysical manner (this has been shown in earlier discussions on this web site) to produce what appears to be a realistic atmospheric spectrum (see Sylvie Gravel's manuscript on this web site for how artificial forcings are used to tune a model but do not necessarily lead to a more accurate approximation of reality). There is no evidence that such an unphysical mess will generate anything close to reality. And given the large physical approximation errors in such a questionable process, introducing small changes in solar variation will be overwhelmed by physical and numerical errors even though in reality the variations are quite important. To see this one need only run a climate model at several finer resolutions with the same parameters (not allowing the dissipation to change with resolution); one will quickly see quite different behavior. This is one reason that the models do not converge to better solutions at finer resolutions and are retuned. And as pointed out many times, the unbounded growth in hydrostatic climate models starts to rear its ugly head at finer resolutions, as does the fast exponential growth near vertical shear (jets) in the nonhydrostatic continuum solutions.
Jerry
>> @Gunnar: With regard to your sediment example… sometimes you can't predict where individual leaves go and you can't predict the average behavior either
lucia, don't forget that I also said "and different focus". I agree with what you say about micro vs macro scale, but it may not be relevant. In the example, the sediment study is not a true macro version of the leaf study. They are also differently focused, i.e. different equations, different inputs, different outputs. The only thing they have in common may be the river.
Similarly, a correctly approached climatology and the meteorology needed to predict whether one should bring an umbrella tomorrow, may only have in common that they involve the mythical planet of origin, which some say is called Earth.
280, I'm dumbfounded. If they don't conserve energy, how on earth are they supposed to predict the climate in 100 years?
Weather is rather a simple thing, basically wind speed, barometric pressure, relative humidity and temperature.
What's the weather like tomorrow? Well, I might be able to guess cold and dry and windy if it was that way yesterday and I don't see any atmospheric conditions likely to change it. But then again, something could change in the pattern, and a low or high pressure system shift and make the weather different. So weather is basically current conditions.
Climate is rather simple also, it’s
Or perhaps a bit more complicated:
… and by human activities, primarily changing the properties of the surface of the Earth, and by the release of particulates and greenhouse gasses that influence the greenhouse effect and carbon cycle in various random overlapping interlocked ways.
>> 280, I'm dumbfounded. If they don't conserve energy, how on earth are they supposed to predict the climate in 100 years?
I'm surprised that you didn't realize that what Jerry says in #280 is absolutely correct. This time, it's you that have given them credit for something they haven't done, which is approach the problem as a scientist. Like you said to someone else, "You don't know these people". The polite way of saying it is: climate models are unphysical.
Note to new visitors: what you are seeing here is the benefit of a multidisciplinary team getting involved in significant scientific issues.
If you ever start wondering whether CA brings any value, look around in the archives for more of this kind of discussion. There may be few if any real climate scientists here, but that doesn’t seem to harm the level of discussion too much.
😀
My naive assumption was that climate models are based on an energy balance, and the dynamics are more or less ignored. How else do you account for the effect of GHGs? Energy balance is the keystone of the greenhouse effect. Excuse me while I pick my jaw up off the floor.
>> My naive assumption was that climate models are based on an energy balance
Your assumption was not naive, it was rational. Now, I don't speak from personal experience (life is too short to start looking at GCM code), but my understanding is that GCMs are derived from the weather models, hence the name, General Circulation Model. To simplify: they took a meteorology GCM, cut out parts that were making the simulation run too long, added Arrhenius, and fiddled till they got it to predict doom and gloom.
My understanding is that there is NO thermodynamics (despite the fact that the name seems to match what we're trying to do, must be some denialist plot), no Henry's law (Henry? has that been peer reviewed by the team?), etc. How else do you think that they could come up with a model that predicts that changing CO2 from .000171% to .00034% of the mass of the oceans would cause the oceans to warm up by 4 deg C?
Whatever is in that model, it sure isn’t 1st law.
@Gunnar,
I hate to ask this because it's likely to spiral off into an irrelevant credential check… but what is your background in transport modeling?
Because microscale modeling and the details are relevant to getting the macroscale transport models correct. The fact that the guy who runs macroscale models would rather gnaw off his arm than track individual leaves (particles, bits of sediment or what not) doesn't change this fact. The fact that the methods and types of equations used differ also doesn't change this.
I could write more, but it's difficult to organize the appropriate type of discussion without knowing whether your background is physics, chemistry, statistics, engineering, etc.
Larry (#282),
The numerical climate models damp energy thru unphysical large dissipation and create new energy thru unphysical forcing. They are then run for long periods of time until the total model energy ceases to oscillate as rapidly as in the beginning. It is no small effort to achieve this model energy balance as it is not real and the forcing must be retuned for different resolutions as the dissipation is (hopefully) reduced at finer resolutions. This should come as no surprise as the models are nowhere close to using the correct size or type of dissipation and this serious problem (along with many others) must be artificially overcome. The error in the numerical solution as compared to reality is completely in question.
Tonight I will provide a basic error analysis for numerical approximations of a differential equation with an exponentially growing solution, to show that there is a serious problem with any finite difference approximation of such a system. The analysis will be simple and easy to understand for anyone that has had basic calculus. Then the only question is: do such solutions appear in the basic dynamical system used in global circulation (or weather forecast) models? The answer is yes, as indicated above, and therefore no numerical model based on the equations of motion for the atmosphere will ever compute perturbations near jets correctly.

These perturbations are thought to initiate mesoscale storms (according to the scientists at NCAR, as the source generator for these exponentially growing solutions supposedly has been retained for that reason). Whether this is true or not, the perturbations will never be computed correctly, so the models will never converge to a continuum solution as required in the comment above.

I hope this analysis will be beneficial.
Jerry
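This is not Jerry's promised analysis, but a minimal numerical sketch of the underlying issue he describes: for a solution growing like e^(λt), any O(ε) error in the data is amplified by that same exponential factor, so tiny perturbations eventually swamp the computation. (The equation y' = λy and all numbers below are invented for illustration.)

```python
import math

def euler(y0, lam, h, steps):
    # forward Euler for y' = lam * y, the simplest exponentially
    # growing test problem
    y = y0
    for _ in range(steps):
        y += h * lam * y
    return y

lam, h, steps = 1.0, 0.001, 20_000          # integrate to t = 20
exact = math.exp(lam * h * steps)

# (a) the scheme's own truncation error at t = 20 is modest (~1%)...
num = euler(1.0, lam, h, steps)
rel_err = abs(num - exact) / exact

# (b) ...but a 1e-8 perturbation in the initial data is amplified
# by roughly e^20, i.e. by a factor of several hundred million.
perturbed = euler(1.0 + 1e-8, lam, h, steps)
amplification = abs(perturbed - num) / 1e-8
print(rel_err, amplification)
```

The point of the sketch: even a well-behaved scheme cannot outrun the exponential amplification of data and round-off errors, which is the essence of the claim that perturbations near jets can never be computed correctly.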
Jerry, 289, what you're describing sounds like pure fluid mechanics. To model the effects of GHGs on climate, there's a whole lot else involved. Are you basically claiming that the radiative part is more or less OK, but the fluid mechanical part is irredeemable?
>> microscale modeling and the details are relevant to getting the macroscale transport models correct.
This is a straw man, since this is changing the point.
>> I could write more, but its difficult to organize the appropriate type of discussion without knowing whether your background is physics, chemistry, statistics, engineering &etc.
Since you don't appear to be asking in order to make a fallacious argument from authority, I'll answer. I have a BSEE, as does my wife & older brother, while my father had a PhD in EE. My mother has a masters in history, my uncle a PhD in Physics.
said, and you responded:
>> microscale modeling and the details are relevant to getting the macroscale transport models correct.
This is a straw man, since this is changing the point.
I don't mean to change the point; I am trying to address what I understood to be the argument you advanced.
If I understand you correctly, you are:
a) saying that modeling weather is not relevant to modeling climate and
b) the reason modeling weather is not relevant to modeling climate is
c) modeling the microscopic details in a sediment transport model is not relevant to predicting sediment transport at the macroscopic (average) level.
Is that what you have been saying? Or something different? ( I mean, I’m pretty sure you said ‘a’, ‘b’ is just a transition, so the issue is just ‘c’.)
If statement “c” is not what you are using to support your claim (a), I would prefer not to delve into what is true about sediment transport — since in that case, arguing about the truth or fallacy of ‘c’ would, indeed, be a strawman.
So… maybe you could clarify for me: you brought up the sediment transport/leaf issue as an analogy for understanding climate modeling. What do you mean to illustrate using sediment transport?
(On the credentials thingie: My Ph.D. is in multiphase flow. My specific area is not sediment transport; many sediment transport problems are a subclass of multiphase flow problems!)
Mr Pete #285:
I agree. And I want to thank Gerry, lucia, Nasif and others for helping to bring my question (and my formerly muddled understanding) to a fine point. See what distributed communication can accomplish?
I conclude that the questions I was asking at RC on this topic were indeed sensible, if maybe a little ill-phrased. To prove my point I want to call attention to the John Christy interview being cited in "unthreaded #24". A must-read. John Christy would understand the ergodicity problem with time-series lengths and ensemble sizes. Why? Because he is a real scientist, interested in the truth, and devoted to self-criticism as a way of preventing self-deception. Like lucia, he understands the value of experimentalism.
>> in multiphase flow
Ok, I see where we got disconnected. We all tend to see the problem in terms of our own expertise. When you have a really big hammer, everything looks like a nail.
>> What do you mean to illustrate using sediment transport?
I’ll try to clarify. My example
Which you characterize with
even though I said: "lucia, don't forget that I also said 'and different focus'". If you read carefully, you'll notice that leaves are not sediment. Therefore, this is a great example of a straw man, since you wanted me to say something I did not say. Because of your PhD work in multiphase flow, you wanted this to be an argument about that. And now back to
>> a) saying that modeling weather is not relevant to modeling climate
I have explained that, and it would be boring to repeat it, but I’ll reemphasize, it’s not a micro vs macro problem.
Sidenote: even if it were, your point that micro analysis is always relevant to macro scales cannot be supported. If I wanted to calculate how long it will take me to drive the family to NYC, should I include time dilation effects?
Double Sidenote: our discussion seems to be going along the familiar lines of scientist vs engineer.
Gunnar, if I were you, I wouldn’t be trying to explain chaos to a fluid mechanics specialist. I can guarantee you that Lucia understands chaos theory perfectly. What you’re doing is like a chemist trying to explain spread spectrum to you. Quit while you’re behind.
I agree with Gunnar, scientist vs engineer and a different reference frame. (I’m more an engineer type BTW) But sometimes your points are a little difficult to understand.
From what I’m hearing, we’re not seeing the forest because of all those pesky trees in the way.
Is that the crux of the micro-macro analogy?
If I understand this, Lucia and Gerry are arguing that weather and climate are for practical purposes not different, and Gunnar is saying that they are. Did I get that right? Scientist/engineer cultural divide notwithstanding, I'm finding the scientists' position more persuasive on this one, if I understood it correctly.
>> I wouldn't be trying to explain chaos to a fluid mechanics specialist
If you thought I was, then you missed my point: fluid dynamics, no matter how chaotic, is irrelevant to climate, which at its essence is a thermodynamics problem. The internal dynamics of the atmosphere, an extremely tiny fraction of the mass of the system, is meaningless.
>> Is that the crux of the micro-macro analogy?
My communication skills are so poor. It's not a micro-macro analogy, it's an apples-to-oranges analogy.
1) Leaf flowing in river affected by chaos –> storm flowing towards city, affected by chaos
2) sediment flowing towards sea –> solar energy flowing towards earth
In 1, chaos dominates the problem
In 2, the problem is completely different: different inputs, different equations, different outputs. Chaos is a factor, but its source is different, and it is much less important. It is NOT a macro version of problem 1.
Maybe the heart of the problem is a confusion between the “study of” and the object being studied. The same object may be studied in different ways, whether a river or the earth, and the scientific principles and methods are completely different from each other. To have a true micro vs macro, it would have to be the same object, studied with the same science, and then just vary the time scale.
I’m not sure Larry, but if that’s what the discussion is, nobody’s making it clear.
And if so, they’re probably both correct but not communicating. You can’t have climate without weather, the definition of climate in the dictionary is weather over time in “an area”. On the other hand, climate involves things that are not normally thought of as weather. Heat content of the oceans, GHG behaviors, albedo, cloud feedback, cosmic rays….
I’d say weather is created by things that are focused in on with climate, but nobody really thinks about them as weather, although many create or impact the weather.
Same but different.
Are we having this discussion, or is it something else?
No, they’re both involved. For just one example, convection in the troposphere bypasses the greenhouse effect in the troposphere. So, if there’s enough convection, the greenhouse effect is happening, but it’s irrelevant. There’s a short circuit around it.
There’s no way you can just do summary thermo, and ignore the transport part.
Reading that post, it sounds like that’s what you’re saying Gunnar.
It’s more a matter of determining the terminology first before starting to discuss the specifics.
Is it fair to say we have two different ways to describe the two, and two different ways to think about the two? That they are in some ways the same and in some ways different?
Weather causes climate. The things focused on in both are different, but they’re all interlocked. What they are is how they’re thought of and which aspect is highlighted.
299, climate affects weather. Climate change (or dynamics, if you will) will also affect weather, but the long-term averages change. The question is, is it a distinction with a difference? I don’t think the answer is as obvious as it might seem.
Larry (#290),
My training is in differential equations and numerical analysis. In the case of time dependent partial differential equations, we split the initial value problem into the initial conditions and forcing (the latter includes all forms of forcing whether it be from the sun, latent heating, etc.). The solution can be written as a sum of the homogeneous system (no forcing) plus a forced component. If there are problems with the solution operator of the homogeneous problem, then the forcing will not make things better. And in the case of climate models, there are problems with the solution operator that impact numerical methods and with the unphysical forcings (parameterizations).
Jerry
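Jerry’s split of an initial value problem into a homogeneous part plus a forced part can be illustrated with a toy scalar equation (my own sketch with arbitrary constants, not anything taken from Jerry’s models): for du/dt = a·u + f(t), Duhamel’s formula writes the solution as exp(a·t)·u(0) plus a convolution of exp(a·(t−s)) with f(s). Any error in the initial data rides on the homogeneous operator alone, so if that operator grows, the forcing cannot make things better:

```python
import math

# Toy problem (illustrative constants): du/dt = a*u + f(t) with f(t) = sin(t).
# Duhamel's formula: u(t) = exp(a*t)*u(0) + integral_0^t exp(a*(t-s))*f(s) ds.

a = 1.0  # growth rate of the homogeneous operator (assumed positive)

def solution(u0, t, n=20000):
    """u(t) via Duhamel's formula, the integral done by the trapezoid rule."""
    h = t / n
    forced = 0.0
    for j in range(n + 1):
        s = j * h
        w = 0.5 if j in (0, n) else 1.0   # trapezoid end weights
        forced += w * math.exp(a * (t - s)) * math.sin(s) * h
    return math.exp(a * t) * u0 + forced

t = 5.0
e0 = 1e-6                                  # small error in the initial data
err = solution(1.0 + e0, t) - solution(1.0, t)
# The forced parts cancel: the error is e0*exp(a*t), amplified about 148x.
print(err, e0 * math.exp(a * t))
```

The same cancellation happens for any forcing f: errors in the data are propagated by the homogeneous solution operator alone, which is the point of the comment above.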
Wait a minute, climate does not affect weather. Climate is a long term weather pattern. Climate is how the weather acts.
303, but the differential equations describing what? The radiation, or the fluid mechanics? Based on my understanding, fluid mechanics is a much more difficult diff eq problem because of nonlinearity. Am I correct that the problem is with the fluid mechanical solutions?
>> No, theyre both involved. For just one example, convection in the troposphere bypasses the greenhouse effect in the troposphere.
You have good point there, fluid dynamics is important for climate. But the focus is different.
>> summary thermo, and ignore the transport part
right, but you also don’t have to worry about whether San Fran gets rain or not. On the scale of climate, you know that so many storms will occur, and they will transfer so much energy in a certain way.
>> Weather causes climate. climate does not affect weather. Climate is a long term weather pattern. Climate is how the weather acts.
First, I want to remind ourselves not to confuse the object being studied with the -ology, i.e. the study of that object. I have not said the object is different, but rather that climatology is completely different than meteorology.
Second, I disagree with all the assertions above. Weather is synonymous with chaotic atmospheric circulation. Climate, despite the incorrect dictionary meaning, in actual usage refers to the average temperature of earth, undistorted by weather. All the chaos of weather cannot add or subtract energy from the system. Note, I’m not saying that convection doesn’t aid heat transfer, since it certainly does. In climatology, one can rest assured that convection takes place, and it doesn’t matter where or when. I’m saying that the chaotic aspect of weather, i.e. whether San Fran gets rain or not, does not affect the climate, i.e. the average temperature of earth, which represents the energy level of earth.
>> radiation, or the fluid mechanics
Plus thermodynamics. Cannot calculate a temperature without thermo. So, I say:
climatology = solar activity (radiation) + thermodynamics + atmospheric physics + fluid dynamics + earth science + …
I think this supports my point that climatology is completely different than meteorology.
Exponential Growing Solutions (tutorial)
Consider the simple scalar ordinary differential equation (ODE)
du/dt = a u
where u(t) is only a function of time t, a is real and positive,
and u(0) is given. The solution of this ODE is
u(t) = u(0) exp (at)
as can be verified by substitution.
Clearly the solution grows exponentially in time and such solutions are physically possible and allowed for in mathematical and numerical analysis theories (more about this in the following tutorial). Now let us approximate the ODE by the finite difference equation
L v = a v
where v is a difference function defined on a grid of size h and L v represents the finite difference approximation of du/dt, e.g. L v could be [v((j+1)h) − v(jh)]/h.
To determine the error of the finite difference approximation we can write
L v = du/dt + tau, where tau is the truncation error determined by Taylor series expansion. For accuracy, tau must approach 0 as the grid size h approaches 0. Here we shall take the limit of vanishing mesh size so that we can write the above truncation error equation in the form L v = du/dt, i.e. the truncation error is 0. Subtracting the equation for v from the equation for u, the error E = u − v satisfies the equation
dE/dt = a E
and the error E satisfies the same equation as u. Thus any nonzero initial error will grow exponentially in time. (An initial error can come from a number of sources including observational error, forcing error, numerical error, etc.) The point is that the numerical solution will then deviate from the true solution exponentially fast in time.
Jerry
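The error growth in this tutorial is easy to reproduce numerically (a sketch with arbitrary constants, not a claim about any particular model): march du/dt = a·u forward with Euler’s method from two initial values that differ by a tiny amount e0. Since the error obeys the same recursion as the solution itself, the gap is amplified by roughly exp(a·t):

```python
import math

# Illustrative constants (my choice, not from the tutorial).
a = 1.0        # growth rate in du/dt = a*u
h = 1e-3       # mesh size
t_end = 10.0
e0 = 1e-8      # tiny initial error (observational, forcing, rounding, ...)

u = 1.0        # "true" initial data
v = 1.0 + e0   # perturbed initial data
for _ in range(int(t_end / h)):
    u += h * a * u     # forward Euler step: u_{j+1} = u_j + h*a*u_j
    v += h * a * v

error = v - u
print(error)   # roughly e0 * exp(a * t_end) ~ 2.2e-4: 1e-8 grew ~22,000x
```

The numerical error grows at the same exponential rate as the solution, regardless of how small it starts, which is exactly the point of the tutorial.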
>> Weather is synonymous with chaotic atmospheric circulation.
to put it another way: if climate were just the long term average of weather, there could be no climatological change. Weather averages to zero. If the sun were completely constant, day after day, year after year, century after century, millennium after millennium, and there was no variation of orbit, no increase or decrease in cosmic rays, no variation in heat transfer from the core, and no variation in anything else that could affect energy coming in, energy going out, work being done or the internal energy of the planet, then the long term trend of the average temp would be a flat line.
Ill-posedness (tutorial)
Exponential solutions that are bounded in a finite time interval, such as in the previous tutorial, are allowed in mathematical theory and fluid dynamics, but as can be seen they are a serious problem for numerical approximations. Now let us look at a more serious type of problem.
Consider the ODE (obtained from a Fourier series representation of a PDE)
du/dt = k u
where the integer k can take on any positive value. Now the growth is unbounded in any finite time interval (by choosing k sufficiently large). This type of behavior (unbounded exponential growth in any finite time interval) is called ill-posedness of the differential equation and means that the solution is neither physically reasonable nor computable.
To see how this problem affects numerical approximations of the ill-posed hydrostatic system used in the majority of global climate models, see the numerical runs on this thread. The numerical solution not only fails to converge, it blows up as the mesh size is reduced, exactly as expected from this tutorial.
Jerry
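The unbounded growth can be seen directly (my own construction, mirroring the Fourier-mode equation in the tutorial): each mode of du_k/dt = k·u_k grows like exp(k·t), so even with very smooth initial data u_k(0) = 1/k², keeping more modes (the analogue of refining the mesh) makes the solution at any fixed t > 0 as large as you please:

```python
import math

# Ill-posed model problem: mode k of du_k/dt = k*u_k evolves as
# u_k(t) = u_k(0) * exp(k*t).  The initial data u_k(0) = 1/k**2 are small
# and smooth, yet the solution at t = 1 grows without bound as more modes
# are kept -- the analogue of "blows up as the mesh size is reduced".

t = 1.0

def solution_size(n_modes):
    """Sum of mode amplitudes |u_k(t)| keeping modes k = 1..n_modes."""
    return sum(math.exp(k * t) / k**2 for k in range(1, n_modes + 1))

for n in (4, 8, 16, 32):
    print(n, solution_size(n))   # grows without bound as n increases
```

By contrast, for the well-posed analogue du_k/dt = −k·u_k the same sum stays bounded no matter how many modes are kept, which is why refinement is harmless there.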
@Gunnar:
I have to admit that it never for a moment occurred to me that your argument boils down to “leaves aren’t sediment”. Are we next going to move on to “rabbits aren’t kangaroos”? I’m now mystified by what analogy you are trying to draw between weather and climate.
Given your little remark about dilation effects and your later explanations, I begin to suspect my mistake was to attempt to infuse some sensible meaning into your analogy. I’ll not make the mistake of trying to do that in the future.
@Sam:
The micro/macro issue actually does have something to do with relating climate to weather. (In contrast, as far as I can tell, “leaves aren’t sediment” has nothing to do with the problem.)
The micro/macro issue goes sort of like this:
Sometimes we (engineers and scientists) try to understand what happens in a forest by studying what happens in and around individual trees. They may do this for many reasons, but this is taking a micro individual tree approach to studying a problem.
Sometimes, we try to understand what happens in a forest by treating the forest as some sort of “average” thing with “average” properties. (This is a macro view.)
The two problems are always related because forests are collections of many trees. More importantly, if we want to figure out how to correctly treat the forest as some sort of “average”, we do that by studying the “micro” problem.
Interestingly, I could come up with several specific “tree/forest” examples.
Example 1: Suppose you are interested in how forest fires propagate across California. You might want a system of equations that predicts how mass, momentum and energy propagate through a forest treated as a sort of continuum. When studying this problem, you likely wouldn’t give a hoot about individual trees.
However, to get a halfway decent prediction, you likely need to understand something about how individual trees burn: for example, you might want to know the temperature at which a tree spontaneously combusts. Or, you might want to know, given a certain amount of oxygen, wind velocity and local temperature, how quickly the tree burned. Etc.
To get that information, which is required to do the “macro” large scale forest fire prediction, you do a “micro scale” analysis examining how individual trees burn, do some sort of averaging, and introduce the result into the larger “macro” problem.
We could come up with examples for studying all sorts of ‘tree/forest’ problems. (FWIW, sometimes studying micro-problems is nothing more than navel gazing; sometimes it’s necessary.)
So, getting some correct information out of the “micro” problem about “trees” is always relevant to a macro problem about “forests”, which are bunches of trees.
@Larry: I’m an engineer. MSME. BSME. PhD. ME.
Gerry appears to be an applied mathematician. (Am I right there, Gerry?) You will never read me writing “If there are problems with the solution operator of the homogeneous problem, then the forcing will not make things better.” in comments! However, transport phenomena (which involves mass, momentum and heat transport) and thermodynamics involve loads of math.
I reserve judgement on whether “climate” and “weather” are largely the same. Whether they are “the same” or “different” may depend on how you define each. My understanding was climate is defined as the ensemble average of weather– which would mean they have a lot to do with each other. Still, maybe there are other nuances that I’m not familiar with.
That said, I am sure that no matter how you frame the problem, the two must be strongly related (even if leaves aren’t sediment and rabbits aren’t kangaroos).
welikerocks (#304),
Exactly. The weather can be changed by many factors, but it is the long term behavior of weather that will show if there is climate change.
Ask a climatologist to precisely define climate. Don’t be surprised if the answers vary all over the place. 🙂
Jerry
Steve has cited my credentials but just for completeness I have a Ph.D.
in mathematics from the Courant Institute of NYU. My mentor was Heinz-Otto Kreiss.
Jerry
309, reaching way back through the cobwebs, I seem to remember that there are “marching” solutions and “relaxation” solutions to numerical diff eq problems, and the marching ones, as you describe, have the problems associated with extrapolation, because in essence that’s what they’re doing, whereas the relaxation solutions are interpolations, and thus better behaved. I’m assuming that the kinds of problems faced in the models are only suitable for marching solutions? In that case, I fully understand the intractability of the problem.
@311:
I concur with Jerry, but I would have used the word “ensemble average” in place of “long time average”. At that point, we could all return to the discussion of ergodicity introduced by Bender.
If Bender comes back: there is a very brief mention of ergodicity in Landau & Lifshitz “Fluid Mechanics” 2nd Edition, “Course in Theoretical Physics Vol. 6”. In the section about “Strange Attractors” under “Turbulence” you will find a sentence that ends….
“…; each path belonging to the attractor wanders through all layers and in the course of sufficiently long time passes indefinitely close to any point of the attractor– the ergodic property”.
If you read L&L, you will also have to endure words like “manifold” and “Cantorian structures”, but just thank your lucky stars you will not need to deal with “Hilbert Space” or “defined on a compact domain”.
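That ergodic property is easy to see in a toy system (my own standard example, not from L&L): the fully chaotic logistic map x → 4x(1−x) is ergodic on (0,1), so the time average of x along a single long orbit converges to the average over the invariant distribution, whose mean is exactly 1/2:

```python
# The fully chaotic logistic map is ergodic: a single orbit wanders through
# the whole interval, so its time average matches the ensemble (distribution)
# average.  The invariant density is 1/(pi*sqrt(x*(1-x))), with mean 1/2.

x = 0.1234            # arbitrary starting point (not a periodic orbit)
n = 1_000_000
total = 0.0
for _ in range(n):
    x = 4.0 * x * (1.0 - x)
    total += x

time_average = total / n
print(time_average)   # close to 0.5, the ensemble average
```

This is the same "time average equals ensemble average" idea at stake when asking whether a long climate record can stand in for an ensemble of climates.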
@Larry:
I think Gerry is concerned with the ill-posedness of the continuum formulation. That is to say, he is discussing a problem that happens before you code the problem up, and which can’t be fixed up by any method of marching forward in time. I think you are thinking of problems that may occur when even well-posed systems get discretized.
I’m not familiar with the specific physical problem Jerry is discussing, but “ill-posedness” is considered a very bad thing in a model for a real physical system.
Larry (#305),
I consider the inviscid Navier-Stokes (Eulerian) equations as the basic (homogeneous) time dependent dynamical system. All other forcings (solar change, wet physics, viscosity, etc.) are physical parameterizations (forcing terms). If the correct type and size of viscosity were used, then I would consider the viscous Navier-Stokes as the basic dynamical system.
Note that when the viscosity is reduced (or set to zero as in the runs cited), the kinetic energy in the convergent computed solution cascades downscale on the order of hours as expected. Thus none of the current weather or climate models resolve the real solution. As mentioned multiple times on this thread, this is artificially overcome by fictional large dissipation. There is no theory behind such games. In fact the theory
that exists (minimal scale estimates for incompressible flow) by Henshaw, Kreiss, and Reyna shows that if one uses the wrong size or type
of dissipation, the solution is wrong.
Jerry
forest & trees:
I must understand that removing fuel (slash) from the bottom three meters of each individual tree (micro level understanding) will radically change the fire characteristics of the whole forest (macro level)…
I must understand how a tree responds to strip barking (micro level) to properly interpret the ensemble composite ‘response’ of a regional strip-bark chronology (macro) over time.
A lesson learned long ago from developing GIS and thematic mapping… get the micro picture wrong and the macro picture will be garbage. It is rare that they are so disconnected that one can study macro without any concern for micro.
Larry (#305),
Go to comment #167 in the first version of this thread (#1) and look at the four plots there. Compare them with the tutorials above and things should become more clear. Nonlinear time dependent equations can be solved numerically if they are well posed.
Jerry
lucia (#315),
That is exactly correct. The cause of the ill posedness in the continuum
equations is the hydrostatic assumption in the neighborhood of
vertical shear (jet stream), i.e. the gravity waves have unbounded exponential growth as in the tutorial above. The gravity waves are also the source of the fast exponential growth in the nonhydrostatic system.
See the reference cited in the earlier thread.
Jerry
lucia (#314),
I like Hilbert spaces and compact domains. 🙂
Jerry
Signing off for the night. If anyone has additional questions relative to the tutorials or plots, I will try to answer them tomorrow.
Jerry
Just to add to the party, I did a very abstract course involving Hilbert spaces and compact domains leading to proofs of the ergodic theorem. It was in either 3rd or 4th year topology. One of the disadvantages was that the approach was so abstract that I learned the theorems but never understood the purpose of the course and how it connected to anything.
Steve (#322),
I would like to address your comment tomorrow.
Jerry
#306 Gunnernot arguing just adding this link here. (Maybe my thinking is just a mess!) This concept of “the average temperature of the earth” is a thorny concept in our house, especially talking in fractions of a degree, like the climate scientists do.
The climate on the earth has structure and it is set in zones. Maybe look at the Earth’s cycle instead of the thermostat; in other words, we read what the temp is at this moment in time, just like in our houses. And “weathering” can speed this cycle up or slow it down. The thermostat in my house isn’t my house; it is just a mechanism of my house. AGW theory claims we have broken the thermostat of the Earth, or that it isn’t set “right” anymore because of us.
>> “leaves aren’t sediment”. Are we next going to move on to “rabbits aren’t kangaroos”? I’m now mystified by what analogy you are trying to draw between weather and climate. Given your little remark about dilation effects and your later explanations, I begin to suspect my mistake was to attempt to infuse some sensible meaning into your analogy
I went back over my posts to see if I insulted you in some way. I couldn’t find anything to explain this ridicule. I’m tempted to explain it again, but I think I explained it quite clearly in 298. Maybe you’re in a place where you cannot adjust your level of abstraction?
Maybe the disconnect is not just scientist vs engineer, it’s abstract conceptual thinking vs concrete detailed math thinking.
I gave an analogy which you misunderstood as an example of micro vs macro, when it clearly wasn’t. I think the analogy is valid, since no one has pointed out where it’s wrong.
To illustrate the concept, imagine if you earned the post of Chief Climatologist of Earth, and you use the science of meteorology to study and predict the climate. Then, one day, the sun strengthened by 10%. The climate changed. The emperor comes to you and says “Why didn’t you predict this change?” And you respond, “I’m sorry, your highness, my detailed weather models (micro) should have predicted this change, but they failed me, and I failed you”. The emperor says, one more failure and it will be your head. Then, the planet’s orbit changes, and the climate changes again. The ending is not good.
Summary: It’s not a micro vs macro problem, no matter how much you want it to be. The problems are completely different; thus, so must the sciences of climatology and meteorology be different. Climate can change the Weather, but Weather can’t change the Climate. Sorry if this upsets you.
Conclusion: the argument against AGW “if you can’t predict the weather, how can you predict the climate” is invalid.
@Gunnar: Reexamine 294. You may not think your question to me about taking account of time dilation amounts to a rhetorical slap or an eye-roll, but I certainly do. How is the person asked, or others, supposed to take that sort of question?
In 298, you say a number of things that make no sense whatsoever. I could have responded like this:
If you thought I was, then you missed my point: fluid dynamics, no matter how chaotic, is irrelevant to climate,
Wrong.
which at the essence, is a thermodynamics problem.
Wrong.
My observation that your claims are simply wrong has nothing to do with abstract thinking, concrete thinking, focus, engineering, science, nitpicking or anything of the sort. Your arguments are founded on false ‘facts.’ Period.
>> You may not think your question to me about taking account of time dilation is amounts to a rhetorical slap or eyeroll but I certainly do.
No, it was a serious comment, and meant to counter the argument made by you that micro effects are always important for macro analysis. It was certainly not meant to offend you, and if you took it that way, I apologize. The point still stands. Your principle:
All I have to do is think of one contradictory example, and your assertion is invalidated. You say that one ALWAYS has to study the micro effects before studying macro effects. So, I ask you, do I need to consider time dilation effects prior to my road trip or not?
>> How is the person asked, or others, supposed to take that sort of question?
As a critical thinking counter argument. I speculate that you don’t like to be contradicted.
>> In 298, you say a number of things that make no sense whatsoever. I could responded like this
Yet, you don’t explain why they don’t make sense to you.
>> If you thought I was, then you missed my point: fluid dynamics, no matter how chaotic, is irrelevant to climate, Wrong.
As Larry pointed out, I did overstate that, but the corrected statement would be: weather, no matter how chaotic, is irrelevant to climate. More specifically, the fact that the math to predict whether a storm will hit San Fran one month from now is impossible (or ill-posed, as Jerry says) is irrelevant to a proper study of the climate.
>> which at the essence, is a thermodynamics problem. Wrong.
Care to support your assertion? I’m really curious as to how you would calculate temperature without involving thermodynamics.
>> Your arguments are founded on false facts. Period.
Methinks thou dost protest too much.
No, she doesn’t protest too much. The individual storm may be irrelevant, but the net effect of a number of storms is quite relevant. Remember that the most uncertain part of this puzzle is the feedback. That’s all transport. Without being able to model that, we have no way to take a stab at feedback. Then the whole modeling exercise becomes pointless.
Ok, I think I just answered my own question about why the fluid mechanical modeling is so critical. It’s the feedback mechanisms that are so dicey.
>> The individual storm may be irrelevant, but the net effect of a number of storms is quite relevant.
Which is what I said in 306:
So, for climatology, we only need to know how a storm works, and then we can understand the overall effect. We do NOT need to determine when and where it will occur (meteorology), so the mathematical complexities of chaotic weather are not relevant to a proper study of the climate. (see #270, 281, 294, 298, 306 and the conclusion in #325).
Gunnar; I totally agree, that is an invalid argument. Weather doesn’t have trends like climate does, comparing the predicting of them is not a valid concept. But we’re all in a circular argument, let me try and clarify everything.
Let me see if I can distill your #298
1) Weather – storm, not predictable in any meaningful sense.
2) The output of the sun is less chaotic, fairly stable, and therefore more predictable.
A meteorologist deals with the first, a climatologist the second. But then:
Concept 1: The sun doesn’t do anything to climate, it does something to the weather. The weather over time creates the climate. But you don’t really study the sun when you’re looking into weather. You might mention it’ll be out today though. 🙂
Concept 2: One of the things done when trying to predict climate change is to take how the sun is acting into account. Not anywhere in the way you look at it when thinking weather.
Don’t mix the concepts up, the first is cause/effect, the second is investigating the mechanisms.
“climatology is completely different than meteorology.” I don’t think anyone was arguing that they are the same thing. (I think “completely different” is overstated, so I phrase it as: they are not the same thing.) They are different fields of study, and discussing how they are similar or different is one topic. How weather over time is climate is another topic. I would say there’s some overlap between climatology and meteorology, but the mechanisms that overlap are looked at in different ways for different things. So in that way, they are completely different, but the differences are at times not so complete.
But trying to discuss both subjects (ology and weather/climate) at once is just confusing things.
As I said, before discussing something, you have to get the terminology being used decided upon. For example, there’s a lot of stuff in statistics I don’t understand, and one of the reasons is a great deal of the math is beyond me, and another is the words and descriptions differ from the way I would describe them from a computer science standpoint where there is overlap/similarity.
I thought I made a good analogy of you can’t see the forest for the pesky trees. 🙂 But this is one of the things that makes me question the idea that there is such a thing as a “global temperature” or at least that the concept is meaningful. Taking the micro (local temp 35 C at airport thermometer location at measurement time) and making it a macro (departure from average global temp of 14 C (or whatever) over the course of a year). Be that as it may, certain other discussions require that it be taken as a given that it does exist and is meaningful; discussing the anomaly trend and its causes for example.
Now, would a weatherman discuss CO2 outgassing from the oceans in a forecast? Probably not. Does that have an impact on climate? I would say yes, because it does affect the portions of weather that result in climate. So in that way “weather” is not “climate”. But let’s just talk about one thing.
Weather and climate are not the same, in the sense that weather is local short term (current) behavior, and climate is how the patterns of weather act over time in “some area”. In the sense that one creates the other, they are related the way a tadpole is to a frog or a worm to a butterfly; weather is climate at a different stage of development. 🙂
Which is not exactly a micro/macro issue. What is one, however, is turning the measured temperature (one aspect of weather) into a global temperature anomaly (one aspect of climate).
Returning to the forest fire analogy, if I don’t know at the start of a fire that the bulk of trees are ones that burn quickly and are in a fairly dry brushy area, I won’t know that fire will behave differently than the last one I saw that involved trees that burn slowly that are in a fairly moist grassy area. Or it could be some area that involves a mix of both.
But I don’t need to know the exact characteristics of every single tree for that purpose. I don’t think it makes it any easier to discuss this to try and generalize everything, especially not in absolutes. You have to know the purpose of what you’re doing to put things into perspective.
330, wrong. If you can’t mathematically model transport, how are we supposed to know how feedback works? I’m not saying that it’s possible to do accurately, I’m just saying that in principle, that’s the only way to model feedback. And if you can’t model feedback, the greenhouse part isn’t worth attempting to model. To model feedback requires the same circulation models as predicting the weather. Bottom line, if we can’t predict the weather, we have no tools for modeling feedback.
@Gunnar:
1) To answer your direct question:
No, I am not going to support my assertion that your unsupported bald claims are simply wrong. Several others have also told you they are wrong, and even provided the required support.
2) As to your notion that you have falsified my statement that macro and micro problems are always related: You are wrong. Accounting for time dilation effects has nothing to do with the “micro” / “macro” ways of looking at how long it takes for people to get from point A to B.
Estimating how long it takes one family (possibly your family) is the micro problem. You should account for whatever matters. I suggest accounting for potty breaks, and delays for explosive pointless arguments that result in the driver pulling off the side of the road and stewing. Unless you have rocket thrusters on your car, I’d suggest neglecting time dilation effects.
Estimating how long it might take a huge number of families similar to yours, all traveling between the two cities for some popular event (vacation, skiing season, Christmas break, etc.), would be the macro problem. “Submodels” from the micro problem are necessary for this problem. (In this problem, one might also be able to account for the traffic jams that will ensue when all these argumentative people get behind the wheels of their cars and end up on the same road.)
Note: The problems are related.
Note: time dilation is irrelevant to both problems.
3) Based on 327: you don’t understand what “ill posed” means.
4) The difficulties in predicting weather may or may not preclude predicting climate. That is an open question. I don’t know the answer, but it depends on why the difficulties exist and whether or not they go away when the “average” (climate) problem is addressed. Your bald claim otherwise does not resolve the question.
5) Trying to prove yourself right using obscure questions, mysterious incomprehensible analogies and telling other people you are the concrete thinker is futile. I’d suggest just stating your claims and trying to provide support.
Failing that, state your claim, tell people you aren’t going to spend time supporting it, and admit it’s fair enough if others don’t wish to be convinced.
Like this: When you say “weather, no matter how chaotic, is irrelevant to climate”, you are wrong. In fact, that’s bunk. But I’m not going to try to convince you by giving evidence. Feel free to continue believing that if you like.
@Larry–
328 and 329 look about right.
Will you share your popcorn? 🙂
I’d disagree with that Larry. We don’t have to predict weather, we just have to know the mechanisms involved and create something that mimics the chaos (or at least the basic operation). It probably wouldn’t provide anything useful climatewise, but you could at least get some sort of approximation if you’re modeling all the processes involved in the way they behave. Some will be more accurate than others, then when combined, you at least get an idea of what’s going on. The feedback model may suck, but it will be there at least.
How accurate and meaningful the overall model is, that’s another issue.
I’m probably going to regret jumping in here, but …
I don’t believe Larry declared that you have to be able to predict weather in order to predict climate.
He stated that in order to predict weather, you have to be able to model such things as individual storms. He went on to say that until we have an understanding of how storms work, we can’t model climate.
I believe he is claiming that we can’t predict either weather or climate, because we don’t have a basic understanding of how the atmosphere works yet.
To put it another way, it’s not that one, climate, depends on the other, weather. It’s that they both depend on an understanding of the atmosphere that we just don’t have yet.
Sam, thanks for responding with your thoughtful 331. I don’t think it’s a circular argument.
>> The sun doesn’t do anything to climate, it does something to the weather. The weather over time creates the climate.
The sun does something to Earth. The chaotic atmospheric circulation is the weather. A change in the overall energy level (temperature) of Earth is the climate. Weather does not create or cause climate changes.
>> climatology is completely different than meteorology. I don’t think anyone was arguing that they are the same thing
I think that some people were confusing the issue by thinking that since the object of study is the same (Earth), and since climate has a longer timescale, that therefore, climatology is just a macro version of meteorology. They then concluded that the mathematical problems of meteorology are transferred to climatology. Not so my friend.
>> But trying to discuss both subjects (ology and weather/climate) at once is just confusing things.
I really don’t think that it takes that much brain power to conceptualize the difference between our study of something, and the object being studied. [Snarky remark suppressed by the author]
>> Weather and climate are not the same in the sense that weather is local short term (current) behavior, and climate is some area and how the patterns of weather act over time. In the sense that one creates the other, they are the same in the sense of a worm is not a butterfly and a tadpole is not a frog
I disagree that climate (from a scientific point of view, not common dictionary usage) refers to “how patterns of weather act over time”. Refer to my Chief Climatologist example. Weather does not create climate, nor does it grow up to become climate, so the tadpole/frog analogy is not appropriate.
Weather is like the vibration of atoms in a hockey puck. Climate is Gretzky shooting the puck into the net. The false argument asserted by many is equivalent to saying “since we can’t mathematically predict where a certain atom will go in the puck, we cannot determine if Gretzky will score or not”.
That’s about right. To correctly model feedback, we need to have circulation models that generate storms and other dynamics such that the aggregate statistics come out about right. This is not the same as predicting the weather, but it’s made out of the same things. More importantly, the problems that Jerry was talking about will prevent these from giving good aggregate statistics. Or at least that’s the issue raised here.
It should be easy to confirm or falsify this with a number of different model runs. If you tweak the resolution of the model and the feedback changes, you’ve got problems.
I keep coming back to this and thinking about posting… never quite get around to it though!
Gunnar, both Tom and I have tried to explain to you the scaling behaviour of chaotic systems, and you still haven’t yet got it. I’ll have another try because I’m a sucker for punishment (!)
Chaotic systems generally exhibit self-similar behaviour over a massive range of scales. The reason for this is that phase transition events tend to cluster, then those clusters tend to cluster (giving clusters-of-clusters), then those clusters tend to cluster (giving clusters-of-clusters-of-clusters) and so on. A small change in the initial conditions can result in an entirely different result on all scales.
What is the difference between weather (say events of the order of hours) and climate? Five orders of magnitude? A mere stone’s throw to a chaotic system.
That said, there is no proof or disproof that weather (or climate) is chaotic. Until we have the full mathematical derivation of climate, or a few billion years worth of good quality climate data, we are not likely to find out. But the consequence of the scaling behaviour means that either chaotic behaviour drives both weather and climate, or drives neither weather nor climate. It is very unlikely to drive one and not the other.
I used to be fairly committed to the concept that weather and climate were chaotic, just because the scaling behaviour of both seemed to be a good fit. Noting Dr. Browning’s views above, in conjunction with reading some of Prof. Koutsoyiannis’ back catalogue, I’m now starting to form the view that chaos isn’t an essential ingredient at all to cause the scaling behaviour apparent in weather and climate.
Interesting papers on the topic:
On the quest for chaotic attractors in hydrological processes (click on preprint)
Shows that tests for low-dimensionality chaotic attractors fail, and there is insufficient data (by a large margin) to determine the presence of high-dimensionality chaotic attractors in hydrological processes. Not all of these arguments can be applied in quite the same way to temperature though;
The long-range dependence of hydrological processes as a result of the maximum entropy principle (click on presentation)
Illustrates how long-range dependency and scaling structure can be derived, via the principle of maximum entropy, for systems with autocorrelation and high variability in fine-scale structure; the scaling structure is what makes long-term averaging give misleading answers when tested with classical statistical methods. In particular, p. 12: “increased information gain for increasing scale (leading to) increased predictability for increasing lead time (is) physically unrealistic” – I hope I’ve interpreted the implications correctly there – these seem to extend fully to temperature.
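The scaling diagnostic behind these papers can be sketched in a few lines of stdlib Python. This is only an illustration of the idea, with arbitrary sample sizes and scales: average the series over blocks of increasing length and watch how the spread of the block means decays. Classical iid noise decays with a log-log slope of about −0.5; a long-range dependent (Hurst-type) series decays more slowly, which is exactly why classical averaging understates its uncertainty.

```python
import math
import random
import statistics

def aggregated_sd(series, scale):
    """Standard deviation of the series after averaging over
    non-overlapping blocks of length `scale`."""
    blocks = [sum(series[i:i + scale]) / scale
              for i in range(0, len(series) - scale + 1, scale)]
    return statistics.pstdev(blocks)

def scaling_slope(series, s1=10, s2=100):
    """Log-log slope of aggregated SD between two scales: about -0.5
    for iid noise, closer to 0 for strong long-range dependence."""
    return (math.log(aggregated_sd(series, s2) / aggregated_sd(series, s1))
            / math.log(s2 / s1))

# White (iid) noise reproduces the classical benchmark slope near -0.5;
# a Hurst-type series (like many climate records) would sit well above it.
rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
```

Running `scaling_slope(white)` lands close to −0.5, the classical-statistics benchmark the presentation contrasts against.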
The reason that these discussions have become so heated is that no one has produced a mathematical definition of climate. Assume that there was a perfect model (analytical or numerical) that described the weather on the earth from moment to moment. Then define weather prediction and climate in terms of a mathematical formula that involves the output variables from that model. My guess is that there will not be any single answer by a climatologist or anyone else. If that is the case, then arguing about the differences or similarities is pointless. And if there is a single answer, then the flaws in any model can be discussed separately.
Jerry
>> telling other people you are the concrete thinker is futile
Lucia, I have nothing to respond to, since you made no argument, simply resorting to ad hominem. I just want to clarify that in the quote above, I was thinking of myself as the abstract thinker. I’m sorry that you take offense to everything I write.
>> Several others have also told you they are wrong, and even provided the required support.
I missed that part.
>> Estimating how long it takes one family (possibly your family) is the micro problem
You can’t redefine the problem to fit your view.
#335, MarkW, I agree with that 100%.
339, I don’t think we need a mathematical definition of climate; we just need a mathematical definition of climate sensitivity. That’s what we need to condense from the behavior of the GCMs.
338, we do know that fluid mechanics is chaotic in the case of low viscosity, because the Navier-Stokes equations are nonlinear. And it sounds like they don’t even attempt to account for viscosity in the models, so the models would be chaotic, even if the real atmosphere isn’t.
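The sensitivity to initial conditions that everyone here keeps invoking is easy to demonstrate on a toy system. To be clear about the hedging: the logistic map at r = 4 is a standard textbook example of deterministic chaos, not a fluid model, and the perturbation size and tolerance below are arbitrary illustrative choices.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4.0 is the fully chaotic regime."""
    return r * x * (1.0 - x)

def steps_to_diverge(x0, eps=1e-10, tol=0.5, max_iter=500):
    """Run two trajectories whose starting points differ by eps and
    return the first step at which they disagree by more than tol."""
    a, b = x0, x0 + eps
    for n in range(max_iter):
        if abs(a - b) > tol:
            return n
        a, b = logistic(a), logistic(b)
    return None

# The separation roughly doubles each iteration (Lyapunov exponent ln 2),
# so a 1e-10 perturbation swamps the signal within a few dozen steps.
```

Two starts agreeing to ten decimal places give completely different trajectories after a few dozen iterations; that exponential error growth, not mere complexity, is what “chaotic” means in this thread.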
Gunnar, forgive me for jumping in here, but in very broad terms climate is the integral of weather. While it may be very difficult to predict ONE storm accurately, and it is not necessary to predict every single storm accurately in order to take the integral of all the storms and come up with climate, you DO have to be able to accurately model weather in the aggregate to predict climate. The difficulty I have with the climate predictors is that their claims of accuracy far exceed the very measurable inaccuracy of weather predictions. They say they have many different measurements and that they are all iid. But when you look at them in detail, not only are the measurements not all measuring the same variable (i.e. many are proxies), but even the direct temperature measurements turn out not to be so iid.
You are correct in that it is possible to do top-down modeling (and I studied and did a little bit of that kind of modeling in school), but as I understand it, the predictors are not claiming that their models are top-down models, so the criticism of a disconnect with the micro elements of their models I think has validity.
If there are some top-down models I am unaware of, I would be very interested in seeing how they are constructed.
Spence_Uk, I remember well our back and forth on this very subject. I also remember that the source of the confusion was that we were on different levels of abstraction. You were focused on the object being studied and treating it from a statistical point of view only. I was talking about two studies of that object with completely different goals, namely meteorology and climatology. I remember that in the end, you seemed to understand that. By your comment here, it looks like you forgot the issue, or you never did get it. Larry, please pass the popcorn, the smell is just too good.
>> But the consequence of the scaling behaviour means that either chaotic behaviour drives both weather and climate, or drives neither weather nor climate. It is very unlikely to drive one and not the other.
I don’t think you can support this assertion, which is akin to saying “either chaotic behaviour drives the movement of puck atoms and wins hockey games, or it drives neither puck atoms nor wins hockey games. It is very unlikely to drive one and not the other”.
Well, I’ve got news for you, Spence ol’ buddy: chaotic behaviour drives the movement of atoms, but does not cause Gretzky to score. Chaotic atmospheric circulation drives the weather, but has nothing to do with the many other things that affect the climate, including solar dynamics, orbital dynamics, ocean circulation, plate tectonics, man changing the attributes of earth, etc.
>> climate is the integral of weather
again, while true, that is a statement about the object being studied. Just like the puck doesn’t go into the back of the net without its atoms.
Steve (#322),
I think that your comment is bang on. One of the problems with the teaching of mathematics after an area has become fully developed is that it is easy to lose insight into the original foundations of the subject. This is partly a fault of the texts in the developed subject area and partly a fault of the teachers. Mathematics can be a very boring subject and the latter problems only intensify the difficulty. It is my personal belief that the teaching of mathematics must always be tied to more concrete examples.
When I started undergraduate school, I initially enrolled in the more theoretical mathematics department. But my roommate started in engineering mathematics. I was more interested in his assignments than mine, so I switched to the applied math area in the engineering school and enjoyed that program. But in graduate school at the same university, the focus was more on algebra (Galois theory), topology, etc., and I lost interest very quickly.
I was fortunate that Heinz helped me return to school. At NYU the program was more to my liking. The subjects tended to stress fluid dynamics and practical applications of differential equations. There was no topology teacher although topology was taught. And of course working with an applied mathematician over the years was very helpful.
Jerry
Larry (#342),
This is not a factual statement. Might I suggest that you peruse the minimal scale estimate manuscript by Henshaw, Kreiss, and Reyna cited here many times.
Jerry
I think you are confusing molecular vibrations (which are chaotic) with molecular motion (which is not).
Clearly, if the system is insensitive to initial conditions (as in the case of Gretzky scoring), the system is not deterministically chaotic in the conventional definition of the term.
We’re all trying to discuss something abstract that has no answer, perhaps. Like the blind men and the elephant, or two people on opposite sides of a curved wall arguing whether it’s convex or concave: the one facing north sees a half circle curving left, while the one on the other side facing south sees a half circle curving right.
Re comment #160933: MarkW, I think you put that well. I was merely saying that ‘no weather prediction’ doesn’t have to equal ‘no tools for modeling feedback’.
But I tend to think of climate in terms of how the dictionary defines it: weather over time. Modeling them is a different matter (and they are different things), but they depend on each other (or, as you said, both depend on the same types of things, like clouds). And notice, I did say that how accurate and meaningful the models are is another issue. So once again, multiple topics that need to be discussed in separate ways.
I don’t think it’s needed to be able to model individual storms, rather to model the patterns so they are similar. Which is why I said, all we can get is an idea of how things basically work.
Re comment #160942: Gunnar, it depends on how you define climate. You define it as changes to temperature (which is not really the overall energy level, but close enough). I don’t. I define it as weather patterns over time. And weather is created by, among other things, the sun. I think we’re arguing over concepts and minutiae we don’t need to argue about. I think the key here is that if we don’t understand well how clouds work – how clouds interact with water vapor, CO2, oceans and everything else in the cycle (including the clouds) – we can’t model all the interactions well, if at all.
And the point isn’t the brain power needed to conceptualize the difference; it’s that we’re discussing multiple subjects at once as if they’re all the same thing, using terminology that’s not agreed upon, from different aspects of the issue and different viewpoints. That’s why I said circular – perhaps not the best choice of words.
Re comment #160943: Larry, as soon as we get a model that takes what it was like 10, 20, 30, 40, and 50 years ago, spits out results for what it’s like now, and predicts what it will be like 10 years from now, I’ll let you know in 10 years whether I’m convinced the models are anything but guesses built on poorly understood aspects. But we can’t say we can’t model them, just that the models are vague indications of poorly understood systems.
I wasn’t even trying to discuss models per se, just that there’s a difference between not understanding something and not being able to model it at all (no matter how badly!).
Re comment #160956: Gunnar, if you define climate as the entirety of the system – every aspect of the cause of weather and everything that affects it in any way, directly or indirectly… I suppose I am a bit perplexed at you stating that man changing the attributes of the Earth is climate. Man changing attributes of the Earth affects weather.
And this too. So is plate tectonics part of the climate or of climatology?
344,
But the only way to model chemistry from first principles is to model the sum of all of the chaotic behavior. So yes, chaotic behavior does cause Gretzky to score. It also causes your car engine to run smoothly. And chaotic behavior of electrons in your computer cause the right bits to show up in the right places. All of what appears to be deterministic behavior in nature is riding on top of at least one layer of chaos, maybe more.
Re # 345:
I wouldn’t formulate it like that. Chaos or turbulence in the atmosphere and ocean is important. It has a large influence on the circulations in both the ocean and the atmosphere, and consequently on climate. So in any GCM the effect of turbulence (which is not resolved, or only partly) has to be taken into account with parameterizations. But the aim of the parameterizations is not to mimic every single eddy but rather to capture the average effect of the turbulent eddies on the larger resolved scales and processes. This approach is probably sufficient for climate modelling. The same is done in all engineering models of turbulent flows. The average effect of the turbulent eddies is modelled as well as possible. The resulting model does not predict exactly where and when an eddy will exist and how strong it is (this would be perhaps similar to weather prediction) but tries to capture the mean effect of the eddies, which can still be good enough to compute the mean flow/drag (climate?). People can look this up in textbooks on turbulence, or try a Google search for something like “large-eddy simulation”. There they use this approach.
Larry (#341),
If we don’t have a mathematical definition of climate, how can we have a mathematical definition of climate sensitivity? Let us keep this thread on a quantitative level. Each commenter can put forth his mathematical definition of climate based on the above assumption. Then those definitions can be analyzed and scrutinized in a quantitative fashion. Isn’t it interesting that no one has put forth a quantitative definition even for a perfect model. No wonder there is so much verbiage and so little substance in the climate change debate.
Jerry
Nonsense. If they’re really any good, we can go back in time and see how well they predict the recent past. We don’t have to wait to see if they have predictive power. But we can’t let our knowledge of the recent past influence our parameter selection.
352, good question. The climate sensitivity is a somewhat unphysical meta-variable: the change in global mean temperature (GMT) with a doubling of CO2, completely ignoring time dynamics. It’s based on the basic change in the heat balance due to the doubling. The fact that it’s unphysical doesn’t mean it’s undefined. And if we could know what it is, it would shine a lot of light on the question of what we can expect from the various IPCC scenarios.
Just because it’s not physical doesn’t mean it’s not useful.
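To make the meta-variable concrete, here is a back-of-the-envelope sketch. The logarithmic fit for CO2 forcing, dF ≈ 5.35 ln(C/C0) W/m², is a widely used approximation; the sensitivity parameter `lam` below (0.8 K per W/m², giving roughly 3 K per doubling) is purely an assumed illustrative value – it is the very number under dispute, not something this code derives.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing in W/m^2 from the common logarithmic
    approximation dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, c0_ppm=280.0, lam=0.8):
    """Equilibrium temperature change in K, given an ASSUMED linear
    sensitivity parameter lam in K per W/m^2 (illustrative only)."""
    return lam * co2_forcing(c_ppm, c0_ppm)

# A doubling (280 -> 560 ppm) gives dF = 5.35 * ln 2, about 3.7 W/m^2;
# everything contentious is hidden inside the choice of lam.
```

The point of the sketch: the forcing arithmetic is trivial; all of the argument in this thread is about whether a single, well-defined `lam` exists and how the GCMs would pin it down.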
>> I suppose I am a bit perplexed at you stating that man changing the attributes of the Earth is climate. So is plate tectonics part of the climate or of climatology?
To study the climate, one needs to study all the things that can affect it. This includes man’s actions, plate tectonics, etc. That means that these are part of climatology. It does not mean that they are part of the climate.
Sorry for the broken record folks, but… “object of study” vs “our study of”. Our study of something depends on why we are studying it. The purpose of meteorology is to predict the weather, an inherently difficult problem, because the weather is chaotic (even though Spence isn’t convinced it is [giggle]). The purpose of climatology is to study all the things that could affect the average temperature of earth. See conclusion in #325.
RE: #278 – Scorecard, Day 1, AM. We hit dew point over night. However, there was a puff or two of offshore wind as dawn broke. Yet, cirrus moving in from the WSW, contrails high over head. Still, the NWS stick to their guns, progging a warm and dry sequence through Thanksgiving. (This is an interesting time of year. Forecasts are often way off in this CWA, especially when the jet is oscillating in terms of latitude).
gb (#351),
Pure nonsense. The only mathematical proof that exists shows that if you do not properly resolve the minimal scales in a solution of the compressible NS equations, the numerical result is not accurate. LES is just another parameterization. You need to read the mathematical literature cited on this site.
Jerry
Thanks for the ignorant straw man, Gunnar. I didn’t say it was or it wasn’t; I said there was no evidence to support the presence of chaotic attractors (in much the same way that MBH98 does not support the modern period being warmer than the medieval). Lack of support is not disproof. My main point is that if chaos is important in weather, it will be important in climate; if it is not important in weather, it will not be important in climate. I also provided a peer-reviewed paper that supports my views (not that this is proof, but if the best comeback you’ve got against a peer-reviewed paper is “giggle” then you have a lot to learn).
Gunnar (#356),
So where is your mathematical formula? Are you saying it is the mean temperature of the earth? For how long a time period is the spatial average of temperature supposed to be taken, and at what vertical level?
Stop the verbiage and produce something rational.
Jerry
The reason why the engineering correlations won’t work here is that they all relate to a solid object in a flow field (or in some cases, a flow field inside of a solid object). This is a different animal. What’s the Reynolds number of the atmosphere? What’s the characteristic length?
#350 >> But the only way to model chemistry from first principles is to model the sum of all of the chaotic behavior.
Only way? Are you saying that there was no science of chemistry before they modelled chaotic behaviour? Which came first, Ohm’s law or quantum electrodynamics? And even now, to design a power grid or a generator, I don’t need the quantum stuff.
#350 >> So yes, chaotic behavior does cause Gretzky to score. It also causes your car engine to run smoothly
A ridiculous statement. Again, you are confusing the object and the study of it. Science is not reality; it’s the study of it. Conceptualize the abstraction. The atoms are necessary in order for the puck to exist; otherwise, Gretzky can’t score. However, we don’t need to study chaotic atomic “vibration” in order to figure out anything about hockey.
#351 >> atmosphere and consequently on climate. So, in any GCM the effect of turbulence
It may be a factor, but there are other causes of climate change. Climatology must study those causes. A GCM may not be the primary tool needed for climatology.
Spence_UK (#358),
No one has proved or disproved the existence of strange attractors or chaos for the equations of motion for the earth. Let us first see if there is any rational mathematical definition of climate that everyone can agree on. If this is not possible under the above simplified assumption, then there is no reason to proceed with the discussion. And this is an interesting result in and of itself. 🙂
Jerry
Gunnar, whatever. You’re not even reading the words on the monitor. What part of “model from first principles” don’t you understand?
Larry (#360),
There are many scales of motion for the atmosphere and oceans. Might I suggest that you peruse some of the manuscripts by Heinz and me that have been cited before on this website.
Jerry
Re # 357:
Jerry, perhaps you need to open a textbook or look in the literature. You just keep on referring to articles where you are one of the coauthors, but this is just a very, very small subset of the vast material on numerical simulations of turbulent flows (direct numerical simulation, large-eddy simulation; just use Google, don’t be afraid). Yes, you are right: if your goal is to reproduce one particular realisation precisely, you need to resolve all scales and need to have exactly the same initial conditions. However, if your goal is to reproduce the averaged statistics (mean velocity, momentum transfer, drag – the averaged stats are the only thing that you want to know in engineering), you can use large-eddy simulation, for example, which doesn’t resolve all scales but uses a subgrid model. If you have a good subgrid model, then a large-eddy simulation can closely (not precisely) reproduce the averaged statistics obtained from a fully resolved simulation or experiments. There are scores of papers that prove this. I don’t think Jerry is so open-minded as to ever accept this, but at least I hope that other people look a bit further than just the papers mentioned by Jerry if they want to know more about the subject. Something similar should be possible with an ocean/atmospheric model.
Re #362
I fully agree. My first mistake was to try to educate Gunnar. This challenge is perhaps only marginally less great than trying to derive equations to fully describe climate 🙂
I used to be of the view that chaotic behaviour seemed likely, given the way the scaling behaviour of climate (statistically speaking) seemed to match the scaling behaviour of chaotic systems. However, given the paper I link above showing that simple deterministic systems are prone to this scaling behaviour as well (via the principle of maximum entropy), I suddenly realise that this is not important at all.
I am increasingly of the view that if we had more people like you looking at the pure analytical side, and more people like Prof. Koutsoyiannis looking at the statistical side, climate science would not be in the mess it is in today.
Of course not. Molecular (not atomic) vibrations describe the IR spectra of molecules, not their motion. Why would the IR spectra of molecules influence a game of hockey?
Everybody – I can see that we have a gulf between the mathematicians and the engineer/scientists. The mathematicians are focusing on the problem of getting exact 4-dimensional simulations of flow, while the engineers and scientists are trying to relate this to some meaningful summary statistic, like climate sensitivity. So there’s a lot of talking past going on.
I don’t think we’re going to stop talking past each other until we lay out exactly how we get from GCMs to climate sensitivity. Without that, we’re just waving hands.
>> you still haven’t yet got it. I’ll have another try because I’m a sucker for punishment (!)
Thanks for the ignorant straw man, Spence.
>> I didn’t say it was or it wasn’t
which confirms what I said: “even though Spence isn’t convinced it is”
I don’t need mathematical proof to confirm observation during the thousands of days I’ve been alive.
>> you’ve got against a peer-reviewed paper is “giggle” then you have a lot to learn
If the best comeback you have to logical arguments is a “peer-reviewed” paper that questions whether weather is chaotic or not, then I have a right to giggle.
>> So where is your mathematical formula?
Who says you need a mathematical formula to define a concept?
>> Are you saying it is the mean temperature of the earth?
It’s not me saying it; that’s how the term is used, by both AGWers and anti-AGWers alike. The ideal would be to measure the Joules of energy.
>> Stop the verbiage and produce something rational.
[giggling] I think giggling is a more polite and mature response to ad hominem.
>> Molecular (not atomic) vibrations describe the IR spectra of molecules, not their motion.
cause the IR radiation, not describe.
>> Why would the IR spectra of molecules influence a game of hockey?
Exactly. Why would the ability or non ability to predict a storm 20 days hence affect our ability to study how the sun, moon, stars, earth orbit, man etc, affect the climate? Using logic, I worked you into completely agreeing with my point.
I will wait a day or so to see if anyone produces a mathematical formula for the definition of weather or climate. If not, then everyone should be able to comprehend the futility of a discussion on the difference between weather prediction and climate prediction when even under the assumption of a perfect model of the earth a mathematical definition of neither one is available.
Jerry
It’s not just the IR spectra, or all of the spectra, it’s all of chemistry that’s determined by those vibrations. And biology is determined by chemistry, and the hockey game is determined by biology. If those molecules don’t move, the game doesn’t happen.
Gunnar (#369),
You are great at producing hot air. Where’s the beef? Do you or do you not have a mathematical definition of climate?
Jerry
355 Gunnar says:
November 14th, 2007 at 12:47 pm
No. Meteorology is a part of atmospheric science. Climatology is a branch of meteorology which concerns itself with the long-term statistical trends in the meteorological conditions in areas of the planet which exhibit persistent differences in such meteorological trends. The study of temperatures in the different climatological areas is just one of many climatological conditions which comprise the subject areas studied in climate science. The study of temperature is not and never has been the sole subject of climate science, and the fact that different areas exhibit different climates with different meteorological conditions, including temperatures, is direct proof there is no such thing as an “average temperature of the Earth” in the study of climatology. Whatever the average kinetic heat content of the planetary atmosphere may be at any given period of time, that condition is the composite result of far more than just the sum of the meteorological conditions that are the subject of study in climatology. That is why climatology is a subpart of the broader disciplines of atmospheric science and physical science. Any approach to the study of climates which focuses on the study of temperatures while neglecting all of the other meteorological conditions and non-atmospheric conditions is utterly invalid and whimsical.
>> If those molecules don’t move, the game doesn’t happen.
Larry, true, but I’m talking about the study of! Do we need to STUDY how the molecules move, in order to figure out how to win a hockey game?
>> I will wait a day or so to see if anyone produces a mathematical formula for the definition of weather or climate.
Just curious, why did you use words? Why didn’t you give us a math formula to express the concept?
There are several aspects to predicting weather.
1) whether a particular storm will hit Chicago or South Bend.
2) whether a particular storm will start at 1pm or 5pm.
3) how big will the storm(s) be
4) how long will the storm last
factors 1 & 2 are irrelevant when it comes to predicting climate.
factors 3 & 4 are relevant when it comes to predicting climate.
Gunnar is concentrating on factors 1 & 2. Hence his leaf in a stream example.
He is however ignoring factors 3 & 4.
Right, so what test did you apply to your observations to determine chaotic behaviour? Bearing in mind you don’t actually know what chaotic behaviour is, I can’t see how you could develop a test for it.
I put forward a series of arguments via a peer reviewed journal. I didn’t say they were right, but if you think they are wrong then you need to tell me WHY they are wrong. Or you can just giggle, in which case you look like an idiot.
Both causes, and describes.
No, you don’t understand my point. I’ll explain it to you in more detail.
The motion of molecules is described by Brownian motion, a random walk. You originally implied that the motion of molecules was chaotic, which is fundamentally wrong.
Initial condition errors in a random walk evolve proportionally to the square root of time. This is sub-linear growth, which means averaging buys you something in future predictability. On this basis, Brownian motion of molecules is not chaotic; that is why the puck flies off predictably – because the system is not chaotic.
The chemistry, the IR radiation, has nothing to do with the motion of the puck whatsoever. Different, separable system. This system exhibits chaotic behaviour.
So no, your original argument is flawed because there is no chaos involved. You did not realise this because you are unaware of how to define, or test for, a chaotic system.
In simple, logical statements, your original premise:
Molecular motion is chaotic, puck behaviour predictable, therefore averaging overcomes chaos.
Molecular motion is not chaotic, therefore your argument does not stand. (Not “molecular motion is not chaotic, puck behaviour predictable, therefore averaging overcomes chaos” as your last post attempted to imply)
The latter is a form of debate suitable for the school playground.
Oh and Larry:
The motion of the molecules is quite separate to the molecular vibrations.
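Spence_UK’s sub-linear growth claim is easy to check numerically. A rough stdlib sketch (trial counts and seed are arbitrary choices): the RMS displacement of a ±1 random walk grows like √t, so quadrupling the number of steps should only double the spread – quite unlike the exponential divergence of a chaotic map.

```python
import math
import random

def walk_rms(steps, trials=2000, seed=1):
    """Root-mean-square displacement of a +/-1 random walk after
    `steps` steps, estimated over `trials` independent walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pos = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += pos * pos
    return math.sqrt(total / trials)

# Theory: RMS = sqrt(steps). Going from 100 to 400 steps roughly
# doubles the spread (sqrt(400)/sqrt(100) = 2) -- sub-linear error
# growth, not the exponential growth that defines chaos.
```

This is the operational distinction being argued: in a diffusive system, averaging over time and realizations tames the uncertainty; in a chaotic one, it cannot.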
376, the real issue here is, is it possible to know 3 and 4 without knowing 1 and 2? The important climate statistics depend on the details, and I don’t think anybody’s come up with a good way to arrive at the statistics without developing the details first. That’s what we’re all arguing about; is a detailed simulation of the atmosphere’s flows necessary in order to calculate climate sensitivity, or is there a short cut? If you think there’s a short cut, show me.
Just as we don’t need to know the motion of each and every molecule in order to predict tomorrow’s weather, we don’t need to know the energy of each individual storm in order to predict climate.
However, we do need to know the aggregate energy of all the molecules in order to predict weather. Likewise we need to know the aggregate energy dissipated by all storms in order to predict climate.
Oh? Enlighten me.
Larry,
Imagine a bell ringing.
Imagine a bell falling.
Imagine a ringing bell that is falling.
The ringing is molecular vibration.
The falling is molecular motion.
#374, I agree! My tweaks:
Whatever the average kinetic/heat content of the planetary atmosphere may be at any given period of time, that condition is the composite result of far more than just the sum of meteorological conditions that are the subject of [current] study in climatology, which is in its infancy. That is why climatology is a subpart of the broader disciplines of atmospheric science and physical science [and many other sciences]. Any approach to the study of climates which focuses on the study of temperatures [(like current AGW papers)] while neglecting all of the other meteorological conditions and nonatmospheric conditions [like solar, orbit, stars, man] is utterly invalid and whimsical.
Corrected statement of climatology purpose: The purpose of climatology is to study all the things that could affect the average kinetic/heat content of the planetary atmosphere.
Larry, I wouldn’t categorize it as nonsense to ask to have proof that not only did your model have predictive powers in the past, but that it has predictive powers in the future. Or in other words, does someone have enough faith in the model to make a prediction now at the current settings. Then there’s proof that the model is good and hasn’t been tuned or mucked with, because I already have the output. Think of it as a challenge to the model.
Gunnar, we’re having a discussion of climate itself. So plate tectonics doesn’t have anything to do with climate (other than the effect it has on the geological systems that would lead to affecting the weather). But studying the climate includes studying plate tectonics because it’s part of climatology. That seems to be getting rather circular if you’re trying to delineate a thing with the study of that thing, but you lump in all the studying in different ways.
So are we talking about the climate and what it does, the studying of climate to see how it does it, the predictions themselves of how climate might change in the future, or climate modeling itself? Or are we discussing how the vibrational characteristics of some element or another works?
And no, the purpose of meteorology is not to predict the weather, it’s to study the atmosphere and it focuses on weather processes and forecasting. It’s to illuminate and explain observable weather events. And if you want to be specific about it, you actually have to look at the geospatial size of what meteorology aspect you’re talking about.
And actually on that note, how do we track the temperature portion of climate change, that is, derive the data? That would be by using meteorology to take surface measurements, do remote sensing and take satellite observations. Arguing the distinction is pointless given that fact; we’re feeling the elephant here with this discussion.
I totally agree, if, as RP SR suggests, we go to Joules, then we have something quantifiable to look at.
Jerry, yes. But I don’t see how we can mathematically define this. Unless we were tracking total energy levels (if that’s even possible, which I don’t think it is). On the other hand, we could just look at the hockey puck and not argue about the molecules in it or the fact that if it wasn’t for molecules of hydrogen and oxygen nothing would exist, especially not water.
Mandelbrot (of fractals fame) pondered this. (See http://www.climateaudit.org/?p=396 and http://www.climateaudit.org/?p=382)
380, if the bell isn’t ringing, it can’t fall, because it’s in a solid lattice. Got it? It can only fall if it’s a liquid or a gas.
Re #379
The motion of molecules can be quite predictable, even under complex situations.
Let’s take a simple system. Let’s put two hockey pucks in the middle of space, with no other matter nearby. The pucks will, quite predictably, move towards each other under gravity. They will bounce off each other, causing a loss of energy (primarily into heat). They will bounce away, until gravity brings them together again, again with a loss of energy, until eventually they will stop still, in the middle of space, resting against each other.
Let’s expand that now, to billions of pucks in space, all moving randomly, colliding with one another. The pucks will exhibit Brownian motion, just like molecules in a gas. But Brownian motion behaves predictably on a large scale. Why? Because the error growth with respect to the initial conditions grows with the square root of time (a simple mathematical demonstration of this is the evolution of errors of a random walk). Because of this, long term averaging – which reduces error linearly with time – eventually cancels out the random fluctuations.
This is why, when a force is exerted on the puck, it flies in a predictable direction.
The key is the evolution of errors. If errors evolve in a linear manner, averaging can cancel them. If errors evolve with an order lower than linear, averaging yields good results. If errors evolve with a higher order than linear, then averaging yields nothing – the error growth, even with averaging, accelerates away from you.
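The square-root error evolution is easy to check numerically. Below is a minimal sketch (standard library only; the walk lengths and sample count are arbitrary illustrative choices, not anything from the thread). The "initial condition error" here is taken as the gap between two walks that share a starting point but receive independent step sequences:

```python
import random
import math

def rms_error(n_walks, n_steps, seed=0):
    """RMS gap between pairs of random walks started from the same
    point but given independent step sequences -- a proxy for how
    'initial condition' uncertainty evolves in a random walk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        a = b = 0.0
        for _ in range(n_steps):
            a += rng.choice((-1.0, 1.0))
            b += rng.choice((-1.0, 1.0))
        total += (a - b) ** 2
    return math.sqrt(total / n_walks)

# Quadrupling the number of steps roughly doubles the RMS gap,
# i.e. error grows like sqrt(t), not linearly.
e1 = rms_error(2000, 100)
e2 = rms_error(2000, 400)
print(e1, e2, e2 / e1)  # ratio near 2, not 4
```

That sub-linear (square-root) growth is exactly the property that lets long-term averaging win for a random walk, in contrast to the exponential case discussed below.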
That is why the difference between a system exhibiting exponential error growth, or linear error growth, are critical to understanding whether it is easier or more difficult to solve problems on a larger scale.
Incidentally, my view on the game of hockey is not on the larger scale issues of how the game progresses, but on the small scale issue that Gunnar was alluding to – when a puck is hit correctly in a certain direction, it goes in that direction, irrespective of the complexities of molecular behaviour.
Atoms in a solid lattice can still ring (vibrate).
Re #380
Nice explanation, very neat! 🙂
Thanks.
Okay, let’s make this simple:
Meteorology: Study the atmosphere focusing on weather processes and forecasting for the purpose of illuminating and explaining observable weather events. Weather events are bound by Earth’s atmospheric variables – temperature, pressure, and water vapor. The variables are subject to the gradients and interactions of each and how they change in time. Geospatial scales of meteorology include micro, meso and synoptic. Meteorology, climatology, atmospheric physics, and atmospheric chemistry are subdisciplines of the atmospheric sciences.
Climatology: Study of the frequency and trends of weather systems over years to millennia; it deals with long term average weather patterns. Besides studying the nature of climates on local to global scales, climatologists study natural and human matters that cause climates to change. Some of the related disciplines are atmospheric physics, statistics, chemistry, ecology, geology, oceanography, and glaciology.
Face it; if we’re talking climatology involving atmospheric physics, and using data derived from surface readings, and remote sensing developed by meteorology, and they’re both subdisciplines of the same field, they must be similar, right?
@Gerry– 339.
“The reason that these discussions have become so heated is that no one has produced a mathematical definition of climate.”
Possibly, but the dictionary, national labs, the USDA, GISS, and a variety of science departments at universities seem to share the idea that climate is some sort of average of weather. Definitions of climate.
Definitions that separate weather from climate appear to be rather idiosyncratic.
@Larry on 341– I think an accurate definition of climate sensitivity presupposes an accurate definition of climate. If climate is an ensemble average of weather, then climate sensitivity is a change in any statistical property as a function of a forcing. (Example: change in global average temperature as a function of increased CO2.)
@Larry– 368. Are you commenting on the difference in opinion about LES vs. DNS between gb and Gerry? That’s a very real issue and the argument actually is about whether or not one can get summary statistics like climate sensitivity by running GCM.
As to Gerry and gb’s specific claims: Both are saying some correct things. But, if you boil it down to “So, can we get climate sensitivity.” I don’t know! 🙂
Why not? I don’t know if LES resolves things well enough to provide decent answers on the AGW problem. I’m just not familiar with the literature on GCMs, how LES may be implemented in the models, or what runs have been done to demonstrate accuracy.
Generally speaking, the current state of LES is that it’s a decent tool when used properly — but it doesn’t always work straight out of the box for absolutely everything.
DNS is the only tool that is known to give correct answers in every problem, all the time. However, DNS is too computationally intensive to use in most engineering flows. We certainly can’t apply it to the whole planet.
To get back to the question you asked long ago:
When assessing whether or not a parameterization (like LES) works, scientists and engineers do ask questions like:
“We can’t predict individual storms. So, how can be sure we predict their aggregate effect in an LES model?”
These questions are ordinarily considered good questions, and quite a bit of work is done to try to determine whether or not the models (like LES) capture the aggregate effect of storms (or any feature) sufficiently well to predict average behavior. (By the way, this is the “micro scale” vs “macro scale” question.)
Mark, I agree with you.
>> He is however ignoring factors 3 & 4.
Well, not exactly. Larry did correct me about that, and I agree that understanding how a storm works, how it transfers energy, and what causes it to be big or small is important to climatology.
>> you dont actually know what chaotic behaviour is
You mean, I don’t understand the math behind chaos. I do understand what the chaos concept is.
>> a series of arguments via a peer reviewed journal
which would be an argument about whether weather 🙂 is chaotic or not. That’s a straw man that I don’t need to answer, since my point is not that weather is chaotic, although I’m quite certain it is. (see #270, 281, 294, 298, 306 and the conclusion in #325)
>> You originally implied that the motion of molecules was chaotic, which is fundamentally wrong.
I misspoke, but the error doesn’t affect my point.
>> The chemistry, the IR radiation, has nothing to do with the motion of the puck whatsoever. Different, separable system. This system exhibits chaotic behaviour. So no, your original argument is flawed because there is no chaos involved.
No, the IR radiation is caused by molecular vibration of atoms in the puck. These atoms are part of the puck, and not separable.
>> Molecular motion is chaotic, puck behaviour predictable, therefore averaging overcomes chaos.
I’m talking about the study of things, not the object itself. Hello? Can you possibly switch abstraction levels? You misunderstood my logic completely, I’ll clarify:
premise: Molecular vibration is chaotic
premise: Some people may choose to STUDY those molecular vibrations
premise: Some people may choose to STUDY how to win a hockey game
premise: Studying molecular vibrations does not help win hockey games
Conclusion: Studying the chaos of puck molecular vibrations is not necessary for winning
Similarly, studying the chaos of weather is not necessary for a study of the things that can affect the climate
Dr. Liljegren: This is a definition issue (although to be truthful, I don’t always feel as if everyone is having the same conversation at all; such is common in matters that are by nature abstract e. g. global climate)
As far as the discussion, all, I simply point everyone to my comment at http://www.climateaudit.org/?p=1516#comment161068
The fact is they are both atmospheric sciences. They simply focus their studies on different things, and while both do not cover the same exact areas, they both cover the same thing at their core. That’s why they’re both atmospheric sciences.
You don’t need to know how a certain individual storm is going to act (weather), but you have to know how they act in general (climate). As long as we agree on that, can we “move on”? 😀
This boils down to, and I’ll paraphrase Gunnar, kinda:
You can talk about weather.
You can talk about climate.
You can study weather.
You can study climate.
You don’t have to study weather to study climate, but you have to understand them both, on different levels of abstraction.
So unless somebody wants to argue that both aren’t atmospheric sciences, there is nothing to discuss but the specifics of a specific topic. And I think that’s what Dr. Browning was alluding to in asking for a mathematical formula.
>> climate is some sort of average of weather. Definitions that separate weather from climate appear to be rather.idiosyncratic.
If you were paying close attention, you would see that I agree with this (#345). That doesn’t mean that climatology is the average of meteorology. It means that climatology is the study of those things that can affect the average weather. And weather CANNOT affect the average weather.
Only something external can affect the average weather, and that’s what climatology is about. Note that AGWers claim that an external agent, Man, is changing the climate. Note that anti-AGWers dispute this, and claim that a different external agent is affecting the climate. NO ONE in their right mind thinks that absent any external agent, the weather itself can change the average temperature.
Lucia, if I understand you correctly, a corollary of that would be that if we can’t define climate, we can’t define climate sensitivity. CS is a rather core concept. Are you saying that it doesn’t exist? Are you saying that if you remove all dynamics and noise, we can’t know how much the GMT changes with a doubling of CO2? I don’t think that’s right.
I suppose that example implies that climate is the limit of weather as time approaches infinity.
393 addendum: …with all other forcings held constant.
Sam, thanks, you got me right. We should be able to move on, without any anti-AGW straw men about not being able to predict climate change because we can’t predict the weather.
The issue here isn’t if we can predict anything or not, but how well we can predict it, and what we’re predicting about it, given our understanding of the system and the tools we’re using and their limitations (and correctness and appropriateness and meaningfulness).
Plus that, unless we understand what it is that we’re predicting, how, and why, it’s just a generalized nonspecific issue that’s not defined. What is the discussion about, predicting the weather, or understanding weather well enough to predict climate? I’m thinking clouds. Do we know how clouds act under certain weather conditions well enough to say we know how the system reacts to its various inputs?
But that’s a funny little concept, the weather changing itself. What is the weather but an effect caused by how its input variables act, react and interact? If it rains and the clouds go away, so it gets more humid, and instead of 2 hours of sun between storms there’s 4, things happen. In that sense, weather changes itself all the time, if you’re talking about a self-adjusting system with variable inputs. On another level, it never changes itself because it’s just a collection of variable inputs and they’re the ones that change. Weather is just the output of the inputs. So therefore, weather is a machine that performs a function based upon the inputs. But it’s not a machine that we can really understand, because we don’t know how the inputs will act, react and interact.
Well, except in a general way – I can tell you what kind of general behavior there will be when a high and low front meet, or what will happen when the temperature goes under 32 F for a long enough time around water, and so on. But not where the inputs will be and at what level or for how long.
As I said, weather events are bound by Earths atmospheric variables – temperature, pressure, and water vapor. The variables are subject to the gradients and interactions of each and how they change over different geospatial scales.
Is that Chaotic? Ergodotic? Deterministic? Random? I’m just throwing words out here people, don’t get upset 🙂
Thanks gunnar.
Good one Larry; the limit of weather as time approaches infinity. Anyone know any calculus so we can determine the formula for that limit? 😀 I don’t think we really have a good idea of what the climate sensitivity is, hence a however many hundred or thousand page report (written in what I would tend to call a “cryptic” manner) that takes the IPCC how many years to produce?
I was looking at a hydrogen molecule the other day, and I was thinking how cute the protons are compared to the neutrons. The hardest part was separating it from the water with my tiny pair of pliers.
I don’t know why I said that, it just seemed funny.
Umm….hydrogen don’t have no neutrons.
There are several slippery notions bound into the definition of climate that a definition ought to nail down. First, climate depends on where and when you are looking, so any definition must involve specification of spatial and temporal boundaries. Second, before we can evaluate any claim about climate, we must know specifically what measurable quantities are considered to belong to climate. [Third, any climate prediction must set out a set of explicit criteria that can be used to evaluate the prediction when compared with the climate it is predicting.]
A simple definition of climate is: climate is a record of the evolution of all the variables describing the instantaneous weather within a region for some period of time. This definition has the virtue of being complete enough to leave nothing out, but perhaps it is too broad.
Here’s an attempt at a narrower definition:
Climate(TIME, SPACE) is a set of probability distributions {PDF1, PDF2, …, PDFN} for some set of variables {VAR1, VAR2, …, VARN} over a specified time interval or union of disjoint time intervals TIME within a contiguous earthly volume or union of disjoint volumes SPACE. Each probability distribution PDFX describes the distribution accumulated over the specified TIME of measurements of some particular variable VARX measured within the specified SPACE.
The set of variables with pdfs in Climate must be specified before we have a complete definition. This definition insists that the set of measured variables be finite, but does not otherwise constrain them.
(For example, let’s suppose average surface temperature is one variable in Climate. Then a histogram approximating the pdf for surface temperature can be obtained by measuring the surface temperature (say, hourly) averaged over the ground surface in SPACE.)
This definition also asserts that the time-history of the evolution of the particular variables making up the Climate is not relevant; only the accumulated pdfs matter.
According to this definition, Climate only exists for the past. In the spirit of this definition, I would propose that a ClimatePrediction(TIME, SPACE, ERROR) is a set of probability distributions over TIME within SPACE that may be compared with the set of probability distributions {PDF1, …, PDFN} in Climate(TIME, SPACE) according to a set of error criteria ERROR, by means of which we may judge the accuracy of each probability distribution within the ClimatePrediction.
We can argue over the appropriate error criteria for any given predicted variable. Thus, if I expect a runaway global warming this century, I can make a ClimatePrediction for 21st Century Earth for the surface temperature of a constant 60 degrees Fahrenheit. That prediction can only be judged according to some error criterion. If my error criteria are lax enough and my prediction is close enough, my prediction gets verified.
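The PDF-based definition and its ERROR criterion can be sketched in code. This is a toy illustration only, not anyone's actual procedure: the bin width, the tolerance, and the synthetic "hourly temperature" series are all invented for the example.

```python
from collections import Counter
import random

def climate_pdf(samples, bin_width=1.0):
    """Accumulate a histogram (approximate PDF) from a measurement
    series, discarding time ordering -- matching the idea that only
    the accumulated distribution matters, not the time-history."""
    counts = Counter(round(x / bin_width) for x in samples)
    n = len(samples)
    return {k * bin_width: c / n for k, c in counts.items()}

def verify(prediction, observed, tolerance):
    """Crude ERROR criterion: the largest absolute difference in bin
    probabilities must stay under the tolerance."""
    bins = set(prediction) | set(observed)
    worst = max(abs(prediction.get(b, 0.0) - observed.get(b, 0.0))
                for b in bins)
    return worst <= tolerance

# Hypothetical hourly surface temperatures for one region/period,
# plus a prediction that is biased half a degree warm.
rng = random.Random(1)
obs = [15.0 + rng.gauss(0, 3) for _ in range(10_000)]
pred = [15.5 + rng.gauss(0, 3) for _ in range(10_000)]

p_obs = climate_pdf(obs)
p_pred = climate_pdf(pred)
print(verify(p_pred, p_obs, tolerance=0.05))  # lax criterion: True
```

As the next comment notes, the verdict depends entirely on the chosen tolerance: the same half-degree-warm prediction fails under a strict one.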
Sidebar: This web page gives a history of climatology
It was all about statistics and weather, Gunnar; in the beginning at least. And its current form (the climatology you describe or see) is a pretty new form.
link
It’s still all about statistics! (networking, groups and meetings too) Its a fun club! 😉
>> weather changing itself. What is the weather
Sam, by weather, I mean “chaotic atmospheric circulation”. So, more precisely:
premise: chaotic atmospheric circulation cannot affect the average temperature.
premise: climatology is the study of anything that can affect the average temperature.
conclusion: Climatology is not a study of chaotic atmospheric circulation, although it must include anything that affects energy levels.
@Larry– Yes. At least in some sense, we need some definition of “climate” to define “climate sensitivity”. I’m not suggesting climate sensitivity doesn’t exist — I’m just saying that for two people to agree on the definition of “climate sensitivity”, those two people need to agree on a definition of climate. Later, if they wish to quantify “climate sensitivity”, they need to agree on a metric.
So, you could have a set of three compatible definitions like this.
1) Climate is the average characteristics of weather. (This would mean the average of any quantity that can be averaged. It might even include standard deviations etc. Also, we can argue about what sort of average is applied to any measurable characteristic. )
2) “Climate sensitivity to X” would then be any change in the average climate as a result of a change in an external variable X. This would be a generic idea.
3) By official decree, we could decide the metric used to measure climate sensitivity will be the global average temperature based on measurements at stations “A, B, C…. Z”.
If you don’t agree on (1) it’s very difficult to agree on (2) and nearly impossible to come up with a metric (3).
Later on, even if you agree on 1,2 & 3, there will be difficulties that have to do with usage. Notice, I slot “climate sensitivity to X” under #2?
It appears the IPCC uses the term “climate sensitivity” to describe this specific metric: “the equilibrium change in global mean surface temperature following a doubling of the atmospheric (equivalent) CO2 concentration.”
Meanwhile, you’ll see the state of California and the AMS use “climate sensitivity” this way:
“the equilibrium response of the climate to a change in radiative forcing; for example, a doubling of the carbon dioxide concentration. (EPA)”.
http://www.climatechange.ca.gov/glossary/letter_c.html
“climate sensitivity: 1. The magnitude of a climatic response to a perturbing influence. 2. In mathematical modeling of the climate, the difference between simulations when the magnitude of a given parameter is changed. 3. In the context of global climate change, the equilibrium change in global mean surface temperature following a unit change in radiative forcing.”
The AMS usage fits my (2) above.
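For concreteness, the IPCC-style metric can be computed from the widely used simplified logarithmic CO2 forcing expression, dF = 5.35 ln(C/C0) W/m^2. The sensitivity parameter value below is purely illustrative (that number is exactly what is in dispute in this thread), and the concentrations are just the conventional pre-industrial baseline and its doubling:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified radiative forcing for CO2 in W/m^2:
    dF = 5.35 * ln(C/C0) (the standard logarithmic expression)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_dT(forcing_wm2, lam_k_per_wm2):
    """Equilibrium temperature response dT = lambda * dF."""
    return lam_k_per_wm2 * forcing_wm2

dF = co2_forcing(560.0, 280.0)  # doubling from 280 ppm: ~3.7 W/m^2
print(round(dF, 2))
# lambda (K per W/m^2) is the contested quantity; 0.8 here is
# purely an illustrative placeholder, not an endorsed value.
print(round(equilibrium_dT(dF, 0.8), 1))
```

This separates the relatively uncontroversial piece (the logarithmic forcing for a doubling) from the contested one (the value of lambda), which is the distinction the definitions above are circling around.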
Are you saying that if you remove all dynamics and noise, we can’t know how much the GMT changes with a doubling of CO2? I don’t think that’s right.
I’m afraid I’m not sure I understand what you are asking me… If we remove all dynamics and noise from what? Still, I suspect you are asking me about my comments on the Gerry/gb argument. So, I’ll try to answer.
1) If we retain all dynamics in a “DNS-like” computational model, and remove all “noise” due to roundoff etc., we should absolutely, positively be able to predict how much GMT changes with a doubling of CO2. Arguments between gb and Gerry would evaporate because we’ve captured everything in the computation. (These models aren’t used.)
2) If we remove all dynamics from a computational model to predict climate, I doubt we could predict anything at all about what happens if we double CO2.
3) If we include dynamics but mess them up a little (or even a lot), we may be able to get “good enough” predictions. Or, the predictions may be horrible. It depends on how messed up the dynamics are, and whether or not those matter. Also, we can’t know in advance, and there is nothing in the results that lets us know the answers are wrong. (This is where arguments about LES ensue. I don’t know the answer.)
“LES”, which gb favors, falls in category ‘3’. It messes up the dynamics at least a little.
If I understand Gerry, he thinks LES models don’t capture the aggregate effect of the relevant dynamics well enough. He advances some valid arguments to support his view.
“gb”, thinks LES models do capture the aggregate effect of all relevant dynamics well enough. He advances some valid arguments to support his view.
I can’t choose between the two views, but that’s because I simply don’t know enough about GCMs.
Does that answer the question you intended to ask?
Nobody said climatology was about a study of chaotic atmospheric circulation. But without that, how do you calculate climate sensitivity?
Ball’s in your court, Gunnar. How do you calculate climate sensitivity? Maybe you can write the elusive 2.5C paper that Steve’s been looking for.
Lucia, but the metric is well defined (sort of). There’s a version of climate sensitivity in W/m^2, and one in terms of degrees C. Both are in response to a doubling of CO2 (because the logarithmic form of concentration dependence is uncontroversial). So now that we’ve nailed that down, is climate defined?
>> is a pretty new form.
Yes, but the history of any science is the same. A modern EE curriculum is quite different than what Ben Franklin was doing.
>> Its still all about statistics!
Yes, it is, because the science part hasn’t really started yet. The AGWers are using statistics, because that’s the extent of their logic: “gosh, it’s hot, let’s restrict the right to use energy”. The anti-AGWers merely poke holes in their statistical procedure.
The reality is, despite all this sound and fury, Man has no particular need to study the climate. There is no profit in it. The most advanced sciences are those that are advantageous for man to know.
From what you’ve said so far, I’m not sure you do. You certainly do not understand the scaling structure associated with chaotic systems. That is a pivotal aspect that would prevent you from thinking errors can always be averaged away. Exponential error propagation cannot be averaged away by moving to larger scales. It gets worse. This is a critical, and very simple to understand concept, that entirely refutes your arguments. The only reason the examples you give work is because none of them involve exponential propagation of errors from initial conditions, i.e. none exhibit chaotic behaviour.
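The exponential-error-growth point is easy to demonstrate with the classic logistic map, a standard textbook chaotic system (used here purely as an illustration, not as a claim about weather or climate): a perturbation of 10^-12 in the initial condition reaches order one within a few dozen iterations, and no averaging over longer runs recovers it.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

def divergence(x0, eps, steps):
    """Gap between two trajectories started eps apart."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        gaps.append(abs(a - b))
    return gaps

gaps = divergence(0.3, 1e-12, 60)
# The gap roughly doubles each step (Lyapunov exponent ln 2), so a
# 1e-12 perturbation saturates at order 1 within about 40 steps.
print("start", gaps[0], "max", max(gaps))
```

Contrast this with the random-walk case discussed earlier in the thread, where the gap grows only like the square root of time: here the growth rate itself compounds, which is why moving to larger scales does not buy predictability.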
The document I linked to described chaos in the context of scale behaviour. Climate is merely rescaled weather (in the context of an IPCC like definition). But you still don’t understand the fundamental relationship between scale, chaos, exponential error propagation etc.
If you are quite certain that weather is chaotic, you should publish. It has yet to be demonstrated convincingly, as far as I am aware. Your arguments do not adequately test for chaos. The document I linked describes many of the simple errors people make when inferring chaotic behaviour, and some of the accepted methods of test.
Then your argument is irrelevant. Winning a hockey game cannot be inferred from molecular vibrations because they are an irrelevant, separable system. I can set up a double pendulum at home, it will exhibit chaotic behaviour – and lo and behold – it will not influence any hockey matches. Weather is not separable from climate – the two are closely linked. If there is a scaling relationship between the two, and long term persistence, then weather influences climate. If exponential error propagation exists, then moving to longer scales makes prediction more difficult, not easier.
But Dr. Browning is right. Chaos is irrelevant. Nonchaotic systems can exhibit these problems as much as chaotic systems, so the point is moot anyway.
>> Nobody said climatology was about a study of chaotic atmospheric circulation.
This was implied by folks claiming that we needed to be able to predict the chaos of weather in order to study climate.
>> Balls in your court, Gunnar. How do you calculate climate sensitivity?
You have to know how the atmosphere works.
An excerpt from a paper by Prof. Koutsoyiannis here, which includes valuable insights on scale:
It must be a good paper, because it cites ClimateAudit 😉
RE: #406 – Predict general synoptics out at, say, 168 hours, and you’d be able to credibly argue that a similarly structured GCM would be of value. Fail to do that, and the GCM is essentially rubbish.
392 Gunnar
There are many kinds of ‘average’. If you are referring to an average over, say, 100 million years, that is quite different from an average over a 100-year period. The former is not too meaningful or useful in the context of ‘climate’ as far as mortal humans are concerned.
The latter is more meaningful, but your statement is wrong for that kind of average. The average weather for each century can and will vary from one century to the next without the need to invoke “external somethings”.
What are you using as YOUR averaging period?
Larry:
“Umm… hydrogen don’t have no neutrons.” That’s what makes it funny. Of course the protons are cuter! They have no competition! 🙂 (That I could pull apart water with pliers and look at the nucleus notwithstanding!) Still, it makes me sad that poor little hydrogen doesn’t have a neutron. I have to watch it cry itself to sleep resting in its little bed.
Gunnar:
Bad premise. Average temperature is an artificial construct built off of the artifact of sampling the atmosphere at a certain location at a certain point in the chaotic atmospheric circulation. So what’s going on with the circulation basically is the average temperature.
Bad premise. Climatology is not only things that effect average temperature.
Faulty conclusion. Even if those were true, it wouldn’t preclude studying chaotic atmospheric circulation.
Incorrect assumption. Weather is only temperature.
Incorrect assumption. Climatology only deals with average temperature.
Incorrect assumption. There is such a thing as an average temperature.
Incorrect assumption. Climatology only deals with energy levels.
Atmospheric circulation is not all there is to weather.
Question: If it rains and the clouds go away, so it gets more humid, and instead of 2 hours of sun between storms like the day before there’s now 8, does that affect the weather? If the next day the situation is the same and 25 MPH winds come in, does that affect the weather? If that third day becomes the norm over 5 years and then starts acting like the first day for the next five years, does that affect the climate?
Now if your point is that currently in mainstream climate science they mainly focus on the effect of temperature and the cause of carbon dioxide and the source as humans, I couldn’t disagree with you there. But it seems you are the only one disputing the definitions of these things.
This is getting rather tedious; I don’t know how much clearer I can make my position. Weather is when it rains. Climate is when it rains 200 days a year. 50 degrees is the temperature out the door. A +/- 0.5 anomaly over the average of the min/max, then averaged over a year outside of my door, is climate.
I meant the definitions of climate and weather.
In case nobody noticed, I switched between using chaotic and not using it. It doesn’t matter if it is or isn’t.
This is my last word on the subject, they are clearly defined. You could always go argue with the editors of wikipedia on the subject too! 🙂
http://www.answers.com/topic/weather
http://www.answers.com/topic/climate
http://www.answers.com/meteorology
http://www.answers.com/topic/climatology
Climatology
The scientific study of climate. Climate is the expected mean and variability of the weather conditions for a particular location, season, and time of day. The climate is often described in terms of the mean values of meteorological variables such as temperature, precipitation, wind, humidity, and cloud cover. A complete description also includes the variability of these quantities, and their extreme values. The climate of a region often has regular seasonal and diurnal variations, with the climate for January being very different from that for July at most locations. Climate also exhibits significant year-to-year variability and longer-term changes on both a regional and global basis.
The goals of climatology are to provide a comprehensive description of the Earth’s climate over the range of geographic scales, to understand its features in terms of fundamental physical principles, and to develop models of the Earth’s climate for sensitivity studies and for the prediction of future changes that may result from natural and human causes. See also Climate history; Climate modeling; Climate modification; Weather.
@Larry–
The IPCC metric is fairly well defined. That’s one of the great things about metrics.
I don’t think you can always work backwards from the metric, because any metric could be consistent with more than one definition. So… maybe we don’t need to agree on the definition of climate to get a metric? Hmmm..
Anyway, the IPCC appears to be consistent with climate being “the equilibrium global mean surface temperature”, but it’s also consistent with more general definitions of climate like “the equilibrium value of any meteorological variable”.
The interesting thing (to me) is that the IPCC members must have agreed that, absent addition of CO2, equilibrium exists. This means we should all be arguing about whether or not “equilibrium” is possible and not about “ensemble averaging” vs “time averaging”. 🙂
@Spence_uk: I think Gunnar is the only one discussing or describing weather or climate in terms of chaos.
Lucia,
Or that there’s an equilibrium, if all forcings are held constant, that forms a baseline on which the chaos of weather is superimposed. I believe that’s the assumption. Then, they try to determine how doubling CO2 perturbs that.
412 lucia
It’s hard to believe that anyone could see the ice-core history and believe that there was equilibrium before humans. What are they smoking?
It looks like a bistable system to me…..
Lucia, indeed, that is why I’m not really arguing with anyone else 😉
Indeed, and there is a really nice discussion here (click on preprint) on the consequences of long-term persistence / scaling behaviour on averages, and what they mean (note: no need to invoke chaos!); particularly the intro (pp. 1-3). And doesn’t the “long-range dependence” plot on p. 8 just look so much like a global mean temperature plot? It has the correct scaling behaviour too, which stochastic / Markovian models lack.
lucia, it depends on how you define chaos. I’d tend to call weather unpredictable, but… I know nothing on this subject, so all I can do is find stuff on it.
Sensitive Dependence on Initial Conditions
Ruelle, D. (1991). Chance and chaos. Princeton, NJ: Princeton University Press.
It is conceivable that the presence of Venus, or any other planet, modifies the evolution of the weather, with consequences that we cannot disregard. The evidence is that whether we have rain or not this afternoon depends upon, among many other things, the gravitational influence of Venus a few weeks ago! (p 23)
The short and sweet on it mathematically is, according to the wiki article,
Mathematically, chaos means an aperiodic deterministic behavior which is very sensitive to its initial conditions, i.e., infinitesimal perturbations of boundary conditions for a chaotic dynamic system originate finite variations of the orbit in the phase space.
In lay terms chaotic systems are systems that look random but aren’t. They’re actually deterministic (predictable if you have enough information) systems that are governed by nonlinear dynamics.
Or you may prefer Columbia Encyclopedia:
chaos theory, in mathematics, physics, and other fields, a set of ideas that attempts to reveal structure in aperiodic, unpredictable dynamic systems such as cloud formation or the fluctuation of biological populations. Although chaotic systems obey certain rules that can be described by mathematical equations, chaos theory shows the difficulty of predicting their long-range behavior. In the last half of the 20th cent., theorists in various scientific disciplines began to believe that the type of linear analysis used in classical applied mathematics presumes an orderly periodicity that rarely occurs in nature; in the quest to discover regularities, disorder had been ignored. Thus, chaos theorists have set about constructing deterministic, nonlinear dynamic models that elucidate irregular, unpredictable behavior (see nonlinear dynamics). Some of the early investigators of chaos were the American physicist Mitchell Feigenbaum; the Polish-born mathematician and inventor of fractals (see fractal geometry) Benoit Mandelbrot; the American mathematician James Yorke, who popularized the term chaos; and the American meteorologist Edward Lorenz.
http://www.answers.com/chaos+theory
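The “sensitive dependence on initial conditions” in the definitions quoted above is easy to demonstrate numerically. A minimal sketch (my own illustration, not from any commenter) uses the logistic map, probably the simplest deterministic system with this property:

```python
# Sketch of sensitive dependence on initial conditions using the logistic
# map x_{n+1} = r*x*(1-x) at r = 4, a standard toy chaotic system.
# (Illustration only; the thread's subject is of course the atmosphere.)

def logistic_orbit(x0, r=4.0, n=50):
    """Iterate the logistic map n times and return the whole orbit."""
    orbit = [x0]
    for _ in range(n):
        x0 = r * x0 * (1.0 - x0)
        orbit.append(x0)
    return orbit

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # perturb the initial condition by 1e-10
max_gap = max(abs(x - y) for x, y in zip(a, b))
# Despite being fully deterministic, the two orbits decorrelate completely
# within a few dozen iterations.
```

A perturbation of one part in ten billion grows to an order-one difference, which is exactly the “infinitesimal perturbations … originate finite variations of the orbit” phrase in the Wikipedia definition.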
RE: #278 – Scorecard, Day 1, PM. NWS now slightly backpedaling regarding the occluded front slated to move in from the Pacific tomorrow and Friday, now saying POPS only in the northern part of the CWA. But still sticking to their guns about a dramatic AGW signal superimposed on normal seasonal cooling and dampening, resulting in a killer, unprecedented late season offshore event after Friday. We’ll see.
@Spence_UK 415–
I am familiar with the temperature plots you likely mean, and there is a haunting qualitative similarity there. I reserve judgement about what that might mean… but yes, there is qualitative similarity.
You correctly inferred the “here in comments” I left out when I wrote my final sentence in 412. Still, I think a slight correction may be required. Sam Urbinto may also be discussing weather in terms of chaos.
@Pat — 414
Yep. Equilibrium. Shall we leap together into the deep chasm of Ontology?
Oh, don’t tell me that it depends on what “is” is…
#405 >> From what you’ve said so far, I’m not sure you do.
It doesn’t matter, since my point certainly doesn’t depend on it being chaotic.
>> Weather is not separable from climate – the two are closely linked
Yes, but meteorology is quite separable from climatology. If you were to read all my comments carefully, you would see that we’re talking about different things.
>> that weather is chaotic, you should publish
Why would I care?
>> Then your argument is irrelevant.
No, you just refuse to understand the distinction between the object and the various studies of that object.
>> Winning a hockey game cannot be inferred from molecular vibrations because they are an irrelevant, separable system.
That’s right, that’s my point: STUDYING molecular vibrations of a puck is completely irrelevant to STUDYING how to win a hockey game. The purpose of the studying is different. Yet, the two studies both have the same object, the puck.
>> Weather is not separable from climate – the two are closely linked
Yes weather is like noise on the climate signal, but meteorology is quite distinct from climatology. Studying where and when a storm will go is irrelevant to STUDYING what things would affect the metrics of climate.
#408, agreed.
#409 >> What are you using as YOUR averaging period?
In fact, I’m talking about a geographic average. Time averages are not very interesting.
#410 >> Average temperature is an artificial construct built off of the artifact of sampling the atmosphere at a certain location at a point in the chaotic atmospheric circulation
Actually, that’s another anti-AGW straw man. It is certainly valid to take all grids on the globe and average their temperatures.
418 lucia
Ontology is too rich and abstract for me — I’m just a simple physicist. But the ice-core record looks a lot like a noisy bistable oscillator output to me. It spends a lot more time transitioning from one limit to the other than sitting in equilibrium.
BTW, there was an interesting oscillator-period shortening to 10^5 yr about half a million years ago. Nobody seems to care, but I wonder why?
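Pat’s “noisy bistable oscillator” reading can be mimicked with a toy double-well Langevin model (a sketch under my own assumed parameters, not a claim about the actual climate system). It also illustrates the point in the head of this thread: the mean of a bistable system is a queer statistic.

```python
import random

def double_well_path(n_steps=200000, dt=0.01, sigma=0.7, seed=1):
    """Euler-Maruyama integration of dx = (x - x^3) dt + sigma dW,
    a noisy bistable system with stable states near -1 and +1."""
    rng = random.Random(seed)
    x, path = 1.0, []
    for _ in range(n_steps):
        x += (x - x ** 3) * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        path.append(x)
    return path

path = double_well_path()
mean = sum(path) / len(path)  # near zero, yet x is rarely found near zero
frac_near_zero = sum(1 for x in path if abs(x) < 0.3) / len(path)
frac_near_wells = sum(1 for x in path if abs(abs(x) - 1.0) < 0.5) / len(path)
```

The long-run mean sits near 0 even though the system spends most of its time near ±1 — exactly the “average of a bistable system” problem raised at the top of the page.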
It is a different approach – it is a statistical approach, which some people dismiss – but, as written, statistical questions demand statistical answers; even if we can derive realistic models, they will ultimately be tested against the real world using statistical assessments. Classical statistics when applied to natural processes give the wrong answer. Statistics with the correct scaling behaviour, justified above from first principles (maximum entropy, multiple reservoir concept), and demonstrated to be a better model for climatological systems than classical methods, are a huge step forward. They aren’t actually very new (around 50-60 years old) but have been shunned by climate science up to now. (They have been used extensively and to great success in the hydrological sciences.)
This investigation and understanding needs to happen in parallel with the good work carried out by pure theoretical analysis, such as that conducted by those studying the detail of the models, such as the work of Dr. Browning. To me, these two threads are amongst the most important aspects of climate science today.
The difference is, Sam is talking about chaos in terms of enquiry, trying to gain a better understanding. Gunnar made bold statements about chaos which were major misconceptions. I don’t mind the former; I get whiney about the latter 🙂 I am sensitive to it due to the rubbish disseminated about chaotic systems by pro-AGW proponents (particularly with regard to the idea that large-scale averaging can overcome exponential error growth from initial conditions, a depressingly common mistake…)
BTW I couldn’t agree more with your comments over at Tam*n*’s “Analyse This” thread! If someone is going to do an intro to stats, they should at least get the stats right.
420 Gunnar
You are talking about spatial averaging, not timeaveraging? What on earth would be a sensible use of taking an average of the weather in India, Spain, Iceland, and Russia?
Mainly because the referees would explain to you why you are very wrong and it would save me having to do it.
OK lets try to boil this down to the basics.
As explained in the document I noted above, high variability in fine scale structure leads directly to a power scaling law due to the maximum entropy principle. The fine scale structure directly leads to the large scale variability. This is because fine scale events tend to cluster. Then you get clusters-of-clusters, and clusters-of-clusters-of-clusters. The Joseph Effect. If you don’t model the fine scale variability, the large scale variability ends up wrong.
So, even though small scale events may well drive climate, climate scientists should go and study something else just so they meet your strange definition? Wonderful.
I’m not disputing there are certain things that climate scientists must analyse in addition to the weather, but the weather must be in there as well. Your belief that it can be ignored is part of the electronic engineers’ perspective fallacy outlined in #407.
larry, all I’m saying is that the chaos stuff I’ve read uses weather as an example at times. I’m not trying to discuss the merits of whether “Sensitive Dependence on Initial Conditions” applies to weather or not. I don’t know, and it’s not really relevant to the conversation whether it’s chaotic or not.
gunnar, sometimes (*ahem*) you frustrate me. I’m not using a strawman argument. I’m not even really arguing at all. I’m telling you that the idea of a global mean average temperature is helpful for thinking about some things, but isn’t real. It isn’t a thing. It’s just some derived number. Just because it’s “valid” to average averages of averaged huge areas for an anomaly doesn’t mean the number is meaningful, even if it’s accurate. Get over it.
I fear I’ve made it so Dr. Liljegren thinks I’m either pro or anti AGW theory (idea, hypothesis, whatever). I just don’t know. I have made it clear many times that I don’t care what the truth is, I just want to know it. Arguing about minutia is rather counter productive (what is is)
Pat Keating, if you look at the absolute values of the anomaly for GHCN-ERSST, it also tends to bounce up and down from around .7 or so (as a global mean anomaly). So I agree wholeheartedly!
I’m not really trying to figure anything out, Spence_UK. I’m just saying in chaos theory the case is made that weather is chaotic, so it’s not like it’s anything new.
Lastly, let me state this one more time, if I could.
YOU CAN’T HAVE CLIMATE UNLESS YOU HAVE WEATHER.
And that they’re both subdisciplines of the atmospheric sciences.
@ Spence_UK: Statistics are boring and tedious, but necessary. I’m tempted to write a blog post with the title:
“Why people who don’t know what they are doing should never do statistical analyses based on fewer than 30 independent samples.”
But who would read it?
On averaging: high pass filters are only useful when the stuff you filter out is noise vis-à-vis the problem studied. I’d high pass filter with abandon if I knew my flow was driven by a blower whose power drifts. Or if I knew the barometric pressure was dropping or rising during an experiment to study something unrelated to the current atmospheric conditions. But I’d also try to take fiduciary measurements to confirm problems.
I feel sympathy with those who study things out in the field because you often have so little control of the experiment.
On the other hand, many don’t collect their own data and just fish stuff out of databases. So that just makes any sympathy for sloppy data handling vanish.
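For what lucia’s hypothetical post title is worth, the fewer-than-30-samples problem is easy to exhibit: the sample standard deviation estimated from a handful of points is wildly variable compared with a larger sample. A quick sketch (my own toy, with assumed trial counts):

```python
import random

def spread_of_sample_std(n, trials=2000, seed=0):
    """Min and max of the sample standard deviation over many draws of
    n independent standard-normal values (the true sigma is 1)."""
    rng = random.Random(seed)
    stds = []
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(xs) / n
        stds.append((sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5)
    return min(stds), max(stds)

lo5, hi5 = spread_of_sample_std(5)        # n = 5: estimates all over the place
lo100, hi100 = spread_of_sample_std(100)  # n = 100: tightly clustered near 1
```

With five points, the estimated spread of the data can come out a small fraction of, or well above, the true value; with a hundred points it hugs the truth.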
@Larry in 419– Precisely.
@ Sam Urbinto 425
Don’t worry. I can’t begin to guess whether you are pro- or anti-AGW based on what I’ve read. I’m fine with either point of view and, like you, don’t know what the truth is. Over time, I’m likely to get everyone angry.
This is probably not important, but if you feel the need to use “dr” it’s probably wise to use “dr. lucia”. There is another “Dr. Liljegren”. Probably no one on this thread has heard of Jim, but he used to actually work on projects in the area of climate change. (Not GCMs!)
So, in the off chance a random reader might get confused, “Lucia” is better!
>> even though small scale events may well drive climate, climate scientists should go and study something else
As confirmed in this thread, climate is the average global temperature (never heard anyone use any other parameter). A turbulent air flow event, like whether a storm goes north or south, cannot affect the geographically averaged temperature. Nobody in this field is postulating that the average temperature has gone up because of small scale turbulence effects. They are all looking for an external cause. If you believe that everyone is barking up the wrong tree, you should publish.
>> just so they meet your strange definition? Wonderful.
It’s the de facto definition everyone is using.
>> Im not disputing there are certain things that climate scientists must analyse in addition to the weather, but the weather must be in there as well.
Certainly, they must understand the weather, especially as it relates to how energy is transferred around. More importantly, they must understand how the atmosphere works. They do not need to study meteorology and predict the weather. The original question was “Why is climatology not like a really long weather forecast?”. I think I have now answered that ad nauseam.
>> Your belief that it can be ignored is part of the electronic engineers’ perspective fallacy outlined in #407.
Except that I agree with the quoted text in 407. I have said the exact same thing in this blog.
>> Just because it’s “valid” to average averages of averaged huge areas for an anomaly doesn’t mean the number is meaningful
Sure it’s meaningful. Temperature is a state variable in the thermo equations, and represents the “average kinetic heat content of the planetary atmosphere”.
>> YOU CAN’T HAVE CLIMATE UNLESS YOU HAVE WEATHER.
A meaningless straw man.
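Gunnar’s “average all grids on the globe” claim at least has a standard recipe behind it: on a latitude/longitude grid, each cell must be weighted by the cosine of its latitude, since cells shrink toward the poles. A minimal sketch (the grid layout and toy numbers are mine, purely illustrative):

```python
import math

def global_mean(grid):
    """Area-weighted mean of a lat/lon field. `grid` maps latitude in
    degrees to the list of cell values around that latitude circle;
    each value is weighted by cos(latitude), the relative cell area."""
    num = den = 0.0
    for lat, row in grid.items():
        w = math.cos(math.radians(lat))
        for val in row:
            num += w * val
            den += w
    return num / den

# Hypothetical toy field: warm equator, cold near-pole band
toy = {0.0: [30.0, 30.0], 60.0: [0.0, 0.0], 89.0: [-30.0, -30.0]}
gm = global_mean(toy)  # the naive unweighted mean of these values is 0
```

Whether the resulting scalar is physically meaningful is exactly the dispute in this thread; the arithmetic itself is uncontroversial.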
No need. Prof. Koutsoyiannis has already published this viewpoint. From the peer-reviewed articles I linked to above, one of which you giggled at. Worth noting one of the articles I linked to above was rejected by GRL, but accepted by a hydrological journal. Shows the reluctance to accept these concepts in climate science circles. I suspect the story they tell ties in well, from a stats viewpoint, to Dr Browning’s work. I can’t tell for sure, but there seem to be certain similarities; and both viewpoints appear to be under censorship from the climate science peer review process. Someone doesn’t seem to want this viewpoint in climate science.
Then read the papers. The direct consequence of this is that the fine scale structure has large scale structure consequences. The postulate is that the causal mechanism of the event (e.g. storm, rainfall) exhibits self-similarity. If you want to argue that an additional, anthropogenic influence is dominating behaviour above this self-similarity, you need to test the natural clustering behaviour of events, which is a direct consequence of the fine detail of weather, not from the broad brush of climate.
Someday, Spence_UK, I have to ask you how to compute the Hurst coefficient.
For now, I sit back and enjoy reading everyone ( yes GUNNAR even you!)
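Since the question of computing the Hurst coefficient came up: the classic estimator is Hurst’s rescaled-range (R/S) analysis from the hydrology literature Spence_UK cites. A rough sketch (the block sizes and least-squares fit are my own choices; serious use needs finite-sample bias corrections such as Anis-Lloyd):

```python
import math
import random

def rs_hurst(series, min_chunk=8):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent:
    the slope of log(R/S) versus log(n) over doubling block sizes."""
    n = len(series)
    log_n, log_rs = [], []
    size = min_chunk
    while size <= n // 2:
        rs_vals = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            m = sum(chunk) / size
            dev = [x - m for x in chunk]
            cum, c = [], 0.0
            for d in dev:               # cumulative departures from block mean
                c += d
                cum.append(c)
            r = max(cum) - min(cum)                        # range
            s = math.sqrt(sum(d * d for d in dev) / size)  # std dev
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(math.log(size))
        log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
        size *= 2
    k = len(log_n)  # least-squares slope of the log-log points
    mx, my = sum(log_n) / k, sum(log_rs) / k
    num = sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs))
    den = sum((x - mx) ** 2 for x in log_n)
    return num / den

rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
h_white = rs_hurst(white)  # uncorrelated noise: H near 0.5 (biased a bit high)

walk, s = [], 0.0
for x in white:            # integrate the noise into a random walk
    s += x
    walk.append(s)
h_walk = rs_hurst(walk)    # a strongly persistent series scores much higher
```

The interesting climatological question is where observed series fall between these two extremes, which is what the long-range-dependence papers linked above address.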
Those philosophizing on the nature of the difference between weather and climate should reread Neil Haven’s #3. The question is not whether global averages are meaningful, but how meaningful. If climate (measured by whatever set of parameters you like) has a central tendency, then a mean is very meaningful. The more complex the equilibrium state, the less meaningful a single parameter is. If weather is governed by a strange attractor, not a single point attractor, then so is climate to some degree. Some have remarked on a tendency toward long-term global bistability, i.e. there is no central tendency; there are two tendencies. That is in the time domain, i.e. a long-time average may do a poor job of describing the climate system’s state at any given time.
How about in the spatial domain, since we are talking about the meaningfulness of global spatial averages? Well, the same multistable behavior observed in the time domain appears in the spatial domain as well, whether you are talking short-term weather or long-term climate. How comfortable are you with a single scalar parameter as a descriptor when there is persistent regionality? Of course, the global circulation records do not go back that far, but there is evidence emerging that ocean currents, for example, may be intermittent, or reversible, and that atmospheric circulation may snap in and out of certain configurations and persist that way for quite some time. What is “stable” over secular scales may not be stable over millennial scales. I don’t see how one can argue that chaos exists at weather scales, but not climatic scales. Chaos is not temporal, it is spatiotemporal. And it is not necessarily featureless – not if the attractors are strange.
Here is where the artificial distinction between weather and climate really breaks down. Is ENSO/PDO/NAO/THC “stable”? Is the jet stream “weather” or “climate”? Given that these emergent features are post hoc classifications of mesoscale phenomena, maybe other characterizations would be possible at other times. We don’t know. We haven’t been observing the system in enough detail long enough to know. Some features are convincingly stable, like polar jets and the ITCZ; others less so.
I don’t like the dichotomy of weather vs. climate, but of course there are good practical reasons why we are taught to think that way. I just think that statistical meteoclimatology is in its infancy and we should keep open minds. Remember that an average is not always meaningful, even if it is calculable. Fluid systems are more complex than that.
My intent here is not to correct anyone (especially not lucia or Gerry), but merely to highlight that there is some truth in all these comments. Let’s try to be civil and keep a high SNR on this thread. Save the trolling self-indulgence for “unthreaded”.
RE: #278 – Day 2, AM. Here is a major update from the NWS, followed immediately by my comments:
DISCUSSION…OFFSHORE FLOW IS WANING OVER THE AREA AS SURFACE
HIGH PRESSURE CENTERED NEAR YELLOWSTONE NATIONAL PARK CONTINUES
MOVING EAST. ONSHORE FLOW WILL SLOWLY STRENGTHEN TODAY AS HIGH
PRESSURE OVER THE EASTERN PACIFIC BUILDS OFF THE CALIFORNIA COAST.
THIS WILL BRING AN END TO THE WARM WEATHER OF THE PAST COUPLE OF
DAYS. AN UPPER TROUGH IS MOVING INTO THE PACIFIC NORTHWEST BUT RAIN
WILL REMAIN WELL NORTH OF OUR DISTRICT. BY TONIGHT MODELS SHOW AN
INVERSION LAYER DEVELOPING AT AROUND 850 MB MEANING THE RETURN OF
SOME COASTAL NIGHT AND MORNING FOG AND LOW CLOUDS. THIS PATTERN
CONTINUES THROUGH SATURDAY NIGHT WITH CONTINUED COOLER TEMPERATURES.
BY SUNDAY A STRONGER UPPER TROUGH WILL MOVE THROUGH THE PACIFIC
NORTHWEST AND NORTHERN CALIFORNIA WITH THE TAIL END OF THE SYSTEM
BRINGING A CHANCE OF RAIN FROM THE SAN FRANCISCO BAY AREA NORTH.
LONG RANGE MODELS SHOW THE UPPER TROUGH DEEPENING OVER THE GREAT
BASIN EARLY NEXT WEEK WHILE SURFACE HIGH PRESSURE BUILDS OVER THE
PACIFIC NORTHWEST BEHIND THE SYSTEM. THIS WILL BRING A COLD AND DRY
OFFSHORE FLOW OVER OUR DISTRICT WITH POTENTIALLY BREEZY CONDITIONS IN
THE HILLS. THE WINDS WILL DIMINISH BY THE MIDDLE OF NEXT WEEK
ALLOWING FOR COLD MINIMUM TEMPERATURES…BUT AT THIS TIME IT LOOKS
LIKE THE COLDEST AIR STAYS EAST OF THE AREA SO DO NOT EXPECT ANY
FREEZING NIGHTTIME TEMPERATURES. NO RAIN IN SIGHT FOR ALL OF NEXT
WEEK.
=========================
So, within 12 hours we went from “gonna have wicked Santa Anas nearly statewide with above normal temps, due to killer AGW signal superimposed on climatological norms” to “maybe rain in part of this CWA this weekend, followed by dump of MacKenzie Delta (cP) air mass, we hope it doesn’t damage the crops!” I may not continue this scorecard much longer. My point may have already been made.
By the way, alluding to some of the BCP related threads, this is a classic scenario for formation of a Tonopah Low!
All,
The title of these two threads (#1 and #2) is Exponential Growth in Physical Systems. The main point of the original thread (#1) was that the basic inviscid system (essentially the inviscid compressible Navier-Stokes equations with only gravitational and Coriolis forces included) numerically approximated in all weather prediction and climate models has mathematical problems in the continuum system that prevent any meaningful numerical convergence. In the case of the above system combined with the hydrostatic assumption (the majority of global models are based on the hydrostatic system), the continuum system is ill-posed for the initial value problem and this has been shown mathematically (reference cited) and illustrated numerically (see plots in comment #167 in the original thread). And even when the hydrostatic approximation is not made, the system has areas of extremely fast exponential growth in the presence of vertical shear, e.g. near jet streams. This has also been shown mathematically (same reference)
and illustrated numerically (reference cited; it can also be illustrated using a model in #167). The addition of any other forcings (solar or otherwise) will not help these underlying problems with the continuum system. And though unphysically large dissipation terms (eddy viscosity or other such gimmicks) can hide the problem, as the mesh size is reduced and the dissipation becomes closer to that of the real atmosphere, the numerical problems start to appear (they have already shown up in versions of NCAR’s fine scale hydrostatic and non-hydrostatic models – reference cited). These problems are serious, as the tutorials on exponential growth and ill-posedness on thread #2 have shown using simple examples.
Thus this thread is not about chaotic systems (maybe Steve M. wants to start a different thread on this topic, but until someone can prove or disprove that the continuum system that describes the climate is or is not chaotic, I think this discussion is meaningless), but about the fact that the current weather and climate models based on the Navier-Stokes system used in most areas of fluid dynamics have serious problems that are being hidden by artificial means. In particular, the unphysically large dissipation can make the models appear smooth, but requires unphysical forcings to keep the spatial spectrum from decaying too rapidly. And this discrepancy is more severe in the climate models that necessarily use the largest viscosity to overcome the resolution that is insufficient to resolve the scales of motion in the continuum solution.
The conclusion is that if a numerical method cannot converge to the correct solution in the simple case where only the gravitation and Coriolis terms are included, then it will not do so in cases with additional forcings.
And the numerical problems occur within a matter of hours (see comment #167 and the cited reference by Lu), hidden only by excessive dissipation if computed for longer time periods (as in climate models). Thus climate models are necessarily not realistic, and are closer to the solution of a heat equation than to the true solution of the NS equations.
Jerry
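Jerry’s two central claims here, that tiny errors grow exponentially fast and that large artificial dissipation can hide the growth while changing the solution, can be caricatured with the familiar Lorenz-63 system. This is only a loose analogy of my own: the ill-posedness Jerry describes is a property of the hydrostatic continuum system, not of this 3-variable toy.

```python
def lorenz_step(state, dt=0.01, damping=0.0, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz-63, with an optional extra linear
    damping term standing in (very loosely) for artificial dissipation."""
    x, y, z = state
    dx = sigma * (y - x) - damping * x
    dy = x * (rho - z) - y - damping * y
    dz = x * y - beta * z - damping * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def max_divergence(p, q, steps, damping=0.0):
    """Largest componentwise gap ever reached between two runs."""
    worst = 0.0
    for _ in range(steps):
        p = lorenz_step(p, damping=damping)
        q = lorenz_step(q, damping=damping)
        worst = max(worst, max(abs(a - b) for a, b in zip(p, q)))
    return worst

# Undamped: a 1e-6 perturbation of the initial state grows to order one
gap = max_divergence((1.0, 1.0, 1.0), (1.0 + 1e-6, 1.0, 1.0), 3000)
# Heavily damped: the runs stay together, but only because the damping has
# collapsed the dynamics onto something that is no longer the same system
gap_damped = max_divergence((1.0, 1.0, 1.0), (1.0 + 1e-6, 1.0, 1.0), 3000,
                            damping=30.0)
```

The damped pair looks beautifully well-behaved, which is the trap: smoothness of the computed solution says nothing about fidelity to the original equations.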
Gunnar.
Lucia and I have both discussed our scientific backgrounds. Can you describe yours? Thank you.
Jerry
Steve Sadlov (#432),
The errors in the global forecast models appear very quickly (see Sylvie Gravel’s manuscript on this thread) and are only overcome by updating with new observations every 6-12 hours. On the west coast the obs are less dense and thus the forecast is less accurate (Sylvie separated out the different geographical areas over the US). Your results are entirely consistent with the mathematical analysis discussed above and Sylvie’s results. Thank you for monitoring the forecasts. 🙂
Jerry
434, I was able to get that from other comments, but what I think the (largely engineering) crowd here is having trouble with is translating that into terms that have something to do with the calculation of climate sensitivity (which is the goal here). If you don’t make that connection, this is all a lot of abstract gobbledygook. So to bridge that gap, please confirm that the problems in the numerical models that you’re describing make the calculation of climate feedback (and thus climate sensitivity) inaccurate. Is that a fair statement?
Bender (#431),
Well said. One of the problems with blogs is that when people like Steve M. are very open, anyone can write a comment no matter what his background or biases. That obviously has its pluses and minuses. But I still think it is far better than the censored version. (It can be frustrating at times because one does not know who is on the other end of the line.) 🙂
Jerry
One of the things I like about CA is the number of well qualified contributors that post here, both for AGW and against. I wish that SM would add to the website a way to go to the credentials of these well qualified individuals and see their backgrounds and expertise. I am not as well qualified as others to comment on various issues, but I have a B.S. in Mathematics and did graduate work both in Applied Mathematics and Computer Science. My early career was space-related, and I have continued interests in the effects of solar activity on our weather and computer modeling of physical systems. My speciality over later years was database administration and data integrity. After I finish upgrading my personal computer with some compilers, I hope to be able to contribute and no longer just be an avid reader of the site. So please continue to identify your backgrounds because it adds to the credibility of the site.
Larry (#437),
The thread topic is not climate feedback or sensitivity, but I will try to describe the impact in a more physical manner.
In a numerical model based on the compressible NavierStokes equations (a standard continuum system used to describe turbulence, but the discussion also applies to other systems), there is typically insufficient computational power to compute an accurate numerical solution of the system with the correct Reynolds number (or kinematic viscosity). The standard procedure in this case has been to use a dissipation of the same type with a larger coefficient so that the numerical model can resolve all of the scales of motion that appear in the continuum solution of the same continuum system, but with the larger viscosity. While this does produce a computable solution, the solution is not the same as the solution of the original problem. The obvious question then is does the computed solution have any relationship to the true solution. For example, if you are computing the stresses on an aircraft wing will they be the same in both cases and the obvious answer is no. If the solution is convergent, as one decreases the mesh size (and the viscosity), hopefully the stress will become more accurate. But if there is a problem such as described above, then the mesh size cannot be reduced to produce an accurate solution and
the numerical solution will not be close to the true solution. Keeping the viscosity orders of magnitude artificially large means that any real forcings that are used will not produce the same spatial spectrum (stresses) as the true solution so must be tweaked to produce the same spatial spectrum as for the correct solution. Thus the feedback or sensitivity of the computed solution is nowhere near those of the true solution.
I recommend looking at Heinz and my manuscript in Math Comp cited earlier to see some numerical examples of 2D turbulence computations. There are
also some 3D examples by Heinz and his other students.
Hope this helps.
Jerry
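Jerry’s point that a computable solution obtained with artificially large viscosity is not the solution of the original problem shows up even in 1-D Burgers flow. A finite-difference sketch (the grid, time step, and “steepest gradient” stress proxy are my own illustrative choices, not Jerry’s examples):

```python
import math

def burgers_max_gradient(nu, n=256, steps=1600, dt=0.0005):
    """Explicit FTCS integration of u_t + u u_x = nu u_xx on a periodic
    domain, starting from a smooth sine wave; returns the steepest final
    gradient as a crude proxy for the small-scale 'stresses'."""
    dx = 2.0 * math.pi / n
    u = [math.sin(i * dx) for i in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            um, up = u[i - 1], u[(i + 1) % n]
            adv = u[i] * (up - um) / (2.0 * dx)             # advection u u_x
            diff = nu * (up - 2.0 * u[i] + um) / (dx * dx)  # viscous term
            new.append(u[i] + dt * (diff - adv))
        u = new
    return max(abs(u[(i + 1) % n] - u[i]) / dx for i in range(n))

g_low = burgers_max_gradient(0.05)   # closer to the "physical" viscosity
g_high = burgers_max_gradient(0.5)   # artificially large dissipation
# The over-dissipated run is smooth and easily computable, but its front is
# far shallower: the "stresses" are simply not those of the original problem.
```

Both runs finish without any numerical trouble; only by comparing them does it become clear that the heavily dissipated one answers a different question, which is the wing-stress point above in miniature.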
RE 438. Dr. Browning
Your threads are like being back in school. And with Lucia, gb, bender, dan hughes and others the SNR
is very high. So my head hurts, and that’s a good thing.
Once in a while we need a break. Which is why I love Sadlovian interludes about the forecast.
RE: #434 and 436 – “And the numerical problems occur within a matter of hours”
Yep!
Larry T (#439),
I think you will find that the honest contributors on this site will have no problem citing their backgrounds or credentials or even their true identities. It is the ones that cite none of these that I think have typically been problems. 🙂
Jerry
443 Jerry, 439 Larry
While I largely agree with your comments on qualifications, I think it should be noted that the less-qualified posters may be reluctant to joust with those with more paper certificates. While that may have some advantages in certain cases, it would also be a shame.
I suggest that we should let the brilliance and experience of the best come shining through their postings, rather than be defined by certificates.
On the other hand, a place where one could look up a poster, if one wished, might be useful.
(It might be noted that less-than-honest contributors, if there are any, might possibly lie about their quals, anyway.)
steven mosher (#441),
Well being back in school could be a good thing or a bad thing. 🙂
I prefer not to be too technical because then one loses some of the audience. On the other hand, when you use simple mathematical examples and numerical demonstrations, the SNR drops considerably. My hope is that between the two approaches, there will be a better understanding
of the care that must be taken when using numerical models to predict
physical reality and the problems with weather and climate models.
I will try to answer any reasonable question by those seeking knowledge, and will defend any of my statements with rigorous mathematics if necessary.
Jerry
steven mosher (#441),
That should be rises not drops. 🙂
Jerry
I’m not trying to hide anything, I just didn’t think it was important. I have a MSChE, and PEs in ChE, CsE, and EE. I’m familiar with the nature of the Navier-Stokes problem, but used to talking about it in different terms. But I don’t think this has anything to do with my issue; I’m just trying to understand the ramifications of this for the climate issue.
With this level of discussion, I’m not sure that I could say whether this makes the climate sensitivity estimates close, way off, or garbage. Sorry if I seem fixated on that, but as far as I can see, that’s the connection to CA. Or is there a connection to another climate issue? I’m not trying to be negative, I’m just trying to make sure I understand the downstream consequences of this.
Pat Keating (#444),
Agreed, and that has been Steve M.’s open policy. But then the SNR drops
and it is more difficult for the general audience to filter out the facts.
I do not mind jousting with anyone as long as the arguments are based on logic and solid science. It is the hot air arguments and verbiage not based on anything that drives me up the walls. 🙂
Jerry
Larry (#447),
You have been asking reasonable questions and I have been trying to answer them in the best way I can. You have not been one of the problems and so I find it no surprise that you have been so open.
Keep asking questions if I do not make sense.
Jerry
@Jerry– 434 The addition of any other forcings (solar or otherwise) will not help these underlying problems with the continuum system. And though unphysically large dissipation terms (eddy viscosity or other such gimmicks) can hide the problem, as the mesh size is reduced and the dissipation becomes closer to that of the real atmosphere, the numerical problems start to appear (they have already shown up in versions of NCAR's fine-scale hydrostatic and non-hydrostatic models – reference cited).
I haven’t read your analysis of the physical problem, and so can’t comment on that portion. But I will make a few comments, which assume your analysis is correct. I am doing this because, by assuming your analysis is correct, I think I may be able to at least comment on the numerical issues. (Then people can argue about your analysis – and I’m afraid if I’m not going to pore over a stability analysis, few in comments will.)
So– assuming Gerry’s analysis is correct:
With regard to gb's claim that LES would be good enough: No. LES would fail.
a) If the solution is ill-posed (Jerry's analysis), which causes a solution to blow up, and
b) the physical situation where the system is ill-posed happens with great regularity in the atmosphere (Jerry's analysis applies to important atmospheric structures), and
c) the ill-posedness can only be fixed up by adding artificial subgrid viscosity, and that subgrid viscosity vanishes in the limit of zero grid size (as it would if GCMs use LES as opposed to Reynolds-decomposition-type approaches), then
d) predictions will worsen as the grid size becomes smaller.
Numerical models whose predictive ability worsens as the grid size gets smaller would generally be considered big disappointments. Very big disappointments. As in "flustered graduate student blushes if an audience member points this sort of deficiency out during a public presentation". 🙂
Since
* I have not delved into the stability analysis (though I know it's published in the peer-reviewed literature), and
* I don't actually know what the guts of a GCM contains (maybe they are LES-like as gb says here in comments, maybe they are RANS, maybe they are something entirely different),
I can't really conclude anything specific about GCMs based on this. But, I do agree with Gerry when he observes
“… if a numerical method cannot converge to the correct solution in the simple case where only the gravitation and Coriolis terms are included, then it will not do so in cases with additional forcings.”
With one caveat: If his ill-posedness problem arises only due to the interaction of gravity and Coriolis terms, then the problem would only be important in cases where gravitation and Coriolis terms are included.
Gravity matters in many flows, but the only major class of flows where the Coriolis terms matter are those with very, very, very large length scales… like atmospheric flows. (If your physics professor ever told you the Coriolis force dictates the direction of swirl in your toilet, he was wrong. My physics professor told me that freshman year. I was disabused of the quaint notion when I took my first real fluids course. The Coriolis force is neglected at that scale; it is important when explaining the motion of hurricanes, which are rather larger in scale than toilets.)
So, for example, if Gerry is correct in his stability analysis, the ill-posedness he describes would not cause the slightest difficulty when modeling flow past airfoils, pipe flow, or many engineering flows. (Unless you build a really big toilet.)
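The scale argument here can be made quantitative with the Rossby number, Ro = U/(fL): rotation matters only when Ro is of order one or smaller. A back-of-envelope sketch (the speeds and length scales below are just illustrative guesses, not measurements):

```python
# Rough Rossby numbers (Ro = U / (f * L)): Coriolis matters only when Ro <~ 1.
import math

f = 2 * 7.2921e-5 * math.sin(math.radians(45))  # Coriolis parameter at 45 deg latitude, 1/s

def rossby(U, L):
    """U: flow speed (m/s), L: length scale (m)."""
    return U / (f * L)

ro_toilet = rossby(U=1.0, L=0.3)        # draining toilet: ~1 m/s over ~0.3 m
ro_hurricane = rossby(U=50.0, L=500e3)  # hurricane: ~50 m/s over ~500 km

print(f"toilet:    Ro ~ {ro_toilet:.0f}")     # enormous -> Coriolis negligible
print(f"hurricane: Ro ~ {ro_hurricane:.2f}")  # order one -> Coriolis essential
```

The toilet comes out tens of thousands of times too fast and too small for rotation to matter, while the hurricane sits right at Ro ~ 1.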
(PS. Stephen Mosher. I’m learning R. Go take a look at the histograms I posted on unthreaded. Any help with R would be appreciated.)
Larry (#447),
I think most climate models use a mesh size ~ 100 km. They are not able to resolve any mesoscale (100 km length scale) features (hurricane details, squall lines, fronts, thunderstorms, etc.) with a finite difference mesh
of that size even if the model forcings were perfect (which they are not).
In order to resolve features on the next to the largest scale (1000 km),
the model would have to use a mesh size of 10 km or less. At that point the numerical problems cited above occur, i.e. the numerical solution will become nonsense. So climate models do not resolve the very basic features of the atmosphere and use parameterizations (artificial forcings)
in an attempt to account for those features. It has been shown in both a global weather prediction model (Sylvie Gravel's manuscript) and in NCAR's own climate model (David Williamson's manuscript) that these forcings lead to a solution that deviates from reality in a matter of hours. You can tune these forcings to stay closer to reality over a few days (see discussion of the boundary layer approximation in Sylvie's manuscript), but that does not mean the numerical solution is any closer to reality, especially over longer time periods. In Sylvie's manuscript the errors in the boundary layer parameterization propagated vertically and caused O(1) errors in the solution within 36 hours. The only reason that the model continued to stay on track was the assimilation of new wind obs every 6-12 hours
(reference by BK). If a model is not even close to reality, there is no scientific case that it is useful for anything.
Jerry
Thanks, Jerry. That's a good answer. What that means is that the models will miss a lot of the features that cause a lot of heat transfer to occur. It also means that they're completely unable to say anything about the behavior of clouds. I don't like claims like "useless", but I can't see how a circulation model with a 100 km resolution has a ghost of a chance of telling us anything useful about climate feedback mechanisms, when the entire mechanism is contained within one cell.
lucia (#450),
Just a few technical comments. Stability analysis for numerical analysts
investigating time dependent ODE’s and PDE’s usually refers to the stability of numerical approximations of the systems. The analysis
of posedness is of the continuum PDE system itself. The continuum system
must be well posed before any attempts are made at numerical approximations. Then the finite difference approximation must be checked for accuracy and stability to ensure convergence (Lax’s equivalence theorem). If open boundaries are present, a more complicated posedness and stability analysis are required (Kreiss).
The ill-posedness of the hydrostatic system and the rapid growth of the non-hydrostatic system are caused by the gravitational term, not the Coriolis term.
For those that want to see demonstrations of both problems in the corresponding systems, they can look at the plots mentioned above. Both use multiple decreasing resolutions to display the problems, i.e. convergence tests that should be run on all numerical models.
Jerry
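For readers who want a feel for why a convergence test exposes ill-posedness, here is a toy sketch. This is not Jerry's hydrostatic system; it is the textbook ill-posed model problem, the backward heat equation u_t = -u_xx, where mode k grows like exp(k²t). A grid with spacing h resolves modes up to k ~ π/h, so refining the mesh makes a round-off-sized perturbation blow up faster — the opposite of convergence:

```python
# Toy illustration of convergence testing on an ill-posed problem: for the
# backward heat equation, the finer the grid, the larger the highest
# resolvable wavenumber k, and the faster a tiny (1e-10) perturbation grows.
# Refinement makes things WORSE -- the signature of continuum ill-posedness,
# as opposed to a merely unstable difference scheme.
import math

T = 0.01      # short integration time
eps = 1e-10   # round-off-sized perturbation in the highest mode

for n in (16, 32, 64):          # successively refined grids on [0, 1]
    h = 1.0 / n
    k_max = math.pi / h         # highest wavenumber the grid resolves
    growth = eps * math.exp(k_max ** 2 * T)
    print(f"n={n:3d}  h={h:.4f}  perturbation after T: {growth:.3e}")
```

Each halving of the mesh size multiplies the exponent by four, so no amount of accuracy in the scheme rescues the calculation.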
Larry (#452),
Exactly. 🙂
As I stated, I will keep answering questions until the arguments make sense in a way that is understandable in your field of expertise.
Jerry
453 Jerry
I ran the GISS Model II with our infamous carboniferous GHG at 1 ppm instead of 280 (or 300). It ran for about 30 years, going to a 'new' ice age (12°C lower, 40% of the water vapor, relative to initial/present-day), before going unstable.
Any ideas on why would it go unstable after running OK for 30 years?
>> You are talking about spatial averaging, not timeaveraging? What on earth would be a sensible use of taking an average [of the entire globe]
Gosh, I don’t know, how about a representation of the energy level of earth surface atmosphere! What do you think the satellite data is? There really is no logical reason for a time average in this context. When you check your child’s temperature, do you calculate a 30 day average? A global geographic average removes the effect of weather. As I’ve said, there is no noise.
Everyone is dealing with geographic averages, and only a few really desperate anti-AGW types are resorting to the straw man argument that the earth does not have an average temperature, i.e. the "average kinetic heat content of the planetary atmosphere". I guess you're saying that if we had a full-blown ice age going, and people asked a PhD "Dr., how long will the ice age last?" the PhD would say, "What ice age? The geographically averaged temperature metric is not meaningful."
Gunnar, the enthalpy of the atmosphere is zip compared to the enthalpy of the oceans. I presume that was what you were driving at with the “energy level”?
456 Gunnar
It seems that you are either being argumentatively evasive or have forgotten what the topic was. The topic was, to quote: “climate is the sort of average of weather” (#392).
Considering climate to be a spatial average of weather doesn't make sense to me. Either you were really addressing a time average, as it appeared at the time (excuse the pun), or you might consider explaining how weather averaged spatially over the whole globe can become, say, a Climate, normally used to contrast different kinds of expected weather.
Stability analysis for numerical analysts investigating time dependent ODEs and PDEs usually refers to the stability of numerical approximations of the systems. The analysis of posedness is of the continuum PDE system itself.
Yes – I was under the impression the ill-posed issue related to the continuum model. Similar arguments erupt over developing parameterizations for the multiphase flow equations. Ill-posedness in the continuum equations is a big no-no. Big.
I suspect your comment is prompted by the fact that I brought up gb's discussions and claims about LES. I brought this up because LES has the particular feature that the "eddy viscosity" (if you wish to call the model for the Leonard stresses that) should automatically vanish as the grid size drops. That's the way the method works.
So, if the eddy viscosity acts as a patch to cover ill-posedness, then LES may work at large grid sizes and then stop working as the grid sizes drop because, to use a metaphor, the band-aid gets taken off. Worse results at smaller grid sizes is the opposite of what LES is supposed to do.
But basically: If you are correct about ill-posedness and gb is correct that the GCMs use LES, then the modelers will soon be scratching their heads when they reduce the grid size and things get worse, not better.
But, I don't know if GCMs use LES. They may use a RANS-type (Reynolds-averaged Navier-Stokes) formulation. Under this concept, "eddy viscosity" describes momentum transport between the turbulent fluctuations and the mean flow. The amount of energy in the small scales has nothing to do with the grid size. Consequently, the eddy viscosity parameterization is grid-independent. It won't vanish as the grid size gets smaller.
So, in that case, the response to the idea that eddy viscosity is unphysical would be: “it’s physical. It’s just not a molecular property. ” Arguments could ensue as to whether their closure for eddy viscosity provides the correct magnitude in any particular case, but the patch for the illposedness would remain intact as grid sizes dropped. Modelers also won’t see things blow up as the grid size vanishes.
That said: gb says the GCMs are LES. The idea of LES is newer than RANS, and was introduced to resolve problems with getting RANS to work in general cases. (There is also a progression of ever more complicated RANS formulations to fix up the lack of generality of the earliest formulations. RANS and LES coexist, and I don't know which is used in GCMs. Likely both are.)
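The LES-versus-RANS contrast above can be sketched in a few lines: a Smagorinsky-type LES subgrid viscosity is tied to the filter/grid width and vanishes as the mesh is refined, while a RANS mixing-length viscosity stays put. The constants and flow values below are illustrative placeholders, not taken from any actual GCM:

```python
# Sketch of the LES/RANS distinction: LES eddy viscosity scales with the
# grid spacing delta and vanishes as the grid is refined; a RANS-style
# mixing-length viscosity is grid-independent. All values are illustrative.
C_s = 0.17        # Smagorinsky constant (typical literature value)
l_m = 100.0       # assumed mixing length for the RANS model, m
S = 1e-4          # assumed strain-rate magnitude, 1/s

def nu_les(delta):
    """LES subgrid viscosity: (C_s * delta)^2 * |S| -- shrinks with the mesh."""
    return (C_s * delta) ** 2 * S

def nu_rans():
    """RANS eddy viscosity: l_m^2 * |S| -- no dependence on the mesh."""
    return l_m ** 2 * S

for delta in (100e3, 10e3, 1e3):   # 100 km -> 1 km mesh
    print(f"delta={delta/1e3:6.0f} km   LES nu_t={nu_les(delta):10.3e}   RANS nu_t={nu_rans():.3e}")
```

The point of the sketch: if the LES viscosity is what patches over an ill-posed continuum system, the patch shrinks quadratically with the mesh; the RANS column never changes.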
#457, I agree. That’s why I said: energy level of earth surface atmosphere
#458, Certainly not being argumentatively evasive or forgetful. I really did mean a spatial average when speaking of the whole planet earth. It is the integral of weather.
I say it makes a lot more sense, both in terms of scientific meaning, and common usage. Otherwise, the question “What is the current climate?” would make no sense.
Consider that no one will ever agree on the proper time scale for a time averaged definition of climate. The reason for this is not disagreeable people, it’s the time averaging climate concept itself. With this definition, we’ll never know if the climate is truly changing long term, or we just didn’t set the time period long enough.
Consider that the only purpose of averaging for time is to remove the “noise” of weather. A global spatial average does this.
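Whatever one makes of the definition, the "global spatial average" itself has a mechanical subtlety worth flagging: on a regular latitude-longitude grid the cells shrink toward the poles, so a meaningful global mean must weight each latitude band by cos(latitude). A toy sketch with invented zonal-mean temperatures:

```python
# On a regular lat-lon grid, an unweighted mean over-counts the poles; the
# global spatial average needs cos(latitude) area weights. The temperature
# profile below is made up purely for illustration.
import math

lats = list(range(-85, 90, 10))                  # band centers, degrees
temps = [30.0 - 0.5 * abs(lat) for lat in lats]  # invented zonal-mean temps, C

weights = [math.cos(math.radians(lat)) for lat in lats]
global_mean = sum(w * t for w, t in zip(weights, temps)) / sum(weights)
naive_mean = sum(temps) / len(temps)

print(f"area-weighted mean: {global_mean:.2f} C")
print(f"unweighted mean:    {naive_mean:.2f} C (over-counts the cold poles)")
```

With this profile the two answers differ by several degrees, which is why published global averages are always area-weighted.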
Perhaps. But what do you do to remove the “noise” of climate, recalling that exponential growth may lead to strange attractors and featureladen “noise” structures at ALL time scales? Over what timescale do you integrate weather to get climate? Or is this not possible, as Spence_UK seems to suggest (in #429 and elsewhere)?
(lucia, love your use of brackets.)
Lookie here:
http://radar.weather.gov/radar.php?rid=mux&product=N0Z&overlay=11101111&loop=no
More than virga. Stick a fork in yesterday's meteo modelling. This ends the scorecard.
@bender — You're an honest-to-goodness statistician, right? I have a few honest-to-goodness stats questions and also some "how to do that with R" questions. Would you be someone I can ask? (Sorry to go off topic.)
460
We agree on this at least.
“What is the climate in northwest Alaska?” does make sense. “What are the main features of a Mediterranean climate?” makes sense. The climate is much more spatially variable than it is temporally variable, and much more information is glossed over and lost by spatially averaging.
Re: A global spatial average does this.
>> But what do you do to remove the noise of climate,
As I have independently concluded and stated in this blog, which is also stated in #407, there is no noise in climate, by definition.
>> recalling that exponential growth may lead to strange attractors and featureladen noise structures at ALL time scales?
That is someone's unverified hypothesis, especially when it comes to long-term weather. It's also completely dependent on the time-averaged definition of climate. In fact, with this improved definition, the whole problem goes away, since this "strange attractors" hypothesis could be true at all time scales and not affect globally averaged temperature, i.e. climate.
>> Over what timescale do you integrate weather to get climate?
Who says we have to? A global spatial average does this. Climate is a word that denotes a concept. Concepts are valid or invalid, based on a comparison to reality. There is a reality to a global, spatially averaged, temperature. There is no reality associated with a time averaged temperature, since it is only a contrivance of the analyst. Physical reality is affected in real time. The reality described by the time averaged view looks like this:
A Sun, with a constant solar strength averaged between night and day down to 341 W/m2, shines on earth, a flat plate in space. Summer is averaged with winter, solar minimum averaged with maximum. In this make-believe world, people don't put sunscreen on, since when averaged over night and day, the sun isn't strong enough to burn skin. If a house starts to burn, firemen don't come, because when averaged over a long enough period, there is no danger.
Reality doesn't work that way. And btw, did you hear? The Mississippi just swung wildly all over the place. It was a real mess. One minute, a nice town in Iowa was 40 miles from the river; the next, the river came right through the town. The smart folks from the University took time off from torturing grad students to point out that "the chaos in the river flow works on the macro scale as well".
Gunnar, you are revealing yourself to be a troll.
#463
I know what I know: just enough to get by. Ask away in unthreaded. I’ll help if I can. I know R.
>> We agree on this at least.
No, we don't. Did you miss the "Otherwise"? I think it makes perfect sense to say "what is the current climate". Everyone else does too. Ever heard anyone say that 1998 was hotter than 1938, or vice versa? Now, it's arbitrarily averaged over a year, not 30 years, yet the claim and counter-claim was that the climate was hotter in 1998, or 1938. It's the de facto definition.
>> What is the climate in northwest Alaska? does make sense.
Right, and it’s spatially averaged. All too easy.
>> What are the main features of a Mediterranean climate? makes sense.
Right, and it’s spatially averaged.
>> The climate is much more spatially variable than it is temporally variable,
I hardly think so, since summer-winter deltas are large. However, variability is irrelevant. What is the purpose of our study? It is to determine whether the energy level of the atmosphere is increasing, decreasing, or staying within a normal range. That purpose can only be served by a globally averaged temperature.
>> and much more information is glossed over and lost by spatially averaging
No information is lost; we simply integrated over the surface of the globe. There is little or no useful information in a 30-year average. If the sun were to suddenly increase by 10%, it would take a long time to affect your metric, but it would not take a long time for it to affect the earth. Thus, the time-averaged metric does not describe any actual reality. It is simply a contrivance.
>> Gunnar, you are revealing yourself to be a troll.
Ad hominem. Apparently, you define troll as someone who challenges the way you think about things? I thought it was someone who makes bald assertions to stir things up, but doesn't support those assertions or stick around to respond.
You >> What is the climate in northwest Alaska? does make sense.
Me: >> Right, and its spatially averaged. All too easy.
I need to clarify this. Because you gave an area, it’s clear that the information is spatially averaged. When someone looks up information about the climate of a certain area, they are NOT interested in the average over a whole year. They want to know how hot it gets in the summer, and how cold it gets in the winter.
I don't know the annual average where I live, nor do I care. I'm just pointing out that the de facto definition is different than what you very highly educated people are using, and the de facto definition makes sense. If that takes the fun out of your very exciting discussion of strange attractors, I apologize.
Yes, it’s ad hominem, mr newsflash. I chose that tack because your “arguments” aren’t worth refuting. That’s because when they are systematically dismantled you do not acknowledge your errors or revise your thinking. You have no humility. You just keep going around in circles, returning back to the same mistaken arguments. If that is not a troll, maybe we can argue about what it is.
But rather than argue for the sake of arguing and hearing yourself speak, why don’t you try constraining yourself to some facts, so that we can make some forward progress? Facts like:
the climate (however defined) from 1957-87 is not like the climate from 1977-2007; it varies over time
as a time integral of weather there is no such thing as current instantaneous climate, only recent climate and future climate
phenomena such as ocean currents & jets are, unfortunately, not necessarily fixed flows; they vary in both space and time
we have not been observing climatic phenomena in detail for more than a few decades, and paleo reconstructions are imprecise
there is no telling when these features may break down; the unpredictability of ENSO, for example
an average is only one parameter of a distribution; the more complex the distribution the poorer a descriptor it is
and so on. You are reading the papers that commenters at CA refer to, aren’t you?
Maybe it is a linguistic issue you have? I’m trying to cut you some slack, but your headstrong ways are going to drive Jerry up the wall and I fear for his health. Good night, sir. And mind the trolls.
bender (#470),
I think that we have established that Gunnar only uses nonsensical verbiage.
I wouldn't waste any more time on him. Also note that he won't mention any of his credentials or background. Sound familiar?
Jerry
Pat Keating (#455),
A number of guesses.
But does the result surprise you given that when the parameterizations are changed the models
have to be readjusted (retuned) to prevent the spectrum from going bananas? 🙂
Jerry
468, 469
I will eschew further argument/discussion with you — it is too frustrating. Perhaps you and I are from different planets…….
Will stop feeding the troll.
I think I've said before I have no credentials or training in mathematics or climatology or even statistics. I'm self-taught. That means I make mistakes. But I try to limit my mistakes by not going out on too much of a limb. I have a PhD in ecosystem modeling. I've been a practising scientist for 5 years. I've written 15 peer-reviewed papers and 2 book chapters. I learn something new every day by listening to people way smarter than myself, and by reading.
lucia (#455),
I have a few additional comments and will get back to you.
Your arguments are eminently reasonable and I was just describing the mathematical lingo for comparison with the fluid dynamical lingo.
Jerry
bender (#474),
You do not have to apologize. I have always enjoyed your comments and
appreciate your search for knowledge. Now gunnar is a different story. 🙂
Jerry
472
Seriously, as a neophyte in climate modeling, I was somewhat (but not totally) surprised. I think of a mere 1-300 ppm of the carboniferous gas embedded in a lot of N2, O2, and H2O as a perturbation, so that a decent model should be able to handle that range. It is obvious, from the ice-age result obtained, that to the modelers it is much more than a perturbation.
Of course, to be fair, the model was developed before the lag in CO2 level was discovered……..
I am wondering if I have an analogy that may be helpful for mutual understanding, from several angles.
I’ll preface with MrPete’s “master of the obvious” restatement of what I see as the basic areas where people are talking past one another. This is simply my attempt to articulate what others have already stated in far more educated terms:
Gunnar (and gb?) are claiming that analyses of weather and climate are independent, in the sense that detailed understanding of weather (fine geo and chrono mesh) has no relationship to detailed understanding of climate (very large geo/chrono mesh). His argument is from apparent "common sense" rather than scientific or mathematical principles.
Jerry, Lucia and others claim they are not independent. That the initial (fine mesh weather) conditions, and the (fine mesh) factors that influence weather, are inextricably linked to (large mesh) climate — even at the global average temp level or however you want to define it — through apparently unstable exponential processes.
Am I doing ok so far?
OK, here's my analogy. It is NOT perfect for quite a variety of reasons... but (until someone explains how horribly inappropriate this is as an example) I'm finding it helpful for my thinking.
Some of you may remember Conway’s game of Life, first introduced to me as a school kid in Oct 1970 in the wonderful Scientific American “Mathematical Games” column by Martin Gardner. (Personal note: I had the privilege of programming that and other games for the first retail microcomputer, under the guidance of Arthur Samuel — a wonderful AI pioneer. Nice memory…maybe that’s why this is so vivid!)
Anyway, for those who are not familiar, Conway’s Life game is a cellular automaton. Play it online here, or download a very fast Windows version here.
The entire Life engine runs on a grid whose cells obey four simple rules:
1. Any live cell with fewer than two live neighbors dies (from loneliness.)
2. Any live cell with more than three live neighbors dies (from overpopulation.)
3. Any dead cell with exactly three live neighbors is born (wow, spontaneous generation!)
4. Any live cell with two or three live neighbors lives to the next generation.
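Those four rules translate almost line-for-line into code. A minimal sketch (one of many possible representations — here live cells are stored as a set of coordinates, so the grid is effectively unbounded):

```python
# Minimal sketch of one generation of Conway's Life.
from itertools import product

def step(live):
    """Apply the four rules once to a set of live (x, y) cells."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # Rules 1, 2, 4: a live cell survives with exactly 2 or 3 neighbors.
    # Rule 3: a dead cell with exactly 3 neighbors is born.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker": a row of three live cells oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # -> True
```

Even this tiny engine exhibits the point being argued: the macro behavior (gliders, oscillators, extinction) is entirely fixed by the micro rules and the initial state.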
What’s interesting about this for present discussion:
* One can observe and/or study Life at fine or large scales
* One could easily perform localized fine- or large-mesh measurements and averages to determine, for example, average cluster populations over time.
* At the macro scale, one can observe for some time, see trends, etc… and it is not always obvious what will happen next.
* Depending on the situation, the population can grow forever, can stabilize, or can fade to extinction.
And what is most interesting: all this complexity is completely determined by the detailed initial conditions and a small set of finescale rules.
Gunnar: does this help make the connection? This is an example where macro and micro do not appear connected, but they absolutely are. And modeling of the macro is useless without detailed finelevel understanding.
Jerry and Lucia: is this example actually of any use with respect to what you are saying? At what point does it fall apart in providing the reader with useful understanding?
Another page on Conway’s Life with a great “applet” that lets you see how it works. Zoom to level 0 to see a lot of detail.
>> the climate (however defined) from 195787 is not like the climate from 19772007; it varies over time
That’s right, it varies over time, which means it has an instantaneous value. How else could anyone claim that 98 was hotter than 38 or vice versa? Climate is the spatial integral of weather. When you average that over time, you are actually talking about the average climate. This is not new, or my idea, it’s what everyone is doing, including people on this blog.
>> Gunnar: does this help make the connection? This is an example where macro and micro do not appear connected, but they absolutely are. And modeling of the macro is useless without detailed finelevel understanding.
Thank you Mr Pete for being civil. I agree that sometimes, micro analysis might be useful to macro analysis, but certainly not always. I think the time dilation question is a perfect example. It becomes completely obvious if you really focus on the fact HOW we study something is completely dependent on WHY we are studying it. The object of study may be the same as in many other studies, but the nature of that object doesn’t control WHY we are studying something. In the road trip example, the micro analysis is irrelevant.
However, in this case, per the de facto definition that people actually use (climate is the spatial integral of weather), it's not a micro vs macro situation with respect to time scales. Here is an analogy:
There is a hose filling up a pool. By some mechanism, the pool water is shaken, rotated and disturbed into complete turbulence. However chaotic or nonchaotic, the turbulence can never change the amount of water in the pool.
Climate is like the study of how much water is coming into the pool, and how much is leaking out the cracks in the pool. The study of the turbulence, while fun to watch, is irrelevant. And therefore, so are the math problems associated with that. Hence the conclusion in #325.
MrPete in 478
You have my view right. And with the qualifications as stated, you can add that I also agree with Bender. That's why I was discussing the micro/macro scale point-of-view issue.
Your example is great. Sounds like a very educational game.
@Gerry–
In 291, Gunnar gave his credentials this way:
“I have a BSEE, as does my wife & older brother, while my father had a PhD in EE. My mother has a masters in history, my uncle a PhD in Physics.”
@Bender– You are innocent of hurling ad hominem. You are guilty of name calling. Name calling is only an ad hominem if used as an argument. Argument by falsely labeling others’ statements with fancy terms for logical fallacies is something I don’t think I’ve seen before; I think we need to give it a fancy Latin name.
>> In 291, Gunnar gave his credentials this way
Ahh, you shouldn’t have told him, since he is clearly pursuing a logic fallacy.
Mr Pete,
That is precisely my perspective. One system, viewed through two lenses, two methodologies: meteorology, climatology. The only thing that differs is scale, and therefore what you are able to resolve and what you are willing to ignore. That’s not the system’s behavior, that’s the observer’s bias.
Why two lenses? Purely historical. Why not three? Why not seventeen? Why not ONE!? That would be postmodern. Whereas Gunnar is arguing as a classicist. He likes his dichotomy. It’s comfortable to him.
Unfortunately for him, the topic here is "exponential growth in physical systems" – which is a singular view, one that is relevant at all space and time scales. Therefore I predict he will never be on-topic. Therefore I predict he will always exhibit troll-like behavior on this thread. Therefore I will try not to feed him. Unfortunately, the smallest crumb seems to induce a frenzy.
MrPete,
You must get Wolfram's book.
http://www.wolframscience.com/
>> Why two lenses? Purely historical.
Completely incorrect, since HOW we study something is completely dependent on WHY we are studying it. Our study has a purpose. Another one could be the study of earth’s carbonic cycle. Different purpose, different study and analytical techniques. The carbonic cycle analyst does not need to run long term weather forecasts either.
I've carefully explained this over and over, and nobody posting here deals with my argument; they simply assert something equivalent to "can't have climate without weather" or "how can we predict the climate, when we can't predict the weather long term". I guess it takes some abstract thinking, at least more than the current posters are willing or able to engage in.
dismissive
presumptive
ad hom
redundant
yep, it’s a troll.
Ok, Gunnar, I'm not going to engage you point-by-point on your ridiculous rebuttal because it's an infinite regress with you. But I'm listening. Just tell us, in a snappy paragraph that is on-topic, what is this amazingly insightful argument of yours that we're so incapable of seeing? We'll try hard to keep up with your brilliance. Make it fast, my carbonic soup is bubbling.
Gunnar, Dr. Browning isn’t saying you are WRONG because you won’t give your credentials.
That would be a logical fallacy. And Bender isn't saying you are WRONG because you behave like a troll.
What they are saying is this. They don’t want to waste time on you. That’s a statement of personal
preference.
I like to waste time on you.
Sometimes I had a kid in class who would just badger away and never really listen. He wanted
to hear himself talk. Meanwhile, other kids who wanted to learn and ask genuine questions learned
nothing.
After a year of teaching I learned some fun techniques for these guys.
Good question Gunnar! Let’s table that until next class.
Good question Gunnar! You write that up as a paper, now getting back to what Robert Frost really meant.
Good question Gunnar! My office hours are on your syllabus, Schedule a time to speak with me, perhaps you
could do an honors seminar on this.
On Moshpit days:
Gunnar, you have just disproved the cliche that there are no stupid questions.
Gunnar, There are no stupid questions. Only stupid questioners. As we all know too well from your
performance this quarter.
Good to see you back on the bridge.
#487, >> it's a troll
careful, you just defined many posters here as trolls.
>> in a snappy paragraph
read #480
mosher, doesn’t surprise me that you’re a teacher.
I say it's the other way around: I'm the one advancing the ball, clarifying concepts, supporting my assertions, and it's Gerry, Lucia, bender etc who are the ones who just want to hear themselves talk about their particular obscure area of expertise. They are reacting emotionally, because my conclusion means that their expertise is not particularly relevant to climatology, the nature of which depends only on what the majority of people want to know about the climate, and is not dependent on the nature of the object being studied.
I find your schoolyard-like bullying tactics of ostracism humorous. They say more about you than about me.
It’s all a big conspiracy, ain’t it?
Mosh – that Utube is perfect.
RE: #478 – Yes. You are right on. If you take the DIV of, say, the "edge zone" of the jet stream, or of some large macro parcel, and you don't capture the detail of the small divergences, convergences and vortices, these brushed-over small details will inevitably, over time, cause your model not to converge. This is true whether it's a CWA-level meteo model, a synoptic/continental-scale meteo model, a global meteo model or a GCM.
Gunnar,
Your attitude is ironic given that Mr Pete and I are the only ones here trying to salvage something from your train wreck of an argument. You should be nice to us. #480 – which I already read – is incoherent, which is why I asked for something coherent.
To be generous, your fundamental argument seems to be that:
(1) there are certain features (i.e. oceanic & atmospheric flows) of the climate system that are persistent, either indefinitely, or over incredibly long time scales, and
(2) that this persistence justifies a division of timescale between that which is “meteorological” vs “climatological”, and
(3) therefore arguments about “exponential growth in physical systems” do not apply at those longer timescales.
Forgive me if I misrepresent anything; it is not intentional. Regardless, my sense is that this is the RC perspective. So I will continue.
Now, being a soup cook, I do not have a professional opinion. I merely want to understand:
(1) the empirical basis for the assertion that certain features of the climate are immutable (or persistent)
(2) the empirical basis behind the assertion that the internal variability of the terawatt heat engine is minuscule compared to the external forcing
(3) the logical basis behind the assertion that the GCMs have been tuned to a “representative” set of weather/climate scenarios
I don’t have an argument. Only questions. Can you answer any of them? (My prediction is you’ll try to unask all of them.)
Hi,
If people want some more background on chaos vs. turbulence or atmospheric flows, use scholar.google and type in the words "predictability" and "flows" or "turbulence" or "atmosphere". Many papers on this topic. I have no time to read them now, but I think the generally adopted idea is that weather has a finite predictability just like turbulence. Do two simulations: same grid, same parameters, same initial conditions. Only in one of them are finite disturbances introduced at the small scales. Initially the solutions look similar, but after a finite time they start to diverge. Since in weather forecasts you never know the exact initial conditions (there are always some errors, or some information is lacking), the simulated weather will diverge from the real weather within a finite time, even if you have the perfect model. So nobody thinks that one can ever predict the weather 50 years from now. So far, I think I agree with Gerry.
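The two-run experiment just described is easy to sketch with a toy chaotic system. This is an illustration only (the Lorenz equations with crude forward-Euler stepping and a made-up perturbation size — not an atmospheric model): a tiny disturbance stays tiny for a while, then the two solutions diverge to the size of the attractor itself.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz system (adequate for illustration)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run(state, n_steps):
    traj = [state]
    for _ in range(n_steps):
        state = lorenz_step(state)
        traj.append(state)
    return np.array(traj)

# Two runs: identical except for a tiny "small-scale" disturbance in one of them.
a = run(np.array([1.0, 1.0, 1.0]), 3000)
b = run(np.array([1.0, 1.0, 1.0 + 1e-8]), 3000)

sep = np.linalg.norm(a - b, axis=1)
print("separation at t = 5 :", sep[500])   # still tiny
print("separation at t = 30:", sep[3000])  # comparable to the attractor size
```

The finite predictability time depends on the perturbation size only logarithmically, which is why shrinking initial-condition error buys surprisingly little forecast range.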
However, this does not imply that one cannot simulate the future climate, or say the probability distribution of the temperature, with a model.
I do not agree with Jerry when he says that when a subgrid model is used one cannot compute the stresses correctly. Really, many, many people have simulated turbulent flows and used subgrid models (large-eddy simulations), and these simulations are able to predict stresses in good agreement with fully resolved simulations. The models that the engineers at Boeing are using do not even resolve a part of the scales or spectrum but model the complete range of scales. If these models really gave BS results, the engineers wouldn't use them, don't you think?
Gunnar, your pool analogy states your perspective very well. Unfortunately, neither you nor the modelers have provided any evidence to support this assumption.
Jerry and Lucia are providing a detailed education, for those who have ears to hear, about exactly why your assumption is incorrect.
You won’t be able to counter their argument with ever more strident exclamations. They hear you. You need to hear what they are saying. They have evidence to *prove* that the measurement/analysis systems/methods are connected, not independent.
Re #490
You are making me ill with your self-praise. Goodbye.
494, if you read Gerry and Lucia’s comments carefully, you’ll see that a flow field around a (relatively small) object and a flow field in open space (subject to hydrostatics) are two entirely different problems. And that’s not even bringing the Coriolis effect, and temperature gradients, and humidity gradients, and a number of other factors into the picture.
Re # 478:
Mr Pete, I didn't say that micro scales do not matter; they do, also for climate. Thus their effect and their feedback on the large scales should be included in (climate) models. We are developing large-eddy simulations for turbulent flows. In a large-eddy simulation one does not resolve the whole range of turbulent scales but only a part of it. An essential element is to develop a subgrid model for the unresolved scales that takes into account their feedback on the large, resolved scales. With such a large-eddy simulation one is never able to reproduce one particular realisation obtained from a fully resolved simulation. So far, I agree with Jerry. However, when we compare the statistics (mean velocity, the average variance of the turbulent fluctuations), we can reproduce the statistics from a fully resolved simulation rather well if we use a good subgrid model. I think something similar should be possible with an atmospheric model. Thus reproducing one particular realisation (weather): no. Reproducing some kind of average temp or a temp probability distribution: yes, I think that should be possible.
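The distinction gb draws — no hope of reproducing one particular realisation, yet the statistics can agree — can be illustrated with a toy system. The analogy to LES is mine, not gb's: here two runs of the chaotic logistic map decorrelate completely, yet their long-run means and variances coincide.

```python
import numpy as np

def logistic_orbit(x0, n, r=4.0, burn=1000):
    """Iterate the fully chaotic logistic map x -> r*x*(1-x), discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

# Two "simulations" differing only by a tiny initial disturbance.
a = logistic_orbit(0.3, 200_000)
b = logistic_orbit(0.3 + 1e-10, 200_000)

# The individual realisations are completely decorrelated...
print("correlation:", np.corrcoef(a, b)[0, 1])   # near zero
# ...yet the long-run statistics agree (for r = 4 the exact mean is 0.5).
print("means:    ", a.mean(), b.mean())
print("variances:", a.var(), b.var())
```

Which statistics survive coarse-graining, and which do not, is exactly the question at issue for GCMs.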
RE 490.
I am not a teacher.
I wrote ” sometimes I HAD a kid in class” If I were currently a teacher I would have written
“sometimes I HAVE a kid in class”
I fail to see how you have clarified anything or can clarify anything when you
cannot begin to understand the implications of a simple sentence.
#493 >> your fundamental argument seems to be that
For the record, that's a complete misstatement of my argument, which is coherently explained in #480. These are the words that you folks continuously put into my mouth. In short, climate is the spatial average of weather, not the time average. Climatology is the study of those things that can affect this spatial average of weather, representing the energy level of earth's surface atmosphere. As such, climate is not weather on a longer time scale. (see also #270, 281, 294, 298, 306 and the conclusion in #325)
>> this is the RC perspective
Except that I'm probably more anti-AGW than anyone here. However, intellectual honesty means rejecting straw-man anti-AGW arguments. To be against my argument simply because it would be something that RC would advocate is pure partisanship.
>> selfpraise
hardly, it’s self defense. ref #488.
Don’t. Rattle. Cage.
gb,
Thanks for your comments.
In parameterizing the non-physical portions of these models, what are the climate/circulation scenarios that are used, and how does one judge how representative they are of the climate system as a whole? Is there a good way to characterize how much "wiggle room" there is in the tuning process? i.e., how many parameters are free for tuning, and how much uncertainty/freedom is there in the fixed parameters that are not considered to be available for tuning?
[I know: Simple to ask, probably not so easy to answer.]
Ok, thanks a lot for that. [Backs carefully away from cage.]
Re # 497: In fact, I am doing simulations of stratified flows including rotation and a subgrid model. They do reproduce statistics observed in the real atmosphere. So I think that also for this class of problems it should be possible to develop some kind of subgrid model. It is only more difficult than for engineering types of problems.
504, but over what scale? Enough so that these variations (pressure, temperature, etc.) enter the picture? There's a lot more to this than just fluid mechanics.
Here’s a challenge: Simulate a cloud. Produce a 3D data set of a) albedo effect, and b) IR absorption. Include in the IR calculation effect of heat bypassing the absorption by vertical convection. Think you can do all that? Good luck.
>> neither you nor the modelers have provided any evidence to support this assumption.
You don’t understand my position if you think I’m defending the modelers. My conclusion means that GCMs should be abandoned. See #261, which started this whole discussion.
Also, you still think I’m saying something about the object of study. I’m saying something about the purpose of our study and the validity of the concept climate as used in this thread versus the validity of the concept as used almost everywhere else. I explain this in the 2nd half of #465.
>> They have evidence to *prove*
Of course climate and weather are connected; the object is the same. However, the point about timescales is moot if climate is the spatial average of weather, not the time average. This is the way everyone is using the concept, since it's the only one that makes sense. Why else would one say "but the lower 48 is only 2% of the earth's surface"?
Bender, I have never worked with climate models. I have an engineering background and do research into turbulent flows. However, I have read quite a lot about climate, atmospheric dynamics, etc., and know a bit about the parameterizations they use. These are sometimes not that different from the ones used in engineering models. Circulations are not parameterized directly as far as I know; rather, they are the output of a good ocean/atmospheric model with correct parameterizations. How many parameters? Depends on the complexity of the model. How are the parameters determined? There is no single answer to that question, I think. The ocean circulation is determined to a large extent by the vertical mixing parameter, so this one is chosen so that about the correct circulation is obtained. Direct observations of vertical mixing are rather poor and cannot yet be used to constrain the parameters, unfortunately.
Re # 505. Larry, I didn’t say it would be easy.
[From a safe distance from the cage:] And that's why the point isn't moot: climate is the time-average of weather.
e.g. The Californian climate is warmer than the Norwegian climate. We know that because we've been observing them for decades, and when you integrate over that timescale you can make an inference about the past difference. Future difference is another kettle of fish. [Whoops, troll bait.] And this is where ergodicity (i.e. statistical interchangeability of time series and series ensembles) starts to matter.
The science of global climate change makes use of spatial averages of (instantaneous) weather and (time-averaged) climate simply because it's easier to track one variable in a vector than one hundred in a matrix. Convenience is not correctness, however, and this parsimony comes at a cost. The question being asked is: what is that cost?
I am critical of the GCMs, but my criticism has a logical and empirical foundation. Let’s constrain ourselves to reality and reason, shall we?
@gb–
The models that the engineers at Boeing are using do not even resolve a part of the scales or spectrum but model the complete range of scales. If these models really would give BS results the engineers wouldnt use them, dont you think so?
No. Engineers wouldn't use models if they didn't give decent results. However, that doesn't tell us anything about the importance of the ill-posedness discussed by Gerry. It also tells us fairly little about GCMs.
The reason is that the problem Jerry is discussing would never arise in an aerodynamics application.
The sort of ill-posedness Gerry is discussing manifests itself only when a) the density of air varies more or less hydrostatically, b) the hydrostatic assumption is applied to model density variations, and c) the Coriolis force matters.
None of these features is important at all when modeling flow around aircraft.
As to the hydrostatic assumption: not only is it not used in Aero, for the purpose of modeling flow around an aircraft we neglect gravity entirely. (That is: it's neglected in the flow calculation. Gravity is accounted for when trying to estimate the density in the vicinity of the aircraft – but that's not a flow issue.) If you flip open an undergraduate aerodynamics book, you'll see there is no "gh" in an aerodynamicist's Bernoulli equation. (I once taught ME/CE-oriented fluid dynamics courses and an introductory Aero course back to back. I'd have to blink and stop myself from automatically writing the "ρgh" term!)
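Lucia's contrast between the two textbook forms can be written out explicitly — the hydraulics form carries the gravitational head that the aerodynamics form drops:

```latex
% Hydraulics (ME/CE) form: gravitational head retained
p + \tfrac{1}{2}\rho V^2 + \rho g h = \text{const.}

% Aerodynamics form: gravity neglected in the flow calculation
p + \tfrac{1}{2}\rho V^2 = \text{const.}
```

Dropping the ρgh term is harmless over a wingspan; it is precisely the term that matters over the depth of the atmosphere.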
As to the Coriolis force: hah hah hah…. Everyone neglects this in engineering-scale applications. Some of Boeing's aircraft are large, but none have the wingspan of a hurricane.
One needs to be very careful thinking that if engineers who work very hard can get aircraft to fly, that means models for Climate also work. The problems are different. The models are different.
It will sound odd to say this, but the success of computational modeling for Aerodynamics applications lies in part with the fact that the flows are “cleaner”. You can neglect many things.
The problems do remain difficult, and it takes intelligence, skill and training, but Aerodynamics is an area where, by flying fast, you can leave many difficult fluid dynamics questions behind you.
If Jerry is right, then LES could work perfectly well in aero problems and 99.9% of engineering problems. However, it would fail when predicting both weather and climate. That’s the importance of his work.
#507
Appreciate the reply. Every nugget I can gather helps me to formulate a more precise question. Your last sentence is very interesting. Gives me some sense that my questions are not that irrelevant [whereas at RC I was either dismissed or parsed to death].
RE 501. Ok I’m gunna go Moshing.
I just ran the Life sim: a lightweight spaceship, like 5 nexts. It stabilized at a square of 4 against the wall up top near the right, about 20 squares in. Then I filled in a random space with cells with a mouse drag, and it's gotten to a point where it alternates between a cross and a circle of 12 cells total, the cross's 3 x 4 centered around a box of 9 cells. Pretty cool.
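For anyone who wants to reproduce Life experiments like this, here is a minimal sketch of one generation of Conway's rules — my own toy implementation on a wrap-around grid, not the simulator used above:

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Life on a toroidal grid of 0/1 cells."""
    # Sum the eight neighbours by shifting the grid in every direction.
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# A blinker oscillates with period 2.
g = np.zeros((5, 5), dtype=int)
g[2, 1:4] = 1
g1 = life_step(g)
g2 = life_step(g1)
print(np.array_equal(g2, g))  # True: the blinker returns after two steps
```

A deterministic rule this simple still produces gliders, oscillators, and long chaotic transients — a cheap reminder that simple dynamics need not mean simple behaviour.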
Okay, Gunnar, which is it:
"only a few really desperate anti-AGW types are resorting to the straw man argument that the earth does not have an average temperature, i.e. the 'average kinetic-heat content of the planetary atmosphere'." or "Thus, the time-averaged metric does not describe any actual reality. It is simply a contrivance."
Of course the Earth has an “average temperature”. Care to tell us what it is? But don’t forget to include all levels of the atmosphere, the soil, the water and the core.
Or did you mean the average temperature of "rural" samples of air in the boundary layer, and samples of the surface of the seas (excluding the top 180 squares) by satellite over x amount of time?
Given that you're ignoring everything about climate except temperature, I'll question the meaning of tracking the anomaly of the average between tmax/tmin land samplings averaged over 5×5 grids of randomly placed (and somewhat questionable at times) ground stations, and the averaged 2×2 water top-layer samplings that are then combined, adjusted, averaged and re-averaged over time. Oh, wait. Hmmmm. There's that pesky time thing.
I don't deny that the number has been bistable between about ±0.8 for the last 125 years, and that its trend is +0.7 over the last 125 years. Gee…. Wow, that's time again.
I don't agree that you can yank one component out of the system, regardless of whether it seems to correlate to that rise or not. Even if you could do that, nobody's shown that the known OCO rise correlates in a cause/effect relationship to that number. But I can't rule it out.
But in order to figure out what the system is doing, you have to understand how the system functions, and in order to understand how the system functions, you need to get to that point. So while climatologists may not be doing meteorology, they have to understand it to model it. Which is why the models can't get beyond the level we're at if we don't understand clouds etc. Which is why weather patterns over time have to be studied – boundary layer, mesoscale, synoptic scale and global scale. Are you really claiming that climatology doesn't deal with dynamic meteorology over time?
But I do agree that not being able to forecast weather does not have to equal being unable to predict the climate. The question is how well. I agree with you somewhat there about the models; I don’t agree the models should be abandoned, but they certainly need to be taken with a grain of salt.
“the pool water is shaken, rotated and disturbed into complete turbulence. However chaotic or nonchaotic,”
It meets the definition of chaos, it seems to me, in that it has a sensitive dependence on initial conditions. Doesn't matter. Now, given what you wrote there – climate is leaks in the pool – I need to understand how turbulent the water is and in what ways it will act, so I can figure out how much will leak and whether the cracks are going to get bigger, and in what ways. The leaking water and the ways it acts are certainly not irrelevant to the fact it's leaking. How do I figure out when and how I need to increase or reduce the water flow if I don't know how that's operating? How do I stop the pool from totally breaking if I don't know how the water is affecting the pool? I find your argument that there are no time scales involved a strange one; I have to look at the cracks long-term. Yes, I know, that's probably a bad analogy, since you can't control the weather.
Saying you don't have to take weather into account, understand it, and use tools and data about the weather to figure out climate is equally incorrect. It's like saying you can take apart a car and put it back together without understanding any of the systems, at least to the degree you need to know what tools to pick and how to use them.
That’s probably a bad analogy also. Hopefully you get my points though.
You say "the turbulence can never change the amount of water in the pool". Sure it can. It becomes turbulent enough to exit the pool. It evaporates faster if it's going up in the air, increasing its surface area by being in the air, with some of it breaking into drops. And for that, guess what: I need to know how it's behaving, not at some instant, but over time, to know how much is getting out.
How can you possibly argue with “Climate is the average and variations of weather in a region over long periods of time”(wiki) “the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years”(ipcc) “The composite or generally prevailing weather conditions of a region, throughout the year, averaged over a series of years.”(nws)
>> climate is the timeaverage of weather.
As I explain in the 2nd half of #465, concepts are validated by comparison to reality. If one can say that the climate changes over time, then there must be an instantaneous value. It makes no sense to say that the spatial average of worldwide weather is weather.
>> e.g. The Californian climate is warmer than the Norwegian climate. We know that because we've been observing them for decades and when you integrate over that timescale
Note, you spatially averaged as the first step. If an alien landed on earth, first in Norway, then in California, spoke to no one, read nothing, he could properly conclude that the climate in CA is warmer than Norway. With your definition, he could not, without sticking around for 30 years.
[Sigh] Substitute ‘Fresno’ for ‘California’ and ‘Oslo’ for ‘Norway’ and rerun the analysis.
Yes, he could make an inference based on a single second or hour or day's observation, but there's a chance he'd be wrong. Integrating over time reduces the variance in your estimate and thus shields you somewhat from making an incorrect inference. There are times when Oslo's daily temperature exceeds that of Fresno. The question is: are there entire centuries where that might be the case!?
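The variance argument can be put in numbers. A hypothetical sketch with made-up figures — a 2-degree climate difference buried in much larger daily weather noise, with days treated as independent (real weather is autocorrelated, so convergence is slower in practice): the chance of the alien's ranking being wrong shrinks rapidly as the averaging window grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical climates: warm site vs cool site, 2 degrees apart,
# with weather noise much larger than the climate difference.
mu_warm, mu_cool, sigma = 16.0, 14.0, 6.0

def wrong_inference_rate(n_days, trials=20_000):
    """Fraction of trials in which the n-day mean ranks the two sites wrongly."""
    warm = rng.normal(mu_warm, sigma, (trials, n_days)).mean(axis=1)
    cool = rng.normal(mu_cool, sigma, (trials, n_days)).mean(axis=1)
    return np.mean(warm < cool)

for n in (1, 30, 365):
    print(f"{n:4d}-day average: wrong {wrong_inference_rate(n):.3f} of the time")
```

With one day of data the alien is wrong roughly 40% of the time; with a year of data, almost never — which is exactly why the inference is statistical rather than a single observation.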
“that the climate in CA is warmer than Norway”
Oh, really. What if when he landed on a day when it was warmer in Norway? What if he landed at Lake Tahoe in December versus Death Valley? What if he landed where it normally gets 200 days of rain a year and it wasn’t raining that day? Dry low humidity place that is having a monsoon?
He wouldn’t know jack about the climate in any of those places.
It occurs to me that the weather/climate dichotomy is a topic which deserves its own thread. It’s a distraction here.
I liked the response on turbulence changing the volume of water in the pool. Reminded me of the real-world incident our neighbors encountered in the ('87?) quake: they watched half the water in their swimming pool go vertical… and exit.
The earth has one or two boundaries as well, where “pool water” can exit. You can ignore such things in the short run, but eventually, you’re gonna have to pay the piper: there’s variability that is not explained by our simplistic models.
“If an alien landed on earth, first in Norway, then in California, spoke to no one, read nothing,
he could properly conclude that the climate in CA is warmer than Norway. ”
Well, today, if he flew from Bergen, Norway (46F) and landed at Mount Shasta later on today (39F), he would conclude the opposite.
Maybe this alien who speaks to no one and reads nothing is you. That is what I would properly conclude.
>> Yes, he could make an inference based on a single second
It wouldn't be an inference, it would be an observation. What you are doing is a priori assuming your conclusion. You're not dealing with:
>> What if when he landed on a day when it was warmer in Norway?
It would be a correct observation on that day. If you average that over time, you get average climate. When you spatially average, you diminish the weather aspect in proportion to the area averaged over. If it's just Norway, there is still a large weather component. If you limit the area to Fresno, it's just pure weather data. The climate of the earth could change, but Fresno probably won't show it. When you average over the whole globe, the weather is removed.
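The claim that spatial averaging diminishes the weather component can be sketched with toy numbers. Big assumption (mine): stations are treated as independent, which real weather is not — spatial correlation shrinks the effective station count considerably, which is one reason the timescale question does not simply disappear.

```python
import numpy as np

rng = np.random.default_rng(1)

climate_signal = 0.5     # hypothetical global anomaly at one instant
weather_sigma = 5.0      # local weather scatter, much larger than the signal

def instantaneous_average(n_stations):
    """Spatial mean of one instant's readings over n independent stations."""
    readings = climate_signal + rng.normal(0.0, weather_sigma, n_stations)
    return readings.mean()

for n in (1, 100, 10_000):
    est = instantaneous_average(n)
    print(f"{n:6d} stations: spatial mean = {est:+.2f} (true signal {climate_signal})")
```

For independent stations the weather noise in the spatial mean falls as 1/sqrt(N); with correlated stations it falls much more slowly.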
Larry (#517),
We are in agreement again. I have comments on the difference, but they do not belong on this thread.
Jerry
>> they watched half the water in their swimming pool go vertical and exit.
Right, that’s an external source of change.
>> theres variability that is not explained by our simplistic models.
Right, because of this very issue. If one only concentrates on weather, one will miss a whole host of external factors that affect the climate. See my Chief Climatologist story in #325. The models are extremely incomplete, because they include mostly internal effects (the ocean being an exception).
Pat Keating (#477),
Do you work for GISS or are you just running their model for interest?
What type of graphical output do you have available to determine the cause of the blowup?
Jerry
523 Jerry
No, I don't work for GISS. I recently downloaded the model to run on my PC overnight (from http://edgcm.columbia.edu/ ). The graphical output isn't bad, but not fine-grained enough in time (it's annual, Dec 31) for me to see it go bad — it blows up in December, unfortunately.
I started with 0 ppm, but changed to 1 ppm after the first blowup, just in case the program was taking a logarithm.
RE 524. Hi Pat, Have you taken a look at the MIT GCM? I was going to download and play with it
(ModelE made me ill) and the code looks to be more competently constructed.
I’ll answer the main points and then stop. You are becoming less and less rational on this issue.
How is a single-second observation different from a 1-hour or 5-hour observation? Your block quote about spatially averaging weather is incomprehensible. I can't tell what you're trying to say, but it doesn't look like it even applies.
That day's weather (or that hour's or minute's) doesn't ever tell you what the climate is there. But you have to know the min/max, humidity levels, wind direction and speed, amount of rainfall and the like to derive the climate. You are simultaneously arguing that weather is not climate, that a single observation of the weather is climate, and that time is not involved.
Can you have climate without knowing rainfall levels? Can you know rainfall levels without measuring it? Does knowing the amount of rainfall in the last 10 minutes or 1 day tell you what climate is? You used to be pretty rational.
The sun, the Earth’s core, etc, are all “external forces”. How we change the land and air are external forces.
Just because climate looks at other things too has no bearing on the fact that weather is one of them.
So the models are incomplete and use assumptions. It doesn’t prove they’re totally worthless for anything.
Pat Keating (#524),
Did you save the entire model data every so often, every 10 years, so the model could be restarted?
What continuum dynamical system of equations is approximated in the model, e.g. is it the hydrostatic system?
Is it a pseudo-spectral or finite-difference numerical approximation?
What are the grid sizes in the three spatial directions?
Does the model run stably for more than 30 years using the unmodified version?
Somewhere on this site I proved that if one is allowed to tune the forcing (parameterizations), one can compute any solution (or spectrum) that one wants even with the wrong system of equations. The proof is easier than me searching for where I made the comment. 🙂
Jerry
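Dr. Browning's tuning remark can be made concrete with a toy construction of my own (not his proof): pick any target trajectory, then back-solve a forcing so that even a structurally wrong equation reproduces it.

```python
import numpy as np

# Target "observed" trajectory we want the model to reproduce.
target = np.sin
dtarget = np.cos

def wrong_rhs(x):
    """A deliberately wrong model: pure damping, no oscillation at all."""
    return -x

def tuned_forcing(t):
    """Forcing back-solved so the wrong model tracks the target:
       F(t) = x*'(t) - f_wrong(x*(t))."""
    return dtarget(t) - wrong_rhs(target(t))

# Integrate the wrong model with the tuned forcing (forward Euler).
dt, n = 0.001, 10_000
x, t = target(0.0), 0.0
for _ in range(n):
    x += dt * (wrong_rhs(x) + tuned_forcing(t))
    t += dt
print("model:", x, " target:", target(t))  # the wrong model tracks sin(t)
```

Matching the observed record is therefore weak evidence for the underlying equations whenever the forcing is a free knob — which is the point about parameterizations.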
Pat Keating (#524),
I looked at the GISS site you mentioned, and the model is based on very low-accuracy numerics (the Arakawa schemes are finite-difference schemes with large built-in implicit dissipation). The code does allow different resolutions to be run, but my guess is that you were not able to run at very high resolution? You should first run a version that GISS states will work and look at the output to see if it agrees with theirs. Then proceed from there.
I will continue to help and provide suggestions if you want.
Jerry
Jerry
The LES Question (tutorial)
I assume that LES will work in both 2D and 3D. Convergent numerical solutions of the incompressible 2D Navier-Stokes equations can be (and have been) computed with kinematic viscosities (Reynolds numbers) of real fluids (references previously cited and available). Given these convergent solutions, it would be trivial to test the LES methods against the converged solutions for accuracy.
It is my guess that the solutions will be different, especially over longer time periods.
Jerry
What exactly is the utility of a 2D model? It seems to me like that’s like modeling flow in a set of vanes. Or like current in a laminated transformer core.
Jerry – Sure. It could be worthwhile to stuff in an LES subgrid model, run it, and compare what happens. I'm not very familiar with 2D turbulence, but simple googling reveals all sorts of people publishing papers on 2D turbulence and apparently doing exact computations. Sounds like LES of 2D turbulence could be a good project for a graduate student. 🙂
You’d probably want to demonstrate some problems that worked well, and then apply it to the case where you expect LES to blow up.
525 Jerry
I will assume that 528 supersedes the earlier post.
I did run a version, including input file, that edgcm provides as one of a set of sample runs, and then used it for my run with 1 ppm by merely changing that in the input file (they have a nice interface to do that). The purpose was to use input parameter changes to probe the insides of the model.
So the version I ran was identical to one that runs fine, except for the CO2. Since the edgcm is intended for amateurs to run on a PC, I suspect that it does use a pretty coarse grid. I believe it is 8×10. The GISS II model is described at some length in Hansen et al, 1983 at http://pubs.giss.nasa.gov/abstracts/1983/Hansen_etal.html
but the edgcm version has been upgraded from time to time since then.
525
Steve
No, I didn’t want to get too much into very complex models (I don’t have that much time). The edgcm model is easy to use, but the tradeoff is that you can’t make internal changes, just change some of the parameters.
Browning comment to lucia: The atmosphere is not 2D and the lateral and vertical interconnections are extremely important to the dynamics and physics.
Sam, come join me where it’s safe, away from the cage.
534 bender says:
November 17th, 2007 at 8:45 am
Cage? I was under the impression it was the pit for blood sports!
Referring to the one that sleeps, over in his cage.
Oh, that. He who lives in barn and throws out many strawmen burns brightly in the winds of geostrophic chaos with each passing of the Sun, Moon, and stars.
Weather is to climate what a fart is to a biography.
How are you with Haiku?
I would suggest:
1. Exponential growth thread #3 (because I don’t think the issues here have been completely thrashed out), and
2. Weather vs. climate thread (which is somewhat related, but a different topic).
Skeptics chirp in the Catbird Seat.
Alarmists all up the tree.
===========
Wind up
Pine down
All lost
Climate average
Rocket Man beats gorille
With holy hockey stick.
Goreblimey.
==============
Larry (#530),
Before popping off, you might want to do a bit of reading. Two-dimensional and three-dimensional turbulence have many things in common, but not everything. For example, the minimal-scale estimates for 2D and 3D incompressible NS turbulence are essentially identical (Henshaw, Kreiss, and Reyna reference), but the three-dimensional estimate requires an additional assumption (although it is quite reasonable – that the velocities stay bounded) that is not necessary in the 2D case. Thus 2D turbulence is a natural first step in understanding turbulence. Also, the arguments for the proposed ad hoc subgrid parameterizations typically mean that they should work in both the 2D and 3D cases. Thus if they do not work in 2D, they will not work in 3D. I will mention here that Heinz and I tried some pretty sophisticated ideas along this line and none worked.
Now as far as my comment about the atmosphere not being 2D, that is a very different animal. It has been shown very clearly that the large-scale, slowly evolving in time solution of the atmosphere in the midlatitudes is a fully three-dimensional beast. That is just one reason the hydrostatic approximation is nonsense. And in fact, if you look at hydrostatic models, when there is overturning due to the unphysical forcing, the hydrostatic balance must arbitrarily be reset. Not very scientific.
Jerry
Lucia (#531),
Check the manuscript by Heinz and me in Math. Comp. for convergent solutions in 2D. There are similar manuscripts by Henshaw et al.
available (see Henshaw’s website).
Jerry
542, in other words, you have to be able to walk before you can run. I see that.
Pat (#532),
As discussed above, a model with very crude resolution cannot describe physical reality, and the parameterizations therefore try to describe the impact of these small-scale features over very large grid boxes (in your case 800 km by 1000 km in the horizontal). I believe that is one reason that orgs like NCAR are willing to provide model code to anyone. They know darn well that the models cannot be run at the resolutions they run them without the use of a supercomputer, and also many times not without the assistance of people in the organization (which results in collaborative pubs and citations).
That is pathetic given the poor quality of the science.
Jerry
@Bender in 534.
Ok… that made me laugh. I haven’t been back to unthreaded, but I’m fine with Gerry using 2D over here, and talking about 3D over there.
@D. Patterson 537
Global Climate Change
provokes violent debates:
animosity.
Gerry @543: I'm not planning to run models. I'm just agreeing that the exercise you suggested earlier sounds like a good idea. The reason I suggest doing both runs that work and runs that blow up is for completeness in a presentation. If some have already been done, all the better!
I’m mostly teaching myself R, and documenting at what will soon become the most boring blog on the planet. The documentation is to collect what I’ve learned so I can find it later myself. Blogs are actually good for that. (Plus, if I end up with questions, putting stuff up there can be useful as a communication tool.)
Anyone who wishes to see how mind-numbingly boring a blog can be can visit http://rankexploits.com/musings/ .
However, you are required to forgive any and all typos, lack of precision, etc.!
>> one that sleeps, over in his cage
He that chooses to design and build something, rather than waste time insulting people who are either unable or unwilling to formulate precise concepts, who continuously confuse each other with meaningless details in a dead-end line of thought, totally devoid of integrating abstract concepts.
People can operate either
All,
On a different thread I mentioned that NCAR has now stated that there are problems with their limited area model (WRF) and that they are now developing a global version of the model. At the time I could not find the article (it disappeared very quickly), but I have since found it. On the NCAR website, look under the NCAR tab under recent articles; it is the first article.
Now the amusing thing about all of this is that NCAR tried to develop a limited area model in the 70’s, and that resulted in the manuscript by Oliger and Sundstrom indicating the ill posedness of the IBVP for the hydrostatic system. NCAR developed hemispheric and global versions of models, but then Anthes brought the limited area hydrostatic MM4 model to NCAR and hundreds of manuscripts were published using the questionable model.
Eventually (under considerable pressure) the model was changed to a nonhydrostatic model (WRF), and now that model has finally been shown to exhibit all of the problems discussed on this thread (but denied by Jimy Dudhia). So now they are going to develop a nonhydrostatic global version, but have not mentioned the continuum problems with the nonhydrostatic continuum system near jets, even though those problems have shown up in their own models. This is the exact problem with models not based on good mathematics and numerics. Onward and backward. :)
Jerry
Larry (#544),
Exactly. The amount of computer power needed to solve for convergent numerical solutions in 2D for high (realistic) Reynolds numbers is staggering.
There, 2D Fourier transforms were used for the doubly periodic case with the pseudo-spectral numerical method. The satisfying thing is that the solutions actually converged when the model resolved the number of spatial waves indicated in the smallest scale estimates. The use of additional resolution produced the same solution, i.e. the solutions were not chaotic!
Quite a surprise to a number of people.
Jerry
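The convergence test Jerry describes (double the resolution and check that the solution does not change) can be illustrated with a toy pseudo-spectral solver. This is only a sketch I have written for illustration, not the Math. Comp. code: it integrates the 2D vorticity equation on a doubly periodic box, and as the test case it uses the Taylor-Green vortex, a known exact solution of the 2D Navier-Stokes equations whose nonlinear term vanishes, so the computed field can be checked against an analytic answer at both resolutions.

```python
import numpy as np

def solve(N, nu=0.1, T=1.0, dt=1e-3):
    """Pseudo-spectral 2D vorticity equation on the doubly periodic box [0, 2*pi)^2."""
    x = 2*np.pi*np.arange(N)/N
    X, Y = np.meshgrid(x, x, indexing="ij")
    k = np.fft.fftfreq(N, 1.0/N)                      # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    K2 = KX**2 + KY**2
    K2n = np.where(K2 == 0.0, 1.0, K2)                # avoid divide-by-zero at k = 0
    mask = (np.abs(KX) < N/3) & (np.abs(KY) < N/3)    # 2/3-rule dealiasing

    def rhs(wh):
        psih = -wh/K2n                                # solve lap(psi) = w (zero mode dropped)
        u  = np.real(np.fft.ifft2(-1j*KY*psih))       # u = -psi_y
        v  = np.real(np.fft.ifft2( 1j*KX*psih))       # v =  psi_x
        wx = np.real(np.fft.ifft2(1j*KX*wh))
        wy = np.real(np.fft.ifft2(1j*KY*wh))
        nonl = np.fft.fft2(u*wx + v*wy)*mask          # advection term, dealiased
        return -nonl - nu*K2*wh                       # w_t = -(u.grad)w + nu*lap(w)

    wh = np.fft.fft2(-2.0*np.cos(X)*np.cos(Y))        # Taylor-Green vorticity
    for _ in range(round(T/dt)):                      # classical RK4 in time
        k1 = rhs(wh)
        k2 = rhs(wh + 0.5*dt*k1)
        k3 = rhs(wh + 0.5*dt*k2)
        k4 = rhs(wh + dt*k3)
        wh = wh + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
    return X, Y, np.real(np.fft.ifft2(wh))

# Doubling the resolution must reproduce the same (here, exact) solution.
for N in (32, 64):
    X, Y, w = solve(N)
    exact = -2.0*np.cos(X)*np.cos(Y)*np.exp(-2*0.1*1.0)
    print(N, np.max(np.abs(w - exact)))               # errors at round-off level
```

For a genuinely turbulent initial condition the analytic comparison is unavailable, and the convergence check is instead the one in the text: run at successively doubled resolutions until the solution stops changing, which the smallest scale estimates predict will happen once the minimal scales are resolved.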
Whose turn is it? I did it last week.
i.e. the solutions were not chaotic! Quite a surprise to a number of people.
Though not others. After all, it’s not as though engineers who work in areas where transport phenomena are important have embraced chaos as a way to truly understand or quantify phenomena of interest.
Though you will find it mentioned by a few who work in some highly specialized areas, we prefer to let those who insist on repeating “chaos, chaos, chaos” occupy remote corners at cocktail parties, where they wonder why no one takes their intonations seriously.
Re #547 So, Ayn Rand was not fond of self-doubt? She must have been a very, very good scientist. Please tell me some more about the philosophy of science. (In “unthreaded”, please.)
There’s a new thread for this discussion.
One Trackback
[…] at Climate Science (the site appears to be down at this time, I’ll add a link later) and at Climate Audit. Viscous dissipation has also been the subject of this post. To date I have gotten very little […]