Gavin vs Kaufmann

I posted a few days ago on Kaufmann and Stern [2005] on GCMs. Kaufmann subsequently posted at realclimate here about this, drawing a detailed reply from Gavin. The exchange is interesting on a number of levels – it raises a worthwhile statistical point. You will also notice how quickly Gavin tries to move what was becoming a "serious" discussion offline, perhaps so that the hoi polloi aren’t involved.

Kaufmann wrote in as follows:

I would like to pick up on a comment made by per (#58) about testing GCM’s against real-world data. As an outsider to the GCM community, I did such an analysis by testing whether the exogenous inputs to GCM (radiative forcing of greenhouse gases and anthropogenic sulfur emissions) have explanatory power about observed temperature relative to the temperature forecast generated by the GCM. In summary, I found that the data used to simulate the model have information about observed temperature beyond the temperature data generated by the GCM. This implies that the GCM’s tested do not incorporate all of the explanatory power in the radiative forcing data in the temperature forecast. If you would like to see the paper, it is titled "A statistical evaluation of GCM’s: Modeling the temporal relation between radiative forcing and global surface temperature" and is available from my website
http://www.bu.edu/cees/people/faculty/kaufmann/index.html

Needless to say, this paper was not received well by some GCM modelers. The paper would usually get two good reviews and one review that wanted more changes. Together with my co-author, we made the requested changes (including adding an "errors-in-variables" approach). The back and forth was so time consuming that in the most recent review, one reviewer now argues that we have to analyze the newest set of GCM runs – the runs from 2001 are too old. The reviewer did not state what the "current generation" of GCM forecasts are! Nor would the editor really push the reviewer to clarify which GCM experiments would satisfy him/her. I therefore ask readers: what are the most recent set of GCM runs that simulate global temperature based on the historical change in radiative forcing, and where could I obtain these data?

Gavin’s response:

Many of the runs have many more forcings than you considered in your paper which definitely improve the match to the obs. However, I am a little puzzled by one aspect of your work – you state correctly that the realisation of the weather ‘noise’ in the simulations means that the output from any one GCM run will not match the data as well as a statistical model based purely on the forcings (at least for the global mean temperature). This makes a lot of sense and seems to be equivalent to the well-known result that the ensemble mean of the simulations is a better predictor than any individual simulation (specifically because it averages over the non-forced noise). I think this is well accepted in the GCM community at least for the global mean SAT. That is why simple EBMs (such as Crowley (2000)) do as good a job for this as GCMs. The resistance to your work probably stems from a feeling that you are extrapolating that conclusion to all other metrics, which doesn’t follow at all. As I’ve said in other threads, the ‘cutting-edge’ for GCM evaluation is at the regional scale and for other fields such as precipitation; the global mean SAT is pretty much a ‘done deal’ – it reflects the global mean forcings (as you show). I’d be happy to discuss this some more, so email me if you are interested [my bold].

First, in terms of Kaufmann’s problems with GCM reviewers: it seems to me that he is perfectly entitled to comment on the GCM models used in IPCC TAR, which have been published, documented to some extent and used in policy. If global mean SAT is a "done deal", as Gavin says, then these earlier models should stand up to Kaufmann’s evaluation. I think that he should stick to his guns and not get drawn into trying to sort out 14 new models.

Second, with respect to the newer models themselves, Gavin argues that, since they have more forcings, they will "improve the match to the obs" in the GCM. However, it’s pretty obvious that using more forcings will also improve the match in a simple linear model between the forcings and the global mean temperature. So Gavin’s first sentence proves nothing.
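
For concreteness, here is a minimal sketch of the kind of encompassing test Kaufmann describes – regress observed temperature on the GCM-simulated temperature alone, then add the forcings and ask whether they still carry significant explanatory power. Everything below (series, coefficients, noise levels) is made-up illustration, not Kaufmann and Stern’s actual data or econometric specification:

    import numpy as np

    # Toy annual series (placeholders only; not real forcing or temperature data).
    rng = np.random.default_rng(0)
    years = np.arange(1900, 2001)
    n = years.size
    ghg = 2.5 * ((years - 1900) / 100.0) ** 2      # toy GHG forcing ramp, W/m^2
    sulfate = -0.5 * (years - 1900) / 100.0        # toy aerosol forcing, W/m^2
    obs = 0.5 * (ghg + sulfate) + rng.normal(0, 0.10, n)   # "observed" temperature, K
    gcm = 0.5 * (ghg + sulfate) + rng.normal(0, 0.15, n)   # "GCM-simulated" temperature, K

    def rss(y, X):
        """OLS with intercept; return residual sum of squares and number of coefficients."""
        X1 = np.column_stack([np.ones(y.size), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return float(np.sum((y - X1 @ beta) ** 2)), X1.shape[1]

    rss_restricted, k_r = rss(obs, gcm)                             # obs ~ GCM output
    rss_full, k_f = rss(obs, np.column_stack([gcm, ghg, sulfate]))  # obs ~ GCM output + forcings

    # F-test: do the forcings add explanatory power beyond the GCM forecast?
    q = k_f - k_r
    F = ((rss_restricted - rss_full) / q) / (rss_full / (n - k_f))
    print(f"F = {F:.2f} on ({q}, {n - k_f}) degrees of freedom")

In a setup like this the test rejects, for exactly the reason Kaufmann gives: a single run carries weather noise that the forcings themselves do not.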

Third, look again at this extraordinary sentence of Gavin’s:

you state correctly that the realisation of the weather ‘noise’ in the simulations means that the output from any one GCM run will not match the data as well as a statistical model based purely on the forcings (at least for the global mean temperature).

Well, if this point of view is "correct", where have you ever seen it in print? Other than Kaufmann and Stern, has anyone ever seen a study comparing the results of a GCM to a simple linear model? Has Gavin ever said of his own GCM that, for estimating global temperature, more accurate results could have been obtained by a simple linear regression against the forcings? I grant that I haven’t read all of the GCM literature, and perhaps Gavin has said something like this, but I’d be extremely surprised. So Gavin’s condescending remark here is unjustified: Kaufmann and Stern’s point is interesting and not at all obvious. While "correctly" is correct, the observation is more than "correct"; it is also interesting and provocative.

Fourth, look at Gavin’s next sentence, which is no better:

This makes a lot of sense and seems to be equivalent to the well-known result that the ensemble mean of the simulations is a better predictor than any individual simulation (specifically because it averages over the non-forced noise).

Well, it isn’t an equivalent result at all. Indeed, I think that it only reinforces Kaufmann’s point. Kaufmann said that a simple linear model out-performed several prominent and high-powered GCMs that took days of supercomputer time to run. Gavin counters that an ensemble of GCMs will out-perform any individual GCM. But that is quite a different point. If the net output of an ensemble of (10?) GCMs for global temperature perhaps gets you back to the performance level of a simple linear model, then what is the purpose of going through the 10 GCMs if you’re concerned about global temperature? Gavin tries to fend this off by arguing that the community has "moved on" to regional issues, because the temperature issues are a "done deal". But the big issue is temperature – if the GCMs individually (and perhaps even collectively – we don’t know) cannot match a simple linear model, then why are they being touted as the way to study the problem and to generate null distributions – which is where we got into this via Cohn and Lins?
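
To see why the two claims are different, consider a toy Monte Carlo in which the "runs" are nothing but a common forced signal plus independent weather noise – a caricature, not a statement about real GCMs. The ensemble mean does beat any single run, but a plain regression of the observations on the forcing does at least as well, which is Kaufmann’s point:

    import numpy as np

    rng = np.random.default_rng(1)
    n_years, n_runs = 140, 10
    forcing = np.linspace(0.0, 2.5, n_years)                  # toy forcing ramp, W/m^2
    signal = 0.5 * forcing                                     # forced temperature response, K
    obs = signal + rng.normal(0, 0.12, n_years)                # "observations"
    runs = signal + rng.normal(0, 0.12, (n_runs, n_years))     # "GCM runs": same signal, independent noise

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    ensemble_mean = runs.mean(axis=0)

    # Simple statistical model: regress the observations directly on the forcing.
    X = np.column_stack([np.ones(n_years), forcing])
    beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
    linear_fit = X @ beta

    print("single run    RMSE vs obs:", round(rmse(runs[0], obs), 3))
    print("ensemble mean RMSE vs obs:", round(rmse(ensemble_mean, obs), 3))
    print("linear model  RMSE vs obs:", round(rmse(linear_fit, obs), 3))

In this caricature, averaging the runs only removes the models’ own weather noise; it cannot remove the noise in the observations, so the fitted linear model is the floor.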

Gavin argues that the resistance to Kaufmann and Stern comes from a feeling that they are trying to go a bridge further and extrapolate their conclusion to other metrics. Given that there is no such discussion of other metrics in their article, there is no basis for this surmise.

Gavin then closes off:

I’d be happy to discuss this some more, so email me if you are interested.

I find this the most offensive statement of all. Why not do this online? I once questioned realclimate’s commitment to their stated policy that "serious rebuttals and discussions are welcomed", after they devoted a post to criticizing Ross and me and then refused to post serious responses. In this case, they couldn’t get away with censoring Kaufmann, but it’s pretty clear that they didn’t want to have a "serious" discussion online. So Gavin asked Kaufmann to email him. I think that Kaufmann’s questions and points are good ones, and Gavin’s rush offline is all too characteristic.

29 Comments

  1. David Stockwell
    Posted Dec 23, 2005 at 11:26 AM | Permalink

    Steve, there was a big bake-off of all the GCMs, including simple linear models, about 10 years ago. I tried to find the report on the web – perhaps somebody has a link. In that comparison the simple models bested the GCMs.

    I am also surprised to hear that global SATs (surface temperatures?) are a ‘done deal’ when estimates of 2XCO2 sensitivity remain in the extraordinary range of at least 1.5 to 4.5 degrees. I got into climate science because it was absurd to try to predict the effects of climate on biodiversity with that level of uncertainty. I would appreciate it if they fixed that before talking about regional predictions, which must be even more uncertain.

  2. John A
    Posted Dec 23, 2005 at 12:00 PM | Permalink

    There are always ways to cut off debate and Gavin has gone through most of the obvious ones. “The community has moved on” is another of those weasel phrases designed to cut off informed criticism and be condescending at the same time.

  3. per
    Posted Dec 23, 2005 at 2:37 PM | Permalink

    I think you are just jealous !
    One of these days, if you work hard enough, and produce enough high quality research that is published in the best journals, you too might get invited into that exclusive club that is invited to send a personal email to gavin 🙂
    cheers
    per

  4. Paul Linsay
    Posted Dec 23, 2005 at 2:38 PM | Permalink

    I read this at RealClimate yesterday and burst out laughing.

    This makes a lot of sense and seems to be equivalent to the well-known result that the ensemble mean of the simulations is a better predictor than any individual simulation (specifically because it averages over the non-forced noise).

    I interpret it to mean that none of the models is really capable of making a meaningful prediction. If the physics, chemistry, biology, and input conditions that they stick into a model are correct, it will make valid predictions. And once they know what those are, multiple different implementations will give the same results at all levels of detail.
    The right way to do this is for the different modeling groups to fight it out by challenging each others assumptions and inputs, not by hoping that there’s a kernel of truth in each which will remain after taking some ill-defined average.

    the global mean SAT is pretty much a ‘done deal’ – it reflects the global mean forcings

    If he’s talking about the famous “back-cast” to 1900 here, it’s exactly what Kaufmann and Stern are pointing out. That uses forcings like aerosols to make the temperature come out right for the entire 20th century [SAT = surface air temperature?]. The trouble is, as Lindzen pointed out in his testimony before the House of Lords, there is no aerosol data before 1967. But without it, the temperature wouldn’t come out right. In this case at least, the forcing carries the information about the temperature.
    (An unintended consequence of the “backcast” is to make clear that forecasts of any duration are impossible. They need external forcings, unknown and unknowable in 1900, to make the answer come out right.)
    And just what does “non-forced noise” mean?

  5. The Knowing One
    Posted Dec 23, 2005 at 3:56 PM | Permalink

    Re #1, by David Stockwell, you are perhaps referring to AMIP (Atmospheric Model Intercomparison Project). This was for atmospheric GCMs only, not coupled (atmosphere-ocean) GCMs. It is thus too dated to be highly relevant here.

    What was nonetheless interesting about AMIP was that several GCMs were found to give near-identical outputs even though the underlying physics in them was often quite different. Obviously what was happening was that the various parameters in the GCMs had been fine-tuned to give outputs that matched observations.
    This implies that comparison of GCM outputs with observations is not really the test that it is usually claimed to be.

  6. Willis Eschenbach
    Posted Dec 23, 2005 at 6:55 PM | Permalink

    Re #5, you say “This implies that comparison of GCM outputs with observations is not really the test that it is usually claimed to be.”

    This is very true if the comparison is only to simple global average temperature rise over time, the famous “trend”.

    A more meaningful comparison with observations is to compare such things as the range, average, skewness, kurtosis, and normality of the resulting model temperature datasets. Before we ask if the models give us reasonable results, we need to see if their results are even lifelike.

    In fact, by and large they are not lifelike. Individual models forecast temperature swings that are too large, or too small, or much faster or slower than observations. Some of them predict temperature swings of a size and speed that have never occurred in recorded history … and yet, because their final average rise is within predetermined (warming, of course) limits, they are accepted as valid models. Valid models? A model that forecasts physically impossible temperature swings is a valid model?
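
    For illustration only, here is a minimal sketch of this kind of "lifelike" check – standard deviation, skewness, kurtosis, inter-quartile range and a normality test – applied to two placeholder monthly series. The arrays are random stand-ins, not data from any actual model or observational set:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        obs_anom = rng.normal(0.0, 0.15, 1200)                # placeholder: 100 years of monthly observed anomalies
        model_anom = rng.standard_t(df=5, size=1200) * 0.20   # placeholder: a fat-tailed "model" series

        def summarize(name, x):
            stat, p = stats.normaltest(x)          # D'Agostino-Pearson normality test
            iqr = np.percentile(x, 75) - np.percentile(x, 25)
            print(f"{name}: std={np.std(x):.3f} skew={stats.skew(x):.3f} "
                  f"kurtosis={stats.kurtosis(x):.3f} IQR={iqr:.3f} normality p={p:.4f}")

        summarize("obs  ", obs_anom)
        summarize("model", model_anom)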

    With few exceptions, the whole field of climate modeling is a disgrace. The researchers are giving each other a free pass, because they know their own models may not be any better than their neighbours’, so nobody wants to dig out the truth.

    Finally, the assertion that the average of the GCMs is a better predictor than any given GCM is a) not proven in practice and b) wrong in theory. This is because one GCM is going to be the best predictor of the bunch. Adding more GCMs after that is more likely to degrade than to improve the results. The GCM community, however, has no interest in finding out which GCM that might be, because their GCM might come in last. So they all agree to average them … bad science, no cookies.

    I did a study of the GCMs used in the recent Santer et al. Science paper, for example. During the period when the GCMs were most in agreement with each other, they were in very bad disagreement with the observations. Their agreement, in other words, meant nothing.

    w.

  7. Andy L
    Posted Dec 23, 2005 at 10:50 PM | Permalink

    I think Gavin’s point is that GCM’s incorporate a certain amount of non-determinism (thus #4’s “same results at all levels of detail” is not really appropriate). Since GCM’s are based on the idea of predicting the movement and behavior of blocks of air, they exhibit various forms of sensitivity to initial conditions. What Gavin is claiming is that although they will predict weather (in his terms, the specific state of the temperature/pressure field), they will diverge in that regard rapidly, while still maintaining certain global invariants (in Gavin’s terms, the climate — the statistical properties of the temperature/pressure field). Parenthetically, what I know I know mainly from reading the posts on realclimate, so I do not have any literature to back me up, but this is the impression I’ve gotten from Gavin’s comments.

    As a result of the nondeterminism, many runs of the same model will produce results that locally look very different, but globally will contain similar information. The obvious analogy is various computer simulations of turbulence, which show a similar characteristic — by looking at the average of several simulation runs, you can arrive at interesting conclusions even though the individual runs may not be directly predictive. I believe the computation of the frequency of shed vortices is the classical example of this type of modelling that has shown excellent agreement with laboratory results.

  8. Willis Eschenbach
    Posted Dec 23, 2005 at 11:06 PM | Permalink

    Re 7: Thanks for the comment, Andy. You say “I believe the computation of the frequency of shed vortices is the classical example of this type of modelling that has shown excellent agreement with laboratory results.”

    Unfortunately, the GCMs do not show such agreement with observational results. Individual models differ from observational results in a large variety of measurements, such as standard deviation, skewness, kurtosis, and normality.

    The difference is that the mathematics of shed vortices are well understood, to the point where we depend on them to design winglets on modern airplanes. Would you want a model as crude as the current spate of GCMs to be designing airplane winglets? …

    I wouldn’t fly in one.

    w.

  9. Andy L
    Posted Dec 24, 2005 at 1:25 AM | Permalink

    re #8: I would be interested in seeing references indicating that the GCM’s (in ensemble) do not correctly predict higher-order statistics of the global climate system. Since I wrote my post, I have looked around at what the IPCC has to say about ensembles, and I could only find references to means (that is, the ensemble mean of the predicted mean surface temperature); there were references to standard errors, but in the context of the standard error of the predicted mean surface temperatures, rather than the ensemble mean of the predicted standard deviation in surface temperatures.

    I gather from your posts that you have done some independent investigation here; I would be interested (as undoubtedly would others here) in seeing some of the raw data & processing that have led you to these conclusions.

    Also, you make the point in #6 that

    This is because one GCM is going to be the best predictor of the bunch. Adding more GCMs after that is more likely to degrade than to improve the results

    I’m not sure I agree with this; as a naive example, if you have a number of runs of a simulation of 100 tosses of a fair coin, it is not necessarily true that the run that produces the number most similar to the experimental result is the best of the bunch.

  10. Willis Eschenbach
    Posted Dec 24, 2005 at 3:49 AM | Permalink

    Re 9, Andy, your points are good. A report of my findings is available for download (I hope) at

    http://homepage.mac.com/williseschenbach/

    My term “best predictor of the bunch” is poorly chosen. My point about ensembles was that some GCMs give results that are lifelike, and some don’t. By and large they don’t, so averaging them together is unlikely to give a more lifelike result than given by one of the (very few) lifelike models.

    w.

  11. The Knowing One
    Posted Dec 24, 2005 at 1:32 PM | Permalink

    #6, by Willis Eschenbach, argues against the point made at the end of #5. The argument makes it clear, though, that the point was not understood. Please read more carefully.

  12. Willis Eschenbach
    Posted Dec 24, 2005 at 2:21 PM | Permalink

    Re 11: “Knowing One”, you say that I “argue against the point made at the end of #5.”, and advise that I “Please read more carefully.”

    I assume the point in question is your statements that:

    What is nonetheless interesting about AMIP was that several GCMs were found to give near-identical outputs even though the underlying physics in them was often quite different. Obviously what was happening was that the various parameters in the GCMs had been fine-tuned to give outputs that matched observations.

    This implies that comparison of GCM outputs with observations is not really the test that it is usually claimed to be.

    It all depends on which measures of the observational data you are comparing to model outputs. Usually, this is the trend in the temperatures. As you point out, the models are "tuned" to this trend, so this is definitely not the test that it is usually claimed to be. I did not argue against that point at all; it is very valid.

    My point was simply that, while these GCM outputs may match the trend, they usually do not match a number of the much more important measures of the observations, such as the standard deviation, skew, kurtosis, normality, inter-quartile range, and spread of outliers of the observations. This means that the models are not at all lifelike. In fact, models often predict such things as monthly temperature swings that have never occurred in the historical records. Should those models be trusted, even if they give reasonably accurate overall trends?

    Please read more carefully, oh Knowing One …

    w.

  13. Posted Dec 25, 2005 at 4:35 AM | Permalink

    Re #5:

    AMIP has been given a new life as AMIP-2, for atmospheric models only. It compared the results of 20 climate models for a first-order forcing: the distribution of the amount of the sun’s energy reaching the top of the atmosphere (TOA), dependent on latitude and longitude, in the period 1985-1988. The recently published results can be found at Raschke et al.

    Abstract:

    Monthly averages of solar radiation reaching the Top of the Atmosphere (TOA) as simulated by 20 General Circulation Models (GCMs) during the period 1985–1988 are compared. They were part of submissions to AMIP-2 (Atmospheric Model Intercomparison Project). Monthly averages of ISCCP-FD (International Satellite Cloud Climatology Project — Flux Data) are considered as reference. Considerable discrepancies are found: Most models reproduce the prescribed Total Solar Irradiance (TSI) value within ±0.7 Wm−2. Monthly zonal averages disagree between ±2 to ±7 Wm−2, depending on latitude and season. The largest model diversity occurs near polar regions. Some models display a zonally symmetric insolation, while others and ISCCP show longitudinal deviations of the order of ±1 Wm−2. With such differences in meridional gradients impacts in multi-annual simulations cannot be excluded. Sensitivity studies are recommended.

    The full article (unfortunately under subscription) mentions in its conclusions:

    We recommend that in all climate models and in all “radiation climatologies” the incoming solar radiation at TOA must be identical for any given time period and area on the globe. Modelers should use the real length of the tropical year. Since a similar analysis of IPCC AR4 simulations shows qualitatively the same deficiencies as described here for the AMIP simulations, we think, that there is a need for sensitivity tests that investigate impacts of detected differences in the TOA insolation on circulation structures developing in the model’s climate system.

    ———-

    To be noted is that the errors are an order of magnitude larger than the change in radiative forcing from GHGs over the same time frame…

  14. andre bijkerk
    Posted Dec 25, 2005 at 2:22 PM | Permalink

    I wonder how those models would predict the length of the Mississippi:

    “In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. This is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upward of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

    Mark Twain

  15. Paul Linsay
    Posted Dec 25, 2005 at 4:01 PM | Permalink

    #13

    To be noted is that the errors are an order of magnitude larger than the change in radiation by GHGs in the same time frame…

    But you don’t understand! As Gavin said over on RealClimate it’s well known that the average of the models gives a much better result than any single model.

  16. Posted Dec 25, 2005 at 8:07 PM | Permalink
  17. Posted Jan 2, 2006 at 10:40 AM | Permalink

    I don’t know much about GCM’s myself, but I am a physicist with a little experience with (very simple) computer modeling. Once I found a very simple algorithm to compute the propagation of light in certain types of waveguides. I could get trustworthy results in a small fraction of the time that standard computer programs (so-called "beam propagation methods") would take. These programs would solve the exact propagation equations and were very time consuming, not to mention that you could get wrong results from a poor choice of grid and boundary conditions. When I tried to publish my results, I had a really hard time convincing the reviewers that a simple algorithm could be as good as a very complex one, in specific situations. This despite the fact that our simulation results were so faithful to our experimental results that a reviewer thought we had mixed up our curves. The people who make and use these complex simulation programs tend to fall in love with them. Yet they become so complex that I doubt anyone really understands what they’re doing.

    In the case of GCM’s (and again I emphasize that I don’t know much about them), it seems to me that making them "work" by fitting them to past results, and then using them to predict the future, given that there IS an unknown natural, probably chaotic, variability, is walking on very, very thin ice. Of course you can do ensembles and sensitivity analysis, but what that tends to do is just hide the fact that the results are meaningless in the end.

    In the field of long term climate predictions, it seems to me that we just don’t have enough comprehensive and accurate past data to fine-tune a model and claim it will give accurate predictions for the next 100 years. But I guess all GCM researchers are well aware of that. Still, it must be a fun thing to do, and you can get a lot of funding for it. It makes pretty pictures that look very realistic and frightening. As time goes by and we realize that they don’t really predict anything (I mean 1.5 to 5 degrees C, what sort of prediction is that!), we might move on to other, cooler stuff. But maybe not, because the next model will always be better than the previous one (think Windows versions…)

    Francois

  18. ET SidViscous
    Posted Jan 2, 2006 at 12:02 PM | Permalink

    Francois

    Interestingly I’ve been trying to find the time (more like the energy since I’ve been on vacation for 3 weeks) to do just what you say, use a simple algorithm to predict “Global Temperature” using solar output as the input. This came about when I took these three graphs

    Temperature anomaly 1860-2000: http://tinyurl.com/b5vlv Bottom left graph (cleaner)

    Solar Output 1600-2000: http://tinyurl.com/dzcdb

    and CO2 Concentrations 1860-2000: http://tinyurl.com/93zu7

    then what I wanted to do was to make them all comparable. So I simply did screen captures, limited the images to the 1860-2000 time period so that they are all at the same time scale, then sized them to the same pixel width (not perfect surely, but I don’t think I’m off by any more than 5 pixels total)

    What we then get is the following rough images.

    http://tinyurl.com/97trr

    Now, while there is no obvious correlation between the top graph (CO2 concentrations) and the middle graph (temperature anomalies), with enough manipulation and ignoring certain diversions (1930-1950 and such) you could "find" an underlying trend. This has been done ad nauseam.

    However, on a simple visual comparison between the middle graph (temperature anomalies) and the lower graph (solar output), the correlation is pretty damn obvious. To first order anyway, they seem to track, with global temperatures trailing by 10-20 years.

    In fact, from perusing the graphs it seems to me that CO2 concentrations are driving the atmospheric response to solar output (how quickly the atmosphere warms), while the magnitude of warming seems to be completely controlled by solar output.

    As you say, I’m willing to bet that a simple algorithm with solar output as the driver and CO2 concentration as an input for the response rate would give better predictive ability (for 10-20 years in advance, anyway) than any GCM.
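
    As a sketch of what such a fit might look like (the series below are placeholders – the digitized solar, CO2 and temperature curves from the graphs above would be substituted – and the form of the model is just one guess):

        import numpy as np

        years = np.arange(1860, 2001)
        rng = np.random.default_rng(3)
        solar = np.sin(2 * np.pi * (years - 1860) / 11.0) + 0.003 * (years - 1860)  # stand-in solar index
        co2 = 285.0 + 0.25 * (years - 1860)                                          # stand-in CO2, ppm
        temp = rng.normal(0.0, 0.1, years.size)                                      # stand-in anomalies, K

        def fit(lag):
            """Fit temp(t) ~ a + b*solar(t - lag) + c*CO2(t); return (mean squared error, coefficients)."""
            y = temp[lag:]
            X = np.column_stack([np.ones(y.size), solar[:years.size - lag], co2[lag:]])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return float(np.mean((y - X @ beta) ** 2)), beta

        best_lag = min(range(31), key=lambda lag: fit(lag)[0])   # try solar leads of 0-30 years
        print("best-fitting solar lead (years):", best_lag, "coefficients:", fit(best_lag)[1])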

  19. Paul Linsay
    Posted Jan 2, 2006 at 12:35 PM | Permalink

    #18. Look at Figure 3 of http://www.oism.org/pproject/s33p36.htm. You can’t get a simpler model than that. It’s dead on and just uses the length of the solar cycle. If you can predict the solar cycle you can predict the weather.

  20. ET SidViscous
    Posted Jan 2, 2006 at 12:48 PM | Permalink

    Some deviations, and temperature seeming to outpace solar output (in both directions) and then capping early. But yes, extremely consistent.

    Besides a simple algorithm, I wonder what happens to global temperature anomaly trends when you subtract out solar forcing. Methinks it’s going to have a tendency to flat-line them, while a small (minute) trend might be left over afterwards, certainly nothing to get excited about.

  21. Posted Jan 2, 2006 at 2:59 PM | Permalink

    Paul, Erik,

    I wish it were all that simple! The solar hypothesis is appealing, but I’ve also seen criticisms of it. Quite frankly, I don’t know enough to form my own opinion of it. There is a simple physical basis, but do the numbers add up? Apart from Soon and Baliunas, are there other references?

    Francois O.

    (btw, and completely off-topic, Erik, I used to work with Roctest. small world…)

  22. ET SidViscous
    Posted Jan 2, 2006 at 3:25 PM | Permalink

    Francois

    Of course it ain’t that simple; there are various other factors that affect it, and it is a complex coupled chaotic system. But if the primary mover is solar (which makes a whole helluva lot of sense) and CO2 is a minor influence, then all of this hoopla can take backstage where it belongs. If a simple algorithm can take into account solar output and CO2 concentrations and accurately predict global temperature (let’s say within 0.5 degrees), then we can insert various numbers for CO2 (doubling) and determine its effect on global climate. But even with that, being the chaotic system that it is, you will never be able to truly predict it very accurately. And until we get some sunspot data during an ice age it will be hard to determine if the algorithm can predict that.

    OT: funny, though being in Montreal it is somewhat to be expected. When was it? I spend a fair amount of time in Montreal because of it. Stop by Bishop St. some time. I perform a magic trick on pints of Guinness there.

    On another odd coincidence, I used to work in optics 🙂 and if you’re doing work in that area it’s likely there are other coincidences.

    SM: I edited this a little since it was spliced.

  23. Posted Jan 3, 2006 at 8:05 AM | Permalink

    Erik,

    Certainly a point can be made that GCM’s can be useful for learning about regional characteristics of climate change, e.g. will temperature change more in the Arctic than in the tropics, and by how much. But on the other hand, if you want to model the mean earth temperature, building a complex model that gives you detailed temperatures everywhere, just to average them out afterwards, may not be the most efficient way. A properly parametrized "energy balance" model could give you results that are as accurate or more so. The trick is to identify the main mechanisms and model their behavior. Solar activity is an obvious one, because we get most of our heat from the sun. Can we identify a proper "amplification" mechanism for it? The greenhouse effect should also be taken into account. Details such as the effect of cloud cover can be included in cross-products. That’s the main difficulty of such models: if the various effects are not independent from each other, you end up with highly nonlinear equations that can result in chaotic behavior. As you may know, it really doesn’t take much nonlinearity to give chaos. The problem with chaos is that if you want to validate your model with past data (the only way it can be done with climate), you need to know the initial conditions with infinite accuracy to account for future evolution. On the other hand, over short time frames (a century is quite short), and for small enough perturbations (both solar changes and GHG emissions vary relatively little), you can linearize the equations and possibly get meaningful results.

    BTW, I can’t understand the thing about "doubling" CO2. Doubling is a huge perturbation. CO2 has NOT doubled yet; it has only changed by about 30% over 100 years, as far as I know. I just can’t see how you can use models based on such a slow increase, apply a doubling of one parameter, and be confident that the answer will be right!

    Does anyone know of such simplified models? The advantage of a simple model is that simulations are much easier to carry out. If the model is adequate, in my opinion, the chances of getting meaningful results are higher than with a very complex model.
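
    As an illustration of the kind of zero-dimensional energy-balance model described above (all parameter values here are round, assumed numbers – a mixed-layer-like heat capacity and a feedback of 1.25 W/m²/K – not tuned estimates):

        import numpy as np

        # C * dT/dt = F(t) - lambda * T   (zero-dimensional energy balance)
        heat_capacity = 8.0    # W yr m^-2 K^-1, assumed effective (mixed-layer) heat capacity
        feedback = 1.25        # W m^-2 K^-1, assumed; ~3 K equilibrium per 3.7 W m^-2 of forcing
        dt = 1.0               # time step, years

        years = np.arange(1900, 2101)
        forcing = np.where(years < 2000, 2.5 * (years - 1900) / 100.0, 2.5)  # toy ramp then hold, W m^-2

        T = np.zeros(years.size)
        for i in range(1, years.size):
            T[i] = T[i - 1] + dt * (forcing[i - 1] - feedback * T[i - 1]) / heat_capacity

        print(f"warming at 2000: {T[years == 2000][0]:.2f} K; at 2100: {T[-1]:.2f} K; "
              f"equilibrium for 2.5 W/m^2: {2.5 / feedback:.2f} K")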

    (OT: my thing with Roctest is a long and somewhat extraordinary story that dates back a few years. Most of the people I dealt with are now gone. I still live in Montreal. Yes, I worked in "optics", but am now trying to reinvent myself, since the field doesn’t seem to want me anymore. I have a lot of "free" time (not so free ’cause I ain’t got no pay!), of which I spend an inordinate amount reading these blogs!)

    Francois

  24. ET SidViscous
    Posted Jan 3, 2006 at 10:43 AM | Permalink

    Thanks Steve

    Er, ah, Francois, we’re on the same page here. I hope you don’t think I’m promoting GCM’s as the way to go. I’m saying that I think a simple algorithm might be more powerful (in response to your earlier discussion where you created one that was).

    The "doubling" of CO2 is certainly not my idea, nor do I think it will happen in the short term (relatively speaking here, meaning by 2100 AD, not PM). It was put forth by the IPCC and is contested by the skeptics. I bring it up because if we can get a truly predictive tool that can reproduce past data, we can use it to show better what would happen with a doubling of CO2, which the warmers propose will happen by the end of the century.

    OT You’d be surprised, I’m sure there are some still there.

  25. Hans Erren
    Posted Jan 3, 2006 at 10:52 AM | Permalink

    re 23, 24

    Ever since Arrhenius (1896), a doubling of CO2 has been the benchmark for climate sensitivity; as the relationship is logarithmic, you can calculate the in-between values using Myhre’s equation.
    http://en.wikipedia.org/wiki/Svante_Arrhenius#Greenhouse_effect_as_cause_for_ice_ages
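
    For reference, the simplified expression from Myhre et al. (1998) is ΔF = 5.35 ln(C/C0) W/m²; a minimal sketch of using it (the sensitivity factor at the end is an assumed, illustrative value, not a result):

        import math

        def co2_forcing(c_ppm, c0_ppm=280.0):
            """Simplified CO2 radiative forcing, Myhre et al. (1998): dF = 5.35 * ln(C/C0), in W/m^2."""
            return 5.35 * math.log(c_ppm / c0_ppm)

        print("2 x CO2 (280 -> 560 ppm):", round(co2_forcing(560.0), 2), "W/m^2")   # about 3.7 W/m^2
        print("280 -> 370 ppm          :", round(co2_forcing(370.0), 2), "W/m^2")

        # With an assumed (illustrative) sensitivity of 0.75 K per W/m^2, the in-between
        # warming scales with the logarithm of concentration rather than linearly.
        print("implied warming, 280 -> 370 ppm:", round(0.75 * co2_forcing(370.0), 2), "K")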

  26. Posted Jan 3, 2006 at 11:27 AM | Permalink

    Hans,

    Thanks for the reference. I should always check there first. Surprising that Arrhenius got the same estimate from a back-of-the-envelope calculation 100 years ago as the IPCC with billions of $ of R&D!

    I am obviously new to the field, so please excuse my naivete. I have a PhD in physics, and I’ve done a lot of science myself (56 peer-reviewed papers), so I guess I can get a grip on some fairly technical issues (e.g. I understand what a logarithm is…). I am a sceptic only in the sense that I think challenging the "mainstream" theory is the best way to make good science (which is why I prefer this blog to RealClimate). If you only "go with the flow", you just look in one direction, and end up seeing just what you expected to see. My best science was done when I was attentive enough to find those little anomalies in experimental results that, when you dig a little, end up revealing something much more interesting. There’s nothing more boring than performing an experiment expecting a result, and getting just that result. Yet 90% of what’s published is exactly that.

    I think with climate research, if you are looking for warming, you will always find warming. If you want to relate it to GHGs, you will always find a way. If you plug into a GCM a number for CO2 sensitivity that you got from past data that ASSUME a CO2 sensitivity, you will get the warming you want, but it’s circular reasoning. If the past trend was due to another, unknown reason, you’ll never know about it because you’re not looking for it.

    Francois

  27. The Knowing One
    Posted Jan 4, 2006 at 11:23 PM | Permalink

    Re #12, by Willis Eschenbach: you still have not understood what I said. First, I said nothing about the forms of the comparison. Second, even if the forms included your recommendations, the problem of parameter-tuning (of a physically incorrect model) would still remain – albeit being less probable. There is a huge number of parameters (degrees of freedom) in GCMs.

  28. Skiphil
    Posted Jun 13, 2013 at 4:00 PM | Permalink

    On the broad topic of assembling many GCM runs to publish IPCC-style curves, physicist R. Brown of Duke U. has some forceful criticisms of the lack of either physical or statistical justification for accepting large ensembles of GCM outputs:

    No significant warming for 17 years 4 months

2 Trackbacks

  1. […] McIntyre wrote a long post on the affair here. [R]ealclimate’s commitment to their stated policy that “serious rebuttals and discussions […]

  2. By Willis on GISS Model E « Climate Audit on May 15, 2011 at 8:17 PM

    […] They had experienced extreme difficulty in trying to run the gauntlet protecting climate science doctrine. They turned up at realclimate here, but, rather than continuing this interesting but subversive discussion online, Schmidt asked they take the conversation offline. See CA discussions here here. […]