A realclimate Advisory

Warning – a serious discussion has unexpectedly broken out at realclimate. See comments 84 and 85 here. I don’t think that the IID team is faring very well.

56 Comments

  1. Paul Linsay
    Posted Jan 2, 2006 at 11:13 AM | Permalink

    Yes, an interesting discussion of IID. Held’s point, #85, brings up the question of why the global temperature should be stable; there is no a priori reason it shouldn’t fluctuate over any number of time scales. Rasmus demonstrates his ignorance when he brings up the chaotic behavior of climate and then wishes the fluctuations away by requiring external forcing. If it’s chaotic, it fluctuates. The power law spectrum from the data indicates that it fluctuates at all time scales. Models don’t trump data.

    This brings up a very interesting problem: what is the error on the global mean temperature deviation? This is an especially important problem since the signal is the fluctuation in the temperature deviation. I’ve never seen it on any of the plots. The calculation of the error has to be quite tricky, because unlike an experiment, one does not have multiple measurements of a single quantity with a known error distribution for the measurements. Instead, there is a large heterogeneous population of local temperature measurements, with a certain resolution error but, more importantly, unknown biases, that are combined to generate an “average” temperature.

    How does one assign an error? Is it one degree, since that’s the typical error on a temperature measurement? Is it one degree / sqrt(number of stations), as would be true under IID? My suspicion is that it’s closer to the first than the second.
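
    For illustration, a minimal Python sketch of why the answer probably lands between those two extremes (the station count, error sizes, and shared-bias structure are invented numbers, not anything taken from a real network): fully independent station errors shrink like 1/sqrt(N), while a bias common to the stations does not shrink at all.

        import numpy as np

        rng = np.random.default_rng(0)
        n_stations, n_trials = 1000, 5000   # hypothetical network size and number of repetitions

        # Case 1: independent ~1 degree station errors (the IID assumption)
        iid_means = rng.normal(0.0, 1.0, size=(n_trials, n_stations)).mean(axis=1)

        # Case 2: the same independent errors plus a 0.5 degree bias shared by every station
        shared_bias = rng.normal(0.0, 0.5, size=(n_trials, 1))
        biased_means = (rng.normal(0.0, 1.0, size=(n_trials, n_stations)) + shared_bias).mean(axis=1)

        print("spread of the network mean, IID errors only: %.3f" % iid_means.std())      # roughly 1/sqrt(1000)
        print("spread of the network mean, with shared bias: %.3f" % biased_means.std())  # roughly 0.5, does not average out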

  2. Dave Dardinger
    Posted Jan 2, 2006 at 12:19 PM | Permalink

    From the thread on RC:

    Rasmus: “I think that only such changes in forcing can produce changes in the global mean temperature, because energy has to be conserved.”

    Isaac Held: “The claim that only changes in forcing can produce changes in global mean temperature because of ‘conservation of energy’ is clearly not correct.”

    Rasmus’ statement was indeed clearly not correct. IMO, the simplest counter-example is a change in cloud cover. Increased cloud cover will reduce the amount of sunlight reaching the earth’s surface and thus, if it should continue, the global surface temperature. Of course, it could be argued that this increased cloud cover would be offset by reduced evaporation and reduced humidity at altitude, and that there would therefore be a negative feedback producing homeostasis at some fixed energy level. But of course if a member of the hockeyteam were to admit this, it would essentially saw off the limb on which their entire theory sits. Negative feedback via cloud cover changes would call into question the very basis for contending that increased CO2 in the atmosphere must result in increased surface temperatures.

    Of course cloud cover is changing all the time and it doesn’t have the decadal-to-century time scale Rasmus is holding to. But except for a few, skeptics all accept that CO2 increases produce a mild forcing. Where the objections to drastic AGW occur is in the assertion that this mild heating will result in further warming via a positive feedback mechanism. It just seems counterintuitive that in a system which must have many negative feedbacks working at various scales, none of them would be able to counteract the CO2 forcing without first making a large positive excursion.

  3. Frank H. Scammell
    Posted Jan 2, 2006 at 3:09 PM | Permalink

    40 years ago, I was programming and running very large simulations (yes, I’m retired and realize that my comments must be very heavily discounted or discarded). The major error source was integration roundoff or truncation. It was evidenced as a random walk that would eventually mutate into an exponential growth. To suppress it required running in double precision (i.e., 128 bit). I don’t think that running in the time domain or the frequency domain will make much difference. Just contemplate the number of integration steps required for a 100 year “prediction”. And just for the humor of it, note that all simulations that show negative feedback, even though there has to be negative feedback, are discarded. Also, what happens to all the volcanoes, El Ninos, etc. that make Dr. Hansen’s papers so interesting? If the negative feedbacks are subtracted from the positive ones, there’s not much left to cause the increasing deviation from the satellite data – maybe no dramatic temperature increase? Have we found a way to suppress these effects? If so, it should be a lot cheaper than ruining the US or world economies. How do we select the people who have to go in order to reduce the world population to a manageable, sustainable number?

  4. Steve Bloom
    Posted Jan 2, 2006 at 3:18 PM | Permalink

    Re #2: The fact that statements like “But of course if a member of the hockeyteam were to admit this, it would essentially saw off the limb on which their entire theory sits” are made with some frequency on this blog and are met with apparent approval should make it crystal clear why most climate scientists continue to want to have nothing to do with this site or anyone associated with it.
    Regarding climate science being counterintuitive, our limited direct exposure to data makes lots of things in science counterintuitive. Relativity, e.g., remains counterintuitive for most people, and even Einstein found some aspects of it to be so. Of course the climate record is riddled with sharp temperature excursions, the last big one being at the end of the last glaciation, with more recent smaller ones caused by volcanoes (e.g., the half-degree dip following Pinatubo). Are all of these “counterintuitive”? If not, what is intuitive about assuming that negative feedbacks will counter an unprecedented rapid pulse of CO2 so quickly as to obviate a significant positive temperature excursion?

    (Note: The comment box seems to be running under the right-hand column by about two characters, at least as seen on my 15″ monitor on IE. The preview is fine, though.)

  5. Dave Dardinger
    Posted Jan 2, 2006 at 4:25 PM | Permalink

    Steve Bloom: The fact that statements like “But of course if a member of the hockeyteam were to admit this, it would essentially saw off the limb on which their entire theory sits” are made with some frequency on this blog and are met with apparent approval should make it crystal clear why most climate scientists continue to want to have nothing to do with this site or anyone associated with it.

    Well, it may be true that pointing out their contradictions would keep warmers from wanting to have to defend themselves here, but that’s not my fault.

    what is intuitive about assuming that negative feedbacks will counter an unprecedented rapid pulse of CO2 so quickly as to obviate a significant positive temperature excursion?

    Well, simply the fact that the majority (in some scenarios the vast majority) of the projected heating is supposed to come from increased H2O in the atmosphere caused by the very modest increase in temperature from increased CO2. Given that we have had much higher temperatures at various times in the relatively recent past, why didn’t they result in still higher temperatures from increased H2O? The reason must be that fairly modest temperature increases call powerful negative feedbacks into play. Bear in mind that it’s not whole new IR spectral regions being opened up by CO2 absorption, but existing spectral lines being absorbed somewhat more strongly. Likewise with H2O. So the likelihood of the system running into negative short-term feedbacks like increased cloud cover, as well as longer-term ones like absorption of both heat and CO2 in the ocean, is high. IOW, we don’t get large runaways in temperature when things like forest fire soot or volcanic emissions are added to the atmosphere, so why would a smallish increase in temperature from CO2 do the trick?

  6. Douglas Hoyt
    Posted Jan 2, 2006 at 7:08 PM | Permalink

    One of the IPCC story lines is that the Medieval Warm Period was confined to the North Atlantic and Europe (although I do not think the evidence supports this viewpoint). Nonetheless, if you do accept it, they are conceding that red noise fluctuations exist at the regional level. Presumably they argue these fluctuations are internal oscillations in the climate system. If internal red noise fluctuations can occur at the regional level, it is not hard to believe they are occurring at the global level.

    Some physical processes for these fluctuations could be oscillations in the oceanic circulation, changes in land cover, random and persistent changes in ice and snow cover, or random fluctuations in cloud cover such as supported by recent measurements of global dimming and global brightening. There are reasons to think that large fluctuations in temperature occur on all time scales that may very well be chaotic and not included in the climate models. Certainly treating temperature changes as IID unless externally forced is a simplistic assumption.

  7. Hans Erren
    Posted Jan 2, 2006 at 7:35 PM | Permalink

    Google Akaike, the expert on noisy feedback systems.

    e.g.

    http://www-spc.igpp.ucla.edu/personnel/russell/ESS265/Ch9/autoreg/autoreg.html
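
    For anyone who wants to see the Akaike idea in action, here is a minimal Python sketch (numpy only; the simulated AR(2) series and the lag range are invented for illustration, not anything from the linked page) that selects an autoregressive order by minimizing the AIC:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        x = np.zeros(n)
        for t in range(2, n):                       # simulate an AR(2) process as test data
            x[t] = 0.6 * x[t - 1] + 0.2 * x[t - 2] + rng.normal()

        def aic_of_ar(x, p):
            """Fit AR(p) by least squares and return Akaike's information criterion."""
            y = x[p:]
            X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
            beta, rss = np.linalg.lstsq(X, y, rcond=None)[:2]
            n_eff = len(y)
            return n_eff * np.log(rss[0] / n_eff) + 2 * (p + 1)

        aics = {p: aic_of_ar(x, p) for p in range(1, 7)}
        print("AIC by order:", {p: round(a, 1) for p, a in aics.items()})
        print("selected order:", min(aics, key=aics.get))   # should usually pick p = 2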

  8. Armand MacMurray
    Posted Jan 3, 2006 at 12:22 AM | Permalink

    Re: #4
    Steve, you’ll find RC is certainly no better in terms of personalization and ad homs; since climate scientists seem to shun CA in favor of RC, your suggested cause lacks empirical support.

    That said, I certainly don’t believe that one wrong justifies another; however, since climate science directly affects political decisions that many see as affecting them directly, emotions come into play and sometimes people are not on their best behavior. I’ve found CA to have a high science/cheerleading ratio, which I think reflects well on Steve McIntyre. Prometheus also seems to have a high ratio, while RC seems to have decided on a different path (for example, compare the RC page on Cohn and Lins at http://www.realclimate.org/index.php?p=228 with Steve McIntyre’s discussion of the paper; also, the RC post at http://www.realclimate.org/index.php?p=210 seems like a giant ad hom, especially with RC-approved comments such as “The label I prefer is ‘denier’ as in Holocaust denial. It indicates someone who deliberately misinterprets or ignores the evidence.”).

  9. Ed Snack
    Posted Jan 3, 2006 at 3:05 PM | Permalink

    Perhaps Rasmus should get in on the ID debate, as he uses an argument logically identical to the one the ID proponents use, to wit, the argument from ignorance. Being unable to imagine how the autocorrelation occurs in the data, he then takes the logical leap that the factor is therefore artificial in some way and does not really exist. Note that this is also a claim that the climate is essentially perfectly understood apart from perhaps some mere details, a claim I suggest no genuine climatologist would support.

    Rasmus faces a conflict between the data and the models, and prefers the models. Tell me, Steve Bloom, in the same situation, which do you prefer, models or data? Do you deny the reality of autocorrelation in the temperature record? And setting aside the “ignorance” argument, on what grounds would you exclude that data?

  10. Steve McIntyre
    Posted Jan 3, 2006 at 3:37 PM | Permalink

    A post at realclimate criticized me for making fun of Rasmus’ language skills. I’ve never made fun of Rasmus’ language skills; I’ve made fun of what he said. There’s a difference. While this may be sarcastic, I don’t see that any of it is an “ad hominem argument” – a definition of which is:

    An Ad Hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument.

    I don’t think that I’ve ever relied on Rasmus’ sayings about hair growth products or the existence of tides to dismiss his views on statistics. I’ve dismissed his views on statistics for statistical reasons. However, surely we can have a little fun every so often as long as Gavin lets Rasmus off the end of the bench.

  11. John Hekman
    Posted Jan 3, 2006 at 6:20 PM | Permalink

    Steve and Ed Snack

    I’m not sure that I understand Rasmus’ problem with autocorrelation. In economics, autocorrelation often occurs in time series models because of missing variables, i.e. a misspecified model. Why is this not the reason with regard to climate data as well? If we had a properly specified model of changes in temperature, the autocorrelation would be eliminated. And if autocorrelation is present, it means we have not yet come up with an adequate model of temperature.
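
    That mechanism is easy to reproduce in a few lines. Here is an illustrative Python sketch of the econometric point (the variables and coefficients are invented; this is not a claim about the actual temperature series): leave a slowly varying driver out of a regression and the residuals inherit its autocorrelation.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 400
        x1 = rng.normal(size=n)                       # included regressor
        x2 = np.cumsum(rng.normal(size=n)) * 0.05     # omitted, slowly varying driver
        y = 1.0 + 0.8 * x1 + x2 + rng.normal(scale=0.3, size=n)

        def lag1_corr(e):
            return np.corrcoef(e[:-1], e[1:])[0, 1]

        # Misspecified model: regress y on x1 only
        X = np.column_stack([np.ones(n), x1])
        resid_bad = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

        # Properly specified model: include x2 as well
        Xf = np.column_stack([np.ones(n), x1, x2])
        resid_good = y - Xf @ np.linalg.lstsq(Xf, y, rcond=None)[0]

        print("lag-1 residual autocorrelation, x2 omitted:  %.2f" % lag1_corr(resid_bad))   # large
        print("lag-1 residual autocorrelation, x2 included: %.2f" % lag1_corr(resid_good))  # near zero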

  12. John S
    Posted Jan 3, 2006 at 7:17 PM | Permalink

    John H,

    Yes. But. There are many physical processes that ‘integrate’ other physical variables and thus have autocorrelation in their structure. For example, lake levels ‘integrate’ rainfall, atmospheric concentrations ‘integrate’ emissions etc. All these variables will display autocorrelation which must be accounted for if one is to model them (you can account for it by focusing on the, hopefully, stationary primitives such as rainfall or emissions or by employing appropriate statistical techniques on the autocorrelated variables).

    So, even if you had a perfect model of the climate, you would still need to deal with autocorrelation properly.

    I don’t understand what Rasmus’ problem with autocorrelation is either – I barely understand what his position is. He appears to set up a false dichotomy between statistics and physics that, if I were uncharitable, I would attribute to fundamental ignorance of what statistics is about.
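
    The ‘integration’ point above can be seen with almost no code. A toy Python sketch (the ‘rainfall’ series is just white noise, chosen for illustration): the forcing is uncorrelated, but its running sum, a crude stand-in for a lake level, is strongly autocorrelated.

        import numpy as np

        rng = np.random.default_rng(3)
        rain = rng.normal(size=2000)        # uncorrelated "rainfall" anomalies
        level = np.cumsum(rain)             # storage integrates the input

        def lag1(z):
            return np.corrcoef(z[:-1], z[1:])[0, 1]

        print("lag-1 autocorrelation of rainfall:   %.2f" % lag1(rain))    # near 0
        print("lag-1 autocorrelation of lake level: %.2f" % lag1(level))   # near 1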

  13. Posted Jan 3, 2006 at 7:58 PM | Permalink

    Re #12. It seems quite simple. 1. Autocorrelation arises from physical variables with storage. Kaufmann has shown that GHGs stored in the atmosphere impart autocorrelation to temperature; temperature itself does not need to be stored to have AC. 2. A simple null hypothesis like ‘no increase in temps over the last 100 years’ does not require a GCM, just a single zero. McKitrick has shown that the centennial increase, significant under IID errors, is not significant under AC errors. This is not incompatible with the detection of significant forcing of temperatures by GHGs through cointegration techniques, as Kaufmann has shown elsewhere. It just means that the 100 year ‘trend’ is not distinguishable from the noise.

    As discussed on this blog, assuming IID errors when in fact AC errors are present causes 1. false recognition of significance, and 2. exaggeration of the magnitude of forcings, as shown in an example by Granger. Such errors MUST manifest in GCMs where the calibration of parameters such as 2xCO2 sensitivity does not take AC errors into account. E.g., Stern and Kaufmann estimate 2xCO2 sensitivity at 1.6 K, at the low end of generally accepted estimates; correlative paleo methods are around 2.3 K; GCMs hover around 3 K. That’s where they are going wrong. Where am I going wrong?
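
    A back-of-the-envelope version of the IID-versus-AC point can be coded in a few lines. This Python sketch uses invented numbers (the AR(1) coefficient and trend size are arbitrary, and the (1+rho)/(1-rho) factor is only the standard large-sample variance inflation, not the method of the papers cited above); its point is that once the lag-1 autocorrelation of the residuals is accounted for, the trend’s t-statistic shrinks by roughly a factor of two or more.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100                                   # a "century" of annual values
        t = np.arange(n, dtype=float)
        noise = np.zeros(n)
        for i in range(1, n):                     # AR(1) noise with rho = 0.8
            noise[i] = 0.8 * noise[i - 1] + rng.normal(scale=0.1)
        y = 0.004 * t + noise                     # small linear trend plus persistent noise

        X = np.column_stack([np.ones(n), t])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        rho_hat = np.corrcoef(resid[:-1], resid[1:])[0, 1]

        se_iid = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
        se_ar1 = se_iid * np.sqrt((1 + rho_hat) / (1 - rho_hat))   # crude large-sample inflation for AR(1) errors

        print("estimated trend: %.4f per step" % beta[1])
        print("t-statistic assuming IID errors:  %.1f" % (beta[1] / se_iid))
        print("t-statistic with AR(1) inflation: %.1f" % (beta[1] / se_ar1))   # typically 2-3 times smaller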

  14. Posted Jan 4, 2006 at 8:15 AM | Permalink

    I’m no expert on statistics and I’m still trying to come to grips with those issues. As a physicist, however, I’d like to comment on what Rasmus says about “energy conservation”. This is a misleading notion, because energy is conserved in a closed system, and the Earth’s surface (which is what we’re concerned about) is a wide open system. Furthermore, we are sitting on a huge heat reservoir (the bulk of the earth). The global temperature is not the result of “energy conservation”, but of a delicate and dynamic balance between the energy we get from the sun and how much we shed back into space, not forgetting that there is a lot of stored energy that smooths out any fluctuations. Turn off the sun and you’ll see some real climate change…

    It’s hard to keep a sense of perspective in this matter. We are trying to identify decadal trends of a fraction of a degree when local temperatures vary on a daily basis by about 20 deg, and on an annual basis by up to 60 degrees (here in Montreal anyway…). So the heat gets moved around all the time.

    Looking at the GCM results versus actual data in the IPCC TAR, I was struck by the fact that the year to year fluctuations in the measured data are significantly larger than those given by all the models. There have been fluctuations of 0.3 deg over just a couple of years. The GCM people claim the models are good because they reproduce a trend that they put there in the first place, but can’t reproduce the details. What do you make of that? You can’t say that the data are bad and the model good, because you validate the model with the data! If the data are thought to be reliable, and the model doesn’t reproduce the large fluctuations, then maybe we still have a lot to learn.

    On the use of statistics: I couldn’t disagree more with Rasmus on this. If a given physical process follows well-known statistics, then you can reliably use the statistics instead of trying to work out all the fine details. This is done all the time, and physics wouldn’t be what it is now if we hadn’t learned to do that. What type of statistics the evolution of climate follows is unclear. On the one hand, you can reduce the evolution of global temperature to a highly nonlinear interaction between only a few parameters, which would make the temporal evolution deterministic but chaotic; on a short time frame (100 years), you may be able to reduce that to some kind of statistical random walk. On the other hand, there ARE purely statistical events (iid) that have a significant influence on global temperatures, and I’m thinking mostly of large volcanic eruptions, which so far we can’t predict, but for which we have SOME historical evidence we can work with. So if you want to extract a statistical behavior from past data, it’s probably a mixture of both.

    Francois Ouellette

  15. John Davis
    Posted Jan 4, 2006 at 10:43 AM | Permalink

    Re #15. No it doesn’t.
    On a different subject, this is a really good piece just put out by Hans von Storch.

  16. Dave Dardinger
    Posted Jan 4, 2006 at 10:51 AM | Permalink

    Well, Peter, let’s go with your example a bit. Using this analogy, nutrition would equal the sun. Healthcare might be the oceans. Changing demographics might be volcanic eruptions. Wars can be Earth orbital changes. So what are CO2 emissions? Human Growth Hormone, perhaps? The point is that with so many large factors to consider, projecting average human height to within millimeters is a dicey proposition.

  17. ET SidViscous
    Posted Jan 4, 2006 at 11:02 AM | Permalink

    More importantly, people’s height stays pretty much constant through the majority of their lives.

    Now take Peter’s pointless argument and change it to measuring the average height of the entire population (or a decent sample), and rather than measuring absolute height, measure the average altitude of the top of each person’s head at each sampling; i.e., my measured height changes when I’m sitting, lying down, standing up, sitting in my truck, sitting in my car, or flying in an airplane.

    Then we would have similar information, and we could still infer an average height for each sample point, and further average that over the population, and yes, it would be difficult to develop a trend of fractions of an inch over decades.

    But since my height is 5’11” whether I’m sitting, lying down or whatever, and this is true for everyone else (all the sample points), it tends to be much simpler to do statistical analysis of heights over time.

  18. ET SidViscous
    Posted Jan 4, 2006 at 11:38 AM | Permalink

    And the point we are trying to make is that height data is much simpler than temperature data with far fewer confounding factors, making statistical analysis more straightforward.

  19. John Davis
    Posted Jan 4, 2006 at 11:48 AM | Permalink

    Re #18, #19
    I suppose, Peter, using your analogy, it’s rather like trying to measure the average height of the population to within, say, a micron or two. Or getting the average rate of growth over a week. It’s easy enough to crunch the numbers, but just what they mean once crunched is a bit debatable.

  20. Posted Jan 4, 2006 at 12:08 PM | Permalink

    Re #0:
    I appreciate the attention to my comment #84 on realclimate.

    Re #12 and #13:
    It is true that “Autocorrelation arises from physical variables with storage” and “There are many physical processes that ‘integrate’ other physical variables and thus have autocorrelation in their structure”. However, such mechanisms produce Markovian or quasi-Markovian dependence, not long range dependence. The origin of long range dependence (also known as long term persistence, long term memory, the Hurst phenomenon, the Joseph effect, or scaling behaviour) should be traced elsewhere. Here is a synopsis of my own attempts to trace it and the respective references:

    1. Koutsoyiannis, D., The Hurst phenomenon and fractional Gaussian noise made easy, Hydrological Sciences Journal, 47(4), 573-595, 2002. (This is not on line but if someone is interested I can email a copy).

    Here I showed that the Hurst phenomenon can be produced by fluctuations of a process on different temporal scales. More specifically, a Markovian underlying process at the annual scale can result in a nearly scaling process if random fluctuations of its mean occur on just two additional scales (of the order of 10 and 100 years), while the resulting composite process remains stationary.

    2. Koutsoyiannis, D., A toy model of climatic variability with scaling behaviour, Journal of Hydrology, 2006 (article in press; http://dx.doi.org/10.1016/j.jhydrol.2005.02.030).

    Abstract: It is demonstrated that a simple deterministic model in discrete time can capture the scaling behaviour of hydroclimatic processes at time scales coarser than annual. This toy model is based on a generalized “chaotic tent map”, which may be considered as the compound result of a positive and a negative feedback mechanism, and involves two degrees of freedom. The model is not a realistic representation of a climatic system, but rather a radical simplification of real climatic dynamics. However, its simplicity enables easy implementation, even on a spreadsheet environment, and convenient experimentation. Application of the toy model gives traces that can resemble historical time series of hydroclimatic variables, such as temperature, rainfall and runoff. In particular, such traces exhibit scaling behaviour with a Hurst exponent greater than 0.5 and density function similar to that of observed time series. Moreover, application demonstrates that large-scale synthetic “climatic” fluctuations (like upward or downward trends) can emerge without any specific reason and their evolution is unpredictable, even when they are generated by this simple fully deterministic model with only two degrees of freedom. Obviously, however, the fact that such a simple model can generate time series that are realistic surrogates of real climatic series does not mean that a real climatic system involves that simple dynamics.

    3. Koutsoyiannis, D., Uncertainty, entropy, scaling and hydrological stochastics, 2, Time dependence of hydrological processes and time scaling, Hydrological Sciences Journal, 50(3), 405-426, 2005 (http://www.extenza-eps.com/IAHS/doi/abs/10.1623/hysj.50.3.405.65028;jsessionid=noyCMpKB1OFcDF7U5H?cookieSet=1&journalCode=hysj).

    Abstract: The well-established physical and mathematical principle of maximum entropy (ME), is used to explain the distributional and autocorrelation properties of hydrological processes, including the scaling behaviour both in state and in time. In this context, maximum entropy is interpreted as maximum uncertainty. The conditions used for the maximization of entropy are as simple as possible, i.e. that hydrological processes are non-negative with specified coefficients of variation and lag-one autocorrelation. In the first part of the study, the marginal distributional properties of hydrological processes and the state scaling behaviour were investigated. This second part of the study is devoted to joint distributional properties of hydrological processes. Specifically, it investigates the time dependence structure that may result from the ME principle and shows that the time scaling behaviour (or the Hurst phenomenon) may be obtained by this principle under the additional general condition that all time scales are of equal importance for the application of the ME principle. The omnipresence of the time scaling behaviour in numerous long hydrological time series examined in the literature (one of which is used here as an example), validates the applicability of the ME principle, thus emphasizing the dominance of uncertainty in hydrological processes.
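
    The mechanism described in the first paper above can be imitated crudely in a few lines of Python. This is not Koutsoyiannis’ code; the AR(1) coefficient, the sizes of the decadal and centennial mean shifts, and the aggregated-standard-deviation estimate of the Hurst exponent are all rough illustrative choices. The point is only that a short-memory process whose mean is reshuffled every ~10 and ~100 steps produces an apparent Hurst exponent well above 0.5.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 2 ** 14

        # Markovian (AR(1)) base process at the "annual" scale
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.3 * x[t - 1] + rng.normal()

        # Random shifts of the mean on roughly decadal and centennial scales
        dec = np.repeat(rng.normal(scale=0.8, size=n // 10 + 1), 10)[:n]
        cen = np.repeat(rng.normal(scale=0.8, size=n // 100 + 1), 100)[:n]
        y = x + dec + cen

        def hurst_slope(z, scales=(1, 2, 4, 8, 16, 32, 64, 128)):
            """Rough Hurst estimate from the standard deviation of block means:
            std(mean over m points) ~ m**(H - 1)."""
            sds = [z[: (len(z) // m) * m].reshape(-1, m).mean(axis=1).std() for m in scales]
            return np.polyfit(np.log(scales), np.log(sds), 1)[0] + 1.0

        print("apparent Hurst exponent, AR(1) only:       %.2f" % hurst_slope(x))  # close to 0.5
        print("apparent Hurst exponent, with mean shifts: %.2f" % hurst_slope(y))  # well above 0.5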

  21. pj
    Posted Jan 4, 2006 at 1:30 PM | Permalink

    I wish somebody would address the point made in post #1. It’s a very simple statistical matter, but seems to strike at the very heart of the issue. All this sound and fury is based on the simple assertion that ground based temperature measurements show that the global average temperature has increased about a half degree C over the last 100 years. The problem with this assertion is that we haven’t randomly sampled the earth’s temperature… ever! Without a random sampling of the earth’s temperature we simply don’t know whether the earth has warmed, cooled, or stayed the same.

    An additional simple statistical exercise I’ve not seen done anywhere is an attempt to correlate CO2 emissions with “average” temperature over the period man has supposedly affected the climate. I’ve seen the two trend lines overlain, and it looks to me that the statistical correlation is weak to nonexistent, especially given the fact that the most rapid rise in CO2 occurred in the 30 or so years following WWII, when the planet was cooling.

  22. Steve McIntyre
    Posted Jan 4, 2006 at 2:51 PM | Permalink

    Re #22: Demetris, thanks for dropping by. I’ve downloaded several of your articles, which appear to be very stimulating and I will try to comment on them in the future.

  23. JerryB
    Posted Jan 4, 2006 at 3:00 PM | Permalink

    Re #16

    A minor point: that article by Storch is not new, but it is indeed a good piece. Steve mentioned it in his post at http://www.climateaudit.org/index.php?p=135

  24. Posted Jan 4, 2006 at 6:11 PM | Permalink

    Re: #15

    Peter, you say

    “The height of people varies a lot too, you’ll often see people well under five foot six or well over six foot. Does that mean we can’t say something meaningful about the trends, even of fractions of an inch, in average human height over decades?”

    I totally agree with you. I wasn’t saying that we can’t say anything meaningful. Don’t get me wrong. In the case you mention, say people’s height varies by about 1 m at most (from dwarfs to giants). So looking for a 0.6 degC trend in a 60 degC range would be like looking for a 1 cm trend. Highly feasible. But as some pointed out, people’s height doesn’t vary over 50 years of adulthood. If we were to grow from 4 foot in the morning to 7 foot in the evening, and back to 4 foot again, every day, with that swing varying from person to person and from day to day, I’d say we have one more level of difficulty. But still, nothing fundamentally impossible. After all, we sent man to the Moon and Bush to the White House (that required a LOT of recounting, remember?…)

    But the main difficulty I see with climate is that we have only ONE set of past data to work with. Data that weren’t meant to be taken with the aim of identifying a trend of a fraction of a degree, so a lot of them may just be plain wrong by a degree or two. Still, by averaging, you do reduce the error. Smart people have done that, and I’m not going to pretend here that I’m smarter than they are. But in the end we only have that ONE set of relatively incomplete and uncertain data. If it’s wrong by 0.2 degC, which is quite small, you’ll agree, that seems to have enormous implications.

    And if the models, as complex as they are, disagree by as much as 0.2 deg from that ONE set of data in the fluctuation pattern, isn’t that also a big problem?

    As an experimental scientist, my life is (was) easy because if I felt that my data were somewhat skewed, I could always repeat the experiment, over and over again, and every time refine my measurement technique until I was sure that the data were reliable and reproducible. Then you can start modeling and get meaningful results. This is not the case in climate science, where they keep modeling and modeling based on the same set of data. There’s really only so much information you can extract from it, and it’s pretty limited, so it’s no wonder we don’t seem to be making much progress. I have a lot of respect for the people who do that work. But I do get the feeling that if you’re too much into it, especially computer modeling, you do get carried away and tend to attribute too much value to your results, simply because they’re so hard to get.

  25. Posted Jan 4, 2006 at 8:39 PM | Permalink

    Re #22. What fantastic papers – thanks. “Nonstationary versus scaling in hydrology” is profound and very easy to read. I particularly like the observation that fitting a nonstationary trend with stationary errors is a contradiction. From what I understand, long range dependence presents the same problems for estimating significance and calibration of forcings as short range dependence, and so does not eliminate or even reduce the problem posed to GCMs by real world observations. Regards.

  26. Dave Dardinger
    Posted Jan 4, 2006 at 11:49 PM | Permalink

    re: #22

    I’m glad to hear the favorable report on the papers. The abstracts were surprisingly understandable considering they contain things like “Hurst phenomenon”, “Markovian”, “chaotic tent map”, “Hurst exponent greater than 0.5”, etc. I’m not sure if that means I’m starting to absorb some of the statistics lingo or Demetris is just a good writer. I’m going to have to go read the articles and see.

  27. nanny_govt_sucks
    Posted Jan 5, 2006 at 3:32 AM | Permalink

    #28: “Oh, and re your second paragraph – think aerosols!”

    That would take a lot of creative thinking. The Northern Hemisphere, where most of the aerosols are emitted, does not show greater cooling than the Southern Hemisphere. The areas of the globe where aerosols are primarily produced, like China, have shown warming, not cooling, over the last 20 or so years. So the aerosol explanation doesn’t really seem to add up. There should be monstrous amounts of cooling in the aerosol-affected regions if aerosols are supposed to lower global average temperatures.

  28. Hans Erren
    Posted Jan 5, 2006 at 4:00 AM | Permalink

    Also, unlike GHGs, aerosols are notably poorly mixed, i.e. they tend to concentrate close to the source region. If the substantial decrease of sulfur aerosols in recent years in the US can explain all observed warming, this leaves no room for CO2…

  29. Ray Soper
    Posted Jan 5, 2006 at 4:40 AM | Permalink

    You are all probably aware that JunkScience.com ranked Global Warming as one of their Top 10 Junk Science Claims for 2005. Strikes me as a calm and rational assessment of the issues. In case you haven’t seen it: http://www.foxnews.com/story/0,2933,163999,00.html

  30. Posted Jan 5, 2006 at 7:52 AM | Permalink

    Re #29

    Peter, you say : “I think you’re wrong. Why? Because I’ve read and heard people explain how you actually don’t need very many data points to come up with robust temperature readings. And, remember, the difference between sat temps (well, the RSS dataset anyway) and surface thermometer ones isn’t great – you think the sats don’t sample properly? So, we have good data, we don’t have perfect data. All the data shows warming, all the physics suggests more warming in the pipeline, and all the evidence of human behaviour suggests that production of GHG’s will rip uncontrolled at least in the near future – more warming. So you think you can wish it all away with your comments?”

    Isn’t this an emotional reaction? You “have heard people” and that’s enough? Haven’t you also “heard” the opposite? You say sat temps are good, but reject one data set. How do you know which one is good? You say “all the data” show warming, and “all the physics” suggests more warming. “All” is a very absolute word, which should be used with great caution. It seems you are yourself wishing away any result that goes against warming. To me, this is the antithesis of good science. You should always pay more attention to the possible flaws in any theory. I’m new to this debate, yet I’ve spent a lot of time reading large chunks of the IPCC TAR, and quite a few papers so far. I’ve been a scientist for 25 years, I have written but also reviewed a large number of scientific papers, so I’ve learned to have a critical eye, and I always look at ANY result with a relatively high degree of suspicion. It is the author’s duty to convince me.

    There is a lot of good scientific discussion here (and I apologize for using comment space without really contributing), and it helps me focus my inquiry on some of the more contentious aspects of global warming, but from there I usually try to go to the source and find out by myself if I can be convinced.

  31. Michael Jankowski
    Posted Jan 5, 2006 at 7:57 AM | Permalink

    And, remember, the difference between sat temps (well, the RSS dataset anyway) and surface thermometer ones isn’t great

    Define “great.” Statistically insignificant? The magnitude of warming according to the surface measurements of a year? Decade? Two decades?

  32. pj
    Posted Jan 5, 2006 at 8:36 AM | Permalink

    Re #29

    You don’t need many data points, but those data points must be random. As for the difference between surface and satellite data, it is true that there isn’t a big difference, but that’s mainly because the amount of warming we’re talking about is minuscule, a half to one degree over 150 years. The fact remains that the satellite data show virtually no warming over a period for which climate models “predicted” three times as much warming. Finally, aerosols are a cop-out. Our knowledge of aerosols is woefully inadequate to make such claims about their effect on the climate. They are a fudge factor used by modelers to rescue their models from absurd results.

    The fact remains that if you can’t show some type of correlation between changes in greenhouse gas emissions and changes in temperature, then you don’t have any empirical confirmation for the hypothesis that one causes the other. Saying that there would be a correlation if not for aerosols simply won’t do.

  33. ET SidViscous
    Posted Jan 5, 2006 at 8:57 AM | Permalink

    “All the data shows warming”

    You might want to review that one there. It is decidedly untrue.

  34. Peter Hearnden
    Posted Jan 6, 2006 at 5:00 AM | Permalink

    Re #33. Emotional? No more than when you come to your conclusion, surely? I’ve looked at the data, the evidence, and my conclusion is as it is. It’s honest and, I hope, thoughtful, and it’s not set in stone (I’m pretty sure we’ll see more warming; how much IS in question). To call it emotional is to just, OK slightly, ad hom me (as is to imply I’m not applying an objective scientific eye). How convinced do you demand I be? Can one ever be 100% convinced by anything? I’m focussing on the science of climate change and the contentious areas of AGW sceptics. The whole lot, so I’m emotional? C’mon.

    You could say what you said even if it warms by 2C by 2100! That’s the problem with the ‘I demand 100% proof CA type’ line. Oh, and btw, what do you regard as the contentious bits? Do you treat Steve’s work with suspicion too? OK, let’s see your criticisms of it.

  35. Peter Hearnden
    Posted Jan 6, 2006 at 5:06 AM | Permalink

    Re #36. OK, let’s see your cards.

  36. Peter Hearnden
    Posted Jan 6, 2006 at 12:36 PM | Permalink

    Re #30 and #33. I replied to these, the replies appeared, but I think they’ve now been ‘karmared’ :(

  37. Steve McIntyre
    Posted Jan 6, 2006 at 1:01 PM | Permalink

    Peter, we are being absolutely deluged with spam right now – sometimes over 100 a day. The retrieve screen shows the most recent 20 posts and I was able to recover them, but in a couple of hours, if it was off the screen, I wouldn’t have known how to. You must have used a spam-codeword. You used the word c**rds, which might be used to detect p**k**r sites. Otherwise I don’t know.

  38. ET SidViscous
    Posted Jan 6, 2006 at 1:26 PM | Permalink

    No problem Pete.

    You said “All”, therefore I only need one counterexample; I’ll give ya two, how’s that?

    http://www.junkscience.com/MSU_Temps/UAHMSUSPol.htm

    There is plenty of data that shows cooling in certain regions.

  39. Posted Jan 6, 2006 at 1:44 PM | Permalink

    #34

    Peter, your post did sound emotional to me for its overemphatic use of the “all” word. It’s OK to be emotional (this is a debate after all). But it doesn’t add anything to the science, and it doesn’t help the debate to claim that “all the physics” shows warming. What about the physics of cloud formation, and how cosmic rays may influence it? Someone recently brought that up, and it looks like an interesting effect to look at. We don’t know yet if it would end up showing more warming or less.

    It’s quite all right if you are convinced that there is warming and that it’s due to man made GHG, and that we should do something about it. If you have very convincing arguments, other than that “all the data show warming”, I’d like to hear them. The IPCC report isn’t that convincing. There’s a lot of data, but it is nevertheless full of caveats and uncertainties, and things we don’t know yet.

    And, yes, I am suspicious of Steve’s work as well. In fact I haven’t looked at it in detail yet, so I can’t comment. I’m trying to learn about advanced statistics to be able to read it and all those interesting papers that are discussed here. What I appreciate is that he didn’t take the previous results for granted. As for me, so far I’ve looked at the temperature data and how they were collected, and at the GCM results and modeling. I have also read chunks of the Arctic Council report and some other papers on Arctic temperatures showing that it was as warm in 1934, and that soot from East Asian plants may now deposit on the ice in certain regions of the Arctic and help it melt, whereas earlier in the century it would have been soot from Europe and North America, affecting other parts of the Arctic. I’m looking at solar activity papers, to see if they make any sense.

    I don’t have a definitive opinion yet on GW. If I were a policy maker, with the evidence I’ve seen at this point, I might not support something like Kyoto, which is trying to rush things and achieves nothing significant. I would probably lean towards a horizon of something like 50 years, with a drastic reduction not in CO2 emissions but in atmospheric CO2 concentration, back to something like “pre-industrial” levels, with an emphasis on advanced technology. Then I would put my money where my mouth is, and fund a huge global R&D effort towards alternative energy sources and energy efficiency, but also things like reforestation, which also help reduce poverty and misery. I would stop wasting huge amounts of money on useless wars that create more suffering. But then, having accepted that CO2 may be dangerous, I would not need to spend one more penny on climate research!… ; )

  40. hans kelp
    Posted Jan 6, 2006 at 2:17 PM | Permalink

    This is just to recommend that all of you please visit Steven Milloy’s http://junkscience.com. Today, Steven Milloy has some nice comments on the Arctic vs GHGs under the title “2005 Ties for 2nd Warmest Years”.

  41. Mark Frank
    Posted Jan 9, 2006 at 2:52 AM | Permalink

    I am a relative novice in both statistics and climate science so I expect to be proved wrong (which is great – because that way I learn something) but it seems to me that some of the posts above may have missed Rasmus’s point.

    You can stare at almost any set of data and eventually, if you are a good enough mathematician, you will be able to come up with a stochastic model that gives a reasonable explanation, in the sense that the observed data are probable given the model. If you then use that model as a null hypothesis you will inevitably come to the conclusion that there has been no fundamental change in the data. Isn’t this what Rasmus means when he talks about the drunkard’s walk model as possibly being circular?

    Any proposed stochastic model has to be justified by the realities of what is being modelled, not just the patterns in the data. In the most extreme examples the proposed model might be physically impossible. This is surely what he means when he says “It is important to combine physics with statistics in order to obtain true answers”. (I don’t know why he excludes chemistry and biology, which surely come into climate models?)

    He clearly believes that GCMs represent a good understanding of the realities of the situation and therefore are a justified choice for generating the null hypothesis. I am not nearly well enough qualified to judge whether they do represent reality, but it seems a lot better than playing with models until you find something that fits the data.

    I don’t think he has “a problem with autocorrelation”. In fact he explicitly says that it is to be found in the GCMs. I think he just has a problem with autocorrelation that is derived simply by looking at the same data it is meant to explain.

    Is this very naive? Please comment.

  42. John S
    Posted Jan 9, 2006 at 3:26 AM | Permalink

    One way of thinking about it is in terms of Occam’s Razor. If your hugely complicated GCM model is no better (or indeed worse) than a very simple model e.g. Temp(t)=Temp(t-1)+epsilon(t) then you have to ask yourself, what exactly is the GCM doing?

    On another level, statistics is not about coming up with some stochastic model that works. It is about verifying that the model you are using actually tells you something meaningful (there is a lot of wishful thinking that humans are prone to and statistics can provide a partial check on that tendency). If there are physical laws which are properly incorporated in your model then some random model will not and can not be better at forecasting than your model – on the other hand, if your model is no better at forecasting than some random model then there is a very real sense in which you can say that your model is no more than a Rube Goldberg machine (thank you Lubos for that meme).

    Thus, it is not that anyone need believe that a simple univariate model of temperature is an accurate picture of reality, but that GCMs are no better than simple univariate models (despite all their seeming complexity). It is not circular at all – it tells you precisely how good the GCMs are. (If it looks like a duck, and it quacks like a duck I am perfectly happy to call it a duck until it solves a partial differential equation).
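
    One concrete way to run that comparison is to score one-step-ahead forecasts against the naive Temp(t) = Temp(t-1) benchmark. The Python sketch below is generic; ‘model_forecast’ is a stand-in for whatever the candidate model predicts (it is not actual GCM output), and the numbers in the usage example are made up.

        import numpy as np

        def forecast_skill(observed, model_forecast):
            """Compare a candidate forecast with the naive random-walk benchmark
            (predict no change from the previous observation)."""
            observed = np.asarray(observed, dtype=float)
            naive = observed[:-1]                        # "tomorrow looks like today"
            candidate = np.asarray(model_forecast, dtype=float)[1:]
            target = observed[1:]
            mse_naive = np.mean((target - naive) ** 2)
            mse_model = np.mean((target - candidate) ** 2)
            return mse_model / mse_naive                 # below 1 means the model beats the benchmark

        # Hypothetical usage with made-up numbers:
        obs = [0.10, 0.12, 0.08, 0.15, 0.20, 0.18]
        fcast = [0.09, 0.11, 0.10, 0.13, 0.17, 0.21]
        print("MSE ratio vs random walk: %.2f" % forecast_skill(obs, fcast))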

  43. Hans Erren
    Posted Jan 9, 2006 at 3:44 AM | Permalink

    re 42:

    That’s a good approach!
    Now what we observe is that a (theoretical) linear CO2 signal is still drowned in the stochastic noise of El Nino; this is where the huge uncertainty in climate sensitivity comes from. Also bear in mind that multidecadal variations may themselves be stochastic in nature.

  44. Louis Hissink
    Posted Jan 9, 2006 at 5:13 AM | Permalink

    I’ve been reading the various comments here and on other threads about “statistics”.

    GCMs are mathematical models, not statistical models, which are simply mathematical summaries of the salient features of a discrete population of objects.

    I have also noted that when commentators start using “jargon”, it means they cannot explain their argument in plain language. Or, to put it more bluntly, they seem not to understand what they are talking about, hence the retreat into jargon; plain language is one of the reasons the Englishman Tyndale wrote his English version of the Christian Bible, which became the basis of the King James Authorised Version.

    I suspect much of the acrimony between the two camps debating climate change theories might come from confusing statistics and mathematics. How a statistically random process could be functionally predictive remains a mystery, yet this is what I interpret GCMs to be.

    Computing geological and mining ore reserves is a well-developed mathematical discipline, but it is essentially a “STATIC” model, in that the variables (ore grade, ore thickness, stripping ratios, etc.) don’t change over time.

    Climate is different.

    It’s unpredictable.

  45. Mark Frank
    Posted Jan 9, 2006 at 5:51 AM | Permalink

    Re #42. John – I am not clear what you mean when you say one model is “better” than another. If you simply mean that one model explains the observed data better then I don’t agree with a couple of your statements.

    You say “If your hugely complicated GCM model is no better (or indeed worse) than a very simple model e.g. Temp(t)=Temp(t-1)+epsilon(t) then you have to ask yourself, what exactly is the GCM doing?”

    If the simple model is created by tinkering with models until you find one that fits the data, then it will almost certainly be “better” than the GCM model. But that doesn’t mean it can legitimately be used as the null hypothesis. It has to be based on underlying science as well. The GCM model is adding the underlying science. (I am assuming that the GCM model is based on the underlying science – that’s another issue.)

    To take an extreme example, I remember a couple of years ago doing an exercise to find a model relating some variable (I think it was the prevalence of some kind of vermin) to the number of cats in the area. I found an excellent, simple, mathematical model that completely explained the observed data provided you accepted the concept of negative cats when the prevalence of vermin was very high. If you accepted that number of cats never dropped below zero then the model had to be adjusted and was not so good at explaining the data – but I think you would accept that restricting myself to positive cats was preferable.

  46. Louis Hissink
    Posted Jan 9, 2006 at 6:08 AM | Permalink

    Re #45

    You found a mathematical model that explained reality IF ….cats never dropped below zero numbers…

    That mathematical model did not, and this is the problem.

  47. Mark Frank
    Posted Jan 9, 2006 at 7:38 AM | Permalink

    Louis – I am sorry – I don’t know which model you are referring to when you say “That mathematical model did not, and this is the problem.”

  48. Steve McIntyre
    Posted Jan 9, 2006 at 7:59 AM | Permalink

    #41: Mark, Cohn and Lins argued that statistical significance tests should attend to the possibility of persistence in the data – which changes the relevant significance test dramatically. Rasmus has objected that you can’t define the persistence based on the temperature data itself. This is one issue and I’ve seen disagreement on this. But aside from the temperature data, you get very high levels of persistence in many forms of proxy data – some of which I’ve surveyed here, ranging from Vostok to Bob Carter’s ODP logs. Demetris Koutsoyiannis has pointed out the same phenomenon in far more detail in his publications. So there is much evidence of persistence which is independent of the temperature data itself.

    It seems to me to be a valid question whether the GCMs generate the forms of autocorrelation observed in the long proxy data. I don’t know at present, but I would be cautious about taking Rasmus’ word for it.

    Aside from this particular thread, Rasmus has published both at realclimate and in the literature on the statistics of record levels based on i.i.d. distributions – none of this survives any significance testing in which autocorrelation is considered. My recent post on Trenberth shows that Trenberth pointed out the problem (well known in general statistics) to climate scientists as long ago as 1984.

  49. Mark Frank
    Posted Jan 9, 2006 at 9:50 AM | Permalink

    Steve

    Rather than get into what Rasmus has said elsewhere, why not consider the core proposition in the RC thread – independent of who said it. I take this as:

    1) Stochastic models that reflect the data, but are not founded in physical reality, are not reasonable candidates for generating the null hypothesis and this includes autocorrelation models.

    2) GCM models are founded in physical reality and reflect the data and therefore do provide a reasonable basis for generating the null hypothesis. (I am not at all clear how this works – do they run the models thousands of times with marginally different starting conditions?)

    I don’t know enough about GCMs to comment on the second part and I think you are saying you don’t either. So I guess someone else will have to fill that in.

    I don’t see why autocorrelation in proxies is relevant to 1. As I understand it, the proxies are likely to be correlated with the temperature, but surely they are consequences of the temperature not causes. So they are really just better ways of estimating the temperature. They are not independent of the data. What is needed is a model that takes into account the *causes* of the temperature data and shows why an autocorrelation model is appropriate. This is not a dispute about the accuracy of the data. It is about the best model to use to predict it in the future.

    Unfortunately I can’t view Demetris’ papers without paying $30, which is a bit more than I am prepared to spend. The abstracts suggest that, while they are undoubtedly outstanding work, they are all about the consequences of different models for the data. This is not relevant to deciding whether the models have a physical justification or not. But maybe there is more in the papers themselves.

    I have thought of another way of looking at this. Significance testing is all about the probability of the observed data given a model and statisticians are experts at this. They are aware of a number of models and can select models that best predict the observed data i.e. the probability of the data given the model is high including subtle characteristics of the data such as autocorrelation. However, going right back to basics, this is not the same as the probability of the model. To estimate that we need an a priori estimate of the probability of the model independent of the data (which can be modified somewhat by the observed data). This is where the subject matter expert comes in. They can look at a model proposed by the statistician and say “if that happened it might explain the data but I am afraid the world doesn’t work like that”.
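
    That last point is essentially Bayes’ rule, and a small numerical illustration may help (the priors and likelihoods below are invented for the sake of the example): even if the data are three times as likely under a cleverly fitted stochastic model, a low prior, the subject-matter expert’s “the world doesn’t work like that”, can leave its posterior probability modest.

        # Posterior is proportional to prior times likelihood (all numbers hypothetical)
        prior_stochastic, prior_physical = 0.2, 0.8   # expert's prior weight on each model
        lik_stochastic, lik_physical = 0.9, 0.3       # P(data | model) from the statistician

        post_stochastic = prior_stochastic * lik_stochastic
        post_physical = prior_physical * lik_physical
        total = post_stochastic + post_physical
        print("P(stochastic model | data) = %.2f" % (post_stochastic / total))  # about 0.43 despite a 3x likelihood edge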

  50. Steve McIntyre
    Posted Jan 9, 2006 at 11:59 AM | Permalink

    #48. Mark, GCMs are not run thousands of times – that’s one of the problems. Ammann told me that 25 years of model time took one calendar day of supercomputer time. So there has never been even one run of a GCM covering the Pleistocene – anything that’s been run is something quite different. But you must recognize that you are really dealing with single runs of GCMs. Presumably these have been selected and we don’t know much about the selection process. There is more than one model, but in each case we are dealing with selected single runs – so, if there is a monoculture (as there is), there is a real possibility of systemic choice-making (from a statistical point of view).

    It is also a very real issue whether GCMs are properly “grounded” in physical reality, not least because the parameterizations used in GCMs are ad hoc methods of “solving” Navier-Stokes on a gridcell basis, and there is no proof that you can extend Navier-Stokes from the micro scale, where it is shown to apply, to gridcell aggregates.

    One way of testing the “grounding” itself is whether they generate autocorrelation features that match proxy information. I get the impression that they don’t do this very well – which points to potential problems in their methodology. If the data has persistence (as it does), then, if your models do not yield similar persistence, it seems evident to me that there must be some problem with the GCM and that you cannot rely on it to generate null distributions. (However, since you are dealing with single runs, they are not generating null distributions anyway.)

  51. Mark Frank
    Posted Jan 9, 2006 at 1:02 PM | Permalink

    Thanks for the info on GCMs. I might try and get some info on how they work on RC as I am not a known enemy :-)

  52. John S
    Posted Jan 9, 2006 at 9:29 PM | Permalink

    Re #45
    By ‘better’ I mean either it fits the data better or forecasts better. There are numerous statistics that can measure this quantitatively (e.g. R-squared or forecast error variances)

    “If the simple model is created by tinkering with models until you find one that fits the data then it will almost certainly be “better” than the GCM model.”

    The risk of ‘overfitting’ or otherwise generating the result you want with enough hard work is real and serious – but it applies equally to GCMs as simple models. However, in the long run, one model can’t be better than another model unless it contains all that is necessary to forecast (and nothing more). Just because a particular model fits the data well doesn’t mean it is right. But if a particular model doesn’t fit the data then you can be certain it is wrong. That is why rigorous statistical testing of models is required – to weed out the pretenders. No model should have a divine right to rule absent verified and consistent performance.

    Let me try to explain: Suppose that the true model of the world is that x=f(y,z). If I have a model that is univariate, i.e. x=f(x) (forgive my abuse of notation here; making it correct is just too much trouble, hopefully the idea is clear), then it cannot be better than a properly specified model with x=f(y,z). On the other hand, if you have a model where x=f(a,b,c,d,e,f,g,…), it can ultimately be no better at forecasting than the simple univariate model x=f(x) (and probably worse). The reason is that whenever y or z change, any model that excludes them will perform poorly – certainly worse than any model that includes them. Alternatively, if a complicated model forecasts no better than a univariate model, I would be inclined to believe that all the extra variables included in the complicated model are, in fact, irrelevant.

    Statistical testing is about achieving optimal parsimony, including everything that should be in there but nothing else (Occam’s Razor again).
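
    The parsimony point can be demonstrated directly. A Python sketch with simulated data (the true two-variable relationship x = f(y, z) and the forty irrelevant regressors are invented for illustration): adding variables always improves the in-sample fit, but the padded model typically forecasts worse out of sample than the parsimonious one.

        import numpy as np

        rng = np.random.default_rng(6)
        n_train, n_test, n_junk = 60, 60, 40

        def make_data(n):
            """Generate data from the 'true' model x = 1 + 2y - 1.5z + noise, plus junk regressors."""
            y_, z_ = rng.normal(size=n), rng.normal(size=n)
            junk = rng.normal(size=(n, n_junk))            # candidate regressors that play no role
            x_ = 1.0 + 2.0 * y_ - 1.5 * z_ + rng.normal(scale=0.5, size=n)
            return x_, np.column_stack([np.ones(n), y_, z_]), np.column_stack([np.ones(n), y_, z_, junk])

        x_tr, X_small_tr, X_big_tr = make_data(n_train)
        x_te, X_small_te, X_big_te = make_data(n_test)

        for name, Xtr, Xte in [("parsimonious (y, z)", X_small_tr, X_small_te),
                               ("padded with 40 junk vars", X_big_tr, X_big_te)]:
            beta = np.linalg.lstsq(Xtr, x_tr, rcond=None)[0]
            in_mse = np.mean((x_tr - Xtr @ beta) ** 2)
            out_mse = np.mean((x_te - Xte @ beta) ** 2)
            # padded model usually fits better in sample but forecasts worse out of sample
            print("%-26s in-sample MSE %.3f   out-of-sample MSE %.3f" % (name, in_mse, out_mse))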

    (As for your cats, extremes can be tricky, but local linear approximations of non-linear relationships can be very useful – even climate scientists do it.)

    [And in a random comment that I just wanted to put out there for those that have ears to hear, GCMs remind me of RBC models - a lot.]

  53. Steve McIntyre
    Posted Jan 9, 2006 at 10:15 PM | Permalink

    John S.: re your random comment, arguably a lot of microeconomics is simply the study of convex functions. I remember doing an econometrics course nearly all about convex functions (this was 1969). Here’s an intriguing comment about thermodynamics: "phenomenological thermodynamics is the study of Legendre transformations of convex functions." So one would expect some analogies to develop. I’ve been mulling over some of the parallels from time to time, but my head gets sore after a while and I think about proxies to soothe the pain.

  54. Mark Frank
    Posted Jan 10, 2006 at 2:18 AM | Permalink

    Re #52.

    John – I am not sure why you think a model based on scientific reality need have excess variables. It should have the right variables, the ones that reflect what is actually going on in the world. This may be more or less complex than a mathematical model based purely on the observed data, but it certainly shouldn’t have unnecessary variables. The climate is complex and any model that reflects reality is likely to be complex, but that’s only because all those factors affect the outcome. I don’t think anyone is suggesting throwing in variables that are irrelevant.

    I do agree that the best model is the one that makes the best forecasts. A model that is based purely on finding the simplest solution that describes past data but has no foundation in the underlying science will surely come unstuck in the future (unless it happens by luck to also represent reality). In the example of my cats a linear model was a very simple solution that perfectly described the data available. However, it would come severely unstuck had I used it to predict the number of cats with very high levels of vermin (or vice versa). A solution based on a better understanding of the physical attributes of cats would be much more robust in making future predictions.

    Doesn’t this go right back to your first statistics course – don’t confuse correlation with causation (even auto-correlation!)? I believe this means – not just that you can’t deduce causation from correlation but that you should not make predictions from an observed correlation unless you understand the underlying causation.

    PS What’s an RBC model?

  55. John S
    Posted Jan 10, 2006 at 7:09 PM | Permalink

    One aspect of complicated model vs simple model is comparing reduced form models with structural models.

    Structural models put in all the steps in a chain while reduced form models solve out for all the intermediate steps and express equations in their simplest possible form by removing intermediate variables. Depending on your purpose, each can perform a role and mathematically they are equivalent. For forecasting or real world applications, however, a problem with structural models is that errors at each step of a chain can cumulate and your ultimate forecast will be very poor indeed. A reduced form model just needs to estimate the total effect and is much more robust to real world errors.

    I will try an analogy suggested by Rasmus: if I want to know the effect on the temperature of a gas of adding energy to it, this can be expressed very simply in one equation. This is akin to a reduced form model. A structural model, on the other hand, is akin to trying to model each particle in the gas. Theoretically you should get the same result, but in practice you won’t. The reduced form still incorporates physics in its parameter estimates, but has distilled the physics down to only those relationships that are necessary for the purpose at hand.

    Your comment about correlation and causation is important but pretty funny. MBH essentially confuses correlation with causation for bristlecone pines and global temperature without any understanding of the underlying causation (which turns out to be pretty evanescent). Anyway, there are a number of statistical tests that can minimise confusing correlation with causation. An important one in the current context is that with trending series (high autocorrelation) there is a significant problem of spurious correlation (or related issues), and you need to explicitly test for this possibility. The whole realclimate thread was kicked off by a paper that (to paraphrase) said “you might be confusing correlation with causation because you are not accounting for high autocorrelation properly”. So I agree with you absolutely that correlation vs causation is a problem – but I see the fault in this issue as lying with Rasmus et al, because they seem to be doing precisely that.

    As for your cats, in the real world the perfect is the enemy of the good. Provided you knew the limits of extrapolation from your model (i.e. the range over which your linear approximation to the non-linear relationship held true), then it sounds like an eminently useful and sensible solution to the problem. Only if you had to deal with very high vermin numbers outside the limits of your model would you have needed to improve it.

    You can Google for ‘RBC model’ but explaining it wouldn’t particularly benefit the discussion unless you are already familiar with them and their place in the history of economic thought.

  56. Mark Frank
    Posted Jan 11, 2006 at 1:10 AM | Permalink

    John and Steve thanks for your time. I am not convinced, but I now understand your viewpoint better. I suspect any further debate would waste everyone’s time.
