Pat Frank: Forcing assumptions in GCMs

The following comes from Pat Frank regarding my question here

Ten GCM model runs by Pat Frank

This plot shows projections from 10 of the 15 GCMs tested in the “Intercomparison of Present and Future Climates Simulated by Coupled Ocean-Atmosphere GCMs”, PCMDI Report No. 66. The GCM data were digitized off Figure 27 of the Report. The plot also shows the linear average of the GCM projections and the results of a simple calculation of global average temperature increase due to increases in greenhouse gases (GHGs). The acronyms at the top of the plot designate the GCMs that were used to make the respective projection, their average (GCM Avg.), and the simple calculation (Net GHG T).

The Legend to Figure 27 is: “Globally averaged difference between increasing-CO2 and control run values of annual mean surface air temperature (top) and precipitation (bottom) for the CMIP2 models. Compare with Figure 1, which gives control run values.”

And the comment on the CO2 boundary condition of the GCM projections in the text is: “To begin our discussion of model responses to 1% per year increasing atmospheric CO2, Figure 27 shows global and annual mean changes in surface air temperature and precipitation under this scenario, i.e., differences between the increasing-CO2 and control runs.” The control runs were essentially flat lines with low-intensity wiggles.

The “Net GHG T” line reflects my own calculation and assumed the same 1% per year increase in atmospheric CO2 as the GCM simulations. This calculation also included forcings from methane (CH4) and nitrous oxide (N2O). The increase in these gases was extrapolated from polynomial fits to the measured trends.

The calculation further assumed that greenhouse gases produce 40% of the total greenhouse warming above the Top of Atmosphere temperature. This 40% includes warming due to the increased water vapor induced by the same GHGs.

The forcings for CO2, CH4, and N2O were calculated according to the equations in G. Myhre, et al., (1998) “New estimates of radiative forcing due to well-mixed greenhouse gases” Geophys. Res. Lett. 25(14), 2715-2718, Table 3.

The net temperature increase is just a linear extrapolation of temperature from the fraction of GHG forcing in the start year (1960 in all cases); i.e., the global average temperature is scaled by the increase in GHG forcing.
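
A minimal sketch of this kind of calculation, in Python. The Myhre et al. simplified expressions below are as published in their Table 3; the 1960 baseline concentrations and the CH4/N2O growth rates are illustrative assumptions standing in for the polynomial fits described above, and the sketch stops at the net forcing, since the final temperature scaling needs a base-forcing estimate not given here.

```python
import math

# Simplified forcing expressions from Myhre et al. (1998), Table 3.
# C is CO2 in ppm; M (CH4) and N (N2O) are in ppb.
def f_overlap(M, N):
    # CH4/N2O band-overlap term
    return 0.47 * math.log(1 + 2.01e-5 * (M * N) ** 0.75
                           + 5.31e-15 * M * (M * N) ** 1.52)

def dF_co2(C, C0):
    return 5.35 * math.log(C / C0)

def dF_ch4(M, M0, N0):
    return (0.036 * (math.sqrt(M) - math.sqrt(M0))
            - (f_overlap(M, N0) - f_overlap(M0, N0)))

def dF_n2o(N, N0, M0):
    return (0.12 * (math.sqrt(N) - math.sqrt(N0))
            - (f_overlap(M0, N) - f_overlap(M0, N0)))

# Assumed 1960 baselines (illustrative, not taken from the post):
C0, M0, N0 = 317.0, 1250.0, 292.0

for year in range(0, 81, 20):
    C = C0 * 1.01 ** year   # 1% per year compound CO2, as in the CMIP2 runs
    M = M0 + 10.0 * year    # placeholder CH4 trend (the post used polynomial fits)
    N = N0 + 0.7 * year     # placeholder N2O trend (likewise)
    dF = dF_co2(C, C0) + dF_ch4(M, M0, N0) + dF_n2o(N, N0, M0)
    print(f"year {year:2d}: net GHG forcing increase = {dF:5.2f} W/m^2")
```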

Historical methane was obtained from: D.M. Etheridge, L.P. Steele, R.J. Francey, and R.L. Langenfelds. 2002. Historical CH4 Records Since About 1000 A.D. From Ice Core Data. In Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.; source URL: http://cdiac.ornl.gov/trends/atm_meth/lawdome_meth.html

Historical N2O was obtained from: M.A.K. Khalil, R.A. Rasmussen, and M.J. Shearer (2002) “Atmospheric nitrous oxide: patterns of global change during recent decades and centuries” Chemosphere 47(8), 807-821. The percent of 1900 forcing was obtained by linear extrapolation of the BRW and CM data from Table 1 of the reference.

Pat’s Comment: The simple GHG-induced temperature projection goes right through the middle of the pack of GCM simulations, and closely tracks the GCM average. As the average of GCM projections is typically reckoned to follow measured climate trends more accurately, the same criterion indicates that the simple GHG projection is more accurate than any of the individual GCM projections. It seems that lots of money spent hasn’t gotten us much. One other thing of serious note: it is now obvious that GCM modelers assume that the only element driving net climate change is the level of GHGs in the atmosphere. This seems extraordinarily naïve, physically.

164 Comments

  1. PHE
    Posted Dec 22, 2006 at 6:02 AM | Permalink

    This result demonstrates very clearly the nature of computer modelling. In fact, on my reckoning, the projection is very nearly a simple extrapolation of the temperature trend between 1976 and today (especially if you use the NASA estimate showing 2005 warmer than 1998, as used in Gore’s film; other estimates, such as the Hadley Centre’s, were lower). The models all assume that GHG rises are the main cause of temperature rise in recent years. Therefore, it is inevitable that they predict the trend continues. It is certain that the modellers came up with very strange results from time to time, such as temperatures spiralling out of control or even reducing! Then they would have realised they missed something. They would have adjusted their assumptions until a realistic result was produced. This is not to denigrate modelling. Modelling can be a very useful tool (I use it myself). But you have to be very wary of the results. All that these models show is the ‘feasibility’ of such a temperature rise. They do not prove anything.

  2. Posted Dec 22, 2006 at 6:36 AM | Permalink

    It is certain that the modellers came up with very strange results from time to time, such as temperatures spiralling out of control or even reducing! Then they would have realised they missed something. They would have adjusted their assumptions until a realistic result was produced.

    How realistic is something that cannot be checked against reality? Actually if this is what the modellers are doing then they’re trapped inside of a very powerful delusion – that the model represents something as tangible as real data.

    All of these adjustments mean that the final arbiter of whether the model is working correctly is the modeller himself.

  3. Steve McIntyre
    Posted Dec 22, 2006 at 7:33 AM | Permalink

    If you assume a 1% increase in GHG levels extrapolated and if you assume a logarithmic impact of increased CO2 on temperature, what’s surprising about a linear trend?

    In 1991, as I’ve mentioned before, Ellingson surveyed the infrared modules of the then GCMs, found that all of them were inconsistent in different ways, but that they all agreed on the impact of 2xCO2, observing archly that this could be construed as evidence of tuning.
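
    To make the first point concrete, using the Myhre et al. CO2 expression cited in the head post: with C(t) = C_0 (1.01)^t,

    F(t) = 5.35 ln(C(t)/C_0) = 5.35 t ln(1.01) ≈ 0.053 t W/m^2,

    i.e., a 1% compound concentration increase pushed through a logarithmic forcing gives a forcing (and, to first order, a temperature) that is exactly linear in time.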

  4. TAC
    Posted Dec 22, 2006 at 9:09 AM | Permalink

    My understanding is that the GCMs try to represent the atmosphere/ocean/land system by employing some sort of simplified Navier-Stokes equations (i.e. incredibly complicated non-linear partial differential equations). I am thus a bit surprised to see that the forecasts depicted in the Figure look astonishingly like simulated predictions from a trivial one-dimensional linear trend model (in log-space) with AR(1) errors drawn from a normal distribution — zero mean and variance equal to the variance of prediction (the trajectories spread apart with increased extrapolation because of uncertainty in the trend coefficient).

    Putting on my skeptic’s hat: Do the GCMs linearize Navier-Stokes to the point that it collapses to a linear model?

    Also, if it is a (statistical) linear model, are we sure that there are not higher-order terms in the trend? Finally, based on the spectrum of the CRU data from 1856-2005 (as well as all of the longer proxy series), I would expect to see substantially more low-frequency noise in the error series.
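
    As a minimal sketch of the trivial alternative described above (the trend, AR(1) coefficient, and noise level below are illustrative guesses, not values fitted to Figure 27):

    ```python
    import random

    def simulate(n_years=80, trend=0.02, phi=0.6, sigma=0.05, seed=0):
        """Linear trend plus AR(1) noise: x(t) = trend*t + e(t), e(t) = phi*e(t-1) + w(t)."""
        rng = random.Random(seed)
        e, series = 0.0, []
        for t in range(n_years):
            e = phi * e + rng.gauss(0.0, sigma)
            series.append(trend * t + e)
        return series

    # Ten pseudo-"model runs"; a little spread in the trend coefficient mimics
    # the fan-out of the trajectories with increasing extrapolation.
    runs = [simulate(trend=0.015 + 0.001 * k, seed=k) for k in range(10)]
    for k, run in enumerate(runs):
        print(f"run {k}: T(year 80) = {run[-1]:+.2f} K")
    ```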

  5. TAC
    Posted Dec 22, 2006 at 9:16 AM | Permalink

    One other question: I realize that the GCMs are not expected to reproduce all of the statistical characteristics of future climates, or even of the current climate, so is there a generally accepted list of the statistical characteristics that GCMs can be expected to preserve? (e.g. CO2-induced temperature trend? Spectrum of temperature variability? Other features? etc.). A link would be fine. Thanks!

  6. Posted Dec 22, 2006 at 9:44 AM | Permalink

    Here’s the Mauna Loa record for carbon dioxide from 1959 to 2004 versus the modellers’ assumption of 1% compound growth.

    Mauna Loa data from here

    To make the assumption of 1% compound growth in order to force a linear T rise is to model a planet other than the Earth.
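
    The size of the mismatch is easy to quantify from the record’s endpoints (approximate annual means; exact values depend on the dataset version):

    ```python
    # Implied compound growth rate of atmospheric CO2 at Mauna Loa, 1959-2004,
    # versus the 1% per year used in the CMIP2 runs.
    c_1959, c_2004 = 315.97, 377.5   # ppm, approximate annual means
    rate = (c_2004 / c_1959) ** (1.0 / (2004 - 1959)) - 1.0
    print(f"observed: {100 * rate:.2f}%/yr vs. modelled: 1.00%/yr")   # roughly 0.4%/yr
    ```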

  7. RichardT
    Posted Dec 22, 2006 at 9:50 AM | Permalink

    #6
    The 1% growth in CO2 is not an assumption; it is one of a range of scenarios.

  8. Posted Dec 22, 2006 at 9:53 AM | Permalink

    Yes, but the charm of a math model lies in its complexity. In graduate school I learned that there were two criteria for a successful multiple-regression-based public policy model. First, it had to support the pre-conceived ideas of the customer, and second, it had to be impressively complex. Obscurity was a virtue.

    Just a few years ago I and my web development team at a dot-com start-up were excluded from understanding the algorithm that calculated the volume of whiskey that went through the radio-controlled liquor bottle spout. This guy from the “systems” section often publicly explained that mere web coders couldn’t be expected to understand systems-level programming. Later I took over the systems unit also, and after I fired him I got to see his algorithm. It was 2. He multiplied seconds by 2 to get ounces. He had never made a measurement and he had never tried another value. He just made it up. He had kept his job for years based on his mythical “conversion algorithm”.

    Investors put more than thirty million dollars into this start-up. They liked the whiz-bang complexity of the secret technology. They never seemed troubled by the fact that virtually nothing ever worked.

  9. gb
    Posted Dec 22, 2006 at 9:54 AM | Permalink

    Re #4:

    No, the equations of motions are not linearized.

  10. Michael Jankowski
    Posted Dec 22, 2006 at 10:04 AM | Permalink

    Do the GCM runs you speak of assume a 1% annual growth of CO2 in terms of EMISSIONS or CONCENTRATION? That could explain the difference you see between Mauna Loa (concentration) and the model assumption (if it’s emissions), because not all CO2 emissions result in an equal atmospheric change (increased CO2 uptake rates in plants, increases in oceanic sequestering, etc.).

  11. gb
    Posted Dec 22, 2006 at 10:12 AM | Permalink

    Re # 4.

    To see some high-resolution visualisations obtained with atmospheric/ocean models you can visit for example http://www.es.jamstec.go.jp/esc/eng/GC/index.html

  12. Paul Linsay
    Posted Dec 22, 2006 at 10:32 AM | Permalink

    #10, It has to be concentration since that’s what affects the radiative properties of the atmosphere. The radiative effect of CO2 is logarithmic in concentration, hence the need for an exponential increase to get the linear effect on temperature.

  13. jae
    Posted Dec 22, 2006 at 10:50 AM | Permalink

    I don’t understand much about modeling, but this looks ridiculous to me. It looks like they have just added a little “noise” to a straight line. If the models really say anything about climate, shouldn’t one expect some more variation? It looks very suspicious to me.

  14. Steve Sadlov
    Posted Dec 22, 2006 at 11:26 AM | Permalink

    RE: #8 – “and after I fired him I got to see his algorithm.”

    You are an engineering manager after my own heart. Shoddy Engineering and Science ethics are simply intolerable, and merit termination of employment (if not prosecution) of the perp(s).

  15. Steve Sadlov
    Posted Dec 22, 2006 at 11:30 AM | Permalink

    RE: #13 – Sigma moment … 😉 It may be a lot to ask for a model to inject the sorts of spectrally complex innate variability that one expects to see in natural systems. But it would not be too much to ask for a model to add estimated worst-case error envelopes, based on deconvolution of past variations and subsequent modelling of the resulting spectra to derive said envelopes.

  16. Reid
    Posted Dec 22, 2006 at 1:41 PM | Permalink

    Alice in Wonderland science.

    Conclusion first. Then program models to confirm conclusion.

    Voilà! Manufactured consensus.

  17. PHE
    Posted Dec 22, 2006 at 2:06 PM | Permalink

    To reiterate a point I made in No.1, do these models do anything more than project the current temperature trend (since 1976)? I mean that as a serious question to someone who believes these models are reliable and informative.

  18. loki on the run
    Posted Dec 22, 2006 at 2:25 PM | Permalink

    Re: 8 and 14.

    While I agree that the behavior of that person was reprehensible, these are the things that QA should find.

    Also, the code should be in some sort of SCM system. Investors should insist on that.

  19. EP
    Posted Dec 22, 2006 at 2:36 PM | Permalink

    How far back can these models go? Can they reconstruct the paleoclimate graphs of Mann et al. ?

  20. Paul Penrose
    Posted Dec 22, 2006 at 2:43 PM | Permalink

    Re #9:
    GB, Are you claiming that the AO-GCMs solve the N-S equations for each cell in the simulation at each step?

  21. Nobody in particular
    Posted Dec 22, 2006 at 2:50 PM | Permalink

    “How far back can these models go?”

    My guess is that if they tried to run it in reverse, it would have Chicago under a mile of ice sometime around 1910.

  22. PHE
    Posted Dec 22, 2006 at 3:45 PM | Permalink

    The models will certainly claim to have been calibrated against historical data. This is critical to any model of the real world. It certainly adds credibility, but is far from fail-safe.

  23. MarkR
    Posted Dec 22, 2006 at 4:04 PM | Permalink

    A big problem with climate models is that they apparently require a lot of computing horsepower. Strategies of independent audit (a la SteveM) won’t work. The only people who can test the models over historical time periods are the modellers themselves, and based on what we’ve seen so far, I don’t think they do it, and if they do they are suppressing the results.

  24. MarkR
    Posted Dec 22, 2006 at 4:07 PM | Permalink

    A way round this is to find a University that has the computer power, and also has a skeptical Climate Sciences Dept. Even a computer company like IBM might be interested in creating publicity for one of their new machines.

  25. Hans Erren
    Posted Dec 22, 2006 at 4:31 PM | Permalink

    A very interesting conclusion on model sensitivity can be drawn from this:
    As all the models run on a simple 1% exponential forcing, the (transient) climate sensitivity can be read off the graph immediately. With a 1% compound increase the doubling occurs after 70 years. So the climate sensitivity is:

    low: 1.2 K/2xCO2
    median: 1.75 K/2xCO2
    high: 2.25 K/2xCO2

    Which is on the low side of the usual model climate sensitivity averages (1-3 K/2xCO2)!

    see also:
    http://home.casema.nl/errenwijlens/co2/tcscrichton.htm
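
    The doubling-time arithmetic behind that reading: with 1% compound growth,

    t_double = ln(2) / ln(1.01) ≈ 69.7 years,

    so the warming read off the graph at about year 70 is, by construction, the transient response to doubled CO2.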

  26. Hans Erren
    Posted Dec 22, 2006 at 4:32 PM | Permalink

    prefer a forum where you can edit your mistaks

  27. Steve Bloom
    Posted Dec 22, 2006 at 4:37 PM | Permalink

    Er, Steve M., did you somehow fail to observe that the cited paper is from *October 2000*, and so is pretty much of historical interest at this point? Is there perhaps a more up-to-date comparison, maybe even of the AR4 models?

    The age of the paper aside, it’s kind of funny how the regulars are so quick to take everything Pat did at face value. For those who clearly failed to take the time, here’s the abstract:

    “We present an overview of results from the most recent phase of the Coupled Model Intercomparison Project (CMIP). This phase of CMIP has archived output from both unforced (“control run”) and perturbed (1% per year increasing atmospheric carbon dioxide) simulations by 15 modern coupled ocean-atmosphere general circulation models. The models are about equally divided between those employing and those not employing ad hoc flux adjustments at the ocean-atmosphere interface. The new generation of non-flux-adjusted control runs are nearly as stable and agree with observations nearly as well as the flux-adjusted models. This development represents significant progress in the state of the art of climate modeling since the Second (1995) Scientific Assessment Report of the Intergovernmental Panel on Climate Change (IPCC; see Gates et al. 1996). From the increasing-CO2 runs, we find that differences between different models, while substantial, are not as great as would be expected from earlier assessments that relied on equilibrium climate sensitivity.”

    It was obvious to me upon reading the foregoing where Pat had gone wrong, but here’s a more specific quote from the paper for him and anyone else who didn’t get it:

    “The next phase, CMIP2, collected output from both model control runs and matching runs in which atmospheric carbon dioxide increases at the rate of 1% per year. Under this common scenario of radiative forcing, any differences among the models are due to differences in their responsiveness, e.g., their differing equilibrium climate sensitivities and rates of ocean heat uptake, which in turn arise from differences in resolution, other numerical aspects, and parameterizations of sub-gridscale processes. CMIP2 thus facilitates the study of intrinsic model differences at the price of idealizing the forcing scenario. No other anthropogenic climate forcing factors, such as anthropogenic aerosols (which have a net cooling effect), are included. Neither the control runs nor the increasing-CO2 runs in CMIP include natural variations in climate forcing, e.g., from volcanic eruptions or changing solar brightness.”

    Let me know if there’s anything unclear about this.

  28. PHE
    Posted Dec 22, 2006 at 5:56 PM | Permalink

    Year 2000 can hardly be called ‘historical’. I would hope the modelling has moved on since then, unlike tree-ring proxies, which haven’t really progressed since 1998. By the way, 1998 is a date the climate itself hasn’t moved on from, despite the repeated claims of ‘accelerating change’. Irrespective of whether these results are representative of the latest modelling findings, it remains suspicious that their predictions are all so close to a simple straight-line extrapolation. Presumably, these results were important for the IPCC report of 2001, widely reported as representing a scientific consensus so convincingly. Modelling is not something that has been discussed much here, so it would be interesting to hear references to more up-to-date and convincing modelling results.

  29. Steve Bloom
    Posted Dec 22, 2006 at 6:28 PM | Permalink

    Re #28: Read what I wrote again, please, in particular that last quoted paragraph. Pat would have been wrong even if those models were still current.

  30. Posted Dec 22, 2006 at 6:38 PM | Permalink

    That’s the problem with climate models. They’re always obsolete. They’re always less powerful than the ones that are to come.

    Let’s read that last paragraph that Bloom quoted but with emphasis:

    “The next phase, CMIP2, collected output from both model control runs and matching runs in which atmospheric carbon dioxide increases at the rate of 1% per year. Under this common scenario of radiative forcing, any differences among the models are due to differences in their responsiveness, e.g., their differing equilibrium climate sensitivities and rates of ocean heat uptake, which in turn arise from differences in resolution, other numerical aspects, and parameterizations of sub-gridscale processes. CMIP2 thus facilitates the study of intrinsic model differences at the price of idealizing the forcing scenario. *No other anthropogenic climate forcing factors, such as anthropogenic aerosols (which have a net cooling effect), are included. Neither the control runs nor the increasing-CO2 runs in CMIP include natural variations in climate forcing, e.g., from volcanic eruptions or changing solar brightness.*”

    So let’s say that the climate model runs ignore natural variations from wholly natural factors, anthropogenic factors which produce cooling, and force carbon dioxide up at an exponential rate. Can you tell us when climate models will start modelling something like reality under realistic scenarios?

  31. Hans Erren
    Posted Dec 22, 2006 at 6:48 PM | Permalink

    re 30:
    John it has been done, pick your scenario:
    http://www.grida.no/climate/ipcc_tar/wg1/552.htm

    http://www.grida.no/climate/ipcc_tar/wg1/338.htm

    see IPCC tar
    9.3 Projections of Climate Change
    9.3.1 Global Mean Response
    9.3.1.1 1%/yr CO2 increase (CMIP2) experiments
    9.3.1.2 Projections of future climate from forcing scenario experiments (IS92a)
    9.3.1.3 Marker scenario experiments (SRES)
    9.3.2 Patterns of Future Climate Change
    9.3.2.1 Summary
    9.3.3 Range of Temperature Response to SRES Emission Scenarios

  32. Fergus
    Posted Dec 22, 2006 at 6:56 PM | Permalink

    #16 Conclusion first. Then program models to confirm conclusion.

    This is the kind of thing I’m pointing out in Annan’s paper over in “Road Map”; although bender disagrees with me (I think Annan is “conveniently cherry-picking” what he wants and using an “expert prior” to clean up what his models are telling him).

  33. jae
    Posted Dec 22, 2006 at 6:56 PM | Permalink

    30: Yeah, LOL. It should say, “Here we ignore all of reality, except CO2, which we exaggerate because of the precautionary principle.” I simply can’t see how anyone can have any faith in these models.

  34. John Norris
    Posted Dec 22, 2006 at 7:15 PM | Permalink

    Re #27, 29

    Steve Bloom,

    Sorry, I apologize for being a little slow. If I understand correctly, Pat assumed that since the authors varied only CO2 by 1% per year, then the GCM modelers have given up on varying anything other than CO2. You are pointing out that the authors stuck strictly with varying just the CO2, to somehow evaluate the quality of the models. Is that your point?

    If that is your point, I get it. I don’t, however, get the point of the paper. I am not sure how you can evaluate the quality of the models when only changing one value from the control set. I read their conclusion but do not understand how they got there. Do you?

  35. Willis Eschenbach
    Posted Dec 22, 2006 at 7:27 PM | Permalink

    One oft-repeated claim of the modelers is that the effect of forcing can be calculated as the difference between their control runs and their runs with forcings. Gavin Schmidt says that the fact that the GISSE model underestimates cloud cover by a whopping 13%, for example, is supposed to make no difference; it somehow comes out of the equation when you subtract the control run from the run with forcings.

    Me, I don’t buy that. I just took another look at the control runs for the CMIP GCMs, and they run at wildly different temperatures. The lowest shows a global average of 11.5°C, while the highest shows an average of 16.5°C.

    I see absolutely no reason to believe that a climate model that is so far off the mark regarding the average temperature will have all of its errors somehow magically cancel out to give the correct answer when forcings are introduced. The control run temperature is some function of the form

    T_1=f(G,O,SD,SI,BC,OC,MD,SS,LU,SO,VL,...)

    where G = Well-mixed greenhouse gases, O = Tropospheric and stratospheric ozone, SD = Sulfate aerosol direct effects, SI = Sulfate aerosol indirect effects, BC = Black carbon, OC = Organic carbon, MD = Mineral dust, SS = Sea salt, LU = Land use change, SO = Solar irradiance, VL = Volcanic aerosols, etc., and all of those are held constant.

    The forced temperature, on the other hand, is some function of the form

    T_2=f(G+\Delta G,O+\Delta O,SD+\Delta SD,SI+\Delta SI,BC+\Delta BC,OC+\Delta OC,MD+\Delta MD,SS+\Delta SS,LU+\Delta LU,SO+\Delta SO,VL+\Delta VL ...)

    In order for their claim to be credible, we have to assume that those functions are linear with respect to each of those terms, such that

    f(\Delta G,\Delta O,\Delta SD,\Delta SI,\Delta BC,\Delta OC,\Delta MD,\Delta SS,\Delta LU,\Delta SO,\Delta VL) = T_2-T_1

    Given that the climate is nonlinear, and given the errors in the results of the GCMs (e.g., GISSE is in error by 20 W/m^2 over the tropics), I doubt that greatly.

    w.
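
    A toy illustration of the linearity point above (nothing here is a climate model): for a nonlinear response, the forced-minus-control difference depends on the base state, so two models whose control climates differ by 5°C need not agree on the difference either.

    ```python
    def f(x):
        return x ** 2   # stand-in for any nonlinear response

    dx = 1.0   # the same "forcing" applied to both model states
    for base in (11.5, 16.5):   # the spread of CMIP control-run temperatures
        print(f"base state {base}: forced minus control = {f(base + dx) - f(base):.1f}")
    # Prints 24.0 and 34.0; only for a linear f would the two differences coincide.
    ```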

  36. Willis Eschenbach
    Posted Dec 22, 2006 at 7:29 PM | Permalink

    Oddity in the latex, previewed perfectly, the “#8230” should be “…”

    w.

  37. Steve Bloom
    Posted Dec 22, 2006 at 7:33 PM | Permalink

    Let me try again. The cited study was an *audit* of the models. To do a basic audit, the authors looked at how the models were able to handle the major forcing (CO2), holding everything else constant. IOW, these model runs were in no way intended to reflect reality. They proceeded by bookending the possible range of CO2 levels, looking first at CO2 held constant (thus tracking how the models reached equilibrium from conditions as they existed in 2000) and then at the 1% increase rate (which is what Pat looked at). In this latter case, what was being tested in effect was how well the models would reasonably follow the nice straight line Pat provided. To then criticize the models for following the line is to misunderstand the study.

    Thank you, Hans, for noting that there were indeed comparisons of actual climate projections by the models. As I already noted, by now all of this has been done again for the AR4 models.

  38. jae
    Posted Dec 22, 2006 at 7:47 PM | Permalink

    Maybe I’m way off here, due to lack of knowledge about the models. But it seems to me like they have to assume some sort of stationarity or monotonic behavior with respect to all the variables. We know from past records that this is not the case. The models cannot explain a phenomenon such as the LIA or MWP, since they evidently don’t accommodate the effects of cyclic phenomena like solar cycles. Face it, we don’t know enough about the variables to be doing this kind of modeling.

  39. bender
    Posted Dec 22, 2006 at 7:53 PM | Permalink

    This is not an audit. It is a sensitivity analysis.

  40. Steve Bloom
    Posted Dec 22, 2006 at 8:05 PM | Permalink

    Re #38: jae, the models (broadly speaking) have no problem handling any of the things you listed. The difficulty with any such forcing prior to the instrumental period is knowing what to input into the models. Even a perfect model wouldn’t be able to track historical climate if these forcings weren’t known with considerable accuracy. For example, the LIA might have been insolation changes, or it might have been vegetation growth (resulting in CO2 reduction and albedo changes) due to concurrent disease-driven depopulations (Europe followed by the Americas), or a combination of the two.

  41. John Norris
    Posted Dec 22, 2006 at 8:23 PM | Permalink

    Re #27:
    “… Is there perhaps a more up-to-date comparison, maybe even of the AR4 models?”

    Re #37:
    ” … As I already noted, by now all of this has been done again for the AR4 models.”

    I’m guessing you like the new models much better. So let’s talk about those. Where can I find the AR4 models?

  42. Willis Eschenbach
    Posted Dec 22, 2006 at 8:58 PM | Permalink

    I was not surprised that the average is linear; as Steve B. points out, we’d expect that. What I was surprised at was the non-linearity of some of the results. The GFDL model, for example, rises slowly for about 40 years, and then the temperature rises much faster. The Durbin-Watson statistic for the residuals from a linear trend line is 0.76, indicating that the linear model doesn’t fit. Compare this with a model which is linear, such as HadCM3, with a DW statistic of 1.69. Why should the GFDL model not show a linear response to exponentially rising CO2?

    To me, this is why the CMIP project conclusions can’t be trusted. They’ve done the tests, but they haven’t inquired into the results.

    w.
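
    For anyone wanting to reproduce the diagnostic, the Durbin-Watson statistic is simple to compute from trend-line residuals; the series below is a made-up placeholder, not digitized GFDL output.

    ```python
    import numpy as np

    def durbin_watson(y):
        """DW statistic of residuals from a least-squares trend line.
        Near 2: uncorrelated residuals; well below 2: leftover structure."""
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t, y, 1)
        e = y - (slope * t + intercept)
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

    y = np.array([0.00, 0.02, 0.03, 0.05, 0.04, 0.10, 0.22, 0.35, 0.55, 0.80])
    print(f"DW = {durbin_watson(y):.2f}")   # an accelerating series gives DW well below 2
    ```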

  43. Fergus
    Posted Dec 22, 2006 at 9:02 PM | Permalink

    Also parameters aren’t well constrained within models and between models. Which is why appeals to expert priors etc. basically amount to debating how many angels are dancing on the head of a pin.

  44. jae
    Posted Dec 22, 2006 at 9:06 PM | Permalink

    So it’s like this, eh? 1.) we don’t know much about the effects of clouds, so we “parameterize” this variable. (2) We don’t know much about the effects of the solar cycles, so we ignore them. (3) but we know EVERYTHING about CO2, so we emphasize this. I don’t know much about climate models, but I’m very very worried.

  45. John Norris
    Posted Dec 22, 2006 at 9:10 PM | Permalink

    I like the IPCC TAR charts in #31. Hockey sticks from models, rather than proxies.

    I recall the leak a month or two ago about AR4 having a narrower range of predicted temps for 2100; due to better science of course. Will the AR4 models still make a hockey stick handle?

  46. Pat Frank
    Posted Dec 22, 2006 at 10:00 PM | Permalink

    #27 — “Let me know if there’s anything unclear about this.”

    It’s unclear to me how you missed that the models implicitly assert that cloud cover won’t change with temperature, that there is no change in energy dissipation rate from more vigorous convection, and that there are no oscillations in energy distribution among the various climate energy sinks. The models, in short, suppose that Earth climate merely warms monotonically with GHGs. That is hardly reasonable.

    Despite your attempt at dismissal, Steve B., none of those models are irrelevant. Compare HadCM2 vs HadCM3, for example. There is nothing particularly unique in the HadCM3 trend. HadCM2 is closer to the ‘more accurate’ GCM average than HadCM3, and doesn’t show as many precipitous jumps in temperature (e.g., ~ -0.5 C at year 67). Who says later is better?

  47. Ken Fritsch
    Posted Dec 22, 2006 at 10:01 PM | Permalink

    The article notes that it does not contain a discussion of the temperature differences in the control runs in Figure 1 and gives a couple of references that evidently do. The excerpt below from the article implies that the models have stability, but that 9 of the 15 do use fluxes to maintain it, without, to my knowledge, mentioning which models did not use fluxes.

    The scale is such that the relatively large long-term temperature drifts of some models (relative, that is, to the 0.6 degree C anomaly we have seen over the past 100 years), in my view anyway, and the short-term excursions are more difficult to detect by eye. If they were plotted as a difference from the starting point, those variations would have been more readily visualized, hmmm (or whatever that expression is that Steve B frequently interjects at a point like this). I am guessing that fewer of the newer models use fluxes to maintain stability and was wondering if Steve B could guide us to where the computer models have moved on to — perhaps with similar sensitivity tests.

    Perhaps the most striking aspect of Figure 1 is the stability of model-simulated temperature and precipitation. The stability occurs despite the fact that 6 of the 15 CMIP2 models refrain from employing ad hoc flux adjustments at the air-sea interface. Until a few years ago, conventional wisdom held that in order to suppress unrealistic climate drift, coupled ocean-atmosphere general circulation models must add such unphysical flux “corrections” to their governing equations. The 1995 IPCC assessment (Gates et al. 1996) diplomatically expressed the concern that “[f]lux adjustments are relatively large in the models that use them, but their absence affects the realism of the control climate and the associated feedback processes”.

  48. jae
    Posted Dec 22, 2006 at 10:42 PM | Permalink

    Smoke and mirrors.

  49. bender
    Posted Dec 22, 2006 at 10:44 PM | Permalink

    Also parameters aren’t well constrained within models and between models.

    Fergus, I made this exact point at RC once and Gavin Schmidt’s response was humorously dodgy. I’ll see if I can dig it up.

  50. bender
    Posted Dec 22, 2006 at 11:00 PM | Permalink

    Here it is – my question on GCM parametrization being a poorly constrained problem. (I see though that a much more satisfactory answer has been inserted after some delay.)

  51. Fergus
    Posted Dec 22, 2006 at 11:18 PM | Permalink

    Seems to be a decent reply by Gavin on model development. But from what I’ve seen of climate model intercomparisons, the same parameter perturbed in different models has different effects, and what makes one model “sensitive” to the perturbation is not a big factor in another model. But then these models are all used to show that they produce a similar hockey stick from CO2 etc., even though physically they are doing different things at a lower level!

  52. Chris H
    Posted Dec 23, 2006 at 3:10 AM | Permalink

    There seems to be an assumption in the GCM community that in the absence of changes in external forcings the climate is inherently stable. Here are a few assertions that give me this impression.

    1. GCMs are ‘boundary value’ problems and not ‘initial condition’ problems.

    2. The GCMs are run for a warm up period and then perturbed.

    3. Model runs without external perturbations are compared to model runs with perturbations.

    4. Historical climate changes are attributed to external perturbations.

    Why is this assumption of stability justified? I would expect the climate to be chaotic, contain oscillations over different time periods and have many possible stable states.

  55. Pat Frank
    Posted Dec 25, 2006 at 12:50 AM | Permalink

    #31 — Hans, all those projections are just calculations for various emission scenarios. They don’t change the conclusion that in the absence of human GHG forcings, or changes in other external forcings such as irradiance or volcanism, the GCMs predict a flat and steady climate across time.

    This supposes that Earth climate occupies just one steady state in the absence of perturbation. This is not realistic for far-from-equilibrium flux systems, which show all manner of oscillations about more than one quasi-stable state. I’ve pointed out the example of the Lorenz butterfly before, and that example is very telling for what is missing in these climate models: spontaneous oscillations among many states of approximately equal energy but different atmospheric temperatures.

    The projections in the LLNL sensitivity analysis presume, for example, that there is no change in cloudiness or in Earth albedo even while temperature increases. How likely is that?

    A steady Earth temperature is also not consistent with past climate history (unless one wants to believe the HS).

    #40 — “Even a perfect model wouldn’t be able to track historical climate if these forcings weren’t known with considerable accuracy.”

    Collins* evaluated HadCM3 under conditions of a perfect climate model, and found it could not predict global climate past 1 year. It was better at regional predictions, being able to predict climate up to 10 years out, but only if there was a large moderating force (i.e., considerable climate momentum) provided by, e.g., the Atlantic Ocean. This shows that even a perfect GCM cannot predict future climate.

    * “Climate predictability on interannual to decadal time scales: the initial value problem” (2002) Climate Dynamics 19, 671–692

  56. Hans Erren
    Posted Dec 25, 2006 at 5:44 AM | Permalink

    Pat there are two issues here:

    How realistic are the SRES emission scenarios, which is an economic debate.
    How realistic is the Bern CO2 cycle model, which is diffusion physics.
    How realistic is the climate sensitivity range of 1-3 K/2xCO2, which is all about feedbacks, sun and aerosols.

    Castles and Henderson commented on the SRES scenarios
    Peter Dietze commented on the Bern model
    Ferdinand Engelbeen commented on the aerosols
    Willie Soon commented on the sun
    Nir Shaviv commented on geological climate sensitivity
    http://www.sciencebits.com/OnClimateSensitivity

    Conclusions:
    extreme emissions are unrealistic,
    CO2 sinks won’t saturate,
    climate sensitivity is 1.3 +-0.4 K/2xCO2

    Meaning that in this century we will stay well below an extra warming of 1 degree.
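
    The arithmetic behind that closing claim, taking an assumed end-of-century concentration of 560 ppm (a placeholder, not Hans’s number) against today’s roughly 380 ppm:

    ΔT ≈ S × ln(560/380)/ln(2) = 1.3 × 0.56 ≈ 0.7 K,

    which stays below an extra degree provided both the sensitivity estimate and the assumed concentration path hold.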

  57. Hans Erren
    Posted Dec 25, 2006 at 5:45 AM | Permalink

    three issues

  58. Ken Fritsch
    Posted Dec 25, 2006 at 12:39 PM | Permalink

    Re: #50

    It seems to me that the parameter issues with climate models were discussed previously here at CA in reference to Gavin Schmidt’s comments and those of another modeler who posted here, whose name currently escapes me and whose comments, sadly, I could not track.

    Schmidt notes that climate models can use 3 or 4 parameters (which are not well resolved when treated on an individual basis) to tune the models. The unnamed modeler noted that he did not necessarily distinguish between parameters and fluxes, but also commented that parameters were sufficiently “fixed” to make tuning with them difficult. The unnamed modeler, I thought, was here to give us more details, but he left without divulging much — even though I thought he was given due respect.

    I remain somewhat confused on the use of fluxes and parameters to stabilize and tune the climate models.

  59. Ken Fritsch
    Posted Dec 25, 2006 at 1:15 PM | Permalink

    Willis E introduced the thread here titled “CMIP Control Runs“:

    http://www.climateaudit.org/index.php?paged=6

    to discuss the controls in Figure 1 of the article linked in Pat Frank’s introduction to this thread. His discussion was more directed at the average temperature differences in the control runs between models being in the range of 4 degrees C. I have not found any good references explaining those differences in any of my, admittedly, perfunctory searches.

    What bothers me additionally about these control runs are the short- and long-term temperature excursions, noted earlier, which to my way of thinking would make the real, smaller changes (such as we have experienced over decadal time in the instrumentally measured period) difficult to impossible to extract from the “noise”. My question is: what is this “noise” attributed to in these models’ control runs? It appears that the problem of extracting the signal from this model noise is mitigated by the projection of a significantly larger trend in temperature than we have been experiencing over the past century or so.

    I was able to find more information on the models used in this study and, in particular, determine which of them used fluxes:

    http://www-pcmdi.llnl.gov/projects/modeldoc/cmip/tables.html
    http://www-pcmdi.llnl.gov/projects/cmip/Table.php

  60. Steve Bloom
    Posted Dec 25, 2006 at 8:08 PM | Permalink

    I’m not sure if this is all of them, but it looks like most of the AR4 model papers can be found here. Of course a full-blown comparison of the sort that Hans posted above won’t be available until the AR4 is published. Pat may be interested in looking at this, this and this in particular (abstracts only for the last two, unfortunately). There is also this recent paper from GISS. For anyone with a whole lot of time on their hands, it appears that all of the AR4 model run data are available on the PCMDI site, although registration is required for access.

  61. Willis Eschenbach
    Posted Dec 26, 2006 at 12:33 AM | Permalink

    Re the recent paper from GISS cited in 60, could someone translate this statement for me?

    We examine the annular mode within each hemisphere (defined here as the leading empirical orthogonal function and principal component of hemispheric sea-level pressure) as simulated by the IPCC AR4 ensembles of coupled ocean-atmosphere models.

    Why is this called an “annular” (ring-shaped) mode? What is the difference between the leading EOF and principal component? Why does the “annular mode” involve both?

    w.

  62. Posted Dec 26, 2006 at 6:02 AM | Permalink

    Re #57

    It’s four issues if you include “…an almost fanatical devotion to the Pope”

  63. Hans Erren
    Posted Dec 26, 2006 at 7:57 AM | Permalink

    😀
    I didn’t expect a kind of Spanish Inquisition

    http://people.csail.mit.edu/paulfitz/spanish/script.html

  64. Posted Dec 26, 2006 at 8:21 AM | Permalink

    “NO-ONE EXPECTS….”

  65. bender
    Posted Dec 26, 2006 at 11:56 AM | Permalink

    Re #58
    Ken Fritsch, was that modeler Isaac Held?

  66. Ken Fritsch
    Posted Dec 26, 2006 at 12:17 PM | Permalink

    The unnamed modeler noted that he did not necessarily distinguish between parameters and fluxes, but also commented that parameters were sufficiently “fixed” to make tuning with them difficult. The unnamed modeler, I thought, was here to give us more details, but he left without divulging much — even though I thought he was given due respect.

    Dr. Isaac Held (the modeler’s name that I had forgotten) entered the CA blog discussion (with trepidation) on the thread “Truth Machines”, 10/03/2006. In comment #64 of that thread Dr. Held stated the following, but as I recall the promised comment never came.

    I do want to comment on the tuning question. I haven’t the time right now — I will get back to this in a day or two.

    http://www.climateaudit.org/?p=845

  67. Ken Fritsch
    Posted Dec 26, 2006 at 12:31 PM | Permalink

    Bender, I missed your post before my last comment, but thanks for remembering and offering the name. I got so frustrated that I started perusing the bibliographies for climate modeling articles and eventually did find Held’s name. It took me a while after that to pinpoint the Truth Machines thread. I thought that discussion made some good points on tuning models and “fixed” and “free” parameters and certainly improved my understanding of that aspect of modeling. I was eagerly awaiting Dr. Held’s reply, but to my awareness never saw one.

  68. Posted Dec 26, 2006 at 2:25 PM | Permalink

    When you build a climate model, you assign values to all known inputs and processes. You assume that the model is correct when it can accurately model the past and then claim that the model can predict the future.

    Here is what I think is the fallacy behind ALL these models:

    If there is a significant input or process that you don’t know about, you will assign improper weights to the other factors during the process of making the model match past history. This makes one or more of the weights wrong and damages the accuracy of the model to an unknowable degree.

    My conclusion: if you cannot be absolutely sure that you include ALL significant influences in the model, you can produce a model that accurately replicates the past, but cannot produce a trusted projection. You can’t even know the error magnitude.

    Further, “robustness” is irrelevant when you have left out an important factor.

    Comments please.

    Thanks
    JK
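
    This is the classic omitted-variable problem, and it is easy to demonstrate with a toy model (everything below is synthetic):

    ```python
    import numpy as np

    t = np.arange(50)
    known = 0.02 * t                    # the input the modeller knows about
    omitted = 0.01 * t                  # a real driver the model leaves out
    past = known + omitted              # true weights are (1, 1)

    w = np.polyfit(known, past, 1)[0]   # "tune" the model on the past
    print(f"fitted weight on the known input: {w:.2f}")   # 1.50, not 1.00

    # The tuned model reproduces the past perfectly, but if the omitted driver
    # stops tracking the known one, the projection errs by an amount the
    # modeller has no way to estimate.
    future_known, future_omitted = 0.02 * 60, 0.01 * 50
    print(f"truth: {future_known + future_omitted:.2f}, model: {w * future_known:.2f}")
    ```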

  69. jae
    Posted Dec 26, 2006 at 7:09 PM | Permalink

    68: Yeah, isn’t this called overfitting? bender?

  70. bender
    Posted Dec 26, 2006 at 7:12 PM | Permalink

    Re #67

    I was eagerly awaiting Dr. Held’s reply, but to my awareness never saw one.

    Correct – he never did return. Which made me sad. Because he clearly knows what he’s talking about.

  71. bender
    Posted Dec 26, 2006 at 7:14 PM | Permalink

    Re #69
    Yes. Overfitting to an existing historical sample whose population parameters (i.e., those which determine the future) are incompletely specified.
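
    A minimal numerical illustration of that point (synthetic data; the in-sample fit improves with model order while the extrapolation deteriorates):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(20, dtype=float)
    y = 0.02 * t + rng.normal(0.0, 0.05, t.size)   # linear truth plus noise

    lean = np.polyfit(t, y, 1)   # parsimonious model
    fat = np.polyfit(t, y, 9)    # overfitted model: chases the noise

    t_new = 30.0                 # out-of-sample point
    print("truth ~", 0.02 * t_new)
    print("linear fit:", round(np.polyval(lean, t_new), 2))
    print("9th-order fit:", round(np.polyval(fat, t_new), 2))   # typically far off
    ```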

  72. Steve Sadlov
    Posted Dec 26, 2006 at 7:38 PM | Permalink

    RE: #64 – I always loved the WW-I head gear … LOL!

  73. Posted Dec 27, 2006 at 3:29 AM | Permalink

    Re #68, 72

    I think that I am going a little further and saying that if you cannot be absolutely sure that you know all of the inputs, then your model is worthless. AND in the climate field we can never be sure of knowing all inputs, because they may include one-time events outside of the solar system.

    Thanks
    JK

  74. Willis Eschenbach
    Posted Dec 27, 2006 at 6:15 AM | Permalink

    Jim K, your post here is right on the mark. The obvious corollary is that removing one forcing from a tuned model does not prove the necessity for that forcing, as is claimed all the time by the modelers. Having tuned the model, they then say “here’s our model with just natural forcings, doesn’t work; here’s just CO2, doesn’t work; but we include them both and model the past perfectly! Q.E.D.”

    Regarding your statement that:

    My conclusion: if you cannot be absolutely sure that you include ALL significant influences in the model, you can produce a model that accurately replicates the past, but cannot produce a trusted projection. You can’t even know the error magnitude.

    In this regard, please see the study in the Dec. 1 Science Magazine:

    Phytoplankton and Cloudiness in the Southern Ocean
    Nicholas Meskhidze and Athanasios Nenes

    ABSTRACT

    The effect of ocean biological productivity on marine clouds is explored over a large phytoplankton bloom in the Southern Ocean with the use of remotely sensed data. Cloud droplet number concentration over the bloom was twice what it was away from the bloom, and cloud effective radius was reduced by 30%. The resulting change in the short-wave radiative flux at the top of the atmosphere was -15 watts per square meter, comparable to the aerosol indirect effect over highly polluted regions. This observed impact of phytoplankton on clouds is attributed to changes in the size distribution and chemical composition of cloud condensation nuclei. We propose that secondary organic aerosol, formed from the oxidation of phytoplankton-produced isoprene, can affect chemical composition of marine cloud condensation nuclei and influence cloud droplet number. Model simulations support this hypothesis, indicating that 100% of the observed changes in cloud properties can be attributed to the isoprene secondary organic aerosol.

    Note the size of the effect, -15 watts/m2 … consider the amount of the earth covered by phytoplankton … and then recall Jim Hansen’s claim that his GISS model can determine the size of the TOA radiation imbalance to 0.85 ± 0.15 W/m2 …

    Note also the sign of the effect. As the water gets hotter, the phytoplankton make it cooler by putting up a sunshade … yet another feedback which is overlooked by the climate modelers, who seem to believe only in positive feedbacks.

    w.

  75. bender
    Posted Dec 27, 2006 at 6:53 AM | Permalink

    Here’s an example of a paper that makes precisely that argument: take out GHG forcings and the model predictions don’t fit the observations. This of course presumes that the model is structured correctly and tuned correctly.

    But read Meehl’s own caveat:

    “The good correspondence between the model simulations and the observations should not be overinterpreted. If the assumed forcings were correct, then this agreement would indicate that the model’s climate sensitivity was realistic. Forcing uncertainties, however, admit a quite wide range of sensitivity possibilities.”

  76. beng
    Posted Dec 27, 2006 at 8:53 AM | Permalink

    RE 74:

    Note the size of the effect, -15 watts/m2 … consider the amount of the earth covered by phytoplankton … and then recall Jim Hansen’s claim that his GISS model can determine the size of the TOA radiation imbalance to 0.85 ± 0.15 W/m2

    Willis, maybe the phytoplankton effect is what the GCMs are missing when they (the models) fail to reproduce the low-level stratocumulus clouds over, for ex., much of the relatively cool waters in the eastern Pacific.

  77. Reid
    Posted Dec 27, 2006 at 11:19 AM | Permalink

    Even if the GCM algorithms are exactly known they will still fail as time progresses in a multi-decade model.

    If there was a perfect GCM it would need perfect data to yield skillful results over multi-decades. By perfect data I mean accurate temperature to a minimum of 10 significant digits down to the quantum level for the entire planet and a supercomputer that could process that info in a useful time period. Not going to happen in our lifetime to say the least. Remote sensing technology may evolve to the point of quantum level precision temperature readings for the entire planet. And supercomputers may evolve to process that massive data set. Will a perfect GCM ever come to be?

  78. Paul Penrose
    Posted Dec 27, 2006 at 1:55 PM | Permalink

    Reid,
    Of course, as the old saw goes, “All models are wrong…”, but perfection is not required. Even imperfect models can be useful; the trick is knowing which ones are useful and to what degree. Indeed, we use imperfect models all the time in the engineering realms, but the characteristics of such models are generally well understood and their uncertainties can be taken into account. This is where the GCMs’ greatest failings are, I believe.

  79. K
    Posted Dec 27, 2006 at 3:47 PM | Permalink

    Well said Penrose: Perfection is not required.

    Trusted models can point out suspect patches of data. And when trusted models obviously fail, the failure often points directly to where the model needs correction or underlying assumptions are wrong.

    Even primitive models are worthwhile. They create something to inspect, a framework for dialog. The person making the poor model learns from the critique. And the reviewers learn from spotting the errors.

    The important thing is what researchers learn, not what some model predicts today. Models will steadily improve. But climate models may still not be definitive in our lifetime.

    A worse problem than incomplete or erroneous models is to believe that those which seem to work mean the question has been settled. That is my only real objection to the AGW cadre.

    Steve has done an astonishing job.

  80. jae
    Posted Dec 27, 2006 at 5:47 PM | Permalink

    Here’s another very plausible strong negative feedback that is probably not incorporated into GCMs.

  81. Pat Frank
    Posted Dec 27, 2006 at 7:46 PM | Permalink

    This (I hope) will be relevant to the comments in this thread about negative feedbacks entered into climate models. Following a little criticism about my choice of CMIP GCM sensitivity projections for comparison with pure GHG forcing, I went to the literature and digitized a set of outputs from two GFDL GCMs that differ mostly in their spatial resolution (R15 below is coarser-grained).

    Partial Legend: Fig. 1. “Time varying decadal mean surface air temperature (SAT) responses simulated by (a) the R15 set of six climate change experiments and (b) the R30 set of three climate change experiments.”

    These are true projections, and according to the paper included aerosol and albedo feedbacks, including ice. They included simulated ocean and THC responses and projected climate over the whole globe. Atmospheric CO2 was projected to increase at 1% a year, making them comparable to the CMIP GCM projections.

    The data are from: Keith W. Dixon et al., “A comparison of climate change simulations produced by two GFDL coupled climate models”, Global and Planetary Change 37 (2003) 81-102.

    The plot below shows the global average temperatures predicted by the two GFDL runs, here compared with the CMIP 10-GCM average that was shown in the initial plot of this thread, and my calculated 1% pure GHG forcing projection, also from the initial plot.

    Also included in the comparison are two other pure GHG projections: One is calculated to show temperature if atmospheric CO2 increases according to its current trend (BAU = ‘Business As Usual’). The second is a worst case scenario (WCS) that projects pure GHG warming if the rate of increase of atmospheric CO2 is driven by the current rate of increase of emissions, a rate which then continues unabated into the future. I.e., the WCS includes a CO2 acceleration factor that is absent from the BAU. As before, these last two calculations also included inputs from projected increases of methane and nitrous oxide derived from nonlinear fits to their current rates of increase.

    So, here’s the comparison plot, and I hope it comes out:

    All the plots have been normalized to the same start temperature. Here’s what I notice. Over its more limited range, the CMIP 10-GCM average is very similar to both GFDL projections. Therefore, the CMIP GCM projections were valid climate simulations as offered, and not just audits as Steve B. claimed.

    Second, the straight 1% forcing projection once again does a very good job of reproducing the predictions of both GFDL GCMs. The SDs of the GFDL outputs were about +/- 0.2-0.3 C, and the line for pure 1% GHG forcing is easily that close over the entire range of the projection. One wonders, then, whatever happened to the feedbacks. Whatever global climate feedbacks are included, they turn out to have approximately zero effect.

    Third, all of the GCM projections are notably more pessimistic about future temperatures than either the BAU projection or the WCS projection. As the pure 1% GHG projection does a pretty good job in reproducing the outputs of complex GCMs over quite long projection times under the same assumed CO2 increase rate, then it seems pretty clear that the WCS and BAU projections should likewise be fair representations of best-guess climate projections using state-of-the-art GCM climate models.

    The BAU and WCS are two worst case scenarios that presume we will be producing energy the same way and with the same efficiency in 40 years time. It may be that the lesson from the non-appearance of the worrisome doo-doo-disposal problem from 200 million urban horses is applicable here. Also, a ~5% increase in cloud-albedo would pretty much neutralize both projections.

    Finally, one is led to ask again: If simple GHG projections do just as well as complex GCMs, what is it, exactly, that we’re getting for all the money spent? And from where comes the confidence of the IPCC? And really, why hasn’t anyone commented in the literature that GCMs seem to predict what mere GHG forcing predicts? Isn’t that coincidence surprising and a little disconcerting? Where are all the other flux feedbacks going, that they should conveniently average out to almost zero?

  82. Pat Frank
    Posted Dec 27, 2006 at 7:47 PM | Permalink

    Well, the plot didn’t upload. I’ve sent it to John A, and so with his good graces hope it makes an appearance.

  83. Posted Dec 28, 2006 at 12:50 AM | Permalink

    re #81, Pat Frank

    Where are all the other flux feedbacks going, that they should conveniently average out to almost zero?

    In case you may find the surrounding discussion useful…

    William Gray at

    hurricanes1.pdf

    bottom of page 11 says

    “To believe that humans are the cause of the global warming we have seen requires that one believe that all of the above climate change mechanisms (and others not mentioned) all sum to zero.”

    He discusses GCMs at page 13 ff.

  84. Pat Frank
    Posted Dec 28, 2006 at 2:08 PM | Permalink

    Re #82 — Thanks, John. 🙂

  85. Bob K
    Posted Dec 29, 2006 at 6:13 AM | Permalink

    Good link Dale. Thanks.

  86. Pat Frank
    Posted Dec 29, 2006 at 1:10 PM | Permalink

    My thanks also, Dale. I was aware of Gray’s paper, but hadn’t read it.

  87. D. F. Linton
    Posted Dec 29, 2006 at 2:00 PM | Permalink

    Re #81, last paragraph.

    Pat, excellent post.

    My cynical answer to your question is that spending all that money on GCMs does two things: First, it adds those very real-seeming squiggles to the lines. Smooth curves of temperature like yours wouldn’t convince anyone who ever goes outside, but add some random noise and it’s instant realism. Second, your answer is way too cheap to be convincing, but tell everyone the noisy curves are the outputs of computer models that cost millions and the sale is made.

    Just wait till we have spent billions on these models and they have grown so complex that no one can even begin to explain how they work, much less critique their output; then even the hardened skeptics who read this blog will surely come around.

  88. Posted Dec 29, 2006 at 2:57 PM | Permalink

    Smooth curves of temperature like yours wouldn’t convince anyone who ever goes outside, but add some random noise and it’s instant realism.

    I.e. simulated process realizations are preferred over expected values and prediction intervals.

  89. Posted Dec 31, 2006 at 9:18 PM | Permalink

    Predicting global temperature as a function of CO2 concentration changes (alone) is a simple one dimensional problem. GCMs should get this right, as the original graph showed they did, and so should a simple Arrhenius-type evaluation or a simple equation which relates forcing to GHG concentration. This is, as it were, a first order test, and the models appear to have passed it as you would expect. The variability among models is a demonstration of climate variability. A more interesting question is how that variability matches that observed in the climate. Three dimensional GCMs predict a lot of other things, and that is where the serious evaluation has to be done.

  90. Willis Eschenbach
    Posted Dec 31, 2006 at 9:44 PM | Permalink

    Eli, thanks for the post. It is useful because it illustrates a common fallacy. You say:

    Predicting global temperature as a function of CO2 concentration changes (alone) is a simple one dimensional problem.

    The problem is that global temperature is the output of a complex dynamic system, with loads of known and unknown feedbacks. This complicates a "simple one dimensional problem" immensely.

    To illustrate what I mean, let’s consider taking a block of aluminum six feet long and putting one end in a bucket of hot water. Put a thermometer in the other end, keep the water hot, and in short order the temperature starts to rise. It is, as you described earlier, a simple one dimensional problem.

    Now let's replace the block of aluminum with a complex dynamic system with loads of known and unknown feedbacks … let's say a human being. Put their feet in a bucket of hot water, put a thermometer in the other end … but you'll wait a long, long time for a temperature rise.

    That’s why predicting the global temperature as a function of CO2 concentration is not a simple one dimensional problem.

    w.

  91. ET SidViscous
    Posted Dec 31, 2006 at 10:29 PM | Permalink

    If it's such a "simple one dimensional problem", then why did the temperature and the CO2 concentration diverge for roughly 30 years, from approximately 1945 until approximately 1975?

    My understanding of a simple one-dimensional problem is something along the lines of y = Ax; how can you get both a positive and a negative response out of a "simple one dimensional problem"?

  92. Chris H
    Posted Jan 1, 2007 at 3:20 AM | Permalink

    A one-dimensional problem is any function of a single variable, y = f(x), not just a linear one, y = Ax + B. The climate is a multidimensional function, y = f(a, b, c, d, …). The climate modellers are claiming that this function can be rewritten as:

    f(a, b, c, d, …) = g(a) + h(b, c, d, …)

    In general, this is not true, so I would be interested in seeing a justification for assuming this for climate models.

    A trivial example of a function for which this separation is not possible is:

    f(a, b) = ab
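    A two-line check of why no additive split exists for this example (in LaTeX):

        ab = g(a) + h(b) \;\Rightarrow\; h(b) = -g(0) \quad \text{(setting } a = 0\text{)}

    so h would have to be a constant, g(a) + h(b) would then depend on a alone, and it could never equal ab, which varies with b.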

    This same problem has been discussed on other threads with respect to the relationship between temperature and tree ring width.

  93. Pat Frank
    Posted Jan 1, 2007 at 1:44 PM | Permalink

    #89 — Eli, neither the CMIP projections nor the GFDL projections were one-dimensional studies. The CMIP tests were full climate simulations, excluding only volcanic explosions and any anthropogenic inputs except CO2. The GFDL projections were likewise full simulations, and specifically mentioned albedo feedbacks.

    In each case, the GCM outputs that included anthropogenic GHG forcings proved to be not significantly different from a strict dependence on the GHG forcing only. That is, there were no climate oscillations due to migration among quasi-stable states, no net temperature divergences due to changes in cloudiness, no net flux modifications due to changes in ice albedo, and not even any predictions of occasional large ENSO oscillations. In other words, the outputs look as though there is virtually no climate response to GHGs except an almost secular temperature increase. It’s clear that GCMs are driven by GHG forcing and virtually nothing else. In that event, it’s hard to imagine they capture anything important about an open coupled-oscillator petaWatt flux system like Earth climate.

    I think Willis has put his finger right on the problem in #90. The complex feedbacks and inter-dependent adjustments made by Earth climate subsystems in response to energy inputs are plainly not represented in GCM models. All they appear to do is impose net GHG forcing onto average temperature and let Earth climate warm up in response. That’s the story the plots are showing. All the other positive and negative feedbacks somehow cancel out, and the GHGs go their merry way.

    As Willis implied, in a coupled-oscillator system like Earth climate, one would expect the energy to migrate among the oscillators — between cryosphere, ocean, and atmosphere, e.g. — even without any changes at all in net input flux. All by itself, global atmospheric temperature should rise and fall according to some natural quasi-periodic coupling. Likewise, the cryosphere should partially melt back and then re-expand; the oceans should warm and cool. The GCMs don't show that at all (see Figure 1 in the CMIP Report 66 linked above).

    Here’s what I think — that when people talk about predicting Earth climate, they literally and scientifically don’t know what they’re talking about. The objective knowledge is lacking. Gerry North is wrong. We don’t know the forcings. And so here’s the bottom line as regards attribution: There’s no evidence whatever for human influence on average Earth atmospheric temperature. Zero. Because the theory is nowhere near adequate to detect such a small effect.

  94. Posted Jan 1, 2007 at 3:51 PM | Permalink

    On average, no. While the dynamics of any complex system are difficult, you can get pretty good average values from greatly simplified models. The point I was trying to make is that if you have a three-dimensional model, it should reproduce the results of the simple model within the range of applicability that the simpler model was developed to explore; otherwise you have big trouble. Global temperature is one of those things that does not vary much with the complexity of the model, as has been demonstrated for over 100 years. The value of the complex model is the other things that it shows (and is tested against).

  95. Pat Frank
    Posted Jan 1, 2007 at 5:01 PM | Permalink

    #94 — Take a look here, Eli: http://www.ssmi.com/msu/msu_data_description.html#figures. Figure 7 TLT shows clear global average excursions due to ENSO events, including a full 1 C spike from the 1998 event. These are not present in GCM outputs.

    It also remains true that the simple GHG forcing predicts the GCM outputs to very good accuracy, directly implying that all the other forcings in the GCMs average out to near zero. That still hardly seems physically likely.

  96. paminator
    Posted Jan 7, 2007 at 5:42 PM | Permalink

    Gavin Schmidt wrote a short news article in Physics Today, January 2007, linked here. This is the same venue where Emanuel published a short news article on his hurricane model last fall. It is disappointing that the American Physical Society's membership publication, with a pretty good scientific reputation, has so far presented a lopsided view of the latest trends in the application of physics to climate science.

    Parameterization is mentioned in Gavin's blurb a number of times as an acceptable approach. Gavin made an interesting choice in showing the success of GCMs by simulating the Pinatubo eruption and its effect on simulated global temperatures, then comparing the result with actual measurements. It seems to me that this could be used to empirically evaluate lambda, the climate sensitivity parameter in C/W/m^2. Has anyone reported on this?
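    The naive version of that estimate is a one-line division. A sketch only: the round figures below (roughly -3 W/m^2 peak aerosol forcing and ~0.5 C peak cooling) are commonly quoted approximations, and because the spike is too short for the oceans to equilibrate, the ratio understates the equilibrium lambda.

        dF_peak = -3.0   # approximate peak Pinatubo aerosol forcing, W/m^2
        dT_peak = -0.5   # approximate peak global cooling, deg C

        lam_transient = dT_peak / dF_peak
        print(f"naive transient lambda: {lam_transient:.2f} C per W/m^2")   # about 0.17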

    Gavin also remarks on the reliability of averaging the outputs of a bunch of GCMs:

    More than a dozen facilities worldwide develop climate models, whose ability to simulate the current climate has improved measurably over the past 20 years. Interestingly, the average across all models almost invariably outperforms any single model, which shows that the errors in the simulations are surprisingly unbiased. Significant biases common to most models do exist, however – for instance, in patterns of tropical precipitation.

    Consensus-building among GCMs? I thought GCMs were reliable, predictive, and therefore needed no further funding?

    I’d settle for one GCM without adjustable parameters that accurately models the historical temperature record.

    http://www.physicstoday.org/vol-60/iss-1/72_1.html

  97. TAC
    Posted Jan 7, 2007 at 6:27 PM | Permalink

    #96 Paminator, I, too, have noticed bias in how Physics Today (which, btw, is published by the AIP (American Institute of Physics), not the APS) treats climate change. In addition to the Schmidt article, the AIP has published an extremely well-written book on the Discovery of Global Warming (Spencer Weart, here).

    Do you think Physics Today would be willing to publish a comment/letter on the Schmidt article? It seems that might be one way to respond.

  98. Steve Bloom
    Posted Jan 7, 2007 at 9:51 PM | Permalink

    Re #96: Paminator, if you found Gavin’s article disappointing, I would advise you to get ready for a really bad remainder of the year. This editorial seems to capture the sort of thing that can be expected.

    “I’d settle for one GCM without adjustable parameters that accurately models the historical temperature record.” Now there’s an interesting concept. Aren’t forcings parameters?

  99. paminator
    Posted Jan 7, 2007 at 10:57 PM | Permalink

    Re #97- TAC

    My error on the AIP. I sent a letter to the editor on Emanuel's article last fall, but received no reply. I suggested that they put together some more extended articles on physics-related issues of climate change in a format they have used once in a while, where two authors with opposing views or opinions present their cases in one article.

    Re #98- Bloom, per your question, see your #27 for what parameterizations need to be replaced with calculations based on physical processes.

    “…and parameterizations of sub-gridscale processes.”

  100. bender
    Posted Jan 7, 2007 at 11:50 PM | Permalink

    Re #98

    Aren’t forcings parameters?

    paminator's issue is not parameters per se, but fixed parameters vs. free parameters. If parameter values are fixed by physical measurements, experimentation, etc., then they're a lot more tolerable & parsimonious than free parameters that have been tuned to yield a certain fit. Yes, forcings are represented by parameters, but the question is: do they have any grounding in physical experimentation, or are they fudged to make a fit? Too much fudge makes for an unconvincing fit.

    For a guy who is so pro-alarmist/extreme-AGW, you might want to take the study of these GCMs a little more seriously, as they're currently your only line of defense. Just some friendly advice.

  101. Chris H
    Posted Jan 8, 2007 at 2:09 AM | Permalink

    It's hardly surprising that the average of 20 GCMs should fit the data better. Take 20 different parameterized equations and fit them to a set of data. Then take the average of those 20 equations. The average will fit the data better than the individual equations do. This is simply a result of overfitting, since you have 20 equations' worth of free parameters.
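    The variance-reduction half of that argument is easy to demonstrate with a toy sketch in Python; the model count and noise level here are illustrative and have nothing to do with any real GCM ensemble.

        import numpy as np

        rng = np.random.default_rng(0)
        truth = np.linspace(0.0, 1.0, 50)   # the "data" every model is tuned toward

        # 20 "models": the common signal plus independent errors standing in
        # for each model's own free parameters and noise
        models = truth + rng.normal(0.0, 0.3, size=(20, truth.size))

        rmse_each = np.sqrt(((models - truth) ** 2).mean(axis=1))
        rmse_mean = np.sqrt(((models.mean(axis=0) - truth) ** 2).mean())

        print(f"typical single-model RMSE: {rmse_each.mean():.2f}")   # about 0.30
        print(f"ensemble-mean RMSE:        {rmse_mean:.2f}")          # about 0.30/sqrt(20)

    If the individual errors are independent and unbiased, the error of the average shrinks roughly as 1/sqrt(N), so the ensemble mean beats any single model whether or not any one of them is individually right.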

  102. Steve Bloom
    Posted Jan 8, 2007 at 3:08 AM | Permalink

    Re #100: The point is that Pam wanted "one GCM without adjustable parameters that accurately models the historical temperature record." As we're talking about the past here, physical experimentation seems a little beside the point; rather, the requirement would seem to be for forcings (a variety of parameter, AFAIK) quantified from the instrumental record and proxies. In other words, Pam seems to have tripped over her (?) rhetorical shoelaces.

    As for my reliance on the GCMs, I thought I had noted elsewhere that my concern about AGW is based mainly on the large-scale behavior of past climate. That’s not to say that I have particular doubts about the GCMs, but the last time I checked into it one of their failings was an inability to reproduce (very accurately, anyway) abrupt changes such as the glacial terminations. Carbon feedbacks of all sorts are a problem as well, especially biological ones, and of course the GCMs don’t even try for ocean acidification. Even CO2 sensitivity can be reasonably guessed from looking at the recent interglacials, so IMHO the potential great value of the models in the next decade or so is to produce some useful information about near-term regional impacts. Progress on abrupt changes may or may not be made soon, and I suspect carbon feedbacks will take even longer.

  103. Steve Bloom
    Posted Jan 8, 2007 at 3:12 AM | Permalink

    Re #100: BTW, while I am certainly alarmed, I am not an “alarmist.”

  104. trevor
    Posted Jan 8, 2007 at 4:28 AM | Permalink

    Re #100, 103:

    Mr Bloom: Those who have witnessed your contributions to CA over the past year or more will have formed a very clear view of your position on these matters.

    “By their fruits ye shall know them.”

  105. Dave Dardinger
    Posted Jan 8, 2007 at 7:34 AM | Permalink

    re: #104 Trevor

    What I find especially interesting is that when we're dealing with somewhat mushy subjects such as this one, people like Steve B are quite obvious by their presence, while in the technical threads, such as the current Paul Lindsay Poisson Fit thread, they are conspicuous by their absence. The only exception I can think of is the Deltoid guy (whose name is AWOL from my brain at the moment), who will claim to have found simple and obvious errors but will then find reason not to discuss his findings.

  106. cbone
    Posted Jan 8, 2007 at 10:25 AM | Permalink

    Re: 96

    I think this quote from Gavin's article pretty much sums up the problem with GCMs:

    Given the nature of parameterizations among other features, a climate model depends on several expert judgment calls.

    Or, in layman's terms, Gavin says: "Trust us, we know what we are doing." I'm sorry, but I won't put much stock in GCMs until there are a lot fewer of these 'expert judgment calls', i.e., guessing. Specifically, they can eliminate the guessing in the water vapor cycle; lest we forget, the current models treat the dominant greenhouse contributor as a parameter that is subjected to 'expert judgment calls.' I would prefer it to be modeled from first principles instead of by 'expert judgment.'

  107. Steve Bloom
    Posted Jan 8, 2007 at 1:47 PM | Permalink

    Re #105: I think it's fair to say that these threads have far and away the most traffic. I do participate to a degree on the more technical threads, but of course I'm of the general opinion that much of the commentary here too often loses sight of the science for statistics. That's fair enough since the expertise here is largely in the statistics rather than the science, but in any case I don't think you'd be very appreciative if I piped up every time I saw that happening.

  108. Steve Bloom
    Posted Jan 8, 2007 at 1:52 PM | Permalink

    Re #106: Fair enough so long as you insist on a construction of statistical error from first principles the next time someone here uses statistics to criticize climate science. I do hope you’re aware that there’s some “expert judgment” involved there as well.

  109. Stan Palmer
    Posted Jan 8, 2007 at 2:28 PM | Permalink

    108

    Re #106: Fair enough so long as you insist on a construction of statistical error from first principles the next time someone here uses statistics to criticize climate science. I do hope you’re aware that there’s some “expert judgment” involved there as well

    In what way is "expert judgment" an aspect of the mathematical proof of a theorem in statistics?

  110. trevor
    Posted Jan 8, 2007 at 3:05 PM | Permalink

    Re: #107

    but of course I’m of the general opinion that much of the commentary here too often loses sight of the science for statistics. That’s fair enough since the expertise here is largely in the statistics rather than the science.

    Mr Bloom: I’m afraid that I simply cannot allow you to get away with yet another politically biased statement. The open questioning of the real science at this site stands in marked contrast to the controlled biased groupthink approach that passes for science on some other sites.

    Can we agree that we should all be committed to sound science? That, however, requires us to be objectively critical of cliques of non-independent climate scientists who are seeking to dominate the peer-reviewed published literature.

    It is also a basic requirement of science that relevant prior documents be addressed, that data and methods be properly archived and disclosed, and that climate scientists make use of standard statistical capability. "Peer review" should meet the expectations of the scientific community. Reviewers should be independent and capable of reviewing the papers objectively. They should devote sufficient time and attention to ensure that the science is valid and robust. Editors should consider the views of all independent reviewers.

    Until the climate scientists begin to adhere to the most basic requirements of sound science, I think that we are entitled to remain sceptical of their claims. So far as I can see, the credibility of the climate scientists has been challenged in the new age of the internet, which allows opportunity for published work to come under close and detailed scrutiny, whether in formal environments (CoPD) or informal blogs.

  111. Steve Bloom
    Posted Jan 8, 2007 at 5:08 PM | Permalink

    Re #109: OK, Stan, show us the theorem(s) demonstrating that the various types of statistical error reflect the real world.

    Re #110: Certainly all human endeavors are imperfect, but I would point out to you that phraseology like “cliques of non-independent climate scientists who are seeking to dominate the peer reviewed published literature” and “(u)ntil the climate scientists begin to adhere to the most basic requirements of sound science” will tend to result in your not being taken seriously. You’re certainly free to continue to criticize the science from the outside on any basis you like, though.

  112. Posted Jan 8, 2007 at 5:27 PM | Permalink

    [snip]

  113. Stan Palmer
    Posted Jan 8, 2007 at 5:45 PM | Permalink

    re 111

    Re #109: OK, Stan, show us the theorem(s) demonstrating that the various types of statistical error reflect the real world.

    This is absolutely astounding. Statistics do not reflect the real world? What world do they reflect? From my long-ago statistics class, I recall that the Poisson distribution was famously fitted to the distribution of fatalities due to horse kicks in the Prussian army.
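    For reference, the classic tabulation (von Bortkiewicz, 1898: deaths per corps per year over 200 corps-years) and its Poisson fit are easy to reproduce. A sketch, using the counts as they are usually quoted:

        import math

        observed = {0: 109, 1: 65, 2: 22, 3: 3, 4: 1}       # corps-years with k deaths
        n = sum(observed.values())                          # 200 corps-years in all
        mean = sum(k * c for k, c in observed.items()) / n  # about 0.61 deaths per corps-year

        for k, obs in observed.items():
            expected = n * math.exp(-mean) * mean ** k / math.factorial(k)
            print(f"k = {k}: observed {obs:3d}, Poisson expected {expected:5.1f}")

    The expected counts (roughly 108.7, 66.3, 20.2, 4.1, 0.6) track the observed ones closely, which is exactly the sense in which the distribution reflects the real world.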

  114. Steve Sadlov
    Posted Jan 8, 2007 at 5:52 PM | Permalink

    RE: #112 – An anecdote regarding data-driven vs. "morphological" science. In my naive youth, I took a field geology course where we were asked to make our own maps of an area. Lacking sufficient maturity, and not yet understanding that a smaller high-quality map would get me a better grade than a larger "morphologically creative" one, I fell into that age-old trap which has ensnared numerous geologists: overreliance on geomorphology. Yes, jogs in streams can indeed indicate transform faults (but then again, they may not), and yes, a slope change MIGHT mean a change in underlying strata. Woe unto me, woe unto me indeed. What a brilliant learning experience it was, even if I only managed to eke out a B- (and only a C on the map itself).

  115. trevor
    Posted Jan 8, 2007 at 6:45 PM | Permalink

    Re #118:

    Mr Bloom: Bios are only one element in good science. Surely you have seen enough to realise that there are some problems with the approach of the team? How much does it take to get you to realise that not everything the team has to say stands up to analysis? Just one example, but a good one: the paper on CoPD by Dr Juckes attracted a fair bit of detailed critique and questioning, and deservedly so. You can't seriously think that people who apparently persist in defending the indefensible, namely the now widely discredited Hockey Stick, can be regarded as paragons of real science.

    It would be interesting indeed if these people were to have their work reviewed by their respective undergraduate and postgraduate professors. I wonder how many would get even a pass, let alone a credit or distinction.

    Seems to me that there will soon come a time when a bio that includes participation in any of the Hockey Stick corpus will become an impediment rather than an asset in finding a real job.

  116. Boris
    Posted Jan 8, 2007 at 9:56 PM | Permalink

    realclimate authors in turn are too intellectually lazy to think without using abstract quotemining and computer models.

    ???? We shouldn’t try to model the climate because it’s lazy to do so ?????

    It’s comments like these that make this site the Onion of science blogs.

  117. Boris
    Posted Jan 8, 2007 at 9:59 PM | Permalink

    the now widely discredited Hockey Stick

    You'd actually believe this if you read this site enough. But the NAS says something else entirely.

  118. Willis Eschenbach
    Posted Jan 9, 2007 at 2:45 AM | Permalink

    “the now widely discredited Hockey Stick”

    You'd actually believe this if you read this site enough. But the NAS says something else entirely.

    Boris, you’d actually believe that if you hadn’t read the NAS report. It said you could believe the Hockey Stick as far back as the Little Ice Age, but not further. It also said that you shouldn’t use stripbark spp. in reconstructions. Perhaps you’d care to enlighten us regarding how many of the reconstructions have omitted stripbark species?

    Nor was the NAS the only body to say that the Hockey Stick contained both bad math and bad proxies. Whenever statisticians have looked at the hockey stick, they just shake their heads …

    They call that “discredited” on my planet, but you’re quite welcome to believe in it. If not, there’s always Santa Claus and the Easter Bunny …

    w.

  119. Posted Jan 9, 2007 at 2:49 AM | Permalink

    Boris

    What the NAS Panel says, in its limited way, in no way supports the mathematical or methodological approach of the Hockey Stick. It downgrades most of the analysis and takes all of the major conclusions down to "plausible".

    And this was from a Panel that did no research, and just sat around a table and "winged it". There is not a single criticism of Steve McIntyre's work or analysis, and they recommended nearly every suggestion he made.

    I said “widely discredited” because if you follow the stories about the Hockey Stick from multiple independent sources (and from AGW and Greenhouse believers to boot) you’ll find the HS described as “junk”, “impossible to replicate”, “deeply flawed” and simply “bad science”.

    The NAS Panel clearly bent over into an unusual topological curvature of the spine to avoid writing off the Hockey Stick completely, but they did not salvage very much.

    Oh, and my reference to the RC authors as intellectually lazy was simply referring to their normal modus operandi: writing articles with extensive references only to their colleagues, making bland statements writing off statistical analysis, and referring to climate modelling results as if they were experiments validating an existing theory, when they are nothing of the kind. I'm not even the first person to note that they are too bone idle to read literature that does not derive from an extremely limited group of researchers, or to bring their statistical knowledge up to the level of the analysis they claim to be performing.

  120. trevor
    Posted Jan 9, 2007 at 3:34 AM | Permalink

    Re #117: Boris, read the NAS report. It is very clear that they were putting a sugar coating on a very bitter pill.

    Re the whole HS corpus: "By their fruits ye shall know them!"

  121. Chris H
    Posted Jan 9, 2007 at 4:12 AM | Permalink

    the now widely discredited Hockey Stick

    You'd actually believe this if you read this site enough. But the NAS says something else entirely.

    One of the great things about maths is that when you understand it, it is unambiguous. You don’t need to rely on appeals to authority. Read an undergraduate course book on statistics, read the HS papers and read the comments on this site and other sites. Then you will know yourself that the assertions made in these papers have very little merit.

  122. Posted Jan 9, 2007 at 8:01 AM | Permalink

    Steve B: Re 107.

    That’s fair enough since the expertise here is largely in the statistics rather than the science,

    Maybe. Or maybe not.

    Here is a list of books: Statistical Fluid Mechanics. Are they largely about statistics, or science?

    You can read about the Reynolds "Averaged" Navier-Stokes equations. (Note: averaged.) Read a bit more about LES.

    While reading, note the "closures" or "parameterizations" in these equations. All parameterizations in GCMs are qualitatively similar to these, in the sense that they try to capture the "average" effect of microscale and mesoscale physics that cannot be treated at the computational grid scale.
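    For readers new to the closure problem being referenced, a minimal LaTeX sketch of Reynolds averaging (standard textbook material, not specific to any particular GCM):

        u = \bar{u} + u', \qquad \overline{u'} = 0, \qquad
        \overline{u_i u_j} = \bar{u}_i \bar{u}_j + \overline{u_i' u_j'}

    Averaging the nonlinear advection term leaves the Reynolds stress \overline{u_i' u_j'}, which has no equation of its own and must be supplied from outside, e.g. by an eddy-viscosity closure; GCM parameterizations of sub-gridscale processes play the same structural role.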

    After you read these, I will ask you: are the parameterizations inside GCMs largely about statistics or science? Is developing a mathematical model to appropriately describe meso- and micro-scale behavior so it can be introduced into a GCM largely about statistics or science? Is teasing out meaning from noisy experimental data describing chaotic systems, and trying to identify any coherent structures in the data, largely about science or statistics?

    You will find specialists working in these fields spend a lot of time dealing with statistics!

  123. Jim O'Toole
    Posted Jan 9, 2007 at 8:04 AM | Permalink

    RE 117,
    Boris,
    I fancy myself as a sort of middle of the road kind of guy who tends to read everything and then decide for myself (I’m skeptical of everything, even skepticism). I can’t follow all the arguments on here because I lack the proper background on many of the disciplines (a lowly mech. engineer, not even a graduate degree). However, I have read the NAS report and only just got through it; it took me a while because…well, because it’s boring. But, just going by what is presented in it and not taking into account the politics and personalities surrounding it, what I took away from it was that it very much downplayed the reliability of the methods and data surrounding the ‘Hockey Stick’ with regard to MBH98. Keep in mind that I am just going by the report, not after-the-fact commentary from any of the participants.

  124. Chris H
    Posted Jan 9, 2007 at 8:09 AM | Permalink

    What happened to my nested block quotes? The second paragraph should also be quoted. Only the final paragraph is mine.

  125. bender
    Posted Jan 9, 2007 at 8:18 AM | Permalink

    Palmer's #109 has got it, Bloom. Big difference between subjective assessment of a parameter's fit vs. objective assessment through rigorous analysis. Big difference between statistics and heuristics. Yes, expert (and not-so-expert) judgment plays a role in statistical analysis. That's not a reason to avoid it!

  126. Boris
    Posted Jan 9, 2007 at 9:39 AM | Permalink

    Like I said, I know what many sceptics think the NAS report concluded, but what they said they concluded does not always jibe with this view. For instance, trevor claims that they were “sugar coating a bitter pill.” On what evidence is this based?

    John A points out that the results were deemed “plausible”. On what planet (Willis :)) does “plausible” equal “discredited”?

    And the main problem with reconstructions was not, according to the report, due to bad proxies or bad math:

    The main reason that our confidence in large-scale surface temperature reconstructions is lower before A.D. 1600 and especially before A.D. 900 is the relative scarcity of precisely dated proxy evidence.

    Willis writes:

    It said you could believe the Hockey Stick as far back as the Little Ice Age, but not further

    Yet the NAS committee says they have high confidence until 1600, somewhat less confidence from 1600 back to 900, and very little confidence before 900. Your quote implies they have no confidence before 1600, and this is simply wrong.

    You may disagree with the NAS report, but it clearly does not discredit the hockey stick.

  127. welikerocks
    Posted Jan 9, 2007 at 10:04 AM | Permalink

    Re:126

    Wegman Report:

    It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community. Additionally, we judge that the sharing of research materials, data and results was haphazardly and grudgingly done. In this case we judge that there was too much reliance on peer review, which was not necessarily independent. Moreover, the work has been sufficiently politicized that this community can hardly reassess their public positions without losing credibility. Overall, our committee believes that Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millennium cannot be supported by his analysis.

  128. Dave Dardinger
    Posted Jan 9, 2007 at 10:27 AM | Permalink

    re: #126 Boris,

    You say,

    the main problem with reconstructions was not, according to the report, due to bad prozies or bad math:

    Then you quote the NAS panel as saying,

    The main reason that our confidence in large-scale surface temperature reconstructions is lower before A.D. 1600 and especially before A.D. 900 is the relative scarcity of precisely dated proxy evidence.

    And just how do you think this would follow if MBH98 etc. hadn't used bad math? They claimed that their findings had statistical significance; indeed, that was their main point. But the NAS realized that it wouldn't stand up to proper mathematical analysis, and that's what that sentence means.

    It’s kinda like me quoting a cop saying, “the main reason we think he’s a thief is because we have a movie of him running away from the crime scene with the stolen goods in his arms.” and your replying, “but it doesn’t actually say he actually stole the goods, he might just have been carrying them fast to his car after buying them from the accuser.” We really don’t buy that sort of excuse around here.

  129. Lee
    Posted Jan 9, 2007 at 10:31 AM | Permalink

    Don’t forget also that the NAS cite additional qualitative and quantitative evidence that is supportive of or consistent with the ‘hockey stick.’

  130. jae
    Posted Jan 9, 2007 at 10:50 AM | Permalink

    129, Yeah, Lee, like Juckes. LOL.

  131. jae
    Posted Jan 9, 2007 at 10:54 AM | Permalink

    What I meant in 130 is papers like Juckes 2006 that also appear to have some major statistical and methodological glitches.

  132. Lee
    Posted Jan 9, 2007 at 11:03 AM | Permalink

    jae, stop making stuff up. Juckes et al 2006 is not cited in the NAS report. There are non-dendro specific data and reports that are cited in the NAS report as additional evidence in favor of the broad conclusions from the dendro work. Pointing at an uncited unpublished dendro paper that has unresolved issues, whether real or imagined, is completely irrelevant.

  133. Boris
    Posted Jan 9, 2007 at 11:05 AM | Permalink

    Dave,

    That’s a pretty distant paraphrase from what the report says given that the sentence doesn’t mention MBH98 or mathematical analysis. You’re coloring that sentence with your own views.

  134. welikerocks
    Posted Jan 9, 2007 at 11:07 AM | Permalink

    Lee, as I recall, SteveM made a whole topic for you about these "other evidences", and it fizzled out real fast. And since then there have been many topics on these "other papers". Every one of them is filled with uncertainty and errors.

  135. welikerocks
    Posted Jan 9, 2007 at 11:12 AM | Permalink

    re 134 The NAS report still didn’t stop those pesky scientists from the same old same old: http://www.climateaudit.org/?p=967#more-967

    Quote: Here's a quick summary of the overlap of proxies in three widely publicized "independent" 2006 studies. The numbers of proxies are all small (Juckes – 18; Osborn – 14; Hegerl – 12). All three use multiple bristlecone/foxtail chronologies: Juckes 4; Osborn 2; Hegerl 2. All three use Fisher's Greenland dO18, Tornetrask (Juckes twice, Hegerl mis-identifying it), Taimyr, the Yang composite, and Yamal. Several series are used in 2 of the three studies: Chesapeake Mg/Ca; Alberta (Jasper) tree rings; Jacoby Mongolia tree rings. There are very few "singletons" (Osborn 3; Hegerl 3; Juckes 6), although the Juckes singletons were used in Moberg 2005 or MBH98.

  136. Lee
    Posted Jan 9, 2007 at 11:18 AM | Permalink

    rocks, Steve has never made a "whole topic for me" on anything. He has several times taken things I have said and posted them in an article, as a means of disputing what I said. Lately, he has taken to mis-citing, failing to correct the mistaken cite, and failing to link to the original discussion when asked.

    That said, one of the things the NAS cite (ONE of) is the collapse of ice shelves. To say that discussion of that topic has "fizzled out real fast" here is … well … very wrong.

    Another thing they cite is the Canadian ice core showing higher 20th century temps. That ice core is relatively close to the recent changes in Canadian ice shelves – I’ve posted the main figure from that paper recently here, linked to it, mentioned it. One of the reasons I want SteveM to link his Ice Shelf article to my original post is that I cited that and a few more things there. Again, not much fizzle there.

    Also, not much response to the actual evidence.

  137. welikerocks
    Posted Jan 9, 2007 at 11:55 AM | Permalink

    Well Lee, I remember after the NAS this was a point of contention for you, and he provided a place for you to do your thing.

    BTW, the American Association of State Climatologists (AASC) is far more reasonable about the whole issue:

    From their Policy Statement on Climate Variability and Change, they say:

    Climate prediction is difficult because it involves complex, nonlinear interactions among all components of the earth’s environmental system. These components include the oceans, land, lakes, and continental ice sheets, and involve physical, biological, and chemical processes. The complicated feedbacks and forcings within the climate system are the reasons for the difficulty in accurately predicting the future climate. The AASC recognizes that human activities have an influence on the climate system. Such activities, however, are not limited to greenhouse gas forcing and include changing land use and sulfate emissions, which further complicates the issue of climate prediction. Furthermore, climate predictions have not demonstrated skill in projecting future variability and changes in such important climate conditions as growing season, drought, flood-producing rainfall, heat waves, tropical cyclones and winter storms. These are the type of events that have a more significant impact on society than annual average global temperature trends.

  138. jae
    Posted Jan 9, 2007 at 12:08 PM | Permalink

    137: Yeah, I’m always amazed that the state climatologists, who probably know the most about climate, have such a reserved (and I would say scientific) position on AGW.

  139. welikerocks
    Posted Jan 9, 2007 at 12:08 PM | Permalink

    re: 137
    Here's one of those topics right here

    What is the evidence against a warmer MWP? By Steve McIntyre: Lee has criticized me for not fully canvassing the supposedly manifold lines of evidence marshalled by the NAS panel against a warmer MWP. So I've done a little exercise to summarize the evidence AGAINST the MWP being warmer than the mid-20th century, disaggregating what I believe to be the salient information from the spaghetti studies. The information is familiar, but it's arranged below a little differently than I normally arrange it.

    The last comment #224 pretty much sums up the tone of the whole exchange.

  140. welikerocks
    Posted Jan 9, 2007 at 12:11 PM | Permalink

    #138 jae, exactly it is more scientific.

  141. Lee
    Posted Jan 9, 2007 at 1:14 PM | Permalink

    rocks,
    That quote is SteveM, in his inimitably snarky way, misrepresenting what I actually said and using my name to introduce an article DISPUTING what I said. It was not a thread "for me." My first post in that thread is at #192.
    I think I have 8 posts in the entire thread, mostly discussing the use of present vs. past relative treeline fronts as qualitative temperature proxies.

  142. Dave Dardinger
    Posted Jan 9, 2007 at 3:51 PM | Permalink

    Lee, if Steve M is snarky, what does that make you? And just why didn’t you join in that thread?

  143. Lee
    Posted Jan 9, 2007 at 4:04 PM | Permalink

    Dardinger, as I say in my first post in that thread, I didn’t know the thread was even posted until someone (not SteveM) emailed to let me know, 192 posts into it.

  144. Steve McIntyre
    Posted Jan 9, 2007 at 5:15 PM | Permalink

    Don’t forget also that the NAS cite additional qualitative and quantitative evidence that is supportive of or consistent with the “hockey stick.’

    Lee, two points. I don't exclude the possibility that someone may adduce evidence that the modern warm period is warmer than the MWP – but that would not mean that Mann had done so, which is where I started; in the same way, the Piltdown Man remained a fake no matter what other evidence accumulated.

    The NAS panel did not cite any evidence that supported Mann’s calculation of confidence intervals.

    In my AGU presentation (and here), I observed that the NAS panel did not assess other studies to see whether they used bristlecones/foxtails, which the NAS panel said shouldn’t be used. Any engineer that, like the NAS Panel, continued to rely on studies using defective proxies, without checking to see whether the defective proxies were used, would lose his credentials.

    The NAS panel made no attempt to evaluate the Thompson data – have you read the discussions of, for example, the Guliya ice core? Younger scientists at AGU that I talked to agreed – as long as I didn't attribute it to them – that you couldn't say that Mount Logan dO18 was local while Dasuopu was global. I've posted on the Quelccaya plants.

    The NAS panel made an incorrect statement about Antarctic isotopes – Cuffey has acknowledged this, but still thinks that bore holes can carry the day. Hugo Beltrami says that there are flow problems with glacier bore holes.

    I've been looking at ice shelves and finding that the dates for the Arctic (Crary 1960) are nearly 50 years old and don't necessarily reflect modern views of radiocarbon reservoirs, etc. At some point I'll post on Domack's Antarctic ice cores.

  145. Ken Fritsch
    Posted Jan 9, 2007 at 5:17 PM | Permalink

    Re: #141

    rocks, Steve has never made a “whole topic for me” on anything. He has several times taken things I have said and posted them in an article, as a means of disputing what I said.

    Steve M has spent a good deal of effort pointing to and analyzing proxies other than tree rings (look at the current threads on Lorenz and Stott). Lee contended that the NAS committee had made a substantial case for HS temperature reconstructions outside the TR domain. Steve M's efforts and postings were more an attempt to show what the NAS had failed to discuss, and those efforts have been, to me, most appreciated and informative. I have concluded that the NAS has a POV on these matters, and I would like to see a different POV.

    Lee, I would appreciate hearing your POV also, but I found that you initially were making, in my view, rather vague references to the NAS report and spending too much time complaining about your treatment here.

  146. Boris
    Posted Jan 9, 2007 at 5:33 PM | Permalink

    If the NAS is wrong, then why do posters on this board repeatedly attribute inaccurate statements to the NAS report?

  147. Ken Fritsch
    Posted Jan 10, 2007 at 11:42 AM | Permalink

    If the NAS is wrong, then why do posters on this board repeatedly attribute inaccurate statements to the NAS report?

    To me it is more that the NAS has a POV (a consensus scientific one, as it relates to a preferred policy) from which they publish. In order to obtain a more complete view one needs, in my judgment, to look to other sources and analyses. Some of their statements appear to be intentionally vague and very generalized in order, in my view, to avoid talking about the measured uncertainty in the data and analyses and/or the inability to measure the uncertainty involved. Those statements can lead to rather arbitrary interpretations. The statements appear to be developed more by a show of hands for a given conclusion than by a true compilation of scientific evidence with the conclusions left to others.

  148. Willis Eschenbach
    Posted Jan 10, 2007 at 5:21 PM | Permalink

    Boris, without examples your post is meaningless.

    w.

  149. bender
    Posted Jan 10, 2007 at 7:30 PM | Permalink

    avoid talking about a measured uncertainty in the data

    Alarmists, these are your instructions.

  150. Boris
    Posted Jan 11, 2007 at 7:00 AM | Permalink

    Willis, I’ve already shown how you misrepresent the NAS.

  151. Dave Dardinger
    Posted Jan 11, 2007 at 7:33 AM | Permalink

    re: #150 Boris,

    I shot you down in #128, to which you weakly responded,

    That’s a pretty distant paraphrase from what the report says given that the sentence doesn’t mention MBH98 or mathematical analysis. You’re coloring that sentence with your own views.

    The trouble is that the entire NAS report was about MBH98 (and successors) and its mathematical analysis. There’s no need to mention it in every sentence.

    Fact is you’ve still not presented any legitimate examples of distorting the NAS report and everyone here knows it.

  152. Boris
    Posted Jan 11, 2007 at 9:24 AM | Permalink

    Dave,

    What was the title of the report? Perhaps that may clear up the lingering confusion/fantasy you have about it being just about MBH98.

    I think my response to Willis is quite clear in showing how he misrepresented the report. I notice you dodge the specifics.

  153. Steve McIntyre
    Posted Jan 11, 2007 at 10:13 AM | Permalink

    #152. Boris, I agree with you that the report was not just about MBH but not about your other assertions. Since you’ve thrown allegations around about the NAS report being misrepresented at this site – can you provide a single instance where I’ve misrepresented the report?

  154. Boris
    Posted Jan 11, 2007 at 12:19 PM | Permalink

    153:

    To my knowledge, Steve M, you have not misrepresented the report. You have found fault with parts of the report, which is fair game.

    I probably should have said "some commenters" on this blog, since that is a clearer statement. I stand by my criticism of Willis' and Dave's respective comments.

  155. Dave Dardinger
    Posted Jan 11, 2007 at 2:45 PM | Permalink

    re: #152 Boris,

    You ask,

    What was the title of the report? Perhaps that may clear up the lingering confusion/fantasy you have about it being just about MBH98.

    It is, at least where you can download it from National Academies Press:

    “Surface Temperature Reconstructions for the Last 2,000 Years”

    Now I don’t know what sort of “confusion/fantasy” you have about this not meaning MBH98, but go ahead, fill me in. And please note that I said “and successors” not just MBH98.

    While it’s true, as Steve said before, that there are other things besides MBH98 included, that’s the overriding issue. And it’s Steve’s & Ross’ analysis of MBH98/99 which caused the NAS panel to be convened. I know that you’re trying to confuse the issue, but it won’t get you very far here.

    We eat trolls for breakfast (though Steve keeps trying to make us at least use knives and forks).

  156. Boris
    Posted Jan 11, 2007 at 7:06 PM | Permalink

    Dave,

    Earlier in this thread you wrote:

    They [Mann, etc.] claimed that their findings had statistical significance; indeed, that was their main point. But the NAS realized that it wouldn't stand up to proper mathematical analysis, and that's what that sentence means.

    What results didn't have statistical significance? The ones the NAS had high confidence in, somewhat less confidence in, or low confidence in? You and Willis want to throw out past reconstructions because the NAS has lower than high confidence in them?

    But, revisiting the sentence in question:

    …our confidence in large-scale surface temperature reconstructions is lower…

    Again, they still have some confidence in the reconstructions between 900 and 1600. I don’t know how you translate this to not “stand[ing] up to mathematical analysis.”

  157. bender
    Posted Jan 11, 2007 at 7:26 PM | Permalink

    Boris, there can be precious little confidence in the reconstructions prior to 1600. And that is the conclusion reached with unrealistically narrow confidence bands! If the confidence bands were estimated honestly, the NAS would have ZERO confidence in the pre-1600 data. Read the blog. Search on terms like "confidence interval" and "uncertainty envelope". My friendly advice to you is to spend more time reading and less time talking.

  158. Boris
    Posted Jan 12, 2007 at 7:46 AM | Permalink

    Bender,
    “Precious little” is your interpretation. That’s fine. But it is not the NAS conclusion. I’m only arguing what the NAS said and how some sceptics act as if the NAS “discredited” the hockey stick.

  159. MarkR
    Posted Jan 12, 2007 at 8:22 AM | Permalink

    #158 Boris

    Why not read the discussion that took place at the time of the NAS report? For example:

    4) With respect to methods, the committee is showing reservations concerning the methodology of Mann et al. The committee notes explicitly on pages 91 and 111 that the method has no validation (CE) skill significantly different from zero. In the past, however, it has always been claimed that the method has a significant nonzero validation skill. Methods without a validation skill are usually considered useless.

    Link

  160. Steve McIntyre
    Posted Jan 12, 2007 at 11:44 AM | Permalink

    Boris, there are layers of issues here. Our claim that started the debate was that Mann's data and methods were flawed, such that he could not validly assert that 1998 was the warmest year and the 1990s the warmest decade of the millennium. At the time, Eduardo Zorita, hardly a skeptic, said that the NAS position on Mann was as harsh as conceivably possible, given all the constraints on them. The NAS panel in its running text endorsed all of our key criticisms of the Mann reconstruction and withdrew any claims to confidence prior to AD1600.

    The NAS panel conspicuously did not endorse the view that Mann's data and method enabled him to assert that 1998 was the warmest year and the 1990s the warmest decade. They did say that such conclusions were "plausible", but, I would submit, the alternative is also "plausible".

    I would not use the language that the NAS panel “discredited” the HS – if it was discredited, this was done prior to the NAS panel, which was merely reviewing the debate. I don’t think that any fair-minded reader of the NAS report can hold that the NAS panel took the view that MBH data and methods could yield a result upon which scientific weight could be placed. Of course, Wegman was even stronger on this, stating very clearly that a “Right Answer” arrived at through “Wrong Methods” is not scientifically valid.

  161. Paul Penrose
    Posted Jan 12, 2007 at 3:45 PM | Permalink

    Boris,
    After reviewing all your postings here, I have less confidence that you understand the issues involved in the debate. Now, is that a supportive statement, or something else altogether? Contrast this with your message #156.

  162. Ken Fritsch
    Posted Jan 12, 2007 at 4:33 PM | Permalink

    Boris, I think if you really make the effort you will see that the NAS report had a bit of schizophrenia in its writing – as I believe the term was used by Steve M. Look at what they actually said about the methodology that Mann used. On that, they had little support for Mann.

    I agree that, if one is so inclined, one can carry away from the report that their less-than-precise statements about MBH reconstructions prior to 1600 constituted, if not a face-saving gesture to Mann et al., at least something that AGW advocates could hang onto. They pointed to other temperature reconstruction results (and did so without a review of the methodologies of these other proxies, as they had been assigned to do for Mann's) and indicated that, in consideration of all the other reconstructions cited, it was plausible that the Mann et al. results could be correct, albeit, without stating it explicitly, by way of incorrect methodology.

    Since the issuance of the NAS report, Steve M has opened several threads relating to some of the other reconstructions that the NAS cited in their report and to other papers that the NAS did not include. You and others can take from the NAS report what you will, and perhaps that was part of the NAS's motivation for writing it as they did, but the bigger point at this blog is, I believe, the discussion and analysis of not only the original Mann et al. data and methodologies but also the other work cited in the NAS report. I would much prefer to hear those threads' contents discussed here than to expend bandwidth debating exactly what the NAS said. For starters, the inclusion of many of the same proxies in the so-called "independent" reconstructions comes to mind.

  163. John Baltutis
    Posted Feb 2, 2007 at 3:42 AM | Permalink

    Interesting stuff on modeling at http://climatesci.colorado.edu/2007/01/31/a-personal-call-for-modesty-integrity-and-balance-by-henkrik-tennekes/#comments.

  164. Leonard Herchen
    Posted Aug 27, 2007 at 10:56 PM | Permalink

    Has anyone run the GCMs out 200, 1,000, 10,000 or even 100,000 years? When do the trends stop or reverse? Do they give realistic super-long-term trends? (Then, of course, the super long term would include factors probably not included, such as changes in the earth's orbit and tilt.)
    Anyway, the super-long-term GCM projections would be very interesting to see.