Gerry North's Suggested Reading on Climate Models

The big issue in climate is obviously the impact of doubled CO2. While my own interests tend to be statistical, this site attracts many readers who want to jump straight to discussions of thermodynamics and theories of atmospheric physics. Articles like G and T inflame such tendencies. For some reason, people who don’t necessarily feel competent to speak up on statistical issues somehow feel able to opine on complicated issues of atmospheric physics. Although these issues are ultimately much more important than the statistical ones, the quality of discussion, in my opinion, is consistently much lower. That is the only reason I’ve discouraged discussion of these topics.

However, despite my efforts to discourage such topics, such discussion has made up an increasing proportion of the comments here and, in my opinion, has detracted from their average quality.

I don’t particularly blame readers for struggling to understand exactly how increased CO2 results in higher temperatures. I believe that most readers would like an articulate exposition of how increased CO2 translates into increased temperature. I believe that IPCC had an obligation to provide such an exposition and failed to do so. Their failure has left fertile ground for articles like G and T.

As some of you are aware, I’ve regularly asked critics of this blog for suggestions on an exposition of how increased CO2 translates into increased temperature, and I have little to show for such requests – which in itself is, or should be, surprising. Yesterday, I asked Gerry North, the Chairman of the NAS Panel, for a suggested reference, and he’s sent me an article and covering letter.

The purpose of this blog is to analyze articles in detail, not to provide a platform for venting. For the next couple of weeks, I would like to declare a moratorium on people venting their own opinions about climate models, atmospheric physics and thermodynamics, however meritorious these opinions may be. If you want to comment on the model set out in the article below and identify shortcomings and defects in it, fine; otherwise please hold your fire on these topics.

I sent the following request to Gerry North:

Gerry, can you give me a reference to a clear up-to-date exposition of how increased CO2 translates into 2.5 deg C, preferably using 1D or 2D models? I’m looking for something more sophisticated than Houghton’s arm-waving “the higher the colder” and something that does not import what seem to me to be the irrelevant numerical complications of GCMs. IPCC unfortunately does not deign to provide such explanations. I’m familiar with the Ramanathan vintage articles from the 1970s, but would like to see something more up to date. Perhaps something from your own work. Preferably something between 15 and 100 pages. Thanks, Steve

Gerry sent me a paper that he co-authored in 1993, uploaded here, together with the following covering note:

Actually, “the higher the cooler” is the best explanation.

But let’s go to the simple energy balance model, which you might prefer. The outgoing IR is given by an expression due to Budyko and used in all of my simple model papers: I = A + BT, where A and B are empirical constants and T is the surface temp in Celsius. The empirical constants come from satellite measurements of IR, fitting the seasonal cycle and the latitudinal dependence of temperature. Typically A = 200 W/m^2 and B = 2.00 W/m^2 deg^{-1} (A and B vary a bit from study to study so they are at best approximate, but good enough for this exercise). For your reading pleasure, I am attaching an ancient paper of mine in which the fitting was done. It is important to note that B here contains important feedback information (for an earth with no atmosphere it would be more like 4.60). It presumably includes cloud and water vapor feedback.

The reduction of outgoing IR due to doubling CO2 is about 4 to 5 W/m^2. This comes from detailed radiative transfer calculations and it is not controversial.

But now take deltas: \Delta I = B \Delta T, and this leads to \Delta T = (4 or 5)/2.00 deg C, and there you have it.

This is a zero-dimensional model (global average). It has many faults, but seems to not be too far off. Analysis of output (just as though they produced real data) from most of the GCMs gives about these same values for A and B, etc. This simple model probably includes the lapse rate feedback, etc. It does not include snow/ice feedback, which would raise the sensitivity slightly. Finally, the fits do not properly take the tropics (all-important, unfortunately half the planet’s area) into account. Lindzen would argue that it fails miserably there. The clouds muck up the tiny seasonal and latitudinal signal in the tropics and this could change the result. I cannot deny it.

You have to take these simplified models with a grain of salt. They are good educational tools, but simply do not have all the physics in them. They would not pass your standards for due diligence, but they were cooked up as sanity checks on the big boys.

I’ll try to comment on this at some point in the future. There are two main issues that trouble me in respect to the models and, until I have occasion to work through matters in detail (and who knows when that will be), they merely trouble me – I’m not suggesting that they are gotchas: (1) excessive rigidity in the lapse rate. I’ll post sometime on Crowley’s thoughts on this, as it appears to me that, if changes in the lapse rate are possible, this could have a substantial effect on CO2 impact; (2) cloud feedback.

North says that the 4-5 W/m^2 forcing is uncontroversial. Even if so, the derivation should still be set out in detail in some IPCC document, but it isn’t. I’ll re-visit this at some time. If people want to do something useful, it would probably make sense to translate the model described in this paper into R or Matlab so that people can experiment with it.
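
As a starting point, here is a minimal sketch in R of the zero-dimensional balance in North’s note, using only the values he quotes (A and B are his approximate Budyko-fit constants; nothing here goes beyond his covering note):

    # Zero-dimensional energy balance after North's note: I = A + B*T (T in deg C)
    A  <- 200      # W/m^2, Budyko-fit constant (approximate, per North)
    B  <- 2.00     # W/m^2 per deg C; folds in feedbacks, per North
    dF <- c(4, 5)  # W/m^2, reduction in outgoing IR from doubled CO2
    dT <- dF / B   # from Delta I = B * Delta T
    dT             # 2.0 2.5 deg C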

201 Comments

  1. SteveSadlov
    Posted Aug 1, 2007 at 8:35 PM | Permalink

    Eli Rabbet appears to be a physicist specializing in absorption spectrometry, emission spectrometry and quantum mechanics involving gases. Recently, he’s posted seemingly relevant materials at his Rabbet Run blog. It would be good, in my opinion, for true content experts in this area (which I most definitely am not) to visit his blog, look at what is posted here, and try to connect the dots. I recently saw something by Tom Vonk, another apparent content expert, over at Pielke Sr’s blog, which was highly relevant in my opinion. The expertise is out there. It definitely is. Let’s see some of it here.

  2. Pat Frank
    Posted Aug 1, 2007 at 9:10 PM | Permalink

    There’s a third troubling point you haven’t included, Steve. The relevant issue of AGW is the dynamical projection of the effect of increasing CO2 on centennial migrations of climate. One cannot do a single-point, instantaneous 2x jump in atmospheric CO2 and call that calculation the effect of doubling CO2 on Earth climate. In a time-wise projection, a sequentially iterative calculation must be made, with single climate steps of no more than one year; better that the steps should be monthly.

    In a sequential self-recursive iterative calculation, the systematic (theory-derived) errors accumulate, and accumulate rapidly. Gerry North’s explanations are almost irrelevant to this, the major reliability issue of global warming. In the event, the uncertainties grow so large that it becomes impossible to predict a reliable future Earth climate even one year out. The intrinsic theory-based errors are huge in climate physics, e.g., W. Soon and S. Baliunas “Global Warming” (2003) Progress in Physical Geography 27, 448-455, and especially, W. Soon, S. Baliunas, S. B. Idso, K. Ya. Kondratyev, E. S. Posmentier (2001) “Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties” Climate Research 18, 259-275, and so the physical uncertainties around the effects of increasing atmospheric CO2 are immediately gigantic.

    When it comes to future Earth climate, it’s very clear that no objective knowledge is available.

  3. Jonathan Schafer
    Posted Aug 1, 2007 at 9:37 PM | Permalink

    My question would be: do we have any models of the current carbon cycle, and can we reasonably assess how much is going in and how much is going out on an annual, decadal or centennial basis? How do we get to a 2X CO2 level? Do we know the following:

    1. How much is naturally occurring CO2
    2. How much is anthropogenic in nature
    3. How much is absorbed by various sinks
    4. What are the limits of those sinks
    5. How long before such a doubling of CO2 occurs?

    I’m not interested in the temperature part, just the underlying mechanisms.

  4. Bill Bixby
    Posted Aug 1, 2007 at 10:00 PM | Permalink

    The 4 W/m^2 number is relatively easy to demonstrate. First, you need to get a column radiative model. Several are in the public domain and are available on-line (they are almost always written in fortran). Then, you run the model with CO2 = 280 ppmv and CO2 = 560 ppmv and you’ll see that the net TOA flux is about 4 W/m^2 lower in the 560 ppmv case.

    re: comment #3:
    1. How much is naturally occurring CO2 + 2. How much is anthropogenic in nature

    natural CO2 is about 280 ppmv; today’s CO2 is about 380 ppmv, so about 30% of the CO2 in the atmosphere is due to humans

    also, we know that the increase in CO2 over the last hundred years or so is due to fossil fuels because it is depleted in both C13 and C14.

    3. How much is absorbed by various sinks

    we emit about 8 billion tons of C each year into the atmosphere, and the amount of C in the atmosphere goes up by about 4 billion tons … which says that sinks absorb the other half.

    4. What are the limits of those sinks

    good question

    5. How long before such a doubling of CO2 occurs.

    probably around the middle of this century

  5. Steve McIntyre
    Posted Aug 1, 2007 at 10:11 PM | Permalink

    The 4 W/m^2 number is relatively easy to demonstrate. First, you need to get a column radiative model. Several are in the public domain and are available on-line (they are almost always written in fortran). Then, you run the model with CO2 = 280 ppmv and CO2 = 560 ppmv and you’ll see that the net TOA flux is about 4 W/m^2 lower in the 560 ppmv case.

    I’m well aware of these online calculators, but I’d like to see an explanation of the underlying algorithms and assumptions. Are they based on radiative-convective models? If so, I’m interested in particular in the parameterization of convection and the assumptions on lapse rates.

    Can you give me a citation or reference? There should be a clear exposition of this, but it’s like pulling teeth to get a reference. I’m not saying that there isn’t a good up-to-date reference somewhere.

  6. tetris
    Posted Aug 1, 2007 at 10:15 PM | Permalink

    Re:4
    Please provide “mainstream” references [in keeping with Steve M’s wishes] to support your contention that “natural CO2 is about 280 ppmv”. I may have been missing something, but my impression is that, geologically speaking [and therefore by definition non-anthropogenic], we have generally accepted data that support CO2 concentrations substantially higher than 280 ppmv.

  7. Bill Bixby
    Posted Aug 1, 2007 at 10:23 PM | Permalink

    tetris-

    You’re right. CO2 has varied tremendously over the Earth’s entire lifetime. What I meant was that “pre-industrial” (e.g., 1750) CO2 was 280 ppmv. It’s clear that it would still be around that value if humans hadn’t been dumping CO2 into the atmosphere. How we know that, etc. is well documented in chapter 2 of the IPCC report. Even Prez. Bush accepts this, so I don’t think there’s any real dispute about it.

  8. Bill Bixby
    Posted Aug 1, 2007 at 10:25 PM | Permalink

    Steve-

    I’m pretty sure that it’s just a radiative calculation — no convection involved. I’ll check on that and get back to you.

  9. Dave Dardinger
    Posted Aug 1, 2007 at 10:43 PM | Permalink

    I’ll go read the paper when I get a chance. Meantime, just one question. North says, “It presumably includes cloud and water vapor feedback.” What does this mean? Is it the same as your desire to know how the column calculators work? I.e., North doesn’t know for sure how the figures he’s using were calculated. Or does he mean that, since the figures come from satellite measurements, this can be presumed to include all feedbacks, including clouds and water vapor? In the latter case it would also, presumably, include convective effects.

  10. DeWitt Payne
    Posted Aug 1, 2007 at 11:35 PM | Permalink

    Here are some numbers from the Archer calculator, which is strictly radiative with a fixed lapse rate for each of the different locality settings. For clear sky conditions looking down from 70 km (any altitude above 50 km will do), tropical atmosphere, constant relative humidity (which doesn’t really matter because the surface temperature is constant and the humidity doesn’t change), all other conditions default: 280 ppm CO2 = 289.194 W/m2; 560 ppm CO2 = 286.023 W/m2, or a difference of 3.17 W/m2. To return the output to 289.194 W/m2 requires a surface temperature offset of 1.47 C at constant RH or 0.88 C at constant water vapor pressure. I don’t know if his code is freely available or not.
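
    For what it’s worth, the sensitivities implied by those numbers are a one-line division in R (figures exactly as quoted above):

        dF <- 289.194 - 286.023                       # 3.171 W/m^2 from doubling CO2
        c(constRH = 1.47 / dF, constH2O = 0.88 / dF)  # ~0.46 and ~0.28 deg C per W/m^2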

  11. Bob Weber
    Posted Aug 1, 2007 at 11:57 PM | Permalink

    Where is the derivation of 5.35 ln(C/Co) W/m^2 as published in Myhre et al 1998, or isn’t this applicable?

    Bob

  12. Willis Eschenbach
    Posted Aug 2, 2007 at 12:07 AM | Permalink

    Bill Bixby, thanks for your email. You say:

    The 4 W/m^2 number is relatively easy to demonstrate. First, you need to get a column radiative model. Several are in the public domain and are available on-line (they are almost always written in fortran). Then, you run the model with CO2 = 280 ppmv and CO2 = 560 ppmv and you’ll see that the net TOA flux is about 4 W/m^2 lower in the 560 ppmv case.

    Going to MODTRAN, and putting in 280 ppmv, I used the default settings:

    This gives outgoing radiation of 228.8 W/m2. Changing just the CO2 to 560 ppmv gives 226.5 W/m2, a net change in TOA flux from a doubling of CO2 of 2.3 W/m2 … which is very different from your number of 4 W/m2. What am I doing wrong?

    w.

  13. DeWitt Payne
    Posted Aug 2, 2007 at 12:16 AM | Permalink

    #13

    I’ve played with Modtran a lot. The lower the surface temperature the lower the sensitivity to doubling CO2. Try subarctic winter. The delta is only 1.66 W/m2. Tropical seems to be the worst case.

  14. neil
    Posted Aug 2, 2007 at 2:22 AM | Permalink

    Re lapse rate
    I think your comments are quite pertinent to the whole impact of the greenhouse theory.
    The standard lapse rate is primarily governed by the Earth’s gravity, because as air rises it expands adiabatically as the air pressure reduces. The met world calls this the DALR (dry adiabatic lapse rate) = 3 degrees per 1000 feet.
    When the actual lapse rate at any location (called the ELR = environmental lapse rate) is greater than the DALR, we have a situation called instability. Convection results, with the thermals rising to just above condensation height. This is the classic fine-weather cumulus cloud day.
    It seems to me that it is almost impossible to have an actual lapse rate much exceeding the DALR, because large amounts of energy would be convected upward.
    Of note, the Kevin Trenberth world atmosphere heat model allows only 27 W/m^2 for convection, but in the real world during daytime very small temperature changes at ground level can trigger convection.
    Concluding: I can’t see how it is possible to increase the ELR on a global average without a large negative feedback by convection.

  15. Posted Aug 2, 2007 at 3:02 AM | Permalink

    I agree with North’s comments that the zero-dimensional models are actually the best ones for these estimates at the present moment: one simply has to try the world averages. Surely no one can get something like 10% precision for the real world anyway. The discrepancies between more complex models show it, and even if these more complex models agreed with each other, it wouldn’t mean that they agree with reality.

    So figuring out how much more IR the CO2 absorbs and how it translates to temperature increase is the best thing one can do, and just a few factors (or fudge factors) like the albedo are added.

    The 1993 paper itself is already too complicated for getting the numbers, I think. I am not sure why you think, Steve, that an explanation that is right for you should take 15-100 pages, except that such a comment shows that you read long documents and it surely implies that you are a professional. 😉 The calculation of the order of magnitude – that the sensitivity is around 1 Celsius degree – is a doable thing and more complicated models don’t add much wisdom to it.

    More complicated models may get things right, but the more separately untested, isolated ingredients a description has, the more errors these ingredients may add. That makes things worse, not better. And these ingredients are really untested separately in most cases. They are essentially tested against one quantity only, the temperature (at most the local temperature), which is simply not enough to disentangle the ingredients and check the description of each of them.

    Also, while I think that North is a pleasant, smart guy, and I agree with his general suggestions about the best way to determine these things roughly, I think that this particular promotion of their 1993 paper is self-promotion. If you think that the paper is particularly important or illuminating, could you please explain why?

  16. Posted Aug 2, 2007 at 3:29 AM | Permalink

    Otherwise, more generally, if you’re really asking, Steve, why CO2 increases temperature at all, in principle, a few sources and words about the greenhouse effect:

    http://motls.blogspot.com/2007/06/realclimate-saturated-confusion.html
    http://motls.blogspot.com/2006/05/climate-sensitivity-and-editorial.html

    Basics: the solar energy arrives at Earth. We know how much: the solar constant. In equilibrium, the same energy must leave the Earth. If the Earth were a black body, it would leave purely in the form of thermal black-body radiation (like the radiation if you heat up a piece of metal). The Earth would heat up to the right temperature so that its own thermal radiation would equal the average incoming solar radiation. See also

    http://motls.blogspot.com/2005/04/earths-energy-balance.html

    This is a trivial two-line clean physics calculation involving the Stefan-Boltzmann law, and if you do it, you will find that the result is not too far off, but the Earth would be about 30 Celsius degrees cooler than measurements show (260 instead of 290 Kelvins, so to say). Why is there a disagreement? Well, because Earth is not a black body. It reflects some incoming radiation (this is captured by the concept of albedo, which adds various factors) and moreover it doesn’t radiate as a black body, because some of this black-body radiation is absorbed by the atmosphere, changing the spectrum.
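
    A sketch of that calculation in R, using standard textbook values for the solar constant and albedo (they are assumptions here, not figures from this comment):

        S     <- 1367      # W/m^2, solar constant
        sigma <- 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant
        a     <- 0.30      # planetary albedo (assumed textbook value)
        ((S * (1 - a)) / (4 * sigma))^0.25   # about 255 K, vs roughly 288 K observed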

    The radiation corresponding to low temperature is thermal, infrared, with longer wavelengths than visible light. What compounds can absorb it? They must have a pretty fine spectrum in this infrared range of frequencies. You find out that these are complicated molecules such as H2O and CO2 that have a lot of ways to be excited. Recall that atoms only absorb at much higher energies than infrared photons, and even the simplest molecules (N2, O2) do not allow interesting transitions where the energy difference corresponds to infrared photons.

    Then you must carefully look at the amounts of H2O, CO2 etc. in the actual atmosphere and their spectra – their ability to absorb light in the right infrared band. You will find out that H2O has the highest ability to do these things, above 90%, because there’s enough of it and it has enough spectral lines there. The gases also compete for the same wavelengths, which reduces the greenhouse effect (two gases together make a smaller impact than the sum of the impacts they would make separately), but let me neglect that.

    If you know that water is 90% and you generously give the remaining 10% to CO2, and if you know that this whole effect is responsible for about 30 Celsius degrees of warming including all water, you will see that the CO2 that exists gives you something like 3 Celsius degrees of warming. I was generous. However, the absorption slows down because the photons at the sensitive frequencies are already collected – you slowly converge to saturation, so to say – so another package of the same CO2 won’t cause 3 Celsius degrees but less. This sublinear dependence is discussed in the second link here – search for “climate CO2 sensitivity”.
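
    In numbers, the estimate in this paragraph is just the following (a back-of-envelope R line; the 10% share is the generous guess above):

        0.10 * 33   # ~3.3 deg C from existing CO2, given 10% of a ~33 C greenhouse effect
        # a further doubling adds less than this, because absorption at the sensitive
        # frequencies is close to saturation (the sublinear dependence just described)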

    In this calculation, I needed the known temperature of Earth, but one could also calculate it purely theoretically, by knowing in absolute terms how much radiation the CO2 actually absorbs, from some microscopic description. At any rate, you get something of order 0.5-2 Celsius degrees from the doubling. I think that the existence of this effect is really indisputable, the estimates of its strength by sensible people differ by less than one order of magnitude, and this numerical result is not really the thing that divides people into two hostile camps.

    What is much more divisive is the statement that this effect is one of dozens of comparably important effects that can influence the temperatures by comparable amounts, and how one interprets these things.

  17. DougM
    Posted Aug 2, 2007 at 4:31 AM | Permalink

    Whatever the theoretical rise in temperature for any increase in CO2 based solely on radiation, shouldn’t the actual temperature rise be much less? The increase in evaporation caused by the increased temperature will carry more energy away from the earth’s surface, thus lowering the temperature. Also, if relative humidity remains constant with higher temperatures, the same amount of convective activity will increase the heat transferred, because the absolute humidity is higher. Both of these factors will considerably reduce the temperature rise from an increased greenhouse effect.

  18. MarkW
    Posted Aug 2, 2007 at 4:48 AM | Permalink

    #4,

    CO2 from volcanoes and such is also depleted in C13 and C14.

  19. Paul Dennis
    Posted Aug 2, 2007 at 5:02 AM | Permalink

    It’s a long time since I’ve posted here due to illness, but I’m fit again. MarkW, you’re right: volcanic gas is depleted in C13 and C14, but not to the extent of plants and fossil fuels. Typically, mid-ocean ridge basalts contain inclusions with CO2 delta 13C compositions of -7 to -8 per mille, i.e. slightly depleted in C13 with respect to marine carbonate. On the other hand, coal, petroleum and natural gas are strongly depleted in 13C. Bituminous coal is typically -25 per mille, petroleum -18 to -34 per mille, and natural gas can be as depleted as -45 per mille. These very depleted values are consistent with the derivation of fossil fuels from organic matter produced by photosynthetic organisms and plants.

  20. TAC
    Posted Aug 2, 2007 at 5:19 AM | Permalink

    Lubos, I have heard it argued that GW theory requires that increased CO2 should cause specific changes to the atmospheric temperature profile (warming in the lower atmosphere and cooling above: A “fingerprint” of global warming). Is that correct? If so, do we have the data to test this prediction?

  21. Andrey Levin
    Posted Aug 2, 2007 at 5:22 AM | Permalink

    North’s paper describes how gridded surface temperature data (from Jones et al), satellite-measured outgoing IR and albedo (monthly-averaged and gridded) are used to calculate the parameters used to run climate models. The total solar irradiance constant is a variable driving the climate (surface temperatures). A valuable parameter calculated in the paper is the averaged climate sensitivity, which indicates how the average global surface temperature will change if the Earth’s surface receives more radiation, in degrees C per W of radiative forcing. Being based on actual observations, the calculated parameters apparently should intrinsically include water vapor and cloud cover feedback. Solar irradiance is, as usual, averaged over day and night across the whole surface of the Earth sphere. Again, being based on actual observations, such averaging of solar irradiance could produce correct parameters and coefficients. Now, I do not have a clear idea whether all this averaging is correct from a physical point of view, or how valuable these averages are. Ross McKitrick probably has a better idea. While the article is quite useful for getting an insight into how climate models are parameterized and run, not a single word in the paper is devoted to GHG radiative forcing, or its CO2 part.

    Now, I have trouble when the CALCULATED radiative forcing from doubling of CO2 is directly incorporated in climate models as if it were equivalent to total solar irradiance. Radiative forcing of GHGs cannot be calculated as an average between day and night and be subject to the same climate model parameters. Such averaging contradicts a couple of well-known principles. About 2/3 of the heat received by the surface during the day is dissipated convectively, heating the air. Heated air rises and re-radiates the heat from upper layers of the atmosphere, by-passing the GHG blanket. At night, convective cooling of the surface is close to zero. Does MODTRAN take this effect into account?

    I surely hope that the same type of day/night averaging of engine coolant temperatures is not used in calculating the size of the radiator in my car.

  22. DocMartyn
    Posted Aug 2, 2007 at 5:39 AM | Permalink

    I wonder if he would be so good as to supply the data sets from figures 2 and 5. Could you ask him?

  23. Steve Milesworthy
    Posted Aug 2, 2007 at 6:01 AM | Permalink

    #22 Andrey
    The calculated forcing is a product of models, not an input to models. So in the model, the night-side gridpoints experience no incoming shortwave, and the outgoing longwave depends on the temperature etc. of that grid-point at that time.

    This is why it is hard to produce, from first principles, a figure for sensitivity. Whether solar irradiance varies a bit, or whether the GHG profiles are not quite uniform, is nothing compared to clouds, the distribution of water vapour, and changes in lapse rate.

    Tools such as MODTRAN, and detailed studies of the properties of clouds etc. will get you closer and closer to an accurate radiation model, but without knowing where the clouds and water vapour are (and where they will be in a warmer world), I don’t see how they get you any further.

    I need to read the North paper before I comment further.

  24. Posted Aug 2, 2007 at 6:07 AM | Permalink

    Re: #s 10, 13 and 14

    Do I understand the following to be the case: under the assumption, or hypothesis, of radiative equilibrium, the radiative difference must be compensated by a change in the surface temperature? If so, the numbers given by the example calculations are interesting. In the subarctic winter the smaller radiative delta will require a smaller surface temperature offset, while in the tropics the higher surface temperature will presumably indicate a larger radiative delta and thus require a larger temperature offset.

    If this is the case, aren’t these numbers and trends the opposite of the actual physical situation? That is, at 2xCO2 the surface temperature in the tropics is not expected to change as much as that in the polar regions. The temperature in the tropics is dominated by the evaporation-precipitation cycle. The temperature in the polar regions is dominated by transport of energy from the tropical regions.

    Where have I gone astray? Thanks

  25. Posted Aug 2, 2007 at 6:48 AM | Permalink

    Dear TAC #21, yes, it is of course true. To see the profiles, one must work with one-dimensional models, as Steve probably prefers. The greenhouse effect is strongest 10 km above the surface – except for the polar regions – as this is where you find most of the greenhouse “glass” that can absorb the relevant radiation.

    According to the one-dimensional models, the ratio between the warming trends in the mid troposphere vs. the surface should be about 1.3 – the troposphere should warm faster. According to observations by balloons and satellites, the ratio is close to 0.5, which doesn’t seem to match the detailed model.

  26. Steve McIntyre
    Posted Aug 2, 2007 at 7:01 AM | Permalink

    #20. Paul Dennis, how nice to hear from you again. I’m sorry to hear of your illness and hope that you are recovered.

  27. Posted Aug 2, 2007 at 7:50 AM | Permalink

    re 11:
    Myhre’s 5.35 ln(C/Co) is a curve fit:

    Myhre, G., E.J. Highwood, K.P. Shine and F. Stordal, 1998. New estimates of radiative forcing due to well mixed greenhouse gases, Geophys. Res. Lett. 25, 2715-2718.
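
    Evaluated at a doubling, the fit lands close to the 4-5 W/m^2 quoted in North’s note (a one-line check in R):

        myhre <- function(C, C0) 5.35 * log(C / C0)   # W/m^2; log() is natural log in R
        myhre(560, 280)                               # about 3.71 W/m^2 for doubled CO2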

  28. Tom Vonk
    Posted Aug 2, 2007 at 7:51 AM | Permalink

    I have two comments on the paper.

    1) Identical to Steve’s, and that is:
    “The reduction of outgoing IR due to doubling CO2 is about 4 to 5 W/m^2. This comes from detailed radiative transfer calculations and it is not controversial.”
    When I read the words “this comes from detailed calculations”, I translate “this comes from climate models”.
    If not, then I’d like to read some physics and equations.
    And if I see Planck’s law, then I’d have several comments.

    2)
    The whole paper is not physics; it is statistics, trying to establish empirically that I = A + BT under not further specified conditions.
    The author doesn’t even try to guess at the physical “why” of such a linear relationship.
    I don’t feel competent to comment on the results and the statistical methods.
    However, as “I” contains everything and anything, I suspect that even if the linear formula were more or less true, the way from there to the CO2 role is at best hazardous.

  29. Paul Dennis
    Posted Aug 2, 2007 at 7:54 AM | Permalink

    #26 Many thanks Steve, I’m well on the way to a full recovery. It’s amazing what an enforced absence does for one’s enthusiasm to engage in science again!

  30. Bill Bixby
    Posted Aug 2, 2007 at 8:11 AM | Permalink

    Willis-

    I inadvertently wrote TOA flux in my previous post. I should have written “top of troposphere” (that’s how radiative forcing is conventionally defined). If you put in an altitude of, say, 15 km for the tropics, I suspect you’ll see something more like 4 W/m^2. Let me know if you don’t.

    Steve-

    I just checked the IPCC and indeed radiative forcing is a radiation-only calculation, without any feedbacks from changing lapse rate, etc.

    MarkW-

    My reading of the IPCC is that depletion in C13 is key evidence that the increase in CO2 is due to fossil fuel burning (they talk about it in chapter 2). However, I’d like to verify that what they say is correct. If you could point me to a reference that contradicts it, I’d be appreciative.

  31. gdn
    Posted Aug 2, 2007 at 8:13 AM | Permalink

    natural CO2 is about 280 ppmv; today’s CO2 is about 380 ppmv, so about 30% of the CO2 in the atmosphere is due to humans

    That’s a fair guess, if one were to assume nothing else changed. I’d think you could get closer by estimating total mining and drilling production, plus waste from those, minus those produced materials that incorporate the carbon within them. It appears, though, that even those records are unreliable.

    An obvious check on your method would be the study from just last year that strongly suggests that a rather sizeable portion of the atmospheric methane attributed to anthropogenic causes was caused by a previously unknown example of plant biochemistry.

    We’ve got some big gaping holes in our knowledge…things we’ve gone around saying, but never tested.

    Scientists from the Max Planck Institute for Nuclear Physics have now discovered that plants themselves produce methane and emit it into the atmosphere, even in completely normal, oxygen-rich surroundings. The researchers made the surprising discovery during an investigation of which gases are emitted by dead and fresh leaves. Then, in the laboratory and in the wild, the scientists looked at the release of gases from living plants like maize and ryegrass. In this investigation, it turned out that living plants let out some 10 to 1000 times more methane than dead plant material. The researchers then were able to show that the rate of methane production grew drastically when the plants were exposed to the sun.

    Although the scientists have some first indications, it is still unclear what processes are responsible for the formation of methane in plants. The researchers from Heidelberg assume that there is an unknown, hidden reaction mechanism, which current knowledge about plants cannot explain – in other words, a new area of research for biochemistry and plant physiology.

    In terms of total amount of production worldwide, the scientists’ first guesses are between 60 and 240 million tonnes of methane per year. That means that about 10 to 30 percent of present annual methane production comes from plants. The largest portion of that – about two-thirds – originates from tropical areas, because that is where the most biomass is located. The evidence of direct methane emissions from plants also explains the unexpectedly high methane concentrations over tropical forests, measured only recently via satellite by a research group from the University of Heidelberg.

    But why would such a seemingly obvious discovery only come about now, 20 years after hundreds of scientists around the globe started investigating the global methane cycle? “Methane could not really be created that way,” says Dr. Frank Keppler. “Until now all the textbooks have said that biogenic methane can only be produced in the absence of oxygen. For that simple reason, nobody looked closely at this.”

    Your method of calculating seems to be “If we don’t know where it came from, it is anthropogenic”, even though anthropogenic releases should be easier to account for than non-anthropogenic sources.

  32. Paul Dennis
    Posted Aug 2, 2007 at 8:28 AM | Permalink

    The atmospheric carbon cycle is undoubtedly very complicated and difficult to unravel in detail, because the recent perturbations, whether natural or anthropogenic, are small compared to the reservoir sizes (terrestrial, oceanic and atmospheric) and the fluxes between them. There are things we can do to help understand what might be going on. One is to measure the 13C isotopic composition of the atmosphere. Over recent decades we observe a decrease in the 13C content of atmospheric CO2. This is what we would expect from burning fossil fuels, which are depleted in 13C. However, on the isotope data alone it may be possible to develop alternative hypotheses. The other feature we should note is that if we burn fossil fuels then we consume oxygen. Measurements of the oxygen content of the atmosphere over the past decade show a decline in the oxygen concentration consistent with the burning of fossil fuels. From memory, the rate of change of the oxygen content of the atmosphere is -3 x 10^-6 atmospheres per annum. This rate of change is consistent with the fossil fuel inventory and the rate of rise of CO2 in the atmosphere. It is hard to see how processes other than burning fossil fuels which generate CO2 can lead to a decrease in the oxygen content of the atmosphere. Hence the finger points towards fossil fuels!

  33. Bill Bixby
    Posted Aug 2, 2007 at 8:34 AM | Permalink

    GDN-

    I agree there are large uncertainties in the methane budget. But it does not therefore follow that there are also large uncertainties in the CO2 budget. From my admittedly non-expert view, it seems to me that the connection between human emissions and the increase in CO2 over the past 100 years is pretty well established. The isotopic argument in the IPCC is quite convincing to me.

  34. tetris
    Posted Aug 2, 2007 at 9:21 AM | Permalink

    Re: 33
    It is the next step in the hypothesis [i.e. CO2 as dominant positive driver of temperature] that remains the problematic one. How to reconcile that with, just as one example, Lubos Motl’s arguments in #16?

  35. Steve McIntyre
    Posted Aug 2, 2007 at 9:50 AM | Permalink

    Please – no more discussion of the carbon cycle in this post. North’s model makes no mention of it. The carbon cycle is a large topic that’s been discussed on other occasions; there’s no need to divert the present discussion.

  36. Posted Aug 2, 2007 at 10:04 AM | Permalink

    If the incident solar IR upon the surface is 465.25 W/m^2, and the surface absorbs 237.15 W/m^2, that amount of heat will cause a temperature (a measure of the movement of particles) of 324.45 K. The temperature of the mixed air (dry air doesn’t exist in nature) will be 308.05 K. The heat will flow from the warmer system to the colder system and the temperature of the colder system will increase. It means that the surface (warmer system) will cause an increase in the temperature of the air (colder system). What’s the responsibility of CO2 in this particular case?

    Δq = k A σ (T1^4 – T2^4)

    Δq = 0.016572 (5.67 x 10^-8) (42511547.3 K^4) = 0.04 W/m^2 K

    0.04 W/m^2 would cause an “anomaly” of the air temperature of only 0.03 K

  37. Posted Aug 2, 2007 at 10:07 AM | Permalink

    Steve, I was not discussing the carbon cycle, but you erased my message.

  38. Jonathan Schafer
    Posted Aug 2, 2007 at 10:21 AM | Permalink

    #35,

    Sorry Steve, I guess I started that. I was just trying to understand the basics of how we even get to the 2x CO2 scenario and for me that starts a bit earlier with understanding the cycle. My apologies for diverting the thread from your direction.

    If I could ask one additional question that is related (feel free to delete if you think it inappropriate): is there any difference in IR absorption/re-radiation between depleted carbon from burning of fossil fuels and naturally occurring CO2? If so, do these models account for the ratio of naturally occurring CO2 to depleted CO2 and how they would impact the stated 4 W/m^2? Or is all CO2 the same, whether depleted or not?

  39. jimDK
    Posted Aug 2, 2007 at 10:34 AM | Permalink

    Nasif, so actually the greenhouse gases help transfer IR from the planet’s surface to space. If there were no gases that absorbed IR, what would happen to the IR?

    thanks, jim

  40. Bob Meyer
    Posted Aug 2, 2007 at 10:35 AM | Permalink

    Re: #32

    Paul Dennis said

    …It is hard to see how other processes which generate CO2, other than burning fossil fuels can lead to a decrease in the oxygen content of the atmosphere. Hence the finger points towards fossil fuels!

    There must be something else in addition to fossil fuels to account for the decrease in O2. Over the last ten years CO2 has increased by 15 ppm while O2 has decreased by 30 ppm, according to your numbers. Since each CO2 molecule produced consumes one O2 molecule, the numbers should be about the same, shouldn’t they?

    Since CO2 rises and falls annually we should also see a corresponding fall and rise for O2. Do we?

  41. Paul Dennis
    Posted Aug 2, 2007 at 10:50 AM | Permalink

    Bob … following Steve’s request, I think this is best discussed over in the unthreaded section.

  42. Paul Dennis
    Posted Aug 2, 2007 at 11:01 AM | Permalink

    #42….Exactly!

  43. Bill Bixby
    Posted Aug 2, 2007 at 11:03 AM | Permalink

    RE: North’s suggested reading

    I just read this paper and I think it’s important to understand what North’s paper is trying to do. Energy balance models calculate outgoing longwave radiation as a function of surface temperature with a simple linear relationship: A + B T. As he said, the B parameter would presumably include things like the water vapor and cloud feedback. In addition, EBMs need to parameterize the surface albedo as a function of surface temperature. His paper is basically an analysis of these parameterizations, as well as a test of the parameterization by simulating the seasonal cycle.

    Thus, the paper answers Steve’s original question, but not in a way that’s obvious to me. Climate sensitivity is definitely built into the A and B parameters, but in a confusing and non-obvious way. There must be a better paper showing this. Perhaps there’s an old radiative-convective paper on this subject. That’s probably a better bet. I’ll look for one.

  44. Tom Gannett
    Posted Aug 2, 2007 at 11:28 AM | Permalink

    [snip – I realize that this is earnest but please restrict your discussion to North’s paper and don’t veer off onto “skeptic” papers. Start with examining this literature at face value.]

  45. Jon
    Posted Aug 2, 2007 at 11:35 AM | Permalink

    I have to concur with #44. This paper, while interesting in a limited way, does not address the physics of CO2 forcing.

    It cannot serve to validate the work of more complex models because A and B are derived from macro-scale properties: the regression gives the fudge factor needed to explain the correlation between temperature and some driver (such as CO2).

    In this model, the effect of an amplifier is not distinguishable from the primary agent.

    Regarding #8: The convection effect is important. Obviously flux in is not strictly balanced by flux out; e.g., there may be a net accumulation of chemical potential due to photosynthesis. Moreover, as the temperature rises, there is necessarily energy being stored.

    CO2, being an unconstrained gas, tends to heat and expand as well as re-emit. Consequently it is false to assume complete re-emission. Thus some of the CO2 effect warms the “air”, not the ground. The proportion of warmed ground to warmed atmosphere depends on convection effects.

  46. Posted Aug 2, 2007 at 11:46 AM | Permalink

    # 41

    JimDK,

    If there were no GHGs, there would be oceans and an atmosphere of water vapor, which for this issue are the main “storage houses” of heat. But if there weren’t oceans either, the IR emitted by the surface would escape to outer space, like on Mercury, asteroids, etc. The atmosphere, or what is instead called “greenhouse gases”, is a conveyor of heat, not a sink of heat. However, the ground can store heat with a coefficient of 0.043 W/m^2 K, in contrast with the 0.017 W/m^2 K of CO2 (the ground is 2.53 times more efficient than CO2 at storing heat). The last doesn’t mean that the ground is an accumulator of heat, because of its low C.

    From the example in my message #36, the heat held temporarily by the CO2 – before being transferred to other systems – is 0.09 W/m^2 K. If we double the atmospheric CO2 up to 762 ppmv (0.0012 Kg/m^3), the change of temperature caused by that amount of stored heat will be 0.085 K. If we increase the volumetric mass of CO2, the divisor increases while the heat absorbed remains constant.

    Now, the number given of 4-5 W/m^2 as the “uncontroversial” amount of heat emitted by CO2 to other systems couldn’t be real for an open system. If we take into account that heat tends to disperse spontaneously into more microstates and that CO2 is not a perfect thermos or an isolated system, then the amount of heat absorbed by one molecule of CO2 will be transferred to the molecules of a colder volume of air closer to the warmer volume of air, and from them to other closer molecules, etc.; but the total amount of heat remains constant in the sum of all the partial quantities of heat absorbed by all the molecules to which that heat was transferred (law of conservation of energy). For example, if I have 50 J of heat and it is transferred to 50 molecules of CO2, each molecule will bear 1 J (it’s not realistic because it will depend on the quantum state of every molecule, but it is just an example); however, if I have 100 molecules of CO2, each molecule will absorb 0.5 J, and the temperature of each molecule will not count as an addition to the temperature of the whole set of 100 molecules, because the movement (translational, rotational, vibratory) of each particle will be at the same level. CO2 is not a generator (integral source) of heat, but a conveyor of heat.

  47. Steve McIntyre
    Posted Aug 2, 2007 at 11:46 AM | Permalink

    #43. It looks to me like this model folds clouds into the albedo term – we’re not talking typical surface albedo.

    It seems bizarre that this article, which has no discussion of CO2, is North’s best source for how increased CO2 translates into 2.5 deg C. You’d think that somebody would have written up a clear exposition in the past 30 years.

  48. Bill Bixby
    Posted Aug 2, 2007 at 11:54 AM | Permalink

    #45: Jon-

    In the atmosphere, of course convection is important. However, “radiative forcing” is *defined* to be the instantaneous change in flux at the tropopause due to some imposed change (like doubled CO2), with everything else held constant. That’s how the 4 W/m^2 is determined. Thus, the effects of convection are explicitly excluded from the definition of radiative forcing.

    I would not overestimate the importance of radiative forcing. It seems to me that it’s just a relatively simple metric that allows one to compare various climate forcings. If you believe Pielke Sr., it’s a poor metric.

  49. Posted Aug 2, 2007 at 12:02 PM | Permalink

    # 48

    Bill Bixby,

    Strictly speaking, we cannot use the term “radiative forcing”, because there isn’t any “forcing” in natural systems, and we are talking about natural systems. It gives the impression that there is a huge man-made device that “forces” the heat from colder to warmer systems.

  50. Bill Bixby
    Posted Aug 2, 2007 at 12:02 PM | Permalink

    Steve-

    My reading of this is that the *shortwave* cloud feedback is in the albedo term and the *longwave* cloud feedback is in the “B” parameter. And I agree that this is a puzzling paper to send.

  51. Bill Bixby
    Posted Aug 2, 2007 at 12:06 PM | Permalink

    Nasif #48: “radiative forcing” is not MY definition, so I won’t argue with you …. I’m simply relaying the climate community’s definition of the term in an effort to answer Steve’s question of where the 4 W/m^2 comes from.

  52. Steve McIntyre
    Posted Aug 2, 2007 at 12:42 PM | Permalink

    #50. I know the definition – I’m asking for a derivation of this “uncontroversial” figure in an up-to-date article reflecting modern understanding. (BTW I am aware of an ancient derivation of the number but it’s not easy to find and the article is not referenced by IPCC or even IPCC references.)

  53. Gerald North
    Posted Aug 2, 2007 at 1:25 PM | Permalink

    No good deed goes unpunished.
    -Jerry

  54. Tom Gannett
    Posted Aug 2, 2007 at 1:44 PM | Permalink

    Steve,

    Your critique is humbly accepted. Let me try again. You asked Gerry for an explanation of how an increase in atmospheric CO2 (presumably a doubling) brings about a 2.5 deg C rise in mean global temperature. He fails to answer the question by asserting a 4-5 W/m^2 reduction in outgoing longwave radiation and then applying his energy balance relation to calculate a delta T from this delta I. I too would like a clear derivation of this delta I for a doubling of CO2 concentration. At the current concentration, it can be calculated using the Beer-Lambert law and CO2 extinction coefficients that better than 99.9% of longwave radiation in the CO2 absorption bands is already absorbed. How then can additional CO2 have a substantial effect on outgoing longwave radiation? I’m not trying to be snarky; I, too, would like an explanation for this conflict of theories.

  55. Bill Bixby
    Posted Aug 2, 2007 at 2:59 PM | Permalink

    Steve-

    Which number? The 2.5 deg C climate sensitivity or the 4 W/m^2 radiative forcing? The latter number you can determine for yourself by running an on-line radiative transfer model, as Willis did. Or is that not sufficient?

    [snip – Gerry North did not discuss Venus. I know that you were replying but I’ve deleted the other post ]

  56. Steve McIntyre
    Posted Aug 2, 2007 at 3:59 PM | Permalink

    I’m drawing a line in the sand here on off-topic posts, as I really don’t want a lot of theorizing that is not based on this article. Sorry about that, folks, but I’m trying to consider the interests of readers as well as writers. I’ve not forgotten about other topics, and if someone sends in a reference which derives 2.5 deg C from increased CO2, we can get to that. In such discussions we are going to take CO2 levels and increases as given. I realize that there are some issues here, but you’re going to have to hold those aside for now at this blog.

    #55. I’m told that peer-reviewed articles are the gold standard. I realize that there are online calculators that yield numbers like this, but I’m interested in an article in which the online calculators’ assumptions are described.

    For example, as I understand it, the forcing comes from “the higher the colder” – so don’t you need to include assumptions on the lapse rate? I’d like to see everything spelled out. I’d like to see graphics showing the wavelengths at which the increased forcing occurs – it’s in the wings and far bands – but, in any business projection, you’d show all this. You’d never say – well, there’s a calculator on the internet.

    I’d also like to see how the calculator reconciles to Clough JGR 1995 – an article which should have been discussed by IPCC, but wasn’t.

    So again, citations!!! not websites.

  57. DocMartyn
    Posted Aug 2, 2007 at 5:16 PM | Permalink

    Looking at both figures 2 and 5, and reading up on the summer/winter OLR of the poles, it is quite clear that these relationships are not linear. The lowest OLR is about 140 W/m2, in the Antarctic. The data would probably plot nicely to an exponential, but Kohlrausch-Williams-Watts kinetics would make more sense.
    I wish I had the actual numbers.
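
    One rough way to see the point in R: a Budyko-style straight line is essentially a tangent to a T^4 curve, so it must part company with the curve at polar temperatures (the emissivity below is an illustrative guess, not a value from the paper):

        sigma <- 5.67e-8
        eps   <- 0.61                        # illustrative: gives OLR(288 K) ~ 238 W/m^2
        olr   <- function(TK) eps * sigma * TK^4
        # tangent (linear) approximation around 280 K, in the I = A + B*T(Celsius) form;
        # note this feedback-free slope is steeper than North's fitted B = 2.00
        B <- 4 * eps * sigma * 280^3         # ~3.0 W/m^2 per deg C
        A <- olr(280) - B * (280 - 273.15)   # ~192 W/m^2
        # at an Antarctic-like temperature the line and the curve diverge badly:
        TK <- 230
        c(linear = A + B * (TK - 273.15), greybody = olr(TK))   # ~61 vs ~97 W/m^2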

  58. Jan Pompe
    Posted Aug 2, 2007 at 5:20 PM | Permalink

    #53 Jerry

    No good deed goes unpunished.

    You may be right, but thank you; your good deed is appreciated. There might yet be a reward. In a couple of months I’m going fossicking; I expect a lot of gravel, but do hope to find a gem or two amongst it. You might get lucky.

  59. Pat Frank
    Posted Aug 2, 2007 at 5:39 PM | Permalink

    #56 — It looks like a good basic derivation for the 2.8 K global effect of doubled CO2 is offered here: J. W. Chamberlain (1980) “Changes in the planetary heat balance with chemical changes in air” Planetary and Space Sci. 28(11), 1011-1018. I have this paper in pdf if you want it and can’t get it. According to ISI this paper has been cited only 8 times since 1980. None of them appear to have been by the major players in climate science.

  60. Bill Bixby
    Posted Aug 2, 2007 at 7:36 PM | Permalink

    RE: 4 W/m^2. I don’t have citations for you, but I can guess a few places to look. The NRC put out a report on radiative forcing a few years ago. Pielke Sr. is always going on and on about it. I bet he has a PDF copy and would share. Also, I’ll bet the IPCC’s first report, published in 1990, has a discussion — and maybe more recent IPCC reports. Unfortunately, the 1990 report is quite hard to find. I’ve looked a few times and have come up dry. If anyone has a PDF, it would be great to post it. If there are citations, I would expect them to be in those reports.

  61. Steve McIntyre
    Posted Aug 2, 2007 at 9:17 PM | Permalink

    RE: 4 W/m^2. … Also, I’ll bet the IPCC’s first report, published in 1990, has a discussion – and maybe more recent IPCC reports. Unfortunately, the 1990 report is quite hard to find. I’ve looked a few times and have come up dry. If anyone has a PDF, it would be great to post it. If there are citations, I would expect them to be in those reports.

    You’d lose your bet. I’ve specifically examined the IPCC reports, including IPCC 1990 (which is at U of Toronto library) and there is no derivation in those reports.

    The original derivation actually comes from some Ramanathan articles in the 1970s. I stumbled on the origins by sheer accident. I tracked all the IPCC citations and there was no derivation in them, nor in any IPCC references. I found a reference in the MacCracken volume from the 1980s mentioned previously. At some point I’ll do a post on the Ramanathan articles, but I wanted to see if anyone had an up-to-date reference, and the Chamberlain 1980 isn’t that. Ramanathan’s derivation is different from the higher-the-colder arm-waving that Houghton provides.

    BTW Ramanathan is in the news this week (Nature) reporting on Asian brown clouds – attributing some portion of warming to aerosols.

  62. Mark T
    Posted Aug 2, 2007 at 9:48 PM | Permalink

    It seems to me, Steve, that the entire process of your search serves as an audit in its own right.

    Mark

  63. Jon
    Posted Aug 2, 2007 at 11:00 PM | Permalink

    Steve, re #61:

    I have to say Ramanathan (1978), while limited, still seems to be common; e.g., the approach is repeated by a widely cited review article from 2000: “Water Vapor Feedback and Global Warming”, Held, I.M. and Soden, B.J., Annual Review of Energy and the Environment.

    To be more specific, I think you want a clear description wherein some of Ramanathan ’78’s assumptions are dropped:

    a) zero net increase in kinetic energy
    b) no latitude effect

  64. Posted Aug 3, 2007 at 12:30 AM | Permalink

    Everyone who is interested in climate sensitivity should read this concise, 1.5-page-long calculation by Bob Weber!

    Click to access Doubling_CO2.pdf

    Thanks, Luboš

  65. Rod
    Posted Aug 3, 2007 at 3:06 AM | Permalink

    #61 Ramanathan and Asian brown clouds feature in the current Nature podcast.

  66. Posted Aug 3, 2007 at 3:51 AM | Permalink

    Re #60

    I second Bill Bixby’s request. If anyone knows of the IPCC First or Second Assessment Reports in an electronic format, it would be greatly appreciated.

    A bit odd, having to go to a major university to be able to read them.

    Thanks.

  67. Posted Aug 3, 2007 at 4:04 AM | Permalink

    #64
    That calculation adds albedo and emissivity together. I’m not at all sure that this is a reasonable thing to do. Please can you justify it?

  68. Steve Milesworthy
    Posted Aug 3, 2007 at 4:14 AM | Permalink

    #56 Steve
    Are you asking for too much in one paper?

    Radiation models such as the Edwards-Slingo two-stream code are parametrizations (mainly for efficiency reasons, because they are very expensive to run). They are validated against databases such as HITRAN, e.g.:

    “Investigating k distribution methods for parameterizing gaseous absorption in the Hadley Centre Climate Model”, S. Cusack et al., JGR (1999)

    (or other Google scholar searches on Edwards Slingo, to name one particular radiation scheme not at random)

    Databases such as HITRAN have their own citations (independent of climate science). I mistakenly downloaded one once. It was very long, detailed and boring.

    The 4 W/m^2 or so is calculated by applying the (validated) radiation model to a representative distribution of the atmosphere and then comparing it with the same distribution but with doubled CO2. When this is done with a GCM, it is an output of the model rather than an input – a summary of one of its properties rather than its key feature.

    To get to 2.5C though, you need feedbacks. Models provide assessments of feedbacks, and studies such as the North paper are one simple way, among many others, of validating the feedbacks of the models.
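
    A sketch of that last step in R, with the no-feedback slope as an assumed round number (only B = 2.00 and the 4 W/m^2 forcing come from this thread):

        F2x    <- 4.0    # W/m^2, forcing for doubled CO2 (quoted above)
        B_nofb <- 3.3    # W/m^2 per deg C, assumed no-feedback (greybody-like) slope
        B_fit  <- 2.00   # W/m^2 per deg C, North's fitted value with feedbacks folded in
        F2x / B_nofb     # ~1.2 deg C without feedbacks
        F2x / B_fit      # 2.0 deg C once feedbacks flatten the OLR response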

  69. Posted Aug 3, 2007 at 4:33 AM | Permalink

    Dear richardT #67, I agree, it looks somewhat strange at second look. But the problem might be a problem of wording only. If you re-word it a bit, you could obtain a better v2 of the calculation.

  70. Posted Aug 3, 2007 at 4:42 AM | Permalink

    #69 Steve Milesworthy

    You can’t validate one computer model by comparing it against a different computer model; you can only ‘verify’ it. And that verification is in itself only limited. It just shows that two independent people used different code and the same assumptions and data to come up with a similar result. It doesn’t prove that either computer model is valid for a given application, and certainly not for confirming the validity of positive feedback mechanisms built into GCMs so that they can predict alarming increases in the mean global surface temperature over the next century.

  71. Jan Pompe
    Posted Aug 3, 2007 at 4:52 AM | Permalink

    # 64 67

    I’m not sure it’s reasonable to add albedo and emissivity either, but I’m also wondering why it is black body in, grey body out, which is the way it is presented.

  72. TAC
    Posted Aug 3, 2007 at 4:55 AM | Permalink

    Lubos (#64); Thank you for the linked document. It is easy to follow — exactly what I have been looking for. 😉

    For those who have not read it, the document shows that if you divide the well known greenhouse effect of 33 [degrees C] by the corresponding forcing [atmospheric and albedo, i.e. natural] of 124 [watts/m^2], one gets an average sensitivity of 0.25 [degrees C/(watt/m^2)].

    To first order, 4 watts of additional forcing, multiplied by a sensitivity of 0.25, results in 1 degree C of warming.

    As I understand it, none of this is controversial: The 33 [degree C] greenhouse effect is the long-established difference between the planet’s average temperature and its blackbody equilibrium temperature. The 124 [watts/m^2] come from observed values of albedo (0.297) and emissivity (0.612). The 4 [watts/m^2] shows up frequently in the AGW argument.

    Question: Why is the sensitivity computed this way so much smaller than one finds using the more detailed MODTRAN approach or GCMs?

    Is there an error in the zero-th order analysis?

    Is the sensitivity sufficiently non-linear that the average sensitivity is meaningless?

    Alternatively, is it possible that the assumptions in the particular MODTRAN/GCM cases that we have seen are unrepresentative of the planet as a whole?

    For example, is it possible there is an error in the way that MODTRAN propagates the IR signal through the atmospheric column? Do the GCMs ignore/misrepresent an important feedback process (i.e. water vapor/clouds)?

    In short, what is the best explanation for the discrepancy?
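
    For what it’s worth, the zeroth-order arithmetic above can be reproduced in a few lines; this merely restates the numbers as given, taking no position on whether folding albedo and emissivity into a single 124 W/m^2 figure is legitimate:

    ```python
    greenhouse_effect_C = 33.0   # surface temperature minus blackbody temperature
    natural_forcing = 124.0      # W/m^2, per the linked note (albedo 0.297, emissivity 0.612)
    sensitivity = greenhouse_effect_C / natural_forcing

    co2_doubling_forcing = 4.0   # W/m^2, the commonly quoted 2xCO2 figure
    print(sensitivity)                         # ~0.27 C per W/m^2
    print(co2_doubling_forcing * sensitivity)  # ~1.1 C of first-order warming
    ```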

  73. Steve Milesworthy
    Posted Aug 3, 2007 at 5:16 AM | Permalink

    #72 TAC
    However the derivation was done, the 0.25C/W/m^2 figure is the same as I’ve seen elsewhere. But the 33C difference includes all the GHGs (water vapour, CO2 etc.).

    The “alarming” (to coin this blog’s favourite word) sensitivity figures are higher because of feedbacks. For example, a basic calculation that assumes constant relative humidity would show a greater than 1C rise because the warming raises water vapour levels and so increases the amount of warming. Is there a calculator that includes this?
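
    A minimal sketch of that bookkeeping treats the feedbacks as a gain f on the Planck-only response; f is a free parameter here, not something derived from constant relative humidity:

    ```python
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def planck_response(forcing_wm2, t_emit=255.0):
        """No-feedback response: dT = F / (4*sigma*T^3) at the emission temperature."""
        return forcing_wm2 / (4.0 * SIGMA * t_emit ** 3)

    def amplified_response(forcing_wm2, f):
        """Feedback gain f lumps water vapour etc.; dT = dT0 / (1 - f)."""
        return planck_response(forcing_wm2) / (1.0 - f)

    print(planck_response(4.0))          # ~1.1 C, Planck only
    print(amplified_response(4.0, 0.5))  # ~2.1 C if feedbacks supply a gain of 0.5
    ```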

  74. Andrey Levin
    Posted Aug 3, 2007 at 6:13 AM | Permalink

    Re#64, Lubos:

    Averaging, averaging. Earth is not an elementary particle.

    The average surface temperature of 15C is an arithmetic average, which is not applicable to calculations of radiative emissivity, because emission is a function of local and momentary T to the power four. This applies both to latitudinal differences in temperature and to day/night differences. The actively emitting tropopause is at minus 40C, while the surface is much warmer. Simple temperature and energy balance averaging won’t do either.

    How about integration instead of arithmetic averaging? It shouldn’t be very difficult for an active contributor to string theory.

  75. Tom Vonk
    Posted Aug 3, 2007 at 7:37 AM | Permalink

    #72

    There are many problems with “simple” calculations like the one in #64.
    I’ll take only one, which is the all too often made confusion between T and T^4.
    A planet does NOT radiate E.T^4.
    A planet radiates Integral[E(P).T(P)^4] while its mean temperature is 1/S Integral[T(P)], P being a point in the middle of an elementary surface dS.
    Moreover an Earth without atmosphere, oceans & Co wouldn’t have the same albedo as an Earth with atmosphere, oceans & Co.

    An Earth without everything would have something like +100°C on the day half and -100°C on the night half (the order of magnitude is right, details don’t matter).
    That’s an average “global temperature” of 0°C or 273K.
    It would radiate K.(173^4 + 373^4)/2.
    Now nothing prevents you from writing that it radiates K.BLOB^4, with BLOB having the dimension of a temperature.
    BLOB is about 317K or +44°C.
    So now you have a planet with an average global temperature of 0°C and a BLOB of +44°C.
    The Moon has f.ex an average global temperature of -23°C and a BLOB of 47°C.
    Confounding BLOB with a physical temperature shows that some people still ignore that a black body in equilibrium is isothermal, which a planet with or without an atmosphere is not.
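
    The distinction is easy to check numerically; a minimal sketch of the two-hemisphere example:

    ```python
    # Mean temperature vs. effective radiating temperature ("BLOB") for the
    # airless-Earth caricature above: +100 C on the day side, -100 C at night.
    t_day, t_night = 373.0, 173.0  # K

    mean_temp = (t_day + t_night) / 2.0                 # 273 K, i.e. 0 C
    blob = ((t_day ** 4 + t_night ** 4) / 2.0) ** 0.25  # ~317 K, i.e. ~44 C
    print(mean_temp, blob)
    ```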

  76. Dave Dardinger
    Posted Aug 3, 2007 at 7:39 AM | Permalink

    Lubos,

    I’m with Jan and RichardT in being doubtful of adding the reflection with emissivity. I’ll go farther and say it’s wrong. I read the paper and sat there scratching my head wondering what justification he had for that addition and couldn’t think of one. So I decided to come back and read the rest of the thread and see if anyone else objected to that line. I’m glad to see at least a couple of others had the same reaction I did.

    You might be right that it could be worded differently, but in that case do it! Humpty Dumptyish definitions of meaning aren’t very scientific.

  77. Tom Vonk
    Posted Aug 3, 2007 at 7:48 AM | Permalink

    P.S
    I clicked on submit before reading #74 by Andrey Levin.
    So I am saying basically the same thing, with some numbers on top.

  78. John Lang
    Posted Aug 3, 2007 at 8:22 AM | Permalink

    [snip – I’ve deleted a post decrying the possibility of models altogether as falling into the vein of general venting rather than an appraisal of this article.]

  79. mzed
    Posted Aug 3, 2007 at 9:05 AM | Permalink

    Steve McIntyre–would Hack 1994 help?

    Citation: Hack, J. J. (1994), Parametrization of moist convection in the National Center for Atmospheric Research community climate model (CCM2), J. Geophys. Res., 99(D3), 5541–5568.

    Way beyond my level, but it seems there might be some here who could make sense of it.

  80. Posted Aug 3, 2007 at 2:37 PM | Permalink

    #64 #76
    Lubos, Dave
    If you replace the albedo with 0.4 and run through the calculations you get a nonsensical value for the sensitivity. It gets the right answer, but for the wrong reasons.

  81. Bob Weber
    Posted Aug 3, 2007 at 4:45 PM | Permalink

    29.7% of Pin is reflected (albedo) leaving 70.3%. Of this, 61.2% is effectively emitted (emissivity) to space as Pout. This leaves 9.1% which I assumed is absorbed by the earth, resulting in the 33°C.

    Bob

  82. DeWitt Payne
    Posted Aug 3, 2007 at 5:20 PM | Permalink

    #81

    Um, I may be missing something here, but I was under the impression that emissivity/absorptivity and albedo were complementary at any given wavelength. That is, emissivity/absorptivity = 1 – albedo. If absorptivity is much larger than emissivity you have a massive radiative imbalance, which would lead to a rapid rise in temperature, not a constant offset: absorbed energy in excess of emitted energy produces a continuous temperature increase, not a constant delta T. IIRC, Hansen’s estimate is that the current radiative imbalance is slightly less than 1 W/m2 out of the ~342 W/m2 of globally averaged incoming solar radiation (TSI/4).

  83. Curt
    Posted Aug 3, 2007 at 11:04 PM | Permalink

    DeWitt #82:

    You say, “I was under the impression that emissivity/absorptivity and albedo were complementary at any given wavelength.” You are correct here, but we are talking about dramatically different wavelengths for incoming solar radiation (visible and near infrared, primarily), and outgoing radiation (far infrared), with very little overlap. So the emissivities/absorptivities can be quite different for the two phenomena.

    When I was in college, I had some professors looking for materials/coatings for thermal solar panels that were high absorptivity/emissivity in the range of the solar wavelengths, but low absorptivity/emissivity in the far infrared. In the visible/near-infrared wavelengths, it’s the high absorptivity that counts; in the far infrared, it would be the low emissivity that mattered, keeping the panel from re-radiating out too much of the absorbed solar radiation, thus making for a hotter panel and more effective heating. I never found out how successful they were in that search.

  84. Bill Bixby
    Posted Aug 3, 2007 at 11:21 PM | Permalink

    Re: Lubos (#64)

    This calculation looks fishy to me. I guess my question would be whether the “effective emissivity” that is calculated is really a constant. Doesn’t it implicitly include the effects of greenhouse gases … so shouldn’t it vary with CO2?

    Any thoughts are appreciated.

  85. DeWitt Payne
    Posted Aug 4, 2007 at 1:18 AM | Permalink

    Curt #83

    It’s still adding apples to oranges and getting a result in grapefruits. The logic is circular to boot. The ‘effective emissivity’ was calculated by solving the Stefan-Boltzmann equation for emissivity after plugging in the net solar energy flux and the surface temperature. If the radiative imbalance were actually as large as 124 W/m2 the surface would be a whole lot hotter than 288K. There is a massive error equivalent to dividing by zero somewhere in there, but it’s not worth the effort to find it. My opinion of Lubos Motl has been considerably lowered if he is actually serious about this.

    For those interested in nuts and bolts, this is the original Berk et al. report on the development of MODTRAN. Basically, MODTRAN improves the spectral resolution of LOWTRAN by modeling pressure and temperature broadening of the absorption lines of carbon dioxide, water vapor and ten other molecules. Apparently LOWTRAN doesn’t do well above 30 km because it doesn’t do this. MODTRAN breaks down at very high altitudes where local thermodynamic equilibrium can no longer be assumed. Now I need to find the documentation on LOWTRAN to see if the basic modeling is done the way I think it is.

  86. Jan Pompe
    Posted Aug 4, 2007 at 1:55 AM | Permalink

    # 64, 67, 71, 76, 81

    First #71

    Stay away from the keyboard when you should be sleeping.

    While I don’t think albedo and emissivity should be added, I don’t think he actually adds them. He divides emissivity by the remainder after the albedo is subtracted from 1, i.e. the presumed absorptivity of the surface.

    However, if this were a control problem and the output were the rate of radiated energy, then we would have two series functional blocks, one with a gain of 0.703 (after albedo) and the next with a gain of 0.612 (emissivity). Irrespective of how these reductions occur or what causes them, the rate of energy radiated to space is expected to be

    1366 x 0.703 x 0.612 = 588 watt/m^2 which, spread over the sphere (divide by 4), corresponds to a black body temperature of 225.4K.

    Sometimes, because of the difficulty of taking measurements at intermediate points, a control engineer will use a state observer to model the intermediate states, but ultimately must take a measurement of the final output to keep it all together, so we do that. The NASA planetary fact sheet gives the blackbody temperature of the earth as 254.3K.

    While I doubt that we can provide a feedback to correct that difference, I do think it shows that such a simple model is not really very useful for working out what is happening with global temperatures. Not the least of the problems is the non-linearity of the relationship between temperature and radiated power. Another is that the albedo or emissivity values could be wrong, or, like the temperature, they may not enter linearly either.

    Does anyone know? Any other thoughts?
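
    Spelling out the arithmetic, including the divide-by-4 spreading of the intercepted beam over the sphere:

    ```python
    SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W/m^2/K^4
    solar_constant = 1366.0            # W/m^2 on the disc facing the sun

    absorbed = solar_constant * 0.703  # after the 29.7% albedo block
    emitted = absorbed * 0.612         # after the 0.612 emissivity block
    flux = emitted / 4.0               # ~147 W/m^2 averaged over the sphere

    print((flux / SIGMA) ** 0.25)      # ~225.6 K, vs the fact-sheet 254.3 K
    ```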

  87. John Finn
    Posted Aug 4, 2007 at 3:55 AM | Permalink

    I’ve only just come across this thread. I hope I’m not too late to post a few questions as there are some fairly large gaps in my knowledge on the topic in question. This may well become apparent shortly.

    I’ve quickly scanned Gerry’s paper and I think I’m right in assuming that the study has derived a fairly simple linear function for the Outgoing LW radiation at TOA (top of the atmosphere) in terms of the surface temperature. I assume TOA because of the numbers.

    As a quick check I substituted the average temp of the earth, 15 deg C, into the ‘average’ skies equation to get a LW value of 230w/m2 (202.1 + 15 * 1.9), which seems close enough to the average TOA value of 235 w/m2 which is regularly published. So far – so good.

    In order to estimate climate sensitivity (above), Gerry has effectively re-arranged the straight line equation to find the temp, T, in terms of the radiation, I. This assumes T and I are mutually dependent. I suppose they are as I can’t imagine a situation where the outgoing LW varies independently of the surface temperature, but I do have trouble with the increased CO2 scenario.

    Let’s assume that 2xCO2 does lead to a forcing of 4w/m2 (though I’m not sure this figure is totally uncontroversial). This implies that the flow of outgoing radiation is impeded by 4 w/m2. It doesn’t mean that the TOA value increases by 4 w/m2. The TOA will still need to emit 235 w/m2 but in order to do so there will be some warming within the atmosphere and at the surface – but how will this take place? How will the change at TOA map down to the surface? Also is it not possible that more CO2, apart from impeding the outgoing IR, will also impede the flow of any re-radiation back to the surface?

    Using Gerry’s re-arrangement the estimated temp rise for 2xCO2 is ~2 deg C. But let’s look at this from another angle. If we assume that the earth’s surface (at 15 deg C) has increased by 2 deg C, then using Stefan-Boltzmann: the earth emits ~390 w/m2 at 15 deg C (288K); at 17 deg C (290 K) it emits ~400 w/m2, i.e. 10 w/m2 more. So 4 w/m2 at TOA becomes 10 w/m2 at the surface? Is this correct? If it is, it is presumably due to the large water vapour feedback, but it seems a lot.

    I hope this makes sense.

    PS If these queries have already been addressed – I apologise
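
    The Stefan-Boltzmann step above checks out; the point of the question is precisely that the ~11 W/m^2 change at the surface exceeds the 4 w/m2 at TOA:

    ```python
    SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W/m^2/K^4
    e_288 = SIGMA * 288.0 ** 4   # blackbody surface emission at 15 C: ~390 W/m^2
    e_290 = SIGMA * 290.0 ** 4   # at 17 C: ~401 W/m^2
    print(e_288, e_290, e_290 - e_288)  # ~11 W/m^2 more from the surface
    ```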

  88. John F. Pittman
    Posted Aug 4, 2007 at 8:24 AM | Permalink

    http://www.john-daly.com/bull-121.htm

    I found this article interesting.

    They carry out calculations on the spectra of the main greenhouse gases with all three of the recognised radiative transfer schemes: line by line (LBL), narrow-band model (NBM) and broad-band model (BBM). They calculate the Global Mean Instantaneous Clear Sky Radiative Forcing for 1995, for atmospheric carbon dioxide relative to an assumed “pre-industrial” level of 280ppmv, as 1.759Wm-2 for LBL, 1.790Wm-2 for NBM and 1.800Wm-2 for BBM; a mean of 1.776Wm-2, with BBM 2.3% greater than LBL.

    I wonder if Gerry North could weigh in on this question/comment: why was 202.1+1.9C used instead of (0.01K+1)^4 or something similar?

  89. Allan Ames
    Posted Aug 4, 2007 at 11:15 AM | Permalink

    re 5 Steve M and following:

    (I enjoyed the Graves,Lee,North paper. It is straightforward in that you don’t have to agree, but you do know what was done and what was not done. They were not trying to intimidate me with references, which seems to be the common style in IPCC. Vonk is correct; it is statistics more than anything, like many of the papers in climate study.)

    Disclaimer: I have no training in climate study, but do know a photon from a phonon. A great variety of models exist to deal with radiation effects. As logical end points we have, first, the older grey body and band models and, second, the more recent line by line/NLTE models. The band models are used in climate models to say where radiant heat will be found or lost. The LBL models are used to predict and interpret spectra. The k class of models attempts to approximate the LBL capability within climate models but is a sort of band model. The spectroscopic gold standard these days is full LBL, full NLTE handling. The textbook by Petty covers all this pretty well. Most of the work on the LBL models (the xxTRANs) was done under contract and is covered in routine government publications. Peer reviewed publication occurred when a model was validated in some study, but these papers do not usually have the details. HITRAN has evolved into the primary spectral database for other LBL models, which take into account variation in temperature and concentration, and it has taken on a life of its own. A search on “line by line radiation transfer model” will get many useful references, f.ex http://asd-www.larc.nasa.gov/~kratz/ref/p27jqsrt.pdf

    My point here is that the 4 (plus or minus) W/m^2 can come from any IR spectroscopic model by using a thermal distribution as input. The interpretation of this number in terms of measurable temperature comes from a climate model; “radiative-convective” refers to that class of climate models.

    Any progress here? Delete if not.

  90. Posted Aug 4, 2007 at 12:51 PM | Permalink

    Steve,

    There are a few fundamental problems with the Graves-Lee-North article.
    – The empirical evidence of the forcing-temperature relationship is in fact from solar changes (both forcing and seasonal variations), not from GHGs. As said here many times, the relationship between different forcings and their influence on temperature is nearly the same (within +/-10%) within GCMs, but not necessarily in reality, as solar has its main influence in the stratosphere and in the tropics, while GHGs have their main influence in the lower troposphere and are more spread over the latitudes.
    – The influence of solar variations on clouds is established (no matter if it is from direct sunlight or indirectly via GCRs, see http://folk.uio.no/jegill/papers/2002GL015646.pdf fig.1), which increases the effect of solar. The influence of GHGs on clouds is far from settled; GCMs contradict each other even in the sign of the cloud effect. Thus the cloud feedback may be wrong if generalised to all types of forcing.

    Further, the change in IR as measured by satellites in the article is problematic and has been recalculated, due to problems with satellite drift. See: http://asd-www.larc.nasa.gov/~tak/wong/f20m.pdf
    This doesn’t change the increased insolation in the (sub)tropics much (still ~2 W/m2), but the net radiation at TOA changed from negative (more LW to space) to positive (more heat retained). The main origin didn’t change: less humidity/clouds in the subtropics gives more insolation and higher sea surface temperatures (which may also work the other way round). The latter is probably sun (or natural cycle) induced, not from GHGs, as these only give some 1.2 W/m2 increase since the start of the industrial revolution…

    Thus while the extra forcing from GHGs can be calculated (based on measured spectra), the real effect on temperature is still quite uncertain.

    As an aside, thanks for the link to Ramanathan on Nature. It reinforced my suspicion that the (negative) influence of aerosols is largely overestimated in GCMs. According to Ramanathan, the brown Asian cloud increases the GHG effect above India (and China?) by 50%. The IPCC gives a large negative (cooling) effect for human induced aerosols, but the main aerosol pollution has shifted from North America and Europe to SE Asia, thus inducing more warming than cooling… That has as a consequence that the influence of GHGs must be reduced to match the 1945-1975 cooling trend in GCMs…

  91. DeWitt Payne
    Posted Aug 4, 2007 at 1:10 PM | Permalink

    More nuts and bolts on atmospheric radiative transfer models here. Probably more than you want to know.

    Personally, I accept the IPCC conclusion that the level of scientific understanding of radiative forcing from well-mixed greenhouse gases is high. Where I have problems is with the poorly understood forcings: aerosols, whether sulfate, carbon black from biomass burning (BB) and fossil fuels (FF), mineral dust, etc., are all ranked low or very low.

    Yet the sulfate fiddle factor is critical to what success the GCMs have in hindcasting the twentieth century temperature record. We also know that ocean current cycles like ENSO and the PDO are not modeled at all. I think William Gray is on the right track when he attributes much of the recent observed temperature fluctuation to variation in ocean heat transport from the SH and tropics to high northern latitudes. I think he will be proven correct in this view in the very near future.

  92. DeWitt Payne
    Posted Aug 4, 2007 at 2:05 PM | Permalink

    If you want to cut to the chase in the link in #91, then start at page 31. That’s the beginning of the section on the Radiation Transport Algorithm.

  93. Allan Ames
    Posted Aug 4, 2007 at 2:12 PM | Permalink

    Re 91 DeWitt Payne: I do accept that the understanding of radiative processes is high compared to many other factors. One of my favorites is the coupling between surface and air, which has several twists beyond just “albedo”. But like the drunk who searches where the light is, we work with what we understand, not with what needs to be worked on.

    For anyone who wants to add inundation to simple drowning, a search on
    “air force research lab users manual modtran” will produce both detail and history.

  94. DeWitt Payne
    Posted Aug 4, 2007 at 5:18 PM | Permalink

    Allan Ames #91: Heat transfer from the surface to the atmosphere is indeed an interesting subject. I think that’s referred to in the trade as sensible heat transfer. Dry surfaces, e.g. asphalt at a race track on a sunny day, can be much hotter than the air just a few feet above the surface. The density and resulting index of refraction difference between the hot air immediately adjacent to the surface and the cooler air just above it often causes a visible distortion and reflection effect similar to a desert mirage. Such surfaces can also cool rapidly when the sun goes down or behind a cloud if the heat capacity is low, as in dry sand.

  95. kim
    Posted Aug 4, 2007 at 5:32 PM | Permalink

    N Johnson of Atmoz has two references to calculations of the Greenhouse effect in his 7/31 comment on
    the ‘Falsification’ thread.
    ==============================================================================

  96. DeWitt Payne
    Posted Aug 4, 2007 at 6:11 PM | Permalink

    kim #95: That’s fine if you have access to a university library that has copies or can get copies on loan. The rest of us are probably not up to a combined US $150 to buy these books. For anybody who’s interested here are the links to the Amazon pages for the books mentioned above:

    Fundamentals of Atmospheric Radiation: An Introduction with 400 Problems
    Radiative Transfer in the Atmosphere and Ocean

    Again, why doesn’t this page parse multiple links properly in the preview pane? I’m very reluctant to submit something that doesn’t look like it’s going to work correctly.

  97. Steve Milesworthy
    Posted Aug 5, 2007 at 4:54 AM | Permalink

    #87 John Finn
    My understanding is that the 4W/m^2 is a net value including effects on reradiation back to the surface etc. Also it is a calculation of instantaneous change, so ignores feedbacks.

    Gerry’s paper correlates observed surface temperature changes to TOA fluxes, so the figures don’t say anything about the effects of 2xCO2 as such. If it helps, the effective radiating temperature of the earth is more like 255K. A change in 2C here amounts to a change in just 7-8W/m^2. But, as indicated by North in his reply to Steve, I don’t think the paper is intended to fully answer your questions.

  98. paminator
    Posted Aug 5, 2007 at 11:24 AM | Permalink

    re- Bob Weber’s calculation-

    I am not convinced that the blackbody calculation actually represents the temperature of the Earth without an atmosphere. The first calculation of temperature, assuming no atmosphere, sets the emissivity to 1. However, this is not correct. Averaging over the surface of the Earth, at the surface of the Earth (not in orbit peering through the atmosphere), gives an emissivity of about 0.78 (you cannot assume that all of the water, trees, grasses, asphalt, etc. are absent). This results in a no-atmosphere temperature of 271 K. Adding an atmosphere with additional IR absorption then results in a temperature increase of 16 or 17 C, not the 33 C commonly presented.

    For comparison, the emissivity of the Moon is about 0.88, not 1. You would still not arrive at a 33 C temperature change even if you assume the Earth is completely desiccated like the Moon, since it is highly likely that the desiccated Earth’s emissivity would be close to that of the Moon.

  99. Mark T
    Posted Aug 5, 2007 at 12:42 PM | Permalink

    Again, why doesn’t this page parse multiple links properly in the preview pane? I’m very reluctant to submit something that doesn’t look like it’s going to work correctly.

    I just go with tinyurl links to save the issues revolving around the embedded links.

    Oh, also, you may have some recourse w.r.t. books at universities. In CO Springs, you can get books from UCCS via intra-library loan at your local library. I don’t know if it works that way everywhere, but here you’re allowed two books at a time.

    Mark

  100. DeWitt Payne
    Posted Aug 5, 2007 at 1:07 PM | Permalink

    paminator #99, You are correct about the calculation. It’s for the effective radiative temperature at the top of the atmosphere of the Earth-atmosphere system as it is, clouds and all, with an albedo in the wavelength range of solar radiation of 0.3. However, the actual spectrum emitted by the Earth is not the smooth curve that would be observed from a gray body at 255 K with an emissivity of about 0.9 in the IR. Also, when you add an atmosphere you increase the albedo from 0.22 to 0.3 because of reflection from clouds so in a sense you are comparing apples and oranges.

    What I’d really like to see is a calculation/model for a body with a completely transparent atmosphere that includes convective heat transfer from the tropics to the poles. Then add a heat transfer medium with most of the properties of water except that the vapor is completely transparent and the liquid form doesn’t freeze, or freezes at a much lower temperature so you could still have the same amount of snow and ice. I suspect the average surface temperature in this system would still be close to 255 K, though. Also, I’m pretty sure that the calculation assumes that the body in question is a perfect conductor of heat. IIRC, a perfect insulator would have a somewhat lower average temperature.

    After looking at the TOC of the Thomas and Stamnes book (second link in #96), I’m seriously considering a purchase. Has anyone here read it?

  101. Kenneth Fritsch
    Posted Aug 5, 2007 at 2:00 PM | Permalink

    This post is a reality check. I thought what Steve M was attempting to find was a less complicated modeling of the temperature variations from changes in the concentrations of atmospheric CO2. What he received in North’s suggested reading of his authored paper is a simplified model with all its anticipated shortcomings (my reading of it says it neglects the tropics or gets them very wrong, and I am not so sure the boundary conditions are realistic) that attempts to look comprehensively at regional temperatures of the globe rather than focusing on GHG effects. In the ideal case, a simplified model focusing on CO2 (and other GHG) effects on temperature would be provided to policy makers, since policy makers should be almost totally interested in what they (think they) can control. Such a simple calculation would also ideally provide a more easily understood picture of the uncertainties involved in the calculated results. In this layman’s view the question remains whether such ideals can even be approximated in the practical world.

    I think in the end, however, that what the IPCC has attempted to do is to present as much evidence as they could from all sources in attempts to make their case that AGW exists and that it will have adverse future effects. In order to make a convincing case for near term mitigating actions, the IPCC judges, I am quite certain, that they need to point to some of the more extreme (and adverse) predictions of future climate, either localized or at least regionalized, as opposed to simple global average temperature changes. That process requires the more complicated black box types of approaches. From my non-expert viewpoint, I think that simple models (and complicated ones) can show the effects of added atmospheric CO2 and other GHG concentrations on some globally averaged temperature, but what that means is not evident when considering the local to regional variations in temperature anomalies from that mean and without adding in all the other complicating feedbacks from changes in atmospheric moisture content, clouds, albedo, etc. and, for that matter, the non-additive competition of all species of GHG for absorption of radiations.

    I am not at all sure how well even the complicated models can handle the climate changes going into and out of the glacial periods, which require big climate changes (granted they occur over long time periods) in regionalized areas from relatively small changes in the summer insolation in the northern hemisphere. Does one not immediately need to invoke some serious feedbacks in attempting to explain this phenomenon, with all the concomitant complications and uncertainties this implies?

    Having said all this, I would think that a simplified look by the IPCC at increasing GHG versus temperature might be in order, or if nothing else at least a comprehensive listing of all the uncertainties that go into the calculations, no matter how complex the model used, together with an indication of how those uncertainties can affect the final results. Currently, I think, one has to depend too much on faith in what goes on in those black boxes, and in the understanding of the expert scientists who make a show of hands on likelihood and uncertainty issues for the AR4.

    And finally having said all that, I continue to learn from the discussions and comments stimulated by this thread.

  102. Allan Ames
    Posted Aug 5, 2007 at 2:18 PM | Permalink

    re 101 references: For my purposes the Grant Petty book, “A First Course in Atmospheric Radiation”, on the web @ $36, has worked well. Look carefully at the contents; it avoids some hot button topics, which I took as a plus. It is the only book on this topic I have seen in a decade.

    While only tangentially relevant to the current discussion, I extract this reference from “Unthreaded 17”; it is an interesting result from one of the better spectral models, LBLRTM by Clough: http://www.aer.com/scienceResearch/rc/m-proj/abstracts/rc.clrt2.html

  103. Dave Dardinger
    Posted Aug 5, 2007 at 5:52 PM | Permalink

    re: #96 DeWitt,

    You can enter your link by writing a description and then highlighting it and clicking “link” in the ‘quicktags’ bar. This will pull up a box where you can paste your true link and it will work fine and not extend beyond the column.

  104. David Smith
    Posted Aug 5, 2007 at 6:06 PM | Permalink

    I’ve accepted the CO2 impact (circa 1.2K assuming no water vapor effect) as a given and my interests have been on other topics. However, in the back of my mind I’ve wondered how the “lumpiness” of the upper atmosphere plays into how Earth rids itself of IR.

    By “lumpiness” I mean the fact that the upper atmosphere is not composed of globally-uniform layers at uniform humidities and temperatures. Life isn’t that simple. Instead, the upper air is made of regions (my “lumps”) which contrast sharply in humidity and temperature, giving each significantly different radiative properties. Some lumps may be more important than others with regard to removing Earth’s heat, and may be more (or less) affected by increased CO2 than a simple global average would suggest.

    I don’t have a well-formulated question with regards to this and, as mentioned, it’s way down on my priorities, but one day it’d be great to read something that explores and answers this rather than simply assume that lumpiness doesn’t matter.

  105. DeWitt Payne
    Posted Aug 6, 2007 at 12:53 AM | Permalink

    Re: #104

    Dave, I do that. I did it for the first link. Then I tried to add the second link the same way. In the preview pane, the first link and all text between the links disappeared. This has happened to me many times. Sometimes I can get two links in a post to work and sometimes I can’t. As an experiment, I’ll try to post the same two links as above and submit even if they don’t show up correctly in the preview pane.

    link 1 and here is link 2
    Ok. It did it again and all I see is the second link. Now let’s see what actually shows up.

  106. DeWitt Payne
    Posted Aug 6, 2007 at 12:55 AM | Permalink

    Aha! The problem, as I suspected, is in the preview pane. John A???

  107. Posted Aug 6, 2007 at 3:00 AM | Permalink

    Re #106

    The link shows up in the preview pane as presented in the final result if you add a space before and after the full address: “http…” will not always show up, while ” http… ” does show up and the link still works…

  108. Steve Milesworthy
    Posted Aug 6, 2007 at 4:54 AM | Permalink

    #102 Kenneth
    As I understand, there are two major problems with modelling ice ages.
    1. Computer time. A “complicated” model runs 3-4 model-years per day on a supercomputer, so an ice age cycle takes years-decades to run.
    2. Observational evidence. There isn’t much.

    Having said that, you are correct in saying that large feedbacks need to be invoked to explain the cycles. After estimates of solar, albedo, aerosol and greenhouse gas forcings were made, sensitivities equating to a 2xCO2 sensitivity of 2-2.5C were required to explain the climate of the last glacial maximum (a cold period) and the mid-Cretaceous maximum (a warm period) in Hoffert & Covey, “Deriving global climate sensitivity from palaeoclimate reconstructions”, Nature 1992.

  109. Posted Aug 6, 2007 at 1:56 PM | Permalink

    Re #109,

    Steve Milesworthy,

    your point 1 is right, but Kaspar and Cubasch used their ECHO-G climate model to look at the difference between two periods in the Eemian: at full warmth, and at the minimum temperature at the start of the last glaciation. Thus while they didn’t use the full period of the transition, they simulated two distinct periods, at 125 kyr and 115 kyr before present (BP).
    The interesting point is that during the cooling period, CO2 didn’t change at all (see here). Despite that, the model (with a low 2°C for 2xCO2!) could replicate the temperatures (derived from pollen and ice cores). Unfortunately, they didn’t simulate the next step: the reduction of CO2 by about 40 ppmv from 113 kyr BP to 107 kyr BP, without much influence on temperature (ice sheets did build up again). Thus temperatures can go down from an interglacial to a new ice age without any help from a CO2 feedback.

    Reference:
    Kaspar, F. and U. Cubasch. Simulations of the Eemian interglacial and the subsequent glacial inception with a coupled ocean-atmosphere general circulation model.

    André Bijkerk found something similar for the transition from the LGM (last glacial maximum) to the start of the Holocene: no measurable feedback from CO2, in this case for an increase of about 75 ppmv CO2…

    The least we can say is that the influence of 40 to 75 ppmv on temperature is too low to be detected, and that the temperature increase for 2xCO2 is probably (much) lower than 3°C.

  110. Posted Aug 6, 2007 at 2:01 PM | Permalink

    Sorry, I made a mistake in #110:

    Ice sheets were melting again (as can be deduced from d18O changes), not building up, while CO2 levels decreased by 40 ppmv at the start of the last ice age…

  111. Pat Frank
    Posted Aug 7, 2007 at 12:30 AM | Permalink

    #53 — Jerry your complaint is a little self-serving. In your seminar at Texas A&M immediately after your stint on the Committee on Climate Change, you said, in response to a questioner, that for you GCM outputs were the reason that you credited AGW as true, because, “we know all the forcings.”

    But we don’t know all the forcings, and you know that we don’t know all the forcings. Even more than that, no one can accurately calculate the climate response to the forcings that are known.
    snip

  112. Steve Milesworthy
    Posted Aug 7, 2007 at 4:12 AM | Permalink

    #111 Ferdinand
    Interesting paper, which I look forward to reading in more detail later. The argument I was making, though, concerned the strength of the feedbacks rather than a direct link to CO2, since the whole of the ice age cycle cannot be accounted for just by albedo, solar, aerosol and greenhouse gas changes.

    The change you identify, a drop in CO2 from about 270ppmv to 230ppmv, equates to a negative forcing of less than 1W/m^2 over a period of ten thousand years; about 1/3 of the change in greenhouse gas forcing we’ve had since 1750. The associated temperature change for a sensitivity of 2C is 0.5C which is well within the tolerance of the temperature reconstruction.

    I don’t know how much we can say about the influence of CO2 from that data.
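
    The numbers can be checked against the simplified CO2 forcing expression from the Myhre et al. 1998 paper cited later in this thread, F = 5.35 ln(C/C0); a minimal sketch:

    ```python
    import math

    def co2_forcing(c_ppmv, c0_ppmv):
        """Simplified expression from Myhre et al. 1998, in W/m^2."""
        return 5.35 * math.log(c_ppmv / c0_ppmv)

    f = co2_forcing(230.0, 270.0)
    print(f)                                    # ~ -0.86 W/m^2 for the Eemian CO2 drop

    # Scale a 2 C-per-doubling sensitivity by the forcing ratio:
    print(2.0 * f / co2_forcing(560.0, 280.0))  # ~ -0.46 C, within proxy tolerance
    ```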

  113. John Finn
    Posted Aug 7, 2007 at 6:12 AM | Permalink

    Re: #

    Steve Milesworthy

    Thanks for your reply. I agree with you about the remit of Gerry North’s paper. As far as I was concerned Gerry (or Jerry) was simply providing a starting point for further discussion. My questions were an attempt to frame that further discussion. The attacks on the paper were totally uncalled-for.

    A number of posters have queried the justification for the 4 w/m2 forcing for 2xCO2, but I think this is a separate discussion in its own right. I was hoping this particular discussion would focus on the physical processes (i.e. feedbacks) by which an initial forcing of X w/m2 (4, 5 or whatever) is amplified by a factor of 2 or more at the surface.

    This is just to set the record straight. I was not making any criticism of the North paper and I apologise to him if it appeared otherwise. I would also like to thank him for his contribution.

  114. Allan Ames
    Posted Aug 7, 2007 at 11:15 AM | Permalink

    North is to be congratulated on a paper that is still relevant well over a decade later.

    The Ramanathan review chapter noted earlier //www-ramanathan.ucsd.edu/FCMTheRadiativeForcingDuetoCloudsandWaterVapor.pdf
    (do I have to type in this whole thing to make a link?)
    contains a reference that might be useful for the temperature effect of CO2 if anyone has access: Raval, A. and V. Ramanathan (1989). Observational determination of the greenhouse effect. Nature 342, 758–61.

    In the 2006 review Ramanathan says

    We urgently need to extend the TOA forcing approach to consider the surface forcing and the atmospheric forcing individually.

    – a limitation made rather clear by the North paper.

  115. Reference
    Posted Aug 7, 2007 at 1:25 PM | Permalink

    Perhaps Gavin heard your appeal for a simple explanation of the physics; well, here it is in six easy steps.

    Enjoy.

  116. Steve McIntyre
    Posted Aug 7, 2007 at 1:33 PM | Permalink

    I’ve actually been asking for something that does not merely arm-wave through things; something of intermediate complexity, 30-100 pages, in which things are actually calculated. Gavin doesn’t provide any citations. There’s no mention of lapse rate assumptions in this post – and yet this probably figures into the argument somewhere. By comparison, the approach in Ramanathan’s articles is more detailed, but they are 30 years old.

  117. Dane
    Posted Aug 7, 2007 at 5:30 PM | Permalink

    #113,

    You say “since the whole of the ice age cycle cannot be accounted for just by albedo, solar, aerosol and greenhouse gas changes.”

    As a geologist who studied paleoclimate and solar forcing as an undergrad, I have to disagree. The solar forcing data fit remarkably well with paleoclimate fluctuations over the last few hundred thousand years. Why do some find that so hard to accept? And I am talking about lake sediments, ocean sediments, and strat sections in exposed cliffs etc., worldwide.

  118. Jan Pompe
    Posted Aug 7, 2007 at 9:19 PM | Permalink

    #117 Steve

    I think Gavin is arm-waving so hard he’s in danger of taking flight. Am I reading this right, that he’s saying in the first paragraph that the upward radiation from the surface (390 W/m^2) is higher than the incoming (240 W/m^2)? One thing that seems certain is that he hasn’t put too much thought into this. I have found a couple of papers of a similar genre to Jerry North’s; I’ll keep looking.

  119. Dave Dardinger
    Posted Aug 7, 2007 at 10:20 PM | Permalink

    re: #118 Jan,

    No, Gavin is correct about that. To the 240 W/m^2 of incoming shortwave radiation from the sun you have to add the long-wave IR radiated downward by the atmosphere. This comes primarily from water vapor, with additions from CO2 and other greenhouse gases. There is also some reflection from the clouds in the “IR windows”, which is included. This is not really controversial.

    Dave Dardinger

  120. Don Keiller
    Posted Aug 8, 2007 at 3:06 AM | Permalink

    Anyone seen this before?

    Click to access 0707.1161v2.pdf

  121. TCO
    Posted Aug 8, 2007 at 5:34 AM | Permalink

    Steve, if neither you nor Gerry North knows of an exercise of the form you describe (length, scope, approach), then maybe it does not exist.

  122. Steve Milesworthy
    Posted Aug 8, 2007 at 6:00 AM | Permalink

    #117 Steve
    I think the best collection of papers so far is perhaps:

    Myhre, G., E.J Highwood, K.P Shine and F. Stordal, 1998, New Estimates of radiative forcing due to well mixed greenhouse gases, Geophys. Res Lett. 25, 2715-2718

    as cited by Hans in #27. That and its citations state the case for the forcing.

    J. D. Annan and J. C. Hargreaves. Using multiple observationally-based constraints to estimate climate sensitivity, Geophys. Res. Lett., 33, L06704, doi:10.1029/2005GL025259

    as cited by Gavin on realclimate, which states the case for sensitivity.

    Plus Gerald North’s paper, or a later version of similar which attempts to check whether a given warming at the surface results in the expected change in outgoing longwave at top of atmosphere.

  123. Steve McIntyre
    Posted Aug 8, 2007 at 11:10 AM | Permalink

    #123. Myhre is useless for CO2 – as I recall, it calculates “CO2 equivalents” for a variety of minor gases and adjusts the CO2 forcing with this info, but does not derive the underlying CO2 forcing.

  124. EW
    Posted Aug 8, 2007 at 12:07 PM | Permalink

    #121
    Here is some discussion about it.

  125. Mark T.
    Posted Aug 8, 2007 at 12:14 PM | Permalink

    EW, your link is broken, with an extra http// contained within. Also, that’s not much of a discussion, considering the original paper ran to 90 pages or so and there are only 3 points made in the original blog post. Quite frankly, the G&T paper is so convoluted and complex that a mere blog discussion won’t be up to the task of discerning fact from fiction. G&T certainly did not help their case by making it so obtuse.

    Mark

  126. Steve McIntyre
    Posted Aug 8, 2007 at 12:25 PM | Permalink

    I set up this thread so that people would discuss mainstream papers (not G&T) so that any discussion would be based on a clear understanding of mainstream views. Until people understand those, there’s little point in thinking about G&T.

  127. Steve McIntyre
    Posted Aug 8, 2007 at 1:56 PM | Permalink

    I don’t want discussions saying things like “the ground only emitted 25 W/m2” on this thread. I’ve repeatedly asked people not to raise their own speculations on this thread (And to take time out on this board altogether from such speculations.) If any of you want to transfer such points to Unthreaded, you can, but I’m going to delete these non-North based discussions in a few minutes.

  128. Jan Pompe
    Posted Aug 8, 2007 at 9:42 PM | Permalink

    #120 Dave,

    Thanks for your remark. I wasn’t suggesting he was being controversial; it’s just that the way he wrote it comes across, to me anyway, as saying that we are getting more from the surface than is actually warming it.

    while the outward flux at the top of the atmosphere (TOA) is roughly equivalent to the net solar radiation coming in (1-a)S/4 (~240 W/m2).

    I’m interpreting this as the incoming at the surface. It certainly isn’t clear where the extra 150 W/m^2 comes from in the first place in order for it to be absorbed by the atmosphere on the way out. Outgoing and incoming are roughly the same, so why is it higher in the middle? This doesn’t add up, IMHO.

  129. Scott-in-WA
    Posted Aug 8, 2007 at 10:06 PM | Permalink

    Steve, after some number of years of being interested in climate science and in the various facets of the global warming issue, you must have some idea in your own mind of what a topical outline for this paper would look like.

    I have a suggestion: Compose the paper’s topical outline as you think it should be structured, and perhaps Dr. Pielke and Dr. Curry could each put together a team of graduate students from their respective institutions for an intercollegiate paper writing contest to see which team is most effective in fleshing out the details in a clear, concise, and readable fashion.

    Dr. Pielke and Dr. Curry will choose a panel of five judges to decide which team’s paper best meets the stated requirements.

    The winning team gets an all expense paid dinner at the local restaurant of their choice, plus each of the winning team members gets a T-shirt emblazoned with “If you can’t stand the ClimateAudit heat, get out of the RC kitchen.”

  130. Dave Dardinger
    Posted Aug 8, 2007 at 11:59 PM | Permalink

    re: #129 Jan,

    I couldn’t find the version I wanted online, but here is a picture of the earth’s heat budget which may help. The thing to note is that 64% of the incoming energy is radiated to space by clouds and the atmosphere, while only 6% makes it to space as IR directly from the surface. And all of that 64% will be matched by an equal amount of IR radiated downward. Not all of it will reach the earth’s surface, but an equivalent amount will be radiated downward from the bottom layer of the atmosphere to the surface, with one exception: latent heat (i.e. evaporated water) rises higher into the atmosphere and thus bypasses the lower layers, so its heat can escape with less heat making its way back down to the surface.
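
    The same bookkeeping in miniature, using the Kiehl and Trenberth surface figures (quoted from memory of that diagram, so treat the individual numbers as approximate); the point is that back radiation is what lets the surface emit ~390 W/m^2 while only ~240 W/m^2 of sunlight comes in at the top:

    ```python
    surface_in = {"absorbed solar": 168.0, "back radiation": 324.0}  # W/m^2
    surface_out = {"longwave emission": 390.0, "thermals": 24.0, "latent heat": 78.0}

    print(sum(surface_in.values()))   # 492 W/m^2 into the surface
    print(sum(surface_out.values()))  # 492 W/m^2 out; the budget balances
    ```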

    Anyway, if you want to discuss this more, take it to the unthreaded thread.

  131. Posted Aug 9, 2007 at 2:13 AM | Permalink

    #88 John F. Pittman,

    Because radiation depends on the fourth power of absolute temperature not the fourth power of a temperature difference.

  132. Posted Aug 9, 2007 at 2:46 AM | Permalink

    Simple calculation to show #131

    delta T = 10
    Base Temp 290K (radiation in arbitrary units)

    Delta radiation = (300)^4 – (290)^4
    Delta radiation = 8,100,000,000 – 7,072,810,000
    Delta radiation = 1,027,190,000

    vs

    (10)^4 = 10,000

    Big difference.

  133. Allan Ames
    Posted Aug 24, 2007 at 2:35 PM | Permalink

    Need reading for the weekend? Neither of the following satisfies Steve’s requirements for this thread, but they provide an overview and lots of references. Like other bloggers on this fading thread, I suspect the paper Steve wanted does not exist.

    Cloud Feedbacks in the Climate System: A Critical Review by Graeme L. Stephens

    Click to access Stephens2005.pdf

    Fig. 13 shows CO2 sensitivity as 0.8 (direct GHG); 1.7 (water vapor feedback); 2.3 (snow and ice); 1.9 to 5.2 (cloud feedback).

    Suggested elsewhere by GS is: Climate Models and Their Evaluation:

    Click to access AR4WG1_Pub_Ch08.pdf

  134. DeWitt Payne
    Posted Aug 25, 2007 at 10:49 AM | Permalink

    Re: #134

    The Stephens review seems at first glance to justify the recent work of Spencer et al. here and here. Note, these links may not be valid much longer. Climate Science is closing its doors after September 2. There will be an online archive, but it’s not clear whether the old URLs will still apply.

    It’s always been my opinion that clouds were the Achilles heel of climate models. It’s nice to see this confirmed in detail. It also seems very clear that there is no simple way to explain how a climate sensitivity is derived, or in other words how you convert a forcing of 3.7 W/m^2 to an average temperature increase of 2.5 C. I do think it’s possible to explain how the forcing itself is calculated and the first order temperature effect. I would attempt it myself, but only with very strict ground rules on allowed comments.
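
    The first-order effect amounts to dividing the forcing by the Planck slope at the effective radiating temperature; a minimal sketch:

    ```python
    SIGMA = 5.67e-8                          # Stefan-Boltzmann constant, W/m^2/K^4
    planck_slope = 4.0 * SIGMA * 255.0 ** 3  # ~3.8 W/m^2 per K at ~255 K
    print(3.7 / planck_slope)                # ~1.0 C for 3.7 W/m^2, before feedbacks
    ```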

  135. John F. Pittman
    Posted Aug 25, 2007 at 2:34 PM | Permalink

    M. Simon August 9th, 2007 at 2:13 am you said

    Because radiation depends on the fourth power of absolute temperature not the fourth power of a temperature difference.

    I asked

    I wonder if Gerry North could weigh in on this question/comment: why was 202.1+1.9C used instead of (0.01K+1)^4 or something similar?

    His was in C; mine was in K. C is relative; K (mine) is absolute temperature. You have indirectly pointed out what I did. Why is their relationship developed as linear in C, when the relationship should be T^4 in K? As far as temperature differences go, it would be K1^4 - K2^4, so temperature differences could still be used. This was my question.

  136. DeWitt Payne
    Posted Aug 25, 2007 at 3:38 PM | Permalink

    John F. Pittman August 25th, 2007 at 2:34 pm

    Why is their relationship developed for a linear C, when the relationship should be T^4 K?

    Because he wanted a linear relationship so he could rearrange it and find a sensitivity coefficient; every smooth function is approximately linear over a small enough range, and most others were doing the same. The justification is in the first paragraph of section 2.

    Most recent EBM studies have relied on the simple formula I = A + BTs

    The linear relationship does seem to be a reasonable approximation for the Nimbus and ERBE data for surface temperatures from -20 to +20 C. There is a T^4 plot, uncorrected for emissivity, in Figure 2, btw.
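
    The linearization is easy to see numerically. A sketch, borrowing the 0.612 effective emissivity quoted earlier in the thread purely for illustration; note that the fitted Budyko B of ~1.9-2.0 is smaller than the tangent slope below, consistent with the fitted value containing feedbacks:

    ```python
    SIGMA, EPS = 5.67e-8, 0.612  # Stefan-Boltzmann constant; illustrative emissivity

    def irradiance(t_celsius):
        return EPS * SIGMA * (t_celsius + 273.15) ** 4

    t0 = 15.0
    B = 4.0 * irradiance(t0) / (t0 + 273.15)  # slope of eps*sigma*T^4 at t0, ~3.3
    A = irradiance(t0) - B * t0
    print(A, B)

    for t in (5.0, 15.0, 25.0):
        print(t, irradiance(t), A + B * t)    # tangent line within ~1% over 5-25 C
    ```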

  137. DeWitt Payne
    Posted Aug 25, 2007 at 3:43 PM | Permalink

    Dave Dardinger August 8th, 2007 at 11:59 pm

    Dave, Was Kiehl and Trenberth’s article on Earth’s Annual Global Mean Energy Budget what you were looking for?

  138. Dave Dardinger
    Posted Aug 25, 2007 at 4:31 PM | Permalink

    re: #138 Dewitt,

    Well, it has the figure I was looking for. And when I tried saving it I found I already had it saved. I really ought to create a spreadsheet with a short description of each article I have, so that I can find something when I want it.

  139. John F. Pittman
    Posted Aug 26, 2007 at 10:56 AM | Permalink

    Yes, I read their reasoning. But why use a reasonable approximation when you can use the real relationship? Especially when, in their Figure, the aT^4 curve differs from their I = a + bT. What does the difference between the theoretical curve and the data actually equal? Is there a physical definition?

  140. Allan Ames
    Posted Aug 26, 2007 at 2:06 PM | Permalink

    re 135 DeWitt: Clouds have problems, but so does the water that leads to clouds. The Ramanathan review says that the constant relative humidity assumption used in the GCM’s is unsupported in theory.

    In http://www.gfdl.gov/~gth/netscape/2003/czhang0301.pdf Zhang et al. show bimodality in tropical water vapor. GW theory is like a Sierpinski gasket: wherever you look there are holes.

  141. Philip Mulholland
    Posted Sep 29, 2007 at 1:33 AM | Permalink

    In a recent posting, on another thread, an observation was made that the upward radiative flux is equal to the downward radiative flux in the earth’s atmosphere. This observation is true only if we assume that the earth is flat. For a spherical earth however there is an inherent directional anisotropy in the global atmosphere due to spherical geometry.

    If we assume that we are going to build a whole earth atmosphere model, then we need to assess the basic parameters of our planet. The science of geodesy is a challenging discipline in its own right, but let us assume that the earth’s shape can be approximated to by a sphere of radius 6,371km. The surface area of a spherical earth is therefore 510,064,472 sq km and we can assume that this surface forms the base of the planetary atmosphere. If we now determine that in our model atmosphere, the unit cell covers an area 36 kilometres square, giving a cell size area of 1,296 sq km, then we require a total of 393,568 cells to cover the entire surface of the earth.

    At an atmospheric height of 10 km, the total radius of the planetary sphere increases to 6,381 km, giving a surface area of 511,666,935sq km, which is a 0.31% increase with respect to (wrt) the basal area. At the 10 km level we require 394,805 cells to tile the surface, an increase of 1,236 cells wrt the base. Similarly at an atmospheric height of 20km, the total surface area increases to 513,271,912 sq km, that is a 0.63% increase wrt the basal area. At this level we require 396,043 cells to tile the surface, an increase of 2,475 cells wrt the base.

    While it is easy to demonstrate that in any model of the atmosphere there must always be more cells in the layer above than the layer below, and that this raster anisotropy is due to spherical geometry, the key question in studying radiative flux in a planetary atmosphere is “Is there also vector anisotropy?” In order to answer this we need to consider the question of solid angles for any point above the surface of the earth. We do this by using the “distance to horizon” measurement as a proxy for solid angles.

    Simple Pythagorean geometry allows us to determine the distance to horizon for any point above the surface of the earth if we know the height of the observer and the radius of the earth. For an earth of radius 6,371 km and an observer at 2 metres above sea level, the distance to the horizon is 5,048 metres. If we take this horizon distance as being the radius of a “sphere of view” centred on the observer then we can determine what fraction of the total “sphere of view” contains the sky above and what fraction is contained by the sea below. For a sphere of radius 5,048 metres the total surface area is 320.24 sq km. For this shell the total surface area of the spherical cap below (i.e. the sea obscuring fraction) is 160.06 sq km. Therefore, at an observer height of 2 metres, the sea obscured fraction of the sphere of view is 49.98% and the open fraction of the sphere of view (the sky above) is 50.02%.

    At a height of 10 metres above sea level the horizon distance increases to 11,288 metres and the total sphere of view surface area increases to 1,601.2 sq km. The sea obscured portion of this is 799.9 sq km, so the obscured fraction of the sphere of view has decreased to 49.96%, while the open fraction increases to 50.04%.

    At 10 km height the distance to the horizon is now 357 km; the obscured fraction of the sphere of view occupied by the earth below is now 48.60%, while the open fraction has increased to 51.40%. Out in space, at an orbiting height of 35,000 km, the distance to the horizon (now seen as the limb of the planet) is 40,877 km and the disc of the earth covers only 7.19% of the astronaut’s sphere of view.

    It is clear from this proxy solid angle analysis that, even for very thin (2 metre) onion layers, there is vector anisotropy in the planetary atmosphere and that outward radiative flux is greater than downward radiative flux.

    The distance to horizon computation has clear implications for global atmosphere model design. For a global tiling cell size of 36 kilometres square, the midpoint cell locations for the basal layer will first be in line of sight communication at a minimum height of 25 metres above the ground (the height at which the horizon distance is 18 kilometres). This implies that the minimum cell thickness in our raster model should be 50 metres and that for cells in a model atmosphere with a mid point separation distance of 36 kilometres, the radiative vector flux partition should be 49.93% down and 50.07% up.
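
    The bookkeeping above is straightforward to reproduce; a minimal sketch of the layer cell counts and horizon distances:

    ```python
    import math

    R = 6371.0               # earth radius, km
    CELL_AREA = 36.0 * 36.0  # 1,296 sq km per cell

    def cells(height_km):
        """Number of 36 km x 36 km cells tiling the sphere at a given height."""
        return 4.0 * math.pi * (R + height_km) ** 2 / CELL_AREA

    for h in (0.0, 10.0, 20.0):
        print(h, round(cells(h)))        # 393,568 / 394,805 / 396,043

    def horizon_km(h_km):
        """Pythagorean line-of-sight distance to the horizon."""
        return math.sqrt((R + h_km) ** 2 - R ** 2)

    print(horizon_km(0.002) * 1000.0)    # ~5,048 m for an observer 2 m up
    print(horizon_km(10.0))              # ~357 km at 10 km altitude
    ```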

  142. Pat Keating
    Posted Sep 30, 2007 at 10:30 PM | Permalink

    #64
    It is interesting that the Weber result of 1C is close to the value one gets from a simple non-radiative calculation/estimate, where one looks at the amount of temperature change occurring in the more recent glaciation/deglaciation cycles, estimates the change in water vapor at glacial minimum as about 75% of current levels, and takes the pre-industrial CO2 GHG effect as about 5% of the current H2O GHG effect.

  143. cba
    Posted Oct 11, 2007 at 7:59 AM | Permalink

    It’s an interesting topic. While not my field, I started looking at some of the material on radiative transfer and CO2 absorption last spring, creating my own zero-dimensional model using the HITRAN database, and started reading some of the existing literature on the subject. One of the goals was to duplicate as much of the Kiehl and Trenberth ’99 results as possible in order to verify the model.

    What has turned up so far is that I have duplicated a good deal of it but find my model in serious disagreement with some of their fundamental results – specifically, how much energy is absorbed in clear air prior to reaching the surface. In essence, their overall surface insolation and atmospheric absorption values are almost exactly backwards from mine. Their numbers indicate 70 W/m^2 is absorbed over the average of clear and cloudy skies while 170 W/m^2 is absorbed by the ground. My results are that these values are virtually swapped around (in magnitude – not some dumb mistake on anyone’s part).

    I believe another minor difference is that my CO2 doubling absorption energy turns out to be 3.6 W/m^2 rather than 3.7 or 4 W/m^2.

    Further studies with my model have brought about some interesting concepts. It seems that the value of 3.7 W/m^2 (which I believe is the generally accepted value these days for a doubling) comes from radiative absorption alone, with no convection involved. It is also based upon a column of atmosphere straight up, i.e. the minimum path distance through the atmosphere. It turns out that even at the equator the average insolation path at a point passes through about 1.5 atmospheres, and at high latitudes it can be much more. Outward radiation also averages a path of around 1.6 atmospheres, since only a fraction goes straight up.
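
    The "about 1.5 atmospheres" and "around 1.6 atmospheres" figures are easy to illustrate (a sketch, not cba's actual code). The insolation-weighted mean air mass at the equator at equinox works out to pi/2 ~ 1.57, and the effective path for diffuse outgoing flux through a layer of optical depth tau lands near the conventional 1.66 "diffusivity factor":

        import math

        # Insolation-weighted mean air mass, equator at equinox: the solar zenith
        # angle equals the hour angle h, so the weighted mean of sec(z) is
        # integral(1 dh) / integral(cos h dh) over -pi/2..pi/2 = pi/2.
        print(f"mean solar air mass, equator at equinox: {math.pi/2:.2f}")  # ~1.57

        def flux_transmission(tau, n=100_000):
            """T(tau) = 2 * integral_0^1 mu * exp(-tau/mu) dmu (midpoint rule)."""
            s = 0.0
            for i in range(n):
                mu = (i + 0.5) / n
                s += mu * math.exp(-tau / mu)
            return 2.0 * s / n

        for tau in (0.1, 0.5, 1.0, 2.0):
            T = flux_transmission(tau)
            print(f"tau={tau:3.1f}  effective air mass = {-math.log(T)/tau:.2f}")
        # comes out ~1.4-1.8, bracketing the conventional 1.66 diffusivity factor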

    Perhaps the most interesting realization is that reciprocity applies to the ‘kludge’ factor called emissivity – the correction value that permits a greybody to be modeled with Planck’s law and Stefan’s law – and that it applies to the atmosphere. It is also a complex function of wavelength, not just a simple number. The consequence is that when the absorption of the atmosphere increases (overall or in certain bands) due to a GHG, this number increases too. It enters not only as absorption but also as the emissivity in Stefan’s law, so the power emission from an incremental shell of atmosphere increases just as its absorption ability increases, without requiring an increase in the T^4 component to balance. Strictly, that assumes identical temperatures for the atmospheric shell increment and the original emitting body (the earth’s surface); in general one could need either a small increase or even a decrease in temperature to regain radiative balance, but in any case the change in temperature would be less than required to achieve balance through the T^4 component alone. This is also a consideration I haven’t seen discussed anywhere during my limited search and study time.

    Other interesting observations I’ve made include the fact that, while most emissivity measurements show the earth’s surface may be somewhat reflective in the visible, it becomes highly absorptive starting in the near infrared. The consequences of this are twofold. First, the 31% albedo of the earth is primarily due to cloud cover, probably 90% of it. Second, for the wavelengths emitted by a body at the earth’s temperature, the assumption that the surface is a true blackbody is almost perfect. Most emissivities over the wavelengths radiated by the earth are in excess of 95%, and many common surfaces are over 98 or 99% absorptive – an emissivity of roughly 0.98 to 1.0, where 1.0 is a true blackbody. For incoming radiation, which is about 41% visible light energy and 46% IR energy, absorption is mostly somewhat less than this, although ice and liquid water tend to be in the upper 90% range, meaning they reflect less than 10% of the energy received.

    Another interesting factor is how energy is absorbed in the atmospheric column. GHG absorption lines can absorb substantial amounts of energy within centimetres at some wavelengths, and perhaps as much as 20% of the total outgoing absorption occurs within 100 metres of the surface. However, once the energy at those wavelengths is absorbed, the remaining path suffers very little additional absorption through the rest of the atmosphere. Incoming solar insolation tends to be a little more uniform, losing about the same fraction of energy as it travels through each additional fraction of the atmospheric mass. Consequently, there is relatively little additional energy absorbed for light traveling through 2 or 3 atmospheric thicknesses versus going straight up through 1 thickness.

    My current conclusions on the analysis so far are several. One is that my model seems to be working fairly well. Two, the foundation of so much of these time-domain GCM models seems to date back to Budyko for some of the most profound assumptions, and on its face that would seem ludicrously off. I think his original paper indicated 5 deg C per W/m^2 increase using that empirical kludge equation he concocted, which would imply that all of the 33 deg C warming due to the atmosphere would be wiped out if we lost 7 W/m^2 of insolation, or that if we gained 16 W/m^2 the oceans would boil off. One final observation on the notion of the 3.7 W/m^2 absorption: it doesn’t mean there is a blocking of radiated energy leaving the earth/atmosphere. It’s not a ‘half duplex’ activity – emission and absorption are both going on simultaneously. What it means is that less surface radiation reaches space, so more atmospheric radiation is needed to reach space for balance. Stefan’s law dictates a T^4 dependence (not Budyko’s linear eqn). However, the reciprocity of emissivity implies it’s virtually a ‘wash’: increased GHGs mean increased absorption but also increased outward radiation, essentially requiring no T^4 temperature increase contribution. My model actually indicated a tiny decrease in temperature was needed to reach balance for a CO2 doubling.

  144. DeWitt Payne
    Posted Oct 11, 2007 at 9:50 AM | Permalink

    Re: #144

    However, the reciprocity of emissivity implies it’s virtually a ‘wash’ as increased ghgs mean increased absorption but also means increased outward radiation, essentially requiring no T^4 temperature increase contribution. My model actually indicated a tiny decrease in temperature was needed to reach balance for a co2 doubling.

    What about the decrease in temperature with altitude? That seems to me to be the major flaw in zero dimensional models. If the atmosphere were isothermal, the greenhouse effect would be much reduced or non-existent. Look at the IR emission spectrum of the earth from space. You see a dip at the CO2 15 micrometer (667/cm) band. This is because the emission comes from high in the atmosphere (approximately the tropopause) where the emissivity is limited by the lower temperature. Emissivity at any wavelength cannot be greater than one so the emissivity of a saturated band follows the Planck curve for the temperature where the emission happens. If you are high enough, you also see a spike in the center of the dip from emission high in the stratosphere where the temperature is warmer.
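
    DeWitt's point can be made quantitative with a one-layer grey model (a sketch with illustrative numbers, not anyone's actual model): if the layer is colder than the surface, raising its emissivity warms the surface; if the layer is forced to be isothermal with the surface, the outgoing flux does not depend on emissivity at all:

        SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
        S = 240.0         # absorbed solar flux, W/m^2 (illustrative)

        def surface_temp(eps):
            """One grey layer of emissivity eps over a black surface.
            Layer balance:   eps*SIGMA*Ts**4 = 2*eps*SIGMA*Ta**4
            Surface balance: S + eps*SIGMA*Ta**4 = SIGMA*Ts**4
            =>               Ts**4 = S / (SIGMA * (1 - eps/2))"""
            return (S / (SIGMA * (1.0 - eps / 2.0))) ** 0.25

        for eps in (0.0, 0.5, 1.0):
            print(f"stratified: eps={eps:.1f}  Ts={surface_temp(eps):5.1f} K")
        # eps=0 -> 255 K, eps=1 -> 303 K: emissivity warms the surface only
        # because the emitting layer is colder than the surface.

        # Forcing the layer isothermal with the surface kills the effect:
        Ts = 288.0
        for eps in (0.0, 0.5, 1.0):
            olr = (1 - eps) * SIGMA * Ts**4 + eps * SIGMA * Ts**4  # = SIGMA*Ts**4
            print(f"isothermal: eps={eps:.1f}  OLR={olr:.1f} W/m^2 (independent of eps)")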

  145. cba
    Posted Oct 11, 2007 at 3:56 PM | Permalink

    You’re basically right there. Emissivity, though, isn’t a function of temperature, but Stefan’s law with T^4 certainly is, and that predicts the total amount of energy radiated. Higher levels of atmosphere where T drops certainly do not radiate as much energy as they would if the temperature were higher. As mentioned, the emissivity reciprocity is a ‘wash’ when the temperatures are the same and a bit different when the temperatures are different. Also, if you’re above the surface at any level, there will be less matter to interact with, and the atmosphere will not attenuate as much outgoing energy as it does from the surface. Note too that at some wavelength bands there is serious attenuation in distances as short as a few centimetres, and about 20% or so of total surface-radiated energy is absorbed within the first 100 metres.

    Considering that around 20% of the radiated energy from the surface makes it into space (according to my model), the same should hold for anything emitted by the near-surface atmosphere. Also consider that temperature is a statistical sort of thing: there are some molecules that will be extremely hot right next to some that are extremely cold, and T is merely an average of these values.

    It would seem to me that this temperature curve of atmosphere with height is really just a reflection of what temperature is necessary for radiative equilibrium at any level. Note I’m ignoring convection and conduction in this which also can have some impact in the adjustments.

    Again, there will be radiation from the surface making it out into space, and radiation from the atmosphere near the surface doing about the same. The presence of GHGs and higher absorption is going to shift the balance toward atmospheric emissions and away from the surface. If you consider an incremental requirement of energy here, and that things were previously in equilibrium, then most of this additional energy radiation will be met by the increased emissivity; the balance must then be met by T^4 temperature adjustments. For my first efforts, the result in simplified form was that slightly less temperature was needed for balance. The other observation was that higher absorption simply shows up as a reduction in the distance required for the same absorption.

    Above 50 km, you’re into rather much of a vacuum full of ions and the like – the lower part of the ionosphere. There are serious differences there: line spectra dominate and classical thermal emission doesn’t apply as it does low in the atmosphere. It also has its own sources of thermal influx, from cosmic ray particle interactions, the solar wind and the like.

    Another consequence is that GHG ‘forcing’, or an increase in energy absorption, is not ‘created equal’. Emissivity doesn’t rise when solar insolation increases, since that doesn’t affect the absorption or the emissivity value of the air, even though it changes the amount of energy that must be reradiated. That energy must be absorbed and reradiated via an increase in T^4 somewhere, starting wherever the energy originally arrives.

    If I get more time for this research, my next stage after cleaning up the existing results is to try to adopt some sort of incremental absorption and re-emission for the standard atmospheric temperature curve. One should be able to reconstruct this thermal curve in principle from a radiative perspective and show what happens with an increase in GHGs. I know I’m totally ignoring convection currents and precipitation/water-cycle factors, but I suspect on average the radiative component is actually somewhat stronger. In any case, those are the icing on the cake and do provide some relief for imperfections in the approach.

    If I failed to mention it above (and I don’t have time to check now), I think most of the 33 deg C rise in average surface temperature over a blackbody is due to cloud cover absorption and reflection where it blocks outbound radiation. My efforts indicate clouds have to be responsible for the vast majority of the 31% albedo factor as well. This also indicates that a snow/ice-sheet ice age can ‘short circuit’ what otherwise appears to be a heavy-duty and stable feedback control system. On the flip side, I would think it potentially enough to handle anything short of evaporation of the surface water.

  146. DeWitt Payne
    Posted Oct 11, 2007 at 4:38 PM | Permalink

    This blog isn’t really the place to discuss this. There is a thread on UKweatherworld that might be more appropriate. There’s already a lot of stuff there on atmospheric physics and radiative heat transfer.

    Meanwhile, have you compared your results to those calculated by MODTRAN 3?

    The temperature drop with altitude is primarily a function of gravitational potential energy and adiabatic expansion. There would be a lapse rate (decrease of temperature with altitude) even if there were no GHGs; the surface temperature would be lower, though. Emission and absorption in the atmosphere are controlled by temperature and by heat input from convective processes (sensible and latent heat), not the other way around. Net heat transfer to the atmosphere by convective processes is somewhat higher than the net radiative transfer to the atmosphere from the surface.
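
    For dry air that statement has a compact quantitative form: the adiabatic lapse rate is g/c_p (a one-line sketch with standard values; the real, moist-atmosphere mean is nearer 6.5 K/km):

        g = 9.81     # gravitational acceleration, m/s^2
        cp = 1004.0  # specific heat of dry air at constant pressure, J/(kg K)
        print(f"dry adiabatic lapse rate = {1000.0 * g / cp:.1f} K/km")  # ~9.8 K/km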

    There has to be a mistake somewhere in your model or the spectral data for incoming solar radiation. The peak insolation for clear sky conditions in the tropics with the sun directly overhead at local noon is 1000 W/m^2 out of 1366. I’ve seen the same figure for the Nevada desert. It’s a little hard for me to reconcile that with your figure of ~50% absorption by the atmosphere. Do you include scattering as well as absorption? Are you including diffuse as well as direct radiation to the surface? All that blue sky counts for something. It’s not all that intense but there’s a lot of area.

    Cloud reflection is considered to be ~20% of the 31% albedo. The rest is reflection from the surface. While clouds do block IR from the surface, the net effect, depending on the type of cloud, is either a wash or net cooling, because as much or more solar energy is reflected than emitted IR is blocked. Clouds emit IR too, btw. Also, the usual figure is that 10% of surface thermal radiation escapes to space.

  147. cba
    Posted Oct 11, 2007 at 8:51 PM | Permalink

    I haven’t tried MODTRAN 3 for comparison. The numbers I gave assume a simplified cloud cover of 62% with about 80% reflectivity. My model does include absorption inbound and of the reflected beam outbound, and this jibes with the net total of 105 W/m^2 going out as albedo. I distinguish between albedo, measured at the TOA as a fraction of what comes in to the TOA, and reflectivity, the value at the point where the light is reflected. My 80% reflection corresponds to about a 46% albedo for cloud cover, which is evidently on the low end of the estimates for cloud albedo – 45-50% or so.

    I think I’ve got a bit more absorption going on for clear skies than is mentioned. However, I go from 200 nm to 65535 nm for my bandwidth. I’m also using the blackbody calculation for solar rather than the actual solar spectrum – something like 1365 or 1366 W/m^2 at the TOA in the calculation. Bandwidth resolution is 1 nm.

    The one simplifying assumption is that I’m using a constant-pressure, average-temperature atmosphere – the equivalent of an 8 km long horizontal laboratory chamber. I would expect its main problem to be just a little additional pressure broadening of line widths.

    Currently, only radiative transfer is treated; there is no scattering or other factors. Clouds are also very simplified: 62% cover with the primary assumption of total blockage when present. They are just a sheet located midway in the column, equivalent to 5 km height in the atmosphere. That is a rough average loosely based on K&T’s efforts, which used 3 types of cloud layers.

    Surface albedo is based on a rough ocean/land average, using albedo measurements that in some cases run from the visible out to 15 um, to ‘play’ with the numbers and see what they’re like. With oceans providing high-angle albedo values of 3-5% or so and constituting about 75% of the surface, it’s an overall rather low surface albedo.

    I take it your 20% refers to 20 percentage points of the total 30% (i.e. two-thirds of the total albedo) rather than one-fifth of the total. That would be more in line.

    I will try to check in the morning on the values for the solar absorption peak. I’m thinking those numbers, before dividing by 4 for averages, amount to more like 899 W/m^2 for full sun than to 1000.

  148. cba
    Posted Oct 12, 2007 at 6:13 AM | Permalink

    A quick update: I am using the standard atmosphere, and the incoming clear-sky result is 796 W/m^2.

  149. cba
    Posted Oct 12, 2007 at 3:15 PM | Permalink

    I’ve seen that MODTRAN 3 before and found it quite an effort. However, I haven’t seen ‘under the hood’, so I don’t know what’s inside and what assumptions/approximations they are making. If they are completely accurate, then it would seem I might have a problem in the software that calculates the line broadening for my model. But then, I don’t even know whether they apply line-broadening adjustments at all. I have to assume it’s something like the line broadening or the spectral resolution I’m using, as the outgoing radiation in my model shows about an additional 50 W/m^2 of absorption over what would seem to be the same settings in MODTRAN 3. This is substantial, and I believe somewhat greater than what would occur with an atmospheric path length double the standard (or minimum) one. It would also possibly explain the 20% or so stronger absorption for the solar insolation per your comment.

  150. DeWitt Payne
    Posted Oct 12, 2007 at 3:49 PM | Permalink

    Shouldn’t you be correcting for incident angle as well? Insolation varies as the cosine of the angle of the source to the surface, so with no inclination of the rotation axis the poles would get zero insolation, and insolation would increase to a maximum as you approached the equator. Also, the earth rotates, so you have to integrate insolation over the rotation period as well to get the total. 62% seems a tad high for cloud cover. Yes, I meant cloud albedo was 20 of the 30 percentage points, not 6%.
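
    A quick numerical check of the cosine weighting and rotation averaging described here (a sketch, assuming the 1366 W/m^2 TOA figure used elsewhere in this thread): averaged over the whole rotating sphere, the cosine factor is exactly 1/4, which is where "divide by 4" comes from:

        S0 = 1366.0   # TOA solar flux, W/m^2
        N = 1_000_000

        # cos(zenith) for a point on a sphere with the sun along +x is just the
        # x coordinate; sample x = cos(polar angle) uniformly on [-1, 1] and
        # keep only the sunlit half (x > 0).
        mean_cos = sum(max(-1.0 + 2.0 * (i + 0.5) / N, 0.0) for i in range(N)) / N
        print(f"mean cosine factor: {mean_cos:.4f} (exact: 0.25)")
        print(f"global mean TOA insolation: {S0 * mean_cos:.0f} W/m^2")  # ~342 = S0/4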

  151. cba
    Posted Oct 12, 2007 at 6:15 PM | Permalink

    The basic model is zero-dimension. Playing around with it and extending it some brings in non-zenith circumstances. What one finds is that at lower latitudes the average insolation path tends to be around 1.6 times the minimum (straight-up) thickness; at the poles it can be closer to 6x, up to over 20x, on average. Outbound is much simpler and has been taken to average about 1.5x. It should be noted that currently a doubling or halving of the model atmospheric thickness changes the absorption by only about 20 W/m^2, so it isn’t extremely crucial – that’s why a variation of over 50 W/m^2 against either MODTRAN 3 or reality is a much greater concern.

    K&T were actually using 3 layers with average overlap providing 62% cloud cover; I’m much simplified at present. Despite that, I’m reasonably close to some of their results. If my overall absorption is indeed wrong, something else will need to change as well to get back into basic agreement with their values. However, I think they are probably rather high in total insolation at the surface, as they must have vastly reduced insolation for the cloudy areas, there’s only so much to start with, and there is still absorption of inbound clear-sky radiation going on.

    I suppose that if Archer’s simulation doesn’t apply the broadening adjustments to the HITRAN data, that might explain a serious difference between results. However, it doesn’t explain a genuine measurement of 1000 W/m^2 at high noon if there is anything close to a standard atmosphere above.

    Unfortunately, at present it seems I am at a loss to ascertain the problem other than by going through the code again looking for errors. I might try disabling line broadening temporarily to see if that brings the results into agreement with MODTRAN. The only other alternative is to refine the spectral resolution to below 1 nm, which means lots more data storage and processing time. It also means I’ll have more trouble using the spreadsheet processing, which is already a pain: an accidental invocation of recalculation leads to delays of close to an hour. It’s likely memory-limited and using virtual memory already.

    I also requested access to your referenced discussion area today.

    best regards

  152. DeWitt Payne
    Posted Oct 13, 2007 at 6:55 PM | Permalink

    The question about MODTRAN 3 is not whether it does pressure broadening (it does), but how much the lines narrow at high altitude.

    I can copy these posts over to UKweatherworld and continue there if you want. I’m also thinking of starting a thread to critically analyze the K&T paper. They had a thread that commented on it, but more on the significance rather than the actual contents of the paper.

  153. DeWitt Payne
    Posted Oct 13, 2007 at 7:41 PM | Permalink

    Sorry, I meant the G&T Falsification paper, not the K&T Energy Balance paper.

  154. cba
    Posted Oct 14, 2007 at 4:34 AM | Permalink

    I don’t know if I have access to that yet. I made the request late Friday here, which means it probably will not be noticed until Monday. Also, having no ‘track record’ at that site, I have no idea whether access will be granted.

    As I understand line broadening, it’s going to really drop off in the lower ionosphere around 50 km. It would seem that as it drops out, the absorption starts to seriously dwindle, and also that the classical approaches pretty much have to give way to quantum effects and would require a quantum approach.

    My initial theoretical ‘experiment’ is based on a ‘laboratory experiment’ where the column is horizontal and there is no change in pressure or broadening. This will overstate the absorption somewhat, but I doubt by anywhere close to the errors that seem to be detected at present, and certainly not significantly compared with the simplification involved in reducing the real world to the experiment in the first place.

    snip – no G&T

  155. Steve McIntyre
    Posted Oct 14, 2007 at 7:14 AM | Permalink

    #155. I don’t want to spend bandwidth here on the G & T paper until we have discussed a mainstream exposition. I’ve been canvassing for candidates for a while and no very good alternatives have been presented yet. Maybe I’ll go back to Ramanathan in the 1970s.

  156. cba
    Posted Oct 14, 2007 at 7:31 AM | Permalink

    Well, there is Budyko and that ’60s-vintage analysis, with that atrocity of an empirical estimate and a climate sensitivity of a mere 5 C per W/m^2.

  157. Fred Perkins
    Posted Oct 26, 2007 at 11:40 AM | Permalink

    Steve M, the best mathematical reference I have found on how CO2 raises temperature is Pierrehumbert’s textbook draft “Principles of Planetary Climate”, Chapter 4, “Radiative transfer in temperature stratified atmospheres”, starting on page 73. The equations to calculate temperature are derived from fundamental concepts, and the simplifying assumptions made to allow use in GCMs are described in detail. The relationship between the simplified equations and the HITRAN database is also covered. I think this is the type of explanation you wanted.

  158. Dodgy Geezer
    Posted Nov 29, 2007 at 8:32 AM | Permalink

    I note a suggestion by Roy Spencer that models which do not include the huge impact of weather systems are effectively pissing into the wind (pun intended!).

    As an example, he cites his recent paper suggesting that precipitation systems act like a thermostat, taking huge amounts of heat/water vapour out of the atmosphere, and challenges modellers to include this effect in their calculations. See http://www.agu.org/pubs/crossref/2007/2007GL029698.shtml

    Does this proposed model support the effect of precipitation systems?

  159. SteveSadlov
    Posted Dec 19, 2007 at 9:41 PM | Permalink

    Must reading:

    http://www.gfdl.noaa.gov/~ih/

    Are there any problems with these papers?

  160. SteveSadlov
    Posted Dec 19, 2007 at 9:48 PM | Permalink

    OK, let me start. A key assumption is that AGW raises the tropopause to a higher altitude. For long years it has been assumed that the sole reason for the tropopause being higher at the equator is the surplus heat. Is that proven to be true? Or might it be only partially true, and are there additional factors?

    A proposed corollary to the above article of faith is that subtropical Highs expand with AGW. But there is another way they could conserve energy: their velocity/power could increase within the same form factor. Is that a negative feedback?

  161. Pat Keating
    Posted Dec 19, 2007 at 10:21 PM | Permalink

    161 Steve S

    the sole reason for the tropopause being higher at the equator is the surplus heat

    I believe that the main reason is the higher surface temperature there. You need

    &nbsp &nbsp &nbsp lapse_rate*altitude = surf_temp – (-50C),
    so the higher the temperature the higher the tropopause altitude.

  162. Pat Keating
    Posted Dec 19, 2007 at 10:22 PM | Permalink

    161 Steve S
    Darn it, why doesn’t html work?

    the sole reason for the tropopause being higher at the equator is the surplus heat

    I believe that the main reason is the higher surface temperature there. You need

    lapse_rate*altitude = surf_temp – (-50C),
    so the higher the temperature the higher the tropopause altitude.
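
    Plugging illustrative surface temperatures into this relation (a sketch, assuming a mean lapse rate of 6.5 K/km, which is not stated in the comment):

        T_TROP = -50.0   # tropopause temperature from the formula above, deg C
        LAPSE = 6.5      # assumed mean tropospheric lapse rate, K/km

        def tropopause_km(surf_temp_c):
            """Solve lapse_rate * altitude = surf_temp - (-50 C) for altitude."""
            return (surf_temp_c - T_TROP) / LAPSE

        for label, t in (("tropics", 27.0), ("global mean", 15.0), ("high latitude", -10.0)):
            print(f"{label:13s} surface {t:6.1f} C -> tropopause ~ {tropopause_km(t):4.1f} km")
        # ~11.8 km in the tropics vs ~6.2 km at high latitudes: warmer surface,
        # higher tropopause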

  163. Pat Keating
    Posted Dec 19, 2007 at 10:24 PM | Permalink

    158 Fred

    Do you have a link?

  164. Pat Keating
    Posted Dec 19, 2007 at 10:30 PM | Permalink

    145 DeWitt

    Look at the IR emission spectrum of the earth from space. You see a dip at the CO2 15 micrometer (667/cm) band. This is because the emission comes from high in the atmosphere (approximately the tropopause) where the emissivity is limited by the lower temperature. Emissivity at any wavelength cannot be greater than one so the emissivity of a saturated band follows the Planck curve for the temperature where the emission happens. If you are high enough, you also see a spike in the center of the dip from emission high in the stratosphere where the temperature is warmer.

    Thanks, DeWitt, that’s a very useful addition to my knowledge. Do you have a link to the spectrum you refer to?

  165. SteveSadlov
    Posted Dec 19, 2007 at 10:59 PM | Permalink

    RE: #163 – But what are the other factors? For example, space weather. Earth’s magnetic field. Earth’s rotation. Winds above the tropopause. Again, GCMs badly oversimplify.

  166. Pat Keating
    Posted Dec 19, 2007 at 11:07 PM | Permalink

    166 Steve S

    Yes, and tropospheric humidity, too, since the total lapse is a little different for dry air and wet air.

  167. Jan Pompe
    Posted Dec 20, 2007 at 8:30 AM | Permalink

    #162 #163 Pat

    I think you left the semicolon off the &bnsp

     &bnsp; &bnsp; &bnsp; lapse_rate*altitude = surf_temp – (-50C),

    We’ll see if this works

  168. Peter D. Tillman
    Posted Dec 20, 2007 at 8:33 AM | Permalink

    Gerry sent me a paper that he co-authored in 1993 uploaded here http://data.climateaudit.org/pdf/graves_1993.pdf, together with the following covering note:

    Dead link, 404 not found.

    PT

  169. Jan Pompe
    Posted Dec 20, 2007 at 8:39 AM | Permalink

    Pat Keating says:
    December 19th, 2007 at 10:30 pm

    So much for “&bnsp;”
    anyhow

    Thanks, DeWitt, that’s a very useful addition to my knowledge. Do you have a link to the spectrum you refer to?

    I haven’t seen DeWitt about for a bit, but he has posted the spectrum here, as has Hans.

    Hope this works; Preview is not promising.

  170. Pat Keating
    Posted Dec 20, 2007 at 8:47 AM | Permalink

    170 Jan

    Thanks very much, Jan, for the link.

    I think you got the n, b transposed. Let’s see if this works,   with the semi-colon.

  171. Pat Keating
    Posted Dec 20, 2007 at 8:49 AM | Permalink

    Yes,   that works. I put the &nbsp with the semi-colon in after the comma in the first sentence.

  172. Jan Pompe
    Posted Dec 20, 2007 at 6:03 PM | Permalink

    #171 Pat

    Thanks very much, Jan, for the link.

    You are welcome.

    as for the space:
    dyslexic fingers at 0300 hrs.:-(

  173. SteveSadlov
    Posted Dec 20, 2007 at 8:38 PM | Permalink

    FYI:

    http://members.cox.net/bgary.mtp2/dc951211/index.html

  174. curious
    Posted Nov 19, 2008 at 4:16 PM | Permalink

    19Nov08

    Sorry to post to such an old thread but Steve, if you still monitor this one, I’d be interested to know if you ever found the doc. you were looking for. If so I’d really appreciate a reference to follow up. Many thanks and apologies if it has been covered elsewhere and I’ve missed it.

  175. Steve McIntyre
    Posted Nov 19, 2008 at 4:54 PM | Permalink

    I wish people would take the trouble to post a relevant comment on an old thread as opposed to diverting a current thread. But the answer is no.

  176. curious
    Posted Nov 19, 2008 at 6:28 PM | Permalink

    Thanks for the update. Not sure what you mean re: thread choice – was there somewhere else for this query?

    Steve:
    no, this was a good place and I wish people would place this sort of comment on an old thread where the old thread is the relevant one.

  177. curious
    Posted Nov 19, 2008 at 7:53 PM | Permalink

    Ok, thanks, understood re: thread choice.

    Do you mind if I make a suggestion on this topic?

    I’m a non-professional with some science education working through some of the climate issues, and I hit your site a while ago while asking questions. I recently came to the point of wanting the summary argument you were seeking above:

    “As some of you are aware, I’ve regularly asked critics of this blog for suggestions on an exposition of how increased CO2 translates into increased temperature and have little to show for such requests – this in itself is surprising or should be surprising. Yesterday, I asked Gerry North, the Chairman of the NAS Panel for a suggested reference and he’s sent me an article and covering letter.”

    and

    “North says that the derivation of the 4-5 wm-2 forcing is said to be uncontroversial. However, this doesn’t mean that the derivation should not be set out in detail in some IPCC document, but it isn’t. I’ll re-visit this at some time. If people want to do something useful, it would probably make sense to translate the model described in this paper into R or Matlab so that people can experiment with it.”

    I’ve read the thread and haven’t seen anything suggesting a concise and coherent derivation exists. I don’t have the skills to get to the bottom of some of the docs referenced, but I think I’ve got enough to get the gist of the comments. To my mind the physics of the argument should not be beyond undergrad science, and I would have expected it to be an essential building block in every IPCC report.

    As it looks as if the reference does not exist (or does not exist in a readily accessible and digestible format), I suggest the heads of national science academies be asked to supply the argument from first principles. Given how fundamental this is to the climate change debate, I think it is something they should do. If not them, then perhaps government chief scientific officers. Ideally the request should be that the argument be made in terms accessible to anyone with an undergrad physical-science background – at least then I’ll have a chance of following it!

    Apologies if this misses the point or is a non starter.

  178. Sam Urbinto
    Posted Nov 20, 2008 at 10:54 AM | Permalink

    Yes, it is far better to update an old appropriate thread than a new inappropriate one. 🙂

    I would suggest that the search for “an exposition of how increased CO2 translates into increased temperature” should begin with the IPCC’s stance on the lifetime of the gas in the ecosystem.

    Carbon dioxide does not have a specific lifetime because it is continuously cycled between the atmosphere, oceans and land biosphere and its net removal from the atmosphere involves a range of processes with different time scales.

    These same factors will need to be taken into account when attempting to determine the approximate impact some amount of carbon dioxide would have under some set of circumstances.

  179. curious
    Posted Nov 20, 2008 at 12:48 PM | Permalink

    Thanks Sam – but even before that, can someone lay out the argument and maths that support this statement:

    “The reduction of outgoing IR due to doubling CO2 is about 4 to 5 W/m^2. This comes from detailed radiative transfer calculations and it is not controversial.”

    I haven’t found it – I’m not claiming an exhaustive search, but given its significance I’d expect it to be very easy to find, esp. with parameters that are in accord with the sun/space/atmosphere/planet system. Even simple things are hard to track down – for example, where is the m^2 measured? Earth’s surface? Atmospheric boundary? Is it the projected area relative to the incoming normal? How is the 4-5 W distributed/averaged over this surface? What wavelength range is the 4-5 W? etc. etc. I’m looking for a bulk-level basic explanation and justification of the “detailed radiative calculations”.

    As above, apologies if this is available and I’m missing it or being obtuse. Thanks.

    • Pat Keating
      Posted Nov 21, 2008 at 8:26 AM | Permalink

      Re: curious (#180),

      A detailed radiative calculation gives about 1.25C climate sensitivity, not the 3.25C used by the modelers. The 3.25C is “supported” by arm-waving re water-vapor positive feedback which is just that, arm-waving. No-one knows whether the water-vapor feedback is positive or negative, much less its magnitude.

  180. GTFrank
    Posted Nov 20, 2008 at 2:39 PM | Permalink

    snip – OT

  181. Sam Urbinto
    Posted Nov 20, 2008 at 3:31 PM | Permalink

    Curious:

    You can’t get to what doubling CO2 in the atmosphere does in the system (take your pick: 280 to 560, 200 to 400, 390 to 780, 500 to 1000, whatever) until you can remove the other variables. Even the steps by which it actually doubles in the atmosphere don’t seem clear, given that 50-60% of anthropogenic conversion of carbon goes back into the ground, and given that water vapor and clouds in the atmosphere are “not radiative forcings” but impact the level, etc. And much less does any of this explain how the actual system behaves once a doubling happens.

    That 2-4.5 number, with a “most likely value of about 3°C” that is “very unlikely to be less than 1.5°C”, is based upon estimates based upon estimates based upon estimates. If I remember correctly, it was traced back to some unreferenced paper from the 1950s or 1960s. That’s equilibrium climate sensitivity. Then there’s transient climate response: very likely larger than 1 and very unlikely greater than 3.

    But of course, “New observational and modelling evidence strongly supports a combined water vapour-lapse rate feedback of a strength comparable to that found in AOGCMs”. Although “Large uncertainties remain about how clouds might respond to global climate changes.”

    Until and unless better data come along, I have no issue with operating under the assumption that if the level of carbon dioxide in the atmosphere doubled to 780, the anomaly trend would go up by 2.8. That I personally believe it won’t get to 780, nor that what people do could make it or stop it from doing so, nor that the anomaly trend reflects temperature (energy available in the system), doesn’t matter; I can’t (nobody can) disprove any of it one way or the other.

    Catastrophe or not aside.

    GTFrank: Ah, climate change is worse than people predicted. Or in other words, Lehmann is in the camp of those who think the IPCC scenarios underestimate the sensitivity. To put it even more plainly, human release of carbon dioxide into the atmosphere causes more warming than the timid political hacks are willing to risk their necks over.

  182. curious
    Posted Nov 20, 2008 at 6:30 PM | Permalink

    Thanks Sam – maybe I’m being dim here but I’m after those “detailed radiative transfer calculations” which are “not controversial”.

    As a first step to my understanding this, here is my pick to eliminate all the other variables:

    Air = 78%N2 22%O2. No other component except CO2. Case 1 280ppm replacing equivalent O2. Case 2 560ppm replacing equivalent O2.

    Earth radius as per Google: 6378.1 km. Atmosphere thickness and stratification as per wiki mid-range values. Sunlight as per wiki: 1366 W/m2 average on the Earth’s cross-section.

    Nothing else for now – what is the outgoing radiation in case 1? What is it in case 2? What is the difference as a W/m^2 average across the Earth’s cross-section? If the calcs aren’t possible, what other picks do I need to make (I guess: avg. albedo? avg. temp at the earth’s surface? avg. atmospheric temp at the earth’s surface?), what are suitable values, what are the calcs, and what are the results? I’m looking for the simplest workable case to start with.

    Any input appreciated and, as before, sorry if all this is very basic – please feel free to point me to a reference that covers it.

    Steve – if this off topic and/or painful, thanks for sticking with it!

    • DeWitt Payne
      Posted Nov 21, 2008 at 11:42 PM | Permalink

      Re: curious (#183),

      cba and I bored everyone to tears going over some problems he was having writing his own detailed radiation transfer model. It’s not controversial, but it’s not trivial either. If you use a full line-by-line model, the number of lines that you have to deal with is in the hundreds of thousands. You then have to have multiple layers of atmosphere, even in a one-dimensional model, to cover the change in the number of molecules per volume element with altitude. The real fun comes with specifying how the temperature changes with altitude, because that affects how water vapor behaves as well as the shape of the individual absorption lines, and it controls emission as well. A place you can play with a relatively simple one-dimensional radiation transfer model is the Archer MODTRAN site. It’s a band model rather than line-by-line, but that means the numbers come back in seconds rather than hours. You can vary a lot of stuff and see what happens to the emission of thermal radiation. If you want more information, there are textbooks available. I can personally recommend Grant W. Petty, A First Course in Atmospheric Radiation.
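
      To give a feel for what "line-by-line with multiple layers" involves, here is a bare-bones single-line sketch; the line centre, strength and half-width are hypothetical stand-ins, not HITRAN values:

          import math

          # Hypothetical line parameters -- illustrative only, NOT from HITRAN.
          NU0 = 667.0    # line centre, cm^-1 (near the CO2 bending band)
          S_LN = 2.0     # integrated line strength per unit column, arbitrary units
          GAM0 = 0.07    # Lorentz half-width at surface pressure, cm^-1

          def lorentz_k(nu, p_ratio):
              """Absorption coefficient at wavenumber nu in a layer at pressure
              p_ratio * p_surface; pressure broadening scales the half-width."""
              gam = GAM0 * p_ratio
              return (S_LN / math.pi) * gam / ((nu - NU0) ** 2 + gam ** 2)

          def transmission(nu, n_layers=20):
              """Transmission straight up through n_layers of equal column amount,
              with pressure stepping down linearly (a crude stand-in for a real
              pressure profile)."""
              tau = 0.0
              for i in range(n_layers):
                  p_ratio = 1.0 - i / n_layers   # 1.0 at the surface
                  tau += lorentz_k(nu, p_ratio) / n_layers
              return math.exp(-tau)

          for dnu in (0.0, 0.1, 0.5, 2.0):
              print(f"offset {dnu:3.1f} cm^-1: transmission = {transmission(NU0 + dnu):.4f}")
          # opaque at line centre, increasingly transparent in the wings -- repeat
          # for hundreds of thousands of lines on a fine grid and the cost is obvious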

      • Kenneth Fritsch
        Posted Nov 23, 2008 at 11:22 AM | Permalink

        Re: DeWitt Payne (#192),

        DeWitt, I judge that you are the most qualified poster I have read at CA (basic knowledge plus the ability to articulate it) to give an engineering exposition on the effects of 2xCO2 on temperature. If you are not up to an exposition, I would be interested in general comments on items such as how well a band treatment can approximate a line-by-line calculation, and on other limitations in data availability and computational complexity.

        • DeWitt Payne
          Posted Nov 23, 2008 at 12:32 PM | Permalink

          Re: Kenneth Fritsch (#199),

          I appreciate the compliment, but I’m not sure that I can live up to the billing. Off the top of my head, band models like MODTRAN do well in the troposphere and less well at higher altitudes where the lines narrow substantially. OTOH, the troposphere is probably more important to the greenhouse effect. AFAIK, the GCM’s don’t even use band models because they still have too much computational overhead. There was some study a while back that found that the radiation parameterizations in some of the models were way off, yet they still seemed to be able to tune them to hindcast the GMST.

          Re: John Lang (#200),

          I think the argument that Dessler would make is that if the RH declines with a temperature drop, then it would go up with a temperature increase so rather than constant relative humidity, which would be bad enough, you would get increasing RH with temperature, which would be much worse. There would be additional feedback if the altitude of the tropopause increased with increasing temperature. But it’s still just one year’s data and the change in RH may have been the result of some other factor than temperature.

        • Kenneth Fritsch
          Posted Nov 23, 2008 at 2:43 PM | Permalink

          Re: DeWitt Payne (#201),

          This subject brings up a feature of blogs that I find frustrating: to learn properly, I need someone to summarize what has been posted in many dispersed locations on the blog. When someone does (rarely), I find it really helps, particularly when it brings to the fore some subtle part of the topic that can get lost in individual posts.

        • John Lang
          Posted Nov 23, 2008 at 3:01 PM | Permalink

          Re: DeWitt Payne (#201),

          Global warming theory says that relative humidity should remain broadly constant. (The models do show some changes in the various levels of the atmosphere but overall it is supposed to be very close to constant.)

          And yes, the fact that the temp decline translated into a decline in relative humidity should indicate the water vapour feedback is, in fact, higher than projected. None of the models would work if there was a positive water vapour feedback. The Earth’s climate would just be runaway greenhouses and runaway ice planets if that were the case.

          Other studies have also shown wide-ranging results so I think this one just fits into the mode of relative humidity varies somewhat and we still don’t know what causes that variation. It might be constant or it might vary slightly with temperature as a negative feedback.

          I guess Antarctica would be a good place to test the theory. There should be no water vapour there at all given the average temperature is below the -18C that Earth would be without a greenhouse effect.

        • Pat Keating
          Posted Nov 23, 2008 at 4:51 PM | Permalink

          Re: John Lang (#203),

          the temp decline translated into a decline in relative humidity should indicate the water vapour feedback is, in fact, higher than projected.

          Not really. The effect of additional cloud formation due to higher RH is generally believed to provide a negative feedback, and must be factored in accurately before a statement like this can be made.

    • Pat Keating
      Posted Nov 23, 2008 at 7:52 AM | Permalink

      Re: curious (#183),

      Important new release from NASA at http://earthobservatory.nasa.gov/Newsroom/view.php?id=35952 :

      Andrew Dessler and colleagues from Texas A&M University in College Station confirmed that the heat-amplifying effect of water vapor is potent enough to double the climate warming caused by increased levels of carbon dioxide in the atmosphere…..

      The answer can be found by estimating the magnitude of water vapor feedback. Increasing water vapor leads to warmer temperatures, which causes more water vapor to be absorbed into the air. Warming and water absorption increase in a spiraling cycle.

      Water vapor feedback can also amplify the warming effect of other greenhouse gases, such that the warming brought about by increased carbon dioxide allows more water vapor to enter the atmosphere.

      “The difference in an atmosphere with a strong water vapor feedback and one with a weak feedback is enormous,” Dessler said.

      Climate models have estimated the strength of water vapor feedback, but until now the record of water vapor data was not sophisticated enough to provide a comprehensive view of how water vapor responds to changes in Earth’s surface temperature.

      • kim
        Posted Nov 23, 2008 at 8:15 AM | Permalink

        Re: Pat Keating (#194),

        “The answer can be found by estimating”. Hmmmm. How about measuring instead of estimating?
        =======================================

  183. John Lang
    Posted Nov 20, 2008 at 7:39 PM | Permalink

    A CO2 doubling equalling a 3.25C increase in temps would require temps to have increased by 1.3C to 1.4C already. We are only at 0.7C, so there is clearly a problem with the expected doubling values.

    The AGW theorists first proposed aerosols as the reason temps have not kept up with predicted trends. This seems to have been abandoned now, and they are saying that the deep oceans are absorbing some of the increase (not the sea “surface” temps, since those are already in the numbers).

    The data show there is some warming of deep-ocean temperatures: a 0.1C increase at some latitudes down to 1,000 metres and a 0.05C increase at some latitudes down to 2,000 metres. During the ice ages, deep-ocean temperatures dropped from their current 3.0C to about 0.0C, so there is definitely a deep-ocean temperature response to a cooler surface.

    So perhaps they are right. But one has to wonder why they didn’t know this before. Why did Hansen’s 1988 projections not include this impact? They still say the equilibrium temperature increase from doubled CO2 will eventually reach 3.25C; it will just take longer to get there.

    One should then ask why they aren’t telling us that. The temperature response curve gets adjusted outward by 35 years or 75 years or even hundreds of years. There is also the 800-year lag between CO2 increase and surface temperature in the ice ages, so it may take 800 years for the deep ocean to complete a full cycle.

    We reach 560 ppm CO2 in 2075 but it still takes another 50 years before temperatures catch up to the 3.25C doubling? What if it actually takes another 800 years?

    Just be clear instead of saying “the deep oceans are absorbing some of the increase.”

    The other explanation, of course, is that the models are off by a factor of 2.

  184. Sam Urbinto
    Posted Nov 21, 2008 at 1:30 PM | Permalink

    Observationally, all else aside, the anomaly trend is up 0.7 and carbon dioxide up 33%. That gives 2.1 for a doubling, operating under the assumption that CO2 levels are a proxy for all anthropogenic effects on climate, plus and minus, and that the anomaly trend of near-surface land and water is a proxy for the energy levels of the planet.

    “We guess around 1-5. The science is settled! Stop denying there’s catastrophic global warming.”

    • Phil.
      Posted Nov 21, 2008 at 2:36 PM | Permalink

      Re: Sam Urbinto (#186),

      Observationally, all else aside, the anomaly trend is up .7 and carbon dioxide up 33%. That gives 2.1 for a doubling,

      Not really since warming is expected to respond to Log([CO2]/[CO2]’).

  185. John M
    Posted Nov 21, 2008 at 3:09 PM | Permalink

    Sam,

    Phil.dot’s right. If CO2 is up 33%, [CO2]/[CO2′] = 1.33, log 1.33 = 0.124. For doubling, [CO2]/[CO2′] = 2, log 2 = 0.301.

    0.124/0.301 = 0.41, so we’re 41% of the way there, which gives delta = 1.7 K for doubling.

    You over-estimated, but still within the 1-5 K estimate. In fact, even if my math is wrong, I’m sure I’m within the 1-5 K range.
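
    John M's arithmetic checks out; a minimal sketch (assuming the simple logarithmic scaling, with no lags or other forcings):

        import math

        def per_doubling(dT_obs, c_ratio):
            """Implied warming per CO2 doubling, assuming dT ~ log(C/C0)."""
            return dT_obs * math.log(2.0) / math.log(c_ratio)

        print(f"{per_doubling(0.7, 1.33):.2f} K per doubling")      # ~1.70, as above
        print(f"{3.25 * math.log(385 / 280) / math.log(2):.2f} K")  # ~1.49 expected
        # to date at 3.25 K per doubling -- the figure John Lang uses below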

  186. Sam Urbinto
    Posted Nov 21, 2008 at 3:38 PM | Permalink

    All I’m saying is that (ignoring such things as lag, non-climate non-human factors, etc.) a simple extrapolation of the anomaly trend and carbon dioxide, treated as a 1:1 cause/effect relationship, gives 0.7 for 33%, or 2.1 for 100%.

    I did say CO2 levels are treated simply as a proxy for all human activities plus or minus. And 2.1 is well in the range of 1.5 to 4.5 or what have you.

    As soon as we get that engineering level derivation of things, all that can be adjusted as needed….

  187. John Lang
    Posted Nov 21, 2008 at 7:19 PM | Permalink

    Using the logarithmic formula, with CO2 as a proxy for all GHGs, the increase in CO2 from 280 ppm to the current 385 ppm would have increased temps by 1.49C already (if we are to meet the 3.25C-per-doubling formula).

    There has been some fall-off in methane and CFCs, which has reduced the expected trend-to-date numbers. gavin said recently on realclimate that it should have been about 1.2C to date, but you can never really nail these global warming guys down on what is actually expected, except that it will eventually be near-disastrous.

    If you assume the deep ocean explanation is merely misdirection, the 0.7C to date would indicate temps will increase by about 1.8C per doubling or another 1.1C to go by about 2080 keeping Co2 growth at its current (slightly) exponential rate.

  188. John Lang
    Posted Nov 21, 2008 at 8:27 PM | Permalink

    Sorry, I should have noted that CO2 levels had already increased to 285 ppm by the start of the first global temperature measurements in 1850. The formulas show temps should already have increased by 0.1C in going from 280 ppm to 285 ppm, so the increase in temps from the earliest global temperature measurements in 1850 should only be 1.39C.

  189. curious
    Posted Nov 22, 2008 at 9:01 AM | Permalink

    Many thanks DeWitt Payne above – I’ll follow up on the reference.

  190. jae
    Posted Nov 23, 2008 at 10:27 AM | Permalink

    Climate models have estimated the strength of water vapor feedback, but until now the record of water vapor data was not sophisticated enough to provide a comprehensive view of how water vapor responds to changes in Earth’s surface temperature.

    Hmm, then why isn’t it getting hotter? And why don’t the tropics ever get over 33 C?

  191. John Lang
    Posted Nov 23, 2008 at 12:03 PM | Permalink

    I read the Dessler water vapour paper talked about above.

    The results are not quite what is being portrayed.

    The main result was measuring the change in water vapour across all levels of the troposphere as temperatures changed in the winter (DJF) of 2007 to the winter of 2008. Temperatures declined over this period by about 0.4C.

    The results showed there was a decline in relative humidity of 1.5% (percentage points that is) in the very lower troposphere and an increase of 1.5% in the very upper troposphere. The middle was constant.

    Now, there is much more water vapour in the lower troposphere than in the upper, so the study really found a decline in overall relative humidity during a (historically) large decline in temperature.

    In my mind this does not at all prove that relative humidity stays broadly constant with changes in temperature. If anything the result shows that relative humidity has lots of variation.

    One should also note that thanks are provided at the end of the paper to gavin and the usual realclimate suspects and to NASA for funding the study.

    Dessler previously undertook a longer study (covering less of the troposphere) which showed that relative humidity was not keeping up with the temperature increases (so I suppose he has now been bought off).