The IPCC “Simplified Expressions”

Reader DAV raised the following interesting question:

The strange thing about 6.3.5 Simplified Equations that gets me is why should CO2, CH4 and N2O have different equational forms? And what would be the physical basis for raising something to the 0.75 or 1.52 power? The whole thing looks ad hoc, as if someone was insistently forcing a linear regression fit.

This was raised in the context of a discussion of the logarithmic form of the CO2 relationship, reader DAV observing that other structural forms were reported for other GHGs. So where did these other relationships come from originally?

IPCC TAR reported their “simplified expressions” in their Table 6.2 shown below here:

myhre1.gif

The provenance of the various CO2 expressions is provided, but the provenance for some of the other ones is not as clear as one would like. However, it’s not hard to determine that they come from Myhre et al 1998, the corresponding table being shown below here. Myhre states explicitly that the functional forms, with all their peculiarities, are derived from IPCC AR1 (1990); indeed, some of the parameters remain unchanged (CH4 for example.)

myhre4.gif
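For reference, the simplified expressions in Myhre et al 1998 (and carried into TAR Table 6.2) can be written out directly. The following is a minimal sketch using the coefficients as reported in TAR Table 6.2; the band-overlap function f(M,N) is where the peculiar 0.75 and 1.52 powers enter:

```python
import math

def f_overlap(M, N):
    """CH4/N2O band-overlap term from TAR Table 6.2; M, N in ppb.
    This is where the odd 0.75 and 1.52 powers appear."""
    return 0.47 * math.log(1 + 2.01e-5 * (M * N) ** 0.75
                             + 5.31e-15 * M * (M * N) ** 1.52)

def forcing_co2(C, C0=278.0):
    """CO2 forcing in W/m^2 (first TAR expression); C, C0 in ppm."""
    return 5.35 * math.log(C / C0)

def forcing_ch4(M, M0=700.0, N0=270.0):
    """CH4 forcing in W/m^2; concentrations in ppb."""
    return (0.036 * (math.sqrt(M) - math.sqrt(M0))
            - (f_overlap(M, N0) - f_overlap(M0, N0)))

def forcing_n2o(N, N0=270.0, M0=700.0):
    """N2O forcing in W/m^2; concentrations in ppb."""
    return (0.12 * (math.sqrt(N) - math.sqrt(N0))
            - (f_overlap(M0, N) - f_overlap(M0, N0)))

print(forcing_co2(2 * 278.0))  # doubled CO2: 5.35*ln(2) ≈ 3.71 W/m^2
```

Note that CO2 gets a pure logarithm while CH4 and N2O get square roots minus an overlap correction, which is exactly the structural difference reader DAV is asking about.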

Moving back to IPCC (1990): it cited Hansen et al 1988, mentioning that the functional forms were adopted from Wigley 1987, a publication in the CRU house organ, Climate Monitor. And sure enough, if one continues back to Hansen et al 1988, one finds the functional forms in Appendix B, as shown below:
myhre21.gif

As previously noted, there is no derivation of the functional forms in Hansen et al 1988; it cites Lacis et al 1981, where the matter is not discussed at all. Possibly Wigley 1987 will shed some light on this. The CRU library is mailing me a copy.

On a more positive note, whatever the reasons for the original derivation, Myhre et al 1998 re-estimated the functional forms from their radiative-convective model, generating the relationship shown below. From this one can see that, at levels that interest us, not much really turns on whether the relationship is modeled as a log relationship or a square root relationship.

myhre5.gif

What is perhaps more instructive here is that the results in Myhre et al 1998, as in Lacis et al 1981, were calculated from a 1-D radiative-convective model, which should be much more accessible analytically than a 3-D model. Myhre et al 1998 summarize their model as follows:

We use the 10 cm-1 narrow-band radiative transfer scheme of Shine (1991) with the HITRAN 1992 (Rothman et al. 1992) spectral-band data, except where otherwise stated. In a number of publications (Freckleton et al. 1996; Christidis et al. 1997; Pinnock and Shine 1998) it has been shown that this scheme can reproduce both irradiances and forcings to within a few percent of line-by-line calculations for a wide range of gases.

The forcing is defined as: the change in irradiance at the tropopause following adjustment of stratospheric temperatures, following the trace gas perturbation. This so-called “adjusted forcing” is a better indicator of climatic impact than the so-called “instantaneous forcing” in which stratospheric temperatures are kept constant (see IPCC 1994, 1995; Hansen et al. 1997). We use the fixed-dynamical-heating approximation (see e.g. Forster et al. 1997) to calculate the temperature changes.

At this point, I’m reasonably content that the narrow-band radiative transfer scheme relied on here does what it’s said to do. I haven’t confirmed this, but I don’t plan to pursue it at this time. My browse of the radiative transfer literature leaves me with a pretty high comfort level that the authors are not Mannian.

I’m mainly interested in the apparent assumption of an unchanging atmospheric profile with additional CO2. It seems a very unlikely coincidence to me that the tropopause just “happens” to occur at a level of pretty much maximum CO2 impact; it seems far more likely to me that tropopause height is interrelated with CO2 levels. This complicates the math significantly. It looks to me like a calculus of variations problem, and not necessarily a very easy one.

One of the problems with articles like Myhre et al 1998 and Lacis et al 1981 is that they deal with the matter in purely parametric terms. They leave the atmosphere unchanged and derive a sharp response. My guess is that concurrent changes in the atmospheric profile would reduce the response, perhaps even substantially. Because neither Lacis et al nor Myhre et al took a mathematical approach to the problem, they don’t really provide much insight into the relationships. Luboš has posted on this aspect of the topic and I’m going to examine his logic.

Hansen, J., M. Sato, A. Lacis, R. Ruedy, I. Tegen, and E. Matthews, 1998: Climate forcings in the Industrial Era. Proc. Natl. Acad. Sci., 95, 12753-12758.
Myhre et al 1998 online here

31 Comments

  1. Larry
    Posted Jan 10, 2008 at 5:16 PM | Permalink

    Of course, these are just fits to model results. If the curve fits, wear it…

  2. Sam Urbinto
    Posted Jan 10, 2008 at 5:33 PM | Permalink

    We will sell no overfitting before its time.

  3. John Lang
    Posted Jan 10, 2008 at 6:48 PM | Permalink

I guess we could empirically test these model fits since the calculations shown are for CO2 rising from 300 ppm to an Estimated Future Level (which is about 385 ppm currently).

We were at 300 ppm approximately at year 1900. Global temperatures have increased about 0.7C since 1900.

    Going from 300 ppm to 385 ppm translates into an increase of approximately 1 to 1.2 W/m2 according to the fitted chart.

    Each 1 W/m2 is supposed to increase average temperatures by an average of about 0.8C, so the models and the fitted chart are pretty close to the empirical results (if you believe global temperatures have actually increased by 0.7C since 1900 but then Hansen is the official recorder of that figure as well.)
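As a quick check of this back-of-envelope, here is the same arithmetic using the TAR simplified CO2 expression (ΔF = 5.35 ln(C/C0)) instead of reading off the chart:

```python
import math

dF = 5.35 * math.log(385.0 / 300.0)  # W/m^2 for CO2 going 300 -> 385 ppm
dT = 0.8 * dF                        # using the ~0.8 C per W/m^2 figure above
print(round(dF, 2), round(dT, 2))    # -> 1.33 1.07
```

The log formula gives ~1.33 W/m^2, slightly above the 1 to 1.2 W/m^2 read off the fitted chart but in the same ballpark.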

  4. Sam Urbinto
    Posted Jan 10, 2008 at 7:11 PM | Permalink

    We know the global mean temperature anomaly has risen by a trend of .8C since 1880 (or about .6C since 1970).

    We know ice/air measured CO2 has gone up about 110 ppmv in the same period.

    We know the IPCC says land-use changes and fossil fuel burning is what they believe is primarily responsible for the anomaly trend rise.

So CO2 alone is immaterial, it’s simply one component of one of the two reasons they believe the anomaly is up. And that says nothing about what the anomaly actually represents; it’s another issue.

  5. jae
    Posted Jan 10, 2008 at 8:20 PM | Permalink

    OK, I suppose this is another stupid question, but I have to ask it. All the GHGs except water vapor are included in Table 3. Where is the equation for water vapor?

  6. Demesure
    Posted Jan 11, 2008 at 2:32 AM | Permalink

    #6 Water vapor is SUPPOSED to be constant so no forcing. But the AR4 starts to mention forcing due to irrigation. Far from a settled science.

  7. dreamin
    Posted Jan 11, 2008 at 2:44 AM | Permalink

I have a question. I was debating GCMs with somebody and I was arguing that GCMs used by the IPCC have obviously been tuned to fit the instrumental record; when you pull the CO2 forcing out of the model, it’s no surprise that it no longer matches.

    He asked me whether anyone has taken a GCM without CO2 forcing and tuned it to match the record. i.e. why hasn’t anyone done this to show that the models can be and are tuned.

    Has this been done? If not, why not? And if so, where?

    TIA

  8. MJW
    Posted Jan 11, 2008 at 3:43 AM | Permalink

    I was curious as to whether I could match Hansen’s formula over its effective range with an exponential.

    Hansen’s formula is:

    T = log(1 + 1.2*c + 0.005*c^2 + 0.0000014*c^3)

    The range is 300 to 1000 ppmv.

    I hoped to find “A” and “P” such that the function,

T = A * (1 - exp(P*c))

    was a close match.

    It turns out, I couldn’t. However, I did find “A”, “B”, “P”, such that,

T = A - B*exp(P*c)

    is a very close match.

    The function is:

T = 10.142 - 5.2312*exp(-0.00142*c)

    The maximum relative error over the range 300 to 1000 is about 0.236%, which occurs at c=1000. The correlation between Hansen’s function and the exponential is 0.9999048. I did the curve fitting by hand (with the help of R), so there may be even better versions.

    I’m not sure what to conclude, other than that just because a function kind of looks logarithmic (or exponential) over a limited range doesn’t necessarily prove it is.
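MJW’s comparison is easy to reproduce. A minimal sketch (taking Hansen’s log as the natural log, which is an assumption on my part; a different log convention or fitting grid would change the exact error figure):

```python
import math

def hansen(c):
    """Hansen's formula as quoted above, c in ppmv."""
    return math.log(1 + 1.2 * c + 0.005 * c**2 + 0.0000014 * c**3)

def mjw(c):
    """MJW's hand-fitted shifted exponential."""
    return 10.142 - 5.2312 * math.exp(-0.00142 * c)

# Maximum relative error over the stated range, 300 to 1000 ppmv.
errs = [abs(mjw(c) - hansen(c)) / hansen(c) for c in range(300, 1001)]
print(max(errs))
```

Either way, the fit stays well under 1% over the whole range, which supports the point: over a limited range, a log of a cubic and a shifted exponential are very hard to tell apart.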

  9. Posted Jan 11, 2008 at 4:00 AM | Permalink

    Dear Steve and others,

    I think it is obvious that the formulae with the logarithms of polynomials that happen to be truncated at the cubic term or ad hoc combinations of 0.75th and 1.52th power can’t be derived in any rigorous, analytical form from a sensible theory. They are just expressions designed to match graphs calculated numerically by climate models.

    Now, you’re right that some of the one-dimensional models are simple enough so that an analytical solution could exist but I think that the folks with some experience in differential equations feel that the forms above are simply not the kind of results that one may obtain analytically from the kinds of differential equations that occur in Nature. To make this sentence more dramatic, the “simple” solutions usually contain rational and hypergeometric functions, among others, while the combinations of logarithms etc. in the tables are the kind of functions that don’t appear as solutions but rather as man-made non-linear regression.

    The most obvious way to see that all the expressions can’t be analytical is to look at those many different forms of the CO2 result. This shows that they are guesses.

    On the other hand, related expressions could be analytical results. And surprisingly for many people, very bizarre fractional powers of quantities could occur. In critical phenomena described by scale-invariant theories, bizarre critical exponents entering exact but non-intuitive power laws are frequently found. Some of these unexpected powers are pretty universal and occur simultaneously in several disciplines of science. I could give you some examples.

    Dear jae #6,

    I think that water vapor is not listed in these tables because these are tables of formulae describing equilibrium and H2O is not a long-lived gas which means that the effects of an adjusted concentration of H2O will only last temporarily. The H2O cycle will rather quickly return the H2O concentration to levels dictated by other, “long-term” parameters of the environment – by evaporation, precipitation etc. In this sense, H2O is not a driver and its concentration is not an independent variable in these considerations.

    Of course, this conclusion is no dogma. There is no qualitative difference between H2O and other gases. But there is a lot of quantitative differences that make it natural to treat CO2, CH4 etc. as gases whose concentrations are adjustable (independent variables) in equilibrium but water as a compound whose concentration is determined by others.

    As guaranteed by my previous texts about the tropopause, I agree with Steve that its location is linked to the concentration of gases in the atmosphere. If CO2 were the only gas, (exponential) increase of the CO2 concentration would lead to a (linear) lift of the troposphere’s altitude. However, we should remember that CO2 is not the only (or the most important) gas in the atmosphere and the universal term “tropopause”, while not sharply defined, normally doesn’t refer to the CO2 component only but to the whole atmosphere. The tropopause is the upper boundary of the troposphere which is the layer where most of the weather occurs – and where the weather occurs is more influenced by water in the atmosphere rather than CO2.

    So if you fix the location of the tropopause to the major weather events, it is more closely linked to H2O concentrations which are not too strongly influenced by CO2 concentrations. H2O concentrations are rather constant, which is why the location of the tropopause is rather constant, too. In this sense, the position of the “tropopause” doesn’t shift if we add CO2. However, one must be careful about sentences involving the term “tropopause”: if these sentences are meant to be quantitative or even accurate, they should have a well-articulated definition of the tropopause. On the other hand, there can be meaningful qualitative or approximate sentences involving “tropopause” whose validity may be judged without a rigorous definition of the term “tropopause”. I think that most sentences with “tropopause” in the literature are of qualitative character only – models and reality simply don’t offer any sharp while useful boundary of this kind.

    Dear Dreamin #6,

    one of the usual alarmist arguments is that, in contradiction with your assumption, the existing climate models without the enhanced CO2 effect accounted for can’t reproduce the past observations and the 20th century warming. There has simply been some kind of “underlying” quasi-linear trend that is unlikely to be explained by the well-understood normal portions of the existing models. That’s why they say that the first new effect they think of – greenhouse effect – must be responsible for the whole anomaly.

    That’s of course no proof of anything because there can be many other important effects related to clouds, cosmic rays, ocean circulation etc. that are not being properly included in the GCMs. But on the other hand, you shouldn’t think that once you have some freedom to adjust parameters, everything goes. Even with some freedom in parameters, there are usually some predictions and constraints and hypotheses with unknown parameters (even many unknown parameters) may still be confirmed or falsified – or strongly supported or disfavored. This is a kind of discussion that has occurred in high-energy theoretical physics and laymen (sorry) tend to absolutize the notion that a free parameter means ignorance or freedom to fit the curves. It means some freedom but the amount of freedom is often insufficient to get what you want.

    Virtually all of scientific insights we have contain some free parameters that can be and must be adjusted to match the observations. But that doesn’t mean that these insights are equivalent to complete ignorance. It doesn’t mean that theories with parameters are useless to learn things. The Standard Model of particle physics has about 30 parameters (including neutrino masses) and it arguably describes all phenomena in Nature (when general relativity as a theory of gravity is included with an extra parameter or two) at the fundamental level. On one side from the Standard Model, string theory has no adjustable continuous parameters but gives us another kind of ignorance. On the other side, there is the rest of science where effective theories of complicated systems have many more parameters than 30 (in total). They’re still very useful.

    Best
    Lubos

  10. Raven
    Posted Jan 11, 2008 at 4:47 AM | Permalink

    Lubos,

    one of the usual alarmist arguments is that, in contradiction with your assumption, the existing climate models without the enhanced CO2 effect accounted for can’t reproduce the past observations and the 20th century warming.

When I read the attribution part of AR4 I see that the IPCC assumes that the warming from 1980-1998 cannot be explained by random climate variations (i.e. they explicitly excluded any model runs that randomly produced a trend when CO2 forcing was not included). I am not convinced that this is a reasonable assumption when dealing with an autocorrelated time series like the annual mean global temperature. Do you have any thoughts on the matter?

  11. Hans Erren
    Posted Jan 11, 2008 at 5:18 AM | Permalink

    one other serious complication exists in the real world which we shouldn’t overlook. There are two stable tropopause heights observed in the atmosphere:
    Tropical tropopause
    Arctic tropopause

At their boundary (mid-latitude) the most interesting weather occurs, where most people live and where climate change affects the most people. What will happen with increased CO2: will the tropical tropopause move northwards, will the interaction between the arctic and tropical tropopause become more or less active?

    lots of guessing and scaremongering is happening.

    tropopause profile
    http://www-das.uwyo.edu/~geerts/cwx/notes/chap01/tropo.html

    tropopause maps
    http://www.atmos.washington.edu/~hakim/tropo/

  12. Phil.
    Posted Jan 11, 2008 at 8:09 AM | Permalink

    Steve, the different profiles arise because of the different absorption environments the different species find themselves in: a Gaussian line shape with saturation at the line center will have a log dependence on concentration, a Lorentzian line shape with saturation at the line center will have a square root dependence. John Creighton has shown this in another thread and it’s fairly easy to show numerically by integrating across the profile of the different lineshapes (10 mins to do it in XL). The different shapes depend on what the dominant mechanism for broadening of that species is, Doppler broadening or collisional broadening.
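Phil.’s ten-minute numerical check can be sketched as follows. For a single line with optical depth τ(x), the equivalent width W = ∫(1 − e^−τ) dx measures the total absorption. This is a minimal sketch with idealized, isolated line profiles (strictly, the strong-line Doppler growth is ∝ √ln u, which merely looks quasi-logarithmic over a few decades, while the Lorentzian grows as √u):

```python
import numpy as np

# Frequency offsets measured in line half-widths; wide grid so the
# Lorentzian wings are captured even at high optical depth.
x = np.linspace(-5000.0, 5000.0, 2_000_001)
dx = x[1] - x[0]

def eq_width(tau):
    """Equivalent width W = integral of (1 - exp(-tau)) over the line."""
    return float(((1.0 - np.exp(-tau)) * dx).sum())

W = {}
for u in (100.0, 10000.0):                 # optical depth at line centre
    W[u] = (eq_width(u / (1.0 + x**2)),    # Lorentz (collisional) profile
            eq_width(u * np.exp(-x**2)))   # Doppler (Gaussian) profile

print(W[10000.0][0] / W[100.0][0])  # Lorentzian growth over 100x absorber
print(W[10000.0][1] / W[100.0][1])  # Gaussian growth over 100x absorber
```

A 100x increase in absorber multiplies the Lorentzian equivalent width by roughly 10 (the square root), but the Gaussian one by only about 1.4, i.e. the saturated Doppler line barely responds to more gas, just as Phil. describes.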

  13. Steve McIntyre
    Posted Jan 11, 2008 at 9:45 AM | Permalink

    #10. Luboš, thanks for chipping in on this both here and at your blog. I’m mulling over your recent post on the matter and hope to discuss it.

  14. Kenneth Fritsch
    Posted Jan 11, 2008 at 10:07 AM | Permalink

    Re: #12

    There are two stable tropopause heights observed in the atmosphere:
    Tropical tropopause
    Arctic tropopause

    Hans Erren, could this topic be introduced in query form on the Judith Curry thermo thread? I think she touches on this topic in her chapter 13.

  15. Sam Urbinto
    Posted Jan 11, 2008 at 10:26 AM | Permalink

Regarding water. Obviously, water is the predominant force involved in climate. Land use and fossil fuel burning influence the natural cycles of ocean currents, fresh water production from melting, rain, cloud cover, and so on.

The runoff of manmade chemicals into rivers into the ocean. The particulates dissolving into rivers and oceans and being deposited on snow and ice. The irrigation of huge areas of bare land. Contrails. Stratospheric water vapor as a forcing due to methane oxidation. The interactions with ozone and hydroxyl radicals et al. Sea salt aerosols. Soil dust and industrial dust et al and their impact on liquid water content and cloud amounts (see one such example on cloud condensation nuclei).

    The fact is that not only is water a huge major part of climate that impacts and is impacted by a large percentage of all the factors involved, but water vapor can be a positive forcing or a positive or negative feedback. Is rain a case of water being a negative forcing? I’d say so. And since a number of anthropogenic causes have effects (direct and indirect) upon water in all 3 forms, I call BS on the idea that humans have no control over water’s role in climate. Although it would be fair to say it really isn’t direct control, in that we don’t really know if doing X will result in Y. But there you go.

The rest of it: some of the things to consider in the mix.

    The hydroxyl radical (OH) is the primary cleansing agent of the lower atmosphere, in particular, it provides the dominant sink for CH4 and HFCs as well as the pollutants NOx, CO and VOC. Once formed, tropospheric OH reacts with CH4 or CO within a second. The local abundance of OH is controlled by the local abundances of NOx, CO, VOC, CH4, O3, and H2O as well as the intensity of solar UV; and thus it varies greatly with time of day, season, and geographic location.

Based on the OxComp workshop, the SRES projected emissions would lead to future changes in tropospheric OH ranging from +5% to -20%

    At http://www.grida.no/climate/ipcc_tar/wg1/ also see 154.htm and 155.htm

    Many aerosols are photochemically formed from trace gases, and at rates that depend on the oxidative state of the atmosphere. The feedback of the aerosols on the trace gas chemistry includes a wide range of processes: conversion of NOx to nitrates, removal of HOx, altering the UV flux and hence photodissociation rates, and catalysing more exotic reactions leading to release of NOx or halogen radicals.

  16. DAV
    Posted Jan 11, 2008 at 10:42 AM | Permalink

    Lubos Motl #10:

    I think it is obvious that the formulae with the logarithms of polynomials that happen to be truncated at the cubic term or ad hoc combinations of 0.75th and 1.52th power can’t be derived in any rigorous, analytical form from a sensible theory. They are just expressions designed to match graphs calculated numerically by climate models.

Precisely what I was driving at. Overfitting a curve is a nice exercise but rarely has predictive value outside of the given range, so what would be the point? You’d think the goal would be a useful model for future observation/prediction and not one that simply represents the current dataset on hand. It might be interesting to note that the last five passing taxi numbers were all powers of three, but who really cares if that can’t be used to predict the number of the next?

There are times when overfitting is desirable. For instance, I might have a model that predicts my spacecraft’s orbital position to a reasonable accuracy based only on the spacecraft clock value. But then, the orbit isn’t going to change appreciably and the clock will remain within a known range. It would be foolhardy to assume the model was usable for any other purpose, including other spacecraft.

    It’s worth repeating: I would think the goal in science, on the other hand, would be to provide a predictive model and not one that spits out the currently known data to a high degree of accuracy. If the GCM’s are nothing more than overfits, why would anyone rely on them for any prediction?

    The GCM builders surely know this. If they go about making predictions and labelling them reasonable guesses wouldn’t that make them dishonest?

  17. Kenneth Fritsch
    Posted Jan 11, 2008 at 10:47 AM | Permalink

    Re: #10

    That’s of course no proof of anything because there can be many other important effects related to clouds, cosmic rays, ocean circulation etc. that are not being properly included in the GCMs. But on the other hand, you shouldn’t think that once you have some freedom to adjust parameters, everything goes. Even with some freedom in parameters, there are usually some predictions and constraints and hypotheses with unknown parameters (even many unknown parameters) may still be confirmed or falsified – or strongly supported or disfavored. This is a kind of discussion that has occurred in high-energy theoretical physics and laymen (sorry) tend to absolutize the notion that a free parameter means ignorance or freedom to fit the curves. It means some freedom but the amount of freedom is often insufficient to get what you want.

    In my mind, what Lubos Motl says here captures the essence of the views that Isaac Held expressed on parameterizations in his visit to CA on the thread “Truth Machines” in post #64.

    http://www.climateaudit.org/?p=845

One would have to agree that parameterizations are required in a climate model and that well defined and understood parameters probably are quite “stiff” in reference to overfitting the model. The analysis should, therefore, concentrate on what parameterizations are being used in the models and how much confidence we have in our understanding of the parameter and its implementation.

    Analyzing parameters is similar to the question posed by the subject of this thread in that one needs to know what is being parameterized and how it is being accomplished. The IPCC could do a much better job of detailing these issues.

  18. Phil.
    Posted Jan 11, 2008 at 11:07 AM | Permalink

    Re #17

    The log and square root dependencies are not random curve fit functions, they arise from the physics, see for example:
    line shape issues

  19. Larry
    Posted Jan 11, 2008 at 12:12 PM | Permalink

    17,

    Overfitting a curve is a nice exercise but rarely has predictive value outside of the given range so what would be the point?

    It’s frequently done in engineering to simplify calculations within the range. It’s completely useless for extrapolation. I think this is obvious. I hope we’re not beating a dead horse, but I don’t see any real insight beyond that.

  20. Larry
    Posted Jan 11, 2008 at 12:15 PM | Permalink

19, fine, but a log of a cubic is an arbitrary overfit. Just because there’s a log in there somewhere doesn’t mean the cubic is anything but a wiggly spline with too many parameters. I’m sure it fits the data very well. But try that exercise on half the data, and see how well it predicts the other half.

  21. steven mosher
    Posted Jan 11, 2008 at 1:53 PM | Permalink

If I were speculating I’d say that law should be logistic

    http://en.wikipedia.org/wiki/Logistic_function

    especially if more CO2 causes more warmth and more warmth (oceans) more CO2

    kinda like an autocatalytic reaction, which follows the logistic function.

    Just a thought.
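For what it’s worth, the speculation is easy to illustrate. A sketch with purely illustrative parameters (K, r, t0 are not fitted to anything): the logistic curve looks exponential early on and saturates late, which is the signature of an autocatalytic process.

```python
import math

def logistic(t, K=1.0, r=1.0, t0=0.0):
    """Logistic function: solution of the autocatalytic ODE f' = r*f*(1 - f/K)."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Near-zero early, steepest at t0, flattening toward the carrying capacity K.
print(logistic(-4.0), logistic(0.0), logistic(4.0))
```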

  22. Bugs
    Posted Jan 11, 2008 at 10:04 PM | Permalink

    It’s worth repeating: I would think the goal in science, on the other hand, would be to provide a predictive model and not one that spits out the currently known data to a high degree of accuracy. If the GCM’s are nothing more than overfits, why would anyone rely on them for any prediction?

    The GCM builders surely know this. If they go about making predictions and labelling them reasonable guesses wouldn’t that make them dishonest?

That’s why they are models: they use known physical processes as the basis, with parameters to set certain settings that can only be derived empirically, as Motl says. That is, they set the parameters for the modelled physical processes so they can match known records, then make predictions. When Mt Pinatubo went off, it was a good chance to see how they went up against a real, short-term but important phenomenon.

  23. John Baltutis
    Posted Jan 12, 2008 at 2:04 AM | Permalink

    Re: #23 wherein Bugs says:

    …they set the parameters for the modelled physical processes, so they can match known records, then make predictions.

See my post at http://www.climateaudit.org/phpBB3/viewtopic.php?f=3&t=9 which discusses GCM predictability.

  24. DAV
    Posted Jan 12, 2008 at 2:13 AM | Permalink

    I guess I really should explain why I see the different curve equations for different gases a problem. Along the way, I may cast some light on Steve’s logarithm question.

    All I can say is if you are insisting on looking at the bandpass characteristics of a gas you are not only missing the forest for the trees but you don’t even see the trees because of all of the darn twigs that are in the way. Temperature in regard to climate is a macro effect. It’s an average of many complex kinetic interactions. The equations that I was referring to are using atmospheric concentrations (in ppm) to arrive at some additive temperature rise.

Just an overall expectation: for any given gas, I think you should be able to come up with a pretty simple formula. GHGs are going to raise temperature in pretty much the same manner as building insulation. That is, they impede the flow of energy. Steve made a comment that electrical equations are just an analogy, but the reality is that electrical circuits are an analogy only in the sense that thermodynamic analyses and electrical analyses are dealing with different forms of energy transfer. In many, if not all, respects they are equivalent. Thermal resistance is used in thermodynamics in the same manner as resistors are used in electrical engineering.

The bandpass characteristics of a gas are nice to know. For one, it explains why the GHG effect is largely a one-way effect. I would think that the energy coming from the Sun has a fixed energy distribution with regard to frequency. Likewise for the Earth’s radiation back into space. So given that distribution, the attenuation provided by a gas should be pretty much fixed. When you talk about increasing the concentration of a gas, you are just talking about more of the same thing. So, on the surface, you’d think it would be a fairly straightforward equation.

The form would be expected to be logarithmic. In building insulation, every bit you add has a diminishing return. 100 cm of insulation isn’t much better than 99 cm, percentage-wise. The reason is that in the outer layers, there is less energy flow to impede.

Now, gases aren’t quite that simple. First, in low concentrations, they tend to spread out. But in the climate world that’s pretty much already taken care of in the statement of concentration. Secondly, building insulation is pretty much fixed in position. Gases, however, are free to move about and will change altitude depending upon how much heat they’ve absorbed. This may have a noticeable effect but I’m not sure how.

Okay, then, I acknowledge that the total attenuation based upon concentration might not be a simple expression. But I still think one gas is pretty much interchangeable with any other. A given gas may have some thermal resistance, X, at some unit concentration and another may have Y. But that doesn’t explain why the different gases have such a wild difference in form as the ones given for CO2 and methane. I’d think both would be of the form R(gas)A(conc), where R(gas) gives the gas’s thermal resistance (I would expect this to be a fixed value, as it is for other insulators) and A(conc) gives the gaseous attenuation response for a given concentration. I would expect that to be the same for all gases. The only exception that I can see is how the gas may rise when heated.

    So, maybe that’s why the equations are so different? Or maybe they just look different?

In a previous post that has been shooed off to elsewhere, I made the unwarranted assumption that doubling the concentration leads to doubling the thermal resistance. That would only be close to true in very low concentrations. The change in resistance is what is logarithmic. If you are working backwards from power per unit area (W/m^2), that too is logarithmic.

I confess I haven’t read all of the papers yet. I need to do this. Still, those equations are bothering me.
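The insulation analogy in the comment above can be made concrete with resistors in series. A sketch with made-up numbers (dT and R_per_cm are arbitrary illustrative values):

```python
# Heat flow through insulation treated like resistors in series:
# q = dT / R_total, with R_total proportional to thickness.
dT = 20.0        # temperature difference across the insulation (arbitrary units)
R_per_cm = 0.5   # made-up thermal resistance per cm of insulation

def flux(cm):
    """Steady-state heat flow through `cm` centimetres of insulation."""
    return dT / (cm * R_per_cm)

for cm in (1, 2, 99, 100):
    print(cm, round(flux(cm), 3))
# Going from 1 to 2 cm halves the flow; going from 99 to 100 cm barely moves it.
```

Note, though, that simple conduction gives q proportional to 1/thickness rather than a logarithm; in the radiative case the log form comes from line saturation (see Phil.’s comments above), not from the insulation analogy itself.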

  25. DAV
    Posted Jan 12, 2008 at 2:16 AM | Permalink

    Rats! I accidentally hit CR while spell checking that post. Caused it to be submitted before I was through (*sigh*)

  26. Bugs
    Posted Jan 13, 2008 at 5:37 AM | Permalink

    John Baltutis

they are a little more thorough and clever about it than that.

    http://icp.giss.nasa.gov/research/ppa/2001/mconk/

  27. Ron Cram
    Posted Jan 13, 2008 at 7:49 AM | Permalink

    re: 4
    John Lang,

    I think your calculation leaves out any consideration of natural climate variability (very common among climatologists). If you want to “empirically test these model fits” I would suggest you take changing temperature trends into account. From 1945 to 1975, CO2 was rising yet temperature was falling. This has to tell us something important (even if it is only to help us calculate the level of natural climate variability).

  28. kim
    Posted Jan 13, 2008 at 8:01 AM | Permalink

    And now temperature is falling while CO2 is rising. Tell me once, fame on you; tell me twice, shame on me.
    ======================================

  29. Phil.
    Posted Jan 13, 2008 at 10:32 AM | Permalink

    Re #29

    And yet according to GISS 2007 was the warmest year for the northern hemisphere over land.

    The increase in warming trend towards the north is dramatic and would certainly seem to tie in with the Arctic sea ice trends this decade.

  30. kim
    Posted Jan 13, 2008 at 1:22 PM | Permalink

    Landtemps appear to crack the whip, and the sources of ice are protean.
    ====================================

  31. John Lang
    Posted Jan 13, 2008 at 3:27 PM | Permalink

    The overall temperature trends reported by the NOAA and GISS follows this “forcing” extremely closely.

2 Trackbacks

  1. […] post by Climate Audit and software by Elliott Back This entry is filed under Data save. You can follow any responses to […]

  2. […] for these TAR results here and that Myhre et al 1998 specifically applied the IPCC 1990 forms (see here ); we noted that IPCC 1990 attributed the forms to Wigley 1987 and Hansen et al 1988 (see here for […]
