Unthreaded #35

454 Comments

  1. BradH
    Posted Jun 3, 2008 at 4:32 AM | Permalink

    Rather sad (but typical of recent years) that Nature is gradually becoming a press release service for major institutions and drug companies.

  2. Geoff Sherrington
    Posted Jun 3, 2008 at 5:04 AM | Permalink

    Starbuckets.

    Now that we have identified buckets of various construction, reasonably able to be reproduced, is there a CA reader with a yacht willing to sip a coffee then spend a day on a “Starbuckets” excursion?

    Instead of taking past adjusted estimates of differences between collecting apparatus, is this subject not ripe for a set of modern comparative measurements? A lot could be clarified in a day, maybe several different days with different weather. I don’t have a boat. Volunteers? Past experience a boon for technique.

    Look at what Steve et al. achieved with their Starbucks tree ring collection day. A highly important contribution.

  3. Larry T
    Posted Jun 3, 2008 at 8:43 AM | Permalink

    I was a scientist who worked with tracking observations from multiple stations and multiple passes to do trajectory analysis on satellites and deep space probes. I do not remember massaging data together to get a result that better matched what we thought the answer should be. I also find the technique of combining multiple observation sites into one longer series to be troublesome. We treated each observation pass as a separate data entity even if it came from the same station. If the data showed a change, we did not change the data; we produced a new analysis with the new data. All that the data manipulation is doing is introducing new errors and obscuring the underlying trends. Bad data and outliers need to be eliminated in a systematic and rational way before attempting to do analysis.

  4. Richard Sharpe
    Posted Jun 4, 2008 at 11:03 PM | Permalink

    Gasp, Pat:

    But that can’t be correct. They’re treating the data as though they were taking 60 immediately sequential readings of the same value, when instead they’re taking 60 time-separated readings of different values, once each.

    You mean, those 60 measurements are not randomly distributed around the mean monthly temperature?

    Who knew?

  5. Posted Jun 4, 2008 at 11:31 PM | Permalink

    Pat,

    But that can’t be correct. They’re treating the data as though they were taking 60 immediately sequential readings of the same value, when instead they’re taking 60 time-separated readings of different values, once each.

    If Brohan’s simplified model, y=T+n, where n is uncorrelated over time, homoscedastic, and independent of T, is correct, then the computation is correct as well. That is, for the average of Ts. For a more general model,

    y=(1+\delta)T+b+n

    where \delta is the scale factor error and b a slowly varying bias term, the computation would be incorrect. Brohan’s assumption is that \delta=0, b=0, and I don’t think the latter assumption is correct (see also http://www.climateaudit.org/?p=3132#comment-256188 ). First Google result

    http://www.meteolabor.ch/vtp37_airport_e.pdf

    states an accuracy similar to Brohan’s estimate (actually it is Folland’s estimate, given without a reference), but it is not claimed that sequential errors are uncorrelated. I have some experience with instrumental errors, and I’ve learned that averaging is not a good way to increase accuracy, but money is.
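
    A minimal numerical sketch of the two error models above, with made-up numbers (not Brohan’s code or data): under y = T + n the noise largely averages out over 60 readings, but under y = (1+\delta)T + b + n the bias term b and the scale error survive the averaging.

    ```python
    # A minimal sketch of the two error models discussed above, with made-up
    # numbers; not Brohan's actual code or data.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 15.0 + 5.0 * np.sin(np.linspace(0, np.pi, 60))   # "true" values for 60 readings
    n = rng.normal(0.0, 0.5, 60)                         # independent reading noise
    delta, b = 0.01, 0.3                                 # scale-factor error and slow bias

    y_simple = T + n                                     # y = T + n
    y_biased = (1 + delta) * T + b + n                   # y = (1 + delta)*T + b + n

    print("true monthly mean:    ", round(T.mean(), 3))
    print("simple-model estimate:", round(y_simple.mean(), 3))  # error ~ 0.5/sqrt(60)
    print("biased-model estimate:", round(y_biased.mean(), 3))  # off by ~ b + delta*mean(T)
    ```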

  6. steven mosher
    Posted Jun 5, 2008 at 5:38 AM | Permalink

    Re 144: I’ve raised this before a few times. It’s also complicated by the fact that you really have only one observation per day. Once a day, in the US for example, the min and max are observed. Each value is rounded to the nearest degree F, then the two figures are added and divided by 2, rounding the result as well.
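
    A minimal sketch of the daily-mean procedure described above, using illustrative readings only (the min/max values are the ones discussed later in this thread):

    ```python
    # A minimal sketch of the US-style daily mean described above, with
    # illustrative readings: round min and max to whole degrees F, average
    # them, then round the result again.
    def reported_daily_mean(tmin_f, tmax_f):
        # Each observation is rounded to whole degrees, then the average is
        # rounded once more (Python 3 rounds 56.5 down to 56, half-to-even).
        rmin, rmax = round(tmin_f), round(tmax_f)
        return round((rmin + rmax) / 2.0)

    tmin_f, tmax_f = 45.61, 67.24
    print("unrounded mean:", (tmin_f + tmax_f) / 2)   # 56.425
    print("reported mean: ", reported_daily_mean(tmin_f, tmax_f))
    ```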

  7. kim
    Posted Jun 5, 2008 at 6:29 AM | Permalink

    Yessirre, it’s the Great Uncertainty in Optical Path Length T.
    ===================================

  8. Phil.
    Posted Jun 5, 2008 at 7:49 AM | Permalink

    Re #534

    At any rate, I have given a full quantative explanation, you have only offered a flawed ad hominem attack.

    Hardly ad hominem; you are, I believe, a string theorist, not a spectroscopist, and so are out of your field.
    Your explanation is flawed in that you believe that the log dependence is a property of the atmosphere whereas it can be demonstrated in a cuvette in a spectrometer, which indeed is how Arrhenius became aware of it.

    To derive the absorption dependence of a spectral line the absorption is integrated across the width of the line, in the general case a Voigt profile is used. In the case of a weakly absorbing line you get the usual Beer-Lambert law, for moderately absorbing lines where saturation occurs in the Doppler core of the line the dependence reduces to log(concentration), for a strongly absorbing line where saturation occurs into the wings the dependence becomes a square root law.

    Those who wish to see the detailed derivation of these expressions should consult a graduate level spectroscopy text, the integral calculus doesn’t lend itself to presentation in a blog.
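
    A minimal numerical sketch of the curve-of-growth behaviour described above, for a single idealized Voigt line in arbitrary units; it is not a radiative-transfer calculation, only an illustration of how integrated absorption grows quickly for a weak line, slows once the core saturates, and grows roughly as the square root of the amount once the wings dominate.

    ```python
    # A minimal sketch of the curve of growth for one idealized Voigt line
    # (Gaussian core convolved with a Lorentzian); units are arbitrary and
    # the numbers are illustrative, not CO2 spectroscopy.
    import numpy as np
    from scipy.special import wofz

    def voigt(x, sigma, gamma):
        # Normalized Voigt profile evaluated via the Faddeeva function.
        z = (x + 1j * gamma) / (sigma * np.sqrt(2))
        return wofz(z).real / (sigma * np.sqrt(2 * np.pi))

    x = np.linspace(-500, 500, 200001)           # frequency offset from line center
    phi = voigt(x, sigma=1.0, gamma=0.1)

    for N in [0.1, 1, 10, 100, 1000, 10000]:     # absorber amount
        W = np.trapz(1.0 - np.exp(-N * phi), x)  # equivalent width of the line
        print(f"N = {N:8g}   equivalent width = {W:8.3f}")
    # Small N: W grows ~linearly (Beer-Lambert regime). Intermediate N: growth
    # slows sharply as the Doppler core saturates. Large N: the Lorentzian
    # wings dominate and W grows roughly as sqrt(N).
    ```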

  9. Andrew
    Posted Jun 5, 2008 at 8:02 AM | Permalink

    Oh boy, bender is ranting about the PDO again. Look out, I probably will get in trouble just for pointing out that he’s on his hobby horse again. In fact, bender considers me one and the same as the whole
    flock of “oscillators” so I’m surprised he didn’t snipe at me already and just referred to “PDO fans”.

    Say bender, you know what the problem is? You are really bad at articulating your ideas about the PDO in terms that I can understand. Perhaps you’d like to be more clear about your views? Perhaps if I agree to only refer to the PDV, you won’t mind?

  10. cba
    Posted Jun 5, 2008 at 8:13 AM | Permalink

    It’s the exponential nature of the absorption/transmission. It’s not an uncertainty of optical path length tau; it’s the fact that tau is an extremely complex function of frequency (wavelength). It’s a problem of trying to ignore this and assuming that one can use a simple equation (valid for a single wavelength) and then somehow assume something about all the other wavelengths involved.

    Whether or not its cause ultimately is associated with broadening is not something I care about because it is going to be very close to that of an exponential decay regardless of exactly why it is behaving that way. What’s more, the vast number of lines that are overlapping into bands makes details of the line broadening somewhat irrelevant as before one gets too far away from line center, some other line is going to be the dominant one.

    This stuff can be done for moist air just as easily as for dry air. It is important to analyze the variation while holding such things fixed because if not, one could have a factor which has no effect being assumed to be manipulating something with an effect via a nonexistent mechanism or via 0 change in the causative agent.

    One can assume the atmosphere has a fixed amount of h2o vapor in it, or one can assume it has a fixed humidity. Locally and short term, fixed humidity over T doesn’t happen. Perhaps a fixed amount of h2o doesn’t happen either, and perhaps neither actually applies over the long term globally. Trying to figure it out without the constraint could be impossible, regardless of just how valid that constraint might be.

    What’s more, and this is probably where the gcm games get into trouble, the formation of convection and of clouds and the effects of those clouds is probably well beyond the ability to properly model and hence something with a net negative feedback is being ascribed a net positive feedback.

    For certain, it is necessary to understand very well the effect of co2 without other impacts getting involved. The tangled ball of twine must be unwound. Water vapor increases aren’t caused by co2 concentrations, they’re caused by temperature increases (ostensibly from the co2 increase but a T increase is a T increase). If increasing T by 1 degree brings up enough h2o vapor to increase the T by an additional 2 degrees then one has uncontrolled runaway because the first time it’s an extra warm day with warmer T by 1/2 degree, there will be enough h2o vapor added to raise the T to 1 degree and if the T is raised another 1 degree, there’ll be more h2o vapor raising the T by 2 degrees and so forth and so on – and voila – the oceans boil away.

    It cannot work that way because that way cannot work at all and the fact there are presently oceans on this planet is proof positive.

    To go beyond the basics of something like co2 only and invoking h2o vapor, one cannot go halfway. One cannot hope to come close to analyzing reality by applying only half the effect of adding in the h2o vapor as a variable. Treating it as a constant isn’t a problem though. That means far more complexity and analysis. If you vary h2o vapor, what is the change in the Earth’s albedo and the amount of energy reaching the surface? The upper stratosphere which is much warmer than the surface can basically radiate more energy out than is received by the Earth from the sun. That fulfills the basic Earth Sun balance. How the energy (and hence T) distributes itself lower down is a function of balances too.

    If one assumes that going halfway is acceptable – what happens if halfway means choosing to look at co2’s effects only in the visible and ultra far IR, ignoring the primary IR arena? One could conclude from this that co2 has no effect on power absorption in the atmosphere. It’s the same thing with h2o vapor without the rest of the impacts – like clouds. If the net effect of co2 is to block more surface radiation and decrease surface T, then why would there be more h2o vapor in the air? CO2 has less of an effect on incoming solar than on outgoing lwr but it will still reduce the incoming somewhat. It will also increase the average emissivity of the atmosphere, making it radiate better at all levels – requiring there to be a drop in the T of the atmosphere to reach balance. Outgoing emissions to balance the energy flow can be done at higher levels.

    It’s a very tangled ball of twine
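
    A minimal sketch of the feedback arithmetic in the runaway paragraph above, with purely illustrative gains: if each degree of warming induces g further degrees via extra water vapor, the cascade converges to the familiar 1/(1-g) amplification only when g < 1, and runs away otherwise.

    ```python
    # A minimal sketch of the water-vapor feedback arithmetic discussed above;
    # the gains are purely illustrative, not measured values.
    def total_warming(dT0, gain, rounds=1000):
        total, increment = 0.0, dT0
        for _ in range(rounds):
            total += increment
            increment *= gain          # each round of induced warming scales by the gain
        return total

    print(total_warming(1.0, 0.5))     # gain < 1: converges to dT0/(1 - gain) = 2.0
    print(total_warming(1.0, 2.0))     # gain >= 1: grows without bound (the "runaway" case)
    ```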

  11. kim
    Posted Jun 5, 2008 at 8:45 AM | Permalink

    The analogy I see
    Is the precise locality
    Of the mass particle e
    Yes, yessirree
    It’s the Great Uncertainty
    Of Optical Path Length T.
    ==============

  12. Posted Jun 5, 2008 at 9:40 AM | Permalink

    There is no consensus as to whether this eigenthing actually oscillates and what the mechanism would be. It has no predictability. Not at all like ENS”O”.

    Perhaps Bender is right about the PDO, who knows. He’s a very good thinker and I totally sympathize with his attitude of being skeptical about skeptics. However his reasoning is failing here: the ENS’O’ is also unpredictable and the mechanism of what drives it is still unknown (the former being an obvious consequence of the latter). But quite clearly it does exist and it does affect the earth climate.

    Furthermore, even if the PDO were a simple construct, one still has to consider the rest of the ocean cycles, for which scientists seem to have found good observational and proxy evidence: AMO, NAO, AO,… Basically, if ENSO-like oscillations exist, why should there not be lower frequency ocean oscillations, when the evidence suggests just the opposite?

    But, if temperatures remain on the low side after September, I’ll then begin to wonder about the much-speculated solar/climate connection.

    I would also expect monthly temps to rise once the after-effects of La Nina are over, but if they don’t, I’d go for PDO rather than solar, largely for the reasons stated above. The simpler explanation wins. But again, who knows? For now, I’ll wait and watch the data come in.

    PS- Perhaps one of the worst mistakes a genuine skeptic can make is to follow the believers’ logic that one should come up with one alternative “explanation” for the small (and decreasing) warming we’re witnessing. I’m not even sure how necessary an “explanation” is for the phenomenon we’ve watched in the last decades (the one of the last century is perhaps more interesting), and I don’t believe they know what the right explanation is either. Period.

  13. Sam Urbinto
    Posted Jun 5, 2008 at 9:41 AM | Permalink

    Re 147

    And if max is at max for 6 readings but min is only for 1, or vice versa, it further skews things.

    Plus, some programs round by truncating to the closest integer toward zero and others by rounding to the closest integer in either direction, and that’s probably mixed.

    And some that actually round might miss 1.500000001 (since we know the thermometers are accurate to a billionth of a degree and everything). :D
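
    A minimal sketch of the mix of rounding conventions mentioned above, applied to one arbitrary value:

    ```python
    # A minimal sketch of the different rounding conventions mentioned above,
    # applied to the same arbitrary reading.
    import math

    t = 56.5
    print(math.trunc(t))   # 56  truncate toward zero
    print(math.floor(t))   # 56  round toward negative infinity
    print(round(t))        # 56  Python 3: round half to even ("banker's rounding")
    print(int(t + 0.5))    # 57  classic round-half-up for positive values
    ```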

  14. Sam Urbinto
    Posted Jun 5, 2008 at 10:05 AM | Permalink

    Well, O or V, that is the question. In climate, variability is also called climate change. But variable in and of itself means “Likely to change or vary; subject to variation; changeable”, which is rather non-descriptive, wouldn’t you say?

    What about this: There are different types: harmonic, wave, coupled, damped, self-induced, driven. Can you say the PD is like the Chandler wobble, ENS, quasi-biennial or ocean tides in any way? Remember, damped oscillations decay unless there’s a source of energy. This doesn’t have to be like a pendulum of some sort. Do aircraft wings move in a regular predictable cycle or pattern? How about an oscillation with a variable oscillatory period due to damping. Oh, heck, let’s just go to the dictionary.

    Oscillation:
    1: the action or state of oscillating
    2: variation, fluctuation
    3: a flow of electricity changing periodically from a maximum to a minimum; especially : a flow periodically changing direction
    4: a single swing (as of an oscillating body) from one extreme limit to the other
    Oscillating:
    1 a: to swing backward and forward like a pendulum b: to move or travel back and forth between two points
    2: to vary between opposing beliefs, feelings, or theories
    3: to vary above and below a mean value
    Fluctuation:
    intransitive verb
    1 : to shift back and forth uncertainly
    2 : to ebb and flow in waves

    Let’s just call it the PSF, Pacific Semi-decadal Fluctuation. Or is there some other synonym we can use for “changing” that makes everyone happy?

  15. JohnL
    Posted Jun 5, 2008 at 10:14 AM | Permalink

    Can anyone help me?

    On some climate blog in the comments I found a link to a ~1960s study of inlet temperatures for radar picket ships.

    I lost the reference, and cannot find it.

    Thanks

  16. yorick
    Posted Jun 5, 2008 at 10:28 AM | Permalink

    Phil,
    There are often multiple mechanisms to determine the same quantity. Even if everything you say is true, it has no impact on whether Motl is right or not. One could calculate the speed of a car by examining fuel consumption, oxygen intake, gear ratio, weight, drag, etc., or one could measure it across a known distance with a stop watch. One of them is more likely than the other to be wrong. Never mind advocating for your particular means of determining the relationship; a more convincing approach would be to explain why Motl is wrong. I personally don’t know, but I wish you would take a step back and think about the philosophy behind your argument. An analogy to your argument would be the assertion “there are no black swans. I have looked everywhere and not seen any.” Before Black Swans were finally found in Australia, of course. In other words, you are asserting a negative, that there is no other way to calculate the relationship than the one you offer. This brings us back to the original point: why is Motl wrong?

  17. Sam Urbinto
    Posted Jun 5, 2008 at 10:31 AM | Permalink

    Min 45.61
    Max 67.24

    Day’s mean is (46+67)/2 or 56.5

    56 gets reported. We’ve cut off .5 from the rounded average and .425 from the unrounded. Every day. Every station.

    I think the monthly anomaly is the unrounded average of 28-31 days for each station, then gridded. Or is it?

    For a trend of .7 over 130 years, we’re worried about unknown quality (calibration, site bias, sample area and size) and known changes of instrument (buckets to inlets, glass to digital), with readings we know have been rounded and re-rounded (or not rounded; a mix) and averaged repeatedly, when actually we’re only accurate to 1 degree anyway, and any other digits are artifacts of repeated averaging.

    Does anyone really think we know October 1992 had a worldwide average temperature for the month .04 lower than normal?

    No, we had some number .04 lower than the ensemble model mean for a 30 year period.

  18. Phil.
    Posted Jun 5, 2008 at 10:58 AM | Permalink

    Re #9

    Yorick, Motl’s mechanism is related to the structure of the atmosphere; however, the dependence of CO2 absorption is logarithmic even when measured in a spectroscopic cell, so it isn’t a result of the atmospheric structure. Also, his model doesn’t account for the different dependences of different ghgs. The effect of broadening on the absorption of spectral lines is well known and follows the trend I outlined above; as I said, get hold of a graduate spectroscopy textbook. It’s nothing to do with climate science, it’s just routine spectroscopy.

  19. Pat Keating
    Posted Jun 5, 2008 at 11:08 AM | Permalink

    4 cba

    Water vapor increases aren’t caused by co2 concentrations, they’re caused by temperature increases (ostensibly from the co2 increase but a T increase is a T increase). If increasing T by 1 degree brings up enough h2o vapor to increase the T by an additional 2 degrees then one has uncontrolled runaway because the first time it’s an extra warm day with warmer T by 1/2 degree, there will be enough h2o vapor added to raise the T to 1 degree and if the T is raised another 1 degree, there’ll be more h2o vapor raising the T by 2 degrees and so forth and so on – and voila – the oceans boil away.

    A good point. Of course, the counter-arguments would be (a) that the time constant is longer than 24 hours, and perhaps longer than a year, (b) there may be a limit to the amount of T increase from this positive-feedback.

    The data seems to indicate that (b) is probably the case, and that we are already limited at current humidity levels (except for the polar regions).

    It is indeed a very tangled ball of twine.

  20. Pat Keating
    Posted Jun 5, 2008 at 11:16 AM | Permalink

    7 Sam

    Fluctuation is a very good choice, and may settle an argument (but no jokes about chinese or japanese, please).

  21. fFreddy
    Posted Jun 5, 2008 at 11:29 AM | Permalink

    Re #10, Phil.

    the dependence of CO2 absorption is logarithmic even when measured in a spectroscopic cell so isn’t a result of the atmospheric structure.

    Hmm. Does this spectroscopic cell reproduce all relevant atmospheric conditions ? At all altitudes, and so on ?

  22. yorick
    Posted Jun 5, 2008 at 11:31 AM | Permalink

    Phil, you just don’t get it. You are lost in the weeds. You proclaim that there is only one way to solve the problem. You have obviously invested a great deal of time on it. Your objections are not relevant. Everything you say could be true and Motl could still be right. No matter how many times you repeat it.

  23. yorick
    Posted Jun 5, 2008 at 11:34 AM | Permalink

    Phil, I will try to think of a way to express my objection more clearly to you.

  24. Craig Loehle
    Posted Jun 5, 2008 at 11:55 AM | Permalink

    Bender: how about the Pacific Decadal Flip-Flop. It flips, it flops, but we don’t imply that it is regular. PDFF. There. Fixed it.

  25. Sam Urbinto
    Posted Jun 5, 2008 at 12:17 PM | Permalink

    #543 on from last unthreaded. I agree with Yorick. “There’s more than one way to skin a cat” as they say.

    Pat #11 I’d say b) is certainly correct. Of course there’s a limit to the % of humidity. Then at some point the energy overall makes it rain etc.

    Pat #12 My point is that fluctuation, variation, oscillation, whatever: it’s some kind of change with some kind of pattern, even if that pattern is not the same all the time. It’s turned into a discussion of people with a different definition they’ve chosen for it, don’t you think? Rather what yorick said in #9 above: There are various ways to calculate a relationship, and some are more likely than others to be wrong, but that doesn’t mean there’s no element of truth, nor that there’s any way to perfectly express any of it.

    Is there really a need to advocate for a particular way to describe a multi-functional system? Opinion based upon point of view.

    Craig #16 PDFF, wonderful. Or PPP, Pacific Pseudo Pattern. How about PWS, Pacific Weather System? Whatever.

  26. jae
    Posted Jun 5, 2008 at 12:29 PM | Permalink

    Is the Flip or the Flop the positive one?

  27. Posted Jun 5, 2008 at 12:33 PM | Permalink

    Concerning the logarithmic dependence, let me say the following elementary comments that are among the ingredients of the argument I presented and that I consider rather indisputable:

    1) The lapse rate – decrease of temperature with altitude – is important to get any nonzero greenhouse effect.

    2) Because it matters where individual CO2 molecules are located (what the temperature over there is), when you want to know the sensitivity, the distribution of the molecules with height is important, too. It is essentially the Boltzmann (barometric) distribution.

    3) Because of the falling exponential character of this distribution, a multiplicative increase of the total number of certain molecules is equivalent to an additive shift of the height of the tropopause, the effective surface where “things happen”.

    4) And the height more or less linearly influences various energy budgets. Consequently, an exponential change of the concentration is needed for a linear change of the temperature imbalance. Inverting this comment, we obtain the logarithmic relationship.

    5) My argument is of course free of feedbacks etc. and it only considers the effect of absorption and emission by CO2 molecules that are located in a specific environment, assuming that the rest of the atmosphere is “constant” as you increase the CO2 concentration. This is of course another approximation.

    But one needs to make such approximations. Moreover, the rough qualitative log dependence is independent of the complications and of the feedbacks.
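
    A minimal numerical check of points 3 and 4 above, under the stated idealization of an exponentially decaying concentration profile; all numbers are illustrative.

    ```python
    # A minimal check of points 3-4 above for an idealized exponential profile
    # n(h) = C * exp(-h/H); numbers are illustrative. The column above height h
    # is C*H*exp(-h/H), so the height at which a fixed column remains overhead
    # is h = H*ln(C*H/threshold): multiplying C shifts that height additively.
    import math

    H = 8.0            # scale height in km (illustrative)
    threshold = 1.0    # column amount defining the effective emission level (arbitrary units)

    def effective_height(C):
        return H * math.log(C * H / threshold)

    for C in [1, 2, 4, 8, 16]:
        print(f"concentration x{C:2d}: effective height = {effective_height(C):6.2f} km")
    # Each doubling of C raises the height by the same H*ln(2) ~ 5.5 km step;
    # if the energy budget responds roughly linearly to that height, the net
    # response is logarithmic in concentration.
    ```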

    Concerning the Voigt profile, let me say the following. I am not disputing that the microscopic description of the absorption is in terms of spectral lines and there are many of them and they have a certain shape that matters. In this sense, the description in terms of specific lines could be equivalent to mine – where my explanation would be a statistical treatment of a large number of spectral lines where I directly compute the energy summed over all of such lines.

    The main reason why I feel that the people who use the cliche “Voigt profile” actually don’t know any calculation that leads to the log dependence is the following simple reason: the Voigt profile itself is a convolution of the Gaussian profile and the Lorentzian curve. The log dependence of the resulting GH effect clearly follows only from one of the parts of the distribution – either the Gaussian or the Lorentzian curve: guess which one. The composite curve would generate much more complicated functional dependence.

    So if the people knew which one it were, they would say that the log dependence follows from the Lorentz broadening or they would say it follows from the Gaussian broadening. Because they use the term “Voigt profile” instead, it suggests that they want to “look” smart by using a more accurate description of the shape of the spectral lines that is nevertheless less relevant for this whole log argument than the right component (bell/Lorentz) of the distribution.

    BTW, buy Vaclav Klaus’ “Blue Planet in Green Shackles”.

  28. Andrew
    Posted Jun 5, 2008 at 12:38 PM | Permalink

    Here’s my best shot at a less offensive name: The Pacific Decadal Mode that Isn’t proven to but we think it does oscillate.

    Too long?

  29. Mike B
    Posted Jun 5, 2008 at 1:03 PM | Permalink

    #16

    Bender: how about the Pacific Decadal Flip-Flop. It flips, it flops, but we don’t imply that it is regular. PDFF. There. Fixed it.

    Except that “decadal” implies a period, which implies regularity. If the “PDO” is just non-stationary or chaotic noise that happens to look sorta-kinda like decadal oscillation over a relatively short period of time, I fully understand where Bender is coming from.

  30. Craig Loehle
    Posted Jun 5, 2008 at 1:28 PM | Permalink

    The “PDO” is not like random noise. There is a suite of features that define it, including spatial pattern of warm vs cool water, high pressure/low pressure systems, prevailing winds etc. When in a mode it tends to stay there for a period of N decades, where N is usually 2 or 3. To me it looks like a system with instability but 2 dynamic attractors.

  31. Steve McIntyre
    Posted Jun 5, 2008 at 1:30 PM | Permalink

    Phil, while there is a spectroscopic aspect, that’s not the only aspect to the problem. The “enhanced” greenhouse effect relies on a “higher the colder” heuristic. Houghton’s book contains this heuristic in 4 lines. As you know, I’ve been looking a long time for a more substantial derivation than this sort of piffle and would welcome any specific reference from you beyond arm-waving.

    Your invocation of spectroscopic textbooks is idle, unless you can cite a specific textbook that contains the enhanced greenhouse effect calculation, rather than line calculations which are not at issue.

  32. Phil.
    Posted Jun 5, 2008 at 1:51 PM | Permalink

    Re #19

    The main reason why I feel that the people who use the cliche “Voigt profile” actually don’t know any calculation that leads to the log dependence is the following simple reason: the Voigt profile itself is a convolution of the Gaussian profile and the Lorentzian curve. The log dependence of the resulting GH effect clearly follows only from one of the parts of the distribution – either the Gaussian or the Lorentzian curve: guess which one. The composite curve would generate much more complicated functional dependence.

    Nice try at diversion; however, this person explicitly stated which part of the profile would need to be saturated to give a log dependence and also which would give a sqrt dependence. How does your model give a sqrt dependence for methane?

  33. Craig Loehle
    Posted Jun 5, 2008 at 2:21 PM | Permalink

    Phil: whatever the exact saturation curve for greenhouse gases, it is not the GHG themselves that are forecast to produce alarming warming–they give only 1 to 1.5 deg C. It is the water vapor feedback that the models produce that gives the alarming numbers.

  34. Posted Jun 5, 2008 at 2:23 PM | Permalink

    Re various proposals to change the nomenclature for ocean cycles(?)

    Hmm, I’m struggling to see why we shouldn’t take the “O versus V” argument to its conclusion and remove the ‘O’ from ENSO as well.

    During this last La Nina the SST anomaly in Tropical South America has been positive most of the time, rather than negative, as the ENSO theory (so to speak) would dictate. But the rest of the conditions were quite consistent with the cold mode of ENSO, so officially we’ve been living a La Nina episode.

    So let’s see, don’t we have the same situation here as in the official decadal “oscillations”?

    We observe highly irregular but seemingly periodic fluctuations of the tropical Pacific SSTs and, for our convenience, we categorize them as warm, cold and “neutral”. And there we are: we have created an Oscillation. But is that exactly what it is? We still don’t know what drives what we call ENSO. So, other than statistics, do we have any basis for assuming that this thing will continue showing an oscillatory pattern?

    Perhaps I’m just getting confused but, as long as something seems to be an oscillation, there may not be anything wrong with actually calling it an oscillation, no matter how angry Bender gets.

  35. Steve McIntyre
    Posted Jun 5, 2008 at 2:35 PM | Permalink

    As long as people understand what the names mean, fine; but there’s a constant tendency to reify things, which then feeds talk of attractors and things like that. 1/f noise will sometimes be positive and sometimes negative. Is there anything to demonstrate that the PDO is anything more than spatial 1/f noise of some sort?
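
    A minimal synthetic illustration of the question above: 1/f noise, averaged into multi-decade blocks, readily produces long same-sign excursions that can look like regime shifts. This is purely simulated noise, not PDO data.

    ```python
    # A minimal sketch of how synthetic 1/f ("pink") noise can produce long
    # same-sign excursions that look like decadal regime shifts; not PDO data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1200                                   # e.g. 100 years of monthly values
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = 1.0 / np.sqrt(freqs[1:])         # amplitude ~ f^(-1/2), i.e. power ~ 1/f
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    series = np.fft.irfft(amp * np.exp(1j * phases), n)
    series /= series.std()

    # Average in 20-year blocks: multi-decade "regimes" appear by chance.
    print(np.round(series.reshape(-1, 240).mean(axis=1), 2))
    ```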

  36. Posted Jun 5, 2008 at 2:52 PM | Permalink

    MJW wrote (in Unthreaded #34)

    Hans Erren, I don’t see anything about molar absorptivity that would imply the result would be logarithmic. The Wikipedia seems to say that the total energy absorbed is gotten by integrating the Beer-Lambert results over all the pertinent frequencies. Well, that’s pretty obvious. But at each individual wavelength the function has the form Kw*[1-exp(-Aw*C)]. Certainly, the result can’t be a log over the whole range, since all the functions start at zero and are bounded above, while the log starts at negative infinity and is unbounded. With the correct choice of the Kw’s (which depend on the spectrum of the light) and the Aw’s (which depend on the absorbing chemical) the result could approximate a log function over some limited range, but it could approximate lots of other functions as well.

    Who is claiming that the log relationship is valid over the whole range?
    Now if you do your homework, like I did (take an EPA or HITRAN absorbance spectrum, apply Lambert-Beer, apply the convolution with a Planck curve and integrate over the spectrum), the result is a straight line when plotted using a logarithmic x-axis: to my knowledge that is a logarithmic relationship.

    See also Myhre, G., E.J. Highwood, K.P. Shine and F. Stordal, 1998: New estimates of radiative forcing due to well-mixed greenhouse gases, Geophys. Res. Lett., 25, 2715-2718.

    Here is data to play with:
    http://www.epa.gov/ttn/emc/ftir/aedcdat1.html#co2
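
    A minimal sketch of the band-integration exercise described above, but with synthetic absorption coefficients rather than the linked EPA/HITRAN data, and without the Planck weighting; it only illustrates how a band whose coefficients span several decades yields roughly logarithmic growth of total absorption with concentration.

    ```python
    # A minimal sketch of the band-integration idea described above, using
    # synthetic absorption coefficients (not the EPA/HITRAN data linked) and
    # skipping the Planck weighting.
    import numpy as np

    rng = np.random.default_rng(0)
    k = 10.0 ** rng.uniform(-3, 3, 5000)   # coefficients spanning six decades across the band

    for conc in [0.01, 0.1, 1, 10, 100]:
        absorbed = np.mean(1.0 - np.exp(-conc * k))   # Beer-Lambert per frequency, then band average
        print(f"concentration = {conc:6g}   band-average absorption = {absorbed:.3f}")
    # The band-average absorption increases by a roughly constant amount per
    # factor-of-ten increase in concentration, i.e. approximately logarithmically,
    # as long as 1/conc stays within the range of the coefficients.
    ```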

  37. Patrick M.
    Posted Jun 5, 2008 at 2:56 PM | Permalink

    Re 27 (Steve McIntyre):

    I think Craig Loehle pointed out in 22 that there are physical symptoms that define the two phases. When it switches phases may not be easily predictable, but what happens when a phase switch occurs is fairly predictable, which makes me think it is tied to physical aspects of the climate and is not just noise.

  38. Scott-in-WA
    Posted Jun 5, 2008 at 3:10 PM | Permalink

    Steve to Phil: …. Your invocation of spectroscopic textbooks is idle, unless you can cite a specific textbook that contains the enhanced greenhouse effect calculation, rather than line calculations which are not at issue.

    Craig to Phil: Whatever the exact saturation curve for greenhouse gases, it is not the GHG themselves that are forecast to produce alarming warming–they give only 1 to 1.5 deg C. It is the water vapor feedback that the models produce that gives the alarming numbers.

    Let’s suppose Steve’s request for a significantly better explanation of the enhanced greenhouse effect (i.e., 2xCO2 yields 2.5C warming) were to be fulfilled.

    Wouldn’t the next step be to determine what kinds of field observations are needed to verify the accuracy of that theoretical explanation, and then to compare that list with the kinds of observational techniques and methods now in use today?

  39. Phil.
    Posted Jun 5, 2008 at 3:16 PM | Permalink

    Re #25

    Phil: whatever the exact saturation curve for greenhouse gases, it is not the GHG themselves that are forecast to produce alarming warming–they give only 1 to 1.5 deg C. It is the water vapor feedback that the models produce that gives the alarming numbers.

    The question I was answering was why CO2 has a logarithmic response to concentration, which, as I stated, is due to the broadening of the spectral lines, which is also the reason CH4 & N2O have a sqrt dependence and why CFC11A & CFC12 are linear.

    Re #23
    Steve, if you want the textbook derivation I’ll dig one out for you when I’m in the office tomorrow; it’s rather routine, which is why you don’t find the derivation of the functions in the IPCC report; it would be like giving a derivation of the Beer-Lambert law.

    Steve: It’s not the same thing at all. Beer-Lambert is one thing; 2.5 deg C from first principles is another. Please understand that, unlike many readers, I don’t suggest that such a derivation cannot be plausibly done, only that no one to date has provided me with a reference to a proper derivation, working through all the parameterizations in detail. Maybe it’s too “routine” to derive the #1 policy issue of our day. Odd.

  40. Willem de Lange
    Posted Jun 5, 2008 at 3:43 PM | Permalink

    Re #22

    The PDO involves changes in physical (ocean + atmosphere) and ecological conditions. The reason I became interested in it around the SW Pacific is due to the ecological work done at the University of Washington. The phenomenon has been called by a variety of names, and in the Southern Hemisphere the agreed term is Interdecadal Pacific Oscillation – leading to the acronym IPO that causes fun if you Google it.

    The reason for the inclusion of oscillation is not to imply a regular or periodic pattern, but to be consistent with the work of Sir Gilbert Walker in the 1920s and 30s. He recognised the Southern Oscillation (now called ENSO) as a global-scale irregular pressure fluctuation, which appeared to be related to weather anomalies experienced particularly in the tropics. See Enfield, DB, 1987, Progress in understanding El Niño, Endeavor (new series), 11:197-204.

    The key aspect is that there are paired anomalies that switch sign irregularly. Initially the anomalies were pressure, but after the 1982 El Niño there was increased use of sea surface temperatures as an indicator of oscillations.

  41. SteveSadlov
    Posted Jun 5, 2008 at 3:58 PM | Permalink

    The Pacific 74HC74 …

  42. MJW
    Posted Jun 5, 2008 at 4:00 PM | Permalink

    Phil.: Your explanation is flawed in that you believe that the log dependence is a property of the atmosphere whereas it can be demonstrated in a cuvette in a spectrometer, which indeed is how Arrhenius became aware of it.

    You’ve made the claim about the cuvette before, but never provided a reference. I’d like to see one.

    To derive the absorption dependence of a spectral line the absorption is integrated across the width of the line, in the general case a Voigt profile is used. In the case of a weakly absorbing line you get the usual Beer-Lambert law, for moderately absorbing lines where saturation occurs in the Doppler core of the line the dependence reduces to log(concentration), for a strongly absorbing line where saturation occurs into the wings the dependence becomes a square root law.

    The references I’ve read, such as this one, say the curve of growth is approximately proportional to concentration for weak lines, sqrt(log(concentration)) for moderately strong lines, and sqrt(concentration) for strong lines. Notably, log(concentration) is missing.

  43. Barney Frank
    Posted Jun 5, 2008 at 4:19 PM | Permalink

    it’s rather routine which is why you don’t find the derivation of the functions in the IPCC report

    It’s so routine that after repeated requests by Steve over several months (years?) no one has produced a robust citation yet. Will Phil? Guess we’ll see tomorrow.

  44. Posted Jun 5, 2008 at 4:30 PM | Permalink

    phil 31:

    The question I was answering was why CO2 has a logarithmic response to concentration, which, as I stated, is due to the broadening of the spectral lines

    It’s actually the broadening of the 15 micron band:
    http://home.casema.nl/errenwijlens/co2/co205124.gif

  45. Posted Jun 5, 2008 at 4:40 PM | Permalink

    MJW 34:

    The references I’ve read, such as this one, say the curve of growth is approximately proportional to concentration for weak lines, sqrt(log(concentration)) for moderately strong lines, and sqrt(concentration) for strong lines. Notably, log(concentration) is missing.

    As mentioned several times before, it’s all related to the widening of the absorption band, not individual lines.
    Do the calculations yourself, if you don’t trust me.

  46. bender
    Posted Jun 5, 2008 at 5:31 PM | Permalink

    Re #16 PDFF

    It flips, it flops

    Indeed it does not. It flips, it fluctuates, it flops (we think), it fluctuates. It is ENSO-like in some ways. But not ENSO-like in that the characteristic time-scale of “flip-flopping” is far less robust with PDO than with ENSO. So, Sorry, Mike. No anger here. Just facts on eigenthingies observed for less than n=2 cycles.

  47. bender
    Posted Jun 5, 2008 at 5:41 PM | Permalink

    The reason for the inclusion of oscillation is not to imply a regular or periodic pattern, but to be consistent with the work of Sir Gilbert Walker in the 1920s and 30s.

    I was about to say it, then self-edited, choosing to stick to the bare facts. But yes, it seems to me in Mantua & Hare’s writing there is a hint of love of consistency in taxonomy … and a desire to coin their own new phrase. Dispense PDV. Advocate PDO.

    But the fact is: no one has derived a physical mechanism accounting for PDV, or any periodicity in it. Unlike ENSO. This answers Raven’s conjecture about it being caused by ocean basin resonance.

    I have no argument. Just thought some people might want to know something beyond what Tammy says. If PDV is a 1/f process (as Steve M conjectures), then you can expect the sort of non-stationary behavior that all the PDO proxy reconstructions come up with.

    1/f processes take us directly to Koutsoyiannis, Wunsch, and everything not Tamino and not RC and not IPCC.

  48. bender
    Posted Jun 5, 2008 at 5:43 PM | Permalink

    #27

    Is there anything to demonstrate that the PDO is anything more than spatial 1/f noise of some sort?

    IMO: No. The long-term proxy reconstructions are interesting. You would get a kick out of them.

  49. bender
    Posted Jun 5, 2008 at 5:46 PM | Permalink

    The point being (for those not following closely): if internal climate variability is synonymous with a 1/f noise process, then Gavin Schmidt has some (more) explaining to do for how GCM ensemble statistics are computed. i.e. Yesterday’s “trend” is today’s noise. Lucia’s divergence explained.

  50. MJW
    Posted Jun 5, 2008 at 5:56 PM | Permalink

    Hans Erren: As mentioned several times before, it’s all related to the widening of the absorption band, not individual lines.
    Do the calculations yourself, if you don’t trust me.

    Even the calculation for a single line is pretty complicated, and involves a lot of approximation and simplification, so I’d have no idea how to do it for multiple lines, each with a different absorption constant. If you’ve shown the calculation somewhere, provide a link and I’ll be happy to see if I agree. If you haven’t shown it previously, please do so now.

  51. Posted Jun 5, 2008 at 9:02 PM | Permalink

    When in a mode it tends to stay there for a period of N decades, where N is usually 2 or 3. To me it looks like a system with instability but 2 dynamic attractors.

    No wonder the state of climate science is so chaotic.

  52. Jaye
    Posted Jun 5, 2008 at 9:53 PM | Permalink

    I’m betting on a spectrum of fBm’s for climate noise (depending on the underlying processes).

  53. anna v
    Posted Jun 6, 2008 at 3:27 AM | Permalink

    Steve commenting in 31

    It’s not the same thing at all. Beer-Lambert is one thing; 2.5 deg C from first principles is another. Please understand that, unlike many readers, I don’t suggest that such a derivation cannot be plausibly done, only that no one to date has shown a proper derivation, working through all the parameterizations in detail. Maybe it’s too “routine” to derive the #1 policy issue of our day. Odd.

    What you are asking is: is there a differential equation written for the whole CO2 feedback mechanism which has been solved, with boundary conditions imposed on the solutions, that gives a 2.5 degree increase in 100 years instead of the simple calculation of 1 degree?

    Differential equations start by gathering all the variables and writing the algebraic relationship of the differentials based on physics and conservation laws.

    Have a look at fig4 of http://www.globalchange.umich.edu/globalchange1/current/lectures/samson/feedback_mechanisms/

    Such a diagram is where one should start to write up the algebraic equations with the differentials which will be turned into differential equations to be solved. It should be evident to anybody who has set up a differential equation that this is a hopeless project as far as solving differential equations goes, even if one keeps to a few first-order effects. There are too many parameters and too many variables to be able to compute from first principles.

    What climate models do is: model in the computer how these feedbacks are expected to act, from general solutions of fluid dynamics equations, making the dx_i (i from 1 to n variables) changes across 200 km x 200 km x 1 km-height blocks in a flat-earth model, and parametrise the expected boundary conditions so that the final outcome fits the temperature curves.

    Yes, it is too hard to derive the #1 policy issue from first principles, and even if they get the huge computers they are asking for, it will still be too hard.

    I think that a neural networks approach, as used by Tsonis et al might solve the equations (http://www.agu.org/pubs/crossref/2007/2007GL030288.shtml) in a more reliable way than the IPCC model guesstimates. I do not know though how difficult it is to introduce so many variables into a neural network.

  54. Michael Smith
    Posted Jun 6, 2008 at 6:16 AM | Permalink

    Steve said in 31:

    It’s not the same thing at all. Beer-Lambert is one thing; 2.5 deg C from first principles is another. Please understand that, unlike many readers, I don’t suggest that such a derivation cannot be plausibly done, only that no one to date has shown a proper derivation, working through all the parameterizations in detail.

    If such a derivation exists, why do we need the climate models?

  55. John M.
    Posted Jun 6, 2008 at 6:52 AM | Permalink

    Steve: It’s not the same thing at all. Beer-Lambert is one thing; 2.5 deg C from first principles is another. Please understand that, unlike many readers, I don’t suggest that such a derivation cannot be plausibly done, only that no one to date has shown a proper derivation, working through all the parameterizations in detail. Maybe it’s too “routine” to derive the #1 policy issue of our day. Odd.

    I seriously doubt that it still is the #1 policy issue of the day, because with oil at $130 a barrel global warming is beginning to look like a complete red herring to a lot of people, given that there may not actually be enough exploitable fossil fuel reserves available to ever achieve an atmospheric CO2 doubling in the first place. Simply being able to keep modern civilization functioning through to the end of the century deeply concerns a lot of people at the moment, although the good news is that there is probably no need to build a survivalist compound, because if everything goes according to plan with the ITER project at Cadarache in France, nuclear fusion reactors should start being commercially available by the 2030s, and we can always tide ourselves over with nuclear fission and renewables in the meantime. Beyond all that, however, how can you seriously expect to be handed a derivation of 2.5 deg C from first principles when, as I am sure you are well aware, many of the complex feedback mechanisms that are involved are not fully understood yet and values for the sensitivity are typically reported as falling somewhere within a range of several degrees when bounded by 95% confidence limits?

    http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Print_Ch09.pdf

    Steve: Let me re-phrase a little. I was being a bit sarcastic with Phil’s idea that it was “routine” and beneath the dignity of climate scientists to actually provide a coherent A-to-B derivation of how doubled CO2 leads to 2.5 deg C. If mechanisms are not clearly understood, then, in an engineering quality derivation, I expect the present knowledge to be described, and if relevant mechanisms are not fully understood, I would expect this to be disclosed together with some assessment of the impact and some suggestions as to how to resolve these uncertainties. But in answer to your question, with all the money and interest in climate science, yes, I do seriously expect that climate scientists can and should provide a first principles derivation whether it takes 1000 pages or not.

    In making this comment, I wasn’t really meaning to express a view on a beauty contest between climate and the worrying long-term energy situation. On a personal basis, I very much share your concern about long-term energy issues. The structure of energy supply right now in lots of ways is pretty similar to the structure 50 years ago. It’s hard to imagine that, when my granddaughter is my age, the structure of energy supply will have changed as little in her lifetime as it has in mine. Perhaps we’re living in a golden age that future generations will envy. Lots of people in the past have forecast energy doom scenarios and none of them have materialized so far. But the more that I think about long-term energy, the more daunting the prospects. It seems like an enormous project and that we’re just nibbling around the edges of it. Interesting as these issues are, I don’t want to pursue such topics on this blog or it will overwhelm everything else.

  56. anna v
    Posted Jun 6, 2008 at 6:57 AM | Permalink

    p.s. to my 45

    Contemplating the feedbacks flow chart I linked to above, I was reminded of analogue computing, with which I had a brush back in the sixties.

    Maybe an analogue climate model (with resistances, capacitors and inductors) would be able to solve all these coupled differential equations, or at least give an output that would be consistent with all inputs.

    see “a great disappearing act; the electronic analogue computer”

    http://technology.open.ac.uk/tel/people/bissell/bletchley_paper.pdf

  57. Andrew
    Posted Jun 6, 2008 at 7:54 AM | Permalink

    47 (Julian Flood): Actually, under the Hatch Act, what Hansen does is what’s illegal. If he wants to be an advocate, he should resign, like some others have (Spencer, Miskolczi-any other ex-NASA?) and get advocating. But he likes complaining, it gets him more attention.

    I’m amused at how bender has declared Mantua “the enemy” more or less. Well, if it gets him off my back.

  58. kim
    Posted Jun 6, 2008 at 8:13 AM | Permalink

    47 (Julian) Andy says to send a back-up to dotearth@nytimes.com
    =========================================

  59. rafa
    Posted Jun 6, 2008 at 8:15 AM | Permalink

    Re #31: Phil says “I’ll dig one out for you when I’m in the office tomorrow…”. I’m honestly curious. Steve, and many others, have looked everywhere to see if such a paper or textbook exists, without success. I’m intrigued now to see Phil’s response. Best.

  60. kim
    Posted Jun 6, 2008 at 8:23 AM | Permalink

    49 (anna v) Heh. The climate system is a large analogue computer constantly solving the problem.
    ===========================================

  61. Posted Jun 6, 2008 at 9:06 AM | Permalink

    Re 42, MJW:

    Even the calculation for a single line is pretty complicated, and involves a lot of approximation and simplification, so I’d have no idea how to do it for multiple lines, each with a different absorption constant. If you’ve shown the calculation somewhere, provide a link and I’ll be happy to see if I agree. If you haven’t shown it previously, please do so now.

    Huh? You can’t do this?

    Take an EPA or HITRAN absorbance spectrum, apply Lambert-Beer, apply the convolution with a Planck curve and integrate over the spectrum; the result is a straight line when plotted using a logarithmic x-axis

    Here is data to play with:
    http://www.epa.gov/ttn/emc/ftir/aedcdat1.html#co2

    Sorry then I can’t help you. You may stop waving now.

  62. Mike B
    Posted Jun 6, 2008 at 10:01 AM | Permalink

    #39 Bender

    Just thought some people might want to know something beyond what Tammy says. If PDV is a 1/f process (as Steve M conjectures), then you can expect the sort of non-stationary behavior that all the PDO proxy reconstructions come up with.

    I essentially agree on these points. Humans quite frequently tend to see signal where there is only noise. And when the noise structure is more complex than iid normal, it gets even more difficult, especially when so many people have Excel fired up to recalculate trends each time a new data point comes along.

  63. Bill E
    Posted Jun 6, 2008 at 11:17 AM | Permalink

    As others have pointed out, it isn’t the amount of CO2 itself that may make the earth warm dangerously; it is water vapor. Raising the temperature a bit via CO2 means more evaporation and more water in the atmosphere (water being a major absorber of infrared radiation, particularly in the 8-11 micron band). More water in the atmosphere means more warming, leading to more evaporation, leading to… which should be runaway, with the earth becoming like Venus. But it doesn’t.
    There was an experiment called CEPEX about ten years ago that looked into this. In the tropical Pacific warm pool, the most likely sea surface temperatures were 28-29 degrees C. But temperatures of 31C are almost never found. The ocean would heat to a certain level and no higher. At the time it was postulated that at some sea surface temperature, the amount of evaporation and water in the air would trigger large scale convection (rising air condenses some of the water in it, making it warmer, driving it higher, condensing more water and so on), causing large scale storm systems with clouds that reflected sunlight back into space, cooling the surface and starting the cycle all over again. This was called the thermostat. But this was never definitively established.
    Modern science is reductionist. We take big problems like this apart and study all of the pieces. But modern science is unable to put Humpty back together. The best we can do are things like climate models. But the real physics that we learn is too complex to solve on any realistic scales, so we must parameterize what we know in order to do the climate calculations in any kind of reasonable time frame. Parameterizations are nice when things change smoothly, but they nearly always fail when they hit discontinuities like the thermostat. We really don’t understand what is different about a SST of 31C as opposed to one of 29C, so we can’t parameterize realistically. Our models then don’t really capture these kinds of events, so we can’t really tell what is or should be happening.

  64. Sam Urbinto
    Posted Jun 6, 2008 at 11:46 AM | Permalink

    I understand Bender’s point perfectly. Maybe sort of.

    If PDV/O/F is “a system with instability but 2 dynamic attractors” that happens every 20 or 30 years, flips or flops, or it’s just sometimes positive or negative noise, nothing “more than spatial 1/f noise of some sort”, then we get into the world of Koutsoyiannis and Wunsch, and out of the world of Tamino, RC, and the IPCC. And “if internal climate variability is synonymous with a 1/f noise process” then somebody (everybody) has a lot of explaining to do about “GCM ensemble statistics” computations — Lucia’s divergence is explained. Then all we really have is yesterday’s faux trend (it looks like a trend but it’s not) that’s really just today’s noise.

    Another way to solve the same problem, wrong reasons or not. Think outside the box, so to speak. Use what is there to get things the way you want them to be. In the quest for finding the right solution to a problem, you’ll get some that are just not quite right, but if you keep searching you refine what you got wrong.

    Or in other words, stop thinking of this the way “they” want you to, don’t accept their premise and operate off of it trying to debate on their terms and their premise; instead go against the premise in the first place.

    As far as the rest, things are already changing. Ever heard of the Clean Development Mechanism? It’s already too late (or happening just in time). An entire industry funded out of thin air. That is paying for….

    See Steve’s comments in John’s #47. Come to the bulletin board if you’d like; we’ll have a happy time talking about CERs.

  65. Pat Keating
    Posted Jun 6, 2008 at 11:59 AM | Permalink

    48 anna
    I remember the analog computers.
    You would still have the same problem. If you could figure out the various resistances, gains, etc. to solve all the diff eqs., you could set it up in a digital computer more easily.

  66. jae
    Posted Jun 6, 2008 at 12:05 PM | Permalink

    55, Bill E. Bingo! You never see air temperatures much above 32 C, either. You can’t raise the temperature of the water, without raising the absolute humidity of the air, and when it tries to go above about 25 g/m^3, it starts to cloud up and rain, thus limiting temperature. That’s why I think the wet tropical areas are usually “maxed out,” temperature-wise. This doesn’t happen in dry areas and temperatures get much higher, due to the lack of this NEGATIVE water feedback.

  67. Gerald Machnee
    Posted Jun 6, 2008 at 12:59 PM | Permalink

    Re #47 – **I seriously doubt that it still is the #1 policy issue of the day…Beyond all that, however, how can you seriously expect to be handed a derivation of 2.5 deg C from first principles when, as I am sure you are well aware, many of the complex feedback mechanisms that are involved are not fully understood yet and values for the sensitivity are typically reported as falling somewhere within a range of several degrees when bounded by 95% confidence limits?**
    As long as governments are still planning “carbon taxes”, as in Canada, it will be at or near the top. The media for the most part are still with it. The main grant beneficiaries at the universities and IPCC are using 2.5C or more, so Steve is correct in asking for a detailed explanation. If they cannot provide it, then admit it. I also ask for a study that shows what percentage CO2 has contributed to the warming. It makes for a short conversation.

  68. MJW
    Posted Jun 6, 2008 at 1:20 PM | Permalink

    Hans Erren (#53): Huh? You can’t do this?

    No, I can’t.

    (#28): Now if you do your homework, like I did (take an EPA or HITRAN absorbance spectrum, apply Lambert-Beer, apply the convolution with a Planck curve and integrate over the spectrum), the result is a straight line when plotted using a logarithmic x-axis: to my knowledge that is a logarithmic relationship.

    You say you’ve done your homework; well, show your work.

  69. Craig Loehle
    Posted Jun 6, 2008 at 1:27 PM | Permalink

    55 Bill E.: this is exactly the infrared iris theory of Lindzen as elaborated recently by Roy Spencer.

  70. Posted Jun 6, 2008 at 1:52 PM | Permalink

    I apologise for going off track, but some of you might like to view the UK’s Financial Times (http://www.ft.com/), which is doing a major feature on climate change. The Economist (http://www.economist.com) has a leader on the topic. Little of genuine technical interest, but an insight into how the moneyed world views the debate.

    Again, sorry for being off track, but the real questions coming out of the “bucket” story are:

    (a) For what time period will the Met Office/Hadley adjust their data? Will it be just to 1960, or to 1980, or even 1990, as Kent et al’s paper would suggest?

    (b) What will be the size of the adjustment(s), and what will be the criteria that justify those adjustments?
    Looking back, many of the comments in the discussions (and I was a guilty party) concentrated simply on data quality.

  71. gs
    Posted Jun 6, 2008 at 5:30 PM | Permalink

    This seems worth bringing to CA’s attention:

    Western Europe is warming much faster than expected

    Geert Jan van Oldenborgh, Sybren Drijfhout, Aad van Ulden, Reindert Haarsma, Andreas Sterl, Camiel Severijns, Wilco Hazeleger, Henk Dijkstra
    (Submitted on 4 Jun 2008 (v1), last revised 6 Jun 2008 (this version, v2))

    Abstract: The warming trend of the last decades is now so strong that it is discernible in local temperature observations. This opens the possibility to compare the trend to the warming predicted by comprehensive climate models (GCMs), which up to now could not be verified directly to observations on a local scale, because the signal-to-noise ratio was too low. The observed temperature trend in western Europe over the last decades appears much stronger than simulated by state-of-the-art GCMs. The difference is very unlikely due to random fluctuations, either in fast weather processes or in decadal climate fluctuations. In winter and spring, changes in atmospheric circulation are important; in spring and summer changes in soil moisture and cloud cover. A misrepresentation of the North Atlantic Current affects trends along the coast. Many of these processes continue to affect trends in projections for the 21st century. This implies that climate predictions for western Europe probably underestimate the effects of anthropogenic climate change.

    I’m only a casual, occasional, and unknowledgeable visitor here, so my apologies if the paper is old news.

  72. Posted Jun 6, 2008 at 5:37 PM | Permalink

    re 60:

    Hans Erren (#53): Huh? You can’t do this?

    No, I can’t.

    This really is undergraduate physics stuff.
    So you don’t understand what I did, and there is no use in my showing my work to you.
    Come back when you have studied some physics.

    bye

  73. Pat Frank
    Posted Jun 6, 2008 at 5:57 PM | Permalink

    #153 — Thanks, UC. Your comments are illuminating as usual. I’ll respond in a while; maybe even with something cogent. :-) But in the meantime, have Brohan &co. ever explicated their model publicly, or are you having to infer it from their visible work?

  74. jae
    Posted Jun 6, 2008 at 7:47 PM | Permalink

    Well, re: the discussion between Hans Erren, MJW, and whoever: All these radiation games are plain silly, since they don’t account for the OBSERVED heating of the other 98-99 percent of the atmosphere. There are other physical phenomena besides radiation. Think about thermalization and just where the heck that energy for thermalization comes from. I’m no physicist, but the holes in these radiation cartoons are SO big that it is laughable. Not even to mention convection, which is also ignored in all these silly radiation games.

  75. David Smith
    Posted Jun 6, 2008 at 8:58 PM | Permalink

    RSS global anomaly for May is -0.08C. The last three month period in the Southern Hemisphere (0-70S) is the most anomalously cool since 1985.

  76. cba
    Posted Jun 6, 2008 at 9:36 PM | Permalink

    61 (Craig):

    wanna bet it’s got to be something very much along that line? (iris)

    Albedo varies substantially (around 10%) over time, and the consequences are large enough to swamp something like a paltry CO2 doubling; to some extent it has to be a consequence of conditions as well as a cause of them. While some fraction may be random, some has to be an iris-type feedback, because albedo changes of this kind are entirely due to clouds, which can produce both strongly negative and strongly positive feedbacks. Hence it cannot be totally random, and it cannot have a net positive feedback effect; it must somehow have a net negative feedback effect.
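
    A quick back-of-envelope check (Python) of the magnitudes cba is comparing, using standard round numbers (solar constant ~1361 W/m2, mean albedo ~0.3, ~3.7 W/m2 for a CO2 doubling); the 10% relative albedo change is simply taken from the comment above, not from any dataset:

    # Rough back-of-envelope check of the albedo-vs-CO2-doubling comparison above.
    # All numbers are standard round values or taken from the comment; nothing here
    # is a measurement.
    S = 1361.0                   # solar constant, W/m^2
    albedo = 0.30                # mean planetary albedo
    d_albedo = 0.10 * albedo     # the ~10% relative change claimed above

    absorbed = S * (1 - albedo) / 4.0       # ~238 W/m^2 mean absorbed shortwave
    d_forcing_albedo = S * d_albedo / 4.0   # change in absorbed shortwave from that albedo shift
    d_forcing_2xCO2 = 3.7                   # canonical forcing for a CO2 doubling, W/m^2

    print(f"mean absorbed shortwave         : {absorbed:.1f} W/m^2")
    print(f"forcing from a 0.03 albedo shift: {d_forcing_albedo:.1f} W/m^2")
    print(f"forcing from doubled CO2        : {d_forcing_2xCO2:.1f} W/m^2")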

  77. Posted Jun 6, 2008 at 10:43 PM | Permalink

    Re 50 Kim

    Thank you. I just thought Lord Bragg’s Radio 4 programme about Lysenko was relevant to the politicisation of science aspects.

    JF

  78. DeWitt Payne
    Posted Jun 6, 2008 at 10:55 PM | Permalink

    MJW,

    The references I’ve read, such as this one, say the curve of growth is approximately proportional to concentration for weak lines, sqrt(log(concentration)) for moderately strong lines, and sqrt(concentration) for strong lines. Notably, log(concentration) is missing.

    Equivalent line width, as is described in your link, is a useful concept when the spectrometer resolution is much greater than the width of the line. However, that isn’t relevant to this discussion. Beer-Lambert is useful when the spectrometer has much higher resolution than the spectral feature, as is the case when the individual lines broaden sufficiently to produce band absorption or as in UV-VIS spectrometry in solution. The band wings will still behave according to Beer-Lambert even when the peak of the band has absorbance too high to be measured accurately with a real world spectrophotometer (stray light and other problems cause deviation from Beer-Lambert at high absorbance).
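
    A minimal numerical sketch (Python) of the curve-of-growth regimes DeWitt describes, for a single pressure-broadened (Lorentzian) line. Units and absorber amounts are arbitrary; the point is only the scaling of the equivalent width W with absorber amount u: W grows linearly while the line is weak and roughly as sqrt(u) once the core saturates (a Doppler core would give the intermediate sqrt(log) regime instead):

    import numpy as np

    gamma = 1.0                                    # line half-width (arbitrary units)
    nu = np.linspace(-2000.0, 2000.0, 400001)      # frequency grid, wide enough for the wings
    dnu = nu[1] - nu[0]
    phi = (gamma / np.pi) / (nu**2 + gamma**2)     # normalized Lorentzian line shape

    for u in [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:   # absorber amount (column * line strength)
        tau = u * phi                                  # optical depth at each frequency
        W = np.sum(1.0 - np.exp(-tau)) * dnu           # equivalent width
        print(f"u = {u:8.2f}   W = {W:10.4f}   W/u = {W/u:7.4f}   W/sqrt(u) = {W/np.sqrt(u):7.4f}")

    The ratio W/u is roughly constant for small u (linear regime) and W/sqrt(u) approaches a constant for large u (square-root regime).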

  79. Geoff Sherrington
    Posted Jun 7, 2008 at 12:12 AM | Permalink

    Hi all,

    Tried several excursions to Realclimate last night and was again treated to comments taken out of context, ridiculed, and my several rights of reply refused.

    I’m not going to try there any more. Some of them cannot even understand what is clearly written, preferring to ridicule what they imagine they have read, rather than what was written. Sure, I inserted a few subtle little traps. They might be realising this by now.

    One was a hockey stick reference from a submission to an Australia GHG emissions inquiry where I included the figure from UC http://signals.auditblogs.com/files/2008/03/mann_smooth.png
    with attribution, showing CRU temps spliced to an update using one of Mann’s smoothing algorithms. The wise folk at Realclimate thought this was a “hoot”, not realising that the shoe was on the other foot.

    And so on and so on.

    Summary of Realclimate – a Mutual Admiration Society lacking sophistication.

  80. MJW
    Posted Jun 7, 2008 at 1:09 AM | Permalink

    Hans Erren: This really is undergraduate physics stuff. So you don’t understand what I did, and there is no use in my showing my work to you. Come back when you have studied some physics.

    bye

    Aren’t you charming. You seem to be under the impression you can make any claim you want, and it’s my “homework” to prove or disprove it. If you’ve already done the calculations, why not show the results of each step that led to your conclusion? Why insist I redo the work you’ve already done? Bye, indeed.

  81. MJW
    Posted Jun 7, 2008 at 1:32 AM | Permalink

    DeWitt Payne: Equivalent line width, as is described in your link, is a useful concept when the spectrometer resolution is much greater than the width of the line. However, that isn’t relevant to this discussion.

    I have no idea whether equivalent line width is or isn’t applicable. Phil is responsible for that claim, not me. I merely pointed out that if it is applicable, the formula for the moderately strong lines is sqrt(log(conc)), not log(conc).

    Let me add, as far as whether the CO2 response is logarithmic, I’m a skeptical agnostic. Agnostic, because I haven’t seen convincing evidence either way. Skeptical, because there doesn’t seem to be any theoretical reason, so if the behavior is logarithmic, it’s only due to coincidence, and it seems like an unlikely coincidence.

  82. cba
    Posted Jun 7, 2008 at 4:23 AM | Permalink

    DeWitt,

    you beat me to that broadening comment of the wings & overlaps combining before being placed in the exponent only because either CA was down at the time or this old computer was having issues with it.

    MJW, it works that way in the theory, not just in the instrumentation. Every wavelength is subject to laws like Beer-Lambert. The optical depth tau is based upon the amounts of the various molecules in the path, usually as a concentration per cm, which is then multiplied by the actual length of path containing those molecules. This tau is built from the contributions of all lines whose wings reach into the small chunk of wavelength associated with the particular calculation – and this is then done for every chunk within the overall bandwidth of interest. The transmission of light at that wavelength through the length of medium is then exp(-tau*length). This is where the overwhelming ‘log’ property comes from, because lines and wings end up inside this exponent for each particular wavelength. The result, multiplied by the light intensity at that wavelength, is what passes through the medium, and what is trapped is 1-exp(-tau*length) times what starts through.

    While it is a log (or exponential) relationship for certain, it comes out looking less than perfect when dealing with a single molecule such as CO2, because that is only a part of the whole effect; and even then there are many lines whose wings overlap within a molecule type, so the log relationship with respect to a single molecule type, summed or integrated over all the wavelengths, is somewhat less than perfect in appearance.
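
    A toy version (Python) of the per-wavelength bookkeeping cba describes: a total tau(lambda) is built from a few lines, and the trapped fraction 1-exp(-tau) is weighted by a Planck curve and integrated over the band. The three line positions, strengths and widths are invented for illustration (not HITRAN values) and the output units are arbitrary; the point is only that the band-integrated absorption grows much more slowly than the concentration once the line cores saturate:

    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    T = 288.0                                  # illustrative emitting temperature, K
    lam_um = np.linspace(13.0, 17.0, 20001)    # wavelength grid, micrometres
    dlam = lam_um[1] - lam_um[0]

    def planck(lam_um, T):
        lam = lam_um * 1e-6                    # convert to metres for the Planck formula
        return 2*h*c**2 / lam**5 / np.expm1(h*c/(lam*k*T))

    # a few made-up Lorentzian lines: (centre in um, strength, half-width in um)
    lines = [(14.8, 0.5, 0.05), (15.0, 2.0, 0.05), (15.3, 0.2, 0.05)]

    def band_absorption(conc):
        tau = np.zeros_like(lam_um)
        for lam0, strength, gam in lines:      # all lines/wings add into the same tau
            tau += conc * strength * (gam/np.pi) / ((lam_um - lam0)**2 + gam**2)
        trapped = (1.0 - np.exp(-tau)) * planck(lam_um, T)   # per-wavelength absorption
        return np.sum(trapped) * dlam

    prev = None
    for conc in [1, 2, 4, 8, 16]:              # successive doublings of the absorber
        a = band_absorption(conc)
        note = "" if prev is None else f"   increment over previous doubling: {a - prev:.3e}"
        print(f"conc x{conc:<2d}: band absorption = {a:.3e}{note}")
        prev = a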

  83. anna v
    Posted Jun 7, 2008 at 5:43 AM | Permalink

    Pat Keating 57

    I remember the analog computers.
    You would still have the same problem. If you could figure out the various resistances, gains, etc. to solve all the diff eqs., you could set it up in a digital computer, more easily.

    Not on the same time scale or size of computer. The link I gave with an overview of analogue computers has an example where solving a differential equation takes hours on an analogue machine and years on a digital one.

    Thus, instead of asking for teraflop computing power, the climate modellers should maybe talk to electrical engineers.

  84. Steve K
    Posted Jun 7, 2008 at 5:53 AM | Permalink

    MJW, A lot of natural processes such as diffusion can be described as a log function.

  85. John Lang
    Posted Jun 7, 2008 at 6:29 AM | Permalink

    Given that water vapour is supposed to increase roughly in proportion to increasing CO2, where are the water vapour data to back up this key parameter of the theory?

    Has anyone ever found a good atmospheric water vapour content data series?

  86. EJ
    Posted Jun 7, 2008 at 7:10 AM | Permalink

    I guess Phil. never found that basic textbook which defines the temp. vs. CO2 relationship. I was waiting.

  87. bender
    Posted Jun 7, 2008 at 9:10 AM | Permalink

    I’m amused at how bender has declared Mantua “the enemy” more or less. Well, if it gets him off my back.

    Glad you’re easily amused. But in fact I made no such declaration. I merely point out that Mantua is pushing a paradigm – an oscillatory PDO – and kids like you are eating it up, when in fact the mechanisms driving PDV are not at all understood. Not in the least. I repeat: zero predictability for these so-called periodic shifts in state. Mantua’s competitors understand this, but they are less publicly visible. You are, of course, free to believe what you choose.

  88. Pat Keating
    Posted Jun 7, 2008 at 10:13 AM | Permalink

    74 anna
    The point I made was that the problem with modeling is that the differential equations needed are still not properly understood. You still need to resolve that issue before successfully using either method.

    That quote you referred to was talking about a comparison between analog computers and digital computers in the 1950s. We all know how much faster digital computers are now compared with then.

  89. anna v
    Posted Jun 7, 2008 at 10:41 AM | Permalink

    Pat Keating,

    I agree about setting up the problem correctly before any thing else.

    Well, all the improvements have happened in the electronic components and I do not see why analogue circuits would not be equally accelerated. I do not know because it is completely outside my field.

  90. See - owe to Rich
    Posted Jun 7, 2008 at 11:13 AM | Permalink

    On the anniversary of hearing Lindzen give, at The Institute Of Physics, a devastating rebuttal of climate modelling, I indulged in some masochism by attending Professor Sir David King’s talk “The Hot Topic” at the Cheltenham Science Festival. Even though (or because?) I was sitting in the front row I was not one of the 6 allowed to ask questions. If I had been, I was going to put:

    “The globe has cooled by a full quarter of a degree over the last 12 months, the current solar cycle is set to be the longest since at least 1856, and a model combining CO2 effects with solar cycle effects predicts cooling for 25 years and warming of only 0.7C by 2100. Why is that a cause for panic, and shouldn’t we instead concentrate on ‘making poverty history’?”

    Anyway, here are a few snippets from the talk, which was very eloquent without any notes, but I suppose you expect that from a former Chief Scientific Advisor.

    1. The world population will increase by 1 billion over the next 13 years, to 7.8B, but is expected to flatten out at 9B.

    2. Demand for water will exceed supply by 2050 globally, and earlier in some places.

    3. Victoria in Australia has suffered drought for 7 years in a row [is this still true?].

    4. There will be conflicts and terrorism over dwindling resources.

    5. The ozone hole was solved by political response to scientific pressure, with removal of CFCs, but it will of course take until about 2050 to be completely repaired.

    6. He showed a graph by Fedorov et al in Science 312 which has a hockey stick, but fair enough CO2 does look like a hockey stick.

    7. In 1896 a scientist whose name I missed calculated that doubling CO2 would increase global temperatures by 5C. Even with today’s wonderful computers we now know that it is 5 +/- 2.

    8. Since CO2 goes down to low 200s in ice ages, we have now switched off ice ages for good.

    9. Deforestation is 17% of the current problem of increasing CO2 at 2ppmpa [I think it's more like 1.6 now, but 2.0 soon is plausible].

    10. If we stop CO2 emissions now there is 30 years’ of global warming in the pipeline.

    11. He cunningly mixed in Central European temperatures with global ones, and said that Hadley predict an increase of 5.5C for these by 2100, but at any time the weather noise is about +/-0.5C [so it'll take us longer to disprove this claim].

    12. In 2003 there were 32000 fatalities from heat, and now in 2008 we can expect each summer (on average) to be as warm as the superb 1947 one.

    13. The EU wish to limit post-industrial global temperature increases to 2C, compared with current 0.8C.

    14. The long right tail in the probability distributions of future warming is very worrying (“if a pilot told you there was an 80% chance of landing safely, you wouldn’t get on the plane would you?”).

    15. At +3.7C warming the Greenland icecap will melt.

    16. Under business as usual, 50 million people will be displaced by sea level rises by 2100.

    17. Under business as usual, our current 9 Gigatonnes of CO2 per year will increase to 27.

    18. Bloody nothing is being done about all this because of political inertia.

    19. Britain’s wonderful initiatives have been blocked by the US.

    20. Britain has taken the lead on cap and trade, and currently it is 28E (euros) per tonne, but 50E would be better to provide real incentives.

    21. There’s going to be a big bash at Copenhagen in 2009, and he’s intending to push for a CO2 asymptote of 450 ppm. [I wonder what the Danish meteorological authority's position will be by then?]

    22. Tonnes of emissions per person are 12 in UK, 27 in USA, 2.2 in India, but China has now overtaken USA in total emissions.

    23. His Institute with facilities near Loughborough is being funded to the tune of about £1B.

    24. But he felt he needed to “inject a note of optimism”. If we could harness all the sunlight landing on a small square in each continent, it would satisfy our energy needs.

    In question time he was mostly polite but seemed to get visibly irritated by a question on solar effects – “Of course solar effects are in the climate models, they have some of the best scientists in the world working on them”.

    My only solace was that one questioner was an old retired colleague of mine, asking how he should respond to sceptics feeding him myths. So I collared him afterwards and gave him a copy of my paper (you could regard that as me feeding him my myths, depending on your views). Anyway, he’s always been extremely nice and promised to read it.

    Rich. (1.6 hours after leaving the talk by bicycle)

  91. mikep
    Posted Jun 7, 2008 at 1:04 PM | Permalink

    Always distrust scientists talking about economics. The idea that we can know now that demand for water will exceed supply by 2050 – does he mean if present trends continue and with no change in prices? (Water is charged for in many cases and could be where it is not at present.) Excess demand is not an equilibrium!

  92. cba
    Posted Jun 7, 2008 at 3:57 PM | Permalink

    rich

    my condolences

    was anything that guy said bordering on the possibility of being true?

    I guess that little square for solar energy – if it’s only for transportation biofuels for the globe – is probably South America.

    as for electrical solar panels – I wonder if he ever bothered to figure out what happens when one replaces an albedo of 10-40 % with an albedo of 2-3% over a significant region ??? Probably not!

    I bet he’s real good at hobnobbing at the cocktail parties though.

  93. cba
    Posted Jun 7, 2008 at 4:59 PM | Permalink

    82 (mikep):

    The simple fact is that most scientists are totally clueless about economics. Maybe that’s good, because the vast majority of them would be doing something other than the science they’re doing if they did understand it. In that case, the only ones left would be those who truly wanted to do it rather than those just trying to make a living. I guess that would mean the ranks would be mostly “amateurs”, with the “professionals” gone elsewhere.

    On the flip side, it’s probably a good idea that so many are scientists, where the potential damage to society is usually fairly minimal and the scientific method usually takes care of crackpot ideas in the fairly short run.

  94. Barclay E. MacDonald
    Posted Jun 7, 2008 at 5:20 PM | Permalink

    AIC as a musical? Does Climate Science get any better than this?

  95. Barclay E. MacDonald
    Posted Jun 7, 2008 at 5:25 PM | Permalink

    Ooops! Should be

    AIT as a musical?

  96. MJW
    Posted Jun 7, 2008 at 5:42 PM | Permalink

    cba: “While it is a log (or exponential) relationship for certain…”

    I don’t think it makes sense to say it’s logarithmic, and then add “or exponential” parenthetically, as if they’re somehow equivalent. (I also don’t think there’s anything certain about either.)

    Steve K.: MJW, A lot of natural processes such as diffusion can be described as a log function.

    I Googled “logarithm diffusion” and came up with some stuff on “logarithmic diffusion,” but nothing that clearly explained what it was. The best I could find was this article, which seems (from a quick reading) to say the logarithm occurs in the differential equation, not in the solution. Nothing else I saw indicated that diffusion was a logarithmic process, so I’ll need a link to a relevant article before I can comment on it.

    It seems to me that logs occur mostly in situations where they’re the inverse of an exponential process, and therefore the undefined “zero” case isn’t a problem. For example, starting from a single organism, the number of bacteria (growing without restraint) is an exponential function of time, so the time to produce N bacteria is K*log(N). It makes no sense to ask at what time there were NO bacteria. It obviously makes perfect sense to ask how much CO2 greenhouse warming results from no CO2; and the answer should be zero, not undefined or some huge negative value. While the response could be approximately logarithmic over some range away from zero, I think it’s harder to argue a theoretical basis for the relationship.
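
    A tiny illustration (Python) of the point about logs arising as the inverse of an exponential process, using MJW’s bacteria example; the starting count and doubling time are arbitrary:

    import math

    N0, doubling_time = 1, 1.0          # start with one organism, arbitrary time unit

    def time_to_reach(N):
        # inverse of unrestrained exponential growth: N(t) = N0 * 2**(t / doubling_time)
        return doubling_time * math.log2(N / N0)

    for N in [1, 10, 1000, 10**6]:
        print(f"time to reach {N:>8d} organisms: {time_to_reach(N):6.2f} doubling times")

    # time_to_reach(0) would require log2(0): there is no finite time at which the
    # population was zero, which is why the singularity is harmless in this setting.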

  97. Andrew
    Posted Jun 7, 2008 at 6:47 PM | Permalink

    Here’s something fun. I’m using the UAH LT data, with the ENSO effect removed (Nino 3.4 index, shifted six months and multiplied by 0.129). This is a comparison of the decline after Pinatubo with a simple model I created. I invite anyone to guess the τ and λ.

  98. Andrew
    Posted Jun 7, 2008 at 6:48 PM | Permalink

    Doh! I forgot – AOD from here:
    http://data.giss.nasa.gov/modelforce/strataer/tau_line.txt
    multiplied by -21 W/m2 to get the forcing.
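
    For readers who want to play along, here is a minimal sketch (Python) of the kind of one-box response model Andrew describes, dT/dt = (λF − T)/τ, driven by a volcanic-aerosol forcing of -21 W/m2 per unit AOD as quoted above. The exponentially decaying AOD series and the τ and λ values below are placeholders, not Andrew’s actual data or choices:

    import numpy as np

    dt = 1.0 / 12.0                       # monthly steps, in years
    t = np.arange(0.0, 10.0, dt)
    aod = 0.15 * np.exp(-t / 1.0)         # synthetic aerosol optical depth decaying over ~1 yr
    forcing = -21.0 * aod                 # W/m^2, using the scaling quoted above

    tau, lam = 1.5, 0.25                  # response time (yr) and sensitivity (K per W/m^2) -- guesses
    T = np.zeros_like(t)
    for i in range(1, len(t)):            # simple forward-Euler integration of dT/dt = (lam*F - T)/tau
        dT = (lam * forcing[i-1] - T[i-1]) / tau
        T[i] = T[i-1] + dT * dt

    print(f"peak cooling: {T.min():.2f} K at t = {t[T.argmin()]:.2f} yr after the eruption")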

  99. Geoff Sherrington
    Posted Jun 7, 2008 at 7:20 PM | Permalink

    Re # 81 See – Owe to Rich

    On Sir David King’s talk,

    3. Victoria in Australia has suffered drought for 7 years in a row [is this still true?].

    Yes, at least 7 years, possibly 11, depending on the definition. But not far north there have been normal to heavy rainfalls over this period, with only the occasional short drought, say 100-400 km from the drought area. BAU. Near Brisbane (1000 km north) last week there were 4 inches of rain (100 mm) in 24 hours.

    Sir David’s statement is cherry picking of a most selective type.

    The real question is, “How have people coped with the drought?” and the answer is “Admirably”. We do not have food shortages, some people in marginal areas have sadly left their land, some food prices have risen but not excessively, there are no riots in the street and affected farmers are stoic, knowing drought has happened before and more rain will one day fall. For the AGW observers, southern Victoria has had its coldest consecutive 3 months since about 1990.

    Meanwhile the Victoria Government is pressing on with plans to build a desalination plant powered by windmills. As I have written so often before, never underestimate the ingenuity of people to respond to crises.

    But first, describe the crisis. What is the real CO2 sensitivity?

  100. Philip_B
    Posted Jun 7, 2008 at 7:53 PM | Permalink

    3. Victoria in Australia has suffered drought for 7 years in a row [is this still true?].

    Australian regions go through cycles of wet years and dry years – perhaps linked to ENSO, perhaps not – but you can see it clearly in the rainfall records. Two or three years ago, after a number of dry years in many areas, there was a lot of talk about GW sending Australia into permanent drought. Since then most populated areas have been wetter than normal, and many much wetter than normal (large areas of Queensland, NSW and Western Australia).

    It’s fair to say much of Victoria is still in a dry phase, as is Tasmania. Is it still in drought? I don’t think it ever was, in the sense of long periods without significant rain. Melbourne has an average of 100 rain days a year. I checked 2006 and 2007: these years had 69 and 83 rain days respectively. Decide for yourself whether that constitutes a drought.

    In Australia, AGW proponents have dropped the GW causing drought meme because most Australians are seeing an unusually wet year. However, ‘AGW causes drought’ is one of the dire consequences the IPCC predicts. I guess King had to come up with somewhere as an example and Victoria is probably the best he could find.

    BoM interactive maps

  101. Pat Keating
    Posted Jun 7, 2008 at 8:28 PM | Permalink

    81 Rich

    Since CO2 goes down to low 200s in ice ages, we have now switched off ice ages for good.

    How arrogant to make an assertion like that. Sounds like tempting fate, to me.

  102. Pat Keating
    Posted Jun 7, 2008 at 8:34 PM | Permalink

    87
    In statistical mechanics, entropy is S = k.ln N, and in information theory S = -Sum(p.ln p). These are pretty basic equations.

    Steve: Indeed. I’d prefer that anything basic be discussed at the Bulletin Board.

  103. maksimovich
    Posted Jun 7, 2008 at 8:46 PM | Permalink

    re 91

    Its fair to say much of Victoria is still in a dry phase, as is Tasmania.

    Lets look at an adjacent cell in the same latitudinal range.

    This for the west coast of the south island of NZ.

    One would expect similar distribution in the Australian record for the same latitudes

    This for the SOI

  104. MJW
    Posted Jun 7, 2008 at 9:25 PM | Permalink

    Pat Keating: In statistical mechanics, entropy is S = k.ln N, and in information theory S = -Sum(p.ln p). These are pretty basic equations.

    But what does it mean for a system to have NO microstates? Since that seems like an impossible (or meaningless) situation, the example supports my point that when the log arises in an equation for a natural process, the zero value is either impossible, or the infinite result is appropriate.

    The information theory example illustrates the same point, since outcomes with p=0 are taken to be 0. That makes sense, both because lim p->0 p*log2(p) = 0, and because the sum can only reasonably be taken over the possible values.

  105. MJW
    Posted Jun 7, 2008 at 9:46 PM | Permalink

    Sorry to continue on this subject after seeing that Steve McIntyre prefers it be discussed elsewhere, but I wanted to add that in rereading the comment that Pat Keating responded to, I realize it made a less general point than “when the log arises in an equation for a natural process, the zero value is either impossible, or the infinite result is appropriate.” In any case, that’s the point I should have made.

  106. Posted Jun 7, 2008 at 11:45 PM | Permalink

    maksimovich, you are comparing Apples to Oranges – the west coast of the South Island of New Zealand is mostly much further south than anywhere in mainland Australia or Tasmania.

    New Zealand generally has the opposite phasing in its rainfall patterns to SE Australia – when it is a period in SE Aust it is often a dry period in NZ, and reverse.

    Victoria is mostly still in serious drought in spite of La Nina, as is much of inland NSW, however I still expect this to change as the E Indian Ocean develops a warm anomaly off Western Australia, which has been slow to develop this year – these areas get their rains from NW cloud bands crossing the continent from the NW that then interact with vigorous cold fronts coming in from the Southern Ocean to the SW.

    Meanwhile, much of the E coast of Australia has had good rains from La Nina.

  107. Posted Jun 7, 2008 at 11:51 PM | Permalink

    Should read:

    “New Zealand generally has the opposite phasing in its rainfall patterns to SE Australia – when it is a wet period in SE Aust it is often a dry period in NZ, and reverse.”

  108. maksimovich
    Posted Jun 8, 2008 at 12:56 AM | Permalink

    the west coast of the South Island of New Zealand is mostly much further south than anywhere in mainland Australia or Tasmania.

    Hobart, Tasmania 52s
    Greymouth (Westport), New Zealand: latitude 42.434º S

    Meanwhile, much of the E coast of Australia has had good rains from La Nina.

    The blocking high in the Tasman is well documented, e.g. Trenberth 1982.

    As is the IPO relationship

    http://adsabs.harvard.edu/abs/2001IJCli..21.1705S

    and if you read the comment between the 2 graphs in 93 above one would not diverge into the comparative physiological anatomy of various fruit species.

    One would expect similar distribution in the Australian record for the same latitudes

  109. Posted Jun 8, 2008 at 3:58 AM | Permalink

    My my, Hobart appears to have traveled a long way south since I last lived there! :)

    Lets see:

    Hobart, Southern Tasmania: 42°52′S, 147°19′E.

    Invercargill, Southern South Island New Zealand: 46° 26′S, 168° 21′E.

    New Zealand extends roughly 3.5 degrees or over 200 nm or nearly 400 km further south into the roaring forties:

    Map courtesy The World Factbook

  110. Bob B
    Posted Jun 8, 2008 at 6:11 AM | Permalink

    Tenney Naumer, a blogger at DotEarth, is trying to tell me Rossby waves are used by GISS Temp to fix the lack of station coverage:

    Surface station coverage in 1978:

    http://www.climateaudit.org/wp-content/uploads/2008/05/ghcn_giss_250km_anom04_1978_1978_1951_1980.gif

    Surface station coverage in 2008:

    http://www.climateaudit.org/wp-content/uploads/2008/05/ghcn_giss_250km_anom04_2008_2008_1951_1980.gif

    “You also do not understand the methodology of the way GISS calculates surface temperature using the mathematics of Rossby waves — a very sound methodology, as it agrees quite well with other temperature sources.

    Go back to school.

    — Posted by Tenney Naumer ”

    I thought Rossby waves apply in a linear system and not a chaotic one. Any comments or references on this?
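
    For what it’s worth, the published GISS method (Hansen and Lebedeff 1987) builds grid values from station anomalies with a linear distance taper falling to zero at 1200 km; no Rossby-wave mathematics is involved. A minimal sketch (Python) of that weighting, with the station anomalies and distances invented for illustration:

    def giss_style_weight(distance_km, radius_km=1200.0):
        # weight falls linearly from 1 at zero distance to 0 at the cutoff radius
        return max(0.0, 1.0 - distance_km / radius_km)

    # hypothetical stations: (anomaly in deg C, distance from the grid point in km)
    stations = [(0.40, 150.0), (0.10, 600.0), (-0.20, 1100.0), (0.90, 1500.0)]

    num = sum(anom * giss_style_weight(d) for anom, d in stations)
    den = sum(giss_style_weight(d) for _, d in stations)
    print(f"weighted grid-point anomaly: {num/den:.3f} C")   # the 1500 km station gets zero weight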

  111. John M.
    Posted Jun 8, 2008 at 8:03 AM | Permalink

    Gerald Machnee says:
    June 6th, 2008 at 12:59 pm

    As long as governments are still planning “carbon taxes” as in Canada, it will be at or near the top. The media, for the most part, are still with it. The main grant beneficiaries at the universities and the IPCC are using 2.5C or more, so Steve is correct in asking for a detailed explanation. If they cannot provide it, then admit it. I also ask for a study that shows what percentage CO2 has contributed to the warming. It makes for a short conversation.

    Maybe worth bearing in mind that what we actually get to hear about through the media is probably only the visible tip of the iceberg in policy terms. If you do a bit of reading you might be surprised to learn why the United States, Russia, the EU, China, Japan and India are all planning manned lunar missions between 2018 and 2030. I won’t get into blow-by-blow details, as Steve McIntyre would probably see it as way off topic for this blog, but suffice it to say that policy makers who are familiar with the background to that may see some of the assumptions outlined in IPCC reports as being for public consumption only, and not necessarily as something to be taken all that seriously.

    My main quibble on the 2.5C thing is that there is no way that one single value can be predicted from first principles at this point. What gets predicted is a wide sensitivity range within 95% confidence limits and different studies are still coming up with markedly different ranges. I don’t think it is the IPCC’s function to try to come up with the definitive answer at this point as the science simply isn’t there yet to be able to do that. If you check out section 9.6 in the link I provided above:-

    http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Print_Ch09.pdf

    you’ll see that there are still considerable differences between models put together by different researchers, so a 2.5C sensitivity is probably best viewed as a rough guesstimate rather than as a fixed physical constant along the lines of the speed of light in a vacuum.

  112. cba
    Posted Jun 8, 2008 at 8:50 AM | Permalink

    102 (john):

    One can play with averages. However, one does have that limit of 33 K for all contributions from all CO2 doublings and all contributions from all GHG factors, including H2O vapor. That’s the real-world value of GHG contributions: the difference between having no atmosphere and having an atmosphere. There have been 5 virtually identical-effect doublings of CO2 contributing, plus another 5 or 6 doublings of slightly reduced effect – based upon CO2’s actual effect. The sum of these parts cannot be greater than the whole; in fact, they must equal the whole.

    If one wants to postulate that there are new, never-before-seen factors that did not enter into any prior doubling, then one can have outrageously large values of delta T for the next doubling. However, one has to describe what these factors are and why they never had any effect before. I’m not seeing any of that, which leads to the absurd conclusion that the last couple of doublings must have had 10 times the effect of the previous three (plus the similar prior six) on delta T, while there is no difference in the effect of each of the previous three and only a rather minor difference with the prior six. It basically leads to the notion that CO2 is responsible for 250% (a number provided only as an example of something over 100%, not even a SWAG) of the total GHG contribution.

    All efforts at establishing a delta T by looking at a temperature rise are going to be wrong unless all other contributions to that rise have been accounted for; and until recently no one was even recording albedo – which varies dramatically year to year compared with other items like the last two hundred years of CO2 rise – and which has the same effect as a significant change in solar TSI. That means the largest probable contributor to the variations of temperature is not even included in the calculations as something that varies. It also means that, in terms of actual atmospheric effect, we have no clue yet as to whether an increase in CO2 even warms or cools, despite what is known in the lab.
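
    To make the arithmetic cba is gesturing at explicit, here is a small sketch (Python). It rests on strong, purely illustrative assumptions: a strictly logarithmic CO2 response, a chosen CO2 share of the ~33 K total greenhouse effect, and a chosen starting concentration. None of these numbers is an established value:

    import math

    total_ghg_K = 33.0       # surface warming attributed to the whole greenhouse effect
    co2_share = 0.25         # assumed CO2 fraction of that total (illustrative only)
    c_start, c_pre = 1.0, 280.0   # assumed range over which that share accrued, ppm (illustrative)

    doublings_so_far = math.log2(c_pre / c_start)
    per_doubling = total_ghg_K * co2_share / doublings_so_far
    print(f"doublings so far: {doublings_so_far:.1f}")
    print(f"implied warming per doubling, if purely logarithmic: {per_doubling:.2f} K")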

  113. Phil.
    Posted Jun 8, 2008 at 9:38 AM | Permalink

    Re #77

    I guess Phil. never found that basic textbook which defines the temp. vs. CO2 relationship. I was waiting.

    I’m travelling and was away from web access until now. I’m afraid you’ve misread what I was intending to provide, which was the dependence of absorbance on concentration, which as I said ranges from linear through log to sqrt at high concentration.
    You can find a derivation in Spectrophysics, 2nd Ed., by A. P. Thorne, Chapman & Hall.
    In reference to earlier comments, this is done line by line, not by bands; for a band it’s done by adding the individual line contributions. If they’re well separated, like H2O, then they’re additive; if they overlap, then it’s less. It can only be done experimentally for a band.
    CFCs are linear, CO2 log and CH4 sqrt.

    I’m fly fishing in the N Maine woodlands for a week so talk to you later.

    Steve: Enjoy your fishing but once again this reference is to something that was not an issue for me and is totally unresponsive to my longstanding request and the request that Phil had undertaken to respond to. Let me repeat from #23 above:

    Phil, while there is a spectroscopic aspect, that’s not the only aspect to the problem. The “enhanced” greenhouse effect relies on a “higher the colder” heuristic. Houghton’s book contains this heuristic in 4 lines. As you know, I’ve been looking a long time for a more substantial derivation than this sort of piffle and would welcome any specific reference from you beyond arm-waving.

    Your invocation of spectroscopic textbooks is idle, unless you can cite a specific textbook that contains the enhanced greenhouse effect calculation, rather than line calculations which are not at issue.

    In #31 above, Phil responded to my #23 not some other reference and stated:

    Re #23
    Steve, if you want the textbook derivation I’ll dig one out for you when I’m in the office tomorrow, it’s rather routine which is why you don’t find the derivation of the functions in the IPCC report it would be like giving a derivation of the Beer-Lambert law.

    To which, my inline comment at #31:

    Steve: It’s not the same thing at all. Beer-Lambert is one thing; 2.5 deg C from first principles is another. Please understand that, unlike many readers, I don’t suggest that such a derivation cannot plausibly be done, only that no one to date has provided me with a reference to a proper derivation, working through all the parameterizations in detail. Maybe it’s too “routine” to derive the #1 policy issue of our day. Odd.

    As it happens, I’m familiar with the reference that Phil mentioned and copied some relevant sections some time ago. It does not derive the enhanced greenhouse effect.

    Phil said that he would “dig out” a textbook reference that was responsive to my inquiry in #23, which explicitly and categorically asked for a calculation “that contains the enhanced greenhouse effect calculation, rather than line calculations which are not at issue.” Phil has not done this, despite his undertaking in #31. As Phil now says, perhaps it was never his intent to provide a reference that was relevant to my question, but his undertaking in #31 said that he would respond to my inquiry in #23, which he has totally failed to do. You may be fishing, Phil, but please don’t bait-and-switch.

  114. STAFFAN LINDSTROEM
    Posted Jun 8, 2008 at 10:06 AM | Permalink

    63 – OLD NEWS??? … gs … The revised version was submitted on the 6th of June 2008. We are not that fast here, I tell you… Thanks.
    My estimate of the warming in W Europe is that 30-50% of the alleged warming is due to local and/or regional land use change, mainly urban sprawl, more and broader roads etc. The rest is a combination of NAO, PDO and ENSO… We may see declining SSTs in the northern North Atlantic soon, or they are already under way. For some reason I don’t believe just a strong La Niña can get tropical TROPO TEMPS to drop to the lowest since at least 1958… (But Steve Mc, 1973 or so came in close??!!)
    In from NOAA, same date, 2008 June 6: US contiguous 48, MAM = spring, 36th coldest since 1895… Pennsylvania, the home state of a certain Jim Hansen, had its 8th coldest… As NOAA, if you ask me, has some warm bias, well, draw your own conclusions. BUT Northern Europe has had a heatwave according to our dear SMHI. I don’t recognize it: it should be 5 consecutive days above 25C, of which at least 3 above 30C [AT THE VERY SAME LOCATION, AND QUITE A FEW OF THESE, NOT JUST 10-15 WEATHER STATIONS, ALWAYS THE SAME ONES = LOCAL WARMING] – that’s what our neighbouring countries have for HW requirements. The WMO definition is hilarious…
    Sorry, soccer time for a sucker… EURO 2008: 12C in Basel yesterday. We’ve speculated about snowfall during the final of the 2010 World Cup in J-burg… If WC 1978 had been held in 2007: Buenos Aires, July 9th, a WC in a snowball…

  115. nevket240
    Posted Jun 8, 2008 at 8:11 PM | Permalink

    http://blogs.telegraph.co.uk/business/ambrosevanspritchard/june2008/climatechangedestroythiscentury.htm

    Orwell’s fear is coming to you sooner rather than later. Central planning raises its ugly head yet again. Why am I ever more certain that a large, all-powerful US bank is bankrolling this “science” and its fixes???

    regards.

  116. Pat Frank
    Posted Jun 8, 2008 at 9:51 PM | Permalink

    #153 — UC, as a preliminary caveat, I have no problem with the statistical model of monthly mean temperature, as a statistical model. Statistics has its strict criteria in terms of skedasticity, etc., for how to model variance in different ways depending on numerical behavior. No problem.

    But here’s the problem I have with the physical application. The statistical model implies an assumed physical process. The implied assumption is that every day a physical process (a causal process) drives the temperature always towards T, where T represents the observable of some stable or equilibrium physical state. That is, the model carries an unstated hypothesis about physical reality, which assumes that T is a constant asymptote every day across the entire month, meaning that each and every day of that month is being driven to the same stable state.

    That T is not locally attained in any given day is assumed due to some stochastic process (i.e., refractive noise from the atmosphere, or varying breezes) plus some physical bias (i.e., slowly changing angle of insolation). But the statistical model asserts that nevertheless this stable state exists every day in any given month and tends toward a constant T. T is implicitly asserted to be a by-the-month constant physical quantity that should be expressible in terms of something like a thermodynamics of climate.

    But none of that is physically true. There is no daily recurring stable state, and no tendency toward a constant T every day for a month. So, even if the statistical criteria for the model are all rigorously met, the statistical model of Brohan is inappropriate to the physical process.

    In any month, each daily ‘t’ is physically independent of all 29 other t’s, and the accuracy in their average at the end of the month will be the rms average of the daily accuracies.

    The total uncertainty in the monthly average must be the rms average physical accuracy of each daily reading plus the rms average precision due to the noise in that reading. Not even the precision can be reduced by sqrt(60) because each daily reading is statistically independent. So, both accuracy and precision are monthly rms averages.

    This is implicit in the Meteolabor thermohygrometer pdf that you linked in your post. According to that document, the instrument can be read to (+/-)0.1 C (precision), and has a calibrated accuracy of (+/-)0.15 C. At the end of a month, with 60 readings, the accuracy in the average will not be (+/-)0.15/sqrt(60)=0.02 C. No series of readings will improve an accuracy beyond the limits of the instrument. Likewise, the final precision is not (+/-)0.1/sqrt(60)=0.01 because each daily reading represents a statistically independent quantity that is not merely an excursion from a physically real and unvarying T.

    The uncertainty in the average T in any month from that thermohygrometer, under ideal conditions, should be rms 0.15 and rms 0.1 over 60 independent readings, and written as (+/-)0.15(sys), (+/-)0.1(stat), and the total nominal uncertainty in the monthly average is T(+/-)0.25 C.

    In short, the statistical model has to be appropriate to the physical process, and I think Brohan’s model doesn’t make the grade.
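
    A quick Monte Carlo (Python) of the standard distinction Pat Frank is drawing on: a per-reading random error shrinks in a 60-reading mean, while a fixed calibration bias (drawn once per instrument) does not. It illustrates only that distinction; it does not settle his further point about whether the 60 daily values may be treated as repeat samples of a single quantity. The 0.1 C and 0.15 C figures follow the thermohygrometer spec quoted above:

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_readings = 20000, 60
    sigma_random, sigma_bias = 0.10, 0.15

    bias = rng.normal(0.0, sigma_bias, size=(n_trials, 1))             # one calibration bias per instrument/month
    noise = rng.normal(0.0, sigma_random, size=(n_trials, n_readings)) # independent error per reading
    monthly_mean_error = (bias + noise).mean(axis=1)

    print(f"sd of monthly-mean error, noise only : {noise.mean(axis=1).std():.4f}"
          f"  (~0.1/sqrt(60) = {sigma_random/np.sqrt(60):.4f})")
    print(f"sd of monthly-mean error, bias+noise : {monthly_mean_error.std():.4f}"
          f"  (the ~0.15 bias term does not average out)")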

  117. Pat Frank
    Posted Jun 8, 2008 at 9:54 PM | Permalink

    #156 — Actually, not a “constant asymptote” but a constant inflection.

  118. rafa
    Posted Jun 9, 2008 at 10:26 AM | Permalink

    Re #104: I sympathize with Steve’s response to Phil. Many of us have tried before to find what Phil said could be found in some textbook; neither we nor Phil have had any success. Let’s be charitable and assume we misunderstood him. He now says he was trying to provide a textbook reference to something else, but it’s impossible not to feel that he is avoiding the subject. We are where we were before Phil’s intervention.

    best

  119. Gerald Machnee
    Posted Jun 9, 2008 at 5:28 PM | Permalink

    Re #104 – Phil has created what others have done before him – an intense local Greenhouse Effect.

  120. Posted Jun 9, 2008 at 8:14 PM | Permalink

    Back to PDO.

    Bender et al: at the risk of showing how slow I am, I still can’t get what the qualitative difference between PDO and ENSO would be.

    Imagine that we were in the early 1900s and we only had observed 2.5 cycles of ENSO plus some proxy evidence for a few more. Couldn’t we also postulate that we were just watching noise of some color?

    Again, ENSO is nothing like a pendulum sort of oscillation. It doesn’t switch modes in any predictable pattern at all. FWIW some NOAA models predict a new La Nina after this one.

    So is the size of N the only difference then?

    Just thinking aloud to myself here.

    Mikel

  121. MJW
    Posted Jun 9, 2008 at 8:42 PM | Permalink

    Phil.:

    I’m afraid you’ve misread what I was intending to provide which was the dependence of absorbance on concentration which as I said ranges from linear through log to sqrt at high concentration.

    Phil, I previously linked a reference that says it ranges from linear through sqrt(log) to sqrt. Do your sources disagree?

  122. John A
    Posted Jun 10, 2008 at 5:54 AM | Permalink

    Re #156
    Following on from Pat Frank, it’s worth pointing out that, strictly speaking, T is not temperature, at least in terms of physical theory. If there is such a thing – let’s call it X – then the assumptions are that individual measurements in any particular 24-hour period converge on X, and that X and T (thermodynamic temperature) are related.
    That’s not even a gap in the theory. It’s like someone producing a mathematical proof of Goldbach’s Conjecture where on page 5 it says “Then a miracle occurs…”
    There’s a much deeper mathematical issue that I’m not sure this is the correct forum for, but here goes nothing:
    All of the quantities being measured in climate are discrete, and they are meant to label states of the system which are bulk quantities, like temperature. But climate models assume fundamentally that the measurements are samples of a continuous function, or a collection of n continuous functions.
    So we collect all of the functions, we collect our measurements, we assemble our matrix, and we hope that there isn’t a row or column which is a linear combination of the others; and we push the button in the computer that says “Solve”, and out comes (we hope) a set of eigenvalues and a set of eigenvectors, and we plot them and get something that looks like temperature change over the last x years, and we write a paper about our model, and everybody is impressed by our graphics.
    We then say “Hey, let’s extrapolate from our model and see what happens next”, but the model doesn’t behave outside its domain. It’s highly non-linear, and tiny changes in the ninth decimal place of a single eigenvalue – really tiny changes, well below the limits of measurement resolution – produce either a hothouse or an icehouse.
    We know this. We also know that errors in our model increase rapidly outside its domain. The bulk data are not exactly accurate to several places of decimals, far from it.
    But I find myself wondering about the assumptions leading to the model – that there exists a continuous function at any scale, any fraction of time, any fraction of atmospheric volume. That cannot be true, because we’re measuring states which have no meaning below a certain scale – a single molecule does not have a temperature. Nor does it have a definite energy at a quantum level.
    I’m NOT saying that thermodynamics, which deals with these bulk quantities, does not work – of course it does. But the assumption remains that at some stage or other, whether it’s a thermistor measuring resistance in a circuit once a minute or a rain gauge measuring rainfall, there exists a set of continuous functions which describe their variations at any scale.
    Or maybe I’m talking rubbish. You decide.
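
    John A’s description is loose, but the eigenvector step he alludes to resembles an EOF (principal component) decomposition of a space-time anomaly matrix. A minimal sketch (Python) of that step, on purely synthetic random data:

    import numpy as np

    rng = np.random.default_rng(1)
    n_time, n_grid = 120, 50
    field = rng.normal(size=(n_time, n_grid))            # synthetic anomalies, time x gridpoints
    field -= field.mean(axis=0)                           # remove the time mean at each gridpoint

    U, s, Vt = np.linalg.svd(field, full_matrices=False)  # SVD gives the EOFs (rows of Vt) and PCs (U*s)
    explained = s**2 / np.sum(s**2)
    print("variance explained by leading modes:", np.round(explained[:5], 3))
    # Small perturbations to the field can reorder or mix nearly degenerate modes,
    # which is one reason extrapolating from a fitted decomposition is fragile.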

  123. MarkR
    Posted Jun 10, 2008 at 9:42 AM | Permalink

    Could Phil possibly provide a page and line number?

    Lawyers have dealt with Phil’s type for years.

    Title, Page Number, Edition, Line Number. Please.

  124. Sam Urbinto
    Posted Jun 10, 2008 at 10:02 AM | Permalink

    Interesting 2006 story from the Des Moines Register.

    Ethanol: The facts, the questions

  125. Sam Urbinto
    Posted Jun 10, 2008 at 10:08 AM | Permalink

    Is there a global temperature? I say no.

  126. yorick
    Posted Jun 10, 2008 at 11:13 AM | Permalink

    I am sorry, Phil, but I still don’t see where you have made the transition from the test tube to the atmosphere. How does your spectroscopic effect dictate the energy balance of the planet without taking other things into account – the geometry of the planet and atmosphere, for one, and the lapse rate, for another? You seem to be saying that these are irrelevant. Or am I hearing you wrong?

  127. DeWitt Payne
    Posted Jun 10, 2008 at 12:23 PM | Permalink

    Steve McIntyre, or anyone else who has the book:

    Would you please quote the “the higher the colder” four line heuristic from Houghton’s book (The Physics of Atmospheres??). Thanks.

  128. steven mosher
    Posted Jun 10, 2008 at 2:45 PM | Permalink

    101. he is blowing smoke out of his hindquarters.

    nothing of the sort is in the code.

    Tell him to quote the lines of code. they are public.

  129. steven mosher
    Posted Jun 10, 2008 at 5:04 PM | Permalink

    UC! tamino actually did a post on Hurst! have a look

  130. Sam Urbinto
    Posted Jun 10, 2008 at 5:37 PM | Permalink

    Mosh: Tamino? You might as well be trying to teach your dog how to speak German.

  131. Craig Loehle
    Posted Jun 10, 2008 at 6:44 PM | Permalink

    I heard they tried to import some rescue dogs, but they only understood German (not spoke it obviously)…

  132. kim
    Posted Jun 10, 2008 at 7:12 PM | Permalink

    115 (SM) Tenney Naumer is a distaff Dano.
    =======================

  133. Syl
    Posted Jun 10, 2008 at 9:12 PM | Permalink

    Perhaps Tenney could give us the PPM of water vapor. You know, I’m beginning to suspect that CO2 has little to no effect because the concentration is still too low. Mayhaps we should have this conversation again if it ever reaches 4000 ppm.

  134. Pat Keating
    Posted Jun 10, 2008 at 9:30 PM | Permalink

    120 Syl
    You have to remember (a) that CO2 is a very strong absorber at 15µ, and (b) that water vapour falls off rapidly with decreasing temperature, and thus with altitude. You can’t just look at the surface concentrations.

  135. steven mosher
    Posted Jun 11, 2008 at 7:36 AM | Permalink

    re 117. I don’t think many of his readers will comment, since they haven’t been able to tie Hurst to tobacco lobbies. But he does give a fine exposition of the basics.

  136. Sam Urbinto
    Posted Jun 11, 2008 at 11:31 AM | Permalink

    118 Craig “they only understood German”

    Miene Deutsch sprechen ist nicht so gut. Die hund? Eh. Nicht, nein, frank und stien. :)

    119 kim “a distaff Dano”

    For some reason, that made me wonder if Dano itself has a taff to dis.

    Best, Sam.

    121 Pat “a very strong absorber at 15µ”

    And a very strong emitter at 8-14 µm, seemingly. But the effects of water vapor and wind are very strong in the very low, very compressed air near the surface, where the various bits of 0.3-2 µm insolation are at work along with its emissions, too…

  137. jae
    Posted Jun 11, 2008 at 4:23 PM | Permalink

    114, DeWitt: This isn’t from that reference, but it probably explains it:

    Spencer Weart, in his Discovery of Global Warming (2008), presents the same concept:

    “Consider a layer of the atmosphere so high and thin that heat radiation from lower down would slip through. Add more gas, and the layer would absorb some of the rays. Therefore the place from which heat energy finally left the Earth would shift to a higher layer. That would be a colder layer, unable to radiate heat so efficiently. The imbalance would cause all the lower levels to get warmer, until the high levels became hot enough to radiate as much energy back out as the planet received.”

    From here.
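
    A back-of-envelope version (Python) of the heuristic in the Weart quote, using standard round numbers; the 150 m rise in the emission level at the end is an arbitrary illustrative figure, not a derived quantity:

    sigma = 5.67e-8            # Stefan-Boltzmann constant, W/m^2/K^4
    S, albedo = 1361.0, 0.30   # solar constant and mean planetary albedo
    lapse_rate = 6.5           # K per km
    T_surface = 288.0          # round-number mean surface temperature, K

    T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25      # ~255 K effective emission temperature
    z_emit = (T_surface - T_eff) / lapse_rate             # ~5 km mean emission altitude
    print(f"effective emission temperature: {T_eff:.1f} K")
    print(f"implied mean emission altitude: {z_emit:.1f} km")

    dz = 0.15                                             # suppose added GHG raises that level by 150 m
    print(f"raising the emission level by {dz*1000:.0f} m, at fixed T_eff and lapse rate, "
          f"implies ~{lapse_rate*dz:.1f} K of surface warming")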

  138. Pat Keating
    Posted Jun 11, 2008 at 6:36 PM | Permalink

    123 Sam

    I agree that atmospheric radiation is not an important issue at lower levels. I was speaking only radiatively, but if you throw in other processes, at lower levels we have natural convection and latent-heat effects, of course – two of the latter (one when vapor condenses into the liquid state in clouds, another higher up when droplets or vapor freeze to the solid state). And, as you say, wind or advection.

  139. EJ
    Posted Jun 11, 2008 at 7:12 PM | Permalink

    Sorry Phil. I wasn’t the one who said I would cite the text the next day. You were.

    Perhaps you have done some ‘back of the envelope calcs’ for what you claim.

    Let’s look at those! All my work starts with ‘back of the envelope calcs’.

  140. Joel Shore
    Posted Jun 11, 2008 at 7:58 PM | Permalink

    Re #93: Anna, you say,

    We are not discussing physics in general, in my opinion. That similar temperature raises should show similar catastrophic feedbacks is one of the arguments against any CO2 induced runaway warming, using the icecore measurements which show no such effect.

    I lost you here: who is talking about “runaway warming”, unless you are using the term very loosely to mean that the net feedbacks are positive? As for ice cores, the estimates I’ve seen from the ice age – interglacial cycles give equilibrium climate sensitivities in the range of what the IPCC predicts (unless you believe Jim Hansen, who has recently argued that, since these calculations treat ice sheet changes as a forcing rather than a feedback, and he believes they can respond fast enough to act as a feedback that matters to us in the current case, the relevant climate sensitivity is about double the central IPCC value… although others are skeptical of this for various reasons).

    We are discussing whether the specific models used by the IPCC and the specific parameters of these models that were tuned to fit “data” and then used to project catastrophic temperature increases in a hundred years are correct.

    To say that the models are tuned to fit data is a bit misleading. Yes, there are parameters but they come from the physical processes involved and are not designed for maximum tuning potential to fit data. Furthermore, the amount of atmospheric data available is absolutely monstrous making this an extremely underfit problem.

    The mentality that “if the data do not fit the model, massage the data” kept humanity in the middle ages for many long centuries.

    I disagree. In fact, I would say that if you only accept theories that have absolutely no conflicts with data, then I can guarantee you that is a recipe for going back to the dark ages, because you won’t have any scientific theories at all. At any given time, any theory is going to have data that support it and data that can’t be explained by the theory.

    The most important thing to do when you have data that disagree with a model is to try to look at both the data and the model and determine which one might be incorrect. In this case, I personally don’t envy anyone who is trying to come up with a way to change the models so that they agree with the data over the multidecadal timescales while not messing up the very good agreement that already exists on the shorter timescales…particularly when the physics seems to be dominated by processes occurring on still shorter timescales. But, I am certainly open to hearing hypotheses for effects not currently in the models that would accomplish this. So far, people haven’t exactly been beating down the doors with such hypotheses though!

    And, actually, what I think got us out of the middle ages was letting scientists do science free from political and religious control and interference. What helped even more was setting up organizations such as the National Academy of Sciences through which scientists could give policymakers and the public the best scientific advice possible. Unfortunately, some politicians like Sen. James Inhofe want to override that and instead have a free-for-all where each politician chooses his own “pet” scientists who support his views…much the same as each person in a legal case hires their own lawyers or expert witnesses. In my opinion, that more than anything else, is a way to send us back to the dark ages.

  141. Geoff Sherrington
    Posted Jun 11, 2008 at 8:25 PM | Permalink

    We interrupt Unthreaded topics with a short announcement that appeared from a distant email correspondent today. I have not read the article in full but it appears consistent with some UV photography work I have read. Throws a new light on dendro, one could say.

    Tree leaves stay in comfort zone
    Article from: Agence France-Presse
    From correspondents in Paris
    June 12, 2008 06:18am

    THE internal temperature of leaves, whether in the tropics or a cold-clime forest, tends toward a nearly constant 21.4 degrees Celsius, reports a study released today.

    It had long been assumed that actively photosynthesising leaves – using energy from sunlight to convert carbon dioxide and water into sugar – are nearly as cold or hot as the air around them. The new findings not only challenge long-held precepts in plant biology, but could upend climate models that use tree rings to infer or predict past and present temperature changes.

    For decades, scientists studying the impact of global warming have measured the oxygen isotope ratio in tree-rings to determine the air temperature and relative humidity of historical climates. Oxygen atoms within water molecules evaporate more or less quickly depending on the number of neutrons they carry, and the ratio between these differently weighted atoms in tree trunk rings has been used as a measure of year-to-year fluctuations in temperatures and rainfall.

    “The assumption in all of these studies was that tree leaf temperatures were equal to ambient temperatures,” lead researcher Brent Helliker said. “It turns out that they are not.” Mr Helliker and University of Pennsylvania colleague Suzanna Richter turned those assumptions upside down in examining 39 tree species, across 50 degrees of latitude ranging from subtropical Colombia to boreal Canada.

    They compared current observed records of humidity and temperature against the isotope ratios in the trees, and found that tree leaves were internally cooler than surrounding air temperatures in warm climes, and warmer in cool climes. Even more startling was that in all cases the average temperature – over the course of a growing season – was about 21 C.

    “It is not surprising to think that a polar bear in northern Canada and a black bear in Florida have the same internal body temperature,” because both animals have internal thermostats to prevent overheating or freezing to death. “But to think that a Canadian black spruce and a Caribbean pine have the same average leaf temperature is quite astonishing,” he said.

    Tree leaves keep cool through constant evaporation and reducing sun exposure through leaf angles or reflective qualities. Warmth is gained by decreasing evaporation and increasing the number of leaves per branch.

    All these tricks should be seen as evolutionary adaptations that help the trees attain a maximum of nutrients through optimal photosynthesis, Mr Helliker said. The fact that part of this adaptation occurs at the level of entire forest canopies, and not just within individual leaves, is one reason direct measurements of tree temperatures have been so hard.

    The new findings, published in the British journal Nature, are bolstered by a recent study of a mixed-species forest in Switzerland based on infrared thermal imaging. Measured across an entire growing season, the forest canopy temperatures were found to be 4C to 5C higher than the cool, ambient air in the Swiss Alps.

    Normal service will now be resumed.

  142. Geoff Sherrington
    Posted Jun 11, 2008 at 8:26 PM | Permalink

    Correction “IR photography work”. Geoff.

  143. Raven
    Posted Jun 11, 2008 at 9:30 PM | Permalink

    Joel Shore says:

    And, actually, what I think got us out of the middle ages was letting scientists do science free from political and religious control and interference. What helped even more was setting up organizations such as the National Academy of Sciences through which scientists could give policymakers and the public the best scientific advice possible.

    Demonization of alternate viewpoints is offensive to people who care about science whether this demonization is done by political, religious *or* scientific bodies. Science has never been about consensus and scientific bodies forfeit their right to be called a scientific body when they take it upon themselves to push one side of a scientific argument in order to promote political objectives.

    It is unfortunate that too many AGW scientists feel they have to claim that they have more certainty than can be really justified by the evidence. I feel this will ultimately undermine the credibility of all scientists when the AGW catastrophe fails to appear.

  144. D Johnson
    Posted Jun 11, 2008 at 9:33 PM | Permalink

    Joel Shore said:

    And, actually, what I think got us out of the middle ages was letting scientists do science free from political and religious control and interference. What helped even more was setting up organizations such as the National Academy of Sciences through which scientists could give policymakers and the public the best scientific advice possible. Unfortunately, some politicians like Sen. James Inhofe want to override that and instead have a free-for-all where each politician chooses his own “pet” scientists who support his views…much the same as each person in a legal case hires their own lawyers or expert witnesses. In my opinion, that more than anything else, is a way to send us back to the dark ages.

    This has to be the most distorted view of scientific progress that I’ve encountered in years. The very idea that organizations such as the National Academy of Sciences, and the advice they gave politicians, brought us out of the dark ages seems absurd. The science and associated technologies, applied by insightful individuals and organizations, deserve much of the credit. It didn’t depend on “Academies”, which historically were often politically dominated and seldom in the vanguard. With regard to Inhofe’s witnesses, why don’t you attack their facts, rather than their right to be heard? That is, if you are in favor of freedom from political or religious control or influence, as you contend.

  145. anna v
    Posted Jun 11, 2008 at 11:04 PM | Permalink

    Joel Shore 110

    In fact, I would say that if you only accept theories that have absolutely no conflicts with data then I can guarantee you that is a recipe for going back to the dark ages, because you won’t have any scientific theories at all. At any given time, any theory is going to have data that support it and data that can’t be explained by the theory.

    The following is a well accepted statement in the scientific community: A theory can never be proven right. It can only be consistent with existing data. A theory is proven wrong even if one of its predictions is falsified.

    There are no “accepted” theories which conflict with any data. They are scrapped. You are also fuzzy on “explained”. The general theory of relativity cannot explain DNA. But it makes no claim to be predicting it.

    To say that the models are tuned to fit data is a bit misleading. Yes, there are parameters but they come from the physical processes involved and are not designed for maximum tuning potential to fit data.

    You are wrong there. The leeway on the parameters is enormous, and so is the effect they have on the outputs. They can predict anything from ice ages to boiling within their limits and experimental measurement errors. They are tuned to data.

    Furthermore, the amount of atmospheric data available is absolutely monstrous making this an extremely underfit problem.

    Good, this means the models can be falsified, and this is what has been happening. It remains for the modelers and their followers to wake up and smell the roses.

    The most important thing to do when you have data that disagree with a model is to try to look at both the data and the model and determine which one might be incorrect.

    This statement is so absurd that it shows again you have little scientific training. I have been working with computer models and data for forty years, in the field of particle physics.

    Data is judged on its own quality, absolutely independently of any models. There are stringent conditions, both on measurements and statistical analysis, that guarantee this. Models and data are not on an equal basis like a chess game. There may be doubts about data; then the experiment has to be repeated, or the analysis that gave the data has to be repeated (as happened with the hockey stick), or new measurement methods need to be found. If the doubts lead to massaging the data so that they fit the theory, we are back to the middle ages.

    But, I am certainly open to hearing hypotheses for effects not currently in the models that would accomplish this. So far, people haven’t exactly been beating down the doors with such hypotheses though

    Look in NATURE Vol 453 | 1 May 2008 | doi:10.1038/nature06921, at the article by N. S. Keenlyside, M. Latif, J. Jungclaus, L. Kornblueh & E. Roeckner.

    Advancing decadal-scale climate prediction in the North Atlantic sector

    I do not want to put up a link because it delays the posting, but it is easy to search for it. They predict a stall in temperatures for the next ten years or so, by the same IPCC models with extra parameters.
    The reason not many modelers are doing the same is that it means that the IPCC projections have to be drastically revised downwards. This will take the wind out of the sails of AGW.

    I am not entering an ice core discussion as it is off the thread subject.

    I agree with 113 on the middle ages comment.

  146. John A
    Posted Jun 12, 2008 at 1:48 AM | Permalink

    David Hathaway, solar physicist at NASA, has provided his own bold predictions about Solar Cycle 24 and greenhouse gases.

    Even if there were another Maunder minimum, he says, we would still suffer the effects of greenhouse gases and the Earth’s climate would remain warm. “It [solar variation] doesn’t overpower them at all,” Hathaway said.

    More at http://solarscience.auditblogs.com/2008/06/12/hathaway-suns-contribution-is-small-compared-to-volcanoes-el-nino-and-greenhouse-gases/

  147. kim
    Posted Jun 12, 2008 at 2:12 AM | Permalink

    “Probably 10-30 percent”. Doesn’t this guy have an obligation?
    =======================================

  148. Steve McIntyre
    Posted Jun 12, 2008 at 6:12 AM | Permalink

    #114. De Witt, see http://www.climateaudit.org/?p=2572 where Houghton’s “higher the colder” explanation is excerpted in “full”.

  149. Craig Loehle
    Posted Jun 12, 2008 at 7:41 AM | Permalink

    How do you test a theory? Anna V’s comments about particle physics experiments are a little too strong because that is the best possible case: the theory is highly mathematical and rigorous and the data are profuse and experimental. In most fields data can be ambiguous. There may be lots of noise, uncontrolled and unmeasured influences, levels of spatial complexity, the inability to conduct experiments, etc. The GCMs are not a theory per se, because many things (e.g. clouds, storms, convection) are either estimated or are below the scale of resolution of the calculations. Thus the failure of models to match data is not a straightforward question. It may be that the data are at the wrong scale, or do not affect larger scale calculations, etc. BUT: effort should be made to clarify exactly what are the model predictions that are critical. AND: efforts to fix the data to match the models should not be made–instead it should be proven that any bias in the data is real.

  150. yorick
    Posted Jun 12, 2008 at 9:06 AM | Permalink

    Whodathunk that trees are warm blooded, err I mean, warm sapped?

  151. steven mosher
    Posted Jun 12, 2008 at 9:22 AM | Permalink

    re 136. You make several good points. GCMs are not theories, they are simulations of physical theories. I’ll give an example. I use Newtonian physics to simulate the speed a car can go around the track. My simulation says 62 seconds. I put a car on the track. We measure 57 seconds. Is F=MA falsified? No, the simulators go back to the drawing board and figure out that they have to model aerodynamics better. You do that, then the simulation says 60 seconds, but the data says 57.
    Then you go back to the physics again: oh, we assumed tire temp was constant, but it actually changes. Do some experiments, see how it changes, model it. Now your simulation says 57.5 seconds.

    Have the basic physics been falsified here? has the simulation been falsified?

    GCMs are simulations, not theories. They are not falsified, they are continuously improved.

    hmm.
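
    A toy version of that lap-time story, in Python (every number is invented for illustration, and the drag and tyre terms are just stand-ins for “missing physics”): each pass refines the simulation built on top of F=ma, never F=ma itself.

    MASS = 750.0            # kg (invented)
    F_ENGINE = 7000.0       # N (invented)
    LAP = 1500.0            # m (invented)
    MEASURED = 24.5         # s, pretend stopwatch value

    def simulate(drag_coeff=0.0, warm_tyres=False):
        # crude forward-Euler integration of m*dv/dt = F - drag*v^2, starting from rest
        force = F_ENGINE * (1.06 if warm_tyres else 1.0)  # warm tyres: a bit more usable force
        v, s, t, dt = 0.0, 0.0, 0.0, 0.01
        while s < LAP:
            a = (force - drag_coeff * v * v) / MASS
            v += a * dt
            s += v * dt
            t += dt
        return t

    print("v1  F=ma only          :", round(simulate(), 1), "s")
    print("v2  + aerodynamic drag :", round(simulate(drag_coeff=1.2), 1), "s")
    print("v3  + tyre temperature :", round(simulate(drag_coeff=1.2, warm_tyres=True), 1), "s")
    print("measured               :", MEASURED, "s")

    Each version lands closer to the stopwatch, and nothing in the loop ever touches the underlying law.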

  152. Posted Jun 12, 2008 at 9:26 AM | Permalink

    # 136

    Craig,

    A theory is a changeable accepted hypothesis: it has been tested and found true in at least most of its statements. Biologists often have to deal with phenomena that cannot be tested in vitro; for example, the large diversity of living beings on Earth, where the mainstream hypothesis is that all living beings on Earth derive from a single primitive cell (protobiont). That is why the hypothetico-deductive method is the strongest reasoning mechanism for explaining those phenomena. Currently we observe small or big changes in species, so we infer that today’s diversity of species corresponds to those cumulative small or big changes in individuals. Many times we have to resort to other sciences to explain biological phenomena, but we find that some “theories” of those sciences are constructed on a purely hypothetical basis.

    The effort in modern climatology (not attributable to all climatologists) to clarify some phenomena has been elusive, and this is evident to all other scientists. For example, from some models they forecast “N” cyclones for this year; when the cyclones fail to appear anywhere, they change “for the short term” the classification of hurricanes, reporting thunderstorms as if these were hurricanes. Thus, they artificially fulfill the quota. Sometimes, some scientists tend to change the laws of physics so that their ideas become “theories”.

  153. Reference
    Posted Jun 12, 2008 at 9:30 AM | Permalink

    Craig Loehle #136

    Given your litany of weaknesses in GCM models, how would you characterize the usefulness of current GCMs as inputs for policy makers?

  154. Posted Jun 12, 2008 at 9:36 AM | Permalink

    # 138

    Steven Mosher,

    You’re correct, models are not theories, but efforts to simulate reality for making forecasts. The problem is when some people get an idea, dismiss observations of the phenomena related to that idea, and bring the idea into a model, trying to transform their idea into reality. ;)

  155. Stan Palmer
    Posted Jun 12, 2008 at 9:39 AM | Permalink

    GCMs are simulations, not theories. they are not falsified, they are continuously improved

    Falsification is just this. It is a means whereby theories can be continuously improved. That is how Popper saw it anyway. A theory is tested and found to be wanting. Because of this the theory is modified and the new modified theory is tested again.

    The issue with falsificationism and GCMs is the addition of ad hoc hypotheses. Ad hoc additions can allow any theory to be made consistent with observations. Sulphates seem to be used in this way in GCMs: if the observations don’t match the GCM predictions, then add magic sulphate effects or new types of bucket performance until they do. Lakatos made a distinction between degenerate and progressive research programmes. The danger for GCMs is not falsification but degeneracy.

  156. Stan Palmer
    Posted Jun 12, 2008 at 9:42 AM | Permalink

    models are not theories, but efforts to simulate reality for making forecasts.

    A theory is something that is used to make predictions or forecasts. If it cannot make forecasts then it is not a theory.

  157. Posted Jun 12, 2008 at 9:43 AM | Permalink

    # 142

    Stan Palmer,

    I agree… However, I think you should have written “falsification and verification”.

  158. Richard Sharpe
    Posted Jun 12, 2008 at 9:47 AM | Permalink

    And, actually, what I think got us out of the middle ages was letting scientists do science free from political and religious control and interference. What helped even more was setting up organizations such as the National Academy of Sciences through which scientists could give policymakers and the public the best scientific advice possible.

    It would seem that Joel was asleep during history classes. Who knew that the National Academy of Sciences was set up in the middle ages.

  159. Fred Nieuwenhuis
    Posted Jun 12, 2008 at 9:49 AM | Permalink

    Re. Data
    I believe data is the lynchpin on which AGW or anti-AGW arguments stand or fall. However, there are problems with the data: surface station, SST, radiosonde, satellite, tree rings, ice cores. There are issues with them all, as discussed in this blog and others. So many issues that drawing conclusions one way or another is folly, as you run the risk of interpreting the compromised data incorrectly, as some have been shown to have done. What governments need to do is invest HEAVILY in climate monitoring technology. In that way, a clearer picture can emerge of what this planet is doing.

  160. Posted Jun 12, 2008 at 9:56 AM | Permalink

    # 143

    Stan Palmer,

    Of course, theories are used for making predictions; hence the difference between a hypothesis and a theory. Hypotheses do work as guidelines for any scientific investigation; however, a single idea should not be considered a hypothesis if it doesn’t match verified theories or if it is not based on the observation of real phenomena. We have often experienced problems of this kind in ecological issues. We’ve seen this contradiction in AGW, to give a short example.

  161. Posted Jun 12, 2008 at 10:04 AM | Permalink

    # 147

    Me,

    My argument needs a good example. Some biologists and climatologists believe faithfully that their models capture reality, and that if we, the old scientists, do not see their models agreeing with reality, it is because reality is slow to declare itself, or because somebody is boycotting or denying their models, or because they were ahead of their time.

  162. Sam Urbinto
    Posted Jun 12, 2008 at 10:17 AM | Permalink

    Advection and convection are a wonderful thing. :) Long live the sum of advective and diffusive transfer!

    “some politicians like Sen. James Inhofe”

    Is that a generic comment, or a show of political outlook? In any case, certainly politicians are going to advocate for those that match their views or their agenda, but it doesn’t mean those that are being advocated for are right or wrong in and of themselves. They are right or wrong regardless of who advocates for them. Or not.

    “To say that the models are tuned to fit data is a bit misleading.”

    Yes, it is misleading. The models are tuned to look like the past data they are trying to mimic. The problem is that past results are no promise of future results even if they look the same. As I’ve said before, the tricky part is getting the sign and the magnitude correct.

    And using the car timing analogy: what happens when the models match the race car’s time, but then the driver decides to go faster or slower next time? Or a tire blows, there’s a wreck, and they never finish.

  163. DeWitt Payne
    Posted Jun 12, 2008 at 10:51 AM | Permalink

    Steve McIntyre,

    Thanks. I had forgotten about that thread and particularly your original post. Houghton’s explanation is so poorly written and oversimplified that it is essentially useless. It won’t help anyone who doesn’t already have a fairly good understanding of atmospheric physics and spectroscopy, and for them it’s little better than a mnemonic. The problem with the IPCC reports is that they appear to be written like patents rather than scientific papers or review articles; that is, the reader is presumed to be skilled in the art, so detailed explanations and caveats are unnecessary.

  164. Philip_B
    Posted Jun 12, 2008 at 11:22 AM | Permalink

    I think models are theories, specifically GCMs are theories of climate. The problem as I see it is that they are composite theories and we have little data on their accuracy at the level of the subcomponents (i.e. the individual theories).

    I tried without success to find the quote from the IPCC that says, and I paraphrase: while the models are not good at predicting other climate variables and at the regional level, we still have confidence in the temperature predictions/projections at the global level.

    When I first read that quote, my reaction was that the composite prediction can only be more accurate than its components by chance (or tuning).

    Sam Urbinto, you owe me a monitor cleaning.

  165. Steve McIntyre
    Posted Jun 12, 2008 at 11:35 AM | Permalink

    #150. De Witt, a short summary would be OK if it was actually summarizing a more detailed exposition somewhere else. But it had no citations or references and I’ve never been able to locate an article in which these calculations are spelled out. Even if you know all the atmospheric physics in the world, Houghton’s argument here doesn’t rise above arm-waving.

    As I caution readers, this doesn’t mean that an exposition is impossible, but I’ve been unable to find one that is remotely satisfactory, and people like Phil always say it’s in a textbook somewhere but can never provide a reference.

  166. steven mosher
    Posted Jun 12, 2008 at 11:55 AM | Permalink

    re 142. yes, the issue of epicycles.

    when data conflicts with theory, WHICH IT ALWAYS DOES, one is left with these choices.

    1. Adjust the theory.
    2. Reject the theory.
    3. Adjust the data.
    4. Reject the data.
    5. Cry noise and run away.
    6. Cry noise and claim victory.
    7. Create a new theory.

    The history of science has examples of each of these responses. No one to my knowledge has tested which response has the most skill. That is, there is no scientific study of scientific decisions. To put it another way, the philosophy of science is unscientific.

  167. Posted Jun 12, 2008 at 12:06 PM | Permalink

    I wouldn’t worry about the existence of \frac{1}{4 \pi \tau}\int _{4 \pi} \int T ; the temperature increase in a sample, \sum \sum T (which does exist), can cause problems anyway, such as glacier melting.

    There isn’t a word about instrumental accuracy in Brohan 2006.

    Only that 0.03 C, which seems to be incorrect. All of you interested in GMT error analysis, check the “Homogenisation adjustment error” section; that’s the real challenge to understand.
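
    A minimal Monte Carlo sketch in Python (all numbers are made-up assumptions, not Brohan’s) of why simple averaging can only beat down the uncorrelated part of measurement error: the spread of the monthly mean shrinks roughly as 1/sqrt(n), while any shared bias passes straight through to the average.

    import random
    import statistics

    random.seed(0)
    n_readings = 60       # e.g. two readings a day for a month (assumption)
    sigma_noise = 0.2     # per-reading random error, deg C (assumption)
    bias = 0.1            # shared instrumental bias, deg C (assumption)
    true_mean = 15.0

    errors = []
    for _ in range(5000):
        readings = [true_mean + bias + random.gauss(0.0, sigma_noise)
                    for _ in range(n_readings)]
        errors.append(statistics.fmean(readings) - true_mean)

    print("mean error of the monthly average  :", round(statistics.fmean(errors), 3))   # ~ the bias, 0.1
    print("spread of the monthly average (sd) :", round(statistics.stdev(errors), 3))   # ~ 0.2/sqrt(60) ~ 0.026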

  168. Craig Loehle
    Posted Jun 12, 2008 at 12:23 PM | Permalink

    In this context of testing theory I was asked what I think of the GCMs. In an engineering context, one can ask of a model “how skillful is it”? If it appears skillful for the circumstances of interest, then you use it. That is, I don’t think it makes any sense to think you can “falsify” a GCM with any single datum like you might a theory of the electroweak force or something. In every example I have seen where one can assess skill (and in the most important case–the 100 year projections–you can’t assess it), many of them discussed here, e.g. Douglass et al or Spencer or Hansen’s 1980 forecasts at regional levels or simulations of glacial climate, the results range from ok to ??? to arguably completely wrong. Even when an ensemble mean is ok the spread of the models is not skillful. The response to asking this question is “but over 100 years and over the globe the GMT is correct”–but how do we evaluate that claim?

  169. Sam Urbinto
    Posted Jun 12, 2008 at 1:31 PM | Permalink

    Craig; why bother trying to evaluate a claim about a global mean temperature anomaly trend over 100 years? Look at the satellite data, which fits quite well into the smack dab center of the CMIP ensemble output.

    Which is 14 +/- 2.5; I’m operating under the assumption that that is the base period. So the fact that it used to be about .3 lower and is now .3 higher seems meaningless at best.

  170. anna v
    Posted Jun 12, 2008 at 1:56 PM | Permalink

    I think that the reason we are discussing the GCMs as theories hinges on the fact that they are used for predicting the climate 100 years in the future. And in addition the world community is pressured to stand on one leg and breathe every other day in order that the end of the world is avoided.

    If one thinks of them as tools useful for predicting next week’s weather, that is one thing, and it is fine. It is not a theory, it is a useful tool. It is then legitimate to keep incorporating new data into the fits of the parameters, so as to have better forecasting ability. But that ability to see into the future is limited by the nonlinearity of the coupled differential equations that really control the atmosphere. All models assume simple relations to simulate the atmosphere, reducing complicated interconnections to linear first-degree equations. That is why their predictive power is limited to a few weeks. One can always be fairly accurate in extrapolating linearly for a few “steps” of such models, because most solutions expanded in a perturbation series will have a linear first-order term, or at most second order. That is why the harmonic oscillator is such a fair solution for a multitude of diverse problems.

    Using average quantities is like picking only the first-order term from a perturbation expansion of the complicated solutions.

    To assume that substituting average quantities for the complicated solutions of the interconnected nonlinear differential equations that we know are there in reality, and calling the result climate models that can forecast decades ahead, is a fallacy, and this fallacy is displayed in the disagreements with data not included in the fits of the models: from cloud formations to tropical tropospheric behavior to regional checks (Koutsoyiannis et al, another thread here), etc.

    In addition using clouds of model outputs to obfuscate disagreements with data is another fallacy. Each simulation has to stand or fall by itself. It is falsified by the data it does not fit, and should be discarded.
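
    To illustrate the earlier point about linear extrapolation, a toy sketch in Python (the chaotic logistic map is only a stand-in for “interconnected nonlinear differential equations”, not a claim about how any GCM is built): a first-order, tangent-linear estimate of error growth tracks the true error for a handful of steps and then fails.

    r = 3.9                              # map parameter in the chaotic regime

    def f(x):
        return r * x * (1.0 - x)         # the "full" nonlinear model

    def df(x):
        return r * (1.0 - 2.0 * x)       # its local derivative

    x, xp = 0.41, 0.41 + 1e-6            # true state and a slightly perturbed copy
    err_lin = 1e-6                       # first-order (linearized) estimate of that error

    for k in range(1, 26):
        err_lin *= df(x)                 # propagate the error linearly about the true state
        x, xp = f(x), f(xp)              # propagate both states with the full nonlinear map
        if k % 5 == 0:
            print(k, "true error:", abs(xp - x), " linear estimate:", abs(err_lin))

    The two columns agree closely at first and part company once the error is no longer small.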

  171. Posted Jun 12, 2008 at 2:38 PM | Permalink

    # 156

    Anna,

    I adhere to your beautiful explanation, but I would add that most of the outputs of GCMs don’t fit the scientific concept of a hypothesis, because of the uncertainty in the microstates that drive the climatic variability.

  172. Posted Jun 12, 2008 at 3:11 PM | Permalink

    #154 Craig, saw this on arxiv: http://arxiv.org/abs/0806.0813
    “Decompositions of Proper Scores”, by Jochen Bröcker.
    “Even when an ensemble mean is ok the spread of the models is not skillful.”
    Seems like a proper analysis of predictions of models would be in order.

  173. Andrew
    Posted Jun 12, 2008 at 5:07 PM | Permalink

    88 (Me): Give up yet? Six months and .18. :)

  174. cba
    Posted Jun 12, 2008 at 5:52 PM | Permalink

    One thing that has been missing here in the attempt at understanding what happens when one adds more ghgs is that the ghg alters the emissivity. It increases it just as it increases the absorptivity. Rather than just pushing out the altitude from which the radiation escapes, by presuming more ghgs will block radiation from the previous escape altitudes, it’s also going to increase the emissivity at a given temperature compared to less ghgs being present.

    Another little tidbit is that the T gets hotter higher up, above the tropopause. Some of it is well over the surface T. Anything getting up there in the way of ghgs will cause significant increases in output, since it’s linear in emissivity and T^4 in temperature.
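
    A quick grey-body sketch of that arithmetic in Python (the emissivities and temperatures are arbitrary assumptions, and this is only the emissivity*sigma*T^4 bookkeeping, not a radiative transfer calculation): radiated flux is linear in emissivity and quartic in temperature, so the same emissivity increase buys more outgoing power where the emitting layer is warm.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def flux(eps, T):
        # grey-body flux: linear in emissivity, quartic in temperature
        return eps * SIGMA * T**4

    for T in (220.0, 270.0, 300.0):                       # arbitrary layer temperatures
        base, bumped = flux(0.10, T), flux(0.11, T)       # a 10% emissivity increase
        print(f"T = {T:5.1f} K : {base:6.2f} -> {bumped:6.2f} W/m^2  (+{bumped - base:.2f})")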

  175. Pat Frank
    Posted Jun 12, 2008 at 7:24 PM | Permalink

    #158 — John, having read Ross’ and Chris’ “Taken by Storm” I agree with you and Sam (#160) that global average surface temperature is a statistic and not a thermodynamic quantity.

    I agree with you about the problem of using thousands of rather low resolution measurements as inputs to climate models, so as to simulate a climate that has trillions of infinitely precise discrete local climate states. One might wonder about the impact on predictability of a chequerboard spread of thousands of small initial value errors put into a non-linear climate model that is always skirting along the edge between a stable state and chaotic state-jumps.

    Given that, it’s amazing that the models predict smooth climate progressions at all. Maybe that smoothly evolving result is an unappreciated outcome of the hyperviscous sub-grid molasses atmosphere that exists in these models.

    Statistical thermodynamics does work, and merges gracefully into bulk thermodynamics. The question is whether, or really when, statistical averaging of unresolved and undocumented microclimate states has a visible impact on a climate prediction.

    UC, #161, I’ll have to leave parsing out their statistics to you. :-) I’m not an adept. But the physical hypothesis implied by the statistical model seems pretty evident and very wrong to me, and I can pay attention to that.

    The Brohan paper seems to avoid science altogether. It’s all about statistics. And yet, at the end of it all, the resulting time-series plots of average temperature are given physical meaning by all and sundry. Including climate scientists. This abuse has to stop.

  176. Raven
    Posted Jun 12, 2008 at 7:40 PM | Permalink

    159 (Andrew) – You might get more feedback on your graph at Lucia’s. I would post it on one of the lumpy threads.

  177. John F. Pittman
    Posted Jun 12, 2008 at 8:08 PM | Permalink

    Pat thank you for your comments. You said

    Given that, it’s amazing that the models predict smooth climate progressions at all. Maybe that smoothly evolving result is an unappreciated outcome of the hyperviscous sub-grid molasses atmosphere that exists in these models.

    Statistical thermodynamics does work, and merges gracefully into bulk thermodynamics. The question is whether, or really when, statistical averaging of unresolved and undocumented microclimate states has a visible impact on a climate prediction.

    I find these two paragraphs contradictory. I have spent much time designing mass and heat transfer systems and I can tell you that no way could I design a system with a “hyperviscous sub-grid molasses atmosphere” or liquid that did not actually exist, and expect it to work.

    I think your statement

    The question is whether, or really when, statistical averaging of unresolved and undocumented microclimate states has a visible impact on a climate prediction.

    is not a question. The answer is that “statistical averaging of unresolved and undocumented states” cannot be reasonably approximated.

    Everybody knows about putting sugar in the drink of your choice (I like hot tea). How fast and how long you stir, and how much sugar you add, affects how much sugar is at the bottom and how sweet the liquid is. This is the problem with that “hyperviscous sub-grid molasses atmosphere”. It is unreal. To dampen turbulent effects is to ask a tea drinker (or drinker of your stirred beverage of choice): if I do not tell you how much sugar I added, or how much and how fast I stirred, and I only let you consider your past experience, how sweet will it be? It cannot be determined; it is only a guess with one equation or two, and four or six unknowns. The unknowns grow the more you try to define it.

    One might wonder about the impact on predictability of a chequerboard spread of thousands of small inital value errors put into a non-linear climate model that is always skirting along the edge between a stable state and chaotic state-jumps

    I think that this is the next step. The models have not been validated enough to even consider such predictability. The models do not actually achieve this skirting without defining a grid that cannot go negative (otherwise they do), and without including a “hyperviscous sub-grid molasses atmosphere” contrary to reality.

    Perhaps Hans could comment.

  178. DeWitt Payne
    Posted Jun 12, 2008 at 8:58 PM | Permalink

    cba,

    Rather than just push out the altitude from where the radiation will escape from by presuming more ghgs will prevent radiation from previous altitudes where emissions left the earth, it’s going to increase the emissivity at a given temperature compared to less ghgs being present.

    Yes, but for CO2 over most of the 15 micrometer band as viewed from space you’re increasing the emissivity from say 0.995 to 0.996. In the stratosphere, however, emissivity is not saturated and increasing CO2 will cause cooling. In simplistic terms, the absorptivity for long wave radiation in the troposphere is much greater than the absorptivity for short wave radiation so increasing CO2 causes warming. The situation is reversed in the stratosphere where oxygen and ozone absorb in the UV while there is little absorption of long wave IR, so adding CO2 causes cooling because emission increases but absorption doesn’t change.
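
    A deliberately crude single-layer sketch of that cooling argument, in Python (the absorbed-shortwave figure and the emissivities are invented, absorbed upwelling IR is ignored, and a real answer needs a line-by-line calculation): if the layer’s shortwave heating is fixed by O2/O3 while its infrared emissivity rises with added CO2, the temperature at which emission balances absorption falls.

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    Q_SW = 12.0        # shortwave absorbed by the layer, W/m^2 (invented)

    def equilibrium_T(eps):
        # balance Q_SW = 2 * eps * SIGMA * T^4 (emission from both faces of the layer)
        return (Q_SW / (2.0 * eps * SIGMA)) ** 0.25

    for eps in (0.05, 0.06, 0.07):   # rising IR emissivity as CO2 is added (invented values)
        print(f"eps = {eps:.2f}  ->  T_eq = {equilibrium_T(eps):5.1f} K")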

  179. cba
    Posted Jun 12, 2008 at 10:39 PM | Permalink

    162 (DeWitt):

    Whoa! 0.99? That’s a high surface emissivity. A shell of atmosphere thin enough to be roughly uniform in T isn’t going to have that kind of emissivity, with the possible exception of right at the peaks of some strong CO2 absorption lines.

    I think the consequences are more along the lines that the actual absorption or emissivity on an average basis – all those wings and lower-strength peaks etc. – is going to be well under 0.9 and so subject to variations rather than saturation. After all, the argument that CO2 is saturated and can’t contribute any more is pretty much nullified, and your comment appears to require the assumption that it’s saturated lower down.

    By the same token, increasing the absorption by any amount will increase the emissivity by a similar amount at the same T. An example is the average emissivity in Stefan’s law.

    There is still a fairly uniform CO2 concentration in the stratosphere according to the 1976 std atm, even though there’s lower pressure. However, up there the T is shooting through the roof. T in excess of 300 K is going on, and that isn’t being caused by some radiation from lower in the atmosphere. An increase in ghg concentrations here will cause the emissivity to increase as well, radiating more power at the same T – or more likely, forcing the T to drop to balance what can be radiated.

  180. MarkR
    Posted Jun 13, 2008 at 1:19 AM | Permalink

    From Bob Tisdale on Rank Exploits:

    Last month, the GISS elevated anomaly was aided by the Antarctic. Check the zonal means plots, especially the Antarctic using the two smoothing radii.
    April 2008 with 1200 km smoothing radius:
    http://data.giss.nasa.gov/cgi-…..mp;pol=reg
    April 2008 with 250 km smoothing radius:
    http://data.giss.nasa.gov/cgi-…..mp;pol=reg
    Look at the big red line across the bottom of the Mercator projection. But why is the Antarctic so warm? One station (or multiple stations?) at the South Pole was (were) “warm”. It stands out in the Polar Projection of April 2008 with 250 km smoothing radius:
    http://data.giss.nasa.gov/cgi-…..mp;pol=pol
    With the magic of 1200 km smoothing, an anomalous month at one or two stations can skew the entire global data set.

    Now take a good look at May 2008 with 1200 km smoothing radius:

    http://data.giss.nasa.gov/cgi-…..mp;pol=reg
    The Antarctic anomaly is back in line with the rest of the world.
    But, did you notice the big differences in coverage between April and May? Forget about Africa and South America.
    Try May 2008 with 250 km smoothing radius:

    http://data.giss.nasa.gov/cgi-…..mp;pol=reg
    The data from Northern Canada and from Southern and Western Greenland are gone in May. They were used in April. Where’d they go? And why? (MarkR says F….)

    The last noteworthy items are the differences in global temperature anomaly noted in the upper right-hand corner of the maps, depending on smoothing radius. With the 250km radius smoothing, GISS and HADCRUT3GL are twins from January 1978 to March 2008.

  181. anna v
    Posted Jun 13, 2008 at 4:34 AM | Permalink

    Nasif 157

    I would add that most of the outputs of GCMs didn’t fit with the scientific concept of hypothesis because of the uncertainty given on the microstates that drive the climatic variability.

    I do not understand what you mean by uncertainty. Programs use real numbers as inputs and give real numbers as outputs.

    That is a hypothesis for me, example: given this set of input numbers I expect the following distributions for cloud cover versus latitude/longitude.

    Do you mean the hand waving of chaos and uncertainties without using the mathematical tools of chaos, in order to justify unjustifiable concepts like “model averages” and “errors from the model spread”?

  182. John A
    Posted Jun 13, 2008 at 5:29 AM | Permalink

    Pat Frank:

    Thanks for your feedback. It’s just annoying that we’re all in thrall to the black boxes called climate models without any explanation about how they’re meant to work outside their domain of measurements when they’re not exactly hyperaccurate datasets to begin with.

    As you pointed out in your critique of climate models in Skeptic magazine (and I’m curious to know what feedback you got from that article), with even the most optimistic assumptions for accuracy, the models’ error ranges get rather large very quickly. The non-linear nature and hyperfine startup conditions for even simple models are readily apparent when you study them.

    I admit that I did not start off with a great appreciation for model outputs (mainly because they appeared to predict phenomena only after they’ve happened in the real world) but studying mathematical modelling at a fundamental level, I find it completely unlikely that there’s anything there at all – other than the will to believe a beguilingly beautiful graph by the climate modeller.

  183. Posted Jun 13, 2008 at 6:32 AM | Permalink

    Happy to see the conversation here, since I ask myself some questions about this and maybe some of you could help me to have an answer.

    Assume that the exact earth’s global variation of temperature is the integral I of a function T (T(x) being the variation of temperature at point x from a reference year) over the surface of the earth, a surface considered as a sphere S. The method in use for approximating this integral consists in considering several points x(1),…, x(n) on S, where the value of T is measured (the points x(1), …, x(n) simply correspond to meteo stations), and then averaging the observed values T(x(1)),…, T(x(n)), to get a value A.
    A crucial point is, then, the evaluation of the difference between I and A. As far as I know, there is only one way to estimate such a difference. It consists in the computation of the product of two values, D and N :
    - D is called the « discrepancy » of x(1),…, x(n) ; it quantifies the quality of the repartition of the points over S ;
    - N concerns the theoretical properties of T ; it is like a « norm ».
    The convenient definition of N depends on what is known on T, and this has also an influence on the choice for the definition of the discrepancy.
    Assume that the integration domain is not a sphere but a cartesian square. Then, a classical result from Koksma and Koksma-Hlawka asserts that the difference between I and A is upper-bounded by DN, where D is the so-called « *-discrepancy » and N the total variation of T (« in the sense of Hardy and Krause »). It is an optimal result (that is : there exists a function T of bounded variations for which the difference between I and A is equal to DN minus epsilon).
    Let us now make a very quick evaluation of DN (a « calcul de coin de table », in french). Assume that T has bounded variations ; the total variation is easily lower-bounded by the difference between extremal variations (in time, not in space) of temperature on the earth – say 5°C for example (remember that this is a lower bound : in general, the total variation of a function is often much bigger than the difference between its maximum and its minimum).
    As for the evaluation of the discrepancy, we can make the following observation: the value of the *-discrepancy is, roughly speaking, lower-bounded by 1/n – this latter value corresponding to the case of points x(1),…, x(n) as uniformly distributed as possible. As far as I know, the number of meteo stations in use for this kind of estimation is about 2000, so D is lower-bounded by about 1/2000.
    As a consequence, the worst-case difference between I and A is lower-bounded by the product 5×1/2000 = 0.0025°C. Recall that this result involves a quite small value for the total variation of T (as far as I can say) and assumes that the meteo stations are almost perfectly distributed over the sphere, which is far from true. (Note that if the x(i)s are randomly chosen, D is about 1/sqrt(n), which already gives an error of about 0.1°C.)
    Of course, all of this should be made more precise (in particular, recall that I assumed here that we compute the integral on a cartesian square – surely, this makes me a « flat earther »!). Nevertheless, I think it suggests that it would be of interest to go deeper in that direction. My intuition is that the theoretical error has about the same size as the alleged annual variation of global temperature.
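
    A back-of-the-envelope version of the calculation above, in Python (the 5°C figure for the total variation and the 2000-station count are the assumptions stated in this comment, not measured quantities):

    import math

    n_stations = 2000
    total_variation = 5.0                       # assumed lower bound on N, in deg C

    d_best = 1.0 / n_stations                   # roughly the best achievable *-discrepancy
    d_random = 1.0 / math.sqrt(n_stations)      # typical discrepancy of randomly placed points

    print("worst-case error, near-uniform network:", total_variation * d_best)               # 0.0025
    print("worst-case error, random placement    :", round(total_variation * d_random, 3))   # ~0.112

    A real station network is of course neither perfectly uniform nor random, and both the sphere and the bounded-variation assumption would need proper treatment.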

    Does anyone have an opinion on this?

  184. iheartheidicullen
    Posted Jun 13, 2008 at 7:53 AM | Permalink

    reading the icecap story which shows historic GISS temps for Madagascar today, it echos what i have found while perusing the GISS stations data. If you go around the world and pick regions in either NH and SH and then click on “rural” stations that have continuous records from 1880 (which makes up about “missing” data is usually from 1920-1940. Go have a look (I realize you have done this in search of Waldo) but it is still quite striking that rural temps from around the world with any full history whatsoever (e.g. 1880 onwards) make up so little of the overall adjusted temps when one would think they are the most important! I don’t want to suggest any specific place to look (try to avoid confirmation bias), but i believe i stopped by greenland, australia, us, europe, siberia, india, alaska, chile, (let’s not forget MADAGASCAR!). make sure they’re rural!

  185. iheartheidicullen
    Posted Jun 13, 2008 at 7:56 AM | Permalink

    sorry, should have read:

    reading the icecap story which shows historic GISS temps for Madagascar today, it echos what i have found while perusing the GISS stations data. If you go around the world and pick regions in either NH and SH and then click on “rural” stations that have continuous records from 1880 (which makes up about less than 5% of contributing temperature series from the GISS global anomaly scheme, unfortuneately) there is a large likelihood of seeing warmer temps around 1920-1940 than present (ah, just like the inconsequential US temps!). In other rural 1880-present grids, “missing” data is usually from 1920-1940. Go have a look (I realize you have done this in search of Waldo) but it is still quite striking that rural temps from around the world with any full history whatsoever (e.g. 1880 onwards) make up so little of the overall adjusted temps when one would think they are the most important! I don’t want to suggest any specific place to look (try to avoid confirmation bias), but i believe i stopped by greenland, australia, us, europe, siberia, india, alaska, chile, (let’s not forget MADAGASCAR!). make sure they’re rural!

  186. Steve K
    Posted Jun 13, 2008 at 8:15 AM | Permalink

    Anyone? Re: the reference in #124 to adding gas at a higher atmospheric level… Wouldn’t this require a gravity adjustment, to have denser atmosphere at a higher level?

  187. Pat Keating
    Posted Jun 13, 2008 at 8:55 AM | Permalink

    182 John A
    It is somewhat akin to the field of magnetohydrodynamics (only worse), where an expert on the topic once remarked: “It takes a genius to get results from calculations, and an idiot to believe them.”

  188. Pat Keating
    Posted Jun 13, 2008 at 9:00 AM | Permalink

    Benoit

    a « calcul de coin de table », in french

    Interesting. In English, we say “back-of-the-envelope calculation”.

    As far as the content of your post goes, I will leave it to others more expert in statistics to answer.

  189. Posted Jun 13, 2008 at 1:04 PM | Permalink

    #183

    My intuition is that the theoretical error has about the same size as the allegued annual variation of global temperature.

    At this point it is good to remember the MBH98 1-sigma for the AD1400 step, 0.15 C with 22 responders. They should use tree rings at airports.

    I think that the official estimate for the first \int - \sum (spatial sampling error) is from Shen 98,

    http://www.math.ualberta.ca/~shen/Sam_Papers_pdf/shen_jclim_1998.pdf

    They concluded that about 60 well-distributed stations can yield a global average annual mean surface air temperature with an error less than 10% compared with the natural variability

    ‘they’ means Shen 94,

    http://www.math.ualberta.ca/~shen/Sam_Papers_pdf/shen_jclim_1994.pdf

  190. Posted Jun 13, 2008 at 2:21 PM | Permalink

    #189 : Thank you very much for the references.

    I know some things about mean-square error, which it will be of interest for me to compare with Shen et al. At first glance, it seems strange to me that they do not make explicit their assumptions about the functional space in which T is supposed to lie, but I have to read more carefully what is going on.

    Elementary theoretical tools would probably easily prove that a precision of 0.15°C with only 22 points is simply impossible, even with a perfect knowledge of the temperature at these 22 points.

  191. rex
    Posted Jun 13, 2008 at 2:26 PM | Permalink

    I suspect Steve is about to post a major one on Leaves LOL

  192. kim
    Posted Jun 13, 2008 at 2:38 PM | Permalink

    Surely ring width has been benchmarked as a proxy for temperature somewhere.
    =========================================

  193. Posted Jun 13, 2008 at 3:02 PM | Permalink

    Note also the changed tone from 94

    In spite of the limitations just alluded to we feel that the magnitude of the sampling error is probably small compared to the other errors in the budget once the number of well-distributed stations is above about 60 and the optimal weighting is applied.

    to 98

    They concluded that about 60 well-distributed stations can yield a global average annual mean surface air temperature with an error less than 10% compared with the natural variability

    to Brohan’s 0.04 C 1-sigma for combined effects of all the uncertainties in global annual average temperature.

  194. Pat Frank
    Posted Jun 13, 2008 at 5:04 PM | Permalink

    #177 — John, you’re right about the disjoint. When I mentioned that stat thermo merges gracefully into bulk thermo, I had mesoscale equilibrium thermodynamics in mind, where “mesoscale” is what we do in labs, not what engineers do in large scale flow systems.

    To my mind, you’re also exactly right to reject the viability of simulations that merely suppress unresolved turbulence. There probably isn’t an engineer alive who would accept such a kludge. And really, that’s the heart of the matter because GCMs are being used as engineering models rather than as research tools.

    Averaging of microstates does work sometimes for some states. The success of equilibrium stat thermo is an example of that, and so is every QM molecular calculation that averages over quantum states. In far from equilibrium dissipative systems, I’d guess that there is some averaging over internal microstates but one would need a good understanding (theoretical or phenomenological) of at what level this won’t lead to unacceptable errors.

    But as I understand Jerry Browning’s criticisms, which are similar to what Ross and Chris discuss in Chapter 3 of their book, the unresolved (suppressed) turbulence in climate has an upward cascade of energy that can materially influence the outcome of large scale climate processes. If that is true, then averaging or suppressing the microstates is a recipe for wrong outcomes.

    I like your sugar in tea analogy. But maybe my preference, sugar in espresso, is a bit more true to the observing of climate because unlike tea espresso is opaque.

  195. John F. Pittman
    Posted Jun 13, 2008 at 6:13 PM | Permalink

    #194

    But maybe my preference, sugar in espresso, is a bit more true to the observing of climate because unlike tea espresso is opaque.

    I agree; lol.

    You bring up a valid point that Steve McI and others have raised in one form or another. And that is the impact of approaching, or “turning over” as we phrase it in industry, the “ownership” of a model or a practical implementation, such as to engineers. Engineers are taught, and AFAIK believe, that engineers are practical scientists. Their speciality is not the intricacy that Jerry or Hans have demonstrated, but a more mundane approach: does it work, and how well does it work, in attitude and implementation.

    And really, that’s the heart of the matter because GCMs are being used as engineering models rather than as research tools.

    I agree; in fact, this means it is somewhat hard to converse with climate scientists, and others. An example of this in a conversation on another blog was the statement “has AGW fingerprints all over it”. The first assumptions I would make are that it is unique, it is measured, it is repeated, it has a standard. Of course the conversation degenerated quickly since none of these were explicitly true; they were just believed to be true. I believe that you have nailed it. The GCMs are presented with confidence and language that indicate they have reached engineering-model level and have been verified.

    However, I note that this misdirection has not necessarily been the fault of the scientists of the IPCC. http://www.assa.edu.au/publications/op.asp?id=75 is a good read about problems and proposals. I found it to be an honest, somewhat brutal (deservedly so) self-examination of several issues that indicate why we often converse the way that we do wrt climate change. Don’t know if it is worth the time to read, but I found it so. We have discussed the items in this paper on several threads, but I found it interesting to have them condensed in such a fashion in one place. Perhaps others will as well.

    Averaging of microstates does work sometimes for some states. The success of equilibrium stat thermo is an example of that, and so is every QM molecular calculation that averages over quantum states.

    I think that the systems of knowledge that meet this definition have been blessed with empirical repeatability that we do not have with only one actual run of the weather/climate system. Hans had some links to papers that accepted this limitation and were upfront about the models’ limitations. A simplified and easier read is the first section of the paper I linked above. I have not convinced myself that all the claims/assumptions/etc in the first part of this paper are true, but it is honest as to the actual state of GCM’s as far as I can tell.

  196. Pat Frank
    Posted Jun 13, 2008 at 8:28 PM | Permalink

    #182 — John A, It seems a peculiarity of the climate model black box syndrome that in the scientific literature the reports seem pretty candid about the limitations of climate models. From comments here at CA, it even appears that people can obtain the code fairly readily. It seems that darkness descends whenever some IPCC stalwart discusses model veracity in the context of AGW. Then it’s trust us, we’re the scientists, and everyone else is an ecocaust denier who should be silenced.

    I’m impressed that you’ve taken on mathematical modeling in your spare time. Good luck with that. :-) If models had remained in the province of climate-physics departments, they’d have been entirely a useful tool to understand how to model climate. Even their pretty outputs would have been understood for what they are — useful pictures of the climate of bizarro-Earth — rather than supposing they are depictions of physical reality. AGW is perhaps the most obnoxious convolution of secular religion with sentimentally irrational politics, ever. It’s almost wrecked climate science, seeks to wreck our civilization, and it’s our common misfortune to necessarily waste time combatting it.

    Feedback from the Skeptic article has included several appreciative emails, and some attention on the web. Kesten Green showed interest because Figure 2 showed one can ‘predict’ the effect of CO2 doubling as well with a simple linear model as with a million lines of GCM-coded PDE’s operating on sponge layers and hyperviscous atmospheres. :-) Apparently this illuminates item 6.6 in the scientific forecasting list of do’s, which can be paraphrased as ‘Keep it simple.’

    Conversations in defense occurred on modeller Michael Tobis’ site, here, on a UCLA skeptic site here, and on RealClimate here. Jerry Browning alerted me to the discussion he was having at RC, and I joined in. The result was that Jerry is now banned from RC, and that not only am I ignorant but also stupid and a bad scientist, thus QED the Skeptic article can be safely ignored. The only person at RC who appeared to have actually read the article was Anthony Kendall, and his criticisms were not valid. Gavin consistently misrepresented it, and Ray Ladbury and Gavin each argued that it’s possible to accurately predict things one can’t resolve.

    As an aside, on another CA thread, someone posted a link to the YouTube video of the climate debate that included Richard Lindzen and Gavin, along with 4 others (Michael Crichton was one of them). They each got 8 minutes to start, and it was striking to see that Gavin used his time, in part, to accuse his opponents of lying. He was the only principal to stoop to that.

  197. Stan Palmer
    Posted Jun 13, 2008 at 8:34 PM | Permalink

    From the Times of London

    http://www.timesonline.co.uk/tol/news/environment/article4133668.ece

    Microbes which consume biological material and excrete crude oil. This is claimed to be more than a carbon-neutral way of producing crude oil; it is supposed to be carbon-negative.

    Inside LS9’s cluttered laboratory – funded by $20 million of start-up capital from investors including Vinod Khosla, the Indian-American entrepreneur who co-founded Sun Microsystems – Mr Pal explains that LS9’s bugs are single-cell organisms, each a fraction of a billionth the size of an ant. They start out as industrial yeast or nonpathogenic strains of E. coli, but LS9 modifies them by custom-designing their DNA. “Five to seven years ago, that process would have taken months and cost hundreds of thousands of dollars,” he says. “Now it can take weeks and cost maybe $20,000.”

    Because crude oil (which can be refined into other products, such as petroleum or jet fuel) is only a few molecular stages removed from the fatty acids normally excreted by yeast or E. coli during fermentation, it does not take much fiddling to get the desired result.

  198. Andrey Levin
    Posted Jun 13, 2008 at 9:34 PM | Permalink

    Stan Palmer:

    Virtually hundreds of companies and labs are working on conversion of biomass to liquid fuels. The main problem for most processes, both technical and economic, is to fracture the cellulose molecule into elementary sugars. Afterward it is no problem to convert them to ethanol, butanol (which does not require distillation), or other fuels. Look, for example, at the topics under “Fuel” here:

    http://www.greencarcongress.com/topics.html

    If you want, make an entry at BB for further discussion.

  199. Stan Palmer
    Posted Jun 13, 2008 at 10:29 PM | Permalink

    re 198

    The main problem for most processes, both technical and economical, is to fracture cellulosic molecule into elementary sugars.

    From the referenced story

    To be more precise: the genetic alteration of bugs – very, very small ones – so that when they feed on agricultural waste such as woodchips or wheat straw, they do something extraordinary. They excrete crude oil.

    It appears that the company is using the biological mechanisms of yeast or E. coli to do just that. I don’t know if this is any more than hype, but the story addresses the issue that you refer to.

  200. DeWitt Payne
    Posted Jun 13, 2008 at 11:54 PM | Permalink

    cba,

    If emission is Planck curve limited, the emissivity is by definition 1, or at least a close approximation. Emission of IR in the 15 micrometer (667 cm-1 in graph) range is Planck curve limited for the most part. You can see the contribution from emission from the warmer stratosphere as a small peak in the center of the CO2 trough and a similar peak for ozone at 1000 cm-1. Emission, as opposed to emissivity, is a function of density and concentration as well as temperature so the power emitted from the stratosphere is small, a few watts/m2, compared to the power emitted by the surface and lower atmosphere. Here’s a MODTRAN calculated spectrum at 70 km looking down for demonstration.
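
    A small Planck-function sketch in Python (the 217 K figure is a nominal lower-stratosphere value, and this compares spectral radiance only, not a MODTRAN-style transfer calculation) of how much lower the Planck radiance at 15 micrometres is for a cold emitting layer than for the warm surface, which is what the CO2 trough in that spectrum reflects:

    import math

    H, C, K = 6.626e-34, 2.998e8, 1.381e-23    # Planck constant, speed of light, Boltzmann constant (SI)

    def planck(lam, T):
        # Planck spectral radiance B_lambda(T), W m^-2 sr^-1 per metre of wavelength
        return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * T))

    lam = 15e-6                                 # 15 micrometres, roughly 667 cm^-1
    for label, T in (("surface (288 K)", 288.0), ("lower stratosphere (217 K)", 217.0)):
        print(f"{label}: {planck(lam, T) * 1e-6:.1f} W m^-2 sr^-1 um^-1")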

  201. Andrey Levin
    Posted Jun 14, 2008 at 1:42 AM | Permalink

    Stan Palmer:

    It appears that the company is using the biological mechanisms of yeast of E coli to do just that. I don’t know if this is any more that hype but the story addresses the issue that you refer to.

    No it does not address it:

    LS9 has developed a new means of efficiently converting fatty acid intermediates into petroleum replacement products via fermentation of renewable sugars. LS9 has also discovered and engineered a new class of enzymes and their associated genes to efficiently convert fatty acids into hydrocarbons

    http://www.emc2fusion.org/

    The first step – enzymatic hydrolysis of cellulosic materials into elementary sugars (with consequent creation of fatty acids) – is not what their bugs are doing. And this is the bottleneck of cellulosic bioconversion.

    Iogen of Ottawa demonstrated enzymatic hydrolysis of cellulose into elementary sugars 50 years ago. It is still far from economical. Once the cellulose is broken, the sugary broth can easily be converted into ethanol, butanol, pure hydrogen, acetic acid, etc. without much trouble. LS9 Ltd. can now convert it into a substance close to crude oil. Great! However, the first step of bioconversion is still the limiting factor.

    Personally, with all the billions going into genetic engineering of fuel-producing microorganisms, I have no doubt that in a couple of years the grass clippings from your lawn will end up in your gas tank.

  202. MarkR
    Posted Jun 14, 2008 at 2:25 AM | Permalink

    I suspect that because GISS use data that is unadjusted for Nino/Nina, they get a predominance of hinge points at incorrect places, and that incorrectly influences the subsequent adjustments for UHI.

  203. John F. Pittman
    Posted Jun 14, 2008 at 5:32 AM | Permalink

    #196 I followed the debate at RC. I was a bit disappointed that Gavin did not attempt to refute you and Jerry in a more direct and informative way. I think it would have been an excellent tutorial and given many a better insight into both sides of the issue.

    Going to the beach for a week for R&R. Will try to catch up when I get back.

    I wish I knew enough to look at the models and know where to look in detail. The forcing by doubling CO2 is what I would relate to an inverted exothermic reactor design. In such a design it becomes a heat exchanger, since failure to remove heat causes a runaway exothermic reaction…explosion, as has unfortunately been discovered in poor designs. In this, I wonder whether the suppression of unresolved turbulence in GCMs acts in the inverted fashion…the heat is kept in the system. This would explain why it seems the forcing the models get is “exploding” well past what is observed. I do not buy that the heat is “in the pipeline”. If it were, we would be able to find it and measure it; as has been proven by the IPCC’s claim that if you have looked for other causes and can’t find one, it has to be what you have already found, LoL. I think that, properly formulated, a control volume analysis could be done to show that the model concept will ALWAYS give a higher forcing than can be expected in the actual system. One needn’t solve all the PDE’s directly, but rather show that the model solution of the CO2 forcings will in all cases be more, to much more, than in the actual system when using a damper or hyperviscosity factor in the model.

  204. cba
    Posted Jun 14, 2008 at 7:55 AM | Permalink

    200 (DeWitt):

    I've got one of those too. I don't recall if I completed a /cm curve; since I work and think in wavelength, so does my model. There are other spikes around the shorter wavelength slope, perhaps ozone. But I go up to 120 km on mine. All in all, it looks like about the same thing, but things get squirrelly and distorted when one uses the /cm x-axis, so an eyeball comparison isn't nearly as good as comparing apples with apples.

    Having finally obtained a quad-processor computer with some relatively serious memory, my efforts are easier now, but the time constraints I'm under this summer are much more serious. Back in January it took all day to get a single graph, and a seriously overheated 3 GHz laptop PC, probably due to virtual-memory swapping delays. I can do the same in minutes now.

    The last run I did was to establish OLR power at the TOA for the various doublings, and also to take deltas between them to ascertain the CO2 doubling power sensitivity. I posted the chart here a month back (roughly). I did this for the TOA being 70 km, 100 km, and 120 km, and there were interesting differences. At 70 km, it was within around 2% of the MODTRAN 3 OLR. MODTRAN 3 does not really go beyond this, definitely not up to 120 km.

    What was interesting was that at 120km, there’s a hump in delta F that rises and then falls.

    My data is in wavelength form and merely has a sum for the total, but I have to condense it to fit into a graph (limit of 32,000 points). I'm afraid I haven't reduced it down to fit on a graph yet, but the OLR exists by wavelength as output from each of the 50 or so atmospheric shells as handled by the 1976 std atm. It can also be recalculated in minutes to reflect any (or no) CO2 concentration, although one can expect errors to start creeping in due to self-broadening that isn't dealt with differently from 384 ppm CO2. I'll try to get a graph posted up here, maybe for 70 km and for 120 km.

  205. rex
    Posted Jun 14, 2008 at 9:04 AM | Permalink

    I find it extraordinary that RC has no link to CA (but it is very telling, isn't it?) LOL

  206. cba
    Posted Jun 14, 2008 at 6:36 PM | Permalink

    200 (DeWitt):

    Here are three charts for various altitudes of the outgoing longwave radiation using my model. They are of course in wavelength, and are condensed down by picking every 50th nm wavelength bin to display rather than doing something actually sophisticated. Included is the total outbound power as a number. Also, some reference BB values for some of the Ts used in the atm model are shown for comparison. It's pretty much using the full complement of HITRAN / 1976 std atm molecules, not just a few. H2O vapor is treated as a fixed partial pressure.
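    As a side note on the condensing, here is a minimal Python sketch (with a made-up spectrum) contrasting the "every 50th bin" decimation with block averaging; the latter preserves sums and band totals better. The array sizes and the synthetic data are arbitrary placeholders.

        # Minimal sketch: condensing a fine-grained spectrum for plotting.
        # Decimation (keep every 50th point) vs. block averaging (mean of each 50-point block).
        # The synthetic spectrum is purely illustrative.
        import numpy as np

        n_bins, block = 100_000, 50
        spectrum = np.random.default_rng(0).random(n_bins)   # stand-in for radiance per bin

        decimated = spectrum[::block]                         # what "every 50th point" does
        averaged = spectrum[: n_bins - n_bins % block].reshape(-1, block).mean(axis=1)

        # Decimation keeps individual (possibly unrepresentative) samples;
        # block averaging preserves the mean level, so sums/integrals are better represented.
        print(decimated.sum() * block, averaged.sum() * block, spectrum.sum())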

    It’s sort of interesting that where one picks the TOA actually does matter.

    The next step planned is to implement the energy budget at each layer, then ascertain the baseline energy transfers, and then do a perturbation on it to see what happens.

    It also makes me curious about some of the conspiracy theories out there about a modern ionospheric project, and it brings back a few old memories of a misspent youth. I'm wondering just how sensitive it is high up to rather small amounts of twiddling when it comes to modulating incoming and/or outgoing power.

    The Modtran 3 calculator provides the following

    120km – not avail.

    70km – 259 W/m^2

    11km – 266 W/m^2

    There's quite a bit of difference in the approaches, including the fact that I have a somewhat more limited bandwidth than they do, which might account for my values letting a bit more power through.

  207. DeWitt Payne
    Posted Jun 14, 2008 at 11:36 PM | Permalink

    Andrey,

    What I don't understand is why all the emphasis is on bugs and enzymes. Why not take the wood chips, switchgrass and other cellulosics and burn them in a gasifier to make CO and H2? Then you can make pretty much anything you want, with a minimal residue of inorganic ash, mostly potassium and sodium carbonate, as opposed to all the lignin and other largely useless residue left over from biochemical reforming.

  208. DeWitt Payne
    Posted Jun 14, 2008 at 11:45 PM | Permalink

    cba,

    The MODTRAN data is available in wavelength as well as wavenumber. If you select save file, you can access it. I haven’t tried to extract the data from the table format for plotting, but I’ve seen wavelength plots from Hans Erren, so it can be done.

    Just eyeballing your data, it looks to me as if the long wavelength side is a tad warm. That’s where the water vapor continuum should make a significant contribution.

  209. DeWitt Payne
    Posted Jun 15, 2008 at 1:07 AM | Permalink

    It isn’t difficult. You could probably easily write a macro for importing the data into Excel. For example, here’s the plot in wavelength for 1976 standard atmosphere and all other settings default:

  210. Andrey Levin
    Posted Jun 15, 2008 at 2:06 AM | Permalink

    DeWitt:

    A gasification installation is economical only if it is fairly big (and expensive). Collection and transportation of low-BTU moist biomass is economical only within a small radius, which could not sustain a big gasification facility. Collecting and transporting biomass over a big territory and for long distances wastes more energy than it could provide.

    An enzymatic conversion plant could be economical even on a small scale, fed by agricultural and forestry waste from only about a 20-mile radius.

  211. DeWitt Payne
    Posted Jun 15, 2008 at 2:39 AM | Permalink

    To better illustrate the problem with cba’s long wavelength data I’ve added Planck curves for 288.2 K and 220 K:

  212. cba
    Posted Jun 15, 2008 at 5:05 AM | Permalink

    DeWitt

    Your last graph is not displaying.

    Also, where would you suggest I look for more information on the H2O continuum?

  213. Pat Frank
    Posted Jun 15, 2008 at 12:03 PM | Permalink

    #203 John — “I followed the debate at RC. I was a bit disappointed that Gavin did not attempt to refute you and Jerry in a more direct and informational way.”

    We’ll see what turns up in Skeptic’s letter-box. I’ll have to reply to whatever criticisms Michael Shermer receives and chooses to publish.

    I wish I knew enough to look at the models and where in detail.

    You couldn’t do much better than to follow up on what Jerry Browning wrote in the “Curry Reviews Jablonowski and Williamson” thread here.

  214. cba
    Posted Jun 15, 2008 at 1:00 PM | Permalink

    DeWitt,

    I got a copy of the data from MODTRAN 3 and played a bit with it. Picking a value at random in the region that looked to have the most variation (at 29,762 nm) and comparing apples to apples, the values were 0.0022211 compared to 0.002319, a difference of 0.0000979, for a variation of about 4.3%.

  215. Richard Sharpe
    Posted Jun 15, 2008 at 1:30 PM | Permalink

    Stop the presses!

    Humans still responsible, but it’s because of aerosols more than CO2

  216. DeWitt Payne
    Posted Jun 15, 2008 at 11:53 PM | Permalink

    cba,

    The second graph displays for me using Firefox. I haven’t tried it with IE. I found a lot of stuff by googling variations on “water vapor continuum”. I didn’t keep the references because I wasn’t trying to do modeling. I was just looking for an estimate of the magnitude of water vapor ‘dimer’ absorption compared to nitrogen, oxygen and mixed nitrogen oxygen dimers. In the case of water vapor, dimer is probably the correct term. For nitrogen and oxygen, it’s more like a three body interaction of two molecules with a photon either emitted or absorbed. Water vapor continuum is orders of magnitude greater than for nitrogen and oxygen even considering the much lower concentration of water vapor and the concentration squared dependence of the absorption.

    I still can’t believe that the atmosphere layer from 70 to 120 km emits an additional 33.5 w/m2. Where’s the energy coming from? Local thermal equilibrium starts to fail at those altitudes so once an excited state emits a photon, it may be quite a while before it’s excited again. So using a Planck curve based on average kinetic energy temperature may not be justified. Not to mention that there’s just not that much mass there to emit.

  217. cba
    Posted Jun 16, 2008 at 6:04 AM | Permalink

    216 (Dewitt):

    Depending on who you listen to, LTE is valid up to anywhere from 40 km to beyond 120 km. If one can ascribe a single temperature to it, then it's in LTE. The 1976 std atm ascribes 300 K at 100 km and 360 K at 120 km. Whether LTE holds up or not, and whether it just fades away or abruptly ends, are questions I don't currently have the answer to.

    There are plenty of energy options that aren't part of the BB curve, plus a few that are. There are short UV and X-rays (part of the solar atmosphere reaches a million K and is radiating). Then there are cosmic rays, gamma rays, and the solar wind. There are electric currents going on as well. Whether these add up to 33.5 W/m^2 of incoming power or 0.000335 W/m^2, I don't know at present either.

    Considering there aren't any clouds up there, so it's all clear sky, there probably isn't that much happening there, unless that much is actually being measured by satellite. If it is, that means there's non-BB power coming in (and going out, as the ionosphere does absorb radio-wave energy), in which case there must be that much energy reaching there, however and from wherever. Also, since my calculation has it radiating more than we receive from the sun, it suggests the calculation blows up there, possibly because the LTE assumption fails.

    I found some info on the continuum. So much for the "well understood" myth. It seems they have a fair observational understanding but competing ideas about the mechanism. I am not sure what I'm going to do about it at present. My program uses the Lorentzian wings out to about +/- 200 nm, which gets a lot, but not nearly all, of the effect.

    It brings to mind something else: particulates of ice and water. How much of this continuum is due to those versus simply molecular vapor? And if none, then what are their effects going to be?

  218. John F. Pittman
    Posted Jun 16, 2008 at 7:22 AM | Permalink

    #213 Pat, thanks. I will read it while on vacation. They decided to allow internet here; the bandwidth is not large, though. At least I won't have to wade through a thousand posts when I get back.

  219. MrPete
    Posted Jun 16, 2008 at 8:01 AM | Permalink

    HUGE NEWS!

    This just in: Honda has created a ZERO emission car. NONE of those nasty greenhouse gases. All it produces is…water vapor.

    "…emits only water vapor and none of the gases believed to be responsible for global warming."

  220. Craig Loehle
    Posted Jun 16, 2008 at 8:17 AM | Permalink

    And the zero-emission hydrogen car really is zero emission (almost) if you get the hydrogen from nuclear. Otherwise, thermodynamics says that making hydrogen from electricity you generated with oil adds a step at which energy is lost, and thus more fossil fuel must be burned. What hydrogen IS good for is reducing urban air pollution.

  221. Sam Urbinto
    Posted Jun 16, 2008 at 9:13 AM | Permalink

    More on land use:

    Turner, B.L. II, Eric F. Lambin, and Anette Reenberg, 2007: The emergence of land change science for global environmental change and sustainability, PNAS, vol. 104. no. 52, 20666-20671, 10.1073/pnas.0704119104.

    http://www.pnas.org/cgi/content/abstract/104/52/20666

    “Land transformation” refers to radical changes in land use and cover, usually over the long term, such as forest to row crop cultivation, or wetlands to urban settlement. The various estimates of these changes differ owing to the use of different metrics and measures and the uncertainties involved. Regardless, transformations are sizable as proportion of the ice-free land surface. If lands altered by human activity—lands retaining their base land cover but configured differently than in the “wildland” state—are included, a much larger estimate would result. Examples include degraded arid lands, pasture and grasslands invaded by or planted to exotic flora, and coadapted forests and grassland. Coadapted land covers are shaped and maintained by prolonged and repeated human activity, such as burning, that enlarges land use or land production: for example, annual burning that expands savanna grasses relative to woody species and enlarges food stocks for livestock and native grazers.

    As with estimates of land transformations and alterations, there is little doubt that human activity usurps a large proportion of terrestrial net primary productivity, but the uncertainty in the estimates remains large.

  222. MrPete
    Posted Jun 16, 2008 at 9:15 AM | Permalink

    I’m curious: has anyone calculated the water vapor generated by converting the world’s internal combustion engines to fuel cell? And would this have any impact on forcing trends? (I realize H2O vapor is short lived… just seems like a large shift that can’t be assumed as “zero”)

  223. Sam Urbinto
    Posted Jun 16, 2008 at 9:24 AM | Permalink

    They opened the first hydrogen station there four years ago…

    http://www.aqmd.gov/news1/2004/HydrogenStationGrandOpeningPR.html

  224. Mark T.
    Posted Jun 16, 2008 at 9:24 AM | Permalink

    Last time I checked, even alarmists admit that water vapor is responsible for the overwhelming majority of the so-called “greenhouse effect,” which implies an overwhelming majority of “global warming.” Funny how that little point gets lost.

    Mark

  225. Sam Urbinto
    Posted Jun 16, 2008 at 9:26 AM | Permalink

    222 MrPete:

    I’ve been wondering about the water vapor also. I’d think it would certainly have some impact.

  226. MrPete
    Posted Jun 16, 2008 at 9:45 AM | Permalink

    Mark, the key to the current understanding, if I myself understand it correctly, is that water vapor has a built-in "desired" level (of humidity) and quickly restores itself to that level through evaporation or clouds->rain. So water vapor is seen as a feedback effect, not something people have any (significant) ability to change.

    Clouds and heat-induced rain feedback are a huge unknown for modelers as has been pointed out many times.

    My curiosity here is what human-action it would take to move water vapor from ignorable to significant in terms of forcing.

  227. steven mosher
    Posted Jun 16, 2008 at 10:07 AM | Permalink

    sam,

    Great, a car that causes floods.

  228. Pat Keating
    Posted Jun 16, 2008 at 10:19 AM | Permalink

    216 DeWitt

    Not to mention that there’s just not that much mass there to emit.

    I think I might be able to shed a little light on that.

    The central part of the 15 micron band has extremely high absorption (and emission) cross-sections. In that case, energy transfer in the troposphere and stratosphere is via a large number of very short 'hops' for these photons, because their mean free path is short. The MFP and flux decrease with increasing alpha (extinction coefficient).
    As you go to higher elevations, alpha falls with density, so the flux increases with altitude and eventually plays the major role, because the emission cross-sections are so large and you don't need many molecules at these wavelengths.
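    To put a rough number on the hop length, here is a minimal sketch using an assumed line-center cross-section and a simple exponential density fall-off. Both numbers are illustrative stand-ins, not HITRAN values; only the scaling with altitude is the point.

        # Minimal sketch: photon mean free path 1/(n * sigma) for a strongly absorbing line.
        # The cross-section and the simple exponential atmosphere are illustrative assumptions.
        import math

        SIGMA = 1.0e-21          # assumed line-center absorption cross-section, m^2 (hypothetical)
        N0 = 2.5e25              # molecules per m^3 at sea level (approx.)
        X_CO2 = 385e-6           # CO2 mole fraction, roughly 2008 levels
        H_SCALE = 7.0e3          # pressure scale height, m (approx.)

        def mean_free_path(z_m):
            n_co2 = N0 * X_CO2 * math.exp(-z_m / H_SCALE)   # CO2 number density at altitude z
            return 1.0 / (n_co2 * SIGMA)                    # metres between absorptions

        for z_km in (0, 10, 30, 50, 70):
            print(z_km, "km:", round(mean_free_path(z_km * 1e3), 2), "m")

    With these made-up inputs the hop length goes from a fraction of a metre near the surface to kilometres near 70 km, which is the sense in which the upper layers finally radiate freely to space.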

  229. Sam Urbinto
    Posted Jun 16, 2008 at 10:21 AM | Permalink

    226 MrPete “Clouds and heat-induced rain feedback”

    We already know that large metro areas affect clouds and rain many miles away; it's difficult to quantify, but it happens. We can figure out a car's heat and particulate output, but what will happen in moving from the current exhaust to water exhaust? I don't think we can quantify either.

    227 mosh “great a car thats causes floods.”

    And droughts!

  230. jeez
    Posted Jun 16, 2008 at 10:47 AM | Permalink

    snip – policy

  231. jeez
    Posted Jun 16, 2008 at 11:03 AM | Permalink

    D’oh!

  232. Mark T.
    Posted Jun 16, 2008 at 1:09 PM | Permalink

    My curiosity here is what human-action it would take to move water vapor from ignorable to significant in terms of forcing.

    Climate scientists that are actually concerned about getting it right? ;)

    Mark

  233. KevinUK
    Posted Jun 16, 2008 at 3:30 PM | Permalink

    snip – cars are a bit OT.

  234. John M
    Posted Jun 16, 2008 at 4:40 PM | Permalink

    Re fuel and water vapor (I hope this isn’t off topic since it directly addresses Mr. Pete’s questions), from simple stoichiometry and energy content, burning hydrogen produces about twice as much water per BTU as burning gasoline. But the expectation is that fuel cells will be more efficient on a BTU basis than fossil fuels. I would not expect water from fuel cells to be any more of an issue than water from IC engines.
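    A quick back-of-envelope check of the "about twice as much water per BTU" figure, treating gasoline as octane and using round lower heating values (both approximations):

        # Back-of-envelope: water produced per unit of heat, H2 combustion vs. gasoline (as octane).
        # Heating values are approximate lower heating values; the stoichiometry is exact.

        LHV_H2 = 120.0          # MJ per kg H2 (approx.)
        LHV_OCTANE = 44.0       # MJ per kg octane (approx.)

        # 2 H2 + O2 -> 2 H2O: 9 kg of water per kg of H2 (18/2).
        water_per_MJ_h2 = 9.0 / LHV_H2

        # C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O: 9*18 = 162 kg water per 114 kg octane.
        water_per_MJ_octane = (162.0 / 114.0) / LHV_OCTANE

        print(round(water_per_MJ_h2, 3), round(water_per_MJ_octane, 3),
              round(water_per_MJ_h2 / water_per_MJ_octane, 1))

    The ratio comes out a bit above two, consistent with the statement above.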

    Of course, hydrogen has its other problems, such as the only viable way to make it at scale currently is from natural gas. Its REALLY big problem is that it is an energy storage material, not an energy source (on Earth anyway). Where the hydrogen comes from is the real issue.

  235. MrPete
    Posted Jun 16, 2008 at 8:23 PM | Permalink

    Hydrogen from Natural Gas? GREAT! We made food expensive through biofuel. Now let’s really jack up the cost of home heating.

  236. Geoff Sherrington
    Posted Jun 17, 2008 at 4:38 AM | Permalink

    Please pardon me for using Unthreaded to introduce new topics. Today is gas in ice core as at Vostok, Antarctica. Jonathan Drake has already noted a correlation between CO2 in air bubbles versus the IGD. The IGD is the difference in derived time, usually derived by isotopes, of Ice and the Gas it encloses. The IGD at Vostok had been reported to exceed 6,000 years in some samples. That is, the ice dates at 6,000 years older than its enclosed gas. Yes, 6,000 years.

    There are various papers 'explaining' the mechanism of IGD, usually through processes as snow turns to firn, to ice with bubbles, to ice without bubbles, as depth increases. These explanations are not terribly convincing, though there has been a lot of work to bolster their acceptance.

    Who can give the $64,000 answer to the question of why the gas dates are so many millennia from the ice around them? Novel suggestions are welcomed as the older explanations are on a path to nowhere, I feel. Please do not assume that the ice date is correct and can be used as a reference standard.

  237. Stan Palmer
    Posted Jun 17, 2008 at 7:37 AM | Permalink

    http://www.telegraph.co.uk/earth/main.jhtml?xml=/earth/2008/06/11/scileaf111.xml

    The following is from a story in the Daily Telegraph about new findings that cast doubt on current reconstructions using oxygen isotopes in tree-ring cellulose. There does seem to be something odd about some of the conclusions to me, a complete layman. It seems odd that any error would be biased to create a cooler past, and if forests are acting to control the ambient temperature, then why is the temperature that they create not the ambient temperature?

    The research contradicts the longstanding assumption that temperature in a healthy leaf are coupled to ambient air conditions. For decades, scientists studying climate change have measured the oxygen isotope ratio in tree-ring cellulose to determine the ambient temperature and relative humidity of past climates.

    This new work challenges the potential to reconstruct climate through tree-ring isotope analysis, since it suggests the method does not provide direct information about past climate, providing misleadingly warm estimates. The discovery will be a boon for ecologists because it opens the potential for the reconstruction of tree responses to both average climate and climate change over the last couple of centuries.

  238. anna v
    Posted Jun 17, 2008 at 10:45 AM | Permalink

    Stan Palmer 238

    It does seem odd that any error would be biased to create a cooler past and if forests are acting to control the ambient temperature than why is the temperature that they create not the ambient temperature?

    We also keep a level temperature of 37 C for our bodies. The publication seems to say that the trees do something similar, except that for them it is about 20 C.

    This would mean that the width of tree rings is temperature dependent only as a second-order effect, through light, humidity, etc. (which of course are correlated with temperature) and through extremes of temperature, not through the ambient everyday temperature.

    It is known that trees lower the temperature around them in the summer by two or three degrees, but it was thought it was from evaporation and shadow and chlorophyll working. Nobody had checked for winter temperatures.

  239. Steve K
    Posted Jun 18, 2008 at 5:14 AM | Permalink

    Geoff, 236. I have wondered why the gas & ice would remain together time-wise at all. It seems to me we are looking at a column with increased density/compression at the bottom. Therefore, the gas will rise to less dense layers as more snow is accrued and compressed.

  240. DeWitt Payne
    Posted Jun 18, 2008 at 12:49 PM | Permalink

    cba,

    Starting somewhere close to the tropopause, the atmosphere becomes optically thin even for strongly absorbing gases like CO2 at 15 micrometers. That’s why you see a spike for CO2 emission from the stratosphere in the center of the CO2 absorption trough. For an optically thin source, emission is a function of concentration and will never reach the Planck curve limit because this limit only applies to optically thick sources at LTE. Or, to put it in another way, a photon emitted from a CO2 molecule at high altitude towards space has an extremely low probability for absorption by another CO2 molecule. I did plasma spectroscopy for a living so I’m speaking from experience as well as theory here. Analytical plasma sources do not achieve LTE, btw, but that doesn’t mean they don’t have a temperature. They have several and they are all different, which is sort of the definition of non-equilibrium. For example, there’s the gas kinetic energy temperature and the electron temperature among others.

    The pressure at 70 km according to the 1976 standard atmosphere is less than 5 Pa. That means there is less than 5 kg/m2 of all components of the atmosphere remaining above 70 km. For CO2 at 400 ppm (ignoring mass to volume correction) that leaves only 2 grams total of CO2, and much less water than that to emit the 30 W/m2 that your calculation indicates. You still haven’t quantitatively demonstrated a mechanism or a source large enough to pump that much power into that little mass. Otherwise, the cooling rate would be about 22 degrees/hour.
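    For reference, a minimal sketch of the column-mass arithmetic (hydrostatic balance: the mass above a pressure level is p/g). With these round inputs the numbers come out even smaller than the figures quoted, which only strengthens the point.

        # Minimal sketch: hydrostatic column mass above a pressure level, m = p / g,
        # and the corresponding CO2 column mass. Inputs are round illustrative numbers.
        G = 9.81                   # m/s^2
        P_70KM = 5.0               # Pa, roughly the 1976 std atm pressure at 70 km
        X_CO2 = 400e-6             # CO2 mole fraction
        M_CO2, M_AIR = 44.0, 29.0  # g/mol

        col_air = P_70KM / G                         # kg of air per m^2 above 70 km
        col_co2 = col_air * X_CO2 * (M_CO2 / M_AIR)  # kg of CO2 per m^2 above 70 km

        print(round(col_air, 2), "kg/m2 air;", round(col_co2 * 1000, 2), "g/m2 CO2")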

    That amount of energy would also propagate downwards, and would be visible to observers in high altitude aircraft. Have you run your model looking up? According to MODTRAN, there’s only 0.6 W/m2 total IR power observed when looking up from 50 km. Here’s the calculated spectrum which looks very much like a completely unsaturated CO2 spectrum:

    At 20 km looking up you start to see water vapor and ozone emission and the CO2 emission is just starting to saturate. But there’s still less than 10 W/m2 total power.

  241. KevinUK
    Posted Jun 18, 2008 at 5:32 PM | Permalink

    #233 Steve

    My post made NO mention of cars whatsoever, yet you have snipped it because 'cars are a bit OT'!!!! It was about the fact that hydrogen fuel cells are not a ZERO emission technology if the hydrogen is generated from nuclear electricity, as Craig had posted. I was just attempting to correct a common misconception; what was wrong with that? There are several other posts that follow mine that specifically mention cars. Why haven't you snipped them also?

    KevinUK

  242. kim
    Posted Jun 18, 2008 at 5:36 PM | Permalink

    KevinUK, you can’t take it personally; however he does it, it works. Look at the result. This is the only blog at which I don’t resent getting snipped or deleted, but as elsewhere, he deletes my best stuff.
    ===================================

  243. cba
    Posted Jun 18, 2008 at 10:07 PM | Permalink

    DeWitt,

    At 70km, I’m about 3-4 W/m^2 off from the modtran 3 calculation so I’m wondering just how far off things could be.

    My radiation output is based upon the Planck BB spectrum for the temperature of each shell times the absorption coefficient, which is (or should be) taking the pressure and the actual amounts of molecules in that slab into account. Starting with 0.98 surface emissivity, each slab has a calculated absorption, and a BB curve is calculated for the shell's emission, which is then multiplied by the absorption curve on a by-wavelength basis. What goes through the shell is the power transmitted through from below plus the emission generated in the shell. This is repeated for each of the 50-some-odd shells. A low-absorption shell should be a low-emission shell. All of this is by wavelength and by shell.
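    For readers following along, here is a minimal single-wavelength sketch of that shell recursion: the surface term is attenuated by each shell, and each shell adds its own Planck-weighted emission. The layer temperatures and absorptivities are placeholders, not output from the actual model.

        # Minimal sketch of the shell-by-shell recursion at one wavelength:
        #   I_above = I_below * (1 - a_shell) + a_shell * B(T_shell)
        # Absorptivities and temperatures below are placeholders, not model values.
        import math

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

        def planck_wavelength(lam_m, T):
            """Planck spectral radiance, W m^-2 sr^-1 m^-1."""
            return (2.0 * H * C**2 / lam_m**5) / (math.exp(H * C / (lam_m * KB * T)) - 1.0)

        lam = 15e-6                                  # 15 um, inside the CO2 band
        surface_emissivity, T_surface = 0.98, 288.0
        shells = [(260.0, 0.6), (230.0, 0.4), (220.0, 0.2), (230.0, 0.05)]  # (T, absorptivity)

        I = surface_emissivity * planck_wavelength(lam, T_surface)
        for T_shell, a in shells:
            # Kirchhoff: a strongly absorbing shell is also a strong emitter at its own temperature.
            I = I * (1.0 - a) + a * planck_wavelength(lam, T_shell)

        print(I)   # upwelling radiance above the last shell, at this one wavelength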

    I have started work on the downward radiation but do not have it put together yet. It may start with a 0 radiation coming in downward or I may combine it with the visible (average) insolation coming in and just combine them together.

    There are no grey-body approximations going on, or anything assumed of the shells other than LTE. LTE I understand as the concept of having one unique temperature rather than several. I know that is a very rough approximation as one climbs above 50 km, and it may in fact be invalid higher up.

    Unless there is a blatant error in the implementation, I think at present that adding more sophistication to the model, like continuum radiation, should wait until the more fundamental concepts, such as scattering, have been included and appear to work. If there is a verifiable point where LTE breakdown is causing error, that will be good to know, but it can be overcome by limiting the height.

    As near as I can tell, MODTRAN 3 doesn't go above 70 km or 100 km. Since the emissions are mostly going to consist of the same absorption wavelengths as the lower atmosphere, and the T is going to be somewhat lower than at higher levels, it's understandable that one is going to see almost no downward radiation not originating in the immediate vicinity. That will only occur when the T is higher than in the near area.

    Remember that the sun's photosphere, at just under 6000 K, is radiating tremendously and is a substantial vacuum.

    Of course it's got to have a serious energy source, probably non-BB radiation, coming in to maintain the T if it really is radiating that much. What I've already seen is that there isn't that much solar BB radiation being absorbed. That becomes an interesting question to study, if it is indeed going on.

    Looking back at the graphs I put up: on the 120 km one, that top BB curve is the 100 to 120 km altitude with a T of 360 K. Only some of the wavelengths are radiating out beyond the 220 K layer down near 70 km.

    Some may be co2 but I think other lines are probably ozone and other molecules.

    These graphs though may be somewhat misleading in appearance because the points being shown are something like every fiftieth sample point – rather than the sum or average of all the sample points.

  244. DeWitt Payne
    Posted Jun 19, 2008 at 2:06 PM | Permalink

    cba,

    I think MODTRAN limits the height to 70 km for two reasons. First, there just isn't much atmosphere above 70 km, less than 0.005% of the total. Second, LTE breaks down because the collision frequency is too low, on the order of 0.5 x 10^6 per second compared to 10^9 at sea level. Considering that most collisions are elastic, the frequency of spontaneous emission becomes comparable to the frequency of collisional deactivation, so LTE is not a valid approximation.
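    A minimal sketch of that comparison. The quenching fraction and the spontaneous-emission rate below are loudly assumed order-of-magnitude placeholders (they are not from the comment above); only the logic matters, namely that LTE needs collisional de-excitation to stay well ahead of radiation.

        # Minimal sketch: collisional de-excitation rate vs. spontaneous emission rate.
        # The quenching fraction and the radiative rate are illustrative assumptions only.
        COLLISION_FREQ = {"sea level": 1.0e9, "70 km": 0.5e6}   # collisions per second (from the discussion)
        QUENCH_FRACTION = 1.0e-5     # assumed fraction of collisions that de-excite the state
        A_RADIATIVE = 1.0            # assumed spontaneous emission rate, s^-1 (order of magnitude)

        for level, nu_coll in COLLISION_FREQ.items():
            quench_rate = nu_coll * QUENCH_FRACTION
            # LTE is a reasonable approximation only while quench_rate >> A_RADIATIVE.
            print(level, round(quench_rate / A_RADIATIVE, 2))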

    I still think there’s an error in your algorithm. You show saturated emission for some lines at 120 km. Therefore percent transmittance at those wavelengths must be less than 1%. Plugging 200 micrograms of CO2 (0.1 cm path length at sea level pressure equivalent to 2 g CO2/m2) into the spectral calculator, I get slightly less than 40% transmittance at the peak of the CO2 spectrum. That’s rather a long way from saturation. If you subscribe to spectracalc, you can do a lot more and their rates aren’t bad, ten cents US per calculation for 250 calculations in a month. There was some funny business with the calculation, though. When I lowered the pressure by 10 and increased the path length by 10 which I thought should have resulted in no change in the spectrum, the percent transmittance went way down.

  245. cba
    Posted Jun 19, 2008 at 3:09 PM | Permalink

    DeWitt,

    Those are through-the-atmosphere composite values. Those dozen or so lines that indicate full emission at a 360 K T are very likely not CO2 or H2O vapor.

    Also, there are probably problems with the simplistic condensing done with the graph data: only every 50th point was picked to display.

    I will try to look at the stuff this weekend. I'm using the standard corrections for T and p to adjust the lines from the standard HITRAN values, which should mostly only affect the line widths. I will probably try to determine just what exists up there at 120 km in the way of absorption/emission, just to verify that the atm calc is using that value instead of the surface value, as that is an area of code that was modified when going from the 0-dimensional to the 1-dimensional model.

    Clearly, if the path distance is increased by a decade and the concentration dropped by 1/10, the answer should be the same, except for the changes in line height and width due to p (self and foreign broadening).

    You didn't say whether that test was done with MODTRAN or with the spectral calculator, although the position of your comment implied it was the calculator.

    If you drop p, the absorption line should narrow and the peak should rise. Whether it was enough of a change in p to make much difference is another thing. The area under the curve should be constant.
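    A minimal numerical sketch of that last statement: a Lorentz line whose half-width scales with pressure keeps its integrated strength, while the peak rises as the pressure drops. All numbers are illustrative.

        # Minimal sketch: a Lorentz line with pressure-proportional half-width.
        # Lowering the pressure narrows the line and raises the peak, while the
        # integrated line strength (area under the cross-section) stays the same.
        # All numbers are illustrative.
        import numpy as np

        S = 1.0                      # line strength (area), arbitrary units
        GAMMA_REF = 0.07             # half-width at the reference pressure, cm^-1 (illustrative)
        nu = np.linspace(-5.0, 5.0, 20001)   # detuning from line center, cm^-1
        dnu = nu[1] - nu[0]

        def lorentz(gamma):
            return S * gamma / (np.pi * (nu**2 + gamma**2))

        for p_scale in (1.0, 0.1):                        # full pressure vs. one-tenth pressure
            sigma = lorentz(GAMMA_REF * p_scale)
            print(p_scale,
                  round(float(sigma.sum() * dnu), 3),     # area: ~S in both cases
                  round(float(sigma.max()), 1))           # peak: ~10x higher at p/10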

    As for sea level, I recall seeing some peaks that reached at least one optical path length within a cm or a few cm. 40% for 1 mm sounds plausible as a ballpark, but it's a big park.

    I tinkered a bit at lunchtime with MODTRAN 3. It could be that they drop out at 70 km or 100 km, as I saw something referencing 100 km in the file. And it could be because of LTE's utter loss of accuracy up there. There was something else bothering me about the MODTRAN runs, though. Unfortunately, it slips my mind at the moment just what it was. I remember playing with the surface, 1 km and 2 km looking up and down, along with 20 km, 50 km and 70 km doing likewise. Mostly it was clear-sky 1976 atm with a few runs of some low-level clouds tossed in. Somewhere, something just didn't seem right about it.

    The weekend starts in just a few moments despite the 16 + hours of work needed to be accomplished between now and monday.

  246. DeWitt Payne
    Posted Jun 19, 2008 at 5:46 PM | Permalink

    cba,

    If you could identify those lines, I think it would go a long way to solving the problem. The only thing at that altitude that should have a high enough concentration to emit significant radiation is CO2. The only other possibility I can think of is molecular ions like N2+ or O2+. Are they even in the HITRAN database? My suspicion is that there’s a code glitch that is causing the program to calculate unrealistic partial pressures for some IR active component.

  247. DeWitt Payne
    Posted Jun 19, 2008 at 6:19 PM | Permalink

    cba,

    Looking up is easy if you’ve already done looking down. The weighting function you apply to the Planck curve at any altitude is exactly the same in both directions, at least according to Grant Petty, and it seems logical. The weighting function amounts to the absorptivity of the atmosphere above the observation point. You can start with zero for night sky or a solar spectrum and work from the top down instead of the bottom up. If you want to work in the visible and UV, you will have to include scattering, though. That becomes significant as you approach the short wavelength end of the spectrum. After all, that’s why the sky is blue.
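    A minimal sketch of the top-down version, mirroring the upward shell recursion sketched earlier in the thread; it starts from zero radiance for a night sky, and the layer values are again placeholders rather than model output.

        # Minimal sketch: downwelling radiance at one wavelength, integrating top-down.
        # Same recursion as the upward case, but starting from ~zero at the top (night sky).
        # Layer temperatures and absorptivities are placeholders.
        import math

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

        def planck_wavelength(lam_m, T):
            return (2.0 * H * C**2 / lam_m**5) / (math.exp(H * C / (lam_m * KB * T)) - 1.0)

        lam = 15e-6
        layers_top_to_bottom = [(230.0, 0.05), (220.0, 0.2), (230.0, 0.4), (260.0, 0.6)]

        I_down = 0.0                 # night sky; a solar term would go here for daytime shortwave
        for T_layer, a in layers_top_to_bottom:
            I_down = I_down * (1.0 - a) + a * planck_wavelength(lam, T_layer)

        print(I_down)                # downwelling radiance reaching the surface at this wavelength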

  248. David Smith
    Posted Jun 19, 2008 at 7:20 PM | Permalink

    Here’s the ClimateAudit and RealClimate internet activity (reach) for the last ten months:

    CA continues its advance while RC moves sideways. Note the convergence. RC has a platoon of writers while CA is an Army of One. Remarkable.

  249. cba
    Posted Jun 19, 2008 at 9:50 PM | Permalink

    DeWitt,

    I looked briefly at the raw spectral data. One peak that was substantial at 120 km was around 2725 nm. The code has it broken out between CO2 and everything else so that a variable CO2 can be established. This peak (and others) is not in the CO2-only data run. There are another 38 or so molecules, including N2, O2, and O3, as well as various others.

    I also noted that MODTRAN 3 does not offer a ground emissivity setting; they seem to have a fixed emissivity of about 0.92 rather than 0.98. On adjusting my emissivity at the surface, it would appear my outgoing radiative values are very close to MODTRAN 3's, matching to around 2 W/m^2 or so, sometimes better. It also has some ups and downs, sometimes more, sometimes less, but I'm using 37 or 39 molecules versus probably less than half that for MODTRAN, and my bandwidth is different at the far long-wavelength end.

    The value used for the 120 km altitude is a hot 360 K. It would emit over 900 W/m^2 if it were at the surface. It's contributing no more than 10 W/m^2, so it can't be at full pressure or full force.

  250. Bob Koss
    Posted Jun 19, 2008 at 11:26 PM | Permalink

    Isn’t the Alexa toolbar required in order to be included in their data collection?

  251. David Smith
    Posted Jun 20, 2008 at 7:20 AM | Permalink

    Hi, Bob, that’s a very good question. My understanding is that, prior to April 2008, Alexa relied totally on users of their toolbar. If their toolbar distribution was large and random then that method provided a decent sampling, at least for larger sites. Of course, if the disribution was not random or large then their data may not have reflected actual traffic.

    In April 2008 they expanded their data sources beyond their toolbar. I haven't seen a list of what sources they now include, but I have read that they are friends of Google and I wonder if they're somehow tapping into Google resources. They also changed their methods for analyzing the data. This April change affected some of the rankings substantially, particularly among the smaller sites.

    What I’m told (by a user of for-fee internet data) is that Alexa is now reasonably useful for trends and for comparative sizes of the larger sites (say, top 100,000 sites). Also, in the particular case of CA and RC, my conjecture is that their audiences are similar and just as likely to have the toolbar (or whatever else Alexa now uses).

    It’d be interesting if someone with access to for-fee traffic data could comment on how the CA and RC Alexa data compares with the for-fee data.

  252. PaulM
    Posted Jun 20, 2008 at 8:29 AM | Permalink

    David, that graph is very interesting. You can see the spike (hockey stick?) in August 2007 when Steve discovered the GISS error. What caused the spike in November 2007? (OK, I know, we shouldn't ask 'what caused' every little wiggle in a graph!) When looking at the graphs we have to bear in mind that CA links to RC, but RC never links to CA and is in denial about Steve's existence.

    If you go to Alexa and select ‘page views’ instead of ‘reach’, CA is much higher than RC, by about a factor of 2:

  253. Larry T
    Posted Jun 20, 2008 at 2:06 PM | Permalink

    Personally I feel that Alexa is both a pest and a nuisance, and I periodically delete references to it on my computer. It is definitely deep in the gray areas of malware.

  254. Raven
    Posted Jun 20, 2008 at 5:24 PM | Permalink

    Steve,

    Is this something you would consider reviewing: http://www.worldclimatereport.com/index.php/2008/06/20/finnish-finish-global-warming/

  255. Richard Sharpe
    Posted Jun 21, 2008 at 9:48 AM | Permalink

    I wonder what other extreme climate events occurred in 1993 …

  256. Andrew
    Posted Jun 22, 2008 at 11:25 AM | Permalink

    I just saw something very interesting on the History Channel. They speculated that the KT impact created a hot crater in the ocean that fueled a hypercane, which destroyed the ozone layer by injecting water into the stratosphere, killing the dinosaurs. Naturally Kerry Emanuel was brought in as a proponent (after all, SST is the key variable in this scenario); other people were less sure (Jeff Kiehl, for one). I have to admit the ozone layer part seems to fit well, but I'm doubtful about the hurricane part.

  257. Craig Loehle
    Posted Jun 22, 2008 at 2:38 PM | Permalink

    Surprisingly, on CNN today, they discussed how the levees make the flooding worse along the Miss. because the water has nowhere to go. They did not blame it on climate change.

  258. Posted Jun 22, 2008 at 3:11 PM | Permalink

    Craig:

    Surprisingly, on CNN today, they discussed how the levees make the flooding worse along the Miss. because the water has nowhere to go. They did not blame it on climate change.

    Yes. This issue was also discussed after the '93 floods. One problem is that town A will build a levee based on current flood levels. That levee is fine for a while.

    After some minor or major floods, one or several towns upstream will build levees. Now town A's levee is undersized, because the water that used to spread out stays in the river, resulting in higher crests.

    So, during the next big flood, town A's levee fails.

    Even seemingly minor things can have effects when everyone is doing them. There was a time when planners weren't attentive to the effect of development on water retention and detention during floods. People covered water-absorbent soil with malls, blacktop, roads, etc. Afterwards, that water rushes more quickly to the rivers. This may result in greater variability of river levels.

    In many areas, town planners have become more attentive to this and require builders to include some water retention and detention features to spare the lower lying areas downstream. This works to some extent, but I don’t think the problem is solved.

  259. Posted Jun 22, 2008 at 3:53 PM | Permalink

    With regard to the PDO: If you are going to rename, I suggest considering inserting the term “coherent structure” into the name.

    Examples of use:
    1:

    This recognizes that there are repeating patterns of motion in turbulent flows, giving hope that understanding these ‘coherent structures’ will give insight into the mechanism of turbulence, and so useful information for explaining phenomena and formulating models.

    2:

    Most eddies shed from cusped capes on the Pacific-Coast increase the turbulent boundary layer, and form an organized coherent structure of mesoscale eddies interlocked in the Shikoku-Basin.

    3

    “Coherent structures are vortices that give the structure of the flow in the atmosphere, in the ocean, all over the planet—which is why we think they’re so important,” said Chomaz. “You see them at all scales, from small dust devils to tornadoes and cyclones that are thousands of times bigger. It’s important to understand the dynamics between all these scales to understand the dynamics of the atmosphere.”

    Granted, the PDO would be darn big. It’s not a vortex (but I’m not sure a coherent structure has to be a vortex.) It is a) coherent and b) has a structure.

  260. Geoff Sherrington
    Posted Jun 22, 2008 at 6:31 PM | Permalink

    Re # 239 Steve K

    Thanks for your comment.

    I'm thinking about closed systems and open systems. If isotopes, gases, etc. migrate within the boundaries of the study area, that smears the signal. If there is loss from the system, that causes bias. I cannot find which mechanism dominates in ice cores like Vostok.

    Here’s a question. We are told that the Vostok core stopped about 420,000 years down. Rock is suspected to be a short distance below this. Was there bare rock before the present ice column at Vostok? If so, what climate or geographic conditions caused it? If there was an unconformity then, why cannot another happen now?

  261. BarryW
    Posted Jun 22, 2008 at 7:33 PM | Permalink

    Re 258

    I wonder if there is also another reason. A number of years ago there was flooding caused by the refusal of the manager to lower the water level behind a flood control dam early in the year because it had become a recreation area and lowering the water would have upset the boaters, even though the original intent of the dam was to act as a buffer when the spring floods came. Couple that with barge traffic being inconvenienced by low water. Could something like this have exacerbated the problem?

  262. Jonathan Schafer
    Posted Jun 22, 2008 at 8:09 PM | Permalink

    Hansen may have totally lost it now.

    Put oil firm chiefs on trial, says leading climate change scientist

    James Hansen, one of the world’s leading climate scientists, will today call for the chief executives of large fossil fuel companies to be put on trial for high crimes against humanity and nature, accusing them of actively spreading doubt about global warming in the same way that tobacco companies blurred the links between smoking and cancer.

    Hansen will use the symbolically charged 20th anniversary of his groundbreaking speech to the US Congress – in which he was among the first to sound the alarm over the reality of global warming – to argue that radical steps need to be taken immediately if the “perfect storm” of irreversible climate change is not to become inevitable.

    Speaking before Congress again, he will accuse the chief executive officers of companies such as ExxonMobil and Peabody Energy of being fully aware of the disinformation about climate change they are spreading.

    In an interview with the Guardian he said: “When you are in that kind of position, as the CEO of one the primary players who have been putting out misinformation even via organisations that affect what gets into school textbooks, then I think that’s a crime.”

    He is also considering personally targeting members of Congress who have a poor track record on climate change in the coming November elections. He will campaign to have several of them unseated. Hansen’s speech to Congress on June 23 1988 is seen as a seminal moment in bringing the threat of global warming to the public’s attention. At a time when most scientists were still hesitant to speak out, he said the evidence of the greenhouse gas effect was 99% certain, adding “it is time to stop waffling”.

    I wonder what the error bars are on his 99%. What a dope.

  263. David Jay
    Posted Jun 22, 2008 at 8:12 PM | Permalink

    Jonathan:

    I just came here to post the same article. Absolutely amazing arrogance.

    He wants them tried for "actively spreading doubt". That's the scientific method for you…

  264. David Jay
    Posted Jun 22, 2008 at 8:16 PM | Permalink

    Since this is filed under Nasa (Hansen), here is the latest quote from the good doctor:

    http://www.guardian.co.uk/environment/2008/jun/23/fossilfuels.climatechange

  265. VG
    Posted Jun 22, 2008 at 8:22 PM | Permalink

    How could this person possibly be in charge of GISS temp? It's quite obvious he is totally biased. Is there anybody at NASA who independently checks the raw data before it is released?

  266. retired geologist
    Posted Jun 22, 2008 at 8:51 PM | Permalink

    re #1 David Jay

    I hope all of you who go to the Guardian link in #1 notice that this is the twentieth anniversary of Hansen’s speech warning us about the horrors to come because of AGW. Let’s see, in twenty years exactly NOTHING SERIOUS has taken place! Can Hansen possibly be taken seriously by anyone?

  267. Scott Finegan
    Posted Jun 22, 2008 at 9:11 PM | Permalink

    “I guess that Hansen and associates regard themselves as being above the law.”

    Hansen thinks he is the law:

    James Hansen, one of the world’s leading climate scientists, will today call for the chief executives of large fossil fuel companies to be put on trial for high crimes against humanity and nature, accusing them of actively spreading doubt about global warming in the same way that tobacco companies blurred the links between smoking and cancer.

    http://www.guardian.co.uk/environment/2008/jun/23/fossilfuels.climatechange

  268. Jonathan Schafer
    Posted Jun 22, 2008 at 9:20 PM | Permalink

    #263

    Yes, it must be nice to have that kind of certainty about anything. He has ceased to be a scientist and instead has proclaimed himself chief inquisitor. These statements, regardless of whether they came on his own or not, including publicly proclaiming that he is working towards the defeat of certain Congressmen because they don't support his views, have to be incompatible with his position as a government scientist at NASA.

  269. David
    Posted Jun 22, 2008 at 10:51 PM | Permalink

    Scientists, like Journalists, should not be activists. It is a conflict of interest. Scientists and journalists should show their work and let activists determine whether to use their work. There are too many people in both professions who simply cannot see the problem with being activists on the side.

  270. Andrey Levin
    Posted Jun 23, 2008 at 4:26 AM | Permalink

    As radical environmentalists continue to blame the ferocity of Hurricane Katrina’s devastation on President Bush’s ecological policies, a mainstream Louisiana media outlet inadvertently disclosed a shocking fact: Environmentalist activists were responsible for spiking a plan that may have saved New Orleans. Decades ago, the Green Left – pursuing its agenda of valuing wetlands and topographical “diversity” over human life – sued to prevent the Army Corps of Engineers from building floodgates that would have prevented significant flooding that resulted from Hurricane Katrina.

    http://frontpagemag.com/Articles/Read.aspx?GUID=3C240039-FEA6-49D2-A5C5-6C41CF9263A8

  271. Phil.
    Posted Jun 23, 2008 at 11:12 AM | Permalink

    Re #273

    I don't know what numbering system you're using, but on my computer, in #271 Geoff S refers to AGW scientists as akin to terrorists. I would say that's worse than using the f-word, which is one of the banned words on this site.

  272. Andrew
    Posted Jun 23, 2008 at 11:36 AM | Permalink

    271 (Phil.): Hmm…posts keep disappearing so I get confused. Unfortunately, his post is gone, but I think you misread. The terrorists he refers to are the activists not the scientists.

  273. Phil.
    Posted Jun 23, 2008 at 12:00 PM | Permalink

    Re #272 (Andrew)

    No, I did not misread; he was referring to the scientists. Glad to see that it's been deleted.

  274. DeWitt Payne
    Posted Jun 23, 2008 at 12:33 PM | Permalink

    cba,

    You seem to be missing the point I’m trying to make which means I’m not doing a very good job of explaining. I’ll try again. Any radiation emitted in the 70 to 120 km range will propagate downwards as well as upwards. If you use your program to calculate the observed radiation from the ground looking up I’m betting that you will see a very large difference (greater than 100 W/m2) between starting your integration at 70 km and 120 km using deep space or zero watts as the background emission. Starting at 70 km, you will probably be in reasonable agreement with the MODTRAN results, which in turn are in reasonable agreement with observation.

    Five Pa is a really good vacuum. It’s not diffusion pump level, but it’s as good as or better than can be achieved with a two stage mechanical pump. And that’s at 70 km. At 120 km the pressure will be orders of magnitude lower, less than 0.01 Pa. There is no way you can get saturated absorption much less emission at that pressure, not to mention that collisional energy transfer will not be sufficient, given the mean time between collisions on the order of a millisecond, to maintain that level of emission. There is a glitch in your program or the database.

  275. DeWitt Payne
    Posted Jun 23, 2008 at 12:42 PM | Permalink

    Everyone, would you please use the Permalinks instead of the post number. It will avoid much confusion. Here’s how again: Right click on the post number to which you are replying, click on copy link location from the menu. Enter an identifier in the reply box. Highlight it with the mouse. Click on the Link button. Use Ctrl V or right click and paste to enter the permalink in the link box. Click ok. It’s only a few extra steps and will work as long as the post isn’t deleted or moved to another thread. Thanks.

  276. Posted Jun 23, 2008 at 1:14 PM | Permalink

    Hansen has an article up at Comment is Free.

  277. CuckooUK
    Posted Jun 23, 2008 at 1:56 PM | Permalink

    #10 Richard

    (apologies if this is off-topic)

    The article you are talking about is in the Guardian here:

    http://www.guardian.co.uk/environment/2008/jun/23/fossilfuels.climatechange

    What annoys me a little is that the Guardian will print this article, but not these articles:

    http://www.sciam.com/article.cfm?id=ice-core-reveals-how-quickly-climate-can-change&sc=rss

    which reveals the temperature has always fluctuated, even from year to year

    http://www.trace2008.us.edu.pl/Abstracts.html

    which includes a report from M. Timonen, K. Mielikäinen, S. Helama of the Finnish Forest Research Institute which clearly states tree rings show temperature fluctuation

  278. Phil.
    Posted Jun 23, 2008 at 4:14 PM | Permalink

    Re #277

    Steve, please check this one.

  279. cba
    Posted Jun 23, 2008 at 4:28 PM | Permalink

    DeWitt,

    My efforts were thwarted this weekend, and the mile I was attempting to go turned into a few inches. I was not careful enough in transferring some data-sheet information and exceeded some memory limits in the program, which caused the computer to waste hours chasing its tail in virtual memory.

  280. DeWitt Payne
    Posted Jun 23, 2008 at 4:52 PM | Permalink

    cba,

    Another indication that LTE no longer applies in the thermosphere (altitudes above ~100 km) is that gas molecules with different molecular weights are no longer well mixed above this altitude; the region above is also known as the heterosphere. Their pressure-versus-altitude behavior is a function of molecular weight, with the partial pressure of heavier gases falling more rapidly with altitude than that of lighter gases.
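    A minimal sketch of that diffusive-separation point: above the turbopause each species falls off with its own scale height kT/(m g), so heavier molecules thin out faster. The temperature and the altitude step below are illustrative, not measured values.

        # Minimal sketch: species-specific scale heights above the turbopause, H = k*T / (m*g).
        # In diffusive equilibrium each gas falls off with its own H, so heavier gases thin out faster.
        # The temperature and the 20 km altitude step are illustrative.
        import math

        KB = 1.381e-23        # J/K
        G = 9.5               # m/s^2, a bit lower than at the surface
        AMU = 1.661e-27       # kg
        T = 600.0             # K, illustrative thermospheric value

        species = {"O": 16, "N2": 28, "O2": 32, "CO2": 44}   # molecular masses in amu
        dz = 20e3                                            # altitude step, m

        for name, m_amu in species.items():
            H_scale = KB * T / (m_amu * AMU * G)             # scale height in metres
            drop = math.exp(-dz / H_scale)                   # fraction of partial pressure left after dz
            print(name, round(H_scale / 1e3, 1), "km scale height;", round(drop, 3), "left after 20 km")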

  281. John M
    Posted Jun 23, 2008 at 5:19 PM | Permalink

    Phil.

    As long as you're patrolling the blog, are you ready to use the phrase "a lot" yet?

    http://www.climateaudit.org/?p=2868#comment-232398

  282. DeWitt Payne
    Posted Jun 23, 2008 at 5:34 PM | Permalink

    cba,

    I asked Leif Svalgaard about the energy involved in heating the ionosphere. His reply here is that the energy is about five orders of magnitude less than 36 W/m2 at latitudes above 40 degrees and below -40 degrees and much less than that at lower latitudes.

  283. Posted Jun 23, 2008 at 7:16 PM | Permalink

    # 284

    DeWitt Payne,

    What you're doing has a name: dogmatism. Sorry, but dogmatism has no place in real science. :)

  284. Andrew
    Posted Jun 23, 2008 at 11:29 PM | Permalink

    I don’t think anyone has posted this yet:
    http://www.friendsofscience.org/assets/documents/CorrectCorrections.pdf
    I find his explanation of the origin of the negative urbanization adjustments interesting.

  285. cba
    Posted Jun 24, 2008 at 6:52 AM | Permalink

    DeWitt,

    The fact that the molecules are settling out would indicate to me that there is finally insufficient mixing going on (think convection) to blend them together. That insufficient mixing itself doesn't suggest a lack of LTE to me. An energy in/out imbalance, though, does strongly suggest that LTE is not in action, at least not sufficiently well to use the assumption, and that the std atm's stated temperature may only apply to the IR non-radiating molecules, which of course are the vast majority. That said, it would be interesting to determine the actual temperature of the radiating molecules versus that of the atmospheric bulk, how that relates to pressure, and just how far down a differential T exists in the atmosphere between radiating and non-radiating molecules, i.e. how good the LTE assumption is at higher pressures. That is, does it show 80% error at 100 km and 10% at 70 km, or is it right on the money up to 80 km and blow up exponentially above that?

    Anyway, time to finish constructing the downward radiation scenario is going to be very spotty for a week or more. It can probably be done in that time frame, but projects done an hour here and an hour there often stretch out longer than ones with half-day or longer time blocks available.

    BTW, even though Leif has stated the total dissipation up there as being very small, there are still some factors not covered, such as dust and debris coming in. Without knowing that power, I would assume it to be somewhat less than the non-radiative influx. Others, such as radio-wave absorption and tidal forces, are likely tremendously less as well. Also, I'm not sure just how much of Leif's values were incoming to the earth system versus generated within the upper atmosphere by the internal effects of earth's situation (earth's magnetic field, rotational and tidal forces on ionized molecules, etc.). Whether those power-generation factors can rise to the level of the solar power dissipated in the upper reaches remains to be determined.

  286. Posted Jun 24, 2008 at 8:06 AM | Permalink

    285 (cba): the 25 GW quoted is solely the energy flux of precipitating particles from the solar wind and outer magnetosphere.

  287. Posted Jun 24, 2008 at 8:41 AM | Permalink

    Request for assistance.

    I have not been successful in locating a copy of this paper:

    F. Möller, “On the Influence of Changes in the CO2 Concentration in Air on the Radiation Balance of the Earth’s Surface and on the Climate”, Journal of Geophysical Research, Vol. 68, 1963, Pp. 3877-3886.

    I have found, online in the archives for the journal, a Comment about the paper and the Author’s reply, but I cannot find the paper.

    Can anyone point me to a copy?

    Thanks

  288. cba
    Posted Jun 24, 2008 at 8:54 AM | Permalink

    286 (Leif):

    Thanks for the clarification. The question becomes how much energy in total is being dumped up there from all sources. Outgoing and incoming solar BB radiation will be handled by the calculations (hopefully). Is there going to be enough to compensate for the radiative losses implied by my model, which assumes LTE is in effect all the way up there? And if not, what will the situation be that brings us back down to the reality of conservation of energy?

  289. Posted Jun 24, 2008 at 9:41 AM | Permalink

    288 (cba): My estimate is that all the rest is at least an order of magnitude less than the 25 GW. One complication is that the deposition is highly structured and localized in space and time. LTE certainly does not apply up there. The medium is inhomogeneous and the electrical conductivity is different along and transverse to the magnetic field.

  290. Posted Jun 24, 2008 at 9:42 AM | Permalink

    287 (Dan): try here: http://www.agu.org/digital-library.shtml

  291. Posted Jun 24, 2008 at 10:06 AM | Permalink

    Thanks Leif, but I already did. Here’s what I find there for Vol. 68, 1963:

    Volume 68; 1963
    Issue, page span:
    No. 1, 1-344
    No. 2, 345-604
    No. 3, 605-956
    No. 4, 957-1201
    No. 5, 1203-1572
    No. 6, 1573-1791
    No. 7, 1793-2066
    No. 8, 2067-2358
    No. 9, 2359-2865
    No. 10, 2867-3334
    No. 11, 3335-3542
    No. 12, 3543-3743

    No page 3877 ??? Maybe the year is not complete, the number of issues varies.

  292. Posted Jun 24, 2008 at 11:36 AM | Permalink

    291 (Dan): At the time [1963] there were precisely 12 issues per year. Possibly the page numbers are wrong [I have seen that happen]. Go look in more detail.

  293. kim
    Posted Jun 24, 2008 at 11:58 AM | Permalink

    291 (Dan Hughes) Maybe 3377?
    ===================

  294. Posted Jun 24, 2008 at 12:22 PM | Permalink

    It’s a very strange situation.

    There are very many citations to the article when I do any of several variations of the subject, title, author, journal, etc. in Google. All that I have checked always give the citation information the same; date, author, page numbers (starting and ending), year, volume number; everything.

    My few attempts with Google Scholar have not been successful; Google Scholar can’t find the article.

    I have searched, both quick and ‘advanced’, at the AGU.ORG site. It can’t find the article. When I do a ‘title’ search I get the hit of a Comment by Plass in 1964 because the title is repeated in the title of the Comment. When I do an ‘author’ search at AGU the search doesn’t even return the Author’s reply to the Comment. Yet when I go to the online copy of the Comment, Moller’s reply is the very next article in that issue of the journal. Google Scholar does find the Comment by Plass. And the AGU search finds the Plass comment by ‘author’ as well as ‘title’

    I bought the Plass comment thinking that the Vol, Issue, and Page numbers would surely be correct in that article. They are all the same as in all the hits from Google.

    kim, I have checked other combinations of the Page numbers by browsing around in the online TOCs at AGU, but of course not all of them. No luck there either.

    I’m now thinking the umlaut is screwing up the searches at AGU and Google Scholar? But then Google finds all the citations?

  295. PaulS
    Posted Jun 24, 2008 at 1:14 PM | Permalink

    Here’s a new push from Jet Propulsion Lab to publicize climate change:
    “Global Climate Change: NASA’s Eyes on the Earth”
    at climate.jpl.nasa.gov

    It includes a “tool” bar called “Vital Signs of the Planet” featuring
    Arctic Sea Ice, Carbon Dioxide, Sea Level, Global Temperature, and Ozone Hole.

  296. Andrew
    Posted Jun 24, 2008 at 1:46 PM | Permalink

    294 (Dan Hughes): Old papers like that are sometimes difficult to find. Your best shot would be if the author is still employed at a university, in which case his faculty page may have it.

    295 (PaulS): Naturally the people at NASA can only monitor the Earth, so they may be forgiven for not featuring the current solar stuff. ;)

    Note however that Arctic Sea Ice is a “vital sign” but Antarctic Ice is not…

  297. kim
    Posted Jun 24, 2008 at 9:13 PM | Permalink

    I knew a fellow once who had a unique role as an expert witness because he was the only one left in the world with the copies of some journals. A lawyer once told him that he could charge much higher fees than the ones he did.
    ============================================

  298. Ulises
    Posted Jun 25, 2008 at 2:38 AM | Permalink

    #291 Dan : It’s in Vol. 13 p. 3877, according to a bibliographic search. But I have no access to it.

  299. kim
    Posted Jun 25, 2008 at 5:08 AM | Permalink

    298 (Ulises) Do you mean Volume 68, Issue number 13? Leif claims only 12 issues per year. Surely not everyone has pitched their paper copies.
    =================================================

  300. Ulises
    Posted Jun 25, 2008 at 9:01 AM | Permalink

    #299 Kim :

    Do you mean Volume 68, Issue number 13?

    Yes, sorry. I find Issues up to 24 for Vol 68 / 1963.

  301. a reader
    Posted Jun 25, 2008 at 10:52 AM | Permalink

    Dan

    I found your article at my local university library. It’s only available on microcard with no way to scan or copy from the reader. It can be ordered, however, through your local public library from various other uni. libraries who do have hard copies or the ability to make one. It will take “a few weeks to get it with (possible) charges”. The ISSN is 0148-0227. I put in a tentative order contingent on price.

  302. Posted Jun 25, 2008 at 3:35 PM | Permalink

    300 (Ulises): 24 issues. It may be that AGU went to two issues per month around 1963. I’m having somebody check the paper copies and scan the article if found.

  303. Posted Jun 25, 2008 at 4:23 PM | Permalink

    a reader, Leif, kim, Ulises

    Thanks for all the assistance.

    It looks like AGU started putting 24 issues online one year late. Volume 69 in 1964 has 24 issues listed; Volume 68 in 1963 has 12 issues listed. Probably a simple oversight.

    I’ll wait to see what Leif finds out before ordering through a library. I’m many miles (even more km) from a good university/research library.

    BTW, there are several interesting journal articles from the 1950s by Plass, Goody, and a few others. While searching around I ran across this from 1956, and this list of three short blurbs, and this.

    Note that in the latter blurb from 1953, Plass is said to have reported 1.5 degrees per century; F or C is not specified so I’m guessing F given the time period and the publication.

    Note also that in this paper:

    The influence of the 15µ carbon-dioxide band on the atmospheric infra-red cooling rate
    By G. N. PLASS
    The Johns Hopkins University, Baltimore, U.S.A.
    QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY Volume: 82 Issue: 353 Pages: 310-324 Published: 1956
    (Manuscript received 12 October 1955, in revised form 23 March 1956)

    Plass reports a 3.6 C rise in the average temperature at the surface of the earth for doubled CO2 and a 3.8 C decrease if CO2 is cut in half. I think this corresponds to a pure radiative calculation, but I haven’t completely digested the paper yet.

    There are many other calculations/predictions/projections/estimates from back then, too.

    Thanks again

  304. Bob KC
    Posted Jun 25, 2008 at 4:23 PM | Permalink

    Re. #302

    Now can any of the Americans who visit this blog please explain to me how this government-employed official can get away with calling America’s next potential president a brontosaurus and with calling for oil company execs to face trial for their crimes against the planet?

    It’s a free country? No, no that’s not it.

    He earned a free pass by claiming he was being censored (!?) last year and the higher ups don’t want to face the political heat in an election year for firing him?

    That sounds closer.

  305. Jaye Bass
    Posted Jun 25, 2008 at 5:57 PM | Permalink

    RE: 302

    Wow…has the Age of Reason finally come to an end?

  306. cba
    Posted Jun 25, 2008 at 6:04 PM | Permalink

    304 (Dan):

    A pure radiative calculation will not net even a 1 degree C or K variation for a doubling or halving. For that you have to invent some mythical H2O positive feedback. What’s more, any positive feedback you claim is at work now must not have been at work in the past, so as to accommodate the problem of having only 33 degrees of total warming from all GHG / cloud factors so far, alongside roughly 40 times the absorption difference of a CO2 doubling in going from no effect to now. To make matters worse, within that factor of 40 there are over 5 doublings of CO2 that would have the same effect as the current or next one, and over 6 more on top of that which should have nearly as much effect as each of those 5. Evidently there are a lot of new magical mystery effects that were never present before. Either that, or someone learned in their school-day fraction studies that one can get 16 slices from a quarter of an apple, all from the same apple.

  307. DeWitt Payne
    Posted Jun 25, 2008 at 7:34 PM | Permalink

    cba,

    33 degrees is the lower limit for greenhouse warming. That’s calculated assuming the surface of the earth is the same temperature everywhere. Since the Earth isn’t a superconductor it’s warmer at the equator than at the poles. At the opposite end of the spectrum, if you assume no heat conduction and no heat capacity, the average temperature is quite low because as soon as the sun drops below the horizon, the surface temperature drops to 2.7 K. The real planet has non-zero heat capacity and heat conductivity, especially for the oceans where the high heat capacity leads to very small diurnal sea surface temperature changes. The land surface has much lower heat capacity so the diurnal temperature difference can be substantial. All this leads to a somewhat lower temperature in the absence of a greenhouse effect so the actual warming is on the order of 35 to 38 degrees. Simple radiative calculations at constant relative humidity seem to show that halving the CO2 should produce a smaller change than doubling it. But even at constant relative humidity, the first order temperature change is only about 2 degrees in the tropics and less at the poles. I think you have to postulate massive heat transfer to the poles leading to substantial warming (on the order of tens of degrees) before you can change the global temperature by 3.6 degrees for doubling CO2. I think you also need a large thermal reservoir with a time constant of about 50 years to get the temperature behavior in the twentieth century to agree with a sensitivity that high. I haven’t seen credible evidence for this.

  308. Posted Jun 25, 2008 at 9:56 PM | Permalink

    GEOPHYSICAL RESEARCH LETTERS, VOL. 35, L12503, doi:10.1029/2008GL033472, 2008
    Amount of CO2 emissions irreversibly leading to the total melting of Greenland
    S. Charbit
    Laboratoire des Sciences du Climat et de l’Environnement, UMR 1572, IPSL, CEA, CNRS, UVSQ, Gif-sur-Yvette, France
    D. Paillard
    G. Ramstein

    Abstract
    The long-term response of Greenland to anthropogenic warming is of critical interest for the magnitude of the sea-level rise and for climate-related concerns. To explore its evolution over several millennia we use a climate-ice sheet model forced by a range of CO2 emission scenarios, accounting for the natural removal of anthropogenic CO2 from the atmosphere. Above 3000 GtC, the melting appears irreversible, while below 2500 GtC, Greenland only experiences a partial melting followed by a re-growth phase. Delaying emissions through sequestration slows significantly the melting, but has only a limited impact on the ultimate fate of Greenland. Its behavior is therefore mostly dependent on the cumulative CO2 emissions. This study demonstrates that the fossil fuel emissions of the next century will have dramatic consequences on sea-level rise for several millennia.

  309. David Archibald
    Posted Jun 26, 2008 at 1:25 AM | Permalink

    Re 309, Dr Svalgaard are you saying you believe this pap or did you post it for our amusement? It is hard to tell which without an intro by your good self.

  310. trevor
    Posted Jun 26, 2008 at 3:54 AM | Permalink

    An interesting piece in the WSJ. http://online.wsj.com/article/SB121433436381900681.html

  311. cba
    Posted Jun 26, 2008 at 4:27 AM | Permalink

    Basically, the simple average for a rotating earth gives the 33 K balance, and that is just based upon power in = power out. There’s not enough time to reach 2.7 K with the sun still present, and there’s the internal primordial heat, which is still contributing something. The diurnal variation would be large, but the T^4 factor means that power output shoots way up as T rises above the average and drops way down as T falls below it. In this case the average is whatever it takes to achieve a balance. For every minute spent below the average with a power deficit of N W/m^2, there will be a corresponding period spent above the average with an N W/m^2 surplus – at least such that the product time * N is the same below and above the average.

    Constant relative humidity is not something I accept. It’s an assumption, and it has some serious consequences/requirements. On short, localized time scales one doesn’t have constant relative humidity as T changes. It’s quite a stretch to presume it’s constant overall over the long term when it isn’t over the short term, and there are places where there is no additional vapor source to be had.
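
    For what it’s worth, a minimal sketch of that power-in = power-out balance (in Python, with assumed round numbers for the solar constant and albedo rather than anything from this thread):

        # Isothermal radiative balance for a rotating sphere.
        S = 1366.0        # W/m^2, assumed solar constant
        albedo = 0.30     # assumed planetary albedo
        sigma = 5.67e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

        absorbed = S * (1.0 - albedo) / 4.0      # ~239 W/m^2 averaged over the sphere
        T_eff = (absorbed / sigma) ** 0.25       # ~255 K effective emitting temperature
        print(absorbed, T_eff, 288.0 - T_eff)    # observed ~288 K mean gives the ~33 K difference

    The 33 K number is just the gap between that effective temperature and the observed ~288 K surface mean; everything else in the argument is about how that gap is maintained.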

  312. jae
    Posted Jun 26, 2008 at 7:31 AM | Permalink

    309, 310: I agree with David: We have come to the point where any sort of modeling exercise is a “study.” I think that is some of the yellow science mentioned in the WSJ link in 311.

  313. Hoi Polloi
    Posted Jun 26, 2008 at 7:58 AM | Permalink

    Greenhouse gases over the tropical Atlantic are disappearing faster than expected, according to the first comprehensive measurements taken in the region.

    British scientists working at the Cape Verde Observatory on the volcanic island of São Vicente believe chemicals produced by sea spray and tiny marine organisms are speeding up natural processes that destroy the gases.

    Detailed measurements taken over eight years revealed that levels of one greenhouse gas were substantially lower than climate models predicted.

    The observatory, which was recently set up by British, German and Cape Verdean scientists, has given researchers an unprecedented ability to study climate change in one of the most remote regions in the world.

    In the tropics, intense UV rays in sunlight trigger reactions that effectively scrub greenhouse gases from the air. Without this natural cleaning process, atmospheric levels of the gases and other pollutants would be substantially higher than they are.

    One of the most important cleaning reactions destroys ozone, which contributes to global warming at lower altitudes. A byproduct of the reaction, called hydroxyl, cleans the air even more by breaking down methane, the third most abundant greenhouse gas in the atmosphere. The study, led by scientists at Leeds and York Universities, revealed ozone levels over the tropical Atlantic were 50% lower than expected.

    Ozone is a byproduct of burning fossil fuels, and is carried down to the tropics from the UK and Europe on trade winds. As the air crosses the ocean, the harsh sunlight reduces ozone levels from around 50 parts per billion (ppb) to 10 ppb, the scientists report in the journal Nature.

    To check their readings, the team used a research plane to measure ozone levels at different altitudes over the ocean. Their readings confirmed a major loss of ozone in the remote region.

    “At the moment this is a good news story, more ozone and methane being destroyed than we previously thought. But the tropical Atlantic cannot be taken for granted as a permanent sink for ozone,” said Alastair Lewis, who led the study at the national centre for atmospheric science in York.

    “It may mean that there are sources of methane and ozone that we aren’t aware of that the atmosphere has been working away on without us realising it. Or perhaps we’ve been overestimating how quickly ozone is removed in other parts of the world,” Lewis added.

    Instruments at the observatory later pointed to an explanation for the rapid destruction of ozone over the ocean. The sensors picked up bromine and iodine oxide, which are produced by sea spray and plankton, and join forces to attack ozone.

    Lewis said the discovery should give renewed impetus to programmes that aim to reduce atmospheric methane and low-level ozone. “It’s an incentive to get on with cutting our emissions of these gases, because if we do, the atmosphere will scrub them away quicker and we will get the benefits sooner. Carbon dioxide lasts for hundreds of years, while methane lasts only a year or two in the tropics,” said Lewis.

    Climate scientists will use the findings to fine-tune computer models that attempt to predict future warming. “We now have to get to grips with why this wasn’t spotted earlier and ultimately put this into our models so we can improve their accuracy,” said Lewis.

    Source: http://www.guardian.co.uk/environment/2008/jun/26/climatechange.pollution

  314. Posted Jun 26, 2008 at 8:16 AM | Permalink

    310,313 (DavidA,jae): Do I have to belabor the obvious? I’m amazed at what ‘qualifies’ as ‘science’ these days. I would almost call this paper a reductio ad absurdum.

  315. beatk
    Posted Jun 26, 2008 at 9:51 AM | Permalink

    This thought-provoking, brief essay raises the notion that with enough data, models are not required
    and the scientific method is obsolete.
    What are the potential implications of this, if any, on the future of solar science and of climatology?
    Is there currently sufficient (good) data available to make this concept applicable in those fields?

    http://www.wired.com/science/discoveries/magazine/16-07/pb_theory

  316. Boris
    Posted Jun 26, 2008 at 10:04 AM | Permalink

    311:

    Nice combination of an ignorance of the science and a conspiracy theory. Well played, WSJ!

  317. Posted Jun 26, 2008 at 10:37 AM | Permalink

    Regarding my search for the paper by Moller, I have found this 1975 paper by S. Schneider that provides a summary of the information that I am interested in. Various calculations/predictions/projections/estimates, made between about 1963 and 1975, of the equilibrium temperature for doubling CO2 concentration are the subject of the paper which has the title, “On the Carbon Dioxide-Climate Confusion”.

    The information is given in Figure 1 of the paper, and the caption, in part, has this, “For the reasons discussed in the text a state-of-the-art order-of-magnitude estimate is suggested between 1.5 and 3 K, but that the combined effects of improperly modeled climatic feedback mechanisms could, roughly, enhance or reduce this estimate by as much as a factor of 4.” The Arrhenius results are not given, and the various results of Plass are not summarized. I think all the results summarized in the figure were obtained using fixed relative humidity.

    It is interesting to me that all the older papers focus on the effects of water vapor, relative humidity, and cloudiness (how much, where, and at what altitude) in addition to CO2. It seems that the real keys to an accurate solution have been known for over six decades. I think even Arrhenius considered cloud effects. It seems that these authors knew that the source of the cooling effects was more important than the pure radiative effects of additional CO2. But then everybody got sucked up into Big Codes running on Big Computers, and the truly important physical phenomena and processes got relegated to the status of omnipresent parameterizations. The importance of convective and latent heat aspects was also identified many decades ago. The fixation on the radiative effects of CO2 has been very misguided, IMO.

    Corrections for all incorrectos will be appreciated.

  318. a reader
    Posted Jun 26, 2008 at 1:57 PM | Permalink

    Dan

    I did hand copy the abstract for your Moller paper:

    “The numerical value of a temperature change under the influence of a CO2 change as calculated by Plass is valid only for a dry atmosphere. Overlapping of the absorption bands of CO2 and H2O in the range around 15µ essentially diminishes the temperature change. New calculations give Delta T=+1.5 degrees when CO2 content increases from 300 to 600 ppm. Cloudiness diminishes the radiation effects but not the temperature changes because under cloudy skies larger temp. changes are needed in order to compensate for an equal change in the downward long-wave radiation. The increase in the water vapor content of the atmosphere with rising temp. causes a self-amplification effect which results in almost arbitrary temp. changes, e.g. for constant relative humidity Delta T=+10 degrees in the above mentioned case. It is shown however that the changed radiation conditions are not necessarily compensated for by a temp. change. The effect of an increase in CO2 from 300 to 330 ppm can be compensated for completely by a change in the water vapor content of 3% or a change of the cloudiness of 1% of its value without the occurrence of temp. changes at all. Thus the theory that climatic variations are effected by variations in CO2 content becomes very questionable.”

    Any errors are my fault–I was scribbling.

    Dr. Moller was from the Meteorological Institute of Munich. The paper itself contained too many formulae and graphs for me to copy.

  319. Posted Jun 26, 2008 at 2:38 PM | Permalink

    Thanks for taking time to provide the info, a reader.

    This article from 1979 has a summary of several values of the sensitivity [K/(W/m^2)] to doubling of CO2, up to the time of publication. They vary all over the map, which is not surprising given that the equilibrium temperatures vary all over the map.

    It’s interesting to read these old papers. The more things change, the more they stay the same.

    One thing that remains elusive to me is the time scale to attain new ‘radiative equilibrium’ states. My impression is that it might be thousands of years, or maybe several hundred. If that is the case, how is it possible to be measuring changes in temperature already, so early in such a long time scale?
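
    A small conversion sketch may help when comparing numbers quoted in K/(W/m^2) with numbers quoted in degrees per doubling (this assumes the commonly used logarithmic approximation dF = 5.35 ln(C/C0) W/m^2 for CO2 forcing, which is not taken from the papers cited above):

        import math

        def forcing_co2(c, c0):
            # Assumed Myhre-type logarithmic approximation for CO2 forcing, W/m^2
            return 5.35 * math.log(c / c0)

        dF_2x = forcing_co2(2.0, 1.0)            # ~3.7 W/m^2 per doubling
        for lam in (0.3, 0.5, 0.8):              # example sensitivities in K/(W/m^2)
            print(f"{lam} K/(W/m^2) -> {lam * dF_2x:.1f} K per doubling")

    On that basis, and with that assumed forcing, Schneider’s 1.5 to 3 K range corresponds to roughly 0.4 to 0.8 K/(W/m^2).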

  320. cba
    Posted Jun 26, 2008 at 3:22 PM | Permalink

    316(beatk):

    I’d like to say that’s an easy answer. However, total and utter catastrophes tend to be very hard to describe in specific terms and details.

    Perhaps one can describe this petabyte data in terms of information and noise. Then one has the problem of deciding which is which. Even their beloved example of Google statistics on website hits is bogus. Its validity extends only to hits on a web site that might be useful to advertisers who are paying for exposure. If that site is marketing its own product, there is no information as to whether these hits are potential customers, whether any are there to buy or to laugh at the products offered, or, for that matter, whether it is live people or some compromised node trying to break into the system.

    Who is to say which is more popular, the Flat Earth Society website or the Sky & Telescope website? Does that mean the Flat Earth Society is promoting correct and accurate scientific data and Sky & Telescope is not? It’s not even a democratic vote by people who don’t understand the situation.

  321. DeWitt Payne
    Posted Jun 26, 2008 at 6:47 PM | Permalink

    cba,

    The 33 degree figure comes from a model where temperature and thus OLR is constant at all latitudes and longitudes. That is not the case with the real Earth. The large temperature difference between the equator and the poles means OLR decreases with latitude. However, because there is heat transport from the equator to the poles, there is a deficit in OLR near the equator and an excess at high latitudes. That is not enough, though, for the isothermal approximation to be accurate. OLR isn’t even constant at constant latitude. OLR over the Sahara, for example, is much higher than over the ocean. Maksimovich posted a graph of this a while back. With heat capacity defined as identically zero, the drop in temperature with decreasing insolation is instantaneous. Obviously, even a small heat capacity will prevent a drop to 2.7 K because of the T^4 dependence of emission. But that very same dependence is what makes a non-isothermal body have a lower average temperature. The equator is only a little bit hotter, but the dark side and the poles are a lot colder.

    Look at the cycle of Lunar surface temperature for an example of very low surface heat capacity and conductivity combined with low rotation rate to produce an average temperature very much lower than the temperature of an isothermal body with the same albedo. The relatively constant sub-surface temperature of the moon is -35 C compared to -3 C for an isothermal body with an average albedo of 0.12 in radiative equilibrium.
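
    The isothermal-versus-non-isothermal point can be illustrated with a toy calculation (a sketch only, in Python; the zero-heat-capacity limit and the cold nightside are idealizations, and only the 0.12 albedo comes from the comment above):

        import numpy as np

        S = 1361.0       # W/m^2, assumed solar constant at ~1 AU
        albedo = 0.12    # albedo used in the comment above
        sigma = 5.67e-8  # W m^-2 K^-4

        # Isothermal limit: the whole sphere radiates at one temperature.
        T_iso = (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25

        # Zero-heat-capacity limit: each dayside element balances its own insolation,
        # and the nightside is idealized as ~0 K.
        theta = np.linspace(0.0, np.pi / 2.0, 200001)          # solar zenith angle
        T_day = (S * (1.0 - albedo) * np.cos(theta) / sigma) ** 0.25
        w = np.sin(theta)                                      # area weight on the sphere
        T_mean = 0.5 * np.sum(T_day * w) / np.sum(w)           # whole-sphere average

        print(T_iso - 273.15, T_mean - 273.15)                 # roughly -3 C vs about -120 C

    The real Moon (about -35 C sub-surface, per the comment above) falls between these two limits because regolith heat capacity smooths the extremes; the only point of the sketch is that a body radiating locally as T^4 always averages colder than the isothermal equilibrium value.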

  322. David Smith
    Posted Jun 26, 2008 at 9:41 PM | Permalink

    The global surface temperature anomaly Hovmoeller plot is here:

    The polar regions appear to have cooled (anomaly-wise) vs recent months. The tropics may be somewhat warmer while the mid-latitudes are a wash. The cool 2008 continues.

    This year the Arctic seems to be cloudier than last year, which slows the ice melt and keeps surface temperatures in the normal range. If that cloudy and cool pattern holds for the next six or so weeks then the 2008 meltback might not be as big as 2007, despite the wind-driven loss of the thick ice last winter.

    Some expect El Nino conditions to develop over the next six months but, if forced to bet, I’d bet on a return to a mild La Nina. Also worth watching over the next several months is the relative coolness of the Indian Ocean.

  323. Chuck C
    Posted Jun 27, 2008 at 5:25 AM | Permalink

    David Smith, #313

    New article on sea ice meltback predicts NO Arctic Ice this year:

    http://www.independent.co.uk/environment/climate-change/exclusive-no-ice-at-the-north-pole-855406.html

    It seems unthinkable, but for the first time in human history, ice is on course to disappear entirely from the North Pole this year.

    The disappearance of the Arctic sea ice, making it possible to reach the Pole sailing in a boat through open water, would be one of the most dramatic – and worrying – examples of the impact of global warming on the planet. Scientists say the ice at 90 degrees north may well have melted away by the summer.

    “From the viewpoint of science, the North Pole is just another point on the globe, but symbolically it is hugely important. There is supposed to be ice at the North Pole, not open water,” said Mark Serreze of the US National Snow and Ice Data Centre in Colorado.

    If it happens, it raises the prospect of the Arctic nations being able to exploit the valuable oil and mineral deposits below the sea bed which have until now been impossible to extract because of the thick sea ice above.

    Seasoned polar scientists believe the chances of a totally ice-free North Pole this summer are greater than 50:50 because the normally thick ice formed over many years at the Pole has been blown away and replaced by huge swathes of thinner ice formed over a single year.

    Any reaction? Seems like an overstatement to me with lots of assumptions about single-season ice and how things will continue, but I’m no expert.

  324. cba
    Posted Jun 27, 2008 at 5:57 AM | Permalink

    DeWitt,

    Of course there are variations. The definition of the radiative equilibrium temperature must be that value which balances; otherwise T would be adjusting to radiate more, or less. What’s more, there are nonradiative heat flow adjustments horizontally and vertically: while radiative heat transfer is substantial and conduction is fairly minimal, convection is quite important as well. It takes over, or becomes more important, immediately upon a decrease in the radiative transfer. Any slight change in the radiative balance will be met by a T^4 increase in radiated power, plus a linear increase in conduction, plus an increase in convection.

    Even from the standpoint of the MODTRAN calc, the radiated power for clear sky at the standard average T comes out around 260 W/m^2, which exceeds what is coming in by over 20 W/m^2.

    Basically, a balance is needed when a fixed amount of power comes in, such as the solar insolation. If we do not average out to the same value overall, then there is a net loss or gain of heat which will continue until we do average the same as what is coming in. Stefan’s law shows that this power output varies with the 4th power of temperature, making it extremely sensitive to T variations, and that makes the radiative factor an extremely strong negative feedback term. If there is a need to regain balance by a shift in T, almost no change in T is required to achieve the new balance. Averaged over time and position, there is a T that does meet the requirement. What’s more, on a local level this balance is being met, or there is a rather immediate change in T wherever there is a difference. This average T value cannot vary by more than a small amount, because there is going to be a massive radiative response in power output – either increasing or decreasing.

    Unless you change the incoming absorbed radiation from its value of around 239 W/m^2 by a very large amount, you are not going to change T by more than a tiny fraction of a degree. If you measure T for the earth over a large enough area and time, you will measure it to be that value. Hence, the 33 K average rise is all you get for the atmospheric T increase at present.

    As for radiative equilibrium, Stefan’s law applies to a surface area, not the whole surface or the whole body. You need thermal equilibrium in the particular square meter you’re dealing with, and that’s it.

    While the assumptions made with this stuff are extremely crude, the results are quite good compared to what one would tend to expect.

    What’s more, the H2O vapor variations presumed by models – only partially and improperly modeled – rest on the assumption of constant relative humidity: some form of average T change has to spur the change in humidity, and that is the real basis for their supposed massive increase in T. Of course, the first hint of a problem here is that it is a totally unstable equilibrium condition, which is something nature avoids – or, more aptly, a condition that is not stable cannot persist, because any perturbation (like a slight random increase in humidity) will immediately push itself off the stability point and begin the slide down the curve until it hits the rail.
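
    A minimal sketch of the size of that radiative response, treating the earth as a single T^4 emitter at its effective temperature (assumed round numbers, no feedbacks):

        sigma = 5.67e-8                 # W m^-2 K^-4
        F = 239.0                       # W/m^2, the absorbed solar value used above
        T = (F / sigma) ** 0.25         # ~255 K effective emitting temperature

        # From F = sigma*T^4, the slope is dT/dF = T / (4*F).
        dT_dF = T / (4.0 * F)
        print(f"T = {T:.1f} K, dT/dF = {dT_dF:.2f} K per W/m^2")

    That ~0.27 K per W/m^2 is the no-feedback (Planck) response of the effective emitting temperature; how, and how much, it translates to the surface is exactly where the feedback argument begins.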

  325. Craig Loehle
    Posted Jun 27, 2008 at 6:15 AM | Permalink

    WSJ article on corruption of climate science.
    http://online.wsj.com/article/SB121433436381900681.html

  326. PaulM
    Posted Jun 27, 2008 at 6:26 AM | Permalink

    #323 Chuck C, The Independent and its ‘science’ editor (global warming hysteria editor) are one of the worst offenders for this. For the facts, look at #322 for example. Here is what I just put on their ‘right to reply’ page:

    This is the usual junk science from the Independent and Steve Connor. Notice all the ‘ifs’. I bet you it doesn’t happen this year. The arctic ice extent is considerably greater than last year and arctic temperatures are back to the normal levels. Meanwhile, globally averaged temperatures continue to fall in 2008.
    But don’t let a few facts interfere with your scare story, Steve Connor.

  327. kim
    Posted Jun 27, 2008 at 6:28 AM | Permalink

    323 (Chuck C) This year’s Arctic ice melt is trailing last year’s despite the thinner ice, but this year did start from a higher baseline. So, more ice has melted this year than last, but there is still more ice now than at this time last year. Confused? Well, I’m no expert.
    =======================================================

  328. David Smith
    Posted Jun 27, 2008 at 6:43 AM | Permalink

    Hi, Chuck. The article predicts no ice at the North Pole (90N) but I think it allows for ice elsewhere in the Arctic Ocean. The article acknowledges that the immediate cause of the situation is wind, which displaced the normally-thick ice southward ultimately to melt as icebergs in the Atlantic.

    I think their odds (50/50) are probably about right. It will largely depend on cloud cover.

    My guess is that this would not be “the first time in human history” which the article claims.

    I’m not at all sure that current science understands all of the dynamics of Arctic ice. A series of cloudy, cool summers with annual winds which favor ice retention would change the situation, and maybe nature does that on occasion. Ocean currents play a role, too, and they also change over time.

    An enjoyable webcam is here, which is near the North Pole (or wherever it drifts to). (Always check the date stamp of a photo, as their cameras malfunction a lot.) You can watch cloud cover and the progress of the summer melting (melt ponds). The archives also have some interesting photos (none of Santa, though).

  329. BarryW
    Posted Jun 27, 2008 at 7:26 AM | Permalink

    Re 327

    Ya know, they could claim they were right even if the minimum is larger than last year. If they are starting at a higher maximum and have nearly the same minimum as last year, then there was “more ice melt”.

    Re 328

    Santa moved to Asia awhile ago. That’s why all the toys are marked “Made in China”. /grin/

  330. fFreddy
    Posted Jun 27, 2008 at 9:49 AM | Permalink

    Re #326, PaulM

    The Independent and its ‘science’ editor (global warming hysteria editor) are one of the worst offenders for this.

    Very true, which makes it rather cheering that the first few pages of comments seem to be fairly solidly saying that the article is rubbish. If even Indy readers have seen through the AGW scam, then maybe there is some hope, despite this week’s national suicide note.

  331. Andrew
    Posted Jun 27, 2008 at 6:02 PM | Permalink

    If anyone is interested, please comment on my Draft Editorial/Article on climate sensitivity:
    http://www.climateaudit.org/phpBB3/viewtopic.php?f=3&t=373

  332. nevket240
    Posted Jun 28, 2008 at 1:57 AM | Permalink

    Blast!! I was about to post a link from today’s Australian featuring the same MMH emotional terrorism
    http://www.theaustralian.news.com.au/story/0,25197,23933477-30417,00.html

    Why waste time? Post anyway. Note the emotional terrorists couldn’t leave the bears out of it. No science, no facts, just ET. You are not battling crooked scientists, you are battling crooked political activists who have a whole generation of hippies to network with.

    regards.

  333. MJW
    Posted Jun 28, 2008 at 2:13 AM | Permalink

    It seems unthinkable, but for the first time in human history, ice is on course to disappear entirely from the North Pole this year.

    What I wondered when I read this is, how do they know this would be the first time in human history? I read the article, but it didn’t say.

  334. Pat Keating
    Posted Jun 28, 2008 at 9:40 AM | Permalink

    331 Andrew
    I suggest defining climate sensitivity as deg C for doubling.

  335. Posted Jun 28, 2008 at 11:38 AM | Permalink

    If you haven’t seen the 1-hour program, What Is Normal?, and its 10-minute excerpt, Don’t Panic, you certainly should because it is excellent:

    http://motls.blogspot.com/2008/06/coyoteblog-dont-panic.html

    It covers a lot of stuff about sensitivity, feedbacks, proxies, statistics, urban biases etc. etc.

  336. Andrew
    Posted Jun 28, 2008 at 2:49 PM | Permalink

    What happened to my response to Pat Keating and my hilarious response to MJW?

    335 (Lubos): I have seen it, it is pretty good. I’m also the semiregular commenter on your blog “Werdna”.

  337. Phil.
    Posted Jun 29, 2008 at 8:53 AM | Permalink

    Re #326

    #323 Chuck C, The Independent and its ‘science’ editor (global warming hysteria editor) are one of the worst offenders for this. For the facts, look at #322 for example. Here is what I just put on their ‘right to reply’ page:

    This is the usual junk science from the Independent and Steve Connor. Notice all the ‘ifs’. I bet you it doesn’t happen this year. The arctic ice extent is considerably greater than last year and arctic temperatures are back to the normal levels. Meanwhile, globally averaged temperatures continue to fall in 2008.
    But don’t let a few facts interfere with your scare story, Steve Connor.

    Well you certainly didn’t let the facts get in the way of your post, did you, Paul?

    Arctic sea ice extent is certainly not ‘considerably greater than last year’, nor are temperatures ‘back to the normal levels’!

  338. Phil.
    Posted Jun 29, 2008 at 9:14 AM | Permalink

    Re #337
    This was missed off the above post:
    Alert weather
    Alert records

    So at Alert today’s temperature at 10am exceeds the average maximum temp by ~11ºC and the record for the day by 3ºC.
    So rather than ‘returning to normal levels’ it appears that records are being set.

  339. Andrew
    Posted Jun 29, 2008 at 10:25 AM | Permalink

    Check this out:
    http://www.weatherquestions.com/Climate-Sensitivity-Holy-Grail.htm

  340. David Smith
    Posted Jun 29, 2008 at 10:58 AM | Permalink

    The most recent seven-day global temperature anomaly map is here.

    The Arctic has warm areas around Alert and much of the subpolar Arctic while the ocean areas are closer to normal. The Antarctic is generally cooler than normal, stretching into the midlatitude land areas.

  341. Real Richard Sharpe
    Posted Jun 29, 2008 at 11:10 AM | Permalink

    Re: 337 …

    So tell us Phil, how does the discovery of volcanic activity under the Arctic affect your views on what is causing these lows in sea ice etc?

    If you are not aware of this new information (shame on you) you can find it at WattsUpWithThat and Eurekalert.

    Here is what they say:

    The Gakkel Ridge in the Arctic Ocean spreads so slowly at 6-14 mm/year, that current theories considered volcanism unlikely – until a series of 300 strong earthquakes over a period of eight months indicated an eruption at 85° N 85° E in 4 kilometres water depth in 1999. Scientists of the Alfred Wegener Institute became aware of this earthquake swarm and reported about its unusual properties in the periodical EOS in the year 2000.

    Vera Schlindwein and her junior research group are closely examining the earthquake activity of these ultraslow-spreading ridges since 2006. “The Gakkel Ridge is covered with sea-ice the whole year. To detect little earthquakes, which accompany geological processes, we have to deploy our seismometers on drifting ice floes.” This unusual measuring method proved highly successful: in a first test in the summer 2001 – during the “Arctic Mid-Ocean Ridge Expedition (AMORE)” on the research icebreaker Polarstern – the seismometers recorded explosive sounds by the minute, which originated from the seafloor of the volcanic region. “This was a rare and random recording of a submarine eruption in close proximity,” says Schlindwein. “I postulated in 2001 that the volcano is still active. However, it seemed highly improbable to me that the recorded sounds originated from an explosive volcanic eruption, because of the water depth of 4 kilometres.”

    The scientist regards the matter differently after her participation in the Oden-Expedition 2007, during which systematic earthquake measurements were taken by Schlindwein’s team in the active volcanic region: “Our endeavours now concentrate on reconstructing and understanding the explosive volcanic episodes from 1999 and 2001 by means of the accompanying earthquakes. We want to know, which geological features led to a gas pressure so high that it even enabled an explosive eruption in these water depths.” Like Robert Reves-Sohn, she presumes that explosive eruptions are far more common in the scarcely explored ultraslow-spreading ridges than presumed so far.

    Perhaps you would suggest that these events lead to an increase in the thickness of Arctic ice?

  342. kim
    Posted Jun 29, 2008 at 11:20 AM | Permalink

    343 (RSS) Andy Revkin at the NYT DotEarth blog claims that his experts say the extra heat was kept deep in the Arctic and swept out of the basin before it could impact the ice. I doubt that and he claims to be rechecking with the experts for an article on this coming Tuesday.

    He, on DotEarth’s latest thread, has a fascinating graphic of the evolution of the sea ice in the Arctic for the last 25 years. I believe that I can see the effect of the 1999 volcano appear as a great blue spot suddenly. You can see that blue spot move out of the Arctic over the next two years, sweeping a nice chunk of multi-year ice out with it. I’m not the least bit sure of what I see, but I find it hard to believe that a huge volume of very hot water won’t rise to the surface over two years.
    ================================================

  343. Real Richard Sharpe
    Posted Jun 29, 2008 at 11:27 AM | Permalink

    Kim,

    Linky linky link, please … although I guess google is my friend.

  344. kim
    Posted Jun 29, 2008 at 11:28 AM | Permalink

    344 (kim) I furthermore believe I can see the effects of vulcanism in the Gakkel Ridge starting about 1988 on that graphic. There is a persistent hot spot there. I’d like to know what seismic activity did during that time. Was there a burst of seaquake activity around 1995 when another small blue spot appears? We know there was increased seaquake activity in 1999.
    ================================================

  345. kim
    Posted Jun 29, 2008 at 11:30 AM | Permalink

    345 (RSS) Sorry, I’m a linky Luddite. Links deteriorate and there goes your argument. DotEarth is easy to find.
    ==============================================================

  346. Phil.
    Posted Jun 29, 2008 at 11:45 AM | Permalink

    Re #343

    So tell us Phil, how does the discovery of volcanic activity under the Arctic affect your views on what is causing these lows in sea ice etc?

    Not in the slightest, it’s a complete irrelevancy.

  347. Joe Solters
    Posted Jun 29, 2008 at 11:51 AM | Permalink

    Re: 343 & 344. Andy Revkin (NYT blog) initially rejected out-of-hand any impact from Arctic volcanoes by reciting his sources arguments about ‘too deep’ and ‘layers don’t mix’ crapola conclusions. No names, no description of studies or scientific rationale for their opinions. Let’s hope Andy comes up with names, numbers and supporting rationale on Tuesday, after checking with his ‘big 24′ sources, seeking wisdom regarding Arctic volcanic impact theories and feedbacks. What are the odds that somebody says, “Hey, we just don’t know”?

  348. D. Patterson
    Posted Jun 29, 2008 at 12:27 PM | Permalink

    340 Phil. says:

    June 29th, 2008 at 9:14 am
    Re #337
    This was missed off the above post:
    Alert weather
    Alert records

    So at Alert today’s temperature at 10am exceeds the average maximum temp by ~11ºC and the record for the day by 3ºC.
    So rather than ‘returning to normal levels’ it appears that records are being set.

    So what, Phil? The fact that a new temperature record is set at an individual observation station or a few observation stations is not by itself an unexpected event or an event heralding anything remarkable about the state of the climate. In the first year of weather observations, a new temperature record is set every day. In the second year of observations, a new temperature record is again set every day, because a new record high or low temperature is set for every minute in every hour of every day. In the third year of weather observations, new record temperatures are still very common, but they are not being established at all times as in the first and second years of observations. As subsequent years pass, the frequency at which new temperature records are set at the observation station diminishes in trend because the most common temperatures for a given date have already been observed. Nonetheless, the cyclical changes in weather patterns and cyclical climate patterns over a period of one or more centuries will continue to produce further daily temperature records on certain occasions.

    Weather stations in the Arctic have not been around very long. Even those non-Arctic weather stations with observational histories greater than one century have not been around nearly long enough to record the temperature extremes common to the 500 year climate cycles. Arctic weather stations have existed for only a relatively brief part of the major climate cycles. Consequently, there is nothing unusual or remarkable for these Arctic weather stations to observe record high and low temperatures. On the contrary, there would be something very much amiss if there were not a fairly high rate of record temperatures recorded at this early stage in their operational histories.

    Now you may choose in ignorance to let this ordinary weather event serve as a scare and boogeyman, but there is no rational basis for doing so. Anyone looking at the current weather in the region can see that Alert’s higher than usual temperature for this date is due to the warmer air being brought from south to north by the SSW winds. Observing normal or colder than usual air temperatures for the other Arctic weather stations in the surrounding 1,000 kilometers immediately demonstrates that Alert’s record high air temperature for this date is just another routine statistical phenomenon of no significant importance to the global climate trends.
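
    The record-frequency point can be made quantitative with a toy simulation (a sketch, not station data): for independent, identically distributed annual values with no trend, the chance that year n sets a new record is 1/n, so records keep appearing, they just thin out.

        import numpy as np

        rng = np.random.default_rng(0)
        n_years, n_series = 100, 20000

        # i.i.d. values for many hypothetical stations; no trend at all.
        x = rng.normal(size=(n_series, n_years))
        running_max = np.maximum.accumulate(x, axis=1)
        is_record = x >= running_max                  # True where a new record high is set

        frac = is_record.mean(axis=0)                 # fraction of series with a record in year n
        for n in (1, 2, 5, 10, 50, 100):
            print(n, round(frac[n - 1], 3), round(1.0 / n, 3))   # simulated vs 1/n

    Even in year 100, about one station in a hundred sets a new record with no trend whatsoever; a short Arctic observational history spread over many stations guarantees a steady supply of ‘records’.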

  349. Craig Loehle
    Posted Jun 29, 2008 at 2:52 PM | Permalink

    In this whole AGW mitigation debate, people are acting like our past treatment of societal risks has been handled coherently. Not quite. DDT and Alar were banned based on shaky science. No nuclear reactors have been built in the US in decades (or refineries). People take vitamin supplements and drugs to lower cholesterol with very little proof they are beneficial. Playground equipment is banned because it is “dangerous” but then kids get fat from lack of exercise. Humans have a very hard time assessing risk. The more abstract it is, the more difficult to assess (like food additives). The illusion of control (like when riding a bike) causes people to discount risk. The lack of control (like when flying) causes people to exaggerate risk. Some people are fearful of the future and see disaster ahead, others are confident. All of this is ignored when talking about AGW.

  350. Jeff C.
    Posted Jun 29, 2008 at 3:38 PM | Permalink

    Fascinating article by Dr. Roy Spencer that explains why every climate model used by the IPCC assumes a strong positive feedback in climate sensitivity when he is sure the opposite is true. Conclusion – the warmers are misinterpreting the radiative/evaporative cooling seen in the satellite data. It stems from his previous work, but I’ve never seen it explained so clearly.

    Has the Climate Sensitivity Holy Grail Been Found?
    by Roy W. Spencer, Ph.D.

    http://www.weatherquestions.com/Climate-Sensitivity-Holy-Grail.htm
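
    For readers who want to poke at the idea, here is a toy stochastic energy-balance model in the spirit of Spencer’s argument (a sketch with made-up parameter values, not his code or data): internal radiative ‘noise’ both drives the temperature and contaminates the regression of radiative flux on temperature, so the diagnosed feedback parameter comes out biased low, i.e. the inferred sensitivity comes out too high.

        import numpy as np

        rng = np.random.default_rng(1)

        lam_true = 3.0       # W m^-2 K^-1, assumed true feedback parameter
        C = 6.7              # W yr m^-2 K^-1, roughly a 50 m mixed layer (assumed)
        dt = 1.0 / 12.0      # monthly steps, in years
        n = 12 * 50          # 50 years

        # Radiative noise (e.g. cloud variations) with month-to-month persistence,
        # plus a smaller non-radiative term (ocean heat exchange). Amplitudes are made up.
        N = np.zeros(n)
        for i in range(1, n):
            N[i] = 0.9 * N[i - 1] + 0.5 * rng.standard_normal()
        S = 0.3 * rng.standard_normal(n)

        T = np.zeros(n)      # temperature anomaly, K
        for i in range(1, n):
            T[i] = T[i - 1] + dt * (N[i] + S[i] - lam_true * T[i - 1]) / C

        R_out = lam_true * T - N        # what a satellite would see as the net radiative response
        lam_diag = np.polyfit(T, R_out, 1)[0]
        print(f"true lambda = {lam_true:.1f}, regression-diagnosed lambda = {lam_diag:.1f}")

    With this noise mix the diagnosed value comes out well below the true one; how large the bias is (and even its sign) depends entirely on the assumed ratio of radiative to non-radiative variability, which is precisely the contested point.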

  351. Steve McIntyre
    Posted Jun 29, 2008 at 4:01 PM | Permalink

    #346. I agree with Phil on the irrelevancy of Arctic volcanism to this issue. Please don’t spend bandwidtgh on it.

  352. kim
    Posted Jun 30, 2008 at 5:17 AM | Permalink

    350 (Jeff C) Hah! Climate sensitivity as an artifact of smoothing! Whodathunkit.
    =================================================

  353. Dave Dardinger
    Posted Jun 30, 2008 at 7:28 AM | Permalink

    re: #350, 352,

    Indeed interesting! Actually I’ve just been reading Spencer’s book and think the science portion is quite interesting and well done. But this article seems to be entirely different and possibly important. As usual, it requires hearing what others might have to say on the subject before making a final judgment. So if RC chimes in, let me know.

  354. Andrew
    Posted Jun 30, 2008 at 10:59 AM | Permalink

    353 (Dave Dardinger): Since much of this draws on Spencer and Braswell 2008 they sort of already have responded-and Roy has hit back:
    http://motls.blogspot.com/2008/05/realclimate-vs-roy-spencer-non-feedback.html

  355. Posted Jul 1, 2008 at 3:43 PM | Permalink

    Wrote turn-key Matlab version of the Hockey Stick,

    http://signals.auditblogs.com/2008/07/01/hockeystick-for-matlab/

    http://signals.auditblogs.com/files/2008/07/hockeystick.txt

    pl. let me know if it works.

  356. Andrew
    Posted Jul 2, 2008 at 8:36 PM | Permalink

    146 (Hank Roberts): Why do you quote a paper rebutting Schwartz as rebutting Chylek? Who suggested a conspiracy? It is funny that you are so clingy about the “climate sensitivity is 3 K per 2xCO2” figure. It is more than a little disturbing how unshakable your belief is, when even an estimate of 0.9 to 2.9 is somehow “coming back” to the same old number again. The real problem with Schwartz’s paper is that he supposes that the surface temperature record is correct.

    I’ll say it again – I can make arguments to lower this value further. I don’t see the sense in saying the true value is higher anymore.
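
    For context, Schwartz-style estimates boil down to a one-box energy balance; a minimal sketch (with assumed illustrative numbers, not Schwartz’s published values) of how a heat capacity and a relaxation time turn into a sensitivity:

        # One-box model: C dT/dt = F - lam*T, so the relaxation time is tau = C/lam.
        C = 17.0      # W yr m^-2 K^-1, effective ocean heat capacity (assumed)
        tau = 8.5     # yr, relaxation time estimated from temperature autocorrelation (assumed)

        lam = C / tau                    # W m^-2 K^-1
        sens = 1.0 / lam                 # K per (W/m^2)
        print(f"lambda = {lam:.1f} W m^-2 K^-1, sensitivity = {sens:.1f} K/(W/m^2)")

    With the doubling conversion sketched earlier in the thread, that 0.5 K/(W/m^2) is roughly 1.9 K per doubling, inside the 0.9 to 2.9 range quoted above – and the result scales directly with tau, which is estimated from the very surface temperature record whose accuracy is being questioned.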

  357. Andrew
    Posted Jul 2, 2008 at 8:41 PM | Permalink

    In fact, this is the reference in the paper:

    Attention is called also to other recent independent estimates of climate sensitivity that are likewise at the low end of the IPCC [2007] range: 0.29 to 0.48 ± 0.12 K/(W m-2) [Chylek et al., 2007]

    Not negative at all. If anything, Schwartz is looking for papers to support his revised estimate still being fairly low.

  358. Andrew
    Posted Jul 2, 2008 at 9:13 PM | Permalink

    Also, Steve? Would you mind moving this little convo to unthreaded, instead of deleting it as increasingly off topic?

  359. PaulM
    Posted Jul 3, 2008 at 5:37 AM | Permalink

    355 UC, your Matlab script doesn’t work for me (Matlab 7.4, Linux). Perhaps a firewall issue?

    ??? Error using ==> urlwrite at 134
    Error downloading URL.

    Error in ==> hockeystick at 24
    urlwrite(‘http://holocene.meteo.psu.edu/shared/research/ONLINE-PREPRINTS/Millennium/DATA/RECONS/nhem-recon.dat’,’nhem_recon.dat’)

  360. Posted Jul 3, 2008 at 6:45 AM | Permalink

    PaulM, I guess it’s the firewall.

  361. bernie
    Posted Jul 3, 2008 at 7:10 AM | Permalink

    This is on an entirely different tack, but I would be interested in other people’s reactions to Naomi Oreskes’ video presentation:

    The American Denial of Global Warming
    It is kind of long and unfortunately I cannot find a write up or transcript.

    IMHO it is a very well constructed presentation, and it does lay out a plausible story about one possible source of the continuing debate over whether or not the science is settled around AGW. However, like Gore’s AIT, she does employ a series of rhetorical tricks which I found disturbing and which prompt me to look at elements of her argument more carefully. Given the central concerns at CA, one assertion of particular interest is her claim that the polar amplification prediction has been proven by the 2000-2005 temperature records. I would be especially interested in an assessment of the validity of that assertion.

  362. TinyCO2
    Posted Jul 3, 2008 at 11:14 AM | Permalink

    Next week I will be able to attend a presentation on climate change by ‘weather and climate experts’ of the UK Met Office. If you were me, what one question would you ask them?

  363. Ice Anomaly
    Posted Jul 3, 2008 at 12:31 PM | Permalink

    Anthony censored my comment on his own blog, in the thread about the northwest passage being impassable. I’m posting it here.


    Of course the NW Passage is still impassable. It’s early July still.

    No one is predicting an “ice free Arctic Sea” this summer. Some are predicting an ice free North Pole. Ice melt around the pole is not symmetrical – far from it. It is influenced by the shape of the basin, the winds, where ice piles up, the distribution of first-year and multi-year ice, and so on. The prediction is that the sea AT the north pole will be ice free – everyone acknowledges that there will still be a lot of ice in the arctic basin, even if the pole itself melts this year.

    Jack Simmons said:
    “As some of the ice retreats from the coastal regions of Greenland today, some of the Viking farms are being revealed, for the first time in “recorded history”.”
    ONE farm was uncovered, and it was not uncovered by retreating ice. It was buried under sand in a river bed, and uncovered as the sand washed out.
    Viking farming was always tenuous, and the Vikings were dependent on hunted seal meat in most years. Poor farms regularly lost all breeding stock in winter, and had to be re-stocked from farms in more advantageous areas. Only a handful of the farms were ever able to raise cattle, and they had the smallest cattle recorded in any human settlement.
    Today Greenland grass farmers run a thriving dairy economy; they are getting two hay cuttings a year for the first time ever, and they are growing crops the Vikings could not. The ag evidence is consistent with Greenland being as warm as or warmer than when the Vikings were there.

    No one says there aren’t going to be wins in AGW-induced change. The argument is that the net is going to be very strongly on the loss side, with sea level rise and loss or expensive impact on low elevation infrastructure and lands, degradation of ecosystems and loss of accompanying ecosystem services, ag climate belt shifts with desertification of some highly productive farmlands and shifts of staple crop belts onto less rich farmland, and so on.

  364. Posted Jul 3, 2008 at 12:49 PM | Permalink

    # 361

    Bernie,

    It is just that: rhetoric. Oreskes didn’t show a single physical basis to demonstrate what she was saying. Oreskes is a servile helper of Hansen.

  365. Posted Jul 3, 2008 at 1:05 PM | Permalink

    # 362

    TinyCO2,

    I would ask them:

    1. Why does the increase in the atmospheric concentration of CO2 follow the increase in temperature, and not the other way around?

    2. How is it that the oceans are now cooling, while CO2 is increasing, if the CO2 is supposedly absorbing heat radiated from the surface and radiating heat back to the surface? Consider that the concentration of CO2 in the atmosphere is 385 ppmV. It seems that the whole effect depends on the absorption of energy by the surface, not on the amount of CO2 in the atmosphere.

  366. John M
    Posted Jul 3, 2008 at 1:32 PM | Permalink

    IceAnomaly 363

    Your comment is posted over there.

    Perhaps you don’t realize that Wattsupwiththat is a moderated blog, and it sometimes takes a few hours to clear the moderation queue.

  367. bernie
    Posted Jul 3, 2008 at 1:46 PM | Permalink

    Nasif:
    Thanks for the response. I did understand that the presentation was not scientific, but largely political. That said, I am still interested in the validity of one of her factual scientific assertions, namely that polar amplification has been demonstrated. Until I heard her say this, I understood that one of the weaknesses of the models is that polar amplification has not appeared as predicted. Both assertions are testable, but both cannot be correct (unless you are really cute with the baselines and periods you choose – I guess).

    I also prefer not to cast aspersions on people’s motives – I have no idea as to her connection with Hansen, though I admit that a close affiliation would be ironic given the content of her presentation.

  368. Posted Jul 3, 2008 at 6:07 PM | Permalink

    It seems that the MWP and the LIA and the recent warming are present in the record [at least according to this reconstruction]:

    GEOPHYSICAL RESEARCH LETTERS, VOL. 35, L13703, doi:10.1029/2008GL034187, 2008
    A late Quaternary climate reconstruction based on borehole heat flux data, borehole temperature data, and the instrumental record
    S. P. Huang
    Department of Geological Sciences, University of Michigan, Ann Arbor, Michigan, USA
    H. N. Pollack
    P.-Y. Shen
    Abstract
    We present a suite of new 20,000 year reconstructions that integrate three types of geothermal information: a global database of terrestrial heat flux measurements, another database of temperature versus depth observations, and the 20th century instrumental record of temperature, all referenced to the 1961–1990 mean of the instrumental record. These reconstructions show the warming from the last glacial maximum, the occurrence of a mid-Holocene warm episode, a Medieval Warm Period (MWP), a Little Ice Age (LIA), and the rapid warming of the 20th century. The reconstructions show the temperatures of the mid-Holocene warm episode some 1–2 K above the reference level, the maximum of the MWP at or slightly below the reference level, the minimum of the LIA about 1 K below the reference level, and end-of-20th century temperatures about 0.5 K above the reference level.

  369. Ice Anomaly
    Posted Jul 3, 2008 at 7:27 PM | Permalink

    re 366, John M.

    My previous posts at WattsUp have shown themselves to me, with a tag line that the post is awaiting moderation. That post, and the one I just made, simply disappear. My experience in the past is that this is what happens when I’m being blocked. Certainly, it is exactly what happened some time back when Anthony blocked me and then went back and retroactively removed all my previously-approved posts from the previous couple of months.

    I just posted the following over there; again, it simply disappeared, with no ‘awaiting moderation’ preview appearance.

    Tamara asks:
    “Lets break this down: 1.) Is the ice AT THE NORTH POLE single year ice (i.e. it was ice free last year)? http://nsidc.org/sotc/sea_ice.html.
    The ice moves. It can be single year ice at the pole, even if the pole didn’t melt last year.

    2.) Is it unheard of/unusual to find open water at the North Pole? (see Oldjim’s link above).

    Finding ‘Open water’ is a much different thing from ‘ice free.’

    3.) Is there proof that global warming is causing the disappearance of Arctic Ice? Will this be supported by the behavior of the ice this year, and if not what other factor is interfering with AGW?

    Proof? of course not – there is no ‘proof’ that gravity makes apples fall, either. What there is, is north polar amplification as predicted by the modern models, ice loss FASTER than predicted by the models, positive feedback as predicted by the models. Arctic ice area is at low negative anomalies not seen previous to two years ago, and dropping fast. This will, at the very least, be the second lowest level observed, by a LONG way over the third lowest. Accompanied by GRACE satellite data showing substantial ice loss from Greenland, and large temperature increases in recent years, all in accord with model predictions – this is very strong evidence for AGW in action.

    4.) Is the ice at the North Pole the thickest/oldest ice, and therefore an indicator of an alarming climate shift?”
    No. The thickest/oldest ice tends to congregate around eastern Canada and Greenland, where it gets pushed and piled up by wind and current. The amount of multi-year ice even there is way, way, way down from 20 years ago.
    The fact that temps at the north pole itself are getting warm enough to create the possibility of an ice free (not just some open water) N pole IS an indicator of a large climate shift.

  370. M. Jeff
    Posted Jul 3, 2008 at 7:55 PM | Permalink

    Ice Anomaly,July 3rd, 2008 at 7:27 pm says:

    re 366, John M.

    My previous posts at WattsUp have shown themselves to me, with a tag line that the post is awaiting moderation. That post, and the one I just made, simply disappear. …

    The post that you refer to above in #363, July 3rd, 2008 at 12:31 pm, is currently viewable on the Anthony Watts site. It is not awaiting moderation. Excerpt copied from there:

    IceAnomaly (10:50:50) :

    Of course the NW Passage is still impassable. It’s early July still. …

  371. M. Jeff
    Posted Jul 3, 2008 at 8:13 PM | Permalink

    #369
    Another excerpt from the Anthony Watts site:

    IceAnomaly (18:21:34) :

    Tamara asks:
    “Lets break this down: 1.) Is the ice AT THE NORTH POLE …

  372. Gerald Machnee
    Posted Jul 3, 2008 at 10:24 PM | Permalink

    Re #361 and #364 – the Oreske hour. It is essentially a wasted hour if you watch it. There is no real science and the polar amplification is not specific enough. She seems to be more concerned with showing how the tobacco arguments are used in the global warming debate. And we cannot expect science as she is a history professor.

  373. bernie
    Posted Jul 4, 2008 at 7:33 AM | Permalink

    Gerald:
    Oreske is a scientist, or was a scientist if you insist. Our host probably is less of a scientist, so this point is neither here nor there. I do agree with you that the video is an inefficient way to understand her point. I also agree that there is minimal science in her presentation – it is more a public policy background presentation. The video does help understand the audience she is playing to and how scientifically and politically astute they are. Finally, from a policy perspective, I think that she is correct in identifying a central theme in the motivation of those associated with the Marshall Institute, namely a strong preference for minimal government intervention – I share that preference.
    Her points on polar amplification were specific – namely the prediction was that the temperature change at the poles would be 4X the global change. This is a powerful rhetorical point. I would like to confirm that this is a misrepresentation of the actual data.

  374. Real Richard Sharpe
    Posted Jul 4, 2008 at 9:45 AM | Permalink

    Stop the presses. We now have to worry about NF3. It’s more dangerous than CO2.

    More at Plasma, LCDs blamed for accelerating global warming

    A gas used in the making of flat screen televisions, nitrogen trifluoride (NF3), is being blamed for damaging the atmosphere and accelerating global warming.

    Almost half of the televisions sold around the globe so far this year have been plasma or LCD TVs.

    But this boom could be coming at a huge environmental cost.

    The gas, widely used in the manufacture of flat screen TVs, is estimated to be 17,000 times as powerful as carbon dioxide.

    Ironically, NF3 is not covered by the Kyoto protocol as it was only produced in tiny amounts when the treaty was signed in 1997.

    Levels of this gas in the atmosphere have not been measured, but scientists say it is a concern and are calling for it to be included in any future emissions cutting agreement.

    Professor Michael Prather from the University of California has highlighted the issue in an article for the magazine New Scientist.

    He has told ABC’s The World Today program that output of the gas needs to be measured.

    Oh.

  375. Ice Anomaly
    Posted Jul 4, 2008 at 10:04 AM | Permalink

    M Jeff, yes, it looks like they are getting through. The phenotype, though, is different from normal, and looks exactly like what I saw when Anthony expunged me from his site some months back.

    Steve:
    please stop this niggling.

  376. Gerald Machnee
    Posted Jul 4, 2008 at 11:00 AM | Permalink

    Re #373 Bernie – I could not resist the comment about history as similar comments had been made about Steve.
    My view is that any intelligent person should be able to understand the subject if their view is not clouded by preconceived beliefs. I feel her talk is one sided. She uses too many “in fact”s. If you change a few of her words or names you could apply her discussion in reverse and show how the same arguments are being used to promote AGW or just warming without any science.
    Re #374 – Plasma – Another whipping boy. The main problem with this gas may be the pollution. But the same applies to this gas as with CO2. Can someone provide a detailed calculation of the “heating” effect as Steve has asked about CO2? It is not enough to say that it is 17,000 times as strong as CO2 without an accurate measurement. Note that 17,000 times not much may still be only a little more than not much.

  377. bernie
    Posted Jul 4, 2008 at 11:47 AM | Permalink

    Gerald:
    Again I am in agreement. It is always fun to see if you can reverse the current so to speak when someone is making one of these speeches. That said – I am still waiting (and hoping) for a rebuttal of her polar amplification assertion.
    On the sensitivity issue, Roger Pielke Snr referenced a recent NRC report on Radiative Forcings on Climate Change which calls for a pretty extensive research program. The scope of this program is such that I think Steve’s inquiry about CO2 forcing is right on target – the Executive Summary certainly suggests that the key assumptions with respect to CO2’s net impact are, shall we say, more presumptive than proven. Perhaps Steve can get the low down on this committee’s functioning – Michael Mann was also on the Committee with Roger! I do not recall any comments about the report on RC.

  378. Posted Jul 4, 2008 at 1:30 PM | Permalink

    Re #376

    The reason NF3 has a strong GH effect is that it has a very strong absorption line in a region of the IR spectrum where there are no other absorbers.
    NF3 spectrum

  379. Posted Jul 4, 2008 at 5:15 PM | Permalink

    tinyCo2 #3620

    I suspect it will be the standard presentation the govt has instructed them to deliver to all businesses. If you go to my ‘hadley centre presentation’ thread over on the message board (it’s some way down there) I have shown the presentation they were using. I would be very interested if you would post a comment after your indoctrination as to whether the content and message was the same. In particular see if they are issuing hockey sticks…!

    TonyB

  380. Gerald Machnee
    Posted Jul 4, 2008 at 8:11 PM | Permalink

    Re #374 – **The gas, widely used in the manufacture of flat screen TVs, is estimated to be 17,000 times as powerful as carbon dioxide.**
    More on this. The 17,000 times refers to an equal amount of each gas. The total effect has to be calculated on the total mass of the gases.
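
    A minimal sketch of that mass-weighting arithmetic, in Python. The 17,000 figure is the per-mass potency quoted in the article; the tonnage values are placeholders chosen only to show the calculation, not measured emissions.

    ```python
    # Mass-weighting sketch: a per-kilogram potency factor (GWP) only matters
    # once it is multiplied by the mass actually emitted.
    GWP_NF3 = 17_000        # CO2-equivalent per unit mass, figure quoted above
    GWP_CO2 = 1

    nf3_emitted_t = 1.0e3   # placeholder: thousands of tonnes per year
    co2_emitted_t = 3.0e10  # placeholder: tens of gigatonnes per year

    nf3_co2eq = nf3_emitted_t * GWP_NF3
    co2_co2eq = co2_emitted_t * GWP_CO2

    print(f"NF3 contribution: {nf3_co2eq:.3g} t CO2-eq")
    print(f"CO2 contribution: {co2_co2eq:.3g} t CO2-eq")
    print(f"ratio NF3/CO2:    {nf3_co2eq / co2_co2eq:.2e}")
    ```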

  381. Nick
    Posted Jul 5, 2008 at 5:06 AM | Permalink

    Can anyone help me?

    This is from the Met Office’s website

    “Our climate is changing and for many gardeners the first evidence of this can already be seen in gardens across the UK. Seasons are already changing with spring now arriving up to six days earlier than a decade ago and autumn ending up to two days later.”

    How do they (or anyone else, for that matter) estimate the precise day that spring arrives or autumn departs? Maybe I’m a bit too ignorant on this matter, but how can the “fact” that autumn is leaving UP TO 48 hours later than 10 years ago (note the “UP TO”) be of any consequence…unless you want to alarm people, that is.

    I’m a bit bewildered.

  382. John M
    Posted Jul 5, 2008 at 7:35 AM | Permalink

    Nick # 381

    I can’t answer your questions, but if it makes you feel any better, according to Newsweek in 1975, it may mean we’re just getting back to “normal”.

    In England, farmers have seen their growing season decline by about two weeks since 1950, with a resultant overall loss in grain production estimated at up to 100,000 tons annually.

  383. Posted Jul 5, 2008 at 10:06 AM | Permalink

    John #382

    Great post! I don’t want to take people off topic so can anyone point me towards a source that deals objectively with the global cooling scare? Here in the UK it was very big news but everyone now says it was just one Newsweek article that set the hares running.

    Bearing in mind this was pre internet and generally worse global communications I find that difficult to believe as cooling was reported everywhere.

    TonyB

  384. John M
    Posted Jul 5, 2008 at 10:48 AM | Permalink

    TonyB,

    I don’t know of any unbiased treatment. There are plenty of sites that list historical examples of media hype. Another example is the Time Magazine piece from 1974. I also remember a George Will editorial listing a lot of examples.

    There is a point to be made, however, that sometimes these summaries quote out of context, and that the consensus was more media hype than reality, but even that POV has to acknowledge the role of media hype in distorting reality, which is as true today as it might have been in the 70s.

    To me, the take home message is how little work seems to actually go into journalistic reporting. It’s like those old Mad Magazine features, where a template is provided, with a list of fill-in-the-blank words to create a document. No critical skills required.

    Just conduct this little exercise. Take either the Time or the Newsweek article, do a global replace of “warm” for “cool”, take a quick look for the fragments “low”, “short”, etc., fix those, and voila, an article for the new millennium.

  385. DeWitt Payne
    Posted Jul 5, 2008 at 2:22 PM | Permalink

    NF3 isn’t just used in the manufacture of plasma and LCD TV’s. Potentially it can be used in the manufacture of all silicon based semiconductors, including photovoltaic cells for conversion of sunlight to electricity. It replaces, and is apparently more efficient than, sulfur hexafluoride and tetrafluoromethane, which are also ghg’s. See here for an alternative perspective.

  386. Andrew
    Posted Jul 5, 2008 at 2:23 PM | Permalink

    381 (Nick): Yes, especially since seasons are nothing more than arbitrary divisions of the Earth’s orbital period by human beings, with arbitrary official non-changing start and end dates.

  387. Andrew
    Posted Jul 5, 2008 at 2:31 PM | Permalink

    385 (Me): Okay, I’m semi-wrong – there are meteorological and astronomical reckonings of seasons. Still, based on the definition, I don’t see how they could really “change”.

    http://en.wikipedia.org/wiki/Season

  388. DeWitt Payne
    Posted Jul 5, 2008 at 2:48 PM | Permalink

    I think they’re referring to the length of the growing season, i.e. the time between the last killing frost in Spring to the first frost of Autumn. That can and does change with climate.
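
    A minimal sketch of that definition in code, assuming daily minimum temperatures and a 0 °C frost threshold; the function, the threshold and the toy data are illustrative only, not any agency’s method.

    ```python
    import datetime as dt

    def growing_season(dates, tmin, frost_threshold=0.0):
        """Span between the last spring frost and the first autumn frost."""
        year_mid = dt.date(dates[0].year, 7, 1)
        spring = [d for d, t in zip(dates, tmin) if d < year_mid and t <= frost_threshold]
        autumn = [d for d, t in zip(dates, tmin) if d >= year_mid and t <= frost_threshold]
        if not spring or not autumn:
            return None
        start, end = max(spring), min(autumn)
        return start, end, (end - start).days

    # Toy series: frost-free except one April night and one October night.
    days = [dt.date(2007, 1, 1) + dt.timedelta(n) for n in range(365)]
    tmin = [5.0] * 365
    tmin[100] = -1.0   # 11 Apr
    tmin[290] = -0.5   # 18 Oct
    print(growing_season(days, tmin))   # -> (2007-04-11, 2007-10-18, 190)
    ```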

  389. Nick
    Posted Jul 5, 2008 at 3:33 PM | Permalink

    [387] Thanks! That may be what they’re referring to but, if that is the case, it’s a very unscientific measure, IMHO. A killing frost depends on clear skies and a light wind and, of course, a cold air mass. With the complexity of the island climate we have in the UK, it is a complete lottery as to whether, in any given year, there is a run of lows coming in off the Atlantic bringing mild air at the time of the season change (and any intervening ridges of slightly higher pressure to reduce cloud cover, reduce wind speed and create slight frost, would have to coincide with hours of darkness) OR a stubborn anticyclone bringing a cold air mass with light winds for days on end. Trying to determine a specific trend in the growing season – to an accuracy of up to 6 days – from such a dynamic atmospheric system seems impossible. I note that the IPCC also go on about growing seasons in their reports. Now, if seasonal temperatures were to be shown to be, say, 30 days adrift from the norm over a protracted period, I might start to buy it, but “up to 6 days”? Nah, sorry!

  390. cba
    Posted Jul 5, 2008 at 5:09 PM | Permalink

    DeWitt,

    Here is the first results for both upward and downward radiation in my 1-d model. It still doesn’t have aerosols or h2o continuum. However, it has taken two false starts as the files tend to get screwed up when they get too large. The excel generating file for the wavelength based raw data is right at 1GB in size and bad things happen when that value is exceeded.

    What’s interesting is the relative closeness to K&T (and modtran) for some of the values, despite some significant differences in approach.

  391. Pliny
    Posted Jul 5, 2008 at 7:42 PM | Permalink

    #383 TonyB You probably weren’t looking for an article titled “The Myth of the 1970s Global Cooling Scientific Consensus”. But it does make a conscientious effort to reference the articles that were published over the period.

  392. Stan Palmer
    Posted Jul 5, 2008 at 8:03 PM | Permalink

    http://news.bbc.co.uk/2/hi/science/nature/7484975.stm

    The BBC reports that Stradivarius violins owe their quality to the wood:

    The unique sounds of a Stradivarius violin may come down to the density of the wood it is made from.

    Scientists say the patterns of the grain are markedly different from modern instruments.

    It is believed that the seasonal growth of trees in the early seventeenth century was affected by a mini-Ice Age.

    Stradivarius had the benefit of wood that was produced in conditions that have not been repeated since then, the journal Plos One reports.

    I suppose that this part of the BBC does not know that the LIA did not occur.

  393. Andrew
    Posted Jul 5, 2008 at 8:16 PM | Permalink

    390 (Pliny): That article is blatant political advocacy, and it is attacking a caricature of a naive argument anyway.

  394. Philip_B
    Posted Jul 6, 2008 at 3:22 AM | Permalink

    Re: Naomi Oreske’s video presentation

    If it’s the one I watched, its main narrative is that we have known about the CO2 GH effect for a long time and as scientific research occurred we have come to progressively better understand the GHE. Oreskes presents this as a necessary consequence of science, which of course it’s not. And at the risk of getting snipped, it’s a view of science that owes more to Marx than Popper or Kuhn.

    Certainly, much of science works this way. However, this process is periodically and unpredictably interrupted by new data forcing part or all of existing theories to be questioned or discarded.

    New and better data may or may not support existing theories. Specifically, in the case of the CO2 GHE, new data has progressively weakened the case for the theory in the view of some leading scientists such as Freeman Dyson. Hence their calls for more and better data.

  395. Posted Jul 6, 2008 at 3:43 AM | Permalink

    pliny #391

    I tend not to look for articles headed ‘myth’ as they are usually biased!!

    It was an interesting read and at first I was prepared to accept it at face value – as I have done with other articles saying much the same thing.
    However it still seems extraordinary that such a tiny ripple could cause such a huge wave prior to the internet. Global communications were much poorer generally, yet it evoked the same sort of reaction as did the millennium bug and global warming (although the latter has gone on longer – probably due to the amounts of research money…)

    Any scientists from that era still around who can tell me whether this was a serious issue in their community or it was just a small issue that was jumped on by one part of the media which then snowballed?

    TonyB

  396. Pliny
    Posted Jul 6, 2008 at 5:43 AM | Permalink

    TonyB – “Scientists from that era still around”? Well, maybe I qualify. I was a junior mathematician with CSIRO in Australia in the 70′s. My first professional contact with climate issues came when I arrived in Perth in 1976. The WA State government had a problem that they asked us for advice on. It had recently become economic for wheat farmers to plant in fairly marginal areas, where they might expect a crop only every second year or so. But to develop those areas required State investment in rail, silos etc. What were the long-term prospects?

    We asked our experts. The advice was clear – the greenhouse effect was the strongest influence expected. The rain that makes wheat possible in SW Australia comes from the belt of west winds (roaring forties), which come north in the winter, and bring almost all their rain. The global (Hadley) circulation cell that drives them would expand, forcing the winds further south. So the outlook was unfavourable, as we reported. It turned out to be right; drought in the wheat belt has since been a big problem.

    In 1979 I was in Melbourne, working with scientists in Atmospheric Research. The greenhouse effect was the dominant climate interest, and by now they were really looking for evidence of CO2 effects in Australian climate data, particularly rainfall. You couldn’t then look up a Hadcrut plot on the internet. Just getting raw data was a tedious logistical exercise, and it needed a lot of processing to get a coherent set. Still, we got a lot of help from the Met Bureau. The result, at least with the data we could get, turned out to be at that stage not statistically significant. Negative results are not easy to publish in a journal, but we did present a conference paper.

    Some of my colleagues went to the First World Climate Conference in 1979. As I heard it from them, the talk was all about the greenhouse effect.

  397. Philip_B
    Posted Jul 6, 2008 at 6:35 AM | Permalink

    The rain that makes wheat possible in SW Australia comes from the belt of west winds (roaring forties), which come north in the winter, and bring almost all their rain. The global (Hadley) circulation cell that drives them would expand, forcing the winds further south.

    Pliny, this is about what climate change should mean (although it rarely does). Has there been research to see if the prediction of a decrease in Southern Ocean westerlies over the SW of WA has in fact occurred? I know rainfall has decreased since the mid 1970s, but do we have data showing the climate mechanism?

  398. Ellis
    Posted Jul 6, 2008 at 6:49 AM | Permalink

    There is a new page at gistemp that some might find interesting. There is a map with every station used in their analysis, also there is an answer to the age old question, why 1951-1980 for a baseline,

    We choose 1951-1980 as the base period because that was the base period at the time that “global warming” began to be a public issue. Als o it is the time that “baby boomers” grew up, so this choice for base period allows those people to relate today’s climate to that which they remember.

    Very scientific. I guess this means P.D. chose 1961-1990 so us gen x’ers can relate today to what we remember from our youth.

    As a side note, and I know this doesn’t bother Lucia, but the amount of typos at the page is unbelievable, to the point that it is hard to read. I mean it is tough enough to try and understand what they are trying to say without having to go back and re-read just to figure out the words.

    On the off chance that J.H. or M.S. has waded through 400 comments in an unthreaded at CA, one word, interns.

  399. bernie
    Posted Jul 6, 2008 at 7:23 AM | Permalink

    Philip_B:
    Wrt the Oreskes’ presentation, somebody appears to be tidying up the temperature record. The part of her presentation that I am interested in is strictly the polar amplification story and its accuracy. In trying to follow up I went to the GISS location. I don’t remember these particular options being available previously – perhaps somebody more familiar with the GISS site can clarify. It would be interesting to know when this was put up.
    That said, I went looking for the chart that Naomi Oreskes used in her presentation, The American Denial of Global Warming. As displayed in her presentation at about 21′ 30″, the chart for the 2001 – 2005 Mean Surface Temperature Anomaly indicates a mean anomaly of 0.53C. She argues that the chart shows the predicted polar amplification of 4X. Clearly, while her chart indicates that this is approximately true for the Arctic, it is definitely not true for the Antarctic. A very similar if not identical chart appears in Hansen et al, 2006. I tried to duplicate this chart at the GISS site. When I entered the annual anomaly for surface and ocean for the period 2001 to 2005 I got a very different looking chart – the mean global anomaly was still 0.53C, but the accompanying chart of anomaly by latitude gave a very clear indication that the polar amplification for the Arctic was approximately 2X, not 4X. The amplification for the Antarctic was less than 1!! These numbers are based on the 1200km radius smoothing. Matters get far more problematic if you use the 250 km smoothing because of the huge amount of missing data.
    It certainly looks to me that somebody has been very busy in the last 2 years refining the existing data. I would be interested in getting other opinions. If things are as I report, then I can alert Naomi and she will be able to amend her comments accordingly.
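
    For what it is worth, the amplification factor under discussion is just the ratio of a polar-cap mean anomaly to the global mean anomaly, area-weighted by latitude. A minimal sketch with made-up band anomalies (not GISS output):

    ```python
    import numpy as np

    lat_band_centres = np.arange(-85, 90, 10)   # 10-degree latitude bands
    anom = np.array([0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.5,
                     0.5, 0.6, 0.6, 0.7, 0.8, 0.9, 1.1, 1.3, 1.5])  # hypothetical degC

    w = np.cos(np.deg2rad(lat_band_centres))    # area weight per band
    global_mean = np.average(anom, weights=w)

    arctic = lat_band_centres >= 65              # polar cap north of 65N
    arctic_mean = np.average(anom[arctic], weights=w[arctic])

    print(f"global mean anomaly:  {global_mean:.2f} C")
    print(f"Arctic mean anomaly:  {arctic_mean:.2f} C")
    print(f"amplification factor: {arctic_mean / global_mean:.1f}x")
    ```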

  400. steven mosher
    Posted Jul 6, 2008 at 7:32 AM | Permalink

    re 398. Yes, I remember the period 1951-1980. It was much cooler then,
    a lot cooler. Oops, I lived in a chilly part of the US then. Now, the base period makes no difference whatsoever, so why justify it with the lame argument that people will remember what it was like when they were young?
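
    A minimal sketch of why the choice of base period is cosmetic: anomalies computed against two different baselines differ only by a constant offset, so the trend is unchanged. The temperature series below is synthetic.

    ```python
    import numpy as np

    years = np.arange(1951, 2008)
    temps = 14.0 + 0.01 * (years - 1951) + 0.1 * np.sin(years / 3.0)  # fake series

    anom_5180 = temps - temps[(years >= 1951) & (years <= 1980)].mean()
    anom_6190 = temps - temps[(years >= 1961) & (years <= 1990)].mean()

    trend_a = np.polyfit(years, anom_5180, 1)[0]
    trend_b = np.polyfit(years, anom_6190, 1)[0]

    # The two anomaly series differ by a constant, so their spread of differences is ~0
    print("spread of (anom_5180 - anom_6190):", np.round(np.ptp(anom_5180 - anom_6190), 12))
    print("trend with 1951-1980 base:", trend_a)
    print("trend with 1961-1990 base:", trend_b)   # identical
    ```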

  401. Andrew
    Posted Jul 6, 2008 at 8:43 AM | Permalink

    399 (bernie): Re: Polar amplification: Yes and no. Yes if you restrict yourself to the last thirty years, but not so much if you look back farther:
    http://www.worldclimatereport.com/wp-images/arctic_temps2.JPG
    Oreskes’ claim is misleading, however, as any warming will be accompanied by polar amplification because of the ice albedo feedback.

  402. RomanM
    Posted Jul 6, 2008 at 8:49 AM | Permalink

    #400 stevenm.

    … and the snow piles were much higher then, too… ;)

    You just don’t appreciate how clever this new presentation is. Comparing the current temperature only to the 66.6th percentile of the fixed period 1951 – 1980 gets rid of those pesky misleading downturns in the temperature record in recent years. Virtually all of the station annual means could drop in a given year with very little change in their graphs.

  403. bernie
    Posted Jul 6, 2008 at 8:50 AM | Permalink

    Andrew:
    Thanks for the link – but can you provide a more complete source? Yes to what? Are you saying that she is correct? Then how come the GISS record indicates a different amplification factor?
    Also, do you have a view of what happened to the GISS records?

  405. steven mosher
    Posted Jul 6, 2008 at 9:04 AM | Permalink

    re 403. Yes, it’s a clever little trick. I used to see snow in my backyard.
    Now I see it only in the movies. What in the world is happening?

  406. Andrew
    Posted Jul 6, 2008 at 9:17 AM | Permalink

    405 (bernie): If she is saying that the arctic has warmed faster than the rest of the world for the last 30 years, she is right. If she makes more of it than that she is either wrong or misleading.

    As for what is going on with the difference between her data and the GISS data – who knows, maybe she mislabeled/misspoke, or something more sinister.

    Ironically, the first person to point out rapid high latitude warming as evidence of GHG warming seems to have been Pat Michaels:

    http://www.int-res.com/articles/cr/14/c014p001.pdf

  407. Ellis
    Posted Jul 6, 2008 at 10:02 AM | Permalink

    Along the lines of 403, Roman, how long until this graph can only be found on the internet archive wayback machine?
    I give the over/under at labor day.

  408. RomanM
    Posted Jul 6, 2008 at 12:15 PM | Permalink

    #407 Ellis

    It won’t disappear, it will be “adjusted”.

  409. Steve McIntyre
    Posted Jul 6, 2008 at 12:35 PM | Permalink

    Federer- Nadal – how great is this match!! 2 sets all. Federer was down 5-2, Nadal serving, in the 4th set tiebreak. Unbelievable tennis. One of the classic matches in any sport going on right now.

  410. Posted Jul 6, 2008 at 12:37 PM | Permalink

    Re #409
    Yeah I thought the rain break favoured Federer, however no more tiebreakers in the 5th.

  411. Posted Jul 6, 2008 at 12:39 PM | Permalink

    Steve #409, My prediction is Federer 6-4 in the 5th with Nadal folding like a lawn chair under the pressure…

  412. DeWitt Payne
    Posted Jul 6, 2008 at 1:21 PM | Permalink

    cba,

    Nice work.

    I presume you’re using NTFS disk formatting. FAT32 has a limit of 1GB for file size. That’s why there are multiple .vob files on a DVD. My guess is the 1GB file size problem is in Excel. At some point, we’ll all have to learn R.

    I’ve been thinking about the ‘no greenhouse effect in an isothermal atmosphere’ thing. It’s nominally true in that for line absorption an isothermal atmosphere should absorb just as much energy as it emits at every layer (ignoring solar absorption in the atmosphere and line broadening for the moment). The problem comes at the top and bottom. Since there is no downwelling radiation at the top, the top layer will be emitting more radiation than it receives from the layer below and must cool. That in turn will cool the layer below, etc. and result in a net energy imbalance requiring surface warming. At the bottom, assuming constant energy input, the emission from the bottom layer must warm the surface. That temperature difference will generate convective warming in the layers above and yet more surface warming. Hence, an isothermal atmosphere is probably not stable to radiative heat transfer and will end up with some greenhouse effect.

    I think there’s a similar problem, but with opposite sign, with an atmosphere with a dry adiabatic lapse rate, but I haven’t worked out the details yet.
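
    A minimal sketch of that top-layer argument, using a stack of fully absorbing (“gray”) layers all held at the same temperature; the layer count and temperature are arbitrary choices for illustration, not a model of the real atmosphere.

    ```python
    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def net_radiative_flux(n_layers=5, T=255.0):
        """Net absorbed-minus-emitted flux for each opaque isothermal layer."""
        e = SIGMA * T**4
        net = []
        for i in range(n_layers):
            absorbed = e                 # from the layer (or surface) below
            if i < n_layers - 1:
                absorbed += e            # from the layer above -- nothing above the top
            emitted = 2 * e              # each layer radiates both up and down
            net.append(absorbed - emitted)
        return net

    for i, f in enumerate(net_radiative_flux()):
        print(f"layer {i} (0 = bottom): net {f:+.1f} W/m^2")
    # Interior layers balance; the top layer loses sigma*T^4 and must cool.
    ```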

  413. Real Richard Sharpe
    Posted Jul 6, 2008 at 2:15 PM | Permalink

    DeWitt says:

    I presume you’re using NTFS disk formatting. FAT32 has a limit of 1GB for file size. That’s why there are multiple .vob files on a DVD. My guess is the 1GB file size problem is in Excel. At some point, we’ll all have to learn R.

    Hmmm, last time I tried to copy 4.3GiB onto an 8GB flash drive, it failed, but when I split the file into on 2GiB and 1 2.3GiB file, I could copy them on. I seem to recall from experimentation that the FAT32 limit was 4GiB per file.

  414. Jonathan Schafer
    Posted Jul 6, 2008 at 2:53 PM | Permalink

    #411

    Not this year. Nadal wins. Great match by both players.

  415. Pliny
    Posted Jul 6, 2008 at 3:08 PM | Permalink

    #397 Phillip_B
    The point of my anecdote was really just to show what was on scientists’ minds at the time. But I did look up some weather data here. There were no collected time-series on wind-speed, but by comparing 30-year averages for individual stations, July winds in the SW do seem to be down a bit. Actually, I was surprised to find that average wind speeds are not markedly higher in winter. I guess both wind and rain tend in winter to come in intermittent storm patterns, which may not have a huge effect on the wind average.

  416. cba
    Posted Jul 6, 2008 at 3:18 PM | Permalink

    Dewitt,

    Excel nominally saves me a great deal of programming but it gets unwieldy rather quickly.

    In a ‘real’ atmosphere, there occurs a temperature gradient because the radiation from a shell goes both up and down, each direction emits the same power, and if the inner area is at the same T then the downward power = outward power. There must be another mechanism of power input to maintain the power levels. This would not be a problem if space were also radiating at T, as the downward incoming would balance the outgoing and all would be well.

    My chart indicates this as the power deficit curve. Since we are dealing with a “real” atmosphere with the 1976 std atm, (well – at least sort of real), it is showing a deficit of power which must be made up by some additional power input. In the troposphere this amounts to a total of about 60w/m^2 coming from the surface and depositing itself according to the values shown in the deficit in each shell. Without this, the atmosphere would cool – at least in those areas with the values. It’s interesting that higher up there is something curious at around 55km and otherwise the deficit bounces between +/- and is quite small – and in reality, it has to be virtually 0.

    I finally managed to get similar numbers for 768PPM CO2 and have made a comparison chart of now vs a doubling from now. It has several interesting results. There is an increase of 3.49W/m^2 coming to the surface, despite a decrease in solar at the surface of 1.01W/m^2. At 70km, evidently getting close to the limit of LTE, the outgoing is only 2.5 or 2.6W/m^2 less for the 768ppm atmosphere. Again, there’s a curious variation at around 55km and an increase in difference between the two concentrations.

    I made a couple of the lines thicker so that they may be seen better underneath as they were essentially totally covered in many places. There is a difference or delta included for the two deficit curves. Remember for the deficits that a positive number means that much power must be added to keep the T value in each shell at the current level. That means over 15 W/m^2 of nonradiant energy must be absorbed by the bottom shell to maintain its current T and it almost doesn’t change in requirements for a whole doubling.

  417. Philip_B
    Posted Jul 6, 2008 at 3:55 PM | Permalink

    Pliny, I understood the point of your anecdote. My question was about whether they got the prediction right, for the wrong reasons. SW WA’s summer rainfall is highly variable. In most recent years you would wonder how the summer rainfall averages get as high as they are, because summer rainfall is minimal. Then we get a summer like the one just past, when we get hundreds of millimeters of rain from the coast to far into the interior.

    The bulk of the summer rain comes from sub-tropical lows rather than Southern Ocean westerlies. Its my observation that much and possibly most of the reduction in annual rainfall over the last 20 years is in summer. So rather than a reduction in rainfall due to weaker systems from the Southern Ocean, we get a reduction in rainfall due to fewer and weaker subtropical systems, which on the face of it, appears directly contrary to the GW thesis.

  418. Pliny
    Posted Jul 6, 2008 at 6:55 PM | Permalink

    #417 Phillip_B
    No, I don’t think summer rain variation is the explanation. You can look up the timeseries for the SW on this interactive site. Winter rainfall has been decreasing. Summer rainfall shows no marked trend, but it has always been small.

    You can confirm this with seasonal trend maps .

  419. David Archibald
    Posted Jul 7, 2008 at 4:19 AM | Permalink

    My day job as an oilman is taking me to Denver, Colorado, Gillette, Wyoming and Dallas, Texas next week. If anyone there would like a reprise of my paper presented at the New York climate conference in March, please contact me at: david.archibald at westnet.com.au The message is that AGW is only good for about 0.4 of a degree over several centuries, increased atmospheric CO2 is wonderful for plant growth and that a solar-driven sharp temperature plunge is imminent.

  420. Len van Burgel
    Posted Jul 7, 2008 at 4:47 AM | Permalink

    Pliny (#415 and #418) mentions “There were no collected time-series on wind-speed, but by comparing 30-year averages for individual stations, July winds in the SW do seem to be down a bit.”
    From that site there aren’t too many stations in the Southwest which allow you to compare 1971-2000 option with the 30 earlier years (1941-1970).

    I could only find two:

    Cape Leeuwin
    The July average for 1971-2000 35.0 km/h (9am) and 33.5 (3pm)
    July average 1941-1970 27.9 (9am) and 27.1 (3pm).
    Annual average 1971-2000 30.3 (9am) and 31.0 (3pm)
    Annual average 1941-1970 22.9 (9am) and 23.2 (3pm)

    Geraldton
    The July average for 1971-2000 17.4 km/hr (9am) and 17.2 (3pm)
    July average 1941-1970 16.0 (9am) and 15.7 (3pm).
    Annual average 1971-2000 18.6 (9am) and 24.0 (3pm)
    Annual average 1941-1970 18.8 (9am) and 23.6 (3pm)

    Cape Naturaliste, Manjimup, Collie, Albany Airport, Donnybrook and Bridgetown do not have wind records listed dating back to 1941 (most only to 1957). If you select 1941-1970 you get temperature reading for those years but not a full record of wind.

    Both Cape Leeuwin and Geraldton are coastal stations and show an increase in wind speed. However I realise that comparing historical wind data brings more significant issues than comparing temperature data. It may be that a more detailed study of the archived data does show a decrease. But better is to compare changes in pressure gradients and/or absolute pressures.

    In answer to Phillip_B: A significant reason put forward for the fall in rainfall over Southwest Western Australia is the widespread clearing to establish the wheat belt. To separate the extent of that effect from any global warming trend is the impetus for much current research.
    Of interest is that on satellite cloud imagery, the Rabbit Proof Fence delineation can at times be seen with greater cloudiness on the eastern side (where there has been no clearing) from the western side where clearing has occurred (see: http://www.nsstc.uah.edu/~nair/BUFEX05/ and http://www.airborneresearch.com.au/Wheat%20Production%20(A4).pdf)

  421. DeWitt Payne
    Posted Jul 7, 2008 at 9:01 AM | Permalink

    cba,

    The file size limit for FAT32 is indeed 4GB (actually 1 byte less than 4GB).

    For calculating forcing according to the IPCC definition, I think the next step is to lower the temperatures in the stratosphere to eliminate the delta deficit there. However, MODTRAN doesn’t do this. I just checked. I’m pretty sure that MODTRAN doesn’t care about power balance either. It’s probably way too much work, but a power deficit comparison with MODTRAN data would be interesting.

    I also noticed something else. You have total SW + LW impinging on the surface of 399 W/m2. Where is the 60 W/m2 (102 W/m2 according to Kiehl and Trenberth) required for sensible and latent heat transfer from the surface to the atmosphere you need for energy balance coming from? If I did my sums correctly, you have 383 W/m2 out and 399 in, that’s only 16 W/m2, not 60. Is the 399 W/m2 total the same for both CO2 concentrations? MODTRAN shows an increase of about 4 W/m2 for 384 to 768 ppmv CO2.

  422. See - owe to Rich
    Posted Jul 7, 2008 at 12:29 PM | Permalink

    Here is a question for Ross McKitrick, if he visits.

    I am starting to prepare a lecture on global warming/cooling, and I’d like to include some material from McKitrick and Michaels (2007). But there is one point I am confused about. In the “Background Discussion”, 3.3, he says that with clean data, the average trend at the surface in the post-1980 interval would fall from about 0.30C per decade to 0.17. But when I look at the HadCRUT graph at the beginning, 1980 seems close to anomaly 0.00 and 2000 seems close to 0.40, and that’s a decadal trend of about 0.20, not 0.30. Where am I going wrong?

    TIA, Rich.

  423. cba
    Posted Jul 7, 2008 at 1:51 PM | Permalink

    DeWitt,

    I think you may have used Stefan’s law for the 383 number – mine is less because my wavelength range of 0.2-65.5 microns loses a little over the total. It’s also with an estimate of 0.98 emissivity at the surface. I think my total leaving the surface is in the very high 370s.

    The number 60w/m^2 is really the sum of additional power needed for the troposphere balance to occur. This is not being supplied by radiative. 399 in is for 384ppm of co2 and 401 or 402 is the incoming for 768 ppm of co2. The difference amounts to about 3.49 at the surface. For the first km, the difference in E balance is more like 15 or 16 W/m^2 as indicated by the graph.

    The discrepancies may have something to do with clear / cloudy.

    The initial objective was to use the results of current deficits to approximate a new atmospheric lapse curve.

  424. Philip_B
    Posted Jul 7, 2008 at 2:22 PM | Permalink

    Pliny, I wrote a longish post that got eaten by internet gremlins. In summary, Looking at the links you provided and a number of wheatbelt stations, since 1970 (the most recent dataset in your links), I didn’t find any evidence for ‘drought’ in the wheatbelt. There has certainly been a reduction in rainfall in coastal areas, which would show up in SW Australia data. There may have been a reduction in rainfall in the far NW and SE of the wheatbelt, but these areas are marginal for wheat even in an average year. So, do you have any data showing significant reductions in rainfall in the wheatbelt since the 1970s?

    Len van Burgel, I’m familiar with the area and have seen clouds lining up along the bush side of the fence. The fence has been in existence for a 100 years and the land on wheatbelt side cleared by the 1920s. Interestingly, there were farms to the east of the fence at some point, although I am unable to find any information on when they were created or abandoned, or how extensive they were. The reason I know this is I travelled through the area to the east of the fence and there are large areas of secondary growth bush, quite different from nearby areas of mature bush (trees), which had to have been cleared at some time for agriculture. I’ve also been told that land was cleared in the late 1800s for sheep grazing and subsequently abandoned due to lack of trace minerals in the soil and predation by dingos.

    The reason I mention this is that the assumption that there has been a net clearing of land in the region of the fence over the 20th century may be wrong.

  425. DeWitt Payne
    Posted Jul 7, 2008 at 3:40 PM | Permalink

    cba,

    Yes I was using the Stefan-Boltzmann equation.

    For a lapse rate calculation, you need to include convection and to do that you need a complete energy balance. A significant amount of energy is transferred to the atmosphere from the surface by direct conduction and convection and even more by latent heat transfer from the evaporation and condensation of water, especially if you look at net transfer. That energy has to come from solar radiation. There is no other significant source. The total energy from SW and LW coming down also has to match the total energy from LW and convective heat transfer coming up. Otherwise there will be a temperature change.

    If you calculate a lapse rate simply from radiation, I think you’ll see a large temperature gradient between the surface and the bottom of the atmosphere. That situation is unstable and will be corrected in the real world by convective heat transfer to cool the surface and warm the atmosphere until balance is achieved. It looks like you have a pretty good radiative program. Now you need to add convection to calculate a lapse rate ab initio or de novo or whatever is the correct Latin phrase.

    MODTRAN avoids the problem by assuming a constant lapse rate so that an increase in surface temperature increases temperatures by the same amount at every altitude. I have my doubts about that assumption as well.

    There’s another fly in the ointment. You will probably have to make some assumption on how the water vapor mixing ratio changes with temperature. Unfortunately, that’s at the heart of the issue of the size of water vapor feedback.

    I would really like to see the mechanics of a 1D radiative-convective model. I’m of the opinion that the atmospheric lapse rate is not solely determined by the thermodynamics of adiabatic expansion in a gravitational field, but I have no idea how much radiative heat transfer modifies the lapse rate.
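
    For what it’s worth, here is a very stripped-down sketch of the convective-adjustment step used in Manabe-style 1D radiative-convective models: after each radiative update, any adjacent pair of levels whose lapse rate exceeds a critical value is relaxed back toward it while conserving their mean temperature. The 6.5 K/km critical lapse rate, the fixed 1 km spacing, and the neglect of layer mass are all simplifying assumptions.

    ```python
    import numpy as np

    def convective_adjust(T, dz_km=1.0, gamma_crit=6.5, n_sweeps=50):
        """Relax super-critical lapse rates back to gamma_crit, pairwise."""
        T = T.copy()
        for _ in range(n_sweeps):                    # sweep until the profile settles
            for k in range(len(T) - 1):
                lapse = (T[k] - T[k + 1]) / dz_km    # K/km, positive = cooling with height
                if lapse > gamma_crit:               # unstable: convection mixes the pair
                    mean = 0.5 * (T[k] + T[k + 1])
                    T[k]     = mean + 0.5 * gamma_crit * dz_km
                    T[k + 1] = mean - 0.5 * gamma_crit * dz_km
        return T

    # Example: a radiatively overheated surface layer gets mixed upward.
    T_after_radiation = np.array([305.0, 290.0, 283.0, 276.0, 270.0])
    print(convective_adjust(T_after_radiation))
    ```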

  426. DeWitt Payne
    Posted Jul 7, 2008 at 3:43 PM | Permalink

    I should specify, modify the lapse rate in the troposphere. It’s pretty obvious that the lapse rate above the troposphere is totally controlled by radiative heat transfer.

  427. Pliny
    Posted Jul 7, 2008 at 5:23 PM | Permalink

    #424 Philip_B
    Here’s a comprehensive study. Fig 4 has the info you want. Note the large area north of Katanning with a more than 30% decrease in winter rain.

  428. Pliny
    Posted Jul 7, 2008 at 5:28 PM | Permalink

    #427 Philip_B – sorry, scale misreading, a 20-30% decrease in winter rain.

  429. cba
    Posted Jul 7, 2008 at 5:44 PM | Permalink

    DeWitt,

    By starting with the lapse rate of a hypothetical standard, we then ascertain the variation between this ‘real’ lapse rate and what radiative is responsible for. That leaves the contributions of other factors (latent/sensible) lumped together as the deficit in energy needed to maintain the situation. Then, the original goal was to establish the new lapse rate as a perturbation of the old with differences in absorptions and the redistribution of energy. As is typical, I made a little headway and am headed back head first into the work abyss with little spare time and much to do again. (I guess it would be accurate to say that after 30+ yrs of a career in electrical engineering, computer science, and management, I made a career change into astronomy & physics with the associated education occurring prior to the first career).

    Anyway, the convective component is supposed to fall out as the difference between what really is and what radiative is supposed to do. It’s interesting just how close the variation between current and after adding another 384ppm of co2 turns out to be for most of the tropo. The big changes tend to be way up there. That 57km altitude hump is a real curiosity which I was not expecting. This is where a substantial change shows up – at least, as far as the minuscule changes are actually visible on the chart. The rest is so close together on the energy budget requirements that it’s hard to imagine there could be a difference. It’s going to require more thought and scrutiny to conceptualize what might occur (and to verify there aren’t any gross faux pas there).

    As for that 57km hump, I don’t have a reason identified for its existence or why it would be more sensitive to co2 than virtually anything lower down. Also, I’m not sure where or how our deficits in energy are going to be made up yet. Best guess is it is related to clouds and is absorbing energy headed towards the surface. My model currently is strictly clear sky and it doesn’t seem (as you pointed out) that there’s anything to spare in the budget for what would seem to be a deficit.

    Creating a mixed column would toss in some equivalent effects for clouds – such as blocking some of the incoming solar and absorbing it higher up – in essence – creating the balance which exists in the 1976 std atm but doesn’t exist in the clear sky only condition.

    This doesn’t explain the 57km factor. This is something like the D-layer bottom of the ionosphere which absorbs radio waves. This has a deficit of about 4.5 W/m^2 for its T compared to what is supposed to be absorbed. It has a good 10 million collisions per second and should be LTE. I can’t believe there is 4.5 W/m^2 worth of RF energy being generated on earth or that there could be that much radiated to that area outside of my wavelength range. It could possibly be effects such as tidal energies or something more exotic, or a combination of several smaller items including UV, cosmic rays, gamma rays, solar and lunar tidal effects, RF from the surface and from the sun and space, and meteor debris (tons per day rain down as I recall – mostly dust or sand size).

  430. DeWitt Payne
    Posted Jul 7, 2008 at 11:51 PM | Permalink

    cba,

    A remote possibility for the stratosphere hump is that the 1976 standard atmosphere temperature profile in the stratosphere is wrong. That was certainly derived from balloon soundings as I don’t think there was much in the way of good satellite data then. Trying to measure temperature with a thermocouple when the atmospheric pressure is so low is not trivial. How much would you have to change the temperature profile to achieve energy balance?

    However, that seems like a lot of energy emitted for such a low pressure, less than 1 mbar at 50 km. The MODTRAN program calculates a total of only 1 W/m2 IR looking up from 40 km, from CO2 and ozone. I wonder if this isn’t another variant of the situation you observed above 100 km in the thermosphere. If you can, compare a spectrum you generate from 40 km to 100 km looking up with one from MODTRAN to see if you’re getting anomalously high emission from something other than CO2 and ozone.

    Increasing CO2 in the stratosphere should cause a radiation deficit and cooling. You’re increasing LW emissivity while SW absorptivity remains approximately constant. The increase in LW absorptivity is small because the LW intensity in that region has been depressed by CO2 absorption in the troposphere and the low concentration and pressure leads to narrow lines far from saturation.

  431. Len van Burgel
    Posted Jul 8, 2008 at 12:26 AM | Permalink

    Pliny #428 and Phillip_B
    Figure 2 actually shows a decrease in the May-July rainfall. The decline in winter rain is less (as fig 1 shows) because late winter and spring rainfall has shown little change. I don’t know why the authors didn’t pick growing season rainfall which the WA Dept of AG and Food deem to be May to October. This would be more meaningful. The trend is there as well but not as large as the decrease May-July. Fig 1 shows the “break of season” has tended to be later.

  432. Philip_B
    Posted Jul 8, 2008 at 1:02 AM | Permalink

    Pliny from the paper you link to,

    The area to the southwest of the diagonal line is averaged to produce the values in figures 1 and 2.

    SW of the line excludes at least 70% of the wheatbelt and the area SW of the line is mostly outside the wheatbelt. I’ll also note figure 4 looks substantially different to the BoM’s rainfall anomaly maps for the period. Otherwise, as Len van Burgel notes, it’s about the 3 months of the year which have shown the largest decrease in rainfall. The other 9 months have shown little change or an increase.

    This study looks like cherry picking to support the conclusion wanted. No one disputes the coastal areas have had a substantial decrease in rainfall. My question was about the wheatbelt ‘drought’ you claimed.

  433. pliny
    Posted Jul 8, 2008 at 2:07 AM | Permalink

    Philip_B and len

    SW of the line excludes at least 70% of the wheatbelt

    So? It’s just the area they chose for a particular graph, which they didn’t claim to be wheatbelt. Fig 4 shows a reduction in early winter rain in virtually the whole wheatbelt. And that’s the rain they most need. You’d have to go out to Southern Cross to find an area that hasn’t decreased.

    And yes, maybe they should have included August, which showed little change. The percentage decreases would have been slightly smaller.

  434. Philip_B
    Posted Jul 8, 2008 at 2:38 AM | Permalink

    Pliny, thanks. I think I have established there has not been a drought in the wheatbelt. Interesting exercise in sifting through the claims to determine the facts.

  435. Andrey Levin
    Posted Jul 8, 2008 at 5:25 AM | Permalink

    Speaking of science (falsifying hypotheses) and mathematics (positively proving things), is it true or only my impression that it has been mathematically (positively) proven that GCM-based climate models in principle cannot predict climate beyond the correct timespan of weather prediction (about 5 days)?

  436. cba
    Posted Jul 8, 2008 at 6:26 AM | Permalink

    DeWitt,

    there are problems with the possibilities. There is a temperature hump shown in the 50-60km graphs for other than the 1976 std atm. Also, the pressure decline is pretty linear out to there as well. LTE is supposed to be good at least to past 60km which is probably the major problem with my model above 70km. Offhand, I don’t recall how high sounding balloons go but the one rule of thumb I use based upon limited experience is that 100k feet is usually the upper limit for balloon flights. That’s well below this area.

    Actually, the numbers for a co2 doubling turned out to be a 1 W/m^2 drop in solar incoming at the surface going from 384 to 768 ppm. At 70km the outgoing was roughly 2.5 W/m^2 less at current temperatures, and at the surface the balance point went from a total of 399 to 402 (a rise of 3.5 W/m^2).

  437. Timo van Druten
    Posted Jul 8, 2008 at 6:45 AM | Permalink

    Once again an excellent guest post on the website of Roger Pielke sr

    http://climatesci.org/2008/07/07/guest-weblog-by-gerbrand-komen/

    Three statements are quite interesting:
    1. Nothing is ever absolutely certain. I think that speaks for itself;
    2. How to quantify confidence in “…… this type of probability has a clear subjective element”;
    3. “Continued greenhouse gas emissions at or above current rates would cause further warming and induce many changes . . that would very likely be larger than those observed in the 20th century”.

    Under 3): does this imply that continued greenhouse gas emissions below current rates would not cause further warming and induce further changes? Is this scientifically possible if their hypothesis is correct? Why didn’t they write “Continued greenhouse gas emissions would cause further warming ….”? From a scientific point of view this might be correct, but possibly “not done” from a political perspective!? Does this indicate the subjective element of the expert judgements when they quantified the confidence level of their assessment?

  438. Jeremy
    Posted Jul 8, 2008 at 10:28 AM | Permalink

    Re: 46

    Gunnar, Out of curiosity, has anyone created a summary of the “critical” flaws of the core AGW-theory papers? I suspect I haven’t even read most of the “core” AGW-theory papers and it would be good to look them over, as well as follow-on papers that are presumably part of the “moving on” process.

  439. Jeff A
    Posted Jul 8, 2008 at 10:52 AM | Permalink

    The problem isn’t just urban sites, but microsite biases at supposedly rural locations. That, and adjusting data based on sites within as far away as 1500km is simply ludicrous. One site should NEVER adjust another, up or down.

  440. bernie
    Posted Jul 8, 2008 at 11:56 AM | Permalink

    Jeff A:
    I mentioned this earlier in this string when I was looking at polar amplification, but if you go to http://data.giss.nasa.gov/gistemp/maps/ and choose any map and change the smoothing radius from 1200km to 250km you will see why the 1200km or 1500km smoothing is somewhat suspect.
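
    A rough sketch of the distance weighting behind those maps. As I understand the GISS scheme (treat the details as an assumption, not a specification), a station’s contribution to a grid point tapers linearly from 1 at zero distance to 0 at the chosen radius, which is why the 250 km maps leave so many cells empty. The distances and anomalies below are hypothetical.

    ```python
    def taper_weight(distance_km, radius_km):
        """Linear taper: 1 at the station, 0 at and beyond the radius."""
        return max(0.0, 1.0 - distance_km / radius_km)

    def gridpoint_anomaly(dists_km, anoms, radius_km):
        """Weighted mean anomaly at a grid point, or None if no station is in range."""
        w = [taper_weight(d, radius_km) for d in dists_km]
        if sum(w) == 0.0:
            return None                  # a grey (no data) cell on the map
        return sum(wi * a for wi, a in zip(w, anoms)) / sum(w)

    # Hypothetical grid point whose nearest stations are 300, 700 and 1100 km away.
    dists = [300.0, 700.0, 1100.0]
    anoms = [0.4, 0.9, 1.5]
    print("1200 km radius:", gridpoint_anomaly(dists, anoms, 1200.0))
    print(" 250 km radius:", gridpoint_anomaly(dists, anoms, 250.0))   # None: no coverage
    ```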

  441. DeWitt Payne
    Posted Jul 8, 2008 at 12:45 PM | Permalink

    cba,

    A temperature error was a very low probability. I probably should have deleted that paragraph before submitting.

    I think there must be errors in the HITRAN database for one or more molecules you include in your model. Somehow some of the line intensities or partial pressures or both have been increased by many orders of magnitude. This leads to saturated absorption/emission even though the actual partial pressure is vanishingly small. If my suspicion is correct, it was probably a transcription error or a units mismatch when the data was entered. No one else may have noticed this because the other line-by-line models don’t use as many molecules as you are using.

    I haven’t messed with HITRAN, is it possible to search line intensities by wavelength?
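
    If the database is the fixed-width .par text format, searching line intensities by wavenumber is straightforward to script. A sketch, with the field positions written from memory (molecule 2 characters, isotopologue 1, wavenumber 12, intensity 10) – verify them against the HITRAN documentation before trusting any output. The file name and thresholds are placeholders.

    ```python
    def strong_lines(path, wn_lo, wn_hi, s_min=1e-21):
        """Yield (molecule, isotopologue, wavenumber, intensity) for lines in range."""
        with open(path) as f:
            for rec in f:
                mol = int(rec[0:2])
                iso = rec[2:3]
                wn  = float(rec[3:15])    # transition wavenumber, cm^-1
                s   = float(rec[15:25])   # line intensity
                if wn_lo <= wn <= wn_hi and s >= s_min:
                    yield mol, iso, wn, s

    # Hypothetical usage, e.g. hunting for anomalously strong lines near 180 cm^-1:
    # for line in strong_lines("hitran2004.par", 170.0, 190.0, s_min=1e-20):
    #     print(line)
    ```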

  442. Jeff A
    Posted Jul 8, 2008 at 2:02 PM | Permalink

    Thanks Bernie. Doesn’t take something that rigorous to know it’s bogus, though. Simply living and looking out the window once in a while will tell you. When it’s raining in one place, it can be sunny and 3 degrees warmer just a couple km away. The rain and cool may never hit that other area that day, or week, or month. To couple one distinct place with another climatologically really can’t be justified in any way.

  444. DeWitt Payne
    Posted Jul 8, 2008 at 2:16 PM | Permalink

    cba,

    Then again, the partial pressure problem might be in your program. One way such an error could be generated is if the abundances of any or all of the minor isotopologues were entered as 1 rather than a tiny fraction of 1.

    If you scroll down to the bottom of the MODTRAN data, you find that that it does use a surface emissivity of 0.98. However, it only integrates emission from 100 cm-1 (100 micrometers) to 1500 cm-1 (6.67 micrometers) so the total emission is lower than calculated from Stefan-Boltzmann.

    I went to the HITRAN site and apparently you have to be associated with some institution to download the database. I guess I could email them to see if there are any exceptions.

  445. jae
    Posted Jul 8, 2008 at 2:18 PM | Permalink

    438, Jeremy:

    Gunnar, Out of curiosity, has anyone created a summary of the “critical” flaws of the core AGW-theory papers? I suspect I haven’t even read most of the “core” AGW-theory papers and it would be good to look them over, as well as follow-on papers that are presumably part of the “moving on” process.

    By a “core” AGW-theory paper, I presume you mean one that provides a first-principles physics-based exposition of how CO2 is supposed to warm the Earth, including how the “positive water vapor feedback” thing works. Steve Mc has been trying to locate this paper for years, and last I knew, he hadn’t found it. As an IPCC reviewer, he asked IPCC to include one in AR4. IPCC did not do so. He has challenged people on this blog many times to point out such a paper. Silence. So, first, we have to locate the core paper.

  446. Geoff
    Posted Jul 8, 2008 at 2:31 PM | Permalink

    Interesting new article by Huang:

    We present a suite of new 20,000 year reconstructions that integrate three types of geothermal information: a global database of terrestrial heat flux measurements, another database of temperature versus depth observations, and the 20th century instrumental record of temperature, all referenced to the 1961–1990 mean of the instrumental record. These reconstructions show the warming from the last glacial maximum, the occurrence of a mid-Holocene warm episode, a Medieval Warm Period (MWP), a Little Ice Age (LIA), and the rapid warming of the 20th century. The reconstructions show the temperatures of the mid-Holocene warm episode some 1–2 K above the reference level, the maximum of the MWP at or slightly below the reference level, the minimum of the LIA about 1 K below the reference level, and end-of-20th century temperatures about 0.5 K above the reference level.

    A late Quaternary climate reconstruction based on borehole heat flux data, borehole temperature data, and the instrumental record, GEOPHYSICAL RESEARCH LETTERS, VOL. 35, L13703, doi:10.1029/2008GL034187, 2008

    here

  447. bernie
    Posted Jul 8, 2008 at 2:42 PM | Permalink

    Jeff A:
    You are of course correct about the sharp regional and sub-regional variations that can exist in weather indicators, but given that we want to come up with a single global indicator, the issue is not regional smoothing per se but the density of the data points. What the 250 km radius smoothing dramatically depicts for me is the embarrassingly low density of stations and the absence of stations in huge critical areas. It is like running a presidential poll but deciding to ignore Montana, Idaho, Wyoming, and North and South Dakota because they are too small and too hard to get to, and betting that the average of Minnesota and Washington can make up for those states. It is crazy, but that is what is actually being done. The maps would look even worse if you plotted only the truly rural stations – this would produce huge grey areas in South America to match those in Africa, the Canadian and Russian Arctic, and the Antarctic. The saving grace is the satellite record – but that is another story entirely.

  448. cba
    Posted Jul 8, 2008 at 3:02 PM | Permalink

    DeWitt,

    The isotopes are handled as fractions of the whole, which are then multiplied by the species’ overall fraction. That is, they should be a fraction of a fraction, multiplied by the pressure, which is in fractional atmospheres. I did request the database as an individual, as I was not associated with an institution at that time, and at least at present this research is not being done through the institution I am associated with now. If anything is to be published, that will probably change. Right now, that is a very big “if”.
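
    For concreteness, a minimal sketch of that “fraction of a fraction” bookkeeping (the abundances and the mixing ratio below are illustrative round numbers, not the values actually in my program or in HITRAN):

        # Effective partial pressure of each isotopologue: natural abundance
        # times the gas mixing ratio times total pressure.
        P_TOTAL_ATM = 1.0            # total pressure, atm
        CO2_MIXING_RATIO = 385.0e-6  # roughly 385 ppmv

        co2_isotopologue_abundance = {   # should sum to ~1; an entry of 1.0 here
            "626 (12C16O2)":   0.984,    # would inflate that isotopologue's lines
            "636 (13C16O2)":   0.011,
            "628 (16O12C18O)": 0.004,
        }

        for iso, frac in co2_isotopologue_abundance.items():
            p_iso = frac * CO2_MIXING_RATIO * P_TOTAL_ATM
            print("%-18s %.3e atm" % (iso, p_iso))

        print("abundance sum: %.3f" % sum(co2_isotopologue_abundance.values()))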

  449. DeWitt Payne
    Posted Jul 8, 2008 at 3:47 PM | Permalink

    cba,

    You have to look at LTE from both directions. LTE requires that a certain fraction of the molecules be in an excited state, a fraction that is an inverse exponential function of the energy of the excited state above the ground state and of the temperature. That in turn determines the emission rate. However, LTE also requires that energy out equal energy in, i.e. equilibrium. If that isn’t true, there has to be an error somewhere: either the energy absorption is being underestimated, or the energy emission is being overestimated, or both. Since your program calculates excess emission any time the temperature rises with altitude, i.e. in both the stratosphere and the thermosphere, it seems to me most probable that emission is being overestimated. IMO, the best way to track this down is to find out what’s causing the excess emission. You have some wavelengths from the spectrum looking down from 120 km. You’ve already eliminated CO2. I would look for molecules with strong lines in the wavelength range of each of those emission lines/bands and see if there is a common factor. I would hazard a guess that one or more molecules aren’t being corrected for pressure or abundance or both.
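
    To put a rough number on the first point, a minimal sketch of the LTE (Boltzmann) population ratio for a single level, using the CO2 667 cm-1 bending mode as the example (the degeneracy of 2 and the temperatures are illustrative; proper partition-function bookkeeping is more involved):

        import math

        def boltzmann_ratio(nu_cm, T, g_upper=1, g_lower=1):
            """Upper/lower state population ratio under LTE for a level nu_cm (cm^-1) up."""
            h, c, k = 6.62607e-34, 2.99792458e8, 1.380649e-23
            dE = h * c * nu_cm * 100.0          # level spacing in joules
            return (float(g_upper) / g_lower) * math.exp(-dE / (k * T))

        # CO2 nu2 bending mode at 667 cm^-1, doubly degenerate upper level
        for T in (220.0, 288.0, 320.0):
            print("T = %3.0f K: excited/ground = %.4f" % (T, boltzmann_ratio(667.0, T, g_upper=2)))

    Under LTE the excited-state population, and hence the emission, is pinned to the local temperature through this exponential.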

  450. Sam Urbinto
    Posted Jul 8, 2008 at 5:56 PM | Permalink

    The sensitivity of climate (doubling carbon dioxide) is -0.5. You’re welcome to try and disprove that statement, but I have conclusive evidence.

    No, I won’t share it. Figure out how to recreate it, do your own work. :)

    Oh, and Dr. Oreskes is more a politician-type. Her output is mostly op/ed stuff.

  451. Andrew
    Posted Jul 8, 2008 at 6:25 PM | Permalink

    437 (Timo): It is a bit incoherent, obviously because English is not his first language. I don’t see anything sinister, though (but he is a Smolin fan…).

  452. Timo van Druten
    Posted Jul 9, 2008 at 2:06 AM | Permalink

    (451) Andrew,

    I don’t see anything sinister either, but I am interested to see that somebody deeply involved in the IPCC process states that the expert judgements have a (strong) subjective element, while you would expect, indeed should insist on, objectivity for such an important topic.

    Gerbrand Komen mentions that we should avoid the pitfalls of groupthink and is in favour of transparent discussion. Because we come from the same country, The Netherlands (so English isn’t my mother tongue either), we may carry (or at least I do) a history of groupthink, chain-link evidence and misuse (or at least unprofessional use) of statistics. And this case is only one of several from the last 5-7 years.

    http://en.wikipedia.org/wiki/Lucia_de_Berk

    (But this is, of course, way off topic.)

  453. mikep
    Posted Jul 9, 2008 at 2:19 AM | Permalink

    Interesting poster from Finnish researchers using tree rings to reconstruct climate for the past 7,500 years with clear LIA, MWP and Holocene optimum.

    http://lustiag.pp.fi/gt_trace2008_cyclic.pdf

  454. cba
    Posted Jul 9, 2008 at 5:34 AM | Permalink

    DeWitt,

    Those spikes at 120 km go out to the 300-360 K temperatures, which puts them at the highest altitudes, a region understood not to be in LTE. I haven’t had a chance to look at the average (since there are + and – deficits up there) to see whether it is just some variation that might be related to the concentrations. There is a real T bump up there in the measurements, but there is also an extra bit of solar absorption in that area.

    As stated before, the purpose of the little exercise is to try to create a baseline of non-radiative energy balance by subtracting out the radiative part, and then perhaps to reblend and see what happens to the energy balance with a different CO2 mix and what it might take to rebalance the energy. What I expect to see is a fairly small T increase, which will be reduced to a lower balance by increases in convection and further reduced by the water cycle, which will increase with the increase in delta T. What there is of conduction will also increase with delta T, and there is not only a little heating lower down but a cooling higher up, both increasing that delta T.
