AR4: "Now-Classic" Results on Cloud Uncertainty are "Unsettling"

AR4 (chapter 1 on the History of Climate Science) contains the remarkable statement:

The strong effect of cloud processes on climate model sensitivities to greenhouse gases was emphasized further through a now-classic set of General Circulation Model (GCM) experiments, carried out by Senior and Mitchell (1993). They produced global average surface temperature changes (due to doubled atmospheric CO2 concentration) ranging from 1.9°C to 5.4°C, simply by altering the way that cloud radiative properties were treated in the model. It is somewhat unsettling that the results of a complex climate model can be so drastically altered by substituting one reasonable cloud parameterization for another, thereby approximately replicating the overall intermodel range of sensitivities.

As they say, it is somewhat unsettling. On the basis that these results are “now-classic”, one would have expected them to have been prominently featured in TAR. [yeah, right.] So let’s see how prominently TAR featured these results – were they as prominent as the Hockey Stick?

I searched for Senior and Mitchell 1993 in TAR (google: grida senior mitchell 1993) and identified the following references in chapter 7 (Coordinating Lead Author – T. Stocker; lead authors include Pierrehumbert), neither of which reported these “now-classic” results.

A first generation of so-called prognostic cloud schemes (Le Treut and Li, 1991; Roeckner et al., 1991; Senior and Mitchell, 1993; Del Genio et al., 1996), has used a budget equation for cloud water, defined as the sum of all liquid and solid cloud water species that have negligible vertical fall velocities.

and again in section 7.2.2.4:

Schemes predicting cloudiness as a function of relative humidity generally show an upward displacement of the higher troposphere cloud cover in response to a greenhouse warming, resulting in a positive feedback (Manabe and Wetherald, 1987). While this effect still appears in more sophisticated models, and even cloud resolving models (Wu and Moncrieff, 1999; Tompkins and Emanuel, 2000), the introduction of cloud water content as a prognostic variable, by decoupling cloud and water vapour, has added new features (Senior and Mitchell, 1993; Lee et al., 1997). As noted in the SAR, a negative feedback corresponding to an increase in cloud cover, and hence cloud albedo, at the transition between ice and liquid clouds occurs in some models, but is crucially dependent on the definition of the phase transition within models. The sign of the cloud cover feedback is still a matter of uncertainty and generally depends on other related cloud properties (Yao and Del Genio, 1999; Meleshko et al., 2000).

In the relevant AR4 chapter (chapter 8), the authors mention Senior and Mitchell 1993 in a very coy manner, giving no hint of the blockbuster variations noted in the historical review:

In many climate models, details in the representation of clouds can substantially affect the model estimates of cloud feedback and climate sensitivity (e.g., Senior and Mitchell, 1993; Le Treut et al., 1994; Yao and Del Genio, 2002; Zhang, 2004; Stainforth et al., 2005; Yokohata et al., 2005). Moreover, the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks (Colman, 2003a; Soden and Held, 2006; Webb et al., 2006; Section 8.6.2, Figure 8.14). Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates.

Reference:
Senior, C.A., and J.F.B. Mitchell, 1993: Carbon dioxide and climate: The impact of cloud parameterization. J. Clim., 6, 393–418.

53 Comments

  1. John Lang
    Posted Jan 6, 2008 at 8:59 AM | Permalink

    I’m assuming that with the assumption of constant relative humidity built into the models (some models may vary this assumption slightly), cloud cover will also be unchanging. It is the essence of relative humidity.

    The models are probably unstable when one tries to tweak these very basic parameters.

  2. Posted Jan 6, 2008 at 9:02 AM | Permalink

    The sign of the cloud cover feedback is still a matter of uncertainty

    And yet they claim knowledge of the magnitude without even knowing the sign.

    In addition I saw a nice graph at Nir Shaviv’s that showed the feedbacks used in the models (up to 2005 I think, I’ll post a link) were biased to the positive side.

  3. Larry
    Posted Jan 6, 2008 at 9:06 AM | Permalink

    It may be somewhat unsettling, but it’s not in the least bit surprising.

  4. Posted Jan 6, 2008 at 9:10 AM | Permalink

    Climate Sensitivity by Shaviv:

    http://www.sciencebits.com/OnClimateSensitivity

  5. Ron Cram
    Posted Jan 6, 2008 at 9:12 AM | Permalink

    This passage reminds me of Dr. William Gray saying “Global warming is a theory by people who do not understand how the atmosphere works.”

    It still bothers me that people call computer modeling runs “experiments.” They are not experiments which require the observation of nature and natural processes.

  6. Tom Gray
    Posted Jan 6, 2008 at 9:19 AM | Permalink

    Some of the scientists contributing here have speculated on what would be required to be in an engineering report. In an engineering report, the discrepancies noted here would be reported prominently along with their implications for policy.

  7. Phil.
    Posted Jan 6, 2008 at 10:15 AM | Permalink

    Re #5

    This passage reminds me of Dr. William Gray saying “Global warming is a theory by people who do not understand how the atmosphere works.”

    It still bothers me that people call computer modeling runs “experiments.” They are not experiments which require the observation of nature and natural processes.

    How it must have upset Gray this year when the UK Met Office did a better job of predicting this year’s Atlantic hurricane season using their computer model than he did with his method!

  8. Yorick
    Posted Jan 6, 2008 at 10:48 AM | Permalink

    Phil,
    The GCMs are famous for their ability to predict measures whose count is controlled by the like-mindedly political.

    They can predict Hansen’s temp, even though it is out of step with all other major methods, and they can predict a tropical cyclone count, even though those previously in charge of naming tropical cyclones have said that they have changed their methods in a way that ups the count.

  9. A Azure
    Posted Jan 6, 2008 at 11:00 AM | Permalink

    #6

    This is not a debate about science between scientists. These are computer program models with debates about coding.

    Science is done in the reality of the world, not the myth of a computer screen output.

    It is disgusting that a bunch of yahoos have seized and warped the conscience of science and artificially gained some bizarre credibility.

    I am frustrated that it seems impossible for real science to wrestle these snake-oil salesmen.

    #7
    Bizarre thinking, Phil. Why would Dr. Gray be upset?
    It is thinking like yours that does science a disservice – the belief that it is a COMPETITION, with winners and losers.

    But let’s consider that Dr. Gray’s team released their guess in Dec. and the Met waits until June. And Gray’s team guessed 14 named storms back in Dec. The Met in June predicted 10, with a 70% chance of 7-14. There were 15 named storms in 2007. Further, the Met declined to estimate major hurricanes, unlike Gray’s team. So I believe you must have misunderstood something for you to make your complaint and comment.

  10. Alan D. McIntire
    Posted Jan 6, 2008 at 11:28 AM | Permalink

    The models all assume water vapor will remain about the same, but Minschwaner and Dessler have found that the feedback only increases natural background changes by a factor of 1.5, so a 1.2C increase in temperatures due to CO2 would be increased to 1.8C due to water vapor feedbacks, not the 2 to 4.5 indicated by some models.

    http://www.nasa.gov/centers/goddard/news/topstory/2004/0315humidity.html

    Click to access dessler04.pdf

    – A. McIntire

  11. John Lang
    Posted Jan 6, 2008 at 11:45 AM | Permalink

    This link has a really great description of the history of the development of General Circulation Models. It is long and there are lots of citation links to get lost in, but it is clear the models are just computer simulations and cloud effects are still the “greatest uncertainty” (among dozens of other uncertainties.)

    http://www.aip.org/history/exhibits/climate/GCM.htm

    By the way, the article notes that the climate sensitivity range of 1.5C to 4.5C for a doubling of CO2 comes from a 1979 National Academy of Sciences report chaired by Charney which compared the results of two models:

    To make their conclusion more concrete, the Charney panel decided to announce a specific range of numbers. They argued out among themselves a rough-and-ready compromise. Hansen’s GCM predicted a 4°C rise for doubled CO2, and Manabe’s latest figure was around 2°C. Splitting the difference, the panel said they had rather high confidence that as CO2 reached this level the planet would warm up by about three degrees, plus or minus fifty percent: in other words, 1.5-4.5°C (2.7-8°F). They concluded dryly, “We have tried but have been unable to find any overlooked or underestimated physical effects” that could reduce the warming.

    You guessed it, Hansen.

    And even his 1979 model predicting a 4.0C increase in temperatures for a doubling of CO2 is still cited as the upper range (even though he didn’t use supercomputers at the time and the model didn’t even incorporate cloud or ocean effects at that point.) Anyone do any sophisticated programming in 1979?

  12. jbleth
    Posted Jan 6, 2008 at 11:59 AM | Permalink

    Idso’s experiments yielded a climate sensitivity of 0.1 K per W/m^2. This is less than half the result for a black body and indicates strong negative feedbacks. Lindzen’s “infrared iris effect” and Spencer et al.’s experimental results indicate negative cloud feedbacks.


  13. Terry
    Posted Jan 6, 2008 at 12:07 PM | Permalink

    re #7 Phil

    How it must have upset Gray this year when the UK Met Office did a better job of predicting this year’s Atlantic hurricane season using their computer model than he did with his method!

    Even a broken watch is correct twice a day. It’s not surprising that their computer model will occasionally get it right.

  14. Jeff A
    Posted Jan 6, 2008 at 12:08 PM | Permalink

    How it must have upset Gray this year when the UK Met Office did a better job of predicting this year’s Atlantic hurricane season using their computer model than he did with his method!

    No great feat. I can predict an average year every year and be mostly right…

  15. Raven
    Posted Jan 6, 2008 at 12:45 PM | Permalink

    Alan McIntire, your NASA link says:

    Using the UARS data to actually quantify both specific humidity and relative humidity, the researchers found, while water vapor does increase with temperature in the upper troposphere, the feedback effect is not as strong as models have predicted. “The increases in water vapor with warmer temperatures are not large enough to maintain a constant relative humidity,” Minschwaner said. These new findings will be useful for testing and improving global climate models.

    This is one more data point in a long list of real measurements that suggest the climate models overestimate the extent of the CO2-induced warming. Unfortunately, we see no sign that the modellers are incorporating this data and scaling back their GCM predictions.
    We really need an independent assessment of these models before governments make huge investments based on the predictions made by these models.

  16. bender
    Posted Jan 6, 2008 at 12:49 PM | Permalink

    I wonder if, back in 1979, Hansen could have defined “ergodicity”. I suspect not. I question the statistical relevance of those “now classic” GCM “experiments”.

  17. Peter D. Tillman
    Posted Jan 6, 2008 at 12:52 PM | Permalink

    #10, Alan McIntyre, CS

    “Minschwaner and Dessler have found that the [water vapor] feedback only increases natural background changes by a factor of 1.5, so a 1.2C increase in temperatures due to CO2 would be increased to 1.8C due to water vapor feedbacks, not the 2 to 4.5 indicated by some models.”

    Yet Another reasonable-looking study that estimates CS (including feedbacks) in the 1 to 2ºC range for doubling CO2. Hmm. I should make a table of these, and post it — if I could figure out how to do tables in WP. Is there a WP help page?

    TIA & cheers — Pete Tillman

  18. PaddikJ
    Posted Jan 6, 2008 at 1:14 PM | Permalink

    How it must have upset Gray this year when the UK Met Office did a better job of predicting this year’s Atlantic hurricane season using their computer model than he did with his method!

    Easy for the Global Warming Crusade: Just massage the criteria for calling something a “storm” until the count matches your predictions. You have read the Tiny Tim threads, yes? Also, just make predictions that fall within the 90% confidence range for the upcoming year, such as HADCRU just did for temps – the chances are very good that 2008 will indeed be one of the warmest years on record. You don’t need a GCM to tell you that (and, as someone has already pointed out, it doesn’t hurt to wait until mid-year to go public).

    …even though he didn’t use supercomputers at the time and the model didn’t even incorporate cloud or ocean effects at that point.) Anyone do any sophisticated programming in 1979?

    If memory serves – and it usually does – NCAR got one of the first Cray-1’s in 1977. If not that, what was Hansen using?

  19. Posted Jan 6, 2008 at 1:16 PM | Permalink

    bender–

    I question the statistical relevance of those “now classic” GCM “experiments”.

    Actually those represent a portion of sensitivity results that could, hypothetically, be used to estimate sensitivity to variations in parameter values.

    They are part of what you have been asking Gavin for, but he doesn’t understand what you want when you say things like “propagation of errors” &etc.

  20. Larry
    Posted Jan 6, 2008 at 1:56 PM | Permalink

    They are part of what you have been asking Gavin for, but he doesn’t understand what you want when you say things like “propagation of errors” &etc.

    That’s a bit of a surprising statement. I’m no statistician, and I know what that is.

  21. bender
    Posted Jan 6, 2008 at 2:34 PM | Permalink

    #19 lucia
    Agreed. I envision a major collaboration between statisticians and the GCMers. I would dearly like to see an ASA Journal of Statistical Climatology.

  22. Pat Frank
    Posted Jan 6, 2008 at 2:36 PM | Permalink

    #19 — “They are part of what you have been asking Gavin for, but he doesn’t understand what you want when you say things like “propagation of errors” &etc.

    I have a peer-reviewed article in press at Skeptic magazine, coming out in early 2008, in which an estimate for GCM cloud error is propagated through the IPCC SRES global average temperature projections for the 21st century.

    Here’s the summary: “Projections of CO2-caused future global warming are unreliable and simplistic. When accounting for the physical uncertainty from minimal cloud error, the IPCC SRES A2 global average temperature for the year 2100 is 3.7±111 C. The claim that anthropogenic CO2 is responsible for the current warming of Earth climate is scientifically insupportable.”

    The article will be followed with a critique written by a climate scientist recruited by Michael Shermer, Skeptic’s publisher. My response to that critique and to selected letters will be published in the subsequent issue of Skeptic.

  23. bender
    Posted Jan 6, 2008 at 2:42 PM | Permalink

    #22 3.7±111 LOL

  24. Larry
    Posted Jan 6, 2008 at 3:31 PM | Permalink

    That’s not a typo, is it?

  25. bender
    Posted Jan 6, 2008 at 3:34 PM | Permalink

    I bet not.

  26. Steve McIntyre
    Posted Jan 6, 2008 at 3:37 PM | Permalink

    In my opinion, no one has made a relevant comment on what struck me as the key issue in this post. It’s got nothing to do with Bill Gray, and the discussion about Bill Gray’s predictions was totally OT. When I read exchanges like this, I feel like just deleting comment after comment. It’s got nothing to do with error propagation, interesting as that may be.

    There’s a very large issue here. AR4 says that Senior and Mitchell 1993 contains a “now classic” result – a result which is “unsettling” for GCMs. But this result was not mentioned in TAR – why not? Instead, TAR mentioned Senior and Mitchell 1993 in a different context. Did TAR have an obligation to report this “now classic” result?

  27. Ron Cram
    Posted Jan 6, 2008 at 3:49 PM | Permalink

    Sorry Steve. Yes, of course they had a responsibility to discuss it, even if they don’t like the fact that it is unsettling. I am not sure you can say the TAR citation is a “different context.” It seems to me that in both cases the report is discussing the models, clouds and uncertainty. The TAR just gives it a passing reference rather than discussing the level of uncertainty. The IPCC obviously does not want the uncertainty to be well understood by policymakers.

  28. bender
    Posted Jan 6, 2008 at 3:50 PM | Permalink

    There’s a very large issue here. AR4 says that Senior and Mitchell 1993 contains a “now classic” result – a result which is “unsettling” for GCMs. But this result was not mentioned in TAR – why not? Instead, TAR mentioned Senior and Mitchell 1993 in a different context. Did TAR have an obligation to report this “now classic” result?

    I figured the point is obvious and that we’d moved on. But, yes, this point is important to state explicitly. Results that are “unsettling” are presented as authoritative. Note the similarity to the other thread we have going. Science that is “settled” is at other times described as provoking “lively discussions”. I’ll bet they’re “lively”. What you have here, aspiring young policy wonk, is called a cover-up. An attempt to squeeze a consensus out of what is actually a mishmash of uncertainty, dissent, unqualified opinion, and shared belief.

    [Do I get snipped if I call Susann ‘grasshopper’? It’s a term of endearment.]

  29. Ross McKitrick
    Posted Jan 6, 2008 at 4:14 PM | Permalink

    Steve, here’s my conjecture about what this means. The references to Senior and Mitchell in the TAR were written by person A who had read them a while back and thought they fit into the big picture in such-and-such a way. The portion of TAR text you quoted was written by Person A late one tiring day, rushing to meet a deadline before he had to finish grading some term papers then pick up his son from track and field. He never figured what he was writing was the last word on anything in particular. Whether or not the text was commented on by busy reviewers, they ended up in the final text more or less in the form they were first jotted down.

    The portion quoted from the AR4 was written by Person B late one tiring day, rushing to meet a deadline before he had to finish grading some term papers then pick up his daughter from volleyball. He never figured what he was writing was the last word on anything in particular. Whether or not they were commented on by busy reviewers, they ended up in the final text more or less in the form they were first jotted down.

    The problem in each case is that text originating as the current opinions of Persons A and B, and therefore subject not only to revision but simple error, gets promoted by the IPCC as if it were the Last Word, the Authoritative, the Most-Stringently-Peer-Reviewed-In-History Dictation of the Angels. The fact that parts contradict each other within the same report, let alone across different reports, is only a problem for those naive enough to believe the hype from the IPCC leaders about the nature of the assessment reports.

  30. bender
    Posted Jan 6, 2008 at 4:34 PM | Permalink

    Dr McKitrick, as you probably know by experience, this is exactly how science-by-committee operates. That is why it is absolutely critical to try to formalize the committee’s understanding of collective uncertainty. Because no one person in the committee understands, or even has access to, all the canonical bits. The result is a tragedy of the commons; scientific truth is sacrificed for policy consensus.

  31. Posted Jan 6, 2008 at 4:35 PM | Permalink

    SteveM– You are correct that this should have been reported in TAR. If the paper is classic, that suggests a pretty large flaw in the process. You are correct that this is the largest point.

    I’m trying to figure out how to explain clearly how the “propagation of error” issue is germane — even though problems in the process are the larger issue. I know they just sounded snarky, but I suspect bender and Larry both sort of blinked there. The three of us are generally wordy, and you’ll notice some uncharacteristic brevity.

    What you have here is one clearly identified document that would indicate that the estimate of climate sensitivity is very sensitive to one particular parameterization. And, a system where that important information didn’t make it into the TAR.

    That’s pretty big already.

    But, actually, the issue of propagation of error makes that issue more alarming, not less.

    You found one classic paper describing a large uncertainty. What if there are more sensitivity studies suggesting large impacts from uncertainties in other parameterizations?

    When engineers estimate uncertainties in a calculated result, they do propagation-of-error calculations, and the uncertainty in a cloud model (or any parameterization) would enter that estimate.

    The relatively kludgy undergrad-engineering way is to say there is a parameterization xi (say, a parameter in a cloud model).

    Then you’d try to find the uncertainty in a computed quantity (say, sensitivity S) as a function of the partial derivative δS/δxi and some uncertainty range for xi, which you might get from the standard deviation σi.

    If there are a bunch of parameters, you often assume they contribute independently and estimate an uncertainty

    ΔS^2 = Σ (δS/δxi · σi)^2
    (Less kludgy things get done too. It sort of depends on what’s appropriate in different instances.)
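
    To make that concrete, here is a minimal numerical sketch of the root-sum-square formula above. Everything in it (the parameter names, the partial derivatives and the sigmas) is invented purely for illustration, not taken from any GCM or from Senior and Mitchell.

        import math

        # Hypothetical illustration of DeltaS^2 = sum_i (dS/dx_i * sigma_i)^2.
        # The parameter names, sensitivities and sigmas below are made up.
        partials = {                      # dS/dx_i: degC of sensitivity per unit of parameter
            "cloud_albedo_param": 8.0,
            "ice_liquid_threshold": 3.0,
            "entrainment_rate": 1.5,
        }
        sigmas = {                        # sigma_i: assumed 1-sigma uncertainty in each parameter
            "cloud_albedo_param": 0.2,
            "ice_liquid_threshold": 0.3,
            "entrainment_rate": 0.5,
        }

        # Assume the parameters contribute independently, so the variances add.
        variance = sum((partials[k] * sigmas[k]) ** 2 for k in partials)
        delta_S = math.sqrt(variance)
        print(f"Propagated uncertainty in S: +/- {delta_S:.2f} C")

    The point is only the bookkeeping: each sensitivity study like Senior and Mitchell supplies one of the (δS/δxi, σi) pairs, and if those studies are not collected, the sum cannot be formed.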

    So, when bender made his rather acerbic remark in 16, I wanted to point out that he’s been asking if this sort of thing has been done.

    And here you have noted that the process creating the TAR somehow overlooks the results that would permit us to estimate these uncertainties individually.

    So, how is propagation of error relevant: If the process used to create the TAR is flawed with respect to collecting information relevant to the uncertainty due to clouds, could other classic information that helps us estimate uncertainties due to other factors be missing?

    And, with regard to bender asking for this stuff over and over– do some classic results suggesting high uncertainty just vanish into the ether?

  32. aurbo
    Posted Jan 6, 2008 at 4:36 PM | Permalink

    It is refreshing to finally see a thread on the topic of H2O even though it is introduced through the back-door (in the form of cloudiness). I’m a little sated with that group of “denialists” who say “It’s the sun, stupid” at the expense of a much less publicized group who say “It’s the H2O, stupid.” Furthermore, the Senior & Mitchell 1993 paper cited in Steve M’s original post even includes references to phase changes of H2O. How refreshing!

    One wonders to what extent Charney’s panel consensus [see Lang’s post #11] was biased high by Hansen’s 4°C projection. It’s amazing how much influence this GISS spokesman has had in establishing the 1.5-4.5°C frame of reference for AGW.

    Although air is considered “saturated” when RHs reach or exceed 90%, clouds can still form at much lower RHs, 70% or even less. It depends on the amount and character of condensation nuclei present. The ECMWF model parameterization takes 50% RH as the level at which no clouds will be present.
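
    For concreteness, here is a rough sketch of one common way cloudiness is diagnosed from relative humidity (a Sundqvist-type formula; the exact functional form and the critical RH vary from model to model, and the 50% threshold below is simply borrowed from the ECMWF figure above):

        import math

        def cloud_fraction(rh, rh_crit=0.5):
            """Sundqvist-type diagnostic cloud fraction: zero at the critical RH,
            rising smoothly to one at saturation. rh and rh_crit are fractions (0-1).
            The 0.5 default mirrors the 50% threshold mentioned above; real schemes
            use values that vary with height and from model to model."""
            if rh <= rh_crit:
                return 0.0
            if rh >= 1.0:
                return 1.0
            return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))

        for rh in (0.5, 0.7, 0.9, 0.95, 1.0):
            print(f"RH = {rh:.0%} -> cloud fraction = {cloud_fraction(rh):.2f}")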

    re:

    How it must have upset Gray this year when the UK Met Office did a better job of predicting this year’s Atlantic hurricane season using their computer model than he did with his method!

    The higher UK Met numbers were accommodated by TPC through their unusually high reliance on remotely sensed parameters to determine TS and HU strength in the absence of in situ observations. The goal-post transporters at work again. The ACE (Accumulated Cyclone Energy) index for the Atlantic storms, which can be found here, was the lowest since 1972. The Pacific ACE was the second lowest since these records began.

  33. kim
    Posted Jan 6, 2008 at 5:33 PM | Permalink

    It’s the sun and the water and the critters and we’re all stupid as rocks.
    ===========================================

  34. Tom C
    Posted Jan 6, 2008 at 6:14 PM | Permalink

    I’ve often wondered how Lindzen manages to stay sane in the midst of all this looniness. He has been making these exact points about the failure of cloud models and their huge impact on GCM results for something like 15 years.

  35. Mark T
    Posted Jan 6, 2008 at 8:04 PM | Permalink

    The problem in each case is that text originating as the current opinions of Persons A and B, and therefore subject not only to revision but simple error, gets promoted by the IPCC as if it were the Last Word, the Authoritative, the Most-Stringently-Peer-Reviewed-In-History Dictation of the Angels.

    This is not unlike the situation in which Cook was referenced in order to conclude that the 20th century divergence is unique unto the 20th century. Of course, the fact that Cook only looked at 20th century data (not much beyond that exists) to formulate such a conclusion is lost simply by the fact that it has now been promoted by the IPCC as a fact, and relied upon by everyone else claiming divergence has been solved.

    Mark

  36. Ian McLeod
    Posted Jan 6, 2008 at 9:15 PM | Permalink

    snip – not about science. sorry bout that.

  37. tetris
    Posted Jan 6, 2008 at 9:45 PM | Permalink

    Re: 26
    SteveM
    RossM [#29] is probably pretty close to the bone.

    Ian McLeod’s commentary [#36] shows how the cascade outlined by Ross becomes the mainstream media story line.

    This is why the two surprisingly sceptical articles that appeared in the New York Times and the Boston Globe last week are significant, because these very influential mainstream newspapers are now explicitly questioning the core AGW/ACC arguments.

    Once the science writers/editors of papers such as the NYT and BG come around, the very questions you are raising about the IPCC and its workings will also make it to their pages.

    Keep up the good work.

  38. Ian McLeod
    Posted Jan 6, 2008 at 10:05 PM | Permalink

    You’re right, but I thought that with Ross’s remarks I might back up the argument with other scholarly work in the humanities. I guess I can see how it can get OT quickly. No worries.

  39. Jeff A
    Posted Jan 6, 2008 at 10:14 PM | Permalink

    The article will be followed with a critique written by a climate scientist recruited by Michael Shermer, Skeptic’s publisher. My response to that critique and to selected letters will be published in the subsequent issue of Skeptic.

    Ah. But Shermer is a True Believer™. Says he saw the light after watching AIT. He’s no longer a skeptic in my book.

  40. PaddikJ
    Posted Jan 6, 2008 at 10:28 PM | Permalink

    Tetris,

    Could you post links for the BG & NYT articles?

    thx,
    PJ

  41. Ian McLeod
    Posted Jan 6, 2008 at 10:59 PM | Permalink

    Here is the NYT article.

  42. Mike B
    Posted Jan 7, 2008 at 7:46 AM | Permalink

    Steve #26

    In my opinion, no one has made a relevant comment on what struck me as the key issue in this post. It’s got nothing to do with Bill Gray, and the discussion about Bill Gray’s predictions was totally OT. When I read exchanges like this, I feel like just deleting comment after comment. It’s got nothing to do with error propagation, interesting as that may be.

    There’s a very large issue here. AR4 says that Senior and Mitchell 1993 contains a “now classic” result – a result which is “unsettling” for GCMs. But this result was not mentioned in TAR – why not? Instead, TAR mentioned Senior and Mitchell 1993 in a different context. Did TAR have an obligation to report this “now classic” result?

    I guess I didn’t see it as that big a deal for two reasons. One, I don’t think most policy makers are that aware of how dependent many of the “consensus conclusions” are on GCMs, so any comment about uncertainties in predictions based on the models would be way beyond what the scientists would present. They’re much more comfortable saying, “temperature is way up in the last 30 years, to levels not seen in the past 1K years, perhaps even 1M years, and the physics tell us it’s the CO2.”

    Two, “uncertainty in the sign of cloud feedback” could mean that the magnitude of the feedback is small, and the sign thus (relatively) irrelevant. That’s mighty thin, but it could explain why it was left out of the summary.

  43. SteveSadlov
    Posted Jan 7, 2008 at 2:19 PM | Permalink

    If the applicable examples in Neal J. King’s posts are representative of the conventional wisdom regarding H2O and clouds, I must wonder if 1.5 deg C is actually the low end of the possible range of realized output values of the climate system.

  44. Neal J. King
    Posted Jan 7, 2008 at 2:24 PM | Permalink

    #43, SteveSadlov:

    I haven’t posted on this thread before, so I’m unclear on what you are saying.

  45. SteveSadlov
    Posted Jan 7, 2008 at 2:41 PM | Permalink

    RE: #32 – Make that IT’S THE H2O, STUPID! 🙂

  46. SteveSadlov
    Posted Jan 7, 2008 at 2:44 PM | Permalink

    RE: #44 – I am referring to what you’ve written about how H2O and clouds are handled on other threads. I presume your posts to reflect the conventional wisdom / mainstream of modeling thought.

  47. Neal J. King
    Posted Jan 7, 2008 at 2:57 PM | Permalink

    #46, SteveSadlov:

    It reflects my reading of the IPCC report and other discussions around that: It seems to be quite clear that clouds are an open question.

    I have no special “in” on live discussions.

  48. D. Patterson
    Posted Jan 7, 2008 at 4:00 PM | Permalink

    26 Steve McIntyre says:

    January 6th, 2008 at 3:37 pm
    In my opinion, no one has made a relevant comment on what struck me as the key issue in this post. [….]
    There’s a very large issue here. AR4 says that Senior and Mitchell 1993 contains a “now classic” result – a result which is “unsettling” for GCMs. But this result was not mentioned in TAR – why not? Instead, TAR mentioned Senior and Mitchell 1993 in a different context. Did TAR have an obligation to report this “now classic” result?

    To answer your question requires firstly a determination of what obligations were incumbent upon the IPCC and secondly whether or not the Senior and Mitchell 1993 result is encompassed within one or more of those obligations.

    To determine what obligations the IPCC has, we must first see what the United Nations mandate authorized the IPCC to do when the IPCC was organized. The mandate said in part:

    The IPCC does not conduct any research nor does it monitor climate related data or parameters. Its role is to assess on a comprehensive, objective, open and transparent basis the latest scientific, technical and socio-economic literature produced worldwide relevant to the understanding of the risk of human-induced climate change, its observed and projected impacts and options for adaptation and mitigation.

    The WMO and other organizations made the determination a third of a century ago that human-induced climate change was being caused by anthropogenic emissions of greenhouse gases, and they led the members of the United Nations to organize the IPCC to assess the environmental risks which stem from such changes and how those risks can be mitigated. Is there anywhere in the mandate of the IPCC an authorization to assess “scientific, technical and socio-economic literature” which is not “relevant” to the understanding of the consequences of ” human-induced climate change” or is relevant to natural induced climate change?

    Insofar as the results from Senior and Mitchell 1993 may be unable to support GCMs, how can such results be “relevant to the understanding of the risk of human-induced climate change” already assumed by international policy mandate and the IPCC to exist? In other words, is it even possible from the IPCC point of view for any literature whatsoever to be relevant to the IPCC mandate and assessments if and when such literature is inconsistent with and not supportive of the predetermined existence of “human-induced climate change?”

    Does the IPCC literally interpret its legal mandate to mean that information contrary to a determination of human-induced climate change is not relevant to the purpose and mission of the IPCC?

  49. John M.
    Posted Jan 7, 2008 at 7:22 PM | Permalink

    #42 Mike B says:
    January 7th, 2008 at 7:46 am

    One, I don’t think most policy makers are that aware of how dependent many of the “consensus conclusions” are on GCMs, so any comment about uncertainties in predictions based on the models would be way beyond what the scientists would present. They’re much more comfortable saying, “temperature is way up in the last 30 years, to levels not seen in the past 1K years, perhaps even 1M years, and the physics tell us it’s the CO2.”

    It’s in Chapter 1 of AR4 so they did present it. 🙂 There is a reason why there are still huge error bars on their predictions for what happens over the course of this century under the various future emission scenarios. I think you underestimate the intelligence of the average policy maker if you think they don’t notice the size of those error bars and read a bit deeper than the executive summary to find out why they are still so large despite all the public money that has been poured into climate research in recent years. Most people like to know that they are actually getting value for money when large amounts of money are being spent, after all.

    The problem isn’t so much with what the scientists are doing with the modeling (Mann’s hockey stick is a bit of a red herring in that regard and is not really a core part of what is happening scientifically), in my opinion, but with what the environmentalists and journalists are doing with the key data, which is being presented to the general public in a highly skewed manner either to scare people into backing a tree hugging back to nature agenda or to come up with a sensational headline.

  50. tetris
    Posted Jan 7, 2008 at 8:01 PM | Permalink

    Re: 40
    PaddickJ
    Pls see Ian McLeod at #41 for the NYT article. The BG can be found at: http://www.boston.com/bostonglobe/editorial opinion/oped/articles/2008/01/06 br r r Where did Global warming Go?”

  51. Ian McLeod
    Posted Jan 7, 2008 at 9:45 PM | Permalink

    Try this link.

    http://www.boston.com/bostonglobe/editorial_opinion/oped/articles/2008/01/06/br_r_r_where_did_global_warming_go/

  52. Posted Jan 8, 2008 at 11:12 PM | Permalink

    aurbo says:
    January 6th, 2008 at 4:36 pm

    It is refreshing to finally see a thread on the topic of H2O even though it is introduced through the back-door (in the form of cloudiness).

    But cloudiness is the high road to understanding: the albedo of the open ocean is essentially zero. The albedo of a low-level stratocumulus blanket is up to 60%. Plug those figures into the incoming radiation and see how the putative CO2 effects are dwarfed.

    …the Senior & Mitchell 1993 paper cited in Steve M’s original post even includes references to phase changes of H2O. How refreshing!

    Remember that a cloud isn’t just a cloud: even the size of the droplets can significantly affect the albedo. No wonder climate modellers shy away from cloud physics when, for example, the collision of two drops is governed by the size of the impactees and can result in rain, lower albedo, or more smaller droplets, which give higher albedo.

    Although air is considered “saturated” when RHs reach or exceed 90%, clouds can still form at much lower RHs, 70% or even less. It depends on the amount and character of condensation nuclei present. The ECMWF model parameterization takes 50% RH as the level at which no clouds will be present.

    But, of course, it’s even more complicated than that. The number of CCNs depends on all manner of things: industrialisation, wind strength, wave breaking, desperate inhabitants on the underside of melting icebergs pumping out DMS to save their little home. Even the presence of organics on the ocean surface can alter the number of CCNs by a factor of 1.5 (Tyree, Corey A.; Hellion, Virginie M.; Alexandrova, Olga A.; Allen, Jonathan O. Foam droplets generated from natural and artificial seawaters), and that is a minor uncertainty. And, of course, lack of nuclei can mean no clouds form at all, with the expected albedo dropping from 60% to zero, remember (http://earthobservatory.nasa.gov/Newsroom/NewImages/images.php3?img_id=11271). So a piece of ocean that should be reflecting 60% of the incoming radiation is reflecting none. How many watts/metre^2 is that? Over what area?
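
    As a back-of-the-envelope answer to that question (using round numbers I am supplying myself: a global-mean insolation of roughly 340 W/m^2 and the commonly quoted ~3.7 W/m^2 forcing for doubled CO2, neither of which comes from this thread):

        # Illustrative back-of-the-envelope comparison; all numbers are round figures.
        incoming = 340.0              # W/m^2, approximate global-mean top-of-atmosphere insolation
        albedo_open_ocean = 0.0       # open ocean: essentially non-reflective
        albedo_stratocumulus = 0.6    # low stratocumulus deck: roughly 60% reflective

        extra_reflected = incoming * (albedo_stratocumulus - albedo_open_ocean)
        co2_doubling_forcing = 3.7    # W/m^2, commonly quoted radiative forcing for 2xCO2

        # Note: this applies only over the area actually covered by the cloud deck,
        # which is exactly the "over what area?" question.
        print(f"Extra reflected shortwave under the cloud deck: ~{extra_reflected:.0f} W/m^2")
        print(f"Ratio to 2xCO2 forcing: ~{extra_reflected / co2_doubling_forcing:.0f}x")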

    I can understand why clouds are parameterised; understand, but not approve.

    JF

  53. Posted Mar 15, 2009 at 3:45 PM | Permalink

    Here it is over a year later and Dr. Gray indeed gets the last, best word:
    “Recent GCM global warming scenarios assume that a slightly stronger hydrologic cycle (due to the increase in CO2) will cause additional upper-level tropospheric water vapor and cloudiness. Such vapor-cloudiness increases are assumed to allow the small initial warming due to increased CO2 to be unrealistically multiplied 2-4 or more times. This is where most of the global warming from the GCMs comes from – not the warming resulting from the CO2 increase by itself but the large extra warming due to the assumed increase of upper tropospheric water vapor and cloudiness. As CO2 increases, it does not follow that the net global upper-level water vapor and cloudiness will increase significantly. Observations of upper tropospheric water vapor over the last 3-4 decades from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data and the International Satellite Cloud Climatology Project (ISCCP) data show that upper tropospheric water vapor appears to undergo a small decrease while Outgoing Longwave Radiation (OLR) undergoes a small increase. This is opposite to what has been programmed into the GCMs. The predicted global warming due to a doubling of CO2 has been erroneously exaggerated by the GCMs due to this water vapor feedback.”
    The rest is here. http://tropical.atmos.colostate.edu/Includes/Documents/Publications/gray2009.pdf

One Trackback

  1. By Clouding Up Man-Made Global Warming on May 31, 2012 at 3:28 AM

    […] at how clouds behave in computer climate models and in nature. Climatologists acknowledge that clouds represent the biggest uncertainty about the future course of global warming. University of Alabama, Huntsville, climatologist Roy […]