Nic Lewis on IPCC Climate Sensitivity

Nic Lewis (a co-author of O’Donnell et al 2010) is a very sharp analyst who’s recently taken an interest in climate sensitivity estimates and has an interesting guest post at Judy Curry’s today.

In his studies, he noticed that the IPCC’s representation in AR4 Figure 9.20 of the probability distribution for climate sensitivity arising from the observationally-based Forster and Gregory 2006 differed substantially from the distribution in the original article, and that the alteration had the effect of fattening the high-end tail of the climate sensitivity distribution. He reports:

The IPCC did not attempt, in the relevant part of AR4:WG1 (Chapter 9), any justification from statistical theory, or quote authority for, restating the results of Forster/Gregory 06 on the basis of a uniform prior in S. Nor did the IPCC challenge the Forster/Gregory 06 regression model, analysis of uncertainties or error assumptions. The IPCC simply relied on statements [Frame et al. 2005] that ‘advocate’ – without any justification from statistical theory – sampling a flat prior distribution in whatever is the target of the estimate – in this case, S. In fact, even Frame did not advocate use of a prior uniform distribution in S in a case like Forster/Gregory 06. Nevertheless, the IPCC concluded its discussion of the issue by simply stating that “uniform prior distributions for the target of the estimate [the climate sensitivity S] are used unless otherwise specified”.

The transformation effected by the IPCC, by recasting Forster/Gregory 06 in Bayesian terms and then restating its results using a prior distribution that is inconsistent with the regression model and error distributions used in the study, appears unjustifiable. In the circumstances, the transformed climate sensitivity PDF for Forster/Gregory 06 in the IPCC’s Figure 9.20 can only be seen as distorted and misleading.
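
To see the mechanics of the complaint, here is a minimal numerical sketch (an illustration only, not Nic’s code; the forcing F_2x = 3.71 W/m2 for doubled CO2 and the 0-10C prior range are assumptions made purely for this example). Restating the result with a uniform prior in S amounts to dropping the change-of-variables Jacobian, i.e. reweighting the simple inversion of the paper’s Y result by a factor of S^2:

```python
# Minimal sketch (illustrative assumptions, not Nic Lewis's code) of how
# restating the Forster/Gregory 06 result with a uniform prior in S
# fattens the high-S tail.  Assumed: the paper's result is treated as
# Y ~ N(2.3, (1.4/1.96)^2) in W/m^2/K, S = F_2x/Y with F_2x = 3.71 W/m^2,
# and the uniform prior in S spans 0-10 C.
import numpy as np
from scipy import stats

F2X = 3.71                        # assumed forcing for doubled CO2 (W/m^2)
y_mean, y_sd = 2.3, 1.4 / 1.96    # Y = 2.3 +/- 1.4 (95%), Gaussian

s = np.linspace(0.01, 10.0, 5000)               # sensitivity grid (C)
like = stats.norm.pdf(F2X / s, y_mean, y_sd)    # likelihood at Y = F2X/s

# (a) Simple inversion of the paper's result: change of variables with
# Jacobian |dY/dS| = F2X/S^2 (consistent with a uniform prior in Y).
pdf_inv = like * F2X / s**2

# (b) Uniform prior in S: posterior proportional to the likelihood alone;
# dropping the Jacobian upweights large S by a factor of S^2.
pdf_uni = like.copy()

for pdf in (pdf_inv, pdf_uni):
    pdf /= np.trapz(pdf, s)       # normalise on the truncated 0-10 range

for label, pdf in [("simple inversion", pdf_inv),
                   ("uniform prior in S", pdf_uni)]:
    tail = np.trapz(pdf[s > 4.5], s[s > 4.5])
    print(f"{label:20s} P(S > 4.5 C) = {tail:.3f}")
```

On these illustrative numbers, the uniform-in-S restatement assigns roughly an order of magnitude more probability to S above 4.5C than the paper’s own basis implies, which is the tail-fattening Nic describes.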

Update – A commenter at Judy Curry’s has drawn attention to the AR4 Review Comments. I’ve uploaded the more convenient version that used to be available (IPCC took this down in favor of a less usable version). Search “uniform prior” for interesting discussion.

75 Comments

  1. Hector M.
    Posted Jul 5, 2011 at 9:40 AM | Permalink

    If reality does not fit the model, ignore reality. If the model does not fit the policy agenda, tinker with the model till it fits.

  2. Venter
    Posted Jul 5, 2011 at 9:46 AM | Permalink

    The only empirically observed evidence of climate sensitivity was tampered with and altered to show a false figure. Words fail to describe this piece of machination!

  3. KnR
    Posted Jul 5, 2011 at 9:57 AM | Permalink

    ‘Distorted and misleading’ – well, that would appear to be part of the IPCC mission statement, so I would say in this case they’re right on track.

  4. stan
    Posted Jul 5, 2011 at 10:04 AM | Permalink

    I am shocked, shocked, to find such things going on in an IPCC establishment.

  5. Posted Jul 5, 2011 at 10:28 AM | Permalink

    When dealing with a regression analysis, one must use the priors (if you insist on being Bayesian) on the quantity you estimate (Y here), not down the road where you divide by it to get a derived quantity (S, sensitivity in this case). I think you would be hard pressed to find an example in the stats literature where it is done the way the IPCC did it.
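
    To spell out the change-of-variables point (notation assumed for illustration: S = F_2x/Y, with F_2x the forcing from a doubling of CO2): if the regression delivers a posterior density p_Y(y) for Y, then the consistent density for the derived quantity S carries the Jacobian factor

    p_S(s) = p_Y(F_2x/s) · F_2x/s^2

    whereas a uniform prior in S keeps only the likelihood term p_Y(F_2x/s) and drops the F_2x/s^2 factor, which is equivalent to multiplying the paper’s implied density by s^2, a strongly informative reweighting towards high sensitivity.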

  6. Steve McIntyre
    Posted Jul 5, 2011 at 10:29 AM | Permalink

    A commenter at Judy Curry’s has drawn attention to AR4 Review Comments. I’ve uploaded the more convenient version. Search “uniform prior” for interesting discussion.

  7. Steve McIntyre
    Posted Jul 5, 2011 at 10:33 AM | Permalink

    Reviewer Michael Mann criticized the IPCC’s use of a uniform prior as well:

    9-883 A 55:14 55:16 The uniform prior as used in these studies is not what statisticians would call “uninformative”, since it may give decisive information, e.g., on upper bounds on climate sensitivity.
    [Michael Mann (Reviewer’s comment ID #: 156-80)]

    Noted

    9-885 A 55:20 55:24 The uniform priors used in these studies can be rather restrictive, particularly in the estimation of upper bounds on the climate sensitivity. They may be overly restrictive, in that most of the information in upper bound estimates comes from the prior rather than from data.
    [Michael Mann (Reviewer’s comment ID #: 156-79)]

    The studies referred to in that section use multiple lines of evidence and are therefore only marginally influenced by the prior.

    • Steve McIntyre
      Posted Jul 5, 2011 at 11:05 AM | Permalink

      Nic Lewis explained the origin of his analyses at Judy’s as follows:

      I attended a Climate Change Question Time event in London late last year, at which many of the important UK players on the subject spoke. Vicky Pope, the head of Climate Change Advice at the UK Met Office, stated that the possibility of climate sensitivity being under 1.5 deg.C had effectively been ruled out. I subsequently emailed her to ask what her basis for saying this was, thinking that there might be some definitive new study of which I was unaware.

      Vicky Pope replied simply that “The AR4 concluded that ECS is very likely larger than 1.5C based on the evidence provided in Box 10.2 of AR4”. Box 10.2 reproduces Figure 9.20 so far as observational evidence goes, so I got hold of all the papers whose PDFs featured in Figure 9.20 and analysed them. The Forster/Gregory 06 paper seemed the best of the bunch (most of the others have severe shortcomings, IMO), but then I noticed that the IPCC had changed the basis on which its sensitivity estimates were calculated. Hence this article.

    • Posted Jul 5, 2011 at 11:24 AM | Permalink

      These comments do credit to Dr. Mann. But the end result disregarded them. This reminds us that no one Mann is bigger than the system as far as the IPCC and even the central science of WG1 is concerned. The bias in the system is always (it seems) towards magnifying the perceived threat. For me what Nic has uncovered is the worst example by far. Respect to Mann but not to the system that made him, which of course happily ignores him when needs demand.

      • NicL
        Posted Jul 5, 2011 at 11:45 AM | Permalink

        My reading of Mann’s comments is that he was more concerned that use of an upper bound for a uniform prior (or, presumably, any other prior) could lead to the risk of very high sensitivity being ignored, rather than that the uniform shape of the prior could distort sensitivity estimates in an upwards direction.

        • Posted Jul 5, 2011 at 12:02 PM | Permalink

          To be honest I read it the same way as you did. But then I thought, let’s give the guy (or my cynicism) a break. His language wasn’t totally clear. As for his ultimate motivation … who knows.

        • Bernie
          Posted Jul 5, 2011 at 12:29 PM | Permalink

          My read is that Dr. Mann is arguing for higher sensitivities.

        • Posted Jul 5, 2011 at 1:29 PM | Permalink

          Re: Bernie (Jul 5 12:29), That is my read on it as well.

        • tetris
          Posted Jul 5, 2011 at 1:39 PM | Permalink

          I can’t read it any other way than that.

        • Posted Jul 5, 2011 at 10:41 PM | Permalink

          My read is that Dr. Mann is arguing for higher sensitivities.

          My read is that in the first comment he’s arguing that ‘The uniform prior as used in these studies is not what statisticians would call “uninformative”’ – and Nic’s shown that he’s dead right about that. What follows is an e.g. To say that he’s arguing for higher sensitivities in this comment is too simplistic for me. He is saying something importantly right. That’s where I was willing to give him credit (a popular choice, the people’s choice … er, I think not!)

          But that raises the question of the relationship between the two Michael Mann comments and the editors’ response to the second. Steve has highlighted the ‘marginally influenced’ in the latter – presumably because Nic has shown that not to be true for the effect on Forster & Gregory. That seems a key editorial statement – a significantly wrong one.

          But I admit I don’t understand the rest of the editors’ comment:

          The comment probably referrs to a few lines up, text has been inserted to clarify that the limits of the prior reflect computer time limitations and are generally wide enough to encompass ranges experts consider plausible.

          I don’t have the draft being referred to, so the (perhaps erroneous) line numbers aren’t helping me. Can anyone point to where in the final text they “clarify that the limits of the prior reflect computer time limitations and are generally wide enough to encompass ranges experts consider plausible”?

  8. RuhRoh
    Posted Jul 5, 2011 at 10:47 AM | Permalink

    Thanks for that handy link to the AR4 comments.

    Annan makes some references to ‘LGM-derived prior’ in contrast to ‘uniform prior’.

    Please, what is the meaning of this TLA, “LGM” ?

    TIA

    RR

    Steve: Last Glacial Maximum

    • NicL
      Posted Jul 5, 2011 at 11:04 AM | Permalink

      Last glacial maximum – i.e. from changes dating back to the last ice age

      • RuhRoh
        Posted Jul 5, 2011 at 6:06 PM | Permalink

        Any chance of a nifty analogy or restating in a more ‘engineering’ lexicon?
        My need for remedial statistical education is abundantly clear, no argument about that.

        I was just hoping that the mathematical gyrations might be expressed in some kind of ‘signal processing’ perspective, or ?
        The net effect seems to be some kind of warping/stretching, no?

        I expect that these questions are strong evidence for my second sentence.
        I’m betting that the number of folks comfortable with ‘uniform prior’ is quite small, and this seems to be a big message worthy of wider comprehension.

        Does anyone 1) understand this material, 2) remember what it was like when they didn’t understand these topics, and 3) retain the ability to convey it to non-statisticians?
        TIA
        RR

  9. Kenneth Fritsch
    Posted Jul 5, 2011 at 11:29 AM | Permalink

    Nic Lewis did a great job of analysis and explaining the Bayesian process that the IPCC misapplied in this case and the consequences of that misapplication.

  10. j ferguson
    Posted Jul 5, 2011 at 12:45 PM | Permalink

    Does the above mean that the expression of the only experimentally obtained grasp of sensitivity was “adjusted” to fall more into line with the products of modeling?

    Too bad the modelers were not unnerved by the gap between their work and what had been found by experiment.

    • mpaul
      Posted Jul 5, 2011 at 2:16 PM | Permalink

      Bingo.

  11. stan
    Posted Jul 5, 2011 at 1:48 PM | Permalink

    Please correct me if I missed something, but this appears to place IPCC defenders between the proverbial rock and hard place. Either the IPCC looks really, really bad for manipulating a critical study OR they argue that the statistical treatment is simply one of several acceptable options. If they argue the latter, however, they are admitting that the science is not “settled” and, indeed, not even close.

    • Posted Jul 5, 2011 at 2:42 PM | Permalink

      I predict (if anything) a vigorous defence of the IPCC assumptions of how to handle the statistics as the only valid method plus a defence that the sensitivity derived from models is “better” than that from data.

      • Posted Jul 5, 2011 at 4:54 PM | Permalink

        Except Hansen says the models are the least informative.

        • NicL
          Posted Jul 5, 2011 at 6:01 PM | Permalink

          My take on Hansen’s recent long paper is that he thinks AOGCMs have got sensitivity about right (for whatever reason) at circa 3C, a figure he strongly believes in based on some paleoclimate studies. However, he now, rather belatedly, realises that the models’ effective ocean diffusivities are far too high. To prevent correction of this ocean mis-modelling from causing AOGCM back-cast simulations to produce unrealistically high warming, he wants to assume (with no real evidence) that tropospheric aerosol forcing is much more negative than the current best estimates suggest.

        • Posted Jul 5, 2011 at 10:51 PM | Permalink

          Helpful pointer to what Hansen’s currently arguing, thanks. Aerosols always seem to be the get-out-of-jail card, as Lindzen’s pointed out for years.

  12. philh
    Posted Jul 5, 2011 at 3:41 PM | Permalink

    Does Vicky Pope know about this? NicL, you going to tell her?

    • NicL
      Posted Jul 5, 2011 at 4:19 PM | Permalink

      I have emailed her, but she is away from the office at present.

  13. tolkein
    Posted Jul 5, 2011 at 5:25 PM | Permalink

    There are two options here:

    1. The IPCC knew what they were doing when they amended the graph.
    2. The IPCC didn’t know what they were doing when they altered the graph.

    Which option is worse?

    • JT
      Posted Jul 5, 2011 at 9:34 PM | Permalink

      Question: WHO altered the graph? Who is the person who did this? Because “the IPCC” doesn’t DO anything. Somewhere there is someone who did this. Perhaps that person should explain why.

      • tetris
        Posted Jul 5, 2011 at 10:38 PM | Permalink

        NicL
        This is a very pertinent question. Who was in charge of that particular chapter/section, and who did that individual report to?
        Is that a traceable matter of record?

        • Steve McIntyre
          Posted Jul 5, 2011 at 10:53 PM | Permalink

          Hegerl and Zwiers were the CLAs. They were also coauthors of Hegerl et al, cited in the caption of Figure 9.20 for the methodology. The alteration to the style of Figure 9.20 would have been done by or under the supervision of Hegerl and Zwiers. They were also judge and jury in rejecting Annan’s criticism of uniform priors.

        • Posted Jul 5, 2011 at 11:52 PM | Permalink

          About as far from an ‘engineering quality exposition’ as it’s possible to imagine.

        • stan
          Posted Jul 6, 2011 at 6:32 AM | Permalink

          Richard,

          That’s because it was only the science they were declaring “settled”. How to “engineer” the climate was someone else’s job. 😉

  14. kim
    Posted Jul 5, 2011 at 5:31 PM | Permalink

    Judy’s improved her thread all out of measure by deleting all the non-substantive commentary, mostly ill-tempered wrangling between Joshua and me. I can delight in her editorial skill, as I do in Steve’s, but still regret that my best stuff always gets deleted. For example, this episode of my outrage:

    Bayesian babbling
    Snookered policy makers.
    Climate science mocks.

    H/t some kim at Wot’s Up?
    ==============

    • Posted Jul 5, 2011 at 8:17 PM | Permalink

      Joshua’s justification for trolling was interesting: “Judy Curry didn’t indicate this was a technical thread” (!).

  15. Soronel Haetir
    Posted Jul 5, 2011 at 8:45 PM | Permalink

    One thing to note: Lewis’ post at Curry’s indicates that there are two Dr. Michael Manns. Any indication which one made the quoted review comments? I would suspect the co-author of the paper in question, which is not the Dr. Mann we all know and love.

    • Posted Jul 5, 2011 at 10:47 PM | Permalink

      Lewis’ post at Curry’s indicates that there are two Dr. Michael Manns

      Where?

    • Ron Cram
      Posted Jul 5, 2011 at 11:54 PM | Permalink

      There is a Michael E. Mann and a Michael L. Mann. Since the Michael Mann in the review comments was arguing for a higher sensitivity, I think we can safely assume it is the Michael E. Mann of Hockey Stick infamy.

  16. timetochooseagain
    Posted Jul 5, 2011 at 11:26 PM | Permalink

    I think that more attention needs to be placed on the issue of whether we should “insist on being Bayesian”, as Craig Loehle put it. My understanding is that Bayesian methods are a good way to inject one’s assumptions into what should be an analysis of just the data.

    Perhaps on a stats oriented blog it would be informative to see a show of hands kind of thing, how many consider themselves “Bayesian” versus “frequentist”…I will say I for one am a frequentist. I am open to being convinced of anything of course! 🙂

    • mpaul
      Posted Jul 5, 2011 at 11:52 PM | Permalink

      I don’t think that this issue is, in any way, an indictment of Bayesian approaches. Rather, the issue is the post hoc injection of ignorance into a perfectly adequate model in order to change an inconvenient result. A uniform prior is used when you have no expert predictors – you are at a point of complete ignorance. Information is then acquired and used to modify this state of ignorance and, as a result, the model gets better (more skillful). What they did here was to take a very solid observation-based model and artificially inject ignorance into it. Only through this injection of artificial ignorance were they able to get the answer they wanted.

      • Geoff Cruickshank
        Posted Jul 6, 2011 at 1:44 AM | Permalink

        Except that, as Annan points out in the discussion, there is nothing ‘ignorant’ about 0-10; it’s preemptively skewed high.

      • Posted Jul 6, 2011 at 9:27 AM | Permalink

        “injection of artificial ignorance” this conjures such an image!

      • j ferguson
        Posted Jul 7, 2011 at 5:34 AM | Permalink

        Assuming that there might be “actual ignorance” or “real ignorance” in contrast, how is “artificial ignorance” distinguished? Is characterizing the quality or provenance of “ignorance” a feature of Bayesian analysis, of which I’m demonstrating my own total ignorance? Another coefficient?

        But then putting a coefficient on zero doesn’t seem to do much. There must be something more.

        If ‘artificial ignorance” is not a term of art, then you’ve hit on something I’ll be able to use the rest of my life – thanks much.
        If it is a term of art, I’ll use it anyway. It’s very good.

  17. Alexander K
    Posted Jul 6, 2011 at 3:05 AM | Permalink

    Just to put this matter in another context;
    If a teacher of a university entrance-level subject alters student exam answers to achieve higher marks for the student cohort and thus better chances of entry to university for that cohort, that is an offence that carries very severe negative legal and professional sanctions.
    Or am I seeing this matter in shades of black and white that are too definite?

  18. Posted Jul 6, 2011 at 3:35 AM | Permalink

    Fiddle with the model till it fits? This seems fringe… We will see

  19. matthu
    Posted Jul 6, 2011 at 5:33 AM | Permalink

    Jeremy Harvey on Bishop Hill has written a great explanation of the climate sensitivity controversy that I think is worthy of being published alongside Doug Keenan’s article “How Scientific is Climate Science?” in the WSJ (easily found in Google).

    I had thought that what the IPCC had done would be considered too esoteric to be explained to the masses and therefore unworthy of much attention but he seems to have achieved the impossible.

    Comment no. 66 (this link should take you to p2 starting at comment 41 if I am lucky)

    http://www.bishop-hill.net/blog/2011/7/5/ipcc-on-climate-sensitivity.html?lastPage=true#comment13552977

    • John Whitman
      Posted Jul 6, 2011 at 1:32 PM | Permalink

      matthu,

      Thanks for pointing out Jeremy Harvey’s comment on Bishop Hill, which helps explain in less formal statistical terms what Nic Lewis’s main CA post was all about.

      John

  20. Don Keiller
    Posted Jul 6, 2011 at 7:35 AM | Permalink

    Any comment from Nick Stokes?

    • GrantB
      Posted Jul 6, 2011 at 7:51 AM | Permalink

      Nick is a semanticist and lawyer. He will not be making comments about the Rev Bayes.

      • Geoff Sherrington
        Posted Jul 6, 2011 at 7:12 PM | Permalink

        GrantB (Jul 6, 2011 at 7:51 AM): Nick is a semanticist and lawyer.

        Has this page suddenly reduced his seman output?

  21. Stacey
    Posted Jul 6, 2011 at 7:44 AM | Permalink

    Reading the “Convenient Version” leads me to ask: who is VINCENT GRAY?

    He suffered so many rejections from the IPCC that it leads me to the conclusion that he must be extremely intelligent?

    “But I could have told you, Vincent,
    This world was never meant for one
    As beautiful as you.”
    By Paul Simon

    [RomanM: Don McLean, not Paul Simon]

    • Stacey
      Posted Jul 6, 2011 at 12:30 PM | Permalink

      @RomanM

      “Don McLean not Paul Simon”

      Sorry, I better go and eat American Pie :-)

  22. Ian Blanchard
    Posted Jul 6, 2011 at 7:55 AM | Permalink

    My understanding of the IPCC remit was that they were to undertake a review of the state of the published science of climate change at periodic intervals. My take on this is that they were significantly over-stepping the bounds of such a review in performing recalculations on already published data, especially in fundamentally changing the approach to interpreting this data. As such, this is surely (at least to a limited extent) original work rather than a review.

    It is obvious from some of the review comments that both Annan and Mann saw drawbacks with the statistical processing technique being applied, but that these criticisms were largely given the brush-off by the lead authors. This raises the question of whether the authors correctly understood the processing that they were undertaking (and the transforming effect this had on the F&G06 results) or, if they did understand, whether the criticisms were ignored because the results ‘looked right’ (I love the term ‘chartmanship’ for this type of manipulation).

  23. S. Geiger
    Posted Jul 6, 2011 at 8:20 AM | Permalink

    I too have been a bit surprised by the lack of discussion on IPCC’s pro-active ‘press time’ manipulation of a previous study. I *thought* IPCC was to provide a review of the state of the science….not inject new methods/approaches into the record or apply different methods to existing data that have already been reviewed and published (?)

  24. Posted Jul 6, 2011 at 9:02 AM | Permalink

    Sea level has its own curve-stretching issue. One of the Hockey Stick team and RealClimate.org organizers, named Rahmstorf, used a very odd mathematical alternative curve shape instead of just fitting a curve to his data. The difference is shown here:

    Another bizarre oddity in the paper is the fitting of a straight line to scatter plot data that does not merit a linear fit. Then he extrapolates far into the future, based on it, as shown here:

    These odd manipulations (along with added “corrections” to *actual* sea level based on water reservoirs on land, while ignoring ground water pumping) resulted in a sea level curve that shows a recent upswing:

    Click to access rahmstorf_science_2007.pdf

    Tom Moriarty’s blog at http://climatesanity.wordpress.com has several recent entries on this fiasco.

    In other news, basic sea level data is not on the pro-AGW side at all:

  25. Roger Knights
    Posted Jul 6, 2011 at 9:14 AM | Permalink

    If the IPCC has no good rebuttal to this accusation, it will give a lot of folks (including politicians) the excuse they need to get off the warmist bandwagon. They will be able to say, “The IPCC misled me.”

  26. NicL
    Posted Jul 6, 2011 at 12:48 PM | Permalink

    I should maybe cross-post a comment that I have just made at Climate Etc:

    Gabi Hegerl, joint coordinating lead author of AR4:WG1 Chapter 9, has asked that it be mentioned on this blog that the authors of Forster and Gregory were part of the author team and not unhappy about the presentation of their result (in Figure 9.20). I hereby do so. That firms up my tentative earlier comment that F&G were Contributing authors for chapter 9 of AR4:WG1 and, presumably, accepted (at least tacitly) the IPCC’s treatment of their results.

    Piers Forster has also confirmed that when their paper was published, he tried to invert the results (convert them from Y to S) and got a range of sensitivity much like Figure 3 in this post. However, he remembers being persuaded by the Oxford group (Frame, Allen, Stainforth, etc, I assume) and other statisticians that by doing this simple inversion F&G were inadvertently assuming a very skewed and unrealistic prior themselves.

    Of course, whether or not the authors of a paper agree with presenting its results on a different, inconsistent basis in no way shows that doing so was valid, nor that there was anything wrong with the basis on which they were originally presented. I am not sure that Piers Forster realised that doing anything other than a simple inversion would produce a PDF for S that implied the results for Y presented in their paper were wrong. Perhaps David Frame told him that no such implication arose. Certainly, Frame doesn’t seem to have been very concerned about the inconsistencies between the various approaches he advocated. As climate scientist James Annan wrote, commenting on the approaches advocated in Frame 05: “Basically, you have thrown away the standard axioms and interpretation of Bayesian probability, and you have not explained what you have put in their place.”

    • Steve McIntyre
      Posted Jul 6, 2011 at 1:02 PM | Permalink

      Nic, in other circumstances, Trenberth (see here) said that contributing authors were not part of the IPCC “writing team”. This was in the context of Phil Jones, who had been a Contributing Author but not a Lead Author and thus not part of the “writing team”. As I recall, some of Trenberth’s defenders scorned the idea of considering Contributing Authors as part of the writing team. See here.

      The more serious issue is the validity of uniform priors and their impact on sensitivity estimates; whether or not there was a consensus of Chapter 9 authors on the matter doesn’t make their conclusion correct.

      • NicL
        Posted Jul 6, 2011 at 1:29 PM | Permalink

        Steve, noted re what constitutes the IPCC “writing team”, thank you.

        Entirely agree about a consensus of authors not making a conclusion correct!

      • mark t
        Posted Jul 6, 2011 at 10:00 PM | Permalink

        I simply think it is a problem that they presented a result that was different from the peer-reviewed article they cited. The IPCC does not do its own science; it merely cites (and summarizes) the peer-reviewed science of others. Yet another instance in which their own policies take a back seat to a favorable result.

        Mark

  27. Paul Linsay
    Posted Jul 6, 2011 at 2:03 PM | Permalink

    Nic Lewis,

    Is there a plot of the distribution of the actual experimental Y values? Your Fig. 2 is clearly some kind of calculation and F&G 2006 doesn’t seem to have one either. A comparison of the two choices of priors with real data would be interesting. There’s also the usual frustrating lack of error bars on all these graphs.

    • NicL
      Posted Jul 6, 2011 at 4:40 PM | Permalink

      Paul,
      Unfortunately Forster & Gregory did not provide the exact data that they used in their regression, without which it is not really possible to check their results (although they did at least provide clear graphs of the data used). They state the use of gaussian error distributions in the observable variables, which looks reasonable.

      I derived my Figure 2 from F&G’s stated main result of Y = 2.3 +/- 1.4, within a 95% confidence limit. F&G say that the mean and 95% range were both derived by 10,000 Monte Carlo simulations, subsetting an equal number of random data points from the original dataset and performing OLS regression for each of those sets. F&G did not show any PDF for Y or S. I generated Figure 2 by using a Normal distribution centered on 2.3 with a standard deviation of 1.4/1.96 (1.96 being the two-sided 95% point of N(0,1)). As explained in the article, I used a gaussian rather than a t-distribution since that seems to be what the IPCC did; the uncertainty range of +/- 1.4 should already reflect the wider-than-Normal 95% points of a t-distribution, of course.

      One doesn’t need to use any prior to generate Figure 2, but one can recast it in Bayesian terms using a uniform prior in Y, as F&G state.
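
      For concreteness, a minimal sketch of that construction (an illustration of the recipe just described, not the code actually used for the article):

      ```python
      # Regenerate a Figure 2-style PDF for Y from the stated F&G 06 result:
      # Y = 2.3 +/- 1.4 (95%), treated as Normal with sd = 1.4/1.96.
      import numpy as np
      from scipy import stats

      y_mean = 2.3
      y_sd = 1.4 / 1.96   # 1.96 = two-sided 95% point of N(0, 1)

      y = np.linspace(-1.0, 6.0, 1201)        # grid for Y (W/m^2/K)
      pdf_y = stats.norm.pdf(y, y_mean, y_sd)

      # Sanity check: the 2.5% and 97.5% points should land at 2.3 -/+ 1.4.
      lo, hi = stats.norm.ppf([0.025, 0.975], y_mean, y_sd)
      print(f"95% range for Y: {lo:.2f} to {hi:.2f} W/m^2/K")  # 0.90 to 3.70
      ```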

  28. ferd berple
    Posted Jul 7, 2011 at 12:43 PM | Permalink

    There would appear to be no valid statistical reason to assume uniform priors. That assumption is used when you know nothing about the distribution.

    In this case we know something about the distribution, based on the paleo reconstructions. As CO2 levels have gone up and down, earth’s average temperature has kept within a narrow band of 11 – 22 C.

    With current temperatures at 14.5, if you are going to use a uniform prior then it should be in the range of -3.5 to 7.5, not 1-18.5 as chosen. The chosen range would have the effect of skewing the estimate towards an unrealistically high value.

    In any case, using the paleo data, one can build a distribution for CO2 and temperature that makes no assumption about uniform distribution. Once that is established, the appropriate statistical treatment will be obvious.

  29. Posted Jul 7, 2011 at 6:45 PM | Permalink

    There’s an important follow-up from Nic in the form of a letter to Gabi Hegerl. Evidently a uniform prior over S was not ‘uniformly’ applied, as the caption stated – because if it had been, it would have shown how ridiculous this was in the case of Gregory 02 and others. Oh my, oh my. Great investigative work, Lewis. (But wasn’t he a detective in Oxford? McIntyre as Morse, Lewis the younger sidekick: it all begins to make sense.)

  30. RuhRoh
    Posted Oct 5, 2011 at 3:49 PM | Permalink

    Bayesian Analysis Loses Some Cachet

    from

    http://www.theregister.co.uk/2011/10/05/bayes_formula/

    Or at least, it shouldn’t be relied upon as it has been in recent years: according to the judge, before any expert witness plugs data into the theorem to brief the jury on the likelihood that a defendant is guilty, the underlying statistics should be “firm” rather than rough estimates.

    Apparently from here:

    http://www.guardian.co.uk/law/2011/oct/02/formula-justice-bayes-theorem-miscarriage?INTCMP=SRCH

  31. Skiphil
    Posted Dec 19, 2012 at 4:16 AM | Permalink

    New thread at BH discussing an article by Matt Ridley, who summarizes where he thinks the climate sensitivity discussion is going.

  32. AntonyIndia
    Posted Jan 25, 2013 at 8:22 AM | Permalink

    Any comments from Steve McIntyre on Steve Jewson’s contributions to the IPCC’s climate sensitivity reporting, starting @ http://www.realclimate.org/index.php/archives/2013/01/on-sensitivity-part-i/comment-page-2/#comment-314549 (as republished at BH), or on his 2009 publication about this matter @ http://arxiv.org/pdf/1005.3907 ?

  33. Skiphil
    Posted Apr 12, 2013 at 1:43 PM | Permalink

    Not a scientific paper, but M. Mann and Dana N. are hand-waving a defense of a “canonical” value for climate sensitivity of 3C in response to the recent article in “The Economist” — interesting to see how millennial multi-proxy studies are supposed to help resolve the CS debate in Mann’s favor:
    [emphasis added]
    Mann and Dana N. on climate sensitivity

    …However, there is a wealth of other sources of information that scientists have used to try to constrain climate sensitivity (see for example this discussion at the site RealClimate). That evidence includes the paleoclimate record of the past thousand years, the specific response of the climate to volcanic eruptions, the changes in global temperature during the last ice age, the geological relationship between climate and carbon dioxide over millions of years, and more.

    When the collective information from all of these independent sources of information is combined, climate scientists indeed find evidence for a climate sensitivity that is very close to the canonical 3°C estimate. That estimate still remains the scientific consensus, and current generation climate models — which tend to cluster in their climate sensitivity values around this estimate — remain our best tools for projecting future climate change and its potential impacts….

    • Brandon Shollenberger
      Posted Apr 13, 2013 at 4:31 AM | Permalink

      More interesting to me is the byline, “By Dana Nuccitelli and Michael E Mann.” The article is your typical garbage, but it seems SKS and Michael Mann are starting to “come out of the closet.” That’s interesting because SKS and Stephan Lewandowsky are also allowing themselves to become publicly associated. That means SKS, Lewandowsky and Mann are all basically grouping themselves together.

      I think that’s a tactical blunder. I think Dana Nuccitelli and John Cook will likely benefit from the additional publicity, but by grouping themselves together, I think this group will polarize itself too much. Anyone who shuns the behavior of SKS, Mann or Lewandowsky et al will now (basically) be forced to shun all of them. None of them behave well, and if this keeps up, it’ll be seen as reasonable to smear them with the failures of each other. I believe Benjamin Franklin said something to the effect of:

      We must all hang together, or assuredly we shall all hang separately.

      That seems to apply to what we’re seeing here.

      • kim
        Posted Apr 13, 2013 at 5:52 AM | Permalink

        Golden chains wreathe narrative bouquets, and bind the perfumed fools so fast.
        ========

      • Skiphil
        Posted Apr 13, 2013 at 8:32 AM | Permalink

        Re: Ben Franklin quotation

        Yes, but in that case he had a truly noble cause, whereas in the Mann-SkS case we get only the Noble Cause Corruption.

  34. Skiphil
    Posted May 19, 2013 at 10:03 PM | Permalink

    Latest from Nic Lewis:

    New paper shows transient climate response less than 2°C

3 Trackbacks

  1. By Energy and Environment News on Jul 5, 2011 at 10:18 AM

    […] Nic Lewis on IPCC Sensitivity Steve McIntyre, Climate Audit, 5 July 2011 […]

  2. By IPCC past waarnemingen aan aan modellen [“IPCC adjusts observations to models”] on Jul 5, 2011 at 3:10 PM

    […] the adjusted graph puts the peak at 1.7 degrees, with a halving of the probability… see also: climateaudit.org noconsensus.wordpress.com / Related posts: Climate scientist Judith Curry with wise words […]

  3. […] discussion in the blogosphere. Andrew Montford (aka Bishop Hill) has highlighted this, as has Steve McIntyre and Anthony […]