Nic Lewis (a co-author of O’Donnell et al 2010) is a very sharp analyst who’s recently taken an interest in climate sensitivity estimates and has an interesting guest post at Judy Curry’s today.
In his studies, he noticed that the IPCC’s representation in AR4 Figure 9.20 of the probability distribution for climate sensitivity arising from the observationally-based Forster and Gregory 2006 differed substantially from the distribution in the original article. He found that the alteration had the effect of fattening the high-end tail of climate sensitivity. He reports:
The IPCC did not attempt, in the relevant part of AR4:WG1 (Chapter 9), any justification from statistical theory, or quote authority for, restating the results of Forster/Gregory 06 on the basis of a uniform prior in S. Nor did the IPCC challenge the Forster/Gregory 06 regression model, analysis of uncertainties or error assumptions. The IPCC simply relied on statements [Frame et al. 2005] that ‘advocate’ – without any justification from statistical theory – sampling a flat prior distribution in whatever is the target of the estimate – in this case, S. In fact, even Frame did not advocate use of a prior uniform distribution in S in a case like Forster/Gregory 06. Nevertheless, the IPCC concluded its discussion of the issue by simply stating that “uniform prior distributions for the target of the estimate [the climate sensitivity S] are used unless otherwise specified”.
The transformation effected by the IPCC, by recasting Forster/Gregory 06 in Bayesian terms and then restating its results using a prior distribution that is inconsistent with the regression model and error distributions used in the study, appears unjustifiable. In the circumstances, the transformed climate sensitivity PDF for Forster/Gregory 06 in the IPCC’s Figure 9.20 can only be seen as distorted and misleading.
Update– A commenter at Judy Curry’s has drawn attention to AR4 Review Comments. I’ve uploaded the more convenient version that used to be available (IPCC took this down in favor of a less usable version.) Search “uniform prior” for interesting discussion.
75 Comments
If reality does not fit the model, ignore reality. If the model does not fit the policy agenda, tinker with the model till it fits.
The only empirically observed evidence of climate sensitivity was tampered with and altered to show a false figure. Words fail to describe this piece of machination!
‘Distorted and misleading’? Well, that would appear to be part of the IPCC mission statement, so I would say in this case they are right on track.
I am shocked, shocked, to find such things going on in an IPCC establishment.
When dealing with a regression analysis, one must use the priors (if you insist on being Bayesian) on the quantity you estimate (Y here), not down the road where you divide by it to get a derived quantity (S, sensitivity in this case). I think you would be hard pressed to find an example in the stats literature where it is done the way the IPCC did it.
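Craig’s point is easy to illustrate with a quick Monte Carlo sketch (my own toy example, not from either paper; the 3.7 W/m² forcing for doubled CO2 is a conventional assumed value, and the sampling ranges are arbitrary choices to keep the division well behaved):

```python
import numpy as np

rng = np.random.default_rng(0)
F2X = 3.7  # forcing for doubled CO2 in W/m^2 (a conventional value, assumed here)

# "Flat" prior on the regression target Y (climate feedback, W/m^2/K)
y = rng.uniform(0.2, 10.0, 100_000)   # lower bound avoids dividing by ~0
s_implied = F2X / y                   # sensitivity implied by each draw

# "Flat" prior placed directly on S (sensitivity, K) instead
s = rng.uniform(0.37, 10.0, 100_000)
y_implied = F2X / s

# Flat in one variable is strongly informative about the other:
print(np.median(s_implied))           # most of the implied-S mass sits low
print(np.mean(y_implied < 1.0))       # flat-in-S piles Y mass near zero
```

The point: because S and Y are related by division, a prior that is “uninformative” in one is sharply peaked in the other. You cannot be flat in both at once.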
Reviewer Michael Mann criticized the IPCC’s use of a uniform prior as well:
Nic Lewis explained the origin of his analyses at Judy’s as follows:
These comments do credit to Dr. Mann. But the end result disregarded them. This reminds us that no one Mann is bigger than the system as far as the IPCC and even the central science of WG1 is concerned. The bias in the system is always (it seems) towards magnifying the perceived threat. For me what Nic has uncovered is the worst example by far. Respect to Mann but not to the system that made him, which of course happily ignores him when needs demand.
My reading of Mann’s comments is that he was more concerned that use of an upper bound for a uniform prior (or, presumably, any other prior) could lead to the risk of very high sensitivity being ignored, rather than that the uniform shape of the prior could distort sensitivity estimates in an upwards direction.
To be honest I read it the same way as you did. But then I thought, let’s give the guy (or my cynicism) a break. His language wasn’t totally clear. As for his ultimate motivation … who knows.
My read is that Dr. Mann is arguing for higher sensitivities.
Re: Bernie (Jul 5 12:29), That is my read on it as well.
I can’t read it any other way than that.
My read is that in the first comment he’s arguing that ‘The uniform prior as used in these studies is not what statisticians would call “uninformative”’ – and Nic’s shown that he’s dead right about that. What follows is an e.g. To say that he’s arguing for higher sensitivities in this comment is too simplistic for me. He is saying something importantly right. That’s where I was willing to give him credit (a popular choice, the people’s choice … er, I think not!)
But that raises the relationship between the two Michael Mann comments and the editors’ response to the second. Steve has highlighted the ‘marginally influenced’ in the latter – presumably because Nic has shown that not to be true for the effect on Forster & Gregory. That seems a key editorial statement – a significantly wrong one.
But I admit I don’t understand the rest of the editors’ comment:
I don’t have the draft being referred to, so the (perhaps erroneous) line numbers aren’t helping me. Can anyone point to where in the final text they “clarify that the limits of the prior reflect computer time limitations and are generally wide enough to encompass ranges experts consider plausible”?
Thanks for that handy link to the AR4 comments.
Annan makes some references to ‘LGM-derived prior’ in contrast to ‘uniform prior’.
Please, what is the meaning of this TLA, “LGM” ?
TIA
RR
Steve: Last Glacial Maximum
Last glacial maximum – i.e. from changes dating back to the last ice age
Any chance of a nifty analogy or restating in a more ‘engineering’ lexicon?
My need for remedial statistical education is abundantly clear, no argument about that.
I was just hoping that the mathematical gyrations might be expressed in some kind of ‘signal processing’ perspective, or ?
The net effect seems to be some kind of warping/stretching, no?
I expect that these questions are strong evidence for my second sentence.
I’m betting that the number of folks comfortable with ‘uniform prior’ is quite small, and this seems to be a big message worthy of wider comprehension.
Does anyone 1) understand this material, 2) remember what it was like when they didn’t understand these topics, and 3) retain the ability to convey it to non-statisticians?
TIA
RR
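A rough answer for RR in a signal-processing lexicon: yes, it is a warping/stretching. Mapping a probability density from Y to S = F/Y is a nonlinear change of variables, and the correct transformed density carries a Jacobian factor F/S² that compresses the high-S end. Restating the result on a uniform prior in S effectively deletes that factor, which stretches probability back into the high-S tail. A sketch (the 3.7 W/m² forcing is an assumed conventional value; the Gaussian in Y follows F&G06’s stated 2.3 ± 1.4 result):

```python
import numpy as np

F2X = 3.7                       # assumed 2xCO2 forcing, W/m^2
mu_y, sd_y = 2.3, 1.4 / 1.96    # Gaussian in Y per F&G06's 2.3 +/- 1.4 (95%)

def pdf_y(y):
    """Gaussian density of the feedback parameter Y."""
    return np.exp(-0.5 * ((y - mu_y) / sd_y) ** 2) / (sd_y * np.sqrt(2.0 * np.pi))

def pdf_s(s):
    """Density of S = F2X / Y by change of variables: |dY/dS| = F2X / S^2."""
    return pdf_y(F2X / s) * F2X / s ** 2

s = np.linspace(0.5, 10.0, 2000)
# The Jacobian term F2X/s^2 is the "warping": it squeezes the high-S end.
# Dropping it (as a uniform-in-S restatement does) fattens the high tail.
print(s[np.argmax(pdf_s(s))])   # peak of the correctly transformed PDF
```

So the operation is not a filter on the data at all; it is a non-uniform rescaling of the probability axis, and the choice of prior decides how much stretching you apply.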
Nic Lewis did a great job of analysis and explaining the Bayesian process that the IPCC misapplied in this case and the consequences of that misapplication.
Does the above mean that the only experimentally derived estimate of sensitivity was “adjusted” to fall more into line with the products of modeling?
Too bad the modelers were not unnerved by the gap between their work and what had been found by experiment.
Bingo.
Please correct me if I missed something, but this appears to place IPCC defenders between the proverbial rock and hard place. Either the IPCC looks really, really bad for manipulating a critical study OR they argue that the statistical treatment is simply one of several acceptable options. If they argue the latter, however, they are admitting that the science is not “settled” and, indeed, not even close.
I predict (if anything) a vigorous defence of the IPCC assumptions of how to handle the statistics as the only valid method plus a defence that the sensitivity derived from models is “better” than that from data.
Except Hansen says the models are the least informative.
My take on Hansen’s recent long paper is that he thinks AOGCMs have got sensitivity about right (for whatever reason) at circa 3C, a figure he strongly believes in based on some paleoclimate studies. However he now, rather belatedly, realises that the models’ effective ocean diffusivities are far too high. To prevent correction of this ocean mis-modelling causing AOGCM back-cast simulations to produce unrealistically high warming, he wants to assume (with no real evidence) that tropospheric aerosol forcing is much more negative than the current best estimates suggest.
Helpful pointer to what Hansen’s currently arguing, thanks. Aerosols always seem to be the get-out-of-jail card, as Lindzen’s pointed out for years.
Does Vicky Pope know about this? NicL, you going to tell her?
I have emailed her, but she is away from the office at present.
There are two options here:
The IPCC knew what they were doing when they amended the graph
The IPCC didn’t know what they were doing when they altered the graph.
Which option is worse?
Question: WHO altered the graph? Who is the person who did this? Because “the IPCC” doesn’t DO anything. Somewhere there is someone who did this. Perhaps that person should explain why.
NicL
This is a very pertinent question. Who was in charge of that particular chapter/section, and who did that individual report to?
Is that a traceable matter of record?
Hegerl and Zwiers were the CLAs. They were also coauthors of Hegerl et al, cited in the caption of Figure 9.20 for the methodology. The alteration to Fig 9.20 style would have been done by or under the supervision of Hegerl and Zwiers. They were also judge and jury in rejecting Annan’s criticism of uniform priors.
About as far from an ‘engineering quality exposition’ as it’s possible to imagine.
Richard,
That’s because it was only the science they were declaring “settled”. How to “engineer” the climate was someone else’s job. 😉
Judy’s improved her thread all out of measure by deleting all the non-substantive commentary, mostly ill-tempered wrangling between Joshua and me. I can delight in her editorial skill, as do I Steve’s, but still regret that my best stuff always gets deleted. For example, this episode of my outrage:
Bayesian babbling
Snookered policy makers.
Climate science mocks.
H/t some kim at Wot’s Up?
==============
Joshua’s justification for trolling was interesting: “Judy Curry didn’t indicate this was a technical thread” (!).
One thing to note: Lewis’s post at Curry’s indicates that there are two Dr. Michael Manns. Any indication which made the quoted review comments? I would suspect the co-author of the paper in question, which is not the Dr. Mann we all know and love.
Where?
There is a Michael E Mann and a Michael L Mann. Since the Michael Mann in the review comments was arguing for a higher sensitivity, I think we can safely assume it is the Michael E. Mann of Hockey Stick infamy.
I think that more attention needs to be paid to the issue of whether we should “insist on being Bayesian”, as Craig Loehle put it. My understanding is that Bayesian methods are a good way to inject one’s assumptions into an analysis that should just be of the data.
Perhaps on a stats oriented blog it would be informative to see a show of hands kind of thing, how many consider themselves “Bayesian” versus “frequentist”…I will say I for one am a frequentist. I am open to being convinced of anything of course! 🙂
I don’t think that this issue is, in any way, an indictment of Bayesian approaches. Rather, the issue is the post hoc injection of ignorance into a perfectly adequate model in order to change an inconvenient result. A uniform prior is used when you have no expert predictors – you are at a point of complete ignorance. Information is then acquired and used to modify this state of ignorance, and as a result the model gets better (more skillful). What they did here was to take a very solid observation-based model and artificially inject ignorance into it. Only through this injection of artificial ignorance were they able to get the answer they wanted.
Except that, as Annan points out in the discussion, there is nothing ‘ignorant’ about 0-10; it’s preemptively skewed high.
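Annan’s point takes only a line of arithmetic to check. Using AR4’s own stated likely range of 2-4.5 K for comparison (the 0-10 K bound is the prior range under discussion):

```python
# Prior probabilities under a uniform prior for S on [0, 10] K
lo, hi = 0.0, 10.0
width = hi - lo
p_above_likely = (hi - 4.5) / width    # P(S > 4.5 K)
p_within_likely = (4.5 - 2.0) / width  # P(2 K < S < 4.5 K)
print(p_above_likely, p_within_likely)
# Before seeing any data, this "ignorant" prior already bets 55% on
# sensitivities above the top of the likely range, and only 25% within it.
```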
“injection of artificial ignorance” this conjures such an image!
“AI II – The Revenge” ?
Assuming that there might be “actual ignorance” or “real ignorance” in contrast, how is “artificial ignorance” distinguished? Is characterizing the quality or provenance of “ignorance’ a feature of Bayesian analysis, of which I’m demonstrating my own total ignorance? Another coefficient?
But then putting a coefficient on zero doesn’t seem to do much. there must be something more.
If ‘artificial ignorance” is not a term of art, then you’ve hit on something I’ll be able to use the rest of my life – thanks much.
If it is a term of art, I’ll use it anyway. It’s very good.
Just to put this matter in another context;
If a teacher of a university entrance-level subject alters student exam answers to achieve higher marks for the student cohort and thus better chances of entry to university for that cohort, that is an offence that carries very severe negative legal and professional sanctions.
Or am I seeing this matter in shades of black and white that are too definite?
Fiddle with the model till it fits? This seems fringe… We will see
Jeremy Harvey on Bishophill has written a great explanation of the Climate sensitivity controversy that I think is worthy of being published alongside Doug Keenan’s article “How Scientific is Climate Science?” in WSJ (easily found in Google).
I had thought that what the IPCC had done would be considered too esoteric to be explained to the masses and therefore unworthy of much attention but he seems to have achieved the impossible.
Comment no. 66 (this link should take you to p2 starting at comment 41 if I am lucky)
http://www.bishop-hill.net/blog/2011/7/5/ipcc-on-climate-sensitivity.html?lastPage=true#comment13552977
matthu,
Thanks for pointing out Jeremy Harvey’s comment on Bishophill which helps explain in less formal statistical terms what Nic Lewis’s main CA post was all about.
John
Any comment from Nick Stokes?
Nick is a semanticist and lawyer. He will not be making comments about the Rev Bayes.
GrantB (Jul 6, 2011 at 7:51 AM): “Nick is a semanticist and lawyer.”
Has this page suddenly reduced his seman output?
Reading the “Convenient Version” leads me to ask: who is VINCENT GRAY?
He suffered so many rejections from the IPCC that I am led to the conclusion that he must be extremely intelligent?
“But I could have told you, Vincent,
This world was never meant for one
As beautiful as you.”
By Paul Simon
[RomanM: Don McLean, not Paul Simon]
@RomanM
“Don McLean not Paul Simon”
Sorry, I’d better go and eat American Pie :-)
My understanding of the IPCC remit was that they were to undertake a review of the state of the published science of climate change at periodic intervals. My take on this is that they were significantly over-stepping the bounds of such a review in performing recalculations on already published data, especially in fundamentally changing the approach to interpreting this data. As such, this is surely (at least to a limited extent) original work rather than a review.
It is obvious from some of the review comments that both Annan and Mann saw drawbacks with the statistical processing technique being applied, but these criticisms were largely given the brush-off by the lead authors. This raises the question of whether the authors correctly understood the processing that they were undertaking (and the transforming effect this had on the F&G06 results) or, if they did understand it, whether the criticisms were ignored because the results ‘looked right’ (I love the term ‘chartmanship’ for this type of manipulation).
I too have been a bit surprised by the lack of discussion on IPCC’s pro-active ‘press time’ manipulation of a previous study. I *thought* IPCC was to provide a review of the state of the science….not inject new methods/approaches into the record or apply different methods to existing data that have already been reviewed and published (?)
Sea level has its own curve-stretching issue. One of the Hockey Stick team and RealClimate.org organizers, named Rahmstorf, used a very odd mathematical alternative curve shape instead of just fitting a curve to his data. The difference is shown here:
Another bizarre oddity in the paper is the fitting of a straight line to scatter plot data that does not merit a linear fit. Then he extrapolates far into the future, based on it, as shown here:
These odd manipulations (along with added “corrections” to *actual* sea level based on water reservoirs on land, while ignoring ground water pumping) resulted in a sea level curve that shows a recent upswing:
[PDF: rahmstorf_science_2007.pdf]
Tom Moriarty’s blog at http://climatesanity.wordpress.com has several recent entries on this fiasco.
In other news, basic sea level data is not on the pro-AGW side at all:
If the IPCC has no good rebuttal to this accusation, it will give a lot of folks (including politicians) the excuse they need to get off the warmist bandwagon. They will be able to say, “The IPCC misled me.”
I should maybe cross-post a comment that I have just made at Climate Etc:
Gabi Hegerl, joint coordinating lead author of AR4:WG1 Chapter 9, has asked that it be mentioned on this blog that the authors of Forster and Gregory were part of the author team and not unhappy about the presentation of their result (in Figure 9.20). I hereby do so. That firms up my tentative earlier comment that F&G were Contributing authors for chapter 9 of AR4:WG1 and, presumably, accepted (at least tacitly) the IPCC’s treatment of their results.
Piers Forster has also confirmed that when their paper was published, he tried to invert the results (convert them from Y to S) and got a range of sensitivity much like Figure 3 in this post. However, he remembers being persuaded by the Oxford group (Frame, Allen, Stainforth, etc, I assume) and other statisticians that by doing this simple inversion F&G were inadvertently assuming a very skewed and unrealistic prior themselves.
Of course, whether or not the authors of a paper agree with presenting its results on a different, inconsistent basis in no way shows that doing so was valid, nor that there was anything wrong with the basis on which they were originally presented. I am not sure that Piers Forster realised that doing anything other than a simple inversion would produce a PDF for S that implied the results for Y presented in their paper were wrong. Perhaps David Frame told him that no such implication arose. Certainly, Frame doesn’t seem to have been very concerned about the inconsistencies between the various approaches he advocated. As climate scientist James Annan wrote, commenting on the approaches advocated in Frame 05: “Basically, you have thrown away the standard axioms and interpretation of Bayesian probability, and you have not explained what you have put in their place.”
Nic, in other circumstances, Trenberth see here said that contributing authors were not part of the IPCC “writing team”. This was in the context of Phil Jones who had been a Contributing Author but not Lead Author and thus not part of the “writing team”. As I recall, some of Trenberth’s defenders scorned the idea of considering Contributing Authors as part of the writing team. See here.
The more serious issue is the validity of uniform priors and their impact on sensitivity; whether or not there was a consensus of chapter 9 authors on the matter doesn’t make their conclusion correct.
Steve, noted re what constitutes the IPCC “writing team”, thank you.
Entirely agree about a consensus of authors not making a conclusion correct!
I simply think it is a problem that they presented a result that was different than the peer reviewed article they cited. The IPCC does not do its own science; it merely cites (and summarizes) the peer reviewed science of others. Yet another instance in which their own policies take a back seat to a favorable result.
Mark
Nic Lewis,
Is there a plot of the distribution of the actual experimental Y values? Your Fig. 2 is clearly some kind of calculation and F&G 2006 doesn’t seem to have one either. A comparison of the two choices of priors with real data would be interesting. There’s also the usual frustrating lack of error bars on all these graphs.
Paul,
Unfortunately Forster & Gregory did not provide the exact data that they used in their regression, without which it is not really possible to check their results (although they did at least provide clear graphs of the data used). They state the use of gaussian error distributions in the observable variables, which looks reasonable.
I derived my figure 2 from F&G’s stated main result of Y = 2.3 +/- 1.4, within a 95% confidence limit. F&G say that the mean and 95% range were both derived by 10,000 Monte Carlo simulations, subsetting an equal number of random data points from the original dataset and performing OLS regression for each of those sets. F&G did not show any PDF for Y or S. I generated Figure 2 by using a Normal distribution centered on 2.3 with a standard deviation of 1.4/1.96 (1.96 being the 95% CI point for N(0,1)). As explained in the article, I used a gaussian rather than a t-distribution since that seems to be what the IPCC did; the uncertainty range of +/- 1.4 should already reflect the wider than Normal 95% points of a t-distribution, of course.
One doesn’t need to use any prior to generate Figure 2, but one can recast it in Bayesian terms using a uniform prior in Y, as F&G state.
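For anyone wanting to play with the numbers, here is a rough Monte Carlo sketch of the two recastings (my own reconstruction for illustration, not Nic’s code; the 3.7 W/m² forcing and the 10 K upper cutoff are assumptions on my part):

```python
import numpy as np

rng = np.random.default_rng(1)
F2X = 3.7                       # assumed 2xCO2 forcing, W/m^2
mu_y, sd_y = 2.3, 1.4 / 1.96    # Y = 2.3 +/- 1.4 (95%), treated as Gaussian

y = rng.normal(mu_y, sd_y, 200_000)
y = y[y > F2X / 10.0]           # discard draws implying S above a 10 K cutoff
s = F2X / y                     # simple inversion (uniform-in-Y prior carried over)

# Restating on a uniform-in-S prior reweights each draw by the inverse
# Jacobian, w proportional to s^2, shifting probability into the high-S tail.
w = s ** 2
order = np.argsort(s)
s_sorted, w_sorted = s[order], w[order]

def weighted_quantile(values, weights, q):
    """Quantile of pre-sorted values under the given weights."""
    c = np.cumsum(weights) / np.sum(weights)
    return values[np.searchsorted(c, q)]

print(np.quantile(s, 0.95))                         # 95% point, uniform-in-Y
print(weighted_quantile(s_sorted, w_sorted, 0.95))  # far higher, uniform-in-S
```

The 95% point of S roughly doubles under the uniform-in-S reweighting, which is the tail-fattening effect at issue in the post.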
There would appear to be no valid statistical reason to assume uniform priors. That assumption is used when you know nothing about the distribution.
In this case we know something about the distribution, based on the paleo reconstructions. As CO2 levels have gone up and down, Earth’s average temperature has kept within a narrow band of 11-22 C.
With current temperatures at 14.5, if you are going to use a uniform prior, then it should be in the range of -3.5 to 7.5, not 1-18.5 as chosen. That choice has the effect of skewing the estimate towards an unrealistically high value.
In any case, using the paleo data, one can build a distribution for CO2 and temperature that makes no assumption about uniform distribution. Once that is established, the appropriate statistical treatment will be obvious.
There’s an important follow-up from Nic in the form of a letter to Gabi Hegerl. Evidently a uniform prior over S was not ‘uniformly’ applied, as the caption stated – because if it had been, it would have shown how ridiculous this was in the case of Gregory 02 and others. Oh my, oh my. Great investigative work, Lewis. (But wasn’t he a detective in Oxford? McIntyre as Morse, the younger sidekick: it all begins to make sense.)
Bayesian Analysis Loses Some Cachet
from
http://www.theregister.co.uk/2011/10/05/bayes_formula/;
Or at least, it shouldn’t be relied upon as it has been in recent years: according to the judge, before any expert witness plugs data into the theorem to brief the jury on the likelihood that a defendant is guilty, the underlying statistics should be “firm” rather than rough estimates.
Apparently from here;
http://www.guardian.co.uk/law/2011/oct/02/formula-justice-bayes-theorem-miscarriage?INTCMP=SRCH
New thread at BH discussing an article by Matt Ridley, who summarizes where he thinks the climate sensitivity discussion is going.
oops, here’s a link:
http://www.bishop-hill.net/blog/2012/12/19/climate-sensitivity-is-low.html
Any comments from Steve McIntyre on the contributions to IPCC’s climate sensitivity reporting of Steve Jewson starting @ http://www.realclimate.org/index.php/archives/2013/01/on-sensitivity-part-i/comment-page-2/#comment-314549 as republished at BH or his 2009 publication about this matter @ http://arxiv.org/pdf/1005.3907?
Not a scientific paper, but M. Mann and Dana N. are hand-waving a defense of a “canonical” value for climate sensitivity of 3C in response to the recent article in The Economist; interesting to see how millennial multi-proxy studies are supposed to help resolve the CS debate in Mann’s favor:
[emphasis added]
Mann and Dana N. on climate sensitivity
More interesting to me is the byline, “By Dana Nuccitelli and Michael E Mann.” The article is your typical garbage, but it seems SKS and Michael Mann are starting to “come out of the closet.” That’s interesting because SKS and Stephan Lewandowsky are also allowing themselves to become publicly associated. That means SKS, Lewandowsky and Mann are all basically grouping themselves together.
I think that’s a tactical blunder. I think Dana Nuccitelli and John Cook will likely benefit from the additional publicity, but by grouping themselves together, I think this group will polarize itself too much. Anyone who shuns the behavior of SKS, Mann or Lewandowsky et al will now (basically) be forced to shun all of them. None of them behave well, and if this keeps up, it’ll be seen as reasonable to smear them with the failures of each other. I believe Benjamin Franklin said something to the effect of: “We must all hang together, or assuredly we shall all hang separately.”
That seems to apply to what we’re seeing here.
Golden chains wreathe narrative bouquets, and bind the perfumed fools so fast.
========
Re: Ben Franklin quotation
Yes, but in that case he had a truly noble cause, whereas in the Mann-SkS case we get only the Noble Cause Corruption.
Latest from Nic Lewis:
3 Trackbacks
[…] Nic Lewis on IPCC Sensitivity Steve McIntyre, Climate Audit, 5 July 2011 […]
[…] the adjusted graph puts the peak at 1.7 degrees, with a halving of the probability… see also: climateaudit.org, noconsensus.wordpress.com / Related posts: climate scientist Judith Curry with wise words […]
[…] discussion in the blogosphere. Andrew Montford (aka Bishop Hill) has highlighted this, as has Steve McIntyre and Anthony […]