Scientific American article: “How to Misinterpret Climate Change Research”

A Scientific American article concerning Bjorn Stevens’ recent paper “Rethinking the lower bound on aerosol radiative forcing” has led to some confusion. The article states, referring to a blog post of mine at Climate Audit: “The misinterpretation of Stevens’ paper began with Nic Lewis, an independent climate scientist.” My blog post showed how the climate sensitivity estimates given in Lewis and Curry (2014) (LC14) would change if the aerosol forcing estimate from Stevens’ recent paper were used instead of the corresponding estimate given in the IPCC 5th Assessment Working Group 1 report (AR5 WG1). To clarify: Bjorn Stevens has never suggested that my blog post misinterpreted or misrepresented his paper.

The article also states, paraphrasing rather than quoting: “Lewis had used an extremely rudimentary, some would even say flawed, climate model to derive his estimates, Stevens said.” LC14 used a simple energy budget climate model, described in AR5 WG1, to estimate equilibrium climate sensitivity (ECS) from estimates of climate system changes over the last 150 years or so. An essentially identical method was used to estimate ECS in Otto et al (2013), a paper of which Bjorn Stevens was an author, along with thirteen other AR5 WG1 lead authors (and myself). Energy budget models actually estimate effective climate sensitivity, an approximation to ECS, rather than ECS itself; some people may regard this as a flaw. AR5 WG1 states that “In some climate models ECS tends to be higher than the effective climate sensitivity”; this is certainly true. Since the climate system takes many centuries to equilibrate, it is not known whether the same is true of the real climate system. LC14 discussed the issues involved in some detail, and my Climate Audit blog post referred to estimating “equilibrium/effective climate sensitivity”.

I sent Bjorn Stevens a copy of the above wording and he has responded, saying the following:

“Dear Nic,

because I have reservations about estimates of ocean heat uptake used in the ‘energy-balance approaches’, and because of a number of issues (which you allude to) regarding differences between effective climate sensitivity estimates from the historical record and ECS, I am not ready to draw the inference from my study that ECS is low. That said, I do think what you write in the two paragraphs above is a fair characterization of the situation and of your important contributions to the scientific debate. The Ringberg meeting also made me confident that the open issues are ones we can resolve in the next few years.

Feel free to quote me on this.

Best wishes, Bjorn”

Update 26 April 2015

Gayathri Vaidyanathan tells me that the article has been changed at ClimateWire. Certainly, the title has been changed, and I presume the text has been amended per the version she sent me, which no longer suggests misinterpretation. But Scientific American is still showing the original version, so the situation is not very satisfactory.

Update 28 April 2015

The text of the article has now been changed at Scientific American, although the title is unaltered. The sentence referring to misinterpretation now reads “Stevens’ paper was analyzed by Nic Lewis, an independent climate scientist.*” At the foot of the article is the note:

Correction: A previous version of this story did not accurately reflect Lewis’ work. Lewis used Stevens’ study in an analysis that was used by some media outlets to throw doubt on global warming.


Pitfalls in climate sensitivity estimation: Part 3

A guest post by Nicholas Lewis

In Part 1 I introduced the talk I gave at Ringberg 2015, explained why it focussed on estimation based on warming over the instrumental period, and covered problems relating to aerosol forcing and bias caused by the influence of the AMO. In Part 2 I dealt with problems in Bayesian probabilistic estimation and summarized the state of observational, instrumental-period-warming-based climate sensitivity estimation. In this third and final part I discuss arguments that estimates from that approach are biased low, and that GCM simulations imply ECS is higher, partly because in GCMs effective climate sensitivity increases over time. I’ve incorporated one new slide here to help explain this issue.

Slide 19

[slide image]

Continue reading

Pitfalls in climate sensitivity estimation: Part 2

A guest post by Nicholas Lewis

In Part 1 I introduced the talk I gave at Ringberg 2015, explained why it focussed on estimation based on warming over the instrumental period, and covered problems relating to aerosol forcing and bias caused by the influence of the AMO. I now move on to problems arising when Bayesian probabilistic approaches are used, and then summarize the state of observational, instrumental-period-warming-based climate sensitivity estimation as I see it. I explained in Part 1 why other approaches to estimating ECS appear to be less reliable.
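To illustrate the best known of these problems – the sensitivity of such estimates to the choice of prior – here is a toy numerical sketch. The setup, numbers and variable names are my illustrative assumptions, not calculations from the talk: sensitivity S = F2x/λ, where the climate feedback parameter λ is taken to be observed with Gaussian error.

    import numpy as np

    F2x, lam_obs, sigma = 3.7, 1.2, 0.35     # illustrative values only

    S = np.linspace(0.1, 20, 20000)          # grid over sensitivity (K)
    # Likelihood of the observed feedback given each candidate sensitivity
    like = np.exp(-0.5 * ((lam_obs - F2x / S) / sigma) ** 2)

    def median(post):
        cdf = np.cumsum(post) / post.sum()
        return S[np.searchsorted(cdf, 0.5)]

    # Uniform-in-S prior: this posterior is only integrable because the
    # grid stops at 20 K; on an unbounded range it would be improper.
    post_uniform = like
    # Prior proportional to |d(lambda)/dS|, consistent with the S = F2x/lambda
    # transformation of the error distribution.
    post_transform = like * F2x / S**2

    print("uniform-in-S median:          ", median(post_uniform))
    print("transformation-consistent median:", median(post_transform))

The uniform-in-S prior inflates the median and, especially, the upper tail of the sensitivity estimate; this is the kind of prior-driven bias discussed in the slides that follow.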

Slide 8

[slide image]

Continue reading

Pitfalls in climate sensitivity estimation: Part 1

A guest post by Nicholas Lewis

As many readers will be aware, I attended the WCRP Grand Challenge Workshop: Earth’s Climate Sensitivities at Schloss Ringberg in late March. Ringberg 2015 was a very interesting event, attended by many of the best known scientists involved in this field and in areas of research closely related to it – such as the behaviour of clouds, aerosols and heat in the ocean. Many talks were given at Ringberg 2015; presentation slides are available here. It is often difficult to follow presentations just from the slides, so I thought it was worth posting an annotated version of the slides relating to my own talk, “Pitfalls in climate sensitivity estimation”. To make it more digestible and focus discussion, I am splitting my presentation into three parts. I’ve omitted the title slide and reinstated some slides that I cut out of my talk due to the 15 minute time constraint.

Slide 2

[slide image]

In this part I will cover the first bullet point and one of the major problems that cause bias in climate sensitivity estimates. In the second part I will deal with one or two other major problems and summarize the current position regarding observationally-based climate sensitivity estimation. In the final part I will deal with the third bullet point.

In a nutshell, I will argue that:

  • Climate sensitivity is most reliably estimated from observed warming over the last ~150 years, using the energy budget method (see the sketch after this list)
  • Most of the sensitivity estimates cited in the latest IPCC report had identifiable, severe problems
  • Estimates from observational studies that are little affected by such problems indicate that climate sensitivity is substantially lower than in most global climate models
  • Claims that the differences are due to substantial downwards bias in estimates from these observational studies have little support in observations.
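
For concreteness, here is a minimal sketch of the energy budget calculation behind the first bullet point. The formula is the standard one described in AR5 WG1 and used in LC14 and Otto et al (2013); the function name and the numerical inputs are purely illustrative assumptions, not LC14’s actual values.

    def energy_budget_ecs(dT, dF, dQ, F2x=3.7):
        """Energy budget estimate of effective climate sensitivity (K).

        dT  -- change in global mean surface temperature (K)
        dF  -- change in total radiative forcing (W/m2)
        dQ  -- change in the Earth's heat uptake rate (W/m2)
        F2x -- forcing from a doubling of CO2 (W/m2); 3.7 is a commonly used value
        """
        return F2x * dT / (dF - dQ)

    # Purely illustrative inputs, not LC14's: 0.75 K warming, 2.0 W/m2 forcing
    # change, 0.45 W/m2 heat uptake change.
    print(energy_budget_ecs(0.75, 2.0, 0.45))  # about 1.8 K

Because dF includes the (negative) aerosol forcing, a less strongly negative aerosol estimate such as Stevens’ enlarges the denominator and lowers the resulting sensitivity estimate, which is why his paper bears on estimates of this kind.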

Continue reading

Rahmstorf’s Third Trick

Rahmstorf et al 2015 Figure 5 shows a coral δ15N series from offshore Nova Scotia (see left panel below). The corresponding plot from the source is shown on the right. Original captions for both follow. There’s enough information in the figures and captions to work out Rahmstorf’s next trick. See if you can spot it before looking at my explanation below the fold.

[image: Rahmstorf et al 2015 Figure 5]

[image: Sherwood et al 2011 Figure 3 excerpt, δ15N series]

Figure 1. Left – Rahmstorf et al Figure 5. Original caption: Figure 5 | A compilation of different indicators for Atlantic ocean circulation. The blue curve shows our temperature-based AMOC index, also shown in Fig. 3b. The dark red curve shows the same index based on NASA GISS temperature data (ref. 48) (scale on left). The green curve with uncertainty range shows coral proxy data (ref. 25) (scale on right). The data are decadally smoothed. Orange dots show the analyses of data from hydrographic sections across the Atlantic at 25° N, where a 1 K change in the AMOC index corresponds to a 2.3 Sv change in AMOC transport, as in Fig. 2, based on the model simulation. Other estimates from oceanographic data similarly suggest relatively strong AMOC in the 1950s and 1960s, weak AMOC in the 1970s and 1980s and stronger again in the 1990s (refs 41, 51). Right – Sherwood et al 2011 Figure 3 excerpt. Original caption: time series … annual mean bulk δ15N from six colonies of the deep-sea gorgonian P. resedaeformis. Shaded areas represent 95% confidence intervals around annual means. Dashed lines indicate long-term trends, where significant. Note the cold periods (blue bars) of the 1930s/1940s and 1960s and sustained warm period (red bar) since 1970. Bulk δ15N is most strongly correlated with NAO at a lag of 4 years (r = −0.19) and with temperature at a lag of 3 years (r = −0.27, p < 0.05). … Squares in bulk δ15N plot show values of the eight individual samples used for δ15N-AA analysis.

Continue reading

Rahmstorf’s Second Trick

The Rahmstorf et al reconstruction commences in AD900 even though the Mann et al 2009 reconstruction goes back to AD500. Once again, this raises the obvious question: why didn’t Rahmstorf show values before AD900? Are these results adverse to his claims? Once the question is posed, you can guess the answer.

Continue reading

Rahmstorf’s First Trick

In any article by Mann and coauthors, it is always prudent to assume that even seemingly innocent choices use up a researcher degree of freedom – to put it nicely. For example, Rahmstorf et al focus on their “AMOC index” in the period ending 1995 and show their AMOC index only up to 1995, as shown below.

Continue reading

Jones and Dixon Refute Conspiracy Theorist Lewandowsky

Jonathan Jones and Ruth Dixon have published (see Ruth’s blog here) a comment in Psychological Science on conspiracy theorist Stephan Lewandowsky’s Hoax article, much discussed at CA at the time. Although their statistical points are incontrovertible and clearly expressed, publication took considerable persistence – see the timeline here. Their first and longer original article was submitted to a different journal, but was rejected as being of insufficient interest to that journal’s readers. The reviewers were sympathetic but more or less referred them back to Psychological Science, the journal which had published Hoax. Psychological Science imposes a strict word limit on comments (1000 words), and such limits are counter-productive when an article is as thoroughly bogus as Lewandowsky et al 2012. Lewandowsky was one of the reviewers for Psychological Science and opposed publication. However, unlike Steig in respect of O’Donnell et al 2011, Lewandowsky was identified to the authors as a reviewer, permitting Jones and Dixon to respond to his review comments knowing of the reviewer’s conflict of interest. Editor Eric Eich accepted the comment, as well as Lewandowsky’s response (response paywalled). Lewandowsky has a blog reaction here, in which he hypocritically compliments the article as a scientific response in the peer reviewed literature, without disclosing that he had opposed its publication.

There are numerous other defects in the Lewandowsky article that are not covered in their comment. One can only do so much with 1000 words, and Jonathan and Ruth have unsurprisingly done an excellent job.

Reductio ad mannium

The new article by Rahmstorf and Mann (see RC here) has been criticized at WUWT (here and here) for making claims about Atlantic Ocean currents based on proxies rather than measurements (also at Judy’s here). But it’s worse, much worse than we thought.

Rahmstorf and Mann’s results are not based on proxies for Atlantic current velocity, but on a network consisting of contaminated Tiljander sediments (upside-down or not), Graybill’s stripbark bristlecone chronologies, Briffa MXD series truncated to hide-the-decline, and hundreds of nondescript tree ring series statistically indistinguishable from white noise. In other words, they used the same much-criticized proxy network as Mann et al 2008-9. It’s hard to understand why anyone would seriously believe (let alone publish in peer reviewed literature) that Atlantic ocean currents could be reconstructed from such dreck, but Rahmstorf et al 2015 stands as evidence to the contrary.

After so much controversy about Mann’s prior use of contaminated data, it defies credulity that he and Rahmstorf have done so once again.

And when the National Research Council panel recommended in 2006 that stripbark bristlecone chronologies be “avoided” in temperature reconstructions, they can scarcely have contemplated (let alone endorsed) their use in reconstruction of Atlantic ocean currents.

Seemingly leaving no stone unturned, the Rahmstorf and Mann dataset even truncates the Briffa MXD chronologies in 1960, thereby hiding the decline (see here for a discussion of MXD truncation in Mann et al 2008, written in September 2008, long before we learned from the Climategate emails that they were using a trick to “hide the decline”).

In 2002, even Keith Briffa was frustrated enough by the Mann et al 1998 reconstruction to observe:

I am sick to death of Mann stating his reconstruction represents the tropical area just because it contains a few (poorly temperature representative) tropical series. He is just as capable of regressing these data again any other “target” series, such as the increasing trend of self-opinionated verbage he has produced over the last few years, and … (better say no more)

But at least the network that Briffa complained about contained a “few poorly temperature representative” tropical series. Rahmstorf et al 2015 dispensed with even that meager precaution, purporting to reconstruct Atlantic ocean currents without using any proxies that even purport to directly measure Atlantic ocean currents.

What is one to say of a climate science field which permits such practices to continue unchecked?  Should one borrow Andrew Weaver’s words and say:

They let these random diatribes of absolute, incorrect nonsense get published. They’re not able to determine if what’s being said is correct or not, or whether it’s just absolute balderdash.

Also see Arthur Smith here and Atte Korhola here on prior use of contaminated sediments. The reputable climate science community should collectively cringe with embarrassment.

Whatever may or may not be happening with the Atlantic Meridional Overturning Circulation (AMOC), one thing that you can take to the bank (or insane asylum, as appropriate): contaminated Finnish lake sediments, strip bark bristlecone pines and the hundreds of nondescript Mann 2008-9 tree ring series do not contain any useful information on the past history of the AMOC.

Only one thing can be surmised from Rahmstorf and Mann’s claim that the Mann et al 2008-9 network can be used to reconstruct not just NH temperature, but also SH temperatures and now Atlantic Meridional Overturning Circulation: using Mannian RegEM with the Mann et al 2008-9 network of 1209 “proxies”, one can probably “reconstruct” almost anything.  Are you interested in “reconstructing” the medieval Dow Jones Index?  Or medieval NFL attendance?

Reductio ad mannium.
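
As a toy demonstration of that point (my own sketch, using simple correlation screening rather than Mannian RegEM): screen a large set of pure white-noise series against an arbitrary target over a calibration period and average the survivors. The composite tracks the target impressively within the calibration period while containing no information at all outside it.

    import numpy as np

    rng = np.random.default_rng(0)
    n_years, n_proxies = 150, 1209        # 1209 matches the Mann et al 2008-9 count
    calib = slice(100, 150)               # last 50 "years": screening/calibration period
    early = slice(0, 100)                 # earlier period: where the "reconstruction" lives

    target = np.cumsum(rng.normal(size=n_years))     # an arbitrary wiggly target series
    proxies = rng.normal(size=(n_proxies, n_years))  # pure white-noise "proxies"

    # Screen: keep series correlated with the target over the calibration period,
    # flipping signs so every survivor correlates positively, then average.
    r = np.array([np.corrcoef(p[calib], target[calib])[0, 1] for p in proxies])
    keep = np.abs(r) > 0.3
    composite = (np.sign(r[keep])[:, None] * proxies[keep]).mean(axis=0)

    print(keep.sum(), "of", n_proxies, "noise series pass screening")
    print("calibration-period r:", np.corrcoef(composite[calib], target[calib])[0, 1])
    print("earlier-period r:    ", np.corrcoef(composite[early], target[early])[0, 1])

Swap in any target you like – a temperature index, an AMOC index, or indeed the medieval Dow Jones – and the calibration fit will look just as good, while the pre-calibration portion remains noise.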

Postscript: unsurprisingly, Rahmstorf et al has many interesting booby traps. As homework questions: (1) why does the most recent value of the gyre reconstruction shown in Rahmstorf Figure 3 (middle panel) end in approximately 1995, when the underlying gridded reconstruction of Mann et al 2009 goes to 2006? (2) why are the reconstructions only shown back to AD900, when the underlying gridded reconstruction of Mann et al 2009 begins in AD500?

How Weaver Ignored Corcoran’s Segue

Four of the incidents in J Burke’s background chronology in Weaver v National Post (the January 27, 2005, February 15, 2005, August 2006 and February 27, 2008 incidents) relate, either in whole or in part, to a dispute between Weaver and National Post on whether Weaver had dismissed our research as “rubbish” or “balderdash” or a like pejorative.

Substantively, I think that there is considerable evidence that Weaver’s opinion on our research was similar to Gavin Schmidt’s and that one can justify use of such a pejorative to describe Weaver’s opinion.  I plan to assess this evidence in a separate post, a post in which I’ll also begin considering Weaver as editor of Rutherford (Mann) et al 2005, an article that introduced various derogatory claims about our work into “the peer reviewed literature”.

But in today’s post, I’m going to look at a related, but different issue. While Weaver regularly complained about even slight supposed mischaracterizations of his opinions by National Post, his complaints were not necessarily valid. One has to carefully parse both the original article and the complaint to determine validity. In today’s post, I’ll show that Corcoran had segued his claim in the February 2005 and August 2006 incidents and that Weaver missed or ignored Corcoran’s segue.

In addition, while National Post published a Weaver letter setting out his side in August 2006, that didn’t mean that Weaver’s complaint had been vindicated or that National Post had “retracted”, despite Weaver’s later claim and the impression in J Burke’s chronology. In August 2006, Corcoran published a rebuttal that, in my opinion, fully refuted Weaver’s complaint, but this was not mentioned in J Burke’s chronology.  Curiously, although the issues were quite similar in respect to Weaver’s February 2008 complaint about a Foster opinion column,  on this occasion, National Post inconsistently published a correction, though, in my opinion, they could easily have taken a similar position to Corcoran’s earlier rebuttal.

Continue reading
