In any article by Mann and coauthors, it is always prudent to assume that even seemingly innocent choices use up a researcher degree of freedom – to put it nicely. For example, Rahmstorf et al focus on their “AMOC index” in the period ending 1995 and show their AMOC index only up to approximately that date, as shown below.
Jonathan Jones and Ruth Dixon have published (see Ruth’s blog here) a comment in Psychological Science on conspiracy theorist Stephan Lewandowsky’s Hoax article, much discussed at CA at the time. Although their statistical points are incontrovertible and clearly expressed, publication took considerable persistence – see the timeline here. Their first and longer original article was submitted to a different journal, but was rejected as being of insufficient interest to that journal’s readers. The reviewers were sympathetic but more or less referred them back to Psychological Science, the journal which had published Hoax. Psychological Science has a strict word limit on comments (1000 words), and such limits are counter-productive when an article is as thoroughly bogus as Lewandowsky et al 2012.

Lewandowsky was one of the reviewers for Psychological Science and opposed publication. However, unlike Steig in respect of O’Donnell et al 2011, Lewandowsky was identified to the authors as a reviewer, permitting Jones and Dixon to respond to his review comments knowing of the reviewer’s conflict of interest. Editor Eric Eich accepted the comment, as well as Lewandowsky’s response (the response is paywalled). Lewandowsky has a blog reaction here, in which he hypocritically compliments the article as a scientific response in the peer reviewed literature, without disclosing that he had opposed its publication.
There are numerous other defects in the Lewandowsky article that are not covered in their comment. One can only do so much with 1000 words, and Jonathan and Ruth have unsurprisingly done an excellent job.
The new article by Rahmstorf and Mann (see RC here) has been criticized at WUWT (here and here) for making claims about Atlantic Ocean currents based on proxies, rather than measurements (also at Judy’s here). But it’s worse, much worse than we thought.
Rahmstorf and Mann’s results are not based on proxies for Atlantic current velocity, but on a network consisting of contaminated Tiljander sediments (upside-down or not), Graybill’s stripbark bristlecone chronologies, Briffa MXD series truncated to hide-the-decline and hundreds of nondescript tree ring series statistically indistinguishable from white noise. In other words, they used the same much-criticized proxy network as Mann et al 2008-9. It’s hard to understand why anyone would seriously believe (let alone publish in peer reviewed literature) that Atlantic ocean currents could be reconstructed by such dreck, but Rahmstorf et al 2015 stands as evidence to the contrary.
After so much controversy about Mann’s prior use of contaminated data, it defies credulity that he and Rahmstorf have done so once again.
And when the National Research Council panel recommended in 2006 that stripbark bristlecone chronologies be “avoided” in temperature reconstructions, they can scarcely have contemplated (let alone endorsed) their use in reconstruction of Atlantic ocean currents.
Seemingly leaving no stone unturned, the Rahmstorf and Mann dataset even truncates the Briffa MXD chronologies in 1960, thereby hiding the decline (see here for a discussion of MXD truncation in Mann et al 2008, written in September 2008, long before we learned from the Climategate emails that they were using a trick to “hide the decline”).
In 2002, even Keith Briffa was frustrated enough by the Mann et al 1998 reconstruction to observe:
I am sick to death of Mann stating his reconstruction represents the tropical area just because it contains a few (poorly temperature representative) tropical series. He is just as capable of regressing these data against any other “target” series, such as the increasing trend of self-opinionated verbage he has produced over the last few years, and … (better say no more)
But at least the network that Briffa complained about contained a “few poorly temperature representative” tropical series. Rahmstorf et al 2015 dispensed with even that meager precaution, purporting to reconstruct Atlantic ocean currents without using any proxies that even purport to directly measure Atlantic ocean currents.
What is one to say of a climate science field which permits such practices to continue unchecked? Should one borrow Andrew Weaver’s words and say:
They let these random diatribes of absolute, incorrect nonsense get published. They’re not able to determine if what’s being said is correct or not, or whether it’s just absolute balderdash.
Whatever may or may not be happening with the Atlantic Meridional Overturning Circulation (AMOC), one thing that you can take to the bank (or insane asylum, as appropriate): contaminated Finnish lake sediments, strip bark bristlecone pines and the hundreds of nondescript Mann 2008-9 tree ring series do not contain any useful information on the past history of the AMOC.
Only one thing can be surmised from Rahmstorf and Mann’s claim that the Mann et al 2008-9 network can be used to reconstruct not just NH temperature, but also SH temperatures and now Atlantic Meridional Overturning Circulation: using Mannian RegEM with the Mann et al 2008-9 network of 1209 “proxies”, one can probably “reconstruct” almost anything. Are you interested in “reconstructing” the medieval Dow Jones Index? Or medieval NFL attendance?
Reductio ad mannium.
Postscript: unsurprisingly, Rahmstorf et al has many interesting booby traps. As homework questions: (1) why does the gyre reconstruction shown in Rahmstorf Figure 3 (middle panel) end in approximately 1995, when the underlying gridded reconstruction of Mann et al 2009 goes to 2006? (2) why are the reconstructions only shown back to AD900, when the underlying gridded reconstruction of Mann et al 2009 begins in AD500?
Four of the incidents in J Burke’s background chronology in Weaver v National Post (the January 27, 2005, February 15, 2005, August 2006 and February 27, 2008 incidents) relate, either in whole or in part, to a dispute between Weaver and National Post on whether Weaver had dismissed our research as “rubbish” or “balderdash” or a like pejorative.
Substantively, I think that there is considerable evidence that Weaver’s opinion on our research was similar to Gavin Schmidt’s and that one can justify use of such a pejorative to describe Weaver’s opinion. I plan to assess this evidence in a separate post, a post in which I’ll also begin considering Weaver as editor of Rutherford (Mann) et al 2005, an article that introduced various derogatory claims about our work into “the peer reviewed literature”.
But in today’s post, I’m going to look at a related, but different issue. While Weaver regularly complained about even slight supposed mischaracterizations of his opinions by National Post, his complaints were not necessarily valid. One has to carefully parse both the original article and the complaint to determine validity. In today’s post, I’ll show that Corcoran had segued his claim in the February 2005 and August 2006 incidents and that Weaver missed or ignored Corcoran’s segue.
In addition, while National Post published a Weaver letter setting out his side in August 2006, that didn’t mean that Weaver’s complaint had been vindicated or that National Post had “retracted”, despite Weaver’s later claim and the impression in J Burke’s chronology. In August 2006, Corcoran published a rebuttal that, in my opinion, fully refuted Weaver’s complaint, but this was not mentioned in J Burke’s chronology. Curiously, although the issues were quite similar in respect to Weaver’s February 2008 complaint about a Foster opinion column, on this occasion, National Post inconsistently published a correction, though, in my opinion, they could easily have taken a similar position to Corcoran’s earlier rebuttal.
A guest post by Nicholas Lewis
In a paper published last year (Lewis & Curry 2014), discussed here, Judith Curry and I derived best estimates for equilibrium/effective climate sensitivity (ECS) and transient climate response (TCR). At 1.64°C, our estimate for ECS was below all those exhibited by CMIP5 global climate models, and our 1.33°C estimate for TCR was below nearly all of them. However, our upper (95%) uncertainty bounds, at 4.05°C for ECS and 2.5°C for TCR, ruled out only a few CMIP5 climate models. The main reason was that our estimates reflected the AR5 aerosol forcing best estimate of −0.9 W/m2 (for 2011 vs 1750), with a 5–95% range of −1.9 to −0.1 W/m2. The strongly negative 5% bound of that aerosol forcing range accounts for the fairly high upper bounds on ECS and TCR estimates derived from AR5 forcing estimates.
The AR5 aerosol forcing best estimate and range reflect a compromise between satellite-instrumentation based estimates and estimates derived directly from global climate models. Although it is impracticable to estimate indirect aerosol forcing – that arising through aerosol-cloud interactions, which is particularly uncertain – without some involvement of climate models, observations can be used to constrain model estimates, with the resulting estimates generally being smaller and having narrower uncertainty ranges than those obtained directly from global climate models. Inverse estimates of aerosol forcing (derived by diagnosing the effects of aerosols on more easily estimated quantities, such as spatiotemporal surface temperature patterns) tended also to be smaller and less uncertain than those from climate models, but were disregarded in AR5.
Since AR5, various papers concerning aerosol forcing have been published, without really narrowing down the uncertainty range. Aerosol forcing is extremely complex, and estimating it is very difficult. One major problem is that indirect aerosol forcing has, approximately, a logarithmic relationship to aerosol levels. As a result, the change in aerosol forcing over the industrial period – anthropogenic aerosol forcing – is sensitive to the exact level of preindustrial aerosols (Carslaw et al 2013), and determining natural aerosol background levels is very difficult.
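The sensitivity to the preindustrial baseline can be illustrated numerically. The sketch below assumes indirect forcing varies as −β·ln(1 + Qa/Qn), a simple form consistent with the logarithmic relationship just described (the functional form, the β value of 0.317 taken from the Appendix below, and the emission numbers are illustrative assumptions, not results from Carslaw et al):

```python
import math

def indirect_forcing_change(q_anthro, q_natural, beta=0.317):
    """Change in indirect aerosol forcing (W/m2) over the industrial
    period, assuming forcing varies logarithmically with total aerosol
    amount: F = -beta * ln(1 + Qa/Qn)."""
    return -beta * math.log(1.0 + q_anthro / q_natural)

q_anthro = 100.0  # illustrative anthropogenic emissions (arbitrary units)
for q_nat in (50.0, 100.0, 200.0):
    print(q_nat, round(indirect_forcing_change(q_anthro, q_nat), 3))
```

For the same anthropogenic emissions, halving the assumed natural background from 100 to 50 units strengthens the inferred anthropogenic forcing from about −0.22 to about −0.35 W/m2, which is why the poorly known natural aerosol level matters so much.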
In this context, what is IMO a compelling new paper by Bjorn Stevens estimating aerosol forcing using multiple physically-based, observationally-constrained approaches is a game changer. Bjorn Stevens is Director of the Department ‘Atmosphere in the Earth System’ at the MPI for Meteorology in Hamburg. Stevens is an expert on cloud processes and their role in the general circulation and climate change. Through the introduction of new constraints arising from the time history of forcing and asymmetries in Earth’s energy budget, Stevens derives a lower limit for total aerosol forcing, from 1750 to recent years, of −1.0 W/m2. Although no best estimate is given in the published paper, it can be worked out (from the uncertainty analyses given) to be −0.5 W/m2, and a time series for it can be derived from an analytical fit used in the analysis. An upper bound of −0.3 W/m2 is also given, but that comes from an earlier study (Murphy et al., 2009) rather than being a new estimate.
I have re-run the Lewis & Curry 2014 calculations using aerosol forcing estimates in line with the analysis in the Stevens paper (see Appendix) substituted for the AR5 estimates. I’ve accepted the Murphy et al (2009) upper bound of −0.3 W/m2 adopted by Stevens despite, IMO, the AR5 upper bound of −0.1 W/m2 being more consistent with the error distribution assumptions in his paper.
The Lewis & Curry 2014 energy budget study involves comparing, between a base and a final period, changes in global mean surface temperature (GMST) with changes in effective radiative forcing and – for ECS only – in the rate of ocean etc. climate system heat uptake. The preferred pairing was 1859–1882 with 1995–2011, the longest early and late periods free of significant volcanic activity. These periods are well matched for influence from internal variability and provide the largest change in forcing (and hence the narrowest uncertainty ranges). Neither the original Lewis & Curry 2014 ECS and TCR estimates nor the new estimates are significantly influenced by the low increase in surface warming this century.
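The energy budget estimates reduce to two simple ratios: ECS = F2×CO2·ΔT/(ΔF − ΔQ) and TCR = F2×CO2·ΔT/ΔF. A minimal sketch, using the standard AR5 value F2×CO2 = 3.71 W/m2 and illustrative period-to-period changes chosen to roughly reproduce the best estimates quoted above (the specific ΔT, ΔF, ΔQ numbers are my approximations, not values quoted from the paper):

```python
F2X = 3.71  # forcing from doubled CO2, W/m2 (AR5 value)

def ecs_estimate(dT, dF, dQ, f2x=F2X):
    """Equilibrium/effective climate sensitivity from an energy budget:
    warming per CO2 doubling, with system heat uptake dQ subtracted
    from the forcing change dF."""
    return f2x * dT / (dF - dQ)

def tcr_estimate(dT, dF, f2x=F2X):
    """Transient climate response: heat uptake is not subtracted."""
    return f2x * dT / dF

# Illustrative changes between the 1859-1882 base period and the
# 1995-2011 final period (approximate):
dT, dF, dQ = 0.71, 1.98, 0.36   # K, W/m2, W/m2
print(round(ecs_estimate(dT, dF, dQ), 2))  # 1.63
print(round(tcr_estimate(dT, dF), 2))      # 1.33
```

The same two functions, fed with revised ΔF values reflecting a weaker aerosol forcing, are all that is needed to rework the estimates.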
Table 1 shows ECS and TCR estimates using the Stevens 2015 based aerosol forcing estimates, with for comparison the original estimates based on AR5 aerosol forcings. Estimates are shown for the preferred 1859–1882 base period to 1995–2011 final period combination, and also for a similarly well-matched 1930–1950 base period to 1995–2011 final period combination. That combination involves lower forcing and GMST increases and more heat uptake uncertainty, but probably better forcing and temperature data. Estimates from two AR5-vintage studies that used zonal temperature data to form their own inverse estimates of aerosol forcing are also shown.
Table 1: Best estimates are medians (50% probability points). Ranges (Ring et al: none given) are to the nearest 0.05°C, and aerosol forcings to the nearest 0.05 W/m2. § Aldrin et al. aerosol forcing estimate is for 1750–2007 and based on replacing the AR4 aerosol forcing distribution used as the prior in the study, which significantly biased the inverse estimate, with the AR5 distribution. * With +0.1 W/m2 added to adjust for omitted black-carbon-on-snow forcing affecting the inverse estimate of aerosol forcing due to its similar fingerprint.
Compared with using the AR5 aerosol forcing estimates, the preferred ECS best estimate using an 1859–1882 base period reduces by 0.2°C to 1.45°C, with the TCR best estimate falling by 0.1°C to 1.21°C. More importantly, the upper 83% ECS bound comes down to 1.8°C and the 95% bound reduces dramatically – from 4.05°C to 2.2°C, below the ECS of all CMIP5 climate models except GISS-E2-R and inmcm4. Similarly, the upper 83% TCR bound falls to 1.45°C and the 95% bound is cut from 2.5°C to 1.65°C. Only a handful of CMIP5 models have TCRs below 1.65°C.
CMIP5 models with high TCRs are able to match the historical instrumental GMST record, or even warm less, mainly because most of them have highly negative aerosol forcing that until recently offset the bulk of greenhouse gas and other positive forcings. The mean aerosol forcing in CMIP5 models for which it has been diagnosed is about −1.2 W/m2 over 1850–2000, two and a half times Stevens’ best estimate.
The new ECS and TCR estimates, and the uncertainty associated with them, can also be presented in the form of probability density functions, as in Figures 1 and 2. The PDFs are skewed due principally to the dominant uncertainty, that in forcing, affecting the denominator of the fractions used to estimate ECS and TCR.
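The skew can be reproduced with a toy Monte Carlo: with Gaussian uncertainty in the forcing change (the 0.35 W/m2 standard deviation below is illustrative, not a value from the paper), the ratio estimator for ECS comes out right-skewed, with mean above median:

```python
import random, statistics

random.seed(0)
F2X, dT, dQ = 3.71, 0.71, 0.36   # fixed for illustration
dF_mean, dF_sd = 1.98, 0.35      # illustrative Gaussian forcing uncertainty

samples = []
for _ in range(100_000):
    dF = random.gauss(dF_mean, dF_sd)
    denom = dF - dQ
    if denom > 0:                # discard unphysical (non-positive) draws
        samples.append(F2X * dT / denom)

samples.sort()
median = samples[len(samples) // 2]
mean = statistics.fmean(samples)
# Gaussian uncertainty in the denominator yields a right-skewed PDF:
print(mean > median)  # True
```

Uncertainty in ΔT (the numerator) would, by contrast, produce a symmetric spread; it is the denominator uncertainty that drives the long upper tail.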
Appendix: Derivation of a best estimate time series for aerosol forcing from Stevens 2015
The best estimate for direct aerosol forcing (Fari) is taken as −0.15 W/m2 (line 494). The best estimate taken for indirect aerosol forcing (Faci) is that which, when δN/N has a bidirectional factor of two (0.5× to 2.0×) 5–95% uncertainty (line 620; taken as corresponding to a lognormal distribution) and C has a median estimate of 0.1 and a 95% bound of 0.15 (line 584; uncertainty independent and assumed Gaussian), produces a 95% bound for Faci of −0.75 W/m2. That implies a median estimate for Faci of −0.32 W/m2, which when added to the −0.15 W/m2 for Fari gives a best estimate for total aerosol forcing Faer of −0.5 W/m2.
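The back-calculation of the −0.32 W/m2 median can be checked by simulation. The sketch below assumes, as I read the description above, that Faci scales as the product of the lognormal δN/N factor and the (independent, Gaussian) C; with the 95% bound pinned at −0.75 W/m2, the implied median follows:

```python
import math, random

random.seed(1)
Z95 = 1.645                      # standard normal 95% point
sigma_ln = math.log(2) / Z95     # lognormal: factor-of-two 5-95% range
c_med, c_sd = 0.1, 0.05 / Z95    # C: median 0.1, 95% bound 0.15, Gaussian

# Draws of Faci relative to its median (product of the two factors,
# each normalised to median 1):
draws = sorted(
    math.exp(random.gauss(0.0, sigma_ln)) * random.gauss(c_med, c_sd) / c_med
    for _ in range(200_000)
)
q95 = draws[int(0.95 * len(draws))]

# Median |Faci| implied by the -0.75 W/m2 95% bound:
faci_median = 0.75 / q95
print(round(faci_median, 2))  # ≈ 0.32
```

Adding the −0.15 W/m2 direct term recovers the total aerosol forcing best estimate of about −0.5 W/m2.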
The timeseries for Qa given by Eq. (A9) is used to scale, according to Eq. (1), the best estimate for Faer of −0.5 W/m2 as at 2005 over 1750 to 2011. Values of Qn = 76, α = 0.00167 and β = 0.317 are used. Qn is taken from the caption to Fig. 2 and α from the last line of Appendix A. The value of β is set to produce a total aerosol forcing of −0.5 W/m2 in 2005. Although giving a slightly different breakdown between Faci and Fari than that just derived, these parameter values result in an almost identical evolution of aerosol forcing.
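The calibration step can be sketched numerically. The functional form below (a linear direct term plus a logarithmic indirect term in emissions Qa) is my assumption standing in for Stevens’ Eq. (1); only the parameter values α = 0.00167, β = 0.317 and Qn = 76 are taken from the text above:

```python
import math

# Parameter values quoted in the appendix text:
ALPHA, BETA, QN = 0.00167, 0.317, 76.0

def aerosol_forcing(q_a):
    """Total aerosol forcing (W/m2): linear direct term plus a
    logarithmic indirect term. Assumed form, not Stevens' exact Eq.(1)."""
    return -ALPHA * q_a - BETA * math.log(1.0 + q_a / QN)

def solve_emissions(target, lo=0.0, hi=1000.0):
    """Bisection: emissions level at which the forcing reaches `target`
    (forcing decreases monotonically as emissions rise)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if aerosol_forcing(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Emissions level at which the fitted curve delivers the -0.5 W/m2
# best estimate for 2005, under the assumed functional form:
q_2005 = solve_emissions(-0.5)
print(round(aerosol_forcing(q_2005), 3))  # -0.5
```

In the paper the logic runs the other way: the 2005 emissions are known and β is tuned so that the curve passes through −0.5 W/m2, but the same one-dimensional root-find applies.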
Aldrin M, Holden M, Guttorp P, Skeie RB, Myhre G, Berntsen TK. Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperatures and global ocean heat content. Environmetrics 23:253–271 (2012)
Carslaw, K. S., and coauthors. Large contribution of natural aerosols to uncertainty in indirect forcing. Nature, 503 (7474), 67–71 (2013)
Lewis N An objective Bayesian, improved approach for applying optimal fingerprint techniques to estimate climate sensitivity. J Clim 26:7414–7429 (2013)
Lewis N and Curry J A: The implications for climate sensitivity of AR5 forcing and heat uptake estimates, Climate Dynamics doi: 10.1007/s00382-014-2342-y (2014)
Stevens, B. Rethinking the lower bound on aerosol radiative forcing. J. Clim., in press (2015). doi: http://dx.doi.org/10.1175/JCLI-D-14-00656.1
J Burke’s decision contains a chronology of prior interactions between Weaver and National Post, much of which, when closely examined, is highly misleading. In today’s post, I’m going to discuss one small but interesting issue: Weaver’s claim that he “did not lobby for climate funding”. J Burke referred to this when she said that Weaver “sought to correct a number of factual errors” in an earlier article by Corcoran. J Burke did not mention that National Post had published a rebuttal contesting Weaver’s claim. In today’s post, I’ll review the sides of the dispute. I plan to get to more central issues in the case, but wish to first clear up some smaller issues (in part to avoid accusations of ignoring them.)
Kaufman and McKay recently and quietly issued an Arctic2K correction file (NOAA xls here) that concedes yet another upside-down series previously pointed out to them at Climate Audit. Once again, they used information from Climate Audit without acknowledgement or credit (see the NSF definition of plagiarism here).
According to the University of Victoria, Andrew Weaver says:
the next generation of his climate model will address the influence of climate on human evolution—much like it’s now being used to examine the influence of humans on climate evolution.
In breaking news, Climate Audit has obtained exclusive information on output from the first runs of Weaver’s “next generation” climate model. These are the first known climate model predictions of the future of human evolution. The results are worrying: take a look.
Coauthors of Rutherford et al (J Climate 2005) (pdf) were Rutherford, Mann, Bradley, Hughes, Jones, Osborn and Briffa. Its editor was Andrew Weaver. It was formally submitted on Sept 16, 2003, received two reviews in January 2004, revised and resubmitted on June 29, 2004, accepted without revision on September 27, 2004 and published in July 2005.
On January 1, 2005, Weaver became chief editor of Journal of Climate. On January 4, 2005, Mann, Rutherford, Wahl and Ammann submitted Testing the fidelity of methods used in proxy-based reconstructions of past climate (pdf).
On January 7, 2005, Weaver asked Keith Briffa, one of the coauthors of Rutherford, Mann et al 2005, to act as a peer reviewer for Mann, Rutherford et al 2005. Because of his IPCC commitments, Briffa declined (suggesting Wigley), but neither Weaver nor Briffa seemed the least bit concerned about any potential impropriety arising from Briffa acting as a buddy peer reviewer.
Postscript: The following AMS policies adopted in 2010 would appear to discourage such buddy review (and also enemy review), but also seem to leave discretion with the editor to override the policy:
A reviewer should be sensitive even to the appearance of a conflict of interest when the manuscript under review is closely related to the reviewer’s work in progress or published. If in doubt, the reviewer should indicate the potential conflict promptly to the editor.
A reviewer should not evaluate a manuscript authored or co-authored by a person with whom the reviewer has a close personal or professional connection if the relationship would bias judgment of the manuscript.
A couple of years ago, Brandon Shollenberger wrote up a lengthy review of Mann’s Hockey Stick Wars at Lucia’s. Brandon has fleshed out his review in an ebook here. Brandon summarized the book as follows:
“there is a great deal of misinformation, and even disinformation, polluting the airwaves. One prime example was world renowned climate scientist Michael Mann, and his book, The Hockey Stick and the Climate Wars: Dispatches From the Front Lines. Mann’s book contains many errors, misrepresentations and outright false statements. Responses to his book have been limited primarily to the blogosphere where the average person will never look. Even worse, those responses have been disjointed, broken up across many web pages and scattered throughout numerous discussions. This book is the first part of an attempt to bring together those responses to create informative counternarrative which allows people to quickly get up to speed on the infamous hockey stick controversy while correcting much of the misinformation present in Mann’s book. It covers about half of the hockey stick controversy in slightly over ten thousand words.”
Other books on the incidents are, of course, Andrew Montford’s Hockey Stick Illusion and Hiding the Decline; Steve Mosher and Tom Fuller’s The CRUTape Letters; Fred Pearce’s The Climate Files; and Rupert Darwall’s The Age of Global Warming, as well as fictional accounts of events by Mann and Bradley.
Brandon’s objective was to focus on deceptions in Mann’s Hockey Stick Wars; he does not attempt to cover the same ground as the other works. Brandon’s work is also influenced by the interest in 2014, in connection with Mann v Steyn, in itemizing the most direct misrepresentations in the Mann corpus, some of which, as Brandon observes, Mann has unrepentantly repeated for over a decade. For example, Brandon observes about Mann’s bizarre Excel spreadsheet fabrication: “Despite this correspondence being readily available for a decade now, Mann has continued to repeat his fabricated story about a spreadsheet error. This demonstrates an apparent pattern of deception consistently found in Mann’s book and other writings.”
Take a look.
I regret not giving more coverage to the earlier works. Because their coverage of me was so favorable, I felt somewhat abashed in endorsing them, but, in retrospect, I should have done so.