George Zimmerman’s Libel Lawsuit

Last week, a Florida court dismissed George Zimmerman’s libel suit (see e.g. here). In today’s post, I’ll discuss aspects of this decision that are relevant to Mann’s libel suit against Steyn and others.

Figure 1. Two libel plaintiffs: left – George Zimmerman; right – Michael Mann.


Abram et al 2014 and the Southern Annular Mode

In today’s post, I will look at a new Naturemag climate reconstruction claiming unprecedentedness (h/t Bishop Hill): “Evolution of the Southern Annular Mode during the past millennium” (Abram et al Nature 2014, pdf). Unfortunately, it is marred by precisely the same sort of data mining and spurious multivariate methodology that has been repeatedly identified in Team paleoclimate studies.

The flawed reconstruction has been breathlessly characterized at the Conversation by Guy Williams, an Australian climate academic, as a demonstration that, rather than indicating lower climate sensitivity, the recent increase in Antarctic sea ice is further evidence that things are worse than we thought. Worse, it seems, than previously imagined even by Australian climate academics.

the apparent paradox of Antarctic sea ice is telling us that it [climate change] is real and that we are contributing to it. The Antarctic canary is alive, but its feathers are increasingly wind-ruffled.


Mann’s new paper recharacterizing the Atlantic Multidecadal Oscillation

A guest post by Nic Lewis

 

Michael Mann has had a paper on the Atlantic Multidecadal Oscillation (AMO) accepted by Geophysical Research Letters: “On forced temperature changes, internal variability, and the AMO”. The abstract and access to Supplementary Information are here. Mann has made a preprint of the paper available here. More importantly, and very commendably, he has made full data and Matlab code available.

The paper seeks to overturn the current understanding of the AMO, and provides what on the surface appears to be impressive evidence. But on my reading of the paper, Mann’s case is built on results that do not support his contentions. Had I been a reviewer, I would have pointed this out and recommended rejection.

In this article, I first set out the background to the debate about the AMO and present Mann’s claims. I then examine Mann’s evidence for his claims in detail, and demonstrate that it is illusory. I end with a discussion of the AMO. All the links I give provide access to the full text of the papers cited, not just to their abstracts.

Threats from the University of Queensland

As many readers are aware, John Cook of SKS refused to provide complete data on his 97% Consensus Project (flatly refusing to provide date-stamps and anonymized rater identification). Ironically, Cook left the data lying around the internet (to borrow a phrase from Phil Jones). In an incident remarkably similar to the Mole Incident, Brandon Shollenberger alertly located the refused data and has provided a teaser at his blog.

IOP: expecting consistency between models and observations is an “error”

The publisher of Environmental Research Letters today took the bizarre position that expecting consistency between models and observations is an “error”. Continue reading

The Cleansing of Lennart Bengtsson

Recently, Lennart Bengtsson undertook a positive dialogue with climate skeptics by joining the advisory board of the Global Warming Policy Foundation, an organization that attempts to represent rational skepticism.

Instead of welcoming this initiative, the climate “community” has now cleansed Lennart Bengtsson by pressuring him to resign from the GWPF advisory board. Bengtsson’s discouraging resignation is at Klimazwiebel here (h/t Bishop Hill).

Mann Misrepresents the EPA – Part 1

In today’s post, I will return to my series on false claims in Mann’s lawsuit about supposed “exonerations”. (For previous articles, see here.)

One of the most important misconduct allegations against Mann – the “amputation” of the Briffa reconstruction in IPCC TAR – was discussed recently by Judy Curry, who, in turn, covered Congressional testimony on the incident by John Christy, who had been a Lead Author of the same IPCC TAR chapter and whose recollections of the incident were both first-hand and vivid.

In one of the major graphics in the IPCC 2001 report, declining values of the Briffa reconstruction were deleted (“amputated” is Christy’s apt term), resulting in the figure giving a much greater rhetorical impression of consistency than really existed. This truncation of data had been known (and severely criticized) at Climate Audit long before Climategate.

However, the incident came into an entirely new light with the release of the Climategate emails, which showed that senior IPCC officials had been concerned that the Briffa reconstruction (with its late 20th century decline) would “dilute the message” and that Mann was equally worried that showing the Briffa reconstruction would give “fodder to the skeptics”.

Christy gave the following damning summary of Mann’s conduct as IPCC TAR Lead Author:

Regarding the Hockey Stick of IPCC 2001 evidence now indicates, in my view, that an IPCC Lead Author working with a small cohort of scientists, misrepresented the temperature record of the past 1000 years by (a) promoting his own result as the best estimate, (b) neglecting studies that contradicted his, and (c) amputating another’s result so as to eliminate conflicting data and limit any serious attempt to expose the real uncertainties of these data.

Christy left out a further fundamental problem in the amputation: there was no disclosure of the amputation in the IPCC 2001 report itself.

The impropriety of deleting adverse data in an IPCC graphic was easily understood in the broader world of brokers, accountants, lawyers and fund managers – a world in which there was negligible sympathy for excuses. Not only did this appear to be misconduct as far as the public was concerned; the deletion of adverse data in the IPCC graphic also appeared to be an act of “omitting data or results such that the research is not accurately represented in the research record” – “falsification”, one of the categories of academic misconduct defined in the NSF and other misconduct codes.

Further, both the Oxburgh and Muir Russell reports concluded that the IPCC 2001 graphic was “misleading”.

However, NONE of the inquiries conducted an investigation of the incident. Each, in turn, ignored or evaded the incident. I’ll examine the evasions in today’s post.

Today’s post will open consideration of the EPA documents referred to in Mann’s pleadings, a topic that is not easily summarized. Today’s discussion of the EPA documents will only be a first bite.

Turney in the Climategate Dossier

Today’s post finalizes some notes made earlier this year on appearances in the Climategate dossier by Chris Turney, the leader of the Ship of Fools and an alumnus of the University of East Anglia (an affiliation featured in his Google avatar over his PhD institution – see left).

Although it attracted no notice at the time, Turney’s efforts to create a “consortium” to obtain government funds were a prominent feature of the 2009 Climategate correspondence. Indeed, the second-last email in the original Climategate dossier concerns Turney’s “consortium”. It turns out that Turney even had a role in the quality control that was so severely criticized in the Harry Readme.

Radiocarbon calibration and Bayesian inference

A guest post by Nic Lewis

 

On 1 April 2014 the Bishop Hill blog carried a guest post ‘Dating error’ by Doug Keenan, in which he set out his allegations of research misconduct by Oxford University professor Christopher Bronk Ramsey. Professor Bronk Ramsey is an expert on calibration of radiocarbon dating and the author of OxCal, apparently one of the two most widely used radiocarbon calibration programs (the other being Calib, by Stuiver and Reimer). Steve McIntyre and others opined that an allegation of misconduct was inappropriate in this sort of case, and likely to be counter-productive. I entirely agree. Nevertheless, the post prompted an interesting discussion with statistical expert Professor Radford Neal of the University of Toronto and with Nullius in Verba (an anonymous but statistically-minded commentator). They took issue with Doug’s claims that the statistical methods and resulting probability densities (PDFs) and probability ranges given by OxCal and Calib are wrong. Doug’s arguments, using a partly Bayesian approach he calls a discrete calibration method, are set out in his 2012 peer-reviewed paper.

I also commented, saying that if one assumes a uniform prior for the true calendar date, then Doug Keenan’s results do not follow from standard Bayesian theory. Although the OxCal and Calib calibration graphs (and the Calib manual) are confusing on the point, Bronk Ramsey’s papers make clear that he does use such a uniform prior. I wrote that in my view Bronk Ramsey had followed a defensible approach (since his results flow from applying Bayes’ theorem using that prior), so there was no research misconduct involved, but that his method did not represent best scientific inference.
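To make the uniform-prior calculation concrete, here is a minimal numerical sketch of Bayesian calibration of a single radiocarbon determination. It is not OxCal or Calib code: the calibration curve, its uncertainty, the measurement and the calendar-age grid are all invented for illustration, and a uniform prior over the grid is assumed, so the posterior is simply the normalized likelihood.

```python
import numpy as np

# Minimal sketch, not OxCal/Calib code: Bayesian calibration of one radiocarbon
# determination under a uniform prior on calendar age. The calibration curve,
# its uncertainty and the measurement below are invented for illustration.

cal_ages = np.arange(1000, 2001)                # candidate calendar ages (grid)
curve_mu = 0.95 * cal_ages + 30.0               # toy calibration curve: 14C age vs calendar age
curve_sigma = np.full(cal_ages.shape, 15.0)     # toy curve uncertainty (1-sigma)

meas_c14, meas_sigma = 1450.0, 25.0             # measured 14C age and its 1-sigma error

# Likelihood of the measurement at each candidate calendar age, combining
# measurement error and curve error in quadrature.
total_sigma = np.sqrt(meas_sigma**2 + curve_sigma**2)
likelihood = np.exp(-0.5 * ((meas_c14 - curve_mu) / total_sigma) ** 2) / total_sigma

# Uniform prior over the grid: the posterior is just the normalized likelihood.
posterior = likelihood / likelihood.sum()

# Central 90% credible interval from the posterior CDF.
cdf = np.cumsum(posterior)
lo_age = cal_ages[np.searchsorted(cdf, 0.05)]
hi_age = cal_ages[np.searchsorted(cdf, 0.95)]
print(f"posterior mode: {cal_ages[posterior.argmax()]} cal yr, 90% range: {lo_age}-{hi_age} cal yr")
```

With a non-uniform prior, the likelihood would be multiplied by that prior before normalizing; whether the uniform-prior choice is appropriate is precisely the point at issue in the discussion above.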

The final outcome was that Doug accepted what Radford and Nullius said about how the sample measurement should be interpreted as probability, with the implication that his criticism of the calibration method is invalid. However, as I had told Doug originally, I think his criticism of the OxCal and Calib calibration methods is actually valid: I just think that imperfect understanding rather than misconduct on the part of Bronk Ramsey (and of other radiocarbon calibration experts) is involved. Progress in probability and statistics has for a long time been impeded by quasi-philosophical disagreements between theoreticians as to what probability represents and the correct foundations for statistics. Use of what are, in my view, unsatisfactory methods remains common.

Fortunately, regardless of foundational disagreements I think most people (and certainly most scientists) are in practice prepared to judge the appropriateness of statistical estimation methods by how well they perform upon repeated use. In other words, when estimating the value of a fixed but unknown parameter, does the true value lie outside the specified uncertainty range in the indicated proportion of cases?

This so-called frequentist coverage or probability-matching property can be tested by drawing samples at random from the relevant uncertainty distributions. For any assumed distribution of parameter values, a method of producing 5–95% uncertainty ranges can be tested by drawing a large number of samples of possible parameter values from that distribution, and for each drawing a measurement at random according to the measurement uncertainty distribution and estimating a range for the parameter. If the true value of the parameter lies below the bottom end of the range in 5% of cases and above its top in 5% of cases, then that method can be said to exhibit perfect frequentist coverage or exact probability matching (at least at the 5% and 95% probability levels), and would be viewed as a more reliable method than a non-probability-matching one for which those percentages were (say) 3% and 10%. It is also preferable to a method for which those percentages were both 3%, which would imply the uncertainty ranges were unnecessarily wide. Note that in some cases probability-matching accuracy is unaffected by the parameter value distribution assumed.
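As an illustration of the coverage test just described, the following sketch (my own toy example, not taken from any of the papers discussed) draws true parameter values from an assumed Gaussian distribution, simulates a Gaussian measurement of each, forms a 5–95% range from the measurement, and counts how often the true value falls below or above that range. In this simple Gaussian-error case the coverage comes out at 5% in each tail whatever parameter distribution is assumed, illustrating the final point of the previous paragraph.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Assumed distribution of the true parameter values (the choice does not
# affect coverage in this simple Gaussian-error example).
true_vals = rng.normal(loc=0.0, scale=10.0, size=n_trials)

# One noisy measurement of each true value.
meas_sigma = 1.0
measurements = true_vals + rng.normal(scale=meas_sigma, size=n_trials)

# 5-95% range formed from each measurement: measurement +/- 1.645 sigma.
z = 1.6449
lower = measurements - z * meas_sigma
upper = measurements + z * meas_sigma

# Exact probability matching would give ~5% in each tail.
frac_below = np.mean(true_vals < lower)
frac_above = np.mean(true_vals > upper)
print(f"below range: {frac_below:.3f}, above range: {frac_above:.3f}")
```

Here the ±1.645σ range is exact by construction; for the radiocarbon calibration methods at issue, the analogous ranges would instead come from each method's posterior intervals, and the simulated tail fractions would reveal how well each method matches probability.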

I’ll now attempt to explain the statistical issues and to provide evidence for my views. I’ll then set up a simplified, analytically tractable version of the problem and use it to test the probability-matching performance of different methods. I’ll leave discussion of the merits of Doug’s methods to the end.

The “Ethics Application” for Lewandowsky’s Fury

In today’s post, I will discuss the ethics application and approval process for Fury.
