As many readers are aware, John Cook of SKS refused to provide complete data on his 97% Consensus Project, flatly refusing to release date-stamps and anonymized rater identifications. Ironically, Cook left the data lying around the internet (to borrow a phrase from Phil Jones). In an incident remarkably similar to the Mole Incident, Brandon Shollenberger alertly located the refused data and has provided a teaser at his blog. Continue reading
Recently, Lennart Bengtsson undertook a positive dialogue with climate skeptics by joining the advisory board of the Global Warming Policy Foundation, an organization that attempts to represent rational skepticism.
Instead of welcoming this initiative, the climate “community” has now cleansed Lennart Bengtsson by pressuring him to resign from the GWPF advisory board. Bengtsson’s discouraging resignation is at Klimazwiebel here (h/t Bishop Hill). Continue reading →
In today’s post, I will return to my series on false claims in Mann’s lawsuit about supposed “exonerations”. (For previous articles, see here.)
One of the most important misconduct allegations against Mann – the “amputation” of the Briffa reconstruction in IPCC TAR – was discussed recently by Judy Curry, who, in turn, covered Congressional testimony on the incident by John Christy, who had been a Lead Author of the same IPCC TAR chapter and whose recollections of the incident were both first-hand and vivid.
In one of the major graphics in the IPCC 2001 report, declining values of the Briffa reconstruction were deleted (“amputated” is Christy’s apt term), resulting in the figure giving a much greater rhetorical impression of consistency than really existed. This truncation of data had been known (and severely criticized) at Climate Audit long before Climategate.
However, the incident came into an entirely new light with the release of the Climategate emails, which showed that senior IPCC officials had been concerned that the Briffa reconstruction (with its late 20th century decline) would “dilute the message” and that Mann was equally worried that showing the Briffa reconstruction would give “fodder to the skeptics”.
Christy gave the following damning summary of Mann’s conduct as IPCC TAR Lead Author:
Regarding the Hockey Stick of IPCC 2001 evidence now indicates, in my view, that an IPCC Lead Author working with a small cohort of scientists, misrepresented the temperature record of the past 1000 years by (a) promoting his own result as the best estimate, (b) neglecting studies that contradicted his, and (c) amputating another’s result so as to eliminate conflicting data and limit any serious attempt to expose the real uncertainties of these data.
Christy left out a further fundamental problem in the amputation: there was no disclosure of the amputation in the IPCC 2001 report itself.
The impropriety of deleting adverse data in an IPCC graphic was easily understood in the broader world of brokers, accountants, lawyers and fund managers – a world in which there was negligible sympathy for excuses. Not only did this appear to be misconduct as far as the public was concerned, the deletion of adverse data in the IPCC graphic also appeared to be an act of “omitting data or results such that the research is not accurately represented in the research record” – one of the definitions (“falsification”) of academic misconduct in the NSF and other academic misconduct codes.
Further, both the Oxburgh and Muir Russell reports concluded that the IPCC 2001 graphic was “misleading”.
However, NONE of the inquiries conducted an investigation of the incident. Each, in turn, ignored or evaded the incident. I’ll examine the evasions in today’s post.
Today’s post will open consideration of the EPA documents referred to in Mann’s pleadings, a topic that is not easily summarized. Today’s discussion of the EPA documents will only be a first bite.
Continue reading →
Today’s post finalizes some notes made earlier this year on appearances in the Climategate dossier by Chris Turney, the leader of the Ship of Fools and an alumnus of the University of East Anglia (an affiliation featured in his Google avatar over his PhD institution – see left).
Although it attracted no notice at the time, Turney’s efforts to create a “consortium” to obtain government funds were a prominent feature of 2009 Climategate correspondence. Indeed, the second-last email in the original Climategate dossier concerns Turney’s “consortium”. It turns out that Turney even had a role in the quality control that was so severely criticized in the Harry Readme.
Continue reading →
A guest post by Nic Lewis
On 1 April 2014 the Bishop Hill blog carried a guest post ‘Dating error’ by Doug Keenan, in which he set out his allegations of research misconduct by Oxford University professor Christopher Bronk Ramsey. Professor Bronk Ramsey is an expert on calibration of radiocarbon dating and author of OxCal, apparently one of the two most widely used radiocarbon calibration programs (the other being Calib, by Stuiver and Reimer). Steve McIntyre and others opined that an allegation of misconduct was inappropriate in this sort of case, and likely to be counter-productive. I entirely agree. Nevertheless, the post prompted an interesting discussion with statistical expert Professor Radford Neal of the University of Toronto and with Nullius in Verba (an anonymous but statistically-minded commentator). They took issue with Doug’s claims that the statistical methods and resulting probability density functions (PDFs) and probability ranges given by OxCal and Calib are wrong. Doug’s arguments, using a partly Bayesian approach he calls a discrete calibration method, are set out in his 2012 peer reviewed paper.
I also commented, saying if one assumes a uniform prior for the true calendar date, then Doug Keenan’s results do not follow from standard Bayesian theory. Although the OxCal and Calib calibration graphs (and the Calib manual) are confusing on the point, Bronk Ramsey’s papers make clear he does use such a uniform prior. I wrote that in my view Bronk Ramsey had followed a defensible approach (since his results flow from applying Bayes’ theorem using that prior), so there was no research misconduct involved, but that his method did not represent best scientific inference.
The final outcome was that Doug accepted what Radford and Nullius said about how the sample measurement should be interpreted as probability, with the implication that his criticism of the calibration method is invalid. However, as I had told Doug originally, I think his criticism of the OxCal and Calib calibration methods is actually valid: I just think that imperfect understanding rather than misconduct on the part of Bronk Ramsey (and of other radiocarbon calibration experts) is involved. Progress in probability and statistics has for a long time been impeded by quasi-philosophical disagreements between theoreticians as to what probability represents and the correct foundations for statistics. Use of what are, in my view, unsatisfactory methods remains common.
Fortunately, regardless of foundational disagreements I think most people (and certainly most scientists) are in practice prepared to judge the appropriateness of statistical estimation methods by how well they perform upon repeated use. In other words, when estimating the value of a fixed but unknown parameter, does the true value lie outside the specified uncertainty range in the indicated proportion of cases?
This so-called frequentist coverage or probability-matching property can be tested by drawing samples at random from the relevant uncertainty distributions. For any assumed distribution of parameter values, a method of producing 5–95% uncertainty ranges can be tested by drawing a large number of samples of possible parameter values from that distribution, and for each drawing a measurement at random according to the measurement uncertainty distribution and estimating a range for the parameter. If the true value of the parameter lies below the bottom end of the range in 5% of cases and above its top in 5% of cases, then that method can be said to exhibit perfect frequentist coverage or exact probability matching (at least at the 5% and 95% probability levels), and would be viewed as a more reliable method than a non-probability-matching one for which those percentages were (say) 3% and 10%. It is also preferable to a method for which those percentages were both 3%, which would imply the uncertainty ranges were unnecessarily wide. Note that in some cases probability-matching accuracy is unaffected by the parameter value distribution assumed.
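The coverage test described above can be sketched in a few lines of code. This is purely illustrative: the uniform parameter distribution, Gaussian measurement error and known error standard deviation are assumptions made for the sketch, not features of OxCal or Calib. With a Gaussian error of known width, the simple interval y ± 1.645σ exhibits exact probability matching at the 5% and 95% levels, which the simulation confirms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma = 1.0  # measurement error s.d., assumed known (illustrative)

# Draw "true" parameter values from an assumed distribution,
# then one noisy measurement of each.
theta = rng.uniform(0.0, 10.0, n)        # assumed parameter distribution
y = theta + rng.normal(0.0, sigma, n)    # Gaussian measurement error

# 5-95% uncertainty range for each measurement: y +/- 1.645*sigma
z = 1.6449  # 95th percentile of the standard normal
lo, hi = y - z * sigma, y + z * sigma

# Fractions of cases where the true value falls outside the range.
below = np.mean(theta < lo)  # should be close to 0.05
above = np.mean(theta > hi)  # should be close to 0.05
print(below, above)
```

A method whose `below` and `above` fractions departed materially from 5% on such a test would, on the criterion above, be judged less reliable, regardless of its theoretical pedigree.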
I’ll now attempt to explain the statistical issues and to provide evidence for my views. I’ll then set up a simplified, analytically tractable, version of the problem and use it to test the probability matching performance of different methods. I’ll leave discussion of the merits of Doug’s methods to the end. Continue reading →
In today’s post, I will discuss the ethics application and approval process for Fury. Continue reading →
Following a variety of untrue allegations by Lewandowsky and his supporters, Frontiers have issued a new statement stating that they received “no threats” and that they had received “well argued and cogent” complaints, including mine here and here. (I did not report or publicize this complaint at Climate Audit or invite any public pressure on the journal.)
According to my understanding, the issues identified by the journal are issues that constitute violations of most codes of conduct within academic psychology, including Australian codes.
There has been a series of media reports concerning the recent retraction of the paper Recursive Fury: Conspiracist ideation in the blogosphere in response to research on conspiracist ideation, originally published on 18 March 2013 in Frontiers in Psychology. Until now, our policy has been to handle this matter with discretion out of consideration for all those concerned. But given the extent of the media coverage – largely based on misunderstanding – Frontiers would now like to better clarify the context behind the retraction.
As we published in our retraction statement, a small number of complaints were received during the weeks following publication. Some of those complaints were well argued and cogent and, as a responsible publisher, our policy is to take such issues seriously. Frontiers conducted a careful and objective investigation of these complaints. Frontiers did not “cave in to threats”; in fact, Frontiers received no threats. The many months between publication and retraction should highlight the thoroughness and seriousness of the entire process.
As a result of its investigation, which was carried out in respect of academic, ethical and legal factors, Frontiers came to the conclusion that it could not continue to carry the paper, which does not sufficiently protect the rights of the studied subjects. Specifically, the article categorizes the behaviour of identifiable individuals within the context of psychopathological characteristics. Frontiers informed the authors of the conclusions of our investigation and worked with the authors in good faith, providing them with the opportunity of submitting a new paper for peer review that would address the issues identified and that could be published simultaneously with the retraction notice.
The authors agreed and subsequently proposed a new paper that was substantially similar to the original paper and, crucially, did not deal adequately with the issues raised by Frontiers.
We remind the community that the retracted paper does not claim to be about climate science, but about psychology. The actions taken by Frontiers sought to ensure the right balance of respect for the rights of all.
One of Frontiers’ founding principles is that of authors’ rights. We take this opportunity to reassure our editors, authors and supporters that Frontiers will continue to publish – and stand by – valid research. But we also must uphold the rights and privacy of the subjects included in a study or paper.
One of the hidden assumptions of proxy reconstructions, as carried out by IPCC authors, is that each “proxy” has a linear relationship to temperature plus relatively low-order red noise. Under such circumstances, the noise will cancel out in a linear combination of proxies (the reconstruction) and a “signal” will emerge. However, I’ve never seen any author discuss the validity of this assumption, let alone establish it.
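The mechanism being assumed can be shown with a toy simulation. Everything here is hypothetical and chosen only for illustration – the AR(1) coefficient, noise level, number of proxies and the smooth trend standing in for temperature are not taken from any actual proxy network. If (and only if) each proxy really is the common signal plus independent low-order red noise, a simple equal-weight average recovers the signal much better than any single proxy does.

```python
import numpy as np

rng = np.random.default_rng(42)
T, N = 500, 50            # years, number of proxies (hypothetical)
phi, noise_sd = 0.5, 1.0  # AR(1) coefficient and innovation s.d. (hypothetical)

# A common "signal": a smooth trend standing in for temperature.
signal = np.linspace(-1.0, 1.0, T)

def red_noise(T):
    """One AR(1) red-noise series of length T."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0.0, noise_sd)
    return x

# Each proxy = linear response to the signal plus independent red noise.
proxies = np.array([signal + red_noise(T) for _ in range(N)])

# Equal-weight linear combination ("reconstruction").
recon = proxies.mean(axis=0)

# The independent noise largely cancels in the average.
r_single = np.corrcoef(proxies[0], signal)[0, 1]
r_recon = np.corrcoef(recon, signal)[0, 1]
print(r_single, r_recon)
```

The point of the sketch is the conditional: the cancellation depends entirely on the noise being independent across proxies and the signal response being linear and common. If the “noise” is correlated between proxies, or the response is nonlinear or varies by proxy, averaging does not deliver this improvement.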
In today’s post, I’m going to look at low-latitude South American d18O isotope series mainly from Peru, including three proxies from Neukom. Tropical ice core d18O series (especially Quelccaya, but also Huascaran and Sajama) have been a staple of temperature reconstructions. During the past few years, d18O series have also been obtained from speleothems and lake sediments.
In my opinion, before one can begin thinking about temperature reconstructions using many different types of proxies, some of which are singletons, it makes sense to see if one can make sense of something as simple as d18O series within one relatively circumscribed region.
Neukom, Gergis and Karoly, accompanied by a phalanx of protective specialists, have served up a plate of cold screened spaghetti in today’s Nature (announced by Gergis here).
Gergis et al 2012 (presently in a sort of zombie withdrawal) had foundered on ex post screening. Neukom, Gergis and Karoly + 2014 take ex post screening to a new and shall-we-say unprecedented level. This will be the topic of today’s post. Continue reading →