Mann Misrepresents the EPA – Part 1

In today’s post, I will return to my series on false claims in Mann’s lawsuit about supposed “exonerations”. (For previous articles, see here.)

One of the most important misconduct allegations against Mann – the “amputation” of the Briffa reconstruction in IPCC TAR – was discussed recently by Judy Curry, who, in turn, covered Congressional testimony on the incident by John Christy, who had been a Lead Author of the same IPCC TAR chapter and whose recollections of the incident were both first-hand and vivid.

In one of the major graphics in the IPCC 2001 report, declining values of the Briffa reconstruction were deleted (“amputated” is Christy’s apt term), resulting in the figure giving a much greater rhetorical impression of consistency than really existed. This truncation of data had been known (and severely criticized) at Climate Audit long before Climategate.

However, the incident came into an entirely new light with the release of the Climategate emails, which showed that senior IPCC officials had been concerned that the Briffa reconstruction (with its late 20th century decline) would “dilute the message” and that Mann was equally worried that showing the Briffa reconstruction would give “fodder to the skeptics”.

Christy gave the following damning summary of Mann’s conduct as IPCC TAR Lead Author:

Regarding the Hockey Stick of IPCC 2001, evidence now indicates, in my view, that an IPCC Lead Author working with a small cohort of scientists, misrepresented the temperature record of the past 1000 years by (a) promoting his own result as the best estimate, (b) neglecting studies that contradicted his, and (c) amputating another’s result so as to eliminate conflicting data and limit any serious attempt to expose the real uncertainties of these data.

Christy left out a further fundamental problem in the amputation: there was no disclosure of the amputation in the IPCC 2001 report itself.

The impropriety of deleting adverse data in an IPCC graphic was easily understood in the broader world of brokers, accountants, lawyers and fund managers – a world in which there was negligible sympathy for excuses. Not only did this appear to be misconduct as far as the public was concerned, but the deletion of adverse data in the IPCC graphic also appeared to be an act of “omitting data or results such that the research is not accurately represented in the research record” – one of the definitions (“falsification”) of academic misconduct in the NSF and other academic misconduct codes.

Further, both the Oxburgh and Muir Russell reports concluded that the IPCC 2001 graphic was “misleading”.

However, NONE of the inquiries conducted an investigation of the incident. Each, in turn, ignored or evaded the incident. I’ll examine the evasions in today’s post.

Today’s post will open consideration of the EPA documents referred to in Mann’s pleadings, a topic that is not easily summarized; today’s discussion will only be a first bite.
Continue reading

Turney in the Climategate Dossier

Today’s post finalizes some notes made earlier this year on appearances in the Climategate dossier by Chris Turney, the leader of the Ship of Fools and an alumnus of the University of East Anglia (an affiliation featured in his Google avatar over his PhD institution – see left).

Although it attracted no notice at the time, Turney’s efforts to create a “consortium” to obtain government funds were a prominent feature of the 2009 Climategate correspondence. Indeed, the second-last email in the original Climategate dossier concerns Turney’s “consortium”. It turns out that Turney even had a role in the quality control that was so severely criticized in the Harry Readme.
Continue reading

Radiocarbon calibration and Bayesian inference

A guest post by Nic Lewis


On 1 April 2014 the Bishop Hill blog carried a guest post ‘Dating error’ by Doug Keenan, in which he set out his allegations of research misconduct by Oxford University professor Christopher Bronk Ramsey. Professor Bronk Ramsey is an expert on the calibration of radiocarbon dating and the author of OxCal, apparently one of the two most widely used radiocarbon calibration programs (the other being Calib, by Stuiver and Reimer). Steve McIntyre and others opined that an allegation of misconduct was inappropriate in this sort of case, and likely to be counter-productive. I entirely agree. Nevertheless, the post prompted an interesting discussion with statistical expert Professor Radford Neal of the University of Toronto and with Nullius in Verba (an anonymous but statistically-minded commentator). They took issue with Doug’s claims that the statistical methods and resulting probability density functions (PDFs) and probability ranges given by OxCal and Calib are wrong. Doug’s arguments, using a partly Bayesian approach that he calls a discrete calibration method, are set out in his 2012 peer-reviewed paper.

I also commented, saying that if one assumes a uniform prior for the true calendar date, then Doug Keenan’s results do not follow from standard Bayesian theory. Although the OxCal and Calib calibration graphs (and the Calib manual) are confusing on the point, Bronk Ramsey’s papers make clear that he does use such a uniform prior. I wrote that in my view Bronk Ramsey had followed a defensible approach (since his results flow from applying Bayes’ theorem using that prior), so there was no research misconduct involved, but that his method did not represent best scientific inference.
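To make the point concrete, here is a minimal sketch of the Bayesian calculation with a uniform prior over calendar dates, in the spirit of the approach described above. The calibration curve, its uncertainty, and all numbers below are invented for illustration; the real programs use the empirically tabulated IntCal curve, not this toy sinusoid.

```python
import numpy as np

# Toy calibration curve: maps true calendar age (years BP) to expected
# radiocarbon age.  The wiggles are invented for illustration only.
cal_ages = np.arange(0, 3000)                      # candidate calendar ages
curve = cal_ages + 80 * np.sin(cal_ages / 150.0)   # hypothetical curve
curve_err = 20.0                                   # assumed curve uncertainty (yr)

def posterior(c14_age, c14_err, prior=None):
    """Posterior over calendar age for one radiocarbon measurement,
    applying Bayes' theorem with a uniform prior by default."""
    sigma2 = c14_err**2 + curve_err**2
    # Gaussian likelihood of the measurement given each candidate age
    like = np.exp(-0.5 * (c14_age - curve)**2 / sigma2)
    if prior is None:
        prior = np.ones_like(cal_ages, dtype=float)  # uniform prior
    post = like * prior
    return post / post.sum()

post = posterior(c14_age=1500.0, c14_err=30.0)

# A 5-95% range read off the cumulative posterior
cdf = np.cumsum(post)
lo, hi = cal_ages[np.searchsorted(cdf, [0.05, 0.95])]
```

Swapping the uniform `prior` for something else changes the resulting range, which is exactly why the choice of prior is at the heart of the disagreement.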

The final outcome was that Doug accepted what Radford and Nullius said about how the sample measurement should be interpreted as a probability, with the implication that his criticism of the calibration method is invalid. However, as I had told Doug originally, I think his criticism of the OxCal and Calib calibration methods is actually valid: I just think that imperfect understanding, rather than misconduct, on the part of Bronk Ramsey (and of other radiocarbon calibration experts) is involved. Progress in probability and statistics has long been impeded by quasi-philosophical disagreements between theoreticians as to what probability represents and what the correct foundations for statistics are. The use of what are, in my view, unsatisfactory methods remains common.

Fortunately, regardless of foundational disagreements I think most people (and certainly most scientists) are in practice prepared to judge the appropriateness of statistical estimation methods by how well they perform upon repeated use. In other words, when estimating the value of a fixed but unknown parameter, does the true value lie outside the specified uncertainty range in the indicated proportion of cases?

This so-called frequentist coverage or probability-matching property can be tested by drawing samples at random from the relevant uncertainty distributions. For any assumed distribution of parameter values, a method of producing 5–95% uncertainty ranges can be tested by drawing a large number of samples of possible parameter values from that distribution, and for each drawing a measurement at random according to the measurement uncertainty distribution and estimating a range for the parameter. If the true value of the parameter lies below the bottom end of the range in 5% of cases and above its top in 5% of cases, then that method can be said to exhibit perfect frequentist coverage or exact probability matching (at least at the 5% and 95% probability levels), and would be viewed as a more reliable method than a non-probability-matching one for which those percentages were (say) 3% and 10%. It is also preferable to a method for which those percentages were both 3%, which would imply the uncertainty ranges were unnecessarily wide. Note that in some cases probability-matching accuracy is unaffected by the parameter value distribution assumed.
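A coverage test of this kind is easy to simulate. The sketch below uses a simple Gaussian measurement model and a naive symmetric interval (all of it my own illustrative assumption, not the OxCal or Calib method): draw true parameter values, simulate one noisy measurement of each, form a 5–95% range, and count how often the truth falls outside each end.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
meas_sd = 2.0  # assumed measurement standard deviation

# Draw true parameter values from an assumed distribution,
# then simulate one noisy Gaussian measurement of each.
truth = rng.normal(50.0, 10.0, n_trials)
meas = truth + rng.normal(0.0, meas_sd, n_trials)

# A simple 5-95% uncertainty range centred on each measurement.
z = 1.6449  # one-sided 95% quantile of the standard normal
low = meas - z * meas_sd
high = meas + z * meas_sd

# For exact probability matching, each tail fraction should be ~0.05.
below = np.mean(truth < low)
above = np.mean(truth > high)
```

A method whose `below` and `above` fractions come out near 0.05 exhibits the probability-matching property described above; lopsided or inflated tail fractions flag an unreliable or wastefully wide method.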

I’ll now attempt to explain the statistical issues and to provide evidence for my views. I’ll then set up a simplified, analytically tractable, version of the problem and use it to test the probability matching performance of different methods. I’ll leave discussion of the merits of Doug’s methods to the end. Continue reading

The “Ethics Application” for Lewandowsky’s Fury

In today’s post, I will discuss the ethics application and approval process for Fury. Continue reading

Frontiers Issues Statement on Lewandowsky

Following a variety of untrue allegations by Lewandowsky and his supporters, Frontiers have issued a new statement stating that they received “no threats” and that they had received “well argued and cogent” complaints, including mine here and here. (I did not report or publicize this complaint at Climate Audit or invite any public pressure on the journal.)

According to my understanding, the issues identified by the journal constitute violations of most codes of conduct within academic psychology, including the Australian codes.

There has been a series of media reports concerning the recent retraction of the paper Recursive Fury: Conspiracist ideation in the blogosphere in response to research on conspiracist ideation, originally published on 18 March 2013 in Frontiers in Psychology. Until now, our policy has been to handle this matter with discretion out of consideration for all those concerned. But given the extent of the media coverage – largely based on misunderstanding – Frontiers would now like to better clarify the context behind the retraction.

As we published in our retraction statement, a small number of complaints were received during the weeks following publication. Some of those complaints were well argued and cogent and, as a responsible publisher, our policy is to take such issues seriously. Frontiers conducted a careful and objective investigation of these complaints. Frontiers did not “cave in to threats”; in fact, Frontiers received no threats. The many months between publication and retraction should highlight the thoroughness and seriousness of the entire process.

As a result of its investigation, which was carried out in respect of academic, ethical and legal factors, Frontiers came to the conclusion that it could not continue to carry the paper, which does not sufficiently protect the rights of the studied subjects. Specifically, the article categorizes the behaviour of identifiable individuals within the context of psychopathological characteristics. Frontiers informed the authors of the conclusions of our investigation and worked with the authors in good faith, providing them with the opportunity of submitting a new paper for peer review that would address the issues identified and that could be published simultaneously with the retraction notice.

The authors agreed and subsequently proposed a new paper that was substantially similar to the original paper and, crucially, did not deal adequately with the issues raised by Frontiers.

We remind the community that the retracted paper does not claim to be about climate science, but about psychology. The actions taken by Frontiers sought to ensure the right balance of respect for the rights of all.

One of Frontiers’ founding principles is that of authors’ rights. We take this opportunity to reassure our editors, authors and supporters that Frontiers will continue to publish – and stand by – valid research. But we also must uphold the rights and privacy of the subjects included in a study or paper.

SH Proxies: Peru d18O

One of the hidden assumptions of proxy reconstructions, as carried out by IPCC authors, is that each “proxy” has a linear relationship to temperature plus relatively low-order red noise. Under such circumstances, the noise will cancel out in a linear combination of proxies (reconstruction) and a “signal” will emerge. However, I’ve never seen any author discuss the validity of this assumption, let alone establish it.
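The assumption can be illustrated with a simulation (all parameter values below are my own invention, not taken from any reconstruction): if each “proxy” really were a linear response to a common signal plus independent AR(1) red noise, the noise would average out and a simple mean of the proxies would track the signal far better than any individual series.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies = 1000, 30
phi = 0.5  # AR(1) coefficient of the "red" noise (illustrative)

# A common "temperature signal" shared by all proxies
signal = np.cumsum(rng.normal(0.0, 0.1, n_years))
signal -= signal.mean()

def ar1_noise(n, phi, sd=1.0):
    """Generate AR(1) (red) noise of length n."""
    x = np.zeros(n)
    e = rng.normal(0.0, sd, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

# Each proxy = linear response to the signal + independent red noise
proxies = np.array([0.8 * signal + ar1_noise(n_years, phi)
                    for _ in range(n_proxies)])
recon = proxies.mean(axis=0)  # the "reconstruction"

r_single = np.corrcoef(proxies[0], signal)[0, 1]
r_recon = np.corrcoef(recon, signal)[0, 1]
```

The averaged reconstruction correlates with the signal far better than any single proxy; the unexamined question is whether real proxies actually satisfy the linear-response-plus-red-noise premise that makes this work.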

In today’s post, I’m going to look at low-latitude South American d18O isotope series mainly from Peru, including three proxies from Neukom. Tropical ice core d18O series (especially Quelccaya, but also Huascaran and Sajama) have been a staple of temperature reconstructions. During the past few years, d18O series have also been obtained from speleothems and lake sediments.

In my opinion, before one can begin thinking about temperature reconstructions using many different types of proxies, some of which are singletons, it makes sense to see if one can make sense of something as simple as d18O series within one relatively circumscribed region.

Continue reading

Neukom and Gergis Serve Cold Screened Spaghetti

Neukom, Gergis and Karoly, accompanied by a phalanx of protective specialists, have served up a plate of cold screened spaghetti in today’s Nature (announced by Gergis here).

Gergis et al 2012 (presently in a sort of zombie withdrawal) had foundered on ex post screening. Neukom, Gergis and Karoly + 2014 take ex post screening to a new and shall-we-say unprecedented level. This will be the topic of today’s post. Continue reading

UWA Vice Chancellor Johnson Refuses Data Yet Again

Barry Woods has been trying to get Lewandowsky’s data, inclusive of any metadata on referring blogs, since August 2012 (before anyone had even heard of Lewandowsky). Woods has made multiple requests, many of which have not even been acknowledged. Woods has expressed concern about Hoax to Eric Eich, editor of Psychological Science, who suggested that Woods submit a comment.

The UWA’s Code of Conduct for the Responsible Practice of Research states clearly:

3.8 Research data related to publications must be available for discussion with other researchers.

The Australian Code of Conduct for the Responsible Practice of Research (to which the University of Western Australia claims to adhere) states:

2.5.2 Research data should be made available for use by other researchers unless this is prevented by ethical, privacy or confidentiality matters.

Nonetheless, Vice Chancellor Johnson flatly and unequivocally denied data to Woods for the purpose of submitting a comment to the journal, stating that “it is not the University’s practice to accede to such requests”.

From: Paul Johnson
Sent: Friday, March 28, 2014 8:08 AM
To: Barry Woods
Cc: Murray Maybery; Kimberley Heitman
Subject: request for access to data

Mr B. Woods

Dear Mr Woods,

I refer to your emails of the 11th and 25th March directed to Professor Maybery, which repeat a request you made by email dated the 5th September 2013 to Professor Lewandowsky (copied to numerous recipients) in which you request access to Professor Lewandowsky’s data for the purpose of submitting a comment to the Journal of Psychological Science.

It is not the University’s practice to accede to such requests.

Yours faithfully,
Professor Paul Johnson,

It seems highly doubtful to me that it is indeed the “University’s practice” to refuse access to data to other researchers. Such a practice, if generally applied, would be a flagrant violation of the Australian Code of Conduct and would surely have come to light before now. But whether the refusal of data to other researchers is the general “practice” of the University or merely applied opportunistically in this particular case, it is a violation of the Australian Code of Conduct for Responsible Research and the “practice” should cease.

UWA Vice-Chancellor Refuses Lewandowsky Data

Over the past 15 months, I’ve made repeated requests to the University of Western Australia for a complete copy of Lewandowsky’s Hoax data in order to analyse it for fraudulent and/or scammed responses. Until now, none of my previous requests had even been acknowledged.

I was recently prompted to reiterate my longstanding request by the retraction of Lewandowsky’s Fury. This time, my request was flatly and permanently denied by the Vice Chancellor of the University himself, Paul Johnson, who grounded his refusal not on principles set out in university or national policy, but on the fact that the University administration’s feelings were hurt by my recent blog post describing the “investigation” by the University administration into the amendment of Lewandowsky’s ethics application.

Continue reading

Lewandowsky Ghost-wrote Conclusions of UWA Ethics Investigation into “Hoax”

Following the retraction of Lewandowsky’s Fury, the validity of University of Western Australia ethics “investigations” is again in the news. At present, we have negligible information on the University’s investigation into Fury, but we do have considerable (previously unanalysed) information on their earlier and illusory “investigation” into prior complaints about the ethics application for Moon Landing Hoax (“Hoax”).

This earlier “investigation” (recently cited at desmog here and Hot Whopper here) supposedly found that the issues that I had raised in October 2012 were “baseless” and that the research in Hoax was “conducted in compliance with all applicable ethical guidelines”.

However, these conclusions were not written by a university investigation or university official but by Lewandowsky himself and simply transferred to university letterhead by UWA Deputy Vice Chancellor Robyn Owens within minutes after Lewandowsky had sent her language that was acceptable to him.

In today’s post, I’ll set out a detailed chronology of these remarkable events. Continue reading
