For over a year, John Cook and the University of Queensland repeatedly refused Richard Tol’s requests for the rater IDs and timestamps for the SKS ratings in Cook et al 2013. Two recent events shed new light on the dispute. First, in mid-May, Brandon Shollenberger located the requested information online, without password protection, and placed it online a few days ago. The new information shows that the majority of ratings were done by coauthors, and nearly all ratings by coauthors and named acknowledgees, rather than by anonymous volunteers. Second, Simon Turnill received an FOI response from the University showing that the University had not made ANY confidentiality agreements with SKS raters. More surprisingly, Cook had carried out the SKS ratings program without submitting an ethics application or obtaining ethics approval for it. Previously, both Cook and the University of Queensland had made public statements referring to “ethical approval” and confidentiality agreements. Each of these statements is, at best, misleading, especially when parsed in the light of the new information, as Brandon has done.
I’ve re-drafted this post to better reflect the lede, now beginning with the new information and moving to parsing of the statements, rather than the opposite.
Majority of Ratings Done by Coauthors
As many readers are aware, Brandon Shollenberger recently located the SKS ratings data that Cook had placed online (at the aptly named website http://www.welloiledcatherd.org) without password protection. A few days ago, Brandon uploaded this data to an online mirror. Brandon also preserved the pages as they appeared to him at archive.org: the TCP Results page here and the ratings data here (to demonstrate that the information was not password protected, in case the University tried to argue otherwise, as SKS had done with their Nazi images).
The long-withheld information shows that the majority of ratings (54%) were done by coauthors, including Cook himself, with a further 34% done by the acknowledgees named in the acknowledgements to the paper, as shown in the pie chart below.
Figure 1. Pie Chart of SKS Ratings by Rater
Seven raters (Cook, Nuccitelli, Green, Richardson, Winckler, Painting and Skuce) are named as coauthors, while seven more (Jokimaki, Reitano, Honeycutt, Scadden, Tamblyn, Morrison and Coulter) are named in the Acknowledgements to the paper, where they are thanked for “rating abstracts”.
2783 of the 11,944 papers had more than two raters. In 83% of those cases, the final rating was given by one of the authors. Of the 9161 papers that were rated only twice (the two ratings agreeing in the Final), at least one author rated the paper in 82% of cases. In other words, only 14% of the papers were rated entirely by non-authors.
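The figures above can be cross-checked with a quick back-of-envelope calculation. This is a sketch using only the counts quoted in the text, not the underlying rating data; note that “final rating by an author” and “at least one author rater” are different conditions, so the 14% figure cannot be derived exactly from the percentages alone, but it can be bounded:

```python
# Back-of-envelope bounds on the share of papers rated entirely by
# non-authors, using only the counts quoted above (not the rating data).
total = 11944
multi = 2783   # papers with more than two raters
twice = 9161   # papers rated exactly twice
assert multi + twice == total

# Twice-rated papers with no author among the raters (18%) were certainly
# rated entirely by non-authors; multi-rated papers whose final rating was
# by a non-author (17%) may or may not have had an author as an earlier rater.
lower = 0.18 * twice / total                    # ~13.8%
upper = (0.18 * twice + 0.17 * multi) / total   # ~17.8%
print(f"between {lower:.1%} and {upper:.1%}")
```

The stated 14% sits at the lower end of this range, which is consistent: it implies that nearly all of the papers with more than two raters had at least one author among their raters.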
In my opinion, it is “of scientific value” (a term that will be discussed later) to know that coauthors were also raters and, indeed, did the majority of ratings; this information should have been reported in the original paper and disclosed to Tol at the time of his original request.
No Ethics Approval for SKS Ratings Program
Recently, under Queensland FOI, Simon Turnill of Australian Climate Madness requested copies of any confidentiality agreements, agreement on intellectual property and ethics applications and approvals regarding Cook et al 2013.
In response, the University produced NO confidentiality agreements, NO agreements with third parties on intellectual property and NO ethics application or approval for the SKS ratings program. Here are the FOI documents.
The documents include only an ethics application for the author self-rating program, but this application refers to the SKS ratings (of ~12,000 papers) as already having been carried out by parties described as “Team members”. There is nothing for the SKS ratings.
The only alternatives are that (1) the University withheld responsive documents, i.e. the ethics application for the SKS ratings program and confidentiality agreements with SKS raters; or (2) no such documents exist. The latter seems far more likely.
Parsing University Statements
Over the past year, both Cook and the University have made a variety of statements in which they’ve tried to connect their withholding of SKS ratings to obligations arising from ethics approval, while disguising the non-existence of ethics approval for the SKS ratings program. As too often, one has to watch the pea very closely. Brandon Shollenberger has done so and, while I do not necessarily agree with him on all points, the following exegesis reflects his comments.
In this post, I’ve not gone back to the University’s correspondence with Tol. This is an interesting topic on which I have work in hand that I’ll try to write up. Today, I’ll deal with the most recent statements by the University.
UQ Legal Threats
On May 15, shortly after Brandon announced that he was in possession of the withheld data, Jane Malloch, counsel to the University of Queensland, wrote a legal letter to Brandon, which, among other assertions, stated that the SKS data was property of the University of Queensland which had “contractual obligations to third parties” in connection with this property:
The intellectual property in the data set (the “IP”) you have in your possession is owned by The University of Queensland. The University of Queensland has contractual obligations to third parties regarding the IP. Any publication of the IP will expose the University to civil actions from third parties.
Indeed, it was this letter that prompted Simon Turnill’s FOI request. However, according to the documents produced under FOI, there were no confidentiality agreements between the University and third parties nor any agreements between the University and third parties (SKS raters) under which the University acquired the intellectual property. These claims by the University in the above paragraph appear to be completely without foundation.
Response to Tol by Cook, Lewandowsky and others
In their recent response to Tol’s published Comment (published online by the University of Queensland), Cook, Lewandowsky and others stated:
The release of privacy-protected identifying data discussed in T14 [Tol 2014] is unnecessary to replicate the C13 [Cook et al 2013] survey, and the data was withheld to protect the privacy of raters who were guaranteed anonymity.
Timestamps for the ratings were not collected, and the information would be irrelevant. Two timestamps would be needed for each rating: rating-started and rating-ended. Moreover, the time to complete an abstract rating is dependent upon several factors such as the length of the abstract, technical level of the abstract language, and interruptions occurring during the rating. Hence T14 is incorrect to state that this information (which does not exist) would shed further light on C13.
All data relating to C13 of any scientific value was published at http://sks.to/data in
2013… The only data withheld was information that might be used to identify the individual research participants. This protocol was in accordance with University ethical approval specifying that the identity of participants should remain confidential and was approved by the publisher.
First, datestamps are included in the data that Brandon located. If authors publish statements denying the existence of timestamp information without disclosing the existence of datestamp information, readers are entitled to have little confidence in anything they say without consulting a Philadelphia lawyer. Further, in a letter to an unidentified associate on July 30, 2013, Cook said:
ERL said I didn’t have to include time stamp info but I’m probably going to anyway, just to show Tol’s fatigue theory is all rubbish.
It seems odd that the system Cook used to collect datestamp information would not also have collected timestamp information (all ratings data were in chronological order, including many ratings from the same day). In August, Cook had been instructed by the UQ ethics officer to preserve all data pertaining to Cook et al 2013.
Second, as Brandon observes, the discussion of the release of ratings information is in two different paragraphs, separated by the discussion of timestamps, and the vocabulary in the two paragraphs is different.
In the earlier paragraph about SKS raters, there is no explicit reference to “ethics approval”, only an assertion that “data was withheld to protect the privacy of raters who were guaranteed anonymity.” Precisely what form (if any) those “guarantees” took remains unknown. Nor is it known who made the guarantees or on what basis. It’s hard to understand how a University could “guarantee” anonymity to coauthors: the idea is absurd.
According to Brandon’s exegesis, Cook took the position that rater ID information on SKS raters was “of no scientific value”, whereas the rater ID information on author self-ratings was “of scientific value” but withheld under different reasoning: because of the ethics approval relating to the author self-rating program.
All data relating to C13 of any scientific value was published at http://sks.to/data in 2013… The only data withheld was information that might be used to identify the individual research participants. This protocol was in accordance with University ethical approval specifying that the identity of participants should remain confidential and was approved by the publisher.
Brandon (not justifying but trying to get inside the mind of Cook and Lewandowsky) argues that one is left with a dispute over what is “of scientific value” – the sort of dispute that goes on all the time – but that the statements are not untrue on their face when narrowly parsed, even if the overall effect is misleading.
In today’s note, I won’t review the prior correspondence with Tol. However, it seems to me that University administrators did not recognize the difference between the ethics application situation for the author self-rating program (where there was one) and the SKS ratings program (where there wasn’t), and that Cook allowed University officials to persist in this misunderstanding. When SKS rater IDs were discussed, the ethics application for author self-ratings would be pointed to, tricking the unwary.
But Cook is walking a tightrope here and it’s hard to keep everything straight. In the above text, obvious questions arise about who guaranteed anonymity to the SKS raters and on what authority. Problems also arise when University officials, not fully cognizant of the trick, make public statements, as I’ll discuss next.
The UQ Press Release
In May 2014, the University of Queensland issued a press release with the following language:
All data relating to the “Quantifying the Consensus on Anthropogenic Global Warming in the Scientific Literature” paper that are of any scientific value were published on the website Skepticalscience.com in 2013. Only information that might be used to identify the individual research participants was withheld. This was in accordance with University ethical approval specifying that the identity of participants should remain confidential.
This language tracks the second paragraph of the statement by Cook and Lewandowsky, the one discussing the author self-rating program, but omits any mention of the claim that “data was withheld to protect the privacy of raters who were guaranteed anonymity”. Clearly the University press officer did not realize that Cook and Lewandowsky were walking a tightrope, and the net result is that this language is untrue in respect of the SKS raters.
The larger issue is, of course, a contradiction not faced by “climate communications” theorists (e.g. Dan Kahan): they are blind to the corrosive effect that misleading or deceptive statements by climate scientists and their supporters on verifiable matters (as in FOI disputes) have on their expectation of being trusted on larger issues.
Nor is it easy to understand the purpose of some of these machinations. As I’ve said before, I took zero interest in Cook’s study (or in “skeptic” protests against it) as it seems evident to me that there is a “consensus” of climate scientists on many points. I believe that the strength of the “consensus” varies by proposition and that too often climate promoters will bait-and-switch from consensus on something relatively uncontroversial (e.g. GHG having some impact) to green solution fantasies, but that is a different story.
Nor do I think that there is some smoking gun in the rater ID data. So it’s hard to understand why Cook made such an issue of it. But we’ve seen very odd conduct from climate scientists before: think of Cook and Lewandowsky on the SKS link, Jones on non-existent confidentiality agreements on data, Mann on Excel spreadsheets, and so on. On matters that can be understood and verified by non-climate scientists, we’ve seen bizarre behaviour from prominent people in the field.
In drafting this post, I chatted briefly with Lucia about this seeming blindness. Lucia wrote (in her usual forceful style):
Yep. I don’t see how people can’t see that if UQ lies and climate scientists just seem to think that’s ok, then the public will see the climate scientists as likely to be lying on other things. We are seeing tons and tons and tons of “how to communicate” documents, but none seem to point out the obvious: We need to stop being caught lying. Oh… here’s a strategy to stop being caught: Don’t lie in the first place!
Both Cook and Lewandowsky were, of course, involved in a previous incident also involving lying (see here), a conclusion which Tom Curtis of SKS also reached with respect to Lewandowsky (see here), though not with respect to Cook; in my opinion, the evidence against Cook is overwhelming.