One of the most bizarre conclusions of D.C. Judge Combs-Greene was her finding that it was actionable to “question [Mann’s] intellect and reasoning” and that calling his work “intellectually bogus” was “tantamount to an accusation of fraud”. These absurd findings are all the more remarkable because, as National Review pointed out in its written brief, Harry Edwards, then Chief Judge of the U.S. Court of Appeals for the D.C. Circuit, used exactly the same term (“bogus”) in an academic article that severely criticized a statistical analysis of the D.C. Circuit. Edwards’ article not only questioned his opponent’s “reasoning” but, in effect, accused his opponent of data manipulation, an accusation that his opponent, Richard Revesz, a prominent law professor, sharply disputed.
Edwards, who had written the opinion in Moldea II, a leading D.C. Circuit case cited frequently in the pleadings of Mann v National Review et al, clearly did not think that his language was libelous under D.C. law. Nor, apparently, did Revesz, who concluded that the appropriate response was rebuttal rather than libel litigation.
Ironically, the article to which Edwards was responding claimed that decisions of Edwards’ court, the U.S. Court of Appeals for the D.C. Circuit (h/t Mark Kantor for correction from D.C. C.A.), were influenced by political ideology. Edwards, a strong advocate of collegiality, whose ambitions for D.C. jurisprudence reached beyond its being the sort of home field for NGOs and environmentalists that Alabama had provided to segregationists in the days of NYT v Sullivan, contested the statistical analysis of his opponents, with some of his statistical arguments being familiar to CA readers (though in somewhat different terminology).
Lowry’s Editorial and CEI Link
Before reviewing the exchange between Edwards and Revesz, I’ll briefly review Lowry’s use of the word “bogus” and how it was challenged in the pleadings. Lowry’s editorial asserted that the word “fraudulent” in contemporary U.S. political controversy did not allege “criminal fraud”, but merely meant “intellectually bogus” as follows:
In common polemical usage, “fraudulent” doesn’t mean honest-to-goodness criminal fraud. It means intellectually bogus and wrong.
Despite the explicit disassociation of the use of the term from allegations of criminal activity, Mann claimed that the term “intellectually bogus” did impute “commission of a criminal offense” and included this incident as a separate count of libel per se in his Complaint as follows:
Mr. Lowry’s statement, published by NRI on National Review Online, calling Dr. Mann’s research “intellectually bogus” is defamatory per se and tends to injure Dr. Mann in his profession because it falsely imputes to Dr. Mann academic corruption, fraud and deceit as well as the commission of a criminal offense, in a manner injurious to the reputation and esteem of Dr. Mann professionally, locally, nationally, and globally.
In addition, Mann claimed that the hyperlink to Lowry’s editorial from CEI’s press release was a further count of libel per se, also imputing “academic corruption, fraud and deceit as well as the commission of a criminal offense”.
Whatever one may think of the underlying claims in connection with the Steyn and Simberg articles, these additional claims seemed especially farcical, and it seemed impossible that any court would allow them to proceed.
Nonetheless, Combs-Greene J, temporarily managing not to reverse the roles of CEI and National Review, found that one meaning of the word “bogus” in an online dictionary was “sham”, and that any critique that had the temerity to “question” Mann’s “intellect and reasoning” was “tantamount to an accusation of fraud” and was based on “provably false facts”, thereby making such a critique actionable:
It is obvious that “intellectually bankrupt” refers to a lack of sense or intellect but the same may not be said for “intellectually bogus.” The definition of “bogus” in the Merriam-Webster online dictionary, inter alia, is “not genuine . . . sham.” BOGUS, MERRIAM-WEBSTER: ONLINE DICTIONARY AND THESAURUS, http://www.merriam-webster.com/dictionary/bogus. In Plaintiff’s line of work, such an accusation is serious. To call his work a sham or to question his intellect and reasoning is tantamount to an accusation of fraud (taken in the context and knowing that Plaintiff’s work has been investigated and substantiated on numerous occasions). The Court must, at this stage, find the evidence indicates that the NR Defendants’ statements are not pure opinion but statements based on provably false facts.
Revesz 1997 and Edwards 1998 and Revesz 1999
Turning now to the exchange between Revesz and Edwards: as noted above, National Review’s written brief cites Edwards’ article, but it seems to me that the exchange is worth parsing at much greater length since, in addition to the actionability of the word “bogus”, it raises issues about whether statements about data manipulation and even data torture are “fact” or “opinion”.
In 1997, Revesz published an article in the Virginia Law Review (link), concluding that ideology significantly influenced judicial decision-making on the Court of Appeals for the D.C. Circuit; that ideological voting was more prevalent in cases that are less likely to be reviewed by the Supreme Court; and that a judge’s vote is greatly affected by the party affiliation of the other judges on the panel: in particular, whether there is another panelist with the same party affiliation. For the purposes of today’s post, I take no position on whether Revesz’ conclusions were warranted and ask readers not to editorialize on the politics of D.C. judges, as my interest is limited to the form of Edwards’ response to Revesz.
The following year (1998), Edwards, then Chief Judge of the US Court of Appeals for the D.C. Circuit, published a response in the Virginia Law Review, sharply criticizing Revesz’ article. Edwards’ article attracted wide attention and, to date, has been cited 221 times.
One of Edwards’ major criticisms was that Revesz had “winnowed” data to find some supposedly “statistically significant” result without properly weighing the winnowing process itself – a form of criticism that is closely related to common Climate Audit criticism of Mannian and like methodology. Edwards wrote:
The first observation to be made regarding Revesz’s methodology is that Revesz found no statistically significant result in many of the time periods and circumstances studied. Where he reached this outcome, however, he was never deterred: instead he turned to other configurations of data, apparently in the hopes of producing more interesting results. In other words, this was hardly an experimental methodology designed to discover and report whatever outcome emerged; instead it represented a winnowing process in which progressively fewer and fewer of the data were considered as the author set his sights on finding some set of data that might produce notable consequences. By the final multivariate analysis of the second era, which he claims produced a significant result, Revesz was studying just one of the six slices of data that covered the period. Within that slice he was analyzing only the petitions for review brought by industry groups and within that slice only so-called procedural challenges to EPA action and not statutory claims.
This data selection process is methodologically strange at best. The result of this winnowing is that we must ask why allegedly significant results were found only for these comparatively narrow circumstances and what significance, if any, we should give to the other findings that did not produce such significance…..[36 – see Light and Pillemer, note 7 at 28-29, citing the well-known statistical caveat that if a single study examines dozens of separate relationships, each at the .05 level of significance, it is not surprising to see one in twenty of those relationships turn up significant even though that finding is spurious]
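The statistical caveat in Edwards’ footnote – that a study testing dozens of separate relationships at the .05 level should expect roughly one in twenty to come up “significant” by chance alone – is easy to demonstrate with a short simulation. (A toy sketch in Python: the “relationships” below are pure coin-flip noise and have nothing to do with Revesz’ actual dataset.)

```python
import math
import random

random.seed(42)

def z_stat(flips):
    """z-statistic for H0: p = 0.5, given a list of 0/1 outcomes."""
    n = len(flips)
    p_hat = sum(flips) / n
    return (p_hat - 0.5) / math.sqrt(0.25 / n)

def count_spurious(n_tests=20, n_obs=500, z_crit=1.96):
    """Test 20 independent pure-noise 'relationships' at the .05 level;
    return how many come up 'significant' even though H0 is true for all."""
    hits = 0
    for _ in range(n_tests):
        flips = [random.randint(0, 1) for _ in range(n_obs)]
        if abs(z_stat(flips)) > z_crit:
            hits += 1
    return hits

# Repeat the whole 20-test study many times: on average, about one of the
# twenty noise relationships per study turns up "significant".
studies = [count_spurious() for _ in range(200)]
avg_hits = sum(studies) / len(studies)
print(f"average spurious 'significant' findings per 20 tests: {avg_hits:.2f}")
```

The average lands near one per study, which is exactly the Light and Pillemer point quoted by Edwards: a lone “significant” slice found after winnowing many candidate slices carries little evidential weight on its own.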
Edwards also made numerous caustic comments that questioned Revesz’ reasoning, describing the article as “biased” and containing “some shockingly ill-reasoned” statements, “untidy arguments” and “heedless observations”. Edwards also described key arguments of Cross and Tiller, a similar critique, as based on “sheer speculation”, “without convincing support” in their data and “absurd”.
Edwards asserted that both Revesz and Cross and Tiller had a “tendency to push the data into bogus interpretations” – the very word that Mann later claimed to be libel per se:
Revesz is not alone in this tendency to push the data into bogus interpretations. Cross and Tiller do much the same thing when they describe their finding as one of “whistle blowing”…
Edwards even accused Revesz of knowingly presenting false results as follows:
This spurious parading of correlation as causation is highly questionable scholarship. Revesz knows perfectly well that he has not done anything at all to show that the likelihood of Supreme Court review accounts for the differential he has described…
Unsurprisingly, Revesz disagreed across the board with Edwards’ assertions and published a rebuttal in 1999 contesting each of Edwards’ claims. In this rebuttal, Revesz denied that he had “manipulated” the methodology – “manipulation” being a word also in dispute in Mann v CEI:
Chief Judge Edwards complains that, in order to make the ideological divisions look more pronounced, I manipulated the experimental methodology, selectively presented the multivariate results, and focused only on procedural challenges.53…
Chief Judge Edwards makes the strong and wholly unfounded accusation that I manipulated the experimental methodology in order to find that ideology affects judicial votes.
Revesz also objected to Edwards’ rhetoric, including his use of the word “bogus” (though he did not take special exception to that particular term):
Throughout his essay, Chief Judge Edwards resorts to the use of strident and tendentious rhetoric. For example, he refers to my article as a “so-called ‘empirical stud[y],’”212 consisting of “heedless observations”213 and containing “heedless language.”214 My “method,” he says, consists of “framing the most cynical hypothesis in pursuit of a significant-sounding formulation.”215 He attributes to me an “evasive impulse,”216 and says that my work exhibits a “tendency to push the data into bogus interpretations.”217 Most troublingly, as discussed earlier, he cavalierly and without any foundation accuses me of manipulating the experimental methodology.218
Despite this inflammatory style, which is not the norm in academic discourse of this kind,219 each of the numerous objections that Chief Judge Edwards levels against my work is without merit.
It appears that Revesz never thought of suing Edwards for libel per se. Possibly Revesz was intimidated by Edwards’ stature, but more likely it never occurred to Revesz (or Edwards) that Edwards’ words were outside the bounds of public dispute (as set, for example, in Moldea II, the opinion of which had been written by Edwards); or that the allegations were “facts” in the sense of Milkovich; or were libel per se (prejudicial in a pecuniary sense to a person in office or to a person engaged in a profession or trade as a livelihood). Whatever the reason, Revesz, to borrow Lucia’s phrase, put on his big boy pants and responded to Edwards’ allegations with a reasoned response – a course of action frequently recommended by judges to losing libel plaintiffs, including judges in some of Mann’s references (e.g. Schatz v Republican Leadership Committee), by Mikva in the dissent to Moldea I and, eloquently, by Edwards himself in Moldea II.
The details of Edwards’ criticism are also worth examining, since Edwards criticized Revesz for procedures that would commonly be called “data torture” today.
The epigram “If you torture the data long enough, it will confess” originated with Ronald Coase, a prominent economist (see link), some time before Edwards’ article. Googling the phrase turns up dozens, if not hundreds, of uses, especially when one wants to disapprove of statistical procedures without imputing fraud. The term has been regularly used with this lesser meaning on skeptic climate blogs, e.g. Judy Curry’s post “On Torturing Data“.
There have been several academic attempts to formalize the idea behind Coase’s epigram. For example, Foster and Huber (1999) state:
“Data torturing” as defined by Mills (1993) is the manipulation of data or post hoc interpretation of a study to reveal interesting features. The rubric covers a range of practices that fall within the broad gray zone between ethical and unethical scientific practices. The common thread is an investigator’s tendency to focus on interesting aspects of data and often to arrange results to highlight some features while downplaying or disregarding others.
Foster and Huber’s characterization of data torture as falling into a “broad gray zone between ethical and unethical scientific practices” seems reasonable enough to me: most data torture would be regarded as bad statistical practice, but not academic misconduct, let alone fraud. In many, if not most cases, the bad practices occur unintentionally and/or through poor statistical training.
More recently, Wagenmakers (2011, 2012) has written on the topic, including the following:
In this article, the authors state clearly what many researchers already know: using creative outlier-rejection, selective reporting, post-hoc theorizing, and optional stopping, researchers can very likely obtain a significant result even if the null hypothesis is exactly true. Or, in other words, if you set out to torture the data until they confess, you will more likely than not obtain some sort of confession – even if the data are perfectly innocent.
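Wagenmakers’ point about “optional stopping” – testing repeatedly as the data accumulate and stopping the moment a test reaches significance – can likewise be sketched in a few lines (again a hypothetical toy simulation, not drawn from any of the papers under discussion). Even though the null hypothesis is exactly true, the chance of extracting some “significant” result is far above the nominal 5%:

```python
import math
import random

random.seed(1)

def tortured_to_confession(max_n=1000, check_every=20, z_crit=1.96):
    """Flip a fair coin (H0: p = 0.5 is exactly true), running a z-test
    after every batch of 20 flips. Return True if the experimenter, free
    to stop whenever the test looks significant, ever gets a 'confession'."""
    heads = 0
    for n in range(1, max_n + 1):
        heads += random.randint(0, 1)
        if n % check_every == 0:
            z = (heads / n - 0.5) / math.sqrt(0.25 / n)
            if abs(z) > z_crit:
                return True  # stop here and report a "significant" result
    return False

trials = 2000
false_pos = sum(tortured_to_confession() for _ in range(trials)) / trials
print(f"false-positive rate with optional stopping: {false_pos:.2%}")
```

With fifty interim looks at the data, the realized error rate comes out several times the nominal 5%, which is Wagenmakers’ point: the .05 guarantee holds only if the sample size (and the analysis) is fixed in advance.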
In my opinion, the “winnowing” procedure as characterized by Edwards (and I haven’t parsed the data to opine on the validity of the criticism) falls well within the loose criteria of “data torture” as understood by Wagenmakers and others. While the criteria are not clearly defined in the literature, in my opinion, the term denotes disapproval of certain statistical procedures but without imputing fraud or even misconduct (though they could additionally be present). Because there is no settled criterion of what is and isn’t data torture, classifying a procedure as “data torture” is a matter of judgement and, in many cases, reasonable people could disagree on the classification of a given procedure. In my opinion, some key procedures both in Mann’s work and the work of other paleoclimatologists meet Wagenmakers’ definition of “data torture” and, indeed, this is one of the major longstanding elements of the Climate Audit critique.
In a notorious Climategate email, Mann told the editor of a leading journal “Better that nothing appear, than something unacceptable to us”, so it is possible that Mann has come to view Combs-Greene’s finding that “questioning [Mann’s] intellect and reasoning” is actionable in law as merely long overdue recognition. But ironically, Mann himself didn’t even ask for a finding of this breadth. Ross and I have been perhaps Mann’s severest critics, yet Mann observed in his pleadings that we hadn’t accused him of “fraud”. Thus, even within the four corners of his pleadings, Mann recognized that “questioning” his reasoning was not, in itself, actionable at law, even in D.C. Harry Edwards must be disappointed that D.C. jurisprudence, for which he had such pride and ambition, has descended into the sort of feeble practice shown here by Combs-Greene.
In addition, I think that the exchange between Revesz and Edwards nicely illustrates the difficulty in reducing statistical controversies to disputes over facts. Whether Revesz’ calculations constituted data manipulation or even data torture is a matter of judgement (judgement seeming to me to be a more appropriate word than “opinion”). Even if one ultimately concluded that the statistical procedures used by one of Revesz or Edwards were “right” relative to the procedures proposed by the other, this remains a garden variety academic dispute that no Court ought to touch with a bargepole.
That the last 50 years or so of the Briffa reconstruction in the IPCC 2001 diagram under Mann’s lead authorship were deleted is a matter of fact: Gavin Schmidt and Richard Muller would agree on that. To date, no investigation (to my knowledge and I’ve examined all their reports closely) has considered whether this omission of data was falsification under academic codes of conduct. Deciding whether this truncation was data manipulation, data torture, falsification or academic misconduct seems to me a matter of judgement, rather than an objectively verifiable fact (as Williams argued.)
Ironically, if Mann sets a precedent with his claim on the use of “bogus” and his claim on hyperlinks, then perhaps Revesz should reconsider his situation. What about the statute of limitations, you ask? Well, Edwards’ article was cited in 2014 in Malouf et al 2014 (link), and this fresh citation is not statute-barred. In the world after Mann, the young coauthors Fatima Malouf, William Kagan and William Boyd would all be exposed to liability merely for citing an article that is actionable post-Mann.
Revesz, R., 1997. Environmental Regulation, Ideology, and the D.C. Circuit. Va. L. Rev. 1717–72. link
Edwards, H., 1998. Collegiality and Decision Making on the D.C. Circuit. 84 Va. L. Rev. 1335. link
Revesz, R., 1999. Ideology, Collegiality, and the D.C. Circuit: A Reply to Chief Judge Harry T. Edwards. link
Foster, K.R. and Huber, P.W., 1999. Judging Science: Scientific Knowledge and the Federal Courts. online
Mills, J., 1993. Data Torturing. New England Journal of Medicine. pdf
Wagenmakers, E., 2012. A Year of Horrors. pdf