Lost in the recent controversy over Said et al 2008 is that the Climategate documents provided conclusive evidence of the hypothesis originally advanced in the Wegman Report about paleoclimate peer review – that members of the Mann “clique” had been “reviewing other members of the same clique”.
In today’s post, I’ll examine the origin of this hypothesis in the Wegman Report, its consideration in Said et al 2008 and how Climategate documents provided the supporting evidence that neither the Wegman Report nor Said et al 2008 had been able to provide.
I won’t attempt to analyze the plagiarism issues today (I will return to this on another occasion), other than to say that some recent literature on the topic attempts to distinguish between degrees of plagiarism e.g. Bouville, Clarke and Loui.
In addition, contrary to recent false claims by USA Today, Said et al 2008 was not “a federally funded study that condemned scientific support for global warming”. It does not mention global warming nor even climate. Nor is Said et al 2008 a “cornerstone” of criticisms of either Mann or IPCC as Joe Romm falsely claimed. For example, it has never been referred to or discussed at Climate Audit even in comments. Nor at any other climate blog, to my knowledge. (Update May 24 – Nor is the “cornerstone” included on PopTech’s list of 900+ “skeptic” papers.)
Wegman Report 2006
In 2005 and 2006, there had been considerable controversy over our criticism of Mann et al 1998-99. Although we had placed at least equal (and perhaps greater) weight on other aspects of our critique (e.g. that the important early steps of the Mann reconstruction did not have the claimed ‘statistical skill’ and that the Mann reconstruction unduly weighted Graybill’s bristlecone chronologies, which were known to be problematic), particular controversy attached to our observation that Mannian principal components mined datasets for hockey stick shaped data.
Rather than conceding even seemingly indisputable points, Mann and his associates contested every single issue, including the elementary observation that Mannian principal components mined datasets for hockey stick shaped data. To this date, neither Mann nor any of his associates has conceded the point.
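For readers who have not followed the technical dispute, the mining effect can be reproduced in miniature. Below is a minimal sketch (a toy simulation with invented red-noise parameters, not MBH’s actual proxy network or algorithm): principal components centred on a short late “calibration” sub-period, rather than on the full period, preferentially load on series that happen to drift during that sub-period, producing a hockey-stick-shaped PC1 even from trendless noise.

```python
# Toy sketch of short-centred ("Mannian") principal components mining
# red noise for hockey sticks. All parameters are invented; this is an
# illustration of the tendency, not a reproduction of MBH98/99.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, cal = 70, 581, 79   # loosely evoking 1400-1980 with a 79-year calibration period
phi = 0.9                              # AR(1) persistence of the "proxies"

# Trendless persistent noise: no climate signal whatsoever.
eps = rng.normal(size=(n_years, n_series))
X = np.zeros_like(eps)
for t in range(1, n_years):
    X[t] = phi * X[t - 1] + eps[t]

# Conventional PCA centres each series on its full-period mean;
# the short-centred variant subtracts only the calibration-period mean.
X_full = X - X.mean(axis=0)
X_short = X - X[-cal:].mean(axis=0)

for label, M in [("full-centred", X_full), ("short-centred", X_short)]:
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    pc1 = U[:, 0] * s[0]
    # A hockey stick shows up as a calibration-period mean far from the
    # series' overall mean; short-centring inflates this on most seeds.
    print(label, round(abs(pc1[-cal:].mean() - pc1.mean()), 2))
```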
As we and others observed at the time, other reconstructions do not use Mannian principal components and demonstrating defects in the MBH reconstruction does not per se demonstrate the invalidity of the other methodologies. However, as discussed in extensive commentary at Climate Audit, there are serious defects in respect to the “other” reconstructions as well and none provide a “safe haven”. There are also serious defects with the variations on MBH methodology proposed in Mann’s response to our 2004 submission to Nature, variations that were subsequently plagiarized by Mann’s associates in Wahl and Ammann 2007.
When the Wegman Report came out (in July 2006), it observed that the dispute about Mannian methodology had already been waged for what seemed an interminable time, with the climate science community failing to resolve even the most elementary questions, such as the validity of Mannian principal components methodology. In light of this failure, the Wegman Report observed that it was timely for a third party to consider the matter. Wegman came down entirely on our side of the question of whether Mannian principal components mined for hockey-stick shaped data – an issue well within the professional expertise of the members of the Wegman panel. The NAS panel also came down entirely on our side of this particular issue, though you’d never know it from the fantasizing commentary of Gerry North and other climate scientists. At the time, Eduardo Zorita observed that the NAS panel had been as severe as possible under the circumstances then prevailing. One member of the NAS panel (not Christy) told me, on condition that I not reveal his identity, that we had effectively killed the enterprise of trying to reconstruct temperatures from lousy data and that it would take brand new data to resolve the questions – something that he thought might take 20 years.
After agreeing that Mannian principal components was an objectively flawed methodology, the two panels went in different directions.
The NAS panel attempted to opine on the scientific question of the 1000-year reconstructions, but, unfortunately, attempted to do so without doing any due diligence on other candidate reconstructions. As Gerry North subsequently told a Texas A&M seminar, they just “winged it”. They observed that reconstructions by other academics also yielded hockey stick shaped reconstructions – a point never at issue. However, they did not check whether these other reconstructions used strip bark bristlecones, proxies which the NAS panel said should be “avoided” in temperature reconstructions, and ended up illustrating reconstructions using strip bark bristlecones in their own summary.
After the Wegman Report (like the NAS panel) determined that Mannian principal components was an erroneous methodology, the Wegman Report considered a different question: given the defects of Mannian principal components, how did the methodology pass peer review and then remain unchallenged by specialists in the field? Wegman’s question pertained only to statistical methodology. Whether you could “get” a similar answer by a different method had no bearing on the failure of specialists to call an invalid statistical methodology to account.
The Wegman Report hypothesized that this failure was due to the inter-connectedness of climate scientists through co-authorship and, in particular, to the extent of Mann’s network of coauthorship – a level of inter-connectedness that the Wegman Report seemed to regard as absent from their own field. Wegman speculated that members of Mann’s closest circle (“clique” in network terminology) reviewed papers of other members of the clique, resulting in non-independent and weak peer review, which, in turn, had resulted in the failure to identify the incorrectness of Mannian principal components in both the original article and subsequently. This was expressed in the 2006 Wegman Report as follows:
One of the interesting questions associated with the ‘hockey stick controversy’ are the relationships among the authors and consequently how confident one can be in the peer review process. In particular, if there is a tight relationship among the authors and there are not a large number of individuals engaged in a particular topic area, then one may suspect that the peer review process does not fully vet papers before they are published. Indeed, a common practice among associate editors for scholarly journals is to look in the list of references for a submitted paper to see who else is writing in a given area and thus who might legitimately be called on to provide knowledgeable peer review. Of course, if a given discipline area is small and the authors in the area are tightly coupled, then this process is likely to turn up very sympathetic referees. These referees may have coauthored other papers with a given author. They may believe they know that author’s other writings well enough that errors can continue to propagate and indeed be reinforced.
From their analysis of Mann’s co-authorship network, the Wegman Report stated:
it is immediately clear that the Mann, Rutherford, Jones, Osborn, Briffa, Bradley and Hughes form a clique, each interacting with all of the others.
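For those unfamiliar with the terminology, a “clique” is a set of nodes in which every pair is directly tied. A minimal sketch in Python (using the networkx library, with a toy edge list standing in for the Wegman Report’s actual co-authorship matrix) shows how fully-connected groups of this kind are identified mechanically:

```python
# Toy illustration of clique detection in a co-authorship network.
# The edge list is invented for illustration: an edge joins two authors
# who have written at least one paper together.
from itertools import combinations
import networkx as nx

core = ["Mann", "Rutherford", "Jones", "Osborn", "Briffa", "Bradley", "Hughes"]
G = nx.Graph()
G.add_edges_from(combinations(core, 2))  # each of the seven tied to all the others
G.add_edge("Mann", "SomeOtherCoauthor")  # a tie reaching outside the group

# find_cliques() enumerates the maximal fully-connected groups.
for clique in nx.find_cliques(G):
    if len(clique) >= 3:
        print(sorted(clique))
```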
In follow-up questions, Wegman was pointedly asked whether social networks analysis “proved” his “hypothesis” about peer review (an issue that also arose in recent commentary about Said et al 2008):
You stated in your testimony that the social networking analysis that you did concerning Dr. Mann and his co-authors represented a “hypothesis” about the relationships of paleoclimatologists. You said that the “tight relationship” among the authors could lead one to “suspect that the peer review process does not fully vet papers before they are published.” Please describe what steps you took that proved or disproved this hypothesis.
Wegman answered that the anonymity of peer review prevented him from showing that members of Mann’s clique had reviewed one another’s papers, observing somewhat pointedly that the Committee had missed an ideal opportunity to shed light on this question:
Obviously because peer review is typically anonymous, we cannot prove or disprove the fact that there are reviewers in one clique that are reviewing other members of the same clique. However, the subcommittee did miss the opportunity to ask that question during the testimony, a question I surely would have asked if I were in a position to ask questions.[my bold]
Here, I cannot help but make the same point about the Muir Russell review, which, despite far more time, attention and expenditure, also failed to ask the same question of the three CRU members of the clique (Jones, Briffa, Osborn).
The Wegman Report’s use of social network methods to examine intra-clique peer reviewing occasioned only limited response at the time, even in the climate blogs. Among blogs for the faithful, it was briefly considered by Realclimate and Crooked Timber, both of whom sneered at the methodology as yielding nothing more than tautologies.
At Climate Audit, Wegman’s speculation about social networks attracted even less attention. I didn’t post at the time on social networks. To my knowledge, I only commented on the topic in passing twice (both much afterwards), each time being mildly critical, observing that both Jones and Bradley seemed far more central in the network as of 1998 than Mann. (Mashey recently criticized Said et al 2008 for using data up to 2006 rather than data up to 1998.) The topic was discussed by CA readers in one 2006 thread, but I myself didn’t participate in the discussion. Afterwards, the topic more or less disappeared from view here. Said et al 2008 was never mentioned at CA either in a post or even in a reader comment and, until yesterday, I’d never even read Said et al 2008. I don’t think that I even knew of its existence.
Said et al 2008
In the following comments, I’m going to discuss the “substantive” sections of Said et al 2008, reserving the discussion of section 1 for another occasion. Said et al 2008 explicitly linked back to the brief controversy arising from the social network analysis of the Wegman Report. They extended that analysis to a typology of co-authorship styles: “solo, entrepreneurial, mentor, and laboratory”:
Co-authorship establishes a linkage or tie between two individuals. These linkages can be examined as a social network and patterns exhibited in the social network of an individual and his co-authors can shed considerable light on how an author works and deals with his colleagues. Using the block model analysis outlined in the previous section, we can cluster the set of co-authors….
Wegman et al. (2006) undertook a social network analysis of a segment of the paleoclimate research community. This analysis met with considerable criticism in some circles, but it did clearly point out a style of co-authorship that led to intriguing speculation about implications of peer review. Based on this analysis and the concomitant criticism, we undertook to examine a number of author–coauthor networks in order to see if there are other styles of authorship. Based on our analysis we identify four basic styles of co-authorship, which we label, respectively, solo, entrepreneurial, mentor, and laboratory. The individuals we have chosen to represent the styles of co-authorship all have outstanding reputations as publishing scholars. Because of potential for awkwardness in social relationships, we do not identify any of the individuals or their co-authors.
Said et al 2008 published block model diagrams for four authors supposedly representing each of these styles. Although they did not identify the authors or their cliques, precursors for two of the figures can be readily identified from the Wegman Report. The block model (their Figure 1) used to represent the “entrepreneurial” coauthorship pattern is the figure for Mann from the Wegman Report (with names removed). The names removed from Mann’s “clique” (in the top left corner) are Jones, Briffa, Osborn, Hughes, Bradley and Rutherford – all names familiar to CA readers. Similarly, the block model that supposedly represents the “mentor” coauthorship pattern is the figure for Wegman himself, taken from the Follow-up Questions to the Wegman Report.
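Said et al’s block model figures are, in effect, egocentric views: one central author, his co-authors, and the ties among those co-authors, grouped into blocks. As a crude stand-in for their block-model clustering (which I have not attempted to reproduce), the following sketch with invented ties extracts an egonet and examines how the co-authors interlock:

```python
# Crude sketch (invented ties) of the egocentric structure behind the
# block model figures: a central "entrepreneurial" author A with six
# co-authors, three of whom also publish together.
import networkx as nx

G = nx.Graph()
G.add_edges_from(("A", x) for x in "BCDEFG")            # A's co-authorship ties
G.add_edges_from([("B", "C"), ("B", "D"), ("C", "D")])  # one tight sub-group

ego = nx.ego_graph(G, "A")     # the egonet: A plus all of A's co-authors
coauthors = ego.copy()
coauthors.remove_node("A")     # keep only the ties among the co-authors

print(nx.density(coauthors))                        # how tightly they interlock
print(list(nx.connected_components(coauthors)))     # {B,C,D} vs the solo co-authors
```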
After presenting social network analyses of these coauthorship styles, Said et al 2008 observed that the premise of the peer review system was the existence of “independent, unbiased, and knowledgeable referees”:
Wegman et al. (2006) suggested that the entrepreneurial style could potentially lead to peer review abuse. Many took umbrage at this suggestion. Nonetheless, there is some merit to this idea. Peer review is usually regarded as a gold standard for scientific publication. Clearly it is desirable that the peer reviewer have three important traits: independent, unbiased, and knowledgeable in the field. As any hard-working editor or associate editor knows, finding independent, unbiased, and knowledgeable referees for a paper or proposal is a difficult chore. This is especially true in a rather narrow field where there are not many experts so that issues of independence arise quickly. Clearly as a field becomes increasingly specialized, there are not as many independent experts. Thus finding someone who is both independent and knowledgeable is difficult. In the past, when many more authors adopted a solo style of authorship, finding someone who was not a co-author was relatively easy. Nonetheless, the issue of unbiasedness still was an issue.
Said et al argued that the overlapping co-authorships arising from “entrepreneurial” co-authorship, in effect, drained the pool of potential “independent, unbiased and knowledgeable” referees in a small discipline:
The social network analysis of an entrepreneurial style suggests the following. There are many tightly coupled groups working closely together in a relatively narrow field. It is clear that closely coupled groups have a common perspective. Thus it is very hard to find a referee that is both knowledgeable and independent. Because of the common perspective, in addition it is very hard to find an unbiased referee. Thus this style of co-authorship makes it more likely that peer review will be compromised. One mechanism for selecting referees is to look at papers referenced by the paper in question. This possibility means that a naive associate editor might actually pick someone from the social network of co-authors, who is not obviously a co-author. Indeed, the paleoclimate discussion in Wegman et al. (2006), while showing no hard evidence, does suggest that the papers were refereed with a positive, less-than-critical bias.
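The referee-selection hazard described in this passage is mechanical enough to simulate. In the sketch below (all names and ties invented), a candidate referee drawn from a submission’s reference list is classified by whether he is a direct co-author, merely inside the author’s wider network, or genuinely outside it:

```python
# Sketch (invented data) of the naive referee-selection hazard: picking
# referees from the reference list can land on someone who is not an
# obvious co-author yet still sits inside the submitting author's network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("Author", "X"), ("Author", "Y"), ("X", "Z"), ("Y", "Z")])

cited = {"Z", "W"}                       # authors appearing in the reference list
coauthors = set(G.neighbors("Author"))   # direct co-authors: X and Y
network = set(nx.ego_graph(G, "Author", radius=2)) - {"Author"}

for referee in sorted(cited):
    if referee in coauthors:
        status = "direct co-author (obviously conflicted)"
    elif referee in network:
        status = "not a co-author, but inside the network"
    else:
        status = "outside the network"
    print(referee, "->", status)
```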
As Wegman had done previously in the follow-up answers to the Wegman Report, Said et al 2008 conceded that, given the anonymity of peer review, it was impossible to be more than “suggestive” on this point:
Of course because referees are not identified, getting hard evidence of independence, unbiasedness and knowledgeable expertise is not readily available. The social network analysis can therefore only be suggestive. It is our contention, however, that safeguards such as double blind refereeing and not identifying referees invariably lead to the conclusion that peer review is at best an imperfect system. Anyone with a long history of publication in their heart of hearts knows that they have benefited or have been penalized, probably both, by imperfect peer review
The final sentence of this paragraph is particularly ironic in view of the evidence of cursory peer review of Said et al 2008 itself, perhaps due to the longstanding association between editor Stanley Azen and coauthor Wegman. (While the short time to acceptance is evidence of this cursoriness, I have seen little evidence that peer review is made more effective by aging on the to-do list of the reviewer.) The cursoriness is evident from the failure of the peer review process to require attributions for section 1. No reasonable reviewer could have regarded the definitions in section 1 as original to Said et al 2008, regardless of whether the text was paraphrased or verbatim. That failure is itself evidence that Said et al 2008 was accepted with nothing more than the most cursory review.
Like the Wegman Report, Said et al 2008 could only suggest that “entrepreneurial” co-authorship resulted in a situation where members of a clique reviewed one another’s papers, but fell short of demonstrating that this had happened in any actual case.
Little recent commentary on Said et al 2008 has addressed the substance of the article (most commentary showing little evidence of the parties having actually read the article). However, two social network specialists have commented – Kathleen Carley in a recent USA Today interview and Garry Robins (page 151 here).
Carley characterized Said et al 2008 as an “opinion piece”, observing that the authors had not been able to provide data to “support their argument” (a point clearly acknowledged by the authors, who pointed to the anonymity of peer review). Although Carley would have recommended a major revision, she didn’t characterize anything within the article as ‘wrong’:
Q: (So is what is said in the study wrong?)
A: Is what is said wrong? As an opinion piece – not really. Is what is said new results? Not really. Perhaps the main “novelty claim” are the definitions of the 4 co-authorship styles. But they haven’t shown what fraction of the data these four account for.
Earlier, Garry Robins of the University of Melbourne had observed that network analysis by itself could not demonstrate that members of the Mann clique had reviewed one another’s work, conceding that, given peer review anonymity, ‘this data would be difficult to obtain’.
The implications for peer review in section 4 of the article present inferences that go too far beyond the data and the analysis. The argument is that because there are many tightly coupled groups working closely together they will have a common perspective, so unbiased reviewing is compromised. Of course, in regard to a given article, a set of co-authors (i.e. within the one group) will have a common perspective at least in regard to that article. That does not mean that the other groups necessarily share that same perspective (even if they share one co-author). The literature on network entrepreneurship, including work by Granovetter and Burt, gives plenty of theoretical and empirical reasons to suggest that such groups, linked by one entrepreneur (in this case the central author), may indeed have different opinions or approaches. In other words, a common perspective within groups does not imply a common perspective across groups. Because it is not possible a priori to infer perspectives from network structure, the issue is an empirical question that requires additional data before conclusions can be drawn.
Moreover, the analysis is essentially an egonet strategy, focusing on the co-authors of one central author. Even if there were a common perspective within the egonet, this does not imply a paucity of other reviewers outside the egonet (i.e. who are not co-authors) to provide different perspectives. It is highly risky to draw conclusions about a complete network based on a single egonet. In summary, there may or may not be compromised reviewing in various research domains but this network analysis cannot provide sufficient leverage to show it.
A more complete network analysis involving editors’ and reviewers’ links, in the context of a domain-wide co-authorship network, together with data on individual positions taken on controversial research issues, would be one way to proceed, although admittedly some of this data would be difficult to obtain.
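Robins’ second objection can also be made concrete on toy data: a single egonet says nothing about the size of the reviewer pool outside it. In a hypothetical discipline of 100 authors with sparse random co-authorship ties, the pool of non-co-authors remains large:

```python
# Sketch (hypothetical random field) of Robins' egonet objection: even a
# tightly interlocked egonet leaves most of the discipline outside it, so
# one egonet cannot show that independent reviewers are scarce.
import networkx as nx

field = nx.erdos_renyi_graph(100, 0.04, seed=1)  # invented discipline of 100 authors
ego = set(nx.ego_graph(field, 0))                # author 0 plus direct co-authors
outside = set(field) - ego                       # candidate reviewers outside the egonet
print(f"egonet size: {len(ego)}, authors outside the egonet: {len(outside)}")
```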
Climategate Social Network Diagrams
Soon after the release of the Climategate dossier, social network-type diagrams were published – though no one at the time thought to compare them to the social network diagrams in the Wegman Report and Said et al 2008.
The main nodes of the Climategate social network diagram were Mann, Jones, Briffa, Osborn with Santer, Schmidt, Wigley, Overpeck and Jansen next in prominence, as shown in the figure below (click on figure for expanded version):
Figure 1. Climategate Network Diagram. Downloaded from http://www.warwickhughes.com/agri/crunet.gif
The principal nodes of the Climategate network obviously correlate to Mann’s “clique” as identified in the Wegman Report (Mann, Jones, Briffa, Osborn, Bradley, Hughes and Rutherford), as shown below:
The Climategate Dossier
Wegman had stated in 2006 that “obviously because peer review is typically anonymous, we cannot prove or disprove the fact that there are reviewers in one clique that are reviewing other members of the same clique.” Although the Climategate dossier offers only a small sample, it provides the evidence missing from the Wegman Report and Said et al 2008 – conclusive evidence that members of Mann’s clique had reviewed papers by other members of the clique. Not only does the Climategate dossier contain emails about peer review, it contains a number of actual reviews by Phil Jones (a member of Mann’s clique) of articles by other members of the clique.
In each case, as Wegman had hypothesized, Jones’ reviews were short and favourable – invariably “acceptance subject to minor revisions” – recommendations that were entirely opposite in tone to his reviews of critics of the clique. Moreover, in important cases, Jones’ reviews evaluated articles in terms of how they would contribute to the forthcoming 2007 IPCC assessment report – in which he and other clique members (e.g. Briffa, Osborn, Trenberth) were Lead Authors and Coordinating Lead Authors responsible for assessing articles written by members of the clique and reviewed by other members.
For example, the Climategate documents contain Jones’ review of Santer et al 2005, whose co-authors included not only Santer but also other regular Climategate correspondents: Wigley, Schmidt and Thorne. Jones’ cursory review was less than a page and contained only minor comments. Santer et al 2005 was submitted on May 16, 2005; Jones’ short review was dated June 4, 2005; the article was accepted on July 27, 2005.
One of the more controversial Climategate emails was Jones’ email saying that he would keep a paper by McKitrick and Michaels criticizing CRUTEM out of the forthcoming IPCC report “even if we have to redefine what the peer-review literature is!” Jones made good on this threat through the first two drafts, the only versions sent to external reviewers. In the final draft, Jones and his fellow IPCC authors grudgingly commented on the McKitrick and Michaels paper, but only with disparaging editorial comments which, as McKitrick later observed, had no support in the academic literature at the time. In 2008, Gavin Schmidt of realclimate submitted an article entitled “Spurious correlations between recent warming and indices of local economic activity” to the International Journal of Climatology (Tim Osborn of CRU being on the editorial board), an article that purported to provide the support missing at the time of the 2007 IPCC Report. Once again, Phil Jones was a friendly reviewer for a submission by another member of the clique. No such indulgence was extended to McKitrick when he attempted to respond to Schmidt’s criticism at the same journal. On this occasion, Jones was doubly conflicted, since Schmidt 2008 criticized the critics of Jones’ own temperature index. As on other occasions of intra-clique reviewing, Jones recommended acceptance with minor revisions and the article was speedily accepted. Jones’ review stated:
it will be good to have another paper to refer to when reviewing any more papers like dML06 and MM07. There is really no excuse for these sorts of mistakes to be made, that lead to erroneous claims about problems with the surface temperature record.
The Climategate documents even contain a review by Jones of a lengthy submission by Mann (one that I am presently unable to identify), with Jones once again recommending acceptance subject only to minor revisions.
Jones’ Review of Wahl and Ammann
Wegman’s “hypothesis” was vividly confirmed by the evidence in the Climategate dossier that Jones had even been a reviewer of Wahl and Ammann 2007, the official Team response to the MM criticisms of Mann et al 1998-99 that had been at issue in the Wegman Report. Wahl and Ammann had been submitted on May 11, 2005; Jones’ review was dated only 11 days later.
Unsurprisingly, Jones’ review was limited to “minor” comments, his only ‘major suggestion’ being to include a map in the supplementary information. Jones’ review did not even object to Wahl and Ammann’s reliance on a rejected submission (Ammann and Wahl, submitted to GRL) for essential results – the long story of “Caspar and the Jesus Paper” is wittily told by Bishop Hill. Jones appraised Wahl and Ammann in terms of its potential contribution to the forthcoming IPCC report (in which it would be assessed by Keith Briffa), stating that it was to be “thoroughly welcomed and is particularly timely with the next IPCC assessment coming along in 2007”, adding (inaccurately, as it turned out) that it would “go a long way to silencing the critics”.
As CA readers are aware, there were numerous substantive issues about Wahl and Ammann. Many of these were discussed in Wegman’s answers to follow-up questions to the Wegman Report, in which Wegman was asked detailed questions about Wahl and Ammann. Wegman observed that Wahl and Ammann had actually validated our results and calculations. Wegman stated caustically that Wahl and Ammann’s methodology, in which they altered MBH methodology to “get” a similar answer, had “no statistical integrity”. See Wegman’s discussion of Wahl and Ammann at question 10 here. Even though there were numerous substantive issues on the table, Jones did not ask about or seek clarification on any of them.
I won’t re-litigate these various issues in this post. My point here is that instead of Wahl and Ammann being peer reviewed by someone independent, it was reviewed by Phil Jones, Mann’s closest correspondent, with the subsequent IPCC assessment being conducted by fellow clique member Keith Briffa of CRU.
Given the concerns about plagiarism in Said et al 2008 (which is limited to the failure to attribute introductory material), it is important for readers to realize that Wahl and Ammann 2007 thoroughly plagiarized Mann’s 2004 submission to Nature (responding to our 2004 submission), not just in background words, but in important ideas and concepts. Virtually every argument advanced in Wahl and Ammann had previously been proposed in Mann’s 2004 submission to Nature (some of which had also been summarized in posts by Mann at realclimate in late 2004 and early 2005). Despite their obvious use of Mann’s prior work, Wahl and Ammann contained no reference to it (nor even an acknowledgment to Mann).
Unfortunately, section 1 of Said et al 2008 (and the corresponding section of the Wegman Report) contained background material from standard texts without attribution. This breached academic protocols and requires a proportional response – a point that I will discuss in a forthcoming post.
However, within this controversy, it should not be overlooked that the Wegman Report was vindicated on its hypothesis about peer review within the Mann “clique”. The Wegman Report (and Said et al 2008) hypothesized, but were unable to prove, that reviewers in the Mann “clique” had been “reviewing other members of the same clique”. Climategate provided the missing evidence: its documents showed that clique member Phil Jones had reviewed papers by other members of the clique, including some of the articles most in controversy – confirming what the Wegman Report had only hypothesized.
Update – please see Roman Mureika’s post here.