Some people, including some who are not particularly sympathetic to the thoughts expressed here, suggest that the way that I do things is ineffective and have a variety of suggestions on how I could get my views across better. Mostly these involve less blogging and more journal submissions. Maybe they're right. However, I noticed this weekend that an Op Ed that I wrote last May for the National Post in Toronto has been cited in two journal editorials of quite different provenance – the Journal of Cave and Karst Studies and the Journal of the Royal Statistical Society.
Malcolm Field in the Journal of Cave and Karst Studies discussed the purpose of discussion and reply articles. He commented on the role of peer review, citing my Op Ed – note the bolded comments below.
The notion that the purpose of a discussion article might be to correct errors in a published paper might suggest a problem with the peer-review process. Peer-review does not necessarily mean that a paper has been thoroughly examined to ensure scientific “perfection,” which is unrealistic. Rather, peer review does ensure that the basic research concepts, methods, and conclusions are sound and reasonable.
In general, readers of scientific literature generally assume that when an article is published in a peer-reviewed journal it means that someone has checked the data and perhaps even replicated how the data was collected and analyzed, checked the equations used and calculations made, and checked that the stated conclusions are fully supported by the evidence presented (McIntyre, 2005). But peer-review does not guarantee any of this, especially because many, if not most, journal editors and reviewers work as volunteers.
The net effect is that influential papers can continue to be quoted for years without the data or methods ever being fully evaluated, let alone independently checked, even as future research projects or policies are developed based on the previous work. Publication of discussions of papers will not ensure that any errors contained in the original work will subsequently be caught and corrected.
The other editorial, by James Zidek in the Journal of the Royal Statistical Society, is entitled "Post-Normal Science". He opened the editorial by discussing the efforts of a large EPA panel on ozone to reach a consensus – the hearing sounds like a larger version of our NAS panel. He characterized the process of such a panel as not being "objective" science, but as one in which objectivity has given way to intersubjectivity.
Discussion focused on what was in the draft and what was not, on what was certain and what was not, and on how to incorporate that uncertainty in formulating public policy based on science and how not. We were engaged in a process that exemplifies post-normal science (PNS)! I am not sure who coined that phrase, but I first read it in an article on the World Wide Web by Funtowicz and Ravetz (undated). It refers to a category of processes involving scientists and science that lead to conclusions and commonly to policy decisions.
… The tasks differ from those of normal science owing to complexity, specifically radical uncertainty, a plurality of legitimate perspectives and very high risk. PNS embraces issue-driven scientific inquiry related to environmental controversies; the facts are uncertain, values in dispute, stakes high and decisions urgent. Everything, the choice of what to measure, the modelling approaches and the formation of theoretical constructs, is shaped by societal values. Rather than knowledge discovery as in traditional science, the ultimate goal of its post-normal successor is consensual agreement, what one person called ‘convergence’ at our panel meeting. Objectivity has given way to intersubjectivity. The quality of their eventual outputs (conclusions and maybe policy decisions) depends on the processes that produced them rather than just on the experiments as in normal science. …
Zidek then went on to discuss this process in the context of "mandated science", i.e. science that is carried out to resolve a matter of policy interest.
PNS has a handmaiden, mandated science, as described in Salter (1988). Its realm lies in the intersection of values, public policy and science according to Gerald van Belle who in a recent lecture characterized mandated science as taking place in public view, often adversarial, having a multiplicity of stakeholders and involving concern for accountability (the bottom line!).
As an example of intersubjective science (rather than objective science), he cited the IPCC process in respect to climate change:
The widely held belief in global climate change illustrates well a post-normal scientific conclusion, one that led to the Kyoto accord, a policy decision. In fact, Oreskes (2004) concluded on the basis of her survey of 928 abstracts that ‘none of the papers disagreed with the consensus position’ and that ‘there is a scientific consensus on the reality of anthropogenic climate change’. That prompted the Royal Society's Vice-President, Sir David Wallace, to send the media a letter asking them to ignore ‘individuals on the fringes, sometimes with financial support from the oil industry, who have been attempting to cast doubt on the scientific consensus on climate change’. His position was supported by Lord May, the Society's President, according to an article in the UK's Daily Telegraph (2005). Here we have intersubjective agreement!
He then reported on our criticisms of IPCC, citing both my Op Ed and our GRL article.
Even the Intergovernmental Panel on Climate Change is not spared. McIntyre (2005) asserted that the hockey stick diagram prompted the Panel to change its 1990 conclusion that the mediaeval period was warmest in the last millennium, in favour of the 1990s. However, he wrote that he and his co-author, Ross McKitrick, have shown that the hockey stick diagram resulted from flawed analysis, in particular from data mining methods that would have produced hockey stick patterns even in a random series (see for example McIntyre and McKitrick (2005)).
He wondered why statisticians were not involved earlier in the IPCC process, rather than afterwards:
That analysis and others like it [I think that he means IPCC, not us!] might have benefited from critical analysis of statisticians before its publication. After all, statistical science is tailor made for PNS and its complexities, uncertainties being admitted through probabilities and values through utilities. Not surprisingly, statistical scientists did become involved, after its publication. …
In fact, statistical scientists should be involved in mandated science and PNS so that those in other disciplines do not usurp their legitimate roles. Yet meeting this requirement has not been easy as the climate example demonstrates. I wonder why. Are we blocked from doing so? Or does our nature lead us to shun roles on centre stage, preferring the wings instead? Or perhaps we do not have the requisite skills for handling the complicated interactions that are involved.
So despite the supposed ineffectiveness of my way of going about things, the points raised here have actually garnered a fair amount of attention, as these two editorials in such diverse journals demonstrate. (Another, longer review of these issues is in the works for another journal.)
As a quick thought, I'm not sure that "post-normal science" is a very helpful term for making policy decisions from scientific information. What Zidek calls "post-normal science" is really not science per se, but an attempt to make decisions using scientific information and reports. I'd be inclined to change the emphasis of the hyphen. It's not that the science is "post-normal" (which connotes post-modernism); rather, the process is "post normal-science".
If readers will indulge another analogy: geologists in a mineral exploration program are carrying out "mandated science". When you're trying to decide whether to fish or cut bait on a mineral exploration program, or whether to try to fund a new one, you are carrying out a process that comes after the science; you rely on scientific information (geological reports), and businesses do try to develop a consensus.
Or when businesses are making investment decisions on factories implementing new processes, management will try to achieve a consensus at the board level. One of the methods of achieving consensus is to commission engineering reports, which are a form of "mandated science". But the approach of engineers is quite different from that of scientists writing original articles for academic journals. A lot of engineering is checking everything, making sure that the i's are dotted and the t's are crossed. In climate, that would mean testing whether the models worked through a detailed checking process, perhaps involving several million dollars of engineering work, rather than relying on a journal article, which necessarily had negligible due diligence. It's not that the science is "post-normal"; it's just that well-informed decision-making requires engineering-level due diligence, and that engineering comes after the "normal science".
Zidek, J. (2006). Post-normal science. Journal of the Royal Statistical Society: Series A, 169(1), 1–4.
Field, M. S. Why publish discussions and author responses to papers published in the Journal of Cave and Karst Studies? Journal of Cave and Karst Studies, 67(2), 91. http://www.caves.org/pub/journal/PDF/V67/v67n2-Field.pdf