Two Editorials

Some people, including some who are not particularly sympathetic to the thoughts expressed here, suggest that the way I do things is ineffective and offer a variety of suggestions on how I could get my views across better. Mostly they involve less blogging and more journal submissions. Maybe they’re right. However, I noticed this weekend that an Op Ed that I wrote last May for the National Post in Toronto has been cited in two journal editorials of diverse origin – the Journal of Cave and Karst Studies and the Journal of the Royal Statistical Society.

Malcolm Field in the Journal of Cave and Karst Studies discussed the purpose of discussion and reply articles. He commented on the role of peer review, citing my Op Ed; the relevant passages are excerpted below.

The notion that the purpose of a discussion article might be to correct errors in a published paper might suggest a problem with the peer-review process. Peer-review does not necessarily mean that a paper has been thoroughly examined to ensure scientific “perfection,” which is unrealistic. Rather, peer review does ensure that the basic research concepts, methods, and conclusions are sound and reasonable.

In general, readers of scientific literature generally assume that when an article is published in a peer-reviewed journal it means that someone has checked the data and perhaps even replicated how the data was collected and analyzed, checked the equations used and calculations made, and checked that the stated conclusions are fully supported by the evidence presented (McIntyre, 2005). But peer-review does not guarantee any of this, especially because many, if not most, journal editors and reviewers work as volunteers.

The net effect is that influential papers can continue to be quoted for years without the data or methods ever being fully evaluated, let alone independently checked, even as future research projects or policies are developed based on the previous work. Publication of discussions of papers will not ensure that any errors contained in the original work will subsequently be caught and corrected.

The other editorial, by James Zidek in the Journal of the Royal Statistical Society, is entitled “Post-normal Science”. He opened the editorial by discussing the efforts of a large EPA panel on ozone to reach a consensus – the hearing sounds like a larger version of our NAS panel. He characterized the process of such a panel not as “objective” science, but as one in which objectivity has given way to intersubjectivity.

Discussion focused on what was in the draft and what was not, on what was certain and what was not, and on how to incorporate that uncertainty in formulating public policy based on science and how not. We were engaged in a process that exemplifies post-normal science (PNS)! I am not sure who coined that phrase, but I first read it in an article on the World Wide Web by Funtowicz and Ravetz (undated). It refers to a category of processes involving scientists and science that lead to conclusions and commonly to policy decisions.

… The tasks differ from those of normal science owing to complexity, specifically radical uncertainty, a plurality of legitimate perspectives and very high risk. PNS embraces issue-driven scientific inquiry related to environmental controversies; the facts are uncertain, values in dispute, stakes high and decisions urgent. Everything, the choice of what to measure, the modelling approaches and the formation of theoretical constructs, is shaped by societal values. Rather than knowledge discovery as in traditional science, the ultimate goal of its post-normal successor is consensual agreement, what one person called ‘convergence’ at our panel meeting. Objectivity has given way to intersubjectivity. The quality of their eventual outputs – conclusions and maybe policy decisions – depends on the processes that produced them rather than just on the experiments as in normal science. …

Zidek then went on to discuss this process in the context of “mandated science”, i.e. where science is carried out to resolve a matter of policy interest.

PNS has a handmaiden, mandated science, as described in Salter (1988). Its realm lies in the intersection of values, public policy and science according to Gerald van Belle who in a recent lecture characterized mandated science as taking place in public view, often adversarial, having a multiplicity of stakeholders and involving concern for accountability (the bottom line!).

As an example of intersubjective science (rather than objective science), he cited the IPCC process with respect to climate change:

The widely held belief in global climate change illustrates well a post-normal scientific conclusion, one that led to the Kyoto accord, a policy decision. In fact, Oreskes (2004) concluded on the basis of her survey of 928 abstracts that ‘none of the papers disagreed with the consensus position’ and that ‘there is a scientific consensus on the reality of anthropogenic climate change’. That prompted the Royal Society’s Vice-President, Sir David Wallace, to send the media a letter asking them to ignore ‘individuals on the fringes, sometimes with financial support from the oil industry, who have been attempting to cast doubt on the scientific consensus on climate change’. His position was supported by Lord May, the Society’s President, according to an article in the UK’s Daily Telegraph (2005). Here we have intersubjective agreement!

He then reported on our criticisms of IPCC, citing both my Op Ed and our GRL article.

Even the Intergovernmental Panel on Climate Change is not spared. McIntyre (2005) asserted that the hockey stick diagram prompted the Panel to change its 1990 conclusion that the mediaeval period was warmest in the last millennium, in favour of the 1990s. However, he wrote that he and his co-author, Ross McKitrick, have shown that the hockey stick diagram resulted from flawed analysis, in particular from data mining methods that would have produced hockey stick patterns even in a random series (see for example McIntyre and McKitrick (2005)).
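
For readers who have not followed the decentering issue that Zidek summarizes, here is a minimal sketch of the effect. It is written in Python and is not the MBH98 algorithm or the code from our GRL article; the number of series, the AR(1) coefficient and the calibration window below are illustrative assumptions, chosen only to show how short-centered principal components applied to trendless red noise can yield a hockey-stick-shaped leading PC.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes only (not the MBH98 proxy network).
    n_years, n_series = 581, 70
    calib = slice(n_years - 79, n_years)  # last 79 "years" stand in for a calibration window

    # Simulate AR(1) "red noise" proxies containing no climate signal.
    phi = 0.9
    eps = rng.standard_normal((n_years, n_series))
    proxies = np.zeros_like(eps)
    for t in range(1, n_years):
        proxies[t] = phi * proxies[t - 1] + eps[t]

    def leading_pc(data, center):
        """First principal component of `data` after subtracting `center` from each column."""
        x = data - center
        u, s, _ = np.linalg.svd(x, full_matrices=False)
        return u[:, 0] * s[0]

    # Conventional PCA: center each series on its full-period mean.
    pc_full = leading_pc(proxies, proxies.mean(axis=0))

    # Short-centered PCA: center each series on its calibration-period mean only.
    pc_short = leading_pc(proxies, proxies[calib].mean(axis=0))

    def hs_index(pc):
        """Crude hockey-stick index: departure of the calibration-period mean of the
        PC from its long-term mean, in standard-deviation units."""
        return abs(pc[calib].mean() - pc.mean()) / pc.std()

    print(f"full centering:  hockey-stick index = {hs_index(pc_full):.2f}")
    print(f"short centering: hockey-stick index = {hs_index(pc_short):.2f}")

On simulated series like these, the short-centered index will typically come out noticeably larger: the decentering step preferentially weights series whose calibration-period behaviour departs from their long-term mean, even when there is no signal to find.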

He wondered why statisticians were not involved earlier in the IPCC process, rather than afterwards:

That analysis and others like it [I think that he means IPCC, not us!] might have benefited from critical analysis of statisticians before its publication. After all, statistical science is tailor made for PNS and its complexities, uncertainties being admitted through probabilities and values through utilities. Not surprisingly, statistical scientists did become involved, after its publication. …

In fact, statistical scientists should be involved in mandated science and PNS so that those in other disciplines do not usurp their legitimate roles. Yet meeting this requirement has not been easy as the climate example demonstrates. I wonder why. Are we blocked from doing so? Or does our nature lead us to shun roles on centre stage, preferring the wings instead? Or perhaps we do not have the requisite skills for handling the complicated interactions that are involved.

So despite the supposed ineffectiveness of my way of going about things, the points raised here have actually garnered a fair amount of attention, as these two editorials in such diverse journals demonstrate. (Another longer review of these issues is in the works for another journal.)

As a quick thought, I’m not sure that “post-normal science” is a very helpful term for the process of making policy decisions from scientific information. What Zidek calls “post-normal science” is really not science per se, but the attempt to make decisions using scientific information and reports. I’d be inclined to change the emphasis of the hyphen. It’s not that the science is “post-normal” (which connotes post-modernism), but that the process is “post normal-science”.

If readers will indulge another analogy: geologists in a mineral exploration program are carrying out “mandated science”. When you’re trying to decide whether to fish or cut bait on a mineral exploration program, or whether to try to fund a new program, you are carrying out a process that comes after the science, but you rely on scientific information (geological reports), and businesses do try to develop a consensus.

Or when businesses are making investment decisions on factories implementing new processes, management will try to achieve a consensus at the board level. One of the methods of achieving consensus is to commission engineering reports, which are a form of “mandated science”. But the approach of engineers is quite different from that of scientists trying to write original articles for academic journals. A lot of engineering is checking everything and making sure that the i’s are dotted and the t’s are crossed. In climate, that would mean seeing whether the models work, through a detailed checking process, perhaps involving several million dollars of engineering work, rather than relying on a journal article, which necessarily had negligible due diligence. It’s not that the science is “post-normal”; it’s just that well-informed decision-making requires engineering-level due diligence, and that engineering comes after the “normal science”.

References:
Zidek, J. (2006). Post-normal Science. Journal of the Royal Statistical Society, Series A, 169(1), 1–4.

Field, M. S. Why publish discussions and author responses to papers published in the Journal of Cave and Karst Studies? Journal of Cave and Karst Studies, 67(2), 91. http://www.caves.org/pub/journal/PDF/V67/v67n2-Field.pdf

14 Comments

  1. TCO
    Posted Mar 26, 2006 at 5:50 PM | Permalink

    The GRL article did more for you than anything else, Steve. The blog is fine. I’m not saying to do less of it. I’m saying to do more real papers. Get the academic notches on the belt. The average journal paper has a more complete analysis and story than your average posting here. Also, it exposes your ideas to official response. Also, they are ABSTRACTED as part of the official literature.

    Has anyone done “grass plots” in the literature? You need to contribute that advance.

    How about the Polar Urals tree chronology issue? That should have been in a specialty journal. It is a model for others to look at in terms of checking/double-checking and is an important question about a particular feature of a broader study.

    Several other analyses on here ought to be packaged up and put in the official literature.

    If you build a portfolio of papers, people will ask you for reviews and the like. I have to believe that, at this point, you’ve read enough to be one of the world’s experts on statistical paleoclimatology and paleoclimatology in general. I also think that a portfolio of real papers (not blog posts) would help you to get even more op-ed offers and official position offers and the like.

  2. John A
    Posted Mar 26, 2006 at 6:21 PM | Permalink

    I second that emotion from TCO.

    I think that your experiences with the Hockey Stick (which you described somewhere as a laboratory of bad statistics) should be sent to a journal of statistics. I’m sure you’ll get some interesting feedback.

  3. Posted Mar 26, 2006 at 8:52 PM | Permalink

    I am not sure whether I agree with the colleagues above. While it is clear that the published articles have diminished the ability of the “believers” to humiliate Steve and Ross as outsiders who should not be listened to at all, the published papers can’t solve everything and they’re not the ultimate recipe for finding the truth. Peer review in particular has turned out to be rather ineffective, and I find the current system in climate science to be broken.

    Although there are some formal – and not only formal – aspects in which a journal paper may be more polished than Steve’s blog articles, it also costs a lot of extra time and tugs of war over the process. Climate Audit is obviously the most detailed, content-rich, and penetrating regularly updated website about paleoclimatology in the world. In my opinion, Steve should continue the way he does, and when he sees a meaningful collection of insights or calculations that may be transformed into a paper, he should write it and submit it. And he should never be afraid to try to publish together with one of those who are viewed as “mainstream”.

    But I think that he should definitely avoid the struggle for every routine publication – the kind of approach that careerist scientists follow – especially because such an approach reduces independence, which is rather crucial in the politicized field of climate science. Agree with me or disagree, I find it completely obvious that those who produce “standard” papers with the “right” conclusions about climate change face a much easier path from writing to publication.

    This is how most reviewers write their reviews: they look at whether the conclusions agree with the “consensus”, and if they do, they quickly skim through the paper and recommend it. If the conclusions disagree with the “consensus”, they read the paper and try to find flaws important enough to recommend rejection. Under current circumstances, this is not a fair battle.

    It is definitely not true that expertise and contributions to climate science can be measured simply by the number of refereed publications. And I think that the atmosphere in the field is so hostile towards attempts to analyze things properly that attempts to transform the method purely from within are doomed from the beginning. Read the main article in Time magazine (and on CNN) about “being very afraid” to see what kind of articles and evidence the current system demands.

  4. McCall
    Posted Mar 26, 2006 at 9:40 PM | Permalink

    My vote – publish sparingly and when warranted, but peer-reviewed and refereed websites and blogs will be more timely and important in the future. While your GRL and other peer-reviewed articles are great, I saw them as existence proofs for your credibility and as a way to defuse the common criticism of being unpublished in the field.

    My hope is that GRL will be forward-looking enough to add this e-publishing opportunity for climate science – what ScienceExpress would be, with balanced and equally enforced policies. Both Nature and Science indicate an abrogation of responsibility in this scientific field, not unlike the declining subscriber-based print news media. Time will tell on that too – but of the dinosaurs, ScienceExpress offers some publishing attraction, if only they would act with more ________ and ________ (fill in the blanks); but I suspect a GRL-Express will end up filling that void.

  5. Steve McIntyre
    Posted Mar 26, 2006 at 9:40 PM | Permalink

    Thanks for the support. I think that you all have the right tone. There are some things that need to be written up – I acknowledge this. But if I hadn’t pushed back at the Hockey Team from the blog, I would have been bullied off the field without being heard.

  6. McCall
    Posted Mar 26, 2006 at 9:47 PM | Permalink

    Oh, and geophysics is where this debate should have been in the first place. IMO, general science mags don’t have the staff or the specialization to properly audit what they published in the past (duh!). Nor will they succeed in a future of specialized and niche scientific expertise and quick-turn e-publishing.

  7. G. Boden
    Posted Mar 26, 2006 at 10:19 PM | Permalink

    When freshwater ecology was maturing as a discipline, a volume called Limnological Methods (P.S. Welch, 1948) helped to standardize the techniques used for measurement and analysis. Generations of ecologists have cut their teeth on it (and later improved versions) as part of their basic training in undergraduate research courses. It’s becoming very apparent that paleoclimatological research needs something similar, both in field/lab methods and statistical techniques. Maybe it’s time to establish minimum standards that set a baseline for passing the ‘climate audit.’ At the very least it would serve as a guideline/checklist for peer reviewers.

  8. Posted Mar 26, 2006 at 10:38 PM | Permalink

    I strongly agree that the best strategy involves a combination of formal journal publication and less formal blogging. You can’t hope to have the credibility you need without peer-reviewed journal publications behind you, but you also can’t hope to deal with all the relevant issues in journal articles. I think I’d suggest adjusting the current emphasis a bit more towards journals, but you are already in the right ball park.

  9. John A
    Posted Mar 27, 2006 at 3:07 AM | Permalink

    Backing up my further comments, surely the point made by James Zidek is that statisticians should be involved in scientific resolutions, but they currently are not.

    I think a nice article on statistical methods in paleoclimatology for the Journal of the Royal Statistical Society is exactly what the doctor ordered.

  10. TCO
    Posted Mar 27, 2006 at 9:57 AM | Permalink

    I often find that many of Steve’s posts have incomplete analysis and implied results/criticisms that are not explicitly stated. Formal pubs would be better.

    Nothing wrong with having the website as well. But the amount of work that this guy has done without getting it into formal, abstracted results is way out of balance. I also think that some of the rejections and time wasted could be obviated by a tighter submission process, a cleaner division of arguments, and submitting to the right journals. The mashed-up PowerPoint presentations show a tendency, to me. It’s not hard to get things published if you divide them up and avoid the kitchen-sink tendency. And follow the procedures and proofread like a nuke engineer. (You’d be surprised how much that helps. Just tell a simple, straightforward story and follow the rules.)

    I think that getting some simpler articles published before writing the review article on statistical methods would both make it more likely that Steve has the academic cred to pull it off, and the process would teach him things that will make his review article better – he doesn’t know it all yet!

  11. Steve McIntyre
    Posted Mar 27, 2006 at 10:54 AM | Permalink

    TCO, I haven’t had a rejection problem. The Ritson and A&W Comments at GRL were rejected so the Replies were not published. That wasn’t because of defects in the replies.

    I did send a very short focused note on Polar Urals cross-dating to Nature, who published the original article. They rejected it and said it should go to a specialty journal. I’ll do that. But the obligation to deal with errors in Nature publications is really Nature’s and I had to go through that process. I’ll post that up.

    I’ve got a few other things in the works, some of which I should have finished. Look, if I were a 35-year-old trying to keep score for academic promotions, I’d be worried about academic citations just like young professors are. But, TCO, you’re right and you’ve been right for a long time that I need to write up more articles. But then something like the NAS panel comes along, or Wahl & Ammann, and these things take a lot of time.

  12. TCO
    Posted Mar 27, 2006 at 11:23 AM | Permalink

    Since you admitted I was right, I don’t want to beat you over the head. But…I can’t help myself. At least I segue a little.

    I don’t consider the replies to comments (in essence a reply to a reply to a reply to a paper) to be worth much as publications. In your own words, those were in essence blog back-and-forth comments. I think you would be better served (and the community better served) by getting some of the analyses here finished up and published. At least those are “replies to papers”, if you follow me. Heck, some of them even approach the point of being original work. For instance, the concept of the grass plot and the identification of super-trees.

  13. Terry
    Posted Mar 27, 2006 at 8:16 PM | Permalink

    What you really need are assistants that can write up papers at your direction. Since everything is done via computer analysis, you just need eager hands to crank them out.

    You easily have enough ideas for another 10 papers. A couple of assistants could build a tenurable record by collaborating with you.
