From the NY Times, a wide-ranging article on the problems of journal peer-review:
Recent disclosures of fraudulent or flawed studies in medical and scientific journals have called into question as never before the merits of their peer-review system.
The system is based on journals inviting independent experts to critique submitted manuscripts. The stated aim is to weed out sloppy and bad research, ensuring the integrity of what it has published.
Because findings published in peer-reviewed journals affect patient care, public policy and the authors’ academic promotions, journal editors contend that new scientific information should be published in a peer-reviewed journal before it is presented to doctors and the public.
That message, however, has created a widespread misimpression that passing peer review is the scientific equivalent of the Good Housekeeping seal of approval.
I could add that that impression is also given by some authors and institutions when asked to defend their results against criticism.
Virtually every major scientific and medical journal has been humbled recently by publishing findings that are later discredited. The flurry of episodes has led many people to ask why authors, editors and independent expert reviewers all failed to detect the problems before publication.
The publication process is complex. Many factors can allow error, even fraud, to slip through. They include economic pressures for journals to avoid investigating suspected errors; the desire to avoid displeasing the authors and the experts who review manuscripts; and the fear that angry scientists will withhold the manuscripts that are the lifeline of the journals, putting them out of business. By promoting the sanctity of peer review and using it to justify a number of their actions in recent years, journals have added to their enormous power.
The only thing I wonder is: are scientists in general reluctant to lay their books open, or just a few high-profile ones? Also, are scientists really a homogeneous bunch who would boycott a journal en masse in protest at new review rules designed to prevent fraud?
Later in the article Dr Altman lays bare the dynamic we have seen in climate science rather too often:
Increasingly, journals and authors’ institutions also send out news releases ahead of time about a peer-reviewed discovery so that reports from news organizations coincide with a journal’s date of issue.
A barrage of news reports can follow. But often the news release is sent without the full paper, so reports may be based only on the spin created by a journal or an institution.
Yep. Seen that one. We’ve also seen press releases for articles entering peer review which were subsequently withdrawn without any public retraction. We’ve even seen claims that a particular result applies far beyond the scope of what the paper actually reported. Clearly the practice of spin is distorting the process as well.
Journal editors say publicity about corrections and retractions distorts and erodes confidence in science, which is an honorable business. Editors also say they are gatekeepers, not detectives, and that even though peer review is not intended to detect fraud, it catches flawed research and improves the quality of the thousands of published papers.
However, even the system’s most ardent supporters acknowledge that peer review does not eliminate mediocre and inferior papers and has never passed the very test for which it is used. Studies have found that journals publish findings based on sloppy statistics. If peer review were a drug, it would never be marketed, say critics, including journal editors.
This gets me. I would argue that publicity about corrections and retractions would enhance confidence in science, not erode it in the way described. It’s interesting to note that science is "an honorable business" — and so it is, but businesses need independent audit and review to retain that honor, as a single bad apple can cause a crisis of confidence in the business as a whole. It has to be said that there has been a bumper crop of bad apples in many branches of science. What must the honest and conscientious scientist think when his or her paper cites work later shown to be false, or is published next to a study later shown to be severely flawed?
I mean, the NY Times had a journalistic fraudster on its payroll, and it had to change its policies with regard to review and retention of staff in the aftermath. These changes were demanded, in part, by the journalists themselves. They were hardly boycotting the tightening up of procedures, were they?
A widespread belief among nonscientists is that journal editors and their reviewers check authors’ research firsthand and even repeat the research. In fact, journal editors do not routinely examine authors’ scientific notebooks. Instead, they rely on peer reviewers’ criticisms, which are based on the information submitted by the authors.
While editors and reviewers may ask authors for more information, journals and their invited experts examine raw data only under the most unusual circumstances.
I think Steve McIntyre could wax eloquent on this one. The IPCC relies on the integrity of the journal’s peer review, and woe betide an IPCC reviewer who asks for raw data.
Journals have rejected calls to make the process scientific by conducting random audits like those used to monitor quality control in medicine. The costs and the potential for creating distrust are the most commonly cited reasons for not auditing.
In defending themselves, journal editors often shift blame to the authors and excuse themselves and their peer reviewers.
Journals seldom investigate frauds that they have published, contending that they are not investigative bodies and that they could not afford the costs. Instead, the journals say that the investigations are up to the accused authors’ employers and agencies that financed the research.
What of the cost of damage to a journal’s integrity? Or the knock-on effect of authors not submitting articles because of the journal’s loss of reputation? Wouldn’t it be interesting if journals did conduct random checks on papers, paid for by a small levy on the authors’ funding?
I recommend reading the full article.
Hat tip to Sara Chan for bringing this to my attention.
Comment by Steve: My views on peer review are different from what most people, including John A, think they are. I tend to think of “peer review” being to an article what “broker recommended” is to a stock. The one is not a guarantee that the article is “right”, and the other is not a guarantee that the stock will go up. Securities commissions review prospectuses, but they don’t attempt to decide whether a stock is going to go up or down; they do try to ensure “full, true and plain disclosure” in a prospectus. They try to keep criminals from offering securities to the public, but frauds happen.

My beef against peer reviewers is that they spend too little time and energy ensuring that sufficient information is attached to an article to make it replicable, and that the journals do not enforce their own pretty policies on replicability. In the paleoclimate field, where I’m knowledgeable, the biggest thing that would avoid problems is archiving data and code and requiring accurate data citation (as called for under AGU data policies, but not applied). That would reduce the time and energy needed for subsequent researchers to check things. McCullough’s observation on this, in economic terms, is that the “cost” of replication needs to be reduced dramatically by requiring these steps. The job of the journal should be to ensure that these steps are done, not necessarily to do them. Economics journals such as the American Economic Review have implemented them; the refusal of Nature and Science to do so is inexcusable. That’s where I’d put my energy.