NY Times: For Science’s Gatekeepers, a Credibility Gap

From the NY Times, a wide-ranging article on the problems of journal peer-review:

Recent disclosures of fraudulent or flawed studies in medical and scientific journals have called into question as never before the merits of their peer-review system.

The system is based on journals inviting independent experts to critique submitted manuscripts. The stated aim is to weed out sloppy and bad research, ensuring the integrity of what the journal has published.

Because findings published in peer-reviewed journals affect patient care, public policy and the authors’ academic promotions, journal editors contend that new scientific information should be published in a peer-reviewed journal before it is presented to doctors and the public.

That message, however, has created a widespread misimpression that passing peer review is the scientific equivalent of the Good Housekeeping seal of approval.


I could add that the same impression is given by some authors and institutions when they are asked to defend their results against criticism.

Virtually every major scientific and medical journal has been humbled recently by publishing findings that are later discredited. The flurry of episodes has led many people to ask why authors, editors and independent expert reviewers all failed to detect the problems before publication.

The publication process is complex. Many factors can allow error, even fraud, to slip through. They include economic pressures for journals to avoid investigating suspected errors; the desire to avoid displeasing the authors and the experts who review manuscripts; and the fear that angry scientists will withhold the manuscripts that are the lifeline of the journals, putting them out of business. By promoting the sanctity of peer review and using it to justify a number of their actions in recent years, journals have added to their enormous power.

The only thing I wonder is: are scientists in general reluctant to lay their books open, or just a few high-profile ones? Also, are scientists such a homogeneous bunch that they would boycott a journal en masse in protest at new review rules designed to prevent fraud?

Later in the article Dr Altman lays bare the dynamic we have seen in climate science rather too often:

Increasingly, journals and authors’ institutions also send out news releases ahead of time about a peer-reviewed discovery so that reports from news organizations coincide with a journal’s date of issue.

A barrage of news reports can follow. But often the news release is sent without the full paper, so reports may be based only on the spin created by a journal or an institution.

Yep. Seen that one. We’ve also seen press releases for articles entering peer review which were subsequently withdrawn without any public retraction. We’ve even seen claims that a particular result applies far beyond the scope of the paper that reported it. Clearly the practice of spin is distorting the process as well.

Journal editors say publicity about corrections and retractions distorts and erodes confidence in science, which is an honorable business. Editors also say they are gatekeepers, not detectives, and that even though peer review is not intended to detect fraud, it catches flawed research and improves the quality of the thousands of published papers.

However, even the system’s most ardent supporters acknowledge that peer review does not eliminate mediocre and inferior papers and has never passed the very test for which it is used. Studies have found that journals publish findings based on sloppy statistics. If peer review were a drug, it would never be marketed, say critics, including journal editors.

This gets me. I would argue that publicity about corrections and retractions would enhance confidence in science, not erode it in the way described. It’s interesting to note that science is “an honorable business”, and so it is; but businesses need independent audit and review to retain that honor, as a single bad apple can cause a crisis of confidence in the business as a whole. It has to be said that there has been a windfall of bad apples in many branches of science. What must the honest and conscientious scientist think when his or her paper cites work later shown to be false, or is published next to a study later shown to be severely flawed?

I mean, the NY Times had a journalistic fraudster on its payroll, and it had to change its policies on review and retention of staff in the aftermath. These changes were demanded, in part, by the journalists themselves. They were hardly boycotting the tightening up of procedures, were they?

A widespread belief among nonscientists is that journal editors and their reviewers check authors’ research firsthand and even repeat the research. In fact, journal editors do not routinely examine authors’ scientific notebooks. Instead, they rely on peer reviewers’ criticisms, which are based on the information submitted by the authors.

While editors and reviewers may ask authors for more information, journals and their invited experts examine raw data only under the most unusual circumstances.

I think Steve McIntyre could wax eloquent on this one. The IPCC relies on the integrity of the journal’s peer review, and woe betide an IPCC reviewer who asks for raw data.

Journals have rejected calls to make the process scientific by conducting random audits like those used to monitor quality control in medicine. The costs and the potential for creating distrust are the most commonly cited reasons for not auditing.

In defending themselves, journal editors often shift blame to the authors and excuse themselves and their peer reviewers.

Journals seldom investigate frauds that they have published, contending that they are not investigative bodies and that they could not afford the costs. Instead, the journals say that the investigations are up to the accused authors’ employers and agencies that financed the research.

What of the cost of damage to a journal’s integrity? Or the knock-on effect of authors not submitting articles because of the loss of reputation? Wouldn’t it be interesting if journals did do random checks on papers paid for by a small stipend from the authors’ funding?

I recommend reading the full article.

Hat tip to Sara Chan for bringing this to my attention.

Comment by Steve: My views on peer review are different from what most people, including John A, think they are. I tend to think of “peer review” as being to an article what being “broker recommended” is to a stock. The one is not a guarantee that the article is “right” and the other is not a guarantee that the stock will go up. Securities commissions review prospectuses, but they don’t attempt to decide whether a stock is going to go up or down; they do try to ensure “full, true and plain disclosure” in a prospectus. They try to keep criminals from offering securities to the public, but frauds happen. My beef against peer reviewers is that they spend too little time and energy ensuring that sufficient information is attached to the article to ensure that it is replicable, and that the journals do not enforce their own pretty policies on replicability. In the paleoclimate field where I’m knowledgeable, the biggest thing that would avoid problems is archiving data, code and requiring accurate data citation (as called for under AGU data policies, but not applied.) That would reduce the time and energy needed for subsequent researchers to check things. McCullough’s observation on this, in economic terms, is that the “cost” of replication needs to be reduced dramatically by requiring these steps. The job of the journal should be to ensure that these steps are done, not necessarily to do them. Economics journals such as the American Economic Review have implemented them; the refusal of Nature and Science to do so is inexcusable. That’s where I’d put my energy.

47 Comments

  1. Gary
    Posted May 4, 2006 at 7:21 AM | Permalink

    Although well-intentioned, peer-review is flawed. Blogs are the beginning of the democratization of science by providing a previously unavailable opportunity for informed people to open the clubroom doors. The first ones in might get stopped by the bouncer, but a horde will follow and bring in the fresh air eventually. It will be messy, but given that scientific investigation is ultimately self-correcting, we should expect progress. Journals would be smart to start their own blogs and invite all comers. This type of ‘peer’-review is more likely to fulfill the original goal. Journals can make money by selling ad space and good controversy will boost the hit count. Advance science and make money at the same time – who could ask for more?

  2. Jaime Arbona
    Posted May 4, 2006 at 7:49 AM | Permalink

    I would like to see the day when Science, Nature and the like invite the unwashed masses to soil their hallowed halls of knowledge.

  3. mikep
    Posted May 4, 2006 at 9:14 AM | Permalink

    The British Medical Journal (BMJ) already has what looks like an open-access comments page for each article. Some of the comments are silly, but readers can see the arguments laid out for them and make their own judgements – a bit like this blog!

  4. John A
    Posted May 4, 2006 at 9:15 AM | Permalink

    Re #1

    Actually that’s not the advance I’d like to see at all. Democratization may be a great buzzword in authoritarian states, but it’s not a good analogy for science, which is meritocratic and necessarily elitist.

    I would like to see scientists open their books as a condition of entry into peer review. I would like to see audits done (why couldn’t this be done as part of a statistics course? it would be fun), not on every paper, but on some of the groundbreaking ones and a random selection of others.

    Just imagine if full and plain disclosure were practiced in science generally and climate science in particular. This blog would be a lot shorter, for a start…

  5. John A
    Posted May 4, 2006 at 9:31 AM | Permalink

    I’m going to comment on Steve’s comment tacked to the bottom of the blog post:

    In the paleoclimate field where I’m knowledgeable, the biggest thing that would avoid problems is archiving data, code and requiring accurate data citation (as called for under AGU data policies, but not applied.) That would reduce the time and energy needed for subsequent researchers to check things. McCullough’s observation on this, in economic terms, is that the “cost” of replication needs to be reduced dramatically by requiring these steps. The job of the journal should be to ensure that these steps are done, not necessarily to do them. Economics journals such as the American Economic Review have implemented them; the refusal of Nature and Science to do so is inexcusable. That’s where I’d put my energy.

    You know what’s interesting, Steve? No-one in the paleoclimate community had the time, patience or persistence to go through MBH98 the way you have. In a sense, not providing full, true and plain disclosure has raised an almost insuperable barrier to audit within the discipline itself. It wasn’t other stem cell scientists who caught out Hwang Woo-suk, but scientifically knowledgeable bloggers with the time and the dogged persistence to check results against claims. Furthermore, the paleoclimate community is rather small – everyone knows each other by one or two degrees. That creates a peer pressure to conform and acquiesce that you don’t face.

    Nevertheless, without audit or the threat of audit, what would be the point of the disclosure?

  6. Paul Gosling
    Posted May 4, 2006 at 9:46 AM | Permalink

    No amount of auditing will stop the determined fraudster. If I want to, I can just make up data. If I know I am likely to be audited, I will ensure that I do a sufficiently good job to cover my tracks. Scientists are no different from people selling widgets; we are trying to make a living. When our jobs are on the line, it is tempting to ignore the outlier, or make up the data for the replicate that didn’t work, to make sure our paper will be accepted by the best journal we can. Of course some people go much further than that. However, the fact that such high-profile frauds have been uncovered recently gives me faith, if not in the peer review process, then at least in the scientific community.

    P.S. we are expecting random audits by project funders in May.

  7. jae
    Posted May 4, 2006 at 9:47 AM | Permalink

    The peer review system is better than nothing, but not a lot better. As pointed out in the NY Times article, it has numerous flaws and has been greatly over-valued in some circles. As noted in #1 above, the word “club” is the best word to describe the scientific journal mentality, I think. What I find most disturbing are those so-called scientific journals that get into politics. That is not their place. Of course, there is a lot of variability, with certain journals being much more careless and political than others (as we have seen on this blog).

  8. Peter Hartley
    Posted May 4, 2006 at 9:57 AM | Permalink

    Another issue that the above quotes do not address is that a journal can make both Type I and Type II errors. The focus above and in recent discussion of the flaws of peer review has all been on the examples of bad papers that have made it through the system and been published.

    Another problem is that many good papers get rejected. In particular, papers that are innovative and present a radically different view on a subject can be excluded by the peer review process. This need not be the result of a conspiracy of the “ruling elite” in the field. Something that is more innovative requires more effort for a reviewer to examine, and typically peer review is a voluntary activity with a small payoff — perhaps only an improved reputation with the managing editor. A reviewer is taking more of a risk with his or her reputation with the editor to approve something that is more innovative.

    A consequence of this systematic effect is that paradigms can persist for a very long time even when the evidence for them is quite weak. There is a very strong incentive for academics to write papers that are no more than marginal additions to the reigning paradigm in the field.

  9. fFreddy
    Posted May 4, 2006 at 10:11 AM | Permalink

    Re #5, John A

    Nevertheless, without audit or the threat of audit, what would be the point of the disclosure?

    John, I’d say that complete disclosure brings an automatic threat of audit. You never know when someone from completely outside the field is going to get interested, and start looking into your work. It’s not the certainty of audit that matters so much as the possibility.
    The other issue, of course, is what are the negative consequences if your work does not pass audit. If the only sanction is that you have to produce the data you have not previously disclosed – i.e., do what you should have done in the first place – then the risk/return ratio is skewed in favour of sloppy work. There needs to be a big disincentive to failing audit.

  10. ET SidViscous
    Posted May 4, 2006 at 10:20 AM | Permalink

    Good science stands on its own. An audit can’t show a rigorously proven hypothesis to be incorrect.

  11. Gary
    Posted May 4, 2006 at 10:36 AM | Permalink

    Re #4

    Democratization may be a buzzword, but by not capitalizing it I meant to identify the process that we’re seeing in science – as in education and commerce – where more people are becoming involved with the process. Wikipedia and eBay are in the vanguard. Something in the science domain (maybe this blog is an early prototype) is sure to arise because it broadens access. Science, like education and commerce, still will have its elite, and the meritorious still should rise to the top, but they will have a bunch of very interested people interacting with them, both to learn and to challenge. This is a good thing if the objective is to advance knowledge more quickly. As #5 points out, insiders are too busy to check every detail and tend to accept the work of colleagues they know (unless they are competing for the same grant pool). It is taking “democratization” to get full disclosure out front as an important criterion for publishing the results of research.

  12. fFreddy
    Posted May 4, 2006 at 11:12 AM | Permalink

    Re #10, Sid

    An audit can’t show a rigorously proven hypothesis to be incorrect.

    Very true. But it can show that a hypothesis is not as rigorously proven as previously thought.

  13. fFreddy
    Posted May 4, 2006 at 11:20 AM | Permalink

    Re #11, Gary
    Gary, I love the idea, but how do you avoid ending up with something like Realclimate ?

  14. ET SidViscous
    Posted May 4, 2006 at 11:22 AM | Permalink

    fFreddy
    And?

    That was my point. If scientist X has done his homework, then he has nothing to fear from an audit. If he hasn’t…

    Well, caveat emptor.

    Which, yes, I know is not perfect Latin for the situation, but close enough.

  15. The Knowing One
    Posted May 4, 2006 at 11:39 AM | Permalink

    Re #6 (by Paul Gosling), fortunately, you cannot “just make up data”. It is extremely difficult to make up good data, because there is typically some statistical test that will show the data to have been fabricated. This is one of the ways in which R.K. Chandra was found out. To quote from the CBC story, the editor of the British Medical Journal sent one of Chandra’s papers to

    a statistical expert with a lot of experience of research misconduct … the statistician said: “This has all the hallmarks of having been completely invented.”

    Real data usually has structure that is very hard to accurately reproduce, in full detail, without using the actual process—i.e. without gathering it for real.
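
    The kind of screen that editor describes can be sketched in a few lines. Below is a generic terminal-digit test, a minimal sketch with invented example numbers (not data from the Chandra case): honestly measured values tend to have roughly uniform final digits, while invented values over-use round digits like 0 and 5.

```python
# Terminal-digit screen: a simple statistical check sometimes used when
# data fabrication is suspected. It compares last-digit frequencies
# against a uniform distribution with a chi-square statistic
# (df = 9; the 5% critical value is about 16.9).
from collections import Counter

def terminal_digit_chisq(values):
    """Chi-square statistic of last-digit counts vs. uniform."""
    digits = [abs(int(v)) % 10 for v in values]
    expected = len(digits) / 10
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Invented "measurements" that cluster on round last digits...
fabricated = [120, 135, 140, 150, 165, 170, 180, 195, 200, 205] * 10
# ...versus values whose last digits are spread perfectly evenly.
varied = list(range(137, 237))

print(terminal_digit_chisq(fabricated))  # 420.0, far beyond 16.9
print(terminal_digit_chisq(varied))      # 0.0, exactly uniform
```

    Real forensic checks are subtler (instrument precision affects digit patterns), so this shows only the flavour of the idea.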

  16. John A
    Posted May 4, 2006 at 12:06 PM | Permalink

    Re #15

    Of course, audit is one way to catch out fraud, but more generally a bad paper will be difficult or impossible to reproduce.

  17. John Hekman
    Posted May 4, 2006 at 12:14 PM | Permalink

    I don’t think that there are payoffs for scientists to audit the work of others. In economics, there was (and may still be) a “Journal of Replication” to check on empirical results. It did not catch on, because you are not awarded tenure for spending hundreds of hours to reproduce work that has already been done.

    In an area like Medieval Warm Period studies, there seem to be enough studies so that a conclusion can be drawn by looking at the literature as a whole. The problem with MBH98 was that it was a single study and so much weight was put on it. Even the hockey team has acknowledged this, and they point to “many” other studies that back up their conclusions. The problem is, as Steve and Ross have shown, that all of the studies have the same problem proxies.

    So I don’t think the peer review system will change much. It won’t become like the FDA drug approval process. But some smaller group of studies whose empirical results are relied upon by policy makers should be checked very thoroughly.

    Hey, I’m guilty. My Ph.D. thesis, paid for by the EPA and NSF, estimated parameters of demand and supply to explain the geographical movement of the steel industry. The parameters were then used by others to calculate the likely effects of optimal pollution taxes on the location of industry. No one ever checked my data. I think I have now “lost” my Fortran program.

  18. fFreddy
    Posted May 4, 2006 at 12:55 PM | Permalink

    Re #14, ET SidViscous
    Sid, on re-reading #10, I realise I was being a bit dozy. Sorry !

  19. John A
    Posted May 4, 2006 at 1:12 PM | Permalink

    re: #17

    I don’t think that there are payoffs for scientists to audit the work of others. In economics, there was (and may still be) a “Journal of Replication” to check on empirical results. It did not catch on, because you are not awarded tenure for spending hundreds of hours to reproduce work that has already been done.

    That is the point. However, Steve McIntyre has shown that replication and audit can be fruitful endeavors scientifically.

    What is needed is a Nobel Prize for “Most Spectacular Result from a Replication or Audit in Science”. I think the bloggers who worked over Hwang Woo-suk should win it…

  20. The Knowing One
    Posted May 4, 2006 at 1:30 PM | Permalink

    Re #17 (by John Hekman), if you don’t find anything wrong when you audit a scientific work, then you are right, there is little or no payoff. There will usually be a large payoff, though, if you do find something wrong. The only problem is that then the payoff tends to be negative.

    If you report that someone in a particular field of science has done really bad work, you are effectively criticizing that whole field (else why did the field not spot the problem before?—and how could a really good field produce really bad work?). Researchers in any field tend to base part of their identity on being in that field. So if you expose bad work in a field, you are ultimately threatening the identities of many researchers in that field. Those researchers are likely to fight in a manner that seems beyond reason, in order to protect their identities.

    There have been lots of cases like that. Steve M.’s case is unusual, because he got some politicians involved (due to global warming). Blogs like this also help, especially for morale.

  21. Jeremy
    Posted May 4, 2006 at 2:08 PM | Permalink

    Gary wrote:
    …Blogs are the beginning of the democratization of science by providing a previously unavailable opportunity for informed people to open the clubroom doors. The first ones in might get stopped by the bouncer, but a horde will follow and bring in the fresh air eventually.

    I love this idea frankly. I’ve always maintained that anyone can do “hard” sciences/math. I frankly get irritated when ordinary people act like science/math is out-of-reach for them intellectually. Plainly, I’m sick of people acting like I come from some grandiose cathedral of giant intellectuals just because I majored in Physics. Anyone can do this stuff, and I mean that quite sincerely. Which leads me to…

    Peter Hartley wrote:
    Another problem is that many good papers get rejected. In particular, papers that are innovative and present a radically different view on a subject can be excluded by the peer review process…

    This is, in my view, another great justification for “democratizing” or, to use a term from Eric S. Raymond, turning the scientific community away from “cathedral-like” authority to one more like a “bazaar”, where truth manifests itself from the masses.

  22. TCO
    Posted May 4, 2006 at 2:19 PM | Permalink

    You can get your groundbreaking but controversial papers published. If the “club” blocks you at Nature, you can get it into a specialized journal. There are a host of them: European ones, society ones, Verlag ones, Elsevier, etc. Typically there is even more than one specialty within which you can report.

    When someone isn’t published, it’s more likely that they don’t know how to write good papers, are sending to the wrong journals, or are just not writing up their research and sulking over a rejection.

    If you hang in there and publish a few papers within the field (even if not at Nature or Phys Rev), people will notice the thread.

  23. TCO
    Posted May 4, 2006 at 2:25 PM | Permalink

    I don’t believe the average person has the capability to do mathematical physics. I do think that people have the ability to push for explanation and to extract physical insight in many cases where the work is unnecessarily opaque. For instance, in VS06 I would like to ask Zorita to what extent he considers the different reconstructions independent (especially “MM” and MBH98). I would also like to deconvolute the much-discussed “year where warming is anomalous” to see if this is basically just saying that the recons are “higher” than previously (with a little autocorrelation stuff thrown in), but that is basically the insight (higher deviation than observed previously, at some significance level).

  24. Bob K
    Posted May 4, 2006 at 3:33 PM | Permalink

    Don’t know if anyone has posted about this, and not sure where to put it.
    The US gov. has put the draft of the new IPCC report online.

    Here’s a link to a Tim Worstall article about wanting public comments before May 9th.
    http://www.tcsdaily.com/article.aspx?id=050406H

    snip …
    OK, so, fine, the process goes on apace and what’s so remarkable about that? Well, the draft report has historically been kept secret. Sooper-seekkrit. You see, the scientists (and the rather more political creatures who write the summaries) want to be able to work in peace and deliver the report at once in a blaze of publicity. So what has the US Government gone and done? Posted a draft of it on the Internet! (Right here).

    Now, admittedly, you do have to apply for a password to get onto the site, but I can assure you that it’s set up as an automatic function and if the choice of password that I got is any indication we’re all going to be getting exactly the same one.
    snip …

  25. Bob K
    Posted May 4, 2006 at 3:48 PM | Permalink

    After further investigation I see they want experts and gov. officials to review it. There might be some restrictions on those allowed to download it. You can try applying for a password here.

    Because the report is still in draft, distribution of the materials for review will be through a password-protected website. If you are interested in reviewing the report, send a message — with your name and affiliation in the subject line — to ipcc-usgrev@climatescience.gov to obtain the username and password required to access the report. Then follow this link to download the report and to obtain explicit instructions regarding comments formatting.

  26. Frank H. Scammell
    Posted May 4, 2006 at 4:19 PM | Permalink

    These are a bunch of very interesting and insightful comments. None of them seem to me to address the underlying problem. As far as I can tell, all of the hockey team researchers had a goal in mind before they began their research: our “sponsors” want to control energy utilization, distribution, and/or global long-term economic development (i.e., political issues), so we will reduce the MWP or wipe out the Little Ice Age, or whatever. Easily done if you make the explanation sufficiently baffling and not obviously wrong. If we are to get a well-funded, highly publicized study started, we need to study something that the MSM can run with. Even if it is later proven incorrect, it often doesn’t matter, because the correction or retraction never surfaces for the average reader. Furthermore, we will get not only the journal editors but also the popular magazine editors (Scientific American, Discovery, Popular Science, etc.) on our side. Not that difficult, because they can all see through our subterfuge and realize that our intentions are good and appropriately liberal (additionally, we all have to make a living). No, I don’t want to make this thread political, just get people thinking about whether the problem is possibly bigger than is obvious.

  27. Armand MacMurray
    Posted May 4, 2006 at 4:49 PM | Permalink

    Re: #25

    After further investigation I see they want experts and gov. officials to review it.

    Actually, this review phase is called the “expert and government” review by IPCC, but the US government says this:

    On behalf of the U.S. Department of State, the Climate Change Science Program Office (CCSPO) is coordinating the solicitation of comments by U.S. experts and stakeholders to inform development of an integrated set of U.S. Government comments on the report. To be considered in development of the U.S. position, comments must be received at CCSPO by close of business 9 May 2006. Comments must be sent to the following e-mail address using the template for comments that is provided (see detailed instructions below): wg14AR-USGreview@climatescience.gov. Comments submitted as part of the U.S. Government Review should be reserved for that purpose, and not also sent to the IPCC Working Group I Technical Support Unit as a discrete set of expert comments.

    (http://www.climatescience.gov/Library/ipcc/wg14ar-review.htm)
    I think it’s arguable that *every* US resident and/or citizen counts as a “stakeholder” for this (sorry, Steve and the many other capable non-US citizens/residents!), so apply away! Just note that they ask “PLEASE DO NOT CITE, QUOTE, OR DISTRIBUTE THE DRAFT REPORT.”

  28. Gerald Machnee
    Posted May 4, 2006 at 6:00 PM | Permalink

    Interesting studies going on today that might make interesting audits are: 1) Is coffee good for you? and 2) Is a glass of red wine a day good for you?
    I think I have seen yes and no to both of these in the past few years. I think these studies can go either way depending on your sample selection (and where have we seen that?), whether the sample is large enough, and other contributing factors masking the results. Anyway, the results can be deliberately skewed, or vary unintentionally, just by the sample used. This would be one area where the data used would be interesting.
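
    The selection effect described above can be simulated in a few lines. This is a toy simulation with invented numbers (not any actual coffee or wine study): there is no real effect in the population, yet a selectively recruited sample shows a large apparent one.

```python
# Toy simulation: a null effect turned into a large apparent effect
# purely by how the sample is selected.
import random

random.seed(0)
# "Population": a health score that is independent of coffee drinking.
population = [(random.random() < 0.5, random.gauss(50, 10))
              for _ in range(10_000)]

def mean_diff(sample):
    """Mean health score of drinkers minus non-drinkers."""
    drinkers = [h for drinks, h in sample if drinks]
    others = [h for drinks, h in sample if not drinks]
    return sum(drinkers) / len(drinkers) - sum(others) / len(others)

# On the whole population the difference is near zero (no real effect).
print(round(mean_diff(population), 2))

# Biased recruitment: healthier drinkers and sicker non-drinkers are
# more likely to end up in the study, producing a spurious
# "coffee is good for you" gap of several points.
biased = [(d, h) for d, h in population
          if (d and h > 45) or (not d and h < 55)]
print(round(mean_diff(biased), 2))
```

    The same data-generating process supports opposite conclusions depending on who gets sampled, which is exactly why the raw selection criteria would be the interesting thing to audit.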

  29. ET SidViscous
    Posted May 4, 2006 at 6:40 PM | Permalink

    Those are epidemiology studies, so the rules are somewhat different. And they are audited quite often; there the audit process is much further down the road, whereas here Steve is a ground-breaker.

    John Brignell has done a lot of work like that, and the audit process is so precise that he even has categories for the mistaken methods they use.

    He’s also attacked by the usual suspects.

  30. Peter Hartley
    Posted May 4, 2006 at 8:11 PM | Permalink

    Re 22 — TCO I seem to recall seeing remarks from you elsewhere on this site downplaying the first publication by Ross and Steve in Energy and Environment and further suggesting that their article in the more elite GRL was the only one that “really counts” — or words to that effect. Such sentiments would appear to be at odds with your comments here :-). All joking aside, I think it is not uncommon for the really big breakthroughs to be published by outsiders in non-mainstream venues.

  31. TCO
    Posted May 4, 2006 at 8:48 PM | Permalink

    Peter:

    It’s night and day. It’s one thing to move a step down from Nature to the Journal of the American Chemical Society, then down to a specialty chemical journal (ACS) or some other specialty journal (Verlag), or to a closely related field (many people are working on field boundaries, so you can publish in either field). It’s a whole other thing to publish in a Cold Fusion journal. EE is an inappropriate place for Steve to publish. It’s not much better than a conference proceedings (and yes, some conference proceedings are “peer-reviewed” in theory, but it’s a joke compared to what any decent specialty journal would do). EE sucks. Is it even abstracted? If not, then it’s not part of the real literature.

  32. Paul Gosling
    Posted May 5, 2006 at 5:06 AM | Permalink

    Re #15

    My understanding is that Chandra made up his means and standard errors, and it was the standard errors that aroused suspicion. He didn’t make up his raw data. Though I could be wrong, and am happy to be corrected. I am no statistician, but I would be surprised if, when raw data is made up, this can be detected using statistics. I am sure there is someone here who can tell me if this is so.

  33. Louis Hissink
    Posted May 5, 2006 at 5:42 AM | Permalink

    Peer review in this sense is like asking SS guards during WWII whether any extermination camps existed.

    Of course not.

    So one asked the wrong question, and in that restricted sense, the SS were correct.

    Frame the question differently and other answers come to mind.

    Cynic that I am, but Peer Review basically means substantiation of the existing paradigm, whatever that would be.

  34. The Knowing One
    Posted May 5, 2006 at 5:57 AM | Permalink

    Re #32 (by Paul Gosling), Chandra made up the raw data. You would know this if you read the article that my comment linked to.

  35. Gerald Machnee
    Posted May 5, 2006 at 6:39 AM | Permalink

    Re #32 and #34: I think he made up raw data. He needed over 200 subjects and was writing the report when his assistant had only a handful of subjects to study.
    At times, I think the detail that Steve M has shown is more than an audit – an autopsy?

  36. Gary
    Posted May 5, 2006 at 6:46 AM | Permalink

    Re #13 (fFreddy) and #21 (Jeremy)

    “Open science,” for lack of a better term, IS going to be messier than what we’ve had. Maybe it might make things worse for a while, too, as it involves more political passion because it involves more people. But that is a benefit in the long run, because the real problem with political decisions is that decision makers get very limited input (usually just cherry-picked data from a lobby). Better to have everybody yelling than just a few; it’s more likely to get the solution out on the table. Group decision-making is found in bird flocks and honeybee swarms, where signals from a few cognoscenti are picked up, amplified, and become the group decision. It’s going to happen with democratized science and science policy too.

  37. Greg F
    Posted May 5, 2006 at 7:07 AM | Permalink

    This seems like an appropriate place to post this bit of information. It appears that the “gatekeepers” left the barn door open.

    On behalf of the U.S. Department of State, the Climate Change Science Program Office (CCSPO) is coordinating the solicitation of comments by U.S. experts and stakeholders to inform development of an integrated set of U.S. Government comments on the report. To be considered in development of the U.S. position, comments must be received at CCSPO by close of business 9 May 2006.

    Since policy related to climate change affects everybody … everybody is a stakeholder! The 2nd order draft of the “Summary for Policymakers” and “Technical Summary” of Working Group I is available to any U.S. citizen simply by submitting an email for a username and password. The process is automated, so you will receive your username and password in a matter of minutes.

  38. Paul
    Posted May 5, 2006 at 7:07 AM | Permalink

    RE #36-

    Group decision-making is found in bird flocks and honeybee swarms where signals from a few cogniscenti are picked up, amplified and become the group decision. It’s going to happen with democratized science and science policy too.

    Isn’t that exactly what we have going on right now? A few people started making a buzz, which was then picked up by the MSM and the politicos. Now AGW is a political force that “everyone” seems to have picked up on.

    Maybe it’s like the honeybee who’s found the path to the trap and is sending everyone to their doom.

  39. Michael Jankowski
    Posted May 5, 2006 at 7:17 AM | Permalink

    but if the raw data is made up, I would be surprised that this can be detected using statistics.

    Google and read about “Benford’s Law.”
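    To make the suggestion concrete, here is a minimal sketch of a Benford’s Law check (the function names are mine, not from any published fraud analysis): in many naturally occurring datasets spanning several orders of magnitude, the leading digit d appears with probability log10(1 + 1/d), so digit 1 leads about 30% of the time. Fabricated numbers often have roughly uniform leading digits, and a large deviation from the Benford distribution is a red flag, though never proof, of invented data.

    ```python
    import math

    def first_digit(x):
        """Leading (most significant) digit of a nonzero number."""
        # Scientific notation puts the leading digit first, e.g. '3.140000e+02'.
        return int(f"{abs(x):.6e}"[0])

    def benford_expected(d):
        """Benford's Law probability that the leading digit is d (1..9)."""
        return math.log10(1 + 1 / d)

    def benford_deviation(values):
        """Mean absolute deviation between observed leading-digit
        frequencies and the Benford distribution. Larger values suggest
        the data may not be naturally occurring."""
        counts = [0] * 9
        for v in values:
            if v != 0:
                counts[first_digit(v) - 1] += 1
        n = sum(counts)
        return sum(abs(counts[d - 1] / n - benford_expected(d))
                   for d in range(1, 10)) / 9
    ```

    Note the caveat: the test only applies to data that plausibly spans multiple orders of magnitude (expenses, populations, measurement counts); it says nothing about quantities confined to a narrow range.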

  40. Paul Gosling
    Posted May 5, 2006 at 7:58 AM | Permalink

    Re #34, 35.

    I don’t know about Chandra’s raw data, as he has always refused to release it. Nothing I have read makes it clear exactly what he did. Anyway, making up fictitious researchers to submit results supporting your own false data shows much more dedication to fraud than just making up data. When it reaches that sort of level, surely there is a case for the police to be involved. Obtaining money by deception, etc.?

    Re #39.

    Very interesting, thanks for that.

  41. Brooks Hurd
    Posted May 5, 2006 at 10:43 AM | Permalink

    There is a natural tendency for all researchers to emphasize data which supports either their own, or their sponsors’, assumptions. The problems arise when this tendency leads to cherry-picking and/or suppression of contrary data. Peer reviewers would be hard pressed to perform the audits necessary to uncover these distortions. They could, however, apply the adage that if it looks too good to be true, it probably is. That should arouse reviewers’ suspicions to the point that they at least ask where the raw data is archived.

    When the author’s reply sounds like what Phil Jones told Steve, alarms should start going off in the reviewer’s mind.

  42. Steve Sadlov
    Posted May 5, 2006 at 1:51 PM | Permalink

    RE: #4. John A, what you have outlined is essentially a TQM approach to scientific publication. Naturally, the approach could and should be used upstream, within the actual research itself. Surprisingly (or maybe not), many scientists and engineers have only a very primitive understanding of quality and Sigma. Or if they do understand it, they avoid it on the excuse that it adds too much overhead and therefore slows things down and inhibits the creative process. Red herrings, one and all …

  43. Armand MacMurray
    Posted May 5, 2006 at 2:53 PM | Permalink

    Re#37

    The 2nd order draft of the “Summary for Policymakers” and “Technical Summary” of Working Group I is available to any U.S. citizen simply by submitting an email for a username and password.

    You can also be a US permanent resident.

  44. John A
    Posted May 5, 2006 at 3:29 PM | Permalink

    Re: #42

    Steve S,

    I’m a very big fan of W. Edwards Deming as regards statistical control of quality. It’s amazing to me that the one area where Deming’s method has yet to make any impact is scientific research reporting. Instead we have what could be described as a naive, Boy Scout honor system, which has amply demonstrated that it’s not up to the task.

  45. TCO
    Posted May 5, 2006 at 5:29 PM | Permalink

    DOE is coming into things…

  46. Gary
    Posted May 5, 2006 at 6:41 PM | Permalink

    Re #38

    Isn’t that exactly what we have going on right now? A few people started making a buzz, which was then picked up by the MSM and the politicos. Now AGW is a political force that “everyone” seems to have picked up on.

    Maybe it’s like the honeybee who’s found the path to the trap and is sending everyone to their doom.

    Right; however, with an open process dissenters get to chime in too, and if they make enough sense to enough people, the tide can turn. Honeybee swarms don’t always find the best hive sites, but they usually do, after enough scout bees advocate for one choice among several. All I’m saying is that a closed process is more likely to send us into a trap, because the amount of information is limited. The current AGW bandwagon has advanced quite far because of limited information that all seems to point in the same direction. Only lately have the dissenters made enough coherent noise to get a hearing.

  47. Posted May 8, 2006 at 1:14 PM | Permalink

    jae wrote: “As noted in #1 above, the word “club” is the best word to describe the scientific journal mentality, I think.”

    For what it’s worth, “club” pretty well describes the world of computer science peer-reviewed forums as well. It is a good system, so long as you realize what it is: it focusses discussion among the people who are in the club. Don’t get me wrong: it is a great system in general, and it greatly helps scientific progress!

    Passing peer review, though, is no Seal of Truth. It basically means that other people in the club will want to read it.

    To find the best theories, one needs to hunker down and read the material. If the club members are speaking in a foreign jargon, then they just are not going to get their results out. Draw your own conclusions (it is not a simple topic!), but notice at least that “I saw it in Nature” does not cut it for science.