How To Publish A Scientific Comment in 123 Easy Steps

is an engaging account by Rick Trebino of Georgia Tech of his experience trying to publish a scientific comment in a field far less controversial than climate science. Readers can doubtless think of analogous experiences in climate science. (h/t Chas)

Trebino was stonewalled by the authors when he sought data and methods – again, readers can doubtless think of analogous situations.

Trebino closes by reflecting on how matters might be improved – his first recommendation is compulsory archiving of data and parameters as a condition of journal publication, with a sanction for misconduct against authors refusing to do so after publication.


  1. Mailman
    Posted Sep 24, 2009 at 2:23 PM | Permalink

    You know… I think I’ve seen this being called for elsewhere, but for the life of me I can’t remember where I saw it. It was also making reference to data used for climate change, so maybe it was on this site or some other “skeptical” site?


  2. J in L du B
    Posted Sep 24, 2009 at 2:44 PM | Permalink

    This is one of the reasons I took an industrial job with an operational component after my PhD in Physics and a couple of years as post doc and junior researcher. Haven’t regretted it at all. Besides routine production work there was always enough time spent on applied research and development in the companies I’ve worked for and a need for input at various stages through the “innovation swamp” to keep life interesting. What amazes me is how much time is spent “rediscovering” just because original data is missing or even development of an equation in the literature isn’t clear and results have to be verified before limited development funds can be allocated to put the scientific result to work.

    Don’t doubt a word of this.

  3. Gerald Browning
    Posted Sep 24, 2009 at 3:07 PM | Permalink


    Heinz and I wrote a manuscript on numerical splitting methods (reference available on request). The manuscript exposed the main method currently being used in meteorology as being unstable. The Editor (being afraid for his job) forced us to change the manuscript to a comment so that the unscrupulous advocates of the method could respond to our mathematically rigorous manuscript. The rules allow the responder the last word, without the authors of the comment being allowed a rebuttal. I think you can see why the game was played.

    The unstable splitting method is still being used by a number of modelers, although the responders now offer an alternative well-known numerical method. The humorous thing here is that the responders claimed that there was a problem with the Browning-Kreiss multiscale system, but an accurate and stable semi-implicit method applied to the unmodified system produces exactly the same results (reference available on request). The natural scientific question is what result is obtained by the unstable splitting method. No comparisons have been made with a stable numerical approximation of the multiscale system or the stable semi-implicit numerical approximation of the unmodified system. Anyone suspicious?
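    The general point here – that an unstable time-stepping scheme can silently produce garbage while a stable one converges – can be illustrated with a deliberately simple toy problem. This is not the Browning-Kreiss multiscale system or the splitting method in question; it is just the standard stiff test equation dy/dt = -λy, where an explicit scheme with too large a step blows up while an implicit scheme remains stable:

```python
def explicit_euler(lam, dt, steps, y0=1.0):
    """Explicit (forward) Euler for dy/dt = -lam*y.

    Each step multiplies y by (1 - lam*dt); if |1 - lam*dt| > 1
    the numerical solution grows even though the true solution decays.
    """
    y = y0
    for _ in range(steps):
        y = y * (1.0 - lam * dt)
    return y


def implicit_euler(lam, dt, steps, y0=1.0):
    """Implicit (backward) Euler: unconditionally stable for this problem.

    Each step multiplies y by 1/(1 + lam*dt), which always has
    magnitude below 1 for lam, dt > 0.
    """
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)
    return y


lam, dt, steps = 100.0, 0.05, 40   # lam*dt = 5 > 2: explicit scheme is unstable
print(explicit_euler(lam, dt, steps))   # magnitude grows without bound
print(implicit_euler(lam, dt, steps))   # decays toward the true solution, 0
```

    Both schemes approximate the same equation; only a side-by-side comparison against a known-stable method exposes the failure – which is exactly the comparison the comment above says was never made.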


  4. PhilH
    Posted Sep 24, 2009 at 3:30 PM | Permalink

    This needs to be a story in one of the “popular” magazines like Wired. If anyone knows anybody there, please bring this to their attention. This kind of abomination needs to be brought to a wider audience. Peer review, my ass.

  5. Andrew
    Posted Sep 24, 2009 at 3:33 PM | Permalink

    A few years back I noticed a factor of ten numerical error in a medical paper. I called up the main author and she cheerfully admitted the error, said that they’d already noticed it and informed the journal, who had said they didn’t have room to publish a correction any time soon.

  6. Posted Sep 24, 2009 at 3:48 PM | Permalink

    Steve, thanks a lot for posting this. I have already forwarded it to a slew of people…mostly academics, by the way. Human, all too human.

    So while we’re sharing stories: a certain professor (who will remain nameless) became very, uh, skeptical of peer review in midlife. So when he finished a paper, he made a list of journals, in order of his preference, where he thought the paper might fly, and went down the list. If the first journal rejected it, he would: (i) throw away the referee reports unread; (ii) cross off the journal; (iii) wait two weeks; and (iv) send the paper to the next journal on the list.

    It came to pass that he forgot step (ii) once, and sent the paper back to the same journal two weeks after it had been rejected by that journal.

    It was published.

    • Steve McIntyre
      Posted Sep 24, 2009 at 4:54 PM | Permalink

      Re: NW (#6),

      As someone who’s spent most of his life unaware of academic peer review politics – but familiar with forms of due diligence in other fields – I sometimes feel like an anthropologist among South Sea Islanders in the 19th century.

      IMO academics are far too quick to credit the unquestionable progress of science to peer review as practised in modern academic journals.

      In a securities offering, the guiding principles are “full, true and plain disclosure”. Ultimately commissions don’t try to rule on whether an offering is going to be a good one or not, though they try to ensure, for example, that known criminals are not involved.

      I would prefer that reviewers spent more time ensuring that authors had provided sufficient materials and data so that their results were as replicable as possible. For paleoclimate articles with today’s technology, I would like to see quite vast Supplementary Information, so that the article becomes a sort of extended abstract for the really detailed technical information.

      I think that reviewers spend far too much time trying to protect their POV and not enough time ensuring full disclosure. In climate, of course, people confuse journal peer review with due diligence.

      • bender
        Posted Sep 24, 2009 at 4:58 PM | Permalink

        Re: Steve McIntyre (#9),

        I think that reviewers spend far too much time trying to protect their POV and not enough time ensuring full disclosure.

        It’s a really interesting perspective. Ten years ago – before this debacle – I would not have agreed. Now I do.

  7. Posted Sep 24, 2009 at 4:26 PM | Permalink

    From Fix #12, Trebino’s Addendum to the Addendum: “…What if those opposed to taking action against global warming were to make the claim that science shouldn’t be believed in this matter because its process is so rife with poor ethics that it can’t be trusted…”

    What if, indeed?

    Steve: This is one reason why I’m baffled at the reluctance of IPCC to require authors to archive data and stop acting like adolescents. And by the acquiescence of the “community” in non-archiving and poor behavior by Thompson, Santer, Mann, Jones, Briffa and now Kaufman. I discourage readers from reading anything into this behavior other than the authors behaving poorly. If I were one of the many straightforward climate scientists who are worried about things and who archive their own data – think of someone like Judd Partin or Lowell Stott – I’d be angry that prima donnas were raising irrelevant issues. The answer to Trebino’s #12 is that climate science should be particularly scrupulous about this issue because of this concern – rather than acquiescing in and tolerating the opposite behavior.

    • stan
      Posted Sep 25, 2009 at 8:03 AM | Permalink

      Re: jorgekafkazar (#7),

      Expand the reason given in your referenced quote from “poor ethics” to include all the other methodological shortcomings, and I have made that very argument. I would guess that Steve doesn’t want a recitation of the litany of such shortcomings, but it is a frighteningly impressive list. If a lawyer at trial were to lay out such a list in the cross-examination of expert witnesses, it is a very good bet that the jury would reject the experts’ opinions.

  8. Posted Sep 24, 2009 at 5:10 PM | Permalink

    Comment On The Climate Station Of The University Of Hohenheim: Analysis Of Air Temperature And Precipitation Time Series Since 1878

    was refused in the original journal

    • John M
      Posted Sep 24, 2009 at 5:29 PM | Permalink

      Re: Hans Erren (#11),

      Interesting looking graph. This caught my eye recently, and your graph bears on it.

      Frankfurt, which is near several wine growing regions in Germany, saw 14 of their last 18 August months yield in an average temperature at least two degrees above normal.

      I wonder how they define “normal”.

      FWIW, I dug this out from climate explorer, but I’m not sure I was using the filters properly.

  9. Ed Moran
    Posted Sep 24, 2009 at 5:30 PM | Permalink

    He should “name and shame” in big letters in the headline.
    Else things won’t change.

    • Posted Sep 24, 2009 at 11:09 PM | Permalink

      Re: Ed Moran (#13), Well, there’s a risk in that sort of confrontational approach. It’s bad enough to post such a tells-all monograph, even without naming the journal. In fact, I envision the unnamed editor clenching a copy of Trebino’s post in his fist and saying through gritted teeth: “That’s it! I’ve had it! NO MORE MISTER NICE GUY!!”

  10. Sean Houlihane
    Posted Sep 24, 2009 at 5:43 PM | Permalink

    This looks like proof to me that the old traditional journal system for publishing papers has pretty much reached the end of its natural life and is no longer performing a valuable function. Journals belong to an age when communication was by letter and it took years for ideas to circulate around the world. In that context, peer review and editorial selection had some value, but no longer. Today’s requirement is for open access and reliable archiving – which can be achieved directly, without the middlemen. Very much like the music publishing industry: a parasite which is starting to be out-evolved, and which no longer provides mutual benefit.

  11. srp
    Posted Sep 24, 2009 at 5:49 PM | Permalink

    These sorts of proposals are not followed because of the purposes and incentives of the journals. Peer review is designed to make journals successful in the academic community. It has little to do with warranting quality for external users and so is nothing like a software design review or an investment prospectus.

    The first, and hardest, condition for journal success in the academic community is interest–you have to publish stuff that is of interest (both topically and in terms of its findings) to the readers and contributors to the journal. The second condition is a level of technical competence such that readers won’t feel they are wasting their time in perusing the articles. Neither of these has a whole lot to do with easy replication. And if you start imposing (what are perceived as) onerous documentation requirements, you’ll lose desirable (interesting and credible) submissions to other journals.

    These incentives cut across all fields. If you check out Derek Lowe’s blog In The Pipeline, which covers medicinal chemistry issues, he and his commenters frequently gripe and joke about their difficulties in replicating reactions (and yields!) published in the chemistry journals. They even point out particular journals and particular labs whose results they find particularly dodgy and which they have stopped paying attention to. It’s just the market finding its own equilibrium.

    As I have noted before, the problem with this equilibrium in climate science is that the audience is no longer other researchers in the same or allied fields who have the right incentives and capabilities to decide whether or not to believe a result. Now the audience is policymakers, private interests, and the public as a whole. Unlike the academic audience, which uses these articles as inputs into their own curiosity-driven research, the policy audience wants to base hugely consequential investments and regulatory decisions upon them. Our host’s proposed standards would do much to make that desire reasonable, but I think that the existing journals are pretty much stuck in the traditional academic incentive structure. Maybe the NSF grants should go to the journals as well as the researchers–that might shake things up.

  12. Steve McIntyre
    Posted Sep 24, 2009 at 6:07 PM | Permalink

    Markets of all types ultimately “work” – in the sense that worthless Enron stock is eventually perceived to have no value. The purpose of full, true and plain disclosure is to make the markets more efficient. Promoters seldom have much interest in making markets more efficient – but it’s a condition of dealing with the public.

    Climate scientists want to suck and blow when dealing with the public. They want to issue press releases – often even more lurid than the original article (something not permitted in penny mining promotions) – and they want IPCC to cite their results.

    Aside from any regulations at the journal level, it is shameful that IPCC does not impose its own requirements that authors archive data. I asked Susan Solomon about this at a CCSP workshop and got a reprehensible answer that this would be “interfering” with journals – even if the journals, like the Royal Meteorological Society journals – had no data policies or even if the journal didn’t enforce its professed standards.

    • srp
      Posted Sep 24, 2009 at 6:52 PM | Permalink

      Re: Steve McIntyre (#16),

      That’s the problem. The journals have no incentive to change their standards given their traditional academic role and the IPCC and the other advisory and policy bodies defer to the journals. The only leverage point I can see is if the EPA starts promulgating regulations and then federal data-quality standards and eventually litigation ensue. (I’m assuming public shaming as attempted at this site is unlikely to work by itself, although I have optimistic days.)

  13. Jeff Id
    Posted Sep 24, 2009 at 8:07 PM | Permalink

    As I read Rick Trebino’s post, his frustration and patience remind me of some of the adventures of our proprietor in submissions on Mann08 and 98. There is little incentive for a journal to correct a published paper. I read somewhere that Nature’s own policy is to make refutations harder to publish than papers. Perhaps that should be changed as well.

  14. Posted Sep 24, 2009 at 8:57 PM | Permalink


    There is an interesting comparison to climate research that should be noted here. Today NASA had a press conference about water on the Moon. I know one of the principal investigators (Carle Pieters) from Brown University. Carle has an instrument on the Indian Chandrayaan-1 spacecraft called the Moon Mineralogy Mapper (Mcubed). When the instrument was first taking data they found water, not only in the polar regions, which was somewhat expected, but also in the equatorial regions. Since the reigning paradigm is that there is little or no water outside of the polar regions, the data was received skeptically. They got together with the PIs of similar instruments from Cassini and from the Deep Impact spacecraft. Both of these spacecraft did flybys of the Moon and also found the water, not only in the polar regions, but in equatorial regions in places that no one thought it would be. (As an aside, many of us thought that it would be like this based upon Lunar Prospector and Clementine.)

    According to Carle, they spent months of data sifting specifically trying to falsify their findings, and it was only when comparing the data from the other instruments (previous looks at the data by the PIs had concluded that the water they found was contamination due to outgassing) that they finally figured out that the data was real and that it was revolutionary to our understanding of water on the Moon. In the near future this new research will lead to some interesting re-evaluations of a lot of planetary science, but what is important is that after an extensive effort to prove themselves wrong, they concluded that their data was good and that they had to go where the data led.

    Compare this methodology to current climate science work.

  15. Some Guy
    Posted Sep 25, 2009 at 1:25 AM | Permalink

    It seems to me that an opportunity exists for a journal of criticism of other journals. It would have rather wider appeal than the audience for any of the particular journals it covers.

    • tty
      Posted Sep 25, 2009 at 8:28 AM | Permalink

      Re: Some Guy (#21),

      Perhaps such a publication might be called “Journal of Egregious Errors”. It certainly would not have any problems with getting enough manuscripts. I could provide a couple myself.

      • Posted Sep 25, 2009 at 12:05 PM | Permalink

        Re: tty (#27), I used to subscribe to the notable “Journal of Irreproducible Results” (JIR). Since there is a high (if unintentional) humor content in some of the worst papers, perhaps JIR would consider adding a bimonthly feature covering certain egregious (and certainly irreproducible!) examples. Alternatively, they are set up to produce custom publications such as you suggest. See the bottom of their home page:

    • Ron Cram
      Posted Sep 26, 2009 at 5:48 AM | Permalink

      Re: Some Guy (#21),
      Re: tty (#27),

      After reading the thread further down, I see you both had a similar comment to mine above. I also think such a journal could become “high impact.”

  16. Jonathan
    Posted Sep 25, 2009 at 3:40 AM | Permalink

    My own experience with journal comments has been pretty depressing. I found a critical flaw in a paper published in a leading physics journal (in essence the proposed analysis method could only be applied if the answer was already known) and pointed this out in a comment. The editors sent it to the authors of the original paper, and on the basis of a statement by the authors that my argument was wrong the comment was rejected, with no independent review whatsoever.

    I rewrote it and submitted it to another journal as an original paper where it was immediately accepted with two positive referee reports.

    I have often said that peer review is a bit of a joke (at best it establishes that the paper is not obvious drivel – unless the authors are famous, in which case it doesn’t even establish that), but review of comments is a total joke. This is hardly surprising: since a comment is usually an implicit statement that the review process has failed, it is in the editor’s interest to reject comments wherever possible.

    • Posted Sep 25, 2009 at 1:38 PM | Permalink

      Re: Jonathan (#22),
      “The editors sent it to the authors of the original paper, and on the basis of a statement by the authors that my argument was wrong the comment was rejected, with no independent review whatsoever.”

      Indeed, that’s what happened to me too.

  17. Posted Sep 25, 2009 at 6:29 AM | Permalink

    Trebino’s comment has been recently published:
    Amplitude ambiguities in second-harmonic-generation frequency-resolved optical gating: comment
    Optics Letters Vol. 34, Iss. 17, pp. 2602–2602 (2009).
    The published version and the other authors’ reply can be read here for those who have access to the journal.

  18. Posted Sep 25, 2009 at 7:21 AM | Permalink

    RE Paul M #31,
    Step 124 is usually the charm in these matters!

  19. Posted Sep 25, 2009 at 7:47 AM | Permalink

    RE #30,
    My bad — the reconstruction is Jones and Mann (R Geoph 2004), not Jones, Parker and Briffa 2005. The latter is the instrumental series.

    In any event, Hanno’s graph of temperatures without CO2 splices the two series together as if they were one, something that the Team supposedly never indulges in. In fact, the reconstructed portion is heavily smoothed – perhaps with a 50- or even 100-year moving average – while the instrumental portion is much less smoothed, giving the impression of much greater volatility in the past century.
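    The smoothing artifact described above is easy to demonstrate with synthetic data. The sketch below (an illustration, not a reconstruction of Hanno’s actual graph or data) draws one statistically uniform noise series, applies a 50-point moving average to the “reconstruction” segment, leaves the “instrumental” segment raw, and compares their apparent variability:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "temperature" anomalies: one identical noise process throughout
series = rng.normal(0.0, 0.25, size=1000)


def moving_average(x, window):
    """Centered moving average via convolution (output is shorter than input)."""
    return np.convolve(x, np.ones(window) / window, mode="valid")


smoothed = moving_average(series[:900], 50)  # "reconstructed" portion: heavily smoothed
raw = series[900:]                           # "instrumental" portion: unsmoothed

# The spliced picture makes the recent segment look far more volatile,
# even though both segments come from the very same process.
print(np.std(smoothed), np.std(raw))
```

    A 50-point moving average of independent noise shrinks the standard deviation by roughly a factor of sqrt(50), so the unsmoothed tail of the spliced series appears several times more volatile than the smoothed portion – with no change at all in the underlying process.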

  20. Bruce
    Posted Sep 25, 2009 at 12:08 PM | Permalink

    “the vast majority of scientific papers are mainly correct.”

    I don’t believe this anymore.

    • PhilH
      Posted Sep 26, 2009 at 6:52 PM | Permalink

      Re: Bruce (#29), How many hundreds, or thousands, of “scientific” papers would you guess were written between 1912 and 1960 saying that Alfred Wegener’s continental drift theory was not only wrong but stupid?

      Nothing has changed.

  21. jorgekafkazar
    Posted Sep 25, 2009 at 12:09 PM | Permalink

    For some reason, the link didn’t post. One more try:

  22. ianl
    Posted Sep 25, 2009 at 7:45 PM | Permalink

    SMc quote from #16:

    “Climate scientists want to suck and blow when dealing with the public. They want to issue press releases – often even more lurid than the original article (something not permitted in penny mining promotions) – and they want IPCC to cite their results.”

    I’ve been reading this website for some few years now, and I believe that this is as close to the truth as SMc has permitted himself to come. His gentlemanly patience has been genuinely astonishing.

  23. Jim
    Posted Sep 25, 2009 at 11:42 PM | Permalink

    There is a basic issue here: it is very hard to set up an absolutely fair system to deal with “he said, she said” type disputes. Someone above mentioned that if you contradict known work, then the authors of the known work have to be contacted. And how many times have we heard “My work differs from the big names in the field, so they prevented it from being published”? Some journals actually allow you to nominate lists of people who should not referee your work due to conflict of interest.

    When I read the account, amusing as it was, a couple of things came to mind. First, life is not fair; Job realised that in biblical times. Second, there is more than one way to skin a cat. Basically, Dr Trebino should have realized that early, and beyond a certain stage he should have bailed on the comment and worked on plan B: write a paper and slide the comment into it. I have done that when a comment was rejected. If you are academically smarter, then you should be better at working the journal system to your advantage.

    BTW, I write a comment every two or three years. Every now and then I get in a bad mood, and instead of going home and kicking the dog, I write a comment on some piece of garbage that irritated me more than most. However, when I do this I do not always expect it to be published. Just noting that there are errors is often not enough to get published; some other point needs to be made.

  24. MarcH
    Posted Sep 26, 2009 at 3:00 AM | Permalink

    Here’s the abstract of Trebino’s comment:

    Amplitude ambiguities in second-harmonic generation frequency-resolved optical gating: comment.
    OPTICS LETTERS / Vol. 34, No. 17 / September 1, 2009

    The authors of an earlier paper [Opt. Lett. 32, 3558 (2007)] reported two “ambiguities” in second-harmonic-generation frequency-resolved optical gating (FROG). One ambiguity is simply wrong – a miscalculation. The other is well known and easily avoided in simple, well-known FROG variations. Finally, the authors’ main conclusion – that autocorrelation can be more sensitive to pulse variations than FROG – is also wrong.

    For the record, the comment was one page including references. The final paragraph reads:

    “We thank the reviewer for confirming our calculations.”

    Thanks Rick for sharing your experience. It seems that the FROG may live another day.

  25. Ron Cram
    Posted Sep 26, 2009 at 5:39 AM | Permalink

    If science is self-correcting, perhaps we need a “Journal of Scientific Review.” The journal would be dedicated to improving the process of scientific review and would allow authors and reviewers to publish the full story regarding papers they have written, reviewed, commented on or been denied a comment on. Scientists could also publish comments on corrigenda which fail to admit all of the faults, errors, misdirections and faulty conclusions of pseudoscientific papers.

    Of course, this would be in addition to the earlier proposed “Journal of Climate and Statistics.” Perhaps I should go into the publication business.

  26. Posted Sep 26, 2009 at 5:38 PM | Permalink

    This sort of story is just depressing. When did science become such a bad word?
