Climategatekeeping: Jones reviews Mann

As noted previously, the Climategate letters and documents show Jones and the Team using the peer review process to prevent publication of adverse papers, while giving softball reviews to friends and associates in situations fraught with conflict of interest. Today I’ll report on the spectacle of Jones reviewing a submission by Mann et al.

Let’s recall some of the reviews of articles daring to criticize CRU or dendro:

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting (Briffa to Cook)

If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, (Cook to Briffa)

Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. (Jones to Mann)

Previously we also looked at Jones’ soft review of Schmidt (2009) and looked at his attempts to keep Michaels and McKitrick (2004) out of IJC and IPCC AR4.

Here is Jones’ review of Mann et al dated November 1, 2008 (I can’t tell whether this refers to an article in print or not – the review mentions pages ranging from 85 to 247, which seems too long for any Mann paper published since Nov 1, 2008.)

The paper is generally well written. I recommend acceptance subject to minor revisions. I will leave it to the editor to check that most of my comments have been responded to.

Minor Comments
… [17 “minor comments” referring passim to pages ranging from 85 to 247]

That’s it. No “going to town”. Lucky, I guess, that there was apparently none of that “math stuff” to worry about. Lucky that they didn’t have to worry about a paper that might do “damage”. Or about how to reject a paper that might do “damage” when the math was “correct theoretically”. Lucky that no editor wrote to Jones asking him to provide a “hard and if required extensive case for rejecting”. Nope, none of that.

Between the Team, a few words sufficed:

The paper is generally well written. I recommend acceptance subject to minor revisions. I will leave it to the editor to check that most of my comments have been responded to.

Readers need to take care neither to overstate nor understate what these reviews show. Obviously there’s something fundamentally wrong with the behavior evidenced in these reviews – something that most readers of the Climategate Letters understand. (The only people who seem not to be troubled are the majority of climate scientists.) To go beyond that requires some reflection on what the purposes of “peer review” are.

Because I encountered journal peer review systems rather late in my life, I tend to view journal peer review merely as a form of due diligence (realizing that there are other forms of due diligence); I sometimes feel a bit like an anthropologist studying a tribe (of academics) who do not realize that their customs (for due diligence) are only customs.

149 Comments

  1. Stacey
    Posted Dec 23, 2009 at 10:40 AM | Permalink

    It just goes on and on.
    “Response: Oh dear. Dr. Lu’s mechanism has been comprehensively debunked. See links from here. This extension of his results to global warming is based purely on a correlation with CFC levels and is very dubious (for obvious reasons), whether it is reported in the ‘prestigious’ Physics Reports or not. – gavin”

    snip

  2. Susann
    Posted Dec 23, 2009 at 10:41 AM | Permalink

    My only comment is this:

    I might be able to draw a conclusion about the significance of the excerpts from the emails you’ve posted above if I 1) had a strong background in the field, 2) knew what papers were being discussed and for what publication, and 3) were able to judge whether their treatment of them was justified.

    Since I have to answer no to all three of those, I guess I have nothing of value to add.

    Steve: saying nothing when you have no basis for commenting is always a recommended option.

    • bender
      Posted Dec 23, 2009 at 10:51 AM | Permalink

      Wouldn’t you like to see an objective set of reviews of the two manuscripts, side-by-side? By the same reviewer, using the same level of scrutiny?

      • Susann
        Posted Dec 23, 2009 at 11:06 AM | Permalink

        Yes. I’m as interested in the mysteries of the peer review system as the next. That doesn’t mean I will be able to judge if the criticisms or words of praise are warranted.

        • bender
          Posted Dec 23, 2009 at 11:09 AM | Permalink

          Then why are you here? To comment on the commentary?

        • Susann
          Posted Dec 23, 2009 at 11:15 AM | Permalink

          Then why are you here?

          I’m here to read the posts and comment on them.

          Which is what I am doing.

          To comment on the commentary?

          You mean, like you’re doing?

        • bender
          Posted Dec 23, 2009 at 11:28 AM | Permalink

          Here, yes. But elsewhere I am (1) correcting people’s misconceptions of the published literature that will be cited in the next AR, (2) suggesting avenues for fruitful lines of enquiry/analysis. So, in all, I think my contributions are very different from yours.

        • Susann
          Posted Dec 23, 2009 at 1:55 PM | Permalink

          I would never presume to suggest my commentary is worth more than yours, bender.

        • bender
          Posted Dec 23, 2009 at 1:59 PM | Permalink

          Now, now. I’m just pointing out that our commentary is not equal. I’m not doing what you’re doing. Not even close.
          .
          The topic here is Jones on Mann. What have you to say on that? Nothing? The place for “nothing” commentary is, as you know, “unthreaded”.

        • bender
          Posted Dec 23, 2009 at 11:31 AM | Permalink


          (3) providing references where people ask for them,
          (4) giving proper context to the emails by tying them to the published literature. (e.g. explaining why “hiding the decline” was a very bad (not clever) thing to do.)

        • David Bailey
          Posted Dec 25, 2009 at 5:24 AM | Permalink

          Don’t you find it strange that Jones complains that the paper he wants to reject is full of maths that he can’t fault, and that he is appealing for reasons to condemn the paper because “this paper could really do some damage”?

          Peer review is not supposed to be about silencing your opponents!

          I think this should be promoted as the smoking gun! It is far more damning than the famous ‘trick’ sentence because ‘trick’ can mean two things in a scientific context.

          Steve: you’re conflating reviews – that was Cook talking about the math.

  3. Josh Keeler
    Posted Dec 23, 2009 at 10:51 AM | Permalink

    It would be very interesting to have a side by side comparison of each paper reviewed by the team, and their comments on each. Without reading the referenced paper by Mann, it is impossible to tell whether it deserved a constructive or destructive review, or was in fact well written.

    However, I would not be at all surprised if Mann’s paper was at or below the level of the Schmidt 09 paper that also received a very soft review.

    It is easy to see a bias in the comments when starting from a belief they are trying to collude to keep dissent out of publication. Once that judgement has been made, views of the team’s behavior may be subject to the same type of evidence gathering that they appear to have done themselves to exaggerate the warming trend they wanted to find in their observations.

    • bender
      Posted Dec 23, 2009 at 10:51 AM | Permalink

      Crosspost!

    • Susann
      Posted Dec 23, 2009 at 11:12 AM | Permalink

      It is easy to see a bias in the comments when starting from a belief they are trying to collude to keep dissent out of publication. Once that judgement has been made, views of the team’s behavior may be subject to the same type of evidence gathering that they appear to have done themselves to exaggerate the warming trend they wanted to find in their observations.

      Good point. If an observer starts off from a position — either pro or con — then they are more likely to see what they expect in the emails. It’s a tendency that science tries to control for, but doesn’t always succeed. It’s called ‘pattern recognition’ – we are hardwired to do it. It helps us understand but it can also bias us, for good or ill.

    • Carl Gullans
      Posted Dec 23, 2009 at 1:59 PM | Permalink

      Mann ’08 (if this is the article in question) was riddled with statistical errors and biases, as documented here and at other places. To verify this, search the site. To have only minor comments on that article, but to reject an article whose math appears to be correct, is clearly biased. In this case, there is no point of view consistent with scientific principles that could lead to thinking that all is well here.

  4. Jack
    Posted Dec 23, 2009 at 11:03 AM | Permalink

    Just wondering aloud–

    When Cook says “correct theoretically” isn’t he implying: The math is good so far as it goes even though it’s misapplied? And hence the “damage” to everything they’ve worked so hard on for the last couple of decades? This would be some good fodder in the hands of the opposition.

    I think what we’re seeing here is more of a scientific orthodoxy at work rather than a devious attempt to suppress the truth. These guys really believe in what they’re doing — so much so that they’ve crossed the line from science to religion as it were.

    • bender
      Posted Dec 23, 2009 at 11:07 AM | Permalink

      OT. The topic is the gatekeeping by Jones when he reviews Mann.
      (The “correct theoretically” comment by Cook applies to the unknown ms that we think is Auffhammer et al.)

    • liberalbiorealist
      Posted Dec 23, 2009 at 3:14 PM | Permalink

      Actually, when I look at the larger context of the email from Cook, it’s not clear (at least to me) how egregious his comment is — if it is egregious at all.

      Here’s the relevant portion:

      It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced.

      I guess I’m not sure how to take this comment. If, indeed, all the new method does is to be superior in some theoretical way to the old method of Briffa’s, then the paper might not be advancing a very telling criticism. It may well be that both the relatively primitive, older method and the newer, more sophisticated method might give the same result. If the paper indeed failed to show that there was a difference in result, then it could just amount to throwing sand in people’s eyes without producing real evidence against the previous thesis. In that case, hoping to dismiss the paper might be based on a sound scientific impulse, even if refusing to let it see the light of day still seems well past what the scientific process should allow.

      Of course, the criticism in the paper of the older method might be so very comprehensive and destructive that essentially nothing based on that method should be taken seriously, even if no attempt is made to produce an alternative account. In that case, of course, dismissing the paper was an egregious thing to do.

      It’s certainly not obvious to me as an outsider which of these two cases holds.

      • mikep
        Posted Dec 23, 2009 at 3:32 PM | Permalink

        What the 2009 version shows, quite clearly, is that

        the standard approach provides biased estimates of the reconstructed climate series and underestimates the true variability of historical climate.

        This means that the standard approach artificially flattens hockey-stick reconstructions, understating past variability, compared to doing it properly. That seems to me significant. Moreover, the paper doesn’t just say this and do a few simulations; it demonstrates mathematically that the estimators used in the standard approach have this property.

        As the abstract goes on to say

        We show analytically as well as using Monte Carlo experiments and actual tree ring data, that use of the new specification and reconstruction procedure can be crucial for inferences about the nature of past climate and interpretation of recent climate variations.

        Econometrics journals are full of discussion of the properties of estimators. It may sometimes be the case that using exactly the right estimator is not too crucial, but it’s still important to know from first principles what the various biases are. And in this case the biases do seem important.
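
        To see the flavor of the attenuation argument, here is a toy Monte Carlo of my own (a sketch only, not the paper’s actual method; all the numbers are made up): regress temperature on a noisy proxy over a calibration period, then use the fitted line to “reconstruct” an out-of-sample period.

        import numpy as np

        rng = np.random.default_rng(0)
        ratios = []
        for _ in range(1000):
            # "True" temperatures: a calibration period and an unknown past
            T_cal = rng.normal(0.0, 1.0, 100)
            T_past = rng.normal(0.0, 1.0, 500)
            # The proxy responds linearly to temperature, plus noise
            P_cal = 0.8 * T_cal + rng.normal(0.0, 1.0, 100)
            P_past = 0.8 * T_past + rng.normal(0.0, 1.0, 500)
            # "Standard" approach: regress T on P in calibration, predict the past
            slope, intercept = np.polyfit(P_cal, T_cal, 1)
            T_hat = intercept + slope * P_past
            ratios.append(T_hat.var() / T_past.var())
        print(round(float(np.mean(ratios)), 2))
        # roughly 0.4 here: the reconstruction recovers only about R^2 of the
        # true variance, i.e. past swings come out systematically flattened

        Whether correcting this matters for an actual series like Tornetrask is exactly the question Cook raised; the sketch only shows why the direction of the bias is towards flatness.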

        • bender
          Posted Dec 23, 2009 at 3:34 PM | Permalink

          This discussion is OT. It needs to go on a thread devoted to Kamel. May I suggest: climategatekeeping: Siberia?

        • bender
          Posted Dec 23, 2009 at 3:35 PM | Permalink

          Sorry, not Kamel/Siberia, Auffhammer/response function analysis.

      • bender
        Posted Dec 23, 2009 at 3:33 PM | Permalink

        Again, this email is OT.

  5. per
    Posted Dec 23, 2009 at 11:17 AM | Permalink

    not sure why jones reviewing mann is necessarily “odious”, or why any aspect of that review is “odious”. I don’t see why jones reviewing mann is a problem, unless jones has some self-interest in the outcome of the paper.

    i think it is different if you are reviewing a paper which criticises your work, or finds results contrary to your own; you might be best to at least mention the fact.

    per

    • Josh Keeler
      Posted Dec 23, 2009 at 11:23 AM | Permalink

      I may be off base here, but I believe Steve’s issue in the post was not with the nature of Jones’s review of Mann, but with the stark contrast between reviews of the work of close colleagues who share viewpoints and reviews of opposing-viewpoint papers. It is merely confirmation of a lack of objectivity in the review process, which compromises the ideal of peer review as a reliable form of due diligence.

      • Mack
        Posted Dec 23, 2009 at 12:56 PM | Permalink

        I agree, especially with your last sentence. It is not uncommon with peer review in any field of science for a paper to be “bled” with red ink from one reviewer and rubber-stamped by another.

    • bender
      Posted Dec 23, 2009 at 11:25 AM | Permalink

      per,
      It is the contrast of Jones on Mann versus Jones on Kamel.

      • Susann
        Posted Dec 23, 2009 at 2:00 PM | Permalink

        This is all shadows flickering on the cave wall.

        1) Did Kamel’s paper deserve the response Jones gave? Was it a reasonable and correct review based on the science in question? If not, why not?

        2) Did Mann’s paper deserve the response Jones gave? Was it a reasonable and correct review based on the science in question? If not, why not?

        That’s the crux of the issue, not whether one was harder than the other. How hard you have to be on a paper depends on its content. Show me that the reviews were erroneous or flawed based on the science. Then you’re showing me something of note.

        • bender
          Posted Dec 23, 2009 at 2:05 PM | Permalink

          I agree completely, Susann. I’ve read both papers and the reviews and have previously commented on each independently. But these were not comprehensive reviews and I did not discuss them head-to-head. This will come. Steve M has to lay the foundation of the canonical threads before the comparative threads can proceed. Stay tuned.

        • Norbert
          Posted Dec 23, 2009 at 3:07 PM | Permalink

          It sounded like Mann’s 250+ page work hasn’t been published yet. Is it also in the FOIA.zip file, or somewhere accessible?

        • Dave Dardinger
          Posted Dec 23, 2009 at 5:01 PM | Permalink

          I haven’t read all this thread yet (so maybe others have pointed this out), but one thing is clear IMO. The article isn’t 250 pages long despite what Steve said at the top. I think Jones was using p or pp to mean line or lines. At one point he mentions a sentence 5 pages long, which just doesn’t make sense. I was thinking it could have been because there were some pages of charts or illustrations intervening, but from the wording I think he just has an unusual usage (unusual for me at any rate). It is probably because if you use l instead of line, it could be mistaken for the number one. Thus p may be used by reviewers this way, since you’re not going to have set page numbers until the journal article is actually printed.

        • Susann
          Posted Dec 23, 2009 at 3:49 PM | Permalink

          I’ll look forward to reading it. I want to know the facts as much as the next person, regardless of where they lead. My concern is how will I know a fact when I see it?

    • Ashleigh
      Posted Dec 23, 2009 at 8:15 PM | Permalink

      All reviewers should be anonymous! The person doing the review should not know the author. The author should not know the identity of the reviewer.

      That has been breached. Over and over and over. That of itself introduces a bias into the review before a reviewer even considers the content!

  6. jb
    Posted Dec 23, 2009 at 11:22 AM | Permalink

    Search the CRU emails by From, To, Date, Subject, and keyword(s). Makes it easier to find and follow threads in the emails.

    You can also search through the CRU documents by keyword.

    http://www.yourvoicematters.org/cru/

  7. ZT
    Posted Dec 23, 2009 at 11:31 AM | Permalink

    I read this exactly as Steve presented it – odious. On the one hand you have ‘going to town on’ – which is a UK colloquialism for beating the living daylights out of an innocent person, generally in a dark alley, after a night of drinking (to be precise) – and on the other hand you have a list of typos found by a highly paid copy editor doing a favor for a ‘chum’.

    If authors regularly collaborate, they shouldn’t review each other’s work. If you are a reviewer for a journal, you can and should send back a paper if you are closely associated with the work. That is the way reviews work in other areas of science – if not climatology.

    • MarkB
      Posted Dec 23, 2009 at 11:40 AM | Permalink

      “That is the way reviews work in other areas of science – if not climatology.”

      I don’t know what world of science you live in, but in most sub-fields there aren’t a lot of people to choose from to review papers. There are only so many people in any one field, and they tend to know each other. They may be friends, and they may be hated rivals, but they’re all in each other’s pants one way or another. Paleodendroclimatology isn’t exactly a big field – you’d need to go outside the field to get an objective reviewer.

      • ZT
        Posted Dec 23, 2009 at 3:29 PM | Permalink

        Happily, the field of science I’ve inhabited is absolutely unlike anything on display in climategate. Yes, expert fields tend to be small, and so people write to editors suggesting reviewers for their papers, etc., (and the editors typically ignore them). However, I have never seen anything like this fiasco.

        I am sure that reviewers give variable quality reviews in all fields. But typically what I have seen is that reviewers (who are anonymous) work harder on their ‘friends’ papers – in order to maintain quality in their field. Impact factors (if not ‘consensus’ scoring) do not favor quantity over quality. The CRU team were not working in a normal scientific environment.

        I have never seen or heard about editors or reviewers ‘going to town on’ or colluding to game the review system.

      • kevoka
        Posted Dec 23, 2009 at 5:21 PM | Permalink

        venting

  8. templar knight
    Posted Dec 23, 2009 at 11:45 AM | Permalink

    After reading many of the e-mails between the scientists who were communicating with each other about whose papers would get published in the scientific journals, it is quite obvious that there was a mindset and an intent to keep out any writings that might disagree with the AGW position. I would advise Susann to either go through the archives of this blog, or google and review the UEA e-mails.

    Although not a climatologist myself, I am a geologist, and I am very upset with the way the peer-review process was hijacked by a relatively small number of people, which allowed them to advance their cause while shutting out anything that might disprove or at least cast doubt on what they called “settled” science.

    The gatekeeping aspect of these people should give us all pause, as nothing productive can ever take place if the majority view is the only thing allowed.

  9. Hans-Heinrich Willberg
    Posted Dec 23, 2009 at 11:50 AM | Permalink

    Dear Steve,

    you are a real hero and you should receive the Nobel Prize!

    I wish you a great Christmas and a happy and successful New Year in which to change policy. Please go on showing the world how corrupt the IPCC mafia is.

    Best regards from Germany

    Hans-Heinrich Willberg

  10. Syl
    Posted Dec 23, 2009 at 11:59 AM | Permalink

    I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting (Briffa to Cook)

    If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, (Cook to Briffa)

    This is pretty bad regardless of context. What damage is he talking about? In science, a discovery is not damage – it’s a path to the facts. Skepticism and opposing theories are by design part of the scientific process. A true scientist likes to be challenged. He likes to debate opposing views.

    • bender
      Posted Dec 23, 2009 at 12:08 PM | Permalink

      What damage is he talking about?

      .
      IPCC reviewers would question the precision and thus validity of the dendroclimatic reconstructions. And then they would question the ice core data. And then the borehole data. And then the lake sediment data.
      .
      This is not a guess; it is a certainty. Read the emails. Tom Wigley, of all people, was already starting to question the reliability of these data. This paper by Auffhammer would unquestionably have fed Wigley’s skepticism, and given pause to IPCC chapter authors and expert reviewers. Read the Von Storch article in the WSJ. This result would have been “damaging”. It might even have meant cancellation of the entire paleoclimatic chapter!

      • gml
        Posted Dec 26, 2009 at 3:30 PM | Permalink

        How about ‘damaging’ in the sense of providing more (although, in the reviewers’ opinion, misleading) information as ammunition for the politicians of the earth?

      • Norbert
        Posted Dec 26, 2009 at 4:23 PM | Permalink

        I’d phrase that a bit differently: in the reviewer’s opinion, I would guess, the potential damage might lie in delivering overstated talking points to politicians (exaggerated claims of rewriting history, in the sense Bender has illustrated above) when the actual scientific impact falls far short of correctly supporting such claims.

        • mikep
          Posted Dec 27, 2009 at 12:32 PM | Permalink

          Which, even if true, is not a valid reason for rejecting an analysis which is correct and shows that “standard” methods in the field produce biased (in the statistical sense) results.

  11. Norbert
    Posted Dec 23, 2009 at 12:08 PM | Permalink

    The part that I don’t quite understand here is that Steve writes “a few words sufficed”, but doesn’t quote the 17 comments that were made. My understanding from a previous post here is that some papers get accepted as they are, but in this case it only got accepted subject to revisions, with the reviewer apparently asking for 17 of them (if the numbers correspond to each other).

    • bender
      Posted Dec 23, 2009 at 12:09 PM | Permalink

      That there were 17 does not imply they were substantive. Did you look at them? Or are you just hand-waving again?

      • Norbert
        Posted Dec 23, 2009 at 12:14 PM | Permalink

        I don’t see a link to the 17 comments. According to everything I know, “minor” could simply mean that they are no grounds for rejection.

        • bender
          Posted Dec 23, 2009 at 12:20 PM | Permalink

          Read the damn FOIA.zip file!
          review_schmidt.doc

          And while you’re at it read the McKitrick rebuttal to Schmidt, discussed here, exactly where Steve said it was:

          Gavin on McKitrick and Michaels

          Do your bloody homework!

          Bye! For the whole day!

        • Josh Keeler
          Posted Dec 23, 2009 at 12:22 PM | Permalink

          http://www.yourvoicematters.org/cru/documents/review_mannetal.doc here you go norbert.

        • Norbert
          Posted Dec 23, 2009 at 12:27 PM | Permalink

          When I click or enter this link, it only comes up with an empty page.

        • bender
          Posted Dec 23, 2009 at 12:30 PM | Permalink

          For me it asks if I want to save the file, so the problem is at your end.

        • Josh Keeler
          Posted Dec 23, 2009 at 12:35 PM | Permalink

          Minor Comments

          1. line 32 of abstract – change ‘known’ to ‘presumed’ or some other word. The NH temperature trends are not ‘known’, instead we have good estimates!
          2. Could emphasize in the first paragraph of the Introduction that most of the SAT/GST comparisons are discussed in the context of models.
          3. The long sentence encompassing lines 74 to 79 could be reworded to make it easier for readers. I had to read this several times to get the meaning. Perhaps split it into two and don’t begin with the ‘If’.
          4. p85 add ‘global average’ to mean radiative forcing changes. The next sentence gives the important information.
          5. The sentence extending over p91 to 94 could usefully do with a reference.
          6. The reason the difference in trends between 2.1 and 1.2 on pp 101/102 is because you are talking about the global average. This point is made a little higher, but it could be repeated.
          7. p111, can you restate what the question is? It wasn’t obvious to me at this point.
          8. p132, presumably the 10m SAT values from GISS-E do not have any effect as you’ll be using anomalies from a modern reference? Would be worth stating this.
          9. p142, reducing sea level by 40m 9Ky BP ago, the land surface is now higher in this simulation compared to those later in the sequence. A simple lapse rate calculation would make this simulation 0.24ºC cooler than the later ones.
          10. p151-155, I was going to suggest a map of the boxes in some Supplementary Information, but a map sort of appears in Figure 1. It might be worth mentioning that here.
          11. p159, can you state the modern, pre-industrial period you are using?
          12. In the discussion in pp180-186, the SAT peak over the Holocene, compared to GST, is a broader flatter one that has a slight peak about 3K Years ago.
          13. A reference to the ‘inverse’ perspective on p188 would be useful.
          14. The long sentence encompassing pp193-198 could again be usefully split to enable easier reading.
          15. p233, other models would be useful, but my guess would be that none have run a similar set of experiments.
          16. p247, remove ‘in’.
          17. The likelihood that seasonal trends differ from annual ones has been discussed in a number of papers – at least for the last 1000 years. It would be useful pointing out that the seasons needn’t follow the annual average, especially in the context of the seasonally and latitudinally different forcing that took place. A useful reference for the recent millennium would be Jones et al. (2003).

          As you can see, Norbert, most of these comments are very minor indeed for a 200+ page document. On the 14+ page paper by Gavin, there were 15 comments in an equally soft review, again suggestions for changes to figures/tables, wording, grammar, etc.

          I would imagine that, depending on the reviewer, most papers would be highly likely to receive a number of minor comments of this nature. The review itself, however, is the part above the 17 comments, and it is extremely brief and lacks any specific impressions of the paper in question.

        • Josh Keeler
          Posted Dec 23, 2009 at 12:40 PM | Permalink

          I just noticed that in comment 17, Jones is plugging a paper he contributed to as a “useful reference” for Mann’s work. Interesting…

        • bender
          Posted Dec 23, 2009 at 12:43 PM | Permalink

          That is a reviewer’s job.

        • Josh Keeler
          Posted Dec 23, 2009 at 12:55 PM | Permalink

          to plug their own work? Interesting 🙂

        • bender
          Posted Dec 23, 2009 at 12:57 PM | Permalink

          No. To provide the latest references that refute or support an argument, whether they are your own or someone else’s.

        • Chris S
          Posted Dec 23, 2009 at 1:05 PM | Permalink

          Recommending the removal of the word “in” seems more of a copy-editing exercise than a scientific review.
          These comments read much like the comments Jones makes on papers he co-authors with others.

        • bender
          Posted Dec 23, 2009 at 1:10 PM | Permalink

          scientists are not paid to copy-edit

        • harold
          Posted Dec 23, 2009 at 5:39 PM | Permalink

          Pointing out missing references is part of the process. I had to tell an author to include a paper his boss wrote ~18 years earlier, since it was the most comprehensive prior work. Jones can plug Jones. It isn’t an abuse of process.

        • Norbert
          Posted Dec 23, 2009 at 12:41 PM | Permalink

          Ok, I was able to download it. About a page of comments, almost all of them asking for changes.

        • bender
          Posted Dec 23, 2009 at 12:42 PM | Permalink

          Like Jones himself said: “MINOR” changes.

        • Norbert
          Posted Dec 23, 2009 at 12:49 PM | Permalink

          Certainly.

        • Chris S
          Posted Dec 23, 2009 at 12:30 PM | Permalink

          Having spent far too much time reading the e-mails, I believe the main purpose of Jones’ comments would be to make Mann’s paper more “bullet proof”.

          His aim is to make sure the paper withstands criticism. Whether Mann’s conclusions are factually accurate is of secondary concern (I’m being generous here).

        • bender
          Posted Dec 23, 2009 at 12:34 PM | Permalink

          Every reviewer must make a choice: is this paper worth buttressing, or is it so bad it can’t be improved to make the grade? It is legitimate to swing one way or the other depending on the quality of the argument.

  12. Craig Loehle
    Posted Dec 23, 2009 at 12:34 PM | Permalink

    The comment has been made several times that this is a “small field” so there is no choice but for the Team to review each other’s work. This is so much baloney. Most of what they write is easily accessible to a huge audience of quantitative people, though many of these would object strenuously to many of the methods, such as decentered PCA, upside-down proxies, excluding samples, etc. They have worked hard to assert that only they are qualified to review their own work. The only thing that makes the papers “hard” to review is that the code for RCS standardization or whatever is never released, so you can’t check it.

    • templar knight
      Posted Dec 23, 2009 at 12:59 PM | Permalink

      piling on

    • C. Ferrall
      Posted Dec 23, 2009 at 2:57 PM | Permalink

      I agree. These papers have nothing to do with calibrating equipment to measure isotope ratios or other issues that would require specialized knowledge. They use multivariate non-experimental data to recover a relationship between variables (including time). Their knowledge of time series methods seems comparable to economics in the 1970s.

      Bender said somewhere else these guys don’t know the statistics, which is painfully obvious from their turgid jargon.

      While fields are small, it is unusual that so many ideological and political objectives owed so much to so few correlations.

  13. bender
    Posted Dec 23, 2009 at 12:46 PM | Permalink

    What I think is even more interesting is the comparison of Jones on Mann versus Jones on Schmidt. Jones doesn’t like the Mann paper, but gives it a reluctant pass. But he loves the Schmidt paper. But the Schmidt paper is the one that’s got the errors in it, as pointed out by McKitrick & Michaels! Jones flubbed!

  14. Steve McIntyre
    Posted Dec 23, 2009 at 12:50 PM | Permalink

    per, as usual, is right that it is not this particular review that is odious per se. I’ve accordingly deleted this adjective.

    It is the pattern that is odious, but that’s a different post and thread.

    • bender
      Posted Dec 23, 2009 at 12:56 PM | Permalink

      I get the sense that Jones has been “coasting” for years and years. His programming is bad, his reviews are shallow, the dog ate the homework, he’s always traveling the world …
      What has he done of value?

      • PeterA
        Posted Dec 23, 2009 at 3:58 PM | Permalink

        What has he done of value? Added “value” to data 🙂

  15. bender
    Posted Dec 23, 2009 at 1:02 PM | Permalink

    Steve M, please advise me when you think my blitzing of nonsense commentary is becoming tiresome to your audience. I have decided that is the only way to get people to think and to focus in a post-climategate transition. But I’m willing to let up any time.

    • Chris S
      Posted Dec 23, 2009 at 1:15 PM | Permalink

      You do a good job of “focusing” the debate, even if your preempting can occasionally cause another’s comment to lose the context in which it was posted ;)

      • bender
        Posted Dec 23, 2009 at 1:21 PM | Permalink

        Sorry, Chris. I’m displaying nested comments, but apparently not everyone else is. I’ll stop doing that. Hopefully a truce can be struck.

        • crosspatch
          Posted Dec 23, 2009 at 2:44 PM | Permalink

          Is there a way to turn the nesting behavior off? Would you share it?

        • Mark T
          Posted Dec 23, 2009 at 5:44 PM | Permalink

          Not that I know of. All the comments I’ve seen are nested (well, those that are intended to be nested are nested).

          Mark

        • Chris
          Posted Dec 23, 2009 at 6:21 PM | Permalink

          I think the nesting is a great feature and allows replies to be kept where they belong. For me this results in something much more readable than a pure time stamped or numbered list of comments which may be in reference to the original post or to some comment that may be well up in the list. Is anyone talking about something else and I’m off base here?

        • Posted Dec 23, 2009 at 2:45 PM | Permalink

          Second that – how’s it done?

        • bender
          Posted Dec 23, 2009 at 2:46 PM | Permalink

          Don’t know, guys. When I said “I’ll stop doing that”, I meant pre-empting trollish replies.

    • MrPete
      Posted Dec 23, 2009 at 3:22 PM | Permalink

      What do you mean by “blitzing”? The earlier stack of replies had no effect at all. You are not prohibiting replies. Just adding to the depth.

      In general, it’s best to multiply-respond to the initiator of a thread. If a discussion ensues, fine. That will keep all responses at the same level.

      Nesting on/off is a future feature of the CA Assistant. You can patch it in yourself if you know how to use a plain-text editor. There are a few variations in the *.js file that do different kinds of sorting 🙂

      • bender
        Posted Dec 23, 2009 at 3:29 PM | Permalink

        (1) “blitzing” = much commentary in a short period (e.g. last 48h).
        (2) “pre-empting” = preventing trolls from replying in a nested comment
        .
        I’ve stated that I won’t do (2) anymore, because it doesn’t work.
        I’ve stated that I will stop (1) whenever you, Steve and others have had enough.

        As Susann’s comments note, the quality of commentary dropped during climategate and more effort needs to be spent policing people so that Steve’s time is free to do analysis. If my commentary is excessive, let me know.

        • Posted Dec 23, 2009 at 4:52 PM | Permalink

          bender, surely the good thing about trolls’ nesting habits is that one responder can say “troll” and everyone else can see; moreover, it’s easier for Steve to delete. So how about: name, but don’t argue.

        • Ashleigh
          Posted Dec 23, 2009 at 8:25 PM | Permalink

          One must be careful not to be seen doing the same as the AGW enthusiast sites, where any opposing point of view is simply deleted.

          When arguing for openness, one must be open.

          Hypocrisy is easily detected and detracts from credibility.

        • bender
          Posted Dec 24, 2009 at 12:25 AM | Permalink

          Sure, Lucy. The problem is you don’t always know a troll until they’ve exposed themselves. It might take a few hours or a half-dozen posts before you spot the pattern. By then everyone’s telling you to stop playing with the troll. Had I known …

  16. A.M. Mackay
    Posted Dec 23, 2009 at 3:53 PM | Permalink

    I have been a peer-reviewer for a couple of dozen papers in cell and molecular biology, 1992-2006. These remarks might provide some context for non-scientists, in terms of getting a sense of:

    (1) How the rigor and depth of the review discussed here compares to accepted practice in another area of experimental science;

    (2) How effective peer-review may be, compared to more familiar “quality-control” or “due-diligence” mechanisms.

    My custom was to use the first paragraph of the review to restate the author’s central methods and conclusions. This was to provide context to the authors and editor for the remainder of my review, and to reassure the authors that I indeed understood their main points.

    In the next paragraph, I’d offer my summary viewpoint, including the recommendation to Accept as-is (rare), Accept with minor modifications, Accept assuming major modifications are made, or Reject. I’d support this recommendation with specifics, focusing on the work’s significance, the choice of methodology, whether the execution was correct, and the work’s completeness.

    I would follow this with my major criticisms, usually two to eight, a paragraph each. These might involve inadequate controls to experiments, missing experiments, errors in logic, overstating of results, assertions of fact lacking a literature reference, faulty statistics, or failing to take key insights in the literature into account. If needed, I would ask that additional experiments be performed.

    Then, I’d usually offer suggestions on improving text or figures, along the lines of the 17 minor comments, supra. I looked at this copy-editing as more of a courtesy than an essential part of my review.

    I’d also do a few PubMed literature searches to ensure that the authors (and I) weren’t ignoring any recent on-topic papers.

    This sort of peer review would take 2 to 6 hours to complete, depending on the paper and the subject.

    Afterwards, when I had the chance, I’d read the other reviews (usually two of them). Sometimes they were cursory, sometimes very thorough, often as diligent as my own, or nearly so.

    For a pretty good submission, I can’t recall the reviewers mostly focusing on the same major points. We’d each pick up on some items and miss others. The worse the paper, the more likely that two or three reviewers would make similar major criticisms.

    As a reviewer, I was never in a position to verify or disprove the results presented in the manuscript. I never had occasion to suspect fraud or wrongdoing.

    As an aside, some common criticisms had to do with basic statistical issues, such as “The error bars in Fig. 1 seem to represent standard errors of the mean; standard deviations would be more appropriate. The authors must specify which they are using. If SEM, the authors must defend this choice.”

    The common appearance of complex, esoteric, poorly-sourced, poorly-explained statistical methods in the literature of paleoclimate reconstruction is quite striking, in that regard.
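
    (An aside on the error-bar example, for non-scientists reading along: SD and SEM answer different questions, and the difference is easy to see numerically. A minimal sketch, with made-up data:)

    import numpy as np

    rng = np.random.default_rng(1)
    for n in (5, 50, 500):
        x = rng.normal(10.0, 2.0, n)   # measurements with true SD = 2
        sd = x.std(ddof=1)             # spread of the individual values
        sem = sd / np.sqrt(n)          # uncertainty of the sample mean
        print(n, round(float(sd), 2), round(float(sem), 2))
    # SD estimates scatter around 2 at every n, while SEM shrinks as
    # 1/sqrt(n) - which is why SEM bars on large samples can make
    # scattered data look deceptively tight.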

    I hope other scientists will comment on this view, and contribute their own perspectives.

    I’ve been on the receiving end of peer-reviews too, ranging from fair to spiteful and from thorough to cursory. Happily, the norm was closer to fair and thorough. Most of my papers are here.

    • Susann
      Posted Dec 23, 2009 at 4:07 PM | Permalink

      Thank you for posting your experience with peer review. It really does help to show what is the case in other sciences.

      A thorough analysis would include comparing the reviews of several reviewers side by side, along with the original paper in question, to get a taste of what kind of comments are common. It would also be useful to have some stats on the number of articles submitted and how many are rejected, accepted without revision, accepted with minor revisions, and accepted with major revisions. As well, it might be useful to see the “degrees of separation” among journal reviewers, editors/editorial boards, and submitters, so we could gauge just how similar or different other fields are when compared with climate science.

      Many of the readers here have science backgrounds and so have something with which to compare and judge. For us laymen, who have no similar background, what we see is without context and thus can be misunderstood.

    • Posted Dec 23, 2009 at 6:37 PM | Permalink

      Thank you for this. As a non-scientist (and a non-academic), while I was familiar with the concept of peer review, I did not have a full understanding of the process and “mechanics” (for want of a better word!). So from my perspective, one of your observations was an “aye, there’s the rub” moment:

      As a reviewer, I was never in a position to verify or disprove the results presented in the manuscript. I never had occasion to suspect fraud or wrongdoing.

      In particular, your last sentence above. Correct me if I’m wrong, but this suggests to me that there is an implicit element of trust, on the part of the reviewer(s), that the authors a) knew what they were doing with the data; and b) that someone, somewhere along the line to submission had, in fact, conducted the due diligence required to obviate the need for additional verification, i.e. the data and methods had already been independently verified and the results replicated.

      The repeated instances of resistance to sharing data and methods with “outsiders” can only lead to a strong suspicion that no due diligence had ever been conducted. As I recall, from my reading of the Wegman report, one of his strong criticisms (albeit diplomatically stated) was that although they relied on statistical methods, such due diligence was conspicuous by its absence.

      The authors knew that their “results” would be the basis of recommendations for radical changes in government policies. They repeatedly waved the flag of “peer review” as a shield to guard against any questioning of their results – and encouraged others to do likewise.

      From where I’m sitting, the pattern that emerges is nothing less than a deliberate, almost incomprehensible betrayal of trust.

      • RomanM
        Posted Dec 23, 2009 at 7:02 PM | Permalink

        Peer review IS the vetting procedure to determine that the submission is free of errors and a suitable addition to the total knowledge in an area. It is not a “summary” of what has gone before.

        Maybe you should check a definition such as this wiki entry.

        • Susann
          Posted Dec 23, 2009 at 7:46 PM | Permalink

          RomanM, thanks for pointing to the wiki entry, if only to see that Nature publishes only 5% of all papers it receives. Besides the background info, that is useful.

        • Ashleigh
          Posted Dec 23, 2009 at 8:31 PM | Permalink

          Free of errors (as presented), as best can be determined.

          Not a statement of fact. Not a statement of truth. Merely that the best has been done that can be done and that embarrassment won’t result.

          There is a common misconception that “peer review” means “correct”, or “proven”, or “the one true and absolute truth”. All of these perceptions are WRONG.

          And those who wave peer review around as a way of trying to enhance their bona-fides are just mischievous. Sure, peer-reviewed is better than non-reviewed. But it is NOT a guarantee that a paper is a statement of absolute fact.

        • RomanM
          Posted Dec 23, 2009 at 10:06 PM | Permalink

          Maybe I should have said “Peer review is supposed to ensure correctness of the submission”. It is very clear that the rubber stamping and proofreading evident in the emails does not qualify as peer review in any meaningful way.

          The AGW mantra that the work of climate science was subjected to such scrutiny rings pretty hollow in retrospect.

      • A.M. Mackay
        Posted Dec 23, 2009 at 11:32 PM | Permalink

        hro001,

        I don’t have a simple response to the points you raise. The Wikipedia entry that RomanM links has a good description and background.

        Yes, the entire scientific enterprise (including peer review) entails a high degree of trust. Not trust that peers are competent or correct, but that nearly all aspire to practice science along the lines laid out by Popper and other philosophers of science.

        No, there is no supposition that results presented in a manuscript have been replicated or independently verified. It wouldn’t be practical or effective. Rather, the idea is that important observations will be confirmed (or in some cases, not confirmed).

        Suppose my coworkers and I write an article that includes novel observations “I” and “U”. After peer review and revision, the article is accepted and published. The methods for accomplishing “I” and “U” are (should be) now known to all, and we’ve agreed to make the necessary reagents available to other scientists.

        It turns out that other scientists decline to follow up on “U”. We thought it was interesting, but this reaction effectively judges it to be “unimportant.” It may never be replicated. However, a few groups working in related areas reflect on “I”, and proceed to design experiments that build on this observation. In the course of their work, they will effectively repeat our original work. In this way, the more “important” observations receive the most attention as far as replication and validation.

        This sketch applies to “discovery” work in academic labs. Laboratory work performed for other reasons (e.g. to support a filing with a regulatory agency) operates under much different standards. For instance, in my experience, lot traceability was a very low priority in the academic setting, but it was essential when contributing to an FDA filing.

      • David Bailey
        Posted Dec 25, 2009 at 6:20 AM | Permalink

        I would endorse that comment about not being able to disprove the contents of a paper. For example, say someone reports a new chemical synthesis – do you go to the trouble of actually repeating the synthesis just to review their paper?

        Many years ago when I did research, I’d often find I needed several attempts to successfully repeat a synthesis from the literature!

    • per
      Posted Dec 23, 2009 at 8:47 PM | Permalink

      I think I can make some comments here. Firstly, as a reviewer, I have been in a position where results are obviously wrong, or obviously extremely dodgy; the Journal of Cell Biology has published estimates that a notable percentage of submissions have images which have been improperly edited. Just because an image is presented with two different captions, I cannot conclude fraud or “wrongdoing”; trivial error is obviously possible.

      “The common appearance of complex, esoteric, poorly-sourced, poorly-explained statistical methods in the literature of paleoclimate reconstruction is quite striking, in that regard.”
      I don’t understand why you are surprised. The British Medical Journal surveyed publications in the BMJ, and concluded that 10% of papers had serious statistical errors; this was published. I believe similar findings have been made elsewhere, and there was a highly publicised denouement of bad statistical practice in fMRI this last year. Bad statistics is known to be commonplace.

      I would just add that one of the more common issues is that methodology is described in a perfunctory manner, such that it is not possible to work out what exactly has been done. Journals, editors and authors are all reluctant to give precise detail, in spite of the availability of on-line supplementary information, and such detailed information/results can frequently reveal that the methodology is bankrupt, or not adequate to support the conclusions drawn.

      per

    • dearieme
      Posted Dec 24, 2009 at 2:30 AM | Permalink

      I have 40 years of experience in peer-reviewing (and being peer-reviewed). In my view finding errors, or failures to reference other work, is important but secondary. Instead, if I can make the authors state clearly enough, and completely enough, what it is they claim to have done, then any blunders that I miss will be readily detected by the readers. But if I let the authors get off with obscurity or muddle, how is the reader to cope? For that reason “..complex, esoteric, poorly-sourced, poorly-explained statistical methods in the literature of paleoclimate ..” means that the work is not of publishable quality. Hence I view “Climate Science” as being largely bogus – the degree to which we are routinely asked to trust the author is plainly preposterous. Hence also my deep admiration for Steve’s work; it’s proper science, done against considerable and illegitimate opposition.

      • A.M. Mackay
        Posted Dec 24, 2009 at 8:22 AM | Permalink

        dearieme (12/24/09 at 2:30am) makes an excellent point.

        In my line of work (cell/molecular biology), statistical analyses are usually of secondary importance. For instance, (60 +/- X)% of cells induced to express mutant protein A1 exhibit a certain characteristic; (20 +/- Y)% of cells expressing control protein A0 show that feature. Visual inspection of a simple graphic will tell most of the tale, provided that the error bars are appropriately chosen. But the P-value presented should nonetheless be appropriate.

        There is a large contrast between this ‘typical’ scenario and, say, paleoclimate reconstructions based on proxies such as tree rings. In articles presenting work on estimates of temperature anomalies over time, the entire story being told is comprised of:
        * the data series chosen,
        * the algorithms used,
        * the results of the calculations, and
        * the attendant uncertainties.

        It ought to follow that interested parties should be able to examine and come to understand:
        * underlying data and the associated meta-data,
        * algorithms (i.e. equations and their implementation as code),
        * numerical and graphical results, and
        * how error ranges are calculated, and what they include and exclude.

        Of these, all interested parties have routine access only to the numerical and graphical results.

        For instance, it seems obvious to me that a key question in multiproxy reconstructions is “how much did each proxy contribute to the final Temperature Anomaly vs. Time figure?” Given the methods used, this is typically a complicated question. However, that makes it more rather than less important.

        It seems to me that this issue is routinely ignored in the literature. One has to read “skeptical” blogs for answers, e.g. Willis Eschenbach’s novel and clever dissection of S/N ratios in the proxies used in the Mann group’s 2008 article in Proc Natl Acad Sci.
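
        To make concrete the kind of breakdown I mean, here is a toy decomposition (entirely made-up weights and white-noise “proxies”, nothing to do with any actual reconstruction): when a composite is a weighted sum, each proxy’s covariance with the composite partitions the total variance.

        import numpy as np

        rng = np.random.default_rng(2)
        proxies = rng.normal(size=(200, 4))     # 200 "years", 4 fake proxies
        w = np.array([0.55, 0.25, 0.15, 0.05])  # hypothetical calibration weights
        recon = proxies @ w                     # toy composite reconstruction
        # cov(w_i * p_i, recon) summed over i equals var(recon), so each
        # covariance is that proxy's share of the reconstruction's variance
        shares = [np.cov(w[i] * proxies[:, i], recon)[0, 1] for i in range(4)]
        print(np.round(np.array(shares) / np.var(recon, ddof=1), 2))
        # with these weights the first "proxy" supplies most of the variance;
        # the shares sum to 1 by construction

        A real multiproxy method is of course far more involved, but some decomposition of this general sort is what a reader would need in order to see whether one or two series are driving the result.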

    • phil c
      Posted Dec 28, 2009 at 5:31 AM | Permalink

      important comments all… but what also has to be pointed out is that peer review differs between disciplines and between publications, sometimes for very good reasons. As noted below, peer review for the engineering sciences must be different from that for microbiology, because in the former, lives can be at risk. The same applies to climate science. Comparing the peer review processes used in different scientific disciplines can be illuminating but also misleading at the same time (this does not apply to statistical methods; sloppy statistical work is unforgivable, regardless of the discipline).

  17. Tom
    Posted Dec 23, 2009 at 3:54 PM | Permalink

    Absolutely disgusting. As an engineer, when I ask for a colleague to peer review something, I want an honest review – because if I get it wrong people die as a result. Scientists apparently do work that doesn’t matter quite this much.

    • Arn Riewe
      Posted Dec 23, 2009 at 7:30 PM | Permalink

      Appreciate your perspective. I do think it is worth noting that engineering and some sciences do carry substantial liabilities for being wrong. One thing that’s frustrated me about climate science is that there seem to be no negative consequences for being wrong. The incentives are skewed to favor a specific agenda supporting politically correct results.

  18. Susann
    Posted Dec 23, 2009 at 4:16 PM | Permalink

    I want an honest review – because if I get it wrong people die as a result. Scientists apparently do work that doesn’t matter quite this much.

    IMO, although they have much in common, engineering and science are different and have different expectations regarding certainty and orientations to the world. A science paper might explore a question and offer possible explanations, while engineering takes knowledge gained through science’s exploration and applies it in the real world. If the standards of one were enforced on the other, both might suffer.

    • Posted Dec 23, 2009 at 4:41 PM | Permalink

      policy

      • Norbert
        Posted Dec 23, 2009 at 5:09 PM | Permalink

        OT

        • Chris
          Posted Dec 23, 2009 at 6:15 PM | Permalink

          breaches blog policies

        • Norbert
          Posted Dec 23, 2009 at 6:35 PM | Permalink

          As best I can tell, AGW is an active topic of research, not a topic of peer review. My current understanding of peer review is that it is just the entry point for the correctness (and perhaps relevance) of scientific work. I’m starting to wonder whether Phil Jones’s remark about “redefine peer review” was a parody. (Hence the reply “nobody can redefine peer review”.)

      • Susann
        Posted Dec 23, 2009 at 6:48 PM | Permalink

        -snip

        This is not a policy blog so this may get snipped, but if the science is bad, the policies can’t help but be bad, although interestingly enough, the reverse is not true.

    • Tom
      Posted Dec 23, 2009 at 5:35 PM | Permalink

      Sure they are not the same, but my point is that I ask for peer review as a matter of course, because I actually want to know if my work is right and if someone else can see a mistake in it. Why should scientists have some different standard where they are not interested in whether anyone can see mistakes?

      Engineers face another type of review, largely from management, where the question is not “Are there any mistakes?” but “Is this a good idea? Can we afford it? Will it look good? Is it legal? Is it what the client wants?” This is not peer review – review by fellow engineers – but business-case review. These are the sorts of questions that should be applied to engineers (and are) but should never be applied to scientists. This whole saga seems to have shown that scientific papers have been reviewed by the wrong standards. The standard applied has not been peer review, checking whether the content is correct, but business review – does this look good? can we afford this to come out? is this what we want to hear?

      • harold
        Posted Dec 23, 2009 at 6:13 PM | Permalink

        To me, the review process shown here is symptomatic of the cavalier approach to the discipline. Data archiving in a user friendly way (raw data and meta data), data quality control, testing, validating, and documenting the data analysis methods – none of this seems to be taken very seriously. I’d normally refer to this as “playing in the sand box”.

      • Susann
        Posted Dec 23, 2009 at 6:55 PM | Permalink

        This whole saga seems to have shown that scientific papers have been reviewed by the wrong standards. The standard applied has not been peer review – checking whether the content is correct – but business review: does this look good? can we afford this to come out? is this what we want to hear?

        I agree to a certain extent. Here is what I see: the science in this case has become so politicized, because of the huge ramifications, that the normal processes in place to ferret out weak science are compromised. I see it happening on both sides, and this is as far as I am willing to go at this point, just because I don’t know enough.

        • Susann
          Posted Dec 23, 2009 at 7:00 PM | Permalink

          One point – peer review does not, from what I understand, mean replicating the work to check whether it is valid, but rather checking that all the proper elements are in place and that the research rises to a certain standard – pushing the science forward, or confirming other research and thus solidifying it. There is a certain degree of trust that the authors have not fudged data, because reviewers will not have the time to do the kind of audit that Steve and others call for. Now, perhaps the science that policy is premised on should have that level of audit.

  19. kevoka
    Posted Dec 23, 2009 at 5:49 PM | Permalink

    To see the effects of “the peer-reviewed literature”, read this email, in which Mr. John Holdren of Harvard enlightens a student on how, given the preponderance of literature, the burden of proof falls on the skeptic to show the errors of AGW. This is in the context of the Soon/Baliunas paper.

    1066337021.txt

  20. Posted Dec 23, 2009 at 8:56 PM | Permalink

    Steve, there’s an interesting early comment on the Climategate emails by veteran journo (and economist) Clive Crook, at http://clivecrook.theatlantic.com/archives/2009/11/more_on_climategate.php

    Crook comments that “The stink of intellectual corruption is overpowering.”

    — Pete Tillman

  21. Bill Illis
    Posted Dec 23, 2009 at 9:14 PM | Permalink

    I noticed that gavin commented today that he is reviewing a comment/paper by Ross McKitrick.

    http://www.realclimate.org/?comments_popup=2586#comment-151025

    Not sure if this is sufficiently on topic: R. McKitrick wrote a response to Schmidt (2009), available online, but apparently not published yet. Is there a response available, or upcoming?

    [Response: Actually he co-wrote a new paper which was in effect a comment on Schmidt (2009) but which was not submitted as such. This isn’t so uncommon and can be appropriate if there is enough new material in the submission. I was asked to review it (as I assume other people were) and my review was submitted. I do not know what the current status is. – gavin]

    Comment by Norbert — 23 December 2009 @ 3:15 PM

  22. HotRod
    Posted Dec 23, 2009 at 9:20 PM | Permalink

    two things:

    bender – don’t apologise. I have nothing even beginning to come close to your knowledge (so my judgment could, I guess, be rubbish), but your acerbic tone never spoils, for me, what you say. If anything, it’s part of it.

    now something more controversial, but on topic – Steve, I think you are a God. But somehow in this thread I noted an acerbity you normally stay away from. I’m amazed you have stayed so calm over so many years, amazed. But stay that way, please.

  23. Posted Dec 23, 2009 at 9:39 PM | Permalink

    I find the comment about “that Box-Jenkins stuff” noticeable, since the book by Box and Jenkins was a groundbreaking and definitive establishment of modern time-series analysis, and these guys are doing time-series analysis. This manner of referral suggests the author doesn’t have a particularly strong grounding in, or understanding of, the statistical analysis of time series. In fact, a better grounding in time-series analysis would, in my experience, lead to a general skepticism toward all the “smoothing” that is done to this data. Smoothing that doesn’t really lead to smoothness!
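
    To make that concrete, here is a minimal sketch (all parameters hypothetical, and mine rather than anyone’s actual analysis): an AR(1) series, the simplest of the Box-Jenkins models, has no trend by construction, yet a moving-average smooth of it can wander enough to suggest one.

        import numpy as np

        rng = np.random.default_rng(42)
        n, phi = 200, 0.7  # hypothetical series length and AR(1) coefficient

        # Pure AR(1) "red noise": persistence, but no trend and no signal
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()

        # A 21-point moving average, the kind of smooth routinely applied to proxy series
        w = 21
        smooth = np.convolve(x, np.ones(w) / w, mode="valid")

        # The smooth of trendless noise still wanders: fit a straight line to it
        slope = np.polyfit(np.arange(smooth.size), smooth, 1)[0]
        print(f"apparent trend over the smoothed series: {slope * smooth.size:+.3f}")

    Run it with different seeds and the “trend” in the smoothed series changes size and even sign, which is exactly why smoothing autocorrelated data invites over-interpretation.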

  24. Posted Dec 23, 2009 at 11:46 PM | Permalink

    scientists are not paid to copy-edit

    Well, Bender, here we have proof that they most certainly are!

  25. ZT
    Posted Dec 24, 2009 at 12:54 AM | Permalink

    As to the question of verification: in the case of a statistical analysis, a reviewer might well say that a certain test should be performed prior to publication in order to confirm or test the result. The expectation would be that the authors would do the necessary work and include the confirmatory test results. Such a review would have helped the CRU people immensely.

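    To give a concrete (and entirely hypothetical) instance of such a request, a reviewer of a time-series paper might ask the authors to report a Ljung-Box check that the residuals of their fitted model are indistinguishable from white noise. A minimal sketch, with made-up residuals standing in for any real analysis:

        import numpy as np
        from scipy.stats import chi2

        def ljung_box(resid, lags=10):
            """Ljung-Box Q test: is there autocorrelation left in the residuals?"""
            n = resid.size
            r = resid - resid.mean()
            acf = np.array([np.dot(r[:-k], r[k:]) for k in range(1, lags + 1)]) / np.dot(r, r)
            q = n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, lags + 1)))
            return q, chi2.sf(q, df=lags)  # test statistic and p-value

        # Hypothetical residuals standing in for a fitted model's leftovers
        resid = np.random.default_rng(1).normal(size=150)
        q, p = ljung_box(resid)
        print(f"Q = {q:.2f}, p = {p:.3f}  (a small p flags structure the model missed)")
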
    When Mann received a reasonable review for a change (from Wegman), he howled and demanded that the reviewer be reviewed. This shows how far from scientific norms climatology had strayed – and where it apparently languishes even now.

  26. Posted Dec 24, 2009 at 5:09 AM | Permalink

    Concerning the customs: it really does look as if those people actually believe that by getting a “stamp” on an article, the article becomes valid and accurate.

    They believe so despite knowing full well how this stamp was achieved – by a combination of incompetent, corrupt, scared, blackmailed, and hand-picked reviewers (and many of the believers actively participate in the intimidation).

    Rationally, it’s clear that in a corrupt atmosphere, a review doesn’t mean anything. Even in a legitimate atmosphere, a review only increases the chances that the article is valid – it can never be a proof. And climate science today is very far from a legitimate atmosphere.

    Their community is literally built on groupthink and intimidation, and they even think that this is the right scientific approach.

  27. Solomon Green
    Posted Dec 24, 2009 at 1:42 PM | Permalink

    “Rationally, it’s clear that in a corrupt atmosphere, a review doesn’t mean anything. Even in a legitimate atmosphere, a review only increases the chances that the article is valid – it can never be a proof”.

    Let me explain how robust “peer reviewing” works. For a number of years there was much argument over the validity of certain assumptions upon which many, if not most, financial economists (including Nobel laureates) built their models.

    One professor of finance submitted an article for our professional journal. This was given to two others to peer review. One liked it and suggested only minor corrections. The other hated it and tore it to pieces.

    The editor was therefore obliged to send it to a referee to choose between them. Although I do not pretend to be an academic, the editor, knowing that my views coincided with those of the author, sent the paper to me to referee. This ensured that the paper would be, as it was in due course, published.

    So much for peer reviewing.

  28. Barbara
    Posted Dec 24, 2009 at 6:49 PM | Permalink

    Interesting to note that Wiki is cited for its description of peer review. Given William M. Connolley’s notorious involvement in 5,428 entries, that should raise questions.
    I’ve noticed that all the graphs I’ve seen were done in HadCM3 – copyright owned by William M. Connolley. Not being a scientist, I don’t know if this is acceptable or usual. If he was working for the Hadley Centre, shouldn’t they own the copyright?

  29. Jess
    Posted Dec 25, 2009 at 4:00 AM | Permalink

    Happy Christmas! And thank you for all your work.
    Dave Dardinger @ Dec 23, 5.01pm is right: Jones must be using line numbers – about 300 lines incl. references is the length of a standard 4-page journal article.
    From the key words and flow of the .doc comments, would this not be MannetalGRL09, aka Mann, Schmidt, Miller & LeGrande, “Potential biases in inferring Holocene temperature trends from long-term borehole information”, Geophysical Research Letters, 2009, L05708. Received 15 Oct 08, accepted 28 Jan 09, published 12 Mar 09.
    As a (very) obscure scientist in a different discipline, I’m not qualified to comment on whether the article (as published) deserved a tougher review. Incidentally, I agree with A.M. Mackay: even if one finds relatively minor problems, it is a courtesy to pass them on. Wouldn’t it be elitist if an eminent academic felt it was beneath him to do that?

  30. Posted Dec 25, 2009 at 12:02 PM | Permalink

    As scientists and engineers, we all produce a work product. In my over 40 years of experience in both areas, I have found that one cannot inspect quality into a work product. Quality must be designed in from the start – in project design, product design, workmanship, methods, and practices. After that, inspection can only find errors of implementation; it cannot, of itself, create or even assure a quality product.

    Scientific journal peer review is the current inspection process for presumably scientific work products. If the peer review process is flawed, it will fail to find errors of implementation, and the quality, or lack thereof, of accepted work products will pass through the system unscathed.

    Unfortunately, we find not only that the AGW climate science peer review process is flawed, but also that the original design of the science is flawed: it assumes what it’s trying to prove, the books were cooked, a large fraction of the raw data is lost, and the details of the methods are obscured and possibly lost even to the scientists who produced the work product. Quality was not designed into the process or the work product, and the inspection mechanism – run by the selfsame scientists who created the work product – covered up the flaws.

    How then can we say anything but that we must start over from scratch on climate science, place it on a sound footing, and be fully honest and open about the entire process?

    How then can we let it pass by simply saying scientists are human too and will therefore cut corners?

    Science is about NOT cutting corners. It is about discovering reliable knowledge upon which to found reliable actions. That is difficult and demanding work. It requires the maximum possible honesty, honor, and openness about every aspect of the endeavor. Even the slightest cut corner can and often does lead to monumentally costly errors in lost lives, lost wealth, and lost opportunities.

    Do the science right, and you get the right science!

  31. GT
    Posted Dec 27, 2009 at 7:43 AM | Permalink

    Just to throw in my two cents: ‘peer review’ should not have to mean that other climatologists review the paper; an editor should send it to whoever happens to be an expert. If the climatologist’s paper leans heavily on statistics, have an expert statistician review that part. If it discusses extensive chemical reactions, have a chemist review that area for accuracy. But it appears the AGW crowd set up what we like to call a self-licking ice cream cone.

    • A.M. Mackay
      Posted Dec 27, 2009 at 10:17 AM | Permalink

      GT,

      Your remark brings up one of the underlying issues in climate and paleoclimate modeling: To what extent does the poor state of the field result from a politicized, personalized, groupthink-influenced application of the forms of academic science, and to what extent is it a consequence of the misapplication of pure-science procedures?

      A number of examples of the latter have already been cited in this thread. For many engineering problems (e.g. bridge design) and regulatory filings (e.g. drug approvals), proper application of the peer-review process would deliver too little, too late.

      In the ideal, peer review doesn’t mean that a study was well-designed or correctly executed, much less that the authors’ conclusions or interpretations are correct. Instead, it signifies that the work has undergone scrutiny by impartial experts. It’s meant to indicate that the work addresses a significant issue, meets the current standards of the relevant fields, and has been structured according to a common format. It includes an implicit promise that if conclusions are found to be important by other working scientists, then those conclusions will be replicated and verified in the course of their own work.

      The result is an improved description of physical reality, in fits and starts, over time. This structure has much to recommend it, as it fosters insight, innovation, rigorous logical thinking, and interdisciplinary projects. And it’s cost-effective.

      For many high-stakes decisions in engineering, medicine, and policy, society has decided that this “get it right eventually” philosophy is inadequate. Instead, greater short-term predictive power and verified correctness are emphasized, at the expense of imaginative thinking and leaps of insight. And at much greater financial cost.

      If climate science is to be the handmaiden of public policy (e.g. Kyoto, cap-and-trade, Copenhagen), this sounds like a more suitable paradigm.

  32. Falafulu Fisi
    Posted Dec 28, 2009 at 4:33 PM | Permalink

    Any climate science reviewer who doesn’t understand the Box-Jenkins algorithms must immediately relinquish the job, since his/her knowledge is not up to the task. Reviewing a paper when you have little or no knowledge of the mathematical techniques used is akin to a senior high-school calculus student trying to check the correctness of a sibling’s final-year university calculus assignment. The high-school student wouldn’t be able to do the checking thoroughly, since she/he lacks the advanced knowledge.

    • Norbert
      Posted Dec 28, 2009 at 5:11 PM | Permalink

      Strange, then, that he came to express the view that the “math appears to be correct theoretically”, in spite of his bias.

      • ianl8888
        Posted Dec 28, 2009 at 5:50 PM | Permalink

          Stranger still that he thought this “theoretically correct maths” could do damage… hmm, eh?

        • Norbert
          Posted Dec 28, 2009 at 6:44 PM | Permalink

          I don’t know what he thought, but that is not what he wrote. The sentence about the damage relates to the part that is hidden in the ellipsis.

        • ianl8888
          Posted Dec 29, 2009 at 3:39 AM | Permalink

          Here is the quote:

          “If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, …”

          [It goes on then about the perceived uselessness of theoretical considerations that do not show any apparent practical applications]

          Despite your denial, the damage that was feared arises precisely because the maths is “correct theoretically” and the paper claims that “the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc.” Again, a direct quote from e-mail 1054756929.txt

        • Norbert
          Posted Dec 29, 2009 at 3:48 AM | Permalink

          Why don’t you quote the text preceding that sentence, together with that sentence? That is the text which the “damage” obviously refers to. You don’t seem to have understood what I said. You make it look as if that is the beginning of the email.

        • ianl8888
          Posted Dec 29, 2009 at 4:18 AM | Permalink

          Here it is, directly preceding the “If published as is …” comment

          “Hi Keith,
          Okay, today. Promise! Now something to ask from you. Actually somewhat important too. I got a paper to review (submitted to the Journal of Agricultural, Biological, and Environmental Sciences), written by a Korean guy and someone from Berkeley, that claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc. They use your Tornetrask recon as the main whipping boy. I have a file that you gave me in 1993 that comes from your 1992 paper. Below is part of that file. Is this the right one? Also, is it possible to resurrect the column headings? I would like to play with it in an effort to refute their claims.”

          Doesn’t help you at all.

          BTW, the word “ellipse” does not appear at all in file 1054756929.txt

        • Norbert
          Posted Dec 29, 2009 at 4:38 AM | Permalink

          “Ellipsis” refers to the “…” (the three dots in a row) which Steve’s quote has instead of the text which you now added.

          That text, which you now quoted, is followed immediately by the “damage” sentence. This text (“in an effort to refute their claims”) explains what the “claims” are (which he considers damaging).

          He doesn’t want to refute the *theoretical* mathematics, he wants to refute the claim that their method leads, in *practice*, to substantially different results. (Such that it would render existing reconstructions “wrong, biased, lousy, horrible, etc”). So he wants to make an effort to refute those claims (the claim that the results will be *practically* very different).

          An important sentence here is this one:

          “So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced.”

        • ianl8888
          Posted Dec 31, 2009 at 7:11 PM | Permalink

          You’ve just gone in a complete circle

          Why am I not surprised ?

          The paper in question demonstrated a superior mathematical technique. The potential for this to damage existing peer-reviewed conclusions based on the less-correct mathematics was recognized, and help was canvassed in an attempt to refute the paper at the peer-review level (i.e. to prevent it being published on grounds other than its correctness).

          This paper did not need to demonstrate a practical application – it skewered the reverse regression methods then in use. Trying to prevent it from being placed in the literature was unethical.

        • Norbert
          Posted Dec 31, 2009 at 7:54 PM | Permalink

          No. In terms of the information we find in that email itself, several points are missing from your logic. First, you missed the phrase “If published as is”. This means he was considering the possibility that it might be accepted pending *modifications*.

          I wrote this over at the other thread which discussed all this in more detail (https://climateaudit.org/2009/12/16/climategatekeeping/#comment-212721)

          In the view of the reviewer, there are three steps:

          1. It proposes a new method.

          2. It claims the new method is better.

          3. It claims that the existing method is bogus and much existing work obsolete.

          In this email, the reviewer objected mainly that the paper didn’t have sufficient support for step 3.

          Furthermore, his acceptance of the “superiority” of the method is stated only insofar as the paper “shows” it (whether correctly or not), not in the sense that he agrees with that conclusion. He seems to agree only that the math of the method is theoretically correct in itself, and not necessarily that it is correct to apply the method in the cases in which the paper proposes to apply it.

          In summary, the fact that the math in a paper is correct is not sufficient to accept all the claims it might make. Otherwise, I could write a paper which includes “2 + 3 = 5”, plus a lot of false but non-mathematical claims, and it would still have to be accepted by a scientific journal.

        • Tom
          Posted Dec 31, 2009 at 8:27 PM | Permalink

          Where to start on this piece of, erm, logic? No, wait, there’s another word for it – but I think Steve would censor it.

          I think we’d spotted that mathematical correctness is not enough to get a paper published. Bringing up this childish irrelevance does not establish the proposition that it is OK to reject a paper from the motive that it could do damage to some position the reviewer supports. Read the email – the meaning is utterly plain and not to be debated. The meaning behind the email and the meaning behind your post are the same, and just as wrong – that it is OK to be “theoretically” wrong so long as the “right” (ie desired) outcome is achieved. Why is “right” the same as “desired”? Because if the math is all wrong, then how else can you know that the result is right?

        • Norbert
          Posted Dec 31, 2009 at 9:42 PM | Permalink

          Tom, none of the methods is “wrong” according to the email. That’s exactly what he considers a false and unsupported claim. So much for your “logic”.

        • Tom
          Posted Jan 1, 2010 at 6:19 AM | Permalink

          It’s rather like juggling too many balls, isn’t it, Norbert? So long as you only think about one bit of the argument at once, it all makes sense for you.

          I said I wasn’t going to debate this, but, sigh, one more go:

          The paper deals with a statistical “method of reconstruction that we use in dendroclimatology” and shows that the method – that is, the math, since the method is just a mathematical machine that eats data and spits out a result – “is wrong, biased, lousy, horrible, etc.” And guess what? The math of the paper “appears to be correct.” Look at that! A correct paper, showing that the method is wrong! So what grounds are left to reject the paper? That is the problem faced by the author of the email.

        • Norbert
          Posted Jan 1, 2010 at 6:34 PM | Permalink

          Tom, you are simply repeating your argument.

          In the 3 steps I mentioned above, the math being correct, by itself, doesn’t even necessarily fully validate Step 1.

        • mikep
          Posted Jan 1, 2010 at 5:29 AM | Permalink

          Norbert, your characterisation of the paper is wrong. It PROVES mathematically that the inverse regression method commonly used is biased in the statistical sense (bias here is a perfectly well understood technical term in statistics, not a term of abuse) and that other, less often used, methods are not biased. So the paper does not even propose a new method; it just compares the estimation properties of existing methods. This is a result in its own right, and well worth publishing in a statistical journal. How important the bias is in practice is a separate question, also well worth investigating. My reading of the 2009 version is that the bias does matter, but it may be that this part is post-2003. I can see no good grounds for not publishing the proof of bias in a much-used estimator, unless the proof is incorrect.
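
          For what it’s worth, here is a toy Monte Carlo sketch of why the direction of regression matters (the model and numbers are hypothetical, not the paper’s): with a noisy linear proxy, regressing temperature on the proxy shrinks the reconstructed amplitude toward the calibration mean, while regressing the proxy on temperature and inverting the fit passes noise through and inflates it. Neither direction is innocuous, which is why a proof about estimator bias is worth publishing.

              import numpy as np

              rng = np.random.default_rng(0)
              n_sims, n_cal = 2000, 100    # hypothetical Monte Carlo runs and calibration length
              sig_sd, noise_sd = 1.0, 1.0  # SD of the true signal and of non-climatic proxy noise

              amp_direct, amp_inverted = [], []
              for _ in range(n_sims):
                  T = rng.normal(0.0, sig_sd, n_cal)                    # true temperatures
                  P = 0.5 + 1.0 * T + rng.normal(0.0, noise_sd, n_cal)  # proxy = signal + noise

                  # Direction 1: regress T on P and predict T (shrinks toward the mean)
                  amp_direct.append(np.polyval(np.polyfit(P, T, 1), P).std())

                  # Direction 2: regress P on T, then invert the fitted line (noise passes through)
                  slope, intercept = np.polyfit(T, P, 1)
                  amp_inverted.append(((P - intercept) / slope).std())

              print(f"true signal SD:            {sig_sd:.2f}")
              print(f"regress T on P (predict):  {np.mean(amp_direct):.2f}  <- attenuated")
              print(f"regress P on T (inverted): {np.mean(amp_inverted):.2f}  <- inflated")

          Under these made-up numbers the first direction recovers roughly 0.7 of the signal’s standard deviation and the second roughly 1.4, so the choice of regression direction is anything but neutral.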

        • Norbert
          Posted Jan 1, 2010 at 6:41 PM | Permalink

          Mikep, your own characterization is already wrong at the point where you state that the common method is “inverse” regression. That is the type of regression proposed by the paper; the commonly used one (in dendroclimatology) is “reverse” regression. (According to the email.)

  33. HarleyDavidson
    Posted Dec 29, 2009 at 1:22 PM | Permalink

    Reading the above, it is not difficult to suggest – as Paul Newman said in one of his movies – “what we have here is a failure to communicate.” Specifically on the part of Mann and Jones: if I want a good review of anything in my private life, I ask my buddy – I know the best he’ll do is pick around the edges and not say anything of value.

    The fact remains: Jones or Mann peer reviewing each other or their working colleagues in AGW climate science (and not for lack of knowledgeable specialists in the various fields of climate science) is akin to – snip

  34. RomanM
    Posted Jan 1, 2010 at 7:24 PM | Permalink

    Norbert:

    Mikep, your own characterization is already wrong at the point …

    By all means, Norbert, dispel the apparent ignorance of your comment by explaining exactly how the “reverse” regression referred to (which you claim to be “commonly used” in dendro without any actual knowledge of such a fact) differs from the “inverse” regression as known to statisticians.

    It has become very clear that you have little understanding of statistics or science in general. I doubt that you have ever published a scientific paper. All we have seen here is your high-school debating style (always on irrelevant points), which frankly leaves a lot to be desired. If you wish to contribute positively, try learning about a topic before commenting. Ignorance contributes nothing.

    • Norbert
      Posted Jan 1, 2010 at 8:51 PM | Permalink

      As I stated, I am referring to the email (which is what we are discussing). It says:

      “the method of reconstruction that we use in dendroclimatology (reverse regression)”

      The same terms are used by the re-written paper (not the original one which was reviewed), which includes Auffhammer as a co-author. The rest of your message is a ridiculous ad hominem.

      I did not intend to comment on this topic any further, and will not do so from now on.

      • Norbert
        Posted Jan 1, 2010 at 8:53 PM | Permalink

        (Above it should say: “*which is* not the original one” )

  35. Dr. Ross Taylor
    Posted Jan 1, 2010 at 11:07 PM | Permalink

    I do not know if you have all read Dr. Mann’s letter to the WSJ of 31 Dec, which includes this salient passage:

    “Society relies upon the integrity of the scientific literature to inform sound policy. It is thus a serious offense to compromise the peer-review system in such a way as to allow anyone—including proponents of climate change science—to promote unsubstantiated claims and distortions.”

    Having carefully considered many of the peer-review related e-mails in the climategate package, I am, er, gobsmacked (to use the scientific term).

    Can someone please get this man some high quality legal advice, as a matter of urgency.

    You can read his letter here: http://online.wsj.com/article/SB10001424052748703478704574612400823765102.html

    If you wish to compare what he writes to the WSJ with what he writes to his colleagues: apart from Steve’s work, there is an excellent ongoing analysis of the e-mails in great detail (about 3 hours’ reading). The comments are inclined to be slightly OT, but it is worthwhile reading, and can be found here:

    http://assassinationscience.com/climategate/

  36. Jimw
    Posted Jan 2, 2010 at 1:18 AM | Permalink

    The following letter was rejected by Science Magazine:

    Global Warring
    I was happy to see Michael Mann and colleagues acknowledging the existence of the Medieval Warm Period, as everyone else calls it – or the Medieval Temperature Anomaly, as they would prefer. This is a welcome departure from his colleague’s (Overpeck’s?) previous opinion, if the released emails are accurate, that it needed to be gotten rid of – welcome even if the period is alleged to be a merely local phenomenon. Perhaps his more moderate current posture means that simple renaming will be sufficient for his purposes.

    I was disappointed that the interviewer (podcast) did not inquire of Mann his opinion of the not inconsiderable evidence of the existence of a corresponding warm period in Russia, China, Pakistan, North America, and Patagonia.

    I respectfully request that Science magazine consider reviewing his previous publications in their journal in view of the University of East Anglia material, and also the rejected papers submitted by authors previously disparaged or intimidated by this cabal, including but not limited to Steve McIntyre, Roger Pielke, John Christy, Roy Spencer, Anthony Watts, Craig Loehle and Richard Lindzen. It seems to me that a moral debt has been incurred by the choir boys of the AGW priesthood, and that penance must be said and amends must be made.
    Jim Whiting

3 Trackbacks

  1. […] this overused crutch (and favourite club of the “science is settled crowd”) does not appear to include any examination of the actual science underlying any of the IPCC […]

  2. By Mosher: The Hackers « Watts Up With That? on Jan 26, 2010 at 3:38 PM

    […] Without the mails which detail how these hacks work, one could imagine that the IPCC could be made “hack proof” merely by adopting more controls and a more open process. The mails, however, indicate that the science publishing process has also been hacked. Editors have been compromised, and the system of peer review has been corrupted. Very simply, one can make the IPCC process as open and transparent as one likes, but as long as it is fed by a corrupt journal process, you will still get garbage science out of the IPCC process. And further, you could reform the journals all you like and the process can still be corrupted by the individual influential researcher who hides his data and his code. […]

  3. […] Today I’ll report on the spectacle of Jones reviewing a submission by Mann et al. “    https://climateaudit.org/2009/12/23/climategatekeeping-jones-reviews-mann/     […]