Climategatekeeping

In the MIT Climategate Forum, Ronald Prinn trotted out what has become one of the standard “move along” memes in the climate science community: that while the “tone” of the Climategate emails was “unprofessional”, they did not succeed in their “endeavour” to prevent publication of articles in journals or mentions in IPCC. Prinn at around minute 48 says:

Number 2. Were the people successful in their endeavour to preventing publication in journals or mentions in IPCC? This is a very important question. Could one successfully do that? Five papers by McIntyre and McKitrick were published and then referenced and discussed in the IPCC… But were the people successful in their endeavour to preventing publication in journals or mentions in IPCC? The answer is no. They were not successful. [the elided material does not seem germane to this particular point]

As so often in climate science, Prinn is talking without apparently doing any due diligence. The Climategate Letters provide many examples of CRU and their associates successfully preventing publication of articles in journals. Most of these examples do not pertain to the Mc-Mc articles and, indeed, some of the most egregious examples precede our entry onto the scene in late 2003.

Today, I’ll provide two 2003 and two 2004 examples where, contrary to Prinn’s soporific “move-along”, CRU and their associates successfully prevented publication of four articles (the identities of which are presently unknown). There are other examples in the Climategate Letters which I’ll discuss on other occasions.

Briffa and Cook, Spring 2003

On June 4, 2003 (1054748574), Briffa, apparently acting as editor (presumably for Holocene), contacted his friend Ed Cook of Lamont-Doherty in the U.S., who was acting as a reviewer, telling him that “confidentially” he needed a “hard and if required extensive case for rejecting”, in the process advising Cook of the identity and recommendation of the other reviewer. There are obviously many issues with the following as an editor instruction:

From: Keith Briffa
To: Edward Cook
Subject: Re: Review- confidential REALLY URGENT
Date: Wed Jun 4 13:42:54 2003

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting – to support Dave Stahle’s and really as soon as you can. Please
Keith

Cook to Briffa, June 4, 2003
In a reply the same day, Cook told Briffa about a review for Journal of Agricultural, Biological, and Environmental Sciences of a paper which, if not rejected, could “really do some damage”. Cook goes on to say that it is an “ugly” paper to review because it is “rather mathematical” and it “won’t be easy to dismiss out of hand as the math appears to be correct theoretically”. Here is the complete email:

Hi Keith,
Okay, today. Promise! Now something to ask from you. Actually somewhat important too. I got a paper to review (submitted to the Journal of Agricultural, Biological, and Environmental Sciences), written by a Korean guy and someone from Berkeley, that claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc. They use your Tornetrask recon as the main whipping boy. I have a file that you gave me in 1993 that comes from your 1992 paper. Below is part of that file. Is this the right one? Also, is it possible to resurrect the column headings? I would like to play with it in an effort to refute their claims. If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced. Your assistance here is greatly appreciated. Otherwise, I will let Tornetrask sink into the melting permafrost of northern Sweden (just kidding of course).
Cheers,
Ed

Briffa promptly replied:

Hi Big Boy
You just caught me as I was about to slope off after a brutal day …[chitchat]… This attack sounds like the last straw– from what you say it is a waste of time my looking at it but send a copy anyway. [more chitchat]
Keith

Update: it looks like this paper is http://nber-nsf09.ucdavis.edu/program/papers/auffhammer.pdf . On a quick look, it is a professional and interesting paper – far better than standard Team fare. It was not cited in AR4. Six years later, it has still not been published in the peerreviewedlitchurchur. Breathtaking. Cook is at Lamont-Doherty in the U.S.

Updated December 8, 2023

Seung Yoo and Wright, submitted 2003; Auffhammer et al 2014

(CG3 1051196664): On April 21, 2003, editor Tony Olsen of JABES (Journal of Agricultural, Biological and Environmental Sciences) asked Keith Briffa to review “Specification and Estimation of the Transfer Function in Paleoclimatic Reconstructions” by Seung Jick Yoo and Brian D. Wright. Briffa declined and suggested Ed Cook as an alternative.

(CG3 1054752279, 1054756775): On June 4, 2003, in the same thread as the review of the Hunzicker and Camill submission (see next section below), Cook sent Briffa the following request about the submission by Seung Yoo and Brian Wright (which turns out to be about statistical problems with inverse regression):

Hi Keith,
Okay, today. Promise! Now something to ask from you. Actually somewhat important too. I got a paper to review (submitted to the Journal of Agricultural, Biological, and Environmental Sciences), written by a Korean guy and someone from Berkeley, that claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc. They use your Tornetrask recon as the main whipping boy. I have a file that you gave me in 1993 that comes from your 1992 paper. Below is part of that file. Is this the right one? Also, is it possible to resurrect the column headings? I would like to play with it in an effort to refute their claims.

If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced.

Your assistance here is greatly appreciated. Otherwise, I will let Tornetrask sink into the melting permafrost of northern Sweden (just kidding of course).
Cheers,
Ed

(CG3 – 1054756775; 1054756929). Late on June 4, 2003, Briffa replied to Cook:

Hi Big Boy

You just caught me as I was about to slope off after a brutal day – we spent all day yesterday interviewing for a job we have and then someone accepted it – and now Janice tells us we don’t have the money to pay at the rate the job was advertised for! This attack sounds like the last straw- from what you say it is a waste of time my looking at it but send a copy anyway. The file you have is an old version of a reconstruction output for one Tornetrask reconstruction – if it was labelled something like 990 it is the original Nature one , but 997 (i Think//1) would make it the Climate Dynamics one . Trouble is I will have to go back and find out which . Please ring if I haven’t my tomorow to remind me – and concentrate on the review for now. I will also talk about an extended nearby dataset (temp) that might allow a longer more rigorous validation .

CG3 1054904190: On June 5, 2003, Cook sent Briffa a copy of the paper by Seung Yoo and Wright that Cook was reviewing, together with his review:

This is not terribly kosher, but I am sending you the paper I am reviewing that attempts to destroy dendroclimatology as presently done, and my present review of it. This does not have to be sent in until next week sometime, so there is time for you to add any comments. Doing this is justified in my view because the authors use your Tornetrask reconstruction as the main whipping boy. The paper is rather mathematical in parts, but the bias they show in condemning the standard method of climate reconstruction is pretty apparent. I don’t know if there is a hidden agenda or just an effort on their part to show us dumb asses how to do it right! Anyway, give me a call at home tomorrow if you wish, but certainly read what I have sent you and please recommend changes or additions.

P.S. Please keep this confidential for now since it is a paper under review.

CG3 1054925206: On June 6, 2003, editor Olsen reminded Cook about the outstanding review request:

Dr. Cook, I am not sure if you agreed to do the review (at least I can’t find it). Will you be able to complete the review in the next week? Two weeks? Look forward to hearing from you.

CG3 1054925206: Later on June 6, 2003, Cook submitted his review to JABES with the following covering remarks (the review itself is not in the documents):

Hi Tony,

Here is my review of the Yoo and Wright paper. Frankly, it is very poor for reasons that I describe in my review and really must not be published as is. It would do grossly unfair harm to dendroclimatology because the authors have simply not made their case. If anything, they have actually vindicated the “reverse regression” method based on what they show in their Table 2, even if they don’t care to admit it.

I also see that they used the same tree-ring data as Briffa to test their method. Yet, there is not so much as the slightest acknowledgement of where the data were obtained. From Briffa? I assume so. If so, they should acknowledge it.

Also, had you considered Briffa as a reviewer as well? Since this paper is such a negative attack on his work, that would have been proper. Anyway, I appreciate the fact that you sent me this paper to review.

Cheers, Ed

CG3 1054925206: Cook then sent the review to Briffa.

CG3 1054928326: Later on June 6, 2003, Olsen replied:

I did ask Briffa to review the paper. He declined due to schedule conflicts. In fact he recommended that you review the paper. thank you very much for the timely review.

CG3 1054928326: Cook wrote Briffa:

A hah! The truth comes out.

Briffa replied:

truth when seen through a dark mirror is merely a distortion of ones inadvertently projected sub conscious …..or do I mean incontinence?

CG3 1054929746: Cook replied:

Your rhapsodic and imaginative prose is second only to that of J.K. Rowling. Now if only you had her money!

The Seung Yoo and Wright paper remained buried for 11 years until it was belatedly published in 2014 as Auffhammer et al (link).
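As background for readers unfamiliar with the statistical issue, here is a toy Monte Carlo sketch of why the direction of the calibration regression matters when the proxy is noisy. Because the terminology (“reverse” versus “inverse” regression) is used inconsistently in the literature, the sketch simply labels the two approaches “direct” and “classical” calibration. It is emphatically not the Yoo and Wright method, and every number and variable name below is invented for illustration.

```python
# A toy Monte Carlo -- NOT the Yoo/Wright (Auffhammer et al) method -- illustrating
# why the direction of the calibration regression matters when the proxy is noisy.
# Every parameter below is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_recon, n_sims = 100, 500, 2000   # calibration length, reconstruction length, trials
beta, noise_sd = 1.0, 1.0                 # true proxy response and proxy noise level

amp_direct, amp_classical = [], []
for _ in range(n_sims):
    temp_cal = rng.normal(0, 1, n_cal)                      # "instrumental" temperature
    proxy_cal = beta * temp_cal + rng.normal(0, noise_sd, n_cal)
    temp_past = rng.normal(0, 1, n_recon)                   # unobserved past temperature
    proxy_past = beta * temp_past + rng.normal(0, noise_sd, n_recon)

    # Direct calibration: regress temperature on the proxy, then predict the past.
    slope_d, icept_d = np.polyfit(proxy_cal, temp_cal, 1)
    recon_direct = slope_d * proxy_past + icept_d

    # Classical calibration: regress proxy on temperature, then invert the fit.
    slope_c, icept_c = np.polyfit(temp_cal, proxy_cal, 1)
    recon_classical = (proxy_past - icept_c) / slope_c

    amp_direct.append(recon_direct.std() / temp_past.std())
    amp_classical.append(recon_classical.std() / temp_past.std())

# With these settings the direct regression recovers only ~70% of the true amplitude
# (attenuation), while the inverted classical fit over-recovers it (~140%) because
# the proxy noise passes straight through to the reconstruction.
print("mean amplitude ratio, direct calibration:    %.2f" % np.mean(amp_direct))
print("mean amplitude ratio, classical calibration: %.2f" % np.mean(amp_classical))
```

The point of the toy exercise is only that the choice of regression direction biases the reconstructed variance one way or the other; how much that matters for any particular reconstruction is exactly the sort of question the submitted paper apparently addressed.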

Hunzicker and Camill, Tree ring drought reconstruction from Montana

(CG3 1050527998): On April 16, 2003, Briffa emailed Ed Cook, asking Cook whether he had returned a review of a paper by Hunzicker and Camill.  Cook didn’t “remember anything about this paper”. Briffa reminded him that it was a “new 672 year tree ring drought reconstruction from west central Montana”. Briffa added:

Confidentially, Davis S also reviewed and said very well written BUT NO GO. It was sent to you on 17 June 2002!!

AND WHY HAVE YOU NOT F>>>G RANG ME ? Am off home now. love Keith

(CG3 1054748574): On June 4, 2003, under the subject line “re: Review- confidential REALLY URGENT”, Briffa emailed Cook as follows. The CG3 information shows that this was about the Hunzicker and Camill submission of a 672-year Montana tree ring chronology:

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting – to support Dave Stahle’s and really as soon as you can. Please
Keith

CG3 1054757678: Cook sent Briffa his review:

Here is my review. I must admit to not being quite as negative about it as Stahle, but I do feel that it is marginal at best and could be justifiably rejected. Read my review. Of course, you will want to cut out the review and send it to the authors as a separate document.

Review of “Using a New 672-Year Tree-Ring Drought Reconstruction from West-Central Montana to Evaluate Severe Drought Teleconnections in the Western U.S. and Possible Climatic Forcing by the Pacific Decadal Oscillation” by D.A. Hunzicker and P. Camill

This paper is reasonably well written, but has some problems in it that bother me. The first issue relates to the tree-ring chronology that was developed at Lindberg Lake. Anytime less than half of the core samples (61 or 152) are used in developing a chronology, this is cause for concern. The fact that there are “unresolvable sections of missing rings” (p. 10) can mean a lot of things. However, ponderosa pine is known to cross-date well, which includes “locating” locally-absent rings during the cross-dating phase, so it is surprising that the authors have chosen not to work through these problems. Presumably, the trees with missing rings are also those most sensitive to drought, so isn’t there a chance that the chronology being analyzed in this paper is less sensitive to drought than it ought to be?

I also wonder how much their chronology is truly contributing to the overall stated goal of this paper, i.e. evaluating “Severe Drought Teleconnections in the Western U.S. and Possible Climatic Forcing by the Pacific Decadal Oscillation”. The authors extensively use the PDSI reconstructions of Cook et al. (1999) in their analyses. Aside from the increased length of their new tree-ring chronology, what does it contribute that was not possible simply by using the Cook et al. reconstructions to test for teleconnections and forcing. None of the indices of forcing (ENSO, PDO, sunspots) extend back before the beginning of the Cook et al. reconstructions, so there is little to be gained in using one longer series from west-central Montana in this analysis. One could point to Fig. 3, which compares the MT reconstruction vs the SWDI series. But even this comparison is limited in its overall contribution to the paper.

I also don’t like the use of the FFT for estimating power spectra, even if the confidence limits are determined by bootstrapping. The power spectra calculated by the FFT are still inconsistent estimates. A more contemporary and consistent method of spectral estimation, like the Multi-Taper Method, should be used.

For the reasons stated above, I do not consider this paper to be ready for publication as is. I will leave it to the Editor to decide how to proceed with it past this point.
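A side note on the last point in Cook’s review: the raw FFT periodogram is indeed an inconsistent estimator (its variance at a given frequency does not shrink as the series gets longer), which is what the Multi-Taper Method addresses by averaging over orthogonal tapers. The minimal sketch below illustrates the variance reduction on synthetic white noise; it has nothing to do with the Hunzicker and Camill data, and the taper parameters are arbitrary.

```python
# Minimal sketch of a multitaper (MTM-style) spectrum versus a raw FFT periodogram
# on synthetic white noise. Illustration only; not Cook's analysis or the
# Hunzicker/Camill data, and the taper parameters are arbitrary.
import numpy as np
from scipy.signal import periodogram
from scipy.signal.windows import dpss

rng = np.random.default_rng(1)
n = 672                       # length chosen to echo the 672-year reconstruction
x = rng.normal(size=n)        # white noise: the true spectrum is flat, so any
                              # spread across frequencies is estimator noise

# Raw periodogram: its variance at each frequency does not shrink as n grows
# (the "inconsistent estimate" Cook refers to).
_, p_raw = periodogram(x)

# Multitaper estimate: average periodograms computed with K orthogonal DPSS
# (Slepian) tapers, trading a little frequency resolution for lower variance.
NW, K = 4, 7                  # time-bandwidth product and number of tapers
tapers = dpss(n, NW, Kmax=K)  # shape (K, n)
p_mtm = (np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2).mean(axis=0)

# Compare the relative scatter (std/mean) across frequencies; absolute scaling
# is irrelevant for this comparison.
print("relative scatter, raw periodogram: %.2f" % (p_raw[1:].std() / p_raw[1:].mean()))
print("relative scatter, multitaper:      %.2f" % (p_mtm[1:].std() / p_mtm[1:].mean()))
```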

End 2023 Update

Jones to Mann Mar 31, 2004
On Mar 31, 2004, Jones wrote to Mann as follows:

Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. If either appears I will be very surprised, but you never know with GRL.

Returning to Prinn’s question: “were the people successful in their endeavour to preventing publication in journals or mentions in IPCC?” I don’t know at present whether any of these four papers eventually found their way into journals elsewhere, or even who their authors were.

In order for Prinn or anyone else to make a grandiose move-along claim, surely a little bit of due diligence is in order: who were the authors of these four papers (and there are others)? Did they eventually get published in other journals despite CRU’s “endeavours to prevent publication”? Were they then mentioned in IPCC?

Prinn doesn’t know. And if he doesn’t know, he should not have told the audience to move along.

Update Dec 2023. In May 2014, Auffhammer et al, “Specification and estimation of the transfer function in dendroclimatological reconstruction”, the paper that Briffa and Cook had prevented from publication in JABES, was published (link). I didn’t notice it at the time. Auffhammer et al cited this Climate Audit article as a reference for the “history of this paper”. A reader notified me of the publication in 2015, and I noted in correspondence at the time that Bo Li had been added as an author of the 2014 version, but I didn’t discuss the new paper at the blog.

250 Comments

  1. Norbert
    Posted Dec 16, 2009 at 12:53 PM | Permalink

    I don’t understand your complaint yet. It is not enough to have correct mathematics, even for an article that is mostly mathematical. After all, climate science is not mathematics. Peer-review is supposed to reject some papers, including ones that don’t really make a point. But maybe I missed yours.

    • MarkB
      Posted Dec 16, 2009 at 1:33 PM | Permalink

      Norbert

      Climate science is not mathematics? Climate science is a subfield of physics. All climate analysis and modeling is done through mathematics.

      If the issue was as you propose, there would have been no need for these emails. And of course, a reviewer contacting another scientist to protect his work from criticism would be considered unethical by many (to be generous) – although it goes on all the time, no doubt.

      Note this: “If published as is, this paper could really do some damage.” If it wasn’t a good paper, how could it do damage? Papers that can do damage to an existing hypothesis SHOULD be published – that’s the entire point of peer reviewed publishing. If Briffa disagrees, he can respond in kind, and let the science community decide who is right. In this case, back-door gatekeeping is being used to prevent the science community from having the opportunity to make up their minds.

      • Jryan
        Posted Dec 17, 2009 at 9:54 AM | Permalink

        Well, in a way Norbert is correct. There was very little mathematics… or at least CORRECT mathematics.. in the Team literature.

      • Norbert
        Posted Dec 17, 2009 at 12:50 PM | Permalink

        It might be enough to have correct mathematics for a paper that is only mathematical. But this is not the case either.

    • Hoi Polloi
      Posted Dec 16, 2009 at 2:07 PM | Permalink

      “Peer-review is supposed to reject some papers, including ones that don’t really make a point.”

      And what do you think Cook meant with: “If published as is, this paper could really do some damage.”?

      • Norbert
        Posted Dec 16, 2009 at 2:58 PM | Permalink

        He probably meant that it could cause far reaching wrong conclusions based on a misleading critique about finer theoretical points which have no sufficient practical relevance.

        • David Wright
          Posted Dec 16, 2009 at 3:36 PM | Permalink

          I have done peer review, and I see this email as very hard to defend. If he believes that the paper “is correct theoretically,” the right thing to do is to get it in the literature, even if the practical impact is small. Perhaps someone will do a follow-up paper on the practical impact. Perhaps future authors will use the statistically more justified framework. It’s certainly justified to write back something like “this paper would be stronger if you added a section on the practical impact on existing analysis” — such comments are common and helpful. But believing that a correct result has little practical impact is not a reason to block its publication.

          In any case, from the tone of the email, it seems that the real concern is not so much the lack of practical impact, as that the paper might indeed have a practical impact — on the image that these authors want to present that their work was completely correct and unassailable in any way.

          I might add that the paper sounds like it was trying to do exactly what the Wegman report specifically recommended: engage the dendro community with people with solid skills in the kind of advanced statistics it was trying to pull off. This email shows that some of the dendro community was not only ignoring that recommendation, but actively fighting against it.

        • Norbert
          Posted Dec 16, 2009 at 3:51 PM | Permalink

          He doesn’t say that the paper is correct. He says that the math in the paper appears correct theoretically, but that the paper doesn’t show that this matters for practical work. For all we know, the improvement might be completely irrelevant. However, apparently the paper suggests, (or creates the impression) that there would be such relevance.

          So the paper appears to miss something crucial, at least in the way in which it is presented. That’s what the information above tells me, and in terms of common sense, it would seem to me that such a paper should be rejected, at least until it is modified to point out practical relevance.

        • Skip Smith
          Posted Dec 16, 2009 at 4:26 PM | Permalink

          That sounds like a case for a revise and resubmit with instructions to add a section on the practical implications of their correction. A rejection of a correct paper that points out a flaw in the reviewer’s work is suspicious.

        • Norbert
          Posted Dec 16, 2009 at 4:53 PM | Permalink

          How would we know (at this point) that this didn’t happen?

          Or how would we know if the authors were interested in re-submitting the paper if any claim of practical relevance was disproven? Perhaps their method was only of interest if it had such a relevance.

        • Bob
          Posted Dec 16, 2009 at 10:53 PM | Permalink

          Norbert,

          Your disingenuous apologism does everyone here a disservice. It’s obvious that wasn’t what the author of the email wanted to happen. For proof of that, just read the email again. When someone says “it won’t be easy to dismiss out of hand” it is clear that their initial desire is to do what? Dismiss it out of hand, of course. The evidence of a bias in favor of the theory being challenged is ostensive, right there in the body of the email we are discussing.

        • Norbert
          Posted Dec 17, 2009 at 2:41 AM | Permalink

          You seem to be stating the obvious. It introduces the paper thus: “claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc.”

          Obviously he finds the paper insulting, and would like to be able to dismiss it. You could turn around and say that since he admits this isn’t easy, that is a sign of respect. But he might just be saying that it isn’t easy to return the favor, so to speak.

          One gets the impression that climate scientists have to deal with assaults on their fundamentals in an ongoing fashion. Probably I wouldn’t want to be in their place.

          No wonder he challenges the paper to actually prove something that is relevant to climate science, and not just some mathematical bash fest.

        • Posted Dec 17, 2009 at 7:18 AM | Permalink

          Norbert has a valid point: just because the math may LOOK right doesn’t obviate the deficiency that the conclusion wasn’t supported. It would be nice to see the paper and see if it actually added something to the scientific base or was just an exercise in ‘baffle-them-with-bs’.

        • bender
          Posted Dec 17, 2009 at 7:23 AM | Permalink

          Norbert is out of juice and so are you. You can’t know that the “conclusion wasn’t supported” because you don’t know the identity of the paper. If it should turn out that the paper is Auffhammer et al, then you are in for a treat.

        • ThinkingScientist
          Posted Dec 17, 2009 at 10:29 AM | Permalink

          If he says the math LOOKS right, it sounds as though he may not be competent to review the paper. Reviewers cannot say the maths LOOKS right. It is either right or it isn’t. Any reviewer who is not sure should either seek clarification or ask for another opinion (via the editor). I have received papers for review which, when I was asked to do the review, sounded as though they were clearly within my competency, but once I received the full paper it became apparent that it was too specialised. The proper response then is to inform the editors that you are unable to review and, if appropriate, recommend others who may be better qualified.

        • Norbert
          Posted Dec 17, 2009 at 11:16 AM | Permalink

          bender,

          “Out of juice” is at least a bit early to say, since even if it is Aufhammer et al., we have only seen an update of it dated “June, 2009”. Perhaps it has been Cook’s eventual review which has prompted improvements in the current text. We don’t know that yet, it is still too early to draw conclusions. Plus, we don’t have a really critical and competent take even on the 2009 version yet, either.

        • mikep
          Posted Dec 19, 2009 at 6:40 AM | Permalink

          Assuming the paper is the one by Aufhammer et al, then it is quite clear that they have proved that the estimators used in inverse regressions are biased. This is not a term of abuse – it’s a technical term referring to whether or not the estimator gives on average the right answer for the population variable it is estimating. The reaction given by “claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc.” almost seems to suggest that the team do not understand the concept of an unbiased estimator. It’s a little like the reaction to the use of the term “spurious regression”, which is also a technical term not an insult.

          Steve:
          It will be a while before I can study his paper. Isn’t there an even more important issue with CIs?

        • mikep
          Posted Dec 19, 2009 at 8:16 AM | Permalink

          I agree that there are also interesting points about confidence intervals. The point I was trying to make was that this is bread and butter econometrics about the properties of estimators. It seems to be new – at least in this application – and the argument is watertight. That seems to be good enough for it to be accepted. The practical consequences of the bias are a separate question, well worth exploring. But I don’t see why these authors need to address it – surely that is for the people who have used a DEMONSTRABLY incorrect method. Contrast this with the paper published by Gavin Schmidt in the International Journal of Climatology which seems to miss the elementary point that whether the confidence intervals for the coefficients on the independent variables in a regression need to be adjusted for autocorrelation depends on whether the residuals of the equation are autocorrelated, not whether the dependent variable is autocorrelated. (Autocorrelation in the dependent variable is one of the features of the data to be explained by the independent variables. If the independent variables do a good job of this and leave normally distributed residuals then that is fine.) What troubles me is how this sort of analysis gets through peer review in climatology but better work fails.
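[A quick toy simulation makes mikep’s point concrete: naive OLS confidence intervals keep roughly their nominal coverage when the residuals are white, even though the dependent variable is autocorrelated through the regressor, and they undercover only when the residuals themselves are autocorrelated. All parameters below are invented; this is a sketch, not Schmidt’s or anyone else’s actual analysis.]

```python
# Toy coverage check of the point above: naive OLS confidence intervals are fine
# when the residuals are white (even if y is autocorrelated via the regressor),
# but undercover badly when the residuals themselves are AR(1).
# All parameters are invented; this is not anyone's actual analysis.
import numpy as np

rng = np.random.default_rng(2)
n, beta, n_sims = 200, 1.0, 2000

def ar1(phi, size):
    """Generate a simple AR(1) series with the given lag-1 coefficient."""
    e = rng.normal(size=size)
    x = np.empty(size)
    x[0] = e[0]
    for t in range(1, size):
        x[t] = phi * x[t - 1] + e[t]
    return x

def coverage(resid_phi):
    """Empirical coverage of the naive 95% CI for the slope."""
    hits = 0
    for _ in range(n_sims):
        x = ar1(0.8, n)               # autocorrelated regressor
        u = ar1(resid_phi, n)         # residuals: white (phi=0) or AR(1)
        y = beta * x + u              # y is autocorrelated in both cases
        X = np.column_stack([np.ones(n), x])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ b
        s2 = resid @ resid / (n - 2)
        se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        hits += abs(b[1] - beta) < 1.96 * se_slope
    return hits / n_sims

print("coverage with white residuals:     %.3f" % coverage(0.0))  # close to 0.95
print("coverage with AR(1)=0.8 residuals: %.3f" % coverage(0.8))  # well below 0.95
```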

        • David Wright
          Posted Dec 16, 2009 at 4:28 PM | Permalink

          I disagree. Suppose, hypothetically, that the practical impact is really ZERO. To all decimal places. That is, the advanced analysis can be proven to be equivalent to the naive analysis. Then it is still very interesting to know this.

          In my field (particle physics) there are reams of interesting papers with no immediate practical impact on the analysis of experimental data. And there are scores of cases where a mathematical paper published in the 30s or 40s was ignored until the 70s or 80s, whereupon it was recognized as having been a crucial contribution. These papers would definitely have been rejected if “immediate practical impact” was a criteria for publication.

          It’s also very hard to maintain that the main problem these reviewers see is the lack of practical impact, when they are also writing that “this paper could really do some damage”. It looks much more like they are fishing about for an excuse to block publication of what they recognize to be an accurate critique of their work, and the lack of an in-depth discussion of impacts is a convenient one. If you choose to reply, I would be interested to know what you think “this paper could really do some damage” means.

          I will say this, though, in their “defense.” If you send a manuscript for review to the author of a paper that it undermines, you are pretty much guaranteed to get a “reject” back. So while these authors’ behavior is poor and unjustified, it’s also relatively normal. This doesn’t rise nearly to the level of other ethical violations revealed by the Climategate emails.

        • Norbert
          Posted Dec 16, 2009 at 5:46 PM | Permalink

          It might be interesting, but not all mathematics which are correct are also interesting. Excuse me for quoting wikipedia, but in the shortness of time I found only this quote on the peer-review process:

          “This process encourages authors to meet the accepted standards of their discipline and prevents the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views.”

          For all we know, all of these, and especially the first criteria, may have been applicable here.

          Regarding the “damage”, as I have already written elsewhere, it probably means that the article either directly draws wrong conclusions about the practical relevance, or invites the reader to draw such wrong conclusions about the relevance, regarding the methods it objects to. And that might be a mess to clean up, so to speak.

          At the end, you speak about sending a manuscript to the author for review. But it should be pointed out that the reviewer is Cook, and he mostly asks Briffa to confirm that he is using the right data file, and asks about the column headers etc. He didn’t ask Briffa to review it. Although Briffa asked for a copy of the paper to be reviewed, I wouldn’t see that as a problem, rather a natural thing to ask for in this situation, unless there is a rule speaking against that. The remainder of Briffa’s response, which Steve McIntyre has edited out, contains the data file which Cook (the reviewer) has asked for, and some additional related technical information.

        • Norbert
          Posted Dec 16, 2009 at 5:50 PM | Permalink

          Actually it doesn’t contain the data file itself, but a description of which file it is, and a promise to send it when he finds it.

        • David Wright
          Posted Dec 16, 2009 at 6:32 PM | Permalink

          We have really different views of peer review. Irrelevant, to me, means you have sent a geology paper to a medical journal. And I would never consider “correct, but not sufficiently interesting” grounds for rejection, except in a few very high-prestige journals (e.g. Nature) which exist only to publish ground-breaking findings.

          If the manuscript does say or imply that not just the methodology of the critiqued research, but also its results, are wrong, without actually showing that, then that is not really a hard mess to clean up. The reviewer should just point this out and ask that the offending statements be removed or clarified. I have revised, added, or removed entire multi-page sections of papers due to reviewer feedback — it’s not uncommon.

          I will grant that you can, by reading against the grain and presuming a lot of favorable facts not in the email, cast the exchange in a less damning light. But the prima facie reading is pretty damning.

          I should also point out that it is not, in fact, kosher to contact other researchers in order to marshal evidence against a manuscript, or to pass on the manuscript to other researchers. Most journals ask reviewers to keep the review in confidence. If a researcher needs to answer criticisms of his work in some paper, he should publish his response as its own paper, not, in essence, create a confidential mini-paper (apparently requiring additional data analysis!) to try to convince the editor not to publish the critique.

        • Norbert
          Posted Dec 17, 2009 at 3:19 AM | Permalink

          We also have different views on the meaning of “irrelevant”. I’d say there are more “subtle” forms of irrelevance than submitting a geology paper to a medical journal. For the latter I might use the term “stupid”, or some amplification thereof, instead of the term “irrelevant”.

          In regards to cleaning the mess up, perhaps the authors didn’t want that? Maybe they wanted to make exactly that claim, but were not able to prove it? How would we know the reviewer didn’t do exactly what you proposed, and just wanted to be able to back up his objection in case there was a gray zone where the authors would claim that they already made their point? After all, he was using the term “If published as is”, so he was obviously contemplating the possibility that the paper might get published in some modified form.

          That’s right there in the email, not me “presuming a lot of favorable facts not in the email”.

          I would think that it might even be obligatory to check the data and see if the claims are true, if the reviewer is able to do so.

          Finally, I see no sufficient sign that he was going to try to convince the editor not to publish the critique at all, as you claim, just that he wouldn’t want it to get published “as is”. If he was going to ask for the paper to get modified on the basis that its “claims” were not shown to be correct, the obvious next step would be that the authors might try to add such a demonstration. In which case he would obviously like to be able to point out any obvious incorrectness. I do have the impression that peer review has the task of sorting out baseless as well as obviously erroneous claims. However, I wouldn’t know what any specific quality standards of this specific publication might be.

        • Posted Dec 17, 2009 at 7:26 AM | Permalink

          If you send a manuscript for review to the author of a paper that it undermines, you are pretty much guaranteed to get a “reject” back.

          If that’s the case (and if I understand correctly you are personally familiar with the peer review process), then what does that tell us about the integrity of the process to send such an article to such a person? Who picks the reviewers?

          It seems to me that this is also one of the key take-aways from the Letters: the peerreviewedlichuchur, in climatology anyway, is highly skewed toward being peer reviewed by the same bunch of people, with an agenda and a lot at stake.

        • David Wright
          Posted Dec 17, 2009 at 7:11 PM | Permalink

          A standard move for editors in cases like this is to make the attacked researcher an extra, non-voting reviewer. That is, if your journal normally sends a manuscript to 2 reviewers, you send it to 2 independent reviewers and also to the attacked researcher. That way the authors get the benefit of the attacked researcher’s feedback, and can update the manuscript to address his points, but the attacked researcher can’t block publication with his bad review.

        • Kevin
          Posted Dec 16, 2009 at 4:30 PM | Permalink

          If the paper was an analysis of the mathematical methods used, and the math in the paper was correct, what possible justification is there for preventing its publication? Additionally, if there was no impact to “practical work” why would they be worried about it being published? It seems to me that the correct approach would have been to allow this paper to be published, and then build upon or refute the results through another paper.

        • Norbert
          Posted Dec 16, 2009 at 4:56 PM | Permalink

          To me it would seem that the paper was more than that, and that this was the problem.

        • Kevin
          Posted Dec 16, 2009 at 5:22 PM | Permalink

          Unfortunately that type of comment is pure speculation, given that none of us has ever (presumably) read the paper.

          The point is, if there’s a chance that the paper would add value, and it was logically consistent, why keep it from being published? If there was something to it, people would build upon it, if not it would quickly be dismissed.

          Again, I think the correct course of action would have been to allow the paper to be published so that the scientific community as a whole could judge the paper and respond accordingly. In my opinion this is how science is advanced, not by a few gatekeepers guarding the flow of information.

        • Norbert
          Posted Dec 16, 2009 at 5:58 PM | Permalink

          It is not pure speculation, although I can’t be sure. For example, there is this sentence: “but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced.”

          In context, to me this strongly implies that the paper suggests that there would be such an effect.

          One of the critical sentences we talk about is this one: “If published as is, this paper could really do some damage.” (Emphasis mine)

          So the main objection seems to be that there is something wrong with the way the paper makes its point, rather than with the mathematics.

          We don’t know (yet) if, whether, and if so, why, the paper wasn’t published in a modified way.

        • Michael Smith
          Posted Dec 16, 2009 at 6:17 PM | Permalink

          Norbert wrote:

          It is not pure speculation, although I can’t be sure. For example, there is this sentence: “but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced.”

          Is the Tornetrask data available so that one can determine how an alternative method would change the reconstruction?

          Steve: Yes, but it will take a while to get to this.

        • Barry R.
          Posted Dec 17, 2009 at 4:00 PM | Permalink

          At best the case you’re making doesn’t say, “Gosh Jones and company didn’t actually keep other research out.” It says, “It isn’t absolutely proven that they were able to keep research that is provably worthwhile out.”

          That’s a far weaker case. The appropriate response to what your case is saying is not “move along, nothing to see.” It’s “let’s do some further investigation and either nail them or prove that while they tried to be unethical they didn’t succeed at it.”

        • deadwood
          Posted Dec 16, 2009 at 10:17 PM | Permalink

          What exactly is the practical impact? Is it impact of the paper in question or the impact of Briffa’s earlier work?

        • bender
          Posted Dec 16, 2009 at 10:26 PM | Permalink

          Impact of the paper in question, is my read.

        • Hoi Polloi
          Posted Dec 17, 2009 at 12:04 PM | Permalink

          “He probably meant that it could cause far reaching wrong conclusions based on a misleading critique about finer theoretical points which have no sufficient practical relevance.”

          Probably, so not sure? And don’t you think that scientists are able to determine for themselves whether these “far reaching wrong conclusions” are misleading, and don’t you think it would have been better if Prof. Jones had addressed these “far reaching wrong conclusions” himself in his reply, instead of BLOCKING it?

  2. scott lurndal
    Posted Dec 16, 2009 at 1:13 PM | Permalink

    I wonder if the Korean paper was:

    SLB-GRL04-NHtempTrend.pdf

    Steve: Different journal. Soon was known to them and would have been identified by name. No Berkeley author either. Nope.

  3. Mike86
    Posted Dec 16, 2009 at 1:13 PM | Permalink

    The point would be that Mr. Prinn is attempting to prove a negative. I think the old example was, “can you prove there aren’t any pink elephants in the room?” Just because you don’t see any doesn’t mean they aren’t there.

    Unless you can identify each and every article reviewed, discussed, etc., and then demonstrate what ultimately happened to it (and why), you can not state that no article went unpublished as a result of these individuals.

    • Adamson
      Posted Dec 16, 2009 at 2:37 PM | Permalink

      Yes,”absence of evidence is not evidence of absence.”

  4. Mike
    Posted Dec 16, 2009 at 1:14 PM | Permalink

    @Norbert

    In principle, yes, some papers deserve to be rejected. However, in none of these messages do we find a substantial reason for such rejection. Also, the fact that Jones gets to review a paper criticizing his own work (or at least that of CRU) seems inappropriate.

    It will be hard to prove that papers were rejected without proper reason, and if it did happen, it is certainly not specific for climate science. I recently had a paper rejected. One of the reviewers made one flimsy specific comment, and then proceeded to say “This and many other questions are not addressed in the manuscript”. That was his review – all of two sentences. Another time I had a reviewer “go to town” on one of my papers – but he kept confusing HF (hydrofluoric acid) with “HS”, which he apparently assumed to be a stable compound (it isn’t).

    Peer review is wide open to abuse, and the abuse happens all the time. The way that those “climate scientists” present it to the public, as if it were some certificate of scientific truth, is a farce. Each time I see a comment somewhere repeating this asinine claim I know that person has no clue about academic reality.

    Steve: Let me repeat something that I ask readers all too often. If there are specific issues in the matter at hand, DON’T drag in other issues. Of course, I agree that journal peer review is not a talisman of truth, but there are narrower issues here. We don’t need to solve all the problems of academic peer review to consider Briffa’s editor instruction or Jones’ seeming conflict of interest.

    • DaveJR
      Posted Dec 16, 2009 at 1:46 PM | Permalink

      There’s nothing wrong with asking the person whose work is criticised to review the paper. However, the editor should bear in mind the conflict of interest when weighing the response. When the editor also has a conflict of interest, you’re in trouble.

      • Mike
        Posted Dec 16, 2009 at 2:08 PM | Permalink

        That doesn’t strike me as sound. A conflict of interest is a given in such a case. So, the editor can fully expect a rebuttal/rejection. Assuming other reviewers support the paper, we have a disagreement, typically leading to more reviews being solicited. Not a sensible scenario if the editor is serious about a fair and speedy assessment.

        • Norbert
          Posted Dec 16, 2009 at 3:14 PM | Permalink

          Of course there is a conflict of interest on the author’s side. But common sense says there is a point in the reviewer informing himself about both sides of the story.

        • Mike
          Posted Dec 16, 2009 at 3:44 PM | Permalink

          I suppose you mean “the editor informing himself”. Yes, there is a point.

        • Norbert
          Posted Dec 16, 2009 at 3:55 PM | Permalink

          No, those seem to be two different papers, and in regard to the second one, Cook is the reviewer. Briffa seems to be the editor (based on what SteveM says) in regard to a different paper. Unless I’m missing something.

          Steve: my read is also that there are two papers.

        • Norbert
          Posted Dec 16, 2009 at 4:02 PM | Permalink

          Also, it seems that Briffa is not the editor for the second paper as well, since then Cook wouldn’t have had to inform him about that paper in this manner.

        • Bob
          Posted Dec 16, 2009 at 10:59 PM | Permalink

          Norbert,

          You’ve just observed a correspondence of Phil Jones in which he cites the thoroughness of his rejection *as a reviewer* of critiques of his own work. Where is the question about this being a baldly apparent conflict of interest?

        • Norbert
          Posted Dec 17, 2009 at 3:39 AM | Permalink

          Bob,

          I am not familiar with the expression “Went to town in both reviews”, and what it says about thoroughness, or not. Or is there some other correspondence which I have missed?

        • Ray
          Posted Dec 19, 2009 at 4:55 PM | Permalink

          Norbert,

          “Going to town” on something is a Briticism that means “to do something aggressively and somewhat excessively”. It actually applies through most of the Commonwealth I think.

          For example, “I caught a burglar in my kitchen and really went to town on him with a cricket bat”, would imply that the paramedics had to use a snow shovel to get the burglar off the floor.

          Or, “I went to town on my science homework” means you did a really thorough job on your homework, probably overdoing parts of it.

          Or, “Fred installed Christmas lights this year and he really went to town!” would imply that Fred has an amazing light display this year, far beyond the norm.

          I think you get the idea. 🙂

        • Bob
          Posted Dec 20, 2009 at 10:50 AM | Permalink

          Your familiarity with Jones’ choice of idiom was not my question. He was reviewing a study critical of his own work. From the context alone, you can tell he rejected it ‘con mucho gusto’ [tell me when someone hits on a language you grasp]. Is this or isn’t this an apparent conflict of interest?

        • Norbert
          Posted Dec 26, 2009 at 7:16 PM | Permalink

          Well, you can also tell that he felt he had a good reason to do so. 🙂

          In retrospect, this needs some clarification: I was talking about Briffa being asked by Ed Cook, in which case Briffa was the author, but neither the editor nor the reviewer. (That is what I was discussing previously, when Mike addressed me, and in which case there was similar criticism.)

    • Norbert
      Posted Dec 16, 2009 at 3:26 PM | Permalink

      Mike,

      you wrote: “However, in none of these messages do we find a substantial reason for such rejection.”

      Do you think it would not be a substantial reason if a paper makes a theoretical mathematical point of no practical relevance, but creates the impression that there is such practical relevance?

      That would be damaging indeed, damaging at least to the (hopefully) practically relevant work that it claims to refute.

      • Mike
        Posted Dec 16, 2009 at 3:53 PM | Permalink

        I don’t get your point. It seems that the paper under review contained valid criticisms of prior methodology, and at the same time proposed an improved one. Application of that methodology to one specific data series seems a minor concern in this context.

        If the reviewers felt otherwise, they could have requested the author to perform it and include it in a revised version.

        • Norbert
          Posted Dec 16, 2009 at 4:08 PM | Permalink

          To me it is quite simple. The authors claim to have an improved method. The reviewer doesn’t think that there is any theoretical problem with that method. But he thinks that the differences are not practically relevant, and he wants to prove that. Now, the article apparently claims that there is such practical relevance, since it uses the “Tornetrask recon as the main whipping boy”.

          That’s the part where the reviewer thinks the paper doesn’t show what it needs to show.

        • dover_beach
          Posted Dec 16, 2009 at 5:33 PM | Permalink

          Norbert, it’s clear from both Briffa’s and Cook’s responses that they believed the paper in question to be both theoretically interesting (“It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically,…”) and practically significant (“If published as is, this paper could really do some damage.”). In fact, it follows generally that points that are theoretically significant are also practically significant.

          Their problem (“but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense.”) with the paper is really a matter for the community to deal with, not the authors of the paper. If the authors have identified a problem with an existing method, and their argument appears sound, then they have done the dendro community a service.

          The other problem here is what Briffa/Cook mean by practical sense. If the authors have in fact established a problem with existing methods then the consequence of this is to have established a problem with the results of this method and therefore with the practicability of the results it has derived. Ah, now I see their problem. The paper appears to undermine a widely used methodology without apparently offering them a new one; that would indeed be impractical for those currently employing that methodology but hardly a reason for rejection.

        • Norbert
          Posted Dec 17, 2009 at 3:43 AM | Permalink

          I think my response would be just a repetition of my previous message.

    • northern john
      Posted Dec 17, 2009 at 2:25 AM | Permalink

      I’m an editor for two journals and I always send an MS to the authors being criticised. It doesn’t matter if it is the devil himself as long as the argument makes sense. Matters of bias (unconscious or otherwise) are taken into account by the editor – yes, a negative response is usually the case, but it might surprise you how often there is agreement. This area is mostly peripheral to Climate Change Science, though.

  5. Steve McIntyre
    Posted Dec 16, 2009 at 1:16 PM | Permalink

    There are lots of issues here. Are Briffa’s editor instructions acceptable? Is Jones conflicted in reviewing a submission on CRU? Reply yes, but review?

    I, for one, think that there are important statistical issues in the calibration of dendro series and have referred to and discussed them in the past. Theoretical understanding of statistics of calibration in this field is abysmal. Surely academic articles are entitled to consider theoretical issues??? I, for one, would have been interested in this sort of article.

    And what precisely is the “damage” that would have been caused by publication of this article? Now I want to find out what the article was.

    • Norbert
      Posted Dec 16, 2009 at 2:53 PM | Permalink

      Steve,

      you wrote: “Are Briffa’s editor instructions acceptable?”

      I might be missing something, but the emails presented above strongly suggest that they are talking about two different papers.

      (See the beginning sentences of Cook to Briffa, June 4, 2003.)

      So I don’t see any “editor instructions” (yet).

      The case here doesn’t apply to Prinn’s statement about rejection, because this, so far, appears to be a case of a reviewer asking for advice, which I don’t see as problematic in any case (in terms of common-sense, not knowing the exact rules for peer-review). Prinn was probably talking about outside influence on the review process, or some form of illegitimate reasoning.

      • Norbert
        Posted Dec 16, 2009 at 3:10 PM | Permalink

        Plus, the first email quoted, by Briffa, might very well relate to a review that was already said to be a rejecting review, and Briffa just pointing out that the review needs to make a hard and possibly extensive case. Which would be perfectly alright, one would think.

        • Fai Mao
          Posted Dec 17, 2009 at 12:50 AM | Permalink

          I rarely post on this forum because my field is, at best, only tangentially related to anything that even looks like science.

          I find it difficult to believe that Jones and Briffa were doing anything but trying to suppress an opposing viewpoint when I look at the Email in question. I say that because I know it happens in other academic fields. I do not believe that the desire to suppress evidence of an opposing viewpoint is unique to Climate Science. They simply carried it off better than some.

          My wife is a well known, somewhat respected scholar in Special Education at a major Asian university. She was for several years a moderately well known and respected scholar in the US at a large state university, not Ivy League, on the East Coast. One of the reasons we moved to Hong Kong is because she was completely blocked from advancement due to her stance opposing certain issues that have been an educational bandwagon for a few years. Despite the fact that she and others can show that some of the issues regarding pedagogy and assessment are actually detrimental to most students with clear evidence from carefully designed studies she cannot get her papers published in many journals because the editors hold a different perspective. The issue we are speaking of is “Inclusion”. The editors have a vested interest in supporting a different position because they have spent many years promoting that cause.

          It is a very difficult thing for smart, highly trained people to let go of an idea they have spent years promoting, especially if it might cause a diminution in any measure of their reputation. In the case of Jones, Briffa, et al they had further reason to do this because they were doing most of their work on grant money rather than as simply part of their academic interest. It is one of the problems of money that it does seem to, at least sometimes, corrupt the research process. It does so regardless of whether the money comes from an industry or a government.

          Unlike the field of Education which is fairly large and has lots of journals it appears to me that it would be much easier for a small group of academics to actually gain control of the editorship of enough journals to effectively lock out anyone who disagreed with them in a small field like Climate Science. Because most people have children it is possible for educators with dissenting views to take their message to other venues and make the “experts” deal with it. This is both good and bad but does not appear to be the case with Climate Science. The EAU-CRU was trying to control the issue for their own advancement, the truth be damned.

          I think that the players involved simply, probably without realizing it, let their egos get involved. By the time they realized this, if they even have realized it, it was too late, or at least very difficult to stop. If the work done by Phil Jones, Michael Mann, and Keith Briffa is discredited they are ruined professionally. I doubt they want to be remembered that way. They almost have to stand by the AGW position now. The time to change their position was years ago. If I were them, I’d probably be reacting the same way. But I think it is nearly inevitable that they will be forced to retire and be disgraced at this point. There are simply too many flaws in their data. I wish it did not have to be so.

          It would be better if those who have believed them did not go down with them.

          I apologize for the slippery, wordiness of this post; I am trying not to give away names. I also realize that in doing this I lose some credibility because it becomes hearsay.

        • Norbert
          Posted Dec 17, 2009 at 3:56 AM | Permalink

          I think that in terms of the view which you explain, the problem with the paper was that it attacked a whole subfield of science, rather than just offering an improved method. (Or at least, that it was perceived this way.) It sounds as if it had a too strong component of trying to discredit existing scientific work, instead of contributing to an improvement of methods in a more constructive way.

          Or, to put it more simply, it seems to have tried to convey the message “Mathematicians know that climate scientists are idiots”.

        • Pedro S
          Posted Dec 17, 2009 at 5:44 AM | Permalink

          Norbert, I have read your various replies, and I think your (and Cook’s) insistence on not merely offering a critique of existing methods, but also providing a better method, is not correct. Knowing that a given methodology is imperfect (or even wrong) is an important part of progress.

          A silly example: if I showed an extensive body of statistical analysis proving that astrology is not capable of predicting the future, would you require me to provide a better method of fortune-telling in order to publish my analysis? 😉

          PS: I am in no way implying that dendrochronology is akin to fortune-telling, I am only trying to show how proving the insufficiency of a theory does not require one to offer a better one at the same occasion.

        • Norbert
          Posted Dec 17, 2009 at 10:59 AM | Permalink

          Pedro, I was just addressing Fai Mao’s point, which doesn’t really have that much to do with this situation in general.

        • Harold
          Posted Dec 18, 2009 at 8:00 AM | Permalink

          “it attacked a whole subfield of science, rather than just offering an improved method”

          Theories are shot down all the time without offering a new theory. In terms of climate science, none of the models have been validated in a way that would enable use for predictions of future climate, so I don’t see that there actually is much to shoot down.

        • Bob
          Posted Dec 20, 2009 at 10:55 AM | Permalink

          I think the proper response to this is “So?” Yes, the paper was broadly critical of the statistical methods being misused in climatology. That presents a problem for climatology, sure. If correct, however, it would make for a fine paper. The breadth of its critique is not a reason for summary rejection as you seem to feel.

        • Norbert
          Posted Dec 26, 2009 at 8:00 PM | Permalink

          My point is not that it has to provide a new method. As far as I understand, it actually did provide a new method. (Or at least make a good attempt in that regard.)

          In the view of the reviewer, there are three steps:
          1. It proposes a new method.
          2. It claims the new method is better.
          3. It claims that the existing method is bogus and much existing work obsolete.

          In the email, the reviewer objected mainly that the paper didn’t have sufficient support for step 3.

  6. Posted Dec 16, 2009 at 1:23 PM | Permalink

    @Norbert: The point is that a small number of entrenched individuals were able to suppress reasonable critiques of the reigning orthodoxy by (ab)using their positions as journal referees. Now, this happens every day in pretty much every discipline. However, members of other disciplines are not generally treated by politicians, bureaucrats and a significant portion of the general public as high priests whose words convey absolute truth.

    Over long enough periods of time, things balance out, reigning orthodoxies crumble etc etc with the public being none the wiser. In the case of so called climate “science”, however, such a process had no time to work itself out before Time and Discovery and everybody’s uncle got involved in deciding what kind of light bulb we should all use.

  7. Posted Dec 16, 2009 at 1:26 PM | Permalink

    These are great examples of the climategatekeepers blocking ‘mathematically correct’ papers because they don’t support conclusions.

    I caught this comment from Phil “Climategate” Jones while reading near the bottom.

    Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. If either appears I will be very surprised, but you never know with GRL.

    Russian government agencies are apparently being reported by the Russian papers as saying the same thing today. If it’s true, they are accusing CRU of selectively biasing the record.

    Russia Accuses CRU of Tampering

    • Posted Dec 16, 2009 at 2:13 PM | Permalink

      Here is the link to the report of the Institute of Economic Analysis

      Click to access 15.12.2009.pdf

      On p. 6, there is a map of the Russian meteorological stations used (yellow) and not used (blue) by HADCRUT. The report all but explicitly accuses HADCRUT of selecting the stations that show warming. Several graphs show omitted stations that do not show any warming trend.

      • Posted Dec 16, 2009 at 3:26 PM | Permalink

        Anastassia, can you provide a translation?

        • AJ
          Posted Dec 16, 2009 at 6:17 PM | Permalink

          Just go to Google Translate. Type the above URL into the “Translate a web page” box. The results are OK-ish.

    • Gary Hladik
      Posted Dec 16, 2009 at 2:39 PM | Permalink

      Yes, Jeff, that was the first thing I thought of when I saw the phrase “CRU has it wrong over Siberia”. Coincidence, Steve covering these E-mails today–or “denialist” conspiracy? 🙂

    • Norbert
      Posted Dec 16, 2009 at 3:39 PM | Permalink

      JeffId,

      you wrote: “These are great examples of the climategatekeepers blocking ‘mathematically correct’ papers because they don’t support conclusions.”

      It would seem to me that such papers should be rejected, especially since they probably create the impression that they support conclusions but then don’t.

      I think the above information doesn’t really make the case that something wrong was done. One has to remember that (at least from their point of view), lots of people try to submit papers motivated by political bias rather than science.

      snip – inflammatory point unrelated to the 2003-2004 reviews

      • tty
        Posted Dec 16, 2009 at 4:32 PM | Permalink

        I have yet to see a mathematical paper motivated by political bias. If this was one it would have been worth publishing for its novelty alone.

      • Skip Smith
        Posted Dec 16, 2009 at 4:34 PM | Permalink

        Strange that you say the people submitting the paper might be biased, yet you can’t bring yourself to admit that a person reviewing a paper criticizing his own work might be biased towards rejection.

        • ianl8888
          Posted Dec 16, 2009 at 4:56 PM | Permalink

          Norbert’s entire line of argument is that of special pleading – ho hum

        • Norbert
          Posted Dec 17, 2009 at 4:02 AM | Permalink

          Skip, I didn’t say that about the people submitting this paper. I said that about climate science in general, in one point of view.

      • Posted Dec 16, 2009 at 7:15 PM | Permalink

        Norbert,

        When I read of papers being blocked which refute the very mathematical techniques that have caused me to spend over a year of my free time blogging, it sounds different to me than it would to someone not familiar with the MV math being used. You may be familiar, I don’t know.

        The MV regression techniques I’ve read are IMO designed to create HS. It’s an ugly mess. Steve probably knows the specifics of this particular one, but you simply cannot scale noisy data to high slope thermometers and expect to get a correct result. I plan to play with Mann09 shortly, but seeing Briffa point this ‘little detail’ out by email back in 1999 makes the whole thing frustrating.

        It also represents another pressure for high-slope data. I didn’t realize that Phil Jones was so tied into the paleo crowd before these emails. The paleos need high temperature slopes for unprecedented results.

        –This is not a claim that this is the reason for exaggerating slopes, but it is another push in the warming direction.

        • Norbert
          Posted Dec 17, 2009 at 4:13 AM | Permalink

          JeffId,

          I can’t see from the above information which article Briffa refers to. The majority of Cook’s email refers to a different article than Briffa’s comment does.

          The article Cook talks about had not yet been read by Briffa at the time of this exchange. The email indicates that Cook is honestly (though perhaps with some wishful thinking) convinced, at that point in time, that he may be able to prove that there is no practically significant difference between the two methods.

      • Jerry
        Posted Dec 21, 2009 at 2:03 PM | Permalink

        Perhaps it would be interesting to think of the rejected, but mathematically correct, paper as a peer review of the earlier dendro papers that it criticizes. If you look at it this way, and if the latter paper is indeed mathematically sound, then perhaps the earlier works should be ex post facto rejected.

  8. Kevin
    Posted Dec 16, 2009 at 1:30 PM | Permalink

    Can anyone elaborate on this comment by Ed Cook?

    but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced.

    Isn’t it possible that if the underlying math/method is correct/better, the foundation of the argument is better?

    Steve: Look at some of the old posts on Brown and Sundberg methodology for some insight on calibration methods. IMO these are fundamental and important issues. I’m going to try to track down what authors were involved.
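
    For readers wondering what the calibration-direction question involves, here is a minimal sketch of the textbook algebra, assuming a simple linear proxy model; this is generic material, not a reconstruction of the rejected paper’s actual argument. Suppose the proxy satisfies

      P_t = \alpha + \beta T_t + \varepsilon_t .

    Classical calibration fits this equation and inverts it, \hat{T}_t = (P_t - \hat{\alpha}) / \hat{\beta}, while the direct (“inverse calibration”) approach regresses temperature on the proxy:

      \hat{T}_t = \hat{a} + \hat{b} P_t , \qquad \hat{b} \approx \frac{\beta \sigma_T^2}{\beta^2 \sigma_T^2 + \sigma_\varepsilon^2} = \frac{r^2}{\beta} .

    Under this toy model the directly regressed reconstruction has variance of roughly r^2 \sigma_T^2, i.e. its fluctuations are shrunk by the calibration-period r^2; that is the kind of “practical” difference a method comparison would need to quantify.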

    • DaveJR
      Posted Dec 16, 2009 at 2:02 PM | Permalink

      I think it’s safe to say that better methods should be welcome. However, the key point here seems to be that they show that their method is better than “our way of doing things”. The image of being infallible is very important to The Team.

      • Dave McK
        Posted Dec 16, 2009 at 3:13 PM | Permalink

        I agree. It seems a standard ploy for rejecting papers they can find no fault with (other than the contradiction of the orthodoxy).
        They call it ‘classic’.

    • Kevin
      Posted Dec 16, 2009 at 2:05 PM | Permalink

      Thanks guys.

    • bmcburney
      Posted Dec 16, 2009 at 2:48 PM | Permalink

      Isn’t the real question here what Cook thinks makes something “better in a practical sense”? Isn’t Cook saying that, regardless of any “theoretical” improvements, unless it produces “better” “practical” results, it is not worth pursuing?

      Which begs the question: what, according to Cook, makes some results “better in a practical sense” than others?

    • Dave Dardinger
      Posted Dec 17, 2009 at 8:52 AM | Permalink

      Kevin,

      This begins to sound like a discussion between theorists and experimenters in physics. Briffa’s position seems to be that because the authors hadn’t built a supercollider and captured a few Higgs bosons, their improvement in the theory on how to recognize Higgs bosons shouldn’t be published.

      Briffa’s comments are totally bogus, IOW. But, of course, it’s attractive to many people in the field who don’t want their methods examined too closely. Steve has had to deal with similar attitudes when it comes to multi-proxy temperature reconstructions.

      • Posted Dec 17, 2009 at 9:04 AM | Permalink

        I would say the issue is more nuanced. If Dr. X studies mosquito flight and publishes his results, and Dr. Y then submits a paper arguing that X’s paper is incorrect because X has not taken into account the Earth’s rotation and Coriolis effects, the Editor should really encourage Y to provide a proof that Earth’s rotation quantitatively matters for X’s analysis.

        • fFreddy
          Posted Dec 20, 2009 at 7:56 PM | Permalink

          “…the Editor should really encourage Y to provide a proof that Earth’s rotation quantitatively matters for X’s analysis.”

          Which, I’d guess, is what the Monte Carlo testing is doing.

  9. aylamp
    Posted Dec 16, 2009 at 1:33 PM | Permalink

    From Phil Jones to John Christy; from 1120593115.txt

    “The science isn’t going to stop from now until AR4 comes out in early 2007, so we are going to have to add in relevant new and important papers. I hope it is up to us to decide what is important and new. ”

    -evidence of some power to exclude what “we” consider to be unimportant.

    • Posted Dec 16, 2009 at 2:51 PM | Permalink

      It is not; this may be only an indication that Jones hopes for some degree of independence for himself and other CRU researchers in deciding the kind of research they will get funded for. I mean, people, let’s not jump to conclusions without context. It is important both ways, sometimes helping, sometimes hurting the crew of CRU. But still, necessary.

      • Darren
        Posted Dec 16, 2009 at 6:08 PM | Permalink

        @Michael Gancarski — Context? The context is obvious. When Phil Jones wrote (from aylamp’s quote above) “we are going to have to add in relevant new and important papers” it is crystal clear that he meant “we” (the Team) are going to have to allow “relevant new and important papers” into the AR4.

        Jones was certainly not referring to his ability wrt “deciding the kind of research they will get funded for themselves.” Is there any evidence that, in the last 20 years, Jones (or any prominent members of The Team) was ever stymied or kept from having enough funds to engage in the global warming research that he desired?

        Because of the politicization of the issue, many Team Members (Jones, Mann, Hansen, Briffa?) were considered by most western governments to be the leading climate researchers — so who else was going to decide “the kind of research they will get funded for themselves”? I mean, look no further than Hansen for an “untouchable.” And this is Jones’ email – Who was there to trump him?

        From Steve’s attempts to improve AR4 (https://climateaudit.org/2009/10/05/yamal-and-ipcc-ar4-review-comments/), we can see that many reasonable review comments were rejected (by the authors of the studies that would appear “damaged”, or where the degree of climate panic might appear to be minimized if the orthodoxy were challenged).

        The context is obvious: they were attempting to be the gatekeepers of AR4 research inclusion.

        • Darren
          Posted Dec 16, 2009 at 6:15 PM | Permalink

          That is… Here is yet another example in which a Team Member expressed the hope (and expectation) that they could act as a climate research gatekeeper… in the larger context (and further evidence) of all the leaked emails.

        • Posted Dec 17, 2009 at 4:41 AM | Permalink

          Darren, I am not saying that my understanding is correct. I have proposed an alternative explanation just to show that an e-mail alone is not enough. One of the values that Steve wants to preserve on CA is that claims made here are based on facts and good theory. This pertains not only to the science being discussed but also to comments on policy and other non-scientific aspects of the global warming movement, like badly behaving scientists, personal motives and so on.

          As a side note – Climategate has done a lot for CA’s popularity but at the same time too much politics has been brought here as a result. There are many other blogs that are politicized enough.

          Steve: All the people bringing their politics to the table is driving me crazy.

  10. Paul Linsay
    Posted Dec 16, 2009 at 1:43 PM | Permalink

    Per Sinan Unur at 1:23, yes this does go on all the time. It’s even more pernicious when it’s done in grant reviews. There’s nothing to publish in the first place if you can’t get funding for your research. It would be interesting to search the emails to find out if they conspire to prevent certain lines of research from getting money either by slanting grant reviews or by placing their own people as grant managers.

    • Mike
      Posted Dec 16, 2009 at 3:54 PM | Permalink

      Good points.

  11. Posted Dec 16, 2009 at 1:47 PM | Permalink

    Re: Sinan Unur

    Over long enough periods of time, things balance out — but what happens to those individuals whose submitted papers are trampled down? Things balance out already without many of them, I suppose.

    In all other professional societies, professional errors can be prosecuted, e.g., errors of doctors in hospitals. How can editorial errors, based on disingenuous/incompetent/ill-meant reviews that traumatize the authors, be prosecuted? How many creative projects have been killed? There is a complete lack of responsibility of editors with regard to rejected papers. Nobody will ever know. It is safer to reject than to publish.

    In my opinion, the reviewers should understand that their main task is not to fight with “the noise” — there is already arrogance in this term — but not to lose something new and precious. New knowledge is always in minority. Err on the side of publishing things. Foster a meaningful discussion, rather than a competition of citation indexes and publication numbers.

    Peer-reviewed publication is not a synonym of ultimate truth. Truth cannot be decided by the Editor/Reviewer 1/Reviewer 2 trio. I remember a dialogue of two academicians about a novel, controversial, thought-provoking paper. One said, extremely reluctantly, — Well, we could … possibly … publish it, but ONLY as a forum paper, for discussion only. The other replied — my apologies, but aren’t ALL scientific papers published for discussion?

    I very much hope that the people whose papers, as mentioned in the Climategate letters, were rejected were not hurt and could continue their work.

    • Mike
      Posted Dec 16, 2009 at 2:11 PM | Permalink

      Spot on.

      • David L
        Posted Dec 16, 2009 at 5:42 PM | Permalink

        “New knowledge is always in minority.”

        Precisely.

    • Posted Dec 16, 2009 at 3:22 PM | Permalink

      @Anastassia Makarieva: You need to keep in mind the difference between is and ought to be.

      I tried to describe how things are in all disciplines. Even though I do not think climate “scientists” are any worse than academics in other fields, the consequences of their actions are worse because of what actual decision makers are planning to do.

    • Gerald Browning
      Posted Dec 16, 2009 at 10:00 PM | Permalink

      It boils down to the competence and honesty of the Editor.
      If the Editor is incompetent or biased, his selection of reviewers is likely to be less than optimal. However, if the Editor is competent, then he (she) will be able to discern a good scientific manuscript. His selection of reviewers is likely to include reviewers with differing viewpoints, but the Editor should be strong enough to reject viewpoints that are not based on rigorous scientific arguments.

      Jerry

  12. John Haythorn
    Posted Dec 16, 2009 at 1:50 PM | Permalink

    What is the “Box-Jenkins stuff” referring to?
    Steve: A statistical method that you can locate in texts.
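
    For anyone wanting a concrete picture, here is a small illustrative Python sketch (synthetic data only, not the model in the manuscript under review) of the kind of “Box-Jenkins stuff” Cook mentions: identify a low-order ARMA model for a serially correlated series, estimate it, and check that the residuals look like white noise.

      # Illustrative only: simulate an AR(1) "ring-width index" and fit it the
      # Box-Jenkins way (identify, estimate, diagnose). All numbers are made up.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(0)
      n, phi = 500, 0.6
      x = np.zeros(n)
      for t in range(1, n):
          x[t] = phi * x[t - 1] + rng.normal()       # AR(1) series with coefficient 0.6

      fit = ARIMA(x, order=(1, 0, 0)).fit()          # estimate an AR(1) model
      print(fit.params)                              # estimated AR coefficient should be near 0.6
      print(acorr_ljungbox(fit.resid, lags=[10]))    # Ljung-Box check that residuals are white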

  13. Francois Ouellette
    Posted Dec 16, 2009 at 2:06 PM | Permalink

    In his defense, Ed Cook does not say that he HAS rejected the paper. We do not know the paper’s fate in the end. We know that Cook asked Briffa for his old data in order to try and refute the paper’s claims, which I believe is fair enough. Maybe he succeeded, and had a good reason to reject the paper. Maybe he didn’t, and merely passed along his comments to the Editor about the authors not giving any alternative “better” recon, which would be the standard way for a reviewer to proceed.

    Briffa’s request is different, but then again we have no background. As an Editor, you may have one reviewer rejecting the paper outright, and one that is slightly negative, but not that much. You might feel that you need a third review to confirm the negative trend of the other two, and give you a firm ground for rejection.

    As others have remarked, papers get rejected all the time. Authors argue with the Editor, or resubmit elsewhere. Few papers never get published anywhere, but many (and some very good ones) end up in “non-prestigious” journals.

    More interesting IMHO is the story about Michael Mann pretty much dictating to Briffa the terms of a perspective piece he (Briffa) did for Science about MBH99. Briffa was forced to withdraw anything that remotely criticized the paper. That sort of interference is, IMO, very unethical.

    • Derek Walton
      Posted Dec 16, 2009 at 2:44 PM | Permalink

      But of course Cook would not have had to ask Briffa for the data if it had been published in the first place. Even proponents in this case needed the original data to develop further work.

    • bmcburney
      Posted Dec 16, 2009 at 2:55 PM | Permalink

      Yeah but what makes some results “better in a practical sense” than other results?

      Cook is saying that the analysis is theoretically better but the results are worse, or at least not better “in a practical sense.” What makes results better or worse “in a practical sense”?

      • Darren
        Posted Dec 16, 2009 at 6:23 PM | Permalink

        And we all understand that any research whose conclusions undermined the methods used by, or end-of-the-world-global-warming-i-mean-catastrophic-climate-change implications drawn from, the orthodoxy’s favored research papers (MBH, Briffa, etc.) could not in any way be “better in a practical sense.” The whole point was that “practical” was, by definition, in complete agreement with the orthodoxy-i-mean-consensus. Any research that could be used to imply otherwise had to be wrong.

  14. jim edwards
    Posted Dec 16, 2009 at 2:14 PM | Permalink

    I wouldn’t allow Prinn, and others, to get away with framing the question so narrowly. Framing the debate generally wins the argument.

    Rather than “Were they able to keep papers from being published / mentioned ?”, maybe the question should be “Were they able to use undue influence to unfairly impair the impact of competing views in print?”

    The second question includes the situation where a competing scientist ultimately is allowed to publish – but only in a lower-prestige journal.

    The second question also allows the exploration of “academic check-kiting” that Steve M. spent so many posts exploring. Should arguments dependent upon Wahl and Amman have been allowed to carry so much weight in the IPCC’s Fourth Assessment Report – given that their articles weren’t “in print” by the deadline for inclusion in the Report ? Contrary positions were sidelined in AR4 because W&A was deemed to refute them, if I remember correctly.

    One of Phil Jones’ most quoted e-mails in the press, apparently planning for the elimination of any e-trail re: getting the W&A paper into AR4, seems on point.

    Mike,
    > Can you delete any emails you may have had with Keith re AR4?
    > Keith will do likewise. He’s not in at the moment – minor family crisis.
    >
    > Can you also email Gene [That’s Eugene Wahl – JE] and get him to do the same? I don’t
    > have his new email address.
    >
    > We will be getting Caspar [That’s Caspar Amman – JE] to do likewise.
    >
    > I see that CA claim they discovered the 1945 problem in the Nature
    > paper!!
    >
    > Cheers
    > Phil

    • Calvin Ball
      Posted Dec 16, 2009 at 4:08 PM | Permalink

      I wouldn’t allow Prinn, and others, to get away with framing the question so narrowly. Framing the debate generally wins the argument.

      Finally somebody gets it. This is an error I see all over the threads here; implicitly accepting the burden of proof, when it’s not appropriate to do so. You can’t let people get away with saying that everything might be ok anyway.

      I’m probably going to get snipped for this, but comments are the realm of rhetoric. Getting all the facts and math and science right isn’t enough.

    • Harold
      Posted Dec 18, 2009 at 8:21 AM | Permalink

      While the CRU crowd had some control over some situations, they certainly had influence in a lot of others. Some may want to minimize the importance of this, but to make it very obvious, lobbyists in Washington have influence (not control), and I doubt any would disagree that influence is an important tool in manipulating political processes. Note that here I’m using “political” in its broadest sense.

  15. AnonyMoose
    Posted Dec 16, 2009 at 2:18 PM | Permalink

    In checking who funds these people, I noticed “National Science Foundation believes that it does not currently produce or sponsor the distribution of influential scientific information” — Why is the NSF spending money, then?

    • MattK
      Posted Dec 16, 2009 at 4:42 PM | Permalink

      Oh, I am getting good at Gov speak so I will translate…

      This is probably a loophole in the OMB rules/guidelines. You see, the rules probably say that the NSF must follow data-quality requirements if they themselves have people that produce studies or if someone else does the producing and NSF does the publishing of the data.

      However if you give money to a scientist and that person does bad things with the data and gets a buddy to accept it in a peer reviewed journal, then the NSF does not have to worry about Data Quality (NSF themselves didn’t produce it and did not publish it) and can keep on funding that scientist.

  16. JohnH
    Posted Dec 16, 2009 at 2:26 PM | Permalink

    What bothers me the most is the way that these emails show them shopping around for someone to refute a paper. It is so completely biased. When I submitted papers to journals, they removed my name and sent them to be reviewed anonymously. As some defenders here have said, there is no proof that the papers were rejected. But can anyone defend this behavior?

  17. Craig Loehle
    Posted Dec 16, 2009 at 2:27 PM | Permalink

    Not only the CRU gang, but editors in general seem to have been far too quick to reject non-AGW papers out of hand without even reviewing them. GRL rejected a paper of mine with the sole review being “this is the worst cr*p I have ever seen. reject” which is highly unprofessional (even if it were true).
    My paper:
    Loehle, C. 2009. Trend Analysis of Satellite Global Temperature Data. Energy & Environment 20: 1087-1098
    which is certainly timely and on an important topic (whatever its quality) was rejected as having no subject matter interest (that is, not even sent for review, sometimes within days) by Science, Nature, and GRL.

  18. Craig Loehle
    Posted Dec 16, 2009 at 2:39 PM | Permalink

    I was recently sent a paper to review that took issue with a paper of mine on the use of control theory for fisheries management. I considered it a compliment that they dedicated a whole paper to my work, and good that they were able to go beyond what I had done math-wise. I had some suggestions but recommended publishing it even though I did not totally agree with their conclusions. That is how it should work. Gaming the system to kill anything that challenges your little bailiwick is so Machiavellian… so petty.

  19. Rattus Norvegicus
    Posted Dec 16, 2009 at 2:51 PM | Permalink

    Sounds like they were discussing critical reviews. Isn’t peer review supposed to be a gatekeeper? And how do you know, from the information provided, that the critical reviews by Cook and Jones were successful in getting the papers blocked from publication?

    Just more of your pathetic innuendo.

    Steve: It was open to Prinn to argue that the articles were properly kept out of the peer reviewed literature, as you argue without any evidence. Prinn did not do so. Prinn conceded that the correspondence was “unprofessional”, but argued that the authors did not succeed in implementing their unprofessional objectives i.e. block publication. I do not believe that Prinn carried out any due diligence in order to make this claim in the MIT Debate. If you believe that Prinn established these facts, then you are free to say so, but please address the points in the post.

    • bmcburney
      Posted Dec 16, 2009 at 3:09 PM | Permalink

      Of course, none of us know whether the papers were actually rejected. But Cook was evidently attempting to find a basis for rejecting a paper containing a method of analysis which, although “theoretically” superior, produced results which were not “better in a practical sense.” Briffa was attempting to find a reviewer to give him a particular result.

      There is a difference between innuendo and inference.

    • Paul Penrose
      Posted Dec 16, 2009 at 3:15 PM | Permalink

      Rat,
      Peer review is supposed to be an unbiased gatekeeper. The emails under discussion clearly show that Jones, Cook, and Briffa were quite biased in these specific instances. This is the most disingenuous nonsense you’ve ever posted.

    • Bernie
      Posted Dec 16, 2009 at 7:42 PM | Permalink

      My read of Prinn’s comment is that it was a very superficial and bureaucratic response. It is like the way they supposedly addressed the divergence issue in TAR3 on p131. That they mentioned the papers hardly equates to fully considering them. It would have been pretty hard not to mention them – too blatant. The team’s modus operandi has always been to ignore that which they do not agree with — hence the name that cannot be mentioned at RealClimate.

  20. Steve McIntyre
    Posted Dec 16, 2009 at 3:00 PM | Permalink

    I’ve looked at IPCC AR4 chapter 3 and didn’t see any citations that could be construed as being criticisms of CRU’s handling of Siberia. Thus, whatever these papers were, it seems pretty certain that they were kept out of IPCC.

  21. Paul Penrose
    Posted Dec 16, 2009 at 3:05 PM | Permalink

    Using Prinn’s logic, attempted murder should not be a crime if one did not actually succeed. Just their attempt to subvert the peer review process is bad enough to disqualify them as journal editors and reviewers in my mind.

    Steve: I’d be amazed if all four articles saw the light of day and were cited in IPCC. And I’m 100% certain that Prinn doesn’t have any information on the matter.

    • jorgekafkazar
      Posted Dec 16, 2009 at 5:28 PM | Permalink

      Yes, Prinn seems to admit assault but deny battery. Pure sophistry.

  22. Posted Dec 16, 2009 at 3:05 PM | Permalink

    I’ve never considered peer review to be a form of ‘gatekeeping’. There are also strong ethical rules that don’t allow an editor to give a strong steer to reviewers, or to reveal to reviewers who the others are. Papers are invariably sent out to 3 or more reviewers who are meant to be independent. They are also supposed to review the paper in confidence.

    If I agree to review a paper I do it with diligence, professionalism and a full set of values. I always ask that my name be revealed to the authors. If I have recommended rejection I always try to do so constructively and suggest where the paper could be improved and invite the authors to contact me.

  23. Steve McIntyre
    Posted Dec 16, 2009 at 3:12 PM | Permalink

    Paul, I take it that being a climate scientist is a necessary but not sufficient condition in order not to be surprised/indignant at the Briffa editor instruction.

    • Paul Penrose
      Posted Dec 16, 2009 at 3:22 PM | Permalink

      I’m actually more incredulous that otherwise intelligent people would support this kind of behavior after the fact. The logical fallacies and tortured logic they trot out are truly amazing.

      • Kurt
        Posted Dec 16, 2009 at 4:05 PM | Permalink

        “I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”

        -Tolstoy

        • OYD
          Posted Dec 17, 2009 at 9:47 AM | Permalink

          snip – policy

    • Posted Dec 16, 2009 at 3:43 PM | Permalink

      Steve, what is incredible about the exchanges is the complete distortion, or willful misunderstanding, of what peer review is about. First we have an editor asking a reviewer to provide a hard case for rejection. One is forced to ask why, unless the editor has made up his mind and needs the support of reviewers before he can reject. Then we have the reviewer asking for a favour in return. He wants data to ‘refute’ a paper he has got to review. Note he doesn’t want to ‘test’ the new methods proposed in the paper, he wants to refute them. Why? Because he admits that the new paper has serious implications for their previous work. If this isn’t gatekeeping I don’t know what is. It is a serious attempt to distort the review process and shows a lack of objectivity and impartiality. To compound the issue, Briffa then asks Cook to send him a copy of the paper!

      I stress peer review has nothing to do with gatekeeping. It is a process to aid the dissemination of new ideas and present them to the wider community for discussion. These are unprofessional attempts to keep these ideas away from the wider science community.

  24. Posted Dec 16, 2009 at 3:16 PM | Permalink

    Hi Steve –
    I’ve been on both sides of the author – reviewer divide. Some comments:

    1. I’ve had editors ask me (as a reviewer) to do a thorough job, but never, a hatchet job. Briffa’s instructions to Cook: “Confidentially I now need a hard and if required extensive case for rejecting – to support Dave Stahle’s …” are beyond the pale. First, Briffa appears to be asking for a negative review – I say “appears,” because I’m not sure how to interpret “if required.” Second, he names one of the other reviewers. For these actions, Briffa should be dismissed, if still serving, and barred from future editorships. This should apply to Cook as well, since the latter was apparently pleased to go along.

    2. Cook’s response to Briffa: “Now something to ask from you. …” would have been OK if addressed to a third party and without appeal to common interest – i.e., “I’ve been asked to review a paper on xyz, about which I’m not very knowledgeable. Could you clarify pqr?” But that’s not what he wrote, and, given the context – the two of them colluding to keep results adverse to their joint interests from seeing the light of day – his actions are likewise out of bounds.

    3. Jones’ comments to Mann are the most abhorrent – chortling to a co-conspirator. Absent additional details – like the names of the other reviewers – it’s hard to know what the editor was thinking. One would hope that he / she balanced Jones with someone outside the narrow circle. …

    4. The larger issue, I believe, is that these examples are probably the tip of the iceberg. Lots of people, including the poobahs at the funding agencies, must have known what was going on, if not the specifics, then certainly in general. How many grant proposals did the CRUgate principals put the kibosh on? Lots, I would imagine. And no one said a word.

    • Lazlo
      Posted Dec 16, 2009 at 9:53 PM | Permalink

      I have acted variously in the roles of editor, reviewer and reviewee, in an engineering discipline not closely related to climate (thank goodness).

      W. M. Schaffer’s comments hit the relevant nails right on their heads.

      In particular, if Briffa was indeed acting as an editor, he has clearly crossed a line. In my discipline such behaviour would be considered unethical. It is also potential grounds for a misconduct investigation.

      • bender
        Posted Dec 16, 2009 at 10:25 PM | Permalink

        Again, be careful of wording. Briffa is saying he can not reject the paper unless there are some damn good reasons – reasons that Stahle has possibly failed to provide because his review was too shallow. Briffa is begging for something deeper than “this paper sucks”, because if he gets two shallow negative reviews he will be in a pickle. Briffa does not say that he wanted to reject the paper from the start. He says he is going to have to have something much more cutting than what Stahle has provided if he is going to follow Stahle’s recommendations. Remember that these guys thought their emails were private. They were surely very casual in their wording.
        .
        It would have been more incriminating, IMO, if Briffa had been caught saying to Cook “say what you like, I’m going to reject it”.
        .
        Please don’t abandon nuance just because you’re angry about AGW or the peer review process.

        • Lazlo
          Posted Dec 16, 2009 at 11:47 PM | Permalink

          “I now need a hard and if required extensive case for rejecting”

          I don’t believe there is much nuance in that. He is clearly seeking a negative review. This is unacceptable in an editor.

        • bender
          Posted Dec 16, 2009 at 11:50 PM | Permalink

          I just explained how that line, in context, could be interpreted differently. Your rejection is merely reiterating what you’ve already said. It is no more convincing to me than it was the first time. But you’re welcome to say it a third time if you’d like.

        • Lazlo
          Posted Dec 17, 2009 at 12:43 AM | Permalink

          No that’s enough from me. I have no interest in convincing you. Other readers can judge for themselves.

        • Posted Dec 17, 2009 at 7:07 AM | Permalink

          In my experience (mathematical biology), papers go out to two or three reviewers – more often, three. My guess is that Briffa had two reviews in hand and that they were split. Hence the request for “a hard and if required extensive case for rejecting – to support Dave Stahle’s [the negative review].” If the 1st two reviews had been negative, additional support for rejecting wouldn’t have been necessary. Even had the third review been positive, Briffa would have been within his rights to reject.

          Again, and not wanting to belabor the point, naming another reviewer is arguably the most egregious aspect – editors just don’t do that; and the way in which it was done suggests that the matter had been previously discussed by the principals – unbelievable.

          Off topic, but hopefully Steve will cut me some slack: a copy of my letter to Nature on 6 December regarding their “circle the wagons” editorial is at http://bill.srnr.arizona.edu/Open_Letter.html. Discussions such as this one are a counter to their assertions in the 1st paragraph under “Mail Trail.”

          Steve: there’s an interesting Nature angle to Phil Jones’ reviewing. Look for Heike and stay tuned.

        • bender
          Posted Dec 17, 2009 at 7:18 AM | Permalink

          Dr. Schaffer,
          Have you looked at the Auffhammer et al manuscript (url provided elsewhere in this thread)? It appears to be the work that was being considered by Briffa, Cook & Stahle. I am a longtime fan of your work, and suspect you may be able to act as a qualified independent reviewer of this paper. I realize you are a mathematician, not a statistician, but I ask anyways.

        • Posted Dec 17, 2009 at 7:44 AM | Permalink

          Bender – Thanks for the kind words. I just now looked briefly at Auffhammer’s paper. Best for someone more knowledgeable to comment, but I’ll look again (later) in the AM – Cripes it’s late.

        • Lazlo
          Posted Dec 17, 2009 at 7:31 AM | Permalink

          Absolutely. Briffa is not fit to be an editor. Showcasing this to academics from other disciplines this evening is reinforcing the lack of integrity on display in the climatology community.

        • Norbert
          Posted Dec 17, 2009 at 11:42 AM | Permalink

          How does it become obvious that he is asking for a third review? He only mentions one other reviewer, and there obviously was previous communication about the review, since he writes “I have to nag about that review”.

          Given that they had previous communication about the review, Briffa might be saying that the “case” Cook is making needs to be a hard case. The addition that, “if required”, it needs to be extensive indicates the possibility that Cook had already made a case for rejection, but one that, perhaps because it was rather short, wasn’t a “hard” enough case to justify rejection, so Briffa is asking him to strengthen it.

          That’s what the email sounds like to me, and since it is so short, I wonder how one could be so certain of a different interpretation.

        • Tom C
          Posted Dec 17, 2009 at 10:26 PM | Permalink

          Wow! An utterly fearless academic.

        • ianl8888
          Posted Dec 18, 2009 at 12:11 AM | Permalink

          Dr Schaffer

          I am astonished at the fearlessness of your open letter

          I have been slowly sinking into despair at the corruption of science (from which I have made a good and interesting life). Your open letter revives my spirits … respect to you

        • bmcburney
          Posted Dec 17, 2009 at 12:53 PM | Permalink

          Sorry Bender, your interpretation does not account for the objectionable portion of Briffa’s request. Briffa is clearly asking Cook to recommend rejection of the paper and is therefore undermining the purpose of independent review. The practical effect is the same as saying “I am going to reject the paper no matter what you say.”

          You are right that he is also asking for the case to be a solid one. We can all agree there is nothing wrong with asking for a solid case but that avoids the point.

  25. Philip Lloyd
    Posted Dec 16, 2009 at 4:14 PM | Permalink

    The thing I find of interest is that refereeing seems to be ignored when it suits the IPCC editors. In the UEA documents there is a file 4RSOR_BatchAB_Ch06_KRB_1stAug and I invite reading of the reviewer’s comments 6-1114, -5 and -6. He says, very reasonably, that certain papers are too recent to be included, according to IPCC’s own rules. Somehow the editor(s) of Chap6 of WG1 saw fit to ignore the reviewer’s advice, because the papers complained of form part of the kernel of the argument of Chapter 6 in its final published form. It’s only when you read who the reviewer was and who the lead authors of Chapter 6 were that you realise how this could possibly have happened.

  26. Neil Fisher
    Posted Dec 16, 2009 at 4:34 PM | Permalink

    Lots, I would imagine. And no one said a word.

    This, I should imagine, will be the biggest fallout from Climategate – for those who knew, and did nothing. That shows just how powerful the consensus was/is – enough to prevent people speaking out when they knew unethical behaviour was taking place. Worse, this failure then propagates to others who don’t know, and they unwittingly defend what may turn out to be indefensible.

    • Posted Dec 17, 2009 at 7:22 AM | Permalink

      Scientists travel in herds. An anecdotal example: Several years ago, a prospective graduate student waltzed into my office. After the usual exchange of pleasantries, she announced that she was intending to study the consequences (eco-physiological, if I remember correctly) of global warming. “And what will you do,” I asked, “if the climate cools?” She looked at me disbelieving. So I pulled out a copy of Alley’s book, The Two Mile Time Machine, and showed her some ice core data indicating the enormous fluctuations in temperature that occurred long before the industrial revolution – whereupon she terminated the interview. From time to time, I wonder what became of her.

      • bender
        Posted Dec 17, 2009 at 7:28 AM | Permalink

        Kaufman et al. 2009 is worth a read. They genuflect in all the right places, but the inescapable conclusion is that fossil fuel burning has prolonged this interglacial that we’re in. There are problems with the paper, of course. But it’s curious that no one seems willing to discuss that one angle.

        • andy
          Posted Dec 17, 2009 at 8:38 AM | Permalink

          snip – OT

  27. Arn Riewe
    Posted Dec 16, 2009 at 4:47 PM | Permalink

    Good thread. I particularly appreciate the comments of those that have been part of the peer review process.

    I read in disbelief the comments of those who imply that, since the death of the papers was not proven, there’s nothing wrong here – move on. How pathetic.

  28. PaulM
    Posted Dec 16, 2009 at 5:43 PM | Permalink

    Based on my experience of refereeing papers I can say that Briffa’s email to Cook is totally unacceptable, as is Cook’s response. I have never received such a review request from an editor, and if I did, I would kick up a big fuss!

    Perhaps the authors of these rejected papers will now come forward?

    • WHR
      Posted Dec 16, 2009 at 10:32 PM | Permalink

      Good point.

      Another worthy observation is that in many circles there isn’t enough “coziness” between colleagues to permit such a Good Ole Boy mentality. I’ve never peer-reviewed, but similar concepts exist with auditing procedures in my field. Such e-mail exchanges would be grounds for dismissal if auditors discussed rigging audit findings in this way. CONTEXT. What is damning about the context is that it almost reads like, “Hey Phil, how’s the Mrs.? Sending the oldest boy off to college next year. Yeah, well, Oh!!! Got a favor to ask, no biggie. Mind helping me butcher up this submission I’m reviewing? Don’t want this paper getting any traction. Ho hum, Pizza sounds good for lunch, how about you?”

      So par for the course as it reads. Sorry if this defies any rules of the road here Steve. I can’t help but say this as I pick my jaw up off the floor.

    • bender
      Posted Dec 16, 2009 at 11:52 PM | Permalink

      You don’t know that the papers were rejected. You don’t know that they weren’t published elsewhere.

  29. Sam Urbinto
    Posted Dec 16, 2009 at 5:53 PM | Permalink

    Whatever it’s supposed to be, the email exchange doesn’t look anything like any kind of objective review process I’d want to go through.

    Aside from all that, isn’t ‘a better method’ interesting even if it doesn’t prove ‘it has better results’?

    For some reason, this reminds me of Oreskes’ op/ed in Science. If you’re attempting to prove legitimate dissenting opinions on “anthropogenic climate change” are not being downplayed, why would you search summaries of non-dissenting published scientific journal articles with the keywords “global climate change”?

  30. StuartR
    Posted Dec 16, 2009 at 5:55 PM | Permalink

    As a layman looking at this particular Climategate email thread, I sense some sincerity: the writers may genuinely feel that the paper in question is just plain wrong. But there is also some clear groping for confirmation, and mutual hoping for affirmation, that any area where the correspondents mutually lack understanding is fair game to be ignored to their advantage rather than explored for the benefit of any greater cause.
    As has been pointed out on this blog before about these emails, you can find quite a few examples from Briffa, Wigley and possibly others I have missed, where they state very familiar points about the weaknesses of certain arguments and the need to take care.
    It is an ironic thought, but I wonder if even these guys are seeing their colleagues’ thoughts clearly for the first time after this leak?
    I suspect their gatekeeper tendency has been a major handicap to their capacity to reason amongst themselves.

  31. kdk33
    Posted Dec 16, 2009 at 5:59 PM | Permalink

    I watched the MIT debate. My reaction was:

    You’re interviewing an accountant for your family business. During the interview, you’re called out of the room and told that the candidate (interviewee) was caught attempting to embezzle from his former employer. “Oh”, you say, “but he didn’t get away with it”, and you return to the interview.

    Finding examples of their success is interesting but not necessary. The defendants confess (in their own words) to the conspiracy. It’s not OK because they failed.

  32. EdeF
    Posted Dec 16, 2009 at 6:22 PM | Permalink

    Norbert
    Posted Dec 16, 2009 at 12:53 PM “I don’t understand your complaint yet. It is not enough to have correct mathematics, even for an article that is mostly mathematical. After all, climate science is not mathematics.”

    Climate science is heavily dependent on mathematics, as Steve and others have pointed out here over the last few years, with an emphasis on statistics. First off, proxies need to have a sufficient number of samples per time period. Next, you histogram the raw proxy data to see if there are any strange distributions taking place. You need to do homogeneity tests on populations to see if nearby data can be included in your sample. Raw proxy data is then standardized with either a fitted curve or one of the other processes such as RCS. Data is filtered using complex digital filters to get an idea of the long-term trends. Data is correlated with local temperature data to see if there is a high correlation between the proxy parameter (tree ring width, varve thickness, tree ring density, etc.) and temperature. The entire process is infused with mathematics. In fact, I would think that a requirement of future proxy work should be that a well-qualified statistician be on hand to go through the work before it gets to the peer-review step. The consequences of this not being done are, as we have seen, rather embarrassing, i.e., very small sample sizes, inclusion of data from different populations. Inclusion of a McIntyre or Wegman into the process circa 2003 would have caught a whole host of these problems.
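
    To make the sequence above concrete, here is a minimal synthetic sketch of some of those screening steps (sample depth, a look at the raw distribution, simple standardization, and correlation against local instrumental temperature). The fifteen imaginary cores and all numbers are invented for illustration; real chronologies would also involve the homogeneity testing and RCS-style standardization omitted here.

      # Toy example only: 15 imaginary ring-width "cores" over 1900-2000.
      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(1900, 2001)
      temp = 0.01 * (years - 1900) + rng.normal(0, 0.3, years.size)       # toy local temperature record
      cores = 0.5 * temp[None, :] + rng.normal(0, 1.0, (15, years.size))  # toy ring-width measurements

      depth = np.sum(~np.isnan(cores), axis=0)            # samples per year (replication check)
      counts, edges = np.histogram(cores, bins=20)        # inspect the raw distribution for oddities
      chronology = np.nanmean(cores, axis=0)              # simple mean chronology
      standardized = (chronology - chronology.mean()) / chronology.std()

      r = np.corrcoef(standardized, temp)[0, 1]           # calibration correlation with temperature
      print(f"minimum sample depth: {depth.min()}, proxy-temperature r = {r:.2f}")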

  33. Rich Braud
    Posted Dec 16, 2009 at 6:24 PM | Permalink

    I have enjoyed reading the discussions on this site. One thing that is glaringly obvious to me as a definite outsider is that while the Scientific Method is of great value when conducting any research, the Peer Review Process does not follow that method. It is a human-influenced process that can be used either to support or to deny publication for a variety of reasons that have very little to do with science.

    As an outside observer I think an overhaul of the entire process is in order. I have no suggestions just observations. Keep up the good work Steve. While I do not have a clue as to what is true or not I can recognise a very definite bias and also a very sloppy work product out of these folks at the CRU.

  34. John O'Meara
    Posted Dec 16, 2009 at 6:41 PM | Permalink

    This abstract is dated June of this year, but it seems to fit the description of the paper well. If it is based on the same paper, it has been updated.

    Click to access auffhammer.pdf

    Steve: Yes, that sure looks like it fits the bill.

    • Byronius
      Posted Dec 16, 2009 at 9:06 PM | Permalink

      Great find! Their Conclusion is a masterpiece of understatement, and would seem to respond effectively to the original criticism that there was no practical import.

      To wit: “As a result of misspecification of the transfer function, and the bias induced by the reverse regression estimation procedure, fluctuations of the reconstructed climate series are underestimated. Inferences from such series regarding the existence of abnormalities in particular periods, including the most recent period, are unreliable.”

      Could do some damage, indeed!
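
      For readers who want to see why fluctuations end up “underestimated”, here is a hedged Monte Carlo sketch on purely synthetic data. It illustrates the generic variance attenuation that arises when temperature is regressed on a noisy proxy over a short calibration window; it is not the specific transfer-function argument of the draft itself.

        # Synthetic illustration only: reconstruct a "temperature" series by regressing
        # it on a noisy proxy, then compare reconstructed vs. true variability.
        import numpy as np

        rng = np.random.default_rng(42)
        ratios = []
        for _ in range(500):
            T = rng.normal(0, 1, 1000)               # "true" temperature history
            P = 0.8 * T + rng.normal(0, 1, 1000)     # noisy proxy of that history
            cal = slice(900, 1000)                   # calibration window: the last 100 "years"
            b, a = np.polyfit(P[cal], T[cal], 1)     # regress temperature on the proxy
            T_hat = a + b * P                        # reconstruct the whole period
            ratios.append(T_hat.std() / T.std())
        print(f"mean reconstructed/true std ratio: {np.mean(ratios):.2f}")  # comes out well below 1

      Under these assumptions the ratio lands near the calibration correlation rather than near one, which is the sense in which swings in the reconstruction are damped relative to the truth.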

      • Byronius
        Posted Dec 17, 2009 at 9:38 AM | Permalink

        And one more thing: let’s assume (for the sake of argument) that this paper is correct in its claim that variability in temps is suppressed. Wouldn’t climate models which reproduced this series then (artificially) amplify the predicted positive feedback effects of CO2?

        Steve: If so and the premise is true, we should know and govern ourselves accordingly. I frequently make this point.

        • Eric Rasmusen
          Posted Dec 18, 2009 at 12:45 AM | Permalink

          The ClimateGate email said that just showing that a method is theoretically flawed is not good enough — you have to show that the mistake matters. That’s perhaps a valid objection to the article, for publication in a first-rate journal. On the other hand, it’s good to be able to warn people not to use the bad method, if the method might produce wrong results when next used.

          In economics, I found that the proof of the most important result in a classic paper was fatally flawed, tho I could use a different method to prove that the theorem was true. The journal editor rejected, saying that since the theorem was true, it didn’t matter. Argh! But he wasn’t motivated by politics— just the idea that only the result mattered.

          The editor’s proper response, tho, to the climate paper with hard math was not to find an excuse to reject, but to invite a revise-and-resubmit, telling the authors that they need to show that the mistake would seriously affect the conclusions of the earlier paper.

    • WHR
      Posted Dec 16, 2009 at 10:58 PM | Permalink

      That has to be it, and I’d wager any reference to McIntyre and McKitrick’s research having any merit “automatically” invalidates the submitted research when it passes through the offices of THE TEAM.

    • Posted Dec 16, 2009 at 11:22 PM | Permalink

      That’s gotta be it. Steve McIntyre should update the OP with this link.

    • bender
      Posted Dec 17, 2009 at 12:16 AM | Permalink

      Damage?! I’ll say. That has got to be it.

    • bender
      Posted Dec 17, 2009 at 12:18 AM | Permalink

      Note: Jim Bouldin is at UC Davis. Jim, give us your expert review.

    • David Wright
      Posted Dec 17, 2009 at 1:35 AM | Permalink

      This is a good paper. The exposition is clear, the math justified, the conclusions circumscribed. (There is a typo: in equation 3, “\phi” should be replaced by “\phi L”.)

      • David Wright
        Posted Dec 17, 2009 at 7:07 PM | Permalink

        If I worked in dendro, I would be pretty pissed at my colleagues for having kept a clearly methodologically superior technique out of the literature for five years, just so their past papers wouldn’t look deficient.

    • Posted Dec 17, 2009 at 4:09 AM | Permalink

      I have emailed Prof Auffhammer to ask if he would like to comment.

    • Norbert
      Posted Dec 17, 2009 at 6:10 AM | Permalink

      It is definitely updated, containing references to 2004 and 2005 literature throughout.

      Although this stuff is way over my head, it seems that it contains a comparison of the methods of the kind which Cook might have found lacking, although I couldn’t tell if this is what he meant.

      I wish the final graph (Figure 3) would show the actual temperature more clearly, and would show both curves smoothed so that one can see if the two methods show different trends, which I can’t see in this graph. I can see only that the red curve has more variation, which might go away after smoothing. It is difficult to tell what the trends would look like after smoothing, and whether there would be significant differences.

      And I have to say, hoping that it is not just my wishful thinking because of the points I have argued, but to me it would look as if the blue curve is more often closer to the real temp than the red curve. The red curve seems to jump out of the general picture too often. So in the absence of understanding statistical error measures, I feel inclined to say the blue curve looks better to me (an exception seems to be the time 1920-1930).

      Certainly this graph doesn’t make it clear to me, as a non-statistician and non-climate-scientist, why the red curve would be better.

      As far as I am concerned, damage might be imminent, but hasn’t happened yet.

      • bender
        Posted Dec 17, 2009 at 7:05 AM | Permalink

        So you admit your surmise does not rise to the level of an analysis because you lack expertise? If so, then I think your contribution is complete.

        • Norbert
          Posted Dec 17, 2009 at 10:47 AM | Permalink

          On this point, yes, probably.

      • bobdenton
        Posted Dec 18, 2009 at 4:33 AM | Permalink

        Norbert, you’re focusing far too much on the curve of the mean. This paper shows that it is possible to preserve more of the information in relatively low-noise original data than is done using the conventional computation. The red curve is the same as the blue curve, but with more information retained. As you note, smoothing will tend to reduce both red and blue curves to a common mean; what will be different is the accompanying probability distribution (the grey shaded area accompanying the curve – the error mask). The red curve will have a wider error mask than the blue. When comparing 2 points on the curve you compare not just the curve, but the curve and its accompanying error mask at each point. A wider error mask therefore will affect a comparison of present with former climate. An additional point is that probability distributions tail off to infinity, so a practical limit must be determined. The limit should be determined in relation to the question asked. So far as I can see, error masks are limited to 2 SDs. If your question is “Is this the warmest year in the millennium?”, the error mask should be 3.1 SDs. Even if the question is “Is this the warmest decade in a millennium?”, an error mask of 2 SDs falls a little short. Manipulation of the error masks can make the graphics appear more persuasive than the figures warrant.

        Steve: OT. Please move this debate to another thread.
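
        (An illustrative aside: the 2 SD / 3.1 SD figures above can be checked in a couple of lines, assuming a one-sided Gaussian tail and treating “warmest year in a millennium” as roughly a 1-in-1000 event; this is only a sketch, not anything from the paper or the comment.)

          from scipy.stats import norm

          # "Warmest year in the millennium": roughly a 1-in-1000 one-sided tail
          z_year = norm.isf(1.0 / 1000)   # ~3.09 SDs, i.e. the "3.1 SDs" above

          # "Warmest decade in a millennium": roughly a 1-in-100 one-sided tail
          z_decade = norm.isf(1.0 / 100)  # ~2.33 SDs, so a 2 SD mask falls a little short

          print(round(z_year, 2), round(z_decade, 2))  # 3.09 2.33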

    • Jean S
      Posted Dec 17, 2009 at 8:34 AM | Permalink

      I also independently researched this before noticing John’s link. I came up with the names Yoo and Wright as well, based mainly on this:
      Yoo, Seung Jick and Brian D. Wright. 2000. “Persistence of Growth Variation in Tree-Ring Chronologies.” Forest Science 46(4): 507–520.
      So I also guess that John found the correct paper.

      • Jean S
        Posted Dec 17, 2009 at 8:49 AM | Permalink

        Ah, from Auffhammer’s homepage under “working papers”:
        Yoo, Seung Jick, Brian Wright and Maximilian Auffhammer. 2006.
        Specification and Estimation of the Transfer Function in Paleoclimatic
        Reconstructions.

        Notice the date and the order of the authors. So I would guess that the paper has been rejected twice… maybe Y&W tried it originally, then A came in (resubmitted and rejected again), and now Y&W are tired of it but A still wants to give it a shot?!?

        • bender
          Posted Dec 17, 2009 at 9:58 AM | Permalink

          reasonable guess

    • Frank
      Posted Dec 17, 2009 at 12:29 PM | Permalink

      On the author’s CV, this is still listed as a “Working Paper” dated 2006. So it appears that publication has been suppressed for 3+ years.

      Steve: The Cook review was in 2003.

  35. anon
    Posted Dec 16, 2009 at 7:53 PM | Permalink

    Number 2. Were the people successful in their endeavour to preventing publication in journals or mentions in IPCC ? This is a very important question. Could one successfully do that? Five papers by McIntyre and McKitrick were published and then referenced and discussed in the IPCC… But were the people successful in their endeavour to preventing publication in journals or mentions in IPCC ? The answer is no. They were not successful. [elision does not seem germane to this particular]

    Really … Phil doesn’t think so

    http://www.eastangliaemails.com/emails.php?eid=1065&filename=1256765544.txt

    You are probably aware of this, but the journal Sonja edits is at the very bottom of almost all climate scientists lists of journals to read. It is the journal of choice of climate change skeptics and even here they don’t seem to be bothering with journals at all recently.

    • Patrick M
      Posted Dec 18, 2009 at 10:17 AM | Permalink

      “Were the people successful in their endeavour to preventing publication in journals or mentions in IPCC ? This is a very important question.”

      Based on the comments on this, the answer is YES:
      (1) they were able to reject publication of the paper which is
      (2) identified above as possibly by Yoo, Wright and Auffhammer, and which appears to be the linked auffhammer.pdf, and which is
      (3) an important contribution to correcting paleoclimatic reconstructions (i.e. not a bad paper), AND which
      (4) did so in a way that crucially undercut claims of the uniqueness of contemporary temperature rises, and which
      (5) was still identified as a “working paper” in 2006, meaning its rejection by Briffa/Cook in 2004 did prevent publication and use in subsequent IPCC reviews

      If this is confirmed, great detective work by the CA Comment Team. It shows a clear and successful conspiracy to suppress valid scientific findings that provide problems/challenges to the work of the “Hockey Stick Team”.

      • bender
        Posted Dec 18, 2009 at 10:31 AM | Permalink

        Of course the “conspirators” would argue that the “result” that might have had theoretical “validity” did not demonstrate empirical value. Again and again and again: you do not know that Cook suggested rejecting. He might have suggested “accept with major revision” or “reject with invitation to resubmit”. Perhaps the reason it has not been published is because the authors moved on.
        .
        It is premature to conclude a “successful conspiracy to suppress valid scientific findings”. We need to see the reviews and hear from the authors.

  36. Posted Dec 16, 2009 at 8:05 PM | Permalink

    Speaking as another complete outsider to peer review in climate science, what I find shocking here is the reaction of Norbert and a few others.

    Surely anyone seeking to defend the current IPCC ‘team’, and believing them to be morally right, would simply have said: “I will ask Keith, Ed, Phil and Mike which four papers these were and let’s take it from there.”

    Is there some unstated assumption that the four gentlemen in question would not be forthcoming with that information?

    If so, the presumption of guilt seems unavoidable.

    Let’s hope we do soon discover which papers.

    • Norbert
      Posted Dec 17, 2009 at 6:21 AM | Permalink

      I don’t get your point about being shocked. I didn’t intend to defend anyone or believe that anyone is morally better than anyone else. What I am hopefully saying is that the information presented *here* does not allow damning conclusions (so far, and to my knowledge, at least).

      It would be contradictory to my point to go elsewhere to seek additional information. (Although I am of course doing that for other reasons, if you know what I mean).

    • Posted Dec 18, 2009 at 9:01 AM | Permalink

      From my perspective, then, you spoke before you had anything to say.

  37. Shallow Climate
    Posted Dec 16, 2009 at 9:08 PM | Permalink

    It is my understanding that here in the US of A, in a jury trial, it is not enough for the jurors to be impartial: they must also give the IMPRESSION of being impartial. (At least I heard a judge in LA say that very thing to a group of potential jurors.) And surely this makes sense. Is it not blatantly obvious that the same has to be true in the reviewing of scientific research articles? Needless to say (but of course I am saying it), Messrs. Briffa and Jones are revealed here to be totally guilty of failing to give the impression of impartiality.

  38. Keith Herbert
    Posted Dec 16, 2009 at 9:17 PM | Permalink

    I too was struck by Prinn’s framing of the question and his answer. He didn’t consider that merely attempting to block publication of relevant papers, regardless of success, would be an egregious offense to most people.

  39. Jason
    Posted Dec 16, 2009 at 9:42 PM | Permalink

    I saw something different in Professor Prinn’s statements on climategate than others here did.

    From a PR perspective, some of the responses from the climate community have been shocking. Respected climate scientists (and journals) have gone out on a limb to defend the Principals, often appearing to endorse the activity documented in the emails.

    If we were to ignore every other aspect of the issue, and consider only the PR impact, these responses have been very detrimental to climate science as a whole.

    Again, only considering PR and ignoring scientific merit entirely, the smart response is to:

    1. Criticize these emails
    2. Declare this to be an isolated incident
    3. Throw the scientists involved under the bus
    4. Declare that even if history were rewritten to remove every trace of the CRU principals, the basic science underlying this field would be sound.

    This seems remarkably similar to Professor Prinn’s statements.

    Let me further add: if Professor Prinn has either factored his responsibilities at MIT into his remarks, or dissembled or exaggerated slightly about what we know of the emails, I would not consider it unethical or even unexpected. I think he has acted very reasonably. But, like everyone, he most certainly has an agenda. [Unlike some of his colleagues, he has pursued that agenda without obvious distraction.]

  40. Posted Dec 16, 2009 at 10:00 PM | Permalink

    The same Ed Cook who says we know eff all about variability > 100 years?

    • bender
      Posted Dec 16, 2009 at 10:16 PM | Permalink

      You guys are being unfair to the Cook review that was prompted by the Briffa ask. It happens all the time that a paper that pokes theoretical holes in a large body of empirical work is rejected simply because it fails to demonstrate the magnitude of the problem or move the field forward. All the time. You have no idea if this paper was eventually published. Indeed, what if it was Wilmking or Wilson on the divergence problem? What if Ed Cook’s harsh review led them to develop even stronger papers, by putting the theoretical objections in a solid empirical context? I would withhold my judgement if I were you. Ed Cook may have done skeptics a huge favor by raising the bar high. You have no idea.

  41. BlueIce2HotSea
    Posted Dec 16, 2009 at 10:22 PM | Permalink

    Listening to Prinn on the MIT debate, I felt nauseous. Reading Briffa to Cook is deja vu. That’s not a technical assessment, just my gut feeling.

  42. Anton
    Posted Dec 16, 2009 at 11:28 PM | Permalink

    Isn’t this the paper referred to above?

    Click to access auffhammer.pdf

    • Bernie
      Posted Dec 17, 2009 at 10:26 AM | Permalink

      This paper reads like it is addressing very fundamental and well known – but possibly largely ignored – issues. It seems to be a more detailed look at the same “error” issues raised recently by Matt Briggs and AJStrata.

      • Eric Rasmusen
        Posted Dec 18, 2009 at 12:31 AM | Permalink

        As an economist I was gratified to see an NBER paper come up. It raises a good question: if climate science is a corrupted field, why don’t other fields (like economics or statistics) start taking away their topics?

        We in economics are quite good at statistics, even though we know nothing about tree rings or temperatures. I wonder if this NBER paper could get published in (a) a climate journal, or (b) an economics journal.

        It may turn out that all the best papers in climatology will be published in agricultural economics journals in the 21st century.

  43. Anton
    Posted Dec 16, 2009 at 11:29 PM | Permalink

    Sorry, I see it has already been mentioned.

  44. Anton
    Posted Dec 16, 2009 at 11:36 PM | Permalink

    But apparently it has still not been published. Anyone want to contact one of the authors to hear this paper’s story?

    • bender
      Posted Dec 17, 2009 at 12:09 AM | Permalink

      Holy crap! You can’t just flat reject a paper of that quality! Is this really the paper that they were talking about? Steve, you have to call these guys!

  45. MikeN
    Posted Dec 16, 2009 at 11:42 PM | Permalink

    Steve how does Jones reviewing a paper that criticizes his own work compare with your review solicited by Stephen Schneider, described in Caspar and the Jesus Paper?

    Steve: They ignored my review and did not send the 2nd version to me. Wahl and Ammann concealed the rejection of the companion paper and they didn’t care.

  46. Anton
    Posted Dec 17, 2009 at 12:23 AM | Permalink

    Dr. Auffhammer, Yoo or Wright could provide the reviews they received from the journal(s) when they submitted this paper (or an earlier version) back in 2003 (SIX years ago!).

  47. Ron Cram
    Posted Dec 17, 2009 at 1:32 AM | Permalink

    Steve, regarding Climategatekeeping – what do you make of this email?
    http://www.eastangliaemails.com/emails.php?eid=384&filename=1069630979.txt

    • aragornnyc
      Posted Dec 17, 2009 at 1:40 PM | Permalink

      OT – if not OK delete

      I understand you are knowledgeable.

      I would like to ask your opinion on the “war” that is going on at Wikipedia over publishing an informative “Climategate” article…reading the edits is more enlightening than reading the article…the shoddy way they treat our host is terrible…can one do anything about this railroading???

      Steve: might be worth having a Wikipedia thread on this issue. Let me think about it. But I prefer such suggestions on Unthreaded.

  48. MikeN
    Posted Dec 17, 2009 at 4:23 AM | Permalink

    Steve, I wasn’t looking at the results of the review, but rather a comparison between asking you to review it and asking Jones to review whatever he was reviewing. Didn’t Wahl & Ammann criticize your paper?

  49. Craig Loehle
    Posted Dec 17, 2009 at 8:48 AM | Permalink

    The gatekeeping extends beyond journals. It also involves who was allowed to become an author or editor for the IPCC reports. Except for a few who slipped by because they were selected by their countries, how likely do you think it was that a skeptical scientist would be allowed to participate as an author? In fact, the same few, who also publish together, show up over and over. Furthermore, any doubts, such as those expressed by Briffa or Cook or Wigley, were squashed by the enforcers of orthodoxy such as Mann and Jones. So any claims that the writing of the IPCC reports was pure science done by objective individuals are (self snip).

  50. Nicolas Nierenberg
    Posted Dec 17, 2009 at 9:13 AM | Permalink

    I actually think some of these comments are missing the point. Maybe these papers should have been rejected, or sent back for revision, or even accepted. The problem is that the reviewers don’t seem to approach it with an open mind. They are looking for ways to defeat the paper, because it conflicts with their own prior work, or the work of colleagues. This is an obvious conflict of interest that the journals should try to avoid.

    Steve: Precisely.

    • bender
      Posted Dec 17, 2009 at 9:26 AM | Permalink

      “the reviewers don’t seem to approach it with an open mind”
      .
      That’s a surmise. I’m not sure how much you can learn from a single part of an exchange. So maybe the point is not being missed.

      • Nicolas Nierenberg
        Posted Dec 17, 2009 at 9:33 AM | Permalink

        Bender,

        What am I missing? Jones says he has a paper to review, and that “if published as is, this paper could really do some damage.” He also points out that it uses his friend’s reconstruction as the whipping boy. All he knows when he wrote the email is that they haven’t shown whether it makes a practical difference, and he wants the data “to refute their claims.”

        So in summary he has a paper to review that would indicate that papers written by people he likes were incorrect. He has no idea whether it is correct or not, and wants very much to prove that it isn’t since it would be damaging. How is that having an open mind?

        The open mind would say: “This is interesting. I wonder if it makes a practical difference. That would be an important finding.”

      • Nicolas Nierenberg
        Posted Dec 17, 2009 at 9:35 AM | Permalink

        I also note the somewhat disparaging “Korean guy and someone from Berkeley.” How could these upstarts be criticizing our work?

      • Nicolas Nierenberg
        Posted Dec 17, 2009 at 9:52 AM | Permalink

        Sorry it was Cook not Jones.

  51. Posted Dec 17, 2009 at 9:59 AM | Permalink

    Gatekeeping is business as usual; doing business with undeclared conflicting interests hinges on egregiousness, pending an appropriate definition (the one on https://climateaudit.org/faq/ does not fit the situation); but goalkeeping with a broken hockeystick is plainly illegal: http://www.nhl.com/ice/page.htm?id=26286 (see 10.3).

  52. bender
    Posted Dec 17, 2009 at 10:09 AM | Permalink

    Nicolas, here’s my take. They recognize the potential correctness of the argument and its potential importance in terms of application. They may even recognize that the improved method performs better at a single site. The question is: “does it matter?” i.e. Does the magnitude of improvement, when propagated across many sites, make a substantial difference to the mean and precision on the reconstruction? Maybe. Maybe not. They want a demo. It is a legitimate line of criticism to suggest that a paper’s argumentation is correct, but incomplete. It happens *all the time*.
    .
    Myself, I might have ruled differently. But you can understand that the bar is low when nothing is at stake, and the bar is high when you’re tugging at structure in an edifice. Can’t you? People who are suggesting this is highly unusual, you are just wrong. This is not all that unusual. I’m not saying it’s just. I’m saying what the culture is.
    .
    To get that paper published the authors should consider application of their method in an actual case study. Yamal, for example.

    • Tom C
      Posted Dec 17, 2009 at 10:43 AM | Permalink

      bender –

      I think you are falling into the “wrong method but right answer trap” here. From the E-mail, Cook implies that the authors are claiming that the method in question is incorrect. Therefore, the issue of “practicality” is not even relevant. Even if authors X,Y,Z don’t have a good demonstration of their method, the calling out of incorrect theory is of paramount importance.

    • Nicolas Nierenberg
      Posted Dec 17, 2009 at 12:17 PM | Permalink

      Bender I just disagree,

      Look at the tenses in the letter. The only thing Cook knows so far is that the math is correct. He is just hoping that it isn’t practical. He hasn’t tested that yet. He is looking for a way to prove that it is wrong because he doesn’t like it.

      • Tom C
        Posted Dec 17, 2009 at 12:36 PM | Permalink

        NN –

        What exactly do you think is meant by “practical”? It seems that he uses “practical” to mean whatever leads to a change in the conclusions. This is confused thinking. The rightness or wrongness of a statistical technique is independent of whether the results match some expected conclusion.

  53. Steve McIntyre
    Posted Dec 17, 2009 at 10:20 AM | Permalink

    I think that an improved theoretical understanding is extremely important regardless of whether it “matters” very much in a type case like Tornetrask. It might make a difference in another case.

    If this can’t be published in “academic” journals, where is it going to be published?

    At present, IPCC, Mann, Jones, Briffa and the rest of the Team use confidence intervals that have ZERO theoretical basis.

    • Bernie
      Posted Dec 17, 2009 at 10:29 AM | Permalink

      Steve:
      Which paper or papers, or are we still unsure?

      Steve: We have leads on two papers (Auffhammer; Kamel) – two are completely blank.

  54. Jryan
    Posted Dec 17, 2009 at 10:36 AM | Permalink

    This whole argument reminds me of an article I read a few months ago called “The Formula that Killed Wall Street”

    Here: http://www.wired.com/techbiz/it/magazine/17-03/wp_quant

    I think Norbert should read this article in full to see why mathematics and theory are so tightly intertwined in science.

    In the case of Wall Street, the theory of risk modeling was turned on its head by a new theory of how to model risk. This new method, when applied to markets in 1999 and onward, looked to everyone to be a godsend to the markets. As we know now, the math was very, very wrong… but nobody wanted to question the math because they were making gobs of money with it.

    There are numerous parallels between market risk modeling and Climate Science. Keith Briffa, and many more, were too heavily invested in their models, and too pleased with the professional results, to properly question what they were doing.

    Steve: be careful not to go a bridge too far. This is just dendro here.

  55. Steve McIntyre
    Posted Dec 17, 2009 at 10:51 AM | Permalink

    I’ve sent an inquiry to the publishers of The Holocene and the editors of GRL and JGR for information on the rejected papers (or asking them to forward my inquiry to the authors in question and to confirm to me that they have done so).

  56. Kenneth Fritsch
    Posted Dec 17, 2009 at 11:03 AM | Permalink

    I do not have anything significant to add to this discussion, but to note that much of what I take away from the “emails” and their defenders these days is one simple and I think important question:

    What do these exposed comments or the defenses say in general about the potential for policy advocacy getting in the way of scientific judgments and associated judgments involving peer review? We can all make our own personal judgments.

    Do I see an apparent overreaction by Briffa to the consequences of a paper under review? Yes. Do I wonder about those vague references to doing damage to some undefined something, which one might expect, if it were of a scientific nature, to be more specifically described? You bet. Do I think Norbert’s ongoing and extended lawyerly defense of all this as normal and inconsequential seems a bit overdone? I do.

  57. laggard
    Posted Dec 17, 2009 at 11:19 AM | Permalink

    Is there a “Journal of Agricultural, Biological, and Environmental Sciences,” or did Cook mean to write “Journal of Agricultural, Biological, and Environmental Statistics”? I found the latter online, but not the former.

    If the latter, this sentence from the homepage of the journal might be useful to the theory vs. application discussion above:

    “The Journal of Agricultural, Biological, and Environmental Statistics contributes to the development and use of statistical methods in the agricultural, biological, and environmental sciences. Published quarterly, the journal’s articles are of immediate and practical value to applied researchers and statistical consultants in these fields.”

    An improvement in methodology can satisfy the last sentence, no? Of course, if I have the wrong journal then never mind.

  58. Steve McIntyre
    Posted Dec 17, 2009 at 11:33 AM | Permalink

    A more complete statement is here
    http://pubs.amstat.org/page/jabes/information-for-authors and shown below.

    The Auffhammer article (if it’s the one) totally fulfils the specifications of the journal. Indeed, the journal is 100% appropriate. In statistics/applied statistics, standard data sets are often used to demonstrate techniques even if the results don’t “matter”. The purpose is to demonstrate the technique and the analysis.

    This is a good journal for submitting such analyses and I’m glad to learn of it BTW.

    Here is the longer journal mandate.

    The Journal of Agricultural, Biological, and Environmental Statistics (JABES) publishes papers of immediate and practical value to applied researchers and statistical consultants in the agricultural sciences, the biological sciences (including biotechnology), and the environmental sciences (including those dealing with natural resources). Only papers that address applied statistical problems will be considered. Interdisciplinary papers as well as papers that illustrate the application of new and important statistical methods using real data are strongly encouraged.

    For regular papers, a motivating example should be presented early in the paper. The statistical development should then be presented, and the results applied to the example. Expository, review, and survey articles addressing broad-based statistical issues are encouraged.

    Presentation should be accurate, clear, and comprehensible to readers with a background in statistical applications. When necessary, detailed proofs, computer code, and other lengthy technical portions of a manuscript should be placed in an appendix so that they will not interfere with the primary focus on the paper, which is to be a discussion of the statistical methods and issues being addressed. Real data should almost always be used to illustrate the statistical applications being discussed.

    Articles that have been previously published in a refereed journal or articles that are under review by another journal may not be submitted to JABES.

    • bender
      Posted Dec 18, 2009 at 1:10 AM | Permalink

      A methods paper in a journal of that variety does not IMO need to demonstrate the sort of applicability that Cook demanded. I’ve already made that clear, but I restate to declare my general agreement with Nicolas and Steve. Briffa IMO was the wrong person to handle that paper. And not because he was in conflict. Who exactly was the Editor? Who was the Associate Editor in charge of locating reviewers? Briffa? If so, what’s Briffa doing as an Associate Editor on a methods journal? He’s not a statistician. Some of these questions should be pretty easy to answer.

      Steve:
      bender, you’ve conflated two different reviews. Briffa was an editor at Holocene, not JABES, and we don’t yet know what the Holocene submission was.

  59. Craig Loehle
    Posted Dec 17, 2009 at 11:41 AM | Permalink

    There is kind of an issue with how they are judging the paper in question. Cook says: “but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced.”
    A sound methods paper in statistics looks at the properties of estimators: bias, confidence intervals, etc. In the draft linked online, they show that the Briffa-style method gives invalid autoregressive outputs (and other problems). Showing that an existing method is invalid mathematically and statistically is a proper role for such a paper. They want to judge it by the results. This is backwards IMO.

    • Norbert
      Posted Dec 17, 2009 at 11:53 AM | Permalink

      The online draft is obviously a strongly modified update, since it mentions 2004 and 2005 literature throughout.

      In the current draft, the authors claim that the “reverse regression” method leads to problematic results. It seems fair to say that such a claim should be demonstrated by showing that the results are actually problematic, rather than just asserting it. Perhaps the 2009 version of the paper (still labelled “preliminary”) does that and the 2003 version didn’t; we don’t seem to know yet.

      • Nicolas Nierenberg
        Posted Dec 17, 2009 at 12:21 PM | Permalink

        Norbert,

        The point is that Cook was looking for data to prove that it didn’t matter. If he just wanted to send back a “revise and update” comment because the paper was purely theoretical, he could have done that without the data. It is clear from his email that he didn’t like the paper from the first moment he saw it.

        • Norbert
          Posted Dec 17, 2009 at 12:36 PM | Permalink

          Nicolas,

          I wouldn’t think his objection was that the paper was theoretical, but that the paper questions (or suggests to question) the practical results of existing research, without demonstrating that in terms of practical results it would matter. One obvious danger is that political interest groups would use such a paper to claim that all the related research has been wrong. So if that is what the paper perhaps even explicitly claims, then it should actually demonstrate that scientifically, instead of just claiming it.

        • Kenneth Fritsch
          Posted Dec 17, 2009 at 2:47 PM | Permalink

          When Norbert says:

          “One obvious danger is that political interest groups would use such a paper to claim that all the related research has been wrong.”

          I think he hits the essence of the matter squarely on the head. The follow-up is whether it is a “normal” concern of a scientist (not an advocate) to care about the politics. Would not the purely scientific response be: the paper shows that our methods are incorrect, but I maintain my results will hold up with proper methods, and therefore to do the science properly we will use the proper/better method and show our results again. A bit more work, but we are doing science after all and not politics.

        • bender
          Posted Dec 18, 2009 at 1:13 AM | Permalink

          Agreed. Was going to say as much this morning. Papers like this pose two dangers. A danger that they will get used (by knowledgeable specialists), and a danger that they will get abused (by trouble-making skeptics looking to poke holes in the AGW proposition). I think they were concerned about the latter, and thus wanted proof of the former – that the method “matters”.
          .
          Happens all the time. Dirty petty blood sport.

  60. Craig Loehle
    Posted Dec 17, 2009 at 11:44 AM | Permalink

    To clarify my point, if one shows that Mann is using PCA wrong, it is not valid to criticize this analysis on the basis that one can no longer find hockey sticks with an alternate method. Wrong is wrong.

    • Tom C
      Posted Dec 17, 2009 at 12:41 PM | Permalink

      Craig – Exactly right. A corollary to this point is that once PC has been used incorrectly, in a theoretical sense, the details of the subsequent analysis don’t really matter. MBH should have been ignored on these grounds alone, independent of the detailed demolition that Mc-Mc had to do.

  61. Richard A.
    Posted Dec 17, 2009 at 11:56 AM | Permalink

    Norbert wrote to Mike: “Do you think it would not be a substantial reason if a paper makes a theoretical mathematical point of no practical relevance, but creates the impression that there is such practical relevance?”

    If the paper can do “damage,” its practical import has to exist on some level. If it doesn’t, then it can’t do any damage of note. And if the paper is correct and points out errors in existing work but does not affect results of that work, then the claimed “damage” must be of some other nonscientific nature, likely political, and is that a legitimate reason to stop it from being published in a journal of science? I say no.

    • Norbert
      Posted Dec 17, 2009 at 12:24 PM | Permalink

      The way I’ve always read this is that the paper can do damage by claiming not only that the method it proposes is “better”, but also that the existing method leads to problematic results. This does damage by suggesting that the results of existing research are not valid. But if the better method doesn’t produce results which are noticeably different, then the existing research is still valid. That is obviously a big difference, and a paper whose claims suggest that the results of existing research are not valid (as opposed to saying that the methods can be improved) should better demonstrate that this is actually the case.

  62. Ward S. Denker
    Posted Dec 17, 2009 at 12:25 PM | Permalink

    >Most of these examples do not pertain to the Mc-Mc articles…

    Or is it Mc^2? 😉

  63. Posted Dec 17, 2009 at 12:56 PM | Permalink

    I notice that one of the authors of the Auffhammer draft (Yoo) works at an economics institute. Two of the references that they use (Klepper & Leamer 1984, and Newey & West 1994) are also from econometric journals.

    This sounds like something Ross McKitrick would be interested in.

    • Posted Dec 17, 2009 at 12:57 PM | Permalink

      Actually, they’re all energy/environmental economists.

  64. Frank
    Posted Dec 17, 2009 at 1:14 PM | Permalink

    From page 5 of the Auffhammer paper:

    “We show, using the data from Briffa et al. (1992), that the standard regression approach overestimates the order of the true AR process by three. This aspect of reconstruction is of crucial importance, since overestimation of the order of the AR process of temperature underestimates the variability of the true temperature movement so that inferences about the significance of observed climate abnormalities in a particular period are unreliable.

    In short, the commonly applied methods of making inferences about past climate variability from tree-ring data or other paleoclimatic data series are subject to problems that may crucially affect inferences about the nature of past climate and recent changes in climate, even if temperature is accurately measured in the period where direct instrumental measurement is available, the response function is correctly specified, and the explanatory power of the estimated response relation is good.”

    Now we know why Cook thought the paper could do some damage.
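
    To see why understating reconstruction variability matters for such inferences, here is a minimal illustrative sketch with made-up numbers (not figures from the paper): the same modern anomaly looks far more “unprecedented” when judged against an understated standard deviation than against the true one.

      from scipy.stats import norm

      anomaly = 1.0          # hypothetical modern departure from the reconstructed mean (deg C)
      sd_understated = 0.3   # SD of a reconstruction whose variability is understated (made-up)
      sd_true = 0.5          # SD of the true temperature history (made-up)

      for label, sd in [("understated SD", sd_understated), ("true SD", sd_true)]:
          z = anomaly / sd
          p = norm.sf(z)     # one-sided tail probability of an anomaly this large
          print(f"{label}: z = {z:.2f} SDs, one-sided p = {p:.4f}")

      # understated SD: z = 3.33 SDs, p ~ 0.0004  -> looks "unprecedented"
      # true SD:        z = 2.00 SDs, p ~ 0.0228  -> not unusual over a long record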

  65. Posted Dec 17, 2009 at 2:12 PM | Permalink

    This is an interesting quote from page 12 (pdf page 13) of the Auffhammer et al. 2009 working draft:

    In summary, a noisier response function will result in noisier reconstructions, with potential time dependence, even if the response function is correctly inverted. But the estimation of a misspecified transfer function induces time dependency not only due to the errors in the response relation but also due to misspecification of the transfer function. The limitations of this method of reconstruction do not disappear if the fit of the response relation is perfect. For high-quality data, for which the response relation has high explanatory power, the inversion method is clearly superior. For low-quality data, little can be expected from either method.

    It reminds me of Stephen McIntyre writing to Andrew Revkin of the New York Times’ Dot Earth blog here:

    The fundamental requirement in this field is not the need for a fancier multivariate method to extract a “faint signal” from noise – such efforts are all too often plagued with unawareness of data mining and data snooping. These problems are all too common in this field (e.g. the repetitive use of the bristlecones and Yamal series). I think that I’ve made climate scientists far more aware of these and other statistical problems than previously, whether they are willing to acknowledge this in public or not, and that this is highly “constructive” for the field.

    As I mentioned to you, at least some prominent scientists in the field accept (though not for public attribution) the validity of our criticisms of the Mann-Briffa style reconstruction and now view such efforts as a dead end until better quality data is developed. If this view is correct, and I believe it is, then criticizing oversold reconstructions is surely “constructive” as it forces people to face up to the need for such better data.

    (By the way, for anyone who hasn’t read McIntyre’s letter here, I highly recommend it. It’s a rare look into McIntyre’s ‘big-picture’ motivations that you often don’t get here on CA.)

    Now, McIntyre is talking about multiproxy reconstructions, not single-proxy inversions, but still. One of the (valid, or at least, if not totally valid, convincing to many people) criticisms levelled against many ‘climate skeptics’ is that they only attack, they never help build.

    To this, McIntyre has replied (in the Dot Earth column linked above) that he doesn’t want to do a reconstruction that he doesn’t consider statistically valid. I wonder if Auffhammer, Yoo, and Wright don’t feel the same way? Either way, it’s a bit of an impasse because to lots of people, just attacking and never helping to build makes one unworthy of attention. Meanwhile, many in the skeptic community (especially posters here on CA) are willing to say “Well, I’m sorry that your life’s work has just been wrecked, but I guess trees aren’t thermometers after all.” Of course they’re not going to get much traction with the dendroclimatology community saying stuff like that!

    • Byronius
      Posted Dec 17, 2009 at 2:46 PM | Permalink

      OT

  66. Frank
    Posted Dec 17, 2009 at 2:32 PM | Permalink

    snip – piling on

    Worst of all, what is happening to graduate students? First they hear from their faculty “of course Ron Prinn is right, it’s time to move along”. Then word of the info in this thread leaks out and they read the above emails. Then the Auffhammer draft starts circulating and someone realizes their project has been held back because they hadn’t seen it. (Lindzen suggested gathering six undergraduates for a summer to review the emails to understand what is really in them.)

  67. Posted Dec 17, 2009 at 2:37 PM | Permalink

    Dear Colleagues,

    I just received a few emails asking me to comment on what happened to our paper, which was ultimately rejected at JABES. Brian Wright and Seung Jick Yoo submitted this paper to JABES ~2002. It was reviewed by three referees and the editor invited a resubmission. One referee liked the paper, the second referee was not convinced, but made substantial suggestions for improvement and the third referee strongly disliked the paper calling it “an unfair and unproven condemnation of past climate reconstructions” and argued that the paper displayed “a lack of objectivity, fairness, and scientific rigor”.

    I read the original paper and was convinced by the statistical aspects of it. The underlying problem is a classical problem econometricians like myself deal with daily (e.g. Leamer’s paper). Wright and Yoo took me on as a third author and we rewrote the paper and addressed the referees’ comments as best we could. We resubmitted three years after the original submission, so the editor sent it back to the two non-hostile referees. The first referee argued that his comments were addressed, the second still had technical issues, which were valid concerns. The paper was rejected based on these two reviews.

    We have since rewritten the paper again and presented it at the NSF/NBER time series conference, which is the big meeting in time series. The paper received good reviews and no one found the mathematics surprising. (As a side note, there is no Box Jenkins in the paper).

    Overall, I am a firm believer that the refereeing process is imperfect, but overall works pretty well. Good papers get rejected sometimes and bad papers get accepted sometimes. We are about to submit the paper to a statistics journal and hope to get a fair read of the paper. We are _not_ climate skeptics, but scientists. This paper shows that if your underlying tree ring series is of bad quality, neither method gets you good reconstructions. If you have a good tree ring series, you will do better properly inverting the series. We are trying to show a better method, not to cast doubt on climate change. If you look at my CV, I have a number of papers, which show that the projected impacts of climate change are expected to be quite sizable.

    I hope we can get past this finger pointing and get back to providing objective assessment of the issues at hand. God knows there is enough work to do.

    Best wishes,

    Max

    P.S. My coauthors have not endorsed this email. It represents my own views.

    • AJ Abrams
      Posted Dec 17, 2009 at 3:25 PM | Permalink

      Then if this is the paper in question, I’d say nothing to see here at all.

      • Nicolas Nierenberg
        Posted Dec 17, 2009 at 3:59 PM | Permalink

        AJ Abrams,

        I disagree. The issue is not whether or not the paper was good enough to get published; it is whether the third reviewer, who we can see was out of sync, looked at it with an open mind, or simply wanted to reject the paper because it cast doubt on his and his friends’ previous work. I have no issue with the paper being rejected for legitimate reasons, which is apparently what finally happened.

        It is truly sad that Dr. Auffhammer felt the need to add all the things about how he is not a skeptic to the end of his post. It doesn’t seem possible any longer to simply have a discussion about a particular scientific topic without the entire world view being dragged in.

        • AJ Abrams
          Posted Dec 17, 2009 at 4:08 PM | Permalink

          I don’t agree, though some part of me would like to. With this paper (maybe not the others mentioned), it isn’t clear from the words used that there was any improper intent. I agree with bender. I certainly do think you COULD come to a nefarious conclusion; I just don’t happen to think the evidence is there in this one case. Call it inconclusive, and because of that (again, just with this one paper) I’d say nothing to see here.

    • Craig Loehle
      Posted Dec 17, 2009 at 4:11 PM | Permalink

      Thank you for filling us in on what happened. You may be interested to know that many here at Climate Audit have posted on the reverse regression method (predicting temperature from tree rings directly rather than inverting the proper equation) as being questionable.

    • David Wright
      Posted Dec 17, 2009 at 7:14 PM | Permalink

      No question about who the third reviewer was. I’m glad the other reviewers gave positive and constructive feedback. For the sake of the field, I do wish the revision and resubmission had gone faster, but I’m glad to see it’s happening now. Nice paper.

    • bender
      Posted Dec 20, 2009 at 9:46 PM | Permalink

      “the editor invited a resubmission”
      .
      I told you guys you were being presumptuous. I specifically mentioned this possibility.

  68. mikep
    Posted Dec 17, 2009 at 2:41 PM | Permalink

    I have now read the paper by Auffhammer et al. It seems a fairly standard econometric piece on the properties of estimators. While, especially in its presumed earlier form, it may not have looked at the size of the biases, it remains a straightforward and compelling description of potential problems with current reconstruction methods. It seems quite wrong to try and stop publication because it “does not matter”. Surely it is up to the proponents of the methods to show that their demonstrably incorrect methods do not substantially affect the results. Blocking publication of such a standard analysis seems to me quite shocking and indefensible.

  69. Anton
    Posted Dec 17, 2009 at 3:57 PM | Permalink

    Dr. Auffhammer’s email is very similar to what any working scientist (like me) would produce in similar circumstances. He and his colleagues contend in their paper that they have found a better tool for analysis of time-series data. That is all their paper is really “about”. But it seems certain that at least one referee rejected the paper not based on an objective assessment of the quality of the proposed new tool, but rather on the outcome that tool produced when applied to a particular data set. That is unscientific, pure and simple, whether Dr. Auffhammer wants to say it directly or not.

    But to Dr. Auffhammer, I say: thanks for taking the time to provide an explanation, and by all means, let’s keep the good science coming.

  70. Hoi Polloi
    Posted Dec 17, 2009 at 6:32 PM | Permalink

    Clearly Assoc. Prof. Auffhammer doesn’t want to be drawn into the Climategate Maelstrom… Understandable from his point of view.

  71. EdeF
    Posted Dec 17, 2009 at 7:41 PM | Permalink

    It seems that the paper by Prof. Auffhammer et al is getting a wide reading here, and for good reason. It is a well written paper. I would think the Team members would not be very happy to see it published, since it does point out that the standard techniques underestimate the variability of the temperature reconstruction. I am not convinced, though, that their method is superior, since it seems to overshoot the variability, and it is tied to one site. I would like to see it applied to more than one location. That being said, this paper clearly could be published in a number of journals since it does present a new technique.

    Wanted to say thanks to Drs. Auffhammer and Shaffer for joining the discussion.

  72. MikeN
    Posted Dec 17, 2009 at 8:48 PM | Permalink

    To be more specific, you ask
    Is Jones conflicted in reviewing a submission on CRU? Reply yes, but review?

    Is McIntyre conflicted in reviewing Wahl and Ammann(or was it Ammann and Wahl)?

  73. MikeN
    Posted Dec 17, 2009 at 9:38 PM | Permalink

    Are we sure this is the right paper? Is it likely that Box-Jenkins would be misidentified in this paper?

    • Posted Dec 17, 2009 at 11:05 PM | Permalink

      Hi MikeN,

      unless there is another set of coauthors from Berkeley and Korea (signaling that JABES submissions are not blind) who have submitted a Monte Carlo based paper criticizing current dendroclim. practices, I think it is a safe bet that this is the paper. I think the email just refers to time series analysis as “Box Jenkins” type stuff.

      Max

  74. Posted Dec 19, 2009 at 2:37 PM | Permalink

    Re reverse regression as a biased estimator: one gets a good insight into the problem by reading the book.

    The debate raged during the 1960s; now we have to start it again. Briffa uses ICE (reverse regression, as the roles of response and explanatory variables are reversed). Mann sometimes uses CCE, but for some reason applies variance matching after that, and then invents his own CI algorithm that even close colleagues can’t resolve.

    Note that correlations in these proxy calibrations are so poor that ICE is heavily biased towards the calibration mean. Here’s Briffa’s “Influence of volcanic eruptions on Northern Hemisphere summer temperature over the past 600 years”, Nature 98, recon with ICE (original) and CCE (in blue) [figures not reproduced here].
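
    As a rough illustration of that shrinkage (a minimal sketch on synthetic data, assuming a simple linear proxy model; this is not Briffa’s series or either paper’s actual method): regressing temperature directly on the proxy (ICE, the “reverse regression”) shrinks the variance of the reconstruction by roughly the squared calibration correlation, pulling it toward the calibration mean, while inverting a regression of proxy on temperature (CCE) tends to overshoot the variance when the proxy is noisy.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1000
      temp = rng.normal(0.0, 1.0, n)                 # synthetic "true" temperatures
      proxy = 0.5 * temp + rng.normal(0.0, 1.0, n)   # noisy proxy; calibration r ~ 0.45

      # ICE ("reverse regression"): regress temperature on the proxy and predict temperature
      b_ice, a_ice = np.polyfit(proxy, temp, 1)
      recon_ice = a_ice + b_ice * proxy

      # CCE: regress the proxy on temperature, then invert the fitted relation
      b_cce, a_cce = np.polyfit(temp, proxy, 1)
      recon_cce = (proxy - a_cce) / b_cce

      r = np.corrcoef(temp, proxy)[0, 1]
      print(f"calibration r = {r:.2f}")
      print(f"true SD = {temp.std():.2f}")
      print(f"ICE SD  = {recon_ice.std():.2f}  (about r * true SD: shrunk toward the calibration mean)")
      print(f"CCE SD  = {recon_cce.std():.2f}  (inflated by proxy noise)")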

  75. anon
    Posted Dec 20, 2009 at 7:04 PM | Permalink

    snip – prohibited word

  76. anon
    Posted Dec 20, 2009 at 9:42 PM | Permalink

    I agree with McIntyre, but I’m sure the other side will simply point to

    Reference: 450 skeptical peer reviewed papers

    and say the allegation that skeptical papers were kept out is not true. A tabulation of papers by year might show something interesting.

  77. James Lane
    Posted Dec 21, 2009 at 7:36 AM | Permalink

    Steve, I thought you and Ross were terrific.

    – J

  78. mikep
    Posted Dec 27, 2009 at 12:48 PM | Permalink

    Browsing the infamous emails I came across an exchange about the congressional hearings. Here is a quote from Tom Wigley which suggests, once again, that there was a deliberate attempt to keep sceptical stuff out of journals. Of course they believe it’s bad science – I don’t doubt their sincerity. I do, however, doubt their competence and dislike the arrogance which tries to silence dissent.

    There may, in fact, be an opportunity here. As you know, we suspect that there has been an abuse of the scientific review process at the journal editor level. The method is to choose reviewers who are sympathetic to the anti-greenhouse view. Recent papers in GRL (including the M&M paper) have clearly not been reviewed by appropriate people. We have a strong suspicion that this is the case, but, of course, no proof because we do not know *who* the reviewers of these papers have been. Perhaps now is the time to make this a direct accusation and request (or demand) that this information be made available. In order to properly defend the good science it is essential that the reasons for bad science appearing in the literature be investigated.

    The reference is

    http://www.eastangliaemails.com/emails.php?eid=538&filename=1119957715.txt

14 Trackbacks

  1. By Top Posts — WordPress.com on Dec 16, 2009 at 7:43 PM

    […] Climategatekeeping In the MIT Climategate Forum, Ronald Prinn trotted out what has become one of the standard “move along” […] […]

  2. […] This is a really interesting email string form the CRU emails, via Steve McIntyre: […]

  3. By Climategatekeeping #2 « Climate Audit on Dec 17, 2009 at 2:41 PM

    […] yesterday’s post, I showed that the Climategate letters showed gatekeeping incidents that had nothing to do with […]

  4. […] emails, in which they attempt to keep dissenting scientists from publishing their papers: Climategatekeeping Climate Audit Climategatekeeping #2 Climate Audit And for good measure: How to Manufacture a Climate Consensus […]

  5. […] in case you missed it yesterday, don’t miss “Climategatekeeping #2” (see also Climategatekeeping #1). Warning: if you’re a real scientist who still believes in seeking the truth without fear or […]

  6. […] JeffID has a post relevant to this discussion, as does Steve McIntyre here climategatekeeping and climategatekeeping2. Where these go into technical details, this post is about procedure and […]

  7. By Climategatekeeping: Siberia « Climate Audit on Dec 21, 2009 at 1:21 PM

    […] Siberian temperatures are an interesting case study in CRU gatekeeping. As reported a few days ago here, in an email of Mar 31, 2004, Jones advised Climategate correspondent Michael Mann that he had […]

  8. […] temperatures are an interesting case study in CRU gatekeeping. As reported a few days ago here, in an email of Mar 31, 2004, Jones advised Climategate correspondent Michael Mann that he had […]

  9. […] global warming scientists, who were in a position to twist the peer-review process in their favor, and did so shamelessly. Yet still most media reports desperately minimize Climategate, saying that it doesn’t taint […]

  10. By Climategate, what is going on? - EcoWho on Dec 22, 2009 at 6:20 PM

    […] Siberian temperatures are an interesting case study in CRU gatekeeping. As reported a few days ago here, in an email of Mar 31, 2004, Jones advised Climategate correspondent Michael Mann that he had […]

  11. By A lesson for the alarmists ! on Dec 23, 2009 at 4:06 AM

    […] global warming scientists, who were in a position to twist the peer-review process in their favor, and did so shamelessly. There are of course many others who need to look seriously at their personal internal review […]

  12. […] Jones reviews Mann As noted previously, the Climategate letters and documents show Jones and the Team using the peer review process to […]

  13. […] 21. Mauern-Gate […]

  14. […] 31. Fungus-gate Media immediately falsely attributes spread of deadly fungus to global warming. 32. Gatekeeping-gate Warmist scientists conspire to keep dissenting views out of science journals. 33. GISS Metar-gate […]