PNAS Reviews: Preferential Standards for Kemp (Mann) et al

About 10 days ago, we discussed the PNAS reviews of the recent submission by Richard Lindzen, a member of the National Academy of Sciences with a distinguished publication record.

A few days ago, PNAS published Kemp et al 2011, a submission coauthored by Mann whose lead author is a graduate student [from the University of Pennsylvania]. While, in this case, we do not have access to the reviews, it is possible to draw conclusions about the review process of the Mann article both from the limited information published with the article and from the article itself.

The Kemp article states in its masthead:

Edited* by Anny Cazenave, Center National d’Etudes Spatiales (CNES), Toulouse Cedex 9, France, and approved March 25, 2011 (received for review October 29, 2010)

The asterisk says:

*This Direct Submission article had a prearranged editor.

It was certainly generous of PNAS to give a “prearranged editor” to a submission by a graduate student at the University of Pennsylvania. I’m sure that Lindzen, an actual NAS member, would have appreciated a similar courtesy. It was particularly nice of PNAS to allow the Team to “prearrange” an editor who had been a collaborator with a coauthor within the past 4 years – Cazenave was coauthor with Rahmstorf in Rahmstorf et al (Science 2007), Recent climate observations compared to projections (accepted Jan 25, 2007; published Feb 1, 2007). In contrast, PNAS objected to Lindzen’s submission being reviewed by Chou, who had co-authored with him in 2001.

In the previous discussion of the Lindzen reviews, some defenders of the PNAS reviews argued that the comments were justified.

My own issue with PNAS and other review processes is not that any given criticism cannot be justified, but the hypocrisy of seemingly inconsistent standards for Team critics and Team members. This hypocrisy is nicely illustrated by the contrasting standards for replicability required for Lindzen and for Mann et al. For example, Reviewer 2 of Lindzen’s submission stated:

The description of the procedures is long on philosophical discussion, but rather too spare in describing exactly what was done. Sufficient description is necessary so that another experimenter could reproduce the analysis exactly. I don’t think I could reproduce the analysis based on the description given. For example, exactly how were the intervals chosen? Was there any subjectivity introduced?

If this criticism of Lindzen’s submission is valid, I, of all people, can hardly take issue with it. Lindzen contradicted this criticism in his reply, arguing that the results were replicable. I’m not familiar enough with the data to have my own opinion on who’s right or not. For present purposes, the point is that the PNAS reviewer applied this standard to Lindzen. If PNAS is to be consistent, then the same standard should apply to Kemp et al.

However, statements in the article itself make it clear that the reviewers paid no attention to replication or subjectivity in a Team submission.

Jeff Id almost immediately noticed unsupported statements in the article and SI, in particular mentioning their “discussion” of weights. I urge readers to search both the article and the SI for the word “weight”; excerpts are provided below. The statements highlighted below are not the product of any fine-tooth comb. Rather, they stick out like a sore thumb – to mix metaphors.

The term “weight” is used only once in the main article, in the caption to Figure 4, which states:

Fig 4. Salt-marsh proxy data used in Bayesian update were down-weighted by a factor of 10 and used only after AD 1000.

Given that the article was said to “present new sea-level reconstructions for the past 2100 y based on salt-marsh sedimentary sequences from the US Atlantic coast”, it is puzzling, to say the least, that the actual salt-marsh proxy data in Figure 4 was downweighted by a factor of 10 and not used before AD 1000. These puzzling and shall-we-say “subjective” decisions were not discussed in the article itself.

Unfortunately, as often happens with the Team, instead of discussing and explaining the decision, the SI does little more than re-assert the point. For example, they state:

The result of the Bayesian prediction is somewhat dependent on the choice of weighting for the sea-level proxy data; it is necessary to downweight them (or inflate their assumed variance) to take into account that they are subject to strong serial correlation. An appropriate choice for this factor would be 10. With this choice, we find it is not possible to obtain a reasonable a posteriori result for the entire data period: it is necessary to exclude the sea-level data before AD 1000 from the fit.

Later they say:

Weighting and fit for the early period (AD 500–1100). To fit the sea-level proxy data back to AD 500 required down-weighting of the data and generated an inadequate fit with broad uncertainty bands, suggesting that the data is not compatible. Restricting the Bayesian update to only post-AD 1000 sea-level data markedly improved the fit (Fig. 3D), but increased divergence between sea-level proxy data and sea-level predicted prior to AD 1000. There is independent evidence (21, 22) that the steep sea-level rise predicted from temperatures between AD 500 and 1000 is unphysical, and thus that the sea-level proxy data from North Carolina for this period are more realistic.

Obviously, none of the above is a standard statistical procedure. If PNAS review standards condemn the supposed “subjectivity” of the Lindzen submission, how could PNAS permit an author to simply assert that “an appropriate choice for this [downweighting] factor would be 10”? This far exceeds the alleged “subjectivity” of the Lindzen submission. Why didn’t PNAS reviewers and editors object?
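For readers unfamiliar with what “downweighting” does in this setting, here is a minimal toy sketch – my own illustration, not the authors’ code, which is unavailable. In a conjugate Gaussian update, downweighting observations by a factor k is equivalent to inflating their assumed observation variance by k: the posterior widens and is pulled back toward the prior. This is why the choice k = 10, asserted without justification, matters so much to the result.

```python
import numpy as np

# Toy illustration (not the Kemp et al. method): in a Gaussian setting,
# "down-weighting proxy data by a factor of 10" is equivalent to inflating
# the assumed observation variance by 10, which widens the posterior and
# lets the prior/model dominate over the data.

rng = np.random.default_rng(0)

def posterior_mean_var(obs, obs_var, prior_mean, prior_var):
    """Conjugate Gaussian update for a single unknown level."""
    n = len(obs)
    prec = 1.0 / prior_var + n / obs_var          # posterior precision
    post_var = 1.0 / prec
    post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
    return post_mean, post_var

obs = rng.normal(1.0, 0.2, size=20)               # toy "proxy" observations
for k in (1, 10):                                  # k = down-weighting factor
    m, v = posterior_mean_var(obs, obs_var=0.2**2 * k,
                              prior_mean=0.0, prior_var=1.0)
    print(f"factor {k:2d}: posterior mean {m:.3f}, sd {np.sqrt(v):.3f}")
```

Variance inflation is indeed one textbook response to serial correlation (fewer effectively independent observations), but the inflation factor should be derived from the estimated autocorrelation, not simply asserted.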

And where were the reviewers when the authors perpetuated the known problem of upside-down and contaminated sediments?

Lindzen’s reviewer 2 also emphasized replicability. Again, what was sauce for the goose wasn’t sauce for the gander. PNAS reviewers didn’t pay even the slightest lip-service to ensuring that the underlying data for Kemp et al was available.

For example, the authors say:

We developed transfer functions using a modern dataset of foraminifera (193 samples) from 10 salt marshes in North Carolina, USA (7).

This is a pretty fundamental calibration. Is the data in the SI or at the WDCP archive or at Mann’s website? Nope. They continue:

The transfer functions were applied to foraminiferal assemblages preserved in 1 cm thick samples from two cores of salt-marsh sediment (Sand Point and Tump Point, North Carolina; Fig. 1) to estimate paleomarsh elevation (PME), which is the tidal elevation at which a sample formed with respect to its contemporary sea level (9).

OK, where’s the data for the reconstruction, i.e. the 1-cm foraminiferal assemblages? Nowhere in sight. Where, for that matter, are the estimates of “paleomarsh elevation”? Only illustrated in a scrunched figure, but not archived.
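For illustration only – the paper does not describe its transfer function in enough detail to reproduce it, and the training data is unarchived – here is a toy sketch of weighted-averaging regression, one standard class of microfossil transfer function. All numbers and taxa below are hypothetical. The idea: estimate each taxon’s elevation “optimum” from the modern training set, then estimate paleomarsh elevation for a fossil assemblage as the abundance-weighted mean of those optima.

```python
import numpy as np

# Purely illustrative weighted-averaging (WA) transfer function.
# Modern training set: counts of 3 hypothetical foraminifera taxa at
# known marsh elevations (metres relative to local mean sea level).
elev = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
counts = np.array([[80, 15,  5],
                   [60, 30, 10],
                   [35, 45, 20],
                   [15, 50, 35],
                   [ 5, 40, 55],
                   [ 2, 23, 75]], dtype=float)
rel = counts / counts.sum(axis=1, keepdims=True)   # relative abundances

# Step 1: each taxon's elevation optimum = abundance-weighted mean elevation.
optima = (rel * elev[:, None]).sum(axis=0) / rel.sum(axis=0)

# Step 2: reconstruct elevation for a fossil assemblage as the
# abundance-weighted mean of the taxon optima.
def wa_reconstruct(fossil_counts):
    p = fossil_counts / fossil_counts.sum()
    return (p * optima).sum()

fossil = np.array([10, 48, 42], dtype=float)       # a hypothetical core sample
print(f"estimated paleomarsh elevation: {wa_reconstruct(fossil):.2f} m")
```

Note that every step depends on the modern counts: without the 193-sample training set, no one can check the calibration, which is precisely the replicability point.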

In Lindzen’s case, reviewers said that PNAS standards required the justification of “subjective” decisions and replicability. I endorse these principles. As noted above, I am not familiar enough with the data sets for the Lindzen submission to comment on the validity of these criticisms as applied to that article (and do not have the time or energy at present to run these issues to ground.)

However, I am familiar with paleo datasets and methods. I can say categorically that Kemp et al 2011 contains bizarre “subjective” decisions, that the underlying data is unarchived and that the methodology is not described to the standards required by the Lindzen reviewer 2. (I doubt that this could be done in this case without supplying source code, a practice increasingly accepted by journals.)

Once again, the issue is hypocrisy. What’s sauce for the goose should be sauce for the gander. If PNAS standards require replicability and non-subjectivity of an NAS member, then PNAS is flagrantly hypocritical in not enforcing these standards on a submission by a University of Pennsylvania graduate student. Dare one observe that PNAS’ hypocrisy and its failure to ensure compliance with standards required of Lindzen were perhaps facilitated by the use of a “prearranged” editor who had collaborated with the authors within the 4-year prohibited window?

95 Comments

  1. kim
    Posted Jun 22, 2011 at 1:59 PM | Permalink | Reply

    Dawn of the Living Mann limps across the marsh, unkempt.
    ===============

  2. green thunder
    Posted Jun 22, 2011 at 2:07 PM | Permalink | Reply

PNAS, despite its name, is a vanity publication for third-rate papers. The following says it all:

    PNAS depends, in part, on the payment of publication fees to finance its operations. Articles are accepted or rejected for publication and published solely on the basis of merit. All authors will be assessed the following fees:

    Page charges: $70 per page, from all authors who have funds available for that purpose

    This is vanity publishing. There are plenty of journals that do not charge the author.

    • w.w. wygart
      Posted Jun 22, 2011 at 4:41 PM | Permalink | Reply

      In which case we would expect [or at least hope] to see much more even handed treatment of submissions.

      W^3

    • Rattus Norvegicus
      Posted Jun 23, 2011 at 10:24 AM | Permalink | Reply

      Actually, page charges are quite common in scientific publishing…

    • Clark
      Posted Jun 23, 2011 at 2:17 PM | Permalink | Reply

You are not correct. Nearly every journal these days has page charges. Page charges have evolved as subscriptions to the printed versions have collapsed due to the accessibility of pdfs online.

      Climate science appears to be an outlier, but every field has serious issues with peer review consistency. I have had editors email me to hurry up with my review because they are really eager to publish the article in question. I have seen papers from bigshots fly through with the most cursory review while similar quality papers by others are ripped apart from start to finish.

  3. Posted Jun 22, 2011 at 2:27 PM | Permalink | Reply

    Steve, perhaps you can consider the CV of Dr. Anny Cazenave:

    http://www.academie-sciences.fr/academie/membre/CazenaveA_bio022011_gb.pdf

    “International responsibilities:
    Current Member: IPCC (Intergovernmental Panel on Climate Change) Working Group I and lead author of the 5th
    Assessment Report (Sea Level chapter)”

    Perhaps she can include imagery from this Kemp paper on the cover of the chapter — Hockey Stick redux

    • JEM
      Posted Jun 22, 2011 at 8:30 PM | Permalink | Reply

      You’d think these people really don’t believe anyone’s looking.

    • Geoff Sherrington
      Posted Jun 23, 2011 at 7:16 AM | Permalink | Reply

      Ryan,

I’ve had a couple of interesting emails with Anny Cazenave. She is learned and, so far as I could tell, open to ideas.

  4. Bob Koss
    Posted Jun 22, 2011 at 3:14 PM | Permalink | Reply

    Steve,

The quote below of yours doesn’t match what you wrote following the quote. Either “used only after AD 1000” is incorrect or “not used after AD 1000” must be incorrect.

    Fig 4. Salt-marsh proxy data used in Bayesian update were down-weighted by a factor of 10 and used only after AD 1000.

  5. Marc Hendrickx
    Posted Jun 22, 2011 at 3:16 PM | Permalink | Reply

    PNAS, and other journals using a closed review system, would benefit from the peer review procedure now in place at the journal The Cryosphere. This states:

    The Cryosphere has an innovative two-stage publication process which involves a scientific discussion forum and exploits the full potential of the Internet to:

    foster scientific discussion;
    enhance the effectiveness and transparency of scientific quality assurance;
    enable rapid publication;
    make scientific publications freely accessible.

    In the first stage, papers that pass a rapid access-review by one of the editors are immediately published on the The Cryosphere Discussions (TCD) website. They are then subject to Interactive Public Discussion, during which the referee’s comments (anonymous or attributed), additional short comments by other members of the scientific community (attributed) and the author’s replies are also published in TCD. In the second stage, the peer-review process is completed and, if accepted, the final revised papers are published in TC. To ensure publication precedence for authors, and to provide a lasting record of scientific discussion, TCD and TC are both ISSN-registered, permanently archived and fully citable.
    http://www.the-cryosphere.net/

    One paper currently in the system is this one discussing effects of wind in arctic sea ice decline. http://www.the-cryosphere-discuss.net/5/1311/2011/tcd-5-1311-2011.html

  6. Salamano
    Posted Jun 22, 2011 at 3:27 PM | Permalink | Reply

Perhaps a slight re-word to capture your disapproval of the process…

    “What SHOULDN’T be sauce for the goose SHOULDN’T be sauce for the gander”

  7. pd
    Posted Jun 22, 2011 at 3:30 PM | Permalink | Reply

    Steve,

    that the actual salt-marsh proxy data in Figure 4 was downweighted by a factor of 10 and not used after AD1000.

    I think that you mean before

    pd

    • TerryS
      Posted Jun 22, 2011 at 5:52 PM | Permalink | Reply

      The meaning of before and after depend upon the direction of travel.
      If you are travelling forward in time then “before” means before AD1000 and not after and if you a travelling backward then “after” means after AD1000 and not before.

      Hope that clears things up!

  8. Posted Jun 22, 2011 at 3:30 PM | Permalink | Reply

    how much more rubbish (climate or otherwise) is published by PNAS every year?

  9. R.S.Brown
    Posted Jun 22, 2011 at 3:49 PM | Permalink | Reply

    Steve,

When you check out Mike Mann’s curriculum vitae, on down the page you’ll see the doctoral and graduate students Mann takes credit for mentoring/advising.

So far, Kemp isn’t listed as being under Mann’s beneficent wing as either a post-doctoral researcher or grad student.

    See:

    http://www.meteo.psu.edu/~mann/Mann/cv/cv_pdf.pdf

    As I’ve said at other times and other places, I believe we’re seeing a series of publications and presentations designed to bolster the on-paper impact and
    importance of Mann’s seminal works on temperature proxies and reconstructions
    while at the University of Massachusetts (1997 – 1999) and at the University of
    Virginia (1999 – 2005).

The longer the list of publications with “M. Mann” as lead author or coauthor, the stronger the argument “Team” members and supporters can point to and say, “See why he’s being persecuted by the radical Right and their big buck corporate chums.”

They can point to the early Mann databases and statistical interpretations and then to the huge research edifice built on that early work and exclaim, “See how important this work is to our attempts to save mankind and the planet!”

Such studies, no matter how riddled with problems they might be, get published, and serve as focal points to galvanize letter writing, petitions, and individual and group lobbying efforts to get organizations like ATI, congressional committee members, and state Attorneys General off Mann’s and other Team members’ backs.

The emperor and his court are supposed to be unquestioned and unquestionable.

    Right ?

    • Michael Jankowski
      Posted Jun 22, 2011 at 8:01 PM | Permalink | Reply

      Mann is at Penn St, famous for Joe Paterno and football in the Nittany Mtns. Kemp is at UPenn (which, when I went there, was often called “Not Penn St”), an Ivy League school in Philly and hundreds of miles away.

      Mann wouldn’t be part of any advisory committee for Kemp.

      • Gilbert K. Arnold
        Posted Jun 23, 2011 at 11:39 AM | Permalink | Reply

        He could have been an ad ho member of his committee. This sometimes happens when a Professor from another university has expertise related to your thesis and has advised you on it. Although in this case he is just simply one of the co-authors of the paper.

        • DCC
          Posted Jun 24, 2011 at 8:05 AM | Permalink

          “Ad ho?” Ordinarily one mustn’t mix English slang into Latin phrases, but in this case, it does seem appropriate! :-)

  10. Posted Jun 22, 2011 at 4:04 PM | Permalink | Reply

    The subjective weighting of data before 1000 AD is all the more egregious when you consider examples of Roman period sea levels higher than now.

    http://www.dailymail.co.uk/sciencetech/article-1066712/Uncovered-lost-beach-Romans-got-toehold-Britain.html

    Considering the fact that southern England has been sinking since the counterbalancing ice mass over Scotland melted 11,000 years ago, the discovery of a Roman harbour wall next to a beach two miles inland should be enough to tell us that Kemp et al’s paper needs filing in the round cabinet.

  11. Daniel
    Posted Jun 22, 2011 at 4:26 PM | Permalink | Reply

I asked Anny Cazenave her point of view. Wait’n see!

  12. Posted Jun 22, 2011 at 4:47 PM | Permalink | Reply

    yup.

    Those sections you quote from the SI jumped out to me on day 1.

    But i’m only an english major, not no climut guy.

    With Lindzen they made a fuss about the qualifications of the reviewer. I can spot the issues in Lindzen. I can spot the issues in Kemp. took me 30 seconds. turn to the SI, quick scan, boom!

    Reviewers are not selected by their ability to spot issues. Any college edumucated dolt can spot the issues. the real issue is will the reviewer let the issue pass or raise a stink.

  13. KnR
    Posted Jun 22, 2011 at 5:13 PM | Permalink | Reply

    Given that the reviewers quality is based on their acceptability to PNAS , not on their academic standing . Its not really a surprise to see this happening.
    In system design to ensure people can starch each other backs , its always going to be the best back starches that receive the most approval.

    • DCC
      Posted Jun 24, 2011 at 8:08 AM | Permalink | Reply

      So they go around stiffening each other’s resolve?

  14. David S
    Posted Jun 22, 2011 at 5:55 PM | Permalink | Reply

    Perhaps it’s time to drop the A in PNAS, as calling it PNS might be more accurate. Wonder what Jerry Ravetz thinks.

  15. Posted Jun 22, 2011 at 6:05 PM | Permalink | Reply

It is stuff like this paper which makes blogging hard to quit. You read it and can’t understand what the heck they could possibly be talking about. No data, no code, no truly supportive equations; the methods section of the original paper is incredibly short, with no real description. Obvious ad hoc number mashing and a powerful result.

    A level 5 sudoku with no answer.

    • stan
      Posted Jun 23, 2011 at 9:02 AM | Permalink | Reply

      Jeff Id,

      “A level 5 sudoku with no answer”

      Good one! Short, sweet and so apropos.

      A commenter at Judy Curry’s wanted to know how two locations in North Carolina were representative of the whole globe while many locations in Europe were only evidence of a regional effect. Apparently, it’s easy to understand — if you start out already knowing the answer and work backwards. Like a level 5 sudoku.

    • Skiphil
      Posted Mar 4, 2013 at 9:09 PM | Permalink | Reply

      Re: Jeff Id (Jun 22 18:05),

      Heads up on a developing issue with serious potential implications for PNAS standards/reviewing and also IPCC issues….. Jim Bouldin is quite incensed at what he regards as a seriously flawed PNAS review process on a paper which was not rejected until just after the IPCC deadline. Most details are still to come, but it is interesting that RC teammate Jim Bouldin is sufficiently disillusioned with PNAS reviewing standards and process on this paper to go public in this way.

      Jim Bouldin on a PNAS rejection process which he regards as highly biased and irresponsible

      Third, the rejection without revision occurred just after the IPCC’s August 1 deadline for initial submission of any manuscripts that can be discussed and cited in the upcoming “AR5″ climate assessment report. Therefore, the AR5 now does not have to consider the issues I raised, and one can be quite confident that the reviewers and handling editor would have been well aware of this fact, since the IPCC Assessment Report is by far the most important climate document in the world. I can’t be sure that this was a reason for their outright rejection, but I’d be more than willing to bet on it. Fourth, I know who one of the reviewers was (because they signed their review), the organization this individual works for, and some background on various activities conducted by members of that organization over the last decade or more. Nor am I alone in that knowledge.

      • pottereaton
        Posted Mar 4, 2013 at 10:45 PM | Permalink | Reply

        Those are some very serious charges. Of course my first thought was, “I wonder what Steve would think about Bouldin’s paper?”

      • schatzenberge
        Posted Mar 6, 2013 at 9:15 AM | Permalink | Reply

        Thanks for the heads-up Skiphil. Jim Bouldin has a new post up today…

        [quote]
        I would like direct answers from some dendroclimatologists to the following absolutely critical questions to the legitimacy of the science, on issues which are almost entirely unrelated to the issues I’ve raised in my paper:

        1) Is Loehle* (2009) fundamentally correct in his description of the potentially very serious problems caused by unimodal responses of ring size to temperature. If not, why not?
        2) On what mathematical basis, if any, can a modeled, linear (straight line) relationship between climate driver and ring response be used to accurately predict a strongly non-linear relationship?
        3) On what basis does one assert that the climatic states experienced during the calibration period are fully representative of the set of states experienced during the pre-calibration (“reconstruction”) period, and that the tree sizes/ages sampled during the calibration period are also representative of the ages/sizes of the pre-calibration period.

        Somebody, anybody, please answer those questions, directly.
[/quote]

  16. MarkB
    Posted Jun 22, 2011 at 6:15 PM | Permalink | Reply

    Putting aside the fact that I agree with all the criticisms of PNAS on this site…

    The fact is that sometimes you get easy reviewers and sometimes you get tough buggers. That’s true at any journal. Some will wave you through under ‘there but for the grace of God’ logic, and some will nitpick you to death so that they can tell their faculty buddies how rigorous they are. So as far as that goes, I can imagine finding hundreds of similar examples in other journals in other subjects.

    Like I said, that was putting aside the dirty dealings of this particular case. But the fact is that science publishing is like the proverbial sausage-making – the more you see of it, the worse it gets. Just don’t be naive about finding inconsistencies in the treatment of different papers.

    • Jeremy Harvey
      Posted Jun 23, 2011 at 5:45 AM | Permalink | Reply

      MarkB: This is of course correct. Apart from other things, referees are very busy, and often only invest an hour or so in refereeing a given manuscript. Such quick work is bound to rely partly on gut instincts, and also to be variable from paper to paper. Given that variability, you cannot rigorously conclude anything from the different path followed by the Choi & Lindzen and Kemp et al. manuscripts. Also, as Nick Stokes pointed out, they went into PNAS through different channels.

      People in the climate field like to talk about something that has been observed, A, being ‘consistent with’ something else, B, and then use the observation of A as evidence for B, albeit short of being proof. Use that style of reasoning here. B is the claim that papers which adhere to the consensus get an easier ride through peer review than those that don’t. Given all the examples discussed on this blog and elsewhere over the years, it seems very obvious to me that this does happen. Some people, amazingly, do not accept that, so new evidence is welcome. The new observation A is the path the two papers under consideration took at PNAS. A proves nothing but is sure as hell ‘consistent with’ B.

      • Posted Jun 23, 2011 at 6:26 AM | Permalink | Reply

Yes, but the problem is, what is B. There have, for example, been papers circulating which say that back radiation of IR can’t cause warming, hence no greenhouse effect. Gerlich and Tscheuschner is the most prominent. They have a lot of trouble getting published (though G&T managed). They don’t adhere to the consensus. But are their troubles caused by being non-consensus, or by being wrong?

        • Chris E
          Posted Jun 26, 2011 at 4:20 AM | Permalink

          The issue of whether a paper is ‘wrong’ shouldn’t be relevant to the review at all. Are the reviewers and editors Gods, repositories of all knowledge and wisdom? Suggesting that a paper is ‘wrong’ simply means that you don’t agree with the conclusions or implications of the work.
          When I do reviews I look at four questions:

          Is the paper clearly written, with adequate supporting detail?
          Are the experimental design, assumptions and statistical methodology appropriate?
          Are inconsistencies with previous research discussed and explained?
          Are the conclusions justified by the body of the paper?

          A paper could fail all of these tests and still be ‘right’ in its conclusions, or it could pass them all but still be ‘wrong’. That is for history to decide, not for the reviewers or editors.

  17. Posted Jun 22, 2011 at 6:17 PM | Permalink | Reply

Andrew Kemp is affiliated with the University of Pennsylvania, not Penn State. The only graduate advisor he lists who is also a co-author is Dr. Ben Horton. Donnelly is a scientist with tenure at Woods Hole.

  18. Posted Jun 22, 2011 at 7:00 PM | Permalink | Reply

    Gives a whole new meaning to ‘publication bias’ but consistent with past practices by the hockey team.
    ‘Pal review’!

  19. Keith W.
    Posted Jun 22, 2011 at 9:45 PM | Permalink | Reply

    I will agree, there is no evidence of Kemp being a graduate student of Mike Mann. But it is interesting that he has never published a paper before this one, per his CV.

    http://climate.yale.edu/people/andrew-kemp

    The reason this is interesting is that when you look at Ben Horton’s resume, you see several papers with Kemp listed as lead author.

    http://www.sas.upenn.edu/earth/pdf/BPHCV-short.pdf

Donnelly and Horton have worked together before, and Horton was Kemp’s graduate adviser and post-doctoral sponsor. Vermeer & Rahmstorf have co-published once before, in PNAS as well, although there has been no previous linking work between them and Horton, Kemp & Donnelly. It is Vermeer who has the only connection to Michael Mann, through RealClimate.

So how did these unconnected individuals wind up together? Why bring in Michael Mann, who has no connection to or expertise in the subject at hand? The others have all published on this subject before, so they did not need his particular skills to create a new paper. They did not need him to get published in PNAS, and none of Kemp, Donnelly or Horton’s previous work has been about climate-related historical sea bed changes.

  20. Posted Jun 22, 2011 at 10:07 PM | Permalink | Reply

    Siltdown Mann

  21. Rattus Norvegicus
    Posted Jun 22, 2011 at 10:33 PM | Permalink | Reply

    Actually, you might want to check references 7 and 12 which appear to be the papers which develop the data used in the recon. Kemp has published before (with Horton).

    Steve: Show me the data. An actual FTP location.

  22. EdeF
    Posted Jun 22, 2011 at 10:59 PM | Permalink | Reply

    Is this paper an attempt by the team to get more in line with known historical events?
    I notice that the Medieval Climate Anomaly shows up prominently along with his side-kick the Little Ice Age. Hmm, I didn’t think those terms existed, at least not according to
    the ol’ Hockey Stick of yore. The silt plots show a very prominent MWP and something
    resembling a LIA. How close are the authors to the team? Mann et al are prominent due to
    the use of their temperature reconstructions. Why the use of only two sites from N. Carolina? Is this a large enough sampling? I notice that the unprecedented rise in sea
    level of a few mm/yr starts sometime between the 1860s and 1890s, but this is well before the rapid population and industrial growth of the 20th century. Lastly, to me the editor
    seems to be very well qualified.

    • Posted Jun 23, 2011 at 1:36 AM | Permalink | Reply

      Re: EdeF (Jun 22 22:59),
no, quite to the contrary. The paper is essentially arguing that the small “MWP” existing in Mann et al (2008) is incorrect, and that the curve should be lowered there by about 0.2 degrees. Furthermore, the sea level reconstruction and the sea level – temperature model imply that there was no RWP, either.

      BTW, Vermeer has an “interesting” collection of cartoons:
      http://users.tkk.fi/mvermeer/cartoons.html

  23. Ron Cram
    Posted Jun 22, 2011 at 11:02 PM | Permalink | Reply

    The IPCC took a recent PhD grad with an unseasoned (and fatally flawed) paper and selected one graph to be the icon of catastrophic global warming. As a result, Scientific American named Michael Mann one of fifty “leading visionaries in science and technology.”

    Kemp is the (new) Mann!

  24. Adrian O
    Posted Jun 22, 2011 at 11:06 PM | Permalink | Reply

    Unfortunately this shows the price of US science.

    For a few billion/year pumped into climate science,

    the peer review process, as shown here, goes down the drain, and

    large swaths of previously prestigious journals, universities and institutions are changed into junk…

    Sad.

  25. Posted Jun 22, 2011 at 11:10 PM | Permalink | Reply

    I think you have the PNAS paper status completely wrong. Lindzen submitted under the privileged path for members, whereby you choose an editor to submit to, and send in your chosen referee reports.

    Kemp et al took the plebeian route of direct submission. There the deal, available to everyone, is that
    “Authors must recommend three appropriate Editorial Board members, three NAS members who are expert in the paper’s scientific area, and five qualified reviewers. The Board may choose someone who is or is not on that list or may reject the paper without further review. “

    But there is the following further provision:
    “Prior to submission to PNAS, an author may ask an NAS member to oversee the review process of a Direct Submission. Prearranged editors should only be used when an article falls into an area without broad representation in the Academy, or for research that may be considered counter to a prevailing view or too far ahead of its time to receive a fair hearing, and in which the member is expert. If the NAS member agrees, the author should coordinate submission to ensure that the member is available, and should alert the member that he or she will be contacted by the PNAS Office within 48 hours of submission to confirm his or her willingness to serve as a prearranged editor and to comment on the importance of the work. Authors submitting Feature Articles are not permitted to use a prearranged editor.”

    It looks like that is what happened. No comparison to the member’s privilege path chosen by Lindzen. And no requirements about non-coauthorship etc.

    • Steve McIntyre
      Posted Jun 22, 2011 at 11:44 PM | Permalink | Reply

      for research that may be considered counter to a prevailing view

      You mean – like Lindzen’s paper.

      Nick, do you agree that the Kemp paper contains “subjective” methodological decisions of the type opposed by Lindzen’s Reviewer 2? And that it does not meet the replicability standard required by Lindzen’s Reviewer 2?

      • Posted Jun 22, 2011 at 11:53 PM | Permalink | Reply

        Re: Steve McIntyre (Jun 22 23:44),
        Steve,
        I’m trying to stick to the topic of your post, which was preferential standards. I cannot see any way in which Kemp was “preferred”. They took the path available to everyone, unlike Lindzen. As to the grounds for granting the assigned editor, there are a number available – I do not know which was proposed, or what the grounds were. Maybe it is ahead of its time.

        I also haven’t read the paper in detail yet.

        • Ed Snack
          Posted Jun 23, 2011 at 3:28 AM | Permalink

          Yep Nick, the Board can agree with the choice (like they did for Kemp), or they can refuse and choose someone else patently and obviously hostile (like they did with Lindzen), gosh, I wonder why the difference. And you’re right, we don’t know which applied, but given the Lindzen case it can’t be “an area with broad representation”, because they found a few reviewers for Lindzen’s paper quick enough and the topics are related. Counter to a prevailing view? Nope, it is counter to the science and the evidence, but not counter to “the essential narrative”, so it ain’t that. So it must be “too far ahead of its time”, which figures. It probably couldn’t get a “fair hearing” if properly reviewed and they knew that, so a patsy had to be agreed. Simple, really.

          But then you’d defend any contortions to put this sort of paper into the “peer reviewed literature”, and support anything to keep contrary reviews out. The climategate emails provided a preview – recall how Jones was determined to keep contrary views out by “redefining peer review if necessary”? Well, it sure looks like the corruption continues.

        • Posted Jun 23, 2011 at 4:15 AM | Permalink

          A very different situation, Ed. For one thing, Dr Cazenave is actually a PNAS Editor herself. And the requirement for nomination is that the editor should be expert in the area, which she clearly is. Kemp et al did not get to choose the referees.

          Lindzen’s referee Happer was none of those things.

        • Posted Jun 23, 2011 at 4:21 AM | Permalink

          Correction, Happer is a PNAS editor. But he was certainly not expert in the area.

        • Posted Jun 23, 2011 at 10:30 AM | Permalink

          Re: Nick Stokes (Jun 23 04:21), Nick, he didn’t need to be an expert to find the rather obvious problems in Lindzen. Just like one doesn’t need to be an expert to find the gaping holes in Kemp.
          45 seconds! That is how long it took me to find the issue in the SI.

        • Daniel
          Posted Jun 23, 2011 at 3:08 PM | Permalink

          Are you sure Dr Cazenave is an expert in the area considered? I mean, isn’t she rather focused on climate data measurement by satellite? Is she involved somehow in historic climate reconstructions?

        • Gerald Machnee
          Posted Jun 23, 2011 at 12:18 PM | Permalink

          Nick
          So start reading. You will see that the only qualification the reviewers may have had is the few letters after their names.

  26. Ron Cram
    Posted Jun 22, 2011 at 11:29 PM | Permalink | Reply

    Nick,

    Do you really think this new Hockey Stick is “research that may be considered counter to a prevailing view” or fits any of the other requirements? I don’t. This paper got privileged status precisely because it supports the IPCC view.

    • Posted Jun 22, 2011 at 11:44 PM | Permalink | Reply

      Re: Ron Cram (Jun 22 23:29),
      Ron, I’ve no wish to relitigate the journal’s decision on that. All I’m pointing out is that Lindzen took the path of privilege and came unstuck – Kemp et al took the path available to everyone, and stuck to the rules.

      I’d note too that the paper was five months in review. Not trivial.

      • Ron Cram
        Posted Jun 22, 2011 at 11:50 PM | Permalink | Reply

        Nick,
        There is no legal proceeding to litigate or relitigate. You can make your point and I can make mine. The point is the journal gave special privilege to Kemp’s paper. It did not meet the requirement of going against a prevailing view, just the opposite. The paper goes against science in order to support the prevailing view. That’s the point.

        • Posted Jun 23, 2011 at 12:02 AM | Permalink

          By relitigate I mean engage in pointless legalistic argument re a decision made by people who knew. But the Journal lists three bases for the assigned editor provision:
          “when an article falls into an area without broad representation in the Academy, or for research that may be considered counter to a prevailing view or too far ahead of its time to receive a fair hearing,”
          We simply don’t know which of these was thought to apply and why.

        • Posted Jun 23, 2011 at 7:55 AM | Permalink

          Nick,
          None of the conditions apply, as is obvious from your unwillingness to attempt a defense of the indefensible. We know a counter condition applies in that the paper supports a widely held view which is crumbling and needs support. This paper got special privilege. It is a poor excuse for a paper that never should have been published, but they got it in under the wire for AR5.

        • Venter
          Posted Jun 23, 2011 at 5:37 AM | Permalink

          Ron, Nick will only appear to put a spin on every dishonest activity and point out minute nitpicking errors in what somebody posted about the whole issue. He will never comment on the major offences by the team or on the main crux of the article. That is his style. He’s the team’s spinmeister, existing to obfuscate, deflect, be obtuse and try every trick in the book to defend any misdoing by the team.

  27. Ron Cram
    Posted Jun 22, 2011 at 11:43 PM | Permalink | Reply

    Judith Curry is pointing readers to Jeff Masters’ “must read” post on sea level variation. See http://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=1240

    After reading this, the similarity to Michael Mann’s Hockey Stick becomes even more clear. They are attempting to repeat “photoshopping the historical record.” It seems the paper is going against the prevailing view of the history of sea levels, but it is doing so in order to prop up a crumbling hypothesis about the dangers of global warming.

  28. Posted Jun 23, 2011 at 1:40 AM | Permalink | Reply

    OT: the RghcnV3 package is up on CRAN.

  29. Jeremy
    Posted Jun 23, 2011 at 8:41 AM | Permalink | Reply

    Steve,

    Given that the article was said to “present new sea-level reconstructions for the past 2100 y based
    on salt-marsh sedimentary sequences from the US Atlantic coast”, it is puzzling, to say the least, that the actual salt-marsh proxy data in Figure 4 was downweighted by a factor of 10 and not used after AD1000.

    I think you mean “was only used”, not “was not used”.

  30. Jeremy
    Posted Jun 23, 2011 at 8:48 AM | Permalink | Reply

    There is independent evidence (21, 22) that the steep sea-level rise predicted from temperatures between AD 500 and 1000 is unphysical

    ^^ So sea level rises with temperature, except when it doesn’t.

    • Posted Jun 23, 2011 at 8:53 AM | Permalink | Reply

      Jeremy,
      Yeah, I noticed the same thing. I am totally shocked this paper got published.

  31. Alexander K
    Posted Jun 23, 2011 at 9:48 AM | Permalink | Reply

    David S – There are so many cock-ups in this smelly saga that after dropping the ‘a’, insert an ‘e’ to render the name appropriate. Or is that just too OTT?

  32. Kenneth Fritsch
    Posted Jun 23, 2011 at 11:11 AM | Permalink | Reply

    These discussions would appear to me to get a little fuzzy and vague when we allow the discussion to have a sidelight of determining whether the PNAS is being fair by comparing it with a paper that was not published. I have read through the sea level paper in question here and I have many of the same questions concerning the lack of clarity and basic evidence as noted by SteveM, Jeff ID and others here.

    Now look at how much bandwidth will be wasted in a lawyerly discussion with Nick Stokes on the much less important issues (to me anyway) of selecting referees. While the paper’s contents and conclusions are of major importance, the argument about the quality of the peer review could be made for this paper without reference to Lindzen’s submission.

    I would suspect that this paper will be given much space in the MSM and at advocacy blogs like RC because its message can be used to emphasize concerns about sea level rise in the future, and even have a sidelight effect of questioning the amount of warming during the Medieval Warm Period (MWP) as reconstructed in Mann et al. (2008). Surely Mann would be less likely to attempt to defend his methods against criticism of too warm a MWP than he would a too cool MWP.

    When all is said and done we have a paper attempting to show a global sea level reconstruction from a single location that falls off the rails going back in time when compared with a temperature reconstruction purported to show a history of global temperatures. Not considered in the paper but an alternative explanation is that both reconstructions are wrong. This is a paper that begs for more detailed reading and questioning.

  33. Gilbert K. Arnold
    Posted Jun 23, 2011 at 11:42 AM | Permalink | Reply

    ad hoc (hopefully for the last time on the correction)

  34. RayG
    Posted Jun 23, 2011 at 12:05 PM | Permalink | Reply

    Willis Eschenbach has done an interesting job of digitizing the Kemp et al 2011 data and also looking at the marine environment at the two sites used in Kemp, and has posted it at WUWT. He asserts, with what seems to be solid evidence, that the two sites do not show the same rise and that the nature of the sites is such that currents, deposition, etc. render them inappropriate. Recommended reading.

    wattsupwiththat.com/2011/06/23/reduce-your-co2-footprint-by-recycling-past-errors/#more-42111

  35. Posted Jun 23, 2011 at 12:24 PM | Permalink | Reply

    What I find missing in just about every paper that deals with proxies for past climate, and this paper is no exception, is a clear-cut display of each proxy at each site with the measurement (e.g. tree ring width or foraminifera in salt marshes or whatever) plotted on the same axis as the observation (local temperature or measured sea level or whatever), showing exactly the comparison between measurement and proxy during the calibration period for each and every proxy used in the reconstruction.

    As long as reviewers for journals do not demand this data, those who reconstruct past climates from proxies can pretty much publish almost anything they want, and the reviewers have little or no penetration into the ultimate foundation of the reconstruction. Only Steve McIntyre, acting on his own, has delved into the specific details of the proxies used in published papers. The journal reviewers do not do this. Hence, any reconstruction that supports the alarmist theology is accepted for publication.

  36. Jonas
    Posted Jun 23, 2011 at 12:25 PM | Permalink | Reply

    The Economist has taken up your IPCC findings:
    http://www.economist.com/node/18866905

  37. Eric
    Posted Jun 23, 2011 at 12:26 PM | Permalink | Reply

    It seems to me that Nick’s point about Lindzen’s privileged path vs. Kemp’s plebeian path only serves the argument of the skeptics.

    If I may grossly oversimplify to make my point.
    Even Lindzen’s NAS status, privileged path to PNAS publication, and publication record couldn’t save his off-message paper, while dilettante Kemp gets a free pass despite his plebeian path.

    Nick, you are just emphasizing the difference in treatment of papers based on their adherence to the desired narrative.

  38. Daniel
    Posted Jun 23, 2011 at 3:37 PM | Permalink | Reply

    I got feedback from editor Dr Cazenave; in substance, her answer to my suggestion of providing me with her point of view on the CA post sounds more or less like “McIntyre is a well-known climate sceptic; the paper went through three reviews and three rewritings; it was worth being published”.

    • KnR
      Posted Jun 24, 2011 at 3:40 AM | Permalink | Reply

      And yet these three reviews failed to spot the problems.

  39. Kenneth Fritsch
    Posted Jun 23, 2011 at 8:16 PM | Permalink | Reply

    From RC we have the following posts/observations about Fig 3, which, when I viewed the long-term sea level reconstructions portrayed, would appear to make the North Carolina sea level reconstruction an outlier.

    “Nice read, but Stefan can you comment on the criticism appearing in spiegel.de today (http://www.spiegel.de/wissenschaft/natur/0,1518,769424,00.html) regarding that only this US site matches the data of the past ~120yrs? Their main claim is neglecting other non matching site data will make this research not looking very solid…”

    “[Response: spiegel.de got a number of things badly wrong; I responded to those this morning at KlimaLounge. Regarding your specific question: our Fig. 3 tries to give an overview over previous reconstructions, regardless of their quality. You will see that a couple of them are "all over the place" (i.e. the ones from Israel and the Cook Islands). They do not only mismatch the other data sets but are implausible in themselves, in requiring huge sea level jumps over short time periods. Apart from these two, the only one that does not match the new data, within stated uncertainties, is the one from Iceland. This warrants further investigation whether this is a local effect.

    "(p.s. This response applies to the full time series, not just the past 120 years. For the past 120 years the proxy reconstruction nicely matches the Jevrejeva et al. (2006) and Church&White (2006) global tide gauge reconstructions - see above and Fig. 6 of paper.) -Stefan]”

    Even the MA reconstruction, which the authors claim is in agreement with their NC reconstruction, is seriously different. Most reconstructions that go further back in time show sea levels very much in the range of the current levels – unlike the NC reconstruction. Surely, if these comparisons were available to the reviewers, they must have raised a flag for at least obtaining more due diligence.

    The NC sea level reconstruction is very much within the realm of the original MBH paper that gave us the hockey stick.

  40. theduke
    Posted Jun 23, 2011 at 9:24 PM | Permalink | Reply

    From realclimate:

    31
    grypo says:
    23 Jun 2011 at 5:12 PM

    It appears McIntyre is continuing with the teaspoons, CM. This time he has a powerpoint presentation that may or may not have anything to do with the new paper (I dunno), which may or may not be comparable to the sea level numbers, and an assumption about downweighing (who knows!) He went far enough to know when the pdf might have been created. He’s a real sleuth. I also noticed the PNAS time limit for referees mistake. He actually relied on “free-market energy blog” for that information which came from a letter sent by the PNAS to member in 2008. Oops! He obviously doesn’t care about accuracy anymore, or even the appearance of having it. U Penn, Penn St? So Mann’s graduate student got preferential treatment from the PNAS by being ‘given’ a “prearranged editor” within the “prohibited” window of CoI rules. Yup. Too bad none of that is at all accurate.

    [Response: Hmmm. Given that no graduate student of mine, to my knowledge, had any involvement with this paper at all, its hard to see how said imagined graduate student could have received any hypothetical 'preferential treatment' let alone any treatment at all. Very curious indeed. -Mike]

    • KnR
      Posted Jun 24, 2011 at 3:44 AM | Permalink | Reply

      I fail to see how this is anything more than a straw man to cover up the misuse of what Mann knows is poor data, designed, it would seem, to produce a result which was ‘needed’ rather than a result which was valid.

  41. Russell
    Posted Jun 24, 2011 at 12:52 AM | Permalink | Reply

    “prearranged editor “?

    So what?

    PNAS requires everybody who submits to nominate a list of knowledgeable potential reviewers for the journal to choose among, and you also get to list those you wish excluded on conflict-of-interest grounds.

    Steve – but not in Lindzen’s case.

    • Salamano
      Posted Jun 24, 2011 at 4:57 AM | Permalink | Reply

      Lindzen’s recent experience was that the editor took his list of ‘knowledgeable potential reviewers’ and disqualified all of them, and then drew instead from a group that presumably would be ‘excluded on conflict of interest grounds’.

      “Pre-arranged” editing suggests to me that what the editor is going to do in the above scenario is already known to the author as a condition for publishing/authorship. It must be nice to have a guarantee of sorts as to who is (and who is not) going to be in the path of your paper.

      Obviously, someone can just submit somewhere else (as Lindzen ended up being forced to do), but that only continues (enables? legitimizes?) the runaway citation/impact train.

      • Posted Jun 24, 2011 at 6:31 AM | Permalink | Reply

        You haven’t been paying attention. Lindzen submitted under a special procedure reserved for NAS members. There you are allowed to submit not only the reviewers, but their reviews. But the Journal does not have to accept them.

        Russell is talking about the “direct submission” procedure available to general authors and used by Kemp et al. You are asked for a list of suggested reviewers, but the journal selects from those and adds some of its own. The journal deals directly with the reviewers.

        • kim
          Posted Jun 24, 2011 at 6:41 AM | Permalink

          Nick peers at review;
          Mirror, mirror, on the wall.
          Lookin’ back at me.
          ==========

        • Salamano
          Posted Jun 24, 2011 at 8:56 AM | Permalink

          “You are asked for a list of suggested reviewers, but the journal selects from those and adds some of its own. The journal deals directly with the reviewers.”

          …Except if they are operating under an exclusive ‘pre-arranged’ agreement…that would serve to change things now, wouldn’t it?

        • Rattus Norvegicus
          Posted Jun 24, 2011 at 2:24 PM | Permalink

          I think the conditions under which a pre-arranged editor can be used are listed at the PNAS site. They seem to be:

          1) Research too cutting edge to get a knowledgeable editor
          2) Research which goes against mainstream views
          3) Research which involves an area which does not have wide representation on the editorial membership

          I suspect that Anny Cazenave was chosen because of her expertise in sea level rise. And apparently the paper did not have an easy time, since it had to go through 3 rounds of review before all objections were answered to her satisfaction. Lindzen just gave up after one round.

        • Salamano
          Posted Jun 24, 2011 at 3:04 PM | Permalink

          You would think that under those requirements, Lindzen could try for a ‘pre-arranged editor’ (case #2) with a large number of his submissions (including his more recent one). I could have sworn I read somewhere that PNAS rejects very few submissions.

        • Posted Jun 24, 2011 at 4:12 PM | Permalink

          Again, different tracks. Yes, most papers submitted as members’ papers (Lindzen’s choice) are successful.

          “Pre-arranged editor” is a feature of the “direct submission track” used by Kemp et al. Here is a PNAS article advising authors on navigating the process. They say 2/3 of papers are rejected. If you get to actual review, the odds improve to 50%.

        • Tom Gray
          Posted Jun 24, 2011 at 9:05 PM | Permalink

          I would think that for scientists of the calibre and achievements required to be appointed to the NAS, the rejection rate of papers would be rather low.

        • Posted Jun 25, 2011 at 5:17 AM | Permalink

          Nick Stokes wrote:

          “Lindzen submitted under a special procedure reserved for NAS members . . .”

          Clearly, as the self-appointed apologist extraordinaire for all things pro-AGW alarmist, your evident goal is to spin, explain-away, distort, twist or otherwise weaken any and all criticisms of anything alarmists say or do. As a result, I usually ignore your comments.

          In the case at hand, however, your efforts amount to saying that since the treatment of Lindzen’s paper was “within the rules”, the criticism of said treatment as a double standard, compared with how other, pro-AGW papers are treated, is somehow invalid. A double standard is quite fine with you as long as the institution’s stated policies and procedures allow for such a thing.

          And so rather than finding fault with a stated policy/procedure that permits such an egregious breach of logic as the use of an unjustified double standard, you instead accept the policy as a given, a not-to-be-questioned absolute that PNAS has a right to follow — and that you have a right to invoke as a means of deflecting criticism.

          The mystery to me is precisely who — other than yourself — do you expect to be persuaded by such a lame rationalization of what has happened in this case?

  42. Rob
    Posted Jun 24, 2011 at 1:19 AM | Permalink | Reply

    Steve said: “If this criticism of Lindzen’s submission is valid, I, of all people, can hardly take issue with it. Lindzen contradicted this criticism in his reply, arguing that the results were replicable. I’m not familiar enough with the data to have my own opinion on who’s right or not.”

    Since you are not familiar enough with the data as presented and represented by Lindzen and Choi, let me give you a brief summary of the troubled history of this data:

    For one thing, L&C 2009 contained a blatant (high-school level) scientific mistake in the calculation of the feedback formula, which was exposed at Lubos Motl’s blog at around the same time that the results of that flawed L&C paper were being promoted on Fox News as being “the end of the (AGW) scam”.

    That same flaw was scientifically exposed by Trenberth 2010 and 2 other papers, along with a spectrum of other problems with Lindzen and Choi 2009, and (to his credit) admitted by Lindzen.

    Lindzen and Choi 2011 was supposed to be a “corrected” form of L&C 2009, but it obtained the same end results as L&C 2009, which are still inconsistent with multiple other scientific studies of the same satellite data.

    In fact, this inconsistency was the primary reason (at least for reviewers 3 and 4) to reject the paper at PNAS.

    So why does Lindzen still find negative feedback in the same satellite data where other scientists find opposing results? Well, one reason may be that the Lindzen and Choi 2011 “lead and lag” method has a negative feedback bias. It will find negative feedback in a system where SST and FLUX are completely uncorrelated.

    That shows that Lindzen and Choi 2011 did not just arrive at biased results, but also that they did not do a statistical test of their (“lead and lag”) method.

    I’m a bit surprised that you, Steve, as self-proclaimed international climate auditor and also as a statistician, claim to be “not familiar enough” with the data and methods as presented in Lindzen’s scientific papers.
    Why not, Steve?

    For the rest of us, and for the L&C reviewers, Lindzen and Choi 2011 obtains conclusions that are inconsistent with previous analyses of the same data, analyses that were not tainted by the fundamental scientific errors that have become a consistent theme for papers originating from Lindzen and Choi.

    Steve: I have limited time and energy. I cannot possibly analyse every article in the flood of climate papers, nor have I ever “self-proclaimed” that I could or would do so. I’ve specialized in paleoclimate papers and even then, it is impossible to keep up with everything. Learning the Lindzen-Choi data would be interesting and, if I could clone myself, I’d do so.

    • timetochooseagain
      Posted Jun 28, 2011 at 12:42 PM | Permalink | Reply

      Since you are not familiar enough with the data as presented and represented by Lindzen and Choi, let me give you a brief summary of the troubled history of this data

      Your “history” is a rather biased and one-sided account of the facts, as we shall see.

      For one thing, L&C 2009 contained a blatant (high-school level) scientific mistake

      How appropriate to reference high school when engaging in schoolyard name calling.

      the results of that flawed L&C paper were being promoted on Fox News as being “the end of the (AGW) scam”.

      A comment that reveals your political orientation! How nice.

      Lindzen and Choi 2011 was supposed to be “corrected” form of L&C 2009, but it obtained the same end results as L&C 2009

      So they changed the method to accommodate criticisms, but that can’t be, since it should have changed the results! Because correcting errors without changing the conclusions is unheard of…

      still inconsistent with multiple other scientific studies of the same satellite data.

      Wow, again totally unheard of that people can use different methods on the same data and come to different conclusions. But the methods they used must be right because…because… BECAUSE!

      So why does Lindzen still find negative feedback in the same satellite data where other scientists find opposing results? Well, one reason may be that the Lindzen and Choi 2011 “lead and lag” method has a negative feedback bias.

      The methods those other scientists used have documented positive feedback biases.

      It will find negative feedback in a system where SST and FLUX are completely uncorrelated.

      Never mind that such a situation is physically impossible (SST and flux cannot be completely uncorrelated); how to explain the fact that it failed to magically create negative feedback when the same technique was used on AMIP data? If this bias were as large and important as you seem to think, then it would have detectably resulted in an underestimate of the sensitivity of the AMIP models, but it didn’t. Funny, that.

      That shows that Lindzen and Choi 2011 did not just arrive at biased results, but also that they did not do a statistical test of their (“lead and lag”) method.

      In point of fact, their APJAS paper did examine this bias, and indeed for strong positive feedback their method can be biased towards negative feedback, although the results would not have great statistical significance if that were the case. However, the methods used by Trenberth – simple regression on all the data – were found to have a strong positive feedback bias. And yet, once again, this did not cause their method to incorrectly assign negative feedback to the AMIP models, suggesting that the bias is simply not that great.

      Lindzen and Choi 2011 obtains conclusions that are inconsistent with previous analyses of the same data, analyses that were not tainted by the fundamental scientific errors that have become a consistent theme for papers originating from Lindzen and Choi.

      Not errors, but problems that certainly do not make their results more correct; in fact, the papers they are inconsistent with have seriously flawed methods themselves.

      But in case you were wondering, without using lead/lag methods one can in fact find negative feedback in the global CERES data:

      http://devoidofnulls.wordpress.com/2011/06/25/radiation-redux/

      Your criticisms, while strongly stated, are not nearly so justified as you seem to think.

  43. Posted Jun 24, 2011 at 12:52 PM | Permalink | Reply

    Small note: University of Pennsylvania and Pennsylvania State University (where Mann is) are different schools. Penn is in Philadelphia, Penn State is in State College – about 200 miles away.

  44. Posted Jun 25, 2011 at 1:47 AM | Permalink | Reply

    OK, so Kemp et al went through a different track than Lindzen. Kemp’s track supposedly had more rigorous review, yet failed to catch obvious errors.

  45. John Tofflemire
    Posted Jun 25, 2011 at 3:38 AM | Permalink | Reply

    Another small note: the statement in the last paragraph reads “by a Penn State graduate student.” Could this please be corrected to note his affiliation with the University of Pennsylvania?

  46. Phil.
    Posted Jun 28, 2011 at 8:53 AM | Permalink | Reply

    Steve: I have limited time and energy. I cannot possibly analyse every article in the flood of climate papers, nor have I ever “self-proclaimed” that I could or would do so. I’ve specialized in paleoclimate papers and even then, it is impossible to keep up with everything. Learning the Lindzen-Choi data would be interesting and, if I could clone myself, I’d do so.

    Perhaps you should have spent more time in the preparation of this post and thereby avoided the egregious errors (most of which remain uncorrected despite their being brought to your attention)? The deliberate misrepresentations of course are typical of your vendetta against Mann which we have come to expect from you.

    Steve: I try to write accurately. However, in this case, as you observe, I incorrectly described Kemp as a grad student at Penn State, when, in fact, he was a grad student at Penn. I don’t understand why you would hysterically call that a “deliberate misrepresentation” or an “egregious error”. It seems like a pretty minor error to me. As is my practice, I’ve corrected the error when brought to my attention (I had already corrected the description of Kemp, but had overlooked a further reference to Penn State).

    If your concern is that Kemp’s standing in the community would be diminished by the perception that he was a student at Penn State, rather than Penn – a university with a far more distinguished squash program – and you believe that an apology to Kemp is in order for even suggesting that he was affiliated with a state university, let me know and I’ll consider the possibilities.

    • Salamano
      Posted Jun 28, 2011 at 9:15 AM | Permalink | Reply

      Is the screwing up of PSU vs UPENN an example of one of these ‘egregious’ ‘deliberate misrepresentations’ that are ‘typical’?

      Is there a list of these errors that I can look at so that I’m on the same page?

      I don’t see McIntyre’s unwillingness to become fluent in every sub-specialty within climate science as evidence of something nefarious. A lot of us don’t know everything and prefer to stick with the specialty in which our passion lies (and leave the rest to others).

  47. Phil.
    Posted Jul 7, 2011 at 6:14 PM | Permalink | Reply

    Steve: I try to write accurately. However, in this case, as you observe, I incorrectly described Kemp as a grad student at Penn State, when, in fact, he was a grad student at Penn.

    Yes, you did change one of the three occurrences of that error, but you still didn’t get it right: Kemp is a post-doc at Yale and a former graduate student at UPenn.

    I don’t understand why you would hysterically call that a “deliberate misrepresentation” or an “egregious error”. It seems like a pretty minor error to me.

    I would class it as an egregious error: not only is he not a grad student, but you got the school and advisor wrong. At best that’s sloppy work, which you would have criticized in others!

    As is my practice, I’ve corrected the error when brought to my attention (I had already corrected the description of Kemp, but had overlooked a further reference to Penn State).

    You overlooked two further references for 4 days after they had been pointed out to you; again, sloppy.

    If your concern is that Kemp’s standing in the community would be diminished by the perception that he was a student at Penn State, rather than Penn – a university with a far more distinguished squash program

    Certainly his current school, Yale, arguably had the best squash program in the NCAA this year. Penn State has had its moments, though: Gail Ramsay, after whom the women’s individual trophy is named, played for them and won an unprecedented and unequalled 4 times.

    As far as “deliberate misrepresentation” goes calling the paper “Mann et al.” certainly qualifies, even someone as inexperienced in scientific publishing as you should know better than that!

    This line surely also qualifies: “It was certainly generous of PNAS to give a “prearranged editor” to a submission by a graduate student at Penn State. I’m sure that Lindzen, an actual NAS member, would have appreciated a similar courtesy.” It was of course up to Lindzen to choose to ask for a ‘prearranged editor’; to do so he would have had to submit via the normal channels. Instead he chose to submit via his privileged route as an NAS member. It didn’t work for him, tough.

    There are several other errors in the piece; e.g. your mistake in referring to Fig 4 has been pointed out to you several times, yet you haven’t corrected it. Perhaps you should consider retracting the piece rather than having copious amounts of strikethroughs?

5 Trackbacks

  1. [...] more: PNAS Reviews: Preferential Standards for Kemp (Mann) et al … Tags: criticism, critics, hypocrisy, issue, members, nicely, pnas, processes, review, seemingly, [...]

  2. [...] on, thus presumably saving the CO2 required to generate new and unique errors. Steve McIntyre has pointed out that, as is all too common with the mainstream AGW folks and particularly true of anything touched [...]

  3. By The Climate Change Debate Thread - Page 767 on Jun 23, 2011 at 3:55 AM

    [...] [...]

  4. [...] thus presumably saving the CO2 required to generate new and unique errors. Steve McIntyre has pointed out that, as is all too common with the mainstream AGW folks and particularly true of anything touched [...]

  5. By Cronache marine | Climate Monitor on Jun 27, 2011 at 2:01 AM

    [...] and thus endorse their determinations. It seems that this time nobody noticed the problem. Steve McIntyre points out that this is only imaginable and not objectively verifiable, because the material of [...]
