Lindzen’s PNAS Reviews

Chip Knappenberger has published Lindzen’s review correspondence with PNAS at Rob Bradley’s blog here. Most CA readers will be interested in this and I urge you to read the post, taking care to consult the attachments. (I would have preferred that the post include some excerpts from the attachments.)

The post focuses to a considerable extent on PNAS’ departures from their review policy, but there are some other interesting features in the correspondence, which I’ll discuss here, referring readers to the original post for the PNAS issues.

A PNAS letter to its members observes:

very few Communicated and Contributed papers are rejected by the Board. Last year approximately 800 Communicated and 800 Contributed papers were submitted, of which only 32 Communicated and 15 Contributed papers were rejected. These numbers are not exceptional by historical standards extending at least the past 15 years

The rejection of Lindzen’s paper is an unusual event. NAS members submitting a paper are asked to provide two reviews (they are permitted to select their own reviewers). NAS policy on referees says:

we have adopted the NSF policy concerning conflict of interest for referees (http://www.pnas.org/site/misc/coi.shtml), which states that individuals who have collaborated and published with the author in the preceding four years should not be selected as referees.

Both Happer and Chou, according to Lindzen, met this criterion. (One of the overlooked implications of Wegman’s analysis is that the extensive collaboration between paleoclimate authors makes it that much harder to find referees who meet NSF standards – it’s too bad that they didn’t add this criterion to their analysis.)

PNAS rejected the referees as follows:

Both scientists are formally eligible for refereeing according to the PNAS rules, but one of them (WH) is certainly not an expert for the topic in question and the other one (MDC) has published extensively on the very subject together with Lindzen. So, in a sense, he is reviewing his own work…

it is good scientific practice to involve either some of those who have raised the counter-arguments (and may be convinced by an improved analysis) in the review or to solicit at least the assessment of leading experts that have no direct or indirect affiliation with the authors.

Instead of their normal cozy practices, PNAS responded by suggesting that the submission be reviewed by “Susan Solomon, Kevin Trenberth, Gavin Schmidt, James G. Anderson and Veerabhadran Ramanathan”, saying that “the Board will seek the comments of at least one of these reviewers unless you have any specific objections to our contacting these experts”. Lindzen disputed PNAS’ characterization of Happer and Chou, saying it was not factual. In the end, PNAS obtained four reviews, two of which were respectful, recommending reworking, and two of which were acrimonious. Lindzen surmised that PNAS, contrary to its standard practices, had retained reviewers to whom he had objected.

Some of the comments in the reviews – see here – are intriguing. For example, Reviewer 2 stated:

The poor state of cloud modeling in GCMs has been amply demonstrated elsewhere and the effect of this on climate sensitivity is well documented and acknowledged.

While cloud uncertainties are mentioned in IPCC AR4, I would not say that the effect of various cloud parameterizations on climate sensitivity is “well documented” in IPCC. Quite the opposite. IPCC’s description of clouds is, in my opinion, far too cursory given the importance of the problem.

The reviewer continues with the following list of problems with theory in the area:

While the stated result is dramatic, and a remarkable departure from what analysis of data and theory has so far shown, I am very concerned that further analysis will show that the result is an artifact of the data or analysis procedure. The result comes out of a multi-step statistical process. We don’t really know what kind of phenomena are driving the SST and radiation budget changes, and what fraction of the total variance these changes express, since the data are heavily conditioned prior to analysis. We don’t know the direction of causality – whether dynamically or stochastically driven cloud changes are forcing SST, or whether the clouds are responding to SST change. Analysis of the procedure suggests the former is true, which would make the use of the correlations to infer sensitivity demonstrably wrong, and could also explain why such a large sensitivity of OLR to SST is obtained when these methods are applied.

Let’s stipulate that all of this is true. Shouldn’t this then be stated prominently in IPCC? The IPCC SPM says “Cloud feedbacks remain the largest source of uncertainty” but this hardly does justice to the long list of problems that worry reviewer 2.

And doesn’t reviewer 2 prove too much here? If all of these problems need to be solved prior to publishing an article in the field, wouldn’t this apply to all articles? Not just the ones whose implications are low sensitivity.
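As a purely illustrative aside on the causality question raised by reviewer 2: one standard diagnostic is a lead-lag correlation between SST and flux anomalies. The sketch below uses entirely synthetic data and is not a reconstruction of Lindzen and Choi’s actual procedure; it only shows how such a diagnostic behaves when the flux is constructed to respond to SST with a delay.

    import numpy as np

    # Synthetic illustration of a lead-lag diagnostic between "SST" and "flux"
    # anomalies; all numbers are made up and carry no physical authority.
    rng = np.random.default_rng(0)
    n = 240                                   # 20 years of monthly anomalies

    # SST anomalies as red (AR(1)) noise
    sst = np.zeros(n)
    for t in range(1, n):
        sst[t] = 0.8 * sst[t - 1] + rng.normal(scale=0.1)

    # Flux anomalies built to respond to SST two months later, plus noise
    lag_true = 2
    flux = np.zeros(n)
    flux[lag_true:] = 4.0 * sst[:-lag_true]   # made-up slope
    flux += rng.normal(scale=0.5, size=n)

    def lag_corr(x, y, lag):
        # Correlation of x(t) with y(t + lag); positive lag means y lags x.
        if lag > 0:
            return np.corrcoef(x[:-lag], y[lag:])[0, 1]
        if lag < 0:
            return np.corrcoef(x[-lag:], y[:lag])[0, 1]
        return np.corrcoef(x, y)[0, 1]

    for lag in range(-6, 7):
        print(f"lag {lag:+d} months: r = {lag_corr(sst, flux, lag):+.2f}")
    # The correlation peaks near lag = +2 (flux lagging SST) in this toy setup;
    # a peak at negative lags would instead suggest cloud/flux changes leading SST.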

Reviewer 2 complains that methodological details are inadequate:

Sufficient description is necessary so that another experimenter could reproduce the analysis exactly. I don’t think I could reproduce the analysis based on the description given. For example, exactly how were the intervals chosen? Was there any subjectivity introduced?

Look, I’m highly supportive of this type of criticism. Lindzen disputes the criticism. But it is hardly standard practice in climate science to provide adequate methodology, let alone data. I’ve unsuccessfully sought assistance from journals in getting data. I’m all in favor of replication and hope that this precedent extends to the Team as well. Several years ago, I asked PNAS to require Lonnie Thompson to provide a detailed archive of Dunde and other data so that inconsistent versions could be reconciled. PNAS refused.

The more sympathetic reviewers wanted to understand why Lindzen’s results differed from Trenberth’s and asked for a reconciliation:

I feel that the major problem with the present paper is that it does not provide a sufficiently clear and systematic response to the criticisms voiced following the publication of the earlier paper by the same authors in GRL, which led to three detailed papers critiquing those findings.

and

If the paper were properly revised, it would meet the top 10% category. 2) The climate feedback parameter is of general interest. 3) I answered no, because the exact same data have been used by others to get an opposing answer and I do not see any discussion or evidence as to why one is correct and the other is not.

That point seems reasonable enough to me. However, when I asked that IPCC provide similar reconciliation of Polar Urals versus Yamal, Briffa said that it would be “inappropriate” to do so, and that was that.

While Lindzen could have accommodated the last two reviewers, he decided that it would be impossible to accommodate the first two reviewers and he submitted elsewhere.

Compare these reviews to Jones’ puffball reviews, which were some of the most important Climategate documents. Prior to Climategate, people may have suspected that close collaborators were reviewing one another’s work (as Wegman had hypothesized), but no one knew for sure. People may have suspected that pals gave one another soft reviews, but no one knew for sure. Jones’ reviews of submissions by Mann, by Schmidt, by Santer were proof.

120 Comments

  1. stan
    Posted Jun 10, 2011 at 11:42 AM | Permalink

    A good beginning to some well-deserved petard hoisting. Keep battering at the gates.

  2. tetris
    Posted Jun 10, 2011 at 12:04 PM | Permalink

    Papers submitted by scientists who -for various reasons- question or counter the IPCC position on climate sensitivity [e.g. Lindzen, Spencer, Svensmark, and a few others] all appear to run into various forms of interference that go well beyond what should be expected from established peer review. Perhaps this is not altogether difficult to explain, given the understanding of all concerned that the IPCC’s climate projections are singularly dependent on a high sensitivity?

  3. glacierman
    Posted Jun 10, 2011 at 12:32 PM | Permalink

    What is the over/under on Reviewer 2 = Gavin?

  4. Andrew Dessler
    Posted Jun 10, 2011 at 1:22 PM | Permalink

    There’s one additional piece of information missing from this post: this paper was originally submitted to JGR, and it was rejected by that journal, too. When I talked to Lindzen last Oct., he railed about how unfair the reviews from that journal had been. At that point, I think Lindzen recognized that his paper was never going to make it through any kind of legitimate peer review, so he next submitted it to PNAS so he could select his own reviewers. Kudos to PNAS for not letting him select the entirely unqualified Happer or Lindzen’s wholly-owned subsidiary, Choi. But now Lindzen thinks PNAS is being unfair to him. Of course, after so many rejections by so many reviewers, there’s another possibility that Lindzen seems to not consider: his paper is not very good.

    Steve: Choi and Chou are different people.

    • Posted Jun 10, 2011 at 1:37 PM | Permalink

      You don’t accept then that PNAS departed from their review policy?

      • Andrew Dessler
        Posted Jun 10, 2011 at 3:48 PM | Permalink

        Yes, I know both Chou and Choi. It was an error on my part.

        • Posted Jun 11, 2011 at 12:08 AM | Permalink

          Dear Dr Dessler, I wonder whether you agree that your confusion of the names Choi and Chou – the latter hasn’t written anything with Richard for 7 years – destroys 1/2 of the evidence that you have offered us.

          Aren’t you worried that the foundations for the remaining 1/2 of the argument are not very good, either? I find the description of Prof Happer as an unqualified person amazing.

          http://scholar.google.com/scholar?hl=en&q=william-happer

          Happer has done lots of things that use qualitatively similar – but more advanced – physics as the greenhouse effect. In particular, his optical pumping paper has 1100+ citations. There are many other highly influential papers he has co-authored and he has investigated the climate topics in some detail for years.

          What’s really special about the specialized, “qualified” climate scientists whom you would prefer as referees is that they have never contributed anything genuine to the real science – and they form a clique. I don’t think that either of these two features should be presented as an advantage.

        • Posted Jun 11, 2011 at 12:21 AM | Permalink

          What about the remaining 0.01%? Oh no, my mistake, it’s a rounding error.

    • Steve McIntyre
      Posted Jun 10, 2011 at 1:49 PM | Permalink

      Papers that aren’t “very good” get published all the time. You just aren’t as attuned to them if they support your viewpoint.

      You don’t like the idea of Lindzen getting a friendly review, but you didn’t make a peep about Mann or Santer or Schmidt getting reviewed by pals – pals who did not meet NSF guidelines for referees.

      Look at something like Wahl and Ammann. There were all kinds of problems with it. Phil Jones gave a puffball review and it got published. Or Santer et al 2008. It was reviewed by pals, who ignored the fact that Santer didn’t use up-to-date data. Then they refused to publish a comment reporting that Santer didn’t use up-to-date data and that this affected the results. Or Schmidt 2009. Another puffball review by Phil Jones.

      In contrast, if someone dares to criticize the Team, adverse reviewers (whose adverse interest is not disclosed to the authors) are permitted to flagellate the submission, as we experienced with Steig’s reviewing of O’Donnell et al 2010. In that case, Steig’s review distorted the process. His primary interest was not the improvement of the O’Donnell submission but in preserving his own product.

      • Rattus Norvegicus
        Posted Jun 10, 2011 at 8:59 PM | Permalink

        Steve,

        Lindzen said, in a sort of backhanded way, that Ramanathan was acceptable. He states plainly that Minnis (someone he clearly thought would be friendly) was another acceptable reviewer. All reviewers requested major revisions and all of them basically agreed on what the problems with the paper were. Even the reviewer who said that “this could be a top 10% paper if properly revised”. Give me a break. That’s about as friendly a review as a problematic paper could receive.

        I saw the rather softball reviews that Jones gave to Wahl and Ammann. You might not like it, but the paper struck me as a reasonable do over of the original MBH work which fixed the acknowledged error in the original analysis and showed that the error made very little difference in the results and no change in the conclusions. Of course, we didn’t see the other reviews. Did you? If you did I don’t recall you posting on them.

        Of course there is always this. It’s always reviewer 3.

        Watts’ recent paper, Fall et al., got a pretty rough review but Dessler thinks it made it a better paper. This is often the case. In my own experience as a software engineer, good critical feedback from users is very important to making a better product. Personally, I hate lazy “oh it’s great” sort of reviews. I much prefer ruthless “you screwed up big time” reviews because those keep major bugs from getting through. I hate it when I make a request for reviews of my work that are softball and allow unanticipated problems (because I am dealing with mainly human interface issues) which violate what I like to call “the principle of least astonishment” (can’t remember where that phrase came from, perhaps Donald Norman). Good critical reviews are always better.

        Steve: actually I was a reviewer of Wahl and Ammann and published my review online here. I had an adverse interest but, according to the Climategate documents, my identity as a reviewer was disclosed to them. However, whereas the Journal of Climate editor required us to make major revisions in response to Steig (without disclosing his adverse interest), Schneider totally ignored my review comments – which I tried to be objective about – and terminated me as a reviewer. Worse, as a reviewer, I asked Wahl and Ammann to disclose verification r2 and CE statistics – a point that was in controversy – and where I knew that they had got precisely the same results as us, because their code matched ours – something that they failed to disclose. They refused and Schneider refused to make them. Shameful. Worse, they told Schneider that their parallel submission to GRL had showed us up – without disclosing to Schneider that the GRL submission had been rejected. Their conduct was totally repugnant and, as a result, I filed an academic misconduct complaint – the only time that I’ve done so. Only then did they grudgingly disclose (in an appendix) the statistics that confirmed our findings. UCAR failed to investigate the complaint according to their procedures, but I didn’t pursue the matter at the time. Perhaps I should reconsider.

        Their abstract was very deceptive. In fact, they confirmed our findings that the MBH reconstruction did not possess the advertised statistical “skill” and robustness.

        • Rattus Norvegicus
          Posted Jun 10, 2011 at 9:10 PM | Permalink

          That last paragraph had a slip up.

          I hate it when I provide specific requests for the features to test and get softball reviews back. Generally this means that the users (they’re the QA dept. in my small company) did not do the requested testing or did not really exercise the new features. I generally provide instructions on the areas of the interface that I have not been able to test adequately; this is mostly due to time constraints. It would be nice if turnaround time was not measured in days (and sometimes hours). Short turnaround time is the big problem…

        • Rattus Norvegicus
          Posted Jun 11, 2011 at 12:09 AM | Permalink

          Interesting that you move the goalposts so quickly, Steve. Move it from WA to O’Donnell et al. This just makes it clear that you really don’t have anything left.

        • igsy
          Posted Jun 11, 2011 at 2:04 AM | Permalink

          Eh? How does a passing reference with less than 10% of the word count move the goalposts? Steve’s comment is self-evidently focused on the WA shenanigans.

          If there’s anyone here who doesn’t “have anything left”, it’s not Steve.

        • Rattus Norvegicus
          Posted Jun 11, 2011 at 12:30 AM | Permalink

          Steve, this is a case of “do your own homework”. AFAIK R2 is not considered to be the right measure for a reconstruction of this type. Of course, this is all really weird because the MBH method is not used anymore. The current Mann method, which has been tested against real pseudo proxies, EIV performs pretty well. Why don’t you critique the current methods? Yeah, the 4 Tiljander series are problematic, but Mann showed that they only made the late, no dendro (he was answering you here) system a little sketchy. Of course you don’t like any proxies. You don’t like trees, you don’t like varves, you don’t like dO18. You don’t like anything. Of course to make your point, you have to show that none of the various proxies are valid. But that means a lot of work. A lot harder than throwing darts on your blog, eh?

        • Ed Snack
          Posted Jun 11, 2011 at 3:56 AM | Permalink

          Rattus, “Steve, this is a case of ‘do your own homework’. AFAIK R2 is not considered to be the right measure for a reconstruction of this type. Of course, this is all really weird because the MBH method is not used anymore” is complete bullshit. Are you that stunningly ignorant of statistics? RE is the totally incompetent choice for a statistic; it has been well known (by those with statistical knowledge, and I know that Mann at least is self-admittedly totally ignorant of such) since the 20’s that RE can and does give totally misleading results when used with correlated data, and Mann was very well aware that his data was just that. See some of the Climategate emails if you doubt that.

          At least you now appear to be admitting that MBH98 & 99 was an incompetent piece of work that should have been rejected before publishing. It was certainly controversial and its methodology was highly suspect; certainly, if treated like the Lindzen paper, it would have been rejected.

        • Thor
          Posted Jun 11, 2011 at 5:27 AM | Permalink

          “The current Mann method, which has been tested against real pseudo proxies, EIV performs pretty well”

          What is this? A simulation where your temperature reconstruction is tested against a model of a tree? A simulation of a simulation? You know, it doesn’t get “real” just because you decide to call it so.

        • Jim T
          Posted Jun 11, 2011 at 3:47 PM | Permalink

          Yes, what exactly is a “real pseudo proxy”? A genuine fake substitute?

    • Rattus Norvegicus
      Posted Jun 10, 2011 at 8:18 PM | Permalink

      Andrew, have to agree here. Chou was his coauthor on the original “Iris Effect” paper.

      • Keith W.
        Posted Jun 10, 2011 at 9:56 PM | Permalink

        Rattus, the Iris Effect paper was published in 2001. It is now 2011. That’s ten years. PNAS’ referee criteria say the referee cannot have coauthored a paper with the author in the preceding four years. That makes Chou a viable referee selection, unless you can find that they have written a more recent paper together.

        Here’s Steve’s quote on the matter

        “we have adopted the NSF policy concerning conflict of interest for referees (http://www.pnas.org/site/misc/coi.shtml), which states that individuals who have collaborated and published with the author in the preceding four years should not be selected as referees.”

        • Keith W.
          Posted Jun 10, 2011 at 11:10 PM | Permalink

          Just to save Rattus a Google search, the last paper I can find where Chou and Lindzen were coauthors was published in 2005 on the subject of ERBE. That is six years ago, sufficiently outside the four year moratorium suggested by NSF.

        • Rattus Norvegicus
          Posted Jun 10, 2011 at 11:22 PM | Permalink

          Lindzen provided two reviewers. Happer is clearly not qualified and Chou is questionable. Even if you give Chou a pass on the “not a coauthor” requirement, Happer is a joke, so requiring the extended review seems reasonable because it seems that Lindzen was trying to play the game. Andrew Dessler’s indication that JGR rejected the paper makes things rather fishy. If you couldn’t get a paper published in JGR, why would you think you could get it published in PNAS? Seems a little odd to me.

        • Keith W.
          Posted Jun 11, 2011 at 12:58 AM | Permalink

          While Happer may not be a climate physicist, he is an acknowledged expert on energy transfer based upon the interaction between light and gases.

          http://www.princeton.edu/physics/people/faculty/william-happer/

          While not necessarily his exact field, half of the premise of Lindzen-Choi deals with the fluctuations and fluxes of outbound energy/radiation at the top of the atmosphere. The sections dealing with energy fluctuations would involve formulas and models with which Happer would be either familiar or able to understand with a minimum of study. If you read through the PNAS submission, most of the physics is not excessively complex. Lindzen acknowledges and gives thanks to Happer for “helpful suggestions” at the end of the paper, which makes me think that Happer would certainly be able to understand and critique those sections where his suggestions were implemented if the approach is not in agreement with his thoughts on the matter.

        • Rattus Norvegicus
          Posted Jun 11, 2011 at 1:34 AM | Permalink

          Happer is a fine physicist, but his area of specialty does not make him an expert in, or even someone who is able to evaluate, energy transfer in the climate system.

        • Posted Jun 11, 2011 at 1:46 AM | Permalink

          You’re making an exceptionally important claim here, Ratty, about energy transfer in the climate system: it is so darned complex that a physicist of Happer’s standing cannot even evaluate it.

          We are walking among intellectual giants, people, and we didn’t even recognise it. That I think is the problem here. There are great physicists like Happer and then there are, wait for it … climate scientists. Swoon and double swoon.

          And the reason these giants haven’t shared with us their findings in detail in the IPCC’s AR1 report, on cloud feedback issues is, presumably, that we would never, ever understand.

        • Posted Jun 12, 2011 at 7:12 PM | Permalink

          It’s very different stuff, energy transfer in atoms. Happer’s early work was on optical pumping and energy transfer in metals (mostly alkali) and later on spin-exchange pumping of noble gases. This really has nothing to do with atmospheric physics. You would be hard pressed to find a paper of his that deals with any molecule (there are one or two about alkali dimers).

        • theduke
          Posted Jun 12, 2011 at 10:08 PM | Permalink

          A list of Lindzen’s papers. There are 235 of them and the first one is dated 1965:

          http://www-eaps.mit.edu/faculty/lindzen/PublicationsRSL.html

        • Posted Jun 13, 2011 at 2:20 AM | Permalink

          Re: Eli Rabett (Jun 12 19:12), Well, there is a bit of a problem here. You see, I read Lindzen’s paper and I’m troubled, as are some of the reviewers, by the smoothing and filtering process. It doesn’t take a rocket scientist to see the issues with the paper. My sense is that this paper is not the last word, and I think the authors don’t represent it as the last word.

          One thing is for sure. If the paper were in PNAS it would be harder to ignore when AR5 is written. And as we all know, some papers in the pipeline were seen as trouble for the writers of AR4 and some papers were hurried through to make the writing job easier.

          In a normal world the Lindzen paper would have made it through and it would have generated responses that advance the counter argument. But it’s not a normal world.

        • Posted Jun 13, 2011 at 5:54 AM | Permalink

          Thanks for pointing out the relevance of the AR5 situation and the other insights here. (Which reminds me, I meant WG1 above, not AR1.) Perhaps one of the things wrong with the IPCC process is that there’s an implicit assumption, among politicos and many of the public, that it is about a science that has been settled by the relevant experts. A paper that openly admits that it isn’t the last word doesn’t fit this paradigm. But that may be the only honest stance. We don’t have a settled science of the atmosphere. All that matters is that Lindzen and Choi have advanced an argument that helps to clarify the situation, even if it’s finally shown to be wrong.

        • Steve McIntyre
          Posted Jun 13, 2011 at 6:58 AM | Permalink

          Mann et al 2008 – a PNAS paper – mercilessly smooths series prior to regression, something adversely commented on here. I wonder why PNAS reviewers objected to it in Lindzen’s case, but not in Mann’s.
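          A minimal illustration (synthetic noise only, not Mann et al 2008’s data or code) of why smoothing prior to regression draws criticism: smoothing two independent series inflates their apparent correlation, so any significance test that still assumes the original number of independent points becomes far too generous.

          import numpy as np

          # Two independent noise series; any correlation between them is spurious.
          rng = np.random.default_rng(1)
          n, window, trials = 150, 15, 2000

          def boxcar(x, w):
              # Simple moving-average smoother
              return np.convolve(x, np.ones(w) / w, mode="valid")

          raw, smoothed = [], []
          for _ in range(trials):
              x, y = rng.normal(size=n), rng.normal(size=n)
              raw.append(abs(np.corrcoef(x, y)[0, 1]))
              smoothed.append(abs(np.corrcoef(boxcar(x, window), boxcar(y, window))[0, 1]))

          print(f"median |r|, unsmoothed: {np.median(raw):.2f}")      # roughly 0.05
          print(f"median |r|, smoothed:   {np.median(smoothed):.2f}") # several times larger
          # Smoothing removes degrees of freedom, not noise-free information, so the
          # larger correlation is an artifact of the preprocessing.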

        • Fred
          Posted Jun 13, 2011 at 8:11 AM | Permalink

          Your point being that a mathematical operation, ‘smoothing’, must either be appropriate or inappropriate regardless of context or impact?

          Or is it that no operation ever used by Mann in any paper, thought or program, can ever be criticised if used by anyone else (regardless of context or impact)?

          Perhaps we need a new name for this particular logical fallacy… ‘ad mcintyrem’ maybe.

        • Posted Jun 13, 2011 at 4:13 PM | Permalink

          Re: steven mosher (Jun 13 02:20), My point is simply this. What we see from Eli and others is the following. They feel that Happer is unqualified to review the paper. They offer no proof of this. They try to establish that there is some kind of special physics knowledge required to review this paper. I’m claiming they are wrong. They are wrong because I have no special physics knowledge and as a reviewer I would immediately raise the issue of the smoothing and the filtering. ON INSPECTION. NO PHYSICS KNOWLEDGE REQUIRED. So if a lowly English major with no degree in physics can see that this issue needs to be addressed, then clearly no special physics knowledge is required to see the problems with the paper. Further, as Eli and others explain the problems to the general public, it’s also clear that no special physics knowledge is required. The motivation for removing Happer is not his lack of specific knowledge in this domain. The motivation is clear: Lindzen cannot be allowed to publish a paper, EVEN IF that paper only makes the most marginal of claims. I don’t see any reason why the paper should not be published; as Steve points out, Mann is allowed to get away with smoothing. If I were in the field I would welcome Lindzen publishing so that I could have the opportunity to show him the error of his ways. But that takes time, and Schneider is not around to hustle papers through the process, so it’s safer just to prevent Lindzen from publishing altogether; after all, AR5 is coming up. Let’s not forget that in 2005 they were already looking at the papers being published for a 2007 AR4.

          July 2012 is the deadline for submission for WG1. So if Lindzen is allowed to publish in PNAS now, people have only a year to get their stuff together.

          I know Steve and Ross have some things that could be pushed into the publication channel. The time is now. That will put responses outside the submission window. Look for skeptics to have a tough time getting anything published in this time window.

        • Posted Jun 13, 2011 at 4:28 PM | Permalink

          Brilliant, you English major you. How can we do something to help with AR5? En masse I mean, crowdsourced. There must be something, from this point thru Jul 2012, beyond blowing a corporate raspberry.

        • Posted Jun 13, 2011 at 4:53 PM | Permalink

          I’m not sure what can be done, but the window is open now. The deadline for submission is July 2012, so if I were a wise skeptic I would start getting my stuff together now.

          Here are some things to consider. If you submit too early, then your paper will be passed around behind closed doors and they can plan a counter attack. If you submit too late, toward July, you risk being delayed beyond the “accepted by” date.

          I’d have to look at the time windows and the cycle times to figure an optimal window for submission. They know this game better than the skeptics, so I’m not clear there is an optimal strategy, like submitting early, getting reviews and withdrawing to fight at a different venue (is that even acceptable?).

          In any case I fully expect that skeptics from now to 2012 will be documenting and publishing the reviews they receive. That’s the new battlefield. One approach would be to include a co-author who is willing to burn bridges by revealing reviews.

          Lots of interesting ideas, but it’s time to suit up.

        • David Anderson
          Posted Jun 14, 2011 at 4:25 AM | Permalink

          What a sad reflection this is on the current state of climate science. “Is the world warming due to CO2?”. Depends, what date were the various papers published?

        • Posted Jun 11, 2011 at 11:04 AM | Permalink

          Rattus Norvegicus,
          See Lindzen’s characterization of Happer:

          “Will Happer, though a physicist, was in charge of research at DOE including pioneering climate research. Moreover, he has, in fact, published professionally on atmospheric turbulence. He is also a member of the NAS.”

          You provide no evidence to support your claim that “Happer was clearly not qualified”. Thus you offer only an ad hominem attack from false authority, which carries no weight.

        • jim
          Posted Jun 12, 2011 at 7:58 AM | Permalink

          As if most of the climate scientists have “climate scientist” degrees. To say that a physicist isn’t qualified is not supportable without further justification.

    • R.S.Brown
      Posted Jun 11, 2011 at 3:40 AM | Permalink

      Re: Dr. Dessler’s bias… consider the source.

      Andrew Dessler, PhD
      Title: Professor in the Department of Atmospheric Sciences at Texas A&M University

      Position: Pro to the question “Is human activity a substantial cause of global climate change?”

      Reasoning: “Contrary to what one might read in newspapers, the science of climate change is strong. Our own work and the immense body of independent research conducted around the world leaves no doubt…”

      “There is no question that natural causes, such as changes in energy from the sun, natural cycles and volcanoes, continue to affect temperature today… ”

      “But despite years of intensive observations of the Earth system, no one has been able to propose a credible alternative mechanism that can explain the present-day warming without heat-trapping gases produced by human activities… ”

      See:

      http://climatechange.procon.org/view.source.php?sourceID=009955

      Ipso facto, AGW rules. Lindzen-Choi must be wrong and treated like naughty children.

      Drop the popcorn and move along.

      • Vorlath
        Posted Jun 11, 2011 at 2:25 PM | Permalink

        About the fact that climate scientists can’t explain present-day warming (coming out of the little Ice Age to boot)…

        The exclusion principle is not scientific. I wonder why the very premise of it isn’t being ridiculed in climate science. When I first heard of this, my first thought was “How bogus is it that their proof is their own ignorance?”

  5. Barclay E MacDonald
    Posted Jun 10, 2011 at 2:16 PM | Permalink

    I agree with Stan. It’s very important these issues see the light of day. Just as important as auditing the reliability of the underlying data and analysis.

    The view that peer review is satisfactory because it eventually comes to the correct result is wholly inadequate. That should be obvious in climate science, where it is claimed much of humanity is in great and immediate danger. When it comes to important issues, it is truly pathetic that science has made so little progress in dealing with the personalities of scientists.

  6. Dan Zeise
    Posted Jun 10, 2011 at 4:31 PM | Permalink

    Andrew Dessler
    I could care less if Dr. Lindzen’s paper was originally rejected by JGR. Just another example of pal review in my mind.
    In his February 23, 2011 letter to Dr. Schekman, Dr. Lindzen wrote, “The use of simple regression over the entire record (as in the procedure in Trenberth et al, 2010 and Dessler, 2010) is shown to severely understate negative feedbacks and exaggerate positive feedbacks – and even to produce significant positive feedback for the case where no feedbacks were actually present (viz Figure 7 and Table 1 of the revised paper). Equally important, the simple regression approach leads to extraordinarily small values of the correlation on the order of 0.02. Such values would, in normal scientific fields, lead to the immediate rejection of the results of Trenberth et al and Dessler as insignificant.”
    Maybe you should address the low correlation from your published paper or address the merits of Dr. Lindzen’s paper instead of building a strawman. Your very post above, in my humble opinion, is confirmation of the bias in peer review.
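    For a rough sense of scale on a correlation of 0.02, the back-of-the-envelope sketch below assumes a hypothetical sample size of 1000 points (the quoted letter gives only the correlation) and computes the explained variance and the usual t-statistic for a correlation coefficient.

    import math

    r = 0.02                                  # correlation quoted in Lindzen's letter
    n = 1000                                  # hypothetical sample size, assumed here
    r2 = r ** 2                               # fraction of variance explained
    t = r * math.sqrt((n - 2) / (1 - r2))     # t-statistic for testing r = 0

    print(f"variance explained: {r2:.4%}")    # 0.0400%
    print(f"t-statistic: {t:.2f}")            # about 0.63, far below ~1.96
    # Even granting 1000 points, r = 0.02 explains 0.04% of the variance and is
    # statistically indistinguishable from zero.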

  7. genealogymaster
    Posted Jun 10, 2011 at 5:07 PM | Permalink

    I guess I’ll never understand this peer review; it just doesn’t seem to work, in my opinion. Why is it hard for some to see that the Team gets a pass and it’s everyone else who has to jump through hoops?

  8. Pat Frank
    Posted Jun 10, 2011 at 6:01 PM | Permalink

    Let’s recall, in this context, that Ralph Cicerone himself reviewed Jim Hansen’s PNAS 2006 “Warmest in a Milll-yun Years” paper, which Steve discussed here and here.

    For those new to this forum, this is the paper that featured, “Hansen [taking] splicing to an entirely new level. He takes a proxy record whose most recent reading is approximately 4320 BP (and there’s hair on that age estimate) and compares that to instrumental records in the 20th century, using the 1870-1900 period (for which he has values for neither series) as a benchmark. I guess you have to be a fellow of the National Academy of Sciences to be able to do this.”

    This paper also featured assigning physical meaning to a principal component by mere fiat; another high grade methodology that finds common usage in climate science.

    From Jim Hansen’s acknowledgements, page 14293, “We thank Ralph Cicerone for reviewing our submitted paper…”

    All of that passed by the president of the NAS himself. One law for the ins and another law for the outs at the National Academy under Dr. Cicerone.

    You can snip this as you like, Steve, but it all seems like ‘wink, wink, nudge, nudge’ to me.

    • Posted Jun 11, 2011 at 12:01 AM | Permalink

      Well said Frank. My personal wiki is incomplete in many ways but the moment I looked up PNAS I was reminded of this quote from you:

      I remember back when you first posted on the truncation of Briffa 2000, Steve, and recall being unable to parse the meaning of it. My mind felt numb. In retrospect, my reaction came because the evidence of such sheer dishonesty in science was too hard to accept. After an adult life-time of hewing to the principle of objectivity, to see the bedrock of science flouted on that scale by institutional scientists was more than I could countenance. I’ve since become more hardened.

      What really put me over the top was Hansen’s 2006 PNAS paper, personally reviewed and passed by Ralph Cicerone, as discussed at CA here and here … I guess Dr. Hansen has never been less than 99% sure that he’s right.

      That was in the aftermath of ‘Swindle’ in May 07. Thanks for keeping on keeping on.

  9. Posted Jun 10, 2011 at 7:13 PM | Permalink

    It seems to me that the issue here is not whether the review of Lindzen and Choi’s manuscript was fair and equitable; I suspect that in many ways it probably was. The issue is whether the review was conducted in a manner comparable to reviews given by journals to other papers that express the alarmist view of climate change. In this regard there are two issues: (i) choice of reviewers, and (ii) depth of detail in review. I have the distinct impression that the choice of reviewers was tilted against Lindzen and Choi at the outset, and the depth of review seems to have been far more penetrating than for other papers. Indeed, it seems to me that it is in the nature of climatology that short-term random chaotic variations are typically much greater than long-term secular changes, and it is very difficult to unravel the long-term signal from data. Since most cases involve inadequate spatial and temporal data coverage, climatologists seem to derive a dollar’s worth of conclusions from a penny’s worth of data. For example, Dessler et al. (JGR, 2008) analyzed a mere one-month’s data in 2005 to infer clear-sky top-of-atmosphere outgoing long-wave radiation (OLR) and its relationship to humidity. I wonder what kind of review that paper received? Gettelman and Fu (2008) used a mere five years of data. The work by Soden et al., Santer et al. and Dessler et al. and others shows great ingenuity in ferreting out information from very limited amounts of data, some of which is of uncertain reliability. But ultimately, the credibility of their results is limited by the scarcity of good long-term data. Parameters such as humidity and cloudiness vary widely from day-to-day and year-to-year even in the absence of any forcing. In attempting to determine how these parameters respond to a forcing, one must have data over very long periods to overcome the low signal-to-noise ratios inherent in them. The same problem occurs in sea level measurements. However, whereas climatologists studying sea level have emphasized the need for very long-term data, those who infer feedbacks from humidity and cloudiness seem to be content with very short-term data. I suspect that if all of these papers had been subjected to the same kind of review that Lindzen and Choi received, they might not have been published. Indeed, if all climatological papers received this kind of review, the journals would be emptied out.

    • stan
      Posted Jun 10, 2011 at 8:29 PM | Permalink

      Well done, Donald Rapp.

      “climatologists seem to derive a dollar’s worth of conclusions from a penny’s worth of data.” Nailed it. [Perhaps they are able to do this because they start out ‘knowing’ the right answer.]

      “Indeed, if all climatological papers received this kind of review, the journals would be emptied out.” Seconded.

    • Posted Jun 11, 2011 at 12:13 AM | Permalink

      The issue is whether the review was conducted in a manner comparable to reviews given by journals to other papers that express the alarmist view of climate change.

      In fact, there are two issues, because PNAS has never been like other journals. At least, I’m taking the word of contributor ‘j’ on Bishop Hill for that:

      PNAS serves almost as a vanity press for members of the Academy (Lindzen is one). Previously, they were able to publish papers there with no formal review. A few years ago, this was changed, to a policy whereby the author him or herself was required to provide two reports from independent scientists, picked by him or herself. A much lower hurdle than normal anonymous review, because the referee knows that the author knows who they are, so this usually cannot lead to rejection – Lindzen quotes only 2% of papers that do not get accepted. I know one member of the academy, who has told me that he was most happy about being elected member precisely because this meant he could get controversial – but in his view important – papers published where normal refereeing might lead to a protracted cycle of negative reports etc. …

      Anyway: this is not just any old paper rejection. This is really gross.

      Taking this on board – and what a lovely field one must be examining, when such informed people feel that they must remain anonymous, again and again – the treatment of Lindzen is a way of asserting that they have assumed total control over the National Academy of Sciences.

      OK, got the message guys. But even the Soviet bloc fell apart in the end. Don’t you have better things to do with your lives?

    • Posted Jun 12, 2011 at 7:15 PM | Permalink

      Although somewhat hidden in the letters, none of the reviewers PNAS originally suggested reviewed L&C II, but ones selected in consultation with Lindzen did. Ramanathan and Minnis off the top.

      • timetochooseagain
        Posted Jun 12, 2011 at 8:02 PM | Permalink

        It is hardly normal PNAS policy that, if you say a reviewer they suggest is the least objectionable of a bad bunch, they sic him on you. In the case of Ramanathan, he was literally that. In the case of Minnis, he was mentioned offhandedly. Again, this is hardly consistent with the normal procedures.

  10. Publius
    Posted Jun 10, 2011 at 8:47 PM | Permalink

    It is significant that Dessler’s paper, which ASSUMES one direction of causality (the one that gives a large positive feedback), was published in Science, whose editorial policy is consistently and strongly aligned with the IPCC. On the other hand, Lindzen, who has found for another direction of causality in a series of papers (and gets negative feedback), as does Spencer, gets hoisted up by the sudden imposition of a ‘conflict of interest’/’unqualified’ reviewer judgement. And this in a nominally less rigorously reviewed journal. He is then evidently torpedoed by two members of the ‘objective’ coterie of Trenberth, Schmidt, Anderson and Ramanathan. One is reminded of the old saw, ‘If you want a stick to beat a dog, any stick will do.’

    Bias in the peer review process has been studied and even measured in the social sciences, where it is understood and acknowledged. See for example the classic paper by Mahoney http://pages.stern.nyu.edu/~wstarbuc/Writing/Prejud.htm However, Mahoney was dealing only with random reviewer predispositions and biases. Climate science has institutionalized bias as a matter of strategy. We see strong indications of it in the ClimateGate emails. Unfortunately it appears that since ClimateGate, it has become even harder for research findings opposed to IPCC dogma to be published.

    • tetris
      Posted Jun 11, 2011 at 3:00 PM | Permalink

      PNAS suggesting Solomon, Trenberth and Schmidt as reviewers on Lindzen’s paper is analogous to asking three senior cardinals to review a paper that provides evidence that strongly questions the ability to walk on water.

  11. Ron Cram
    Posted Jun 10, 2011 at 8:54 PM | Permalink

    Perhaps PNAS made the same error regarding Choi/Chou as was made by Andrew Dessler.

    • Rattus Norvegicus
      Posted Jun 10, 2011 at 9:01 PM | Permalink

      Ron,

      Not. Chou is a past coauthor with Lindzen on substantially the same subject.

      • Posted Jun 11, 2011 at 12:30 AM | Permalink

        Well, all those climate scientists and “climate scientists” who believe that CO2 poses a threat are writing papers “on the same subject” – and they’re still happily reviewing papers of each other. Have you ever protested against those? In fact, the “same subject” is usually presented as a “great consensus” that should make people “certain that the science is settled”.

        As you can see, in one context, you’re using the agreement about the broad structure of the model of the climate to be an amazing advantage, while on the opposite side, you’re viewing the agreement as a pathological disease that must prevent all those people from reviewing papers of other people in the same group. Would you at least agree that your behavior is hypocritical and you use double standards, without a glimpse of respect for impartiality that is so important in science?

      • Ron Cram
        Posted Jun 11, 2011 at 12:45 AM | Permalink

        Rattus,
        If I understand correctly, writing on the same subject is not the issue. The issue is the time period. The infrared iris paper was a long time ago. And it is not really the same subject. Both papers arrive at a negative feedback, but the road traveled is different.

        PNAS is raising a new criterion by which to exclude reviewers. It seems rather arbitrary to me.

      • Hoi Polloi
        Posted Jun 11, 2011 at 1:31 PM | Permalink

        Something like Wahl, Ammann and Mann?

    • Venter
      Posted Jun 11, 2011 at 2:29 AM | Permalink

      I believe that they have definitely confused Chou and Choi, as the last line of the PNAS comment posted by Steve above says that “So, in a sense, he is reviewing his own work…”

      That would not apply to Chou for sure. But it would fit Choi, if they had misunderstood Choi instead of Chou as the reviewer.

  12. Noblesse Oblige
    Posted Jun 10, 2011 at 9:13 PM | Permalink

    Chou and Lindzen have not collaborated on the outgoing radiation issue. Choi and Lindzen have done so.

  13. EdeF
    Posted Jun 10, 2011 at 9:36 PM | Permalink

    Lindzen and Choi seems to be well written. They admit to some earlier errors and make attempts to correct them. They respond to reviewer critiques. They say this is not the last word on this subject. I did not know that IPCC had admitted that cloud modeling in the GCMs had problems. Where is that widely disseminated? I am not following Lindzen’s argument that, once you have determined sensitivity over the tropics, that can also be applied to landmass; I will have to read that more carefully, plus the review comments. In addition to variable cloud formations and changes in absolute humidity, is someone looking into changes in SST due to frigid Arctic and Antarctic waters mixing with tropical waters?

    Wish PNAS had printed this article, then allowed the other side to rebut. That seems to me to be good science.

  14. timetochooseagain
    Posted Jun 10, 2011 at 9:44 PM | Permalink

    Calling the Iris work “essentially the same issue” is not quite on the mark, to be generous. One deals with the specifics of a mechanism which might function as a feedback and evidence for such a mechanism. The papers Lindzen has been working on recently deal with the sensitivity issue more comprehensively and relate to the Iris work only in that both have potential relevance to the sensitivity issue.

    Now let’s not go on about the Iris in this thread, as it is a black hole of a debate.

    • Rattus Norvegicus
      Posted Jun 10, 2011 at 11:41 PM | Permalink

      Well, it is on the same subject in the sense that both found that there was a low sensitivity. My feeling, as stated in a previous comment, is that Chou, as a previous coauthor on pretty much the same subject, is suspect. Lindzen used Chou and Happer. Happer was clearly not qualified to review the paper, so Lindzen’s choice of reviewers did not meet the requirements of PNAS. If you grant the choice of Chou, Lindzen still had to provide one other qualified reviewer. He did not.

      • Ron Cram
        Posted Jun 11, 2011 at 12:48 AM | Permalink

        Rattus,
        Granting Chou, you believe Lindzen did not provide the second qualified reviewer. I disagree. In the email, Lindzen put forward Albert Arking of Johns Hopkins. I’ve not seen anything to suggest Arking does not meet the requirement.

      • Ed Snack
        Posted Jun 11, 2011 at 4:06 AM | Permalink

        Actually, I can see why Rattus wants to think that Happer is not a suitable reviewer, he has a very good understanding of atmospheric physics relating to energy transfer, and climate scientists almost to a man have little or no understanding of the same. They make the same stupid, false, assumptions over and over again and are indeed highly resistant to having their mistakes pointed out to them by someone more qualified.

      • MrPete
        Posted Jun 11, 2011 at 7:31 AM | Permalink

        So what we have here is:
        * PNAS rules ignored to disqualify one reviewer (who clearly has not co-published recently)
        * Another highly capable reviewer (Happer) tarred and feathered by insinuation, apparently because the editors and their pals don’t understand his qualifications. (“Who cares if he discovered lasers? This paper is about high energy light!”)
        * …and the head of this tribe has a history of stepping in to provide reviews that aren’t just puffball, they’re incorrect.

        Is that about right?

        I’d laugh at the Keystone Kops entertainment value if it weren’t so serious!

        • Steve E
          Posted Jun 11, 2011 at 4:16 PM | Permalink

          (“Who cares if he discovered lasers? This paper is about high energy light!”)

          Passed a jumbo olive stuffed with a jalapeno through my nose after that one. Note to self. Do not drink a Martini while reading comments…do not drink a Martini while reading comments…

      • timetochooseagain
        Posted Jun 11, 2011 at 10:42 AM | Permalink

        Frankly I think you are reaching to question Chou’s objectivity. It’s ridiculous. The Iris “found low sensitivity” only in the sense that, if the iris were operating as a negative feedback, it would, all else equal, mean a lower sensitivity. But these papers deal with the broad sensitivity issue, not specific feedback mechanisms. To say this is the same work is to ignore the substance and focus on its impact on the broader debate: in other words, you think Chou is unqualified because he might be a skeptic.

      • Posted Jun 11, 2011 at 10:46 AM | Permalink

        Rattus Norvegicus,
        A Google scholar search for William Happer: 1820 hits.
        “Optical pumping”: cited by 1148.
        “Spin-exchange optical pumping of noble-gas nuclei…”: cited by 552.

        Google scholar search: Rattus Norvegicus
        Only typographical errors. No citations.

        Ergo – Rattus Norvegicus has not demonstrated any competence to declaim on Happer’s expertise.

        I find William Happer insightful in his review: The Truth About Greenhouse Gases

        We can choose to promote investment in technology that addresses real problems and scientific research that will let us cope with real problems more efficiently. Or we can be caught up in a crusade that seeks to suppress energy use, economic growth, and the benefits that come from the creation of national wealth.

        William Happer is the Cyrus Fogg Brackett Professor of Physics at Princeton University.

        • stephen richards
          Posted Jun 12, 2011 at 5:00 AM | Permalink

          I would put a review by a physicist a few hundred kilometres in front of a climate “scientist”. For crying out loud, how the hell do Gavin’s qualifications give him any right to review climate papers? He is a computer nerd. And Jones, to review statistical work without a maths qualification? And Mann? And Hansen, an astronomer?

          Happer has to be one of the great choices for reviewer. There are no truly great physicists left like Feynman et al but Happer is up there.

        • Posted Jun 13, 2011 at 8:28 PM | Permalink

          Hansen is an astrophysicist by training, not an astronomer. Look it up.

        • Keith W.
          Posted Jun 14, 2011 at 5:15 PM | Permalink

          James Hansen’s NASA CV page

          http://www.giss.nasa.gov/staff/jhansen.html

          B.A., Physics and Mathematics, 1963, University of Iowa
          M.S., Astronomy, 1965, University of Iowa
          Ph.D., Physics, 1967, University of Iowa

          So, maybe a little of both. Astronomer and Physicist.

      • harry
        Posted Jun 14, 2011 at 3:56 AM | Permalink

        Does this mean that it would be wrong for Climate Scientists to use reviewers that “wrote on the same subject in the sense that both found that there was a HIGH sensitivity”?

      • MikeN
        Posted Jun 15, 2011 at 12:21 PM | Permalink

        >Well, it is on the same subject in the sense that both found that there was a low sensitivity.

        So does PNAS require no coauthors at any point in the past if you wrote a paper that supports the idea of a high climate sensitivity?

        You are moving the goalposts.

  15. Tom Gray
    Posted Jun 11, 2011 at 8:19 AM | Permalink

    I gather that it is common practice to ask an author whose work is being challenged by a subsequent paper to review that paper. I have read here that the editor will take into account the challenged author’s unavoidable personal interest when he/she assesses the review. That being said, why is it considered bad practice to invite a review from someone whose work closely parallels the work in a paper? The NAS documents talk about someone reviewing their own work. However, this is just as much true in the case of a challenged author. The editor can take into account the reviewers’ personal interests in his/her decision on publication. The reviewers only offer opinions that the editor is free to accept or dismiss.

  16. stan
    Posted Jun 11, 2011 at 1:21 PM | Permalink

    If the science were really “settled”, there should be dozens of scientists capable of being competent reviewers. If the science is so ‘solid’ that it supports government policies designed to transform society fundamentally, there should be hundreds of scientists with sufficient understanding to serve as reviewers.

  17. Noblesse Oblige
    Posted Jun 11, 2011 at 2:44 PM | Permalink

    There is no way that Cicerone would permit publication of Lindzen’s paper in PNAS. It is too damaging to the case on which he has built his career and influence, and to the basis of the IPCC and the administration’s program. All the rest of this discussion is noise.

  18. Dr Slop
    Posted Jun 11, 2011 at 5:32 PM | Permalink

    @Vorlath 11, 2011 at 2:25 PM

    Yes, the “I can’t imagine any reason for not p, therefore p” makes climate science rather boring. Perhaps climate scientists could learn from the philosophers. Then again, I get the impression some of them have. (See, for example, the entry for Searle.)

  19. Posted Jun 12, 2011 at 9:06 AM | Permalink

    When I first got seriously interested in climatology about six years ago, I was amazed and impressed at the general paucity and uncertainty of data, and the great extent of assertive conclusions derived therefrom. In other fields that I had worked in for the previous 45 years, such flimsy conclusions would never have been tolerated. Yet, I suppose that climatologists need to earn a living and they have to work with what is available. I just wish they weren’t so positive that they have the answers based on fragile and ephemeral measurements.

    A few examples follow:

    Dessler et al. (GRL, 2008) found a good correlation of global temperature with an ENSO index for 2006–2008. Hence it seems clear that global temperature changes for 2006–2008 were driven primarily by changes in the oceans, and changes in humidity during that period were not a cause of global temperature change, but an effect. The effect of changing CO2 concentration and putative water vapor greenhouse effect are buried in the noise of a much stronger signal due to El Niño variability during these years. Therefore it is physically impossible to derive a water feedback sensitivity from data limited to these two winters. Yet, the authors claim that they have done so and quote a value in agreement with climate models. This seems impossible to this writer. They then reached the rather incredible conclusion:

    “The existence of a strong and positive water-vapor feedback means that projected business-as-usual greenhouse gas emissions over the next century are virtually guaranteed to produce warming of several degrees Celsius.”

    This conclusion is utterly unsupportable from the analysis of a mere two winters’ data controlled by El Niño activity.

    Dessler et al. (JGR, 2008) analyzed a mere one-month’s data in 2005 to infer clear-sky top-of-atmosphere outgoing long-wave radiation (OLR) and its relationship to humidity. It is not clear to this writer that this paper sheds any light whatsoever on water feedback sensitivity.

    Gettelman and Fu (2008) analyzed the changes in humidity produced by temperature changes from 2002 to 2007. As before, temperatures during this period appear to have been determined mainly by El Niño variability, and changes in water vapor content appear to be effects of this temperature change. There is little or no connection to heating produced by CO2, and water feedback sensitivity over the long term does not seem to be derivable from this work.

    Soden et al. (2005) reported “We use satellite measurements to highlight a distinct radiative signature of upper tropospheric moistening over the period 1982 to 2004”. The reliability of these measurements is uncertain. But more importantly, when one examines the data reported by this paper, one finds rather large fluctuations in column-integrated water vapor from year to year, presumably due in part to El Niño – La Niña cycles. The effect of the great El Niño of 1998 seems to have been long lasting, producing a step-function change in humidity as well as temperature. Thus, Figure 1 of Soden et al. (2005) shows a pattern of humidity fluctuating about one mean prior to 1998, and about a higher mean after 1998. Hence the change in water vapor over the period 1982–2004 seems to have been driven by El Niño – La Niña cycles rather than by greenhouse-driven temperature change, and the entire thesis relating humidity to greenhouse-driven temperatures seems fragile.

    Santer et al. (2007) continued this line of argument. They began by asserting: “Data from the satellite-based Special Sensor Microwave Imager (SSM/I) show that the total atmospheric moisture content over oceans has increased by 0.41 kg/m2 per decade since 1988”. This would seem to imply a steady rise in humidity over two decades. However, when their Figure 1 is examined, one finds essentially the same result as that of Soden et al. (2005): there are large oscillations in humidity with an apparent step function in the mean after the 1998 El Niño. As in the case of Soden et al. (2005), Santer et al. (2007) utilized a very short period of data to support sweeping conclusions that remain very uncertain.
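
    The step-versus-trend point in the two preceding paragraphs can be illustrated with a toy calculation (made-up numbers only, not the actual SSM/I or Soden et al. data): an ordinary least-squares trend fitted to a series that is flat apart from a single upward step partway through still reports a positive rate per decade, so a quoted linear trend cannot by itself distinguish a steady rise from a one-time shift. A minimal Python sketch:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical annual-mean column water vapour, kg/m2 (toy numbers, not SSM/I data):
        # flat near 28 kg/m2, with a single 0.6 kg/m2 upward step from 1999 onward, plus noise.
        years = np.arange(1988, 2007) + 0.5
        step  = np.where(years >= 1999.0, 0.6, 0.0)
        water = 28.0 + step + rng.normal(0.0, 0.3, years.size)

        # An ordinary least-squares fit still reports a positive linear trend,
        # even though nothing in the series rises steadily.
        trend_per_decade = 10.0 * np.polyfit(years, water, 1)[0]
        print(f"fitted trend: {trend_per_decade:+.2f} kg/m2 per decade")

    With these toy numbers the fitted trend comes out at a few tenths of a kg/m2 per decade, produced entirely by the step; whether the real record behaves this way is exactly the question the comment raises.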

    After publication of Douglass et al. (2007), the cabal came forth with Santer et al. (2008) as a rebuttal. This paper begins with the sentence: “There is now compelling scientific evidence that human activities have influenced global climate over the past century”, which, aside from the fact that the statement is not true, reveals the belief system to which the authors subscribe religiously. The details of the statistical processing of large data sets are complex. The issue is whether tropical tropospheric temperatures have risen more than surface temperatures, as climate models would predict for the effect of greenhouse gases on climate. Douglass et al. (2007) concluded that models and data disagreed to “a statistically significant extent”. Santer et al. (2008) claimed to achieve a “partial resolution of the long-standing ‘differential warming’ problem”, although they also said:

    “We may never completely reconcile the divergent observational estimates of temperature changes in the tropical troposphere. We lack the unimpeachable observational records necessary for this task. The large structural uncertainties in observations hamper our ability to determine how well models simulate the tropospheric temperature changes that actually occurred over the satellite era. A truly definitive answer to this question may be difficult to obtain.”

    Yet, this did not prevent Santer et al. from producing a so-called “Fact Sheet” that said “We’ve gone a long way towards such a reconciliation” [between climate models and tropical tropospheric temperatures].

    In 2009, McIntyre pointed out that when the data used by Santer et al. (2008), which ended in 1999, are extended through 2008, the discrepancy reported by Douglass remains, and “the claim by Santer et al. (2008) to have achieved a ‘partial resolution’ of the discrepancy between observations and the model ensemble mean trend is unwarranted”. McIntyre also noted the difficulty in obtaining data from Santer et al., and indicated that the International Journal of Climatology was stalling in responding to him. It appears that this article will never pass through the cabal’s lock on the IJC, and McIntyre had to be content with merely archiving his article. Yet alarmists continue to refer to Santer et al. (2008) as evidence that climate models have been adequately tested.

    Mann et al. inferred the climate of the Earth back to the year 1000 based on the following numbers of proxies. Aside from the fact that the detailed relationships of many proxies to temperature are unclear, and the PC methodology was incorrect (as pointed out by McIntyre), the spatial and temporal coverage was weak:

    Earliest date Number of proxies
    1000 12
    1400 22
    1450 24
    1600 57
    1700 74
    1763 93
    1820 112
    1854 219
    1902 1,082

  20. Posted Jun 12, 2011 at 10:02 AM | Permalink

    Of all the papers I have published (130) and the hundreds of reviews I have gotten, the ONLY reviews that have been rude and angry have been on the spotted owl and on climate. My ventures into geology in areas such as hydrology have faced no such nastiness, and my 2 papers in climate in the early 90s did not either. How about a 1 line review from GRL that said simply “one can not do this type of analysis” and that was the total basis for rejection? “special treatment”? I think so.

    • Posted Jun 14, 2011 at 9:10 AM | Permalink

      Come, come, Loehle: the science of the Spotted Owl is settled. No amount of denial will change that.

  21. Brazil Tony
    Posted Jun 12, 2011 at 8:30 PM | Permalink

    Happer is an esteemed physicist and a partisan Republican. I once was assigned the task of finding PhD scientists in support of GW Bush’s re-election campaign. It was not an easy task. Happer was the only name that came up.

  22. Rob
    Posted Jun 13, 2011 at 4:02 AM | Permalink

    Apart from the fact that the original publication Lindzen and Choi 2009 contained a fundamental error in calculating the feedback response from ERBE results (so fundamental that anyone with a shred of scientific background would have caught it), and that Dr Trenberth devoted an entire publication to explaining the multitude of flaws and mistakes made in Lindzen and Choi 2009, Lindzen here again shows that he is religious in his belief in negative feedback and his debunked Iris theory, regardless of the empirical and mathematical and scientific evidence against him.

    Here are comments from the 4 reviewers of his latest submission:

    Reviewer 1: “The paper is based on three basic untested and fundamentally flawed assumptions about global climate sensitivity”

    Reviewer 2: “I would advise both the author and the journal not to publish this paper as it stands”

    Reviewer 3: “I feel that the major problem with the present paper is that it does not provide a sufficiently clear and systematic response to the criticisms voiced following the publication of the earlier paper by the same authors in GRL, which led to three detailed papers critiquing those findings.”

    Reviewer 4: “the exact same data have been used by others to get an opposing answer and I do not see any discussion or evidence as to why one is correct and the other is not”

    The fact that Lindzen needs to resort to “it has been cooling for the past 10 years” Dr Happer, and to Dr Chou, co-author of the referenced “Iris theory”, underlines that Lindzen can’t find anyone outside his close circle of pals to back up his increasingly unsustainable belief system.

    Bad papers receive bad reviews, and that is a fact of life in the peer-review process.

    One may wonder why climate auditor McIntyre failed to identify the serious flaws in Lindzen and Choi 2009, fails to mention the devastating scientific rebuttal by Trenberth et al 2010 and 3 other papers, and instead resorts to parroting a political blog that is defending Lindzen’s weak defence against the criticism of his flawed climate analysis.

    The real question is: Is Lindzen loosing it?

    • sleeper
      Posted Jun 13, 2011 at 5:19 AM | Permalink

      Re: Rob (Jun 13 04:02),

      Bad papers receive bad reviews, and that is a fact of life in the peer-review process.

      And sometimes bad papers receive puff-ball reviews, which is one of the points of this post. I would think that you would want Lindzen’s flawed paper published for all the world to see, if only to further diminish his credibility. But that’s just me.

    • Posted Jun 13, 2011 at 6:02 AM | Permalink

      The real question is: Is Lindzen loosing it?

      I think loosing it is a very good description of what Lindzen is doing. Atmospheric physics got a bit constrained, in an intellectual straightjacket, and he’s helping it to loosen up with some new ideas. What a good way to put it.

      And who knows, that may not have been exactly what you meant. What you wrote might have been ‘wrong’. But it helped me to think differently about the subject. You’re doing the same as Lindzen, despite your mistake, Rob. Bless you for that.

    • Posted Jun 13, 2011 at 11:19 AM | Permalink

      Wow, defending the indefensible. You apparently are missing the point. The fact is PNAS does not, will not, and never will apply its standards evenly. The bias in PNAS’s publications is plainly evident. It isn’t science. It is a political advocacy journal.

      Your quoting of reviewer one is laughable. “The paper is based on three basic untested and fundamentally flawed assumptions about global climate sensitivity” Yeh, that’s why we so accurately depict climate in our models, because we know so much about it….. oh, wait…

      The quote from reviewer two means nothing.

      Reviewer 3 seems to be insisting on rehashing old debates before science can proceed.

      Reviewer 4 Comment seems reasonable.

      I find it interesting that the argument is that a professor with over 200 publications suddenly forgot what was necessary to achieve publication. It would seem more likely to me that the process was altered to prevent his publication… or you can go with the senility thing… oh, character assassination… never gets old, does it?

  23. Posted Jun 13, 2011 at 4:25 PM | Permalink

    Andrew may have confused the name of Chou/Choi, but Chou still worked extensively with Lindzen on the IRIS hypothesis in the early 2000’s. While Lindzen is not actually advancing the IRIS hypothesis in his recent swing of papers on the radiation budget, IRIS is still a mechanism Lindzen wants to use to explain his results. Chou could, in principle, be a legitimate reviewer but I’ll let others judge if a better reviewer could be sought. It probably could. Happer, on the other hand, is a completely unsuitable candidate for refereeing climate-related subjects. His recent error-and-fallacy ridden article on global warming (see my comment @ http://www.skepticalscience.com/even-princeton-makes-mistakes.html ) should be a clear indication that he has no working knowledge of the subject and is heavily biased on the matter. This is completely unacceptable for a prestigious journal like PNAS.

    What is worse is Lindzen’s characterization of prominent scientists like Dennis Hartmann and Wielicki, both of whom are far better placed to judge the quality of the material at hand than Happer, and both of whom are experts in the data or topic. Lindzen’s being unable to find people who are actually qualified to examine the subject and who will buy his ideas is not an excuse to declare every other possible reviewer an alarmist who is incapable of providing an objective review. This tendency to break everyone off into “pro” or “con” camps would necessarily make any conceivable reviewer either biased or unqualified, and science cannot advance if this thought process is the impression people actually have.

    • Noblesse Oblige
      Posted Jun 13, 2011 at 5:09 PM | Permalink

      Usual RealClimate adhoms. No content. Move on.

    • RomanM
      Posted Jun 13, 2011 at 6:11 PM | Permalink

      Chou could, in principle, be a legitimate reviewer but I’ll let others judge if a better reviewer could be sought. It probably could.

      Talk about moving the goalposts. I hadn’t realized that the possibility of finding better reviewers was a hurdle to be overcome when publishing at PNAS. I presume that this must apply to ALL authors submitting papers … or just to the chosen few? What is the procedure there?

      Note that members of the NAS are permitted to communicate up to 4 papers per year. The members are responsible for obtaining two reviews of their own papers and for reporting the reviews and their responses to them. Note, as well, that rejection of such contributions by the Board of PNAS is a rare event, involving approximately 2% of all contributions.

      But, of course, this is the fair, unbiased and insightful Chris Colose, who declares in his assessment of Prof. Happer:

      His recent error-and-fallacy ridden article on global warming (see my comment @ http://www.skepticalscience.com/even-princeton-makes-mistakes.html ) should be a clear indication that he has no working knowledge of the subject and is heavily biased on the matter.

      Maybe a better place to look at the “working knowledge” of Chris would be at the Air Vent here and continued here where his scientific prowess and powers of assessment of the scientific skills of others are amply displayed.

      Arrogance and bias are poor qualities to be burdened with for someone who is still learning what science is all about.

      • Posted Jun 13, 2011 at 6:41 PM | Permalink

        RomanM,

        I don’t particularly have an issue with Chou (I don’t really know much about him; it just seems like a mildly strange option), but I do have an issue with Happer, and I do have an issue with Lindzen’s repeated mischaracterization of other scientists (in this case, but certainly not limited to, Hartmann and Bruce W.). In this case, for instance, it would be nice for Lindzen to work extensively with observational specialists trained in the nuances of the radiation budget data he is working on, and in fact Wielicki wanted to do this in more detail following the IRIS work but it was Lindzen who did not want to (from my personal correspondence with Dr. Wielicki). Even a couple of years ago Lindzen hit the blogs with one of his theories ignoring data which would not have supported his conclusions (http://chriscolose.wordpress.com/2009/03/31/lindzen-on-climate-feedback/); this is why collaborating with other experts in related fields is necessary.

        And for the record, there are candidates such as Gavin Schmidt recommended in the emails (by PNAS, to Lindzen) who don’t make much sense to me either, since Gavin’s expertise is not in feedbacks and radiation budget studies.

        I also think there’s a lot of fair criticism of Lindzen’s recent work, including his 2009 paper, that has not been adequately answered on his part. I don’t understand what is unreasonable here. I understand the big desire here to be skeptical of everything “warmist,” and that everyone here is very angry because they think they get treated unfairly, but this is all normal scientific protocol.

        P.S: Yes, I encourage other readers to read what I said at the Air Vent about Venus 🙂

        • JamesG
          Posted Jun 13, 2011 at 6:59 PM | Permalink

          You somehow missed the fact that Lindzen was not allowed to reply to critics of his work in official channels, as would be normal, and that this current work was in large part a reply to criticism – and even that was not allowed by the gatekeepers. Nor would Lindzen ever be asked to review the work of Dessler or Trenberth, so there is no quid pro quo. The system is utterly rank. It has actually got worse since climategate exposed the clique’s peer-review shenanigans. Clearly they see just how easy it is to keep any alternative viewpoints out of the literature and thus keep the self-feeding consensus of circular thinking going.

        • Posted Jun 13, 2011 at 7:20 PM | Permalink

          James,

          There are no “gatekeepers,” just people who aren’t going to allow for nonsense in the refereed literature. Don’t use “rejection” as an argument for censorship; use it as a clue that you need to do better work. Unfortunately, this is especially the case with so-called “skeptics” of global warming. Lindzen is a unique case, because unlike virtually everyone else that denies the significance of global warming, he does have an established history of good work and qualifications. As much as MIT would probably like to get rid of him, they can’t, unless he committed some criminal act or something. People have been listening to his negative feedback theories of different flavors since the early ’90s, and they have all been shown wrong very quickly. His unmitigated faith in a low climate sensitivity and horrible track record of statements outside the literature gives other scientists every right and reason to not want him to review their documents.

        • Posted Jun 13, 2011 at 11:39 PM | Permalink

          Really? So inconsistently applying standards isn’t in any way related to censorship… sell it somewhere else; rational people won’t buy it. Apply the standards evenly and no one would have a problem. They don’t and didn’t. It is as simple as that.

        • mikep
          Posted Jun 14, 2011 at 2:28 AM | Permalink

          What about Schmidt 2009, which claimed that McKitrick’s work was invalidated by spatial autocorrelation, but never tested for it? It got a puffball review from Jones, instead of a request for some standard testing, which would have shown there were no serious problems.

        • Venter
          Posted Jun 14, 2011 at 3:14 AM | Permalink

          Not allowing nonsense in refereed literature? How did MBH 98, Mann 08, Schmidt 09 and Steig 09 get published? And how did Schneider’s rag about blacklists get published in PNAS?

        • KnR
          Posted Jun 14, 2011 at 3:31 AM | Permalink

          Your words would mean a lot more if you were willing to take on the Team’s abuse of the peer review process. It was, after all, Jones himself who wrote about how they wanted and planned to manipulate it in their own favor. And when it comes to ‘poor papers’ there has been some awful stuff seen in Nature etc., the validity of which resides not in its contents but, it would seem, merely in the authors’ names, in full support of AGW.

        • Posted Jun 14, 2011 at 3:34 AM | Permalink

          Re: Chris Colose (Jun 13 19:20), ‘There are no “gatekeepers,” ‘

          “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is !
          Cheers
          Phil” (1089318616.txt)

          “It won’t be easy to dismiss out of hand as the math appears to be correct theoretically,” (1054756929.txt)

          “Confidentially I now need a hard and if required extensive case for rejecting”
          (1054748574.txt)

          “Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. If either appears I will be very surprised.” (1080742144.txt)

        • RomanM
          Posted Jun 14, 2011 at 6:58 AM | Permalink

          There are no “gatekeepers,” just people who aren’t going to allow for nonsense in the refereed literature.

          As much as MIT would probably like to get rid of him, they can’t, unless he committed some criminal act or something.

          His unmitigated faith in a low climate sensitivity and horrible track record of statements outside the literature gives other scientists every right and reason to not want him to review their documents.

          Let me see if I have this straight. It appears that we have an inexperienced undergraduate student telling us all how proper science is to be done and evaluating the credentials of people who have been successful working scientists for years… and we are supposed to take these aspersions on their personal character and their professional expertise seriously? You seem to have spent too much time studying the climategate emails to learn the tricks and methods of team climate science.

          You will forgive me if I would rather decide for myself whether someone is worth listening to.

        • Venter
          Posted Jun 14, 2011 at 8:20 AM | Permalink

          How many papers has Chris Colose published and what are his scientific credentials or knowledge to diss Lindzen? What we are seeing is hubris and arrogance of the highest order from somebody who’s basically a nobody compared to Lindzen and other scientists.

        • Posted Jun 14, 2011 at 1:44 PM | Permalink

          Ugh…it’s hopeless. You all need to spend more time off of CA, and go to some conferences or something.

        • TerryMN
          Posted Jun 14, 2011 at 2:24 PM | Permalink

          That’s all you’ve got, Chris? Seriously? Substance, please.

          I’ll grant you that it’s useless to argue from authority as an undergrad, but your proposed action seems pointed in the wrong direction.

        • Posted Jun 14, 2011 at 3:13 PM | Permalink

          Terry, there needs to be a topic of substance developing if I’m going to reply with substance.

          I would be quite happy to discuss Lindzen’s work, feedback theories, etc. I have no interest in ‘justifying’ who I am, a first-year graduate student in atmospheric science, something I can freely admit. I can point out that virtually no one here has significant climate credentials either, but that would be a useless argument on my end, and as I’m sure you’d admit, it would be absurd to declare that it is ‘arrogant’ to evaluate work in climate because of this. As a group of self-appointed ‘auditors,’ we don’t need to go there.

          But I have absolutely no interest in the usual tactics of falling back on topics like Mann, Steig, climategate, or other irrelevant issues (ones which I really don’t care much about). The problem is that everyone here has been indoctrinated into thinking that these things are scientifically significant, which justifies being lazy about learning anything else about the climate system, which is ultimately the end goal. This is why I recommended that people go to conferences, so one can develop a better perspective on what scientists are thinking about and what they think is important. I freely admit that nothing I write here will give you this experience.

          I also do not have interest in conspiracy theories, personal interpretations of unethically-obtained and selectively quoted emails, or personal complaints about why you think life is unfair. In fact, I have a day job at NASA GISS where I need to get scientific work done, and night responsibilities as well, so I actually don’t have time to argue about these things for the sake of arguing.

        • Venter
          Posted Jun 14, 2011 at 11:00 PM | Permalink

          Oh, got it. He’s got a day job at NASA GISS. That explains it. No wonder a first-year undergrad adopts this tone to start dissing any skeptic scientist. True to standards. No interest in Mann, Steig, Schneider et al. and their rubbish and the puffball reviews by pals. But interested enough to come here and post against Lindzen with a lot of ad homs and talk about “garbage not being allowed to be published”, etc. Yes, pull the other one, it’s got bells on it.

          And “selectively quoted” e-mails? Which part of which e-mail was “selectively quoted”? If you can, please point out with facts. It’s a challenge and I’m prepared to put up 100 bucks on it. Go ahead.

          We are old enough and experienced enough to recognise and smell hypocrisy from a mile away, especially in climate science.

        • BMcBurney
          Posted Jun 14, 2011 at 3:55 PM | Permalink

          Chris Colose,

          PNAS rejects Chou on bias grounds that are demonstrably false and rejects Happer as unqualified. At the same time, however, it recommends Schmidt, who is demonstrably less qualified in the particular subject matter, and requires that a choice be made among a group known primarily for their “commitment” to the cause of high sensitivity.

          Doesn’t that raise the question of whether PNAS was really seeking expertise in an unbiased reviewer or seeking a reviewer with a particular attitude towards the subject matter? Can you really defend Schmidt on his qualifications, or Trenberth or Dessler on their objectivity?

        • Posted Jun 14, 2011 at 4:36 PM | Permalink

          BMcBurney,

          First of all, I agree with the PNAS editor in many respects. Given the important nature of climate sensitivity and that this will surely be a high-profile paper, one should ensure the best and most beneficial review process possible. By that, I mean: 1) if Lindzen’s paper is found to lack sufficient evidence, or to rest on poor methodology leading to unwarranted conclusions, it should be rejected or sent back for revision; 2) the reviewers should be well read in the literature of this field and familiar with the technical nuances of the subject, to ensure a good paper that will be beneficial to both Lindzen and the scientific community; and 3) the reviewers should be well placed to provide useful commentary that can lead to a more developed paper.

          I agree with PNAS that Will Happer is an unsuitable candidate for this task, and I’m not really sure why there would be disagreement here. You don’t ask a climate scientist to review papers on plate tectonics, or a brain surgery expert to review dentistry work, etc. A couple of people seem to think that jumping professions in a week is easy if you are smart, but not really. Aside from having no expertise on the matter, he has a track record of many spurious claims and demonstrated lack of familiarity with the subject of climate change, as his recent “The Truth About Greenhouse Gases” article so easily demonstrates. This is not my opinion; it’s a fact. I am agnostic on Chou – I would not object to him reviewing the paper, but I think it best to choose someone who was not a big player in developing the IRIS hypothesis.

          I also agree that Gavin is not a great reviewer choice on this matter either.

          All scientists have “attitudes” toward their particular subject matter. I do hope CA readers realize that we don’t live in a world where we can have expert robots looking at papers. If we live in the black-and-white blogospheric world where people only distinguish between “alarmists” and “denialists”, then everyone will think every conceivable reviewer is either biased or unqualified, by definition. So I don’t see how you propose to get around this.

        • BMcBurney
          Posted Jun 15, 2011 at 12:38 PM | Permalink

          CC,

          I have read your other posts above and understand your position, there is no reason to restate it. I raised a different issue and would like to hear your response to that.

          Again, PNAS’s stated ground for rejecting Chou as a reviewer was bias. The basis for that decision, however, is inconsistent with the facts. Its stated ground for rejecting Happer was that he lacked qualifications in the subject matter.

          At the same time, however, PNAS recommended other reviewers with lesser qualification and/or a demonstrably higher degree of bias. How is this possible unless PNAS is determined, regardless of the quality of paper, to exclude a certain set of views? In other words, what neutral standard could conceivably apply which would result in a rejection of Chou and approval of Schmidt?

          Please note, I am not interested in your opinion of Chou or Happer. I don’t believe anyone in the world (beyond your close friends and immediate family) is interested in your opinion of these men. I certainly don’t want to give you another opportunity to engage in yet another round of stupid insults.

    • Posted Jun 14, 2011 at 7:48 AM | Permalink

      In pursuing and prosecuting criminals, there are two polar extremes in law enforcement: (1) convict every criminal, even if it requires inadvertently convicting a few innocent people; and (2) convict no innocent people, even if it requires inadvertently allowing some criminals to escape conviction. Similarly, in reviewing scientific papers, we could attempt to assure that no specious papers get published even if some good papers get rejected, or attempt to make sure that all good papers get published even if a few poor ones make it into the literature. In this regard we are concerned with how much harm is done (and to whom) by allowing a poor paper to be published, versus how much harm is done by rejecting a good one.

      The problem we face in climatology is that we have a wide divergence of views on what constitutes a good paper. Apparently, there is a large constituency of reviewers who are quite lenient toward submitted papers of the alarmist persuasion, whereas they are very stringent toward skeptical manuscripts. As Chris Colose put it in this thread, they are “people who aren’t going to allow for nonsense in the refereed literature” – “nonsense” being anything that disagrees with the alarmist view. I find it endlessly amusing that Arnold Schwarzenegger (aka “The Terminator”) said: “The debate is over”. When you look at the papers by Dessler, for example, with gigantic conclusions drawn from very short-term noisy data, or the many specious papers by Mann and other members of the paleoclimatic cabal, or hundreds of other climatological papers, you find that they mostly constitute GIGO: typically sophisticated manipulations of lousy data.

      We might ask ourselves: what harm would be done by allowing Lindzen and Choi’s paper to be published in PNAS? All that would happen is that a few members of the “compact majority” (cf. Ibsen, “An Enemy of the People”) would fire off papers purporting to refute Lindzen and Choi. I suppose that a benefit of not publishing Lindzen and Choi would be to relieve them of that burden. Meanwhile most of those of the alarmist persuasion would go their merry way, ignoring Lindzen and Choi, as they have pretty much ignored Lindzen’s previous papers. The hockey stick will be taught to our schoolchildren as if it were factual, and the alarmist view of global warming will prevail.

  24. Posted Jun 13, 2011 at 5:11 PM | Permalink

    Now Eli, not being Lubos, would think it fine if Bill Happer reviewed a bunch of string theory papers, which is the issue here wrt Happer.

    • Posted Jun 13, 2011 at 5:47 PM | Permalink

      This interests me a lot. Behind string theory there is the maths of Ed Witten, who had a major aha moment studying the work of my friend SK Donaldson, with the encouragement of Michael Atiyah.

      Behind atmospheric physics there is what, of equivalent depth?

      • Posted Jun 13, 2011 at 8:27 PM | Permalink

        Eli takes it your point is that Bill Happer should review string theory papers. That would be amusing.

        • Tom Gray
          Posted Jun 13, 2011 at 8:47 PM | Permalink

          What I find amusing is the casual way that people are shown disrespect by all sides in this controversy. Someone’s name is mentioned, and this is the signal for an avalanche of cheap insults. The collective level of maturity in this “debate” must be that of the lower grades in high school.

        • Posted Jun 14, 2011 at 1:12 AM | Permalink

          Eli Rabbett:

          Eli takes it your point is that Bill Happer should review string theory papers.

          No, that wasn’t my point. My point was that it was entirely possible for it to be reasonable for Happer to review Lindzen’s PNAS paper and unreasonable for him to review the latest from string theory or competing attempts to unify relativity and quantum physics.

          I gave my reasons for this. It has to do with the equivalence you were trying to draw between atmospheric physics and string theory. If you want me to spell that out, I thought that attempt barmy.

          Even those who question string theory as the ultimate expression of reality will acknowledge the brilliance of Witten’s maths. I was asking you if there was anything or anyone equivalent in atmospheric physics. Expected answer: no. Thus Happer might well be able to contribute usefully to the debate in one area – Lindzen, who has spent his life in the field, explicitly wrote that he had – but not the other. For all I know Happer might be able to contribute to both. But the two areas are not equivalent. That was my point.

  25. Tony Hansen
    Posted Jun 14, 2011 at 8:27 AM | Permalink

    Eli,
    I looked up ‘James Hanson Astrophysicist’ and found Duncan, Jeanne and Margaret… but no Jim!
    So then I looked up ‘Josh Halporn Scientist’…no luck there either.

    • Posted Jun 15, 2011 at 8:46 PM | Permalink

      Are you purposefully misspelling their names?

      • Andrew
        Posted Jun 15, 2011 at 11:48 PM | Permalink

        Of course he is. It’s a little joke about Eli’s mis-spelling of “Hansen”.

  26. EdeF
    Posted Jun 14, 2011 at 8:33 AM | Permalink

    Re: The Ratted One

    I can think of no one who has more knowledge and experience in the transmission of infrared radiation through the atmosphere than Prof. Will Happer of Princeton. He worked on SDI in the 1980s and was a top advisor to DOD. His expertise would be the transmission of lasers (of different wavelengths) through a turbulent atmosphere, including the effects of thermal blooming due to the laser heating the atmosphere. Prof. Lindzen’s project is a very elegant science test: measure the amount of solar radiation impinging on the earth at the top of the earth’s atmosphere, then measure the amount of thermal radiation emitted by the ocean at about 290 K that makes it back up through the atmosphere, including through the clouds, water vapor, CO2, NO2, etc. Sea water at 290 K emits infrared radiation mainly at around the 10 micron level. Guess what, there happens to be a nice transmission window between 8 and 12 microns that we term the long IR. Transmission in this band would be maybe 80% in the desert on a low-humidity day and much less, maybe 60%, on a cloudless day in the tropics. IR transmission in high absolute humidity is non-existent – the attenuation is about 70 dB in fog. Increasing water temperature leads, in the absence of clouds, to an increase in the loss of IR energy (mainly longwave) to very cold space. Lindzen’s test is simple and brilliant.
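
    Two of the figures above are easy to sanity-check with standard relations (textbook physics, not EdeF’s own calculation): Wien’s displacement law puts the peak emission of a roughly 290 K surface near 10 microns, inside the 8–12 micron window, and 70 dB of attenuation corresponds to a transmitted fraction of one part in ten million, which is why transmission in fog is described as non-existent. A short Python check:

        # Wien's displacement law: peak wavelength (um) = b / T, with b ~ 2898 um*K.
        WIEN_B_UM_K = 2898.0
        sst_kelvin  = 290.0
        peak_um = WIEN_B_UM_K / sst_kelvin        # ~10 um, inside the 8-12 um window

        # Decibels to fractional transmission: 70 dB of loss leaves 10**(-70/10) of the signal.
        fog_fraction = 10.0 ** (-70.0 / 10.0)     # 1e-7

        print(f"peak emission wavelength at {sst_kelvin:.0f} K: {peak_um:.1f} um")
        print(f"fraction transmitted through 70 dB of attenuation: {fog_fraction:.0e}")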

  27. PaulS
    Posted Jun 14, 2011 at 8:38 AM | Permalink

    Steve,

    ‘The reviewer continues with the following list of problems with theory in the area: [quote]
    Let’s stipulate that all of this is true. Shouldn’t this then be stated prominently in IPCC?

    And doesn’t reviewer 2 prove too much here? If all of these problems need to be solved prior to publishing an article in the field, wouldn’t this apply to all articles? Not just ones the implications of which are low sensitivity.’

    The quote reads as if it is addressing potential pitfalls with Lindzen’s particular approach rather than a general list of problems with understanding clouds. That is, the criticisms may not be relevant to other papers on clouds, in which case there would be no requirement to address them in other papers or an IPCC report.

    ‘The IPCC SPM says “Cloud feedbacks remain the largest source of uncertainty” but this hardly does justice to the long list of problems that worry reviewer 2.’

    The clue is in the title there: Summary for Policymakers. I think said policymakers would be slightly nonplussed if the document suddenly became a detailed technical dissertation on problems understanding cloud dynamics in relation to radiative physics.

  28. Posted Jun 14, 2011 at 4:56 PM | Permalink

    As much as I disagree with Chris Colose, I give him credit for revealing who he is and standing behind his convictions openly, rather than hiding behind a pseudonym.

  29. Posted Jun 14, 2011 at 5:09 PM | Permalink

    The quandary that we all face in science is that science has become so extensive and complex that no single person can digest it all, even within the confines of a narrower field such as climatology. Hence we tend to rely on “experts” whom we trust as neutral, independent specialists who can delve far deeper into specific narrow topics than most of us can. Unfortunately, that trust has been repeatedly violated by the exposure of closely knit cabals with a private agenda to alarm the public regarding future global warming; they have done bad science, manipulated and omitted data, and banded together to eliminate alternate views. They have published emphatic conclusions based on minimal, and often untrustworthy, data. Our choice is either to count how many important people side with the alarmists (as Oreskes has done) or to try to judge as best we can for ourselves. Amid this confusion, one person stands out: Steve McIntyre penetrates to the depths of the details of many of these studies and shows the fallacies of the entrenched power in a credible way. I salute Steve and all he has done for us.

  30. TerryMN
    Posted Jun 14, 2011 at 6:47 PM | Permalink

    Chris:

    You all need to spend more time…

    The problem is that everyone here has been indoctrinated into thinking that…

    My problem is your repeated use of absolutes, and again, lack of substance. I don’t know that it’s the root problem, but it certainly seems to betray your thought process, and tends to cause me to “read by” you. Just my opinion, but you may want to consider how you can communicate more effectively.

  31. Rob
    Posted Jun 17, 2011 at 2:03 AM | Permalink

    Regarding this new Lindzen and Choi 2011 paper, as far as I can see on the various blogs, there is a great deal of talk about the PNAS rejection, but very, very little about the science in the paper.

    Lindzen and Choi obtain different feedback numbers from the same ERBE data than Trenberth 2010 and two other papers do, and Lindzen claims (unsurprisingly) that his method reproduces the feedback numbers more accurately. When I looked at the details of his method, however, I found something very concerning:

    The Lindzen and Choi method of doing FLUX/SST analysis (called “lead and lag” by Lindzen) seems to have a (strong?) bias towards negative feedback. Here is why: L&C analyze fragments of SST change that are either rising or falling, and then measure the FLUX response over the same period. No problem there; it has been done many times before by numerous other scientists. The difference is that Lindzen looks back and forth (lead and lag) in time and picks the FLUX response that has the highest correlation with the SST change.

    First, remember that the FLUX (response) has significant noise on it. If you do not look back and forth in time (no lead or lag), then on average the FLUX response will tell you the average FLUX response to that SST change. But also remember that the FLUX response with the highest correlation with SST will always be the response that starts at one extreme and ends at the other extreme. All other responses will correlate less, since they will show opposite slopes at the start and/or end points, which obviously do not correlate well with the SST. So, if you are allowed to look back and forth in time through that noisy signal, you have a high chance of finding a lead or lag time where the FLUX response is larger (and thus correlates better) than the no-lag response alone. The Lindzen and Choi method will therefore, for each fragment of SST analysed, tend to find the lead or lag time where the FLUX response is the largest!

    When the FLUX response is larger for a given SST change, the calculated feedback will be lower, and thus this method has a bias towards lowering the feedback calculated from the ERBE data. Note that the effect (bias) will be stronger the more lead or lag time is allowed, since there will be more start and end points in the noise to consider, and the largest response will correlate best. So for short lag times and strong negative feedback (a large FLUX response), Lindzen’s method will be approximately correct. But for no feedback or positive feedback, the lead-lag bias will be very significant.

    In fact, Lindzen himself mentions that his method works best for large negative feedbacks. He also mentions that his method works less well for small feedbacks (and, consequently, large lag times), which, as I showed above, is consistent with increased bias.

    Interestingly enough, he does not show what feedback parameter number he obtains for a system with no feedback or positive feedback, in which case the lead-lag-noise bias will be greatest.

    Needless to say, Lindzen may have drawn some very premature conclusions in discarding the work of other scientists (Trenberth et al., Dessler et al.) who do NOT use his (biased) lead-lag-correlate method.
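
    Rob’s lead-lag point lends itself to a quick Monte Carlo check. The following is a minimal Python sketch of the selection effect he describes, not Lindzen and Choi’s actual code or data: every number in it (a 2 W/m2 per K flux response, white noise, 7-month segments, a ±3-month lag search) is an illustrative assumption. For each synthetic SST segment it compares the regression slope at zero lag with the slope at whichever lead or lag maximizes the SST–FLUX correlation:

        import numpy as np

        rng = np.random.default_rng(0)

        lambda_true = 2.0   # assumed "true" flux response, W/m^2 per K (made-up number)
        noise_sd    = 1.0   # flux noise, W/m^2 (made-up number)
        seg_len     = 7     # months in one rising/falling SST segment (made-up number)
        max_lag     = 3     # months of lead/lag searched (made-up number)
        n_trials    = 5000

        slopes_zero_lag, slopes_best_lag = [], []

        for _ in range(n_trials):
            # One synthetic segment: SST rises linearly by 0.5 K; the window is padded
            # with max_lag months on each side so that shifted flux windows exist.
            t = np.arange(seg_len + 2 * max_lag)
            sst_full  = 0.5 * t / seg_len
            flux_full = lambda_true * sst_full + rng.normal(0.0, noise_sd, t.size)

            sst = sst_full[max_lag:max_lag + seg_len]   # the SST segment itself

            def fit(lag):
                flux = flux_full[max_lag + lag:max_lag + lag + seg_len]
                return np.polyfit(sst, flux, 1)[0], np.corrcoef(sst, flux)[0, 1]

            slopes_zero_lag.append(fit(0)[0])

            # Keep the slope from whichever lead/lag correlates best with the SST change,
            # which is the selection step the comment above objects to.
            best_lag = max(range(-max_lag, max_lag + 1), key=lambda lag: fit(lag)[1])
            slopes_best_lag.append(fit(best_lag)[0])

        print("assumed true slope         :", lambda_true)
        print("mean slope at zero lag     :", round(float(np.mean(slopes_zero_lag)), 2))
        print("mean slope at best-corr lag:", round(float(np.mean(slopes_best_lag)), 2))

    With these made-up settings the zero-lag slope averages to the assumed true value, while the best-correlation slope averages noticeably higher; the size of the gap depends entirely on the toy noise level and segment length, so the sketch only illustrates the direction of the effect, not its magnitude in the real ERBE data.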

  32. David Weisman
    Posted Jun 17, 2011 at 7:27 AM | Permalink

    Usually when you extensively discuss a published or unpublished paper you examine the statistics supporting said paper carefully, even if that paper is not the whole subject of your post. I’ve never seen you discuss a paper by a member of the team, and compare it to other papers published in the same journal which did not involve climate science, although I’ve read about flawed statistics in other papers as well.

2 Trackbacks

  1. […] here: Lindzen's PNAS Reviews « Climate Audit This entry was posted in Reviews and tagged attachments, bradley, knappenberg, lindzen, […]

  2. […] 10 days ago, we discussed the PNAS reviews of the recent submission by Richard Lindzen, a member of the National Academy of […]