Troy: Dessler(2010) “artifact of combining two flux calculations”

Troy_CA has another excellent contribution to the continuing analysis of Dessler 2010 and Dessler 2011 (h/t Mosher for alerting me)

CA readers are aware that the sign of the regression coefficient from Dessler 2010 is reversed when CERES clear sky is used in combination with CERES all sky, instead of replacing CERES clear sky with ERA clear sky. Dessler purported to justify the substitution on the basis of a suggested bias in the CERES clear-sky, referring to Sohn and Bennartz 2008.
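For readers who want the mechanics, the calculation at issue can be sketched as follows. This is a toy illustration with synthetic data (all series, coefficients, and magnitudes are invented, not the actual CERES/ERA anomalies); it shows only how the choice of clear-sky series enters the regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # roughly ten years of monthly anomalies

# Synthetic stand-ins for the real series (illustrative only):
dTs = rng.normal(0.0, 0.5, n)                 # surface temperature anomaly (K)
all_sky = 2.0 * dTs + rng.normal(0.0, 0.3, n) # all-sky TOA flux anomaly (W/m^2)
clear_A = 3.0 * dTs + rng.normal(0.0, 0.3, n) # one clear-sky product
clear_B = 1.0 * dTs + rng.normal(0.0, 0.3, n) # an alternative clear-sky product

def cloud_feedback_slope(all_sky, clear_sky, dTs):
    """Regress the cloud radiative forcing anomaly (all-sky minus
    clear-sky flux) against the surface temperature anomaly; return
    the slope (W/m^2/K)."""
    dCRF = all_sky - clear_sky
    return np.polyfit(dTs, dCRF, 1)[0]

# Pairing the same all-sky series with two different clear-sky series
# can flip the sign of the regression slope -- the sensitivity at issue.
print(cloud_feedback_slope(all_sky, clear_A, dTs))  # negative for this setup
print(cloud_feedback_slope(all_sky, clear_B, dTs))  # positive for this setup
```

The point of the sketch is only that the slope is a difference of two large, noisy series, so swapping one input can change its sign.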

In my opinion, this passim reference hardly justifies a failure to disclose the adverse results using CERES clear sky. The adverse results should have been disclosed and discussed (just as Spencer and Braswell 2011 should have shown all relevant models in their justly criticized figure.)

Nick Stokes, rather predictably, swooned over Dessler’s supposed wisdom in replacing CERES clear sky with ERA clear sky. Some quotes:

What the reanalysis can then do is make the moisture correction so the humidity is representative of the whole atmosphere, not just the clear bits. I don’t know for sure that they do this, but I would expect so, since Dessler says they are using water vapor distributions. Then the reanalysis has a great advantage.

Instead of simply accepting this sort of arm-waving as proof, Troy_CA has carried out an insightful analysis, with some important conclusions that totally refute Nick’s swoon and, in the process, directly question the replacement of CERES clear sky with ERA clear sky and thus the conclusions of the original article:

the “dry-sky bias correction”, if it exists in ERA, accounts for very little of the difference we see between ERA_CRF and CERES_CRF

The bulk of these CERES_CRF vs. ERA_CRF differences come from this different value for the effective surface albedo. Note that this has nothing to do with a “dry-sky” longwave water vapor bias.

to me there seems to be little ambiguity that the magnitude of the positive feedback in Dessler10 is more of an artifact of combining two flux calculations that aren’t on the same page, rather than some bias correction in ERA-interim.

Following practices of critical climate blogs (I prefer “critical” to “skeptical”), Troy has commendably archived source code.

PS. I’ve obtained some source code from Dessler on some of the calculations in Dessler 2011 and will be posting on that.

PPS. Note that criticizing the analysis of Dessler (2010) does not imply that the conclusions of Spencer and Braswell are “right” (or that they are “wrong”).


  1. tetris
    Posted Sep 21, 2011 at 2:15 PM | Permalink

    The discourse on this blog, and the detailed analysis of what we think we understand, what we are told we are allowed to know, and what we are told we cannot know, are all important. From one inquisitive mind to another, thx.

    OT, I know:
    Ref: the latest serve-up of what is supposed to be an “evidence paper”, showing that Trenberth’s missing warming has mysteriously disappeared deep [very deep] below the ocean surface, without any trace of that physical/chemical process somehow showing up in the SSTs, and without any evidence other than his computer “models”.
    Part of Trenberth’s ongoing efforts to reverse the null hypothesis, perchance?

    PS: pls feel free to nix as you see fit.

  2. TerryMN
    Posted Sep 21, 2011 at 3:44 PM | Permalink

    I believe the proverbial ball is in Mr. Stokes’ court. Nick?

    • Steven Mosher
      Posted Sep 21, 2011 at 4:27 PM | Permalink

      As some of us noted, Dessler made a cursory argument for the analytical choice he made: he pointed to a reference. I do not expect that ANY reviewer chased down the reference to see if the argument held any water. I think that most reviewers would consider inclusion of the reference ample argument. Also, the word limits don’t allow Dessler to expand his argument in the primary text.

      The ball properly belongs in Dessler’s court. Or maybe Troy and Nick and Steve should do a paper. THAT would be an interesting affair. I think it’s interesting to put people together who have disagreed and see what they can come up with. FWIW

      • TerryMN
        Posted Sep 21, 2011 at 5:29 PM | Permalink


        Nick, you’re arm waving. Paper/Peer Review/You.

    • Posted Sep 21, 2011 at 4:45 PM | Permalink

      “to see if the argument held any water. “ 🙂

      I’ll read carefully what Troy has said, because, as usual, there is much there. But I see a strawman being waved:
      “If our hypothesis is that ERA_CRF is simply CERES_CRF corrected for dry-sky bias”

      That, if anything, is what is being refuted. And it wasn’t my hypothesis. All I was saying is that there is a clear dry-sky bias, and the original post should have dealt with it, instead of just going on to use the CERES difference regardless. And ERA (probably) does deal with it. It may well do other things as well.

      • Steven Mosher
        Posted Sep 21, 2011 at 5:31 PM | Permalink

        I think we can both agree that Dessler’s bald reference to a bias in the dataset may have satisfied the reviewers (who cannot be expected to run down every reference), but that does not logically imply that we should be satisfied with the analytical choice he made. Especially when the choice turns a positive into a negative. It’s a simple sensitivity test, much like the test you performed on some of Bart’s work, by dropping data and seeing that the answer changed. That kind of difference should engender curiosity. Was Dessler’s choice correct? Was it justifiable? Is the source he relies on reliable? My point would be this: that kind of difference makes me curious. The fact that I am curious doesn’t settle any matter. It raises a matter. The fact that you don’t think it is worth checking doesn’t settle a matter for me. The difference makes me curious. Words will not make that disappear. Running the data down to the source makes the curiosity go away. That’s merely the way I operate.

        Now, if the choice Dessler made turned 0.5 into 0.52, then I would not be curious. It would not be worth my time. But if the choice turned 0.5 into -0.5, then I’m curious. And as I said, a mere cite doesn’t make that go away. Words about that cite, even words from that cite, don’t make the question mark vanish for me. They just don’t. They might make the question mark go away for you, but for me the question mark stays there till I can actually go look at the whole chain of evidence.

      • bender
        Posted Sep 21, 2011 at 6:14 PM | Permalink

        I’ve seen Nick Stokes score the odd point on Steve, but never a full set, let alone a match. Still, I value his participation here.

      • Posted Sep 21, 2011 at 6:15 PM | Permalink


        My intent was not to set up a strawman. The post spawned out of our comments on the previous thread, where after your comment I linked to, I noted:

        If indeed all else was equal between the ERA-interim analysis and CERES clear-sky, and the ERA-interim analysis uses the same methods/models described in those references, and it indeed performed the humidity adjustment, then obviously it would be better and an improvement on CERES.
        But based on the differences we see here (particularly with the SW fluxes), obviously it isn’t the case.

        You responded with:

        I agree that the reanalysis corrects one major thing, but brings in other differences. And probably that Dessler should have said more about that. I can’t at the moment see how the SW flux contrast makes that “obviously not the case”.

        We were coming quite close to agreement, and I wanted to show in my post why this was “obviously not the case” (that all else was equal between ERA vs. CERES except the humidity adjustment).

        I do agree that this is all a bit unfair to you, as you acknowledge that there are likely other differences, and mention that you’re not sure whether ERA actually performs the adjustment. As Mosher has suggested, I think we should try to come to an agreement. What do you think of the following points?

        1) There is a slight dry bias in using CERES to calculate the absolute CRF.

        2) How this biases the changing CRF (i.e. the cloud feedback) is uncertain, but it is probably going to bias it positive.

        3) The ERA-interim clear-sky fluxes may correct some of this dry-sky bias.

        4) IF ERA-interim clear-sky fluxes do correct some of this dry-sky bias, the effect is tiny, and nearly unnoticeable among the differences between CERES_CRF and ERA_CRF (as described in my post).

        5) There are significant differences in the clear-sky shortwave fluxes between CERES and ERA, as shown in the difference in effective clear-sky albedos, and these differences form the bulk of the difference in CERES_CRF vs. ERA_CRF (also shown in my post).

        6) #5 is unrelated to any dry bias in clear-sky.

        7) Whichever clear-sky albedo is correct in #5, it is only from using the inconsistent values and combining the two different “datasets” that you get a result similar to Dessler10. This suggests the result is an artifact of using the two different fluxes.

        If we agree on #1-7, perhaps going forward we can look at ways to adjust for the slight dry bias in CERES clear-sky without introducing the problems of ERA clear-sky, and maybe even a paper (who knows?) could be in the works.
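To illustrate the leverage in point 5: the effective clear-sky albedo is the ratio of reflected to incoming shortwave at TOA, and even a small disagreement between products translates into fluxes comparable to the signal being regressed. A sketch with invented numbers (these are not the actual CERES or ERA albedo values):

```python
# Illustrative numbers only (W/m^2); not actual CERES or ERA values.
sw_down = 340.0      # mean incoming solar at TOA
alb_ceres = 0.145    # hypothetical CERES clear-sky effective albedo
alb_era = 0.150      # hypothetical ERA clear-sky effective albedo

def clear_sky_reflected(albedo, sw_down):
    """Reflected clear-sky shortwave implied by an effective albedo."""
    return albedo * sw_down

diff = clear_sky_reflected(alb_era, sw_down) - clear_sky_reflected(alb_ceres, sw_down)
print(diff)  # a 0.005 albedo difference is ~1.7 W/m^2 of clear-sky SW
```

Since the albedo difference enters the CRF calculation at full strength, it dwarfs any plausible dry-sky humidity correction in the longwave.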

        • Posted Sep 21, 2011 at 7:01 PM | Permalink

          Re: troyca (Sep 21 18:15),
          I have a busy day ahead (morning here), so a really considered response may have to wait until tomorrow. But I think your list approach is quite useful. I’m not sure if the dry-sky bias is small. It seems you’re regressing against the relatively small differences in whole-globe whole-month precipitable moisture. It’s not clear to me what that means.

          I’ve wondered about the basis for subtracting the clear-sky from whole-sky at all. Should there be a weighting for cloudiness? What if (notionally) almost the whole earth was cloudy for a month? Just a thought experiment in trying to see what your CRF diff vs moisture really tells us.

        • bender
          Posted Sep 21, 2011 at 8:00 PM | Permalink

          Applause = ON

        • Posted Sep 22, 2011 at 11:19 AM | Permalink


          Re: the actual significance of the cloud feedback from the change in clear-sky - all-sky (and cloudiness weighting), this is an interesting point to consider (I’ve been pondering it myself), but as you know this is the method that Dessler10 uses, claiming that “this definition of the cloud feedback is a standard approach for quantifying feedbacks [26]”. That reference 26 might be a good place to start when looking more in-depth at it, and while I’m happy to pursue this avenue out of interest, understand that it is unrelated to the critiques I’ve made of Dessler10 up to this point.

          Re: the whole-globe water vapor thing, if you look at the figures in the SB08 “dry-bias” paper (particularly figure 5), the effect seems to be global, and we should be able to detect the difference in global water vapor. Yes, there are areas where the effect is stronger (northern hemisphere mid-latitudes), yet it seems rather impossible that the dry bias would show up in such a way as to be correlated with temperature (as it must if it is going to create a bias in the cloud feedback calculations) but uncorrelated with global water vapor, particularly when global water vapor and temperature are so strongly correlated. Still, if you want to propose a different test, such as using the precipitable water for the midlatitudes of the northern hemisphere, I welcome your input.

          I would also note that while the “perfect” test may discover some greater percentage of the difference between ERA-clear sky and CERES-clear sky coming from a humidity correction, the bulk of the difference still comes from the short-wave clear-sky fluxes, unrelated to any dry bias referenced.
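The test described above can be sketched generically: regress the difference between the two clear-sky products against global precipitable water and see how much of the difference it explains. The series below are synthetic (names and magnitudes invented), purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120

# Synthetic monthly anomalies (illustrative only, not the actual series):
tpw = rng.normal(0.0, 1.0, n)                    # global precipitable water anomaly (mm)
flux_diff = 0.05 * tpw + rng.normal(0.0, 0.5, n) # ERA minus CERES clear-sky LW (W/m^2)

# If ERA's clear-sky fluxes embodied a humidity ("dry-sky") correction,
# the ERA-minus-CERES difference should scale with water vapor.
slope = np.polyfit(tpw, flux_diff, 1)[0]
r = np.corrcoef(tpw, flux_diff)[0, 1]
print(slope, r ** 2)  # a small slope / low r^2 argues against a large correction
```

The same scaffolding works for regional variants (e.g. restricting to northern-hemisphere mid-latitudes, as suggested above) by subsetting the series before the fit.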

        • Posted Sep 25, 2011 at 7:31 AM | Permalink

          Troy, the more I think about it, the less I understand why regressing against TPW makes sense. The bias argument is that clear sky regions are drier and so presumably have higher outgoing LW (coming from lower, warmer). Why should TPW, a global average, help?

          You could say that maybe when TPW is high the differences might be proportionately higher, but the TPW changes are proportionately small, and are also unlikely to be globally distributed in any one month. In fact, I suspect the fluctuations are due to monsoons.

          You’ve mentioned some spatial arguments from SB08, but there’s no spatial data here. I just can’t see it.

          A better regressor might be total cloudiness. But it would need careful interpretation.

        • Geoff Sherrington
          Posted Sep 22, 2011 at 10:02 PM | Permalink

          It seems that radiation comparisons are being made on a whole of sky basis and that this introduces complications. Ignorant question – is it possible to compare clear sky with cloudy sky by successive samplings along a satellite traverse, which constrains a number of variables that otherwise enter into a whole of sky comparison? I’m still concerned by S/N ratios in these recent papers.

        • Posted Sep 25, 2011 at 7:50 AM | Permalink

          Belatedly responding to your list:
          1) Slight dry bias – I don’t see why it should be slight.
          2) Bias positive? I don’t know how the effect on the trend can be determined.
          4) No – I don’t agree with the logic of Fig 1, and Fig 2 has causation issues. I think you suggest that clear-sky albedo changes are not cloud related, but they could well be. I suspect a big factor in monthly fluctuations is just how cloudy it has been over land relative to sea. Land albedo is much higher and if some goes missing from the clear-sky set for a cloudy month, it makes a difference.
          5,6) As for 4.
          7) Not sure.

  3. Philh
    Posted Sep 21, 2011 at 3:56 PM | Permalink

    I agree with your opinion that you prefer the term “critical” rather than “skeptical.” It seems a perfect fit for what you have done over the years and continue to do.

  4. PaulM
    Posted Sep 21, 2011 at 4:49 PM | Permalink

    I would say that the ball is in Andrew Dessler’s court:
    “…the magnitude of the positive feedback in Dessler10 is more of an artifact of combining two flux calculations that aren’t on the same page, rather than some bias correction in ERA-interim. I would welcome any comments to the contrary. ”

    In fact the balls seem to be piling up in Dessler’s court following Steve’s recent cross-court volleys showing that (a) a slightly different version of the data turns Dessler’s positive feedback into a negative one, (b) introducing a time delay makes the feedback negative, and (c) Dessler’s alleged positive feedback has a statistically meaningless adjusted r^2 of 0.01 (adjusted meaning adjusted for finite N).
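For reference, the adjustment mentioned in (c) is the standard finite-sample penalty applied to a raw r^2; the input values below are illustrative, not the actual figures from the regression:

```python
def adjusted_r2(r2, n, p=1):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1),
    penalizing the raw R^2 for finite sample size n and p predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# e.g. a hypothetical raw r^2 of 0.02 over 120 monthly points, one predictor:
print(adjusted_r2(0.02, 120))  # slightly smaller than the raw value
```

With one predictor and ~120 monthly points the penalty is mild, so an adjusted r^2 near 0.01 implies the raw fit was itself negligible.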

    • Tom Gray
      Posted Sep 21, 2011 at 5:05 PM | Permalink

      (b) introducing a time delay makes the feedback negative,

      It is more than that. It is a rejection of Dessler’s method of regression. Dessler used an incorrect method. At least that is how I see the controversy.

      • Steve McIntyre
        Posted Sep 21, 2011 at 5:16 PM | Permalink

        I remind readers that Dessler has responded promptly to my inquiries and has been very courteous about this process.

        • bender
          Posted Sep 21, 2011 at 5:23 PM | Permalink

          There’s nothing wrong with making mistakes when you’re willing to subject your work to external scrutiny.

        • Graeme W
          Posted Sep 21, 2011 at 5:26 PM | Permalink

          That’s one thing I did note, as well as the fact that Dessler and Spencer are also maintaining a conversation.

          In my opinion, Dessler is working the way a scientist should. He’s making arguments, defending positions he’s taken, attacking positions he believes are wrong… but doing it all without dismissing those who disagree with him.

          He’s communicating and interacting with those who disagree, and doing so in what seems to be a professional manner. He is to be applauded for that attitude.

        • Steven Mosher
          Posted Sep 21, 2011 at 5:34 PM | Permalink

          yes. more of that.

          I find this whole culture to be very odd. I’m used to a culture where we were expected to work with people who violently disagreed with our ideas. You might not like them as a person, but you knew how to put your feelings in a box and do the damn math.

        • Ron Cram
          Posted Sep 21, 2011 at 8:59 PM | Permalink

          I mostly agree with you. I have defended Dessler on other blogs, but I also think Dessler’s youtube video was out of character for him. Saying Spencer did not use real data was a bit much. But overall, his behavior is much better than most on the Team.

        • bender
          Posted Sep 21, 2011 at 5:56 PM | Permalink

          One wonders how much effort it would take to comb through AR4 and rank cited literature as to their level of compliance with McIntyre-like criteria for disclosure / due diligence. Full. Partial. Not. One wonders what proportion of cited literature would comply fully. Would it be greater or lesser than 10%? 25%? 50%? Any guesses out there? A list of fully compliant studies might prove useful in helping to inform writers & reviewers of AR5.

        • jorgekafkazar
          Posted Sep 21, 2011 at 6:10 PM | Permalink

          Yes, this is one of the reasons that my opinion of Dessler has been going up, not down, during this controversy.

        • Steven Mosher
          Posted Sep 21, 2011 at 6:33 PM | Permalink


        • TAC
          Posted Sep 21, 2011 at 7:32 PM | Permalink

          Ditto 2.

  5. Geoff Sherrington
    Posted Sep 21, 2011 at 8:38 PM | Permalink

    Please keep an eye on errors, especially bias, when these large numbers are subtracted. Ref Steve’s “Some Simple Questions” below and my selected quote Posted Sep 17, 2011 at 6:45 PM e.g. a part

    ” The total error in TOA outgoing clear-sky LW radiation in a region is sqrt(1^2 + 1.74^2 + 0.7^2 + 1^2 + 2.75^2), or approximately 3.6 W m^-2.”

    If this error is typical of the measurements being used (and others are given), then there is not much confidence in looking for measurement differences of 0.5 W m^-2.
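The quadrature sum in the quote can be checked directly (independent error components add in quadrature):

```python
import math

# Component error terms quoted above (W m^-2)
components = [1.0, 1.74, 0.7, 1.0, 2.75]

# Independent errors combine as the square root of the sum of squares.
total = math.sqrt(sum(e ** 2 for e in components))
print(round(total, 2))  # approximately 3.6 W m^-2
```

A combined error of roughly 3.6 W m^-2 is about seven times the 0.5 W m^-2 differences being sought, which is the S/N concern raised here.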

  6. Patrick M.
    Posted Sep 21, 2011 at 11:59 PM | Permalink

    I think I can speak for the whole world when I say, Thank you, Nick, Steve, Troy, Mosher et al. for approaching climate science as scientists! You don’t have to agree, just be fair.

  7. Posted Sep 22, 2011 at 5:15 AM | Permalink

    I recommend reading the recent GRL paper of Soden and Vecchi. It’s model-based, but very informative on feedback/forcing, methods for CRF handling, and has an elaborate presentation of results.

    • tetris
      Posted Sep 22, 2011 at 6:42 AM | Permalink

      I had a look at the paper. The principal problem remains that it is multi-model based, which leaves us with the incontrovertible GIGO issue inherent in all model-based studies. [See my comment above regarding the recent model-based study (with NB Trenberth as co-author) that purports to show that the missing heat is hiding in the deep ocean.]

  8. Vincent Guerrini PhD
    Posted Sep 22, 2011 at 3:36 PM | Permalink

    Some edifying information which should put this to rest (or not?)

  9. David L. Hagen
    Posted Sep 22, 2011 at 7:48 PM | Permalink

    Judith Curry has posted “Cloud Wars” at Climate Etc. including Richard Allen, Dessler, Spencer etc.

  10. Kenneth Fritsch
    Posted Sep 23, 2011 at 11:26 AM | Permalink

    I am not at all sure what Troy does professionally, but I think we have seen the lack of sensitivity testing in climate science papers (testing later done by those outside the science) as a problem for that science. I suppose one not writing these papers should tread carefully in attempting to understand why these tests, which seem rather obvious after the fact, were not undertaken by the original author(s). I suspect if we saw the same number of countervailing papers (to the AGW consensus) we might see these same problems.

    I suspect it has to do with the author attempting to make a quick point or counter point. As long as the authors involved are willing to rationally discuss the results of these tests, I think the science progresses. If the authors, on the other hand, appear to fear that admitting a mistake (and correcting it) puts doubt into a general thesis that they might hold, and thus ignore or obscure the critics, I think the science suffers.

One Trackback

  1. […] Layman Lurker called my attention to TroyCA’s latest post on the Spencer/Dessler cloud data.  It is very interesting so I asked permission to crosspost here which he has granted.  In the meantime Steve McIntyre continued the discussion here. […]
