The Stone in Trenberth’s Shoe

Like most of us, I’ve been a bit taken aback by the ritual seppuku of young academic Wolfgang Wagner, formerly editor of Remote Sensing, for the temerity of casting a shadow across the path of climate capo Kevin Trenberth. It appears that Wagner’s self-immolation has only partly appeased Trenberth, who, like an Oriental despot, remains unamused.

Spencer and Braswell 2011, the stone presently in Trenberth’s shoe, is, to a very considerable extent, a critique of Dessler 2010 (Science). Over the past few days, I requested data from the authors of both articles and was promptly supplied with it by both. (I remind readers that Dessler, almost uniquely in the climate community, agreed with my request that IPCC AR4 Review Comments be placed online, rather than IPCC’s original plan to place one paper copy at Harvard Library).

Dessler 2010 argued (against predecessor Spencer and Braswell 2010) that there was a positive cloud feedback as follows:

The cloud feedback is conventionally defined as the change in ∆R_cloud per unit of change in ∆T_s. Figure 2A is a scatter plot of monthly values of ∆R_cloud versus ∆T_s, calculated using ECMWF interim meteorological fields. The slope of this scatter plot is the strength of the cloud feedback, and it is estimated by a traditional least-squares fit to be 0.54 +- 0.72 (2σ) W/m2/K (the slope using the MERRA is 0.46 +- 0.75 W/m2/K). Because I have defined downward flux as positive, the positive slope here means that, as the surface warms, clouds trap additional energy; in other words, the cloud feedback here is positive.
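The calculation Dessler describes is just an ordinary least-squares slope with a 2σ interval. As a minimal sketch (in Python rather than the R used in this post, with synthetic data standing in for the CERES/ECMWF series):

```python
import numpy as np

def feedback_slope(dTs, dRcloud):
    """OLS slope of dRcloud on dTs with its 2-sigma interval.
    With downward flux defined as positive, a positive slope is read
    as a positive cloud feedback."""
    x = np.asarray(dTs, dtype=float)
    y = np.asarray(dRcloud, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # standard error of the slope from the residual variance
    se = np.sqrt((resid @ resid) / (n - 2) / ((x - x.mean()) ** 2).sum())
    return slope, 2.0 * se

# synthetic illustration: a weak signal buried in noise, as in Figure 2A
rng = np.random.default_rng(0)
dTs = rng.normal(0.0, 0.2, 120)                  # monthly temperature anomalies (K)
dRcloud = 0.5 * dTs + rng.normal(0.0, 1.0, 120)  # cloud flux, mostly noise (W/m2)
slope, ci = feedback_slope(dTs, dRcloud)
```

On noise-dominated data of this kind, the slope can come out positive while explaining almost none of the variance.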

Dessler 2010 Figure 2A is shown below, with a markup overplotting (in red) the data sent yesterday by Dessler, to confirm an apples-to-apples comparison:

Figure 1. Dessler 2010 Figure 2A with overplot in red. Original Caption: Fig. 2. (A) Scatter plot of monthly average values of ∆R_cloud versus ∆T_s using CERES and ECMWF interim data.

I placed the Dessler data online and re-did the regression reported in the Science article. (The peer reviewers at Science did not require Dessler to show the usual diagnostics for the regression.) Readers interested in handling the data for themselves can do so as follows (Spencer data also shown). [Update – Nick Stokes observes that, in the later discussion of Dessler 2010, Dessler noted that “the correlation between ΔR_cloud and ΔT_s is weak (r2 = 2%), meaning that factors other than Ts are important in regulating ΔR_cloud,” a point that I missed in writing this post. In my opinion, statistical diagnostics should be reported with the regression, rather than passim in a later discussion, but the r2 was reported. The adjusted r2, a preferable diagnostic, was 0.01, as I previously observed.]

dess=read.csv("http://www.climateaudit.info/data/dessler/dessler_2010.csv") #collated from data sent Sep 6, 2011
fm=lm(eradr~erats,dess)
summary(fm)

spencer=read.csv("http://www.climateaudit.info/data/spencer/flux.csv")

I replicated the slope reported in the article. However, the diagnostic statistics were not imposing. The adjusted r^2 was a Mannian 0.01045. With this poor a fit, the "confidence intervals" reported in the article and illustrated in Dessler 2010 Figure 2A are not ones that would comfort an independent statistical reviewer - not that Science requires independent statistical review of statistical calculations by climate scientists, despite Wegman's sensible recommendations on this matter a number of years ago.
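For reference, the adjusted r^2 simply penalizes the raw r^2 for the number of regressors. A one-line sketch (Python); with an r^2 of 2% over roughly 120 months of data, it gives about 0.0117, in line with the 0.01 noted above:

```python
def adjusted_r2(r2, n, p=1):
    """Adjusted R^2 for a regression with n observations and p regressors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

print(adjusted_r2(0.02, 120))  # about 0.0117
```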

The CERES all-sky series used in Spencer and Braswell 2011 matches the corresponding CERES all-sky series in Dessler 2010 (with a few more months). The clear-sky versions differ - something that I'm presently trying to clarify. [Note - Troy draws attention to his (excellent) analysis at Lucia's here.]

The scatter plot in Dessler 2010 is based on an “instantaneous” relationship between CRF (as defined by both parties) and temperature. Spencer and Braswell 2011 observe that there is a lead-lag relationship between CRF and temperature, with a stronger correlation at a lag of 4 months than the instantaneous correlation, illustrating this in their Figure 3 as follows.

Figure 2. Spencer and Braswell 2011 Figure 3.
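The lead-lag structure shown in this figure can be probed with a simple helper that computes the correlation at each lag. A sketch (Python, synthetic series; the sign convention here is that a positive lag means temperature leads flux):

```python
import numpy as np

def lag_correlations(temp, flux, lags):
    """Pearson correlation of flux with temp at each lag (in months).
    Positive lag: temp leads flux; negative lag: flux leads temp."""
    t = np.asarray(temp, dtype=float)
    f = np.asarray(flux, dtype=float)
    n = len(t)
    out = {}
    for k in lags:
        if k >= 0:
            a, b = t[: n - k], f[k:]
        else:
            a, b = t[-k:], f[: n + k]
        out[k] = np.corrcoef(a, b)[0, 1]
    return out
```

Applied to the actual series, the point at issue is that the correlation peaks at a lag of about 4 months rather than at lag zero.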

Dessler 2011 Figure 2 (in press) substantially replicates Spencer and Braswell 2011 Figure 3, as shown below. The blue series, shown by Dessler as a sort of outlier to the three red temperature series, is the widely used HadCRUT3 series (which Spencer and Braswell 2011 had used). Dessler 2011 suggests that Spencer and Braswell’s use of HadCRUT3 was done to emphasize the differences between observations and models. This seems a two-edged sword, since one might equally argue that Dessler 2010′s omission of HadCRUT3 was done for no more worthy reason. In any event, given the wide usage of HadCRUT3, not least by IPCC, it doesn’t seem to me that SB can be strongly criticized for using it. Dessler’s diagram slightly understates the actual coefficients of Spencer and Braswell (shown in cyan).


Figure 3. Markup of Dessler 2011 Figure 2 showing Spencer and Braswell 2011 values in cyan. Original Caption: “Slope of the relation between TOA net flux and ΔTs, in W/m2/K as a function of lag between the data sets (negative lags mean that the flux time series leads ΔTs). The colored lines are from observations (covering 3/2000-2/2010 using the same TOA flux data, but different time series for ΔTs); the shading represents the 2σ uncertainty of two of the data sets. The black lines are from 13 fully coupled pre-industrial control runs; lines with the crosses ‘+’ are models used by SB11. Following SB11, all data are 1-2-1 filtered. See the text for more details about the plot.”
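The “1-2-1 filter” mentioned in the caption is a centred three-point smoother with weights (1, 2, 1)/4. A sketch (Python):

```python
import numpy as np

def filter_121(x):
    """Centred 1-2-1 smoother; the two endpoints are left unfiltered."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    y[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    return y
```

A constant series passes through unchanged, while month-to-month noise is damped.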

Dessler also observes that Spencer and Braswell 2011 showed a comparison with the three “most sensitive” and three “least sensitive” models (based on an earlier article by Forster). He observes that the discrepancy is less for several other models that did not meet these criteria, singling out GFDL CM 2.1, MPI ECHAM5 and MRI CGCM 2.3.2A as performing better according to his metric. Dessler observes that “this suggests that the ability to reproduce ENSO is what’s being tested here, not anything directly related to equilibrium climate sensitivity.” This might well be true and seems like a worthwhile comment. I’m not familiar enough with the data sets to opine on the matter.

It does seem to me that it’s been an awful lot easier for Dessler to publish this comment than it is to publish criticisms of Team articles. As CA readers are aware, important results of Santer et al 2008 did not hold up with updated data, but Team reviewers refused to permit publication. CA readers are also well aware of Steig’s concerted efforts to block publication of O’Donnell et al 2010 (which appeared only because of Ryan O’Donnell’s remarkable persistence.)

In the course of looking at the data, I noticed something interesting about the analysis of Dessler 2010 purporting to show a positive feedback.

Whatever view one might take on the differences between observations and models in the above data, the lagged relationship is more significant than the instantaneous relationship – a point shown in the figures of both Spencer and Braswell 2011 and Dessler 2011. This suggests that the original scatter plot in Dessler 2010 should be re-done using a lag of 4 months. I used the common HadCRUT3 data for the comparison – Dessler had observed that this accentuated the difference between models and observations, but it is nonetheless widely used and, if Dessler takes exception to SB’s failure to illustrate re-analysis temperature versions, one might make the same observation about the HadCRUT3 omission in Dessler 2010. The results are shown below.

Doing the same regression with 4-month lagged relationships (which both Dessler and SB agree to be more significant than the instantaneous relationship), the sign of the slope is reversed. Whereas Dessler 2010 had reported a slope of 0.54 +- 0.72 (2σ) W/m2/K, the regression with lagged variables is -0.90 +- 0.95 W/m2/K and has better diagnostics. [Update Sep 8 – Nick Stokes observes that this reversal of sign may be a phase phenomenon. This is something that needs to be examined, as I haven’t handled this data before. However, please note that a sign reversal also results on alternative grounds merely from using CERES clear-sky data instead of ERA clear-sky data, the latter being used in Dessler 2010 without an explanation for the choice. See here.]
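Stokes’s phase caveat is easy to illustrate with synthetic data: for cyclical series, lagging one series can flip the sign of the regression slope purely through phase, without any change in the underlying relationship. A sketch (Python; pure 12-month sinusoids, making no claim about the actual CERES/HadCRUT3 series):

```python
import numpy as np

# flux tracks temperature exactly, so the instantaneous slope is +1;
# lagging by 4 months (a third of the 12-month cycle) flips the sign.
n = 244
theta = 2.0 * np.pi * np.arange(n) / 12.0
temp = np.sin(theta)
flux = np.sin(theta)

slope0 = np.polyfit(temp, flux, 1)[0]           # instantaneous: +1
slope4 = np.polyfit(temp[:-4], flux[4:], 1)[0]  # 4-month lag: negative
```

Here the two series are identical (instantaneous slope +1), yet the 4-month-lagged slope comes out at about -0.5, simply because 4 months is a third of the cycle.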


Figure 4. Restatement of Dessler 2010 Figure 2 with 4-month lag.

Given that even the lagged relationship is weak, I’m reluctant to say that analysis using the methods of Dessler 2010 establishes a negative feedback, but it does seem to me that those methods cannot be said to have established the claimed positive feedback.

Perhaps the editor of Science will send a written apology to Kevin Trenberth.

217 Comments

  1. Posted Sep 6, 2011 at 12:49 PM | Permalink | Reply

    “The CERES all-sky series used in Spencer and Braswell 2011 matches the corresponding CERES all-sky series in Dessler 2010 (with a few more months.) The clear-sky versions differ – something that I’m presently trying to clarify.”

    Steve-

    The difference may be that Dessler does not use the measured clear-sky fluxes from CERES, but instead uses the ERA-interim reanalysis forecasted clear-sky fluxes. For more on this, see

    http://rankexploits.com/musings/2011/ceres-and-the-shortwave-cloud-feedback/

  2. Posted Sep 6, 2011 at 12:59 PM | Permalink | Reply

    When you plot data for A vs. B and get data points distributed something like:

    xxxxxxxxxxxxx
    xxxxxxxxxxxxx
    xxxxxxxxxxxxx
    A xxxxxxxxxxxxx
    xxxxxxxxxxxxx
    xxxxxxxxxxxxx
    B

    maybe you should conclude that A depends a lot more on factors other than B than it does on B?

    • Adrian
      Posted Sep 6, 2011 at 4:42 PM | Permalink | Reply

      That is my opinion as well. When you look at the scatter plots above, who in their right mind would dream of drawing a straight line through them?

      I know there is the r^2 test, which is obviously very low from Steve’s comment, but shouldn’t there also be some “you are having a laugh” test?

      • Robin Melville
        Posted Sep 7, 2011 at 2:51 AM | Permalink | Reply

        Whatever happened to the r^2 test of significance being > 0.5? These scatterplots have visibly no significant correlation, so the 0.01 r^2 is a no-brainer. The “world’s top 2,000 scientists”? Pah!

  3. Posted Sep 6, 2011 at 1:05 PM | Permalink | Reply

    > Whereas Dessler 2010 had reported a slope of 0.54 +- 0.72 (2σ) W/m2/K, the regression with lagged variables is -0.90 +- 0.95 w/m2/K and has better diagnostics.

    In the body of the post, you give the correlation for the former linear fit as r=0.01045. Out of curiosity, what’s the correlation for the red line shown in the post’s Figure 4, “Restatement of Dessler 2010 Figure 2 with 4-month lag”?

    Steve: Adjusted r^2 doubled :) to 0.02161

    flux=read.csv("http://www.climateaudit.info/data/spencer/flux.csv")
    flux=flux[3:126,] #removes NA rows
    dess$lag=c(rep(NA,4),flux$HadCRUT3[1:116])
    par(mar=c(4,4,2,1))
    plot(eradr~lag,dess,pch="+",type="p",
    ylab="Flux_cloud (wm-2)",xlab="GLB Temperature")
    title("Dessler 2010 Fig 2 with lag")
    fm=lm(eradr~lag,dess)
    summary(fm) #adjusted r^2 = 0.02161

    • Posted Sep 6, 2011 at 2:55 PM | Permalink | Reply

      At what point is r^2 no longer considered ‘Mannian’?

      • Adrian
        Posted Sep 6, 2011 at 5:04 PM | Permalink | Reply

        “At what point is r^2 no longer considered ‘Mannian’?”

        Given that Mann himself only estimates about 30% signal to noise ratio in paleo data there isn’t really one, which is why he doesn’t like the measure.

        Steve has always emphasised that the “reconstructions” are not statistically “robust”, both because of underestimation of errors and GIGO (garbage in -> garbage out).

        For me, the already assumed lack of reliable data in such a sparse data set makes the whole exercise “academic”.

        • jorgekafkazar
          Posted Sep 6, 2011 at 6:43 PM | Permalink

          Yes, academic. “Unsettling,” however.

        • geronimo
          Posted Sep 7, 2011 at 1:33 AM | Permalink

          I believe Professor Kelly, from Cambridge University, who served on the Oxburgh “investigation”, made the same point about the data CRU were using, asking Oxburgh to ask Jones and Briffa whether they couldn’t get the opposite results from the same data if they were so minded. Unfortunately, by the time he’d written his assessment of the work, Oxburgh had published his report and was back in the HoL with Beddington, being feted for “a blinder played”.

  4. Dave L.
    Posted Sep 6, 2011 at 1:11 PM | Permalink | Reply

    Spencer argues that cloud feedback is bi-directional and assigns different modalities to the instantaneous change versus the lag phase – see his post on September 3 at his blog site. That Dessler makes his case only with the instantaneous change and does not analyze the lag phase as you did in the above … suggests that Dessler did not comprehend Spencer’s paper. How did the reviewers of Dessler’s paper also overlook this?

    Steve: Dessler 2011 and Dessler 2010 are different papers. Dessler 2011 does not revisit the regressions of Dessler 2010.

  5. Posted Sep 6, 2011 at 1:18 PM | Permalink | Reply

    Steve, you are going to make it hard for Trenberth (Peace be upon him) to know which papers he’s supposed to keep out of AR5. And Wolfgang made the task even harder now that they not only have to redefine the peer-reviewed literature, but the blogosphere literature as well.

    • Crispin in Waterloo
      Posted Sep 7, 2011 at 11:08 AM | Permalink | Reply

      A hockey stick breaker, McKitrick
      Read Dressler’s and Wolfgang’s new s***fit
      Reviewing Steve’s plays
      And concise (as always)
      Proved skilful as well with his McWit.

  6. Tom C
    Posted Sep 6, 2011 at 1:31 PM | Permalink | Reply

    Seems to me that this cloud/radiation topic could be better addressed by chemical or electrical engineers. It is a control problem after all.

    • David L. Hagen
      Posted Sep 6, 2011 at 3:19 PM | Permalink | Reply

      Tom C.
      Steve concluded:

      Whereas Dessler 2010 had reported a slope of 0.54 +- 0.72 (2σ) W/m2/K, the regression with lagged variables is -0.90 +- 0.95 W/m2/K and has better diagnostics.

      For the relevance of these results from a chemical engineer’s perspective on the feasibility of controlling climate, see specialist Pierre R. Latour, “Engineering Earth’s thermostat with CO2?”, HPI Viewpoint. See also Sowell’s post, Chemical Engineer Takes on Global Warming, for Latour’s interaction with a global warming control advocate, e.g.,

      controllable. Controllable means that a change in the manipulated variable has an observable, measurable, and consistent effect on the control variable. . . .
      Dr. Latour notes that dT/dCO2 is almost zero. This simply means that a change in CO2 (dCO2) produces almost no change in global temperature (dT).

      It appears both the sign and the magnitude of cloud feedback is still in play!

    • Posted Sep 8, 2011 at 9:37 AM | Permalink | Reply

      Tom C – you nailed it. These models are way too simplified in my mind. I liken it to simple Newtonian physics compared to actual orbit propagation models in space systems. The Newtonian physics provided the basis, but they cannot alone reflect the temporal forces that shift satellite orbits out of pure Newtonian tracks.

      As you so clearly noted, clouds apparently can heat and cool the regions they traverse depending on the conditions. I fail to see a one-way forcing in any of this that can be simplified down to a three-element equation.

      Folks should be looking at Kalman filter based models, not these one liners.

      • Posted Sep 8, 2011 at 11:30 AM | Permalink | Reply

        For the purpose of controlling a system, one needs a predictor rather than a filter. A predictor differs from a filter in the respect that for a predictor the outcomes of statistical events lie in the future of the observed state of the system while for a filter, these outcomes are contemporaneous with the observed state. One needs a predictor because of inertia in a system’s actuators.

        Using modern information theory, it has often been possible to create an information-theoretically optimal predictor of outcomes for a system which, like the climate, is complex. Though this approach has been successful in delivering a number of predictors of long-range weather forecasting outcomes, it has not been applied to delivery of a predictor of climate forecasting outcomes.

  7. dearieme
    Posted Sep 6, 2011 at 1:34 PM | Permalink | Reply

    Fig 1 reminds me of the sort of thing thoroughly mocked in my introductory stats lectures when I was a fresher. “How lucky you are” the lecturer would joke, “not to be doing Social Science, so you won’t have to deal with this sort of rubbish.”

    Science has regressed since those days, it would appear.

    • Steve E
      Posted Sep 6, 2011 at 6:50 PM | Permalink | Reply

      Amen!
      If only the status quo of climate science could reach the accepted parameters of social science…we might actually get somewhere! At least we’d all know the ground rules for discussion. :-)

    • DEEBEE
      Posted Sep 8, 2011 at 1:41 PM | Permalink | Reply

      SPOT ON

  8. Craig Loehle
    Posted Sep 6, 2011 at 1:37 PM | Permalink | Reply

    It is my understanding that high and low clouds differ in their effects, as do clouds in day vs night and winter vs summer. Svensmark argues that their effect over the poles differs also, with more clouds over the arctic having a cooling effect compared to the albedo of open ocean vs more clouds over the antarctic having a warming effect compared to albedo of snow (hoping I didn’t get that backwards…).
    With these factors not clearly separated in the analyses here, R^2 values near 0 could result. If the Dessler analysis is correct, then the effect is so poor at explaining cloud feedback as to be very weak support for the models. Spencer & Braswell need not get a strong negative feedback to make their point, on the other hand.

    • Posted Sep 6, 2011 at 7:28 PM | Permalink | Reply

      I think the fact that lags in one direction show positive correlation whereas the other direction shows negative could demonstrate a more significant ‘robustness’ in the statistical calculation than the shotgun plot suggests. It is beyond my experience, though, to work with that kind of mess.

      • David A
        Posted Nov 25, 2011 at 2:32 AM | Permalink | Reply

        Certainly any lag over water would be greater than over land. Logically, any reduction in SWR over the oceans, where the residence time of the energies involved is greater, could potentially have an initial positive (warming) feedback to the atmosphere, but a negative feedback of greater magnitude to the oceans due to the reduction in SWR.

    • Jimmy Haigh
      Posted Sep 6, 2011 at 8:40 PM | Permalink | Reply

      Yes. Only a “climate scientist” could make anything out of this mess.

      • Posted Sep 7, 2011 at 8:25 AM | Permalink | Reply

        I thought it was snow!

        I would be interested to compare the treatment of these papers with this one referred to by the BBC Friday, 16 March, 2001, 14:17 GMT: (http://news.bbc.co.uk/1/hi/sci/tech/1225064.stm)

        “A team of UK-based scientists have published evidence which they say proves unequivocally that global warming is real. Comparing data obtained from two satellites which orbited the Earth 27 years apart, they found that significantly less radiation is now escaping into space than was previously the case.”

        So, comparing a satellite that presumably was in orbit around 1970, they found that it gave different readings from one around 2000. Or as Scientific American put it (http://www.scientificamerican.com/article.cfm?id=more-proof-of-global-warm):

        MORE PROOF OF GLOBAL WARMING
        “The researchers looked at the infrared spectrum of long-wave radiation from a region over the Pacific Ocean, as well as from the entire globe. The data came from two different spacecraft: NASA’s Nimbus 4 spacecraft, which surveyed the planet with an Infrared Interferometric Spectrometer (IRIS) between April 1970 and January 1971, and the Japanese ADEOS satellite, which utilized the Interferometric Monitor of Greenhouse Gases (IMG) instrument, starting in 1996. To ensure that the data were reliable and comparable, the team looked only at readings from the same three-month period of the year (April to June) and adjusted them to eliminate the effects of cloud cover.”

    • Skiphil
      Posted Feb 17, 2013 at 5:04 PM | Permalink | Reply

      Re: Craig Loehle (Sep 6 13:37),

      I wonder if anyone here has noticed this article from last summer, which seems to argue that the data currently available do not rule out either positive or negative overall feedbacks from clouds… i.e., as with a lot of the paleo proxy debates, it may be that the data available to date do not resolve the key issues (a provisional conclusion which may appeal to many who are not already “team” players):

      On the determination of the global cloud feedback from satellite measurements, by T. Masters of UCLA

      published 23 August 2012

      Earth Syst. Dynam., 3, 97–107, 2012
      http://www.earth-syst-dynam.net/3/97/2012/
      doi:10.5194/esd-3-97-2012

      “Overall, there is little correlation between the changes in the ΔCRF and surface temperatures on these timescales, suggesting that the net effect of clouds varies during this time period quite apart from global temperature changes. Given the large uncertainties generated from this method, the limited data over this period are insufficient to rule out either the positive feedback present in most climate models or a strong negative cloud feedback.”

  9. Sean
    Posted Sep 6, 2011 at 1:41 PM | Permalink | Reply

    I remember an exchange between Trenberth and Spencer on Roger Pielke’s blog, a year and a half ago, about using satellites to measure the global radiative budget with CERES. There is a summary here with links to the discussion. http://pielkeclimatesci.wordpress.com/2010/05/24/comments-on-nature-commentary-by-kevin-trenberth/ What struck me more than anything in this discussion was that the satellites were only able to measure to within a percent or so accuracy, not nearly enough to say much conclusively about the complete radiation budget and balance.

    • timetochooseagain
      Posted Sep 6, 2011 at 3:52 PM | Permalink | Reply

      Absolute accuracy, in terms of the exact level of energy in and out, may be only a percent or so. The accuracy for change in the energy budget is almost certainly much higher.

      A similar situation is the accuracy of various satellite measures of solar irradiance. All the different satellites are accurate enough to measure a change in solar irradiance between solar max and minimum, but the absolute level of solar irradiance differed a great deal between satellites:

      http://acrim.com/RESULTS/Earth%20Observatory/earth_obs_fig1.pdf

      • Pat Frank
        Posted Sep 6, 2011 at 5:35 PM | Permalink | Reply

        only assuming a constant error per satellite.

        • timetochooseagain
          Posted Sep 6, 2011 at 8:48 PM | Permalink

          Yes, that’s another issue. What may cause the error to vary with time, though?

        • Craig Loehle
          Posted Sep 7, 2011 at 9:02 AM | Permalink

          Orbital decay and drift (not following a perfectly same orbit), instrument degradation, and other things can cause a time trend. Also, every time a new satellite is put up, its instruments must be calibrated against the old, with always potential for slight drift due to calibration error.

        • Pat Frank
          Posted Sep 7, 2011 at 12:09 PM | Permalink

          Craig’s got it, thanks. I also wonder about the heating effect of direct insolation, and the effect on the sensor of dayside/nightside progression of heat/cold cycles.

        • Posted Sep 7, 2011 at 12:49 PM | Permalink

          Thanks guys. I know that UAH and RSS have implemented correction schemes for many of said factors. Have the CERES teams done so?

          At least for the AQUA CERES instrument, drift isn’t an issue.

          The wiki article on CERES suggests that there has not been any evidence of instrument problems:

          “During its operation CERES has shown remarkable stability. There has been no discernible change in instrument gain for any channel at the 0.2% level with 95% confidence. Ground and in-space calibrations agree to within 0.25%.”

          I wonder if the/how the TSI teams do corrections for data biases, also?

      • Posted Sep 7, 2011 at 6:58 AM | Permalink | Reply

        Here I have collected the various data from Spencer and Braswell’s discover page.
        There have been many revisions, some due to different satellites (but shouldn’t these read the same “temperature” for the same altitude?).
        Some just terminate.
        Some just get revised by a few 100ths of a K with no explanation.

        http://climateandstuff.blogspot.com/2011/06/revisionism-in-satellite-temperatures.html

        If the cloud albedo records are just as variable then no conclusions should be drawn.

        Remember – temperature from satellites is just someone’s model derivation of temperature from the IR spectrum. And remember, the satellite measures just one small section of the earth at a time – it is not a global snapshot in time.

  10. BobN
    Posted Sep 6, 2011 at 1:56 PM | Permalink | Reply

    Steve – Thanks for this analysis. What your plots tell me is that there is not a very strong relationship between changes in cloudiness and changes in temperature and the correlation is so weak that probably little can be concluded one way or the other as to whether clouds are an internal forcing or a feedback.

    • Posted Sep 6, 2011 at 2:22 PM | Permalink | Reply

      BobN Posted Sep 6, 2011 at 1:56 PM
      there is not a very strong relationship between changes in cloudiness and changes in temperature and the correlation is so weak

      ——

      And where does that leave the hypothesis that CLOUDs induced by cosmic rays control the climate?

      • Steven Mosher
        Posted Sep 6, 2011 at 3:57 PM | Permalink | Reply

        I love that Leif

      • Graeme W
        Posted Sep 6, 2011 at 4:24 PM | Permalink | Reply

        I seem to recall Dr. Spencer claiming that something like a 2-4% change in cloud cover could explain the increase in temperatures over the latter half of the 20th century. I don’t know the basis for that claim, though.

      • Bruce
        Posted Sep 6, 2011 at 5:02 PM | Permalink | Reply

        I believe there is a better correlation between bright sunshine and temperature.

        Wild suggests .5 W/m^2/year from the early 1990s to the early 2000s.

        http://i55.tinypic.com/34qk01z.jpg

      • TimTheToolMan
        Posted Sep 7, 2011 at 8:53 AM | Permalink | Reply

        Don’t forget that over the last decade there hasn’t been much in the way of temperature change, so you mightn’t expect a strong correlation if the earth’s climate was in a meandering mode.

      • BMcBurney
        Posted Sep 7, 2011 at 11:44 AM | Permalink | Reply

        “And where does that leave the hypothesis that CLOUDs induced by cosmic rays control the climate?”

        Also, where does that leave the hypothesis that SO aerosols caused cooling between 1940 and 1970?

        • Varco
          Posted Sep 8, 2011 at 2:18 AM | Permalink

          I thought Svensmark proposes that different clouds at different altitudes have differing effects on climate, and in turn are differently affected by different cosmic rays? (His explanation makes more sense than mine so please don’t judge him on my words!) If so, non-specific ‘Cloudiness’ vs temperature should not be proof one way or the other?

          I can recommend ‘The chilling stars’ by Svensmark/Calder as good read regardless of your outlook on climate science…

    • Posted Sep 7, 2011 at 7:14 AM | Permalink | Reply

      That’s pretty much what Dessler says:
      “Obviously, the correlation between ΔR_cloud and ΔT_s is weak (r^2 = 2%), meaning that factors other than T_s are important in regulating ΔR_cloud. An example is the Madden-Julian Oscillation (7), which has a strong impact on ΔR_cloud but no effect on ΔT_s. This does not mean that ΔT_s exerts no control on ΔR_cloud, but rather that the influence is hard to quantify because of the influence of other factors. As a result, it may require several more decades of data to significantly reduce the uncertainty in the inferred relationship.”

  11. Steeptown
    Posted Sep 6, 2011 at 2:16 PM | Permalink | Reply

    Eye-balling the data, anyone would say the points were randomly distributed, with no correlation. Not much feedback either way.

  12. Posted Sep 6, 2011 at 2:24 PM | Permalink | Reply

    dearieme is right, your figs. 1 and 4 from Dessler are garbage, and Dessler (and his reviewers) should be thrown out of science for trying to make something out of them. Steve McIntyre, your restraint in discussing them as if they were reasonable/serious science makes me wonder about you. r-squares of 0.01 and 0.02??? (For the lay reader, those three question marks mean “this is so bad, it is totally unacceptable”.)

  13. Ivan
    Posted Sep 6, 2011 at 2:25 PM | Permalink | Reply

    It took me a while, as a layman, to understand that this entire debate between Spencer and co. and Dessler and co. is simply over whether clouds act as an independent factor driving temperature, or just as a feedback. Setting aside the statistical mumbo-jumbo and the ingenious “tricks” that both sides abundantly employ, the basic problem is very simple – do we have any physical explanation or justification for the thesis that clouds influence temperature by themselves, rather than just as a consequence of other changes? And the short answer seems to be – nobody has a clue.

    • Graeme W
      Posted Sep 6, 2011 at 4:28 PM | Permalink | Reply

      We have a physical explanation: increased cloud coverage reflects sunlight back into space, reducing the amount of energy being input into the lower atmosphere (and into the oceans). Thus, combined with the theory that cosmic rays influence cloud creation, we have a mechanism involving clouds that is externally driven that affects temperatures.

      Whether that effect is significant is a completely different question, and that’s where the ‘nobody has a clue’ statement is quite accurate.

      • Posted Sep 12, 2011 at 1:41 AM | Permalink | Reply

        Uh. I seem to recall that cloudiness during the day reflects sunlight.

        And clouds at night act as a blanket. (Well they act as a blanket during the day too but the reflection of sunlight is more important).

        In any case if we are looking at clouds as energy reflectors shouldn’t that be a consideration? i.e. Only clouds on the sunny side (up?) affect energy input. And clouds at noon are more important than clouds at sunset.

    • Max Beran
      Posted Sep 7, 2011 at 3:52 AM | Permalink | Reply

      Well, we do have a sort of a clue, albeit a rather anthropic one, and that is that we wouldn’t be here in the first place if feedbacks of this sort turned out to be positive. Maybe that should be built into the null-hypothesis for testing climate data.
      With the opposite mindset of desperately wanting a positive feedback in order to keep their global-warming frightfest well-fed, I guess “they” would look at a mean of 0.54 and a standard deviation of 0.36 and say to themselves – “Hmmm, that of course means that the trend (as opposed to the data points that make up the trend) is 14 times more likely to indicate a positive feedback than a negative one”. “They” shouldn’t be looking at the standard deviation of the trend coefficient anyway; they should be looking at the standard deviation of a prediction of a new value of delta-T. This would be vastly greater and would happily encompass a negative feedback with equal probability.

  14. Posted Sep 6, 2011 at 2:26 PM | Permalink | Reply

“However, the diagnostic statistics were not imposing.” (ouch)

  15. Salamano
    Posted Sep 6, 2011 at 2:32 PM | Permalink | Reply

Since when is a .46 +/- .75 (or whatever) considered a useful value? I guess it could simply mean ‘more likely to be positive than negative’, but is this the sort of thing on which policy decisions are based (particularly “IF” you rely on one even longer equation made up of parts that each have this kind of variability)?

  16. Posted Sep 6, 2011 at 2:39 PM | Permalink | Reply

    I’m reminded of Monty Python’s succinct demonstration of this scientific tussle. It is known as the argument sketch. (No, it isn’t)

  17. Dave Springer
    Posted Sep 6, 2011 at 2:55 PM | Permalink | Reply

Leif Svalgaard writes:

    BobN Posted Sep 6, 2011 at 1:56 PM
    there is not a very strong relationship between changes in cloudiness and changes in temperature and the correlation is so weak

    ——

And where does that leave the hypothesis that CLOUDs induced by cosmic rays control the climate?

    ——-

No better or worse off. Imagine very slightly higher or lower albedo that persists for many decades. You couldn’t dig that signal out of the noise in a single decade. As for short term effects, sit outside on a sunny day and you’ll feel the temperature change almost immediately without needing a thermometer when a big cloud passes by overhead. So the notion that clouds don’t affect surface temperature is patent nonsense.

    • Posted Sep 6, 2011 at 3:17 PM | Permalink | Reply

      Dave Springer: Posted Sep 6, 2011 at 2:55 PM
      No better or worse off. Imagine very slightly higher or lower albedo that persists for many decades.
      Except that Svensmark claims the effect on clouds is immediate

      • Jeremy
        Posted Sep 6, 2011 at 3:33 PM | Permalink | Reply

        Persisting for many decades has nothing to do with the immediacy of impact.

        Because an effect persists has no bearing on the speed at which it can change.

I would imagine that cosmic particles that make it through the solar system can vary on all time spans simply because the route they take to get here has so many influences. I’d wager the influence of Jupiter’s magnetic field on cosmic particles hitting the earth would be easier to measure than human-released-CO2′s impact on Earth’s climate.

      • Dishman
        Posted Sep 6, 2011 at 10:42 PM | Permalink | Reply

        Leif,

        I believe Dave is talking about a signal analysis problem, and the number of samples required to extract a signal for a given SNR.

If GCR variation is only responsible for a 10% change in cloud formation (still a huge result), that gives you an SNR of 0.1.

        It would take a lot of samples to dig that one out in a way that withstands scrutiny.
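Dishman’s point can be made concrete with a back-of-envelope rule. A sketch under simple assumptions (independent samples, and “detection” meaning the averaged signal stands z standard errors above the noise floor); `samples_needed` is an illustrative helper, not from any of the papers:

```python
def samples_needed(snr, z=3.0):
    """Number of independent samples needed before a signal of
    amplitude snr (in units of the noise standard deviation)
    averages up to z standard errors above the noise floor.
    Averaging n samples shrinks the noise by sqrt(n), so we need
    snr * sqrt(n) >= z, i.e. n >= (z / snr)**2."""
    return (z / snr) ** 2

# At SNR = 0.1 (a 10% effect), a 3-sigma detection needs ~900
# independent samples, which is "a lot of samples" for monthly data.
print(int(samples_needed(0.1)))
```

With monthly averages, 900 independent samples is many decades of data, which is the sense in which the signal “withstands scrutiny” only slowly.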

    • Posted Sep 6, 2011 at 4:42 PM | Permalink | Reply

      Dave Springer posted:

      “As for short term effects, sit outside on a sunny day and you’ll feel the temperature change almost immediately without needing a thermometer”

That’s true Dave, but remember that “albedo”, or reflectance as the rest of us refer to it, is a two-way street. What is reflected out is reflected in too. Also remember that the radiating area of the planet is 4 times the absorbing area of the planet. (There’s an interesting exercise in geometry for you!)

I’ve not spent any time doing the math to make a guess as to which effect is greater. That being said, if clouds have a higher albedo than clear sky, less solar radiant energy is reaching the earth; hence, all other things being equal, one would guess (and I’m guessing) that the heat of the planet would drop, not increase. As with all things physics, however, common sense is your enemy, so I could be wrong.

      Cheers

      JE
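JE’s “exercise in geometry” (a sphere radiates from 4πr² but intercepts sunlight over only πr²) is exactly the factor of 4 in the textbook zero-greenhouse effective-temperature formula, T = [S(1−a)/4σ]^¼. A sketch of the guess he declines to compute; the numbers are the standard textbook calculation, not anything from the papers under discussion:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def t_eff(albedo, s0=1361.0):
    """Zero-greenhouse effective temperature of a planet.
    The 4 in the denominator is the radiating-area / absorbing-area
    ratio JE mentions (4*pi*r^2 sphere vs pi*r^2 disc)."""
    return (s0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(round(t_eff(0.30), 1))                # ~254.6 K for Earth-like albedo
print(round(t_eff(0.30) - t_eff(0.31), 2))  # brighter clouds cool: ~0.9 K per 0.01 albedo
```

So, to first order, JE’s guess is right: raising the planetary albedo lowers the equilibrium temperature, which is why the longwave “blanket” effect of clouds has to be weighed against this shortwave term.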

  18. Bart
    Posted Sep 6, 2011 at 3:14 PM | Permalink | Reply

    Congratulations on some excellent work. This absolutely murders Dessler’s argument.

    But, then, I’ve been pounding this drum ever since Dessler 2010. In a direct exchange on WUWT with Dessler at the time, I pointed out that the phase relationships indicated a significant lag and, in the face of that, his regression was useless. He insisted that there was no reason to expect a significant lag. I responded: “You do not have to go on faith. YOU’VE GOT THE DATA.”. Apparently, he never took my advice.

    Now, where to go from here? The problem you have is that you have many processes going on, and you need to isolate them. Run a PSD to find the largest cyclical process, then put in a bandpass filter on both time streams to pull it out. Then, redo your regression analysis.

    • Posted Sep 7, 2011 at 3:57 AM | Permalink | Reply

      In the comment linked to by Bart, Dessler says
      “Thus, the lag zero correlation is expected to be the most appropriate. In support of this, I’ve looked at other lags and the correlations are all weaker than lag zero.”
This statement seems to be contradicted by Steve’s calculations, which found a (slightly) greater correlation with a 4-month lag.

      • Bart
        Posted Sep 7, 2011 at 12:07 PM | Permalink | Reply

        Absolutely, it is.

        • Bart
          Posted Sep 7, 2011 at 12:28 PM | Permalink

          You’ve got to understand, there are all kinds of lags in the output from a broad spectrum signal input to a dispersive (non-linear phase) system. The key is to isolate components which have the same lag (components in a narrow frequency band).
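The lag scan that Nick and Bart are discussing is straightforward to run. A minimal numpy sketch (`lag_correlations` is an illustrative helper, not the code used in the post), demonstrated on synthetic data with a built-in 4-step delay:

```python
import numpy as np

def lag_correlations(x, y, max_lag=12):
    """Pearson correlation of x[t] with y[t + lag] for each lag;
    a positive peak lag means y lags (follows) x."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Synthetic check: y is x delayed by 4 steps plus noise, so the
# correlation should peak at lag = 4 (cf. the 4-month lag in the post).
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.roll(x, 4) + 0.5 * rng.standard_normal(300)
r = lag_correlations(x, y)
print(max(r, key=r.get))   # peak at lag 4
```

As Bart notes, if the system is dispersive the peak lag found this way is only an average over frequency components, each of which may carry a different delay.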

  19. Jeremy
    Posted Sep 6, 2011 at 3:17 PM | Permalink | Reply

    Sometimes I imagine McIntyre’s brain as a gigantic tongue planted firmly in the cheek when he writes about such low hanging fruit. I’m left thoroughly tickled from reading this post.

  20. Steve Brown
    Posted Sep 6, 2011 at 3:36 PM | Permalink | Reply

I know I can’t possibly be as clever as all these climate scientists, but neither of these “correlations” is actually a correlation… they are cross-plots of uncorrelated variables.

    If climate science is based on such weak correlation then it is frankly all rubbish.

  21. TanGeng
    Posted Sep 6, 2011 at 3:37 PM | Permalink | Reply

    Here’s something I fail to understand in all of this.

    These are all observational studies, basically someone applying a bunch of statistical analysis to input data. At best this can show correlation. The stronger the correlation, the more confident we can say that the two factors are SOMEHOW related (possibly via a 3rd confounding variable).

    There is no way to pin down cause and effect because there is no experimental control. Furthermore, with awful correlation r2′s like .01 and .02, the relationship is hardly well established. So is anyone any smarter after reading this? Seems like terribly useless garbage.

    • Eric
      Posted Sep 6, 2011 at 3:47 PM | Permalink | Reply

      it looks to me like you understand everything quite well.

  22. NW
    Posted Sep 6, 2011 at 4:08 PM | Permalink | Reply

    Part of the problem here is a lack of good instruments (I mean in the statistical sense of instrumental variables). What is exogenous to the climate system that causes clouds but not temperature? CRF?… But are they correlated with irradiance?

    The whole causality question seems nearly impossible to tease apart with the field data. That’s why the laboratory experiments like the CLOUD one should (I think) take on extra importance.

  23. timetochooseagain
    Posted Sep 6, 2011 at 4:11 PM | Permalink | Reply

    So we have, it looks like, three models which are consistent with Dessler’s version of the SB11 type plot. So we can now throw away the rest as not consistent with the observed relationships between TOA radiation and temperature, right?

There isn’t that much to this, as far as I can tell. Dessler is saying that if one analyzes the data in his way, all of the models aren’t wrong, only most of them. I remember David Douglass saying at Heartland once that he had redone his much-criticized analysis and decided that, yes, all of the models aren’t wrong… only most of them. One wonders if the models with the most realistic relationships between TOA radiation and temperature are the same as those that are the least problematic in terms of their atmospheric profile trends.

  24. MarcH
    Posted Sep 6, 2011 at 4:15 PM | Permalink | Reply

Have I got this right? Dessler 2011 refutes Dessler 2010, but confirms SB2011?

If so, how many editors are now joining the queue behind Wolfgang?

  25. Posted Sep 6, 2011 at 4:23 PM | Permalink | Reply

    For Figures 1 and 4 above a vertical line at 0.0 seems to be a better fit, eyeball-wise.

  26. Bernie
    Posted Sep 6, 2011 at 4:29 PM | Permalink | Reply

    Steve:
    What a great example of how to deconstruct 4 papers. They really should listen to Wegman’s advice – Spencer included – and get some serious statisticians involved. I think you have put more than a stone in Trenberth’s shoe.

  27. DocMartyn
    Posted Sep 6, 2011 at 4:36 PM | Permalink | Reply

    Don’t we have three sorts of clouds?

    1) We have clouds that are forming/getting bigger. Here heated gaseous water is being converted into liquid water and the cloud is radiating heat.
    2) We have steady state clouds where there is no change in size and in which there is no heat change.
3) We have clouds that are disappearing due to droplet formation. As the drops fall through the air, they cool the air due to evaporative cooling, and so absorb heat.

Now in the morning/early afternoon we should find more type 1) clouds, at mid-day/later afternoon more type 2), and in the late evening and especially at night we should observe mostly type 3) clouds.

  28. David L. Hagen
    Posted Sep 6, 2011 at 4:47 PM | Permalink | Reply

Willis Eschenbach shows that clouds systematically vary from day to night across the tropics. See: Further Evidence for my Thunderstorm Thermostat Hypothesis. Could this have any bearing on any of the papers by Spencer or Dessler?

    • Posted Sep 6, 2011 at 8:04 PM | Permalink | Reply

      I think this is probably a big part of the missing link between S&B and Dessler. Dessler says that temperatures affect cloud cover, but not the other way around, whereas S & B say that clouds can also affect temperatures, and Willis’s Regulator is the method by which it happens. More heat, more clouds, more cooling.

      • Posted Sep 7, 2011 at 1:45 AM | Permalink | Reply

        Lindzen directly said on WUWT he thinks that the Eschenbach Thermostat effect was likely involved in the results they got in L&C 2011

        • Posted Sep 8, 2011 at 12:49 PM | Permalink

          Can I get a link to this comment? I’ve never seen Lindzen actually comment at any blog before!

        • David L. Hagen
          Posted Sep 8, 2011 at 8:37 PM | Permalink

          Andrew. For the WUWT post listing “Lindzen Eschenbach” and “thermostat” or “iris” See:
          The Thermostat Hypothesis

        • Posted Sep 9, 2011 at 11:21 AM | Permalink

          I’ve seen it, I just haven’t seen the comment mentioned by Tallbloke, where Lindzen himself says that he thinks Willis’s “Thermostat” is at work in the LC results.

          For what it is worth, I wouldn’t be surprised if Lindzen agrees with Willis’s general idea. Just couldn’t find the direct endorsement.

    • Joe Crawford
      Posted Sep 7, 2011 at 1:45 PM | Permalink | Reply

      I think that the basic problem everyone seems to be having with ‘clouds and climate’ is one of perspective. Everyone looks at clouds differently. David L Hagen in his post (at 4:47 PM) points to Willis Eschenbach’s Thunderstorm Thermostat Hypothesis which talks about cumulus clouds and thunderstorms in the tropics. In referring to that hypothesis, Paul Hanion said (posted at 8:04 PM ):

      “I think this is probably a big part of the missing link between S&B and Dessler. Dessler says that temperatures affect cloud cover, but not the other way around, whereas S & B say that clouds can also affect temperatures, and Willis’s Regulator is the method by which it happens. More heat, more clouds, more cooling.“

      And, DocMartyn, in his post of Sept. 6, 2011 at 4:36 PM talks about “three sorts of clouds” and how we should expect varying amounts at different times during the day.

The ‘team’ and Paul Hanion, in trying to simplify the problem of clouds and their effect on climate in order to model it, appear only to look at ‘cloud cover’, that is, the persistent clouds that hang around for days. They assume that lower level clouds have an overall heating effect since they are in direct sunlight for only about 1/4th of the time, when they shelter the surface from some of the sun’s energy. Those clouds then spend the majority of the time reflecting the heat back down to the surface, thus retaining heat. The ‘climate scientists’ seem to ignore the short-lived cumulus and thunderstorms, possibly assuming that they have a negligible effect.

      On the other hand (no, I’m not an economist), Willis talks about cumulus clouds and thunderstorms that develop in late morning or early afternoon and dissipate in the evening. These are also the types of clouds that, as many observers have commented on, lower (sometimes drastically) the temperature of the surface as they pass over it. They may also continue to transfer considerable quantities of heat from the surface to the upper atmosphere in the evening and until they dissipate.

Personally I’m just a dumb engineer, but I will state that, as far as I’m concerned, until we know enough science to be able to model the effects of the different types of clouds and how they perform in different combinations, over different surfaces and at different latitudes, the GCMs are totally useless in predicting future climate. I agree totally with Oreskes as referred to in Easterbrook and Johns’ “Engineering the Software for Understanding Climate Change” (here), where they state: “Oreskes argues that the appropriate use of models is for hypothesis testing and exploring “what-if” questions, rather than for predictions and scientific “proof”.”

      • Posted Sep 7, 2011 at 7:27 PM | Permalink | Reply

        Hi Joe,

        I think you may be labouring under a misapprehension here.

        I very much believe that the net effect of clouds is cooling, from Willis’s Thunderstorm Thermostat where the heat is transferred to the upper atmosphere directly by the storm itself, and also through reflected light during the daytime. Yes, there is some heat trapping at night-time, but most of that is over land, and the trapping function of clouds is much smaller than the cooling function of the other two.

That clouds also affect temperatures is the biggest difference between S & B and Dessler from what I can see, and Willis’s regulator slots right in to fill the gap. It is one of the mechanisms whereby clouds do affect temperatures.

        As for the models, the ones that the IPCC quote from don’t take anything like enough variables into consideration, or capture what happens in the real world. More recent ones are probably better in this regard, but then they likely don’t offer up the same “scary scenarios” as the originals.

        /p

        • Joe Crawford
          Posted Sep 9, 2011 at 10:42 AM | Permalink

          Paul,
          Yes, after spending a few years sailing in the Caribbean, I have to agree with his Thermostat, at least in the tropics. However, if you go very far north or south of there you start getting into an area that gets more heat on an annual basis from the ocean currents than from the sun. Here, persistent clouds at night probably have a net warming effect by holding in the heat.

From a modeler’s perspective, e.g., one who has seen many more frontal clouds than afternoon cumulus, and listened to the nightly weather reports that always tie warmer nights with (persistent) cloudiness, it’s a lot simpler (and way too tempting) to treat all clouds as permanent and just base your calculations on the overall percent of cloud cover. Or, if you really get serious, still assume persistence but split them into types with different characteristics for each. I worked high tech for 35 years (60 to 90 hour weeks) and the closest I usually got to clouds was flying over them while traveling, or watching them on the nightly news. It took many days of living out in the weather to see and understand what Willis is describing.

          As I said before, I think the models are useless for anything other than hypothesis testing until they get a better handle on the science of clouds. However, it does appear, at least to me, that academia, or some part of it, is starting to realize and admit this. Maybe in the long run common sense will eventually overcome belief.

        • Joe Crawford
          Posted Sep 9, 2011 at 10:49 AM | Permalink

          Paul,sorry. I did not mean to group you with the ‘team.’ In my post (3 up) I meant to say “The ‘team’ and Dessler”. I inserted your name by mistake instead of Dessler’s.

  29. Frank
    Posted Sep 6, 2011 at 5:28 PM | Permalink | Reply

Steve: Would you please consider replotting your Figure 2 (SB11 Figure 3) with vertical error bars on the observations?

Do you understand how a relatively smooth S-curve relating the regression coefficient to time lag (minimum 12 months before and maximum 3 months after) can arise from the slopes of a series of CRF vs lagged temperature plots – when each plot is so noisy? Intuition suggests that the curve should be far more jagged. Perhaps information about the cause-and-effect relationship between temperature and CRF is spread over many months.

  30. Braddles
    Posted Sep 6, 2011 at 5:33 PM | Permalink | Reply

    I would have thought, if you are trying to estimate the impact of something on climate, you’d be better off with a time period where the global climate actually changes. With so little variation in global temperatures over the last 10 years (apart from ENSO fluctuations), it’s no wonder that any supposed influencing factors show weak correlations.

    • Posted Sep 7, 2011 at 1:46 AM | Permalink | Reply

      Yes, but we don’t have good enough data prior to 2000.

  31. RoyFOMR
    Posted Sep 6, 2011 at 5:38 PM | Permalink | Reply

    Glad to see that Dessler 2011 has responded to S&B 2010. No armwaving needed, just the scientific method which pits theory against theory in an open arena.
    May the best science ‘win’

  32. Bart
    Posted Sep 6, 2011 at 5:39 PM | Permalink | Reply

    My comment of some hours ago appears to have been lost. The gist was:

    A) You have shown Dessler 2010 is defective because of his assumption of zero time lag. This always was a profound weakness of his analysis.

    B) Your figure 4 can be considerably improved. Try averaging the data on both axes.

    The reason is that, for a given time lag at a particular frequency, you will generally get an oval in the phase plane, with the oval collapsing to a straight line when you shift one set of data so that there is zero lag.

    The lag comes about because there is some transfer function between the two variables with amplitude and phase characteristics as a function of frequency. In general, the phase response is nonlinear. As a result, different frequency components get different time lags, and you end up with a hash.

You need to narrow the focus to a narrow band in which the lag will be roughly uniform. Averaging narrows the frequency band to low frequencies. Then, you can determine a good lag parameter such that it will give you more or less a straight-line plot.
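Bart’s averaging suggestion can be sketched with synthetic data. Here a slow sine carries the “true” slope of 2 and a fast sine plays the role of unrelated high-frequency variability in the regressor; a centered moving average acts as a crude low-pass filter on both axes and removes the attenuation bias the raw regression suffers. `smoothed_slope` is an illustrative helper, not Bart’s actual procedure:

```python
import numpy as np

def smoothed_slope(x, y, window=6):
    """OLS slope of y on x after a centered moving average
    low-passes both series, per Bart's 'average both axes' idea."""
    k = np.ones(window) / window
    xs = np.convolve(x, k, mode="valid")
    ys = np.convolve(y, k, mode="valid")
    return np.polyfit(xs, ys, 1)[0]

t = np.arange(240.0)
slow = np.sin(2 * np.pi * t / 60)   # low-frequency component
fast = np.sin(2 * np.pi * t / 3)    # high-frequency 'noise' in x
x = slow + fast
y = 2.0 * slow                      # true low-frequency slope is 2

print(round(np.polyfit(x, y, 1)[0], 2))   # raw slope biased low, ~1.0
print(round(smoothed_slope(x, y), 2))     # smoothing recovers ~2.0
```

The raw regression halves the slope because the high-frequency variance inflates the denominator; averaging both axes suppresses that band and the low-frequency slope emerges, which is the limit Bart says is the item of interest.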

    • Posted Sep 7, 2011 at 1:51 AM | Permalink | Reply

      I suspect you’d then get many different slopes for different narrow time bands due to the varying imbalance between ocean emission and CRF. This would be down to a host of other variables.

      • Bart
        Posted Sep 7, 2011 at 12:17 PM | Permalink | Reply

        Narrow frequency bands. Yes, in general, the slope would vary. The item of interest is the slope at lower frequencies, which should approach a limit.

  33. Ron Cram
    Posted Sep 6, 2011 at 5:43 PM | Permalink | Reply

    Dessler has a video on Youtube explaining his new paper. See http://www.youtube.com/watch?v=C2ngavUkmis

    He makes the rather unusual claim that Spencer did not use real data. I can’t help but think Spencer will be surprised to hear that.

    I have to agree with Dave Springer that the idea clouds do not have an effect on temperature is patent nonsense. Dessler even seems to agree clouds could have an effect long-term, but he did not find one in the most recent decade.

  34. Bebben
    Posted Sep 6, 2011 at 5:55 PM | Permalink | Reply

    So… the inexperienced civilian Wagner suddenly found himself in the middle of this War of the Worlds… he was immediately outflanked by the superb General, a certain Trenberth, and he saw no other solution than to capitulate unconditionally and lick the great general’s boots, upon which a tragic overture sounded over the battlefield.

    Trenberth and his storm troopers proceeded to set up an ambush for the rebels Spencer and Braswell, accidentally shooting down Christy as collateral damage, due to the heavy artillery deployed and their confidence in firepower superiority. The little village known as AR5 was now unprotected for Dessler to make his triumphal entry, riding on a white sky dragon.

    But still, a few pockets of fierce resistance held up the fight, armed with sharp knives of commonsense logic and some good marksmen.

    (To be continued.)

  35. DRE
    Posted Sep 6, 2011 at 6:06 PM | Permalink | Reply

“The adjusted r^2 was a Mannian 0.01045”

    Isn’t this all that needs to be pointed out?

    Unfortunately the controversy is more interesting than the science.

  36. Alan Wilkinson
    Posted Sep 6, 2011 at 6:24 PM | Permalink | Reply

Where I come from, that kind of R^2 value would preclude publication, let alone justify trillions of dollars in spending.

  37. Ursus Augustus
    Posted Sep 6, 2011 at 6:31 PM | Permalink | Reply

I am with Adrian 110906@16:42 and find the fitting of a line through such scattered data laughable. I used to teach engineering design and particularly cautioned my students against this sort of thing when gathering data for initial design estimation, as the results can so easily be utterly meaningless and dangerous to rely upon. In engineering that is almost a guarantee of a fatal mistake; apparently not in science. Science appears not even to demand some sober qualification. This sort of ‘curve fitting’ deserves an iron pyrites medal for disservices to science.

I am more and more reminded of that ludicrous paper published in the Lancet in 2004 purporting to estimate the casualties in Iraq, where 50 or so out of 70 reported deaths in their sampling had to be discarded (they came from one cluster), so they ended up extrapolating from 20-odd deaths in 9000 people. The real issue was of course that if a rogue data point of 50 was so easy, another rogue sample of 5 or 10 was entirely possible, and ‘data salting’ (salting in the mining exploration sense) a very distinct possibility. The political motive was plain as day.

  38. JFD
    Posted Sep 6, 2011 at 6:41 PM | Permalink | Reply

The thing that seems to be missing in essentially all climate papers is the impact that producing fossil water for evaporative cooling towers and for irrigating food and fodder has on the climate. This production has the dual impact of warming the atmosphere and also causing more clouds. Fossil ground water production is immense, starting in about 1950 and increasing until about 2000 and then tapering off slightly in the USA and China, the two large users, due to declining water tables. Production of fossil ground water from no- or slow-to-recharge aquifers also has a direct impact of increasing ocean levels by an average of 2.6 mm/year.

Evaporative cooling towers remove heat from processes and discharge it to the atmosphere 50 to 60 feet above ground level as essentially 100% water vapor and aerosols. The cooling towers operate night and day, year round, with very uniform heat output to the atmosphere. Agricultural evapotranspiration is variable in nature, but while it impacts heating of the atmosphere, it does not impact the average yearly increase in the level of the oceans.

    Playing with R^2s of .01 -.02 is simply gilding the lily when real measured numbers are available which have very large impacts on the climate.

    JFD

  39. Greg Cavanagh
    Posted Sep 6, 2011 at 7:04 PM | Permalink | Reply

Why draw a straight line through the scatter plot? How about a parabola, or hyperbola, or a high-frequency sine wave? I’ll bet I hit more of the targets with that, therefore a higher correlation, yes?

    • DRE
      Posted Sep 6, 2011 at 10:20 PM | Permalink | Reply

Who’s to say it isn’t some sort of hysteresis curve?

  40. Posted Sep 6, 2011 at 7:25 PM | Permalink | Reply

Nice job Steve. I wonder, though, if you could state the direction of the lag and any meaning to it. Some will not get the point from the plots.

Yes, the correlation is a POS, but the directionality, as well as a consistent improvement/worsening depending on lag, has a consistency which gives the shotgun some amount of credibility. Roman or yourself may be able to work something out of it by combining different months, but IMO the shotgun pattern is a little less shotgunny than it appears.

  41. Jay Currie
    Posted Sep 6, 2011 at 7:34 PM | Permalink | Reply

It seems to me that Dessler is maintaining that clouds have either no effect or minimal effect (where they are the product of human pollution).

    He also seems to believe that clouds only have a downward effect: they only bounce energy back at the earth.

    So I wondered if I had two thermometers, one on an experimental earth and one an arbitrary orbital distance above earth and I shone a big, honking, photo lamp at my earth and my “orbital” thermometer, what would happen?

    And then, if I built a “cloud” out of a mirror with a black backside what would happen if I placed it between the energy source and the earth, first with the black side pointed toward the energy source and then the mirror side.

I can’t help but think that in the latter case the “orbital” thermometer would tend to get considerably warmer. But physics will make fun of such assumptions.

(But I bet if I did the experiment a couple of dozen times my results’ R2 would be a tiny bit larger than .01.)

  42. David L. Hagen
    Posted Sep 6, 2011 at 8:04 PM | Permalink | Reply

    Judith Curry started a technical thread Spencer & Braswell: Part III at Climate Etc. to discuss Dessler (2011) vs Spencer & Braswell (2011) etc.

  43. Geoff Sherrington
    Posted Sep 6, 2011 at 8:05 PM | Permalink | Reply

    timetochooseagain Sep 6, 2011 at 3:52 PM notes
“Absolute accuracy, in terms of the exact level of energy in and out, may be only a percent or so. The accuracy for change in the energy budget is almost certainly much higher. A similar situation is the accuracy of various satellite measures of solar irradiance. All the different satellites are accurate enough to measure a change in solar irradiance between solar max and minimum, but the absolute level of solar irradiance differed a great deal between satellites”

In classical science, when the accuracy difference is large but the precision difference is small, the first inference is that there is an unexplained, uncontrolled variable, or several. In that case there is little point in calculating correlation coefficients at all, because they mean nothing.

    • timetochooseagain
      Posted Sep 6, 2011 at 8:11 PM | Permalink | Reply

As long as the source of error is not varying in time, there shouldn’t be a problem. With solar irradiance, at least, there appears to be something about the individual instruments that makes them consistently report the overall level as too high or too low. Perhaps Leif knows the reason for the differences in those?

      Anyway, in general I don’t think many people have proposed that there may be time dependent biases in the CERES data. It’s a worthwhile question: what are possible sources of such?

    • Posted Sep 6, 2011 at 11:30 PM | Permalink | Reply

      Yes, the reason for those discrepancies has been found: “Scattered light is a primary cause of the higher irradiance values measured by the earlier generation of solar radiometers in which the precision aperture defining the measured solar beam is located behind a larger, view‐limiting aperture. In the TIM, the opposite order of these apertures precludes this spurious signal by limiting the light entering the instrument.” from: http://www.leif.org/EOS/2010GL045777.pdf

      • Posted Sep 7, 2011 at 9:24 AM | Permalink | Reply

        Thanks Leif. Is the scattered light bias time invariant for each satellite?

        • Posted Sep 7, 2011 at 10:27 AM | Permalink

          Yes, it would be as the geometry is invariant. So the relative changes should be accurate. What is different for each satellite is the amount of degradation of the sensitivity of the cavity by ultraviolet light. And the degradation is time dependent, becoming larger with time. The oft cited ‘fact’ that TSI this past minimum was smaller than at previous minima is most likely due to uncompensated degradation [and other instrumental effects], see e.g. http://www.leif.org/research/PMOD%20TSI-SOHO%20keyhole%20effect-degradation%20over%20time.pdf and http://lasp.colorado.edu/sorce/news/2011ScienceMeeting/docs/abstracts/1j_Lean_contri.pdf

        • Posted Sep 7, 2011 at 2:07 PM | Permalink

          Thanks! Interesting that PMOD’s “corrections” appear not to be catching some instrument degradation. Isn’t the main difference between ACRIM and PMOD due to the degradation corrections PMOD uses on the NIMBUS data? So if their corrections don’t work then they aren’t stitching right over the ACRIM gap. I’m not sure if that makes either team more right or more wrong, though.

        • Posted Sep 7, 2011 at 2:14 PM | Permalink

The ACRIM data has an unexplained annual variation, so that points to some problem in the data reduction. The null hypothesis is still that there should be no difference at solar minimum when [almost] all solar activity has died away, so it seems to me that both ACRIM and PMOD are wrong. This also seems to be the conclusion of Judith Lean. I’m going next week to a ‘TSI-Climate’ meeting in Sedona, AZ, and this whole question will be discussed: http://lasp.colorado.edu/sorce/news/2011ScienceMeeting/sedona.html

        • Steven Mosher
          Posted Sep 8, 2011 at 2:16 AM | Permalink

          I read that agenda a few days back. Looks like an interesting group and discussion.

        • Posted Sep 8, 2011 at 11:57 AM | Permalink

          I’ll report on the highlights at WUWT

        • GeoChemist
          Posted Sep 8, 2011 at 12:09 PM | Permalink

Dr. Leif – did you see the press release from NASA on the energy from some solar flares being larger than believed?

        • Posted Sep 8, 2011 at 12:19 PM | Permalink

          Yes, but as usual, one should take those releases with some salt [they always over-hype things: 'never before seen', 'unprecedented', 'scientists stumped, baffled, and out of their wits', etc]. It seems that it really just were two flares in succession: http://sprg.ssl.berkeley.edu/~tohban/wiki/index.php/Two-stage_SEE_Shows_Reconnection
          “The hot plasma at ~2 MK that produced the second-stage peak in the SDO/EVE light curves was revealed by the SDO/AIA images to be from a different location than the plasma in the first stage”

        • GeoChemist
          Posted Sep 8, 2011 at 12:24 PM | Permalink

          Thanks, I figured as much.

    • Posted Sep 7, 2011 at 7:13 AM | Permalink | Reply

      Collected here are data from different past plots on the Spencer and Braswell Discover page

      There are many revisions:
      Some due to satellite changes (but if temperatures from satellites are accurate, then shouldn’t the temperature at a fixed altitude be the same from satellite to satellite?)
      Some just terminate
      Some are just revised by a few hundredths of a K

      http://climateandstuff.blogspot.com/2011/06/revisionism-in-satellite-temperatures.html

      Satellites do not give a global snapshot at a point in time; they are a moving window taking hours or days to complete a global sweep

      If satellites recording temperature are so variable, how can anyone use them to determine the effect of clouds?

      Temperatures are derived from someone’s models that derive temperature from radiation plus corrections for intervening layers etc. Is this really better than surface measurements?

      Is the satellite data corrected for local time?

    • jphilips
      Posted Sep 7, 2011 at 3:04 PM | Permalink | Reply

      Collected here are data from different past plots on the Spencer and Braswell Discover page
      There are many revisions:
      Some due to satellite changes (but if temperatures from satellites are accurate, then shouldn’t the temperature at a fixed altitude be the same from satellite to satellite?)
      Some just terminate
      Some are just revised by a few hundredths of a K. Why, if this is such a clean data source?

      Satellites do not give a global snapshot at a point in time; they are a moving window taking hours or days to complete a global sweep
      Is the satellite data corrected for local time?

      If satellites recording temperature are so variable, how can anyone use them to determine the effect of clouds? As far as I’m aware, the global temperature derived from satellites is adjusted for cloud cover!

      Temperatures are derived from someone’s models that derive temperature from radiation plus corrections for intervening layers etc. Is this really better than surface measurements?

      • Posted Sep 8, 2011 at 9:50 AM | Permalink | Reply

        jphilips.

        Not all instruments are equal (forget the satellite, focus on the instrument and its version). Some instruments (like CERES out of Langley Research Center) have had very stable implementations. They are like Model T’s off the factory line. Others have had diverse implementations, which means their performance (both in precision and noise) varies.

        Low Earth orbiting satellites do not give global views. They fly about 350 miles above the surface, orbiting every 90-110 minutes. The polar orbiters are sun-synchronous, crossing the same latitude at the same local time of day (so you can get consistent measurements of each locale).

        The international community has produced trains of satellites so each locale gets sampled over the entire day/night period, rather than just once a day. Unfortunately, they may employ different sensors, so it is not apples to apples.

        GEO birds (about 22,000 miles up) stare at one place. The drastic increase in altitude makes differentiating temperatures in the air column difficult, and local temperatures come only at a gross level of granularity (more acres per sample).

        Your point about something as temporally dynamic as clouds (which can change over a 15 minute period) is spot on. You can’t. Which is why we have these overly simplified models.

        We don’t have the density of measurements geographically (i.e., acres of surface) across the air column (altitude) and in time (a snap every 30 minutes) to know the answers at all.

        These analyses are not just academic; they are completely useless. The system is too dynamic to measure in this spotty fashion. It would be like trying to measure the tides once a month instead of daily – you just don’t have the sample resolution to tease out anything but noise.

    • David L. Hagen
      Posted Sep 8, 2011 at 3:31 PM | Permalink | Reply

      Scafetta & Willson (2009) provide evidence for:

      a significant TSI increase of 0.033 %/decade between the solar activity minima of 1986 and 1996, comparable to the 0.037 % found in the ACRIM composite. The finding supports the contention of Willson (1997) that the ERBS/ERBE results are flawed by uncorrected degradation during the ACRIM gap and refutes the Nimbus7/ERB ACRIM gap adjustment Fröhlich and Lean (1998) employed in constructing the PMOD.

      Scafetta, N., and R. C. Willson (2009), ACRIM-gap and TSI trend issue resolved using a surface magnetic flux TSI proxy model, Geophys. Res. Lett., 36, L05701, doi:10.1029/2008GL036307.

      That needs to be considered when comparing that period with the results of Spencer (2010/2011) or Dessler (2010/2011).

      • Posted Sep 8, 2011 at 4:31 PM | Permalink | Reply

        No, this issue is not resolved the way Scafetta thinks. Rather it is resolved by there not being any difference.

        • David L. Hagen
          Posted Sep 8, 2011 at 8:42 PM | Permalink

          Leif – there are 34 articles citing Scafetta & Willson (2009). Which are you referring to?

        • Posted Sep 8, 2011 at 9:05 PM | Permalink

          Does it matter how many articles cite Scafetta & Willson (2009), when their conclusion does not hold up?

        • David L. Hagen
          Posted Sep 9, 2011 at 9:21 AM | Permalink

          Leif – I repeat: why does the Scafetta & Willson (2009) conclusion not hold up? What is the rebuttal paper?

        • Posted Sep 9, 2011 at 10:04 AM | Permalink

          I repeat: their claim does not hold up.
          For example: http://arxiv.org/PS_cache/arxiv/pdf/0911/0911.4002v1.pdf
          (Journal of Atmospheric and Solar-Terrestrial Physics, 2009; doi:10.1016/j.jastp.2009.11.013)
          “Scafetta and Willson (2009) proposed that these reconstructions can be used to bridge the so-called ACRIM gap (see Sect. 2.1.1) and to create a ‘mixed’ ACRIM-1 – SATIRE – ACRIM-2 composite. They have compared ACRIM-1 and ACRIM-2 data directly to the model, in order to crosscalibrate the data from the two instruments. These authors have, however, used the SATIRE-T model described in Sect. 4, which is not suited for such an analysis. As discussed later, SATIRE-T is based on the historic sunspot number record instead of on magnetograms and continuum images, so that it is based on Eq. (1) rather than the more realistic Eq. (2). It is therefore significantly less accurate than SATIRE-S on time scales of weeks to several months. These are, however, the most critical time scales for such a comparison of the data and the model. This indicates that caution needs to be exercised when considering the results of Scafetta and Willson (2009), in particular their conclusion about the upward trend between the minima in 1986 and 1996. Krivova et al. (2009a) have repeated this analysis employing the more appropriate SATIRE-S model. The ‘mixed’ ACRIM-1 – SATIRE-S – ACRIM-2 composite is shown in Fig. 3. It shows no increase in the TSI from 1986 to 1996, in contrast to the ACRIM composite.”
          or
          Geophys. Res. Lett., 36, L20101, doi:10.1029/2009GL040707
          “A gap in the total solar irradiance (TSI) measurements between ACRIM-1 and ACRIM-2 led to the ongoing debate on the presence or not of a secular trend between the minima preceding cycles 22 (in 1986) and 23 (1996). It was recently proposed to use the SATIRE model of solar irradiance variations to bridge this gap. When doing this, it is important to use the appropriate SATIRE-based reconstruction, which we do here, employing a reconstruction based on magnetograms. The accuracy of this model on months to years timescales is significantly higher than that of a model developed for long-term reconstructions used by the ACRIM team for such an analysis. The constructed `mixed’ ACRIM – SATIRE composite shows no increase in the TSI from 1986 to 1996, in contrast to the ACRIM TSI composite.”
          And several other analyses, including my own, that indicate that no other solar properties showed any similarity to the Scafetta & Willson 2009 claim.

        • David L. Hagen
          Posted Sep 9, 2011 at 9:56 PM | Permalink

          Thanks Leif – will read.

  44. Peter Hartley
    Posted Sep 6, 2011 at 8:14 PM | Permalink | Reply

    I think to understand Spencer and Braswell’s argument you need to go back to their previous analysis in phase space.

    They started with a model where, by assumption, there was (bidirectional) causality. They showed that one direction of the causal relationship gave nice straight lines linking DeltaT and DeltaR, while the other direction produced spiral loops. If you ignored the temporal relationships between the successive points in phase space, I seem to recall you got a cloud of points somewhat like the graphs here.

    In fact the point they made was that while the straight-line relationship gave a fine estimate of the feedback slope, the points arising from the spiral pattern created the “cloud” and biased the regression slope estimate toward zero (no statistically significant relationship), which would then be misread as implying strong positive feedback. In their toy model, at least, one could not conclude from an estimated slope of zero that there was no causal relationship, since there was such a relationship by assumption.

    • Steve McIntyre
      Posted Sep 6, 2011 at 9:03 PM | Permalink | Reply

      Yes, this is an excellent point. I’ve now backtracked to Spencer and Braswell 2010 and, as you observe, it makes a number of interesting points, including clear statements of the lack of correlation.

      • Bart
        Posted Sep 6, 2011 at 9:31 PM | Permalink | Reply

        You need to isolate components with roughly uniform delay to see anything.

    • David L. Hagen
      Posted Sep 6, 2011 at 9:51 PM | Permalink | Reply

      On phase space analyses, David Stockwell shows some interesting oscillatory tracks for Delta T vs T changes in global temperature pulsed by volcanoes etc. See:
      Sinusoidal Wave in Global Temperature
      Phase Plots of Global Temperature after Eruptions

      The impulses are cooling of course, due to the shielding of short-wave solar radiation by stratospheric aerosols.

      These have a period of about 6 years compared to the +4 or -12 month peak lag for S&B 2011.
      In his “Solar Supersensitivity” series, Stockwell finds a much higher correlation (R2=0.71) for the accumulation of solar radiation than for the direct correlation (R2=0.036). See Fig. 7, as well as a Pi/2 (90 deg or 2.75 years) lag from solar driving to SST temperature from the solar cycle.
      Could S&B’s 12 mo lags be related to Pi/2 of an ENSO type oscillatory period? eg 1/4 of 4-5 year ENSO periods?

  45. timetochooseagain
    Posted Sep 6, 2011 at 8:33 PM | Permalink | Reply

    BTW, it seems to me that to suggest ENSO simulation is unrelated to sensitivity is wrong. I know Dessler didn’t say it was totally unrelated. But consider one of the points in a slide Lindzen had in his ACS presentation; it said:

    Note that high gain (sensitivity) implies weak thermal coupling between the atmosphere and ocean. Such coupling is obviously important for air-sea interactions. IMPORTANT QUESTION: Would reducing sensitivity (even artificially) improve simulations of ENSO, PDO, etc., and eliminate problems of drift?

    I doubt that there is a perfect correlation between “good” or “better [marginally]” ENSO simulation and lower sensitivity, but there may be some level of correlation. I would however tend to doubt the relationship goes in the other direction…

  46. Tilo Reber
    Posted Sep 6, 2011 at 8:54 PM | Permalink | Reply

    Leif: “And where does that leave the hypothesis than CLOUDs induced by cosmic rays control the climate?”

    Let’s not confuse the existence or lack of existence of a correlation with the ability of a specific data set to demonstrate that existence.

    In any case, if you want to throw out the possibility of a thermal effect for clouds in the Svensmark case, I guess you’ll also have to throw it out as a positive feedback mechanism for CO2 forcing.

    Ivan: “this entire debate between Spencer and co and Dressler and co is simply whether clouds act as an independent factor driving temperature, or just as a feedback.”

    It would follow that if changes in cloudiness that are driven by CO2 warming can function as a feedback, then changes in cloudiness that are caused by any other factors should have the same effect. Of course we also have to consider any spatial and temporal differences that may result from the different causes, as Craig says.

  47. jae
    Posted Sep 6, 2011 at 9:12 PM | Permalink | Reply

    “However, the diagnostic statistics were not imposing. The adjusted r^2 was a Mannian 0.01045.”

    ONE PERCENT OF THE VARIATION IS EXPLAINED BY THE RELATIONSHIP??? Is this tongue-in-cheek comedy, satire, or what? Can you actually get a poorer relationship than that between any two variables, even using random numbers?

    Maybe I’m really off-base, but I think the reviewers and the editor should kill any paper reporting this kind of correlation as a big deal.

  48. Bart
    Posted Sep 6, 2011 at 9:29 PM | Permalink | Reply

    Here is a little MATLAB routine to illustrate what I am talking about in my above posts.

    %%%%%%%%%%%%%%%%%%%%%%
    % Set up Random Input
    w=randn(1000,1);

    % Create 1st Order Lagged Response
    x=zeros(1000,1);
    a=0.9;
    for k = 2:1000
    x(k) = a*x(k-1)-w(k);
    end

    % Create Scatter Plot, Note Lack of Obvious Correlation
    plot(w,x,'*')

    % Average 100 Samples At A Time And Re-Plot, Readily Observe Negative Correlation
    h=ones(1,100);
    h=h/sum(h);
    plot(filter(h,1,w),filter(h,1,x),'*')
    %%%%%%%%%%%%%%%%%%%%%%

  49. geo
    Posted Sep 6, 2011 at 9:37 PM | Permalink | Reply

    Wagner is a young man? Ah, that explains much. Sometimes the young take their role in the world a little too seriously. . . often encouraged by older, umm, more cynical, men.

    I will give Dessler this – he appears to be much more open to engagement. No one has a right to force anyone else to agreement; it is open engagement (including things like sharing data) that is the minimum standard that really *must* be met to have a healthy process.

    • Tony Hansen
      Posted Sep 7, 2011 at 5:36 AM | Permalink | Reply

      I would be surprised if his ‘stepping down’ were to show up on his CV in that way.
      Time will tell whether his performance leads to significant promotion or not.

  50. Gerald Browning
    Posted Sep 6, 2011 at 10:07 PM | Permalink | Reply

    Leif,

    You are quite right. Using data from a numerical model that does not accurately solve the compressible Navier-Stokes equations with gravitational and Coriolis forces [numerical truncation error is O(1) after 1-2 days] and that uses ad hoc physical parameterizations is nonsensical science.

    Jerry

  51. Policyguy
    Posted Sep 6, 2011 at 11:49 PM | Permalink | Reply

    Perhaps I haven’t been vigilant, but why hasn’t the recently completed CLOUD experiment at CERN been mentioned?

  52. Posted Sep 7, 2011 at 12:28 AM | Permalink | Reply

    The following may be helpful for passing Science editors, Science reviewers, or climatology enthusiasts (warning – the information flow is intense – and I’m not sure if the hat is essential to the procedure):

  53. PaddikJ
    Posted Sep 7, 2011 at 1:03 AM | Permalink | Reply

    My layman’s reaction to the shotgun plot in Fig. 1 was “how the hell can anyone fit a line to that mess?”, so I was gratified to see that several commenters (who appear to be stats-literate) had similar doubts.

    But I also think that if Steve thinks it’s worth his usual fine-tooth-comb treatment, there is probably something there. If that’s so, could maybe Steve or another stats-wiz give us non-stats folk a quick tutorial on trend-extraction?

  54. Posted Sep 7, 2011 at 2:27 AM | Permalink | Reply

    According to Pinker et al., 2005, surface solar irradiance increased by an average 0.16 W/m^2/year over the 18 year period 1983 – 2001 or 2.9 W/m^2 over the entire period.

    This change in surface solar irradiance over 1983 – 2001 is almost exactly 1.2% of the mean total surface solar irradiance of the more recent 2000 – 2004 CERES period of 239.6 W/m^2 for which the mean Bond albedo has been claimed to be 0.298 and mean surface albedo to be 0.067 (Trenberth, Fasullo and Kiehl, 2009).

    The ISCCP/GISS/NASA record for satellite-based cloud cover determinations suggests a mean global cloud cover over the 2000 – 2004 CERES period of about 65.6% and over the entire 1983 – 2008 27-year period a mean of about 66.4±1.5% (±1 sigma).

    ISCCP/FD and Earthshine albedo data for the 2000 – 2004 period enables estimation of the relationship between albedo and total cloud cover and it is best described by the simple relationship:

    Bond albedo (A) ~ 0.353C + 0.067 where C = cloud cover. The 0.067 term represents the surface SW reflection (albedo). For example, for all of 2000 – 2004; A = 0.298 = 0.353 x 0.654 + 0.067

    According to ISCCP/GISS/NASA mean global cloud cover declined from about 0.677 (67.7%) in 1983 to about 0.649 (64.9%) in 2001 or a decline of 0.028 (2.8%).

    This means that in 1983; A ~ 0.353 x 0.677 + 0.067 = 0.306

    and in 2001; A = 0.353 x 0.649 + 0.067 = 0.296

    Thus in 1983; 1 – A = 1 – 0.306 = 0.694

    and in 2001; 1 – A = 1 – 0.296 = 0.704

    Therefore, between 1983 and 2001, the known reduction in the Earth’s albedo A as measured by ISCCP/GISS/NASA should have increased total surface solar irradiance by 200 x [(0.704 - 0.694)/(0.704 + 0.694)]% = 200 x (0.010/1.398)% ≈ 1.4%

    This estimate of ~1.4% increase in solar irradiance from cloud cover reduction over the 18 year period 1983 – 2001 is close to the ~1.2% increase in solar irradiance measured by Pinker et al (2005) for the same period.

    The period 1983 – 2001 was a period of claimed significant global (surface) warming.

    However, within the likely precision of the available data for the above exercise (perhaps of the order of say ±0.5% at ± 2 sigma?), it may be concluded that it is easily possible that the finding of Pinker et al (2005) regarding the increase in surface solar irradiance over that period was due to an almost exactly equivalent decrease in Earth’s Bond albedo resulting from mean global cloud cover reduction.
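    The arithmetic in the comment above is simple enough to check directly. A minimal sketch in Python (the function name and rounding are mine; the 0.353/0.067 fit coefficients and input cloud fractions are the commenter's):

```python
# Bond albedo as a linear function of cloud cover C, per the fit quoted above:
# A ~ 0.353*C + 0.067 (the 0.067 term is the surface SW reflection)
def bond_albedo(cloud_cover):
    return 0.353 * cloud_cover + 0.067

A_1983 = bond_albedo(0.677)   # ~0.306
A_2001 = bond_albedo(0.649)   # ~0.296
absorbed_1983 = 1.0 - A_1983
absorbed_2001 = 1.0 - A_2001

# Percent change in absorbed solar flux implied by the albedo change
pct_albedo = 200.0 * (absorbed_2001 - absorbed_1983) / (absorbed_2001 + absorbed_1983)

# Pinker et al. (2005): 0.16 W/m^2/yr over 18 years, against the quoted 239.6 W/m^2 mean
pct_pinker = 100.0 * (0.16 * 18.0) / 239.6

print(round(pct_albedo, 2), round(pct_pinker, 2))  # ~1.41 vs ~1.2
```

    Carried at full precision the albedo-driven change comes out near 1.41%, against Pinker's ~1.2%; whether that counts as "very close" is covered by the ±0.5% precision the commenter allows.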

  55. Alex
    Posted Sep 7, 2011 at 2:36 AM | Permalink | Reply

    Question: how much of the “CO2 warming x 3″ that the models assume is attributed to cloud feedback? If it is supposed to be a significant part of the 3, then it seems to me that the warmists have a pretty big problem, given the weak correlation shown in the scatter plots.

  56. Posted Sep 7, 2011 at 2:55 AM | Permalink | Reply

    fm=lm(eradr~erats,dess)

    I think Dessler 2010 runs the regression without an intercept

    • Posted Sep 7, 2011 at 3:25 PM | Permalink | Reply

      Not sure if it matters in this case *). But Matlab complains that

      “Warning: R-square and the F statistic are not well-defined unless X has a
      column of ones.”

      and I guess the Fig2A lines would be more hyperbolic if one takes the intercept into account.

      *) When trying to predict a new observation of DR_cloud given some T it should matter
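    Whether dropping the intercept matters can be sketched on synthetic data. A minimal sketch (numpy; the slope/intercept/noise values are toy numbers of my own choosing, not the CERES/ECMWF series): forcing the fit through the origin is nearly harmless when the regressor is a zero-mean anomaly, but biases the slope once the regressor has a nonzero mean.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(scale=1.0, size=2000)

def fit(x, y, intercept=True):
    # Ordinary least squares slope, with or without a column of ones
    X = np.column_stack([x, np.ones_like(x)]) if intercept else x[:, None]
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

# Case 1: zero-mean regressor (like a temperature anomaly)
x0 = rng.normal(size=2000)
y0 = 0.5 * x0 + 0.3 + noise
slope_anom_with = fit(x0, y0, intercept=True)      # ~0.5
slope_anom_without = fit(x0, y0, intercept=False)  # also ~0.5

# Case 2: the same relationship, regressor shifted off zero mean
x1 = x0 + 1.0
y1 = 0.5 * x1 + 0.3 + noise
slope_shift_with = fit(x1, y1, intercept=True)      # still ~0.5
slope_shift_without = fit(x1, y1, intercept=False)  # biased toward ~0.65
```

    Since the ΔT_s series is an anomaly with near-zero mean, the choice should move the slope only slightly, which is presumably why it "may not matter in this case".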

  57. Mac
    Posted Sep 7, 2011 at 3:51 AM | Permalink | Reply

    How will Trenberth reward Wagner for doing the ‘right thing’. Will other scientists be allowed to talk to Wagner now and call him “buddy”?

  58. PeterF
    Posted Sep 7, 2011 at 4:05 AM | Permalink | Reply

    I am still having a hard time understanding where the difference actually lies in the conclusions of Dessler and SB. This post’s figs 2 and 3 suggest to me that both come up with almost the same data. The nuances in the differences can’t possibly account for any different conclusions? So is it only a difference of interpretation, based on what? Underlying different models? Different physical phenomena used for interpretation?

    Regarding fig 1 and also fig 4: it does take some chutzpah to even consider putting a regression through an almost circular cloud of dots! Doesn’t the R language allow plotting 95% confidence lines for a (linear) regression? It might be helpful for visualization purposes, just to show that the regression lines in both figures could be anything from almost straight up to almost straight down.

    But unless a finding of no correlation is the desired answer, what is the essence of Dessler and SB’s difference?

  59. RR Kampen
    Posted Sep 7, 2011 at 4:18 AM | Permalink | Reply

    “… climate capo Kevin Trenberth.” There we quit reading of course.

  60. son of mulder
    Posted Sep 7, 2011 at 5:12 AM | Permalink | Reply

    I fail to understand why the focus is on the effect of clouds on average global diurnal temperature. As more cloud would tend to suppress daily max temperatures and increase nightly min temperatures, the effects could well be significant but a wash overall, hence the very scattery scatter charts. Why are correlations between average (Tmax-Tmin), cloud cover and CO2 not analysed and modelled against measured insolation and radiance?

  61. Geoff Sherrington
    Posted Sep 7, 2011 at 5:15 AM | Permalink | Reply

    timetochooseagain sept 6 8.11 pm notes about accuracy and precision “As long as the source of error is not varying in time, there shouldn’t be a problem”

    My general statement of motherhood did not nominate physical parameters and was not meant to be exclusive to time series analysis or satellite drift. I’m wary in general about correlation coefficients using truncated, detrended, centered, normalised, redistributed or whatever numbers. If Steve permits a digression, this concept hit the news fairly big time when the results of chemical analysis of lunar rocks and soils were announced after the Apollo missions. The best labs in the world were chosen to analyse up to scores of elements in a variety of rocks and soils. Many labs reported replicate analyses of the same sample, so that a calculation was possible for within-lab variance. This was compared with between-lab variance. Analytical chemists were disappointed when within-lab variance was smaller than the between-lab variance for a number of elements. Conclusion – the labs were optimistic about their capability. Precision might have been good, but accuracy was poor. (Ref Morrison G.H. “Analytical Chemistry” VOL. 43, NO. 7, JUNE 1971 and later papers.)
    Next month sees Denver hosting the CMIP5 comparison of GCMs, which is quite similar in concept. In the past, modellers seem to have chosen a few runs to smile in public and where available, the within model variance was typically far less – not even overlapping sometimes – than the between model variance.
    From examples such as these flows the motherhood statement. It is still being breached. The question is, how does one ensure that the inputs show a within-model variance as well as a between model variance – and what is the best method to calculate from both, a variance that has practical value.
    Some similar concepts reside in the D11 analysis. One should not try to make a silk purse from a sow’s ear.
    When I posted a similar piece at Real Climate, Ray Ladbury replied “Thank you for that utterly irrelevant suggestion, based on an utter and complete misunderstanding of how climate models, climate science and science in general work”. I guess that sets a hurdle for people here who make a reply.

  62. Geoff Sherrington
    Posted Sep 7, 2011 at 5:54 AM | Permalink | Reply

    In the Dessler 2011 preprint at line 36, “The term (−λ∆T_s) represents the enhanced emission of energy to space as the planet warms.”
    Simple question – what causes the planet to warm?
    It would have to be a relatively fast process or processes, as the sentence implies that as the planet heats up, it emits enhanced energy to space, a process that would cool it. Is the analysis therefore valid with its choice of averaged monthly temperatures? Is a month too long or too short or optimum, and why?

  63. Posted Sep 7, 2011 at 6:34 AM | Permalink | Reply

    “I’ve looked at clouds from both sides now,
    From up and down, and still somehow
    It’s cloud illusions I recall.
    I really don’t know clouds at all”. – Joni Mitchell

    Sorry. Couldn’t resist.

  64. Posted Sep 7, 2011 at 6:45 AM | Permalink | Reply

    Steve, you say:
    “The peer reviewers at Science did not require Dessler to show the usual diagnostics for any regression.”

    Dessler said:
    “Obviously, the correlation between ΔR_cloud and ΔT_s is weak (r^2 = 2%), meaning that factors other than T_ are important in regulating ΔR_cloud.”

    You say:
    “Given that even the lagged relationship is weak, I’m reluctant to say that analysis using the methods of Dessler 2010 established a negative feedback, but it does seem to me that they cannot be said to have established the claimed positive feedback.
    Perhaps the editor of Science will send a written apology to Kevin Trenberth. “

    Dessler said:
    “Given the uncertainty, the possibility of a small negative feedback cannot be excluded.”

    • Steve McIntyre
      Posted Sep 7, 2011 at 7:39 AM | Permalink | Reply

      Nick Stokes observes that, in the later discussion of Dessler 2010, Dessler observed that “the correlation between DR_cloud and DT_s is weak (r2 = 2%), meaning that factors other than Ts are important in regulating DR_cloud,” a point that I missed in writing this post.

      In my opinion, statistical diagnostics should be reported with the regression, rather than passim in a later discussion, but Nick is correct to observe that the r2 was reported. The adjusted r2, a preferable diagnostic, was .01, as I previously observed. I’ve amended the post, adding Nick’s correction on this point.

      While I accept Nick’s observation on this point, I do not accept his assertion that Dessler’s following statement adequately covers the situation: “Given the uncertainty, the possibility of a small negative feedback cannot be excluded”.

      Nonetheless, this brings up an interesting point that I didn’t completely pick up (this is my first pass through this data.) Dessler observed:

      Given the uncertainty, the possibility of a small negative feedback cannot be excluded. There have been inferences (7, 8) of a large negative cloud feedback in response to short-term climate variations that can substantially cancel the other feedbacks operating in our climate system. This would require the cloud feedback to be in the range of –1.0 to –1.5 W/m2/K or larger, and I see no evidence to support such a large negative cloud feedback [these inferences of large negative feedbacks have also been criticized on methodological grounds (24, 25)].

      This is an interesting paragraph since this is the only one in which Spencer and Braswell 2010 (8) and Lindzen and Choi (7) are cited.

      Dessler makes the reasonable point that you would need a significant negative cloud feedback to offset the well-established positive water vapor feedback and points to the -1 to -1.5 w/m2/K range as needing to be excluded. His money quote here is that he “sees no evidence to support such a large negative cloud feedback”.

      Following up the observation made at the conclusion of my post: applying Dessler’s method at the lag arising from the Spencer-Braswell 2011 analysis (similar results obtained by Dessler 2011) yields a coefficient of -0.9 +- 0.94 W/m2/K (not that I endorse this method of calculating confidence intervals.)

      Dessler’s conclusion in his abstract was:

      Over this period, the short-term cloud feedback had a magnitude of 0.54 ± 0.74 (2σ) watts per square meter per kelvin, meaning that it is likely positive. A small negative feedback is possible, but one large enough to cancel the climate’s positive feedbacks is not supported by these observations.

      I think that this conclusion is totally rebutted by the analysis provided in this post.

      By saying this, I am taking no position on whether cloud feedbacks are positive or negative (nor on solar or cosmic rays). I am simply, in my usual style, considering whether the author’s methods justified his conclusions, and, in this case, as in several others, the evidence is that they didn’t.

      • Posted Sep 7, 2011 at 8:01 AM | Permalink | Reply

        Steve,
        A correlation to an unlagged variable can reasonably be regarded as a measure of feedback. It goes directly into the appropriate differential equation. It applies at all frequencies.

        But a correlation to a lagged variable cannot. You then have to consider the frequency associated with that lag. The number (-0.9 W/m2/K) only represents that value of feedback for oscillatory responses with a four-month period. At other frequencies even the sign of the feedback may be different. And there’s no reason to believe that a period of four months is dominant here.

        Dessler’s claim was not that the feedback is assuredly positive, but that it ruled out the large negative feedbacks that S&B and L&C had claimed, which are well outside his error range. Your lagged calc does not change that.

        • Steve McIntyre
          Posted Sep 7, 2011 at 9:01 AM | Permalink

          Nick, in economics and econometric models, regression on lagged variables occurs all the time. I haven’t parsed the underlying mathematics of this, but, to my recollection, haven’t noticed anyone raising the sort of objection that you raise here.

          You refer to “oscillatory” responses. I doubt that one is dealing here with “oscillatory’ responses as much as decaying responses with lag. Again, I haven’t parsed the underlying mathematical logic, but would be surprised if your point really holds up. Can you give me a statistical reference proving your point in a relevant context?

        • Posted Sep 7, 2011 at 9:23 AM | Permalink

          Steve,
          Regressing on a lagged variable is fine. It’s the interpretation in terms of feedback that doesn’t work. That isn’t really statistics.

          Suppose you have a differential equation
          dT/dt = A + B
          and you then show that A correlates with T: A = λT + …

          Then you can say you have dT/dt = λ T + B + …
          standard feedback. The homogeneous part dT/dt = λ T is satisfied by exp(λt)

          But if A correlates with T(t-t0) then you have
          dT/dt = λ T(t-t0) + B + …
          This is a differential-delay equation, and its behaviour is frequency-dependent. A growing exponential is not a solution of the homogeneous part.

          If λ = -0.9 and t0=4 (months) then a four month cycle will indeed be attenuated at the appropriate rate. But an eight month cycle will be amplified. Any real time sequence is of course a combination of many frequencies, so you have to integrate the effect over the spectrum.

        • Posted Sep 7, 2011 at 10:11 AM | Permalink

          Your math is not quite right here Nick.
          If λ > 0 a growing exponential IS a solution of the homogeneous part [growth rate b determined implicitly and uniquely by b = λ exp(-b t0) ].
          If λ < 0 you wouldn’t expect a growing solution anyway, and you get oscillations.

          One thing that climate scientists don't really seem to have grasped, as far as I can see, is that a natural mechanism for generating oscillations is negative feedback combined with a time delay.

        • Posted Sep 7, 2011 at 10:21 AM | Permalink

          No, Paul, the point is that whatever the sign of λ you can get growing solutions. For example, with λ=-1 and t0=4 (close to Steve’s numbers), there is a solution exp(bt) with b = 0.169703+0.4779877i. It’s growing and oscillating, despite the apparent negative feedback.
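          The quoted root can be checked numerically. A minimal sketch (a plain Newton iteration of my own on the characteristic equation, with Nick's λ = −1, t0 = 4):

```python
import cmath

lam, t0 = -1.0, 4.0   # feedback coefficient and lag (months), as in the comment above

# Substituting T = exp(b*t) into dT/dt = lam*T(t - t0) gives b = lam*exp(-b*t0)
def f(b):
    return b - lam * cmath.exp(-b * t0)

def fprime(b):
    return 1.0 + lam * t0 * cmath.exp(-b * t0)

# Newton's method from a starting guess near the quoted root
b = 0.2 + 0.5j
for _ in range(50):
    b -= f(b) / fprime(b)

# b converges to roughly 0.1697 + 0.4780i: positive real part, i.e. a growing
# oscillation despite the negative feedback coefficient
```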

        • Posted Sep 7, 2011 at 11:55 AM | Permalink

          Paul, I see where you’re coming from, and my statement there was inexact. Yes, with λ positive, there are exponential solutions with real positive exponent. I’m more concerned here with Steve’s case, with λ negative. Then you get a variety of exponentials with complex exponent coefficients, some with positive real part – ie growing, oscillating solutions.

        • Bart
          Posted Sep 7, 2011 at 12:15 PM | Permalink

          Good grief, guys. This problem has already been solved many times over. You’ve got an input A*sin(w*t) and a feedback dependent on it of B*sin(w*t+phi). The ratio B/A is the feedback gain. The variable phi is the phase delay. They are both functions of the frequency w. Narrow your focus to a narrow frequency band, take out the nominal phase delay, and you are plotting B*sin(w*t) versus A*sin(w*t), which is a straight line. The ratio B/A at low frequency is the quantity of interest.

        • Bart
          Posted Sep 7, 2011 at 12:22 PM | Permalink

          phi negative is a delay, phi positive is an advance

        • Posted Sep 7, 2011 at 5:26 PM | Permalink

          Bart,
          Yes, indeed at low enough frequency you approach instantaneous feedback conditions, and it’s all OK. But then the autocorrelations over the relevant time periods are all 1.

          Steve has taken a long enough lag that he gets a significantly different correlation. That’s his point. So phase shift matters, and as you say, it’s frequency dependent. There isn’t a single number (real or complex) that characterises feedback.

        • Bart
          Posted Sep 7, 2011 at 8:59 PM | Permalink

          That’s because there are many processes which are correlated with different time lags depending on frequency. Overall, the energy is concentrated at a lag time of about 4 months.

          There is a single number which is important, though, and that is the dc gain of the feedback. This is the value we seek. There should be minimal delay there, though the signal to noise ratio might not be particularly good. No way to know without trying it out. Trying out different length averages would allow one to determine the best behaved result. Reapplying the same running average over the data would help eliminate more high frequency variability.

          If I were doing this, I could compute an estimate of the Cross Spectral Density and read off the phase relationship directly at all frequencies. But, I’m trying to make this simple so that anyone can try it.
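The cross-spectral reading of phase that Bart describes can be sketched in a few lines. This is a synthetic single-frequency example of my own (not the CERES/ECMWF data): a sinusoid delayed by 4 samples shows up in the cross-spectrum as a phase lag of exactly -ω·4 at that frequency.

```python
import numpy as np

# Synthetic example: a sinusoid and a copy delayed by 4 samples. The delay
# appears in the cross-spectrum as a phase lag of -omega * 4 at that frequency.
n = 256
t = np.arange(n)
k = 8                          # 8 cycles per 256 samples
omega = 2 * np.pi * k / n
x = np.sin(omega * t)          # "input"
y = np.sin(omega * (t - 4))    # same signal, 4 samples later

cross = np.conj(np.fft.fft(x)) * np.fft.fft(y)
phase = np.angle(cross[k])     # phase of y relative to x at frequency bin k

print(phase, -omega * 4)       # both are -pi/4
```

A fixed time delay gives a phase shift proportional to frequency, which is why the apparent relationship between the two series is frequency dependent.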

        • TerryMN
          Posted Sep 7, 2011 at 7:40 PM | Permalink

          That’s as close as you’ll ever see to the equivalent of Nick saying “yep, sorry, I was wrong.”

        • Steve McIntyre
          Posted Sep 7, 2011 at 12:59 PM | Permalink

          If your point is that the results are uninterpretable, isn’t that what Spencer and Braswell 2010, 2011 argued? Isn’t it Dessler who asserted that an interpretation could be placed on the results via regression?

          In addition, you are presuming that there are a lot of “cycles” – a metaphor imported from other fields. If you have data that is spatially and temporally autocorrelated, e.g. 1/f phenomena – and there’s evidence of this sort of thing – then I’m not persuaded that a frequency-based approach is the right way to go.

          Again, with respect, I’m more interested in a reference to a consideration of this topic by a statistician (and it seems an obvious enough problem that there should be such), than in your assertions, meritorious as they may be.

        • Posted Sep 7, 2011 at 5:43 PM | Permalink

          Steve,
          “I’m more interested in a reference to a consideration of this topic by a statistician”
          I think you’re reversing the burden of proof here. You have produced a number (-0.9) which you say disproves Dessler’s feedback analysis. But yours is feedback from a lagged state. You need to show that your number is meaningful.

          “If your point is that the results are uninterpretable…”
          No, my point is that you have produced a number that isn’t a meaningful feedback factor. That doesn’t make anyone else’s analysis wrong.

          It comes back to the definition of feedback. A system where a fraction of the output is fed back to the input. That means current output. Time-delayed feedback is a whole new topic.

          If you have a delay, the notion of feedback still works for a sinusoid, because a past state is related to the present state by a complex factor, which allows for phase shift. But the feedback factor is now frequency dependent.

          In a linear system you can use feedback to describe the evolution of any time-varying process via Fourier decomposition, one frequency band at a time. And knowing the feedback factors for each band, you can put them together to get the whole system performance. But there isn’t a single number for feedback any more.

          It’s true that for processes much longer than your 4 months, a single number feedback will be significant. But it will be Dessler’s number. If you have been able by lagging to get a different regression slope, then that assures that you do have frequency dependence.

        • timetochooseagain
          Posted Sep 7, 2011 at 7:32 PM | Permalink

          “It’s true that for processes much longer than your 4 months, a single number feedback will be significant. But it will be Dessler’s number.”

          I could really use a “buzz” WRONG! Sound here. Look at Dessler’s plot again:

          http://climateaudit.files.wordpress.com/2011/09/dessler_2011_fig-2_markup.png

          Notice two of the models at zero lag show slopes of almost EXACTLY zero. This corresponds to infinite sensitivity! So it is easy to show, in fact Dessler has basically shown it himself, that slope at zero lag is NOT the slope that determines the sensitivity. In fact, if you had actually read Roy’s paper you’d know that he doesn’t say that the ~four month lag represents the real sensitivity, either.

        • Posted Sep 7, 2011 at 7:55 PM | Permalink

          TTCA,
          No, the figure you’ve linked to is of TOA nett flux. The figures being talked about here are ΔR_cloud vs T_s.

          But while you have your itchy finger on that WRONG button, do you think correlation with a lagged variable can be treated as a frequency-independent feedback factor?

        • timetochooseagain
          Posted Sep 7, 2011 at 8:06 PM | Permalink

          It physically makes more sense when you realize that atmospheric temps tend to lag sea surface temperatures by a couple months. Clouds don’t know what the sea surface temperature is, they aren’t quantum-entangled with the sea surface. They respond to the temperature of their ambient environment. Some of the lag is due to that.

          But your accusing me of switching out numbers is a bit odd. When you say:

          “It’s true that for processes much longer than your 4 months, a single number feedback will be significant. But it will be Dessler’s number.”

          You can only be referring to the combined feedback. This is the statement I referenced. It was indeed appropriate for me to show you this is BS by linking to Dessler’s full flux plot.

          There is no signal at zero lag. There is some signal with some lag. The full signal isn’t possible to tease out of the data at any lag. You need to actually know what the confounding variables are and remove their effects. Then you will get the right slope and a better correlation. Since we don’t know the confounding variables, we are getting the wrong slope and poor correlation.

          You have made a number of statements that make it quite clear you have no idea what the points being made are.

        • Posted Sep 7, 2011 at 8:36 PM | Permalink

          “You can only be referring to the combined feedback.”
          Not at all. I was referring to the numbers Steve had been discussing and I had been quoting, which were for ΔR_cloud vs T_s.

        • timetochooseagain
          Posted Sep 7, 2011 at 8:44 PM | Permalink

          The same will be true at zero lag for clouds except you will get away with thinking it makes physical sense because there doesn’t have to be cloud feedback of any kind. Test the method on all feedbacks, if it fails there (it does) it won’t work for clouds alone either.

        • Bart
          Posted Sep 9, 2011 at 12:54 PM | Permalink

          “But yours is feedback from a lagged state.”

          Nick, create a series sin(theta) for theta from 0 to 2pi. Now create another one -sin(theta-3*pi/4). Plot the latter as the ordinate and the former as the abscissa. Do you see a positive “slope”? You should. Does that mean this is a positive feedback? Hardly. It just means the phase delay is enough to produce positive correlation. Take the phase delay out and plot -sin(theta) on Y versus sin(theta) on X. Now, you will see the negative relationship.

          You could also get a straight line by subtracting an additional phase -pi/4, and this one will have a positive slope. But, you have reversed the direction of causality in that case.
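Bart’s sine exercise is easy to reproduce. A minimal sketch (mine, using numpy to stand in for whatever tool a reader prefers): regressing the phase-delayed negative-feedback series on the input yields a positive slope (√2/2 ≈ 0.707), while the zero-delay version gives the true -1.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x = np.sin(theta)                          # input
y_lagged = -np.sin(theta - 3 * np.pi / 4)  # negative feedback, phase-delayed
y_instant = -np.sin(theta)                 # negative feedback, no delay

slope_lagged = np.polyfit(x, y_lagged, 1)[0]
slope_instant = np.polyfit(x, y_instant, 1)[0]

# The phase delay alone turns an apparent slope of -1 into about +0.707
print(slope_lagged, slope_instant)
```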

          “Time-delayed feedback is a whole new topic.”

          We are talking about ordinary phase lag which is inherent in any natural, causal system.

          “But the feedback factor is now frequency dependent.”

          Feedback is generally frequency dependent. A lead controller (e.g., Proportional-Derivative) advances the phase. A lag controller delays it (e.g., Proportional-Integral-Derivative).

          “But there isn’t a single number for feedback any more.”

          Suppose I have a system described by the transfer function H(s) = 1/s, i.e., a pure integrator, where “s” is the Laplace variable. I put in a feedback F(s) = -K/(tau*s+2). The characteristic equation of the system is tau*s^2 + 2*s + K. This system is now stable for any positive values of tau and K, with a characteristic time constant of tau.

          The phase response of the feedback is phi = atan2(-tau*omega,-2), where omega is radial frequency. At high enough frequency, the phase response is -90 degrees. If I had a second order feedback response, the phase response could easily go to 0 degrees – anything between -90 and 0 will give you an apparent positive correlation in a phase plane plot. But, the feedback is decidedly negative, or the system would be unstable. What matters is the phase response at low frequency, which is 180 degrees. The feedback at zero frequency is F(0) = -K/2, decidedly negative.
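A quick numeric check of this example (a sketch under Bart’s stated H(s) and F(s); the values tau = 1, K = 3 are my own illustrative choice): the closed-loop poles all sit in the left half-plane, and the dc feedback F(0) = -K/2 has phase ±180 degrees.

```python
import numpy as np

# Bart's example: plant H(s) = 1/s, feedback F(s) = -K/(tau*s + 2),
# closed-loop characteristic polynomial tau*s^2 + 2*s + K.
tau, K = 1.0, 3.0   # illustrative values

poles = np.roots([tau, 2.0, K])
F0 = -K / 2.0                       # feedback at zero frequency (dc)
phase_dc = np.angle(complex(F0))    # pi radians: negative feedback at dc

print(poles, F0, phase_dc)
```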

          “But it will be Dessler’s number.”

          Dessler’s number is crap, as the above discussion demonstrates.

        • Posted Sep 9, 2011 at 5:30 PM | Permalink

          Bart,
          I think we’re mostly agreeing here. My contention is that you can’t correlate a quantity with a lagged state, and then say the correlation coefficient is a feedback factor.

          A difficulty about fixed time delay is that the phase difference varies for each frequency. It’s not something you can implement with a circuit of impedances. If you tried to express it via Laplace transform, you’d have a transfer function with an infinite number of poles.

          But Dessler’s number isn’t crap, or at least not for those reasons. He’s talking in effect about dc feedback. No delay of any kind. Which in practice has to mean that the feedback is fast relative to other processes.

      • Skiphil
        Posted Feb 17, 2013 at 5:11 PM | Permalink | Reply

        Re: Steve McIntyre (Sep 7 07:39),

        It may not be possible yet to judge cloud feedbacks overall. I wonder if anyone here has noticed this article from last summer, which argues that the data currently available do not rule out either positive or negative overall feedbacks from clouds… i.e., as with a lot of the paleo proxy debates, it may be that the data available to date do not resolve the key issues (a provisional conclusion which may appeal to many who are not already “team” players). [my previous comment disappeared, I hope there is no duplication]

        On the determination of the global cloud feedback from satellite measurements, by T. Masters of UCLA

        published 23 August 2012

        Earth Syst. Dynam., 3, 97–107, 2012
        doi:10.5194/esd-3-97-2012

        “Overall, there is little correlation between the changes in the ΔCRF and surface temperatures on these timescales, suggesting that the net effect of clouds varies during this time period quite apart from global temperature changes. Given the large uncertainties generated from this method, the limited data over this period are insufficient to rule out either the positive feedback present in most climate models or a strong negative cloud feedback.”

        • Skiphil
          Posted Feb 17, 2013 at 5:38 PM | Permalink

          Re: Skiphil (Feb 17 17:11),

          OT to this thread but another case for curious climate science reviews, perhaps. Editor Hargreaves on contentious review process:

          Editor comments on “odd behavior” of critical reviewer of Masters (2012)

          Interactive comment on “On the determination of
          the global cloud feedback from satellite
          measurements” by T. Masters

          J. C. Hargreaves (Editor)

          Received and published: 7 August 2012

          [emphasis added]

          After the first reviews, the peer review process at ESD Discussions is not published online until the paper is finally accepted or rejected for ESD. I would like to see the review process made more open, as this contentious paper elicited some odd behaviour from some of those involved, which made the editing task quite difficult.

          So – for this paper we had 3 rounds of Major Revision in total. Finally I decided that, despite continued calls to reject the paper, one reviewer had not provided the incisive criticism which would compel rejection. The other two reviewers accepted the paper.

          Although I am by no means a specialist in the subject of cloud feedback, I think the manuscript makes some valuable points, and I finally decided to accept the revised version for ESD.

    • Posted Sep 7, 2011 at 9:33 AM | Permalink | Reply

      Nick:

      “Obviously, the correlation between ΔR_cloud and ΔT_s is weak (r^2 = 2%), meaning that factors other than T_s are important in regulating ΔR_cloud.”

      Think for a bit about the implications of such a statement. Regardless of whether “ΔR_cloud” is varying due to “T” or not, since clouds affect the radiation budget, clouds varying due to “something else” (regardless of what) will still have an impact on temperature. This is kinda Roy’s whole damn point.

      And no, the confidence intervals can NOT be used to say that a large negative feedback is ruled out, because as long as factors other than “T” impact “ΔR_cloud”, the slope of Dessler’s regression line cannot be an accurate estimate of feedback.

      • Posted Sep 7, 2011 at 9:38 AM | Permalink | Reply

        No, Andrew, sure there may be other things than T that affect R, but then they are not feeding back T. It’s the estimation of slope that’s relevant there.

        • Posted Sep 7, 2011 at 9:46 AM | Permalink

          You still don’t get it at all.

          Those “other factors” cause you to get the wrong value for the slope! Is it really hard to understand that?

    • David L. Hagen
      Posted Sep 7, 2011 at 12:16 PM | Permalink | Reply

      Pseudo-positive feedback
      Axel Kleidon shows how transitions in climate accompanied by external changes can result in moving along “ridges”, which can give the appearance of “positive feedback” when the feedback at each location is still negative. See Fig 7b in
      A. Kleidon (2009) Non-equilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications. Accepted for publication in Naturwissenschaften.
      web link: http://www.springerlink.com/content/100479/

      (b) When external conditions change in such a way that the trade-off between flux and force shifts (grey lines: old state, black lines: new state), a perturbation of the flux would be enhanced until the flux reaches the new optimum value at which entropy production is at a maximum. This could be interpreted as a positive feedback to change.

      Thus, to prove a “positive” cloud feedback, it would appear you also have to prove that there are no changes in “external conditions”.

      • Posted Sep 7, 2011 at 8:08 PM | Permalink | Reply

        David,
        He’s not saying that at all. He’s saying that a change in external conditions has an enhanced effect, and that constitutes the positive feedback.

        But anyway, remember that it’s LC and SB who are trying to show a large negative feedback. Dessler isn’t trying to prove it’s positive – only that their numbers for negative feedback aren’t there.

        • Bart
          Posted Sep 7, 2011 at 9:04 PM | Permalink

          But, Dessler’s approach is utterly flawed. He has shown nothing.

        • David L. Hagen
          Posted Sep 8, 2011 at 8:56 PM | Permalink

          Nick Stokes
          I see arguments for both cooling and warming (negative/positive feedback). Consequently I see the null hypothesis as “don’t know”. Thus both sides need to prove their case. Thus, to prove one or the other, they also have to show that there were no “external changes” in climate that could confound the feedback, per Kleidon. I expect there will be examples of both types. See Willis’ Thermostat.

  65. Posted Sep 7, 2011 at 9:40 AM | Permalink | Reply

    As I noted on WUWT, this cloud model is ridiculous. Clouds do not form in response to surface temperatures. They form along fronts and are enormous energy conveyors.

    Take the hurricanes this and last week which hit the US East Coast (and us here in VA outside DC). Those clouds did not form in response to surface temps here in DC. But they sure as heck impacted our surface temps in a big way.

    The energy they collected (and H2O) came from Africa and then the Atlantic Ocean. The energy was then dissipated over land (along with a lot of water).

    The fact these models are so far from this basic reality tells me they are just plain wrong. Not even close.

    I repeat – clouds do not solely (or primarily) form due to surface temps. Over oceans that can be one mechanism, but fronts are much more dominant.

    The other factor missing is the vertical transport of energy that you get in thunderheads, etc. Those can CONSUME energy from the surface temps, pulling hot air up. But again, this would be clouds impacting surface temps, not the other way around.

    More examples: Fog on San Fran coast dropping surface temps and reflecting solar energy, Clouds at night trapping heat, etc.

    Nowhere do these overly simple models reflect these real-life dynamics. Without temporal, vertical and geographic energy transport, the model is basically useless.

    • nutso fasst
      Posted Sep 9, 2011 at 9:15 PM | Permalink | Reply

      AJStrata: Clouds do not form in response to surface temperatures.

      They certainly do here in central Arizona during the monsoon season, when tropical moisture is drawn up from Mexico. Thunderstorms form in response to convection when the mornings are sunny and the surface heats up.

  66. JDN
    Posted Sep 7, 2011 at 11:54 AM | Permalink | Reply

    I’m surprised you didn’t catch the violation of homoskedasticity in Fig. 1. Most people get away with violating it, but the upshot is that if one segment of your graph has a different variance than the other, linear regression may not work at all, or your Pearson coefficient may be way off. The variance for Delta_T_s below -0.2 K is carried almost completely by a single outlying point. Also, if you eliminate data below -1.8 K where the variance changes, the slope is flat. I’m not sure what that means for anyone’s argument, but this is a classic case of why homoskedasticity is required; violating it allows insufficiently sampled outliers to create a trend where none actually exists.

    See: http://en.wikipedia.org/wiki/Homoscedasticity
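JDN’s point about a single outlier carrying the fit is easy to illustrate on synthetic data. This is my own toy example, not the CERES/ECMWF series: a trendless cloud of points plus one high-leverage cold-month outlier acquires a slope the cloud alone does not have.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trendless cloud of points (no underlying slope)...
x = rng.normal(0.0, 0.1, 100)     # stand-in for monthly Delta T_s values
y = rng.normal(0.0, 0.5, 100)     # stand-in for Delta R_cloud noise

# ...plus one high-leverage outlier at a cold month
x_out = np.append(x, -0.4)
y_out = np.append(y, -1.5)

slope_clean = np.polyfit(x, y, 1)[0]
slope_with_outlier = np.polyfit(x_out, y_out, 1)[0]

# The single outlier drags the fitted slope upward
print(slope_clean, slope_with_outlier)
```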

    • Steve McIntyre
      Posted Sep 7, 2011 at 1:02 PM | Permalink | Reply

      Yup. There are other issues as well. Both series in the scatter plot are autocorrelated. This can induce spurious correlation, e.g. Phillips 1985 and past CA discussions. Remind me to do a quick simulation on this if I forget.
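The quick simulation Steve mentions might look like this (a sketch of my own, not Steve’s code): correlate pairs of independent AR(1) series and compare the spread of sample correlations with the white-noise case. Autocorrelation inflates the spread severalfold, which is how spurious “significant” relationships arise between unrelated series.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1(n, phi):
    """AR(1) series: x[t] = phi * x[t-1] + white noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

n, reps = 120, 500   # ~10 years of monthly data, 500 Monte Carlo trials

# Sample correlations between pairs of INDEPENDENT series
r_white = [corr(rng.normal(size=n), rng.normal(size=n)) for _ in range(reps)]
r_ar = [corr(ar1(n, 0.9), ar1(n, 0.9)) for _ in range(reps)]

# Autocorrelation inflates the spread of correlations between unrelated series
print(np.std(r_white), np.std(r_ar))
```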

    • EdeF
      Posted Sep 7, 2011 at 6:18 PM | Permalink | Reply

      Good catch!

    • DocMartyn
      Posted Sep 7, 2011 at 6:53 PM | Permalink | Reply

      well spotted that man (?), completely missed that.

  67. Martin Lewitt
    Posted Sep 7, 2011 at 11:55 AM | Permalink | Reply

    Steve,

    “Dessler makes the reasonable point that you would need a significant negative cloud feedback to offset the well-established positive water vapor feedback and points to the -1 to -1.5 w/m2/K range as needing to be excluded. His money quote here is that he “sees no evidence to support such a large negative cloud feedback”.”

    Dismissing a small negative cloud feedback this way ignores the significant implications for the AR4 models, which all have significant positive cloud feedback. The implication for the models might be comparable to the water vapor feedback. What would the AR4 model sensitivities be without positive cloud feedback? Negative cloud feedback doesn’t have to compensate for water vapor feedback alone; the climate system is chock full of negative feedbacks – it’s a heat engine, after all. One place to start is a couple of extra turns of the water cycle: as Wentz (2007) documented in Science, the AR4 models reproduced less than half of the increase in precipitation seen in the observations. The latent heat flux results confirm the precipitation increase.

    How Much More Rain Will Global Warming Bring?
    Frank J. Wentz, Lucrezia Ricciardulli, Kyle Hilburn, and Carl Mears
    Science 13 July 2007: 317 (5835), 233-235.Published online 31 May 2007 [DOI:10.1126/science.1140746]

    • Phil
      Posted Sep 7, 2011 at 12:50 PM | Permalink | Reply

      You ask:

      What would the AR4 model sensitivities be without positive cloud feedback?

      AR4 states in Section 8.6.2.3:

      in the absence of cloud feedbacks, current GCMs would predict a climate sensitivity (±1 standard deviation) of roughly 1.9°C ± 0.15°C (ignoring spread from radiative forcing differences). The mean and standard deviation of climate sensitivity estimates derived from current GCMs are larger (3.2°C ± 0.7°C) essentially because the GCMs all predict a positive cloud feedback (Figure 8.14) but strongly disagree on its magnitude.

      • Steve Fuitzpatrick
        Posted Sep 7, 2011 at 9:20 PM | Permalink | Reply

        Phil,

        1.9C +/- 0.15C (though probably a bit high) is almost certainly closer to reality than 3.2C +/- 0.7C.

        • Steven Mosher
          Posted Sep 8, 2011 at 2:10 AM | Permalink

          I’m not so sure that this approach is even giving you an estimate of ECR. It looks more like a diagnosis of TCR.
          The models which match most closely have TCRs between 1.6 and 2.2. Hmm.

          I’m confused and my head hurts. How does any of this come close to answering the ECR question?

        • Posted Sep 8, 2011 at 11:49 AM | Permalink

          Steven, the transient response problem mostly comes about because the temperature takes a long time to reach equilibrium with the forcing. Unless feedback depends strongly on timescale, there is nothing “transient” about the response so calculated, because it is just a “correction of imbalance so far” that is proportional to the “change in temperature so far”, whether at equilibrium or not. At equilibrium you will get the same relationship between feedback flux and delta T, it’s just that both will be proportionally larger.

  68. David L. Hagen
    Posted Sep 7, 2011 at 4:03 PM | Permalink | Reply

    Roy Spencer posts his initial response to Dessler 2011.

    The Good, The Bad, and The Ugly: My Initial Comments on the New Dessler 2011 Study and
    The Good, The Bad and The Ugly: My Initial Comments on the New Dessler 2011 Study. at WUWT – with music.

    The following graphic shows the relevant equation, and the numbers he should have used since they are the best and most direct observational estimates we have of the pertinent quantities. I invite the more technically inclined to examine this . . .

    Using the above equation, if I assumed a feedback parameter λ=3 Watts per sq. meter per degree, that 20:1 ratio Dessler gets becomes 2.2:1. If I use a feedback parameter of λ=6, then the ratio becomes 1.7:1. This is basically an order of magnitude difference from his calculation.

  69. David L. Hagen
    Posted Sep 7, 2011 at 4:38 PM | Permalink | Reply

    In Global Atmospheric Trends: Dessler, Spencer & Braswell David Stockwell compares (and provides his analysis code):

    the scatter plot of monthly average values of ∆R_cloud (eradr) versus ∆T_s (erats) using CERES and ECMWF interim data. There is extremely little correlation as noted by Steve. In fact, it is not statistically significant in the conventional sense, . . .
    The points in red are the sequential difference of temperature against the cloud radiance. While these have a lower slope, unlike the former, they are conventionally significant, almost to the 99%CL.

  70. Scott Brim
    Posted Sep 7, 2011 at 5:08 PM | Permalink | Reply

    Assuming Dr. Demetris Koutsoyiannis has been following the long running Spencer (et al) versus Dessler (et al) Cloud Kerfuffle, I would be curious what his perspectives are concerning the most recent developments in this ongoing debate.

  71. sky
    Posted Sep 7, 2011 at 8:09 PM | Permalink | Reply

    Nothing reveals the dismal state of climate science more than the commonplace misformulation of the response of the adaptive, nonlinear climate system to solar radiation as a problem of “feedback” of temperature.

    Temperature is an INTENSIVE variable that merely characterizes the STATE of some parcel of matter in the climate system. It is not a FLOW variable that can be “fed back” upon solar radiation, which is the sole INPUT of energy into the system. Nor is it in one-to-one correspondence with the OUTPUT of the system, which is the radiation to space over the entire thermal range of wavenumbers plus the Bond albedo.

    The lack of any physically realistic notion of dynamic system response is what prompts the inconsistent, often nonsensical empirical “determinations” of “feedback parameters,” over which climate “scientists” incessantly squabble.

    • Posted Sep 7, 2011 at 8:40 PM | Permalink | Reply

      “Temperature is an INTENSIVE variable that merely characterizes the STATE of some parcel of matter in the climate system.”
      As is voltage in an electrical circuit. And you can have a voltage feedback affecting a current output, or vice versa.

      • sky
        Posted Sep 7, 2011 at 9:30 PM | Permalink | Reply

        Pray tell, what feedback of temperature upon solar radiation remotely resembles the “voltage feedback affecting a current output”?

        • Posted Sep 7, 2011 at 10:21 PM | Permalink

          Temperature can act as a feedback by affecting atmospheric Rayleigh scattering, among other ways through increasing/decreasing water vapor density. There are also obvious land albedo effects due to evaporation from the soil.

        • Posted Sep 7, 2011 at 10:30 PM | Permalink

          That’s exactly what this post and much of the papers discussed is about. Dependence of R_Cloud on T_s.

        • sky
          Posted Sep 9, 2011 at 2:04 PM | Permalink

          The various effects of internal state variables upon each other and upon the system output do NOT, by a long shot, constitute system feedback. In any rigorous sense of the term, feedback requires some looping of output back to input driven by an independent power source, e.g., operational amplifier. Neither voltage impedance nor current inductance in a circuit are feedbacks. In fact, Stokes’ notion that one can feed back upon the other and vice versa is nonsensical, because voltage and current are differently dimensioned variables. Nor is Rabett’s reference to temperature effects upon Rayleigh scattering of solar radiation a bona fide example. Such scattering is a product of atmospheric particulates and aerosols, not temperature per se.

          On physical grounds, there should be little doubt that cloud albedo invariably diminishes the power available for thermalization. The effect upon surface temperatures is quite dramatic in tropical zones, where there is a pronounced diurnal and/or seasonal cycle in cloudiness. All along the Gulf of Guinea, the seasonal cycle, with maximum cloudiness in the summer months, results in August average temperatures ~2.5K LOWER than in January, when solar radiation is near its zonal minimum. Such patently obvious effects get obscured by misformulating the problem as a “feedback” relationship.

        • Posted Sep 9, 2011 at 5:57 PM | Permalink

          “Stokes’ notion”? Well, the idea of two-way feedback here is Lindzen’s. But you seem to be saying the whole climate science notion of feedback is bunk. OK, then it isn’t just me.

          Dimension is something you have to sort out to get a non-dimensional feedback factor, but it doesn’t preclude the existence of feedback. Nor does feedback itself require an active element. The classic negative feedback was for an emitter-loaded transistor, where you run a resistor back from the emitter to the base. And since the base is low impedance input, emitter high impedance output, this is normally thought of as feeding the output voltage back as an input current. You don’t have to think of it that way – Ohm’s law does the conversion. But you can, and many people did.

        • Posted Sep 9, 2011 at 9:22 PM | Permalink

          Oops – it’s a long time since I was soldering transistors. For emitter read collector.

        • sky
          Posted Sep 10, 2011 at 4:19 PM | Permalink

          Nature pays no attention to how we think about its various mechanisms and processes. It operates by its own rules. Invoking analogies, parsing words and tap-dancing around dimensionality issues doesn’t get you any closer to establishing what those rules are.

          Temperature is no doubt an important PARAMETER, but the whole idea of an INTENSIVE variable being a forcing or a bona fide feedback in any physical system lacks any physical foundation. The commonplace misapplication of Stefan-Boltzman to convert all temperatures to EXTENSIVE variables doesn’t change the basic issue.

    • David L. Hagen
      Posted Sep 8, 2011 at 7:23 AM | Permalink | Reply

      For the comparative evidence for R_Cloud dependence on T_s, see Willis Eschenbach’s TAO/TRITON TAKE TWO and the earlier The Tao that can be spoken. Willis’s diurnal variations seem, visually, to show a much larger, more significant response than Dessler 2010, Spencer 2011 or even Stockwell 2011 (http://landshape.org/enm/global-atmospheric-trends-dessler-spencer-braswell/) (R^2 ~ 0.01 to 0.02 to 0.04). Have there been any statistical analyses of Willis’ cloud/thunderstorm feedback mechanism to compare with Dessler or Spencer?

  72. DG
    Posted Sep 7, 2011 at 10:22 PM | Permalink | Reply

    Wasn’t the whole point of SB 2011 (referring back to their earlier papers) that use of standard regression analysis gives the “illusion” of positive feedback when it is actually strongly negative? I’m not qualified with this enough to opine, but it seems that was the crux of Dr. Spencer’s argument for the past 3-4 years and why he used phase space analysis. RPS posted barbs between Spencer and Humbert as early as 2008 IIRC.

  73. Posted Sep 7, 2011 at 11:37 PM | Permalink | Reply

    There is rumored to be an annual award in climatology for the published correlation coefficient closest to zero, which is relied upon to ‘prove’ a climatological point.

    Winning requires the devious inventiveness of a googlewhacker, and a sophisticated pal-review network.

    • Steven Mosher
      Posted Sep 8, 2011 at 11:18 AM | Permalink | Reply

      I think you just gave me an idea. I’ll see what Josh thinks.

  74. JvdLaan
    Posted Sep 8, 2011 at 2:28 AM | Permalink | Reply

    “…for the temerity of casting a shadow across the path of climate capo Kevin Trenberth…”

    Do you know what capo stands for (actually it is kapo with a k)?
    Here is the true meaning: http://en.wikipedia.org/wiki/Kapo_(concentration_camp)

    Have you no sense of decency, sir? At long last, have you left no sense of decency?

    • simon abingdon
      Posted Sep 8, 2011 at 2:45 AM | Permalink | Reply

      Oxford Dictionary of English

      capo n.(pl. capos) chiefly North American the head of a crime syndicate, especially the Mafia, or a branch of one.

      • JvdLaan
        Posted Sep 8, 2011 at 3:09 AM | Permalink | Reply

        Still no decency left then!

    • mrsean2k
      Posted Sep 8, 2011 at 7:47 AM | Permalink | Reply

      You’ve already made an idiot of yourself once by substituting a word of your own choosing for the actual word used and expressing your faux-outrage.

      You should quit while you’re behind.

      • mrsean2k
        Posted Sep 8, 2011 at 9:15 AM | Permalink | Reply

        Snip away, obviously.

    • Gerald Machnee
      Posted Sep 8, 2011 at 7:51 AM | Permalink | Reply

      It was spelled “capo” not “kapo”. You are making indecent inferences.

      • JvdLaan
        Posted Sep 9, 2011 at 3:37 PM | Permalink | Reply

        OK, “capo”, my bad; most Anglo-Saxons do not use the letter K. But the use of the word capo is still no sign of any decency. Or do you think capo is just a normal thing to call somebody?
        So I repeat it again:

        Have you no sense of decency, sir? At long last, have you left no sense of decency?

        Oh, and Mr McIntyre, why do you never audit anything by Watts, Spencer, Pielke or Eschenbach?
        Have you signed a non-aggression pact?

  75. Geoff Sherrington
    Posted Sep 8, 2011 at 5:56 AM | Permalink | Reply

    Can any progress be made before the time rate of operation of all significant temperature effects of the globe is established?
    I quoted above from the Dessler 2011 preprint, line 36: “The term (-Lambda*DeltaTs) represents the enhanced emission of energy to space as the planet warms.”
    Taking the example to extremes, if the enhanced emission of energy to space were almost instantaneous, then the planet would not warm and the quoted sentence would be a logical nonsense. Just as it starts to get hot, ZIP! and the energy is back in space; back to jail, do not pass GO.
    At the other extreme, if the rate of emission were extremely low, one would have to turn to the dominance of variations in the incoming energy. If the change in the incoming energy rate were very high while the emission rate remained extremely low, the earth would heat or cool. OTOH, if the rate of incoming energy were very low compared with the outgoing capacity, the temperature could tend to a constant value.
    We are discussing clouds. We need to know whether clouds have any capacity to affect the rate of energy movement in all directions. Of course, one notes in the hot Tropics the relief of a cloud passing overhead, but this effect considers only part of the energy path. Another obvious effect is the heating and cooling of ocean surface water.
    So what is the significance of a lag as discussed? Is it a relatively predictable figure, able to be calculated from physical properties? Is it a single figure applicable to the whole globe? Is the peak of a lag merely the maximum of a wider distribution of lags of various durations? If we have lags of various time responses, are these lags a small or a large part of the total delay in the energy path? Which lags can lead to an accumulation of energy over time, and which can cause a lowering of energy over time?
    On a quick reading, it is easy to respond with “This is what these several papers are all about”. Personally, I feel that they cover some of the possible ground for discussion, while leaving room for further measurement, concepts and refinement.
    Whereas SB10 summarised the problem as “In simple terms, radiative changes resulting from temperature change (feedback) cannot be easily disentangled from those causing a temperature change (forcing),” I’d be more comfortable with “Temperature changes resulting from accumulation of energy (being delays in the time rate of heating or cooling) cannot easily be disentangled from near-instantaneous changes in energy.”
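The lag questions above can at least be explored numerically. A minimal cross-correlation sketch (synthetic white-noise forcing and an assumed 8-step relaxation time, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
tau = 8.0                 # relaxation time in time steps (assumed)
N = rng.normal(size=n)    # white-noise forcing
T = np.zeros(n)
for t in range(1, n):
    T[t] = T[t - 1] + N[t - 1] - T[t - 1] / tau   # temperature integrates forcing

def xcorr_at(a, b, lag):
    """Correlation of a[t] with b[t + lag], for lag >= 0."""
    if lag == 0:
        return float(np.corrcoef(a, b)[0, 1])
    return float(np.corrcoef(a[:-lag], b[lag:])[0, 1])

for lag in (0, 1, 4, 8, 16, 24):
    print(f"lag {lag:2d}: r = {xcorr_at(N, T, lag):+.2f}")
```

In this toy model the correlation peaks at a short lag but decays over roughly tau steps, i.e. the “lag” is better thought of as a distribution of lags than as a single number, which bears on the question raised above.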

  76. Posted Sep 8, 2011 at 9:13 AM | Permalink | Reply

    One of the most critical needs for climate models is an understanding of how cloudiness changes over time as low-level warming occurs due to increasing CO2 concentration.

    The fundamental assumption that underlies the orthodoxy’s view on clouds, as exemplified by the publications of Dessler*, is that changes in energy budget at the TOA resulting from changes in cloudiness are related on a one-to-one basis to changes in surface temperature, whether these changes in temperature are due to temporary conditions induced by El Niños or volcanic eruptions, or whether they are due to slowly evolving changes in CO2 concentration. It is clear that Dessler assumes that from year to year, there is a distinct level of cloudiness associated with each surface temperature, and that this relation is the same for El Niños, volcanic eruptions, or evolving changes in CO2 concentration. Using short-term data with high scatter, he has drawn untenable inferences regarding longer-term variations in cloudiness due to low-level global warming. Dessler et al. (GRL, 2008) attempted to derive water feedback sensitivity by comparing data on global temperature and humidity during the winter months of 2006–2007 and 2007–2008. However, the effects of changing CO2 concentration are buried in the noise of a much stronger signal due to El Niño variability during these years. Therefore it is physically impossible to derive the water feedback sensitivity from data limited to these two winters. Yet the authors claim that they have done so and quote a value in agreement with climate models. They then reach the rather incredible conclusion:

    “The existence of a strong and positive water-vapor feedback means that projected business-as-usual greenhouse gas emissions over the next century are virtually guaranteed to produce warming of several degrees Celsius.”

    This conclusion is utterly unsupportable from the analysis of a mere two winters’ data controlled by El Niño activity. Dessler et al. (JGR, 2008) analyzed a mere one-month’s data in 2005 to infer clear-sky top-of-atmosphere outgoing long-wave radiation (OLR) and its relationship to humidity. It is not clear to this writer that this paper sheds any light on water feedback sensitivity.

    *[Minschwaner and Dessler (2004); Minschwaner, Dessler, and Sawaengphokhai (2006); Dessler et al. (GRL, 2008); Dessler et al. (JGR, 2008); Dessler (2010); Dessler and Davis (2010); Dessler (2011)]

  77. Posted Sep 6, 2011 at 4:44 PM | Permalink | Reply

    Ivan: what’s your take on this problem of cloud feedback
    ----
    It seems to me that the empirical data are too uncertain to arrive at any firm conclusion. So we are down to models. Some [too?] simple, some not. And that means we are down to assumptions. I’m not qualified [who is?] to decide which assumptions are good and which are bad. There is probably a spectrum from good to bad.

  78. Steve Fitzpatrick
    Posted Sep 7, 2011 at 9:15 PM | Permalink | Reply

    “And that means we are down to assumptions”
    Yes, for sure. And this is the biggest single problem with depending on models to determine climate sensitivity. All kinds of parameterizations can be used (not even considering direct and indirect aerosol effects); most of these choices are not possible to prove or disprove with available data. So we are left (as usual) with “trust us”. It is not going to be enough.

  79. Posted Sep 8, 2011 at 11:46 AM | Permalink | Reply

    It is most likely that the spectrum of quality of assumptions is from bad to worse. With so many unknown factors what are the chances that anyone will “luck” into a good set of assumptions?

    Hal

12 Trackbacks

  1. [...] can read Steve’s analysis of both of the data sets HERE. He concluded at the end of his analysis, in Steve’s typical low key [...]

  2. [...] UPDATE2: Dessler has made a video on the paper see it here And Steve McIntyre has his take on it with The stone in Trenberth’s shoe [...]

  3. [...] http://climateaudit.org/2011/09/06/the-stone-in-trenberths-shoe/ [...]

  4. [...] starting with Judith Curry (no fewer than three posts here, here and here), followed by Steve McIntyre (here), Roger Pielke Sr (here) and WUWT, which in effect gathers all of these [...]

  5. By Ok, another “me, too”! | suyts space on Sep 7, 2011 at 9:17 AM

    [...] McIntyre has a really nice, yet funny, take on the issue: http://climateaudit.org/2011/09/06/the-stone-in-trenberths-shoe/ . I recommend reading it. Though, I’m wondering why it was presented in such a [...]

  6. [...] more technical discussion….click here…..click here…..click [...]

  8. [...] the S&B story at the beginning, as did Steve McIntyre, with Dessler 2010 in Science, I’ll put a new spin on the satellite data uploaded by Steve, [...]

  9. [...] You can read more about this at Watts Up With That’s page linking to a series of posts regarding this issue. Steve McIntyre of Hockey Stick fame also weighs in on some of the alarmists’ problems over at Climate Audit. [...]

  10. [...] Mr. McIntyre: [...]

  11. [...] existing errors in no way justify such a course of action (Roger Pielke Sr., Judith Curry, Steven McIntyre). Roy Spencer himself explains on his weblog why the accusations against his [...]

  12. [...] to Spencer and Braswell’s data: http://climateaudit.org/2011/09/06/the-stone-in-trenberths-shoe/ [...]
