Reply to Patrick Brown’s response to my article commenting on his Nature paper

Introduction

I thank Patrick Brown for his detailed response (also here) to statistical issues that I raised in my critique “Brown and Caldeira: A closer look shows global warming will not be greater than we thought” of his and Ken Caldeira’s recent paper (BC17).[1] The provision of more detailed information than was given in BC17, and in particular the results of testing using synthetic data, is welcome. I would reply as follows.

Brown comments that I suggested that rather than focusing on the simultaneous use of all predictor fields, BC17 should have focused on the results associated with the single predictor field that showed the most skill: the magnitude of the seasonal cycle in outgoing longwave radiation (OLR). He goes on to say: “Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.”

To clarify, I argued that BC17 undersold the statistical strength of the relationships involved, in the RCP8.5 2090 case focussed on in their Abstract, for which the signal-to-noise ratio is highest. But I went on to say that I did not think the stronger relationships would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario. That is because, as I stated, I disagree with BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models necessarily also applies in the real climate system. I will return to that point later. But first I will discuss the statistical issues.

Statistical issues

When there are many more predictor variables than observations, the dimensionality of the predictor information has to be reduced in some way to avoid over-fitting. There are a number of statistical approaches to achieving this using a linear model, of which the partial least squares (PLS) regression method used in BC17 is arguably one of the best, at least when its assumptions are satisfied. All methods estimate a statistical model fit that provides a set of coefficients, one for each predictor variable.[2] The general idea is to preserve as much of the explanatory power of the predictors as possible without over-fitting, thus maximizing the fit’s predictive power when applied to new observations.
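
As a concrete illustration of the set-up, here is a minimal sketch of PLS regression applied to a problem of this shape, with far more predictor variables than observations. It is a toy example using scikit-learn’s PLSRegression; the data, dimensions and number of retained components are hypothetical stand-ins, not BC17’s actual fields or pre-processing.

```python
# Minimal PLS regression sketch: many predictor variables (grid cells),
# few observations (climate models). Shapes and data are hypothetical.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_models, n_gridcells = 36, 5000
X = rng.standard_normal((n_models, n_gridcells))            # stand-in predictor fields
y = X[:, :10].sum(axis=1) + rng.standard_normal(n_models)   # stand-in predictand

pls = PLSRegression(n_components=3)   # dimensionality reduced to 3 components
pls.fit(X, y)

# The fit yields one coefficient per predictor variable, as described above
coefs = np.asarray(pls.coef_).ravel()
print(coefs.shape)   # (5000,)
```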

If the PLS method is functioning as intended, adding new predictors should not worsen the predictive skill of the resulting fitted statistical model. That is because, if those additional predictors contain useful information about the predictand(s), that information should be incorporated appropriately, while if the additional predictors do not contain any such information they should be given zero coefficients in the model fit. Therefore, the fact that in the highest signal-to-noise-ratio case – RCP8.5 2090, the case focussed on both in BC17 and my article – the prediction skill when using just the OLR seasonal cycle predictor field is very significantly reduced by adding the remaining eight predictor fields indicates that something is amiss.

Brown says that studies are often criticized for highlighting the single statistical relationship that appears to be the strongest while ignoring or downplaying weaker relationships that could have been discussed. However, the logic with PLS is to progressively include weaker relationships but to stop at the point where they are so weak that doing so worsens predictive accuracy. Some relationships are sufficiently weak that including them adds too much noise relative to information useful for prediction. My proposal of just using the OLR seasonal cycle to predict RCP8.5 2090 temperature was accordingly in line with the logic underlying PLS – it was not a case of just ignoring weaker relationships.

Indeed, the first reference for the PLS method that BC17 give (de Jong, 1993) justified PLS by referring to a paper[3] that specifically proposed carrying out the analysis in steps, selecting one variable/component at a time and not adding an additional one if it worsened the statistical model fit’s predictive accuracy. At the predictor field level, that strongly suggests that, in the RCP8.5 2090 case, when starting with the OLR seasonal cycle field, one would not go on to add any of the other predictor fields, as in all cases doing so worsens the fit’s predictive accuracy. And there would not be any question of using all predictor fields simultaneously, since doing so also worsens predictive accuracy compared to using just the OLR seasonal cycle field.
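
The add-one-at-a-time logic can be sketched as follows: increase the number of retained PLS components only while leave-one-out prediction accuracy keeps improving, and stop as soon as adding one worsens it. This is an illustrative rendering of the selection principle just described, not a reproduction of BC17’s actual procedure.

```python
# Stepwise PLS component selection: stop adding components once the
# leave-one-out cross-validated RMSE stops improving. Illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loocv_rmse(X, y, n_components):
    pred = cross_val_predict(PLSRegression(n_components=n_components),
                             X, y, cv=LeaveOneOut())
    return float(np.sqrt(np.mean((np.asarray(pred).ravel() - y) ** 2)))

def select_n_components(X, y, max_components=10):
    best_rmse, best_k = np.inf, 0
    for k in range(1, max_components + 1):
        rmse = loocv_rmse(X, y, k)
        if rmse >= best_rmse:   # this component worsened predictive accuracy,
            break               # so stop, per the add-one-at-a-time principle
        best_rmse, best_k = rmse, k
    return best_k, best_rmse
```

The same stopping rule, applied one level up to whole predictor fields rather than to components, is what implies halting after the OLR seasonal cycle field in the RCP8.5 2090 case.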

In principle, even when given all the predictor fields simultaneously, PLS should have been able to optimally weight the predictor variables to build composite components in order of decreasing predictive power, to which the add-one-at-a-time principle could be applied. However, it evidently was unable to do so in the RCP8.5 2090 case or other cases. I can think of two reasons for this. One is that the measure of prediction accuracy used – RMS prediction error when applying leave-one-out cross-validation – is imperfect. But I think that the underlying problem is the non-satisfaction of a key assumption of the PLS method: that the predictor variables are free of uncertainty. Here, although the CMIP5-model-derived predictor variables are accurately measured, they are affected by the internal variability of the GCMs (general circulation models). This uncertainty-in-predictor-values problem was made worse by the decision in BC17 to take their values from a single simulation run by each CMIP5 model rather than averaging across all its available runs.

Brown claims (a) that each model’s own value is included in the multi-model average, which gives the multi-model average an inherent advantage over the cross-validated PLSR estimate, and (b) that this means that PLSR is able to provide meaningful Prediction Ratios even when the Spread Ratio is near or slightly above 1. Point (a) is true but the effect is very minor. Based on the RCP8.5 2090 predictions, it would normally cause a 1.4% upwards bias in the Spread Ratio. Since Brown did not adjust for the difference of one in the degrees of freedom involved, the bias is twice that level – still under 3%. Brown’s claim (b), that PLS regression is able to provide meaningful Prediction Ratios even when the Spread Ratio is at or virtually at the level indicating a skill no higher than when always predicting warming equal to the mean value for the models used to estimate the fit, is self-evidently without merit.
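
For readers who want the arithmetic behind these percentages, here is a sketch, assuming an ensemble of $n = 36$ models (the size consistent with the quoted figures). Each model’s deviation from the all-model mean relates to its deviation from the leave-one-out mean by

$$x_i-\bar{x} \;=\; x_i-\frac{(n-1)\,\bar{x}_{-i}+x_i}{n} \;=\; \frac{n-1}{n}\bigl(x_i-\bar{x}_{-i}\bigr),$$

so the RMS error of the multi-model-mean benchmark, computed with each model’s own value included in the mean, understates its true out-of-sample RMS error by a factor of $(n-1)/n$. The Spread Ratio is therefore biased upward by $n/(n-1) \approx 1.029$ for $n = 36$, i.e. about 2.9%, when no degrees-of-freedom adjustment is made, or by $\sqrt{n/(n-1)} \approx 1.014$, about 1.4%, with that adjustment.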

As Brown indicates, adding random noise affects correlations, and can produce spurious correlations between unrelated variables. His test results using synthetic data are interesting, although they only show Spread ratios. They show that one of the nine synthetic predictor fields produced a reduction in the Spread ratio below one that was very marginally – 5% – greater than that when using all nine fields simultaneously. But the difference I highlighted, in the highest signal RCP8.5 2090 case, between the reduction in Spread ratio using just the OLR seasonal cycle and that using all predictors simultaneously was an order of magnitude larger – 40%. It seems very unlikely that the superior performance of the OLR seasonal cycle on its own arose by chance.
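
One can get a feel for the size of such chance effects with a toy synthetic experiment along the following lines (purely illustrative, and not Brown’s actual synthetic set-up): a single predictor field carries all the signal, eight others are pure noise, and a leave-one-out Spread-ratio analogue is computed for the informative field alone and for all nine fields together.

```python
# Toy synthetic test: does adding uninformative predictor fields degrade
# cross-validated PLS skill? Illustrative only; not BC17's/Brown's set-up.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n_models, cells = 36, 500
signal = rng.standard_normal((n_models, cells))      # stand-in "OLR seasonal cycle"
noise = [rng.standard_normal((n_models, cells)) for _ in range(8)]  # 8 noise fields
y = 3.0 * signal[:, :20].mean(axis=1) + 0.3 * rng.standard_normal(n_models)

def spread_ratio(X, y, k=2):
    """LOO RMSE of PLS predictions, relative to always predicting the mean."""
    pred = np.asarray(cross_val_predict(PLSRegression(n_components=k),
                                        X, y, cv=LeaveOneOut())).ravel()
    return float(np.sqrt(np.mean((pred - y) ** 2)) / np.std(y))

print(spread_ratio(signal, y))                        # informative field alone
print(spread_ratio(np.hstack([signal] + noise), y))   # all nine fields at once
```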

Moreover, the large variation in Spread ratios and Prediction ratios between different cases and different (sets of) predictors calls into question the reliability of estimation using PLS. In view of the non-satisfaction of the PLS assumption of no errors in the predictor variables, a statistical method that does take account of errors in them would arguably be more appropriate. One such method is the RegEM (regularized expectation maximization) algorithm, which was developed for use in climate science.[4] The main version of RegEM uses ridge regression, with the ridge coefficient (the inverse of which is analogous to the number of retained components in PLS) being chosen by generalized cross-validation. Ridge regression RegEM, unlike the TTLS variant used by Michael Mann, produces very stable estimation. I have applied RegEM to BC17’s data in the RCP8.5 2090 case, using all predictors simultaneously.[5] The resulting Prediction ratio was 1.08 (8% greater warming), well below the comparative 1.12 value Brown arrives at (for grid-level standardization). And using just the OLR seasonal cycle, the excess of the Prediction ratio over one was only half that for the comparative PLS estimate.
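
For concreteness, here is a minimal numpy sketch of the step at the core of ridge-regression RegEM: choosing the ridge coefficient by generalized cross-validation (GCV). It assumes complete data; Schneider’s full algorithm embeds this regularized regression within an EM iteration that also imputes missing values.

```python
# Ridge regression with GCV-selected ridge parameter: the regression core of
# ridge-based RegEM (Schneider, 2001). Sketch only; assumes complete data.
import numpy as np

def ridge_gcv(X, y, lambdas):
    """Return ridge coefficients (for centred data) at the GCV-minimizing lambda."""
    n = X.shape[0]
    Xc, yc = X - X.mean(axis=0), y - y.mean()     # centring absorbs the intercept
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Uty = U.T @ yc
    best_gcv, best_beta = np.inf, None
    for lam in lambdas:
        shrink = s**2 / (s**2 + lam)              # per-component shrinkage factors
        resid = yc - U @ (shrink * Uty)           # residual y - H(lambda) y
        edf = shrink.sum()                        # effective dof, tr(H(lambda))
        gcv = (resid @ resid / n) / (1.0 - edf / n) ** 2
        if gcv < best_gcv:
            best_gcv = gcv
            best_beta = Vt.T @ ((s / (s**2 + lam)) * Uty)
    return best_beta
```

A larger ridge parameter means fewer effective degrees of freedom, playing the role that retaining fewer components plays in PLS – the sense of the analogy noted above.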

Issues with the predictor variables and the emergent constraints approach

I return now to BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models also applies in the real climate system. They advance various physical arguments for why this might be the case in relation to their choice of predictor variables. They focus on the climatology and seasonal cycle magnitude predictors because they find that, compared with the monthly variability predictor, these have PLS loading patterns more similar to those obtained when targeting shortwave cloud feedback, the prime source of intermodel variation in equilibrium climate sensitivity (ECS).

There are major problems in using climatological values (mean values in recent years) for OLR, outgoing shortwave radiation (OSR) and the top-of-atmosphere (TOA) radiative imbalance N. Most modelling groups target agreement of simulated climatological values of these variables with observed values (very likely spatially as well as in the global mean) when tuning their GCMs, although some do not do so. Seasonal cycle magnitudes may also be considered when tuning GCMs. Accordingly, how close values simulated by each model are to observed values may very well reflect whether and how closely the model has been tuned to match observations, and not be indicative of how good the GCM is at representing the real climate system, let alone how realistic its strength of multidecadal warming in response to forcing is.

There are further serious problems with use of climatological values of TOA radiation variables. First, in some CMIP5 GCMs substantial energy leakages occur, for example at the interface between their atmospheric and ocean grids.[6] Such models are not necessarily any worse in simulating future warming than other models, but they need (to be tuned) to have TOA radiation fluxes significantly different from observed values in order for their ocean surface temperature change to date, and in future, to be realistic.

Secondly, at least two of the CMIP5 models used in BC17 (NorESM1-M and NorESM1-ME) have TOA fluxes and a flux imbalance that differ substantially from CERES observed values, but it appears that this merely reflects differences between derived TOA values and actual top-of-model values. There is very little flux imbalance within the GCM itself.[7] Therefore, it is unfair to treat these models as having lower fidelity – as BC17’s method does for climatology variables – on account of their TOA radiation variables differing, in the mean, from observed values.

Thirdly, most CMIP5 GCMs simulate too cold an Earth: their global mean surface temperature (GMST) is below the actual value, by up to several degrees. It is claimed, for instance in IPCC AR5, that this does not affect their GMST response to forcing. However, it does affect their radiative fluxes. A colder model that simulates TOA fluxes in agreement with observations should not be treated as having good fidelity. With a colder surface its OLR should be significantly lower than observed, so if it is in line with observations then either the model has compensating errors or its OLR has been tuned to compensate, either of which indicates its fidelity is poorer than it appears to be. Moreover, complicating the picture, there is an intriguing, non-trivial correlation between preindustrial absolute GMST and ECS in CMIP5 models.

Perhaps the most serious shortcoming of the predictor variables is that none of them are directly related to feedbacks operating over a multidecadal scale, which (along with ocean heat uptake) is what most affects projected GMST rise to 2055 and 2090. Predictor variables that are related to how much GMST has increased in the model since its preindustrial control run, relative to the increase in forcing – which varies substantially between CMIP5 models – would seem much more relevant. Unfortunately, however, historical forcing changes have not been measured for most CMIP5 models. Although one would expect some relationship between seasonal cycle magnitude of TOA variables and intra-annual feedback strengths, feedbacks operating over the seasonal cycle may well be substantially different from feedbacks acting on a multidecadal timescale in response to greenhouse gas forcing.

Finally, a recent paper by scientists at GFDL laid bare the extent of the problem with the whole emergent constraints approach. They found that, by a simple alteration of the convective parameterization scheme, they could engineer the climate sensitivity of the GCM they were developing, varying it over a wide range, without them being able to say that one model version showed a greater fidelity in representing recent climate system characteristics than another version with a very different ECS.[8] The conclusion from their Abstract is worth quoting: “Given current uncertainties in representing convective precipitation microphysics and the current inability to find a clear observational constraint that favors one version of the authors’ model over the others, the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.” This strongly suggests that at present emergent constraints cannot offer a reliable insight into the magnitude of future warming. And that is before taking account of the possibility that there may be shortcomings common to all or almost all GCMs that lead them to misestimate the climate system response to increased forcing.

 

Nicholas Lewis                                                                                   23 December 2017

 

[1] Patrick T. Brown & Ken Caldeira, 2017. Greater future global warming inferred from Earth’s recent energy budget. Nature, doi:10.1038/nature24672.

[2] The predicted value of the predictand is the sum of the predictor variables each weighted by its coefficient, plus an intercept term.

[3] A. Hoskuldsson, 1992. The H-principle in modelling with applications to chemometrics. Chemometrics and Intelligent Laboratory Systems, 14, 139–153.

[4] Schneider, T., 2001: Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values. J. Climate, 14, 853–871.

[5] Due to memory limitations I had to reduce the longitudinal resolution by a factor of three when using all predictor fields simultaneously. Note that RegEM standardizes all predictor variables to unit variance.

[6] Hobbs et al., 2016. An Energy Conservation Analysis of Ocean Drift in the CMIP5 Global Coupled Models. J. Climate, doi:10.1175/JCLI-D-15-0477.1.

[7] See discussion following this blog comment.

[8] Ming Zhao et al., 2016. Uncertainty in model climate sensitivity traced to representations of cumulus precipitation microphysics. J. Climate, 29, 543–560.

70 Comments

  1. Posted Dec 23, 2017 at 6:33 PM | Permalink

    Thank you for this informative post. It seems that Patrick Brown ignored Nic’s comments that “just the OLR seasonal cycle predictor produces a much more skilful result than using all predictors simultaneously”.

    Trends over decades are much more useful than comparing seasonal cycles. The paper used satellite data from 2001 to 2015. In that period the surface temperature best fit trend from the UK Met Office (HadCRUT4.5) was 0.093 °C/decade. 2015 was an El Nino year, so the data ends with an uptick, biasing the trend line high. The trend over the same period from satellite data of the lower troposphere (UAH6.0) was 0.007 °C/decade. The climate model trend of 0.202 °C/decade is much higher. According to the models, the surface trend is supposed to be less than the satellite trend, but it isn’t, likely due to urban warming. This huge discrepancy suggests that the models are running too hot!

    • Frank
      Posted Dec 26, 2017 at 4:01 PM | Permalink

      Ken wrote: “Trends over decades are much more useful than comparing seasonal cycles.”

      The seasonal cycle in GMST is about 3.5 degC in amplitude and has been observed by CERES more than fifteen times (under a variety of ENSO states), providing a much more robust signal than the single example of 0.14 degC warming between 2001 and 2015 (and ending with an El Nino). If you are interested in comparing observations to model predictions, the seasonal cycle IMO provides far more discrimination than decadal warming. Tsushima and Manabe (2013) clearly show that all AOGCMs do a poor job – and a mutually inconsistent job – of reproducing the “feedbacks” (W/m2/K) in OLR from cloudy skies and SWR from both clear and cloudy skies observed during the seasonal cycle. T&M13 are more diplomatic in their assessment:

      “the gain factors obtained from satellite observations of cloud radiative forcing are effective for identifying systematic biases of the feedback processes that control the sensitivity of simulated climate, providing useful information for validating and improving a climate model.”

      http://www.pnas.org/content/110/19/7568.full

      Unfortunately, seasonal warming is not global warming. Seasonal warming is the composite of about 10 degC of average warming in the NH (with more land, a shallower mixed layer and a polar ocean somewhat isolated by land) and 3 degC of average cooling in the SH (with more ocean, almost no seasonal snow cover, and a polar continent somewhat isolated by ocean currents). Little of the signal arises in warming in the tropics. There is no reason to assume that a 2.1 W/m2/K change in OLR from all skies during seasonal warming will turn out to be the same for global warming.

      Even worse, unlike the monthly change in OLR with Ts during the seasonal cycle (which is highly linear), the change in SWR appears to have some lagging components. Reflection of SWR does not appear to be a well-behaved function of the seasonal change in Ts, nor is there a simple physics reason why there should be a functional relationship for SWR reflection from clouds.

      Seasonal warming is great for demonstrating that AOGCMs have significant flaws, but it has severe limitations as a model for global warming.

  2. S. Geiger
    Posted Dec 23, 2017 at 9:25 PM | Permalink

    I assume/hope this will show up on Climate Etc.(?) Would be nice to see a dialog on these issues.

  3. robinedwards36
    Posted Dec 24, 2017 at 5:19 AM | Permalink

    The whole point of generating a “reliable” predictor for a climate variable is to probe into the future, which as everyone understands is very difficult to predict!
    What worried me about evaluating the climate predictor models is something that seems very simple. It is valueless to use current values of potential predictor variables to “predict” current values. We know them!
    It is clear that any attempt to produce a useful predictor has to use earlier values of potential predictor variables to estimate current conditions. We need to know (ideally and essentially without error) data that were observed prior to the current time, whose parameters we can now measure. Thus any evaluation of potential predictor models must involve accurate data from some specified period in the past.
    Succinctly, the “X data” must precede in time the interesting Y data.
    The intuition and computational labour involved in choosing feasible sets of X data is likely to be formidable.

  4. Posted Dec 24, 2017 at 5:36 AM | Permalink

    Perhaps the most serious shortcoming of the predictor variables is that none of them are directly related to feedbacks operating over a multidecadal scale

    Ray Pierrehumbert said something similar in a comment on my blog, but I think your claim that they’re not directly related is too strong (although I realise that you’ve said “multidecadal”). Global warming is fundamentally about the TOA energy imbalance and so these TOA fluxes must be related to the feedbacks and may well be related on multidecadal scales. However, as Ray Pierrehumbert suggests, we can’t necessarily know that models that do best on representing short-term fluctuations in these TOA fluxes also do best when it comes to the feedbacks that determine the ECS, but these TOA fluxes must still be related to the feedbacks.

    The key point – as far as I’m concerned – is that Brown & Caldeira are showing that if you use some reasonable predictors you find that the models that do best suggest that ECS is on the higher side of the range. This doesn’t mean that it definitely is, but does suggest that we should be careful of ruling out this as a possibility.

    • mpainter
      Posted Dec 24, 2017 at 6:11 AM | Permalink

      ATTP : “Global warming is fundamentally about the TOA energy imbalance… ”


      A question for anyone who can answer:

      In calculating TOA imbalance, what fraction of the imbalance is attributed to photosynthesis, if any?
      Some claim that as much as 2% of insolation is converted to stored energy, that is sequestered and removed from the energy budget. What is the actual figure used in GCMs? Anyone?

      • Posted Dec 24, 2017 at 6:29 AM | Permalink

        mpainter,
        I’m not sure about what is included in GCMs, although Earth System Models do include the movement of carbon through the Earth system. However, the imbalance refers to the difference between the incoming and outgoing energy fluxes. In the absence of any changes, we’d expect this to tend to zero (i.e., we’d tend to energy balance). There are, of course, many ways in which energy can flow through the Earth system, including the energy associated with photosynthesis. However, ultimately most of this energy is then converted into heat and radiated away into space so that the amount of energy we receive from the Sun matches the amount we radiate back into space (the reason for the imbalance today is our emission of CO2 into the atmosphere).

        However, if you’re asking about how much energy is sequestered, here’s a comment from Tom Curtis. If you consider all the coal reserves and the timescale over which they were laid down, we’re talking about something like a few times 10^{-8} W/m^2, which really is too small to care about in this context.

        • mpainter
          Posted Dec 24, 2017 at 7:12 AM | Permalink

          ATTP, thanks for your reply.

          The issue of photosynthesis concerns the TOA imbalance on a short term basis, of course, whether computed daily, monthly or annually. This concerns the energy that is removed from the energy budget by chemical sequestration, meaning insolation that is converted to carbohydrates. My question is this: what accounting is made of photosynthesis in the GCMs as a natural process that removes energy from the TOA energy flux? Or is no attempt made to do this?

        • Posted Dec 24, 2017 at 7:19 AM | Permalink

          mpainter,
          Okay, I don’t know the answer, but I’m not convinced that it is a big deal. Most of that energy is eventually converted to thermal energy and radiated back into space. You’d expect there to be some kind of quasi-steady state, in which the rate at which the chemical energy is converted back into thermal energy matches the rate at which it is being converted to chemical energy via photosynthesis. There may be short-term variations, but on longer timescales, you’d expect this to be in balance. So, there may be no attempt to include this, but I can’t see how doing so would make any difference.

        • mpainter
          Posted Dec 24, 2017 at 8:09 AM | Permalink

          ATTP, thanks for your response.

          If photosynthesis removes, say, 1% of insolation from the energy budget, that is equivalent to 1.6 W/square meter. This is the TOA energy imbalance.

          Hard to imagine that a whole generation of climate scientists have ignored photosynthesis, but apparently that has been the case.

        • Posted Dec 24, 2017 at 8:35 AM | Permalink

          mpainter,
          Except we would expect there to be an equivalent amount of energy released via biological processes. Only a tiny amount is sequestered as fossils, and so the solar energy taken up by photosynthesis should be balanced by energy released via other biological processes. I don’t think there is any reason to think that this is significantly out of balance.

        • hanserren
          Posted Dec 24, 2017 at 8:50 AM | Permalink

          The energy is not released; it is stored in the growing biomass.
          https://www.nasa.gov/feature/goddard/2016/carbon-dioxide-fertilization-greening-earth
          April 26, 2016

          Carbon Dioxide Fertilization Greening Earth, Study Finds
          From a quarter to half of Earth’s vegetated lands has shown significant greening over the last 35 years largely due to rising levels of atmospheric carbon dioxide, according to a new study published in the journal Nature Climate Change on April 25.

        • Posted Dec 24, 2017 at 9:09 AM | Permalink

          Hans,
          Yes, there may be an imbalance, but if the flux of energy into the biosphere is of order 1W/m^2 and if the biosphere has grown by of order 10% in 30 years, then that would imply an imbalance of less than 0.01W/m^2. Unless I’ve done my calculation wrong, this would seem rather negligible.

        • mpainter
          Posted Dec 24, 2017 at 9:58 AM | Permalink

          “Flux of energy into the biosphere is of order 1W/m^2…” is of course photosynthesis. So the assumption is made that the photosynthesis is in equilibrium with the biosphere? A very big assumption, particularly in the oceans.

          By the way it is a .10 W, not .01 W that should be calculated.

          But it is best not to conflate the carbon cycle with the earth’s energy budget. It seems that, in the energy budgets, photosynthesis is ignored on the assumption of some sort of energy equilibrium of the biosphere. Is such an assumption warranted?
          One wonders about all the precise figures that are based on such a gross assumption.

        • Posted Dec 24, 2017 at 10:21 AM | Permalink

          mpainter,
          Hold on, if there isn’t some kind of balance, then that would imply that the biosphere is growing substantially, or dying off. It seems pretty clear that the rate at which energy is sequestered as fossils is slow (~10^{-8}W/m^2) and that even though there has been greening, this is of order 10% in 30 years. All I’m pointing out (which I think is roughly correct) is that the energy stored as chemical energy through photosynthesis is released when plants are eaten, etc. As far as I can see, there is not much evidence to suggest that there is a substantial increase in the amount of energy being stored in the biosphere.

          Also, I think the 0.01W/m^2 is roughly correct. If we take the greening as evidence for an increase in the amount of energy stored in the biosphere, then this is of order a 10% increase in 30 years (I’m not convinced it is quite this simple, but let’s go with this). Therefore, this is less than 1% per year. If, in balance, the flux of energy into the biosphere (through photosynthesis) is 1W/m^2, then a 1% increase would be an increase of 0.01W/m^2.

        • mpainter
          Posted Dec 24, 2017 at 10:55 AM | Permalink

          ATTP, I can assure you that a very great amount of organic material is incorporated in sediments, on a daily basis. There are the soluble organic compounds that are washed to sea by drainage to be incorporated into the sediments directly or indirectly after cycling through ocean lifeforms. Then there is the continual rain of organic debris to the ocean floor which is also incorporated into the sediments. Every shale in the geologic column has its organic (carbon, etc.) component to a greater or lesser degree. There can be no question that photosynthesis constitutes permanent sequestration of solar energy, insofar as our timeframes on climate issues apply.

          Back to my original question. Do modelers and others who calculate TOA flux ignore photosynthesis? It appears that they do, citing as justification a very dubious assumption.

        • mpainter
          Posted Dec 24, 2017 at 11:32 AM | Permalink

          Also a ten per cent increase in photosynthesis is just that: a ten per cent increase in sequestration of solar energy.
          And you assume an energy equilibrium in the biosphere, a 10% increase in photosynthesis notwithstanding. It does not seem so to me.

        • tty
          Posted Dec 25, 2017 at 6:45 PM | Permalink

          The amount of photosynthesized organics sequestered is quite considerable, and largely concentrated in interglacials (10% of the time). For example there are at least 3 million km^2 of peat bogs in the northern taiga zone, on average a few meters thick. This has (almost) all accumulated in the last 12,000 years and will (almost) all be destroyed by the next glaciation.

      • Posted Dec 24, 2017 at 11:45 AM | Permalink

        Also a ten per cent increase in photosynthesis is just that: a ten per cent increase in sequestration of solar energy.

        The 10% is – roughly – the total accumulation over a period of just over 30 years. Therefore, annually, it must be less than 1%. If it was accumulating 10% every year, the increase over a 30 year period would be much more than 10%.

        • mpainter
          Posted Dec 24, 2017 at 11:58 AM | Permalink

          And so do you still assume an energy equilibrium in the biosphere?

      • Posted Dec 24, 2017 at 12:04 PM | Permalink

        mpainter,
        As I’ve already said, I see no reason to think that it is significantly out of balance. The greening would seem to suggest some additional sequestration of energy (although I’m not convinced it’s quite that simple) but that would seem to be at around 1% level (i.e., an energy imbalance of order 0.01W/m^2 – much smaller than the TOA imbalance).

        • mpainter
          Posted Dec 24, 2017 at 12:20 PM | Permalink

          ATTP, thanks for your reply. It seems that we agree that it is by the assumption of an energy equilibrium in the biosphere that climate modelers ignore the sequestration of solar energy via photosynthesis. This assumption is held in the face of a preponderance of evidence that shows otherwise (see my comment above at 10:55 am).

          I doubt that climate scientists have paid more than scant attention to the issue, unfounded assumptions being much simpler than actual scientific study (such study admittedly involving years of hard work).

        • Frank
          Posted Dec 26, 2017 at 4:23 PM | Permalink

          ATTP and mpainter: It may be worth noting that man currently burns enough fossil fuels (and acidifies enough limestone) to increase CO2 in the atmosphere by 4 ppm per year, but only 2 ppm accumulates in the atmosphere. 1 ppm/yr is currently taken up by the ocean (which may not involve photosynthesis) and 1 ppm/yr by land. Uptake by land almost certainly starts with photosynthesis, even though soil is the largest repository. So, you might want to calculate how much energy (actually power) is required to photo-reduce 1 ppm/yr of CO2.

        • mpainter
          Posted Dec 27, 2017 at 11:54 PM | Permalink

          In fact, ATTP, you have just demonstrated the mindset of the climate modeling confraternity. No reason to examine the issue, mainly because it suits you not to look more closely. No skepticism for you; everything’s fine.

        • Posted Dec 29, 2017 at 8:25 AM | Permalink

          Re Frank
          Posted Dec 26, 2017 at 4:23 PM | “1 ppm/yr is currently taken up by the ocean (which may not involve photosynthesis)”.
          My old textbook on ocean chemistry tells me that surface ocean water is depleted in total dissolved CO2 as compared with ocean water at greater depths. At around 2 to 4 km depth the content is (or was) about 2350 μmol/kg (micromoles per kg), as compared with about 2150 μmol/kg at the surface, so the depletion is around 9%. This deficiency can only realistically be accounted for by the photosynthetic activity of planktonic organisms. As M. Painter indicates (24 Dec, 10:55), much of the carbon absorbed by the oceans from the atmosphere in the form of CO2 and then fixed by photosynthesis is permanently buried in ocean sediments (both as organic carbon and as CaCO3 shells/skeletal material). Ocean sediments must therefore also form a considerable energy sink.
          Does an increase in atmospheric CO2 such as that due to human activities enhance the rate of photosynthesis in the oceans? If so, some of the excess atmospheric energy may, as M. Painter suggests, be buried and stored in ocean sediments.

        • mpainter
          Posted Dec 29, 2017 at 4:18 PM | Permalink

          Coldish,

          Much of the CO2 depletion of the surface is due to coccolithophores, an abundant phytoplankton with calcareous tests which sink to the bottom when the organism dies. These dissolve in the cold, high-pressure regime found at great depth, enriching the CO2 content of deep water. This is short-term sequestration of CO2, since overturning of the oceans is estimated to take 700-1000 years. When these tests fall onto shallow bottoms such as continental shelves, they are permanently sequestered in sediments.
          Studies have shown a very considerable increase in coccolithophores.

          This is an interesting subject. Coccolithophores not only sequester carbon through photosynthesis but also through the formation of their calcareous tests. And they are increasing. One study showed a tenfold increase of these in one area of the North Atlantic. Hmmmm.

        • Frank
          Posted Dec 30, 2017 at 1:56 AM | Permalink

          Coldish: Thanks for the informative reply. My point should have been that the equivalent of 1-2 ppm of CO2 is currently being taken up by photosynthesis. That should allow us to determine how much radiation is consumed by that process. Unreliable sources on the Internet suggest that 6 photons are needed to reduce each CO2 (and inefficiency converts photons to heat). From this information one should be able to calculate how much visible sunlight is consumed in reducing CO2.

        • Pierre-Normand
          Posted Jan 2, 2018 at 2:03 AM | Permalink

          Re Frank “Uptake by land almost certainly starts with photosynthesis, even though soil is the largest repository. So, you might want to calculate how much energy (actually power) is required to photo-reduce 1 ppm/yr of CO2.”

          That’s a clever suggestion for establishing an upper bound regarding photosynthesis. The Gibbs free energy for converting one mole of CO2 to glucose is 114kcal. On that basis, by my calculations, the power required to reduce 1ppm/year of CO2 is 0.002W/m^2. That seems negligible indeed.

        • mpainter
          Posted Jan 2, 2018 at 5:20 AM | Permalink

          There seems to be some confusion here regarding the power consumed by photosynthesis. It has been variously estimated at 1-2% of insolation, or 1.6 to 3.2 W/sq m.

          Interesting how one can use math to jimmy science. At 1% of insolation converted to glucose, or 1.6 W/sq m of power and using Pierre-Normand’s .002 W/sq m per ppm of CO2, we get for the annual global reduction of CO2 via photosynthesis

          1.6/.002 = 800 ppm/year

          If, as has been claimed, 2% of insolation is sequestered via photosynthesis, then it is 1,600 ppm/year.

          If anthro CO2 = 4ppm annually, then this contributes .05% annually to global total photosynthesis.
          Which computes to a pre-industrial reduction of 796 ppm of CO2/year via photosynthesis. Or 1,596 ppm/year if 2% of insolation is used, or 0.025%. In light of these figures, one needs to ask the right questions. For example, it can be inferred that anthropogenic CO2 contributes only a minute fraction of the annual CO2 increase (<1%?). And the balance is natural.

          Another interesting fact: if all anthropogenic emissions of CO2 were to suddenly end, atmospheric CO2 would decline by 2ppm/year.

        • mpainter
          Posted Jan 2, 2018 at 5:51 AM | Permalink

          To continue, such a decline is computed on the assumption that natural sources make no contribution to the 2ppm annual increase. But a warming ocean means less absorption of atmospheric CO2 and increased ex-gassing of the same (warmer water holds less CO2 than cooler water). There can be no doubt that a warming ocean has contributed to the atmospheric increase in CO2. If one takes a simplistic approach, then these questions can be dismissed without any attempt to quantify these factors.

        • Pierre-Normand
          Posted Jan 2, 2018 at 8:22 PM | Permalink

          Re mpainter,

          I thought the initial question was how much more incident energy gets sequestered as a result of the recent greening (consequent on anthropogenic CO2 emissions). This is clearly bounded above by how much anthropogenic CO2 is being sequestered through photosynthesis. It can’t be more than 2ppm (from simple mass balance) and hence the power sequestered can’t be any more than 0.004W/m^2. Now you are shifting the talk to the total power consumed by photosynthesis rather than the power sequestered, that is, what is consumed regardless of the power released back at the same time (every year) through decomposition of biomass. This is not an imbalance, it’s just one single column in the balance sheet. I don’t see what relevance the magnitude of this power consumed, taken in isolation, has on energy balance.

        • mpainter
          Posted Jan 3, 2018 at 3:18 AM | Permalink

          Pierre-Normand,
          Correct, the energy to reduce one ppm of atmospheric CO2 via photosynthesis is inconsiderable. There can be no disagreement on this. This, however, does not settle the original question.

          I shifted nothing, of course. The thread originated as a question of what fraction of insolation was sequestered via photosynthesis and what cognizance, if any, the climate modelers took of this natural process (see my comment at December 24, 6:11 am). The conclusion is that modelers dismissed this process as of no account, assuming that the biosphere was in energy balance. The sedimentary record indicates otherwise, so it becomes a question of what support there is for such an assumption.

          Does the IPCC address this issue?

        • Pierre-Normand
          Posted Jan 4, 2018 at 2:32 AM | Permalink

          mpainter,

          The sequestration of incoming solar energy through photosynthesis isn’t constrained by the amount of solar energy absorbed by vegetation as it grows, but rather by the amount of CO2 sequestered over a full seasonal cycle. (Compare: the amount of cash a business generates isn’t constrained by gross income but rather by net profit.) That’s because much of the energy consumed is released back, at the same time as much of the consumed CO2 also is released back (mainly through biomass decay). The net variation in CO2 thus provides a measure of the energy sequestered (net) as opposed to the energy merely consumed (gross). Furthermore, we know the amount of CO2 in the oceans to be increasing because of Henry’s law and because the vast recent increase in atmospheric CO2 partial pressure (from 320ppm to 400ppm in 200 years) overwhelms the small amount of warming of the ocean surface. The increase in ocean CO2 concentration also directly shows up as a lowering of ocean pH. This is why we know that there can’t be any significant amount of hidden sequestration of energy through photosynthesis above what is accounted for (as an upper bound) by the 2ppm difference (the airborne fraction) between what we release (4ppm) and what accumulates in the atmosphere (2ppm).

        • mpainter
          Posted Jan 4, 2018 at 7:20 AM | Permalink

          Pierre-Normandy,
          Thanks for your response. Your reply is entirely theoretical and gives the sort of theoretical justification that modelers might give in support of their assumption of an energy balance in the biosphere.

          Your reasoning assumes an energy balance in the biosphere in the event of a constant partial pressure of atmospheric CO2, this constant partial pressure taken as the proof of such balance. This is circular reasoning. Ice cores show a variable atmospheric CO2 content. By your reasoning, this variation reflects an energy imbalance in the biosphere. But we know that it reflects variations in surface temperature.
          Nor does your reasoning account for such episodes where atmospheric CO2 was several times higher than today’s, as in the Cretaceous.

          Science should be constrained by observations. We know that organic material is incorporated in sediments. Your theorizing ignores this and you have made no attempt to quantify it. This is the sort of approach that distinguishes the science behind AGW.

          For myself, hypothetical reasoning and theorizing without observational constraints or that ignores such constraints is deplorable science. One suspects that the climate modelers lack proper scientific training.

        • mpainter
          Posted Jan 4, 2018 at 7:33 AM | Permalink

          Normand_, not Normandy.

        • Pierre-Normand
          Posted Jan 5, 2018 at 12:22 AM | Permalink

          mpainter,

          Contrary to what you are claiming, I never assumed energy balance of the growth/decomposition cycle. Quite the contrary: in order to set an upper bound to the energy that might be sequestered by this process over a full cycle, I assumed that *all* the carbon that is being removed from the oceans+atmosphere is being sequestered after first having been captured through photosynthesis. (Photosynthesis can’t capture carbon that doesn’t exist.) This can’t be any more than 2ppm per year, assuming only that the terrestrial biomass isn’t shrinking (it is indeed increasing) and the amount of CO2 in the oceans isn’t diminishing. It is increasing, as evidenced by the lowering pH of ocean water. This is observational, not theoretical. Henry’s law simply explains *why* this increase in ocean CO2 concentration can be expected in spite of the small amount of warming of the ocean surface. You can’t ignore the large increase in atmospheric partial pressure of CO2 that has occurred over the period when the temperature has increased slightly (accounting on its own for no more than about 10ppm of atmospheric CO2 increase per °C).

        • mpainter
          Posted Jan 5, 2018 at 1:44 AM | Permalink

          Pierre-Normand,
          As I understand your position, you assume an energy equilibrium in the biosphere if atmospheric CO2 remains at a constant level. If you do not so assume, then you need to clarify. Climate modelers assume an energy equilibrium in the biosphere, right?

          You say

          “…I assumed that *all* the carbon that is being removed from the oceans+atmosphere is being sequestered after first having been captured through photosynthesis…”

          The issue is the sequestration of energy by natural processes. Photosynthesis is one means, through direct sequestration of insolation. Another means is the formation of calcium carbonate in the oceans, an endothermic reaction. The assumption that 2 ppm of atmospheric CO2 gives the measure of energy sequestration

        • mpainter
          Posted Jan 5, 2018 at 2:04 AM | Permalink

          To continue,
          The assumption that the measure of energy sequestration worldwide is given by 2ppm atmospheric CO2 is false, because oceanic upwelling adds carbon to surface waters and it is in these surface waters (the photic zone) that energy sequestration occurs via photosynthesis and calcium carbonate formation. Please note that the issue is energy sequestration, which issue is ignored by climate modelers. My point is that the modelers are deficient in their science.

        • mpainter
          Posted Jan 5, 2018 at 2:25 AM | Permalink

          I should point out that CO2 is depleted from upwelling water as measured by the pH, which typically upwells at 7.6 pH and is depleted to 8 pH at some remove from the locus of upwelling. It bears repeating that this upwelled carbon is not relevant to present day changes in atmospheric CO2 and that the modelers take no cognizance of this source of carbon, very much to their fault.

      • bitchilly
        Posted Dec 25, 2017 at 5:52 PM | Permalink

        mpainter, considering phytoplankton produce approximately half of the oxygen we breathe, it must be a considerable amount. I would be interested in reading any literature you know of that attempts to quantify permanent sequestration of insolation.

        The reported drop in some quarters of up to 40% in ocean plankton since 1950 certainly coincides with the period when it was found there was some heat “missing”, though I believe that heat may have been found in a ship’s bucket.

        • mpainter
          Posted Dec 25, 2017 at 8:51 PM | Permalink

          bitchilly, I have no reference. My supposition is that any such study would be by a “consensus” type and automatically suspect.

          It needs to be investigated by skeptics. Oxidation of carbon is exothermic. But the reaction CaO + CO2 = CaCO3 is endothermic. I doubt that this reaction is considered by the climate scientists in their energy budget but this represents the ultimate fate of oceanic CO2. Hence, energy released by oxidation of C is balanced by the calcium carbonate reaction to ? extent.

          And so forth. I have no faith in the muddling consensus science.

    • Posted Dec 24, 2017 at 12:29 PM | Permalink

      Hi ATTP, thanks for commenting.

      It is certainly true that there is a tendency among CMIP5 models for those that most accurately represent various aspects of the current climate to have a fairly high ECS – in the range 3–4.5 C, say. Thus, most “emergent constraints” studies using CMIP5 models point to ECS being above 3 C. However, I’m far from convinced that these findings tell one much about the real climate system, as opposed to about the various CMIP5 models.

      As well as the major issue of feedbacks acting over the longer term not necessarily being similar to those acting over shorter timescales, the results may be specific to the current generation of GCMs.

      Moreover, for CMIP5 GCMs results may be dependent on which models are included. For BC17’s predictors, when using all predictors simultaneously there is a correlation of -0.3 between the RMS error versus CERES observations and predicted 2090 warming on the RCP8.5 scenario: models predicting greater warming tend to have better fidelity to aspects of current TOA radiation variables. However, if the low sensitivity NASA GISS models (4 of them, all sharing the same atmospheric module and having poor TOA radiation fidelity) are excluded, the correlation disappears.

    • Posted Dec 24, 2017 at 1:17 PM | Permalink

      ATTP and Nic, the problem here is that if AOGCMs lack skill and have large numbers of substantial inaccuracies, then any skill is almost certainly due to tuning that causes errors to cancel for the output quantities used for tuning.

      I would suggest rereading this excellent comment from Frank pointing to one of the weaknesses of AOGCMs that is critical to skillfully predicting future climate states (convection and precipitation).

      From the abstract of Zhao (2016), GFDL model:

      “The authors demonstrate that model estimates of climate sensitivity can be strongly affected by the manner through which cumulus cloud condensate is converted into precipitation in a model’s convection parameterization, processes that are only crudely accounted for in GCMs. In particular, two commonly used methods for converting cumulus condensate into precipitation can lead to drastically different climate sensitivity, as estimated here with an atmosphere–land model by increasing sea surface temperatures uniformly and examining the response in the top-of-atmosphere energy balance. The effect can be quantified through a bulk convective detrainment efficiency, which measures the ability of cumulus convection to generate condensate per unit precipitation. The model differences, dominated by shortwave feedbacks, come from broad regimes ranging from large-scale ascent to subsidence regions. Given current uncertainties in representing convective precipitation microphysics and the current inability to find a clear observational constraint that favors one version of the authors’ model over the others, the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.”

      The “drastic difference” in climate sensitivity is from ECS 3.0 K to 1.8 K (assuming F_2x = 3.7 W/m2). If the authors are correct and no observational constraint favors one model over the other, then the approach of BC17 appears worthless. I’d like to see how well each model reproduces feedbacks in response to seasonal warming. (Studying the relative merits of a climate model with an ECS that agrees with EBMs might not enhance one’s career.)

      • Frank
        Posted Dec 26, 2017 at 4:40 PM | Permalink

        dpy6629: For the record, I added some caveats to my comments on Zhao (2016) on Nic’s earlier post. Zhao (2016) reports large changes in “Cess climate sensitivity” with parameterization of cumulus precipitation, which I multiplied by 3.7 W/m2/doubling to get an ECS. Changing parameterization caused large changes in climate sensitivity, but saying this represents a change from 3.0 K to 1.8 K in ECS may be incorrect.

        Comments from someone who understands Cess climate sensitivity and the difference between working with AGCMs and AOGCMs would be appreciated. There is no guarantee that the lowest sensitivity models AM4-L will ever be incorporated into a full AOGCM.

    • Frank
      Posted Dec 27, 2017 at 12:31 PM | Permalink

      ATTP wrote: “Global warming is fundamentally about the TOA energy imbalance and so these TOA fluxes must be related to the feedbacks and may well be related on multidecadal scales.”

      This appears to be incorrect. Global warming/climate sensitivity is about the amount of warming needed to eliminate a radiative forcing. It is about W/m2/K, where W/m2 is the imbalance created by a forcing, and K is the temperature change needed to eliminate that imbalance. Nic is correct to criticize the use of predictors that involve only W/m2 or K, and do not contain information about this ratio.

      We also need to carefully distinguish between forcing (things besides temperature that permanently change the TOA balance) and the current radiative imbalance, which is a measure of disequilibrium between current temperature and the equilibrium temperature associated with zero imbalance across the TOA. The current imbalance is about 0.7 W/m2 and the current forcing is about 2.5 W/m2. If forcing didn’t increase in the future, we would be 72% of the way to equilibrium warming.

  5. Posted Dec 24, 2017 at 10:41 AM | Permalink

    ATTP, perhaps you can provide a technical basis for your dismissal of Nic’s criticism in your blog post here.

    Rather predictably, Nic Lewis has a guest post on Climate Etc in which he looks at the Brown & Caldeira analysis and claims that global warming will not be greater than we thought.

    I think this claim is simply wrong. Even if he has found some issue with the analysis in Brown & Caldeira, that still would not justify a claim that global warming will not be greater than we thought.

    ATTP, I took Nic’s post’s title to be a tongue-in-cheek poke at the blatantly subjective and unscientific title of BC17. He was not proving global warming will not be worse than we thought, only that the models, analysis and rhetoric are. So if the basis of your calling Nic’s criticism “simply wrong” is the title of Nic’s post, then I would say you have nothing.

    • Posted Dec 24, 2017 at 11:48 AM | Permalink

      Ron,
      The point I was making in that post was that the key result in Brown & Caldeira is very simply that models that match some observational predictors suggest that the ECS is on the high side of the range. Consequently, we should maybe not dismiss the possibility that ECS could be on the high side of the range. This doesn’t mean that it is, but it’s not the first analysis of this kind that has suggested this. My own view is that there is little to suggest that we should move away from the range presented by the IPCC (with the caveat that there are some indications that the low end of the likely range should maybe be at 2K, rather than at 1.5K).

      • Posted Dec 24, 2017 at 12:17 PM | Permalink

        Ron, as I said…

      • Posted Dec 24, 2017 at 5:55 PM | Permalink

        ATTP says: “The point I was making in that post was that the key result in Brown & Caldeira is very simply that models that match some observational predictors suggest that the ECS is on the high side of the range. ”

        And Nic demonstrated with a technical analysis that the quality of the statistical behavior of BC17’s predictors indicated the paper’s primary conclusion — that CMIP5 mean ECS can be uplifted 15% when the AOGCMs are weighted according to their skill — is wrong. If you are claiming that Nic is “simply wrong” in his analysis we await your counter (as well as Patrick Brown’s).

        In your post and here you have only disputed Nic’s claim by mis-characterizing it, and Dr. Brown seems to have done the same. Nic’s claim is not regarding ECS, it’s that BC17’s statistics do nothing to support their claims regarding raising ECS.

        I see Nic’s claim of BC17 failing a statistical spot check as yet unchallenged. I think it’s also important to remember that BC17 would not have published their analysis if the result had inferred a lower ECS. The underlying bias of the field puts the requirement of strict scrutiny on any unique or experimental analysis technique, as such techniques are invented with a predetermined use.

    • Posted Dec 24, 2017 at 2:56 PM | Permalink

      Ron, what you are seeing here is the cognitive dissonance of someone trying to project a scientific demeanor when he really has fixed policy and political views that are hard to defend with the latest science. So he sneaks in with claims such as “Lewis is wrong” which ignores the real issues but expresses his policy views.

    • Posted Dec 25, 2017 at 7:49 AM | Permalink

    On the reliability of the intra-annual reflection of the observed monthly trends (ERA-Interim) vs. the blended (70% tos + 30% tas(land)) CMIP5 mean (RCP8.5): [chart omitted]

  6. Posted Dec 24, 2017 at 11:40 AM | Permalink

    Ron, the last time that ATTP dismissed an article by Nic at his blog (of course without any valuable reasoning), I asked for some arguments and was blocked in the end by a guy named “Willard”, who has the right to do that at the ATTP blog. You won’t get a skillful reply, I’m afraid.

  7. Geoff Sherrington
    Posted Dec 24, 2017 at 10:21 PM | Permalink

    I find it quite amusing, Nic the Realist meets the BC17 Model Dreamers – I am on the side of Nic here. How far out can one go with belief in a theoretical construct like this?
    I would be ashamed to show my face if I had written a paper so loosely constrained by observation as BC17 is.
    Some concepts are quite old. There is a possible clarifying analogy with analytical chemistry whole-rock analysis – analysis for every major and minor rock element – where historic difficulties long precluded analysis of oxygen, a major. It was estimated after the analysis of everything else, by assuming stoichiometry: that Silicon was SiO2, Aluminium was Al2O3, etc.
    Then came crunch point. All the analytes had to sum to 100% +/- derived error. What error could be assigned to oxygen? If your sum came to 110%, had you made an analytical error or an oxygen assumption error? This was never settled, since it was not possible without oxygen analysis. So there was essentially no check on the +/- of the other analyses. But, we proceeded, knowing and commonly stating that the assumptions limited the accuracy.
    In BC17, by analogy, we do not have the complication of an assumed oxygen value. Essentially all inputs to their models are assumptions, every one is an oxygen. So tell me again how BC17 estimates the +/- of every factor? Geoff

    • Posted Dec 25, 2017 at 1:50 PM | Permalink

      As far as I can tell, Geoff, the answer is that they can’t and they don’t. It seems to me a problem where proper error analysis and propagation is never addressed. Maybe proper error analysis and propagation isn’t even considered.

      Am I the only person alarmed by estimates being reported as facts without uncertainty? /rhetorical

  8. jim2
    Posted Dec 25, 2017 at 7:08 PM | Permalink

    “…that still would not justify a claim that global warming will not be greater than we thought.”

    We think, therefore, it is.

  9. Reginald Perrin
    Posted Dec 27, 2017 at 5:57 PM | Permalink

    Steve, why have you abandoned Judith Curry?

    She desperately needs help

    Willard Tony is a powderkeg about to explode

    And only a short car ride
    From the emotionally delicate Damsel in Distress

    Girls rule(s)

    • jim2
      Posted Dec 29, 2017 at 11:33 PM | Permalink

      Are you the internet climate crazy from Windsor Ontario, Canada?

      Malicious WUWT troll sees police show up at his door

      • barn E. rubble
        Posted Jan 1, 2018 at 9:38 PM | Permalink

        Well, the AKA is among those listed @WUWT, as is the rambling incoherence of the message. Rather embarrassing for most of us from the Great White North. I know, we have more, and sadly everybody has too many… sigh… Here’s to a saner and safer 2018!

  10. DR
    Posted Dec 29, 2017 at 9:37 AM | Permalink

    Off topic, but thought this would be of interest:
    UA ordered to surrender emails to skeptics of human-caused climate change
    http://tucson.com/news/local/ua-ordered-to-surrender-emails-to-group-that-calls-global/article_8983347d-faff-51b3-9748-f1a83737b637.html

  11. Gerald Browning
    Posted Dec 31, 2017 at 2:34 PM | Permalink

    Finally a piece of honesty from the climate modelers:

    “They found that, by a simple alteration of the convective parameterization scheme, they could engineer the climate sensitivity of the GCM they were developing, varying it over a wide range, without them being able to say that one model version showed a greater fidelity in representing recent climate system characteristics than another version with a very different ECS”

    A clear admission that the models do not represent reality. If the parameterizations (forcings) and numerical approximations of the correct dynamical equations were accurate, such variation in results would not be possible.

    Jerry
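
    As a deliberately over-simplified illustration of the degeneracy Jerry describes, here is a zero-dimensional energy-balance sketch in Python (all parameter values are assumptions, not taken from any GCM): two “model versions” with ECS of roughly 2 K and 4 K reproduce nearly the same 1850–2020 warming once a compensating aerosol-type forcing scaling is applied, so fidelity to the recent record cannot discriminate between them.

```python
# Toy energy-balance demonstration (illustrative parameters only):
# different climate sensitivities, similar historical warming.
import numpy as np

F2X = 3.7                      # W m-2 forcing for doubled CO2
C = 8.0                        # toy heat capacity, W yr m-2 K-1
dF = 0.021 * np.arange(171)    # assumed GHG forcing ramp, 1850-2020

def run(lam, aerosol_scale):
    """Integrate C dT/dt = F_net - lam*T with 1-year Euler steps."""
    T = 0.0
    for f in dF:
        T += ((1.0 - aerosol_scale) * f - lam * T) / C
    return T

# lam = 1.85 gives ECS = F2X/lam ~ 2.0 K; lam = 0.925 gives ~4.0 K.
# The aerosol scalings are tuned so both runs warm by about 1.1 K.
print("ECS ~2 K version:", round(run(1.85, 0.425), 2), "K by 2020")
print("ECS ~4 K version:", round(run(0.925, 0.705), 2), "K by 2020")
```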

    • mpainter
      Posted Dec 31, 2017 at 4:45 PM | Permalink

      The physics of clouds is not well understood. The “microphysics” of clouds, that is, the formation of cloud droplets and how it influences convection, has been studied for decades by meteorologists and is still somewhat of a mystery. Bottom line: the modelers are guessing at convective parameterization and will not admit it.

      • Gerald Browning
        Posted Jan 10, 2018 at 12:37 AM | Permalink

        mpainter,

        We analyzed the modelers’ microphysics scheme and found that most of the terms were negligible. The only important ones were the latent heating and cooling terms; the rest could be thrown away.
        If you want a reference I would be happy to supply it.

        Jerry

      • Gerald Browning
        Posted Jan 12, 2018 at 12:31 AM | Permalink

        mpainter,

        Here is the reference:

        “Scaling the microphysics equations and analyzing the variability of hydrometeor production rates in a controlled parameter space”, Advances in Atmospheric Sciences, July 2002, Volume 19, Issue 4, pp. 619–650.

        Note that, for large-scale flows, convective parameterization is not the largest source of error in short-term forecasts; the boundary-layer parameterization is. In fact the error in that parameterization propagates upward and destroys the accuracy of the forecast within a few days. It is only the new observational data inserted into the models every 6–12 hours that keeps them from going completely off the rails (see the Gravel et al. reference shown on this site). A large part of this problem is the use of Richardson’s columnar equation for the vertical velocity (an integral), which propagates the error upward so fast; a schematic illustration follows below. As I have said many times, the hydrostatic assumption that leads to Richardson’s equation does not lead to the correct reduced system, i.e., the correct well-posed limit system for the hyperbolic dynamical system that describes large-scale atmospheric flows in the mid-latitudes (Browning and Kreiss 2002).

        Jerry
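
        Here is a schematic numeric in Python (not Browning’s analysis; the grid and magnitudes are made up) of why a columnar, Richardson-type integral spreads a low-level error upward: since w(z) is obtained by integrating the divergence from the surface, a divergence bias confined to the boundary layer biases w at every level above it.

```python
# Schematic only: error growth through a columnar vertical-velocity
# integral, w(z) = -int_0^z D(z') dz'. All values are illustrative.
import numpy as np

z = np.linspace(0.0, 10_000.0, 101)        # height grid, m
D_true = 1e-5 * np.exp(-z / 2_000.0)       # "true" divergence, s-1
D_err = np.where(z < 1_000.0, 2e-6, 0.0)   # bias in lowest 1 km only

def w_from_divergence(D, z):
    """Trapezoidal column integral for vertical velocity."""
    return -np.concatenate(
        [[0.0], np.cumsum(0.5 * (D[1:] + D[:-1]) * np.diff(z))])

w_true = w_from_divergence(D_true, z)
w_biased = w_from_divergence(D_true + D_err, z)
print("w error at  5 km: %.4f m/s" % (w_biased[50] - w_true[50]))
print("w error at 10 km: %.4f m/s" % (w_biased[-1] - w_true[-1]))
# Both prints give the same ~0.002 m/s error: once incurred in the
# boundary layer, it contaminates the entire column above.
```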

  12. ccscientist
    Posted Jan 4, 2018 at 9:31 AM | Permalink

    Convective heat dissipation is a key component of Willis’ tropical ocean heat engine theory (hotter days yield earlier and more prolonged clouds and more thunderstorms, a negative feedback mechanism). The inability of the models to handle daily cloud and storm formation is a major weakness.

    • Posted Jan 8, 2018 at 6:59 AM | Permalink

      A recent study of the diurnal cycle of low clouds in the tropics in models and observations: https://www.nature.com/articles/s41467-017-02369-4 (open access).
      From the abstract: “While the mean appears to be reliable, the amplitude and phase of the DCC show marked inconsistencies, inducing overestimation of radiation in most climate models. In some models, DCC appears slightly shifted over the ocean, likely as a result of tuning and fortuitously compensating the large DCC errors over the land.”

    • Posted Jan 8, 2018 at 7:22 AM | Permalink

      Nic, once again I have a comment in moderation. IMO it’s the link which is the source of this trouble. However, the linked paper is very interesting…

      • Posted Jan 21, 2018 at 12:47 PM | Permalink

        Sorry, only just spotted this. Comment now belatedly released.

  13. jddohio
    Posted Jan 5, 2018 at 11:02 PM | Permalink

    Deflategate — The ESPN article on the Patriots http://www.espn.com/nfl/story/_/page/hotread180105/beginning-end-new-england-patriots-robert-kraft-tom-brady-bill-belichick-internal-power-struggle
    mentioned in passing that John Jastremski and Jim McNally, the equipment managers accused of assisting Brady in deflating footballs, are no longer with the Patriots. This lends credence to my position that McNally almost certainly lied when he stated that he didn’t remember why he went to the bathroom the day after the game occurred. (See this post: https://climateaudit.org/2016/06/07/deflategate-controversy-is-due-to-scientist-error/.) [Since the Deflategate posts have been closed to comments, I am commenting here.]

    If McNally and Jastremski were innocent victims, it seems almost certain that the Patriots would have supported and protected them after the kerfuffle was over. The ESPN article also states that “many [Patriot] staffers in the building believed there was merit in the allegation, however absurd the case.”

    JD

  14. Dan White
    Posted Jan 31, 2018 at 7:18 PM | Permalink

    Has there been a reply to the reply to the reply to the reply yet from Patrick Brown? I did a bit of searching and didn’t find any response.

