*A guest post by Nic Lewis*

The recently published open-access paper “How accurately can the climate sensitivity to CO2 be estimated from historical climate change?” by Gregory et al.[i] makes a number of assertions, many uncontentious but others in my view unjustified, misleading or definitely incorrect. Perhaps most importantly, they say in the Abstract that “The real-world variations mean that historical EffCS [effective climate sensitivity] underestimates CO2 EffCS by 30% when considering the entire historical period.” But they do not indicate that this finding relates only to effective climate sensitivity in GCMs, and then only when they are driven by one particular observational sea surface temperature dataset.

However, in this article I will focus on one particular statistical issue, where the claim made in the paper can readily be proven wrong without needing to delve into the details of GCM simulations.

Gregory et al. consider a regression in the form *R* = *α T*, where *T* is the change in global-mean surface temperature with respect to an unperturbed (i.e. preindustrial) equilibrium, and *R* is the radiative response of the climate system to the change in *T*. *α* is thus the climate feedback parameter, and *F*_{2xCO2} / *α* is the EffCS estimate, *F*_{2xCO2} being the effective radiative forcing for a doubling of preindustrial atmospheric carbon dioxide concentration.

The paper states that “estimates of historical α made by OLS [ordinary least squares] regression from real-world *R* and *T* are biased low”. OLS regression estimates *α* as the slope of a straight-line fit between *R* and *T* data points (usually with an intercept term, since the unperturbed equilibrium climate state is not known exactly), by minimising the sum of the squared errors in *R*. Random errors in *R* do not cause a bias in the OLS slope estimate. Thus in the chart below, with *R* plotted on the y-axis and *T* on the x-axis, OLS finds the red line that minimises the sum of the squares of the lengths of the vertical lines.

*[Chart: scatter plot of R against T, with the fitted OLS line in red minimising the vertical distances to the data points.]*

However, some of the variability in measured *T* may not produce a proportionate response in *R*. That would occur if, for example, *T* is measured with error, which happens in the real world. It is well known that in such an “error in the explanatory variable” case, the OLS slope estimate is (on average) biased towards zero. This issue has been called “regression dilution”.
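A minimal numerical sketch of regression dilution (the sample size, noise level and seed here are illustrative choices of mine, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 101
x_true = np.linspace(0.0, 100.0, n)  # noise-free explanatory variable
y = x_true.copy()                    # exact linear response, true slope 1

slopes = []
for _ in range(2000):
    # add measurement error to x only
    x_meas = x_true + rng.normal(0.0, 20.0, n)
    xc = x_meas - x_meas.mean()
    yc = y - y.mean()
    slopes.append((xc @ yc) / (xc @ xc))  # OLS slope (with intercept)

# Classical attenuation factor: var(x_true) / (var(x_true) + 20**2) ~ 0.68
print(np.median(slopes))
```

Errors in *y* alone would leave the slope estimate unbiased; it is the error in the explanatory variable that attenuates it.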

Regression dilution is one reason why estimates of climate feedback and climate sensitivity derived from warming over the historical period often instead use the “difference method”.[ii] [iii] [iv] [v] The difference method involves taking the ratio of the differences, Δ*T* and Δ*R*, between *T* and *R* values late and early in the period. In practice Δ*T* and Δ*R* are usually based on differencing averages over at least a decade, so as to reduce noise.
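Sketched in code (the function name and the ten-point averaging default are my own illustrative choices, not from the cited papers):

```python
import numpy as np

def difference_slope(T, R, k=10):
    """Difference-method estimate of the feedback parameter:
    ratio of the changes in R and T between k-point averages
    taken at the end and at the start of the series."""
    dT = np.mean(T[-k:]) - np.mean(T[:k])
    dR = np.mean(R[-k:]) - np.mean(R[:k])
    return dR / dT

# Noise-free check: a series with feedback parameter 1.3 is recovered
T = np.linspace(0.0, 1.0, 101)
print(difference_slope(T, 1.3 * T))  # ~ 1.3
```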

I will note at this point that when a slope parameter is estimated for the relationship between two variables, both of which are affected by random noise, the probability distribution for the estimate will be skewed rather than symmetric. When deriving a best estimate by taking many samples from the error distributions of each variable, or (if feasible) by measuring them each on many differing occasions, the appropriate central measure to use is the sample median not the sample mean. Physicists want measures that are invariant under reparameterization[vi], which is a property of the median of a probability distribution for a parameter but not, when the distribution is skewed, of its mean. Regression dilution affects both the mean and the median estimates of a parameter, although to a somewhat different extent.

So far I agree with what is said by Gregory et al. However, the paper goes on to state that “The bias [in *α* estimation] affects the difference method as well as OLS regression (Appendix D.1).” This assertion is wrong. If it were true, it would imply that observationally-based estimates using the difference method are biased slightly low for climate feedback, and hence biased slightly high for climate sensitivity. However, the claim is *not* true.

The statistical analyses in Appendix D consider estimation by OLS regression of the slope *m* in the linear relationship *y*(*t*) = *m x*(*t*), where *x* and *y* are time series whose available data values are affected by random noise. Appendix D.1 considers using the difference between the last and first single time periods (here, it appears, of a year), not of averages over a decade or more, and it assumes for convenience that both *x* and *y* are recentered to have zero mean, but neither of these affects the point of principle at issue.

Appendix D.1 shows, correctly, that when only the endpoints of the (noisy) *x* and *y* data are used in an OLS regression, the slope estimate for *m* is Δ*y*/Δ*x*, the same as the slope estimate from the difference method. It goes on to claim that taking the slope between the *x* and *y* data endpoints is a special case of OLS regression, and that the fact that an OLS regression slope estimate is biased towards zero when there is uncorrelated noise in the *x* variable implies that the difference method slope estimate is similarly biased.

However, that is incorrect. The median slope estimate is not biased as a result of errors in the *x* variable when the slope is estimated by the difference method, nor when there are only two data points in an OLS regression. And although the mean slope estimate is biased, the bias is high, not low. Rather than going into a detailed theoretical analysis of why that is the case, I will demonstrate it by numerical simulation. I will also explain in simple terms how regression dilution can be viewed as arising, and why it does not arise when only two data points are used.

The numerical simulations that I carried out are as follows. For simplicity I took the true slope *m* as 1, so that the true relationship is *y* = *x*, and took the true value of each *x* point as the sum of a linearly trending element running from 0 to 100 in steps of 1 and a random element uniformly distributed in the range −30 to +30, which can be interpreted as a simulation of a trending “climate” portion and a non-trending “weather” portion.[vii] I took both the *x* and *y* data (measured) values as subject to zero-mean independent normally distributed measurement errors with a standard deviation of 20. I took 10,000 samples of randomly drawn (as to the true values of *x* and the measurement errors in both *x* and *y*) sets of 101 *x* and 101 *y* values.

Both the median and the mean of the resulting 10,000 slope estimates from regressing *y* on *x* using OLS were 0.74 – a 26% downward bias in the slope estimator due to regression dilution.

The median slope estimate based on taking differences between the averages for the first ten and the last ten *x* and *y* data points was 1.00, while the mean slope estimate was 1.01. When the averaging period was increased to 25 data points the median bias remained zero while the already tiny mean bias halved.

When differences between just the first and last measured values of *x* and *y* were taken,[viii] the median slope estimate was again 1.00 but the mean slope estimate was 1.26.
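These results can be reproduced with a short simulation of the setup described above (the random seed and variable names are mine; a run takes a few seconds):

```python
import numpy as np

rng = np.random.default_rng(42)
n_pts, n_sims = 101, 10000

ols, diff10, diff1 = [], [], []
for _ in range(n_sims):
    # true x: linear trend 0..100 plus uniform "weather" noise in [-30, 30]
    x_true = np.arange(n_pts, dtype=float) + rng.uniform(-30.0, 30.0, n_pts)
    x = x_true + rng.normal(0.0, 20.0, n_pts)  # measured x
    y = x_true + rng.normal(0.0, 20.0, n_pts)  # measured y, true slope m = 1

    xc, yc = x - x.mean(), y - y.mean()
    ols.append((xc @ yc) / (xc @ xc))          # OLS regression slope

    # difference method using ten-point averages at each end
    diff10.append((y[-10:].mean() - y[:10].mean())
                  / (x[-10:].mean() - x[:10].mean()))

    # difference method using single endpoints
    diff1.append((y[-1] - y[0]) / (x[-1] - x[0]))

print(np.median(ols))     # ~ 0.74: regression dilution
print(np.median(diff10))  # ~ 1.00: difference method, decadal averages
print(np.median(diff1))   # ~ 1.00: difference method, single endpoints
```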

Thus, the slope estimate from using the difference method was median-unbiased, unlike for OLS regression, whether based on averages over points at each end of the series or just the first and last points.

The reason for the upwards mean bias when using the difference method can be illustrated simply, if errors in *y* (which on average have no effect on the slope estimate) are ignored. Suppose the true Δ*x* value is 100, so that Δ*y* is 100, and that two *x* samples are subject to errors of respectively +20 and −20. Then the two slope estimates will be 100/120 and 100/80, or 0.833 and 1.25, the mean of which is 1.04, in excess of the true slope of 1.
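The arithmetic of that illustration, written out for checking:

```python
# True change in x is 100; with true slope 1 the change in y is also 100.
# The two measured x-differences carry errors of +20 and -20 respectively.
slope_low = 100 / (100 + 20)   # 0.833...
slope_high = 100 / (100 - 20)  # 1.25
print((slope_low + slope_high) / 2)  # 1.0416..., above the true slope of 1
```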

The picture remains the same even when the (fractional) errors in *x* are smaller than those in *y*. On reducing the error standard deviation for *x* to 15 while increasing it to 30 for *y*, the median and mean slope estimates using OLS regression were both 0.84. The median slope estimates using the difference method were again unbiased whether using 1, 10 or 25 data points at the start and end, while the mean biases remained under 0.01 when using 10 or 25 data point averages and reduced to 0.16 when using single data points.

In fact, a moment’s thought shows that the slope estimate from 2-point OLS regression must be unbiased. Since both variables are affected by error, if OLS regression gives rise to a low bias in the slope estimate when *x* is regressed on *y*, it must also give rise to a low bias in the slope estimate when *y* is regressed on *x*. If the slope of the true relationship between *y* and *x* is *m*, that between *x* and *y* is 1/*m*. It follows that if regressing *x* on *y* gives a biased-low slope estimate, taking the reciprocal of that slope estimate will provide an estimate of the slope of the true relationship between *y* and *x* that is biased high. However, when there are 2 data points the OLS slope estimate from regressing *y* on *x* and that from regressing *x* on *y* and taking its reciprocal are identical (since the fit line goes through the 2 data points in both cases). If both the *y*-against-*x* and *x*-against-*y* OLS regression slope estimates were biased low, that could not be so.
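This identity is easy to verify numerically for any pair of distinct points (the helper function is my own):

```python
import numpy as np

def ols_slope(x, y):
    """OLS slope of y on x, allowing an intercept (data centred first)."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# With exactly two points the fitted line passes through both, so the
# y-on-x slope equals the reciprocal of the x-on-y slope.
x = np.array([1.0, 4.0])
y = np.array([2.0, 9.0])
print(ols_slope(x, y))        # 7/3
print(1.0 / ols_slope(y, x))  # also 7/3
```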

As for how and why errors in the *x* (explanatory) variable cause the slope estimate in OLS regression to be biased towards zero (provided there are more than two data points), while errors in the *y* (dependent) variable do not, the way I look at it is this. For simplicity, I take centered (zero-mean) *x* and *y* values. The OLS slope estimate is then Σ*xy* / Σ*xx*, that is to say the weighted sum of the *y* data values divided by the weighted sum of the *x* data values, the weights being the *x* data values.

An error that moves a measured *x* value further from the mean of zero not only reduces the slope *y*/*x* for that data point, but also increases the weight given to that data point when forming the OLS slope estimate. Hence such points are given more influence in determining the slope estimate. On the other hand, an error in *x* that moves the measured value nearer to the zero mean, increasing the *y*/*x* slope for that data point, reduces the weight given to that data point, so that it is less influential in determining the slope estimate. The net result is a bias towards a smaller slope estimate.

For a two-point regression, however, this effect does not occur, because whatever the signs of the errors affecting the *x*-values of the two points, both *x*-values will always be equidistant from their mean, and so both data points will have equal influence on the slope estimate whether the errors increase or decrease their *x*-values. As a result, the median slope estimate is unbiased in that case. Whatever the number of data points, errors in the *y* data values do not affect the weights given to the data points, and they will on average cancel out when forming the OLS slope estimate Σ*xy* / Σ*xx*.
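A toy check of that weighting argument, applying the centred Σ*xy* / Σ*xx* formula directly (i.e. treating the zero means as known); the specific numbers are my own:

```python
import numpy as np

def centred_slope(x, y):
    return (x @ y) / (x @ x)  # slope = sum(xy) / sum(xx) for zero-mean data

# Three points lying exactly on y = x, already centred
y = np.array([-10.0, 0.0, 10.0])

# An x-error pushing the right-hand point outward lowers its point slope
# y/x to 10/15 AND raises its weight x**2, dragging the fit below 1...
x_out = np.array([-10.0, 0.0, 15.0])
# ...while an equal error pulling it inward raises y/x to 10/5 but
# shrinks its weight, so it lifts the fit less strongly.
x_in = np.array([-10.0, 0.0, 5.0])

s_out = centred_slope(x_out, y)  # 250/325 ~ 0.77
s_in = centred_slope(x_in, y)    # 150/125 = 1.20
print((s_out + s_in) / 2)        # below 1: the net pull is downward
```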

So why is the proof in Gregory et al. Appendix D.1, supposedly showing that OLS regression with 2 data points produces a low bias in the slope estimate when there are errors in the explanatory (*x*) data points, invalid? The answer is simple. The Appendix D.1 proof relies on the proof of low bias in the slope estimate in Appendix D.3, which is expressed to apply to OLS regression with any number of data points. But if one works through the equations in Appendix D.3, one finds that in the case of only 2 data points no low bias arises – the expected value of the OLS slope estimate equals the true slope.

It is a little depressing that after many years of being criticised for their insufficiently good understanding of statistics and lack of close engagement with the statistical community, the climate science community appears still not to have solved this issue.

*Update 29 October 2019*

Just to clarify, the final paragraph is a general remark about the handling of statistical issues in climate science research, not a particular remark about this new paper (where the statistical mistake made does not in any case affect any of the results).

*Update 23 January 2020*

Typo in 3rd paragraph fixed (*R* = *α* *T* corrected to *R* in the 2nd line).


[i] Gregory, J.M., Andrews, T., Ceppi, P., Mauritsen, T. and Webb, M.J., 2019. How accurately can the climate sensitivity to CO₂ be estimated from historical climate change? Climate Dynamics.

[ii] Gregory JM, Stouffer RJ, Raper SCB, Stott PA, Rayner NA (2002) An observationally based estimate of the climate sensitivity. J Clim 15:3117–3121.

[iii] Otto A, Otto FEL, Boucher O, Church J, Hegerl G, Forster PM, Gillett NP, Gregory J, Johnson GC, Knutti R, Lewis N, Lohmann U, Marotzke J, Myhre G, Shindell D, Stevens B, Allen MR (2013) Energy budget constraints on climate response. Nature Geosci 6:415–416

[iv] Lewis, N. and Curry, J.A., 2015. The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Climate Dynamics, 45(3-4), pp.1009-1023.

[v] Lewis, N. and Curry, J., 2018. The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. Journal of Climate, 31(15), pp.6051-6071.

[vi] So that, for example, the median estimate for the reciprocal of a parameter is the reciprocal of the median estimate for the parameter. This is not generally true for the mean estimate. This issue is particularly relevant here since climate sensitivity is reciprocally related to climate feedback.

[vii] There was an underlying trend in T over the historical period, and taking it to be linear means that, in the absence of noise, linear slope estimated by regression and by the difference method would be identical.

[viii] Correcting the small number of negative slope estimates arising when the *x* difference was negative but the *y* difference was positive to a positive value (see, e.g., Otto et al. 2013). Before that correction the median slope estimate had a 1% low bias. The positive value chosen (here the absolute value of the negative slope estimate involved) has no effect on the median slope estimate provided it exceeds the median value of the remaining slope estimates, but does materially affect the mean slope estimate.

## 44 Comments

“It is a little depressing that after many years of being criticised for their insufficiently good understanding of statistics and lack of close engagement with the statistical community, the climate science community appears still not to have solved this issue.”

Amen. Most of them not only don’t seem to get it, but lack the understanding of elementary statistics I was taught as a first-year grad student (in geology!) long, long ago. Shocking, really.

Thanks for sticking with this, despite the brickbats.

“It is a little depressing that after many years of being criticised for their insufficiently good understanding of statistics and lack of close engagement with the statistical community, the climate science community appears still not to have solved this issue.”

I could not have said it better… The core of this paper is founded on some… shaky estimations, stripped down by Nic. I hope for an adequate reaction from the authors. The rest of the paper is interesting indeed.

In a truly random world, climate scientists would sometimes make mistakes exaggerating global warming, and sometimes make mistakes underestimating global warming. These would balance out over many published articles. But when 100% of published papers make mistakes exaggerating global warming, one must suspect underlying bias.

Might be the reason for why they don’t engage with the statistical community.

Or deliberate corruption.

Reblogged this on Climate Collections.

Gosh, that was an amazing catch, Nic.

Do you have any thoughts on using a Reduced Major Axis regression here? It has the same property as a two point OLS regression in that the X on Y regression yields the inverse of the Y on X regression, but it may be more robust than end-point averaging.

Hi Paul

Thanks for commenting! I quite often use Deming regression (where there are only two variables and errors in them are uncorrelated), which I think is related to (or a special case of) Reduced Major Axis regression? That just requires an estimate of the ratio of the standard deviations of the x and y variables. However, I’ve found that the auto- and cross-correlations for annual mean N and T data from GCMs, which can be quite strong and complex, can sometimes bias the slope estimates.

In general I find the difference method (with averaging periods of at least a decade, and chosen to minimise unwanted influences) is most robust for estimating climate feedback etc., but that if regression is preferred (or necessary, as when there is more than one explanatory variable), then regressing pentadal mean data is much more robust than regressing annual mean data, and is much less likely to introduce estimation bias.

Hi Nic, thanks for taking a good look at this. The effect of regression dilution in exaggerating CS is something I have been raising for years.

https://judithcurry.com/2016/03/09/on-inappropriate-use-of-least-squares-regression/

The “trick” is to plot dR against dT, i.e. with dT on the abscissa, which is normally used for the causative variable, despite the fact that they are suggesting dR is causing dT. For some reason which is never explained, all climate science on the question does it this way around. Regressing dT against dR would give a lesser value of CS. Are they really all ignorant of this issue and coincidentally all using a method which exaggerates, rather than under-estimates, CS?

One of the links in that article is to Forster & Gregory 2006 (same JM Gregory of the UK Hadley Centre). They were about the only paper I could find at the time which even showed awareness of the issue. However, they chose to bury it in an appendix rather than discuss it in the paper itself, where they continue to use a method producing exaggerated CS.

Lindzen and Choi 2011 also considers the problem and refers to it as “exaggerating positive feedbacks”.

IIRC Deming requires estimates of the SD of the ERRORS in x and y, not of the variables themselves (which are clearly already available). It is this additional information (where it is available) which allows a correction for regression dilution. I’m sure you know that but your comment was poorly worded.

Thanks again for applying your expertise to the question and shining some light on its abuse in climate science.

Hi Greg,

When considering the radiative response (change in net outward radiation with zero change in applied forcing) delta_R to a change in surface temperature delta_T, which is what ‘climate feedback’ refers to, it is logical to regress delta_R on delta_T. As there is more fractional noise in delta_R than in delta_T, one would want to do it this way round anyway if using OLS.

I agree that it is the SD of the errors in x and y that is needed for Deming regression – IIRC just the ratio of their errors is in fact sufficient. I inadvertently misworded that sentence. Good spot.

Click to access climate-05-00076.pdf

Temperature over the last 425 million years is uncorrelated with CO2 levels. By the guy who wrote the draft Report for the Kyoto Protocol.

From W. Jackson Davis, 2017, linked above:

“As shown here, and as expected from the derivation of marginal forcing from concentration, atmospheric CO2 concentration and marginal radiative forcing by CO2 are inversely related. Diminishing returns in the forcing power of atmospheric CO2 as concentration increases ensure that in a CO2-rich environment like the Phanerozoic climate, large variations in CO2 exert little or negligible effects on temperature.”

As many of us have been saying for quite a few years now. . . .

you are reading what you want to read and not what is written.

What have “many of us” been saying for quite a few years about CO2 in the Phanerozoic climate? I must have missed that one.

My understanding is that the paper is pointing out that unforced variability can lead to variations in T and N that mean that if you try to estimate equilibrium climate sensitivity from the historical period it is biased low, whether you use OLS regression, or the difference method.

ATTP, The paper is long and makes various claims. The one that I address in this post is the first one mentioned in the Abstract:

“We find that (1) for statistical reasons, unforced variability makes the estimate of historical EffCS both uncertain and biased; it is overestimated by about 10% if the energy balance is applied to the entire historical period…”.

As I show in this post, that claim is wrong, unless the estimate is based on OLS regression (which it rarely is, for estimation relating to the real climate system). And the statistical proof that they give of bias when using the difference method is invalid. No bias arises when doing so.

I plan subsequently to address various other claims in the paper. I do concur with a number of things that are said in the paper, but not with their claim that “The real-world variations mean that historical EffCS underestimates CO2 EffCS by 30% when considering the entire historical period.”, which I think is what you refer to.

Nic,

How have you incorporated the impact of unforced variability? I don’t see how you can have demonstrated that their claim is wrong without having included some estimate of the impact of unforced variability.

ATTP, Gregory et al are making a statistical claim, that variability in the dependent variable (here R), whether from unforced variability or measurement/sampling error, that is not related to the explanatory variable (here T) will bias down the estimate of climate feedback (dR/dT) when estimated as the ratio of their differences between two periods. I have shown that claim to be false. May I suggest that you reread my post?

Nic,

I have read your post, but maybe you can clarify something. Are you suggesting that the difference method has no possible bias? In other words, when you estimate the equilibrium sensitivity using this method, even though the result you get is often referred to as the Effective Climate Sensitivity (EffCS), it still correctly represents the actual equilibrium climate sensitivity (ECS). Is that essentially what you’re claiming?

ATTP,

My post is purely concerned about whether the difference method shares with OLS regression the problem of slope underestimation when the explanatory variable is subject to error (noise), as claimed by Gregory et al, supported by a purported mathematical proof. This is purely a mathematical/statistical question. I have shown that their claim is untrue and their mathematical proof invalid.

This issue has nothing whatsoever to do with the question whether EffCS is an unbiased estimate of ECS. I make no suggestion about that point here. For a discussion of that issue, please see sections 2 and 7.f of Lewis and Curry (2018).

Nic,

I understand what your post is about. I’m suggesting that what the authors are saying is not quite what you’re suggesting they’re saying (although, to be fair, it isn’t entirely clear). My understanding is that the bias they’re referring to is due to unforced variability (and also volcanic forcing) and can’t be accounted for using OLS or differencing because you can’t know (from a single realisation) the impact that it has had.

ATTP, I repeat my initial reply to your mistaken comment:

‘ATTP, The paper is long and makes various claims. The one that I address in this post is the first one mentioned in the Abstract:

“We find that (1) for statistical reasons, unforced variability makes the estimate of historical EffCS both uncertain and biased; it is overestimated by about 10% if the energy balance is applied to the entire historical period…”.

As I show in this post, that claim is wrong, unless the estimate is based on OLS regression (which it rarely is, for estimation relating to the real climate system). And the statistical proof that they give of bias when using the difference method is invalid. No bias arises when doing so.’

Assertion (1) in the Abstract relates purely to the statistical issue which I discuss in my post. Reread Gregory et al more carefully if you are in doubt about that.

ATTP, it’s a pity that you didn’t understand the very straightforward post of niclewis. Does this happen to you often? In this case I would worry… 🙂

Kenny just feels that Nic must be wrong. Pathetic.

ATTP: They don’t say “for using OLS OR DIFFERENCING”, this is what you made of their text. They say: “First, the estimate of the climate feedback parameter α using ordinary least-square regression (OLS) of the global-mean top-of-atmosphere radiative response against the global-mean surface temperature change from a single realisation of historical change (such as the real world) is both uncertain and biased towards low values by the presence of unforced variability.” They do not mention “differencing”, only OLS regression. And I don’t understand what they mean with “biased low due to unforced variability”. IMO it’s also possible (with the same statistical probability) that the result is biased high?

Frank,

If you go to Section 3.3, it also says “The bias affects the difference method as well as OLS regression (Appendix D.1). Total least-squares regression is a method that would avoid the bias, but it is not obviously applicable because it depends on information that we do not have (Appendix D.5).”

“And I don’t understand what they mean with ‘biased low due to unforced variability’. IMO it’s also possible (with the same statistical probability) that the result is biased high?”

As I understand it, they’re using single realisations of models. Hence, they know the actual model’s ECS and can therefore say if the EffCS is biased low, or high, relative to the known ECS.

What is the “known ECS”? The known ECS is the GCM model mean? IMO you prioritise the model estimate and hence you conclude (because the observed sensitivity is lower) that there is a “low bias”. It’s not justified at all. Open your mind!

“The real-world variations mean that historical EffCS [effective climate sensitivity] underestimates CO2 EffCS by 30% when considering the entire historical period.” But they do not indicate that this finding relates only to effective climate sensitivity in GCMs, and then only when they are driven by one particular observational sea surface temperature dataset.

and

“It is a little depressing that after many years of being criticised for their insufficiently good understanding of statistics and lack of close engagement with the statistical community, the climate science community appears still not to have solved this issue.”

It is not at all clear to me why Nic Lewis refers to the study of climate by a community of policy funded modellers as a ‘science’.

“when a slope parameter is estimated for the relationship between two variables, both of which are affected by random noise, the probability distribution for the estimate will be skewed rather than symmetric”

I have just tried to see the skewness with 100 (normal) x,y pairs (repeated 50K times) and whilst I get a deflated OLS slope (~1 rather than the ‘underlying’ 2) I am getting a pretty symmetrical distribution of OLS slopes. Is the skew only visible in particular circumstances?

Thanks for raising this point. I may have overstated the skewness issue. I think that the skewness is only clear when the number of points in the regression is pretty small and the fractional standard deviation of the explanatory variable is large. But if there is error in the explanatory variable then one shouldn’t be using OLS regression in any case – the difference method is the safest method, and that certainly gives an estimate with a skewed distribution in such cases.

Here’s the Connollys’ PowerPoint presentation in Tucson, Arizona, USA in July. Well worth an hour of your time.

A lot to take in but this is all about the actual balloon data over a long period of time.

No modelling or theories or guesses, just the results of millions of balloon flights over decades.

There’s a very short Q&A at the end. I hope those interested have the time to look at the video and perhaps have a friend who understands the chemistry + data etc involved?

Way beyond my capabilities.

Very interesting work on molar density, incredible to get such straight lines from anything other than a controlled experiment. This looks very important.

Sadly, Connolly Snr saying there is no greenhouse effect and failing to realise that absorption is impeding a directional flow whereas emission is omnidirectional makes any further conclusions rather questionable.

The first thing they should do if they want to question the usefulness of climate models is to look at CMIP data and analyse it in the same way. Do models display similar linearities and do they get credible slopes compared to radiosondes?

“question the usefulness of climate models”

My recent paper on error propagation, uncertainty analysis, and the reliability of global air temperature projections:

https://www.frontiersin.org/articles/10.3389/feart.2019.00223/full

Models are unable to predict air temperature one year out, not to say 100 years out.

The upside of this is that FINALLY the climate community is recognising that there is a problem with regression dilution and that it leads to exaggerated estimations of CS. This applies not only to actual climate data but to analysis of climate model output, where they attempt to estimate the R vs T response of the model.

Nic, I see Real Climate posted an update on their Resplandy post last month that Nature retracted Resplandy et al (2018). Congratulations and thank you for your work. I think many are curious if you have any final thoughts you can share on that affair. For instance, did Resplandy ever acknowledge the errors to you personally?

Here is the update:

http://www.realclimate.org/index.php/archives/2018/11/resplandy-et-al-correction-and-response/

Ron, thanks. I never obtained any substantive response to emails to Laure Resplandy, apart from her saying that they were “working on an update” and then, in response to my enquiry about their having submitted a correction to Nature, that the essence of it would be covered in a post at Realclimate by Ralph Keeling. I then received a more useful response from Ralph Keeling (who I believe had in the past been Laure Resplandy’s PhD supervisor, before she moved to Princeton), which I thought was rather strange. I went on to have a sensible and constructive correspondence with him.

I gather that there is a theory at Princeton as to why Ralph Keeling rather than Laure Resplandy has taken the lead role in dealing with the problems with the Resplandy et al paper.

“It is a little depressing that after many years of being criticised for their insufficiently good understanding of statistics and lack of close engagement with the statistical community, the climate science community appears still not to have solved this issue.”

Even when climate scientists do have close engagement with the statistical community, they often pick some statistical expert who has the same ideological bias and end up with statistically flawed research. You can see an example of this in a working paper I have written criticizing the work of Kaufmann, Kauppi, and Stock in four papers published in the journal “Climatic Change”. Even though James Stock is one of the foremost experts at time series econometrics, they are using methodology that is completely inappropriate for the greenhouse gas data derived from ice cores.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3304901

Thanks for your comment. I will look at your working paper. It certainly seems that there is a likely problem either with the way the recent pre-instrumental ice-core CO2 record has been handled or (IMO less probable) with the CO2 emission estimates, as there is a misfit between emissions and concentration estimates between the 2nd world war and the start of the instrumental record in 1958.

Nic,

any comment about the Connolly’s work on the very long balloon data-sets?

See their recent video link above 23/10 5.19pm. Neville.

Not as yet. While I like Ronan Connolly and think he is a bright guy, I’d like to see a set of slides or a written summary about his work on long balloon datasets before looking at a video, which takes much more time.

Nic, here are a number of their studies. Hope this helps?

Plus their recent talk link.

Neville.

http://oprj.net/

Click to access July-18-2019-Tucson-DDP-Connolly-Connolly-16×9-format.pdf

Thanks, Neville. I will look at the slides for their balloons talk, at least.

Thanks for your interest Nic, it seems that very few people show any interest in their findings.

I’d be interested to know what you think after you’ve had a chance to look at their studies.

Nic,

I read this post two times completely through. I didn’t read the paper though so all I can say is that it is extremely clearly written and I don’t see any controversy with your version of the math. Thanks again for your extreme efforts.

Jeff,

Thanks. FYI, I plan shortly to post an article about another, somewhat more central, part of this paper.

## 9 Trackbacks

[…] can the climate sensitivity to CO2 be estimated from historical climate change?” by Gregory et al.[i] makes a number of assertions, many uncontentious but others in my view unjustified, misleading […]

[…] Whilst I imagine this can be a profitable workout, Nic Lewis has objected (at the start right here, then reposted right here and right here) to one of the crucial paper’s claims referring to […]


[…] solicited): Human CO2 has little effect on the carbon cycle Top Climate Change Myths Exposed Nic Lewis Exposes Statistical Errors In Yet Another Climate Paper Op-Ed: The climate does change and always will Good website: Global Warming Solved A cautionary […]

[…] « Gregory et al 2019: Unsound claims about bias in climate feedback and climate sensitivity&nb… […]

[…] and volcanic eruptions. However, independent climate research statistician Nicholas Lewis recently countered that Gregory and his colleagues used flawed statistical methods to obtain their results. Time will […]