However, it appears that the participants communicated in a mutually respectful and civil manner. There were no trolls to spoil the flow.

Congratulations to Steve McIntyre for providing the forum for a highly civilized discussion. I wish to hell I could have followed it in more detail.

Thanks Roman, that’s very helpful.

Re the first link, item 2, “Transforming to the standard form (a+x)/(b+y)”:

This transformation yields a result that is itself Gaussian. Doesn’t that offer a solution to the regression dilution problem?

Do this variable transformation and then perform an orthogonal least squares regression.

Unless I’m mistaken, regression dilution results from the fact that the general E[y|x] distribution is asymmetric, and implicitly ignoring this by pretending that variability in x does not “matter” leads to a biased result.
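Below is a minimal numerical sketch of that dilution effect (all parameters are illustrative assumptions): ordinary least squares attenuates the slope when x is noisy, while an orthogonal (total least squares) fit recovers it when the noise variances on x and y are comparable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(0.0, 1.0, n)            # latent predictor
x_obs = x_true + rng.normal(0.0, 0.5, n)    # observed x with Gaussian noise
y_obs = x_true + rng.normal(0.0, 0.5, n)    # y = 1 * x_true, plus noise

cov = np.cov(x_obs, y_obs)

# Ordinary least squares slope: cov(x, y) / var(x) -- attenuated by x noise
ols_slope = cov[0, 1] / cov[0, 0]

# Orthogonal (total least squares) slope: principal eigenvector of the
# covariance matrix; unbiased here because the two noise variances are equal
v = np.linalg.eigh(cov)[1][:, -1]
tls_slope = v[1] / v[0]

print(f"OLS slope: {ols_slope:.3f} (diluted toward zero)")
print(f"TLS slope: {tls_slope:.3f} (close to the true slope of 1)")
```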

Nic stated that the WG1 distributions were the result of dividing two (assumed) Gaussian distributions, so presumably they follow some Cauchy-based function that could similarly be transformed (subject to certain checks) into a Gaussian approximation.

It would seem that this opens the possibility of rationalising the unwieldy, long-tailed PDFs, which strictly do not have a defined mean, into something more tractable.

/Greg

The distribution of the ratio of two mean-zero, unit-standard-deviation Gaussian variables is a Cauchy distribution. This is true in both the independent and the dependent case.
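A quick Monte Carlo check of the standard case (a sketch; it compares sample quantiles, since a Cauchy variable has no mean or variance to compare):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = rng.normal(size=1_000_000) / rng.normal(size=1_000_000)

# Quantiles of the simulated ratio against the standard Cauchy quantiles
for q in (0.25, 0.50, 0.75, 0.95):
    print(f"q={q:.2f}: ratio {np.quantile(z, q):+7.3f}"
          f"   cauchy {stats.cauchy.ppf(q):+7.3f}")
```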

The case of non-standardized Gaussian variables is more complex and is dealt with using transformations and/or approximations. See the PDFs at the links here or here.

Nic,

I have not been able to find anything on this; can you help?

The product of two Gaussian PDFs is itself Gaussian (up to normalization). What is the corresponding result for division?
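For reference, the “product” half of the question has a closed form for density functions (a standard identity, stated here for completeness):

```latex
% Product of two Gaussian density functions is an unnormalized Gaussian:
\mathcal{N}(x;\mu_1,\sigma_1^2)\,\mathcal{N}(x;\mu_2,\sigma_2^2)
  \propto \mathcal{N}(x;\mu,\sigma^2),
\quad
\sigma^2=\Bigl(\tfrac{1}{\sigma_1^2}+\tfrac{1}{\sigma_2^2}\Bigr)^{-1},
\quad
\mu=\sigma^2\Bigl(\tfrac{\mu_1}{\sigma_1^2}+\tfrac{\mu_2}{\sigma_2^2}\Bigr).
```

No analogous Gaussian form exists for the quotient; the ratio of the variables themselves leads to Cauchy-type distributions, as the reply above notes.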

@asexymind: *“RCP 4.5 scenario assumes that we hit 750-800ppm by the end of the century”*

The emissions scenarios cover CO2 and the other greenhouse gases. AR5 has “very high confidence” in the forcing from CO2 and the other GHGs (CH4, N2O, etc.) but only “medium” to “low” confidence in the negative forcing from aerosols. By a remarkable “coincidence”, the aerosol offset they use almost exactly counteracts the other-GHG contribution – see here. This means that the net forcing is as near as damn it a pure CO2-only forcing, F = 5.35 ln(C/C0). As a result you can “derive” TCR directly from the temperature data. The result is 1.6 ± 0.2 °C.
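A back-of-envelope sketch of that derivation (the CO2 concentrations and warming below are illustrative assumptions, not figures from this comment):

```python
import math

F_2x = 5.35 * math.log(2.0)     # forcing for a CO2 doubling, ~3.71 W/m^2

# Assumed illustrative inputs: CO2 from ~285 to ~395 ppm, 0.75 C warming
C0, C = 285.0, 395.0
dT = 0.75

# Net forcing treated as CO2-only, aerosols assumed to cancel other GHGs
dF = 5.35 * math.log(C / C0)

TCR = dT * F_2x / dF            # scale observed warming to a CO2 doubling
print(f"dF = {dF:.2f} W/m^2, TCR ~= {TCR:.1f} C")
```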

ECS, however, does indeed depend on models. If you take the CMIP5 forcings, incremented annually, and combine them with a simple temperature relaxation model (itself derived from models), then ECS works out to 2.3 (+0.5/−0.3) °C. The pause in warming also appears to be incompatible with heat relaxation from past forcing reaching temperature equilibrium.

climategrog

“Am I correct in thinking that the skewness comes from the variability of the data in the denominator?”

Generally speaking, yes. But in some cases the data distributions are also skewed.

“Kenneth, how much the choice of prior affects the results of a Bayesian analysis depends largely on how precise the data are and how much data there are, along with the form of the relationship between the data and the parameter(s) being estimated.”

Nic, your explanation covers these subtle issues of Bayesian inference well and is indeed in line with what I remember (now) reading. My understanding in my post above was incomplete, which is probably how you can tell the rookie from the seasoned user of Bayesian analyses. My son gave me a book on the history of Bayesian approaches to statistical analysis, and it got me sufficiently interested in the subject that I wanted to be able to apply it, at least, to some story problems I had found online. I found that using R helped me understand better the data manipulation involved.

Thanks Nic.

Am I correct in thinking that the skewness comes from the variability of the data in the denominator?

Presumably, if one set of data were known precisely, the Gaussian errors in the numerator would produce a Gaussian PDF.

I think there is some mathematical parallel with the regression problem here. I’m in the process of building some numerical tests.

Uncertainty in the denominator spreads the result asymmetrically, and this is what prevents ruling out some of the more extreme upper limits. Despite the median being given as the best estimate, it leaves less numerate readers of the report with the idea that it’s “between one and six”, say.

I’m wondering whether it has any impact on the median as well as the standard deviation.

I haven’t joined the theoretical dots yet, but my feeling is that it’s analogous to the way regression dilution happens.

If you have any knowledge of that, it may save some time, but I’ll post back if I find an effect in the tests.
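For concreteness, a minimal sketch of the kind of test being described (the numerator and denominator parameters are assumed purely for illustration, with the denominator kept well away from zero):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

num = rng.normal(3.7, 0.3, n)              # Gaussian numerator
for sd_den in (0.0, 0.2, 0.4):             # growing denominator uncertainty
    den = rng.normal(2.0, sd_den, n) if sd_den else np.full(n, 2.0)
    r = num / den
    q05, med, q95 = np.quantile(r, [0.05, 0.5, 0.95])
    print(f"sd(den)={sd_den:.1f}: median={med:.3f}, "
          f"5%={q05:.2f}, 95%={q95:.2f}  (upper tail {q95 - med:.2f} "
          f"vs lower {med - q05:.2f})")
```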

Greg.

I wouldn’t consider using a plain lognormal distribution for ECS or TCR, but how far it differs from a better-fitting approximation will vary from estimate to estimate.

My comments about priors and estimated distributions apply to both ECS and TCR, but distributions for TCR are generally less skewed and less uncertain than those for ECS.
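One way to check how far a lognormal departs from a skewed sensitivity-style distribution is to fit one to a ratio-of-Gaussians sample and compare quantiles (a sketch; the sample parameters are assumptions, not any published estimate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Stand-in skewed sample: ratio of two Gaussians, denominator away from zero
s = rng.normal(3.7, 0.4, 200_000) / rng.normal(1.3, 0.3, 200_000)
s = s[(s > 0) & (s < 20)]                  # trim the pathological far tail

shape, loc, scale = stats.lognorm.fit(s, floc=0.0)
fit = stats.lognorm(shape, loc, scale)

for q in (0.05, 0.50, 0.95):
    print(f"q={q:.2f}: sample {np.quantile(s, q):5.2f}"
          f"   lognormal fit {fit.ppf(q):5.2f}")
```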

Kenneth, how much the choice of prior affects the results of a Bayesian analysis depends largely on how precise the data are and how much data there are, along with the form of the relationship between the data and the parameter(s) being estimated. For climate sensitivity the data are limited and have large errors, and are non-linearly related to sensitivity (and to ocean effective diffusivity, often estimated alongside sensitivity). Therefore the prior will have a large effect on the estimated PDF for climate sensitivity.
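A toy grid-based illustration of that point (the model y = k/S + noise, the priors, and every number here are assumptions for illustration, not the actual analysis): with imprecise data the prior dominates the posterior median; with precise data the two priors agree.

```python
import numpy as np

k, S_true = 3.7, 3.0
S = np.linspace(0.1, 10.0, 2000)           # grid over sensitivity

def posterior(prior, sigma, y_obs):
    # Gaussian likelihood for one observation y ~ N(k/S, sigma)
    like = np.exp(-0.5 * ((y_obs - k / S) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

prior_S = np.ones_like(S)                  # uniform in sensitivity S
prior_F = 1.0 / S**2                       # uniform in feedback 1/S

y_obs = k / S_true
for sigma in (1.0, 0.1):                   # imprecise vs precise data
    med_S = S[np.searchsorted(np.cumsum(posterior(prior_S, sigma, y_obs)), 0.5)]
    med_F = S[np.searchsorted(np.cumsum(posterior(prior_F, sigma, y_obs)), 0.5)]
    print(f"sigma={sigma}: median S = {med_S:.2f} (uniform in S) "
          f"vs {med_F:.2f} (uniform in feedback)")
```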

I don’t think Tol’s 1998 analysis of Bayesian priors is very suitable, but in fairness it was carried out a long time ago.
