Dessler (2011) reported the following:

A related point made by both LC11 and SB11 is that regressions of TOA flux or its components vs. ΔTs will not yield an accurate estimate of the climate sensitivity λ or the cloud feedback. This conclusion, however, relies on their particular values for σ(ΔFocean) and σ(ΔRcloud). Using a more realistic value of σ(ΔFocean)/σ(ΔRcloud) = 20, regression of TOA flux vs. ΔTs yields a slope that is within 0.4% of λ, a result confirmed in Fig. 2b of Spencer and Braswell [2008]. This also applies to the individual components of the TOA flux, meaning that regression of ΔRcloud vs. ΔTs yields an accurate estimate of the magnitude of the cloud feedback, thereby confirming the results of D10.

Although these findings have been widely praised by the “community” and already cited by Trenberth et al 2011, exactly what has been shown in this paragraph is far from clear on its face, despite Trenberth’s fulminations against SB11 that “the description of their method was incomplete, making it impossible to fully reproduce their analysis. Such reproducibility and openness should be a benchmark of any serious study.”

I asked Dessler for the source code supporting this paragraph, which he kindly provided me, but it would be better to have a Supplementary Information which makes manual requests unnecessary.

After parsing the code, I’ve come to the conclusion that almost nothing in the above paragraph makes any sense. I’ve set out my understanding below and, if I’ve misunderstood anything, I’ll amend accordingly.

Quite aside from the criticisms of LC2011 and SB2011 (which I haven’t fully parsed yet), in my opinion, there are quite compelling reasons to think that “regressions of TOA flux or its components vs. ΔTs will not yield an accurate estimate of the climate sensitivity λ or the cloud feedback”. We’ve already discussed this in connection with Dessler (2010). The bigger question is why anyone would think that they would.

Dessler says that such objections rely “on their particular values for σ(ΔFocean) and σ(ΔRcloud)”. That hardly seems to me to be the case. The problems with the D10 regression seem pretty fundamental to me and not related to the “particular values for σ(ΔFocean) and σ(ΔRcloud)”.

Dessler continues:

Using a more realistic value of σ(ΔFocean)/σ(ΔRcloud) = 20, regression of TOA flux vs. ΔTs yields a slope that is within 0.4% of λ

It turns out that this conclusion is based on simulations – not at all obvious from the language. I’ve shown the relevant code below (this is in Python, which I don’t use. The regression function looks very limited relative to R, but that’s another story.) Dessler’s setup is modeled on his interpretation of a setup in SB11 (which I’ll get to.) Each simulation uses a simple feedback model with known parameters followed by a regression. However, when parsed, it seems to me that Dessler got lost, that the simulation is essentially a tautology and doesn’t show anything one way or another.

An important point at this stage: the issues here do not involve climate science. None of the long-winded discussions by climate scientists about whether clouds are *forcing* or *feedback* have **anything** to do with this part of the analysis. This is merely statistics. And barely even statistics. More like arithmetic.

modelruns=1000.
modlen=500
calcslope=[]
for kk in np.arange(modelruns):
    dF=np.random.normal(size=(modlen+18))*20
    dF=smooth.smoothListGaussian(dF,degree=9)
    dR=np.random.normal(size=(modlen))*1
    lmd=6
    c=14*12. # heat capacity
    t=[0.];dtdt=[]
    for ii in range(modlen):
        dtdt1=(dF[ii]+dR[ii]-lmd*t[-1])/c
        t.append(t[-1]+dtdt1);dtdt.append(dtdt1)
    t=np.array(t);dtdt=np.array(dtdt)
    # t=(t[1:]+t[:-1])/2
    t=t[:-1]
    calcslope.append(stats.linregress(t,dR-lmd*t)[0])
print 'avg. bias = %4.2f, avg. std. = %4.2f' % ((-np.average(calcslope)/lmd-1)*100,(np.std(calcslope)/lmd)*100)

Dessler’s setup is constructed to match the following setup in Spencer and Braswell 2011:

For the radiative forcing N(t) we used a time series of normally-distributed monthly random numbers with box filter smoothing of 9 months to approximate the time scales of variations seen in the climate models and observations in Fig. 3. A separate time series of random numbers without low pass filtering was used for the non-radiative forcing S(t). This mimics what we believe to be intraseasonal oscillations in the heat flux between the ocean and atmosphere seen in the data [5, 12]. The model time step was one month, and the model simulations were carried out for 500 years of simulated time.

Let me re-state the code in R for an individual simulation. First, the parameters:

N=modlen=500;

paramF=20;paramR=1

lmd=6 # feedback λ

c=14*12. # heat capacity

Next create two random series – a white noise series dR of sd 1 and a reddened noise series dF created by smoothing a white noise series of sd 20. (For analysis purposes, it’s useful to also experiment with a white noise series by itself and an AR1 series by itself.)

dF= rnorm(modlen+16)*paramF

dF= filter(dF, truncated.gauss.weights(21)) #smooth.smoothListGaussian(dF,degree=9)

dF=dF[!is.na(dF)]

dR= rnorm(modlen)*paramR #np.random.normal(size=(modlen))*1

N=length(dF)

Here is a rendering of the simple feedback model into an R data frame (which simplifies keeping track of leads and lags.) In ARMA terms (or econometric terms), each step has an “innovation” which is the sum of the two noise terms. Added to the “innovation” is a feedback term, -lmd*t[k-1] (a negative feedback). The temperature change is obtained by multiplying the sum of the innovation and the feedback term by 1/c, but mathematically the only relevant thing is that 1/c is a constant. The new temperature is then obtained by adding the temperature change to the previous temperature. Offhand, I don’t see a material difference from ARMA, but haven’t parsed this yet.

Data=data.frame(time=0:N, dF=c(NA,dF),dR=c(NA,dR),t=c(0,rep(NA,N)),dtdt=rep(NA,N+1),lag=c(NA,0,rep(NA,N-1) ) )

Data$direct=Data$dF+Data$dR

Data$feedback=NA

Data=Data[,c(1:3,7,8,6,5,4)]

for(i in 2: (N+1)) {

Data$lag[i]=Data$t[i-1]

Data$feedback[i]=-lmd*Data$lag[i]

Data$dtdt[i]= (Data$direct[i]+Data$feedback[i])/c

Data$t[i]= Data$lag[i]+Data$dtdt[i]

}
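As a check on the ARMA reading, the loop above can be collapsed to a one-line AR(1) recurrence, t[i] = t[i-1]*(1 - lmd/c) + (dF[i] + dR[i])/c. Here is a quick sketch of my own (in Python rather than R, and skipping the Gaussian smoothing of dF since it is irrelevant to the equivalence) confirming that the two forms agree:

```python
import numpy as np

np.random.seed(0)
modlen = 200
lmd, c = 6.0, 14 * 12.0
dF = np.random.normal(size=modlen) * 20   # unsmoothed here, for brevity
dR = np.random.normal(size=modlen)

# Loop form, as in the data frame above: dtdt = (dF + dR - lmd*t_prev)/c
t_loop = np.zeros(modlen + 1)
for i in range(modlen):
    t_loop[i + 1] = t_loop[i] + (dF[i] + dR[i] - lmd * t_loop[i]) / c

# Collapsed AR(1) form with coefficient phi = 1 - lmd/c
phi = 1 - lmd / c
t_ar1 = np.zeros(modlen + 1)
for i in range(modlen):
    t_ar1[i + 1] = phi * t_ar1[i] + (dF[i] + dR[i]) / c

# The two recurrences produce identical series
agree = np.allclose(t_loop, t_ar1)
```

So t is simply an AR(1) series with coefficient 1 - lmd/c (about 0.964 with these parameters), driven by the innovation (dF + dR)/c.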

Dessler’s regression was (in Python):

calcslope.append(stats.linregress(t,dR-lmd*t)[0])

In R, this seems to me to be the following:

fmd=lm(I(dR-lmd*t)~t-1) #

Dessler ran this 1000 times and then calculated that the average slope of the resulting regression differed negligibly from -lmd with a narrow standard deviation. This is apparently what was meant by the assertion:

regression of TOA flux vs. ΔTs yields a slope that is within 0.4% of λ

And yes, I too got this (almost tautological) result. Look closely at the regression. The regressand is dR-lmd*t i.e. a white noise series dR is added to -lmd*t, which is regressed against t. t, by its construction, looks like an ARMA series, but this is irrelevant to the tautology. The regression is going to yield a coefficient that is negligibly different from -lmd (and the regressions are going to have very high r2 – median 0.81 in my simulation.)
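The tautology can be seen in isolation with a minimal sketch of my own: take any series t at all – a random walk here, but it makes no difference – add white noise dR, and run the Dessler-style regression of dR - lmd*t against t. The recovered slope comes back near -lmd with high r2, purely because the regressand contains -lmd*t by construction:

```python
import numpy as np
from scipy import stats

np.random.seed(1)
lmd = 6.0
# Any series t will do -- here a random walk, scaled to modest amplitude
t = 0.1 * np.cumsum(np.random.normal(size=500))
dR = np.random.normal(size=500)          # white noise "perturbation"

# Dessler-style regression: the regressand contains -lmd*t by construction
slope, intercept, r, p, se = stats.linregress(t, dR - lmd * t)
```

However t was generated, the fitted slope is within a few percent of -lmd and the r2 is high – the “result” follows from the construction of the regressand, not from any property of t.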

Trenberth et al 2011 said (presumably of this aspect) of Dessler’s results:

Moreover, correlation does not mean causation. This is brought out by Dessler [10] who quantifies the magnitude and role of clouds and shows that cloud effects are small even if highly correlated.

However, it seems to me that the particular “result” in this paragraph of Dessler 2011 has NOTHING to do with climate, but is merely a very elementary and (pointless) statistical tautology.

Nor does this tautology “confirm” the results of Dessler (2010), which used a different regression. Transposing to the present case, the D10 regression is:

fm=lm(dR~t-1,Data);

This regression has entirely different properties: it has very low r2. This can be easily seen by thinking about what’s been done. dR is white noise; t is a weighted sum of white noise terms. The correlations between such things are necessarily very low.
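To illustrate the contrast (my own sketch, not D10’s code; the Gaussian smoothing of dF is skipped since it doesn’t affect the point): rebuild t from the feedback recurrence and run the D10-style no-intercept regression of dR against t. The r2 is tiny:

```python
import numpy as np

np.random.seed(2)
modlen = 500
lmd, c = 6.0, 14 * 12.0
dF = np.random.normal(size=modlen) * 20   # unsmoothed, for brevity
dR = np.random.normal(size=modlen)

# t from the feedback recurrence, lagged as in Dessler's t[:-1]
t = np.zeros(modlen + 1)
for i in range(modlen):
    t[i + 1] = t[i] * (1 - lmd / c) + (dF[i] + dR[i]) / c
t = t[:-1]

# D10-style no-intercept regression, as in lm(dR ~ t - 1)
slope = np.dot(t, dR) / np.dot(t, t)
resid = dR - slope * t
r2 = 1 - resid.var() / dR.var()
```

As expected from the argument above, dR (white noise) is nearly uncorrelated with t (a weighted sum of past noise terms), so both the slope and the r2 are small.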

Nor do these observations show that Spencer’s conclusions are valid. On the narrow point under discussion here, I agree that Dessler 2010 methodology cannot be used to validly estimate feedback but the reasons set out here do not necessarily coincide with those advocated in Spencer and Braswell 2011 (which I’m still parsing.)

## 181 Comments

Merciful code. Bless you.

Steve- it is hugely annoying trying to figure out what any of these articles means from the descriptions of what they did. Spencer and Braswell are no better. Working through with code at least offers the opportunity of direct reconciliation as opposed to the exchange of academic articles over years.

Regressing a function of t on t ??????

You have got to be kidding.

Steve: I don’t see any alternative interpretation of the Python code calcslope.append(stats.linregress(t,dR-lmd*t)[0])

Plus it “explains” the (strangely) worded conclusions of the paragraph cited in the post.

It is pretty strange even for climate science.

Not knowing Python, maybe double-check order of operations in arguments passed to stats.linregress, just to be 100% certain.

Steve: D11 says that they got back a regression coefficient within a pct or so of lmd. So the order of regression is “right” (but wouldn’t matter anyway for getting a high correlation of a linear function of t against t.)

“Regressing a function of t on t ??????”

No, it seems to me there is a lot more to it than that. t depends on dR via the recurrence relation used to generate it (the ii loop), which implements the first order feedback ODE. That means that there is rather complicated autocorrelation in t, and it’s present in, and different for, the regressand (because dR is added). This affects the se of the trend estimate.

Normally you’d want him to do at least a Quenouille type correction, but here it’s not obvious how to do that. So he’s calculating the se by simulation, and in the process testing whether the autocorrelation biases the actual estimate of the trend (as it could, but which the Quenouille check does not do). It seems a very reasonable way to do it.

I ran the following code which seems equivalent to Dessler’s. I found a systematic bias of about 0.5% in the estimated trend. That is, when you subtract the obvious trend of -lmd, that amount remained. I found that the se of simulated trend was about 1.9% of lmd, and didn’t seem to differ from the iid value.

So 0.5% (not 0.4%, by my calc) is small, but I think it did have to be calculated, to make sure it was indeed small.

modelruns=1000;

modlen=500;

ch=calcslope=NULL;

i=-9:9*4/17

gauss=exp(-i*i)

gauss=gauss/sum(gauss);

lmd=6;

cp=14*12. # heat capacity

for (kk in 1:modelruns){

dF=rnorm(modlen+18,sd=20);

dF=filter(dF,gauss)[1:modlen+9];

dR=rnorm(modlen);

s=0; t=dtdt=NULL;

for (ii in 1:modlen){

dtdt=c(dtdt,(dF[ii]+dR[ii]-lmd*s)/cp);

s=s+dtdt[ii]; t=c(t,s);

}

h=lm(dR-lmd*t~t-1);

ch=c(ch, summary(h)$coef[2])

calcslope=c(calcslope, h$coef);

}

a=c(mean(calcslope)+lmd,sd(calcslope),mean(ch))/lmd*100;

b=sprintf("Trend Est = %.4f, Trend se (sim) = %.4f, Trend se (iid) = %.4f",a[1],a[2],a[3]);

cat("Percentages:\n",b,"\n");

Nick, this is a similar transliteration to mine and your results are what I reported in my post.

I’m not contesting the arithmetic of the calculation. I’m contesting the purpose of the calculation. What on earth is proved by this? Why did anyone “have” to show that a regression of a linear function of t with a slight perturbation by white noise against t yielded back a close estimate of the coefficient? Why would this be published in a scientific journal?

And how does this “confirm” Dessler 2010?

“Why did anyone “have” to show that a regression of a linear function of t with a slight perturbation by white noise against t yielded back a close estimate of the coefficient?”

I think the issue is that t and dR are not independent. dR may be white noise, but the t values are dependent on dR, via the feedback equation (dtdt etc). dR and t values are thus correlated, and so, when the dR values are placed in t-order for regression, they are autocorrelated. They are not iid in the regression.

How much that matters is hard to estimate without calculating. It turns out to be a small but consistent bias. But how to know in advance that it is small?

Or, to put it another way, what would you have said if he hadn’t checked?

I don’t know how it “confirms” D10.

Nick, you haven’t answered the question. What is the value of regressing a function of t onto t, when you know it is going to introduce a monstrously positive bias?

I think the -lmd*t just shifts the trend by a predictable amount. Possibly the fact that dR-lmd*t is the combination that occurs in the feedback equation is significant. The key is the more subtle relation of dR and t.

Nick, are you defending this analysis, or excusing it? For someone as smart as you to defend something as boneheaded as this speaks volumes.

I think what is done in this particular routine has to be done. I don’t think it is bone-headed.

There’s a broader issue, which I spoke of in the long thread on D10. When you try to resolve T acting on CFR, you are trying to do an i/o analysis on an element in a feedback loop, not detached from it.

I think that’s the underlying issue here. If you want to see how two variables might affect each other in a feedback loop, you have to allow for the postulated feedback relation in any analysis of the co-variation.

As a practical matter, the reason why the bias in trend is small is that dF is dominant. And that dominance in turn depends on the smoothing of dF, which reduces its variance. So there’s a lot to keep track of.

Steve: one of my original questions to you was to provide a statistical reference supporting the form of the regression in D10. Deducing feedback is, as you observe, something that people have tried to do before. I’ve often editorialized against homemade statistical methods by climate scientists. Isn’t this merely another example? As is the regression in D11. A homemade statistical analysis with no references to statistical literature to justify the methodology leaving all of us trying to figure out what, if anything, is established by the D11 regression.

“one of my original questions to you was to provide a statistical reference supporting the form of the regression in D10.”

And my original answer was that feedback isn’t statistics. An irony is that we now have Mark T and Tom G accusing people (including I think statisticians) of homemade solutions when Bode plots are the answer. This is closer, though they overrate Bode plots.

I see nothing statistically unorthodox in the actual OLS regressions performed by Dessler. Nor is his response to the uncertainty of the effect of the coupled ODE unorthodox – he does a standard Monte Carlo simulation. Plenty of statisticians do that.

I’ve done nothing of the sort. The reasons these simple regressions are inapplicable are entirely because of the problems associated with phase as Bart has pointed out. Nobody has ever said “a Bode plot is the answer.”

A frequency response curve is nothing more than a plot of how the system behaves in frequency. It is useful in determining phase relationships that are necessary to understand feedback. There are, however, other means to gather such information. Simple statistics ain’t high on the list of things that work well in such cases, however.

snip

Mark

“I think the issue is that t and dR are not independent”

And how would you test the hypothesis that a and b are independent?

By regressing a onto a*b? Good grief.

It’s not a hypothesis in this routine. Values of t are generated from values of dR.

In fact, t can be seen as derived from dR+dF by multiplying by a factor, and exponentially smoothing.

t[i]=t[i-1]*(1-lmd/c) + (dR[i]+dF[i])/c

In fact, a regression of t on t yields a slope and r2 of 1. A little noise takes it down some.

Where I write:

“Regressing a function of t on t ??????”

Nick Stokes replies:

“No, it seems to me there is a lot more to it than that. ”

Nick, your perception is irrelevant. It is as simple as this. Regardless where t comes from, the end result of the regression is predetermined, by definition, to be hugely biased toward 1.

Introducing needless complexity obfuscates the issue. Please stop doing that. It strikes me as disingenuous.

Bender,

Doing such a thing might help a bit to quantify a signal to noise ratio to check if any other factors might dirty the observation and processes. But it doesn’t validate anything beyond that.

Steve, not that it matters to the discussion of tautology, but shouldn’t your R line

be

to match Dessler’s Python code? (his ’18’ vs your ’16’).

Steve: my Gaussian filter was microscopically different than his. I’ll tidy this at some point.

I don’t think this makes much difference, but based on the SB11 description of their model, it is the N term (the radiative forcing) that has the filter applied, which would correspond to the dR term in the Dessler model. From what I can tell, the Dessler code (and Steve’s in reproducing this) has this reversed, with the dF (corresponding to the S term in SB11) using the filter instead of simple white noise. Of course, I’m not sure that the SB11 noise model makes all that much sense to me.

Steve: at this point, I’m not trying to parse the SB11 noise model. There seems to be an awful lot of arguing about words – whether something’s a forcing or a feedback. Without parsing code, I can’t tell whether they are arguing at cross-purposes or not. For the purposes of today’s post, it doesn’t matter. I’m trying to get some footing.

I have linked to the relevant energy budget equations here (Dessler video) and here (Spencer blog post) via Troy’s D11 post. Perhaps Steve could include images of these equations for reader reference?

I don’t think that the energy budget equations have anything to do with the point of this post – which is purely mathematical.

No wonder I find Dessler so hard to understand. It makes no sense.

Seems logical: Google Science Fellow Dessler first publishes a paper in Science with a regression coefficient indistinguishable from zero; he follows this with a GRL paper showing the excellent correlation of x and a linear function of x. Presumably the next step will be a Nature publication based on a correlation around 0.5. It is exciting to watch a master at work.

After a read of Troy’s post on SB11: (dR-lmd*t) is a derived expression of the TOA flux component of the SB11 energy budget equation. But TOA flux is measured by CERES.

I have another comment in moderation with links to the equations which Troy posted.

Steve- for the purposes of this post, we’re talking arithmetic. Properties of simulations of random numbers. Try to focus on the statistical point separately from CERES.

I do know Python, and the first thing that struck me is that the code as posted is very hard to make sense of because indentation in Python is significant and all indentation has been removed from the code as posted.

I could probably figure out what the indentation is supposed to be, but I find it annoying to have to waste time doing that.

Steve: The spaces are in my post, but WordPress doesn’t recognize them. The script with spacing is uploaded to http://www.climateaudit.info/scripts/dessler/dessler_2011_regression_py.txt

The second thing that struck me is that the code contains references to modules — the concept is similar to R’s packages — named np, np.random, smooth and stats that are not defined.

There should be Python import statements for those modules in the code, and references in the documentation to where the modules came from. Maybe np is for Numeric Python?

A little Googling yields a plethora of Python modules named stats but I have not dredged through them to see which of them contains a linregress function and how similar or different the linregress functions in the various packages might be.

Steve: in R, you don’t have to load a package to do something as simple as a linear regression. The Python linear regression cited here appears to only permit two columns and doesn’t give standard diagnostics. Very bush league for statistics.

Oops! Cross-posted.

The script with spacing does have import statements and some references to well-known module names, such as scipy.

I’ll have to root around to try to find smooth though.

Steve: I found the smoother by googling. It’s just a Gaussian filter. For the purposes of the regression discussed here, I don’t think that it “matters” (TM-Team) how the series t was constructed. AFAIK, you could put any series t into the Dessler regression without affecting the tautology.

The np I would imagine stands for numpy, which can be found at http://www.numpy.org. The stats function if I had to guess came from the scipy package, with a statement such as “from scipy import stats”. Documentation can be found at http://www.scipy.org.

I knew as soon as I read Dessler’s paper that this was nonsense, so I could not be bothered to work out exactly what he was trying to say. I suppose someone had to, so thanks for taking the trouble, Steve.

>>

calcslope.append(stats.linregress(t,dR-lmd*t)[0])

In R, this seems to me to be the following:

fmd=lm(I(dR-lmd*t)~t-1) #

>>

Not quite – it translates as ~t; as specified in Spencer’s model it is the instantaneous Planck feedback, not the previous feedback used to calculate the current t increment. That will not significantly change the discussion, but best to get it right, to avoid rebuke 😉

What it appears that Dessler does not understand about this “result” is that he is simply testing how much the time series decorrelates when he adds symmetrical, Gaussian-distributed pseudo-random noise.

One does wonder what point he thinks he’s making here.

The key phrase is the lead-in :

” Using a more realistic value of σ(ΔFocean)/σ(ΔRcloud) = 20, ”

This is the same trick (TM) that he used in dissing Lindzen and Choi. He arbitrarily introduces some ad hoc conditions deemed to be “realistic” or “reasonable” without any definition of the criteria on which he makes that assessment, and then goes on to “prove” a point that is based on unproven assumptions.

What he is effectively doing here is saying that what Spencer calls the non-rad term is far greater than what he calls the rad term. However, if you feed that into Spencer’s model, the term he used for the regression test fades into insignificance and the zero-lag correlation all but disappears. This is the fundamental point of S&B’s Remote Sensing paper and is what they show in figure 4.

It seems Dessler has confused himself by renaming everything.

Also, the term on which he does the regression is the total radiative flux AT THE SURFACE. The random element represents radiative forcing AT THE SURFACE and specifically includes cloud variation. Even with static cloud this in no way represents TOA.

He really has not understood the terms in Spencer model if he thinks this is total TOA flux.

Then for another unqualified assumption:

“This also applies to the individual components of the TOA flux,”

Oh yeah?

This would seem central to his whole point but it just drops out of the air.

This is so lightweight as to be ridiculous. I really don’t think it’s worth any further effort until it actually IS published in a journal.

“not the previous feedback used to calculate the current t increment”

P, are you familiar with lm()? I think t-1 means regression line through the origin, not shifted t.

You are right of course. R formula notation is rather arcane. Apologies to Steve. Thanks for the correction.

However, as will be seen from the python doc link below, linregress()[0] is the slope of a regression with intercept, so I think my correction stands: if the aim is to repeat the same method, the model should be:

lm(I(dR-lmd*t)~t)

Steve: ok. The D10 regression seemed to have been without intercept and I carried that over. It doesn’t make any difference, but next iteration I’ll pick this up.

I’ve tried that in the code. It made a small difference – the bias was closer to 0.6%.

Nick, please stop intentionally missing the point.

Nick, as he does far too often, refuses to engage on the point at issue.

I am baffled by (1) what conceivable point arises out of the Dessler calculations; (2) how the near-tautological point relates to anything at hand; (3) in particular, how it “confirms” D10.

Moreover, one of my points is that the issue here is entirely arithmetical and has NOTHING to do with climate science. Unfortunately Nick never concedes the most minor point.

Yes, issues arise if you regress an AR1 series against the innovations or vice versa. But these are properties of the arithmetic.

refusal to engage is a concession, IMO

My engagement went into moderation. But it’s midnight here.

Steve: released. will look at this and see you tomorrow.

I don’t think I’m missing the point – I think I’m seeing the point, and trying to help. People want to infer a linear co-variation model for feedback analysis, and to base this on observations. But in the presence of feedback this is not so easy.

D11 tries to set up a routine regression with synthetic data, and with the feedback equation embedded. He wants to check if the usual regression estimator actually works. That is justified because the standard iid assumptions of OLS regression aren’t valid.

He finds it does, more or less. That is a useful finding.

Measurement in the presence of feedback is a familiar problem in electrical circuits. Suppose you create a negative feedback with a resistor linking output to input. And then suppose you tried to measure that resistance in the usual way, by applying a voltage difference and measuring the current that then flows through your meter.

You’d find the resistance appeared much smaller than it really is. The reason is that when you apply a voltage, the amplifier acts in such a way as to increase the current that flows in response. So the apparent relation of voltage to current is not what you’d expect from the real resistance and Ohm’s law. If you used the apparent resistance in the circuit analysis, you’d be misled.

PS – if I switch off now – it isn’t a concession – it’s because it is midnight here.

another point that is worth repeating.

Dessler’s example, by construction, has a negative feedback. In the toy example, the system diverges once the feedback becomes somewhat positive. Small positive values don’t diverge quickly, but the values of lmd don’t have to be very large for divergence to occur quickly.

To look at the relationships for positive feedback, you need to wrap a greater negative feedback around the system, then pull off just the signals you want.

That’s how the climate system, and any positive feedback in it, has to work, too, or the climate would be unstable, and we would never have come to be.

Blah blah blah

Does it really take you this many words to address Steve’s points?

Signing off at this point after so much blather really is a dodge. Midnight my ass.

In science it is customary to acknowledge an opposing point before “moving on”. Try it sometime.

Summary:

Dessler’s bit of analysis here has no value.

It does nothing to “confirm” the earlier paper.

A tautology is not evidence. It’s truth by definition.

Bender,

OK, it’s (early) morning, and I can summarise in fewer words what Dessler is doing here.

1. D11, following others, wants to regress dR against T to get a trend that can be used to estimate feedback.

2. I note that, when you arrange dR values according to a regressor that depends on dR, you can get a trend, even though dR are iid. An extreme is regressing dR against dR.

3. So he has to answer the possibility that, with feedback, dR could show a trend even if it is white noise. We don’t know how big.

4. He sets up a simulation where dR is indeed white noise, and T is derived according to a plausible feedback equation.

5. In the process, he subtracts lmd*t. I don’t know why, but speculate that it is because that combination is in the feedback equation. Anyway, he adds lmd to the trend afterwards, so it should make no difference.

6. Finally, he can answer the question. The extra dR trend created is small – he says 0.4%.

He didn’t say that this result confirms D10. He said that the regression he did confirms D10. This result answers a possible objection to the regression.

So my question – what do you think he should have done? Just ignored the dependence of dR on T?

http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html

>>> from scipy import stats

>>> import numpy as np

>>> x = np.random.random(10)

>>> y = np.random.random(10)

>>> slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)

>>> print "r-squared:", r_value**2

r-squared: 0.15286643777

Not sure I’d want the world to know I was basing my work on something called “Numpy” but tastes differ 😉

(Steve — I corrected the indentation of Dessler’s code by enclosing it in “pre” tags — which produce fixed-space output.)

Instead of all this regression business, would somebody please generate the Bode plot of the process? Wouldn’t that tell one the response to any input signal – white noise or whatever? It looks like an IIR low pass filter to me and that has been extensively studied.

Steve: The process looks to me like an ARMA process, also intensively studied. At present, I’m simply trying to determine what is proved by the D11 regression.

In the process, noise is added to a signal and then the result is passed through a low pass filter. So what you get is the signal with low pass filtered noise or, in other words, essentially the signal. So you are regressing a signal with low pass filtered noise onto the signal. This does not seem like a very interesting thing to do. The Bode plot would show all of this.
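For readers who want the low-pass point in concrete form, here is a sketch of my own (using the parameters from the post) of the amplitude response of the recurrence t[i] = (1 - lmd/c)*t[i-1] + x[i]/c – i.e. the magnitude half of a Bode plot, computed directly rather than via any filter-design package:

```python
import numpy as np

lmd, c = 6.0, 14 * 12.0
phi = 1 - lmd / c                    # AR(1) coefficient of the recurrence

# Frequency response of t[i] = phi*t[i-1] + x[i]/c,
# i.e. H(z) = (1/c) / (1 - phi * z**-1), evaluated on the unit circle
w = np.linspace(0.001, np.pi, 500)   # angular frequency, radians per month
H = (1 / c) / (1 - phi * np.exp(-1j * w))
gain = np.abs(H)
# The DC gain is 1/lmd, and the gain falls monotonically toward the
# Nyquist frequency: a classic first-order low-pass response.
```

The gain near zero frequency is 1/lmd (the equilibrium sensitivity), and it rolls off monotonically at higher frequencies, which is the low-pass behaviour described above.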

The regression of a signal onto that same signal + some other component yields a heavily biased regression that would only obscure the relationship of interest.

How anyone could do this is beyond me. How anyone could choose to defend it – well, I’m not free at this blog to speculate on motive. But how it all slips through peer-review in the first place ??? – well we’ve been there a few times. Let’s just say that it’s pretty clear that not everyone is as diligent as McIntyre.

Bender,

You’re off-beam here. It isn’t a biased regression – it provides a predictable offset to the trend, which can be later removed if desired. Which he did.

As to how anyone could do it – Troy has been tracing the history. SB11 did it, and D11 is writing about their paper. And the regression goes back further, as Troy describes.

Not all of us are familiar with the Bode plot, meritorious as this technique undoubtedly is.

The point at hand seems sufficiently elementary to be accessible to people simply from the perspective of straightforward statistical practice without requiring them to learn the significance of Bode plot (though learning this method may undoubtedly be useful on other grounds.)

Why do good-natured people choose to introduce unnecessary complexity into these discussions? I know people like to show off their knowledge, but it really is a huge disservice to those trying to follow the pea under the thimble. More thimbles & more peas – it doesn’t help, folks. Strip to essences, please.

Well said. How much of climate science is unnecessary complexity? Answers on a much reduced federal budget, please.

Because, bender, sometimes the “simple” approaches are inappropriate for the analysis. In such cases, it seems apparent (to me at least) that the arguments are essentially a waste of time.

Perhaps, somewhere along the line, someone needs to go off and prove the distinction. I’m not sure if I can or not. Sorry if that’s not helpful.

Mark

The Bode plot is taught in first or second year engineering courses. It is pretty elementary. It is fundamental to circuit design and will be familiar to all engineers who have taken courses in electricity.

Just as elementary are the topics that Nick Stokes is bringing up in his discussion of the mysteries of feedback. Changing parameter values such as resistance, as he gives in his example, is a standard practice in circuit design. Simulators such as SPICE were solving those circuits quickly 30 years ago. I can just imagine what the circuit emulator packages and fast computers of today are capable of.

The point of all this is that what is being attempted here with regression is to characterize a feedback circuit. This is what Bode plots and their relations are all about. I am very surprised that these standard techniques (with extensive theory and tool support) are not being used while ad hoc techniques are being devised.

These guys are statisticians through and through and suffer terribly from the NIH syndrome. Why make it simple by the use of 75-year-old, well-established analytic techniques? There is no grant money in that.

Well, some of them are statisticians.

Mr. McIntyre;

You seem to have a sufficiently charming style to elicit responses from faraway academics.

Perhaps you might consider asking that EE professor (M. Kelly?) about the appropriate and inappropriate venues for using OLS regression (vs. ‘Bode’ approach) to diagnose ‘feedback’ parameters.

I’m thinking of the guy whose notes were initially aghast at the shabby data. He had a role in one of the post-climategate panels.

I’ve followed C-gate as a form of remedial statistics education.

But as a 37-year EE, I can no longer pretend that merely regressing to linear trends is OK, after Bart pointed out that phase vs frequency is not linear.

I think that any scientific paper that purports to diagnose “feedback” without inviting Mr. Bode et al. to the party risks landing in the “phlogiston” junkpile of history.

At the very least, a ‘Feedback’ paper must test its data a la ‘Bode’ to justify a ‘no-delays’ diagnosis of a simple feedback network.

How did the word “Feedback” enter the common language? My oxford dictionary says it is E20, and the first definition is about EE. I suspect the other usages (in biology and popular psych) came after the guys from Bell labs.

RR

In my old Box and Jenkins text on ARMA processes (early 1970s), they discuss feedback in the context of stochastic processes, especially industrial processes. They were working with discrete time series and the approaches look like they would be useful in the present context. I can’t tell offhand whether all the methods in the text are included in the current R package.

Perhaps it is time for the (signal processing) EE’s to express ARMA processes in EE terms.

I think ARMA seems to involve adding in weighted prior values to get the new value.

This sounds a lot like the ‘z-transform’ that I heard a lot about, (but never did fully ‘grok’ it).

My goal with this offering is to persuade the smart guys who can do it, to start to bridge the statistical and electrical engineering jargon sets.

It seems ‘we’ should try to clarify the big picture about the boundaries of proper applicability of ‘regression’ techniques to frequency-dependent feedback systems.

RR

You’re right, it’s time to apply EE techniques to these feedback problems — it’s far simpler — if you understand the math. The next step will be assembling all the known (correct) climate equations into a network-style problem and applying NP (Non Polynomial network solution) techniques. But this first…

Despite the comments about the EE way being more difficult and complex, it really isn’t. If anything it is less so, because these types of problems are well understood by control systems and electrical engineers, and there are many modeling programs that can handle the equations — Matlab, Spice, etc.

Steve — you said you wanted an engineering-grade report — maybe you should encourage people who have the math skills. Maybe the time to start on your desired report is now.

Steve: such a report is far beyond my resources or volunteers.

WillR, by “engineering grade” do you mean simply that the report is written in a professional, disciplined, and well-documented manner, with end-to-end traceability of data, analysis, and conclusions — i.e., the kind of document we would expect of engineers who carry significant personal responsibility for the consequences of potentially poor-quality work?

Or, in addition to creating a document which has all the aforementioned qualities, does this “engineering grade” report also assume a priori that the earth’s heat transfer mechanisms and its weather-producing processes operate in ways that mimic those found in various electro-mechanical processes of our own devising?

I’ve explored some of the correspondences between feedback amplifiers, ARMA models and Dessler/SB style feedback equations here.

Two things I note in this (with the caveat that I don’t know stats):

1. stats.linregress(t,dR-lmd*t)[0] == stats.linregress(t,dR)[0] - lmd;

It seems strange to have the -lmd*t in the regression, and causes the final percentage calculation to reduce to -(avgSlope/lmd)*100, which instead of just converting to a percentage of lmd, also reverses the result.

2. linear regression is supposed to be used to show a relationship between the pairs of values, right? t[0] = 0, t[1] contains dR[0], t[2] contains dR[1] and dR[0], etc. But the regression is going to compare t[x] to dR[x] and there should be no relationship between them.
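Point 1 is in fact an exact OLS identity: since cov(t, dR - lmd*t) = cov(t, dR) - lmd*var(t), subtracting lmd*t from the response shifts the fitted slope by exactly -lmd. A minimal sketch with made-up stand-ins for t, dR and lmd (not Dessler’s actual data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lmd = 2.0                              # hypothetical feedback parameter
t = rng.normal(size=200)               # stand-in "temperature" series
dR = 0.5 * t + rng.normal(size=200)    # stand-in "flux" series

s1 = stats.linregress(t, dR - lmd * t).slope
s2 = stats.linregress(t, dR).slope
print(abs(s1 - (s2 - lmd)))            # ~0: identical up to rounding
```

So the -lmd*t term changes nothing about the information in the regression; it only offsets the slope, which is presumably why it can be removed at the end.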

“It seems strange to have the -lmd*t in the regression.” It is, but he subtracts the effect out at the end.

“t[0] = 0, t[1] contains dR[0].” No, in the step t=t[:-1] all the t values move back; t[0]!=0.

t = t[:-1] drops the last element in the array

t = t[1:] drops the first element in the array

>>> d = range(10); d = d[:-1]; print d

[0, 1, 2, 3, 4, 5, 6, 7, 8]

>>> d = range(10); d = d[1:]; print d

[1, 2, 3, 4, 5, 6, 7, 8, 9]

mt,

Thanks, I don’t know Matlab very well. I implemented the backshift [1:] in the R code.

But I have to say that taking out t[0] makes sense; taking out the last value doesn’t. Especially as they could just have stopped the loop one step earlier.

It’s a difference that changes the result. Using t[:-1] and 10000 iterations of regressing (t,dR) gives an average slope of -0.00536771. Using t[1:] gives an average slope of 0.04261215. Interestingly, regressing (dR[:-1],dR[1:]) (comparing dR[x-1] to dR[x]) gives an average slope of -0.00205143.

My question is whether the code as given with t[:-1] even makes sense in terms of deriving something useful. And if t[1:] is used, then t[x] is comprised of approx 1/27 dR[x] (20 parts dF, 6 parts previous temp, 1 part dR), and that’s about what comes back out of the regression.
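For what it’s worth, the alignment question can be sketched with a toy recursion (made-up coefficients, not Dessler’s actual model). With t[:-1], t[i] is paired with a dR[i] that has not yet fed into it, so the expected slope is ~0; with t[1:], t[i+1] already contains a share of dR[i], and roughly that share comes back out of the regression:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100000
dF = rng.normal(scale=20.0, size=n)    # large non-radiative ("ocean") term
dR = rng.normal(scale=1.0, size=n)     # small radiative term
t = np.zeros(n + 1)
for i in range(n):
    # toy recursion: next temperature accumulates both noise terms
    t[i + 1] = 0.9 * t[i] + 0.01 * (dF[i] + dR[i])

# t[:-1] pairs t[i] with dR[i], which has not yet entered t[i]:
slope_before = stats.linregress(t[:-1], dR).slope
# t[1:] pairs t[i+1] with dR[i], which t[i+1] already contains:
slope_after = stats.linregress(t[1:], dR).slope
print(slope_before, slope_after)
```

The first slope hovers around zero; the second picks up the small contemporaneous contribution of dR to t, which is the distinction being argued over.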

Is Steve’s analysis surprising? No.

AGW is an ethical concern that became a political movement. Professor Dessler has tried to contribute to this worthy cause. Quite human, and reflecting well on his character.

But good science is only as strong as its methodological underpinnings. “Mere statistics” are essential to doing good work, whether in climate science (or any other “hard” discipline) (and many soft ones too).

(Glad to see allusion to Hendrik Bode, who I hold in much regard.)

Let me see if I can explain what Dessler 2011 is trying to prove:

1) Forster and Gregory 2006 noted that (in Spencer’s terminology) N-lmbda*T corresponded to the TOA flux (N being the radiative forcing component, or dR here), so you could calculate the overall climate response/feedback (lmbda) by regressing the difference between the measured TOA flux and N against T.

2) Spencer and Braswell 2008 and 2011 noted that while true, the unknown forcing component (N), if large, could cause a significant underestimate of lmbda if not included. The lagged plots formed in the SB11 paper, which I’d reproduced in a script here, use the form of regressing T against (N-lmbda*T), but N is the smoothed white noise and is large in magnitude, so the resulting regressions lead to an underestimate of lambda.

3) Dessler 2011 attempts to rebut SB2008 & 2011, and noted the rather trivial point, as you have pointed out, that regressing T against (N-lmbda*T) will yield an accurate value of lambda assuming a small enough magnitude in N. I believe he was mistaken in using simple white noise rather than the smoothed value for N, but that doesn’t make much of a difference IF the ratio is 20:1 N vs. S (dR vs. dF). Furthermore, he seemed to have gotten a bit off track by using dR with respect to the cloud forcing, rather than noting it could be any type of unknown radiative forcing.

Things have become muddled with Dessler 2010, because while the SB08/SB11 point about unknown cloud forcings would sink the results of Dessler 2010, the point noted in #3 from Dessler 2011 would only uphold Dessler 2010 against the objection in SB11, and does not confirm it with respect to other methodological problems. Particularly, as you say, because the form of estimating the cloud feedback is different than the form of calculating the overall climate feedback/response.

I should also note that the ratio in #3 of 20:1 is way off, and results from Dessler 2011 incorrectly using the entire upper ocean (700 m) as the mixed layer (between 50 and 100m), as I’ve explained on my blog and Dr. Spencer has on his. So Dessler’s regression here is pointless because of a huge underestimate in the N term relative to temperature changes.

Troy, I’m stuck at a different point. You say:

Aren’t the regressions pointless period? I’ve been experimenting with an even simpler model – white noise only while varying lambda. The D10 slope calculation doesn’t even come close to estimating lambda in known situations. Quite separately from the ratio question, which is empirical and which people can argue about.

For the overall climate feedback, you can theoretically get an accurate value for lambda by regressing T against simulated TOA_flux (lmbda * T + noise (N)), assuming that the magnitude of N is much smaller than the T * lmdba term. Since the magnitude of T is driven by N + S (F_ocean), a ratio of 1:20 of N:S would generally yield accurate values of lmbda, because the magnitude of T would be high while N would be small. However, you *can’t* get an accurate value if N and S are closer to 1:1. This all relates to D11 and SB11.
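Troy’s point can be sketched with a rough Monte Carlo (a toy discretization with an assumed step/heat-capacity factor; not D11’s or SB11’s actual code). When sd(S) dominates sd(N), regressing the simulated flux on T recovers lmbda; at a 1:1 ratio the estimate is badly attenuated:

```python
import numpy as np
from scipy import stats

def sim_slope(sd_N, sd_S, lmbda=-6.0, n=20000, seed=2):
    """Regress simulated TOA flux (lmbda*T + N) on T for given noise scales."""
    rng = np.random.default_rng(seed)
    N = rng.normal(scale=sd_N, size=n)   # radiative (cloud) noise
    S = rng.normal(scale=sd_S, size=n)   # non-radiative (ocean) forcing
    T = np.zeros(n)
    for i in range(1, n):
        # toy mixed-layer balance; 0.1 is an assumed step/heat-capacity factor
        T[i] = T[i - 1] + 0.1 * (N[i] + S[i] + lmbda * T[i - 1])
    flux = lmbda * T + N
    return stats.linregress(T, flux).slope

print(sim_slope(sd_N=1.0, sd_S=20.0))   # near lmbda = -6
print(sim_slope(sd_N=1.0, sd_S=1.0))    # strongly attenuated toward zero
```

The bias comes from N appearing in both T and the flux contemporaneously; a large S swamps that shared term, a comparable S does not.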

I believe D10 is a different situation. How are you setting up your simulations for that? I have to go now but I’ll take a look when I get a chance. Thanks.

Errr, when I say the “magnitude” of T or N, I mean the size of the *fluctuations*, not the values themselves.

Hmmm… OK, I see what you mean.

Troy’s observation is correct. I’d been experimenting with white noise setups which were on the no relationship side of things. I’ll do up another iteration on this showing both situations.

And, what if noise(N) is not white? What if it isn’t even “noise” but a series of deterministic signals? What if both? That’s what we’re dealing with.

An interesting point in this respect is the effect of taking out monthly normals – a question that UC raised and which I think is important and as yet unanalyzed. I presume that whatever physics is at work responds to the actual T rather than the monthly anomaly.

Steve, I was also instantly sceptical when I first saw this kind of data mangling going on. The results reassured me quite a bit.

For this approach to be valid presumes that the physical processes involved are linear. That is probably not the most outlandish approximation in the whole story.

T^4 approximates to T fairly well within the range of “anomalies”. Heat content is additive. Radiative forcing produces a result that is the integral of the forcing over time, and integration is a linear transformation. Optical effects of gas concentrations aren’t linear, but again over small changes may not be grossly out.

When I ran R’s stl on the UAH TLT, what it subtracted out was quite complex, but it did show a credible superimposition of NH and SH seasons that fit expectations. It also pulled out a trend that had strong 18-month and 3-year cycles. The larger swings in this match very closely with ENSO data. On the shorter scale, not so, but no major excursions.

The 18-month cycle explains a feature I had noticed in Spencer’s satellite data (the zero crossing at +/-18 months that is a key difference in _form_ compared to climate models). I first noted this last year reading a study he had done on variations after Mt. Pinatubo.

There is of course the possibility that he is using the same processing and this is nothing but an artefact. My gut feeling is that that is not the case. Looking at the function description I was dubious about end-filling of windows and use of running-means which always make me twitch.

The magnitude of this 18m cycle is significant and requires explaining before the rest of the phase response can be useful in determining feedback. I expect it is responsible for the 3.5 month -ve offset I have raised elsewhere.

I suspect Dr Spencer knows more about this than he has written about publicly.

In short, I think the deseasonalising algo in R is pretty powerful, though I instinctively mistrust such things and always consider whether anything coming out may be an artefact.

I found some relevant stuff,

“What Signals Are Removed and Retained by Using an Anomaly Field in Climatic Research?”

http://www.hindawi.com/journals/ijog/2009/329754/

and, of course this:

http://statpad.wordpress.com/2010/03/18/anomaly-regression-%E2%80%93-do-it-right/

But I think there is more to explore; for example, monthly-anomaly and detrending operations are noncommutative. If one detrends an anomaly series, he might find an annual cycle in the data. Or if one applies the anomaly operation to detrended data, he might find a trend.

Seems like a general problem with treating everything as a line.

Mark

The distinction between noise and signal is rarely qualified let alone quantified. Quite frankly, I don’t know if it can be.

Certainly there will be some noise processes that can be easily modeled, such as measurement errors due to rounding (sampling, as it were in the case of satellite data) but there’s no way to know how large those contributions are relative to anything else so such knowledge is marginally helpful at best.

Pat Frank has argued that the temperature data are biased with unknown stationarity issues, though he does not have the support of many who have reviewed his work. If he is correct (I’m not saying one way or the other), his ideas have a huge impact on any analysis that uses the temperature data. At the very least, it would mean the analysis we perform today will likely produce entirely different results 10 years from now.

I seem to recall JeffID doing some work (not sure what with) that indicated some SNRs regarding climate data are likely in the very low category, possibly below 1, though I don’t recall the context. It seems like it was an analysis of something Mann was doing.

Maybe. It’s not difficult to envision general systems that respond to time-derivatives of inputs, however. Whether they apply here is a different matter.

Mark

I entirely agree about the perils of climate scientists reifying “noise” and “signal”. This seems particularly iniquitous in proxy studies where squiggles, no matter how arbitrary, are said to be noise+signal.

Yes, he made his way through Mann 07 in a series of posts about a year ago – I think to evaluate Mann’s claim of having a calibration algorithm which recovered full signal variance from his proxy network. Jeff used the rate of proxy retention through calibration of various scenarios to demonstrate the SNR assumed by Mann in his simulations was too low.

That sounds right, LL.

Mark

“…though he does not have the support of many that have reviewed his work.” The only people coming to mind who have both reviewed the work and expressed non-support are JeffID and Lucia. I know of well-qualified others who have reviewed the work and supported it.

Strangely enough, none of Jeff’s or Lucia’s criticisms encompass any of the data or results actually appearing in the Tables and Figures of the paper. So, whatever else, their objections do not involve the non-stationarity of surface station temperature measurements highlighted in the paper.

Similar problems with nonstationary error in air temperature measurements can be found in H. Huwald, et al. (2009) “Albedo effect on radiative errors in air temperature measurements” Water Resources Research 45, W08431; 1-13, journal abstract (here), and in the very recent work in Antarctica by C. Genthon, et al. (2011) “Atmospheric temperature measurements biases on the Antarctic plateau” J. Ocean Atm. Technol., in press, discussed by Anthony at WUWT here

I meant appearances over at Jeff’s place. Sorry for missing that, Pat. For that matter I have stated that your ideas are at least plausible/possible without going into deeper details.

I’m sure you will agree a problem with the temperature record would be an issue for this and other analyses?

Mark

Thanks, Mark. Your honest appraisals have invariably been on target and very welcome. Your view of the potential impact of air temperature sensor error on analyses of 20th century climate is exactly right, IMHO.

Linear regressions in the phase plane are useless when the transfer function has nonlinear phase (variable time lag as a function of frequency) over the frequency band of the signals. Even if the phase were linear (which is impossible in analog systems), you would still need to account for the delay in the result.

So, basically, the shenanigans played with the data make an already worthless analysis even more worthless – it is beyond superlative form.

I think the word you are looking for is that it is the mathematical equivalent of a Malamanteau.

That was the basis of your point in the beginning over in the other thread. I’ve made a similar argument (different specific problems, same concept) regarding extraction of temperature from tree rings. None of the past 7 years (or is it 6?) would have been necessary if they first attempted to publish in any EE literature. 🙂

Mark

Mark T – My thoughts, too.

I wanted to clarify for people that a nonlinear phase relationship means a variable time delay across the frequency range. The delay in a pure sinusoid is the phase divided by frequency (phase delay). The relative delay in a bandlimited process modulated at a particular frequency is the derivative of the phase with respect to frequency (group delay).

A linear phase relationship means constant delay. Such a phase relationship can only be achieved digitally, which is one reason why your CDs and ipods sound so much nicer than the tape players and record players you probably grew up with.

So Spencer and Braswell did the same regression of T on f(T)?

If so then it seems RealClimate should have just run with a guest post by bender that the paper should not have been published.

IIRC, it seems to me that SB concede the point that, if unknown radiative forcings are insignificant, regressing TOA flux vs dT (and hence the ability to estimate lambda accurately) is essentially tautological. I believe their point is, however, that the time-lagged signature of the regression using observations is more consistent with their simple model based on a significant contribution from unknown radiative forcings.

The following graphic illustrates and incorporates the point made by Troy earlier i.e. that the ratio of sd(F) to sd(R) is highly material to the accuracy of the estimation of lambda. I think that I see the point in dispute more clearly than before.

Both show negative feedback (lambda = -6) which, oddly, is the case discussed by Dessler. On the left is a high ratio (20) of sd(F) to sd(R) – the D11 sort of case. On the right is a low ratio (1) of sd(F) to sd(R) – the SB11 sort of case – in which a slight positive slope with negligible r^2 is obtained despite the construction from -6 feedback. Dessler argues for the high ratio on “physical” principles; SB argue the opposite, also on physical principles.

The case used in D11 to support D10 is not on point. It is a case of strong negative feedback with high sd(F)/sd(R) which yields an entirely different scatter plot than D10.

The difficulty arises because you can get a scatter plot with slight positive slope under a very wide range of lambdas – including quite negative lambdas depending on sd(F)/sd(R).

You can also get a slight positive slope in the case where there is a slight positive feedback and low sd(F)/sd(R). A slight positive slope can be obtained from almost any lambda (that is not very positive). In Bayesian terms, a slight positive slope in itself doesn’t enable any confidence that feedback is positive.

Perhaps the data permits a simultaneous estimate of paramF on max-likelihood grounds or something like that. I’m not sure; I’ll think about this tomorrow.

So on one hand we have the D11 counterargument to SB11 – that lambda estimation (the regression slope) is a tautology due to assigning “realistic” values for sigma(dF)/sigma(dR), which would naturally result in high r^2. On the other hand we have the D10 regression (looking very much like your right graph) and its low r^2 value. The D10 scatterplot does not show a strong enough relationship to be consistent with D11’s arguments.

Your plots here bring out a point I’ve been banging on about for a while: whether the OLS method is applied correctly, and what (if anything) the resulting estimator means.

The acid test (which I’ve been discussing on Troy’s blog) is whether you get a similar result when you plot the data with temperature on y. Equally you can reverse the variable order in the call to lm() and take the reciprocal of the result.

I can tell you from experience that the lefthand graph will give you a slightly steeper slope, but both seem credible and reasonably consistent. Your second plot will get a near-vertical result, and you can thus demonstrate that both are garbage.

This all comes down to error in the “controlled” variable, which in this context is not controlled at all. The fundamental assumptions of OLS are not met, and the result is invalid before you start. This is being ignored (I suspect wilfully) by Dessler and others on the positive feedback trail.

This point is key to his dismissal of Lindzen and Choi. Since that rebuttal IS a published paper, and L&C have published results that do seem to get a better estimate of CS (whereas SB11’s conclusion is more of a non-result), I would think that looking at the flaws in D10 would be more fruitful, at least until D11 is finally published.

This discussion will be useful in that context since it is now getting at the core of the problem.

PS: real data is much closer to the second graph than the first, which shows that Dessler’s 20:1 is a fairy tale.
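The reverse-regression acid test mentioned above is easy to sketch: with comparable noise on both variables, the forward OLS slope is attenuated and the reciprocal of the reverse slope is inflated, so the two bracket the true value instead of agreeing. The numbers here are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 5000
true_slope = 2.0
x_clean = rng.normal(size=n)
x = x_clean + rng.normal(scale=0.7, size=n)             # noisy "controlled" variable
y = true_slope * x_clean + rng.normal(scale=0.7, size=n)

forward = stats.linregress(x, y).slope        # attenuated below 2
reverse = 1.0 / stats.linregress(y, x).slope  # inflated above 2
print(forward, reverse)
```

When the x errors are negligible the two estimates converge; a large gap between them is a quick diagnostic for an errors-in-variables problem.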

Steve,

good that you are finally engaging in the very important discussion about feedback. However, it seems that you have not yet caught on to the special definition of positive feedback in climate science. It does not mean that lambda is positive, just that the magnitude of lambda is smaller than the value obtained from blackbody radiation without contributions from changes in water vapor, clouds etc caused by the temperature change. A positive lambda would indeed lead to instability.

I can’t speak for Steve but he wouldn’t be the first person confused by this subtle shift in terminology. Who began it? And for goodness sake why?

In control systems lingo, there’s open loop gain and closed loop gain. The climate people never made a point of distinguishing those two.

This is yet another non scientific “trick” special to this field. When the base line, though not really contentious, has not even been established and could be open to reassessment, it makes absolutely no sense to adopt such nonsense.

It’s like referring to any change in a dynamic chaotic system as an “anomaly”, as if anything in climate was ever stable or “normal”. It is politicising jargon, not science.

I consider feedback to be the relationship between the variables, not arbitrary X minus the relationship.

Imagine electronics engineers deciding that all feedbacks will be stated relative to a negative gain of 3.3. The idea is so stupid as to be laughable.

Climate non-science seems to regard this as logical. Maybe they should call it the “feedback anomaly”.

I’m interested (as I think all here are) in the science not the word games.

Climate non-scientists can redefine all they like , I will continue to work on the feedback.

Jens, thanks for reminding us of the Rabbit hole, e.g. the Climate Etc. discussion on CO2 no-feedback sensitivity citing IPCC AR4 Section 8.6, Climate Sensitivity and Feedbacks: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6.html

Steve,

I’m wondering why there is such a big difference in t range on your two plots?

Steve: the only difference is the scale parameter of the F series in (my interpretation of) the Spencer-Dessler setup.

Yes, look at how F_ocean affects T in the model, and then wonder how the supposed 13 W/m^2 fluctuations (standard deviation) of F_ocean according to Dessler11 manifest themselves in mere fluctuations of < 0.1 C during that period. Yes, this “forcing” is constantly changing, but if you look at the chart of GISS forcings, and note that a mere 3 W/m^2 for volcanic forcings manifests itself almost immediately as a > 0.5 C change in the temperature record, I would have thought that the 13 W/m^2 average changes would have been more pronounced in the temperature record over the 2000-2010 period. That’s what got me scratching my head, and then of course it becomes clear that Dessler11 is using the 750-meter depth of Argo to calculate it (rather than 100 m), so that number really has nothing to do with F_ocean.

Hey MikeN,

I don’t mind you taking me down a notch or two. I haven’t read any of the three papers, and I’m not going to. So yeah, shame on me, my hyperventilating, and my attention span of a gnat.


But how about you? Are you going to answer Steve’s questions?

Dessler skilfully references the plots in SB08 to support his point, though he has also commented that the model is not valid because it does not account for ocean heat input (the disappearing S in Spencer’s notation).

From SB08:

>>

Cp dT/dt = -aT + N + f + S (2)

Daily data were collected from each run, 31-day averages computed, and the feedback parameter α estimated by linear regression of average T against average F (= -aT + N).

>>

The rather elaborate plotting of SB08 thus seems to be a graphical representation of the tautology: if you remove all the other terms, you get a trivial relationship and a good OLS estimate of alpha.

What Dessler (wilfully?) omits to say is that the good fit occurs not just when N is small but when there is _no S either_. Since his main argument is the dominance of ocean heat input, his claim that SB08 figure 2b “confirmed” his position is fallacious.

Since SB08 says tests were run with a fixed, finite S, it is unclear why his equation omits it and why the data he used to do the plots does not include it.

This is what D has criticised elsewhere but is happy to overlook in claiming confirmation.

The key point to retain from SB08 is the fact that OLS estimator does deviate significantly from the true feedback as soon as any other inputs are present. This includes random or systematic heat input from the oceans.

OK, I’ve got what SB08 was doing, though his omission of S without explanation makes it a bit hard to follow his logic.

He is injecting a suitable fixed proportion of zero-mean, gaussian-distributed S. This is the surface radiative response to short-term variations in ocean currents. (Though the equation does not include longer-term oceanic heat input.) This is adding some gaussian noise to the y term. OLS is known to be the best estimator in that situation, which is why he gets good alpha regression slopes with no N.

N is a radiative forcing. This will _induce_ a rate of change of temperature with time. It will not be in phase with surface radiation but ORTHOGONAL to it (wrt time) since it is the time derivative. This is probably what Bart was referring to above.

This will add noise to BOTH axes and this is where OLS comes apart. SB08 shows that real climate data matches a range of N,S values where this cannot be ignored.

D ignores this result by choosing a “more realistic” 20:1. As I’ve noted elsewhere, he often uses such arbitrary affirmations as the cornerstone of his so-called rebuttals.

I do not share Spencer’s pessimistic view that all is lost but do agree that any analysis based on trivial ols regression can be shown to be wrong before it is even started.

Fitting the inverse ols regressions to the above plots would be a good visual demonstration.

“It will not be in phase with surface radiation but ORTHOGONAL to it (wrt time) since it is the time derivative.” It’s worse than that. It will have a spread of phase relationships because of the filtering going on by having a term proportional to T on the right side.

Could you just recap what we are referring to here? This is about the phase-space plot of TOA radiation anomaly against surface temperature anomaly, right? This will contain a mix of a linear response (where both are time dependent) and an “oscillatory” response where one is a time derivative dT/dt against a time-dependent forcing.

I don’t understand your “on the right side” and your filtering.

I’m sure you have a valid point but something’s got out of context.

Let’s take just

Cp dT/dt= -aT + N

Let us set tau = Cp/a and express this as

tau * dT/dt = -T + N/a

The amplitude response from N to T is (1/a)/sqrt(1 + (w*tau)^2) where w is frequency in radians per time unit (w = 2*pi*f, where f is the frequency in cycles per time unit).

The phase response is phi = atan(-w*tau). The phase and group delay are given, respectively, by tau_phi = atan(w*tau)/w and tau_g = tau/(1 + (w*tau)^2). Note that the phase approaches -90 deg only at high frequency, and it is only there that sinusoidal signals approach orthogonality. In between, the phase goes from 0 to -90, the phase delay from tau to pi/(2*w), and the group delay from tau to 1/(tau*w^2).
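These formulas are straightforward to evaluate numerically; a sketch with illustrative values of a and tau (symbols as defined above):

```python
import numpy as np

a, tau = 1.0, 5.0                        # illustrative parameter values
w = np.logspace(-2, 2, 500)              # frequency in radians per time unit

amplitude = (1.0 / a) / np.sqrt(1.0 + (w * tau) ** 2)
phase = np.arctan(-w * tau)                  # 0 at low w, approaching -pi/2
phase_delay = np.arctan(w * tau) / w         # tau at low w, pi/(2*w) at high w
group_delay = tau / (1.0 + (w * tau) ** 2)   # tau at low w, 1/(tau*w^2) at high w

# low-frequency limits: gain -> 1/a, both delays -> tau
print(amplitude[0], phase_delay[0], group_delay[0])
```

Plotting amplitude and phase against log(w) gives the Bode plot being discussed throughout the thread; the point is that the delay is frequency-dependent everywhere except the two asymptotes.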

dT/dt is almost by definition orthogonal to T without making any reference to cause or the whole system equation.

I was not suggesting the T was orthogonal to N.

We are not plotting T against N .
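The lag-zero orthogonality claim is easy to check numerically for a stationary series; here a toy AR(1) process stands in for T, and the centered difference makes cov(T, dT/dt) vanish in expectation:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200000
g = rng.normal(size=n)
T = np.empty(n)
T[0] = 0.0
for i in range(1, n):
    T[i] = 0.95 * T[i - 1] + g[i]   # stationary AR(1) stand-in for T

dTdt = np.gradient(T)               # centered finite-difference derivative
r = np.corrcoef(T, dTdt)[0, 1]
print(r)                            # ~0 at lag zero
```

So a term entering through dT/dt contributes nothing to the in-phase (lag-zero) regression, which is the sense of “orthogonal” being used here.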

I don’t know the precise definition of your terms. Perhaps N is supposed to be noise? Make the input an all purpose “u” so that

tau * dT/dt = -T + u/a

Now, what do you plot against what to derive tau? If tau were “small”, you could plot T versus u to try to find “a”, I guess, and using an assumed value of Cp, get tau = Cp/a. I believe this is essentially what Dessler was trying to do.

But, as I have been explaining, the phase relationships there make this problematical, and there is no guarantee that “tau” is small (I guarantee you the opposite, in fact).

You could theoretically get both parameters from a Bode plot. But, in reality, you would find that the outcome does not match the assumed 1st order form.

If both axes represent noisy variables, why are we using OLS anyway, which is predicated on minimizing an error metric in the dependent variable only? Why not something like Deming regression?

Steve: I think that OLS versus other alternatives is a very secondary issue. The larger issue is what, if anything, the slope represents.

There are several possible approaches: Deming, PC, TLS, … they all require some knowledge of the nature of the errors/noise involved. I’m sure Steve knows this kind of problem better than most.
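For reference, Deming regression has a simple closed form once the ratio delta of the two error variances is assumed known. This sketch is generic, not taken from any of the papers under discussion:

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Closed-form Deming slope; delta = var(y errors) / var(x errors)."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return (syy - delta * sxx +
            np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)

# illustrative check: equal-sized noise on both variables, true slope 2
rng = np.random.default_rng(4)
n = 5000
x_clean = rng.normal(size=n)
x = x_clean + rng.normal(scale=0.5, size=n)
y = 2.0 * x_clean + rng.normal(scale=0.5, size=n)

ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
dem = deming_slope(x, y, delta=1.0)
print(ols, dem)    # OLS attenuated; Deming close to 2
```

The catch, as noted, is that delta itself has to come from somewhere; misjudging it just trades one bias for another.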

I have tried using the magnitudes of the two random series used in Spencer’s model to correct for attenuation. This is not rigorous, but it is at least a first step, since we know what we put in. In reality it will be trickier.

CS=OLS*(rad_coeff/nonrad_coeff+1)

[in SB08 talk non-rad=N ; rad=S ]

This gives a corrected ols estimator that is typically within +/-10% of the feedback I started with. I haven’t run any stats on it but it looks promising. The caveat is evaluating those magnitudes in the real data.
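A rough numerical sketch of that attenuation and the ad hoc correction, using a toy one-box model with invented parameters (this is not the actual SB08 code, and no claim is made that the correction is exact in general):

```python
import numpy as np

lam, cp = 4.0, 50.0        # invented feedback (W/m^2/K) and heat capacity
sig_s, sig_n = 1.0, 3.0    # radiative vs non-radiative noise magnitudes
n = 100_000

rng = np.random.default_rng(1)
s = rng.normal(scale=sig_s, size=n)   # radiative ("cloud") forcing
f = rng.normal(scale=sig_n, size=n)   # non-radiative (ocean) forcing

t = np.zeros(n)
for i in range(1, n):                 # one-box energy balance, unit time step
    t[i] = t[i - 1] + (s[i] + f[i] - lam * t[i - 1]) / cp

r = lam * t - s                       # "observed" TOA flux anomaly
ols = np.cov(t, r)[0, 1] / np.var(t, ddof=1)   # attenuated estimate of lam
corrected = ols * (sig_s / sig_n + 1.0)        # the ad hoc correction above
```

In this run the raw OLS slope comes out well below lam = 4, while the corrected value lands near it; that matches the within +/-10% behaviour described, but only because the noise magnitudes are known exactly here.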

Using this at lag zero should ensure that “N” is decorrelated: since it is orthogonal, and hence out of phase (pi/2), it won’t be contributing to the in-phase OLS. It can be regarded as a noise term, and this should remove the attenuation shown in SB08.

The effect of long-term (multi-annual) tendencies in S will need to be considered further.

The point of orthogonality due to the different nature of N and S forcings is crucial in eliminating this problem. I have not seen it addressed anywhere before.

lag plot of corrected ols estimator:

http://tinypic.com/r/xmj449/7

and the conclusion is that the estimate is wildly overstated?

If you have a point to make, can I suggest you make it clearly and state the reason for your conclusion, rather than asking cryptic, rhetorical questions.

Since you seem to be having some trouble understanding the word “corrected” I’ll spell it out.

The simple ols here is about 1.2 whereas the model was run with a feedback of 4. This is the method being used by Dessler to diss more serious work of others and is what is behind claims of positive (relative) feedback in climate.

The corrected OLS calculated as I described is very close to 4 in this one-off run. It is normally within +/-10% of the initial f/b value. After all the mumbo jumbo and supercomputers are stripped away, CAGW is founded on grade-school maths, applied incorrectly.

Impressed? I know I was.

And, just a note because I see the two concepts confused so often even by people who should know better: do you mean “gaussian”, or do you mean “white”?

White noise is independent from sample to sample; the term refers to its correlation, its distribution if you will, in time. “Gaussian” noise refers to its distribution in space, being the density of the range of values falling under a bell curve. Spencer proposes pseudo-random numbers shaped by log and sine. The distribution of the magnitudes is close to a Gaussian bell. As far as sample-to-sample independence goes, that is down to the randomness of the algorithm: presumably fairly good, but obviously pseudo.

I have used random and gaussian in that context. I have not used the term white.

This sounds like the Box-Muller transform. Generate two uniform random #s (U1 and U2) between 0 and 1, then Z1 = sqrt(-2*log(U1)) * cos(2*pi*U2) and Z2 uses sin instead of cos (note that log is the natural log in MATLAB.) Small U1 causes problems with floating point precision (and 0 blows up) which puts a practical limit on the tails, though checking can mitigate this problem.

Wikipedia has a decent enough article though it doesn’t mention the precision issue for small U1. As long as U1 and U2 are otherwise “good” uniform #s, the normal results should be fairly uncorrelated (and, thus, independent.)

Mark
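For reference, a minimal Box-Muller implementation with a guard against small U1 (the clamp value eps is an arbitrary choice; log is the natural log in Python's math module, as it is in MATLAB):

```python
import math
import random

def box_muller(u1, u2, eps=1e-300):
    """Box-Muller: two uniforms on [0,1) -> two independent standard normals.
    Clamping u1 away from 0 keeps log(u1) finite, at the cost of a hard
    (but astronomically distant) limit on the tails."""
    u1 = max(u1, eps)
    r = math.sqrt(-2.0 * math.log(u1))
    z1 = r * math.cos(2.0 * math.pi * u2)
    z2 = r * math.sin(2.0 * math.pi * u2)
    return z1, z2

random.seed(42)
samples = []
for _ in range(50_000):
    samples.extend(box_muller(random.random(), random.random()))

mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)  # should be close to 1
```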

BTW, both F and S seem to suffer from mission creep as SB08 progresses, which adds to the confusion. I don’t know if the authors were aware of the logical slips or thought they didn’t matter. The basic conclusion of the paper does not seem to be wrong because of that.

I apologise for some confusion in earlier posts but it’s hard to know exactly what quantities you are referring to when they change meaning within the paper and then D gratuitously reinterprets and renames them in his replies.

Hopefully this thread will have made it all a bit clearer.

I too found these papers – on both sides – extremely hard to decode. I wish that peer reviewers in climate science would ensure that authors archive the data as used and detailed supplementary information showing the actual calculations.

Unpaid reviewers are worth every penny!

Sadly this situation seems endemic. Lindzen and Choi has a similar issue. They explain at some length how they select their periods of study, but after several readings I have no idea how they derived the four or five points on their graph, nor what they represent.

I find it strange that they chose to publish a paper about a new method without documenting what it is.

Clearly the review process does not review what one would presume it does.

Steve: I obtained data and scripts from them today. Take a look.

This is a nice, arcane discussion, but it only latterly identifies a major problem: the way feedbacks are defined, understood and applied in climate science. The following remarks are not presented as being right or wrong, but simply my experience and observations of the evolution and problems of climate science, particularly the concept of feedback.

snip

Steve: sorry. 1) OT and 2) longstanding blog policy not to try to resolve or debate the “big picture” in a couple of paragraphs. otherwise all threads become the same.

The equations used by SB and D refer to a Linear system, unless I have grossly misunderstood the meaning of feedback.

I have two problems. The equations do NOT involve lag. They are the equivalent of a “perfect” amplifier circuit that has no capacitance. I am therefore puzzled about analysing lags in a system that has no phase in its constitutive equation. If it were written as

heat_capacity * dDt/dt = sigma(fluxes) + lambda * F(Dt, t)

it might make more sense to people used to analysing systems. For example, the F(Dt,t) could be broken in a set of ODEs describing the mechanics of the feedback.

The other problem I have is that there are well-established methods for analysing linear systems, and the whole process of ignoring these methods and contorting the analysis to conform to statistical methods for which they were not initially envisaged seems to me to be unsound. I agree that more powerful approaches (Bode plots, etc.) would be extremely helpful, but the whole system needs to be formulated mathematically with delays (i.e. phase), since this is basically what a Bode plot implies.

Having said that, I cannot imagine what argument Dessler is trying to advance, but the whole statistical approach makes very little sense since it does not appear to reflect the underlying “mathematics” of the model.

Tried to understand Nick Stokes’ analogy to electrical feedback circuits, but in that case the attempted observational measurement of resistance affects the system. Is this what Nick is suggesting the curious Dessler ramblings mean?

Yes, somewhat. It’s true that if you want to measure a resistance, say, you’d normally perturb the system. But it’s the same issue if you just observe perturbations. You see voltage fluctuations producing apparently amplified current responses, as if the resistance were much lower. Of course, you can analyse the response in other ways so that you do discern the true resistance. That’s what Dessler is doing by embedding the feedback ODE in his simulation.

FWIW, Climate Audit supports LaTeX. If I did this right…

You can experiment at texify.com

😀

Oh, cool. Should be a helpful site.

I think newer versions of Mathtype allow you to convert to Latex, too.

Mark

Thanks, that’s a great help. I did not realise that was possible.

Tom Gray (Sep 24, 2011 at 2:31 PM): “…Simulators, such as SPICE, were solving those circuits quickly 30 years ago. I can just imagine what the circuit emulator packages and fast computers of today are capable of.”

Free version is here: LTSPICE 4 (most use the same Berkeley SPICE v3 as the calculation engine feeding the interfaces)

http://www.linear.com/designtools/software/

Output includes FFTs

If only one knew the input parameters, the delays, gains, losses, etc it would be simple to model the earth. Also provides a simple method of varying parameters on a run to run basis

Yes, thirty years would be about right. I studied some electronics under the EE (PhD) who wrote the transient analysis section of the original SPICE. Interesting fellow, as are many brilliant people. He was working with the on-campus medical school to model the human heart using inductors and capacitors and resistors.

The point is if you can determine an analogous component for a subsystem then you can do some very interesting modeling — even using empirical data — not made up “simulation numbers”. A capacitor can be an integrator (Node to ground) — or it can perform differentiation (series hookup/pass through) as I recall. 🙂

Regardless, Mr. McIntyre seems to have inspired some very skilled individuals to re-think some sections of climate analysis. This is something long overdue.

When you get to the point where you have numerous equations which model sections of the climate and they can be assembled into a network of equations with transfer functions that define the data-flow between nodes then you can build that NP style graph for the network and apply some of the NP modeling and reduction techniques to get some real answers. At that point I may get more interested but my EE skills are rusty through lack of use — so at this point I could contribute little — and have other projects anyway — mostly in LFN modeling (using real data — honest!).

Mr. McIntyre may yet get his engineering-grade report through inspiration and leadership, whether he wanted to do it or not. Those who studied both EE and Computer Science (modeling) probably remember the story of how one EE-to-be with relatively little knowledge made advances in the art.

Some guidance as to where to apply the EE/CS modeling skills — couple that with strong statistical people to validate the results and you have the makings of an interesting team. 😉

I say: “Lead on!”

I’ve often thought about using SPICE in this context, but I’m not familiar with its use and that would be another huge learning curve. However, it should be fairly simple to represent Spencer’s simple model in that way, and this may avoid the temptation to make gross, simplifying assumptions to reduce the number of terms to something we can do in our heads (which is essentially what SB08 does).

Linux offers spice and ngspice.

I agree with Bart and Mr Pete.

The impulse response of the system is a negative exponential (or positive ) and the analysis of the system is trivial. Note that since there are no delays in the feedback, this is a simple first order low pass filter.

The problem is simply to determine the time constant of the system, which is a deconvolution – not, repeat not, a problem in regression, which does not describe the system.

Autoregressive models can not be used to deconvolute a feedback loop? Is that what I am hearing in this thread from the EEs?

They can, but in this case the impulse response is a simple exponential whose time constant depends on the “gain” of the feedback. The solution of the equation used by S&B and D in presence of a Dirac function is:

f(t) = (1/k) * exp(-k*t), where k = lambda/Cp

Since the output is simply the convolution of an exponential with the input, this can be done rather more simply, and in my view in a more intuitive way, using a deconvolution.

The model is that of a low-pass filter (see MrPete’s post), and so delay isn’t really meaningful; there are different delays, or more correctly phase shifts, that are dependent on frequency (a pure delay has a linear phase shift). However, a first-order system with a truly delayed feedback does not result in the exponential impulse response implied by the equation used by S&B and D.

Therefore, I would suggest that the time constant of the system (lambda/Cp) is the parameter to be estimated and OLS regression of time shifted signals is an approximation to the basic equation.

(I’m guessing where RC says “MrPete’s post” etc, what is really intended is “Steve McIntyre’s post”… MrPete barely understands 5% of the stats and certainly can’t produce a post like this 🙂 🙂 )

“Since the output is simply the convolution of an exponential with the input…” It’s also just exponential smoothing of the input. That’s easy to implement, effectively by numerical solution of the ODE. Which, to complete the circle, is what the “tautological” algorithm of Dessler does.
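The equivalence is easy to verify numerically: convolving the input with a discretised exponential kernel reproduces, sample for sample, the one-line exponential-smoothing recursion (i.e. the numerical solution of the first-order ODE). Parameters here are arbitrary:

```python
import numpy as np

tau, n = 10.0, 500                  # arbitrary time constant and length
b = np.exp(-1.0 / tau)              # per-step decay factor
rng = np.random.default_rng(2)
u = rng.normal(size=n)              # arbitrary input signal

# (a) convolution with the discretised exponential impulse response
h = (1.0 - b) * b ** np.arange(n)
y_conv = np.convolve(u, h)[:n]

# (b) the same thing as exponential smoothing of the input
y_rec = np.zeros(n)
y_rec[0] = (1.0 - b) * u[0]
for i in range(1, n):
    y_rec[i] = b * y_rec[i - 1] + (1.0 - b) * u[i]
```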

Isn’t this an example of the “circular” reasoning that Bart was criticized for? He extracted the impulse response of a low pass filter in the empirical data. Dessler made assumptions that Bart did not. Perhaps “circularity” in science is a way of making hypotheses that can be tested by experiment.

Tom,

I don’t see the circularity (my own reference was just to how we started talking about ways to implement the RC lowpass and it came back, in my mind, to what Dessler had done).

But anyway, I think D is following SB11 here.

There’s a physical argument – the heat capacity acts like a capacitor, with the flux imbalance behaving like a current. That’s the basis of the DE; it isn’t just guesswork. Of course, finding numbers for the heat capacity is a big problem.

It seems to me that the whole debate about feedback is a red herring.

The equations used by S&B and D simply show that they are assuming that the system has an impulse response that is a negative exponential with a time constant of Cp/lambda.

This is simply assuming that energy goes through a first order system and all is recoverable as time tends to infinity. Physical analogies could be dye in solution going through a mixing chamber or, voltage signal applied to a simple capacitor-resistor network. Note that these systems do not contain feedback.

It seems to me that the focus of discussion should not be only the minutiae of the analysis, which I do not think reflects the “mathematics” of the system, but should be the physical assumptions underlying the analysis.

1) The basic equation does not imply feedback – although an electronic system might do so.

2) Given the basic equation, which has definite physical implications, the analysis should reflect this equation rather than something (delay) which it does not.

RC

“It seems to me that the whole debate about feedback is a red herring. The equations used by S&B and D simply show that they are assuming that the system has an impulse response that is a negative exponential with a time constant of Cp/lambda.”

I agree with both of those statements, from a slightly different perspective. They have a model which enables output (T) to be calculated from the input (flux). There are mechanisms whereby the output can modify the input. These have the capability of leading to runaway. They call that feedback. Whether everyone would do so is not important.

But I think the cloud mechanisms discussed here are assumed fast relative to the sampling period. So no time constant is attached to them. It’s a DC mechanism.

SB and D (following) allow an RC response to total flux based on heat capacity, as you observe. This is an addition to the usual equilibrium consideration. The time scale here does seem to be long enough to affect observations, so they try to allow for it. Both quantities involved in that RC are very uncertain.

RC – have you seen the discussion here?

Why is my comment still awaiting moderation?

Steve- 1) it was off-topic; 2) it is longstanding blog policy to discourage commenters from trying to debate the “big picture” in a couple of paragraphs. Otherwise every thread becomes the same.

I thought, in light of comments made earlier, readers would like to know how climate science defined feedback since it appears to contradict the definitions in other specialist areas. It would have an influence on assumptions and all that ensued.

When I read Tim’s comment, I thought it made some interesting points on the definition of feedback, though admittedly one reason it was interesting was that I could understand it (unlike some of the detail, which is beyond my Mech Eng training). Perhaps worth a post of its own?

Trying to summarise Steve’s main points in this thread:

D11:

regression of TOA flux vs. ΔTs yields a slope that is within 0.4% of λ

This result is from regressing a synthetic time series against itself plus mean-zero, Gaussian-distributed noise.

It is a mathematical result that OLS will give an accurate answer in this case. In fact, his 0.4% is more likely a result of the deviations of his pseudo-random number generator from ideally random data than anything else.

This is not a “result”; it can be demonstrated mathematically without doing even one run of the model.

Repeating the same mistake 1000 times does not make it less of a mistake. It merely underlines his lack of understanding of the techniques he is using.
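The mathematical point is easily demonstrated without any model at all: if the noise enters only the dependent variable, OLS recovers the slope almost exactly. A sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 4.0
t = rng.normal(size=100_000)                       # synthetic "temperature"
flux = lam * t + rng.normal(size=t.size)           # noise in y only

slope = np.cov(t, flux)[0, 1] / np.var(t, ddof=1)  # recovers lam
```

The sub-percent agreement here is a property of OLS under these assumptions, not evidence about the climate system; the attenuation problems begin once noise also enters the regressor.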

a result confirmed in Fig. 2b of Spencer and Braswell [2008]

Well, SB08 shows that you get a good correlation when using negligible “N” , which is the term D is effectively excluding in this test. SB08 also shows that as soon as you do have significant radiative forcing present the correlation goes to pot and you get an inaccurate ols estimator.

Most of Dessler’s papers have extremely poor corr coeffs which would thus suggest a strong radiative component. He manages to draw the opposite conclusion.

This conclusion, however, relies on their particular values for σ(ΔFocean) and σ(ΔRcloud). Using a more realistic value of σ(ΔFocean)/σ(ΔRcloud) = 20,

As usual, no justification for “realistic”. His result, more than that of SB, who did consider a wide range of values, depends totally on his _assumed_ choice of 20:1.

This also applies to the individual components of the TOA flux, meaning that regression of ΔRcloud vs. ΔTs yields an accurate estimate of the magnitude of the cloud feedback

This result, based precisely on exclusion of everything but a perfectly random ocean heat forcing, tells us nothing about how well regression will work in the presence of other forcings (you need to refer to SB08 again for that); neither is there any justification for the obviously false claim that it applies “to the individual components of the TOA flux”.

The whole deviation into reprocessed data with more invalid ols regressions is smoke and mirrors.

The main claims of this unpublished paper are based on ad hoc assertions and fallacious arguments.

Yes, basically an exercise in “if the system behaves like this, then this form of analysis would work,” but never confirming in any way that the system behaves like “this”.

And, indeed, we know that it doesn’t.

Bart;

Try this for an analogy;

You’ve been called in to consult on an important issue with a critical product, by some very dear friends whom you know to be intelligent and very decent human beings.

They walk you through the problem with the product. They’ve gone into great detail of analyzing the voltages vs. time.

It turns out they measured all the voltages with an old DVM, the kind we had before they could do good RMS. You ask what scale they used? Yep, the DC scale. Oh, garsh…

In my experience, the first thing to do is to have them demonstrate their technique. Then ask them to change to AC, and explain the reading. Bewilderment ensues… Ask to change the range, and the readout is different. ???

At that point I dredge up a scope and connect it, (and gently show/teach them how to trigger it). More bewilderment usually at this point; either 1) why is the waveform so a)fuzzy, b)wiggly, or c) going up and down to the power supplies?

You know the drill…

Usually, it takes a few days before I try to wheel a Fourier Transformer over to the apparatus in question. (they used to be rather heavy, not so much anymore), i.e. Bode100. Then we have to talk about small-signal linear and noisefloor and blahblahblah…

Anyway, my point with all of this annoying pedantry is to remind you that we all need baby steps. It doesn’t do any good to just tell us the answer and the things that pass for ‘semi-rigorous contemplation’ at your level of expertise.

The crux of confusion at this point seems to be whether arbitrarily sophisticated trend fitting (e.g., ARMA) can deduce nontrivial feedback networks. I think the answer you’ve stated is that pure delays (at all freqs) only exist with sophisticated math not readily found in nature, but even as such, they are on the ‘simple’ end of the feedback complexity spectrum. I think that means you are saying ‘Game Over’ on using OLS to deduce nontrivial feedback, but it was less than casually obvious to the most trivial observer.

If you have the patience to gently teach some bright but naive folks, I think you might find it to be fulfilling use of time.

(as if anyone was gentle with us…)_

But hey, it’s a new millennium…

Carpe Dinero

RR

“The crux of confusion at this point seems to be whether arbitrarily-sophisticated trend fitting (i.e., ARMA) can deduce nontrivial feedback networks.”

It can. Here is Wiki showing how the fitted coefficients (from ARMAX, say) give you a general rational transfer function in the Z-transform domain.

OK, well, looks like I was wrong, if I am understanding Nick.

I never realized that all of that OLS statistics stuff was actually the Z-transform done with complex variables. Pretty nifty how the whole “j Omega” axis in the ‘s-plane’ gets mapped into the unit circle. Seems like a nice way to tidy up a complex topic.

I guess the statisticians were being kind to us visiting noobs, and they didn’t want to scare us off or humiliate us regarding our lack of ‘grok’ of handling magnitude and phase, carefully concealing their adroit use of complex variables.

RR

hehe.. I knew Z transform would come up sooner or later

one day I sat listening to two flight control engineers arguing and Z transform, blah blah, Bode plot, blah blah, Pole, blah blah, came up. I was like ” huh?” they tried to explain. I realized then that the lobotomy I had received as part of my move from engineering to marketing had taken hold. The second lobotomy, required for advancement to the executive offices was even more effective.

That doesn’t tell you anything about the validity of the ARMA coefficients obtained through regression on a non-linear phase system in the first place. It’s just saying that “given the correct coefficients, this is how you represent them in the z-domain.” The z-domain transfer function is derived directly from the discrete-time difference equation, e.g.:

y(n) = x(n) + a1 * x(n-1) + b1*y(n-1), where n is the sample index

has a z-transform of

Y(z) = X(z) + a1 * X(z)*z^-1 + b1*Y(z)*z^-1

reordering terms gives you

(1 – b1*z^-1)*Y(z) = (1 + a1*z^-1)*X(z)

and the resulting transfer function is

H(z) = Y(z)/X(z) = (1 + a1*z^-1)/(1 – b1*z^-1)

The “no feedback” first-order discrete difference equation discussed in here can be represented by

H(z) = 1/(1 – b1*z^-1)

with 0 < b1 < 1 (|b1| < 1 in general for unconditional BIBO stability, but negative values result in an oscillating decay rather than a capacitive decay, and b1 = 0 is trivial.)

Play around in Excel if you want to see how it responds with various b1.

Mark
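The same experiment in a few lines of Python rather than Excel (the b1 values are just examples):

```python
def impulse_response(b1, n=10):
    """First n samples of y[k] = x[k] + b1*y[k-1], i.e. H(z) = 1/(1 - b1*z^-1),
    driven by a unit impulse."""
    y, prev = [], 0.0
    for k in range(n):
        x = 1.0 if k == 0 else 0.0
        prev = x + b1 * prev
        y.append(prev)
    return y

smooth = impulse_response(0.5)    # monotone "capacitive" decay: 1, 0.5, 0.25, ...
wiggle = impulse_response(-0.5)   # alternating decay: 1, -0.5, 0.25, ...
```

With 0 < b1 < 1 the response decays monotonically; with -1 < b1 < 0 it decays while alternating sign, as described above.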

Unconstrained pseudo-linear regression on ARMA models doesn’t work very well – it tends to give you singularities. Moreover, Dessler wasn’t fitting a discrete time model.

I’ve been looking at some of the predecessor papers, especially Forster and Gregory 2006, which seems to have initiated the idea of using regression to estimate sensitivity without worrying about leads and lags. Which seems to be where Spencer and Braswell got started.

I’ve been doing some experiments without taking monthly normals – something suggested by UC. The relationships between absolute quantities don’t necessarily look the same as the relationships after taking monthly normals. This looks like a very interesting issue, but one that would take a lot of time to examine thoroughly since it’s new territory for me.

IIRC it was FG06 that said it thought OLS was the best method and tucked the reasoning away in appendix 1.

When you read it, it basically says they know it gives artificially low results, but they did not want to get bogged down in arguments about the best regression method; i.e. it was a political move, not science.

Well I suppose that was in 2006 and they did want to get published. However, the point was there for the discerning reader.

I still think the basic Planck + radiative forcing + non-radiative forcing is a valid simple model. That should yield an estimate of feedback, provided a more rigorous regression technique is used.

Bart and I have got nearly identical results for the time constant, though by different routes. So if a separate method can get a lambda, there will be a good start to a testable framework.

The electrical analogy is well worth following, if someone could create the climate in spice we could avoid being too simplistic.

My money is on somewhere between 5 and 6, which is about where FG was heading, except for the attenuation of simple OLS.

I have some ideas about why L&C are so much higher but I need to port their work to something I can use before I can get anywhere with that.

I put a comment after Bart’s bode plots but that may not be noticed now. Here’s an overlay of a Spencer model run (random inputs only) over satellite data.

http://tinypic.com/view.php?pic=30sfupc&s=7

This was just the result of hand tweaking parameters to match the form of the satellite data. The key point is it was quite stable to changes in f/b and depth as long as the ratio was kept the same, so I don’t regard the choice of either as being particularly realistic. The ratio probably is.

45/9.2 = 4.89 years.

I did some similar experiments, which are shown in Measuring Climate Sensitivity – Part One.

In essence, if the regression is done with daily results (for an experiment with daily independent gaussian noise added) then of course the regression produces the correct estimate of climate sensitivity.

But once the regression is done of monthly anomalies then (of course) the regression doesn’t produce the correct estimate of climate sensitivity because this month’s temperature is not independent of this month’s radiative noise.

The simple maths for the bias in the estimate is also shown in that article.
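A toy version of that daily-versus-monthly experiment (all parameter values invented for illustration; this is not the article's code). Daily radiative noise drives a one-box model; the daily regression recovers the feedback, while the same regression on monthly means is biased low because monthly temperature is correlated with monthly noise:

```python
import numpy as np

lam, cp = 4.0, 400.0          # invented feedback and heat capacity (daily units)
days = 30 * 3000
rng = np.random.default_rng(4)
s = rng.normal(size=days)     # daily-independent radiative noise

t = np.zeros(days)
for i in range(1, days):      # temperature responds only to past noise
    t[i] = t[i - 1] + (s[i - 1] - lam * t[i - 1]) / cp
r = lam * t - s               # "observed" net flux

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

daily = slope(t, r)                       # close to lam
tm = t.reshape(-1, 30).mean(axis=1)       # monthly means
rm = r.reshape(-1, 30).mean(axis=1)
monthly = slope(tm, rm)                   # biased well below lam
```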

I’m not sure I have read every comment on this Climate Audit article but a missing piece of the jigsaw puzzle for readers here may be Murphy & Forster 2010 who responded to Spencer & Braswell 2008.

The point of their paper was that choosing “correct” values of radiative noise and mixed ocean depth turns the Spencer & Braswell correction into a very small systematic error. A substantial part of their paper is how they arrive at those values.

I don’t yet know if their (Murphy & Forster’s) values are correct. I am currently exploring more interesting models with values of mixed ocean depths that vary with latitude and month to see how that “corrupts” the estimates of climate sensitivity.

“But once the regression is done of monthly anomalies then (of course) the regression doesn’t produce the correct estimate of climate sensitivity because this month’s temperature is not independent of this month’s radiative noise.”

Perhaps so. Assuming the “radiative noise” is “daily independent gaussian noise”, then it would be the case that there is little problem with the use of daily-timescale data. All well and good, but if we can agree that the monthly results will still be biased, then this implies that, well, all estimates that are being done in this manner are biased, since there is no daily global mean surface product, and no analyst has used daily-resolution data; certainly not Dessler or Forster and Gregory.

Keep in mind, however, that you are citing a paper that “responded” to Spencer and Braswell’s earliest paper on this subject. Spencer himself believes that his earlier paper (08) did not make sufficient arguments to really demonstrate their point. That analysis was superseded by their 2010 paper (not the 2011 paper which Dessler “rebuts”). How well the criticism of any earlier paper stands in light of the refined later arguments is rather important.

“..All well and good, but if we can agree that the monthly results will still be biased, then this implies that, well, all estimates that are being done in this manner are biased..”

That was exactly my point.

Mr McIntyre,

I really would suggest that you look at the problem from a physical perspective. All these equations imply is a first order physical system, without feedback, and the appropriate operation to determine the properties of such a system is through deconvolution rather than regression.

Consider the relationship between convolution and correlation – they are complex conjugates, and it should become clear why regression is possibly not the best approach.

You have made this suggestion several times without explaining what it involves and what it produces. You give the impression that you are familiar with this kind of work. Could you outline what properties are obtainable by deconvolution and how that can be done?

Deconvolution is essentially what I did with the Fourier transforms and such on that other page. Deconvolution is what gives the impulse response here.

20,217 views so far. Hopefully, some people viewing will know what to make of it and carry it forward. I’ve done about all I can.

Bart , have you documented all this anywhere?

Graphs are a concise way of communicating a result and I’m interested. What I need is the method.

BTW, was it you who posted finding a feedback of 9.5 a few weeks back? If so, how did you get that result? It seems too high to me, but matched my ad hoc parameters fitting sat. data.

Thanks.

The only documentation is the CA thread.

The feedback of -9.5 W/m^2/degK is the DC gain of the estimated transfer function in the frequency domain or, equivalently, the final value of the step response in the time domain. I would expect fairly large error bars on that estimate (at an educated guess, I’d give it +/- 20% or so 1-sigma).

The problem is the time span of the data available for analysis is less than the correlation time (taken as the inverse of the frequency bandwidth). We’ll know better as we gather more data, or perhaps as other relevant data come to light.

However, I do want to make clear that there is no doubt at all that the feedback is negative, and that it is significant.

@Bart,

I am sure that there is feedback, but the equations used by SB and D do not contain feedback.

If you read Judith Curry’s chapter on thermodynamic feedback in her atmospheric physics textbook, she defines it correctly to be a sum of internal functionals, which makes sense. My quibble with this definition is that it is static analysis; were each internal functional treated as a complex function of time, dynamic analysis in control-theory terms would become straightforward.

The difficulty, I believe, is distinguishing between non-feedback as implied by SB and D and a system in which the feedback is defined as a set of ODEs relating internal variables from flux measurements.

There’s a big difference between what is being discussed here and what Bart is referring to (from the 2010 thread that went into the weeds), in general terms, though the two are linked, obviously. His reference is a difficult thread to follow, with a lot of heat-of-the-moment statements, retractions, corrections, apologies, insults, etc., but his initial analysis (and code) is pretty plain and not unlike anything you have been saying in this thread, RC.

Mark

It is fairly straightforward.

If a time(or any other variable) signal passes through a linear system, it produces an output.

A linear system is defined in many ways, but it can be described in terms of LINEAR differential equations (which may be partial in systems with more than one variable), as in SB2011 and D2011. A more signal-processing-orientated definition is that the output of a linear system is proportional to the input, is stationary (i.e. its properties don’t vary with time) and obeys superposition: if you put in the sum of two waveforms, the output is the sum of the outputs of the individual inputs.

The process of an input being modified by a linear system to produce an output is called a convolution. If the signals are treated as a function of time, the system’s behaviour is defined by its Impulse response – i.e.: what the system does when it is presented with an input of infinite magnitude and infinitesimal duration (and an area of 1). In this case the output is calculated from the convolution integral.

A more powerful method of analysing linear systems (and a much more efficient one computationally) is the use of integral transforms: Fourier, Laplace, Z and so on. In this case the behaviour of the system is defined in terms of complex frequency, and in this representation it is known as the transfer function. Convolution in this domain is multiplication of the transform of the input by the transfer function, and the output signal can be recovered by inverse transformation.

To calculate the output of a system, defined by a set of linear DEs, in principle one does the following:

The input signal undergoes a discrete Fourier transform*, the transform is multiplied by the calculated transfer function, and the result is then subjected to an inverse DFT to obtain the output.

Deconvolution is identifying the system when one has an input and an output signal. The transfer function of the system is calculated by dividing the transform of the output by the transform of the input, and the impulse response of the system is the inverse transform of the calculated transfer function.
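As a noise-free toy illustration of that division (the system and signals are invented; real, noisy data calls for the cross-spectral estimators discussed in Bendat & Piersol):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
h_true = 0.3 * 0.7 ** np.arange(N)         # an assumed decaying impulse response
x = rng.standard_normal(N)                 # broadband input signal
# Output via circular convolution (multiplication in the frequency domain):
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true)).real

# Deconvolution: transfer function = transform(output) / transform(input),
# impulse response = inverse transform of the transfer function.
H_est = np.fft.fft(y) / np.fft.fft(x)
h_est = np.fft.ifft(H_est).real

assert np.allclose(h_est, h_true, atol=1e-6)
```

With noise present, the bin-by-bin division blows up wherever the input spectrum is small, which is exactly why the system-identification literature exists.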

In SB2011, we know the analytical form of the transfer function because it is defined by their differential equation, and the parameter lambda/Cp is to be estimated.

This can be done, in principle, through deconvolution, which I would suggest is a mathematically “purer” and physically more interpretable method than contortions through OLS.
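A hedged sketch of what that estimation could look like for a first-order model of the form Cp*dT/dt = F − lambda*T, observing forcing F and temperature T. The transfer function from F to T is H(w) = 1/(Cp*i*w + lambda), so 1/H(w) = lambda + Cp*i*w: the real part of the deconvolved 1/H gives lambda, and the slope of its imaginary part gives Cp. All parameter values are toys, not SB2011’s numbers:

```python
import numpy as np

N, dt = 4096, 1.0
Cp_true, lam_true = 10.0, 0.5

w = 2 * np.pi * np.fft.fftfreq(N, d=dt)
H = 1.0 / (Cp_true * 1j * w + lam_true)    # analytical transfer function

rng = np.random.default_rng(2)
F = rng.standard_normal(N)                 # broadband "forcing"
T = np.fft.ifft(np.fft.fft(F) * H).real    # simulated "temperature"

# Deconvolve: estimate 1/H = lambda + Cp*i*w from the data.
Hinv = np.fft.fft(F) / np.fft.fft(T)
keep = slice(1, N // 2)                    # positive frequencies, skip DC
lam_est = np.mean(Hinv[keep].real)
Cp_est = np.mean(Hinv[keep].imag / w[keep])

assert abs(lam_est - lam_true) < 1e-5
assert abs(Cp_est - Cp_true) < 1e-5
```

This noise-free case recovers the parameters essentially exactly; the interesting question, as noted above, is how well it survives realistic noise and short records.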

This idea pervades system analysis, communications, electronics, etc., and there is a huge amount of work on the identification of systems in the presence of noise. Good starting points in the field are Signals and Systems by Oppenheim, Willsky & Young (Prentice-Hall) or Random Data by Bendat & Piersol (Wiley).

* One has to realise that integral transforms are an analytical idea, with limits of integration between ± infinity. Computation of the transform of a real, sampled signal is therefore highly restricted, because the transform is not a continuous function.

That’s precisely what I did at the link, RC. But I found that the transfer function from temperature to dR is not a first-order lag system.

Yeah, this thread seems concerned with a specific subset of the overall problem. At least, that’s my interpretation.

Mark

I do not think so. This is the kernel of the problem: does the analysis reflect the basic MATHEMATICS?

No; if you read the topic, it concerns a specific subset of the general relationship discussed in the 2010 paper (this thread pertains to the 2011 paper: different beasts). The 2010 stuff is much more than what you see here… At least, the thread is more.

Mark

Entering dangerous zone

Perhaps you could make your point a bit clearer rather than assuming we all see the same thing that you see and draw the same conclusions.

I can guess at what you maybe mean and could possibly see reasons why I may disagree as to whether it’s dangerous. But until you say clearly what you mean I’m not going to post what I presume you mean and why I do or don’t agree.

You could possibly explain the nanmean anomaly too.

Thanks

I just meant that it is more difficult to work with plain raw data; the incoming series have so much energy. Nanmean is the average-of-anomalies of stations that have full coverage for the 1961–90 period in the data set I have.

OK, so WordPress has kindly dumped my quote tags, so you’ll have to work out for yourselves which paras are quotes from Dr D.

A word of warning to emphasise a point mentioned by UC.

As long as one only does simple linear correlations, the problem of using anomalies instead of true values is (more or less) known and (more or less) taken care of.

However, as soon as one begins to introduce delays and phases (e.g. spectral responses), it is no longer under control, because using anomaly fields instead of the real fields removes all signals where L/T is an integer. This can and will bias the analysis.

I am aware that this issue doesn’t strictly address the very technical (OLS) point made here, even if it addresses the larger point of whether and how far the model used here represents (physical) reality.

If considered too much OT, snip.

As a follow-up to my comments to Bart and P Solar, the difficulty here is not model identification in terms of a transfer function analysis obtained by deconvolution, which can be done, but the canonical form of the model, and hence of the transfer function.

The question “is there feedback?” stems from a mathematical model of the system. I maintain that the basic equation used by SB and D is simply that of a linear first-order system that does not contain feedback. This model can be modified to contain feedback, using the thermodynamic model of internal functionals defined in Judith Curry’s textbook. Although her analysis is steady-state, this can easily be handled by making the internal variables functions of time governed by ODEs.
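One way to see what “making the internal variables functions of time” buys you: if each internal functional responds with its own assumed time constant tau_i, the static feedback sum lambda = sum_i lambda_i becomes a complex function of frequency, which is what makes transfer-function analysis natural. All numbers below are invented for illustration, not physical estimates:

```python
import numpy as np

lam = np.array([1.2, -0.5, 0.3])   # illustrative static feedback components
tau = np.array([0.1, 5.0, 50.0])   # assumed response time of each component

def lam_eff(w):
    """Frequency-dependent effective feedback: each static component
    lambda_i is replaced by the first-order response lambda_i/(1 + i*w*tau_i)."""
    return np.sum(lam / (1 + 1j * w * tau))

# In the static (w -> 0) limit the plain sum of components is recovered;
# at high frequency the slow components drop out and the magnitude shrinks.
assert np.isclose(lam_eff(0.0), lam.sum())
assert abs(lam_eff(100.0)) < abs(lam_eff(0.0))
```

A regression against short, noisy records effectively samples this function at whatever frequencies dominate the data, which is one way of framing the identification difficulty raised in the next paragraph.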

However, the real problem is being able to distinguish between the models using temperature and flux data, given a relatively short period of data subject to errors.