They don’t have to be periodic for the peak of the response to lead the peak of the signal in the derivative case. See the green line in the first figure example here http://landshape.org/enm/phase-shift-in-spencers-data/.

All that is needed is for the max rate of increase to precede the max magnitude – easily done.
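A minimal numerical sketch of that point (the pulse shape here is my own illustrative choice, not from the linked figure): for a smooth non-periodic pulse, the maximum rate of increase occurs before the maximum of the signal itself, so any response weighted toward dx/dt peaks earlier than x.

```python
import numpy as np

# Non-periodic Gaussian pulse: its derivative peaks before the pulse does.
t = np.linspace(0.0, 10.0, 1001)
x = np.exp(-(t - 5.0) ** 2)      # signal, peaking at t = 5
dxdt = np.gradient(x, t)         # numerical derivative

t_peak_x = t[np.argmax(x)]       # where the signal peaks
t_peak_dx = t[np.argmax(dxdt)]   # where the rate of increase peaks

print(t_peak_dx < t_peak_x)      # True: the derivative leads the signal
```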

So non-periodic signals, that is, any signals which carry information (e.g. the effect of solar radiance), exhibit a lag when going from the input to the output.

Good link Tom.

One thing to point out though is the following. This is provable as a theorem:

If X is the input and Y is the output, Y depends linearly on X, the underlying system is passive (aka “stable”, i.e., it does not require a power supply to operate), and the relationship between X and Y is causal, then, apart from spectral widening due to a finite window and noise, the inferred impulse response function satisfies h(tau) = 0 for tau < 0.

For a broad range of systems, tau can be identified with the physical delay.
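A hypothetical numerical check of that theorem (my own construction, not from the thread): drive a causal, passive linear system with white noise and infer the impulse response by cross-correlation; up to estimation noise, the inferred h(tau) is zero for tau < 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)                # white-noise input X
h = np.exp(-np.arange(50) / 10.0)         # causal, decaying (passive) h(tau)
y = np.convolve(x, h)[:n]                 # output Y depends causally on X

def xcorr(a, b, k):
    """Estimate E[a(t) * b(t + k)]."""
    if k >= 0:
        return float(np.mean(a[: n - k] * b[k:])) if k else float(np.mean(a * b))
    return float(np.mean(a[-k:] * b[: n + k]))

# For a white input, xcorr(x, y, k) estimates h(k) directly.
print(xcorr(x, y, 0))    # near h(0) = 1
print(xcorr(x, y, -5))   # near 0: no response before the cause
```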

It turns out that if you have a power supply (and the Sun certainly acts as one for the climate), you can get negative delays. These negative delays can either be a sign of net amplification in the system (feedback greater than one, so a stabilizing nonlinearity is required) or they can arise in a passive, nonlinear system.

More from the web page

Thus the effective forcing function at any given instant does not reflect the future of x, it represents the current x and the current dx/dt. It just so happens that if the sinusoidal wave pattern continues unchanged, the value of x will subsequently progress through the phase that was “predicted” by the combination of the previous x and dx/dt signals, making it appear as though the output predicted the input. However, if the x signal abruptly changes the pattern at some instant, the change will not be foreseen by the output. Any such change will only reach the output after it has appeared at the input and worked its way through the transfer function. One way of thinking about this is to remember that the basic transfer function is directionally symmetrical, and the “output signal” y(t) could just as well be regarded as the input signal, driving the “response” of x(t) and its derivative.
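The quoted point can be checked directly in simulation (the coefficients below are illustrative, not taken from the web page): feed the lead-lag system a0*y + a1*dy/dt = b0*x + b1*dx/dt two inputs that agree up to a cutoff time t0; the outputs are identical up to t0, so the phase lead never requires knowledge of the future.

```python
import numpy as np

a0, a1, b0, b1 = 1.0, 0.2, 1.0, 1.0     # b1/b0 > a1/a0, so a phase lead
dt = 0.001
t = np.arange(0.0, 20.0, dt)
t0 = 10.0

x1 = np.sin(t)                           # steady sinusoid
x2 = np.where(t < t0, np.sin(t), 0.0)    # same sinusoid, cut off abruptly at t0

def respond(x):
    dx = np.gradient(x, dt)              # forcing uses both x and dx/dt
    y = np.zeros_like(x)
    for i in range(1, len(x)):           # forward-Euler integration
        y[i] = y[i-1] + dt * (b0 * x[i-1] + b1 * dx[i-1] - a0 * y[i-1]) / a1
    return y

y1, y2 = respond(x1), respond(x2)
mask = t < t0 - 2 * dt                   # exclude the derivative's edge stencil
print(np.allclose(y1[mask], y2[mask]))   # True: no anticipation of the cutoff
```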

Those of you who were puzzled by the talk of “negative time lags” in the discussion about transfer functions above can find an accessible discussion at the web page whose URL is above. The web page discusses transfer functions and the effect of the phase response. Contrary to some impressions, these “negative time lags” have nothing to do with predicting the future or with showing that the cloud feedback observations are non-causal.

The pertinent passage from the web page is:

The ratios a1/a0 and b1/b0 are often called, respectively, the lag and lead time constants of the transfer function, so the “time lag” of the response to a steady ramp input equals the lag time constant minus the lead time constant. Notice that it is perfectly possible for the lead time constant to be greater than the lag time constant, in which case the “time lag” of the transfer function is negative. In general, for any frequency input (not just linear ramps), the phase lag is negative if b1/b0 exceeds a1/a0.

Despite the appearance, this does not imply that the transfer function somehow reads the future, nor that the input signal is traveling backwards in time. The reason the output appears to anticipate the input is simply that the forcing function (the right hand side of the original transfer function) contains not only the input signal x(t) but also its derivative dx/dt (assuming b1 is non-zero), whose phase is π/2 ahead. (Recall that the derivative of the sine is the cosine.) Hence a linear combination of x and its derivative yields a net forcing function with an advanced phase.
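The sign rule in the quoted passage is easy to verify from the frequency response H(iw) = (b0 + i·w·b1) / (a0 + i·w·a1); the coefficient values below are illustrative only.

```python
import numpy as np

def phase_deg(w, a0, a1, b0, b1):
    """Phase of the lead-lag transfer function at angular frequency w."""
    H = (b0 + 1j * w * b1) / (a0 + 1j * w * a1)
    return float(np.degrees(np.angle(H)))

# b1/b0 = 1.0 > a1/a0 = 0.2  ->  positive phase (a lead, i.e. "negative lag")
lead = phase_deg(1.0, 1.0, 0.2, 1.0, 1.0)
# b1/b0 = 0.2 < a1/a0 = 1.0  ->  negative phase (an ordinary lag)
lag = phase_deg(1.0, 1.0, 1.0, 1.0, 0.2)

print(lead > 0, lag < 0)   # True True
```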

Those graphs from Bart are very interesting. This is very much complementary to how I have been investigating this.

Here is an overlay of Spencer’s graphic showing satellite data vs model results, on top of the lag response of Spencer’s simple model. (Here I mean using random inputs for rad and non-rad, not just the basic equation form).

http://tinypic.com/r/30sfupc/7

This was just trial and error to get the nearest fit, I’m not suggesting this is a result that shows what f/b really is.

What is relevant to Bart’s work is that this plot changes little as long as the feedback/depth ratio stays the same. This in fact represents the time constant Cp/lambda.

45/9.2 = 4.891304

That is uncannily close to Bart’s result by a completely different approach.

Having a hook on the time constant of system response will be a great help in getting to lambda.
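A sketch of the invariance claim above, using a Spencer-style single-slab model Cp·dT/dt = S + N − λ·T with random radiative (S) and non-radiative (N) forcing and observed flux anomaly R = λ·T − S (the forcing setup and the doubled parameter pair are my own illustrative choices): doubling both depth and feedback leaves the ratio Cp/λ, and hence the lag-correlation shape, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 50_000, 0.01
S = rng.standard_normal(n)              # random radiative forcing
N = rng.standard_normal(n)              # random non-radiative forcing

def simulate(Cp, lam):
    T = np.zeros(n)
    for i in range(1, n):               # forward-Euler integration
        T[i] = T[i-1] + dt * (S[i-1] + N[i-1] - lam * T[i-1]) / Cp
    return T, lam * T - S               # temperature and observed flux anomaly

def lag_corr(T, R, k):
    if k >= 0:
        return float(np.corrcoef(T[: n - k] if k else T, R[k:])[0, 1])
    return float(np.corrcoef(T[-k:], R[: n + k])[0, 1])

T1, R1 = simulate(45.0, 9.2)            # time constant 45/9.2 ~ 4.89
T2, R2 = simulate(90.0, 18.4)           # depth and feedback doubled, same ratio
curve1 = [lag_corr(T1, R1, k) for k in range(-5, 6)]
curve2 = [lag_corr(T2, R2, k) for k in range(-5, 6)]
print(np.allclose(curve1, curve2))      # same lag-correlation shape
```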

The loop would have to be quite complicated to take advantage of the phase lead near 0.3 years^-1. There appears to be non-minimum phase behavior in this area. So, maybe the bandwidth is substantially less than this.

Nobody appears to care, but I thought I’d keep any who do apprised.

Yup. The data from such generators are pseudo-random and often have issues. Cleve Moler has a pretty long writeup on the method MATLAB uses to generate the Normally distributed values from the randn() function, though it’s been a while since I read the article (search at The MathWorks, then wade through a bazillion hits). It is a good generator, but not perfect; a perfect one is impossible.

Mark

I don’t know R code. To generate the artificial data, hopefully this post and the two following it are not too hard to decipher.

Bender,

I posted R code here, along with graphs – it’s similar to the code Roman posted above. Just add dR=rev(dR); temp=rev(temp) after they are defined to get the reverse effect. It’s easier to see what is happening if you reduce Nsamp from 8192 to, say, 1024.
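For anyone who doesn’t read R, here is a hypothetical Python analogue of the reversal trick (the lag-3 relationship and circular shift are my own toy setup, not the posted code): reversing both series flips an apparent lag into an apparent lead of the same size.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.standard_normal(n)
y = np.roll(x, 3)            # y lags x by 3 samples (circular shift, for exactness)

def best_lag(a, b, max_k=10):
    """Lag k maximizing the circular cross-correlation E[a(t) * b(t + k)]."""
    ks = list(range(-max_k, max_k + 1))
    c = [np.dot(a, np.roll(b, -k)) for k in ks]
    return ks[int(np.argmax(c))]

print(best_lag(x, y))               # 3: y lags x
print(best_lag(x[::-1], y[::-1]))   # -3: after reversal, y leads x
```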