CERES data, as retrieved in its original state (see here), provides all-sky and clear-sky time series. Dessler 2010 made the curious decision to combine ERA clear-sky with CERES all-sky to get a CLD forcing series. This obviously invites the question of what happens if CERES clear-sky is used in combination with CERES all-sky to calculate the CLD forcing series. One would have thought that this is the sort of question that any objective peer reviewer would ask almost immediately. Unfortunately, as we’ve seen, climate science articles are too often reviewed by pals. Nor, to my knowledge, has the question been raised in the climate community.

The decision was touched on in Dessler 2010 as follows:

Previous work has shown that ΔR_clear-sky can be calculated accurately given water vapor and temperature distributions (20: Dessler et al., JGR 2008; 21: Moy et al., JGR 2010). And, given suggestions of biases in measured clear-sky fluxes (22: Sohn and Bennartz, JGR 2008), I chose to use the reanalysis fluxes here.

While peer reviewers at Science were unequal to the question, the issue was raised a month ago by Troy_CA in an excellent post at Lucia’s. Having exactly replicated Dessler’s regression results and Figure 2a, I re-visited this issue by repeating the regression in Dessler 2010 style, making the plausible variation of using CERES clear-sky in combination with CERES all-sky (together with the widely used HadCRUT3 series), and got surprising results.

The supposed relationship between CLD forcing and temperature is reversed: the slope is -0.96 W/m2/K rather than +0.54 (and with somewhat higher, though still low, significance).

Here are the exact details of the calculation. Troy provided the following recipe for CERES data (isn’t it absurd that blog posts on “skeptic” blogs provide better replication information than “peer reviewed” articles in academic literature):

The CERES data I’ll be using in this post is available for download here.

http://ceres.larc.nasa.gov/order_data.php

I’ve chosen SSF1deg to match up with the Dessler paper, then selected global mean, monthly, Terra, full time range, with the TOA fluxes. I’ll also note that the site automatically downloads version 2.6 now instead of 2.5.

I repeated the exercise (data is now available to end 2010) and uploaded the data set (in ncdf format) to http://www.climateaudit.info/data/ceres. The following operations retrieve data for analysis and plotting:

library(ncdf)
download.file("http://www.climateaudit.info/data/ceres/CERES_SSF1deg-Month-lite_Terra_Ed2.6_Subset_200003-201012.nc", "temp.nc", mode="wb")
ceres = open.ncdf("temp.nc")
net = ceres.all.data <- ts(get.var.ncdf(ceres, "gtoa_net_all_mon"), start=c(2000,3), freq=12)
tsp(net)  # check the time range
clr <- ts(get.var.ncdf(ceres, "gtoa_net_clr_mon"), start=c(2000,3), freq=12)
cld = net - clr  # CLD forcing: all-sky minus clear-sky
month = window(ts(rep(1:12, 11), start=2000, freq=12), 2000.16, tsp(net)[2])
anom = function(x, Month=month) {  # function to take anomalies by month
  y = factor(Month); norm = tapply(x, factor(Month), mean, na.rm=TRUE)
  levels(y) = norm; y = as.numeric(as.character(y)); y = x - y
  return(y)
}
cld = anom(cld)
source("http://dl.dropbox.com/u/9160367/Climate/ClimateTimeSeries.R")  # Troy's functions
had <- window(getHadCRUt(), start=c(2000,3), end=2010.99)
had = had - mean(had)
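As a quick sanity check on the anom() function, here is a minimal, self-contained sketch (synthetic data, not the CERES series; the function is repeated so the snippet stands alone): a series that is a pure, repeating seasonal cycle should anomalize to exactly zero.

```r
# anom() repeated from the script above so this snippet stands alone
anom = function(x, Month) {
  y = factor(Month); norm = tapply(x, factor(Month), mean, na.rm=TRUE)
  levels(y) = norm; y = as.numeric(as.character(y)); y = x - y
  return(y)
}
Month = rep(1:12, 2)              # two years of monthly data
x = sin(2*pi*Month/12)            # identical seasonal cycle both years
a = anom(x, Month)
max(abs(a)) < 1e-12               # TRUE: the monthly climatology is removed
```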

Dessler’s regression for the conclusion in his Abstract can be simply replicated for CERES CLD data and HadCRU as follows:

fm=lm(cld~had)

summary(fm)

This yields a slope of -0.96 ± 0.98 W/m2/K, REVERSING the sign of the result reported in Dessler 2010 using a combination of CERES all-sky and ERA clear-sky (0.54 ± 0.94 W/m2/K). The r^2 remains very low, though higher than that reported in Dessler 2010.

#had -9.556e-01 4.908e-01 -1.947 0.0538 .

#Residual standard error: 0.5605 on 128 degrees of freedom

#Multiple R-squared: 0.02876, Adjusted R-squared: 0.02117

#F-statistic: 3.79 on 1 and 128 DF, p-value: 0.05376
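As a cross-check, the ±0.98 interval and the marginal significance can be reproduced from the printed coefficients alone (a sketch using the slope and standard error from the summary(fm) output above, not the raw data):

```r
# slope and standard error as printed by summary(fm) above
slope <- -9.556e-01
se    <-  4.908e-01
round(slope + c(-2, 2)*se, 2)            # 2-sigma interval: -1.94 to 0.03, includes zero
round(2*pt(-abs(slope/se), df=128), 3)   # two-sided p-value: about 0.054
```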

The scatter plot corresponding to Dessler 2010 Figure 2a is shown below. While I feel uneasy using the term “confidence intervals” with such weak relationships, the 2-sigma confidence interval brackets the -1 to -1.5 W/m2/K range that Dessler 2010 sought to exclude.

Figure 1. Re-doing Dessler 2010 Figure 2.

For comparison, here is Dessler’s original figure.

The questions are obvious.

PS. Just to confirm that both flux series are correctly oriented, here is a plot of the Dessler CLD series versus the CERES CLD series (the correlation is 0.73):

## 727 Comments

It pains me to see a straight line drawn through that scatter plot. 🙂

Text books sometimes say that both regression lines should be plotted, to see whether developing a regression model is worthwhile:

(CIs with the intercept, still a bit confused about leaving it out; I don’t really understand anomalies 🙂 )
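A quick sketch of the textbook point (synthetic, weakly correlated data, with an arbitrary slope chosen purely for illustration): when r^2 is tiny, the y-on-x and x-on-y regression lines diverge wildly, since their slopes, both expressed in y-vs-x form, differ by exactly a factor of 1/r^2.

```r
set.seed(1)
x <- rnorm(130)
y <- 0.1*x + rnorm(130)            # weak underlying relationship
b_yx <- coef(lm(y ~ x))[2]         # slope of the y-on-x line
b_xy <- 1/coef(lm(x ~ y))[2]       # x-on-y line re-expressed as a y-vs-x slope
r2   <- cor(x, y)^2
b_xy / b_yx                        # equals 1/r2: huge when r2 is near zero
```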

Yes that was my first impression of the scatter plots also.

Ditto. With correlation coefficients this low one really is picking gnat sh*t out of pepper. Is the thinking here that only an r^2 of exactly zero means a result is not significant? A value of 0.02 is remarkably close to that.

I suspect that a lot of shotgun manufacturers would pride themselves on the quality of this distribution.

You still gotta to aim in the general direction! It looks like the rabbit will get away.

Is this another example of “Hide the decline”?

(Note: you are missing a link in your first sentence at “see here”.)

Funny. But not quite accurate. They never looked for a decline. It took Steve to find it. The hiding takes place in the next phase. In the usual forums.

Welcome back, Cotter.

couldn’t resist: http://www.youtube.com/watch?v=GGlY3ubGzUY

“who’d have thought they’d lead ya

back here where we need ya”

it is always nice to see Bender.

My pool is a mess since he disappeared.

Steve–

Did you mean to provide a link in your first line (“see here”)? Or is it the same link you provide in the box below that?

Is this with, or without lag?

Speaking of Troy, he and I are co-authoring something about this feedback issue that improves the correlations somewhat. Obviously people will want to know more but I don’t want to jeopardize our chances of publication.

I’ll give everyone a hint: clouds and water vapor aren’t quantum entangled to the sea surface. 🙂

Steve: No lags (as shown in the script). Apples-to-apples to the Dessler calculation.

Thanks Steve. I figured it had to be, but couldn’t tell (I hadn’t understood quite what the code is doing yet, just kinda glanced).

So the relationship one gets for the supposed feedback slopes is ridiculously dependent on the arbitrary choices one makes for what datasets to use.

Forgive a CA cliche, but it’s cloud’s illusions I recall, we really don’t know clouds at all 😉

Even that hardly does this justice. To choose all ERA or all CERES would presumably make some sense. To combine six of one and half a dozen of the other should, as Steve says, have been picked up at once by any serious reviewer. But that’s what you never get in climate science peer review. It’s either the lapdog that lost its teeth or the rottweiler who won’t let go until the intruder is dead. It’s pitiful.

due diligence means I get to pick the data I want and the other guy has to look at the problem six ways from Sunday. Witness Steig with Ryan.

At least give credit where it is due, this time to Joni Mitchell.

Why is it that ANY regression slope with a confidence interval LARGER than the estimated value is deemed to be in any way significant? Slopes of 0.54 +/- 0.94 or, for that matter, -0.96 +/- 0.98 are not statistically distinguishable from zero, though the latter might be marginally significant if one is slightly less demanding, i.e. with a significance level slightly below 95%. (In other words, Steve’s result is “marginally significantly different from zero at somewhat less than 95% confidence”, while Dessler’s result is definitely not significant in statistical terms unless you reduce the confidence level to embarrassingly low levels.) No wonder R2 is nearly zero, meaning that nearly zero percent of the observed variance in the dependent variable is “explained” by the linear regression.

Such findings SHOULD be published, of course (all positive, negative or neutral results should be published to reduce “publication bias”), but the conclusion should be “no significant relation found” in Dessler’s case, and “significant at 90% (or whatever) confidence level” in Steve’s case.

Looking at the two figures, it would appear the major difference is a couple of outliers in the UL quadrant and several in the LR quadrant of your figure. Can you check to see why these are so much different in the two data sets?

One is just a subset of observations, extended. The other is a reanalysis.

Absolutely hilarious!

I’m looking forward to Dessler’s video explanation.

After all, Dessler is a ‘Google Science Communication Fellow’ (which must be a tax deduction for Schmidt et al).

Should be easy enough to submit as a comment to Science.

Steve this is very interesting but I’m confused. In your fig. 1 above, which CERES series do you use? Just “all sky” or just “clear sky?” And what did Dessler 2010 do to “combine” these? Average them?

Steve- the calculations are specified in the code. cld= net-clr

Does “combine” mean that any CLD series is “all sky” minus “clear sky”? And I guess ERA is a different data source from CERES? Can anyone confirm whether this is correct? I would be grateful.

One thing that will account for some of the difference (although a smaller amount) is the adjustments to CRF to get to delta-R (since the clear-sky fluxes change more due to temperature, water vapor, and surface albedo, which correlate with temperatures). So using the radiative kernels mentioned in Dessler10 to remove the bias will make the results slightly more positive, although not a whole lot.

You can get the differing flux contributions I calculated between clear and all-sky for the various kernels from this post (GlobalFluxContributions2000-2010.txt):

http://troyca.wordpress.com/2011/08/31/radiative-kernels-and-more-on-the-surface-vs-atmospheric-temperature-feedback-problem/

And then make the adjustments.

Also, one reason I looked at for the difference between CERES vs. ERA clear-sky fluxes is that of the measured solar insolation. There was a discrepancy as I recall, where the difference in measured solar insolation actually caused a positive bias when subtracting clear-sky (from ERA, calculated based on its solar insolation) from all-sky (a different value for solar insolation). However, it could not account for the total discrepancy.

So, we have the benefits of peer review on one side and those of open code and data on the other. Which is going to win out? Given the quality of peer review in climate science? In the end all points of the spectrum will benefit but for now, thanks Steve, Troy and the rest of the technical climate bloggers. As it says, a little leaven leavens the lump.

What happens if you combine ERA clear-sky and ERA all-sky?

It’s the sort of thing that happens in a rush job.

Comments to any journal get buried, so that is a rather time intensive process for little gain. I encourage Steve to generate a new piece of independent scholarship to address issues in the recent papers, e.g. D10, D11, SB2011, etc.

There is an advantage to being late to the party, as you get to see which cars are parked outside.

Good one! You also get to find who’s in bed with whom in the back seat if you are game to peep.

Just posted this on realclimate.

Looks like eric and martin vermeer and gavin agree that looking at alternative datasets is a normal course of due diligence

Eric,

“[Response: Mosher: There has never been any objection to the idea of ‘due diligence’ at RealClimate. The objection has been to the laughable and arrogant claim — repeated ad nauseum by you — that the idea of ‘diligence’ hasn’t occurred to anyone before, and to the offensive and unsubstantiated accusation that the mainstream scientific community have placed scientific diligence secondary to a perceived political agenda. –eric]”

Dr. Steig a few points.

If you read what I wrote carefully you will see that I’m not claiming that anyone at Real Climate “objects” to the “idea” of due diligence. Here is what I wrote:

“Thank you. Over the course of the past four years quite a number of us have suggested just this type of due-diligence thing repeatedly. These suggestions which seem utterly normal to anyone who has had to work with messy datasets, conflicting datasets, and divergent models, have been routinely met with cat calls, insults, and challenges to “do your own damn science.”

I don’t see any references there to Real Climate. What I am pointing out is simply this. In the past when people asked if certain due diligence was performed, those questions were met with the kind of responses I mentioned. You’ll note that I don’t call those responses “objections” to the idea of due diligence. They are something else. They are not objections to the idea; they are objections to the person who raises the issue. Those are two entirely different things. Of course, that behavior is often taken as an objection to the idea itself. Hence, it’s good to see that clarified. The objection is to certain people raising the issue in a way that you don’t approve of or that makes you uncomfortable.

Second, if you read carefully you will see that nowhere do I make the claim to be the “originator of this idea”. In fact, in all my writing I’ve given deference to the people who taught me. I have expressed surprise when I have, on occasion, found little documentary evidence of due diligence. By that I mean no documentary evidence that due diligence has been performed. It may have been performed, but in certain cases which interest me, I have on occasion seen no documentary evidence that it occurred. I am by no means the first person to notice this. I am glad that we can both agree that testing multiple datasets is one of the ordinary things you do as a part of due diligence. It’s a good day when we can agree on that. More on that later. Rest assured that both you and martin and gavin will get full credit for the idea of testing with alternative datasets.

I will assume that you agree with Martin that looking at alternative datasets is normal due diligence and let him handle any arguments you have with that.

Third, like you I do not think that general indictments of an entire community of scientists are well founded or useful. It’s rather like calling all skeptics Oil Shills. Some clearly are; others, well, not so clear. Moreover, my main focus has been on a few, very few, isolated cases. In those isolated cases my focus has been exclusively on the sociological and institutional aspects, not the political aspects. Let’s suppose I had somebody who had challenged an engineer to take a matlab class from him. I would never look to the political aspects of this. I would look at the institutional and sociological aspects to explain the phenomenon. In fact, you will find that is a common theme for me going back 4 years, when I first noticed this rather odd dynamic. Frankly, I find politics and arguments about people’s politics boring.

So peace dude. It’s a good day when we can agree on something.

Rest assured that both you and martin and gavin will get full credit for the idea of testing with alternative datasets.

tee hee hee

Yes, you can blame them for the stuff that Steve does. After all it would be arrogant and laughable for anyone at CA to claim credit for the idea of due diligence. RC endorses the idea of due diligence. They argue that looking at different datasets ( can anyone say no BCPs!!, i knew you could) is basic and obvious. All they object to is people like me pretending that I came up with the idea ( ha I stole it from Steve Mc). I would suggest that anyone who looks at alternative datasets should invoke the blessing of martin and gavin and eric.

Peace to them all.

No, I think you’re missing their point. Everyone agrees that you should look at different data sets. This is basic and obvious. What’s novel and unique is that they advocate choosing the datasets that produce the lower R^2 value.

you must be new to moshpit sarcasm. sorry.

I was being sarcastic back. I think I need to start using a sarc tag!

To be literal: if they are claiming that Dessler looked at different data sets, then why would he have chosen the one with the lower R^2 value? Wouldn’t you choose the one with the highest possible value (even if it’s still pathetically low)? If Dessler did look at different data sets, then it raises the possibility that he cherry picked the data sets that best made his argument — the very thing he accused Spencer of doing. /literal

They aren’t different data sets. One is a measured set of clear-sky fragments which has been simple-mindedly extended to make up a pretend atmosphere. The other is a reanalysis, which makes the appropriate corrections.

Through the use of a model along with the assumptions it’s built on.

ah tallbloke, you like reanalysis data when it fits your purpose! You cannot have it both ways. You cannot, without close analysis, accept one part of this data and reject another. And if you accept the data, you accept the physics (yes, radiative physics) used in the models.

Dessler’s choice may well be defensible. Since he cites another study, there is at least a paper trail to follow and in the end some data to look at again. It’s all good.

And if this is inconclusive, then there is more data to collect. That’s good too.

Nothing here, however, will change the laws of energy balance and radiation. More CO2 warms the planet. Just say that and you’ll feel better, and you’ll be able to get about the business of quantifying the role the sun plays, however small.

Mosh, you say:

I love how you take what we have spent years trying to decide (whether more CO2 warms the planet, or whether it is basically neutralized by homeostatic mechanisms and climate feedbacks) and merely assert it, as though having the Mosher stamp of approval makes a damn bit of difference … and as to whether other factors can offset every bit of theoretical CO2 warming, see the temperature history of the last 15 years. If CO2 warmed us during that time, where is the evidence?

So my answer is no, Mosh. Repeating unverified claims might make you feel better, but I don’t feel better at all when I do that. Your claim is like saying “turning on the oven will warm the kitchen”, which is true if and only if my house doesn’t have a thermostat, which will completely negate the effect of turning on the oven. Saying “more CO2 warms the planet” is just as incomplete a claim as the idea that “turning on the oven heats the house”. It may or may not, depending on the circumstances.

Since you have not shown that the earth doesn’t have a thermostat, and since my work has provided various lines of evidence showing that the earth very likely does have a thermostat, your claim about CO2 is way, way premature. Do your homework first, and then make your claims.

Shakespeare is reputed to have said, “There are more homeostatic mechanisms in heaven and earth, Horatio, than are dreamt of in your philosophy”. As a friend of mine once remarked, just say that and you’ll feel better, Mosh, and you’ll be able to get about your business …

w.

Willis and Mosh, it is an editorial policy of this blog that the “big question” of CO2 not be debated in short paragraphs on OT threads. Otherwise every thread quickly looks the same.

Steve, my bad, thanks for the reminder.

w.

Noted.

No, Martin is making a point about Spencer. Spencer used only Hadcrut.

martin is saying two things:

1. he has no trouble with spencer using hadcrut.

2. checking other datasets is obvious due diligence.

I am focusing on the second point, trying to find grounds for agreement. It’s not that hard. It’s obvious standard procedure, when you have multiple datasets measuring the “same” thing or closely similar things, to do some basic tests.

Namely: does my choice of dataset impact the results. Even when one has a clear case that a dataset is superior I still think it makes good common sense to check.

So my point, my only point, is that finally we have a clear statement on this issue of checking other datasets to ascertain the uncertainty due to dataset selection.

In general, I think that both Spencer and Dessler should examine the relevant datasets, run their analysis on them, report the results, and then either argue the merits of selecting one dataset over another or show the insensitivity of the results to the dataset.

I suspect that people will now disagree with this clear principle.

Surely the finding here is that there *is* no result. It seems that whatever datasets are used, the adjusted correlation coefficient is achingly close to zero. It’s a little bizarre that anyone would seek to justify any kind of conclusion from these data, except that there isn’t a significant correlation.

“Surely the finding here is that there *is* no result.”

Yes, pretty much. Here’s what Dessler says:

“Obviously, the correlation between ΔR_cloud and ΔT_s is weak (r2 = 2%), meaning that factors other than T_s are important in regulating ΔR_cloud. An example is the Madden-Julian Oscillation (7), which has a strong impact on ΔR_cloud but no effect on ΔT_s. This does not mean that ΔT_s exerts no control on ΔR_cloud, but rather that the influence is hard to quantify because of the influence of other factors. As a result, it may require several more decades of data to significantly reduce the uncertainty in the inferred relationship.”

And were we inferring a negative or positive feedback?

Dessler says positive, but:

“Given the uncertainty, the possibility of a small negative feedback cannot be excluded.”

So he’s really saying “I inferred a positive feedback, but found no statistical evidence for it”.

The rest of what he says is just wishful thinking (“This does not mean that ΔTs exerts no control on ΔR_cloud, but rather that the influence is hard to quantify because of the influence of other factors. As a result, it may require several more decades of data to significantly reduce the uncertainty in the inferred relationship.”)

The lack of a statistical relationship invites a range of other interpretations that D should perhaps have speculated on.

No, in Dessler’s argument nothing depends on it being positive. He made an estimate. Numbers have signs. His best estimate here turned out to be positive.

It isn’t clear what part of my comment you are saying “No” to.

No, he did find a statistical relationship?

No, the rest wasn’t wishful thinking?

No, there was nothing else in this result he should have reflected on?

In fact his final reflection was “My analysis suggests that the short-term cloud feedback is likely positive.”

I assume this is something you would likely agree with?

I was referring to your first sentence. Dessler found a regression coefficient that was positive, but had an uncertainty range that included zero. He noted this – it has no significance for his argument.

This has run its course, but I’ll just say that the lack of significance has much significance for his argument – namely that “short-term cloud feedback is likely positive”.

Robin Melville was right to call D10 out on this, and I’m surprised you tried to (rather persistently) argue it was all OK.

“I am focusing on the second point, trying to find grounds for agreement. It’s not that hard. It’s obvious standard procedure, when you have multiple datasets measuring the ‘same’ thing or closely similar things, to do some basic tests. Namely: does my choice of dataset impact the results. Even when one has a clear case that a dataset is superior, I still think it makes good common sense to check.”

Not necessarily the appropriate approach. When you have a clearly superior data-set/test, all you do by running the inferior test is to increase your uncertainty about the truth of the matter. Bayes looms large here.

I’m still wondering if this was the impetus for Scafetta’s rejection of RC’s request for code, suggesting they take a course in wavelets.

Steven Mosher (Sep 8 14:27) to Eric Steig:

“And the fruit of righteousness is sown in peace of them that make peace.”

I can’t yet see this irenic contribution over at RC. No doubt the moderators are struggling manfully, wondering how to deal with it. But I read some of the earlier stuff. Respect.

It might get more interesting. we’ll see

RC usually only posts a few articles per month. Their MO in the case of an article that “goes wrong” is to quickly post another to draw attention away. This is no exception…

Mosher,

“the offensive and unsubstantiated accusation that the mainstream scientific community have placed scientific diligence secondary to a perceived political agenda by the mainstream scientific community”

Don’t know how you didn’t take that bait. You must be a clever fellow.

I found it odd that he would accuse me of saying anything about political agendas. Perhaps he is sensitive about having one. Perhaps he confused me with someone else. Perhaps he is confused. Perhaps someone else wrote it and signed eric’s name. Perhaps he didn’t mean it. Perhaps he didn’t think. Whoever wrote it is carrying a bag of shit. I feel no compulsion to take that bag or set it on fire. So I left him holding it. Sometimes the sight of someone left holding the bag is instructive. At least for others.

Threading can make it difficult to follow a sequence of comments, at times.

This comment is a follow-up to a comment that Steven Mosher originally submitted to RealClimate. He cross-posted it to CA, above, where it is timestamped Sep 8, 2011 at 2:27 PM. RC’s moderators have now allowed Mosher’s comment; it is in position #104 at this writing. Mosher begins by quoting an earlier response by Eric Steig.

Here is Eric Steig’s inline response to what Mosher wrote.

The “matlab” exchange is here.

Steve Mc:

The linked code pointed to a subroutine that was used in Steig’s calculation, but the subroutine did not constitute ALL the code. Nor, as of Feb 4, 2009, was Steig’s data fully available. With the benefit of hindsight, I’m wondering what precisely within the Jeff Id comment Steig considered to be “snide and inaccurate”? Did Steig take umbrage at the suggestion his reconstruction was “robust”?

Maybe Eric needs to be challenged to take a congeniality course.

I would rather not personalize it to eric. You’ll notice over the course of time a pattern to these discussions. In the end it comes down to the way we said things or the way we asked for things: we ask for them in public, we speak snidely or sarcastically or impertinently, we don’t ask nicely.

If they want to provide us with a standard form to request stuff that would work for me.

If I have to guess, I’d guess it was the use of the word “robust”.

So, let’s grant Dr. Steig his point. Jeff was snide and sarcastic. evil horrid jeff.

There are two ways to respond to that

1. heap on kindness

2. Give jeff what for between the eyes.

Moshpit has NO PROBLEM with the good Dr. hitting Jeff back. I have no problem with Dr. Steig being rude to Jeff for no cause whatsoever. None. Not an issue. I just wanted to point out what seemed obvious. I never make political issues about this. Dr. Steig can be upset with me for many things, but making political statements ain’t one of my sins.

Also, the funny thing is: why would you “suggest” that an engineer take your matlab course? It’s not the rudeness on either party’s part. It’s the stupidity of that particular comeback. Oh ya, oh ya, rude boy, well maybe you should take my matlab course, and I’ll school YOU. pffft. Assume Jeff’s sarcasm is there. Fine. You don’t hit back by suggesting that he take your matlab class. Duh. duh.. McFly!

If you are going to be a jerk, be a creative jerk. That’s my motto. Why is it they miss the real criticism? Why did he think my criticisms are political, when they really are about the paucity of wit? dunno.

I think it’s all too common for all skeptics to be bunched together and tarred with the same brush… that all skeptics believe “x” or all skeptics are politically oriented toward “y”.

It seems to be all AGWers that do this.

😉

ha.

Yes. Here is something I suggest. I suggest that skeptics and lukewarmers all try to find something, anything they can agree upon with AGWers.

When people react to you “categorically”, that is, with a stereotypical framework, nothing works better to unsettle them, to open their minds, to snap them out of it, than when an opponent finds a place of agreement and doesn’t react how they expect. This can lead to

1. reciprocity

2. a better dialogue

3. some huge Fail on their part ( witness eric)

4. more careful thinking on their part.

So try it. Find the agreement and sit back and watch. fascinating

In my defense, there was no intended sarcasm at that time although I can see how it could be taken that way. He later wrote he didn’t want to release his code because it was not clean and in separate files which he didn’t feel comfortable with releasing.

In climate science, peer review is carried out by a coterie of pals … very sad!

Steve,

I’m shocked that you would “do your own damn science”

shocked.

You know, over at RC they were discussing the decision to try different datasets.

Martin Vermeer writes:

“I see absolutely nothing wrong with SB11’s use of HadCRUT temperature data for their regressions.”

But Russ, doesn’t it make you wonder why they chose it? The paper doesn’t say. Where is your curiosity? 🙂

Dessler clearly wonders: “… they plotted … the particular observational data set that provided maximum support for their hypothesis.”

BTW trying alternatives like this gives you one more handle on the real uncertainty of the observational curve, in addition to the formal sigmas. A due-diligence thing, and rather obvious.

Comment by Martin Vermeer — 7 Sep 2011 @ 9:50 AM

@ Martin Vermeer,

####

So there you have it: trying different datasets and documenting what you find is rather obvious and a due diligence thing. I wonder if Dessler tried what you tried?

due diligence thing. rather obvious.

Mosh, it’s hard to work out who’s saying what here. But trying alternative datasets and documenting what one finds is surely of fundamental importance, especially as all the data become increasingly open. It’s as if the old way of peer review and publishing selective pieces of work in respected publications has become an ornate, baroque dance when what’s needed is the speed and teamwork of ice hockey. Even the odd brawl would seem a price worth paying. The blogosphere does so much of this best.

I think what Mosh is trying to point out is that there is a real difference between “due diligence” and fishing for the best data set prior to “starting” your investigation.

Due diligence would be picking the best data set based on a set of criteria established before doing any tests on the data set.

We see this all the time in the medical field. There are multiple pools of retrospective patient data. You have to pick the pool of patients prior to doing your hypothesis testing; otherwise you will be tempted to pick the 1 of the 7 patient pools that happens to make the case you are trying to make.

Absolutely; in medicine and pharma, double blind tests are de rigueur. With satellite data on cloudiness, SST and the rest, either the available time periods are too short or there are not enough alternative datasets to make this sort of thing possible. But showing results for all alternatives is the minimum for due diligence. I assume we’re all agreeing on that.

If there are multiple data sets available then you either need to show the results for all data sets (separate, combined, etc.), or you need to come up with a selection criterion beforehand to decide which data set you are going to use, and then only use that data set.

You CAN NOT pick the data set based on which gives the “best” results after the fact.

Given wide confidence intervals, the chance of making a type I error approaches 100% if you data-set shop, i.e. seeing a difference when there really is no difference (just noise).
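The inflation being described here is easy to demonstrate. Below is a toy simulation (entirely invented numbers, not from any of the studies discussed): seven “patient pools” of pure noise, where the true effect is zero. Testing one pre-chosen pool holds the false-positive rate near the nominal level; keeping the best of seven inflates it badly.

```python
import random
import statistics

# Illustrative sketch only: each "pool" is pure noise, so any "significant"
# result is a type I error by construction.
random.seed(0)

def one_pool_significant(n=30):
    """Crude two-sided test of 'mean != 0' on pure noise, via a z-approximation."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)
    return abs(z) > 1.96  # roughly the 5% level

trials = 2000
# Honest analyst: one pool, chosen before looking at the data
honest = sum(one_pool_significant() for _ in range(trials)) / trials
# Data-set shopper: test seven pools, report if ANY comes up "significant"
shopped = sum(any(one_pool_significant() for _ in range(7)) for _ in range(trials)) / trials

print(f"one pre-chosen pool : {honest:.3f}")   # close to the nominal 5%
print(f"best of seven pools : {shopped:.3f}")  # several times larger
```

The shopped rate lands near 1 − 0.95^7 ≈ 30%, which is the point of the comment: with enough pools to choose from, a spurious “difference” is almost guaranteed.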

I am talking about Large Retrospective Studies done on established data sets of patient populations, not Prospective Randomized Placebo Controlled Trials.

Charlie, here is what I’m pointing out.

Spencer only looked at Hadcrut. Martin argues that looking at multiple datasets is obvious and normal due diligence.

So, when criticizing Spencer the Team whips out this notion that Steve and others have mentioned in regards to many other studies: w/wo Tiljander, w/wo BCP, Yamal, etc. Many times data analysts here have suggested that we don’t know the uncertainty due to data selection decisions. When that idea is expressed here WRT team science, Team science derides the individuals making the suggestion. Take your pick of insults. They never really address the issue; they move the pea; they shoot the messenger (“ask nice Mr McIntyre”); they keep the data from you so you can’t do what they should have done. They challenge you to settle the issue using a machinery (peer review) that they dominate (not control, not manipulate.. but clearly dominate). The causality, of course, is the process of due diligence.

“casualty”?

thank you simon.

Right, you cannot on one hand say we are making data selection decisions prospectively and __________ data sets fit that inclusion criterion, and then on the other hand say you should run your test on all the data sets ahead of time as part of due diligence.

The chance of making a type 1 error is just too large.

It would be bizarre if Dessler could claim Spencer made a type 1 error about the findings of Dessler’s paper while at the same time claiming that he (Dessler himself) made a type 2 error about his own paper.

My head is exploding. : )

John

I think what Mosh is saying is: Soon there will be some major own-petard hoisting by.

hehe. consider this post.

Sorry, the first two comments are directed here

Mosher: “Steve,

I’m shocked that you would “do your own damn science”

shocked.”

###############################

Mosher: You know, over at RC they were discussing the decision to try different datasets.

#########################

[ over at RC]

Martin Vermeer writes:

“I see absolutely nothing wrong with SB11′s use of HadCRUT temperature data for their regressions”

But Russ, doesn’t it make you wonder why they chose it? The paper doesn’t say. Where is your curiosity?

Dessler clearly wonders: “… they plotted … the particular observational data set that provided maximum support for their hypothesis.”

BTW trying alternatives like this gives you one more handle on the real uncertainty of the observational curve, in addition to the formal sigmas. A due-diligence thing, and rather obvious.

Comment by Martin Vermeer — 7 Sep 2011 @ 9:50 AM

@ Martin Vermeer,

####

Mosher:

So there you have it. trying different datasets and documenting what you find is rather obvious and a due diligence thing. I wonder if Dessler tried what you tried?

due diligence thing. rather obvious.

Ta.

The problem with this analysis is that you are not using patented ClySy(tm) statistics. If you had used Mannian principal components the r^2 would be much better, regardless of its value.

I think there should be an award called

“A foolish thing to do”

and every year we should award it to the paper in climate science with the lowest published metric for statistical significance.

Maybe somebody can come up with a pun on the “Nobel” prize, and Josh can do a cartoon for the T-shirt the winner gets.

I think that a good name for it would be

“LeBon” , for the very lowest value of statsig.

How to avoid biting the tongue whilst cheekingwardly thrust?

hoRhuR

How about the “fauxbel” prize?

lovely.. multi lingual no less.

and the prize for the best Vietnamese cuisine is called the “Pho”-bel prize

There’s already the Ig Nobels, originally awarded for research “that cannot, or perhaps should not, be reproduced.”

Isn’t what you are talking about basically what the “Ig Nobel Prize” is? It’s even a pun on Nobel…

We could have a sort of climate sci version of that.

How about the ‘Real And Nebulous Data Operation Mangling’ prize?

RANDOM for short. 🙂

Careful tallbloke.. Scafetta might qualify. I play no favorites.

I suggest the “Siggy” award.

With apologies to Shaw:

Some people see things as they are, and say ‘not significant’. But others dream things that never were, and say ‘significant’.

Steve Mosher

Pun on Nobel, NoBull?

I suggest that the award be called the “Mr. T Statistics Award”, after teaching the students some years ago how to carry out a “Mr. T-test”: Pity the poor fool who doesn’t think that this statistic is significant!

But what do you think the null hypothesis should be?

Nick:

Suppose you just tell us what the null hypotheses should be — save us the guesswork.

No, if you’re proposing a T-test, you need to say what you are testing against.

I may be wrong but I see a transformation from T to Mr. T at play in Roman’s suggestion Nick. Is that A-Team or Rocky III as inspiration? As a mere Brit I know when I’m out of my depth – and past my bedtime. Goodnight.

When you’re dealing with Mr. T, whatever he wants it to be is OK with me.

However, it just occurred to me that there was another “Mr. T” who decided last January to “redefine” what the null hypothesis should be regarding extreme weather events. Perhaps that was what you were thinking of? 😉

The appropriate null hypothesis depends on the “experiment”, so to speak. In the case of cloud feedback, I would say the appropriate null is zero. The same for every feedback except the Planck response, which is based on actual, indisputable physics. For that, the null hypothesis is a slope of about 3.3 W/m^2 per Kelvin. For the total feedback, it’s the same null hypothesis as for the Planck response, since under the null all the other feedbacks are zero.

TTCA.
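The “test against a nonzero null” idea above can be sketched with synthetic data. Everything in this example is invented (the series are random, not the CERES/ERA data); only the ~3.3 W/m^2/K Planck slope comes from the comment. The t-statistic is just (fitted slope − null slope) / standard error of the slope.

```python
import math
import random

# Sketch, not an analysis of the real data: OLS slope and its t-statistic
# against an arbitrary null slope0 (0 for "no feedback", 3.3 for the Planck
# response mentioned above).
random.seed(1)

def slope_t_stat(x, y, slope0):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)  # SE of the slope
    return b, (b - slope0) / se

# Invented "monthly anomalies" with a true slope of 3.3 plus noise
x = [random.gauss(0, 0.3) for _ in range(120)]
y = [3.3 * xi + random.gauss(0, 0.5) for xi in x]

b, t_vs_zero = slope_t_stat(x, y, 0.0)    # large: slope clearly nonzero
_, t_vs_planck = slope_t_stat(x, y, 3.3)  # small: consistent with the Planck null
print(b, t_vs_zero, t_vs_planck)
```

The point of the two calls: the same fitted slope can be wildly “significant” against one null and entirely unremarkable against another, so the choice of null matters before any t-test is quoted.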

That’s the point of my query. Dessler isn’t trying to show a difference from zero. He says:

“Given the uncertainty, the possibility of a small negative feedback cannot be excluded. There have been inferences (7, 8) of a large negative cloud feedback in response to short-term climate variations that can substantially cancel the other feedbacks operating in our climate system. This would require the cloud feedback to be in the range of –1.0 to –1.5 W/m2/K or larger, and I see no evidence to support such a large negative cloud feedback [these inferences of large negative feedbacks have also been criticized on methodological grounds (24, 25)].”

So a t-test vs zero wouldn’t help him. He’s just trying to show that the evidence doesn’t indicate a large negative feedback which would counter the positive wv feedback.

And the r2 test, or others based on it, also if anything confirm his argument, which is that no such feedback can be shown.

And the r2 test, or others based on it, also if anything confirm his argument, which is that no such feedback can be shown

fair point.

A couple of comments:

A) He assumes in that bit that the “other feedbacks” are not zero but substantially positive. That also has to be shown to be a statistically significant result, or his statement, again, has no meaning. But he doesn’t even explain what the “other feedbacks” are. Water vapor? Well, it must be more than that, unless he is butchering the English language, but to my knowledge that is the only feedback he has explicitly investigated until now. He has claimed that the water vapor feedback is statistically significant. Well, perhaps that should be looked at then?

B) Based on the way he did his analysis, there is not evidence for a strong negative cloud feedback. This most certainly does not mean that there is not a strong negative cloud feedback, if his “experiment design” was ill posed to isolate the feedbacks. This is the point Roy has been making and I frankly think neither you nor Dessler has understood his points. I think there is a separate problem with his experiment design actually and the work showing this will hopefully be published soon.

C) At any rate, the significance and sign of feedbacks he has found appear to be dependent on methodological choices, which makes it questionable whether his results could stand up to scrutiny.

Posted Sep 8, 2011 at 11:44 PM

Nick Stokes said:

No, Nick, the r2 test doesn’t do that; that’s a bridge way too far. It can only confirm the argument that no such feedback was shown by his analysis, not that no such feedback can be shown…

w.

even more precise.

Indeed so

Ignoble

But, Steven, shouldn’t the award go to the journal, instead? It’s relatively easy for an author to crank out a paper with low significance, or even with no published significance or error bars at all. Surely the real challenge is for a journal and its peer reviewers to find the gall to print it.

Excellent point. It’s a good move away from over personalizing the issue as well.

Of course blog posts are better than peer reviewed literature. Have read that myself in an editorial by some W Wagner of Vienna, Austria.

Yeah, very much in my mind too. But nothing as helpful as hyperlinks from Wolfgang, in case we should look it up and find it less than convincing.

I think today is a new day dawning. Peer review took a giant and very public hit today. The blogosphere smoked GRL’s peer review. In this particular instance, if resignations aren’t occurring and op-ed pieces aren’t forthcoming, peer review will have ceded its legitimacy to blog threads.

I’ve seen corrections. I’ve seen rebuttal papers. But, I’ve never seen a pre-release altered because of what was stated on the blogs. In this case, unless I’ve missed something, this is exactly what happened.

I think I’ve seen the same. What probability we’ve had the same hallucination? But it’s one thing for Dessler to take heed of the corrections of Spencer. Those of McIntyre … that would be something else.

suyts, this post is about Dessler 2010 – Dessler, A.E., “A determination of the cloud feedback from climate variations over the past decade”, Science, 330, DOI: 10.1126/science.1192546, 1523-1527, 2010 – not the more recent Dessler 2011 – “Cloud Variations and the Earth’s Energy Budget”, Geophys. Res. Lett., 2011, in press (preprint here). So it is not GRL’s refereeing that is being questioned by this post, but Science’s.

Steve, did you read the papers cited to justify his use of ERA data? If so, do you feel they support his claims? If you do not feel that they support his claims, how do you justify this belief?

Rattus,

From what I recall when I originally read the references, they discussed how clear-sky measurements were associated with drier conditions than the all-sky. This, of course, should have almost no effect on the SW results (I say almost none because there is a tiny SW water vapor contribution), but you still get significant differences in the SW estimates using the CERES obs (much more negative feedback, as shown in my guest post at the Blackboard mentioned above).

Furthermore, the dry condition “bias” serves to shrink the total CRF by adding to the LW portion, which is in the opposite direction of the larger SW portion. This would then slightly bias the feedback towards the positive (making it appear as though hotter conditions, with more water vapor, were shrinking the CRF due to larger LW trapping associated with clouds when there is actually no change in cloud properties).

So, if there is any bias using the CERES clear-sky, it should bias the cloud feedback towards the positive, and wouldn’t explain the differences we see here. Furthermore, using a different dataset for clear-sky than all-sky can cause other biases, such as the differences from measured solar insolation (from which the ERA-interim clear-sky fluxes are forecasted).

From what I read of the papers, they don’t explain what’s going on here, and absent other references or data I think it is a stretch to say that ERA-interim is “better” than CERES.

Troy_CA comments on this matter in his July 8th blog post where he performed much the same analysis as Steve has done here.

After demonstrating that the feedback estimate is dominated by the SW component rather than LW, Troy comments further:

This type of sensitivity analysis is a paper killer. I have only skimmed Dessler ’10, but if that throwaway comment was all that was done to rationalize not running the analysis with CERES clear sky, then this is a head shaker. And yes indeed Steve, what kind of peer review process would not ask this obvious question.

LL,

Dessler did not make a throwaway comment. He justified it rather carefully:

“In a reanalysis system, conventional and satellite based meteorological observations are combined within a weather forecast assimilation system in order to produce a global and physically consistent picture of the state of the atmosphere. I used both the ECMWF (European Centre for Medium-Range Weather Forecasts) interim reanalysis (18) and NASA’s Modern Era Retrospective analysis for Research and Applications (MERRA) (19) in the calculations. The fields being used here (mainly water vapor and temperature) are constrained in the reanalysis by high-density satellite measurements. Previous work has shown that ΔR_clear-sky can be calculated accurately given water vapor and temperature distributions (20, 21). And, given suggestions of biases in measured clear-sky fluxes (22), I chose to use the reanalysis fluxes here.”

He explains what is wrong with clear-sky, and why he expects reanalysis to fix it. Peer review did not have to ask the question. The answer is there.

Troy says that fixing the bias should go in the other direction. But it didn’t. That needs checking.

Interestingly, reference 20, which shows that “R_clear-sky can be calculated accurately given water vapor and temperature distributions”, uses a set of observations to validate their models: the CERES clear-sky observations.

Most of your quote deals with explanation of the reanalysis systems, which are also used with the radiative kernels to correct other biases. And yes, it is interesting that models can calculate the absolute OLR for clear-sky (as validated against CERES clear-sky), but then the question is why not just use the CERES clear-sky in the first place? The only answer to that specific question seems to be the last sentence and reference, which I’ve already explained does not necessarily hold up here.

Furthermore, as mentioned above, using two different datasets, even if they are accurate on an absolute level, can cause biases when you are working with changes on a magnitude far less than that “absolute” level, particularly when they are working with different estimates of solar insolation!

I’m not blaming Dessler for using ERA-interim for clear-sky, but given what we know now, I don’t think it is a better choice than the simple CERES clear-sky bundled with the all-sky.

Troy,

The reanalysis is validated against CERES clear-sky. But they would validate under the appropriate conditions – namely the dry air of the clear sky patches.

What the reanalysis can then do is make the moisture correction so the humidity is representative of the whole atmosphere, not just the clear bits. I don’t know for sure that they do this, but I would expect so, since Dessler says they are using water vapor distributions. Then the reanalysis has a great advantage.

Nick,

If indeed all else was equal between the ERA-interim analysis and CERES clear-sky, and the ERA-interim analysis uses the same methods/models described in those references, and it indeed performed the humidity adjustment, then obviously it would be better and an improvement on CERES.

But based on the differences we see here (particularly with the SW fluxes), obviously it isn’t the case. Since we’re calculating CRF as the difference between clear-sky and all-sky fluxes, ANY difference in those two datasets is going to show up in the estimated cloud forcing, including their different estimates of solar insolation (which has nothing to do with clouds). The magnitude of the changes in flux is far smaller than the magnitude of the total flux, so you would expect using two different datasets to have a lot more noise unrelated to the CRF. Note that if there is ANY flux calculation bias in either of the two datasets unrelated to clear-sky vs. all-sky, it WILL show up in the CRF, whereas if you use the same dataset, even if a flux calculation bias is present, it will NOT show up in CRF unless it is related to clear-sky vs. all-sky.

Do I think Dessler should have had better reasons for switching to ERA-interim? Honestly, it’s not that interesting to me…what interests me is that clearly the CERES-only result is 1) important and 2) probably a better estimate.
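Troy’s differencing argument can be illustrated with toy numbers (these fluxes are purely hypothetical, not CERES or ERA values). A calibration bias shared by both series, as when clear-sky and all-sky come from the same instrument, cancels exactly in the clear-minus-all difference; a bias in only one of two mixed datasets lands directly in the CRF estimate.

```python
import random

# Toy illustration: CRF = clear-sky flux minus all-sky flux.
random.seed(2)

true_clear = [240 + random.gauss(0, 1) for _ in range(24)]  # invented W/m^2
true_all = [235 + random.gauss(0, 1) for _ in range(24)]
bias = 2.0  # hypothetical flux-calibration bias, W/m^2

# Same dataset: the bias enters both series and cancels in the difference
crf_same = [(c + bias) - (a + bias) for c, a in zip(true_clear, true_all)]
# Mixed datasets: the bias enters only the clear-sky series
crf_mixed = [(c + bias) - a for c, a in zip(true_clear, true_all)]

true_crf = [c - a for c, a in zip(true_clear, true_all)]
err_same = max(abs(x - t) for x, t in zip(crf_same, true_crf))
err_mixed = max(abs(x - t) for x, t in zip(crf_mixed, true_crf))
print(err_same, err_mixed)  # ~0 vs the full 2.0 W/m^2 bias
```

Which is the mechanical content of Troy’s point: a within-dataset CRF is immune to any bias that affects clear-sky and all-sky alike, while a cross-dataset CRF inherits every mismatch between the two sources, clouds or not.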

“But based on the differences we see here (particularly with the SW fluxes), obviously it isn’t the case. Since we’re calculating CRF as the difference between clear-sky and all-sky fluxes, ANY difference in those two datasets is going to show up in the estimated cloud forcing, including their different estimates of solar insolation (which has nothing to do with clouds).”

Well, that’s a much more sophisticated argument than we see in this CA head post, which just pitches it as a choice between two data sets. Which of course the crowd then picks up with chants of cherry-picking.

I agree that the reanalysis corrects one major thing, but brings in other differences. And probably that Dessler should have said more about that. I can’t at the moment see how the SW flux contrast makes that “obviously not the case”. But I’ll read your other posts at your blog more carefully and see if I can figure it out.

‘And, given suggestions of biases in measured clear-sky fluxes (22), I chose to use the reanalysis fluxes here.”

“Given suggestions” does not sound like a robust justification.

You get a better result if Dessler looks at both, notes the opposite sign and drives the question back to the bias question. Which is where we are now. As it stands, he makes a choice based on a suggestion and gets one answer. Ignoring that suggestion we get a different answer.

It’s suggested that BCP be avoided, after all.

This is not about the precise issue at play here. This is about the approach to analysis.

““Given suggestions” does not sound like a robust justification.”

That’s scientist-speak. The paper he refers to is quite definite.

“The corresponding CRF change forced by these WVP changes is about 2 W m−2 in a zonal mean sense. Highest values occur in the midlatitudes of the northern hemisphere in which a magnitude up to 6 W m−2 is shown.”

Sorry, I don’t find those words definite in any regard. I find them as suggestive as the suggestive comment that refers to them.

no cookie

C’mon. There is no explanation of any kind. Just a vague suggestion of bias to justify dismissing a data set which would radically change the central conclusion of the paper. It is Troy who has explained the nature of the bias set out by SB08. No way reviewers should have let him off the hook.

LL,

Again, ERA is not a dataset. It is a reanalysis, substantially based on CERES.

Troy did not discover SB08. He followed Dessler’s reference. That’s how it’s meant to work. Dessler doesn’t have to write it out again.

He makes the reason for not using CERES clear-sky clear elsewhere.

“ΔCRF is the change in TOA net flux anomaly if clouds were instantaneously removed, with everything else held fixed,..”

CERES clear-sky does not hold everything else fixed. It substitutes the surrounding clear-sky atmosphere. To implement the definition of ΔCRF, he needs a reanalysis.

Nick, if the nature of the bias is as Troy has explained, then there is no basis for outright dismissal of CERES clear sky. On the contrary, Dessler has an obligation to explore whether his conclusions are sensitive to this substitution.

I never said he did. You stated that Dessler “explained” what was wrong with CERES clear sky when he did no such thing. Had Dessler provided an “explanation” he would have done something similar to Troy.

LL,

He said

“And, given suggestions of biases in measured clear-sky fluxes (22), I chose to use the reanalysis fluxes here.”

That’s explaining why he chose ERA. (22) is SB08, which contains the details. That’s pretty standard for scientific articles. And Science doesn’t give you space to repeat what you can refer to.

So Steve, you have just found out that the groundbreaking hominid has a human skull, the lower jaw of a Sarawak orangutan and chimpanzee fossil teeth; all of which have been boiled in Ferric/Chromic acid

Well done.

Steve,

This is a curious post. You’ve quoted Dessler’s reason for not using the CERES clear-sky numbers. But you’ve made no attempt to deal with them. You’ve just gone ahead and done what he said there was a reason for not doing.

Troy, in his Blackboard post, dealt with one of those. If you work out a whole clear sky on the basis of the clear parts that you can see, then you have a water vapor bias. Where the sky is clear, the air is usually drier, and this affects the outgoing longwave radiation (OLR). If you simply extrapolate from those clear parts to a whole atmosphere, it is a much drier atmosphere. That’s the focus of the Sohn and Bennartz paper that he cited. It’s paywalled, so I’ve only seen the abstract so far, but the biases they calculate are large. An average forcing bias of 2 W/m2, with some major regions up to 6 W/m2.

Some adjustment has to be made for this. I don’t know the details, but it is likely that the appropriate adjustment has been made in the ERA recalc. That would be a very good reason for Dessler to use those figures.

Nick,

It looks like we cross-posted. See my comment above:

https://climateaudit.org/2011/09/08/more-on-dessler-2010/#comment-302531

I don’t believe this explains the difference, as the bias would be in the opposite direction.

Note to Steve McIntyre and Roy Spencer: if you guys continue to help Dessler write his paper, perhaps you should request co-authorship or at least be included in the acknowledgement section with regards to specific points! Really, as much as it is making science progress, it is naive at best to kindly serve people who have done everything in their own power to demean and attack you! In the end it also shows how peer-reviewed scientific journals are an obsolete means of doing science, since the blogosphere is doing it much faster and with more agility.

It’s certainly clear from interactions between Dessler and Spencer published publicly over the last year, that Dessler is capable of snark and being a little free with his elbows under the backboard.

And yet, from what I can see, he’s been much more willing to engage in something approaching a real scientific debate in an iterative and co-operative manner than those generally numbered amongst “the Team”. That should be encouraged.

That he hasn’t surrendered horse and foot to Spencer should surprise no one. That he clings to his context re the supremacy of ENSO should surprise no one. It’s still early innings on this subject, why should he?

That’s my take.

Kudos to Dessler.

Regardless of the science that he will ultimately defend, let’s hope Dessler will have the elegance of acknowledging the exchange and the input from both Spencer and Steve.

“LL,

Dessler did not make a throwaway comment. He justified it rather carefully:

..He explains what is wrong with clear-sky, and why he expects reanalysis to fix it. Peer review did not have to ask the question. The answer is there.

Troy says that fixing the bias should go in the other direction. But it didn’t. That needs checking.”

I think LL gets the throwaway comment from being aware of Troy’s comment. It will be very interesting where this discussion ends, or at least, leads.

Sometimes you do sensitivity studies to show differences in results instead of similar ones. If the source of the data used makes large differences in results, science would advance from knowing that difference.

A circle has no end. 😉

Reanalyses can have unknown sources of bias, especially when new sources of data are added in, as discontinuities can arise. How confident are we that there are no biases in the two reanalyses that Dessler used that might impact his results?

This is an honest question, as I am not sure what sort of uncertainties there might be in them that may make them problematic also. It isn’t just the CERES data that could have problems.

I have no problem with Dessler’s use of ERA reanalysis. What I have a problem with is a paper which fails to report the contradictory results of substituting CERES clear sky. If there is an explanation then fair enough. The only explanation I have seen so far is Troy’s which seems pretty straight forward in showing the bias would not account for the contradictory feedback estimate.

I don’t have a problem with it either. I just want to know, since it’s use is evidently very important for the results, what the uncertainties in ERA are that could cause one to question it’s reliability. All datasets have uncertainties and potential for error. It’s always worth discussing them, IMHO.

Actually, I’d want to know about the uncertainties in ERA with or without it “mattering” if it is used.

Yes, that’s the right line of inquiry. Not why he didn’t use CERES clear-sky – he’s explained convincingly that it’s not appropriate. But whether ERA (actually ECMWF reanalysis and MERRA) is adequate.

Steve: Dessler2010 gives the uncertainty in his slope as 0.74 W/m2/K, not 0.94 as you have above.

Am I missing something here?

There seem to be several major points that have got lost in all the hubbub:

(1) this analysis is based on a very short period of data, which I thought all sides had (previously) agreed was too short to yield anything significant about the climate.

(2) the error terms for the correlation are very large relative to the trend – far too large to “yield anything significant about the climate”.

(3) the R squared statistic is hopelessly close to zero to “yield anything significant about the climate”.

(4) a visual appraisal of the data would at the very first glance indicate that it has got too much scatter to “yield anything significant about the climate”.

So what’s this all about?

Why the published papers in peer reviewed journals?

The only faintly interesting thing is that these examinations of data provide no evidence supporting the CO2 contention.

The null hypothesis remains untarnished by this weak onslaught.

(I’m sure somebody else could put that much better).

I’m amazed that such subtle variations in the data make such a big difference. I suppose that is an indicator of the extreme noise for any conclusion.

Yes, it is really just another way of showing that Dessler’s claimed positive feedback is completely meaningless. Steve had already demonstrated that yesterday by showing that the adjusted r^2 is only 0.01 and that the sign changes if you add a lag. Today he’s showing the same thing a different way. So there is no reason to be amazed. It’s clear from Steve’s plots that drawing a line through the cloud is nonsense and the numbers confirm it.

On this point I’d like to add one minor thing. If you look at Dessler 2010 fig 2A, the axes have been squashed down with an aspect ratio just over 2. This makes the straight line through the data look reasonable. But when you plot it with an aspect ratio near 1, as Steve has done, you see it’s pretty much a circular cloud with no linear relationship.

I expect that ‘Google Science Communication Fellow’ Dessler knows that changing the aspect ratio, while not affecting r2, improves the political optics greatly.
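The aspect-ratio point is easy to check numerically: rescaling an axis changes the look of a scatterplot but leaves r^2 untouched, so a squashed plot can look more linear without being one bit more significant. A minimal sketch with invented data:

```python
import random

# Sketch with synthetic data (not the Dessler series): r^2 is invariant
# under rescaling of either axis, which is what an aspect-ratio change does.
random.seed(3)

def r_squared(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    sxx = sum((a - xbar) ** 2 for a in x)
    syy = sum((b - ybar) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# A weak relationship buried in noise
x = [random.gauss(0, 1) for _ in range(100)]
y = [0.2 * xi + random.gauss(0, 1) for xi in x]

r2 = r_squared(x, y)
r2_squashed = r_squared(x, [yi / 2.5 for yi in y])  # "flatter" plot, same fit
print(r2, r2_squashed)
```

Dividing y by 2.5 scales the covariance and the y-variance so their ratio is unchanged, which is why the squashing affects only the optics.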

Something in the realm of the unthinkable is occurring over at WUWT between Spencer and Dessler – they appear to be collaborating and exchanging data, ideas, and good will. As is often said of other scientific events, that is the way science should work. There is a lesson to be learned here.

Yes. It needs to be encouraged.

I’m just amazed at the non-responses I have gotten in my recommendations to bring some rigor into this analysis. Am I the only person interested in these matters who has ever used an oscilloscope? Am I the only one who knows what a Lissajous pattern is?

“Am I the only one who knows what a Lissajous pattern is?”

Nope. Australians do!

But what are you rigorously saying?

That you cannot diagnose feedback by performing a linear regression on a phase plane plot when the driving input is all over the place and the phase response is nonlinear. As I explained on the last thread, and have written up here.

I’m sitting back and watching people argue about techniques which are entirely unsuited to the problem in the first place. It’s insane.

Bart,

David Stockwell discusses this similarly at his blog.

You are exactly right.

“Why everyone cannot grasp Spencer’s central argument from day one is misdiagnosing the feedback using traditional regression techniques, is puzzling. Only a few seem to understand this.”

I think everybody is waiting for you to do it and show us how. That’s a real request.

If you need a forum to do it, I’ll suggest Lucia’s. Just write her and ask ( you can reference this ). or maybe you and david could do something and post over there.

Nobody is going to believe something some anonymous guy posting on a blog would say. Even if I did the analysis, someone respected by a lot of people would need to replicate it. So, why take the time to sort it all out on my own when it would be wasted effort?

I’ve given straightforward methods. Anyone should be able to perform a running average on both data sets and replot the phase plane (with the time lag removed as Steve M. has done here, though the lag should be recalibrated for the filtered data). Cross spectral analysis is more difficult, but can provide additional confirmation / the definitive result after looking at the phase plane.

Steve: I posted the series online so that interested people could perform their own analyses – something that I encourage. It’s not a matter of whether people “believe” an anonymous person on a blog if the script and results are posted up. I believe that R has functions to do what you’re suggesting. Why not give it a go?
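For anyone taking up that suggestion, the recipe Bart describes (smooth both series with a running average, then find and remove the time lag before replotting the phase plane) might be sketched roughly as follows. This is a hypothetical illustration on an invented test signal with a known 4-sample delay, not the actual CERES/HadCRUT columns, and it is in Python rather than the R Steve mentions:

```python
import random

# Sketch of the "smooth, then de-lag" preprocessing step on synthetic data.
random.seed(4)

def running_mean(xs, k):
    """Simple trailing running average of window k."""
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

def best_lag(x, y, max_lag):
    """Lag of y relative to x (in samples) maximizing the cross-product."""
    def score(lag):
        pairs = [(x[i], y[i + lag]) for i in range(len(x) - lag)]
        return sum(a * b for a, b in pairs) / len(pairs)
    return max(range(max_lag + 1), key=score)

# Invented signal: y is x delayed by 4 samples, plus noise
x = [random.gauss(0, 1) for _ in range(400)]
y = [0.0] * 4 + x[:-4]
y = [v + random.gauss(0, 0.2) for v in y]

xs, ys = running_mean(x, 3), running_mean(y, 3)
lag = best_lag(xs, ys, 12)
print(lag)  # the built-in 4-sample delay should be recovered
```

Only after this step, Bart argues, does a phase-plane plot of the two smoothed, aligned series become interpretable; regressing the raw lagged cloud against raw temperature mixes the lag into the apparent slope.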

Bart,

Like Steve says, why not give it a go. If you give it a go with easy to follow instructions others will follow. Troy, UC, roman, Carrick, steveF, dewitt, Willis, chad, jeff id, lucia, nick.. they all have the skills to follow along if you take the lead. We don’t care if you’re anonymous as long as you share the work. The whole point of sharing is to make the individual disappear

[ apologies to anyone I left off the list o’ wizzes ]

http://en.wikipedia.org/wiki/William_Sealy_Gosset

You mean this data? I’m not sure how to get it or the temperature data. I’d like to work on the same exact series you plotted. Any way you might make it accessible columnwise in a blog post or something?

Steve: in my earlier posts, I referred to collations placed online. See http://www.climateaudit.info/data/spencer and http://www.climateaudit.info/data/dessler

Which columns should I use?

Seriously. Precisely which columns do I use, because I have found some very strong correlations.

Looking forward to hearing about your findings Bart.

Well, here are some preliminary results. I hope I used the right quantities. I pulled this data from the Spencer link. I used column 9 for the HADCRUT3 temperature anomaly, and I used column 5 minus column 8 for the cloud response. The relationship between these variables most assuredly has a negative dc gain.

Here is a plot of the estimated frequency response. If this holds up, I think it’s going to shock some people. The response at high frequency is a jumble, and probably due to independent processes going on. But, the low frequency region is dominated by a fairly well defined 2nd order response with natural frequency of about 0.0725 year^-1 and a damping ratio of about 0.45, which indicates a time constant of about 4.88 years. Yes, years. The impulse response is shown here.
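[For readers checking the arithmetic: for an underdamped second-order system the envelope decay time constant is 1/(ζ·ω0). A quick sketch using the values quoted above – Ed.]

```python
import math

# Second-order parameters quoted in the comment above
f0 = 0.0725            # natural frequency, cycles/year
zeta = 0.45            # damping ratio

w0 = 2 * math.pi * f0  # natural frequency in rad/year
tau = 1 / (zeta * w0)  # envelope decay time constant, years

print(round(tau, 2))   # -> 4.88
```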

Bart, thanks for this very interesting analysis. Perhaps you could elucidate in more detail how the graphics are interpreted for readers not used to this style of graphic.

Also do you get similar results for the different version that Dessler used:

http://www.climateaudit.info/data/dessler/dessler_2010.csv columns eradr (CLD RF) and erats (temperature)?

Here is the MATLAB code used to compute all this (straight quotes restored; the original post called a helper `faz` for the phase plot, replaced here by the equivalent `angle(...)*180/pi`):

% Import monthly data
temp = data(:,9);
dR = data(:,5)-data(:,8);
N = length(dR);

% Sample period
T = 1/12; % years

% Pad time series with zeroes to prevent time aliasing of impulse response
Nsamp = 8192;
Npad = Nsamp-N;
X = fft([temp;zeros(Npad,1)]);
Y = fft([dR;zeros(Npad,1)]);

% Compute impulse response
h = real(ifft(Y./X))/T;

% Window impulse response
Nc = Nsamp/2^2;
w = [ones(Nc/2,1);(1 + cos(pi*(0:(Nc/2-1))'/(Nc/2-1)))/2];
w = [w;zeros(Nsamp-Nc,1)];
hw = h.*w;

% Plot smoothed impulse response
c = [1:15 15:-1:1]/(15*16);
figure(1)
hs = flipud(filter(c,1,flipud(h)));
t = (0:(length(hs)-1))*T;
plot(t,hs)
grid
xlim([0 50])
title('Cloud-Temperature System Smoothed Impulse Response')
xlabel('time (years)')
ylabel('W/m^2/^oC/year')

% Compute frequency response and plot
H = T*fft(hw);
f = (0:(length(H)-1))'/Nsamp/T;

% Create 2nd order model
s = sqrt(-1)*2*pi*f;
w0 = 2*pi*0.0725;
zeta = 0.45;
Hmod = -9.5./((s/w0+2*zeta).*(s/w0)+1);

figure(2)
subplot(211)
loglog(f,abs([H Hmod]),'LineWidth',2)
grid
xlim([1e-3 5e-1])
title('Cloud-Temperature System Magnitude Response')
ylabel('W/m^2/^oC')
xlabel('frequency (years^-^1)')
legend('From Data','Model','Location','SouthWest')
subplot(212)
semilogx(f,angle([H Hmod])*180/pi,'LineWidth',2) % phase in degrees
grid
xlim([1e-3 5e-1])
title('Cloud-Temperature System Phase Response')
ylabel('deg')
xlabel('frequency (years^-^1)')
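[For anyone following along without MATLAB, the estimation core of the listing above – zero-padded FFTs, empirical transfer function Y/X, inverse transform, then a half-cosine taper on the tail – can be sketched in Python/NumPy. This is a hedged translation run on synthetic stand-in series, not the actual CERES/HadCRUT columns – Ed.]

```python
import numpy as np

# Synthetic stand-ins for the monthly series (124 samples, roughly the CERES span)
rng = np.random.default_rng(0)
N = 124
T = 1.0 / 12.0                      # sample period, years
temp = rng.standard_normal(N)       # stand-in for the temperature anomaly
dR = rng.standard_normal(N)         # stand-in for the cloud forcing series

# Zero-pad to suppress time aliasing of the impulse response
Nsamp = 8192
X = np.fft.fft(temp, n=Nsamp)
Y = np.fft.fft(dR, n=Nsamp)

# Empirical impulse response: inverse FFT of the transfer function estimate
h = np.real(np.fft.ifft(Y / X)) / T

# Taper the tail with a half-cosine window before estimating the
# frequency response, mirroring the window in the MATLAB listing
Nc = Nsamp // 4
w = np.concatenate([
    np.ones(Nc // 2),
    0.5 * (1 + np.cos(np.pi * np.arange(Nc // 2) / (Nc // 2 - 1))),
    np.zeros(Nsamp - Nc),
])
H = T * np.fft.fft(h * w)
f = np.arange(Nsamp) / (Nsamp * T)  # frequency axis, cycles per year

print(h.shape, H.shape)
```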

In case anyone’s looking, I posted links to plots, but that post is being held up for moderation. I’m sure it will appear soon.

Thanks Bart. An underdamped harmonic. Do you think the following phase plot of HadCRU since 1997 is consistent with these parameters? (http://landshape.org/enm/sinusoidal-wave-in-global-temperature/ in case it doesn’t come out.)

Sweet. Climate sensitivity in the frequency domain. I am going to play with this tomorrow. Thanks

Bart, would it be possible to plot the step response. This might be more understandable for more people.

It is surprising that an analysis like this wasn’t done previously, considering that the original paper was published in a major scientific journal. I wonder why the peer reviewers did not ask for it. The level of the mathematics used in these studies is surprisingly low. All the talk about lags and damped exponentials, with vague mumblings about differential equations, does make one rather disheartened on this issue.

Bart,

You have deduced an impulse response over fifty years from ten years of data. How much of that is an artefact of the time windowing?

That well-defined second order response also has a period longer than your data. Again, is it coming from your windowing?

Bart,

You’ve used a Hanning window to taper the impulse. But I would suggest tapering temp and dR prior to FFT. It’s a little better than just a 10-yr gate function, which I think is influencing your results.

Again, I am quite astounded. We have a discussion on the temporal relationship between these two quantities, with publications in distinguished journals and wide press release, and, amazingly, this sort of analysis has not been done. We are discussing how to window the data. Positive and negative feedback – whatever! Perhaps we need more than peer review.

Nick Stokes

Posted Sep 10, 2011 at 9:48 AM | Permalink

“You have deduced an impulse response over fifty years from ten years of data. How much of that is an artefact of the time windowing?”

I know. I did not believe it myself at first. So, I tried seeing if I could generate artificial data with these characteristics and time span and if I could fit it in the same way. Here is some code for doing that. I made sure to generate a lot of data and truncate it so that there would not be any start up transients:

a = [1.000000000000000 -1.967462610776618 0.968691947164695];
b = -[0.617926899846966 0.611409488230977]*1e-2;
temp = randn(10000,1);
dR = filter(b,a,temp);
temp = temp((10000-123):10000);
dR = dR((10000-123):10000);

This is a discretized model of the 2nd order response driven by white noise.

Know what happens? The identification works some of the time. It is either nicely behaved, like this data was, or, more times than not, it comes out a total mess. This is very peculiar behavior, and I have not yet determined what property it is which makes a good data set versus a bad data set, or whether the problem is inherent or numerical.
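[The generating recipe Bart describes can be reproduced without MATLAB’s filter(); a minimal NumPy sketch implementing the same difference equation directly, with the coefficients as posted – Ed.]

```python
import numpy as np

# Coefficients from the comment above (discretized 2nd-order response)
a = np.array([1.0, -1.967462610776618, 0.968691947164695])
b = -np.array([0.617926899846966, 0.611409488230977]) * 1e-2

rng = np.random.default_rng(1)
temp = rng.standard_normal(10000)   # white-noise temperature forcing

# Direct-form difference equation, equivalent to MATLAB's filter(b, a, temp):
# a[0]*dR[k] = b[0]*temp[k] + b[1]*temp[k-1] - a[1]*dR[k-1] - a[2]*dR[k-2]
dR = np.zeros_like(temp)
for k in range(len(temp)):
    acc = b[0] * temp[k]
    if k >= 1:
        acc += b[1] * temp[k - 1] - a[1] * dR[k - 1]
    if k >= 2:
        acc -= a[2] * dR[k - 2]
    dR[k] = acc / a[0]

# Keep only the last 124 samples so start-up transients have died out,
# as in the MATLAB truncation temp((10000-123):10000)
temp, dR = temp[-124:], dR[-124:]
print(len(temp), len(dR))
```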

David Stockwell

Posted Sep 10, 2011 at 7:00 AM | Permalink

“Do you think the following phase plot of HadCRU since 1997 is consistent with these parameters?”

At a glance, it does look like it. The spiral does not cross over itself too many times, so that suggests a moderate amount of damping.

Tom Gray

Posted Sep 10, 2011 at 9:09 AM | Permalink

Bart, would it be possible to plot the step response. This might be more understandable for more people.

Here you go.

Steve McIntyre

Posted Sep 10, 2011 at 7:30 AM | Permalink

“Also do you get similar results for the different version that Dessler used?”

Sort of similar. The low frequency phase shift is still 180 deg. It appears to ring more and have significantly lower magnitude. It’s not very well behaved. This data set evidences the “peculiar behavior” I referenced above.

Maybe Nick’s idea of windowing the data would help:

Nick Stokes

Posted Sep 10, 2011 at 10:48 AM | Permalink

“But I would suggest tapering temp and dR prior to FFT.”

I will give that a try later, but now that you’ve all got the idea, others can try to track down what causes this, too.

Yes, I think that the step response plot makes things much more apparent.

I should note for you, Tom, that I have been carrying out this analysis on two different computers and this step response is not precisely that of the modeled process I showed above, which is a later version on another computer to which I do not have access right now. That model should have settled out to about -9.5 W/m^2 for a 1 degC step input and it has a slightly different damping ratio and frequency. The -9.5 sensitivity is the newer version and the one I think fits the data better. But, at this point, these values are not set in stone anyway.

Bart

Posted Sep 10, 2011 at 2:25 PM | Permalink


“This data set evidences the “peculiar behavior” I referenced above.”

I suspect the problem may be when there is a zero or near zero in the temperature response which makes things ill conditioned. Nick’s idea of tapering the time series may help with that. I had also considered adding a small white noise “floor” to the input spectrum. Will work on it, and everyone else feel free to, too.

There’s several posts being held up in the queue. Tapering the data as Nick suggests alters the result slightly, but has no major effect. Adding a fictitious white noise floor to the input data does not seem to improve things when I generate artificial representative time series. I’m not sure what to do to ameliorate the pathological cases. Maybe it’s just inherent in trying to draw such long correlations out of a short span of data, and it’s just hit or miss.

My artificially generated data does show, however, that such long term correlation can be extracted from short term data, if you are lucky. Generating a long span of artificial data does seem to eliminate the pathological cases. My preliminary conclusion right now is that the Spencer data just happens to be lucky.

Bart,

I translated your code to R, up to the impulse response plot. I get the same result.

But if I apply a Hanning taper to temp and dR (down to zero at each end of the data window) it is very different. I think that suggests that the finite data length is having a big effect.

I’ve put up a html page here which has some explanation, the impulse response with and without Hanning, and the R code.
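[Nick’s variant – taper the raw series, not the impulse response – can be sketched as follows. Synthetic data again; the full Hann window via numpy.hanning is assumed comparable to the taper in his R code – Ed.]

```python
import numpy as np

def taper(x):
    """Apply a Hann taper that goes to zero at both ends of the record."""
    return x * np.hanning(len(x))

rng = np.random.default_rng(2)
temp = rng.standard_normal(124)
dR = rng.standard_normal(124)

# Taper the raw series before zero-padding and transforming, rather than
# windowing the estimated impulse response afterwards
Nsamp = 8192
X = np.fft.fft(taper(temp), n=Nsamp)
Y = np.fft.fft(taper(dR), n=Nsamp)
h = np.real(np.fft.ifft(Y / X)) * 12.0   # divide by T = 1/12 year

# The taper forces the ends of the data window to (near) zero
print(taper(temp)[0], taper(temp)[-1])
```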

Nick – yes, it can change things a little through loss of resolution, but it isn’t really all that significant and, importantly, the feedback is still negative (180 deg phase shift at low frequencies).

I have other responses which may be of interest to you and which are directed to your queries but, unfortunately, the moderator seems to be taking the weekend off. Your insights are cogent, but I think I feel relatively confident now in asserting that this response is real, and the feedback is, clearly, negative. But, please, continue to investigate.

“You have to accept low frequency problems with finite data…”

Actually, Nick, abrupt transitions are a high frequency phenomenon. Multiplication in the time domain is the same as convolution in the frequency domain, so the tapering tends to lower the resolution of the frequency response estimate. Which is good for the higher frequency portion of the spectrum, but not so good for the low. Yes, a truncated data record is bad for resolution of low frequency stuff. But, providing a taper only really helps the high frequency portion. And, so, your impulse response so estimated has the slight ringing smoothed out.

I’m going to try one more time to get through the blocker on this. Here is a discrete time model for generating data with the desired correlation which you can play with. I assume the input is white noise. You should find that, occasionally, it is possible to generate a short time series for which the analysis works, other times, not. Which is why I said: “My preliminary conclusion right now is that the Spencer data just happens to be lucky.”

a = [1.000000000000000 -1.967462610776618 0.968691947164695];
b = -[0.617926899846966 0.611409488230977]*1e-2;
temp = randn(10000,1);
dR = filter(b,a,temp);
temp = temp((10000-123):10000);
dR = dR((10000-123):10000);

I generate a lot of data before truncating to be sure of eliminating transient start up effects.

The “filter” function implements the difference equation such that

a(1)*dR(k) = -a(2)*dR(k-1) – a(3)*dR(k-1) + b(1)*temp(k) + b(2)*temp(k-1)

Dang.

a(1)*dR(k) = -a(2)*dR(k-1) – a(3)*dR(k-2) + b(1)*temp(k) + b(2)*temp(k-1)

Nick and Bart,

This is great stuff, thank you both for posting more details off-site. It really helps a lot.

Bart, as far as the “getting lucky” biz goes, it IS interesting to know how often one would be able to properly detect the characteristics of the estimated process in this short time window, and it is great to see you looking at that. This speaks to the power of the test, given that the DGP has the estimated negative feedback property.

The other, complementary question, is of interest too. Suppose the feedback is really positive and small: How often would we incorrectly find the kinds and magnitudes of negative feedback that you are estimating (either with or without Nick’s taper)? If a Monte Carlo shows this to be extremely unlikely–with the taper or not, even with the short time series–then you’ve really got something.

Again, I am really enjoying watching this.

Bart et al,

I’ve given up on this long thread, and posted a reply here.

Bart, with your troubles getting posts to appear, are you aware that there seems to be a rule here that more than one link takes you into moderation?

NW

Posted Sep 10, 2011 at 9:31 PM | Permalink

“How often would we incorrectly find the kinds and magnitudes of negative feedback that you are estimating (either with or without Nick’s taper)?”

That is a good question. A very few of my runs with artificially produced short data records appear to give a false positive (pun intended). But, the giveaway that something is not quite right is that the frequency response estimate always looks kind of haphazard and poorly behaved, not smooth and nice and readily recognizable as a standard 2nd order response like this one, and like others I have generated with artificial data.

I think there must be a way of making the estimation process more reliable. But, as I have always analyzed systems for which there was more than enough data to cover the time span of the dynamics, I have never had to research it. Perhaps there are methods in the literature, or we might have to come up with whole new ones. For certain, there are various least-squares, maximum entropy, etc… methods which can work well with short data records, but these generally seek to fit parameters to a model, and can be sensitive to unmodeled components of the data (and, there are plenty of likely unmodelable processes in this data stream). One of the strengths of FFT based methods is that they require no parameterization and are unconstrained.

Anyway, that’s for future investigation. Right now, I believe this data is producing a reasonable result based on the fact that I can often generate artificial data which also produces a reliable estimate of the transfer function with a short data record.
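[The Monte Carlo that NW proposes can be sketched in a few lines: simulate a process whose true low-frequency gain is small and positive, run the same FFT-based estimate, and count how often the estimated low-frequency gain comes out spuriously negative. The AR(1) model and gain below are illustrative assumptions only, not a fitted climate model – Ed.]

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(gain=0.5, rho=0.9, N=124):
    """Short record from a system whose true low-frequency gain is +gain."""
    temp = rng.standard_normal(N)
    dR = np.zeros(N)
    for k in range(1, N):
        dR[k] = rho * dR[k - 1] + (1 - rho) * gain * temp[k]
    dR += 0.1 * rng.standard_normal(N)      # measurement noise
    return temp, dR

def low_freq_gain(temp, dR, Nsamp=1024):
    """Crude low-frequency gain: average the empirical transfer function
    over the lowest few nonzero frequency bins."""
    X = np.fft.fft(temp, n=Nsamp)
    Y = np.fft.fft(dR, n=Nsamp)
    return np.mean(np.real(Y[1:6] / X[1:6]))

# False-positive count: how many of 200 short records from the
# positive-feedback process yield a negative estimated gain?
neg = sum(low_freq_gain(*simulate()) < 0 for _ in range(200))
print(f"{neg}/200 runs gave a spuriously negative low-frequency gain")
```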

Let us henceforward move further discussion to Nick’s new post.

4.88 years ~ 2 x QBO period

Billy Liar

Posted Sep 11, 2011 at 9:50 AM | Permalink

I don’t think there is a likely link with periodicities, as I explained here.

Those graphs from Bart are very interesting. This is very much complementary to how I have been investigating this.

Here is an overlay of Spencer’s graphic showing satellite data vs model results, on top of the lag response of Spencer’s simple model. (Here I mean using random inputs for rad and non-rad, not just the basic equation form).

http://tinypic.com/r/30sfupc/7

This was just trial and error to get the nearest fit, I’m not suggesting this is a result that shows what f/b really is.

What is relevant to Bart’s work is that this plot changes little as long as the feedback/depth ratio stays the same. This in fact represents the time constant Cp/lambda.

45/9.2= 4.891304

That is uncannily close to Bart’s result by a completely different approach.

Having a hook on the time constant of system response will be a great help in getting to lambda.

Hear, hear. Linear regression is not the sharpest tool in the shed, but it’s like everything gets belted with a pick-handle (which is what Steve is showing).

Bart

Compliments. Very interesting evidence of a damped impulse response. Your natural frequency of 0.0725 year^-1 = 13.8 year period. That is similar to the ~11 year Schwabe solar cycle or half the ~22 year Hale cycle. Recommend testing the solar cycle for the driving impulses to test amplitude and phase. I encourage you to explore using Ed Fix’s solar cycle model based on damped oscillation around the barycenter (~ Hale cycle), which seems to track remarkably well.

See: “The Relationship of Sunspot Cycles to Gravitational Stresses on the Sun: Results of a Proof-of-Concept Simulation”. Ch 14, p 335 of Dr. Donald Easterbrook, ed. (Elsevier, 2011) e-book

can be previewed (in short sections) at: ReadInside

Search for “355” or “barycenter” or “sunspot cycles”. See especially Fig. 6 and Fig. 7. See summary posted by Tallbloke, with his graph posted by David Archibald.

Bart

For frequency analysis of solar on temperature, see: Scafetta, N., Empirical evidence for a celestial origin of the climate oscillations and its implications. Journal of Atmospheric and Solar-Terrestrial Physics (2010), doi:10.1016/j.jastp.2010.04.015

Your cloud analysis could help provide the bridging model.

Exciting. This has to be one of the finest recent threads on CA. I’ve just clicked back to find no fewer than ten posts by someone called David in a sequence of eleven, with someone called Steve (Mc) the only outlier. Could there be a correlation between the number of Daves and thread quality? (Only r^2 > 0.01 need apply, in line with strict climate science norms.) And how might both be connected with the incidence of cosmic rays? I think we deserve to be told.

Steve – let’s leave cosmic rays out of this as they have nothing to do with Dessler v Spencer

Makes “Project Steve” http://en.wikipedia.org/wiki/Project_Steve look even more convincing.

Both significant: “david scientist” 74,700,000 vs “Steve scientist”, & “David” 4,150,000 vs “Steve” 1,200,000.

No question about it, more David, more quality.

(written while wearing a “radiant” smile)

I second Richard’s comment re: this thread. I feel like I am eavesdropping on a meeting of The Royal Society of London for Improving Natural Knowledge. This is truly a special forum. Thanks to our host and contributors. Time to hit the tip jar.

Bart has already done some work on the solar end of this:

http://tallbloke.wordpress.com/2011/07/31/bart-modeling-the-historical-sunspot-record-from-planetary-periods/

Which led to this:

http://tallbloke.wordpress.com/2011/08/05/jackpot-jupiter-and-saturn-solar-cycle-link-confirmed/

let’s try this again:

Bart

A few years ago I had a look at FFTs (using EXCEL) of temperatures:

e.g. http://img15.imageshack.us/img15/1127/ffts.jpg

others are available!

You will note that there is very little evidence in most of any solar influence.

in fact there were few common frequencies apparent in all!

After this I used a narrow band filter and swept the centre frequency from 0.5 to 300 years (with constant bandwidth) looking for peaks in amplitudes on the output.

Then summing all the amplitudes and frequencies at suitable phase got a fair reconstruction of the original (to be expected with enough freqs!)

I then tried manually adjusting the centre frequencies, amplitudes and phases of summed signals to get a “very good ” synthesized hadcrut3v global.

some of these results are shown here together with future predictions!!

http://climateandstuff.blogspot.com/search/label/temperature%20synthesis

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”.

Attributed to von Neumann


Bart, are you suggesting putting the data in a streaming format, looped, and seeing what the oscilloscope shows? Or perhaps the deltas, or both?

So it would appear that Dessler is currently re-writing, in haste, a paper that in its original draft has already been peer reviewed and accepted for publication.

Surely this raises all sorts of questions of why this paper was allowed to be fast-tracked in the first place?

And raises all sorts of questions as to the quality of the peer review…

Aren’t they most useful in the case of weak relationships? If r is close to 1 CIs are not that helpful, everything is clear anyway.

We lose 12 degrees of freedom here; I’m more and more worried about using anomalies.

Hmmm. interesting point. Here is a plot of pre-anomaly CERES clr and net with cld by difference. On a pre-anomaly basis, there is a high-amplitude annual cycle. CLD forcing is strongly negative and CLR forcing strongly positive.

It also seems that you get different trends depending on whether or not you take an anomaly first. The pre-anomaly trends are higher than the post-anomaly trend. CLR has an upward trend; CLD a downward trend.

I suspect the lag comes in as an artifact of the annual variations. As shown here, http://vixra.org/abs/1108.0032, 3 month lags come about through integration (or differentiation) of a cos or sin cycle, and also the sum of sin and cos terms of different amplitude can change the phase.

The thing is the phase shift is eliminated as a free parameter by considering the surface temperature as the integration of atmospheric forcing. The difference in the autoregression between the top of atmosphere and surface demonstrates they stand in this integral(differential) relationship.
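[Stockwell’s point about 3-month lags arising from integration can be checked in a few lines: integrating an annual cosine forcing produces a quarter-cycle (3-month) lag with no lag parameter inserted by hand. A sketch only, not the linked paper’s actual calculation – Ed.]

```python
import numpy as np

# Integrating an annual cosine forcing yields (up to a constant) a sine
# response: a quarter-cycle lag appears with no lag parameter put in by hand.
t = np.linspace(0, 1, 1200, endpoint=False)     # one year, monthly-or-finer grid
forcing = np.cos(2 * np.pi * t)
response = np.cumsum(forcing) * (t[1] - t[0])   # crude numerical integration

# Lag between the forcing peak and the response peak, in months
lag_months = (np.argmax(response) - np.argmax(forcing)) / 1200 * 12
print(lag_months)   # -> 3.0
```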

hi david. the annual variations are surprisingly large, aren’t they. I’ll bet that there might even be a noticeable daily cycle depending on the longitude of noon.

This oscillation looks like the MSU data displayed here. Is this the same data, or just closely related?

Well, nearly all geophysical data has an annual cycle related, directly or indirectly, to the seasonal cycles of solar insolation, with the Northern Hemisphere dominating most of the time because it has a stronger annual temperature cycle due to less ocean to dampen it.

Naturally atmospheric and surface temperatures vary through a year, and of course the radiation the Earth emits to space similarly varies, in part as a simple consequence of the fact that warmer bodies emit more radiation than cooler bodies, all other things being equal, and in part because of a large number of processes that vary with the seasons, directly or indirectly connected to the insolation cycles.

Of course, the way the Earth reacts to the annual redistribution of solar insolation as the seasons come and go 180 degrees out of phase (but not magnitude!) in each hemisphere is likely very different from how it reacts to a more uniform, sustained “climate forcing” – and it is the latter we are at this moment interested in.

Yes, though they are global variables there are considerable annual variations.

I just checked. The surface temperature has a lag peak at around 3 months relative to solar insolation, and the cloudR leads the global temps 3-4 months. So a 3-4 month lag would be 180 out of phase.

The data would make no sense until these dynamic relationships were sorted out.

David Stockwell, you are being modest with your short but important paper Sept 9 at 6.50 pm. I’d be fascinated to see more comments about it.

Re Steve’s graph Sept 9 at 10.25 am, I did blog a few days ago that care has to be taken when correlating data suffering manipulations such as truncating, centering, normalising, working with anomaly numbers, etc. It’s frustrating that others are more able to express themselves than I am. These actions are designed in part to make relationships more obvious to the eye when presented pictorially, but the same operations can affect the validity of the math.

Re annual variations, here are a few more. The wriggles in global CO2 are well known, but their main explanations are unconvincing. See

http://climate4you.com/ClimateAndClouds.htm#Tropical%20cloud%20cover%20and%20global%20air%20temperature (Borrowed from WUWT and earlier).

Geoff

Thanks – very interesting. Am I seeing things or are the mid and low level clouds out of phase? That should be significant for feedback analysis.

Is that phase difference related to: Krivova (2009)

David Hagen, Can’t answer as it’s not my specialty. I saw the paper elsewhere and threw it into the ring without judgemental comment in case it was of interest to specialists. I’m not aware of the nature of its past reception. I’ll wander off and read David Krivova now, before re-reading some geostatistics by the French David.

On that page, the graph: Tropical cloud cover vs Global surface air temperature appears to show declining cloud cover during the 1980s (resulting in increasing surface insolation) corresponding to increasing global surface air temperature.

The ISCCP data is questionable in many respects in terms of apparently showing a long term trend. Abrupt changes in cloudiness appear to occur with the introduction of new satellites or the repositioning of old ones:

Evan, A.T., A.K. Heidinger and D.J. Vimont, 2007. Arguments against a physical long-term trend in global ISCCP cloud amounts. Geophysical Research Letters, 34:LO4701.

David Stockwell – Rephrasing your statement:

The surface temperature lags the solar insolation by 3 months.

The global temperature lags the cloudR by 3-4 months.

Doesn’t that say solar insolation and clouds are in phase, with both leading the surface temperature? [snip – save for another occasion]

Showing the differences in lag between northern and southern hemispheres are 180 degrees out of phase would confirm that relationship.

David, yes, in phase, even though the chain of causation could be TSI->GT->CloudR. Illustrating that just because something happens to lead temperature doesn’t mean it causes temperature variations. We are only talking about annual periodicity. The GRF is a solar cycle periodicity. It’s coincident with the solar cycle, so its effect is indistinguishable from the phase of a sine wave alone.

David Stockwell

The causation from solar to galactic cosmic rays to low level clouds has been shown by evaluating the impact of Forbush events. See: Henrik Svensmark,Torsten Bondo,and Jacob Svensmark Cosmic ray decreases affect atmospheric aerosols and clouds GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L15101, doi:10.1029/2009GL038429, 2009.

Future evidence will be coming from: The Pierre Auger Observatory scaler mode for the study of solar activity modulation of galactic cosmic rays. The Pierre Auger collaboration 2011 JINST 6 P01003

Steve McIntyre

Re: “the annual variations are surprisingly large . . .there might even be a noticeable daily cycle”

If so, you might be able to pick up the Forbush event impacts on top of solar/cosmic ray driven cloud impacts using Bart/Stockwell type analyses.

Svensmark’s paper posted at spacecenter.dk : Cosmic ray decreases affect atmospheric aerosols and clouds

Speak of the devil, David.

Hello, David.

For background, the lag between insolation and surface temperature varies. For land it is 30 to 40 days. For ocean, it is about 3 months. And, the winter ice cover in the Northern Hemisphere seems to delay the warmup of large areas to values between those of land and ocean. And, of course, ocean area is greater in the Southern Hemisphere than in the North. The upshot is that the annual cycle has some complexity.

David Smith

Thanks – ~ in proportion to specific heat capacity. Any good reviews?

“hi david. the annual variations are surprisingly large, aren’t they. I’ll bet that there might even be a noticeable daily cycle depending on the longitude of noon.”

In looking at the three signals, it occurs to me that there could be a phasing error. The peak of solar insolation should directly correlate to the orbital position of the Earth, and the Earth hits perihelion on January 3rd each year and aphelion on July 3rd. The magnitude of the variation in direct insolation is upwards of 80 W/m2, which is significant no matter how you slice it.

As a spacecraft power systems designer this impacts both the available power to a spacecraft as well as the thermal control system. It has always bothered me that climate scientists seem to just arm wave this variation away while at the same time talking about the global impact of a 1.5 w/m2 variation in climate. I understand that the smaller variability is an integration of change over time but to ignore periodic changes of this magnitude irks me as an engineer.

Your graph Fig 1 on your ref PDF paper seems to have a mis-label. Black and red instead of black and black??

It would be interesting (and quite easy) to check how the properties of incoming stochastic process would change due to this anomaly operation. ‘Anomalies’ far from the reference period might be surprisingly high:

UC, can you expand on this? You said that the “properties of the incoming stochastic process” change – in what way? Are you saying that the anomaly operation is losing information useful in constraining the probability space of the future (or past) state? So when you say “‘Anomalies’ far from the reference period might be surprisingly high” – does this mean that the probability space far from the reference period is less dense and more broadly distributed than if you didn’t take the anomaly? Or am I misreading your comment?

Yes, in LS terms, trace(I-P_x) changes from T-1 to T-12. A linear change in the input will result in a staircase function in the output, with jumps only where the year changes, etc. How much it matters to the present discussion, I don’t know.
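[UC’s staircase remark is easy to demonstrate: feed a pure linear trend through the monthly-anomaly operation and the output is constant within each year, jumping only at year boundaries. A minimal sketch – Ed.]

```python
import numpy as np

# A linear trend sampled monthly over 10 years
months = np.arange(120)
x = months.astype(float)

# Monthly anomaly: subtract each calendar month's mean over all years
anom = x.copy()
for m in range(12):
    anom[m::12] -= x[m::12].mean()

# Within a year the anomaly is constant; it jumps by 12 at year boundaries
print(anom[:14])
```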

Yes, assuming that ‘weather noise’ is not iid, but something more realistic. ‘Autocorrelation is small’ says Brohan, but I’d like to see more evidence. The ‘anomaly tube’ in the video squeezes the data and it gets back to freedom when the tube is far away.

Steve McIntyre

You may also see evidence of underlying 22 year Hale or 60 year PDO cycles. While Nyquist-Shannon needs 2x period to identify, other studies could be used for supporting evidence. e.g., See Loehle & Scafetta 2011, and Adriano Mazzarella & Nicola Scafetta 2011

“I’m re-visiting this issue by repeating the regression ….but making [a] plausible variation ….and got surprising results.”

That’ll be “surprising” in the sense of “not in the least astonished”, I take it?

“(isn’t it absurd that blog posts on “skeptic” blogs provide better replication information than “peer reviewed” articles in academic literature)”

Hasn’t this gone without saying for nearly a decade now?

I could just about put any line with any slope through those data and not get a much worse fit.

Figure 1 looks very much like my ring density vs temperature plot for White Mtn bristle cone pines.

Those didn’t look linear either.

3-D scatter data plotter for excel with rotaty things.

http://www.doka.ch/Excel3Dscatterplot.htm

In regard to Dessler 2011 in press…

GRL state about papers in press:

“Papers in Press is a service for subscribers that allows immediate citation and access to accepted manuscripts prior to copyediting and formatting according to AGU style. Manuscripts are removed from this list upon publication.”

The AGU Authors Guide states: “Once the figures pass technical requirements, your final figures and text will be combined into a PDF file that is placed on the journal’s Papers in Press page. Papers in Press is a service for subscribers that allows immediate citation and access to accepted manuscripts prior to copyediting and formatting according to AGU style.”

The Publishing Guidelines state:

“An author should make no changes to a paper after it has been accepted. If there is a compelling reason to make changes, the author is obligated to inform the editor directly of the nature of the desired change. Only the editor has the final authority to approve any such requested changes.”

As the changes suggested by Dessler are greater than “copyediting and formatting” it seems the paper must be withdrawn and a new version submitted and reviewed. Any comment?

From your quoted paragraphs, it seems like the editor can wave the changes through – and he is, no doubt, getting pressure from Lord Trenberth (may peace be upon him) to do just that. However, with all of the scrutiny that this paper is getting, the editor would be foolish to do so. Me thinks the editor is in a bit of a bind.

Journals do discourage substantive changes post-approval. It’s up to GRL here. I suspect they’ll publish it with few if any changes. None of the matters that Dessler seems to agree to amend require withdrawal. A “note added in press” or an erratum would be ample.

Yes, changes can be made at the editor’s discretion. But based on what appears at Roy Spencer’s site, they seem a little more substantive than Nick suggests, and as such would require additional review. As such, the paper in its current state should be withdrawn and the review process started afresh. This obviously would be a major embarrassment to the editor of GRL, and makes the resignation of Wolfgang Wagner look decidedly immature.

It seems the peer review process is well and truly broken.

MarcH:

Here is what Dessler told Spencer he will change:

“I’m happy to change the introductory paragraph of my paper when I get the galley proofs to better represent your views. My apologies for any misunderstanding. Also, I’ll be changing the sentence “over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming” to make it clear that I’m talking about cloud feedbacks doing the action here, not cloud forcing.”

Somehow I don’t think the paper will be withdrawn over either having or not having those changes included. Although if they don’t allow the changes to clarify Roy’s views he will use it as a cudgel in his next paper on the subject. Good luck getting that published Roy!

Spencer mentions that Dessler made an error in his calculation of the radiative to non-radiative ratio. Dessler seems to have accepted that there is an error. We simply don’t know how significant the error is. Spencer seems to think it’s big, writing:

We’ll need to wait and see.

===============

Good luck getting that published Roy!

================

I thought scientific peer review was about assessing the quality of a paper, not enforcing issues in academic politics. Well, I didn’t really think that, given what I have seen of academic reviews, but climate scientists claim that it is.

Note the various underlined bits. Spencer apparently has reason to believe that Dessler is making alterations to more than just the bits that misrepresented his views. Dessler is apparently going to change his calculation of the nonradiative/radiative ratio from about 20:1 to something less (how much?) which is a change of substance.

Bart,

Your graphic results are very interesting. I second Steve McIntyre in suggesting you provide a more detailed description and what implications you can draw from the analysis.

~4.88 years sounds like the response time of the ocean’s well mixed layer to an impulse (eg a big Pinatubo driven dip followed by an overshoot/undershoot sequence). Does that make sense to you?

Sure, it seems reasonable to me. But, I am not very familiar with all the climate interactions. I’m just a systems analyst.

Bart

A few years ago I had a look at FFTs (using EXCEL) of temperatures:

e.g. http://img15.imageshack.us/img15/1127/ffts.jpg

others are available!

You will note that there is very little evidence in most of any solar influence.

in fact there were few common frequencies apparent in all!

After this I used a narrow band filter and swept the centre frequency from 0.5 to 300 years (with constant bandwidth) looking for peaks in amplitudes on the output.

Then summing all the amplitudes and frequencies at suitable phase got a fair reconstruction of the original (to be expected with enough freqs!)
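The swept-filter idea above can be sketched in a few lines of Python. The sketch below uses a brick-wall FFT pass band in place of the commenter’s analogue-style narrow band filter; the two-component test signal and every parameter value are invented for illustration, not taken from the HadCRUT data:

```python
import numpy as np

# synthetic monthly series with two known oscillations buried in noise
fs = 12.0                                  # samples per year (monthly data)
t = np.arange(0, 150, 1 / fs)              # 150 years
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * t / 9.0)           # 9-year component
     + 0.5 * np.sin(2 * np.pi * t / 60.0)  # 60-year component
     + 0.3 * rng.normal(size=t.size))      # white noise

X = np.fft.rfft(x)
f = np.fft.rfftfreq(x.size, d=1 / fs)      # frequencies in cycles/year

periods = np.linspace(2, 100, 200)         # centre periods to sweep, years
bw = 0.02                                  # constant bandwidth, cycles/year
amps = []
for p in periods:
    band = np.abs(f - 1.0 / p) <= bw / 2   # boxcar pass band around 1/p
    y = np.fft.irfft(X * band, n=x.size)   # narrow-band filtered output
    amps.append(np.sqrt(np.mean(y ** 2)))  # output RMS at this centre

# `amps` peaks near the embedded 9- and 60-year periods
```

Sweeping the centre period and recording the output RMS reproduces the “peaks in amplitudes” search described above; summing the recovered sinusoids at the right phases is then the reconstruction step.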

I then tried manually adjusting the centre frequencies, amplitudes and phases of summed signals to get a “very good” synthesized hadcrut3v global.

some of these results are shown here together with future predictions!!

http://climateandstuff.blogspot.com/search/label/temperature%20synthesis

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”.

Attributed to von Neumann

This issue of lags/phase in relation to the behaviour of clouds and SST is a fascinating one!

I don’t know whether it is a factor, but back in April 2009 I showed, using the MODIS chlorophyll a data, that there appeared to be two distinct NH oceanic cyanobacterial consortia producing two annual phases of blooming in the NH oceans (i.e. blooming over two fairly distinct water temperature ranges). But strangely the growth bimodality was not apparent in the SH oceans (at least according to satellite measures of primary productivity).

http://landshape.org/enm/oceanic-cayanobacteria-in-the-modern-global-cycle/

At the time I wondered (and still do) whether this is a modern adaptation to the hemisphere where most anthropogenic CO2 has been increasingly generated over the last 200 or so years?

As far as I (still) know this observation does not appear anywhere in the modern scientific literature on oceanic cyanobacterial productivity.

Nevertheless, given the powerful role of the emissions by cyanobacteria of dimethyl sulfide (DMS) – the principal nucleant of (especially low/medium level) oceanic cloud this issue of lags/phase in cloud response to SST might also have something to do with oceanic cyanobacterial productivity, given the intimate relationship between that productivity and monthly/annual SST ranges.

Steve, This is the general direction of my thoughts also. Those annual wriggles are reported for a number of data sets, from CO2 to SST etc. I’ve not been convinced by the “NH trees get leaves” explanation. The more that can be added to explain the wriggles, the better. I can’t see how CO2 can be described both as a well-mixed gas and yet one that retains a microstructure from Barrow Alaska through Mauna Loa and Cape Grim to the South Pole. I also think that discerning a lag from a lead can be hard and that the more independent lines of causation there are, the better.

Steve Short

Interesting observations. Wonder if the N/S differences in annual pulsations affect the cloud feedback of Dessler/Spencer analyses. Do Fred Haynie’s pulsation analyses provide any clues to your NH bimodality vs not in SH? e.g. Temperature/CO2/Nutrient driven differences in growth rate vs nutrient limitations? OR do the differences in chlorophyll affect the albedo/absorption?

David Hagen

Thanks very much for the link to Fred Haynie’s PP presentation on annual pulsations in CO2. Do you have an email address/URL for Fred? I’d very much like to get in touch with him. My apologies for the fact that all the plots in my April 2009 Niche Modeling piece have since been lost. David Stockwell tells me it was some sort of backup failure by his site’s service provider. However, I have kept an archive of all my plots for that article. They very clearly show that in the NH oceans at least there is, each year, a major and minor phase bimodality to both chlorophyll a and diffuse seawater attenuation coefficient at 490 nm (both measures of cyanobacterial productivity) bracketing respectively the unimodal peak in SST with lags/leads which correspond very closely to those in the S&B work. These plots can be very easily generated at the NASA Giovanni site (using the MODIS, SeaWIFS and Aqua satellite databases).

http://gdata1.sci.gsfc.nasa.gov/daac-bin/G3/gui.cgi?instance_id=ocean_month

http://dust.ess.uci.edu/ess_bnd/ppr/ppr_MeN06.pdf

Steve – You’re welcome.

See Fred’s home site: http://www.kidswincom.net/ Encourage you both to publish your findings.

Comparing TSI & neutron count vs ice extent vs CO2 pulsing and ocean temperature lags may help identify causality. The summer driving and Christmas economic activity may help provide further differentiation of anthropogenic drivers.

David, Reverse causality. Are the wriggles the consequence of cloud properties?

Bart,

I had some more ideas on this analysis.

Firstly, we should remember that like Steve in this post, we’re doing what Dessler said we shouldn’t do – subtracting clear-sky from total to get the effect of clouds. It gets a big wv bias.

Steve did it, and got a negative slope. You’re finding a generally negative impulse response.

Now I think these can be related. The dc analysis of Dessler assumes that the response is immediate. But you can relate the areas underneath. The area under the impulse response can be compared to the regression coefficient. It’s a total response.

So I looked at the tapered version, which I think is more reliable. I integrated the smoothed hs that you plotted. No subtle integration formula – just adding. I got -2.27 W/m2/C – very large (negative).

But this would be sternly deprecated by Mr Briggs et al. So I integrated just h. That came to -0.3 – just at the edge of Dessler’s range.

I found the reason for the difference. I assume Matlab, like R, just omits smoothed values where you can’t use the whole filter range. So with a cone filter like yours, end values are pretty much shut out. And the small time values for h are large positive.

So I think the h integral value is correct.

I agree with you that this is the correct way to take account of lag. But we haven’t dealt with causality. The FFT analysis relates past and future indifferently.

However, we don’t have a clear causality here. I think S&B and L&C are saying it goes both ways. So I don’t know what to make of that.

“The area under the impulse response can be compared to the regression coefficient.” Indeed. The area under the impulse response is, in fact, the dc gain in the frequency response.
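The identity both commenters accept here, that the area under the impulse response equals the dc gain, is a one-line check: the zero-frequency DFT bin of any sequence is its sum. A toy verification (the first-order decay below is invented purely for illustration):

```python
import numpy as np

# hypothetical impulse response: a discrete first-order decay whose
# coefficients sum to ~1 (a unity-dc-gain low-pass)
n = np.arange(256)
h = (1 - 0.9) * 0.9 ** n

area = h.sum()                    # area under the impulse response
H = np.fft.fft(h)                 # frequency response samples
dc_gain = H[0].real               # zero-frequency (dc) bin

# H(0) = sum over n of h[n], so the two agree to machine precision
```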

“So I looked at the tapered version, which I think is more reliable.” It isn’t. Tapers smooth things so that you get less high frequency jiggling. But, they decrease the resolution at low frequencies.

“I got -2.27 W/m2/C” I expect you are biased down by about a factor of four by your tapering.

“That came to -0.3” W/m2/C? Are you keeping your units straight? The hs is just a smoothed version of h. You should integrate to essentially the same value, as smoothing and integration are both linear operations.

“I found the reason for the difference. I assume Matlab, like R, just omits smoothed values where you can’t use the whole filter range. So with a cone filter like yours, end values are pretty much shut out. And the small time values for h are large positive.” Ah, now I see. Unfortunately, your intuition is failing you. The frequency response I plotted on a log-log scale necessarily omits the zero frequency (dc gain) singleton value, because it is at 10 to the minus infinity years^-1. The dc gain estimated is, in fact, much smaller than the nominal immediately higher frequency sample.

But, this is a continuous system, and the frequency response is continuous – it cannot suddenly step to a completely new value at zero. The reason the dc value itself is ambiguous is because we are dealing with anomalies. The dc information has already been removed from these data streams. But, because the frequency response is a smooth, continuous function, we can infer that the true dc gain is continuous with the next several frequency samples. Ergo, the true factor is about -9.5 W/m^2/degC.

“The FFT analysis relates past and future indifferently.” That is false. Natural causal systems generally have decreasing phase response with frequency. If we tried to invert this system, we would get increasing phase. Ergo, we know we are indeed inferring the correct direction of causality.

Bart,

A few queries:

1.

“The dc information has already been removed from these data streams.” Do you mean that the mean (of temp, dR) has already been subtracted? I think that would be a good idea. I had modified my code to do it, but it didn’t make a huge difference. It’s not in your code – maybe preprocessing?

2. I didn’t get the bit about log plotting. I’m talking about where the filter is implemented: filter(c,1,flipud(h)). There are numerical values of h there, and the first few are positive. The default filter behaviour in R is to not allow the filter to cross the end (t=0). That was the cause of the discrepancy. Positive values near 0 were lost. It occurred to me that I should have used the “circular=TRUE” option, which wraps around. Do you know which is the default in Matlab?

3. I realised that the integral of h should be just the zero-frequency value of its DFT, i.e. Y(0)/X(0). That is itself just the ratio of the integrals of dR and temp. I’m still trying to work that out.

4. Comments: yes, my 0.3 was W/m2/C. That was after integrating over your 50 year period. Choosing different comparable periods makes some difference. Y/X has big spikes, reflected as long-time oscillations in h.

5. I’d insist FFT analysis is non-causal. It has to be – it’s periodic. But I agree that afterward you can try to infer causality from the phase.

1. “Do you mean…” I mean that the temperature and cloud data have arbitrary dc offsets from nominal values from equilibrium. We do not know a priori what those offsets are, but we can infer their ratio based on the estimated transfer function samples near dc.

“…it didn’t make a huge difference.” That is because they are both near zero mean already, and arbitrarily so. In fact, for numerical improvement, we should actually take the Y(0)/X(0) term out and just replace it with zero in the inverse FFT calculating the impulse response. We can infer the dc gain from the frequency samples near zero and then just offset the impulse response so that its integral is the inferred dc gain.

2. This issue could easily get confused. Bottom line: you need to infer the dc gain from the estimated transfer function. And, you need to nix the taper, or you are destroying low frequency information with no justification.

One thing to consider: It appears you have implemented the filtering forward in time. In that case, your early behavior is going to be contaminated with startup transients. You notice I used the MATLAB function “flipud”, which flips the data around. This means I am actually filtering the data backwards in time, then flipping the result again to get it forwards:

hs = flipud(filter(c,1,flipud(hw)));

That puts my startup transients at the end, where the impulse response is near zero and it doesn’t affect things much, and the transients have died out by the time I get to the “meat” of the function.

Do not use the “circular=TRUE” option. You don’t want the ends wrapping around and influencing each other.
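In Python, with a made-up noisy impulse response standing in for the estimated h, the flip-filter-flip trick looks like this (scipy’s `lfilter` plays the role of MATLAB’s `filter`; all values are illustrative):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
h = np.exp(-np.arange(512) / 50.0) + 0.1 * rng.normal(size=512)
c = np.ones(30) / 30.0                     # 30-point FIR smoother

# forward smoothing: the startup transient sits at small times,
# exactly where the impulse response carries its information
hs_fwd = lfilter(c, 1, h)

# flip-filter-flip (MATLAB: hs = flipud(filter(c, 1, flipud(hw)))):
# the transient now lands at the far end, where h has already decayed
hs_bwd = lfilter(c, 1, h[::-1])[::-1]

# away from the ends the two outputs are just shifted copies of each
# other; only the location of the startup transient differs
```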

3. As I said, you cannot rely on Y(0)/X(0). You need to infer the true value from frequency samples above zero frequency.

4. Stop tapering. The spikes are indicative that you have changed the correlations in a bad way. My “bad” runs using artificial data always have big spikes and ugly frequency responses.

5. Well, if you insist… so long as you agree.

Of course, “startup transient” with this 30 point FIR filter means the time needed to cover the first 30 data points within the weighting function.

Bart,

I’m now more bothered about causality, and in particular the step where you taper h:

hw = h*w

where w is zero in about 4097:8192 of the range of h values (1:8192).

Now h is initially a bi-directional impulse response. A pulse in temperature is associated with changes in dR in past and future. That’s just the original transfer function model which is fitted to the data.

But with that taper you are, by periodicity, forcing h at small negative times to be zero and h to be one-sided. You’re imposing a causality that wasn’t there in the data.

Then you go on to FFT hw, getting the mag and phase diagrams shown. And as you say, the phase goes to about -180, suggesting causality. But I think it is just the causality you forced with the h*w step.

And of course this is where you get the very long time constant and the quadratic model.

I get the same result without windowing. The windowing mostly helps out the higher frequency region, of which I do not show much because it is poorly behaved with no particular readily recognizable form – I believe it is reasonable to conclude that this region is dominated by other processes independent of the essential temperature-cloud loop.

The phase relationship establishes the direction of causality. A decreasing phase is associated with phase lag, increasing with phase lead.

BTW, Nick, my hidden posts have appeared. See in particular Bart @ Sep 10, 2011 at 1:42 PM. I agree that it is a little tenuous pulling out such long term correlation from such a short segment of data. But, as I said, I can often do it with artificially generated data, too, so I think this is a lucky set of data.

There are probably more robust methods for what we are doing here under the rubric of deconvolution. In the future, I will be looking into the literature, and suggest you do, too. But, in the meantime, having uncovered what looks to be a very strong negative feedback of -9.5 W/m^2/degC, I think the onus is on those who believe there is a weak or positive feedback to prove it. From my viewpoint here, I think that is very unlikely.

Any good book on adaptive filter theory should cover the topics. System identification as well as deconvolution (blind or otherwise) are what you want, as Bart suggests. Adaptive Filter Theory (by Simon Haykin), though terse almost to the point of requiring the knowledge before reading, is regularly considered one of the best resources. I believe he has a deconvolution book as well.

There are also time domain parallels to what Bart is doing that may be useful for verification purposes. I do believe record length as well as stationarity/linearity issues need to be fully explored.

Mark

Re: Bart (Sep 11 18:43),

Bart,

I found that strong negative feedback of -9.4 W/m2/C to be iffy. That’s the number with 124 months of data. But if you drop 4 months of data at the start (same algorithm otherwise, subtracting means), then the number drops to -15.06 W/m2/C. And if you drop that tapering of h (h*w) it rises to -12.2 W/m2/C for the 124 months.

In fact, if you omit the tapering of h (h*w, for which I can see no justification), there’s a simple formula for the number you are quoting. It’s just the ratio of regression slopes vs time:

slope(dR)/slope(temp).

And I can’t see any basis for considering that a feedback.

The basis is the very well defined 2nd order response evident in the transfer function. Such responses are ubiquitous in engineering. Newton’s laws naturally lead to them because f = m*a, the second derivative is proportional to the force, and when you wrap feedback around such a system, you get a 2nd order response such as this. Electrical systems with resistors, inductors, and capacitors generally produce such responses. Solutions of elliptic partial differential equations commonly can be expanded in a series of 2nd order responses, a so-called modal expansion. Such responses are ubiquitous.

“But if you drop 4 months of data…” Since we are already at the ragged edge of having enough data to identify the process, I think dropping data or otherwise messing with anything which effectively reduces the time interval (like tapering) is a very bad idea. I would be willing to bet that, if you plot out the frequency response from those runs, you will see them behaving very erratically.

That is a key point. When I generate artificial data across the same time interval, the times when the analysis works, it gives a nice, well-behaved frequency response, like the real-world data does. When it doesn’t, the frequency response is erratic. Thus, I think the “erraticity” of the frequency response is a sort of metric of how reasonable the estimate is.
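A minimal version of the self-test Bart describes can be sketched as follows: drive a known 2nd-order system with white noise, add measurement noise, estimate the transfer function from the cross-spectrum, and check it against the truth. All parameters here are invented for the illustration; this is a generic identification check, not Bart’s actual code:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
n = 4096
x = rng.normal(size=n)                     # surrogate input ("temperature")

# known 2nd-order low-pass standing in for the process to be identified
b, a = signal.butter(2, 0.1)
y = signal.lfilter(b, a, x) + 0.05 * rng.normal(size=n)  # output + noise

# cross-spectral (H1) estimate of the transfer function: H = Pxy / Pxx
f, Pxy = signal.csd(x, y, nperseg=512)
_, Pxx = signal.welch(x, nperseg=512)
H_est = Pxy / Pxx

# true response at the same frequencies, for comparison
_, H_true = signal.freqz(b, a, worN=2 * np.pi * f)

# a smooth |H_est| tracking |H_true| is the "nicely behaved frequency
# response" criterion; an erratic estimate flags a bad run
```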

Nick writes “I found that strong negative feedback of -9.4 W/m2/C to be iffy. That’s the number with 124 months of data. But if you drop 4 months of data at the start (same algorithm otherwise, subtracting means), then the number drops to -15.06 W/m2/C. And if you drop that tapering of h (h*w) it rises to -12.2 W/m2/C for the 124 months.”

I’m way out of my depth here…but can you repeat this by successively dropping more and more data and see that it remains negative at all times (except maybe right at the end)? Or is that not helpful?

TT,

It bounces around. But Bart has a point that we really already have too little data and can’t drop much more. I’m just trying to illustrate that the number is rubbery. I think the more significant observation is that without the taper of h, it’s just the ratio of the trends of dR and temp. We have a better feel for how stable they should be. That ratio will switch sign if either trend switches. And if it’s the temp trend that changes, being the denominator, the ratio will get very large (positive or negative) before that happens.

In fact the sign of the Hadcrut3 temp trend over the last ten years has famously wavered in sign. Whenever it goes negative, the logic here says that feedback is positive, since then dR and T trends have the same sign.

Nick,

There is something very troubling with feedback that switches sign in synchronization with sign-change in a trend. Does this occur in “natural” systems?

Well, I don’t think it’s a feedback. I’m just commenting on how the arithmetic works out. That’s what these numbers actually are.

I think the notion of impulse response won’t work for another reason. We’ve been arguing about causality, and the feeling that an impulse response should give one variable in terms of the past values of another. But feedback needs “two-way causality”. If past temp determines present dR, then present dR needs to feed back into whatever determined it – that’s how it works. But present dR can’t feed back into past temp.

It’s hard to get away from the usual dc (instantaneous) feedback that Dessler used.

Nick writes “But present dR can’t feed back into past temp.”

Not “past temp” per se, but with the thermal inertia of the ocean (along with currents that change the location of those forcings) surely dR can feed back in the manner needed?

Nick, you are assuming dR doesn’t change slope along with the temp. But, of course, it must. You’ve disproven the system to yourself by imagining a scenario in which there is no system.

“I think the notion of impulse response won’t work for another reason. We’ve been arguing about causality, and the feeling that an impulse response should give one variable in terms of the past values of another.” Well, I guess those degrees I got and 30 years of working with feedback systems is down the drain. Nick says they cannot exist. Sheesh.

You are not thinking about the feedback system properly. Of course dR feeds back into temp. But, there are other processes feeding into temp, too, so the correspondence is not unique. The system diagram looks like this.

“It’s hard to get away from the usual dc (instantaneous) feedback that Dessler used.” What a profoundly ridiculous statement. You really have no inkling of the whole field of control theory at all, do you?

“But, of course, it must” – within the relevant time frame dictated by the bandwidth of the feedback system.

Nick Stokes writes

================

But feedback needs “two-way causality”. If past temp determines present dR, then present dR needs to feed back into whatever determined it – that’s how it works. But present dR can’t feed back into past temp.

==================

Look at a diagram of a simple feedback system. The output produces an error signal which is fed back to create a modified output, and so on forever.

You don’t seem to grasp how this works. There is no feeding back into the past. The output creates an error signal and the error signal produces an output. Everything goes forward in time.

Nick Stokes writes

================

But feedback needs “two-way causality”. If past temp determines present dR, then present dR needs to feed back into whatever determined it – that’s how it works. But present dR can’t feed back into past temp.

==================

Just to add that signal processing chips are special purpose processors that “delay, multiply, add” very quickly, over and over again, forever. They do not go back into the past.

“There is something very troubling with feedback that switches sign in synchronization with sign-change in a trend.” It’s only changing in Nick’s mind, not in reality.

“It’s only changing in Nick’s mind, not in reality.” Not in my mind, Bart, but as a simple result of the arithmetic of your algorithm.

An analysis of the full data set gives:

With your h trunc, H[1] = -9.64 W/m2/°C (which you’ve been quoting)

Without trunc, H[1] = -12.22 W/m2/°C

Trend dR -0.0605 W/m2/yr; Trend T 0.0049 °C/yr; Ratio -12.22 W/m2/°C

But if you take out the first 10 months of data (all of year 2000) that’s enough to make the trend of temp negative. Then your analysis gives:

With your h trunc, H[1] = 10.75 W/m2/°C

Without trunc, H[1] = 18.71 W/m2/°C

Trend dR -0.0509 W/m2/yr; Trend T -0.0027 °C/yr; Ratio 18.72 W/m2/°C

So the trend in T switched, and suddenly we have big positive “feedback”.

You might like to think through the implications of using such a simple criterion. You can form the ratio of trends of any two series, and quote such a figure. But it doesn’t prove an association.

If H[1] is your dc value, it is meaningless. I’ve already been over this.

No, H (FFT of h) is perfectly smooth near zero. Here are some typical values:

> H[1:6]

[1] 18.71285+0.00000i 18.71285-0.15041i 18.67962-0.29995i 18.62446-0.44743i

[5] 18.54773-0.59190i 18.44988-0.73243i

What is your -9.4 W/m2/C if not the limiting low-frequency value of H?

Bart, I should add that I interpolated the zero value of Y/X from nearby values, which are also perfectly smooth. In fact, this number H[1] is also that limiting value. That is why what you are calculating is just the ratio of trends.

“Without trunc H[1]=-12.22 W/m2/°C” Don’t use that data. It’s just noise and extraneous processes. The early part of the impulse response reveals a standard 2nd order type response. Once you see a signature like that, you know that is the thing you are after. You do not want any data unrelated to it funking up your estimate. You can see where it dies down, so stop using data after that point. But, use a smooth taper in order to maintain your resolution.

“But if you take out the first 10 months of data…” Don’t take out any original data. You have no basis for doing so.

“You have no basis for doing so” …and you need every bit of data that you have, from which you can get longer term correlations.

Bart, I want to make this comment in order to help reassure you that your analysis has been useful and appreciated by many. Nick can see that you have added to Steve’s analysis and helped shoot holes in Dessler’s critique of Spencer, and he doesn’t like it one bit. He knows what you say is true; he is just trying to cast doubt in the minds of those who read this blog and may not have the education in math etc. to follow the argument. You can look in the thread Steve had titled “dirty laundry II, contaminated sediments” and get an earful of Nick’s efforts at confusing the issue. See here

https://climateaudit.org/2011/07/06/dirty-laundry-ii-contaminated-sediments/

Thanks for the analysis!

“Whenever it goes negative, the logic here says that feedback is positive, since then dR and T trends have the same sign.” Not in the relevant frequency band. Control systems are all about frequencies, Nick. You can’t always make out what is going on in the time domain. It’s why we use the frequency domain.

Welcome to the party, Bart. Sigh…

Mark

It seems that you have some interesting results – is anyone going to make a summary and present them to Spencer?

Spencer should be aware. I did post about it on his blog. I think there is going to be a lull while people come to grips with the analysis, then there will be a ramp up in commentary and analysis. My guess – they will find a hundred or a thousand irrelevant objections, such as Nick has brought up here, to deny the reality and stick their collective heads in the sand.

Mark – this has really been my main beef from day one. The mainstream climate community seems to be virtually illiterate in the tools and standard forms which have been the mainstay of controls engineering since the days of Nyquist and Bode, not to mention Kalman. They treat the data and processes as though they were deterministic, and evince little understanding of the effects of phase delay, the subtleties of gain/phase relationships, the behavior of lead and lag networks, the necessary and sufficient conditions for robust stability and well behaved and smooth evolution of processes, or just basic feedback principles in general.

They do not seem to understand the reduction of sensitivity inherent in negative feedback systems. Everything is a straight line, and the only tool you need is a linear regression. What you and I see as normal feedback dynamics they see as spontaneous, causeless, random motion.

They are reinventing the wheel as they go along, and not doing a terrifically good job of it. You can’t teach an old dog new tricks. I don’t think things are going to get better until a new generation of climate scientists, schooled in the proper subjects, comes to the fore.

Bart…I was feeling your frustration (along with Mark T)…Nick seems to be pettifogging but not really demonstrating that he knows what he is talking about. My take is that he wanted to ground the discussion in the “physics” but that he is bound up in his conception of what the “physics” are, rather than taking on board a new way of approaching the topic. I have been awaiting a deus-ex-machina intervention from Rabett, explaining how he knew all this all the time.

I mean, doing a linear regression on starkly, painfully obviously phase-lagged variables? Come on!

And I think this is why there is a need to track climate variables on an official, quality-controlled basis, as we do for economic aggregates, if this is a really important subject.

“I mean, doing a linear regression on starkly, painfully obviously phase-lagged variables? Come on!” We’re at the stage where thread order is randomised, so I don’t know if that’s a reply to this. But it sounds like it.

Of course, plenty of people (including me) calculate regression slopes of temp. But no need to defend it here, because that’s what your “feedback” is. The simple ratio of the two regression trends. That’s just arithmetic. Not my choice.

While this is a useful, powerful, and appropriate method, you need to be cautious interpreting the low frequencies at this record length, surely. Where the Bode plot magnitude rolls over can be due to data length.

Another potential problem that hasn’t been mentioned is that a system being driven by an 11-year periodic forcing will respond at that frequency, even though it’s not the natural period. So how can you be sure of estimates of rise/decay time?
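This point about forced response is easy to illustrate directly: a 2nd-order system with (say) a 5-year natural period, driven at 11 years, settles into oscillation at the forcing period, not its own. The system parameters below are arbitrary, chosen only to make the sketch concrete:

```python
import numpy as np
from scipy import signal

wn = 2 * np.pi / 5.0                       # natural frequency, rad/year (5-yr period)
zeta = 0.3                                 # damping ratio
sys = signal.TransferFunction([wn ** 2], [1, 2 * zeta * wn, wn ** 2])

fs = 12.0                                  # monthly sampling
t = np.arange(0, 200, 1 / fs)              # 200 years
u = np.sin(2 * np.pi * t / 11.0)           # 11-year periodic drive
_, y, _ = signal.lsim(sys, u, t)

# after the transient dies out, the spectrum peaks at the forcing period
seg = y[t > 50]
f = np.fft.rfftfreq(seg.size, d=1 / fs)    # cycles/year
P = np.abs(np.fft.rfft(seg - seg.mean()))
peak_period = 1.0 / f[np.argmax(P[1:]) + 1]
```

The natural 5-year ring-down only shows up in the transient; estimating rise/decay times from a record dominated by a strong periodic drive therefore needs care.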

I don’t think they even realize that feedback implies poles.

I’m guessing you haven’t been following long enough to be bitter yet. It is stunning. I hope Spencer is open to expansion.

Stockwell: I agree. We have mentioned several of the potential pitfalls along the way.

Mark

No, Nick, that wasn’t directed at you, but at Dessler’s “analysis” purporting to show positive feedback. Hence the link. A phase plane plot of lagged frequency localized variables gives a Lissajous oval whose major axis can point in any direction depending on the phase, and a myriad of continuously varying spirals when you essentially have a continuous frequency spread extruded through a nonlinear phase characteristic. You can get anything from a linear regression on the data that way.
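The phase-dependence Bart describes can be seen with two pure sinusoids (a toy construction, not the satellite data): the OLS slope of y on x comes out as cos(phi), so the fitted sign is set purely by the lag.

```python
import numpy as np

t = np.linspace(0, 10 * np.pi, 2000)       # five full cycles
x = np.sin(t)
slopes = []
for phi in (0.0, np.pi / 2, 3 * np.pi / 4):
    y = np.sin(t + phi)                    # identical signal, phase-lagged
    slopes.append(np.polyfit(x, y, 1)[0])  # OLS slope of y on x

# the slope tracks cos(phi): +1 in phase, ~0 at quadrature,
# negative once the lag passes 90 degrees
```

In a scatter plot each (x, y) pair traces the Lissajous oval mentioned above, with the major axis tilting as phi changes.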

David – I’ve spent a lot of time addressing your concerns on this page. Basically, it comes down to this: A) I have given code for generating artificial data with the same low frequency correlation as evaluated for the real-world data; B) with this, I am able to generate artificial data sets with the same time span and properly identify them using the same algorithm; C) the algorithm does frequently fail to properly identify the artificial processes, but the frequency responses in those cases exhibit distinctive erratic behavior which tips you off to it being a poor estimate – the fact that the real world data response estimate is very nicely behaved suggests that the input data has a good frequency spread, and therefore the estimate is likely valid; and D) there is a very definite need to formulate better algorithms to deal with short data records with relatively long correlations and increase confidence in the results.

Nick sez: “But no need to defend it here, because that’s what your ‘feedback’ is. The simple ratio of the two regression trends.” You are nowhere near getting it. The zero frequency point here is arbitrary, because the data sets are not zeroed with respect to the actual equilibrium points, which are unknown. The -180 degree phase shift conclusion is based on all the other low frequency data points which show a definitive -180 degree phase shift. These ARE NOT all dependent on the regression trend.

You apparently do not understand the frequency response, and are making broad and erroneous claims based on your misunderstanding.

No, Bart, you are not getting it. I actually understand frequency response very well – better, I think than you do. But you are not dealing with my basic proposition. These numbers you are quoting, which are perfectly well characterised by H[1], but can be got in other ways if you want, are just the ratios of the OLS linear trends of dR and temp. There’s no frequency analysis required to verify that.

It’s slightly confused by your illegitimate truncation of h. But if you take that out, the agreement is to four figures or more. Works every time.

“But if you take that out, the agreement is to four figures or more.”

The object is not to get agreement!!! The object is to estimate a response which is hidden by noise and extraneous processes! I do not want to recreate noise! I do not want to recreate other random inputs! I want to determine the underlying relationship between the input variable and the output using imperfect measurements of both!

Do you ever read anything I write? I have gone over and over and over this same ground. I explain things to you, you ignore my explanations, and then have the temerity to assert that you know this stuff “better, I think, than you do.” This has become a sad little joke.

“…are just the ratios of the OLS linear trends…”

This is only an approximation which holds very near zero frequency because of L’Hopital’s rule. The values at higher frequencies, where the gain plot depicts the almost flat passband of the response, are related to other properties. It all holds together as a whole. This is a VERY commonly encountered response type in the natural world.

Besides which, of course the trends are related. Why shouldn’t they be? If the temperature series changes direction, so will dR. But, not right on a dime. There will be a lag of several years before it becomes apparent – you are dealing with a time constant near 5 years! And, if the driving force changes again in that time, then the response will be attenuated, because that is motion at frequencies outside of the passband.

This is what filters do. This is what they are for.

“The values at higher frequencies, where the gain plot depicts the almost flat passband of the response, are related to other properties. It all holds together as a whole.”

I see now a way to make this clear. Of course, the very low frequency stuff is very much like a linear regression. The entire Fourier transform is a regression against sinusoids of varying frequency. At very low frequency, you are essentially regressing against a linear trend over the span of the data, because sin(eps*t) ≈ eps*t (and, of course, cos(eps*t) ≈ 1 but, since these time series are very nearly zero mean, that does not have much effect). But, as you get to higher frequencies, you are regressing against progressively more sinusoidal signals. The fact that the progressively curved regressors yield virtually the same result, in a manner which is characteristic of a very widespread and common type of system response, tells you that you have latched onto something significant.
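[The point that the DFT is literally a least-squares regression against sinusoids is easy to verify numerically. A minimal numpy sketch – the series x and the bin k here are arbitrary illustrations, not the CERES data:]

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
t = np.arange(N)
x = rng.standard_normal(N)          # arbitrary real series (stand-in data)

k = 3                               # any nonzero bin below Nyquist
c = np.cos(2 * np.pi * k * t / N)
s = np.sin(2 * np.pi * k * t / N)

# Ordinary least squares fit of x against this one sinusoid pair
A = np.column_stack([c, s])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)

# The DFT bin gives the same coefficients, up to the 2/N scaling,
# because the sinusoid regressors are exactly orthogonal
Xk = np.fft.fft(x)[k]
assert np.allclose(coef[0],  2 / N * Xk.real)
assert np.allclose(coef[1], -2 / N * Xk.imag)
```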

“Now h is initially a bi-directional impulse response.”

I also just realized that you have a misconception here. A discrete Fourier transform of a real time sequence produces a frequency response which is mirrored across the Nyquist frequency. But, that symmetry does not hold in the inverse Fourier transform of a complex sequence. The inverse Fourier transform of Y/X is not “two-sided”, i.e., it does not have midpoint symmetry. And, it is real valued – I used the “real()” function on it just to ensure that MATLAB did not return a complex valued function with small complex values due to numerical error.

Truncating the later values does smooth the frequency response estimate a bit, but it does not fundamentally change the phase response.

Bart,

I don’t see that. Is it not true that both FFT (or DFT) and iFFT produce periodic output, period Nsamp? I don’t see how it could be otherwise, given how they are expressed as trig functions. And I don’t see how being complex affects periodicity.

As I understand your model, it is just that dR is equal to h convolved with temp. You FFT to turn convolution into multiplication, get FT(h) as a quotient, and iFFT to get h as a real. In the original hypothesis h is two-sided, and it should come out that way.

You are confusing symmetry with periodicity. The periodicity of an N point FFT/IFFT is N. The N samples have no guarantee of symmetry. As Bart noted, an IFFT of arbitrary complex data will be complex, but it need not be symmetric.

Also, an assumption of symmetry imposed upon h initially would be silly at best. This would only occur if the transfer function were linear phase finite impulse response, something that rarely occurs in natural systems. Bart is inferring feedback simply from knowledge of the shape of the transfer function as well.

Mark

Mark, is this addressed to me? No, I didn’t say that h would be symmetric, and I’m very well aware that it isn’t. I’m simply saying that it isn’t one-sided, and because the periodic representation starts at time zero, the portion for negative times appears at the other end of the plot.

This was my issue with Bart’s code – he multiplies h by a taper, which starts at 1 for small positive t and goes to zero at 2048 (of the 8192 sample points). By that stage it seems that h can well be truncated, but the effect continues on to zero the significant parts of h corresponding to negative time.

There is no “negative time” portion of h. The first point out is time t=0 and it progresses forward from there (h has identical progression as the series that it came from). The windowing function merely removes the wiggles at the end, which have little impact, as should be verifiable. Bart has already stated this.

I mention periodicity because you did twice in the first paragraph to which I responded. You seem to be under the impression that the output of an IFFT consists of positive and negative halves like the FFT… It does not.

Mark

No, the iFFT returns a periodic function. It is made up from the same trig functions as the FFT. If you continued on, point 8193 would be the same as point 1 (we’re using Nsamp=8192).

In this plot I’ve shown the smoothed h (in black – ignore the red) plotted with a 180 deg phase shift on the time axis. That is, I’ve plotted values 4097 to 8192, then 1 to 4096 of h, and I’m showing a centered window. That’s the time series that is actually convolved with temp to reproduce dR.

The function is periodic with period of N, Nick. That does not imply symmetry about some point 0 nor does it imply some sort of periodicity within the N samples themselves. The results of an IFFT have the same temporal span as the original input.

Mark

Nick – this is nonsense. There is no symmetry such as you are suggesting. Make some runs with artificially generated data such as I have prescribed and see. The stuff from 4097 to 8192 is just a bunch of junk which comes about due to independent processes in the data. It should be excised with the taper because it is useless. But, if you insist on not doing the taper, don’t do it. You will get the same result in the important low frequency region.

It is incorrect to swap the halves of an IFFT result. Periodicity does not have anything to do with this.

Those “trig functions” are merely an orthonormal basis set consisting of sinusoids. This does not imply anything regarding their temporal distribution as the FFT does with the frequency distribution.

Think about it this way: take an arbitrary real input function and perform an FFT. The result is a DC term, followed by the positive plane frequencies and then the negative plane frequencies which have complex conjugate symmetry. Performing an IFFT returns the original real function.
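[Mark’s description can be checked directly; a minimal numpy sketch with an arbitrary real sequence, not the actual dR/temp data:]

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)         # arbitrary real input
X = np.fft.fft(x)

# Negative-frequency half is the flipped complex conjugate of the positive half
assert np.allclose(X[1:], np.conj(X[1:][::-1]))

# The inverse transform recovers the original real sequence
xr = np.fft.ifft(X)
assert np.allclose(xr.imag, 0)
assert np.allclose(xr.real, x)
```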

The problem Bart noted above in which he takes the IFFT of Y/X results from finite precision effects. You can effect the same as his result (taking the real) by using the ‘symmetric’ switch in MATLAB’s IFFT which will treat the negative half as the flipped conjugate of the positive half even if there are small rounding errors.

Mark

Mark and Bart, in the interests of saving time: Nick is a mathematician. He knows what an FFT is, and an IFFT. He knows their properties for real and complex input data.

I know who Nick is. I disagree that he “knows their properties,” however, as his posts seem (to me and apparently Bart) to indicate. He is misinterpreting the periodicity concept (which is plainly stated as N in his link to Wikipedia, and I have noted twice now.)

Mark

Mark and Bart,

Truncating h does make a difference. It shows in the h integral that Bart has been citing as the large negative feedback factor, -9.4 W/m2/°C. If you don’t taper h, you get -12.22 W/m2/°C, even larger. But then because it is exact, you can see where the figure comes from.

The OLS regression grad of temp is 0.000412 °C/month

and the grad of dR is -0.005039 W/m2/month

quotient is -12.22 W/m2/°C.

I’ve added the convolution of Bart’s truncated h to the plot at the Moyhu post. You can see that while the untapered version (red) fits exactly, the tapered version (gold) gives a substantially different result.

Nick, you’re just seeing the higher frequency noise. Stop mucking around in the time domain and look at the frequency response directly in the frequency domain.

See here, also.

PaulM

Posted Sep 12, 2011 at 11:13 AM

“He knows their properties for real and complex input data.”

The impulse response is not “two sided”. On this, there is no middle ground.

One only has to consider why an impulse that arrives at time t = 0 cannot generate any negative time response.

Mark

I’ve put up a post here which I hope is more explanatory on the issues of periodicity, and what the impulse response function (as calculated here) means.

It’s true that if we actually know that there was causality, the impulse response function would be one-sided. But we don’t. All that is done is that two series are FFT’d and the response function obtained by division. It’s just the function which convolves with T to get dR. And it does. And the truncated version doesn’t at all. That needs explaining if you’re going to truncate.

Certainly you can try to argue that causality has not been established, and likewise interpret the IFFT output differently, however, that does not allow you to create a new ordering for the data. The IFFT, whether you like it or not, progresses forward in time. The index n is part of t = nT where T is the sample period.

I’m not sure if you’re simply confused because the FFT “frequencies” wrap to the negative plane or what. You get the wrap (in radians) because it is sampled, and the spectrum repeats every 2pi radians. The IFFT result is interpreted as a periodic repeat, but time progresses steadily forward. Data at index n = 0 is the same as n = 8192, but there is no wrap into negative time akin to the radian frequency wrap.

You really need to get your head around the distinction. It’s pretty clear at this point that you are nothing but a distraction.

Mark

Nick, did you ever try your method on the artificially generated data? That data is without question causal, yet you will find the same issues there which you bring to the fore. Why? Because you are not working with deterministic signals but stochastic ones. You are producing an estimate, not a 1:1 correspondence.

There is a very well understood tension in spectral estimation using the FFT, that of bias versus variance. A PSD estimate, or a cross spectral estimate (and what we are producing here is actually an estimate of the cross spectrum divided by the input power spectrum), is highly variable. The fundamental result in Fourier methods of spectral estimation is that the variation is as large as the quantity being estimated, and the variance does not go down as the data record gets larger. A naked FFT is therefore not a consistent estimator for spectral properties. If you don’t believe me, google “psd estimation trade off of bias and variance” and look at all the hits you get.

So, tapering windows are used. Various methods of smoothing are used. These all bias the estimate, and reduce resolution, but they reduce the variance, too. This is why FFT based methods of spectral analysis require considerable operator interaction, and cannot be made into simple black box batch algorithms. The analyst has to choose the best trade off between bias and variance.
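[The inconsistency of the raw periodogram – its relative spread does not shrink as the record grows – can be illustrated with a small numpy experiment on white noise. This is a toy illustration, not the CERES series:]

```python
import numpy as np

rng = np.random.default_rng(2)

def periodogram_spread(N, trials=200):
    """Relative spread (std/mean) of raw periodogram bins for
    unit-variance white noise of length N."""
    vals = []
    for _ in range(trials):
        x = rng.standard_normal(N)
        P = np.abs(np.fft.fft(x)) ** 2 / N   # raw periodogram
        vals.append(P[1:N // 2])             # interior bins only
    vals = np.concatenate(vals)
    return vals.std() / vals.mean()

# The relative spread stays near 1 regardless of record length:
# a longer record buys resolution, not a less variable estimate.
print(periodogram_spread(128), periodogram_spread(4096))
```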

The stuff you are seeing beyond the “meat” of the response, which is eliminated by the taper, is NOT a reversed time, non-causal response. It is a phantom of noise and external processes not related to the process in which we are interested: the driving of cloud formation by temperature. Being able to reconstruct the exact time series with the impulse response only means you are additionally reconstructing those parts of the process in which we are not interested. You are insisting on having an unbiased estimator, but you are giving yourself entirely over to having the maximum variance.

And, as I’ve tried to point out time and time again, but you seem never to have tried it, it DOES NOT AFFECT the low frequency -180 degree phase shift which defines this system to be part of a negative feedback. Do it. With taper, without taper… it’s all the same. We have a very strong negative feedback here, and there’s nothing whatsoever in your obsession with this minor detail which changes that.

And, causality is most undeniably established here. It is established by the negative slope of the phase response, which indicates a time lag in the output variable. If the slope were positive at low frequency, that would indicate a phase lead, and a non-causal response. It isn’t. It doesn’t.

Can we please move on from these trivialities?

“It is incorrect to swap the halves”

I’m not swapping halves. I’m just plotting with a different time convention. As I said on the blog, it’s as if you were plotting some variable around the Equator that had significant behaviour around the date line. If you plotted with conventional longitude, you’d have some of that peak at one end of the plot, and some at the other. But if you declared the Greenwich meridian to be the longitude break (from -180 to 180), and the DL to be Lon 0, then you’d see it all in the middle of the plot. You haven’t changed any reality, just conventions about periodicity. Changing the date line doesn’t change the map.

What you did was slide the window effectively putting the impulse in the middle of the response, which is the same as swapping halves.

Mark

“which is the same as swapping halves.”

No. It’s the equivalent of looking at a Mercator projection with lat 180 in the middle instead of lat 0. The world is still the same.

I mean, of course, longitude 180 etc

Here’s Wiki on periodicity.

Mark,

There’s no indication in this method of analysis that these numbers have anything to do with time. The underlying variable could have been distance, longitude, anything. In fact as entered the numbers are running backward in time. There’s nothing to say what the direction is.

The same applies to FFT’s in general. They can be applied to quantities that vary in time, space, or whatever.

The device of truncating h does give it a causal aspect. But then it loses the basic property of being a function which on convolution generates dR. It no longer relates the variables. If you think h was one-sided you might like to explain why removing the non-existent side had that effect.

Excuse me? This is ridiculous.

Except that the data are a time series.

The variables are actually cloud cover and temperature… both sampled in time.

They are both going in the same direction, and thus the response is in the same direction.

The truncation did not give it causality. The clear 2nd order response coupled with a reasonable assumption that there is at least some connection between the two is why Bart inferred causality.

Had what effect? I looked at your plots, though it was not clear whether you swapped halves before using the impulse in the convolution. Typically, lags greater than half the record length provide increasingly ambiguous results.

Mark

“Except that the data are a time series.”

Well, you know that. But the algorithm isn’t notified of that anywhere. You just enter two lists of numbers, dR and T. And if it were to use that time series information, you’d have to tell it the time direction. As I say, it’s backward to the normal.

“The clear 2nd order response coupled with a reasonable assumption…”

The second order response means nothing. It’s at the low frequency end – Bart’s expression is just the Taylor series expansion of X/Y, inverted. All its appearance tells you is that X has a zero in the complex plane near 0.1 yr^-1. Which is the zero of the gate function FT and has little to do with the properties of T.

“Had what effect?”

In the plots I showed, the red had h unaltered, the cyan had h smoothed (30 month triangular), but not truncated, and the gold had been truncated to low positive t (Bart’s taper). Both smoothing and truncation were just prior to convolution. The positive part of h up to about 80 years was unaffected, the next 80 years tapered.

Please see above. This errant discussion has gone on long enough.

The algorithm doesn’t need to “know” what the order is. You and I know based on the order of the input samples.

Which is the impulse response of the transfer function from Y to X. If there were no cause/effect relationship, you wouldn’t see such a response.

I don’t doubt that the DC gain term is the ratio of the trends because it is the ratio of the DC location in the frequency plane, which would necessarily include any slopes on the data that consist of less than half a cycle (one full cycle would be the 2pi/N bin.)

Mark

Nick said:

“Which is the zero of the gate function FT…”

The zeros of the gate function do not appear in the gated FT. One can think of the finite span of data as an infinite span multiplied by the gate function. Multiplication in the time domain transforms to convolution in the frequency domain – the FT of the gate function smooths the combined FT. In addition:

“All its appearance tells you is that X has a zero in the complex plane near 0.1 yr^-1. Which is the zero of the gate function FT and has little to do with the properties of T.”

The zeroes of the gate function are on the imaginary axis. The zeroes of X are well into the complex plane with a significant damping ratio. Just because the magnitudes of the zeroes are very roughly near one another does not mean they resemble each other in the slightest.

Bart

“The zeros of the gate function do not appear in the gated FT.”Yes, I agree with that. The location of the zeroes is not connected.

I did a little experiment – you might like to try it. It illustrates some of the points where we differ.

I simply reversed the data vectors (flipud). Now if there is causality in the analysis, that should be a big deal. What happened?

As you might expect, X, Y and X/Y were transposed. And h came out exactly reversed. Those “negative t” numbers that you said were just noise are now the positive t numbers for the reverse problem. And so, if you don’t truncate, H is also simply transposed, and the low freq limit is just -12.22 W/m2/C, exactly as for the original.

But if you do use the same truncating function, you get H[1] = -2.5718 W/m2/C. The original (unreversed) number I got was H[1]=-9.628. And, yes, they add to -12.20.

So what? In my view, h is indeed two sided about t=0. It has area -2.57 on one side, -9.628 on the other. When you don’t truncate, you get -12.22 in both directions. When you do truncate, you get the appropriate split.

And if you don’t truncate, all the numbers you are pulling out are exactly the same with the data reversed. So the algorithm has no expectations about causality.
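[Nick’s reversal experiment is easy to reproduce in numpy. For real inputs, reversing both series conjugates the transfer function estimate, which reverses h about its zero point (index 0 stays put; index n maps to N-n). A sketch with arbitrary stand-in series, not the actual data:]

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
x = rng.standard_normal(N)          # stand-in "temp"
y = rng.standard_normal(N)          # stand-in "dR"

# Impulse response estimate, forward and with both series reversed
h_fwd = np.fft.ifft(np.fft.fft(y) / np.fft.fft(x)).real
h_rev = np.fft.ifft(np.fft.fft(y[::-1]) / np.fft.fft(x[::-1])).real

# Reversal maps h[n] -> h[(N - n) mod N]: index 0 is fixed,
# the rest comes out exactly reversed
assert np.allclose(h_rev[0], h_fwd[0])
assert np.allclose(h_rev[1:], h_fwd[1:][::-1])
```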

Nick Stokes writes

===================

So the algorithm has no expectations about causality.

===================

Bart is not saying that the algorithm has “expectations about causality”. He has told you quite a few times that the frequency response is indicative of a type of system that is described as providing feedback. It is the empirical data which he uses as “expectations about causality”. That is what he is saying. Do you ever read anything that he writes?

“Do you ever read anything that he writes?”

Well, I seem to be the only person here who has gone through his code and got it working.

But both Mark and Bart have been saying how silly I am to think h is two-sided. If it’s one-sided then it’s causal.

But Tom, how about making a substantive contribution. Can you explain why it makes sense to cut the impulse response down the middle? Do you have a view on why the data from start 2001 shows a strong positive feedback, on this analysis?

Bart and Mark have told you this many, many times. The impulse response is being used as part of a model of a physical system. The time t=0 has a meaning in that model. Bart has told you this and shown you a diagram of that physical system. You keep saying the same thing over and over again without responding to Bart’s point. He tries to answer you and you pay no attention.

Forget the impulse response. Look at the step response.

Tom,

Yes, of course t=0 has meaning. The issue is whether the impulse response h is one-sided (about t=0).

I know Bart has a physical system in mind. But his numbers come from the low frequency limit of the Fourier analysis.

The impulse response is the time domain response of the system to an impulse which occurs at t=0.

The step response is the time domain response to a unit step which occurs at t=0.

The impulse response and the step response have physical meaning.

What more needs to be said.

Bart is testing the hypothesis that the “cloud” system can be modeled by a simple feedback system. He has taken the empirical data and analyzed it to see if it has the characteristics of such a model. He notes that it has. He then indicates that this is evidence that the simple feedback model is correct and that the feedback is of a specific magnitude and phase (with appropriate impulse and step response as aids to this understanding). He notes that the mathematics that he is using has difficulty with the short data set but that the conclusion can reasonably be relied upon. He also inquires if there is other mathematics that could be of use.

That is all Bart is saying (to my understanding). The model appears to work and appears to be compatible with the data. That is all. Bart’s investigation is into the physics of the system and not into mathematics.

What you are saying is not incorrect (as far as I can tell). It is just not useful in determining if the feedback model is applicable to the physical system. There may be other models that fit the data better than Bart’s. Nothing that you are saying has any bearing on this or the validity of Bart’s model.

What more needs to be said? Well quite a lot really. To start with, Fourier analysis is not the correct mathematical tool to find an impulse response or step response, you’d need a Laplace transform or a Green’s function. But here is not the appropriate place – this has gone on too long anyway. Why not take it to Nick’s blog?

[As regulars will know, it is unheard of for me to agree with Nick on anything 🙂 ]

Nick Stokes

Posted Sep 14, 2011 at 5:55 AM

“As you might expect, X, Y and X/Y were transposed. And h came out exactly reversed. Those “negative t” numbers that you said were just noise are now the positive t numbers for the reverse problem.”

Nick… causal and anti-causal responses do not both occur. Either temp drives dR or dR drives temp, not both. They are both related in either direction, but only one direction can be related by a transfer function.

Take a look at the system diagram. The transfer function from temp to dR is the part I have circled. That is the thing we want to estimate. The box on top is the transfer function from dR to temp, but the input is polluted by the Radiation Forcing (RF) which adds to the dR forcing, so you cannot resolve that transfer function.

That is, if I call the top transfer function T1 and the bottom one (the circled one, the one we want) T2, then

temp = T1[RF + dR + OT1]

dR = T2[temp + OT2]

where I use square brackets to indicate the operator relationship, that operation being convolution in the time domain, and multiplication in the frequency domain. “OT1” and “OT2” stand for “other terms” not depicted which include other forcing and feedback.

I am assuming I can get T2 as dR/temp (with “/” indicating the appropriate operation in the chosen domain – deconvolution in time, division in frequency), doing my best to recognize those parts of dR which are due to OT2 and eliminate them as much as possible, hence the tapering and exclusion. The assumption is that the dR:temp relationship is dominant, particularly at low frequency. If we find a readily recognizable type of transfer function there (as we have) then that assumption is borne out, i.e., we consider it likely true.

But I cannot get T1 = temp/(RF+dR+OT1) because I do not have RF or OT1, and RF and OT1 are assuredly dominant. If we could get T1, then the partial transfer function from RF to temp could be gotten as T1/(1-T1*T2). This is a negative feedback (sub)system precisely because T2 has a 180 degree phase shift at zero frequency, and 1/(1-T1*T2) is the reduction in sensitivity conferred by the feedback.

What you are seeing is a phantom of circular convolution. In fact, to get the transfer function, what we are doing is deconvolution, which should eliminate the “anti-causal” part but doesn’t because of imperfections in the data.

The FFT is a discrete time Fourier Transform. If we could use continuous time Fourier Transforms, you would not see this happening.

Oops. Forgot to close tag.

PaulM

Posted Sep 14, 2011 at 11:34 AM

“To start with, Fourier analysis is not the correct mathematical tool to find an impulse response or step response…”

Nonsense. There is a 1:1 relationship between the Fourier transform and the impulse response in continuous time. In discrete time, you have to guard against aliasing, but that isn’t really too hard.

I have a big post for Nick in the moderation queue because of the multiple links. Check back later.

PaulM,

“Why not take it to Nick’s blog?”

A good idea. The threads here do get chaotic after a while. I’d be very happy to host a post by Bart (or anyone else on this topic) – there’s no limit on graphs or links. You can even do Latex 🙂

Nick: “Well, I seem to be the only person here who has gone through his code and got it working”

Here http://landshape.org/enm/fft-of-tsi-and-global-temperature/

Different data, different system response, useful insight.

“In fact, to get the transfer function, what we are doing is deconvolution, which should eliminate the “anti-causal” part but doesn’t because of imperfections in the data.”

Actually, I think the relevant imperfection is the finite window of data, which allows the circular convolution to come back around and do the reverse time correlation.

But, as I said, we are only interested in one time direction or another, and the one we are interested in is temp forcing dR. So, this essentially requires that the impulse response estimate be cut off at the midpoint or earlier.

I appreciate the invite to your blog, Nick but, truth to tell, I am just about spent on this. I think I’ve covered just about everything and I think I’m ready to push this little bird out of its nest.

“The FFT is a discrete time Fourier Transform.”

Actually, it is a frequency sampled discrete time Fourier Transform. Circular convolution comes about because of that sampling.

Just in case anyone does want to continue discussion at Moyhu, I’ve posted the relevant comment subthreads from here on this page. There is a thread still open there, or if there is interest, I could start a new one.

The deconvolution should deliver only the causal part. The reason it creates other fluff at the end is inherent to the algorithm operating on noisy and polluted data.

Why does the algorithm favor the causal part? Because what we are doing is effectively Wiener Deconvolution.

Our algorithm has computed the impulse response as

h = real(ifft(Y./X))/T;

We can do this in three steps. First, compute the cross spectrum

C = conj(X).*Y;

Then, the power spectrum

P = conj(X).*X;

Ideally, both of the spectra should be smoothed, but that eliminates resolution, and we are already hurting for resolution, so we don’t do any smoothing. Since ideally Y = H.*X, where H is the transfer function, C = H.*P. Thus, dividing it out should give us H. The Wiener formula adds the signal to noise ratio in the denominator to prevent division by zero:

h = ifft(C./(P+sn))/T;

If we wanted the anti-causal part, we would exchange X and Y in the above. Anything beyond the halfway point in this impulse response estimate is dross, and should be excised with a tapered window. A knowledgeable analyst should look for standard forms of impulse response at the low end and window even further down to isolate those components.
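[For readers without MATLAB, here is a rough numpy translation of the three-step recipe above. The toy input x, the hypothetical short causal response h_true, and the sn constant are illustrative assumptions, not the actual series, and the /T sample-period scaling is omitted. It confirms that, apart from the tiny sn regularization, ifft(C./(P+sn)) matches the naive ifft(Y./X):]

```python
import numpy as np

rng = np.random.default_rng(3)
N = 128
x = rng.standard_normal(N)                     # stand-in input series
h_true = np.array([0.5, 0.3, 0.1])             # hypothetical causal response
H_true = np.fft.fft(np.concatenate([h_true, np.zeros(N - h_true.size)]))

X = np.fft.fft(x)
# Output = circular convolution with h_true, plus a little measurement noise
y = np.fft.ifft(H_true * X).real + 0.01 * rng.standard_normal(N)
Y = np.fft.fft(y)

# One-step form
h1 = np.fft.ifft(Y / X).real

# Three-step form: cross spectrum over power spectrum,
# with sn guarding against division by zero
C = np.conj(X) * Y
P = (np.conj(X) * X).real
sn = 1e-6
h2 = np.fft.ifft(C / (P + sn)).real

# Apart from the tiny sn regularization, the two forms agree
assert np.allclose(h1, h2, atol=1e-3)
```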

I went to Nick’s site to alert him to this new message, but it won’t let me post without some account or other. So, here is my final message to him:

Bah. You don’t know what you are doing, Nick. I have put up a final post at the original thread which is the final word on how the data should be treated, and what to expect. -Bart

On h = real(ifft(Y./X))/T versus h = ifft(C./(P+sn))/T – one probably should also get the real part of the latter as well. The imaginary part, if there is any, is just small and inconsequential numerical error which needs to be eliminated because the impulse response is inherently real.

Bart,

I think implementing the algorithm with a cross-spectrum calculation is a better idea and I’ll try it.

But there is no directionality, in the sense of causality, implied in Wiener deconvolution. In fact, a major application is in image processing.

If you look at the Wiki ref for convolution that your link refers to, all the integrals are over the whole real line. Two-sided about 0.

And there is no magic that says the impulse response h turns to noise halfway along the spectrum. As I said earlier, a simple test of that is to run the algorithm with data reversed. It’s still a perfectly well defined problem. You get the same h, exactly, but reversed. The numbers that were at the high end are now near zero. They are the impulse response numbers for this reversed problem. They are not noise.

If you want to see what is wrong with your taper w applied to h, just look at its FFT. You’d be expecting 18 dB/octave roll-off for a Hann window. Wrong. It’s 6 dB/octave, and looks like a sinc function. That’s the effect of the sharp cut at zero. Everything is periodic – that cut is real.

Sorry you had trouble accessing my blog. It’s a standard Blogger site. They ask for ID, but it used to be possible to get in without.

As Ed Koch was fond of saying, “I can explain it to you, but I cannot understand it for you.”

In fact, the direction of causality in the Wiener deconvolution is implied by the fft you take the conjugate of in the cross correlation, and the power spectrum you divide by.

You are reinventing the wheel, and making things up as you go along. This is old hat with roots going back to the 1940’s and beyond. It would behoove you to be a bit less categorical until you understand things.

It’s really very simple to prove I am right, Nick. Generate some artificial data according to my prescription:

a = [1.000000000000000 -1.967462610776618 0.968691947164695];

b = -[0.617926899846966 0.611409488230977]*1e-2;

temp=randn(10000,1);

dR = filter(b,a,temp);

temp = temp((10000-123):10000);

dR = dR((10000-123):10000);

Do your little analysis and see that you get a phantom of a non-causal response here, too. But, the artificially generated data is causal by construction.
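[A rough numpy equivalent of that prescription, for readers without MATLAB; the hand-rolled iir_filter is a minimal stand-in for MATLAB’s filter(b, a, x) with zero initial conditions:]

```python
import numpy as np

# Coefficients from the MATLAB prescription above
a = np.array([1.0, -1.967462610776618, 0.968691947164695])
b = -np.array([0.617926899846966, 0.611409488230977]) * 1e-2

def iir_filter(b, a, x):
    """Direct-form difference equation for a 2-pole, 2-tap filter:
    y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1] - a2*y[n-2]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc -= a[2] * y[n - 2]
        y[n] = acc
    return y

rng = np.random.default_rng(0)
temp = rng.standard_normal(10000)
dR = iir_filter(b, a, temp)

# Keep only the last 124 samples, mimicking the short observational record
temp, dR = temp[-124:], dR[-124:]
```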

“I have put up a final post at the original thread which is the final word on how the data should be treated, and what to expect.”

And you say I’m categorical?

I can’t see how your artificial data proves anything. Especially as it builds in constants from your erroneous h hack.

But have you looked at the FFT of w yet?

“I can’t see how your artificial data proves anything.”

It proves that, even when you have “perfect” data, you still get a “ghost” of a “non-causal” (which is itself an absurdity) response which does not reflect anything real.

I really should never have dignified this discussion and suggested in any way we were negotiating about this. I thought I could make it plain for you and teach you, but it is clear that you have no clue and do not want one. There are decades of practical use of these procedures and reams of literature about them. The science is settled. The End.

“But I am not so ignorant of these matters.”

You may understand some theory quite well, Nick, but you are completely out to lunch in understanding the nuts and bolts of practical application. In particular, you seem not to understand the application to stochastic sequences. You are not grasping even the most basic logical construct: you cannot have a non-causal “response”. Time flows in only one direction in this universe.

When you construct the cross spectrum, you are constructing the Fourier transform of the cross correlation, which does indeed correlate the sequences backward and forward in time. When you divide that spectrum by the power spectrum, ideally you get out only a causal response determined by your choice of conjugate transform. You are deconvolving the spectra, and removing the effects of circular convolution.

In the real world with noisy data, your spectral estimates are variable. They are not accurate. They do not coincide with the expected values. As a result, the deconvolution is imperfect, and you end up with a residue of the circular convolution. It is completely useless, unnecessary, and unreal.

And, BTW, your creds in this field are lesser than mine.

Steve

Posted Sep 17, 2011 at 3:15 AM

“What is the primary difference between evaluating a Discrete-time Fourier Transform summing the complex components over all integers k (-inf to inf), rather than a Discrete Fourier Transform from k = 1 to N-1?”

A Discrete Time Fourier Transform (DTFT) is a continuous function of frequency. The FFT is an implementation of the Discrete Fourier Transform (search for it on Wiki – I don’t want to go over my one-link limit), which is a sampled-frequency version of the DTFT. As a result of the sampling, convolution of time series by taking the DFT of both, multiplying them together, and taking the inverse DFT yields a circular convolution (again, see Wiki).
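The circular-convolution point can be demonstrated in a few lines: multiplying DFTs convolves the sequences circularly, not linearly. A small sketch with made-up inputs:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5, 0.0])

# convolution via the DFT: multiply the spectra, then invert
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# direct circular convolution, indices wrapping modulo N
N = len(x)
circ = np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])

# ordinary linear convolution (length 2N-1) differs unless the inputs are zero-padded
lin = np.convolve(x, h)
```

Zero-padding both inputs to length 2N−1 before taking the DFTs makes the two results agree.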

Well, Bart, I am sorry to have damaged your dignity. But I am not so ignorant of these matters. I appreciate that the science is settled – I was in fact around when it was being settled. You just don’t know how to apply it.

I have a PhD (1972) in the mathematics of control theory. I have done a lot of research in the mathematics of integral transforms. I was an author of what is still one of the most widely used methods of numerical Laplace Transform inversion.

Details, with links, are here.

Nick, Bart

Firstly, I am no expert. I am interested in learning from this discussion by hearing a resolution.

What is the primary difference between evaluating a Discrete-time Fourier Transform summing the complex components over all integers k (-inf to inf), rather than a Discrete Fourier Transform from k = 1 to N-1?

Is the problem here a misunderstanding of Matlab DFT implementation (wrt periodicity, centering, etc.)? regards, Steve

Steve,

Well, you can’t sum over all integers in finite time. But the DFT scheme where you use the same number of sampling points as frequencies in the sum has a number of useful features. It means that the DFT inverse is just the same DFT operator with negative frequencies. And it is important in enabling the acceleration that constitutes the FFT.

It also avoids overfitting. Basically, if you have N sampling points, you can’t determine more than N coefficients. And the same logic on inversion means that you have to have exactly N. In matrix terms, you want the DFT to be represented by a square matrix.

I don’t think there’s a matlab-specific issue. As Bart says, FFT is pretty much settled science.

Nick- “… But I am not so ignorant of these matters…. I was in fact around when it was being settled…..I have a PhD (1972)….I have done a lot of research….. I was an author….”

I was once advised to strenuously avoid polishing one’s own nameplate.

But maybe the advice has changed since then.

“I was once advised to strenuously avoid polishing”

Tony, I agree, and I have done so. I have not sought to speak from authority in this thread. But when you get stuff like:

“I thought I could make it plain for you and teach you, but it is clear that you have no clue and do not want one.”

well, it just has to be set straight.

You started a new reply thread. Good thinking. See my response prior.

Steve

Posted Sep 17, 2011 at 3:15 AM

“What is the primary difference between evaluating a Discrete-time Fourier Transform summing the complex components over all integers k (-inf to inf), rather than a Discrete Fourier Transform from k = 1 to N-1?”

Ah, I forgot to address your question entirely. The effect of a finite data window is to smear out the transform, reducing resolution. When two of the sequences are effectively windowed by a rectangular function (Nick’s “gate function”), the convolution of the two is effectively multiplied by a triangular function, a Bartlett Window. Multiplication in the time domain transforms as convolution in the frequency domain, so you are effectively performing a moving average on the true spectrum with the Fourier Transform of the window function.
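The Bartlett-window effect described here takes one line to verify: convolving (or correlating) two equal rectangular gates yields a triangle.

```python
import numpy as np

N = 8
gate = np.ones(N)

# convolving two length-N gate functions gives a length 2N-1 triangular (Bartlett) window
tri = np.convolve(gate, gate)

# expected ramp up to N and back down: 1, 2, ..., N, ..., 2, 1
expected = np.concatenate([np.arange(1.0, N + 1), np.arange(N - 1.0, 0, -1)])
```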

And, for crying out loud Nick, for once read what I wrote. I have answered all your qualms and laid them to rest many times over, and it is very, VERY frustrating to have to repeat myself over and over with you apparently taking no notice of what I have explained. I sense that you do a brief scan of what I write, assume I am still gainsaying you and, since you must be completely and totally correct, you then repeat back at me the same erroneous conclusions which I have already shown to be false. At least, just once, address one of my actual arguments instead of regurgitating the same cant.

Well Bart, welcome to the world of climatology…

Bart, Nick,

Can you convolve the smoothed impulse response with temperature and then see how well the output correlates with dR? Would this quantify the contribution of windowing /smoothing artifacts?

interesting thread – thanks.

Steve,

Yes, I’ve done it – it’s an update (below the others) on this page. The unsmoothed impulse response reproduces dR (cloud radiance) exactly, as it should. The smoothed is, well, smoother. Thanks for the suggestion – it’s a useful check.

Just saw this after responding above. There, you will see an explanation of why we are not trying to derive an impulse response which can precisely recreate dR. In doing so, we would be allowing into our estimate the effects of extraneous processes, randomness, and measurement error which have nothing to do with the dynamics we seek.

Bart, yes I understand this aspect.

Nick, sorry, but I have to agree with Bart and Mark on almost every issue raised in the discussion. I am not an expert in control theory, but I use aspects of advanced signal processing in my own area of research and I consider many of your points to be completely nonsensical.

Bart, I have not digested this discussion yet but I thought it interesting you found 9.5 feedback.

I have been attempting to fit lag regression of spencer’s simple model to his satellite data. The best fit I could get (which was amazingly good fit) included a feedback of 9.2 W/m2/K

http://tinypic.com/view.php?pic=2rqjas6&s=7

This is work in progress and a hand fit rather than a robust method, but from this end it has to be of that kind of value.

Just what that feedback represents physically will be the next question. But at least short-term, shallow mixing (45m) seems to show strong neg. feedback.

Neat!

What has me curious is why both Spencer and Dessler came up with results that are barely distinguishable from zero.

Mark

simple. it’s because they are both using ols to fit a straight line to data that has a whole crock of noise and non linear effects mixed into it.

They are dumbly calling the result a “slope”; it isn’t.

This does not surprise me from Dessler, who probably knows it’s wrong but it fits his argument to ignore the fact.

I don’t understand Spencer on this issue. He said in one reply that he had spent quite a bit of time looking at other regressions but it seemed to be basically just averaging two ols slopes.

Fitting a linear model by linear regression in this context will give meaningless results.

Any time spent arguing about whose slope is best is a waste of effort; they are both wrong, and fundamentally so.

Looks like something significant is going on here wrt Bart’s work. However, for us folks in the peanut gallery, comments at other blogs lead me to believe few are following it well – even the likes of Tallbloke, who knows piles more than I do. Here’s hoping that at some stage it gets translated into language anyone can understand.

At WUWT, Bill Illis compares Dessler 2010 results with cloud to global variability:

Only an order of magnitude better!

Anyone attempting OLS on data with more noise than signal and with significant errors in the independent variable does not understand the first thing about linear regression and how to use it.

Sadly this is ubiquitous. How any of this stuff gets published is beyond me. And this is the kind of “analysis” they use to justify the parametrisation for the models.

Until climate science community gets past first grade science class techniques they will not get anywhere.

But, you may at least reasonably expect a linear relationship between variables which are coincident in time. The idea of applying linear regression to variables for which one is delayed with respect to the other so that the plot is a Lissajous oval is highly dubious. Applying it to variables with a frequency spread with frequency dependent nonlinear phase (variable delay) is nuts.

Having spent a lot of time looking at his simple model and the data that he used for SB2011 I have just found something significant. Here is the result of R.slr decomposition of the hadSST for the period used.

http://tinypic.com/view.php?pic=x2l7wl&s=7

There is an interesting c. 20 mth oscillation plus the big swings around 2002, 2007.

One of the main features of the lag-regression analysis was the strong sinusoidal swing. This is what required the heavy bias towards “rad” forcing and the high feedback in using the model.

This feature disappears if I limit the window to 2001.9:2007.8.

The remaining oscillation is interesting. It does not look like “ringing” since it is constant. I doubt that it is an artifact of the decorrelation since its period is about 20 months (at 24 I would have been suspicious).

I speculate that this is ocean currents inputting heat from below the mixed layer. The model has this sort of input in the form of the “rad” term but uses random zero centred data for it.

Now look at the lag auto-regression for the detrended dSST for the reduced period.

http://tinypic.com/view.php?pic=29blgld&s=7

For those looking at frequency analysis, that should be a clue !

Thanks P. Solar

Given the 10 year data, that seems pretty close to the ~11 year Schwabe (1/2 of ~22 yr Hale) solar cycles.

What follows is a recap of three posts I made on Dr. Spencer’s blog, concerning computation of the left-hand side of the main equation. You may recall that Dr. Spencer obtained 2.3 Wm^-2 and Dr. Dessler obtained 9 Wm^-2 for the LHS. The obvious differences between these papers were (a) Spencer used quarterly data, while Dessler used monthly data; (b) Spencer used a mixing layer depth of 25 meters while Dessler used a mixing layer depth of 100 meters.

In his blog, Dr. Spencer recommended use of Levitus when computing the LHS. Levitus appears to be the World Ocean Atlas (WOA) which is available online in updated form, here:

http://www.nodc.noaa.gov/OC5/WOA09/woa09data.html

I found that mixing layer depth has already been computed by Levitus on a global grid, and available from NOAA, here: http://www.nodc.noaa.gov/OC5/WOA94/mix.html

There are three criteria for ML in use: A) (most common definition) depth at which temp is .5 C lower than the surface; B) depth at which the density is .125 standard deviations greater than the surface; and C) depth at which the density is equal to what the density would be with a .5 C change. These three definitions give rather different results.

After downloading all the data and running global weighted averages (weight = cosine[latitude]), the global average mixing layer depth for each definition was:

A. 71.5 meters

B. 57.2 meters

C. 45.9 meters

These numbers fall neatly between the depths used by Spencer and Dessler.
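The cosine-latitude weighting used for these global averages can be sketched as follows; the grid and field values here are hypothetical stand-ins, not the actual WOA data:

```python
import numpy as np

def global_mean(field, lats_deg):
    """Area-weighted global mean of zonal-mean values, weight = cos(latitude)."""
    w = np.cos(np.radians(lats_deg))
    return np.sum(w * field) / np.sum(w)

lats = np.arange(-89.5, 90.0, 1.0)    # 1-degree grid centers
uniform = np.full(lats.shape, 57.2)   # uniform test field
polar_heavy = np.abs(lats)            # larger values toward the poles
```

A uniform field returns its own value; a polar-heavy field averages below its unweighted mean, since high latitudes carry little area.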

Downloading quarterly data for temperature (objectively analyzed means) from WOA allows you to compute a weighted mean SST for the globe. The four weighted means thus computed were: JFM=18.293; AMJ=18.1672; JAS=18.128; OND=17.935, with a global annual mean of 18.130 C.

The four differences between quarters are then .358, -.131, -.034, and -.193, with a standard deviation of those values being .247 C.

To compute heat capacity (and change thereof) I used the mean temp of 18.130 as a “before” value and a changed temp of 18.130+.247=18.377 as an “after” value. I computed density and heat capacity for both using a salinity of 35 g/kg and the equations of Sharqawy et al. 2010. For a 1m x 1m x 25m column, I get mass=25634.58 kg (before), 25633.14 kg (after); HC=29854059 kJ (before), 29878463 kJ (after) for a quarterly change of 24404 kJ. The rate of change per quarter is therefore 24,404,000 J / 7,889,400 seconds = 3.1 Wm^-2. This is a bit higher than the 2.3 value given by Dr. Spencer. Note also that this value may be wrong; one could argue that we should be operating on a constant mass of water rather than a constant volume. Computing on that basis, the result would be 3.3 Wm^-2.

But note that this was computed using a 25m ML depth. Using the Levitus ML depths gives for the LHS of the equation energy change rates of (A) 8.9 (B) 7.1 and (C) 5.7 Wm^-2 respectively. In other words, Dr. Spencer’s 2.3 Wm^-2 seems too low.

Using monthly (rather than quarterly) WOA data, I find the global weighted-average SSTs by month: 18.176, 18.347, 18.357, 18.282, 18.147, 18.057, 18.134, 18.173, 18.078, 17.950, 17.871, 17.984 giving the same 18.13 average as in quarterly data.

This gives Delta-Ts of: .192, .171, .010, -.075, -.135, -.090, .077, .038, -.094, -.128, -.078, .113, and the standard deviation of these is .117°C. Already we notice a major difference: if the SST is changing by typically .117 C in a month, we might expect it to change by .117 x 3 = .35 C per quarter. But the actual quarterly change is .25, which means that monthly data is more variable than quarterly data. Not a surprise, but here it is quantified.

Now let’s repeat the same computations, but using Dr. Dessler’s assumptions. In this run I added a slight improvement: I also downloaded and used salinity data from WOA, which is a little less than the 35 g/kg I had been using (mean=34.586).

Using T=18.130 as the “before” temp and 18.130+.117=18.246 as “after”, I find a “before” density of 1025.0639, and for a column 1×1×100 meters a mass of 102506.4 kg, specific heat of 4.001219 kJ/kg/K and heat capacity of 119468525 kJ. “After” density is 1025.0369, specific heat is 4.001262, and heat capacity is 119514517 kJ. The change over time is therefore 45991.5 kJ in 2629800 seconds, for 17.5 Wm^-2.

Using the Levitus ML depths gives for the LHS of the equation energy change rates of (A) 12.5 (B) 10.0 and (C) 8.0 Wm^-2 respectively. In other words, Dr. Dessler’s use of 9 Wm^-2 seems about right, if used with a corrected ML depth, while Dr. Spencer’s LHS values are substantially too low.
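The arithmetic above can be reproduced with constant seawater properties (the density ≈ 1025 kg/m³ and specific heat ≈ 4001 J/kg/K below are assumptions, standing in for the temperature-dependent Sharqawy et al. equations), which lands within a few percent of the figures in these comments:

```python
RHO = 1025.06   # kg/m^3, assumed constant
CP = 4001.2     # J/(kg K), assumed constant

def flux_wm2(depth_m, delta_t_k, period_s):
    """Heat-content change rate per unit area for a mixed-layer column."""
    return RHO * depth_m * CP * delta_t_k / period_s

QUARTER_S = 7_889_400   # seconds per quarter, per the comment
MONTH_S = 2_629_800     # seconds per month

q25 = flux_wm2(25.0, 0.247, QUARTER_S)    # near the quarterly 25 m figure above
m100 = flux_wm2(100.0, 0.117, MONTH_S)    # near the monthly 100 m figure above
```

The small remaining differences from 3.1 and 17.5 Wm^-2 come from holding density and specific heat constant.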

Frankly, I’m new at these equations and perhaps I’ve made a mistake somewhere. If so, I hope Dr. Spencer (or someone else) will correct me.

Socratic Re “Dessler used a mixing layer depth of 100 meters.”

Spencer modified his post on Sept. 8, 2011 following Dessler’s response:

PS Spencer underlined “(It now appears . . . what he used).”

I have posted the transfer function and impulse response estimates here and here, respectively, created using artificially generated data according to the prescription here.

These show it is, indeed, possible to pull out the long term correlation using only 124 monthly data points. It doesn’t always work, but sometimes, it does.

It would be very nice to come up with a more robust deconvolution scheme which works the majority of the time, and reanalyze the data to prove the veracity of the estimated functions beyond doubt, and I encourage any interested parties to look into this.

For introductory info on Bart’s figures check out http://en.wikipedia.org/wiki/Bode_plot.

For further background on wikipedia go to control theory and PID controllers.

I would like see a more intense examination of the daily cycle data.

I hope Steve or someone summarizes the Bart and related discussion above in a new post for those of us just munching popcorn on this thread, but still able to clearly detect the signature of multiple geeks in excited high-overdrive mode.

Fascinating. and I wish I understood more.

Hey Bart;

Is it a trait of the ‘peculiar’ series that they have some large amplitude ‘steps’ (perhaps of opposite polarity) rather close in time?

In the oscilloscope analogy, when looking at a noisy source, one sets the sweep trigger very near the top of the ‘grass’, so that triggers are ~infrequent, as a way to pick off the big steps, and thus be able to see a characteristic time constant in the response after the trigger.

Yes there will be the rest of the noise, but, if the ‘higher frequency’ noise is not too bad, eyeball smoothing will pick up the predominant pattern.

This would be spoiled if the big steps come with sufficient frequency to recur within the ringdown from the step, and spoil the ability to see enough of the post-step response to characterize it.

Anyway, maybe I’m missing the boat totally. but this seems to remind me of the reason that trigger levels are still attached to a knob…

Another anonymous guy, but with less substance than you have brought to this party.

RR

So, the limited duration data set is like having not much time to pick off the biggest signals, and thus only seeing a few examples.

In that case it is harder to get a feel for the dominant response, and easier to get thrown off when the few big steps have other confounding steps in close temporal proximity.

Anyway, looking forward to having my intuition regrooved as needed.

Thanks

RuhRoh

I will have to think about this. Thanks.

Bart;

What proof do we have that a thing called DC really exists? It has only been observed for a few centuries.

Maybe it is only ELF AC…

RR

Well, since DC is essentially an abstract concept… cogito ergo sum?

DC = non-periodic signal?

ELF = Extra Low Frequency?

Yes.

To horn in on the Skywalker-Tallbloke colloquy about making Bart understandable, two suggestions.

First, in supplying the translation, it would be helpful to explain why the response variable is the cloud radiance rather than the whole-sky radiance. After all, isn’t the target the response of the net insolation of the earth as a whole to the average temperature of the earth as a whole?

Second, to finesse around an explanation of Bart’s “statistics,” (actually, his use of discrete Fourier transforms, impulse and step responses, etc.), one might just say this:

“It is uncontroversial that, if one observes the complete response (e.g., electrical current flowing as a function of time) of a linear system (such as an unknown network of resistors and capacitors) to a known stimulus (voltage impressed across that network as a function of time), one can accurately infer from it what that system’s response will be to some other stimulus (e.g., to voltage applied as some different function of time).

“What Bart is doing is simply applying the universally accepted technique for doing so: using the response (net insolation as a function of time over the past ten years) to the known stimulus (average temperature as a function of time over those ten years) to determine what the system’s response would be to a step-change increase in temperature – and he concludes that the net insolation would fall.

“There are a couple of complications. First, unlike the resistor-capacitor network, Bart’s system (the climate) is not linear, so application of his technique to this particular problem would not be completely uncontroversial. Still, this first complication is not a subject of much if any discussion, because linear-systems techniques are widely applied in estimating responses to small perturbations in non-linear systems–with the tacit recognition that the results can’t be completely accurate.

“The other complication – which is the subject of the discussions – is that the technique Bart uses theoretically requires an infinite record of both the known stimulus and its response if one is to use them to compute the response to some other stimulus with complete accuracy. Of course, no one ever has an infinite record, but one can come close enough with a long-enough finite record. The disputants recognize, however, that the ten-year record is not long enough to avoid significant inaccuracies. So their discussion concerns whether it nonetheless admits of confidence in Bart’s basic conclusion: that a temperature increase tends to decrease net insolation: feedback in this system is negative.”

My first comment no doubt betrays appalling ignorance of the subject on my part, but I am hopeful that the second comment will be helpful to a significant subset of us unwashed masses who typically just watch from the sidelines.

I think that is the jist, Joe. The two problems are universal to any estimation scheme, btw.

Mark

Yes, it would be very nice to have a data set at least as long as the inferred correlations. Coming up with a more robust algorithm would be greatly helpful in quelling doubts. However, successful application of the estimation procedure to artificially generated data with the same correlations and time span, and the similar well-behaved property which appears to coincide when the estimation procedure gets it right, supports a preliminary conclusion that the current result is likely correct.

The standard for time-domain processing is the Wiener-Hopf solution (which is optimal w.r.t. MMSE). Basically, w = inv(R) * p, where w is the channel response estimate and R and p are the MxM auto- and Mx1 cross-correlations respectively, each with M lags and M < N / 2.

Mark
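The Wiener-Hopf recipe above can be sketched end-to-end for a known FIR channel driven by white noise; the three channel taps below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200_000, 3
h_true = np.array([0.5, -0.3, 0.1])   # illustrative FIR channel taps
x = rng.standard_normal(N)            # white input
y = np.convolve(x, h_true)[:N]        # channel output

# empirical auto- and cross-correlations for M lags
r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(M)])
p = np.array([np.dot(x[:N - k], y[k:]) / N for k in range(M)])

# Wiener-Hopf: w = inv(R) * p, with R the MxM Toeplitz autocorrelation matrix
R = r[np.abs(np.arange(M)[:, None] - np.arange(M))]
w = np.linalg.solve(R, p)
```

With a white input R is nearly the identity, so w ≈ p; the matrix solve earns its keep when the input is colored, as a temperature series is.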

Yes, doing the cross spectrum and dividing by the power spectrum of the input gives the same result as my algorithm. Adding a little epsilon to the divisor gives WH and helps at least avoid division by zero. But, I have tried this, and I don’t get any better at pulling out the long correlation from the short data span – it’s still something about the particular data set which makes it amenable or not. It must be possible to figure out what that something is. Maybe someone in the literature already has.
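In outline, the epsilon-regularized frequency-domain division looks like this (a sketch with synthetic, noiseless, circularly-convolved data, so the small epsilon barely perturbs the recovered response; real data is far less forgiving):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
h_true = np.zeros(N)
h_true[:3] = [0.8, -0.5, 0.2]   # illustrative impulse response

x = rng.standard_normal(N)
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true)).real   # circular convolution

X, Y = np.fft.fft(x), np.fft.fft(y)
eps = 1e-6 * np.mean(np.abs(X) ** 2)   # small regularizer; avoids division by zero
H_est = (Y * np.conj(X)) / (np.abs(X) ** 2 + eps)
h_est = np.fft.ifft(H_est).real
```

With noise present, the epsilon trades bias for variance rather than leaving the estimate untouched as it nearly does here.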

My guess is that it is just an SNR issue. There is some “noise” and some “signal” in the data, but we don’t know how they are distinguished from one another. It “works” when the SNR is sufficient for the processing gain to give a “good” result (“good” being one that makes sense). Perhaps having data constructed of full sinusoids, i.e., each component has an integer number of cycles, helps, too, because it would minimize the forced periodicity in the FFT result (smaller discontinuities from the end of the data to the beginning of the data will result in less spectral leakage).

Mark

Ah, but when I construct my artificial time series, it’s all signal. So, I think it is a fundamental limit of the method, based simply on resolution, i.e., the observability of long-term correlations of a stochastic process from short-term data.

Now, I think I could take my artificial data and do a more precise estimate using a parametric method (as in these books), but those would, I expect, be very sensitive to the unmodeled processes in the real-world data. I have thought about filtering the data to exclude all the high frequency stuff, but I would likely be left with only a small number of data without startup transients (or, fully inclosed within the weighting function of an FIR weighting). Still, that might be enough for a parametric method to act on. Theoretically, I only need 5 data points to fit a general proper 2nd order response. Perhaps one or more of these books have non-parametric methods which would be more robust. Well, I or you or someone will no doubt work it out in time.

enclosed… 5 uncorrelated (to a minimal extent) data points…

Ah, ok, no noise then. Your cases that fail, integer # of cycles for all components?

It would be nice if we had 20 years of data to compare with. Maybe if you took the signals that failed and extended them out?

I have octave at home and a copy of MATLAB (2007b) which is not licensed to me technically (previous employer) so I hate using it. Octave sorta sucks. That plus a few days between jobs so maybe I’ll tinker a bit.

Mark

Well, the intent is to show that the analysis on the real world data is reliable. And, there are only a limited number of real world data to draw from: 124 monthly measurements, to be exact. The problem is that, from this data, I have determined a model with a bandwidth of about 0.0725 years^-1, which associates with a settling time of about 1/0.0725 = 13.8 years or 166 months.

The question is, can you put any reliability into such a long term statistical model estimated from only 124 months of data?

What I have shown is that, if I generate artificial data with the same statistics, the analysis approach I used gives a reasonable result some of the time, as I showed at the links here. It appears completely dependent on the random number seed I use to initialize the model.

In those cases in which the analysis goes awry, the resulting impulse response and transfer function are poorly behaved. The analysis on the real world data is nicely behaved, like the artificial data sets which give good results. This leads me to believe the analysis on the real world data is valid, and the real world data set is just plain lucky.

I feel certain that there must be a way of identifying the weakness in the estimation procedure and coming up with some form of estimator which will be more consistent, and applying such an estimator to the real world data would then confirm the model beyond a reasonable doubt. I suspect such methods are already available, so that we do not have to reinvent the wheel. There are lots of deconvolution methods available, so some research is needed.

Anyway, that is where my analysis is at: I believe it is valid, but I concede there is reason to be wary of the actual numerical result. I think it is less reasonable at this time to presume the feedback is anything but negative.

“I think it is less reasonable at this time to presume the feedback is anything but negative”… because I very rarely get a false positive with my artificial data.

Bart Great to see your testing. Would it be reasonable to give a probability on the positive vs negative trends based on their proportion to the total number of runs with your artificial data for the parameters you obtained? E.g., see Lucia’s synthetic explorations at The Blackboard.

It is difficult to characterize simply. Just running the algorithm untended in batch mode a few thousand times suggests that I might get a false positive roughly 25% of the time. However, almost all of those false positives yield quirky frequency responses with sharp peaks and other ugliness. A smooth, well behaved frequency response almost always correlates to being fairly close to the true response.

We really do need a more robust analysis tool for this short span of data. But, as I have been saying, all the qualitative indicators right now are that it performed pretty well on the current set of real world data.

It appears, generally speaking, that bad performance occurs when a frequency band is not well covered in the input. So, you end up getting division by a small number which trashes the numerics and creates errant peaks in the frequency response estimate. I’ve been considering that it might be possible to test for significant input frequency bands, perform the initial estimation of the frequency response only over those bands, and then interpolate between them before performing the inverse FFT to get the impulse response estimate. If this bears any fruit, I will let people know, but I have many other demands on my time right now so I don’t know if or when I will get to it. But, I’ll just throw it out there as an idea in case anyone else is interested in pursuing it.
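The band-selective idea floated in this comment might look as follows (a hypothetical sketch on synthetic, noiseless data; the excitation threshold and linear interpolation are arbitrary choices, not a tested recipe):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
h_true = np.zeros(N)
h_true[:3] = [0.8, -0.5, 0.2]   # illustrative impulse response
H_true = np.fft.fft(h_true)

# input that excites only part of the band (bins 1..199 and their mirrors)
t = np.arange(N)
x = sum(np.cos(2 * np.pi * k * t / N + rng.uniform(0, 2 * np.pi)) for k in range(1, 200))
y = np.fft.ifft(np.fft.fft(x) * H_true).real   # circular convolution, noiseless

X, Y = np.fft.fft(x), np.fft.fft(y)
power = np.abs(X) ** 2
mask = power > 1e-3 * power.max()   # "significant input frequency bands"

# estimate H only on well-excited bins; interpolate across the rest
H_est = np.zeros(N, dtype=complex)
H_est[mask] = Y[mask] * np.conj(X[mask]) / power[mask]
bins = np.arange(N)
H_est[~mask] = (np.interp(bins[~mask], bins[mask], H_est[mask].real)
                + 1j * np.interp(bins[~mask], bins[mask], H_est[mask].imag))
```

On well-excited bins the division is exact in this noiseless circular case; the interpolation merely keeps the inverse FFT from being trashed by the empty bands.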

Noting my understanding of FFTs is limited to long past university lectures of the late 1960s and early 1970s and only very occasional use since then, I would point out that it is a pity we can’t get (Dr) Jeff Glassman of Rocket Scientist’s Journal fame to contribute to this specific (and entirely gripping) discussion (thanks Bart, Nick, Mark, David).

It doesn’t seem to be well known that Jeff was one of the great luminaries of late 19th century signal processing theory and is still recognised as the author of the best general N point fast Fourier transform (FFT) algorithm ever (published in the mid-late 1970s if I recall) still in widespread use.

Oops, make that 20th century (;-)

Joe Born:

Thanks for that Joe, very helpful. I’ll reproduce that comment on the post on my blog where I’ve been attempting to make a reasonable collation of Bart’s comments from across three blogs.

http://tallbloke.wordpress.com/2011/09/11/bart-cloud-feedback-is-negative-ocean-response-is-around-4-88-years/

My best hope was that enough people would see my analysis to start giving it thought, replicate it on their own, and go on from there. As of now, I see the cloud radiation impulse response has been viewed 8,054 times! Hopefully, there are some among them who will carry the ball forward.

Bart, nice work. Because of the format of comments here, it’s somewhat difficult for a casual reader to get the gist of your findings. I think people are starting to hear that you have done something interesting and are coming here to see what you have done. It would be helpful if you could summarize it in simple terms. As a suggestion, perhaps you could start by summarizing the Spencer/Dessler controversy in simple terms (Spencer: more clouds = cooler; Dessler: more clouds = warmer), then summarize your method and approach, and then discuss your conclusions.

Perhaps Steve could allow you a guest author spot for this.

I too would like to see all the work assembled into one coherent article. In my case I am doing some work with Low Frequency Noise (LFN of turbines and large mechanics like trains) and some of the analysis techniques look similar. Granted that the stuff I am looking at is aperiodic and random (for which FFT does not always produce good results) – but still I did some of the analysis in a similar manner. One of the interesting techniques I found is to create a simple power function for the wave train, then look for the areas where the steps do occur. Like you I found that using n=30 (or 32) for the size of the moving average seemed to bring out the low frequencies and the steps. I did not run it “backwards” to clean up noise – never thought of it – but it should be a small change to test that… Once I found the areas of interest then I did the analysis of that area more thoroughly. One major difference is that my sample sets are typically several million points (40-80 samples a second) for a run so I use a DB server to store the data. No lack of data here.

Nick

Have you detrended the data before applying the FFTs? It makes quite a difference – see

http://climateandstuff.blogspot.com/2011/09/ffts-cloud-feedback-and-stuff.html

final plot

jp,

No, neither Bart nor I did that. The data was zero mean.

But it certainly would make a difference. As I’ve mentioned above, the zero frequency mag of h, which Bart is calling the negative feedback number (his -9.4 W/m2/C), is just the ratio of the trends of dR and T. If you remove them, you’d get quite a new number based on the ratio of second moments. And so on.

However, trend removal is tricky. You have to put it back eventually. As I’ve said in my post, we’re in a world of periodic functions. Trend isn’t periodic. It would create a sort of sawtooth wave.

Not much of a trend in Hadcrut or either of the cloud series.

Abstract: IPCC (Intergovernmental Panel on Climate Change) AR4 (Fourth Assessment Report) GCMs (General Circulation Models) predict a tropical tropospheric warming that increases with height, reaches its maximum at ∼200 hPa, and decreases to zero near the tropical tropopause. This study examines the GCM‐predicted maximum warming in the tropical upper troposphere using satellite MSU (microwave sounding unit)‐derived deep‐layer temperatures in the tropical upper‐ and lower‐middle troposphere for 1979–2010. While satellite MSU/AMSU observations generally support GCM results with tropical deep‐layer tropospheric warming faster than surface, it is evident that the AR4 GCMs exaggerate the increase in static stability between tropical middle and upper troposphere during the last three decades.

Citation: Fu, Q., S. Manabe, and C. M. Johanson (2011), On the warming in the tropical upper troposphere: Models versus observations, Geophys. Res. Lett., 38, L15704, doi:10.1029/2011GL048101.

Referring to Figure 2 in Fu et al., 2011, i.e. their key plot of modeled vs observed tropical upper tropospheric temperatures, I find it very curious that Australia’s CSIRO Mark 3.0 GCM matched the RSS & UAH (T24 – T2LT) value in K/decade (average ~0.010) so well within error bars, and yet their (CSIRO’s) presumably later, much improved Mark 3.5 model had ‘drifted’ right back to (essentially) match the overall GCM ensemble mean value (~0.100) (although both versions of the CSIRO GCMs took into account ozone depletion).

I would be absolutely fascinated to find out just what might have been the key difference, e.g. with respect to tropical upper tropospheric cloud handling, between the CSIRO Mark 3.0 and Mark 3.5 models.

There might be a powerful clue therein.

Any ideas Nick?

Steve S,

I’m not familiar with CSIRO’s recent models. There is a report at

http://www.cawcr.gov.au/publications/technicalreports/CTR_021.pdf

dealing with the differences between 3.0 and 3.5. They have improved the absorption of LW radiation – they mention only minor changes to cloud models.

I have some results using an R version of Bart’s code on TSI and global temperature http://landshape.wordpress.com/2011/09/13/fft-of-tsi-and-global-temperatur/

“This is the application of the work-in-progress Fast Fourier Transform algorithm by Bart coded in R on the total solar irradiance (TSI via Lean 2000) and global temperature (HadCRU). The results below support the assessment PDF that the atmosphere is sufficiently sensitive to variations in solar insolation for these to cause recent (post 1950) and paleo-warming.

The analysis is an indication of the robustness of the method, as it gives a different but appropriate result on a different data set. It’s going to be a very useful tool in arguing that the climate system is not at all like it’s made out to be.

I will post the code when it’s further along.”

David, thanks for your link. Are you still tidying your R code? Interested to see it when it’s available.

Bart –

Sorry, there wasn’t a “reply” button on your post of 2:34 am. Possibly reached a maximum indent level.

You wrote: “It appears, generally speaking, that bad performance occurs when a frequency band is not well covered in the input. So, you end up getting division by a small number which trashes the numerics and creates errant peaks in the frequency response estimate.”

That’s the problem when you compute Y/X. At frequencies where X=0 it blows up, and nearby it’s a little unstable. One thing which may help is to compute instead

YX*/(XX* + eps^2) where eps is a small number and the asterisk of course indicates complex conjugation. Just a thought, you may well have already tried it.
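As a sketch of the regularized division Harold suggests (the synthetic signal, impulse response, and `eps` value below are illustrative choices, not anything from Bart's actual analysis):

```python
import numpy as np

# Wiener-style regularized deconvolution: H = Y X* / (X X* + eps^2).
# Where |X| is small, the eps^2 term keeps the division from blowing up.
rng = np.random.default_rng(0)
h_true = np.array([1.0, -0.5, 0.25])      # known 3-tap impulse response
x = rng.standard_normal(256)              # input signal
y = np.convolve(x, h_true)                # output (full linear convolution)

n = 512                                   # pad well past len(x) + len(h) - 1
X = np.fft.fft(x, n)
Y = np.fft.fft(y, n)

eps = 1e-3                                # regularization level (tunable)
H = Y * np.conj(X) / (X * np.conj(X) + eps**2)
h_est = np.real(np.fft.ifft(H))[:3]       # recovered impulse response
```

With noise-free data and a well-excited input the regularization barely matters; its value (and its limits) show up when some frequency band is weakly excited, which is exactly the situation Bart describes.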

Thanks, Harold. Yes, that is the standard Wiener deconvolution formula. But, it doesn’t seem to help a great deal, I think because, essentially, there is simply no information in that band, and replacing it with something which is not correlated in the output may make things numerically nice, but doesn’t really improve the estimate.

That is why I came up with the idea of using information on either side of the missing band to infer and interpolate what should be in that band. We’ll see if it works.

Bart,

I’m not sure that deconvolution is an appropriate technique for your aim. As ‘temp’ contains noise across frequencies, wouldn’t either division or deconvolution (dependent on whether working in time or frequency domain) amplify this noise? I.e. any temp frequencies approaching zero.

I would think that deconvolution is ‘only’ suitable if you have derived a robust noiseless estimate of H (e.g. the ‘blur’ point spread function of an optical lens) and from this you wish to reconstruct input X. Rather than using X and Y to reconstruct H. Please correct me if I am wrong.

Argggh …. scratch that. I now see the point of your post and where you are heading in attempting to account for this (using interpolation).

This is interesting. I wish I didn’t have my own job requiring my attention 🙂 Steve

I’m not sure I saw that part (the interpolation.) There’s so much chaff it’s hard to keep up (h/t to sky for posting something interesting, though potentially damaging to any attempt to extract the relationship.) I ran out of time today though will make a valiant attempt at more investigation tomorrow.

Mark

I’m glad to see Bart taking the time to acquaint CA readers with the rudiments of system analysis. The incisive power of its sophisticated analytic techniques is a much-needed antidote to the simplistic, wrong-headed notions of system behavior that have long been the hallmark of “climate science.”

There are several problematic matters in the results that Bart obtains under the assumption that the relationship between cloud radiation and temperature is governed by the transfer function (or impulse response function) characteristic of a damped oscillator. Not the least of them is that assumption itself, which cannot be physically justified in the case of diffusive processes governed by first-order differential equations that characterize heat transfer. Moreover, temperature and radiative fluxes are dimensioned differently. Physical factors other than radiative fluxes (evaporation, convection, etc.) enter into the determination of temperature in a highly non-linear fashion. Thus one cannot be the system input for the other output, except in a purely phenomenological sense.

On purely formal mathematical grounds, the transfer function can be determined empirically from data only in the case of nearly perfect cross-spectral coherence between input and output of a linear system. Such coherence, which is typical of a well-designed measurement system, is hardly to be found anywhere amongst geophysical variables. I’ve estimated the cross-spectrum between HADCRUT3 and the putative cloud radiative fluxes (difference of columns E – H in Spencer’s spreadsheet) with the following results for the squared coherence and cross-spectral phase (in degrees) at the lowest frequencies (indexed as cycles/48yrs):

Freq.    0    1    2    3    4
Coh.   .57  .48  .39  .23  .03
Phase   34   58   90  129   84

While the phase lead of radiative flux relative to temperature is unmistakable, these coherence results do not support the premise that a reasonable empirical determination of the relationship between these two variables can be made from the data.
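For readers who want to experiment with this kind of coherence estimate, here is a minimal sketch on synthetic data using `scipy.signal.coherence`; the filter, noise level, and segment length are illustrative choices of mine, not sky's actual computation:

```python
import numpy as np
from scipy import signal

# Magnitude-squared coherence between an input and a linearly filtered
# output, with and without heavy output noise.
rng = np.random.default_rng(1)
n = 4096
x = rng.standard_normal(n)
y_clean = signal.lfilter([1.0, 0.5], [1.0], x)     # exact linear relation
y_noisy = y_clean + 2.0 * rng.standard_normal(n)   # strong additive noise

f, coh_clean = signal.coherence(x, y_clean, fs=1.0, nperseg=256)
f, coh_noisy = signal.coherence(x, y_noisy, fs=1.0, nperseg=256)
# coh_clean sits near 1 at all frequencies; coh_noisy is dragged well
# down, which is the sort of picture sky reports for the climate series.
```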

“There are several problematic matters in the results that Bart obtains under the assumption that the relationship between cloud radiation and temperature is governed by the transfer function (or impulse response function) characteristic of a damped oscillator. Not the least of them is that assumption itself, which cannot be physically justified in the case of diffusive processes governed by first-order differential equations that characterize heat transfer.”

I don’t think the method assumes a damped oscillator. It finds a damped oscillator. It’s capable of finding other, simpler systems, characterised by simpler impulse responses.

The diffusive process is your assumption. I have found it worthwhile to take a fresh look at the data and ask “What if I believe exactly what it is telling me?”, and then follow that through. To me this points to certain properties that are not widely appreciated, such as very long decay/rise times, and integrative/derivative relationships.

“Moreover, temperature and radiative fluxes are dimensioned differently. Physical factors other than radiative fluxes (evaporation, convection, etc.) enter into the determination of temperature in a highly non-linear fashion. Thus one cannot be the system input for the other output, except in a purely phenomenological sense.”

Temperature and radiative fluxes are related quite appropriately in the classic first order ODE of the energy balance model. To say they are highly non-linear and that this blocks further analysis is a cop-out, IMO. It is quite likely, within the constraints of the abstractions inherent in an ODE, that the essential features of convection and evaporation can be represented.

For example, they might be identified with inertial components of the general harmonic oscillator ODE. That is, an increase in temperature causes more evaporation and convection, but this effect fades as higher temperatures stabilize, due to lapse rate stability. This would explain the phase lead, even though the cause is temperature, because it is proportional to the derivative of a change, not proportional to the change itself.

But you won’t be able to understand these relationships until the analysis has the power to represent them.

Although you quote my words exactly, your comments are at best tangential to the issues that I raise. I don’t have the time today, but will respond more fully tomorrow.

Meanwhile, I need to correct the error made in describing the scale of the frequency index in my cross-spectrum analysis. The units are cycles per 48 MONTHS, not years.

What you consider to be the “findings” of Bart’s methods are entirely predicated upon the bald assumption that there is a strictly deterministic LINEAR relationship between temperatures and cloudy sky radiative fluxes. What I point out is that other physical variables enter into the determination of surface temperatures. Thus the actual system is not single-input; it cannot be adequately characterized by the impulse response to a single variable, no matter how the system is configured.

Nor on the basis of known physics is it plausible that heat transfer alone (a first-order non-oscillatory mechanism) constitutes a second-order oscillatory system, as in Bart’s characterization of the relationship between the two variables. The coherence-destroying effects of neglected NON-LINEAR mechanisms are clearly at work in the actual temperature data. Evaporation acts as a severe limiter. Without strong coherence between the variables, even optimal methods of determining the transfer function via cross-spectrum analysis (Wiener-Hopf) do not provide useful results. With insignificant coherence, the transfer function appropriately vanishes along with the impulse response.

The coherence is in no way incorporated in the ratio of FFT coefficients that is the crux of Bart’s simplified approach. While Bart is keenly aware of the limitations of what he did, I see no evidence that most readers are. One can only guess at what the consequences are upon the results. I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed, whereby the record length is increased by nearly two orders of magnitude.

Your idea that the “energy balance equation” universally relates temperature to energy flux may perhaps be employed at TOA with the assumption of an equivalent planetary BB temperature. But you can’t do that with surface temperatures influenced by evaporation; BBs don’t evaporate. In any event, there was no such dimensional conversion done in Bart’s analysis. The results he obtains are very different from those obtained via Wiener-Hopf. Your speculation that temperature rates of change, rather than temperature levels, may still be the “cause” of the phase relationships shown by my cross-spectrum analysis is just that. It finds no support in the known physics.

Lastly, your “cop-out” characterization of my caveat about using linear analysis in the face of SEVERE nonlinearities is revealing. I know that if anyone at our firm did that, they would be fired for incompetence. Let’s leave it at that.

“I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed, whereby the record length is increased by nearly two orders of magnitude.”

I don’t think it’s the padding, though there could certainly be less of it. I tried cutting the 8192 to 4096 with no effect.

I think it’s the hack that he did to h. He did a Hann taper from about sample 1024 to 2048. But forgetting about periodicity, so it zeroed the whole negative t part. The time constant is the first moment of h, so if you chop off a whole lot on one side, it shifts the apparent centre the other way.

“time constant is the first moment of h”

That is, the first moment divided by the zeroth moment (area).
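As a toy illustration of that moment calculation (the two-sided shape below is made up, not Bart's actual h), zeroing the negative-t half visibly shifts the centre toward positive t:

```python
import numpy as np

# "Centre" of an impulse response as first moment / zeroth moment,
# and the shift produced by chopping off the negative-t side.
t = np.arange(-50, 51)                    # lag axis; negative t = acausal side
h = np.exp(-np.abs(t) / 10.0)             # symmetric two-sided toy response

centre_full = np.sum(t * h) / np.sum(h)   # ~0 by symmetry

h_causal = np.where(t >= 0, h, 0.0)       # zero the negative-t half
centre_causal = np.sum(t * h_causal) / np.sum(h_causal)  # shifts to +t
```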

My suspicion was based upon the fact that Bart’s time constant is roughly half the length T of the actual data. Zero padding creates an artificial time-limited function, which consequently cannot be band-limited. All of his FFT results at frequencies lower than 1/T are the product of that artifact. You may be correct that the problem lies elsewhere, but I’d like to see results with far less and eventually no padding.

Although I did not stress the point in my response to Stockwell, the FFT ratio used to compute h raises questions about its consistency as an estimator when the data are non-periodic and only marginally coherent. This matter is quite apart from any windowing issue that you raise (which I did not consider).

I think you need some padding – we’re looking for a response function that has twice the span of the data. But I agree that the low frequencies are far too low. That’s why the result being quoted is just the ratio of the OLS trends of dR and temp. That’s all the FFT can resolve at those frequencies.

I tried substantially reduced padding. Down to 512 points (total) it made little difference. H[1] (area under h) moved from -12.2 (the trend ratio) to -12.8. Going to 256 made a difference (-14.2), but there is an issue, because the zero value of Y/X (after first FFT) has to be interpolated, and neighboring values are now much less smooth.

I think things are still being ASSUMED here that shouldn’t be, given the general lack of coherence between the two data series. The major question, as I see it, is the existence of any LINEAR CAUSAL relationship. Without such, the entire idea of a characteristic impulse response function is inapplicable. Even when the series are very highly coherent, the linear relationship need not be causal; the impulse response function can be BILATERAL, as in a filter with symmetric weights.

To say that you’re looking for a long impulse response begs the entire question. (BTW, the time-constant that Bart cites is the e-folding time and not the average age of the data.)

What troubles me increasingly as I ponder Bart’s results is how he imposes causality. Having only glanced at his analysis code, I’m not sure of the particular step. Nevertheless, when it comes to his recursive 5-component model, it is apparent that radiative flux is being determined by its past values and by past and present values of temperature. This cannot possibly reproduce the phase lead of the flux relative to temperature that is found by cross-spectrum analysis.

Although I applaud Bart’s efforts to acquaint readers with system analysis methods, I wish he had picked an appropriate pair of data series for demonstration.

Have a good weekend.

“Without such, the entire idea of a characteristic impulse response function is inapplicable. Even when the series are very highly coherent, the linear relationship need not be causal; the impulse response function can be BILATERAL, as in a filter with symmetric weights.”

Indeed so. I wouldn’t say inapplicable – it just doesn’t mean what Bart wants it to mean. The impulse response h is bilateral, though not symmetric. In fact, the centre point is at about 17 months.

“What troubles me increasingly as I ponder Bart’s results is how he imposes causality.”

Causality is imposed by the attempt to taper. This consists of multiplying by a function w which is made thus:

From points 1 to 1024 w=1

From 1024 to 2048 it tapers Hann-style to 0 (cos^2)

From 2048 to 8192 it is zero.
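Read literally, that recipe can be written out as follows (the exact index conventions are my assumption; only the overall shape matters for the argument):

```python
import numpy as np

# One reading of the taper recipe above: flat, then cos^2 fall, then zero.
n = 8192
w = np.zeros(n)
w[:1024] = 1.0                                       # flat section
k = np.arange(1024)
w[1024:2048] = np.cos(0.5 * np.pi * k / 1024) ** 2   # Hann-style cos^2 fall
# w[2048:] stays zero. By the DFT's implicit periodicity this also zeroes
# the wrap-around "negative t" end of the record, hence Nick's objection:
# the taper is smooth on the tapered side but jumps from w[8191]=0 back
# to w[0]=1 at the wrap-around, i.e. it is one-sided, not symmetric.
```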

Now h was bilateral. The significant positive t part is in the 1-200 range approx, and the negative-t part is approx 8000-8192. The rest is mostly noise. So this is where the causality is imposed. The negative part is set to zero.

This has an effect on the centre-point of h, derived from the first moment. With the loss of the negative part, it moves in the positive t direction, from about 17 months to 4.8 years. While h varies in sign, its sum on each side is negative.

The taper used is quite inappropriate. You can see this from the fact that there is only one tapering section. Tapers are normally symmetric. There is no symmetry about 0 here. So the taper has continuous derivative on one side, but is discontinuous on the other.

I have been unsuccessfully urging Bart to just look at the FFT of the taper w. A proper Hann-style window would show frequency attenuation of 18 dB/octave. His shows 6 dB/octave, which is determined by the discontinuity. The main problem with w is that it cuts into h, but it’s also a very unsuccessful taper.

“Causality is imposed by the attempt to taper.”

No, it isn’t. This is standard practice. You are too inexperienced to understand.

There is no such thing as a non-causal response in this universe.

“the impulse response function can be BILATERAL, as in a filter with symmetric weights.”

Such a filter is still causal. The impulse response goes forward in time – it has no weights extending into the past, at least not if it is applied in real time. That is what Nick is proposing here. It is completely insane. You are too inexperienced to understand… and refuse to read for comprehension anything I have written to explain where you have gone wrong.

“The impulse response goes forward in time”

Bart,

Where in the algorithm do you tell it which direction is forward in time? You’ve just fed in two sets of numbers. How does it know time is involved at all? Why could they not be varying with distance, say?

All you’ve done is divide two FFTs and iFFT the result. As sky asks, where is the directionality in that?

Causality is not in the algorithm. It is in the model that Bart is investigating by use of the algorithm. He has told you that many times. The model may be correct or incorrect. The use of the algorithm may be misguided. However, he is not using it to deduce causality.

He hasn’t fed in two sets of numbers. He has fed in a set of observations that he is fitting to a model. How many times does he have to tell you that?

Tom,

The model isn’t even introduced until he has chopped away one side of the impulse response. I’ve set out more details of that process here.

From what I gather

a) Dessler says that the correlation between cloud and radiation is weak and may even be positive

b) Spencer says that Dessler used instantaneous correlation and that a lagged correlation is stronger and negative

c) great controversy with abject apologies etc etc etc

d) SMc indicates the lagged correlation in a blog post

e) great mumbles about data windows, lags, correlations, noise, damped exponentials, … (note the damped exponentials, lags and differential equations assume a causal model)

f) Bart asks: if a causal feedback model is assumed, then why not use the standard mathematics that has been developed over many decades to investigate causal feedback models? Under the assumption of a specific causal model, Bart takes the observed time domain measurements and uses the FFT to create phase and magnitude responses (called Bode plots here) along with impulse and step responses that are derived from such “Bode plots”.

g) Bart notes that the magnitude and phase response are typical of a second order system, which would be what was expected from the assumed model. Feedback is negative and around 10 dB. Bart notes this as evidence that the assumed model is valid, but notes that this is not conclusive proof

h) Sky and RomanM indicate doubt that the simple physical model that Bart has assumed is an accurate model of the cloud system. They note that the evidence that Bart cites is primarily from the low frequency portion of the Bode plot and that the high frequency region is not in agreement. Bart agrees with this and indicates that he has described other processes to create a more complicated model in previous posts. However, Bart notes that the low frequency agreement is good and that the high frequency region that Sky cites has already been discussed as an artifact of noise and other processes.

i) Nick Stokes asks how Bart knows that the system is causal. Bart says that he has assumed a physical model that is causal and that the calculations that he has performed are in reasonable agreement with this assumption. Nick Stokes talks about impulse responses and windowing, not Bode plots. Step i) is repeated forever

That’s pretty much it, Tom. My only quibble is “a second order system which would be what was expected from the assumed model.” I wasn’t expecting to see much of anything coherent – I was just intending to look at the phase at low frequency. Maybe I should have stuck with that, because no matter what atrocities Nick performs on the data, he still comes out with low frequency 180 deg phase.

I just happened to observe a remarkably coincidental, well defined, typical 2nd order response appear in the analysis.

Nick:

“How does it know time is involved at all?”

It doesn’t. But, most people realize that time moves forward.

The results of an FFT and an IFFT are ordered. Performing them successively creates samples that are in the same order as the input. The FFT and IFFT themselves do not “care” what the order is. I’ve made this point repeatedly and Nick simply fails to get it.

Mark

Mark,

I’ve shown here the effects of simply reversing the order of the data. The FFT doesn’t care – it just spits out h in reverse order too.

But here’s the rub. Bart says that for h, negative t can be ignored as noise, and his taper erases it. But if you applied that thinking to the reverse problem, you’d erase the data that was considered good for the original problem, and work with the data previously discarded as noise.

So what determines which half you can regard as noise?

With the limited data that we have, quibbling about the type of data window to use does not seem to be useful. We should not go beyond the limitations of the data in discussing the results. Whether we have a magnitude of 9 dB or a magnitude of 12 dB does not change the results. That is, there is some evidence of negative feedback from clouds. For me, that is the extent of the conclusions that can be drawn. Now, what does this mean? Is this evidence compatible with more sophisticated physical models? Is it simply a spurious result or, to the same point, can it make repeatable predictions?

Yes, this is what is important. Even Nick does not disagree with the 180 deg phase shift. And, importantly, the feedback is not only shown thereby to be negative, but it also appears large enough to be very significant.

“Even Nick does not disagree…”

Well, I pointed out that if you start the analysis at Jan 2001 instead of Mar 2000 you get a strong positive feedback (because the trend of T becomes negative). You said that we can’t afford to remove data. But it’s a standard sensitivity test. You can’t say that 124 months is good and 114 is worthless.

“You can’t say that 124 months is good and 114 is worthless.”

I tried it on my end. Still get -180 deg at low frequency. But, it isn’t kosher to eliminate data when the thing you are trying to find requires as long a data record as you can find.

‘This cannot possibly reproduce the phase lead of the flux relative to temperature that is found by cross-spectrum analysis.’

We do not care about high frequency stuff, which is almost certainly dominated by noise and independent processes. For feedback, what matters is the low frequency regime.

“…which is likely dominated by noise and independent processes.”

In any case, the sign of the feedback is determined by the low frequency response.

But high frequency is low frequency. A frequency of 8191/8192 revs per month (RPM), the highest on your axis, is not such a frequency on a monthly sampled time axis. It is a frequency of -1/8192 RPM. That’s the frequency you see after sampling.

The highest frequency I can theoretically see is the Nyquist frequency, which is 0.5/T, where T is the sample time. If T is 1/12 years, then that is 6 years^-1, or 0.5 months^-1.

The max frequency I can see theoretically is the Nyquist frequency, which is 0.5/T, where T is the sample time. That is 0.5 months^-1 with T in months.

Blasted first reply didn’t show up at first.

Bart –

I agree that the Nyquist frequency in this context is 0.5 cycle / month. But your frequency response plot (above on this thread, or here ) appears to go up to 0.5 cycle / year. Labelling error on your graph, perhaps?

Bart

“The max frequency I can see theoretically is the Nyquist frequency”

That relates to much of what I’ve been saying about periodicity. The DFT deliberately and standardly goes to double the Nyquist frequency, 1 cycle/month. The Nyquist frequency is mid-range. So it makes more sense to interpret the upper part of the range as a set of diminishing (toward zero) negative frequencies.
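numpy's `fftfreq` makes this standard bin-to-frequency mapping explicit for a monthly-sampled record of the length discussed here:

```python
import numpy as np

# DFT bin ordering: bins in the upper half of the range are conventionally
# read as negative frequencies, with the Nyquist bin sitting mid-range.
n = 8192
freqs = np.fft.fftfreq(n, d=1.0)   # d = sample spacing, here 1 month

lowest_pos = freqs[1]        # 1/8192 cycles/month
nyquist_bin = freqs[n // 2]  # -0.5 cycles/month (the Nyquist bin)
highest_bin = freqs[-1]      # the "8191/8192" bin is really -1/8192
```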

HaroldW (Posted Sep 18, 2011 at 2:03 PM):

0.5 cycles/year * 1 year/12 months ≈ 0.0417 cycles/month, much less than 0.5 cycles per month.

“The Nyquist frequency is mid-range. So it makes more sense to interpret the upper part of the range as a set of diminishing (toward zero) negative frequencies.”

Sure… in the frequency domain.

“…which is likely dominated by noise and independent processes.”

You know, in actual fact, in thinking this over… that higher frequency phase lead may be just what is needed to stabilize things with a fairly high bandwidth. If I modify my model to add in phase lead terms so that it looks very close to the frequency response estimate over all observable frequencies, then it is very easy to use it to close the loop around an integrator and produce a stable feedback system. The observed phase lead, assuming it is not just noise or the expression of independent processes, forms the “D” in a standard PD control loop. The apparent 2nd order response I have been harping on is essentially a lag compensator in series with the PD which boosts the gain at low frequency.

Very interesting…

It hums, doesn’t it?

=====

“I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed”

Thanks sky for your input – a couple of questions if you don’t mind.

(1) If this time constant is artefactual (from zero padding), why does convolving the impulse response with temp represent the low frequency content of dR so well (subjectively)? This doesn’t make sense to me.

(2) Wouldn’t low coherence represent the fact that mid to high frequencies are not well represented? Isn’t Bart’s point that these are not components he is interested in?

(3) In systems analysis, can you consider coherence in bands to try and account for these types of issues? I.e. segregate (or quantify) impacts of hypothesized near-linear components?

thanks again, Steve

Sorry, I had to run to a meeting yesterday. I don’t mind your questions at all. In response to them:

1) Given 5 free parameters, such as Bart uses in his recursive feedback model, Johnny von Neumann claimed he could make an elephant’s trunk wiggle.

2) The coherence results I show ARE for the low frequencies and are entirely independent of mid- and high-frequency components. Cross-spectrum analysis does not conflate them. The data contain virtually no information at the much lower frequencies that Bart produces through zero-padding in his FFT analysis.

3) As explained in 2), that’s what I did. In the context of NON-linear analysis, bi-spectrum analysis can reveal quadratic interactions band-by-band. I’ve done such analyses frequently, but not on this miserably short data.

Hope this helps.

Zero padding does not change anything except to increase the density of points in the frequency domain results.

You must zero pad, though, to at least double the length of the sequence, or you will get time aliasing when you invert the transform.

Again, a very standard, well understood, and accepted practice being questioned by the uninitiated.
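The time aliasing Bart describes is easy to demonstrate with a toy signal and filter (all values below are arbitrary illustrations):

```python
import numpy as np

# Why padding to at least double matters: the inverse FFT of a spectral
# product is a CIRCULAR convolution, so an unpadded transform wraps the
# tail of the response back onto the start ("time aliasing").
rng = np.random.default_rng(2)
x = rng.standard_normal(64)
h = np.array([1.0, 0.5, 0.25, 0.125])
y_linear = np.convolve(x, h)                       # true result, length 67

# Unpadded: a length-64 circular convolution; the last 3 tail samples
# wrap around and corrupt the first 3 outputs.
y_circular = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))

# Padded to 128 >= 64 + 4 - 1: matches the linear convolution exactly.
n = 128
y_padded = np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))[:67]
```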

Indeed… padding with zeros just allows sub-cycle “correlations” in the FFT process. Very standard, as Bart states.

The coherence issue sky is getting at is likely just a consequence of a MIMO system with some added (not literally, as in “thrown in”) non-linearities. Doing this will “average out” the non-linearities over the time-span of data – if they are not significant, i.e., if the poles are not moving around too much and the system looks somewhat stationary over the entire span, then the results can be expected to be reasonable. Another 10 years of data will be telling – does the “model” hold? It will also simply find the one path through the MIMO system represented by the one input and one output. Nobody is arguing it is the entire system, Bart in particular.

In general, any of the coherence issues that impact Bart’s fairly straightforward analysis will impact others just as much, if not more.

Mark

Sky writes “I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed”

Would certainly be a valid argument if there weren’t climatic processes that spanned those sorts of timeframes. But there are. And hence the results can’t arbitrarily be written off as an artifact.

The presence of climatic processes on those timeframes is an acknowledged fact – one that has little bearing on the mathematical issues in Bart’s analysis. There’s nothing arbitrary about pointing out those issues and presenting results from an analysis that Bart himself acknowledges as being more rigorous.

sky – you may benefit from reading this post.

Zero padding is not the culprit for anything you are suspecting here. See comment at Sep 18, 2011 at 2:30 AM.

I have freely admitted the data record is too short to have high confidence in the result. What I have striven to prove is that it is

possible. What I am tired of is getting nonsensical attacks based on uninformed nonsense (such as Nick’s obsession with artifacts of circular convolution suggesting backward in time (!) responses). Zero padding is, sorry to say, another red herring. The weakness IS the span of data. Nothing else. Just that.Most importantly, however, it is very, very clear that there is no basis whatsoever for Dessler’s argument that the feedback is positive. The phase response is near 180 degrees at low frequencies which are nevertheless well above a reasonable threshold of resolution. My analysis here establishes that this negative feedback could very well be significant.

Very significant.

Strictly speaking, BTW, you really do need a lot more than 5 parameters to make an elephant’s trunk wiggle arbitrarily.

One other note on the last: those 5 parameters are not independent. They are completely dependent on the three parameters of a 2nd order continuous time system model: gain, natural frequency, and damping ratio.

Hi sky,

The characteristics of a damped oscillator were never assumed, it is an interpretation of the result.

The system is nonlinear, not SISO and could consist of adaptive filters. I would also favor the view that clouds influence temperature as an independent degree of freedom, additional to any feedback role.

Given these ideas, I am unsure why you would predict a “near perfect cross spectral coherence” (i.e. 100% predictive power of dR from temp)?

Natural systems are not electronic circuits and I would think that expectations of coherence need to be adjusted (due to these additional complexities).

I understand that nonlinear systems are often investigated using reverse-correlation techniques (white noise) to derive linear contributions to the overall system. Would you agree that deriving a linear component that explains x% of the data is a valid contribution to understanding this system?

Is it possible to quantify a linear contribution from analysis of coherence or is this just back to front reasoning?

We might then propose that there is no room left for a ‘cloud positive feedback role’, i.e. we would expect a minimum likelihood of ‘x’ (in the observed signals). The test would be whether this is falsified with this linear analysis.

Steve

Hi Steve.

I don’t PREDICT near perfect coherence. On the contrary, I state that as a REQUIREMENT for Bart’s simplified approach to work effectively. That requirement is grossly violated by the actual data, as the cross-spectral results show, at best, only marginal coherence. For other issues, see my reply to Stockwell. Gotta run.

Steve,

This feels to me like a re-run of the Hockey Stick saga where Dessler 2010 takes the same role as Ammann and Wahl 2007.

Viz:-

– A keystone component of the Warmist dogma is attacked by a sceptic

– This is refuted suspiciously quickly by a member of the Team

– Close examination of the refutation raises the suspicion that the data and/or statistical techniques applied were selected so as to meet the overriding need of the team to circle the wagons, rather than for any sound scientific reason.

I have to admit I am lost on the relative merits of the datasets Dessler used and the ones you used. I think I (and maybe a lot of other people) would welcome a further post on this topic which would explore whether there are any sound scientific grounds for Dessler using the datasets he did.

agrees with Richard… and where Nick Stokes, for whatever reason, is playing the role of doggedly defending the consensus by throwing up meaningless chaff in order to cloud the issue. Slightly surprised to see Paul come in though… although his comment was obviously cribbed from Wikipedia.

Kudos to Bart… he has tried to open the world of the climatologist to techniques that are well-established outside. The resistance he has faced is… instructive.

An old Chinese saying goes:

True words aren’t eloquent;

eloquent words aren’t true.

Wise men don’t need to prove their point;

men who need to prove their point aren’t wise.

Call me skeptical, but it doesn’t “prove” much of anything to merely generate some random data containing relationships which someone believes to be characteristic of the observed data set, but which may not reflect reality. I would suggest it would be more relevant to evaluate what has actually occurred in the analysis itself.

The Fourier transform and the FFT are linear operators. For the others who may be reading this thread, this means that applying the FFT to the sum of two series is identical to applying the FFT to each series separately and adding the two results together. The same property applies to the numerator series (in this case, dR) in the inversion procedure done in the calculation of h, the “impulse response curve” which was calculated for Bart’s comment earlier. If we take the series dR apart and express it as the sum of two distinct series, we will be able to see the effect of each of the components on the impulse curve.
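The linearity property described above can be checked directly; a minimal R sketch with synthetic series (stand-ins, not the CERES data):

```r
# The DFT is linear: fft(a + b) equals fft(a) + fft(b) elementwise.
# This is what justifies decomposing dR into a trend plus residuals
# and examining each component's contribution to h separately.
set.seed(1)
a <- rnorm(124)   # stand-in for the trend component of dR
b <- rnorm(124)   # stand-in for the residual component
max(abs(fft(a + b) - (fft(a) + fft(b))))   # numerically zero (roundoff only)
```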

It is a simple matter to do a regression to calculate a simple trend line for dR over time along with the residuals from that regression. These two series will provide a simple decomposition of dR. Using a simplified form of Nick’s R version of the Matlab script we can get the following plot:

The red line is the effect of only the trend line for the dR data. The green line is the effect of the residuals from the regression. The black line (which is equal to the sum of these two curves) is the result of calculating the impulse response from the original data.

The lack of difference between the red and the black curves suggests to me that the original analysis describes the relationship between the temperature series and the negative-slope trend line of the dR data, with very little role played by the residual “wiggles”. One would likely get a similar result if, instead of dR, they were to use any other variable (even one unrelated to climate) with a similar trend having a random residual series added to it. I fail to see how the trend could contribute in any way to create a genuine 4.88 year cycle for such a relationship.

So where could the observed cycle come from? I used R to fit a loess curve to the temperature data (with default parameters) and got the following:

Who knows what frequencies might be observed as the loess smooth gets converted into a straight line? However, with only 124 months of data, I doubt any such cycles would have a physically viable meaning.

Here is a script for the above comment:

#function to calculate impulse response

#based on Nick Stokes’ script

#without tapering

impulse.calc = function(Yvar,Xvar,samps = 8192) {

N=length(Yvar)

dT=1/12;

Nsamp = samps/2

Npad=Nsamp-N

X=fft(c(Xvar,rep(0,Npad)))+1.0e-9

Y=fft(c(Yvar,rep(0,Npad)))

hr = Re(fft(Y/X, inverse=TRUE))/dT/Nsamp

Nc=Nsamp/8

w=c(rep(1,Nc),(1-cos(pi*(1:Nc-1)/(Nc-1)/2)))

hw=hr*w

f1=c(1:15,15:1); f1=f1/sum(f1)

hs=filter(hr,f1)

t=(0:599)*dT

plot(t,hs[1:600],type="l",ylab="Impulse Response W/m2/C/yr",xlab="Years",main="Smoothed Impulse Response")

invisible(hs) }

flux=read.csv("http://www.climateaudit.info/data/spencer/flux.csv")

flux=flux[3:126,] #removes NA rows

dR = flux[,5]-flux[,8]

temp = flux[,9]

#Impulse for dR

test1 = impulse.calc(dR,temp)

#calculate trend line for dR

reg.time = (1:124)/12

dR.reg = lm(dR ~ reg.time)

dR.pred = predict(dR.reg)

dR.resid = residuals(dR.reg)

#Impulse for trend line

test2 = impulse.calc(dR.pred,temp)

#Impulse for residuals of trend line

test3 = impulse.calc(dR.resid,temp)

#Plot three previous results

matplot((1:600)/12,cbind(test1[1:600],test2[1:600],test3[1:600]),type="l",lty=1,lwd=2, xlab="Year",

ylab = "W/m2/C/yr", main="Cloud-Temperature System Smoothed Impulse Response" )

legend("bottomright",legend=c("dR","Trend","Residual"),lty=1,lwd=2,col=1:3)

#plot(reg.time,dR) #not run

#abline(dR.reg) #not run

temp.loess = loess(temp~reg.time)

plot(reg.time,temp,xlab="Year",ylab="Temp Anomaly (C)",main = "Loess Plot for Temperature")

lines(reg.time,predict(temp.loess),col="red", lwd=2)

Roman,

I see that in the version of the code I posted, I left out the zero padding on the right of w, which Bart used. It should be

w=c(rep(1,Nc),(1-cos(pi*(1:Nc-1)/(Nc-1)/2)),rep(0,Nsamp-2*Nc))

I used this in my actual calcs.

I intentionally left out the left side padding because it didn’t make any mathematical sense to pad just those two values. Besides, they weren’t material to the point of the comment.

I’m not speaking of the two missing months at the start. The (half) taper runs for 2048 (Nc=1024) values. Then, as you’ve done it (and as I wrote) it cycles. Bart padded with 6144 zeroes on the right. It won’t make a huge difference, because both versions go toward zero near t=0 (ie in say 8000 to 8192). But some.

And alas, I still managed to get it wrong. Here’s a tested version:

w=c(rep(1,Nc),(1+cos(pi*(1:Nc-1)/(Nc-1)))/2,rep(0,6*Nc));

Anyway, I’ve written a new post here setting out in detail why I think the taper is wrong and causes error. It also goes into some detail about h being bilateral, and about periodicity.

If you don’t do the zero padding, you get time aliasing.

And, put the taper back in. Nick is completely WRONG about this.
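The time-aliasing point can be seen in a toy R example (the numbers are chosen purely for illustration): multiplying unpadded FFTs computes a circular convolution, so the tail of the response wraps around onto the start; padding to at least the full linear-convolution length removes the wrap.

```r
# Circular vs. linear convolution via the FFT.
x <- c(1, 2, 3, 4)
h <- c(1, 1)
# No padding beyond length(x): the product of FFTs gives CIRCULAR
# convolution, so the tail sample (4) wraps onto the first output.
circ <- Re(fft(fft(x) * fft(c(h, 0, 0)), inverse = TRUE)) / 4
round(circ, 10)        # 5 3 5 7  (true first value is 1; 1 + wrapped 4 = 5)
# Zero-pad both to >= length(x) + length(h) - 1: LINEAR convolution.
Npad <- 8
lin <- Re(fft(fft(c(x, rep(0, Npad - 4))) *
              fft(c(h, rep(0, Npad - 2))), inverse = TRUE)) / Npad
round(lin[1:5], 10)    # 1 3 5 7 4  (no time aliasing)
```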

I had a sort of breakthrough thought on this. Given the proffered system diagram, this is the type of behavior which would be expected.

Label the top response T1 and the bottom T2. We are trying to estimate T2. The closed loop transfer function from the input Radiation Forcing (RF) to the input point of T2 is H2 = T1/(1-T1*T2). Assume the gain of the loop is “large” within the passband. Then, H2 := T1/(-T1*T2) = -1/T2. So, if the RF is wideband, the spectrum of the input to T2 should be approximately the spectrum of RF divided by mag(T2)^2, which is what RomanM has found.

The gain from RF to the output of T2 is approximately unity, so the output of T2 should more or less track RF within the passband of the loop. My hunch would be that, that passband is probably about 0.3 years^-1, which is where you get the maximum phase margin boost from T2.
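A quick numerical sanity check of the large-loop-gain approximation above; the gain values here are made-up placeholders, not quantities estimated from the data:

```r
# If |T1*T2| >> 1 in the passband, then H2 = T1/(1 - T1*T2) ~ -1/T2.
T1 <- 1000 + 0i     # hypothetical forward-path gain at some frequency
T2 <- 0.5 + 0.2i    # hypothetical feedback-path gain
H2 <- T1 / (1 - T1 * T2)
Mod(H2 - (-1 / T2))   # small relative to Mod(1/T2): approximation holds
```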

I think that what this analysis is saying is that the phase and magnitude response of the relationship between the two parameters is dominated by low frequency components, and that if the original sequence is passed through a low pass filter, not much is changed in the impulse response.

In my view, it is a bit more than that.

I find it difficult to believe that an interrelationship with a variable whose “low frequency” component is pretty much linear can produce a 4.88 year (out of a 10+ year data window) cyclic outcome without the strong possibility of the existence of a spurious reason. In this case, the temperatures seemed to exhibit sufficient possible cause of the final result without a need for a genuine relationship of the proposed format to exist.

My personal suspicion is that the cloud-temperature relationship is considerably more complex than the current analyses of either Dessler or Spencer can assess.

It IS too complex for phase plane analysis. For the other, it would be incredibly unlikely to get such a well defined classic 2nd order response and have it just be coincidence.

Does the topic intrigue you enough to do one of your amazingly insightful posts?

Roman,

Yes, that’s my contention. Because of the padding, the base frequency is (1/8192) mth^-1. The low frequencies can only discriminate a few low-order moments of the data. And they are used for deriving Bart’s results.

So the feedback cited is -9.4 W/m2/°C, but should be -12.2 W/m2/°C without the hack to h. That’s just the ratio of the trends. And the lag is just the difference of the centre-points (COM, from first moment) of dR and temp.

And as you say, you could get these from any two quite unrelated series.

Wrong. I have explained, but you refuse to learn.

This is a good observation. I have never denied that the trend in dR accounts for much of the low frequency spectrum. And, taking out all but the trend is very much a low pass filtering operation.

But, if you look at the transfer function in the frequency domain using the trend of dR, you find that the response deviates from a classic 2nd order response. With the other low frequency components added back in, it fits the classic 2nd order response like a glove.

This is one of the key points I have been trying to get through to Nick. It isn’t just the trend which creates the 2nd order response which gives weight to the observation. It is the very tight agreement with a classic order response across the entire frequency band of 0.01 to 0.1 years^-1 which is portentous.

The above comment is to RomanM for his original post.

Should have been: “It isn’t just the trend which creates the 2nd order response which gives weight to the analysis. It is the very tight agreement with a classic 2nd order response across the entire frequency band of 0.01 to 0.1 years^-1 which is portentous.”

Furthermore, there SHOULD be a relationship between clouds and temperature. The IPCC doesn’t reject that there is, they just say the relationship is positive, or at least not significantly negative. That the trends themselves go in the opposite direction is reason enough to doubt that.

Bart, et alia;

It appears that some of the statistically sophisticated respondents, indeed much of ‘climate science’, are somewhat ~undereducated in feedback systems analysis, especially the sophisticated techniques and insights you are bringing to bear here.

As you have pointed out, the scanty data at issue here are a first class ticket to skepticism about your analysis thereof.

But, the ‘remedial-education’ issue is clearly dominating the discussion here.

Can anyone suggest a richer cyclical dataset suitable for elucidation of the methods? i.e., Ceridian PCI (commercial US diesel purchases) vs. US ~Employment figures?

This might allow folks to explore the ‘long/short record’ analysis question and get more comfortable with Bode plots.

Perhaps Prof. McKitrick can suggest a readily available, suitably broad cyclic econometric data set, [or ‘school’ me about the ‘frequency’ of published feedback system analyses.]

Until Bart got rolling here, I hadn’t realized the scarcity of feedback analysis among the many LS trend ‘regression’ analyses. Is regression a reasonable approach to diagnosing feedback parameters? I imagine that Dr. Middlebrook would have mentioned it…

Do statisticians take any classes in feedback system analysis? Why are Bode plots so uncommon in Climate-related analysis?

RR

Ain’t it the truth!

One other idea;

What is the result of looking at ever-shorter subsets of the already short record?

Or, starting with the shortest series (2? 3? points) and sweeping up toward the full length, how looks the curve of % non-strange responses?

Beyond the computational challenge of automating and running so many analyses, perhaps there is a step where a human must sort the non-strange responses, but maybe one could sort automatically by comparing to the result you have reported.

Maybe Roman will first address the question of whether this approach would help rule out the ‘spurious’ comments.

Is this a situation where some % of the data points can be withheld and the analysis re-run? Perhaps some statistical legerdemain of this kind would happify folks. Probably not though…

I don’t see why everyone is so focused on a ~constant trend.

I imagine that any trend could be added to the data and still get the same Bode…Am I wrong here?

Talk is cheap…

RR

RR

“What is the result of looking at ever-shorter subsets of the already short record?”

I noted here that the direction of “feedback” depends mostly on the sign of the trend of T over the period. So if you start from Jan 2001 instead of Mar 2000, the trend of T goes negative and strong positive feedback is reported.

Now it’s true that we don’t have data to spare. But if the conclusion is THAT sensitive…

Not what I found. But, again, there is no justification for removing any data, and every reason to use as long a data record as possible. This is like me saying, I have proof that 1 + 1 + 1 = 3, and being challenged with “maybe, but what if you take away one of the ones?”

Hi Bart,

Using Nick’s truncation of data results in a triphasic impulse response. An initial positive, followed by your biphasic response. regards, Steve

Remember to subtract the mean when subsetting. The original data was zero-mean, but of course only for the full set.

I get the following dc gains, depending on what month the data starts in 2000. Col 2 is the trend(CRF)/trend(T) ratio, col 3 is dc gain without taper, col 4 is gain with taper. You can see how it depends on that trend ratio.

Mar -12.22 -12.22 -9.48

Apr -13.64 -13.64 -11.32

May -13.50 -13.50 -11.22

Jun -15.72 -15.72 -12.79

Jul -19.39 -19.39 -17.35

Aug -25.39 -25.40 -23.52

Sep -31.19 -31.20 -29.89

Oct -46.48 -46.51 -44.72

Nov 686.61 647.44 127.54

Dec 34.52 34.49 15.72

Jan 18.72 18.71 10.15

Hi Nick,

Yes, I forgot to subtract the mean – thanks. I am surprised that you are surprised about the near-equivalence between trend and very low frequency content – surely this is expected by everyone.

This sensitivity analysis is interesting, but as Mark and Bart have said, the DC gain does not seem at all relevant to the analysis. How about describing the higher-order characteristics of the impulse response?

I’ve learnt a great deal over the last few days thanks to you, Bart, Mark, sky, and many others – thank you to all.

There is still the dispute, the resolution of which I believe is likely nuanced and quite mysterious to me due to my lack of familiarity. Why do you consider the IFFT (back in the time domain) periodic and how do you interpret the vector returned? Perhaps on your blog (or better here) you could take causal and anticausal filters, transform them through the frequency domain and then back again to illustrate your point. Work yes, but surely appreciated by myself and many lurkers.

regards, Steve

I think the sensitivity analysis is interesting and illuminating.

It’s like removing a single tree from a recon or removing BCPs.

This is a simple due diligence practice.

“This is a simple due diligence practice.”

No. It isn’t. It is removing the resolution needed to isolate the response at low frequency in the first place. It is as absurd as the thought experiment I suggested above.

It is stupid.

Steve,

“as Mark and Bart have said, the DC gain does not seem at all relevant to the analysis.”

No, Bart said:

“But, in the meantime, having uncovered what looks to be a very strong negative feedback of -9.5 W/m^2/degC, I think the onus is on those who believe there is a weak or positive feedback to prove it.”

I’m not surprised that it’s the ratio of trends – I knew it would be because that is the zero-order term in the power series for the transfer function. I’m commenting that it is an unreliable measure of interaction. In fact, it involves no interaction. You could cite it for any series.

But it dominates the analysis. CRF is assumed to be totally dependent on T. But CRF has a big negative trend, T has almost none. In the real world you’d say, OK, the CRF trend is not a result of T. But in this model you have to say that it says CRF is very sensitive to T. And higher order considerations won’t change that. You have to satisfy the lowest order first.

As to periodicity, I gave the Wiki link. The DFT and iDFT are formally almost identical, just replacing -i by i. You form the transform as a weighted sum of harmonics of a base freq, then sample. The sum of harmonics is periodic at the base freq. In the iFFT, time and freq have swapped roles.
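The periodicity point can be verified directly in R: the iDFT expresses the signal as a weighted sum of harmonics of the base frequency 1/N, so the reconstruction repeats with period N.

```r
# Reconstruct x[t] from its DFT; the sum of harmonics is N-periodic.
set.seed(42)
N <- 16
x <- rnorm(N)
X <- fft(x)
recon <- function(t) Re(sum(X * exp(2i * pi * (0:(N - 1)) * t / N)) / N)
abs(recon(3) - x[4])          # ~0: recovers the sample (R is 1-indexed)
abs(recon(3 + N) - recon(3))  # ~0: identical one full period later
```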

For those of us trying to follow along, exchanges based on sarcasm and insult (and responses to other party’s same) make our task harder. And diminish your substantive points.

Basic vocabulary question. How is “negative delay” (as used in these exchanges over the past two days) to be defined? Bart (here and elsewhere) discusses “negative delay” as physically-impossible concept, whereby an event at time t2 affects another event at an earlier time t1. Interpretation of an analysis that includes seeming “negative delays” must therefore begin with an acknowledgement of their artefactual nature.

Bart, is this correct? Carrick, does this square with your notion of what a “negative delay” is, in the context of the FFT and other techniques under discussion?

AMac, I’ll point you over to moyhu for this discussion (which I will add shortly). This thread has gotten too long. I may even post on JeffIDs blog on the concept (and not just so I can pick on Bart ;-))

But wait, don’t kill the thread. What about Mr Bunny? What happens next?

.

(Nice stuff Bart)

Bender, I’m not trying to kill the thread (on purpose). I have a dangling tag below that appears to have put the halts on my ability to add any further comments at the bottom of this thread. I’ll use Nick’s (moyhu) site to comment, and with SteveM’s permission, cross post here as time, Steve and this blogging software permit.

Amac, the short take home is that Bart doesn’t understand the relationship between h(tau) and the physical impulse response function (what you would get if you put in a true impulse into a signal), and why they sometimes can differ. What I am saying is h(tau) (the quantity computed in Bart’s matlab code) can give rise to negative delays in h(tau), that these delays are simultaneously physical and do not really violate true signal causality.

(Anymore than flashing a laser across the moon, and having the laser dot on the surface of the moon traveling faster than the speed of light is a violation of physical/information causality.)

Fundamentally there are many types of velocities that one can define, one of these is phase velocity, a second is group velocity, the third is information velocity. It’s only the third quantity that is restricted to being less than or equal to the speed of light. The reason that h(tau) gives negative delays is because tau is a measure of the group delay (rate of change of phase with frequency) , which can be negative over a range of the transfer function.

And yes, Virginia, systems with negative group delay exist in the real world.
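Since that claim often meets disbelief, here is a toy construction of my own in R (not from the thread): a perfectly causal two-tap filter whose group delay near DC is negative.

```r
# h[n] = (1+a)*delta[n] - a*delta[n-1] is causal, yet its low-frequency
# phase is approximately +a*omega (a phase LEAD), so the group delay
# tau = -d(phi)/d(omega) is about -a < 0 there.
a <- 0.5
N <- 4096
H <- fft(c(1 + a, -a, rep(0, N - 2)))
phi <- Arg(H)
omega <- 2 * pi * (0:(N - 1)) / N
gd <- -(phi[3] - phi[2]) / (omega[3] - omega[2])  # group delay (samples) near DC
gd   # about -0.5: negative group delay from a causal filter
```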

make that “the physical impulse response function (what you would get if you put in a true impulse into a system).”

I know you keep arguing about the sign of the constant on the front of the equation… but that’s just DC gain. The feedback is dictated by the poles of the system, not the DC gain. Trend ratios are the linear, first order effects, not feedback. Different beasts.

Mark

“I know you keep arguing about the sign of the constant on the front of the equation… but that’s just DC gain.”

Well, it obviously affects the DC loop gain of Bart’s system diagram. And it emerges from Bart’s analysis as the ratio of the trends. The assumptions of the analysis are that temp determines CRF, so the trend in CRF is a consequence of the trend in T, so the ratio determines the sensitivity.

In fact, Bart’s analysis there makes the role of that sign explicit. He says

“This is a negative feedback (sub)system precisely because T2 has a 180 degree phase shift at zero frequency, and 1/(1-T1*T2) is the reduction in sensitivity conferred by the feedback.”

If the sign changes, there’s no 180 shift and it isn’t negative feedback.

I just cannot believe this conversation continues. I am completely flummoxed by Nick’s dogged insistence that processes can evolve backwards in time. It has gone from the sublime to the absurd and back again so many times I have whiplash.

It’s just an artifact of frequency sampling, Nick. Backwards in time evolution does not occur in this universe. How can you possibly have a problem with that and call yourself sentient? Ay yi yi.

Steve

Posted Sep 19, 2011 at 9:16 AM

“I am surprised that you are surprised about the near-equivalence between trend and very low frequency content – surely this is expected by everyone.”

Indeed. See here. The trend isn’t the only thing. All low frequencies are -180 deg phase shifted.

By progressively (and arbitrarily and capriciously) truncating the data set, Nick is merely reaching a minimum frequency resolution whereby the phase shift has reached at least -270 deg, which is where the feedback starts to turn positive. It means nothing. The sign of the feedback is determined by the response at low frequency.

For some reason, link didn’t come out. See here.

The physically meaningful sign of the feedback is determined by the response at low frequency. That is what determines the long term response.

Bart, by now you should know that Nick always wants the last word, no matter what it brings about…

Bart,

“I am completely flummoxed by Nick’s dogged insistence that processes can evolve backwards in time.”

As I’ve said several times, you have taken two sets of numbers and put them through an FFT analysis which makes no reference to the fact that they depend on time going forward. It could have been distance, time backward, anything.

What you have done is, as soon as the impulse response h was calculated, you zeroed the negative part. That makes it causal all right – it just isn’t the impulse response for this data any more. But then you draw the Bode plots of the truncated h and say, hey, it looks like a damped oscillator – it must be causal.

Hi Bart,

Thanks for maintaining patience as you (and others) assume the role of teacher. I lecture in an unrelated discipline and I understand it can be frustrating 🙂

I don’t believe Nick is saying that processes can evolve backwards in time. He is simply asserting that the relationship between the two vectors is best described by a filter that includes a non-causal contribution (which you remove with the taper).

I would think that this is only ‘absurd’ due to the nature of your defined model – the effect of temperature on cloud as time evolves. By tapering your data, you have correctly reinforced the following condition:

A change in temp cannot go back in time to affect dR.

Of course nothing can really go back in time, however, I don’t think we should close our minds to what your own analysis suggests is a more complex relationship.

The question arises as to whether there is a model of interacting variables which results in the illusion of non-causality within a simplified linear, time-invariant representation?

regards, Steve

Steve,

What I have pointed out time and again to Nick (#comment-302956, #comment-303017, #comment-303492, #comment-303497) is that you get these ghosts of backwards in time relationships even when the data are perfectly causal. It means nothing.

If he wanted to test the reverse causality, he should switch the roles of dR and temp. However, the result is useless because that route is polluted by other dominant inputs (#comment-303206).

“…even when the data are perfectly causal by construction,” I mean.

Nick said:

“But then you draw the Bode plots of the truncated h and say, hey, it looks like a damped oscillator – it must be causal.”

Yeah, Nick. A “damped oscillator” is a very common type of response. Extremely common. Extremely, extremely common. This is a Lilliputian logical leap.

“However, the result is useless because that route is polluted by other dominant inputs (#comment-303206).”

The result you get is therefore NOT the transfer function of the dR to temp relationship, but the inverse of the temp to dR relationship. The phase LEAD at low frequency indicates that in this direction, the output is anticipatory, which is generally the wrong direction for causality in natural systems, and certainly in this one.

“…the output is anticipatory…”

I.e., you would then be having temperature reacting largely to the rate of change of cloud formation. E.g., if clouds stop increasing, temperatures return to normal, even if the clouds stay. Cloud formation due to the accumulation of energy, you see, is much more logical.

Bart,

It might be of interest to get a comment from you about Carrick’s comment at Nick’s, which starts with:

> Nick no worries there. I think your criticisms are spot on.

http://moyhu.blogspot.com/2011/09/faulty-tapering.html?showComment=1316438104544#c6551620125560606178

PS. To comment at Nick’s, choosing Name/URL and filling in a valid URL might improve your chances to appease the spam filter.

Unbelievable. The level of ignorance in this thread and over at Nick’s is simply amazing. Carrick doesn’t know what he’s talking about and, based on some of what Nick has said, I wonder if he thinks the same as Carrick.

The taper is not removing negative frequencies – it is removing the upper half of the impulse response. What Nick doesn’t show in his plot (of the frequency response of the taper) is that the magnitude of the frequency response is two-sided, centered on DC. It has to be. It consists of purely real data. The FFT of real data has complex conjugate symmetry – this is a fundamental concept for anyone who has ever seriously studied Fourier theory!

I will leave it to Carrick and Nick to figure out what that means. I’ll leave it to the readers to figure out what I think it means… oh, wait, I’ve already made that pretty clear.
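The conjugate symmetry invoked here is easy to verify in R with any real-valued series:

```r
# The FFT of real data satisfies X[k] = Conj(X[N-k]) (0-indexed bins),
# so the magnitude spectrum is two-sided and symmetric about DC.
set.seed(7)
N <- 32
X <- fft(rnorm(N))
k <- 2:N                              # R indices for 0-indexed bins 1..N-1
max(abs(X[k] - Conj(X[N + 2 - k])))   # numerically zero
```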

Mark

I should have noted that the response actually convolves in the frequency domain.

Mark

Mark, it’s true that negative times are at issue here, though I expect that Carrick does have some analogous situation in mind in the frequency domain.

“What Nick doesn’t show in his plot (of the frequency response of the taper) is that the magnitude of the frequency response is two-sided, centered on DC.”

It is, but it’s symmetric. And you can’t show two sides on a loglog plot. Which is conventional.

Carrick’s reply betrays his ignorance. My suspicion about you regards the body of your posts in this regard. You certainly made no attempt to correct his obvious error. I fully understand how log-log plots work… I merely mentioned that because it was likely where Carrick got his notion.

And, no, negative times are not at issue, at least, not with anybody but you. When you slide the window you place the impulse at the center of the response, generating the issue with causality yourself.

Mark

Mark,

I don’t think Carrick is ignorant at all – he was just describing an analogous situation in the frequency domain. With that understanding, it made sense to me.

As to negative times and causality, we start out not knowing if CRF can be taken to depend on T at all, let alone whether the relation is causal. So we try something. If you want to test explicitly whether there is a causal relation, you should postulate a one-sided impulse response and apply a Laplace Transform, not a Fourier transform. Numerically quite different.

Bart has chosen FFT. That makes no presumption about one-sided h. It may turn out to be one-sided – in fact it was fairly lop-sided. And you may be able to make deductions from that. But not if you chop it right at the start.

And I haven’t heard any response to the reverse data argument. If the FFT somehow knows to put sense on the positive side of t=0, and noise on the negative, then why doesn’t it do this when the data is reversed?

Good try, Mark. I think we’ve given it our best shot. But, Nick has the invincible confidence of the unknowing.

I’ve explained all the reasons in detail, Nick. You are flat out wrong. But, lead you to the water as I might, you prefer to remain thirsty. So be it.

Bart,

Your comments here have been seen by many as being of potentially great significance. Could it be that Nick, having failed to present an adequate rebuttal, is doing nothing more than attempting to wear you down, hoping you’ll take your ball and just go away?

If so, please don’t give in! You are doing a real service. And I’m sure many admire your patience. Thank you.

Thanks, PaulMa. But, I think I’ve said all that can be said, and people who understand the analysis and procedures will come to the proper conclusions. Those who do not, and do not want to, never will.

> Unbelievable. The level of ignorance in this thread and over at Nick’s is simply amazing. Carrick doesn’t know what he’s talking about and, based on some of what Nick has said, I wonder if he thinks the same as Carrick.

This kind of comment has “potentially great significance,” first and foremost among readers who share an even more amazing level of ignorance on the subject at hand.

It would be interesting to know what the person who once advised Tony Hansen “to strenuously avoid polishing one’s own nameplate” would think of such comment.

willard…are you trying to get points from the policywonk? I know that she thinks you are brilliant.

There’s a difference between people who don’t understand, and admit it while hoping to learn, and people who don’t understand yet neither admit it nor attempt to learn. Nick is notorious for the latter.

Carrick’s comment is baffling, actually, because I know he otherwise understands these things. Note that in the post previous to the one you linked he mentions applying tapers on a regular basis, and acknowledges they work. Then he goes on to state that Nick’s criticism is spot on (which is almost entirely directed at the taper itself) while stating you shouldn’t be deleting negative frequencies. All true, except the taper does no such thing. Nick himself has complained, in this very thread, that the high frequency 8191/8192 is actually -1/8192 (a low frequency). That’s pretty good evidence he doesn’t understand anything about what happens with the taper or the concept of an impulse response in general.

Both are claiming to understand, yet neither really does. Nick because that is his nature as far as I can tell (defend the faith), Carrick because he didn’t bother to actually look at what was being done to disambiguate it from Nick’s ill-conceived comments.

Sorry, willard, if I offended, but the back and forth about the exact same point is ridiculous – it is a red herring at best, intentional disinformation more likely (something our host has already openly accused Nick of in the past.) I did not push my own nameplate as you suggest. I merely pointed out that others are claiming a nameplate that is not justified.

It is difficult enough to explain basic theory in a few paragraphs on a blog. It is even more difficult when there exist those who are loath to fault their own theories, which the basics call into question.

Mark

Mark,

“It is difficult enough to explain basic theory in a few paragraphs on a blog.”

The thing is, you don’t explain. You just assert, loudly. I try to explain, with calculations and diagrams.

OK, here’s one you can try to explain.

“Nick himself has complained, in this very thread, that the high frequency 8191/8192 is actually -1/8192 (a low frequency.) That’s pretty good evidence he doesn’t understand anything about what happens with the taper nor the concept of an impulse response in general.”

In what way is it “good evidence”? My proposition is standard Nyquist. Sampled monthly, a sinusoid of freq 8191/8192 per month and freq -1/8192 per month are indistinguishable.
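The aliasing proposition here is standard and easy to check numerically. A minimal sketch (the 24-month span is an arbitrary choice for illustration):

```python
import numpy as np

# Monthly sample times n = 0, 1, 2, ... (units of months)
n = np.arange(24)

# One sinusoid at 8191/8192 cycles/month, one at -1/8192 cycles/month
hi = np.cos(2 * np.pi * (8191 / 8192) * n)
lo = np.cos(2 * np.pi * (-1 / 8192) * n)

# At the monthly sample points the two are identical: classic aliasing
print(np.max(np.abs(hi - lo)))  # ~0, up to floating point error
```

Any frequency above the Nyquist rate of 1/2 cycle per month lands, after sampling, on top of some frequency below it.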

You really don’t understand this science.

When somebody begins to bluster and name call, that’s a sure sign that they are out of their depth, or at least their comfort zone.

I’m not even sure where to go, because Mark’s said nothing that has any substance, or that is particularly relevant to any of the issues I commented on. Just a lot of techno-jargon, no real substance.

The statement I made on Nick’s blog related to this erroneous statement attributed to Bart and Nick’s fully accurate criticism of it:

This is the sort of statement I can only assume is made by a person who’s never computed an impulse response function (IPR) h(tau) before.

You can get nonzero values of h(tau) for negative delays for several reasons: a) when there isn’t a strict cause and effect relationship between the two variables for which you’re trying to compute an “impulse response function” [there is no strict cause and effect relationship here]; b) even with one, you only get zero negative-delay values if you have a linear, passive system; and c) an infinite length window (otherwise you still get splatter into the negative delay portion of the impulse response function).

For climate, we have an energy source (the sun), so it’s certainly not a passive system, and it’s certainly not linear. (Nonlinearity can translate positive delays into negative ones, in a similar manner as you can get combination tone frequencies generated by nonlinear distortion in the frequency domain.)

In this case, even if you had a strict cause and effect relationship between temperature and cloud radiative fluxes (and you don’t), active, nonlinear systems will in general still exhibit negative delays. It’s a well known feature in one of the areas I work in (cochlear mechanics, which is active, nonlinear, with, in some cases, net positive gain).

MarkT also appears to be confusing negative frequencies for negative delays (or is equally confused if he thinks I am confusing those). I can assure you I have no such problem and am referring to the taper function applied in the time domain, namely the cut-off of the negative delays in h(tau).

Actual data. It’s a screw-up to set h(tau) = 0 for tau < 0.

Nick:

Or he’s made a munchkin level mistake, realized it, has put both feet in it, and doesn’t know how to gracefully step out of the mess he’s stepped into. Some of us are more comfortable admitting personal errors than others of us.

Here’s an example where you only seem to get a zero delay and a (roughly) -4 ms delay.

Impulse response function.

This image plot is generated by taking successively overlapping (Hann) windows of the frequency domain data, computing the equivalent (complex valued) IPR for each window. The vertical axis is the center frequency of the Hann window, the horizontal delay of course, and the color represents the intensity of the response.

In this case we absolutely know a causal relationship exists, we are measuring the recorded output from an input stimulus.

To reiterate what I said earlier, just assuming there is a causal relationship between temperature and cloud radiative flux isn’t in itself a sufficient condition for zeroing out the negative values of h(tau). At best this is just circular reasoning.

Nonsense, Carrick. I explicitly referenced your comment in which you stated that it is wrong to remove negative frequencies. Your words, not mine. I am not confusing negative time with negative frequency; your statement did, however, imply you thought Bart was removing negative frequencies. Your statement was incredibly ill-informed as to what the very simple impulse response taper is doing.

Stating that it is obvious I have never done an impulse response (transfer function) estimation is pretty hypocritical coming right after accusing me of name calling. Indeed, you’ll note that I actually defended you as someone that I felt knew the difference, and surmised that Nick’s analysis was seemingly the culprit. Hypocrisy seems to be your forte these days, quite frankly. The list of examples of such behavior is increasing.

Mark

Sorry about that, saying “frequencies” was a misstatement on my part.

I meant to say negative delay. Sue me. If you were nearly as bright as you carry on, you should have been able to pick that up from context.

Secondly, I was referring to Bart not you with respect to impulse response functions (as far as I know he’s the only one prattling on about negative delays not being meaningful).

And even then not name calling on my part, it’s an observation.

Have a nice day.

Carrick said:

Bold mine.

I ask, which one of us is confusing negative time and negative frequency?

Perhaps if you had simply said “I meant time,” rather than ‘bluster and name call’ and make assumptions about my knowledge/experience you would not have appeared as this indicates:

Carrick said:

and

Indeed.

And, w.r.t. munchkin level mistakes: negative frequencies have nothing to do with causality, linearity, or the passive nature of a system. Seems this is the second time you owe me an apology. Given the obviousness here, should I bet on whether you will own up to it and admit your mistake (and your hypocrisy)?

Mark

MarkT, I’ll assure you again I fully understand the difference between the frequency domain and time domain, and their relationship to causality, and apologize for any confusion a comment I made late at night may have caused on your part.

Here is a stab at the corrected text:

In any case, I think you were the one who stated

and later

How do you expect me to respond?

Again this is all unfortunate because if I had stated clearly and correctly what I meant to say, and at the level of understanding that I do possess, some of this needless back and forth could have been avoided.

Time for something of substance:

Do you agree or disagree with Bart that the negative delays in the impulse response function should be set to zero?

After that, we can go back and discuss your and Nick’s exchange:

and decide who has the right on this one.

I’ll draw attention that in my corrected paragraph, I only made changes to the first sentence. The rest is straight cut and paste from here.

Carrick – By the very structure of our universe, there are no negative delays in cause and effect. There are no closed timelike loops. Anything you find in negative time has nothing to do with any cause and effect relationship you are looking for.

But, you say, then any time your estimation procedure shows indications of negative time response, you should discard the assumption that there is a cause and effect relationship? Then, you would never analyze any system data at all. Because of the variability of the data and the finite window of time, there are always such spurious indications.

This is why I have urged Nick to generate artificial data with an absolutely assured causal relationship and try his analysis on that. He will find spurious indications of backwards-in-time relationships then, too. It is inherent in an estimation procedure which uses frequency sampled Fourier transforms or, equivalently, for which the sequence fft-multiply-inverse-fft is a circular convolution.
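The point about spurious negative-time content can be illustrated with artificial data. This is only a sketch, not the actual procedure used in the thread: the exponential response, the noise level, and the number of averaged realizations are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 64
n = np.arange(N)

# A strictly causal "true" system: exponential impulse response
h_true = np.where(n < 50, 0.9 ** n, 0.0)
H_true = np.fft.fft(h_true)

# Average cross- and auto-spectra over K noisy realizations
Sxy = np.zeros(N, dtype=complex)
Sxx = np.zeros(N)
for _ in range(K):
    x = rng.standard_normal(N)
    X = np.fft.fft(x)
    # output = circular convolution with h_true, plus measurement noise
    Y = H_true * X + np.fft.fft(0.3 * rng.standard_normal(N))
    Sxy += np.conj(X) * Y
    Sxx += np.abs(X) ** 2

# Estimated impulse response via fft-multiply-inverse-fft
h_est = np.fft.ifft(Sxy / Sxx).real

# The second half of the circular result is "negative time"
ratio = np.sum(h_est[N // 2:] ** 2) / np.sum(h_est ** 2)
print(ratio)  # small but nonzero: spurious negative-time content
```

Even though the generating system is causal by construction, finite noisy data put some energy at negative delays; the estimate is only approximately one-sided.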

This data, when processed the way I have demonstrated, produces a very well defined 2nd order response. Such responses are ubiquitous. It is a strong indication that my assumptions were correct.

Nick – “Sampled monthly, a sinusoid of freq 8191/8192 per month and freq -1/8192 per month are indistinguishable.”

Nobody here has suggested overturning Nyquist and Shannon except you. Suggesting these frequencies are the same in the real continuous time world suggests you have lost your way.

Or, were you suggesting that my low frequencies might be aliased? If so, then you are apparently unaware of one of the most important properties of boxcar average filtering (filtering with a uniform response over N samples and downsampling by N) – there is no aliasing to dc with such a procedure – dc is always a zero of the aliased transfer functions.
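The boxcar property described here can be verified directly: the frequencies that fold onto DC after averaging over N samples and downsampling by N are exactly zeros of the boxcar’s transfer function. A sketch (N = 12 is an arbitrary choice):

```python
import numpy as np

N = 12                       # boxcar length = decimation factor
n = np.arange(N)
h = np.ones(N) / N           # uniform (boxcar) averaging filter

# After downsampling by N, frequencies k/N (k = 1..N-1) alias onto DC.
f_alias = np.arange(1, N) / N

# Boxcar frequency response at exactly those frequencies: all zeros
H = np.array([abs(np.sum(h * np.exp(-2j * np.pi * fk * n))) for fk in f_alias])
print(H.max())  # ~0: nothing aliases onto DC
```

The sums are sums of the N-th roots of unity, which vanish exactly, so no out-of-band energy can fold onto the DC bin.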

I expect you to notice that your comment, that I replied to directly, stated frequency. That comment was ridiculously ignorant as stated. How did you expect me to respond? I did follow up by defending you and your knowledge, instead blaming Nick’s deceptive comments. I got miffed because you did not offer me the same courtesy. I do accept your explanation that it all was a legitimate mistake, which is what I was hoping.

First, the second part of your response:

Uh, I think you’re confusing what I was getting at with that comment. Bart says the taper cuts off the results from “high frequency stuff” and Nick says “but 8191/8192 is actually low frequency,” implying he thinks the taper is actually messing with low frequencies as well. Coupled with your comment, it seemed pretty clear that Nick was interpreting a removal of negative frequencies and you agreed. That’s what I was pointing out, that Nick is under the impression (low) negative frequencies were being removed by the taper.

The taper does not remove the 8191/8192 = -1/8192 term. Bart’s frequency response plots also confirm his claim that removal of the upper half of the IFFT result merely cuts down the high frequency stuff.

Your retraction and correction leaves Nick’s confusion and ignorance (which I merely observed.)

I think Bart’s frequency plots indicate they differ only in the high frequency region, which is pretty good evidence his claims are at least plausible, i.e., the dominant effect inferred from this procedure is the first half of the response.

Mark

To the degree that the taper cuts off data before the midway point, it does reduce resolution in the low frequency range. But, that is a good thing, because the impulse response of significance dies down long before the mid-point, and the rest is just ordinary variability in which we are not interested.

When I said the taper affects “high frequency stuff”, I was guilty of ambiguity. The taper smooths the frequency response: multiplication by the taper in the time domain transforms to convolution of the response by the transfer function of the taper window. So, the taper removes high “frequency” wiggles in the frequency response. But, I do not mean the same thing by “frequency” in those last two instances. By the former, I mean the frequency of oscillations in the frequency response.

Hopefully, I have thoroughly muddied that beyond understanding, but probably the context makes my remarks clear.

Strike that. Reverse it. Oh, well.
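The duality being invoked (multiplication by a taper in the time domain equals convolution of the spectrum by the taper’s transform) can be checked numerically. A sketch; the random signal and Hann taper are stand-ins, not the actual series under discussion:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
a = rng.standard_normal(N)   # stand-in "impulse response"
w = np.hanning(N)            # taper applied in the time domain

A = np.fft.fft(a)
W = np.fft.fft(w)

# Circular convolution of the two spectra, computed via the FFT itself
circ = np.fft.ifft(np.fft.fft(A) * np.fft.fft(W))

# DFT of the tapered signal equals that convolution divided by N
lhs = np.fft.fft(a * w)
rhs = circ / N
print(np.max(np.abs(lhs - rhs)))  # ~0
```

So applying the taper smears (smooths) the spectrum with the taper’s narrow transform; it does not delete any frequency bin outright.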

Hehe, I get that, and muddled it even worse anyway in my response to Carrick.

“in the high frequency region” should be “in the high frequency variations in the frequency response.”

The thread is large enough that immediate recognition of a misstatement is impossible after hitting “post comment.” On a wireless network that keeps dropping out it’s getting difficult to follow and even Carrick and I cross-posted earlier.

Mark

“…the taper removes high ‘frequency’ wiggles in the frequency response.”

And, those wiggles become especially pronounced in the high frequency region when you plot on a logarithmic scale. So, in that sense, “high frequency” can be interpreted either way.

Bart I agree with this comment:

But that’s a different thing that how one interprets negative delays in the computed impulse response function.

To get nonzero values for h(tau) only for strictly positive (or zero) delays requires at least the following: 1) that there is a true cause and effect relationship between the variables, 2) that the system be linear and passive, and 3) that the measurement window be semi-infinite in length.

No I don’t say that. I say negative delays can arise in this case either because a cause and effect relationship doesn’t exist or because one of the other assumptions necessary to get zero values for the impulse response function for negative delays has been violated.

I also gave an example of a physical system where there is undeniably a cause and effect relationship where a component of the measurement response very clearly had a negative delay.

This doesn’t mean you throw away impulse response functions; it’s just that the measure has a different interpretation in this case (more importantly, the “impulse response function” can be modeled theoretically and measured experimentally… and the comparison of the two still informs on the underlying agreement with theory).

/warn speculation

In the present case, it seems to me we have two variables that have a coupled relationship (rather than a purely cause-effect one): temperature change can affect cloud formation and hence cloud radiative flux, and clouds certainly affect temperature.

If the forward versus reverse effects reside in different frequency (not delay) ranges we might be able to skip around this problem, though. For example, changes in clouds affecting temperature might be pretty high frequency (weather). How cloud patterns (and hence cloud radiative flux) affect temperatures might be very low frequency (e.g. climate).

It seems plausible that some of what Bart is doing with his tapering is exactly this: a tapered window function centered at DC is a form of low-pass filter after all. So maybe smoothing isn’t just something that “cleans up the picture”; it may be required in order to get an approximately causal relationship between temperature and dR.

I still managed to foobar this comment. Here’s what I meant to say (changes in bold):

“Nobody here has suggested overturning Nyquist and Shannon except you. Suggesting these frequencies are the same in the real continuous time world suggests you have lost your way.”

I did not suggest anywhere that the frequencies are the same in the real continuous time world. My original statement was perfectly clear and correct:

“A frequency of 8191/8192 revs per month (RPM), the highest on your axis, is not such a frequency on a monthly sampled time axis. It is a frequency of -1/8192 RPM. That’s the frequency you see after sampling.”

The FFT does not deal with the real continuous time world. Everything is sampled. That’s all you have. When the FFT generates a sequence of frequencies from 1/8192 to 1, you pass the Nyquist critical frequency half-way, and thereafter, of all the (continuous) frequencies that could generate those sampled values, the negative frequencies are lower, and tend to 0 from below at the “upper” end of the range.
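This matches the standard FFT ordering convention; numpy’s frequency helper makes the point directly (N = 8192 as in the thread’s example):

```python
import numpy as np

N = 8192
f = np.fft.fftfreq(N)   # bin frequencies in cycles per sample (here: per month)

print(f[1])       # 1/8192: the first positive frequency
print(f[N // 2])  # -0.5: past Nyquist the convention flips to negative
print(f[-1])      # -1/8192: the "highest" bin is really a low negative frequency
```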

Mark T,

You said something to Carrick that seems to deserve due diligence:

> Hypocrisy seems to be your forte these days, quite frankly. The list of examples of such behavior is increasing.

First question. I’m not aware of such list. Do you still have it? Auditors might be interested to take a look. Full disclosure might be the best policy on this matter.

Second question. I note that you used words like “hypocrisy” to qualify Carrick’s behaviour, and also Nick’s behaviour hereunder. I am unsure why the behaviour of Nick or Carrick shan’t be characterized instead as “dogged persistence” or “tremendous tenacity”, expressions which, I believe, connote a virtue for an auditor to endorse.

Here is my question: by which criteria are these epithets judged? It would be important for people to understand which behaviour is considered virtuous and which behaviour is considered less so.

Thank you for your consideration and for constructively moving forward the discussion,

w

PS: Impartial auditors like TerryMN could take a look at these two questions too.

“…dc is always a zero of the aliased transfer functions.”

I think I need to add a little detail to avoid confusion here. DC is always a zero, and the transfer functions are small in the vicinity of a zero. Hence, the low frequency information is generally preserved. It is when you get well outside the low frequency band that aliasing becomes significant. This is another reason that the higher frequency portion becomes progressively unreliable.

And, these data are end-to-end “boxcar” averages.

Carrick

Posted Sep 21, 2011 at 3:23 PM

“To get nonzero values for h(tau) for strictly positive (or zero) delays, requires at least the following: 1) that there is a true cause and effect relationship between the variables, 2) that the system be linear and passive, and 3) that the measurement window be semi-infinite in length.”

1) Cause and effect is precisely what we are testing for. The existence of a well defined and commonly encountered type of response argues that we have found it.

Besides which, we have reason to expect that the system is essentially as described in my comment at Sep 14, 2011 at 11:34 AM.

2) Nonlinear smooth systems can be linearized. Passive systems – define precisely what you mean by this. I suspect you mean… well, why don’t you just tell me.

3) Nonsense. I generated artificial data with the same correlations, sampling rate, and time span as evidenced by the actual data and replicated the analysis here.

“I also gave an example of a physical system where there is undeniably a cause and effect relationship where a component of the measurement response very clearly had a negative delay.”

You have claimed such. I suspect you did not zero pad the data properly, and are getting time aliasing.

1. the data being used is from 10 years only; how can you extrapolate to 20+ years? If the returned satellite data were to deviate from the current relative flatness, how would that affect your derived response?

2. the data from clear to cloudy sky is not simultaneous

3. the data is averaged over 1 month so can never be safely used to subtract clear from cloudy – the data is smeared over 1 month and can never be data from the same region.

3a. Albedo of soil and water are very different- cloud over water will show a large TOA flux difference whereas the cloud over land will show less outward going flux.

water albedo= 0.02 approx (at some angles)

ground albedo = 0.1 to 0.5

Clouds albedo = 0 to 0.8

Wiki

4. Are the columns you have chosen correct? For cloud cover, shouldn’t only SW radiation be considered? Total would also include the BB radiation from the increasing temperature.

Bart

Are you saying that a change in temperature should create a corresponding delta in the cloud cover of 9 W/m^2/K with a delay of 5 years (approx)?

If this is the case then it should be possible to show the effect in the real world data. Have you tried this?

Nick Stokes

Posted Sep 21, 2011 at 3:47 PM

“My original statement was perfectly clear and correct”

There is no relevance. There are no 12 year^-1 frequency components in the data.

jphilips

Posted Sep 21, 2011 at 6:25 PM

1,2,3 are worrying about high frequency error sources and things which have been covered elsewhere in the discussions above.

4. I just analyzed the same data everyone else is. I’m not a climate scientist, just a guy with a lot of background, training, and experience in systems theory and signal processing. More guys like that are evidently sorely needed in the climate sciences.

From my perspective, I am just analyzing data from two signals and deriving relationships between them. AFAIK, these signals have bearing on the question of whether or not clouds act as a thermostatic regulator for the Earth’s climate system.

If I understand these signals properly, then yes, a change of 1 degC in temperature should result in a reduction of 9.5 W/m^2 incoming radiation with a time constant of ~5 years (settling in maybe 3X that).
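The “settling in maybe 3X that” rule of thumb is the standard first-order step response; a sketch assuming, for illustration only, a simple first-order system with the ~5 year time constant quoted:

```python
import numpy as np

tau = 5.0                            # time constant in years (~5 per the thread)
t = np.array([1.0, 3.0, 5.0]) * tau  # 1, 3, and 5 time constants out
frac = 1.0 - np.exp(-t / tau)        # normalized first-order step response
print(frac)                          # ~[0.632, 0.950, 0.993]
```

After 3 time constants (~15 years here) the response has reached about 95% of its final value, which is the usual sense of “settled.”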

“If this is the case then it should be possible to show the effect in the real world data. Have you tried this?”

The real world data is precisely what is being analyzed.

Carrick

Posted Sep 21, 2011 at 3:27 PM

“How temperature affects clouds (and hence cloud radiative flux) might be very low frequency (e.g. climate).”

Yeah, that’s what I found. Is a bandwidth of 0.0725 year^-1 (associated period of 14 years and longer) not low enough for you?

Bart, let’s start with active versus passive systems.

The simplest definition is that an active system is a system that requires a power source to operate. A transistor would be an example of an active element; a MOX resistor would be an example of a passive element. A system composed entirely of passive elements, and no battery, is by definition a passive system. A system that requires a power supply to operate, and has one provided to it, is an active system.

The Earth’s climate has a very good power supply–namely the Sun—and without the Sun, it would shutdown and turn into a frozen ball, and you’d have to get your air by the bucketful.

So the Earth’s climate meets the definition for an active system. It also has at least one stabilizing nonlinearity coming from the Stefan-Boltzman equation.

Next topic: active nonlinear systems.

If you have a system with net amplification (gain > 1) of low level signals, such a system is physically unrealizable unless there is a saturating nonlinearity to prevent runaway conditions, since it would (eventually) require an infinite amount of energy to continue to drive the system. But more on that in a bit.

Let’s consider a resonance tube where you are putting an acoustic signal a(t) = a0 exp(2 pi f t) in at one end (labeled “1”) with (complex) amplitude a0 and frequency f, where the reflection on the end where the signal is injected is R1 and the reflection from the other end (“2”) is R2, which is defined as the ratio of the amplitudes of the reflected to incident waves (measured at end “1”) for a wave moving from “1” to “2”. [Similarly, R1 is the ratio of the amplitudes of the wave reflected to incident on end “1” of the tube.]

Anyway, it’s easy to show that:

ar = a0 (1 + R2) /(1 – R1 R2)

where ar is the measured amplitude at the end where you are injecting the signal a0.

For simplicity let’s set R1 R2 = R0 = |R0| exp(-i 2*pi* f*tau0), where |R0| and tau0 are constants. The “-” of course comes from my sign convention for the phase of the signal.

For the sake of simplicity, let’s also assume R1 is real valued and |R1| < 1. We’ll also assume |R0| < 1. (|R1|,|R2| < 1 can be shown to be required for a passive system.)

You can compute the impulse response function associated with this systems, and you’ll find that you get a series of delta functions at tau_n = n tau0. Of course if you have a finite frequency domain window, you’ll get the convolution of the sum over delta functions with your window response function.

Of course all of these delta functions will appear at times tau ≥ 0.

Now what happens if |R0| > 1?

First, it can be shown this is impossible in a strictly passive system (a power source is required to get a gain > 1). Secondly, it is certainly possible. You could have a microphone speaker combination at the end with, say, R2, and feed back a signal that was larger than the signal picked up by the microphone.

We’ve all experienced this of course, when we have a microphone hooked up to an amplifier and place the mike too close to the loudspeaker, as “microphone squeal.” So such a system is not only possible, it’s one we’re all pretty well acquainted with.

The question I will pose for here, is what does your impulse response function look like? It’s a bit messy, but you expand ar above in 1/R0.

You’ll end up with a series of delta functions, but they will now all appear at times tau_n = -n tau0.

However, this system is unphysical. To make the system physical, we have to add a saturating nonlinearity. The simplest example of this is the Van der Pol oscillator, which looks like:

x”(t) + (-r0 + r2 x(t)^2) x'(t) + w0^2 x(t) = 0.

This system exhibits self oscillation near the frequency w0/(2pi). If you look at the damping function for this oscillator for very small values of x(t), it is negative, and becomes positive as x(t) becomes large.
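The limit-cycle behavior can be checked by integrating the equation. A sketch with invented parameters r0 = r2 = 1, w0 = 1 (the classic van der Pol form), using a basic RK4 stepper:

```python
import numpy as np

def vdp(state, r0=1.0, r2=1.0, w0=1.0):
    # x'' + (-r0 + r2*x^2) x' + w0^2 x = 0, written as a first-order system
    x, v = state
    return np.array([v, (r0 - r2 * x**2) * v - w0**2 * x])

def rk4_step(f, state, dt):
    # classic 4th-order Runge-Kutta step
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 8000
state = np.array([0.01, 0.0])   # tiny perturbation; negative damping makes it grow
xs = np.empty(steps)
for i in range(steps):
    state = rk4_step(vdp, state, dt)
    xs[i] = state[0]

# the oscillation settles onto a limit cycle of amplitude ~2
amp = np.max(np.abs(xs[-2000:]))    # max over the last 20 time units
print(amp)
```

The tiny initial disturbance grows (damping is negative near x = 0) until the x^2 term saturates it, and the amplitude locks onto the limit cycle regardless of starting condition.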

In my resonance tube example, with net amplification and a limiting/saturating nonlinearity, you’ll end up getting a series of stable tones at separations of f = n/tau0.

On to data, real world measurements, and negative delays in the next installment. (This has gotten plenty long enough.)

Even proofing it, mistakes. I assume:

a(t) = a0 exp(2 pi i f t) + complex conjugate.

Missed the factor of “i”, the complex conjugate is obvious to those of us who do this, but people are looking for things to pick at, so…

If anybody notices other errors or things that need clarification, please point them out.
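The delta-train argument can be reproduced numerically by inverse-transforming the 1/(1 - R0 exp(-i 2 pi f tau0)) factor on an FFT grid. A sketch; the grid size, the delay tau0, and the |R0| values are invented, and “negative delays” show up circularly at the top of the array:

```python
import numpy as np

N = 1024
tau0 = 16                        # round-trip delay, in samples
k = np.arange(N)
phase = np.exp(-2j * np.pi * k * tau0 / N)

def ipr(R0mag):
    # impulse response of 1/(1 - R0 exp(-i 2 pi f tau0)) on the FFT grid
    H = 1.0 / (1.0 - R0mag * phase)
    return np.fft.ifft(H)

h_stable = ipr(0.5)   # |R0| < 1: echoes at tau = 0, tau0, 2*tau0, ...
h_active = ipr(2.0)   # |R0| > 1: expanding in 1/R0 gives negative delays

# In the circular FFT result, negative delay -tau0 appears at index N - tau0
print(np.argmax(np.abs(h_stable)))   # 0
print(np.argmax(np.abs(h_active)))   # N - tau0 = 1008
```

For |R0| < 1 the geometric series puts all the deltas at nonnegative delays; for |R0| > 1 the convergent expansion is in powers of 1/R0, and the delta train lands at negative delays instead.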

Carrick –

“…in the next installment”

Don’t bother, please. We would only end up talking past each other.

You have some conventional, convenient fiction which apparently has usefulness to you. But, systems always progress forward in time, and causes always precede effects. If you want to argue that clouds drive temperature, well, I’ve already covered that. What we are looking at here is the response from temperature to clouds.

If you have any further problem with that, I really don’t care to waste any more time on it, and I really don’t care.

In the first part, we saw that negative delays are possible in an active system for which there is net amplification. I glossed over what happens in such a system when a saturating nonlinearity is added to stabilize the system and make it physically realizable.

Fortunately there is a fairly nice physical system where one can perform measurements that pretty closely follows the hypothetical resonant tube system above: it’s the mammalian cochlea.

I’m not going to enter into a full theoretical discussion of how the mammalian cochlea works, but especially in humans, which have very narrow band tuning, the existence of spontaneous otoacoustic emissions (sounds generated by the ear in the absence of external stimulation) has been known for decades.

In fact, these emissions, when they exist, are approximately equally spaced in log frequency (this is a result, it is thought, of the log-frequency place frequency map of the cochlea). Data from a human subject. The existence of these narrow band signals has been shown to be associated with self-sustained oscillations similar to those of the van der Pol (“limit cycle”) oscillator equation above.

In fact, it is thought that the human cochlea closely mimics the resonant tube behavior I described above. It’s not surprising then that in this system negative delays can sometimes be observed. The picture gets even more complex when you throw in nonlinearity, because that produces a mixing of forward and reverse traveling waves (it acts as a source of reflection). In a “scale invariant” system, the delay associated with this is zero, but if the system is “scale invariant violating” the delay associated with it can be either positive or negative.

To address Bart’s other point: yes, I do know how to properly compute the impulse response function, and know about zero padding and all of that other good stuff. As a test of my own code, I use both the FFT and a directly implemented discrete Fourier transform. Both give identical results.

But if Bart continues to disbelieve that this is physically possible, I can provide some experimental data, and allow him to verify for himself that indeed negative delays are possible in a causal, physical system.

Bart:

Shorter Bart: you’ve already made up your mind, you don’t want to be confused with the facts, and you weren’t able to follow the discussion.

There’s a term for that: “moral coward.”

We’re dealing with a smooth, nonlinear system here near its local equilibrium, not some exotic system in a lab driven strategically to a limit cycle. The system is adequately described using linear systems theory.

“There’s a term for that: ‘moral coward.’”

Yeah, you hold forth dealing with such stupid comments over something obvious to one with the proper background and experience for over a week, then call me that.

“But if Bart continues to disbelieve that this is physically (im)possible…”

For time to flow backwards and effects to precede causes? Yeah, I’m a real stick in the mud about that.

Your example is merely a mathematical abstraction. In the real world, causes precede effects.

Bart:

The human cochlea is such an “exotic system”, and it exists in this “strategically driven to a limit cycle” condition.

Perhaps part of your problem is that you believe you know more than everybody around you (but really don’t)?

People that know everything are incapable of learning anything. Closed minds are as interesting as closed books.

The system is “smooth” in some sense, but it contains turbulent behavior on almost all scales. That is hardly linear.

We’re not dealing with cochleas. We’re not dealing with a marginally stable system. I don’t want to take the discussion off in the direction you want to go because it is immaterial, useless, and a waste of time.

The response is clearly effectively linear. The form is utterly mundane and usual. Your objections have no merit.

Bart:

So am I, and at this point, I question why you make that comment since I’ve clearly framed this as “not a violation of physical causality.” This smacks of dishonesty on your part.

Wrong for the nth time.

It’s not a mathematical abstraction. A squealing mike is a real phenomenon, and not particularly exotic. A cochlea is another real world example. Most of us have two of them, and about 80% of us with normal hearing have at least one ear with one measurable spontaneous emission.

For the last time, the problem here is that “impulse response function” has the interpretation you wish it to have only in linear, passive systems.

Whether your interpretation is adequate for this problem is something I believe needs to be established. I don’t think you have established it yet though.

Bart:

You raised an issue with a statement I made, I merely responded to it.

If you had any class, you would admit you were wrong on that point, instead of throwing up this silly dismissive nonsense.

Comment stuck in moderation. Wish there was a way of deleting them from the queue (hintz).

Bart said:

So am I, and at this point, I question why you make that comment since I’ve clearly framed this as “not a violation of physical causality.” For the last time, the problem here is that “impulse response function” has the interpretation you wish it to have only in linear, passive systems.

Whether your interpretation is adequate for this problem is something I believe needs to be established. I don’t think you have established it yet though, and simply claiming it is so, doesn’t make it so.

“For the last time, the problem here is that “impulse response function” has the interpretation you wish it to have only in linear, passive systems.”

For the last time, that is incorrect. All it needs to be is smooth. You are seeking to redefine “impulse response” to be something other than what it is. The clue is in the word “response”.

And, if you think you do indeed have something wherein the effect appears to precede the cause, I would suggest to you that you keep in mind what I said here.

Bart:

Still wrong. All you need is a smooth nonlinearity to get negative delays, even in a passive system.

I mean by “impulse response function” what is usually meant by it. You compute it by taking the inverse Fourier transform of the transfer function between input X and output Y.
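For readers following along, the estimator Carrick describes here can be sketched in a few lines of numpy (a minimal illustration with made-up values, not either commenter’s actual code; circular convolution is used to generate the data so the recovery comes out exact):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
x = rng.standard_normal(N)               # input X: white noise
h_true = np.array([0.0, 1.0, 0.5])       # a causal system, taps h[0..2]

# Generate Y by circular convolution so the DFT relation Y = H*X holds exactly
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true, N)))

# Transfer function between X and Y, then inverse transform -> impulse response
X, Y = np.fft.fft(x), np.fft.fft(y)
H = (Y * np.conj(X)) / (X * np.conj(X))  # cross-spectrum over auto-spectrum
h_est = np.real(np.fft.ifft(H))

print(np.round(h_est[:4], 6))            # recovers [0, 1, 0.5, 0]
```

With real data (finite records, linear rather than circular convolution, noise) the same recipe only approximates the impulse response, which is where the rest of this argument lives.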

We call it an “impulse response function” because under certain assumptions it is. But just because you call it an “impulse response function”, doesn’t make it so, and doesn’t mean that the interpretation of negative delays has anything to do with signal causality.

And for the record, the Earth’s coupled atmospheric-ocean system is nonlinear, active, and in fact contains self-oscillations (e.g., the ENSO).

Bart:

I’m all for artificial data. When are you going to code up my resonance tube example to demonstrate you can get negative delays in a causal system?

When you’re done with that one, I’ll give you an example of a smooth passive system (no boundaries), let you compute the impulse response function for it, and demonstrate for yourself that you can indeed obtain negative delays in a system which does not violate physical causality.

“…and doesn’t mean that the interpretation of negative delays has anything to do with signal causality.”

That’s what “negative delay” means, child. It means there would be a part of the response which began before the impetus was applied. You see, it’s two words: “impulse”, and “response”. The “impulse” is input to the system. And, the “response” comes out in response to the “impulse”.

I’m typing this very slowly so that you will understand. If you like, I will add in Mister Bunny and Mister Toad to make it go down easier. Because, you are a special little boy, and Mister Bunny and Mister Toad want so very badly for you to understand.

“You compute it by taking the inverse Fourier transform of the transfer function between input X and output Y.”

This is merely a means to an end. The impulse response is the response to an impulse.

And, these means are fraught with pitfalls which can give nonsensical results if you do not know what you are doing, like some people with whom I have regrettably recently had discussions.

“When you’re done with that one, I’ll give you an example of a smooth passive system (no boundaries), let you compute the impulse response function for it, and demonstrate for yourself that you can indeed obtain negative delays in a system which does not violate physical causality.”

This is so idiotic. I have tried to explain it to Nick so many times, but he just does not understand. And, you do not, either, apparently.

Apparent “negative delays” (how I HATE even writing such a logical travesty) result from using the Discrete Fourier Transform as a tool of analysis. It is inherent in the circular convolution which results from frequency sampling. YOU WILL GET SUCH APPARENT NEGATIVE DELAYS EVEN WHEN YOUR DATA IS GENERATED USING AN IMPULSE RESPONSE WITH NO SUCH PROPERTY. They are a fiction. A ghost. You ignore them, because they are not a real part of the true impulse response.

And, FWIW, passive in my field means this. In your definition, your claim is not all-encompassing. I can build an active filter with an op-amp and a couple of resistors which will have a very well defined impulse response. What you appear to mean to claim is that an impulse response cannot be defined for a system which is being maintained in a quasi-stable state. Yet, even here, we can often speak of an impulse response of average behavior.

I’ve put everything that needs to be said in the above 4 responses. If you have any more asinine comments in mind, read the above again until you get it and don’t have them anymore.
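Bart’s wraparound claim is checkable numerically. This sketch (invented values, nobody’s posted code) generates data from a strictly causal response by linear convolution over a finite record, estimates the response by inverse DFT of the transfer function, and inspects the “negative delay” end of the estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2048
x = rng.standard_normal(N)
h_true = np.exp(-np.arange(8) / 2.0)     # strictly causal: zero for n < 0
y = np.convolve(x, h_true)[:N]           # *linear* convolution, finite record
y += 0.1 * rng.standard_normal(N)        # a little measurement noise

X, Y = np.fft.fft(x), np.fft.fft(y)
h_est = np.real(np.fft.ifft(Y / X))

# In the circular estimate, lags N-1, N-2, ... play the role of tau = -1, -2, ...
neg_end = h_est[-64:]
print(np.max(np.abs(neg_end)))           # nonzero despite a purely causal h_true
```

Whether such negative-lag content is all artifact (Bart’s position) or can carry physical meaning in an active system (Carrick’s) is exactly what is in dispute; the computation itself is neutral on that point.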

Bart, let’s start with passive versus active. The link you found discussing passive systems had all passive components (resistors, capacitors, inductors). None of the examples required a battery, so it is equivalent to my definition. I just said it in a bit plainer language (but it’s not my language in any case, this is standard EE 101). So when you say “we” I assume this is shorthand for Bart googling for a reference that he thought had a different meaning than the one I gave. It doesn’t, and it’s telling that you thought this to be true.

Negative delays are not a fiction; they are a consequence of a violation of one of the assumptions that goes into the proof that the inverse Fourier transform of the transfer function between two variables X and Y is the true impulse response. This is why I make my students derive these sorts of results before they engage in the sort of mathematical gymnastics you went through in your matlab code above.

Since the description I gave above is how one computes what is “colloquially” called an impulse response function, and this is how you compute the “impulse response function” in your code, your code will suffer from the presence of negative delays that have physical meaning associated with them. They cannot be ignored, and indeed, as I discussed above, they can be modeled, and their presence interpreted in terms of a model that adds light to our understanding of the underlying physical processes at work.

Now when you said

I don’t think I ever stated anywhere that an active system necessarily gives rise to negative delays; otherwise I would have said “your approach is totally hosed.” And I’m perfectly aware of examples where you have an active system where you don’t get negative delays. In my resonance tube example, keep the mike/amplifier/loudspeaker at end “2”, but require the net amplification to be such that |R2| < 1. Active system, no negative delays.

Passive and linear are requirements to guarantee you won’t observe physically meaningful negative delays, but they aren’t necessary to avoid such delays. Otherwise, like I said, you’d be hosed, because you are applying the methodology I described above, which can give negative delays, to an active, nonlinear system that is in fact in a continual state of self-oscillation.

I’m going to comment on impulse response filters more in a second comment. It’s coming whether you like it or not. I don’t expect it to change the behavior of somebody who replies to reasonable commentary by calling it asinine and really descending into grade-school behavior with comments like:

“This is shameful behavior on your part. I thought I was dealing with an adult, not a child.”

Neither of my posts went through last night… must have been the link.

Carrick, what you are referring to is called negative group delay. It is a well-known phenomenon resulting from non-linear phase. This is apparent as a time lead in a continuous response, though the impulse response will still exhibit causality. Only a non-causal system can result in an actual time lead by the output. In the frequency domain it will appear as a positive slope in the phase response. The responses Bart generated clearly show a negative phase slope, indicating your example is not the case here.

You can read a good article on it at www.dsprelated.com/showarticle/54.php.
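A toy case makes the distinction concrete. The two-tap causal filter y[n] = x[n] − 0.5·x[n−1] (coefficients chosen purely for illustration) has negative group delay near DC even though its impulse response is identically zero for n < 0:

```python
import numpy as np

h = np.array([1.0, -0.5])                # strictly causal: h[n] = 0 for n < 0
n = np.arange(len(h))
w = np.linspace(0.01, np.pi, 200)        # frequencies, radians/sample

# Group delay tau(w) = -d(phase)/dw = Re( sum(n*h*e^{-jwn}) / sum(h*e^{-jwn}) )
H   = np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])
nH  = np.array([np.sum(n * h * np.exp(-1j * wk * n)) for wk in w])
tau = np.real(nH / H)

print(tau[0], tau[-1])                   # about -1 sample near DC, positive near Nyquist
```

A negative group delay like this advances the envelope of a narrowband signal; it does not put any part of the response before the impulse, which is MarkT’s point.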

After referring to Bart as a moral coward coupled with your last statement one has to wonder how you can actually claim some moral high ground regarding name calling and blustering.

Now that I’ve shown you your error, are you Mann enough to admit it?

Mark

I once argued somewhat foolishly with Carrick, only to learn later that he is informed and independent. I often judge on tone, and on this it’s a close three horse race. Carrick by a nose.

I know the difference between median and mean.

Mode makes me wonder, and wander and lean.

======================

Nick mounts Pegasus and flies above the slings and arrows of outrageous tone.

===============

MarkT, assuming this lets me post… it is absolutely the difference between group delays and true signal (physical/information, insert favorite rubric here ______) delays that is at issue, and I never claimed otherwise.

I’ve just posted a comment on moyhu that discusses this very thing.

Now, if you can show me where I’ve mistaken group delay for physical delays, I will certainly be the first to admit it. As to the other… certainly there is an exact equivalence between one comment of mine, prompted by Bart dismissing my arguments out of hand, and his steady litany of insults. In fact I’m sure they are mathematically identical. 😉

(For the record, I don’t claim high ground, nor do I need to, to note when a person is arguing by ad hominem rather than by substance. )

Sorry, screwed up the link; here’s the comment on moyhu. It would have posted here around 8:45, except ClimateAudit wasn’t accepting my posts at the time.

Unlike Bart, who steadfastly refuses to admit the existence of negative delays, MarkT recognizes you need to go through a vetting process before you apply machinery that was built for passive, linear systems to active, nonlinear systems that contain self-sustained oscillations, like the Earth’s climate.

MarkT gives a plausible explanation for why one might be able to neglect negative delays. Eyeballing curves is not of course the proper way of doing this, but it’s a start (MarkT is admitting that negative delays can happen, and you have to control for this).

One approach might be to compare your computed impulse response function (positive and negative values), then apply a brick wall filter to include only that portion of the impulse response function which is statistically significant. I’m open to other ideas, but really this is a breakthrough of sorts: until now, we’ve been getting a “stone wall” on any consideration of negative delays.

Sighz. Another flummoxed sentence:

One approach might be to compare your computed impulse response function to an estimate of its noise floor (for positive and negative delays), then apply a brick wall filter to include only that portion of the impulse response function which is statistically significant.

I understand there is a way to preview comments on this blog. I wish I understood it.
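As a sketch of what that procedure might look like in code (the threshold rule and noise-floor value here are my own assumptions, not a method anyone in the thread has specified):

```python
import numpy as np

def keep_significant(h_est, noise_sigma, k=4.0):
    """Zero every tap whose magnitude is below k standard deviations
    of the estimated noise floor; keep the rest (positive or negative lag)."""
    return np.where(np.abs(h_est) > k * noise_sigma, h_est, 0.0)

# Toy use: one real tap buried in estimation noise
rng = np.random.default_rng(2)
h_est = 0.02 * rng.standard_normal(256)      # noise floor, sigma = 0.02
h_est[5] += 1.0                              # a genuine, significant tap
h_kept = keep_significant(h_est, noise_sigma=0.02)
print(np.flatnonzero(h_kept))                # indices of the surviving taps
```

In practice the noise floor would itself have to be estimated (for example by resampling), which is the part of the proposal that still needs fleshing out.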

Kim, thanks for the comments. I know I get heated sometimes, but at the same time don’t wish to be less human. 😉 I have to wrap this up here soon. My boss needs me to provide him with a list of poles and zeros to a transfer function for a particular system we just used to collect measurements with, so I have some data to analyze, since I don’t have them in hand at the moment. (I’m not kidding. Ironies of life and all that.)

In every case (and I did not capture some) you are referring to time delays, specifically referencing tau in several, implying a necessity of a negative time delay in the impulse response. Not once do you mention the phenomenon of group delay. The impulse response of a causal system with negative group delay is zero for tau < 0, always. If the system we are analyzing had negative group delay, it would appear in the resulting response. It does not, as I have already noted. If you were to plot the phase response of your system, it would also have a positive phase slope leading to negative group delay.

So, if it is the case that you were referring to group delay, how is it relevant to this discussion? How are your examples a legitimate rebuttal to a response that clearly does not exhibit negative group delay?

Oh, I should also point out that ultimately the sun is the input to the system. Your active examples are all three-terminal devices: input, output, and power supply. You've got a problem if you want to use the sun as both. I'll let you stew on why that is.

Mark

Carrick –

“Since the description I gave above on how one computes what is “colloquially” called an impulse response function, and this is how you compute the “impulse response function” in your code, your code will suffer from the presence of negative delays that have physical meaning associated with them…”

That, in a nutshell, says it all. You do not understand the tools. You do not know what circular convolution is. You think the DFT is a black box into which you put deterministic data, and get a deterministic result which represents some kind of “truth”. It is merely a tool. And, if you do not understand how that tool works, you do not understand what the answer you get out of it means.

None of your “requirements” for being able to estimate an impulse response are necessary. You do not even know what an impulse response is.

But that’s what you seem to be doing, Carrick. Maybe in your mind your comments aren’t ad hominem, but they are.

Only negative group delays. Group delay has nothing to do with whether the impulse response exhibits negative delays. Group delay is irrelevant to the complaint about the taper Bart is using. If the system had a negative group delay, we would see it. We don’t.

So, again, I ask, why are you trying to rebut an argument regarding negative time delays with an example of negative group delays?

Mark

And thanks, Mark, for your excellent comments. I really did not want to dip my toes into Carrick’s fever swamp and waste my time understanding exactly what he was talking about. It was too obvious at the outset that he was off on an irrelevant tangent.


Um… I’m just saying that the sun acts as the power supply for the system.

It also fluctuates over time, which you can consider as an input to the system if you like.

It would be somewhat analogous to what happens in an op amp, when the power supply (rail voltage) is varying over time. The term power supply rejection ratio gets used in that context.

I’ll let you stew over why we use that term, since you appear to like stew.

Or you can explain why you don’t think power supplies can fluctuate, or if they do, why that wouldn’t affect system behavior in a way that could be considered analogous to a system “input”.

(I can measure signals on the output terminals associated with the switching power supply used to drive the op amp. So is the power supply really a power supply or is it an input too?)

MarkT:

I believe you are confused here. My series of comments started out in response to criticisms from you over faulty language on my part. They were then continued by Bart, who clearly doesn’t understand the issues associated with negative delays.

It would be nice, if you are claiming objectivity in this argument, if you would acknowledge his many repetitions of this error in this thread. Otherwise… don’t.

None of this is per se a rebuttal of Bart’s analysis, other than to say “you can’t throw away negative delays without testing for significance first” and “yes Virginia, statistically significant negative delays can happen in real physical systems, and no, that doesn’t imply a violation of physical causality.”

Secondly, MarkT, as to “clearly does not exhibit negative group delays”: Nick’s calculations beg to differ. If you want to offer a rebuttal in the form of a calculation of your own, please do.

“Secondly, MarkT, as to ‘clearly does not exhibit negative group delays’: Nick’s calculations beg to differ.”

No. He hasn’t. He does not understand the tool. You do not understand the tool. And, before you make any bad puns, no, you do not understand me, either.

MarkT:

The sun acts as a power supply in the sense that it provides energy to the coupled atmospheric-ocean system. Internally, it converts one form of power to another (thermonuclear into photons), which are then absorbed by the Earth’s system as heat. This heat drives convection and other mechanical processes, and permits the atmospheric-ocean system to function. But the sun does not directly provide input in the form of, say, mechanical forces on the atmospheric-ocean system. (Not in any meaningful way.)

(You may or may not recall that part of the definition of a power supply is the conversion from one form of stored energy to another form of energy that can be directly used by the system.)

The output from the sun also fluctuates over time, which you can consider as an input to the system if you like.

This is somewhat analogous to what happens in an op amp, when the power supply (rail voltage) is varying over time. The term power supply rejection ratio gets used in that context. Indeed, I can (and do) measure signals on the output terminals of amplifier boards associated with the switching power supply used to drive the op amp. So is the power supply really a power supply or is it an input too?

So in any meaningful sense, the sun meets all of the requirements for a power supply to a system, rather than simply being a driver of (input to) that system.

Bart:

Based on your many gaffes, at this point, I don’t think you’re in a position to proclaim who “knows” and “doesn’t know”. Sorry.

Bart:

Bart, seriously, this is a waste of bandwidth. You have opinions about things, you’ve stated them, they’ve been repeatedly contradicted both by me and (whether he likes it or not) MarkT, and you thanked MarkT even though he implicitly admitted my basic point was right (which is that h(tau) can have statistically significant values for tau < 0 in a physically realizable system). MarkT is trying to save your argument by hand waving that we can “obviously see there are no negative delays”. I don’t accept that as a rigorous defense (other than in a hand-waving fashion).

It’s your opinion that I don’t know how to compute an impulse response function. I’ve already offered to give you some of my data, let you compute it yourself, and compare our results. You’ve apparently refused this offer, which puts you in a very bad light in my opinion. (I will make this same offer to Nick.)

Please say something of substance or let us discontinue this thread. This has gotten beyond obnoxious in terms of how you are behaving.

At this point, I couldn’t care what you think…. I almost didn’t get involved in the discussion to begin with, because to people like yourself, with obviously limited experience in this sort of problem combined with a huge ego, I might as well be speaking in Martian.

Bart’s made a point that no one yet has acknowledged as critically important.

Carrick and Mark T, you talk a lot about your tools, but this is about physical systems – clouds and temperature – not abstract mechanics of signal processing. Piss all you want, the winner of the name-plate polishing contest doesn’t matter. What matters is which of you can address the question most concisely and coherently.

kim, your money is misplaced. Carrick is merely winning a battle of his choosing. IMO it’s a battle that’s irrelevant to the question at hand. Which is Bart’s point, and why he’s leaning toward disengagement.

Gosh, it’s fun when it’s not me acting like a 2 year old. Keep it going. You guys are great.

You need to go back and look at where you made these statements. Your “example” system of the cochlea clearly indicates you think one can get a negative time delay in a causal system, and this has extended well after you clarified your “faulty language.”

Nick’s analysis is irrelevant to the group delay issue. I’ll restate this for the final time, try to understand: IF there is negative group delay, you WILL see it in the causal portion (positive time delays) with a positive slope in the phase response. The link I provided demonstrates this rather clearly. Completely causal response, only positive time delays in the impulse response, yet a positive phase slope.

I take it from your lack of response that you do not care to, or cannot, respond to the use of group delay as some sort of refutation of time delay issues. Neither is a requirement for the other, so how does your “substance” in any way refute Bart?

Also, as I noted, the sun cannot be both an input and a power supply. Go ahead and consider it a supply, but you need to come up with an input to replace it in order to maintain your active model, though it is largely irrelevant anyway. We would see a positive phase slope from the causal portion if it mattered.

You’re doing exactly what you accused Pat of… it is humorous.

Mark

Bender:

Bart’s leaning towards disengagement????? He’s probably launched 20 ad homs in 10 minutes. Surely this is a record?

The question I am raising is very much relevant, because it ties into the question of whether you can neglect h(tau) for tau < 0. That’s an important question here.

Insulting and belittling me (or Nick) for posing questions does not advance the argument. Pointing out errors in them advances the argument; making erroneous statements that don’t point out errors, then not owning the erroneous statements, does not.

My responding to Bart’s ad homs gets me censored by MarkT, while Bart’s ad homs are somehow OK. Just wow.

Great how this forum works, doesn’t it? And by “works”, I mean something different than “works”.

MarkT:

Sorry if I didn’t respond to this sooner; I was dealing with all of Bart’s MarkT-stamp-of-approval ad hominem attacks (no one-sidedness from you, right, MarkT?).

What you say here is actually false, and I have seen it in real-world data. The overall phase can still have a causal slope (downward/negative slope) and have components with negative delay in h(tau). The only time you can tell by looking at the slope by itself is when the acausal component dominates the response (otherwise you will see “wiggles” in the phase associated with the interference between the causal and acausal components).

I’ll make the same offer I made to Bart (this is a put-up-or-shut-up moment): I’ll provide you with data where this is true, and you can verify for yourself whether or not you see a negative delay component.

Or you can construct your own synthetic data and verify what I say to be true or false.
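One such synthetic check (component sizes arbitrary, purely for illustration): a dominant causal delay plus a small acausal component still yields an overall negative, causal-looking phase slope:

```python
import numpy as np

N = 512
h = np.zeros(N)
h[10] = 1.0            # dominant causal component: delay of 10 samples
h[N - 3] = 0.15        # small acausal component at tau = -3 (circular index)

H = np.fft.fft(h)
w = 2 * np.pi * np.arange(1, N // 2) / N     # positive frequencies, rad/sample
phase = np.unwrap(np.angle(H[1:N // 2]))

slope = np.polyfit(w, phase, 1)[0]
print(slope)           # close to -10: the overall slope looks causal anyway
```

If this is right, “eyeballing the phase slope” cannot by itself rule out a statistically significant acausal component, which is the substance of the disagreement.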

I’m sorry the point about understanding how the sun acts as a power supply went by you. I see no point in repeating it, and will put it down to your lack of experience with these types of systems. Some things are just useless to argue over.

Carrick

Posted Sep 22, 2011 at 12:23 PM

What a lot of blather, signifying nothing. You could have saved a lot of space if you had just said “I think the Sun acts as a power supply.” It’s just an assertion. And, it means really nothing insofar as the discussion is concerned.

‘Based on your many gaffes, at this point, I don’t think…’

No, you don’t. That is clear. What you think are gaffes are things you do not understand, and are too proud to ask for clarification to help you understand.

Bart to be clear I wasn’t addressing you, and I didn’t expect you to be able to follow the conversation.

If you want to learn something instead of stomping up and down like a little kid, I suggest go look up the definition of power supply, and make your own choice at that point.

The word “blather” from Bart is a synonym for “Bart has no idea in hell what is being talked about.”

Carrick – you’ve made such a fool of yourself with your rambling and appeals to irrelevancies. It would do you good to humble yourself, and ask for help from people who understand how the tools are used and what the outputs mean.

You could try giving a little thought to it yourself. What is a DFT? How does it differ from a DTFT? What are its limitations? What is circular convolution? What exactly do the two ends of the cross correlation represent? Why is one of interest, and one not? What do I get when I deconvolve the cross correlation by the autocorrelation? Where and how do errors in the input manifest themselves?

You just do not get it. You do not understand the tools.

As I said elsewhere Bart. This is a go-nowhere conversation.

If you wish to use this forum just to insult me, please have at it until/unless SteveM says “enough”.

Are you actually an adult?

Yeah, but he can turn the crank, so he thinks that makes him qualified.

But never mind. Get back to the real question.

“This is shameful behavior on your part. I thought I was dealing with an adult, not a child.”

Your behavior was shameful from the get-go, charging in here and making all manner of bald assertions which had no basis, and presuming to instruct. You have backed off of some of your categorical statements which proved to be erroneous, and are now fighting a rear-guard action, trying to hold on to whatever shreds of credibility you can.

“What exactly do the two ends of the cross correlation represent? Why is one of interest, and one not?”

On that, I jumped ahead of myself. The two ends of the estimated impulse response are not of equal interest.

Clarification:

Yeah, but *Carrick* can turn the crank, so he thinks that makes him qualified.

Don’t dismiss each other. Return to the question and summarize, based on what you’ve learned.

“As I said elsewhere Bart. This is a go-nowhere conversation.”

That was predetermined by your initial hubris, and your lack of substantial arguments.

bender

Posted Sep 22, 2011 at 1:03 PM | Permalink

“Don’t dismiss each other. Return to the question and summarize…”

1) Causes precede effects. An impulse response extending backwards in time is therefore a logical absurdity.

2) DFT-based estimation of impulse response always creates apparent backwards-in-time behavior. It may produce it repeatably within tolerances for a particular system, but that does not make it physically significant.

3) None of Carrick’s “requirements” for successful estimation of the impulse response are necessary.

4) Phase leads do not require backwards-in-time impulse responses.

5) Impulse response characterization via the chosen method for a given system may or may not yield valuable results. When you see common forms in the result, it is a pretty good tip-off that, in that instance, it probably did.

6) People whose sole intent is to disrupt a conversation, demonstrate workaday knowledge of their particular niche, and bog down the discussion in irrelevant technicalities are thoroughly contemptible and deserve no respect. However, the conversation involves a wider audience, and that should be considered when expressing your contempt and disrespect, however justified.

Would someone explain for the uninformed how “negative delays” are physically implemented? An impulse response is the response that follows a unit impulse. How can delays go back in time before that?

bender:

I’ve got 20 years of more than turning the crank on this, bender, and more to the point, on exactly the sorts of analysis that I’ve been discussing above. I can derive the relationships from scratch; it’s something I require of any student that works for me, and I generally can push this sort of analysis further than most other people I know.

Since his only modus operandi is to attack me personally, I don’t see any other choice than dismissing Bart at this point.

From my perspective, this thread has wound down. Sorry I was involved; it has been a complete waste of my time. (No willard, it isn’t your fault, I could have declined to participate in this thread.)

I should have snipped out the ad homs as against blog policy. It shouldn’t be necessary.

Perhaps someone can answer a simple question for me in connection with Dessler’s original regression as I’m not familiar with control theory terminology (though I’m familiar with some pieces.)

I’ve done some simple experiments with synthetic feedbacks and then tried a regression a la Dessler 2010 and thus far, it doesn’t seem to me that the methodology of Dessler 2010 comes even close to recovering the underlying process. Can anyone speculate what was in Dessler’s mind when he applied this method? Does it occur in any texts on the subject?
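For what it’s worth, the kind of synthetic experiment Steve describes can be set up in a few lines (the feedback value, lag, and noise level are invented for illustration; this is not Steve’s actual code, and it demonstrates only that a contemporaneous regression need not recover a lagged feedback):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600                                   # 50 years of monthly anomalies
T = np.zeros(n)
for t in range(1, n):                     # AR(1) "temperature" series
    T[t] = 0.9 * T[t - 1] + 0.1 * rng.standard_normal()

lam_true = 0.54                           # known feedback, W/m^2/K
lag = 6                                   # response lags temperature by 6 months
R = lam_true * np.roll(T, lag)            # lagged feedback response...
R[:lag] = 0.0                             # ...dropping the wrapped-around values
R += 0.1 * rng.standard_normal(n)         # unrelated "cloud" noise

# Dessler-2010-style: contemporaneous OLS slope of R against T
slope = np.polyfit(T, R, 1)[0]
print(slope, "vs true", lam_true)         # recovered slope falls short of lam_true
```

With any appreciable lag in the response, the contemporaneous slope is attenuated toward zero, which seems consistent with what Steve reports seeing.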

Tom,

We don’t know that anything is being physically implemented. We just have two series of numbers and are trying to find out something about them. We don’t, a priori, know if temp “causes” CFR or CFR causes temp. Indeed, the system postulated (Bart’s two-box) is a loop.

That’s my objection to the circularity of the logic. First the impulse response is truncated, negative values removed. Then it is Bode plotted, then said to resemble a causal process.

I did not ask if anything is being physically implemented. I asked how a negative delay would be implemented.

We do not have two sets of numbers.

We have observations

Bart has shown that these numbers agree reasonably well with a simple feedback model and takes this as physical evidence. That is all Bart has done.

Now we read talk about “cyclic behavior”, phase leads, and system response to cyclic behavior. This does not match any physical model that I can conceive of.

What on earth is “cyclic behavior”?

How can a system know that the signal is cyclic, and how can it know that it is not going to be turned off immediately?

How can a unit impulse, which is acyclic as a signal, be affected by properties that are only applicable to “cyclic behavior”?

What on earth is “cyclic behavior”?

Nick Stokes writes:

=================

That’s my objection to the circularity of the logic

===================

Bart’s investigation is one of physics and not mathematics. He is matching observation to the predictions of theory. This is inductive and not deductive logic. There is no circularity in this. It is just the proper application of inductive logic.

“Since his only modus operandi is to attack me personally, I don’t see any other choice than dismissing Bart at this point.”

Ha! I made all the arguments needed early on. Like Nick, you chose to ignore those which discomfited you and tried to steer the discussion into a strawman irrelevant to the system at hand. When you choose not to address my arguments, what is there left to do but get in some gratuitous insults?

I’ve been at this game much longer than you, Carrick. I have made products which work based on these principles. Indeed, I didn’t bring in any of the heavy machinery because A) this isn’t my day job, B) it wasn’t necessary, and C) when so many people can’t even understand this basic analysis, what hope is there for introducing even more complexity?

You do not understand your tools. It is apparent from what you have been arguing. Your point has been a very narrow, technical, and irrelevant one.

Steve – thank you for your patience, tolerance, and good works. I apologize if I have taken untoward liberties with your resource in addressing Carrick’s… (gulp) whatever they are.

Oh, and Steve, I discussed the shortcomings of Dessler’s analysis here.

Nick Stokes

Posted Sep 22, 2011 at 1:53 PM | Permalink

“We don’t, a priori, know if temo “causes” CFR or CFR causes temp.”As I have said time and time again, and you have ignored it, if you want to investigate causality in the other direction, don’t use the chaff on the other end of the impulse response estimate. Swap temp with dR, and redo the analysis. What you will get is the inverse of the response we have found here, and it is thoroughly unphysical.

Steve McIntyre

Posted Sep 22, 2011 at 1:57 PM

Also, what was on his mind was that there was negligible delay in the response, so the outcome should be more or less linearly related according to the feedback factor. I have demonstrated that the delay is, in fact, on the order of about 5 years for the relevant frequency range, and this assumption was unwarranted.

I told him so on the WUWT thread back in the day (the links are in a comment of mine on the previous thread on this topic) and recommended that he do just such an analysis on the data as I have presented here. But, he did not listen.

This is my final contribution to this thread, except substantive questions. I will not respond further to ad homs or attempts to derail the conversation with arguments over who can use his tool better or what not.

Bender asks for a summary, I respect Bender so I’ll give one (and will hope to not get pushed into moderation):

Method 1: Measuring the physical impulse response of a system using an impulsive signal.

1) A “true” impulse response function is the response of a system to a very-narrow-band impulse.

2) This has a number of limitations, including the ability of the transducer used to generate the impulse to faithfully generate it (nonlinearity of the transducer is one problem).

3) If the system has noise, because the power associated with the impulse is very small, this limits your ability to resolve the true response of the system over the background noise.

4) Sometimes, instead of one impulse, we apply a series of impulses, with a sufficient delay between impulses to allow the response of the system to return to a nearly quiescent state before the next impulse is applied. Then we average these together to knock down the noise.

5) The problem with this is the “RMS” power of the input signal. For a finite duration measurement, the RMS signal associated with this train of impulsive-like signals is very low.

6) For a causal (e.g. “real world”) system, you will never get a response from the system before the impulse to the system occurs.

7) This method is not very efficient, fraught with problems, and often discarded in favor of the broad-band method (to be described next)

8) For the sake of clarity, we will denote the measured output, normalized to the amplitude of the input, as I(tau), where tau = t – t0 is the time relative to the time t0 at which the original impulse was applied.

Again I(tau) is strictly causal, I(tau) = 0 for tau < 0.

Method 2: Measuring the physical impulse response of a system using a broad band signal.

1) As an alternative to Method 1, we can inject a signal X that is broad band input the system and measure the system response Y to this input.

2) We can compute the transfer function between X and Y by simply Fourier transforming (denoted FT(…)) the two time series to get: T(f) = FT(Y)/FT(X).

3) If we inverse Fourier transform T(f) we get h(tau) = IFT(T).

4) Under certain circumstances (and yes I can prove this myself), you can demonstrate that h(tau) = I(tau), and under those circumstances h(tau) = 0 for tau < 0.

5) Conditions under which h(tau) ≠ I(tau) may be true: If the system is active and there is net gain, you can get negative delay. I have given an example above. Here’s a simple one for those of us who like synthetic data: T(f) = 1/(1 – R exp(-2*pi*i*f*tau)), where R ≶ 1.

6) You can also get negative delays in a system that is smooth, because nonlinearity can cause reflections and acausal behavior in its own right. Take an infinite wave-guide, let the medium be nonlinear and dispersive, and you’ll end up with a nonzero value of R that is level dependent, but nearly independent of frequency. (The nonlinearity mixes forward and reverse going waves in the waveguide.) However, because the system is dispersive, the phase of the nonlinear reflectance will vary with frequency, so in general it “noodles” around tau = 0, but can drift to either positive or negative values of tau.

6) Because X is broadband, but doesn’t encompass 0…infinity in the frequency domain, h(tau) will always be a frequency-domain filtered version of I(tau).

7) If we don’t have a signal generator, but can measure the signal Xm(t) that is an input to the system as well as the output Ym(t) (the “m” signifies that both X and Y are measured, and we have made an assumption of causality between them), then we can use “signals of opportunity” to estimate the corresponding transfer function Tm(f), and once we have Tm(f) we can inverse transform to obtain hm(tau).

8) We have no guarantee in this case that Tm(f) = T(f) or that hm(tau) = h(tau). The reason is that we don’t control the input signal, and the input and system that we are measuring the response to may be covarying with some quantity that we aren’t monitoring. In my research, we might be monitoring pressure, but both X and Y might also be functions of seismic activity, temperature, etc. This can lead to, among other things, apparently acausal signals (in my case, atmospheric pressure waves travel much slower than seismic waves, which can lead to arrivals that are acausal with respect to the atmospheric pressure wave).

9) If X and Y aren’t truly causally related, for example they are quantities that covary (one may cause the other to change, and the other may cause the first to vary; for example, rabbit and fox populations), then you can still compute hm(tau), but it has nothing to do with an ordinary impulse response function, and double-sided behavior in hm(tau) is expected.
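The three situations described in Method 2 (a passive causal system, a loop with net gain, and covarying "signals of opportunity") can be sketched with synthetic data. The 0.8 and R = 0.5/2.0 pole values, the unit loop delay, and the 5- and 2-sample shifts below are illustrative choices, not anything fitted to real data; circular convolution is used so the FFT algebra is exact for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024

# --- (a) Passive, causal, linear system: h(tau) recovers I(tau). ---
# True response I(tau) = 0.8**tau for tau >= 0, zero for tau < 0.
I_true = 0.8 ** np.arange(N)
x = rng.standard_normal(N)                            # broad-band input
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(I_true)).real
h = np.fft.ifft(np.fft.fft(y) / np.fft.fft(x)).real   # T(f) = FT(Y)/FT(X)
assert np.allclose(h[:10], I_true[:10], atol=1e-8)    # causal part recovered
assert np.max(np.abs(h[N - 50:])) < 1e-8              # nothing at tau < 0

# --- (b) Net gain around a loop: T(f) = 1/(1 - R exp(-2*pi*i*f*tau)). ---
theta = 2 * np.pi * np.arange(N) / N                  # tau = 1 sample
def loop_response(R):
    return np.fft.ifft(1.0 / (1.0 - R * np.exp(-1j * theta))).real

h_stable = loop_response(0.5)        # R < 1: causal, h[n] = R**n for n >= 0
assert abs(h_stable[3] - 0.125) < 1e-6
assert np.max(np.abs(h_stable[-50:])) < 1e-6
h_gain = loop_response(2.0)          # R > 1: energy lands at NEGATIVE tau
assert abs(h_gain[-1] + 0.5) < 1e-6  # h(tau = -1) = -1/2: "negative delay"
assert np.max(np.abs(h_gain[:50])) < 1e-6

# --- (c) Covarying signals of opportunity: acausal ghost in hm(tau). ---
s = rng.standard_normal(N)           # hidden common driver, not monitored
xm = np.roll(s, 5)                   # driver reaches Xm after 5 samples
ym = np.roll(s, 2)                   # ...but reaches Ym after only 2
hm = np.fft.ifft(np.fft.fft(ym) / np.fft.fft(xm)).real
assert abs(hm[-3] - 1.0) < 1e-6      # spike at tau = -3: "acausal" arrival
```

In case (c) nothing unphysical happens; the apparent acausality comes entirely from mislabeling which series is cause and which is effect.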

What does this mean for this analysis?

1) When causality between X and Y is not known to be true, it is an assumption that must be tested.

2) Even if we know that causality is present, in a system like the Earth’s climate, which is nonlinear, active, and in which self-oscillation is present (by the way, from the systems theory perspective, sustained internal oscillations are considered evidence of the presence of a “battery” or “power supply”), there is no guarantee that hm(tau) will be causal (meaning there may be statistically significant values of hm(tau) for tau < 0).

3) This means you have to justify truncation of hm(tau) to values of tau ≥ 0.

And by “justify” I mean you have to perform an analysis demonstrating that hm(tau) is statistically indistinguishable from zero for tau < 0 before you are allowed the pleasure of truncating hm(tau) to values of tau ≥ 0.
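A minimal version of such a check might look like the following sketch. The 0.9 pole, the noise level, and the 5x threshold are illustrative assumptions; a serious analysis would build a proper null distribution, but the idea of comparing negative-tau bins against a noise floor is the same:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2048

# Known causal system (applied circularly so the FFT algebra is exact),
# with additive noise on the measured output.
I_true = 0.9 ** np.arange(N)
x = rng.standard_normal(N)
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(I_true)).real
y = y + 0.1 * rng.standard_normal(N)

h = np.fft.ifft(np.fft.fft(y) / np.fft.fft(x)).real

# Noise floor for "no response": RMS of h over deeply negative taus,
# where a causal system can contain only estimation noise.
floor = np.sqrt(np.mean(h[N // 2 : N - 20] ** 2))

# The causal peak stands far above that floor, so truncating the
# negative-tau bins is justified here -- because we checked, not by fiat.
assert h[0] > 5 * floor
assert abs(h[0] - 1.0) < 0.2
```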

That’s all I have time for. If somebody asks me a reasonable question, as a legitimate complaint not related to my “tool use”, I will respond given time.

Hopefully somebody found this of use, maybe even Bender who thinks I crank my tool too much. ;-)

Carrick and Bart, thanks for all of this. Can both/either of you recommend texts/articles that deal with the topics from the perspectives that you recommend?

This should have been:

8) We have no guarantee in this case that Tm(f) = T(f) or that hm(tau) = h(tau). The reason is that we don’t control the input signal, and the input and system that we are measuring the response to may be covarying with some quantity that we aren’t monitoring. In my research, we might be monitoring pressure, but both X and Y might also be functions of seismic activity, temperature, etc. This can lead to, among other things, apparently acausal signals (in my case, atmospheric pressure waves travel much slower than seismic waves, which can lead to arrivals that are acausal with respect to the atmospheric pressure wave).

8) translates to 8).

(see if I got the escapes right on that.)

OK, a couple more typos

1) As an alternative to Method 1, we can inject a signal X that is broad band into the system and measure the system response Y to this input.

T(f) = 1/(1 – R exp(-2*pi*i*f*tau)), where R > 1.

And of course replace all smiley faces with <p>8).

Carrick writes:

=======================

A “true” impulse response function is the response of a system to a very-narrow-band impulse

==============================

The impulse is as broadband as a signal can be. It contains all frequencies.

Thanks Tom, I thought I got that mistake (error in copy/paste from where I was proofing it).

I meant “very narrow-width” impulse. You want the impulse, of course, to be as broad band as can be (based on the capacity of the generator of the impulse to produce this signal, of course), with the mathematical ideal being a Dirac delta function, as mentioned above.

My final word to Carrick.

Steve McIntyre

Posted Sep 22, 2011 at 3:11 PM

Phase Plane Analysis: Hopefully Ogata still discusses this. This is the 5th edition. Mine has no edition number on it.

I have still never seen as good a book on FFT methods of spectral estimation as the classic Oppenheim and Schafer.

I don’t think he understands that feedback and poles are synonymous.

Mark

Nice article on ‘negative group delay’.

I learned something from that.

I reply here because the other thread is too far indented.

I encourage the combatants to limit the number of usages of ‘you’ and ‘your’ for demeaning attribution, in any given reply, as a way to damp down the disproportionate responses.

The technical fracas is interesting, the mud wrestling, less so. Who will be the first to do an eye gouge? The ad hom attributions undercut the value of the work, and are self defeating.

Easy for me to say…

Perhaps the parties will humor me on dropping the debate about who needs to apologize to whom, and whose feelings are more wounded… It is merely tiresome and dilutive at this juncture, IMHO.

You are all obviously successful technical professionals. I doubt this kind of thing is conveyed in memo form at work.

This topic is too important to allow it to be squandered and subsumed by sophomoric squabbling…

RR

(herewith, ‘you’ is used in positive sense)

Agreed RR. A bit of toning down of personalities would improve the readability of the thread. I’ll do my part to try and keep the heat down in my comments.

The immediate topic lends itself to that, because it is so utterly jejune. Carrick isn’t interested in the problem at hand. He wants to inform us that he is brilliant.

Sorry if I am intemperate. I’ve been at this for too long and I need to just walk away. Yet, leaving the waters to be muddied by the likes of this popinjay is equally unsettling to me at this time.

The mixture is far too unsettled for me to read the tea leaves, yet.

=============

bart:

When we started this you wrote

“Nobody is going to believe something some anonymous guy posting on a blog would say. Even if I did the analysis, someone respected by a lot of people would need to replicate it. So, why take the time to sort it all out on my own when it would be wasted effort?”

And I noted that if you did an analysis these respected people would show up. I listed Carrick and Nick, among others. There are a good number of people here who have experience with control theory. (Willard ain’t one of them.) It’s also important to note that Carrick and Nick do not agree on all matters. One of the nice things about this place (as opposed to RC or Tamino or pick any warmist site) is that people who may disagree about global warming can come together to discuss methods. It can get frustrating and testy. But it beats rabbit droppings and echo chambers.

We already know that Carrick is brilliant as is Nick. We know that from years of reading their stuff, seeing them argue, reading their code, seeing them change their minds. That is why I promised that they would show up. I think what needs to be established is not who is more brilliant, but rather who can identify the issues at hand in a clear manner and suggest an agreed upon approach to resolve the issues.

If we are really lucky RomanM will come back.

Steve – All I see these guys doing is talking past my arguments. Look over the thread. You will see them make an argument. I will respond. They will ignore my comment. Then, later, they will dredge up the same argument which I have already explained is a false concern.

They are completely wound up over a fiction. Take a look at this plot. I generated this data artificially. I know what its properties are. I know it is completely causal. But, I get a ghost of a non-causal response. It is inherent to the process when the correlations are longer than the data window. It is useless information.
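Bart's "ghost" can be reproduced in a few lines of synthetic data. The 0.99 pole and 256-sample window below are illustrative choices; the point is only that the correlation time (about 100 samples here) is comparable to the data window:

```python
import numpy as np

rng = np.random.default_rng(3)
N, a = 256, 0.99   # short window; correlation time 1/(1 - a) = 100 samples

# Strictly causal system y[n] = a*y[n-1] + x[n], run in the time domain
# (ordinary linear convolution, no circular tricks).
x = rng.standard_normal(N)
y = np.zeros(N)
for n in range(N):
    y[n] = a * (y[n - 1] if n > 0 else 0.0) + x[n]

# Frequency-domain impulse-response estimate.
h = np.fft.ifft(np.fft.fft(y) / np.fft.fft(x)).real

# The true response is exactly zero for tau < 0, yet the estimate shows a
# "ghost" at negative tau because the correlations outlast the data window.
ghost = np.sqrt(np.mean(h[N // 2 :] ** 2))  # RMS over the negative-tau half
assert ghost > 1e-4                         # far above floating-point noise
```

The ghost is an artifact of the window, not evidence of acausal physics, which is exactly the distinction being argued over.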

This is precisely why I was reluctant to be drawn into the discussion in the first place. We are being led down a path of destruction by people who are so caught up in their confirmation bias that they do not take the time to understand exactly what they are doing.

Well, reading over the comments I see a lot of things. I don’t think it’s fruitful to try to untangle things. That untangling is a distraction from the issues at hand. At this stage, after tempers have been engaged, you basically have a few choices: leave; stay and get madder; or pause and summarize, as Carrick has done.

“They are completely wound up over a fiction. Take a look at this plot. I generated this data artificially. I know what its properties are. I know it is completely causal. But, I get a ghost of a non-causal response. It is inherent to the process when the correlations are longer than the data window. It is useless information.”

I would not call it a path of destruction to get clearer on that issue or other issues.

Speaking of RomanM…

Group delay and delay are two entirely different things. The speed of light still holds.

Mark T;

Re: sun as power supply and input, I’m sure you are familiar with the term Power Supply Rejection Ratio, which is the system response to variations of the power supply.

Why so much ‘binary thinking’ on an intrinsically analog question?

Remember, Black, White, Grey, BINGO, go with the grey…

Carrick seems to be a diligent academic kind of guy; tipoff for me was ‘ my students’.

Ok, we all learned from academic folks. Persnickety about ‘rigor’. Ok, fine.

Bart and Mark T remind me of the cantankerous old guys at work, with decades of system design,

including situations where lives are at stake. Lessons from those guys are rarely gentle.

It seems that Bart has looked at this from a very conventional framework, and gotten a classic answer.

Carrick seems to imply that Bart and Mark T can’t do ‘proper’ system analysis.

Does this mean all of their prior designs must be recalled?

Wouldn’t the world have noticed by now if the machines didn’t work as advertised?

Here’s a question; On what do the contentious posters agree? Always a good starting place…

Is this a situation where argument over nits and jargon is obscuring the big picture?

RR

Uh, that’s not the problem I’m referring to. If you use the sun as an input and a supply, you are converting a three terminal system to… a two terminal system, which cannot have true amplification, and brings us back to the passive system. None of that really matters; it is an aside at best.

16 years for me, Bart mentioned 30. We are both engineers which is almost synonymous with cynical.

Doubtful.

Mark

MarkT just as a matter of interest, how would you analyze a system like the one I described?

A two terminal input to an operational amplifier, hooked to a switching power supply, with a two terminal output, in which you didn’t have perfect isolation from power supply noise?

Also, what happens if you put a capacitor on the V+ end of the power supply and hook it into one end of the operational amplifier? So basically what you’ll mostly measure is the switching noise of the power supply, using whatever amplification you choose to use.

One of the papers I wrote was on self-sustained oscillations, for which you can demonstrate (systems perspective) that a power supply is needed, that is, if you have self-sustained oscillations, the system must be active. (I wasn’t the first on this: TJ Gold pointed it out in 1948, we just had better data.)

If the ENSO isn’t directly forced by the Sun, and it doesn’t appear to be, that would seem to demand that the coupled ocean-atmospheric system be treated as an active one in which net amplification is present.

Bart as I understand it is a climate scientist; I’ve never seen his resume so I don’t know what training that entails. I am a Ph.D. physicist with 24 years of post-doctoral experience. Being in an “academic” environment means something different to me than to RR. I work at a lab where we collaborate heavily with government, military and industry. Not the same thing as a guy in front of a computer pushing a pencil around. The hearing science thing was something I did in the 1990s. These days I mostly do atmospheric acoustics sorts of applications.

“Bart as I understand it is a climate scientist…”

I think you’re thinking of a different Bart.

Definitely.

Hm… sorry for the misattribution, but this does explain why you are so much more knowledgeable than I imagined that other Bart would be about signal processing theory!

LOL.

Actually a really good application, if you want to dive in fully, is using this same framework to analyze the impulse response function of tree ring proxies to temperature. I’d do it myself, but have literally no time for it.

Time is the enemy. I have spent way too much time on this already, which is why I try mostly just to set the stage and encourage others to follow through.

Like you are doing, too 😉

Signal processing is intimately involved with what I do, which is designing control systems. If you have any interest in that field, you may find some of the implications of the proffered transfer function interesting. It is actually just of the right form to provide some pretty standard stabilizing feedback. Also, see my latest entry near the bottom of the page.

Oops, copied and pasted wrong link. Here is the one I was referring to.

The amplification is a result of increased storage of energy, which is actually a passive function. An LC circuit is a prime example, one that can oscillate as well.

I’d have to think about it since I haven’t tested a differential op-amp in a while*, but that’s immaterial to the point I was asking you to think about. If you take an active system, three terminals at a minimum, and tie the input(s) to the supply, it is not active any longer, it is passive. Not that you can’t get valid information from testing in this manner, just that it is no longer active.

Certainly our “system” is much more complex than that, I’m just pointing out that assigning the label “supply” to the sun implies you need something else as an “input” to truly view the system as active. Not that parts of the system, internally, cannot be modeled as active (ENSO, for example,) but the overall system is not.

FYI, Bart mentioned somewhere that he is an engineer, electrical I would assume, with somewhere in the neighborhood of 30 years experience. I posted over in the tAV “resume” thread if you’re curious about my background.

Mark

* Most of the “circuits” I have designed in my career are targeted to 50 ohm impedances, which op-amps tend to deal with poorly. It is a difficult problem getting a 50 ohm signal into an ADC that has a 1k ohm input. You either use a transformer (1:4 is typical) and accept some mismatch, or drive it with something like an AD8138 (differential op-amp) and suffer from a variety of other issues.

I submitted a query in response to the recent comment of Sep 22, 2011 at 8:30 AM by Carrick. However, it’s been inserted into the middle of the pile, here (at least on my browser). This is confusing. Hence this unthreaded “marker”.

Two kinds of delay: an actual time delay, i.e., the difference between one sample and the next (since we are referring to a sampled system), and group delay, which is the negative derivative of phase with respect to frequency. GD units are time, but it does not represent a true time delay; rather, it is an apparent delay that is a result of the system “predicting” the output. From the article I linked:

Mark
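Mark's definition of group delay (the negative derivative of phase with respect to frequency) can be checked numerically. The two filters below are illustrative: a pure 3-sample delay, whose group delay equals the true delay, and the causal "predictive" filter h = [2, -1] discussed elsewhere in the thread, whose group delay is negative at low frequency even though the filter is strictly causal:

```python
import numpy as np

w = np.linspace(0.0, np.pi, 10001)   # normalized frequency (rad/sample)

def group_delay(H):
    """Group delay tau_g(w) = -d(phase)/dw, evaluated numerically."""
    phi = np.unwrap(np.angle(H))
    return -np.gradient(phi, w)

# Pure 3-sample delay: H(w) = exp(-3jw). Group delay IS a true time delay.
gd_delay = group_delay(np.exp(-3j * w))
assert np.allclose(gd_delay, 3.0, atol=1e-6)

# Causal filter h(0) = 2, h(1) = -1: H(w) = 2 - exp(-jw).
# Group delay is NEGATIVE near DC (about -1 sample) despite strict causality,
# and turns positive again toward the Nyquist frequency.
gd_pd = group_delay(2.0 - np.exp(-1j * w))
assert gd_pd[0] < -0.9
assert gd_pd[-1] > 0.3
```

This is the whole point of the "negative group delay does not imply negative time delay" distinction: the second filter never responds before its input arrives.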

This is a very interesting discussion, snide comments aside. Carrick has reduced the question to:

.

Very simple question, from an EE who has basic understanding of some of these things: could it be that the back-and-forth here between Bart and Nick/Carrick revolves around the following?

Bart: negative delay makes no sense in the real world, since it implies that the effect precedes the cause.

Nick/Carrick: negative delay does make sense in the real world, when examining a cyclical system, because an oscillating system can introduce a “lead time” amplification that comes before the next cycle.

Just trying to understand what is being said here.

If the above two statements are (overly simplified) reasonable summaries of the two positions, then I would tend to agree with Bart, because the “negative delay” in the latter sense is actually an almost-full-cycle delay from a causal perspective.

Wouldn’t a full cycle delay give a lot of throttle play to the response?

=============

MrPete:

Absolutely this is what is involved. The conventional way to compute h(tau) is using the frequency-domain method, which assumes a broad band signal as the input X, to which you are measuring the response Y. Broad band measurements by their very nature have the assumption of cyclic behavior, and it is this that allows for phase-lead behavior.

Another way of saying this, if you were to zap the same physical system with short-duration pulse, you certainly wouldn’t expect to see a response that preceded the impulse. The equivalence between the broad-band calculation of h(tau) and the time-domain direct measurement of the impulse response y(t) is guaranteed only for passive, linear systems.

For climate, we are stuck with broad-band signals because we can’t (and shouldn’t) generate impulsive inputs to the climate system in order to directly measure their time-domain response. Because of that, we have to control for the possibility of statistically significant values of h(tau) for tau < 0. We can’t simply truncate h(tau) for tau < 0 by fiat. That’s an error made by somebody who really doesn’t understand what the broad-band derived h(tau) truly signifies.

(Negative delays aren’t even necessarily bad… it tells you something about the pole structure of the climate system in some averaged sense. This has some relevant discussion to this topic, especially when one is measuring the transfer function for a nonlinear system, and how that can be interpreted.)

Large volcanic eruptions may be as close as we get to a time-domain measurement of an impulsive input and the response of the system to it. There are peer reviewed publications that look at this, of course.

If that is what you are arguing, then you have no leg to stand on at all (not that you ever did, just that this makes it obvious why). We are not, in fact, dealing with a broadband signal which has significant components beyond the Nyquist frequency.

All this time, you apparently have been concerned about aliasing. Well, I’ve already addressed that.

Bart, meet data

And yes, I do know what a Fourier transform is, understand sampling theory and Shannon’s sampling theorem, and I’m not worried about aliasing.

(And yes this is a colossal waste of my time, but maybe somebody besides Bart will learn from it, so…)

“And yes I do know what a Fourier transform is… Broad band measurements by their very nature have the assumption of cyclic behavior…” suggests you do not grok it.

I can croak, now, I’ve heard bullfrogs and cats fighting.

========

“Broad band measurements by their very nature have the assumption of cyclic behavior, and it is this that allows for phase-lead behavior.”

Ridiculous. You don’t even know what a bloody Fourier Transform is. A Fourier Series assumes a periodic base. A Fourier Transform does not. But, I’m not about to go into the nuances of functional analysis for you here.

A phase lead can be generated as simply as having an impulse response of the form h(0) = 2, h(1) = -1. This is simply a basic discrete time PD controller.
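Bart's claim about the h(0) = 2, h(1) = -1 filter can be verified in a few lines; the frequency grid is arbitrary:

```python
import numpy as np

w = np.linspace(1e-3, np.pi - 1e-3, 1000)  # normalized frequency (rad/sample)

# Discrete-time PD-like impulse response h(0) = 2, h(1) = -1.
H = 2.0 - np.exp(-1j * w)                  # its frequency response
phase = np.angle(H)

# The output phase LEADS the input at every frequency in (0, pi)...
assert np.all(phase > 0)
# ...e.g. about 26.6 degrees of lead at w = pi/2, where H = 2 + 1j.
assert abs(np.degrees(np.angle(2 + 1j)) - 26.565) < 0.01
```

So a two-tap, strictly causal filter produces phase lead with no appeal to “cyclic behavior” at all.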

A few questions from the uninformed:

How does the system know that a signal has a “cyclic behavior”? How can it predict the future of a cycle? How does it know that the signal will not be turned off immediately?

An impulse is a broadband signal. It is 1 at t=0 and 0 everywhere else. How is this cyclic? It would seem to be as acyclic as a signal can be. How does the impulse response provide information about the performance of a system to a signal that has a “cyclic behavior”?

Phase lead and phase lag have nothing to do with time or with “cyclic behavior”.

What on earth is “cyclic behavior”?

It doesn’t, nor does it need to.

What is happening is a bit more complex than I can go into here. Read the link I provided above for some insight. In general, when things vary slowly relative to the ability of the system to “adapt,” it can appear to “predict” future behavior. Sudden changes will cause this ability to fail.

It doesn’t. In fact, when this happens, or any sudden (not smooth) change occurs, the ability to predict is “lost” and must start over. The link explains this, too.

Realistically, it is not, except for a brief instant in time.

Because when you put an impulse into a system, you are hitting it with all frequencies. If you look at the spectral response, you get magnitude and phase at each of these frequencies. The inverse of this is the time response.

Well, phase lead/lag in this context are functions of the frequency and the delay between input and output. For illustration purposes, consider a delay of 1/10 second and a signal with two frequencies, one with a frequency of 1 cycle/s, and the other with a frequency of 2 cycles/s. The phase lag of the first frequency output w.r.t. the input is 360/10, or 36 degrees (pi/5 rad.) The phase lag of the second is 720/10, or 72 degrees (2pi/5 rad.) Both have been delayed by 1/10 second, however.
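That arithmetic can be verified directly on sampled signals; the sample rate and duration below are arbitrary choices (an integer number of periods keeps the demodulation exact):

```python
import numpy as np

fs, T, tau = 1000.0, 10.0, 0.1            # sample rate (Hz), duration (s), delay (s)
t = np.arange(int(fs * T)) / fs

def phase_lag_deg(f):
    x = np.sin(2 * np.pi * f * t)          # input
    y = np.sin(2 * np.pi * f * (t - tau))  # output: same signal delayed by tau
    # Complex demodulation at frequency f to read off each signal's phase.
    zx = np.sum(x * np.exp(-2j * np.pi * f * t))
    zy = np.sum(y * np.exp(-2j * np.pi * f * t))
    return np.degrees(np.angle(zx / zy))

# Same 1/10 s delay, different phase lags at different frequencies:
assert abs(phase_lag_deg(1.0) - 36.0) < 1e-6
assert abs(phase_lag_deg(2.0) - 72.0) < 1e-6
```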

Consisting of sinusoids, i.e., oscillating.

Mark

“How does the system know that a signal has a “cyclic behavior”? How can it predict the future of a cycle?”

A very good question. You need to have observed a signal for a while to characterize cyclic behaviour. That’s the issue with this 10-year data period and statements about a 0.0725 yr^-1 frequency. Once you’ve characterised it adequately (after a few periods), you can predict. But as you say, someone might switch it off.

“How does the impulse response provide information about the performance of a system to a signal that has a “cyclic behavior”?”

There’s also a purely time domain interpretation of the use of the impulse response. You can regard, by linear superposition, any signal as just the weighted sum of impulses. So the response is just the weighted sum of impulse responses. That’s where the convolution comes from. This works for sinusoids as for any signal.
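Nick's superposition argument can be checked in a couple of lines: build the output as an explicit weighted sum of shifted impulse responses and compare it to a convolution (the FIR taps and input length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Impulse response of some causal linear system (here: a short FIR filter).
I = np.array([1.0, 0.5, 0.25, 0.125])
x = rng.standard_normal(50)   # any input: a weighted sum of shifted impulses

# Way 1: explicit superposition of shifted, scaled impulse responses.
y1 = np.zeros(len(x) + len(I) - 1)
for n, xn in enumerate(x):
    y1[n : n + len(I)] += xn * I

# Way 2: convolution of the input with the impulse response.
y2 = np.convolve(x, I)

assert np.allclose(y1, y2)    # identical, as linear superposition requires
```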

An impulse is a broadband signal. That is why it is used. It contains all frequencies.

So why is it bad to apply all frequencies to the system under consideration to see its response?

It’s not bad, actually, though it will not provide you with any indication of how the system changes over time. In reality it is impractical to implement, however (a true impulse is a mathematical construct only.) Network analyzers actually sweep the frequency to generate the same thing for analog devices.

Mark

MarkT:

These days, I think they are moving towards maximum-length sequences (which are slightly more efficient).

I still use (multi-tonal) sweeps in ear acoustics to obtain the system response as a function of frequency, but that’s because it is nonlinear, and a sweep is a closer approximation to a series of tones than broad-band noise is. It’s an interesting problem because the cochlea is nonlinear, active and dispersive, and you can make use of the dispersive nature of the system in your OLS analysis in measuring its response to separate otherwise overlapping reflections.

Some people in that community use broad-band noise. I think there’s been a bit of work on MLS in that field, but the problem with that is, you need a very good transducer to make it work well, and we’re talking about miniaturized systems that have to fit into ear canals.

Bart does say this, but he is referring to time delay.

Sort of, I think, but Carrick is actually referring to negative group delay, which is different. Carrick seems to be arguing that because negative group delay can exist, we need to consider the negative time delay portion of the impulse response.

It’s a bit more than that: it would imply that the system could start generating a response in anticipation of a signal prior to the signal’s arrival. Once the signal is proceeding through the system, however, it may appear to be leading in time, but any sudden (broadband) change will quickly reveal the difference. Negative group delay does not in any way imply negative time delay.

The link I tried to provide above (search on dsprelated) explains all of this in sufficient detail for most people to understand.

Mark

“Carrick seems to be arguing that because negative group delay can exist, we need to consider the negative time delay portion of the impulse response.”

Thank you, Mark, for having greater patience than I, and for so clearly resolving the impasse. After all the time we have spent on this ridiculous argument, I stopped listening at the point I realized he was suggesting something physically impossible and just switched to “how do I get rid of this guy” mode.

And here I thought you were patient, comparatively speaking.

Mark

Compared to you, certainly!

😛

It’s just gone on too long. They’re a broken record, completely uninterested in anything anyone else has to say.

MarkT, I’m referring to the “tau” in hm(tau) in my summary:

What I am discussing relates exactly to what Bart is calculating so it is entirely relevant.

Were Bart measuring a train of impulses in the time domain, instead of synthetically constructing them using the transfer function between T and dR before inverse transforming, what he says would be right.

The problem is he is confusing physical delay with the lag that comes out of his calculation, and thinks the latter is the former.

I am going to have to partially concede Carrick’s point. If the system in question were unstable, then the procedure would pack the inverse of the impulse response into the negative time region. If he had stated the problem more pithily like that in the first place, instead of dragging us into the responses of mammalian cochlea, we would have avoided a lot of grief.

However, that is not what is happening here. The impulse response computed from this data has negative time components only because the correlations are longer than the data window. This is indicated by the fact that the “negative time” components are only weakly coherent, but clearly exhibit the same natural frequency as the forward impulse response.

This conclusion is firmly supported by the fact that the full impulse response estimate looks very like that generated with my artificial data.

I realize I came into this discussion in the middle, really was kind of dragged into it by a bone-headed mistake I made on Nick’s website. Given the history of the discussion on this website to that point, I can see how you would have taken anything I said the wrong way. I’m not even sure there was a “right way” to say what I wanted that wouldn’t lead to acrimony given the prior history of the discussion on this thread, but never mind.

Glad we have something we can agree on, and let’s just leave it at that. Time to breathe is a good thing.

Well, with a statement like that, it would be bad form for me not to extend my apologies for heated words. On my comment previous:

“…but clearly exhibit the same natural frequency as the forward impulse response.”

I leaped before I looked. It is not the same natural frequency – I just assumed it naturally would be. But, the artificial data also does not match the phantom negative time oscillations with the natural frequency of the true response. The visualization is even more stark when I use representative data of the same length as the actual data.

I know, I know… what confidence can you have in the indicated correlation bandwidth, given that it is longer than the measurement data span inverse period? We’ve been over that. My position is that it is possible for it to be right, based on analysis of artificial data of the same length and statistics. Furthermore, it has been observed that, qualitatively, the result looks “right” when it is right. But, definitive results will just have to wait for a more extensive data record. However, the -180 degree phase lag is not in doubt, as it extends to frequencies for which the inverse is within the data record interval.

No group hugs, please.

It’s been a long day and I’ve been trying to learn Java in the middle of it all… time for a beer.

Cheers.

Mark

Mark T:

Care for a book on 6502 programming instead? I may have one lying around. It might actually have more utility than learning Java. >.>

Did the 6811 years ago. The system I develop on is controlled using Java for various reasons. The algorithms are all done in CUDA C (Nvidia Tesla system… Teraflop processor.)

Yes, btw, I do regularly do channel estimation and system identification, though admittedly, the impulse response is typically at -80 dB from the peak well before even the midpoint of the response result. There is no argument in such cases. Nobody would be aggravated. We wouldn’t even need a taper since it is essentially 0 within 1/10th of the response.

I’m drinking Laughing Lab at Holy Cow on Stetson Hills at Powers, MrPete, if you’re bored.

Mark

Stetson Hills @ Powers… bummer, too late… I’m on that side of town early some days, never in the evening 🙂

Put a good one on in your honor for the halibut anyway. At corner pocket west at 8th and arcturus every other th on average.

Mark

It’s all good Bart. It would be a lot more insulting if you didn’t give a d**m about what I said than if you maybe cared too much! I’ve got a pretty thick skin, one has to, to work in research, as you are probably fully aware too.

I’ll have more to say about the physical problem presently. First I have to catch up on everything that was said upstream to when I entered.

I’ll raise again my speculation that your low-pass filtering may fix a lot of sins. (I think you agreed with me on this point.)

First I have to escape wife and boss aggro.

I’d better have those poles and zeros by the morning, is all I can say.

Well, I hope we have both been proved right, and that we agree upon the following conclusions:

1) an unstable response can generate an apparent negative-in-time impulse response estimate using the FFT-based analysis approach

2) So can a data record shorter than significant correlations in the data, and this effect is spurious

I hope further that you will agree with me that the apparent backwards-in-time response here bears the earmarks of case 2. I recommend generating data according to my prescription and testing with it to give you a feel for how it behaves.

I generally only deal with stable systems, and so never really looked at what would happen if the system were unstable until you brought this up. Likewise, I’d bet we both rarely, if ever, run into the case where there is really insufficient data to consistently tease out the long term correlation which is evident here, at least using this analysis approach.

We all seem to agree, to differing degrees, on many common areas, differing largely in interpretations of results borne of differing experiences. Maybe ruhroh was right. Even Nick has clarified to the point I understand his view.

I think I know of a few tests that can provide more insight too, though they all suffer from the same problems inherent to short records.

Mark

The wondering mode

Again and again and again.

Forces spin to rest.

==========

When trying to break a rock, Nick Stokes comes with a steady drip, while Carrick comes with a sledgehammer…

It’s ironic that just as peace breaks out in this thread (and well done guys – it’s been a fascinating process), CERN announce that some sub-atomic particles apparently travel faster than the speed of light, and so presumably negative time intervals are possible after all.

http://www.bbc.co.uk/news/science-environment-15017484

AndyL (Sep 23 03:06),

here’s a link to the original paper

http://arxiv.org/ftp/arxiv/papers/1109/1109.4897.pdf

Meh… Two words: Cold Fusion. They’ll figure out where they went wrong sometime.

Or, maybe there’s something to the VSL theories. I’ve always thought that argument made more sense than cosmic inflation anyway.

Re: MrPete (Sep 22 10:23),

My main contention is that it’s too early to ask what “makes sense”. It’s an exploratory analysis on observations. We’ve got (by FFT) to an apparent impulse response that is two-sided, and just cutting one side off does not help the exploration.

But I think the cyclic system issue needs thinking about. I think people here have hard-wired systems much in mind, with the “box” (T2) being analysed detached from the loop. It isn’t, and so the transfer through T2 is mixed with that through T1, which is where CRF is changing temperature, which could certainly cause “negative delays”.

Just one other thing – when you speak of “full-cycle delay” I think you have in mind a system in oscillation at a defined frequency (so you can speak of a full cycle). That’s not really what we have here.

That’s actually just feedback, which cannot create negative delays in time (the feedback path itself has a positive delay, even if it is small), though it can result in negative group delays. The feedback path itself is necessary to understand the behavior of the overall mechanism.

Take a look at Bart’s system, posted above, to see what it is he is doing.

Mark

I suppose that was the vocabulary of defined-frequency, but do understand that this isn’t such a system. I probably should have said full-feedback-cycle.

As Mark nicely summarizes, what we’re dealing with here is some kind of stimulus (or “cause”) and some kind of impact (or “effect”). And while there can be feedback that has an effect the next time the “cause” occurs… and thus can appear to precede the cause, that is not really what happens. Feedback never can happen in negative time.

A nice simple example that comes to mind is Whack-A-Mole. I’m pretty good at it, and definitely begin my Whack before the Mole pops out of its hole, based on timing learned from previous Whacks. But that doesn’t mean negative time is involved.

😀

Group delay is not about time delay. It is a measure of how the phases of the constituent frequency components that make up the signal are affected as they move to the output. The changes in phase will affect the shape of the waveform, not the speed of the wave. Group delay is a metric for how faithfully the shape of an input waveform is preserved across the system. It is used as a measure of how faithfully data can be passed through a telecom network. It is not a measure of the time delay that the waveforms will take to traverse the system. Group delay is about shapes of waveforms. It is not about time delay. It has nothing to do with reaching into the past. It is not mysterious. It is not negative time delay.
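For lurkers wanting something concrete: a minimal Python sketch (my own illustrative two-tap filter, not anything from the thread) shows a perfectly causal FIR filter exhibiting negative group delay near DC, even though its impulse response is zero for all negative times.

```python
import numpy as np

# Group delay tau(w) = -d(phase)/d(omega) of H(w) = sum_k b[k] e^{-j w k},
# estimated by central difference on the phase.
def group_delay(b, w, dw=1e-6):
    k = np.arange(len(b))
    phase = lambda om: np.angle(np.sum(b * np.exp(-1j * om * k)))
    return -(phase(w + dw) - phase(w - dw)) / (2 * dw)

b = np.array([1.0, -0.5])   # causal two-tap FIR: h = [1, -0.5], zero for t < 0
tau = group_delay(b, 0.01)  # near DC
print(tau < 0)              # negative group delay from a strictly causal filter
```

Nothing non-causal is happening; the phase slope near DC is simply positive for this filter.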

While the discussions on signal processing include some less-than-respectful comments about others, I applaud those whom I previously regarded as trollish troglodytes for their technical contributions. I’m actually learning something from the discussions, and have returned to reading some of their posts. “They” may still be a******s, but the content is appreciated.

Steve

“Can anyone speculate what was in Dessler’s mind when he applied this method?”

Dessler simply says:

“The cloud feedback is conventionally defined as the change in ΔRcloud per unit of change in ΔTs.”

And that is what he works out. There is no dynamics in the notion. It relates to equilibrium sensitivity. In recent terminology, a DC response. He assumes (for this purpose) that the observed fluctuations in CRF are caused by the surface T fluctuations, and the regression gives the proportionality.

That goes into a feedback calc thus – suppose you have a primary mechanism (eg Stefan-Boltzmann) which says that a small equilibrium shift in flux F causes a proportional change in T

dF1 = a*dT

Then you find a mechanism (eg CRF) which produces a flux change in F (dF2) proportional to a change in T

dF2 = b*dT

Then the net sensitivity is dT/(dF1+dF2) = 1/(a+b).

If b is of opposite sign to a, it enhances the magnitude of the flux and is termed positive feedback.
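That arithmetic is easy to check in a couple of lines (the values of a and b below are hypothetical, chosen only to illustrate the signs):

```python
# Net equilibrium sensitivity from the sketch above: dF1 = a*dT, dF2 = b*dT,
# so dT/(dF1 + dF2) = 1/(a + b). Values are illustrative, not fitted.
a = 3.3    # primary (Stefan-Boltzmann-like) term, W/m2/K
b = -0.5   # feedback term of opposite sign, W/m2/K
print(1 / (a + b) > 1 / a)   # opposite-sign b increases the net sensitivity
```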

Dessler’s para here:

“This definition of the cloud feedback is a standard approach for quantifying feedbacks (26). It only requires an association between Ts and ΔRcloud but does not imply any specific physical mechanism connecting them. The recent suggestion that feedback analyses suffer from a cause-and-effect problem (27) does not apply here:”

might be worth looking into.

It is entirely based on an assumption of zero phase lag. That is a clearly invalid assumption.

By the way, all of youse guys seem to be infringing on my patented technique which is mine, of including some minor thing which is Wrongo, and thus eliciting responses from wiseguys who might otherwise remain silently on the sidelines. Herein we saw recursive deployment…

This was a bit like the runaway jazz song (with dueling soloists), that has the audience wondering if they are ever going to get back and reprise the theme, in a semi-harmonious way.

Dramatic, yet valuable.

Thanks to all.

RR

Indeed!

A heartfelt thanks to all involved in a great thread!

I believe that even the stoic adversaries will, with hindsight, reconsider the value of this communication – scientific education, blogosphere style.

Steve

Steve McIntyre:

I think what I am using here would mostly be in the peer-reviewed literature. Most textbooks that deal with broad-band methods for obtaining transfer functions and “impulse response functions” stop with linear, passive systems, and don’t address any of the pitfalls in that method when these assumptions fail.

I’m sure there are reviews where some of the comments I’ve made above are synthesized, I would have to search for them though.

I’m not sure what you’re getting at here. I’ve been making the same arguments as Bart, simply noting distinctions about the analysis, not any real “name-plate” polishing.

While I agree that this is a physical system, the argument is actually about how you interpret the results of the signal processing mechanics.

Mark

I want to highlight and repeat what I just replied to RomanM above, as it might get lost in all the traffic, and is important.

I had a sort of breakthrough thought on this. Given the proffered system diagram, this is the type of behavior which would be expected.

Label the top response T1 and the bottom T2. We are trying to estimate T2. The closed loop transfer function from the input Radiation Forcing (RF) to the input point of T2 is H2 = T1/(1 - T1*T2). Assume the gain of the loop is “large” within the passband. Then, H2 ≈ T1/(-T1*T2) = -1/T2. So, if the RF is wideband, the spectrum of the input to T2 should be approximately the spectrum of RF divided by mag(T2)^2, which is what RomanM has found.

The gain from RF to the output of T2 is approximately unity, so the output of T2 should more or less track RF within the passband of the loop. My hunch would be that, that passband is probably about 0.3 years^-1, which is where you get the maximum phase margin boost from T2.
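The high-loop-gain step in that derivation is easy to sanity-check numerically (the complex values below are arbitrary stand-ins, not estimates of the actual T1 and T2):

```python
# Check H2 = T1/(1 - T1*T2) ~= -1/T2 when the loop gain |T1*T2| is large.
# T1 and T2 here are arbitrary complex samples, for illustration only.
T1, T2 = 50 + 10j, 2 - 1j            # |T1*T2| is roughly 114, i.e. >> 1
H2_exact = T1 / (1 - T1 * T2)
H2_approx = -1 / T2
rel_err = abs(H2_exact - H2_approx) / abs(H2_approx)
print(rel_err < 0.02)                # within a couple of percent
```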

“My hunch would be that, that passband is probably about 0.3 years^-1, which is where you get the maximum phase margin boost from T2.”

Or, roughly the maximum anyway.

“The gain from RF to the output of T2 is approximately minus unity (T1*T2/(1 - T1*T2) ≈ T1*T2/(-T1*T2) = -1)…”

And, this is another important point. The ~5 year time constant associated with the transfer function we have been looking at is open loop. In the closed loop, the response should be more like 1/0.3 years, or about three years to settle (time constant of maybe 1/2 years), assuming the closed loop bandwidth is about 0.3 years^-1.

The loop would have to be quite complicated to take advantage of the phase lead near 0.3 years^-1. There appears to be non-minimum phase behavior in this area. So, maybe the bandwidth is substantially less than this.

Nobody appears to care, but I thought I’d keep any who do apprised.

I’ve largely dropped out of this discussion, because I think the logic has gone off the rails. I’ll explain why.

We started with CFR and T, relationship to be discovered. We hypothesised that CFR could be expressed as a linear function of T, via a convolution relation, CFR = h ⊗ T.

Now you can always find such a relation, via FFT. But there’s been endless argument about whether h should be causal – ie one-sided.

We got an h such that CFR = h ⊗ T, but it was two-sided. If causality is part of the hypothesis, that is the end of the investigation. We did not find a satisfactory h. Try again to relate CFR and T with some other method.

However, what Bart’s analysis does is to zero the inconvenient non-causal part. That leaves an h that no longer satisfies CFR = h ⊗ T (but is causal). The relevance of that remains to be demonstrated.
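For lurkers, the procedure being argued about can be sketched in a few lines (Python here rather than R; the series are synthetic stand-ins, and the zeroing step is the one in dispute):

```python
import numpy as np

# FFT deconvolution: given T and CFR = h (circularly) convolved with T,
# recover h, then zero the "negative time" (wrap-around) half.
# T and h_true are synthetic stand-ins, not the actual series.
rng = np.random.default_rng(0)
N = 1024
T = rng.standard_normal(N)
h_true = 0.5 ** np.arange(8)                       # a short causal response
CFR = np.real(np.fft.ifft(np.fft.fft(T) * np.fft.fft(h_true, N)))

H = np.fft.fft(CFR) / np.fft.fft(T)                # transfer function estimate
h_est = np.real(np.fft.ifft(H))                    # two-sided in general
h_causal = h_est.copy()
h_causal[N // 2:] = 0.0                            # zero the negative-time half
print(np.allclose(h_est[:8], h_true))              # exact in this noiseless case
```

In this idealized noiseless case the negative-time half is already negligible, so zeroing it is harmless; the disagreement is over what it contains with real, short, noisy records.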

Bart asserts that the omitted part is mere noise. That needs to be shown. I believe it is untrue, as shown by these facts:

1. When omitted, CFR = h ⊗ T seriously failed.

2. The DC gain changed from -12.2 W/m2/K to -9.4 W/m2/K.

Nick – you seem to have a complete blind spot for the lessons of the artificially generated data, which argues quite clearly and strongly that the “anti-causal” components of the estimated impulse response are an artifact of the long correlation time.

I have no more time to waste trying to convince you. The evidence is there, if you are willing to see it.

Thanks to Nick Stokes for returning and recapitulating the core problem from his perspective.

“When omitted, CFR = h ⊗ T seriously failed”

But, it doesn’t. You didn’t look closely enough. It starts to agree more and more closely as time progresses. You are just seeing a settling phenomenon.

Just a reminder: full impulse response estimate from artificially generated data with the same low frequency correlations and observation time span. The mess on the right is a chimera. We know, because I explicitly created this data using white noise input and a causal filter.
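A bare-bones version of that experiment (my own Python sketch following the description above, not Bart's code): a causal AR(1) system with a correlation time comparable to the record length yields a visibly two-sided estimate.

```python
import numpy as np

# Short record of a causal system with long correlations: the raw FFT
# estimate of h acquires spurious "negative time" content. Parameters
# are illustrative, not fitted to the CERES data.
rng = np.random.default_rng(1)
N = 256                              # record short relative to correlation time
x = rng.standard_normal(N)
a = 0.98                             # causal AR(1); time constant ~50 samples
y = np.zeros(N)
for n in range(1, N):
    y[n] = a * y[n - 1] + x[n]
y += 0.1 * rng.standard_normal(N)    # a little measurement noise

h_est = np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(x)))
print(np.abs(h_est[N // 2:]).max() > 0.02)   # nonzero despite causality
```

The system here is causal by construction, so everything in the second half of h_est is an artifact of the short record and the circular FFT assumption.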

What if the artificial data are generated by a fully closed feedback loop?

It shouldn’t make any difference, as long as the input has sufficient bandwidth to excite the entire range of the transfer function. If it does not, the result is not a good representation of the system.

I should note that there is this limitation in my artificially generated data, that of an adequately stimulating random noise generator. This could be why it takes me several runs to get a good result.

The MATLAB generator should be pretty good, but this has been a known issue in Monte Carlo analysis for many decades. It may well be that the real world is more randomly stimulative, and it is perhaps no accident that this particular real world data set generated a good result.

Yup. The data from such generators are pseudo-random and often have issues. Cleve Moler has a pretty long writeup on the method MATLAB uses to generate the normally distributed values from the randn() function, though it’s been a while since I read the article (search at The MathWorks, then wade through a bazillion hits.) It is a good generator, but not perfect, perfection being impossible anyway.

Mark

I cannot see the logic here. You’ve shown, I guess, that an artificial example can generate noise in the negative t portion. That doesn’t mean that all negative t portions of different problems are noise.

You need to show that there is some mechanism in the FFT process that preferentially generates noise in the negative part. And even that does not rule out the possibility of real information there.

I believe there’s nothing in the FFT analysis that would cause asymmetric time treatment of h.

You’ve never dealt with my simple reversal argument. Reverse the data order, and you reverse h. If the FFT process was causing noise and noise only to appear in negative t, why are we seeing the numbers that you thought were valid turning up there, unchanged?

Not just can. Always does. Seriously, Nick, try it yourself. That’s why I provided the algorithm for generating artificial data to everyone.

As for the reversal argument, think about what the cross correlation is and how reversing the time series affects it. It’s not magic.

Can you guys not post R code?
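Since the point about reversal is language-agnostic, it can be checked in a few lines of Python (toy white-noise series, purely to show the symmetry):

```python
import numpy as np

# Reversing both series time-reverses their cross-correlation, so the
# estimated impulse response flips with it. Toy data, for the identity only.
rng = np.random.default_rng(2)
x = rng.standard_normal(64)
y = rng.standard_normal(64)
c_fwd = np.correlate(x, y, mode="full")
c_rev = np.correlate(x[::-1], y[::-1], mode="full")
print(np.allclose(c_rev, c_fwd[::-1]))   # True
```

So the reversal test simply swaps the positive- and negative-lag halves; it cannot, by itself, distinguish signal from artifact.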

Thanks for continuing the dialogue. Many lurkers here.

Bender,

I posted R code here, along with graphs – it’s similar to the code Roman posted above. Just add dR=rev(dR); temp=rev(temp) after they are defined to get the reverse effect. It’s easier to see what is happening if you reduce Nsamp from 8192 to, say, 1024.

I don’t know R code. To generate the artificial data, hopefully this post and the two following it are not too hard to decipher.

http://www.mathpages.com/home/kmath249/kmath249.htm

Those of you who were puzzled by the talk of “negative time lags” in the discussion about transfer functions above can find an accessible discussion at the web page whose URL is above. The web page discusses transfer functions and the effect of the phase response. Contrary to some impressions, these “negative time lags” have nothing to do with predicting the future or showing that the cloud feedback observations are non-causal.

The pertinent passage from the web page is:

More from the web page

Good link Tom.

One thing to point out though is the following. This is provable as a theorem:

If X is the input and Y is the output, and Y depends linearly on X, the underlying system is passive (aka “stable” aka does not require a power supply to operate), and the relationship between X and Y is causal, other than spectral widening due to a finite window and noise, the inferred impulse response function h(tau) = 0 for tau < 0.

For a broad range of systems, tau can be identified with the physical delay.

It turns out if you have a power supply (e.g., the Sun certainly acts as one for climate), then you can get negative delays. These negative delays can either be a sign of net amplification in the system (feedback greater than one so a stabilizing nonlinearity is required) or they can arise in a passive, nonlinear system.

So non-periodic signals, that is, any signals which carry information (e.g. the effect of solar radiance), exhibit a lag when going from the input to the output.

They don’t have to be periodic for the peak of the response to lead the peak of the signal in the derivative case. See the green line in the first figure example here http://landshape.org/enm/phase-shift-in-spencers-data/.

All that is needed is for the max rate of increase to precede the max magnitude – easily done.

## 9 Trackbacks

[…] As Steve McIntyre points out, Dessler cherry picked his data to get the result he wanted. Dessler cherry […]

[…] https://climateaudit.org/2011/09/08/more-on-dessler-2010/ […]

[…] Dessler may need to make other changes, it appears Steve McIntyre has found some flaws related to how the CERES data was combined: https://climateaudit.org/2011/09/08/more-on-dessler-2010/ […]

[…] Nick Stokes Posted Sep 10, 2011 at 5:32 PM | Permalink […]

[…] is the application of the work-in-progress Fast Fourier Transform algorithm by Bart coded in R on the total solar irradiance (TSI via Lean 2000) and global temperature (HadCRU). The […]

[…] Nick raised a point over at CA, that perhaps all we’re getting with ERA-Interim clear-sky fluxes is the CERES fluxes, but with […]