More on Dessler 2010

CERES data, as retrieved in its original state (see here), provides all-sky and clear-sky time series. Dessler 2010 made the curious decision to combine ERA clear-sky with CERES all-sky to get a CLD forcing series. This obviously invites the question of what happens if CERES clear-sky is used in combination with CERES all-sky to calculate the CLD forcing series. One would have thought that this is the sort of thing that any objective peer reviewer would ask almost immediately. Unfortunately, as we’ve seen, climate science articles are too often reviewed by pals. Nor, to my knowledge, has the question been raised in the climate community.

The decision was touched on in Dessler 2010 as follows:

Previous work has shown that ΔR_clear-sky can be calculated accurately given water vapor and temperature distributions (20: Dessler et al., JGR 2008; 21: Moy et al., JGR 2010). And, given suggestions of biases in measured clear-sky fluxes (22: Sohn and Bennartz, JGR 2008), I chose to use the reanalysis fluxes here.

While peer reviewers at Science were unequal to the question, the issue was raised a month ago by Troy_CA in an excellent post at Lucia’s. Having exactly replicated Dessler’s regression results and Figure 2a, I’m revisiting the issue here by repeating the regression in Dessler 2010 style, but with the plausible variation of using CERES clear-sky in combination with CERES all-sky (and with the widely used HadCRUT3 temperature series). The results were surprising.

The supposed relationship between CLD forcing and temperature is reversed: the slope is -0.96 W/m2/K rather than 0.54 (and with somewhat higher, though still low, significance).

Here are the exact details of the calculation. Troy provided the following recipe for CERES data (isn’t it absurd that blog posts on “skeptic” blogs provide better replication information than “peer reviewed” articles in the academic literature):

The CERES data I’ll be using in this post is available for download here.
http://ceres.larc.nasa.gov/order_data.php
I’ve chosen SSF1deg to match up with the Dessler paper, then selected global mean, monthly, Terra, full time range, with the TOA fluxes. I’ll also note that the site automatically downloads version 2.6 now instead of 2.5.

I repeated the exercise (data is now available to end 2010) and uploaded the data set (in ncdf format) to http://www.climateaudit.info/data/ceres. The following operations retrieve data for analysis and plotting:

library(ncdf)
download.file("http://www.climateaudit.info/data/ceres/CERES_SSF1deg-Month-lite_Terra_Ed2.6_Subset_200003-201012.nc","temp.nc",mode="wb")
ceres=open.ncdf("temp.nc")
net=ts(get.var.ncdf(ceres, "gtoa_net_all_mon"),start=c(2000,3),freq=12) #all-sky TOA net flux
tsp(net) #check start, end and frequency of the series
clr=ts(get.var.ncdf(ceres, "gtoa_net_clr_mon"),start=c(2000,3),freq=12) #clear-sky TOA net flux
cld=net-clr #CLD forcing: all-sky minus clear-sky
month=window( ts( rep(1:12,11),start=2000,freq=12),2000.16,tsp(net)[2]) #month index aligned to net
anom=function(x,Month=month) { #function to take anomalies by month
  y=factor(Month); norm=tapply(x,factor(Month),mean,na.rm=TRUE) #monthly climatology
  levels(y)=norm; y=as.numeric(as.character(y)); y=x-y #subtract each month's mean
  return(y)
}
cld=anom(cld)

source("http://dl.dropbox.com/u/9160367/Climate/ClimateTimeSeries.R") #Troy's functions
had=window(getHadCRUt(), start=c(2000,3), end=2010.99) #HadCRUT3 monthly series
had=had-mean(had) #center over the 2000-2010 period
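
As a quick sanity check on the anomaly step (optional), the monthly means of the anomalized series should all be near zero:

round(tapply(cld, cycle(cld), mean, na.rm=TRUE), 3) #one value per calendar month, all ~0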

Dessler’s regression for the conclusion in his Abstract can be simply replicated for CERES CLD data and HadCRU as follows:

fm=lm(cld~had) #regress CLD forcing anomalies on temperature
summary(fm)

This yields a slope of -0.96 +/- 0.98 W/m2/K, REVERSING the result reported in Dessler 2010 using the combination of CERES all-sky and ERA clear-sky (0.54 +/- 0.94 W/m2/K). The r^2 remains very low, though higher than that reported in Dessler 2010.

#had -9.556e-01 4.908e-01 -1.947 0.0538 .
#Residual standard error: 0.5605 on 128 degrees of freedom
#Multiple R-squared: 0.02876, Adjusted R-squared: 0.02117
#F-statistic: 3.79 on 1 and 128 DF, p-value: 0.05376

The scatter plot corresponding to Dessler 2010 Figure 2a is shown below. While I feel uneasy using the term “confidence intervals” for such weak relationships, the 2-sigma confidence interval brackets the -1 to -1.5 W/m2/K range that Dessler 2010 sought to exclude.
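
For concreteness, the 2-sigma interval can be read off the fitted model (a minimal sketch, using the fm object from the regression above):

b=coef(summary(fm))["had","Estimate"] #slope: about -0.96 W/m2/K
se=coef(summary(fm))["had","Std. Error"] #standard error: about 0.49
b+c(-2,2)*se #2-sigma interval: roughly (-1.94, 0.02) W/m2/K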


Figure 1. Re-doing Dessler 2010 Figure 2.

For comparison, here is Dessler’s original figure.

The questions are obvious.

PS. Just to confirm that both flux series are correctly oriented, here is a plot of the Dessler CLD series versus the CERES CLD series (the correlation is 0.73):
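
(A sketch of how the comparison can be scripted; note that dessler.cld is hypothetical here, standing for Dessler’s CLD forcing series obtained separately and read in as a monthly ts object, which is not constructed in the code above:)

both=ts.intersect(cld, dessler.cld) #align the two monthly series; dessler.cld is hypothetical
cor(both[,1], both[,2]) #reported correlation is about 0.73
plot(both[,2], both[,1], xlab="Dessler CLD (W/m2)", ylab="CERES CLD (W/m2)")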

727 Comments

  1. Charlie Hart
    Posted Sep 8, 2011 at 11:49 AM | Permalink

    It pains me to see a straight line drawn through that scatter plot. 🙂

    • Posted Sep 8, 2011 at 1:52 PM | Permalink

      Textbooks sometimes say that both regression lines should be plotted, to see whether developing a regression model is worthwhile:

      A

      (CIs with the intercept, still a bit confused about leaving it out; I don’t really understand anomalies 🙂 )

    • Anthony Watts
      Posted Sep 8, 2011 at 5:01 PM | Permalink

      Yes that was my first impression of the scatter plots also.

      • Robin Melville
        Posted Sep 9, 2011 at 1:49 AM | Permalink

        Ditto. With correlation coefficients this low one really is picking gnat sh*t out of pepper. Is the thinking here that only an r^2 of exactly zero means a result is not significant? A value of 0.02 is remarkably close to that.

    • Posted Sep 8, 2011 at 9:53 PM | Permalink

      I suspect that a lot of shotgun manufacturers would pride themselves on the quality of this distribution.

      • eyesonu
        Posted Sep 10, 2011 at 8:11 PM | Permalink

        You still gotta aim in the general direction! It looks like the rabbit will get away.

  2. Ed Waage
    Posted Sep 8, 2011 at 11:51 AM | Permalink

    Is this another example of “Hide the decline”?

    (Note, you are missing a link in your first sentence at “see here”.)

    • bender
      Posted Sep 8, 2011 at 4:19 PM | Permalink

      Funny. But not quite accurate. They never looked for a decline. It took Steve to find it. The hiding takes place in the next phase. In the usual forums.

      • Philh
        Posted Sep 8, 2011 at 8:40 PM | Permalink

        Welcome back, Cotter.

        • Eric
          Posted Sep 9, 2011 at 11:06 AM | Permalink

          couldn’t resist: http://www.youtube.com/watch?v=GGlY3ubGzUY

          “who’d have thought they’d lead ya
          back here where we need ya”

          it is always nice to see Bender.

        • Steven Mosher
          Posted Sep 9, 2011 at 1:59 PM | Permalink

          My pool is a mess since he disappeared.

  3. Lance Wallace
    Posted Sep 8, 2011 at 11:52 AM | Permalink

    Steve–

    Did you mean to provide a link in your first line (“see here”)? Or is it the same link you provide in the box below that?

  4. Posted Sep 8, 2011 at 11:54 AM | Permalink

    Is this with, or without lag?

    Speaking of Troy, he and I are co-authoring something about this feedback issue that improves the correlations somewhat. Obviously people will want to know more but I don’t want to jeopardize our chances of publication.

    I’ll give everyone a hint: clouds and water vapor aren’t quantum entangled to the sea surface. 🙂

    Steve: No lags (as shown in the script). Apples-to-apples to the Dessler calculation.

    • Posted Sep 8, 2011 at 1:07 PM | Permalink

      Thanks Steve. I figured it had to be, but couldn’t tell (I hadn’t understood quite what the code is doing yet, just kinda glanced).

      So the relationship one gets for the supposed feedback slopes is ridiculously dependent on the arbitrary choices one makes for what datasets to use.

      Forgive a CA cliche, but it’s cloud’s illusions I recall, we really don’t know clouds at all 😉

      • Posted Sep 8, 2011 at 1:41 PM | Permalink

        So the relationship one gets for the supposed feedback slopes is ridiculously dependent on the arbitrary choices one makes for what datasets to use.

        Even that hardly does this justice. To choose all ERA or all CERES would presumably make some sense. To combine six of one and half a dozen of the other should, as Steve says, have been picked up at once by any serious reviewer. But that’s what you never get in climate science peer review. It’s either the lapdog that lost its teeth or the rottweiler who won’t let go until the intruder is dead. It’s pitiful.

        • Steven Mosher
          Posted Sep 8, 2011 at 3:46 PM | Permalink

          due diligence means I get to pick the data I want and the other guy has to look at the problem six ways from sunday. Witness Steig with Ryan.

      • David A
        Posted Sep 10, 2011 at 11:18 PM | Permalink

        At least give credit where it is due, this time to Joni Mitchell.

  5. Hector M.
    Posted Sep 8, 2011 at 12:19 PM | Permalink

    Why is it that ANY regression slope with a confidence interval LARGER than the estimated value is deemed to be in any way significant? Slopes of 0.54 +/- 0.94 or, for that matter, -0.96 +/- 0.98 are not statistically distinguishable from zero, though the latter might be marginally significant if one is slightly less demanding, i.e. with a significance level slightly below 95%. (In other words, Steve’s result is “marginally significantly different from zero at somewhat less than 95% confidence”, while Dessler’s result is definitely not significant in statistical terms unless you reduce the confidence level to embarrassingly low levels.) No wonder R2 is nearly zero, meaning that nearly zero percent of the observed variance in the dependent variable is “explained” by the linear regression.
    Such findings SHOULD be published, of course (all positive, negative or neutral results should be published to reduce “publication bias”), but the conclusion should be “no significant relation found” in the case of Dessler, and “significant at 90% (or whatever) confidence level” in Steve’s case.
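
    A quick sketch of the arithmetic in the comment above (assuming the quoted +/-0.98 is a 2-sigma interval, i.e. a standard error of about 0.49, with the 128 degrees of freedom shown in the regression output):

    t_stat = -0.96/0.49 #about -1.96: right at the conventional 95% cutoff
    2*pt(-abs(t_stat), df=128) #two-sided p-value, about 0.053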

  6. Dave Dardinger
    Posted Sep 8, 2011 at 12:36 PM | Permalink

    Looking at the two figures, it would appear the major difference is a couple of outliers in the UL quadrant and several in the LR quadrant of your figure. Can you check to see why these are so much different in the two data sets?

    • Posted Sep 8, 2011 at 4:57 PM | Permalink

      One is just a subset of observations, extended. The other is a reanalysis.

  7. Posted Sep 8, 2011 at 12:44 PM | Permalink

    Absolutely hilarious!

    I’m looking forward to Dessler’s video explanation.

    After all, Dessler is a ‘Google Science Communication Fellow’ (which must be a tax deduction for Schmidt et al).

  8. Nicolas Nierenberg
    Posted Sep 8, 2011 at 12:49 PM | Permalink

    Should be easy enough to submit as a comment to Science.

  9. Posted Sep 8, 2011 at 1:02 PM | Permalink

    Steve this is very interesting but I’m confused. In your fig. 1 above, which CERES series do you use? Just “all sky” or just “clear sky?” And what did Dessler 2010 do to “combine” these? Average them?

    Steve- the calculations are specified in the code. cld= net-clr

    • Posted Sep 8, 2011 at 1:13 PM | Permalink

      Does “combine” mean that any CLD series is “all sky” minus “clear sky?” And I guess ERA is a different data source from CERES? Anyone confirm if this is correct… I would be grateful.

  10. Posted Sep 8, 2011 at 1:05 PM | Permalink

    One thing that will account for some of the difference (although a smaller amount) is the adjustment to CRF to get to delta-R (since the clear-sky fluxes change more due to temperature, water vapor, and surface albedo, which correlate with temperatures). So using the radiative kernels mentioned in Dessler10 to remove the bias will make the results slightly more positive, although not a whole lot.

    You can get the differing flux contributions I calculated between clear and all-sky for the various kernels from this post (GlobalFluxContributions2000-2010.txt):

    Radiative kernels, and more on the surface vs. atmospheric temperature feedback problem

    And then make the adjustments.

    Also, one reason I looked at for the difference between CERES vs. ERA clear-sky fluxes is that of the measured solar insolation. There was a discrepancy as I recall, where the difference in measured solar insolation actually caused a positive bias when subtracting clear-sky (from ERA, calculated based on its solar insolation) from all-sky (a different value for solar insolation). However, it could not account for the total discrepancy.

  11. Posted Sep 8, 2011 at 1:29 PM | Permalink

    isn’t it absurd that blog posts on “skeptic” blogs provide better replication information than “peer reviewed” articles in academic literature

    So, we have the benefits of peer review on one side and those of open code and data on the other. Which is going to win out? Given the quality of peer review in climate science? In the end all points of the spectrum will benefit but for now, thanks Steve, Troy and the rest of the technical climate bloggers. As it says, a little leaven leavens the lump.

  12. Molon Labe
    Posted Sep 8, 2011 at 1:37 PM | Permalink

    What happens if you combine ERA clear-sky and ERA all-sky?

  13. KnR
    Posted Sep 8, 2011 at 1:54 PM | Permalink

    It’s the sort of thing that happens in a rush job.

  14. Posted Sep 8, 2011 at 1:55 PM | Permalink

    Comments to any journal get buried, so that is a rather time intensive process for little gain. I encourage Steve to generate a new piece of independent scholarship to address issues in the recent papers, e.g. D10, D11, SB2011, etc.

    There is an advantage to being late to party as you get to see which cars are parked outside.

    • Geoff Sherrington
      Posted Sep 8, 2011 at 8:11 PM | Permalink

      Good one! You also get to find who’s in bed with whom in the back seat if you are game to peep.

  15. Steven Mosher
    Posted Sep 8, 2011 at 2:27 PM | Permalink

    Just posted this on realclimate.

    Looks like eric and martin vermeer and gavin agree that looking at alternative datasets is a normal course of due diligence

    Eric,

    “[Response: Mosher: There has never been any objection to the idea of ‘due diligence’ at RealClimate. The objection has been to the laughable and arrogant claim — repeated ad nauseum by you — that the idea of ‘diligence’ hasn’t occurred to anyone before, and to the offensive and unsubstantiated accusation that the mainstream scientific community have placed scientific diligence secondary to a perceived political agenda by the mainstream scientific community. –eric]”

    Dr. Steig a few points.

    If you read what I wrote carefully you will see that I’m not claiming that anyone at Real Climate “objects” to the “idea” of due diligence. Here is what I wrote:

    “Thank you. Over the course of the past four years quite a number of us have suggested just this type of due-diligence thing repeatedly. These suggestions which seem utterly normal to anyone who has had to work with messy datasets, conflicting datasets, and divergent models, have been routinely met with cat calls, insults, and challenges to “do your own damn science.”

    I don’t see any references there to real climate. What I am pointing out is simply this. In the past when people asked if certain due diligence was performed, those questions were met with the kind of responses I mentioned. You’ll note that I dont call those responses “objections” to the idea of due diligence. They are something else. They are not objections to the idea, they are objections to the person who raises the issue. That’s two entirely different things. Of course, that behavior is often taken as an objection to the idea itself. Hence, it’s good to see that clarified. The objection is to certain people raising the issue in a way that you don’t approve of or that makes you uncomfortable.

    Second, if you read carefully you will see that nowhere do I make the claim to be the “originator of this idea”. In fact, in all my writing I’ve given deference to the people who taught me. I have expressed surprise when I have, on occasion, found little documentary evidence of due diligence. By that I mean no documentary evidence that due diligence has been performed. It may have been performed, but in certain cases which interest me, I have on occasion seen no documentary evidence that it occurred. I am by no means the first person to notice this. I am glad that we can both agree that testing multiple datasets is one of the ordinary things you do as a part of due diligence. It’s a good day when we can agree on that. More on that later. Rest assured that both you and martin and gavin will get full credit for the idea of testing with alternative datasets.

    I will assume that you agree with Martin that looking at alternative datasets is normal due diligence and let him handle any arguments you have with that.

    Third, like you I do not think that general indictments of an entire community of scientists are well founded or useful. It’s rather like calling all skeptics Oil Shills. Some clearly are, others, well, not so clear. Moreover, my main focus has been on a few, very few, isolated cases. In those isolated cases my focus has been exclusively on the sociological aspects, and institutional aspects, not the political aspects. Let’s suppose I had somebody who had challenged an engineer to take a matlab class from him. I would never look to the political aspects of this. I would look at the institutional and sociological aspects to explain the phenomenon. In fact, you will find that is a common theme for me going back 4 years when I first noticed this rather odd dynamic. Frankly, I find politics and arguments about people’s politics boring.

    So peace dude. It’s a good day when we can agree on something.

    • Posted Sep 8, 2011 at 2:40 PM | Permalink

      Rest assured that both you and martin and gavin will get full credit for the idea of testing with alternative datasets.

      tee hee hee

      • Steven Mosher
        Posted Sep 8, 2011 at 3:11 PM | Permalink

        Yes, you can blame them for the stuff that Steve does. After all it would be arrogant and laughable for anyone at CA to claim credit for the idea of due diligence. RC endorses the idea of due diligence. They argue that looking at different datasets ( can anyone say no BCPs!!, i knew you could) is basic and obvious. All they object to is people like me pretending that I came up with the idea ( ha I stole it from Steve Mc). I would suggest that anyone who looks at alternative datasets should invoke the blessing of martin and gavin and eric.

        Peace to them all.

        • mpaul
          Posted Sep 8, 2011 at 3:33 PM | Permalink

          No, I think you’re missing their point. Everyone agrees that you should look at different data sets. This is basic and obvious. What’s novel and unique is that they advocate choosing the datasets that produce the lower R^2 value.

        • Steven Mosher
          Posted Sep 8, 2011 at 3:43 PM | Permalink

          you must be new to moshpit sarcasm. sorry.

        • mpaul
          Posted Sep 8, 2011 at 4:46 PM | Permalink

          I was being sarcastic back. I think I need to start using a sarc tag!

          To be literal: if they are claiming that Dessler looked at different data sets, then why would he have chosen the one with the lower R^2 value? Wouldn’t you choose the one with the highest possible value (even if it’s still pathetically low)? If Dessler did look at different data sets, then it raises the possibility that he cherry picked the data sets that best made his argument — the very thing he accused Spencer of doing. /literal

        • Posted Sep 8, 2011 at 4:53 PM | Permalink

          They aren’t different data sets. One is a measured set of clear-sky fragments which has been simple-mindedly extended to make up a pretend atmosphere. The other is a reanalysis, which makes the appropriate corrections.

        • Posted Sep 8, 2011 at 6:20 PM | Permalink

          Through the use of a model along with the assumptions it’s built on.

        • Steven Mosher
          Posted Sep 9, 2011 at 12:07 AM | Permalink

          ah tallbloke you like reanalysis data when it fits your purpose! You cannot have it both ways. you cannot, without close analysis, accept one part of this data and reject another. And if you accept the data you accept the physics (yes, radiative physics) used in the models.

          Dessler’s choice may well be defensible. since he cites another study there is at least a paper trail to follow and in the end some data to look at again. It’s all good.

          And if this is inconclusive then there is more data to collect. thats good too.

          Nothing here, however, will change the laws of energy balance and radiation. More CO2 warms the planet. Just say that and you’ll feel better and you’ll be able to get about the business of quantifying the role the sun plays, however small.

        • Willis Eschenbach
          Posted Sep 9, 2011 at 12:55 PM | Permalink

          Mosh, you say:

          More CO2 warms the planet. Just say that and you’ll feel better and you’ll be able to get about the business of quantifying the role the sun plays, however small.

          I love how you take what we have spent years trying to decide (whether more CO2 warms the planet, or whether it is basically neutralized by homeostatic mechanisms and climate feedbacks) and merely assert it, as though having the Mosher stamp of approval makes a damn bit of difference … and as to whether other factors can offset every bit of theoretical CO2 warming, see the temperature history of the last 15 years. If CO2 warmed us during that time, where is the evidence?

          So my answer is no, Mosh. Repeating unverified claims might make you feel better, but I don’t feel better at all when I do that. Your claim is like saying “turning on the oven will warm the kitchen”, which is true if and only if my house doesn’t have a thermostat, which will completely negate the effect of turning on the oven. Saying “more CO2 warms the planet” is just as incomplete a claim as the idea that “turning on the oven heats the house”. It may or it may not, depending on the circumstances.

          Since you have not shown that the earth doesn’t have a thermostat, and since my work has provided various lines of evidence showing that the earth very likely does have a thermostat, your claim about CO2 is way, way premature. Do your homework first, and then make your claims.

          Shakespeare is reputed to have said, “There are more homeostatic mechanisms in heaven and earth, Horatio, that are dreamt of in your philosophy”. As a friend of mine once remarked, just say that and you’ll feel better, Mosh, and you’ll be able to get about your business …

          w.

        • Steve McIntyre
          Posted Sep 9, 2011 at 1:28 PM | Permalink

          Willis and Mosh, it is an editorial policy of this blog that the “big question” of CO2 not be debated in short paragraphs on OT threads. Otherwise every thread quickly looks the same.

        • Willis Eschenbach
          Posted Sep 9, 2011 at 9:06 PM | Permalink

          Steve, my bad, thanks for the reminder.

          w.

        • Steven Mosher
          Posted Sep 10, 2011 at 6:23 PM | Permalink

          Noted.

        • Steven Mosher
          Posted Sep 8, 2011 at 5:08 PM | Permalink

          No, Martin is making a point about Spencer. Spencer used only Hadcrut.

          martin is saying two things:

          1. he has no trouble with spencer using hadcrut.
          2. checking other datsets is obvious due diligence.

          I am focusing on the second point trying to find a grounds for agreement. Its not that hard. It’s obvious standard proceedure when you have multiple datasets measuring the “same” thing or closely similar things to do some basic tests.

          Namely: does my choice of dataset impact the results. Even when one has a clear case that a dataset is superior I still think it makes good common sense to check.

          So my point, my only point, is that finally we have a clear statement on this issue of checking other datasets to ascertain the uncertainty due to dataset selection.

          In general. I think that both spenser and dessler should examine the relevant datasets.
          run their analysis on the datasets, report the results, and then argue the merits of selecting one dataset over another. or show the insensitivity to the dataset.

          I suspect that people will now disagree with this clear principle.

        • Robin Melville
          Posted Sep 9, 2011 at 1:58 AM | Permalink

          Surely the finding here is that there *is* no result. It seems that whatever datasets are used the adjusted correlation coefficient is achingly close to zero. It’s a little bizarre that anyone would seek to justify any kind of conclusions from these data — except that there isn’t a significant correlation.

        • Posted Sep 9, 2011 at 2:29 AM | Permalink

          “Surely the finding here is that there *is* no result.”

          Yes, pretty much. Here’s what Dessler says:
          “Obviously, the correlation between ΔR_cloud and ΔT_s is weak (r2 = 2%), meaning that factors other than T_s are important in regulating ΔR_cloud. An example is the Madden-Julian Oscillation (7), which has a strong impact on ΔR_cloud but no effect on ΔT_s. This does not mean that ΔTs exerts no control on ΔR_cloud, but rather that the influence is hard to quantify because of the influence of other factors. As a result, it may require several more decades of data to significantly reduce the uncertainty in the inferred relationship.”

        • HAS
          Posted Sep 9, 2011 at 4:19 AM | Permalink

          And were we inferring a negative or positive feedback?

        • Posted Sep 9, 2011 at 4:48 AM | Permalink

          Dessler says positive, but:
          “Given the uncertainty, the possibility of a small negative feedback cannot be excluded.”

        • HAS
          Posted Sep 9, 2011 at 5:48 PM | Permalink

          So he’s really saying “I inferred a positive feedback, but found no statistical evidence for it”.

          The rest of what he says is just wishful thinking (“This does not mean that ΔTs exerts no control on ΔR_cloud, but rather that the influence is hard to quantify because of the influence of other factors. As a result, it may require several more decades of data to significantly reduce the uncertainty in the inferred relationship.”)

          The lack of statistical relationship begs a range of other interpretations that D should perhaps have speculated on.

        • Posted Sep 9, 2011 at 7:29 PM | Permalink

          No, in Dessler’s argument nothing depends on it being positive. He made an estimate. Numbers have signs. His best estimate here turned out to be positive.

        • HAS
          Posted Sep 9, 2011 at 9:23 PM | Permalink

          It isn’t clear what part of my comment you are saying “No” to.

          No, he did find a statistical relationship?

          No, the rest wasn’t wishful thinking?

          No, there was nothing else in this result he should have reflected on?

          In fact his final reflection was “My analysis suggests that the short-term cloud feedback is likely positive.”

          I assume this is something you would likely agree with?

        • Posted Sep 9, 2011 at 9:49 PM | Permalink

          I was referring to your first sentence. Dessler found a regression coefficient that was positive, but had an uncertainty range that included zero. He noted this – it has no significance for his argument.

        • HAS
          Posted Sep 9, 2011 at 10:04 PM | Permalink

          This has run its course, but I’ll just say that the lack of significance has much significance for his argument – namely that “short-term cloud feedback is likely positive”.

          Robin Melville was right to call D10 out on this, and I’m surprised you tried to (rather persistently) argue it was all OK.

        • Neu Mejican
          Posted Sep 9, 2011 at 11:27 PM | Permalink

          I am focusing on the second point trying to find a grounds for agreement. Its not that hard. It’s obvious standard proceedure when you have multiple datasets measuring the “same” thing or closely similar things to do some basic tests.

          Namely: does my choice of dataset impact the results. Even when one has a clear case that a dataset is superior I still think it makes good common sense to check.

          Not necessarily the appropriate approach. When you have a clearly superior data-set/test, all you do by running the inferior test is to increase your uncertainty about the truth of the matter. Bayes looms large here.

        • MikeN
          Posted Sep 8, 2011 at 7:36 PM | Permalink

          I’m still wondering if this was the impetus for Scafetta’s rejection of RC’s request for code, suggesting they take a course in wavelets.

    • Posted Sep 8, 2011 at 2:48 PM | Permalink

      Steven Mosher (Sep 8 14:27) to Eric Steig:

      So peace dude. It’s a good day when we can agree on something.

      “And the fruit of righteousness is sown in peace of them that make peace.”

      I can’t yet see this irenic contribution over at RC. No doubt the moderators are struggling manfully, wondering how to deal with it. But I read some of the earlier stuff. Respect.

      • Steven Mosher
        Posted Sep 8, 2011 at 3:25 PM | Permalink

        It might get more interesting. we’ll see

        • TimTheToolMan
          Posted Sep 10, 2011 at 8:21 PM | Permalink

          RC usually only posts a few articles per month. Their MO in the case of an article that “goes wrong” is to quickly post another to draw attention away. This is no exception…

    • Steve Fitzpatrick
      Posted Sep 8, 2011 at 7:14 PM | Permalink

      Mosher,
      “the offensive and unsubstantiated accusation that the mainstream scientific community have placed scientific diligence secondary to a perceived political agenda by the mainstream scientific community”

      Don’t know how you didn’t take that bait. You must be a clever fellow.

      • Steven Mosher
        Posted Sep 8, 2011 at 10:05 PM | Permalink

        I found it odd that he would accuse me of saying anything about political agendas. Perhaps he is sensitive about having one. Perhaps he confused me with someone else. Perhaps he is confused. Perhaps someone else wrote it and signed eric’s name. Perhaps he didn’t mean it. Perhaps he didn’t think. Whoever wrote it is carrying a bag of shit. I feel no compulsion to take that bag or set it on fire. So I left him holding it. Sometimes the sight of someone left holding the bag is instructive. At least for others.

    • Posted Sep 9, 2011 at 8:47 AM | Permalink

      Threading can make it difficult to follow a sequence of comments, at times. This comment is a follow-up to a comment that Steven Mosher originally submitted to RealClimate. He cross-posted it to CA, above, where it is timestamped Sep 8, 2011 at 2:27 PM. Mosher prefaced it with

      Just posted this on realclimate.

      Looks like eric and martin vermeer and gavin agree that looking at alternative datasets is a normal course of due diligence.

      RC’s moderators have now allowed Mosher’s comment; it is in position #104 as of this writing. Mosher begins by quoting an earlier response by Eric Steig.

      Eric,

      “[Response: Mosher: There has never been any objection to the idea of ‘due diligence’ at RealClimate. The objection has been to the laughable and arrogant claim — repeated ad nauseum by you — that

      [snip]

      –eric]

      Dr. Steig a few points.

      If you read what I wrote carefully

      [snip]

      Comment by steven mosher — 8 Sep 2011 @ 2:25 PM

      Here is Eric Steig’s inline response to what Mosher wrote.

      [Response: Steve: You miss my point, largely. There has never been any ‘objection to the person’ raising ideas. The objection has been to the crap that accompanies it too often. And once again you provide a nice example of this: Amidst the sober sounding language of your reply to me is yet another boring reference to the Jeff Id “matlab affair”. [Fact: my statement about matlab was not ‘challenge’. It was a response to a snide and inaccurate accusation by Jeff.] In other words, you have once again chosen to place your otherwise reasonable points in the context of cajoling language, with the obvious goal of point scoring. MY point, once again, is that your claim to have ‘finally been heard’ with respect to ‘due diligence’ is simply self-aggrandizing.–eric]


      Steve Mc:
      The “matlab” exchange is here

      jeff Id says:
      4 Feb 2009 at 11:21 PM

      A link to my recent post requesting again that code be released.
      [edit]
      I believe your reconstruction is robust. Let me see the detail so I can agree in public.

      [Response: What is there about the sentence, “The code, all of it, exactly as we used it, is right here,” that you don’t understand? Or are you asking for a step-by-step guide to Matlab? If so, you’re certainly welcome to enroll in one of my classes at the University of Washington.–eric]

      The linked code pointed to a subroutine that was used in Steig’s calculation but the subroutine did not constitute ALL the code. Nor as of Feb 4, 2009 was Steig’s data fully available. With the benefit of hindsight, I’m wondering what precisely within the jeff Id comment Steig considered to be “snide and inaccurate”? Did Steig take umbrage at the suggestion his reconstruction was “robust”?

      • glacierman
        Posted Sep 9, 2011 at 9:31 AM | Permalink

        Maybe Eric needs to be challenged to take a congeniality course.

        • Steven Mosher
          Posted Sep 9, 2011 at 12:08 PM | Permalink

          I would rather not personalize it to eric. You’ll notice over the course of time a pattern to these discussions. In the end it comes down to the way we said things or the way we asked for things. we ask for them in public, we speak snidely or sarcastically or impertinently. we don’t ask nicely.

          If they want to provide us with a standard form to request stuff that would work for me.

      • Steven Mosher
        Posted Sep 9, 2011 at 6:36 PM | Permalink

        If I have to guess I’d guess it was the use of the word “robust”.

        So, let’s grant Dr. Steig his point. Jeff was snide and sarcastic. evil horrid jeff.

        There are two ways to respond to that

        1. heap on kindness
        2. Give jeff what for between the eyes.

        Moshpit has NO PROBLEM with the good Dr. hitting Jeff back. I have no problem with Dr. Steig being rude to Jeff for no cause whatsoever. None. Not an issue.

        I just wanted to point out what seemed obvious. I never make political issues about this. Dr. Steig can be upset with me for many things, but making political statements aint one of my sins. Also, the funny thing is why would you “suggest” that an engineer take your matlab course.

        It’s not the rudeness on either party’s part. It’s the stupidity of that particular comeback. Oh ya, oh ya, rude boy, well maybe you should take my matlab course, and i’ll school YOU. pffft.

        Assume Jeff’s sarcasm is there. Fine. You dont hit back by suggesting that he take your matlab class. Duh. duh.. McFly!

        If you are going to be a jerk, be a creative jerk. That’s my motto. Why is it they miss the real criticism?

        why did he think my criticisms are political, when they really are about the paucity of wit. dunno.

        • TimTheToolMan
          Posted Sep 10, 2011 at 8:36 PM | Permalink

          I think it’s all too common for all skeptics to be bunched together and tarred with the same brush…that all skeptics believe “x” or all skeptics are politically oriented toward “y”.

          It seems to be all AGWers that do this.

          😉

        • steven mosher
          Posted Sep 11, 2011 at 12:26 PM | Permalink

          ha.

          Yes. Here is something I suggest. I suggest that skeptics and lukewarmers all try to find something, anything they can agree upon with AGWers.

          When people react to you “categorically”, that is, with a stereotypical framework, nothing works better to unsettle them, to open their minds, to snap them out of it than when an opponent finds a place of agreement and doesn’t react how they expect. This can lead to

          1. reciprocity
          2. a better dialogue
          3. some huge Fail on their part ( witness eric)
          4. more careful thinking on their part.

          So try it. Find the agreement and sit back and watch. fascinating

        • Posted Sep 11, 2011 at 1:48 PM | Permalink

          In my defense, there was no intended sarcasm at that time although I can see how it could be taken that way. He later wrote he didn’t want to release his code because it was not clean and in separate files which he didn’t feel comfortable with releasing.

    • Robert Thomson
      Posted Sep 9, 2011 at 12:45 PM | Permalink

      In climate science, peer review is carried out by a coterie of pals … very sad!

  16. Steven Mosher
    Posted Sep 8, 2011 at 2:33 PM | Permalink

    Steve,

    I’m shocked that you would “do your own damn science”

    shocked.

    You know over at RC they were discussing the decision to try different datasets.

    Martin Vermeer writes:

    “I see absolutely nothing wrong with SB11′s use of HadCRUT temperature data for their regressions”

    But Russ, doesn’t it make you wonder why they chose it? The paper doesn’t say. Where is your curiosity? 🙂

    Dessler clearly wonders: “… they plotted … the particular observational data set that provided maximum support for their hypothesis.”

    BTW trying alternatives like this gives you one more handle on the real uncertainty of the observational curve, in addition to the formal sigmas. A due-diligence thing, and rather obvious.

    Comment by Martin Vermeer — 7 Sep 2011 @ 9:50 AM

    @ Martin Vermeer,

    ####

    So there you have it. trying different datasets and documenting what you find is rather obvious and a due diligence thing. I wonder if Dessler tried what you tried?
    due diligence thing. rather obvious.

    • Posted Sep 8, 2011 at 3:00 PM | Permalink

      Mosh, it’s hard to work out who’s saying what here. But trying alternative datasets and documenting what one finds is surely of fundamental importance, especially as all the data become increasingly open. It’s as if the old way of peer review and publishing selective pieces of work in respected publications has become an ornate, baroque dance when what’s needed is the speed and team work of ice hockey. Even the odd brawl would seem a price worth paying. The blogosphere does so much of this best.

      • Charlie Hart
        Posted Sep 8, 2011 at 3:13 PM | Permalink

        I think what Mosh is trying to point out is that there is a real difference between “due diligence” and fishing for the best data set prior to “Starting” your investigation.

        Due diligence would be picking the best data set based on a set of criteria established before doing any tests on the data set.

        We see this all the time in the medical field. There are multiple pools of retrospective patient data. You have to pick the pool of patients prior to doing your hypothesis testing, otherwise you will be tempted to pick the 1 of the 7 patient pools that happens to make the case you are trying to make.

        • Posted Sep 8, 2011 at 3:21 PM | Permalink

          Absolutely, in medicine and pharma double blind tests are de rigueur. With satellite data on cloudiness, SST and the rest, either the available time periods are too short or there are not enough alternative datasets to make this sort of thing possible. But showing results for all alternatives is minimal for due diligence. I assume we’re all agreeing on that.

        • Charlie Hart
          Posted Sep 9, 2011 at 12:24 PM | Permalink

          If there are multiple data sets available then you either need to show the results for all data sets (separate, combined, etc.) or you need to come up with a selection criterion beforehand to decide which data set you are going to use, and then only use that data set.

          You CAN NOT pick the data set based on which gives the “best” results after the fact.

          Given wide confidence intervals, the chances of making a type I error approach 100% if you data set shop, i.e. seeing a difference when there really is not a difference (just noise); see the sketch below.

          I am talking about Large Retrospective Studies done on established data sets of patient populations. Not Prospective Randomized Placebo Controlled Trials.
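
          A minimal sketch of that arithmetic (assuming, for illustration, seven independent candidate data sets each tested at the 5% level):

          1-(1-0.05)^7 #about 0.30: the chance of at least one spurious "significant" result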

        • Steven Mosher
          Posted Sep 8, 2011 at 3:40 PM | Permalink

          Charlie here is what I’m pointing out.

          Spencer only looked at Hadcrut. Martin argues that looking at multiple datasets is obvious and normal due diligence.

          So, when criticizing Spencer the Team whips out this notion that Steve and others have mentioned in regards to many other studies: w/wo Tiljander, w/wo BCPs, Yamal, etc. Many times data analysts here have suggested that we don’t know the uncertainty due to data selection decisions. When that idea is expressed here WRT team science, Team science derides the individuals making the suggestion. Take your pick of insults. They never really address the issue, they move the pea, they shoot the messenger (“ask nice Mr Mcintyre”), they keep the data from you so you can’t do what they should have done. They challenge you to settle the issue using a machinery (peer review) that they dominate (not control, not manipulate.. but clearly dominate). The causality, of course, is the process of due diligence.

        • simon abingdon
          Posted Sep 9, 2011 at 4:57 AM | Permalink

          “casualty”?

        • Steven Mosher
          Posted Sep 9, 2011 at 12:09 PM | Permalink

          thank you simon.

        • Charlie Hart
          Posted Sep 9, 2011 at 12:28 PM | Permalink

          Right, you can not on one hand say we are making data selection decisions prospectively and __________ data sets fit that inclusion criteria, then on the other hand say you should run your test on all the data sets ahead of time as part of due diligence.

          The chance of making a type 1 error is just too large.

        • John Whitman
          Posted Sep 9, 2011 at 12:49 PM | Permalink

          It would be bizarre if Dessler could claim Spencer made a type 1 error about the findings of Dessler’s paper while at the same time Dessler could claim he (Dessler himself) made a type 2 error about his own paper?

          My head is exploding. : )

          John

      • Posted Sep 8, 2011 at 3:18 PM | Permalink

        I think what Mosh is saying is: Soon there will be some major own-petard hoisting by.

        • Steven Mosher
          Posted Sep 8, 2011 at 3:42 PM | Permalink

          hehe. consider this post.

      • Steven Mosher
        Posted Sep 8, 2011 at 3:28 PM | Permalink

        Sorry first two comments are directed here

        Mosher: “Steve,

        I’m shocked that you would “do your own damn science”

        shocked.”

        ###############################

        Mosher: You know over at RC they were discussing the decision to try different datasets.

        #########################
        [ over at RC]

        Martin Vermeer writes:

        “I see absolutely nothing wrong with SB11′s use of HadCRUT temperature data for their regressions”

        But Russ, doesn’t it make you wonder why they chose it? The paper doesn’t say. Where is your curiosity?

        Dessler clearly wonders: “… they plotted … the particular observational data set that provided maximum support for their hypothesis.”

        BTW trying alternatives like this gives you one more handle on the real uncertainty of the observational curve, in addition to the formal sigmas. A due-diligence thing, and rather obvious.

        Comment by Martin Vermeer — 7 Sep 2011 @ 9:50 AM

        @ Martin Vermeer,

        ####

        Mosher:

        So there you have it. trying different datasets and documenting what you find is rather obvious and a due diligence thing. I wonder if Dessler tried what you tried?
        due diligence thing. rather obvious.

  17. Paul Linsay
    Posted Sep 8, 2011 at 2:52 PM | Permalink

    The problem with this analysis is that you are not using patented ClySy(tm) statistics. If you had used Mannian principal components the r^2 would be much better, regardless of its value.

    • Steven Mosher
      Posted Sep 8, 2011 at 3:31 PM | Permalink

      I think there should be an award called

      “A foolish thing to do”

      and every year we should award it to the paper in climate science with the lowest published metric for statistical significance.

      maybe somebody can come up with a pun on “Nobel” prize, and Josh can do a cartoon for the T-shirt the winner gets.

      • RuhRoh
        Posted Sep 8, 2011 at 3:40 PM | Permalink

        I think that a good name for it would be “LeBon”, for the very lowest value of statsig.

        How to avoid biting the tongue whilst cheekingwardly thrust?

        hoRhuR

        • Frumious Bandersnatch
          Posted Sep 9, 2011 at 11:29 AM | Permalink

          How about the “fauxbel” prize?

        • Steven Mosher
          Posted Sep 9, 2011 at 2:01 PM | Permalink

          lovely.. multi lingual no less.

          and the prize for the best Vietnamese cuisine is called the “Pho”-bel prize

      • Posted Sep 8, 2011 at 3:43 PM | Permalink

        There’s already the Ig Nobels, originally awarded for research “that cannot, or perhaps should not, be reproduced.”

      • timetochooseagain
        Posted Sep 8, 2011 at 3:44 PM | Permalink

        Isn’t what you are talking about basically what the “Ig Nobel Prize” is? It’s even a pun on Nobel…

        We could have a sort of climate sci version of that.

      • Posted Sep 8, 2011 at 4:25 PM | Permalink

        How about the ‘Real And Nebulous Data Operation Mangling’ prize?
        RANDOM for short. 🙂

        • Steven Mosher
          Posted Sep 8, 2011 at 10:17 PM | Permalink

          Careful tallbloke.. Scafetta might qualify. i play no favorites.

      • HaroldW
        Posted Sep 8, 2011 at 4:41 PM | Permalink

        I suggest the “Siggy” award.

        With apologies to Shaw:
        Some people see things as they are, and say ‘not significant’. But others dream things that never were, and say ‘significant’.

      • golf charley
        Posted Sep 8, 2011 at 5:13 PM | Permalink

        Steve Mosher

        Pun on Nobel, NoBull?

      • RomanM
        Posted Sep 8, 2011 at 6:31 PM | Permalink

        I suggest that the award be called the Mr. T Statistics Award after teaching the students some years ago how to carry out a “Mr. T-test”:

        Pity the poor fool who doesn’t think that this statistic is significant!

        • Posted Sep 8, 2011 at 6:36 PM | Permalink

          But what do you think the null hypothesis should be?

        • Posted Sep 8, 2011 at 6:44 PM | Permalink

          Nick:

          Suppose you just tell us what the null hypotheses should be — save us the guesswork.

        • Posted Sep 8, 2011 at 6:59 PM | Permalink

          No, if you’re proposing a T-test, you need to say what you are testing against.

        • Posted Sep 8, 2011 at 7:12 PM | Permalink

          I may be wrong but I see a transformation from T to Mr. T at play in Roman’s suggestion Nick. Is that A-Team or Rocky III as inspiration? As a mere Brit I know when I’m out of my depth – and past my bedtime. Goodnight.

        • RomanM
          Posted Sep 8, 2011 at 6:55 PM | Permalink

          When you’re dealing with Mr. T, whatever he wants it to be is OK with me.

          However, it just occurred to me that there was another “Mr. T” who decided last January to redefine” what the null hypothesis should be regarding extreme weather events. Perhaps, that was what you were thinking of? 😉

        • timetochooseagain
          Posted Sep 8, 2011 at 8:01 PM | Permalink

          The appropriate null hypothesis depends on the “experiment” so to speak. In the case of cloud feedback, I would say the appropriate null is zero. The same for every feedback except the Planck response, which is based on actual, indisputable physics. For that the null hypothesis is a slope of about 3.3 W/m^2 per Kelvin. For the total feedback, it’s the same null hypothesis as for the Planck response, since they are all zero except for it, in the null.
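
          (A back-of-envelope version of that Planck number, for illustration only; the crude blackbody estimate runs a little higher than the model-derived ~3.3 W/m^2/K, which accounts for atmospheric structure:)

          sigma=5.67e-8 #Stefan-Boltzmann constant, W/m^2/K^4
          4*sigma*255^3 #about 3.8 W/m^2/K at the 255 K effective emission temperature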

        • Posted Sep 8, 2011 at 11:44 PM | Permalink

          TTCA.
          That’s the point of my query. Dessler isn’t trying to show a difference from zero. He says:

          “Given the uncertainty, the possibility of a small negative feedback cannot be excluded. There have been inferences (7, 8) of a large negative cloud feedback in response to short-term climate variations that can substantially cancel the other feedbacks operating in our climate system. This would require the cloud feedback to be in the range of –1.0 to –1.5 W/m2/K or larger, and I see no evidence to support such a large negative cloud feedback [these inferences of large negative feedbacks have also been criticized on methodological grounds (24, 25)].”

          So a t-test vs zero wouldn’t help him. He’s just trying to show that the evidence doesn’t indicate a large negative feedback which would counter the positive wv feedback.

          And the r2 test, or others based on it, also if anything confirm his argument, which is that no such feedback can be shown.

        • Steven Mosher
          Posted Sep 9, 2011 at 12:11 AM | Permalink

          And the r2 test, or others based on it, also if anything confirm his argument, which is that no such feedback can be shown

          fair point.

        • Posted Sep 9, 2011 at 9:56 AM | Permalink

          A couple of comments:

          A) He assumes in that bit that the “other feedbacks” are not zero but substantially positive. That also has to be shown to be a statistically significant result, or his statement, again, has no meaning. But he doesn’t even explain what the “other feedbacks” are. Water Vapor? Well it must be more than that, unless he is butchering the English language, but to my knowledge that is the only feedback he has explicitly investigated until now. He has claimed that the water vapor feedback is statistically significant. Well, perhaps that should be looked at then?

          B) Based on the way he did his analysis, there is not evidence for a strong negative cloud feedback. This most certainly does not mean that there is not a strong negative cloud feedback, if his “experiment design” was ill posed to isolate the feedbacks. This is the point Roy has been making and I frankly think neither you nor Dessler has understood his points. I think there is a separate problem with his experiment design actually and the work showing this will hopefully be published soon.

          C) At any rate, the significance and sign of feedbacks he has found appear to be dependent on methodological choices, which makes it questionable whether his results could stand up to scrutiny.

        • Willis Eschenbach
          Posted Sep 9, 2011 at 1:07 PM | Permalink

          Posted Sep 8, 2011 at 11:44 PM | Permalink
          Nick Stokes said:

          … And the r2 test, or others based on it, also if anything confirm his argument, which is that no such feedback can be shown.

          No, Nick, the r2 test doesn’t do that, that’s a bridge way too far. It can only confirm the argument that no such feedback was shown by his analysis, not that no such feedback can be shown.

          w.

        • Steven Mosher
          Posted Sep 9, 2011 at 1:58 PM | Permalink

          even more precise.

        • Posted Sep 9, 2011 at 5:13 PM | Permalink

          Indeed so

      • Leo G
        Posted Sep 8, 2011 at 7:00 PM | Permalink

        Ignoble

      • jorgekafkazar
        Posted Sep 9, 2011 at 12:06 AM | Permalink

        But, Steven, shouldn’t the award go to the journal, instead? It’s relatively easy for an author to crank out a paper with low significance, or even with no published significance or error bars at all. Surely the real challenge is for a journal and its peer reviewers to find the gall to print it.

        • Steven Mosher
          Posted Sep 9, 2011 at 12:11 PM | Permalink

          Excellent point. It’s a good move away from over personalizing the issue as well.

  18. Posted Sep 8, 2011 at 3:38 PM | Permalink

    Of course blog posts are better than peer reviewed literature. Have read that myself in an editorial by some W Wagner of Vienna, Austria.

    • Posted Sep 8, 2011 at 4:09 PM | Permalink

      Yeah, very much in my mind too. But nothing as helpful as hyperlinks from Wolfgang, in case we should look it up and find it less than convincing.

    • Posted Sep 8, 2011 at 6:46 PM | Permalink

      I think, today is a new day dawning. Peer review took a giant and very public hit today. The blogosphere smoked GRL’s peer review. In this particular instance, if resignations aren’t occurring and op-ed pieces aren’t forthcoming, peer review will have ceded its legitimacy to blog threads.

      I’ve seen corrections. I’ve seen rebuttal papers. But, I’ve never seen a pre-release altered because of what was stated on the blogs. In this case, unless I’ve missed something, this is exactly what happened.

      • Posted Sep 8, 2011 at 7:03 PM | Permalink

        I think I’ve seen the same. What probability we’ve had the same hallucination? But it’s one thing for Dessler to take heed of the corrections of Spencer. Those of McIntyre … that would be something else.

      • Jeremy Harvey
        Posted Sep 9, 2011 at 3:19 AM | Permalink

        suyts, this post is about Dessler 2010 – Dessler, A.E., A determination of the cloud feedback from climate variations over the past decade, Science, 330, DOI: 10.1126/science.1192546, 1523-1527, 2010. Not the more recent Dessler 2011 – Cloud Variations and the Earth’s Energy Budget, Geophys. Res. Lett., 2011, in press (preprint here). So it is not GRL’s refereeing that is being questioned by this post – but Science’s.

  19. Rattus Norvegicus
    Posted Sep 8, 2011 at 3:44 PM | Permalink

    Steve, did you read the papers cited to justify his use of ERA data? If so, do you feel they support his claims? If you do not feel that they support his claims how do you justify this belief?

    • Posted Sep 8, 2011 at 4:15 PM | Permalink

      Rattus,

      From what I recall when I originally read the references, they discussed how clear-sky measurements were associated with drier conditions than the all-sky. This, of course, should have almost no effect on the SW results (I say almost none because there is a tiny SW water vapor contribution), but you still get significant differences in the SW estimates using the CERES obs (much more negative feedback, as shown in my guest post at the Blackboard mentioned above).

      Furthermore, the dry-condition “bias” serves to shrink the total CRF by adding to the LW portion, which is in the opposite direction of the larger SW portion. This would then slightly bias the feedback towards the positive (making it appear as though hotter conditions, with more water vapor, were shrinking the CRF due to larger LW trapping associated with clouds when there is actually no change in cloud properties).

      So, if there is any bias using the CERES clear-sky, it should bias the cloud feedback towards the positive, and wouldn’t explain the differences we see here. Furthermore, using a different dataset for clear-sky than all-sky can cause other biases, such as the differences from measured solar insolation (from which the ERA-interim clear-sky fluxes are forecasted).

      From what I read of the papers, they don’t explain what’s going on here, and absent other references or data I think it is a stretch to say that the ERA-interim is “better” than CERES.

    • Layman Lurker
      Posted Sep 8, 2011 at 4:25 PM | Permalink

      Troy_CA comments on this matter in his July 8th blog post where he performed much the same analysis as Steve has done here.

      Dessler10 mentions that he does the opposite because of bias in the measurements: “And, given suggestions of biases in measured clear-sky fluxes (22), I chose to use the reanalysis fluxes here.” The paper referenced there is Sohn and Bennartz in JGR 2008, “Contribution of water vapor to observational estimates of longwave cloud radiative forcing.” However, that paper only refers to bias in the clear-sky LW radiation calculations.

      After demonstrating that the feedback estimate is dominated by the SW component rather than LW, Troy comments further:

      But I’m doubtful that the LW result should be discounted based on measurement bias anyhow. For one, the SB08 paper refers to bias in the absolute calculation of CRF, not necessarily to the change in CRF, and the effect is minimal there (around 10% of only the OLR). Second, the bias should affect it in the opposite direction – it would make the cloud feedback appear more positive, not negative. From the SB08 paper, they mention: “As expected, OLR fluxes determined from clear-sky WVP are always higher than those from the OLR with all-sky OLR (except for the cold oceanic regions) because of drier conditions over most of the analysis domain.” Obviously, clear-sky days don’t prevent as much OLR from leaving the planet as cloudy days, and SB08 estimates that about 10% of this effect is from water vapor instead of all of it being from clouds. So, warmer temperatures should increase water vapor, which will be more prevalent on the cloudy days vs. the clear sky days, which in turn will make it appear that clouds are responsible for trapping more OLR than they actually do. In other words, the bias includes some of the positive feedback due to water vapor – which is already counted elsewhere – in the estimation of cloud feedback. Thus, if we are to take into account the bias, we have slightly underestimated the magnitude of the negative cloud feedback.

      This type of sensitivity analysis is a paper killer. I have only skimmed Dessler ’10, but if his throwaway comment was all that was done to rationalize not running the analysis with CERES clear sky then this is a head shaker. And yes indeed Steve, what kind of peer review process would not ask this obvious question?

      • Posted Sep 8, 2011 at 4:48 PM | Permalink

        LL,
        Dessler did not make a throwaway comment. He justified it rather carefully:
        “In a reanalysis system, conventional and satellite based meteorological observations are combined within a weather forecast assimilation system in order to produce a global and physically consistent picture of the state of the atmosphere. I used both the ECMWF (European Centre for Medium-Range Weather Forecasts) interim reanalysis (18) and NASA’s Modern Era Retrospective analysis for Research and Applications (MERRA) (19) in the calculations. The fields being used here (mainly water vapor and temperature) are constrained in the reanalysis by high-density satellite measurements. Previous work has shown that ΔR_clear-sky can be calculated accurately given water vapor and temperature distributions (20, 21). And, given suggestions of biases in measured clear-sky fluxes (22), I chose to use the reanalysis fluxes here.”

        He explains what is wrong with clear-sky, and why he expects reanalysis to fix it. Peer review did not have to ask the question. The answer is there.

        Troy says that fixing the bias should go in the other direction. But it didn’t. That needs checking.

        • Posted Sep 8, 2011 at 5:06 PM | Permalink

          Interestingly, reference 20, which shows that “R_clear-sky can be calculated accurately given water vapor and temperature distributions”, uses a set of observations to validate their models: the CERES clear-sky observations.

          Most of your quote deals with explanation of the reanalysis systems, which are also used with the radiative kernels to correct other biases. And yes, it is interesting that models can calculate the absolute OLR for clear-sky (as validated against CERES clear-sky), but then the question is why not just use the CERES clear-sky in the first place? The only answer to that specific question seems to be the last sentence and reference, which I’ve already explained does not necessarily hold up here.

          Furthermore, as mentioned above, using two different datasets, even if they are accurate on an absolute level, can cause biases when you are working with changes on a magnitude far less than that “absolute” level, particularly when they are working with different estimates of solar insolation!

          I’m not blaming Dessler for using ERA-interim for clear-sky, but given what we know now, I don’t think it is a better choice than the simple CERES clear-sky bundled with the all-sky.

        • Posted Sep 8, 2011 at 5:34 PM | Permalink

          Troy,
          The reanalysis is validated against CERES clear-sky. But they would validate under the appropriate conditions – namely the dry air of the clear sky patches.

          What the reanalysis can then do is make the moisture correction so the humidity is representative of the whole atmosphere, not just the clear bits. I don’t know for sure that they do this, but I would expect so, since Dessler says that they are using water vapor distributions. Then the reanalysis has a great advantage.

        • Posted Sep 8, 2011 at 5:57 PM | Permalink

          Nick,

          If indeed all else was equal between the ERA-interim analysis and CERES clear-sky, and the ERA-interim analysis uses the same methods/models described in those references, and it indeed performed the humidity adjustment, then obviously it would be better and an improvement on CERES.

          But based on the differences we see here (particularly with the SW fluxes), obviously it isn’t the case. Since we’re calculating CRF as the difference between clear-sky and all-sky fluxes, ANY difference in those two datasets is going to show up in the estimated cloud forcing, including their different estimates of solar insolation (which has nothing to do with clouds). The magnitude of the changes in flux is far smaller than the magnitude of the total flux, so you would expect using two different datasets to have a lot more noise unrelated to the CRF. Note that if there is ANY flux calculation bias in either of the two datasets unrelated to clear-sky vs. all-sky, it WILL show up in the CRF, whereas if you use the same dataset, even if a flux calculation bias is present, it will NOT show up in CRF unless it is related to clear-sky vs. all-sky.
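
          The arithmetic of that last point is easy to illustrate with a toy R example (the biases b1 and b2 below are hypothetical numbers, not estimates of any instrument’s error):

          # Toy illustration: a flux bias common to all-sky and clear-sky cancels
          # in CRF when both come from one dataset; with two datasets the
          # difference of their biases lands directly in the CRF.
          set.seed(1)
          all_true <- rnorm(120)   # stand-in all-sky flux anomalies
          clr_true <- rnorm(120)   # stand-in clear-sky flux anomalies
          b1 <- 0.5; b2 <- -0.3    # hypothetical calibration biases
          crf_same  <- (all_true + b1) - (clr_true + b1)  # same dataset: bias cancels
          crf_mixed <- (all_true + b1) - (clr_true + b2)  # mixed datasets: b1 - b2 remains
          max(abs(crf_same  - (all_true - clr_true)))     # 0
          max(abs(crf_mixed - (all_true - clr_true)))     # 0.8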

          Do I think Dessler should have had better reasons for switching to ERA-interim? Honestly, it’s not that interesting to me…what interests me is that clearly the CERES-only result is 1) important and 2) probably a better estimate.

        • Posted Sep 8, 2011 at 6:34 PM | Permalink

          “But based on the differences we see here (particularly with the SW fluxes), obviously it isn’t the case. Since we’re calculating CRF as the difference between clear-sky and all-sky fluxes, ANY difference in those two datasets is going to show up in the estimated cloud forcing, including their different estimates of solar insolation (which has nothing to do with clouds).”
          Well, that’s a much more sophisticated argument than we see in this CA head post, which just pitches it as a choice between two data sets. Which of course the crowd then picks up with chants of cherry-picking.

          I agree that the reanalysis corrects one major thing, but brings in other differences. And probably Dessler should have said more about that. I can’t at the moment see how the SW flux contrast makes that “obviously not the case”. But I’ll read your other posts at your blog more carefully and see if I can figure it out.

        • Steven Mosher
          Posted Sep 8, 2011 at 5:25 PM | Permalink

          ‘And, given suggestions of biases in measured clear-sky fluxes (22), I chose to use the reanalysis fluxes here.”

          “Given suggestions” does not sound like a robust justification.

          You get a better result if Dessler looks at both, notes the opposite sign, and drives the discussion back to the bias question. Which is where we are now. As it stands, he makes a choice based on a suggestion and gets one answer. Ignoring that suggestion, we get a different answer.

          it’s suggested that BCP be avoided, after all.

          This is not about the precise issue at play here. This is about the approach to analysis.

        • Posted Sep 8, 2011 at 5:38 PM | Permalink

          ““Given suggestions” does not sound like a robust justification.”
          That’s scientist-speak. The paper he refers to is quite definite.

          “The corresponding CRF change forced by these WVP changes is about 2 W m−2 in a zonal mean sense. Highest values occur in the midlatitudes of the northern hemisphere in which a magnitude up to 6 W m−2 is shown.”

        • Steven Mosher
          Posted Sep 8, 2011 at 10:13 PM | Permalink

          Sorry, I don’t find those words definite in any regard. I find them as suggestive as the suggestive comment that refers to them.

          no cookie

        • Layman Lurker
          Posted Sep 8, 2011 at 5:35 PM | Permalink

          He explains what is wrong with clear-sky…

          C’mon. There is no explanation of any kind. Just a vague suggestion of bias to justify dismissing a data set which would radically change the central conclusion of the paper. It is Troy who has explained the nature of the bias set out by SB08. No way reviewers should have let him off the hook.

        • Posted Sep 8, 2011 at 5:49 PM | Permalink

          LL,
          Again, ERA is not a dataset. It is a reanalysis, substantially based on CERES.

          Troy did not discover SB08. He followed Dessler’s reference. That’s how it’s meant to work. Dessler doesn’t have to write it out again.

          He makes the reason for not using CERES clear-sky clear elsewhere.
          “ΔCRF is the change in TOA net flux anomaly if clouds were instantaneously removed, with everything else held fixed …”
          CERES clear-sky does not hold everything else fixed. It substitutes the surrounding clear-sky atmosphere. To implement the definition of ΔCRF, he needs a reanalysis.

        • Layman Lurker
          Posted Sep 8, 2011 at 6:46 PM | Permalink

          CERES clear-sky does not hold everything else fixed. It substitutes the surrounding clear-sky atmosphere. To implement the definition of ΔCRF, he needs a reanalysis.

          Nick, if the nature of the bias is as Troy has explained, then there is no basis for outright dismissal of CERES clear sky. On the contrary, Dessler has an obligation to explore whether his conclusions are sensitive to this substitution.

        • Layman Lurker
          Posted Sep 8, 2011 at 7:15 PM | Permalink

          Troy did not discover SB08.

          I never said he did. You stated that Dessler “explained” what was wrong with CERES clear sky when he did no such thing. Had Dessler provided an “explanation” he would have done something similar to Troy.

        • Posted Sep 9, 2011 at 12:02 AM | Permalink

          LL,
          He said
          “And, given suggestions of biases in measured clear-sky fluxes (22), I chose to use the reanalysis fluxes here.”
          That’s explaining why he chose ERA. (22) is SB08, which contains the details. That’s pretty standard for scientific articles. And Science doesn’t give you space to repeat what you can refer to.

  20. DocMartyn
    Posted Sep 8, 2011 at 3:55 PM | Permalink

    So Steve, you have just found out that the groundbreaking hominid has a human skull, the lower jaw of a Sarawak orangutan and chimpanzee fossil teeth; all of which have been boiled in Ferric/Chromic acid.
    Well done.

  21. Posted Sep 8, 2011 at 4:19 PM | Permalink

    Steve,
    This is a curious post. You’ve quoted Dessler’s reasons for not using the CERES clear-sky numbers. But you’ve made no attempt to deal with them. You’ve just gone ahead and done what he said there was a reason for not doing.

    Troy, in his Blackboard post, dealt with one of those. If you work out a whole clear sky on the basis of the clear parts that you can see, then you have a water vapor bias. Where the sky is clear, the air is usually drier, and this affects outgoing OLR. If you just simply extrapolate from those clear parts to a whole atmosphere, it is a much drier atmosphere. That’s the focus of the Sohn and Bennartz paper that he cited. It’s paywalled, so I’ve only seen the abstract so far, but the biases they calculate are large. An average forcing bias of 2 W/m2, with some major regions up to 6 W/m2.

    Some adjustment has to be made for this. I don’t know the details, but it is likely that the appropriate adjustment has been made in the ERA recalc. That would be a very good reason for Dessler to use those figures.

    • Posted Sep 8, 2011 at 4:27 PM | Permalink

      Nick,

      It looks like we cross-posted. See my comment above:

      More on Dessler 2010

      I don’t believe this explains the difference, as the bias would be in the opposite direction.

  22. TomRude
    Posted Sep 8, 2011 at 5:07 PM | Permalink

    Note to Steve McIntyre and Roy Spencer: if you guys continue to help Dessler write his paper, perhaps you should request co-authorship or at least be included in the acknowledgement section with regard to specific points! Really, as much as it is making science progress, it is naive at best to kindly serve people who have done everything in their own power to demean and attack you! In the end it also shows how peer-reviewed scientific journals are an obsolete means of doing science, since the blogosphere is doing it much faster and with more agility.

    • geo
      Posted Sep 8, 2011 at 5:44 PM | Permalink

      It’s certainly clear from the interactions between Dessler and Spencer published publicly over the last year that Dessler is capable of snark and of being a little free with his elbows under the backboard.

      And yet, from what I can see, he’s been much more willing to engage in something approaching a real scientific debate in an iterative and co-operative manner than those generally numbered amongst “the Team”. That should be encouraged.

      That he hasn’t surrendered horse and foot to Spencer should surprise no one. That he clings to his context re the supremacy of ENSO should surprise no one. It’s still early innings on this subject, why should he?

      • Luther Wu
        Posted Sep 8, 2011 at 9:04 PM | Permalink

        That’s my take.
        Kudos to Dessler.

      • TomRude
        Posted Sep 8, 2011 at 10:33 PM | Permalink

        Regardless of the science that he will ultimately defend, let’s hope Dessler will have the elegance of acknowledging the exchange and the input from both Spencer and Steve.

  23. Kenneth Fritsch
    Posted Sep 8, 2011 at 5:18 PM | Permalink

    “LL,
    Dessler did not make a throwaway comment. He justified it rather carefully:

    ..He explains what is wrong with clear-sky, and why he expects reanalysis to fix it. Peer review did not have to ask the question. The answer is there.

    Troy says that fixing the bias should go in the other direction. But it didn’t. That needs checking.”

    I think LL gets the throwaway comment from being aware of Troy’s comment. It will be very interesting where this discussion ends, or at least, leads.

    Sometimes you do sensitivity studies to show differences in results instead of similar ones. If the source of the data used makes large differences in results, science would advance from knowing that difference.

    • Layman Lurker
      Posted Sep 9, 2011 at 11:58 AM | Permalink

      It will be very interesting where this discussion ends…

      A circle has no end. 😉

  24. timetochooseagain
    Posted Sep 8, 2011 at 5:54 PM | Permalink

    Reanalyses can have unknown sources of bias, especially when new sources of data are added in, as discontinuities can arise. How confident are we that there are no biases in the two reanalyses that Dessler used that might impact his results?

    This is an honest question, as I am not sure what sort of uncertainties there might be in them that may make them problematic also. It isn’t just the CERES data that could have problems.

    • Layman Lurker
      Posted Sep 8, 2011 at 8:33 PM | Permalink

      I have no problem with Dessler’s use of ERA reanalysis. What I have a problem with is a paper which fails to report the contradictory results of substituting CERES clear sky. If there is an explanation then fair enough. The only explanation I have seen so far is Troy’s, which seems pretty straightforward in showing the bias would not account for the contradictory feedback estimate.

      • timetochooseagain
        Posted Sep 8, 2011 at 11:00 PM | Permalink

        I don’t have a problem with it either. I just want to know, since its use is evidently very important for the results, what the uncertainties in ERA are that could cause one to question its reliability. All datasets have uncertainties and potential for error. It’s always worth discussing them, IMHO.

        Actually, I’d want to know about the uncertainties in ERA with or without it “mattering” if it is used.

        • Posted Sep 9, 2011 at 12:15 AM | Permalink

          Yes, that’s the right line of inquiry. Not why he didn’t use CERES clear-sky – he’s explained convincingly that it’s not appropriate. But whether ERA (actually ECMWF reanalysis and MERRA) is adequate.

  25. OliverS
    Posted Sep 8, 2011 at 7:19 PM | Permalink

    Steve: Dessler2010 gives the uncertainty in his slope as 0.74 W/m2/K, not 0.94 as you have above.

  26. ausiedan
    Posted Sep 8, 2011 at 9:26 PM | Permalink

    Am I missing something here?
    There seem to be several major points that have gotten lost in all the hubbub:
    (1) this analysis is based on a very short period of data, which I thought all sides had (previously) agreed was too short to yield anything significant about the climate.
    (2) The error terms for the correlation are very large relative to the trend – far too large to “yield anything significant about the climate”.
    (3) the R squared statistic is too hopelessly close to zero to “yield anything significant about the climate”.
    (4) a visual appraisal of the data would at the very first glance indicate that it has too much scatter in it to “yield anything significant about the climate”.

    So what’s this all about?
    Why the published papers in peer reviewed journals?

    The only faintly interesting thing is that these examinations of data provide no evidence supporting the CO2 contention.
    The null hypothesis remains untarnished by this weak onslaught.
    (I’m sure somebody else could put that much better).

  27. Posted Sep 8, 2011 at 9:52 PM | Permalink

    I’m amazed that such subtle variations in the data make such a big difference. I suppose that is an indicator of the extreme noise for any conclusion.

    • Posted Sep 9, 2011 at 7:18 AM | Permalink

      Yes, it is really just another way of showing that Dessler’s claimed positive feedback is completely meaningless. Steve had already demonstrated that yesterday by showing that the adjusted r^2 is only 0.01 and that the sign changes if you add a lag. Today he’s showing the same thing a different way. So there is no reason to be amazed. It’s clear from Steve’s plots that drawing a line through the cloud is nonsense and the numbers confirm it.

      On this point I’d like to add one minor thing. If you look at Dessler 2010 fig 2A, the axes have been squashed down with an aspect ratio just over 2. This makes the straight line through the data look reasonable. But when you plot it with an aspect ratio near 1 as Steve has done you see it’s pretty much a circular cloud with no linear relationship.
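
      The effect is easy to reproduce with the cld and had series from the head post – same data, same fit, two panel shapes (a sketch, not Dessler’s plotting code):

      # Square plot region versus the default device-shaped region; on a
      # wide device the second panel reproduces the flattened look of Fig 2A.
      fm <- lm(cld ~ had)
      x <- as.numeric(had); y <- as.numeric(cld)
      op <- par(pty = "s")   # square region: near-circular cloud
      plot(x, y, xlab = "T anomaly (K)", ylab = "CLD (W/m2)"); abline(fm)
      par(pty = "m")         # maximal region: fills the device width
      plot(x, y, xlab = "T anomaly (K)", ylab = "CLD (W/m2)"); abline(fm)
      par(op)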

      • Posted Sep 9, 2011 at 11:23 AM | Permalink

        I expect that ‘Google Science Communication Fellow’ Dessler knows that changing the aspect ratio, while not affecting r2, improves the political optics greatly.

  28. dp
    Posted Sep 8, 2011 at 10:23 PM | Permalink

    Something in the realm of the unthinkable is occurring over at WUWT between Spencer and Dessler – they appear to be collaborating and exchanging data, ideas, and good will. As is often said for other scientific events, that is the way science should work. There is a lesson to be learned here.

    • Steven Mosher
      Posted Sep 9, 2011 at 12:16 PM | Permalink

      Yes. It needs to be encouraged.

  29. Bart
    Posted Sep 9, 2011 at 2:25 AM | Permalink

    I’m just amazed at the non-responses I have gotten in my recommendations to bring some rigor into this analysis. Am I the only person interested in these matters who has ever used an oscilloscope? Am I the only one who knows what a Lissajous pattern is?

    • Posted Sep 9, 2011 at 2:41 AM | Permalink

      Am I the only one who knows what a Lissajous pattern is?

      Nope. Australians do!

      But what are you rigorously saying?

      • Bart
        Posted Sep 9, 2011 at 11:25 AM | Permalink

        That you cannot diagnose feedback by performing a linear regression on a phase plane plot when the driving input is all over the place and the phase response is nonlinear. As I explained on the last thread, and have written up here.

        I’m sitting back and watching people argue about techniques which are entirely unsuited to the problem in the first place. It’s insane.

        • DG
          Posted Sep 9, 2011 at 11:39 AM | Permalink

          Bart,
          David Stockwell discusses this similarly at his blog.

          I’m sitting back and watching people argue about techniques which are entirely unsuited to the problem in the first place. It’s insane.

          You are exactly right. Why everyone cannot grasp Spencer’s central argument from day one – that the feedback is misdiagnosed when using traditional regression techniques – is puzzling. Only a few seem to understand this.

        • Steven Mosher
          Posted Sep 9, 2011 at 12:23 PM | Permalink

          I think everybody is waiting for you to do it and show us how. That’s a real request.

          If you need a forum to do it, I’ll suggest Lucia’s. Just write her and ask ( you can reference this ). or maybe you and david could do something and post over there.

        • Bart
          Posted Sep 9, 2011 at 1:17 PM | Permalink

          Nobody is going to believe something some anonymous guy posting on a blog would say. Even if I did the analysis, someone respected by a lot of people would need to replicate it. So, why take the time to sort it all out on my own when it would be wasted effort?

          I’ve given straightforward methods. Anyone should be able to perform a running average on both data sets and replot the phase plane (with the time lag removed as Steve M. has done here, though the lag should be recalibrated for the filtered data). Cross spectral analysis is more difficult, but can provide additional confirmation / the definitive result after looking at the phase plane.

          Steve: I posted the series online so that interested people could perform their own analyses – something that I encourage. It’s not a matter of whether people “believe” an anonymous person on a blog if the script and results are posted up. I believe that R has functions to do what you’re suggesting. Why not give it a go?
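
          For anyone wanting to try, a minimal sketch of the recipe in R (using the cld and had series from the head post; the 12-month window and 4-month lag are illustrative choices, not calibrated values):

          # Running-average both series, shift out an assumed lag, replot the
          # phase plane.
          k <- 12   # illustrative smoothing window, months
          sm <- function(x) stats::filter(x, rep(1/k, k), sides = 2)
          cld_s <- as.numeric(sm(cld)); had_s <- as.numeric(sm(had))
          lag <- 4  # illustrative lag, months
          n <- length(cld_s)
          plot(had_s[1:(n - lag)], cld_s[(1 + lag):n], type = "l",
               xlab = "temperature anomaly (K)", ylab = "CLD forcing (W/m2)",
               main = "phase plane, smoothed and lagged")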

        • Steven Mosher
          Posted Sep 9, 2011 at 3:30 PM | Permalink

          Bart,

          Like Steve says, why not give it a go? If you give it a go with easy-to-follow instructions, others will follow. Troy, UC, roman, Carrick, steveF, dewitt, Willis, chad, jeff id, lucia, nick.. they all have the skills to follow along if you take the lead. We don’t care if you’re anonymous as long as you share the work. The whole point of sharing is to make the individual disappear.

          [apologies to anyone I left off the list o’ wizzes]

        • Posted Sep 9, 2011 at 3:51 PM | Permalink

          Nobody is going to believe something some anonymous guy posting on a blog would say.

          http://en.wikipedia.org/wiki/William_Sealy_Gosset

        • Bart
          Posted Sep 9, 2011 at 5:26 PM | Permalink

          You mean this data? I’m not sure how to get it or the temperature data. I’d like to work on the same exact series you plotted. Any way you might make it accessible columnwise in a blog post or something?

          Steve: in my earlier posts, I referred to collations placed online. See http://www.climateaudit.info/data/spencer and http://www.climateaudit.info/data/dessler

        • Bart
          Posted Sep 9, 2011 at 7:09 PM | Permalink

          Which columns should I use?

        • Bart
          Posted Sep 9, 2011 at 8:41 PM | Permalink

          Seriously. Precisely which columns do I use, because I have found some very strong correlations.

        • Posted Sep 10, 2011 at 12:25 AM | Permalink

          Looking forward to hearing about your findings Bart.

        • Bart
          Posted Sep 10, 2011 at 6:06 AM | Permalink

          Well, here are some preliminary results. I hope I used the right quantities. I pulled this data from the Spencer link. I used column 9 for the HADCRUT3 temperature anomaly, and I used column 5 minus column 8 for the cloud response. The relationship between these variables most assuredly has a negative dc gain.

          Here is a plot of the estimated frequency response. If this holds up, I think it’s going to shock some people. The response at high frequency is a jumble, and probably due to independent processes going on. But, the low frequency region is dominated by a fairly well defined 2nd order response with natural frequency of about 0.0725 year^-1 and a damping ratio of about 0.45, which indicates a time constant of about 4.88 years. Yes, years.
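
          A quick check of that time constant (assuming the usual second-order envelope relation tau = 1/(zeta*omega_n)):

          # Decay time constant of a 2nd-order response's envelope
          f0   <- 0.0725   # natural frequency, cycles/year
          zeta <- 0.45     # damping ratio
          tau  <- 1/(zeta * 2 * pi * f0)
          tau              # ~4.88 years, matching the value quoted above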

          The impulse response is shown here.

        • Steve McIntyre
          Posted Sep 10, 2011 at 7:30 AM | Permalink

          Bart, thanks for this very interesting analysis. Perhaps you could elucidate in more detail how the graphics are interpreted for readers not used to this style of graphic.

          Also do you get similar results for the different version that Dessler used:
          http://www.climateaudit.info/data/dessler/dessler_2010.csv columns eradr (CLD RF) and erats (temperature)?

        • Bart
          Posted Sep 10, 2011 at 6:14 AM | Permalink

          Here is the MATLAB code used to compute all this:

          % Import monthly data (assumes the collation has been loaded into `data`)
          temp = data(:,9);
          dR = data(:,5)-data(:,8);
          N = length(dR);

          % Sample period
          T = 1/12; % years

          % Pad time series with zeroes to prevent time aliasing of impulse response
          Nsamp = 8192;
          Npad = Nsamp-N;
          X = fft([temp;zeros(Npad,1)]);
          Y = fft([dR;zeros(Npad,1)]);

          % Compute impulse response
          h = real(ifft(Y./X))/T;

          % Window impulse response
          Nc = Nsamp/2^2;
          w = [ones(Nc/2,1);(1 + cos(pi*(0:(Nc/2-1))'/(Nc/2-1)))/2];
          w = [w;zeros(Nsamp-Nc,1)];
          hw = h.*w;

          % Plot smoothed impulse response
          c = [1:15 15:-1:1]/(15*16);
          figure(1)
          hs = flipud(filter(c,1,flipud(h)));
          t = (0:(length(hs)-1))*T;
          plot(t,hs)
          grid
          xlim([0 50])
          title('Cloud-Temperature System Smoothed Impulse Response')
          xlabel('time (years)')
          ylabel('W/m^2/^oC/year')

          % Compute frequency response and plot
          H = T*fft(hw);
          f = (0:(length(H)-1))'/Nsamp/T;

          % Create 2nd order model
          s = sqrt(-1)*2*pi*f;
          w0 = 2*pi*0.0725;
          zeta = 0.45;
          Hmod = -9.5./((s/w0+2*zeta).*(s/w0)+1);

          figure(2)
          subplot(211)
          loglog(f,abs([H Hmod]),'LineWidth',2)
          grid
          xlim([1e-3 5e-1])
          title('Cloud-Temperature System Magnitude Response')
          ylabel('W/m^2/^oC')
          xlabel('frequency (years^-^1)')
          legend('From Data','Model','Location','SouthWest')
          subplot(212)
          % faz appears to be a user helper returning phase in degrees,
          % e.g. faz = @(z) angle(z)*180/pi;
          semilogx(f,faz([H Hmod]),'LineWidth',2)
          grid
          xlim([1e-3 5e-1])
          title('Cloud-Temperature System Phase Response')
          ylabel('deg')
          xlabel('frequency (years^-^1)')

        • Bart
          Posted Sep 10, 2011 at 6:16 AM | Permalink

          In case anyone’s looking, I posted links to plots, but that post is being held up for moderation. I’m sure it will appear soon.

        • Posted Sep 10, 2011 at 7:00 AM | Permalink

          Thanks Bart. An underdamped harmonic. Do you think the following phase plot of HadCRU since 1997 is consistent with these parameters? (http://landshape.org/enm/sinusoidal-wave-in-global-temperature/ in case it doesn’t come out.)

        • Posted Sep 10, 2011 at 8:28 AM | Permalink

          Sweet. Climate sensitivity in the frequency domain. I am going to play with this tomorrow. Thanks

        • Tom Gray
          Posted Sep 10, 2011 at 9:09 AM | Permalink

          Bart, would it be possible to plot the step response? This might be more understandable for more people.

          It is surprising that an analysis like this wasn’t done previously considering that the original paper was published in a major scientific journal. I wonder why the peer reviewers did not ask for it. The level of the mathematics that is used in these studies is surprisingly low. All the talk about lags and damped exponentials with vague mumblings about differential equations does make one rather disheartened on this issue.

        • Posted Sep 10, 2011 at 9:48 AM | Permalink

          Bart,
          You have deduced an impulse response over fifty years from ten years of data. How much of that is an artefact of the time windowing?

          That well-defined second order response also has a period longer than your data. Again, is it coming from your windowing?

        • Posted Sep 10, 2011 at 10:48 AM | Permalink

          Bart,
          You’ve used a Hanning window to taper the impulse. But I would suggest tapering temp and dR prior to FFT. It’s a little better than just a 10-yr gate function, which I think is influencing your results.
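
          In R, that suggestion sketches out as follows (a Hann taper applied to the raw series; the zero-padding then proceeds as in Bart’s code above):

          # Taper temp and dR to zero at both ends before the FFT, instead of
          # windowing the estimated impulse response afterwards.
          hann  <- function(n) 0.5 * (1 - cos(2 * pi * (0:(n - 1))/(n - 1)))
          taper <- function(x) (x - mean(x)) * hann(length(x))
          # X <- fft(c(taper(temp), rep(0, Npad)))
          # Y <- fft(c(taper(dR),  rep(0, Npad)))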

        • Tom Gray
          Posted Sep 10, 2011 at 11:37 AM | Permalink

          Again, I am quite astounded. We have a discussion on the temporal relationship between these two quantities with publications in distinguished journals and wide press coverage and, amazingly, this sort of analysis has not been done. We are discussing how to window the data. Positive and negative feedback – whatever! Perhaps we need more than peer review.

        • Bart
          Posted Sep 10, 2011 at 1:42 PM | Permalink

          Nick Stokes
          Posted Sep 10, 2011 at 9:48 AM | Permalink

          “You have deduced an impulse response over fifty years from ten years of data. How much of that is an artefact of the time windowing?”

          I know. I did not believe it myself at first. So, I tried seeing if I could generate artificial data with these characteristics and time span and if I could fit it in the same way. Here is some code for doing that. I made sure to generate a lot of data and truncate it so that there would not be any start up transients:

          a = [1.000000000000000 -1.967462610776618 0.968691947164695];
          b = -[0.617926899846966 0.611409488230977]*1e-2;
          temp=randn(10000,1);
          dR = filter(b,a,temp);
          temp = temp((10000-123):10000);
          dR = dR((10000-123):10000);

          This is a discretized model of the 2nd order response driven by white noise.
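
          For R users following along, a sketch of the same generator (the signal package’s filter() mirrors MATLAB’s; coefficients copied from the snippet above):

          library(signal)
          a <- c(1.000000000000000, -1.967462610776618, 0.968691947164695)
          b <- -c(0.617926899846966, 0.611409488230977) * 1e-2
          temp <- rnorm(10000)           # white-noise driving input
          dR <- as.numeric(signal::filter(b, a, temp))
          keep <- (10000 - 123):10000    # keep the last 124 months, past start-up
          temp <- temp[keep]; dR <- dR[keep]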

          Know what happens? The identification works some of the time. It is either nicely behaved, like this data was or, more often than not, it comes out a total mess.

          This is very peculiar behavior, and I have not yet determined what property it is which makes a good data set versus a bad data set, or whether the problem is inherent or numerical.

        • Bart
          Posted Sep 10, 2011 at 1:47 PM | Permalink

          David Stockwell
          Posted Sep 10, 2011 at 7:00 AM | Permalink

          Do you think the following phase plot of HadCRU since 1997 is consistent with these parameters?

          At a glance, it does look like it. The spiral does not cross over itself too many times, so that suggests a moderate amount of damping.

        • Bart
          Posted Sep 10, 2011 at 2:05 PM | Permalink

          Tom Gray
          Posted Sep 10, 2011 at 9:09 AM | Permalink

          Bart, would it be possible to plot the step response. This might be more understandable for more people.

          Here you go.

        • Bart
          Posted Sep 10, 2011 at 2:25 PM | Permalink

          Steve McIntyre
          Posted Sep 10, 2011 at 7:30 AM | Permalink

          Also do you get similar results for the different version that Dessler used?

          Sort of similar. The low frequency phase shift is still 180 deg. It appears to ring more and have significantly lower magnitude. It’s not very well behaved. This data set evidences the “peculiar behavior” I referenced above.

          Maybe Nick’s idea of windowing the data would help:

          Nick Stokes
          Posted Sep 10, 2011 at 10:48 AM | Permalink

          “But I would suggest tapering temp and dR prior to FFT.”

          I will give that a try later, but now that you’ve all got the idea, others can try to track down what causes this, too.

        • Tom Gray
          Posted Sep 10, 2011 at 2:32 PM | Permalink

          Yes, I think that the step response plot makes things much more apparent.

        • Bart
          Posted Sep 10, 2011 at 2:53 PM | Permalink

          I should note for you, Tom, that I have been carrying out this analysis on two different computers and this step response is not precisely that of the modeled process I showed above, which is a later version on another computer to which I do not have access right now. That model should have settled out to about -9.5 W/m^2 for a 1 degC step input and it has a slightly different damping ratio and frequency. The -9.5 sensitivity is the newer version and the one I think fits the data better. But, at this point, these values are not set in stone anyway.

        • Bart
          Posted Sep 10, 2011 at 3:04 PM | Permalink

          Bart
          Posted Sep 10, 2011 at 2:25 PM | Permalink
          Your comment is awaiting moderation.

          “This data set evidences the “peculiar behavior” I referenced above.”

          I suspect the problem may be when there is a zero or near zero in the temperature response which makes things ill conditioned. Nick’s idea of tapering the time series may help with that. I had also considered adding a small white noise “floor” to the input spectrum. Will work on it, and everyone else feel free to, too.

        • Bart
          Posted Sep 10, 2011 at 3:29 PM | Permalink

          There’s several posts being held up in the queue. Tapering the data as Nick suggests alters the result slightly, but has no major effect. Adding a fictitious white noise floor to the input data does not seem to improve things when I generate artificial representative time series. I’m not sure what to do to ameliorate the pathological cases. Maybe it’s just inherent in trying to draw such long correlations out of a short span of data, and it’s just hit or miss.

          My artificially generated data does show, however, that such long term correlation can be extracted from short term data, if you are lucky. Generating a long span of artificial data does seem to eliminate the pathological cases. My preliminary conclusion right now is that the Spencer data just happens to be lucky.

        • Posted Sep 10, 2011 at 5:32 PM | Permalink

          Bart,
          I translated your code to R, up to the impulse response plot. I get the same result.

          But if I apply a Hanning taper to temp and dR (down to zero at each end of the data window) it is very different. I think that suggests that the finite data length is having a big effect.

          I’ve put up a html page here which has some explanation, the impulse response with and without Hanning, and the R code.

        • Bart
          Posted Sep 10, 2011 at 8:23 PM | Permalink

          Nick – yes, it can change things a little through loss of resolution, but it isn’t really all that significant and, importantly, the feedback is still negative (180 deg phase shift at low frequencies).

          I have other responses which may be of interest to you and which are directed to your queries but, unfortunately, the moderator seems to be taking the weekend off. Your insights are cogent, but I think I feel relatively confident now in asserting that this response is real, and the feedback is, clearly, negative. But, please, continue to investigate.

        • Bart
          Posted Sep 10, 2011 at 8:32 PM | Permalink

          “You have to accept low frequency problems with finite data…”

          Actually, Nick, abrupt transitions are a high frequency phenomenon. Multiplication in the time domain is the same as convolution in the frequency domain, so the tapering tends to lower the resolution of the frequency response estimate. Which is good for the higher frequency portion of the spectrum, but not so good for the low. Yes, a truncated data record is bad for resolution of low frequency stuff. But, providing a taper only really helps the high frequency portion. And, so, your impulse response so estimated has the slight ringing smoothed out.
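
          In symbols, the identity being invoked (with hats denoting Fourier transforms):

          \mathcal{F}\{x(t)\,w(t)\}(f) \;=\; (\hat{x} * \hat{w})(f)

          so the estimated spectrum is the true spectrum smeared by the window’s transform; the wider the window’s main lobe, the coarser the frequency resolution, which bites hardest at the low-frequency end of a short record.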

        • Bart
          Posted Sep 10, 2011 at 8:58 PM | Permalink

          I’m going to try one more time to get through the blocker on this. Here is a discrete time model for generating data with the desired correlation which you can play with. I assume the input is white noise. You should find that, occasionally, it is possible to generate a short time series for which the analysis works, other times, not. Which is why I said: “My preliminary conclusion right now is that the Spencer data just happens to be lucky.”

          a = [1.000000000000000 -1.967462610776618 0.968691947164695];
          b = -[0.617926899846966 0.611409488230977]*1e-2;
          temp=randn(10000,1);
          dR = filter(b,a,temp);
          temp = temp((10000-123):10000);
          dR = dR((10000-123):10000);

          I generate a lot of data before truncating to be sure of eliminating transient start up effects.

        • Bart
          Posted Sep 10, 2011 at 9:03 PM | Permalink

          The “filter” function implements the difference equation such that

          a(1)*dR(k) = -a(2)*dR(k-1) - a(3)*dR(k-1) + b(1)*temp(k) + b(2)*temp(k-1)

        • Bart
          Posted Sep 10, 2011 at 9:04 PM | Permalink

          Dang.

          a(1)*dR(k) = -a(2)*dR(k-1) - a(3)*dR(k-2) + b(1)*temp(k) + b(2)*temp(k-1)

        • Posted Sep 10, 2011 at 9:31 PM | Permalink

          Nick and Bart,

          This is great stuff, thank you both for posting more details off-site. It really helps a lot.

          Bart, as far as the “getting lucky” biz goes, it IS interesting to know how often one would be able to properly detect the characteristics of the estimated process in this short time window, and it is great to see you looking at that. This speaks to the power of the test, given that the DGP has the estimated negative feedback property.

          The other, complementary question, is of interest too. Suppose the feedback is really positive and small: How often would we incorrectly find the kinds and magnitudes of negative feedback that you are estimating (either with or without Nick’s taper)? If a Monte Carlo shows this to be extremely unlikely–with the taper or not, even with the short time series–then you’ve really got something.

          Again, I am really enjoying watching this.

        • Posted Sep 10, 2011 at 9:51 PM | Permalink

          Bart et al,
          I’ve given up on this long thread, and posted a reply here.

          Bart, with your troubles getting posts to appear, are you aware that there seems to be a rule here that more than one link takes you into moderation?

        • Bart
          Posted Sep 11, 2011 at 4:13 AM | Permalink

          NW
          Posted Sep 10, 2011 at 9:31 PM | Permalink

          “How often would we incorrectly find the kinds and magnitudes of negative feedback that you are estimating (either with or without Nick’s taper)?”

          That is a good question. A very few of my runs with artificially produced short data records appear to give a false positive (pun intended). But, the giveaway that something is not quite right is that the frequency response estimate always looks kind of haphazard and poorly behaved, not smooth and nice and readily recognizable as a standard 2nd order response like this one, and like others I have generated with artificial data.

          I think there must be a way of making the estimation process more reliable. But, as I have always analyzed systems for which there was more than enough data to cover the time span of the dynamics, I have never had to research it. Perhaps there are methods in the literature, or we might have to come up with whole new ones. For certain, there are various least-squares, maximum entropy, etc… methods which can work well with short data records, but these generally seek to fit parameters to a model, and can be sensitive to unmodeled components of the data (and, there are plenty of likely unmodelable processes in this data stream). One of the strengths of FFT based methods is that they require no parameterization and are unconstrained.

          Anyway, that’s for future investigation. Right now, I believe this data is producing a reasonable result based on the fact that I can often generate artificial data which also produces a reliable estimate of the transfer function with a short data record.

          Let us henceforward move further discussion to Nick’s new post.

        • Billy Liar
          Posted Sep 11, 2011 at 9:50 AM | Permalink

          4.88 years ~ 2 x QBO period

        • Bart
          Posted Sep 11, 2011 at 6:34 PM | Permalink

          Billy Liar
          Posted Sep 11, 2011 at 9:50 AM | Permalink

          I don’t think there is a likely link with periodicities, as I explained here.

        • P. Solar
          Posted Sep 27, 2011 at 1:24 PM | Permalink

          Those graphs from Bart are very interesting. This is very much complementary to how I have been investigating this.

          Here is an overlay of Spencer’s graphic showing satellite data vs model results, on top of the lag response of Spencer’s simple model. (Here I mean using random inputs for rad and non-rad, not just the basic equation form).

          http://tinypic.com/r/30sfupc/7

          This was just trial and error to get the nearest fit, I’m not suggesting this is a result that shows what f/b really is.

          What is relevant to Bart’s work is that this plot changes little as long as the feedback/depth ratio stays the same. This in fact represents the time constant Cp/lambda.

          45/9.2= 4.891304

          That is uncannily close to Bart’s result by a completely different approach.

          Having a hook on the time constant of system response will be a great help in getting to lambda.

        • Posted Sep 9, 2011 at 9:03 PM | Permalink

          Hear, hear. Linear regression is not the sharpest tool in the shed, but it’s like everything gets belted with a pick-handle (which is what Steve is showing).

        • David L. Hagen
          Posted Sep 10, 2011 at 8:05 AM | Permalink

          Bart
          Compliments. Very interesting evidence of a damped impulse response. Your natural frequency of 0.0725 year^-1 = 13.8 year period. That is similar to the ~11 year Schwabe solar cycle or half the ~22 year Hale cycle. Recommend testing the solar cycle for the driving impulses to test amplitude and phase. I encourage you to explore using Ed Fix’s solar cycle model based on damped oscillation around the barycenter (~ Hale cycle) which seems to track remarkably well.
          See: The Relationship of Sunspot Cycles to Gravitational Stresses on the Sun: Results of a Proof-of-Concept Simulation”. Ch 14 p 335 of Dr. Donald Easterbrook, ed. (Elsevier, 2011) e-book
          can be previewed (in short sections) at: ReadInside
          Search for “355” or “barycenter” or “sunspot cycles”. See especially Fig. 6 and Fig. 7. See summary posted by Tallbloke, with his graph posted by David Archibald.

        • David L. Hagen
          Posted Sep 10, 2011 at 8:54 AM | Permalink

          Bart
          For frequency analysis of solar on temperature, see: Scafetta, N., Empirical evidence for a celestial origin of the climate oscillations and its implications. Journal of Atmospheric and Solar-Terrestrial Physics (2010), doi:10.1016/j.jastp.2010.04.015

          Several global surface temperature records since 1850 and records deduced from the orbits of the planets present very similar power spectra.

          Your cloud analysis could help provide the bridging model.

        • Posted Sep 10, 2011 at 10:08 AM | Permalink

          Exciting. This has to be one of the finest recent threads on CA. I’ve just clicked back to find no fewer than ten posts by someone called David in a sequence of eleven, with someone called Steve (Mc) the only outlier. Could there be a correlation between the number of Daves and thread quality? (Only r^2 > 0.01 need apply, in line with strict climate science norms.) And how might both be connected with the incidence of cosmic rays? I think we deserve to be told.

          Steve – let’s leave cosmic rays out of this as they have nothing to do with Dessler v Spencer

        • simon abingdon
          Posted Sep 10, 2011 at 10:40 AM | Permalink

          Makes “Project Steve” http://en.wikipedia.org/wiki/Project_Steve look even more convincing.

        • David L. Hagen
          Posted Sep 10, 2011 at 11:11 AM | Permalink

          Both significant: “david scientist” 74,700,000 vs “Steve scientist”; & “David” 4,150,000 vs “Steve” 1,200,000.

        • David Jay
          Posted Sep 10, 2011 at 11:40 AM | Permalink

          No question about it, more David, more quality.
          (written while wearing a “radiant” smile)

        • Eric
          Posted Sep 12, 2011 at 9:41 AM | Permalink

          I second Richard’s comment re: this thread. I feel like I am eavesdropping on a meeting of The Royal Society of London for Improving Natural Knowledge. This is truly a special forum. Thanks to our host and contributors. Time to hit the tip jar.

        • Posted Sep 10, 2011 at 1:05 PM | Permalink

          Bart has already done some work on the solar end of this:

          Bart: Modeling the historical sunspot record from planetary periods

          Which led to this:

          Jackpot! Jupiter and Saturn – Solar cycle link confirmed

        • jphilips
          Posted Sep 10, 2011 at 5:34 PM | Permalink

          let’s try this again:

          Bart
          A few years ago I had a look at FFTs (using EXCEL) of temperatures:
          e.g. http://img15.imageshack.us/img15/1127/ffts.jpg
          others are available!

          You will note that there is very little evidence in most of any solar influence.

          in fact there were few common frequencies apparent in all!

          After this I used a narrow band filter and swept the centre frequency from 0.5 to 300 years (with constant bandwidth) looking for peaks in amplitudes on the output.

          Then summing all the amplitudes and frequencies at suitable phase got a fair reconstruction of the original (to be expected with enough freqs!)

          I then tried manually adjusting the centre frequencies, amplitudes and phases of summed signals to get a “very good ” synthesized hadcrut3v global.
          some of these results are shown here together with future predictions!!
          http://climateandstuff.blogspot.com/search/label/temperature%20synthesis

          “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”.
          Attributed to von Neumann

    • John F. Pittman
      Posted Sep 9, 2011 at 6:47 AM | Permalink

      Bart, are you suggesting putting the data in a streaming format, looped, to see what the oscilloscope shows? Or perhaps the deltas, or both?

  30. Mac
    Posted Sep 9, 2011 at 7:04 AM | Permalink

    So it would appear that Dessler is currently re-writing, in haste, a paper that in its original draft has already been peer reviewed and accepted for publication.

    Surely this raises all sorts of questions of why this paper was allowed to be fast-tracked in the first place?

    • Jimmy Haigh
      Posted Sep 9, 2011 at 7:16 AM | Permalink

      And raises all sorts of questions as to the quality of the peer review…

  31. Posted Sep 9, 2011 at 7:43 AM | Permalink

    While I feel uneasy using the term “confidence intervals” with such weak relationships

    Aren’t they most useful in the case of weak relationships? If r is close to 1 CIs are not that helpful, everything is clear anyway.

    anom=function(x,Month=month) { #function to take anomalies by month

    We lose 12 degrees of freedom here, I’m more and more worried about using anomalies
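
    A rough illustration of that count (not a full derivation): the anomaly step estimates 12 monthly means, which a naive lm() on the anomalies does not charge for:

    # Residual degrees of freedom: what summary(lm(cld ~ had)) assumes
    # versus what remains after estimating 12 monthly means in anom()
    n <- length(cld)      # months in the regression window
    df_naive <- n - 2     # slope and intercept only
    df_adj <- n - 2 - 12  # also charging for the monthly means
    c(df_naive, df_adj)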

    • Steve McIntyre
      Posted Sep 9, 2011 at 10:25 AM | Permalink

      We lose 12 degrees of freedom here, I’m more and more worried about using anomalies

      Hmmm, interesting point. Here is a plot of pre-anomaly CERES clr and net, with cld by difference. On a pre-anomaly basis, there is a high-amplitude annual cycle. CLD forcing is strongly negative and CLR forcing strongly positive.

      It also seems that you get different trends depending on whether or not you take an anomaly first. The pre-anomaly trends are higher than the post-anomaly trend. CLR has an upward trend; CLD a downward trend.

      • Posted Sep 9, 2011 at 6:50 PM | Permalink

        I suspect the lag comes in as an artifact of the annual variations. As shown here, http://vixra.org/abs/1108.0032, 3 month lags come about through integration (or differentiation) of a cos or sin cycle, and also the sum of sin and cos terms of different amplitude can change the phase.
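
        The quarter-cycle arithmetic behind the 3-month figure:

        \int \cos(\omega t)\,dt \;=\; \tfrac{1}{\omega}\sin(\omega t) \;=\; \tfrac{1}{\omega}\cos\!\left(\omega t - \tfrac{\pi}{2}\right)

        so integrating an annual sinusoid delays it by a quarter period, i.e. 12/4 = 3 months.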

        The thing is the phase shift is eliminated as a free parameter by considering the surface temperature as the integration of atmospheric forcing. The difference in the autoregression between the top of atmosphere and surface demonstrates they stand in this integral(differential) relationship.

        • Steve McIntyre
          Posted Sep 9, 2011 at 9:31 PM | Permalink

          hi david. the annual variations are surprisingly large, aren’t they. I’ll bet that there might even be a noticeable daily cycle depending on the longitude of noon.

        • Jason Lewis
          Posted Sep 9, 2011 at 9:48 PM | Permalink

          This oscillation looks like the MSU data displayed
          here. Is this the same data, or just closely related?

        • timetochooseagain
          Posted Sep 9, 2011 at 10:52 PM | Permalink

          Well, nearly all geophysical data has an annual cycle related, directly or indirectly, to the seasonal cycles of solar insolation, with the Northern Hemisphere dominating most of the time because it has a stronger annual temperature cycle due to less ocean to dampen it.

          Naturally atmospheric and surface temperatures vary through a year, and of course the radiation the Earth emits to space similarly varies, in part as a simple consequence of the fact that warmer bodies emit more radiation than cooler bodies, all other things being equal, and in part because a large number of processes that vary with the seasons directly or indirectly connected to the insolation cycles.

          Of course, the way the Earth reacts to the annual redistribution of solar insolation as the seasons come and go 180 degrees out of phase (but not magnitude!) in each hemisphere is likely very different from how it reacts to a more uniform, sustained “climate forcing” – and it is the latter we are at this moment interested in.

        • Posted Sep 9, 2011 at 11:11 PM | Permalink

          Yes, though they are global variables there are considerable annual variations.

          I just checked. The surface temperature has a lag peak at around 3 months relative to solar insolation, and the cloudR leads the global temps 3-4 months. So a 3-4 month lag would be 180 out of phase.

          The data would make no sense until these dynamic relationships were sorted out.

        • Geoff Sherrington
          Posted Sep 10, 2011 at 2:47 AM | Permalink

          David Stockwell, you are being modest with your short but important paper Sept 9 at 6.50 pm. I’d be fascinated to see more comments about it.

          Re Steve’s graph Sept 9 at 10.25 am, I did blog a few days ago that care has to be taken when correlating data suffering manipulations such as truncating, centering, normalising, working with anomaly numbers, etc. It’s frustrating that others are more able to express themselves than I am. These actions are designed in part to make relationships more obvious to the eye when presented pictorially, but the same operations can affect the validity of the math.

          Re annual variations, here are a few more. The wriggles in global CO2 are well known, but their main explanations are unconvincing. See
          http://climate4you.com/ClimateAndClouds.htm#Tropical%20cloud%20cover%20and%20global%20air%20temperature (Borrowed from WUWT and earlier).

        • David L. Hagen
          Posted Sep 10, 2011 at 7:16 AM | Permalink

          Geoff
          Thanks – very interesting. Am I seeing things or are the mid and low level clouds out of phase? That should be significant for feedback analysis.

        • David L. Hagen
          Posted Sep 10, 2011 at 11:22 AM | Permalink

          Is that phase difference related to: Krivova (2009)

          Solar variability in the IR is comparable to or lower than the TSI variations and in the range between about 1500 and 2500 nm it is reversed with respect to the solar cycle (Harder et al., 2009; Krivova et al., 2009b).

        • Geoff Sherrington
          Posted Sep 11, 2011 at 6:18 AM | Permalink

          David Hagen, Can’t answer as it’s not my specialty. I saw the paper elsewhere and threw it into the ring without judgemental comment in case it was of interest to specialists. I’m not aware of the nature of its past reception. I’ll wander off and read David Krivova now, before re-reading some geostatistics by the French David.

        • David L. Hagen
          Posted Sep 10, 2011 at 7:27 AM | Permalink

          On that page, the graph: Tropical cloud cover vs Global surface air temperature appears to show declining cloud cover during the 1980s (resulting in increasing surface insolation) corresponding to increasing global surface air temperature.

        • timetochooseagain
          Posted Sep 10, 2011 at 1:04 PM | Permalink

          The ISCCP data is questionable in many respects in terms of apparently showing a long term trend. Abrupt changes in cloudiness appear to occur with the introduction of new satellites or the repositioning of old ones:

          Evan, A.T., A.K. Heidinger and D.J. Vimont, 2007. Arguments against a physical long-term trend in global ISCCP cloud amounts. Geophysical Research Letters, 34:LO4701.

        • David L. Hagen
          Posted Sep 10, 2011 at 7:44 AM | Permalink

          David Stockwell – Rephrasing your statement:
          The surface temperature lags the solar insolation by 3 months.
          The global temperature lags the cloudR by 3-4 months.
          Doesn’t that say solar insolation and clouds are in phase, with both leading the surface temperature? [snip – save for another occasion]
          Showing the differences in lag between northern and southern hemispheres are 180 degrees out of phase would confirm that relationship.

        • Posted Sep 10, 2011 at 7:52 AM | Permalink

          David, yes in phase, even though the chain of causation could be TSI->GT->CloudR. Illustrating that just because something happens to lead temperature doesn’t mean it causes temperature variations. We are only talking about annual periodicity. The GCR is a solar cycle periodicity. It’s coincident with the solar cycle so its effect is indistinguishable from the phase of a sine wave alone.

        • David L. Hagen
          Posted Sep 10, 2011 at 8:33 AM | Permalink

          David Stockwell
          The causation from solar to galactic cosmic rays to low level clouds has been shown by evaluating the impact of Forbush events. See: Henrik Svensmark,Torsten Bondo,and Jacob Svensmark Cosmic ray decreases affect atmospheric aerosols and clouds GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L15101, doi:10.1029/2009GL038429, 2009.

Cloud water content as gauged by the Special Sensor Microwave/Imager (SSM/I) reaches a minimum 7 days after the Forbush minimum in cosmic rays, and so does the fraction of low clouds seen by the Moderate Resolution Imaging Spectroradiometer (MODIS) and in the International Satellite Cloud Climatology Project (ISCCP).

          Future evidence will be coming from: The Pierre Auger Observatory scaler mode for the study of solar activity modulation of galactic cosmic rays. The Pierre Auger collaboration 2011 JINST 6 P01003

          Steve McIntyre
          Re: “the annual variations are surprisingly large . . .there might even be a noticeable daily cycle”
          If so, you might be able to pick up the Forbush event impacts on top of solar/cosmic ray driven cloud impacts using Bart/Stockwell type analyses.

        • David L. Hagen
          Posted Sep 10, 2011 at 8:38 AM | Permalink

          Svensmark’s paper posted at spacecenter.dk : Cosmic ray decreases affect atmospheric aerosols and clouds

        • Layman Lurker
          Posted Sep 10, 2011 at 12:24 PM | Permalink

          Speak of the devil, David.

        • David Smith
          Posted Sep 10, 2011 at 8:34 AM | Permalink

          Hello, David.

          For background, the lag between insolation and surface temperature varies. For land it is 30 to 40 days. For ocean, it is about 3 months. And, the winter ice cover in the Northern Hemisphere seems to delay the warmup of large areas to values between those of land and ocean. And, of course, ocean area is greater in the Southern Hemisphere than in the North. The upshot is that the annual cycle has some complexity.

          David Smith

        • David L. Hagen
          Posted Sep 10, 2011 at 8:40 AM | Permalink

Thanks – approximately in proportion to specific heat capacity, then. Any good reviews?

        • Dennis Wingo
          Posted Sep 10, 2011 at 12:32 PM | Permalink

Hi David. The annual variations are surprisingly large, aren’t they? I’ll bet that there might even be a noticeable daily cycle depending on the longitude of noon.

In looking at the three signals, it occurs to me that there could be a phasing error. The peak of solar insolation should directly correlate to the orbital position of the Earth, and the Earth hits perihelion on January 3rd each year and aphelion on July 3rd. The magnitude of the variation in direct insolation is upwards of 80 W/m2, which is significant no matter how you slice it.

As a spacecraft power systems designer, I deal with how this variation affects both the available power to a spacecraft and the thermal control system. It has always bothered me that climate scientists seem to just wave this variation away while at the same time talking about the global impact of a 1.5 W/m2 variation in climate. I understand that the smaller variability is an integration of change over time, but to ignore periodic changes of this magnitude irks me as an engineer.
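
          For scale, a back-of-envelope check (a sketch only; inverse-square geometry with round-number values for mean TSI and orbital eccentricity, which are my assumptions) supports the size of that swing:

          S0 <- 1361; e <- 0.0167          # assumed mean TSI (W/m2) and eccentricity
          S0/(1 - e)^2 - S0/(1 + e)^2      # ~91 W/m2 perihelion-aphelion swing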

        • Ed_B
          Posted Sep 10, 2011 at 10:12 AM | Permalink

Your graph Fig 1 in your referenced PDF paper seems to have a mis-label. Black and red instead of black and black??

      • Posted Sep 10, 2011 at 4:19 PM | Permalink

It would be interesting (and quite easy) to check how the properties of the incoming stochastic process would change due to this anomaly operation. ‘Anomalies’ far from the reference period might be surprisingly high.

        • mpaul
          Posted Sep 11, 2011 at 12:50 PM | Permalink

          UC, can you expand on this? You said that the “properties of the incoming stochastic process” change — in what way? Are you saying that the anomaly operation is losing information useful in constraining the probability space of the future (or past) state? So when you say:”‘Anomalies’ far from the reference period might be surprisingly high” – does this mean that the probability space far from the reference period is less dense and more broadly distributed than if you didn’t take the anomaly? Or am I misreading your comment?

        • Posted Sep 13, 2011 at 2:20 AM | Permalink

          You said that the “properties of the incoming stochastic process” change — in what way? Are you saying that the anomaly operation is losing information useful in constraining the probability space of the future (or past) state?

Yes, in LS terms, trace(I-P_x) changes from T-1 to T-12. A linear change in the input will result in a staircase function in the output, with jumps only where the year changes, etc. How much it matters to the present discussion, I don’t know.
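
          For concreteness, a minimal R sketch (my construction, not UC’s code) of that staircase: feed a pure linear trend through the by-month anomaly operation and the output is constant within each year, stepping only where the year changes.

          x <- ts(seq_len(120)/12, start = 2000, freq = 12) # linear input, 1 unit/yr
          m <- cycle(x)                                     # calendar month index
          a <- x - tapply(x, m, mean)[m]                    # anomaly by month
          plot(a, type = "s")                               # staircase output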

          So when you say:”‘Anomalies’ far from the reference period might be surprisingly high” – does this mean that the probability space far from the reference period is less dense and more broadly distributed than if you didn’t take the anomaly?

          Yes, assuming that ‘weather noise’ is not iid, but something more realistic. ‘Autocorrelation is small’ says Brohan, but I’d like to see more evidence. The ‘anomaly tube’ in the video squeezes the data and it gets back to freedom when the tube is far away.
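
          A rough simulation (a sketch only; AR(1) ‘weather noise’ with ar = 0.95 is my assumption, not Brohan’s model) shows the squeeze-and-release:

          set.seed(1)
          sims <- replicate(500, {
            z <- arima.sim(list(ar = 0.95), n = 600)  # 50 years of monthly noise
            z - mean(z[241:360])                      # anomaly vs a mid-record decade
          })
          plot(apply(sims, 1, sd), type = "l")        # spread is narrowest inside
                                                      # the reference period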

      • David L. Hagen
        Posted Sep 12, 2011 at 10:51 AM | Permalink

        Steve McIntyre
You may also see evidence of underlying 22-year Hale or 60-year PDO cycles. While Nyquist–Shannon requires a record of at least twice the period to identify such cycles, other studies could be used as supporting evidence; e.g., see Loehle & Scafetta 2011, and Adriano Mazzarella & Nicola Scafetta 2011.

  32. dearieme
    Posted Sep 9, 2011 at 9:33 AM | Permalink

    “I’m re-visiting this issue by repeating the regression ….but making [a] plausible variation ….and got surprising results.”

    That’ll be “surprising” in the sense of “not in the least astonished”, I take it?

  33. Jeremy
    Posted Sep 9, 2011 at 9:35 AM | Permalink

    (isn’t it absurd that blog posts on “skeptic” blogs provide better replication information than “peer reviewed” articles in academic literature)

    Hasn’t this gone without saying for nearly a decade now?

  34. David Shaw
    Posted Sep 9, 2011 at 9:48 AM | Permalink

    I could just about put any line with any slope through those data and not get a much worse fit.

  35. EdeF
    Posted Sep 9, 2011 at 5:43 PM | Permalink

    Figure 1 looks very much like my ring density vs temperature plot for White Mtn bristle cone pines.
    Those didn’t look linear either.

  36. Posted Sep 9, 2011 at 5:56 PM | Permalink

    3-D scatter data plotter for excel with rotaty things.
    http://www.doka.ch/Excel3Dscatterplot.htm

  37. MarcH
    Posted Sep 9, 2011 at 6:06 PM | Permalink

    In regard to Dessler 2011 in press…

    GRL state about papers in press:

    “Papers in Press is a service for subscribers that allows immediate citation and access to accepted manuscripts prior to copyediting and formatting according to AGU style. Manuscripts are removed from this list upon publication.”

    The AGU Authors Guide states: “Once the figures pass technical requirements, your final figures and text will be combined into a PDF file that is placed on the journal’s Papers in Press page. Papers in Press is a service for subscribers that allows immediate citation and access to accepted manuscripts prior to copyediting and formatting according to AGU style.”

    The Publishing Guidelines state:
    “An author should make no changes to a paper after it has been accepted. If there is a compelling reason to make changes, the author is obligated to inform the editor directly of the nature of the desired change. Only the editor has the final authority to approve any such requested changes.”

    As the changes suggested by Dessler are greater than “copyediting and formatting” it seems the paper must be withdrawn and a new version submitted and reviewed. Any comment?

    • mpaul
      Posted Sep 9, 2011 at 7:24 PM | Permalink

      From your quoted paragraphs, it seems like the editor can wave the changes through – and he is, no doubt, getting pressure from Lord Trenberth (may peace be upon him) to do just that. However, with all of the scrutiny that this paper is getting, the editor would be foolish to do so. Methinks the editor is in a bit of a bind.

    • Posted Sep 9, 2011 at 7:40 PM | Permalink

      Journals do discourage substantive changes post-approval. It’s up to GRL here. I suspect they’ll publish it with few if any changes. None of the matters that Dessler seems to agree to amend require withdrawal. A “note added in press” or an erratum would be ample.

      • MarcH
        Posted Sep 9, 2011 at 8:09 PM | Permalink

        Yes, changes can be made at the editor’s discretion. But based on what appears at Roy Spencer’s site, the changes look rather more substantive than Nick suggests and as such would require additional review. The paper in its current state should therefore be withdrawn and the review process started afresh. This obviously would be a major embarrassment to the editor of GRL and makes the resignation of Wolfgang Wagner look decidedly immature.

        It seems the peer review process is well and truly broken.

        • Rattus Norvegicus
          Posted Sep 10, 2011 at 11:58 AM | Permalink

          MarcH:

          Here is what Dessler told Spencer he will change:

          “I’m happy to change the introductory paragraph of my paper when I get the galley proofs to better represent your views. My apologies for any misunderstanding. Also, I’ll be changing the sentence “over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming” to make it clear that I’m talking about cloud feedbacks doing the action here, not cloud forcing.”

          Somehow I don’t think the paper will be withdrawn over either having or not having those changes included. Although if they don’t allow the changes to clarify Roy’s views he will use it as a cudgel in his next paper on the subject. Good luck getting that published Roy!

        • mpaul
          Posted Sep 10, 2011 at 12:24 PM | Permalink

          Spencer mentions that Dessler made an error in his calculation of the radiative to non-radiative ratio. Dessler seems to have accepted that there is an error. We simply don’t know how significant the error is. Spencer seems to think it’s big, writing:

          Using the above equation, if I assumed a feedback parameter λ=3 Watts per sq. meter per degree, that 20:1 ratio Dessler gets becomes 2.2:1. If I use a feedback parameter of λ=6, then the ratio becomes 1.7:1. This is basically an order of magnitude difference from his calculation.

          We’ll need to wait and see.

        • Tom Gray
          Posted Sep 10, 2011 at 12:25 PM | Permalink

          ===============
          Good luck getting that published Roy!
          ================

          I thought scientific peer review was about assessing the quality of a paper, not enforcing academic politics. Well, I didn’t really think that, given what I have seen of academic reviews, but climate scientists claim that it is.

        • timetochooseagain
          Posted Sep 10, 2011 at 12:57 PM | Permalink

          Note the various underlined bits. Spencer apparently has reason to believe that Dessler is making alterations to more than just the bits that misrepresented his views. Dessler is apparently going to change his calculation of the nonradiative/radiative ratio from about 20:1 to something less (how much?), which is a change of substance.

  38. Steve Fitzpatrick
    Posted Sep 10, 2011 at 9:37 AM | Permalink

    Bart,

    Your graphic results are very interesting. I second Steve McIntyre in suggesting you provide a more detailed description and whatever implications you can draw from the analysis.

    ~4.88 years sounds like the response time of the ocean’s well-mixed layer to an impulse (e.g. a big Pinatubo-driven dip followed by an overshoot/undershoot sequence). Does that make sense to you?

    • Bart
      Posted Sep 10, 2011 at 2:07 PM | Permalink

      Sure, it seems reasonable to me. But, I am not very familiar with all the climate interactions. I’m just a systems analyst.

  39. jphilips
    Posted Sep 10, 2011 at 4:29 PM | Permalink

    Bart
    A few years ago I had a look at FFTs (using EXCEL) of temperatures:
    e.g. http://img15.imageshack.us/img15/1127/ffts.jpg
    others are available!

    You will note that there is very little evidence of any solar influence in most of them.

    In fact there were few common frequencies apparent in all!

    After this I used a narrow band filter and swept the centre frequency from 0.5 to 300 years (with constant bandwidth) looking for peaks in amplitudes on the output.

    Then summing all the amplitudes and frequencies at suitable phases gave a fair reconstruction of the original (to be expected with enough freqs!).

    I then tried manually adjusting the centre frequencies, amplitudes and phases of the summed signals to get a “very good” synthesized HadCRUT3v global.
    Some of these results are shown here together with future predictions!!
    http://climateandstuff.blogspot.com/search/label/temperature%20synthesis

    “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”.
    Attributed to von Neumann

  40. Posted Sep 10, 2011 at 5:02 PM | Permalink

    This issue of lags/phase in relation to the behaviour of clouds and SST is a fascinating one!

    I don’t know whether it is a factor, but back in April 2009 I showed, using the MODIS chlorophyll a data, that there appeared to be two distinct NH oceanic cyanobacterial consortia producing two annual phases of blooming in the NH oceans (i.e. blooming over two fairly distinct water temperature ranges). But strangely the growth bimodality was not apparent in the SH oceans (at least according to satellite measures of primary productivity).

    http://landshape.org/enm/oceanic-cayanobacteria-in-the-modern-global-cycle/

    At the time I wondered (and still do) whether this is a modern adaptation to the hemisphere where most anthropogenic CO2 has been increasingly generated over the last 200 or so years.

    As far as I (still) know this observation does not appear anywhere in the modern scientific literature on oceanic cyanobacterial productivity.

    Nevertheless, given the powerful role of the emissions by cyanobacteria of dimethyl sulfide (DMS) – the principal nucleant of (especially low/medium level) oceanic cloud – this issue of lags/phase in the cloud response to SST might also have something to do with oceanic cyanobacterial productivity, given the intimate relationship between that productivity and monthly/annual SST ranges.

    • Geoff Sherrington
      Posted Sep 10, 2011 at 9:34 PM | Permalink

      Steve, This is the general direction of my thoughts also. Those annual wriggles are reported for a number of data sets, from CO2 to SST etc. I’ve not been convinced by the “NH trees get leaves” explanation. The more that can be added to explain the wriggles, the better. I can’t see how CO2 can be described both as a well-mixed gas and yet one that retains a microstructure from Barrow Alaska through Mauna Loa and Cape Grim to the South Pole. I also think that discerning a lag from a lead can be hard and that the more independent lines of causation there are, the better.

    • David L. Hagen
      Posted Sep 12, 2011 at 2:09 PM | Permalink

      Steve Short
      Interesting observations. I wonder if the N/S differences in annual pulsations affect the cloud feedback in the Dessler/Spencer analyses. Do Fred Haynie’s pulsation analyses provide any clues to your NH bimodality versus its absence in the SH? E.g., temperature/CO2/nutrient-driven differences in growth rate vs. nutrient limitations? Or do the differences in chlorophyll affect the albedo/absorption?

      • Posted Sep 12, 2011 at 6:16 PM | Permalink

        David Hagen
        Thanks very much for the link to Fred Haynie’s PP presentation on annual pulsations in CO2. Do you have an email address/URL for Fred? I’d very much like to get in touch with him. My apologies for the fact that all the plots in my April 2009 Niche Modeling piece have since been lost; David Stockwell tells me it was some sort of backup failure by his site’s service provider. However, I have kept an archive of all my plots for that article. They very clearly show that, in the NH oceans at least, there is each year a major and minor phase bimodality to both chlorophyll a and the diffuse seawater attenuation coefficient at 490 nm (both measures of cyanobacterial productivity), bracketing the unimodal peak in SST with lags/leads which correspond very closely to those in the S&B work. These plots can be very easily generated at the NASA Giovanni site (using the MODIS, SeaWiFS and Aqua satellite databases).

      • Geoff Sherrington
        Posted Sep 17, 2011 at 3:21 AM | Permalink

        David, Reverse causality. Are the wriggles the consequence of cloud properties?

  41. Posted Sep 10, 2011 at 9:48 PM | Permalink

    Bart,
    I had some more ideas on this analysis.

    Firstly, we should remember that, like Steve in this post, we’re doing what Dessler said we shouldn’t do – subtracting clear-sky from total to get the effect of clouds. It picks up a big water-vapour (wv) bias.

    Steve did it, and got a negative slope. You’re finding a generally negative impulse response.

    Now I think these can be related. The dc analysis of Dessler assumes that the response is immediate. But you can relate the areas underneath. The area under the impulse response can be compared to the regression coefficient. It’s a total response.
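
    In code terms (a sketch; h stands for the estimated impulse response at monthly steps, which is an assumption about scaling), the comparison is just the rectangle-rule area:

    dc_gain <- sum(h, na.rm = TRUE)  # total response, to set against the
                                     # static regression slope in W/m2/C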

    So I looked at the tapered version, which I think is more reliable. I integrated the smoothed hs that you plotted. No subtle integration formula – just adding. I got -2.27 W/m2/C – very large (negative).

    But this would be sternly deprecated by Mr Briggs et al. So I integrated just h. That came to -0.3 – just at the edge of Dessler’s range.

    I found the reason for the difference. I assume Matlab, like R, just omits smoothed values where you can’t use the whole filter range. So with a cone filter like yours, end values are pretty much shut out. And the small time values for h are large positive.

    So I think the h integral value is correct.

    I agree with you that this is the correct way to take account of lag. But we haven’t dealt with causality. The FFT analysis relates past and future indifferently.

    However, we don’t have a clear causality here. I think S&B and L&C are saying it goes both ways. So I don’t know what to make of that.

    • Bart
      Posted Sep 11, 2011 at 3:54 AM | Permalink

      “The area under the impulse response can be compared to the regression coefficient.”

      Indeed. The area under the impulse response is, in fact, the dc gain in the frequency response.

      “So I looked at the tapered version, which I think is more reliable.”

      It isn’t. Tapers smooth things so that you get less high frequency jiggling. But, they decrease the resolution at low frequencies.

      “I got -2.27 W/m2/C”

      I expect you are biased down by about a factor of four by your tapering.

      “That came to -0.3”

      W/m2/C? Are you keeping your units straight? The hs is just a smoothed version of h. You should integrate to essentially the same value, since smoothing and integration are both linear operations.

      “I found the reason for the difference. I assume Matlab, like R, just omits smoothed values where you can’t use the whole filter range. So with a cone filter like yours, end values are pretty much shut out. And the small time values for h are large positive.”

      Ah, now I see. Unfortunately, your intuition is failing you. The frequency response I plotted on a log-log scale necessarily omits the zero frequency (dc gain) singleton value, because it is at 10 to the minus infinity years^-1. The dc gain estimated is, in fact, much smaller than the nominal immediately higher frequency sample.

      But, this is a continuous system, and the frequency response is continuous – it cannot suddenly step to a completely new value at zero. The reason the dc value itself is ambiguous is because we are dealing with anomalies. The dc information has already been removed from these data streams. But, because the frequency response is a smooth, continuous function, we can infer that the true dc gain is continuous with the next several frequency samples. Ergo, the true factor is about -9.5 W/m^2/degC.

      “The FFT analysis relates past and future indifferently.”

      That is false. Natural causal systems generally have decreasing phase response with frequency. If we tried to invert this system, we would get increasing phase. Ergo, we know we are indeed inferring the correct direction of causality.

      • Posted Sep 11, 2011 at 6:03 AM | Permalink

        Bart,
        A few queries:

        1. “The dc information has already been removed from these data streams.”
        Do you mean that the mean (of temp, dR) has already been subtracted? I think that would be a good idea. I had modified my code to do it, but it didn’t make a huge difference. It’s not in your code – maybe preprocessing?

        2. I didn’t get the bit about log plotting. I’m talking about where the filter is implemented: filter(c,1,flipud(h)). There are numerical values of h there, and the first few are positive.
        The default filter behaviour in R is to not allow the filter to cross the end (t=0). That was the cause of the discrepancy: positive values near 0 were lost. It occurred to me that I should have used the “circular=TRUE” option, which wraps around (see the sketch after this list). Do you know which is the default in Matlab?

        3. I realised that the integral of h should be just the zero-frequency value of its DFT, i.e. Y(0)/X(0). That is itself just the ratio of the integrals of dR and temp. I’m still trying to work that out.

        4. Comments: yes, my 0.3 was W/m2/C. That was after integrating over your 50 year period. Choosing different comparable periods makes some difference. Y/X has big spikes, reflected as long-time oscillations in h.

        5. I’d insist FFT analysis is non-causal. It has to be – it’s periodic. But I agree that afterward you can try to infer causality from the phase.
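
        On point 2, a tiny demonstration (toy numbers) of the end behaviour in question:

        x  <- c(5, 4, 3, 2, 1)
        c3 <- rep(1/3, 3)                      # simple 3-point moving average
        stats::filter(x, c3)                   # NA 4 3 2 NA: ends are dropped
        stats::filter(x, c3, circular = TRUE)  # ends computed by wrapping around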

        • Bart
          Posted Sep 11, 2011 at 12:59 PM | Permalink

          1. “Do you mean…” I mean that the temperature and cloud data have arbitrary dc offsets from nominal equilibrium values. We do not know a priori what those offsets are, but we can infer their ratio based on the estimated transfer function samples near dc.

          “…it didn’t make a huge difference.” That is because they are both near zero mean already, and arbitrarily so. In fact, for numerical improvement, we should actually take the Y(0)/X(0) term out and just replace it with zero in the inverse FFT calculating the impulse response. We can infer the dc gain from the frequency samples near zero and then just offset the impulse response so that its integral is the inferred dc gain.

          2. This issue could easily get confused. Bottom line: you need to infer the dc gain from the estimated transfer function. And, you need to nix the taper, or you are destroying low frequency information with no justification.

          One thing to consider: It appears you have implemented the filtering forward in time. In that case, your early behavior is going to be contaminated with startup transients. You notice I used the MATLAB function “flipud”, which flips the data around. This means I am actually filtering the data backwards in time, then flipping the result again to get it forwards:

          hs = flipud(filter(c,1,flipud(hw)));

          That puts my startup transients at the end, where the impulse response is near zero and it doesn’t affect things much, and the transients have died out by the time I get to the “meat” of the function (an R analogue is sketched after this list).

          Do not use the “circular=TRUE” option. You don’t want the ends wrapping around and influencing each other.

          3. As I said, you cannot rely on Y(0)/X(0). You need to infer the true value from frequency samples above zero frequency.

          4. Stop tapering. The spikes are indicative that you have changed the correlations in a bad way. My “bad” runs using artificial data always have big spikes and ugly frequency responses.

          5. Well, if you insist… so long as you agree.
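
          For R users, an analogue of the flipped filtering in item 2 (a sketch, not the MATLAB original; note that R leaves NAs in the start-up region where MATLAB’s filter assumes zero prior values):

          smooth_rev <- function(h, c) {
            # filter backwards in time, then flip back, so the start-up
            # transient lands at the far end where h has died away
            rev(stats::filter(rev(h), c, method = "convolution", sides = 1))
          }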

        • Bart
          Posted Sep 11, 2011 at 2:06 PM | Permalink

          Of course, “startup transient” with this 30 point FIR filter means the time needed to cover the first 30 data points within the weighting function.

      • Posted Sep 11, 2011 at 7:15 AM | Permalink

        Bart,
        I’m now more bothered about causality, and in particular the step where you taper h:
        hw = h*w
        where w is zero in about 4097:8192 of the range of h values (1:8192).

        Now h is initially a bi-directional impulse response. A pulse in temperature is associated with changes in dR in past and future. That’s just the original transfer function model which is fitted to the data.

        But with that taper you are, by periodicity, forcing h at small negative times to be zero and h to be one-sided. You’re imposing a causality that wasn’t there in the data.

        Then you go on to FFT hw, getting the mag and phase diagrams shown. And as you say, the phase goes to about -180, suggesting causality. But I think it is just the causality you forced with the h*w step.

        And of course this is where you get the very long time constant and the quadratic model.

        • Bart
          Posted Sep 11, 2011 at 1:17 PM | Permalink

          I get the same result without windowing. The windowing mostly helps out the higher frequency region, of which I do not show much because it is poorly behaved with no particular readily recognizable form – I believe it is reasonable to conclude that this region is dominated by other processes independent of the essential temperature-cloud loop.

          The phase relationship establishes the direction of causality. A decreasing phase is associated with phase lag, increasing with phase lead.

        • Bart
          Posted Sep 11, 2011 at 6:43 PM | Permalink

          BTW, Nick, my hidden posts have appeared. See in particular Bart @ Sep 10, 2011 at 1:42 PM. I agree that it is a little tenuous pulling out such long term correlation from such a short segment of data. But, as I said, I can often do it with artificially generated data, too, so I think this is a lucky set of data.

          There are probably more robust methods for what we are doing here under the rubric of deconvolution. In the future, I will be looking into the literature, and suggest you do, too. But, in the meantime, having uncovered what looks to be a very strong negative feedback of -9.5 W/m^2/degC, I think the onus is on those who believe there is a weak or positive feedback to prove it. From my viewpoint here, I think that is very unlikely.

        • Mark T
          Posted Sep 11, 2011 at 9:14 PM | Permalink

          Any good book on adaptive filter theory should cover the topics. System identification as well as deconvolution (blind or otherwise) are what you want, as Bart suggests. Haykin’s Adaptive Filter Theory, though terse almost to the point of requiring the knowledge before reading, is regularly considered one of the best resources. I believe he has a deconvolution book as well.

          There are also time-domain parallels to what Bart is doing that may be useful for verification purposes. I do believe record length as well as stationarity/linearity issues need to be fully explored.

          Mark

        • Posted Sep 12, 2011 at 1:42 AM | Permalink

          Re: Bart (Sep 11 18:43),
          Bart,
          I found that strong negative feedback of -9.4 W/m2/C to be iffy. That’s the number with 124 months of data. But if you drop 4 months of data at the start (same algorithm otherwise, subtracting means), then the number drops to -15.06 W/m2/C. And if you drop that tapering of h (h*w) it rises to -12.2 W/m2/C for the 124 months.

          In fact, if you omit the tapering of h (h*w, for which I can see no justification), there’s a simple formula for the number you are quoting. It’s just the ratio of regression slopes vs time:
          slope(dR)/slope(temp).

          And I can’t see any basis for considering that a feedback.
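
          In R the recipe is two lines (a sketch; dR and temp stand for the monthly flux and temperature ts objects under discussion):

          t_ax  <- as.numeric(time(temp))              # common monthly time axis
          slope <- function(y) coef(lm(y ~ t_ax))[[2]] # OLS trend versus time
          slope(dR)/slope(temp)                        # the untapered H[1] value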

        • Bart
          Posted Sep 12, 2011 at 10:22 AM | Permalink

          The basis is the very well defined 2nd order response evident in the transfer function. Such responses are ubiquitous in engineering. Newton’s laws naturally lead to them because f = m*a, the second derivative is proportional to the force, and when you wrap feedback around such a system, you get a 2nd order response such as this. Electrical systems with resistors, inductors, and capacitors generally produce such responses. Solutions of elliptic partial differential equations commonly can be expanded in a series of 2nd order responses, a so-called modal expansion. Such responses are ubiquitous.

        • Bart
          Posted Sep 12, 2011 at 4:03 PM | Permalink

          “But if you drop 4 months of data…”

          Since we are already at the ragged edge of having enough data to identify the process, I think dropping data or otherwise messing with anything which effectively reduces the time interval (like tapering) is a very bad idea. I would be willing to bet that, if you plot out the frequency response from those runs, you will see them behaving very erratically.

          That is a key point. When I generate artificial data across the same time interval, the times when the analysis works, it gives a nice, well-behaved frequency response, like the real-world data does. When it doesn’t, the frequency response is erratic. Thus, I think the “erraticity” of the frequency response is a sort of metric of how reasonable the estimate is.

        • TimTheToolMan
          Posted Sep 13, 2011 at 7:10 AM | Permalink

          Nick writes “I found that strong negative feedback of -9.4 W/m2/C to be iffy. That’s the number with 124 months of data. But if you drop 4 months of data at the start (same algorithm otherwise, subtracting means), then the number drops to -15.06 W/m2/C. And if you drop that tapering of h (h*w) it rises to -12.2 W/m2/C for the 124 months.”

          I’m way out of my depth here…but can you repeat this by successively dropping more and more data and see that it remains negative at all times (except maybe right at the end)? Or is that not helpful?

        • Posted Sep 13, 2011 at 8:10 AM | Permalink

          TT,
          It bounces around. But Bart has a point that we really already have too little data and can’t drop much more. I’m just trying to illustrate that the number is rubbery. I think the more significant observation is that without the taper of h, it’s just the ratio of the trends of dR and temp. We have a better feel for how stable they should be. That ratio will switch sign if either trend switches. And if it’s the temp trend that changes, being the denominator, the ratio will get very large (positive or negative) before that happens.

          In fact the Hadcrut3 temp trend over the last ten years has famously wavered in sign. Whenever it goes negative, the logic here says that the feedback is positive, since then the dR and T trends have the same sign.

        • j ferguson
          Posted Sep 13, 2011 at 8:24 AM | Permalink

          Nick,
          There is something very troubling with feedback that switches sign in synchronization with sign-change in a trend. Does this occur in “natural” systems?

        • Posted Sep 13, 2011 at 8:52 AM | Permalink

          Well, I don’t think it’s a feedback. I’m just commenting on how the arithmetic works out. That’s what these numbers actually are.

          I think the notion of impulse response won’t work for another reason. We’ve been arguing about causality, and the feeling that an impulse response should give one variable in terms of the past values of another. But feedback needs “two-way causality”. If past temp determines present dR, then present dR needs to feed back into whatever determined it – that’s how it works. But present dR can’t feed back into past temp.

          It’s hard to get away from the usual dc (instantaneous) feedback that Dessler used.

        • TimTheToolMan
          Posted Sep 13, 2011 at 10:48 AM | Permalink

          Nick writes “But present dR can’t feed back into past temp.”

          Not “past temp” per se, but with the thermal inertia of the ocean (along with currents that change the location of those forcings) surely dR can feed back in the manner needed?

        • Bart
          Posted Sep 13, 2011 at 11:19 AM | Permalink

          Nick, you are assuming dR doesn’t change slope along with the temp. But, of course, it must. You’ve disproven the system to yourself by imagining a scenario in which there is no system.

          “I think the notion of impulse response won’t work for another reason. We’ve been arguing about causality, and the feeling that an impulse response should give one variable in terms of the past values of another.”

          Well, I guess those degrees I got and 30 years of working with feedback systems are down the drain. Nick says they cannot exist. Sheesh.

          You are not thinking about the feedback system properly. Of course dR feeds back into temp. But, there are other processes feeding into temp, too, so the correspondence is not unique. The system diagram looks like this.

          “It’s hard to get away from the usual dc (instantaneous) feedback that Dessler used.”

          What a profoundly ridiculous statement. You really have no inkling of the whole field of control theory at all, do you?

        • Bart
          Posted Sep 13, 2011 at 11:54 AM | Permalink

          “But, of course, it must within the relevant time frame dictated by the bandwidth of the feedback system.”

        • Tom Gray
          Posted Sep 13, 2011 at 12:46 PM | Permalink

          Nick Stokes writes

          ================
          But feedback needs “two-way causality”. If past temp determines present dR, then present dR needs to feed back into whatever determined it – that’s how it works. But present dR can’t feed back into past temp.
          ==================

          Look at a diagram of a simple feedback system. The output produces an error signal which is fed back to create a modified output, and so on forever.

          You don’t seem to grasp how this works. There is no feeding back into the past. The output creates an error signal and the error signal produces an output. Everything goes forward in time.

        • Tom Gray
          Posted Sep 13, 2011 at 12:56 PM | Permalink

          Nick Stokes writes

          ================
          But feedback needs “two-way causality”. If past temp determines present dR, then present dR needs to feed back into whatever determined it – that’s how it works. But present dR can’t feed back into past temp.
          ==================

          Just to add that signal processing chips are special-purpose processors that “delay, multiply, add” very very quickly, over and over again, forever. They do not go back into the past.

        • Bart
          Posted Sep 13, 2011 at 11:26 AM | Permalink

          “There is something very troubling with feedback that switches sign in synchronization with sign-change in a trend.”

          It’s only changing in Nick’s mind, not in reality.

        • Posted Sep 13, 2011 at 5:32 PM | Permalink

          “It’s only changing in Nick’s mind, not in reality.”
          Not in my mind, Bart, but as a simple result of the arithmetic of your algorithm.

          An analysis of the full data set gives:
          With your h trunc, H[1] = -9.64 W/m2/°C (which you’ve been quoting)
          Without trunc, H[1] = -12.22 W/m2/°C
          Trend dR -0.0605 W/m2/yr; Trend T 0.0049 °C/yr; Ratio -12.22 W/m2/°C

          But if you take out the first 10 months of data (all of year 2000) that’s enough to make the trend of temp negative. Then your analysis gives:
          With your h trunc, H[1] = 10.75 W/m2/°C
          Without trunc, H[1] = 18.71 W/m2/°C
          Trend dR -0.0509 W/m2/yr; Trend T -0.0027 °C/yr; Ratio 18.72 W/m2/°C

          So the trend in T switched, and suddenly we have big positive “feedback”.

          You might like to think through the implications of using such a simple criterion. You can form the ratio of trends of any two series, and quote such a figure. But it doesn’t prove an association.

        • Bart
          Posted Sep 13, 2011 at 9:39 PM | Permalink

          If H[1] is your dc value, it is meaningless. I’ve already been over this.

        • Posted Sep 13, 2011 at 10:04 PM | Permalink

          No, H (FFT of h) is perfectly smooth near zero. Here are some typical values:
          > H[1:6]
          [1] 18.71285+0.00000i 18.71285-0.15041i 18.67962-0.29995i 18.62446-0.44743i
          [5] 18.54773-0.59190i 18.44988-0.73243i
          What is your -9.4 W/m2/C if not the limiting low-frequency value of H?

        • Posted Sep 13, 2011 at 10:16 PM | Permalink

          Bart, I should add that I interpolated the zero value of Y/X from nearby values, which are also perfectly smooth. In fact, this number H[1] is also that limiting value. That is why what you are calculating is just the ratio of trends.

        • Bart
          Posted Sep 14, 2011 at 12:27 AM | Permalink

          Without trunc, H[1] = -12.22 W/m2/°C

          Don’t use that data. It’s just noise and extraneous processes. The early part of the impulse response reveals a standard 2nd order type response. Once you see a signature like that, you know that is the thing you are after. You do not want any data unrelated to it funking up your estimate. You can see where it dies down, so stop using data after that point. But, use a smooth taper in order to maintain your resolution.

          But if you take out the first 10 months of data…

          Don’t take out any original data. You have no basis for doing so.

        • Bart
          Posted Sep 14, 2011 at 12:29 AM | Permalink

          You have no basis for doing so… and you need every bit of data that you have from which you can get longer term correlations.

        • Gil Grissom
          Posted Sep 14, 2011 at 10:56 PM | Permalink

          Bart, I want to make this comment in order to help reassure you that your analysis has been useful and appreciated by many. Nick can see that you have added to Steve’s analysis and helped shoot holes in Dessler’s critique of Spencer, and he doesn’t like it one bit. He knows what you say is true; he is just trying to cast doubt in the minds of those who read this blog and may not have the education in math etc. to follow the argument. You can look in the thread Steve titled “Dirty Laundry II, Contaminated Sediments” and get an earful of Nick’s efforts at confusing the issue. See here

          Dirty Laundry II: Contaminated Sediments

          Thanks for the analysis!

        • Bart
          Posted Sep 13, 2011 at 11:40 AM | Permalink

          “Whenever it goes negative, the logic here says that feedback is positive, since then dR and T trends have the same sign.”

          Not in the relevant frequency band. Control systems are all about frequencies, Nick. You can’t always make out what is going on in the time domain. It’s why we use the frequency domain.

        • Mark T
          Posted Sep 13, 2011 at 3:36 PM | Permalink

          Welcome to the party, Bart. Sigh…

          Mark

        • diogenes
          Posted Sep 13, 2011 at 4:31 PM | Permalink

          It seems that you have some interesting results – is anyone going to make a summary and present them to Spencer?

        • Bart
          Posted Sep 13, 2011 at 5:27 PM | Permalink

          Spencer should be aware. I did post about it on his blog. I think there is going to be a lull while people come to grips with the analysis, then there will be a ramp up in commentary and analysis. My guess – they will find a hundred or a thousand irrelevant objections, such as Nick has brought up here, to deny the reality and stick their collective heads in the sand.

          Mark – this has really been my main beef from day one. The mainstream climate community seems to be virtually illiterate in the tools and standard forms which have been the mainstay of controls engineering since the days of Nyquist and Bode, not to mention Kalman. They treat the data and processes as though they were deterministic, and evince little understanding of the effects of phase delay, the subtleties of gain/phase relationships, the behavior of lead and lag networks, the necessary and sufficient conditions for robust stability and well behaved and smooth evolution of processes, or just basic feedback principles in general.

          They do not seem to understand the reduction of sensitivity inherent in negative feedback systems. Everything is a straight line, and the only tool you need is a linear regression. What you and I see as normal feedback dynamics they see as spontaneous, causeless, random motion.

          They are reinventing the wheel as they go along, and not doing a terrifically good job of it. You can’t teach an old dog new tricks. I don’t think things are going to get better until a new generation of climate scientists, schooled in the proper subjects, comes to the fore.

        • diogenes
          Posted Sep 13, 2011 at 5:46 PM | Permalink

          Bart… I was feeling your frustration (along with Mark T). Nick seems to be pettifogging without really demonstrating that he knows what he is talking about. My take is that he wanted to ground the discussion in the “physics”, but he is bound up in his own conception of what the “physics” are, rather than taking on board a new way of approaching the topic. I have been awaiting a deus-ex-machina intervention from Rabett, explaining how he knew all this all the time.

        • Bart
          Posted Sep 13, 2011 at 5:49 PM | Permalink

          I mean, doing a linear regression on starkly, painfully obviously phase-lagged variables? Come on!

        • diogenes
          Posted Sep 13, 2011 at 5:51 PM | Permalink

          And I think this is why, if this is a really important subject, there is a need to track climate variables on an official, quality-controlled basis, as we do for economic aggregates.

        • Posted Sep 13, 2011 at 6:12 PM | Permalink

          “I mean, doing a linear regression on starkly, painfully obviously phase-lagged variables? Come on!”
          We’re at the stage where thread order is randomised, so I don’t know if that’s a reply to this. But it sounds like it.

          Of course, plenty of people (including me) calculate regression slopes of temp. But no need to defend it here, because that’s what your “feedback” is: the simple ratio of the two regression trends. That’s just arithmetic. Not my choice.

        • Posted Sep 13, 2011 at 6:25 PM | Permalink

          While this is a useful, powerful, and appropriate method, you need to be cautious interpreting the low frequencies at the length of the record, surely. Where the Bode plot magnitude rolls over can be due to the data length.

          Another potential problem that hasn’t been mentioned is that a system being driven by an 11-year periodic forcing will respond at that frequency, even though it’s not the natural period. So how can you be sure of estimates of rise/decay time?

        • Mark T
          Posted Sep 13, 2011 at 6:40 PM | Permalink

          I don’t think they even realize that feedback implies poles.

          I’m guessing you haven’t been following long enough to be bitter yet. It is stunning. I hope Spencer is open to expansion.

          Stockwell: I agree. We have mentioned several of the potential pitfalls along the way.

          Mark

        • Bart
          Posted Sep 13, 2011 at 10:01 PM | Permalink

          No, Nick, that wasn’t directed at you, but at Dessler’s “analysis” purporting to show positive feedback. Hence the link. A phase plane plot of lagged frequency localized variables gives a Lissajous oval whose major axis can point in any direction depending on the phase, and a myriad of continuously varying spirals when you essentially have a continuous frequency spread extruded through a nonlinear phase characteristic. You can get anything from a linear regression on the data that way.

          David – I’ve spent a lot of time addressing your concerns on this page. Basically, it comes down to this: A) I have given code for generating artificial data with the same low frequency correlation as evaluated for the real-world data B) With this, I am able to generate artificial data sets with the same time span and properly identify them using the same algorithm C) the algorithm does frequently fail to properly identify the artificial processes, but the frequency responses in those cases exhibit distinctive erratic behavior which tips you off to it being a poor estimate – the fact that the real world data response estimate is very nicely behaved suggests that the input data has a good frequency spread, and therefore the estimate is likely valid, and D) there is a very definite need to formulate better algorithms to deal with short data records with relatively long correlations and increase confidence in the results.

        • Bart
          Posted Sep 13, 2011 at 10:08 PM | Permalink

          Nick sez: “But no need to defend it here, because that’s what your “feedback” is: the simple ratio of the two regression trends.”

          You are nowhere near getting it. The zero frequency point here is arbitrary, because the data sets are not zeroed with respect to the actual equilibrium points, which are unknown. The -180 degree phase shift conclusion is based on all the other low frequency data points which show a definitive -180 degree phase shift. These ARE NOT all dependent on the regression trend.

          You apparently do not understand the frequency response, and are making broad and erroneous claims based on your misunderstanding.

        • Posted Sep 13, 2011 at 10:47 PM | Permalink

          No, Bart, you are not getting it. I actually understand frequency response very well – better, I think, than you do. But you are not dealing with my basic proposition. These numbers you are quoting, which are perfectly well characterised by H[1] but can be got in other ways if you want, are just the ratios of the OLS linear trends of dR and temp. There’s no frequency analysis required to verify that.

          It’s slightly confused by your illegitimate truncation of h. But if you take that out, the agreement is to four figures or more. Works every time.

        • Bart
          Posted Sep 14, 2011 at 12:05 AM | Permalink

          “But if you take that out, the agreement is to four figures or more.”

          The object is not to get agreement!!! The object is to estimate a response which is hidden by noise and extraneous processes! I do not want to recreate noise! I do not want to recreate other random inputs! I want to determine the underlying relationship between the input variable and the output using imperfect measurements of both!

          Do you ever read anything I write? I have gone over and over and over this same ground. I explain things to you, you ignore my explanations, and then have the temerity to assert that you know this stuff “better, I think than you do.” This has become a sad little joke.

          “…are just the ratios of the OLS linear trends…”

          This is only an approximation which holds very near zero frequency because of L’Hopital’s rule. The values at higher frequencies, where the gain plot depicts the almost flat passband of the response, are related to other properties. It all holds together as a whole. This is a VERY commonly encountered response type in the natural world.

          Besides which, of course the trends are related. Why shouldn’t they be? If the temperature series changes direction, so will dR. But, not right on a dime. There will be a lag of several years before it becomes apparent – you are dealing with a time constant near 5 years! And, if the driving force changes again in that time, then the response will be attenuated, because that is motion at frequencies outside of the passband.

          This is what filters do. This is what they are for.

        • Bart
          Posted Sep 14, 2011 at 12:49 AM | Permalink

          “The values at higher frequencies, where the gain plot depicts the almost flat passband of the response, are related to other properties. It all holds together as a whole.”

          I see now a way to make this clear. Of course, the very low frequency stuff is very much like a linear regression. The entire Fourier transform is a regression against sinusoids of varying frequency. At very low frequency, you are essentially regressing against a linear trend over the span of the data, because sin(eps*t) ≈ eps*t (and, of course, cos(eps*t) ≈ 1, but since these time series are very nearly zero mean, that does not have much effect). But, as you get to higher frequencies, you are regressing against progressively more sinusoidal signals. The fact that the progressively curved regressors yield virtually the same result, in a manner which is characteristic of a very widespread and common type of system response, tells you that you have latched onto something significant.
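
          A quick numerical check (synthetic trending series; the slopes and noise levels are arbitrary assumptions) that the first few nonzero-frequency quotients sit near the ratio of the linear trends and vary smoothly across bins:

          set.seed(1)
          n    <- 124
          temp <- 0.004*seq_len(n) + rnorm(n, sd = 0.02)  # slow warming + noise
          dR   <- -0.020*seq_len(n) + rnorm(n, sd = 0.10) # flux decline + noise
          Mod((fft(dR)/fft(temp))[2:4])  # all near 5, the |slope ratio|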

        • Bart
          Posted Sep 11, 2011 at 7:19 PM | Permalink

          “Now h is initially a bi-directional impulse response.”

          I also just realized that you have a misconception here. A discrete Fourier transform of a real time sequence produces a frequency response which is mirrored across the Nyquist frequency. But, that symmetry does not hold in the inverse Fourier transform of a complex sequence. The inverse Fourier transform of Y/X is not “two-sided”, i.e., it does not have midpoint symmetry. And, it is real valued – I used the “real()” function on it just to ensure that MATLAB did not return a complex valued function with small complex values due to numerical error.

          Truncating the later values does smooth the frequency response estimate a bit, but it does not fundamentally change the phase response.

        • Posted Sep 11, 2011 at 10:41 PM | Permalink

          Bart,
          I don’t see that. Is it not true that both FFT (or DFT) and iFFT produce periodic output, period Nsamp? I don’t see how it could be otherwise, given how they are expressed as trig functions. And I don’t see how being complex affects periodicity.

          As I understand your model, it is just that dR is equal to h convolved with temp. You FFT to turn convolution into multiplication, get FT(h) as a quotient, and iFFT to get h as a real. In the original hypothesis h is two-sided, and it should come out that way.
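
          In R the whole construction is three lines (a sketch; temp and dR are assumed to be equal-length numeric series, and the zero-padding to 8192 samples used in the actual runs is omitted):

          N <- length(temp)
          H <- fft(dR)/fft(temp)              # estimated transfer function Y/X
          h <- Re(fft(H, inverse = TRUE))/N   # impulse-response estimate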

        • Mark T
          Posted Sep 12, 2011 at 7:34 AM | Permalink

          You are confusing symmetry with periodicity. The periodicity of an N point FFT/IFFT is N. The N samples have no guarantee of symmetry. As Bart noted, an IFFT of arbitrary complex data will be complex, but it need not be symmetric.

          Also, an assumption of symmetry imposed upon h initially would be silly at best. This would only occur if the transfer function were linear phase finite impulse response, something that rarely occurs in natural systems. Bart is inferring feedback simply from knowledge of the shape of the transfer function as well.

          Mark

        • Posted Sep 12, 2011 at 8:05 AM | Permalink

          Mark, is this addressed to me? No, I didn’t say that h would be symmetric, and I’m very well aware that it isn’t. I’m simply saying that it isn’t one-sided, and because the periodic representation starts at time zero, the portion for negative times appears at the other end of the plot.

          This was my issue with Bart’s code – he multiplies h by a taper, which starts at 1 for small positive t and goes to zero at 2048 (of the 8192 sample points). By that stage it seems that h can well be truncated, but the effect continues on to zero the significant parts of h corresponding to negative time.

        • Mark T
          Posted Sep 12, 2011 at 8:32 AM | Permalink

          There is no “negative time” portion of h. The first point out is time t=0 and it progresses forward from there (h has an identical progression to the series it came from). The windowing function merely removes the wiggles at the end, which has little impact, as should be verifiable. Bart has already stated this.

          I mention periodicity because you did twice in the first paragraph to which i responded. You seem to be under the impression that the output of an IFFT consists of positive and negative halves like the FFT… It does not.

          Mark

        • Posted Sep 12, 2011 at 8:58 AM | Permalink

          No, the iFFT returns a periodic function. It is made up from the same trig functions as the FFT. If you continued on, point 8193 would be the same as point 1 (we’re using Nsamp=8192).

          In this plot I’ve shown the smoothed h (in black – ignore the red) plotted with a 180 deg phase shift on the time axis. That is, I’ve plotted values 4097 to 8192, then 1 to 4096 of h, and I’m showing a centered window. That’s the time series that is actually convolved with temp to reproduce dR.

        • Mark
          Posted Sep 12, 2011 at 9:50 AM | Permalink

          The function is periodic with period of N, Nick. That does not imply symmetry about some point 0 nor does it imply some sort of periodicity within the N samples themselves. The results of an IFFT have the same temporal span as the original input.

          Mark

        • Bart
          Posted Sep 12, 2011 at 9:51 AM | Permalink

          Nick – this is nonsense. There is no symmetry such as you are suggesting. Make some runs with artificially generated data such as I have prescribed and see. The stuff from 4097 to 8192 is just a bunch of junk which comes about due to independent processes in the data. It should be excised with the taper because it is useless. But, if you insist on not doing the taper, don’t do it. You will get the same result in the important low frequency region.

        • Mark
          Posted Sep 12, 2011 at 10:00 AM | Permalink

          That is, I’ve plotted values 4097 to 8192, then 1 to 4096 of h, and I’m showing a centered window.

          It is incorrect to swap the halves of an IFFT result. Periodicity does not have anything to do with this.

          It is made up from the same trig functions as the FFT.

          Those “trig functions” are merely an orthonormal basis set consisting of sinusoids. This does not imply anything regarding their temporal distribution as the FFT does with the frequency distribution.

          Think about it this way: take an arbitrary real input function and perform an FFT. The result is a DC term, followed by the positive plane frequencies and then the negative plane frequencies which have complex conjugate symmetry. Performing an IFFT returns the original real function.
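
          A tiny round-trip check (random real input) of that symmetry:

          x <- rnorm(8)
          X <- fft(x)
          all.equal(X[2:4], Conj(X[8:6]))              # conjugate symmetry about Nyquist
          all.equal(x, Re(fft(X, inverse = TRUE))/8)   # IFFT returns the real series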

          The problem Bart noted above, in which he takes the IFFT of Y/X, results from finite-precision effects. You can effect the same result as his (taking the real part) by using the ‘symmetric’ switch in MATLAB’s IFFT, which will treat the negative half as the flipped conjugate of the positive half even if there are small rounding errors.

          Mark

        • Posted Sep 12, 2011 at 11:13 AM | Permalink

          Mark and Bart, in the interests of saving time: Nick is a mathematician. He knows what an FFT is, and an IFFT. He knows their properties for real and complex input data.

        • Mark
          Posted Sep 12, 2011 at 11:45 AM | Permalink

          I know who Nick is. I disagree that he “knows their properties,” however, as his posts seem (to me and apparently Bart) to indicate. He is misinterpreting the periodicity concept (which is plainly stated as N in his link to Wikipedia, and I have noted twice now.)

          Mark

        • Posted Sep 12, 2011 at 4:32 PM | Permalink

          Mark and Bart,
          Truncating h does make a difference. It shows in the h integral that Bart has been citing as the large negative feedback factor, -9.4 W/m2/°C. If you don’t taper h, you get -12.22 W/m2/°C, even larger. But then because it is exact, you can see where the figure comes from.

          The OLS regression grad of temp is 0.000412 °C/month
          and the grad of dR is -0.005039 W/m2/month
          quotient is -12.22 W/m2/°C.

          I’ve added the convolution of Bart’s truncated h to the plot at the Moyhu post. You can see that while the untapered version (red) fits exactly, the tapered version (gold) gives a substantially different result.
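
          (For anyone checking that arithmetic: the two gradients are ordinary OLS slopes; a minimal sketch follows, with temp and dR the two monthly series, and the values in the comments are the ones quoted above.)

          n = (1:length(temp))';      % time in months
          pT = polyfit(n, temp, 1);   % pT(1) ~ 0.000412 degC/month
          pR = polyfit(n, dR, 1);     % pR(1) ~ -0.005039 W/m2/month
          pR(1)/pT(1)                 % quotient ~ -12.22 W/m2/degC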

        • Bart
          Posted Sep 12, 2011 at 8:34 PM | Permalink

          Nick, you’re just seeing the higher frequency noise. Stop mucking around in the time domain and look at the frequency response directly in the frequency domain.

          See here, also.

        • Bart
          Posted Sep 12, 2011 at 9:07 PM | Permalink

          PaulM
          Posted Sep 12, 2011 at 11:13 AM | Permalink

          “He knows their properties for real and complex input data.”

          The impulse response is not “two sided”. On this, there is no middle ground.

        • Mark
          Posted Sep 12, 2011 at 9:22 PM | Permalink

          One only has to consider why an impulse that arrives at time t = 0 cannot generate any negative time response.

          Mark

        • Posted Sep 12, 2011 at 9:43 PM | Permalink

          I’ve put up a post here which I hope is more explanatory on the issues of periodicity, and what the impulse response function (as calculated here) means.

          It’s true that if we actually knew that there was causality, the impulse response function would be one-sided. But we don’t. All that is done is that two series are FFT’d and the response function obtained by division. It’s just the function which convolves with T to get dR. And it does. And the truncated version doesn’t at all. That needs explaining if you’re going to truncate.

        • Mark
          Posted Sep 13, 2011 at 12:08 AM | Permalink

          Certainly you can try to argue that causality has not been established, and likewise interpret the IFFT output differently; however, that does not allow you to create a new ordering for the data. The IFFT, whether you like it or not, progresses forward in time. The index n is part of t = nT, where T is the sample period.

          I’m not sure if you’re simply confused because the FFT “frequencies” wrap to the negative plane or what. You get the wrap (in radians) because it is sampled, and the spectrum repeats every 2pi radians. The IFFT result is interpreted as a periodic repeat, but time progresses steadily forward. Data at index n = 0 is the same as n = 8192, but there is no wrap into negative time akin to the radian frequency wrap.

          You really need to get your head around the distinction. It’s pretty clear at this point that you are nothing but a distraction.

          Mark

        • Bart
          Posted Sep 13, 2011 at 2:04 AM | Permalink

          Nick, did you ever try your method on the artificially generated data? That data is without question causal, yet you will find the same issues there which you bring to the fore. Why? Because you are not working with deterministic signals but stochastic ones. You are producing an estimate, not a 1:1 correspondence.

          There is a very well understood tension in spectral estimation using the FFT, that of bias versus variance. A PSD estimate, or a cross spectral estimate (and what we are producing here is actually an estimate of the cross spectrum divided by the input power spectrum), is highly variable. The fundamental result in Fourier methods of spectral estimation is that the variation is as large as the quantity being estimated, and the variance does not go down as the data record gets larger. A naked FFT is therefore not a consistent estimator for spectral properties. If you don’t believe me, google “psd estimation trade off of bias and variance” and look at all the hits you get.

          So, tapering windows are used. Various methods of smoothing are used. These all bias the estimate, and reduce resolution, but they reduce the variance, too. This is why FFT based methods of spectral analysis require considerable operator interaction, and cannot be made into simple black box batch algorithms. The analyst has to choose the best trade off between bias and variance.
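
          A minimal sketch of that trade-off, using white noise so the true spectrum is known to be flat (the segment count K is an arbitrary choice here):

          N = 8192; x = randn(N,1);              % white noise: true PSD is flat
          Praw = abs(fft(x)).^2/N;               % raw periodogram: mean ~1, std ~1 regardless of N
          K = 16; L = N/K; Pavg = zeros(L,1);    % instead, average K short segments
          for k = 1:K
            seg = x((k-1)*L+1:k*L);
            Pavg = Pavg + abs(fft(seg)).^2/L;
          end
          Pavg = Pavg/K;
          [std(Praw) std(Pavg)]                  % std drops ~sqrt(K), at a K-fold cost in resolution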

          The stuff you are seeing beyond the “meat” of the response, which is eliminated by the taper, is NOT a reversed time, non-causal response. It is a phantom of noise and external processes not related to the process in which we are interested: the driving of cloud formation by temperature. Being able to reconstruct the exact time series with the impulse response only means you are additionally reconstructing those parts of the process in which we are not interested. You are insisting on having an unbiased estimator, but you are giving yourself entirely over to having the maximum variance.

          And, as I’ve tried to point out time and time again, but you seem never to have tried it, it DOES NOT AFFECT the low frequency -180 degree phase shift which defines this system to be part of a negative feedback. Do it. With taper, without taper… it’s all the same. We have a very strong negative feedback here, and there’s nothing whatsoever in your obsession with this minor detail which changes that.

        • Bart
          Posted Sep 13, 2011 at 2:10 AM | Permalink

          And, causality is most undeniably established here. It is established by the negative slope of the phase response, which indicates a time lag in the output variable. If the slope were positive at low frequency, that would indicate a phase lead, and a non-causal response. It isn’t. It doesn’t.

          Can we please move on from these trivialities?

        • Posted Sep 13, 2011 at 2:15 AM | Permalink

          “It is incorrect to swap the halves”
          I’m not swapping halves. I’m just plotting with a different time convention. As I said on the blog, it’s as if you were plotting some variable around the Equator that had significant behaviour around the date line. If you plotted with conventional longitude, you’d have some of that peak at one end of the plot, and some at the other. But if you declared the Greenwich meridian to be the longitude break (from -180 to 180), and the DL to be Lon 0, then you’d see it all in the middle of the plot. You haven’t changed any reality, just conventions about periodicity. Changing the date line doesn’t change the map.

        • Mark T
          Posted Sep 13, 2011 at 10:24 AM | Permalink

          What you did was slide the window, effectively putting the impulse in the middle of the response, which is the same as swapping halves.

          Mark

        • Posted Sep 13, 2011 at 7:36 PM | Permalink

          “which is the same as swapping halves.”
          No. It’s the equivalent of looking at a Mercator projection with lat 180 in the middle instead of lat 0. The world is still the same.

        • Posted Sep 13, 2011 at 8:31 PM | Permalink

          I mean, of course, longitude 180 etc

        • Posted Sep 12, 2011 at 9:10 AM | Permalink

          Here’s Wiki on periodicity.

        • Posted Sep 13, 2011 at 12:26 AM | Permalink

          Mark,
          There’s no indication in this method of analysis that these numbers have anything to do with time. The underlying variable could have been distance, longitude, anything. In fact, as entered, the numbers are running backward in time. There’s nothing to say what the direction is.

          The same applies to FFT’s in general. They can be applied to quantities that vary in time, space, or whatever.

          The device of truncating h does give it a causal aspect. But then it loses the basic property of being a function which on convolution generates dR. It no longer relates the variables. If you think h was one-sided you might like to explain why removing the non-existent side had that effect.

        • Mark
          Posted Sep 13, 2011 at 1:10 AM | Permalink

          Excuse me? This is ridiculous.

          There’s no indication in this method of analysis that these numbers have anything to do with time.

          Except that the data are a time series.

          The underlying variable could have been distance, longitude, anything.

          The variables are actually cloud cover and temperature… both sampled in time.

          In fact as entered the numbers are running backward in time.

          They are both going in the same direction, and thus the response is in the same direction.

          The device of truncating h does give it a causal aspect

          The truncation did not give it causality. The clear 2nd order response coupled with a reasonable assumption that there is at least some connection between the two is why Bart inferred causality.

          If you think h was one-sided you might like to explain why removing the non-existent side had that effect.

          Had what effect? I looked at your plots, though it was not clear whether you swapped halves before using the impulse in the convolution. Typically, lags greater than half the record length provide increasingly ambiguous results.

          Mark

        • Posted Sep 13, 2011 at 1:29 AM | Permalink

          “Except that the data are a time series.”
          Well, you know that. But the algorithm isn’t notified of that anywhere. You just enter two lists of numbers, dR and T. And if it were to use that ts information, you’d have to tell it the time direction. As I say, it’s backward to the normal.

          “The clear 2nd order response coupled with a reasonable assumption…”
          The second order response means nothing. It’s at the low frequency end – Bart’s expression is just the Taylor series expansion of X/Y, inverted. All its appearance tells you is that X has a zero in the complex plane near 0.1 yr^-1. Which is the zero of the gate function FT and has little to do with the properties of T.

          “Had what effect?”
          In the plots I showed, the red had h unaltered, the cyan had h smoothed (30 month triangular), but not truncated, and the gold had been truncated to low positive t (Bart’s taper). Both smoothing and truncation were just prior to convolution. The positive part of h up to about 80 years was unaffected, the next 80 years tapered.

        • Bart
          Posted Sep 13, 2011 at 2:18 AM | Permalink

          Please see above. This errant discussion has gone on long enough.

        • Mark
          Posted Sep 13, 2011 at 10:16 AM | Permalink

          Well, you know that. But the algorithm isn’t notified of that anywhere.

          The algorithm doesn’t need to “know” what the order is. You and I know based on the order of the input samples.

          The second order response means nothing. It’s at the low frequency end – Bart’s expression is just the Taylor series expansion of X/Y, inverted.

          Which is the impulse response of the transfer function from Y to X. If there were no cause/effect relationship, you wouldn’t see such a response.

          I don’t doubt that the DC gain term is the ratio of the trends because it is the ratio of the DC location in the frequency plane, which would necessarily include any slopes on the data that consist of less than half a cycle (one full cycle would be the 2pi/N bin.)

          Mark

        • Bart
          Posted Sep 13, 2011 at 11:44 PM | Permalink

          Nick said: “Which is the zero of the gate function FT…”

          The zeros of the gate function do not appear in the gated FT. One can think of the finite span of data as an infinite span multiplied by the gate function. Multiplication in the time domain transforms to convolution in the frequency domain – the FT of the gate function smooths the combined FT. In addition:

          “All its appearance tells you is that X has a zero in the complex plane near 0.1 yr^-1. Which is the zero of the gate function FT and has little to do with the properties of T. “

          The zeroes of the gate function are on the imaginary axis. The zeroes of X are well into the complex plane with a significant damping ratio. Just because the magnitudes of the zeroes are very roughly near one another does not mean they resemble each other in the slightest.

        • Posted Sep 14, 2011 at 5:55 AM | Permalink

          Bart
          “The zeros of the gate function do not appear in the gated FT.”
          Yes, I agree with that. The location of the zeroes is not connected.

          I did a little experiment – you might like to try it. It illustrates some of the points where we differ.

          I simply reversed the data vectors (flipud). Now if there is causality in the analysis, that should be a big deal. What happened?

          As you might expect, X, Y and X/Y were transposed. And h came out exactly reversed. Those “negative t” numbers that you said were just noise are now the positive t numbers for the reverse problem. And so, if you don’t truncate, H is also simply transposed, and the low freq limit is just -12.22 W/m2/C, exactly as for the original.

          But if you do use the same truncating function, you get H[1] = -2.5718 W/m2/C. The original (unreversed) number I got was H[1]=-9.628. And, yes, they add to -12.20.

          So what? In my view, h is indeed two sided about t=0. It has area -2.57 on one side, -9.628 on the other. When you don’t truncate, you get -12.22 in both directions. When you do truncate, you get the appropriate split.

          And if you don’t truncate, all the numbers you are pulling out are exactly the same with the data reversed. So the algorithm has no expectations about causality.
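
          (That experiment is easy to reproduce; a minimal sketch, with temp and dR the two data columns, unpadded and untapered:)

          X = fft(temp); Y = fft(dR);
          h1 = real(ifft(Y./X));                               % forward problem
          h2 = real(ifft(fft(flipud(dR))./fft(flipud(temp)))); % both series reversed
          max(abs(h2 - [h1(1); flipud(h1(2:end))]))            % ~0: h comes out circularly reversed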

        • Tom Gray
          Posted Sep 14, 2011 at 6:26 AM | Permalink

          Nick Stokes writes

          ===================
          So the algorithm has no expectations about causality.
          ===================

          Bart is not saying that the algorithm has “expectations about causality”. He has told you quite a few times that the frequency response is indicative of a type of system that is described as providing feedback. It is the empirical data which he uses as “expectations about causality”. That is what he is saying. Do you ever read anything that he writes?

        • Posted Sep 14, 2011 at 7:18 AM | Permalink

          “Do you ever read anything that he writes?”

          Well, I seem to be the only person here who has gone through his code and got it working.

          But both Mark and Bart have been saying how silly I am to think h is two-sided. If it’s one-sided then it’s causal.

          But Tom, how about making a substantive contribution. Can you explain why it makes sense to cut the impulse response down the middle? Do you have a view on why the data from start 2001 shows a strong positive feedback, on this analysis?

        • Tom Gray
          Posted Sep 14, 2011 at 7:40 AM | Permalink

          Bart and Mark have told you this many many times. The impulse response is being used as part of a model of a physical system. The time t=0 has a meaning in that model. Bart has told you this and shown you a diagram of that physical system. You keep saying the same thing over and over again without responding to Bart’s point. He tries to answer you and you pay no attention.

        • Tom Gray
          Posted Sep 14, 2011 at 7:42 AM | Permalink

          Forget the impulse response. Look at the step response

        • Posted Sep 14, 2011 at 8:06 AM | Permalink

          Tom,
          Yes, of course t=0 has meaning. The issue is whether the impulse response h is one-sided (about t=0).

          I know Bart has a physical system in mind. But his numbers come from the low frequency limit of the Fourier analysis.

        • Tom Gray
          Posted Sep 14, 2011 at 10:18 AM | Permalink

          The impulse response is the time domain response of the system to an impulse which occurs at t=0.

          The step response is the time domain response to a unit step which occurs at t=0.

          The impulse response and the step response have physical meaning.

          What more needs to be said?

          Bart is testing the hypothesis that the “cloud” system can be modeled by a simple feedback system. He has taken the empirical data and analyzed it to see if it has the characteristics of such a model. He notes that it has. He then indicates that this is evidence that the simple feedback model is correct and that the feedback is of a specific magnitude and phase (with appropriate impulse and step response as aids to this understanding). He notes that the mathematics that he is using has difficulty with the short data set but that the conclusion can reasonably be relied upon. He also inquires whether there is other mathematics that could be of use.

          That is all Bart is saying (to my understanding). The model appears to work and appears to be compatible with the data. That is all. Bart’s investigation is into the physics of the system and not into mathematics.

          What you are saying is not incorrect (as far as I can tell). It is just not useful in determining if the feedback model is applicable to the physical system. There may be other models that fit the data better than Bart’s. Nothing that you are saying has any bearing on this or the validity of Bart’s model.

        • Posted Sep 14, 2011 at 11:34 AM | Permalink

          What more needs to be said? Well, quite a lot, really. To start with, Fourier analysis is not the correct mathematical tool to find an impulse response or step response; you’d need a Laplace transform or a Green’s function. But here is not the appropriate place – this has gone on too long anyway. Why not take it to Nick’s blog?
          [As regulars will know, it is unheard of for me to agree with Nick on anything 🙂 ]

        • Bart
          Posted Sep 14, 2011 at 11:34 AM | Permalink

          Nick Stokes
          Posted Sep 14, 2011 at 5:55 AM | Permalink

          “As you might expect, X, Y and X/Y were transposed. And h came out exactly reversed. Those “negative t” numbers that you said were just noise are now the positive t numbers for the reverse problem.”

          Nick… causal and anti-causal responses do not both occur. Either temp drives dR or dR drives temp, not both. The two series are correlated in either direction, but only one direction can be described by a transfer function.

          Take a look at the system diagram. The transfer function from temp to dR is the part I have circled. That is the thing we want to estimate. The box on top is the transfer function from dR to temp, but the input is polluted by the Radiation Forcing (RF) which adds to the dR forcing, so you cannot resolve that transfer function.

          That is, if I call the top transfer function T1 and the bottom one (the circled one, the one we want) T2, then

          temp = T1[RF + dR + OT1]

          dR = T2[temp + OT2]

          where I use square brackets to indicate the operator relationship, that operation being convolution in the time domain, and multiplication in the frequency domain. “OT1” and “OT2” stand for “other terms” not depicted which include other forcing and feedback.

          I am assuming I can get T2 as dR/temp (with “/” indicating the appropriate operation in the chosen domain – deconvolution in time, division in frequency), doing my best to recognize those parts of dR which are due to OT2 and eliminate them as much as possible, hence the tapering and exclusion. The assumption is that the dR:temp relationship is dominant, particularly at low frequency. If we find a readily recognizable type of transfer function there (as we have) then that assumption is borne out, i.e., we consider it likely true.

          But I cannot get T1 = temp/(RF+dR+OT1) because I do not have RF or OT1, and RF and OT1 are assuredly dominant. If we could get T1, then the partial transfer function from RF to temp could be gotten as T1/(1-T1*T2). This is a negative feedback (sub)system precisely because T2 has a 180 degree phase shift at zero frequency, and 1/(1-T1*T2) is the reduction in sensitivity conferred by the feedback.
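
          (A purely illustrative numerical check of that last point at DC, with scalar gains standing in for the transfer functions; both numbers are assumptions, T2 being roughly the low-frequency estimate discussed upthread:)

          T1 = 0.5;                   % assumed DC gain of the top box, temp per unit forcing
          T2 = -9.5;                  % assumed DC gain of the circled box (negative feedback)
          openloop = T1               % response to RF with the feedback path cut
          closedloop = T1/(1-T1*T2)   % = 0.5/5.75 ~ 0.087: sensitivity reduced 5.75-fold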

          What you are seeing is a phantom of circular convolution. In fact, to get the transfer function, what we are doing is deconvolution, which should eliminate the “anti-causal” part but doesn’t because of imperfections in the data.

          The FFT is a discrete time Fourier Transform. If we could use continuous time Fourier Transforms, you would not see this happening.

        • Bart
          Posted Sep 14, 2011 at 11:36 AM | Permalink

          Oops. Forgot to close tag.

        • Bart
          Posted Sep 14, 2011 at 12:13 PM | Permalink

          PaulM
          Posted Sep 14, 2011 at 11:34 AM | Permalink

          “To start with, Fourier analysis is not the correct mathematical tool to find an impulse response or step response…”

          Nonsense. There is a 1:1 relationship between the Fourier transform and the impulse response in continuous time. In discrete time, you have to guard against aliasing, but that isn’t really too hard.

          I have a big post for Nick in the moderation queue because of the multiple links. Check back later.

        • Posted Sep 14, 2011 at 2:24 PM | Permalink

          PaulM,
          “Why not take it to Nick’s blog?”

          A good idea. The threads here do get chaotic after a while. I’d be very happy to host a post by Bart (or anyone else on this topic) – there’s no limit on graphs or links. You can even do LaTeX 🙂

        • laterite
          Posted Sep 14, 2011 at 2:57 PM | Permalink

          Nick: “Well, I seem to be the only person here who has gone through his code and got it working”

          Here http://landshape.org/enm/fft-of-tsi-and-global-temperature/

          Different data, different system response, useful insight.

        • Bart
          Posted Sep 14, 2011 at 4:36 PM | Permalink

          “In fact, to get the transfer function, what we are doing is deconvolution, which should eliminate the “anti-causal” part but doesn’t because of imperfections in the data.”

          Actually, I think the relevant imperfection is the finite window of data, which allows the circular convolution to come back around and do the reverse time correlation.

          But, as I said, we are only interested in one time direction or another, and the one we are interested in is temp forcing dR. So, this essentially requires that the impulse response estimate be cut off at the midpoint or earlier.

          I appreciate the invite to your blog, Nick but, truth to tell, I am just about spent on this. I think I’ve covered just about everything and I think I’m ready to push this little bird out of its nest.

        • Bart
          Posted Sep 14, 2011 at 5:10 PM | Permalink

          “The FFT is a discrete time Fourier Transform.”

          Actually, it is a frequency sampled discrete time Fourier Transform. Circular convolution comes about because of that sampling.

        • Posted Sep 14, 2011 at 6:22 PM | Permalink

          Just in case anyone does want to continue discussion at Moyhu, I’ve posted the relevant comment subthreads from here on this page. There is a thread still open there, or if there is interest, I could start a new one.

        • Bart
          Posted Sep 16, 2011 at 1:53 PM | Permalink

          The deconvolution should deliver only the causal part. The reason it creates other fluff at the end is inherent to the algorithm operating on noisy and polluted data.

          Why does the algorithm favor the causal part? Because what we are doing is effectively Wiener Deconvolution.

          Our algorithm has computed the impulse response as

          h = real(ifft(Y./X))/T;

          We can do this in three steps. First, compute the cross spectrum

          C = conj(X).*Y;

          Then, the power spectrum

          P = conj(X).*X;

          Ideally, both of the spectra should be smoothed, but that eliminates resolution, and we are already hurting for resolution, so we don’t do any smoothing. Since ideally Y = H.*X, where H is the transfer function, C = H.*P. Thus, dividing it out should give us H. The Wiener formula adds a noise-to-signal term in the denominator to prevent division by zero:

          h = ifft(C./(P+sn))/T;

          If we wanted the anti-causal part, we would exchange X and Y in the above. Anything beyond the halfway point in this impulse response estimate is dross, and should be excised with a tapered window. A knowledgeable analyst should look for standard forms of impulse response at the low end and window even further down to isolate those components.
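
          Putting those steps together as one runnable sketch (synthetic data standing in for temp and dR; sn here is an assumed regularizing constant, not a value from the real analysis):

          T = 1;                               % sample period (months)
          x = randn(1024,1);                   % stand-in input (temp)
          y = filter([0.3 0.2],[1 -0.9],x);    % stand-in output (dR), causal by construction
          X = fft(x); Y = fft(y);
          C = conj(X).*Y;                      % cross spectrum
          P = conj(X).*X;                      % input power spectrum
          sn = 1e-3*mean(P);                   % assumed noise floor
          h = real(ifft(C./(P+sn)))/T;         % impulse response estimate
          plot(h(1:50))                        % the early lags approximate the true response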

        • Bart
          Posted Sep 16, 2011 at 1:59 PM | Permalink

          I went to Nick’s site to alert him to this new message, but it won’t let me post without some account or other. So, here is my final message to him:

          Bah. You don’t know what you are doing, Nick. I have put up a final post at the original thread which is the final word on how the data should be treated, and what to expect. -Bart

        • Bart
          Posted Sep 16, 2011 at 3:47 PM | Permalink

          On h = real(ifft(Y./X))/T versus h = ifft(C./(P+sn))/T – one should probably take the real part of the latter as well. The imaginary part, if there is any, is just small and inconsequential numerical error which needs to be eliminated because the impulse response is inherently real.

        • Posted Sep 16, 2011 at 5:39 PM | Permalink

          Bart,
          I think implementing the algorithm with a cross-spectrum calculation is a better idea and I’ll try it.

          But there is no directionality, in the sense of causality, implied in Wiener deconvolution. In fact, a major application is in image processing.

          If you look at the Wiki ref for convolution that your link refers to, all the integrals are over the whole real line. Two-sided about 0.

          And there is no magic that says the impulse response h turns to noise halfway along the spectrum. As I said earlier, a simple test of that is to run the algorithm with data reversed. It’s still a perfectly well defined problem. You get the same h, exactly, but reversed. The numbers that were at the high end are now near zero. They are the impulse response numbers for this reversed problem. They are not noise.

          If you want to see what is wrong with your taper w applied to h, just look at its FFT. You’d be expecting 18 dB/octave roll-off for a Hann window. Wrong. It’s 6 dB/octave, and looks like a sinc function. That’s the effect of the sharp cut at zero. Everything is periodic – that cut is real.

          Sorry you had trouble accessing my blog. It’s a standard Blogger site. They ask for ID, but it used to be possible to get in without.

        • Bart
          Posted Sep 16, 2011 at 9:08 PM | Permalink

          As Ed Koch was fond of saying, “I can explain it to you, but I cannot understand it for you.”

          In fact, the direction of causality in the Wiener deconvolution is implied by the FFT you take the conjugate of in forming the cross spectrum, and by the power spectrum you divide by.

          You are reinventing the wheel, and making things up as you go along. This is old hat, with roots going back to the 1940s and beyond. It would behoove you to be a bit less categorical until you understand things.

        • Bart
          Posted Sep 16, 2011 at 9:25 PM | Permalink

          It’s really very simple to prove I am right, Nick. Generate some artificial data according to my prescription:

          a = [1.000000000000000 -1.967462610776618 0.968691947164695];  % AR (denominator) coefficients: a lightly damped 2nd-order pole pair
          b = -[0.617926899846966 0.611409488230977]*1e-2;  % MA (numerator) coefficients
          temp = randn(10000,1);            % white noise standing in for temperature
          dR = filter(b,a,temp);            % causal by construction: dR depends only on past and present temp
          temp = temp((10000-123):10000);   % keep the last 124 points,
          dR = dR((10000-123):10000);       % matching the length of the real record

          Do your little analysis and see that you get a phantom of a non-causal response here, too. But, the artificially generated data is causal by construction.
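
          (The analysis itself, applied to that data, is only a few lines; a sketch, with Nsamp as used upthread:)

          Nsamp = 8192;
          X = fft(temp, Nsamp);   % zero-padded transforms
          Y = fft(dR, Nsamp);
          h = real(ifft(Y./X));   % impulse response estimate
          plot(h)                 % junk appears in the second half even though
                                  % dR = filter(b,a,temp) is causal by construction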

        • Posted Sep 16, 2011 at 9:52 PM | Permalink

          “I have put up a final post at the original thread which is the final word on how the data should be treated, and what to expect.”
          And you say I’m categorical?
          I can’t see how your artificial data proves anything. Especially as it builds in constants from your erroneous h hack.

          But have you looked at the FFT of w yet?

        • Bart
          Posted Sep 16, 2011 at 10:18 PM | Permalink

          “I can’t see how your artificial data proves anything.”

          It proves that, even when you have “perfect” data, you still get a “ghost” of a “non-causal” (which is itself an absurdity) response which does not reflect anything real.

          I really should never have dignified this discussion and suggested in any way we were negotiating about this. I thought I could make it plain for you and teach you, but it is clear that you have no clue and do not want one. There are decades of practical use of these procedures and reams of literature about them. The science is settled. The End.

        • Bart
          Posted Sep 17, 2011 at 1:45 PM | Permalink

          “But I am not so ignorant of these matters.”

          You may understand some theory quite well, Nick, but you are completely out to lunch in understanding the nuts and bolts of practical application. In particular, you seem not to understand the application to stochastic sequences. You are not grasping even the most basic logical construct: you cannot have a non-causal “response”. Time flows in only one direction in this universe.

          When you construct the cross spectrum, you are constructing the Fourier transform of the cross correlation, which does indeed correlate the sequences backward and forward in time. When you divide that spectrum by the power spectrum, ideally you get out only a causal response determined by your choice of conjugate transform. You are deconvolving the spectra, and removing the effects of circular convolution.

          In the real world with noisy data, your spectral estimates are variable. They are not accurate. They do not coincide with the expected values. As a result, the deconvolution is imperfect, and you end up with a residue of the circular convolution. It is completely useless, unnecessary, and unreal.

          And, BTW, your creds in this field are lesser than mine.

          Steve
          Posted Sep 17, 2011 at 3:15 AM | Permalink

          “What is the primary difference between evaluating a Discrete-time Fourier Transform summing the complex components over all integers k (-inf to inf), rather than a Discrete Fourier Transform from k = 1 to N-1?”

          A Discrete Time Fourier Transform (DTFT) is a continuous function of frequency. The FFT is an implementation of the Discrete Fourier Transform (search for it on Wiki – I don’t want to go over my one-link limit) which is a sampled frequency version of the DTFT. As a result of the sampling, convolution of time series by taking the DFT of both, multiplying them together, and taking the inverse DFT yields a circular convolution (again, see Wiki).
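
          A minimal sketch of the circular convolution point (multiplying DFTs wraps the result unless you zero-pad to at least the full linear length):

          a = [1 2 3]; b = [4 5 6];
          conv(a, b)                       % linear convolution: 4 13 28 27 18
          real(ifft(fft(a).*fft(b)))       % circular, N=3: the tail wraps, giving 31 31 28
          real(ifft(fft(a,5).*fft(b,5)))   % zero-padded to N=5: matches conv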

        • Posted Sep 17, 2011 at 2:01 AM | Permalink

          Well, Bart, I am sorry to have damaged your dignity. But I am not so ignorant of these matters. I appreciate that the science is settled – I was in fact around when it was being settled. You just don’t know how to apply it.

          I have a PhD (1972) in the mathematics of control theory. I have done a lot of research in the mathematics of integral transforms. I was an author of what is still one of the most widely used methods of numerical Laplace Transform inversion.

          Details, with links, are here.

        • Steve
          Posted Sep 17, 2011 at 3:15 AM | Permalink

          Nick, Bart

          Firstly, I am no expert. I am interested in learning from this discussion by hearing a resolution.

          What is the primary difference between evaluating a Discrete-time Fourier Transform summing the complex components over all integers k (-inf to inf), rather than a Discrete Fourier Transform from k = 1 to N-1?

          Is the problem here a misunderstanding of Matlab DFT implementation (wrt periodicity, centering, etc.)? regards, Steve

        • Posted Sep 17, 2011 at 3:36 AM | Permalink

          Steve,
          Well, you can’t sum over all integers in finite time. But the DFT scheme where you use the same number of sampling points as frequencies in the sum has a number of useful features. It means that the DFT inverse is just the same DFT operator with negative frequencies. And it is important in enabling the acceleration that constitutes the FFT.

          It also avoids overfitting. Basically, if you have N sampling points, you can’t determine more than N coefficients. And the same logic on inversion means that you have to have exactly N. In matrix terms, you want the DFT to be represented by a square matrix.

          I don’t think there’s a matlab-specific issue. As Bart says, FFT is pretty much settled science.

        • Tony Hansen
          Posted Sep 17, 2011 at 7:25 AM | Permalink

          Nick- “… But I am not so ignorant of these matters…. I was in fact around when it was being settled…..I have a PhD (1972)….I have done a lot of research….. I was an author….”

          I was once advised to strenuously avoid polishing one’s own nameplate.
          But maybe the advice has changed since then.

        • Posted Sep 17, 2011 at 7:38 AM | Permalink

          “I was once advised to strenuously avoid polishing”

          Tony, I agree, and I have done so. I have not sought to speak from authority in this thread. But when you get stuff like:
          “I thought I could make it plain for you and teach you, but it is clear that you have no clue and do not want one. “
          well, it just has to be set straight.

        • Bart
          Posted Sep 17, 2011 at 1:46 PM | Permalink

          You started a new reply thread. Good thinking. See my response prior.

        • Bart
          Posted Sep 17, 2011 at 1:54 PM | Permalink

          Steve
          Posted Sep 17, 2011 at 3:15 AM | Permalink

          “What is the primary difference between evaluating a Discrete-time Fourier Transform summing the complex components over all integers k (-inf to inf), rather than a Discrete Fourier Transform from k = 1 to N-1?”

          Ah, I forgot to address your question entirely. The effect of a finite data window is to smear out the transform, reducing resolution. When two of the sequences are effectively windowed by a rectangular function (Nick’s “gate function”), the convolution of the two is effectively multiplied by a triangular function, a Bartlett Window. Multiplication in the time domain transforms as convolution in the frequency domain, so you are effectively performing a moving average on the true spectrum with the Fourier Transform of the window function.
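
          A minimal sketch of that smearing; the leakage appears as soon as a component fails to complete an integer number of cycles in the window:

          N = 128; n = (0:N-1)';
          X1 = abs(fft(cos(2*pi*8*n/N)));     % integer cycles: two clean spectral lines
          X2 = abs(fft(cos(2*pi*8.5*n/N)));   % half-integer cycles: smeared by the window's FT
          semilogy(0:N-1, [X1 X2])            % X2 shows the sinc-like leakage skirts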

        • Bart
          Posted Sep 17, 2011 at 2:02 PM | Permalink

          And, for crying out loud Nick, for once read what I wrote. I have answered all your qualms and laid them to rest many times over, and it is very, VERY frustrating to have to repeat myself over and over with you apparently taking no notice of what I have explained. I sense that you do a brief scan of what I write, assume I am still gainsaying you and, since you must be completely and totally correct, you then repeat back at me the same erroneous conclusions which I have already shown to be false. At least, just once, address one of my actual arguments instead of regurgitating the same cant.

        • Hoi Polloi
          Posted Sep 17, 2011 at 2:15 PM | Permalink

          Well Bart, welcome to the world of climatology…

        • Steve
          Posted Sep 12, 2011 at 4:40 AM | Permalink

          Bart, Nick,

          Can you convolve the smoothed impulse response with temperature and then see how well the output correlates with dR? Would this quantify the contribution of windowing/smoothing artifacts?

          interesting thread – thanks.

        • Posted Sep 12, 2011 at 7:04 AM | Permalink

          Steve,
          Yes, I’ve done it – it’s an update (below the others) on this page. The unsmoothed impulse response reproduces dR (cloud radiance) exactly, as it should. The smoothed is, well, smoother. Thanks for the suggestion – it’s a useful check.

        • Bart
          Posted Sep 13, 2011 at 2:17 AM | Permalink

          Just saw this after responding above. There, you will see an explanation of why we are not trying to derive an impulse response which can precisely recreate dR. In doing so, we would be allowing into our estimate the effects of extraneous processes, randomness, and measurement error which have nothing to do with the dynamics we seek.

        • Steve
          Posted Sep 14, 2011 at 2:58 AM | Permalink

          Bart, yes I understand this aspect.

          Nick, sorry, but I have to agree with Bart and Mark on almost every issue raised in the discussion. I am not an expert in control theory, but I use aspects of advanced signal processing in my own area of research and I consider many of your points to be completely nonsensical.

      • P. Solar
        Posted Sep 12, 2011 at 7:45 AM | Permalink

        Bart, I have not digested this discussion yet, but I thought it interesting that you found a 9.5 feedback.

        I have been attempting to fit lag regression of Spencer’s simple model to his satellite data. The best fit I could get (which was an amazingly good fit) included a feedback of 9.2 W/m2/K

        http://tinypic.com/view.php?pic=2rqjas6&s=7

        This is work in progress and a hand fit rather than a robust method, but from this end it has to be of that kind of value.

        Just what that feedback represents physically will be the next question. But at least short-term, shallow mixing (45m) seems to show strong neg. feedback.

        • Bart
          Posted Sep 12, 2011 at 9:52 AM | Permalink

          Neat!

        • Mark
          Posted Sep 13, 2011 at 10:19 AM | Permalink

          What has me curious is why both Spencer and Dessler came up with results that are barely distinguishable from zero.

          Mark

        • P. Solar
          Posted Sep 13, 2011 at 4:18 PM | Permalink

          Simple: it’s because they are both using OLS to fit a straight line to data that has a whole crock of noise and non-linear effects mixed into it.

          They are dumbly calling the result a “slope”; it isn’t.

          This does not surprise me from Dessler, who probably knows it’s wrong, but it fits his argument to ignore the fact.

          I don’t understand Spencer on this issue. He said in one reply that he had spent quite a bit of time looking at other regressions, but it seemed to be basically just averaging two OLS slopes.

          Fitting a linear model by linear regression in this context will give meaningless results.

          Any time spent arguing about whose slope is best is a waste of effort; they are both wrong, and fundamentally so.

  42. Michael Larkin
    Posted Sep 11, 2011 at 7:19 AM | Permalink

    Looks like something significant is going on here wrt Bart’s work. However, for us folks in the peanut gallery, comments at other blogs lead me to believe few are following it well – even the likes of Tallbloke, who knows piles more than I do. Here’s hoping that at some stage it gets translated into language anyone can understand.

  43. David L. Hagen
    Posted Sep 11, 2011 at 7:06 PM | Permalink

    At WUWT, Bill Illis compares Dessler 2010 results with cloud to global variability:

    While we are having no luck finding a good correlation between clouds and temperatures in a feedback sense (the scatters are providing r^2 of 0.02) . . . I’m getting Cloud variability being a very large part of the variability in the total Global Net Radiation Budget – anywhere from 65% to 100% (with R^2 between 0.29 and 0.77).

    Only an order of magnitude better!

    • P. Solar
      Posted Sep 12, 2011 at 8:35 AM | Permalink

      Anyone attempting OLS on data with more noise than signal, and with significant errors in the independent variable, does not understand the first thing about linear regression and how to use it.

      Sadly this is ubiquitous. How any of this stuff gets published is beyond me. And this is the kind of “analysis” they use to justify the parametrisation for the models.

      Until the climate science community gets past first-grade science-class techniques, they will not get anywhere.

      • Bart
        Posted Sep 12, 2011 at 10:10 AM | Permalink

        But, you may at least reasonably expect a linear relationship between variables which are coincident in time. The idea of applying linear regression to variables for which one is delayed with respect to the other so that the plot is a Lissajous oval is highly dubious. Applying it to variables with a frequency spread with frequency dependent nonlinear phase (variable delay) is nuts.

        • P. Solar
          Posted Sep 12, 2011 at 10:38 AM | Permalink

          Having spent a lot of time looking at his simple model and the data that he used for SB2011 I have just found something significant. Here is the result of R.slr decomposition of the hadSST for the period used.

          http://tinypic.com/view.php?pic=x2l7wl&s=7

          There is an interesting c. 20 month oscillation, plus the big swings around 2002 and 2007.

          One of the main features of the lag-regression analysis was the strong sinusoidal swing. This is what required the heavy bias towards “rad” forcing and the high feedback in using the model.

          This feature disappears if I limit the window to 2001,9:2007.8 .

          The remaining oscillation is interesting. It does not look like “ringing” since it is constant. I doubt that it is an artifact of the decorrelation since its period is about 20 months (at 24 I would have been suspicious).

          I speculate that this is ocean currents inputting heat from below the mixed layer. The model has this sort of input in the form of the “rad” term but uses random zero centred data for it.

        • P. Solar
          Posted Sep 12, 2011 at 10:44 AM | Permalink

          Now look at the lag auto-regression for the detrended dSST for the reduced period.

          http://tinypic.com/view.php?pic=29blgld&s=7

          For those looking at frequency analysis, that should be a clue !

        • David L. Hagen
          Posted Sep 12, 2011 at 1:56 PM | Permalink

          Thanks P. Solar
          Given the 10 year data, that seems pretty close to the ~11 year Schwabe (1/2 of the ~22 yr Hale) solar cycles.

  44. Socratic
    Posted Sep 11, 2011 at 7:26 PM | Permalink

    What follows is a recap of three posts I made on Dr. Spencer’s blog, concerning computation of the left-hand side of the main equation. You may recall that Dr. Spencer obtained 2.3 Wm^-2 and Dr. Dessler obtained 9 Wm^-2 for the LHS. The obvious differences between these papers were (a) Spencer used quarterly data, while Dessler used monthly data; (b) Spencer used a mixing layer depth of 25 meters while Dessler used a mixing layer depth of 100 meters.

    In his blog, Dr. Spencer recommended use of Levitus when computing the LHS. Levitus appears to be the World Ocean Atlas (WOA) which is available online in updated form, here:
    http://www.nodc.noaa.gov/OC5/WOA09/woa09data.html

    I found that mixing layer depth has already been computed by Levitus on a global grid, and available from NOAA, here: http://www.nodc.noaa.gov/OC5/WOA94/mix.html

    There are three criteria for ML in use: A) (most common definition) depth at which temp is .5 C lower than the surface; B) depth at which the density is .125 standard deviations greater than the surface; and C) depth at which the density is equal to what the density would be with a .5 C change. These three definitions give rather different results.

    After downloading all the data and running global weighted averages (weight = cosine[latitude]), the global average mixing layer depth for each definition was:
    A. 71.5 meters
    B. 57.2 meters
    C. 45.9 meters

    These numbers fall neatly between the depths used by Spencer and Dessler.

    Downloading quarterly data for temperature (objectively analyzed means) from WOA allows you to compute a weighted mean SST for the globe. The four weighted means thus computed were: JFM=18.293; AMJ=18.1672; JAS=18.128; OND=17.935, with a global annual mean of 18.130 C.
    The four differences between quarters are then .358, -.131, -.034, and -.193, with a standard deviation of those values being .247 C.

    To compute heat capacity (and change thereof) I used the mean temp of 18.130 as a “before” value and a changed temp of 18.130+.247=18.377 as an “after” value. I computed density and heat capacity for both using a salinity of 35 g/kg and the equations of Sharqawy et al. 2010. For a 1m x 1m x 25m column, I get mass=25634.58 kg (before), 25633.14 kg (after); HC=29854059 kJ (before), 29878463 kJ (after) for a quarterly change of 24404 kJ. The rate of change per quarter is therefore 24403840 J / 7889400 seconds = 3.1 Wm^-2. This is a bit higher than the 2.3 value given by Dr. Spencer. Note also that this value may be wrong; one could argue that we should be operating on a constant mass of water rather than a constant volume. Computing on that basis, the result would be 3.3 Wm^-2.

    But note that this was computed using a 25m ML depth. Using the Levitus ML depths gives for the LHS of the equation energy change rates of (A) 8.9 (B) 7.1 and (C) 5.7 Wm^-2 respectively. In other words, Dr. Spencer’s 2.3 Wm^-2 seems too low.

    Using monthly (rather than quarterly) WOA data, I find the global weighted-average SSTs by month: 18.176, 18.347, 18.357, 18.282, 18.147, 18.057, 18.134, 18.173, 18.078, 17.950, 17.871, 17.984 giving the same 18.13 average as in quarterly data.
    This gives Delta-Ts of: .192, .171, .010, -.075, -.135, -.090, .077, .038, -.094, -.128, -.078, .113, and the standard deviation of these is .117°C. Already we notice a major difference: if the SST is changing by typically .117 C in a month, we might expect it to change by .117 x 3 = .35 C per quarter. But the actual quarterly change is .25, which means that monthly data is more variable than quarterly data. Not a surprise, but here it is quantified.

    Now let’s repeat the same computations, but using Dr. Dessler’s assumptions. In this run I added a slight improvement: I also downloaded and used salinity data from WOA, which is a little less than the 35 I had been using (mean=34.586).

    Using T=18.130 as the “before” temp and 18.130+.117=18.246 as “after”, I find a “before” density of 1025.0639, and for a column 1×1×100 meters a mass of 102506.4 kg, specific heat of 4.001219 KJ/kg/K and heat capacity of 119468525 KJ. “After” density is 1025.0369, specific heat is 4.001262, and heat capacity is 119514517 KJ. The change over time is therefore 45991.5 KJ in 2629800 seconds, for 17.5 Wm^-2.

    Using the Levitus ML depths gives for the LHS of the equation energy change rates of (A) 12.5 (B) 10.0 and (C) 8.0 Wm^-2 respectively. In other words, Dr. Dessler’s use of 9 Wm^-2 seems about right, if used with a corrected ML depth, while Dr. Spencer’s LHS values are substantially too low.
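
    (The arithmetic above boils down to flux ~ density x specific heat x depth x dT/dt; a rough sketch with round numbers, not the full Sharqawy equation of state:)

    rho = 1025;                   % kg/m^3, seawater (approximate)
    cp = 4000;                    % J/kg/K (approximate)
    depth = 100;                  % m, Dessler's assumed mixing layer
    dT = 0.117;                   % K per month, the monthly std deviation above
    dt = 2629800;                 % seconds per average month
    flux = rho*cp*depth*dT/dt     % ~18 W/m^2, the same ballpark as the 17.5 above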

    Frankly, I’m new at these equations and perhaps I’ve made a mistake somewhere. If so, I hope Dr. Spencer (or someone else) will correct me.

    • David L. Hagen
      Posted Sep 12, 2011 at 1:50 PM | Permalink

      Socratic Re “Dessler used a mixing layer depth of 100 meters.”
      Spencer modified his post on Sept. 8, 2011 following Dessler’s response:

      Here I went ahead and used Dessler’s assumed 100 meter depth for the ocean mixed layer, rather than the 25 meter depth we used in our last paper. (It now appears that Dessler will be using a 700 m depth, a number which was not mentioned in his preprint. I invite you to read his preprint and decide whether he is now changing from 100 m to 700 m as a result of issues I have raised here. It really is not obvious from his paper what he used).

      • David L. Hagen
        Posted Sep 12, 2011 at 1:52 PM | Permalink

        PS Spencer underlined “(It now appears . . . what he used).”

  45. Bart
    Posted Sep 11, 2011 at 9:02 PM | Permalink

    I have posted the transfer function and impulse response estimates here and here, respectively, created using artificially generated data according to the prescription here.

    These show it is, indeed, possible to pull out the long term correlation using only 124 monthly data points. It doesn’t always work, but sometimes, it does.

    It would be very nice to come up with a more robust deconvolution scheme which works the majority of the time, and reanalyze the data to prove the veracity of the estimated functions beyond doubt, and I encourage any interested parties to look into this.

  46. Posted Sep 11, 2011 at 9:47 PM | Permalink

    For introductory info on Bart’s figures check out http://en.wikipedia.org/wiki/Bode_plot.

    For further background on Wikipedia, go to control theory and PID controllers.

  47. Posted Sep 11, 2011 at 11:45 PM | Permalink

    I would like to see a more intense examination of the daily cycle data.

  48. geo
    Posted Sep 12, 2011 at 12:34 AM | Permalink

    I hope Steve or someone summarizes the Bart and related discussion above in a new post for those of us just munching popcorn on this thread, but still able to clearly detect the signature of multiple geeks in excited high-overdrive mode.

  49. TGSG
    Posted Sep 12, 2011 at 2:18 AM | Permalink

    Fascinating, and I wish I understood more.

  50. RuhRoh
    Posted Sep 12, 2011 at 2:58 AM | Permalink

    Hey Bart;
    Is it a trait of the ‘peculiar’ series that they have some large amplitude ‘steps’ (perhaps of opposite polarity) rather close in time?
    In the oscilloscope analogy, when looking at a noisy source, one sets the sweep trigger very near the top of the ‘grass’, so that triggers are ~infrequent, as a way to pick off the big steps, and thus be able to see a characteristic time constant in the response after the trigger.
    Yes there will be the rest of the noise, but, if the ‘higher frequency’ noise is not too bad, eyeball smoothing will pick up the predominant pattern.

    This would be spoiled if the big steps came with sufficient frequency to recur within the ringdown from the previous step, ruining the ability to see enough of the post-step response to characterize it.

    Anyway, maybe I’m missing the boat totally. but this seems to remind me of the reason that trigger levels are still attached to a knob…
    Another anonymous guy, but with less substance than you have brought to this party.
    RR

    • RuhRoh
      Posted Sep 12, 2011 at 3:45 AM | Permalink

      So, the limited duration data set is like having not much time to pick off the biggest signals, and thus only seeing a few examples.

      In that case it is harder to get a feel for the dominant response, and easier to get thrown off when the few big steps have other confounding steps in close temporal proximity.

      Anyway, looking forward to having my intuition regrooved as needed.
      Thanks
      RuhRoh

      • Bart
        Posted Sep 12, 2011 at 9:56 AM | Permalink

        I will have to think about this. Thanks.

        • RuhRoh
          Posted Sep 12, 2011 at 10:26 AM | Permalink

          Bart;
          What proof do we have that a thing called
          DC really exists? It has only been observed for a few centuries.
          Maybe it is only ELF AC…
          RR

        • Bart
          Posted Sep 12, 2011 at 11:01 AM | Permalink

          Well, since DC is essentially an abstract concept… cogito ergo sum?

        • j ferguson
          Posted Sep 12, 2011 at 11:09 AM | Permalink

          DC = non-periodic signal?
          ELF = Extra Low Frequency?

        • Bart
          Posted Sep 12, 2011 at 11:52 AM | Permalink

          Yes.

  51. Joe Born
    Posted Sep 12, 2011 at 8:11 AM | Permalink

    To horn in on the Skywalker-Tallbloke colloquy about making Bart understandable, two suggestions.

    First, in supplying the translation, it would be helpful to explain why the response variable is not the whole-sky radiance rather than the cloud radiance. After all, isn’t the target the response of the net insolation of the earth as a whole to the average temperature of the earth as a whole?

    Second, to finesse around an explanation of Bart’s “statistics,” (actually, his use of discrete Fourier transforms, impulse and step responses, etc.), one might just say this:

    “It is uncontroversial that, if one observes the complete response (e.g., electrical current flowing as a function of time) of a linear system (such as an unknown network of resistors and capacitors) to a known stimulus (voltage impressed across that network as a function of time), one can accurately infer from it what that system’s response will be to some other stimulus (e.g., to voltage applied as some different function of time).

    “What Bart is doing is simply applying that universally accepted technique: from the response (net insolation as a function of time over the past ten years) to the known stimulus (average temperature as a function of time over those ten years), he determines what the system’s response would be to a step-change increase in temperature–and he concludes that the net insolation would fall.

    “There are a couple of complications. First, unlike the resistor-capacitor network, Bart’s system (the climate) is not linear, so application of his technique to this particular problem would not be completely uncontroversial. Still, this first complication is not a subject of much if any discussion, because linear-systems techniques are widely applied in estimating responses to small perturbations in non-linear systems–with the tacit recognition that the results can’t be completely accurate.

    “The other complication–which is the subject of the discussions–is that the technique Bart uses theoretically requires an infinite record of both the known stimulus and its response if one is to use them to compute the response to some other stimulus with complete accuracy. Of course, no one ever has an infinite record, but one can come close enough with a long-enough finite record. The disputants recognize, however, that the ten-year record is not long enough to avoid significant inaccuracies. So their discussion concerns whether it nonetheless admits of confidence in Bart’s basic conclusion: that a temperature increase tends to decrease net insolation; feedback in this system is negative.”

    My first comment no doubt betrays appalling ignorance of the subject on my part, but I am hopeful that the second comment will be helpful to a significant subset of us unwashed masses who typically just watch from the sidelines.

    • Mark T
      Posted Sep 12, 2011 at 8:46 AM | Permalink

      I think that is the gist, Joe. The two problems are universal to any estimation scheme, btw.

      Mark

    • Bart
      Posted Sep 12, 2011 at 10:02 AM | Permalink

      Yes, it would be very nice to have a data set at least as long as the inferred correlations. Coming up with a more robust algorithm would be greatly helpful in quelling doubts. However, successful application of the estimation procedure to artificially generated data with the same correlations and time span, and the similar well-behaved property which appears to coincide when the estimation procedure gets it right, supports a preliminary conclusion that the current result is likely correct.

      • Mark
        Posted Sep 12, 2011 at 10:27 AM | Permalink

        The standard for time-domain processing is the Wiener-Hopf solution (which is optimal w.r.t. MMSE.) Basically, w = inv(R) * p, where w is the channel response estimate and R and p are MxM auto and Mx1 cross correlations respectively, each with M lags and M < N / 2.
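
        A minimal sketch of that solution (x the input, y the output, both zero-mean columns of length N; M is the assumed number of lags):

        M = 32;                             % assumed filter length, M < N/2
        N = length(x);
        r = zeros(M,1); p = zeros(M,1);
        for k = 0:M-1
          r(k+1) = x(1:N-k)'*x(1+k:N)/N;    % autocorrelation at lag k
          p(k+1) = x(1:N-k)'*y(1+k:N)/N;    % cross-correlation at lag k
        end
        R = toeplitz(r);                    % MxM autocorrelation matrix
        w = R\p;                            % Wiener-Hopf solution, w = inv(R)*p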

        Mark

        • Bart
          Posted Sep 12, 2011 at 11:00 AM | Permalink

          Yes, doing the cross spectrum and dividing by the power spectrum of the input gives the same result as my algorithm. Adding a little epsilon to the divisor gives WH and helps at least avoid division by zero. But, I have tried this, and I don’t get any better at pulling out the long correlation from the short data span – it’s still something about the particular data set which makes it amenable or not. It must be possible to figure out what that something is. Maybe someone in the literature already has.

        • Mark
          Posted Sep 12, 2011 at 12:15 PM | Permalink

          My guess is that it is just an SNR issue. There is some “noise” and some “signal” in the data, but we don’t know how they are distinguished from one another. It “works” when the SNR is sufficient for the processing gain to give a “good” result (“good” being one that makes sense). Perhaps having data constructed of full sinusoids, i.e., each component has an integer number of cycles, helps too, because it would minimize the forced periodicity in the FFT result (smaller discontinuities from the end of the data to the beginning of the data will result in less spectral leakage).

          Mark
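
          As a toy illustration of the leakage point (mine, not Mark's data): a sinusoid completing an integer number of cycles in the record puts its energy into a single pair of FFT bins, while a fractional count smears it across many.

          N <- 120
          t <- 0:(N - 1)
          whole <- sin(2*pi*10*t/N)       # exactly 10 cycles: the ends meet
          frac  <- sin(2*pi*10.5*t/N)     # 10.5 cycles: discontinuity at the wrap
          sum(Mod(fft(whole))^2 > 1e-6)   # 2 bins (+/- the signal frequency)
          sum(Mod(fft(frac))^2  > 1e-6)   # many bins: spectral leakage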

        • Bart
          Posted Sep 12, 2011 at 2:10 PM | Permalink

          Ah, but when I construct my artificial time series, it’s all signal. So, I think it is a fundamental limit of the method, based simply on resolution, i.e., the observability of long term correlations of a stochastic process from short term data.

          Now, I think I could take my artificial data and do a more precise estimate using a parametric method (as in these books), but those would, I expect, be very sensitive to the unmodeled processes in the real-world data. I have thought about filtering the data to exclude all the high frequency stuff, but I would likely be left with only a small number of data points without startup transients (or, fully enclosed within the weighting function of an FIR filter). Still, that might be enough for a parametric method to act on. Theoretically, I only need 5 data points to fit a general proper 2nd order response.

          Perhaps one or more of these books have non-parametric methods which would be more robust. Well, I or you or someone will no doubt work it out in time.

        • Bart
          Posted Sep 12, 2011 at 2:15 PM | Permalink

          To clarify: that’s 5 data points uncorrelated to at least a minimal extent…

        • Mark T
          Posted Sep 12, 2011 at 2:28 PM | Permalink

          Ah, ok, no noise then. Your cases that fail – do they have an integer number of cycles for all components?
          It would be nice if we had 20 years of data to compare with. Maybe if you took the signals that failed and extended them out?

          I have octave at home and a copy of MATLAB (2007b) which is not licensed to me technically (previous employer) so I hate using it. Octave sorta sucks. That plus a few days between jobs so maybe I’ll tinker a bit.

          Mark

        • Bart
          Posted Sep 12, 2011 at 3:55 PM | Permalink

          Well, the intent is to show that the analysis on the real world data is reliable. And, there are only a limited number of real world data to draw from: 124 monthly measurements, to be exact. The problem is that, from this data, I have determined a model with a bandwidth of about 0.0725 years^-1, which corresponds to a settling time of about 1/0.0725 = 13.8 years, or 166 months.

          The question is, can you put any reliability into such a long term statistical model estimated from only 124 months of data?

          What I have shown is that, if I generate artificial data with the same statistics, the analysis approach I used gives a reasonable result some of the time, as I showed at the links here. It appears completely dependent on the random number seed I use to initialize the model.

          In those cases in which the analysis goes awry, the resulting impulse response and transfer function are poorly behaved. The analysis on the real world data is nicely behaved, like the artificial data sets which give good results. This leads me to believe the analysis on the real world data is valid, and the real world data set is just plain lucky.

          I feel certain that there must be a way of identifying the weakness in the estimation procedure and coming up with some form of estimator which will be more consistent, and applying such an estimator to the real world data would then confirm the model beyond a reasonable doubt. I suspect such methods are already available, and that we do not have to reinvent the wheel. There are lots of deconvolution methods available, so some research is needed.

          Anyway, that is where my analysis is at: I believe it is valid, but I concede there is reason to be wary of the actual numerical result. I think it is less reasonable at this time to presume the feedback is anything but negative.

        • Bart
          Posted Sep 12, 2011 at 3:56 PM | Permalink

          I think it is less reasonable at this time to presume the feedback is anything but negative… because I very rarely get a false positive with my artificial data.

        • David L. Hagen
          Posted Sep 12, 2011 at 4:48 PM | Permalink

          Bart – Great to see your testing. Would it be reasonable to assign a probability to the positive vs. negative trends based on their proportion of the total number of runs with your artificial data for the parameters you obtained? E.g., see Lucia’s synthetic explorations at The Blackboard.

        • Bart
          Posted Sep 13, 2011 at 2:34 AM | Permalink

          It is difficult to characterize simply. Just running the algorithm untended in batch mode a few thousand times suggests that I might get a false positive roughly 25% of the time. However, almost all of those false positives yield quirky frequency responses with sharp peaks and other ugliness. A smooth, well behaved frequency response almost always correlates to being fairly close to the true response.

          We really do need a more robust analysis tool for this short span of data. But, as I have been saying, all the qualitative indicators right now are that it performed pretty well on the current set of real world data.

          It appears, generally speaking, that bad performance occurs when a frequency band is not well covered in the input. So, you end up getting division by a small number which trashes the numerics and creates errant peaks in the frequency response estimate. I’ve been considering that it might be possible to test for significant input frequency bands, perform the initial estimation of the frequency response only over those bands, and then interpolate between them before performing the inverse FFT to get the impulse response estimate. If this bears any fruit, I will let people know, but I have many other demands on my time right now so I don’t know if or when I will get to it. But, I’ll just throw it out there as an idea in case anyone else is interested in pursuing it.
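
          A rough R sketch of that band-masking idea, for anyone who wants to pursue it (my illustration of the suggestion, not Bart's code; the power threshold frac is an arbitrary placeholder):

          est.response <- function(x, y, Nfft = 8192, frac = 0.05) {
            X <- fft(c(x, rep(0, Nfft - length(x))))
            Y <- fft(c(y, rep(0, Nfft - length(y))))
            H <- Y/X
            ok <- Mod(X) > frac*max(Mod(X))    # bins where the input has usable power
            idx <- seq_along(H)
            Hr <- approx(idx[ok], Re(H)[ok], idx, rule = 2)$y  # interpolate across weak bands
            Hi <- approx(idx[ok], Im(H)[ok], idx, rule = 2)$y
            Re(fft(complex(real = Hr, imaginary = Hi), inverse = TRUE))/Nfft
          }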

        • Posted Sep 13, 2011 at 7:09 PM | Permalink

          Noting that my understanding of FFTs is limited to long past university lectures of the late 1960s and early 1970s and only very occasional use since then, I would point out that it is a pity we can’t get (Dr) Jeff Glassman of Rocket Scientist’s Journal fame to contribute to this specific (and entirely gripping) discussion (thanks Bart, Nick, Mark, David).

          It doesn’t seem to be well known that Jeff was one of the great luminaries of late 19th century signal processing theory and is still recognised as the author of the best general N point fast Fourier transform (FFT) algorithm ever (published in the mid-late 1970s if I recall) still in widespread use.

        • Posted Sep 13, 2011 at 8:24 PM | Permalink

          Oops, make that 20th century (;-)

    • Posted Sep 13, 2011 at 1:21 AM | Permalink

      Joe Born:

      Thanks for that Joe, very helpful. I’ll reproduce that comment on the post on my blog where I’ve been attempting to make a reasonable collation of Bart’s comments from across three blogs.

      Bart: Cloud feedback is negative – ocean response is around 4.88 years

  52. Bart
    Posted Sep 12, 2011 at 12:07 PM | Permalink

    My best hope was that enough people would see my analysis to start giving it thought, replicate it on their own, and go on from there. As of now, I see the cloud radiation impulse response has been viewed 8,054 times! Hopefully, there are some amongst them who will carry the ball forward.

    • mpaul
      Posted Sep 12, 2011 at 5:12 PM | Permalink

      Bart, nice work. Because of the format of comments here, it’s somewhat difficult for a casual reader to get the gist of your findings. I think people are starting to hear that you have done something interesting and are coming here to see what you have done. It would be helpful if you could summarize it in simple terms. As a suggestion, perhaps you could start by summarizing the Spencer/Dessler controversy in simple terms (Spencer: more clouds = cooler; Dessler: more clouds = warmer), then summarize your method and approach, and then discuss your conclusions.

      Perhaps Steve could allow you a guest author spot for this.

    • Posted Sep 12, 2011 at 6:21 PM | Permalink

      I too would like to see all the work assembled into one coherent article. In my case, I am doing some work with Low Frequency Noise (LFN, of turbines and large machinery like trains) and some of the analysis techniques look similar. Granted, the stuff I am looking at is aperiodic and random (for which FFT does not always produce good results) – but still I did some of the analysis in a similar manner. One of the interesting techniques I found is to create a simple power function for the wave train, then look for the areas where the steps do occur. Like you, I found that using n=30 (or 32) for the size of the moving average seemed to bring out the low frequencies and the steps. I did not run it “backwards” to clean up noise – never thought of it – but it should be a small change to test that… Once I found the areas of interest, I did the analysis of that area more thoroughly. One major difference is that my sample sets are typically several million points (40-80 samples a second) for a run, so I use a DB server to store the data. No lack of data here.

  53. jphilips
    Posted Sep 13, 2011 at 12:51 AM | Permalink

    Nick
    Have you detrended the data before applying the FFTs? It makes quite a difference – see
    http://climateandstuff.blogspot.com/2011/09/ffts-cloud-feedback-and-stuff.html
    final plot

    • Posted Sep 13, 2011 at 1:11 AM | Permalink

      jp,
      No, neither Bart nor I did that. The data was zero mean.

      But it certainly would make a difference. As I’ve mentioned above, the zero frequency magnitude of h, which Bart is calling the negative feedback number (his -9.4 W/m2/C), is just the ratio of the trends of dR and T. If you remove them, you’d get a quite different number based on the ratio of second moments. And so on.

      However, trend removal is tricky. You have to put it back eventually. As I’ve said in my post, we’re in a world of periodic functions. Trend isn’t periodic. It would create a sort of sawtooth wave.

    • Layman Lurker
      Posted Sep 13, 2011 at 2:10 AM | Permalink

      Not much of a trend in Hadcrut or either of the cloud series.

    • Posted Sep 13, 2011 at 6:51 PM | Permalink

      Abstract: IPCC (Intergovernmental Panel on Climate Change) AR4 (Fourth Assessment Report) GCMs (General Circulation Models) predict a tropical tropospheric warming that increases with height, reaches its maximum at ∼200 hPa, and decreases to zero near the tropical tropopause. This study examines the GCM‐predicted maximum warming in the tropical upper troposphere using satellite MSU (microwave sounding unit)‐derived deep‐layer temperatures in the tropical upper‐ and lower‐middle troposphere for 1979–2010. While satellite MSU/AMSU observations generally support GCM results with tropical deep‐layer tropospheric warming faster than surface, it is evident that the AR4 GCMs exaggerate the increase in static stability between the tropical middle and upper troposphere during the last three decades.

      Citation: Fu, Q., S. Manabe, and C. M. Johanson (2011), On the warming in the tropical upper troposphere: Models versus observations, Geophys. Res. Lett., 38, L15704, doi:10.1029/2011GL048101.

      Referring to Figure 2 in Fu et al., 2011, i.e. their key plot of modeled vs observed tropical upper tropospheric temperatures, I find it very curious that Australia’s CSIRO Mark 3.0 GCM matched the RSS & UAH (T24 – T2LT) value in K/decade (average ~0.010) so well within error bars, and yet their (CSIRO’s) presumably later, much improved Mark 3.5 model had ‘drifted’ right back to (essentially) match the overall GCM ensemble mean value (~0.100) (although both versions of the CSIRO GCMs took into account ozone depletion).

      I would be absolutely fascinated to find out just what the key difference might have been, e.g. with respect to tropical upper tropospheric cloud handling, between the CSIRO Mark 3.0 and Mark 3.5 models.

      There might be a powerful clue therein.

      Any ideas Nick?

  54. Posted Sep 13, 2011 at 3:55 AM | Permalink

    I have some results using an R version of Bart’s code on TSI and global temperature http://landshape.wordpress.com/2011/09/13/fft-of-tsi-and-global-temperatur/

    “This is the application of the work-in-progress Fast Fourier Transform algorithm by Bart coded in R on the total solar irradiance (TSI via Lean 2000) and global temperature (HadCRU). The results below support the assessment PDF that the atmosphere is sufficiently sensitive to variations in solar insolation for these to cause recent (post 1950) and paleo-warming.”

    The analysis is an indication of the robustness of the method, as it gives a different but appropriate result on a different data set. It’s going to be a very useful tool in arguing that the climate system is not at all like it’s made out to be.

    I will post the code when it’s further along.

    • Posted Sep 16, 2011 at 3:50 AM | Permalink

      David, thanks for your link. Are you still tidying your R code? Interested to see it when it’s available.

  55. HaroldW
    Posted Sep 13, 2011 at 6:08 AM | Permalink

    Bart –
    Sorry, there wasn’t a “reply” button on your post of 2:34 am. Possibly reached a maximum indent level.
    You wrote: “It appears, generally speaking, that bad performance occurs when a frequency band is not well covered in the input. So, you end up getting division by a small number which trashes the numerics and creates errant peaks in the frequency response estimate.”
    That’s the problem when you compute Y/X. At frequencies where X=0 it blows up, and nearby it’s a little unstable. One thing which may help is to compute instead
    YX*/(XX* + eps^2) where eps is a small number and the asterisk of course indicates complex conjugation. Just a thought, you may well have already tried it.
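
    In R, that regularized ratio is a one-liner; a toy version (my illustration, with arbitrary series and eps):

    x <- rnorm(256)
    y <- as.numeric(stats::filter(x, c(0.5, 0.3), sides = 1)); y[is.na(y)] <- 0
    X <- fft(x); Y <- fft(y)
    eps <- 1e-2 * max(Mod(X))
    H <- (Y * Conj(X)) / (Mod(X)^2 + eps^2)      # ~ Y/X where |X| >> eps; -> 0 where |X| ~ 0
    h <- Re(fft(H, inverse = TRUE)) / length(H)  # regularized impulse-response estimate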

    • Bart
      Posted Sep 13, 2011 at 11:50 AM | Permalink

      Thanks, Harold. Yes, that is the standard Wiener deconvolution formula. But, it doesn’t seem to help a great deal, I think because, essentially, there is simply no information in that band, and replacing it with something which is not correlated in the output may make things numerically nice, but doesn’t really improve the estimate.

      That is why I came up with the idea of using information on either side of the missing band to infer and interpolate what should be in that band. We’ll see if it works.

      • Steve
        Posted Sep 13, 2011 at 7:11 PM | Permalink

        Bart,

        I’m not sure that deconvolution is an appropriate technique for your aim. As ‘temp’ contains noise across frequencies, wouldn’t either division or deconvolution (depending on whether you work in the time or frequency domain) amplify this noise? I.e., at any temp frequencies approaching zero.

        I would think that deconvolution is ‘only’ suitable if you have derived a robust, noiseless estimate of H (e.g. the ‘blur’ point spread function of an optical lens) and from this you wish to reconstruct the input X, rather than using X and Y to reconstruct H. Please correct me if I am wrong.

        Argggh …. scratch that. I now see the point of your post and where you are heading in attempting to account for this (using interpolation).

        This is interesting. I wish I didn’t have my own job requiring my attention 🙂 Steve

      • Mark
        Posted Sep 14, 2011 at 4:43 PM | Permalink

        I’m not sure I saw that part (the interpolation.) There’s so much chaff it’s hard to keep up (h/t to sky for posting something interesting, though potentially damaging to any attempt to extract the relationship.) I ran out of time today though will make a valiant attempt at more investigation tomorrow.

        Mark

  56. sky
    Posted Sep 13, 2011 at 7:24 PM | Permalink

    I’m glad to see Bart taking the time to acquaint CA readers with the rudiments of system analysis. The incisive power of its sophisticated analytic techniques is a much-needed antidote to the simplistic, wrong-headed notions of system behavior that have long been the hallmark of “climate science.”

    There are several problematic matters in the results that Bart obtains under the assumption that the relationship between cloud radiation and temperature is governed by the transfer function (or impulse response function) characteristic of a damped oscillator. Not the least of them is that assumption itself, which cannot be physically justified in the case of diffusive processes governed by first-order differential equations that characterize heat transfer. Moreover, temperature and radiative fluxes are dimensioned differently. Physical factors other than radiative fluxes (evaporation, convection, etc.) enter into the determination of temperature in a highly non-linear fashion. Thus one cannot be the system input and the other the output, except in a purely phenomenological sense.

    On purely formal mathematical grounds, the transfer function can be determined empirically from data only in the case of nearly perfect cross-spectral coherence between input and output of a linear system. Such coherence, which is typical of a well-designed measurement system, is hardly to be found anywhere amongst geophysical variables. I’ve estimated the cross-spectrum between HADCRUT3 and the putative cloud radiative fluxes (difference of columns E – H in Spencer’s spreadsheet) with the following results for the squared coherence and cross-spectral phase (in degrees) at the lowest frequencies (indexed as cycles/48yrs):

    Freq.    0     1     2     3     4
    Coh.    .57   .48   .39   .23   .03
    Phase    34    58    90   129    84

    While the phase lead of radiative flux relative to temperature is unmistakable, these coherence results do not support the premise that a reasonable empirical determination of the relationship between these two variables can be made from the data.
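
    For readers wanting to reproduce this kind of table, squared coherence and cross-spectral phase can be estimated in R along these lines (a sketch on toy series; sky's exact smoothing choices are not given):

    x <- rnorm(124)
    y <- as.numeric(stats::filter(x, rep(0.2, 5), sides = 1)); y[is.na(y)] <- 0
    sp <- spec.pgram(cbind(x, y), spans = 9, plot = FALSE)  # smoothed periodogram
    head(cbind(freq = sp$freq, coh2 = as.numeric(sp$coh),
               phase.deg = as.numeric(sp$phase)*180/pi))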

    • Posted Sep 13, 2011 at 8:14 PM | Permalink

      “There are several problematic matters in the results that Bart obtains under the assumption that the relationship between cloud radiation and temperature is governed by the transfer function (or impulse response function) characteristic of a damped oscillator. Not the least of them is that assumption itself, which cannot be physically justified in the case of diffusive processes governed by first-order differential equations that characterize heat transfer.”

      I don’t think the method assumes a damped oscillator. It finds a damped oscillator. It’s capable of finding other, simpler systems, characterised by simpler impulse responses.

      The diffusive process – that is your assumption. I have found it worthwhile to take a fresh look at the data and ask “What if I believe exactly what it is telling me?”, and then follow that through. To me this points to certain properties that are not widely appreciated, such as very long decay/rise times, and integrative/derivative relationships.

      “Moreover, temperature and radiative fluxes are dimensioned differently. Physical factors other than radiative fluxes (evaporation. convection, etc.) enter into the determination of temperature in a highly non-linear fashion. Thus one cannot be the system input for the other output, except in a purely phenomenological sense.”

      Temperature and radiative fluxes are related quite appropriately in the classic first order ODE of the energy balance model. To say they are highly non-linear and that this blocks further analysis is a cop-out, IMO. It is quite likely, within the constraints of the abstractions inherent in an ODE, that the essential features of convection and evaporation can be represented.

      For example, they might be identified with inertial components of the general harmonic oscillator ODE. That is, an increase in temperature causes more evaporation and convection, but this effect fades as higher temperatures stabilize, due to lapse rate stability. This would explain the phase lead, even though the cause is temperature, because it is proportional to the derivative of a change, not proportional to the change itself.

      But you won’t be able to understand these relationships until the analysis has the power to represent them.

      • sky
        Posted Sep 14, 2011 at 8:47 PM | Permalink

        Although you quote my words exactly, your comments are at best tangential to the issues that I raise. I don’t have the time today, but will respond more fully tomorrow.

        Meanwhile, I need to correct the error made in describing the scale of the frequency index in my cross-spectrum analysis. The units are cycles per 48 MONTHS, not years.

      • sky
        Posted Sep 15, 2011 at 4:56 PM | Permalink

        What you consider to be the “findings” of Bart’s methods are entirely predicated upon the bald assumption that there is a strictly deterministic LINEAR relationship between temperatures and cloudy sky radiative fluxes. What I point out is that other physical variables enter into the determination of surface temperatures. Thus the actual system is not single-input; it cannot be adequately characterized by the impulse response to a single variable, no matter how the system is configured.

        Nor on the basis of known physics is it plausible that heat transfer alone (a first-order non-oscillatory mechanism) constitutes a second-order oscillatory system, as in Bart’s characterization of the relationship between the two variables. The coherence-destroying effects of neglected NON-LINEAR mechanisms are clearly at work in the actual temperature data. Evaporation acts as a severe limiter. Without strong coherence between the variables, even optimal methods of determining the transfer function via cross-spectrum analysis (Wiener-Hopf) do not provide useful results. With insignificant coherence, the transfer function appropriately vanishes along with the impulse response.

        The coherence is in no way incorporated in the ratio of FFT coefficients that is the crux of Bart’s simplified approach. While Bart is keenly aware of the limitations of what he did, I see no evidence that most readers are. One can only guess at what the consequences are upon the results. I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed, whereby the record length is increased by nearly two orders of magnitude.

        Your idea that the “energy balance equation” universally relates temperature to energy flux may be employed perhaps at TOA with the assumption of an equivalent planetary BB temperature. But you can’t do that with surface temperatures influenced by evaporation; BBs don’t evaporate. In any event, there was no such dimensional conversion done in Bart’s analysis. The results he obtains are very different from those obtained via Wiener-Hopf. Your speculation that temperature rates of change, rather than temperature levels, may still be the “cause” of the phase relationships shown by my cross-spectrum analysis is just that. It finds no support in the known physics.

        Lastly, your “cop-out” characterization of my caveat about using linear analysis in the face of SEVERE nonlinearities is revealing. I know that if anyone at our firm did that, they would be fired for incompetence. Let’s leave it at that.

        • Posted Sep 15, 2011 at 5:13 PM | Permalink

          “I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed, whereby the record length is increased by nearly two orders of magnitude.”

          I don’t think it’s the padding, though there could certainly be less of it. I tried cutting the 8192 to 4096 with no effect.

          I think it’s the hack that he did to h. He did a Hann taper from about sample 1024 to 2048 but, forgetting about periodicity, zeroed the whole negative t part. The time constant is the first moment of h, so if you chop off a whole lot on one side, it shifts the apparent centre the other way.

        • Posted Sep 15, 2011 at 6:24 PM | Permalink

          “time constant is the first moment of h”
          That is, first moment divided by zeroth moment (area).

        • sky
          Posted Sep 16, 2011 at 5:42 PM | Permalink

          My suspicion was based upon the fact that Bart’s time constant is roughly half the length T of the actual data. Zero padding creates an artificial time-limited function, which consequently cannot be band-limited. All of his FFT results at frequencies lower than 1/T are the product of that artifact. You may be correct that the problem lies elsewhere, but I’d like to see results with far less and eventually no padding.

          Although I did not stress the point in my response to Stockwell, the FFT ratio used to compute h raises questions about its consistency as an estimator when the data are non-periodic and only marginally coherent. This matter is quite apart from any windowing issue that you raise (which I did not consider).

        • Posted Sep 16, 2011 at 8:15 PM | Permalink

          I think you need some padding – we’re looking for a response function that has twice the span of the data. But I agree that the low frequencies are far too low. That’s why the result being quoted is just the ratio of the OLS trends of dR and temp. That’s all the FFT can resolve at those frequencies.

          I tried substantially reduced padding. Down to 512 points (total) it made little difference. H[1] (area under h) moved from -12.2 (the trend ratio) to -12.8. Going to 256 made a difference (-14.2), but there is an issue, because the zero value of Y/X (after first FFT) has to be interpolated, and neighboring values are now much less smooth.

        • sky
          Posted Sep 17, 2011 at 4:25 PM | Permalink

          I think things are still being ASSUMED here that shouldn’t be, given the general lack of coherence between the two data series. The major question, as I see it, is the existence of any LINEAR CAUSAL relationship. Without such, the entire idea of a characteristic impulse response function is inapplicable. Even when the series are very highly coherent, the linear relationship need not be causal; the impulse response function can be BILATERAL, as in a filter with symmetric weights.
          To say that you’re looking for a long impulse response begs the entire question. (BTW, the time-constant that Bart cites is the e-folding time and not the average age of the data.)

          What troubles me increasingly as I ponder Bart’s results is how he imposes causality. Having only glanced at his analysis code, I’m not sure of the particular step. Nevertheless, when it comes to his recursive 5-component model, it is apparent that radiative flux is being determined by its past values and by past and present values of temperature. This cannot possibly reproduce the phase lead of the flux relative to temperature that is found by cross-spectrum analysis.

          Although I applaud Bart’s efforts to acquaint readers with system analysis methods, I wish he had picked an appropriate pair of data series for demonstration.

          Have a good weekend.

        • Posted Sep 17, 2011 at 5:57 PM | Permalink

          “Without such, the entire idea of a characteristic impulse response function is inapplicable. Even when the series are very highly coherent, the linear relationship need not be causal; the impulse response function can be BILATERAL, as in a filter with symmetric weights.”

          Indeed so. I wouldn’t say inapplicable – it just doesn’t mean what Bart wants it to mean. The impulse response h is bilateral, though not symmetric. In fact, the centre point is at about 17 months.

          “What troubles me increasingly as I ponder Bart’s results is how he imposes causality.”

          Causality is imposed by the attempt to taper. This consists of multiplying by a function w which is made thus:
          From points 1 to 1024 w=1
          From 1024 to 2048 it tapers Hann-style to 0 (cos^2)
          From 2048 to 8192 it is zero.

          Now h was bilateral. The significant positive t part is in the 1-200 range approx, and the negative-t part is approx 8000-8192. The rest is mostly noise. So this is where the causality is imposed. The negative part is set to zero.

          This has an effect on the centre-point of h, derived from the first moment. With the loss of the negative part, it moves in the positive t direction, from about 17 months to 4.8 years. While h varies in sign, its sum on each side is negative.

          The taper used is quite inappropriate. You can see this from the fact that there is only one tapering section. Tapers are normally symmetric. There is no symmetry about 0 here. So the taper has continuous derivative on one side, but is discontinuous on the other.

          I have been unsuccessfully urging Bart to just look at the FFT of the taper w. A proper Hann-style window would show frequency attenuation of 18 db/octave. His shows 6, which is determined by the discontinuity. The main problem with w is that it cuts into h, but it’s also a very unsuccessful taper.
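
          For concreteness, the taper as Nick describes it can be written in R as follows (my transcription of his description, with his indices):

          Nc <- 1024; Nfft <- 8192
          w <- c(rep(1, Nc),                                # flat from point 1 to 1024
                 0.5*(1 + cos(pi*(0:(Nc - 1))/(Nc - 1))),   # Hann-style fall from 1 to 0
                 rep(0, Nfft - 2*Nc))                       # zero from 2048 to 8192
          # plotting Mod(fft(w)) shows the rolloff; a one-sided taper like this is
          # limited by its start discontinuity, hence ~6 dB/octave rather than 18.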

        • Bart
          Posted Sep 18, 2011 at 2:26 AM | Permalink

          “Causality is imposed by the attempt to taper.”

          No, it isn’t. This is standard practice. You are too inexperienced to understand.

          There is no such thing as a non-causal response in this universe.

          “the impulse response function can be BILATERAL, as in a filter with symmetric weights.”

          Such a filter is still causal. The impulse response goes forward in time – it has no weights extending into the past, at least not if it is applied in real time. That is what Nick is proposing here. It is completely insane.

        • Bart
          Posted Sep 18, 2011 at 2:28 AM | Permalink

          You are too inexperienced to understand…. and refuse to read for comprehension anything I have written to explain where you have gone wrong.

        • Posted Sep 18, 2011 at 3:55 AM | Permalink

          “The impulse response goes forward in time”
          Bart,
          Where in the algorithm do you tell it which direction is forward in time? You’ve just fed in two sets of numbers. How does it know time is involved at all? Why could they not be varying with distance, say?

          All you’ve done is divide two FFT’s and iFFT the result. As sky asks, where is the directionality in that?

        • Tom Gray
          Posted Sep 18, 2011 at 8:19 AM | Permalink

          Causality is not in the algorithm. It is in the model that Bart is investigating by use of the algorithm. He has told you that many times. The model may be correct or incorrect; the use of the algorithm may be misguided. However, he is not using it to deduce causality.

          He hasn’t fed in two sets of numbers. He has fed in a set of observations that he is fitting to a model. How many times does he have to tell you that?

        • Posted Sep 18, 2011 at 9:17 AM | Permalink

          Tom,
          The model isn’t even introduced until he has chopped away one side of the impulse response. I’ve set out more details of that process here.

        • Tom Gray
          Posted Sep 18, 2011 at 10:00 AM | Permalink

          From what I gather

          a) Dessler says that the correlation between cloud and radiation is weak and may even be positive

          b) Spencer says that Dessler used instantaneous correlation and that a lagged correlation is stronger and negative

          c) great controversy with abject apologies etc etc etc

          d) SMc indicates the lagged correlation in a blog post

          e) great mumbles about data windows, lags, correlations, noise, damped exponentials, … (note that the damped exponentials, lags and differential equations assume a causal model)

          f) Bart asks why, if a causal feedback model is assumed, not use the standard mathematics that has been developed over many decades to investigate causal feedback models. Under the assumption of a specific causal model, Bart takes the observed time domain measurements and uses the FFT to create phase and magnitude responses (called Bode plots here), along with the impulse and step responses that are derived from such “Bode plots”.

          g) Bart notes that the magnitude and phase response are typical of a second order system, which would be what was expected from the assumed model. Feedback is negative and around 10dB. Bart notes this as evidence that the assumed model is valid, but notes that this is not conclusive proof

          h) sky and RomanM indicate doubt that the simple physical model that Bart has assumed is an accurate model of the cloud system. They note that the evidence that Bart cites is primarily from the low frequency portion of the Bode plot and that the high frequency region is not in agreement. Bart agrees with this and indicates that he has described other processes to create a more complicated model in previous posts. However, Bart notes that the low frequency agreement is good and that the high frequency region that sky cites has already been discussed as an artifact of noise and other processes.

          i) Nick Stokes asks how Bart knows that the system is causal. Bart says that he has assumed a physical model that is causal and that the calculations that he has performed are in reasonable agreement with this assumption. Nick Stokes talks about impulse responses and windowing, not Bode plots. Step i) is repeated forever

        • Bart
          Posted Sep 18, 2011 at 12:47 PM | Permalink

          That’s pretty much it, Tom. My only quibble is “a second order system which would be what was expected from the assumed model.” I wasn’t expecting to see much of anything coherent – I was just intending to look at the phase at low frequency. Maybe I should have stuck with that, because no matter what atrocities Nick performs on the data, he still comes out with low frequency 180 deg phase.

          I just happened to observe a remarkably coincidental, well defined, typical 2nd order response appear in the analysis.

        • Bart
          Posted Sep 18, 2011 at 1:24 PM | Permalink

          Nick:

          “How does it know time is involved at all?”

          It doesn’t. But, most people realize that time moves forward.

        • Mark T
          Posted Sep 18, 2011 at 1:49 PM | Permalink

          The results of an FFT and an IFFT are ordered. Performing them successively creates samples that are in the same order as the input. The FFT and IFFT themselves do not “care” what the order is. I’ve made this point repeatedly and Nick simply fails to get it.

          Mark

        • Posted Sep 18, 2011 at 5:17 PM | Permalink

          Mark,
          I’ve shown here the effects of simply reversing the order of the data. The FFT doesn’t care – it just spits out h in reverse order too.

          But here’s the rub. Bart says that for h, negative t can be ignored as noise, and his taper erases it. But if you applied that thinking to the reverse problem, you’d erase the data that was considered good for the original problem, and work with the data previously discarded as noise.

          So what determines which half you can regard as noise?
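
          A toy version of the reversal experiment (my sketch, not Nick's actual code) makes the point concrete: deconvolving the reversed series returns h flipped onto the other side of t = 0.

          x <- rnorm(32)
          h <- c(1, 0.5, 0.25, rep(0, 29))
          y <- Re(fft(fft(x)*fft(h), inverse = TRUE))/32      # circular convolution of x and h
          h.fwd <- Re(fft(fft(y)/fft(x), inverse = TRUE))/32  # recovers h at positive t
          h.rev <- Re(fft(fft(rev(y))/fft(rev(x)), inverse = TRUE))/32
          round(c(h.rev[1], h.rev[31], h.rev[32]), 3)         # 1, 0.25, 0.5: the tail now sits at the "negative-t" end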

        • Tom Gray
          Posted Sep 18, 2011 at 1:52 PM | Permalink

          With the limited data that we have, quibbling about the type of data window to use does not seem to be useful. We should not go beyond the limitations of the data in discussing the results. Whether we have a magnitude of 9dB or a magnitude of 12dB does not change the results. That is, there is some evidence of negative feedback from clouds. For me, that is the extent of the conclusions that can be drawn. Now, what does this mean? Is this evidence compatible with more sophisticated physical models? Is it simply a spurious result or, to the same point, can it make repeatable predictions?

        • Bart
          Posted Sep 18, 2011 at 2:02 PM | Permalink

          Yes, this is what is important. Even Nick does not disagree with the 180 deg phase shift. And, importantly, the feedback is not only shown thereby to be negative, but it also appears large enough to be very significant.

        • Posted Sep 18, 2011 at 4:30 PM | Permalink

          “Even Nick does not disagree…”
          Well, I pointed out that if you start the analysis at Jan 2001 instead of Mar 2000 you get a strong positive feedback (because the trend of T becomes negative). You said that we can’t afford to remove data. But it’s a standard sensitivity test. You can’t say that 124 months is good and 114 is worthless.

        • Bart
          Posted Sep 18, 2011 at 7:43 PM | Permalink

          “You can’t say that 124 months is good and 114 is worthless.”

          I tried it on my end. Still get -180 deg at low frequency. But, it isn’t kosher to eliminate data when the thing you are trying to find requires as long a data record as you can find.

        • Bart
          Posted Sep 18, 2011 at 3:26 AM | Permalink

          ‘This cannot possibly reproduce the phase lead of the flux relative to temperature that is found by cross-spectrum analysis.’

          We do not care about high frequency stuff which is almost certainly dominated by noise and independent processes. For feedback, what matters is the low frequency regime.

        • Bart
          Posted Sep 18, 2011 at 3:26 AM | Permalink

          “…which is likely dominated by noise and independent processes.”

        • Bart
          Posted Sep 18, 2011 at 3:28 AM | Permalink

          In any case, the sign of the feedback is determined by the low frequency response.

        • Posted Sep 18, 2011 at 4:02 AM | Permalink

          But high frequency is low frequency. A frequency of 8191/8192 revs per month (RPM), the highest on your axis, is not really a high frequency on a monthly sampled time axis. It is a frequency of -1/8192 RPM. That’s the frequency you see after sampling.

        • Bart
          Posted Sep 18, 2011 at 1:36 PM | Permalink

          The highest frequency I can theoretically see is the Nyquist frequency, which is 0.5/T, where T is the sample frequency. If T is 1/12 years, then that is 6 years^-1, or 0.5 months^-1.

        • Bart
          Posted Sep 18, 2011 at 1:45 PM | Permalink

          The max frequency I can see theoretically is the Nyquist frequency, which is 0.5/T, where T is the sample time. That is 0.5 months^-1 with T in months.

        • Bart
          Posted Sep 18, 2011 at 1:46 PM | Permalink

          Blasted first reply didn’t show up at first.

        • HaroldW
          Posted Sep 18, 2011 at 2:03 PM | Permalink

          Bart –
          I agree that the Nyquist frequency in this context is 0.5 cycle / month. But your frequency response plot (above on this thread, or here) appears to go up to 0.5 cycle / year. Labelling error on your graph, perhaps?

        • Posted Sep 18, 2011 at 5:05 PM | Permalink

          Bart
          “The max frequency I can see theoretically is the Nyquist frequency”

          That relates to much of what I’ve been saying about periodicity. The DFT deliberately and standardly goes to double the Nyquist frequency, 1 cycle/month. The Nyquist frequency is mid-range. So it makes more sense to interpret the upper part of the range as a set of diminishing (toward zero) negative frequencies.
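
          In bin terms (my illustration of the aliased-frequency reading):

          N <- 8192
          k <- 8191                               # highest bin index on the axis
          f <- if (k <= N/2) k/N else (k - N)/N   # standard aliased reading of bin k
          f                                       # -1/8192 cycles per month, not ~1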

        • Bart
          Posted Sep 18, 2011 at 7:33 PM | Permalink

          HaroldW
          Posted Sep 18, 2011 at 2:03 PM

          0.5 cycles/year * 1 year/12 months = 0.0416 cycles/month, much less than 0.5 cycles per month.

        • Bart
          Posted Sep 18, 2011 at 7:35 PM | Permalink

          “The Nyquist frequency is mid-range. So it makes more sense to interpret the upper part of the range as a set of diminishing (toward zero) negative frequencies.”

          Sure… in the frequency domain.

        • Bart
          Posted Sep 18, 2011 at 2:35 PM | Permalink

          “…which is likely dominated by noise and independent processes.”

          You know, in actual fact, in thinking this over… that higher frequency phase lead may be just what is needed to stabilize things with a fairly high bandwidth. If I modify my model to add in phase lead terms so that it looks very close to the frequency response estimate over all observable frequencies, then it is very easy to use it to close the loop around an integrator and produce a stable feedback system. The observed phase lead, assuming it is not just noise or the expression of independent processes, forms the “D” in a standard PD control loop. The apparent 2nd order response I have been harping on is essentially a lag compensator in series with the PD which boosts the gain at low frequency.

          Very interesting…

        • kim
          Posted Sep 18, 2011 at 3:01 PM | Permalink

          It hums, doesn’t it?
          =====

        • Steve
          Posted Sep 15, 2011 at 5:20 PM | Permalink

          “I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed”

          Thanks sky for your input – a couple of questions if you don’t mind.

          (1) If this time constant is artefactual (from zero padding), why does convolving the impulse response with temp represent (subjectively) the low frequency content of dR so well? This doesn’t make sense to me.

          (2) Wouldn’t low coherence reflect the fact that mid to high frequencies are not well represented? Isn’t Bart’s point that these are not components he is interested in?

          (3) In systems analysis, can you consider coherence in bands to try and account for these types of issues? I.e. segregate (or quantify) impacts of hypothesized near-linear components?

          thanks again, Steve

        • sky
          Posted Sep 16, 2011 at 6:04 PM | Permalink

          Sorry, I had to run to a meeting yesterday. I don’t mind your questions at all. In response to them:

          1) Given 5 free parameters, such as Bart uses in his recursive feedback model, Johnny von Neumann claimed he could make an elephant’s trunk wiggle.

          2) The coherence results I show ARE for the low frequencies and are entirely independent of mid- and high-frequency components. Cross-spectrum analysis does not conflate them. The data contain virtually no information at the much lower frequencies that Bart produces through zero-padding in his FFT analysis.

          3) As explained in 2), that’s what I did. In the context of NON-linear analysis, bi-spectrum analysis can reveal quadratic interactions band-by-band. I’ve done such analyses frequently, but not on this miserably short data.

          Hope this helps.

        • Bart
          Posted Sep 18, 2011 at 2:30 AM | Permalink

          Zero padding does not change anything except to increase the density of points in the frequency domain results.

          You must zero pad, though, to at least double the length of the sequence, or you will get time aliasing when you invert the transform.

          Again, a very standard, well understood, and accepted practice being questioned by the uninitiated.
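
          A tiny worked example of the wraparound Bart means – multiplying unpadded FFTs gives circular, not linear, convolution:

          x <- c(1, 2, 3, 4); h <- c(1, 1, 0, 0)
          Re(fft(fft(x)*fft(h), inverse = TRUE))/4    # 5 3 5 7: x[4] wraps onto x[1]
          convolve(x, rev(h), type = "open")          # 1 3 5 7 4 0 0: true linear convolution
          xp <- c(x, rep(0, 4)); hp <- c(h, rep(0, 4))
          Re(fft(fft(xp)*fft(hp), inverse = TRUE))/8  # 1 3 5 7 4 0 0 0: padding to double length removes the wraparound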

        • Mark T
          Posted Sep 18, 2011 at 12:10 PM | Permalink

          Indeed… padding with zeros just allows sub-cycle “correlations” in the FFT process. Very standard, as Bart states.

          The coherence issue sky is getting at is likely just a consequence of a MIMO system with some added (not literally, as in “thrown in”) non-linearities. Doing this will “average out” the non-linearities over the time-span of data – if they are not significant, i.e., if the poles are not moving around too much and the system looks somewhat stationary over the entire span, then the results can be expected to be reasonable. Another 10 years of data will be telling – does the “model” hold? It will also simply find the one path through the MIMO system represented by the one input and one output. Nobody is arguing it is the entire system, Bart in particular.

          In general, any of the coherence issues that impact Bart’s fairly straightforward analysis will impact others just as much, if not more.

          Mark

        • TimTheToolMan
          Posted Sep 15, 2011 at 7:44 PM | Permalink

          Sky writes: “I suspect that the long system “time constant” he obtains is an artifact of the massive zero-padding employed”

          That would certainly be a valid argument if there weren’t climatic processes that spanned those sorts of timeframes. But there are. And hence the results can’t arbitrarily be written off as an artifact.

        • sky
          Posted Sep 16, 2011 at 6:20 PM | Permalink

          The presence of climatic processes on those timeframes is an acknowledged fact–one that has little bearing on the mathematical issues in Bart’s analysis. There’s nothing arbitrary about pointing out those issues and presenting results from an analysis that Bart himself acknowledges as being more rigorous.

        • Bart
          Posted Sep 18, 2011 at 3:09 AM | Permalink

          sky – you may benefit from reading this post.

          Zero padding is not the culprit for anything you are suspecting here. See comment at Sep 18, 2011 at 2:30 AM.

          I have freely admitted the data record is too short to have high confidence in the result. What I have striven to prove is that it is possible. What I am tired of is getting nonsensical attacks based on uninformed nonsense (such as Nick’s obsession with artifacts of circular convolution suggesting backward in time (!) responses). Zero padding is, sorry to say, another red herring. The weakness IS the span of data. Nothing else. Just that.

          Most importantly, however, it is very, very clear that there is no basis whatsoever for Dessler’s argument that the feedback is positive. The phase response is near 180 degrees at low frequencies which are nevertheless well above a reasonable threshold of resolution. My analysis here establishes that this negative feedback could very well be significant. Very significant.

          Strictly speaking, BTW, you really do need a lot more than 5 parameters to make an elephant’s trunk wiggle arbitrarily.

        • Bart
          Posted Sep 18, 2011 at 3:12 AM | Permalink

          One other note on the last: those 5 parameters are not independent. They are completely dependent on the three parameters of a 2nd order continuous time system model: gain, natural frequency, and damping ratio.
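
          To illustrate that dependency (a sketch using a bilinear/Tustin discretization; Bart's actual parameterization may differ), the 5 recursion coefficients follow from the 3 continuous parameters:

          second.order.discrete <- function(K, wn, zeta, dT) {
            g <- 2/dT                                    # bilinear-transform constant
            d0 <- g^2 + 2*zeta*wn*g + wn^2
            b <- K*wn^2*c(1, 2, 1)/d0                    # numerator taps b0, b1, b2
            a <- c(2*(wn^2 - g^2), g^2 - 2*zeta*wn*g + wn^2)/d0   # a1, a2
            list(b = b, a = a)                           # 5 coefficients from 3 parameters
          }
          second.order.discrete(K = 1, wn = 2*pi*0.0725, zeta = 0.7, dT = 1/12)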

    • Steve
      Posted Sep 14, 2011 at 11:58 PM | Permalink

      Hi sky,

      The characteristics of a damped oscillator were never assumed; they are an interpretation of the result.

      The system is nonlinear, not SISO and could consist of adaptive filters. I would also favor the view that clouds influence temperature as an independent degree of freedom, additional to any feedback role.

      Given these ideas, I am unsure why you would predict a “near perfect cross spectral coherence” (i.e. 100% predictive power of dR from temp).

      Natural systems are not electronic circuits and I would think that expectations of coherence need to be adjusted (due to these additional complexities).

      I understand that nonlinear systems are often investigated using reverse-correlation techniques (white noise) to derive linear contributions to the overall system. Would you agree that deriving a linear component that explains x% of the data is a valid contribution to understanding this system?

      Is it possible to quantify a linear contribution from analysis of coherence or is this just back to front reasoning?

      We might then propose that there is no room left for a ‘cloud positive feedback role’, i.e. we would expect a minimum likelihood of ‘x’ (in the observed signals). The test would be whether this is falsified with this linear analysis. Steve

      • sky
        Posted Sep 15, 2011 at 5:05 PM | Permalink

        Hi Steve.

        I don’t PREDICT near perfect coherence. On the contrary, I state that as a REQUIREMENT for Bart’s simplified approach to work effectively. That requirement is grossly violated by the actual data, as the cross-spectral results show, at best, only marginal coherence. For other issues, see my reply to Stockwell. Gotta run.

  57. Richard E North
    Posted Sep 14, 2011 at 5:25 AM | Permalink

    Steve,
    This feels to me like a re-run of the Hockey Stick saga where Dessler 2010 takes the same role as Ammann and Wahl 2007.
    Viz:-
    – A keystone component of the Warmist dogma is attacked by a sceptic
    – This is refuted suspiciously quickly by a member of the Team
    – Close examination of the refutation raises the suspicion that the data and/or statistical techniques applied were selected so as to meet the overriding need of the team to circle the wagons, rather than for any sound scientific reason.

    I have to admit I am lost on the relative merits of the datasets Dessler used and the ones you used. I think I (and maybe a lot of other people) would welcome a further post on this topic which would explore whether there are any sound scientific grounds for Dessler using the datasets he did.

    • Posted Sep 14, 2011 at 4:10 PM | Permalink

      Agrees with Richard… and where Nick Stokes, for whatever reason, is playing the role of doggedly defending the consensus by throwing up meaningless chaff in order to cloud the issue. Slightly surprised to see Paul come in though… although his comment was obviously cribbed from Wikipedia.

  58. Posted Sep 15, 2011 at 6:52 PM | Permalink

    Kudos to Bart… he has tried to open the world of the climatologist to techniques that are well-established outside. The resistance he has faced is… instructive.

  59. Hoi Polloi
    Posted Sep 17, 2011 at 1:33 PM | Permalink

    Tony, I agree, and I have done so. I have not sought to speak from authority in this thread. But when you get stuff like:
    “I thought I could make it plain for you and teach you, but it is clear that you have no clue and do not want one. “
    well, it just has to be set straight.

    An old Chinese saying goes:

    True words aren’t eloquent;
    eloquent words aren’t true.
    Wise men don’t need to prove their point;
    men who need to prove their point aren’t wise.

  60. RomanM
    Posted Sep 17, 2011 at 3:31 PM | Permalink

    Call me skeptical, but it doesn’t “prove” much of anything to merely generate some random data containing relationships which someone believes to be characteristic of the observed data set, but which may not reflect reality. I would suggest it would be more relevant to evaluate what has actually occurred in the analysis itself.

    The Fourier transform and the FFT are linear operators. For the others who may be reading this thread, this means that applying the FFT to the sum of two series is identical to applying the FFT to each series separately and adding the two results together. The same property applies to the numerator series (in this case, dR) in the inversion procedure done in the calculation of h, the “impulse response curve” which was calculated for Bart’s comment earlier. If we take the series dR apart and express it as the sum of two distinct series, we will be able to see the effect of each of the components on the impulse curve.
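
    Linearity is easy to check numerically, e.g.:

    a <- rnorm(8); b <- rnorm(8)
    max(Mod(fft(a + b) - (fft(a) + fft(b))))  # ~1e-15: identical up to rounding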

    It is a simple matter to do a regression to calculate a simple trend line for dR over time along with the residuals from that regression. These two series will provide a simple decomposition of dR. Using a simplified form of Nick’s R version of the Matlab script we can get the following plot:

    The red line is the effect of only the trend line for the dR data. The green line is the effect of the residuals from the regression. The black line (which is equal to the sum of these two curves) is the result of calculating the impulse response from the original data.

    What the lack of difference between the red and the black curves suggests to me is that the original analysis describes the relationship between the temperature series and the negative-slope trend line of the dR data, with very little role played by the residual “wiggles”. One would likely get a similar result if, instead of dR, one were to use any other variable (even one unrelated to climate) with a similar trend and a random residual series added to it. I fail to see how the trend could contribute in any way to create a genuine 4.88 year cycle for such a relationship.

    So where could the observed cycle come from? I used R to fit a loess curve to the temperature data (with default parameters) and got the following:

    Who knows what frequencies might be observed as the loess smooth gets converted into a straight line? However, with only 124 months of data, I doubt any such cycles would have a physically viable meaning.

    • RomanM
      Posted Sep 17, 2011 at 3:34 PM | Permalink

      Here is a script for the above comment:

      #function to calculate impulse response
      #based on Nick Stokes’ script
      #without tapering (w and hw are computed but not applied below)

      impulse.calc = function(Yvar,Xvar,samps = 8192) {
      N=length(Yvar)
      dT=1/12
      Nsamp = samps/2
      Npad=Nsamp-N
      X=fft(c(Xvar,rep(0,Npad)))+1.0e-9 #zero-pad; small offset guards against division by zero
      Y=fft(c(Yvar,rep(0,Npad)))
      hr = Re(fft(Y/X,inv=T))/dT/Nsamp #raw impulse response
      Nc=Nsamp/8
      w=c(rep(1,Nc),(1-cos(pi*(1:Nc-1)/(Nc-1)/2))) #taper (as posted; see follow-up comments)
      hw=hr*w
      f1=c(1:15,15:1); f1=f1/sum(f1) #triangular smoothing weights
      hs=filter(hr,f1)
      t=(0:599)*dT
      plot(t,hs[1:600],type="l",ylab="Impulse Response W/m2/C/yr",xlab="Years",main="Smoothed Impulse Response")
      invisible(hs) }

      flux=read.csv("http://www.climateaudit.info/data/spencer/flux.csv")
      flux=flux[3:126,] #removes NA rows

      dR = flux[,5]-flux[,8]
      temp = flux[,9]

      #Impulse for dR
      test1 = impulse.calc(dR,temp)

      #calculate trend line for dR
      reg.time = (1:124)/12
      dR.reg = lm(dR ~ reg.time)
      dR.pred = predict(dR.reg)
      dR.resid = residuals(dR.reg)

      #Impulse for trend line
      test2 = impulse.calc(dR.pred,temp)

      #Impulse for residuals of trend line
      test3 = impulse.calc(dR.resid,temp)

      #Plot three previous results
      matplot((1:600)/12,cbind(test1[1:600],test2[1:600],test3[1:600]),type="l",lty=1,lwd=2, xlab="Year",
      ylab = "W/m2/C/yr", main="Cloud-Temperature System Smoothed Impulse Response" )
      legend("bottomright",legend=c("dR","Trend","Residual"),lty=1,lwd=2,col=1:3)

      #plot(reg.time,dR) #not run
      #abline(dR.reg) #not run

      temp.loess = loess(temp~reg.time)
      plot(reg.time,temp,xlab="Year",ylab="Temp Anomaly (C)",main = "Loess Plot for Temperature")
      lines(reg.time,predict(temp.loess),col="red", lwd=2)

      • Posted Sep 17, 2011 at 7:43 PM | Permalink

        Roman,
        I see that in the version of the code I posted, I left out the zero padding on the right of w, which Bart used. It should be
        w=c(rep(1,Nc),(1-cos(pi*(1:Nc-1)/(Nc-1)/2)),rep(0,Nsamp-2*Nc))
        I used this in my actual calcs.

        • RomanM
          Posted Sep 17, 2011 at 7:57 PM | Permalink

          I intentionally left out the left side padding because it didn’t make any mathematical sense to pad just those two values. Besides, they weren’t material to the point of the comment.

        • Posted Sep 17, 2011 at 8:17 PM | Permalink

          I’m not speaking of the two missing months at the start. The (half) taper runs for 2048 (Nc=1024) values. Then, as you’ve done it (and as I wrote) it cycles. Bart padded with 6144 zeroes on the right. It won’t make a huge difference, because both versions go toward zero near t=0 (ie in say 8000 to 8192). But some.

        • Posted Sep 18, 2011 at 9:05 AM | Permalink

          And alas, I still managed to get it wrong. Here’s a tested version:
          w=c(rep(1,Nc),(1+cos(pi*(1:Nc-1)/(Nc-1)))/2,rep(0,6*Nc));

          Anyway, I’ve written a new post here setting out in detail why I think the taper is wrong and causes error. It also goes into some detail about h being bilateral, and about periodicity.
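          For readers following along in R, here is a minimal sketch of that corrected taper in the context of RomanM’s impulse.calc above (with Nsamp = 4096, so Nc = 512; the numbers are just the defaults from that script):

          Nsamp = 4096; Nc = Nsamp/8
          #flat section, half-cosine roll-off, then zeros out to length Nsamp
          w = c(rep(1,Nc),(1+cos(pi*(1:Nc-1)/(Nc-1)))/2,rep(0,6*Nc))
          length(w)==Nsamp #TRUE
          plot(w[1:1200],type="l",ylab="taper weight") #roll-off ends at 2*Nc = 1024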

        • Bart
          Posted Sep 18, 2011 at 1:20 AM | Permalink

          If you don’t do the zero padding, you get time aliasing.

          And, put the taper back in. Nick is completely WRONG about this.

        • Bart
          Posted Sep 23, 2011 at 10:36 AM | Permalink

          I had a sort of breakthrough thought on this. Given the proffered system diagram, this is the type of behavior which would be expected.

          Label the top response T1 and the bottom T2. We are trying to estimate T2. The closed loop transfer function from the input Radiation Forcing (RF) to the input point of T2 is H2 = T1/(1-T1*T2). Assume the gain of the loop is “large” within the passband. Then, H2 := T1/(-T1*T2) = -1/T2. So, if the RF is wideband, the spectrum of the input to T2 should be approximately the spectrum of RF divided by mag(T2)^2, which is what RomanM has found.

          The gain from RF to the output of T2 is approximately unity, so the output of T2 should more or less track RF within the passband of the loop. My hunch would be that the passband is probably about 0.3 years^-1, which is where you get the maximum phase margin boost from T2.
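          A minimal R sketch of that approximation, using made-up first-order transfer functions (T1 and T2 below are illustrative assumptions, not anyone’s fitted responses):

          f = 10^seq(-3,0,length.out=200) #frequency grid
          s = 1i*2*pi*f #evaluate on the jw axis
          T1 = 100/(1+s) #assumed high-gain forward path
          T2 = -2/(1+0.5*s) #assumed feedback path
          H2 = T1/(1-T1*T2) #closed loop from RF to the input of T2
          plot(f,Mod(H2),log="xy",type="l",xlab="Frequency",ylab="Magnitude")
          lines(f,Mod(-1/T2),col="red",lty=2) #-1/T2 matches where the loop gain is large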

    • Tom Gray
      Posted Sep 17, 2011 at 4:44 PM | Permalink

      I think what this analysis is saying is that the phase and magnitude response of the relationship between the two parameters is dominated by low-frequency components, and that if the original sequence is passed through a low-pass filter, not much is changed in the impulse response.

      • RomanM
        Posted Sep 17, 2011 at 6:03 PM | Permalink

        In my view, it is a bit more than that.

        I find it difficult to believe that a relationship with a variable whose “low frequency” component is pretty much linear can produce a 4.88-year cyclic outcome (out of a 10+ year data window) without a strong possibility that the cycle is spurious. In this case, the temperature series alone seemed sufficient to produce the final result, without any need for a genuine relationship of the proposed form to exist.

        My personal suspicion is that the cloud-temperature relationship is considerably more complex than the current analyses of either Dessler or Spencer can assess.

        • Bart
          Posted Sep 18, 2011 at 1:23 AM | Permalink

          It is too complex for phase plane analysis. For the other, it would be incredibly unlikely to get such a well defined classic 2nd order response and have it just be coincidence.

    • Posted Sep 17, 2011 at 5:00 PM | Permalink

      does the topic intrigue you enough to do one of your amazingly insightful posts?

    • Posted Sep 17, 2011 at 6:11 PM | Permalink

      Roman,
      Yes, that’s my contention. Because of the padding, the base frequency is (1/8192) mth^-1. The low frequencies can only discriminate a few low-order moments of the data. And they are used for deriving Bart’s results.

      So the feedback cited is -9.4 W/m2/°C, but should be -12.2 W/m2/°C without the hack to h. That’s just the ratio of the trends. And the lag is just the difference of the centre-points (COM, from first moment) of dR and temp.

      And as you say, you could get these from any two quite unrelated series.
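      For concreteness, Nick’s trend-ratio point can be checked directly against the objects from RomanM’s script above (a sketch, assuming dR, temp and reg.time are in the workspace):

      #dc gain as the ratio of the two OLS trend slopes
      dR.slope = coef(lm(dR ~ reg.time))[2]
      temp.slope = coef(lm(temp ~ reg.time))[2]
      dR.slope/temp.slope #compare with the -12.2 W/m2/degC quoted above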

      • Bart
        Posted Sep 18, 2011 at 1:25 AM | Permalink

        Wrong. I have explained, but you refuse to learn.

    • Bart
      Posted Sep 18, 2011 at 1:49 AM | Permalink

      This is a good observation. I have never denied that the trend in dR accounts for much of the low frequency spectrum. And, taking out all but the trend is very much a low pass filtering operation.

      But, if you look at the transfer function in the frequency domain using the trend of dR, you find that the response deviates from a classic 2nd order response. With the other low frequency components added back in, it fits the classic 2nd order response like a glove.

      This is one of the key points I have been trying to get through to Nick. It isn’t just the trend which creates the 2nd order response which gives weight to the observation. It is the very tight agreement with a classic order response across the entire frequency band of 0.01 to 0.1 years^-1 which is portentous.

      • Bart
        Posted Sep 18, 2011 at 1:52 AM | Permalink

        The above comment is to RomanM for his original post.

      • Bart
        Posted Sep 18, 2011 at 1:54 AM | Permalink

        Should have been: “It isn’t just the trend which creates the 2nd order response which gives weight to the analysis. It is the very tight agreement with a classic 2nd order response across the entire frequency band of 0.01 to 0.1 years^-1 which is portentous.”

        • Bart
          Posted Sep 18, 2011 at 2:12 AM | Permalink

          Furthermore, there should be a relationship between clouds and temperature. The IPCC doesn’t reject that there is, they just say the relationship is positive, or at least not significantly negative. That the trends themselves go in the opposite direction is reason enough to doubt that.

        • RuhRoh
          Posted Sep 18, 2011 at 11:24 AM | Permalink

          Bart, et al.;

          It appears that some of the statistically sophisticated respondents, indeed much of ‘climate science’, are somewhat ~undereducated in feedback systems analysis, especially the sophisticated techniques and insights you are bringing to bear here.

          As you have pointed out, the scanty data at issue here are a first class ticket to skepticism about your analysis thereof.

          But, the ‘remedial-education’ issue is clearly dominating the discussion here.

          Can anyone suggest a richer cyclical dataset suitable for elucidation of the methods? e.g., Ceridian PCI (commercial US diesel purchases) vs. US ~Employment figures?
          This might allow folks to explore the ‘long/short record’ analysis question and get more comfortable with Bode plots.

          Perhaps Prof. McKitrick can suggest a readily available, suitably broad cyclic econometric data set, [or ‘school’ me about the ‘frequency’ of published feedback system analyses.]

          Until Bart got rolling here, I hadn’t realized the scarcity of feedback analysis among the many LS trend ‘regression’ analyses. Is regression a reasonable approach to diagnosing feedback parameters? I imagine that Dr. Middlebrook would have mentioned it…

          Do statisticians take any classes in feedback system analysis? Why are Bode plots so uncommon in Climate-related analysis?

          RR

        • Bart
          Posted Sep 18, 2011 at 1:54 PM | Permalink

          Ain’t it the truth!

        • RuhRoh
          Posted Sep 18, 2011 at 3:26 PM | Permalink

          One other idea;
          What is the result of looking at ever-shorter subsets of the already short record?

          Or, starting with the shortest series (2? 3? points) and sweeping up toward the full length, how looks the curve of % non-strange responses?

          Beyond the computational challenge of automating and running so many analyses, perhaps there is a step where a human must sort the non-strange responses, but maybe one could sort automatically by comparing to the result you have reported.

          Maybe Roman will first address the question of whether this approach would help rule out the ‘spurious’ comments.

          Is this a situation where some % of the data points can be withheld and the analysis re-run? Perhaps some statistical legerdemain of this kind would happify folks. Probably not though…

          I don’t see why everyone is so focused on a ~constant trend.
          I imagine that any trend could be added to the data and still get the same Bode…Am I wrong here?

          Talk is cheap…
          RR

        • Posted Sep 18, 2011 at 4:48 PM | Permalink

          RR
          “What is the result of looking at ever-shorter subsets of the already short record?”

          I noted here that the direction of “feedback” depends mostly on the sign of the trend of T over the period. So if you start from Jan 2001 instead of Mar 2000, the trend of T goes negative and strong positive feedback is reported.

          Now it’s true that we don’t have data to spare. But if the conclusion is that sensitive…

        • Bart
          Posted Sep 18, 2011 at 7:47 PM | Permalink

          Not what I found. But, again, there is no justification for removing any data, and every reason to use as long a data record as possible. This is like me saying, I have proof that 1 + 1 + 1 = 3, and being challenged with “maybe, but what if you take away one of the ones?”

        • Steve
          Posted Sep 19, 2011 at 6:43 AM | Permalink

          Hi Bart,

          Using Nick’s truncation of the data results in a triphasic impulse response: an initial positive phase, followed by your biphasic response. regards, Steve

        • Posted Sep 19, 2011 at 8:02 AM | Permalink

          Remember to subtract the mean when subsetting. The original data was zero-mean, but of course only for the full set.

        • Posted Sep 19, 2011 at 8:25 AM | Permalink

          I get the following dc gains, depending on what month the data starts in 2000. Col 2 is the trend(CRF)/trend(T) ratio, col 3 is the dc gain without taper, col 4 is the gain with taper. You can see how it depends on that trend ratio.

          Start  Trend ratio  No taper  With taper
          Mar         -12.22    -12.22       -9.48
          Apr         -13.64    -13.64      -11.32
          May         -13.50    -13.50      -11.22
          Jun         -15.72    -15.72      -12.79
          Jul         -19.39    -19.39      -17.35
          Aug         -25.39    -25.40      -23.52
          Sep         -31.19    -31.20      -29.89
          Oct         -46.48    -46.51      -44.72
          Nov         686.61    647.44      127.54
          Dec          34.52     34.49       15.72
          Jan          18.72     18.71       10.15
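          A sketch of how such a table can be generated, again assuming dR and temp from RomanM’s script (the data start in March 2000; means are re-subtracted for each subset, per the note above):

          for (k in 0:10) { #drop the first k months
          d = dR[(1+k):length(dR)]; tt = temp[(1+k):length(temp)]
          d = d-mean(d); tt = tt-mean(tt) #re-centre each subset
          tm = seq_along(d)/12
          ratio = coef(lm(d ~ tm))[2]/coef(lm(tt ~ tm))[2]
          cat(month.abb[(2+k) %% 12 + 1], round(ratio,2), "\n")
          }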

        • Steve
          Posted Sep 19, 2011 at 9:16 AM | Permalink

          Hi Nick,

          Yes, I forgot to subtract the mean – thanks. I am surprised that you are surprised about the near-equivalence between trend and very low frequency content – surely this is expected by everyone.

          This sensitivity analysis is interesting, but as Mark and Bart have said, the DC gain does not seem at all relevant to the analysis. How about describing the higher-order characteristics of the impulse response?

          I’ve learnt a great deal over the last few days thanks to you, Bart, Mark, sky, and many others – thank you to all.

          There is still the dispute, the resolution of which I believe is likely nuanced and remains quite mysterious to me due to my lack of familiarity. Why do you consider the IFFT (back in the time domain) periodic, and how do you interpret the vector returned? Perhaps on your blog (or better, here) you could take causal and anticausal filters, transform them through the frequency domain and then back again, to illustrate your point. Work, yes, but surely appreciated by myself and many lurkers.

          regards, Steve

        • Posted Sep 19, 2011 at 3:34 PM | Permalink

          I think the sensitivity analysis is interesting and illuminating.
          It’s like removing a single tree from a recon or removing BCPs.

          This is a simple due diligence practice.

        • Bart
          Posted Sep 19, 2011 at 3:56 PM | Permalink

          “This is a simple due diligence practice.”

          No. It isn’t. It is removing the resolution needed to isolate the response at low frequency in the first place. It is as absurd as the thought experiment I suggested above.

          It is stupid.

        • Posted Sep 19, 2011 at 4:00 PM | Permalink

          Steve,
          “as Mark and Bart have said, the DC gain does not seem at all relevant to the analysis.”

          No, Bart said:
          “But, in the meantime, having uncovered what looks to be a very strong negative feedback of -9.5 W/m^2/degC, I think the onus is on those who believe there is a weak or positive feedback to prove it.”

          I’m not surprised that it’s the ratio of trends – I knew it would be because that is the zero-order term in the power series for the transfer function. I’m commenting that it is an unreliable measure of interaction. In fact, it involves no interaction. You could cite it for any series.

          But it dominates the analysis. CRF is assumed to be totally dependent on T. But CRF has a big negative trend, T has almost none. In the real world you’d say, OK, the CRF trend is not a result of T. But in this model you have to say that it says CRF is very sensitive to T. And higher order considerations won’t change that. You have to satisfy the lowest order first.

          As to periodicity, I gave the Wiki link. The DFT and iDFT are formally almost identical, just replacing -i by i. You form the transform as a weighted sum of harmonics of a base freq, then sample. The sum of harmonics is periodic at the base freq. In the iFFT, time and freq have swapped roles.
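          A minimal R illustration of that periodicity (toy data; synth below evaluates the usual inverse-DFT sum of harmonics at an arbitrary continuous t):

          x = rnorm(8); N = length(x); X = fft(x)
          #inverse-DFT sum of harmonics as a function of continuous t
          synth = function(t) Re(sum(X*exp(2i*pi*(0:(N-1))*t/N)))/N
          c(synth(2.3), synth(2.3+N)) #identical: the reconstruction repeats with period N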

        • Posted Sep 22, 2011 at 9:03 AM | Permalink

          For those of us trying to follow along, exchanges based on sarcasm and insult (and responses in kind) make our task harder, and diminish your substantive points.

          Basic vocabulary question. How is “negative delay” (as used in these exchanges over the past two days) to be defined? Bart (here and elsewhere) discusses “negative delay” as a physically impossible concept, whereby an event at time t2 affects another event at an earlier time t1. Interpretation of an analysis that includes seeming “negative delays” must therefore begin with an acknowledgement of their artefactual nature.

          Bart, is this correct? Carrick, does this square with your notion of what a “negative delay” is, in the context of the FFT and other techniques under discussion?

        • Carrick
          Posted Sep 22, 2011 at 9:09 AM | Permalink

          AMac, I’ll point you over to moyhu for this discussion (which I will add shortly). This thread has gotten too long. I may even post on JeffID’s blog on the concept (and not just so I can pick on Bart ;-))

        • bender
          Posted Sep 22, 2011 at 9:56 AM | Permalink

          But wait, don’t kill the thread. What about Mr Bunny? What happens next?
          .
          (Nice stuff Bart)

        • Carrick
          Posted Sep 22, 2011 at 10:17 AM | Permalink

          Bender, I’m not trying to kill the thread (on purpose). I have a dangling tag below that appears to have put a halt to my ability to add any further comments at the bottom of this thread. I’ll use Nick’s (moyhu) site to comment and, with SteveM’s permission, cross-post here as time, Steve, and this blogging software permit.

          Amac, the short take-home is that Bart doesn’t understand the relationship between h(tau) and the physical impulse response function (what you would get if you put in a true impulse into a signal), and why they can sometimes differ. What I am saying is that h(tau) (the quantity computed in Bart’s matlab code) can exhibit negative delays, and that these delays are simultaneously physical and do not really violate true signal causality.

          (Any more than flashing a laser across the moon, and having the laser dot on the surface of the moon travel faster than the speed of light, is a violation of physical/information causality.)

          Fundamentally, there are many types of velocities one can define: one of these is phase velocity, a second is group velocity, the third is information velocity. It’s only the third quantity that is restricted to being less than or equal to the speed of light. The reason that h(tau) gives negative delays is that tau is a measure of the group delay (the rate of change of phase with frequency), which can be negative over a range of the transfer function.

          And yes, Virginia, systems with negative group delay exist in the real world.
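          A small R sketch of that point, using a toy causal filter (the coefficients are assumptions for illustration): its group delay, the negative rate of change of phase with frequency, dips below zero near dc even though the filter is perfectly causal.

          f = seq(0.001,0.5,by=0.001) #cycles per sample
          z = exp(-1i*2*pi*f)
          Tf = (1-0.8*z)/(1-0.3*z) #causal, minimum-phase filter
          dphi = Arg(Tf[-1]/Tf[-length(Tf)]) #phase increments along the grid
          tg = -dphi/(2*pi*diff(f)) #numerical group delay, in samples
          range(tg) #negative near f = 0, positive at higher frequencies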

        • Carrick
          Posted Sep 22, 2011 at 10:19 AM | Permalink

          make that ” the physical impulse response function (what you would get if you put in a true impulse into a system).”

        • Mark T
          Posted Sep 18, 2011 at 9:19 PM | Permalink

          I know you keep arguing about the sign of the constant on the front of the equation… but that’s just DC gain. The feedback is dictated by the poles of the system, not the DC gain. Trend ratios are the linear, first order effects, not feedback. Different beasts.

          Mark

        • Posted Sep 19, 2011 at 5:56 AM | Permalink

          “I know you keep arguing about the sign of the constant on the front of the equation… but that’s just DC gain.”
          Well, it obviously affects the DC loop gain of Bart’s system diagram. And it emerges from Bart’s analysis as the ratio of the trends. The assumptions of the analysis are that temp determines CRF, so the trend in CRF is a consequence of the trend in T, so the ratio determines the sensitivity.

          In fact, Bart’s analysis there makes the role of that sign explicit. He says
          “This is a negative feedback (sub)system precisely because T2 has a 180 degree phase shift at zero frequency, and 1/(1-T1*T2) is the reduction in sensitivity conferred by the feedback.”

          If the sign changes, there’s no 180 shift and it isn’t negative feedback.

        • Bart
          Posted Sep 19, 2011 at 3:32 PM | Permalink

          I just cannot believe this conversation continues. I am completely flummoxed by Nick’s dogged insistence that processes can evolve backwards in time. It has gone from the sublime to the absurd and back again so many times I have whiplash.

          It’s just an artifact of frequency sampling, Nick. Backwards in time evolution does not occur in this universe. How can you possibly have a problem with that and call yourself sentient? Ay yi yi.

        • Bart
          Posted Sep 19, 2011 at 3:49 PM | Permalink

          Steve
          Posted Sep 19, 2011 at 9:16 AM

          “I am surprised that you are surprised about the near-equivalence between trend and very low frequency content – surely this is expected by everyone.”

          Indeed. See here. The trend isn’t the only thing. All low frequencies are -180 deg phase shifted.

          By progressively (and arbitrarily and capriciously) truncating the data set, Nick is merely reaching a minimum frequency resolution whereby the phase shift has reached at least -270 deg, which is where the feedback starts to turn positive. It means nothing. The sign of the feedback is determined by the response at low frequency.

        • Bart
          Posted Sep 19, 2011 at 3:51 PM | Permalink

          For some reason, the link didn’t come out. See here.

        • Bart
          Posted Sep 19, 2011 at 3:53 PM | Permalink

          The physically meaningful sign of the feedback is determined by the response at low frequency. That is what determines the long term response.

        • Hoi Polloi
          Posted Sep 19, 2011 at 4:34 PM | Permalink

          Bart, by now you should know that Nick always wants the last word, no matter what it brings about…

        • Posted Sep 19, 2011 at 4:56 PM | Permalink

          Bart,
          “I am completely flummoxed by Nick’s dogged insistence that processes can evolve backwards in time. “
          As I’ve said several times, you have taken two sets of numbers and put them through an FFT analysis which makes no reference to the fact that they depend on time going forward. It could have been distance, time backward, anything.

          What you have done is, as soon as the impulse response h was calculated, you zeroed the negative part. That makes it causal all right – it just isn’t the impulse response for this data any more. But then you draw the Bode plots of the truncated h and say, hey, it looks like a damped oscillator – it must be causal.

        • Steve
          Posted Sep 19, 2011 at 7:34 PM | Permalink

          Hi Bart,

          Thanks for maintaining patience as you (and others) assume the role of teacher. I lecture in an unrelated discipline and I understand it can be frustrating 🙂

          I don’t believe Nick is saying that processes can evolve backwards in time. He is simply asserting that the relationship between the two vectors is best described by a filter that includes a non-causal contribution (which you remove with the taper).

          I would think that this is only ‘absurd’ due to the nature of your defined model – the effect of temperature on cloud as time evolves. By tapering your data, you have correctly reinforced the following condition:
          A change in temp cannot go back in time to affect dR.

          Of course nothing can really go back in time; however, I don’t think we should close our minds to what your own analysis suggests is a more complex relationship.

          The question arises: is there a model of interacting variables which results in the illusion of non-causality within a simplified linear, time-invariant representation?

          regards, Steve

        • Bart
          Posted Sep 19, 2011 at 9:34 PM | Permalink

          Steve,

          What I have pointed out time and again to Nick (#comment-302956, #comment-303017, #comment-303492, #comment-303497) is that you get these ghosts of backwards in time relationships even when the data are perfectly causal. It means nothing.

          If he wanted to test the reverse causality, he should switch the roles of dR and temp. However, the result is useless because that route is polluted by other dominant inputs (#comment-303206).

        • Bart
          Posted Sep 19, 2011 at 9:35 PM | Permalink

          “…even when the data are perfectly causal” by construction, I mean.

        • Bart
          Posted Sep 19, 2011 at 9:38 PM | Permalink

          Nick said: “But then you draw the Bode plots of the truncated h and say, hey, it looks like a damped oscillator – it must be causal.”

          Yeah, Nick. A “damped oscillator” is a very common type of response. Extremely common. Extremely, extremely common. This is a Lilliputian logical leap.

        • Bart
          Posted Sep 19, 2011 at 9:51 PM | Permalink

          “However, the result is useless because that route is polluted by other dominant inputs (#comment-303206).”

          The result you get is therefore not the transfer function of the dR to temp relationship, but the inverse of the temp to dR relationship. The phase lead at low frequency indicates that in this direction, the output is anticipatory, which is generally the wrong direction for causality in natural systems, and certainly in this one.

        • Bart
          Posted Sep 19, 2011 at 9:59 PM | Permalink

          “…the output is anticipatory…”

          I.e., you would then be having temperature reacting largely to the rate of change of cloud formation. E.g., if clouds stop increasing, temperatures return to normal, even if the clouds stay.

          Cloud formation due to the accumulation of energy, you see, is much more logical.

        • Posted Sep 19, 2011 at 10:04 PM | Permalink

          Bart,

          It might be of interest to get a comment from you about Carrick’s comment at Nick’s, which starts with:

          > Nick no worries there. I think your criticisms are spot on.

          http://moyhu.blogspot.com/2011/09/faulty-tapering.html?showComment=1316438104544#c6551620125560606178

          PS. To comment at Nick’s, choosing Name/URL and filling in a valid URL might improve your chances of appeasing the spam filter.

        • Mark T
          Posted Sep 19, 2011 at 10:30 PM | Permalink

          Unbelievable. The level of ignorance in this thread and over at Nick’s is simply amazing. Carrick doesn’t know what he’s talking about and, based on some of what Nick has said, I wonder if he thinks the same as Carrick.

          The taper is not removing negative frequencies – it is removing the upper half of the impulse response. What Nick doesn’t show in his plot (of the frequency response of the taper) is that the magnitude of the frequency response is two-sided, centered on DC. It has to be. It consists of purely real data. The FFT of real data has complex conjugate symmetry – this is a fundamental concept for anyone that has ever seriously studied Fourier theory!

          I will leave it to Carrick and Nick to figure out what that means. I’ll leave it to the readers to figure out what I think it means… oh, wait, I’ve already made that pretty clear.

          Mark

        • Mark T
          Posted Sep 19, 2011 at 10:42 PM | Permalink

          I should have noted that the response actually convolves in the frequency domain.
          Mark

        • Posted Sep 20, 2011 at 12:37 AM | Permalink

          Mark, it’s true that negative times are at issue here, though I expect that Carrick does have some analogous situation in mind in the frequency domain.

          “What Nick doesn’t show in his plot (of the frequency response of the taper) is that the magnitude of the frequency response is two-sided, centered on DC.”

          It is, but it’s symmetric. And you can’t show two sides on a log-log plot, which is conventional.

        • Mark T
          Posted Sep 20, 2011 at 12:50 AM | Permalink

          Carrick’s reply betrays his ignorance. My suspicion about you is based on the body of your posts in this regard. You certainly made no attempt to correct his obvious error. I fully understand how log-log plots work… I merely mentioned that because it was likely where Carrick got his notion.

          And, no, negative times are not at issue, at least, not with anybody but you. When you slide the window you place the impulse at the center of the response, generating the issue with causality yourself.

          Mark

        • Posted Sep 20, 2011 at 1:42 AM | Permalink

          Mark,
          I don’t think Carrick is ignorant at all – he was just describing an analogous situation in the frequency domain. With that understanding, it made sense to me.

          As to negative times and causality, we start out not knowing if CRF can be taken to depend on T at all, let alone whether the relation is causal. So we try something. If you want to test explicitly whether there is a causal relation, you should postulate a one-sided impulse response and apply a Laplace Transform, not a Fourier transform. Numerically quite different.

          Bart has chosen FFT. That makes no presumption about one-sided h. It may turn out to be one-sided – in fact it was fairly lop-sided. And you may be able to make deductions from that. But not if you chop it right at the start.

          And I haven’t heard any response to the reverse data argument. If the FFT somehow knows to put sense on the positive side of t=0, and noise on the negative, then why doesn’t it do this when the data is reversed?

        • Bart
          Posted Sep 20, 2011 at 3:37 AM | Permalink

          Good try, Mark. I think we’ve given it our best shot. But, Nick has the invincible confidence of the unknowing.

          I’ve explained all the reasons in detail, Nick. You are flat out wrong. But, lead you to the water as I might, you prefer to remain thirsty. So be it.

        • PaulMa
          Posted Sep 20, 2011 at 3:49 PM | Permalink

          Bart,

          Your comments here have been seen by many as being of potentially great significance. Could it be that Nick, having failed to present an adequate rebuttal, is doing nothing more than attempting to wear you down, hoping you’ll take your ball and just go away?

          If so, please don’t give in! You are doing a real service. And I’m sure many admire your patience. Thank you.

        • Bart
          Posted Sep 20, 2011 at 4:35 PM | Permalink

          Thanks, PaulMa. But, I think I’ve said all that can be said, and people who understand the analysis and procedures will come to the proper conclusions. Those who do not, and do not want to, never will.

        • Posted Sep 20, 2011 at 4:46 PM | Permalink

          > Unbelievable. The level of ignorance in this thread and over at Nick’s is simply amazing. Carrick doesn’t know what he’s talking about and, based on some of what Nick has said, I wonder if he thinks the same as Carrick.

          This kind of comment has “potentially great significance,” first and foremost among readers who share an even more amazing level of ignorance on the subject at hand.

          It would be interesting to know what the person who once advised Tony Hansen “to strenuously avoid polishing one’s own nameplate” would think of such a comment.

        • Posted Sep 20, 2011 at 4:48 PM | Permalink

          willard…are you trying to get points from the policywonk? I know that she thinks you are brilliant.

        • Mark T
          Posted Sep 20, 2011 at 5:35 PM | Permalink

          There’s a difference between people that don’t understand, and admit it while hoping to learn, and people that don’t understand yet fail to admit it nor attempt to learn. Nick is notorious for the latter.

          Carrick’s comment is baffling, actually, because I know he otherwise understands these things. Note that in the post previous to the one you linked, he mentions applying tapers on a regular basis and acknowledges they work. Then he goes on to state that Nick’s criticism is spot on (which is almost entirely directed at the taper itself) while stating you shouldn’t be deleting negative frequencies. All true, except the taper does no such thing. Nick himself has complained, in this very thread, that the high frequency 8191/8192 is actually -1/8192 (a low frequency.) That’s pretty good evidence he doesn’t understand anything about what happens with the taper or the concept of an impulse response in general.

          Both are claiming to understand, yet neither really does. Nick because that is his nature as far as I can tell (defend the faith,) Carrick because he didn’t bother to actually look at what was being done to disambiguate that from Nick’s ill-conceived comments.

          Sorry, willard, if I offended, but the back and forth about the exact same point is ridiculous – it is a red herring at best, intentional disinformation more likely (something our host has already openly accused Nick of in the past.) I did not push my own nameplate as you suggest. I merely pointed out that others are claiming a nameplate that is not justified.

          It is difficult enough to explain basic theory in a few paragraphs on a blog. It is even more difficult when there exist those that are loath to fault their own theories which the basics call into question.

          Mark

        • Posted Sep 20, 2011 at 6:44 PM | Permalink

          Mark,
          “It is difficult enough to explain basic theory in a few paragraphs on a blog.”
          The thing is, you don’t explain. You just assert, loudly. I try to explain, with calculations and diagrams.

          OK, here’s one you can try to explain.
          “Nick himself has complained, in this very thread, that the high frequency 8191/8192 is actually -1/8192 (a low frequency.) That’s pretty good evidence he doesn’t understand anything about what happens with the taper or the concept of an impulse response in general.”

          In what way is it “good evidence”? My proposition is standard Nyquist. Sampled monthly, a sinusoid of freq 8191/8192 per month and freq -1/8192 per month are indistinguishable.

          You really don’t understand this science.

        • Carrick
          Posted Sep 21, 2011 at 12:53 AM | Permalink

          When somebody begins to bluster and name call, that’s a sure sign that they are out of their depth, or at least their comfort zone.

          I’m not even sure where to begin, because Mark has said nothing that has any substance, or that is particularly relevant to any of the issues I commented on. Just a lot of techno-jargon, no real substance.

          The statement I made on Nick’s blog related to this erroneous statement attributed to Bart and Nick’s fully accurate criticism of it:

          Now Bart says that the negative t part of h is pure noise and can be discarded.

          This is the sort of statement I can only assume is made by a person who’s never computed an impulse response function (IPR) h(tau) before.

          You can get nonzero values of h(tau) at negative delays for several reasons: a) when there isn’t a strict cause and effect relationship between the two variables for which you’re trying to compute an “impulse response function” [there is no strict cause and effect relationship here]; b) when the system isn’t linear and passive; and c) when the measurement window is finite (you then get splatter into the negative-delay portion of the impulse response function even if the other conditions hold).

          For climate, we have an energy source (the sun), so it’s certainly not a passive system, and it’s certainly not linear. (Nonlinearity can translate positive delays into negative ones, in a similar manner to the way you can get combination-tone frequencies generated by nonlinear distortion in the frequency domain.)

          In this case, even if you had a strict cause and effect relationship between temperature and cloud radiative fluxes (and you don’t), active, nonlinear systems will in general still exhibit negative delays. It’s a well known feature in one of the areas I work in (cochlear mechanics, which is active and nonlinear, with, in some cases, net positive gain).

          MarkT also appears to be confusing negative frequencies with negative delays (or is equally confused if he thinks I am confusing those). I can assure you I have no such problem and am referring to the taper function applied in the time domain, namely the cut-off of the negative delays in h(tau).

          Actual data. It’s a screw-up to set h(tau) = 0 for tau < 0.

          Nick:

          You really don’t understand this science.

          Or he’s made a munchkin level mistake, realized it, has put both feet in it, and doesn’t know how to gracefully step out of the mess he’s stepped into. Some of us are more comfortable admitting personal errors than others of us.

        • Carrick
          Posted Sep 21, 2011 at 1:24 AM | Permalink

          Here’s an example where you only seem to get a zero delay and a (roughly) -4 ms delay.

          Impulse response function.

          This image plot is generated by taking successively overlapping (Hann) windows of the frequency domain data and computing the equivalent (complex valued) IPR for each window. The vertical axis is the center frequency of the Hann window, the horizontal axis is the delay, of course, and the color represents the intensity of the response.

          In this case we absolutely know a causal relationship exists, we are measuring the recorded output from an input stimulus.

          To reiterate what I said earlier, just assuming there is a causal relationship between temperature and cloud radiative flux isn’t in itself a sufficient condition for zeroing out the negative values of h(tau). At best this is just circular reasoning.

        • Mark T
          Posted Sep 21, 2011 at 11:38 AM | Permalink

          Nonsense, Carrick. I explicitly referenced your comment in which you stated that it is wrong to remove negative frequencies. Your words, not mine. I am not confusing negative time with negative frequency – your statement did, however, imply you thought Bart was removing negative frequencies. Your statement was incredibly ill-informed as to what the very simple impulse response taper is doing.

          Stating that it is obvious I have never done an impulse response (transfer function) estimation is pretty hypocritical coming right after accusing me of name calling. Indeed, you’ll note that I actually defended you as someone that I felt knew the difference, and surmised that Nick’s analysis was seemingly the culprit. Hypocrisy seems to be your forte these days, quite frankly. The list of examples of such behavior is increasing.

          Mark

        • Carrick
          Posted Sep 21, 2011 at 11:52 AM | Permalink

          Sorry about that, saying “frequencies” was a misstatement on my part.

          I meant to say negative delay. Sue me. If you were nearly as bright as you carry on, you should have been able to pick that up from context.

          Secondly, I was referring to Bart, not you, with respect to impulse response functions (as far as I know he’s the only one prattling on about negative delays not being meaningful).

          And even then it was not name calling on my part; it’s an observation.

          Have a nice day.

        • Mark T
          Posted Sep 21, 2011 at 11:56 AM | Permalink

          Carrick said:

          You definitely shouldn’t truncate negative frequencies, and even if you know there’s causality and a linear, passive system, you can get “negative” into negative frequency bins.

          Bold mine.

          I ask, which one of us is confusing negative time and negative frequency?

          Perhaps if you had simply said “I meant time,” rather than ‘blustering and name calling’ and making assumptions about my knowledge/experience, you would not have appeared as the following indicates:

          Carrick said:

          Or he’s made a munchkin level mistake, realized it, has put both feet in it, and doesn’t know how to gracefully step out of the mess he’s stepped into.

          and

          When somebody begins to bluster and name call, that’s a sure sign that they are out of their depth, or at least their comfort zone.

          Indeed.

          And, w.r.t. munchkin level mistakes: negative frequencies have nothing to do with causality, linearity, or the passive nature of a system. Seems this is the second time you owe me an apology. Given the obviousness here, should I bet on whether you will own up to it and admit your mistake (and your hypocrisy)?

          Mark

        • Carrick
          Posted Sep 21, 2011 at 12:24 PM | Permalink

          MarkT, I’ll assure you again I fully understand the difference between the frequency domain and time domain, and their relationship to causality, and apologize for any confusion a comment I made late at night may have caused on your part.

          Here is a stab at the corrected text:

          You definitely shouldn’t truncate negative delays, and even if you know there’s causality and a linear, passive system, you can get nonzero values into negative delays. In this case, you can filter h(t) before transforming it back to T(f) … the transfer function, which is usually what we’re interested in, by e.g., filtering out delays that are not realizable (e.g., if I know the maximum delay in the system is 40 ms, and the minimum is 0 ms, depending on the window size, I might keep only the range -5 ms to 45 ms before transforming back). The issue in this case is the width of the window function associated with your taper and window length.

          In any case, I think you were the one who stated

          Carrick doesn’t know what he’s talking about

          and later

          Carrick’s reply betrays his ignorance

          How do you expect me to respond?

          Again this is all unfortunate because if I had stated clearly and correctly what I meant to say, and at the level of understanding that I do possess, some of this needless back and forth could have been avoided.

          Time for something of substance:

          Do you agree or disagree with Bart that the negative delays in the impulse response function should be set to zero?

          After that, we can go back and discuss your and Nick’s exchange:

          “Nick himself has complained, in this very thread, that the high frequency 8191/8192 is actually -1/8192 (a low frequency.) That’s pretty good evidence he doesn’t understand anything about what happens with the taper or the concept of an impulse response in general.”

          In what way is it “good evidence”? My proposition is standard Nyquist. Sampled monthly, a sinusoid of freq 8191/8192 per month and freq -1/8192 per month are indistinguishable.

          and decide who is right on this one.

        • Carrick
          Posted Sep 21, 2011 at 12:29 PM | Permalink

          I’ll draw attention to the fact that in my corrected paragraph, I only made changes to the first sentence. The rest is a straight cut and paste from here.

        • Bart
          Posted Sep 21, 2011 at 1:05 PM | Permalink

          Carrick – By the very structure of our universe, there are no negative delays in cause and effect. There are no closed timelike loops. Anything you find in negative time has nothing to do with any cause and effect relationship you are looking for.

          But, you say, then any time your estimation procedure shows indications of negative time response, you should discard the assumption that there is a cause and effect relationship? Then, you would never analyze any system data at all. Because of the variability of the data and the finite window of time, there are always such spurious indications.

          This is why I have urged Nick to generate artificial data with absolutely assured causal relationship and try his analysis on that. He will find spurious indications of backwards in time relationships then, too. It is inherent in an estimation procedure which uses frequency sampled Fourier transforms or, equivalently, for which the sequence fft-multiply-inverse-fft is a circular convolution.

          This data, when processed the way I have demonstrated, produces a very well defined 2nd order response. Such responses are ubiquitous. It is a strong indication that my assumptions were correct.

          Nick – “Sampled monthly, a sinusoid of freq 8191/8192 per month and freq -1/8192 per month are indistinguishable.”

          Nobody here has suggested overturning Nyquist and Shannon except you. Suggesting these frequencies are the same in the real continuous time world suggests you have lost your way.

          Or, were you suggesting that my low frequencies might be aliased? If so, then you are apparently unaware of one of the most important properties of boxcar average filtering (filtering with a uniform response over N samples and downsampling by N) – there is no aliasing to dc with such a procedure – dc is always a zero of the aliased transfer functions.
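          A sketch of that artificial-data test in R, assuming the impulse.calc function from RomanM’s script above (the AR(1) input and exponential response here are made up, but the construction is causal by definition):

          set.seed(1)
          x = as.numeric(arima.sim(list(ar=0.9),n=124)) #input series
          h.true = 0.5*exp(-(0:11)/3) #one-sided (causal) response
          y = as.numeric(filter(x,h.true,method="convolution",sides=1))
          ok = !is.na(y) #drop the start-up NAs
          est = impulse.calc(y[ok],x[ok])
          plot(est[4096:3497],type="l") #wrap-around tail = "negative time" region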

        • Mark T
          Posted Sep 21, 2011 at 1:31 PM | Permalink

          How do you expect me to respond?

          I expect you to notice that your comment, the one I replied to directly, stated “frequency.” That comment was ridiculously ignorant as stated. How did you expect me to respond? I did follow up by defending you and your knowledge, blaming Nick’s deceptive comments instead. I got miffed because you did not offer me the same courtesy. I do accept your explanation that it all was a legitimate mistake, which is what I was hoping.

          First, the second part of your response:

          and decide who has the right on this one.

          Uh, I think you’re confusing what I was getting at with that comment. Bart says the taper cuts off the results from “high frequency stuff” and Nick says “but 8191/8192 is actually low frequency,” implying he thinks the taper is actually messing with low frequencies as well. Coupled with your comment, it seemed pretty clear that Nick was interpreting a removal of negative frequencies and you agreed. That’s what I was pointing out: that Nick is under the impression (low) negative frequencies were being removed by the taper.

          The taper does not remove the 8191/8192 = -1/8192 term. Bart’s frequency response plots also confirm his claim that removal of the upper half of the IFFT result merely cuts down the high frequency stuff.

          Your retraction and correction leaves Nick’s confusion and ignorance (which I merely observed.)

          Do you agree or disagree with Bart that the negative delays in the impulse response function should be set to zero?

          I think Bart’s frequency plots indicate they differ only in the high frequency region, which is pretty good evidence his claims are at least plausible, i.e., the dominant effect inferred from this procedure is the first half of the response.

          Mark

        • Bart
          Posted Sep 21, 2011 at 1:48 PM | Permalink

          To the degree that the taper cuts off data before the midway point, it does reduce resolution in the low frequency range. But, that is a good thing, because the impulse response of significance dies down long before the mid-point, and the rest is just ordinary variability in which we are not interested.

          When I said the taper affects “high frequency stuff”, I was guilty of ambiguity. The taper smooths the frequency response – multiplication by the taper in the time domain transforms to convolution of the response by the transfer function of the taper window. So, the taper removes high “frequency” wiggles in the frequency response. But, I do not mean the same thing by “frequency” in those last two instances. By the former, I mean the frequency of oscillations in the frequency response.

          Hopefully, I have thoroughly muddied that beyond understanding, but probably the context makes my remarks clear.

          Strike that. Reverse it. Oh, well.

        • Mark T
          Posted Sep 21, 2011 at 2:01 PM | Permalink

          Hehe, I get that, and muddled it even worse anyway in my response to Carrick.

          “in the high frequency region” should be “in the high frequency variations in the frequency response.”

          The thread is large enough that immediate recognition of a misstatement is impossible after hitting “post comment.” On a wireless network that keeps dropping out it’s getting difficult to follow and even Carrick and I cross-posted earlier.

          Mark

        • Bart
          Posted Sep 21, 2011 at 2:06 PM | Permalink

          “…the taper removes high “frequency” wiggles in the frequency response.”

          And, those wiggles become especially pronounced in the high frequency region when you plot on a logarithmic scale. So, in that sense, “high frequency” can be interpreted either way.

        • Carrick
          Posted Sep 21, 2011 at 3:23 PM | Permalink

          Bart I agree with this comment:

          Carrick – By the very structure of our universe, there are no negative delays in cause and effect. There are no closed timelike loops. Anything you find in negative time has nothing to do with any cause and effect relationship you are looking for.

          But that’s a different thing than how one interprets negative delays in the computed impulse response function.

          To get nonzero values of h(tau) only at strictly positive (or zero) delays requires at least the following: 1) that there is a true cause and effect relationship between the variables, 2) that the system be linear and passive, and 3) that the measurement window be semi-infinite in length.

          But, you say, then any time your estimation procedure shows indications of negative time response, you should discard the assumption that there is a cause and effect relationship?

          No I don’t say that. I say negative delays can arise in this case either because a cause and effect relationship doesn’t exist or because one of the other assumptions necessary to get zero values for the impulse response function for negative delays has been violated.

          I also gave an example of a physical system where there is undeniably a cause and effect relationship where a component of the measurement response very clearly had a negative delay.

          This doesn’t mean you throw away impulse response functions; it’s just that the measure has a different interpretation in this case (more importantly, the “impulse response function” can be modeled theoretically and measured experimentally… and the comparison of the two still informs on the underlying agreement with theory).

          /warn speculation

          In the present case, it seems to me we have two variables that have a coupled relationship (rather than a purely cause-effect one): Temperature change can affect cloud formation and hence cloud radiative flux, and clouds certainly affect temperature.

          If the forward versus reverse effects reside in different frequency (not delay) ranges, we might be able to skip around this problem though… for example, the way changes in clouds affect temperature might be pretty high frequency (weather). How cloud patterns (and hence cloud radiative flux) affect temperatures might be very low frequency (e.g. climate).

          It seems plausible that some of what Bart is doing with his tapering is exactly this: A tapered window function centered at DC is a form of low-pass filter after all. So maybe smoothing isn’t just something that “cleans up the picture”, it may be required in order to get an approximately causal relationship between temperature and dR.

        • Carrick
          Posted Sep 21, 2011 at 3:27 PM | Permalink

          I still managed to foobar this comment. Here’s what I meant to say (changes in bold):

          If the forward versus reverse effects reside in different frequency (not delay) ranges, we might be able to skip around this problem though… for example, the way changes in clouds affect temperature might be pretty high frequency (weather). How temperature affects clouds (and hence cloud radiative flux) might be very low frequency (e.g. climate).

        • Posted Sep 21, 2011 at 3:47 PM | Permalink

          “Nobody here has suggested overturning Nyquist and Shannon except you. Suggesting these frequencies are the same in the real continuous time world suggests you have lost your way.”

          I did not suggest anywhere that the frequencies are the same in the real continuous time world. My original statement was perfectly clear and correct:
          “A frequency of 8191/8192 revs per month (RPM), the highest on your axis, is not such a frequency on a monthly sampled time axis. It is a frequency of -1/8192 RPM. That’s the frequency you see after sampling.”

          The FFT does not deal with the real continuous time world. Everything is sampled. That’s all you have. When the FFT generates a sequence of frequencies from 1/8192 to 1, you pass the Nyquist critical frequency half-way, and thereafter, of all the (continuous) frequencies that could generate those sampled values, the negative frequencies are lower, and tend to 0 from below at the “upper” end of the range.
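          Nick’s sampling point is easy to verify in R: sampled at monthly steps, the two frequencies produce identical values.

          n = 0:23 #two years of monthly samples
          max(Mod(exp(2i*pi*(8191/8192)*n) - exp(2i*pi*(-1/8192)*n))) #~0, up to rounding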

        • Posted Sep 22, 2011 at 2:26 PM | Permalink

          Mark T,

          You said something to Carrick that seems to deserve due diligence:

          > Hypocrisy seems to be your forte these days, quite frankly. The list of examples of such behavior is increasing.

          First question. I’m not aware of such a list. Do you still have it? Auditors might be interested to take a look. Full disclosure might be the best policy on this matter.

          Second question. I note that you used words like “hypocrisy” to qualify Carrick’s behaviour, and also Nick’s behaviour hereunder. I am unsure why the behaviour of Nick or Carrick shan’t be characterized instead as “dogged persistence” or “tremendous tenacity”, expressions which, I believe, connote a virtue for an auditor to endorse.

          Here is my question: by which criteria are these epithets judged? It would be important for people to understand which behaviour is considered virtuous and which behaviour is considered less so.

          Thank you for your consideration and for constructively moving forward the discussion,

          w

          PS: Impartial auditors like TerryMN could take a look at these two questions too.

        • Bart
          Posted Sep 21, 2011 at 1:25 PM | Permalink

          “- dc is always a zero of the aliased transfer functions.”

          I think I need to add a little detail to avoid confusion here. DC is always a zero, and the transfer functions are small in the vicinity of a zero. Hence, the low frequency information is generally preserved. It is when you get well outside the low frequency band that aliasing becomes significant. This is another reason that the higher frequency portion becomes progressively unreliable.
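          A quick R check of that boxcar property (N here is an assumed oversampling factor): the N-point average has transfer-function zeros exactly at the frequencies that fold onto dc after downsampling by N.

          N = 30 #assumed samples per averaging period
          H = function(f) Mod(mean(exp(-1i*2*pi*f*(0:(N-1))))) #boxcar response at freq f
          sapply((1:(N-1))/N, H) #all ~0: the bands that alias to dc are nulled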

        • Bart
          Posted Sep 21, 2011 at 2:10 PM | Permalink

          And, these data are end-to-end “boxcar” averages.

        • Bart
          Posted Sep 21, 2011 at 6:17 PM | Permalink

          Carrick
          Posted Sep 21, 2011 at 3:23 PM

          “To get nonzero values of h(tau) only at strictly positive (or zero) delays requires at least the following: 1) that there is a true cause and effect relationship between the variables, 2) that the system be linear and passive, and 3) that the measurement window be semi-infinite in length.”

          1) Cause and effect is precisely what we are testing for. The existence of a well defined and commonly encountered type of response argues that we have found it.

          Besides which, we have reason to expect that the system is essentially as described in my comment at Sep 14, 2011 at 11:34 AM.

          2) Nonlinear smooth systems can be linearized. Passive systems – define precisely what you mean by this. I suspect you mean… well, why don’t you just tell me.

          3) Nonsense. I generated artificial data with the same correlations, sampling rate, and time span as evidenced by the actual data and replicated the analysis here.

          “I also gave an example of a physical system where there is undeniably a cause and effect relationship where a component of the measurement response very clearly had a negative delay.”

          You have claimed such. I suspect you did not zero pad the data properly, and are getting time aliasing.

        • jphilips
          Posted Sep 21, 2011 at 6:25 PM | Permalink

          1. The data being used cover only 10 years; how can you extrapolate to 20+ years? If the returned satellite data were to deviate from the current relative flatness, how would that affect your derived response?
          2. The clear-sky and cloudy-sky data are not simultaneous.
          3. The data are averaged over 1 month, so they can never be safely used to subtract clear from cloudy – the data are smeared over 1 month and can never be data from the same region.
          3a. The albedos of soil and water are very different – cloud over water will show a large TOA flux difference, whereas cloud over land will show less outgoing flux.
          water albedo = 0.02 approx (at some angles)
          ground albedo = 0.1 to 0.5
          cloud albedo = 0 to 0.8
          (per Wikipedia)
          4. Are the columns you have chosen correct? For cloud cover, shouldn’t only SW radiation be considered? Total would also include the BB radiation from the increasing temperature.

          Bart
          Are you saying that a change in temperature should create a corresponding delta in the cloud cover of 9 W/m2/K with a delay of approximately 5 years?

          If this is the case then it should be possible to show the effect in the real world data. Have you tried this?

        • Bart
          Posted Sep 21, 2011 at 6:37 PM | Permalink

          Nick Stokes
          Posted Sep 21, 2011 at 3:47 PM

          “My original statement was perfectly clear and correct”

          There is no relevance. There are no 12 year^-1 frequency components in the data.

        • Bart
          Posted Sep 21, 2011 at 6:52 PM | Permalink

          jphilips
          Posted Sep 21, 2011 at 6:25 PM

          Points 1, 2, and 3 worry about high frequency error sources and things which have been covered elsewhere in the discussions above.

          4. I just analyzed the same data everyone else is. I’m not a climate scientist, just a guy with a lot of background, training, and experience in systems theory and signal processing. More guys like that are evidently sorely needed in the climate sciences.

          From my perspective, I am just analyzing data from two signals and deriving relationships between them. AFAIK, these signals have bearing on the question of whether or not clouds act as a thermostatic regulator for the Earth’s climate system.

          If I understand these signals properly, then yes, a change of 1 degC in temperature should result in a reduction of 9.5 W/m^2 incoming radiation with a time constant of ~5 years (settling in maybe 3X that).

          “If this is the case then it should be possible to show the effect in the real world data. Have you tried this?”

          The real world data is precisely what is being analyzed.

        • Bart
          Posted Sep 21, 2011 at 7:13 PM | Permalink

          Carrick
          Posted Sep 21, 2011 at 3:27 PM

          “How temperature affects clouds (and hence cloud radiative flux) might be very low frequency (e.g. climate).”

          Yeah, that’s what I found. Is a bandwidth of 0.0725 year^-1 (associated period of 14 years and longer) not low enough for you?

        • Carrick
          Posted Sep 21, 2011 at 8:30 PM | Permalink

          Bart, let’s start with active versus passive systems. The simplest definition is that an active system is a system that requires a power source to operate. A transistor would be an example of an active element; a MOX resistor would be an example of a passive element. A system composed entirely of passive elements, with no battery, is by definition a passive system.

          A system that requires a power supply to operate, and has one provided to it, is an active system.

          The Earth’s climate has a very good power supply, namely the Sun, and without the Sun it would shut down and turn into a frozen ball, and you’d have to get your air by the bucketful.

          So the Earth’s climate meets the definition for an active system. It also has at least one stabilizing nonlinearity, coming from the Stefan-Boltzmann equation.

          Next topic, active nonlinear systems. If you have a system with net amplification (gain > 1) of low level signals, such a system is physically unrealizable unless there is a saturating nonlinearity to prevent runaway conditions, since it would (eventually) require an infinite amount of energy to continue to drive the system. But more on that in a bit.

          Let’s consider a resonance tube where you are putting an acoustic signal a(t) = a0 exp(2 pi f t) in at one end (labeled “1”) with (complex) amplitude a0 and frequency f, where the reflection on the end where the signal is injected is R1 and reflection from the other end (“2”) is R2, which is defined as a ratio of the amplitudes of the reflected to incident waves (measured at end “1”) for a wave moving from “1” to “2”. [Similarly R1 is the ratio of the amplitudes of the wave reflected to incident on end “1” of the tube.]

          Anyway, it’s easy to show that:

          ar = a0 (1 + R2) /(1 – R1 R2)

          where ar is the measured amplitude at the end where you are injecting the signal a0.

          For simplicity let’s set R1 R2 = R0 = |R0| exp(-i 2*pi*f*tau0), where |R0| and tau0 are constants. The “-” of course comes from my sign convention for the phase of the signal.

          For the sake of simplicity, let’s also assume R1 is real valued and |R1| < 1. We’ll also assume |R0| < 1. (|R1|, |R2| < 1 can be shown to be required for a passive system.)

          You can compute the impulse response function associated with this system, and you’ll find that you get a series of delta functions at tau_n = n tau0. Of course if you have a finite frequency domain window, you’ll get the convolution of the sum over delta functions with your window response function.

          Of course all of these delta functions will appear at times tau ≥ 0.
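
          In code, a minimal time-domain sketch in R (my own toy numbers, keeping only the 1/(1 - R1 R2) factor and driving the tube with a unit impulse): the tube behaves like the recursion y[t] = x[t] + R0*y[t - tau0], and its response is a train of echoes at lags n*tau0, all at tau >= 0.

          tau0 <- 5; R0 <- 0.6; n <- 40          # toy values
          x <- c(1, rep(0, n - 1))               # unit impulse at t = 1
          y <- numeric(n)
          for (t in 1:n) y[t] <- x[t] + (if (t > tau0) R0 * y[t - tau0] else 0)
          round(y[y != 0], 3)                    # 1.0 at lag 0, 0.6 at lag 5, 0.36 at lag 10, ...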

          Now what happens if |R0| > 1?

          First, it can be shown this is impossible in a strictly passive system (a power source is required to get a gain > 1). Secondly, with a power source it is certainly possible. You could have a microphone/speaker combination at the end with, say, R2, and feed back a signal that was larger than the signal picked up by the microphone.

          We’ve all experienced this, of course, when we have a microphone hooked up to an amplifier and place the mike too close to the loudspeaker, as “microphone squeal.” So such a system is not only possible, it’s one we’re all pretty well acquainted with.

          The question I will pose here is: what does your impulse response function look like? It’s a bit messy, but you expand ar above in powers of 1/R0.

          You’ll end up with a series of delta functions, but they will now all appear at times tau_n = -n tau0.

          However, this system is unphysical.

          To make the system physical, we have to add a saturating nonlinearity.

          The simplest example of this is the Van der Pol oscillator which looks like:

          x''(t) + (-r0 + r2 x(t)^2) x'(t) + w0^2 x(t) = 0.

          This system exhibits self oscillation near the frequency w0/(2pi). If you look at the damping function for this oscillator for very small values of x(t), it is negative, and becomes positive as x(t) becomes large.
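
          A quick R sketch of that behavior (mine, with arbitrary toy values for r0, r2, w0 and the step size), integrating the equation with a simple semi-implicit Euler scheme: a tiny initial displacement grows and then saturates on the limit cycle.

          r0 <- 1; r2 <- 1; w0 <- 2 * pi         # toy parameters
          dt <- 1e-3; n <- 20000
          x <- numeric(n); v <- numeric(n)
          x[1] <- 0.01                           # tiny initial displacement
          for (k in 1:(n - 1)) {
            a <- -(-r0 + r2 * x[k]^2) * v[k] - w0^2 * x[k]
            v[k + 1] <- v[k] + dt * a            # semi-implicit Euler step
            x[k + 1] <- x[k] + dt * v[k + 1]
          }
          range(x[10000:n])                      # amplitude has saturated near +/- 2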

          In my resonance tube example, with net amplification and a limiting/saturating nonlinearity, you’ll end up getting a series of stable tones at separations of f = n/tau0.

          On to data, real world measurements, and negative delays in the next installment. (This has gotten plenty long enough.)

        • Carrick
          Posted Sep 21, 2011 at 8:33 PM | Permalink

          Even after proofreading it, mistakes. I assume:

          a(t) = a0 exp(2 pi i f t) + complex conjugate.

          Missed the factor of “i”. The complex conjugate is obvious to those of us who do this, but people are looking for things to pick at, so…

          If anybody notices other errors or things that need clarification, please point them out.

        • Bart
          Posted Sep 21, 2011 at 8:57 PM | Permalink

          Carrick –

          “…in the next installment”

          Don’t bother, please. We would only end up talking past each other.

          You have some conventional, convenient fiction which apparently has usefulness to you. But systems always progress forward in time, and causes always precede effects. If you want to argue that clouds drive temperature, well, I’ve already covered that. What we are looking at here is the response from temperature to clouds.

          If you have any further problem with that, I really don’t care to waste any more time on it, and I really don’t care.

        • Carrick
          Posted Sep 21, 2011 at 9:08 PM | Permalink

          In the first part, we saw that negative delays are possible in an active system for which there is net amplification. I glossed over what happens in such a system when a saturating nonlinearity is added to stabilize the system and make it physically realizable.

          Fortunately there is a fairly nice physical system where one can perform measurements that pretty closely follows the hypothetical resonant tube system above: It’s the mammalian cochlea.

          I’m not going to enter into a full theoretical discussion of how the mammalian cochlea works, but especially in humans, which have very narrow band tuning, the existence of spontaneous otoacoustic emissions (sounds generated by the ear in the absence of external stimulation) has been known for decades.

          In fact, these emissions, when they exist, are approximately equally spaced in log frequency (this is thought to be a result of the log-frequency place-frequency map of the cochlea). Data from a human subject. The existence of these narrow band signals has been shown to be associated with self-sustained oscillations similar to those of the van der Pol (“limit cycle”) oscillator equation above.

          In fact, it is thought that the human cochlea closely mimics the resonant tube behavior I described above. It’s not surprising, then, that negative delays can sometimes be observed in this system. The picture gets even more complex when you throw in nonlinearity, because that produces a mixing of forward and reverse traveling waves (it acts as a source of reflection). In a “scale invariant” system, the delay associated with this is zero, but if the system is “scale invariant violating” the delay associated with it can be either positive or negative.

          To address Bart’s other point: yes, I do know how to properly compute the impulse response function, and I know about zero padding and all of that other good stuff. As a test of my own code, I use both the FFT and a directly implemented discrete Fourier transform. Both give identical results.
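
          A quick toy check of the FFT-versus-direct-DFT claim (my own snippet, unrelated to the climate series):

          x <- rnorm(16)
          dft <- sapply(0:15, function(k) sum(x * exp(-2i * pi * k * (0:15) / 16)))
          max(Mod(dft - fft(x)))                 # ~1e-15, i.e. identical up to rounding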

          But if Bart continues to disbelieve that this is physically possible, I can provide some experimental data, and allow him to verify for himself that indeed negative delays are possible in a causal, physical system.

        • Carrick
          Posted Sep 21, 2011 at 9:10 PM | Permalink

          Bart:

          If you have any further problem with that, I really don’t care to waste any more time on it, and I really don’t care.

          Shorter Bart: you’ve already made up your mind, you don’t want to be confused with the facts, and you weren’t able to follow the discussion.

          There’s a term for that: “moral coward.”

        • Bart
          Posted Sep 21, 2011 at 9:11 PM | Permalink

          We’re dealing with a smooth, nonlinear system here near its local equilibrium, not some exotic system in a lab driven strategically to a limit cycle. The system is adequately described using linear systems theory.

        • Bart
          Posted Sep 21, 2011 at 9:13 PM | Permalink

          “There’s a term for that: “moral coward.””

          Yeah, you try dealing with such stupid comments about something obvious to anyone with the proper background and experience for over a week, and then call me that.

        • Bart
          Posted Sep 21, 2011 at 9:18 PM | Permalink

          “But if Bart continues to disbelieve that this is physically (im)possible…”

          For time to flow backwards and effects to precede causes? Yeah, I’m a real stick in the mud about that.

          Your example is merely a mathematical abstraction. In the real world, causes precede effects.

        • Carrick
          Posted Sep 21, 2011 at 9:19 PM | Permalink

          Bart:

          We’re dealing with a smooth, nonlinear system here near its local equilibrium, not some exotic system in a lab driven strategically to a limit cycle. The system is adequately described using linear systems theory.

          The human cochlea is such an “exotic system”, and it exists in this “strategically driven to a limit cycle” condition.

          Yeah, you try dealing with such stupid comments about something obvious to anyone with the proper background and experience for over a week, and then call me that.

          Perhaps part of your problem is that you believe you know more than everybody around you (but really don’t)?

          People that know everything are incapable of learning anything. Closed minds are as interesting as closed books.

          We’re dealing with a smooth, nonlinear system here near its local equilibrium, not some exotic system in a lab driven strategically to a limit cycle. The system is adequately described using linear systems theory.

          The system is “smooth” in some sense, but it contains turbulent behavior on almost all scales. That is hardly linear.

        • Bart
          Posted Sep 21, 2011 at 9:28 PM | Permalink

          We’re not dealing with cochleas. We’re not dealing with a marginally stable system. I don’t want to take the discussion off in the direction you want to go because it is immaterial, useless, and a waste of time.

          The response is clearly effectively linear. The form is utterly mundane and usual. Your objections have no merit.

        • Carrick
          Posted Sep 21, 2011 at 9:30 PM | Permalink

          Bart:

          For time to flow backwards and effects to precede causes? Yeah, I’m a real stick in the mud about that.

          So am I, and at this point, I question why you make that comment since I’ve clearly framed this as “not a violation of physical causality.” This smacks of dishonesty on your part.

          Your example is merely a mathematical abstraction. In the real world, causes precede effects.

          Wrong for the nth time.

          It’s not a mathematical abstraction. A squealing mike is a real phenomenon, and not particularly exotic. A cochlea is another real world example. Most of us have two of them, and about 80% of us with normal hearing have at least one ear with one measurable spontaneous emission.

          For the last time, the problem here is that “impulse response function” has the interpretation you wish it to have strictly only in linear, passive systems.

          Whether your interpretation is adequate for this problem is something I believe needs to be established. I don’t think you have established it yet though.

        • Carrick
          Posted Sep 21, 2011 at 9:34 PM | Permalink

          Bart:

          We’re not dealing with cochleas. We’re not dealing with a marginally stable system. I don’t want to take the discussion off in the direction you want to go because it is immaterial, useless, and a waste of time.

          You raised an issue with a statement I made, I merely responded to it.

          If you had any class, you would admit you were wrong on that point, instead of throwing up this silly dismissive nonsense.

        • Carrick
          Posted Sep 21, 2011 at 9:37 PM | Permalink

          Comment stuck in moderation. Wish there were a way of deleting them from the queue (hintz).

          Bart said:

          For time to flow backwards and effects to precede causes? Yeah, I’m a real stick in the mud about that.

          So am I, and at this point, I question why you make that comment since I’ve clearly framed this as “not a violation of physical causality.” For the last time, the problem here is that “impulse response function” has the interpretation you wish it to have only in linear, passive systems.

          Whether your interpretation is adequate for this problem is something I believe needs to be established. I don’t think you have established it yet though, and simply claiming it is so, doesn’t make it so.

        • Bart
          Posted Sep 21, 2011 at 9:44 PM | Permalink

          “For the last time, the problem here is that “impulse response function” has the interpretation you wish it to have only in linear, passive systems.”

          For the last time, that is incorrect. All it needs to be is smooth.

          You are seeking to redefine “impulse response” to be something other than what it is. The clue is in the word “response”.

        • Bart
          Posted Sep 21, 2011 at 9:48 PM | Permalink

          And, if you think you do indeed have something wherein the effect appears to precede the cause, I would suggest to you that you keep in mind what I said here.

          This is why I have urged Nick to generate artificial data with absolutely assured causal relationship and try his analysis on that. He will find spurious indications of backwards in time relationships then, too. It is inherent in an estimation procedure which uses frequency sampled Fourier transforms or, equivalently, for which the sequence fft-multiply-inverse-fft is a circular convolution.
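
          To see the ghost concretely, here is a minimal R sketch (my own toy filter and lengths, not the climate series): the data is generated by a strictly causal response, yet the fft-divide-inverse-fft estimate puts nonzero energy in the wrap-around bins, which read as “negative delays”.

          set.seed(1)
          n <- 256
          x <- rnorm(n)                          # white-noise input
          h <- 0.8^(0:19)                        # strictly causal impulse response
          y <- as.numeric(stats::filter(x, h, sides = 1))
          y[is.na(y)] <- 0                       # filter() leaves NAs at the start
          hest <- Re(fft(fft(y) / fft(x), inverse = TRUE)) / n
          round(range(hest[(n - 19):n]), 3)      # the "negative delay" bins: not zero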

        • Carrick
          Posted Sep 21, 2011 at 9:53 PM | Permalink

          Bart:

          For the last time, that is incorrect. All it needs to be is smooth.

          Still wrong. All you need is a smooth nonlinearity to get negative delays, even in a passive system.

          You are seeking to redefine “impulse response” to be something other than what it is. The clue is in the word “response”.

          I mean by “impulse response function” what is usually meant by it. You compute it by taking the inverse Fourier transform of the transfer function between input X and output Y.

          We call it an “impulse response function” because under certain assumptions it is. But just because you call it an “impulse response function”, doesn’t make it so, and doesn’t mean that the interpretation of negative delays has anything to do with signal causality.

          And for the record, the Earth’s coupled atmospheric-ocean system is nonlinear, active and in fact contains self-oscillations (e.g., the ENSO).

        • Carrick
          Posted Sep 21, 2011 at 9:59 PM | Permalink

          Bart:

          And, if you think you do indeed have something wherein the effect appears to precede the cause, I would suggest to you that you keep in mind what I said here.

          I’m all for artificial data. When are you going to code up my resonance tube example to demonstrate you can get negative delays in a causal system?

          When you’re done with that one, I’ll give you an example of a smooth passive system (no boundaries), let you compute the impulse response function for it, and demonstrate for yourself that you can indeed obtain negative delays in a system which does not violate physical causality.

        • Bart
          Posted Sep 22, 2011 at 2:05 AM | Permalink

          “…and doesn’t mean that the interpretation of negative delays has anything to do with signal causality.”

          That’s what “negative delay” means, child. It means there would be a part of the response which began before the impetus was applied. You see, it’s two words: “impulse”, and “response”. The “impulse” is input to the system. And, the “response” comes out in response to the “impulse”.

          I’m typing this very slowly so that you will understand. If you like, I will add in Mister Bunny and Mister Toad to make it go down easier. Because, you are a special little boy, and Mister Bunny and Mister Toad want so very badly for you to understand.

        • Bart
          Posted Sep 22, 2011 at 2:33 AM | Permalink

          “You compute it by taking the inverse Fourier transform of the transfer function between input X and output Y.”

          This is merely a means to the end. The impulse response is the response to an impulse.

          And, these means are fraught with pitfalls which can give nonsensical results if you do not know what you are doing, like some people with whom I have regrettably recently had discussions.

        • Bart
          Posted Sep 22, 2011 at 2:57 AM | Permalink

          “When you’re done with that one, I’ll give you an example of a smooth passive system (no boundaries), let you compute the impulse response function for it, and demonstrate for yourself that you can indeed obtain negative delays in a system which does not violate physical causality.”

          This is so idiotic. I have tried to explain it to Nick so many times, but he just does not understand. And, you do not, either, apparently.

          Apparent “negative delays” (how I HATE even writing such a logical travesty) result from using the Discrete Fourier Transform as a tool of analysis. It is inherent in the circular convolution which results from frequency sampling. YOU WILL GET SUCH APPARENT NEGATIVE DELAYS EVEN WHEN YOUR DATA IS GENERATED USING AN IMPULSE RESPONSE WITH NO SUCH PROPERTY. They are a fiction. A ghost. You ignore them, because they are not a real part of the true impulse response.

        • Bart
          Posted Sep 22, 2011 at 3:23 AM | Permalink

          And, FWIW, passive in my field means this. In your definition, your claim is not all-encompassing. I can build an active filter with an op-amp and a couple of resistors which will have a very well defined impulse response. What you appear to mean to claim is that an impulse response cannot be defined for a system which is being maintained in a quasi-stable state. Yet, even here, we can often speak of an impulse response of average behavior.

        • Bart
          Posted Sep 22, 2011 at 3:26 AM | Permalink

          I’ve put everything that needs to be said in the above 4 responses. If you have any more asinine comments in mind, read the above again until you get it and don’t have them anymore.

        • Carrick
          Posted Sep 22, 2011 at 8:30 AM | Permalink

          Bart, let’s start with passive versus active. The link you found discussing passive systems… had all passive components (resistors, capacitors, inductors). None of the examples required a battery, so it is equivalent to my definition. I just said it in a bit plainer language (but it’s not my language in any case; this is standard EE 101). So when you say “we”, I assume this is shorthand for Bart googling for a reference that he thought had a different meaning than the one I gave. It doesn’t, and it’s telling that you thought this to be true.

          Negative delays are not a fiction; they are a consequence of a violation of one of the assumptions that goes into the proof that the inverse Fourier transform of the transfer function between two variables X and Y IS the true impulse response function. This is why I make my students derive these sorts of results before they engage in the sort of mathematical gymnastics you went through in your matlab code above.

          Since the description I gave above is how one computes what is “colloquially” called an impulse response function, and this is how you compute the “impulse response function” in your code, your code will suffer from the presence of negative delays that have physical meaning associated with them. They cannot be ignored and indeed, as I discussed above, they can be modeled, and their presence interpreted in terms of a model that adds light to our understanding of the underlying physical processes at work.

          Now when you said

          I can build an active filter with an op-amp and a couple of resistors which will have a very well defined impulse response,

          I don’t think I ever stated anywhere that an active system necessarily gives rise to negative delays; otherwise I would have said “your approach is totally hosed.” And I’m perfectly aware of examples where you have an active system where you don’t get negative delays. In my resonance tube example, keep the mike/amplifier/loudspeaker at end “2”, but limit the net amplification so that |R2| < 1. Active system, no negative delays.

          Passive and linear are sufficient to guarantee you won’t observe physically meaningful negative delays, but they are not necessary. Otherwise, like I said, you’d be hosed, because you are applying the methodology I described above, which can give negative delays, to an active, nonlinear system that is in fact in a continual state of self-oscillation.

          I’m going to comment on impulse response filters more in a second comment. It’s coming whether you like it or not. I don’t expect it to change the behavior of somebody who replies to reasonable commentary by calling it asinine and really descending into grade-school behavior with comments like:

          I’m typing this very slowly so that you will understand. If you like, I will add in Mister Bunny and Mister Toad to make it go down easier. Because, you are a special little boy, and Mister Bunny and Mister Toad want so very badly for you to understand.

          This is shameful behavior on your part. I thought I was dealing with an adult, not a child.

        • Mark T
          Posted Sep 22, 2011 at 10:02 AM | Permalink

          Neither of my posts went through last night… must have been the link.
          Carrick, what you are referring to is called negative group delay. It is a well-known phenomenon resulting from non-linear phase. It is apparent as a time lead in a continuous response, though the impulse response will still exhibit causality. Only a non-causal system can produce an actual time lead in the output. In the frequency domain it will appear as a positive slope in the phase response. The responses Bart generated clearly show a negative phase slope, indicating your example is not the case here.

          You can read a good article on it at www.dsprelated.com/showarticle/54.php.
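
          To make it concrete, here is a small R sketch of the same point (my own toy filter, not taken from that article): h = (1, -0.5) is strictly causal, h(tau) = 0 for tau < 0, and yet its phase slope near DC is positive, i.e. negative group delay.

          h <- c(1, -0.5)                        # strictly causal two-tap FIR filter
          nfft <- 1024
          H <- fft(c(h, rep(0, nfft - length(h))))
          f <- (0:(nfft / 2)) / nfft             # frequency in cycles per sample
          phase <- Arg(H[1:(nfft / 2 + 1)])
          gd <- -diff(phase) / (2 * pi * diff(f))  # group delay in samples
          head(round(gd, 2))                     # about -1 near f = 0: negative group delay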

          After referring to Bart as a moral coward, coupled with your last statement, one has to wonder how you can actually claim any moral high ground regarding name calling and blustering.

          Now that I’ve shown you your error, are you Mann enough to admit it?

          Mark

        • kim
          Posted Sep 22, 2011 at 10:28 AM | Permalink

          I once argued somewhat foolishly with Carrick, only to learn later that he is informed and independent. I often judge on tone, and on this it’s a close three horse race. Carrick by a nose.

          I know the difference between median and mean.
          Mode makes me wonder, and wander and lean.
          ======================

        • kim
          Posted Sep 22, 2011 at 10:36 AM | Permalink

          Nick mounts Pegasus and flies above the slings and arrows of outrageous tone.
          ===============

        • Carrick
          Posted Sep 22, 2011 at 10:40 AM | Permalink

          MarkT, assuming this lets me post… it is absolutely the difference between group delays and true signal (physical/information, insert favorite rubric here ______) delays that is at issue, and I never claimed otherwise.

          I’ve just posted a comment on moyhu that discusses this very thing.

          Now, if you can show me where I’ve mistaken group delay for physical delays, I will certainly be the first to admit it. As to the other… certainly there is an exact equivalence between one comment of mine, prompted by Bart dismissing my arguments out of hand, and his steady litany of insults. In fact I’m sure they are mathematically identical. 😉

          (For the record, I don’t claim high ground, nor do I need to, to note when a person is arguing by ad hominem rather than by substance.)

        • Carrick
          Posted Sep 22, 2011 at 10:55 AM | Permalink

          Sorry, screwed up the link; here’s the comment on moyhu. It would have posted here around 8:45, except ClimateAudit wasn’t accepting my posts at the time.

          Unlike Bart, who steadfastly refuses to accept the existence of negative delays, MarkT recognizes you need to go through a vetting process before you apply machinery that was built for passive, linear systems to active, nonlinear systems that contain self-sustained oscillations, like the Earth’s climate.

          MarkT gives a plausible explanation for why one might be able to neglect negative correlations. Eyeballing curves is not of course the proper way of doing this, but it’s a start (MarkT is admitting that negative delays can happen, and you have to control for this).

          One approach might be to compare your computed impulse response function (positive and negative values), then apply a brick wall filter to include only that portion of the impulse response function which is statistically significant. I’m open to other ideas, but really this is a breakthrough of sorts – until now, we’ve been getting a “stone wall” on any consideration of negative delays.

        • Carrick
          Posted Sep 22, 2011 at 10:58 AM | Permalink

          Sighz Another flummoxed sentence:

          One approach might be to compare your computed impulse response function to an estimate of its noise floor (for positive and negative delays), then apply a brick wall filter to include only that portion of the impulse response function which is statistically significant.

          I understand there is a way to preview comments on this blog. I wish I understood it.

        • Carrick
          Posted Sep 22, 2011 at 11:05 AM | Permalink

          Kim, thanks for the comments. I know I get heated sometimes, but at the same time don’t wish to be less human. 😉 I have to wrap this up here soon. My boss needs me to provide him with a list of poles and zeros for a transfer function of a particular system we just used to collect measurements, so I have some data to analyze, since I don’t have the poles and zeros in hand at the moment. (I’m not kidding. Ironies of life and all that.)

        • Mark T
          Posted Sep 22, 2011 at 11:35 AM | Permalink

          When are you going to code up my resonance tube example to demonstrate you can get negative delays in a causal system?

          In a “scale invariant” system, the delay associated with this is zero, but if the system is “scale invariant violating” the delay associated with it can either be positive or negative.

          I also gave an example of a physical system where there is undeniably a cause and effect relationship where a component of the measurement response very clearly had a negative delay.

          Do you agree or disagree with Bart that the negative delays in the impulse response function should be set to zero?

          Here’s an example where you only seem to get a zero delay and a (roughly) -4 ms delay.

          You can get nonzero values of h(tau) for negative delays occurring for several reasons.

          To get nonzero values for h(tau) for strictly positive (or zero) delays,

          I also gave an example of a physical system where there is undeniably a cause and effect relationship where a component of the measurement response very clearly had a negative delay.

          In every case (and I did not capture some) you are referring to time delays, specifically referencing tau in several, implying a necessity of a negative time delay in the impulse response. Not once do you mention the phenomenon of group delay. The impulse response of a causal system with negative group delay is zero for tau < 0, always. If the system we are analyzing had negative group delay, it would appear in the resulting response. It does not, as I have already noted. If you were to plot the phase response of your example system, it would show a positive phase slope, indicating negative group delay.

          So, if it is the case that you were referring to group delay, how is it relevant to this discussion? How are your examples a legitimate rebuttal to a response that clearly does not exhibit negative group delay?

          Oh, I should also point out that ultimately the sun is the input to the system. Your active examples are all three-terminal devices: input, output, and power supply. You’ve got a problem if you want to use the sun as both. I’ll let you stew on why that is.

          Mark

        • Bart
          Posted Sep 22, 2011 at 11:51 AM | Permalink

          Carrick – “Since the description I gave above is how one computes what is “colloquially” called an impulse response function, and this is how you compute the “impulse response function” in your code, your code will suffer from the presence of negative delays that have physical meaning associated with them…”

          That, in a nutshell, says it all. You do not understand the tools. You do not know what circular convolution is. You think the DFT is a black box into which you put deterministic data and get a deterministic result which represents some kind of “truth”. It is merely a tool. And, if you do not understand how that tool works, you do not understand what the answer you get out of it means.

          None of your “requirements” for being able to estimate an impulse response are necessary. You do not even know what an impulse response is.

        • Mark T
          Posted Sep 22, 2011 at 11:54 AM | Permalink

          For the record, I don’t claim high ground, nor do I need to, to note when a person is arguing by ad hominem rather than by substance.

          But that’s what you seem to be doing, Carrick. Maybe in your mind your comments aren’t ad hominem, but they are.

          MarkT is admitting that negative delays can happen, and you have to control for this

          Only negative group delays. Group delay has nothing to do with whether the impulse response exhibits negative delays. Group delay is irrelevant to the complaint about the taper Bart is using. If the system had a negative group delay, we would see it. We don’t.

          So, again, I ask, why are you trying to rebut an argument regarding negative time delays with an example of negative group delays?

          Mark

        • Bart
          Posted Sep 22, 2011 at 11:55 AM | Permalink

          And thanks, Mark, for your excellent comments. I really did not want to dip my toes into Carrick’s fever swamp and waste my time understanding exactly what he was talking about. It was too obvious at the outset that he was off on an irrelevant tangent.

        • Carrick
          Posted Sep 22, 2011 at 12:00 PM | Permalink

          MarkT:

          So, if it is the case that you were referring to group delay, how is it relevant to this discussion? How are your examples a legitimate rebuttal to a response that clearly does not exhibit negative group delay?

          I believe you are confused here. My series of comments started out in response to criticisms from you over faulty language on my part. They were then continued by Bart, who clearly doesn’t understand the issues associated with negative delays.

          It would be nice, if you are claiming objectivity in this argument, for you to acknowledge his many repetitions of this error in this thread. Otherwise… don’t.

          None of this is per se a rebuttal of Bart’s analysis, other than to say “you can’t throw away negative delays without testing for significance first” and “yes Virginia statistically significant negative delays can happen in real physical systems and no that doesn’t imply a violation of physical causality.”

          Next, as to “clearly does not exhibit negative group delays”: Nick begs to differ.

          Oh, I should also point out that ultimately the sun is the input to the system. Your active examples are all three-terminal devices: input, output, and power supply. You’ve got a problem if you want to use the sun as both. I’ll let you stew on why that is.

          Um… I’m just saying that the sun acts as the power supply for the system.

          It also fluctuates over time, which you can consider as an input to the system if you like.

          It would be somewhat analogous to what happens in an op amp, when the power supply (rail voltage) is varying over time. The term power supply rejection ratio gets used in that context.

          I’ll let you stew over why we use that term, since you appear to like stew.

          Or you can explain why you don’t think power supplies can fluctuate, or, if they do, why that wouldn’t affect system behavior in a way that could be considered analogous to a system “input”.

          (I can measure signals on the output terminals associated with the switching power supply used to drive the op amp. So is the power supply really a power supply or is it an input too?)

        • Carrick
          Posted Sep 22, 2011 at 12:09 PM | Permalink

          Secondly, MarkT: as to “clearly does not exhibit negative group delays”, Nick’s calculations beg to differ. If you want to offer a rebuttal in the form of a calculation of your own, please do.

        • Bart
          Posted Sep 22, 2011 at 12:20 PM | Permalink

          “Secondly, MarkT: as to “clearly does not exhibit negative group delays”, Nick’s calculations beg to differ.”

          No. He hasn’t. He does not understand the tool. You do not understand the tool. And, before you make any bad puns, no, you do not understand me, either.

        • Carrick
          Posted Sep 22, 2011 at 12:23 PM | Permalink

          MarkT:

          Oh, I should also point out that ultimately the sun is the input to the system. Your active examples are all three-terminal devices: input, output, and power supply. You’ve got a problem if you want to use the sun as both. I’ll let you stew on why that is.

          The sun acts as a power supply in the sense that it provides energy to the coupled atmospheric-ocean system. Internally, it converts one form of power to another (thermonuclear into photons), which are then absorbed by the Earth’s system as heat. This heat drives convection and other mechanical processes, and permits the atmospheric-ocean system to function. But the sun does not directly provide input in the form of say mechanical forces on the atmospheric-ocean system. (Not in any meaningful way.)

          (You may or may not recall that part of the definition of a power supply is the conversion from one form of stored energy to another form of energy that can be directly used by the system.)

          The output from the sun also fluctuates over time, which you can consider as an input to the system if you like.

          This is somewhat analogous to what happens in an op amp, when the power supply (rail voltage) is varying over time. The term power supply rejection ratio gets used in that context. Indeed, I can (and do) measure signals on the output terminals of amplifier boards associated with the switching power supply used to drive the op amp. So is the power supply really a power supply or is it an input too?

          So in any meaningful sense, the sun meets all of the requirements for a power supply to a system, rather than simply being a driver of (input to) that system.

        • Carrick
          Posted Sep 22, 2011 at 12:35 PM | Permalink

          Bart:

          No. He hasn’t. He does not understand the tool. You do not understand the tool.

          Based on your many gaffes, at this point, I don’t think you’re in a position to proclaim who “knows” and “doesn’t know”. Sorry.

        • Carrick
          Posted Sep 22, 2011 at 12:50 PM | Permalink

          Bart:

          No, you don’t. That is clear. What you think are gaffes are things you do not understand, and are too proud to ask for clarification to help you understand.

          Bart, seriously, this is a waste of bandwidth. You have opinions about things; you’ve stated them; they’ve been repeatedly contradicted both by me and (whether he likes it or not) by MarkT. And you thanked MarkT even though he implicitly admitted my basic point was right (which is that h(tau) can have statistically significant values for tau < 0 in a physically realizable system). MarkT is trying to save your argument by hand-waving that we can “obviously see there are no negative delays”. I don’t accept that as a rigorous defense (other than in a hand-waving fashion).

          It’s your opinion that I don’t know how to compute an impulse response function. I’ve already offered to give you some of my data, let you compute it yourself, and compare our results. You’ve apparently refused this offer, which puts you in a very bad light in my opinion. (I will make this same offer to Nick.)

          Please say something of substance or let us discontinue this thread. This has gotten beyond obnoxious in terms of how you are behaving.

          At this point, I couldn’t care less what you think… I almost didn’t get involved in the discussion to begin with, because to people like yourself, with obviously limited experience in this sort of problem combined with a huge ego, I might as well be speaking in Martian.

        • bender
          Posted Sep 22, 2011 at 12:56 PM | Permalink

          Bart’s made a point that no one yet has acknowledged as critically important.

          Carrick and Mark T, you talk a lot about your tools, but this is about physical systems – clouds and temperature – not the abstract mechanics of signal processing. Piss all you want; the winner of the name-plate polishing contest doesn’t matter. What matters is which of you can address the question most concisely and coherently.

          kim, your money is misplaced. Carrick is merely winning a battle of his choosing. IMO it’s a battle that’s irrelevant to the question at hand. Which is Bart’s point, and why he’s leaning toward disengagement.

          Gosh, it’s fun when it’s not me acting like a 2 year old. Keep it going. You guys are great.

        • Mark T
          Posted Sep 22, 2011 at 1:07 PM | Permalink

          I believe you are confused here. My series of comments started out in response to criticisms from you over faulty language on my part. They were then continued by Bart, who clearly doesn’t understand the issues associated with negative delays.

          You need to go back and look at where you made these statements. Your “example” system of the cochlea clearly indicates you think one can get a negative time delay in a causal system, and this has extended well after you clarified your “faulty language.”

          Nick’s analysis is irrelevant to the group delay issue. I’ll restate this for the final time; try to understand: IF there is negative group delay, you WILL see it in the causal portion (positive time delays) with a positive slope in the phase response. The link I provided demonstrates this rather clearly. Completely causal response, only positive time delays in the impulse response, yet a positive phase slope.

          I take it from your lack of response that you do not care to, or cannot, respond to the use of group delay as some sort of refutation of time delay issues. Neither is a requirement for the other, so how does your “substance” in any way refute Bart?

          Also, as I noted, the sun cannot be both an input and a power supply. Go ahead and consider it a supply, but you need to come up with an input to replace it in order to maintain your active model, though it is largely irrelevant anyway. We would see a positive phase slope in the causal portion if it mattered.

          You’re doing exactly what you accused Pat of… it is humorous.

          Mark

        • Carrick
          Posted Sep 22, 2011 at 1:12 PM | Permalink

          Bender:

          kim, your money is misplaced. Carrick is merely winning a battle of his choosing. IMO it’s a battle that’s irrelevant to the question at hand. Which is Bart’s point, and why he’s leaning toward disengagement.

          Bart’s leaning towards disengagement????? He’s probably launched 20 ad homs in 10 minutes. Surely this is a record?

          The question I am raising is very much relevant, because it ties into the question of whether you can neglect h(tau) for tau < 0. That’s an important question here.

          Insulting and belittling me (or Nick) for posing questions does not advance the argument. Pointing out errors in them advances the argument; making erroneous statements that don’t point out errors, and then not owning those erroneous statements, does not.

          My responding to Bart’s ad homs gets me censored by MarkT, while Bart’s ad homs are somehow OK. Just wow.

          Great how this forum works doesn’t it? And by works, I mean something different than “works”.

        • Carrick
          Posted Sep 22, 2011 at 1:22 PM | Permalink

          MarkT:

          Nick’s analysis is irrelevant to the group delay issue. I’ll restate this for the final time; try to understand: IF there is negative group delay, you WILL see it in the causal portion (positive time delays) with a positive slope in the phase response. The link I provided demonstrates this rather clearly. Completely causal response, only positive time delays in the impulse response, yet a positive phase slope.

          Sorry if I didn’t respond to this sooner; I was dealing with all of Bart’s MarkT-stamp-of-approval ad hominem attacks (no one-sidedness from you, right, MarkT?).

          What you say here is actually false, and I have seen it in real world data. The overall phase can still have a causal slope (downward/negative) and have components with negative delay in h(tau). The only time you can tell by looking at the slope by itself is when the acausal component dominates the response (otherwise you will see “wiggles” in the phase associated with the interference between the causal and acausal components).

          I’ll make the same offer I made to Bart (this is a put up or shut up moment): I’ll provide you with data where this is true, and you can verify for yourself whether or not you see a negative delay component.

          Or you can construct your own synthetic data and verify what I say to be true or false.

          I’m sorry the point about how the sun acts as a power supply went by you. I see no point in repeating it, and will put it down to your lack of experience with these types of systems. Some things are just useless to argue over.

        • Bart
          Posted Sep 22, 2011 at 12:35 PM | Permalink

          Carrick
          Posted Sep 22, 2011 at 12:23 PM

          What a lot of blather, signifying nothing. You could have saved a lot of space if you had just said “I think the Sun acts as a power supply.” It’s just an assertion. And, it means really nothing insofar as the discussion is concerned.

        • Bart
          Posted Sep 22, 2011 at 12:36 PM | Permalink

          ‘Based on your many gaffes, at this point, I don’t think…’

          No, you don’t. That is clear. What you think are gaffes are things you do not understand, and are too proud to ask for clarification to help you understand.

        • Carrick
          Posted Sep 22, 2011 at 12:37 PM | Permalink

          Bart, to be clear, I wasn’t addressing you, and I didn’t expect you to be able to follow the conversation.

          If you want to learn something instead of stomping up and down like a little kid, I suggest you go look up the definition of power supply, and make your own choice at that point.

          The word “blather” from Bart is synonymous with “Bart has no idea in hell what is being talked about.”

        • Bart
          Posted Sep 22, 2011 at 12:46 PM | Permalink

          Carrick – you’ve made such a fool of yourself with your rambling and appeals to irrelevancies. It would do you good to humble yourself, and ask for help from people who understand how the tools are used and what the outputs mean.

          You could try giving a little thought to it yourself. What is a DFT? How does it differ from a DTFT? What are its limitations? What is circular convolution? What exactly do the two ends of the cross correlation represent? Why is one of interest, and one not? What do I get when I deconvolve the cross correlation by the autocorrelation? Where and how do errors in the input manifest themselves?

          You just do not get it. You do not understand the tools.

        • Carrick
          Posted Sep 22, 2011 at 12:57 PM | Permalink

          As I said elsewhere Bart. This is a go-nowhere conversation.

          If you wish to use this forum just to insult me, please have at it until/unless SteveM says “enough”.

          Are you actually an adult?

        • bender
          Posted Sep 22, 2011 at 12:59 PM | Permalink

          Yeah, but he can turn the crank, so he thinks that makes him qualified.

          But never mind. Get back to the real question.

        • Bart
          Posted Sep 22, 2011 at 1:00 PM | Permalink

          “This is shameful behavior on your part. I thought I was dealing with an adult, not a child.”

          Your behavior was shameful from the get go, charging in here and making all manner of bald assertions which had no basis, and presuming to instruct. You have backed off of some of your categorical statements which proved to be erroneous, and are now fighting a rear guard action trying to hold on to whatever shreds of credibility you can.

          “What exactly do the two ends of the cross correlation represent? Why is one of interest, and one not?”

          On that, I jumped ahead of myself. The two ends of the estimated impulse response are not of equal interest.
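
          A toy R illustration of that point (my own construction, white noise through an arbitrary causal filter, nothing to do with the climate series): in R’s convention, ccf(x, y) at lag k estimates cor(x[t+k], y[t]), so the causal response shows up on the k <= 0 end of the cross correlation, and the other end is just noise.

          set.seed(2)
          x <- rnorm(500)                        # white-noise input
          y <- as.numeric(stats::filter(x, 0.7^(0:9), sides = 1))
          y[is.na(y)] <- 0
          cc <- ccf(x, y, lag.max = 12, plot = FALSE)
          out <- data.frame(lag = as.vector(cc$lag), r = round(as.vector(cc$acf), 2))
          out[out$lag %in% c(-6, -3, 0, 3, 6), ]
          # large |r| at lags <= 0 (the end of interest); noise-level values at lags > 0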

        • bender
          Posted Sep 22, 2011 at 1:03 PM | Permalink

          Clarification:

          Yeah, but *Carrick* can turn the crank, so he thinks that makes him qualified.

          Don’t dismiss each other. Return to the question and summarize, based on what you’ve learned.

        • Bart
          Posted Sep 22, 2011 at 1:09 PM | Permalink

          “As I said elsewhere Bart. This is a go-nowhere conversation.”

          That was predetermined by your initial hubris, and your lack of substantial arguments.

        • Bart
          Posted Sep 22, 2011 at 1:28 PM | Permalink

          bender
          Posted Sep 22, 2011 at 1:03 PM | Permalink

          “Don’t dismiss each other. Return to the question and summarize…”

          1) Causes precede effects. An impulse response extending backwards in time is therefore a logical absurdity.

          2) DFT based estimation of impulse response always creates apparent backwards-in-time behavior. It may produce it repeatably within tolerances for a particular system, but that does not make it physically significant.

          3) None of Carrick’s “requirements” for successful estimation of the impulse response are necessary.

          4) Phase leads do not require backwards in time impulse responses.

          5) Impulse response characterization via the chosen method for a given system may or may not yield valuable results. When you see common forms in the result, it is a pretty good tip-off that, in that instance, it probably did.

          6) People whose sole intent is to disrupt a conversation, demonstrate workaday knowledge of their particular niche, and bog down the discussion in irrelevant technicalities are thoroughly contemptible and deserve no respect. However, the conversation involves a wider audience, and that should be considered when expressing your contempt and disrespect, however justified.

        • Tom Gray
          Posted Sep 22, 2011 at 1:28 PM | Permalink

          Would someone explain for the uninformed how “negative delays” are physically implemented? An impulse response is the response that follows a unit impulse. How can delays go back in time before that?

        • Carrick
          Posted Sep 22, 2011 at 1:32 PM | Permalink

          bender:

          Yeah, but *Carrick* can turn the crank, so he thinks that makes him qualified.

          Don’t dismiss each other. Return to the question and summarize, based on what you’ve learned.

          I’ve got 20 years of more than turning the crank on this, bender, and more to the point, exactly the sorts of analysis that I’ve been discussing above. I can derive the relationships from scratch, it’s something I require of any student that works for me, and I generally can push this sort of analysis further than most other people I know.

          Since his only modus operandi is to attack me personally, I don’t see any other choice than dismissing Bart at this point.

          From my perspective, this thread has wound down. Sorry I got involved; it has been a complete waste of my time. (No, willard, it isn’t your fault; I could have declined to participate in this thread.)

        • Steve McIntyre
          Posted Sep 22, 2011 at 1:57 PM | Permalink

          I should have snipped out the ad homs as against blog policy. It shouldn’t be necessary.

          Perhaps someone can answer a simple question for me in connection with Dessler’s original regression as I’m not familiar with control theory terminology (though I’m familiar with some pieces.)

          I’ve done some simple experiments with synthetic feedbacks and then tried a regression a la Dessler 2010 and thus far, it doesn’t seem to me that the methodology of Dessler 2010 comes even close to recovering the underlying process. Can anyone speculate what was in Dessler’s mind when he applied this method? Does it occur in any texts on the subject?
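
          For anyone who wants to experiment, here is a minimal sketch of the sort of synthetic test I mean (my own toy construction; the AR(1) “temperature”, the two-year lagged feedback kernel, and the noise level are arbitrary assumptions, not a claim about the actual data):

          set.seed(3)
          n <- 130                               # roughly the monthly 2000-2010 span
          temp <- as.numeric(arima.sim(list(ar = 0.8), n))  # red-noise "temperature"
          lam <- 0.5                             # true feedback, w/m2/K
          w <- exp(-(0:23) / 12); w <- w / sum(w)           # lagged kernel over ~2 years
          cld <- as.numeric(stats::filter(temp, lam * w, sides = 1)) + rnorm(n, sd = 0.5)
          coef(lm(cld ~ temp))[2]                # no-lag regression a la Dessler 2010
          # the recovered slope comes out well below lam; the lag degrades the regression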

        • Posted Sep 22, 2011 at 1:53 PM | Permalink

          Tom,
          We don’t know that anything is being physically implemented. We just have two series of numbers and are trying to find out something about them. We don’t, a priori, know if temp “causes” CFR or CFR causes temp. Indeed, the system postulated (Bart’s two-box) is a loop.

          That’s my objection to the circularity of the logic. First the impulse response is truncated, negative values removed. Then it is Bode plotted, then said to resemble a causal process.

        • Tom Gray
          Posted Sep 22, 2011 at 2:18 PM | Permalink

          I did not ask whether anything is being physically implemented. I asked how a negative delay would be implemented.

          We do not have two sets of numbers.

          We have observations

          Bart has shown that these numbers agree reasonably well with a simple feedback model and takes this as physical evidence. That is all Bart has done.

          Now we read talk about “cyclic behavior”, phase leads, and system response to cyclic behavior. This does not match any physical model that I can conceive of.

          What on earth is “cyclic behavior”?

          How can a system know that the signal is cyclic, and how can it know that it is not going to be turned off immediately?

          How can a unit impulse, which is acyclic as a signal, be affected by properties that are only applicable to “cyclic behavior”?

          What on earth is “cyclic behavior”?

        • Tom Gray
          Posted Sep 22, 2011 at 2:23 PM | Permalink

          Nick Stokes writes:

          =================
          That’s my objection to the circularity of the logic
          ===================

          Bart’s investigation is one of physics and not mathematics. He is matching observation to the predictions of theory. This is inductive and not deductive logic. There is no circularity in this; it is just the proper application of inductive logic.

        • Bart
          Posted Sep 22, 2011 at 2:23 PM | Permalink

          “Since his only modus operandi is to attack me personally, I don’t see any other choice than dismissing Bart at this point.”

          Ha! I made all the arguments needed early on. Like Nick, you chose to ignore those which discomfited you and tried to steer the discussion into a strawman irrelevant to the system at hand. When you choose not to address my arguments, what is there left to do but get in some gratuitous insults?

          I’ve been at this game much longer than you, Carrick. I have made products which work based on these principles. Indeed, I didn’t bring in any of the heavy machinery because A) this isn’t my day job B) it wasn’t necessary and C) when so many people can’t even understand this basic analysis, what hope is there for introducing even more complexity?

          You do not understand your tools. It is apparent by what you have been arguing. Your point has been a very narrow, technical, and irrelevant one.

          Steve – thank you for your patience, tolerance, and good works. I apologize if I have taken untoward liberties with your resource in addressing Carrick’s… (gulp) whatever they are.

        • Bart
          Posted Sep 22, 2011 at 2:24 PM | Permalink

          Oh, and Steve, I discussed the shortcomings of Dessler’s analysis here.

        • Bart
          Posted Sep 22, 2011 at 2:28 PM | Permalink

          Nick Stokes
          Posted Sep 22, 2011 at 1:53 PM | Permalink

          “We don’t, a priori, know if temo “causes” CFR or CFR causes temp.”

          As I have said time and time again, and you have ignored it, if you want to investigate causality in the other direction, don’t use the chaff on the other end of the impulse response estimate. Swap temp with dR, and redo the analysis. What you will get is the inverse of the response we have found here, and it is thoroughly unphysical.

        • Bart
          Posted Sep 22, 2011 at 2:35 PM | Permalink

          Steve McIntyre
          Posted Sep 22, 2011 at 1:57 PM

          Also, what was on his mind was that there was negligible delay in the response, so the outcome should be more or less linearly related according to the feedback factor. I have demonstrated that the delay is, in fact, on the order of about 5 years for the relevant frequency range, and this assumption was unwarranted.

          I told him so on the WUWT thread back in the day (the links are in a comment of mine on the previous thread on this topic) and recommended that he do just such an analysis on the data as I have presented here. But, he did not listen.

        • Carrick
          Posted Sep 22, 2011 at 2:40 PM | Permalink

          This is my final contribution to this thread, except for substantive questions. I will not respond further to ad homs or attempts to derail the conversation with arguments over who can use his tool better or whatnot.

          Bender asks for a summary; I respect Bender, so I’ll give one (and will hope not to get pushed into moderation):

          Method 1: Measuring the physical impulse response of a system using an impulsive signal.

          1) A “true” impulse response function is the response of a system to a very-narrow-band impulse.
          2) This has a number of limitations, including the ability of the transducer used to generate the impulse to faithfully generate it (nonlinearity of the transducer is one problem).
          3) If the system has noise, because the power associated with the impulse is very small, this limits your ability to resolve the true response of the system over the background noise.
          4) Sometimes, instead of one impulse, we apply a series of impulses, with a sufficient delay between impulses to allow the response of the system to return to a nearly quiescent state before the next impulse is applied. Then we average these together to knock down the noise.
          5) The problem with this is the “RMS” power of the input signal. For a finite-duration measurement, the RMS signal associated with this train of impulse-like signals is very low.
          6) For a causal (e.g. “real world”) system, you will never get a response from the system before the impulse to the system occurs.
          7) This method is not very efficient, is fraught with problems, and is often discarded in favor of the broad-band method (to be described next).
          8) For the sake of clarity, we will call the measured output, normalized to the amplitude of the input, I(tau), where tau = t – t0 is the time relative to the time t0 at which the original impulse was applied.

          Again I(tau) is strictly causal, I(tau) = 0 for tau < 0.

          Method 2: Measuring the physical impulse response of a system using a broad band signal.

          1) As an alternative to Method 1, we can inject a signal X that is broad band input the system and measure the system response Y to this input.
          2) We can compute the transfer function between X and Y by simply Fourier transforming (denoted FT(…)) the two time series to get: T(f) = FT(Y)/FT(X); see the sketch after this list.
          3) If we inverse Fourier transform T(f) we get h(tau) = IFT(T).
          4) Under certain circumstances (and yes I can prove this myself), you can demonstrate that h(tau) = I(tau), and under those circumstances h(tau) = 0 for tau < 0.
          5) Conditions under which h(tau) ≠ I(tau) may be true: If the system is active and there is net gain, you can get negative delay. I have given an example above. Here’s a simple one for those of us who like synthetic data: T(f) = 1/(1 – R exp(-2*pi*i*f*tau)), where R &lg; 1.
          6) You can also get negative delays in a system that is smooth, because nonlinearity can cause reflections and acausal behavior in its own right. Take an infinite wave-guide and let the medium be nonlinear and dispersive; you’ll end up with a nonzero value of R that is level dependent, but nearly independent of frequency. (The nonlinearity mixes forward- and reverse-going waves in the waveguide.) However, because the system is dispersive, the phase of the nonlinear reflectance will vary with frequency, so in general it “noodles” around tau = 0, but can drift to either positive or negative values of tau.
          6) Because X is broadband, but doesn’t encompass 0…infinity in the frequency domain, h(tau) will always be a frequency-domain filtered version of I(tau).
          7) If we don’t have a signal generator, but can measure the signal Xm(t) that is the input to the system as well as the output Ym(t) (the “m” signifies that both X and Y are measured, and we have made an assumption of causality between them), then we can use “signals of opportunity” to estimate the corresponding transfer function Tm(f), and once we have Tm(f) we can inverse transform to obtain hm(tau).
          8) We have no guarantee in this case that Tm(f) = T(f) or that hm(tau) = h(tau). The reason is that we don’t control the input signal, and the input and the system whose response we are measuring may be covarying with some quantity that we aren’t monitoring. In my research, we might be monitoring pressure, but both X and Y might also be functions of seismic activity, temperature, etc. This can lead to, among other things, apparently acausal signals (in my case, atmospheric pressure waves travel much slower than seismic waves, which can lead to arrivals that are acausal with respect to the atmospheric pressure wave).
          9) If X and Y aren’t truly causally related, for example, they are quantities that covary (one may cause the other to change, and the other may cause the first to vary, for example rabbit and fox populations), then you can still compute hm(tau), but it has nothing to do with an ordinary impulse response function, and double-sided behavior in hm(tau) is expected.
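
          For concreteness, a minimal R sketch of steps 1)–3) with a made-up causal system (real work would smooth the cross-spectra before dividing):

          set.seed(42)
          x <- rnorm(512)                                  # broad-band input X
          y <- as.numeric(stats::filter(x, 0.8, method = "recursive")) # causal system
          Tf <- fft(y) / fft(x)                            # step 2: T(f) = FT(Y)/FT(X)
          h_est <- Re(fft(Tf, inverse = TRUE)) / length(x) # step 3: h(tau) = IFT(T)
          round(h_est[1:4], 2)                             # ~ 0.8^(0:3), the causal side
          # the upper end of h_est wraps around to tau < 0 and is near zero here,
          # as step 4) says it should be for a passive, linear system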

          What does this mean for this analysis?

          1) When causality between X and Y is not known to be true, it is an assumption that must be tested.
          2) Even if we know that causality is present, in a system like the Earth’s climate, which is nonlinear and active and in which self-oscillation is present (by the way, from the systems theory perspective, sustained internal oscillations are considered evidence of the presence of a “battery” or “power supply”), there is no guarantee that hm(tau) will be causal (meaning there may be statistically significant values of hm(tau) for tau < 0).
          3) This means you have to justify truncation of hm(tau) to values of tau ≥ 0.

          And by “justify” I mean you have to perform an analysis demonstrating that hm(tau) is statistically indistinguishable from zero for tau < 0 before you are allowed the pleasure of truncating hm(tau) to values of tau ≥ 0.

          That’s all I have time for. If somebody asks me a reasonable question, as a legitimate complaint not related to my “tool use”, I will respond given time.

          Hopefully somebody found this of use, maybe even Bender who thinks I crank my tool too much. :-;

        • Steve McIntyre
          Posted Sep 22, 2011 at 3:11 PM | Permalink

          Carrick and Bart, thanks for all of this. Can both/either of you recommend texts/articles that deal with the topics from the perspectives that you recommend?

        • Carrick
          Posted Sep 22, 2011 at 2:45 PM | Permalink

          This should have been:

          8) We have no guarantee in this case that Tm(f) = T(f) or that hm(tau) = h(tau). The reason is that we don’t control the input signal, and the input and the system whose response we are measuring may be covarying with some quantity that we aren’t monitoring. In my research, we might be monitoring pressure, but both X and Y might also be functions of seismic activity, temperature, etc. This can lead to, among other things, apparently acausal signals (in my case, atmospheric pressure waves travel much slower than seismic waves, which can lead to arrivals that are acausal with respect to the atmospheric pressure wave).

          8) translates to 8).

          (see if I got the escapes right on that.)

        • Carrick
          Posted Sep 22, 2011 at 2:49 PM | Permalink

          OK, a couple more typos

          1) As an alternative to Method 1, we can inject a signal X that is broad band into the system and measure the system response Y to this input.

          T(f) = 1/(1 – R exp(-2*pi*i*f*tau)), where R > 1.

          And of course replace all smile faces with <p>8).

        • Tom Gray
          Posted Sep 22, 2011 at 2:52 PM | Permalink

          Carrick writes:

          =======================
          A “true” impulse response function is the response of a system to a very-narrow-band impulse
          ==============================

          The impulse is as broadband as a signal can be. It contains all frequencies.

        • Carrick
          Posted Sep 22, 2011 at 3:01 PM | Permalink

          Thanks Tom, I thought I got that mistake (error in copy/paste from where I was proofing it).

          I meant “very narrow-width” impulse. You want the impulse, of course, to be as broad band as can be (limited by the capacity of the generator of the impulse to produce this signal), with the mathematical ideal being a Dirac delta function, as mentioned above.

        • Bart
          Posted Sep 22, 2011 at 3:04 PM | Permalink

          My final word to Carrick.

        • Bart
          Posted Sep 22, 2011 at 3:20 PM | Permalink

          Steve McIntyre
          Posted Sep 22, 2011 at 3:11 PM

          Phase Plane Analysis: Hopefully Ogata still discusses this. This is the 5th edition. Mine has no edition number on it.

          I have still never seen as good a book on FFT methods of spectral estimation as the classic Oppenheim and Schafer.

      • Mark T
        Posted Sep 18, 2011 at 2:17 AM | Permalink

        I don’t think he understands that feedback and poles are synonymous.

        Mark

        • RuhRoh
          Posted Sep 22, 2011 at 10:26 AM | Permalink

          Nice article on ‘negative group delay’.
          I learned something from that.
          I reply here because the other thread is too far indented.

          I encourage the combatants to limit the number of usages of ‘you’ and ‘your’ for demeaning attribution, in any given reply, as a way to damp down the disproportionate responses.

          The technical fracas is interesting, the mud wrestling, less so. Who will be the first to do an eye gouge? The ad hom attributions undercut the value of the work, and are self defeating.
          Easy for me to say…

          Perhaps the parties will humor me on dropping the debate about who needs to apologize to whom, and whose feelings are more wounded… It is merely tiresome and dilutive at this juncture, IMHO.
          You are all obviously successful technical professionals. I doubt this kind of thing is conveyed in memo form at work.

          This topic is too important to allow it to be squandered and subsumed by sophomoric squabbling…

          RR
          (herewith, ‘you’ is used in positive sense)

        • Carrick
          Posted Sep 22, 2011 at 11:00 AM | Permalink

          Agreed RR. A bit of toning down of personalities would improve the readability of the thread. I’ll do my part to try and keep the heat down in my comments.

        • Bart
          Posted Sep 22, 2011 at 12:02 PM | Permalink

          The immediate topic lends itself to that, because it is so utterly jejune. Carrick isn’t interested in the problem at hand. He wants to inform us that he is brilliant.

          Sorry if I am intemperate. I’ve been at this for too long and I need to just walk away. Yet, leaving the waters to be muddied by the likes of this popinjay is equally unsettling to me at this time.

        • kim
          Posted Sep 22, 2011 at 12:56 PM | Permalink

          The mixture is far too unsettled for me to read the tea leaves, yet.
          =============

        • Steven Mosher
          Posted Sep 22, 2011 at 1:44 PM | Permalink

          bart:

          When we started this you wrote

          “Nobody is going to believe something some anonymous guy posting on a blog would say. Even if I did the analysis, someone respected by a lot of people would need to replicate it. So, why take the time to sort it all out on my own when it would be wasted effort?”

          And I noted that if you did an analysis, these respected people would show up. I listed Carrick and Nick, among others. There are a good number of people here who have experience with control theory. (Willard ain’t one of them.) It’s also important to note that Carrick and Nick do not agree on all matters. One of the nice things about this place (as opposed to RC or Tamino or pick any warmist site) is that people who may disagree about global warming can come together to discuss methods. It can get frustrating and testy. But it beats rabbit droppings and echo chambers.

          We already know that Carrick is brilliant as is Nick. We know that from years of reading their stuff, seeing them argue, reading their code, seeing them change their minds. That is why I promised that they would show up. I think what needs to be established is not who is more brilliant, but rather who can identify the issues at hand in a clear manner and suggest an agreed upon approach to resolve the issues.

          If we are really lucky RomanM will come back.

        • Bart
          Posted Sep 22, 2011 at 2:50 PM | Permalink

          Steve – All I see these guys doing is talking past my arguments. Look over the thread. You will see them make an argument. I will respond. They will ignore my comment. Then, later, they will dredge up the same argument which I have already explained is a false concern.

          They are completely wound up over a fiction. Take a look at this plot. I generated this data artificially. I know what its properties are. I know it is completely causal. But, I get a ghost of a non-causal response. It is inherent to the process when the correlations are longer than the data window. It is useless information.

          This is precisely why I was reluctant to be drawn into the discussion in the first place. We are being led down a path of destruction by people who are so caught up in their confirmation bias that they do not take the time to understand exactly what they are doing.

        • Steven Mosher
          Posted Sep 22, 2011 at 4:07 PM | Permalink

          Well, reading over the comments I see a lot of things. I don’t think it’s fruitful to try to untangle things. That untangling is a distraction from the issues at hand. At this stage, after tempers have been engaged, you basically have a few choices: leave; stay and get madder; or pause and summarize, as Carrick has done.

          “They are completely wound up over a fiction. Take a look at this plot. I generated this data artificially. I know what its properties are. I know it is completely causal. But, I get a ghost of a non-causal response. It is inherent to the process when the correlations are longer than the data window. It is useless information.”

          I would not call it a path of destruction to get clearer on that issue or other issues.

        • Bart
          Posted Sep 23, 2011 at 10:43 AM | Permalink

          Speaking of RomanM

        • Tom Gray
          Posted Sep 22, 2011 at 1:30 PM | Permalink

          Group delay and delay are two entirely different things. The speed of light still holds.

        • RuhRoh
          Posted Sep 22, 2011 at 2:38 PM | Permalink

          Mark T;

          Re: sun as power supply and input. I’m sure you are familiar with the term Power Supply Rejection Ratio, which is the system response to variations of the power supply.

          Why so much ‘binary thinking’ on an intrinsically analog question?

          Remember, Black, White, Grey, BINGO, go with the grey…

          Carrick seems to be a diligent academic kind of guy; the tipoff for me was ‘my students’.
          Ok, we all learned from academic folks. Persnickety about ‘rigor’. Ok, fine.

          Bart and Mark T remind me of the cantankerous old guys at work, with decades of system design,
          including situations where lives are at stake. Lessons from those guys are rarely gentle.

          It seems that Bart has looked at this from a very conventional framework, and gotten a classic answer.

          Carrick seems to imply that Bart and Mark T can’t do ‘proper’ system analysis.
          Does this mean all of their prior designs must be recalled?
          Wouldn’t the world have noticed by now if the machines didn’t work as advertised?

          Here’s a question; On what do the contentious posters agree? Always a good starting place…
          Is this a situation where argument over nits and jargon is obscuring the big picture?

          RR

        • Mark T
          Posted Sep 22, 2011 at 3:41 PM | Permalink

          Re; sun as power supply And input, I’m sure you are familiar with the term Power Supply Rejection Ratio, which is the system response to variations of the power supply.

          Uh, that’s not the problem I’m referring to. If you use the sun as an input and a supply, you are converting a three-terminal system to… a two-terminal system, which cannot have true amplification, and that brings us back to the passive system. None of that really matters; it is an aside at best.

          Bart and Mark T remind me of the cantankerous old guys at work, with decades of system design, including situations where lives are at stake. Lessons from those guys are rarely gentle.

          16 years for me, Bart mentioned 30. We are both engineers which is almost synonymous with cynical.

          Is this a situation where argument over nits and jargon is obscuring the big picture?

          Doubtful.

          Mark

        • Carrick
          Posted Sep 22, 2011 at 5:24 PM | Permalink

          MarkT just as a matter of interest, how would you analyze a system like the one I described?

          A two terminal input to an operational amplifier, hooked to a switching power supply, with a two terminal output, in which you didn’t have perfect isolation from power supply noise?

          Also, what happens if you put a capacitor on the V+ end of the power supply and hook it into one end of the operational amplifier? So basically what you’ll mostly measure is the switching noise of the power supply, using whatever amplification you choose.

          One of the papers I wrote was on self-sustained oscillations, for which you can demonstrate (systems perspective) that a power supply is needed, that is, if you have self-sustained oscillations, the system must be active. (I wasn’t the first on this: TJ Gold pointed it out in 1948, we just had better data.)

          If the ENSO isn’t directly forced by the Sun, and it doesn’t appear to be, that would seem to demand that the coupled ocean-atmospheric system be treated as an active one in which net amplification is present.

          Bart as I understand it is a climate scientist; I’ve never seen his resume so I don’t know what training that entails. I am a Ph.D. physicist with 24 years of post-doctoral experience. Being in an “academic” environment means something different to me than to RR. I work at a lab where we collaborate heavily with government, military and industry. Not the same thing as a guy in front of a computer pushing a pencil around. The hearing science thing was something I did in the 1990s. These days I mostly do atmospheric acoustics sorts of applications.

        • Posted Sep 22, 2011 at 6:43 PM | Permalink

          “Bart as I understand it is a climate scientist, “
          I think you’re thinking of a different Bart.

        • Bart
          Posted Sep 22, 2011 at 8:19 PM | Permalink

          Definitely.

        • Carrick
          Posted Sep 23, 2011 at 10:26 AM | Permalink

          Hm… sorry for the misattribution, but this does explain why you are so much more knowledgeable than I imagined that other Bart would be about signal processing theory!

          LOL.

          Actually a really good application, if you want to dive in fully, is using this same framework to analyze the impulse response function of tree ring proxies to temperature. I’d do it myself, but have literally no time for it.

        • Bart
          Posted Sep 23, 2011 at 1:32 PM | Permalink

          Time is the enemy. I have spent way too much time on this already, which is why I try mostly just to set the stage and encourage others to follow through.

          Like you are doing, too 😉

        • Bart
          Posted Sep 23, 2011 at 1:38 PM | Permalink

          Signal processing is intimately involved with what I do, which is designing control systems. If you have any interest in that field, you may find some of the implications of the proffered transfer function interesting. It is actually just of the right form to provide some pretty standard stabilizing feedback. Also, see my latest entry near the bottom of the page.

        • Bart
          Posted Sep 23, 2011 at 1:41 PM | Permalink

          Oops, copied and pasted wrong link. Here is the one I was referring to.

        • Mark T
          Posted Sep 23, 2011 at 9:13 AM | Permalink

          If the ENSO isn’t directly forced by the Sun, and it doesn’t appear to be, that would seem to demand that the coupled ocean-atmospheric system be treated as an active one in which net amplification is present.

          The amplification is a result of increased storage of energy, which is actually a passive function. An LC circuit is a prime example, one that can oscillate as well.

          MarkT just as a matter of interest, how would you analyze a system like the one I described?

          A two terminal input to an operational amplifier, hooked to a switching power supply, with a two terminal output, in which you didn’t have perfect isolation from power supply noise?

          I’d have to think about it since I haven’t tested a differential op-amp in a while*, but that’s immaterial to the point I was asking you to think about. If you take an active system, three terminals at a minimum, and tie the input(s) to the supply, it is not active any longer, it is passive. Not that you can’t get valid information from testing in this manner, just that it is no longer active.

          Certainly our “system” is much more complex than that, I’m just pointing out that assigning the label “supply” to the sun implies you need something else as an “input” to truly view the system as active. Not that parts of the system, internally, cannot be modeled as active (ENSO, for example,) but the overall system is not.

          FYI, Bart mentioned somewhere that he is an engineer, electrical I would assume, with somewhere in the neighborhood of 30 years experience. I posted over in the tAV “resume” thread if you’re curious about my background.

          Mark

          * Most of the “circuits” I have designed in my career are targeted to 50 ohm impedances, which op-amps tend to deal with poorly. It is a difficult problem getting a 50 ohm signal into an ADC that has a 1k ohm input. You either use a transformer (1:4 is typical) and accept some mismatch, or drive it with something like an AD8138 (differential op-amp) and suffer from a variety of other issues.

  61. Posted Sep 22, 2011 at 9:10 AM | Permalink

    I submitted a query in response to the recent comment of Sep 22, 2011 at 8:30 AM by Carrick. However, it’s been inserted into the middle of the pile, here (at least on my browser). This is confusing. Hence this unthreaded “marker”.

    • Mark T
      Posted Sep 22, 2011 at 4:16 PM | Permalink

      Two kinds of delay: an actual time delay, i.e., the difference between one sample and the next (since we are referring to a sampled system), and group delay, which is the negative derivative of phase with respect to frequency. GD units are time, but it does not represent a true time delay; rather, it is an apparent delay that is a result of the system “predicting” the output. From the article I linked:

      However, negative group delays do not imply time advance (at least not in causal systems). Rather, for signals in the band where the group delay is negative the filter tries to predict the input. If the signal is predictable from past values (as for the modulated Gaussian pulse and the bandlimited random signal, Figs. 4,5,6), then this brings about the illusion of a time advance. The illusion breaks down when the signal contains an unpredictable event (the truncation of the Gaussian pulse, Fig. 7).
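
      A small numerical illustration of the definition, in R (the three-tap filter is made up; its group delay works out to a constant one sample, i.e., a pure delay):

      h <- c(0.5, 1, 0.5)                        # hypothetical FIR filter
      w <- seq(0.01, 3.0, length.out = 400)      # frequency, rad/sample
      H <- colSums(h * exp(-1i * outer(0:2, w))) # frequency response
      gd <- -diff(Arg(H)) / diff(w)              # group delay = -d(phase)/d(freq)
      range(gd)                                  # ~1 sample at every frequency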

      Mark

  62. MrPete
    Posted Sep 22, 2011 at 10:23 AM | Permalink

    This is a very interesting discussion, snide comments aside. Carrick has reduced the question to:

    verify for himself that indeed negative delays are possible in a causal, physical system.

    Very simple question, from an EE who has basic understanding of some of these things: could it be that the back-and-forth here between Bart and Nick/Carrick revolves around the following?

    Bart: negative delay makes no sense in the real world, since it implies that the effect precedes the cause.

    Nick/Carrick: negative delay does make sense in the real world, when examining a cyclical system, because an oscillating system can introduce a “lead time” amplification that comes before the next cycle.

    Just trying to understand what is being said here.

    If the above two statements are (overly simplified) reasonable summaries of the two positions, then I would tend to agree with Bart, because the “negative delay” in the latter sense is actually an almost-full-cycle delay from a causal perspective.

    • kim
      Posted Sep 22, 2011 at 10:51 AM | Permalink

      Wouldn’t a full cycle delay give a lot of throttle play to the response?
      =============

    • Carrick
      Posted Sep 22, 2011 at 11:31 AM | Permalink

      MrPete:

      Nick/Carrick: negative delay does make sense in the real world, when examining a cyclical system, because an oscillating system can introduce a “lead time” amplification that comes before the next cycle.

      Absolutely this is what is involved. The conventional way to compute h(tau) is using the frequency-domain method, which assumes a broad band signal as the input X, to which you are measuring the response Y. Broad band measurements by their very nature have the assumption of cyclic behavior, and it is this that allows for phase-lead behavior.

      Another way of saying this, if you were to zap the same physical system with short-duration pulse, you certainly wouldn’t expect to see a response that preceded the impulse. The equivalence between the broad-band calculation of h(tau) and the time-domain direct measurement of the impulse response y(t) is guaranteed only for passive, linear systems.

      For climate, we are stuck with broad-band signals because we can’t (and shouldn’t) generate impulsive inputs to the climate system in order to directly measure their time-domain response. Because of that, we have to control for the possibility of statistically significant values of h(tau) for tau < 0. We can’t simply truncate h(tau) for tau < 0 by fiat. That’s an error made by somebody who really doesn’t understand what the broad-band derived h(tau) truly signifies.

      (Negative delays aren’t even necessarily bad… they tell you something about the pole structure of the climate system in some averaged sense. This has some relevant discussion related to the topic, especially when one is measuring the transfer function for a nonlinear system, and how that can be interpreted.)

      Large volcanic eruptions may be as close as we get to a time-domain measurement of an impulsive input and the response of the system to it. There are peer reviewed publications that look at this, of course.

      • Bart
        Posted Sep 22, 2011 at 12:16 PM | Permalink

        If that is what you are arguing, then you have no leg to stand on at all (not that you ever did, just that this makes it obvious why). We are not, in fact, dealing with a broadband signal which has significant components beyond the Nyquist frequency.

        All this time, you apparently have been concerned about aliasing. Well, I’ve already addressed that.

        • Carrick
          Posted Sep 22, 2011 at 1:38 PM | Permalink

          Bart, meet data

          And yes, I do know what a Fourier transform is, I understand sampling theory and Shannon’s sampling theorem, and I’m not worried about aliasing.

          (And yes this is a colossal waste of my time, but maybe somebody besides Bart will learn from it, so…)

        • Bart
          Posted Sep 22, 2011 at 3:00 PM | Permalink

          “And yes I do know what a Fourier transform is… Broad band measurements by their very nature have the assumption of cyclic behavior…” suggests you do not grok it.

        • kim
          Posted Sep 22, 2011 at 3:52 PM | Permalink

          I can croak, now, I’ve heard bullfrogs and cats fighting.
          ========

      • Bart
        Posted Sep 22, 2011 at 12:29 PM | Permalink

        “Broad band measurements by their very nature have the assumption of cyclic behavior, and it is this that allows for phase-lead behavior.”

        Ridiculous. You don’t even know what a bloody Fourier Transform is. A Fourier Series assumes a periodic base. A Fourier Transform does not. But, I’m not about to go into the nuances of functional analysis for you here.

        A phase lead can be generated as simply as having an impulse response of the form h(0) = 2, h(1) = -1. This is simply a basic discrete time PD controller.
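
        A one-line numerical check of that example in R:

        w <- seq(0.01, 3.0, length.out = 400) # frequency, rad/sample
        H <- 2 - exp(-1i * w)                 # response of h(0) = 2, h(1) = -1
        min(Arg(H)) > 0                       # TRUE: phase lead at all frequencies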

      • Tom Gray
        Posted Sep 22, 2011 at 2:10 PM | Permalink

        A few questions from the uninformed:

        How does the system know that a signal has a “cyclic behavior”? How can it predict the future of a cycle? How does it know that the signal will not be turned off immediately?

        An impulse is a broadband signal. It is 1 at t=0 and 0 everywhere else. How is this cyclic? It would seem to be as acyclic as a signal can be. How does the impulse response provide information about the performance of a system to a signal that has a “cyclic behavior”?

        Phase lead and phase lag have nothing to do with time or with “cyclic behavior”.

        What on earth is “cyclic behavior”?

        • Mark T
          Posted Sep 22, 2011 at 3:05 PM | Permalink

          How does the system know that a signal has a “cyclic behavior”?

          It doesn’t, nor does it need to.

          How can it predict the future of a cycle?

          What is happening is a bit more complex than I can go into here. Read the link I provided above for some insight. In general, when things vary slowly relative to the ability of the system to “adapt,” it can appear to “predict” future behavior. Sudden changes will cause this ability to fail.

          How does it know that the signal will not be turned off immediately?

          It doesn’t. In fact, when this happens, or any sudden (not smooth) change occurs, the ability to predict is “lost” and must start over. The link explains this, too.

          An impulse is a broadband signal. It is 1 at t=0 and 0 everywhere else. How is this cyclic?

          Realistically, it is not, except for a brief instant in time.

          How does the impulse response provide information about the performance of a system to a signal that has a “cyclic behavior”?

          Because when you put an impulse into a system, you are hitting it with all frequencies. If you look at the spectral response, you get magnitude and phase at each of these frequencies. The inverse of this is the time response.
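
          That is easy to verify numerically, e.g. in R:

          x <- c(1, rep(0, 63))          # a discrete unit impulse
          unique(round(Mod(fft(x)), 10)) # magnitude 1 at every frequency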

          Phase lead and phase lag have nothing to do with time or with “cyclic behavior”.

          Well, phase lead/lag in this context are functions of the frequency and the delay between input and output. For illustration purposes, consider a delay of 1/10 second and a signal with two frequencies, one with a frequency of 1 cycle/s and the other with a frequency of 2 cycles/s. The phase lag of the first frequency output w.r.t. the input is 360/10, or 36 degrees (pi/5 rad). The phase lag of the second is 720/10, or 72 degrees (2pi/5 rad). Both have been delayed by 1/10 second, however.
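
          In R, the same arithmetic:

          delay <- 0.1          # seconds
          f <- c(1, 2)          # cycles per second
          360 * f * delay       # 36 and 72 degrees, as above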

          What on earth is “cyclic behavior”?

          Consisting of sinusoids, i.e., oscillating.

          Mark

        • Posted Sep 22, 2011 at 5:39 PM | Permalink

          “How does the system know that a signal has a “cyclic behavior”? How can it predict the future of a cycle?”
          A very good question. You need to have observed a signal for a while to characterize cyclic behaviour. That’s the issue with this 10-year data period and statements about a 0.0725 yr^-1 frequency. Once you’ve characterised it adequately (after a few periods), you can predict. But as you say, someone might switch it off.

          “How does the impulse response provide information about the performance of a system to a signal that has a “cyclic behavior”?”
          There’s also a purely time domain interpretation of the use of impulse response. You can regard, by linear superposition, any signal as just the weighted sum of impulses. So the response is just the weighted sum of impulse responses. That’s where the convolution comes from. This works for sinusoids as for any signal.
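
          In R, that superposition view is literally how the response is computed (a made-up causal impulse response, and any input you like):

          h <- c(1, 0.5, 0.25, 0.125)    # hypothetical causal impulse response
          x <- sin(2 * pi * (0:49) / 10) # any input, here a sinusoid
          y <- stats::filter(x, h, method = "convolution", sides = 1)
          # y[n] is the weighted sum of impulse responses: sum_k h[k+1] * x[n-k]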

      • Tom Gray
        Posted Sep 22, 2011 at 2:27 PM | Permalink

        An impulse is a broadband signal. That is why it is used. It contains all frequencies.

        So why is it bad to apply all frequencies to the system under consideration to see its response?

        • Mark T
          Posted Sep 22, 2011 at 3:22 PM | Permalink

          It’s not bad, actually, though it will not provide you with any indication of how the system changes over time. In reality it is impractical to implement, however (a true impulse is a mathematical construct only.) Network analyzers actually sweep the frequency to generate the same thing for analog devices.

          Mark

        • Carrick
          Posted Sep 22, 2011 at 3:35 PM | Permalink

          MarkT:

          It’s not bad, actually, though it will not provide you with any indication of how the system changes over time. In reality it is impractical to implement, however (a true impulse is a mathematical construct only.) Network analyzers actually sweep the frequency to generate the same thing for analog devices.

          These days, I think they are moving towards maximum-length sequences (which are slightly more efficient).

          I still use (multi-tonal) sweeps in ear acoustics to obtain the system response as a function of frequency, but that’s because the ear is nonlinear, and a sweep is a closer approximation to a series of tones than broad-band noise is. It’s an interesting problem because the cochlea is nonlinear, active and dispersive, and you can make use of the dispersive nature of the system in your OLS analysis in measuring its response to separate otherwise overlapping reflections.

          Some people in that community use broad-band noise. I think there’s been a bit of work on MLS in that field, but the problem with that is, you need a very good transducer to make it work well, and we’re talking about miniaturized systems that have to fit into ear canals.

    • Mark T
      Posted Sep 22, 2011 at 2:23 PM | Permalink

      Bart: negative delay makes no sense in the real world, since it implies that the effect precedes the cause.

      Bart does say this, but he is referring to time delay.

      Nick/Carrick: negative delay does make sense in the real world, when examining a cyclical system, because an oscillating system can introduce a “lead time” amplification that comes before the next cycle.

      Sort of, I think, but Carrick is actually referring to negative group delay, which is different. Carrick seems to be arguing that because negative group delay can exist, we need to consider the negative time delay portion of the impulse response.

      If the above two statements are (overly simplified) reasonable summaries of the two positions, then I would tend to agree with Bart, because the “negative delay” in the latter sense is actually an almost-full-cycle delay from a causal perspective.

      It’s a bit more than that; it would imply that the system could start generating a response in anticipation of a signal prior to the signal’s arrival. Once the signal is proceeding through the system, however, it may appear to be leading in time, but any sudden (broadband) change will quickly reveal the difference. Negative group delay does not in any way imply negative time delay.

      The link I tried to provide above (search on dsprelated) explains all of this in sufficient detail for most people to understand.

      Mark

      • Bart
        Posted Sep 22, 2011 at 2:56 PM | Permalink

        “Carrick seems to be arguing that because negative group delay can exist, we need to consider the negative time delay portion of the impulse response.”

        Thank you, Mark, for having greater patience than I, and for so clearly resolving the impasse. After all the time we have spent on this ridiculous argument, I stopped listening at the point I realized he was suggesting something physically impossible and just switched to “how do I get rid of this guy” mode.

        • Mark T
          Posted Sep 22, 2011 at 3:11 PM | Permalink

          And here I thought you were patient, comparatively speaking.

          Mark

        • Carrick
          Posted Sep 22, 2011 at 3:25 PM | Permalink

          Compared to you, certainly!

          😛

        • Bart
          Posted Sep 22, 2011 at 3:27 PM | Permalink

          It’s just gone on too long. They’re a broken record, completely uninterested in anything anyone else has to say.

      • Carrick
        Posted Sep 22, 2011 at 3:10 PM | Permalink

        MarkT, I’m referring to the “tau” in hm(tau) in my summary:

        What I am discussing relates exactly to what Bart is calculating so it is entirely relevant.

        Were Bart measuring a train of impulses in the time domain, instead of synthetically constructing them using the transfer function between T and dR before inverse transforming, what he says would be right.

        The problem is he is confusing physical delay with the lag that comes out of his calculation, and thinks the latter is the former.

        • Bart
          Posted Sep 22, 2011 at 4:27 PM | Permalink

          I am going to have to partially concede Carrick’s point. If the system in question were unstable, then the procedure would pack the inverse of the impulse response into the negative time region. If he had stated the problem more pithily like that in the first place, instead of dragging us into the responses of mammalian cochlea, we would have avoided a lot of grief.

          However, that is not what is happening here. The impulse response computed from this data has negative time components only because the correlations are longer than the data window. This is indicated by the fact that the “negative time” components are only weakly coherent, but clearly exhibit the same natural frequency as the forward impulse response.

        • Bart
          Posted Sep 22, 2011 at 4:32 PM | Permalink

          This conclusion is firmly supported by the fact that the full impulse response estimate looks very like that generated with my artificial data.

        • Carrick
          Posted Sep 22, 2011 at 5:00 PM | Permalink

          I realize I came into this discussion in the middle, really was kind of dragged into it by a bone-headed mistake I made on Nick’s website. Given the history of the discussion on this website to that point, I can see how you would have taken anything I said the wrong way. I’m not even sure there was a “right way” to say what I wanted that wouldn’t lead to acrimony given the prior history of the discussion on this thread, but never mind.

          Glad we have something we can agree on, and let’s just leave it at that. Time to breathe is a good thing.

        • Bart
          Posted Sep 22, 2011 at 5:17 PM | Permalink

          Well, with a statement like that, it would be bad form for me not to extend my apologies for heated words. On my previous comment:

          “…but clearly exhibit the same natural frequency as the forward impulse response.”

          I leaped before I looked. It is not the same natural frequency – I just assumed it naturally would be. But, the artificial data also does not match the phantom negative-time oscillations to the natural frequency of the true response. The visualization is even more stark when I use representative data of the same length as the actual data.

          I know, I know… what confidence can you have in the indicated correlation bandwidth, given that the corresponding period is longer than the measurement data span? We’ve been over that. My position is that it is possible for it to be right, based on analysis of artificial data of the same length and statistics. Furthermore, it has been observed that, qualitatively, the result looks “right” when it is right. But, definitive results will just have to wait for a more extensive data record. However, the -180 degree phase lag is not in doubt, as it extends to frequencies whose inverse periods are within the data record interval.

        • Mark T
          Posted Sep 22, 2011 at 5:53 PM | Permalink

          No group hugs, please.

          It’s been a long day and I’ve been trying to learn Java in the middle of it all… time for a beer.

          Cheers.

          Mark

        • Carrick
          Posted Sep 22, 2011 at 6:00 PM | Permalink

          Mark T:

          It’s been a long day and I’ve been trying to learn Java in the middle of it all…

          Care for a book on 6502 programming instead? I may have one lying around. It might actually have more utility than learning Java. >.>

        • Mark T
          Posted Sep 22, 2011 at 7:05 PM | Permalink

          Did the 6811 years ago. The system I develop on is controlled using Java for various reasons. The algorithms are all done in CUDA C (Nvidia Tesla system… Teraflop processor.)

          Yes, btw, I do regularly do channel estimation and system identification, though admittedly, the impulse response is typically at -80 dB from the peak well before even the midpoint of the response result. There is no argument in such cases. Nobody would be aggravated. We wouldn’t even need a taper since it is essentially 0 within 1/10th of the response.

          I’m drinking Laughing Lab at Holy Cow on Stetson Hills at Powers, MrPete, if you’re bored.

          Mark

        • MrPete
          Posted Sep 22, 2011 at 10:39 PM | Permalink

          Stetson Hills @ Powers… bummer, too late… I’m on that side of town early some days, never in the evening 🙂

        • Mark T
          Posted Sep 23, 2011 at 2:57 AM | Permalink

          Put a good one on in your honor for the halibut anyway. At Corner Pocket West at 8th and Arcturus, every other Thursday on average.

          Mark

        • Carrick
          Posted Sep 22, 2011 at 5:57 PM | Permalink

          It’s all good Bart. It would be a lot more insulting if you didn’t give a d**m about what I said than if you maybe cared too much! I’ve got a pretty thick skin, one has to, to work in research, as you are probably fully aware too.

          I’ll have more to say about the physical problem presently. First I have to catch up on everything that was said upstream to when I entered.

          I’ll raise again my speculation that your low-pass filtering may fix a lot of sins. (I think you agreed with me on this point.)

          First I have to escape wife and boss aggro.

          I’d better have those poles and zeros by the morning, is all I can say.

        • Bart
          Posted Sep 22, 2011 at 6:52 PM | Permalink

          Well, I hope we have both been proved right, and that we agree upon the following conclusions:

          1) an unstable response can generate an apparent negative-in-time impulse response estimate using the FFT based analysis approach

          2) So can a data record shorter than significant correlations in the data, and this effect is spurious

          I hope further that you will agree with me that the apparent backwards-in-time response here bears the earmarks of case 2. I recommend generating data according to my prescription and testing with it to give you a feel for how it behaves.

          I generally only deal with stable systems, and so never really looked at what would happen if the system were unstable until you brought this up. Likewise, I’d bet we both rarely, if ever, run into the case where there is really insufficient data to consistently tease out the long term correlation which is evident here, at least using this analysis approach.
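
          For anyone who wants a feel for conclusion 2, a minimal R sketch (the AR(1) coefficient and record length are illustrative, not my exact prescription):

          set.seed(1)
          n <- 130                                          # short record
          x <- rnorm(n)                                     # white-noise input
          y <- as.numeric(stats::filter(x, 0.95, method = "recursive")) # causal system
          h <- Re(fft(fft(y) / fft(x), inverse = TRUE)) / n # FFT-based estimate
          tail(h, 20) # the wrapped tau < 0 region: generally not zero here, even
                      # though the true system is strictly causal, because the
                      # correlation time is long relative to the record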

        • Mark T
          Posted Sep 22, 2011 at 7:17 PM | Permalink

          We all seem to agree, to differing degrees, on many common areas, differing largely in interpretations of results borne of differing experiences. Maybe RuhRoh was right. Even Nick has clarified to the point that I understand his view.

          I think I know of a few tests that can provide more insight too, though they all suffer from the same problems inherent to short records.

          Mark

        • kim
          Posted Sep 22, 2011 at 8:11 PM | Permalink

          The wondering mode
          Again and again and again.
          Forces spin to rest.
          ==========

        • Hoi Polloi
          Posted Sep 23, 2011 at 3:37 AM | Permalink

          When trying to break a rock, Nick Stokes comes with a steady drip, while Carrick comes with a sledgehammer…

        • AndyL
          Posted Sep 23, 2011 at 3:06 AM | Permalink

          It’s ironic that just as peace breaks out in this thread (and well done guys – it’s been a fascinating process), CERN announced that some sub-atomic particles apparently travel faster than the speed of light, and so presumably negative time intervals are possible after all.

          http://www.bbc.co.uk/news/science-environment-15017484

        • AndyL
          Posted Sep 23, 2011 at 3:15 AM | Permalink

          AndyL (Sep 23 03:06),
          here’s a link to the original paper

          http://arxiv.org/abs/1109.4897

        • Bart
          Posted Sep 23, 2011 at 10:50 AM | Permalink

          Meh… Two words: Cold Fusion. They’ll figure out where they went wrong sometime.

        • Bart
          Posted Sep 23, 2011 at 8:32 PM | Permalink

          Or, maybe there’s something to the VSL theories. I’ve always thought that argument made more sense than cosmic inflation anyway.

    • Posted Sep 22, 2011 at 3:58 PM | Permalink

      Re: MrPete (Sep 22 10:23),
      My main contention is that it’s too early to ask what “makes sense”. It’s an exploratory analysis on observations. We’ve got (by FFT) to an apparent impulse response that is two-sided, and just cutting one side off does not help the exploration.

    But I think the cyclic system issue needs thinking about. I think people here have in mind wired systems, with the “box” (T2) being analysed detached from the loop. It isn’t detached, and so the transfer through T2 is mixed with that through T1, which is where CRF is changing temperature, which could certainly cause “negative delays”.

      Just one other thing – when you speak of “full-cycle delay” I think you have in mind a system in oscillation at a defined frequency (so you can speak of a full cycle). That’s not really what we have here.

      • Mark T
        Posted Sep 23, 2011 at 9:46 AM | Permalink

        That’s actually just feedback, which cannot create negative delays in time (the feedback path itself has a positive delay, even if it is small), though it can result in negative group delays. The feedback path itself is necessary to understand the behavior of the overall mechanism.

        Take a look at Bart’s system, posted above, to see what it is he is doing.

        Mark

      • MrPete
        Posted Sep 23, 2011 at 12:48 PM | Permalink

        I suppose that was the vocabulary of a defined-frequency system, but I do understand that this isn’t such a system. I probably should have said full-feedback-cycle.

        As Mark nicely summarizes, what we’re dealing with here is some kind of stimulus (or “cause”) and some kind of impact (or “effect”). And while there can be feedback that has an effect the next time the “cause” occurs… and thus can appear to precede the cause, that is not really what happens. Feedback can never happen in negative time.

        A nice simple example that comes to mind is Whack-A-Mole. I’m pretty good at it, and definitely begin my Whack before the Mole pops out of its hole, based on timing learned from previous Whacks. But that doesn’t mean negative time is involved.

        😀

        • Tom Gray
          Posted Sep 23, 2011 at 3:05 PM | Permalink

          Group delay is not about time delay. It is a measure of how the phases of the constituent frequency components that make up the signal are affected as they move to the output. The changes in phase will affect the shape of the waveform, not the speed of the wave. Group delay is a metric of how faithfully the shape of an input waveform is preserved across the system. It is used as a measure of how faithfully data can be passed through a telecom network. It is not a measure of the time delay that the waveforms will take to traverse the system. Group delay is about shapes of waveforms. It is not about time delay. It has nothing to do with reaching into the past. It is not mysterious. It is not negative time delay.

  63. Mark F
    Posted Sep 22, 2011 at 12:06 PM | Permalink

    While the discussions on signal processing include some less-than-respectful comments about others, I applaud those whom I previously regarded as trollish troglodytes for their technical contributions. I’m actually learning something from the discussions, and have returned to reading some of their posts. “They” may still be a******s, but the content is appreciated.

  64. Posted Sep 22, 2011 at 3:11 PM | Permalink

    Steve
    “Can anyone speculate what was in Dessler’s mind when he applied this method? “
    Dessler simply says:
    “The cloud feedback is conventionally defined as the change in ΔRcloud per unit of change in ΔTs.”
    And that is what he works out. There is no dynamics in the notion. It relates to equilibrium sensitivity. In recent terminology, a DC response. He assumes (for this purpose) that the observed fluctuations in CRF are caused by the surface T fluctuations, and the regression gives the proportionality.

    That goes into a feedback calc thus – suppose you have a primary mechanism (eg Stefan-Boltzmann) which says that a small equilibrium shift in flux F causes a proportional change in T
    dF1 = a*dT
    Then you find a mechanism (eg CRF) which produces a flux change in F (dF2) proportional to a change in T
    dF2 = b*dT
    Then the net sensitivity is dT/(dF1+dF2) = 1/(a+b).
    If b is of opposite sign to a, it reduces the magnitude of a+b, enhancing the temperature response, and is termed positive feedback.
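
    With made-up numbers in R (a is a Planck-type value, b an opposite-signed feedback; both illustrative):

    a <- 3.3      # W/m2/K, primary response term
    b <- -0.5     # W/m2/K, feedback term of opposite sign
    1 / a         # ~0.30 K per W/m2 without the feedback
    1 / (a + b)   # ~0.36 K per W/m2 with it: an enhanced response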

    Dessler’s para here:
    “This definition of the cloud feedback is a standard approach for quantifying feedbacks (26). It only requires an association between Ts and ΔRcloud but does not imply any specific physical mechanism connecting them. The recent suggestion that feedback analyses suffer from a cause-and-effect problem (27) does not apply here:”
    might be worth looking into.

    • Bart
      Posted Sep 22, 2011 at 4:29 PM | Permalink

      It is entirely based on an assumption of zero phase lag. That is a clearly invalid assumption.

      • RuhRoh
        Posted Sep 23, 2011 at 1:12 AM | Permalink

        By the way, all of youse guys seem to be infringing on my patented technique which is mine, of including some minor thing which is Wrongo, and thus eliciting responses from wiseguys who might otherwise remain silently on the sidelines. Herein we saw recursive deployment…

        This was a bit like the runaway jazz song (with dueling soloists), that has the audience wondering if they are ever going to get back and reprise the theme, in a semi-harmonious way.

        Dramatic, yet valuable.
        Thanks to all.
        RR

        • Steve
          Posted Sep 23, 2011 at 5:26 AM | Permalink

          Indeed!

          A heartfelt thanks to all involved in a great thread!

          I believe that even the stoic adversaries will, with hindsight, reconsider the value of this communication – scientific education, blogosphere style.

          Steve

  65. Carrick
    Posted Sep 22, 2011 at 5:37 PM | Permalink

    Steve McIntyre:

    Carrick and Bart, thanks for all of this. Can both/either of you recommend texts/articles that deal with the topics from the perspectives that you recommend.

    I think what I am using here would mostly be in the peer reviewed literature. Most textbooks that deal with broad-band methods for obtaining transfer functions and “impulse response functions” stop with linear, passive systems, and don’t address any of the pitfalls in that method when these assumptions fail.

    I’m sure there are reviews where some of the comments I’ve made above are synthesized, I would have to search for them though.

  66. Mark T
    Posted Sep 23, 2011 at 9:30 AM | Permalink

    Carrick and Mark T, you talk a lot about your tools, but this is about physical systems – clouds and temperature – not the abstract mechanics of signal processing. Piss all you want; the winner of the name-plate polishing contest doesn’t matter. What matters is which of you can address the question most concisely and coherently.

    I’m not sure what you’re getting at here. I’ve been making the same arguments as Bart, simply noting distinctions about the analysis, not any real “name-plate” polishing.

    While I agree that this is a physical system, the argument is actually about how you interpret the results of the signal processing mechanics.

    Mark

  67. Bart
    Posted Sep 23, 2011 at 10:39 AM | Permalink

    I want to highlight and repeat what I just replied to RomanM above, as it might get lost in all the traffic, and is important.

    I had a sort of breakthrough thought on this. Given the proffered system diagram, this is the type of behavior which would be expected.

    Label the top response T1 and the bottom T2. We are trying to estimate T2. The closed loop transfer function from the input Radiation Forcing (RF) to the input point of T2 is H2 = T1/(1-T1*T2). Assume the gain of the loop is “large” within the passband. Then, H2 := T1/(-T1*T2) = -1/T2. So, if the RF is wideband, the spectrum of the input to T2 should be approximately the spectrum of RF divided by mag(T2)^2, which is what RomanM has found.

    The gain from RF to the output of T2 is approximately unity, so the output of T2 should more or less track RF within the passband of the loop. My hunch would be that, that passband is probably about 0.3 years^-1, which is where you get the maximum phase margin boost from T2.
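
    A quick numeric sanity check of that large-loop-gain approximation, with hypothetical first-order forms for T1 and T2 (R code; forms and numbers are made up):

    s <- 2i * pi * 0.05         # an in-band frequency, years^-1 scale
    T1 <- 100 / (1 + 5 * s)     # assumed high-gain loop element
    T2 <- 0.5 / (1 + 2 * s)     # assumed feedback element
    H2 <- T1 / (1 - T1 * T2)    # closed loop from RF to the input of T2
    c(Mod(H2), Mod(-1 / T2))    # nearly equal when |T1*T2| is large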

    • Bart
      Posted Sep 23, 2011 at 10:47 AM | Permalink

      “My hunch would be that, that passband is probably about 0.3 years^-1, which is where you get the maximum phase margin boost from T2.”

      Or, roughly the maximum anyway.

    • Bart
      Posted Sep 23, 2011 at 11:28 AM | Permalink

      “The gain from RF to the output of T2 is approximately minus unity (T1*T2/(1-T1*T2) := T1*T2/(-T1*T2) = -1)…”

      • Bart
        Posted Sep 23, 2011 at 11:33 AM | Permalink

        And, this is another important point. The ~5 year time constant associated with the transfer function we have been looking at is open loop. In the closed loop, the response should be more like 1/0.3 years or about three years to settle (time constant of maybe 1/2 years), assuming the closed loop bandwidth is about 0.3 years^-1.

    • Bart
      Posted Sep 25, 2011 at 1:26 PM | Permalink

      The loop would have to be quite complicated to take advantage of the phase lead near 0.3 years^-1. There appears to be non-minimum phase behavior in this area. So, maybe the bandwidth is substantially less than this.

      Nobody appears to care, but I thought I’d keep any who do apprised.

  68. Posted Sep 23, 2011 at 5:15 PM | Permalink

    I’ve largely dropped out of this discussion, because I think the logic has gone off the rails. I’ll explain why.

    We started with CFR and T, relationship to be discovered. We hypothesised that CFR could be expressed as a linear function of T, via a convolution relation, CFR = h ⊗ T.

    Now you can always find such a relation, via FFT. But there’s been endless argument about whether h should be causal – ie one-sided.

    We got an h such that CFR = h ⊗ T, but it was two-sided. If causality is part of the hypothesis, that is the end of the investigation. We did not find a satisfactory h. Try again to relate CFR and T with some other method.

    However, what Bart’s analysis does is to zero the inconvenient non-causal part. That leaves an h that no longer satisfies CFR = h ⊗ T (but is causal). The relevance of that remains to be demonstrated.
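
    In outline the whole procedure fits in a few lines of R (a sketch only: the two series here are random stand-ins and the length n is arbitrary):

    n    <- 1024
    Tmp  <- as.numeric(arima.sim(list(ar = 0.9), n))           # stand-in for T
    CFR  <- as.numeric(arima.sim(list(ar = 0.9), n))           # stand-in for CFR
    h    <- Re(fft(fft(CFR) / fft(Tmp), inverse = TRUE)) / n   # exact two-sided h with CFR = h ⊗ T
    hc   <- h
    hc[(n/2 + 1):n] <- 0                                       # the disputed step: zero the "negative t" half
    CFRc <- Re(fft(fft(hc) * fft(Tmp), inverse = TRUE)) / n    # reconvolve with the truncated, causal h
    plot(CFR, type = "l"); lines(CFRc, col = 2)                # red: what survives the truncation

    The two-sided h reproduces CFR to rounding error by construction; whatever the red curve fails to track is exactly the contribution of the part that was zeroed.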

    Bart asserts that the omitted part is mere noise. That needs to be shown. I believe it is untrue, as shown by these facts:
    1. When omitted, CFR = h ⊗ T seriously failed
    2. The DC gain changed from -12.2 W/m2/K to -9.4 W/m2/K.

    • Bart
      Posted Sep 23, 2011 at 5:55 PM | Permalink

      Nick – you seem to have a complete blind spot for the lessons of the artificially generated data, which argues quite clearly and strongly that the “anti-causal” components of the estimated impulse response are an artifact of the long correlation time.

      I have no more time to waste trying to convince you. The evidence is there, if you are willing to see it.

    • bender
      Posted Sep 23, 2011 at 5:58 PM | Permalink

      Thanks to Nick Stokes for returning and recapitulating the core problem from his perspective.

    • Bart
      Posted Sep 23, 2011 at 5:59 PM | Permalink

      “When omitted, CFR = h ⊗ T seriously failed”

      But, it doesn’t. You didn’t look closely enough. It starts to agree more and more closely as time progresses. You are just seeing a settling phenomenon.

      • Bart
        Posted Sep 23, 2011 at 6:04 PM | Permalink

        Just a reminder: full impulse response estimate from artificially generated data with the same low frequency correlations and observation time span. The mess on the right is a chimera. We know, because I explicitly created this data using white noise input and a causal filter.

        • bender
          Posted Sep 23, 2011 at 6:17 PM | Permalink

          What if the artificial data are generated by a fully closed feedback loop?

        • Bart
          Posted Sep 23, 2011 at 8:30 PM | Permalink

          It shouldn’t make any difference, as long as the input has sufficient bandwidth to excite the entire range of the transfer function. If it does not, the result is not a good representation of the system.

          I should note that there is one limitation in my artificially generated data: the random noise generator has to stimulate the system adequately. This could be why it takes me several runs to get a good result.

          The MATLAB generator should be pretty good, but this has been a known issue in Monte Carlo analysis for many decades. It may well be that the real world is more randomly stimulative, and it is perhaps no accident that this particular real world data set generated a good result.

        • Mark T
          Posted Sep 24, 2011 at 11:50 AM | Permalink

          Yup. The data from such generators are pseudo-random and often have issues. Cleve Moler has a pretty long writeup on the method MATLAB uses to generate the Normally distributed values from the randn() function, though it’s been a while since I read the article (search at The MathWorks then wade through a bazillion hits). It is a good generator, but not perfect, though perfection is impossible anyway.

          Mark

        • Posted Sep 23, 2011 at 6:47 PM | Permalink

          I cannot see the logic here. You’ve shown, I guess, that an artificial example can generate noise in the negative t portion. That doesn’t mean that all negative t portions of different problems are noise.

          You need to show that there is some mechanism in the FFT process that preferentially generates noise in the negative part. And even that does not rule out the possibility of real information there.

          I believe there’s nothing in the FFT analysis that would cause asymmetric time treatment of h.

          You’ve never dealt with my simple reversal argument. Reverse the data order, and you reverse h. If the FFT process was causing noise and noise only to appear in negative t, why are we seeing the numbers that you thought were valid turning up there, unchanged?

        • Bart
          Posted Sep 23, 2011 at 8:08 PM | Permalink

          Not just can. Always does. Seriously, Nick, try it yourself. That’s why I provided the algorithm for generating artificial data to everyone.

          As for the reversal argument, think about what the cross correlation is and how reversing the time series affects it. It’s not magic.

        • bender
          Posted Sep 23, 2011 at 8:10 PM | Permalink

          Can you guys not post R code?
          Thanks for continuing the dialogue. Many lurkers here.

        • Posted Sep 23, 2011 at 8:35 PM | Permalink

          Bender,
          I posted R code here, along with graphs – it’s similar to the code Roman posted above. Just add dR=rev(dR); temp=rev(temp) after they are defined to get the reverse effect. It’s easier to see what is happening if you reduce Nsamp from 8192 to, say, 1024.
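
          For lurkers who would rather not chase links, here is a stripped-down, self-contained version of the experiment (my own toy filter, noise level and seed, not Roman’s exact script):

          set.seed(1)
          n     <- 1024
          temp  <- as.numeric(arima.sim(list(ar = 0.95), n))    # input with a long correlation time
          htrue <- -2 * exp(-(0:63) / 12)                       # a known, strictly causal filter
          dR    <- as.numeric(filter(temp, htrue, sides = 1))   # causal convolution (NAs at the start)
          ok    <- !is.na(dR)
          temp  <- temp[ok]
          dR    <- dR[ok] + rnorm(sum(ok), sd = 0.5)            # add observation noise
          m     <- length(dR)
          hest  <- Re(fft(fft(dR) / fft(temp), inverse = TRUE)) / m
          lag   <- seq_len(m) - 1
          lag[lag > m/2] <- lag[lag > m/2] - m                  # fold so negative lags sit left of zero
          plot(lag, hest, type = "h", xlim = c(-100, 100))      # energy at lag < 0 despite a causal truth

          Adding dR=rev(dR); temp=rev(temp) before the fft lines mirrors the picture lag for lag, which is the reversal point above.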

        • Bart
          Posted Sep 23, 2011 at 10:04 PM | Permalink

          I don’t know R code. To generate the artificial data, hopefully this post and the two following it are not too hard to decipher.

  69. Tom Gray
    Posted Oct 16, 2011 at 1:44 PM | Permalink

    http://www.mathpages.com/home/kmath249/kmath249.htm

    Those of you who were puzzled by the talk of “negative time lags” in the discussion about transfer functions above can find an accessible discussion at the web page whose URL is above. The web page discusses transfer functions and the effect of the phase response. Contrary to some impressions, these “negative time lags” have nothing to do with predicting the future or showing that the cloud feedback observations are non-causal.

    The pertinent passage from the web page is:

    The ratios a1/a0 and b1/b0 are often called, respectively, the lag and lead time constants of the transfer function, so the “time lag” of the response to a steady ramp input equals the lag time constant minus the lead time constant. Notice that it is perfectly possible for the lead time constant to be greater than the lag time constant, in which case the “time lag” of the transfer function is negative. In general, for any frequency input (not just linear ramps), the phase lag is negative if b1/b0 exceeds a1/a0. Despite the appearance, this does not imply that the transfer function somehow reads the future, nor that the input signal is traveling backwards in time. The reason the output appears to anticipate the input is simply that the forcing function (the right hand side of the original transfer function) contains not only the input signal x(t) but also its derivative dx/dt (assuming b1 is non-zero), whose phase is π/2 ahead. (Recall that the derivative of the sine is the cosine.) Hence a linear combination of x and its derivative yields a net forcing function with an advanced phase.

    • Tom Gray
      Posted Oct 16, 2011 at 2:03 PM | Permalink

      More from the web page

      Thus the effective forcing function at any given instant does not reflect the future of x, it represents the current x and the current dx/dt. It just so happens that if the sinusoidal wave pattern continues unchanged, the value of x will subsequently progress through the phase that was “predicted” by the combination of the previous x and dx/dt signals, making it appear as though the output predicted the input. However, if the x signal abruptly changes the pattern at some instant, the change will not be foreseen by the output. Any such change will only reach the output after it has appeared at the input and worked its way through the transfer function. One way of thinking about this is to remember that the basic transfer function is directionally symmetrical, and the “output signal” y(t) could just as well be regarded as the input signal, driving the “response” of x(t) and its derivative.
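
      The effect is easy to reproduce numerically. A small R illustration with my own made-up coefficients (not from the web page), where b1/b0 = 2 exceeds a1/a0 = 0.5 so the “time lag” comes out negative:

      b0 <- 1; b1 <- 2; a0 <- 1; a1 <- 0.5          # lead time constant > lag time constant
      w  <- 2 * pi * 0.2                            # one test frequency
      H  <- (b0 + b1 * 1i * w) / (a0 + a1 * 1i * w) # frequency response at w
      -Arg(H) / w                                   # the "time lag": negative here, i.e. an apparent lead
      t  <- seq(0, 20, by = 0.01)
      x  <- sin(w * t)                              # input
      y  <- Mod(H) * sin(w * t + Arg(H))            # steady-state output
      plot(t, x, type = "l"); lines(t, y, col = 2)  # red peaks arrive before black ones

      As the quoted passage says, nothing is being anticipated: change the input pattern mid-run and the change reaches the output only afterwards.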

      • Carrick
        Posted Oct 16, 2011 at 3:02 PM | Permalink

        Good link Tom.

        One thing to point out though is the following. This is provable as a theorem:

        If X is the input and Y is the output, Y depends linearly on X, the underlying system is passive (aka “stable”, i.e. it does not require a power supply to operate), and the relationship between X and Y is causal, then, apart from spectral widening due to a finite window and noise, the inferred impulse response function h(tau) = 0 for tau < 0.

        For a broad range of systems, tau can be identified with the physical delay.

        It turns out if you have a power supply (e.g., the Sun certainly acts as one for climate), then you can get negative delays. These negative delays can either be a sign of net amplification in the system (feedback greater than one so a stabilizing nonlinearity is required) or they can arise in a passive, nonlinear system.
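
        A quick numerical sanity check of the causal case in R (entirely my own toy construction, with circular convolution standing in for the physical system):

        set.seed(2)
        n <- 512
        x <- rnorm(n)                                 # broadband input
        h <- exp(-(0:31) / 8)                         # causal, passive (finite, decaying) response
        y <- Re(fft(fft(c(h, rep(0, n - 32))) * fft(x), inverse = TRUE)) / n
        hest <- Re(fft(fft(y) / fft(x), inverse = TRUE)) / n
        max(abs(hest[(n/2 + 1):n]))                   # ~ machine precision: nothing at negative lags

        Noise-free and un-windowed, the recovered h(tau) really is zero for tau < 0; negative-lag content in practice comes from the window and the noise, or, per the caveats above, from amplification or nonlinearity.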

      • Tom Gray
        Posted Oct 16, 2011 at 3:59 PM | Permalink

        So non-periodic signals, that is, any signals which carry information (e.g. the effect of solar irradiance), exhibit a lag when going from the input to the output.

9 Trackbacks

  1. By Dessler Hides The Decline | Real Science on Sep 8, 2011 at 1:30 PM

    […] As Steve McIntyre points out, Dessler cherry picked his data to get the result he wanted. Dessler cherry […]

  2. […] https://climateaudit.org/2011/09/08/more-on-dessler-2010/ […]

  3. […] Dessler may need to make other changes, it appears Steve McIntyre has found some flaws related to how the CERES data was combined: https://climateaudit.org/2011/09/08/more-on-dessler-2010/ […]

  4. […] Nick Stokes Posted Sep 10, 2011 at 5:32 PM | Permalink […]

  5. […] […]

    […] is the application of the work-in-progress Fast Fourier Transform algorithm by Bart coded in R on the total solar irradiance (TSI via Lean 2000) and global […]

  7. […] is the application of the work-in-progress Fast Fourier Transform algorithm by Bart coded in R on the total solar irradiance (TSI via Lean 2000) and global temperature (HadCRU). The […]

  8. […] Nick raised a point over at CA, that perhaps all we’re getting with ERA-Interim clear-sky fluxes is the CERES fluxes, but […]

  9. […] Nick raised a point over at CA, that perhaps all we’re getting with ERA-Interim clear-sky fluxes is the CERES fluxes, but with […]