Data Smoothing and Spurious Correlation

Allan MacRae has posted an interesting study at ICECAP. In the study he argues that the changes in temperature (tropospheric and surface) precede the changes in atmospheric CO2 by nine months. Thus, he says, CO2 cannot be the source of the changes in temperature, because it follows those changes.

Being a curious and generally disbelieving sort of fellow, I thought I’d take a look to see if his claims were true. I got the three datasets (CO2, tropospheric, and surface temperatures), and I have posted them up here. These show the actual data, not the month-to-month changes.

In his study, MacRae used smoothed datasets (12-month averages) of the month-to-month change in temperature (∆T) and CO2 (∆CO2) to establish the lag between the change in CO2 and temperature. Accordingly, I did the same. My initial graph of the raw and smoothed data looked like this:

Figure 1. Cross-correlations of raw and 12-month smoothed UAH MSU Lower Tropospheric Temperature change (∆T) and Mauna Loa CO2 change (∆CO2). Smoothing is done with a Gaussian average, with a “Full Width to Half Maximum” (FWHM) width of 12 months (brown line). Red line is correlation of raw (unsmoothed) data. Black circle shows peak correlation.
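For readers who want to experiment, here is a minimal sketch of the smoothing-plus-lagged-correlation procedure in Python. It is an assumed implementation (using numpy and scipy), not the code behind the figures, and the two series are random placeholders for ∆T and ∆CO2:

```python
# Sketch of the procedure: Gaussian-smooth two series, then scan the
# cross-correlation over a range of lags. Placeholder data only.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
dT, dCO2 = rng.standard_normal(348), rng.standard_normal(348)  # placeholders

def fwhm_smooth(x, fwhm_months):
    # gaussian_filter1d takes sigma; FWHM = sigma * 2*sqrt(2*ln 2).
    return gaussian_filter1d(x, fwhm_months / (2 * np.sqrt(2 * np.log(2))))

def lagged_corr(x, y, lag):
    # Correlation of x with y shifted by `lag` months (lag > 0: x leads y).
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

lags = np.arange(-24, 25)
sT, sC = fwhm_smooth(dT, 12), fwhm_smooth(dCO2, 12)
r = [lagged_corr(sT, sC, k) for k in lags]
print("peak at lag %d months, r = %.2f" % (lags[np.argmax(r)], max(r)))
```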

At first glance, this seemed to confirm his study. The smoothed datasets do indeed have a strong correlation of about 0.6 with a lag of nine months (indicated by the black circle). However, I didn’t like the looks of the averaged data. The cycle looked artificial. And more to the point, I didn’t see anything resembling a correlation at a lag of nine months in the unsmoothed data.

Normally, if there is indeed a correlation that involves a lag, the unsmoothed data will show that correlation, although it will usually be stronger when it is smoothed. In addition, there will be a correlation on either side of the peak which is somewhat smaller than at the peak. So if there is a peak at say 9 months in the unsmoothed data, there will be positive (but smaller) correlations at 8 and 10 months. However, in this case, with the unsmoothed data there is a negative correlation for 7, 8, and 9 months lag.

Now Steve McIntyre has posted somewhere about how averaging can actually create spurious correlations (although my google-fu was not strong enough to find it). I suspected that the correlation between these datasets was spurious, so I decided to look at different smoothing lengths. The results look like this:

Figure 2. Cross-correlations of raw and smoothed UAH MSU Lower Tropospheric Temperature change (∆T) and Mauna Loa CO2 change (∆CO2). Smoothing is done with a Gaussian average, with a “Full Width to Half Maximum” (FWHM) width as given in the legend. Black circles show peak correlations for the various smoothing widths.

Note what happens as the smoothing filter width is increased. What start out as separate tiny peaks at about 3-5 and 11-14 months end up being combined into a single large peak at around nine months. Note also how the lag of the peak correlation changes as the smoothing window is widened. It starts with a lag of about 4 months (2-month and 6-month smoothing). As the smoothing window increases, the lag increases as well, all the way up to 17 months for the 48-month smoothing. Which one is correct, if any?

To investigate what happens with random noise, I constructed a pair of series with similar autoregressions, and I looked at the lagged correlations. The original dataset is positively autocorrelated (sometimes called “red” noise). In general, the change (∆T or ∆CO2) in a positively autocorrelated dataset is negatively autocorrelated (sometimes called “blue noise”). Since the data under investigation is blue, I used blue random noise with the same negative autocorrelation for my test of random data.
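A minimal sketch of how such series can be generated, assuming a simple AR(1) form (the coefficients -0.4 and -0.2 match the rough lag-1 autocorrelations mentioned later in the thread, and are illustrative only):

```python
# "Blue" noise: AR(1) with a negative coefficient, so successive
# values tend to flip sign (negative lag-1 autocorrelation).
import numpy as np

def blue_noise(n, phi, rng):
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

rng = np.random.default_rng(42)
fake_dT = blue_noise(348, -0.4, rng)    # stand-in for the dT null series
fake_dCO2 = blue_noise(348, -0.2, rng)  # stand-in for the dCO2 null series
```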

This was my first result using random data:

Figure 3. Cross-correlations of raw and smoothed random (blue noise) datasets. Smoothing is done with a Gaussian average, with a “Full Width to Half Maximum” (FWHM) width as given in the legend. Black circles show peak correlations for various smoothings.

Note that as the smoothing window increases in width, we see the same kind of changes we saw in the temperature/CO2 comparison. There appears to be a correlation between the smoothed random series, with a lag of about 7 months. In addition, as the smoothing window widens, the maximum point is pushed over, until it occurs at a lag which does not show any correlation in the raw data.

After making the first graph of the effect of smoothing width on random blue noise, I noticed that the curves were still rising on the right. So I graphed the correlations out to 60 months. This is the result:

Figure 4. Rescaling of Figure 3, showing the effect of lags out to 60 months.

Note how, once again, the smoothing (even for as short a period as six months, green line) converts a nondescript region (say lag +30 to +60, the right part of the graph) into a high-correlation region by lumping together individual peaks. Remember, this was just random blue noise; none of these represent real lagged relationships, despite the high correlation.

My general conclusion from all of this is to avoid looking for lagged correlations in smoothed datasets; they’ll lie to you. I was surprised by the creation of apparent, but totally spurious, lagged correlations when the data is smoothed.

And for the $64,000 question … is the correlation found in the MacRae study valid, or spurious? I truly don’t know, although I strongly suspect that it is spurious. But how can we tell?

My best to everyone,

w.

242 Comments

  1. Mike B
    Posted Feb 12, 2008 at 4:37 PM | Permalink

    The question of whether or not the correlation is spurious is an interesting one. It would certainly seem possible for smoothing to induce correlation, and particularly unscrupulous practitioners might go so far as to test a suite of smoothing parameters and select only the one that tells the story they want to tell.

    I seem to recall a thread a few months ago where warmers argued both sides of the lead/lag question. The thrust being that the lag is expected because of feedback from a warming ocean.

  2. John A
    Posted Feb 12, 2008 at 4:38 PM | Permalink

    The fourth picture didn’t come through.

  3. Andrew
    Posted Feb 12, 2008 at 5:15 PM | Permalink

This correlation has been noted so many times before. I interpreted it to mean that CO2 growth varies due primarily to the variability of the ocean as a sink for our emissions. But it could just as easily be an artifact, I suppose. Hans Erren had an image of this somewhere… I’m off to dig for it!

  4. Andrew
    Posted Feb 12, 2008 at 5:21 PM | Permalink

    Found it! Here:

    Seems kind of impressive, but oddly, the trend in temperature is absent in the CO2.

  5. Posted Feb 12, 2008 at 5:56 PM | Permalink

South Pole CO2 data gives a smoother image:

  6. Tim Ball
    Posted Feb 12, 2008 at 6:07 PM | Permalink

    Is this problem serious when the degree of smoothing applied to the long term ice core record is considered?

  7. Francois Ouellette
    Posted Feb 12, 2008 at 6:09 PM | Permalink

    #4 Andrew

    The correlation (temp vs deltaCO2) is impressive (more than the lag hypothesis), but I’m confused as to which figure has temp and/or CO2 detrended and which hasn’t. Could we have both here for discussion?

    Proxy reconstructions of CO2 (via leaf stomata) show that CO2 concentration follows temperature, and is not the stable, smoothly increasing function that the ice cores tend to show.

  8. Sam Urbinto
    Posted Feb 12, 2008 at 6:12 PM | Permalink

I’d imagine that, as low-resolution as the ice cores are on a yearly (or even 100-yearly) basis, it would be more of a problem, on par with the issues with treemometers. Just a feeling.

  9. Andrew
    Posted Feb 12, 2008 at 6:26 PM | Permalink

    Francois, According to Hans’ site, black is CO2 and blue is temp.

    Tim, the ice cores, as it just so happens, have rates of growth very different from temperatures. Odd, isn’t it?

  10. Larry
    Posted Feb 12, 2008 at 6:27 PM | Permalink

    5, that’s really interesting. What are the units on that?

  11. Andrew
    Posted Feb 12, 2008 at 6:36 PM | Permalink

Briggs had some interesting analysis of the ice cores and Mauna Loa here, btw:

    http://wmbriggs.com/blog/2008/02/06/has-atmospheric-co2-decreased-a-different-way-to-look-at-co2-changes/

  12. JS
    Posted Feb 12, 2008 at 7:10 PM | Permalink

    Smoothing is only for graphs.

    It should be avoided for actual analysis.

Smoothing induces a phase shift, and thus anything that relies on smoothed results to establish correlations should be immediately suspect. Smoothing also induces other artifacts, and results are just as likely to be affected by the window or weighting pattern as by the actual data. (Depending on the smoothing filter, different frequencies are emphasised – very few filters are anything like an ideal bandpass filter.)

    The results should be present and statistically significant in a covariance (or correlation) table at various leads and lags on unsmoothed data. You can show any interesting findings from this analysis using smoothing to illuminate it for the untrained eye, but if the results rely on smoothing then the results are a result of smoothing.

  13. John Hekman
    Posted Feb 12, 2008 at 8:10 PM | Permalink

    There are econometric tests for this. Granger causality tests can tell you whether it is more likely that x is causing y or y is causing x.
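    A minimal sketch of such a test, assuming the statsmodels package (the two series here are random placeholders, and the lag choice is illustrative):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    temp, dco2 = rng.standard_normal(300), rng.standard_normal(300)  # placeholders

    # statsmodels tests whether the SECOND column Granger-causes the
    # FIRST; swap the columns to test the other direction.
    grangercausalitytests(np.column_stack([dco2, temp]), maxlag=12)
    ```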

  14. Andrew
    Posted Feb 12, 2008 at 8:29 PM | Permalink

    BTW, I should mention that my hypothesis

    I interpreted it to mean that CO2 growth varies due primarily to the variability of the ocean as a sink for our emissions.

    could be tested using SST data. Francois, I think I understand what you mean. When you look at the graph, on the left the black curve (CO2 growth rate) is above the blue one; on the right, it’s below. So the wiggles match, but not the trend; that’s what I was getting at. Which is interesting because the oceans have warmed less than the land…

  15. Francois Ouellette
    Posted Feb 12, 2008 at 8:30 PM | Permalink

    #9 I was asking if it was detrended or not.

  16. Francois Ouellette
    Posted Feb 12, 2008 at 9:07 PM | Permalink

I’ve just plotted deltaCO2 (12-month running mean, Mauna Loa data) vs monthly anomaly from 1959 to 2003. I don’t have anywhere to upload the graph, but the correlation is spooky, and even more so with a lag of 5 months between temp and deltaCO2. Is that expected or what? Who has noticed that before, and is there an explanation?

  17. Posted Feb 12, 2008 at 9:14 PM | Permalink

    Francois if you want to e-mail the graph to me at

    mndsmith33 AT earthlink.net

I’ll upload it and send you a URL

  18. John Creighton
    Posted Feb 12, 2008 at 9:30 PM | Permalink

    #12 JS, a symmetric filter will not cause phase distortion.

  19. John Creighton
    Posted Feb 12, 2008 at 9:33 PM | Permalink

I notice some obvious differences between the two graphs. The CO2 and temperature have a much stronger correlation than the two random signals. Also, when you increase the smoothing, the correlation increases for the case of CO2 vs temperature, but the same phenomenon is not seen for random signals.

  20. John Creighton
    Posted Feb 12, 2008 at 9:50 PM | Permalink

    I was thinking of why one might want to look for correlations in smoothed data. My answer is that if the phase delay is random then smoothing could enhance the correlation. The problem is that you will reduce your number of samples and thus increase the chance of spurious correlation.

    I wonder if some expression could be developed to reduce the correlation value so that you are only left with the correlation which is likely not due to noise.

  21. Francois Ouellette
    Posted Feb 12, 2008 at 9:56 PM | Permalink

    #17 David: done!

  22. Francois Ouellette
    Posted Feb 12, 2008 at 10:15 PM | Permalink

O.K. here’s my graph. I took data for monthly temperature anomaly, of which I don’t remember the origin, and the Mauna Loa CO2 data. I took the 12-month running mean of CO2. Then I plotted CO2 vs temp anomaly, and looked at what lag would give the highest r2 for the correlation. I used the correlation coefficients to rescale and lag the temperature anomaly. The data go from 1959 to 2003.

    As I said: spooky…

  23. John Creighton
    Posted Feb 12, 2008 at 10:24 PM | Permalink

    #22 Francois Ouellette, is that the local temperature anomaly or the global temperature anomaly?

  24. Posted Feb 12, 2008 at 10:25 PM | Permalink

    Interesting post by Roy Spencer ( link ) courtesy of Anthony’s blog.

  25. Francois Ouellette
    Posted Feb 12, 2008 at 10:30 PM | Permalink

    #23 Global (or maybe just NH)

  26. bender
    Posted Feb 12, 2008 at 10:44 PM | Permalink

    Phase distortion is not the core issue. Fake cycling in a series and spurious correlations between two series are the issue. CA search on Yule and/or Slutzky. If it’s the smoothed noise that is the source of the correlation then the correlation is false.

    Mind you, some processes will correlate at some time scales but not others. Smoothing is not recommended for separating out those distinct time scale signals. Orthogonal digital filtering is much better. This is in fact what started the hurricane discussions at CA. I believe it was Mann and Emanuel that were smoothing hurricane data and SSTs and finding correlations between temp and hurricane PDI. Bad practice.

  27. Posted Feb 12, 2008 at 11:05 PM | Permalink

    Hans,

    I constructed a pair of series with similar autoregressions, and I looked at the lagged correlations.

    Do you mean similar to each other, or similar to the CO2 and Temp respectively (and therefore different to each other)?

    My general conclusion from all of this is to avoid looking for lagged correlations in smoothed datasets, they’ll lie to you.

JS #12 and Hans, unfortunately all measurements are smooths of some sort (even monthly averages).

Fortunately all is not lost. Isn’t it possible in general to construct similar series with similar autocorrelation structures as you have done, however they are smoothed, and null test against them? Wouldn’t this give a reliable rejection statistic?

  28. John Creighton
    Posted Feb 12, 2008 at 11:49 PM | Permalink

    #26 Bender Writes:
    “Smoothing is not recommended for separating out those distinct time scale signals. Orthogonal digital filtering is much better. ”

    I’m not sure what you mean.

  29. John Creighton
    Posted Feb 13, 2008 at 12:02 AM | Permalink

Smoothing simply emphasizes the correlation in the lower frequency components of the signal. Low frequency signals stay correlated over larger time lags, and that is why the correlation peaks are wider when more smoothing is applied.

I think the correct technique would be not to smooth. Instead, first take the cross-correlation and then perform a Fourier transform to get the cross-spectrum. Then plot the phase.

    http://en.wikipedia.org/wiki/Arg_%28mathematics%29

    The slope of the phase represents the time delay in the system.
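A rough sketch of that phase-slope idea, assuming scipy (here y is simply x delayed by 9 samples, so the right answer is known in advance):

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(3)
n, delay = 2048, 9
full = rng.standard_normal(n + delay)
x, y = full[delay:], full[:-delay]      # y[t] = x[t - delay]

f, Pxy = csd(x, y, fs=1.0, nperseg=256)
phase = np.unwrap(np.angle(Pxy))
# Fit phase vs frequency at low frequencies; |slope| = 2*pi*delay
# (the sign depends on the library's cross-spectrum convention).
slope = np.polyfit(f[1:20], phase[1:20], 1)[0]
print("estimated delay: %.1f samples" % (abs(slope) / (2 * np.pi)))
```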

  30. Geoff Sherrington
    Posted Feb 13, 2008 at 12:09 AM | Permalink

    Remember those little wiggles each year in the Mauna Loa CO2 data? They are supposed to show the vegetative changes of the NH forests. Somewhere in IPCC is a hint that it takes about 3-4 years for Alaskan CO2 to get to South Pole. So there is a probability of a physical lag in CO2 seen at Mauna Loa as the atmosphere redistributes. Whether this correlates with a lead or a lag with temp somewhere else in the world is a guess that I would not try to solve graphically. There are better numerical ways. In the end game they all come back to how good the surface temp measurement is and I have lost faith in that measurement, adjusted out of reality as it is, as Anthony’s and Steve’s work casts doubt after doubt.

  31. MrPete
    Posted Feb 13, 2008 at 12:17 AM | Permalink

    A question I’ve not seen raised about Mauna Loa CO2 measures: to what extent is it affected by outgassing from Hawaiian volcanoes?

  32. deadwood
    Posted Feb 13, 2008 at 1:52 AM | Permalink

    MrPete@31:

A question I keep asking myself. The wiggly line is just too regular not to have been filtered.

    One would assume that eruptive events on the Big Island are somehow being subtracted out. How this would be done I do not know, but suspect an estimate of total flux for the entire mountain would be a starting point.

  33. Arnost
    Posted Feb 13, 2008 at 2:35 AM | Permalink

    I understood that the wiggles in the CO2 Mauna Loa data were a consequence of the cooling and warming of the ocean over the summer/winter yearly cycle. And not just because warmer water absorbs less CO2, but also because of variable biological (phytoplankton) activity driven by more/less sunlight.

    I also understand that the CO2 measuring station is upwind (prevailing wind is apparently unidirectional) of the volcano(s).

One question though – maybe off topic on this thread – but given that there was a reduction in W/Stations reporting temps over the last 10 years, has anyone isolated those that are active right now, and used only these to recreate (say via JohnV’s tool) the global anomaly back as far as it goes? It would really be interesting on a global scale if it was possible…

    cheers

  34. Posted Feb 13, 2008 at 2:51 AM | Permalink

Significance testing in these cases is quite difficult (remember this: http://www.climateaudit.org/?p=903 ), but Bartlett’s formula is IMO a good starting point:

\[ \operatorname{var}\{ R_{12}(s) \} \sim \frac{1}{n-s}\sum_{v=-\infty}^{\infty} \rho_{11}(v)\,\rho_{22}(v) \]

In the case presented above, we try different lags (s), but in addition we change the correlation structures of the original time series by smoothing (effectively, it increases the autocorrelations (ρ)). From the formula you can see that the longer the lag, the larger the variance of the sample correlation coefficient (because we have fewer overlapping samples). Smoothing has the same effect. And if the experimenter can try many lags and many smoothing operations, no wonder he/she finds spurious correlations.
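    A rough numerical reading of that formula (a sketch only: plain sample autocorrelations, sum truncated at ±50 lags) shows how smoothing inflates the variance:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def autocorr(x, max_v):
        x = x - x.mean()
        d = np.dot(x, x)
        return np.array([np.dot(x[:len(x) - v], x[v:]) / d for v in range(max_v + 1)])

    def bartlett_var(x, y, s, max_v=50):
        r1, r2 = autocorr(x, max_v), autocorr(y, max_v)
        total = r1[0] * r2[0] + 2 * np.sum(r1[1:] * r2[1:])  # symmetric sum over v
        return total / (len(x) - s)

    rng = np.random.default_rng(7)
    x, y = rng.standard_normal(500), rng.standard_normal(500)
    print(bartlett_var(x, y, 9))   # near 1/(n-s) for white noise
    print(bartlett_var(gaussian_filter1d(x, 5.0), gaussian_filter1d(y, 5.0), 9))  # far larger
    ```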

  35. Willis Eschenbach
    Posted Feb 13, 2008 at 3:45 AM | Permalink

My thanks to all who have commented. I fixed the fourth graphic; it didn’t like a very long file name.

    Tim Ball, you ask an interesting question, viz:

    Is this problem serious when the degree of smoothing applied to the long term ice core record is considered?

    I had not considered physical smoothing of the data, such as you mention, as opposed to mathematical smoothing of the data. I don’t know the answer. All data other than instantaneous instrumental measurements have some temporal granularity, and some spatial granularity as well. We average individual measurements into daily means, and average daily means into monthly means. We average the monthly differences from the annual mean, and remove them from the dataset. We average from all over the planet, according to some chosen algorithm, to get a global mean. I don’t know what this does to the detection of lags between cause and effect.

bender, thanks for mentioning Yule and Slutsky; I couldn’t for the life of me remember either name.

    David Stockwell, you say:

    Hans,

    I constructed a pair of series with similar autoregressions, and I looked at the lagged correlations.

    Do you mean similar to each other, or similar to the CO2 and Temp respectively (and therefore different to each other)?

    While I’m honored to be mistaken for Hans, I’m actually the author of the lead post. I meant similar to the CO2 and Temp respectively. From memory, those were about -0.4 and -0.2. You also say:

Isn’t it possible in general to construct similar series with similar autocorrelation structures as you have done, however they are smoothed, and null test against them? Wouldn’t this give a reliable rejection statistic?

    Yes, that’s “Monte Carlo” testing. However, there are some caveats. Natural climate datasets often have a complex autocorrelation structure, even after removal of seasonal anomalies. I can generate a universe of random datasets with that approximate autocorrelation structure. However, we cannot show that my universe of random possibilities has the same structure and distribution as the real universe of real possibilities. Nature may favor certain outcomes over others, and that imbalance will not be captured in my random universe.

    There are, however, some conclusions we can draw, usually of a negative nature. That is to say, we can show that something is not unusual, by showing that it happens in a host of random datasets. But we can’t say something is unusual just because I don’t find it in my random universe. It may be common in the real world.
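    Concretely, a minimal sketch of such a Monte Carlo test, assuming simple AR(1) surrogates (real series can have richer structure), might look like this:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def ar1(n, phi, rng):
        e = rng.standard_normal(n)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        return x

    def max_lagged_corr(x, y, max_lag=24):
        return max(abs(np.corrcoef(x[:-k], y[k:])[0, 1]) for k in range(1, max_lag + 1))

    rng = np.random.default_rng(0)
    sigma = 12 / (2 * np.sqrt(2 * np.log(2)))      # 12-month FWHM
    null = [max_lagged_corr(gaussian_filter1d(ar1(348, -0.4, rng), sigma),
                            gaussian_filter1d(ar1(348, -0.2, rng), sigma))
            for _ in range(500)]
    print("95th percentile of null peak correlation: %.2f" % np.quantile(null, 0.95))
    ```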

    John Creighton, you say:

Smoothing simply emphasizes the correlation in the lower frequency components of the signal. Low frequency signals stay correlated over larger time lags, and that is why the correlation peaks are wider when more smoothing is applied.

I think the correct technique would be not to smooth. Instead, first take the cross-correlation and then perform a Fourier transform to get the cross-spectrum. Then plot the phase.

    http://en.wikipedia.org/wiki/Arg_%28mathematics%29

    The slope of the phase represents the time delay in the system.

    Beyond me. What do you get when you do that with these datasets?

    bender, you say:

    Orthogonal digital filtering is much better.

    Same comment. Beyond me. What do you get when you do that with these datasets? Which one leads or lags, and how much?

    Geoff, you say:

    In the end game they all come back to how good the surface temp measurement is and I have lost faith in that measurement, adjusted out of reality as it is, as Anthony’s and Steve’s work casts doubt after doubt.

    That’s why I used the satellite data for the graphs above.

    MrPete, I’d prefer not to get sidetracked on the fidelity of various CO2 datasets. The trends of the South Pole records of CO2 are in very good agreement with the Mauna Loa records. At Mauna Loa they sample at night, when the air flow is downslope due to the cooling surface. At that point the air is coming down from the upper troposphere, so they are sampling pristine air that hasn’t been running across the earth’s surface. There are occasional anomalies, which are generally easily identified by the size of the jump. Because the sampled air is usually from the high troposphere, it doesn’t change much in composition from day to day. Thus, if you get one sample that’s say 5 ppmv high, you go “nope, bad data”. These represent only a percent or so of the samples.

    One point of terminology. Smoothing is different from filtering. Filtering removes (filters out) energy from the signal, while smoothing redistributes the energy across time. These are very different processes. What I have done above is smoothing, in particular “Gaussian” smoothing. I do not know whether filtering produces the same kind of correlation artifacts that smoothing produces.

Nobody has tackled the $64K question: does the CO2 change really lag the temperature change?

    Regards,

    w

  36. Posted Feb 13, 2008 at 4:04 AM | Permalink

    Smoothing is different from filtering. Filtering removes (filters out) energy from the signal, while smoothing redistributes the energy across time. These are very different processes.

    ??? I guess there are many definitions, here you can find one http://en.wikipedia.org/wiki/Wiener_filter and often ‘smoothing’ means just low-pass filtering.

  37. Willis Eschenbach
    Posted Feb 13, 2008 at 4:41 AM | Permalink

    UC, thank you for the post. Let me distinguish based on the way that filtering and smoothing treat a single short pulse.

    A low pass filter totally ignores (filters out) a single short pulse. It leaves nothing. It removes all of the energy of the short pulse from the signal.

    Smoothing the same data, on the other hand, spreads out the energy but does not remove it from the signal. It widens the spike out in time and reduces it in amplitude.

    Yes, I know that people call low pass filtering “smoothing”, but as my example shows, they end up with different results because they utilize different processes. So call them what you will, but they are not the same thing.

    Regards,

    w.

  38. Posted Feb 13, 2008 at 5:04 AM | Permalink

    OK, ok, just trying to translate this to the terminology I’m familiar with:

    A low pass filter totally ignores (filters out) a single short pulse. It leaves nothing. It removes all of the energy of the short pulse from the signal.

The impulse response of a nontrivial linear filter cannot be zero. But, for example, a simple median filter (which is nonlinear) has a zero-valued impulse response.
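    A two-line check of that example, assuming scipy’s medfilt:

    ```python
    import numpy as np
    from scipy.signal import medfilt

    x = np.zeros(21)
    x[10] = 1.0                        # a lone impulse
    print(medfilt(x, kernel_size=3))   # all zeros: the impulse is removed
    ```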

  39. Francois Ouellette
    Posted Feb 13, 2008 at 8:15 AM | Permalink

    Regarding the graph I posted, please note that it has nothing to do with smoothing-induced spurious correlations. The temperature anomaly is not some smoothed version, but the pure raw monthly anomaly. The change in CO2 is annualized only to get rid of the seasonal variation, but still conforms to the definition of a derivative, as it is equivalent to CO2(ti+1)-CO2(ti-1). Hence the 5-month lag really reflects the fact that the DeltaCO2 is taken as the difference of CO2 six-months ahead minus six months before (a six-month lag would have given a very similar result).

    The correlation is troubling, because the only explanation is that the CO2 fluxes are highly sensitive to temperature, yet the relative concentrations of CO2 in, say, the atmosphere and the ocean, take a long time to reach equilibrium. Hence if temperature increases, the uptake seems to diminish, which makes CO2 concentration go higher. But if the response time is, say, 50 years, you will find a nearly linear relationship between temperature and CO2 change. A long period of warming will result in a long period of CO2 increase.

Now if you have such a dynamical system, then the CO2 concentration change is the result of warming, not the cause. Given that warming began at the end of the 19th century, and the system responds slowly, what we are seeing since 1958 at Mauna Loa could be the result of the warming of the previous 50 years. The apparent correlation between emissions and atmospheric concentration rise would be just spurious.

There are some arguments in favor of that hypothesis. First, the anthropogenic emissions are but a small fraction of the total CO2 fluxes (1-2%). Second, there is a large uncertainty in our knowledge of such fluxes, up to 20% according to IPCC (chap. 7). So the uncertainty in the fluxes is larger than the anthropogenic emissions. Finally, there is the question of the difference between emissions and concentration increase, the so-called “airborne fraction”. Apparently, about 55% of anthropogenic emissions are taken up, so CO2 does not rise quantitatively according to emissions. If you reverse the proposal and assume that temperature drives CO2, then you don’t have to resolve that difference. The two are just unrelated. It may actually be the case that the anthropogenic emissions “help” the system reach equilibrium faster!

Note that if you admit that the CO2 system responds to temperature, then CO2 increases are bound to happen whenever temperature increases. You just cannot escape that fact. So you cannot just assume that the concentration rise is due solely to anthropogenic emissions. But it also raises other interesting questions. If temperature rise triggers CO2 increase, then the CO2 increase also triggers a temperature rise through GHG forcing, just like the water vapor feedback.

    Imagine, then, that the Sun, directly or indirectly, is what really drives the system. More active Sun leads to temperature rise, which increases both water vapor and CO2, and amplifies the temperature increase solely due to the change in solar forcing. There you go: you’ve just explained why the climate responds more strongly to solar forcing than the value of the forcing itself.

    I might post more on the message board, if others want to discuss this.

Note that credit is entirely due to Allan MacRae for pointing this out. Since his graph only covered 1979-2005, I was curious to see if the correlation held further back in the past. I browsed quickly yesterday night through the IPCC chapter 7 to see if they noted the correlation. They do note the large variation in annual CO2 change, and somehow attribute it to “climatic events”. But they don’t show the correlation itself. They make it look like it’s a minor phenomenon. By the same token, they seem to totally ignore the results based on leaf stomata, which show a similar correlation between CO2 concentration and temperatures in the past. I’ll read more today to figure that out.

  40. John Creighton
    Posted Feb 13, 2008 at 8:16 AM | Permalink

    #37 I was taught that smoothing is simply a non causal filter. Thus the two shouldn’t really be that different. Smoothing does not need to be low pass.

  41. John Creighton
    Posted Feb 13, 2008 at 8:19 AM | Permalink

    #32 send me the dataset and I guess I can try it. I forget though how the FFT relates to the Fourier Transform.

  42. Rich
    Posted Feb 13, 2008 at 8:55 AM | Permalink

    Thayer Watkins at http://www.applet-magic.com/climatology.htm has some (relatively) easy-to-follow stuff on analysing climate and climate-like data which may help those like me who are not statisticians. As I read it, analysing the raw data instead of its differences is the first mistake.

    Rich

  43. Pat Keating
    Posted Feb 13, 2008 at 9:42 AM | Permalink

    Francois

    Imagine, then, that the Sun, directly or indirectly, is what really drives the system. More active Sun leads to temperature rise, which increases both water vapor and CO2, and amplifies the temperature increase solely due to the change in solar forcing. There you go: you’ve just explained why the climate responds more strongly to solar forcing than the value of the forcing itself.

    I think this is a very important point, well stated.

  44. Francois Ouellette
    Posted Feb 13, 2008 at 10:04 AM | Permalink

    #43 Pat,

    Still working on it, but a simple dynamical system where CO2 responds to temperature change, with a response time of a few decades, can mimic the historical CO2 concentration trend, without any need for anthropogenic forcing. There are only 3 parameters in my little model: equilibrium CO2 concentration, response time, and fractional change of CO2 equilibrium concentration per degree C. The parameter values I get for a good fit are all realistic. So this is a model that explains both the long term trend, and the short term (annual) correlation with temperature.

    Note that you could also add an anthropogenic contribution, but it would have to be a small fraction of the total.
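    A minimal sketch of such a relaxation model (the parameter names and values below are placeholders, not the fitted ones):

    ```python
    import numpy as np

    def co2_response(temps, co2_eq0=280.0, tau_months=600.0, frac_per_degc=0.03):
        # CO2 relaxes toward a temperature-dependent equilibrium with
        # first-order time constant tau_months.
        co2 = np.empty(len(temps))
        co2[0] = co2_eq0
        for t in range(1, len(temps)):
            target = co2_eq0 * (1.0 + frac_per_degc * temps[t])
            co2[t] = co2[t - 1] + (target - co2[t - 1]) / tau_months
        return co2
    ```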

  45. Wolfgang Flamme
    Posted Feb 13, 2008 at 10:26 AM | Permalink

    Considering that the short lag might possibly indicate land distribution, NH CO2 distribution, and lagged SST/ocean-uptake effects, I did some Granger causality tests on monthly data:

    UAH Granger-causes MaunaLoa-dCO2 but not vice versa. HadCRUT3 disputably Granger-causes MaunaLoa-dCO2 but not vice versa.

    However, UAH does not Granger-cause Barrow-dCO2, nor vice versa. But Barrow-dCO2 actually Granger-causes HadCRUT3 … again not vice versa.

  46. David Smith
    Posted Feb 13, 2008 at 12:07 PM | Permalink

    Roy Spencer’s article on “How Oceans are Driving CO2”, from Watts Up With That?

  47. Francois Ouellette
    Posted Feb 13, 2008 at 1:00 PM | Permalink

    #46 David,

I had sort of read Roy’s paper quickly. I think we’re on to something here. My little model, which may be similar to Richard Courtney’s model (I don’t have his paper yet), is capable of mimicking both the long term trend in CO2, and the short term variations, with the same three parameters. I think maybe if I had SST data instead of global temp data the match could be better. (I’ll send you my graph soon so we can post it here).

From what I’ve read from Richard Courtney on Anthony’s blog, I agree entirely with his point of view. It is entirely possible to obtain the observed CO2 increase in the atmosphere without resorting to the anthropogenic emissions.

    This seems to me like a major breakthrough. It could spread like wildfire, IMHO.

  48. kim
    Posted Feb 13, 2008 at 1:04 PM | Permalink

    So the little annual drop might be uptake by the Southern Hemisphere oceans rather than capture by Northern Hemisphere vegetation?
    ============================================================

  49. Larry
    Posted Feb 13, 2008 at 2:07 PM | Permalink

    Smoothing does not need to be low pass.

    Huh???

  50. Posted Feb 13, 2008 at 4:11 PM | Permalink

    Sorry Willis, and thanks John C and all for useful tips.

Does anybody know how to deal with another issue that comes up in establishing causality in an annually cyclical system? E.g., even though there is an apparent lag in a response such as CO2, both variables are cyclical annually and feed back into each other. So an apparent lag of 6 months could be a lead of 6 months, 18 months, 24 months, etc.

  51. Alan S. Blue
    Posted Feb 13, 2008 at 4:15 PM | Permalink

    This is something that could actually be reasonably tested for falsifiability.

    If there’s a five month lag of CO2-following-temperature.
    And if any of this satellite data holds any water…
    Then the CO2 level in May should be below the average.

    It would be interesting to study the trend of the correlation also. Are there events that might cause the excursions? (Mt. St. Helens…)

  52. Kevin B
    Posted Feb 13, 2008 at 4:31 PM | Permalink

    Francois #39

There may be another way that CO2 can rise with temperature, other than boil-off from the oceans.

    Ever since I learned that the atmosphere was once 20% CO2 with trace amounts of oxygen, and then life came along and the proportions were quickly reversed, I’ve had this picture of a race between the flora and the fauna over how much CO2 should be in the air.

    The flora are busy turning H2O, CO2 and energy into carbohydrates and oxygen while the fauna are busy turning carbohydrates into energy and carbon dioxide. (Bearing in mind that much of this occurs at the phyto- and zoo- plankton scale and in the bacterial and fungal rotting process.)

Now much of the fauna is ectothermic, so it makes sense (to me at least) that they might be more active as the temperature rises, particularly as a glacial period gives way to an interglacial.

Sorry if this is Off Topic, but with the constant refrain of ‘too much CO2’ I think it’s worth reminding ourselves occasionally that the real question is ‘why is there so little CO2 in the atmosphere’.

  53. Paul Linsay
    Posted Feb 13, 2008 at 5:46 PM | Permalink

    #50, David,

    Look for structure in the temperature time series and see if the same structure shows up in the CO2 response. The simplest would be an impulse of some sort.

  54. Mark T.
    Posted Feb 13, 2008 at 6:08 PM | Permalink

    #37 I was taught that smoothing is simply a non causal filter. Thus the two shouldn’t really be that different. Smoothing does not need to be low pass.

    Depends upon how it is implemented… Any IIR or FIR is causal if the current output only depends upon previous and current inputs. Smoothing is generally low-pass since the operation implies the removal of high frequency terms, though I suppose it could also be done as a bandpass (if you think in terms of positive and negative frequencies, i.e. complex valued data, even a low-pass filter is a bandpass filter centered at 0 Hz for that matter).

    I forget though how the FFT relates to the Fourier Transform.

An FFT is simply an efficient means of implementing a discrete Fourier transform. The FFTW website has a bunch of information and links if you need to do some research. One of the differences (other than speed) is that Cooley-Tukey based FFTs decrease in accuracy as O(sqrt(log(N))) on average. The FFTW website explains all this, btw.

    Mark

  55. bender
    Posted Feb 13, 2008 at 7:39 PM | Permalink

    #50 controlled experiment

  56. Willis Eschenbach
    Posted Feb 13, 2008 at 8:04 PM | Permalink

    Consider a signal containing a 50 Hz signal and a 5000 Hz signal, both of which oscillate between zero and one.

    One algorithm removes the 5000 Hz signal entirely, and leaves the 50 Hz signal untouched.

    Another algorithm simply averages the signals over some moving window. It does not leave the 50 Hz signal untouched. It does not remove the 5000 Hz signal.

    Now, you can call these two algorithms by the same name if you wish. You can call them both “filters”, or call them both “smoothers”. My only point is simple. They are different algorithms, which may or may not cause false correlations in analyses of the type done above.

    Me, I call the first one a “filter algorithm”, because it filters out (selectively removes) part of the signal. Because of this, it ends up with less energy in the signal.

    I call the second one a “smoothing algorithm”, because it does not filter out anything. Instead, it just smoothes out the signal, and thus ends up with the same amount of energy in the signal.

    If someone else has preferred terms to distinguish between these two very different actions, I’m happy to use them.

    Next, John C., I was confused by your statement that “smoothing does not need to be low pass”. What would a high pass smoothing algorithm look like?

Finally, does CO2 actually lag temperature as Allan MacRae says, or is that simply a spurious correlation created by the smoothing?

    w.

  57. Geoff Sherrington
    Posted Feb 13, 2008 at 8:05 PM | Permalink

    Re # 50 and # 53,

    The rotation of the earth causing time zones meant you beat me to this point as I’m in Australia. Yes, you look for anomalous events or spikes to get a GUIDE as to what is lagging what. But here again, you have to face causation. The guide is very suspect if it happens but once; only partly acceptable if it repeats on many occasions; and only fully acceptable if you can attribute a physical cause to it.

    Re # 45 , Wolfgang

    Might you please explain a little more? You are commendably cryptic about these important calculations. Do you attribute any significance to Barrow and Mauna Loa and the South Pole being very distant from postulated anthropogenic emission sources of CO2?

    Re # 33 Arnost

    There are many past CA articles on Mauna Loa and its data manipulation. The key word is the man Keeling. The ML data are manipulated and I wonder if those annual wiggles are simply continued to now because they were depicted in the past and a sudden stop might cause questions.

I am too old to get my mind around filtering and smoothing discussions when the data have already been filtered or smoothed or both before the analysis begins. This includes satellite temp data and it includes SST, for which one might ask the unanswerable question “Sea Surface Temperature at what depth?”

  58. Posted Feb 13, 2008 at 8:17 PM | Permalink

    #53, #55, #57 ‘Indicative’ seems as far as one could go with observations of a tightly coupled system. If you look for an impulse of some kind in one of the variables, you could be cherry picking, unless you deliberately looked for impulses that did not result in change as well, and didn’t try to explain them away.

  59. John Creighton
    Posted Feb 13, 2008 at 8:40 PM | Permalink

Anyway, I’m sure I’ve seen smoothing in texts refer to non-causal filters, but the best way I suppose to end the semantic debate is just to qualify whether the filter or smoother is causal, non-causal or anti-causal, and then it will be clear.

  60. John Creighton
    Posted Feb 13, 2008 at 8:45 PM | Permalink

    Mark T #54 “An FFT is simply an efficient means of implementing a discrete Fourier transform.”

    While I think that is partially true, the FFT assumes a periodic signal. The Fourier transform is used on non periodic signals. Thus there should be some subtle differences in the result.

  61. Mark T
    Posted Feb 14, 2008 at 12:12 AM | Permalink

    While I think that is partially true, the FFT assumes a periodic signal. The Fourier transform is used on non periodic signals. Thus there should be some subtle differences in the result.

    No, sorry, an FFT IS nothing more than a DISCRETE Fourier transform. Look up the definition. I gave you a link and the wikipedia article says almost word for word what I stated in the first line (coincidence, actually). You can also check the wikipedia article. An FFT provides an identical result in every way except rounding and that is entirely due to the precision in the twiddle factors, which is a consequence of floating point arithmetic (errors cascade multiplicatively in an FFT, but additively in a DFT).

    Mark

  62. Mark T
    Posted Feb 14, 2008 at 12:17 AM | Permalink

    Anyway, I’m sure I’ve seen smoothing in text refer to non causal filters but the best way I suppose to end the semantic debate is just qualify if the filter or smoother is causal, non causal or anti-causal and then it will be clear.

    Again, it has nothing to do with the filter itself, only whether the output requires future inputs or not (which is the causality definition). A moving average is generally implemented as a trailing edge filter, which means it is causal by definition, i.e. the mean of the current and previous XX years is the current output.

    Mark

  63. Mark T
    Posted Feb 14, 2008 at 12:39 AM | Permalink

One algorithm removes the 5000 Hz signal entirely, and leaves the 50 Hz signal untouched.

    Any low pass response that is sufficiently wider than the signal of interest but narrower than the signal to be removed will do this. A “smoothing” filter such as a moving average has a low pass response.

Another algorithm simply averages the signals over some moving window. It does not leave the 50 Hz signal untouched. It does not remove the 5000 Hz signal.

If you have MATLAB, run the command freqz([1 1 1 1]/4,1,4096,20000). That’s a 4-tap MA filter with a sampling frequency of 20 kHz. 5 kHz is in a null and 50 Hz is attenuated by 1 milli-dB or so. In other words, the 5 kHz signal is completely removed and the 50 Hz signal is completely untouched. The more taps you put in, the narrower the response gets. Though the null actually has infinite depth at 5 kHz, it will only “perfectly” remove a zero-bandwidth signal. The 4-tap filter is at -65 dB rejection at 5000 +/- 2.5 Hz already, however.
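    The same check in Python, for those without MATLAB (a sketch assuming scipy, whose freqz mirrors the MATLAB call):

    ```python
    import numpy as np
    from scipy.signal import freqz

    # 4-tap moving average, 20 kHz sampling rate.
    w, h = freqz([0.25, 0.25, 0.25, 0.25], 1, worN=4096, fs=20000)
    gain_db = 20 * np.log10(np.abs(h) + 1e-12)   # epsilon avoids log(0) at the null
    print("gain at 50 Hz:  %.4f dB" % gain_db[np.argmin(np.abs(w - 50))])
    print("gain at 5 kHz:  %.1f dB" % gain_db[np.argmin(np.abs(w - 5000))])
    ```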

    Now, you can call these two algorithms by the same name if you wish. You can call them both “filters”, or call them both “smoothers”. My only point is simple. They are different algorithms, which may or may not cause false correlations in analyses of the type done above.

    They are, by definition, both finite impulse response filters (unless you implement feedback). They aren’t even different algorithms other than the fact that the “taps” are weighted differently.

    Me, I call the first one a “filter algorithm”, because it filters out (selectively removes) part of the signal. Because of this, it ends up with less energy in the signal.

    Both do, actually. The biggest difference with some other filter prototype is that you can attempt to control various aspects of the response such as passband ripple, stopband attenuation, etc. With a simple MA, all you can really control is the number of taps which a) increases the number of nulls, b) decreases the passband and c) increases the total power removed.

    I call the second one a “smoothing algorithm”, because it does not filter out anything. Instead, it just smoothes out the signal, and thus ends up with the same amount of energy in the signal.

    But it does, actually… The sidelobe peak in my example above is at about 7320 Hz and is at -11.305 dB. That’s power removed, not spread out… If you smooth something, you are removing some measure of frequency content. Variances add linearly, so removing a frequency component implies removing power.

    Mark

  64. Posted Feb 14, 2008 at 1:01 AM | Permalink

    Willis,

Consider a signal containing a 50 Hz signal and a 5000 Hz signal, both of which oscillate between zero and one. One algorithm removes the 5000 Hz signal entirely, and leaves the 50 Hz signal untouched.

    So, the frequency response of this filter at 5000 Hz is zero and at 50 Hz it is one.

Another algorithm simply averages the signals over some moving window. It does not leave the 50 Hz signal untouched. It does not remove the 5000 Hz signal.

    This is just a filter with equal weights. Not specially good filter, but simple to implement ( compare the frequency responses http://www.geocities.com/uc_edit/remez.jpg and http://www.geocities.com/uc_edit/remez.jpg I linked earlier here )

    Me, I call the first one a “filter algorithm”, because it filters out (selectively removes) part of the signal. Because of this, it ends up with less energy in the signal.

I call it an ideal filter.

    I call the second one a “smoothing algorithm”, because it does not filter out anything. Instead, it just smoothes out the signal, and thus ends up with the same amount of energy in the signal.

This is the part I don’t understand. Your smoothing algorithm takes out energy of the signal. But the sum of its impulse response is 1, and thus the DC part (0-frequency) will remain unaltered; maybe that’s where this misunderstanding arises.

  65. Posted Feb 14, 2008 at 1:10 AM | Permalink

    Crosspost with Mark. My second link should end MA20.jpg ( Spam filter doesn’t like those links, can’t repost 😉 )

  66. Mark T
    Posted Feb 14, 2008 at 1:33 AM | Permalink

    So, the frequency response of this filter at 5000 Hz is zero and at 50 Hz it is one.

    Hehe, as you well know, there are lots of ways to do this, too, FIR and IIR included.

    I think one of the common misunderstandings about “smoothing” operations is the notion that they sort of “spread” the energy around. This is not what happens (at least, not without a mixing operation, but that’s a different beast).

    Mark

  67. Armin
    Posted Feb 14, 2008 at 2:45 AM | Permalink

Babelfish ‘Willis’ to ‘Mark T/UC’

    Filter algorithm = Ideal low-pass filter
Smoothing algorithm = Filter with equal weights or so

So Willis uses ‘filter’ more in an everyday English sense, like my coffee filter, while Mark T (and UC) prefer a more technical language, in which both Willis’s filter algorithm and smoothing algorithm can be implemented as a … finite impulse response filter.

    And all of us know what we mean, so why not decide to call them A and B or so and move on?

Does it change the issue that smearing out (=A) seems to be able to give false results? Isn’t this the much more interesting issue?

I’d say yes, as it means that people who use statistical methods just as black-box tools may end up with incorrect results. We’ve seen it with Mann – to name an extreme example – where he used certain statistics which he (initially at least) didn’t understand (proven by the fact that he never did any noise testing), but also here, where a simple noise test shows the result is not because of something in the data, but just a side-effect of the method used.

I’d even go so far, in line with the comment made in the article

    “However, I didn’t like the looks of the averaged data. The cycle looked artificial. And more to the point, I didn’t see anything resembling a correlation at a lag of nine months in the unsmoothed data”

If a signal is not there in the raw data and can only be found using advanced statistics, be sceptical, be very sceptical!

  68. John A
    Posted Feb 14, 2008 at 3:48 AM | Permalink

    UC – you could put them on your blog and link from there.

  69. Posted Feb 14, 2008 at 4:16 AM | Permalink

    John A

    Good idea,

    also fixed MA20 sum of impulse response to unity 😉

  70. Posted Feb 14, 2008 at 5:27 AM | Permalink

    All,

I have posted this on my own web site this morning, after receiving several emails from people asking me to comment here. This topic comes up so often that I thought a small note would be appropriate.

    —–

    I want to give you, what I hope is, a simple explanation of why you should not apply smoothing before taking correlation. What I don’t want to discuss is that if you do smooth first, you face the burden of carrying through the uncertainty of that smoothing to the estimated correlations, which will be far less certain than when computed for unsmoothed data. I mean, any classical statistical test you do on the smoothed correlations will give you p-values that are too small, confidence intervals too narrow, etc. In short, you can be easily misled.

    Here is an easy way to think of it: Suppose you take 100 made-up numbers; the knowledge of any of them is irrelevant towards knowing the value of any of the others. The only thing we do know about these numbers is that we can describe our uncertainty in their values by using the standard normal distribution (the classical way to say this is “generate 100 random normals”). Call these numbers C. Take another set of “random normals” and call them T.

    I hope everybody can see that the correlation between T and C will be close to 0. The theoretical value is 0, because, of course, the numbers are just made up. (I won’t talk about what correlation is or how to compute it here: but higher correlations mean that T and C are more related.)

    The following explanation holds for any smoother and not just running means. Now let’s apply an “eight-year running mean” smoothing filter to both T and C. This means, roughly, take the 15th number in the T series and replace it by an average of the 8th and 9th and 10th and … and 15th. The idea is, that observation number 15 is “noisy” by itself, but we can “see it better” if we average out some of the noise. We obviously smooth each of the numbers and not just the 15th.

    Don’t forget that we made these numbers up: if we take the mean of all the numbers in T and C we should get numbers close to 0 for both series; again, theoretically, the means are 0. Since each of the numbers, in either series, is independent of its neighbors, the smoothing will tend to bring the numbers closer to their actual mean. And the more “years” we take in our running mean, the closer each of the numbers will be to the overall mean of T and C.

    Now let T’ = 0,0,0,…,0 and C’ = 0,0,0,…,0. What can we say about each of these series? They are identical, of course, and so are perfectly correlated. So any process which tends to take the original series T and C and make them look like T’ and C’ will tend to increase the correlation between them.

    In other words, smoothing induces spurious correlations.
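    A quick numerical check of this claim (a sketch, not Briggs’s code): correlate 1000 pairs of independent 100-point series before and after an 8-point running mean.

    ```python
    import numpy as np

    def running_mean(x, k=8):
        return np.convolve(x, np.ones(k) / k, mode='valid')

    rng = np.random.default_rng(0)
    raw, smoothed = [], []
    for _ in range(1000):
        T, C = rng.standard_normal(100), rng.standard_normal(100)
        raw.append(abs(np.corrcoef(T, C)[0, 1]))
        smoothed.append(abs(np.corrcoef(running_mean(T), running_mean(C))[0, 1]))
    # The typical |correlation| inflates markedly on pure noise.
    print("mean |r| raw: %.3f   smoothed: %.3f" % (np.mean(raw), np.mean(smoothed)))
    ```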

Technical notes: in classical statistics any attempt to calculate the ordinary correlation between T’ and C’ fails because that philosophy cannot compute an estimate of the standard deviation of each series. Again, any smoothing method will work this magic, not just running means. In order to “carry through” the uncertainty, you need a carefully described model of the smoother and the original series, fixing distributions for all parameters, etc. etc. The whole thing also works if T and C are time series; i.e. the individual values of each series are not independent. I’m sure I’ve forgotten something, but I’m sure that many polite readers will supply a list of my faults.

    Briggs

  71. Allan MacRae
    Posted Feb 14, 2008 at 6:00 AM | Permalink

    Gentlemen,

    I was just notified of this discussion. Perhaps I can save you considerable mathematical speculation.

As stated at CS, I have produced the same graph without 12-month running means and without detrending. The correlation between ST, LT and dCO2/dt is quite obvious. ST leads dCO2/dt by one to several months. Atmospheric CO2, the integral of dCO2/dt, will lag ST by ~9 months. CO2 does not drive temperature – the future cannot cause the past.

    I have tried to post the graph here, without success, so I’ll email it to Willis – perhaps he can post it.

    Best regards, Allan

  72. Allan MacRae
    Posted Feb 14, 2008 at 6:20 AM | Permalink

    Hi Willis,

    Apparently your email has changed – you should still have mine.

    Please send me a note and I’ll re-send my message (including Figure 5b for posting) to you.

    Best, Allan

  73. Allan MacRae
    Posted Feb 14, 2008 at 7:44 AM | Permalink

    By the way Gentlemen,

    All the raw data (not just month-to-month changes) and data sources are posted with my paper and original Excel spreadsheet at
    http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

    Also, Figures 5 to 8 in the original spreadsheet do not use 12-month running means, or detrending, or both. New Figure 5b is more clear, so I hope there is some way of posting it here for you to ponder.

    Best regards, Allan

  74. RomanM
    Posted Feb 14, 2008 at 8:04 AM | Permalink

    #70 Wm. Briggs.

I agree with your conclusion that smoothing data prior to calculating correlations is not an appropriate procedure and that someone with minimal statistical understanding can easily be misled in interpreting the results. However, I find that your explanation of the reasons is somewhat misleading.

    Now let T’ = 0,0,0,…,0 and C’ = 0,0,0,…,0. What can we say about each of these series? They are identical, of course, and so are perfectly correlated. So any process which tends to take the original series T and C and make them look like T’ and C’ will tend to increase the correlation between them.

    In other words, smoothing induces spurious correlations.

Correlation is a measure of linear relationship between variables. If one were to plot T and C against each other and fit a line to the plot, the correlation would indicate the relative “closeness” of the points to the line as compared to the slope of the line. If the data were smoothed and T and C were replaced by T’ and C’, the line would remain pretty much the same, but the points would be less variable and closer to the line. The calculated correlation would be increased (provided that the slope was not zero), but evaluating its significance requires that one take into account that the correlation was calculated using the smoothed sequences. Typically, there is no gain in doing this.

With regard to your example, spurious correlations arise not because the two sequences, T’ and C’, are (close to being) identically zero, but because in practice random variation in the sequences will usually produce a false (small positive or negative) trend in each of the sequences. Rather than “inducing” the spurious correlation, the smoothing process exaggerates the strength of the relationship between the two sequences and, as you correctly point out, may produce results that appear to be meaningful when in fact they are not.

    What exactly do you mean by “in classical statistics any attempt to calculate the ordinary correlation between T’ and C’ fail because that philosophy cannot compute an estimate of the standard deviation of each series”? I am not sure that I understand.

  75. Roger Cohen
    Posted Feb 14, 2008 at 10:24 AM | Permalink

    A couple of suggestions:
    1. Check out Kravtsov and Spannagle, Multidecadal Climate Variability in Observed and Modeled Surface Temperatures (J. Climate 2007). They present a case that the Atlantic Multidecadal Oscillation gives a discernible global temperature signal and speculate (Sect. 5 and Fig 11) that it can arise from modulation of atmospheric CO2 by the varying SSTs. The residual after detrending the secular CO2 increase appears to correlate with SSTs. They did not look at leads and lags however.
    2. To Francois and others looking at the CO2 and temperature time series: What climate sensitivity (delta T/delta CO2 ppm) does the correlation imply?
    Then get delta T for doubling ~ 380 x ln 2 x (delta T/delta CO2 ppm).

  76. Mark T.
    Posted Feb 14, 2008 at 10:55 AM | Permalink

    John Creighton,

    I think the distinction you’re looking for, which I hinted at, is the relationship between discrete time and continuous time transforms. Sorry if I didn’t make that clearer. I had an additional sentence to add but it was late and I was tired, which is why I bolded DISCRETE instead. The distinction is actually the reverse of what you state, i.e. the DFT/FFT can be applied to aperiodic signals but not the regular FT (at least, not for meaningful results). The DFT/FFT essentially take the block of data that you’re transforming and copy it infinitely to estimate the FT. If a signal is aperiodic, such as a ramp, then it appears as if the block of data constitutes one period, repeated at the block rate.

    As a result, there will be a discontinuity at each block edge and the DFT/FFT will converge to the mid-point of that discontinuity. If it is a sinusoid that you are transforming, the result will be something we call “leakage” which essentially appears as power in the neighborhood of the fundamental spectral component. If, for example, you have a block that contains a sinusoid that contains 10 1/2 cycles, then this block gets repeated with the discontinuity showing up as a 180 degree phase shift every 10 1/2 cycles of the sine wave. This is the same effect as if you had a sinusoid modulated by a square wave. Since multiplication in time is convolution in frequency*, the result looks like a spectral component with “lobes” on either side of the fundamental that decay as the transform of the square wave. It is because of this that windowing techniques are often popular in DSP since they attempt to reduce the effect of the discontinuity.

    Mark

    * The convolution theorem can also be used to analyze a filter as UC and I did above. Filtering is a convolution in time, which is equivalent to a multiplication in frequency. Multiply the frequency response of the filter (e.g. MA20.jpg as given by UC) by the signal frequency response, consisting of a 50 Hz and 5 kHz signal, and the result is the original 50 Hz signal with an attenuated 5 kHz signal. To recover the signal, simply perform the inverse DFT/FFT.

  77. Andrew
    Posted Feb 14, 2008 at 11:01 AM | Permalink

    Roger Cohen, I’m not sure how you expect us to get climate sensitivity from comparing a derivative of CO2 to temperature… At any rate, estimates of sensitivity are impossible without knowing the magnitude of all the forcings (and how out of equilibrium you are), which we don’t have (especially with aerosols).

  78. Mark T.
    Posted Feb 14, 2008 at 12:15 PM | Permalink

    but not the regular FT (at least, not for meaningful results).

    This is stated incorrectly… the regular FT will decompose anything aperiodic into some spectral components but there’s no way to analyze data in this fashion, i.e. you need the analytic functions a-priori to determine their ultimate transform. The DFT and FFT simply approximate this transform but induce a periodicity that otherwise did not exist.

    Mark

  79. Pat Keating
    Posted Feb 14, 2008 at 1:16 PM | Permalink

    76, 78 Mark
    The discrete sampling doesn’t really add periodicity, it causes what is called ‘aliasing’. There is an ambiguity introduced between frequency m/N and (m+N)/N. So you have to either sample at a rate at least twice the maximum frequency in the signal being processed, or apply a low pass filter to remove higher frequencies before doing the DFT/FFT.

    The problem with the ends of the records is not due to the discrete sampling — it is due to the finite record length, and arises for a continuous FT also. In optics, it limits an instrument’s resolution because of the side-lobes, though it’s called diffraction, there.
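
    A quick MATLAB illustration of that ambiguity, with values chosen purely for convenience:

    fs = 100;  n = 500;  t = (0:n-1)/fs;
    x = sin(2*pi*60*t);           % 60 Hz tone, above the fs/2 = 50 Hz limit
    X = abs(fft(x));
    f = (0:n/2-1)*fs/n;
    plot(f, X(1:n/2))             % the peak appears at 40 Hz, not 60 Hz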

  80. Roger Cohen
    Posted Feb 14, 2008 at 1:43 PM | Permalink

    Sorry Andrew. I should have been more explicit. Take a simple phenomenological model in which temperature responds to CO2 AND its derivative via some kind of physical feedback (outgassing or whatever): T = K ( CO2/tau + dCO2/dt ). Here the first term is the linearization of T = A ln CO2, where A = (Delta T for doubling)/ln 2. Then K/tau ~ A/380 ppm. The time parameter tau is an inertial lag. So for example if T = exp(iwt), the CO2 signal will be (tau/K) x exp(iwt)/(1 + iw tau). Another simple case is a step change in temperature at t = 0, from which you get a lagging CO2 response of the form (delta T x tau/K) x [1 - exp(-t/tau)]. (Note that CO2 appears to lag temperature.) From this model and some data analysis you should be able to get a value of tau (~5 months?) and the value of K, from which you can get A and therefore the climate sensitivity for doubling.
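
    For the step case, a minimal sketch of the closed-form response; K, tau and the step size are placeholders, not fitted values:

    K = 1;  tau = 5/12;  dT = 0.5;            % tau of ~5 months, in years; all assumed
    t = linspace(0, 3, 300);                  % three years
    co2 = (dT*tau/K) * (1 - exp(-t/tau));     % the lagging CO2 response
    plot(t, co2), xlabel('years')             % CO2 reaches 63% of its final value
                                              % only at t = tau, i.e. it lags T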

    The point is that there is information in the observation that both T and dCO2/dt are varying in a similar manner, presumably faster than other factors such as aerosols. And a simple systems analysis approach might lead to an estimate of what everyone wants to know.

  81. Mark T.
    Posted Feb 14, 2008 at 1:55 PM | Permalink

    The discrete sampling doesn’t really add periodicity, it causes what is called ‘aliasing’. There is an ambiguity introduced between frequency m/N and (m+N)/N. So you have to either sample at a rate at least twice the maximum frequency in the signal being processed, or apply a low pass filter to remove higher frequencies before doing the DFT/FFT.

    Well, I wasn’t saying that the discrete sampling caused this problem, the finite block length is what causes what I was describing. It’s just that the DFT is applied over finite periods of time whereas the regular FT is over all time. What I was getting at was an “apparent” periodicity, as I described in the other post. I.e. the DFT analog to the regular FT is copied blocks (infinite in number) which have the periodicity with period equal to the block length.

    Aliasing does result from the discontinuity at the block edges, however, since it has infinite bandwidth (well, one over the sample period). But that’s not really the effect I was referring to, though they are related to each other, i.e. they are two different viewpoints of the same phenomenon.

    You can test this by performing an FFT on a block of data that contains a sinusoid with a non-integer number of cycles and is sufficiently oversampled to capture the fundamental spectral component. On either side of the fundamental, there will be the characteristic leakage.

    Test this in MATLAB as follows:

    a = sin(2*pi*(0:1023)/(1024/64.5)); % 64.5 cycles of a sinusoid of length 1024
    plot(20*log10(abs(fftshift(fft(a,2048))))) % magnitude in dB, zero-padded to 2048

    You’ll see the sinusoid response convolved with a square wave response quite clearly.
    I used 2048 points in the FFT (which is the same as padding the data with 1024 zeros) to separate the spectral components of the square wave, otherwise there would be one per bin. Applying a window reduces the discontinuity at the edges, and a low-pass filter will do something similar IF you don’t trim the edges, i.e. keep the entire convolved data set. Both will result in a decreased discontinuity at the expense of an increased main lobe. Windowed actually “looks” better in this case, however.
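
    For instance, a windowed version of the same test, with the Hann window written out inline so no toolbox is needed:

    a = sin(2*pi*(0:1023)/(1024/64.5));
    w = 0.5 - 0.5*cos(2*pi*(0:1023)/1023);         % Hann window
    plot(20*log10(abs(fftshift(fft(a.*w,2048)))))  % side lobes drop sharply,
                                                   % the main lobe roughly doubles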

    The problem with the ends of the records is not due to the discrete sampling — it is due to the finite record length, and arises for a continuous FT also. In optics, it limits an instrument’s resolution because of the side-lobes, though it’s called diffraction, there.

    Yes, that’s what I was referring to.

    Mark

  82. Mark T.
    Posted Feb 14, 2008 at 2:01 PM | Permalink

    Btw, Pat, I believe it was you and I that discussed the overall problem with finite-length transforms at some point in the past. Ultimately the problem is that finite time implies infinite bandwidth and finite bandwidth implies infinite time.

    Mark

  83. Mark T.
    Posted Feb 14, 2008 at 2:20 PM | Permalink

    I should add, that’s only approximately 64.5 cycles…

    Mark

  84. Earle Williams
    Posted Feb 14, 2008 at 2:28 PM | Permalink

    All this talk of discrete and continuous Fourier transforms is taking me back to grad school days! Unfortunately that was a quarter century and a career ago.

    I suggest the distinction between filtering and smoothing as defined by Willis may be thought of through the following ill-fitting analogy. Suppose you wish to make some delicious flaky pie crust for your cherry pie, but find that your flour has roaches in it. Crunchy roaches just aren’t aesthetically pleasing in a delicious cherry pie, so you consider how to remove or mask their presence.

    Given that you prefer real flour to Generic Crust Medium synthetic flour, you consider sifting the flour (filtering) or milling the flour (smoothing). If you sift the flour, you will remove the offending roaches but since the sifter is an imperfect physical device some of the flour will remain stuck to the roaches while some of the little insect bits will break off and pass through the screen. There has been a little bit of mixing of the two signals but for the most part you’ve removed one from further processing.

    The other alternative is to just grind everything together (smooth) so there won’t be any crunchy bits to worry about. You don’t lose any of the delicious flour and those pesky roaches are virtually undetectable, at least with regard to the flakiness of the crust. You don’t throw anything out, but you’ve altered the flavor profile of your pie crust in the process. Depending on how flaky you want the crust to be you may opt for a finer grind. You’ve guaranteed a flaky crust but the flavor is now a bit off.

    OK, it’s quite the contrived analogy and doesn’t really reflect time series data. However for me it represents the conceptual difference between filtering and smoothing.

  85. Willis Eschenbach
    Posted Feb 14, 2008 at 2:30 PM | Permalink

    Allan Macrae, author of the original paper, welcome to the discussion. My apologies for not notifying you of the discussion, my email has been !@#$%, I’ll email you with my new email address. In the meantime, some more grist for the mill.

    John Hekman said above:

    There are econometric tests for this. Granger causality tests can tell you whether it is more likely that x is causing y or y is causing x.

    This comment led me along another one of those delightful mathematical pathways. It turns out that Granger was a very brilliant mathematician, who developed a most ingenious test to tell which way causality was running between a pair of correlated datasets. We’ll call them Dataset A and Dataset B.

    The test works like this. First, develop an AR(n) model of both datasets, where “n” is the number of lags considered. What this means is that we first model the evolution of each dataset in time, using just the past history of each dataset itself. We look backwards at just the dataset itself for “n” months, and we use that information to predict the next month’s values.

    What Granger proposed was seeing whether such an autoregressive (AR) model of one dataset can be significantly improved by adding information from the other dataset, and vice-versa. If such predictions of Dataset B can be significantly improved (p < 0.05) by adding in the AR(n) information from Dataset A, then phenomenon A is said to “Granger-cause” phenomenon B.
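
    For the mechanically minded, here is a bare-bones sketch of the test on synthetic data in which x drives y; the lag count, sample size and coefficients are all invented, and a real test would add the usual diagnostics:

    T = 300;  n = 4;                  % sample size and number of lags (assumed)
    x = randn(T,1);  y = zeros(T,1);
    for t = 2:T
        y(t) = 0.5*y(t-1) + 0.4*x(t-1) + randn;   % x feeds y with a one-step lag
    end
    Y  = y(n+1:T);
    Xr = ones(T-n,1);  Xu = ones(T-n,1);
    for k = 1:n                                   % build the lag matrices
        Xr = [Xr, y(n+1-k:T-k)];                  % own lags only
        Xu = [Xu, y(n+1-k:T-k), x(n+1-k:T-k)];    % own lags plus lags of x
    end
    br = Xr\Y;  RSSr = sum((Y - Xr*br).^2);       % restricted AR(n) fit
    bu = Xu\Y;  RSSu = sum((Y - Xu*bu).^2);       % unrestricted fit
    df = (T-n) - (2*n+1);
    F  = ((RSSr - RSSu)/n) / (RSSu/df)            % large F rejects "x does not
                                                  % Granger-cause y"

    Compare F with the F(n, df) critical value (finv in the Statistics Toolbox, for example); swapping x and y tests the other direction.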

    Note that Granger causation is not the same as causation. We’d like the world to be logical, where either A causes B, or B causes A, or neither one causes the other. However, with Granger causality, sometimes A Granger-causes B while at the same time, B Granger-causes A …

    However, that is not the case here. I took a look at the Granger causation of the MSU and CO2 datasets used above, with various values for “n”. Here’s the result:

    This shows that MSU (UAH MSU satellite based tropospheric temperature) is significant (p < 0.05) in predicting CO2 values, but not the other way around. In addition, this significance starts with n = 2, and by the time we’re at n = 9, the significance test value is approximately zero. However, going the other way, CO2 is not significant in predicting MSU temperatures.

    This means that temperature variations Granger-cause variations in CO2, at least in the short term (1979-2006), but CO2 does not Granger-cause changes in MSU.

    My best to everyone, the investigation continues.

    w.

    PS – UC (and others), I’m finally starting to see what you mean by filtering and smoothing, my thanks for the patient explanations.

  86. Mark T.
    Posted Feb 14, 2008 at 2:54 PM | Permalink

    What about the other option, i.e. C->A, B. A and B would exhibit some causal relationship, but the cause is actually C, not A->B or B->A?

    Mark

  87. Posted Feb 14, 2008 at 2:54 PM | Permalink

    PS – UC (and others), I’m finally starting to see what you mean by filtering and smoothing, my thanks for the patient explanations.

    You are welcome. CA is a multidisciplinary blog, and paying attention to terminology is worthwhile (so that engineers can understand statisticians and physics guys etc. and vice versa, and finally the climatologists will join). (My message in http://www.climateaudit.org/?p=2720#comment-211052 is essentially the same as Briggs’ at http://www.climateaudit.org/?p=2720#comment-211444 , the engineer is understanding the statistician’s message 😉 )

  88. Pat Keating
    Posted Feb 14, 2008 at 3:12 PM | Permalink

    82 Mark

    Ultimately the problem is that finite time implies infinite bandwidth and finite bandwidth implies infinite time.

    That’s true, but we never have infinite pieces of anything, and we deal with it. We can deal with that problem by things like apodization, of course.
    We can also extend the record by doubling it, with a reflection. This gives you a new function, twice as long, which is symmetric. This eliminates the discontinuity in the function at the ends of the record, which reduces the problem quite a bit. We still have the slope discontinuities, but we’ve reduced the errors significantly.
    I wrote a short paper on this in connection with FT interpolation methods (IEEE Trans. on Acoustics, Speech, Signal Processing, ASSP-26, 368 (1978)) if you are interested.
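
    The trick is one line in MATLAB; applied to Mark’s earlier test signal it looks like this:

    a = sin(2*pi*(0:1023)/(1024/64.5));
    b = [a, fliplr(a)];                    % reflected extension: length 2048, symmetric
    plot(20*log10(abs(fftshift(fft(b)))))
    % compare with fft(a,2048): the end-point jump is gone, so the leakage
    % skirts should fall away faster, with no main-lobe broadening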

  89. Peter Hartley
    Posted Feb 14, 2008 at 4:05 PM | Permalink

    Willis #85 The conclusion that “temperature variations Granger-cause variations in CO2, at least in the short term (1979-2006), but CO2 does not Granger-cause changes in MSU” is astonishing. I hope you have alerted Roy Spencer to this result. It would seem to add to the statistical evidence he had adduced for essentially the same conclusion in the pieces he wrote for Andrew Watts’ blog.

    But what do you conclude from this? Given that we believe the physics behind the radiation absorption by CO2 is correct, does this mean that there must be enough negative feedbacks in the real world to make it irrelevant? Or are the lags from CO2 to temperature much longer than you have allowed for?

    Might there be a statistical problem if the CO2 series is non-stationary whereas the temperature series are not? I think the Granger causation tests are problematic under those circumstances. Even in that case, however, it would seem to raise questions about the CO2-> temperature causal mechanism. What do you conclude?

  90. Wansbeck
    Posted Feb 14, 2008 at 4:10 PM | Permalink

    If anyone wants more information on the data reflection mentioned by Pat Keating in post #88 they can google ‘Discrete Cosine Transform’.

  91. Mark T.
    Posted Feb 14, 2008 at 4:15 PM | Permalink

    As does a window function (reduce the bandwidth problem). Blackman seems to provide the best trade-off between main lobe and side lobes for most of the things I do, but I guess that’s dependent upon your particular application. I’ll look up your publication since I THINK I have access to the ASSP transactions (I’m in the Signal Processing, Comm and Aerospace/Defense societies).

    1978, you’re dating yourself, you know! 🙂

    Mark

  92. Peter Hartley
    Posted Feb 14, 2008 at 4:20 PM | Permalink

    Further to #89 First off, I apologize — I should have said Anthony Watts not Andrew Watts.

    Second, as regards the non-stationarity of CO2, what happens if you “subtract” the anthropogenic emissions of CO2 (from the CDIAC web site) from the Mauna Loa numbers? For example, suppose you regress the CO2 numbers on the emissions numbers and see if the residual is stationary. That would say that the measured CO2 levels and the emissions are cointegrated. It would then be interesting to redo the Granger causality analysis using the residual from the cointegrating regression instead of the original non-stationary CO2 series. Those short run fluctuations around the long term trend should be what is driven by seasonal factors and temperature.
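
    In outline, with co2 and emis as placeholder column vectors standing in for the Mauna Loa and CDIAC series (not data posted in this thread):

    X = [ones(size(emis)), emis];
    b = X \ co2;                  % cointegrating regression
    resid = co2 - X*b;            % candidate stationary residual
    plot(resid)                   % eyeball it; formally one would apply an
                                  % ADF or KPSS test before trusting it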

  93. Martin Ringo
    Posted Feb 14, 2008 at 4:49 PM | Permalink

    Two notes
    Re: smoothing and correlation

    Many decades ago, I was taught (in econometrics) never to calculate from a smoothed series, however that series was smoothed. Actually, I believe it is OK if your calculations contain no variance or cross-product terms, but that rules out regression, correlation and a whole bunch besides. (More formally, for those who like that stuff, it is not the calculation but the interpretation: the distributions and degrees of freedom are all messed up when each observation is a function of largely similar observations.) For instance, the expected absolute value of the sample correlation statistic grows with the number of moving average terms roughly as given in the table below, with a simulation sketch after it.

    # of MA terms — expected absolute value of sample correlation
    1 — 4%
    5 — 7%
    10 — 9%
    20 — 14%
    50 — 22%
    100 — 32%
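
    A Monte Carlo sketch reproduces the shape of the table; the underlying series length is not stated above, so the 400 points below is an assumption:

    N = 400;  trials = 500;
    for m = [1 5 10 20 50 100]
        w = ones(m,1)/m;  r = zeros(trials,1);
        for i = 1:trials
            a = conv(randn(N,1), w, 'same');      % smoothed independent noise
            b = conv(randn(N,1), w, 'same');
            rc = corrcoef(a,b);  r(i) = abs(rc(1,2));
        end
        fprintf('MA %3d terms: E|r| ~ %4.1f%%\n', m, 100*mean(r))
    end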

    Re: Granger Causality

    Be careful of the use of Granger Causality. The CO2 and temperature series that Willis supplied show a rejection of the null hypothesis “there is no Granger causality” of CO2 by temperature at the 1% level for the MSU temps and the 5% level for the HadCRUT3 temps, while the reverse no-causality hypothesis (temperature by CO2) cannot be rejected. However, if one looks at annual data — that is, annual averages, not 12 month moving averages — the results switch (with HadCRUT3 now the more significant).

    Remember that the regression on which Granger causality is tested includes the lags of both the “causing” variable and the “caused” variable, and that latter relationship is not stable across specifications. Further, one has to be careful about dismissing the non-rejection with higher order lags by saying “Oh, we have just lost too many degrees of freedom.” It might be that there really are confounding cumulative effects.

    Marty
    PS: Willis, thanks for a most interesting post.

  94. Bernie
    Posted Feb 14, 2008 at 4:52 PM | Permalink

    Peter:
    I would really err on the side of caution in making any assertions about causality based on these data sets and Willis’ excellent initial foray. Surely the first step is to gain agreement on what are the “best” set(s) of data and then carefully specify the model that is being tested. I fear that we are in danger of doing that which we criticize others for – crude empiricism and inflating findings that fit a world view. Certainly the notion of asking open-minded pros about a line of investigation makes sense — but I can’t imagine that we are treading brand new ground at the moment — as opposed to highlighting a potential new tool.

  95. Pat Keating
    Posted Feb 14, 2008 at 4:57 PM | Permalink

    91 Mark

    1978, you’re dating yourself, you know!

    Yeah, but that’s nothing. This will really date me:
    I first got involved with discrete FTs for a Michelson Interferometer in the early 60s, before Cooley-Tukey. I was programming a computer with TUBES and only 2K of (core) memory.
    Now, that’s old…..

  96. Jesper
    Posted Feb 14, 2008 at 5:05 PM | Permalink

    Allan MacRae:

    You have nicely shown that temperature variations strongly influence the future increment of CO2. This is important and interesting, but I don’t think your sweeping conclusions are justified.

    Notice in your Fig. 1, delta-T has both positive and negative values (mostly positive, yielding the upward trend). However, delta-CO2 is always positive. You show variations in delta-CO2 (i.e. the local slope), but this doesn’t tell us how temperature affects the long-term CO2 growth (why delta-CO2 is always positive), only how this growth varies. The crucial comparisons are CO2 level vs. Temp level, or delta-CO2 vs. delta-T.

    Someone who believes that CO2 drives temperature would think you have found evidence of a positive feedback.

  97. Pat Keating
    Posted Feb 14, 2008 at 5:14 PM | Permalink

    91 Mark
    The reflection technique I mentioned avoids the main-lobe broadening that one gets with windows/apodization, that you touch on in #91.
    As far as I know, there is no performance trade-off, as there is with most approaches, just a need for the extra computation for twice as many data values, but that’s no big deal.

    Of course, it doesn’t eliminate the problem, just reduces it, but that is true of all the methods.

  98. Mark T.
    Posted Feb 14, 2008 at 5:32 PM | Permalink

    Yeah, but you still get the mixing anomaly that I described as well, i.e. the leakage components. If you apply the reflection to the data in my above example you’ll see what I mean. Thinking about it, the problem is likely worse with sinusoidal signals than it would be for something aperiodic. A ramp using reflection would appear as a triangle wave rather than a sawtooth. I need to think about the implications with sinusoids a bit more.

    Mark

  99. Mark T.
    Posted Feb 14, 2008 at 5:34 PM | Permalink

    Before Cooley-Tukey… oof. 🙂

    Mark

  100. Peter Hartley
    Posted Feb 14, 2008 at 5:35 PM | Permalink

    Bernie #94 I agree — that was partly what I meant by “astonishing” — the results seem so much at variance with what we think we know about the underlying physics that more careful investigation is warranted. One thing that concerned me was the non-stationarity of CO2 (it has a trend) while the Granger causality test requires both series to be stationary. This concern is related to the point made by Jesper in #96, but rather than look at the changes in both temperature and CO2, I bet a formal test would find temperature levels stationary. Also, rather than make CO2 stationary by differencing, I was suggesting doing so by “purging it” of the non-stationary anthropogenic emissions.

  101. Mark T.
    Posted Feb 14, 2008 at 5:40 PM | Permalink

    Oh, and I do have access to your paper… technically I’m working (right) so I can’t read it right now.

    Mark

  102. steven mosher
    Posted Feb 14, 2008 at 5:52 PM | Permalink

    RE 91. Dating yourself is only allowed on the Onan thread

  103. Mark T.
    Posted Feb 14, 2008 at 6:04 PM | Permalink

    I was wondering which one of the peanut gallery members would jump on that. I am hardly surprised that it was the moshpit.

    Mark

  104. Allan MacRae
    Posted Feb 14, 2008 at 6:11 PM | Permalink

    #85 – Hi Willis,

    It is nice to be back to CA after a long absence.

    I first looked at this subject for about 40 minutes on December 30, 2007 and realized there was an apparent correlation between LT from UAH and dCO2/dt, as shown in Figure 1 of my paper. Early December 31 I sent an email to some friends asking for help and comments. Then I added ST data from HadCrut3. Roy Spencer of UAH and Ken Gregory of Calgary and a few others responded and were most helpful. All my original work was done without detrending, which I added later. I ran several more cases to examine the results without the use of 12-month running means. Some of these cases are included in the spreadsheet as Figures 5 to 8. I also ran cases using only ST and Mauna Loa CO2 data, going back to 1958, that are not included in the published spreadsheet. No gremlins emerged, so I published on Jan 31, 2008 on ICECAP.US.

    When I send you Figure 5b, I believe you will see that ST leads LT and dCO2/dt. I believe CO2 lags ST by ~9 months. If one were to dig deeper, I think one would find that solar variation leads ST. I await your email address.

    I would like to speed this work to a rational conclusion. My wife is expecting a baby in August, and I have to get back to work to support the next generation. Then there is the small matter of the trillions of dollars being wasted to fight the myth of catastrophic humanmade global warming. This money could be better allocated to solve real problems.

    Best regards, Allan

  105. Roger Cohen
    Posted Feb 14, 2008 at 6:32 PM | Permalink

    To Willis

    I appreciate your posting the 3 data sets. Unless I’m mistaken the “HadCRUT3” is actually the land-only “CRUTEM3” ??

  106. Willis Eschenbach
    Posted Feb 14, 2008 at 6:32 PM | Permalink

    Mark T, you are right when you say:

    What about the other option, i.e. C->A, B. A and B would exhibit some causal relationship, but the cause is actually C, not A->B or B->A?

    Mark

    This is quite possible. What Mark proposes is that if C causes both A and B, either A or B could be the Granger-cause of the other, but that would be just an effect of the causation by C.

    w.

  107. Allan MacRae
    Posted Feb 14, 2008 at 7:08 PM | Permalink

    #96 – Jesper has pointed out one of the main counter-arguments that will arise. The IPCC alludes to possible feedback mechanisms in AR4, and I believe this is what they are referring to. Jesper’s question has been discussed at CS and still merits further examination, in my opinion.

    Here are comments on this subject from CS that may be of interest:
    _____________________________________________

    With respect, there are many facts that counter your arguments – here are four:

    1. If humanmade CO2 emissions were the dominant factor driving atmospheric CO2 concentrations, they would leave a clear signal in the data, and they do not.

    2. The oceans and land contain vastly more carbon and CO2 than the atmosphere and CO2 moves reasonably freely between these three.

    3. Seasonal variations in atmospheric CO2 and seasonal exchanges between oceans, land and the atmosphere are much greater than the humanmade CO2 component – the system is not nearly saturated and the humanmade CO2 component is easily accommodated within the much larger system and is a small component of that larger system.

    4. The impact of temperature change (primary driver is the Sun and its variability) shows clearly in the dCO2/dt signal – if it were a minor feedback effect as alleged, it would not be visible, but would be buried within the alleged dominant signal from humanmade CO2 emissions (which does not exist).

    Regards, Allan

    P.S. Point 5 – the “missing sink”.
    ___________________________________________

    [SM: snip – we’re not doing Beck here]

    ________________________________________________

  108. Pat Keating
    Posted Feb 14, 2008 at 8:01 PM | Permalink

    98 Mark
    IIRC, I was actually using non-integral sinusoids as worst-case tests for the method.
    Now I was using it for interpolation, and turning a ramp into an isosceles triangle is not a problem for that, but it might be for other uses of the FFT. I’ll have to think about it.

  109. Willis Eschenbach
    Posted Feb 14, 2008 at 8:28 PM | Permalink

    Peter H., thank you for your comment. You say:

    Willis #85 The conclusion that “temperature variations Granger-cause variations in CO2, at least in the short term (1979-2006), but CO2 does not Granger-cause changes in MSU” is astonishing. I hope you have alerted Roy Spencer to this result. It would seem to add to the statistical evidence he had adduced for essentially the same conclusion in the pieces he wrote for Andrew Watts’ blog.

    But what do you conclude from this? Given that we believe the physics behind the radiation absorption by CO2 is correct, does this mean that there must be enough negative feedbacks in the real world to make it irrelevant? Or are the lags from CO2 to temperature much longer than you have allowed for?

    Might there be a statistical problem if the CO2 series is non-stationary whereas the temperature series are not? I think the Granger causation tests are problematic under those circumstances. Even in that case, however, it would seem to raise questions about the CO2-> temperature causal mechanism. What do you conclude?

    Several questions in there. I tried the Granger causality tests using detrended CO2, and using detrended CO2 and detrended MSU. Both gave the same answer, that MSU Granger-causes CO2, and not the other way around.

    I have not written to Roy about this finding, I’m always cautious before saying much more than “this is how it looks to me …”. I wouldn’t swear that my results are right without much more investigation.

    I don’t know what to conclude from this. It is quite possible, for example, that temperature causes changes in CO2 in the short term, while CO2 causes changes in temperature in the long term …

    w.

  110. Posted Feb 14, 2008 at 8:37 PM | Permalink

    Allan #107, are you aware of the work of Stern and Kaufmann in this regard? They have used econometric methods in unravelling CO2, aerosol, temperature and solar effects using cointegration and Granger causation. In one of their papers they mention a negative coefficient for CO2, indicating a lagging response to temperatures. However, if I remember correctly, they change the assumptions and the effect disappears. I would have to hunt up the papers as they are about 10 years old. At the time they were taken as evidence that CO2 affects temperatures at all. Kaufmann has also complained at RC about suppression of their work in peer review, but the issues are no doubt more complex than that characterization.

  111. Posted Feb 14, 2008 at 8:46 PM | Permalink

    Willis: It seems even the AGW’ers admit CO2 lags temps at long time scales. See http://www.realclimate.org/index.php/archives/2007/04/the-lag-between-temp-and-co2/.
    The thing is, even if CO2 lagged at all time scales, the argument is that human CO2 has been added to the system, and this in itself produces warming due to physical principles. The finding that CO2 lags temps at all time scales would cast doubt on the basic radiative model, as it seems unlikely there would be no historic cases of natural CO2 increases that were not themselves temperature-driven; but it is not impossible, so the finding would not necessarily falsify it (unfortunately).

  112. Willis Eschenbach
    Posted Feb 14, 2008 at 8:56 PM | Permalink

    Roger C., you say:

    To Willis

    I appreciate your posting the 3 data sets. Unless I’m mistaken the “HadCRUT3″ is actually the land-only “CRUTEM3″ ??

    You are correct, my bad. I didn’t use it in my analysis, so we’re still ok. Note that Allan Macrae has informed us that the data he used are at http://www.climateaudit.org/?p=2720#comment-211488. I’ll redo the Granger causality analysis with his data.

    w.

  113. Willis Eschenbach
    Posted Feb 14, 2008 at 9:15 PM | Permalink

    Allan MacRae, I just looked at your data. It looks to me as though you are comparing CO2 data which includes monthly variation with temperature data from which the monthly variations have been removed.

    Is this the case?

    w.

  114. Allan MacRae
    Posted Feb 14, 2008 at 10:49 PM | Permalink

    Willis,

    I doubt this but you can check.

    My data is included in the Excel spreadsheet at

    http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

    While I didn’t check with the Hadley people re ST, I did run the spreadsheet by Roy Spencer re LT and the folks at NOAA re CO2.

    If you can be much more specific about which columns in my spreadsheet concern you, and which Figure numbers you are referring to, then I can respond.

    You still owe me an email re Figure 5b.

    Best, Allan

  115. Allan MacRae
    Posted Feb 14, 2008 at 11:10 PM | Permalink

    #110 David,

    This is all new to me as of December 30, 2007, but may not be that new to others.

    Here is a quote from CS:

    “Several others have published analyses that provide the same finding. The seminal work was titled
    “Coherence established between atmospheric carbon dioxide and global temperature”
    ref. Kuo C, Lindberg C & Thomson DJ, Nature 343, 709–714 (22 February 1990).”

    Assuming this is true, I wonder why this has not been the subject of more discussion.

    My methodology may not be precisely correct – if not, let’s fix it. The correlation will remain, because it did not exist by mere chance in the first place.

    Some have said this is a false correlation due to the use of 12-month running means, or because of the use of detrending – but the correlation exists clearly without using either of the above – that is what Figure 5b will show once it is posted.

    Best, Allan

  116. Posted Feb 15, 2008 at 1:02 AM | Permalink

    Allan, I find this quite interesting too, both because of the statistical challenges and the prospect of finding new climate drivers that are not CO2 related. The lack of curiosity by warmers in this area is frustrating. I will try to dig up the Kaufmann paper.
    Cheers

  117. mikep
    Posted Feb 15, 2008 at 1:56 AM | Permalink

    One of the Kaufmann and Stern papers is here

    Click to access 9901.pdf

  118. Armin
    Posted Feb 15, 2008 at 1:57 AM | Permalink

    Isn’t it obvious that on the short term T causes CO2 and not vice-versa? I mean, I guess from pure ‘alarmist’ to pure ‘denier’, most acknowledge that warming the oceans causes outgassing of CO2 (and other relations exist between T and CO2, like plant growth) and also that CO2 causes some warming. It is just the magnitude that most people argue about.

    So, that on the short term T precedes CO2, is that really so amazing? I’d expect that on a continuously rising trend (*) the fluctuations are caused by other factors than us humans. That is, other factors in the climate, including T. Or am I missing the master issue here?

    *) Yes, there are people who claim the rising trend is/could also be non-human, but I personally have seen no evidence in that direction.

  119. Mark T
    Posted Feb 15, 2008 at 1:59 AM | Permalink

    This is quite possible. What Mark proposes is that if C causes both A and B, either A or B could be the Granger-cause of the other, but that would be just an effect of the causation by C.

    OK, thanks Willis. This is the “correlation does not imply causation” statement in disguise. As one quote on the wiki states (paraphrased) “yes, it does not imply causation, but it sure gives a hint.” There’s never a true way to know whether there is some other actor/actors at play without a bunch of additional information.

    I’m curious then, how one could test for that without something more substantive than we’ve been given (overall, not particularly in this thread).

    ^Pat, yeah, that’s what I thought. With a sinusoid the severity of phase shift obviously depends upon where the last cycle ends and where the first began (or at least, the relative phases between the first and last). 180 out of phase seems to be worst case to me. BTW, this matters to me usually only when I’m testing a receiver design. The only good way to determine how good your A/D clock is running (w/out expensive equipment) is to put in the cleanest tone possible and look at the FFT. Sometimes that means counting cycles to get the tone bin-centered (good thing prime radices exist in FFTs, hehe) which eliminates leakage. Then you can see what type of jitter spurs are showing up, along with some other anomalies that are bad…

    Mark

  120. Mark T
    Posted Feb 15, 2008 at 2:03 AM | Permalink

    Yes, there are people who claim the rising trend is/could also be non-human, but I personally have seen no evidence in that direction.

    Not many… the alarmist position is that all of it is man-made: “do you know how many different ways we can demonstrate that it is all man-made?” I’ve often heard. Just one proven method would suffice to convince me, I always think to myself. Most rely on a LOT of guesswork, at least, most that I keep hearing repeated ad infinitum.

    Mark

  121. Wolfgang Flamme
    Posted Feb 15, 2008 at 2:32 AM | Permalink

    @Willis
    I also included Barrow CO2 data – here’s the puzzling outcome.

  122. Posted Feb 15, 2008 at 4:51 AM | Permalink

    TIME SERIES PROPERTIES OF GLOBAL CLIMATE VARIABLES:
    DETECTION AND ATTRIBUTION OF CLIMATE CHANGE
    by David I. Stern and Robert K. Kaufmann on
    Kuo, C., C. Lindberg, and D. J. Thomson, 1990: Coherence established between atmospheric
    carbon dioxide and global temperature. Nature, 343, 709-714.

    An analysis of the phase of coherence suggests that CO2 lags temperature by five months, which suggests the presence of Granger causality (though this term is not used by the authors) from
    temperature to CO2. As described by the authors, this conclusion is very tentative. They caution
    that their method for generating correlations can generate misleading results regarding causality and
    that this lack of reliability is compounded by the short sample.

    This is a rather technical area and would be worth revisiting with the additional 10 years data now available.

  123. MarkW
    Posted Feb 15, 2008 at 5:53 AM | Permalink

    dating yourself

    Is that still legal?

  124. Posted Feb 15, 2008 at 6:43 AM | Permalink

    Slightly off-topic, but:

    the IPCC claims CO2 may stay in the atmosphere for up to 200 years. However, the curve of anthropogenic CO2 emissions and the curve of actual CO2 levels have different shapes. If I assume an e-fold removal of CO2 and calculate the expected lifetime, I get 37.5 years (+/- 5) as a residence lifetime of CO2 in the atmosphere, far from 200 years. Is the assumption of e-folding bad? Is the data improperly shown in the FAR? Or am I having a light bulb moment? Or perhaps mould growing on my brain again?

    Many thanks if any of you whizzkids out there care to educate me… 😉

  125. Willis Eschenbach
    Posted Feb 15, 2008 at 7:01 AM | Permalink

    Allan Macrae, I downloaded your spreadsheet to make sure I have the latest version. It is an excellent piece of work. You were right, you removed the monthly variations.

    The surprise to me was that you were not comparing ∆CO2 with ∆Temperature. Instead, you were comparing ∆CO2 with Temperature. That was my misunderstanding.

    Following your lead, I graphed the Granger Causality of the ∆CO2 vs UAH MSU temperature:

    Being curious whether the presence of trends would change the result, I graphed both the detrended and raw (containing trends) data. There is not much difference. This result shows that MSU temperature is significant in estimating CO2, but not the other way around.

    I also looked at whether there was any apparent lag or lead between the two datasets. Here is the graph:

    I cannot say from this that there is any lead or lag. Which is what I would expect: rather than lagging by several months, the globe reacts quickly.

    Conclusions?

    1. Temperature changes Granger-cause changes in the rate at which CO2 increases (∆CO2). Changes in ∆CO2, on the other hand, do not Granger-cause temperature variations.

    2. The correlation between temperature and ∆CO2 is small (~ 0.2) but statistically significant (p ~ 0.02, adjusted for autocorrelation).

    3. There is no discernible lag between the two.

    After midnite here, I’m off to bed.

    w.

  126. Posted Feb 15, 2008 at 7:18 AM | Permalink

    re 124:
    Welcome to the skeptic world!
    http://www.john-daly.com/dietze/cmodcalc.htm
    Carbon Model Calculations
    by Peter Dietze
    10 March 2001

    The half-life time of any partial pressure increment is 55·ln(2)=38 years.

    see for a discussion here
    http://www.john-daly.com/dietze/cmodcalD.htm

  127. Posted Feb 15, 2008 at 10:12 AM | Permalink

    re 126, Hans Erren, thanks for the link. A lot to dig into.

    I actually was sceptical before. I work with climate models of building interiors, and have yet to see or develop a model I can trust. If we cannot deterministically compute the climate in a house (heat and moisture transfer) for a period of a year, that provides some seeds for scepticism against claims that models of the earth’s climate have skill in projecting the climate a hundred years from now.

  128. Allan MacRae
    Posted Feb 15, 2008 at 10:35 AM | Permalink

    #125 Hi Willis,

    Interesting and fun stuff isn’t it?

    If you run ST Hadcrut3 versus dCO2/dt, I think you will find more lag than for LT vs. dCO2/dt.

    I accept McKitrick and Michaels (2007) conclusion that ~half the ST warming since 1980 is due to economic factors, but the peaks and valleys of the ST data still tell a story, and the data comes from the Hadley Centre, so it should be credible to the other side of this oft-rancorous debate.

    Also, using Hadcrut3 and Mauna Loa data, you can extend the analysis back to ~1958. I’ll email you that Excel spreadsheet on a confidential basis, because it has not been scrubbed for errors and contains my various meanderings. It also contains Figure 5b, which I hope you will post here in due course.

    If we had the data for sunshine and (lack of) cloud cover, I think we could show that this leads ST.

    Best, Allan

  129. Roger Cohen
    Posted Feb 15, 2008 at 8:48 PM | Permalink

    If you take the statistical relationship between dCO2/dt and temperature as a given, is it possible to account for the 20th century increase in atmospheric CO2, or at least a major piece of it, from the secular temperature rise of the 20th century alone? And if one can, why are there no similar rises in CO2 associated with other historical temperature increases (Medieval warming, Roman warming, etc.)? Are the ice cores wrong?

  130. Andrey Levin
    Posted Feb 16, 2008 at 3:10 AM | Permalink

    Re#129, Roger Cohen:
    From Lenny Kouwenberg et al., Stomatal frequency adjustment of four conifer species to historical changes in atmospheric CO2:

    Abstract

    A stomatal frequency record based on buried Tsuga heterophylla needles reveals significant centennial-scale atmospheric CO2 fluctuations during the last millennium. The record includes four CO2 minima of 260–275 ppmv (ca. A.D. 860 and A.D. 1150, and less prominently, ca. A.D. 1600 and 1800). Alternating CO2 maxima of 300–320 ppmv are present at A.D. 1000, A.D. 1300, and ca. A.D. 1700. These CO2 fluctuations parallel global terrestrial air temperature changes, as well as oceanic surface temperature fluctuations in the North Atlantic. The results obtained in this study corroborate the notion of a continuous coupling of the preindustrial atmospheric CO2 regime and climate.

    http://geology.geoscienceworld.org/cgi/content/abstract/33/1/33

    It takes at least two hokey sticks to play a game. One is already broken; time to take a closer look at the other.

    It would be interesting to compare the Kouwenberg stomatal CO2 proxy (and there are others) with the recent Loehle temperature reconstruction.

  131. MarkR
    Posted Feb 16, 2008 at 5:07 AM | Permalink

    That is the nub of the matter.

    Which causes which? It isn’t both.

  132. Roger Cohen
    Posted Feb 16, 2008 at 9:55 AM | Permalink

    Re #130. Thanks much, Andrey. I’ll check out Kouwenberg. Something like this must be true if non-anthropogenic processes modulate atmospheric CO2 to a substantial degree.

    Here’s an estimate of how large the effect of temperature on atmospheric CO2 could be. I took the data posted by Willis for the 1979+ temperature anomaly per UAH MSU and correlated it with the annual change in atmospheric CO2 (rolling 12-month intervals to eliminate the annual cycle). I got a result very similar to Francois #22 but did not look at leads or lags. The best fit (r^2 = 0.44) gives dCO2/dt = K T + const., where T is the UAH anomaly and K = 1.5 ppm/deg.C-year. The constant term presumably includes all the other sources and sinks that are independent of temperature. I did a rough integration from 1900 to 2000 using the HadCRUT3 global surface anomaly data. This gives an increase in CO2 over the century due to temperature rise of about 70 ppm, about 80% of the inferred increase from ice cores (pre-1959) and direct measurements (1958+).

    Of course all this hinges on the empirical relationship dCO2/dt = K T + const holding over the entire span of time and range of global temperatures and CO2-values — a big if. This assumption is equivalent to saying that the system is always out of equilibrium and governed by first order kinetics. Nevertheless, it does show that the inferred coupling between temperature and atmospheric CO2 is sufficiently large to make a major contribution to observed CO2 changes over time. To do this calculation “right,” one should correlate the entire 1958+ Mauna Loa data set with a surface temperature data set such as HadCRUT3, and use the same data set for the 1900-2000 integration. And one should include the statistical uncertainty in K, but this rough calculation was enough to convince me that the effect could be important.
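
    In outline, the two steps look like this, where msu, co2 and hadcrut are placeholder vectors standing in for the datasets named above (monthly UAH anomalies, monthly Mauna Loa ppm, and annual 1900-2000 surface anomalies respectively):

    dco2 = co2(13:end) - co2(1:end-12);   % rolling 12-month CO2 change, ppm/yr
    Tm   = msu(13:end);                   % matching temperature anomaly
    p    = polyfit(Tm, dco2, 1);          % p(1) is K, in ppm per deg C-yr
    co2_from_T = sum(p(1) * hadcrut)      % crude 1900-2000 integration of K*T;
                                          % the anomaly baseline matters a lot here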

    Comments please.

  133. Peter Hartley
    Posted Feb 16, 2008 at 10:34 AM | Permalink

    The surprising thing here is not that there is a T -> CO2 effect. For example, Hans Erren calculated 10 ppm/degree C from the Vostok ice core/temperature relationship (misinterpreted by Al Gore). Surely this feedback is also part of the GCM models. The surprising thing to me is that the results presented here do not show a CO2 -> T effect. Since basic physics says we should find some positive effect (albeit, from other evidence, probably much smaller than the range given by the IPCC), it suggests to me that the Granger causality tests here are missing something. Maybe the relevant lag is too long to detect with this data (but the radiation effect should be almost immediate, shouldn’t it?). My suggestion is that the test is not correct since the CO2 series is non-stationary. I understand the results are similar after de-trending the CO2 series, but maybe a simple linear trend is not the right way to make the CO2 series stationary. Maybe the (non-stationary) trend part of CO2 is related to anthropogenic CO2 emissions. If they are taken out first by regressing CO2 on the emissions, then perhaps the remaining part (the regression residual) could be looked at for lead/lag relationships to temperature.

  134. Wolfgang Flamme
    Posted Feb 16, 2008 at 1:29 PM | Permalink

    @Geoff (#57)
    Sorry for not noticing your reply at first.

    IMO there are several conclusions.
    First, on monthly timescales one should not consider CO2 as being well mixed globally.
    Second, since Granger causality results differ depending on the location of the CO2 measurement chosen, CO2 lags are probably involved as well.
    Third, since Granger causality results differ depending on the temperature dataset chosen, I do not see any support for causal relationships at all.

    So this rather looks like climate science to me.

  135. John Creighton
    Posted Feb 16, 2008 at 2:46 PM | Permalink

    Temperature causing CO2 is a low-amplitude, high-frequency effect. Global CO2 changes occur at much higher amplitudes and lower frequencies. Establishing that temperature causes CO2 on these scales does not invalidate that CO2 might cause temperature changes on different scales.

  136. deadwood
    Posted Feb 16, 2008 at 3:51 PM | Permalink

    What would result from a comparison of Mauna Loa CO2 with Pacific oceanic and/or atmospheric T rather than global?

  137. Posted Feb 16, 2008 at 5:55 PM | Permalink

    The third paragraph of Willis Eschenbach original post of February 12 says:

    “In the MacRae study, he used smoothed datasets (12 month average) of the month-to-month change in temperature (∆T) and CO2 (∆CO2) to establish the lag between the change in CO2 and temperature . Accordingly, I did the same.”

    This is false. Allan MacRae never calculated any month-to-month change in either temperature or CO2 or ∆CO2.

    The temperature curves are all 12 month averages of the detrended temperatures.
    The CO2 curve is the 12 month average of the detrended CO2 concentration.
    The ∆CO2/yr curve is the 12 month change of the 12 month average of the detrended CO2 concentration.

    I suggest the original post should be corrected.

    It seems that Willis confused detrending with taking a derivative. Note that all the temperature curves in the paper are labeled LT or ST, not delta LT and delta ST. Detrending temperature is just plotting the difference between temperature and the temperature trend line, effectively rotating the graph to change the best fit slope to zero, so the detrended best fit line is now horizontal at 0 Celsius.

    In post number 125, Willis correctly says that Allan compared ∆CO2 to Temp, rather than ∆Temp.

    Willis then says “I cannot say from this that there is any lead or lag. Which is what I would expect rather than a several month lag, the globe reacts quickly.”

    Of course, Allan’s analysis also shows that there is no significant lag between ∆CO2 and Temp, so Willis and Allan agree on this point. But why does Willis say “rather than a several month lag”? Nobody ever suggested that there was a lag of ∆CO2 of several months!

    Allan’s analysis shows a lag of 9 months of CO2 wrt temperature, but no significant lag of ∆CO2 to Temperature.

  138. Allan MacRae
    Posted Feb 16, 2008 at 10:22 PM | Permalink

    A note to reiterate something Willis mentioned earlier:

    Willis posted Lower Troposphere (LT) Tropics temperature data from UAH and Mauna Loa CO2 as per the note and graph at the top of this page.

    I used in my paper LT Global temperature data also from UAH, and Global CO2. All my raw data and sources are included in my spreadsheet, which accompanies my paper at
    http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

    I suggest all of you run consistent data if you want to compare results.

    See also my note 128 above.

  139. Allan MacRae
    Posted Feb 17, 2008 at 12:15 AM | Permalink

    RE #135 John C,

    Interesting comment John. You said:

    The Temperature causing CO2 is a low amplitude high frequency effect. Global CO2 changes occur at much higher amplitudes and lower frequencies. Establishing temperature causes CO2 on these scales does not invalidate that CO2 might cause temperature changes on different scales.

    Prior to writing this paper, I had read Veizer (2005) and Veizer and Shaviv (2003). On December 30, 2007, I realized most of the conclusions in my current paper. Since then I have read Richard S. Courtney’s Stockholm paper (2006) and B’s (2007) compilation of historic direct CO2 measurements.

    It seems to me possible that we are dealing with “a wheel within a wheel within a wheel”. This is an elegant hypothesis, and Nature is, after all, remarkably elegant…

    Here is a recent exchange on CS, to illustrate:

    Hi Tim (Ball),

    I think you’ve stated a profound truth:

    “If CO2 and especially human CO2 is driving global temperature why does every single piece of evidence at any time scale keep showing that temperature change precedes CO2 increases? ”

    Speculating:

    I’ve shown the short-term relationship – CO2 lags surface temperature by ~9 months, which relates to (peak-to-peak) cycle lengths of ~~3-5 years. Concern has been expressed that the magnitude of the temperature-CO2 relationship that I’ve described is not big enough to account for the current ~~2ppm/year growth in atmospheric CO2. Perhaps there is an underlying longer term, but similar relationship.

    B’s compilation of direct CO2 measurements centered in the reported CO2 upspike from 1936 to 1949 suggests a ~~5-10 year lag between larger CO2 trends and major temperature swings, which relates to cycle lengths of ~~70-90 years. B’s compilation of thousands of direct measurements of atmospheric CO2 has been derided but not discredited.

    Then there are the reports that CO2 lags temperature by hundreds of years, as measured from ice cores, for much longer cycle lengths.

    I recall when I started this work that someone (Ken Gregory) commented on a fundamental relationship between the cycle length and the delay (lag time).

    Best regards, Allan

    Some here may wish to conduct further analysis on the “low amplitude high frequency event” as you have described it, and this would be most helpful, in my opinion.

    Others may wish to look at the next level – the 70-90 year Gleissberg-scale events, and see if they can detect a “temperature-drives-CO2” signal there.

    I have examined (but not in my paper) the relationship between CO2 emissions and atmospheric CO2, and the only (weak) correlation I found is that CO2 emissions lag CO2 by about 2 years – some may conclude that increased atmospheric CO2 causes large numbers of people to turn up their furnaces, jump into their cars, and drive away…

    Best regards, Allan

  140. Roger Cohen
    Posted Feb 17, 2008 at 4:40 PM | Permalink

    Once again the phenomenological equation dCO2/dt + CO2/tau = K T captures both short term fluctuations and longer term secular effects. Here CO2 concentration and temperature are referenced to some base year in the period of interest. It should be adequate if changes in CO2 are small compared to the base year value, so that logarithmic effects in forcing can be ignored.

    The first term on the left is probably the result of ocean outgassing/uptake, and the second is traditional greenhouse gas forcing (John Creighton #135 and others). The parameter K was estimated at ~1.5 ppm/deg.C-year by correlating short term temperature fluctuations with annual CO2 change (#132). For slow variations in CO2 we can neglect the dCO2/dt term and get something that looks like a traditional climate sensitivity equation, CO2 = K tau x T. This can be related to the temperature rise for doubling via Temperature = A ln CO2, where A = (temperature rise for doubling)/ln 2. Then temperature relative to the base value is T ~ [A/CO2(base)] x [CO2 - CO2(base)], so we can identify K tau = CO2(base)/A. Suppose the temperature rise for doubling is 2.5 degrees. Then A = 3.6 deg C. If we take a base CO2 of 360 ppm, corresponding to the period used to extract the value of K, then tau ~ (360 ppm)/[(3.6 deg C) x (1.5 ppm/deg.C-yr)] ~ 67 years. This is a nice long time, as it should be for longer term secular variations.

    Note that the rate at which temperature increases in a forcing scenario is the sum of the traditional forcing term plus a term proportional to the SECOND DERIVATIVE OF CO2.
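
    A forward-Euler run of the equation shows both regimes at once; the parameter values are the rough estimates from this thread, and the forcing is invented:

    K = 2.3;  tau = 43;  dt = 1/12;           % ppm/deg C-yr, years, monthly step
    t = 0:dt:100;
    T = 0.005*t + 0.2*sin(2*pi*t/2);          % slow warming plus a 2-yr cycle
    co2 = zeros(size(t));
    for i = 2:length(t)
        co2(i) = co2(i-1) + dt*(K*T(i-1) - co2(i-1)/tau);
    end
    plot(t, co2)    % the 2-yr wiggle lags T by ~90 degrees (six months), while
                    % the slow trend accumulates as a secular CO2 rise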

  141. John Creighton
    Posted Feb 17, 2008 at 6:10 PM | Permalink

    #140 Fit the above equation using the high frequency data above and see if it gives a valid prediction of the low frequency response.

  142. Posted Feb 17, 2008 at 10:52 PM | Permalink

    Roger,
    In post 132, you estimated how large the effect of temperature on atmospheric CO2 could be. You said “I took the data posted by Willis for the 1979+ temperature anomaly per UAH MSU ..”

    However, Allan MacRae correctly noted in post 138 that Willis used the TROPICS temperatures, while Allan used GLOBAL temperatures in his study. If you used the data posted by Willis, did you know the K factor you calculated was based on temperatures in the tropics rather than global temperatures? Since we are concerned with global warming, and the related CO2 out-gassing, I think we should use global temperatures.

    Here is a graph of 12-month CO2 change versus MSU Temperature:

    Note that the temperature is the average of the GLOBAL MSU UAH lower troposphere 12-months used for the CO2 change.

    You had determined a best fit r2 = 0.44. The r2 shown on my graph is 0.4454, almost the same as your number, but my slope, K = 2.33 ppm/degC-yr, is significantly higher than the K = 1.5 ppm/deg.C-yr you calculated.

    My K is based on global temperature and your K may be from tropical temperatures. Can you comment on the discrepancy?

  143. Roger Cohen
    Posted Feb 18, 2008 at 1:15 AM | Permalink

    #142 Thanks Ken. Your graph looks just like mine except with a compressed temperature axis, giving the larger slope. I agree that using global temperature makes more sense, so the value K = 2.33 seems a more plausible coupling constant. Then by the argument of #140, the dynamical relaxation time for atmospheric CO2 changes is lowered to about 43 years, if the climate sensitivity is really 2.5 degrees C for doubling. Or, if one could get an independent determination of this relaxation time, one could extract the climate sensitivity empirically rather than rely on models. That may give someone a Nobel Prize, and not for peace either. Said another way, the discovery that short term temperature changes appear to correlate well with short term CO2 changes could be of high scientific importance because it gives a new empirical handle on the relationship between climate and atmospheric CO2.

    One should also extend the data all the way back to 1958 to cover the entire range of precise atmospheric concentrations and use a different (i.e., surface) data set to test the robustness of the coupling constant.

    Notice also that causality doesn’t enter here. Any lag between rapid temperature change and CO2 concentration change is too small to matter.

  144. Bernie
    Posted Feb 18, 2008 at 6:47 AM | Permalink

    Given Pielke et al.’s paper, Unresolved issues with the assessment of multidecadal global land surface temperature trends, is the global temperature better than the tropical temperature? Will it not depend on where the dominant CO2 measurements are taken? If the CO2 measures are from Mauna Loa, would the tropics be better?

  145. John Lang
    Posted Feb 18, 2008 at 8:04 AM | Permalink

    All of these charts require a third dimension – time.

    For example, MSU temperatures dropped 0.62C in the past 12 months while CO2 levels increased by 2 ppm to 4 ppm. The MSU temp anomaly in January 2008, at -0.044C, is the same anomaly as July 1979, at -0.05C, while CO2 increased by 47 ppm over that period.

    Where do those data points fit on the charts?

  147. Roger Cohen
    Posted Feb 18, 2008 at 11:33 AM | Permalink

    Re: #145. John Lang asks reasonably how the data look, and if I could master how to put a graph up, he could see the relationship between dCO2/dt and temperature. It looks similar to Francois #22. He also asks what the model says about recent events, in particular the rapid drop in temperature over the past year. It turns out that for the past 7 years or so, we have been in a cycle of alternating El Niño/La Niña events with a period of about two years. The peak-to-trough amplitude is about 0.4 deg C. This cycle is superimposed on what appears to be a slow overall cooling. To see this take a look at a plot of, for example, HadCRUT3 monthly data.

    Using a cyclic temperature variation T = To exp(iwt) on the right side of dCO2/dt + CO2/tau = K T, one gets the oscillating solution for the CO2 variation (for w >> 1/tau):
    CO2 = K To exp(iwt) / (i w),
    where the phasing implies that CO2 LAGS the temperature by 90 degrees (six months). Putting in K = 2.3 ppm/deg.C-yr from Ken #142, To = 0.2 degC, and w = 2 pi / (2 years) = pi radians/year, one gets a change in CO2 of ~0.15 ppm. This seems too small to discern and could be masked by the annual cycle. John is right in pointing out that the recent plunge is bigger, but it may still be too small to see. In any case we need to wait ~6 months to find out.
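
    The numbers, as a small sketch (values as assumed above):

    ```python
    # Amplitude of the CO2 response to a sinusoidal temperature forcing:
    # |CO2| = K * To / w, lagging the temperature by a quarter period.
    import math

    K = 2.3                      # ppm/(deg C * yr), from #142
    To = 0.2                     # deg C, amplitude of the ~2-year cycle
    period = 2.0                 # years
    w = 2 * math.pi / period     # rad/yr

    amplitude = K * To / w
    lag_months = period / 4 * 12
    print(f"CO2 amplitude ~ {amplitude:.2f} ppm, lag ~ {lag_months:.0f} months")
    ```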

    Finally, John asks how it is possible that we can end up at the same temperature as mid-1979 with a 47 ppm increase in CO2. I really don’t know, but part of the answer could be that during most of that time the temperature WAS higher than in 1979, so the oceans could have been outgassing at a higher rate. To estimate this, integrate dCO2/dt = K T over the last 30 years. Again using Ken’s value of K (#142), this gives a little less than 20 ppm additional atmospheric CO2 over that time. As for the rest of it, it is of course possible that the net anthropogenic CO2 added had little effect on temperature because of offset by aerosols (the favorite excuse for why temperature hasn’t gone up as fast as predicted), or simply that the climate sensitivity to CO2 is smaller than models calculate.

  148. Posted Feb 18, 2008 at 2:10 PM | Permalink

    Here is the Figure 5b from Allan MacRae’s file.

    This figure doesn’t use detrending or running means. The surface temperature (ST) appears to lead dCO2/dt.

  149. DeWitt Payne
    Posted Feb 18, 2008 at 6:16 PM | Permalink

    We know that atmospheric CO2 as measured at Mauna Loa shows a cyclic annual variation. We also know that these seasonal variations are much smaller for measurements made in Antarctica and much larger for measurements made at Barrow. This tells us that it’s a process on land that dominates the seasonal variability of CO2, not the ocean, and that the process is almost certainly driven by seasonal temperature change, not insolation. This means that the inter-annual CO2 concentration should also vary with inter-annual temperature change. It is also not surprising that there is no detectable signal from inter-annual variation in anthropogenic CO2 emission, because the anthropogenic flux is an order of magnitude smaller than the fluxes into and out of the biosphere and has only a small year-to-year and within-year variability. This does not mean, however, that anthropogenic emissions play an insignificant role in the long term trend in atmospheric CO2 concentration.

  150. Allan MacRae
    Posted Feb 18, 2008 at 7:54 PM | Permalink

    Ladies and Gentlemen,

    Some of you may not realize that all my data and calculations are posted on an Excel spreadsheet, so you can experiment for yourselves. Note that all my graphs are plotted wrt time.

    I see considerable miscommunication about matters that have already been “reasonably” demonstrated to be true or untrue in my paper and spreadsheet. Ken Gregory has commented on this above.

    The correlations among ST, LT, and dCO2/dt are apparent with or without 12-month running means or detrending. Figure 5b has recently been added to the published paper (and above) to demonstrate this point. CO2 lags ST by approx. 9 months.

    The title of this thread “Data Smoothing and Spurious Correlation” is also unfortunate.

    I have since run Hadcrut3 ST vs Mauna Loa CO2 back to 1958, and the relationships hold, although the 9 month lag of CO2 after ST seems to decline by a few months – no matter, the Sun (and lack of clouds) is the likely driver, not just ST.

    I have also run LT NH, LT SH and LT TRPC vs Global CO2 and the relationships all hold.

    My paper was originally posted Jan.31/08 with a spreadsheet at
    http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

    The paper is at http://icecap.us/images/uploads/CO2vsTMacRae.pdf

    The spreadsheet is at http://icecap.us/images/uploads/CO2vsTMacRae.xls

    See also Roy Spencer’s (U of Alabama, Huntsville) take on this subject at
    http://wattsupwiththat.wordpress.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/

    and http://wattsupwiththat.wordpress.com/2008/01/28/spencer-pt2-more-co2-peculiarities-the-c13c12-isotope-ratio/

    Spirited discussion is ongoing at Climate Skeptics http://tech.groups.yahoo.com/group/climatesceptics/message/44900

    and also here at Climate Audit

    Best regards, Allan

  151. DeWitt Payne
    Posted Feb 18, 2008 at 9:21 PM | Permalink

    To expand further, the seasonal variation in CO2 is caused primarily by competition between reduction (photosynthesis) and oxidation (by decay or metabolism). CO2 concentration is out of phase with temperature because photosynthesis peaks in the warm months, causing a net reduction in CO2. At Barrow, CO2 concentration is lowest in August. At Mauna Loa, it’s lowest between September and October. At the South Pole, it’s lowest in February.

  152. kim
    Posted Feb 19, 2008 at 6:06 AM | Permalink

    #151, DeWP, most of the vegetation is in the Northern Hemisphere, but while plants are capturing CO2 there, cool surface waters in the Southern Hemisphere (mostly water) are also capturing CO2. Is anyone absolutely certain of the reason for the annual variation?
    =============================

  153. DeWitt Payne
    Posted Feb 19, 2008 at 9:34 AM | Permalink

    kim,

    The fact that the magnitude of the annual variation in atmospheric CO2 increases from south to north (South Pole to Alaska) is conclusive evidence that the phenomenon is caused by annual variation in the growth rate of vegetation on land. Sea surface temperatures don’t change very much or very fast, so CO2 uptake by the ocean is relatively constant on the annual to decadal scale. The other problem with the assumption that temperature is the major driving factor for increasing CO2 is that the rate of increase in atmospheric CO2 has accelerated over time while temperature has increased linearly.

  154. DeWitt Payne
    Posted Feb 19, 2008 at 2:11 PM | Permalink

    If you plot annual averages, the South Pole data lags the Barrow data by about 28 months, with the Mauna Loa data in between. This is entirely consistent with anthropogenic CO2 emission from combined fossil fuel burning plus land use/land cover changes, primarily in the Northern Hemisphere, as the principal cause of the long term trend in atmospheric CO2 concentration.

    This reminds me of the extended discussion of hurricane data following Poisson statistics, to which the obvious response is: Duh! There is nothing new here and especially nothing that falsifies greenhouse warming. It doesn’t prove it either.

  155. kim
    Posted Feb 19, 2008 at 2:26 PM | Permalink

    Thank you, DeWP.
    ============

  156. Sam Urbinto
    Posted Feb 19, 2008 at 3:01 PM | Permalink

    Nothing is more boring than experts arguing with each other in their own field, much less cross field.

    Anyone care to estimate the percentage breakdown between land-use changes and fossil fuel use in the climate system? And the degree of their correlation to the anomaly trend. And the percentage breakdown within the fuel subset that is attributable to AGHG. Finally, the part that each gas plays when everything is taken into account.

    Can it be done scientifically, without making assumptions, generalizations, guesswork or the use of models; without regard to anything but how it all works in reality; ignoring the mundane details of specifics and overall minutiae; in a non-political, non-partisan, agnostic, neutral manner?

    I doubt it.

  157. Allan MacRae
    Posted Feb 19, 2008 at 11:45 PM | Permalink

    RE #154 Dewitt,

    Would you be so kind as to post a graph showing your 28 month lag between Barrow and South Pole, and your source of digital data.

    RE your statement “If you plot annual averages, the South Pole data lags the Barrow data by about 28 months with the Muana Loa data in between. This is entirely consistent with anthropogenic CO2 emission from combined fossil fuel burning plus land use/land cover changes primarily in the Northern Hemisphere as the principal cause of the long term trend in atmospheric CO2 concentration.”

    It is also consistent with other possible mechanisms, such as the seasonal CO2 “sawtooth” that every year dwarfs humanmade emissions.

    Thank you, Allan

  158. Allan MacRae
    Posted Feb 20, 2008 at 9:26 AM | Permalink

    Hi Dewitt:

    I found your data sources in your post 149, thank you for posting them earlier, and analyzed the data. Comments as noted in (brackets)

    DP#149 We know that atmospheric CO2 as measured at Mauna Loa shows cyclic annual variation. We also know that these seasonal variations are much smaller for measurements made in Antarctica and much larger for measurements made at Barrow. (Agree) This tells us that it’s a process on land that dominates the seasonal variability of CO2 (Agree), not the ocean and that the process is almost certainly driven by seasonal temperature change not insolation (Question this, insolation could be the primary driver). This means that the inter-annual CO2 concentration should also vary with inter-annual temperature change. It is also not surprising that there is no detectable signal from inter-annual variation in anthropogenic CO2 emission because the anthropogenic flux is an order of magnitude smaller than the fluxes into and out of the biosphere and has only a small year to year and within year variability (Agree with most). This does not mean, however, that anthropogenic emissions play an insignificant role in the long term trend in atmospheric CO2 concentration (Humanmade emissions may play a significant role in the growth of atmospheric CO2, or they may not).

    DP#151 To expand further, the seasonal variation in CO2 is caused primarily by competition between reduction (photosynthesis) and oxidation (by decay or metabolism) (Agree). CO2 concentration is out of phase with temperature because photosynthesis peaks in the warm months causing a net reduction in CO2. At Barrow, CO2 concentration is lowest in August. At Mauna Loa, it’s lowest between September and October. At the South Pole, it’s lowest in February. (Agree with months, approx.)

    DP#153 The evidence that the magnitude of the annual variation in atmospheric CO2 increases from south to north (South Pole to Alaska) is conclusive that the phenomenon is (primarily) caused by annual variation in growth rate of vegetation on land (Agree). Sea surface temperatures don’t change very much or very fast so CO2 uptake by the ocean is relatively constant on the annual to decadal scale (if oceans have been warming, is their role net uptake or net exsolution of CO2?). The other problem (please state the first problem) with the assumption that temperature is the major driving factor for increasing CO2 is that the rate of increase in atmospheric CO2 has accelerated over time while temperature has increased linearly. (A rather weak argument, imo, but not nearly as weak as the one that says CO2 is a significant driver of temperature, after a decade of no warming)

    DP#154 If you plot annual averages, the South Pole data lags the Barrow data by about 28 months with the Mauna Loa data in between. (Sorry but I cannot confirm this lag – examining the past decade including the large El Niño spike of 1998 shows much shorter lag times. How do you calculate 28 months?) This is entirely consistent with anthropogenic CO2 emission from combined fossil fuel burning plus land use/land cover changes primarily in the Northern Hemisphere as the principal cause of the long term trend in atmospheric CO2 concentration. (Other alternatives are possible, and perhaps more probable. As you point out above, the seasonal decline in CO2 at Barrow is ~18 ppm, ~9 times the annual growth rate of atmospheric CO2.)

    Regards, Allan

  159. DeWitt Payne
    Posted Feb 20, 2008 at 10:26 AM | Permalink

    Allan #158,

    For the lag calculation I calculated annual averages for Barrow and South Pole starting in 1976, plotted and calculated trend lines. I then took the value for 1991 for Barrow and plugged that into the trend line equations to calculate the time value for each (in months) and subtracted. I should probably look at the system as having a net source of CO2 in the Northern Hemisphere and a net sink in the Southern Hemisphere resulting in a concentration gradient rather than a lag. The year over year behavior, particularly the El Nino 1998 spike as you noted, does indicate that the actual lag is nowhere near as large as 28 months. As the concentration continues to increase, the source flux must be greater than the flux into the sink. Here’s the graph (I hope):
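
    And for those who want to check the arithmetic, a rough sketch of the trend-line lag estimate described above, with synthetic series standing in for the Barrow and South Pole annual means (assumed numbers, not the real NOAA data):

    ```python
    # Fit a trend line to each station, then ask when the South Pole trend
    # reaches the Barrow trend value for 1991. A constant N-S offset on a
    # common upward trend reads as an apparent "lag" of offset/slope.
    import numpy as np

    years = np.arange(1976, 2007, dtype=float)
    rng = np.random.default_rng(1)
    brw = 335 + 1.55 * (years - 1976) + rng.normal(0, 0.3, years.size)
    spo = brw - 3.5                       # stand-in: ~3.5 ppm below Barrow

    b1, b0 = np.polyfit(years, brw, 1)    # Barrow slope, intercept
    s1, s0 = np.polyfit(years, spo, 1)    # South Pole slope, intercept

    value_1991 = b1 * 1991 + b0           # Barrow trend value in 1991
    t_spo = (value_1991 - s0) / s1        # year the South Pole trend hits it
    print(f"apparent lag ~ {(t_spo - 1991) * 12:.0f} months")   # ~27 here
    ```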

  160. Sam Urbinto
    Posted Feb 20, 2008 at 12:48 PM | Permalink

    So while the CO2 trended up 60 ppmv (about 18% of 330) over 30 years, the anomaly trend line was up .55 (about 4% of 14) over 30 years.

    During the time, the gistemp anomaly trend went up and down, as low as -.16 and as high as +.62 (depending on the data set, that might differ), over the same period, and ended at +.54. I’m curious as to how one explains the many periods when it suddenly went down or up by around half of that or more (81-82 down .22, 94-98 up .45, 90-92 down .26, 98-99 down .24, 99-00 up .23, etc.). I don’t see how that can be correlated, at least not more reliably than as a possibility that some of the effect in the climate system is net warming due to this one of many factors, to some unknown extent.

    Maybe think it might just be the system doing its thing?

  161. DeWitt Payne
    Posted Feb 20, 2008 at 3:42 PM | Permalink

    Sam #160,

    Maybe think it might just be the system doing its thing?

    Specifically, heat transfer between the ocean and atmosphere (El Niño/La Niña, e.g.) and volcanoes (Pinatubo, El Chichon) are implicated, and probably more we don’t know about. The warmers are way too glib about attribution considering that they don’t model El Niño/La Niña all that well (see Hansen’s prediction of a super El Niño for last year as opposed to the fairly large La Niña we’re getting now). If the weather/climate is indeed chaotic then it varies on all time scales. What I’m trying to do besides improving my own understanding is to keep the skeptics here grounded and not chasing after some idea that will only alienate the uncommitted.

  162. DeWitt Payne
    Posted Feb 20, 2008 at 4:37 PM | Permalink

    Allan,

    The southern ocean(s) must be a net sink of CO2 for two reasons. First, the annual average concentration of CO2 at the South Pole is lower than the average over the same period at Barrow. Second, the within-year variability of CO2 decreases from Barrow to Mauna Loa to the South Pole. Using the electric circuit model, the ocean is a capacitor and the finite flux rate of CO2 from North to South acts as a resistor, so ripple in the signal is filtered.

    IMO, the null hypothesis is that the year-over-year variability in CO2 concentration is primarily caused by the same mechanism as that which causes the within-year variability. AFAIK, only fluxes into and out of the terrestrial biosphere are large enough to explain the variability in the Barrow CO2 data. Given that the driver of that variability is the very large change in absolute temperature from summer to winter (you have to use actual temperature here, not temperature anomaly), I do not believe that the comparatively small change in average temperature over thirty years is anywhere close to sufficient to explain the long term trend in atmospheric CO2.

    Here’s the Barrow year-to-year delta CO2 plot:

    No 1998 spike in this data. In fact, I don’t see any correlation to satellite delta T, although I didn’t try all that hard. I looked at UAH NoPol land and sea with no lag and with a six-month lag.

  163. Sam Urbinto
    Posted Feb 20, 2008 at 6:05 PM | Permalink

    DeWitt:

    keep the skeptics here grounded and not chasing after some idea that will only alienate the uncommitted.

    I appreciate that effort.

    We know certain things:
    There are 800% more people than before.
    Burning fossil fuels adds AGHG to the atmosphere.
    Changing the land and burning fuel impacts the weather and therefore climate.
    Many glaciers are receding or melting.
    GreenHouse Gases absorb and emit IR.
    The satellite averages show what they measure is increasing on average.
    The air readings combined show what they measure is increasing on average.
    There are more AGHG in the atmosphere than before.
    The anomaly trend is going up since before.
    There isn’t universal scientific consensus on everything associated with this subject.
    Some of the ways we deal with data aren’t known to be robust (or are known not to be).
    There are some issues with the anomaly gathering sites, ones that may or may not be adequately taken into account.
    There is a lot that is unknown.
    There are a lot of political and policy factors in the debate not related to science.

    We don’t know other things:
    To what extent the AGHG and the trend are correlated or in which direction.
    The net effect upon weather/climate of the AGHG in the system.
    If the anomaly is an accurate indicator of rising temperatures.
    If the net effects of what appears to be going on will continue.
    What the net effects of what appears to be going on will be.
    If we can actually in practice do anything about the specifics of the suspected causes.

    None of this proves or disproves there is AGW, nor am I trying to. In fact, I agree with Steve; if I was implementing policy, I would take the suggestions of the major scientific bodies into account when developing cost/benefit and risk/reward analysis of the subject and the factors involved.

  164. DeWitt Payne
    Posted Feb 20, 2008 at 8:33 PM | Permalink

    The difference between the Barrow and the South Pole CO2 measurements is increasing over time and the rate of increase appears to be significantly different from zero, if I did my sums right. The difference in the annual average was about 3 ppm for 1976 and was over 4 ppm for 2006. This must be at least part of the data that people use to claim that the capacity for CO2 absorption by the southern oceans is not keeping up.
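
    A hedged sketch of that significance check, with synthetic stand-in numbers for the Barrow-minus-South-Pole annual differences:

    ```python
    # Is the slope of the annual Barrow - South Pole difference significantly
    # different from zero? scipy's linregress returns the slope and p-value.
    import numpy as np
    from scipy.stats import linregress

    years = np.arange(1976, 2007, dtype=float)
    rng = np.random.default_rng(2)
    diff = 3.0 + (4.0 - 3.0) / 30.0 * (years - 1976) + rng.normal(0, 0.2, years.size)

    res = linregress(years, diff)
    print(f"slope = {res.slope:.3f} ppm/yr, p = {res.pvalue:.2g}")
    ```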

  165. Allan MacRae
    Posted Feb 20, 2008 at 9:00 PM | Permalink

    #162 Thank you DeWitt,

    I will be offline starting Friday, and need to prepare Thursday. If you plot the four datasets – Global, Barrow, Mauna Loa and South Pole – the 1998 El Niño spike is apparent. It is present at Barrow, but there are many other spikes of similar or even greater magnitude. The Barrow data is much more highly variable compared to the other stations, and this seems reasonable. Given that we are calculating 12-month differences, [e.g. CO2May1981-CO2May1980], an early spring or late fall (or a similar minor shift in summer or winter) will cause some havoc with these differences.

    The Barrow data is interesting, because of the very high range of the differences, and because there are so many large negative numbers – when CO2 levels significantly declined (on an absolute basis) over the 12-month interval. Some will disagree, but I think this weakens the mass-balance arguments that humanmade CO2 is the primary driver of growth in atmospheric CO2.

    If you plot ST or any LT anomaly (NH, SH or Tropics) versus dCO2/dt, you will see the same correlation as in Figure 5b above. If you extend the analysis back to 1958, the same relationships still exist.

    Re comments on spurious correlation – look at the graphs in my paper – do you really believe that these are random relationships? I expect there are better statistical methods of showing these correlations, but they clearly do exist.

    There are problems with both positions, and there may not be sufficient good-quality data to ascertain which is more correct, or whether both factors play a significant causative role.

    Thank you again for your interesting posts.

    Best regards, Allan

  166. Posted Feb 20, 2008 at 11:17 PM | Permalink

    Do the models display the same CO2 lag characteristics as the measurements? Or does CO2 lead temperature? That would be interesting, and a significant finding, comparable to the incompatibility of the models’ tropospheric predictions with measurements.

  167. DeWitt Payne
    Posted Feb 20, 2008 at 11:50 PM | Permalink

    David,

    Do the models (GCMs, that is) include carbon cycle models? I don’t know, but I doubt it. My understanding is, and anyone who knows better please correct me, that the models are fed scenarios of global average CO2 and other GHGs like those published in the IPCC reports. If that is the case then the short term behavior of atmospheric CO2 cannot be used as a test of model validity.

  168. Posted Feb 21, 2008 at 3:12 AM | Permalink

    If carbon cycles are the major fluxes of CO2, then why don’t the models include them? Otherwise, if CO2 is implemented as a driving variable only, then they assume causation by CO2. The validity of a model is tested by testing its assumptions. If they are wrong, the model is wrong.

    I think it is up to the modellers to prove there is no other possible explanation for warming than AGW. For example, look at all the ways Anthony Watts tried to find other explanations for a possible eruption. Scientists who are really serious about getting to the truth are paranoid about trying to disprove their theory. Data like temperature leading CO2 would be thoroughly investigated to eliminate all doubt.

    So you have made the statement. Models supposedly incorporate the most important feedbacks. Why shouldn’t CO2 vs. temperature lags test model validity? That they assume CO2 leads temperature does not immunize them from data showing temperature leads CO2, in my view.

  169. Bernie
    Posted Feb 21, 2008 at 5:52 AM | Permalink

    I thought CO2 was “well mixed”. It seems to me that the charts in #148 and #162 suggest otherwise. Can someone plot the Barrow and Mauna Loa absolute and year-over-year ppm changes on the same graph?

  170. Roger Cohen
    Posted Feb 21, 2008 at 8:14 AM | Permalink

    http://en.wikipedia.org/wiki/Confirmation_bias is the answer to David’s question (#168)

  171. Posted Feb 21, 2008 at 8:27 AM | Permalink

    you can do it yourself!
    http://www.esrl.noaa.gov/gmd/ccgg/iadv/

  172. Bernie
    Posted Feb 21, 2008 at 8:54 AM | Permalink

    Hans:
    Many thanks. I had not seen this site before. Shame on me.

  173. bender
    Posted Feb 21, 2008 at 9:34 AM | Permalink

    #161 DWP:

    What I’m trying to do besides improving my own understanding is to keep the skeptics here grounded and not chasing after some idea that will only alienate the uncommitted.

    Kudos.

    If the weather/climate is indeed chaotic then it varies on all time scales.

    This leads to problems for estimating the magnitude of the CO2 sensitivity coefficient if any tuning is done to fit the GCM output to observed “trends” (which may not be a real trend, but simply low-frequency noise, like ENSO, that has yet to be characterized (or caricatured) the way ENSO has). We are told on the one hand by GS that no such tuning is done. But then we have the Kiehl thread showing that this is not true. Who do you trust?

    Elsewhere, J. Curry argues that Steve M is not getting his “engineering quality” exposition of the derivation of the CO2 sensitivity number because there is no such number, there is only an expected distribution. Respectfully: this is nonsense. Expected distributions have derivations too. So make the engineering report longer. Explain where the min, mean, max, and shape of the expected distribution come from.

    IPCC: quit dragging your heels and produce the requested report.

    I accept, based on PP and energy security concerns, that we must try to change. But what is it we are trying to adapt to? How much CC response can we expect from our (very expensive) mitigation efforts? You can’t answer those questions without the requested engineering quality report. So get on with it. And I (unlike others) want to see confidence intervals on those parameters!

    Why do I insist on statistical robustness? Because I am a scientist. I want to know the probability that something not-yet-understood is contributing to effects that are falsely being attributed to GHG-AGW. Something involving solar-ocean-atmosphere LTP noise. The thing that RC won’t touch and that Gavin Schmidt’s precious GCMs and output ensembles don’t consider.

  174. Posted Feb 21, 2008 at 9:42 AM | Permalink

    Bender, we don’t even have confidence intervals for the SRES emission scenarios (the model input). How are policy makers supposed to build policies on that?

  175. bender
    Posted Feb 21, 2008 at 9:43 AM | Permalink

    I’m not that unreasonable. I don’t insist confidence intervals be generated where they can’t be. Only where they can be.

  176. Posted Feb 21, 2008 at 12:18 PM | Permalink

    #171 Hans, I can give it a go if I can a) find the global-means output of a carbon cycle model,
    and b) it’s in a format that’s OK. I don’t follow your figure and web site; that’s observations, not model data, isn’t it?

  177. Posted Feb 21, 2008 at 4:10 PM | Permalink

    re 176:
    The Vostok ice core has a temperature and a CO2 dataset. It is then straightforward to calculate the temperature effect caused by a given CO2 change (the CO2 contribution), using the climate sensitivity range of the climate models (1-3 K/2xCO2):

    dT = 0.27 x 5.35 x ln(CO2/284.7) or dT = 0.81 x 5.35 x ln(CO2/284.7)
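
    As a function (a sketch; 284.7 ppmv is the reference CO2 used above, 5.35 W/m^2 the standard CO2 forcing coefficient, and 0.27 and 0.81 K per W/m^2 correspond to 1 and 3 K per doubling):

    ```python
    # Temperature contribution of a given CO2 level under the two bracketing
    # sensitivities quoted above.
    import math

    def dT(co2_ppmv: float, lam: float) -> float:
        return lam * 5.35 * math.log(co2_ppmv / 284.7)

    for lam in (0.27, 0.81):
        print(f"lambda = {lam}: dT at 380 ppmv = {dT(380.0, lam):.2f} K")
    ```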

  178. Posted Feb 21, 2008 at 4:20 PM | Permalink

    #177 Hans, I think you misunderstood my message. The interest is in whether the climate models, GCMs, that have carbon cycles do exhibit the same causal characteristics as the real world data that are the subject of this post. Christy and others have recently demonstrated that the models fail to accurately represent the pattern of temperature increases in the atmosphere. The inference then is that the fundamental understanding of what is driving temperature change is wrong. My question is, do we have a similar inaccuracy in the Granger cause (GC) relationships between CO2 and temperature: i.e. does Temp GC CO2 in the data, but CO2 GC temp in the models?

  179. Allan MacRae
    Posted Feb 21, 2008 at 4:39 PM | Permalink

    The spreadsheet including the new Fig. 5b is now uploaded at
    http://icecap.us/images/uploads/CO2vsTMacRaeFig5b.xls

    The updated paper remains at
    http://icecap.us/images/uploads/CO2vsTMacRae.pdf

    Regards, Allan

  180. Posted Feb 21, 2008 at 6:05 PM | Permalink

    Dear all,

    Sorry to drop in late; I was not aware that the discussion at CS and Anthony’s blog was being repeated here.

    To begin with: have a look at the scales for the CO2 variability. We are discussing a variability of +/- 2 ppmv over a period of 27 years. Allan has calculated the integral over the full 27 years and found an increase of 2 ppmv. In the same period the real CO2 level (the one which is supposed to influence temperature) increased by 42 ppmv (60 ppmv since Mauna Loa data collection started, and ~100 ppmv since the start of the industrial revolution). At the same time the emissions increased by ~70 ppmv (~110 ppmv since Mauna Loa, ~145 ppmv since…).
    By detrending the original dCO2/yr graph, one throws away some 40 ppmv, which is most of the real CO2 trend…

    Thus we are NOT discussing the real (upward) trend, but the variability in the trend! And the variability probably is influenced by temperature, with some period-dependent lag, but that doesn’t say anything about the cause of the trend or its influence…

    What we see is that on all time scales there is a lag of CO2 after temperature: several hundreds to thousands of years for glacial-interglacial transitions and back (Vostok ice core), about 50 years for shorter time frames (Law Dome ice core), and a few months for rapid changes in current times (El Niño, Pinatubo). Be aware that the latter is about a change in the speed of increase, not in the increase itself!

    More important, the ratio between temperature and CO2 is surprisingly linear in the ice cores. Smoothing may play a role, but on longer time scales that doesn’t influence ratios. For the Vostok and Law Dome ice cores, the ratio is about 8 ppmv/°C; for seasonal fluctuations (globally) about 5 ppmv/°C; and for year-by-year changes about 3 ppmv/°C (1992 Pinatubo, 1998 El Niño).

    Thus any speculation that temperature is the main driver of current CO2 increases is a little premature…

    Further, see the paper of Pieter Tans, from the celebration of 50 years of Mauna Loa CO2 measurements, about the same topic. He combined temperature and precipitation to get a better forecast of the temperature influence on CO2 levels here.

    More comment and graphs to come tomorrow…

  181. Posted Feb 21, 2008 at 8:20 PM | Permalink

    Thus any speculation that temperature is the main driver of current CO2 increases is a little premature…

    Exactly. I think the issue is whether CO2 variations produce measurable lagged correlations with temperature at any scale. If CO2 doesn’t Granger-cause temperature variations, why not? Allan postulates a common cause for both temperature and CO2 variations.

    Very nice PowerPoint presentation of Pieter Tans’s, BTW.

  182. Posted Feb 21, 2008 at 9:09 PM | Permalink

    There seems to be a strange difference between Tans’s models and Allan’s. Tans shows negative responses of dCO2 to temperature, as you would expect for a growing-season effect, but Allan seems to show a positive response of dCO2 to temperature at the same 9-month lag. Am I reading this right?

  183. DeWitt Payne
    Posted Feb 21, 2008 at 10:03 PM | Permalink

    David,

    It seems very clear to me that the within-year and year-over-year variations in CO2 are both growing-season effects. If this is the case then adding precipitation should indeed improve the correlation. Both the within-year and year-to-year dCO2 are too small to produce a detectable temperature change even with a high climate sensitivity. For example, 385 over 380 ppm CO2 with a 3 degree climate sensitivity is less than 0.06 degrees change. That’s the same order as the quoted error for all the temperature records. The delta CO2 within year is higher at Barrow, but forcing at high latitudes is smaller for a given change in CO2. The absolute temperature delta within year, which is the real driver of within-year variation, is on the order of tens of degrees, not hundredths.
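
    The back-of-envelope for that 0.06-degree figure (assumed sensitivity, standard log scaling):

    ```python
    # Equilibrium warming from a 380 -> 385 ppm step at 3 deg C per doubling.
    import math

    sensitivity = 3.0     # deg C per CO2 doubling (assumed)
    dT = sensitivity * math.log(385.0 / 380.0) / math.log(2.0)
    print(f"dT = {dT:.3f} deg C")   # ~0.057, i.e. less than 0.06 deg C
    ```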

  184. Posted Feb 21, 2008 at 10:34 PM | Permalink

    DeWitt, Yes, perhaps, but your explanation seems like attempting to explain a problem away, rather than understand it. It seems very strange to me that the enhanced AGW postulates a causal chain CO2->TroposphereT->SurfaceT but analysis of the lags seems to show the opposite chain SurfaceT->TroposphereT->CO2.

    I am not saying that this shows temperature causes dCO2. Just that the actual temporal analysis of the data admits possible causes of temperature change other than CO2 (e.g. unidentified insolation amplifications). And that detailed analysis of the lags doesn’t add weight to the received view of CO2-caused temperature variations.

  185. Posted Feb 22, 2008 at 1:38 AM | Permalink

    re 184:
    That’s because the ENSO-caused variations in CO2 uptake speed dominate the statistics over simple warming.
    Here is a multivariate analysis:

    Douglass, D.H. and B.D Clader, 2002, Climate sensitivity of the earth to solar irradiance, Geophys. Res Lett. vol 29, no. 16, 10.1029/2002GL015345

    See? The “unknown linear effect L” almost vanishes in the signal, but still agrees with 1 K/2xCO2 forcing:

    So it looks like your Granger Cause test is very sensitive to noise.

  186. Posted Feb 22, 2008 at 3:09 AM | Permalink

    David,

    The Granger test was done on dCO2/dt and temperature, where temperature leads the CO2 variability by a few months (on short-term changes like the 1992 Pinatubo and the 1998 El Niño). This lag is visible on all time scales. But since about 1850, we see that the CO2 increase leads the temperature increase, far beyond what can be expected from temperature variability.

    I am curious about the results, if one does the same test on real CO2 levels, not on dCO2/dt…
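
    One way to run that comparison (a sketch with synthetic stand-in series, not Ferdinand’s actual test; statsmodels’ grangercausalitytests checks whether the second column Granger-causes the first):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(3)
    temp = rng.normal(0.0, 0.2, 360)                    # monthly anomaly stand-in
    lagged = np.concatenate([np.zeros(9), temp[:-9]])   # temperature 9 months ago
    dco2 = 0.15 + 0.3 * lagged + rng.normal(0.0, 0.05, 360)  # dCO2/dt stand-in
    co2 = 315.0 + np.cumsum(dco2)                       # integrated CO2 level

    for series, label in [(dco2, "T -> dCO2/dt"), (co2, "T -> CO2 level")]:
        res = grangercausalitytests(np.column_stack([series, temp]), maxlag=12)
        pvals = {lag: round(r[0]["ssr_ftest"][1], 4) for lag, r in res.items()}
        print(label, pvals)
    ```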

  187. Posted Feb 22, 2008 at 4:21 AM | Permalink

    I have made a few graphs to show the difference between the long-term influences of temperature and of the emissions on the growth of CO2 in the atmosphere. If one focuses on shorter intervals, one introduces larger and larger errors, and additional short cycles like the seasonal cycle act as noise, which is comparably larger on sub-year intervals than on multi-year intervals.

    I used only yearly averages, both for the global temperature data (HadCRUT3) and for the Mauna Loa data, as the emissions are given only as yearly averages. This can influence some of the correlations, but the overall trends and appearance would be similar. Time span: 1959-2004 for the CO2 increase and 1960-2004 for dCO2. 1964 is not present, due to several missing months in the Mauna Loa data of that year, and additionally 1965 for the dCO2 trends.
    The use of other datasets for temperature or atmospheric CO2 wouldn’t give much difference in trend and/or appearance.

    Here follows Fig.1, the trends for temperature and emissions vs. CO2 increase:

    While there is a near-parallel increase between emissions and atmospheric increase, the short-term influence of temperature on the increase is hardly visible in the full trend. Thus short-term temperature variations have very little influence on the increase of CO2 in the atmosphere.
    The same holds for longer-term temperature influences: the 1959-1976 trend of temperature is completely flat; after that, there is a constant rise (until 2000). The two distinct parts in the temperature trend have no visible influence on the atmospheric CO2 trend, neither directly nor with any amount of lag. It looks like temperature has little influence on CO2 levels.

    Let us have a look at the one-by-one trends, thus assuming that either temperature or the emissions are fully responsible for the atmospheric CO2 increase.

    Fig. 2, One-to-one trendline for temperature and CO2 increase:

    Quite noisy, although with a reasonable correlation coefficient (0.870) and R^2 (0.7574). Theoretically, the decadal temperature increase might have a huge influence on the CO2 increase, but that is very questionable, as the year-by-year changes have a low influence on CO2 changes. That was already clear in Fig. 1, but in Fig. 2 we see similar problems. Take e.g. the 1998 El Niño year. There is an increase of temperature in two steps: 1997: +0.21°C, +1.1 ppmv (back to average); and 1998: +0.20°C, +2.9 ppmv (warming by the El Niño). This two-step warming of 0.41°C is over half the total warming over the full period. If we take the two-step temperature change as the base, then we have about 10 ppmv/°C short-term influence of temperature on CO2 levels (in fact smaller, as this is not detrended). If we assume instead that temperature is fully responsible for the total CO2 increase, then the ratio increases to about 100 ppmv/°C. That is ten times the year-by-year ratio, highly unlikely…
    Moreover, after the 1998 El Niño, the 1999 La Niña followed with cooling ocean waters, decreasing global temperatures by 0.25°C, yet CO2 levels increased by 2.0 ppmv. That means that the short-term temperature changes have very little influence on the overall trend. They only show small variations around the trend.

    Conclusion: That decadal temperature changes have a large influence on CO2 changes is very unlikely.

    Fig. 3, trendline between cumulative emissions and CO2 increase:

    Well, this needs little comment. That the increase of CO2 in the atmosphere follows the cumulative emissions closely (correlation: 0.999, R^2: 0.9988) is obvious. That any natural process could be responsible for an increase of CO2 in the atmosphere completely in line with the emissions is all but impossible. I don’t know of any natural process which is that linear in time…
    That the increase in the atmosphere follows the emissions in such a way points to a simple linear dynamic process in disequilibrium (as long as the emissions increase with a more or less fixed percentage).
    And comments like “no wonder that there is a high correlation between the two, as both are straight lines” need to explain why both are straight lines (in fact upward curves). For the emissions, that is not that difficult, but why should the atmospheric increase follow the emissions in such a way, amidst a multitude of natural processes, most of them far from linear…

    Conclusion: it is highly probable that the cumulative emissions are the largest cause of the increase in the atmosphere. The near-fit of the trend simply excludes a huge influence of any more variable natural process.

    Now down to year-by-year variations. Fig. 4:

    Here we see that the CO2 variations show a much larger variability, but please notice the difference in the left scale between Fig. 4 and Fig. 1! Here we talk about variations of +/- 1.3 ppmv, which are hardly visible on the 60 ppmv scale of the total increase of CO2 in the atmosphere…
    As one can see, it looks like the year-by-year CO2 variations are dominated by the temperature variations, but there is a base offset from zero at about half the emissions curve; the latter trend is much smoother.

    OK, let us have a look at the one-by-one trends…

    Fig. 5, trendline between temperature and yearly increase:

    What a mess! Looks like random noise… And a correlation of 0.661 with R^2 = 0.4375 is not that impressive, but quite normal for natural processes. The correlation will improve with more detailed (monthly) averages and by lagging the CO2 changes after the temperature changes, which is what Allan found. But there seem to be a lot of opposite temperature-CO2 swaps. And even with a better correlation, the influence of temperature variations is mainly (if not completely) within about +/- 1.3 ppmv. In how far the temperature trend over the whole period influences the CO2 increase per year is already answered in the first part, but even here: why should there be a decadal influence, if the year-by-year influence is that random?
    Last but not least: Allan found a 2 ppmv integrated rise from the detrended variability (which is mainly naturally driven) over 27 years; that means that the total contribution of natural variations over the full 44-year period is 3-4 ppmv, of a total 60 ppmv CO2 increase in the atmosphere.

    Conclusion: Temperature has some lagged influence on CO2 levels, but limited to about +/- 1.3 ppmv on short term (year-by-year) and less than 4 ppmv on longer term (44 years).

    Finally, Fig. 6, trendline between yearly emissions and yearly increase:

    Well, although the correlation is low (0.552, R^2: 0.305), the trend is far more consistent with continuous additions of CO2 to the atmosphere. Even the slope (48%) of the year-by-year increase vs. emissions is quite similar to the slope of the total increase vs. cumulative emissions (55%). It looks like the trend itself is (near) completely caused by the emissions, and the variability (large in year-by-year trends, small in cumulative trends) is caused by nature.

    Conclusion: Most of the trend itself is caused by the emissions, while the small variability around the trend is caused by natural variations.

    Add to this the fact that the emissions were (near) always larger than the measured increase, and thus the variability is about sink capacity (not about an additional source). Add to that that all other variables (ocean pH, atmospheric and oceanic d13C decrease, pre-bomb d14C depletion, oxygen depletion, ocean pCO2 increase,…) are consistent with the dominance of the emissions over natural causes on periods of 1-2 years and longer.
    Even on the longer term (which involves ice core CO2 trends), there is only a poor correlation between CO2 and temperature, which also implies a poor correlation between temperature variations and CO2 levels. See e.g. the non-influence of the 1945-1975 cooler period, while CO2 levels continuously increased (Fig. 1 has only the last 15 years of the 1945-1975 period). Thus any lag between temperature and CO2 levels smaller than 50 years should have been noticed in Fig. 1, but it is not…

    General conclusion: One needs extremely good arguments, consistent with all observations, to counter the general “consensus” (I hate that word!) that humans are responsible for the increase of CO2 in the atmosphere. Reactions that there “may be” other causes are not good enough in this case…

    That humans are responsible for the increase of CO2 in the atmosphere is a question completely separate from how far the increase of CO2 influences temperature/climate. That is a different debate…

    What this long contribution hopefully adds to the debate is that looking at (too) small time frames (monthly, seasonal, yearly) for longer-term trends (near 50 years nowadays) carries the risk of overestimating the influence of causes which have an important (but limited) influence over short time periods, but little influence on the longer term…

    Regards,

    Ferdinand

  188. RomanM
    Posted Feb 22, 2008 at 11:12 AM | Permalink

    #187 Ferdinand

    I am having a little trouble with your Figure 3. It appears to me to be a good example of the title of this thread. Correct me if I am wrong. What you have done by calculating the cumulative CO2 emissions appears to impose a smoothing which can produce a spurious positive correlation with any roughly linearly increasing (possibly unrelated) process.

    Look at it mathematically:

    Suppose that as a function of time, the rate e(t) of CO2 emissions at time t satisfies

    e(t) = C + d(t)

    where C is a constant (call it the average rate) and d(t) is the amount the rate differs from that constant at time t. Then the cumulative emission by time t is E(t) = the integral of e(t). This gives

    E(t) = Ct + D(t)

    where D(t) = the integral of d(t). If D(t) is relatively small compared to Ct, then E(t) is pretty much a straight line with a positive slope. This means that one would get a high correlation between E(t) (e.g. observed at yearly intervals) and any other linearly increasing process. The “near fit” seems spurious and I am not sure that one should base any strong conclusion about the relationship between those variables on such an analysis.
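
    A quick simulation of this point (synthetic, deliberately unrelated rate series):

    ```python
    # Integrate two independent series that each fluctuate around a constant
    # positive rate; the cumulative versions correlate almost perfectly even
    # though the rates themselves are unrelated.
    import numpy as np

    rng = np.random.default_rng(4)
    e = 1.0 + rng.normal(0.0, 0.5, 50)    # rate series 1: C + d(t)
    g = 2.0 + rng.normal(0.0, 1.0, 50)    # rate series 2, independent of e

    E, G = np.cumsum(e), np.cumsum(g)     # the "cumulative" analogues
    print(f"r(rates)      = {np.corrcoef(e, g)[0, 1]:.3f}")   # near zero
    print(f"r(cumulative) = {np.corrcoef(E, G)[0, 1]:.3f}")   # near one
    ```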

  189. DeWitt Payne
    Posted Feb 22, 2008 at 11:35 AM | Permalink

    RomanM,

    In this case it’s more like hypothesis testing rather than looking for a correlation to generate a hypothesis. The null hypothesis must be that human emissions of CO2 from fossil fuel burning and land use/land cover changes cause an increase in atmospheric CO2 concentration. We know this hypothesis is plausible by definition. It is up to the skeptics on this point to come up with a result that falsifies this hypothesis. Allan MacRae’s data falls far short.

  190. Sam Urbinto
    Posted Feb 22, 2008 at 11:57 AM | Permalink

    Ferdinand: Good stuff! But I would replace your “…~100 ppmv since the start of the industrial revolution” (1700? 1750? 1800? 1850?) with either the year you mean or even better, with this “…since the world’s population surpassed a billion people in 1804” 🙂 (Is it just me, or does the anomaly graph seem to look about the same as the graph for adding 7 billion people, or what?)

    Well.

    Anyway, as far as data smoothing and spurious correlation, how about two grids, where the mean measured temperature for the grid on hour x of day y is a mean of 50 F for the 4 years on record. Next year on that time and day? Well now, what is getting us that 50 F, hmmmm?

    Both grids have 4 sensors contributing to the grid. The sensors are well-sited, calibrated, accurate, yada yada. Both grids are fairly close to the equator, so each 5 x 5 degree grid is about 550 km on a side. Grid 1 has the sensors placed in the center of each quadrant, so they are about 165 km from the edges and 220 km from each other. Grid 2 has a large city covering the center 1 x 1 degree, and the sensors are at the corners of it, so they are all about 110 km from each other.

    The weather patterns for the areas are a bit strange though; they’re in an alternate dimension in the Bermuda Triangle.

    For years 1-4; temperatures each year in F, all rounded to whole numbers:

    Odd Grid

    Quad    Yr1  Yr2  Yr3  Yr4  Mean  SD
    1        40   60   40   60    50  12
    2        50   50   50   50    50   0
    3        20   80   20   80    50  35
    4        -8   47  102   60    50  55

    Odd Grid: Mean 50 F, StdDev 25

    Bizarro City

    Corner  Yr1  Yr2  Yr3  Yr4  Mean  SD
    A       -50   50  -50   50     0  58
    B         0   50    0   50    25  29
    C       125   75   90  110   100  26
    D        75   75   75   75    75   0

    Bizarro City: Mean 50 F, StdDev 24

  191. RomanM
    Posted Feb 22, 2008 at 12:27 PM | Permalink

    #189: DeWitt
    My point was that the graph in figure 3 should not be treated as meaningful evidence either one way or another in support of any hypothesis. This type of processing of the data can produce strong relationships even when the data are not related to each other.

  192. Sam Urbinto
    Posted Feb 22, 2008 at 1:08 PM | Permalink

    No, burning fossil fuels and other things, such as land-use/biomass burning, agriculture, industrial processes and power stations, do add AGHG; is there anyone who really disputes that they (CO2, CH4, N2O, the F-gases HFCs, PFCs and SF6, and the aerosol and ozone precursors or acidifiers CO, NMVOC, NOx, NH3 and SO2) get added?

    Anyway, the primary question is if the added AGHG are the same AGHG in the atmosphere. That seems pretty obvious; but like the above graph of AGHG etc (gridded data from EDGAR 3.2 estimates circa 1995) (or the anomaly trend) the meaning and correctness and the details are what’s in question.

    Because you have to remember one thing: none of this stuff just exists on its own; it all interacts with everything else and with the other parts of the system; water in its multiple forms of snow, rain, oceans, lakes, streams, rivers, clouds, glaciers, sea ice and vapor; UHI and other effects of roads, farms, etc. Oh, and the non-GHG gases, particularly oxygen (O1, O2, O3). Don’t forget the various processes creating run-off or soaking into the ground that change the composition of the oceans, water tables and such. Oh, and wind and sunlight.

    Anyway, back to terminology: if carbon dioxide is the chosen proxy for all of the AGHG, I doubt it’s the best way to do it, but fine; however, the accurate thing is to say it adds AGHG, so I will; there’s more to it than just the top three involved, as you can see.

    Actually, not really fine. I would go as far as to say that just saying “carbon dioxide” is an alarmist-friendly phrase, a typical type of oversimplification; a purposefully crafted system of implying things about how the climate is changing vis-a-vis the anthropogenic influence. A system where people just take it for granted (infer, think about things) that it’s not “AGHG”, it’s “carbon dioxide”; it’s not “the part of the changing climate that’s human-influenced”, it’s “climate change” or “AGW”; it’s not a “global averaged temperature anomaly trend rising”, it’s “temperature going up”; and it’s not “increased levels of AGHG in the atmosphere are caused to some extent by humans and seem to be causing the anomaly to rise”, it’s “carbon dioxide is causing dangerous warming”.

    Sure, nobody says that last one directly much, but it’s sure being implied the hell out of. And even most everyone here has drunk the Kool-Aid on this to at least some extent, me too; heck, I had to stop myself from writing some of the charged phrases while I was writing this!

  193. Sam Urbinto
    Posted Feb 22, 2008 at 1:17 PM | Permalink

    That was directed at DeWitt’s mention of “emissions of CO2 from fossil fuel burning and land use/land cover”, to comment that it is too simplistic for my tastes and carries certain connotations with it. It’s more “anthropogenic greenhouse and related gases and solids from land-use change, fossil fuel burning, and industrial processes.”

    As far as the human-influenced part of the changing climate, and what we believe is the effect, I prefer this to explain everything: “Increased levels of AGHG in the atmosphere are caused to some extent by humans, and seems to be causing the anomaly to rise.”

    I still go for the 800% population rise and the technology that enabled it and enables the technology answer as the root cause though.

  194. Posted Feb 22, 2008 at 1:55 PM | Permalink

    Re #188

    RomanM,

    About graph #3: By accident, the cumulative emissions follow a more or less constant-percentage (exponential) curve; it is just coincidence that there were only a few times when the economy had a recession. The yearly increase in the atmosphere is more variable, and indeed if you integrate (the variability) over time, this is smoothed out, as long as it is (mostly natural) variability. If it is a one-sided increase (as the emissions are, plus a small part from nature), it will not smooth out; that is the basis of the curve seen in Fig. 1 as the increase in the atmosphere, and why Fig. 3 is a near-fit.

    What we see, that the increase of CO2 in the atmosphere follows the emissions (even in year-by-year changes), is quite unusual, and points to the possibility that the sink capacity of the oceans/vegetation is limited and in general smaller than the current (yearly) emissions. That is clear in Fig. 4 too.

    Accumulation of emissions in this case didn’t introduce smoothing (a doubling of emissions in one year shows up in all further years), and neither did the bulk of the CO2 increase (only the variability is smoothed out). But the use of yearly averages did, compared to the graphs which Allan made. It depends on what you are interested in… If you are interested in seasonal CO2 exchanges, then you have to go to monthly averages. But I think that everybody here is interested in why there is an increase in the atmosphere and whether that has something to do with the emissions. In such cases, one cannot draw conclusions about a long-term trend by looking at (too) short intervals…

    Of course, that isn’t definitive proof of causation. But that is in fact already proven by the mass balance: in every one of the past (now) 50 years the emissions were equal to or larger than the increase in the atmosphere. That means that the sum of all natural flows together (into and out of the atmosphere) never added any net mass to the atmosphere over a year (there was a lot of exchange, but a net decrease in mass: more natural sink than natural source).
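
    The bookkeeping, as a sketch with made-up yearly numbers:

    ```python
    # If emissions exceed the measured atmospheric increase every year, the
    # net of all natural flows must have been a sink every year.
    import numpy as np

    emissions = np.array([3.0, 3.2, 3.4, 3.6])   # ppmv/yr equivalent (stand-in)
    increase = np.array([1.5, 2.0, 1.4, 2.2])    # measured atmospheric rise

    natural_net = increase - emissions           # positive = net natural source
    print(natural_net)                           # all negative: net natural sink
    ```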

    Thus you can’t say that the correlation between the emissions and the increase in the atmosphere may be spurious, as there is a physical path involved: emissions go directly into the atmosphere (the other way around would be more difficult), and there is nothing natural or extra-terrestrial that can influence the emissions. Thus even a cause A which has effects B and C doesn’t add up here… And as for a natural cause A which increases C completely independently of B, but gives exactly the same curve as can be expected from B (as a simple linear dynamic process): well, I am awaiting good ideas for that (and where does the mass of B go then? Out to space?)…

    It will take very good grounds to disprove the emissions-increase relationship. Something that gives a better explanation of all the observations than the current one…

  195. Posted Feb 22, 2008 at 2:22 PM | Permalink

    Re #190,

    Sam, you may not believe it, but

    is there anyone that really disputes they (CO2, CH4, N2O, F-gasses HFCs, PFCs and SF6, and aerosol and ozone pre-cursors or acidifiers CO, NMVOC, NOx, NH3 and SO2) get added?

    is exactly what is at hand. For over a year now, I and others have had quite fierce discussions with some other sceptics about whether the CO2 increase is caused by natural sources or by human emissions. Several, like Allan, base their belief in a natural cause on the large seasonal/year-by-year variations, and on the fact that the natural flows within a year are much higher than the emissions. They expect that the current flat temperature will be followed by a temperature drop, and that CO2 levels in the atmosphere will drop accordingly. Which will not happen, as long as our emissions increase year by year…

    I agree that the population curve is quite similar to the CO2 emission and atmospheric increase curves. But that is a typical case of A influencing B, which causes C… And that is not a natural process (although, are humans not natural?)…

  196. Posted Feb 22, 2008 at 2:46 PM | Permalink

    DeWitt, Ferdinand,

    Allan McRae’s data falls far short.

    Conclusion: Most of the trend itself is caused by the emissions,

    Allan’s paper is not about explaining the cause of CO2 increases with temperature. The lead paragraph of the post says clearly that it is about the potency of CO2 as the source of temperature changes:

    In the study he argues that the changes in temperature (tropospheric and surface) precede the changes in atmospheric CO2 by nine months. Thus, he says, CO2 cannot be the source of the changes in temperature, because it follows those changes.

    Nice posts, but talking about whether the CO2 increase is caused by emissions is irrelevant and OT.

  197. DeWitt Payne
    Posted Feb 22, 2008 at 3:15 PM | Permalink

    David,

    Allan’s paper is not about explaining the cause of CO2 increases with temperature. The lead paragraph of the post clearly says it is about the potency of CO2 as the source of temperature changes:

    But as Ferdinand has pointed out much more elegantly than I, it doesn’t do a very good job of that either. The within-year and year-to-year changes in CO2 as a function of temperature are best explained by the well-known mechanism of temperature as a major determinant of the rate of removal of CO2 from the atmosphere by photosynthesis in terrestrial plants, which are concentrated in the Northern Hemisphere. The apparent lag between CO2 and temperature arises because in the short term CO2 and temperature are inversely related. In order to determine the potency of CO2 in causing long-term global changes in temperature, you have to remove the confounding short-term effect.

  198. Posted Feb 22, 2008 at 3:16 PM | Permalink

    Dear David (#196)

    In the study he argues that the changes in temperature (tropospheric and surface) precede the changes in atmospheric CO2 by nine months. Thus, he says, CO2 cannot be the source of the changes in temperature, because it follows those changes.

    Two points:

    The quote is right only for the integrated ~2 ppmv in his graphs, out of the roughly 42 ppmv increase in the atmosphere. The other 40 ppmv shows no correlation with temperature at all; it shows a very high correlation with the emissions. And there is no lag between emissions and the increase in the atmosphere.

    There is no reason at all that a variable that lags another variable can not feed back on the other one (again with or without a lag). It all depends on the feedback coefficients (if larger than 1, you have a runaway reaction)…

  199. Posted Feb 22, 2008 at 3:31 PM | Permalink

    DeWitt, Appreciate your argument that the inference – CO2 lags temperature therefore CO2 cannot cause temperature change – is flawed. But one thing puzzles me:

    the apparent lag between CO2 and temperature is because in the short term CO2 and temperature are inversely related.

    Allan’s data show a positive relationship, not negative. Is it me or his data?

  200. Sam Urbinto
    Posted Feb 22, 2008 at 4:02 PM | Permalink

    Ferdinand: Yes, of course, people are natural. I usually make sure to put it in terms of natural variability versus human-influenced change in the climate.

    On the AGHG et al, I think you may be misunderstanding me. I am only saying it’s a fact we’re adding them. I’m not saying it’s a given that it’s a net addition after everything else is taken into account. This is yet another example of how much is oversimplified, vague, meaningless or unimportant in the field. However, it seems likely for a variety of reasons that some amount of what is there is due to how we use the land and energy.

    —————————-

    That said, I will again point everyone’s attention to what I believe is the root cause, to which there may not be any answer at all, whatever “this” is; regardless of what the anomaly is or means or how accurate it is, and regardless of what’s causing it.

    The positive feedback loop of population and technology.

    If indeed that is the root cause of “un-natural” 🙂 warming, all we need to do is go back to 1 billion people and 1804 technology and science, and “the planet will be safe” from the upright walking despoilers of nature. That might be a little tiny bit difficult to do. So if not that, then what?

    Look, it’s a good idea to make the world safer, better off wealth-wise, healthy and clean. It doesn’t matter why we do it; is all this just a mistaken belief that some hyped-up over-focus on the effects of population and technology, phrased as causes, is required to impel change? Or that not enough is being done and it’s not being done quickly enough, and this is the answer for more and faster? When in fact, it might be a recipe for a cake made of dynamite; disastrous unintended and unseen consequences, the least of which may be the tragedy of condemning the people of developing and poor nations to remain in poverty, disease, famine, inadequate drinking water and the like?

    In any case, since it really doesn’t matter why, what is so wrong with having a conversation about using conservation, increased efficiency, alternative power, renewable energy, environmentally friendly industrial processes and the like? As long as the goal is to reduce starvation, disease and assist developing nations to raise their standard of living?

    If indeed technology is the driver that allows 8 billion people as well as its own increases, and the effects of land change and fuel use create the sub-effects of warming, pollution and such; then isn’t the answer, short of returning to the pre-1800s, using technology and people to solve the same problems they create?

    I’d just like everyone to really think; what is there that is beneficial and productive about tracking the weather and developing information about climate as a goal? Or better yet; what are you going to do to make the world a better place for those that don’t have the luxury of a warm house, plenty to eat, and a computer screen in front of them to type here on?

    That is why it’s important to have the science correct; to reach the goal of implementing cost-effective, safe ways to take care of the most pressing issues. If we don’t know the facts, the alternatives, and the most pressing issues, based on sound science and thoughtful appraisals of the situation, taking everything into consideration; what good is what we’re doing to fix a problem we don’t even know the specifics of?

  201. DeWitt Payne
    Posted Feb 22, 2008 at 4:40 PM | Permalink

    David #199

    Allan’s data show a positive relationship, not negative. Is it me or his data?

    How about a small year-over-year direct response to temperature from increased flux of CO2 into the atmosphere from the biosphere, caused by an increased rate of decay during a warm winter. That may even be testable by looking at the within-year range of CO2. My guess (before actually looking) is that the range will increase in warm years because the min is lower and the max is higher. While that may be useful in refining carbon cycle models, I doubt that the temperature sensitivity of this effect can be related to long-term temperature change that may be caused by the long-term trend in CO2.
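
    A minimal sketch of that test in Python, assuming local files of Mauna Loa monthly means and yearly temperature anomalies – the file and column names are hypothetical placeholders, not a real dataset layout:

        import pandas as pd

        co2 = pd.read_csv("co2_mm_mlo.csv")         # columns: year, month, co2 (placeholders)
        temp = pd.read_csv("annual_temp_anom.csv")  # columns: year, anom (placeholders)

        # Within-year range: yearly max minus yearly min of the monthly means.
        yearly_range = co2.groupby("year")["co2"].agg(lambda s: s.max() - s.min())

        merged = temp.set_index("year").join(yearly_range.rename("co2_range")).dropna()
        print(merged[["anom", "co2_range"]].corr())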

  202. Posted Feb 22, 2008 at 5:17 PM | Permalink

    DeWitt,
    The statement I was referring to was this.

    within year and year to year changes in CO2 as a function of temperature are best explained by the well known mechanism of temperature as a major determinant of the rate of removal of CO2 from the atmosphere by photosynthesis by terrestrial plants

    It seems like you have an explanation available whatever the data show. Details of the carbon cycle mechanism are not something I want to get into. In defense of Allan, my main interest in following this thread is the observations that:

    1. In the short term, CO2 is of no use in predicting temperatures (because of the lag)
    2. In the long paleo term, CO2 is of no use in predicting temperatures (because of the lag)

    So where is the correlative structure to show CO2 causes temperature?

    The AGW claim, in a nutshell, is that CO2 is a useful predictor of global temperatures. Surely this should be evident by a temperature lagging CO2 at some scale?

  203. DeWitt Payne
    Posted Feb 22, 2008 at 7:33 PM | Permalink

    David,

    I’m still in educational mode here so my position will evolve over time. My explanation of Allan’s results still depends on fluxes into and out of the biosphere, but previously I had not considered a temperature effect on flux out of the biosphere because the variation with temperature of flux into the biosphere is so large. So I’m modifying my position somewhat.

    As far as AGW, I believe that there has to be some effect of CO2, all other things being equal. On the paleo time scale, all other things are essentially never equal and the other drivers of temperature are larger, probably much larger, than the effect of CO2. I would say for example that the contribution of CO2 to the glacial-to-interglacial temperature changes in the Vostok ice core is no more than 10% from direct forcing. Ice/albedo and insolation changes from orbital variation (Milankovitch cycles) are much more important. The jury is still out on variation on decadal to century time scales. IMO, the current calculated temperature trend is too high, reflecting the high point of ocean circulation cycles, and may well decline over the next decade, but I doubt it will decline to zero.

    So where is the correlative structure? How about the PETM? Of course the resolution and dating is probably insufficient to determine lead or lag, but give me another explanation besides a spike in methane that then fairly rapidly oxidized to CO2 that fits the observations.

  204. Posted Feb 24, 2008 at 2:10 AM | Permalink

    Re #200,

    Sam,

    I agree with you for the most part. I am interested in science in general and climate change in particular. And I am a sceptic to both sides of the fence. If there is sufficient proof of increased CO2 by humans (which in my opinion, after reading a lot of arguments from both sides, is the case), then it is so, until somebody comes up with a better explanation. But in my opinion, there is insufficient proof of a large influence of CO2 on temperature.

    What needs to be done is more research into alternatives to fossil fuel burning, and even more into massive energy storage (which should reduce the need for fossil backup stations). Not for CO2 reduction per se, but for less dependency on not-so-stable countries…

    And meanwhile, let’s spend the Kyoto money, which now shows no clear benefit, on projects which really aid third-world countries…

  205. Posted Feb 24, 2008 at 2:31 AM | Permalink

    Re #202:

    David,

    The problem is not with the lag itself, the problem is with the overlap. Over the past half million years, in most cases, there is a huge overlap between temperature increases (glacial-interglacial transitions) and CO2 increases: an 800 +/- 600 year lag for an increase lasting about 5,000 years. The opposite way (interglacial-glacial transitions) shows even longer lags: several thousand years for a 10,000-15,000 year decrease.

    This allows climate modellers to include a huge feedback from CO2 on temperature, as that is neither disprovable nor provable. But we have one period in time where the decline of CO2 only started after the temperature decrease was near its minimum: the end of the Eemian.

    The subsequent decline of CO2 by 40 ppmv had no measurable effect (within the measuring error) on temperature…

    This doesn’t disprove a CO2-on-temperature action, but it points to a small sensitivity of temperature to CO2 changes.

  206. Posted Feb 24, 2008 at 3:18 AM | Permalink

    Ferdinand,
    Thanks, I would like to study these epochs more.
    My question, Where is the lag structure to show dCO2 causes temperature?, was mostly rhetorical.
    I think your stance is rational, and much the same as mine, as long as alternatives [self snip].

  207. Posted Feb 24, 2008 at 6:18 AM | Permalink

    Re #130 and others about stomata data:

    One needs to take into account that stomata data are proxies, while ice core CO2 is really measured CO2 (with its own problems, of course).

    Further:
    – stomata have a wider error margin (+/- 10 ppmv) than the ice core data (+/- 1.2 ppmv for Law Dome ice core data over the past about 1,000 years).
    – stomata data have a local/regional CO2 spring bias.

    The latter needs some explanation:
    At the moment that leaves are formed, CO2 levels in the atmosphere are at their maximum (in the NH). That means that you need a calibration of stomata data against ice core data, e.g. over the last century. No problem in itself; in general the ice core – firn – South Pole CO2 curve is used. But that implies that nothing changed in local vegetation over the previous 900 years (including the MWP-LIA transition). And that is a big question.
    The calibration is done over a period where climate and CO2 levels increased, but vegetation probably decreased (urbanisation, land use changes), while the previous centuries may show quite different local/regional vegetation changes…

    Despite that, if we take into account the relatively small climate-caused (?) variations in the stomata data (+/- 30 ppmv vs. +/- 6 ppmv in ice cores), the +/- 10 ppmv accuracy of stomata data, the 25-40 year smoothing of (Law Dome) ice core CO2 (due to bubble closure), the NH/SH smoothing out of atmospheric CO2 changes caused by vegetation, and the probability of larger influences of local/regional vegetation changes on CO2 levels due to temperature, then we may say that the variations seen in stomata data and ice cores are comparable.

    While rereading the last sentence, has anyone an idea what I am talking about?

    What I mean to say: even with all probable influences (more/less local/regional, more/less smoothed), the historical climate-induced CO2 variations via stomata (260-320 ppmv) and from ice cores (272-284 ppmv) are within the same range. The ice core data underestimate short-term variability, but are more accurate for long-term global levels, while the stomata data are a better indication of short-term variability, but may overestimate global variability and may have a local, time-dependent, variable bias…

  208. Vincent Gray
    Posted Feb 27, 2008 at 1:05 AM | Permalink

    A plague on all of your houses. Irregular climate data cannot be modelled mathematically, smoothed, correlated or trended without making outrageous, unjustifiable assumptions. snip

  209. Andrey Levin
    Posted Feb 27, 2008 at 1:24 AM | Permalink

    Re#207, Ferdinand:

    the historical climate induced CO2 variations via stomata (260-320 ppmv) and from ice cores (272-284 ppmv) are within the same range.

    No they are not. Stomata are 290 +/- 30 ppmv and ice cores are 278 +/- 6 ppmv.

  210. Posted Feb 27, 2008 at 1:40 AM | Permalink

    Andrey,

    Stomata data are accurate within +/- 10 ppmv, ice core data within +/- 1.2 ppmv (each one sigma); the difference in the averages is at the border of the one-sigma error…
    Given the unknown change in the historical bias of stomata data, because of unknown changes in vegetation during the MWP-LIA-current transitions, we may conclude that both are within the same range.

  211. Posted Feb 27, 2008 at 1:53 AM | Permalink

    Re#208,

    Vincent, that doesn’t mean that skeptics should make the same error. You can’t draw any conclusion about the medium-term (5-100 years) influence of temperature on CO2 levels (or the reverse) based on seasonal or year-by-year variations (except that the influence probably is small). For long-term influences, one needs to look at long-term trends and be very careful about the attribution of cause and effect.

    Much depends on the signal-to-noise ratio. For emissions and CO2 levels, the signal already emerges from the background noise (seasonal, year-by-year temperature changes) within 1-2 years. For d13C changes, one needs about 8 years, and for the influence of solar on climate one needs many solar cycles, hundreds of years…

  212. Allan MacRae
    Posted Mar 1, 2008 at 9:43 AM | Permalink

    I think there are perhaps four cycles in which CO2 lags T:

    1. A cycle of thousands of years, in which CO2 lags T by ~hundreds of years (Vostok ice cores, etc.)

    2. A cycle of ~70-90 years (Gleissberg), in which CO2 lags T by ~5-10 years (this is contentious – Beck’s direct-measurement CO2 data supports it, ice core data does not, and there is the question of how much, if at all, humanmade CO2 affects this cycle).

    3. The cycle I described in my paper of 3-5 years (El Nino/La Nina), in which CO2 lags T by ~9 months.

    4. The seasonal “sawtooth” CO2 cycle, which ranges from ~18 ppm in the North to ~1 ppm at the South Pole.

    It is clear that T precedes CO2 in cycles 1, 3 and 4. For Cycle 2 we have conflicting and perhaps inadequate data.

    Best regards, Allan

    P.S. My Figure 5b uses no running means and no detrending and the relationships are still obvious. In my opinion, Ferdinand’s only real issue is that the magnitude of the temperature-driven (Cycle 3) changes in CO2 is too small to explain the ~2 ppm/year average increase in global CO2 since ~1980. Perhaps this missing magnitude is in Cycle 2. Is this 2 ppm/year annual growth mostly humanmade, or mostly natural, or a combination of these factors? I have plotted CO2 emissions versus atmospheric CO2 and there is no clear short-term relationship. As for the alleged long-term correlation of CO2 with humanmade emissions, one could obtain similar correlations plotting CO2 versus human population, or the number of bicycles, or the number of checkers and chess pieces. The fact that annual humanmade CO2 tonnage is ~twice the annual buildup of atmospheric CO2 (the “missing sink”) is a serious problem for Ferdinand’s position – if AT LEAST half of humanmade emissions are absorbed by the natural system, this suggests that these emissions are relatively insignificant in the natural CO2 cycle.

    Steve: I’ve already asked you not to use Beck’s data in discussions here.

  213. Willis Eschenbach
    Posted Mar 1, 2008 at 2:59 PM | Permalink

    Allan, thanks for your comment. You discuss your figure 5b, posted above.

    However, your claim that “My Figure 5b uses no running means and no detrending and the relationships are still obvious” is not correct. You are comparing a one-year average change in CO2 to monthly temperature figures. In other words, your CO2 is a 12-month running mean.

    All the best,

    w.

  214. Posted Mar 1, 2008 at 4:17 PM | Permalink

    Re #212:

    Allan,

    There are even more (solar) cycles involved, but that doesn’t matter here. What matters is that there is another issue with cycle 3. Cycles 1, 2 and 4 are all CO2-temperature comparisons. Cycle 3 is a dCO2-temperature comparison. Quite a difference.

    Even in consecutive seasonal cycles, the extra (man-made) CO2 can be seen in the year-by-year increase at Mauna Loa.

    Thus seasonal CO2 cycles follow temperature changes by a few months, but compared to cycles 1-2 (and 3 for the dCO2/dT part), this is an anti-cycle. In cycles 1-2(3), CO2 is positively correlated with temperature; in cycle 4, negatively. The warm NH seasons take up CO2 into vegetation, which releases CO2 in the NH winter, while cycles 1-2(3) are more ocean dependent, less vegetation dependent. The above multi-year seasonal cycle trend simply shows that temperature changes are not at the base of the (bulk) CO2 increase, and it shows that oceans/vegetation are in near-equilibrium with temperature and have a limited CO2 source/sink capacity to respond to emissions, or there wouldn’t be a visible influence of temperature (and emissions) on CO2.

    A more realistic comparison of the real CO2 increase with temperature/emissions is done in #187, and a response to the “missing” sink (what missing sink? There is more source from the emissions than natural sink capacity!) is in #194…

  215. Sam Urbinto
    Posted Mar 1, 2008 at 5:49 PM | Permalink

    Thanks for the comments Ferdinand.

    I’ll just say that removing grassland to create biofuels adds much more CO2 (lost as a sink) than is saved by the alternative source of fuel. So the point being, even if AGHG are to blame for the anomaly trend rise, land use is an even greater factor than anyone pays attention to as a cause.

    This is all going to be very interesting in the next few months…

  216. Allan MacRae
    Posted Mar 1, 2008 at 10:04 PM | Permalink

    Re #213 – Willis, can you suggest a better way of handling this data (to calculate dCO2/dt)?

    Re #214 – Ferdinand, your graph in #214 only says (to me) that atmospheric CO2 is increasing, on average, about 2 ppm per year – but it could be due to any cause. Similarly, your mass balance and other arguments in #187 and 194 only say that CO2 is going up – again, there is no clear cause. I don’t think we are going to agree on this point until there is more or better data. Please post your correlation of short term (annual) atmospheric CO2 with CO2 emissions (or email me) – as stated previously, I found no such relationship. Finally, there is little lag between dCO2/dt (in Cycle 3) and ST or LT, but CO2 lags ST by ~9 months. Many of your other comments were quite agreeable and helpful, thank you.

    snip – policy

    Best regards to all, Allan

  217. Posted Mar 2, 2008 at 5:00 AM | Permalink

    Allan,

    Re #216,

    You need to be consistent: either you use CO2/dt for all temperature – CO2 comparisons, or you use dCO2/dt for all comparisons.

    Temperature (and precipitation) has a short-term influence on CO2 levels with some lag, and a long-term influence. The short-term ratio is about 3 ppmv/°C; the (very) long term ratio is about 8 ppmv/°C. That is all. Thus temperature (and precipitation) can’t be the cause of the huge (100 or more ppmv/°C) increase in CO2 of the past 50 years, as your own integration over the 27-year period (+2 ppmv) shows.

    Nor can the historical measurements that are supposed to show the 100 ppmv increase and drop in the 1935-1950 period be true. Especially the 100 ppmv drop. I don’t know of any physical process that can remove 210 GtC (about 25% of the atmospheric CO2 content) in only 7 years. The excess-CO2 removal half-life is about 38 years…

    The correlation between yearly CO2 emissions and the yearly increase is 0.552, R^2 = 0.305. For yearly temperature and the yearly increase, the correlation is 0.661 with R^2 = 0.4375. Neither is impressive, but for temperature/CO2 variations they are borderline significant and quite normal for natural processes.
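
    For reference, this kind of lagged correlation can be computed with a few lines of Python. The input files below are placeholders, not the actual series behind the numbers above, so the point is the machinery rather than the exact values:

        import numpy as np

        def lagged_corr(x, y, lag):
            # Correlation of x(t) with y(t+lag); positive lag means y follows x.
            if lag > 0:
                x, y = x[:-lag], y[lag:]
            elif lag < 0:
                x, y = x[-lag:], y[:lag]
            return np.corrcoef(x, y)[0, 1]

        emissions = np.loadtxt("emissions_gtc.txt")  # yearly emissions, GtC (placeholder file)
        increase = np.loadtxt("increase_gtc.txt")    # yearly atmospheric increase, GtC (placeholder)

        for lag in range(-3, 4):
            r = lagged_corr(emissions, increase, lag)
            print(f"lag {lag:+d} yr: r = {r:.3f}, r^2 = {r * r:.3f}")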

    I don’t know if you have any experience with multi-variable processes. In this case this is what happens: you have several natural variables (temperature, precipitation), which have their (lagged, but limited) influence on CO2 levels. In the (far) past these were the only variables (except continuous large volcanic eruptions and meteorite impacts). Today we have a known disturbance of the natural processes, the emissions. The year-by-year variability still is mainly caused by natural variations, but the average increase is (near) fully from the emissions, as these are about twice the measured increase in the atmosphere. The natural variability only influences the sink capacity, but has zero addition to the atmospheric increase. There is no other cause of the increase than the emissions…

  218. Allan MacRae
    Posted Mar 2, 2008 at 4:00 PM | Permalink

    Hi Ferdinand,

    Re #217,

    Thank you Ferdinand – your comments are most interesting.

    My comments are noted in (brackets):

    Temperature (and precipitation) has a short-term influence on CO2 levels with some lag, and a long-term influence. The short-term ratio is about 3 ppmv/°C; the (very) long term ratio is about 8 ppmv/°C. That is all. Thus temperature (and precipitation) can’t be the cause of the huge (100 or more ppmv/°C) increase in CO2 of the past 50 years, as your own integration over the 27-year period (+2 ppmv) shows.
    (As stated in our emails, I agree this is a problem, but I think the “jury is still out” on the final conclusion.)

    Nor can the historical measurements that are supposed to show the 100 ppmv increase and drop in the 1935-1950 period be true. Especially the 100 ppmv drop. I don’t know of any physical process that can remove 210 GtC (about 25% of the atmospheric CO2 content) in only 7 years. The excess-CO2 removal half-life is about 38 years…

    [snip – please don’t use Beck’s data or editorialize]

    The correlation between yearly CO2 emissions and the yearly increase is 0.552, R^2 = 0.305. For yearly temperature and the yearly increase, the correlation is 0.661 with R^2 = 0.4375. Neither is impressive, but for temperature/CO2 variations they are borderline significant and quite normal for natural processes.
    (But look at the yearly lead/lag – atm. CO2 leads CO2 emissions by ~3 years according to my data! Do you get different results? One suggestion – try to use T to predict the peaks and valleys of dCO2/dt – I have done this and it works; then try to use CO2 emissions to predict the peaks and valleys of CO2 – I have not tried this, because of the lead/lag problem, but my bet is this will utterly fail.)

    I don’t know if you have any experience with multi-variable processes. In this case this is what happens: you have several natural variables (temperature, precipitation), which have their (lagged, but limited) influence on CO2 levels. In the (far) past these were the only variables (except continuous large volcanic eruptions and meteorite impacts). Today we have a known disturbance of the natural processes, the emissions. The year-by-year variability still is mainly caused by natural variations, but the average increase is (near) fully from the emissions, as these are about twice the measured increase in the atmosphere. The natural variability only influences the sink capacity, but has zero addition to the atmospheric increase. There is no other cause of the increase than the emissions…
    (You may be right, but I’m not so sure over the short term. On a different topic, would you agree that atm. CO2 has, over geologic time, been much higher than today and CO2 has been sequestered in enormous quantities in limestones, dolomites, coal, hydrocarbons, etc., and that over geologic time all life on earth must cease when CO2 sequestration is complete?)

    Best regards, Allan

  219. Posted Mar 3, 2008 at 12:28 PM | Permalink

    Re #218,
    About Beck’s data vs. ice core data: I will respond via CS, as I have made a few new graphs to show the difference in (statistical) behavior of both.
    The emissions don’t show much variation, and there is (of course) no lag between emissions and the increase in the atmosphere. Thus it is no wonder that you don’t find a correlation. What you found is that nearly all the variability is from natural variations. That is +/- 25% of the yearly emissions (but less than 5% of the total increase), while the average yearly increase in the atmosphere is about 55% of the emissions.

    Thus you have two variables which influence the CO2 increase in the atmosphere: natural ones (where CO2 lags temperature) and the emissions, without a lag. I will try to give a calculated example, where the different behavior of natural variability and continuous emissions will be shown, close to what happens in reality.

    The fate of CO2 over geological times has seen quite large fluctuations, but even then, the change was spread over very long time frames, sometimes many millions of years. For the past million years, the speed of change was about 100 ppmv/5,000 years or 0.02 ppmv/yr for glacial-interglacial transitions, but that includes a lot of smoothing. In more recent pre-industrial times, stomata data indicate about 30 ppmv/100 years or 0.3 ppmv/yr (sub-decadal averages) at the end of the MWP. And at the start of the Holocene, we have the 8.2 kyr event (a sudden cooling in the NH) with 50 ppmv/500 years or 0.1 ppmv/yr, with about 100-year smoothing (see here).

    The same stomata data (and d13C data) don’t show anything special in the period 1935-1950, while Beck’s data show a change of 14 ppmv/yr, or about 50 times faster than the stomata data in the fastest natural change until now…

    Steve:
    Ferdinand, I know that you didn’t raise Beck, but I’ve asked Allan not to use this highly questionable data here.

  220. Allan MacRae
    Posted Mar 4, 2008 at 12:30 AM | Permalink

    Hi Ferdinand,

    I just looked at the latest Mauna Loa CO2 data at
    ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt

    Please note that the Feb-Jan difference was
    2001: +0.98 ppm CO2
    2002: +0.73
    2003: +0.70
    2004: +0.84
    2005: +0.70
    2006: +0.80
    2007: +0.92
    2008: +0.45 – the lowest since 2000, when it was +0.31.

    This sharp decline in dCO2/dt reflects the steep temperature drop in both ST and LT in January 2008 and earlier months. As I said, the lag between T and dCO2/dt is small.
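
    For anyone who wants to reproduce these differences, a small Python sketch follows. It assumes the monthly mean is the fourth whitespace-separated column of the NOAA file, which should be checked against the file header:

        co2 = {}
        with open("co2_mm_mlo.txt") as f:
            for line in f:
                parts = line.split()
                if not parts or parts[0].startswith("#"):
                    continue
                year, month, value = int(parts[0]), int(parts[1]), float(parts[3])
                co2[(year, month)] = value

        for year in range(2000, 2009):
            if (year, 1) in co2 and (year, 2) in co2:
                print(f"{year}: Feb - Jan = {co2[(year, 2)] - co2[(year, 1)]:+.2f} ppm")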

    Based on your logic, what change in atmospheric CO2 (and when) would be sufficient to demonstrate that your mass balance argument is false?

    Regards, Allan

  221. Posted Mar 4, 2008 at 3:38 AM | Permalink

    Allan,

    To the contrary, the mass balance argument is fortified by this year’s figure: the mass balance still is negative for natural additions, more so for low SSTs (higher CO2 sink), less for high SSTs (lower CO2 sink).

    Only if the increase in the atmosphere were larger than the emissions (which happened several times in the first ~100 years of the industrial revolution) would the natural flows be adding some CO2 mass in those years. In all years where the increase in the atmosphere is smaller than the emissions (even negative), the net addition of the natural flows over a year is zero, no matter how large the individual natural flows are/were, no matter where the (missing) sinks exactly are.

    The general formula for the change of CO2 in the atmosphere is:

    Catm(t) = Catm(t-1) + F(sources) + F(emissions) – F(sinks)
    or for the yearly difference:
    dCO2/dt = F(emissions) – (F(sinks)-F(sources))
    where
    F(emissions) were 2.58-7.7 GtC/yr (period 1960-2004)
    dCO2/dt was 1.05-5.75 GtC/yr (minimum in 1964, maximum in 1998)
    F(sinks)-F(sources) is the year-by-year difference between these two:
    0.25-4.58 GtC/yr natural sink quantity (minimum in 1973, maximum in 1992/3)

    Notes:
    1. The yearly sink quantity is the difference between calculated emissions and the measured atmospheric increase. Both are known with reasonable accuracy. It is not necessary to know any individual natural flow, nor where the removed CO2 is ultimately captured. Even if all natural flows (sources and sinks) nearly doubled, or some parts changed from sinks to sources and vice versa, that has no influence, as the ultimate sink quantity must remain what is calculated.
    2. In every year of the past 50 years, there is a net natural sink (with a few years near zero within the accuracy), thus no net addition from nature.
    3. The variability in sink capacity is governed by natural variability (temperature, precipitation,…). High-sink years are cold years (the 1992 Pinatubo eruption), low-sink years are warm years (the 1973 and 1998 El Niños).
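
    As an illustration of notes 1 and 2, here is a minimal numeric sketch of this bookkeeping in Python; the series below are made-up round numbers, not the real CDIAC/NOAA data:

        emissions = [2.58, 4.0, 5.5, 6.5, 7.7]   # F(emissions), GtC/yr (illustrative)
        increase  = [1.05, 2.0, 3.0, 5.75, 3.5]  # measured dCO2/dt, GtC/yr (illustrative)

        for e, d in zip(emissions, increase):
            net_natural_sink = e - d             # = F(sinks) - F(sources)
            print(f"emissions {e:4.2f} - increase {d:4.2f} = net natural sink {net_natural_sink:4.2f} GtC/yr")
        # A positive result means the natural flows removed mass that year,
        # whatever the size of the individual flows.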

    Last but not least, you are concentrating on year-by-year variations (dCO2/dt), which mainly reflect natural variability, while the emissions are adding CO2 continuously with minimal variability, which is better visible in multi-year trends (CO2/dt)… More on this in the next comment.

  222. Posted Mar 4, 2008 at 7:22 AM | Permalink

    Allan,

    Here follows a numeric example of what can be expected from a combination of a continuous addition (like the emissions) and a more or less cyclic influence (as temperature is). The units are chosen with the real observations in mind, but as we in this case know what caused the results, we may come to conclusions about what can and what can’t be used to know what happened in reality with CO2 levels in the atmosphere.

    We have a system where two independent variables are influencing the same dependent variable:
    One variable is a continuous, constant addition, where x is 1 unit per period in time and y(t) = y(t-1) + 0.55 x(t)
    The other variable is a cyclic function over 10 time periods, plus a small long-term addition (about 5 units/100 time periods), needed to find some correlation. The cyclic function is y(t) = 0.05*sin(t) and there is a lag of 2 time periods. The formula to calculate the influence on the dependent variable then is y(t) = 0.05*sin(t-2) + t/500

    The combined influence of the independent variables on the dependent one then is:

    y(t) = y(t-1) + 0.55 x(t) + 0.05*sin(t-2) + t/500

    With this formula, about 95% of the increase of y is caused by the continuous addition and 5% by the increment part of the sine term, while 100% of the period-to-period variability is caused by the sine function.
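
    A runnable Python version of this toy model, with one assumption made explicit: “a cyclic function over 10 time periods” is implemented as sin(2πt/10), since sin(t) with integer t has no 10-period cycle. The correlations therefore come out somewhat different from the values quoted below, but the qualitative behavior is the same:

        import numpy as np

        T = 50
        t = np.arange(1, T + 1)

        # Per-period contributions: constant addition + lagged cycle + small increment.
        per_step = 0.55 + 0.05 * np.sin(2 * np.pi * (t - 2) / 10) + t / 500
        y = np.cumsum(per_step)                 # the "sum" series, y(t)

        acc_incr = 0.55 * t                     # accumulated continuous additions
        cyc = 0.05 * np.sin(2 * np.pi * t / 10) + t / 500   # unlagged cyclic term

        print("acc. increments vs sum:", np.corrcoef(acc_incr, y)[0, 1])  # near 1
        print("cyclic term vs sum:    ", np.corrcoef(cyc, y)[0, 1])       # modest

        # Period-to-period changes: the 2-period lag reappears clearly.
        dsum = np.diff(y)                       # aligned with t = 2..50
        print("cyc vs dsum, lag 0:", np.corrcoef(cyc[1:], dsum)[0, 1])
        print("cyc vs dsum, lag 2:", np.corrcoef(cyc[1:-2], dsum[2:])[0, 1])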

    This gives, over 50 time periods (where “sum” is y in the formula, “sine” is the sine function plus the small increment part, and “acc. incr.” are the accumulated periodic increments), a graph in which the sine function is not visible in the 50-period trend and the accumulated increments dominate the sum of both independent variables.
    For the one-to-one comparisons of both variables with the sum of both, here first the sine function:

    This resembles the temperature-CO2 curve in #187, albeit far more regular, as a sine function is.

    The same problems as with the temperature-CO2 curve in #187 apply here: the swing of the sine function within a single period is over half the change of the full record, yet it has little influence on the end result. Thus one can’t say that the longer-range sine function was the cause of the increase in the sum.

    Anyway, the correlation (0.58; R^2 = 0.336) is not impressive, and as we know (because we made the formula ourselves), the increment part of the sine function is only 5% of the total increase. This is also clear from Fig. 7, if one compares the left-hand and right-hand scales. As in this case the units are equal, the influence of the sine-function increment is clearly small. In the case of the temperature influence on CO2 levels, one needs to compare the short-term influence (3 ppmv/°C) with the longer-term one (8 ppmv/°C) to estimate the influence of temperature on CO2 levels over 50-year periods.

    Then the accumulated increments vs. the sum:

    Well, as most of the increase in the sum is caused by the accumulated increments, it is no surprise that the correlation is high (0.998; R^2 = 0.995). That needs little comment.

    Now let’s go to the period-by-period graph:

    As you can see, the contribution of the periodic increment to the period-to-period variability simply is zero, as there is no variability at all in the periodic increments. The correlation with d-sum can’t be calculated (division by zero), but is essentially zero, as R^2 is.

    The correlation between the sine function (dsine/period) and the periodical variability (dsum/period) is low (0.156; R^2=0.024), but that improves to 0.756, R^2=0.572, if one takes into account a 2-period lag for dsine vs. dsum. The influence of the sine function on the variability of the periodical sum is huge, but that doesn’t say anything about the influence of the sine function on the total sum over 50 periods. That is given by the integral over 50 periods, which is about 5% of the total increase.

    What does that mean for the whole emissions-temperature-CO2 discussion?
    1. There is a lag and a reasonable correlation/causation between short-term temperature variability and short-term CO2 sink capacity.
    2. The short-term variability/lag doesn’t say anything about multi-year variability/lags
    3. The short-term variability doesn’t say anything about the influence of the emissions on longer-term CO2 levels.
    4. There is little influence of temperature on the multi-year atmospheric CO2 increase.
    5. The short-term lag of CO2 variability after temperature variability doesn’t say anything about the longer-term influence of CO2 on temperature.
    6. The high correlation between accumulated emissions and increase in the atmosphere is real and causative. Most of the multi-year atmospheric CO2 increase is from the emissions.

    Most important lesson: use proper time frames to derive conclusions about (probable) influences.

    The start of this thread, spurious correlations due to smoothing, is less applicable here: in the dCO2/dt comparisons, the influence of short-term temperature variations on the variability of the CO2 increase speed was studied, while the longer-term CO2/dt comparisons use relatively short yearly (accumulated) averages over a 50-year period. And the proper cause-effect relationship was taken into account.

  223. Allan MacRae
    Posted Mar 6, 2008 at 10:09 PM | Permalink

    Hi Ferdinand,

    Thank you for all your hard work.

    I still have a serious problem with your logic. I suggested to you earlier that your longer term correlation of atm. CO2 with CO2 emissions was equally valid for atm CO2 vs. human population, bicycles, chess pieces, etc. All are going up, and nothing is proved.

    Re your equation: F(emissions) – dCO2/dt = F(sinks) – F(sources)

    I produced a spreadsheet using real global data for 1980 to 2006. I also backed out the CO2(sinks) component due to LT temperature, using your factor of 3 ppm/degreeC.

    Even if the difference [ F(sinks) – F(sources) ] is a relatively small number, it is the difference of two very large numbers, F(sinks) and F(sources), and there is no reason to believe that F(emissions), another small number, plays a significant role in the equation.

    As noted by others, the anthropogenic emission is ~6.5 GtC per year, and is only about 0.02% of the total carbon flowing in the cycle.

    Based on the evidence, it may be possible that humanmade CO2 emissions are contributing significantly to atmospheric CO2 buildup, but it is also entirely possible that they are not at all significant.

    I think we are now at the point of repeating past arguments, rather than producing new ones.

    Best regards, Allan

  224. Allan MacRae
    Posted Mar 7, 2008 at 2:02 AM | Permalink

    To all,

    Is the ability to predict important to science? Is successful prediction one measure of the validity of a theory, or the lack thereof?

    There is an interesting drop in global temperatures ST (Hadcrut3) and LT (UAH) over the past year or so. Both ST and LT anomalies are now near-zero.

    The IPCC model projections utterly failed to predict any such cooling, based on their assumption that CO2 primarily drives temperature.

    Such cooling is not unusual, and occurs every few years. What is different this time is that ST has dropped as much as LT – about 0.6 C. All the warming since ~1980 has been temporarily eliminated.

    Based on the correlation of global ST, LT and dCO2/dt, it should be possible to (reasonably accurately) predict dCO2/dt and CO2 for the next few months. Most of the data (except Dec07 and Jan08) and data sources are at: http://icecap.us/images/uploads/CO2vsTMacRaeFig5b.xls

    Best regards, Allan

  225. Posted Mar 7, 2008 at 4:39 AM | Permalink

    Allan Re#223,

    I have tried to give a reasonable numeric example in #222, where cause and effect are exactly known. It closely resembles what happened in the atmosphere…

    About other comparisons: I am well aware that one needs to be careful about spurious correlations. In the case of the emissions and the increase in the atmosphere, we know that the emissions are a possible cause, as all emissions go straight into the atmosphere. That the increase in the atmosphere follows the emissions with such a fixed percentage is quite certainly a result of direct cause and effect. I don’t know of any natural cause which follows the emissions with such a constant percentage; I am very curious if you have an explanation for that… Temperature, anyway, has no such effect.

    Again, the mass balance is what is calculated with reasonable accuracy from the emissions and the increase in the atmosphere. That excludes any net natural addition of CO2 mass for the past 50 years, regardless of the magnitude and variation of the individual flows.

    To give a simple example: if you have a large circulating flow (as many of the natural flows/cycles are over one year) over a fountain, thanks to a few big pumps (from the bottom up and back) of 10,000 l/hr and you add 10 l/hr with a small hose, you can be sure that the fountain reservoir will overflow after a certain time, although the addition is only 1/1,000th of the circulating flow…
    Thus the relative magnitude of the emissions vs. the seasonal flows has not the slightest influence on the fact that the net result of all natural flows together over one year (and for nearly each year in the past 50 years) is negative. The net result of a perfect (seasonal) cycle is zero…
    It doesn’t matter that F(sources) is 10, 100 or 1000 GtC/yr. In all these cases we have the observed average:

    F(emissions) – dCO2/dt = F(sinks) – F(sources) = ~3 GtC/yr sink capacity.
    or for natural sinks and sources:
    10 – 7 = 3 GtC/yr
    100 – 97 = 3 GtC/yr
    1000 – 997 = 3 GtC/yr

    Thus it simply doesn’t matter how large the sinks and sources are; it only matters that the net result after a year is more sink than source, and thus that the increase in the atmosphere is smaller than the emissions (even if it is negative) and thus caused by the emissions. There simply is no net natural addition over a year (with a few exceptions in exceptionally warm years, which are near zero).

  226. Posted Mar 7, 2008 at 4:58 AM | Permalink

    Allan,

    Here is my bet:

    If the global temperature remains 0.6°C below that of the previous years, we will see a drop in dCO2/yr from about 2 ppmv/yr to a low of 0.02 ppmv/yr in the next months. After that, dCO2/yr will increase slowly again with steady-state temperatures, but increase fast (to over 2 ppmv/yr) if the temperature rises again.

    CO2/dt still will increase over time, but much slower in the next months, and will rise again to the average increase speed (at steady temperature) or much faster with higher temperatures.

  227. MarkR
    Posted Mar 7, 2008 at 7:06 AM | Permalink

    So does OCO cause Temp increase or not?

  228. kim
    Posted Mar 7, 2008 at 7:36 AM | Permalink

    Why wouldn’t the slope of rising CO2 have already flattened a little?
    ====================

  229. Posted Mar 7, 2008 at 11:01 AM | Permalink

    Re #227:

    MarkR, difficult to see: temperature has a small influence on CO2 levels (between 3 ppmv/°C for short-term changes and 8 ppmv/°C for very long-term changes), and CO2 has a small influence on temperature (about 1°C per 2xCO2). Thus one needs far larger CO2 and temperature changes to see a difference.

    Re#228:

    Kim, over the past 50 years, the total influence of temperature on CO2 is about +4 ppmv, quite small compared with the +60 ppmv measured in the same period (the rest is highly probably from the emissions). This year there may be some flattening, but next year may show the opposite…

  230. kim
    Posted Mar 7, 2008 at 11:13 AM | Permalink

    I see, and thank you.
    ============

  231. Allan MacRae
    Posted Mar 12, 2008 at 5:35 PM | Permalink

    Here is my guess of average atmospheric CO2 readings for the next 6-8 months. Note that Global CO2 data is now available to end December 2007, and Mauna Loa data is available to end February 2008. There is room for improvement – starting from raw data, this work took ~1 to 2 hours:

    Prediction of Atmospheric CO2 (ppm)
    Year Mo Global M.Loa
    2008 1 385.1 385.4
    2008 2 385.2 385.8
    2008 3 385.5 386.3
    2008 4 385.7 388.1
    2008 5 385.5 388.1
    2008 6 384.5 387.5
    2008 7 382.7 385.8
    2008 8 381.4 383.2

    Best regards, Allan

    P.S. Ferdinand #226 – can you be more clear please with your bet?

  232. Posted Mar 16, 2008 at 10:35 AM | Permalink

    Re #231

    Dear Allan,

    Herewith my detailed forecast on CO2 levels in the atmosphere (both global and Mauna Loa) for the monthly averages in 2008:

    Year Mo Glob MLO
    2008 1 384.2 385.4
    2008 2 384.8 385.8
    2008 3 385.4 386.2
    2008 4 385.8 388.1
    2008 5 385.7 388.3
    2008 6 384.6 387.6
    2008 7 382.9 385.9
    2008 8 381.8 383.6
    2008 9 382.2 382.5
    2008 10 383.7 383.1
    2008 11 385.4 384.5
    2008 12 386.6 386.2

    That is for the case that the temperatures remain lower than average, as seen in January this year. Not much difference from your prediction.

    If the temperatures go back to the previous years’ average e.g. from June on, we have these series:

    Year Mo Glob MLO
    2008 1 384.2 385.4
    2008 2 384.8 385.8
    2008 3 385.4 386.2
    2008 4 385.8 388.1
    2008 5 385.7 388.3
    2008 6 385.3 388.3
    2008 7 383.6 386.6
    2008 8 382.5 384.3
    2008 9 382.9 383.2
    2008 10 384.4 383.8
    2008 11 386.1 385.2
    2008 12 387.3 386.9

    Formula: 2008.month = 2007.month + 2.2 ppmv (from 8.6 Gt emissions in 2008) + 3 ppmv/°C * dT(2008.month – 2007.month).
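
    In code, the formula reads as below (Python); the 2007 monthly values and the temperature differences are placeholders to show the arithmetic, not the actual inputs:

        co2_2007 = [382.0, 382.6, 383.2, 385.9, 386.1, 385.4,
                    383.7, 381.4, 380.3, 380.9, 382.3, 384.0]  # ppmv, placeholders
        dT = [-0.4] * 12          # assumed 2008-minus-2007 monthly temperature, degC

        # 2008.month = 2007.month + 2.2 ppmv (emissions) + 3 ppmv/degC * dT
        co2_2008 = [c + 2.2 + 3.0 * d for c, d in zip(co2_2007, dT)]
        for month, value in enumerate(co2_2008, start=1):
            print(f"2008-{month:02d}: {value:.1f} ppmv")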

    Regards,

    Ferdinand

  233. Alan S. Blue
    Posted Mar 16, 2008 at 1:31 PM | Permalink

    What sort of error bars are on the measurements here? Will we be able to decide (in May-June) between these options with statistical certainty? Because the Dec-Jan drop seems like an excellent step change to be used in validating or invalidating temp-leads-co2.

  234. Posted Mar 16, 2008 at 3:35 PM | Permalink

    Re #233,

    Alan,

    Good question…

    You can have statistical certainty for emissions-lead-increase in about 2 years, as that is a continuous small addition (about 0.2 ppmv/month) and it needs some time to emerge beyond the temperature-induced noise. For temperature-leads-increase, a fast drop of 0.4°C, as we have seen in about half a year, will be seen near-immediately in the increase speed. The error margins anyway are larger than the difference between the two estimates. Much will depend on what temperature does in the next months.

    Thus in general: the response to temperature is fast (but limited), and the response to the emissions is fixed (about 55% of the emissions) but needs a longer time to be detected in the noise (changes in sink capacity) caused by temperature variability.

  235. Allan MacRae
    Posted Mar 16, 2008 at 10:24 PM | Permalink

    RE #234 Ferdinand
    and #233 Alan Blue

    Thank you very much Ferdinand for your post and your prediction.

    Agree, very good question re error bars, Alan – at post #148 above is my Figure 5b. Note that the LT vs. dCO2/dt correlation looks good up to ~2002 and then gets a bit messy after that time. After re-evaluating this data, I think one can forecast CO2 about 6 months forward using the LT anomaly – and not much more, unless you assume a future temperature profile as Ferdinand has done.

    If you plot historical Mauna Loa CO2 data, you will see considerably more variation (up to ~+/- 1 ppm) than in the Global CO2 data (up to ~+/- 0.5 ppm), which is to be expected.

    The maximum difference between Ferdinand’s first case (temperatures remain lower than average) and my own prediction is 0.9 ppm, and most of the differences are 0.2 ppm and less.

    For the first six months, my guess is that we should be able to frequently predict to within ~0.5 ppm for Global CO2 data and ~1 ppm for Mauna Loa. Let’s see if Ferdinand or I can improve upon that, based on our predictions for the next six months. After six months, it’s anyone’s guess, imo.

    Best regards, Allan

  236. Allan MacRae
    Posted Apr 29, 2008 at 12:44 PM | Permalink

    Further on uncertainties in CO2 and dCO2/dt measurement (#233 and #235):

    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    Global CO2

    The table shows annual mean carbon dioxide growth rates based on globally averaged marine surface data. The annual mean rate of growth of CO2 in a given year is the difference in concentration between the end of December and the start of January of that year. It represents the sum of all CO2 added to, and removed from, the atmosphere during the year by human activities and by natural processes.

    The annual mean growth during the previous year is determined by taking the average of the most recent December and January months, corrected for the average seasonal cycle, as the trend value for January 1, and then subtracting the same December-January average measured one year earlier. Our first estimate for the annual growth rate of the previous year is produced in January of the following year, using data through November of the previous year. That estimate will then be updated in February using data through December, and again in March using data through January. We finalize our estimate for the growth rate of the previous year in the fall of the following year because a few of the air samples on which the global estimate is based are received late in the following year. The values in this table are subject to change depending on quality control checks of the measured data, but any revisions are expected to be small.

    The estimates of the global mean CO2 concentration, and thus the annual growth rate, are updated every month as new data come in. The statistics are as follows. If we estimate during a given month (“m”) the global average CO2 during the previous month (“m-1”), the result differs from the estimate made (up to almost a year later) when all the data are in, with a standard deviation of 0.57 ppm. For month m-2, the standard deviation is 0.17 ppm, and for month m-3 it is 0.10 ppm. We decided to provide the global mean estimates with a lag of two months. Thus, a December average is first calculated during the following February.

    The estimated uncertainty in the global annual mean growth rate is 0.07 ppm/yr. This estimate is derived using a bootstrap technique that computes 100 global annual growth rates, each time using a slightly different set of measurement records from the NOAA ESRL cooperative air sampling network (Conway, 1994). The reported uncertainty is the mean of the estimated uncertainties for each annual average growth rate using this technique.

    Mauna Loa CO2

    The table shows annual mean carbon dioxide growth rates for Mauna Loa. The annual mean rate of growth of CO2 in a given year is the difference in concentration between the end of December and the start of January of that year. If used as an average for the globe, it would represent the sum of all CO2 added to, and removed from, the atmosphere during the year by human activities and by natural processes. There is a small amount of month-to-month variability in the CO2 concentration that may be caused by anomalies of the winds or weather systems arriving at Mauna Loa. This variability would not be representative of the underlying trend for the northern hemisphere which Mauna Loa is intended to represent. Therefore, we finalize our estimate for the annual mean growth rate of the previous year in March, by using the average of the most recent November-February months, corrected for the average seasonal cycle, as the trend value for January 1. Our estimate for the annual mean growth rate (based on the Mauna Loa data) is obtained by subtracting the same four-month average centered on the previous January 1. Preliminary values for the previous year are calculated in January and in February.

    The estimated uncertainty in the Mauna Loa annual mean growth rate is 0.11 ppm/yr. This estimate is based on the standard deviation of the differences between monthly mean values measured independently by the Scripps Institution of Oceanography and by NOAA/ESRL.
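
    A minimal sketch of the quoted growth-rate definition in Python, assuming `trend` is a dict of seasonally corrected monthly trend values keyed by (year, month) – this data layout is an assumption for illustration, not NOAA’s actual code:

        def annual_growth_rate(trend, year):
            # Growth during `year`: the Dec/Jan average centred on the
            # following January 1, minus the same average one year earlier.
            jan1_next = (trend[(year, 12)] + trend[(year + 1, 1)]) / 2.0
            jan1_prev = (trend[(year - 1, 12)] + trend[(year, 1)]) / 2.0
            return jan1_next - jan1_prev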

    *************************************************

  237. Allan MacRae
    Posted May 9, 2008 at 5:57 AM | Permalink

    Here is an earlier look at this subject, using different statistical methodology. It reaches similar conclusions to my own paper.

    Coherence established between atmospheric carbon dioxide and global temperature

    ref. Kuo C, Lindberg C & Thomson DJ, Nature 343, 709 – 714 (22 February 1990)

    Summary

    The hypothesis that the increase in atmospheric carbon dioxide is related to observable changes in the climate is tested using modern methods of time-series analysis. The results confirm that average global temperature is increasing, and that temperature and atmospheric carbon dioxide are significantly correlated over the past thirty years. Changes in carbon dioxide content lag those in temperature by five months.

    Regards, Allan

  238. Allan MacRae
    Posted May 12, 2008 at 5:57 PM | Permalink

    Interannual extremes in the rate of rise of atmospheric carbon dioxide since 1980

    C. D. Keeling*, T. P. Whorf*, M. Wahlen* & J. van der Plicht†
    * Scripps Institution of Oceanography, La Jolla, California 92093-0220, USA
    † Center for Isotopic Research, University of Groningen, 9747 AG Groningen, The Netherlands
    Nature, Vol. 375, 22 June 1995

    OBSERVATIONS of atmospheric CO2 concentrations at Mauna Loa, Hawaii, and at the South Pole over the past four decades show an approximate proportionality between the rising atmospheric concentrations and industrial CO2 emissions. This proportionality, which is most apparent during the first 20 years of the records, was disturbed in the 1980s by a disproportionately high rate of rise of atmospheric CO2, followed after 1988 by a pronounced slowing down of the growth rate. To probe the causes of these changes, we examine here the changes expected from the variations in the rates of industrial CO2 emissions over this time, and also from influences of climate such as El Niño events. We use the 13C/12C ratio of atmospheric CO2 to distinguish the effects of interannual variations in biospheric and oceanic sources and sinks of carbon. We propose that the recent disproportionate rise and fall in CO2 growth rate were caused mainly by interannual variations in global air temperature (which altered both the terrestrial biospheric and the oceanic carbon sinks), and possibly also by precipitation. We suggest that the anomalous climate-induced rise in CO2 was partially masked by a slowing down in the growth rate of fossil-fuel combustion, and that the latter then exaggerated the subsequent climate-induced fall.

    An unexpected slowing in the rate of rise of atmospheric CO2 appeared recently in measurements of CO2 made at Mauna Loa Observatory, Hawaii and the South Pole…

    … In summary, the slowing down of the rate of rise of atmospheric CO2 from 1989 to 1993, seen in our data and confirmed by other measurements [6,15], is partially explained (about 30% according to Fig. 1e) by the reduction in the growth rate of industrial CO2 emissions that occurred after 1979. We further propose that warming of surface water in advance of this slowdown caused an anomalous rise in atmospheric CO2, accentuating the subsequent slowdown, while the terrestrial biosphere, perhaps by sequestering carbon in a delayed response to the same warming, caused most of the slowdown itself…

    … We point out, in closing, that the unprecedented steep decline in the atmospheric CO2 anomaly ended late in 1993 (see Fig. 1e). Neither the onset nor the termination was predictable. Environmental factors appear to have imposed larger changes on the rate of rise of atmospheric CO2 than did changes in fossil fuel combustion rates, suggesting uncertainty in projecting future increases in atmospheric CO2 solely on the basis of anticipated rates of industrial activity.

  239. Allan MacRae
    Posted May 24, 2008 at 1:51 AM | Permalink

    The evidence to date suggests that increased atmospheric CO2 plays NO significant role in causing global warming.

    The best data shows no significant warming since ~1940. The lack of significant warming is evident in UAH Lower Troposphere temperature data from ~1980 to end April 2008, and Hadcrut3 Surface Temperature data from ~1940 to ~1980.

    LT data: http://www.atmos.uah.edu/data/msu/t2lt/tltglhmam_5.2
    ST data: http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    Furthermore, it is clear that CO2 lags temperature at all measured time scales, from ice core data spanning thousands of years to sub-decadal trends [the latter as stated in my January 2008 paper, and previously by Kuo (1990) and Keeling (1995)].

    My paper is located at: http://icecap.us/images/uploads/CO2vsTMacRae.pdf

    In late November 2007 Pieter Tans described the close relationship between dCO2/dt and temperature, about one month before I made a similar finding. This is a further step forward in our understanding.
    Tans’ paper: http://esrl.noaa.gov/gmd/co2conference/agenda.html

    Unlike Kuo, Keeling and me, Tans apparently did not mention the fact that CO2, the integral of dCO2/dt, significantly lagged temperature.

    Finally, humanmade CO2 emissions have increased almost 900% since 1940.
    CO2 data from CDIAC: http://cdiac.ornl.gov/ftp/ndp030/global.1751_2004.ems

    This data consistently suggests that the sensitivity of global temperature to increased atmospheric CO2 is near-zero, and thus there is no humanmade catastrophic global warming crisis.

    Regards, Allan

  240. Allan MacRae
    Posted May 26, 2008 at 11:30 AM | Permalink

    I have been asked elsewhere to justify statements in my previous post, where I said:

    The best data shows no significant warming since ~1940. The lack of significant warming is evident in UAH Lower Troposphere temperature data from ~1980 to end April 2008, and Hadcrut3 Surface Temperature data from ~1940 to ~1980.

    LT data: http://www.atmos.uah.edu/data/msu/t2lt/tltglhmam_5.2

    ST data: http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    Further explanation:

    There has been very significant Lower Troposphere (LT) cooling in the past 12 months. This cooling has also been observed in the Surface Temperature (ST) record, but that data is much less reliable, as discussed further below.

    The average LT global temperature anomaly for the four months January-April 2008 (inclusive) is +0.02 degrees C.
    The average LT global temperature anomaly for year 1980 is +0.09 degrees C.

    The average ST global temperature anomaly for year 1980 is +0.08 degrees C.
    The average ST global temperature anomaly for year 1940 is +0.02 degrees C.

    If you prefer a bigger picture, you can plot the data for the entire period from the above data sources.

    By no significant warming, I mean no net average global warming between 1940 and 2008, as measured by our best instruments. There has been some cooling and warming and very recent cooling again, but not much net change since 1940.

    Some observers might want to (erroneously, imo) use the ST data exclusively, to prove that warming has occurred. The 1980-to-present ST data exhibits a strong and misleading warming bias, as demonstrated by Michaels and McKitrick (2007) and others. Although the monthly variations in the ST and LT data match very well, the two plots diverge, with ST rising above LT. I sincerely doubt that this divergence is a long-term reality, since it would suggest that the surface has warmed significantly more than the Lower Troposphere over the past few decades.

    For a comparison of ST and LT data, see Figure 1 of my January 31, 2008 paper.

    We’ll see if the current global cooling continues…

    Best regards, Allan

  241. Allan MacRae
    Posted Jun 7, 2008 at 10:45 PM | Permalink

    Update: The UAH LT global average temperature anomaly cooled another ~0.2C from April to May 2008.

    Year  Month  LT Global Anomaly (°C)
    2008    1    -0.046
    2008    2    +0.020
    2008    3    +0.089
    2008    4    +0.015
    2008    5    -0.180

  242. Posted Aug 24, 2009 at 7:35 PM | Permalink

    When one time series (Surface Temperature) is compared with the rate of change of another (CO2 concentration), any shared cycles will be displaced by a quarter of a cycle by the “rate of change” operation. For example, the rate of change of a sine wave is a cosine wave, which is displaced by a quarter of a cycle. There is a clear ~3-year cycle present in the Surface Temperature data, so the quarter-cycle displacement amounts to 9 months. This is a mathematical fact.
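    A minimal numerical sketch of this quarter-cycle effect (my own illustration, not part of the comment above): generate a pure 36-month sine wave as a stand-in “temperature” series, take its month-to-month rate of change, and find the lag of peak cross-correlation. Under these assumptions the peak falls at 9 months, one quarter of the cycle.

    import numpy as np

    # Pure ~3-year (36-month) cycle sampled monthly, as a stand-in series.
    period = 36
    t = np.arange(600)                        # 50 years of monthly samples
    temp = np.sin(2 * np.pi * t / period)     # the "temperature" cycle
    dtemp = np.gradient(temp)                 # centered month-to-month rate of change

    # Correlate temp at time n+lag with dtemp at time n, for lags of 0..18 months.
    corrs = [np.corrcoef(temp[lag:], dtemp[:len(t) - lag])[0, 1]
             for lag in range(19)]
    print("peak correlation at a lag of", int(np.argmax(corrs)), "months")  # 9 = period/4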

4 Trackbacks

  1. […] and After The Smoking Gun At Darwin Zero When Results Go Bad … Willis: Reply to the Economist Data Smoothing and Spurious Correlation The people -vs- the CRU: Freedom of information, my okole… Floating Islands Fudged Fevers in the […]

  2. […] Data Smoothing and Spurious Correlation :: Examples of how the smoothing of data can create false correlations between datasets. Another Look at Climate Sensitivity :: A calculation of “climate sensitivity” from basic principles. GISScapades :: Examples of why good correlation between datasets does not mean they have similar trends. Some of the Missing Energy :: Shows how energy can both enter and leave the earth’s system without changing the temperature. Where Did I Put That Energy? :: My first attempt to understand the equations used by the AGW scientists. The Cold Equations :: The improper mathematics used by AGW scientists, and exactly where it is wrong. Not Whether, but How to Do The Math :: The new Berkeley Earth Surface Temperature (BEST) project. […]

  3. […] correlations. There’s an old post of mine on spurious correlation and Gaussian smoothing here for those interested in an […]

  4. […] not. Using smoothed datasets can even generate totally spurious correlations. I give some examples here … and lest you think that I made up the idea that smoothing can lead to totally spurious […]