On Unthreaded #32 @ 150, posted March 20, 2008, I noted that the preamble to some FACE work (for oceans) had ideological preconceptions, and I gave this excerpt:

Free Air CO2 Enrichment (FACE) experiments. Both SOLAS and the IMBER project have proposed FACE-like experiments for the ocean. The benefit of such experiments is that they are more likely to show the actual long-term effects that will occur in the future. The major anticipated drawback is that it might be impossible to use for pelagic communities without enclosing them in some way or somehow using a Lagrangian approach. There is a need to start with a feasibility study because the amount of CO2 or acid required for a full-scale pelagic FACE experiment may be very high. The other drawback is the public perception problem. This drawback might be approached by pointing out that the effects of elevated CO2 under “business as usual” scenarios may be so severe that understanding them might cause policymakers to think more carefully about emission controls or other mitigation methods.

Now on to trees, and forest plots with high CO2 added.

http://www.anl.gov/Media_Center/News/2005/news051220.html

ARGONNE, Ill. (Dec. 20, 2005) — Researchers from the U.S. Department of Energy’s Argonne National Laboratory – with collaborators from Oak Ridge National Laboratory, Kansas State University and Texas A&M University– have shown that soils in temperate ecosystems might play a larger role in helping to offset rising atmospheric carbon dioxide (CO2 ) concentrations than earlier studies had suggested. Results of the new study are published in the current issue of Global Change Biology.

Higher CO2 concentrations often stimulate plant growth. A subsequent increase in the amount of decaying plant material might then lead to an accumulation of carbon in soil. Yet nearly all field experiments to date have failed to demonstrate changes in soil carbon against the large and variable background of existing soil organic matter.

In this new study, funded by DOE’s Office of Science, scientists overcame that issue using a statistical technique called meta-analysis. This analysis of earlier published experiments showed that elevated CO2 concentrations – ranging from double pre-industrial levels to double current levels – increased carbon in soil surface layers by an average of 5.6 percent across diverse temperate ecosystems. If a response of this magnitude occurred globally for all temperate systems in a CO2 -enriched world, the authors calculated that increased soil carbon storage might remove 8 to 13 billion metric tons of carbon from the atmosphere over a period of about 10 years.

A minor inference from this press release is that perhaps reduction of atmospheric CO2 can be achieved by adding more CO2 to the atmosphere.

Note, however, that no promises are given that the new carbon will stay in the soil forever. Other agricultural studies suggest the storage will be temporary; otherwise soils would long since have turned to coal or the like.

The reason I raise FACE here is to show you CA maths people the power of meta-analysis. The sought effect was not found until meta-analysis was done. Why have you not informed and educated the rest of us before about the power of this method? (He asks sarcastically.)
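For illustration only, here is a minimal sketch of the fixed-effect flavour of meta-analysis: individually noisy experiments are pooled by inverse-variance weighting, so an effect invisible in any one study can emerge from the ensemble. All numbers below are invented; none come from the soil-carbon study.

```python
import numpy as np

# Hypothetical effect sizes (% change in soil carbon) and their
# variances from five individually inconclusive experiments.
effects = np.array([4.0, 7.5, 2.0, 9.0, 5.5])
variances = np.array([9.0, 16.0, 4.0, 25.0, 9.0])

# Fixed-effect pooling: weight each study by the inverse of its variance.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.2f}, 95% CI half-width = {1.96 * pooled_se:.2f}")
```

Each invented study has a standard error of 2 to 5, so none is individually significant at the 95% level; the pooled standard error is about 1.3, so the pooled effect is.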

Finally, another CA contributor has noted an absence of reported temperature measurements, so that tree ring analysis in the future, from these trees, might be difficult to interpret.

Is FACE worth a separate thread?

]]>As I understand it, this step removes the trend from the data. How then can you use the detrended data to say anything about the trend in the data, i.e., that it is warmer in later years than in earlier years? Don’t the results we are interested in depend critically on the curve used to detrend the data?

And even if some information manages to survive the detrending, don’t the confidence intervals explode because the detrending process itself introduces noise into the results?

Given that the Team can, with a straight face, use the confidence intervals from the calibration period in the backcasted period, I would be astonished if the Team has even thought of this, but has anyone thought of it?

]]>However, it seems to me that the standard method, with or without a power transformation, throws away a lot of information. I would respectfully submit that my method preserves that information.

I had looked for a method that would preserve the pattern, minimize single-year jumps, and have a relatively normal distribution. The derivative, or “first difference”, was the obvious choice. However, the normal (discrete) first difference definition of (Y(t) – Y(t-1))/(X(t) – X(t-1)) doesn’t work because a ring width change of 1 mm on a 2 mm ring is very different from a 1 mm change on a 5 mm ring. To equalize the effects of wide and narrow rings, it is better to use a percentage change. (Using the median of percentage change, curiously, will also remove most of the “early fast growth” bias which the dendros remove with ARSTAN.)

I initially used a straight percentage of increase, Y(t) / Y(t-1) – 1. This does not give a normal distribution. A better form is logarithmic, ln( Y(t) / Y(t-1) ). This can be restated as ln(Y(t)) – ln(Y(t-1)), and is not too far from normality. (The normality only matters for the error calculation.)

Then I took the median of the individual tree changes for each year, using the median to minimize the “one-year jump” error. Finally, I cumulated the yearly medians and inverted the logarithmic transformation (by exponentiating) to yield the reconstructed dataset.
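The steps above can be sketched in a few lines (the function name and the toy data are mine, not the author's): per-tree log first differences, the yearly median across trees, then cumulation and exponentiation back to a ring-width index.

```python
import numpy as np

def median_log_diff_chronology(ring_widths):
    """ring_widths: 2-D array, rows = trees, columns = years
    (NaN where a tree has no ring). Returns a relative index."""
    log_w = np.log(ring_widths)
    diffs = np.diff(log_w, axis=1)        # ln(Y(t)) - ln(Y(t-1)) per tree
    yearly = np.nanmedian(diffs, axis=0)  # median change across trees, per year
    # Cumulate the yearly medians, then invert the log transformation.
    index = np.exp(np.concatenate(([0.0], np.cumsum(yearly))))
    return index                          # starts at 1.0 by construction

# Three hypothetical trees, five years:
widths = np.array([[2.0, 2.2, 2.1, 2.5, 2.4],
                   [1.0, 1.1, 1.0, 1.3, 1.2],
                   [3.0, 3.2, 3.1, 3.8, 3.6]])
print(median_log_diff_chronology(widths))
```

The result is in relative ring-width units, so it can be rescaled to any chosen base year without refitting anything.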

To understand why this transformation is better than the ARSTAN==>average transformation, we can consider a much simpler question. What is the most accurate way to estimate a given single year’s change in the record?

One way is to average the trees in year t and in year t-1, and subtract one average from the other. The problem is that this doesn’t deal well with single-year jumps. If a couple of the trees took big single-year jumps last year, they will skew the record; a year of declining ring width can be made to look like a year of increasing ring width. And unfortunately, the record stays skewed for as long as those trees do not return to their former positions. Taking the median of the first differences (as percentages) avoids or greatly reduces those problems.
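A toy illustration (my numbers, not from any real core) of the point above: one tree's single-year jump drags the mean first difference positive, while the median still reflects the declining majority.

```python
import numpy as np

# Year-over-year log changes for five trees in one year: four trees
# shrank slightly, one took a big single-year jump.
changes = np.array([-0.05, -0.04, -0.06, -0.05, 0.80])

print(np.mean(changes))    # pulled positive by the one jump
print(np.median(changes))  # still negative, matching the majority
```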

In addition, I’m nervous about ARSTAN. I get nervous in general when you fit a line to a bunch of natural data and say “ceteris paribus, it should be like this” …

`Ringwidth = A + B exp(-C*age)`
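As a sketch only (invented data and parameters; this is not the ARSTAN code), the curve above can be fit by grid-searching the nonlinear parameter C and solving ordinary least squares for A and B at each candidate:

```python
import numpy as np

def growth_curve(age, A, B, C):
    # Ringwidth = A + B * exp(-C * age)
    return A + B * np.exp(-C * age)

# Simulate 100 years of ring widths from known parameters plus noise.
age = np.arange(1, 101, dtype=float)
rng = np.random.default_rng(0)
widths = growth_curve(age, 0.5, 2.0, 0.05) + rng.normal(0, 0.05, age.size)

# For each trial C, A and B fall out of linear least squares.
best = None
for C in np.linspace(0.001, 0.2, 400):
    X = np.column_stack([np.ones_like(age), np.exp(-C * age)])
    coef, *_ = np.linalg.lstsq(X, widths, rcond=None)
    sse = np.sum((widths - X @ coef) ** 2)
    if best is None or sse < best[0]:
        best = (sse, coef[0], coef[1], C)

_, A_hat, B_hat, C_hat = best
print(A_hat, B_hat, C_hat)  # should land near (0.5, 2.0, 0.05)
```

Note that `age` here is assumed to be measured from the pith, which is exactly the assumption questioned below: a core that misses the pith gives the wrong ages, and the fitted curve inherits that error.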

For starters, it assumes that you know the tree’s age. Given that in this species, the location of the heart is often anyone’s guess, does a core reveal the age of the tree? I would think in many cases, no.

Steve is pointing in the right direction with detrending on the basis of the whole tree or the stand, rather than the individual core. But this still does not deal with the one-year jump problem … and it still requires that we know the age of the tree. The core with the green crosses illustrates the problem perfectly. If we didn’t have the longer core, we would ARSTAN detrend that whole tree totally incorrectly.

The result of this incorrect detrending is that it removes good information from the dataset, and replaces it with wrong information. Does this make a difference? I don’t know. I also don’t like the idea that some trees get detrended with a straight line with a negative slope … and when you are looking for a trend in data, that doesn’t seem like a good plan.

This is a particularly insidious error, because (as in the green cross data in Fig. 1) the un-detrended change may represent valid data. The insidious part is that, under ARSTAN, the fit will sometimes have an erroneous *negative* slope, but never an erroneous *positive* slope. This has the potential to introduce a positive bias in the results, because some valid negative trends will have been removed.

As I said, I don’t know whether this is a problem, or how to quantify it. I point it out as a potential hazard of ARSTAN detrending. I suspect that some broader average, based on Steve M’s “grassplots” or the like, applied only to the trees where it is clear that there is a real need based on some statistical evidence, would do a better job.

Finally, I like my method better than the dendro method because it is conceptually much simpler. I am calculating the integral of the median of the first derivative. This avoids fitted curves, and allows the inversion of the final result back into the units we started with (ring width).

Anyhow, gotta run, work calls. I’ll look further at these questions and report back.

w.

]]>Steve, a year ago I expressed reservations about using mid-level correlation coefficients, preferring r^2 above 0.9. I am pleased our thoughts are converging and that we both agree that a correlation coefficient of zero has significant interpretative confidence.

Re # 32 Tony Edwards

Graphs are OK, but have a closer look at Steve’s graph “Grabill & Updated…”. The two curves do look well correlated, but when you do a count, the eye is mainly seeing about 25 peaks marching in step over this 400-year span. It is not uncommon in many fields of science that the devil is in the detail, requiring an excellent statistician (which I am not) to extract the most from the data. But you’d know this. It’s part of the reason why the hockey stick had so much public impact.

Steve – Re “detrending” exponential correction. I presume this is used because the ring width is a greater fraction of the diameter when the tree is young, then evens out somewhat in midlife. Towards death, however, the ring width should thin (relatively) as the tree’s function and load decrease. I have not specifically searched recent papers for a more sophisticated curve, but I did work through the maths of this with foresters 20 years ago. We also did live-weight-versus-time curves for plantation trees to predict the best harvest time, which is possibly related to ring-growth measurements.

Looking forward to the final conclusions from your stay in Calif. Fascinating numbers.

]]>If the trees reliably go stripbark at an advanced age, then the age dummies would pick that up (on average) and you would have a novel way of partialling out that particular confounding influence. The big panel data approach would identify the strip-bark growth pulse with age not year. Maybe I’ll fire up Stata and poke around in the data set when I don’t have anything better to do.

]]>For this particular data set, where the cores are often long, the detrending doesn’t have as much impact as one would think. The juvenile portion wears off after about 50 years and is flat thereafter. My suspicion is that there is some bias introduced with these series, since they usually don’t hit the pith and the juvenile bias could simply be a decline from a high quasi-cycle. A ring width average looks very similar.

]]>Hey, I look at it and think that it looks like a massive panel data set. With enough samples you can estimate year dummies and age dummies across the entire stand. No need to impose any structure on the data or growth patterns of trees. You could parameterise it if you felt the need – but that wouldn’t be necessary with enough data. You might have some identification problems for really ancient dates but that may not be a significant problem depending on the objective. Take the year dummies and you have your ‘climate’ signal. Although you will still have the problem of interpreting that climate signal, your normalisation should be better than what you would get with a tightly parameterised and restrictive functional form as is used in the standard detrending procedure.
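A minimal sketch of the two-way dummy idea, with simulated data and names of my own choosing (not anyone's actual code): regress ring width on year dummies plus age dummies across the whole stand, and read the year dummies off as the 'climate' signal. The varying germination dates are what separately identify age and year effects.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trees, n_years = 30, 40
year_effect = np.cumsum(rng.normal(0, 0.05, n_years))  # the 'climate' signal
age_effect = 0.5 * np.exp(-0.1 * np.arange(n_years))   # juvenile growth pulse

rows, y = [], []
for tree in range(n_trees):
    start = rng.integers(0, 10)          # trees germinate in different years
    for age in range(n_years - start):
        year = start + age
        y.append(year_effect[year] + age_effect[age] + rng.normal(0, 0.02))
        # One dummy per group is dropped to avoid collinearity with the
        # intercept, so year 0 and age 0 are the baselines.
        year_d = np.zeros(n_years - 1)
        age_d = np.zeros(n_years - 1)
        if year > 0:
            year_d[year - 1] = 1.0
        if age > 0:
            age_d[age - 1] = 1.0
        rows.append(np.concatenate(([1.0], year_d, age_d)))

X, y = np.array(rows), np.array(y)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
year_dummies = beta[1:n_years]  # estimated climate signal, relative to year 0
```

With enough overlapping cores the estimated year dummies track the simulated climate signal closely, with no growth-curve shape imposed anywhere.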

The techniques are the same as are used to estimate interesting effects in, say, the PSID (Panel Study of Income Dynamics), and the statistical properties are well established (like, for example, separating cohort effects from age effects).

]]>