## Famiglietti Strikes Again

There is another terrific article by Bürger and Cubasch posted here. I’ve only looked at it for a few minutes so far and it will take time to fully digest, but one can tell right away that it is a very interesting and stimulating article. Gerd Bürger notified me of it, and I therefore bring it to your attention.

It was rejected by GRL.

I don’t know what reasons Famiglietti gave for the rejection. I was going to make some nasty comment about Hockey Teams and water boys, but thought better of it. I suspect that the reasons will be along the lines that they are bored with the matter. But in terms of really understanding the statistical methods, the study of MBH methods has only just begun. It’s taken a long time even to figure out what MBH did. That was only the first step, and it shouldn’t have taken any time at all.

Now it’s time to figure out the statistical properties of their method, which they should have explained on day 1. It doesn’t matter that they’ve moved on to some even more obscure method. Insight from understanding the statistical properties of something where the methods are now more or less understood will be a big leg up for the task of figuring out the next method.


## 39 Comments

The reason for the rejection should be forthcoming from the editor. Hopefully, it’s more than “we are bored with this subject”.

The paper is poorly written, Steve:

-background info is tediously presented without really clarifying things.

-Confusion over whther this paper is abnout MBH, RegEM, or reconstrucitons in general.

-Poor thought heirarchy and seperation of topics.

You all need to learn to write clear papers. If you mispronounce the word, who cares, as long as you say it loud and clear!

Just wanted to add a perspective that science nerds will probably hate, but it is on topic, since it seems there is another attempt to start the ‘stop picking on poor little Mann’ rejoinder. That is the perspective of the Lit. Crit. people, who gladly write whole books on works more obscure than MBH98/99. One of the works that impressed me was the devastating critique of Rousseau by Derrida in Of Grammatology. Rousseau was the French guy who contended that man is good by nature, a “noble savage”, but is corrupted by society. His political ideas influenced the development of socialist theory, and other screw-ups like the French Revolution.

Anyway, not wanting to start a political thread. The point is that the superficial way references are used to bolster the context of ‘general agreement’ is one of the worst, sophomoric aspects of science today. For all of MBH98’s faults, it was at least influential, and so detail and attention to it should be seen as respect for that influence. I would be happy to see a whole book devoted to it without a single other reference if it took that to get to the bottom of it.

Look, it’s not Shakespeare, but it’s not nearly as lugubrious as anything by Ammann, which would say the same thing in 85 pages. If GRL didn’t object to Ammann on style points, then that wouldn’t be the problem here. I’m pleased because they are the first people to follow up on spurious RE.

The link is down for me; the whole Copernicus site appears to be down. Probably due to its sudden popularity.

thank you

The link is back up.

thank you

Steve:

I’m not asking for Shakespeare, I’m asking for a clearly written business plan. Let’s identify the issues, marshal the analyses, discuss the results. BC05 was much better, as was the H comment.

The lesson for you, Steve, is to be very clear in your papers. Write about one topic at a time. Take it, do the analysis. Publish. Do it several times. Then write an EE05 (and not in EE!) that summarizes and ties it all together. But the point is to build it up, solid brick by solid brick. No tying things together with straw.

Re 2, TCO, thanks for your comment. You say:

I note that you mis-spelled heirarchy, reconstructions, and separation. However, this makes no difference in understanding your meaning.

Similarly, although the Bürger/Cubasch paper may not be the clearest possible exposition of the facts, they are scientists and not literary geniuses. Their work is laid out in detail, and I followed their argument despite not being a specialist. Their conclusion seems quite clearly presented, and their work is described with enough detail to replicate. That’s about all I expect from a scientific paper.

Regarding your question about whether it is about MBH, RegEM, or reconstructions in general, their conclusion seems quite clear:

and

Can’t ask for much more clarity than that …

w.

Willis, I have read an awful lot of science papers in some detail and published a few too. Even judged as a science paper, it’s poorly written. It should not be too much to ask for clarity as we unmask things.

An engineer’s view:

The paper takes six words to say what a traditional science paper would say in twenty-six. I like people and papers who get to the point, but that’s just me.

Agreed. I actually think that a more clear and incisive explication (even with a bit of teaching) could make the paper shorter. Certainly shorter to read. And easier. Some errors I see are:

1. They are not clear about what studies are impacted by their work. Is it the RegEM studies only (Mann 2003?)? Is it MBH1998 also? How prevalent is the practice that they disdain in other reconstructions?

2. They misuse the word “vary” in the second sentence of the abstract. It should be “are”.

3. Mixes passive and active voice in the abstract paragraph.

4. Does not clearly say what the thesis is that they are testing, or what the main point to be decided will be.

5. Instead of the tedious introduction, they would be better off starting with the question that they pose at the end of the conclusion, then building the paper to answer it. Some of the intro (who wrote what) can be kept, but they would be better off using the intro to describe the pyramid of thought structure and analyses which will be looked at to answer the question, and to describe more fully the nature of the engine which they will be disassembling.

6. I don’t see any analysis (numbers, p-tests, detailed discussion) to support the assertion that the MWP question can’t be answered. I’m not disagreeing, btw. I’m just saying they need to tell me why a 25% CE can’t answer the question.

***

There are some other things wrong, I think. I would actually need to really understand the content to write it more clearly. But I know this stuff well enough to tell that it’s not just a technical issue. It’s not clearly laid out either. I’ll take a crack at understanding/rewriting their abstract.

I think their main theme has to do with the inadequacy of the training period: the arbitrary nature of the verification/calibration division (as opposed to an a posteriori test), its shortness, the lack of degrees of freedom, etc.

I don’t think the degrees-of-freedom issue is adequately investigated/described/quantified, along with its implications.

They do seem to show that by using revised sampling within the overall observed period, they get a different value of RE/CE. I need to read more of that, because that is the meat of their work here. No comment yet on that.

Intuitively, I can buy the issues with division of the training period into two sections and the RE/CE false distinction. But I think that instead of just claiming it, they need to show the argument for why it is false, or at least give a citation to classical literature where we can see this. This is a main difference with the work they are investigating. If they want to win that argument, they need to do more than just make an unsupported claim that paleo has wandered off the road and is using bad practices. Note, I’m inclined to believe them. I just think they need to show the whole chain, or cite it, if they are going to change a whole field’s practices.

They reference “McIntyre and McKitrick 2005a-c”. But there are only two 2005 papers in the bibliography.

On a side note, I really am more used to numbered endnote citations. I guess it’s nice to be able to talk about MM03 and the like if a lot of a paper is looking at other papers. But am I supposed to assume that “a” is the first in the row of 2005 papers, for instance? Something about how y’all do it doesn’t give me a warm fuzzy. Check out Physical Review or the Journal of the American Chemical Society or Nature (MBH98 is a good example). I like how they do it there better.

What does it mean that Rsq is scale independent and thus not appropriate?

If it is “easy to see that calibrating a model in one part of a trending series and validating it in another yields high RE scores”, then show it! Either cite some literature that describes exactly this issue or show the reader. We are trying to re-evaluate an accepted piece of work and maybe even change an accepted (flawed) methodology common to a field. Even if they’re wrong, these heathens, we have to break out the Bible and read the Word of the Lord to them. We can’t just say we have the good book in our knapsacks. They will continue to worship idols!
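In fact the trending-series point takes only a few lines to demonstrate. A minimal sketch (my own illustration, not from the paper; the trend and noise levels are made up): calibrate a regression on the first half of two series that share nothing but a linear trend, and RE in the second half comes out strongly positive anyway, because RE benchmarks against the calibration-period mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.arange(n)

# Two series that share nothing but a deterministic trend:
x = 0.03 * t + rng.normal(0, 0.3, n)   # "proxy"
y = 0.03 * t + rng.normal(0, 0.3, n)   # "temperature"

calib, verif = slice(0, 50), slice(50, 100)

# Calibrate a linear model of y on x in the first half.
b, a = np.polyfit(x[calib], y[calib], 1)
y_hat = a + b * x[verif]

# RE benchmarks predictions against the calibration-period mean,
# so any shared trend inflates the score.
sse = np.sum((y[verif] - y_hat) ** 2)
ss0 = np.sum((y[verif] - y[calib].mean()) ** 2)
re = 1 - sse / ss0
print(f"RE = {re:.2f}")
```

The score here reflects the shared trend, not any real proxy-temperature relationship.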

On topic: as I read, it seems that the point that Zorrie made to me yesterday comes to the fore (and actually Gavin and I have gone round on this). If all you look at is one matched trend, you effectively have a single data point; you need to get more by looking at year-to-year variation.

Steve,

The paper is hard to read, but it should not have been rejected. The editor should have requested a rewrite. Also, it is not clear if the editor rejected it, or the reviewers, assuming the GRL is a refereed journal.

That said, the paper strongly demonstrates just how bad the MBH work is from a statistical and methodological standpoint. MBH prove nothing and prove nothing in a statistically dubious manner. Critical is the paper’s finding that CE and RE are essentially the same statistic.
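For readers who haven’t met the two statistics: RE and CE share the same error sum of squares and differ only in which mean is used as the benchmark, which is the sense in which they are “essentially the same statistic”. A minimal sketch (my own illustration):

```python
import numpy as np

def re_ce(y_verif, y_hat, calib_mean):
    """RE and CE share the error term; only the benchmark mean differs."""
    sse = np.sum((y_verif - y_hat) ** 2)
    re = 1 - sse / np.sum((y_verif - calib_mean) ** 2)
    ce = 1 - sse / np.sum((y_verif - np.mean(y_verif)) ** 2)
    return re, ce

rng = np.random.default_rng(1)
y_v = rng.normal(0, 1, 50)
y_hat = y_v + rng.normal(0, 0.5, 50)

# Identical means in both periods: RE equals CE exactly.
re1, ce1 = re_ce(y_v, y_hat, calib_mean=np.mean(y_v))
# A mean shift between the periods inflates RE relative to CE.
re2, ce2 = re_ce(y_v, y_hat, calib_mean=np.mean(y_v) - 1.0)
print(re1, ce1, re2, ce2)
```

When the calibration and verification means coincide, the two scores are identical; a mean shift between periods makes RE exceed CE.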

The hockey stick has been fed into the wood chipper and should be used for mulch. Eventually, even the dimmest bureaucrat and reporter will understand.

JSP

Who is JSP?

TCO,

Pot, meet Kettle!

Chill. I’m just curious, not demanding. Want to know if he is a heavy. I’m not. Now go run off and argue with Hartlodt or Lee or other inanity.

Re #16

View Rsq as drawing a scatterplot of estimated temperature on the X-axis and the reference “true” temperature on the Y-axis. If the dots all lie in a perfect straight line, you’ll get an Rsq of 1. Rsq reduces the further the dots are from forming that straight line.

The scale independence thing is (I think!) due to the fact that the gradient of the line doesn’t matter to Rsq. If the gradient is 2 (i.e., the estimated temp anomaly is half the true temp anomaly) or if the gradient is 0.5 (i.e., the estimated temp anomaly is twice the true temp anomaly), you’ll still hit an Rsq of 1.0 if the data points form a perfect straight line. (Rsq also doesn’t “care” if the straight line goes through the origin or not, but that is another story…)

Whether scale independence is important depends on what you want to do with the data. If you are only interested in ordered statistics, i.e. the skill in determining the rank of a given year (or decade etc.) within the reconstruction, scale independence is not an issue. IMHO this is appropriate to the conclusion regarding whether the MWP was warmer than today or not. If you are interested in the extent to which it was warmer (e.g., was the MWP 1 degree warmer/cooler than the 20th century) then scale is important, and the Rsq test would not be sufficient.
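The gradient/offset point is easy to check numerically. A minimal sketch (my own, with made-up data): halving, doubling, or offsetting the “estimate” leaves Rsq at 1.

```python
import numpy as np

def rsq(est, true):
    """Squared Pearson correlation between estimate and truth."""
    return np.corrcoef(est, true)[0, 1] ** 2

rng = np.random.default_rng(2)
true_temp = rng.normal(0, 1, 100)

# Rsq ignores the gradient (and the intercept) of the fit line:
print(rsq(0.5 * true_temp, true_temp))        # = 1 up to floating point
print(rsq(2.0 * true_temp, true_temp))        # = 1 up to floating point
print(rsq(0.5 * true_temp + 3.0, true_temp))  # = 1 up to floating point
```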

Of course, if I’ve got completely the wrong end of the stick I’m sure Steve or someone will put you straight on this :-)

FWIW, I thought this latest paper was a good one. It does take a bit of careful reading, but it picks up on important themes from the NAS report and the other papers (MM, vSZ, BC) and tries to drive them on to the logical consequence (that current attempts to derive temperature have no statistical merit). I suspect this conclusion may be a bridge too far for the community to swallow; even though it is more or less stated in the MM papers, I don’t think climate scientists are willing to put as much stock in it as they should.

Oh and bringing RegEM into the mix is also a step forward in understanding what this methodology offers.

Some musings:

Essentially, the calibration period finds a correlation between temperature (by gridpoint? by overall?) and a given proxy, no? The validation period checks that result? And then we go off and get implied temps from the values of the proxy in the past, no?

So what the calibration does is give us some result like: .02 inches of RW (greater than the reference) is .1 degrees of temp (greater than the reference). So I end up with something like deltaT = k * deltaRW, with k = 5 (in this example).

Now, why wouldn’t rsq be important in getting that k value? You would think that it would give you a pretty good idea of how reliable k is as a correlation, for various data. Combined with the degrees of freedom, it would give an idea of the reliability of even having a correlation coefficient (there must be some stat test that does this; I don’t know which one).

But let’s say that you find that k = 5, with an rsq of 0.1. That is not very “good” in some sense. It means that if the behavior is the same in the past (we haven’t even gotten to nonsense regressors and cherry picking yet), for a given prediction there is a 90% chance that the k correlation won’t help us. I know I’m not doing this right, since I don’t know stats, but intuitively it’s something like that. 10% of the time you’re on or close to the line; the other 90% of the time, you’re in the data cloud.

Now, that really sucks if you actually are trying to tell something about a given year. BUT, if you have a lot of years and just want to know how a given century compares to this century, it’s not so bad. Also, maybe the presence of multiple proxies helps you.
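The k-and-rsq intuition can be made concrete. Below is a sketch of my own (the parameters are chosen to mimic the example above: true k = 5, rsq near 0.1): the slope is recovered, but with wide error bars when rsq is low.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
ring_width = rng.normal(0, 0.02, n)              # ring-width anomaly
temp = 5.0 * ring_width + rng.normal(0, 0.3, n)  # temp anomaly, true k = 5

b, a = np.polyfit(ring_width, temp, 1)           # fitted k and intercept
r2 = np.corrcoef(ring_width, temp)[0, 1] ** 2

# Standard error of the slope: grows as the residual scatter grows.
resid = temp - (a + b * ring_width)
se_b = np.sqrt(np.sum(resid ** 2) / (n - 2)
               / np.sum((ring_width - ring_width.mean()) ** 2))
print(f"k = {b:.1f} +/- {se_b:.1f}, rsq = {r2:.2f}")
```

With rsq around 0.1 the slope estimate is right on average but carries an uncertainty of roughly ±1 on k = 5, which is the reliability question being raised.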

Re #2

I thought the paper, for me anyway, gave a relatively good rendering of the question of the uncertainty of the results (skill) reported for proxy-based reconstructions of NH temperatures. For me, the authors articulated well background issues including CE, RE, and RegEM. For my selfish interests, I want to understand the statistical skill inferred and the potential for overfitting the data. Contrasting this paper with the dissenting reviewer’s comments, my selfish interests were much better served by this paper. Leveling this paper (even in combination with the same treatment of the reviewer’s paper) with a blanket of negative comment, I think, misses this point entirely.

TCO, I find your intentions for giving fatherly advice to Steve sometimes warming the cockles of my heart, but I need to use the parable (nothing personal intended) below to explain my thoughts further. It is about the immigrant drunken father who never bothers to learn English or stay sober but implores his son to stay sober and learn to use the English language well. Now, when I want advice on the use of the English language or sobriety, would I listen to his son, who was successful in both pursuits, or his father, who gave the advice?

I think we’ve beaten the writing style aspect to death. I’d like to start discussing the paper itself: what we can learn from it, how well it supports its inferences, any deficiencies, etc.

“The low verification scores apply to an entire suite of multiproxy regression-based models, including the most recent variants. It is doubtful whether the estimated levels of verifiable predictive power are strong enough to resolve the current debate on the millennial climate.”

Wow. All of them are unreliable.

TCO, I realise, looking a little more closely now, that I may have misinterpreted what B&C were talking about with respect to scale independence. They might have been referring to the effect of applying the metric over different time scales. I’m not so familiar with RE/CE, so I’m not sure what their point is here; I’d need to do some digging to get to grips with it.

Also, reading more closely, I’m not clear whether they are saying “r^2 is scale independent and therefore inappropriate for the type of analysis we are applying” rather than saying “r^2 is scale independent and therefore not a useful verification metric”. These are clearly two very different statements. I note in the referee #2 response, the mysterious reviewer seems to read this as a criticism of r^2; yet the B&C paper goes on to refer to r^2 approvingly further down.

Perhaps I’m starting to agree with you more that the paper should have been clearer :-) A little disambiguation would be helpful here.

I currently don’t have the background to assess whether the paper is well written, since my ability to read the paper was probably limited far more by my background than by the paper’s writing style. I do find it interesting that the paper says you can’t validate a hypothesis by testing in a nearby region.

I think the paper should focus more on this and only use climate studies as an example. Perhaps publish in a statistics journal. I think the problem is that low-frequency noise can look like trends. The closer you are to the interval where the estimation was done, the greater the correlation will be with the low-frequency noise, and the greater the bias that will be introduced into the verification because of this noise.

If your verification period is near your estimation period, then the verification must be done with the higher-frequency parts of the signal. This will reduce the effect of the noise correlation between the two regions.
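A toy version of that argument (my own construction, not from the paper): give two series a shared random-walk “low-frequency” component plus independent white noise. The raw correlation is inflated by the shared low-frequency noise; first-differencing, which keeps mostly the high-frequency part, suppresses it.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Shared low-frequency (red) noise plus independent white noise:
red = np.cumsum(rng.normal(0, 0.2, n))
a = red + rng.normal(0, 1, n)
b = red + rng.normal(0, 1, n)

corr_raw = np.corrcoef(a, b)[0, 1]
# First differences retain mainly the high-frequency content:
corr_diff = np.corrcoef(np.diff(a), np.diff(b))[0, 1]
print(f"raw: {corr_raw:.2f}, differenced: {corr_diff:.2f}")
```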

I’m still a ways away from understanding this paper, but I was interested in the discussion of skill. I am not sure what the formal definition of skill is, but to me a skillful estimation is one that can distinguish between noise and drivers. M&M did such tests on MBH98, like seeing how the estimate responds to removing any one of the proxies. Another test was done by replacing the proxies with white noise.

I would like to propose another test. I suggest replacing any one of the principal components with a random sequence that has the same autocorrelation properties. Several trials could be done, and we can calculate the probability that the correlation with the random signal is greater than or equal to the correlation with the principal component. This gives us a degree of confidence that the principal component is not noise. Another observation would be the standard deviation of the correlation with the random signal. This standard deviation gives us an idea of the uncertainty in the principal component.

Finally, knowing the uncertainty in the principal component, several trials can be done by choosing likely values for the principal components and seeing what range of possible reconstructions is obtained. I know it is better to draw from theory, but this gives an experimental, trial-and-error verification of the theory, to make sure the users of the theory are not fooling themselves.
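One simple way to implement the proposed test is with AR(1) surrogates matched to the component’s lag-1 autocorrelation and variance (a minimal choice; matching ARMA or the full spectrum is also possible). Everything here is illustrative, including the synthetic “component”:

```python
import numpy as np

def ar1_surrogate(x, rng):
    """Random series matching x's lag-1 autocorrelation and variance."""
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]
    innov = rng.normal(0, np.std(x) * np.sqrt(1 - phi ** 2), len(x))
    s = np.zeros(len(x))
    for t in range(1, len(x)):
        s[t] = phi * s[t - 1] + innov[t]
    return s

rng = np.random.default_rng(5)
n = 200
pc = 0.1 * np.cumsum(rng.normal(0, 1, n)) + rng.normal(0, 1, n)  # a "component"
target = 0.5 * pc + rng.normal(0, 1, n)                          # related series

r_obs = abs(np.corrcoef(pc, target)[0, 1])
r_null = np.array([abs(np.corrcoef(ar1_surrogate(pc, rng), target)[0, 1])
                   for _ in range(500)])
# Fraction of surrogates correlating at least as strongly as the component:
p_value = np.mean(r_null >= r_obs)
print(f"|r| = {r_obs:.2f}, surrogate p-value = {p_value:.3f}")
```

The standard deviation of `r_null` is exactly the spread statistic proposed above.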

Re: #29

It is instructive to see how you all are groping your way toward a proof of something (Ho: uncertainty is a hypothesis killer) using a method you do not fully understand. This is being pursued in an ad hoc, evolutionary kind of way, using first those papers that you either understand or that confirm your approach. This is not a criticism, because of course you already admit you’re not statistics experts. And neither am I. But this is the blind watchmaker at work. It is precisely the ad hoc method by which MBH98 invented the Mannomatic.

Not a particularly helpful observation, except to point out the obvious: your hypothesis may well be correct (uncertainty is the linchpin in the AGW house of cards), but you are going to need some heavyweight help to get you there. TCO says you need a PCA expert. I think you need a time-series simulation expert. You need someone who works in computational statistics (which uses computer simulation and numerical solutions) as well as classical mathematical statistics. It’s not often you can get both in one package. (I believe Wegman is of the latter variety.)

On Burger (or more accurately, the discussion surrounding it) –

The paper is actually quite interesting if you are seriously interested in coping responsibly with the uncertainty problem. The reason it seems opaque to some and boring to others is that it is fairly complete and not unnecessarily brief. i.e. It is everything that Nature tries not to be. This is what climate science papers are going to look like in the future, by the way, if the auditing process has its intended effect. So don’t complain about these kinds of papers!

We don’t know why the paper was rejected from GRL, but the authors would probably happily tell you why. I suspect it is because as #1 John A suggests: uncertainty is soooooooo boring. The associate editors and reviewers themselves would probably not dare to say it is too boring a subject to publish. But the Editor might suggest it was too boring for the GRL audience.

Audits like this are generally going to have a hard time getting published because they are narrowly focused on targeting one paper or class of papers. Get used to it. The gatekeepers favor papers that have broad relevance. (A look at the criteria for publication in any major journal will show that there is a strong bias toward “interesting” and “broadly relevant” papers.) You can already hear the criticism from the AGW anti-MM camp: too focused on one paper (MBH98), one group (HT), one pattern (HS), etc. We all know that’s not true – that CA is focused broadly on the METHODS of climate science – but that message is not resonating with the believers.

The more you focus on BUILDING something positive, as opposed to DISCREDITING something negative, the more acceptable your work will be in the mainstream literature, because the more broadly useful it will be. (It is a mistake to assume the literature is a forum for “good science”, conjecture & refutation. It is a place to market tools & ideas that others are willing to pay for (in untraceable utils of goodwill). That steams science purists like Popper & Feynman. But that’s the way it is.)

The Wegman outcome is favorable in that now a fair number (if a small percentage) of good statisticians may suddenly emerge to rise to the defense of science, if they are encouraged to do so.

There is a seed – a natural alliance between science purists and CC skeptics – that will grow beyond your expectations … if you let it. Fence-sitting skeptics need to know that skepticism does not make you a planet-hater.

The house of cards will fall when the reality of uncertainty starts sinking in, pushing the fragile institution beyond its tipping point. Believers do not understand just how unstable that structure is. They do not understand the uncertainty problem. It takes too much hard slogging through the dry statistical literature, like Burger, to ‘get it’. Suggestion for believers on both sides: if it bores you, or hurts your brain, you had probably better read and understand it.

As for the paper itself …

#30 I think what Steve does isn’t ad hoc. Steve stays very objective and uses well-established methods to highlight the weaknesses in the methods of MBH98 and other papers. Steve has been critical of the frequency with which members of the hockey stick team use ad hoc methods without first trying to establish the methods in a journal.

There is a large body of academics who believe it is necessary to first thoroughly study a field before attempting to conjecture on the subject and apply one’s deductive logic to reach one’s own conclusions. For instance, I have gotten the response before, on a newsgroup on symbolic mathematics: if you don’t have anything constructive to say, don’t say anything at all.

I, however, believe the deductive process used to arrive at the answer is as important as the answer. I think basic problem-solving and brainstorming skills are as important as the solution. If this weren’t the case, why are mathematics students required to spend so much time proving things which are already known? Understanding goes far beyond what is typically found enumerated in the average textbook. Memorization does not equal understanding.

The deductive process has many dead ends. Edison tried 100 times before inventing the light bulb. Not all dead ends are useless. For instance, one estimation procedure I recently attempted is to assume the signal is all noise and then try to reduce the error in the prediction. That is, minimize norm(E[(y-AX)(y-AX)']) with the assumption that y is all noise. I thus assumed that the autocorrelation of the error y-AX was equal to the autocorrelation in y. I then applied weighted least mean squares to get an estimate of the singular values in the singular-value-based pseudoinverse, and used the MATLAB routine fminsearch to try to find the singular values which minimize E[(y-AX)(y-AX)'], or more precisely minimize:

norm(P - P Phi’ A’ - A Phi P + A Phi P Phi’ A’)

Where P=E[(y-AX) (y-AX)']

And Phi is the singular-value-based pseudoinverse. I ran the routine several times, and my estimates appeared to converge. It seemed to fit the data quite well, but not as well as the assumption that the error was independent and identically distributed. I then checked to see how norm(E[(y-AX)(y-AX)']) varied with each iteration, and to my surprise it didn’t change. The optimization routine was only finding local minima due to round-off error.

So what seems like a failed estimation routine reveals something seemingly obvious and surprisingly interesting: the noise in the residual E[(y-AX)(y-AX)'] cannot be reduced by weighted least mean squares. All that happens is that the uncertainty in the residual (the a priori information) is spread between the uncertainty in the measurement (P) and the uncertainty in the fit (A Phi P Phi’ A’). Thus, for weighted least mean squares, the error bars that you get for the fit are equal to the error bars that you assume in the first place. Moreover, the error bars in the parameters you are trying to estimate are based on the error bars assumed a priori for the fit. This is something I have not read in discussions of weighted least mean squares, but it clearly highlights the weakness of the method when the error is not known a priori.
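The same point can be seen in the simplest weighted case, Sigma = sigma^2 * I: the estimate does not depend on sigma at all, while the quoted parameter error bars scale directly with the sigma you assume. A sketch of my own:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
A = rng.normal(0, 1, (n, 2))
x_true = np.array([1.0, -2.0])
y = A @ x_true + rng.normal(0, 0.5, n)

# With Sigma = sigma^2 * I the estimate is the same for every sigma...
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
for sigma_assumed in (0.1, 0.5, 2.0):
    # ...but the quoted parameter covariance scales with the assumption.
    cov = sigma_assumed ** 2 * np.linalg.inv(A.T @ A)
    print(sigma_assumed, np.sqrt(np.diag(cov)).round(3))
print("estimate:", x_hat.round(2))
```

The error bars out are exactly the error bars assumed going in, which is the observation above.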

29 (JC): Yes, that would be an interesting experiment. “Matching in autocorrelation” is probably not a no-brainer: is it AR, ARMA, FARIMA, the full spectrum, etc.? What exactly is a match? Of course, you can still do the work; you just need a little thinking about the model. Maybe you even try a few different types (this gives you an understanding of the interaction of method and form), like my desire for an understanding of method and shape.

30. I wouldn’t say that we NEED a PCA expert (really Steve… ok… he does all the work and we kibitz… unless you are going to start). I just think that it might help. Totally agree on the time-series expertise; Steve has mentioned it. It is more of an issue than the PCA transforms. Both consultants would be nice! Steve is pretty good on his own, don’t get me wrong. But more smart heads will help. That’s just how things go.

Bender: I am predisposed to be sympathetic to the Burger paper, but I found the same faults that 2 of the reviewers did in terms of it making unsupported, unfocused claims (MWP elimination, broad class of reconstructions). Also, I’m not the only one who finds the paper frustrating in its writing. And I’ve read a lot of papers in a lot of fields. I have some clue about how to lay out an argument clearly. This is not that different if you are analyzing a business issue or the intricacies of Mannian statistics. I’m not blaming Burger for the linear algebra in the paper. I have no problem with that. Made no judgment on it. Don’t know enough to read it. My issue is with the other parts of the paper. It is NOT TOO MUCH to ask for skeptics to write clearly. In fact it is VITAL. How else are we supposed to hash things out and assess if their criticisms are valid, and how important they are?

#30 I am interested in past temperature reconstructions, but my first questions are with regard to how the drivers affect temperature and what the uncertainties in our estimates of the correlations are. I am interested in the optimal way to estimate these correlations given no a priori information on the noise, and I am interested in the best way to test the robustness of these methods.

Once we figure out how best to determine that with the instrumental data, we will be in that much better a position to figure out how to reconstruct past temperatures. As for what kind of model to use for reconstructing past data, it would depend on the proxy. It would be necessary to consider a variety of models to see what works, and for some proxies, like tree rings, nonlinearities are essential to include in the model. A tree ring model may look as follows:

W(k*dt) = sum_j( a_j * s_j((k-j)*dt) ) * W((k-1)*dt)

s_j = (1 - ((T((k-j)*dt) - Topt)/Tw)^2) * log(c)   for T - Topt > 0

s_j = 0 otherwise

s_j is the quality of the growing conditions

Topt is the optimal temperature

Tw is the temperature difference from optimal conditions at which the tree doesn’t grow

k is the time index

dt is the width of the time step

W(k*dt) is the width of the tree ring at time k*dt

c is the partial pressure of carbon dioxide

The model I am suggesting is that tree ring width growth is a moving-average model of the quality of the annual growing conditions, which I define by the function s_j, but I include an autoregressive part in an unusual way: the autoregressive part is multiplied by the moving-average part instead of added to it. This is interesting because the model is such that if the tree doesn’t grow the previous year, it won’t grow the next year. Moreover, it models the decrease of growth of a tree with respect to age. If I want to include more autoregressive terms, I might do it as follows:

W(k*dt) = sum_n( sum_j( a_{j,n} * s_j((k-j)*dt) ) * prod_{m=1..n}( W((k-m)*dt) )^(1/n) )

So what we are now doing is considering various orders of geometric averages of past growing years. This is interesting because each of the geometric averages is such that if the tree didn’t grow at all in one of the past years, the geometric average will be zero. Thus we are considering past growth in a way that models the fact that trees die.

The other part which I didn’t mention is the non-stationary aspects. This can be done by treating the nonlinear parameters we identify as states, with noise as the input (for example, a random walk). Since trees are probably quite non-stationary, low-frequency data like pollen and boreholes will probably contribute the most information about past climate, while the tree rings may help fill in the missing high-frequency information.
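For concreteness, here is my own quick simulation of the multiplicative model sketched above, with made-up parameter values (Topt = 15, Tw = 10, c = 1.5 and arbitrary moving-average weights). It shows the two behaviors described: growth quality gates the ring width, and the multiplicative AR term makes growth decay with age.

```python
import numpy as np

def growth_quality(T, T_opt=15.0, T_w=10.0, c=1.5):
    """The s_j above: parabolic temperature response scaled by log(c)."""
    q = (1 - ((T - T_opt) / T_w) ** 2) * np.log(c)
    return max(q, 0.0)  # no growth outside the temperature window

rng = np.random.default_rng(7)
years = 300
T = 15 + 3 * rng.normal(size=years)   # annual temperature, deg C
a = [0.6, 0.3, 0.1]                   # moving-average weights

W = np.empty(years)
W[0] = 1.0
for k in range(1, years):
    ma = sum(a[j] * growth_quality(T[k - j]) for j in range(min(k, len(a))))
    W[k] = ma * W[k - 1]   # multiplicative AR: zero growth propagates
print(W[:5])
```

Because the moving-average term is typically below one, ring width shrinks roughly exponentially with age, which is exactly the limitation raised later about multiplicative decay.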

Re #32

I was not referring to Steve’s approach as ad hoc, but to the blog’s collective approach (if that even makes sense). Still, I see your point. I can only judge by what I see published or blogged, and these additional comments prove there’s quite a bit going on ‘on the side’. Also, it’s a pretty subjective term prone to misinterpretation, and that’s not helpful either. Hopefully the comments to come (after a few days’ worth of reading) will be more helpful.

I like having you on the site, Bender.

I was thinking more about modeling tree growth and how, after a certain age, tree growth changes approximately linearly with age. Two models came to mind. The first is a forced autoregressive model. In this type of model, tree growth linearly decreases with time and there is no “getting back to normal”:

w(k+1) = w(k) - k   // w(k+1) > 0

w(k+1) = 0 otherwise

The second is a moving equilibrium

w(k+1) = alpha*w(k) + (1-alpha)*w_typical(k)

In the second model, the constant alpha is between zero and one and governs how long the tree takes to get back to normal. w_typical(k) is the typical growth rate for a tree at age k. The time constant is defined as the k such that alpha^k = alpha/e, and it is a good measure of the time a tree takes to get back to normal. The first model is a long-memory process; Wegman suggested that trees may be best modeled by long-memory processes. In both models, the inputs can be dealt with by a moving-average component. Short-term effects can be introduced into the first model by cancellation. For instance:

w(k+1) = w(k) - k + u(k) - u(k-1)   // w(k+1) > 0

where u is an input. So we notice that even though the model has infinite memory with respect to its state, it is possible to choose the moving-average part so that no inputs are remembered. We can combine the two models as well, to give a long-term memory part and a short-term memory part, as follows:

w1(k+1) = w1(k) - k   // w1(k+1) > 0

w2(k+1) = alpha*w2(k) + (1-alpha)*w_typical(k)

w(k+1) = k1*w1(k+1) + k2*w2(k+1)

As I mentioned in my previous post, death can be dealt with through multiplication. If we multiply by the previous state, this gives us an exponential decay in tree growth. Since we know tree growth is nearly linear, a model that gives an exponential decay in growth is not a good choice. An alternative is to multiply by a factor of the form:

(w(k)/w_max)^(1/n)

where w_max is the maximum possible tree growth. This way the factor is very close to one and does not dominate the difference equation unless the previous growing season was really bad, as in the case that the tree died.
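And a sketch of the moving-equilibrium variant (my own parameter choices; w_typical is an assumed near-linear age curve), including the time constant defined above:

```python
import numpy as np

alpha, years = 0.8, 200
rng = np.random.default_rng(8)

# Assumed age-typical growth: near-linear decline, floored above zero.
w_typical = np.maximum(2.0 - 0.008 * np.arange(years), 0.2)

w = np.empty(years)
w[0] = w_typical[0]
for k in range(1, years):
    shock = rng.normal(0, 0.1)  # a good or bad growing season
    w[k] = alpha * w[k - 1] + (1 - alpha) * w_typical[k] + shock

# Time constant: roughly how long a shock takes to decay by a factor of e.
tau = 1 / np.log(1 / alpha)
print(f"time constant ~ {tau:.1f} years")
```

With alpha = 0.8 a shock fades with a time constant of about four and a half years, so this variant forgets bad seasons, in contrast to the long-memory forced model.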

I was thinking about how Bürger talked about principal component analysis as least squares with whitening of the parameters you are trying to estimate. Clearly, if the drivers are nearly collinear, you have a problem. In a way, principal component analysis helps, as it orthogonalizes your drivers; and although Steve mentioned that PCA is equivalent to least mean squares via the singular-value-based pseudoinverse, this does not address the sensitivity of least mean squares to noise that correlates with the common signal of two nearly collinear drivers (e.g., CO2 and solar).

Usually in least mean squares you want to whiten the model error. I would like to suggest that it may be better to whiten only the parts of the spectrum that do not contain a common driver signal, and to suppress the parts of the spectrum that do. This way, although less information will be available, at least the estimate will be less sensitive to assumptions about the model error (noise).