Re: Ryan O (#86),

I tried posting this yesterday at RC after seeing a lack of response to Ryan O’s last comment:

376. John Norris Says:

Your comment is awaiting moderation.

7 March 2009 at 8:09 PM

re: 375 in the context of 353, 364, 369, 370, 372, 373, and 374

So any further defense of “Antarctic warming is robust”?

It no longer shows up as awaiting moderation, so I am guessing it didn’t make the cut. Perhaps they are preparing an irrefutable response that shows something wasn’t taken into account in the analysis that all you folks did. Then they can close the thread.

At the risk of getting this all wrong, I will simply write what I thought to be the case.

When looking for a signal, it is convenient to perform a PCA, as it enables one to reduce the degrees of freedom, provided you can be sure (and can verify) that the discarded PCs contain no significant information.

Having done that, one has a smaller dataset in which to look for the signal of interest. If you have some data to calibrate against, you use it to discover the weights for that signal in each of the retained PCs. You then form the product of the weights and the retained PCs, and you have a recovered signal. If the calibration data was partial, you have extended the recovered signal to cover the gaps. Obviously there are tests that need to be performed to gauge the level of confidence one can have in the recovered data.
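A minimal sketch of the procedure just described, assuming numpy is available; the synthetic data and every name in it are invented for illustration. It retains the leading PCs, regresses a partially known calibration series onto them over the overlap, then uses the fitted weights to extend the reconstruction over the gap:

```python
# Sketch: truncated PCA plus calibration regression (illustrative names/data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 time steps, 10 "stations" sharing one common signal
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 50)
X = np.outer(signal, rng.normal(1.0, 0.2, 10)) + 0.3 * rng.normal(size=(200, 10))

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(C)       # eigh returns ascending order
order = np.argsort(evals)[::-1]
k = 3                                  # retain the k leading PCs
E = evecs[:, order[:k]]                # EOFs (spatial patterns)
PCs = Xc @ E                           # principal component time series

# Calibration: the target series is only known for the first 100 steps
y = signal + 0.1 * rng.normal(size=200)
w, *_ = np.linalg.lstsq(PCs[:100], y[:100], rcond=None)

# Reconstruction: weights times retained PCs, extended over the gap
y_hat = PCs @ w
err = np.sqrt(np.mean((y_hat[100:] - signal[100:]) ** 2))
print(f"holdout RMSE: {err:.3f}")
```

The out-of-calibration error is the kind of verification test mentioned above: if the discarded PCs held part of the signal, this holdout error would blow up.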

Now I may be simplistic and just wrong, but I thought it was a straightforward, almost mechanical operation. Also, I thought you could always include all the PCs; it just requires more computation.

I feel I must be missing something. I get the feeling (not on this site) that the PCs are presented as if they are signals, whereas I would consider them more like “flavour scales”. Surely if a strong PC is on average weighted down as much as up, the PC remains but the signal is cancelled out.

I do use PCs and other orthogonal bases, and often it is not all over until the last variance is explained.

Sorry if this is banal, but although I think I understand PCA, I simply cannot understand the way it is used in some of the papers. If you need at least 32 PCs (as above), then you need at least 32; you cannot discard PCs if they contain the signal, be it a positive or negative contribution. PCs that look nothing like the signal can still contain enough signal to be worth including.

Thanks for listening. Now, am I being dumb or just naive?

Alex

Many thanks, the links you provided led me to a proof that I could understand. I was damn sure it was true, but it was a procedure I came to by trial and error. I shall continue to use it; it is all I need for matrices that will fit into EXCEL, and macros can take care of the iterations.

Sitting on another eigenvector is not a problem, as the smallest error does eventually turn the vector.

Alex

What you are looking at is called the “power method” of calculating the eigenvector corresponding to the largest eigenvalue. It works if the largest eigenvalue is unique and if the initial vector you have chosen is not orthogonal to the eigenvector you are trying to find.
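A sketch of why the iteration converges (standard linear algebra, stated here for convenience): expand the starting vector V in the eigenbasis of C,

```latex
V = \sum_{i} c_i u_i, \qquad C u_i = \lambda_i u_i, \qquad \lambda_1 > \lambda_2 \ge \cdots \ge \lambda_N \ge 0,
```

so that

```latex
C^k V = \sum_{i} c_i \lambda_i^k u_i
      = \lambda_1^k \Bigl( c_1 u_1 + \sum_{i>1} c_i \bigl(\lambda_i/\lambda_1\bigr)^k u_i \Bigr).
```

Since each ratio \(\lambda_i/\lambda_1\) is below 1 in magnitude, the bracketed sum tends to \(c_1 u_1\), so the normalised iterate converges to \(\pm u_1\) at a rate set by \((\lambda_2/\lambda_1)^k\), provided \(c_1 \ne 0\) (the non-orthogonality condition above). A covariance matrix is symmetric and positive semidefinite, so the orthonormal eigenbasis exists and all eigenvalues are real and nonnegative.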

A Google search with “calculating eigenvectors power method” finds lots of pages on the topic including the pretty much unreadable Wiki page and a simpler explanation found here.

The method converges reasonably quickly when the separation between the largest and next-largest eigenvalue is large. You can also speed up convergence by repeatedly squaring the covariance matrix to form powers of C of order 2^n before multiplying by the vector V, instead of just successively multiplying by C at each step.
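As a concrete illustration, here is a pure-Python sketch of both the basic iteration and the squaring trick; the 2×2 matrix and all function names are made up for the example:

```python
import math

def mat_vec(C, v):
    # C v for a square matrix stored as a list of rows
    return [sum(c * x for c, x in zip(row, v)) for row in C]

def mat_mat(A, B):
    # A B, used to square the matrix for the 2^n acceleration
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def normalise(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def power_method(C, v, iters=50):
    # Repeatedly apply C and renormalise; v drifts toward the eigenvector
    # of the largest eigenvalue (if unique and v is not orthogonal to it).
    for _ in range(iters):
        v = normalise(mat_vec(C, v))
    lam = sum(x * y for x, y in zip(v, mat_vec(C, v)))  # Rayleigh quotient
    return lam, v

def power_method_squared(C, v, n=5):
    # Square C n times (C^2, C^4, ..., C^(2^n)) before applying it,
    # reaching the same limit in far fewer matrix-vector steps.
    for _ in range(n):
        C = mat_mat(C, C)
    return normalise(mat_vec(C, v))

# Example: symmetric 2x2 matrix with eigenvalues 3 and 1
C = [[2.0, 1.0], [1.0, 2.0]]
lam, v = power_method(C, [1.0, 0.0])
print(round(lam, 6))  # → 3.0 (the largest eigenvalue)
```

In practice, for a covariance matrix from real data, one would stop when successive normalised iterates agree to within a tolerance rather than running a fixed number of steps.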

This is OT but can anyone assist?

I lack specialised software to extract PCs/EOFs, but I do so in a way that works (for me), though I have no proof that it is correct.

Is it generally true that if one forms the covariance matrix C (N×N) and any normalised vector V (N), then the product CV (once normalised) is always a better approximation of the eigenvector with the largest eigenvalue, provided that all the eigenvalues are different?

In practice, iteration always seeks and finds an approximation of that eigenvector and finding the corresponding PC/EOF is straightforward.

If so, does anyone know a reasonably simple proof? Or am I making a huge error somewhere?

Many Thanks

Alex

I will be interested in seeing your post on that, Jeff. Depending on what you see, you may want to pursue accounting for other sources of correlation as well (region, ocean, latitude, etc.). One would think there would be other correlations, and a similar degradation of such correlation with the low-order factors.

Re: Layman Lurker (#89), They both have the same general effect, reducing the average continental warming trend. I’m going to run some comparisons to help quantify the similarities and differences between the two. One thing I find interesting is how they affect the pre-1980 vs. post-1980 trend. That distinction gets lost when you only examine linear trends over the 50-year time frame.

It seems that input vs. output weighting addresses different deficiencies. Input weighting would presumably address spurious correlations in RegEM; output weighting would address the geographic imbalance in calculating a mean. At this point I have not looked closely enough to draw conclusions.

Were there any subtle or interesting differences observed when you compare spatial weighting of the input data vs. output?
