The Three AVHRR PCs

Roman Mureika perceptively observed about 10 days ago that the AWS reconstruction consisted of only 3 PCs. Jeff Id has extended this by observing that the AVHRR reconstruction also consists of only 3 PCs. Jeff's demonstration of this is correct, but a little awkward. I'll show an alternate demonstration which uses a useful linear algebra property of singular value (eigenvalue) decompositions: the SVD of the transpose of a matrix has the same singular values as the SVD of the matrix itself, with the U and V matrices simply exchanging roles. If Jeff had simply transposed the short fat matrix to a long thin matrix, his algebra would have been simplified. I've also extended his analysis to plot the 3 eigenvectors (corresponding to the 3 PCs) on a geographic grid. Steig has archived something that's been fitted. There is no possibility that the AVHRR archive represents original data.

First download the AVHRR recon data:

download.file("http://faculty.washington.edu/steig/nature09data/ant_recon.txt","temp.dat",mode="wb")
grid=scan("temp.dat",n= -1) # 3305400
#600*5509 #[1] 3305400
recon=ts(t(array(grid,dim=c(5509,600))),start=c(1957,1),freq=12)
dimnames(recon)[[2]]=1:5509

Now download the AVHRR grid info (changing to -180/180 long for plotting):

grid.info=1:5509;grid.info=data.frame(grid.info);names(grid.info)="id"
grid.info$lat=scan("http://faculty.washington.edu/steig/nature09data/data/Tir_lats.txt")
grid.info$long=scan("http://faculty.washington.edu/steig/nature09data/data/Tir_lons.txt")
temp=grid.info$long>180
grid.info$long[temp]= -( 360- grid.info$long[temp])

Now transpose and do the singular value decomposition:

X=t(recon);dim(X); rm(recon) #5509 600
svd0=svd(X)
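
As a quick check of the transpose property mentioned in the introduction, the singular values of X and t(X) are identical, with the U and V matrices simply swapping roles (up to column signs); recomputing the SVD in the original wide orientation shows this directly:

svd1=svd(t(X)) #SVD in the original 600 x 5509 orientation; takes about as long as the first
range(svd0$d-svd1$d) # effectively zero: identical singular values
range(abs(svd0$u[,1:3])-abs(svd1$v[,1:3])) # effectively zero: leading U of one is the V of the other, up to sign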

Next plot the eigenvalues. This proves that only 3 PCs contribute to the recon.

plot(svd0$d,xlim=c(0,100),main="Steig AVHRR Eigenvalues")

This yields the following plot:

For good order’s sake, the first six eigenvalues are:

round(svd0$d[1:6],7)
#[1] 2272.6863885 993.2727255 708.5284449 0.0000024 0.0000024 0.0000024
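
The share of total variance carried by the first three singular values makes the same point numerically:

d2=svd0$d^2
round(cumsum(d2)/sum(d2),12)[1:6] # reaches 1 (to machine precision) by the third entry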

If you're wondering whether there might be some variation in this data over and above the three-PC fit (notwithstanding the eigenvalues – say, if you don't exactly know what an eigenvalue is), here's a further simple proof that the AVHRR archive is fitted. Expand the product of the 3 PCs, the 3 eigenvalues and the 3 eigenvectors (a rank-3 approximation) and compare it to the archived matrix. The difference is negligible. As the others have noted, Steig's AVHRR archive is a fitted product. There is NO possibility that this is original data.

A= svd0$u[,1:3]%*% diag(svd0$d[1:3]) %*% t(svd0$v[,1:3])
range(A-X)
#[1] -6.12266e-08 6.07958e-08
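
Equivalently, any individual gridcell series is an exact linear combination of the three PCs; here's a spot check on an arbitrary gridcell (gridcell 1):

test.reg=lm(X[1,] ~ svd0$v[,1]+svd0$v[,2]+svd0$v[,3])
summary(test.reg)$r.squared # 1 to machine precision – a perfect fit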

A plot of the 3 PCs (identical to Jeff Id’s plot) can be produced as follows:

plot.ts(svd0$v[,1:3],main="Steig AVHRR PCs")

This yields the following diagram – with the interesting change in amplitude of PC3 in the early 1980s – about the time when AVHRR measurements actually began to exist.
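
The change in amplitude of PC3 can be quantified by comparing its standard deviation before and after the satellite era (taking January 1982 as the break point is an assumption on my part about when AVHRR coverage begins):

pc3=ts(svd0$v[,3],start=c(1957,1),freq=12)
sd(window(pc3,end=c(1981,12))) # pre-satellite period
sd(window(pc3,start=c(1982,1))) # satellite period – noticeably larger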

Jeff did not plot the three corresponding eigenvectors. This is a bit trickier, but we have the grid info on the location of the AVHRR gridcells and, by color coding the eigenvector values, these can be illustrated in the same way as trend maps. The maps below do NOT show trends. They show the weights in each of the PCs. The following code produces the three maps. Using the method here, I’ve plotted each pixel as a round point. You can control the point size through cex; in each map using this method, I usually have to guess at the size and experiment a little to get a map that looks OK. (If nothing else, this shows that it doesn’t take much code in modern languages to produce pretty graphics.)

#library(GDD)
library(mapproj)
col1=c("#0000BB","#0014FF","#006BFF","#00C2FF","lightblue2","lightblue1","grey85",
"lightgoldenrodyellow","lightgoldenrod","#FF8700","#FF5B00","#D70000","#800000")
K=length(col1)
M=nrow(grid.info)
for (eigen in 1:3) {
grid.info$eof=svd0$u[,eigen];
ylim0=range(grid.info$eof,na.rm=T);ylim0 # 0.007698207 0.200473573
y0=ceiling ( max(abs(ylim0))/.005) *.005;y0 #.205
breaks0=seq(-y0,y0,2*y0/K); (N=length(breaks0))
h=function(x) { if (is.na(x)) h=NA else {
if(x<=max(breaks0)) h= min( (1:length(breaks0))[ x<breaks0 ]) -1 else h= length(breaks0);
h[h==0]=1}
h}
grid.info$class=rep(NA,M); for(i in 1:M) grid.info$class[i]=h(grid.info$eof[i])

##PLOT
# GDD(file=paste("d:/climate/images/2009/steig/avhdd.eof",eigen,".gif",sep=""),type="gif",h=420,w=420)
par(mar=c(3,3,2,1))
map('world',proj='azequidistant',orient=c(-90,0,0),ylim=c(-90,-60),fill=TRUE,col="white") #irrelevant
a=seq(-180,150,30); for(i in 1:length(a)) lines(mapproject(list(x=rep(a[i],181),y= c(-90:90) )),col=4,lty=2)
a=c(-180,0); for(i in 1:length(a)) lines(mapproject(list(x=rep(a[i],181),y= c(-90:90) )),col=4,lty=1)
a=seq(-50,-80,-10);
for(i in 1:4) lines(mapproject(list(y= rep(a[i],72),x= seq(-177.5,177.5,5) )),col=4,lty=2 )
points( mapproject(list(y= grid.info$lat,x= grid.info$long) ),pch=19,col=col1[grid.info$class],cex=.5 )
title(paste("Steig AVHRR Eigenvector ",eigen) )
# dev.off()
}
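
Incidentally, the h() classification step inside the loop can be done in one vectorized line with cut(); this is a roughly equivalent sketch (handling of values falling exactly on a break differs slightly from h()):

grid.info$class=as.numeric(cut(grid.info$eof,breaks=breaks0,include.lowest=TRUE))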

Here are the three eigenvectors:


It is possible that this archive is derived from RegEM PTTLS, but, if so, it's done in a different way than what we've seen for the AWS data by parsing Tapio Schneider's Matlab code. In that code, real data, where available, is spliced with the fitted data. In the AVHRR case, it doesn't look like any real data is spliced back in – if there were, you wouldn't get such a perfect fit.

There's another potentially interesting phenomenon here – in two cases now, when RegEM with truncated total least squares is run with the truncation parameter set at 3, the fitted output decomposes into exactly 3 PCs. Is there a connection between the truncation parameter and the number of PCs required to decompose the solution? If so, is the RegEM methodology simply a long-winded way of converging to something that can be calculated directly? Sort of like a slow-converging infinite series where the answer can be calculated directly. Just a thought.
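
On the last question, one piece of the connection is elementary: the rank of a matrix product is at most the smallest rank of its factors, so if a single rank-3 regression matrix were applied to every row, the fitted block could not exceed rank 3. A toy sketch of that fact (a simplification – in RegEM the regression matrix differs for each pattern of missing values, which is why the full result isn't obvious):

set.seed(1)
Xavl=matrix(rnorm(600*40),600,40) # stand-in for the available data
B=matrix(rnorm(40*3),40,3) %*% matrix(rnorm(3*200),3,200) # a rank-3 regression matrix
qr(Xavl %*% B)$rank # 3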

98 Comments

  1. Mike B
    Posted Feb 18, 2009 at 9:41 AM | Permalink

    Steve, is it my lyin’ eyes, or is that very heavily weighted area in PC 2 suspiciously close to Harry?

    Steve: I wondered that too. Harry is not on the bulls-eye, but it’s near.

    • Mike B
      Posted Feb 18, 2009 at 11:26 AM | Permalink

      Re: Mike B (#1),

      Steve, I’m not an Antarctic geographer by any means, but it looks to me like the maps you’re using include ice shelves as part of the antarctic land mass. In particular, the “bullseye” in PC2 appears to be over the Ross Ice Shelf.

      I don’t know if this is significant or not, just thought I’d point it out.

  2. bernie
    Posted Feb 18, 2009 at 9:57 AM | Permalink

    Steve:
    A different color scheme that does not connote temperature might avoid confusion.

  3. Steve McIntyre
    Posted Feb 18, 2009 at 10:04 AM | Permalink

    I’m used to red and blue for positive and negative. Plus color schemes take a while to work out. I’ve got a usable one right now. So no plans to change.

    It’s also important to understand that the PC2 and PC3 are contrasts – so that something is being flipped over at some point in the calcs. Things flipping over are always worth watching out for.

  4. Posted Feb 18, 2009 at 10:05 AM | Permalink

    The intent of my original post when I began was to rerun the satellite reconstruction from the 3 PCs. To that end, I had already extracted the eigenvectors for the second set of plots in the original code; I posted a truncated version of the code online. On another post I showed that the AWS reconstruction can be emulated using only the 3 PCs or using the data itself. So now if we have these 3 PCs, we can run Jeff C's gridded data next to the satellite recon in a matrix and get a less peninsula-weighted reconstruction.

    When I plotted the 3 PCs everything stopped, because the variance of the 3rd PC in the reconstructed data doesn't match the 'real' data very well, confirming the point RomanM made about the RegEM process having a different variance in the reconstructed data vs the real. To me that puts a serious credibility burden on the reconstructed data, and it is clearly visible in the 3rd PC.

  5. Steve McIntyre
    Posted Feb 18, 2009 at 10:15 AM | Permalink

    #4. what the PC3 indicates to me is that during the period where we actually have AVHRR measurements, there is contrast between West Antarctica and East Antarctica – perhaps this is related to the Southern Annular Mode, perhaps not. Sometimes East is warm and West is cold, sometimes the opposite.

    In the pre-instrumental period, the contrast is pretty much extinguished. They move more in lock step. So for all the huffing and puffing about complicated covariance matrices, they haven’t maintained the amplitude of this series. Does it “matter”? It seems pretty relevant when the entire purpose of the analysis is to reconstruct 1957-1980 values and you fail to preserve the amplitude.

  6. Posted Feb 18, 2009 at 10:15 AM | Permalink

    Is there a connection between the truncation parameter and the number of PCs required to decompose the solution?

    I believe the answer to this is that the truncation parameter is equivalent to the number of PC’s.

  7. Steve McIntyre
    Posted Feb 18, 2009 at 10:30 AM | Permalink

    #6. Jeff, you’re simplifying things a little. Truncated TLS keeps 3 PCs in each of the 600 lines if the truncation parameter is 3. I don’t see that it is obvious from this that the RegEM fit after convergence likewise decomposes to 3 PCs. Indeed, we were all surprised when Roman noticed this.

    Having said that, there’s reason to think that the 3 PC decomposition and truncation parameter of 3 are connected, but proving it seems non-trivial to me.

    • Posted Feb 18, 2009 at 10:40 AM | Permalink

      Re: Steve McIntyre (#7),

      You may be right but I ran a whole bunch of different parameters when I started.

      Deconstructing a Reconstruction of Temperature Trend

      If you look at neigs=3 and regpar=3 and neigs=4 and regpar=3, the output is visually the same. Molan Labe pointed this out to me in the comment thread. I didn’t confirm it any other way.

      • Cliff Huston
        Posted Feb 18, 2009 at 11:03 AM | Permalink

        Re: Jeff Id (#8),
        Also, it is interesting to see the loss of amplitude pre-1980 when neigs=3 and regpar=3 vs neigs=3 and regpar=2. PC3 effects?

        – Cliff

      • Steve McIntyre
        Posted Feb 18, 2009 at 11:14 AM | Permalink

        Re: Jeff Id (#8),

        Jeff, regpar is the same in the two results. So that doesn’t test regpar.

        • Posted Feb 18, 2009 at 12:58 PM | Permalink

          Re: Steve McIntyre (#11),

          I looked into it further, doing some single-stepping through Matlab with neigs=3 and regpar=2.

          rc = r(ir);
          V11 = V(colA, 1:rc);
          V21 = V(colB, 1:rc);

          V11 and V21 are taken from V, which in this case contains the 3 eigs, truncated down to two.

          Xr(:,:,ir) = (V11 / (V11'*V11)) * V21';

          Xr is the return value from pttls and comes back as B. This is the only place Xr is used.

          The only place B is used is in regem.m:

          % missing value estimates
          Xmis(j, kmisr{j}) = X(j, kavlr{j}) * B;

          and finally

          % update data matrix X
          X(indmis) = Xmis(indmis);

          The only other operations to X and Xmis I can find are scaling.

          Since the regpar truncates the third vector and there’s no code anywhere else for it to have an effect, it looks like regpar is actually a truncation of neigs as Molan Labe stated.

          Steve: regpar is the truncation parameter for the TTLS step. neigs enters at a different step – in the SVD decomposition of the correlation matrix C to give V entering the TTLS step. If you’ve already limited V through neigs, it may limit the scope of regpar. So the number of final PCs might be limited in two different ways. regpar probably has to be less than neigs to make sense, but I haven’t checked this.

        • Posted Feb 18, 2009 at 1:21 PM | Permalink

          Re: Jeff Id (#17),

          There's a section of the code which limits regpar to less than neigs.

          Yup, here it is pttls.

          ir = find(r > n);
          if ~isempty(ir)
          r(ir) = n;
          warning('Truncation parameter lowered.')
          end

  8. Cliff Huston
    Posted Feb 18, 2009 at 11:09 AM | Permalink

    SteveM,
    Is it time to ask if the AVHRR reconstruction file matches the plot on the cover of Nature? Has anyone checked?

    Did Steig post an intermediary file instead of the reconstruction? It would not be the first time for such an event.

    – Cliff

  9. Kenneth Fritsch
    Posted Feb 18, 2009 at 11:21 AM | Permalink

    I doubt that this will happen, but, in my judgment, a brief update or progress report on what has been elicited from the Steig et al. (2009) paper by one of the more technical participants in that endeavor would be helpful to the laypeople reading and posting here. I suspect that the full extent of the detective work done here could go unnoticed and underappreciated. As a novice using R and a layperson wanting to understand PCA and RegEM better, these analyses have been a great learning experience. I have particularly appreciated Roman M's patient and clear explanations of the methodologies involved – and help with R.

    Without detail and from memory, my layperson’s view of the Steig paper analyses ongoing here at CA is the following.

    The AWS and TIR reconstructions used the manned stations during the overlap period of measurements to calibrate and validate the reconstructions that then go back to pre-AWS and TIR (AVHRR) times to extract more fully covered data from that period than the manned stations would provide.

    The AWS reconstruction used 3 AWS PCs for the entirety of the 1957-1980 period (or thereabouts). For the period of actual AWS measurements (1980-2006), the measurements were used along with in-filled data from the 3 PCs. For the AWS reconstruction, the contributions of the PCs after the first 3 fall off perceptibly. Could the contributions of more PCs affect the final reconstruction results? And is this less clear than in the case of the TIR reconstruction?

    The TIR reconstruction used 3 TIR PCs for the entirety of the 1957-2006 time period in contrast to how the AWS reconstruction was handled. It would appear to me and from memory that the fall off from PC3 to PC4 for the TIR reconstruction is greater than in the AWS case.

    Why would the Steig authors not use the TIR measurement data for the 1982-2006 time period? Could this change from the AWS case indicate that the authors had less faith in their adjusted TIR measurement data than the AWS measurement data? Do not the critical measurements in these reconstructions come from the manned station measurements during the 1980-2006 and 1982-2006 time periods? Evidently the manned station measurements, going back in time to the beginnings in 1957 when a number of these stations started measurements, are too sparse spatially (and temporally?) to be used alone to track trends from 1957-2006. That development appears to make the PCA and RegEM methods and resulting reconstructions provide more complete coverage during the pre-AWS and TIR period. But does it really?

    The trends for the AWS reconstruction and the GISS temperature series for the zone 64S-90S for the period 1957-2006 (and time period segments within the 1957-2006 period) are in reasonably good agreement. I believe that GISS uses only manned station measurements going back from 1980 to 1957 and no TIR measurements for all of the 1957-2006 time period. I should determine how many of the AWS stations GISS uses.

    • MJT
      Posted Feb 18, 2009 at 12:21 PM | Permalink

      Re: Kenneth Fritsch (#12),
      Ryan O over at Lucia has an interesting write up…

      In other words, the null hypothesis that there is no statistically significant difference between the satellite reconstruction and the ground data/AWS reconstruction is strongly rejected.

      http://rankexploits.com/musings/2009/steigs-antarctica-part-three-creative-mathemagic/#more-3409

      • Kenneth Fritsch
        Posted Feb 18, 2009 at 1:01 PM | Permalink

        Re: MJT (#16),

        In other words, the null hypothesis that there is no statistically significant difference between the satellite reconstruction and the ground data/AWS reconstruction is strongly rejected.

        I think Ryan was showing this for individual stations. I did a difference comparison with annually reconstructed data between AWS and TIR using the trend of the difference over the period 1957-2006. I saw an increasing trend for the difference TIR minus AWS coming forward from 1957 to 1983. The significance with p less than 0.05 did not occur until I used the 1983-2006 time period. I judged that one could a priori select the 1982-2006 period for a comparison, since 1982 is the first year that TIR measurements were available. That period had a p = 0.07. I am not so sure one could find a good reason for the 1983-2006 period.

        What I find in these reconstructions is a good deal of uncertainty in warming and cooling trends and differences between AWS and TIR.

  10. Steve McIntyre
    Posted Feb 18, 2009 at 11:27 AM | Permalink

    #12. There sure is a lot of material and I’m sympathetic to the request, but I don’t think that things are ready to be pulled together. Yes, I will make a go at it at some point, but first we have to figure out what Steig did. We’re still guessing.

  11. Patrick M.
    Posted Feb 18, 2009 at 12:43 PM | Permalink

    I’ll show an alternate demonstration which shows a useful linear algebra property of singular value (eigenvalue) decompositions: the SVD of the transpose of a matrix is the same as the SVD of a matrix.

    Steve,
    Do you have a reference for this property?

    Thanks

  12. Ross McKitrick
    Posted Feb 18, 2009 at 1:02 PM | Permalink

    There’s an alternative explanation: maybe the actual Antarctic surface and atmosphere is not of full rank. Outside the circumpolar vortex the climate system has hundreds of degrees of freedom, whereas inside, everything is a linear combination of only 3 vectors. This might also explain why it’s so cold there: the entropy production vector keeps vanishing into the matrix kernel.
    Geez I bet nobody’s ever thought of this before, but it would definitely explain your PC results.

    • Steve McIntyre
      Posted Feb 18, 2009 at 1:14 PM | Permalink

      Re: Ross McKitrick (#19), remember the IMO interesting posts last year on spatial eigenfunctions (arising out of the humdrum Stahle SWM network)? I produced things that pretty clearly equated to Chladni figures. I’ve been experimenting with similar spatial covariance things in the Antarctic. (Gavin, get busy.)

      But I'm 99.999% sure that the 3 PC thing has nothing to do with anything physical, but is a property of RegEM truncated TTLS with regpar=3.

      • Ailee
        Posted Feb 19, 2009 at 6:06 AM | Permalink

        Re: Steve McIntyre (#21),

        remember the IMO interesting posts last year on spatial eigenfunctions (arising out of the humdrum Stahle SWM network)? I produced things that pretty clearly equated to Chladni figures. I’ve been experimenting with similar spatial covariance things in the Antarctic. (Gavin, get busy.)

        I am waiting with impatience for the results of this experiment.
        I remember well the thread about spatial autocorrelations and consider that this issue is largely misunderstood and understudied.
        If I had not a job, or if I was lucky enough to be paid for doing "climatology", I would focus on spatial patterns, stability of chaotic attractors and relations between both.

        Normally when a physicist finds that a process A (temperatures) analysed in a creative manner presents surprising similitudes to apparently unrelated processes B (Chladni figures) and C (eigenfunctions for rectangular potential pits in QM), he is either on to something interesting or has incredibly bad luck.

        So good luck with your "experiments" Steve and keep us informed.

    • John Archer
      Posted Feb 18, 2009 at 1:59 PM | Permalink

      Re: Ross McKitrick (#19),
      LOL Excellent.

    • Craig Loehle
      Posted Feb 18, 2009 at 2:41 PM | Permalink

      Re: Ross McKitrick (#19), “the entropy production vector keeps vanishing into the matrix kernel”–you’re pulling our leg, right?

  13. Posted Feb 18, 2009 at 1:09 PM | Permalink

    Steve,

    I get an ‘Unexpected symbol error in function h’ error message at this line: if(x <= max(breaks0)) h= min( (1:length(breaks0)) [ x h[h==0]=1}.

    I removed the extra space between the less than symbol and the equals sign and placed it in front of the max(breaks0) call. This didn’t fix the error.

    Keep up the good work.

    Steve: it's something to do with WordPress. This came up before. I'll post a text version some time.

  14. MC
    Posted Feb 18, 2009 at 1:31 PM | Permalink

    This is interesting. I'm beginning to see some sense in what they were trying to do. I need to get this paper and the data, but it appears that by generating the PCs they get the inherent variance over time in the measurement set (or at least the set presented). They then fit linear trends to this, extrapolate into the past and then fill in the best estimates (from the linear trend) at each grid point using each PC trend and eigenvalue weights.
    So that if you undid the PC matrix transformation you would come back to a set of data that was a best estimate over the extrapolation period. That's actually quite clever.
    The problem is whether your PC generation method catches all the data, i.e. do 3 PCs catch all the AVHRR data? It looks like they catch a large portion, but you don't know the values of PCs above 4. It may be a truncation.
    Another interesting titbit was Eli Rabbett's answer to a post I made at RC. Apparently the satellite data measures temps a few kms from the surface, so you have to include a lapse rate estimate from AVHRR to actual surface before comparing apples to apples. They just considered trends and variation instead of actual estimates. Fascinating.

    • Posted Feb 18, 2009 at 1:34 PM | Permalink

      Re: MC (#23),

      In this case a different instrument was used which actually measures surface temperature by IR emission rather than air temp done by microwave absorption.

      • MC
        Posted Feb 18, 2009 at 1:49 PM | Permalink

        Re: Jeff Id (#24), So Jeff it is apples for apples then. Yet the stuff you guys show is that there is a disparity between AVHRR and AWS trends for each station in the calibration period. Sounds like we need a standback moment (and data) so we can have a proper look. What are the chances though?

      • AnonyMoose
        Posted Feb 18, 2009 at 11:34 PM | Permalink

        Re: Jeff Id (#24),

        In this case a different instrument was used which actually measures surface temperature by IR emission rather than air temp done by microwave absorption.

        And on the ground, apparently they use a PIR… which people like to lie under. Can we see the station log entries where they remove the contaminated data?

  15. Posted Feb 18, 2009 at 1:49 PM | Permalink

    I’m not sure exactly if you’re agreeing or not but here’s my take:

    neigs is the number of decompositions calculated
    regpar is a truncation of the number of eigs used. This happens by default in the TTLS analysis.

    I’m not sure why anyone would ever have these variables different unless TTLS is used in a more general form sometimes.

    Anyway, I’m going to work on other stuff.

    • MC
      Posted Feb 18, 2009 at 5:16 PM | Permalink

      Re: Jeff Id (#26), Yeah Jeff I was agreeing. The standback moment was a more general thing as in you guys have uncovered a very interesting property, enough for everyone else to say, ‘wait hold the machines a bit’.

  16. Dishman
    Posted Feb 18, 2009 at 2:23 PM | Permalink

    Patrick M,

    I don’t have a reference, but it follows from the definition of eigenvalue:

    For matrix A, a value λ is said to be an eigenvalue of A if there exists a non-zero eigenvector K such that AK = λK.

    If you substitute the transpose of matrix A, the eigenvector is the transpose of K, but the eigenvalue remains the same.

    • jc-at-play
      Posted Feb 18, 2009 at 4:46 PM | Permalink

      Re: Dishman (#28),

      Math correction:

      It is NOT generally true that the transpose of an eigenvector is an eigenvector of the transpose. In fact, “transpose of a vector” doesn’t really make sense, in this context.

      It IS true, however, that each eigenvalue of a matrix also is an eigenvalue of its transpose. This follows from the fact that the determinant of a matrix is equal to the determinant of its transpose, since each eigenvalue of A is a root of the equation det(rI - A) = 0.

      Re: Patrick M. (#16),
      I believe that strictly speaking, the SVD of a matrix is NOT the same as the SVD of its transpose, but something similar is true. Here's a derivation from basic principles of linear algebra. The SVD of a matrix A takes the form A = U*D*V^T, with D a diagonal matrix having some special properties. Since the transpose of a product is the product of the transposes, multiplied in reverse order, it follows that

      A^T = (U*D*V^T)^T = V*D^T*U^T = V*D*U^T

      using the fact that D^T = D, since D is a diagonal matrix.

      This is not exactly the same as the SVD of the original matrix A, although the D part remains unchanged.

      Steve: C’mon. The SVD is identical for the relevant purpose. Of course, the U and V matrices are exchanged and transposed. You can make the thing wide and fat if you want by transposing again. You’re being a bit pedantic here. You know the point.

      • jc-at-play
        Posted Feb 18, 2009 at 4:51 PM | Permalink

        Re: jc-at-play (#37),

        In the previous message, all the T’s are supposed to be superscripts (for transpose).

      • jc-at-play
        Posted Feb 19, 2009 at 11:02 AM | Permalink

        Re: jc-at-play (#36),

        In response to Steve’s in-line reply:

        You may have mistaken me for someone who actually understands applied linear algebra. I trust your statement that “The SVD is identical for the relevant purpose;” but personally, I don’t really “know the point.”

        Re: Mark T (#40),

        Yes, I realized belatedly that I had been thinking only of the case where A is a square matrix.

        • Mark T
          Posted Feb 19, 2009 at 11:25 AM | Permalink

          Re: jc-at-play (#63),

          I don’t really “know the point.”

          Not sure if it is the point, but since SVD is a linear transformation, the transpose of the SVD is the same as the SVD of the transpose (which is what Steve meant, but did not specifically state, I think). That implies that the eigenvalues (singular values) are identical, which is really the point, since I guess it is computationally easier to work with more rows than columns (which seems to make sense from a brute-force computation standpoint).

          Btw, I figured you simply mis-stated, as I did in my first follow-up, hehe. I probably just caught my boo-boo quicker because I’m not exactly busy at the office right now!

          ^RomanM: agreed, PC1 does resemble the elevation.

          Mark

  17. Dishman
    Posted Feb 18, 2009 at 2:32 PM | Permalink

    Steve,

    Do you have a more accurate date for the change in amplitude of series 3?

    Which series went from spliced to real data on that date?

  18. MarkR
    Posted Feb 18, 2009 at 2:42 PM | Permalink

    UPDATE: An additional question has been brought up related to why the data seems to be missing from the poles. Dr. Christy also responded:

    As the spacecraft rolls over the pole it does so at an inclined orbit so
    that the highest nadir latitude is about 82 deg with the scanner looking
    out a bit closer to the pole. Since we apply the scan line data mostly to
    the nadir area directly below the satellite, the actual data only go to
    about 83 deg. In the gridded data I interpolate over the pole, but I
    wouldn’t trust the data too much beyond 85 deg.

    Putting myths about UAH and RSS satellite data to rest

    • Posted Feb 18, 2009 at 3:12 PM | Permalink

      Re: MarkR (#31),

      This is a different satellite instrument. The instrument you referenced is the microwave sounder which measures temperature through microwave absorption in a thickness of air. The data in the antarctic paper is actually physical surface temperature measurement by infrared emission (not surface air temp).

    • Kenneth Fritsch
      Posted Feb 19, 2009 at 10:44 AM | Permalink

      Re: MarkR (#31),

      As the spacecraft rolls over the pole it does so at an inclined orbit so
      that the highest nadir latitude is about 82 deg with the scanner looking
      out a bit closer to the pole. Since we apply the scan line data mostly to
      the nadir area directly below the satellite, the actual data only go to
      about 83 deg.

      It is always interesting to note that the ratio of the area south of 82S to the area south of 60S is about 7%, the ratio of 83S to 60S is about 5%, and the ratio of 85S to 60S is 3%.
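
      These ratios follow from spherical-cap areas; a quick sketch in R (treating the Earth as a sphere):

      capfrac=function(lat) (1-sin(lat*pi/180))/(1-sin(60*pi/180)) # area south of lat / area south of 60S
      round(100*capfrac(c(82,83,85)),1) # 7.3 5.6 2.8 – close to the figures above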

  19. Jean S
    Posted Feb 18, 2009 at 2:50 PM | Permalink

    is a property of Regem truncated TTLS with regpar=3

    Yes, this is the case here. The imputed values are obtained through the linear regression model (eq. (1) in Schneider’s paper) in (Reg)EM. Now the regression matrix (denoted by B) is calculated in RegEM-TTLS by truncated total least squares, which means that B is obtained by TLS truncated to the given rank. Now the truncation rank is given in Schneider’s code as

    trunc = min([options.regpar, n-1, p]);

    Since there are many more rows and columns (n and p) in the input here, trunc is set to regpar (=3). In other words, for each time instant, the matrix B is of rank regpar. Hence, if you are looking only at the final result, the data appears to be a linear combination of three vectors (PCs), although it is actually a rank-three linear combination of a higher number of vectors (take also into account that B is different for each time instant).

  20. Dishman
    Posted Feb 18, 2009 at 3:27 PM | Permalink

    Let me see if I understand this correctly:

    TTLS produced a best-fit of 3 new constructed series from the raw series.

    Prior to 1981, one or more of the raw series was filled with reconstructed data from other series.

    Some time around 1981, one or more raw series switched from reconstructed data to (presumably) actual data.

    The switch was different enough from previous data that it warranted its own constructed series (series 3).

    Prior to the switch, series 3 had only slight differences from the reconstructed data. After the switch, it reflected the newly introduced data.

    Series 3 may be one or more stations. It seems likely to me that it is actually composed of several stations and is actually a composite of all stations which began recording real data after 1981.

    This is just my stab at understanding, and I’m not prepared to defend it, but others can probably confirm or reject it.

  21. Jeff C.
    Posted Feb 18, 2009 at 3:59 PM | Permalink

    I have commented on this before over in the comments of Roman’s post. I apologize for banging the same drum, but I think it is relevant to this discussion.

    Dr. Steig actually does give us a peek behind the curtain as to what the real cloud masked data looked like prior to any RegEM or reconstruction manipulation. On page 1 of the SI, he discussed various cloud masking methods, explaining why his results differ from Comiso 2000. He included this graphic.

    Look at figure C and figure D. Steig says these are the trends from the cloud masked data using his new methodology. These are supposed to be actual measured results, with dates/locations where clouds covered the continent removed from the data set (i.e. cloud masked). Figure C is the trend for 1982 to 1999 (same as Comiso) and figure D is the trend for 1982 to 2006.

    Now compare figure C to this plot. These are the trends for the 5509 gridcells of the satellite reconstruction windowed to the 1982 to 1999 time frame of figure C. I’ve tried to use the same color scheme as above.

    Note that the huge area of cooling in the interior is gone. Also note that the spatial detail seems less. The reconstructed trends look to have been smoothed (as a result of being boiled down to 3 PCs presumably).

    There is also the issue of why figure C looks so different from figure A, but that is a discussion for a future thread.

  22. Terry
    Posted Feb 18, 2009 at 5:16 PM | Permalink

    So – PC3 looks completely goofy. A few questions from a layman: Would Steig likely have graphed the PCs as a normal part of doing the research/calcs? And has anyone from the Hockey Team commented on this? I haven’t seen anything from them posted here, the Air Vent, or RC. Just curious.

  23. Mark T
    Posted Feb 18, 2009 at 5:35 PM | Permalink

    D is only diagonal if A is square, though even if not, the diagonal elements of D are unchanged after the transpose. You cannot make the generalization in your last equation, i.e., D^T is not generally equal to D, but their nonzero values are equal (obviously).

    Mark

    Steve: Nope. You get a diagonal D even from rectangular matrices. Please, I don’t want to spend space on this as there is nothing to disagree about.

    • Mark T
      Posted Feb 18, 2009 at 6:18 PM | Permalink

      Re: Mark T (#40),

      D is only diagonal if A is square,

      Oops, D is still diagonal, you just can't say D^T = D unless D is square.

      Mark

    • Mark T
      Posted Feb 19, 2009 at 12:11 AM | Permalink

      Re: Mark T (#40),

      Steve: Nope. You get a diagonal D even from rectangular matrices.

      See my comment in (#42), I corrected myself before you posted this. Your statement in the original post above, however, should be corrected, as the SVD of a matrix is not the same as the SVD of its transpose, though you do get the same singular values.

      Mark

      Steve: As I said above, you get fat and wide back simply by transposing the U and V matrices. We're talking about the same point.

  24. Manfred
    Posted Feb 18, 2009 at 5:50 PM | Permalink

    So Steig used only 3 of over 100 eigenvectors characterising the measured data.

    Then, why didn’t he publish his results on the back of an envelope ?

    Steve: I don’t object to 3 eigenvectors per se. IMO, there might be a case for only using one eigenvector, tho I’m still toying with this. Right now we’re just observing and analysing this sort of thing. The 3 eigenvector thing is interesting mathematically, that’s all that can be said for now.

  25. David Cauthen
    Posted Feb 18, 2009 at 9:06 PM | Permalink

    There ought to be a rule on this blog that after, say, 40 comments, Steve has to write a translation of this stuff for us folks in the cheap seats. My head hurts- I’m going to bed now.

  26. VG
    Posted Feb 19, 2009 at 1:19 AM | Permalink

    Has ANYONE told Nature (journal) about ALL this?

    Steve: We’re just exploring this data right now. Maybe something will come of it, maybe not. Right now, we don’t even know for sure what Steig did. Nor do we have any AVHRR data. Tho perhaps it’s time to ask Nature to deal with the AVHRR data unavailability problem.

    • Jason
      Posted Feb 19, 2009 at 2:41 AM | Permalink

      Re: VG (#46),

      It is WAY too early for anybody to be:

      1. Telling nature about this
      2. Assuming that the ultimate result will render Steig et al useless or
      3. Gloating

      I don’t understand the impatience.

      • bender
        Posted Feb 19, 2009 at 6:55 AM | Permalink

        Re: Jason (#47),
        It is not too early to tell Nature the real story about Antarctica: no continental-scale warming in decades and any warming is restricted to the peninsula. Impatience because the issue is topical. Wait, and you’ve lost your audience. Steig’s study is not “useless”. For some special interests it’s very useful. That’s not to say it isn’t a major distortion of fact.

        • Jason
          Posted Feb 19, 2009 at 7:28 AM | Permalink

          Re: bender (#50),

          The folks at real climate almost immediately admitted that East Antarctica has been cooling recently.

          In the best case, this analysis will show that the Steig et al calculation of a long term continent wide trend was flawed.

          In the more likely case, this analysis will show that the Steig et al trend is merely not statistically significant.

          Steig et al wasn’t terribly earth shattering. Whether Antarctica is warming or cooling, Real Climate has told us that it is consistent with the model predictions.

          The main consequence of Steig et al may be what it reveals about the utility (or lack thereof) of Nature's peer review process.

          I doubt very much that they are eager to print such an analysis of their own failures.

          Any accusation of broken review processes must be withheld until the evidence supporting it is absolutely rock solid, lest the scientific community dismiss it with a wave of its collective hand.

        • bender
          Posted Feb 19, 2009 at 8:17 AM | Permalink

          Re: Jason (#52),

          I doubt very much that they are eager to print such a analysis of their own failures.

          When you submit a correspondence they have no choice but to hand it to an Editor to deal with it. Whether they’re eager to do so is irrelevant. As scientists they are compelled to at least pay lip service to the search for truth. Which means they can’t just dismiss a correspondence addressing substantive aspects of the science. They would look too ridiculous.

        • Jason
          Posted Feb 19, 2009 at 8:32 AM | Permalink

          Re: bender (#53),

          If what has been uncovered thus far were submitted to Nature, I think it is very safe to say that the submission would be ignored, and Nature would not look terribly ridiculous in the process.

          See the Strumpf vs. Liebowitz discussion in Ross’s recent paper. It was only bad luck that resulted in the editor’s decision making process even seeing the light of day.

          Steve: I totally agree. Most readers are far too quick to see gotchas. My sense of this thing is there are probably a bazillion calculations going around in circles, but all you really have to work with from 1957-1980 are a dozen or so surface records and, try as you may, you’re going to end up with linear combinations of these records in the recon and there’s a limited amount of harm that you can do to this sort of data. The big interest for me (and hence the focus) is that I’ve been nibbling around the edges of RegEM for a couple of years, but never really dug into it. This data set gives an excellent foothold for analysing the method without all the complications of bristlecones and upside-down Tiljanders. And the extra input from the Jeffs and Roman has really moved the analysis along. I expect that the main dividend will be in the proxy studies as we’ll end up with usable RegEM tools.

        • Ryan O
          Posted Feb 19, 2009 at 11:00 AM | Permalink

          Re: Jason (#54), (Steve Mc’s inline comment)

          I totally agree. Most readers are far too quick to see gotchas. My sense of this thing is there are probably a bazillion calculations going around in circles, but all you really have to work with from 1957-1980 are a dozen or so surface records and, try as you may, you’re going to end up with linear combinations of these records in the recon and there’s a limited amount of harm that you can do to this sort of data.

          .
          AFA investigating how the AVHRR reconstruction was done, yep, there’s a lot more that needs to be done before it could see the light of day. But there is no question that the reconstruction does not match the instrumental record for those records that are present, and the differences are systematic, not random.

  27. MrPete
    Posted Feb 19, 2009 at 6:18 AM | Permalink

    Note to frustrated commenters (Kenneth F, Ryan O, Craig L, Geoff S, etc 🙂 )…

    AFAIK, you are having to fill out a “captcha” when you comment, because Javascript is disabled in your browser.

    I sometimes disable JS while writing comments, because the (usually handy) preview thing can make everything glacially slow… one solution: temporarily enable JS just before pressing “Submit Comment”.

    Perhaps there’s a different comment preview plugin, that puts the preview in another tab so it is not always active.

  28. bender
    Posted Feb 19, 2009 at 7:20 AM | Permalink

    What are the trend stats on eigenvalues 1 and 2? They appear to be temperature trends with slightly different phasing schedules; 1=continental, 2=regional. Red/Blue loadings therefore implying warming/cooling.
    .
    Eigenvalue 3 is bizarre.

  29. RomanM
    Posted Feb 19, 2009 at 9:21 AM | Permalink

    I am not sure whether everyone understands just how pervasive the 3 PCs in the AVHRR reconstruction really are.

    In Figure 2 of the paper, the authors compare the AWS and AVHRR reconstructions through a visual comparison (no statistics) of the annual averages of the series:

    Solid black lines show results from reconstruction using infrared satellite data, averaged over all grid points for each region. Dashed lines show the average of reconstructed AWS data in each region.

    Since this involves averaging 5509 grid points in one case and 63 AWS in the other, this has to produce results with an extremely high level of certainty, right? Not exactly.

    We have seen in the AVHRR case that each of the grid points is a simple weighted sum of the same three PCs. What happens if we average them? A little high school algebra easily shows that the result must itself be a simple weighted sum of the same three PCs where the weights are the average of the weights of the series you are averaging. Thus, the East Antarctic, West Antarctic and the continental monthly averages are nothing more than simple linear combinations of the same three PCs. The uncertainty level is not reduced by the averaging process because of the high correlation of the series being averaged. It is easy to see that the annual averages will also be just the weighted sum of the annual averages of the three PCs as well.

    For the AWS situation, it is a little better. For the time period before 1980, where the sequences are pure reconstructions, the same considerations hold for the monthly and annual averages. Post 1980, there is real data mixed in and this may offer some reduction in uncertainty levels. Note that any comparison of AWS and AVHRR prior to 1980 is basically controlled by the manned stations which are used to construct both sequences.

    So what does the AVHRR monthly mean sequence look like?

    We can continue from Steve’s script above to express that sequence in terms of the PCs.

    pcs = svd0$v[,1:3]

    #calculate monthly means
    # east is defined as long greater than zero
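    # note: this continues Steve's script and needs recon still in the workspace – skip the rm(recon) step above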
    antarc = rowMeans(recon)
    east = rowMeans(recon[,grid.info$long>0])
    west = rowMeans(recon[,!grid.info$long>0])

    #determine weights
    antarc.reg=lm(antarc ~ pcs[,1] + pcs[,2] + pcs[,3])
    coef(antarc.reg)

    # (Intercept) pcs[, 1] pcs[, 2] pcs[, 3]
    #1.467360e-12 2.873536e+01 2.055228e+00 2.146882e+00

    east.reg=lm(east ~ pcs[,1] + pcs[,2] + pcs[,3])
    coef(east.reg)
    # (Intercept) pcs[, 1] pcs[, 2] pcs[, 3]
    #-2.627066e-12 3.434335e+01 -1.947170e+00 -2.261958e+00

    west.reg=lm(west ~ pcs[,1] + pcs[,2] + pcs[,3])
    coef(west.reg)
    # (Intercept) pcs[, 1] pcs[, 2] pcs[, 3]
    #8.822761e-12 1.866089e+01 9.245322e+00 1.006713e+01

    Since I am not sure of the paper's definition of east, I have taken it to be longitude between 0 and 180. It is easy to change the above script if you wish to have a different definition. The results:

    antarctic = 28.74 PC1 + 2.06 PC2 + 2.15 PC3

    east = 34.34 PC1 – 1.95 PC2 – 2.26 PC3

    west = 18.66 PC1 + 8.25 PC2 +1.01 PC3

    The difference between east and west becomes pretty evident in the influence of PC2. I think that the interpretations made in previous comments on the “meaning” of the PCs were pretty accurate in light of these results.

    By the way, did you notice the fairly strong spatial coherence of the PCs evident in Steve Mc’s maps despite the fact that there was no specific spatial weighting involved in their calculation?

    • Steve McIntyre
      Posted Feb 19, 2009 at 9:44 AM | Permalink

      Re: RomanM (#55),

      Roman, you observed:

      By the way, did you notice the fairly strong spatial coherence of the PCs evident in Steve Mc’s maps despite the fact that there was no specific spatial weighting involved in their calculation?

      as I've mentioned, I've done some experiments to see what the eigenvectors would look like for a process in which you have exponentially decaying spatial autocorrelation and sites distributed as above. (I took a random sample of 500 of the 5509 sites to make the calculation more manageable on my computer.) You get intriguing patterns and I can sort of convince myself that I see Chladni patterns (a la the posts on Stahle spatial covariance that we chatted about). If Antarctica were a perfect circle, my guess is that the match would be closer, but the asymmetry appears to emphasize some "tones" more than others.

      If you have highly asymmetric sampling e.g. the AWS-surface network, the eigenvectors appear to be affected by the site pattern. The concentration of sites in some locations promotes those patterns.

    • Jeff C.
      Posted Feb 19, 2009 at 9:56 AM | Permalink

      Re: RomanM (#55),

      By the way, did you notice the fairly strong spatial coherence of the PCs evident in Steve Mc’s maps despite the fact that there was no specific spatial weighting involved in their calculation?

      I did notice that and plotted out the three eigenvectors of the AWS reconstruction for comparison. While they look vaguely similar, there are significant differences, particularly in the West/East divide of #2 and #3. It makes sense that there should be a spatial pattern as there was a pattern in the original information (the AWS and satellite data). Since the pattern in the eigenvectors is different for the AWS and satellite recons, I think that means the station data was used differently in the infilling (or infilling and fitting for the satellite recon).

  30. RomanM
    Posted Feb 19, 2009 at 9:26 AM | Permalink

    Added note:

    I forgot to show a verification that in fact the calculated results really are expressible as a combination of the three PCs. To do that, e.g. type summary(antarc.reg) in R. Notice that the R-squared is exactly 1 indicating a perfect linear fit. The same holds for the other two regressions.
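
    For example:

    summary(antarc.reg)$r.squared # 1
    summary(east.reg)$r.squared # 1
    summary(west.reg)$r.squared # 1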

  31. RomanM
    Posted Feb 19, 2009 at 9:44 AM | Permalink

    Darn! Learn to read properly!

    west = 18.66 PC1 +9.25 PC2 +10.07 PC3

  32. Jeff C.
    Posted Feb 19, 2009 at 10:07 AM | Permalink

    In the methods section of the paper the authors state:

    The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation, as defined independently by extrapolar instrumental data. The first principal component is significantly correlated with the SAM index (the first principal component of sea-level-pressure or 500-hPa geopotential heights for 20° S–90° S), and the second principal component reflects the zonal wave-3 pattern, which contributes to the Antarctic dipole pattern of sea-ice anomalies in the Ross Sea and Weddell Sea sectors.

    I did some googling of the SAM index and the zonal wave-3/Antarctic dipole and found some charts of these over Antarctica. They do look similar to Steve's maps #1 and #2 above. Makes sense, as the Antarctic temperature pattern should be related to the known climatic phenomena. This probably gave the authors confidence they were on the correct path.

    • RomanM
      Posted Feb 19, 2009 at 11:08 AM | Permalink

      Re: Jeff C. (#60),

      The first principal component is significantly correlated with the SAM index (the first principal component of sea-level-pressure or 500-hPa geopotential heights for 20° S–90° S), …

      I am more down to earth (ice?). I have been looking at topographic relief maps and to my eyes, PC1 sort of looks like this:

      (From: http://terraweb.wr.usgs.gov/projects/Antarctica/accdemsr.html )

  33. Steve McIntyre
    Posted Feb 19, 2009 at 11:24 AM | Permalink

    Jeff, you quoted the following from Steig:

    The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation, as defined independently by extrapolar instrumental data.

    Leaving PC1 and PC2 alone for a moment, I wonder what "meaningful relationship" the PC3 has to the "important dynamical features of high-latitude Southern Hemisphere atmospheric circulation". They don't discuss it – I wonder why 🙂

    Taking their assertion at face value, there was sure a remarkable increase in its amplitude around 1980-82.

  34. Steve McIntyre
    Posted Feb 19, 2009 at 11:31 AM | Permalink

    As for PC1 they say:

    significantly correlated with the SAM index (the first principal component of sea-level-pressure or 500-hPa geopotential heights for 20° S–90° S),

    and PC2, they say:

    the second principal component reflects the zonal wave-3 pattern, which contributes to the Antarctic dipole pattern of sea-ice anomalies in the Ross Sea and Weddell Sea sectors,

    Or it could just be what you get from doing principal components on spatially autocorrelated data – i.e. Chladni patterns.

  35. Ryan O
    Posted Feb 19, 2009 at 12:37 PM | Permalink

    Steve, this line:

    h=function(x) { if (is.na(x)) h=NA else {
    if(x< =max(breaks0)) h= min( (1:length(breaks0)) [ x h[h==0]=1}
    h}

    doesn’t seem to make any sense. 😦

    Steve: Download the ASCII version. It’s a wordpress problem that I don’t know how to fix.

  36. Jeff C.
    Posted Feb 19, 2009 at 2:14 PM | Permalink

    Here are the three eigenvectors for Roman’s reconstruction of the AWS (the AWS recon from only the 3 PCs, not measured data). I ran these a few days ago but had set them aside. I’ll put them up for comparison to Steve’s maps above. I would think these should be comparable to the satellite recon eigenvectors since these are entirely fitted with no measured data.

    Although there are some vague similarities to the satellite maps, they are notably different in magnitude and even sign over large sections of the continent. If these are well-correlated with “important dynamical features of high-latitude Southern Hemisphere atmospheric circulation”, I would think they’d agree better.

    • Jean S
      Posted Feb 19, 2009 at 3:09 PM | Permalink

      Re: Jeff C. (#70),
      It seems to me that the "robustness" purported in Steig's paper boils down to the fact that RegEM-TTLS with regpar=3 and the PCA-based method (with 3 PCs) are ultimately very similar. So instead of thinking more of the wonders of RegEM-TTLS, I think a more fruitful approach would be running the data with different RegEM parameters. My preliminary investigations with default RegEM parameters (just remember to increase the maximum iteration value) and AWS data indicate that the final result is not so "robust" as they indicate in the paper. Unfortunately, I'm very busy right now, and can't investigate that any further for a while. Maybe you, or someone who already has the data and (Matlab) code ready to run, would like to continue in that direction.

      • Steve McIntyre
        Posted Feb 19, 2009 at 4:16 PM | Permalink

        Re: Jean S (#70),

        with regpar=1 (which has much to recommend it), there’s a slight negative trend from 1957 to the present. regpar=2 looks like regpar=3.

        • Jean S
          Posted Feb 19, 2009 at 4:25 PM | Permalink

          Re: Steve McIntyre (#71),
          Yes, and running the analysis with standard RegEM parameters (ridge regression) seemed to reduce the positive trend considerably in my preliminary tests.

  37. Steve McIntyre
    Posted Feb 19, 2009 at 5:08 PM | Permalink

    Jean S, here are AWS recons for regpar=1 and regpar=6. A slight negative trend overall for regpar=1; and no trend since 1965 for regpar=6 (or 5). Looks like the regpar for maximum trend is 3. What a surprise. (This is all with old Harry etc)


    • bender
      Posted Feb 19, 2009 at 5:32 PM | Permalink

      Re: Steve McIntyre (#73)

      What a surprise.

      It’s Prunuspicker’s rule. Use the number of PCs required to get the desired trend. No, wait, Preisendorfer.

      • John F. Pittman
        Posted Feb 20, 2009 at 11:39 AM | Permalink

        Re: bender (#74), Not a comment to read drinking hot tea.

  38. Mark T
    Posted Feb 19, 2009 at 5:55 PM | Permalink

    I can almost see the snark in your face every time you post one of these replies. Maybe it’s because Bender is my 2nd favorite Futurama character (my son calls him Vender). 🙂

    Mark

  39. Jeff C.
    Posted Feb 19, 2009 at 7:38 PM | Permalink

    This looks like it was covered earlier in the thread, but I’ll put it up FWIW. The Matlab version limits you to having regpar less than or equal to neigs. If regpar is larger than neigs, Matlab throws an error.

    With this limitation, here are the Matlab slopes, with neigs increased to match regpar once regpar exceeds 3 (AWS, old Harry, 1957-2006 anomaly calculation window):

    Regpar Neigs Slope (deg/decade)
    1 3 -0.073
    2 3 +0.164
    3 3 +0.157
    4 4 +0.168
    5 5 +0.113
    6 6 +0.091

    Steve – I'm assuming your R version allows you to have a regpar larger than neigs? Was your second plot regpar = 6, neigs = 3?

    • Posted Feb 20, 2009 at 12:40 AM | Permalink

      Re: Jeff C. (#76),

      Was your second plot regpar = 6, neigs =3?

      Jeff, unless I missed something the neigs represents the number of eigenvectors calculated and the regpar is a truncation of the matrix. I didn’t see where the neigs were used anywhere else in the R code.

      Did someone see something different?

      • Jeff C.
        Posted Feb 20, 2009 at 11:22 AM | Permalink

        Re: Jeff Id (#77),

        I’m not quite sure what is going on, and I wasn’t very clear in my comment. I was having difficulty replicating Steve’s second plot in #73 using Matlab. Does your comment in #22 mean that Matlab RegEM will force regpar to a value equal to neigs if you try and set regpar to a number larger than neigs?

        I don’t understand why Matlab RegEM would have that limitation, but I was speculating that maybe Steve’s R version doesn’t, thus the discrepancy.

        • Steve McIntyre
          Posted Feb 20, 2009 at 11:27 AM | Permalink

          Re: Jeff C. (#78), Jeff, I didn’t set up neigs as I couldn’t see any purpose for it.

  40. Mark T
    Posted Feb 20, 2009 at 11:29 AM | Permalink

    My guess would be that using neigs larger than regpar results in a situation, in the MATLAB code, in which neigs is used to index into the array after it has been truncated by regpar. MATLAB would, by design, generate an "index out of bounds" error since the dimension would no longer exist. Whether R does this or not I cannot say (in a compiled C program, the index would simply go into subsequent memory, which may or may not create a seg-fault), but (#77) seems to indicate that the R version compiled by Steve avoids this. Perhaps there needs to be some check to choose the smaller of the two, i.e., the smaller of the array dimension or the index.

    Mark

    • Jeff C.
      Posted Feb 20, 2009 at 12:03 PM | Permalink

      Re: Mark T (#80), I think you're close, as that is the error I was getting (wording different but same intent), but it was when regpar was larger than neigs, not vice versa. I'm away from my computer, but I'll experiment with neigs when I get back to my office.

  41. Ryan O
    Posted Feb 20, 2009 at 11:39 AM | Permalink

    A general question on PCA . . . if your processing power allows, why would you limit the number of PCs?

    • Mark T
      Posted Feb 20, 2009 at 11:47 AM | Permalink

      Re: Ryan O (#82), It’s not necessarily an argument about processing power, but one of dimensionality (or rank). You could have, for example, 500 series, but the entire space is only described by 3 or 4 vectors. There’s no reason to maintain the full data set, and it may actually be detrimental to do so for other reasons (rounding, overflow, etc.). Also, part of the goal is to find the dominant signals in the underlying data, with the assumption that the smaller signals are either noise, or not relevant because they contribute so little.

      Btw, there is a counterpart to PCA called minor component analysis which seeks to understand the information contained in the smaller components (at least, that’s my take, I’ve never directly studied MCA).

      Mark

      • Ryan O
        Posted Feb 20, 2009 at 11:58 AM | Permalink

        Re: Mark T (#83), Wasn’t quite what I was getting at . . . my terminology here is poor. Speaking from the point of someone who hasn’t done any PCA, I would continue including PCs until the results stopped changing. Above, we see significant differences (in terms of the magnitude of the effect we are investigating) by including #4, 5, and 6. This would indicate to me that PCs 1-3 are insufficient to fully capture the variation in the data set and PCs 4-6 (and maybe more) are relevant.
        .
        That is the context of the question. In a case where including additional PCs makes a difference in the results, what justification would there be to exclude them?
        .
        I apologize for the poor wording of my initial question. 🙂

        • Mark T
          Posted Feb 20, 2009 at 12:39 PM | Permalink

          Re: Ryan O (#84),

          Speaking from the point of someone who hasn’t done any PCA, I would continue including PCs until the results stopped changing. Above, we see significant differences (in terms of the magnitude of the effect we are investigating) by including #4, 5, and 6. This would indicate to me that PCs 1-3 are insufficient to fully capture the variation in the data set and PCs 4-6 (and maybe more) are relevant.

          Well, that is what I was saying you should do, but that may not clearly be the case here (I haven’t run the numbers myself). What you are referring to as “results stop changing” may also be more extreme than is required. If the MSE only changes by 1%, even though it may look significant to the eye, it doesn’t really mean anything in the end and is therefore not interesting. Now, if PCs 4-6 contain, say, 10% of the variance, then maybe it makes sense to include them, but even with that, they will change the outcome by no more than about 1 dB, which is pretty small. In other words, your question

          In a case where including additional PCs makes a difference in the results, what justification would there be to exclude them?

          needs to be put in terms of actual increase in error without them, which is, as I’ve discussed, rapidly diminishing below 10%.
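
          If you want to put a number on it, here is a minimal sketch (assuming X is the centered data matrix being decomposed): the fraction of variance carried by each PC falls straight out of the singular values.

          s = svd(X)
          pct = 100*s$d^2/sum(s$d^2)       # percent of total variance per PC
          round(cumsum(pct)[1:6], 2)       # cumulative percent through PC 6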

          Re: Jeff C. (#85), OK. Indexing either way, which is what it sounded like. I’d have to dig into the code to better understand. Maybe this weekend if you want me to.

          Mark

        • Ryan O
          Posted Feb 20, 2009 at 1:28 PM | Permalink

          Re: Mark T (#86),

          What you are referring to as “results stop changing” may also be more extreme than is required.

          .
          What I meant was that it’s dependent on the effect you are looking for. 🙂
          .
          When it comes to the recons, we’re looking for changes on the order of 1/10 degree C in data with a noise amplitude of +/-5 to 10 degrees . . . in other words, our signal is very small compared to the noise and the MSE matters more than if our signal was on the order of 1 degree, or 2, or 5.
          .
          Not only that, we’re looking for trends in specific geographic areas. As Steve’s plots show, the eigenvectors have geographical significance. So while including additional PCs might not change the overall error much, it would seem imprudent to exclude them on that basis alone, as they may reduce the error significantly in local regions.
          .
          Or am I missing something? Sorry if these seem overly pedantic or basic. 🙂

        • Mark T
          Posted Feb 20, 2009 at 2:21 PM | Permalink

          Re: Ryan O (#87),

          MSE matters more than if our signal was on the order of 1 degree, or 2, or 5.

          MSE as I mentioned is a relative value, i.e., in terms of percentage of the total signal. I commented on PCs 4-6 in the other thread, btw, particularly noting why they might matter in this case (which I think is an unphysical representation).

          Mark

  42. Ryan O
    Posted Feb 20, 2009 at 2:54 PM | Permalink

    Thanks, Mark. 🙂
    .
    If you want to see the eigenvectors in action based on how they predict temperature, here’s a little movie in R. The two text files are: 1) the AWS recon and manned stations, centered and spliced; and 2) just the manned stations, centered and spliced. They’re easy to pick out . . . as are the eigenvectors.
    .
    To use, set sat_recon to the matrix you want (either spliced.manned or spliced.all). The rest of the instructions are commented in the last 4 lines of the script.
    .
    http://www.mediafire.com/file/imkymyqz4fw/MeansPlot.R
    .
    http://www.mediafire.com/file/mnyw15yy51m/spliceall.txt
    .
    http://www.mediafire.com/file/5qdzcuxynut/splicemanned.txt

  43. Mark T
    Posted Feb 20, 2009 at 3:11 PM | Permalink

    Hehe, sounds like I have a “learn R” note on my agenda now. 🙂 Thanks.

    Maybe I’ll look at this stuff more this weekend. I’d be willing to bet, btw, that if you look at the impacts the various PCs have, PCs 3-6 will be unnoticeable pre-1980, and much larger post-1980, i.e., the percentage of variance they describe pre-1980 is near zero, but much larger post-1980, which averages out to the values used in the recon.
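
    One way to check that bet, as a sketch (assuming pcs is the 600 x 3 matrix of monthly PC time series from the SVD, starting in 1957): compare the variance of each PC before and after 1980.

    yrs = 1957 + (0:599)/12                     # time axis for 600 monthly values
    pre = yrs < 1980; post = !pre
    rbind(pre = apply(pcs[pre, ], 2, var),      # variance of each PC pre-1980
          post = apply(pcs[post, ], 2, var))    # and post-1980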

    Mark

    • Ryan O
      Posted Feb 20, 2009 at 4:18 PM | Permalink

      Re: Mark T (#90),

      Maybe I’ll look at this stuff more this weekend. I’d be willing to bet, btw, that if you look at the impacts the various PCs have, PCs 3-6 will be unnoticeable pre-1980, and much larger post-1980, i.e., the percentage of variance they describe pre-1980 is near zero, but much larger post-1980, which averages out to the values used in the recon.

      .
      This one is a different situation than the AWS recon. In this case, there are only the 3 PCs because the entire recon is generated exclusively from them. Without the processed satellite data, we can’t see what 4-6 would be.
      .
      And I would be willing to bet that PCs 4-6 are significant, because the trends of the grid points corresponding to the manned station locations are very different than the trends from the actual ground observations.

  44. Mark T
    Posted Feb 20, 2009 at 4:25 PM | Permalink

    Uh, I was speaking in general terms, but that’s cool, too, since I didn’t notice the distinction anyway – good to know when I actually delve into it. It’s getting confusing jumping between threads, I think.

    Mark

  45. Ryan O
    Posted Feb 20, 2009 at 9:49 PM | Permalink

    Hopefully not too OT . . . I started looking at the PCA reconstruction and eigenvector 2 looks to be the inverse of the AVHRR eigenvector 2. I didn’t know where else to put this . . .

    • Jeff C.
      Posted Feb 20, 2009 at 10:25 PM | Permalink

      Re: Ryan O (#94), Don’t forget your day job! Seriously, this is good stuff. The PCA is supposed to use similar methodology to the RegEM, except they replace missing values with the monthly mean instead of letting RegEM do it. Why would the eigenvector flip? It doesn’t make any sense, particularly when they say “The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation” as they do in the paper.
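
      As I read that description, the infilling step amounts to something like this sketch (X here is a hypothetical months x stations anomaly matrix with NAs for the missing values, starting in January; it is not their actual code):

      month = rep(1:12, length.out = nrow(X))      # calendar month of each row (Jan start assumed)
      for (j in 1:ncol(X)) {
        for (m in 1:12) {
          idx = (month == m)
          mu = mean(X[idx, j], na.rm = TRUE)       # monthly mean of the available values
          X[idx & is.na(X[, j]), j] = mu           # fill the missing ones with it
        }
      }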

      There is also a 15 station recon using RegEM that may have some interesting features.

      • Jeff C.
        Posted Feb 20, 2009 at 10:36 PM | Permalink

        Re: Jeff C. (#95), After thinking for a few minutes (should do that before I type), the fact that it flipped probably isn’t that significant. Ryan – can you put up the plots of the 3 PCs for this recon?
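
        The sign of any PC/eigenvector pair is arbitrary anyway: flip both and the reconstruction is unchanged. A quick check in R (X can be any matrix, it doesn’t have to be the recon):

        s = svd(X)
        s$u[,2] = -s$u[,2]; s$v[,2] = -s$v[,2]       # flip the sign of pair 2
        range(s$u %*% diag(s$d) %*% t(s$v) - X)      # still reproduces X to rounding error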

  46. Jean S
    Posted Feb 21, 2009 at 2:13 AM | Permalink

    re: neigs
    In the RegEM code, neigs controls the number of eigenvalues actually calculated in the TTLS regression (peigs.m). Of course, it should be at least as large as “the number of eigenvalues actually used” (regpar). By default, neigs is set to the maximum (i.e., all eigenvalues are calculated), so you don’t have to worry about that. The purpose of neigs is purely computational: there is no point in calculating all the eigenvalues of a large matrix if you only plan to keep a few of them.
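
    On the R side, the same economy is available with a truncated decomposition, e.g. via the irlba package (not part of Steve’s script, purely an illustration), which computes only the leading singular triplets instead of the full set:

    library(irlba)          # not used in the original code
    s = irlba(X, nv = 3)    # only the 3 leading singular values/vectors
    round(s$d, 4)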