Yesterday, I showed an interesting comparison between the 3 Steig eigenvectors and “Chladni patterns” generated by performing principal components on a grid of spatially autocorrelated sites on a disk. Today I’ll show a similar analysis, but this time using a random sample of points from actual Antarctica. The results are pretty interesting, to say the least.

Key points for the disk included:

1) the first disk eigenvector and the first Steig eigenvector weight interior points more heavily than points around the circumference, but the first Steig eigenvector is displaced somewhat to the east. I speculated as follows:

Antarctica is by no means perfectly circular and the “center of gravity” is displaced to the east as well. My guess is that the same sort of graphic done on the actual Antarctic shape will displace to the east as well. I’ll check that some time.

2) the 2nd and 3rd Steig eigenvectors and disk eigenvectors were both “two-lobed”. In the disk, any axis orientation is as likely as any other, while the axis of the Steig eigenvectors could perhaps be construed as being related to the peninsula and the Transantarctic Mts.

The Steig AVHRR grid information contains 5509 gridcells with lat-longs. I took a random sample of 300 cells (by taking a random sample from 1:5509 and taking the corresponding gridcells). I calculated the distances between gridcells (a network with 90,000 from-to pairs) and the correlations assuming an exponential decorrelation of exp(-distance/1200) – this is sort of consistent with what we see, but I’m mainly just experimenting right now. I converted this into a 300×300 correlation matrix and took principal components. I then used the akima package to make this into a contour map (converting everything into x-y coordinates and using Roman’s pretty extraction of the Antarctic outline from mapproj to overlay the continent onto the contour map). I need to white out some of the ocean areas, but that’s a little fiddly and not germane to the plots shown below.
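The analysis above was done in R; for readers who want to experiment, here is a minimal Python/numpy sketch of the same steps. The lat-longs below are randomly generated stand-ins for the real AVHRR grid file, and the 1200 km decorrelation scale is the same experimental choice described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 5509 AVHRR gridcell coordinates:
# random lat-longs south of 60S (the real analysis reads Steig's grid file).
lat = rng.uniform(-90.0, -60.0, 5509)
lon = rng.uniform(-180.0, 180.0, 5509)

# Random sample of 300 gridcells, as described in the post.
idx = rng.choice(5509, size=300, replace=False)
lat_s, lon_s = np.radians(lat[idx]), np.radians(lon[idx])

# Great-circle distances (km) for all 300 x 300 from-to pairs.
R = 6371.0
cosang = (np.sin(lat_s)[:, None] * np.sin(lat_s)[None, :]
          + np.cos(lat_s)[:, None] * np.cos(lat_s)[None, :]
          * np.cos(lon_s[:, None] - lon_s[None, :]))
dist = R * np.arccos(np.clip(cosang, -1.0, 1.0))

# Assumed exponential decorrelation with a 1200 km scale.
C = np.exp(-dist / 1200.0)

# Principal components = eigenvectors of the correlation matrix,
# reordered so the largest eigenvalue comes first.
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

print(C.shape)  # (300, 300)
```

Contouring the leading columns of `evecs` over the sampled locations (akima does this step in the R version) reproduces the Chladni-style patterns.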

On the left as before are the Steig eigenvectors; on the right are eigenvectors from the above procedure (with the order of the 2nd and 3rd eigenvectors reversed for a reason that will be obvious). Using preferred Team terminology, I submit that the patterns are “remarkably similar”.

As before, let’s return to Steig’s assertions about these three eigenvectors (for which no evidence was provided):

The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation, as defined independently by extrapolar instrumental data. The first principal component is significantly correlated with the SAM index (the first principal component of sea-level-pressure or 500-hPa geopotential heights for 20S–90S), and the second principal component reflects the zonal wave-3 pattern, which contributes to the Antarctic dipole pattern of sea-ice anomalies in the Ross Sea and Weddell Sea sectors 4,8

Now consider some of the following as possible “confirmation” that the form of these eigenvectors results from nothing more than principal components on spatially autocorrelated series on a figure with an Antarctic shape.

In addition to the high interior weighting of the 1st eigenvector, it is displaced towards the east as I predicted.

The orientations of the 2nd and 3rd eigenvectors now match the 3rd and 2nd spatially autocorrelated eigenvectors. So the axis orientations seem to be derived merely from the shape of the continent. There is a little extra oomph in the eigenvector with a NW-SE axis relative to the eigenvector with a perpendicular axis.

I’m not saying that this model explains everything in the Steig eigenvectors, but it sure accounts for most of the major features.

Note: we still haven’t seen any actual AVHRR data, only the rank 3 AVHRR version. As Jean S observed, it appears increasingly likely that the rank 3 data was what Steig, Mann et al used in their RegEM process. Keep an eye on this story.

## 110 Comments

Looks like the coup de grace. Akin to the hockey stick, where the outcome was shown to be an artifact of the methodology. I wonder what components you would get for geometric shapes like a triangle?

Re: Omar Adamson (#1),

Uh, no, this does not look like a “coup de grace”.

To be sure, there are now multiple potential problems looming over the Steig analysis: Was original or reduced data analyzed? Does their trend result depend on their choice of PC retention? But the fact that the leading principal components of spatial temperature data look like low-order multipoles is not one of those problems.

That the dominant spatial temperature patterns in Antarctica are long-wavelength patterns related to the continent’s geometry should come as no surprise to anyone. The spatial temperature patterns can look like this regardless of whether the temporal trend is rising or falling – that is a separate question.

Re: David Wright (#12),

The reported result depends only on shape and distance with no reference to the topography. Is this not an indication that the connection to climate is non-physical?

Re: David Wright (#12),

Exactly so – this was the point of my calculation in the previous thread (#27). The eigenvectors that Steve computed for the spatial correlation matrix on the disk are just those for the Laplacian, and hence for the wave equation (and drum membranes etc). So with Antarctica; and the eigenvectors of the wave equation are described by terms like SAM and zonal wave-3 patterns, which is the point Steig is making. As DW says, there are still problems – I can’t see that it is a zonal wave-3 – but the fact that eigenvectors of the wave equation and of spatial correlation correspond is not a surprise.

Re: David Wright (#12),

I have to agree with David. These plots do not show that Steig is “wrong”. The color of the plots is NOT temperature but weights. The 3 PCs are used in Steig (AFAICT) to weight the surface station data and extend them across space (interpolate) (Steig? is this right? oh well…). If the pattern is exactly what one would get from the shape of the continent and local autocorrelation, then that is ok. It makes the talk about the SAM index and zonal patterns of wind rather suspect, but not the reconstruction. If anything, it says there is NOT some special monkey business that extends the peninsula station data to an unrealistic extent in space.
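The correspondence between correlation-matrix eigenvectors and Laplacian (drum) modes can be checked numerically in one dimension, which is easier to see than on a disk. The sketch below is illustrative only (the 0.3 correlation length is an arbitrary choice): the second eigenvector of an exponential correlation matrix on an interval closely matches the second free-boundary Laplacian mode, cos(πx).

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)

# Exponential spatial correlation on the unit interval.
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)
evals, evecs = np.linalg.eigh(C)
pc2 = evecs[:, -2]                    # eigenvector with 2nd-largest eigenvalue

# Second Laplacian eigenmode with free (Neumann) ends: cos(pi * x).
mode = np.cos(np.pi * x)
mode /= np.linalg.norm(mode)

# Up to an arbitrary sign, the two shapes are nearly identical.
overlap = abs(pc2 @ mode)
print(overlap)
```

The same Sturm-type node-counting holds: the k-th eigenvector has k − 1 sign changes, just like the standing-wave modes of a drum membrane.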

I agree that using the permanent ice shelves would improve the fit because this is “land” down there.

Re: Craig Loehle (#19),

I also agree. What the plots show to me, though, is the mechanism for substantial trend blending, which would give a poor-quality result, though not a necessarily incorrect one.

Re: David Wright (#12),

People do not need to go a bridge too far.

Personally, as I said at the outset, I would be surprised 1) if Antarctica weren’t warming along with the rest of the world, and 2) even if it weren’t, the physical circumstances of Antarctica are peculiar enough that I personally would not consider problems with modeling its perhaps relatively unique features as “refuting” every other aspect of everything else. (Though I also believe that any such problems should be more clearly reported by IPCC than they are.)

Having said that, it is legitimate to inquire as to whether Steig et al have proven what they claim to have proved.

At this stage, I think that we’ve shown that Steig provided no evidence for their claim that the three eigenvectors correspond to physical processes – and, in particular, to the claimed physical processes. We’ve provided a highly plausible alternative explanation of these three eigenvectors as simply an artifact of spatially autocorrelated data on a geometric shape.

As to #12, you say: “the dominant spatial temperature patterns in Antarctica are long-wavelength patterns related to the continent’s geometry should come as no surprise to anyone.” First of all, I’ve not said that these things are the “dominant spatial temperature patterns” – people have to get out of the habit of reifying principal components. The PC3 corresponding to the 3rd eigenvector is total nonsense and cannot possibly correspond to a physical process.

I think that we can pretty well stick a fork in the idea that these eigenvectors are produced from the claimed physical processes. Now perhaps Steig and Mann have some evidence for these claims that they haven’t produced yet. But until such evidence is produced, I personally regard these claims as mere arm-waving. It is regrettable that magazines like Nature permit this sort of arm-waving claim but they do. In the absence of them providing any support for the claims, I’m not sure how you’d go about “refuting” their claims beyond the sort of alternative explanation provided here.

Does this “refute” their reconstruction? Not per se, but together with other lines of analysis, their proofs are looking very precarious. The possibility – and at this point probability – that they used a rank 3 version of the AVHRR data in their RegEM meatgrinder should be worrying to people who relied on these results. If so, my guess (and it’s just a guess right now, rather than an assertion) is that their results would be worthless, though not necessarily wrong – if you permit the distinction.

I hasten to say that we don’t “know” that they used the rank 3 version as input to the meatgrinder – Steig has refused to provide data and code to show what he actually did. But there’s increasing evidence that this happened. Again I’m not trying to assert that this has been established – merely indicating where analysis seems to be going.

Re: Steve McIntyre (#22),

I fail to see how David Wright, in the quoted statement, is “reifying” PCA. I guess you are merely asserting that orthogonality is not guaranteed to lead to physicality? Claims of physicality need to be proven.

Re: David Wright (#12), and Craig,

But this isn’t the dominant pattern! The dominant pattern is of no warming over the bulk of Antarctica, and strong warming of a small region, the peninsula. Steig et al even admit this in their paper! (“Although the Antarctic Peninsula is one of the most rapidly warming locations on Earth, weather stations on the Antarctic continent generally show insignificant trends in recent decades”). This dominant pattern would only show up in the higher PCs that show finer structure. The Steig et al “reconstruction” uses only the first 3 PCs, which blurs the Antarctic peninsula warming over the whole continent, leading to their false conclusion. Sorry to repeat this but it needs to be said.

The other really simple point that needs to be repeated is Alexander Harvey’s in the previous thread. (“I feel that their is a tremendous joke going on here that I am not sharing in. Or is the impossibility of the data the joke”). The emperors have no clothes – the paper is supposed to be about warming of W Antarctica since 1957, but there is no data for this area over this period.

Re: PaulM (#23),

Yes. There is a big difference between what the article says and the alarmist message that got amplified in the press release and subsequent media discussion. But again: do you hear Steig and Mann complaining?

I wonder what happens if you erect a wall around the continent and force all that spills into the ocean to stay within the boundaries. On a circle, the PCs avoid crowding the edges. Are all of Steig’s cells over land? Do cells over water have any special characteristics, such as always having a certain value?

But the obvious next step would be to process all the grids rather than a sample. Did you only do some because of sampling cost (not having all data or manually translating coordinate systems) or due to computational cost?

The number of from-to cells is the square of the number of cells – there are 5509 gridcells, so this makes a data frame with over 30 million rows. The program itself is very simple but usually things this large don’t work on my computer.
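For what it’s worth, the full 5509×5509 distance matrix is about 120 MB as float32, which fits in memory if it is filled block by block rather than expanded into a 30-million-row from-to data frame. A hypothetical Python sketch (function name and chunk size are my own choices):

```python
import numpy as np

def pairwise_dist_chunked(lat, lon, chunk=500):
    """Great-circle distances (km) computed block by block, so the full
    n x n matrix is filled without building an n^2-row from-to table."""
    latr, lonr = np.radians(lat), np.radians(lon)
    n = len(lat)
    out = np.empty((n, n), dtype=np.float32)   # float32 halves the memory
    for i in range(0, n, chunk):
        sl = slice(i, i + chunk)
        cosang = (np.sin(latr[sl])[:, None] * np.sin(latr)[None, :]
                  + np.cos(latr[sl])[:, None] * np.cos(latr)[None, :]
                  * np.cos(lonr[sl][:, None] - lonr[None, :]))
        out[sl] = 6371.0 * np.arccos(np.clip(cosang, -1.0, 1.0))
    return out
```

The eigendecomposition of the resulting 5509×5509 correlation matrix is the heavier step, but it is also within reach of an ordinary machine.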

Steve, another outstanding piece of work. As always. Your continued push and production are laudable and surprising.

One suggestion, unless you’ve done it. I suspect that the fit would improve greatly if you include (as Steig et al. appear to have done) the ice shelves. There’s a lot of interest, and thus a lot of data, there. Take a look. The darkest red in Steig’s eigenvector 2 is on the Ross Ice Shelf.

Strangely, ice is a pretty good conductor of heat, and there’s warmer water underneath. As a result, the temperature of the ice is affected by the ocean temperatures. This is not true where there is no underlying water layer. Couldn’t begin to guess what that would do to an eigenvector, though … always more to learn.

Congratulations,

w.

I have put my simple take on this on my blog, but have I oversimplified or misunderstood it? Please correct me if I’m wrong.

…The groove of an LP is a single signal. But we know it is made up of many individual signals combined, the drums, the guitars and the janitor on cowbells etc. What the consensus scientists tell us is that the magic way they play the track, a way they won’t reveal, shows that the thousands of individual signals, the weather data, tell them that the Antarctic is cooling.

But “auditing” the data shows that it was only recorded on a three-track machine; the other tracks were left on the studio floor. Each track is a special setting of the control knobs to record the incoming noise. And now that we know what those settings are, we can see what happens if you record random noise in the same way (put random data into the formula): you get very similar results to what the scientists claimed was significant proof.

If you play any random music backwards you will hear that Paul is dead, if that is what you want to hear…

Or even that it is warming – (sorry cut and paste error)

There might be someone somewhere prepared to bet that on a larger computer, with the shape of the spatial eigenvectors constrained to the land + ice shelves, it would be possible to differentiate Steig’s eigenvectors from spatially autocorrelated random numbers… and it’s not going to be me.

I know that a lot of lurkers might wonder about the significance of these pictures, as I would have even a year ago, but let me tell those lurkers that this is a stunning result – but it’s a stunning result that only statisticians would be impressed with.

Re: John A (#7),

Could this be done using a BOINC client? If so, I for one would be happy to subscribe to such a project.

Steve,

Once again just as you’ve done so many times in the past you have more than adequately demonstrated that the recently hyped up claims are entirely an artefact of the cherry picking of data and the method used to analyse the data.

“We now see warming is taking place on all seven of the earth’s continents in accord with what models predict as a response to greenhouse gases,” said Eric J. Steig, a professor of earth and space sciences at the University of Washington in Seattle, who is the lead author of a paper to be published Thursday in the journal Nature.

Because the climate record is still short, more work needs to be done to determine how much of the warming results from natural climate swings and how much from the warming effects of carbon dioxide released by the burning of fossil fuels, Dr. Steig said.

But Drew T. Shindell of the NASA Goddard Institute for Space Studies in New York, who is another author of the paper, said, “It’s extremely difficult to think of any physical way that you could have increasing greenhouse gases not lead to warming at the Antarctic continent.”

see here for more details

Well done once again Steve, keep up the good work.

KevinUK

My admiration grows for this site and your work Steve.

Great work, although I probably don’t understand all the implications!

Scientists are SUPPOSED to declare everything (they can think of) that might disprove their results (*), otherwise the result is not science. Since Team members often don’t seem to pay more than lip service to this, we have to rely on a few people like SteveM to act as their scientific conscience.

(* = see Richard Feynman’s article on Cargo Cult Science if you don’t believe me)

Here is link to a symposium on peer reviewing that might interest some people here. http://www.iiis2009.org/wmsci/website/default.asp?vc=27

Not to be totally OT:

I don’t see anything special in the eigenmodes. They look as low-order modes should. Both Steig’s and Steve’s. I would have been surprised if they had been different. The question is whether Steig’s results are anything meaningful if he used the eigenvector-generated data.

David Wright, thanks for the post.

When analyzing anything using a new method, the first thing one has to do is to see what happens with “red noise” data. That is, data which are not totally independent: they have something in common, in particular spatial or temporal autocorrelation.

The result that one gets from a given method (RegEM, PCs, whatever) on semi-random data forms the null hypothesis. That’s the shape and pattern of signal that occurs solely as a result of the pattern of spatial autocorrelation, which is to say, as a result of the shape of the continent. No other reason. That’s what you can expect to see. No assumptions necessary about warming or cooling. No need for the signal to be temperature. It’s just random spatially autocorrelated data. The oddity is, with random data, you get very nearly the patterns shown by Steig.

Now: if that’s the signal we get with no warming or cooling, just random, how can we diagnose warming and cooling?
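For lurkers, “random spatially autocorrelated data” of this kind can be simulated directly: multiply white noise by the Cholesky factor of an assumed correlation matrix. A toy Python sketch on a disk (the site count and correlation length are arbitrary choices of mine, not Steve’s actual settings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy set of 50 sites on a unit disk (a stand-in for the continent's shape).
pts = rng.uniform(-1.0, 1.0, (500, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0][:50]

# Exponential spatial correlation, as in the post's null model.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-d / 0.5)

# One realization of the null: correlated noise = L @ white noise,
# where L is the Cholesky factor of the correlation matrix.
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(C)))
field = L @ rng.standard_normal(len(C))

# Across many realizations the sample covariance converges to C, so the
# PCs of such pure noise converge to the purely geometric eigenvectors.
sims = L @ rng.standard_normal((len(C), 2000))
C_hat = np.cov(sims)
```

Each `field` is one “no warming, no cooling” map; run the analysis pipeline on it and whatever pattern comes out is attributable to shape and autocorrelation alone.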

The obvious answer is, first subtract out the part of the signal that is inherent in the continent’s shape from the full signal. What’s left after that is gone is the deviation from the null case. That’s the effect of something other than random spatially autocorrelated data.

The danger, of course, is to mistake the patterns created by the physical shape of the continent for a signal. Steig et al. appear to have made this mistake, and I can see why. It’s not an obvious mistake.

Doesn’t mean there isn’t a signal. There may well be a signal in there. But to find it, first you have to subtract out the natural resonance patterns resulting solely from the shape of the continent. What’s left is your signal.

Or noise …
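The “subtract out the geometric part” step above amounts to an orthogonal projection: remove the components of the observed field that lie along the leading eigenvectors of the shape-only null correlation matrix. A sketch (the function name and the default of k = 3 are mine, chosen only to mirror Steig’s retention of 3 PCs):

```python
import numpy as np

def remove_geometric_modes(field, null_evecs, k=3):
    """Subtract the projection of a spatial field onto the first k
    eigenvectors of the null (shape-only) correlation matrix.
    What remains is the deviation from the geometric baseline."""
    V = null_evecs[:, :k]              # columns = leading null eigenvectors
    return field - V @ (V.T @ field)
```

By construction the residual is orthogonal to the removed modes, so whatever pattern survives cannot be attributed to the continent’s shape alone.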

I would say this would be a huge problem for Steig. He failed to calculate a baseline. He mistook the signal resulting solely from the shape of the continent for the signal resulting from the temperature changes … gonna be hard to hand-wave that away. I suspect this one might be the deal-breaker.

It raises an interesting question for another time, however. Is the remaining signal, after removal of what we might call the inherent geometric signal, large enough to tell us anything? My vote is on no, because the ground station records are so few, so short, and so scattered.

It also suggests another line of analysis, which would be to use random spatially autocorrelated data at just the geographical locations used by Steig. It’d have to be a Monte Carlo analysis, there are so few of them. Steve’s method is more elegant. But I think there’s more to be pulled out of this line of analysis by using Steig’s locations.

w.

There is also water under part of continental Antarctica, although its effect upon surface temperatures is unknown.

“Alps-like Mountain Range Exists Under East Antarctic Ice Sheet”

After seeing these last two posts, PCA has become so much more obvious in its function. These modes will always be present in PCA analysis, 100% of the time. The spatial autocorrelation aspect just makes it a plottable effect, and it happens that the Antarctic satellite data is only 3 PCs, so very highly autocorrelated. There’s a lot less information in the 5509 gridcells than it appears.

I’ve never been taught principal components – not for money, anyway – but it seems to me that this analysis should be standard fare in the texts. The quote above is very, very understated.

One important feature of these patterns: the areal density, and therefore quantity, of surface stations will affect the centering of the PCs. More peninsula stations will weight the lobes of PC2 and PC3 so the centerline is closer to them.

JeffC, others and I have narrowed down the RegEM replication to the point where I don’t see any other option. Last week I completely reproduced the satellite reconstruction from the 3 PCs alone.

If you chop off the pre-1982 data and place it next to the surface station matrix, RegEM will reproduce the 3 PCs. This is visible in the first graph of the lower link. The unnatural look of PC3 is reproduced as well. By placing these PCs next to JeffC’s regridded surface station data (which more correctly weights the peninsula stations) I was able to reproduce the entire satellite reconstruction in a more appropriate regridded format, which reduced the decadal trend to 0.07 C/decade.

http://noconsensus.wordpress.com/2009/02/20/satellite-temperature-trend-regridded/

Does this analysis suggest that another approach, combining RegEM with some other technique that accounts for spatial correlation, could be used to analyze Antarctic temperature trends? I am definitely no expert, but would something like (1) using RegEM to infill missing data at the selected manned stations, then (2) using kriging to interpolate grid points between stations, be an approach that would minimize autocorrelation effects? It would probably require a much coarser grid than used by Steig, but it might be a reasonable approach for checking the “robustness” of the results.
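Step (2) of this suggestion can be sketched in a few lines. This is simple kriging with an assumed known zero mean and an assumed exponential covariance (the 1200 km scale echoes the experiment in the post); a careful implementation would use ordinary kriging with a variogram fitted to the station data:

```python
import numpy as np

def krige(obs_xy, obs_val, grid_xy, length=1200.0):
    """Simple kriging (known zero mean) under an assumed exponential
    covariance: a sketch of step (2) in the comment above.
    obs_xy: (n, 2) station coordinates (km); grid_xy: (m, 2) targets."""
    d_oo = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    d_go = np.linalg.norm(grid_xy[:, None] - obs_xy[None, :], axis=-1)
    C_oo = np.exp(-d_oo / length) + 1e-8 * np.eye(len(obs_xy))  # tiny jitter
    C_go = np.exp(-d_go / length)
    weights = np.linalg.solve(C_oo, obs_val)
    return C_go @ weights
```

By construction the estimate reproduces the station values at the stations themselves and relaxes toward the (zero) mean far from any station, with no long-range smearing beyond the assumed covariance scale.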

“their results would be worthless, though not neessarily wrong..”: what a courteous way to say that they’re not even wrong.

Re: dearieme (#25),

that’s a very consistent line of appraisal here. That’s my view of the HS as well – MBH and other bristlecone-addicted recons do not establish their point and, in that sense, are “worthless”. That doesn’t preclude the possibility of someone establishing the Stick with a better set of proxies. I certainly don’t claim to be able to show that the MWP was warmer than the modern warm period in a way that would pass my own critical methods. I can and have shown that trivial selection changes lead to varying conclusions – that’s one of the main lines of argument as to why the Team recons are “worthless”.

I read a geological report yesterday on a small mining property and I was struck one more time at the difference between an 80 page geological report and a 4 page Nature article.

There’s a lot more attention to particulars. At the end of the day, there are about 13-14 surface stations that provide the only information about 1957-1980. You can calculate covariance matrices until you’re blue in the face, but personally I’d like to see some QC reports on the surface station data itself before anyone gets too excited either way. And that includes people who think that temperatures went down. Maybe there are enough inhomogeneities in the surface stations that little weight can be placed on this view. As I’ve said before, just because people like one answer rather than another doesn’t mean that they should waive critical appraisal.

In addition, every method in the geological report has to be established and the geologist has to stamp that it’s an approved method. The geologist couldn’t use a RegEM-equivalent to calculate ore reserves. If a geologist thinks that RegEM is hyper-dandy to calculate ore reserves, then he shouldn’t try to prove this method on a new ore body being presented to investors. He can experiment with non-topical ore bodies where the public is not involved. Not the equivalent of a Nature cover.

Increasingly I think that problems in this field often tie back to a faux need for “originality” in academic research and journal review criteria that appraise this. If there were more things that looked like geological reports and fewer things that looked like Nature articles, IMO it would do a great deal to improve matters in this field.

Re: Steve McIntyre (#26),

What does the geographical distribution of the first 13-14 stations look like?

Re: Thor (#41),

I’ll post it up some time. But they are not geographically random, and EOFs from their geography are different from those from a disk or a randomly sampled Antarctica. Does it “matter”? Who knows.

Maybe it does make physical sense. I commented about this over on Jeff Id’s blog.

Wouldn’t there be a strong correlation between the temperatures of stations at equal latitude? (At least in the interior.) If so, then the strongest PCA component is essentially one that correlates to how far from the South Pole a station is. The next two seem to relate to the difference in shape between West and East Antarctica. Obviously, the peninsula in the West behaves differently (temperature-wise) from the East.

Unless I’m misunderstanding something (a high probability 😉) I wouldn’t be surprised if this ends up making sense.

–t

RE Steve #26, I can understand that a high-profile journal like Nature would only want to put research highlights in the print edition. However, a condition of publication of your highlights should be archiving the full-length detailed paper on the Nature webpage. The Steig SI does discuss some aspects of the paper at greater length than the article, but it is far from complete.

Re: SM #22

Steve, I believe that the unnatural PC3 is an artifact of the Chladni-pattern eigenvector weighting and the fact that RegEM does not distinguish the geographic distribution of surface stations. 17 of 42 stations are “shrunk” disproportionately through this process, and this is reflected in PC3. The Jeffs then offset this artificial bias by external grid weighting, and the unnatural segment of PC3 disappeared.

Re: Layman Lurker (#31),

there’s something going on with the PC3 that’s not been fully dissected. The eigenvector pattern is “predictable” from the geometry. What’s surprising is the change in amplitude from pre-1980 to post-1980. This must be related to the change from RegEM-extrapolated to rank 3 “data”, but I don’t think that the precise form of the effect has been fully run to ground, though maybe one of the Jeffs has something on this that I’ve lost track of.

Someone said the PCs “smear” the peninsula data over the continent. That is not how I would interpret Steve’s maps. Kriging is also a possible tool to use with the satellite data (though it has its own problems). The key point is one Steve makes: you don’t take a whole series of novel methods and apply them to produce an earth-shaking result without TESTING them on known data. How about producing various artificial trend histories over the continent over time (no trend, warming, cooling, warming here cooling there) and then applying the methods to see if they can detect the known trend? This is standard when developing statistical methods.
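This test is easy to set up. In the minimal Python harness below (the 100 sites, the 5 “peninsula” sites, and the trend size are arbitrary choices of mine), a known trend history is pushed through a rank-3 PC truncation standing in for the method under test, and the recovered per-site trends are compared against the truth. A realistic test would add spatially correlated noise and Steig’s actual station geometry.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_years = 100, 50

# Known truth: warming confined to 5 "peninsula" sites, flat elsewhere.
true_trend = np.zeros(n_sites)
true_trend[:5] = 0.5                     # degC per year, deliberately large

years = np.arange(n_years)
data = (true_trend[:, None] * years[None, :]
        + rng.standard_normal((n_sites, n_years)))

# Method under test: keep only the first 3 PCs (rank-3 truncation),
# a stand-in for Steig-style PC retention.
anom = data - data.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
recon = (U[:, :3] * s[:3]) @ Vt[:3] + data.mean(axis=1, keepdims=True)

# Recovered per-site trends by least squares, compared to the known truth.
rec_trend = np.polyfit(years, recon.T, 1)[0]
print(rec_trend[:5].mean(), rec_trend[5:].mean())
```

With uncorrelated noise this idealized case passes; the interesting experiments are the ones where correlated noise and sparse stations make the rank-3 truncation redistribute the peninsula trend.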

Re: Craig Loehle (#32),

Craig,

As Steve has shown, the geometric shape of Antarctica has, as expected, a dominant impact on the first 3 PCs, which have relatively low spatial confinement.

Any signal that is isolated to just the 5% of Antarctica that is the peninsula (and which is unquestionably experiencing a rapid increase in temperature) will be found primarily in higher order PCs.

Using only the first three PCs will inevitably have the effect of smearing temperatures from the Peninsula across a wider area.

This should be easily testable. Once the AVHRR masked data is available, we can subtract the 3 given PCs from the result and graph it.

This “remainder” graph will show a significantly more positive trend on the peninsula than is present elsewhere on Western Antarctica.

Effectively, those trends (with high spatial confinement) have been excluded or reduced by this analysis.

Re: Craig Loehle (#32), and Eric,

Twas I. I’ve just been looking at Steig et al again and there is further clear evidence of this. They say “peninsula warming averages 0.11 ± 0.04 C per decade”. That’s 0.55 degrees over the last 50 years. But if you look at the BAS site they say the peninsula has warmed by 2.8 C over the last 50 years, which is roughly what you can see by looking at the station data. So if Steig et al’s “reconstruction” is correct, the station data overestimate warming by about a factor of 5! One of their conclusions ought to be that the peninsula is warming much less than previously thought. But curiously this is not the conclusion they reached or the story reported in the media.

So the “reconstruction” is clearly wrong. Where has the peninsula warming gone? Answer: it has been smeared and diluted over the continent by the inappropriate use of a small number of PCs. See also Jason’s comment 43. All of this ought to have been obvious to Eric Steig and his co-authors and the Nature referees and editors.

[For those familiar with Fourier analysis: It is like taking a spiky function and trying to represent it with just 3 Fourier modes. The spike gets lower and broader]
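The bracketed Fourier analogy is easy to verify numerically: truncate a narrow spike to its 3 lowest Fourier modes and the peak loses most of its height while the feature spreads over a large fraction of the domain. A quick illustrative sketch:

```python
import numpy as np

N = 256
spike = np.zeros(N)
spike[N // 2 - 2:N // 2 + 3] = 1.0     # narrow spike: the "peninsula"

# Keep only the mean and the 3 lowest-frequency Fourier modes.
F = np.fft.rfft(spike)
F[4:] = 0
smooth = np.fft.irfft(F, N)

# The truncated version is much lower and much broader.
peak_ratio = smooth.max() / spike.max()
width = int((smooth > smooth.max() / 2).sum())
print(peak_ratio, width)
```

The lost height doesn’t vanish; it is redistributed across the domain, which is the Fourier analogue of smearing peninsula warming over the continent.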

Steve,

Forgive me if this has been discussed in past threads on PCA (although I couldn’t find it in a quick search of the site), but it’s worth noting that the issue of principal component pattern dependence on domain shape is (or was) well known within the atmospheric sciences community. It was first documented by C. Eugene Buell in a 1975 paper published in the Proceedings of the Fourth Conference on Probability and Statistics in Atmospheric Sciences. Buell’s work focused on square/rectangular domains, particularly because the latter approximated the shape of the conterminous U.S. This was followed by a second paper at the Sixth P&S Conference in 1979 which states: “When a region with well defined boundaries is concerned, the EOF’s computed over this region are expected to be very strongly influenced by the geometrical shape of the region and to a large extent independent of where the region is located. As a consequence, the interpretation of the topography of the EOF’s in terms of geographical area and associated meteorological phenomena should be looked on with suspicion unless the influence of the effect of the shape of the region has been completely accounted for. Otherwise, such interpretations may well be on a scientific level with the observations of children who see castles in the clouds.”

Buell’s work generated considerable discussion within the atmospheric sciences literature because PCA (or, as they referred to it, EOF [empirical orthogonal function]) analysis was in widespread use. Mike Richman, in a paper published in the Journal of Climatology (1986) entitled “Rotation of Principal Components”, made the case that component rotation appears to eliminate the problem of domain-shape dependence. Richman’s paper is worth reading in that it reviews and considers nearly all of the important work that had been done using PCA on meteorological and climatological data up to that time, including work by Gerry North and others familiar to CA readers. So these problems have been well documented and understood for a long time. Like so many other elements of meteorology and climatology, though, modern climate science appears to have forgotten some important statistical insights produced by its own practitioners . . . indeed, some of the practitioners themselves appear to have forgotten what they wrote.

Re: HFL (#33),

that’s a wonderful quote and reference (and one that I was unaware of). Do you have a pdf of the Buell paper?

Re: HFL (#33),

here’s a link to the Richman 1986 article. The connection of PCs to geometric shape was clearly known a generation ago and the observations made in these posts would be old hat to Richman in 1986.

Unfortunately Mann and Steig did not address these issues nor did Nature or its reviewers require them to do so. “castles in the clouds” indeed.

Re: HFL (#33),

This issue is discussed clearly in Jolliffe’s text – see here.

So the methodological point is not an obscure one.

Re: Steve McIntyre (#44),

It’s good to see the other references; I checked out the first pdf link (quickly). IMO the method you used to plot autocorrelated data in ever increasing order (in your first post) lays it out conceptually better. While I had a basic understanding before, I get this very clearly now and I’m going to take some time to reread some of the old CA battles.

Many in this thread are treating PCA as if it were some sort of traditional test of significance. You put the data in, it calculates the trend. Then it tells you how significant that trend is.

While Mann may try to use PCA in this manner, that is NOT how PCA is normally used.

PCA is a DATA MINING technique. Its job is to find signals in the noise. Ideally, these candidate signals are then tested against new or sequestered data to verify that the relationship is real, and not some random artifact.

Principal components do NOT inherently represent statistically significant relationships. Any hypothesis that they do must be tested using alternative means. Those alternative means are not generally considered to be part of PCA which, again, is not a statistical technique. It is a data mining technique.
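Jason’s point that PCs are not inherently significant is easy to demonstrate. A minimal sketch (in Python rather than the thread’s R, with hypothetical sizes): PCA run on pure iid noise, with no signal whatsoever, still produces a leading component that “explains” far more than its fair 1/p share of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars = 100, 50                    # hypothetical sizes for illustration
X = rng.standard_normal((n_obs, n_vars))   # pure iid noise: no signal at all

C = np.corrcoef(X, rowvar=False)           # 50 x 50 sample correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]

share = eigvals[0] / eigvals.sum()         # variance "explained" by PC1
# Under the null, each of the 50 variables owns 1/50 = 2% of the variance;
# sampling noise alone pushes the leading component well above that.
print(share)
```

Any “signal” found this way has to be validated against held-out data, exactly as Jason says; the eigendecomposition alone proves nothing.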

Re: Jason (#34),

if you re-examine the history of the MBH discussion, this point is always overlooked or misrepresented by the Mann group. They persistently treat something being a PCA pattern as equivalent to proof of it being a temperature proxy. I don’t think that there’s anything the least bit difficult about this point, but it’s not agreed on by the Team.

For a sparse network of a finite number of stations, would Shen94 (Spectral Approach to Optimal Estimation of the Global Average Temperature, Journal of Climate, V7 N12) be relevant, or is it already obsolete? In that paper, the average of a sparse network was compared with that of the full U.K. dataset. With 60 stations, the sampling error was found to be (probably) small compared to other errors in the budget.

Thanks to Willis Eschenbach. Prior to his post I didn’t understand a single thing being discussed here!

Steve, do you need a new computer? What kind of computer would be sufficient?

I guess I am still simple-minded on the representation of the physical processes via this particular statistical analysis. Again I see PCA as an exploratory technique and you really need to “prove” that you have identified some real structure at a minimum by regenerating the same structure on a separate set of data or through an independent confirmatory approach. Loose ex post definitions of the factors as representing physical processes seems like, to use Steve’s phrase, arm waving.

snip – some prohibited language

How are temperature station elevation/altitude differences accounted for in analyses such as this where there appears to be some geographical schmearing when correlating with the satellite info? Since you are measuring deltas does it not matter?

Thanks!

Noob Lurker

#45

Steve, if you look closely at the wiggles of the pre-1980 segment of PC3 and compare them to Jeff’s re-calculated PC3 with grid weighting, they are very comparable. The amplitude of the wiggles has shrunk prior to 1980 in Steig’s PC3. The fact that the post-1980 segment of PC3 looks normal would reflect the post-1980 change of process which you alluded to. Jeff’s grid weighting had a profound effect on pre-1980 and very little post-1980 (or on any of the other PCs for that matter). This artifact was picked up during the derivation of PC3.

Nice work Steve. Back in #3 you said:

Is the code available? It might be interesting to reduce this to a sparse grid of the original 5509 lat/long points (perhaps delete every other point to reduce the array elements by a factor of four) and see if it will run.

For those who have not been following it, this topic has been scattered over multiple threads and dozens of comments. Roman gives a succinct summary in these two comments here, and here.

Jeff Id has a post relating to it from a few days back here

I have a plot comparing what we know about the pre-massaged satellite data (from Steig’s SI) vs. what the recon looks like for the same period here. They aren’t close.

These are plots of the satellite trends from the 5509 grid points of the recon here; 5508 points show warming, 1 shows cooling. I suppose no one would have believed it if all 5509 indicated warming.

Re: Jeff C. (#50),

This should work:

### circledist function
# calculates great-circle distance (km); default earth radius 6372.795 km
# from http://en.wikipedia.org/wiki/Great-circle_distance 2nd formula
circledist = function(x, R=6372.795) {  # x = c(fromlat, fromlong, tolat, tolong)
  pi180 = pi/180
  y = abs(x[4] - x[2])                  # longitude difference
  y[y > 180] = 360 - y[y > 180]
  y[y <= -180] = 360 + y[y <= -180]
  # [the body below is reconstructed from the cited formula; the original
  #  lines were lost in the blog formatting]
  delta = y*pi180
  fromlat = x[1]*pi180; tolat = x[3]*pi180
  theta = acos(pmin(pmax(sin(fromlat)*sin(tolat) +
                         cos(fromlat)*cos(tolat)*cos(delta), -1), 1))
  R*theta
}

# grid.info (id/lat/long for the 5509 AVHRR gridcells) must already be loaded;
# the lines reading it in were also lost in the blog formatting
temp = grid.info$long > 180
grid.info$long[temp] = -(360 - grid.info$long[temp])

#ant_recon download
download.file("http://faculty.washington.edu/steig/nature09data/data/ant_recon.txt",
              "temp.dat", mode="wb")
grid = scan("temp.dat", n=-1)  # 3305400 values
#600*5509 #[1] 3305400
recon = ts(t(array(grid, dim=c(5509,600))), start=c(1957,1), freq=12)
#rm(grid)
dimnames(recon)[[2]] = 1:5509
id = index = 1:5509
K = length(index); K
use0 = "pairwise.complete.obs"
grid.info = grid.info[, c("id","lat","long")]

# all from-to pairs of gridcells
station = data.frame(from=gl(K, K, labels=id), to=rep(id, K))  #,cor=c(corry)
dim(station)  # K^2 rows
station$from = as.numeric(as.character(station$from))
station$fromlat  = grid.info$lat[station$from]
station$fromlong = grid.info$long[station$from]
station$tolat    = grid.info$lat[station$to]
station$tolong   = grid.info$long[station$to]
station$dist = apply(station[, 3:6], 1, circledist)
station$cor = exp(-station$dist/1200)  # rough approx

#now make correlation matrix and do SVD
R = array(station$cor, dim=c(K,K))
W = svd(R, nu=16, nv=16)
dim(W$v)  # [1] 5509 16
##SAVE W and get back to me
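For readers without an R session handy, the great-circle step can be cross-checked independently. A sketch in Python using the spherical law of cosines (this mirrors, but is not, the thread’s R code):

```python
import math

def circledist(fromlat, fromlong, tolat, tolong, R=6372.795):
    """Great-circle distance in km via the spherical law of cosines --
    a cross-check of the R circledist above, not the thread's exact code."""
    dlong = abs(tolong - fromlong)
    if dlong > 180:
        dlong = 360 - dlong          # wrap across the antimeridian
    p = math.pi / 180
    c = (math.sin(fromlat * p) * math.sin(tolat * p)
         + math.cos(fromlat * p) * math.cos(tolat * p) * math.cos(dlong * p))
    return R * math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding

# South Pole to 65S on the same meridian: 25 degrees of arc, ~2781 km
print(circledist(-90, 0, -65, 0))
```

The clamp matters because rounding can push the cosine fractionally outside [-1, 1] for nearly coincident points.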

Re: Steve McIntyre (#51), Thanks. I’ll try to run the whole thing but if that doesn’t work I’ll experiment with decimating the grid (say every other row and column) to see if it will run.

The coincidence of the first 3 principal component patterns between random data and the actual data is intriguing. This naturally leads to a question which I don’t believe has been asked or answered on this page.

What is the fraction of the data explained by the first 3 principal components for the random data and the real data of Steig, and how much would the next 1 or 2 principal components add?

I am not a statistician and don’t have access to a program with which I can calculate these things. They would relate to a criterion for choosing how many principal components should be used, the Kaiser criterion or the Scree criterion. See the following web site for definitions:

http://www.statsoft.com/textbook/stfacan.html
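The quantities Eric asks about are straightforward to compute once you have the eigenvalues: the fraction explained by the first k PCs is their share of the trace, and the Kaiser rule retains eigenvalues above 1. A sketch in Python on a hypothetical 1-D transect using the same exp(-distance/1200) decorrelation as the head post (the site count and spacing are made up for illustration):

```python
import numpy as np

# Hypothetical 1-D transect: 100 sites spaced 50 km apart, with the same
# exp(-distance/1200) decorrelation used in the head post.
n, spacing, scale = 100, 50.0, 1200.0
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * spacing
C = np.exp(-d / scale)                # correlation matrix

eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
frac = eigvals / eigvals.sum()        # eigenvalues sum to the trace, n

top3 = frac[:3].sum()                 # share "explained" by the first 3 PCs
kaiser = int((eigvals > 1).sum())     # Kaiser rule: retain eigenvalues > 1
print(top3, kaiser)
```

Note that for Steig’s actual analysis these numbers can only come from the (unavailable) post-adjustment satellite data, as RomanM says below.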

Re: Eric (#52),

Eric, what exactly are you referring to? What is the “actual data” and what is the “random data”? What is the “coincidence”? There is lots of data (AWS, satellite) floating around and you have to be more specific to have your question make sense. You keep referring to Kaiser and Scree, which are not relevant to any of the analyses we have done on the blog, but might be relevant (though at this point unanswerable by us) if you are talking about the analyses Steig et al. did. If you could be clearer about exactly what you are asking, someone might be able to provide an answer to you.

Thank you for your kind offer.

The random data refers to this statement by Steve M,

“Today I’ll show a similar analysis, but this time using a random sample of points from actual Antarctica. The results are pretty interesting, to say the least.”

It isn’t clear what he means by this to me, but I assume it is clear to everyone here.

By the actual data, I mean the data that was used by Steig and Schneider to derive their principal components.

So my question is how much of the variation in each of the data sets is explained by the first 3 principal components of each of these data sets.

Presumably the validity of using the first 3 principal components is determined by the Kaiser and/or Scree criteria as described on this link explaining principal components to the uninitiated:

http://www.statsoft.com/textbook/stfacan.html

Understanding this would be critical to any analysis of the validity of using the principal components to represent the actual data. The validity of the physical explanations provided by Steig and Schneider for the data would appear to depend on the validity of the representation. I haven’t seen any analysis of this question on this web site. Please point me to it, if I have missed it.

Re: Eric (#55),

The random points selected by Steve for his example do not have any particular relevance to your question since he is dealing with a completely different issue. However, the reference to the data used by Steig et al. does make sense.

I would also like to know this, but I don’t have an answer for you because the authors have not made the specific satellite data (after all of their “corrections” and adjustments) available for examination. Without seeing that data to determine how well the PCs derived from the satellite measurements capture the variability for the entire continent, it isn’t possible to provide a complete answer. So the question should be directed to Dr. Steig to provide a justification. By the way, the rules you have referred to are only guidelines and are really derived from assumptions about the origin of the data which may not be satisfied by the data in question.

Despite the lack of the original data, there are still other reasons to question unjustified statements made in the paper and repeated by persons who have spread the questionable information through further interviews by the media.

RomanM,

Steve is saying above,

The significance of this comment depends on how good a fit the first 3 principal components are as a representation of Steve’s original random data that he chose, versus how good Steig’s principal components are as a representation of his.

We can at least determine how good Steve’s principal components are, to see whether they are a good or poor representation. If they are not very good, the form is more likely than not to be a coincidence, assuming Steig’s principal components are at least a fair representation, even though we won’t have a definitive answer.

Perhaps Steig would be good enough to show his analysis as well.

Re: Eric (#57),

But they are not Steve’s PCs. They are the ones created by Steig. The analysis done here managed to determine how many PCs were used and what they were. We didn’t create them.

Oops. Hit submit too quickly. If you are talking about the emulation of the pattern, the only random part was the selection of 300 grid points at which non-random distances were calculated. No other random data was created or used. He chose a sample because the calculation for the entire set would have been too large for his computer. Three hundred points would spread themselves pretty much over most of the continent.

I attempted to run Steve’s code using all 5509 points. As Steve suggested, the sheer size of the project overwhelmed my computing power. To get around this, I removed three of every four points from the lat/long grid. I did this by converting the lat/long to a polar XY coordinate system. It became obvious Steig must have chosen his gridpoints from the same system, as the points arranged themselves into a very orderly grid. I deleted all even-numbered rows and columns to end up with a grid of 1378 points out of the original 5509. Here is the grid pattern I used.

I then converted the coordinates back to lat/long and ran Steve’s code. The results are below. I was a little disappointed that the plots didn’t take on more definition, but they do confirm Steve’s results above using gridpoints that well define the area used in the reconstruction.

Can I ask a dumb question?

Why does Steve compare Steig’s Eigenvector 2 with Spatial Eigenvector 3 and vice versa?

Re: John A (#62),

because their axes are oriented similarly. I would view the “extra” power as being an index of potential climatic information.

Re Paul #60,

I think you’re really on to something here.

RegEM doesn’t know where the stations are, and treats all equally, so that the numerous peninsula stations will end up with disproportionate weight in the end product. If they get smeared onto W. Ant stations, the TIR covariances can then spread this over a larger region, though at the expense of peninsula warming itself, as you note.

Someone observed that eigenfunctions of Laplacians yielded similar patterns. On another thread, related sorts of structures were observed in Karhunen-Loeve (SVD) decompositions of turbulent flows.

A question: don’t we have sort of a classic boundary value problem here? Let’s put aside Mannian covariances of everything with everything for a moment and posit something fairly simple – that is some sort of turbulent flow which results in spatially autocorrelated temperatures. Both seem to be describable by Laplacian methods, if I’ve grabbed the right piece of terminology.

In terms of information, for the most part, the early data are stations somewhat evenly spread around the circumference, except for the Peninsula, plus 1-2 interior points (South Pole, Vostok). So you have a sort of boundary value problem on the disk (except for the information about the center.) My guess is that the analytic solution to this is given by pretty simple averaging, with the only question being how to weight the interior point(s) relative to the circumference. I’m having trouble understanding what RegEM brings to the party.

Re: Steve McIntyre (#65),

Canned convenience. RegEM is the TV dinner of the armchair climatologist.

Re: Steve McIntyre (#65),

Steve, a classical boundary value problem would yield similar answers. There’s a good reason; when discretised, the matrix is numerically very similar to the spatial correlation matrix.

But this has worried me in a lot of the discussion on these threads, referring to the shape of Antarctica etc. The fact is, there isn’t a real boundary. Winds blow across the land-sea divide. Weatherwise, sea ice is much like land, so the effective shape is not like what you see on the map, and is quite variable.

The same is true of the analogous spatial correlation problem. Nothing happens at the edge, except that you stop correlating with points beyond. In Steig et al, to add confusion, distant islands are included, but not the sea between.

Steig compares his eigenvectors to spherical modes, and physically they seem more real.

Re: Steve McIntyre (#65),

No Steve, it is not “only” a boundary problem.

However “spatial autocorrelation” is just a cumbersome and unnatural way to speak about wave propagation in a very general way.

Indeed spatial autocorrelation means that what happens at a point M is influenced by what happened at point N.

And a propagating wave means that what happens at a point M will arrive sooner or later at point N.

Now a propagating wave obeys the wave equation, which is given by the Laplacian, regardless of whether one talks about physical waves (drums, fluids) or abstract waves (psi in quantum mechanics).

That’s why a wave, ANY wave, can be written as a combination of the eigenvectors of the Laplacian, and so it doesn’t surprise me that the eigenvectors of a spatial autocorrelation matrix have the same shape as the eigenvectors of wave mechanics.

When you solve the Schrodinger equation of a simple system of 1 particle in a potential well (solve = find eigenvalues of the Laplacian) you find functions that have exactly the same shape as those presented on this thread.

You can even do the exercise the other way round – combine manually some eigenfunctions in order to reconstruct the original picture of Antarctica, and this for any arbitrary picture.

It seems to me that what Steig “discovered”, and what anyone applying PCA to spatially autocorrelated data would “discover”, is that the underlying process is a (any) wave propagation process.

Indeed the shape of the eigenfunctions and their symmetries are constant and completely independent of the actual system and its temporal evolution, be it a drum, a tsunami or the energy of a particle.

They are of course also influenced by the geometry of the boundary (square, disk, ellipse) but it is not a fundamental influence.

Everybody who wants to check that can play with the following applet:

This applet gives the eigenfunctions (quantum states) of a particle in a 2D potential box (potential is 0 inside and infinite outside the square).
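The particle-in-a-box modes that the applet displays can be reproduced with a small discrete eigenproblem. A sketch in Python (grid size is an arbitrary choice): build the five-point Laplacian on a square with zero boundaries and diagonalize it; the mode shapes echo the PC patterns discussed in this thread.

```python
import numpy as np

n = 20                               # hypothetical grid resolution
# 1-D second-difference matrix with zero (Dirichlet) boundaries
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# 2-D five-point Laplacian on the square box, via a Kronecker sum
L = np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)

vals, vecs = np.linalg.eigh(L)
ground = vecs[:, 0].reshape(n, n)    # one central lobe, cf. the first PC
# The next two modes are a degenerate "two-lobed" pair, cf. PC2/PC3,
# oriented along the two axes of the square.
print(vals[:3])
```

The degeneracy of the second and third modes is the square-domain analogue of the arbitrary axis orientation noted for the disk eigenvectors in the head post.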

Paul and Hu:

Shouldn’t this be visible in the weights of the 3 factors? If the peninsula stations load on all three then the interpretation of the factors gets messy.

#65

Steve, I threw out some points at Air Vent on the nature of the spatial correlation. My intuitive response was that because of the unique geography of Antarctica – the coincidence of an island with the pole near its center – the correlation would reflect the symmetry of latitude around the disk center. In other words, correlation of points equidistant from the center of the disk. Obviously different from Australia, for example. We also have a situation here with a lot of coastal surface stations… just some thoughts.

Has everybody seen Jeff Id’s latest video of the temperatures? The temperatures are clearly unconnected (not even teleconnected!) to any physical explanation!

oops, clicked before I pasted the link:

http://noconsensus.wordpress.com/2009/02/28/a-little-bit-of-magic/#more-2521

PaulM,

Re:Post #60, Feb 27, 7:50am.

Paul,

I just read Gavin’s response to your above post (which also appeared in the RC thread, “Antarctic Warming is Robust”, Jan 27, 8:46am, #353).

GAVIN: [Response: Surprisingly enough, this was realised by the authors and they discussed it: “A disadvantage of excluding higher-order terms (k > 3) is that this fails to fully capture the variance in the Antarctic Peninsula region. We accept this trade-off because the Peninsula is already the best-observed region of the Antarctic.” – gavin]

Does this (a) deal at all with your point, and (b) as an answer make any sense at all? Also was Jim Eager (RC Post #357) being silly or saying something significant with his suggestion that the warming has gone to heat of fusion?

Ron

Ron, someone else (apolyton, 364) has pointed out that Gavin’s comment doesn’t deal with the point, and in fact it seems that he and Steig et al accept the point.

Jim Eager is being silly – the warming is observed, and is an input to the Steig et al “analysis”.

Thanks Paul,

Regarding Gavin’s response to ApolytonGP: [Response: If people want to make specific points, they should make them. I’ll often try to intuit what someone is trying to get to, but I’m not psychic. In this case, I didn’t find people expressing shock that they have worked out the number of eigenmodes used in an analysis, when it was clearly stated in the paper, particularly interesting. With respect to your point, the number of modes is always a balance between including more to include smaller scale features, and not including modes that are not climatically relevant or that might be contaminated by artifacts. There are a number of ad hoc rules to determine this – ‘Rule N’ for instance, but I don’t know exactly what was used here. It doesn’t appear to make much difference, but it’s a fair point to explore. – gavin]

A-This leaves me curious about the “specific point” he was trying to make in his reply to your post.

B-I’m not a psychic either, so I can at least suspect that Gavin is really saying that he has serious doubts about Steig’s paper. Are you going to take up his apparent offer to explore the “fair point” of whether it does make a difference?

I am going to display my ignorance here for all to see. However, the dependence on shape noted here reminds me of the windowing functions used for sampling signals (time or otherwise) in digital signal processing. A sample of a signal is taken by multiplying it with a sampling function. This multiplication is equivalent to convolving the signal with the sampling function in the frequency domain. This will create artifacts in the result that will distort the content of the signal. These artifacts are dependent on the shape of the window. Various window shapes have been used to try to reduce the artifacts – triangular, cosine, raised cosine, Kaiser… Each has its own properties.

I was wondering if similar manipulations could be used in this case. Could various weightings be given to points near the edge, near the center etc., to deal with the artifacts created by the shape, in a similar vein to the various windowing functions used in DSP?
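The windowing effect Martin describes is easy to exhibit. A sketch in Python (the signal frequency and inspected bin are arbitrary choices): a sinusoid that doesn’t land exactly on an FFT bin leaks energy across the whole spectrum under a rectangular window, while a Hann (raised-cosine) window suppresses the distant leakage.

```python
import numpy as np

N = 256
t = np.arange(N)
x = np.sin(2 * np.pi * 10.5 * t / N)   # 10.5 cycles: deliberately between bins

rect = np.abs(np.fft.rfft(x))                    # rectangular (no) window
hann = np.abs(np.fft.rfft(x * np.hanning(N)))    # Hann window

# Far from the signal bin, the Hann window's sidelobes fall off much
# faster than the rectangular window's, i.e. far less spectral leakage.
print(rect[60], hann[60])
```

Whether an analogous spatial taper could tame the domain-shape artifacts in the PCA is exactly Martin’s open question; this only illustrates the DSP side of the analogy.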

Anyway, I hope that this comment was not a distraction.

Re: Martin Sidey (#75),

stepping back, let’s forget for a moment about what Steig did and suppose that we’re trying to calculate the average temperature of a disk where we have measurements more or less equally spaced around the circumference and a measurement at the center – and you know that the temperatures are linked together by something that yields spatial autocorrelation – and we’ve seen plausible equations that do so.

I presume that the average temperature of the disk would be estimated by a really simple formula – though you’d need some calculus to prove the formula.

Re: Steve McIntyre (#76),

Consider the cylindrical wedge (0 <= r <= R, Q1 <= Q <= Q2; where r and Q are radial and angular coordinates, respectively). Assume the temperature T varies linearly with position:

T(r, Q) = T2×(Q-Q1)/(Q2-Q1)×(r/R) + T1×(Q2-Q)/(Q2-Q1)×(r/R) + T0×(R-r)/R

where T0 = T(0, Q), T1 = T(R, Q1), and T2 = T(R, Q2). Then the area-averaged temperature over the wedge is:

AvgT = (T0 + T1 + T2)/3

For a disk divided into n equal sized wedges (corresponding to n equally spaced temperature measurements), the previous equation can be summed over the wedges to give the average temperature over a disk:

AvgT = (2TP + T0)/3

where TP is the average temperature at the perimeter. For unequally spaced measurements, the wedges should be area-weighted in averaging.
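The wedge-average result can be checked numerically. A sketch in Python (the test temperatures and wedge angle are arbitrary): integrate T(r, Q) over one wedge with the polar area element and compare against (T0 + T1 + T2)/3.

```python
import math

# Brute-force check of the wedge average against (T0 + T1 + T2)/3,
# integrating T(r, Q) over one wedge with the polar area element r dr dQ.
T0, T1, T2 = 3.0, -1.0, 2.0           # arbitrary test temperatures
R, Q1, Q2 = 1.0, 0.0, math.pi / 6     # unit radius, 30-degree wedge

def T(r, Q):
    return (T2 * (Q - Q1) / (Q2 - Q1) * (r / R)
            + T1 * (Q2 - Q) / (Q2 - Q1) * (r / R)
            + T0 * (R - r) / R)

m = 400                               # midpoint rule on an m x m grid
num = den = 0.0
for i in range(m):
    r = (i + 0.5) * R / m
    for j in range(m):
        Q = Q1 + (j + 0.5) * (Q2 - Q1) / m
        num += T(r, Q) * r            # r is the polar area weight
        den += r

avg = num / den
print(avg, (T0 + T1 + T2) / 3)        # the two agree
```

Summing over n equal wedges then gives the disk result quoted above, AvgT = (2TP + T0)/3, since each perimeter temperature appears in two adjacent wedges.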

Re: Steve McIntyre (#76), Re: JPetersen (#77),

I suggest using a variation of ordinary kriging which will give you a minimum variance estimate. Wiki “kriging”, click “ordinary kriging equation” to get the kriging system of equations for estimating a single point: x*.

The lambda vector are the station weights, the square gamma matrix are the station-to-station variances from the variogram as a function of the station-to-station distances, and the gamma vector are the variances between the stations and the point to be estimated, again from the variogram as a function of distance.

In the latter vector, redefine x* to mean the locus of all points in the circle and use your calculus to calculate the average gammas. In the special case where the variogram is linear, this reduces to the variogram value at the average distance between a station and all points in the circle. (a linear variogram = ar(1) = autocorrelation declining as a function of sqrt(distance))

My calculus is way rusty, so I did a brute-force numerical approximation instead, kriging a fine grid of points over a circle with a central station and 36 stations equally spaced on the circumference, and averaging the results. By setting the value at the central station to 1 and the circumference stations to 0, the kriged mean equals the central station’s weight, and a circumference station’s weight will be (1-mean)/36. For the ar(1) case, I got weights of about .26 and .0204.

Re: Steve McIntyre (#76), Re: JPetersen (#77), Re: Evan Englund (#78)

Evan, kriging interpolation is new to me, so thanks for the suggestion. I worked out an analytical solution for the center station’s kriging weight. If there are N+1 stations, station 0 at the pole and stations [1..N] evenly distributed around the perimeter (theta_i = 2×i×pi/N), then

lambda0 = (A - f(N))/(2 - f(N))

where A = 32/(9×pi) + 1/3 ~ 1.4651

and f(N) = (2/N) × SUM(sin(theta_i/2))

For the case of 36 stations on the perimeter, this equation gives

lambda0 ~ 0.264

in agreement with Evan’s solution. For large N, f(N) -> 4/pi.

[This is probably of little interest unless you’re a calculus junky like me ©¿© ]
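Both Evan’s numerical kriging and the closed form above can be reproduced in a few lines. A sketch in Python: solve the ordinary (block) kriging system directly for a linear variogram and compare the center station’s weight with the analytical formula (the mean rim-to-disk distance 32/(9×pi) is the standard result for a unit disk).

```python
import numpy as np

# Direct solve of the ordinary (block) kriging system for a linear
# variogram gamma(h) = h: 36 stations on the unit circle plus one at the
# center, estimating the disk average.  The right-hand side uses the mean
# station-to-disk distances: 2/3 from the center, 32/(9*pi) from the rim.
N = 36
ang = 2 * np.pi * np.arange(N) / N
pts = np.vstack([[0.0, 0.0], np.column_stack([np.cos(ang), np.sin(ang)])])

G = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)  # gamma = h
A = np.ones((N + 2, N + 2))
A[:N + 1, :N + 1] = G
A[-1, -1] = 0.0                      # Lagrange-multiplier block
b = np.empty(N + 2)
b[0] = 2 / 3
b[1:N + 1] = 32 / (9 * np.pi)
b[-1] = 1.0                          # weights sum to one

lam0 = np.linalg.solve(A, b)[0]      # the center station's weight

# JPetersen's closed form from the comment above:
f = (2 / N) * np.sin(np.pi * np.arange(1, N + 1) / N).sum()
closed = (32 / (9 * np.pi) + 1 / 3 - f) / (2 - f)
print(lam0, closed)                  # both ~0.2648, cf. Evan's ~0.26
```

The agreement between the linear solve and the closed form is a useful mutual check on both derivations.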

I don’t believe anyone has mentioned that the Steig AVHRR reconstruction has a non-constant negative mean, as shown below:

This looks remarkably like the second eigenvector of the raw recon data. However, removing this mean has very little effect on either the eigenvalues or the shape of the eigenvectors. This is somewhat surprising, since artificially adding a big nonzero constant to the recon, say 10, introduces a fourth non-zero eigenvalue and somewhat alters the other three.

Likewise, the standard deviations are not constant, as illustrated below:

This looks remarkably like the first raw-recon eigenvector. As one might expect, it is lowest around the coast, where the ocean presumably moderates the temperatures.

Normalizing the demeaned AVHRR recon by these standard deviations tends to flatten the first eigenvector, and reduces the sizes of the eigenvalues, but still leaves only three non-zero eigenvalues. The third PC still has its “ain’t natural” flat shape before 1980.

Re: Hu McCulloch (#79),

It hasn’t been discussed that much, but it has been noticed. Each of the AVHRR reconstructions have been scaled to have zero means for the time period 1982-2006. Since the surface stations indicated a warming trend over the pre-1982 time, the reconstructions reflected that by having negative averages during that same interval. The mapped pattern produced by the (negative) overall means would naturally look very much like the trend pattern (with the colors reversed) because a lower pre-1982 mean would in turn tend to produce a higher trend for the entire 1957-2006 period.

From what we can glean so far, the production of the reconstructions involved several stages. In the first stage, the raw satellite data (from 1982 to 2006) was first condensed to three (genuine) PCs (scaled to mean 0). Then, the RegEM procedure was used to append an extension to each PC. As noticed above, these extensions were not scaled to mean zero, but that should be understood as being the result of the extension process and not from “adding a big nonzero constant to the recon”.

The three extended series were then combined in a simple linear fashion to form the 5509 reconstructions. The combining weights would have been calculated from the original PC calculation in the first stage. The three “PCs” constructed in this fashion should not be interpreted as genuine PCs obtained from some dataset (for one thing, like the early Mannian paleo PCs, they are not orthogonal). However, the reconstructed data matrix has a rank of three, so that any SVD analysis of the entire reconstruction will simply look like an ordinary 3 PC decomposition.
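RomanM’s rank-three point is easy to verify in miniature. A sketch in Python (the dimensions are stand-ins for the real 600 × 5509 reconstruction): any matrix built as linear combinations of three series has exactly three nonzero singular values, regardless of whether those series are orthogonal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_months, n_cells = 600, 200          # stand-ins for the 600 x 5509 recon

pcs = rng.standard_normal((n_months, 3))      # three "extended PC" series
weights = rng.standard_normal((3, n_cells))   # per-gridcell combining weights
recon = pcs @ weights                         # rank 3 by construction

s = np.linalg.svd(recon, compute_uv=False)
print(s[:4])
# Exactly three singular values are (numerically) nonzero, so an SVD of
# the reconstruction looks like a clean 3-PC decomposition even though
# the three underlying series need not be orthogonal.
```

This is why the SVD of the published recon can tell us how many modes were retained, but nothing about the data that was discarded before the rank reduction.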

Re: Hu McCulloch (#79),

Neat graphics, but I’m struggling away Down Under with a similar complication. In my study set of rural stations, some coastal stations are not trending upwards in 40 years but some inland ones are – one showing 1 deg rise per 40 years (linear fit). It can’t be due to the sea in the sense of keeping temperatures low, because we are dealing with the rate of change of temperature. If it is increasing inland for 40 years at the above rates, where will it be in 200 years? Where was it 500 years ago?

Do you have the same complication? I worry about data that seem specific to the period of study, but go wild when extrapolated back or forwards. Often it turns out that the figures were wrong.

Re: Hu McCulloch (#79),

Your Standard Deviations map is very similar to an altitude map:

Including the two little red/orange horns in the West.

Alex

RE #80,

Thanks, Roman — I’m just learning this SVD stuff as we go along, so I’ll think about it a while.

Meanwhile, is it time for one or more of us besides Steve to be politely asking Comiso when the TIR “raw” data will be ready? It’s been over a month now since he promised Steve the data.

There is massive primary TIR data on the site Steig points to, which Comiso has presumably edited down to a 300X5509 matrix for Mann, Rutherford, and team to feed into RegEM. In order to be able to replicate their study, as required by Nature’s guidelines, I gather we would need to know both how the primary data was edited down, and also what the “raw” 300X5509 matrix is?

I suppose any request to Comiso should be copied to Steig as lead author, so that he has been officially notified of the request.

Hu and Roman,

I put together a flow diagram of my understanding of the satellite recon process that is posted at Jeff Id’s site http://noconsensus.wordpress.com/2009/03/06/flow-chart-of-regem-satellite-reconstruction/

click on the diagram for a larger version.

My main goal was to put everything in one place to help others get up to speed and also to document things before we forget. If you notice anything I have missed or have wrong, please pass it along and I’ll update it.

Beautiful plots Hu. We never may fully get to the bottom of this thing, but at least we will have made some nice graphics in the process. 🙂

Re: Jeff C. (#82),

Good descriptive flow chart, Jeff. Obviously a bit of (IMHO, quite accurate) guesswork leading up to the calculation of the 3 PCs formed from the raw data, but pretty much on the money thereafter. It would (will?) be interesting to see the initial summary of the AVHRR and cloud-mask filtered product to see how well “the monthly anomalies are efficiently characterized by a small number of spatial weighting patterns and corresponding time series (principal components) that describe the varying contribution of each pattern.”

Re: RomanM (#85) and Hu McCulloch (#86), Yes, there is definitely some guess work prior to the reduction to 3 PCs. I do think it is very close to reality from what we have been able to piece together. The “convert to anomalies” step may happen earlier in the process, but it shouldn’t make a difference (I think). The “calculate satellite monthly mean” step may feed from the second box instead of the first one. Actually, I think that might make more sense than what I am showing.

Regarding Gavin’s comment, it was regarding the “enhanced” cloud masking Steig speaks of in the SI (the third box in my chart). Steig says that those points outside of +/- 10 deg C of the climatological mean are assumed to be cloud contaminated and thrown out. I had assumed the “climatological mean” referred to measured surface temperatures. Jeff asked Gavin over at RC and his comment was that they were from the mean of the satellite data. He seems to understand it well and apparently has access to the scripts. He was able to quickly produce the trends using different numbers of PCs in response to questions.

I do find this enhanced cloud masking step problematic. I thought clouds produced a systematic error. By throwing out data points both plus and minus of the mean, it is being treated as a random error. I’ll need to look into this further.

There probably is another box at the input that takes the AVHRR data and somehow formats it prior to this process. This might include combining the data in 50 x 50 km cells as Hu suggests and deleting areas that aren’t used in the reconstruction (such as those over water).

Hu – regarding your plots, I have been struggling with how to code the colors to gradually transition as opposed to stepping from one to another at a threshold. Is there an easy way to do this?

Re: Kenneth Fritsch (#89), Yes, I had the two cloud mask steps share a trash can, I was in a charitable mood! Regarding the satellite data standing by itself, we think this is happening but are not certain. We are very certain that the RegEM step with the three satellite PCs input only appends the surface data to the front end of those PCs. It doesn’t alter the PCs themselves over the satellite era.

What isn’t certain is the top row. There is an “infill missing data (RegEM??)” step prior to reduction to 3 PCs. This is for the cloud-masked intervals in the satellite record. I don’t know for certain this step is there, but it stands to reason. If they used RegEM, they could utilize the surface data in this infilling. Perhaps they could also somehow use the surface data in reducing the satellite dataset to 3 PCs? My hunch is they didn’t do either of these but I’m not sure.

Re: Jeff C. (#82),

Jeff C, a real smart a– might have had 4 trash cans, but since you are obviously not one, you stopped at 3.

Seriously, from your flow chart I judge that the 1982-2006 TIR data stands by itself and, during the overlap period with the manned surface stations, is calibrated to the manned stations for taking the reconstruction back to the 1957-1982 period. I suppose the calibration of the TIR to the manned stations could have been used with in-filling for the entire 1957-2006 period, but the consensus here says it was not done that way?

When I see temperature reconstructions such as those by Mann and others, the claim is always that the instrumental part of the record is not part of the reconstruction but is just hung on the end for better visualization. I always thought the issue was how to splice the instrumental record to the reconstruction. In fact, when the reconstruction runs into and through the instrumental part of the record, we often see that the reconstruction differs from the instrumental record, and we all stand up and shout divergence – or worse.

Does not what we suspect of the Steig TIR reconstruction involve, in effect, a splicing of the instrumental record onto the reconstruction – and does that bother anyone?

I’ve been rereading the paper’s methods section with a more critical eye and a better understanding of teamspeak. I have probably read this sentence ten times and always thought it pertained to the infilling methodology:

You really have to parse every word. The key word is “adaptation”. The adaptation is using the 3 AVHRR PCs as input to RegEM instead of the AVHRR data set itself. In the following paragraph lies another revelation:

Likewise, I have read this sentence numerous times and always thought it was an observation, not a process description. Coupled with the sentence above, it seems to vaguely imply replacement of the dataset with the 3-PC fitted version.

Just my opinion, but this is CYA rather than an enlightening methodology description. They included enough oblique references to support a claim that they revealed their methods if challenged. If they genuinely wanted readers to understand their methods, a single sentence could have cleared things up immensely. Something such as:

Of course, no such sentence exists.

Oops, replaced not replacing. I should proofread my own writing before telling others how to do theirs.

RE #82,

Thanks for the chart, Jeff! I’m still confused, but at least I have a better idea now just where.

Your chart starts with relatively raw AVHRR data, 1982-2006, 300×5509. Is this directly obtainable from the NSIDC site Steig cites, at http://www.nsidc.org/data/avhrr/, just by plugging Steig’s 5509 coordinates into their tables? The coarsest data I see described are 25×25 km grids that only come up to 2000. Did they simply average four 25 km grids to get one nominal 50 km grid, or is it more complicated than that? Does one have to pay for the updates past 2000?

I guess that at a minimum, what we want from Comiso is the matrix that went into the upper left segment, the gap-riddled matrix that came out (due to replacing cloud cover and outliers with NaNs), and a precise description of the steps in between?

In your notes, you say that some of the details come from Gavin at RC. He may have inside info, and if so it’s helpful that he’s sharing, but he’s not an author on this one, so to be official it should come from Steig or Comiso or one of the others.

Thanks for the kind words on my graphs. I’ve been having fun trying to learn Matlab graphics as I go along as well as SVD. The colormap is one I developed that I think makes better use of the visually distinct colors than JET (the default) or HSV. I’d be glad to share it with people if there’s interest.

Re: Hu McCulloch (#86),


It’s a bit complicated to get the raw data, and the raw data isn’t the input for the reconstruction anyway. The 25x25 km set only goes up through 2000; to get the full set you have to get the 5x5 km version. You also can’t get it using the GISMO interface. You have to contact NSIDC and get them to manually collate it for you, gzip it by year, and post it on an FTP site, and then you have to have ~1.5 terabytes of storage set up to receive it. I’m in the process of obtaining the entire set; NSIDC is putting it up in 4-year blocks for me. It’s free, BTW, but it took them about 2 weeks to start posting the set for me to download, and it’ll probably take another 2-3 weeks to get through downloading the whole thing.


Even with that, you’re nowhere close to what Steig used for input. The complete time series for the AVHRR data set is not a set of cohesive measurements. There are three distinct AVHRR versions because different processing was used: Versions 1 and 2 span 1981-1993 and have time-of-observation issues; Version 3 goes through June of 2005 and has issues with data being assigned the wrong spatial location. Additionally, the calibration procedure was changed for the NOAA-16 satellite. I haven’t yet looked in detail at the calibration procedures, but there’s not much overlap between the satellites carrying the AVHRR instrument, so instrument bias being inappropriately accounted for (like the problem Christy had with the UAH data a few years back) would seem a potential pitfall.


Along with that, the pixel data has to be merged with the time-of-observation mask, and both are stored in a binary format peculiar to the AVHRR data. To use any of it, you first have to parse it. Once it’s parsed, you have to combine the different masks into a single data set. After that, you still have the joint tasks of quality control and converting a 1605×1605 grid with pixels at 90-minute intervals into a 5509-pixel grid at monthly intervals – which cannot be done without first processing the data for cloud masking and any other data-removal criteria.
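The last step – collapsing masked swath pixels into monthly cell means – can be sketched roughly as follows. The index arrays here stand in for whatever pixel-to-cell and timestamp-to-month mapping NSIDC’s grid actually uses; this is a guess at the shape of the computation, not their procedure:

```python
import numpy as np

def monthly_cell_means(obs_values, cell_index, month_index, n_cells, n_months):
    """Average individual (already cloud-masked) pixel observations into an
    (n_months x n_cells) matrix of monthly means, ignoring NaNs.

    obs_values:  1-D array of pixel temperatures (NaN where masked out)
    cell_index:  1-D array mapping each observation to a cell 0..n_cells-1
    month_index: 1-D array mapping each observation to a month 0..n_months-1
    """
    sums = np.zeros((n_months, n_cells))
    counts = np.zeros((n_months, n_cells))
    ok = np.isfinite(obs_values)
    # Accumulate sums and counts per (month, cell) bucket
    np.add.at(sums, (month_index[ok], cell_index[ok]), obs_values[ok])
    np.add.at(counts, (month_index[ok], cell_index[ok]), 1)
    # Cells with no surviving observations stay NaN
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

The point of the NaN plumbing is that the monthly mean depends on which pixels the cloud mask removed, which is why the regridding cannot be done before the masking decisions.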


A complete description of the data set – along with references for the calibration and mask procedures used by NSIDC (these are likely NOT the same mask procedures Steig used) – is here:


http://nsidc.org/data/docs/daac/nsidc0066_avhrr_5km.gd.html


On top of all of this, there are periods of completely missing data – with 2004 taking the prize (66 days missing).


This will make it quite the task to figure out what the input data was when Steig et al. generated their initial 3 PCs for the satellite data, unless Comiso posts it. Even then, figuring out how they got from the raw data to the input data will be an interesting task in and of itself.

Re: Ryan O (#100),

Thanks much, Ryan, for the background information. I would be most interested in what you find if you do follow through with the “complete” process.

Re Jeff C’s flow chart on #82 again,

Is it possible that at one stage only 2 PCs were used, whence the virtually zero coefficients on PC3 before 1982 or so?

Re: Hu McCulloch (#87),

I don’t believe that’s the case. I guess PC3 is just a result of the way RegEM works. I would expect low average correlation of PC3 (post-1982) with the READER set compared to the other two PCs (I haven’t checked, though).

I do not know much about the background of using these techniques in climate studies, but there does seem to have been an assumption that the most significant PCs are the most likely to contain the trend.

Now I would have thought that these PCs would be dominated by the weather, because that is where the majority of the variance comes from.

Is there a theory that was relied on, that indicates that climatic trends are like weather patterns when looked at spatially?

Or is it simply that they only had these 3 PCs and as they were the most significant it was assumed that they should contain the majority of the trend?

Alex

Oops, I meant “Including the two little red/orange horns in the East”.

Now I look again, Steig 1 is also very similar to the altitude contours in the East. So it could be spatial not just in outline but due to internal details like height – or just plain height, as that includes the coastal outline.

Alex

Sorry to keep posting in little bits, but Steig 1 probably looks a lot like a mean temperature map, which is to a large degree a function of altitude and separation from the coast.

Alex

NSIDC Summary:

“The Mean Annual Temperature map was calculated by creating a contour map using compiled 10 meter firn temperature data from NSIDC and other mean annual temperature data from both cores and stations.”

RE 79, 80,

OK, I think I understand the mean plot now — if the recon is a linear combination of basically only 2 EOFs (since EOF3 is for some reason getting a virtually zero weight from PC3 before 1980), then it will often look like one or the other EOF. Since the recon was normalized to 0 over 1982-2006 (I get numbers like 1e-9), it doesn’t have to average to 0 over the earlier period, and hence the full average may, by chance, happen to look like one of the EOFs.
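Hu’s observation falls out of the algebra: every PC of an anomaly matrix has zero mean over the anomaly period, so any truncated PC reconstruction averages to ~0 over that period too, while nothing constrains the mean before it. A toy numpy demonstration with random data (not the Steig matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_space, k = 300, 50, 3

# Toy data matrix: rows = months, columns = gridcells
X = rng.standard_normal((n_time, n_space))
X -= X.mean(axis=0)                    # convert to anomalies (zero time-mean)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
pcs, eofs = U[:, :k] * s[:k], Vt[:k]   # k leading PCs (time) and EOFs (space)
recon = pcs @ eofs                     # rank-k reconstruction

# Each gridcell of the truncated recon averages to ~0 (machine precision)
# over the anomaly period, mirroring Hu's ~1e-9 means over 1982-2006.
max_abs_mean = np.abs(recon.mean(axis=0)).max()
```

The pre-calibration portion of a recon extended with RegEM has no such constraint, which is why its full-period average can drift toward one EOF’s pattern.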

I suppose EOF1 looks like the standard deviation simply because the sd measures the variation and EOF1 finds the direction of greatest variation. The fact that the sd looks like altitude per Alexander Harvey’s map in #93 is interesting, but presumably has a physical cause.

Incidentally, the compass rose in Alexander’s map is somewhat ironic, since from the S pole, all directions are N! East is clockwise, and West is counterclockwise. “West Antarctica” is not the part that is W of the pole, but that which lies in the W Hemisphere, at W longitudes. Do the 3 people who live in Antarctica actually call Greenwichwards “N”, etc?

RE Jeff C, #90,

In Matlab a colormap is an n×3 array of numbers between 0 and 1 indicating RGB. The map I constructed for #79 has 13 key colors, but is 121×3, with the key colors at every 10th position and linear interpolation in between. Plotting routines like SURF pick a line from the colormap for each point, but 10 shades are enough to make the transitions look gradual. If I had just given it the 13 key colors as a 13×3 colormap, there would be bands (I think).

The dozen or so standard-equipment Matlab colormaps, like JET, HSV, HOT, etc., are all 64×3 for some reason. HSV has trouble with this, since when it rotates through the 6 colors RYGCBM by dividing 64 by 6, it doesn’t quite hit some of the pure colors.

I suppose R works similarly.
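For what it’s worth, R has this built in as colorRampPalette. The scheme Hu describes – 13 key colors at every 10th row of a 121×3 map – is just channel-wise linear interpolation; here is a numpy sketch of the idea (not Hu’s actual map, whose key colors I don’t have):

```python
import numpy as np

def expand_colormap(key_colors, steps_between=10):
    """Linearly interpolate an (n x 3) array of RGB key colors (values in
    [0, 1]) so there are `steps_between` rows between consecutive keys,
    e.g. 13 keys -> a 121 x 3 colormap."""
    key_colors = np.asarray(key_colors, dtype=float)
    n = len(key_colors)
    x_key = np.arange(n) * steps_between            # key rows: 0, 10, 20, ...
    x_out = np.arange((n - 1) * steps_between + 1)  # every output row
    # Interpolate each of the R, G, B channels independently
    return np.column_stack([np.interp(x_out, x_key, key_colors[:, c])
                            for c in range(3)])
```

Ten interpolated shades between keys is usually enough to hide the banding that a raw 13-row map would produce.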

Wow, good luck Ryan, it sounds like an overwhelming task. Better apply for some grant money. I’ve been working on something similar but on a much smaller scale. Univ. of Wisconsin has monthly mean data for the 25 x 25 km cells through 2004. I’m collecting it and writing some R scripts to process it, but it may come to nothing. I don’t see how you can apply any sort of cloud mask to monthly means. I considered the route you are taking but figured my wife would divorce me.

Here is the UW link if anyone else is interested. http://stratus.ssec.wisc.edu/products/appx/appx.html

Hu, thanks for the plotting tip. I have been working in R, but I think the steps you described for Matlab are close to what I need to do.

RE 100, 101 —

Amazing, Ryan! For us mortals, I think it might actually be easier to get the data and procedures out of the authors, per Nature’s data availability policy, at http://www.nature.com/authors/editorial_policies/availability.html.

I suppose that any request should be directed to both Steig (as coordinating lead author) and to Comiso (as the AVHRR expert who actually compiled this data and promised it to Steve “soon” back in January). You sound like a logical person to ask for it, since you are so familiar with its details. It wouldn’t hurt for you to ask as well as Steve.

Even if documenting all the procedures used, as required by Nature, might take some time, there is no excuse for their not immediately releasing the presumably 600X5509 raw AVHRR file, as well as the 600X5509 screened file whose first 3 PCs evidently went into RegEM along with the READER data.

Re: Hu McCulloch (#102), I agree. Steve asked me the same thing. I’ve also asked NSIDC if they provided Steig (or Comiso) a more prepared set of data. I have not made a request of Comiso yet, but I will.


Even if Comiso is able to come through, that still leaves an open question as to how sensitive the results are to the exact procedure they used. If the results are not very sensitive, then reprocessing the AVHRR data might not be of extraordinary benefit. I would suspect, however, that the results are highly sensitive to the method used given the disparity between Comiso’s earlier findings, Monaghan’s (sp?) findings, and now Steig’s. If they are highly sensitive, then playing with the raw data could have some impact.


For me, there were 2 big red flags with the paper.


The first was the use of RegEM for the reconstruction. Given the testing done in Tapio’s paper, I could not see how anyone could put reasonable confidence levels on the output. There are too many unknowns and (in my opinion) a very likely possibility that the values of the missing quantities are actually related, via geography, to the fact that they are missing (Gavin’s comments notwithstanding).

The second red flag was the meshing of the AVHRR data with ground instrumentation. Unless the AVHRR data can be properly calibrated to ground instrumentation, there’s no guarantee they are measuring the same quantity (and many, many reasons to believe they are not). In that case, splicing them together is an exercise in futility. Based on the description of what was done and the extremely low correlation coefficients described in the SI – low in terms of an instrument calibration, anyway – I did not feel they had shown with any level of certainty that the ground instrumentation and the AVHRR data were measuring the same physical quantity. In other words, they had not presented sufficient justification that they could do what they did and still obtain physical results.


So even if Comiso posts the processed data, there is potentially a lot of information to be gained by attempting an independent calibration.
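As a baseline for what an “independent calibration” check might look like in practice: regress the station series on the collocated satellite series over the overlap and inspect the correlation. A bare-bones sketch (a generic OLS check, not the paper’s method; the function name and interface are mine):

```python
import numpy as np

def calibration_check(sat, ground):
    """OLS fit of ground ~ a + b*sat over the overlap, plus the correlation
    coefficient. A low r would flag the two instruments as measuring
    poorly-related quantities, whatever the fitted slope says."""
    ok = np.isfinite(sat) & np.isfinite(ground)  # use only shared months
    b, a = np.polyfit(sat[ok], ground[ok], 1)    # slope, intercept
    r = np.corrcoef(sat[ok], ground[ok])[0, 1]
    return a, b, r
```

The SI’s verification statistics would then be judged against what r one normally demands of an instrument calibration, which is the standard Ryan is applying above.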

RE #97 —

Alex, what are the yellow dots? Penguin migration routes?? GISS Urban areas??

More seriously, what’s the NSIDC URL where you found this interesting graphic?

Hu,

THERMAP:

Ice Temperature Measurements of the Antarctic Ice Sheet

Interesting stuff – it seems that teams have been roaming Antarctica since 1957, drilling holes and taking subsurface temperatures. The penguin tracks are from the last 5 treks. The yellow dots are, I guess, the sampling sites. Anyway, they seem to have archived all the data. A really clear and friendly site.

The graphic I used seems to have lost its temperature scale, but the one at this link is fine. It has many of the features of Steig 1 – but then it would, as do altitude, distance from the coastline, etc.

Alex

This seems as good a place as any to draw attention to a great post today by Ryan O at tAV: http://noconsensus.wordpress.com/2010/03/30/pca-sampling-error-and-teleconnections/

Much of the post can relate back to the many discussions about Steig et al here and at tAV, namely the physical basis for the eigenvectors used in Steig’s reconstruction.

I’ll try the link again: http://noconsensus.wordpress.com/2010/03/30/pca-sampling-error-and-teleconnections/

## 2 Trackbacks

[…] the shape of the 3 pcs as plotted across the antarctic by CA HERE and the fact that there are only the first two effective PC’s as shown by this plot […]

[…] that a featured Nature article in 2009 (Steig et al) also misinterpreted Chladni patterns. See here for a discussion. Also see https://climateaudit.org/tag/chladni/ for other […]