Ryan O on the Cloudmasked Data

The following notes are transferred from a comment by Ryan O, who has been doing excellent analyses of the Steigian swamp.

As I had mentioned before, there appear to be unaccounted-for offsets between the different satellites that make up the Comiso AVHRR cloudmasked data. I have spent a while trying to determine, first, whether the offsets actually exist and, second, what the result of correcting for them would be. The R script and two supplemental data files you will need to be able to replicate this are:
R Script: http://www.mediafire.com/download.php?dtuntzxadzz
Station Information: http://www.mediafire.com/download.php?ioyiizcmmzn
Updated READER Temps: http://www.mediafire.com/download.php?mygwgedfa4z

The first thing to note when plotting the AVHRR data against the ground temperatures (all manned and AWS stations) is that it appears to contain multiple populations that are not all equally correlated to ground temperatures. This could cause several problems when trying to determine satellite offsets, such as:
1. The multiple populations increase the data scatter, which decreases the ability to identify offsets.
2. A poorly correlated population with data concentrated only in certain times could cause mistaken identification of an offset.
3. Some of the populations are not linearly related to ground temperatures, which would exaggerate or suppress the magnitude of a calculated offset.
So the first thing we would need to do is identify the populations. This proved to be a somewhat challenging proposition, as they are all intermixed in the higher temperature range. After much trial and error organizing groups, plotting, reorganizing groups, replotting, etc., five distinct groups emerged:

In the R script, I retained and documented the plotting functions I used to help do the grouping. Function plt.stn() allows you to plot a particular station vs. the groups. If you want, you can go through it just to verify that there are, indeed, five separate groups and that I have the correct stations assigned.
After identifying the groups, we should check to see if there may be some physical reason that the AVHRR temperatures at different station locations would behave differently. The first thing would be to look for geographical significance (NOTE: The colors DO NOT match the above plot).

The main group, which was the long, skinny group in red on the scatter plot, corresponds to the Antarctic interior. The other groups are coastal. If I had to venture a guess, I would say that the difference in the shape of the curves is due to reflectivity differences between water, snow (which also changes with grain size), and ice. If so, this effect was also described (for 37GHz measurements) in Shuman (2001) linked by Roman earlier:

Shuman (2001): i1520-0442-14-9-1977.pdf

Now that we’ve identified our groups, we need to calibrate them to the ground temperatures. Doing this on a station-by-station basis would be suspect since many of the stations have a small number of points. Within a group, however, we have a much larger number of points, so we can be more certain of our transforms.
The process of doing the calibration (after trying lots of things) ended up being fairly simple, as sketched after the list:
1. Bias correction
2. Nonlinearity correction
3. Fine bias correction
4. Fine nonlinearity correction
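
To make those four steps concrete, here is a minimal sketch of the sequence applied to one group's matched satellite and ground temperatures. This is not the posted script's code; cal.group, sat.grp, and gnd.grp are placeholder names, and the quadratic fit stands in for whatever functional form the script actually uses:

cal.group=function(sat.grp, gnd.grp) {
  ### 1. Bias correction: remove the mean satellite-minus-ground offset
  s1=sat.grp-mean(sat.grp-gnd.grp, na.rm=TRUE)
  ### 2. Nonlinearity correction: map satellite temps onto ground temps
  fit1=lm(gnd.grp ~ s1+I(s1^2))
  s2=predict(fit1, newdata=data.frame(s1=s1))
  ### 3. Fine bias correction on the remaining offset
  s3=s2-mean(s2-gnd.grp, na.rm=TRUE)
  ### 4. Fine nonlinearity correction
  fit2=lm(gnd.grp ~ s3+I(s3^2))
  predict(fit2, newdata=data.frame(s3=s3))
}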
The result:

The next step is to convert to anomalies. Care has to be taken here. Remember that the purpose is to try to determine offsets between satellites. Because the ground data is discontinuous with large chunks missing, simply making the base periods the same when converting to anomalies is not enough. Instead, we will convert to anomalies using the entire time frame (1982-2006) and using ONLY months for which there is corresponding ground data. This makes sure that the comparison between the calibrated anomalies and the ground anomalies is an apples-to-apples comparison.
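
A minimal sketch of that anomaly step (placeholder names: sat.cal is one station's calibrated 300-month satellite series, gnd the matching ground series; it assumes every calendar month has at least one ground value):

anom.common=function(sat.cal, gnd) {
  use=!is.na(gnd)                            # keep only months with ground data
  mon=rep(1:12, length.out=length(sat.cal))  # Jan 1982 - Dec 2006
  clim=tapply(sat.cal[use], mon[use], mean, na.rm=TRUE)  # monthly climatology
  sat.cal-clim[mon]                          # anomalies vs that climatology
}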
After converting to anomalies, we need to find some test to determine if there are statistically significant offsets between the satellites. For this we will use a paired Wilcoxon test (since the residuals are non-normal – I checked) with a 24-month window. The estimate of the location shift (the Wilcoxon analogue of a difference in means) will be normalized to the 95% confidence interval to allow continuous plotting of the points as we move through all 300 rows of the data sets. If there is a statistically significant offset between satellites, we will see a peak, approximately in the center of the satellite coverage period, that exceeds 1.0:

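A minimal sketch of that rolling test (again placeholder names, with sat.anom and gnd.anom the matched anomaly series for a region; the posted script's windowing details may differ):

roll.wilcox=function(sat.anom, gnd.anom, win=24) {
  out=rep(NA, length(sat.anom))
  for(i in 1:(length(sat.anom)-win+1)) {
    idx=i:(i+win-1)
    ok=!is.na(sat.anom[idx]) & !is.na(gnd.anom[idx])
    if(sum(ok)>5) {
      w=wilcox.test(sat.anom[idx][ok], gnd.anom[idx][ok], paired=TRUE, conf.int=TRUE)
      ### Normalize the location-shift estimate by the half-width of its 95%
      ### CI; values above 1.0 flag a statistically significant offset
      out[i+win%/%2]=abs(w$estimate)/(diff(w$conf.int)/2)
    }
  }
  out
}
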
The biggest feature is the huge spike with NOAA-14. Without a doubt, there is a statistically significant offset with NOAA-14. NOAA-7 and -9 are also low; NOAA-11 looks generally okay except for the massive dip at the end (which I have not come up with a satisfactory way of handling yet); and NOAA-16 and -17 also look okay.
Now that we’ve convinced ourselves that the offsets are real, it is time to calculate them. We obtain:

-0.136315035 -0.217185496 -0.097247497 0.215448620 -0.006319678 -0.195520508

It’s pretty obvious that these factors will reduce the trends. We get a continent-wide trend of 0.074 +/- 0.158 deg C/decade (compared to 0.187 +/- 0.151 from the Comiso data).
However, had we simply calculated offsets without going through the above calibration, we would have gotten:

-0.14944679 -0.16962065 -0.04562425 0.33161241 0.09104473 -0.05796546

Note that this would have decreased the trends even further – to the tune of a continent-wide average of 0.032 +/- 0.149 deg C/decade.
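
For reference, trends and CIs of the form quoted here can be computed along these lines. This is a plain OLS sketch with a t-based interval; the actual calculation in the script may differ (e.g., in how autocorrelation is handled):

trend.ci=function(anom, start.yr=1982) {
  yrs=start.yr+(seq_along(anom)-1)/12   # decimal years for a monthly series
  fit=lm(anom ~ yrs)
  slope=coef(fit)[2]*10                 # deg C/decade
  ci=qt(0.975, df.residual(fit))*summary(fit)$coefficients[2, 2]*10
  round(c(trend=unname(slope), ci95=ci), 3)
}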
Original Comiso trends (deg C/decade and 95% CI):

Peninsula 0.406552585 0.1925186
West Antarctica 0.411971312 0.2217650
Ross Ice Shelf -0.104963672 0.2206460
East Antarctica 0.225650354 0.1942384
All 0.187422507 0.1510635

Calibrated trends (deg C/decade and 95% CI):

Peninsula 0.29165083 0.2282490
West Antarctica 0.26177232 0.2502632
Ross Ice Shelf -0.12965218 0.2714344
East Antarctica 0.06001991 0.2033417
All 0.07399090 0.1572431

Here’s a plot showing my geographical groupings:

Teaser:
Hm.
Trends about halved – very similar to what the Jeffs got by regridding. Common theme, maybe? Suspiciouser and suspiciouser . . . but that’s enough for now. There’s a lot more in the script I posted – you can compare the main, PCA, and AWS recons as well. There’s also some singular value decomposition at the end which isn’t finished yet and will be the subject of another post. Until next time, however, I will leave you with this curious plot:

The blue line is simply a slope of 1, provided for scale.
Unlabeled, one might have mistaken this for the Small Magellanic Cloud:

40 Comments

  1. Steve McIntyre
    Posted Apr 5, 2009 at 9:37 PM | Permalink

    Steig took huge exception to someone at CA misspelling his name, apparently blaming me for this misdeed. Stygian means “a. Gloomy and dark.
    b. Infernal; hellish”

    as in Stygian RegEM methodology, not to be confused with Steigian RegEM methodology.

  2. Layman Lurker
    Posted Apr 5, 2009 at 9:54 PM | Permalink

    I copied my comment from the “Steig’s Secret Data” thread.

    Ryan, very impressive post. Thanks for all the work. Regarding NOAA 11, is it possible that Pinatubo interfered with the signal recognition at the tail end of this period?

  3. Posted Apr 5, 2009 at 10:32 PM | Permalink

    Good stuff. The sat data is a messy network and after looking at raw UWisc data it’s hard to imagine what Comiso did to make his set so much better.

    I’m thinking of RegEM as a high frequency correlation matrix for the surface station data to be distributed on. Steig’s paper basically admits that point when detrended sat data is used for reconstruction confirmation. Steps in the data become less important, but not unimportant.

    Steig could have kept going with their methodology to remove satellite data trends entirely simply relying on surface data. I was working on that yesterday when I ran into a problem. I found that due to the 10+ to 1 ratio of short to long term signal, RegEM is completely insensitive to the long term trend of a station. The trends are copied positively for positive short term correlations and negatively for negative. I’ve done some preliminary calcs and the plots aren’t that different from the Magellanic cloud either.

    So how come I bring it up here? First, Comiso did an excellent job massaging the sat data into good shape for high freq correlation. The data looks great. It has a highly controlled spatial trend pattern and good correlation to surface. As has been shown through correlation vs distance plots, using a low-PC reconstruction basically guarantees that RegEM will copy everything everywhere, making the trend across the continent appear reasonable (spurious positives and negatives on top of each other) while holding the spatial shape of the low-PC satellite data (some trend is kept).

    The higher order PC analysis I did, where I am the first man to discover the massive cooling on the Ross ice shelf, really brought out this point to me. BTW: All the climatologists and their instruments were wrong ’cause my computer shows heavy cooling.. 😉 The 10-PC reconstruction also shows a 6-lobed Chladni pattern corresponding to the PC=8 plot by our host.

    For RegEM to be accurate, there has to be a good deal of faith that high freq positive correlations have matching long term trends. My criticism of this paper, at this point, lies in the failure to confirm that this is the case. There’s no QC whatsoever. In fact, despite the apparent realization that this is a problem (confirmed by the no-trend sat recon), all the confirmation revolved around the high frequency correlations.

    Anyway, I hope this makes some sense. Yesterday morning I had all these great thoughts of improving on RegEM. At this point, I don’t believe it is a workable method for high noise (short term fluctuation) data.

    • Ryan O
      Posted Apr 6, 2009 at 12:51 PM | Permalink

      Re: Jeff Id (#3),
      You might be able to help with this.
      .
      There is an odd thing that happens to the Comiso data when you use only common months between the sat and ground to calculate anomalies. This doesn’t happen in my post-calibrated version. If you calculate anomalies on the Comiso set using all available months, you get a scatterplot like this [use “plt.grp(dat=sat.orig.anom.all, base=gnd.anom)” in the script to duplicate]:

      .
      It looks almost exactly like the scatterplot with the calibrated data. However, if you calculate anomalies on the Comiso set using ONLY common months between ground and sat data, the plot changes dramatically “plt.grp(dat=sat.orig.anom, base=gnd.anom)”:

      .
      Even curiouser, the eigenvalues of the SVD for the top one look like this “plt.svd(dat=svd.orig.all)”:

      .
      But the eigenvalues of the SVD for the bottom one look like this “plt.svd(dat=svd.orig)”:

      .
      Changing the anomaly period shouldn’t do this – unless there is something special about the data points within that period.
      .
      With my calibrated set, the eigenvalue scaling and scatterplots look virtually identical regardless of anomaly base period. With Comiso, there’s more variability, with a distinct change the closer you get to using common months between sat and ground for calculating the anomaly period. Why would the months where ground data exist be so special?
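      A quick way to see the eigenvalue difference for yourself, as a sketch (the *.anom objects are the anomaly matrices from the script; svd() will not accept NAs, so missing values must be removed or infilled first):

      frac.var=function(x) {d=svd(x)$d; d^2/sum(d^2)}
      plot(frac.var(sat.orig.anom.all)[1:20], type="b",
           xlab="Eigenvalue rank", ylab="Fraction of variance")
      points(frac.var(sat.orig.anom)[1:20], type="b", col=2)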
      .
      Without having the mathematics background to confirm this for myself, I think this indicates that the monthly sat means are NOT just the leftover days after cloud masking was done. I think they infilled missing days, using the ground data to help form the covariance matrix, and they did it using anomalies, not raw temperatures. That’s why those points are special, and that’s why when you form your anomalies using those months that a single eigenvector gets to account for such a large percentage of the variation in the set.
      .
      Pure speculation at this point, but I figured I’d throw it out there and see what you think.

  4. Jeff C.
    Posted Apr 5, 2009 at 11:20 PM | Permalink

    I just left this comment on the other thread, but I’ll repost it here.

    Great post Ryan. I got all your code and am going to try some similar tests to the U Wisconsin dataset. In that data there are also oddities as you transition from one spacecraft to another. Funny thing, unlike what you show here, in that dataset NOAA-14 looks okay and NOAA-16 is the oddball.

    When you performed your calibration, did you apply the same correction to each station within each of the five groups? I got that impression from reading the text but wanted to make sure. I think you need to do it that way as opposed to each station getting its own tweak.

    To me, it looks like the geographical breakdown is something like this:
    East Coast/Peninsula (red)
    East and West Interior (black)
    Ice shelves (blue)
    Ross Sea Coast (green)
    West coast (light blue) – this one is iffy, but there aren’t many points

    These breakdowns make sense to me. It is not so much the region as the commonality of the physical environment.

    One last point and this is totally off the wall. I realize they are completely different things, but did you notice how much your Wilcoxon test plot looks like the MSU temp anomaly plot? Coincidence? Probably, but they are quite similar. In fact, when I was just scanning the post I thought that was what it was.

    • Ryan O
      Posted Apr 6, 2009 at 12:24 PM | Permalink

      Re: Jeff C. (#4),
      My guess with the NOAA-16 stuff is that UWisc didn’t do as good a job as Comiso of removing data where the scan motor didn’t work properly. From your posts at the Air Vent, the larger variability in the UWisc data and the cloud masking method (where you showed an increase in the frequency of removed points with time, along with the sudden change in method for NOAA-16) probably changed the appearance quite a bit. In the UWisc data, NOAA-16 probably actually is the oddball. In Comiso, it’s NOAA-14. Which satellite shows up as odd may be as much a function of how the data is processed as it is a function of the satellite itself.
      .
      On the RSS plot – weird. I don’t really know why they would be correlated like that. Maybe coincidence . . . but they do look very similar.

  5. Mike Lorrey
    Posted Apr 6, 2009 at 2:21 AM | Permalink

    With all this interpolation handwavium going on, has anybody thought to test the methodology against a well-known dataset, say GISS: take out data from a number of areas, see how well these methods do at interpolating the right results, and figure out their accuracy?

    • Craig Loehle
      Posted Apr 6, 2009 at 8:57 AM | Permalink

      Re: Mike Lorrey (#5), “Handwavium”: a new form of matter, a form of the alchemist’s stone, that allows miraculous transformations of random data into whatever pattern the user wishes. Use sparingly, as use of Handwavium is addictive.

      • Mike Lorrey
        Posted Apr 9, 2009 at 1:35 AM | Permalink

        Re: Craig Loehle (#10), handwavium is a catalytic element used to generate sufficient quantities of unobtainium needed to supply thermometers in virtual weather stations as a means of measuring unobtainable temperature records, without which Pretend Component Analysis cannot take place.

  6. Geoff Sherrington
    Posted Apr 6, 2009 at 5:47 AM | Permalink

    That 1998 temperature peak mentioned in passing above does not appear on 3 Australian Antarctic ground stations (Mawson, Davis, Macquarie Is). Looking at 16 other Australian rural stations I’ve studied, it hardly shows, is sometimes among the cooler of the recent 40 years, is occasionally highish (especially Tmin) in places like Gove, Northern Territory. Its only “strong” expression is way offshore at Lord Howe Island.

    I wonder what the explanation could be?

  7. MrPete
    Posted Apr 6, 2009 at 6:05 AM | Permalink

    I don’t want to read too much into this, but are you saying part of the 1998 peak, from a sat-measure perspective, could be sat sensor bias in NOAA-14?

    • Geoff Sherrington
      Posted Apr 7, 2009 at 6:30 PM | Permalink

      Re: MrPete (#7),

      Not certain if you referred to my post above yours, but I am puzzled as to why most ground station temperatures I have looked at in and around Australia do not show the 1998 peak. Whether it is regionally confined like the MWP in Great Britain, ahem, or whether it is an artifact of satellite processing, or surface record adjustment, I do not have enough basic info to infer a cause.

      However, I do think it is rather important and worth examination by others with better access to data.

  8. Andy
    Posted Apr 6, 2009 at 6:49 AM | Permalink

    So do your halving of the trend and the Jeffs’ halving of the trend add together?

  9. Hu McCulloch
    Posted Apr 6, 2009 at 7:06 AM | Permalink

    Ryan and Steve —
    reader.data.04-04-2009[1].txt and station.data[1].txt at mediafire are coming through garbled. Is it possible to post these at CA?

  10. Posted Apr 6, 2009 at 10:59 AM | Permalink

    I believe Handwavium was first discovered in 1989 at the Melvin Dummar Institute of Physics in Utah.

  11. jack mosevich
    Posted Apr 6, 2009 at 11:00 AM | Permalink

    The Wilcoxon graph is a classic head-and-shoulders from technical analysis. It’s time to sell:

    http://www.investopedia.com/terms/h/head-shoulders.asp

  12. AnonyMoose
    Posted Apr 6, 2009 at 11:32 AM | Permalink

    Do those satellites always take readings, or do they omit collecting data under some situations? I was recently wondering if Landsat metadata of cloud cover could be used to measure decades-old cloud cover, and noticed that images are not even made when NOAA predicted 80% cloud cover… so to misuse their cloud cover info (which they intended as a flag for bad-data regions) one also needs to figure out what images are missing. I haven’t found a copy of their metadata archive, so I don’t know if they make entries for non-images.

  13. Ryan O
    Posted Apr 6, 2009 at 1:04 PM | Permalink

    Oh, yah . . . one other thing . . . after I calibrated the data, I also found massive cooling on the Ross Ice Shelf. The confidence intervals were quite large, however.
    .
    But the combination of us both finding Ross cooling, the overall trend being halved, and more region-to-region trend variation by two completely different, unrelated methods hints at something suspicious.

    • curious
      Posted Apr 6, 2009 at 5:14 PM | Permalink

      Re: Ryan O (#17), Re: Ross Ice Shelf temp –
      Maybe at a bit of a tangent but here’s a plot from NASA Nov 2007:

      http://earthobservatory.nasa.gov/IOTD/view.php?id=8239

      It has this paragraph of text in the accompanying commentary:

      “Across most of the continent and the surrounding Southern Ocean, temperatures climbed. In some places the rate of warming approached a tenth of a degree Celsius each year, which would translate to more than two degrees over the entire period. The most dramatic changes appear as solid red streaks and splotches. In most cases, these changes are likely linked to major iceberg calving events on the ice shelves that fringe the Antarctic coastline, including the Ross Ice Shelf and the West and Shackleton Ice Shelves in East Antarctica. In the case of the Larsen B Ice Shelf on the Antarctic Peninsula, the entire ice shelf collapsed. After the calving or collapse, the satellite saw open water where there had previously been ice, so the temperature increase was stark.” (my bold)

      The pattern of temperature trend along the seaward edge of the Ross shelf seems to match strongly with the recent calving line of Dec04 shown in the map reported on here:

      http://pubs.usgs.gov/imap/i-2600-h/

      Similar pattern goes for the Larsen Shelf which is reported on here:

      http://pubs.usgs.gov/imap/i-2600-b/

      There is another report on the Ronne shelf which shows the same – calved edge/exposed sea = high temp.

      Link to the READER database trends here:

      reader.temp.pdf

      The READER database annual bars at Faraday and Esperanza show a warming trend on the peninsula, so there is presumably a real air temperature change here.

      My comment/query is – is it possible to filter the satellite data to eliminate the “edge” info, and what does this do to the trends deduced? And in turn to the correlations to the surface stations? I wondered if this would be caught by the +/- 10 deg C cloud filter I’ve seen mentioned, but I’d guess in a calving situation that will not apply, as I think it was a daily filter whereas calving will be a step change?

      Sorry if this is heading a bit OT and/or has already been discussed and I’ve missed it. Move/point to ref/snip if req.

    • Posted Apr 6, 2009 at 5:17 PM | Permalink

      Re: Ryan O (#17),

      This is very interesting and a bit complicated. The graphs give a good understanding of your result, but of course the key is in the detail. I’ve taken one slow read-through of the code, which is really nice. It reads like a doctor’s office, very clean and organized, which helps a lot. (what’s the emoticon for thanks) The full detail of the calibration is a little difficult because you are much more advanced in R than myself and it takes time for me to figure out.

      I should have some time over the next couple of days to run it and really understand the detail of your recalibration. In the meantime, I’ll keep reading the others’ comments.

  14. AnonyMoose
    Posted Apr 6, 2009 at 1:40 PM | Permalink

    Why would the months where ground data exist be so special?

    Do they turn off the instrument during the month when the ozone hole is at its peak? Is the instrument affected by the moon, therefore there’s something odd about once a month? Hmm… there is a channel used for night cloud monitoring, which might be useful during Antarctic night, but is that affected by the phases of the moon?

  15. Kenneth Fritsch
    Posted Apr 6, 2009 at 4:09 PM | Permalink

    I find it a bit sad and not a little frustrating that reasonable questions popping out of these detailed, time-consuming and well-designed analyses could not be addressed directly to the appropriate authors with some assurance of provoking a reply, if for no other reasons than scientific curiosity and the tendency of good scientists to want to communicate their views.

    Do any of you think that a summary of these questions linked to the results of the analyses and politely addressed to the appropriate scientists would have a reasonable chance of being addressed?

    I may be naive in thinking that scientists in this field are comfortable discussing issues in a public forum other than in peer-reviewed literature.

    • stan
      Posted Apr 7, 2009 at 6:36 AM | Permalink

      Re: Kenneth Fritsch (#19),

      I may be naive in thinking that scientists in this field are comfortable discussing issues in a public forum other than in peer-reviewed literature.

      Has the term “peer review” become a pejorative yet?

    • bender
      Posted Apr 7, 2009 at 6:42 AM | Permalink

      Re: Kenneth Fritsch (#19),
      Ken, these authors claimed they uncovered continental-scale Antarctic warming, when in fact the warming is isolated to the peninsula. Do you really think they are going to engage the “deniers” on the topic of their faulty methods? They are so shamefully wrong that they would be foolish not to hide.

  16. Tim Channon
    Posted Apr 6, 2009 at 8:20 PM | Permalink

    “That 1998 temperature peak mentioned in passing above does not appear on 3 Australian Antarctic ground stations (Mawson, Davis, Macquarie Is). Looking at 16 other Australian rural stations I’ve studied, it hardly shows, is sometimes among the cooler of the recent 40 years, is occasionally highish (especially Tmin) in places like Gove, Northern Territory. Its only “strong” expression is way offshore at Lord Howe Island.”

    There has been rumbling that the 1998 peak was an instrument malfunction, to do with ionising radiation. Seems unlikely.

    All the following agrees with you.

    No 1998 TLS peak in Antarctica

    or this one

    or this one

    It does appear further north, in muted form, in the southern extratropical ocean.

  17. sean egan
    Posted Apr 7, 2009 at 12:24 AM | Permalink

    In the earlier posts there has been some talk that NOAA-16 or another NOAA satellite is problematic because it does not match the ground data. You cannot re-calibrate or give any special treatment to individual satellites using the ground record, and then compare the ground station record with the satellite record.

    So if NOAA x or y is suspect, you need to look at the splicing: are any results robust to using another published splicing? If, as has been suggested, ground records have been used to infill the satellite record, you need to unfill first.

  18. Posted Apr 9, 2009 at 9:13 AM | Permalink

    I’ve been out of town for the last couple of days, and I calculated an alternative reconstruction for the Antarctic which doesn’t use RegEM and isn’t sensitive to the long-term trend of the cloud data. My intent was to address the problems we know about in the RegEM method and provide an improved baseline so we have an idea what the data should look like.

    The Antarctic, an engineer’s reconstruction.

  19. Ryan O
    Posted Apr 9, 2009 at 10:53 AM | Permalink

    Cool. BTW, I am halfway through calibrating the entire Comiso set using the correlations to the ground stations. It would be interesting to run that set through your engineer’s reconstruction after I’m finished.

  20. Ryan O
    Posted Apr 10, 2009 at 2:17 PM | Permalink

    PART TWO: Calibrating the entire cloudmasked AVHRR set.
    .
    After having obtained correction curves for the different groups where ground stations were present, the next task is to determine characteristics that can be used to group grid points where no ground data exists. This will allow us to calibrate the entire cloudmasked set.
    .
    This was a bit of trial-and-error. I had to find an expression that allowed me to distinguish between the groups. The bit of R script for this expression is here:

    sort.fn=function(x, dat=sat.temps) {

      ### Normalize the range of temperatures to be from 0 to 1.0
      a=(dat[1:300, x]-min(dat[1:300, x]))/(max(dat[1:300, x])-min(dat[1:300, x]))

      ### Center on the mean and enhance the spread by multiplying each
      ### point by its absolute value
      a=a-mean(a); a=a*abs(a)

      ### Sort into 1/100th bins; return mean temp (Kelvin), mode (bin),
      ### frequency of the mode (bin), ratio of mean temp to mode, and the
      ### min and max temps
      a=(round(100*(a-min(a)), 0))/100
      means=mean(dat[1:300, x])
      mode=names(sort(-table(a)))[1]
      frq=-as.vector(sort(-table(a)))[1]
      ret=c(as.numeric(means), as.numeric(mode), frq,
            as.numeric(means)/as.numeric(mode),
            min(dat[1:300, x]), max(dat[1:300, x]))
      ret
    }

    What this does is enhance the peaks within the temperature distribution. Group 3 has the long, flat tail, so it will have a peak toward the beginning. Groups 2 and 5 will have peaks toward the end. This allows me to sort into Groups 2, 3, and 5 based on where the peak occurs and the frequency of points that fall within that peak. To distinguish between Group 1 and Group 4, I primarily used temperature. The code to do the sorting is here:

    groups=matrix(ncol=5509, nrow=1)
    sort.gp=sapply(1:5509, sort.fn, dat=avhrr)
    groups[1, ]=4
    ### NOTE: The blog software has eaten some of the comparison operators
    ### in the conditions below (e.g. "sort.gp[1,i]260"); the working
    ### version is in the updated R script linked at the end of this comment.
    for(i in 1:5509) {groups[1,i]=ifelse(min(avhrr[,i])<235 | sort.gp[6,i]<269 | sort.gp[1,i]260, 2, groups[1,i])}
    for(i in 1:5509) {groups[1,i]=ifelse(range(avhrr[,i])[2]-range(avhrr[,i])[1]1000 & sort.gp[2,i]245 & sort.gp[3,i]>30, 3, groups[1,i])}
    for(i in 1:5509) {groups[1,i]=ifelse((groups[1,i]!=1 & groups[1,i]!=2 & groups[1,i]!=5) & ((sort.gp[2,i]>.19 & sort.gp[3,i]>3) | (sort.gp[1,i]>256) | (sort.gp[2,i]>45)), 4, groups[1,i])}
    for(i in 1:5509) {groups[1,i]=ifelse(groups[1,i]!=3 & (min(avhrr[,i])<235 | sort.gp[6,i]<269 | sort.gp[1,i]<251), 1, groups[1,i])}

    The sequence of the code is important. It is easy to tell between 2, 3, and 5, but not so easy to tell between 1, 3, and 4. Since Group 1 has the smallest correction factor, I defaulted to placing cells not firmly in 3 or 4 into 1. The result of this calculation is below:

    .
    The next part, after having placed all the data into groups, is to apply the linearity corrections from the initial post. IF my grouping is proper, when I compare to the ground stations, I should get a similar result as in the first post.
    .
    Cross fingers and run script.
    .
    Result:
    .

    .
    Damn near the same thing. *sigh of relief* This is good news. Without explicitly identifying any station locations, we were able to properly sort into the correct groups and apply non-linearity corrections that result in a better fit than the original data.
    .
    I might even go so far as to call this a real calibration.
    .
    Every single cell was tested vs. exactly the same criteria, appropriate corrections applied, and we ended up with a data set that responds linearly to changes in ground temperature. Bonus.
    .
    There are a couple of differences, though. They are primarily in the 270+ Kelvin range. Some of the Peninsula stations are 200+ km from the nearest grid cell, and my sorting method placed them in Group 2 instead of Group 5. Since the non-linearity correction for Group 2 is much smaller than Group 5, this is the conservative way to apply the correction. It means they retain more of the high-temperature tail, but as they are so far from the nearest grid cell, it does not seem wise to force the grid cell to match.
    .
    Okay. Now, with that done, let’s see how our trends look compared to the manually calibrated set in the initial post:
    .
    MANUALLY CALIBRATED (using all available data for anomaly calculations):

    Peninsula 0.29677849 0.1889836
    West Antarctica 0.30699523 0.2186480
    Ross Ice Shelf -0.21351228 0.2300461
    East Antarctica 0.11679406 0.1869936
    All 0.07882804 0.1489791

    .
    ALGORITHM CALIBRATED:

    Peninsula 0.26005423 0.1757284
    West Antarctica 0.30873169 0.2188066
    Ross Ice Shelf -0.23006654 0.2268117
    East Antarctica 0.12563573 0.1913120
    All 0.07018597 0.1496105

    .
    Almost identical.
    .
    We now have a data set that we can use to reconstruct Antarctica back to 1957 that is:
    1. Calibrated to ground temperatures
    2. Corrected for satellite offsets
    .
    The updated R script: http://www.filefactory.com/file/aga1b15/n/AVHRR_Cal_R
    .
    Calibrated AVHRR set (raw temps, no satellite correction): http://www.filefactory.com/file/aga1b42/n/calibrated_avhrr_txt
    .
    Calibrated AVHRR set (anomalies w/satellite correction): http://www.filefactory.com/file/aga1b3d/n/calibrated_avhrr_anom_txt
    .
    Save the data files and simply use R’s “load” command. They are already saved as time series.
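    For example (the file names here are assumed from the links above; adjust to whatever you saved them as):

    load("calibrated.avhrr.txt")       # calibrated raw temps (time series)
    load("calibrated.avhrr.anom.txt")  # anomalies with satellite correction
    ls()                               # check the object names that were loaded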

  21. Layman Lurker
    Posted Apr 10, 2009 at 4:53 PM | Permalink

    Yeoman’s work here Ryan. I’m sure the Jeffs will be rubbing their hands together when they read your post.

  22. Posted Apr 12, 2009 at 2:21 PM | Permalink

    Looks great Ryan, I need to take some time and run your scripts.

    In the meantime I’ve been creating new reconstructions based as much as possible on surface station data. This reconstruction is based on a closest station algorithm and uses no satellite data at all.

    Closest Station Antarctic Reconstruction

    The trend drops all the way down to 0.052 C/decade. Everywhere we look the trend comes out less than reported.

    I bet it’s consistent with the models though.

    • Ryan O
      Posted Apr 12, 2009 at 3:15 PM | Permalink

      Re: Jeff Id (#33), Everything is consistent with the models. That’s the great thing about models. They are always consistent. Even when they’re not. Inconsistency is consistent. 🙂

  23. Posted Apr 14, 2009 at 8:50 AM | Permalink

    Ryan,

    I just finished a short post on the satellite vs ground trends showing a warming trend in the sat data as compared to the surface stations.

    Warming the Raw Sat Data

    I’ve got a request to do a reconstruction without the peninsula data which I’ll do and then I’d like to try your data in some of the reconstructions.

  24. Ryan O
    Posted Apr 14, 2009 at 3:45 PM | Permalink

    Cool. I’ll take a look later tonight.
    .
    In the meantime, I generalized Steve’s mapping script to add some functionality without having to reprogram it each time you want to do something different.
    .
    First, you set your labels: map.lbls=c("Deg C/Decade", "Trends (1957-2006)", "Test Recon"), where [1] is the units for the legend, [2] is the main title, and [3] is the subtitle.
    .
    The base call "plt.map(trends2)" gives you a plot with a scale of +/- 1 and a color change every 0.1:
    .

    .
    Color scale too wide? Set your scale limits with argument mv: "plt.map(trends2, mv=.25)":
    .

    .
    Want smoother colors? Set the number of divisions on each side of zero with argument divs: "plt.map(trends2, mv=.25, divs=24)":
    .

    .
    Still not smooth enough? Try "plt.map(trends2, mv=.25, divs=1000)":
    .

    .
    Want the color scale bigger, or want to move it? The x and y arguments set the upper left corner, and the sl argument scales the box. You can scale the text with the st argument: "plt.map(trends2, mv=.25, x=-.5, y=-.3, sl=1.7, st=1.7)":
    .

    .
    For plotting dimensionless quantities – like eigenvectors – it’s sometimes convenient to normalize them so features clearly stand out. Just set argument norm to TRUE and the plot will normalize for you (without disrupting your data): "plt.map(trends2, norm=TRUE, divs=1000)":
    .

    .
    Now let’s say you want to highlight a certain range in the graph – like from -0.7 to -0.4. Just use argument hlite: "plt.map(trends2, norm=TRUE, divs=1000, hlite=c(-.7, -.4))":
    .

    .
    Don’t like that color? Try a different one with hcol: "plt.map(trends2, norm=TRUE, divs=1000, hlite=c(-.7, -.4), hcol='yellow')":
    .

    .
    The arguments for the function plt.map are:
    .
    dat: Data to be plotted
    coord: Coordinates (lon/lat)
    mv: Limit for the color scale
    divs: Number of colors between zero and +mv (and, also, number of colors between -mv and zero)
    labels: Vector of UNITS, MAIN TITLE, SUB TITLE
    norm: If TRUE, normalize the data before plotting
    hlite: Value range to highlight
    hcol: Color to use for highlighting
    mbx: If TRUE, draws a box around the plot
    mxs: If TRUE, places x-y labels on the plot
    x, y: The upper-left corner of the legend
    sl: The scaling factor for the legend box
    st: The scaling factor for the legend text
    sp: The scaling factor for plotted points
    s.main: The scaling factor for the main title
    s.sub: The scaling factor for the subtitle
    s.axis: The scaling factor for the axis labels
    .
    Here’s the plt.map function:

    plt.map=function(dat, coord=sat.coord, mv=1, divs=9, labels=map.lbls, norm=FALSE, hlite=NA, hcol="#99FF33", mbx=TRUE, mxs=FALSE, x=-.3, y=-.35, sl=1, st=1, sp=1, s.main=1, s.sub=1, s.axis=1) {

      library(mapproj)

      ### Set up Antarctic projection and lat/lon lines
      map("world", proj="azequidistant", orient=c(-90, 0, 0), ylim=c(-90, -60))
      a=seq(-180, 150, 30)
      for(i in 1:length(a)) {lines(mapproject(list(x=rep(a[i], 181), y=c(-90:90))), col=4, lty=2)}
      a=c(-180, 0)
      for(i in 1:length(a)) {lines(mapproject(list(x=rep(a[i], 181), y=c(-90:90))), col=4, lty=1)}
      a=seq(-50, -80, -10)
      for(i in 1:4) {lines(mapproject(list(y=rep(a[i], 72), x=seq(-177.5, 177.5, 5))), col=4, lty=2)}

      ### Make the color scale ("maxval" in the posted version looks like a
      ### typo for the mv argument)
      mapcolors=get.colors(mv, divs, hlite, hcol)

      ### Normalize the data to a maximum of +/- 1?
      nm=ifelse(norm==TRUE, max(abs(as.vector(dat))), 1)

      ### Assign the color scale; the clamping line below lost characters to
      ### the blog software, and this is a plausible reconstruction that pins
      ### out-of-range values to the end colors
      temp=(divs+2)+(round((divs+1)*(dat/(mv*nm))))
      temp=ifelse(temp<1, mapcolors[1], ifelse(temp>length(mapcolors), mapcolors[length(mapcolors)], mapcolors[temp]))

      ### Plot points
      xy=mapproject(coord[, 1], coord[, 2])
      points(xy$x, xy$y, pch=15, cex=sp, col=temp)

      ### Overlay map
      map("world", proj="azequidistant", orient=c(-90, 0, 0), ylim=c(-90, -60), fill=FALSE, add=TRUE)

      ### Set up legend and labels
      legends=get.legend(x, y, sl, st, sp, h=hlite, mv, ml=map.lbls, mbx, mxs, mc=mapcolors, divs, s.main, s.sub, s.axis)
    }

    .
    It first calls the function get.colors to make the color map. I broke this out as a separate function because I use it to generate color maps for other purposes as well:

    get.colors=function(mv, divs, hlite, hcol) {

      ### Make color scale
      cold.colors=matrix(nrow=divs); hot.colors=matrix(nrow=divs)
      for(i in 1:divs) {
        lum.ratio=25*(i/divs)
        c1=ifelse(lum.ratio<15, 255-(17*lum.ratio), 0)
        ### NOTE: The blog software ate a span across the next line as
        ### posted; the comparison in c2 is garbled, and the line defining
        ### h1 was swallowed. h1 is reconstructed here from the parallel h2
        ### line; the working c2 is in the text version of the script
        ### linked in comment 28.
        c2=ifelse(lum.ratio0, 255-(12*lum.ratio), 0)
        h1=ifelse(255-(12*lum.ratio)>0, 255-(12*lum.ratio), 0)
        h2=ifelse(255-(25*lum.ratio)>0, 255-(25*lum.ratio), 0)
        cold.colors[divs+1-i]=rgb(red=c1, green=c1, blue=c2, maxColorValue=255)
        hot.colors[i]=rgb(red=c2, green=h1, blue=h2, maxColorValue=255)
      }
      neut.col=ifelse(divs<11, "#EEEEEE", "#FFFFFF")
      mapcolors=c("darkmagenta", as.vector(cold.colors), neut.col, as.vector(hot.colors), "black")
      if(!is.na(hlite[1])) {
        highlmin=(divs+2)+(round((divs+1)*(hlite[1]/(mv))))
        highlmax=(divs+2)+(round((divs+1)*(hlite[2]/(mv))))
        if(!is.na(highlmax)) {mapcolors[highlmin:highlmax]=hcol} else {mapcolors[highlmin]=hcol}
      }
      mapcolors
    }

    .
    It generates the legend by calling a separate function called get.legend. I did it this way to be able to build color scale legends for things other than Antarctica.

    get.legend=function(x, y, sl, st, sp, h, mv, ml, mbx, mxs, mc, divs, s.main, s.sub, s.axis) {
      rect(xleft=x, xright=x+(.6*sl), ytop=y, ybottom=y-(.080*sl), col="white")
      points(x=x+(.3*sl), y=y-(.023*sl), col=mc[2+divs], pch=15, cex=1.3*sl)
      for(i in 1:divs) {
        points(x=x+(.3*sl)-(.2/(divs+1))*i*sl, y=y-(.023*sl), col=mc[divs+2-i], pch=15, cex=1.3*sl)
        points(x=x+(.3*sl)+(.2/(divs+1))*i*sl, y=y-(.023*sl), col=mc[divs+2+i], pch=15, cex=1.3*sl)
      }
      points(x=x+(.1*sl), y=y-(.023*sl), col=mc[1], pch=15, cex=1.3*sl)
      points(x=x+(.5*sl), y=y-(.023*sl), col=mc[length(mc)], pch=15, cex=1.3*sl)
      text(x=x+(.3*sl), y=y-(.055*sl), ml[1], cex=st)
      text(x=x+(.085*sl), y=y-(.055*sl), -mv, cex=st)
      text(x=x+(.51*sl), y=y-(.055*sl), mv, cex=st)
      rtext=ifelse(!is.na(h[2]), paste(h[1], "to", h[2]), h[1])
      xtext=ifelse(!is.na(h[1]), paste(rtext, ml[1], "Highlighted"), NA)
      title(main=ml[2], sub=ml[3], xlab=xtext, cex.main=s.main, cex.sub=s.sub, cex.axis=s.axis)
      if(mbx==TRUE) {box()}
      if(mxs==TRUE) {map.axes()}
    }

    .
    Anyway, since several of us seem to be plotting similar things but need to change color scales conveniently, I figured I’d throw this out there in case someone else finds it useful.
    .
    🙂

  25. Posted Apr 14, 2009 at 3:54 PM | Permalink

    Great, now I’ll have to credit RomanM, RyanO, SteveM and JeffC every time I map something.

    The maps look great, thanks for this.

  26. Ryan O
    Posted Apr 14, 2009 at 4:14 PM | Permalink

    Haha. 🙂 We’ll all be crediting each other every time we post on Antarctica.
    .
    BTW, feel free to use my data if you want. I have to play with it some more because the corrections to the groups cause some artifacting where the groups merge. It’s not too bad, but I’d like to do some finer-tuning.
    .
    Also, I’ve completed a script combining your R reconstruction scripts and Steve’s porting of RegEM to R. I need to clean it up, but at the end, we’ll have a function that will take any data set we want and perform the whole reconstruction all in R – Antarctica or otherwise. Takes about 2 minutes to run.
    .
    I’m thinking of generating a fake data set with all known quantities, deleting a bunch of them, and running it through the recon function. It would help demonstrate quite clearly what you’ve been saying at the Air Vent – similar to the way you generated hockey sticks from noise.
    .
    As a final touch, I was planning on using it in a real-world situation, like Australia or something, and showing how RegEM/Steigian-PCA booger the whole thing up.

  27. curious
    Posted Apr 14, 2009 at 4:35 PM | Permalink

    Jeff and Ryan – FWIW I wonder about the terminology of “reconstruction”? It has a ring of credibility to it which the results of some of the papers being audited don’t seem to warrant. I’m not sure what would be better – simulation maybe? – but I like the idea of doing a known and verifiable example for comparison.

  28. Ryan O
    Posted Apr 14, 2009 at 6:14 PM | Permalink

    Jeff, WordPress boogered up the script. Maybe I’ll take a page out of Steve’s book and start calling it StygPress (not to be confused with SteigPress). Anyway, here’s a text version which should work:
    .
    http://www.filefactory.com/file/agbf06d/n/plt_map_R

  29. Lars Baath
    Posted Apr 20, 2009 at 11:45 AM | Permalink

    Steve, here is a paper which you should follow up on.
    http://www.news.com.au/story/0,,25348657-401,00.html

One Trackback

  1. […] and Ryan O have found very real reasons to question the trends in the satellite data HERE, HERE and HERE. There are apparent trends and steps which occur between different satellites and instrument […]