Steig Mystery (almost) Solved!

Thanks to some insight from Ryan O (#43), we have been able to make progress on the puzzle of where the values in the Steig paper came from. In fact, it is now quite apparent that Steig et al. used the reconstructions based on the AVHRR satellite data rather than the ones based on the AWS dataset.

First, we should make it clear that Shuman used a regression based on the annual sequences which we looked at in the previous post (where we got a perfect fit, both in the trends and in the 95% error bounds). To demonstrate this, we use R, starting with the monthly data defined in comment RomanM (#20). The data has to be converted to anomalies before the regression can be carried out.

#we need shuman.mon from "It's a Mystery", comment 20
shuman.mon=ts(shuman.mon, start=c(1978,11),frequency=12)-273.15

#limit to 1979 on and create anomalies
sbyrd.mon = window(shuman.mon[,1],start=1979)
time=time(sbyrd.mon)
sbyrd.mon =ts(c(t(scale(t(matrix(sbyrd.mon,12,19)),scale=F))),start=1979,frequency=12)
#plot(sbyrd.mon)
#max(abs(colMeans(t(matrix(sbyrd.mon,12,19))))) #[1] 1.682864e-15

byrdmon.reg = lm(sbyrd.mon~time)
#Target values: 0.45 +/- 1.3 C
10*coef(byrdmon.reg)[2]# 0.451124
10*diff(confint(byrdmon.reg)[2,])/2 #0.7695063

ssiple.mon = window(shuman.mon[,4],start=1979)
ssiple.mon =ts(c(t(scale(t(matrix(ssiple.mon,12,19)),scale=F))),start=1979,frequency=12)
#plot(ssiple.mon)
#max(abs(colMeans(t(matrix(ssiple.mon,12,19))))) #[1] 2.617789e-15

siplemon.reg = lm(ssiple.mon~time)

#Target values: 1.1 +/- 0.8 C
10*coef(siplemon.reg)[2]# 1.097965
10*diff(confint(siplemon.reg)[2,])/2 #0.4578283

Although the trends in both cases are basically the same, the error bounds differ substantially. The mathematical reason is that, although the annual averages are less variable than the monthly values, there are 12 times as many monthly data points. However, allowance may also have to be made for autocorrelation, the strength of which may differ between the two types of series. Because of this, and because of possible non-linearities in the relationship, no data analyst would automatically jump to the conclusion that the calculation giving the lower error bound for the trend is somehow "more accurate".
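To see the annual-versus-monthly effect directly, here is a minimal sketch (not part of the original analysis) that forms annual means from the Byrd anomalies and compares the two 95% half-widths; it assumes sbyrd.mon and byrdmon.reg from the code above.

#sketch: annual means of the Byrd anomalies vs the monthly fit above
byrd.ann = ts(colMeans(matrix(sbyrd.mon,12,19)), start=1979)
yr = time(byrd.ann)
byrdann.reg = lm(byrd.ann ~ yr)
10*diff(confint(byrdann.reg)[2,])/2 #annual 95% half-width (per decade)
10*diff(confint(byrdmon.reg)[2,])/2 #monthly half-width, noticeably smaller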

Anyway it is clear that the Shuman results are based on the annual series.

We now look at the Steig satellite reconstructions.

#Look at Steve’s post at http://www.climateaudit.org/?p=5265 for how to get
#recon = satellite data from ant_recon.txt , and the coordinate variables ,
#lat = Tir_lats.txt and lon=Tir_lons.txt
#
#We need Info from the scripts used in the earlier “It’s a Mystery” post
#get coordinates of byrd and siple
Info$aws_grid[c(3,57),]
# id lat long name wa trend class
#3 89324 -80.0 -119.4 Byrd TRUE 0.01219965 7
#57 89284 -75.9 -84.0 Siple TRUE 0.01640790 7

lat.aws = Info$aws_grid$lat[c(3,57)]
lon.aws = Info$aws_grid$lon[c(3,57)]

#find the 100 grid points with closest coordinates to each AWS
#extract recons for those points

#function for calculating relative great circle distance (not actual distance)
circle.dist = function(x) { #columns: from lat, from lon, to lat, to lon
x = x*pi/180
reldist = sin((x[,1]- x[,3])/2 )^2 + cos(x[,3])*cos(x[,1])* (sin((x[,2]-x[,4])/2)^2 )
100*(reldist-min(reldist))/diff(range(reldist)) }

bdist.circ = circle.dist(cbind(lat.aws[1],lon.aws[1],lat,lon))
sdist.circ= circle.dist(cbind(lat.aws[2],lon.aws[2],lat,lon))

byrds = recon[,sort.int(bdist.circ,ind=T)$ix[1:100]]
colnames(byrds)[1:5] #[1] "817" "818" "863" "773" "816" five closest to Byrd
siples = recon[,sort.int(sdist.circ,ind=T)$ix[1:100]]
colnames(siples)[1:5] #[1] "324" "358" "325" "359" "323" five closest to Siple

The values calculated as “distance” are in the same numeric order as the actual distances (but with considerably less calculation error). Now we convert to annual values and then calculate the trends for the ten closest points to each AWS.
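For reference, the quantity computed in circle.dist is the haversine term; the true distance is 2R*asin(sqrt(a)), which is increasing in a, so the ordering is preserved. A minimal sketch of the conversion to kilometres (the Earth radius value is an assumption, not from the post):

#sketch: actual great-circle distance in km (mean Earth radius 6371 km assumed)
gc.dist.km = function(lat1, lon1, lat2, lon2, R = 6371) {
p = pi/180
a = sin((lat1-lat2)*p/2)^2 + cos(lat1*p)*cos(lat2*p)*sin((lon1-lon2)*p/2)^2
2*R*asin(sqrt(a))
}
#e.g. gc.dist.km(lat.aws[1], lon.aws[1], lat, lon) gives km from Byrd to each grid point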

#calculate annual means
byrds.ann = ts(sapply(1:ncol(byrds), function(x) colMeans(matrix(byrds[,x],12,50))),start=1957)
colnames(byrds.ann) = colnames(byrds)
siples.ann = ts(sapply(1:ncol(siples), function(x) colMeans(matrix(siples[,x],12,50))),start=1957)
colnames(siples.ann) = colnames(siples)

#function to calculate trends and error bars (for a matrix of data)
trend.calc = function(dat){
year = time(dat)
trends =matrix(NA,ncol(dat),2)
for (ii in 1:ncol(dat)) {
temp = lm(dat[,ii] ~year)
trends[ii,1] = coef(temp)[2]
trends[ii,2] = diff(confint(temp)[2,])/2}
trends}

#run byrd for 1979 – 1997 and 1957 – 2006
byrd.trend = 10*cbind(trend.calc(window(byrds.ann, start=1979, end=1997)),trend.calc(byrds.ann))
colnames(byrd.trend) = c("trend1979","err1979","trend1957","err1957")

#Target: Byrd trend1979: 0.36 +/- 0.37 C, trend1957: 0.23 +/- 0.09 C.
#lat = -80.0, lon = -119.4 ( = 240.6)

#results for closest 10 gridpoints
b.which = as.numeric(colnames(byrds.ann)[1:10])
cbind(b.which,round(byrd.trend[1:10,],2),round(lat[b.which],1),round(lon[b.which],1))

# b.which trend1979 err1979 trend1957 err1957 (lat lon)
#[1,] 817 0.41 0.55 0.26 0.14 -80.0 241.1
#[2,] 818 0.41 0.57 0.26 0.14 -79.8 238.8
#[3,] 863 0.41 0.54 0.25 0.14 -80.4 239.7
#[4,] 773 0.40 0.56 0.25 0.14 -79.6 242.3
#[5,] 816 0.41 0.54 0.26 0.14 -80.2 243.4
#[6,] 864 0.40 0.55 0.25 0.14 -80.2 237.4
#[7,] 774 0.39 0.56 0.25 0.14 -79.4 240.1
#[8,] 862 0.41 0.54 0.25 0.14 -80.6 242.2
#[9,] 772 0.39 0.54 0.25 0.14 -79.8 244.6
#[10,] 819 0.40 0.57 0.26 0.14 -79.5 236.7

The trends for the ten closest grid points are not all that close for either time period. More importantly, the error bounds are off by about 50%.

#run siple
siple.trend = 10*cbind(trend.calc(window(siples.ann,start = 1979,end=1997)),trend.calc(siples.ann))
colnames(siple.trend) = c("trend1979","err1979","trend1957","err1957")

#Target: Siple trend1979: 0.29 +/- 0.26 C, trend1957: 0.18 +/- 0.06 C.
#lat = -75.9, lon = -84.0 ( = 276.0)
#results for closest 10 gridpoints
s.which = as.numeric(colnames(siples.ann)[1:10])
cbind(s.which,round(siple.trend[1:10,],2),round(lat[s.which],1),round(lon[s.which],1))

# s.which trend1979 err1979 trend1957 err1957 (lat lon)
#[1,] 324 0.31 0.39 0.19 0.10 -75.7 276.4
#[2,] 358 0.32 0.40 0.19 0.10 -76.1 276.7
#[3,] 325 0.33 0.40 0.20 0.10 -75.7 274.6
#[4,] 359 0.33 0.40 0.20 0.10 -76.2 274.8
#[5,] 323 0.30 0.39 0.18 0.09 -75.6 278.3
#[6,] 357 0.29 0.38 0.18 0.09 -76.1 278.5
#[7,] 293 0.31 0.39 0.19 0.10 -75.2 276.2
#[8,] 395 0.31 0.40 0.19 0.10 -76.6 276.9
#[9,] 294 0.32 0.39 0.19 0.10 -75.3 274.5
#[10,] 396 0.32 0.40 0.20 0.10 -76.6 274.9

The trends are closer than for Byrd, but they are still not quite on the mark. Error bounds are off (by close to 50%) again.

So, Steig et al. did NOT use the annual series calculated from the satellite reconstruction in calculating the posted values for the trends at Byrd and Siple. However, the trends aren't that far out, so it is reasonable to try one more approach. Did they run a regression on the monthly series?

#Try monthly trends:
#run byrd for 1979 – 1997 and 1957 – 2006

byrds=ts(byrds,start=1957, frequency=12)
byrd.month = 10*cbind(trend.calc(window(byrds, start=1979, end=c(1997,12))),trend.calc(byrds))
colnames(byrd.month) = c("trend1979","err1979","trend1957","err1957")

#Target: Byrd trend1979: 0.36 +/- 0.37 C, trend1957: 0.23 +/- 0.09 C.
#lat = -80.0, lon = -119.4 ( = 240.6)
#results for closest 10 gridpoints
b.which = as.numeric(colnames(byrds)[1:10])
cbind(b.which,round(byrd.month[1:10,],2),round(lat[b.which],1),round(lon[b.which],1))

# b.which trend1979 err1979 trend1957 err1957 (lat lon)
#[1,] 817 0.40 0.37 0.26 0.09 -80.0 241.1
#[2,] 818 0.40 0.37 0.26 0.09 -79.8 238.8
#[3,] 863 0.40 0.36 0.25 0.09 -80.4 239.7
#[4,] 773 0.39 0.36 0.25 0.09 -79.6 242.3
#[5,] 816 0.41 0.36 0.25 0.09 -80.2 243.4
#[6,] 864 0.39 0.36 0.25 0.09 -80.2 237.4
#[7,] 774 0.38 0.36 0.25 0.09 -79.4 240.1
#[8,] 862 0.40 0.36 0.25 0.09 -80.6 242.2
#[9,] 772 0.39 0.36 0.25 0.09 -79.8 244.6
#[10,] 819 0.40 0.37 0.26 0.09 -79.5 236.7

The trends are just a bit off, but the error bounds are right on!

#Try checking the other 90 grid points for matches:
ser.b=which((round(byrd.month[,1],2)==.36)&(round(byrd.month[,3],2)==.23))
b.which = as.numeric(colnames(byrds[,ser.b]))
cbind(b.which,round(byrd.month[ser.b,],2),round(lat[b.which],1),round(lon[b.which],1))

#All grid points matching both trends
# b.which trend1979 err1979 trend1957 err1957 (lat lon)
# [1,] 860 0.36 0.33 0.23 0.08 -81.0 247.4
# [2,] 954 0.36 0.33 0.23 0.08 -81.7 242.0
# [3,] 726 0.36 0.33 0.23 0.08 -79.7 250.3
# [4,] 867 0.36 0.34 0.23 0.08 -79.4 231.1
# [5,] 777 0.36 0.35 0.23 0.08 -78.6 234.1
# [6,] 914 0.36 0.34 0.23 0.08 -79.7 229.5
# [7,] 769 0.36 0.32 0.23 0.08 -80.3 252.0
# [8,] 1004 0.36 0.33 0.23 0.08 -82.1 240.5
# [9,] 725 0.36 0.31 0.23 0.08 -79.9 252.8
#[10,] 690 0.36 0.33 0.23 0.08 -77.9 236.6
#[11,] 868 0.36 0.34 0.23 0.08 -79.1 229.3
#[12,] 647 0.36 0.33 0.23 0.08 -77.7 239.6
#[13,] 915 0.36 0.34 0.23 0.08 -79.4 227.6
#[14,] 768 0.36 0.31 0.23 0.08 -80.4 254.6

Fourteen perfect matches on both trends, but none(!) on the error bars.

Let’s look at Siple

#Target: Siple trend1979: 0.29 +/- 0.26 C, trend1957: 0.18 +/- 0.06 C.
#lat = -75.9, lon = -84.0 ( = 276.0)

#convert siples to a monthly series and compute the trends, as was done for Byrd
siples = ts(siples, start=1957, frequency=12)
siple.month = 10*cbind(trend.calc(window(siples, start=1979, end=c(1997,12))),trend.calc(siples))
colnames(siple.month) = c("trend1979","err1979","trend1957","err1957")

#results for closest 10 gridpoints
s.which = as.numeric(colnames(siples)[1:10])
cbind(s.which,round(siple.month[1:10,],2),round(lat[s.which],1),round(lon[s.which],1))

# s.which trend1979 err1979 trend1957 err1957 (lat lon)
#[1,] 324 0.29 0.28 0.19 0.06 -75.7 276.4
#[2,] 358 0.30 0.28 0.19 0.06 -76.1 276.7
#[3,] 325 0.31 0.29 0.20 0.06 -75.7 274.6
#[4,] 359 0.30 0.29 0.20 0.06 -76.2 274.8
#[5,] 323 0.27 0.27 0.18 0.06 -75.6 278.3
#[6,] 357 0.27 0.27 0.18 0.06 -76.1 278.5
#[7,] 293 0.29 0.28 0.19 0.06 -75.2 276.2
#[8,] 395 0.28 0.28 0.19 0.06 -76.6 276.9
#[9,] 294 0.30 0.28 0.19 0.06 -75.3 274.5
#[10,] 396 0.30 0.29 0.20 0.06 -76.6 274.9

Similar story to Byrd: No grid point matches exactly on all trends and error bounds.

Look for other candidates:

ser.s = which((round(siple.month[,1],2)==.29)&(round(siple.month[,3],2)==.18))
s.which = as.numeric(colnames(siples[,ser.s]))
cbind(s.which,round(siple.month[ser.s,],2),round(lat[s.which],1),round(lon[s.which],1))

#All grid points matching both trends
# s.which trend1979 err1979 trend1957 err1957 (lat lon)
# [1,] 295 0.29 0.27 0.18 0.06 -75.3 272.7
# [2,] 265 0.29 0.26 0.18 0.06 -74.8 276.1 ***
# [3,] 266 0.29 0.26 0.18 0.06 -74.8 274.3 ***
# [4,] 240 0.29 0.26 0.18 0.06 -74.4 274.2 ***
# [5,] 238 0.29 0.27 0.18 0.06 -74.3 277.5
# [6,] 241 0.29 0.27 0.18 0.06 -74.4 272.5
# [7,] 237 0.29 0.27 0.18 0.06 -74.2 279.2
# [8,] 242 0.29 0.27 0.18 0.06 -74.4 270.8
# [9,] 269 0.29 0.27 0.18 0.06 -74.9 269.1

There are three possible candidates (none of which are among the closest ten grid points; grid point 265 is the 19th closest to Siple's co-ordinates).

So, we have discovered the following:

The Shuman reconstruction was based on actual AWS data, extended to the full 19-year period using satellite data. The regression was done on an annual basis, generating a trend and an error bound. The Shuman results (without the caveat placed by the original authors on the interpretation of the trend) were used to "enhance" Figure 3 of the Steig paper.

The Steig reconstruction was based on the manned surface station data and different satellite data. It did not use ANY actual AWS data. The regression was done on a monthly basis, reducing the apparent error bounds by roughly a third. No other overt numerical comparison of the results appears in the paper or the SI, although seasonal similarities are said to exist.

Some comparison.

There is still a bit of a puzzle with regard to which grid point(s) or average of grid points may have been used, but the overall picture seems to have been solved.

Now, if we could only see how the satellite data was used…

87 Comments

  1. deadwood
    Posted Mar 15, 2009 at 10:30 AM | Permalink

    Apples, oranges and cherries – Oh My!

  2. Posted Mar 15, 2009 at 10:37 AM | Permalink

    Nice work. I would have thought the paper would have made this clearer. I wonder why they won't release the code?

    Jeff C and I have been looking at the AVHRR data, it’s not matching very well to the Steig paper. I’ve been looking for the problem.

    • romanm
      Posted Mar 15, 2009 at 10:59 AM | Permalink

      Re: Jeff Id (#2),

      You would think that they would make things more clear. The original quote from the paper

      Independent data provide additional evidence that warming has been significant in West Antarctica. At Siple Station (76S, 84W) and Byrd Station (80S, 120W), short intervals of data from AWSs were spliced with 37-GHz (microwave) satellite observations, which are not affected by clouds, to obtain continuous records from 1979 to 1997 (ref. 13). The results show mean trends of 1.1 +/- 0.8C per decade and 0.45 +/- 1.3C per decade at Siple and Byrd, respectively (Ref. 13). Our reconstruction yields 0.29 +/- 0.26 C per decade and 0.36 +/- 0.37C per decade over the same interval. In our full 50-year reconstruction, the trends are significant, although smaller, at both Byrd (0.23 +/- 0.09 C per decade) and Siple (0.18 +/- 0.06 C per decade).

      suggests that the AWS reconstruction would be the more appropriate one to use. Spelling this out in the SI couldn't hurt.

      Fred (#3), don’t hold your breath waiting for any “corrections”. It ain’t gonna happen. My suspicion (just a guess) is that the authors didn’t bother to look more closely at the Shuman result than just quoting the numbers. They probably weren’t even aware of the details of how Shuman arrived at them. If you are going to go to the length of actually using them (instead of your own) in a graphic, I would think that it might be imperative to ensure proper comparability.

      • mjt1st
        Posted Mar 15, 2009 at 12:29 PM | Permalink

        Re: romanm (#4),
        In regards to Fred’s comment also, is there an official rebuttal process, ie is everything conclusive enough and far enough along to have a published Roman, Ryan, Jeff, Jeff (2009) (sorry if I left anyone out)? Or do we still need the original data to be able to make definitive statements about Steig et al(2009)?

        • romanm
          Posted Mar 15, 2009 at 3:35 PM | Permalink

          Re: mjt1st (#5),

          I think that you left our host off the list. 😉

          Darn!!! I had a great lengthy reply written out for you, but my computer ate it!

          I may come back to this topic later, but for the moment, the short answer is no, I don't think we have anything substantial of the type that would be worth pursuing with the editors of Nature at this time.

          The bottom line was that I am of the opinion that we have discovered a lot of questionable things:

          – using wrong data

          – doing the wrong thing in a throwaway comment

          – possible misdirection of the press and public by exaggerating the “significant” nature of their rather numerically unimpressive results

          – using the “Antarctic average” rather than focusing on the different nature of the behaviour of the climate in East and West

          -using a time period that starts with lower temperatures when we can see how changes of one or two years in the range of a period can reverse supposed trends

          -using linear trends when the temperatures don’t seem to respond that way for many of the stations

          -PC results (e.g. PC3 of the satellite recon) that are spurious-looking

          -other theoretical statistical issues – were accommodations made for autocorrelation?

          None of these things will make a dent with the Nature editors. What we need is the intermediate satellite data results and information that was crucial but not mentioned (in particular what percent of the variability did their 3 PCs account for – something that should have been in the SI but wasn’t). I think that there are still some skeletons in that data that are worth hunting down.

          (Please. No piling-on on the disclosure issue or the snip-scissors will come out!)

        • mjt1st
          Posted Mar 15, 2009 at 5:02 PM | Permalink

          Re: romanm (#8),
          Thanks for the reply!
          Is the issue of using RegEM for infilling with non-gaussian and non-spatially-random missing data encompassed in your list, or was that resolved as a non-issue? There's been so much detailed breakdown it's been hard to keep up 🙂
          Also if Nature was not apt to publish, assuming the data is made available and the problems persist, would there be other publications that might consider taking this analysis?

  3. Fred
    Posted Mar 15, 2009 at 10:41 AM | Permalink

    Well I’ll be waiting for the next edition of Nature to publish a full retraction of Steig and the paint by numbers front cover art and to publish your article explaining how apples and oranges are, in fact, different.

    They will correct their mistake won’t they ?

    You know, honesty and the pursuit of truth in science.

  4. DJ.
    Posted Mar 15, 2009 at 2:23 PM | Permalink

    romanM…Great Job in breaking it down…this is Science the way it should be…

    • Neil Fisher
      Posted Mar 15, 2009 at 2:53 PM | Permalink

      Re: DJ. (#6),

      this is Science the way it should be…

      I have to disagree – RomanM et al should not have to try to work this out; the original authors should have made it plain up-front. That's science.

      • DJ
        Posted Mar 16, 2009 at 10:05 AM | Permalink

        Re: Neil Fisher (#7), Neil, “Should” is the Key word you state. I could say more but it will get “sniped” so I’ll leave it as that!

  5. Ian
    Posted Mar 15, 2009 at 3:48 PM | Permalink

    I hate to be a pain, and I know it's unreasonable, but rather than referring to another thread for data/functions et al., could you please include the appropriate R, commented out if necessary, so it's turnkey? I also note one of Steve's recent posts includes a link to a Steig file on his D: drive which didn't match anything in the Steig directory by the time I was going to run it on a 640-cpu, 4Tb-memory machine which has current R installed, and people were talking about factorising it. My plan was to do what Steve asked and send him the W response, and then give him an account to do it himself on real machines. I know Steve tries to make things turnkey, and I know you end up commenting things out to remove redundant downloads, but..

    • RomanM
      Posted Mar 15, 2009 at 5:05 PM | Permalink

      Re: Ian (#9),

      I appreciate the request, but it becomes a matter of inflating the size of the post if every time you add a new procedure you have to go back to first principles. I try to be complete, so I did make a reference to two other places where the initial work was done. The burden was shifted to the reader to catch up. I always run the script from scratch to try to guarantee that if you follow my instructions, the script will run. Steve is a (much) more prolific poster so it is not surprising that he may have personal references in his scripts. I will make an extra effort to be more complete.

      Re: Scott Brim (#10),
      Hey, you're speaking to the converted. I've usually done one or the other of these things. This ***one*** time I didn't do either, and my laptop decided to drop the wireless to our home router. When I tried to go back, it didn't. I even searched the files in the IE temporary directory for one or two keywords in case the text was stored before being sent – but to no avail. Rats!

      Re: Sinan Unur (#11),

      Not sure what you are calling “monthly dummies” here. More explicitly?

      • Posted Mar 15, 2009 at 5:48 PM | Permalink

        Re: RomanM (#13), I am talking about a simple de-seasonalization procedure. First, define 12 dummy variables:

        d^i_{tm} = \begin{cases} 1 & \text{if } i = m \\ 0 & \text{otherwise} \end{cases}, \qquad i = 1, \dots, 12

        Then, run the regression

        y_{tm} = \beta_1 d^1_{tm} + \dots + \beta_{12} d^{12}_{tm} + \varepsilon_{tm}

        Now, the residuals from the regression above will have the seasonal variation removed and can be used to estimate trend.

        — Sinan
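
        In R, a minimal sketch of this procedure (y is a made-up stand-in series, not data from the post):

        #sketch: regress on 12 monthly indicator variables, keep the residuals
        set.seed(123)
        y = ts(rnorm(228), start=1979, frequency=12)
        month = factor(cycle(y))               #month-of-year indicators
        deseas = residuals(lm(y ~ month - 1))  #betas = monthly means; residuals = deseasonalized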

        • Posted Mar 15, 2009 at 6:54 PM | Permalink

          Re: Sinan Unur (#16), note that this is the crudest possible form of seasonal adjustment. For real work, http://www.census.gov/const/www/faq2.html might be interesting reading.

          — Sinan

        • RomanM
          Posted Mar 15, 2009 at 8:03 PM | Permalink

          Re: Sinan Unur (#16),

          The least squares solution of the first equation produces betas which are equal to the monthly means. In effect what you are doing is turning the values into anomalies by subtracting the monthly means from each value.

          The name that I usually use for 0-1 dummy variables is indicator variables.

        • Posted Mar 16, 2009 at 6:25 AM | Permalink

          Re: RomanM (#18), normally, I cringe when I see ‘anomalies’ because they are usually calculated as deviations from the mean of a limited time period. So, apparently, my brain did not comprehend what it was seeing in your code.

          Dummy is a much cuter term than indicator 🙂

          Anyway, apologies for the superfluous comments.

          — Sinan

        • romanm
          Posted Mar 16, 2009 at 7:00 AM | Permalink

          Re: Sinan Unur (#22),

          I also dislike the use of the term "anomaly" because of the overtones which go beyond the meaning intended here. One of the definitions of the word is "peculiar, irregular, abnormal". When applied to temperature data (similarly to the case of "significant trends"), the ordinary listener adds their own emotional interpretation.

          A much better descriptor might be to refer to them as “deviations”, or as “centered” or “standardized” values. However, I used the term because it is common practice in temperature data discussions.

          In the case that you are referring to (deviations from the mean of a limited time period), I would think that the adjective "indexed" would be more appropriate.

  6. Scott Brim
    Posted Mar 15, 2009 at 4:23 PM | Permalink

    RomanM: I had a great lengthy reply written out for you, but my computer ate it!

    If you have a lengthy post, write it in Word first (or else notepad) and then copy/paste it into the text box. Or else copy it to the clipboard from the text entry box just before hitting Submit Comment, in case the upload process fails for some reason.
    .
    It would appear that a comprehensive writeup of CA’s analysis efforts to date is in order, one which also documents what material is currently missing that would allow the analytical reconstruction of Steig et al (2009) to move forward to final completion.

  7. Posted Mar 15, 2009 at 4:35 PM | Permalink

    I haven’t paid much attention to details here but …

    If you are using monthly data, you should use monthly dummies to control somewhat for seasonality.

    — Sinan

  8. ScotchTapeSmell
    Posted Mar 15, 2009 at 5:27 PM | Permalink

    It's amazing how much you guys are accomplishing working together! (mainly Steve M, I know)

    [snip] -Un-Steve-like comment

    • ScotchTapeSmell
      Posted Mar 16, 2009 at 9:03 PM | Permalink

      Re: ScotchTapeSmell (#14),

      [snip] -Un-Steve-like comment

      I knew it would be, so I put IMO, didn’t work anyway

  9. Scott Brim
    Posted Mar 15, 2009 at 5:43 PM | Permalink

    RomanM: When I tried to go back, it didn’t. I even searched the files in the IE temporary directory for one or two keywords in case the text was stored before sent – but to no avail.

    Most blog software holds the text input form only in memory, and only within the scope of the process which is driving that instance of the form. Once the process terminates, the form goes to the bit bucket, taking whatever text you entered with it. (Easy come, easy go.)

  10. Posted Mar 15, 2009 at 9:32 PM | Permalink

    Roman, I hope this isn’t too far off topic. I think we’re getting closer to reasonably useful processing of the satellite data. It has several differences from what was originally plotted and Jeff C has noticed some possible odd and unmentioned processing steps. It’s not ready yet but it’s coming along quickly.

    I made a movie of the satellite data and found the edges of the Antarctic are apparently contaminated with sea surface data.

    Evidence of Sea Contamination In Antarctic Sat Data

    • romanm
      Posted Mar 16, 2009 at 4:28 AM | Permalink

      Re: Jeff Id (#19),

      Keep up the good work, Jeff. Independent analysis of the satellite data is useful. If your results differ from those presented in the paper in any substantial way, this could possibly become an issue to be raised with Nature, putting the onus on Steig to justify why their results differ.

  11. Mike Lorrey
    Posted Mar 15, 2009 at 11:58 PM | Permalink

    [snip]

    RomanM: Nobody is being accused of fraud and I would rather not have that issue discussed here.

  12. DaveR
    Posted Mar 16, 2009 at 6:54 AM | Permalink

    I really can't believe how ludicrous this situation is. You shouldn't have to try to guess how someone else carried out an experiment. Reproducibility, far more than peer review, is absolutely vital to ensuring the scientific fidelity of any given answer. Nature is extremely poor in presenting such methods, but they really should ensure that such information is available on request, and the paper withdrawn if it isn't, or the exercise is pointless 😦 .

    • Mark T
      Posted Mar 16, 2009 at 8:59 AM | Permalink

      Re: DaveR (#23),

      or the exercise is pointless 😦 .

      No, there is a very serious point they are trying to make.

      Mark

  13. bill-tb
    Posted Mar 16, 2009 at 8:12 AM | Permalink

    So why won’t they just tell us? I think you know the answer don’t you.

  14. Hu McCulloch
    Posted Mar 16, 2009 at 8:27 AM | Permalink

    RE ScotchTapeSmell #14,

    Thanks for the sentiment, but this is the sort of “piling on” post that Steve discourages at CA, even on “Unthreaded”. He’s been travelling this week, and so can’t snip you.

  15. kim
    Posted Mar 16, 2009 at 4:44 PM | Permalink

    It’s sort of amusing that Nature magazine should have to be notified of this work; surely the Editor has someone reading this blog. How much more ridicule before that stiff upper lip starts to tremble?
    ==========================================================

    • romanm
      Posted Mar 16, 2009 at 5:03 PM | Permalink

      Re: kim (#29),

      But surely you must realize that the [snip], [snip], [snip], [snip], [snip] auditors have no credibility in the climate science environment… 😉

      [I decided to snip my own post so that our host wouldn’t have to do it]

      • Posted Mar 16, 2009 at 7:32 PM | Permalink

        Re: romanm (#30),

        Wow, I think that might be a record number of snips for a single entry.

        Actually, I thought there would be more comments. Roman is kind of a Jedi with R. The code reads very easily. — At least for R code.

        • romanm
          Posted Mar 16, 2009 at 7:39 PM | Permalink

          Re: Jeff Id (#32),

          Every one of them was a genuine adjective that I had written, too! Some of them had been used by the RC team and friends in reference to the CA folks… but meant in a nice sort of way.

          May the farce be with you!

  16. kuhnkat
    Posted Mar 16, 2009 at 9:02 PM | Permalink

    If you are using monthly data, you should use monthly dummies to control somewhat for seasonality.
    – Sinan

    I believe Michael Mann WAS a co-author!!!

  17. rephelan
    Posted Mar 16, 2009 at 10:03 PM | Permalink

    [snip][snap][snop]… I think you guys are having too much fun here. If Steve decides he even wants to come back from Thailand, will he recognize this blog when/if he does? Will you let him have it back?

  18. VG
    Posted Mar 17, 2009 at 2:39 AM | Permalink

    Has anybody noticed (because I have… for the PAST TWO YEARS, every day)? This picture has hardly changed from one day to the next!
    http://wxmaps.org/pix/temp8.html. The vast part of South America has been below anomaly nearly every day for the past two years – check it yourselves…

  19. Posted Mar 17, 2009 at 9:44 AM | Permalink

    If anyone wants to download 80 megabytes of sat data. Here’s the link to the site.

    http://stratus.ssec.wisc.edu/products/appx/appx.html

    It’s much noisier than I thought it would be in its original form. The data is definitely not very similar to the reconstruction. I think the noise level could be the reason behind the PC breakdown but the data has gaps so it’s not a simple process to run PC analysis.

    I don't know for sure, but it seems reasonable to simply truncate the missing months from the array, run PCA, and replace them later. The sat grid covariance shouldn't be distorted by the 3 missing months. I'm not even sure if R can handle this calc yet.
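
    A minimal sketch of that idea (X is a stand-in matrix, months by grid cells, not the actual sat data):

    #sketch: run PCA on the complete months only
    set.seed(1)
    X = matrix(rnorm(300*50),300,50)  #stand-in: 300 months x 50 grid cells
    X[sample(300,3),] = NA            #three wholly missing months
    ok = complete.cases(X)
    pc = prcomp(X[ok,])               #PCA on the complete months; missing months infilled later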

    Roman, I'm sorry if this is too off topic – just snip it. It seems like you or Ryan (or perhaps some of the others here) are probably my best bets for an answer.

    • romanm
      Posted Mar 17, 2009 at 10:43 AM | Permalink

      Re: Jeff Id (#38), Re: Ryan O (#39),

      When I got to the ftp site after clicking “FTP site” on Jeff’s link (and then “accept” their conditions) , the initial ftp page seemed to keep cycling as it tried to load. I stopped the cycling by clicking the red “stop” x on IE, then hitting “refresh”. I could navigate the site smoothly after that.

      The files seem to be monthly, with two different times, 0200 and 1400. Which (if any) of these are you looking at? It looks like a lot of scraping would be needed.

    • Jeff C.
      Posted Mar 17, 2009 at 12:33 PM | Permalink

      Re: Jeff Id (#38), – regarding running PCA on the http://stratus.ssec.wisc.edu/products/appx/appx.html monthly data, you can simply replace the missing months with the monthly mean, or could we run all 5509 series through RegEM? I have not tried this yet and have no idea if RegEM could handle it, but it might be fun to see if it works.

      I’m also experimenting with deleting cell/month combinations that exceed +/- 10 deg C of the monthly mean for each cell (similar to what Steig discusses on page 1 of the SI). I was going to infill them with the mean data, but if RegEM will work I’ll use it.

      Re: Ryan O (#44), Ryan O – how were you going to process the daily AVHRR data? Do you have CASPR running?

      Re: romanm (#41), Jeff Id has assembled all of the monthly temperature data into two files, one for 0200 and one for 1400. I have some scripts that combine four cells into one (to make them 50 x 50 km), remove the cells over water, calculate monthly means, and convert to anomalies.

      • Ryan O
        Posted Mar 17, 2009 at 12:43 PM | Permalink

        Re: Jeff C. (#48), No, no CASPR. I wasn’t going to try to duplicate Steig from the raw data unless Comiso/Steig provide a pretty detailed explanation of how they did it. I was going to attempt to calibrate the Chan 4/5 data to ground measurements. More like Shuman, less like Steig.

        • Jeff C.
          Posted Mar 17, 2009 at 1:13 PM | Permalink

          Re: Ryan O (#49),

          I was going to attempt to calibrate the Chan 4/5 data to ground measurements

          Do you have a link for Shuman? I'm not familiar with his methods. I performed a ground calibration on the monthly AVHRR data by forcing the South Pole cells to match the Amundsen-Scott surface data and offsetting the rest of the AVHRR cells accordingly. I got trends that look surprisingly close to the Comiso trends shown in Monaghan and the Steig SI.

          Roman – hope this isn't going too far OT.

        • romanm
          Posted Mar 17, 2009 at 1:31 PM | Permalink

          Re: Jeff C. (#51),

          Roman – hope this isn't going too far OT.

          You can buy me off from snipping it cheap with a copy of your data. 😉

        • Jeff C.
          Posted Mar 17, 2009 at 2:01 PM | Permalink

          Re: romanm (#52), Send me an email and we can sort it out. I have so many different “copies of the data” I’m not sure what level of processing would make sense.

        • Ryan O
          Posted Mar 17, 2009 at 4:11 PM | Permalink

          Re: Jeff C. (#51), Roman provided this:
          The Shuman paper: i1520-0442-14-9-1977.pdf
          The procedure they used (basically, applying a sinusoidal correction to the sat data) won’t work for AVHRR. But, like any calibration, the principle is the same. Find an expression that forces the sat to fit the ground for a sample of the data and then test the expression vs. the rest of the data. A good calibration will have good out-of-sample predictive power; a bad calibration will not.
          .
          The question is how much are clouds going to bugger up the calibration . . . and I don’t know the answer to that one.
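
          A minimal sketch of that calibrate-then-validate idea (both series are simulated stand-ins):

          #sketch: fit sat to ground on half the data, check skill on the other half
          set.seed(2)
          sat = rnorm(240)                      #stand-in satellite series
          ground = 0.8*sat + rnorm(240, sd=0.3) #stand-in ground series
          train = 1:120; test = 121:240
          cal = lm(ground[train] ~ sat[train])
          pred = coef(cal)[1] + coef(cal)[2]*sat[test]
          cor(pred, ground[test])  #a good calibration shows high out-of-sample correlation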

        • Jeff C.
          Posted Mar 17, 2009 at 11:41 PM | Permalink

          Re: Ryan O (#56), Thanks for the paper, I had not seen this one. Regarding CASPR, I might be interested in giving it a try if I can find a machine to run it on.

  20. Ryan O
    Posted Mar 17, 2009 at 10:13 AM | Permalink

    Jeff,
    Thanks. Either the FTP is not working right now or I don’t have the proper permissions to download. I will try later.

  21. Posted Mar 17, 2009 at 10:32 AM | Permalink

    I’ve just sent off a letter to Steig and Comiso officially requesting the TIR data and protocols under Nature’s availability policy, with a copy to Nature’s Chief Physical Sciences Editor Karl Ziemelis.

    Steig has already point blank refused Steve’s request for this data. Comiso said he would comply, but has been sitting on Steve’s request to him for over a month now. Nature authors are required to make data “promptly” available to readers, not when it suits them.

    It wouldn't hurt if Ryan, Roman, and/or the 2 Jeffs, with their superior knowledge of the details, would also request this data, making it clear that they will take any refusal to Ziemelis for appropriate action. It may be fun to try to rebuild this data from scratch, but it's Steig and Comiso's job to just tell us how they did it and what the intermediate matrices are.

    [3/17/09]
    to: Eric J. Steig, Josefino C. Comiso
    cc: Karl Ziemelis
    Chief Physical Sciences Editor, Nature
    c/o nature@nature.com
    from: J. Huston McCulloch
    Economics Dept.
    Ohio State University

    Dear Drs. Steig and Comiso,
    As you are undoubtedly aware, Nature has a strict policy that, “An inherent
    principle of publication is that others should be able to replicate and
    build upon the authors’ published claims. Therefore, a condition of
    publication in a Nature journal is that authors are required to make
    materials, data, and associated protocols promptly available to
    readers without preconditions."
    ( http://www.nature.com/authors/editorial_policies/availability.html ,
    original emphasis).
    Pursuant to this policy, I hereby request that you make publicly
    available the data and protocols that were used to construct the AVHRR
    Antarctic temperature reconstruction file ant_recon.txt that underlies the
    principal conclusions of your letter, “Warming of the Antarctic
    ice-sheet surface since the 1957 International Geophysical Year,”
    which appeared in Nature for 22 Jan. 2009.
    It appears that detailed raw AVHRR data from NSIDC
    (http://www.nsidc.org/data/avhrr/) was first edited down to a 300 X 5509
    matrix of monthly averages representing 50km square areas for 1982-2006.
    This was then processed to remove observations believed to be
    contaminated by cloud cover. The resulting matrix was then either directly entered
    into Tapio Schneider’s RegEM program along with surface station data,
    or else was first reduced to rank 3 (the same rank as the output matrix)
    before being entered.
    The data and protocols I am requesting be made publicly available are therefore
    1. The detailed raw AVHRR data for 1982-2006 that was employed,
    or else the identify of the specific publicly available files that contain this data.
    According to http://www.nsidc.org/data/nsidc-0094.html, the 25km
    EASE data is at present publicly available from NSIDC only through 2000,
    while the 5 km EASE data only runs through June 2005, so you evidently
    had access to data that has not yet been released by them.
    2. The protocol by which this detailed data was reduced to monthly
    50 km averages. This could either be in the form of a detailed description or
    commented computer code. In particular, how were coastal grid square
    averages computed?
    3. The reduced 300X5590 matrix of pre-cloud-masked data.
    4. Any details of the cloud-masking protocol that go beyond the discussion
    on p. 1 of the online Supplementary Information.
    5. The 300X5590 matrix of cloud-masked data.
    6. The protocol by which this matrix, with its many missing observations,
    was reduced to rank 3 (if it was) before being entered into RegEM.
    The 600X5509 output file ant_recon.txt already online at Dr. Steig’s webpage
    http://faculty.washington.edu/steig/nature09data/ is very helpful and
    informative, as are his link to Tapio Schneider’s RegEM code and the
    online SI. However, the additional information specified above is
    necessary in order for other researchers to fully replicate the
    findings of this very important letter.
    I am writing to Dr. Steig as lead author, but also directly to Dr.
    Comiso, since as I understand the division of labor on this paper,
    the preparation of the AVHRR data was primarily his responsibility.
    If it would expedite matters, perhaps Dr. Comiso could post the
    requested information at his own institution, eg with a link at
    http://earthobservatory.nasa.gov/IOTD/view.php?id=36736 .
    If items 1 and 2 above prove to be time-consuming, it would
    be very helpful if just the two matrices in items 3 and 5 could be
    made available immediately.
    Thank you very much in advance for your help in this matter. As
    you are undoubtedly aware, Nature’s availability policy goes on to provide that,
    “After publication, readers who encounter refusal by the authors to comply
    with these policies should contact the chief editor of the journal (or the chief biology/
    chief physical sciences editors in the case of Nature). In cases where editors
    are unable to resolve a complaint, the journal may refer the matter to the authors’
    funding institution and/or publish a formal statement of correction, attached online
    to the publication, stating that readers have been unable to obtain necessary
    materials to replicate the findings.”
    Sincerely yours,
    J. Huston McCulloch

    • romanm
      Posted Mar 17, 2009 at 10:48 AM | Permalink

      Re: Hu McCulloch (#40),

      Good clear specific request, Hu.

      Would it be better for us all to go after it now, or to add to any subsequent complaint should that become necessary?

    • Dave Dardinger
      Posted Mar 17, 2009 at 10:51 AM | Permalink

      Re: Hu McCulloch (#40),

      I note that some of the matrixes you reference in 1-5 above show 300 (or 600) x 5509 while others are x 5590. Is this a typo or not? And if so, is it just in your typing to this blog or also to Nature? If the latter, you might want to issue a correction.

    • theduke
      Posted Mar 17, 2009 at 3:50 PM | Permalink

      Re: Hu McCulloch (#40),

      Hu: minor correction in #1 of the data you are requesting: “identify” should read “identity.”

  22. Ryan O
    Posted Mar 17, 2009 at 11:03 AM | Permalink

    5509 is the correct dimension.
    .
    BTW, I’m up to 1998 on the raw AVHRR data. Looks like it takes NSIDC about 1 day per year to collate, tar, and gzip.
    .
    Nice letter, Hu! 😉

  23. Posted Mar 17, 2009 at 11:17 AM | Permalink

    RE RomanM, #42,

    Would it be better for us all to go after it now, or to add to any subsequent complaint should that become necessary?

    You can’t complain that you’ve been refused the data, unless you have first requested it yourself. Likewise, I can’t complain that Steve has been refused — only Steve can do that. I’m hoping he does file a complaint in the near future, and I think that maybe a few others’ rattling of the cage may improve his odds of results.

    So I would urge at least a couple of you who are most familiar with the details to submit similar requests. For the sake of brevity, you might want to just second my request, but should add any details that I overlooked. Then, after you are refused or ignored after a reminder sent a week later, you can send in your own complaint.

    Steve got blown off by Nature on MBH98, but times have changed, editors have turned over, Mann is not the lead author, and Comiso (if not Steig) seems inclined to cooperate, so maybe we’ll get lucky this time around.

    • romanm
      Posted Mar 17, 2009 at 11:39 AM | Permalink

      Re: Hu McCulloch (#45),

      Fair enough. I will wait until tomorrow and then send my own request to the same people that yours was sent to.

  24. Posted Mar 17, 2009 at 11:24 AM | Permalink

    RE Dave Dardinger #43 ,

    I note that some of the matrixes you reference in 1-5 above show 300 (or 600) x 5509 while others are x 5590. Is this a typo or not? And if so, is it just in your typing to this blog or also to Nature? If the latter, you might want to issue a correction.

    My bad, but I think they will figure it out. I’ll correct this in my complaint if necessary. At first I had them all 600X5509, but then at the last minute fortunately remembered that the AVHRR data itself would only be 300X5509.

  25. Posted Mar 17, 2009 at 1:07 PM | Permalink

    RomanM writes in the original post,

    Although the trends in both cases are basically the same, the error bounds differ substantially. The mathematical reason is that, although the annual averages are less variable than the monthly values, there are 12 times as many monthly data points. However, allowance may also have to be made for autocorrelation, the strength of which may differ between the two types of series. Because of this, and because of possible non-linearities in the relationship, no data analyst would automatically jump to the conclusion that the calculation giving the lower error bound for the trend is somehow "more accurate".

    If there is no serial correlation, there should be no substantial difference between standard errors from monthly or annually averaged data. There are 12 times as many monthly data points, but the annual averages have only about 1/12 the variance, so it should be a wash.

    There might still be a slight difference in confidence bounds that arises because t critical values are being computed with fewer degrees of freedom with the annual data than with the monthly. But this would be only a very small effect unless the number of years is getting really small (as may be the case with raw AWS data and even with Shuman’s ~10-yr AWS recons).
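
    For instance, the two-sided 95% t multipliers for the two sample sizes used above (a quick illustration, not from the original comment):

    qt(0.975, 19 - 2)   #annual: 19 obs, about 2.11
    qt(0.975, 228 - 2)  #monthly: 228 obs, about 1.97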

    If there is serial correlation, there should still be no systematic difference between the standard errors after the serial correlation is correctly adjusted for (see Steig 2009’s non-correction for serial correlation thread).

    If the monthly serial correlation is truly AR(1) with ρ ≈ .3, the serial correlation should almost entirely wash out at the annual frequency since .3^12 = 5.e-7. However, Ken Fritsch, at Comment #14 of the It’s a Mystery thread, reports,

    For AR1 correlations for the AWS and TIR reconstructions from 1957-2006, I found:

    AWS Monthly = 0.20; AWS Annual = 0.27 and for TIR Monthly = 0.32; TIR Annual = 0.17.

    For TIR, the annual correlation is still over half the monthly, and for AWS it is actually greater! This is not at all what would be expected from AR(1) errors.

    One possibility is that this is just measurement error in the correlation coefficients. For large samples (eg n = 600 as with the monthly 1957-2006 data), the standard error of the correlation coefficient itself is approximately 1/sqrt(n) = .041 with n = 600, so the monthly correlations are off-the-chart significant.

    For n = 50 observations, as with annual 1957-06 data, this is a bad approximation, and should be replaced with the exact Durbin-Watson test for serial correlation. The DW-stat and its exact p-values can be computed with MATLAB function DWTEST, and also with an R function of the same name.

    Using MATLAB, I get comparable correlation coefficients, and DW = 1.42 (2-tailed p = .015) for annual AWS, and DW = 1.60 (2-tailed p = .081) for the annual TIR trend line. The annual serial correlation is therefore still highly significant for AWS, indicating that AR(1) may not be a perfect model for it after all. The TIR annual serial correlation is significant at the 10% level on a 2-tailed test, and even at the 5% level on the 1-tailed test originally favored by D&W (1-tailed p = .040). So perhaps the AR(1) model isn’t that great for TIR either. But still, adjusting for it is a lot better than no adjustment at all. In any event, it doesn’t hurt to try both monthly and annual and see if the answer is robust.

    (Ken uses an R pkg named durbin.watson in his Comment #281 on the Deconstructing Steig thread, with only slightly different results. For some reason he gets an error message with this, but still obtains answers.)
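
    For reference, a minimal sketch of both R routines on a simulated trended AR(1) series (not the reconstruction data):

    #sketch: the two Durbin-Watson implementations in R
    library(lmtest) #dwtest: "Pan" algorithm, deterministic p-value
    library(car)    #durbinWatsonTest (historically durbin.watson): simulated p-value
    set.seed(3)
    year = 1957:2006
    ann = 0.01*(year-1957) + arima.sim(list(ar=0.3), n=length(year))
    fit = lm(ann ~ year)
    dwtest(ann ~ year, alternative="two.sided") #same answer on every run
    durbinWatsonTest(fit)                       #p-value varies slightly run to run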

    • Kenneth Fritsch
      Posted Mar 18, 2009 at 10:00 AM | Permalink

      Re: Hu McCulloch (#50),

      (Ken uses an R pkg named durbin.watson in his Comment #281 on the Deconstructing Steig thread, with only slightly different results. For some reason he gets an error message with this, but still obtains answers.)

      Hu, the error message that I left in was due to my attempted call to the function durbin.watson without loading the package called “car”. I had to load that package before doing any DW testing in R.

      As I recall I left the error message in because I had asked Roman for help when I could not call up the DW function and Roman responded, in college professor style, by giving me sufficient info to figure it out for myself. Now I know (something) about loading packages and libraries into R.

    • Geoff Sherrington
      Posted Mar 18, 2009 at 10:34 PM | Permalink

      Re: Hu McCulloch (#49),

      On precision, accuracy and packaged tests for them, I have a hypothetical.

      Start with a daily maximum average temperature from a single site from 1860 onwards. In 1860 the thermometer was mercury-in-glass and it hung from a tree. One could draw a graph of temperature with time for a few decades, and select a way to draw error bounds. Nothing fancy, been done many times.

      Then the thermometer gets put inside a home-made chamber. There is a jump in the data. At some recent date some authority like GISS or Hadley decided to add 1 degree to the past average, as an “adjustment”.

      Where should the error bounds go? Should not the lower one stay where it was and the upper one be shifted up one degree?

      Next the thermometer is put in a regular Stevenson screen which has a systematic 0.5 degree difference. Up goes the main curve again, but where should we put the error bounds?

      Then the lights start to come on and the UHI effect is introduced into the data and the thermometer is replaced by a thermocouple then a thermistor in a different screen box. Adjustments like TOBS and FILNET are projected back to 1860 for some reason. Same question, what do we do to the error bounds?

      There seems to be a tendency to compute statistical error bounds about the new average line each time a new average is adjusted. It seems to me that the correct error bounds should enclose all of the readings, including the early unadjusted ones. Do you agree in principle? Bias versus precision becomes relevant.

      The same topic arises with GCMs. As I’ve written before, the error of an ensemble should not be the error of the selected runs handpicked for comparison in a round robin. It should include all runs by all participants, even ones that looked wonky, unless there was a known mathematical error to cause rejection. Otherwise you get inbreeding, which we all know can be damaging to future propagation.

  26. Adam Gallon
    Posted Mar 17, 2009 at 4:09 PM | Permalink

    Since my computer-language education ceased with my failure to comprehend "BBC Basic", I don't get any of the calculation stuff, but I am most impressed by the effort various contributors to this blog have spent in getting to the bottom of all things "Manomatical".
    I'm especially impressed by the use of "Nature"'s own availability policy to get the data that Steig has refused to divulge.
    I see that Dr Schmidt over at (Un-)Realclimate still persists in his mantra of "Steig is robust".

  27. Posted Mar 18, 2009 at 7:58 AM | Permalink

    Nice letter, Hu. I hadn't contacted any of the coauthors myself, but had emailed Steig, who replied that I should communicate with his coworker at the university. After some initially hopeful communications I was told to wait 3 months. Since it sounded hopeful that Comiso was going to release the data soon, I was simply waiting. At this point, I have no idea why it's taking so long to push send on their computer.

  28. Hu McCulloch
    Posted Mar 18, 2009 at 8:33 AM | Permalink

    RE Jeff Id, #58,
    My guess is that the reason Steig switched so quickly from cooperative to stonewalling is that Mann intervened. Comiso sounded forthcoming at first, but since we haven’t heard anything more from him, it looks like Mann got to him as well. So unfortunately, it seems like politely reading them the riot act is the only way to get the data now.

    Who was the coworker Steig referred you to? It’s the job of Steig and/or his co-authors to provide the data, not some grad student. Nature requires the data and protocols to be provided “promptly”, not in 3 months. You should definitely write back to Steig that this is inadequate, and that Ziemelis will be hearing from you next week if the required data and protocols aren’t on his website by then.

    I would interpret an algorithm to be a mathematical “protocol” to transform one set of numbers into another, whence the authors are required to provide their algorithms as well as their data. This could either be in the form of a written and detailed explanation of what they did, or just well-commented code.

    • Dave Andrews
      Posted Mar 18, 2009 at 3:08 PM | Permalink

      Re: Hu McCulloch (#59),

      “My guess is that the reason Steig switched so quickly from cooperative to stonewalling is that Mann intervened.”

      My gut feeling exactly!

  29. Posted Mar 18, 2009 at 11:23 AM | Permalink

    RE Ken, #60,
    R’s durbin.watson function in CAR is described at
    http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/car/html/durbin.watson.html. It finds the p-value either by bootstrapping the residuals, or by Monte Carlo simulation using a normal RNG.

    R’s dwtest in LMTEST is described at
    http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/lmtest/html/dwtest.html. This instead uses the “Pan” algorithm, which is what MATLAB uses in its dwtest function.

    It sounds like durbin.watson would give slightly different results each time it is used due to sampling or resampling error, whereas dwtest should give the same answer each time.

    Note that R’s dwtest defaults to the normal approximation for n > 100, while Matlab doesn’t do this until n > 400. Either will let you override this default. They may also differ in whether the default test is 1-tail or 2-tail.

    The Hokudai R library sounds like a very useful resource for R users.

  30. Kenneth Fritsch
    Posted Mar 18, 2009 at 12:09 PM | Permalink

    Thanks, Hu, for the details on these methods – they can be utilized to take my application beyond the black box. It also means more work for me to now determine what kind of differences I'll see between methods.

  31. mjt1st
    Posted Mar 18, 2009 at 12:21 PM | Permalink

    http://faculty.washington.edu/steig/nature09data/

    Their comment from the above…

    Raw Data.
    All of the data used in the temperature reconstructions are from publically available data sources. Any queries about the raw data, or access to it, should be directed to the appropriate data centers, listed below.
    Weather station data (both occupied and automatic weather stations) from the British Antarctic survey: http://www.antarctica.ac.uk/met/READER
    Note that corrections have been made to AWS stations “Harry” and “Racer Rock”. See the READER web site for details.

    AVHRR thermal infrared data: http://www.nsidc.org/data/avhrr

    In regards to Steig, isn't he in the Antarctic at the moment? That may answer the question of why the request was sent to an assistant.

    • mjt1st
      Posted Mar 18, 2009 at 12:34 PM | Permalink

      Re: mjt1st (#63),

      Oops, didn't cut and paste my entire comment correctly… when writing to Nature it might help to reference Steig's data link (above in previous post) specifically and note where it is not complete enough to replicate the results. This might help forestall the reply that they have already provided the data requested.

  32. Robinedwards
    Posted Mar 18, 2009 at 12:42 PM | Permalink

    There's something I must have missed in these threads, which is fundamental to understanding the import of some of the stats analyses that form quite a feature of this blog: /exactly/ what do authors mean by statements like those in #4 (which happens to be a quotation from the original paper)? Does anyone know what statements like "mean trends of 1.1 +/- 0.8 C per decade" are intended to tell us? Is the 0.8 one standard error, or is it the SE times the appropriate t multiplier, which is a number rather close to 2 if one is considering the 95th percentile of the two-tailed t distribution? If the latter, then some of the quoted stats are apparently believably significant. It would be nice to know for sure, but we must rely on the original authors for an explanation, though in this instance they are not members of our community. I'd prefer always to see confidence intervals at a specified probability level, which of course encompass the sample size in their values.

    • RomanM
      Posted Mar 18, 2009 at 1:23 PM | Permalink

      Re: Robinedwards (#65),

      Does anyone know what statements like "mean trends of 1.1 +/- 0.8 C per decade" are intended to tell us? Is the 0.8 one standard error, or is it the SE times the appropriate t multiplier, which is a number rather close to 2 if one is considering the 95th percentile of the two-tailed t distribution?

      I can confirm what Hu says. In the It's a Mystery thread initial post, for this exact reason, I calculated the 95% confidence intervals for the trends using the R confint function (which would use the t distribution and the appropriate degrees of freedom for the standard error of the trends) and divided the length of the interval by 2. When rounded, the results matched the Shuman "error" terms exactly, indicating that they were indeed 95% error bounds.

      • Posted Mar 19, 2009 at 6:01 AM | Permalink

        Re: RomanM (#67),

        Roman, if you or anyone serious enough wants the NSIDC AVHRR data , drop a comment on my blog with your email. I can send it in a gridded timeseries prior to 50km re-gridding or after. Since I don’t have a nice ftp site, the 80 MB takes about 20 minutes to upload to a free file share site so I don’t want to do it too many times.

  33. Posted Mar 18, 2009 at 12:58 PM | Permalink

    RE Robinedwards, #65,
    Shuman’s Table 3 indicates that the +/-0.8°C per decade represents a 95% confidence interval, ie about 2 times the standard error. They admit that of the 4 AWS they study, only Siple appears to show a significant trend either way.

    However, I am not certain whether or not they made any adjustment for serial correlation.

    Shuman’s paper is linked on the previous “It’s a Mystery” thread.

  34. Posted Mar 18, 2009 at 1:18 PM | Permalink

    RE Ken #62,
    More precisely, for annual AWS data, all Antarctica (50 years, slope = 0.1386), I get DW = 1.4189, 2-tailed p = 0.01510 using MATLAB’s dwtest. For annual TIR data (slope = 0.11819), I get DW = 1.5996, 2-tailed p = 0.80557.

    Can you match these with R’s dwtest? If you rerun R’s durbin.watson, do you get the same results each time, or just close?
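
    In R the comparison would run something like this (a sketch only; fit.aws and fit.tir are hypothetical lm objects for the annual AWS and TIR trend regressions):

    library(lmtest)
    dwtest(fit.aws, alternative="two.sided") #compare to MATLAB: DW = 1.4189, p = 0.01510
    dwtest(fit.tir, alternative="two.sided") #compare to MATLAB: DW = 1.5996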

    • Kenneth Fritsch
      Posted Mar 19, 2009 at 10:53 AM | Permalink

      Re: Hu McCulloch (#66),

      Can you match these with R’s dwtest? If you rerun R’s durbin.watson, do you get the same results each time, or just close?

      For several trials with the annual AWS reconstruction in R using the durbin.watson (car) test, I obtain 1.4182 for the DW stat every time, but the p changes with each trial: 0.020 (T1), 0.022 (T2), 0.014 (T3) and 0.024 (T4).

      I need to try the other DWtest to which you linked me. I believe we think/know that that test is the same as the one in Metlab.

      • Kenneth Fritsch
        Posted Mar 19, 2009 at 11:14 AM | Permalink

        Re: Kenneth Fritsch (#80),

        Using the dwtest(lmtest) in R on the AWS annual reconstruction 1957-2006, I obtain a DW stat = 1.4182 and a p= 0.01165 with every trial. I used the default with the “pan” algorithm (sample size less than 100). The iterations were 15.

  35. Posted Mar 18, 2009 at 2:28 PM | Permalink

    RE Roman, #68,
    Does Shuman consider serial correlation, or just use OLS se’s? It wouldn’t be hard for serial correlation to wipe out even the significance of the Siple trend. Is the SC in these regressions significant using R’s dwtest?

  36. Posted Mar 18, 2009 at 4:07 PM | Permalink

    RE #67,

    For annual TIR data (slope = 0.11819), I get DW = 1.5996, 2-tailed p = 0.80557.

    Umm — make that .080557!

  37. Steve McIntyre
    Posted Mar 18, 2009 at 7:04 PM | Permalink

    checking in briefly from Chiang Mai. Actually there is quite convenient internet access in all sorts of places here as it’s oriented to backpack travelers, all of whom are documenting their steps on Facebook.

    Roman, thanks for these interesting contributions. They made me feel very proud, both of the forum provided here and, if I may put it this way, of a style of doing things.

    Thanks as well to Ryan O, the rwo Jeffs, Hu and others for interesting comments.

  38. rephelan
    Posted Mar 18, 2009 at 9:16 PM | Permalink

    the “rwo Jeffs”? A bit too much Tiger Beer maybe? Steve, do you really want to come back from Thailand? And my earlier question still stands… these guys are having a lot of fun with your site… will they let you have it back? Enjoy your vacation… and you other guys, keep up the good work.

  39. Hu McCulloch
    Posted Mar 19, 2009 at 7:02 AM | Permalink

    RE Geoff Sherrington,#73,
    That’s a big and important topic that goes way beyond this narrow thread.
    However, the secular consistency of adjustments and equipment will be discussed at length in [snip] a forthcoming report by Anthony Watts. Stay tuned to WUWT for details!

    • Geoff Sherrington
      Posted Mar 20, 2009 at 5:17 AM | Permalink

      Re: Hu McCulloch (#75),

      Thank you, Hu. Please don’t forget a reminder when it’s posted. I’ve dug up some interesting early numbers on satellites as well. Grist for the mill.

      Contacting people in some USA agencies is now getting difficult, even old buddies, because they fear for their futures if they open up.

  40. Posted Mar 19, 2009 at 8:02 AM | Permalink

    [snip – HM]

  41. Hu McCulloch
    Posted Mar 19, 2009 at 10:08 AM | Permalink

    Re #75-77
    My bad — Anthony’s report hasn’t been officially released yet, so he’s asked us to wait to discuss it. Meanwhile, stay tuned to WUWT for an Important Announcement!

  42. Posted Mar 19, 2009 at 11:27 AM | Permalink

    RE #80,
    Thanks, Ken for confirming that R’s durbin.watson test (in the car package) gives random simulated p-values.

    The R routine dwtest (in lmtest) sounds like it uses the same non-random Pan algorithm as MATLAB’s dwtest, but it can’t be exactly the same, since the MATLAB version defaults to the N(0, 1/T) approximation to the distribution of r1 above T = 400, while R’s version defaults to this above T = 100. (Either will let you override the default, however.)
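
    To make the two R behaviours concrete, a hedged sketch (fit.aws is again a hypothetical lm object for the annual trend):

    library(car)
    library(lmtest)
    set.seed(123) #car simulates its p-value, so a fixed seed makes runs repeatable
    durbinWatsonTest(fit.aws, reps=1000) #named durbin.watson in older versions of car
    dwtest(fit.aws, alternative="two.sided", exact=TRUE) #force the Pan algorithm
    dwtest(fit.aws, alternative="two.sided", exact=FALSE) #force the normal approximation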

    The numbers we are getting for the DW for annual averages of Steig’s AWS series differ in the 5th significant digit (1.4182 vs 1.4189). This is a little odd, but of no practical consequence. I just took the file as it was and didn’t round any intermediate results. Computing DW itself (if not its p-value) is straightforward and should be replicable to machine precision.
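
    The statistic itself is just a ratio of sums of squares of the OLS residuals, e.g.:

    e = residuals(fit.aws) #residuals from the (hypothetical) annual trend fit
    DW = sum(diff(e)^2)/sum(e^2) #Durbin-Watson statistic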

    I believe we think/know that that test is the same as the one in Metlab.

    Isn’t Met’lab where they produce that stuff that makes your teeth fall out?? 😉

  43. Posted Mar 19, 2009 at 11:53 AM | Permalink

    RE Ken F, #81,

    Using the dwtest(lmtest) in R on the AWS annual reconstruction 1957-2006, I obtain a DW stat = 1.4182 and a p= 0.01165 with every trial. I used the default with the “pan” algorithm (sample size less than 100). The iterations were 15.

    Something must be wrong with one of the routines, since MATLAB gave me p = .01510 for the same problem. What are you getting for TIR?

    MATLAB’s routine doesn’t mention iterations. Perhaps this is somehow related to the discrepancy, but I have no idea how this Pan algorithm works.

    Last year, I tried printing out the original Durbin and Watson 1950 and 1951 Biometrika articles to learn where their critical values come from, but got blown away. I see now that both the numerator and denominator have exact chi-square distributions (provided D&W’s otherwise peculiar formula is used), but evidently these are not independent, else the ratio would just have an F distribution. The distribution of the ratio may be related to the arcane Wishart distribution, which generalizes chi-square to non-independent contributions. It also arises in Brown’s paper on Calibration, which UC has called to our attention.

    BTW, Jim Durbin visited OSU ten years ago or so, and gave a very stimulating series of lectures on Kalman filtering, which served as the basis for his subsequent book on that topic, and got me excited about using it for some problems I was working on. Quite a long and productive career!

    • Kenneth Fritsch
      Posted Mar 21, 2009 at 7:23 PM | Permalink

      Re: Hu McCulloch (#83),

      Using the dwtest(lmtest) in R on the AWS annual reconstruction 1957-2006, I obtain a DW stat = 1.4182 and a p= 0.01165 with every trial. I used the default with the “pan” algorithm (sample size less than 100). Iterations from 10 to 100 did not change the DW stat or the p value.

      When I used an alternative normal approximation with dwtest(lmtest) I got a p-value = 0.01216 and the DW stat remained at 1.4182. I will try the annual TIR when I recover from having grandkids over for the week. Thanks for the warning on metlab since a man of my age cannot afford to lose any more teeth – implants are darned expensive. It shall be MATLAB for ever and a day.

  44. Hu McCulloch
    Posted Mar 22, 2009 at 10:29 PM | Permalink

    RE #85,
    Thanks, Ken —
    The documentation for R’s dwtest at http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/lmtest/html/dwtest.html gives references to two papers by Farebrother describing the algorithm (originally due to Jie-Jian Pan) supposedly used in both the R and MATLAB dwtest routines, despite their different answers. I’ll try encoding it when I get a chance and see what I come up with on this Steig data.

    Meanwhile, all these details are digressing pretty far from Roman’s topic of the validity of Shuman’s figures as cited by Steig. Let’s move further discussion over to the Serial Correlation thread.

  45. Robinedwards
    Posted Mar 23, 2009 at 4:54 PM | Permalink

    This thread is especially interesting to me because the technique recommended here, using dummy (indicator) variables to help cope with pronounced seasonal factors, is something I’ve used routinely for climate data ever since 1992, when I first typed in the 1400 rows of values for Kew/Greenwich temperatures. I now use another but equivalent method which is marginally quicker (in my software) to implement. The obvious benefit is that temporal resolution improves dramatically relative to using annual means and, perhaps more importantly, residuals from subsequent regressions are much lower. The term I use is “monthly differences”, which is neutral as regards one’s feelings or biases about what the values mean.

    A further advantage of using monthly differences is that the incidence of missing values is proportionately lower, in that one or two missing months do not mean sacrificing the whole year’s data. For many Arctic and Antarctic data sets this is of considerable practical importance, as many will have noticed.

    It will be pretty obvious that the main, large scale, conclusions from Annual and Monthly Difference data will be closely similar, but the latter turn out to be (in general) far more interesting, to me anyway.

    I’ve posted in other threads, just occasionally, regarding the widespread – indeed almost universal – practice of fitting simple linear models to climate data. The advantage of adopting this type of underlying model (assumption?) is that it is very simple. My strong opinion is that it is over-simple, since virtually every plot of climate time series data makes it abundantly clear that climate does not behave linearly on almost any time scale. Why not use the observations themselves to help in the choice of a model that is much more feasible and more plausible? Well, perhaps no-one ever got fired for fitting a linear model! This does not mean that it is in any sense an optimum model.

    The technique I use, and which I strongly recommend some people here try, is first to form the monthly differences, by whatever method comes most naturally. These, by definition, have mean zero. Now form their cumulative sum by simply summing the MDs successively. Any missing value yields a missing value for the cusum and is simply ignored in subsequent processing.

    The resulting data set will be grossly autocorrelated, but this, for present purposes, is immaterial. We are not interested in the intimate statistics of the cusum but merely its general form.

    Now plot the cumulative sum on the time axis. In very many cases the plot will be striking. What you may find is that the cusum plots as what appears to be a collection of roughly straight segments of varying duration which terminate in an abrupt change of slope which heralds another roughly straight segment. The pattern may be repeated several times on different scales. However, in many data sets there is a “grand scale” linear section that may endure for even a hundred years, indicating a very stable long term temperature regime, with several brief excursions in both directions.
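
    In R, the recipe might look like this minimal sketch (x is assumed to be a monthly temperature ts object; all names are illustrative only):

    #monthly differences: subtract each calendar month's mean (NAs tolerated)
    md = as.numeric(x) - ave(as.numeric(x), factor(cycle(x)), FUN=function(v) mean(v,na.rm=TRUE))
    #cusum: a missing month stays missing and is simply skipped in the running sum
    cs = rep(NA_real_, length(md))
    ok = !is.na(md)
    cs[ok] = cumsum(md[ok])
    plot(as.numeric(time(x)), cs, type="l", xlab="Year", ylab="Cusum of monthly differences")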

    To see these things for yourself I would advise initially looking at data for the North West Atlantic (Greenland/Iceland) for which there are several individual sites available. There is also a “consolidated” set published by Vinther et al in 2007, which goes back to the 18th century. If you use this one you will see a most spectacular cusum plot with a very pronounced change point at the end of 1922 (September) of about 2 C. Yes 2 degrees C, occurring in the space of a month or so. Prior to that date a remarkably stable regime existed, with a very slight but significant rise. After the “event” the climate was again stable at a higher temperature for around 60 years. The conventional view of the data is that there was a marked change in temperatures over the first part of the 20th century, although no-one, as far as I am aware, has noticed that the temperature increase was actually a step change.

    The cusum has /no/ predictive properties. Indeed, the typical stepwise nature of climate change seems to me to indicate that reliable area or regional forecasting is impossible. Vinther’s data up to Sept 1922 gives absolutely no hint of a change. Just fit trend lines to the data partitioned at that date to assure yourself that the contemporary observer/analyst would have had zero expectation of a change until it occurred.
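
    As a sketch of that partitioned fit (x as in the cusum sketch above; 1922.75 stands in for a break just after September 1922):

    t = as.numeric(time(x)); xv = as.numeric(x)
    pre = lm(xv[t<1922.75]~t[t<1922.75]) #trend up to the break
    post = lm(xv[t>=1922.75]~t[t>=1922.75]) #trend after the break
    coef(pre)[2]; coef(post)[2] #slopes (degrees per year) on either side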

    Many other data sets exhibit similar phenomena, usually masked by the normal climate noise until uncovered by old-fashioned industrial quality control techniques. I could write endlessly on this topic!

    If you could email me, I’d be able to provide much more on this.

    Robin

    P.S. My first “Reply” seems to have been lost somewhere. I’ve had to re-write this stuff – rather tedious.