Hu, I appreciate your professional thoughts on these analyses. I do not have a comprehensive background in the theoretical underpinnings of some of these statistical issues, so your explanations help this (very) layperson considerably.

In my view, using annual data to avoid the serial correlation that results from using monthly data should reduce the uncertainty in the value of the adjusted trend standard deviation used in Santer et al. I use an example below that might be all wet from a theoretical standpoint (and I hope someone here evaluates that aspect and comments), but it makes my point.

Using the RSS T2LT time series from Santer et al. (2008) for monthly data from 1979-1999, one obtains an unadjusted trend standard deviation of 0.0307 degrees C per decade. With a lag-1 autocorrelation of the regression residuals of r = 0.886, an adjusted trend standard deviation of 0.132 degrees C per decade is obtained.
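If I have the adjustment right, the arithmetic can be checked with a short sketch. This assumes the effective-sample-size form n_eff = n(1 - r)/(1 + r) with the inflation factor sqrt((n - 2)/(n_eff - 2)), as in Santer et al. (2008), and n = 252 months for 1979-1999; the function name is mine.

```python
import math

def adjusted_trend_se(se_unadj, r, n):
    """Inflate an OLS trend standard error for lag-1 autocorrelated residuals."""
    n_eff = n * (1 - r) / (1 + r)                 # effective sample size
    return se_unadj * math.sqrt((n - 2) / (n_eff - 2))

n = 252                                           # months, 1979-1999
se = adjusted_trend_se(0.0307, 0.886, n)
print(round(se, 3))                               # ~0.133
```

Run as written this gives roughly the 0.132 quoted above; the small difference looks like rounding in the inputs.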

Now for the potentially illegitimate part. If the lag-1 residual-versus-residual regression is carried out, a correlation of r = 0.887 is obtained, with a 5-95% confidence interval of 0.827 to 0.944. Using those limits, one can calculate 5-95% limits on the adjusted trend standard deviation and obtain a range from 0.104 to 0.212 degrees C per decade.
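Plugging the CI endpoints for r into the same assumed adjustment (n_eff = n(1 - r)/(1 + r), inflation sqrt((n - 2)/(n_eff - 2)), n = 252 months) reproduces the quoted range:

```python
import math

def adjusted_trend_se(se_unadj, r, n):
    """Inflate an OLS trend standard error for lag-1 autocorrelated residuals."""
    n_eff = n * (1 - r) / (1 + r)
    return se_unadj * math.sqrt((n - 2) / (n_eff - 2))

n, se_unadj = 252, 0.0307
lo = adjusted_trend_se(se_unadj, 0.827, n)        # lower CI limit on r
hi = adjusted_trend_se(se_unadj, 0.944, n)        # upper CI limit on r
print(round(lo, 3), round(hi, 3))                 # ~0.104 0.212
```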

If annual data are used, these uncertainties, if legitimate, are reduced to very small and unimportant values because of the sharply reduced serial correlation. This approach accords with UC's reminders, which I have read at CA, that in effect avoiding autocorrelation is preferable to applying corrections for it.
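A quick synthetic check of the aggregation argument, assuming the monthly residuals behave like an AR(1) process with coefficient 0.886 (the series, seed, and length are all made up for illustration): annual averaging cuts the lag-1 correlation substantially, though for a pure AR(1) it does not eliminate it.

```python
import random

def lag1_corr(x):
    """Lag-1 serial correlation of a series."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

random.seed(0)
phi = 0.886                                   # assumed monthly AR(1) coefficient
months = 120_000                              # long series, to beat sampling noise
y, prev = [], 0.0
for _ in range(months):
    prev = phi * prev + random.gauss(0.0, 1.0)
    y.append(prev)

# Aggregate consecutive 12-month blocks into annual means.
annual = [sum(y[i:i + 12]) / 12 for i in range(0, months, 12)]

r_month = lag1_corr(y)                        # close to 0.886
r_year = lag1_corr(annual)                    # much smaller
print(round(r_month, 3), round(r_year, 3))
```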

I was once a co-author on a conference paper, but could not even attend the session because my security-clearance level was not high enough.

The equation could then be Y(t) = ENSO function + linear trend + noise. The errors in the linear trend would be smaller, since the autocorrelation effects would be much reduced, provided the ENSO function were modelled. But then again, that seems to be some way off in the future, by the look of things.
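A minimal sketch of fitting that equation by ordinary least squares, with everything synthetic: the "ENSO function" is stood in for by a sinusoidal index, and the true trend and coefficient are chosen arbitrarily, so this only illustrates the mechanics of regressing on a trend plus an assumed ENSO regressor.

```python
import math, random

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

random.seed(1)
n = 252                                          # months, as in 1979-1999
t = [i / 120 for i in range(n)]                  # time in decades
enso = [math.sin(2 * math.pi * i / 45) for i in range(n)]  # stand-in ENSO index
# Synthetic series: 0.2 C/decade trend, ENSO coefficient 0.5, white noise.
y = [0.2 * t[i] + 0.5 * enso[i] + random.gauss(0, 0.1) for i in range(n)]

# Normal equations for y = intercept + trend*t + coef*enso.
X = [[1.0, t[i], enso[i]] for i in range(n)]
XtX = [[sum(X[i][a] * X[i][c] for i in range(n)) for c in range(3)] for a in range(3)]
Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(3)]
intercept, trend, enso_coef = solve3(XtX, Xty)
print(round(trend, 2), round(enso_coef, 2))      # near the true 0.2 and 0.5
```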

Re: UC (#58), Is that the new Eminem album cover?

This must be the primary reason Ken is getting much smaller standard errors using the full sample (to 2007 or so) than when he stops, as Santer, Nychka et al (2008) did, in 1999. The autoregression adjustment is important, but since the autoregressive coefficient isn’t much different for the two samples, it is not causing much change in the se.
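A rough way to see why, assuming (for illustration only) a constant residual standard deviation and the same AR(1) coefficient r = 0.886 in both samples: the OLS trend standard error falls roughly as n^(-3/2), while the AR(1) inflation factor changes little between n = 252 (through 1999) and n = 348 (through 2007).

```python
import math

def trend_se(s_resid, n, dt=1 / 120):
    """OLS trend SE for evenly spaced data: s / sqrt(sum of squared centered times)."""
    sst = dt * dt * (n ** 3 - n) / 12
    return s_resid / math.sqrt(sst)

def ar1_inflation(r, n):
    """Santer-style AR(1) inflation factor via the effective sample size."""
    n_eff = n * (1 - r) / (1 + r)
    return math.sqrt((n - 2) / (n_eff - 2))

r = 0.886
# Residual SD set to 1.0 (arbitrary): only the relative sizes matter here.
ses = {n: trend_se(1.0, n) * ar1_inflation(r, n) for n in (252, 348)}
for n, se in ses.items():
    print(n, round(se, 3))
```

With these assumptions the adjusted SE for the 1979-2007 sample comes out around 60% of the 1979-1999 value, even though the inflation factor itself hardly moves.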

As I noted in comment 64 of the Oct 22 thread "Replicating Santer Tables 1 and 3", the excellent Nychka, Santer, et al (2000) unpublished working paper on serial correlation found by Jean S demonstrates that the adjustment Santer, Nychka et al use in 2008 is in fact inadequate, so that the true standard errors are somewhat larger than are obtained using the 2008 adjustment. It is unfortunate that Santer, Nychka et al did not heed Nychka, Santer et al!

As Jeff Id points out above, it should be borne in mind that linear trends are highly suspect as literal models of climatic data. It would be sufficient for the purposes of global warming advocates to make a case that temperature has merely drifted up to a level that is significantly higher than it used to be, without requiring it to be a linear trend. Nevertheless, for the data at hand, the significance or insignificance of the “trend” (and differences between measures of the trend) is probably a descriptively adequate way of characterizing such a warming.

As for multiple co-authorship, an unfortunate incentive for co-authors to pile on is that universities (and research centers, I assume) often count raw citations as a measure of performance for salary purposes etc. If they counted co-author-adjusted citations instead, e.g. giving each of the 17 authors of the 2008 paper 1/17 citation credit apiece for having co-authored it, gratuitous piling on would quickly come to an end!

But does anyone ever get buttered?

UC wins the thread!