Working in the digital signal processing area, I think what is missing in handling climate data is a better understanding (and knowledge) of sampled data. After all, what we have here is a discrete data series sampled from a continuous signal. Besides obvious errors in the sampling process (UHI et al.), I wonder whether people working in climate science fundamentally understand what it means to “take a temperature sample”. Basic Nyquist and Shannon stuff.
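To make the sampling point concrete, here is a minimal toy sketch (my own illustration, not anything from the comment): a diurnal temperature cycle sampled exactly once per day, at the same clock time, aliases to a constant, so the sub-daily variability is completely invisible to the record.

```python
import math

def temp(t_hours):
    """Toy temperature signal: 10 degC mean plus a 5 degC diurnal swing."""
    return 10.0 + 5.0 * math.sin(2 * math.pi * t_hours / 24.0)

# Sample once per day at the same clock time: every sample hits the
# same phase of the 24-h cycle, so the oscillation aliases to a
# constant offset (numerically ~10.0 degC at midnight, every day).
daily_samples = [temp(24.0 * d) for d in range(5)]
print(daily_samples)
```

Sampling any faster than twice per cycle (say, every 6 hours) would start to resolve the diurnal component; that is the Nyquist point being made above.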

Besides, I think taking a “true” daily-average “temperature sample” at a weather station is quite pointless (for climate science, IMHO) if you don’t have records of the accompanying daily sunshine duration and energy transport via wind, convection, and so on. Even then, I have difficulty fathoming how one could create a representation of “real climate” from this data – but maybe I just lack imagination.

]]>Given that PNAS knew that these calculations would be scrutinized, you’d think that they’d have made a better effort to have stuff that was a little less embarrassing.

As you are probably aware, *PNAS* has a mixed reputation because of the different tracks of peer review that can be taken to have a paper published in the journal. The “Communicated by Lonnie G. Thompson” on Mann (2008) indicates that the paper was submitted through Track I. In this path to publication, Dr. Thompson served as editor for the article and obtained at least 2 reviews of the paper from individuals at other institutions, and not from the institutions of any of the authors. The peer review process is not conducted at the level of the PNAS editorial board, but at the level of the communicator. In principle, those peers and their reviews should be anonymous, but in practice, they often are not. Once the concerns of the reviewers are answered, the editor advises the editorial board on whether to publish it. While the editorial board reserves the right to reject any manuscript, this rarely happens with Track I publications. I have been witness to conversations between scientists that sounded like the following: “We’ll get Dr. X down the hall to communicate it, and he’ll lean on 2 of his peers to bless it, and it’ll get published.”

My mentor, a NAS member themselves, refused to publish my work in PNAS because of the stigma attached when you communicate your own work. I don’t enjoy the fact that I am immediately more skeptical of manuscripts submitted through Track I, but I am.

]]>You have no basis for asserting that the effective N is about 14 merely because “Mann is using a 20-yr lowpass filter”. I don’t know how many df are really in this setup; but there’s more to this setup than the Butterworth. You haven’t shown that Slutsky-Yule isn’t present here, for example.

You can’t ignore the binned results, which are arguably more faithful to the data as it exists. All in all, the Mannian calculation is a seriously screwed-up way of doing things. Given that PNAS knew that these calculations would be scrutinized, you’d think that they’d have made a better effort to have stuff that was a little less embarrassing.

]]>I don’t think this is quite right. Mann is using a 20-yr lowpass filter which makes the effective sampling rate about once every 10 yrs, for N ~ 14. Applying the Quenouille formula on an annual autocorrelation basis in this case produces a meaningless number. It would be more suitable to calculate the autocorrelation at lag ~11. A correlation greater than 0.5 for N=14, given serial independence, does reach 95% significance.

But if you use the autocorrelation at lag 11, the “N” in the formula should be 14 or so, not 146, so that as long as there is still a fair amount of autocorrelation, the adjusted sample size will still be well under 14.

But the AR(1) model probably isn’t very good for this complicated doubly smoothed data that may have been autocorrelated to start with. I would therefore just take the Quenouille adjustment as an indication of cause for extreme concern, rather than as definitive. I think Steve did right to just regress the raw Dongge data on binned temperature, though as I noted in #3 above, there might be other reasonable ways of binning.
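For concreteness, the Quenouille adjustment being debated above fits in a few lines. The N and lag-1 autocorrelation below are the values quoted later in the thread (146 annual values, r = 0.9945544); the function itself is just a sketch of the effective-sample-size formula N(1 − r)/(1 + r).

```python
def quenouille_neff(n, r1):
    """Effective sample size N(1 - r)/(1 + r) for an AR(1)-like series
    with lag-1 autocorrelation r1 (the adjustment used in Santer et al 2008)."""
    return n * (1.0 - r1) / (1.0 + r1)

# Values quoted in the thread: 146 annual values (1850-1995) with
# residual lag-1 autocorrelation 0.9945544.
n_eff = quenouille_neff(146, 0.9945544)
print(round(n_eff, 3))  # ~0.399: effectively no degrees of freedom left
```

Whether an annual lag-1 r is the right input for doubly smoothed data is exactly the dispute above; the arithmetic itself is uncontroversial.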

]]>Y’see, the number of years is 146 (1850-1995). The autocorrelation of the residuals in a linear regression is 0.9945544, and the resulting degrees of freedom using the Quenouille formula used in Santer et al 2008, N(1-r)/(1+r), is only 0.399, something that must have worried Gavin Schmidt.

I don’t think this is quite right. Mann is using a 20-yr lowpass filter which makes the effective sampling rate about once every 10 yrs, for N ~ 14. Applying the Quenouille formula on an annual autocorrelation basis in this case produces a meaningless number. It would be more suitable to calculate the autocorrelation at lag ~11. A correlation greater than 0.5 for N=14, given serial independence, does reach 95% significance.
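As a sanity check on the claim that r > 0.5 at N = 14 reaches 95% significance, here is a quick sketch of the standard t-test for a correlation coefficient, t = r·sqrt((N − 2)/(1 − r²)). The critical value is hard-coded from a t-table, and I am assuming a one-tailed test (the comment doesn’t specify).

```python
import math

def corr_t_stat(r, n):
    """t statistic for testing a sample correlation r against zero,
    with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1.0 - r * r))

t = corr_t_stat(0.5, 14)      # r = 0.5, N = 14  ->  t = 2.0 exactly
T_CRIT_ONE_TAILED = 1.782     # t-table value for df = 12, alpha = 0.05
print(t, t > T_CRIT_ONE_TAILED)
```

Note that the two-tailed critical value at df = 12 is about 2.179, so r = 0.5 clears the 95% bar only in the one-tailed sense; a slightly larger r would be needed two-tailed.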

]]>Steve: Why are you asking me what the rationale is? Ask Mann or Gavin Schmidt.

A great idea, only Gavin would censor everything at RC, Hansen is busy on the Discovery channel claiming Armageddon from 2 C temp rise, and Mann is busy with stealth deletions and alterations on the Penn State web site. In all seriousness, why isn’t the paleoclimate crowd demanding these answers, if not in the review process, then in later commentary? (I know, rhetorical and OT).

]]>Hmm. Seems to me as if much of accepted warming theory has a very low signal to nits ratio.

Regarding the Dongge O18 data, it’s been a long time since my last statistics course, but the notion of linear interpolation between irregular-year data points raises a flag. This seems to add additional smoothing at the front end. Depending on the ratio of filled-in to raw-data years, this could be a **very** heavy-handed smoothing. But maybe that’s what you already said in another way. If so, sorry.
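The front-end smoothing worry can be illustrated with a toy sketch (my own construction, nothing to do with the actual Dongge series): linearly interpolating white noise sampled every 5 “years” onto an annual grid manufactures strong lag-1 autocorrelation that was never in the raw samples.

```python
import random

random.seed(0)
# White noise observed every 5 "years"
times = list(range(0, 101, 5))
vals = [random.gauss(0, 1) for _ in times]

# Linear interpolation onto an annual grid
annual = []
for t in range(0, 101):
    i, rem = divmod(t, 5)
    if rem == 0:
        annual.append(vals[i])
    else:
        frac = rem / 5.0
        annual.append(vals[i] * (1 - frac) + vals[i + 1] * frac)

def lag1(x):
    """Sample lag-1 autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# The interpolated series is far more autocorrelated than the raw one
print(round(lag1(vals), 2), round(lag1(annual), 2))
```

Any significance test run on the interpolated annual series inherits that artificial persistence, which is one more reason the effective sample size is much smaller than the nominal year count.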

Two space shuttles burned up because engineers failed to “pick nits.”

]]>