It should be remembered that there are in fact two Nychka standards: The Santer Nychka+15 (2008) adjustment of

ne = n*(1-rho)/(1+rho),

and the far superior Nychka Santer+4 (2000) adjustment of

ne = n*(1-rho-.68/sqrt(n))/(1+rho+.68/sqrt(n)),

which tries to compensate (if still only imperfectly) for the small-sample bias in estimating rho.
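The difference between the two adjustments is easy to compute directly. A minimal sketch in Python (the function names are mine; rho is the estimated lag-1 autocorrelation, and the correction term is 0.68 divided by sqrt(n), per the discussion of the 0.68/sqrt(N) term below):

```python
import math

def n_eff_2008(n, rho):
    """Santer et al. (2008) effective sample size: n*(1-rho)/(1+rho)."""
    return n * (1 - rho) / (1 + rho)

def n_eff_2000(n, rho):
    """Nychka et al. (2000) version, which inflates rho by 0.68/sqrt(n)
    to offset the small-sample downward bias in the estimated rho."""
    c = 0.68 / math.sqrt(n)
    return n * (1 - rho - c) / (1 + rho + c)

# For a short, moderately autocorrelated series the 2000 adjustment
# yields a noticeably smaller effective sample size.
print(n_eff_2008(100, 0.5))  # about 33.3
print(n_eff_2000(100, 0.5))  # about 27.6
```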

Hu, if the co-author of the Santer et al. (2008) paper, i.e. none other than Nychka himself, “allows” the use of the Santer adjustment for the trend standard deviation over what you see as the better, no, make that far superior, Nychka adjustment from the 2000 paper, then either you or Nychka has some explaining to do.

]]>Craig, sorry to be a pedant but the Shakespeare quote is “hoist with his own petar” 🙂, although petar and petard mean the same and most people use your format (a bit like “play it again Sam”)

]]>Maybe, in the spirit of the climatological times, Nychka has developed a new “proxy statistics,” in service to proxy thermometry, in which the statistical adjustments are chosen *a posteriori* to give the best signal.

Remember the specious political manipulation of events to provide a government principal with “plausible deniability”? Well, we now see high analytical skill being bent to provide an academic principal with ‘plausible assertability.’ As distance from the event increases, and if the desired story holds through insistent repetition, the “plausible” part drops away and the denial or assertion takes on the force of fact. Hence the meaning of “moved on” as regards MBH98&99, and so it goes in climatology these days.

]]>I can confirm Steve’s ARMA(1,1) results from my own work modeling different M08 proxies. There were 15 series which I couldn’t get to converge at all with an ARMA(1,1) model. Some of them were the most ridiculous-looking ones (e.g. a big flat line replacing the high-frequency data in the middle of the graph). A bunch had very high AR coefficients in the regression.

I understand that it is subjective, but the 5th-grade-class giggle test could have been applied to some of this data. Since roughly 100 series were already rejected before the correlation analysis (the 1357-1209-Luterbacher difference), something subjective was clearly already in use. The more unreasonable of these curves would have been rejected from the 1357 anyway if they ‘accidentally’ ended up among the few with high weighting in the final result, because of the implications that would have for the paper.

In CPS at least, there isn’t any reason I can see for putting in a fake flat line where data was missing (if it was); it simply flattens the historic pre-calibration result. In the CPS version you could instead rework the averaging algorithm to leave the missing stretch out.

Series 422 is a good example.

Quansheng Ge, Jingyun Zheng, Xiuqi Fang, Xueqin Zhang, and Piyuan Zhang. 2003. Winter half-year temperature reconstruction for the middle and lower reaches of the Yellow River and Yangtze River, China, during the past 2000 years. The Holocene 13(6), pp. 933–940.

BTW: since I am fairly new to ARMA regression, is there some reason the CA group prefers AR(1) to ARMA(1,1), which often gives me a better SE result on proxies? Or perhaps the terms are being used interchangeably in the threads?

]]>and the far superior Nychka Santer+4 (2000) adjustment of

ne = n*(1-rho-.68/sqrt(n))/(1+rho+.68/sqrt(n)),

Why do you say the Nychka Santer +4 is far superior?

I’ve run Monte Carlo simulations on various AR(1) processes and I just don’t find this method superior. Without the 0.68/sqrt(N) correction, it rejects less frequently than it should; with the correction, it rejects too frequently. (Or at least it does for N near 90 and for N of 252.) For N=252, the original Nychka is closer to correct. (I don’t remember for N=90.)

So the “improved” Nychka method is only an improvement if the standard is that excess false positives are bad but excess false negatives are OK. But that doesn’t make sense. The analyst is supposed to select their false positive rate, and the method is supposed to deliver the rate they intended!
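A rejection-rate experiment of this kind can be sketched as follows (the function name, default parameters, and the crude floor on ne are my choices): simulate trendless AR(1) noise, test the OLS trend at the nominal 5% level with an effective-sample-size adjustment, and count false positives.

```python
import numpy as np
from scipy import stats

def rejection_rate(n=90, phi=0.5, trials=2000, use_bias_corr=False, seed=0):
    """Observed false-positive rate (nominally 0.05) of a trend test
    on trendless AR(1) noise, with or without the 0.68/sqrt(n) term."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    sxx = np.sum((t - t.mean()) ** 2)
    rejects = 0
    for _ in range(trials):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0] / np.sqrt(1 - phi ** 2)  # start in stationary distribution
        for i in range(1, n):
            x[i] = phi * x[i - 1] + e[i]
        slope, intercept = np.polyfit(t, x, 1)
        resid = x - (intercept + slope * t)
        rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        c = 0.68 / np.sqrt(n) if use_bias_corr else 0.0
        ne = max(n * (1 - rho - c) / (1 + rho + c), 3.0)  # crude floor on ne
        s2 = np.sum(resid ** 2) / (ne - 2)  # residual variance with adjusted DOF
        se = np.sqrt(s2 / sxx)              # trend standard error
        if abs(slope / se) > stats.t.ppf(0.975, ne - 2):
            rejects += 1
    return rejects / trials
```

Because the 0.68/sqrt(n) term always shrinks ne, on any given draw the corrected test can only reject where the uncorrected one also rejects, so running both on the same seed shows the direction of the effect directly.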

]]>This isn’t quite a final rendition as there’s much hair on the “decadal” data, which I’m planning to get to.

]]>ne = n*(1-rho)/(1+rho),

and the far superior Nychka Santer+4 (2000) adjustment of

ne = n*(1-rho-.68/sqrt(n))/(1+rho+.68/sqrt(n)),

which tries to compensate (if still only imperfectly) for the small-sample bias in estimating rho.

See my discussion at Comment 64 of the Oct. 22 thread Replicating Santer Tables 1 and 3. The unpublished 2000 NCAR working paper was linked by Jean S at #10 of that thread.

Note that Nychka Santer+4 (2000) use this formula to adjust the effective sample size itself, so the effective-DOF adjustment (ne-2 versus n-2) is even bigger. Furthermore, they then enter the adjusted DOF into the heavy-tailed Student t table to find a 5% critical value that can be well above 2.
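The effect on the critical value is easy to illustrate. A sketch (the function name and example numbers are mine) that feeds the 2000-style effective DOF into the Student t quantile:

```python
import math
from scipy import stats

def crit_t_2000(n, rho, alpha=0.05):
    """Effective sample size per the 2000 adjustment, plus the two-sided
    Student t critical value at ne - 2 degrees of freedom."""
    c = 0.68 / math.sqrt(n)
    ne = n * (1 - rho - c) / (1 + rho + c)
    return ne, stats.t.ppf(1 - alpha / 2, ne - 2)

# With strong autocorrelation, ne collapses to a handful of observations
# and the 5% critical value blows up well past the usual ~2.
ne, tc = crit_t_2000(120, 0.9)
print(ne, tc)
```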

Steve here is only imposing the inferior 2008 Nychka standard on Mann, and not the far superior 2000 Nychka standard.
