Re: Hu McCulloch (#21),

I don’t know who the reviewers were, but note that it was NAS-member Lonnie Thompson who initially recommended this paper to the editors. It evidently met his standards of documentation and replicability.

Any idea which NAS member invited:

Ramanathan, V. and Y. Feng, 2008: On avoiding dangerous anthropogenic interference with the climate system: Formidable challenges ahead, PNAS, 105, 14245-14250, Sept 23, 2008.

… currently under review at ClimateScience under:

Misconception And Oversimplification Of the Concept Of Global Warming By V. Ramanathan and Y. Feng.

The case in Mann et al. (2008) of eliminating the Briffa series for the poorly rationalized claim of divergence seems to be made even more questionable by using RegEM to put it back into the reconstruction.

Obviously I meant eliminating only part of the series or truncating it and not eliminating the whole series. Actually eliminating the whole series would have been the more honest choice here, although that might be considered a choice made to avoid a discussion of “divergence”.

There’s a VERY simple way to look at ridge regression. Stone and Brooks (1990), which I think I’ve written about, describe a one-parameter continuum between OLS and Partial Least Squares regression (the ridge estimators lie along this continuum). There are various schemes for picking a stop along the way – Mann’s “objective” criterion is one – but the result is still going to be a “blend” of the two coefficient approaches.

The real issue is the one that we were discussing with Brown – “inconsistency”. I can’t imagine that Brown confidence intervals will be anything narrower than floor-to-ceiling.

This leads me to the question of whether reviewers of climate science papers ever object to cherry picking and data mining, direct or indirect. Or do they merely look at the formal statistics and assume that anyone of a scientific mind will use a priori criteria, set before the fact, for reasonably objective and technical reasons? It seems to me that a strategy of forcing your statistics into a subjective mode would provide cover with reviewers who might be looking for a reason (or an excuse) to give a favorable, or at least neutral, review, and at the same time yield some reasonably good (for a desired result) data-mined answers.

Some of these statistics, with what I will call a subjective bent, are rather obvious to a layperson, but I am wondering whether anyone with a statistical background has made a comprehensive list of these suspicious statistics and methodologies in the Mann et al. reconstructions.

The case in Mann et al. (2008) of eliminating the Briffa series for the poorly rationalized claim of divergence seems to be made even more questionable by using RegEM to put it back into the reconstruction.

I am aware that the RegEM (regularized expectation maximization) algorithm is a rather commonly used method for infilling missing data. My question as a layperson: I have read that ridge regression is, or can be, part of this method, and that ridge regression in turn depends on a ridge parameter – an extra parameter that has to be introduced into the model, whose value is assigned by the analyst and which determines how far ridge regression departs from LS regression. Given that, can the RegEM method be made subjective?
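As a minimal numerical sketch of the ridge-parameter point (this is not the RegEM code, just a toy regression): the estimate below depends on an analyst-supplied parameter lam, with lam = 0 recovering ordinary least squares exactly and larger values shrinking the coefficients progressively away from the OLS answer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def ridge(X, y, lam):
    """Ridge estimate: solve (X'X + lam*I) b = X'y.  lam = 0 gives OLS."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b_ols = ridge(X, y, 0.0)     # OLS: close to beta_true
b_mid = ridge(X, y, 10.0)    # mild shrinkage
b_big = ridge(X, y, 1e6)     # heavy shrinkage toward zero

# coefficient norms shrink monotonically as lam grows
print(np.linalg.norm(b_ols), np.linalg.norm(b_mid), np.linalg.norm(b_big))
```

Every choice of lam gives a different set of coefficients, so in that narrow sense the answer is yes: unless the parameter is fixed by a stated, pre-specified rule, the analyst has a knob to turn.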

Re: Henry (#17), Steve’s response

So (ignoring series which are based on the local temperature) about 45% of the proxy series are more correlated with the second closest gridcell temperature record than with the closest. This does suggest that the local temperature signal is generally weak.

But it could be a way to demonstrate teleconnection, by refusing to stop at “pick two”: instead, compare each series with every gridcell (a big calculation, but feasible) to find where in the world it is most correlated. For the vast majority, that will be somewhere far away, thus *proving* teleconnection. And in most cases the largest positive value will be way over Mann’s filter, so most of the series can then justifiably be included in the next step of the reconstruction.

Has anyone checked whether there is any geographic significance to this procedure? If a proxy site is on the boundary between two geographic regions, is this procedure a means of finding which region (i.e., the gridcell containing the region) is more representative of the proxy?

I have a feeling that this question is more a display of my ignorance of the issues than anything else but anyway.

This is a very odd procedure that I’ve never seen anywhere else in statistics. My experience with the authors is that you can’t exclude the possibility that the odd procedure was done for a reason, and that a non-odd calculation was done but not reported.

Ross, for the 90th or 95th percentile, it looks to me like “pick two” adds a few points – say .14 to .17 (white noise), with higher values under autocorrelation.
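Those percentiles are easy to check by Monte Carlo. A rough sketch, assuming a calibration length of 130 years and white-noise series (both assumptions are mine, not from the thread): for each trial, correlate a noise “proxy” with two independent noise “gridcells” and keep the larger correlation, then read off the 90th and 95th percentiles of that maximum.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, trials = 130, 20000   # assumed calibration length; both numbers are assumptions

# two independent white-noise gridcells per proxy; "pick two" keeps the larger r
grid = rng.standard_normal((trials, 2, n_years))
proxy = rng.standard_normal((trials, 1, n_years))

g = grid - grid.mean(axis=2, keepdims=True)
p = proxy - proxy.mean(axis=2, keepdims=True)
cors = (g * p).sum(axis=2) / np.sqrt((g**2).sum(axis=2) * (p**2).sum(axis=2))
pick_two = cors.max(axis=1)

print(np.round(np.percentile(pick_two, [90, 95]), 3))
```

Under these assumptions the two percentiles land near the .14–.17 range mentioned above; a shorter calibration period or autocorrelated noise would push both thresholds higher.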

…it was NAS-member Lonnie Thompson who initially recommended this paper to the editors. It evidently met his standards of documentation and replicability.

Point taken. Still has a large flabbergast factor, though.
