As to your questions, I’ll only pick up on a couple now.

The subtraction is in the second figure.

rbar is the mean interseries correlation. If all the “proxies” are essentially orthogonal, you really begin to wonder how good they are as “proxies.”

I really dislike these little statistical fixes by tree ringers quoting one another. If it ain’t in a text by a real statistician who knows about confidence intervals, don’t use it. Guys like Cook, Briffa and Osborn should give up trying to be theoretical statisticians.

My cherrypick detector is this: if you’ve got covariance in the calibration period but not in the historical period, this suggests to me that you’ve cherrypicked series with a common trend.

It seems that the more technically difficult a posting is, the fewer comments it gets. Which means someone here could use the comment # as a proxy for technical difficulty and rate Steve’s posts for degree of difficulty. OTOH, maybe it’s all just red noise.

B. Not sure what rbar is.

C. Seems (intuitive hunch here, since I don’t follow all the math/background) that J is confusing natural variability with measurement error. For instance, as the Kerry/Bush campaign winds down, there is some variation (swing) in the electorate’s choice. But there are also poll errors, mostly related to how expensive a survey you do. So if you plot a graph over time and some times were not well surveyed (few people polled, or by analogy few proxies), there is large measurement error, even though the actual variability has not changed. But that does not mean that you can wave a wand and rescale the variability. It’s just that some of the variability is instrument error.
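
The polling analogy can be put in one line of arithmetic: observed variance is true variance plus measurement-error variance, and the error term shrinks as the sample (poll size, or proxy count) grows. The numbers below are made up purely for illustration.

```python
# Observed variability = true variability + error from limited sampling.
true_var = 1.0           # genuine variability (e.g. real voter swing)
error_var_per_obs = 4.0  # error variance of a single observation

# Cheap poll (or sparse proxy coverage) -> expensive poll (dense coverage):
for n_obs in (1, 4, 16, 64):
    observed_var = true_var + error_var_per_obs / n_obs
    print(n_obs, observed_var)  # 5.0, 2.0, 1.25, 1.0625
```

The apparent variability falls fourfold without the true variability changing at all, which is point C: some of what looks like variability is just instrument error.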

D. It seems from the comment about “number of independent series” that they anticipated point C. But it’s not clear to me what they are doing, or why they don’t address the number of independent series all along. Or why their construction is not understood by Box, Hunter, Hunter or by some pure stats guy.

E. But conversely, if they really do have something, it ought to be published and pushed to other fields that use statistics. Not just the tree-holer dungeon.

F. Wait, if there is 0 interseries correlation, then the series are completely independent and Nprime reduces to n. That makes sense, no?
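
Point F checks out under one common effective-sample-size formula, n' = n / (1 + (n - 1) * rbar) (assuming that is what Nprime denotes here): rbar = 0 gives n' = n, and rbar = 1 collapses everything to a single effective series.

```python
# Effective number of independent series, assuming the usual formula.
def n_prime(n, rbar):
    return n / (1 + (n - 1) * rbar)

print(n_prime(10, 0.0))  # 10.0 -- orthogonal series count in full
print(n_prime(10, 1.0))  # 1.0  -- ten copies of one series count once
print(n_prime(10, 0.5))  # ~1.82 -- heavy duplication, little new information
```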

G. BTW, it seems that what they are doing would reduce the number of effective independent series, no? That’s the direction you should go if some series are duplicates, no? But how does that play into the transform of variability? In your initial discussion, variability decreases with added sampling, not because of any math done but because that is how sampling works. What does this have to do with some overt adjustment? If anything, they would want to blow up the variability if there is duplication of some series but not of all independent series, no? Since there would be false agreement?
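
The sampling point in G can be made concrete: for n series with common variance sigma2 and mean pairwise correlation rbar, the variance of their average is sigma2 * (1 + (n - 1) * rbar) / n. Independent series give the familiar sigma2 / n reduction; duplicates (rbar near 1) defeat the averaging entirely, which is the “false agreement” worry. This is textbook variance-of-a-mean arithmetic, not anyone’s specific adjustment.

```python
# Variance of the average of n correlated series.
def variance_of_mean(sigma2, n, rbar):
    return sigma2 * (1 + (n - 1) * rbar) / n

print(variance_of_mean(1.0, 25, 0.0))  # 0.04 -- the familiar sigma2 / n
print(variance_of_mean(1.0, 25, 1.0))  # 1.0  -- duplication: no reduction
```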

H. Also, how do they tell the difference between series that correlate because they are both good measurements and ones that have unwanted duplication?

I. Not clear to me how to tell what a significantly low rbar is. What would you expect from white noise series? And if the rbar is low, does that mean that some of the proxies must be bad (since they are not correlated to each other, how can they be correlated to the truth?) Same issue as H.

J. “The difference in rbar certainly suggests to me a serious risk that the series have been cherry-picked for a difference between 19th and 20th century means – why else would there be a difference in rbar?” Interesting. Have you found a cherrypick detector? 😉

K. On the last figure, this goes to a couple points:

a. It’s not possible to replicate the published reports because the methodology is not spelled out (imagine if someone published a math theorem and left out steps). I think this work, which is fundamentally statistical, should include all the details of the math.

b. It’s likely that, since they don’t have to share all their details, there is some kludging and fudging. Not even always of a biased nature; sometimes more of a sloppy changing-methods-halfway-through kind. I’ve seen this tendency in work that is not well reviewed. For instance, patent examples (which can be very cherry-picky).

P.S. Impressed that someone actually posted to this?
