In my opinion, the test is appropriate for this kind of investigation. My only concern is the effect of autocorrelation. Judging by the simulations supplied by Willis in #42 and my experience with tau, autocorrelation within each series inflates the variance of the test statistic, producing more rejections (equivalently, more highly significant test statistic values) than when each series is random.

Gotta run, more later,

w.

What would you do with it if you knew it? With more discrepancies between series, R is low and W drops accordingly. That’s the logic of the test, no matter how R is composed.

I fear you misunderstand my point. Inherently, there is nothing to distinguish a Kendall W composed of six identical Spearman rank correlations between k = 4 datasets (let’s say all of the R’s are 0.5) from a Kendall W composed of six Spearman rank correlations R = (0, 0.25, 0.5, 0.5, 0.75, 1). Because the averages are equal, both give a W statistic of (3*R + 1)/4 = 0.625, where R here is the average of the six pairwise correlations.
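The arithmetic above can be checked directly. A minimal sketch (pure Python; the function name is my own) of the averaging formula W = ((k-1)*R_bar + 1)/k, applied to the two sets of pairwise correlations:

```python
def w_from_mean_rho(rhos, k):
    """Kendall's W from the mean of the k*(k-1)/2 pairwise Spearman rhos."""
    r_bar = sum(rhos) / len(rhos)
    return ((k - 1) * r_bar + 1) / k

k = 4                                          # number of series
identical = [0.5] * 6                          # six identical pairwise rhos
spread = [0.0, 0.25, 0.5, 0.5, 0.75, 1.0]      # same mean, very different spread

print(w_from_mean_rho(identical, k))   # 0.625
print(w_from_mean_rho(spread, k))      # 0.625 -- W cannot tell the cases apart
```

Both sets of pairwise correlations have mean 0.5, so W = (3*0.5 + 1)/4 = 0.625 in each case, which is exactly the indistinguishability being claimed.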

I hold that in the second case the Kendall W carries a greater inherent uncertainty than in the first, and that the way to deal with this is to put error bars on the concordance figure. This is particularly true when a claim is made (as in Briffa 2008) that a slight change in the W statistic has some larger significance. If the slight change is less than the sum in quadrature of the relevant errors, it has no statistical significance.

w.

I would still be interested in your take on how Briffa used Kendall W, both specifically with respect to the results (how should he have stated them?) and from a pure measurement point of view (would you have used this measure?).

Thanks for the explanation. However, in his book Rank Correlation Methods (1948), Kendall writes about tau (section 1.13): “… and thus has evident recommendations as a measure of the concordance between two rankings.”

There should be no problem in calling perfect negative correlation between two rankings “discordance” or “perfect disagreement”, a term Kendall also uses in section 1.8 of his book. Of course, for three or more rankings we can only talk about concordance or discordance.

I was referring to Kendall’s Tau, another measure of concordance….

In fact, it’s a measure of correlation; let’s not confound the concepts. Imagine you have the series x = (1, 2, 3, 4) and y = (1, 2, 3, 4). Then you have perfect correlation and perfect concordance (agreement between series). With y = (4, 3, 2, 1) instead, you still have perfect (negative) correlation, but zero concordance.
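The distinction can be made concrete in a few lines of pure Python, using the textbook no-ties formula rho = 1 - 6*sum(d^2)/(n(n^2-1)) and, for k = 2 rankings, W = (rho + 1)/2 (the function names are my own):

```python
def spearman_rho(x, y):
    """Spearman rho for two rankings with no ties."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n * n - 1))

def w_two_series(x, y):
    """Kendall's W for k = 2 rankings via W = ((k-1)*rho + 1)/k."""
    return (spearman_rho(x, y) + 1) / 2

x = [1, 2, 3, 4]
print(spearman_rho(x, [1, 2, 3, 4]), w_two_series(x, [1, 2, 3, 4]))  # 1.0 1.0
print(spearman_rho(x, [4, 3, 2, 1]), w_two_series(x, [4, 3, 2, 1]))  # -1.0 0.0
```

The reversed ranking gives rho = -1 (perfect negative correlation) but W = 0 (zero concordance), which is exactly the point: correlation is signed, concordance is not.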

# 61 Willis :

The Kendall W is defined as….

BTW, the original definition is a different one; the computation through the average rank correlations is just one route to it. Conover (Practical Nonparametric Statistics) explains the interrelationships between Kendall’s W, the Friedman test (a rank-based analysis of variance), and the average Spearman’s rho.
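A sketch of that interrelationship (pure Python; the rankings are illustrative values of my own choosing): W computed from the original definition, as the normalized sum of squared deviations of the rank sums, agrees with W computed from the average pairwise Spearman rho via W = ((m-1)*rho_bar + 1)/m.

```python
from itertools import combinations

def kendalls_w(rankings):
    """Kendall's W from the original definition (m judges, n objects, no ties):
    W = 12*S / (m^2 * n * (n^2 - 1)), S = sum of squared deviations of rank sums."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[j] for r in rankings) for j in range(n)]
    mean = sum(rank_sums) / n
    s = sum((rs - mean) ** 2 for rs in rank_sums)
    return 12 * s / (m * m * n * (n * n - 1))

def spearman_rho(x, y):
    """Spearman rho for two rankings with no ties."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n * n - 1))

rankings = [[1, 2, 3, 4], [2, 1, 3, 4], [1, 3, 2, 4]]
m = len(rankings)
pairs = list(combinations(rankings, 2))
rho_bar = sum(spearman_rho(a, b) for a, b in pairs) / len(pairs)

w_direct = kendalls_w(rankings)
w_via_rho = ((m - 1) * rho_bar + 1) / m
print(w_direct, w_via_rho)  # both 0.777... -- the two routes agree
```

For these three rankings the rank sums are (4, 6, 8, 12), giving S = 35 and W = 7/9 by the direct definition, the same value the average-rho route produces.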

…there is another uncertainty in the calculation of W, which is the standard error of the mean of the… pairwise rank coefficients.

Without the data, however, there’s no way to tell how large that uncertainty is.

What would you do with it if you knew it? With more discrepancies between series, R is low and W drops accordingly. That’s the logic of the test, no matter how R is composed.

You’re welcome. Just don’t go holding your breath, OK?
