Emanuel 2005 stated:
The accumulated annual duration of storms in the North Atlantic and western North Pacific has indeed increased by roughly 60% since 1949, though this may partially reflect changes in reporting practices, as discussed in Methods.
Speaking of “rudimentary statistics”, this seems like a rudimentary statistical statement. But there are wheels within wheels.
From the collation of unadjusted Best Tracks data, I calculated, for each storm, the number of quarter-days with reported wind speeds greater than 18 m/sec (a threshold mentioned in Landsea’s Comment), divided by four and summed annually – which I interpreted to be the “accumulated annual duration” – doing this for both the North Atlantic and the West Pacific. (This calculation uses pre-adjusted data.)
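The tally described above can be sketched as follows. This is not the original R script – just a minimal Python illustration, assuming hypothetical 6-hourly (year, wind speed) fixes as input; the threshold and quarter-day accounting follow the description in the text.

```python
# Hedged sketch of the "accumulated annual duration" tally described above.
# Assumes observations are 6-hourly fixes, each a (year, wind_mps) pair.

GALE_THRESHOLD_MPS = 18.0  # wind threshold mentioned in Landsea's Comment

def accumulated_annual_duration(observations):
    """Return {year: duration_in_days}.

    Each 6-hourly fix with wind above the threshold contributes a
    quarter-day to that year's total (i.e. count of fixes divided by 4).
    """
    totals = {}
    for year, wind in observations:
        if wind > GALE_THRESHOLD_MPS:
            totals[year] = totals.get(year, 0.0) + 0.25
    return dict(sorted(totals.items()))

# Example with four hypothetical fixes:
obs = [(1950, 20.0), (1950, 25.0), (1950, 15.0), (1951, 30.0)]
print(accumulated_annual_duration(obs))  # {1950: 0.5, 1951: 0.25}
```

Summing these per-year totals across all storms in a basin gives the annual series plotted in Figure 1.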
Figure 1. Accumulated annual duration. Left – Atlantic; center – W Pacific; right – total. Red – 2005-2006.
I’ve tried pretty hard to figure out how one can get an increase of “roughly 60% since 1949”. First, what is the operational definition of the periods involved? As an exercise, I fitted a simple linear trend to the 1949-2004 period for the Atlantic, W Pacific and Total, as shown below. Obviously there is not a “roughly 60%” increase using fitted values from a linear trend – which would seem to be the most logical means of estimating an increase. The closest that I could get to a “roughly 60%” value was the ratio of the segments shown in blue in the right graph – the 2000-2004 mean over the 1946-1950 mean. But even that is only 47%. Maybe the adjusted data are a little different.
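The two candidate estimates of the increase can be made concrete with a short sketch. This is a hypothetical Python illustration on synthetic data, not the audit script itself: one function takes the ratio of fitted linear-trend values at the endpoints, the other takes the ratio of the late-period mean to the early-period mean (the blue segments).

```python
# Hedged sketch: two ways to quantify an "increase since 1949" in an
# annual series. Synthetic data only; the real series comes from the
# Best Tracks collation.
from statistics import mean

def ols_fit(xs, ys):
    """Ordinary least squares; returns (intercept, slope)."""
    xbar, ybar = mean(xs), mean(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return ybar - slope * xbar, slope

def trend_increase(years, values, start, end):
    """Fractional increase implied by fitted trend values at start vs end."""
    a, b = ols_fit(years, values)
    return (a + b * end) / (a + b * start) - 1.0

def segment_increase(years, values, early, late):
    """Fractional increase of the late-period mean over the early-period mean."""
    early_mean = mean(v for y, v in zip(years, values) if early[0] <= y <= early[1])
    late_mean = mean(v for y, v in zip(years, values) if late[0] <= y <= late[1])
    return late_mean / early_mean - 1.0

# Synthetic perfectly linear series: the two estimates still disagree,
# because they answer slightly different questions.
years = list(range(1946, 2005))
values = [100.0 + (y - 1949) for y in years]
print(trend_increase(years, values, 1949, 2004))            # 0.55
print(segment_increase(years, values, (1946, 1950), (2000, 2004)))  # ~0.545
```

Even on an exactly linear series the endpoint-trend ratio and the segment-mean ratio differ, which is part of why “roughly 60%” is indeterminate without an operational definition.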
Whatever the answer to this little conundrum, it illustrates the differences between audits and peer review. Obviously a Nature peer reviewer would not dirty his fingernails checking whether Emanuel’s “roughly 60%” figure was right. I doubt that they even noticed how indeterminate the statement was – even allowing for “roughly”.
Script is here. The script uses R-tables; you’ll need to modify it to use the *.txt files that I archived if I don’t get to that.