Click to access bams_hurricanes.pdf

Again, Held et al.

Niño 3.4 index was negative: p = 0.40

Niño 3.4 index was positive: p = 0.92

I think these results demonstrate that one can obtain results similar to Mann's without using SST when the process is applied to the Easy Detect storms. Following this line of thinking further offers evidence that the variable SST, with its trend increasing over time, can easily be confounded with improvements over time in storm detection capabilities.

Since Mann's use of the Niño 3.4 index for the months following the storm season seemed a rather unphysical connection without some explanation, I looked at the differences that would result from using the same months to categorize the storm counts for the following season (as opposed to the preceding one, as Mann evidently did). The results of the goodness-of-fit test for a Poisson distribution were as follows:

Niño 3.4 index was negative: p = 0.01

Niño 3.4 index was positive: p = 0.90

It is interesting to note a couple of relationships here. First, the Easy Detect storm counts for years with a positive DJF Niño 3.4 index fit a Poisson distribution very well, whether the index is applied to the following or the preceding storm season. Second, when the Niño 3.4 index is negative, the fit to a Poisson distribution becomes considerably less probable for the storm-count years preceding the DJF index months, and when the negative DJF Niño 3.4 index months precede the storm-count years, one can reject the null hypothesis that the distribution is Poisson at the 0.05 level.
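For reference, the kind of chi-square goodness-of-fit test used above can be sketched as follows. This is my own construction, not Mann's or anyone else's actual code; the function name, the pooling of large counts into one bin, and the example data are all assumptions.

```python
# Chi-square goodness-of-fit test of annual storm counts against a
# Poisson distribution whose rate is estimated from the data.
# Pooling all counts >= max_bin into a single cell is an assumption.
import numpy as np
from scipy.stats import poisson, chisquare

def poisson_gof(counts, max_bin=12):
    """Return (chi-square statistic, p value) for a Poisson fit."""
    counts = np.asarray(counts)
    lam = counts.mean()  # maximum-likelihood estimate of the rate
    # Observed frequencies for 0..max_bin-1, with >= max_bin pooled
    observed = np.array([np.sum(counts == k) for k in range(max_bin)]
                        + [np.sum(counts >= max_bin)])
    # Matching Poisson cell probabilities (these sum to exactly 1)
    probs = np.append(poisson.pmf(np.arange(max_bin), lam),
                      poisson.sf(max_bin - 1, lam))
    # One parameter (lambda) was estimated from the data, hence ddof=1
    return chisquare(observed, probs * counts.size, ddof=1)
```

Splitting the years by the sign of the DJF Niño 3.4 index and calling `poisson_gof` on each subset would reproduce the style of test reported above.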

I did one more analysis, removing the cyclical component from the Easy Detect storm count series (1860-2007) using the cycle Willis E. derived for the Total count series: a sine wave with a peak-to-peak amplitude of 3.2 and a period of 58.8 years. I would caution that this procedure removed a cyclical component based on the Total count series, not the Easy Detect series, and should be redone using the Easy Detect series. Removing the described cyclical component increased the p value for a chi-square fit to a Poisson distribution from 0.01 to 0.59.
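As a sketch of that cycle-removal step: the phase of the sine wave is not stated above, so I fit it by least squares with the period and amplitude held at the quoted values, and rounding the adjusted series back to whole counts before any Poisson test is my own assumption.

```python
# Remove a fixed-period sinusoidal component (period 58.8 yr,
# peak-to-peak amplitude 3.2, i.e. amplitude 1.6) from an annual
# count series. Only the phase is fit, via the identity
# A*sin(wt + phi) = a*sin(wt) + b*cos(wt).
import numpy as np

def remove_cycle(years, counts, period=58.8, amplitude=1.6):
    years = np.asarray(years, dtype=float)
    counts = np.asarray(counts, dtype=float)
    w = 2.0 * np.pi / period
    # Fit a and b linearly, then recover the phase
    X = np.column_stack([np.sin(w * years), np.cos(w * years)])
    a, b = np.linalg.lstsq(X, counts - counts.mean(), rcond=None)[0]
    phase = np.arctan2(b, a)
    cycle = amplitude * np.sin(w * years + phase)
    # Round the adjusted series back to whole counts (an assumption)
    return np.rint(counts - cycle).astype(int)
```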

I have received an update of the Easy Detect storms that was not applied to this analysis. It does not appear that the changes will alter the above results significantly, but for completeness I will repeat the calculations with the update and report any differences.

Also, why didn't they make him repeat with similar binning for pre/post aircraft sampling? Can the reader tell whether one might not get just a change in the Poisson distributions based on that sort of binning?

I am looking at the Mann data and analysis as we speak and plan to do some sensitivity analyses. I agree that the total data do not fit a Poisson distribution, nor do the data for Easy to Detect storms fit one, even though that distribution has little or no trend.

Mann divides the data into favorable and unfavorable classes using the variables SST (which I judge is confounded with changing detection capabilities and is effectively detrended using Easy to Detect storm counts) and the Niño 3.4 index. He reports p values for the chi-square test that give reasonably good confidence that the distribution is Poisson for the classification -SST/+Niño (the unfavorable condition) and the classification +SST/+Niño or -SST/-Niño (the neutral condition). The p value for the chi-square test for the classification +SST/-Niño (the favorable condition) is 0.27; while one cannot reject the null hypothesis that the distribution is Poisson (normally one uses p less than 0.05 as the rejection level), the beta error here would be considerably larger. In an attempt to get around this issue, Mann looks at the average p value for all three cases, or at least the favorable and unfavorable cases, and implies a rule-of-thumb guideline that an acceptable average value for p is 0.50.
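As I read it, the classification scheme amounts to the following mapping. This is only a sketch of the labels described above; the actual SST and Niño thresholds Mann used are not reproduced here.

```python
def mann_class(sst_positive, nino_positive):
    """Classify a year by the signs of SST and the DJF Niño 3.4 index,
    per the scheme described above (labels only; how the signs are
    determined is whatever Mann used, not reproduced here)."""
    if sst_positive and not nino_positive:
        return "favorable"      # +SST / -Niño
    if (not sst_positive) and nino_positive:
        return "unfavorable"    # -SST / +Niño
    return "neutral"            # +SST/+Niño or -SST/-Niño
```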

Willis E., in a much earlier thread, detrended the storm count time series and removed the cyclical component. He graphed it, and I did a chi-square test on the adjusted time series and calculated a p value of approximately 0.9, which I consider a very good fit.

I want to remove the cyclical component from the Easy Detect storm counts and do a chi-square test on the result. I think it is important to note that in the Willis E. analysis nothing is presented a priori as a parameter. It simply shows that a detrended count series with a cyclical component removed fits a Poisson distribution very well.

I will also use Mann's classifications for Niño 3.4 and calculate the p values of a goodness-of-fit test for my Easy Detect storm count series.

1) If he took the batch as a whole, we can reject the hypothesis that the distribution is Poisson. (Figure 2a: note, the y axis shows as many as 40 samples in some bins.)

2) If he breaks it into two batches with half as much data each, we *still* reject the null hypothesis. (Figures 2b & 2c: note, the y axis shows only 20 samples in some bins.)

But

3) If we split it into even smaller bits, we can no longer reject the null hypothesis?

Is there some way to calculate beta errors on this? Is this really just an exercise in increasing beta error until we have a test that is more likely to give the wrong answer than the right one?

I can’t do numbers in my head, but, *given the logic* for making a conclusion, I wish the reviewers had required Mann to include some estimate of beta errors.

After all, the logical argument may rest on beta error.
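One way to put numbers on the beta-error question is simulation. The sketch below is my own construction, not anything from Mann's paper: it draws counts from an assumed non-Poisson alternative (a 50/50 mixture of two arbitrarily chosen Poisson rates) and estimates how often a chi-square goodness-of-fit test fails to reject at alpha = 0.05. Shrinking the sample, as splitting the record into small batches does, would generally be expected to drive this beta error up.

```python
# Monte Carlo estimate of the beta (type II) error of a chi-square
# Poisson goodness-of-fit test under an assumed overdispersed
# alternative: each year's rate is drawn from {6, 12} at random,
# so the counts follow a Poisson mixture, not a single Poisson.
import numpy as np
from scipy.stats import poisson, chisquare

def gof_pvalue(counts, max_bin=12):
    lam = counts.mean()
    observed = np.array([np.sum(counts == k) for k in range(max_bin)]
                        + [np.sum(counts >= max_bin)])
    probs = np.append(poisson.pmf(np.arange(max_bin), lam),
                      poisson.sf(max_bin - 1, lam))
    return chisquare(observed, probs * counts.size, ddof=1)[1]

def beta_error(n_years, n_sims=500, alpha=0.05, rates=(6.0, 12.0), seed=0):
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_sims):
        lam = rng.choice(rates, size=n_years)   # mixture of two rates
        counts = rng.poisson(lam)
        if gof_pvalue(counts) >= alpha:         # failed to reject
            failures += 1
    return failures / n_sims
```

Comparing `beta_error` for a small sample against a large one, under whatever alternative one considers plausible, is the kind of estimate a reviewer could have asked for.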

Also, why didn't they make him repeat with similar binning for pre/post aircraft sampling? Can the reader tell whether one might not get just a change in the Poisson distributions based on that sort of binning?

For 2008, guess the actual average of the last five years. Keep doing that, and your error should be pretty low over time.

Then you can say your longer term record is very good.

#67. I also think the observations, instruments, and logs of 200 years ago are pretty good. I believe the issue is the presence of ships to observe storms. Sailing and fishing vessels avoided unproductive routes and areas, which left large areas of ocean empty and meant many small and short-duration storms went undetected. I suspect ships were even more cautious about their routes during the peak storm months.

From all of the evidence that I have seen, historical instruments, while not as accurate as today's, were still pretty darn accurate.

The idea that 200 years ago instruments routinely mismeasured wind speed by multiple knots is utterly ridiculous.

Additionally, from the data that Anthony Watts has been gathering, it might be safe to claim that back then, they took better care of their instruments as well.

Rock paper scissors.

The issue has moved beyond public information to public indoctrination.

GUESS LOW for 2008! And blame the excess on AGW. GUESS really low and say that AGW has effed up our understanding, that we are in uncharted waters.

Guessing high and being wrong has no political force. Guess low, blaming CO2 for your missed putt... is genius. CO2 is warming the climate and confounding our ability to predict disaster. It's a doubly disastrous disaster.