When I looked at the fits of the 6 global basins for annual Category 4 and 5 hurricane counts (Cat45) to a Poisson distribution using the goodfit function in R, I was surprised that the fits did not have larger p values (a better fit). When I ran Monte Carlo simulations of random Poisson distributions with a lambda value approximating the lambda of the basin Cat45 counts, I also obtained low p values. After consulting RomanM, I found that the binning used by the goodfit function does not pool the end bins with low expected counts (fewer than the prescribed 5).
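A minimal sketch (my own illustration, not the author's code) of the binning problem described above: with a typical basin lambda near 3.5 and only 24 years of data, the raw Poisson cells in the tail expect far fewer than 5 counts each, which distorts the chi-square statistic; pooling adjacent cells restores bins with adequate expected counts.

```r
set.seed(1)
x <- rpois(24, lambda = 3.5)                 # 24 simulated annual counts
obs <- as.numeric(table(factor(x, levels = 0:10)))

lam <- mean(x)                               # ML estimate of the Poisson lambda
p.raw <- dpois(0:10, lam)
p.raw[11] <- p.raw[11] + ppois(10, lam, lower.tail = FALSE)  # fold upper tail in

round(24 * p.raw, 2)   # raw cells: the tail cells expect well under 5 counts

# Pool cells 0-2, 3-4 and 5+ so every bin expects roughly 5 or more counts
obs.m <- c(sum(obs[1:3]), sum(obs[4:5]), sum(obs[6:11]))
p.m   <- c(sum(p.raw[1:3]), sum(p.raw[4:5]), 1 - sum(p.raw[1:5]))
chisq.test(obs.m, p = p.m)$p.value
```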

I redid my Poisson fits for Cat45 counts from the IBTrACS data series, using the 10-minute observed maximum wind speed converted to 1-minute using the factor 1.13, and the average maximum wind speed of the sources used by IBTrACS. The period was 1984-2007, as this is the period with the most consistent observational criteria.
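A sketch of the wind-speed conversion described above (my illustration; the threshold link is my reading, not stated in the post): multiplying a 10-minute sustained wind by 1.13 approximates the 1-minute wind, so the 100 kt cutoff applied to 10-minute IBTrACS winds in the code below corresponds to about 113 kt 1-minute, near the Saffir-Simpson Category 4 lower bound.

```r
# Convert a 10-minute sustained wind (kt) to an approximate 1-minute wind (kt)
to_1min <- function(v10_kt) 1.13 * v10_kt

to_1min(100)   # ~113 kt: a storm at 100 kt (10-min) is borderline Category 4
```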

I simulated 10,000 Poisson distributions for lambdas approximating the lambdas from the basin Cat45 counts and obtained a distribution of p values for the fit of those distributions to a Poisson distribution. Before the simulations I determined the optimum binning strategy, i.e., the one giving the largest number of bins with at least 5 counts per bin. I then used that binning for the Cat45 counts for the 6 basins and determined the p values for a Poisson fit.
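The post does not give the algorithm behind the binning step, so the helper below is my own sketch of one plausible approach: greedily merge adjacent Poisson cells from the left until each bin's expected count reaches 5, folding any underfull remainder into the final bin.

```r
# Pool Poisson cells 0..kmax (plus the upper tail) into bins whose expected
# counts, for n observations, are each at least min.exp
bin_poisson <- function(lambda, n, kmax = 10, min.exp = 5) {
  p <- dpois(0:kmax, lambda)
  p[kmax + 1] <- p[kmax + 1] + ppois(kmax, lambda, lower.tail = FALSE)
  bins <- list()
  cur <- integer(0)
  for (i in seq_along(p)) {
    cur <- c(cur, i)
    if (n * sum(p[cur]) >= min.exp) {
      bins[[length(bins) + 1]] <- cur
      cur <- integer(0)
    }
  }
  if (length(cur) > 0) {
    if (length(bins) == 0) return(list(cur))   # nothing reached min.exp
    bins[[length(bins)]] <- c(bins[[length(bins)]], cur)  # merge underfull tail
  }
  bins
}

bin_poisson(3.5, 24)   # bins of cell indices (cell i holds count i - 1)
```

For lambda = 3.5 and 24 years this yields three bins, the same bin count the listed code uses, though the exact cell boundaries depend on the merging rule chosen.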

I have listed the generic R code below, along with a table summarizing the results. The table shows the basins: Western Pacific (WP), Eastern Pacific (EP), North Atlantic (NATL), South Pacific (SP), South Indian (SI) and North Indian (NI); the p values for the Poisson fit (p); and the percentage of simulated distributions, with a lambda approximating that of the basin, that had p values less than the one calculated for that basin (%D). I guess, in order to avoid a "too good to be true" result, I would have preferred those fits to be closer to 50% of the distribution.

Seeing all these basins with these levels of apparent fit to a Poisson for Cat45 counts leads me to conclude that these hurricanes result primarily from a random gathering of conditions and cannot be associated strongly with SST. I suspect that the next logical step would be to fit a Poisson model that includes possible covariates such as SST and wind fields, but at this point I do not see how the fits would improve.
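The "next logical step" mentioned above would be a Poisson regression of annual counts on a covariate. The sketch below shows the model form only, on simulated data with no built-in SST effect; the post reports no such fit and all names here are my own.

```r
set.seed(42)
years <- 1984:2007
sst   <- rnorm(length(years))                # hypothetical annual SST anomalies
cat45 <- rpois(length(years), lambda = 3.5)  # counts generated with NO SST effect

# Poisson regression: log(E[count]) = b0 + b1 * SST
fit <- glm(cat45 ~ sst, family = poisson)
summary(fit)$coefficients["sst", ]           # slope should be near 0 here
```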

# Simulate 10,000 Poisson samples (n = 24 years, lambda = 3.5) and collect
# the chi-square goodness-of-fit p values for the pooled binning
library(vcd)

z = numeric(10000)

for (i in 1:10000) {
  X = rpois(n = 24, lambda = 3.5)
  Xtab = table(factor(X, 0:10))
  ObX = as.numeric(Xtab)
  # Pool counts into bins 0-2, 3-4 and 5+ so each bin expects at least 5
  Xo = c(sum(ObX[1:3]), sum(ObX[4:5]), sum(ObX[6:11]))
  gfX = goodfit(X, type = "poisson", method = "ML")
  Lambda = gfX$par
  L = as.numeric(Lambda)
  Exp = dpois(0:10, L)
  Xe = c(sum(Exp[1:3]), sum(Exp[4:5]), 1 - sum(Exp[1:5]))
  z[i] = chisq.test(Xo, p = Xe)$p.value
}

hisz = hist(z, breaks = 20, plot = FALSE)

EP = read.csv("WP", skip = 1)

# Maximum wind per storm (column 13) by year (column 2) and storm number (column 3)
EPmax = aggregate(EP[, 13], by = list(Year = EP[, 2], Number = EP[, 3]), FUN = max)

# Flag storms with maximum wind above 100 kt in years after 1980 as Cat45
EPcat = ifelse(EPmax[, 3] > 100, no = "No",
               yes = ifelse(EPmax[, 1] > 1980, yes = "Cat45", no = "No"))

EPcat2 = cbind(EPcat, EPmax[, 1])
Cat45 = EPcat2[EPcat2[, 1] == "Cat45", ]
X = as.numeric(Cat45[, 2])

# Annual Cat45 counts for 1981-2007, then subset to 1984-2007
EP45 = hist(X, breaks = 1980:2008, plot = FALSE)
EPct81_07 = EP45$counts[1:27]
EPct84_07 = EPct81_07[4:27]

# Fit the 1984-2007 counts to a Poisson with the pooled binning
X = EPct84_07
Xtab = table(factor(X, 0:10))
ObX = as.numeric(Xtab)
Xo = c(sum(ObX[1:3]), sum(ObX[4:5]), sum(ObX[6:11]))

library(vcd)

gfX = goodfit(X, type = "poisson", method = "ML")
Lambda = gfX$par
L = as.numeric(Lambda)
Exp = dpois(0:10, L)
Xe = c(sum(Exp[1:3]), sum(Exp[4:5]), 1 - sum(Exp[1:5]))

chisq.test(Xo, p = Xe)$p.value

High-confidence forecasts may be developed via

* the latest long-term atmospheric prognostications

* recent forecasts and reasoning from veteran seasonal forecasters

* various peer-reviewed articles on hurricane trends

* recent 150m Atlantic SST anomaly charts

* a nice Merlot or Cabernet Sauvignon

This year our contest will cover July 1 through November 30, to align with the UK Met Office forecast period. June activity is excluded.

The contest winners will be those who correctly forecast the seasonal ACE category. The five categories are:

* Well below average (lowest 20% of Atlantic season ACEs, an ACE range of 0 to 40)

* Below average (next 20%, covering 40 to 85 ACE)

* Average (85 to 100)

* Above average (100 to 150)

* Well above average (150+)
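For anyone scoring entries programmatically, here is a small helper (my own, not part of the contest rules) that maps a seasonal ACE value onto the five categories above; boundary values are assigned to the higher category, which is my choice since the rules do not say.

```r
# Map a seasonal ACE value to its contest category
ace_category <- function(ace) {
  cut(ace, breaks = c(0, 40, 85, 100, 150, Inf),
      labels = c("Well below average", "Below average", "Average",
                 "Above average", "Well above average"),
      right = FALSE, include.lowest = TRUE)
}

ace_category(c(35, 90, 160))
```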

Since we’ll likely have multiple category winners, please also offer your forecast for the number of named Atlantic storms so as to possibly become our Grand Winner. Tropical systems only; subtropical ones do not count.

A sample entry is, “Above-average ACE with 11 named storms”.

Get your entry in soon!!! No knowledge is necessary, in fact it may get in the way. Winners will be immortalized via an end-of-season Certificate of Accomplishment.
