Here is the closest match in the NHC database.

Hurricane Brenda passed 180 n. miles north of Bermuda on June 23, 1968 heading east. Nothing earlier in the season.

Perhaps there is a ship history or website online.

I still maintain that when looking for small changes (less than 2 sigma) in trends of time series with noisy data, the well-established industrial technique of Exponentially Weighted Moving Average (EWMA) control charts should be used. When the variable of interest is shown to follow a Poisson distribution, Poisson CUSUM charts should be used instead. They would also be useful for showing the effects of attempts to detrend data.
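As a rough illustration of the EWMA control-chart idea, here is a minimal sketch in Python. Everything in it is made up for the demo: the series, the in-control mean and sigma, the smoothing constant lambda = 0.2, the limit multiplier L = 3, and the simulated 1.5-sigma mean shift.

```python
import math

def ewma_chart(data, mean, sigma, lam=0.2, L=3.0):
    """Classic EWMA control chart.

    z_i = lam*x_i + (1-lam)*z_{i-1}, with time-varying limits
    mean +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2i))).
    Returns (ewma values, 0-based indices that breach the limits).
    """
    z = mean
    zs, signals = [], []
    for i, x in enumerate(data, start=1):
        z = lam * x + (1.0 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i)))
        zs.append(z)
        if abs(z - mean) > half_width:
            signals.append(i - 1)
    return zs, signals

# Synthetic series: 30 in-control points wiggling around 10,
# then a sustained +1.5-sigma shift that the chart should flag.
data = [10.0 + (0.5 if i % 2 else -0.5) for i in range(30)]
data += [11.5 + (0.5 if i % 2 else -0.5) for i in range(30)]
zs, signals = ewma_chart(data, mean=10.0, sigma=1.0)
print("first out-of-control point:", signals[0])
```

The point of the chart is exactly the use case described above: a shift well under 2 sigma per observation is invisible point by point, but the exponentially weighted average accumulates it and crosses the control limit a few observations after the change.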

DeWitt, I do not disagree with your methods, but my main concern with looking at TC/hurricane counts and TC activity in general over time has been the need, in my mind, to remove cyclical effects like the AMM, Nino and the AMO, other effects like changing detection capabilities and SST, and any confounding of detection capabilities with changes in SST. I would assume that if these effects are removed with reasonable completeness, the count distributions should fit significantly more closely to the expected Poisson distribution, and that is basically what I have found. The ACE index with similar corrections fits better to a normal distribution, as one would expect for that variable.

By the way, when I look at unadjusted raw TC counts versus SST, there are long periods with little or no correlation. For example, over 1940-1990 the NATL TC count versus SST shows a trend of 2.3 counts per degree C, but with an R^2 = 0.04; compare that to 1870-2006, where the trend is 6.3 counts per degree C with an R^2 = 0.31. I would say that observation is consistent with SST being confounded with some other influential variable.

I’m not sure how you can use a Poisson distribution to determine the association between two time series. You lost me there. I probably require further explanation.

Poisson regression simply assumes the distribution of errors is Poisson, as opposed to standard linear regression, which assumes a normal distribution. It is easy to specify using R’s glm. Once you specify a Poisson error structure, the dependent variable is regressed onto the independent variable as per usual.
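In the same spirit as R’s `glm(..., family = poisson)`, here is a self-contained Python sketch that fits the one-predictor model by Newton-Raphson on the Poisson log-likelihood. The data are synthetic (counts generated from a known rate), purely to show the mechanics:

```python
import math

def poisson_regression(x, y, iters=25):
    """Fit log(E[y]) = b0 + b1*x by Newton-Raphson on the Poisson
    log-likelihood (equivalent to what glm(family = poisson) does)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        lam = [math.exp(b0 + b1 * xi) for xi in x]
        # Gradient of the log-likelihood.
        g0 = sum(yi - li for yi, li in zip(y, lam))
        g1 = sum(xi * (yi - li) for xi, yi, li in zip(x, y, lam))
        # Fisher information (negative Hessian), a 2x2 matrix.
        h00 = sum(lam)
        h01 = sum(xi * li for xi, li in zip(x, lam))
        h11 = sum(xi * xi * li for xi, li in zip(x, lam))
        det = h00 * h11 - h01 * h01
        # Newton step: beta += inverse(information) * gradient.
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Synthetic counts whose expected rate rises with the covariate:
# E[y] = exp(1.0 + 0.5*x), rounded to whole counts.
x = [i / 10.0 - 1.0 for i in range(21)]          # -1.0 .. 1.0
y = [round(math.exp(1.0 + 0.5 * xi)) for xi in x]
b0, b1 = poisson_regression(x, y)
print(b0, b1)  # near the generating values 1.0 and 0.5
```

Because of the log link, the fitted rate is always positive, which is the structural reason this is preferred over ordinary linear regression for count data.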

I understand Climate Audit is one of the major AGW denial blogs.

I think my statistical observations stand (and they are confirmed by Mann and Sabbatelli, who would never, ever be accused of being denialists): one should not use a linear regression when the dependent variable is shown to follow a Poisson distribution over time while the independent variable no doubt follows a normal distribution. It would not matter to the statistics how the distribution was derived. I think you need to read the paper I linked. The authors attempt to make the case that removing the state variables SST and the associated NATL climate cycles from the NATL TC counts significantly improves the fit of the counts to a Poisson distribution. It is my judgment that changes in SST are being confounded with changes in detection capabilities, and I have obtained some substantial improvement in the Poisson fits for TC and hurricane counts by looking at Easy to Detect storms (to remove detection changes) and AMM cycles in the MDR for NATL TCs. It is also interesting to note that hurricanes (which should have been easier to detect going back in time) show a smaller trend with time, and with much reduced R^2, than TC counts. Using ACE for TCs west of longitude 60W (easier to detect going back in time) shows no trend over time, while ACE east of 60W (more difficult to detect going back in time) shows a significant trend.

Moving averages in my view are simple techniques acting as a visual aid for graph readers when a trend is involved. Statistically they will not tell you any more than a regression will about a trend; they just sometimes make it easier to see. If a time series has so much noise that an MA is needed to see a trend, that is also telling us there is much noise, i.e., other effects are acting to significant extents. For TC and hurricane counts I am assuming that the slate is wiped clean at the end of the old season and a new one appears for the current season. If this is the case, then one would have a difficult time explaining lag effects, or even justifying the use of MAs. Your SST and TC count MA dips and valleys do not correspond well, as one might expect if SST were being confounded with detection capability changes.

Joseph, I would feel better about your residual methods if you could provide me with a literature reference to a method applied the way you applied yours.

which you will find here:

http://residualanalysis.blogspot.com/2008/07/hurricanes-and-global-warming-revisited.html

Kenneth: I’m not sure how you can use a Poisson distribution to determine the association between two time series. You lost me there. I probably require further explanation.

You also didn’t comment on the apparent 1 year lag.

I neglected to note previously that if one uses residual TC or hurricane counts and those counts fit a Poisson distribution, then one cannot properly regress residual counts versus residual SST using linear regression. I have listed excerpts below from a Sabbatelli paper that describes the use of generalized linear models and maximum likelihood estimates of the regression parameters. The authors also talk about SST autocorrelation and the need to adjust the statistics for that relationship.

I am sufficiently curious that I plan to simulate counts drawn from a perfect Poisson distribution, pair them randomly with the SST residuals, and then regress linearly, to determine the probability of obtaining a substantial positive trend of residual count versus residual SST with an R^2 between 0.05 and 0.10.
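A simulation along those lines can be sketched as follows. Everything here is synthetic: the stand-in "SST residuals" are Gaussian noise, the Poisson mean of 10 is arbitrary, and the trial count is just for illustration; under the null the counts are independent of SST, so the fraction of trials landing in the stated window estimates how often such a result arises by chance.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's method for sampling a Poisson(lam) count."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def ols(x, y):
    """Ordinary least squares: return (slope, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    r2 = sxy * sxy / (sxx * syy) if syy > 0 else 0.0
    return slope, r2

rng = random.Random(42)
n = 137                                          # years 1870-2006
sst = [rng.gauss(0.0, 0.1) for _ in range(n)]    # stand-in SST residuals

# Null hypothesis: counts are Poisson and independent of SST.
trials = 2000
hits = 0
for _ in range(trials):
    counts = [poisson_draw(10.0, rng) for _ in range(n)]
    slope, r2 = ols(sst, counts)
    if slope > 0 and 0.05 <= r2 <= 0.10:
        hits += 1
print("fraction of null trials with positive slope and R^2 in [0.05, 0.10]:",
      hits / trials)
```

With 137 points, R^2 = 0.05 already corresponds to a correlation of about 0.22, so under the null this window should be hit only a fraction of a percent of the time.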

In the paper titled and linked below, “The influence of climate state variables on Atlantic Tropical Cyclone occurrence rates,” by Thomas A. Sabbatelli and Michael E. Mann, published September 15, 2007, the authors note that:

Any statistical approach to analyzing TC counts must respect the Poisson distributional nature of the underlying process (that is, that TC counts are characterized by a point process with a low occurrence rate). Our first approach employs Poisson regression [see e.g., Elsner et al., 2000, 2001; Elsner, 2003; Elsner and Jagger, 2006], a variant on linear regression which is appropriate for modeling a conditional Poisson process in which the expected occurrence rate co-varies with some set of state variables (e.g., indices of ENSO, the NAO, and MDR SST)…

..Poisson regression is a variant on linear regression appropriate for data such as TC counts for which the null hypothesis of a Poisson distribution is appropriate [see Elsner et al., 2000, 2001; Elsner, 2003; Elsner and Jagger, 2006 for further discussion]. Given a count series Y with unconditional mean rate m believed to follow a state dependent Poisson distribution, Poisson regression estimates a generalized linear model for the conditional expected rate of occurrence..

.. Unlike ordinary linear regression, a closed-form analytical solution to equation (5) is not possible. However, it is straightforward to numerically estimate maximum likelihood values for the regression parameters..
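For reference, the generalized linear model the excerpts refer to is conventionally written with a log link; this is textbook Poisson-regression notation, not a quotation from the paper:

```latex
% Conditional Poisson model for annual TC counts Y_t,
% given state variables x_{1,t}, ..., x_{k,t}:
%   Y_t ~ Poisson(mu_t), with
\log \mu_t = \beta_0 + \beta_1 x_{1,t} + \cdots + \beta_k x_{k,t}
```

so the expected occurrence rate mu_t co-varies with the state variables, and the betas are found by numerically maximizing the Poisson likelihood, as the excerpt says.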

I looked at the residuals from a sixth-order polynomial fit to TC counts to determine how well they might fit a Poisson distribution over the period 1870-2006. The distribution using the raw TC counts over this period yields a chi-square goodness of fit of p = 0.00, while using the residuals gives a fit with p = 0.59. (I had to pick a zero point for the residual counts, since a Poisson distribution cannot have negative values; for that I added the mean of the raw counts to the residual counts.) This fits with my previous analyses using Easy to Detect storm counts to remove trends due to detection changes and the positive or negative AMM phase to remove some of the cyclical component in storm counts.
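A chi-square goodness-of-fit test against a Poisson can be done several ways; one self-contained variant is a parametric bootstrap, where the p-value comes from re-simulating Poisson data rather than from chi-square tables. This is a sketch, not necessarily the exact procedure used above, and the helper names are my own:

```python
import math
import random

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def chisq_stat(counts, lam):
    """Chi-square discrepancy between the observed histogram of counts
    and the Poisson(lam) expectation, with the upper tail as one bin."""
    n = len(counts)
    kmax = max(counts)
    stat = 0.0
    for k in range(kmax + 1):
        obs = sum(1 for c in counts if c == k)
        exp = n * poisson_pmf(k, lam)
        if exp > 0:
            stat += (obs - exp) ** 2 / exp
    # Tail bin above kmax: observed is 0, so (0 - tail)^2 / tail = tail.
    tail = n * (1.0 - sum(poisson_pmf(k, lam) for k in range(kmax + 1)))
    return stat + tail

def poisson_gof_p(counts, sims=500, seed=1):
    """Parametric-bootstrap p-value for 'counts are Poisson distributed'."""
    rng = random.Random(seed)
    lam = sum(counts) / len(counts)
    observed = chisq_stat(counts, lam)

    def draw():
        limit, k, p = math.exp(-lam), 0, 1.0
        while p > limit:          # Knuth's Poisson sampler
            k += 1
            p *= rng.random()
        return k - 1

    exceed = 0
    for _ in range(sims):
        sim = [draw() for _ in range(len(counts))]
        if chisq_stat(sim, sum(sim) / len(sim)) >= observed:
            exceed += 1
    return exceed / sims
```

A high p-value (as with the residual counts above) means the histogram is consistent with a Poisson; data with far too little spread, for instance, come back with p near zero.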

I also noted that the TC count residuals from the sixth-order polynomial have seven high-leverage points, all of which are the highest of all the count residuals. Two of the seven had corresponding SST residuals that were negative (low ranking), while the other five had corresponding SST residuals in the 75th percentile or higher; still, these apparent count outliers did not have corresponding outlier SSTs.

As a further check on the residual method I used the 1870-2005 hurricane count time series with a sixth-order polynomial. The results, listed below, show that the residual method gives a positive trend for hurricane counts with SST and very low R^2 values. The fit of the hurricane count residuals to a Poisson distribution is very good, which is again in agreement with previous analyses I made using a positive and negative AMM index. I need to analyze the residual hurricane counts in more detail, as I did for the residual TC counts.

Raw hurricane count versus time:

1870-2006: R^2 = 0.04 and trend = 1.3 counts per century.
1870-1940: R^2 = 0.06 and trend = -3.1 counts per century.
1941-2006: R^2 = 0.03 and trend = 2.2 counts per century.

Hurricane residual counts versus SST residuals (the sixth-degree polynomial fit to hurricane counts itself has R^2 = 0.18):

1870-2006: R^2 = 0.08 and trend = 2.7 counts per degree C.
1941-2006: R^2 = 0.09 and trend = 3.3 counts per degree C.

Poisson chi-square goodness of fit for hurricane counts, 1870-2006:

Raw counts: p = 0.15
Residual counts: p = 0.95

With p = 0.95 for the fit of the residual hurricane counts to a Poisson distribution, I see little room for an actual hurricane count to SST relationship.

What bothered me about using the residual TC counts (I still need to analyze the residual hurricane counts) is that the influential points in the regression of residual counts versus residual SST were all on the high side of the residual counts, and a smooth curve like a sixth-degree polynomial cannot come close to connecting with these blips. One might expect the corresponding SST values to be at the extreme end of their range, but that was not the case, even though they were on average on the higher side.

You are correct that it loses statistical significance, even though there’s still a trend in the expected direction. I get a slope of 1.2. My initial impression was that either we’ve in fact controlled for a subtle coincidence with the closer fit (and there’s really no association), or my conjecture that closer fits better control for coincidences is mistaken, or rather, in trying too hard to control for coincidence we’re adding noise. In fact, ‘This approach has been criticized by some authors because of loss of valuable information in course of such “detrending”.’ [ http://www.statistics.com/resources/glossary/d/dtrndca.php ]

But then I tried the following, and you can try it too with your data. Correlate the temperature residuals with the named storms residuals one year later. The slope I get in this case is 4.892, enough to push it over statistical significance. It’s 3.126 two years later, and it deteriorates after that, although there’s somewhat of a cycle, which I think is expected due to autocorrelation.

I can’t really explain why there would be an effect 1 year later, but I’ve found this to be the case in a few different analyses.
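The lagged comparison described here is easy to reproduce on any pair of residual series. The sketch below uses made-up series in which y is built to respond to x one step later, just to show the mechanics of shifting one series before regressing:

```python
def lagged_slope(x, y, lag):
    """OLS slope of y[t + lag] regressed on x[t]."""
    n = len(x) - lag
    xs, ys = x[:n], y[lag:]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sxx

# Made-up residual series: y responds to x with a one-step delay
# (true lag-1 slope of 4, plus a small deterministic wiggle).
x = [((i * 7) % 11 - 5) / 5.0 for i in range(40)]
y = [0.0] + [4.0 * xi + 0.1 * ((i % 3) - 1) for i, xi in enumerate(x[:-1])]

slopes = {lag: lagged_slope(x, y, lag) for lag in (0, 1, 2)}
print(slopes)  # the lag-1 slope stands out near 4
```

Note that because annual residual series are autocorrelated, neighboring lags pick up some of the same signal, which is consistent with the cycle described in the comment above.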

When you do the sixth-order fits, do you see that the shapes of the fits match between the trends? Don’t you find that interesting?

I also did the analysis with a standard linear detrending starting in 1900 (in order to right censor data that is bad, as suggested by Bob Koss). In this case the effect found is about 6 more storms for every 1 degree.