Southern Hemisphere Hurricanes

The other day we discussed missing data in the Northern Indian Ocean, where the main best tracks archive contained storm track data up to the mid-1970s without accompanying wind speed estimates, together with a sharp decline in recorded storm tracks in the 1970s.

In his comment on Webster, Curry et al, William Gray observed that there were data problems in the Southern Indian Ocean and South Pacific as well. Webster, Curry et al acknowledged the problem with respect to the North Indian Ocean in their Reply to Gray (available on the internet but never published in a journal), but denied that it was material, since North Indian Ocean storms are only a small fraction of total storm activity.

Today I want to look at the situation in the Southern Hemisphere, where the majority of storms occur in the South Indian Ocean.

When I collated SH hurricane data, I was surprised to find that, although all the other data sets used by Webster, Curry et al matched the data sets at Unisys, this was not the case for the SH. Here the Webster Curry data set differed substantially from 1970 (the start of their data as used) until 1983, when the two data sets begin to match closely. Figure 1 below shows that the best tracks data had a substantial number of track location measurements without wind speed estimates prior to 1983 (as we observed in the Northern Indian Ocean), while the Georgia Tech data had virtually no such examples.


Figure 1. Number of storm track measurements with no wind speed estimates. Orange – SIO and SPac; green – NIO; red – Georgia Tech SIO-SPac combined, showing virtually no such examples.
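For readers who want to replicate this sort of tally, here is a minimal sketch of how one might count track fixes that lack a wind speed estimate, by year. The file name and column names ("year", "wind_kt") are hypothetical placeholders, not the actual layout of the Unisys or Georgia Tech files.

```python
# Minimal sketch: count 6-hourly track fixes with no wind speed estimate, by year.
# The file name and column names ("year", "wind_kt") are hypothetical; adjust them
# to whatever layout the archive you download actually uses. Missing winds are
# assumed to be coded as NaN or as zero/negative placeholder values.
import pandas as pd

def missing_wind_counts(csv_path: str) -> pd.Series:
    """Return, for each year, the number of track fixes with no wind estimate."""
    df = pd.read_csv(csv_path)
    no_wind = df["wind_kt"].isna() | (df["wind_kt"] <= 0)
    return no_wind.groupby(df["year"]).sum()

# counts = missing_wind_counts("sh_best_tracks.csv")   # hypothetical file
# counts.plot(kind="bar")
```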

On their website (but not in the body of the original article), Webster, Curry et al report the following:

The southern hemisphere track data of 1970 – 2002 are provided by Charlie Neumann and are made up of a reanalysis of available data from Australia, Fiji and JTW.

I have been unable so far to locate any publication describing Neumann’s reanalysis. However, whatever the merits of this particular reanalysis, one can reasonably say that the Neumann wind speed estimates are not homogeneous with the post-1983 SH record. Now on to the SH information itself.

Figure 2 is an excerpt from Webster Curry et al Figure 3; the number of SH hurricanes and the number of SH hurricane days are shown in blue in the left panel. The coloring in the right panel is inconsistent, as both the unsmoothed SH and WPac series are plotted in black, but the SH annual values are the lower of the two and are tracked by the heavy blue smooth.


Figure 2. Excerpt from Webster Curry et al 2005 Figure 3.

Next, Figure 3 shows the number of actual SH measurements in the two archives, divided by 4, bringing the series forward from 2004 to 2006-to-date (though November-December are small months in the SH). Black shows the information archived at Unisys; red dashed shows the information used by Georgia Tech, demonstrating from a slightly different viewpoint the number of Neumann measurements included in the Georgia Tech data set.

It is also interesting to note the decline from 2004 to 2006, where the number of measurements is at rather low levels. If you squint, you can perhaps discern the pattern of the red dashed line (number of measurements divided by 4) in the number of SH hurricane-days in the right panel of the Webster Curry figure. There is a slight difference in level, owing presumably to some difference in cut-off point.

Webster Curry et al observed that there was no increase in the number of hurricane-days, but perhaps could have drawn more attention to the decline in hurricane-days from the peak in the 1990s to 2004 (which has continued to 2006).


Figure 3. Number of archived SH wind speed measurements. Black – Unisys archive; red dashed – Georgia Tech version; blue – update to 2006 collated from Unisys data. I’ve collated individual storms for 2003-2006, interpolated them to 6-hour intervals and used this information to update the Best Tracks database, using this collation from 2003-2006 for both data sets.
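As an aside on the mechanics, the sketch below shows the two bookkeeping steps involved, under assumed data layouts: interpolating a storm's fixes onto a regular 6-hour grid, and converting counts of 6-hourly measurements into "days" by dividing by 4. The column names and the linear-in-time interpolation are my assumptions, not necessarily the archives' own procedure.

```python
# Sketch of the bookkeeping described above, under assumed data layouts:
# (1) interpolate one storm's fixes onto a regular 00/06/12/18Z grid, and
# (2) convert a count of 6-hourly measurements into "storm-days" (divide by 4).
import pandas as pd

def to_six_hourly(track: pd.DataFrame) -> pd.DataFrame:
    """Interpolate one storm's fixes (DatetimeIndex; columns such as 'lat',
    'lon', 'wind_kt') onto a 6-hour grid. Linear-in-time interpolation is an
    assumption on my part, not necessarily what the archives themselves do."""
    grid = pd.date_range(track.index.min().ceil("6h"),
                         track.index.max().floor("6h"), freq="6h")
    return track.reindex(track.index.union(grid)).interpolate(method="time").loc[grid]

def storm_days(n_six_hourly_fixes: int) -> float:
    """Four 6-hourly fixes make one storm-day."""
    return n_six_hourly_fixes / 4.0
```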

The Interesting Part

Obviously the Neumann estimates are constructed differently than the post-1983 measurements, but how can one evaluate the amount of potential non-homogeneity in the Neumann estimates?

As an experiment, I calculated the total number of wind speed measurements (as illustrated in Figure 3) plus the number of track measurements without wind speeds (as illustrated in Figure 1) – hypothesizing that the track locations without wind speed estimates would nearly all have had winds meeting storm benchmarks – a hypothesis which seems quite plausible to me based on admittedly limited experience with this data.
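A minimal sketch of that tally, assuming the same kind of hypothetical flat table as in the earlier sketches (one row per 6-hourly fix, with "year" and "wind_kt" columns):

```python
# Sketch of the Figure 4 tally under the hypothesis in the text: every 6-hourly
# track fix counts toward storm-days, whether or not it carries a wind estimate.
import pandas as pd

def combined_track_days(df: pd.DataFrame) -> pd.Series:
    """Per-year (fixes with a wind estimate + fixes without one) / 4."""
    has_wind = df["wind_kt"].notna() & (df["wind_kt"] > 0)
    with_wind = has_wind.groupby(df["year"]).sum()
    without_wind = (~has_wind).groupby(df["year"]).sum()
    return (with_wind + without_wind) / 4.0   # four fixes per day
```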

Figure 4 shows that there is now a very big difference between the two data sets. In one case (Unisys), there is a strong decline from 1970s levels to new, lower levels in the 1980s; in the other (Georgia Tech) case, the levels remain fairly steady. In addition to the combined storm track information, I’ve also shown plots of measurements meeting hurricane levels (everything divided by 4 to match "days") and, in purple, the number of Cat4/5 measurements (divided by 4).


Figure 4. Count of measurements plus track location without wind speed measurement. Number of measurements meeting hurricane levels (blue) and Cat4/5 levels (purple). Total number of measurements divided by 4 to match number of "days". Left – Unisys archive; right – Georgia Tech.

There is either a real decline in the number of storm-days or an inhomogeneity in the number of recorded storm track locations. To establish that the Neumann estimates are valid would require a demonstration that there was a non-homogeneity in decisions to record storm track locations. This is quite possible.

However, the onus is on Webster, Curry et al to demonstrate the existence of this non-homogeneity. The Neumann estimates are undoubtedly meritorious, but Webster Curry et al provided no discussion of how these estimates were made. And whatever else one may think of Gray, he has expressed caveats on how much reliance can be placed on pre-1985 estimates – caveats that were rather airily dismissed by Webster, Curry et al. I felt that their Reply did not deal directly with some important issues raised by Gray.

Histograms

As a closing illustration, here are, first, histograms of all wind speed measurements, separated at 1983, and, below that, the corresponding histograms of all wind speed measurements meeting storm cut-offs. One sees that the distributions are somewhat different, with measurements in the 30-40 knot class less frequent after 1983 and increased proportions in the bins with greater wind speeds.


Figure 5. Histogram of All Wind Speed Measurements (in knots). Left – up to 1983; right – 1984 on.


Figure 6. Same as Figure 5, but limited to wind speeds greater than or equal to 35 knots (18 m s-1).
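For what it's worth, histograms along the lines of Figures 5 and 6 can be produced from a flat table of wind speed estimates roughly as follows (again with hypothetical column names):

```python
# Sketch: side-by-side wind-speed histograms, split at 1983, with an optional
# storm cut-off. Column names ("year", "wind_kt") are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

def split_histograms(df: pd.DataFrame, min_wind: float = 0.0) -> None:
    winds = df[df["wind_kt"] >= min_wind]
    early = winds.loc[winds["year"] <= 1983, "wind_kt"]
    late = winds.loc[winds["year"] >= 1984, "wind_kt"]
    bins = range(0, 200, 10)                       # 10-knot bins
    fig, axes = plt.subplots(1, 2, sharey=True)
    axes[0].hist(early, bins=bins); axes[0].set_title("Up to 1983")
    axes[1].hist(late, bins=bins);  axes[1].set_title("1984 on")
    for ax in axes:
        ax.set_xlabel("Wind speed (knots)")
    plt.show()

# split_histograms(df)               # Figure 5 analogue
# split_histograms(df, min_wind=35)  # Figure 6 analogue (35 kt cut-off)
```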

The statistical problem faced by Webster, Curry et al is to show that this (IMHO) fairly subtle change in distribution is (a) a real change in distribution rather than a stochastic effect and (b) not a product of the inhomogeneity between the Neumann estimates and the best tracks estimates (however those were done). If you re-read Webster Curry et al, I think it would be fair to say that their statistical analysis was unequal to either challenge.
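As a sketch of how challenge (a) might at least be framed, here is a two-sample test on the pre- and post-1983 wind speeds. It says nothing about challenge (b), and serial correlation within storms would have to be dealt with (for instance by testing per-storm maxima rather than every 6-hourly fix), so treat it as illustrative only.

```python
# Illustrative only: a two-sample Kolmogorov-Smirnov test of whether the pre-1983
# and post-1983 wind-speed distributions differ more than chance would allow.
# Serial correlation within storms is ignored here, which overstates the
# effective sample size; per-storm maxima would be a more defensible unit.
from scipy.stats import ks_2samp

def distribution_shift_test(early_winds, late_winds):
    """early_winds, late_winds: arrays of wind speeds (knots)."""
    stat, p_value = ks_2samp(early_winds, late_winds)
    return stat, p_value
```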

I’ll try to re-visit this issue on some later occasion.

21 Comments

  1. David Smith
    Posted Oct 26, 2006 at 6:48 AM | Permalink

    About two-thirds of the SH storms occur in the Southern Indian Ocean (SIO) while the remaining one-third are in the Southwest Pacific. I can understand how historical data from Fiji and Australia might help classify some of the landfalling typhoons in the Southwest Pacific, but I cannot imagine what basis Neumann used for the SIO.

    The SIO is remote and the large majority of storms never touch land. Ships avoid storms. There was no aircraft recon (unless Madagascar had some stealth effort that they’ve now revealed to the world). Satellites, especially early, were problematic and even today the best method is Dvorak, which has its own set of problems.

    Also, a little-noted problem in detecting category 4 and 5 storms is that they are typically short-lived. The typical duration of winds that strong is about two days. So, to detect a category 4,5, one has to be quick.

  2. Steve McIntyre
    Posted Oct 26, 2006 at 7:24 AM | Permalink

    David, I just noticed a data set on Australian landfalls here http://dss.ucar.edu/datasets/ds824.1/inventories/aus_area.inv

    There is some other information in the same directory not discussed so far.

  3. David Smith
    Posted Oct 26, 2006 at 7:59 PM | Permalink

    RE #2

    The Australian database reports that a 1993 typhoon had 765 knot winds, a Mach 1.2 storm. That record was shattered seven years later, with two typhoons peaking with 1,000 knot winds. Those must be the “hypercanes”, super hurricanes hypothesized by Emanuel. Imagine the noise that a 1,000 knot storm makes, with sonic booms from flying coconuts and all that.

    Now that NCAR data shows two 1,000 knot storms, do you think that Emanuel will soon be publishing on this in Science? The data exists, after all.

    Seriously, though, you’d think they’d check their work for typos like these.

    I’m pretty sure that this database covers both landfalling storms and storms that never touched land.

    The data from 1985 onward has pressure estimates as well as wind estimates. The fact that they show pressures indicates that there must have been some methodology used, probably Dvorak techniques. Interestingly, there is no upward trend in category 4 and 5 storms.

    On overall storm count over the past 35 years, the trend is towards fewer storms in the Southwest Pacific.

  4. BradH
    Posted Oct 27, 2006 at 8:09 AM | Permalink

    O Judith, Judith! Wherefore art thou, Judith?
    Defend thy study and support thy data;
    Or, if thou wilt not, I’ll no longer be a Curry-Believer.

  5. BradH
    Posted Oct 27, 2006 at 8:35 AM | Permalink

    And, Judith, be very wary of adopting a, “This has been an interesting experiment, I’m high and mighty, but now I’m done with you plebeians and I will leave you to your bovine ruminations,” attitude.

    Michael Mann treated Steve and Ross with contempt. You treated them and the climateaudit community with condescension – as if we were an experiment for your students.

    Given what Steve has discovered with a relatively cursory examination of your work, you might well live to regret your pomposity.

    Should that prove to be the case…well, you have only yourself to blame.

  6. David Smith
    Posted Oct 27, 2006 at 9:40 AM | Permalink

    In case my remark on Emanuel implies that I have a similar view of Webster-Curry-Hoyos, let me clarify.

    My view of Webster-Curry-Hoyos is that they are above-board scientists who, unbeknownst to them, used data of doubtful quality. They have the major responsibility to understand the limitations of any data they use, and to communicate those limitations, and to acknowledge subsequent discoveries, but I’ve seen nothing to indicate that they tried to mislead anyone.

    They are on a narrow cliff at the moment, and they are defending that cliff, but that’s science, in my opinion. I’m glad that Curry spent some time here, as it made me think.

    My view of Emanuel is different.

  7. David Smith
    Posted Oct 29, 2006 at 10:57 PM | Permalink

    I found some information on the methodology used by Neumann to estimate SH storms, located here.

    The information is not complete, but there is enough to make me believe that Neumann never intended his SH storm data to be used in the way it was used by Webster et al.

  8. Steve McIntyre
    Posted Oct 30, 2006 at 7:06 AM | Permalink

    David, Neumann mentions that the SH historical data tends to have pressure information. I wonder why they didn’t include this when they were making the archived collations.

  9. David Smith
    Posted Oct 30, 2006 at 7:43 AM | Permalink

    That is untrue, unless he is talking about post-1980 storms that had Dvorak satellite estimates of pressure. There were no reconnaissance flights in the SH and round-the-clock Dvorak didn’t come into existence until the early 1980s, I believe. Ships run away and land encounters capture only a small part of the life of a storm.

    I only had time to scan the article last night, but will read it in depth later today. What I read last night tells me that his data should not have been used by Webster.

  10. David Smith
    Posted Oct 30, 2006 at 10:12 AM | Permalink

    The digging is leading back to the Australian Bureau of Meteorology, which has a website here. Still digging.

  11. David Smith
    Posted Oct 30, 2006 at 10:59 AM | Permalink

    Here is an article on some of the intensity data problems.

    Note the satellite images of five NIO storms. They were classified as cat 3 or weaker but, per the images, were cat 4 or 5. Four of the five were pre-1985.

  12. Steve Sadlov
    Posted Oct 30, 2006 at 11:34 AM | Permalink

    Were there RB-29 and RB-47 flights from Diego Garcia? I am thinking there were not; I think DG post-dates that era. Failing that, the only air recon I can imagine would have come from joint US – Australian bases flying out to the West. And those would have ended in the 70s.

  13. David Smith
    Posted Oct 30, 2006 at 6:45 PM | Permalink

    My take is that Neumann used storm data (pressures) provided by Australia, Fiji and Reunion (France). His major contribution was to put the data on an apples-to-apples basis, commonly using a formula to convert pressure data into windspeed, consistent with JTWC practices.
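    For illustration, the kind of pressure-to-wind conversion I have in mind is an Atkinson-Holliday-type relation of the sort historically associated with JTWC practice; whether Neumann used exactly this form and these constants is my assumption, not something I have confirmed.

    ```python
    # Rough sketch of an Atkinson-Holliday-type wind-pressure relation of the kind
    # historically associated with JTWC practice. Whether Neumann applied exactly
    # this form and these constants is an assumption on my part.
    def wind_from_pressure(p_central_hpa: float, p_env_hpa: float = 1010.0) -> float:
        """Estimated maximum sustained wind (knots) from central pressure (hPa)."""
        deficit = max(p_env_hpa - p_central_hpa, 0.0)
        return 6.7 * deficit ** 0.644

    # wind_from_pressure(950) -> roughly 94 kt, i.e. a category 2-3 storm
    ```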

    All that does is move the question upstream. The question is, where did the Australians, Fijians and Reunioners get their pressure values?

    I’ve e-mailed the Australian BOM asking if they can explain the pressure-estimation methodology used in Australia before 1980. The other major party is Reunion (France) but my French is quite poor, so I am trying to think of a third party to ask.

    What I expect to learn is that forecasters made best-guesses of central pressure and intensity pre-1980, based on scattered data from the edge of the storm. The exception would be if the storm overran an island having a barograph. Best-guesses are a poor technique (but better than nothing, usually) and become a real problem if they are grafted onto other techniques like satellite estimates and then used in a search for small trends.

    How small a data change is being sought? Per Steve’s graphs, there are about 100 storm-days a year in the Southern Hemisphere, with about 10 of those being category 4 or 5. Ten percent. What we’re looking for is evidence of about a 25 to 50% increase in cat 4 and 5, or roughly 3 to 5 extra intense storm-days a year, in a mostly-remote 8,000-mile stretch of ocean. Daunting.

    Neumann’s article contains many warnings and caveats about the SH storm intensity data, even in the satellite era. The data was OK for his purposes (calculating hurricane risk at given locations) but my guess is that he’d caution against using it to search for changes in cat 4 and 5 frequency. I wonder if he was asked.

  14. David Smith
    Posted Oct 30, 2006 at 7:32 PM | Permalink

    Here is a map of annual tornado count in the US since 1950. The record spans various detection techniques, from damage reports to police to early radar to Doppler radar to next-generation radar and so forth.

    The experts attribute the rise in recorded tornadoes to improvements in detection and reporting, and not due to actual increases in tornadoes. What a difference improved techniques make in the historical record. This is also true in the world of hurricanes.

    I am glad that (so far) tornadoes have not entered the global-warming arena.

  15. Steve Sadlov
    Posted Oct 30, 2006 at 8:02 PM | Permalink

    RE: #14 – in the days before Doppler radar, there was often lots of confusion in any given damaged area, especially when hit at night, as to whether what hit were freak straight-line winds, a downburst or a lower F-number tornado. Even today, it is not always completely clear, especially when such storms hit in places (such as here in California) which are outside of the well-instrumented Tornado Alley.

  16. Steve McIntyre
    Posted Oct 30, 2006 at 9:25 PM | Permalink

    #13. David, did you ask them for the pressure data? Why not worry about that first? For that matter, maybe Neumann would provide it if you asked him.

  17. Kenneth Blumenfeld
    Posted Oct 31, 2006 at 12:37 AM | Permalink

    14 & 15 (tornado stuff):

    It will be a very long time before we can discuss tornado frequency (or hail or straight-line winds) in the context of climate change. That’s too bad, because it is an interesting question. Tornado winds are not measured; they are inferred from structural damage. On top of that, the F-scale is an uncalibrated scale. Extremely large tornadoes that hit nothing get ranked as F0, even if they had the potential to do tremendous damage. Storm spotters have also helped increase the number of reported tornadoes. A “harmless” tornado passing through a meadow may have been missed 25 years ago if it didn’t bother anyone; very few tornadoes are missed now. Wind speeds from thunderstorms are rarely measured; they are estimated – a big difference. Hail stones are compared to coins or spherical objects (by spotters), then assigned an estimated diameter. There are more spotters near cities, so rural severe weather is under-reported. You wanna talk inhomogeneities? Start here.

  18. David Smith
    Posted Oct 31, 2006 at 5:06 PM | Permalink

    Re #16 I’ve asked for pressure data. This evening I will e-mail Neumann on the same subjects.

    The Australian Bureau of Meteorology has a table here that lists the estimated minimum pressure of each storm in its region, stretching back to circa 1900. It’s not the same as 6-hourly estimates, but it’s still of some use.

    I looked at the minimum pressures of storms on this list from 1970 thru 1979, which is essentially pre-satellite for intensity estimates. It is also a period used by Webster. Here is the distribution:

    1000-1010 mb: 2 storms
    990-999 mb: 34
    980-989 mb: 42
    970-979 mb: 37
    960-969 mb: 19
    950-959 mb: 15
    940-949 mb: 8
    930-939 mb: 8
    920-929 mb: 3
    910-919 mb: 2
    900-909 mb: 0

    Here are the storms for 1990-1999, presumably a predominantly satellite (Dvorak) era:

    1000-1010 mb: 0 storms
    990-999 mb: 17 storms
    980-989 mb: 14
    970-979 mb: 13
    960-969 mb: 13
    950-959 mb: 16
    940-949 mb: 7
    930-939 mb: 7
    920-929 mb: 10
    910-919 mb: 7
    900-909 mb: 1

    Plotted, those distributions look quite different to me. Either (a) global warming has affected the distribution of storm pressures/intensities, (b) the use of different estimation techniques has affected the distributions or (c) some other shift occurred in the data, such as inclusion of storms from a different region. My guess is (b) or (c). If it is (a), then it seems likely that we should find similar shifts in other storm basins. I will be checking for (c).

    (Note: my storm counts are by eyeball-tally, so my counts could be off by a few, but that would not affect the patterns.
    Note: The lower the minimum pressure, the stronger the storm. Category 4 and 5 storms are probably 940 mb and lower. Weak storms are 980 mb and higher.)
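    As a rough check on the eyeball impression, the two binned distributions above can be compared with a chi-squared test along the following lines. My tallies are approximate and the sparse bins at each end are pooled, so treat the result as indicative only.

    ```python
    # Rough chi-squared comparison of the 1970-79 and 1990-99 minimum-pressure
    # distributions tallied above. Sparse bins at each end are pooled so that
    # expected counts are not too small; the tallies themselves are eyeball counts.
    from scipy.stats import chi2_contingency

    # Bins: >=990 mb (pooled), 980s, 970s, 960s, 950s, 940s, 930s, <=929 mb (pooled)
    counts_1970s = [2 + 34, 42, 37, 19, 15, 8, 8, 3 + 2 + 0]
    counts_1990s = [0 + 17, 14, 13, 13, 16, 7, 7, 10 + 7 + 1]

    chi2, p_value, dof, expected = chi2_contingency([counts_1970s, counts_1990s])
    print(f"chi-squared = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
    ```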

  19. Gerald Machnee
    Posted Oct 31, 2006 at 8:07 PM | Permalink

    RE **I am glad that (so far) tornadoes have not entered the global-warming arena.**.
    An Environment Canada scientist did a study on Canadian hurricanes and concluded that warming is not having a significant effect. With about a thousand tornadoes in the USA and under a hundred in Canada, it is difficult to count them accurately, and there can be significant variation from year to year. Then there are still problems in deciding whether some events were tornadoes. However, some have still commented that warming will increase severe weather.

  20. David Smith
    Posted Nov 19, 2006 at 9:58 AM | Permalink

    (The Katrina thread is malfunctioning, at least on my computer, so I’ll borrow this one.)

    Steve M expected, way back in early August, that there would be seven or fewer Atlantic hurricanes in 2006. To date, there have been five, one of which (Ernesto) might be downgraded to a tropical storm in reanalysis.

    I am declaring Steve a winner on his wager, even though there are about 10 days left in the official season. He has beaten the US National Hurricane Center, Bill Gray, me, most private forecasting firms, my local TV weatherman, my neighbor and (I bet) the RealClimate regulars.

    The reason I’m doing this 10 days early is that the Caribbean is about to experience a large intrusion of cool, dry air. That is somewhat unusual for mid-November. Orlando, Florida (DisneyWorld) is forecast to dip to +2C, Miami to +8C and Habana, Cuba to +14C. Miami is even issuing a “wind chill advisory”, I guess for the beachgoers.

    This dry, cool air greatly inhibits the main breeding ground (western Caribbean) for late-season storms.

    We’ll make a (virtual) champagne toast to Steve on December 1st.

  21. bender
    Posted Nov 19, 2006 at 10:30 AM | Permalink

    Several of the hurricane threads are inaccessible since the “CA new look”.