Baby whirls: improved detection of marginal tropical storms

With the North Atlantic hurricane season officially starting in a couple of weeks (June 1), but possibly getting a head start from a developing low-pressure system in the Bahamas, the media will pay considerable attention to each and every storm that gets a name. In the North Atlantic, a name is granted to a tropical or subtropical storm when its sustained winds exceed 34 knots AND the National Hurricane Center declares it so. These storm counts are used for a variety of purposes, including insurance rates and climate change research.

Back in 2007, David Smith described short-lived, generally weak or marginal tropical storms as Tiny Tims. A couple of posts were dedicated to various aspects of the North Atlantic Tiny Tim climatology, including here and here.

One useful baseline is the modern period (the last twenty years, 1988-2007). This is a period of good (and ever-improving) detection tools: advanced satellites, improved recon devices, denser buoy networks, and so forth.

The modern period also matches the 1988-2007 list of Tiny Tim storms. Tiny Tims are storms so weak, small, remote and/or short-lived that there’s no record of ships or land experiencing storm-force winds, yet they were classified as tropical storms. By historical standards these modern Tiny Tims would have been regarded as depressions or disturbed weather, not tropical storms.

In a local Florida newspaper, Chris Landsea describes a new paper he has (re-)submitted to the Journal of Climate along with three other prominent tropical cyclone and/or climate researchers: Gabe Vecchi (NOAA/GFDL), Lennart Bengtsson (Reading, UK), and Thomas Knutson (NOAA/GFDL).

From Kate Spinner’s article online:

Landsea scrutinized the hurricane center’s storm data and corrected for technological advances in hurricane detection and tracking. He concluded that hurricane seasons of the past rivaled today’s activity, suggesting the influence of a periodic climate cycle in the Atlantic, not global warming, is behind the current spike in storms…

Landsea’s new study, currently under review by other scientists, stemmed from his objection to studies in 2006 and 2007 linking the increased number of recorded hurricanes with a rise in global temperatures.

“I did not agree with the studies because I thought their assumption that all the storms were in the database was faulty,” Landsea said.

However, perhaps the most illuminating part of the article is two quotations from Landsea’s “critics”, Michael Mann and Kerry Emanuel, both professors who have published various papers on aspects of Atlantic hurricane climatology. Many of their papers have received a high level of scrutiny here at Climate Audit over the past four years.

Mann disputed Landsea’s research, saying that his technology argument ignores the chance that a single storm could have been counted twice before satellite records could show the exact track. He expressed doubt that the study would pass muster to be published.

Kerry Emanuel, a leading hurricane researcher and professor of atmospheric science at Massachusetts Institute of Technology, said Landsea’s work is scientifically robust, but not as important as looking at whether warming causes hurricanes to gain strength.

“I don’t think the number of storms is a terribly interesting thing,” Emanuel said, emphasizing Atlantic storms now rarely exceed Category 2 strength, but that the majority of damage-inflicting storms are Category 3 or higher. “We’re pretty confident that intensity increases with global temperature. There are arguments about the amount.”

Mann helpfully provides an editorial comment on the likelihood of publication, apparently low in his estimation. Emanuel (who has collaborated with Mann on several hurricane-related papers), by contrast, finds the work scientifically robust but unimportant to the question of global warming. The frequency argument with regard to a relationship with SST has largely died away, with a few exceptions including Holland and Webster (2007) and, remarkably, a paper by Mann and Emanuel (2006) that does precisely what Emanuel now describes as not “terribly interesting”: it correlates historical North Atlantic tropical storm frequency with SST warming and tests the hypothesis that a multi-decadal oscillation (the AMO) drives storm activity.

Here is the abstract of Landsea et al. (submitted):

Records of Atlantic basin tropical cyclones (TCs) since the late-19th Century
indicate a very large upward trend in storm frequency. This increase in documented TCs
has been previously interpreted as resulting from anthropogenic climate change. However,
improvements in observing and recording practices provide an alternative interpretation for
these changes: recent studies suggest that the number of potentially missed TCs is
sufficient to explain a large part of the recorded increase in TC counts. This study explores
the influence of another factor–TC duration–on observed changes in TC frequency, using
a widely-used Atlantic TC database: HURDAT. We find that the occurrence of short-lived
storms (duration two days or less) in the database has increased dramatically, from less
than one per year in the late-19th/early-20th Century to about five per year since about 2000,
while moderate to long-lived storms have increased little, if at all. Thus, the previously
documented increase in total TC frequency since the late 19th Century in the database is
primarily due to an increase in very short-lived TCs.

We also undertake a sampling study based upon the distribution of ship
observations, which provides quantitative estimates of the frequency of “missed” TCs,
focusing just on the moderate- to long-lived systems with durations exceeding two days.
Both in the raw HURDAT database, and upon adding the estimated numbers of missed
TCs, the time series of moderate to long-lived Atlantic TCs show substantial multi-decadal
variability, but neither time series exhibits a significant trend since the late-19th Century,
with a nominal decrease in the adjusted time series.
Thus, to understand the source of the century-scale increase in Atlantic TC counts
in HURDAT, one must explain the relatively monotonic increase in very short duration
storms since the late-19th Century.
While it is possible that the recorded increase in short
duration TCs represents a real climate signal, we consider it is more plausible that the
increase arises primarily from improvements in the quantity and quality of observations,
along with enhanced interpretation techniques, which have allowed National Hurricane
Center forecasters to better monitor and detect initial TC formation, and thus incorporate
increasing numbers of very short-lived systems into the TC database.

The first figure from Landsea’s paper shows the unadjusted frequency of tropical (and subtropical) storms from 1878-2008, which demonstrates the significant upward trend. The second figure shows the frequency of storms lasting longer than 2 days, which no longer has a significant trend. Is it possible that the earlier Tiny Tims were simply lost? We’ll keep our eyes out for more of these “Baby Whirls” and at the same time see whether Landsea’s paper can “pass muster”.
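To make the 2-day duration threshold concrete, here is a minimal sketch of the kind of filter involved, assuming a hypothetical HURDAT-style input (6-hourly maximum-wind fixes in knots for each storm); the storm records below are invented for illustration, not taken from HURDAT or from Landsea’s paper.

```python
# A minimal sketch of a duration-threshold filter (hypothetical input format:
# a dict mapping storm name -> list of 6-hourly maximum-wind fixes in knots).
# The records below are invented for illustration, not real HURDAT data.

def storm_duration_days(fixes_kt, threshold_kt=34, hours_per_fix=6):
    """Days the storm spent at or above tropical-storm strength (34 kt)."""
    return sum(1 for w in fixes_kt if w >= threshold_kt) * hours_per_fix / 24.0

def split_by_duration(storms, cutoff_days=2.0):
    """Partition storms into short-lived ("Tiny Tims") and moderate/long-lived."""
    short = [n for n, f in storms.items() if storm_duration_days(f) <= cutoff_days]
    longer = [n for n, f in storms.items() if storm_duration_days(f) > cutoff_days]
    return short, longer

storms = {
    "TINY": [30, 35, 35, 40, 35, 35, 30],  # five fixes >= 34 kt -> 1.25 days
    "LONG": [35, 40, 45, 50, 50, 45, 40, 35, 35, 35, 35, 35],  # 12 fixes -> 3 days
}
print(split_by_duration(storms))  # -> (['TINY'], ['LONG'])
```

Note that only fixes at tropical-storm strength count toward the duration (the 34-kt threshold above), a point that comes up again in the comments below.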

References:

Holland, G. J., and P. J. Webster, 2007: Heightened tropical cyclone activity in the North Atlantic: natural variability or climate trend? Philos. Trans. R. Soc. A, 365, 2695-2716.

Landsea, C. W., G. A. Vecchi, L. Bengtsson, and T. R. Knutson: Impact of duration thresholds on Atlantic tropical cyclone counts. Submitted J. Climate, May 7, 2009.

Mann, M., and K. Emanuel, 2006: Atlantic hurricane trends linked to climate change. Eos, Trans. Amer. Geophys. Union, 87, 233-241.

55 Comments

  1. John A
    Posted May 18, 2009 at 11:48 PM | Permalink

    “I don’t think the number of storms is a terribly interesting thing,” Emanuel said, emphasizing Atlantic storms now rarely exceed Category 2 strength, but that the majority of damage-inflicting storms are Category 3 or higher. “We’re pretty confident that intensity increases with global temperature. There are arguments about the amount.”

    Parsing this statement, it appears that – snip. Thus the number of storms isn’t interesting when there are very few of them, and when storm intensity declines, as it has, he reasserts his previous belief that warming causes greater intensity and that the question is by how much.
    snip

    • EddieO
      Posted May 19, 2009 at 3:27 AM | Permalink

      Re: John A (#1),
      Can someone explain the logic of Emanuel’s claim that the number of storms is not important?

      He implicitly agrees with Landsea’s findings when he says
      “Atlantic storms now rarely exceed Category 2 strength, but that the majority of damage-inflicting storms are Category 3 or higher”
      Perhaps he is accepting the recent evidence that sea surface temperatures have declined, so we should expect the number of storms to decrease. (See Craig Loehle’s recent paper and Willis et al. 2007, 2008a; Wijffels et al. 2008.)

      Since he also says
      “We’re pretty confident that intensity increases with global temperature. There are arguments about the amount.”
      Is the logical conclusion that the number of intense storms is decreasing, hence the SST must be decreasing?
      Ed

  2. Posted May 19, 2009 at 1:08 AM | Permalink

    The “non-publication” bias is interesting here. I can’t see any justification for rejecting papers like this in favor of those that align with the AGW hypothesis, or on any other non-methodological criteria.

  3. Mike Lorrey
    Posted May 19, 2009 at 5:29 AM | Permalink

    Problem is the Atlantic hurricane record shows 50% more cat 3 or higher in the first half of the 20th century than the latter half.

    Since warming has been at the poles and not at the equator, the temperature differential is less so storm severity should be less.

  4. Sean
    Posted May 19, 2009 at 5:36 AM | Permalink

    Talking about hurricanes, I recall the theme of stories in the 1980’s regarding hurricanes and coastal development. As beach and shoreline frontage saw heavy development, the story line was to recall that from the 1930’s to the 1950’s the frequency and strength of cyclonic storms was much higher, and that if the pattern returned, we’d be set up for disaster. Well, the pattern did return at the turn of the century and, lo and behold, property damage increased disproportionately. This is one time where I’d give the MSM a pat on the back if they just said, “I told you so”.

  5. TAG
    Posted May 19, 2009 at 5:56 AM | Permalink

    Michael Mann mentions his idea of double-counting for small storms.

    Is there anything more to this than arm waving?

    • Michael Jankowski
      Posted May 19, 2009 at 7:57 AM | Permalink

      Re: TAG (#6), what more do you need than arm waving? He is da Maestro, after all!

  6. Andrew
    Posted May 19, 2009 at 7:29 AM | Permalink

    I hate to criticize Landsea, but I have to object to the use of smooths and linear trends (which I assume are OLS) to analyze the data. Matt Briggs has criticized the use of smooths, and I believe I recall hearing somewhere that OLS should not be used for count data.

    That said, it looks like good work and I am baffled by Mann and Emanuel’s dismissal of this worthwhile effort.

    • Posted May 19, 2009 at 8:05 AM | Permalink

      Re: Andrew (#7), It really shouldn’t be baffling at all, Andrew. It disagrees with the consensus, therefore they dismiss it.

      • Andrew
        Posted May 19, 2009 at 8:09 AM | Permalink

        Re: Jeff Alberts (#9), Okay, this belongs more in the unbelievable but not surprising category. I did not mean to suggest I wasn’t expecting it. However, Mann and Emanuel’s criticisms are surprisingly poorly thought out and weak. I guess they didn’t have time to run it by the rest of RC?

  7. Posted May 19, 2009 at 8:07 AM | Permalink

    As far as “damage-inflicting” storms being cat 3 or higher: perhaps improved construction techniques have made re-built coastal buildings more resistant to cat 1 and 2 type storms? It takes more power to inflict damage, therefore we notice it more…

    • Harry Eagar
      Posted May 19, 2009 at 1:33 PM | Permalink

      Re: Jeff Alberts (#10),

      Well, yes and no. In 1970, my wife, who had lived on the Gulf Coast, and I, who had never been there, were driving along the Mississippi shore. I observed that the houses were not as crowded and junky as I was used to seeing along the East Coast.

      “You should have seen it before Camille,” she said.

      But, yes, commercial construction has improved and survivability is probably better than it used to be, as long as the ground isn’t washed away.

      Some of the newer hotels where I live (Hawaii) have massive screen doors to protect the ground floors from surge; or, alternatively, shorefront housing has breakaway walls on the ground floor (which is, at least legally, not inhabitable) so that the structure is supposed to retain its integrity after a wave washes through.

  8. Kenneth Fritsch
    Posted May 19, 2009 at 8:15 AM | Permalink

    Thanks much, Ryan, for a summary of some climate science thoughts on TC numbers and changes in detection capabilities. In my layperson’s view, I judge there is much evidence that neither TC numbers nor intensities have increased significantly in past decades.

    It would be surprising in my view of things if Landsea cannot get his paper published. It appears to me that there is more information from this paper being divulged than one would normally expect.

  9. Andrew
    Posted May 19, 2009 at 8:19 AM | Permalink

    Here’s some ice for your Shasta: in spite of finding Landsea’s work “uninteresting”, Kerry Emanuel apparently offered “helpful comments” - one wonders how Landsea was helped by yawning:

    Useful comments were provided on earlier versions of this manuscript
    by Fabrice Chauvin, Kerry Emanuel, James Franklin, Colin McAdie, Ed Rappaport, Bill
    Read and three anonymous reviewers.

  10. Gary
    Posted May 19, 2009 at 8:34 AM | Permalink

    So what is the likelihood “that a single storm could have been counted twice before satellite records could show the exact track?” From what I recall of the HURDAT dataset, it should be possible to approach the problem from two directions to come up with an estimate. First, examine the record and count the differently numbered storms that are close in time and space and might be double counts. Second, calculate the number of double counts necessary to make the linear trend significantly different from zero. Compare and draw conclusions. My guess would be that this criticism is a red herring.

    • Andrew
      Posted May 19, 2009 at 8:35 AM | Permalink

      Re: Gary (#14), Why analyze when you can arm wave?

    • Posted May 19, 2009 at 8:53 AM | Permalink

      Re: Gary (#14), to further your logic — I would think a double-counted storm would more likely be a weak one rather than a major hurricane, since major hurricanes usually have fairly regular paths (influenced by the large-scale flow and the beta effect). Thus, Mann’s comment seems to bolster Landsea’s argument, in my opinion.

  11. Posted May 19, 2009 at 8:50 AM | Permalink

    Stop piling on Professor Emanuel and questioning his research motivations. My highlighting of his statement is indicative of current “debate” in the tropical cyclone and climate department — even if it is between two authors on the same paper.

    It is likely that Dr. Emanuel’s quote about counts not being terribly interesting was taken out of context by the author of the article, but it is consistent with his previous research and with David Smith’s charts in the other threads here at CA. Since his Nature paper (Emanuel 2005) uses the power dissipation metric (which is correlated with ACE at r=0.99 on a yearly basis), Dr. Emanuel’s focus has been on the cumulative power/energy impact of tropical cyclones on climate. As David Smith’s 2007 posts here pointed out, the Baby Whirls contribute almost nothing to the seasonal Power Dissipation or ACE, and are NOT large climate signals. Thus counts can skew trend studies if you are not careful to diagnose just what carries a climate signal (whatever that may be), while cumulative metrics of storm energy are insensitive to the weakest storms (they are sensitive to the strongest instead: a Catch-22).

    I believe Dr. Curry has a to-be-submitted paper which attempts to explain why there may be more weak Baby Whirls in a warmer climate due to the expansion of the warm pool. She presages the paper in comments on a few other threads. So, the count vs. ACE/PDI metric battle goes on.

    • Andrew
      Posted May 19, 2009 at 10:20 AM | Permalink

      Re: ryanm (#16), okay, point taken. I don’t want to be misconstrued: I don’t think anything sinister is going on. I was just irate that Emanuel’s response seemed so ridiculing. It had not occurred to me that it may be misleading, having been taken out of context - you are probably right on that.

    • Kenneth Fritsch
      Posted May 20, 2009 at 8:44 AM | Permalink

      Re: ryanm (#16),

      I believe Dr. Curry has a to-be-submitted paper which attempts to explain why there may be more weak Baby Whirls in a warmer climate due to the expansion of the warm pool. She presages the paper in comments on a few other threads. So, the count vs. ACE/PDI metric battle goes on.

      I eagerly await Dr. Curry’s paper, as her coauthored papers have always seemed to inspire some lengthy discussion and detailed analyses here at CA.

  12. Bob Koss
    Posted May 19, 2009 at 11:03 AM | Permalink

    Ryan,
    There is something wrong with that 2nd graph.
    I looked at 1933 and 2005 since both years show 20 storms.

    1933 had 21 storms with one storm two days or less. So 1933 matches the graph figure of 20. But 2005 had 28 storms with four of them two days or less. That means the figure for 2005 should reach the 24 storm mark in the graph. You would have to remove all storms four days or less to get down to the 20 storm mark. That would make 1933 incorrect.

    I also think the first graph should have removed the 24 subtropicals since 1968. Why remove them from one graph and not the other? Although they only affect the slope of the trend line about 1/2 a storm over the period, it leaves one wondering why they are used in one graph and not the other.

    • Andrew
      Posted May 19, 2009 at 11:12 AM | Permalink

      Re: Bob Koss (#19), On the second point, how can you tell that sub-tropicals were left out of the second graph?

      • Bob Koss
        Posted May 19, 2009 at 11:29 AM | Permalink

        Re: Andrew (#20), because the 1st graph mentions them as being included, but not mentioned in the 2nd graph.

        • Andrew
          Posted May 19, 2009 at 11:56 AM | Permalink

          Re: Bob Koss (#23), it could be assumed it would be understood to be implied.

    • Posted May 19, 2009 at 11:18 AM | Permalink

      Re: Bob Koss (#19), you have to take the strength of the storm into account: only fixes of 34 knots and higher count toward tropical storm days. Some of the weak 2005 storms were classified as depressions at first, which means they were not named until reaching 34 knots.

      2005 list of Baby Whirls: Bret, Cindy (even a hurricane!), Gert, Jose, Lee, Subtropical Whirl, Tammy, and Alpha.

      • Bob Koss
        Posted May 19, 2009 at 3:47 PM | Permalink

        Re: Ryan Maue (#22), I see said the blind carpenter as he picked up his hammer and saw. Mea culpa.

        I assumed all plots from the beginning of each storm in the database were being used to arrive at the two day cut-off.

        Do I now have it correct that only plots of greater than 33 knots count toward the two days, regardless of storm organisation (e.g., subtropical, extratropical, or tropical stage)?

        • Bob Koss
          Posted May 19, 2009 at 6:57 PM | Permalink

          Re: Bob Koss (#26), using all plots greater than 33 knots I can now match the 2nd graph. There are 179 of the two-day storms: 33 pre-1946 and 146 after that. The pre-1946 surface-observation shorties averaged about 0.5 storms per year, the air recon era about 2 per year, and the satellite era about 2.5 per year.

          The observation platform seems to make quite a difference.

          Disregard my comment #19 and graphic in #21 as I misunderstood the method employed.

  13. Bob Koss
    Posted May 19, 2009 at 11:17 AM | Permalink

    Here are the short (2-day) storms: 35 in the top graph. Only 13 of them are pre-1946, when air recon started. I would suggest observational ability makes the difference.

  14. Mark_T
    Posted May 19, 2009 at 4:14 PM | Permalink

    Just a comment… I noticed in fig 3 that there was a spike in the number of “North Atlantic Tropical Cyclone counts” around the time of the dust bowl. I noticed this because I have been told that the warming in the ’30’s was mainly in the US yet hurricanes can get started near the coast of Africa.

  15. danbo
    Posted May 19, 2009 at 5:43 PM | Permalink

    There’s also the chance that two storms could be counted as one. They did lose track of storms. They lost track of the Key West storm of 1918.

    There were 2 or was it 3 unnamed “hurricanes” in the 1969 season.

  16. David Smith
    Posted May 19, 2009 at 8:33 PM | Permalink

    Another oddity with the Baby Whirls is their geographic distribution. Here’s an old graphic that shows the approximate location of short-lived storms (I believe I defined them as lasting 24 hours or less):

    There is a remarkable clustering along the coast, especially in the region of oil rigs, coastal radar, buoys, etc. I can’t think of a natural explanation for the clustering.

    On another note, I see that Knutson and Vecchi are co-authors with Landsea. The three visit this issue:

    We also undertake a sampling study based upon the distribution of ship
    observations, which provides quantitative estimates of the frequency of “missed” TCs,
    focusing just on the moderate- to long-lived systems with durations exceeding two days.

    This, I believe, may be a way to correct a problem in the very good Knutson and Vecchi 2007 study on “missed” TCs. The problem was that their 2007 study employed another study which had overestimated the areal extent of newly-formed storms. The newly-formed storms (and the weaker ones, too) were smaller in area than that study assumed.

  17. Louis Hissink
    Posted May 19, 2009 at 11:16 PM | Permalink

    I suggest comparing the shape of the hurricanes and those of spiral galaxies. Same force produces both, and the latter has been expertly modelled by A.J. Peratt via his PIC simulation using super computers.

    But I don’t expect to see the ‘Ahah’ flash too soon. Took centuries after Galileo for science to work it out.

  18. Julius St Swithin
    Posted May 20, 2009 at 6:38 AM | Permalink

    An analysis of Atlantic hurricane data from NOAA’s Hurricane Research Division suggests that since 1850 there has been a tendency for the numbers of storms of all intensities to increase in periods of warming and to decrease during periods of cooling. What is more, there is no obvious long-term trend.

    See:
    http://www.climatedata.info/Impacts/impacts.html

    and click on “tropical cyclones”

  19. Dave Andrews
    Posted May 20, 2009 at 1:55 PM | Permalink

    A question :

    “We find that the occurrence of short-lived
    storms (duration two days or less) in the database has increased dramatically, from less
    than one per year in the late-19th/early-20th Century to about five per year since about 2000,
    while moderate to long-lived storms have increased little, if at all. Thus, the previously
    documented increase in total TC frequency since the late 19th Century in the database is
    primarily due to an increase in very short-lived TCs.”

    Can anyone tell me how they know the frequency of short lived storms of two days or less in the late 19th/early 20th C?

    • Scott Lurndal
      Posted May 20, 2009 at 3:49 PM | Permalink

      Re: Dave Andrews (#34),

      You missed the phrase “in the database”.

      The supposition is that:

      1) The actual number of short-lived storms (or the ratio of short-lived to longer-lived) was similar in the late 20th century to what it was in the 19th.

      2) Thus, the modern “increase in TC frequency” is due to better reporting, not more storms (and by extension AGW).

  20. Judith Curry
    Posted May 21, 2009 at 5:09 AM | Permalink

    A quick comment on the tiny tims. I haven’t seen the Landsea paper yet. But I have come to distrust the tropical storm identification, even in recent years. For example, in 2008 there were several storms judged to be TS that I thought were pretty marginal, and an additional one that they didn’t call that I would have called. And of course early in the record (before 1970), I don’t think the TS counts are reliable at all. So my current thinking is that we should just stop trying to interpret the record of the number of tropical cyclones and focus on the number of hurricanes (forget the TS, losing the tiny tims, subtropical storms, etc.).

    • bender
      Posted May 21, 2009 at 8:24 AM | Permalink

      Re: Judith Curry (#36),
      Agreed – sort of. But how do you know what the right (or best) threshold is? As we have seen, the higher the counting threshold (e.g. landfalling cat 4+5 vs. all storms) the lower the 20th c. trend. So the threshold used for cutting off counts (or ACE) matters quite a bit.

    • Kenneth Fritsch
      Posted May 22, 2009 at 9:45 AM | Permalink

      Re: Judith Curry (#36),

      And of course early in the record (before 1970), I don’t think the TS are reliable at all. So my current thinking is that we should just stop trying to interpret the record of number of tropical cyclones, and focus on the number of hurricanes (forget the TS, losing the tiny tims, subtropical storms, etc).

      That begs the question of what time period can reliably be used to look for global trends in hurricane counts and in counts of the higher intensities. David Smith has presented links that in my judgment give evidence that the classifying of hurricanes was changing (improving) into the early 1980s. I have been using 1984-present as my period. Even if one goes back further in time looking at these trends, I think one would have to explain the lack of a significant trend for the 1984-present period.

      • Posted May 22, 2009 at 10:21 AM | Permalink

        Re: Kenneth Fritsch (#46), before even looking at the “data” in whatever incarnation it may be, we can look at the substance of the theory relating global warming (SST warming) and TC intensity change. As hurricanes act as Carnot heat engines, we can approximate through various thermodynamic and physical arguments that the maximum sustained winds will increase a few knots per degree of warmed SST. There is uncertainty in the amount, but most agree on the sign of the change: an increase.

        This is where maximum potential intensity theory comes in, and is waved around like a magic wand. Before you can invoke the spirit of MPI, you must declare to the audience: “with all else being equal”. While SSTs may increase, other factors deleterious to TC development, such as vertical shear or drier lower-tropospheric humidity may be an inhibiting factor.

        The overarching complicating factor and a major and, I dare say, fatal oversight in most TC trend papers is interannual variability associated with ENSO, AMM, moon phases, etc. For instance, if you want to examine the intensity change of Category 4+ typhoons in the Western Pacific since 1970, you must take into account the interdecadal changes in the North Pacific SSTs associated with the PDO which are connected to ENSO in the tropical Pacific.

        I submit that scientists must know very well the natural variability before diving into the realm of AGW influences. Then and only then should you worry about data uncertainty, which is noise compared to the interannual fluctuations. And by theoretical calculations, the intensity changes expected since 1970 should also be in the noise range compared to our intensity estimates.
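Ryan’s “a few knots per degree” figure can be sanity-checked with back-of-the-envelope arithmetic. A sketch, assuming the commonly cited rule of thumb of roughly a 4-5% increase in potential intensity per degree of SST warming (the sensitivity value here is an illustrative assumption, not a number from the comment above):

```python
# Back-of-envelope check of "a few knots per degree of warmed SST".
# ASSUMPTION: a potential-intensity sensitivity of ~4.5% per kelvin, an
# illustrative value within the commonly cited range, not a derived result.

def intensity_change_kt(v_max_kt, sensitivity_per_K=0.045, dT_K=1.0):
    """Approximate change in maximum sustained wind for a given SST warming."""
    return v_max_kt * sensitivity_per_K * dT_K

for v_max in (65, 100, 135):  # roughly cat 1, cat 3, cat 4 winds in knots
    print(v_max, intensity_change_kt(v_max))
# Across these intensities a 1 K warming gives roughly 3-6 kt, i.e. "a few
# knots", comparable to the noise in operational intensity estimates.
```

This is consistent with Ryan’s point that the expected post-1970 intensity change sits within the noise of the intensity estimates themselves.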

        • Kenneth Fritsch
          Posted May 22, 2009 at 5:59 PM | Permalink

          Re: Ryan Maue (#47),

          I submit that scientists must know very well the natural variability before diving into the realm of AGW influences. Then and only then should you worry about data uncertainty, which is noise compared to the interannual fluctuations. And by theoretical calculations, the intensity changes expected since 1970 should also be in the noise range compared to our intensity estimates.

          From my layperson perspective I agree with what you say in this post, but since I am not a scientist, or at least have not been one for a while, I believe you qualified me to worry about the data uncertainty.

  21. Mark T
    Posted May 21, 2009 at 8:58 AM | Permalink

    I would think if there’s a problem with the threshold for distinguishing a TS from a TD, there would also be one for classifying a Cat 1 vs. a TS, a Cat 2 vs. a Cat 1, etc.

    Mark

  22. David Smith
    Posted May 21, 2009 at 10:26 AM | Permalink

    Re #38 Mark T, I agree. I can think of cases in recent years where cyclones were classified as hurricanes based on very limited data. For example, here’s an excerpt from the report on Hurricane Cindy (2005):

    Cindy was operationally assessed to be a tropical storm with 60 kt winds when its center crossed the coast of Plaquemines Parish in extreme southeastern Louisiana early on 6 July. No reconnaissance data were available in the last few hours leading up to landfall. A detailed post-storm analysis of Doppler velocity data from the NOAA National Weather Service (NWS) Slidell, Louisiana WSR-88D Doppler radar (KLIX), however, indicates Cindy was slightly (5 kt) and briefly stronger – a hurricane with 65-kt winds….No land-based observations support hurricane-force surface winds…

    So, Cindy was considered to be a strong tropical storm when it happened, but a later detailed reanalysis of special radar data revealed a narrow band of strong winds aloft which was calculated to give 65 kt (minimum cat 1) winds at the surface. Based on that, Cindy was reclassified as a hurricane.

    Had Cindy occurred in 1975, or 1950 or 1925, the classification would have been different. Therefore how does one compare modern storm intensities with historical ones, even “hurricanes”, when the measurement technology has changed and continues to change?

    • Mark T
      Posted May 21, 2009 at 11:23 AM | Permalink

      Re: David Smith (#39), I think this makes bender’s question even more relevant; or at least, I think bender is correct when he says the “threshold” matters quite a bit, but not just for the initial cutoff point. Simply saying we can throw out the Tiny Tims as irrelevant doesn’t really address the underlying problem of classification error, which can clearly be seen to be decreasing with increasing technological capability.

      ^Ryan: still a subjective criterion, and one that did not exist prior to satellite imagery! How do you (scientists in general) account for technological advancement in your analyses?

      Mark

      • Posted May 21, 2009 at 10:30 PM | Permalink

        Re: Mark T (#41), your final quip about “scientists” dovetails nicely with this little clip from climate czar Henry Waxman: YouTube Waxman Link “I don’t know the details. I rely on the scientists.”

        I think we should stick to the satellite era, just to be safe. Unfortunately, a lot of the trends go away.

  23. Posted May 21, 2009 at 11:03 AM | Permalink

    In most cases, when satellite imagery is available, the first appearance of an eye heralds the determination of the hurricane or typhoon stage of development.

  24. RomanM
    Posted May 21, 2009 at 6:22 PM | Permalink

    Slightly OT (sorry Ryan), I am off tomorrow (no internet) for a week to study Eastern Pacific SSTs (while hopefully avoiding even the smallest baby subtropical whirls) and to view some Alaskan glaciers before they all melt away completely. 😉

    When I get back, I hope to come up with a post on relating category 4 and 5 hurricanes (or how not to try to relate them – depending on what transpires between now and then) to other factors.

    • Kenneth Fritsch
      Posted May 21, 2009 at 7:06 PM | Permalink

      Re: RomanM (#42),

      That is the second teaser I have heard on the impending Cat45 post. I can wait, I think.

      Perhaps you can post some photos of those glaciers just to prove that they are still there - and show off your photography abilities.

    • Posted May 21, 2009 at 9:06 PM | Permalink

      Re: RomanM (#42), make sure you watch out for the swine flu.

  25. David Smith
    Posted May 23, 2009 at 12:16 PM | Permalink

    Here are a few plots on short-duration hurricanes. By “short duration” I mean the ones which had hurricane-force winds for 24 hrs or less.

    As the frequency, density and accuracy of measurement devices (recon flights, radar, buoys, satellite, etc.) have improved, the detection of storm intensities has improved, especially the detection of short-duration, small-area wind maxima.

    Plot one shows the annual number of storms which had hurricane-force winds for 24 hrs or less during the recon-flight era.

    The count has generally increased since 1945, a period also characterized by improving detection capability.

    Here is the count of longer-duration storms –

    The count varies but any uptrend is modest.

    Here are short-duration hurricanes as a percent of all hurricanes –

    I can’t think of a possible natural explanation for these patterns. I suspect that the apparent growth of short-duration storm count is driven by changes in detection technology.
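
    The tally above could be sketched roughly as follows. This is a minimal illustration, not David Smith’s actual method: the 6-hourly fixes, the storm IDs, and the wind values are all made up for the example, and real best-track records carry far more fields than this.

    ```python
    # Classify hurricanes by time spent at hurricane force (>= 64 kt),
    # assuming standard 6-hourly best-track fixes of (storm_id, wind_kt).
    from collections import defaultdict

    HOURS_PER_FIX = 6          # typical best-track fix spacing
    HURRICANE_KT = 64          # minimum hurricane-force sustained wind
    SHORT_CUTOFF_HOURS = 24    # the "short duration" threshold used above

    def hurricane_hours(fixes):
        """Sum hours at or above hurricane force for each storm."""
        hours = defaultdict(int)
        for storm_id, wind_kt in fixes:
            if wind_kt >= HURRICANE_KT:
                hours[storm_id] += HOURS_PER_FIX
        return hours

    def classify(fixes):
        """Split hurricanes into short (<= 24 h) and longer-lived."""
        hours = hurricane_hours(fixes)
        short = sorted(s for s, h in hours.items() if h <= SHORT_CUTOFF_HOURS)
        longer = sorted(s for s, h in hours.items() if h > SHORT_CUTOFF_HOURS)
        return short, longer

    # Illustrative data: storm A spends 12 h at hurricane force, storm B 48 h.
    fixes = [("A", 70), ("A", 65), ("A", 60),
             ("B", 65), ("B", 80), ("B", 90), ("B", 85),
             ("B", 75), ("B", 70), ("B", 70), ("B", 65), ("B", 60)]
    short, longer = classify(fixes)
    ```

    Counting `short` per season, rather than all hurricanes, is what isolates the detection-sensitive part of the record.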

  26. Posted May 28, 2009 at 10:33 AM | Permalink

    Time to add another baby whirl to the Best-Track dataset: Tropical Depression One has been initiated in the North Atlantic, with the expectation that it will become a 35 knot tropical storm for 6-12 hours before it is sheared apart over colder waters north of Bermuda.

    Visible Satellite Image

  27. Michael Jankowski
    Posted Jun 3, 2009 at 8:09 AM | Permalink

    Gray and Klotzbach scale back 2009 hurricane outlook

    Link

  28. David Smith
    Posted Jul 21, 2009 at 4:48 PM | Permalink

    Here’s the global time series of short-duration (less than 24 hours of 34+ kt 1-min winds) tropical cyclones. This is based on IBTRACS data –

    I have no physical reason to think that the count of short-duration storms is increasing. I do suspect that improved detection, especially in marginal situations like cyclones near fronts, cyclones near land, and cyclones which may be marginally warm core, is now putting more marginal systems into the tropical cyclone record.

    The interesting aspect of this may come when one subtracts these marginal storms from the global record and looks only at the systems strong enough to have been counted both today and thirty years ago. These global baby whirls may be masking a slow decline in the global count of tropical cyclones. Perhaps the count is not constant at about 85 per year but rather is presently slightly declining. If this proves to be true and if one were devilish, one might take the short-term decline and credit it to AGW, with our great-grandchildren facing a world which runs smack-dab out of hurricanes and typhoons due to CO2.

    I’ll do that exercise soon.
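
    The subtraction exercise amounts to a one-line adjustment per year. The sketch below uses invented placeholder counts, not actual IBTRACS values, purely to show the mechanics of how a flat total can mask a decline once the growing number of marginal systems is removed.

    ```python
    # Subtract short-duration (marginal) systems from annual global TC
    # counts to see what the adjusted series looks like.
    def adjusted_counts(total_by_year, short_by_year):
        """Per-year count with short-duration storms removed."""
        return {yr: total_by_year[yr] - short_by_year.get(yr, 0)
                for yr in total_by_year}

    # Hypothetical illustration: totals hold steady near ~85/yr while the
    # number of detected "baby whirls" grows, so the adjusted series falls.
    total = {2005: 85, 2006: 85, 2007: 85}
    short = {2005: 3, 2006: 6, 2007: 9}
    adjusted = adjusted_counts(total, short)
    # adjusted: {2005: 82, 2006: 79, 2007: 76}
    ```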

    • Posted Jul 21, 2009 at 9:34 PM | Permalink

      Re: David Smith (#52), simply take the three decades and plot up the locations and intensities of the baby whirl tracks. Are they located more closely to land or out in the middle of the oceans? As satellite techniques improve such as scatterometry and other microwave sensors, mid-ocean storms should be on the increase. The launch of QuikSCAT and operations beginning in July 1999 should be a changepoint under that hypothesis.

  29. Posted Aug 4, 2009 at 10:13 AM | Permalink

    Landsea’s Baby Whirl paper has been accepted for publication and is in the J. Climate in-press section. LINK

    Impact of Duration Thresholds on Atlantic Tropical Cyclone Counts

    Christopher W. Landsea, Gabriel A. Vecchi, Lennart Bengtsson, Thomas R. Knutson

    Records of Atlantic basin tropical cyclones (TCs) since the late-19th Century indicate a very large upward trend in storm frequency. This increase in documented TCs has been previously interpreted as resulting from anthropogenic climate change. However, improvements in observing and recording practices provide an alternative interpretation for these changes: recent studies suggest that the number of potentially missed TCs is sufficient to explain a large part of the recorded increase in TC counts. This study explores the influence of another factor–TC duration–on observed changes in TC frequency, using a widely-used Atlantic TC database: HURDAT. We find that the occurrence of short-lived storms (duration two days or less) in the database has increased dramatically, from less than one per year in the late-19th/early-20th Century to about five per year since about 2000, while moderate to long-lived storms have increased little, if at all. Thus, the previously documented increase in total TC frequency since the late 19th Century in the database is primarily due to an increase in very short-lived TCs.

    We also undertake a sampling study based upon the distribution of ship observations, which provides quantitative estimates of the frequency of “missed” TCs, focusing just on the moderate- to long-lived systems with durations exceeding two days. Both in the raw HURDAT database, and upon adding the estimated numbers of missed TCs, the time series of moderate to long-lived Atlantic TCs show substantial multi-decadal variability, but neither time series exhibits a significant trend since the late-19th Century, with a nominal decrease in the adjusted time series.

    Thus, to understand the source of the century-scale increase in Atlantic TC counts in HURDAT, one must explain the relatively monotonic increase in very short duration storms since the late-19th Century. While it is possible that the recorded increase in short duration TCs represents a real climate signal, we consider it is more plausible that the increase arises primarily from improvements in the quantity and quality of observations, along with enhanced interpretation techniques, which have allowed National Hurricane Center forecasters to better monitor and detect initial TC formation, and thus incorporate increasing numbers of very short-lived systems into the TC database.
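
    The paper’s two-day duration threshold could be applied along these lines. The storm records here are hypothetical (HURDAT’s real format is quite different), and this sketches only the partitioning step, not the authors’ full analysis or their ship-track sampling study.

    ```python
    # Partition annual TC counts at a two-day duration threshold,
    # per the Landsea et al. abstract: "short-lived" means <= 2 days.
    from collections import Counter

    def counts_by_duration(records, threshold_days=2.0):
        """Split (year, duration_days) records into short/longer counts."""
        short, longer = Counter(), Counter()
        for year, duration_days in records:
            (short if duration_days <= threshold_days else longer)[year] += 1
        return short, longer

    # Hypothetical records: one long-lived 1900 storm plus one short one,
    # versus a 2005 season padded with several very short-lived systems.
    records = [(1900, 5.0), (1900, 1.5),
               (2005, 1.0), (2005, 2.0), (2005, 6.5), (2005, 0.75)]
    short, longer = counts_by_duration(records)
    ```

    Trending `short` and `longer` separately is what reveals that nearly all of the century-scale growth sits in the short-lived bin.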

  30. Paul Coppin
    Posted Sep 29, 2010 at 6:48 PM | Permalink

    Perhaps the easiest solution to counting (named) storms is to start the count at SS cat 2. Since there is always some incipient variability as to where the line is between a TS and a cat 1, drop cat 1 altogether, and base the storm count on at least cat 2. Mind you, all that would likely accomplish is to shift the argument to the line between the cat 1s and 2s. Then there would have to be corrections applied for the new baseline such as namedstorms(year)=TOT(namedstorms+x), where x={rnd(>0:<inf)}. Some place would have to be found to hide the declime in numbers of named storms; I might suggest South Beach.

3 Trackbacks

  1. […] such as possible double counts. For the time being, the researchers will probably still disagree […]

  2. […] at Climate Audit, we described this type of storm as a “baby-whirl“.  The ACE of Nicole is 0.1225.  Here are the top 10-weakest storms from 1970 to 2009 […]

  3. […] at Climate Audit in 2009, Steve McIntyre, myself, and others had a long running conversation about the potential issue of […]