More on Elsner et al 2006

I’m feeling a little less grumpy about blog crashes which were wearing me out. I’ll obviously be commenting on AR4 but I’m not sure where I want to start. While I was researching some material, I came across an interesting comment in James Elsner’s occasional blog (only a few posts per year) about adjustments to hurricane wind speeds – a hot topic in the hurricane wars – which ties back to a thread from Willis in October about Elsner et al 2006.

Elsner, a frequent author on hurricanes, stated at his blog:

The debate on hurricanes and climate change can sometimes devolve into issues of data reliability. Unfortunately, some of what is said about these issues is nonsense, or worse, self serving. As one example, during the middle 1990’s, the high priest of NOAA’s best-track data argued vehemently that the hurricane intensities during the 1950’s and ’60s were biased upward. I checked with my colleague Noel LaSeur, who flew into these early storms, and he said “If anything, we underestimated the intensity” suggesting a possible downward bias. Noel is correct. With this light, the intensity of the hurricanes of 2004 & 2005 is not that unusual against the backdrop of the formidable mid century hurricanes. Enthusiasts and partisans should not be tinkering with these data. Moreover, while it stands to reason (a priori) that the historical information will be less precise than data collected today with modern technologies, to ignore these earlier records is scientifically indefensible. Inspired by Edward Tufte recommendations for truth-telling in graphical presentations (Visual Explanations, Graphics Press, 1997), I suggest that one way to enforce data standards is to insist that the original, unprocessed data be posted alongside the manipulated data, and that the manipulators and their methods be identified.

I agree 1000% with Elsner’s attitude towards data adjustments – the reference to Tufte also leads to many extremely interesting analyses. In proxy studies, all too often the adjustments are poorly or inaccurately described and can only be discerned through patient dissection of the data. Further, the adjustments are frequently as large as or larger than the effect being measured. So the policy of showing both versions makes eminent sense.

However, in respect to the controversy initiated here, as so often in climate science, one feels that one has fallen down a rabbit hole. I presume that the “high priest” in question is Landsea, the author of Landsea 1993, which proposed a downward adjustment of Atlantic wind speeds from 1945 to 1969. Despite Elsner’s excoriation here, Landsea appears to have resiled somewhat from the adjustment proposed 14 years ago, saying in his Reply to Emanuel 2005:

It is now understood to be physically reasonable that the intensity of hurricanes in the 1970s through to the early 1990s was underestimated, rather than the 1940s and 1960s being overestimated.

On the other hand, the adjustment is alive and well in Emanuel 2005, Webster et al 2005 and Holland and Webster 2007, all of which go uncriticized by Elsner. Indeed, the results of Emanuel 2005 depend to a considerable extent on the adjustment, yet Elsner makes no mention of this. I discussed the Landsea adjustment here and here.

Elsner himself fails to adhere to the above policy in the recent Elsner et al 2006, High frequency variability in hurricane power dissipation and its relationship to global temperature, discussed by Willis here. They state:

It is well known that hurricane data from the Atlantic basin are not uniform in quality owing to improvements in observational technology over the years. We adjust the pre-1973 wind speeds to remove biases using the same procedure as described in Emanuel (2005) and compute the PDI by cubing the maximum wind speed for each 6-h observation…. For comparisons we perform the analysis using both the adjusted and unadjusted wind speeds.

Sounds good. But the analysis with both versions is nowhere to be found. Instead we see the following graphic of a single version (actually showing the cube root of the PDI – or a simple integral of the wind speed – a point observed by Willis previously):

Elsner et al 2006 Figure 2 top panel.
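
As a point of reference, the PDI calculation described in the quoted passage is simple to sketch. The code below is a minimal illustration (function names and the sample winds are mine, not from the paper): the PDI is the sum of cubed 6-h maximum winds, and taking the cube root of the total restores wind-speed units, which appears to be what the figure plots.

```python
# Sketch of the PDI calculation described in Emanuel (2005), as quoted above.
# HURDAT gives best-track maximum sustained winds at 6-h intervals; the PDI
# proxy sums the cubed winds over a storm (or season).

def pdi(six_hourly_winds_ms):
    """Power dissipation index: sum of cubed 6-h maximum wind speeds (m/s)."""
    return sum(v ** 3 for v in six_hourly_winds_ms)

def cube_root_pdi(six_hourly_winds_ms):
    """Cube root of the PDI, restoring wind-speed units for plotting."""
    return pdi(six_hourly_winds_ms) ** (1.0 / 3.0)

# e.g. a short-lived storm observed at four 6-h points:
winds = [25.0, 35.0, 45.0, 30.0]
print(pdi(winds))            # 176625.0
print(cube_root_pdi(winds))  # about 56.1
```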

To make it more confusing, there is no discussion of whether this is the adjusted or unadjusted version. In his earlier post, Willis couldn’t figure it out, observing that it seemed to be neither:

    Second, I haven’t a clue what they’ve done with their PDI data. They say they got it from HURDAT, but it looks nothing like my calculation of the PDI from HURDAT. It also looks nothing like Emanuel’s PDI, or Landsea’s PDI. Elsner says: “We adjust the pre-1973 wind speeds to remove biases using the same procedure as described in Emanuel (2005) …” but this is not the case. Here’s the difference.

Figure from Willis’ post.

Back to Elsner’s blog, where he stated:

I suggest that one way to enforce data standards is to insist that the original, unprocessed data be posted alongside the manipulated data, and that the manipulators and their methods be identified.

Good idea. Elsner et al 2006 would be a good place to start.


  1. Terry
    Posted Feb 10, 2007 at 12:10 AM | Permalink


    What happened to the paper you were going to write quantifying how much difference the various corrections noted in the NAS report made to proxy study results?

    Seemed like a straightforward and important summation of a lot of issues. I hope it hasn’t been abandoned.

  2. David Smith
    Posted Feb 10, 2007 at 10:04 AM | Permalink

    The “High Priest” Elsner mentions might have been a reference to Jose Partagas rather than to Chris Landsea.

    Nevertheless, Landsea had the opinion expressed in his 1993 paper, which is the relevant point.

  3. David Smith
    Posted Feb 10, 2007 at 2:25 PM | Permalink

    Elsner’s website has a neat animation of the 2005 Atlantic hurricane season. It’s his February 9 blog entry. You can watch the storms as they track across the basin.

    Green and yellow denote weak systems – note how many weak systems there are in the record-setting 2005. In my opinion, some of these would have been missed in the years before satellites.

    The Elsner paper linked by Steve M concludes that global warming may tend to stabilize the tropical atmosphere, which would (partially) counter any increase in hurricane intensities due to warmer sea temperatures. In other words, global warming would both warm the ocean (making storms stronger) while stabilizing the atmosphere (making it less supportive of strong storms).

  4. Ken Fritsch
    Posted Feb 10, 2007 at 2:50 PM | Permalink

    That animation was neat (David, I thought you’d be young enough to use the term “cool” here) and it reminded me of the dry weather we were having in early June 2005, and of watching the hurricane/storm track all the way up into IL in the animation. We had much hope that it would break our dry spell, but the results were very localized and we missed out. Oh well, we made up for it in 2006.

    I am sure the people of LA and MS do not relish watching the Katrina tracks.

  5. Tom C
    Posted Feb 10, 2007 at 3:20 PM | Permalink

    Michael E. Mann has a Dec 5, 2006 blog on The Weather Channel One Degree website.
    Mann uses statistical charts and argues that fossil fuel burning is leading to increased hurricane activity in the Atlantic Ocean. Here is the first paragraph of his blog, followed by my comments, which The Weather Channel One Degree website has not posted despite my repeated attempts.

    Excerpt from Michael E. Mann’s blog of Dec 5, 2006:
    As commented on previously by colleagues and me at the scientist-run website RealClimate, it is impossible to implicate human-caused climate change for individual meteorological events such as Hurricane Katrina, or even last year’s devastating Atlantic hurricane season. Nonetheless, we can draw conclusions regarding the links between the changing statistics of hurricane activity and climate. The situation is analogous to rolling loaded dice. Imagine one were to construct a set of dice where sixes occur twice as often as normal. If you were to roll a six using these dice, you could not blame it specifically on the fact that the dice had been loaded. Half of the time, sixes would have come up anyway. Nonetheless, if you were to continue to roll sixes twice as often as expected (33% of the time instead of 16.7% of the time), you would eventually be led to the unavoidable conclusion that the dice had been loaded. And so it is the same for the influence of climate variability and climate change on Hurricanes.

    My comments on Mann’s blog:

    Dr. Mann,

    To illustrate a point about using statistics to determine influences on climate you gave an example of using statistics to determine influence on dice. Your calculation of simple dice statistics is wrong, and calls into question your credibility in calculating statistics for global climate change.

    For a pair of dice, there are 36 possible outcomes, of which 5 outcomes are sixes. For unloaded dice, sixes would be expected 13.9% of the time (not 16.7% of the time). For the loaded dice, sixes would be expected 27.8% of the time (not 33% of the time). Your calculations of expected outcomes are 19-20% greater than statistically valid expected outcomes.

    Your statistics for determining influence on simple dice is wrong. It undermines your credibility in using statistics for determining influences on complex climate phenomena, such as hurricane activity.
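
    As a quick check, these figures can be reproduced by enumerating the 36 outcomes for a fair pair of dice (a minimal Python sketch; “sixes” is read here as the pair summing to six):

    ```python
    from itertools import product

    # All 36 equally likely outcomes for a fair pair of dice.
    outcomes = list(product(range(1, 7), repeat=2))
    # Outcomes whose faces sum to six: (1,5), (2,4), (3,3), (4,2), (5,1).
    sum_six = [o for o in outcomes if sum(o) == 6]

    p_fair = len(sum_six) / len(outcomes)   # 5/36, about 13.9%
    p_loaded = 2 * p_fair                   # "twice as often": about 27.8%
    print(len(sum_six), round(100 * p_fair, 1), round(100 * p_loaded, 1))
    # 5 13.9 27.8
    ```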

  6. richardT
    Posted Feb 10, 2007 at 3:42 PM | Permalink

    Dice tend to have 6 sides, so the probability of a 6 is 1/6 or 16.67%.

  7. John G. Bell
    Posted Feb 10, 2007 at 3:57 PM | Permalink

    Tom C. is talking about the odds of getting the sum six when rolling a pair of dice, not the odds of getting a six on one roll of a die.

  8. David Smith
    Posted Feb 10, 2007 at 4:02 PM | Permalink

    Ken, I’m in an identity-barren generation, slightly younger than “neat” but, as my kids say, I’m definitely not “cool”.

    Another thing that the 2005 animation illustrates is what someone called “the brevity of severity”. Even Katrina and Rita did not last long in their most-severe form.

    Re #5 That is a classic! Mann’s paragraph communicates the problem so well. I hope it does not get lost in all rapid postings at CA.

  9. Tom C
    Posted Feb 10, 2007 at 4:11 PM | Permalink

    #6 and #7

    In the excerpt Mann states:

    Imagine one were to construct a set of dice where sixes occur twice as often as normal.

    #7 is correct. I am talking about a pair of dice, just as Mann appears to refer to a pair of dice (“a set of dice”), not a die.

  10. Steve McIntyre
    Posted Feb 10, 2007 at 4:13 PM | Permalink

    #5. Tom, in fairness to Mann (who told the NAS Panel that he was “not a statistician”), I think that it’s obvious that he’s talking about rolling one die, not two dice, and his comment is not wrong for one die under those assumptions. That’s not to say that his statistical tests in Mann and Emanuel 2006 were correctly done though.

  11. Tom C
    Posted Feb 10, 2007 at 4:44 PM | Permalink

    #10 Steve, Mann’s text clearly refers to a “set of dice”, not a die. I don’t see how the text can be construed as one die. Mann may have mistakenly derived 16.7% for a pair (set) of dice from a mistaken application of single-die statistics to two dice.
    In any case, Mann’s error is manifest since his text refers to a set of dice, not one die.

  12. Steve McIntyre
    Posted Feb 10, 2007 at 4:47 PM | Permalink

    I understood what he meant; perhaps he wrote it infelicitously but I wouldn’t encourage you to get bent out of shape about it. There are lots of other fish to fry.

  13. Melvin
    Posted Feb 10, 2007 at 5:57 PM | Permalink

    I think Mann deserves all the “correction” he can stand, and more. I for one will never make excuses for his nastiness and carelessness.

  14. David Smith
    Posted Feb 10, 2007 at 7:23 PM | Permalink

    RE #10 I do wonder why one would “construct a set of dice” and then roll but one…

    Only in climate science and government…

  15. Ken Fritsch
    Posted Feb 10, 2007 at 7:47 PM | Permalink

    Mann would probably get an F on his effort to keep clear whether he was rolling dice or a die, but he gets an A for the inanity of this example. If we know that a die or the dice are loaded the statistics will change, but if we were tracking the results to determine the fact that they are loaded and we forgot to report some of the results we might come to the wrong conclusions.

  16. David Smith
    Posted Feb 10, 2007 at 8:39 PM | Permalink

    In trying to figure out the likely identity of Elsner’s “high priest” (probably not Partagas but I still think not Landsea either) I came across some HURDAT committee meeting notes.

    They are located here.

    Anyone thinking of using 19th or early 20th century records for climatology should take a few minutes to scan the notes. I think such a scan would build an appreciation for the poor quality of the early records.

    There are also sections in 2000, prior to the Hurricane Wars, where the committee realizes that the wind/pressure relationship used in earlier days had problems.

    The 2006 notes discuss Judith Curry’s GT communication with the Committee.

    Interesting, and worth a 10-minute scan.

  17. Duane Johnson
    Posted Feb 10, 2007 at 9:10 PM | Permalink

    On the other hand, if a pair of dice is rolled, the probability that at least one six is rolled (six pips on a face, rather than the sum of the pips) would be 11/36 or .3056. With each die loaded to double its probability of rolling a six, wouldn’t the probability be 20/36 that at least one face would be a six?

    Maybe we should ask Marilyn. She would probably consult a statistician!
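
    Before asking Marilyn, both figures can be verified directly (a short sketch; the loaded-die assumption is simply P(6) = 1/3 on each die):

    ```python
    from fractions import Fraction
    from itertools import product

    # Fair pair: count the outcomes with at least one face showing six.
    hits = sum(1 for a, b in product(range(1, 7), repeat=2) if 6 in (a, b))
    p_fair = Fraction(hits, 36)          # 11/36

    # Each die loaded so that P(6) = 1/3 (double the fair 1/6).
    p6 = Fraction(1, 3)
    p_loaded = 1 - (1 - p6) ** 2         # 1 - (2/3)^2 = 5/9 = 20/36
    print(p_fair, p_loaded)              # 11/36 5/9
    ```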

  18. David Smith
    Posted Feb 10, 2007 at 9:13 PM | Permalink

    I’ve been reading some of Elsner’s work and I struggle to figure him out. At moments I wonder if he sometimes tweaks the noses of the hard-core AGW/hurricane scientists.

    I also wonder whether he’s really serious in some of the papers I see, like this one, or maybe he’s simply mocking some of the lighter-weight analyses of the AGW worriers. I dunno.

  19. Tom C
    Posted Feb 11, 2007 at 12:18 PM | Permalink

    #10, #12 Steve, the basis for Mann’s hurricane statistics, not just his dice statistics, is the fundamental issue, as discussed below under item 2.

    1. You mention that Mann told an NAS panel that he is not a statistician. However, Professor Mann pointedly begins his Dec 5, 2006 analysis of Atlantic hurricane activity with a lesson in statistics. Mann instructs his audience in statistics so the audience will understand the statistical reasoning that is the basis for his argument that fossil fuel burning is leading to increased hurricane activity in the Atlantic Ocean.

    Mann may not be a master of higher-level statistics, but he should be competent in basic statistics. While Mann has the option of getting statisticians to assist in his scientific work, he still needs a firm grasp of basic statistics and statistical reasoning in order to conduct his scientific work. And indeed, in Mann’s Dec 5, 2006 analysis, Mann makes a point of instructing his audience on basic statistics.

    So, while Professor Mann may demur from higher-level statistics, he appears confident of his ability to instruct a public audience on basic statistics that are the foundation for his assertions about human effects on Atlantic hurricane activity.

    2. The assertion by a prominent scientist, who is often quoted in science articles and the press, that fossil fuel burning is leading to increased hurricane activity in the Atlantic Ocean, is, I believe, a reasonably big fish to fry. How about plopping this large carp in the frying pan?

    Two excerpts from Michael E. Mann’s blog of Dec 5, 2006:

    Nonetheless, if you were to continue to roll sixes twice as often as expected (33% of the time instead of 16.7% of the time), you would eventually be led to the unavoidable conclusion that the dice had been loaded. And so it is the same for the influence of climate variability and climate change on Hurricanes.

    A simple statistical model using global average SST (which represents global warming) and the time history of atmospheric aerosols (which represents the late 20th century offsetting tropical Atlantic aerosol cooling) explains 85% of the observed variation in tropical Atlantic SST.

    Mann’s statement that statistics of hurricane activity and climate are analogous to statistics for dice glosses over a fundamental sense in which the statistics are not analogous. Understanding the science of dice and calculating statistics for expected outcomes for dice are straightforward because of 1) the relatively simple process of tossing the dice, 2) the simple, stable physical properties of the dice cubes and physical world the dice pass through and land on (air, climate, table, walls, floor or other medium against which dice are tossed, etc.), 3) the short time for a dice toss (a few seconds), 4) the extremely tiny geographic extent of the phenomena (generally less than a meter or two), 5) the easy capacity to generate a record of hundreds or thousands or tens of thousands of outcomes of dice rolls, providing a sound basis for statistics and statistical reasoning.

    In contrast, understanding the science of hurricane activity and climate and potential human influences and calculating statistics for expected outcomes for climate, such as hurricanes in the Atlantic, is the epitome of complexity. This complexity is due to many factors, including the dynamic and diverse physical properties of the ocean, atmosphere and terrestrial environment. Understanding the science of hurricane activity and climate and potential human influences and calculating statistics for expected outcomes is the extreme opposite of the five factors (above) that make understanding dice and dice statistics easily achievable.

    For example, Mann’s two graphs (Figure 1 and 2) appear to be based on about 135 and 130 years (outcomes), respectively. Statistics can be calculated for these outcomes, and may suggest some possible hypotheses to explain the 135 outcomes. However, is 135 years (outcomes) of complex climate phenomena a sufficient basis for asserting that humans are likely causing an increase in hurricane activity in the Atlantic? Is it a sufficient basis for alarming the public, disrupting the economy, and betting billions of dollars?

    Assume someone gives you a pair of dice and you do not know whether the dice are loaded or unloaded (fair, random). Would 135 tosses of the dice in which sixes came up twice as often as expected be sufficient basis for concluding that the dice are loaded, and if you conclude the dice are loaded, will you now bet billions of dollars that the dice will continue to roll more sixes than expected?

    Regardless of your answer to the dice question, it should be apparent that 135 tosses of the dice are fundamentally not analogous to 135 years of climate records, in regard to the nature of the physical phenomena and the basis for deriving meaningful statistics. Hurricanes in the Atlantic are orders of magnitude more complex than dice tosses.

    Right now, the Atlantic hurricane climate record is not long enough, and the scientific understanding of hurricanes is not advanced enough, to establish firm conclusions…the kind of conclusions warranting bets of hundreds of billions of dollars.

    The problem is that some climate scientists appear to regard their statistical analysis and pronouncements on climate as on the same order as statistical analysis and pronouncements on dice. Hurricanes in the Atlantic Ocean and global climate change are many orders of magnitude more difficult than dice to understand or to characterize statistically.

  20. Mark T.
    Posted Feb 11, 2007 at 4:35 PM | Permalink

    Tom C., good post. I have been thinking about the “dice analogy” used by Mann and I concluded that he was actually referring to the pips on a die, over multiple rolls, rather than the sum of more than one die. I.e. his “set” was nothing more than multiple rolls and his outcome was how many times a 6 showed up. That said, his percentages were correct, though his intent… oh boy.

    Until your post in 19, nobody had caught the real problem with Mann’s analysis. Even giving him the “basic statistics for laymen” benefit, he really blew it. One cannot conclude a true trend from any finite set of observations. It is possible, no matter how many times you roll a die, that it will come up 6s repeatedly (or more often than your expected 16.7%), while still maintaining perfect uniform randomness. Granted, the more 6s (or any number) in a given run, the less likely, but still possible. Mann seems to think frequency of occurrence == probability of occurrence; at least, he simplified it to that point to make his otherwise weak CO2-temperature correlation seem as if it is definitively man-made in origin. Pathetic.

    Now, I wonder where Jim “master statistician” Barrett is on this statistical howler? Is it simply that Mann is allowed a little leeway because he’s a believer (oh, sorry, one of the faithful)?


  21. Dave Dardinger
    Posted Feb 12, 2007 at 9:49 AM | Permalink

    Continuing along these lines, the simplest case would be to divide the 135 “dice rolls” in half, say 67 & 68, and then figure out what it would take to give us twice the number of 6s in one half as in the other. We’d get about 8 and 16 respectively vs. 11 & 12 expected. Not very great deviations, I think everyone would agree. Note that while we can calculate the true expectation of 1 in 6 sixes, that can’t be done for hurricanes; we can only calculate the ratio of the two halves. This demonstrates one failure by Mann to explain the statistical basis even at an elementary level.
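
    For those who want numbers, a short sketch (binom_tail is my own helper, nothing from Mann’s post) computes the expected sixes per half and how surprising the larger half would be for a fair die:

    ```python
    import math

    # Split 135 rolls into halves of 67 and 68: expected sixes per half
    # for a fair die are about 11, matching the 11 & 12 quoted above.
    e1, e2 = 67 / 6, 68 / 6

    def binom_tail(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p), by direct summation."""
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # How surprising would 16 sixes in 68 fair rolls be (vs ~11.3 expected)?
    # The tail probability is on the order of 0.1 - noticeable but hardly
    # decisive, consistent with "not very great deviations".
    print(round(e1, 1), round(e2, 1), round(binom_tail(68, 16, 1 / 6), 3))
    ```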

  22. Posted Feb 13, 2007 at 5:05 PM | Permalink

    Have you asked Elsner? I’ve emailed him one or two times; he’s quite helpful. The answer to not showing both plots may turn out to be something like “the reviewers insisted we only show the main results”. The answer to not specifically saying whether the graph matches the corrections could be “whoops”!

    The three or four Elsner papers I’ve looked at strike me as being written by someone who is loath to fiddle with or throw out data, and who makes rather more cautious conclusions than some other people.

  23. David Smith
    Posted Feb 27, 2007 at 7:05 PM | Permalink

    A new paper on global tropical cyclone trends is discussed here. (This may have been mentioned earlier today at CA but I could not find that post, sorry.)

    It says that, when the data are put on an apples-to-apples basis, there is insufficient evidence to conclude that global storms are getting worse. In fact, for the past 20 or so years, there is basically no trend in PDI or the other intensity measures, despite higher sea surface temperatures.

  24. Ken Fritsch
    Posted Feb 28, 2007 at 10:55 AM | Permalink

    Re: #23

    This appears to be a continuation of the discussions/presentations that Kossin and Holland gave at a conference in 2006. Kossin showed that, with “corrected” records going back to 1984, there are no trends in worldwide PDI or hurricane counts (there is an increase in NATL TS activity, but not in the worldwide average), while Holland and company continue to confine their arguments to the NATL, attributing the increased TS activity there to AGW without, as far as I can tell, acknowledging Kossin’s results.

    From afar, I get the impression that Kossin is almost apologetic about his results while Holland and company seem to be confidently pushing forward with theirs.

  25. David Smith
    Posted Feb 28, 2007 at 12:11 PM | Permalink

    Ken, until recently I thought that the Holland Webster et al group had inadvertently used bad historical information which others (like Landsea) had over-sold as being of good quality. OK, shame on everyone, their attention to detail was poor but there were good intentions.

    I also figured that they were reluctant to soften their position on so public an issue because (1) mea culpa is no fun and (2) I sense that some of the Hurricane War combatants genuinely dislike some other combatants, maybe for decades.

    But, then one of the group posted that hurricanes have “traction” with the public that other issues, like permafrost, lack. Hurricanes motivate the US body politic, permafrost does not. Hmmm.

    That makes me wonder if the group will continue to make their scareplots and apocalyptic forecasts, to help drive “action now”, rather than just put the subject on the back burner until better data is available.

  26. Bob K
    Posted Feb 28, 2007 at 12:37 PM | Permalink

    I just compared the category 5 tracks for the W. Pacific and Atlantic basins for the period 1945-2003, which is the entire time period I have for the W. Pacific records.

    W. Pacific had 150 cat. 5 storms containing 760 cat. 5 tracks.
    Atlantic had 20 cat. 5 storms containing 82 cat. 5 tracks.

    Both areas with those cat 5 tracks are approximately the same size and same location relative to the equator and land to the west.

    Does anyone have readily available SST for the areas for comparison?

    That would be 120E-160E by 10N-30N for the Pacific and 95W-55W by 10N-30N for the Atlantic.

    Just wondering if the SST history is about the same.
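
    In case it helps, here is a rough sketch of the box-averaging step (the grid, values and function name are hypothetical; longitudes are taken as degrees E for simplicity, and the real series would come from products such as the NCEP or Kaplan reanalyses):

    ```python
    # Average a gridded SST field over a lat/lon box by enumerating the
    # grid points that fall inside the box.
    def box_mean(field, lats, lons, lat_range, lon_range):
        """Mean of field[i][j] over grid points inside the box."""
        lo_lat, hi_lat = lat_range
        lo_lon, hi_lon = lon_range
        vals = [field[i][j]
                for i, la in enumerate(lats)
                for j, ln in enumerate(lons)
                if lo_lat <= la <= hi_lat and lo_lon <= ln <= hi_lon]
        return sum(vals) / len(vals)

    # e.g. the W. Pacific box above would be lat_range=(10, 30),
    # lon_range=(120, 160), applied month by month to build a time series.
    ```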

  27. David Smith
    Posted Feb 28, 2007 at 2:18 PM | Permalink

    Bob, you can generate a time series of those areas using this site, either the NCEP or the Kaplan button. The plots start about 1950. (Note that the Atlantic box you specified will include part of the eastern Pacific.)

    Willis E. has a way to get better plots than these, so think of these as quick-and-dirty:

    Atlantic, July-Nov

    W Pacific, July – Nov

  28. Bob K
    Posted Feb 28, 2007 at 11:10 PM | Permalink

    Thanks for links Dave. I’ll make use of them.

  29. Bob Koss
    Posted Mar 2, 2007 at 12:52 AM | Permalink

    According to Kossin, they used a 23-year record of world-wide storms.

    Using a homogeneous record, we were not able to corroborate the presence of upward trends in hurricane intensity over the past two decades in any basin other than the Atlantic. Since the Atlantic basin accounts for less than 15% of global hurricane activity, this result poses a challenge to hypotheses that directly relate globally increasing tropical SST to increases in long-term mean global hurricane intensity.

    Figured I’d create my own world-wide database by combining the individual ones.
    I used the years 1983-2003 to filter the data. The reasoning was that 1983 is the first year in which all storms have a recorded wind speed, and a couple of the individual databases I have don’t contain records past 2003. Figured I’d make sure I was comparing apples to apples.

    If he wanted to be perfectly frank, he could have truncated the bold part of that sentence. Guess he’s trying to avoid making too many waves.

    Here is a comparison of the Atlantic, World-wide, and W. Pacific basins during the time period. There sure doesn’t seem to be anything out of the ordinary happening, compared to world-wide.

    ____________________Atlantic
    **Storm filters.**
    Year 1983 to 2003
    Storms with wind: 232
    Peak wind: Mode 50 Median 65 Mean 73 Std. Dev. 29
    TS: 105 Cat 1: 55 Cat 2: 24 Cat 3: 20 Cat 4: 23 Cat 5: 5
    Tracks with wind: 7377
    Wind: Mode 30 Median 45 Mean 50 Std. Dev. 25
    TD: 2111 TS: 3284 Cat 1: 1148 Cat 2: 392 Cat 3: 224 Cat 4: 197 Cat 5: 21
    ____________________World-wide
    **Storm filters.**
    Year 1983 to 2003
    Storms with wind: 1956
    Peak wind: Mode 45 Median 65 Mean 73 Std. Dev. 33
    TD: 128 TS: 799 Cat 1: 353 Cat 2: 181 Cat 3: 165 Cat 4: 243 Cat 5: 87
    Tracks with wind: 60936 **No wind:** 11
    Wind: Mode 25 Median 40 Mean 48 Std. Dev. 28
    TD: 24155 TS: 21643 Cat 1: 7157 Cat 2: 3380 Cat 3: 2142 Cat 4: 2093 Cat 5: 366
    ____________________W. Pacific
    **Storm filters.**
    Year 1983 to 2003
    Storms with wind: 665
    Peak wind: Mode 30 Median 70 Mean 75 Std. Dev. 38
    TD: 92 TS: 206 Cat 1: 110 Cat 2: 68 Cat 3: 45 Cat 4: 86 Cat 5: 58
    Tracks with wind: 22599
    Wind: Mode 25 Median 35 Mean 48 Std. Dev. 31
    TD: 9955 TS: 6576 Cat 1: 2676 Cat 2: 1346 Cat 3: 784 Cat 4: 1006 Cat 5: 256
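
    For reference, counts like those above can be produced by binning best-track winds (knots) at the usual Saffir-Simpson boundaries; a sketch of the binning (the exact thresholds are my assumption, not stated in the databases):

    ```python
    # Bin a best-track wind speed (knots) into the categories used above,
    # at commonly used Saffir-Simpson boundaries.
    def category(wind_kt):
        if wind_kt < 34:
            return "TD"
        if wind_kt < 64:
            return "TS"
        if wind_kt < 83:
            return "Cat 1"
        if wind_kt < 96:
            return "Cat 2"
        if wind_kt < 113:
            return "Cat 3"
        if wind_kt < 137:
            return "Cat 4"
        return "Cat 5"

    print([category(w) for w in (30, 50, 65, 120, 140)])
    # ['TD', 'TS', 'Cat 1', 'Cat 4', 'Cat 5']
    ```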

    Here’s a 255k gif of the world, showing the tracks for this period of time. Actual size is 3600×1800. Some browsers may show it in reduced size but probably have a setting to convert it to full size if you want to see the detail. TD and TS are green. Cat:1-2 blue. Cat:3-4 are red. Cat:5 are yellow.
    Gif image
    Poster formerly known as Bob K. And no, my name hasn’t been shortened from Kossin. :-)

  30. David Smith
    Posted Mar 2, 2007 at 6:10 AM | Permalink

    Re #29 Excellent check and plot, Bob (Formerly Known as K).

    If I recall correctly, the decade 1985-1994 in the Atlantic was the least-active in the past 70 years or so. That quiet period helps make the Atlantic 1983-2006 upslope look so strong.

  31. jae
    Posted Mar 2, 2007 at 10:48 AM | Permalink

    29: Why are there no storms near South America?

  32. David Smith
    Posted Mar 2, 2007 at 1:58 PM | Permalink

    Re #31 Near South America the upper winds are too strong (tearing apart any seedling), there are very few seedlings and the waters are generally below the “magic temperature” of 26C for storm formation.

    It’s a tough neighbourhood.

    A hurricane did form in 2005, and there have been other weaker storms off and on, but they are freakish.

  33. Steve Sadlov
    Posted Mar 2, 2007 at 2:00 PM | Permalink

    Gavin apparently got irritated a bit by Kosson et al:

    No real surprise … 🙂

  34. Steve Sadlov
    Posted Mar 2, 2007 at 2:01 PM | Permalink

    Kosson > Kossin

  35. Steve Sadlov
    Posted Mar 2, 2007 at 2:44 PM | Permalink

    RE: #34 – In the RC thread, both Drs. Curry and Kossin have been participating. Recommended!

  36. Roger Dueck
    Posted Mar 2, 2007 at 5:56 PM | Permalink

    #33 – Steve, I was struck by this comment:

    However, rather than this study being taken for what it is – a preliminary and useful attempt to make homogeneous a part of the data (1983 to 2005) – it is unfortunately being treated as if it was the definitive last word. We’ve often made the point that single papers are not generally the breakthroughs that are sometimes implied in press releases or commentary sites and this case is a good example of that.

    Did you notice he was particularly irritated by the fact that “people” (who would that be, the IPCC?) take one paper as definitive, when we should all know how to pick the cases in which one paper SHOULD be taken as definitive, as in the case of MBH98.

  37. David Smith
    Posted Mar 2, 2007 at 7:06 PM | Permalink

    Re #35, #36 Trenberth’s reply makes me feel a bit sad for the man. Curry continues to be the most level-headed of them all.

    gavin schmidt is funny.

    I’ve been watching the Southern Hemisphere tropical activity. So far, the numbers of storms are normal and the intensities are slightly below normal. The season is about half over. No surprises, no apocalypse.

    I’ve also been watching the sub-surface ocean temperatures, and believe that 2007 will not be the warmest on record. La Nina looks imminent and there is cool water lurking beneath the surface in two important parts of the Indo-Pacific Warm Pool. The mid-latitude Pacific waters are in a cool-phase PDO pattern. The Arctic Oscillation continues to be in a negative (ice retention) phase. The NCEP global surface temperature for February is unimpressive. I won’t be surprised if 2007 is the coolest year since 2000.

  38. Steve Sadlov
    Posted Mar 2, 2007 at 8:06 PM | Permalink

    RE: #37 – Australia and New Zealand are having a largely cool summer. Right around the solstice there were still some impressive low-elevation snow events along the Southern Ocean coast of Australia, as well as Tasmania and the South Island of NZ. I also saw a low-elevation snow event report for the Chile-Argentina border area, affecting one of the Andes passes. It will be interesting to see if the NH summer is similar.

  39. Stan Palmer
    Posted Mar 3, 2007 at 7:05 AM | Permalink

    What I found interesting in Kossin’s comments, given Trenberth’s public statements on hurricanes, was:

    Dr. Trenberth, I am puzzled by how forcefully you are dismissing our results while you apparently have virtually no knowledge of how intensity estimates are formed or how best track records are constructed. You seem to know very little about many of the things that a hurricane researcher would consider very fundamental. There’s no crime in this of course, but you’re taking an adversarial tack on these things and I think that’s unreasonable.

    This does put two things in perspective:

    a) Landsea’s resignation from the IPCC in response to public statements by Trenberth on hurricanes


    b) Public statements by IPCC officials in general

  40. David Smith
    Posted Mar 3, 2007 at 9:10 AM | Permalink

    I wish I understood the Hoyos et al methodology well enough to be able to plug in Kossin’s (UW/NCDC) storm data and see if the conclusions still hold.

    Hoyos found a trend in storm intensity and a trend in SST but no trend in shear, humidity and deformation. The conclusion was that the SST “shared information with” storms, implying a connection.

    Now, if there is no trend in storm intensity, yet SSTs are rising, it seems like the evidence now points in the opposite direction: decadal SST changes and intensity are unconnected.

    (BTW, this is the paper whose SST plots I was unable to replicate in detail. The Hoyos SST and Webster SST plots, by almost the same group, are not identical in all basins. I never could figure out why. (SST are Figure 1 in both papers.))

  41. Ken Fritsch
    Posted Mar 3, 2007 at 2:27 PM | Permalink

    What I get out of the RC exchange is that Jim Kossin seems well able to defend the work at hand articulately, which makes some of the remarks by Gavin, Trenberth and Curry appear out of place in a scientific discussion, i.e. a mite too off-handed. I refer to the application of Kossin’s adjustment algorithm to other hurricane-generating basins.

    This entire hurricane debate shows me that there is a great amount of interest out there, in both the policy and climate science communities, in making a case for extreme weather in terms of relatively small changes in global temperatures. That it has been overdone with hurricanes is becoming rather more apparent as time passes and more is written on the matter. I would think the scientific side of some of those who expressed deep concern about hurricanes earlier would now be seriously considering backing off, although their policy-advocacy side may not be listening.

  42. David Smith
    Posted Mar 3, 2007 at 7:02 PM | Permalink

    Re #41 Ken, I agree. The sensible things to do would be to (1) continue to reanalyze historical data, so as to get things on an apples-to-apples basis, (2) continue to improve the understanding of how the tropics and cyclones work, and (3) avoid publicity, other than joint statements advocating better building codes and avoiding barrier islands.

    The temptations will be when a storm smashes Tampa and the BBC, CBC and CNN call, or when there’s a chance to influence legislation or add support to a public initiative.

    I expect that some will be sensible while others will continue to create scareplots. My guesses:

    Curry – sensible
    Emanuel – scareplots
    Webster – hard to figure. Even though he’s fiery (he even threw two blog-rocks at me in December!) I hope he goes quiet and doesn’t climb on that East-Atlantic-is-Different ledge he hinted about. It would be a narrow ledge indeed.

  43. Posted Mar 7, 2007 at 12:06 AM | Permalink


    mike is fitting a Poisson distribution to TC counts, and finds that 2006 (given that it was an El Nino year) was fairly unusual.

  44. Paul Linsay
    Posted Mar 7, 2007 at 10:42 AM | Permalink

    #43, Mann is correct that the probability of at least 10 storms in a single season is 11%. [I don’t understand what he means by a fit; it’s just Poisson statistics.] On its own that’s a fairly big number, the same as the probability of throwing a 5 with a pair of dice.

    Let’s take this a bit further though. According to Wikipedia, El Ninos occur every two to seven years. Take the average as once every five years. This means that back to 1870 there were about 27 El Nino years. Thus we expect 0.11*27 = 3 El Nino years with at least 10 storms. The probability of seeing at least one 10-or-more-storm season is thus 1 – exp(-3) = 0.95, i.e., 95%. Hardly rare.
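    The back-of-envelope numbers above can be checked in a few lines. This is a sketch, assuming (as #47 suggests) that the fitted El Nino-year storm count is Poisson with mean about 6.3; the slight difference from the quoted 95% comes from using the exact tail probability rather than the rounded 11%.

```python
import math

# Assumption: El Nino-year named-storm counts are Poisson(6.3),
# the mean mentioned in comment #47. Not a fitted value here.
lam = 6.3

# P(10 or more storms in one El Nino season) = 1 - P(X <= 9)
p_tail = 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                   for k in range(10))
print(f"P(>=10 storms | El Nino) ~ {p_tail:.2f}")  # ~0.11, the 11% figure

# With ~27 El Nino seasons since 1870, the expected number of
# 10-or-more-storm El Nino seasons is about 27 * 0.11 ~ 3, and the
# chance of seeing at least one such season is then
n_seasons = 27
expected = n_seasons * p_tail
p_at_least_one = 1.0 - math.exp(-expected)
print(f"P(at least one such season) ~ {p_at_least_one:.2f}")  # ~0.94
```

    Hardly rare, as the comment says: under these assumptions, a 10-storm El Nino season since 1870 is close to a sure thing.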

  45. Ken Fritsch
    Posted Mar 7, 2007 at 11:56 AM | Permalink

    Re: #44

    I think the best reaction, as it would be in many of these situations, would be to say: Mikey is right — as far as he goes. How was the season in terms of hurricanes, PDI, ACE and major hurricanes? How strong was the El Nino effect? What about the cyclical nature of hurricane activity? I do not know off the top of my head, but if that information (or any of it) is available it would be interesting to see.

  46. Posted Mar 7, 2007 at 1:10 PM | Permalink

    All events are unlikely, because of AGW. Except if the event itself is very likely, then it is unlikely given some other event. This is due to AGW.

  47. Posted Mar 7, 2007 at 2:06 PM | Permalink


    I don’t understand what he means by a fit, it’s just Poisson statistics

    He is fitting the TC data to a Poisson distribution. The 11% probability holds if the Poisson is the correct assumption and if the variance was estimated accurately. But if the TC data (given El Nino = TRUE) are realizations of Poisson(6.3), then there cannot be an AGW effect in that data. Inactive season given that SST explains it all, active season if you take El Nino and assume no SST connection 😉 I still think that Mann should consult the nearest statistician.

  48. David Smith
    Posted Mar 7, 2007 at 2:51 PM | Permalink

    It’s worth recalling that 2006 was not a typical El Nino year. El Nino arrived quite late and appears to have had little effect on the season until sometime in October, after the traditional peak of the season.

    Also of note is that the 10th storm of 2006 was not discovered until months later, when the Hurricane Center studied old satellite images and additional data and decided that a mid-Atlantic swirl was probably a weak tropical storm. It lasted all of 18 hours. That type of system would probably not have been detected prior to about 1980, when satellite coverage and capability improved.

    I think a more accurate description of 2006 is that of a season with 9 storms and neutral-to-weak-El Nino conditions.

  49. Posted Mar 7, 2007 at 3:21 PM | Permalink

    TC count analysis needs to be carried out conditionally, given the El Nino index. I’ll check how many times El Nino is mentioned in the Mann and Emanuel Eos 2006 article.

  50. David Smith
    Posted Mar 7, 2007 at 6:15 PM | Permalink

    If I was trying to filter out the effect of El Nino on TC count, I’d:

    * use the average ENSO multivariate index ( here ) during May-October of a year, instead of the SOI or some crude category of “El Nino year”, “La Nina year” or “Neutral”

    * eliminate subtropical cyclones from the storm count. They are not tropical cyclones.

    * eliminate storms which lasted 24 hours or less, as these weak systems were rarely detected or recorded before the era of good satellites.
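    The three filters above can be sketched as follows. This is illustrative only: the table and its column names (type, duration_hr) are hypothetical stand-ins, not actual HURDAT fields, and the yearly May-October mean ENSO index would still need to be merged in for the actual comparison.

```python
import pandas as pd

# Hypothetical storm records; real data would come from HURDAT,
# with fields mapped to these illustrative column names.
storms = pd.DataFrame({
    "year":        [2004, 2005, 2005, 2006, 2006],
    "name":        ["Alex", "Katrina", "Unnamed", "Ernesto", "Swirl"],
    "type":        ["tropical", "tropical", "subtropical", "tropical", "tropical"],
    "duration_hr": [120, 168, 30, 96, 18],  # illustrative values only
})

filtered = storms[
    (storms["type"] == "tropical")      # drop subtropical cyclones
    & (storms["duration_hr"] > 24)      # drop short-lived systems that
]                                       # pre-1980 observing would miss

# Filtered storm count per season; this series would then be compared
# against the May-October mean ENSO multivariate index for each year.
counts = filtered.groupby("year").size()
```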

  51. bernie
    Posted Mar 7, 2007 at 7:05 PM | Permalink

    I also saw the debate on the numbers at RC. The time period covered and the need to separate El Nino years make almost any statistical statement problematic. You guys are pros and have more data to hand, but to use the dice analogy: how many times do you need to throw a die before you have a reasonable shot at spotting a rigged die? I count about 8 El Nino years in Kossin’s data (which he selects because he is using the same satellite yardstick), of which 2 have 10 storms. The SD for this period is uniquely small compared to the entire data set. Sometimes it is better to “check” than to bet (unless you know who rigged the dice!).

  52. Posted Mar 7, 2007 at 11:21 PM | Permalink

    mike makes a prediction; that is a very positive sign from the scientific point of view: more than 15 named storms in the next season.

  53. David Smith
    Posted Mar 8, 2007 at 7:50 AM | Permalink

    Re #52 As you note, it’s good to see people make predictions. The average number of storms since 1995 has been about 15. Since this will be an ENSO-neutral or La Nina season, a prediction of more than average is probably pretty safe. My guess is that Mann has maybe a 70-80% chance of being right.
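    For what it’s worth, under a Poisson model the 70-80% figure implies an assumed seasonal mean well above 15: if the mean really were the post-1995 average of about 15, “more than 15” storms would come up a bit under half the time. A sketch; the mean rates 15, 18 and 20 are illustrative assumptions, not fitted values:

```python
import math

def p_more_than(threshold, lam):
    """P(X > threshold) for a Poisson(lam) seasonal storm count."""
    return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                     for k in range(threshold + 1))

# How the odds of "more than 15 named storms" depend on the assumed
# seasonal mean rate (values chosen for illustration only).
for lam in (15.0, 18.0, 20.0):
    print(lam, round(p_more_than(15, lam), 2))
```

    A 70-80% chance of the prediction verifying is thus roughly equivalent to assuming an underlying rate of 18-20 storms for the season.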

  54. David Smith
    Posted Mar 14, 2007 at 9:21 PM | Permalink

    Slide 9 of this presentation shows a couple of interesting correlation maps. These maps correlate seasonal sea surface temperature with “ACE”. “ACE” is a measure of the combined duration and intensity of a season’s hurricanes and tropical storms. ACE is a better indication of a hurricane season’s activity than storm count.

    For the Atlantic, the (lower) map shows some correlation between high-ACE (active) hurricane seasons and elevated temperatures in the tropical Atlantic. But the greater correlation is with the traditional Atlantic multidecadal oscillation pattern (warmer eastern and tropical Atlantic, cooler western Atlantic, with especially warm temperatures off the coast of Spain). This correlation is classical Bill Gray.

    (Even though this lower map is for Atlantic storms, it does show correlation colors in the Pacific. Those colors are the fingerprint of La Nina.)

    For Pacific activity, the (upper) map shows, remarkably, no correlation between western Pacific typhoons and western Pacific sea surface temperatures. What the colors do show is a positive correlation between western Pacific typhoons and the warm-phase PDO (and also some correlation with El Nino).

    What these indicate is that, if one wants to detect the causes of decadal trends in tropical cyclones, one should look at the classical atmospheric modes (AMO, PDO, ENSO) more so than just SST in the development regions. In fact, SST looks more like a consequence than a cause.

    (Now, a moment from the Jeopardy game show. Answer: zero. Question: what is the chance that you’ll see these charts on Real Climate?)

  55. Bob Koss
    Posted May 2, 2007 at 5:28 AM | Permalink

    The hurricane season is upon us. So I’ve examined one of the current ideas being pushed.

    Hypothesis: Hurricanes are unnaturally increasing in number and/or severity.

    Here I compared three 50-year time periods by hurricane-force readings only: simply a total for each period.

    It appears there might be something to the idea of getting more and stronger storms. Storm figures go up as time passes.

    I extracted the initial and final track data for only the hurricanes. Here are the initial wind speed data.

    Big discrepancy; it looks like an observation-coverage issue. It appears the 1800’s were a little meaner than it first looked.
    Many readings must have been missed in the 1800’s. With 100 storms not found until they had achieved hurricane strength, there are probably entire storms missing. A couple of storms were at 100 kts or more before discovery; not one was at hurricane speed in the latest time frame.

    Here are the final track data.

    Once again, an observation-coverage problem in the 1800’s. Now I’m starting to think it might have been nastier in the 1800’s. For eighty-some-odd storms, detection was lost at hurricane speeds: four times the number from the latest time frame.

    Here’s a count of tracks with hurricane force winds from 0-300 miles and 301-1200 miles from land for 1851-1900 and 1951-2000.

    Period     0-300 miles from land   301-1200 miles from land
    1851-1900  3342 tracks             1165 tracks
    1951-2000  2933 tracks             1921 tracks

    Looks like observation bias again. If a ship wasn’t there or didn’t make it back, it didn’t get recorded.

    They’re right when they say the older records aren’t as reliable, but most of the unreliability comes from missed data. I doubt the wind speeds can be seriously wrong. I think the 1800’s can and should be used as a rough gauge against the more recent years. I can’t put a figure to the data-quantity adjustment needed, but it’s substantial enough that the 1800’s might even have been the worst of the periods.
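    One way to read the counts in the table above: if open-ocean detection had been equally good in both eras, the ratio of far-from-land to near-land hurricane-force tracks should be roughly stable across periods. A quick check, using only the figures given:

```python
# Hurricane-force track counts by distance from land, from the table
# in comment #55 (1851-1900 vs 1951-2000).
near_1800s, far_1800s = 3342, 1165   # 1851-1900
near_1900s, far_1900s = 2933, 1921   # 1951-2000

# Far-to-near ratio per era: if offshore detection were equally good
# in both eras, these should be comparable.
ratio_1800s = far_1800s / near_1800s   # ~0.35
ratio_1900s = far_1900s / near_1900s   # ~0.65
```

    The far-from-land share nearly doubles between the two periods, which is consistent with the suggestion that many open-ocean tracks simply went unrecorded in the 1800’s.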

    It was certainly colder back then. What kind of hurricane-making ability could the water have had back then?

    I can’t reasonably accept the stated hypothesis.

  56. David Smith
    Posted May 2, 2007 at 11:52 AM | Permalink

    Nice charts, Bob. They illustrate in yet another way the problems with the historical data.

    I do think that wind estimates are a problem, even today. Usually the strongest part of a storm’s life is when it is at sea, yet pre-1950 the only way to estimate at-sea strength was from ship observations. Those encounters were few and likely missed the centers of the storms and/or the times of peak intensity. It’s a guessing game.

    If one takes Kerry Emanuel’s Figure 1 ( link ) at face value, and mentally extrapolates backward into the 19th century (SST maybe 0.5-1.0K lower than today), then one would expect there to have been minimal hurricane activity in the 19th century. That expectation sure doesn’t match what likely occurred back then, especially since the records likely understate both the occurrence and intensity of those 19th-century storms.

  57. James Erlandson
    Posted May 2, 2007 at 7:00 PM | Permalink

    SciGuy, a science guy blog (!) at the Houston Chronicle writes about hurricane frequency.

    Landsea’s argument, in contrast to the likes of Kerry Emanuel, Greg Holland, Judith Curry and others, is that observers missed so many storms during the pre-satellite era that a re-analysis of past data might explain why hurricanes seem to have become more common and destructive in the last 30 years.

    What is becoming clear with Landsea’s new work, along with this recent article in Geophysical Research Letters, is that the debate over global warming and hurricane activity remains very far from being settled. Anyone who tells you otherwise is ignoring the scientific literature.

    The piece was picked up by the Energy Roundup blog at the Wall Street Journal.
    Mainstream Media is peeking into the skeptic’s tent.

  58. bender
    Posted May 2, 2007 at 7:27 PM | Permalink

    The immutable fact is: powerful hurricanes are not all THAT much more common, even if you consider the alleged biases.

    People are asking whether there IS OR IS NOT a connection between AGW and hurricane dynamics. That’s not the right question. Nature is not so black-and-white. The right question is QUANTITATIVE: how, precisely, do frequency and intensity vary with a change in GHGs, SSTs, etc.? The answer, which will not change, is: very, very little. Hurricanes are a serious problem with or without AGW.

    Follow the Money will tell you: over-attribution to AGW is the next big insurance scam.

  59. Bob Koss
    Posted May 4, 2007 at 6:35 AM | Permalink

    Here are a couple more charts to demonstrate that observational ability is very likely the reason Atlantic basin storms appear to be getting worse.
    The coordinates for the first chart are the center of a ten-degree grid box. The other two are centers of a ten-degree swath.

    The first chart is rather busy, so I highlighted the main area of discrepancy. It roughly corresponds to the empty area of ocean between Bermuda and the Azores.

  60. Posted Aug 25, 2007 at 9:12 AM | Permalink

    The attached table clearly illustrates why there were so few storms [only 10] in 2006 and why the previous years, 1998-2005, were so much more active in terms of named storms [16-28 storms/year]. The table shows, for example, that during 2003 there were 16 named storms and twenty [20] X-class solar flares during the main hurricane season of June 1 - November 30. Three of the solar flares were very large ones, like X28, X17 and X10. On the other hand, during 2006 there were only 10 named storms and only 4 X-size solar flares, none of which occurred during the hurricane season. During 2005 and 2003 there were 100 and 162 respectively of M-size solar flares, while in 2006 there were only 10.

    The 2000-2005 increase in named storms was not due to global warming, or the years 2006-2007 would have continued to be high in terms of storms. During the period 2000-2005, much more electrical energy was pumped into our atmosphere by the solar flares, especially the larger X-size flares. There may also have been a planetary electrical field increase brought on by the close passing of several major comets and special planetary alignments, as during September 6, 1999 and August 26-29, 2003. The year 2007 will likely be similar to 2006, with fewer storms, as there has been no major solar flaring to date and no major passing comets. It is possible but unlikely that major solar flaring will take place during a solar minimum year, which 2007 is. Unless there is significantly more solar flaring during the latter part of this year, the number of named storms will again be closer to the average of 9-10 and not the 15-17 originally predicted, nor the current predictions of some 13-15 storms.

    YEAR  X FLARES (largest)   X FLARES IN SEASON*  EL NINO  STORMS adj. / not adj.  NOTES
    1996   1                    1                            13 / 12                 solar min, HALE-BOPP
    1997   3 (X9.4)             3                   YES       8 /  7
    1998  14                   10                   NA       15 / 14
    1999   4                    4                            13 / 12                 LEE, YES
    2000  17 (X5.7)            13                            16 / 15                 solar max, ENCKE
    2001  18 (X20, X14.4)       8                            16 / 15                 LINEAR, YES
    2002  11                    9                   YES      13 / 12
    2003  20 (X28, X17, X10)   15                            16 / 16                 NEAT V1, YES
    2004  12                   11                   YES      15 / 15
    2005  18 (X17)             12                   NA       28 / 28
    2006   4 (X9)               0                   YES      10 / 10
    2007   0 (to date)          0 (to date)         NA        5 /  5 (to date)       solar min

    * assumed season June 1 to
    C & M flares were not included.
    Some flares last longer and deposit more energy; this was not noted.

    NA: EL NINO present but not during hurricane season
    Very minor EL NINO months at the beginning of year
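    Taking the commenter’s table at face value, one can compute the correlation between the total X-flare counts and the unadjusted named-storm counts for 1996-2006. A positive correlation here would not establish any causal link: in a sample of 11 years, a single outlier season (2005, with 28 storms) can dominate the statistic, and both series happen to peak in the active mid-2000s. A sketch:

```python
import math

# Values transcribed from the table in the comment above
# (1996-2006): total X-class flares and unadjusted named storms.
x_flares = [1, 3, 14, 4, 17, 18, 11, 20, 12, 18, 4]
storms   = [12, 7, 14, 12, 15, 15, 12, 16, 15, 28, 10]

n = len(x_flares)
mx = sum(x_flares) / n
ms = sum(storms) / n

# Pearson correlation, computed by hand; note how much of the
# covariance comes from the single 2005 season.
cov = sum((x - mx) * (s - ms) for x, s in zip(x_flares, storms))
sx = math.sqrt(sum((x - mx) ** 2 for x in x_flares))
ss = math.sqrt(sum((s - ms) ** 2 for s in storms))
r = cov / (sx * ss)
```

    Whatever value comes out, eleven points with one dominant outlier is far too little to distinguish a flare effect from the well-documented multidecadal and ENSO influences discussed earlier in this thread.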
