The following is from Judith Curry:

Whether changes in the characteristics of tropical storms observed in the last few decades [1,2] are the result of only natural variability, due to climate change, or a combination of both factors is the subject of intense debate [3]. Central to the debate is the quality of the tropical cyclone data [3,4].

Here we examine what is inarguably the most reliable data in the global tropical cyclone database: the data in the North Atlantic since 1944 [4,5]. Since 1944, aircraft reconnaissance flights have been made into nearly all of the North Atlantic tropical cyclones, and since 1970 satellite observations have made observing and monitoring tropical cyclones even more accurate. The data are obtained from the NOAA National Hurricane Center best-track data and are plotted in Figure 1. To reveal the decadal and longer-term variability and eliminate the year-to-year variability (e.g. El Niño), an 11-year running mean has been applied to the data. The figure also addresses another issue of contention regarding the North Atlantic tropical cyclone data that has recently emerged [1,4]: whether or not Landsea's [6] adjustment to the intensity of major (category 3, 4, 5) hurricanes prior to 1970 should be made. In Figure 1, the data are presented both with (solid) and without (dashed) the intensity adjustment prior to 1970 recommended by Landsea [6].

Figure 1a shows the time series in the North Atlantic since 1944 of the numbers of named storms (tropical cyclones), hurricanes, and category 4+5 hurricanes. Figure 1b shows the time series of accumulated cyclone energy (ACE) and potential destructive index [1] (PDI), which are integral measures of tropical cyclone activity that include the number, intensity and duration of tropical cyclones. All of the measures of tropical cyclone activity shown in Figure 1 indicate that the period 1944-1964 was associated with elevated tropical cyclone activity and the period 1965-1994 was associated with relatively low activity. The period since 1995 shows tropical cyclone activity that is elevated substantially relative to the active period of 1944-1964.

No attempt is made to determine a linear trend in the time series 1944-2005, owing to the combination of multidecadal natural variability in the North Atlantic and the nonlinearity of the global temperature variations during that period [3]. Instead, we compare the annual average statistics for the recent period of elevated activity since 1995 with averages from the previous period of elevated activity during 1944-1964. Relative to the previous active period 1944-1964, the period since 1995 has averaged (annually) 50% more named storms, 37% more hurricanes, 167% more category 4+5 storms, 55% greater ACE, and 63% greater PDI. If Landsea's intensity adjustment for major hurricanes is not used, the differences between periods are not as large, but are still substantial: 44% more category 4+5 storms, 41% greater ACE, and 31% greater PDI.

This analysis indicates that North Atlantic tropical cyclone data are consistent with assertions [1-3] that a warming sea surface temperature induced by greenhouse warming is contributing to an increase in the intensity and number of North Atlantic tropical cyclones.

*Figure 1: Seasonal tropical cyclone data, filtered by an 11-year running mean. Data are obtained from http://www.aoml.noaa.gov/hrd/hurdat/. 1A: total number of named storms (blue), hurricanes (red), and category 4 + 5 hurricanes. 1B: accumulated cyclone energy (ACE; blue) and potential destructive index (PDI; red). Dashed lines reflect values that do not include the Landsea correction [6].*

(1) Emanuel, K., Nature, 436, 686-688 (2005).

(2) Webster, P.J., G.J. Holland, J.A. Curry, and H.-R. Chang, Science, 309, 1844-1846 (2005).

(3) Curry, J.A., P.J. Webster, and G.J. Holland, Bull. Amer. Meteorol. Soc., 87, 1025-1037 (2006).

(4) Landsea, C.W., B.A. Harper, K. Hoarau, and J.A. Knaff, Science, 313, 452-454 (2006).

(5) Landsea, C.W., R.A. Pielke, A. Mestas-Nunez, and J.A. Knaff, Clim. Change, 42, 89-129 (1999).

(6) Landsea, C.W., Mon. Weather Rev., 121, 1703-1713 (1993).

## 78 Comments

Judy- A few questions:

1. Is your 11-year average centered?

2. Does it include 2006?

3. Can you explain why your Figure 1 appears quite different than Landsea's presentation of the same data? Below:

Steve- Can you fix the size? Didn’t take.

Quick question: don't you need a correlation with SST to make the link between storms and SST (not to mention AGW, which is, IMO, a totally different question)? Isn't Dr. Curry a bit quick in establishing a conclusive link between the three by just showing there have been more storms since 1995?

Also, I find the use of such language as “inarguably the most reliable data” a bit strong, and not of the kink usually found in scientific publications. Obviously, everything is, and should be, “arguable”.

#3 Sorry, it should read “not of the kind”.

Does all the warming have to be from greenhouse warming?

Maybe I'm confused, but I don't understand how the analysis (thanks for it!) supports the statement, as the relation of the variables (hurricane count, ACE, PDI) and the SST is not analysed, i.e. the cause of the increased activity is not attributed. I'm not saying it would not be the reason, merely that I don't see how the analysis supports the statement "a warming sea surface temperature induced by greenhouse warming is contributing".

Klotzbach and Gray

Once again we have a climate science paper with no error bars. In fact, since the paper is about hurricane counts, the data is subject to the statistics of counting. This means that if N storms are counted in a season, the variance on the count is N, and the raw count is correctly displayed as N +- sqrt(N) for one sigma errors. This is true, even if you have 100% efficiency and don’t miss any storms, because the count is the result of a stochastic process.

These errors propagate into the running mean in the usual way. If the weighted mean is

N_mean = SUM(w_k * N_k, k = 1..m)

then the variance of the weighted mean is

v_mean = SUM( w_k^2 * v_k, k = 1..m)

where N_k are the original hurricane counts, v_k their variances (=N_k), and w_k are the weights.

I suggest that you redraw the plots and have one with the raw data with the correct error bars, and a second one with the running mean and its correct error bars. This will give everyone a better idea of what is actually going on.
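The propagation described in the comment is easy to sketch. Here is a minimal version, assuming a uniform (unweighted) 11-point centered window; that weighting is an assumption for illustration, not necessarily what was used for Figure 1:

```python
import numpy as np

def running_mean_with_poisson_errors(counts, window=11):
    """Centered running mean of annual counts, with one-sigma errors
    propagated on the assumption that each count N_k is Poisson
    (variance = N_k). Returns (mean, sigma) arrays of the same length,
    with NaN where the window does not fit."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    w = np.full(window, 1.0 / window)           # uniform weights w_k
    half = window // 2
    mean = np.full(n, np.nan)
    sigma = np.full(n, np.nan)
    for i in range(half, n - half):
        seg = counts[i - half : i + half + 1]
        mean[i] = np.sum(w * seg)               # N_mean = SUM(w_k * N_k)
        sigma[i] = np.sqrt(np.sum(w**2 * seg))  # v_mean = SUM(w_k^2 * v_k), v_k = N_k
    return mean, sigma
```

Note that for a uniform window the smoothed error shrinks by roughly sqrt(window): a steady count of 11 storms per year carries a raw error of sqrt(11) ≈ 3.3, but the 11-year mean carries an error of 1.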

With regard to your statement

At the very least you have to show that the atmospheric temperature increase in the region of the hurricane life-cycle is as large as, if not larger, than the SST and that its growth leads the SST in time.

Quick comment: I know Russ Davis's (1976) correction doesn't apply directly to these graphs. After all, they show no error bars and you do no statistical tests, and the correction can only be applied directly to estimates of error or correlation coefficients.

Nevertheless, as many of us are aware, the integral time scale imposed by an 11-year averaging filter is 5.5 years. (As one would find by quickly integrating to find the area under a triangle with a base of 11 years and a peak of 1.)

That means averaged data going from 1944 to 2004 represent at best 11 "statistically equivalent" data points. (If the underlying data are correlated, which they are, the data represent less than that. If the correlation is multidecadal, we can't even figure out how much to knock that number down. Also, since the centered 11-year average for 1945 includes data as far back as 1940, it actually includes the period one is claiming to remove from the data set. So, really, the "equivalent" number of data points is closer to 7, or lower.)

That said, giving the graph the benefit of the doubt and assuming it represents the equivalent of 11 data points, those eyeballing the data and trying to figure out what it tells us about the three regions should consider the left-hand side to represent 3 to 4 data points, the center or "lower" bit 3 to 4, and the right-hand bit 3 to 4 data points.
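The arithmetic behind the "11 equivalent points" figure can be checked directly. This sketch uses the comment's own convention (record length divided by the one-sided integral time scale of the boxcar filter); be aware that conventions for effective sample size vary, and some authors divide by twice the integral time scale:

```python
import numpy as np

def boxcar_integral_timescale(window=11):
    """One-sided integral time scale of a running-mean (boxcar) filter.
    The autocorrelation of boxcar-smoothed white noise is a triangle,
    rho(k) = (window - |k|) / window for |k| < window; summing the
    one-sided triangle with half weight at lag 0 gives window / 2."""
    lags = np.arange(window)
    rho = (window - lags) / window
    return 0.5 * rho[0] + rho[1:].sum()

def effective_n(n_years, window=11):
    """Equivalent number of independent points, using the comment's
    convention of record length divided by the integral time scale."""
    return n_years / boxcar_integral_timescale(window)
```

For an 11-year filter the time scale is 5.5 years, and `effective_n(61)` for the 1944-2004 record comes out at roughly 11, matching the comment.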

Furthermore, even if there were some correlation between storm frequency and SST, you could still not make a causal link without a causal mechanism. A simplistic assertion of the kind "more heat means stronger storms" would have to be demonstrated by a proper physical model, and confirmed by a comparison of the model with actual data (I'm not talking GCMs here but physical modeling of storm and hurricane formation, which I believe is way beyond what GCMs can do). The model would have to explain storm frequency not only in the Atlantic, but elsewhere in the world, i.e. be of general applicability. Predictive capability for such a model wouldn't hurt either.

From what I've read so far on hurricanes, and someone correct me if I'm wrong, we don't yet have a proper physical model, let alone predictive capability, apart from empirical predictive schemes à la Bill Gray. Storm formation is a complex, multiparameter phenomenon, and seeing a correlation with just one parameter (albeit probably an important one), especially over a geologically minuscule time period, makes attribution at best risky, and, let's say, "arguable".

For my part, I don't understand all the fuss about the hurricane problem. The data are just too poor to extract anything meaningful, which means that Roger and Judith could go on debating forever. Isn't it better, for the time being, to address the problem the way we address earthquakes? We know they will happen sometime, and we know approximately where, but that's about it, so we just get ready for them. That didn't prevent anyone from developing LA and San Francisco.

Clarification: this is a brief essay, not a published paper, written to flesh out the argument in my BAMS article comparing the 50's to the recent decade in view of the uncertainties in the data raised by Landsea. NO, this brief essay does not make the argument for global warming; it merely states that the data are consistent with the assertions made by others. The last paragraph is irrelevant to the main point, which is about Landsea's correction (Steve M, you might want to remove the last paragraph so it doesn't distract from the issue of the Landsea correction).

Roger, the running mean is a centered average. Landsea's plot of PDI should look the same as Emanuel's after 1970. This analysis was done right after Landsea et al. was published (well before the end of the 2006 TC season).

Re Klotzbach and Gray's statement: the earlier active period should be compared with the current active period, which started in 1995 (NOT 1990). The main increase of the recent active period relative to the 50's is in the total number of TCs and the number of category 4+5 storms, but there is also an increase in the total number of hurricanes. Yes, we still have the issues of potential undercounting of TCs in the 50's and errors in intensity observations prior to 1970. But the main point of this is that, with or without Landsea's correction, the recent active phase is more active than the previous active phase in the 50's.

I don’t see what the word “consistent” means here. If the data are (statistically) meaningless, they will be consistent with everything, including the reverse proposition.

Judy-

Thanks, but let me restate my question-

In Landsea’s Nature paper 1949 unbias-corrected PDI is higher than 2004. As is Emanuel’s adjustment (middle curve) in the Figure 1 here:

http://sciencepolicy.colorado.edu/admin/publication_files/resource-1890-2005.48.pdf

In your graph the upturn post-2000 is much higher than what Landsea and Emanuel agreed on. Your data should be the exact same as Landsea’s Figure 1b, right? Are the different presentations of the same data a function of the smoothing?

Note that PDI = power dissipation index (not potential destruction)

Just two comments and a question for the moment

1. The article states

That’s not quite correct and should be reworded, in my opinion.

Storms in the central and eastern Atlantic basins are usually not sampled by reconnaissance. In older days not all storms in the western basin were sampled. Also, sampling in older days was rather spotty compared to modern practices, but that's another story.

Example: I spot-checked the NHC Atlantic storm records for 1958-1963 (pre-1958 are not listed). I counted 12 of 41 storms (30%) as not having reconnaissance flights.

Example: in 2005, there were seven storms (out of twenty-seven total, or 25%) which did not receive reconnaissance flights. These included TS Lee, Major Hurricane Maria, Hurricane Vince, TS Alpha, TS Delta, and Hurricane Epsilon.

I think the pre-1958 records would show even lower percentages.

This may sound like quibbling, but I think the word “most” is more appropriate than “nearly all”.

2. On the subject of intensity estimates, it is worth a minute to look at the report on Hurricane Katrina, in particular Figure 2. Figure 2 shows the estimated windspeed of the storm at various points in its existence, from surface, aircraft, dropsonde, satellite, etc. measurements. Note the spread from one technique to another! This was a modern, ultra-monitored storm, and yet it's you-pick-'em on what the windspeed actually was. Compare that with earlier storms, where the lone data might be a ship report of questionable quality, or a recon flight at one point in time and in one part of the storm, or a post-storm land damage assessment.

3. Instead of 11-year averaging to remove ENSO, why not use unaveraged data for ENSO-neutral years?

RE: #8 – Careful there, you may be engaging in statistical terror. /sarc

RE: #15 – I monitored offshore buoy readings in real time during my waking hours while Ivan recurved its way toward North America. Some buoys were never working quite right. Others had only certain sensing systems working at any given time. One of them went offline (probably damaged by wave strikes) just as the eye wall approached it. And this is just the NOAA buoys.

David-

Re-analysis of Andrew is also telling; see:

http://sunburn.aoml.noaa.gov/hrd/Landsea/landseabams2004.pdf

Consider this conclusion:

This is 1992, and "very likely" spans 18 kts (21 mph), or about 1 full category on the S/S scale. It would be hard to believe that earlier, much less well-sampled storms are better measured.

People can argue about the data quality until they are blue in the face (I've seen it happen ;-) ), but from a statistical standpoint the way to move ahead is to acknowledge the uncertainties and include error/uncertainty factors in all past data.

These uncertainties are nonzero to begin with and invariably increase as the data record goes back in time.

Judith:

Thanks for clarifying that what you posted is a brief essay which you have presented at a BAMS conference, and that you're only suggesting that a quick glance at the curve would seem to support the arguments of those who say AGW is real. After all, the graph goes "up", then "down", then "up but more than before". The key point would be the "up more than before" bit, which one might expect if AGW is real, right? (Please correct me if that is not what you meant.)

I understand your frustration with comments. Blog comments are also just brief casual comments providing our opinions after glancing at the brief, casual essay you presented to your colleagues at BAMS.

I suspect we all agree the graph goes "up", "down", then "up more than before", and that the "up more than before" observation is central to the diagnosis of AGW.

That said, I think we can all agree that many of the readers here are unfamiliar with the Davis (1976) correction. So, they might benefit from reading a quick estimate describing how application to your data might guide their interpretation of the major "up more than before" bit of the data. (BTW, my comment was not aimed at you. I know you are familiar with the Davis (1976) article. It was cited in Holland and Webster 2006, and you called the technique standard when we were discussing H&W.)

For those who are not familiar with the technique: application suggests that what we are seeing in each of the three regions is based on the equivalent of at most 2 to 3 independent data samples, and the whole graph is based on the equivalent of 7-11 independent data samples.

But I would like to close by setting your mind at ease: since you are familiar with this correction, we all know that you would have reminded the BAMS audience of the impact of the correction had the word limit of the conference rules not forced you to edit it out for brevity.

Paul Linsay (#8), are you making some implicit assumptions in your analysis? How do you know that you are dealing with a stochastic process? Also, I'm not experienced with the type of analysis that you suggest, but can you calculate the variance so simply? Are there assumptions about linearity and independence (assuming that the process can be modelled stochastically)?

Re #18 I agree, Roger, and to a big extent CA has already touched on a lot of the data-quality and uncertainty topics.

The one that makes my head spin the most is the use of pre-1900 storm count and intensity data. Dr Curry doesn’t do that, but others do.

Looks to me as though these charts and graphs are the result of desperate people trying to prove what’s not provable. Someone needs to step back and see the forest for the trees. A conclusion reached by using a semi-accurate 30 year record can only be politely described as “weak.”

P.S. If you turn Figure 1a sideways, isn't there a strong resemblance to New Hampshire's "Old Man of the Mountain" just before its demise?

#16

Huh? I thought it’s good science.

#20

The assumption is that hurricanes are independent events. No one is able to predict the next hurricane so I’d say an independent stochastic process is a good guess. It’s just Poisson statistics and is used all the time in physics and physics related measurements like radioactive decay, scintillation counting, photon counting, pulse height distributions. It describes the noise in the CCD inside your digital camera.

Maybe SteveM or Willis would be kind enough to replot the data with the correct errors?

Paul re 23,

I'm fairly certain Steve did not mean to belittle your use of statistics. See the remarks that begin on this thread, then scroll down to #77 and #78 and continue.

I want to bring up a point that I think Roger was pointing at but wasn't explicit about, and that Judith explained but didn't make much of: how reliant the final uptick is on how the data at the end are treated. Since we are assured by Dr. Curry that the 11-year filter is centered and that the 2006 data are not used, anything after 2000 will rely to some extent on years which haven't happened yet. Thus in the (unlikely?) case that 2007 and beyond are more like 2006 than 2005, the peak of the filtered data will come down, perhaps considerably, and may not look much different from that in the first segment of the graph.

According to the data Roger presented, it's filtered by a 1-2-1 filter applied twice, which I believe was stated here a while back to be equivalent to a 1-3-3-1 filter. That is clearly not the same as an 11-year filter and probably has much to do with the difference between the two graphs.
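For what it's worth, applying a 1-2-1 filter twice is the same as convolving the kernel with itself, and a quick check shows the combined kernel is actually 1-4-6-4-1 rather than 1-3-3-1; either way, it is far narrower than an 11-year running mean:

```python
import numpy as np

# A 1-2-1 filter applied twice is the kernel convolved with itself:
once = np.array([1, 2, 1])
twice = np.convolve(once, once)
print(twice)  # [1 4 6 4 1]
```

(Normalized, the weights are 1/16, 4/16, 6/16, 4/16, 1/16: a 5-point binomial smoother, so the two curves are smoothed over very different spans.)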

Margo, this little essay has hitherto not been presented or published or anything prior to its debut in the journal of climateaudit. The calculations were done by an undergraduate student. I quickly drafted some text to rebut the Landsea et al. (2006) paper before realizing that Science does not accept comments on Perspectives. I thought it would be useful to resurrect this little writeup to show the magnitude of the difference that the Landsea correction makes on the various measures of intensity. The BAMS article I referred to is a paper that was discussed here last August that lays out the causal chain connecting TC intensity and AGW in our hypothesis. The issue of AGW-hurricane intensity was raised in the little writeup since Landsea was claiming to have refuted this. I was stating that his arguments about the correction do not refute our hypothesis.

A reminder to everyone: this little note is NOT about a correlation with SST. The graphs compare various measures of intensity with and without the Landsea correction. The 11-yr running mean is for visual representation. The averages for the active periods were simple arithmetic averages. This is all that has been done. No trend has been fitted to the 11-year running mean. The only place that I can see statistical analysis comes in is to assess whether the difference in the means of the two populations (50's and recent active period) is statistically significant. These graphs were calculated from the same NATL data that you all have been using, including the Landsea correction.

This little paper is not intended to convince anyone of the link between intensity and AGW; it is merely to show that the intensity in the current period is greater than the intensity of the previous active period, WHETHER OR NOT you use the Landsea correction. The intention was to put the Landsea correction into some kind of perspective. If you don't use it, it's not like there is a negative trend or anything.

Paul, you are right, the standard univariate Poisson model is the first thing to try for this type of data. However, since the true interest seems to be in strong hurricanes, I would try a multidimensional extension (e.g. a vector of the type (#non-hurricanes, #cat1-2, #cat3-5)). Moreover, I seem to recall that there might be some overdispersion present, so something like negative binomial marginals might be better. Also, I recall reading here that there is some autocorrelation present (especially if you divide your data on a monthly, not yearly, basis). This is why I gave some literature references for anyone who seriously wants to deal with this data.

On a general level, I tend to agree somewhat with Francois (#10) that the data seem to be hard. I would not categorize them as poor, though, or say that no useful information can be extracted from them. Rather, I would say that the hurricane data seem very challenging, and I think up-to-date knowledge about count data models is really needed if one wants to get some lasting results out of the data set. But as I have said before, I know next to nothing about hurricanes, and all this is just based on my intuition about what the data look like (mathematically) to me. I might be completely wrong.

Judith, could you review for us non-experts what exactly (and based on what) the Landsea correction is? I'm sure it's been explained here somewhere but I could not quickly find it.

Judy-

Can you (once again) please comment on why Landsea’s presentation of the exact same data directly contradicts your assertion:

"it is merely to show that the intensity in the current period is greater than the intensity of the previous active period, WHETHER OR NOT you use the Landsea correction. The intention was to put the Landsea correction into some kind of perspective."

Landsea's data shows exactly the opposite. Is this because you and he both cherry-picked smoothing methods to make your points?

Thanks!

#23 (Paul Linsay) says

So then, if SSTs affect hurricanes, the assumption is violated. Whether SSTs do or not might be disputed, but that implies the assumption is disputed. There might be other factors besides SSTs as well. I.e., the quote seems to make my point: the suggested variance calculation is dependent upon disputable assumptions.

(Not a criticism)

One thing I noticed is that Curry points to 1964 as a transition point, I believe based on the 11-year smoothing. But when I look at individual year data for storm count and PDI, it looks to me like circa 1970 is more like the transition point. (All of this has subjectivity, of course – there is no clean transition).

The reason I mention this is this chart of south-to-north (meridional) wind for the Atlantic basin from the equator to 10N. This is for the hurricane season. This region is, generally speaking, the "bottom side" of the ITCZ, where some air flows in from the Southern Hemisphere.

The interesting thing is that those southerly winds are relatively strong from 1950-1970, then suddenly weaken until 1995, at which time they strengthen again. In the early 2000s, they were exceptionally strong. This corresponds with an active hurricane period 1950-1970 (my view), followed by 25 years of dormancy, then an upswing in the mid-1990s and exceptionally strong in 2003-5.

One possible interpretation is that, when the ITCZ is unusually far north, the southerly component of winds is stronger in the 0 to 10N region. And, when the ITCZ is north, the Coriolis effect is stronger and the land effect of South America is less, creating more, and sooner, hurricanes. That's a simple hypothesis.

Over the long haul, the ITCZ follows SST temperature differences. Over the short haul, though, I don’t recall any sudden SST change in 1970 or 1995 that accounts for this. The SST jumps look more like the late 70s and about 2000.

Anyway, there may be causes for ITCZ shifts, and hurricanes, besides “raw” SST increases.

I would like to strongly support Paul Linsay's comments. I would have expected that mathematically the system would obey Poisson statistics, and presenting the data as 11-year running averages hides a vast amount of information; indeed I think it misleads, even if done to try to account for El Niños. In the statistics of small numbers, the approximate mean of the named storms per year is about 10, where 1 standard deviation is about 3, and therefore we would expect 90% of the named storm counts per year to lie within (about) the range 4-16. Since there are about 60 individual year counts, half a dozen or so of those years will be outside that range, and that's only what you would expect from random statistics, pure and simple. The uptick in recent years in the moving average could be caused by a couple of recent extreme years that arise purely from normal random variability.

I am not saying that it is; only that the way the data are presented makes it impossible to tell much about them. Since these are discrete counts, I would much rather see a histogram against year (with suitable caveats about El Niño years); then we could see whether, for example, the standard deviation is what would be expected from Poisson statistics, which would be very telling for understanding the processes.

When we consider the higher intensity storms, things are even worse: natural variability of only an average of 1 or 2 major storms per year is going to have huge variance. If we average 11-year blocks we are going to have a standard deviation of about 25% on the averages, or about +/-0.5 on an average of about 1.5 that I get from looking at the graph. That looks like all the variation, and bigger than the Landsea correction. (Again, I am making no comment on whether the Landsea correction is correct or not, only that within the natural variability of small-number statistics, you would be hard put to tell the difference.)
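The point about running means manufacturing apparent "regimes" out of pure counting noise is easy to illustrate with a simulation. The constant rate of 10 named storms per year below is a round illustrative number, not a fit to the actual record:

```python
import numpy as np

rng = np.random.default_rng(0)

# 62 seasons (1944-2005) of counts drawn from a *constant* rate of
# 10 named storms per year -- no trend, no regimes, pure Poisson noise:
counts = rng.poisson(lam=10, size=62)

# Centered 11-year running mean, as in Figure 1:
kernel = np.full(11, 1.0 / 11)
smooth = np.convolve(counts, kernel, mode="valid")

# Even with no change in the underlying rate, the smoothed curve drifts:
# its one-sigma spread about 10 is sqrt(10/11), roughly 0.95 storms, so
# decade-scale "active" and "quiet" stretches appear by chance alone.
```

Plotting `smooth` for a few different seeds gives a feel for how much decadal structure an 11-year filter can conjure from a stationary process.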

re #30 (Sara): Yes and no. I agree with you that Paul's "first-aid" calculation (treating the series as i.i.d. Poisson) is not strictly speaking valid here, but it is better than nothing at all. What actually should be done here immediately is the standard Poisson regression, where your predictor variable(s) would be the SST (and possibly some other known hurricane variables). Depending on the results of that fit, we would be much wiser, at least in the sense of knowing what to try next.
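For the curious, a Poisson regression with a log link can be fitted by iteratively reweighted least squares in a few lines; this is a sketch written out by hand rather than pulled from a statistics library, and the "SST" predictor in the usage note is purely illustrative:

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Poisson regression (log link) fitted by iteratively reweighted
    least squares. X is the (n, p) design matrix (include a column of
    ones for the intercept); y is the (n,) vector of counts."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)          # current fitted means
        z = X @ beta + (y - mu) / mu   # working response
        WX = X * mu[:, None]           # working weights W = diag(mu)
        beta = np.linalg.solve(X.T @ WX, X.T @ (mu * z))
    return beta
```

With real data, X would carry a column of ones plus a seasonal SST index, and y the annual storm counts; the overdispersion and autocorrelation mentioned elsewhere in the thread would still need separate treatment.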

Jean S – I discussed the Landsea adjustment in a prior post here. Here’s Emanuel’s most recent statement on the matter:


While this is all very interesting, Emanuel does not give an operational definition of the new adjustment. If one plots wind speed against pressure, the period 1945-1970 is definitely displaced relative to the period post-1990. However, the pre-1945 pressure-wind relationship, where available, is more in line with post-1990 relationships.

A small point on the reason for the averaging … Judith didn’t say, but the reason is presumably to get rid of the 11-year solar cycle.

Note that, implicit in this, is concern that solar output might affect hurricanes. (I have no idea whether this has been studied.)

Sara, #30. No, I don't think I necessarily agree with you. If there is a certain probability of a hurricane starting within a certain period of time, it's like throwing a die so that (let's say) if the number is 3 or more then there is a hurricane. If I throw the die more often then there will be more storms; or if, say, there is a correlation with SST so that there is a hurricane if the die is 2 or higher (a higher probability), then there would be more storms and a significant correlation with SST, but each event would still be stochastic and subject to Poisson statistics, albeit with a higher average.

Where I could agree with you is if there is some physics based causal mechanism which if there is a storm, it makes it more likely that there will be another storm. Say for example a hurricane stirs up surface waters and by some mechanism thereby makes another one more likely. However, I’m not at the moment seeing that as the hurricanes are spaced in time and geography so they look uncorrelated.

This is where it would be very interesting to look at the raw numbers per year (with all the corrections, +/- the Landsea corrections, whatever you argue is most appropriate, to get the best estimate you can of the number of storms per year). If the standard deviation over different time periods differs from what you would expect from Poisson statistics, that tells you that the process is not purely stochastic, and then you start to get a handle on mechanism.

(#34) Thanks Steve!

Wow, you are really allowed to make adjustments to raw data without explicitly explaining to others what you did!?! This is getting stranger every day…

We can rehash the SST-TC correlation issue and the running mean issue that were dealt with exhaustively on other threads, or we can focus on the impact of the Landsea correction on what Holland and Emanuel are talking about in terms of increased intensity.

So let's do some hypothesis testing. I will send Steve M the Excel spreadsheets with the data used in these calculations in a few hours, after we've agreed on the logic of the arguments to look at.

Central Hypothesis: The recent active NATL period (1995-2005; include 2006 if you want) is characterized by TC intensity (as measured by NCAT45, PDI, and/or ACE) that exceeds that of the previous active period (1944-1964). (I would argue that it is greater than 30%, but I am open to exactly how this is posed.)

I propose a series of null hypothesis tests:

I. The mean intensity (NCAT45, PDI, ACE) with the Landsea correction is not significantly different from the mean intensity without the Landsea correction for the period 1944-1964

II. The mean intensity for 1995-2005 is not significantly different from the mean intensity with the landsea correction for the period 1944-1964

III. The mean intensity for 1995-2005 is not significantly different from the mean intensity without the landsea correction for the period 1944-1964

Proposed chain of reasoning:

a) If we cannot reject null hypothesis I, then this whole exercise is pointless (one might even conclude that it doesn't matter whether or not you use Landsea's correction)

b) If we reject null hypotheses I and III, but cannot reject II, then the Landsea correction makes a critical difference in rejecting the original hypothesis

c) If we reject null hypotheses I, II, and III, then the Landsea correction does not make a critical difference in whether we reject the original hypothesis

If c) then we have to see if we can address the impact of observational errors in the early period to see if we can still reject null hypothesis III

Note: PDI (more sensitive to the Landsea correction) and ACE (slightly less sensitive to the Landsea correction) may be easier variables to work with from a statistical point of view, since NCAT45 is a count (and there are some zeroes in the record). Note that the observational errors will be slightly different for each of the 3 variables as well.

Let me know if you would like to see these arguments refined or have other suggestions. In the meantime I will pull the data together.

IL (#36), I read an argument somewhere as follows (apologies for not recalling where). The sea surface heats up, inducing hurricanes. If there are many hurricanes in that year, then they take away a lot of the heat, and there will be fewer hurricanes the following year. If there are not many hurricanes, then….

I have no idea how valid this is. Even if it’s wrong though, it still illustrates my point: the assumption of independence (and other assumptions?) is being made without adequate supporting argumentation.

On the other hand, the primary criticism raised by Paul Linsay is surely right: Judith's argument is to measure two quantities where we know there is potential error and then compare them without accounting for that error. The argument is fundamentally unsound.

Sara #39. Yes, I think that more or less agrees with what I said in the second paragraph of #36. This is where detailed analysis of the data could tell you whether the process is stochastic or not (having corrected the data to be the truest possible representation of the actual numbers of hurricanes etc. per year, so that you really are comparing apples with apples between earlier years and more recent years, which I think is Judith's main concern at the moment).

#38. Judith,

The impression that I get from the data is that there are "50-year events" or even "100-year events" in hurricane history that do not necessarily indicate a change in state. These sorts of events are familiar to people like TAC.

In a quick pass through the data, the years that seem most comparable to 2005 are 1933 and 1886-1887, both of which are outside the 1945-1964 period. Then you get into missing data problems and how to estimate how much data is missing. What surprised me was how active these earlier years were in the western sector even given the probability of missing data. Also the pressure-temperature relationship in the pre-1945 period doesn’t have the same displacement as the 1945-1969 data.

Judith's data is posted at http://data.climateaudit.org/data/hurricane/curry.natl44to05.csv with "\t" separator for R-readers. It should read into Excel as well if you paste the URL into Excel. The following reads it into R:

The conclusion, and final paragraph, of Judy's article:

Judy's statement here in comment #26:

Some editorial advice Judy: If you don’t want people at blogs to suggest your little paper is “about AGW”, avoid writing strong closing conclusions “about AGW”.

With regard to this:

Judy: Your not seeing the need for statistics does not mean statistics are not required.

When a person examines smoothed data to "get a feeling for it", they must also bear in mind the artifacts introduced by smoothing. They must, at a minimum, do a back-of-the-envelope computation to estimate the equivalent amount of uncorrelated data the graph represents, or to obtain an estimate of the error bands for whatever it is they wish to understand.

The details of Poisson/Gaussian/discrete may not be so very important in this regard. After all, you are doing a visual.

However, knowing that your graph is based on the rough equivalent of 7-11 independent samples, rather than 60 = (2004-1944), is important. Otherwise, your conclusions will be guided by the idea that you are looking at 60 data points' worth of information.
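That "7-11 independent samples" figure can be roughed out with the standard decorrelation-scale formula, assuming (purely for this back-of-the-envelope estimate) that the underlying annual values are uncorrelated before the 11-year running mean is applied:

```python
# Back-of-the-envelope estimate of the effective number of independent
# samples left after an 11-year running mean, assuming the underlying
# annual values are uncorrelated white noise (an assumption, not a
# claim about the actual hurricane series).
n_years = 2004 - 1944 + 1  # 61 annual values
window = 11

# Lag-k autocorrelation of an 11-point boxcar average of white noise
# is (window - k) / window for k < window, and zero beyond.
rho = [(window - k) / window for k in range(1, window)]

# Standard decorrelation formula: N_eff = N / (1 + 2 * sum(rho_k))
n_eff = n_years / (1 + 2 * sum(rho))
print(round(n_eff, 1))  # 5.5
```

So 61 smoothed annual values carry roughly the information of five or six independent points, which is in the same ballpark as the 7-11 quoted above (the exact number depends on the autocorrelation of the raw series).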

Judith:

So what do you mean by "significantly different"? In deciding that something is significant we need to assume (agree upon) a statistical model for the data.

#23

I agree with 30.

But the HW paper has something similar:

..all the correlations we report must be spurious.

35 notes:

Boy, the shapes and timing of all these curves sure match Figure 4 of this paper. Note that it is also based on an 11-year running mean, which corresponds to the approximately 11-year solar cycle. And ENSO is an approximately 11-year cycle? Hmmm…

Decided to answer my own question😉

Here is a graph of the PDI data 1944-2005 using two different smoothing methods, applied to the non-bias-corrected PDI data from Judy's spreadsheet (thanks!); it doesn't make a difference for this post, as this is about smoothing and picking cherries. The 1-2-1 filter applied two times is the smoothing shown by Landsea in his 2005 Nature response to Emanuel. The 11-year centered average is the one shown by Curry in Figure 1 of this post.
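For anyone who wants to reproduce the comparison, both smoothers as described are simple convolutions; the series below is a placeholder stand-in, not the actual PDI data:

```python
# The two smoothers being compared: Landsea's 1-2-1 filter applied
# twice (equivalent to a single 1-4-6-4-1 binomial filter) versus an
# 11-year centered running mean.
import numpy as np

def filter_121_twice(x):
    k = np.array([0.25, 0.5, 0.25])  # 1-2-1 weights, normalized
    once = np.convolve(x, k, mode="valid")
    return np.convolve(once, k, mode="valid")

def centered_mean_11(x):
    return np.convolve(x, np.ones(11) / 11, mode="valid")

# Placeholder series standing in for annual PDI, 1944-2005 (62 values).
x = np.arange(62, dtype=float)
print(len(filter_121_twice(x)), len(centered_mean_11(x)))  # 58 52
```

Note that the 11-year mean loses five years at each end of the record, while the double 1-2-1 filter loses only two, which is one reason the two curves can "pick different cherries" near the endpoints.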

Unprecedented? Or not? Which cherry would you like to pick?

Now here is the same graph with the original PDI data included (note change of scale).

Can someone share 2006 PDI?

I would like to see the raw data again.

The more these “corrections” are applied and the more smoothing that is done, the more I am reminded of what instrument Canada used to beat Russia at a certain event today.

Roger, re 47: I guess I missed that. Did you say Judy smoothed the data twice?! (Or maybe, based on #26, her undergraduate student smoothed it twice for her?)

Judy: Re 38.

This is not an "either do one or the other" issue. In the "little article" you posted above, you used a running mean in an attempt to establish the impact of the Landsea correction on Holland and Emanuel's results. No technically competent reader can assess the validity of your conclusions without dealing with the anomalies and uncertainties introduced by applying running means.

If you want to avoid discussion or thinking about the errors, uncertainties and bizarre features that are introduced by running means, don’t use them.

I look forward to getting the original data, because there is a subtle problem with Poisson-distributed events (counts, such as hurricane counts).

This is the fact that you cannot directly average Poisson events, because they are not normally distributed. Normally distributed events can be described by an equation of the form

Y = mX + b

Poisson events, on the other hand, are described by an equation of the form

Log(Y) = mX + b

To get an accurate average of Poisson events, you need to take the log of the number of events each year, average them, and then take the inverse log. If you do not do so, your average will be highly affected by extreme events.

This also allows us to calculate error ranges that are appropriate for the Poisson distribution. Suppose the average for ten years worth of cyclones is 5 cyclones. It is obvious that, while we can have 7 cyclones more than the average, we cannot have 7 cyclones less than the average. Because of this, the “+” and “-” error ranges are different.
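The asymmetric "+" and "-" ranges described here can be computed directly with the standard exact (Garwood) chi-square interval for a Poisson count; a sketch, assuming scipy is available:

```python
# Exact (Garwood) confidence interval for a Poisson count, illustrating
# the asymmetric error ranges: for an observed count of 5, you can be
# much further above the estimate than below it.
from scipy.stats import chi2

def poisson_ci(count, conf=0.95):
    alpha = 1 - conf
    lower = 0.0 if count == 0 else chi2.ppf(alpha / 2, 2 * count) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return lower, upper

lo, hi = poisson_ci(5)
print(round(lo, 2), round(hi, 2))  # roughly 1.62 and 11.67
```

So for a count of 5 the 95% interval reaches roughly 6.7 above the estimate but only about 3.4 below it, exactly the asymmetry the comment describes.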

I await the spreadsheet regarding the Landsea correction. Thank you for your continued participation.

w.

Roger, I take it that PDI is calculated as the sum of the cubed wind speed in m/sec. On that basis, I calculated PDI as follows:
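The poster's actual script isn't reproduced in the thread; a minimal sketch of the stated formula (summing cubed maximum winds over the 6-hourly best-track fixes, with any time-step scaling constant omitted, which is consistent with the series differing from Judith's by a constant factor) would be:

```python
# Sketch of the PDI calculation as described: the sum of cubed maximum
# sustained wind speeds (m/s) over all 6-hourly best-track records in a
# season. Emanuel's PDI also multiplies by the time step, so this
# differs from his index by a constant scale factor.
def pdi(wind_speeds_ms):
    """wind_speeds_ms: max sustained winds (m/s) at each 6-h fix."""
    return sum(v ** 3 for v in wind_speeds_ms)

print(pdi([30.0, 40.0, 50.0]))  # 216000.0
```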

This series has a correlation of 0.992 to Judith’s series, but is scaled differently by some constant. I’ll post up graphics.

RE #48 Roger, I calculate a PDI of 6.1 for 2006, using Emanuel’s method.

re #49, Jeff Waller

For a direct plot of the original/corrected/smoothed/unsmoothed PDI series from Dr Curry’s data as per #42 above,

see here.

willis,

Taking the average of the logs and then inverting is equivalent to taking the geometric mean: for n y’s, multiply all the y’s and raise to the power of 1/n. Computing the geometric mean without logs will be quicker and more accurate.

#51 Willis, I don’t follow you.

As with the normal distribution, or any distribution with finite mean, the expected value of the average of iid Poisson variates (i.e. with common lambda) is equal to the expected value of a single Poisson variate. In this case, the expected value for any number of iid Poisson variates is always lambda. If you don’t believe this, try

mean(rpois(n=1000000,lambda=2))

The expected value of the logarithm of a Poisson variate is undefined (i.e. non-finite), because the Poisson distribution has positive mass at X=0. Are you trying to say that we are looking at Poisson variates which are not iid? Even then you can take averages.
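A quick numerical version of this point (a Python analogue of the `rpois` check above):

```python
# Numerical check: the plain average of iid Poisson draws is an
# unbiased estimate of lambda, while log-averaging breaks down
# whenever a zero count occurs (log(0) is undefined).
import numpy as np

rng = np.random.default_rng(42)
draws = rng.poisson(lam=2, size=1_000_000)

print(round(draws.mean(), 2))  # close to 2, the true lambda
print((draws == 0).sum())      # many zero counts, so log-averaging fails
```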

Anyway, FWIW, I think I missed your point in #51.

Very sloppy…

The first assumption here is that a warming sea surface temperature is consistent with increased cyclonic activity. On the surface, this seems reasonable; however, there are many instances of very warm sea surface temps which do not coincide with unusually high cyclonic activity.

Were the relationship empirical, it would prove this statement true. Unfortunately for the author, it is not. Therefore, regardless of the unwavering certainty with which it is asserted, it is either merely probable… or, perhaps, "possible".

Regardless of which it is, the certainty with which the author states it is… ahem… breathtaking. [Something akin to Einstein saying that the speed of light has been "proven" to be 3×10^8 m/sec, but only so long as weather conditions this year are "average".]

So, to summarize where we are: cyclones are consistent with warmer temperatures… except during the years when reality proves that they are not.

Then, we are told that:

Now, let’s break this down:

1. Warming SST is induced by greenhouse warming – no question! [No measurement bias; no doubt about the source of the warming {greenhouse, not solar activity}]. [snip]

2. This SST warming is contributing to increased North Atlantic cyclones. – Well, excuse me for my obtuseness, but I don't see a 100% consistent relationship between the two events from the data. Given that lack of consistency, it's rather unwise to say, "is consistent", rather than, "may be consistent", or… oh, I don't know… "Would be really nice for my belief systems, if it were consistent". [snip – Steve: Brad H = please re-read Blog Rules]

#38, 44

Would this be something that we are looking for:

H0: We have a Poisson process with constant intensity

H1a: We have an inhomogeneous Poisson process

(H1b: Intensity is a function of SST)

?

Brad, you seem not to understand something in this statement

This analysis indicates that North Atlantic tropical cyclone data are consistent with assertions 1-3 that a warming sea surface temperature induced by greenhouse warming is contributing to an increase in the intensity and number of North Atlantic tropical cyclones.

The "1-3" refers to references in the literature (appended in the text). See the following paper (published in BAMS) for the entire argument linking the increased TC activity to AGW that is referred to in this essay (which will be familiar to climateauditors who have been following the hurricane threads since Aug). It is clearly stated in this paper that these are hypotheses, and the uncertainties are clearly discussed. The statement in the little essay that these results are consistent with the assertions laid out in the paper is correct.

Mixing politics and science in testing the hypothesis that greenhouse warming is contributing to increased hurricane intensity

http://ams.allenpress.com/archive/1520-0477/87/8/pdf/i1520-0477-87-8-1025.pdf

I took up Judy’s challenge in #38. I skipped testing H1, because that’s a fundamentally silly test. I tested her H2 and H3 and concluded that:

Using the Landsea correction does affect our conclusions. If we apply the correction, the difference between the PDI measured over 1995-2004 and that measured over 1944-1964 is statistically significant at the 5% level.

If we don't apply the correction, the difference in PDI is not statistically significant.

So, vis-a-vis the test Judy actually suggested, and using the statistical tests she suggested we find: Judy’s claim about the lack of impact of Landsea’s correction is rejected.

This suggests the questions about the validity of the Landsea correction should be addressed.

Suppose I am a young environmentalist looking to counter the arguments of those pesky young republicans on campus who don’t see the relationship between hurricanes and global warming and foolishly ignore the obvious benefits of using solar power to abort babies. They cite the following statistics:

1. Landfalling hurricanes first half (1851 to 1927) = 145; Second half (1928 to 2005) = 134. No increase.

2. In the first half of the record, nine years with very stormy seasons (>3 landfalling storms), but only five such years in the second half.

3. The quietest hurricane period (post World War II) is concurrent with the period of greatest fossil fuel use and associated increase in atmospheric carbon dioxide.

How do I answer these non-believers? We’ve tried shouting them down with bullhorns and throwing pies, but the administration is getting nervous about “free speech issues”.

My model: 'Named Storms' is an i.i.d. Gaussian process. 2005 is over 4 sample standard deviations out, astronomically improbable. My models are never wrong, so 2005 is a faulty observation. Outlier. Removed.

(That kind of ‘observations forced to be Gaussian’ never happens in science, right?)

#61,

More details, please!

I’ve snipped a comment above and deleted a couple of responses with which I agree so that we don’t end up debating editorializing.

UC: I should have added the link to the details showing that when we apply the tests suggested by Judy, we conclude her claims are unsupported:

http://truthortruthiness.com/blog/?p=31

The "study" can be replicated and/or extended using Excel…

re #61 (Margo): Very nice!

Just a small side comment (related also to my comment #44): I think the t-test is based on an assumption of normal statistics. However, it should be fairly robust to departures from that. For mean tests in a Poisson environment, see, e.g.,

K. Krishnamoorthy & J. Thomson: A more powerful test for comparing two Poisson means. Journal of Statistical Planning and Inference, 119, 23-35, 2004.

http://www.ucs.louisiana.edu/~kxk4695/JSPI-04.pdf

(Fortran (!!!!) code available: http://www.ucs.louisiana.edu/~kxk4695/StatCalc.htm)

However, I don’t think using those tests would change your conclusions.

There's a discussion of comparing Poisson means at this website: http://www.childrens-mercy.org/stats/weblog2006/SampleSizeR.asp (which has other interesting statistical observations).
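Besides the Krishnamoorthy & Thomson test, a simple exact alternative for comparing two Poisson means is the conditional binomial test: under equal rates, the first count, conditional on the combined total, is binomially distributed. A sketch (the counts and period lengths below are made up for illustration):

```python
# Exact conditional test for comparing two Poisson rates (e.g.
# major-hurricane counts in two periods of different lengths): given
# the combined total n = x1 + x2, x1 ~ Binomial(n, t1/(t1+t2)) under
# the null hypothesis of equal rates.
from scipy.stats import binomtest

def poisson_two_sample(x1, t1, x2, t2):
    """x1, x2: event counts; t1, t2: observation lengths (e.g. years)."""
    n = x1 + x2
    p0 = t1 / (t1 + t2)  # expected share of events under equal rates
    return binomtest(x1, n, p0).pvalue

# Hypothetical example: 30 events in 21 years vs 25 events in 11 years.
print(round(poisson_two_sample(30, 21, 25, 11), 3))
```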

Jean S.

Thanks! I was going to look for that, because I figure at some point I should use it.

I agree the t-test is based on the assumption of normal distributions. I did no checks to test whether PDI is normally distributed. Unlike hurricane counts, it's not a count; it may be neither Poisson nor Gaussian.

I actually think the broader problem with the t-test I did is this:

1) The two periods selected were not selected out of the blue. They were selected specifically because they represent periods when PDI was high.

2) To do the t-test, I used the standard deviations calculated from those periods only. Because these periods were specifically selected as "high" periods, the actual standard deviation is much larger than that estimated from those periods alone. (All the "lows" were excluded.)

3) So, this dramatically favors the claim Judy wants to prove: that the mean for "high period 1" is different from the mean for "high period 2".

Yet, despite this, we still get no difference. I suspect "real" statisticians might point out I should have used the higher standard deviation. Of course, in that case, I would conclude that Judy's claim that the periods are different is truly, absolutely and unarguably wrong. After all, when doing the comparison, the difference in the means would not change; the standard deviation would increase. So, the t decreases. (BTW: I say "real" statisticians because my background is not statistics. I've just generally used them in doing what I really do.)

But basically, so far, using the approach Judy suggested, her claim that the two periods are "different" seems to me to show they are "the same". If I refined the test to correct for the simplification I made, I would find "oh, boy, are they the same!"

re #66: Just to correct myself: Margo tested mean intensities, not counts, so a standard t-test is likely more appropriate here (with the caveat that I don't know what the distribution of intensities looks like).

re #68: Margo, you are right. What you could also test is whether the intensity means differ between pre- and post-Landsea-correction times (with raw intensities and Landsea intensities). If your results hold up (as they did for those selected periods), this would make the Landsea correction questionable.

Jean S:

The Landsea correction is based on data from measurements external to the region where we are applying it. So, I think it's either reliable based on something outside the question we are examining, or it's not valid. (Right?)

I also don't know what the distribution of the intensities looks like. I'm trying to teach myself R so I can make pretty plots without resorting to the horrors of Excel.

FYI – From the thread about a hearing Pielke Jr. spoke at recently:

=============================================================

I am trying to figure out the point of Pielke's statement. I guess it is to legitimize "cherry picking" of the science. Pielke's statement had an egregious example of cherry picking on the hurricane-global warming issue. If you read the entire paragraph of the "consensus" statement, you get very different information than if you simply read the portion of the paragraph that Pielke chose to quote. Here is the relevant paragraph in its entirety:

“The scientific debate concerning the Webster et al and Emanuel papers is not as to whether global warming can cause a trend in tropical cyclone intensities. The more relevant question is how large a change: a relatively small one several decades into the future or large changes occurring today? Currently published theory and numerical modeling results suggest the former, which is inconsistent with the observational studies of Emanuel (2005) and Webster et al. (2005) by a factor of 5 to 8 (for the Emanuel study). The debate is on this important quantification as to whether such a signal can be detected in the historical data base, and whether it is possible to isolate the forced response of the climate system in the presence of substantial decadal and multi-decadal natural variability. This is still hotly debated area for which we can provide no definitive conclusion.”

Note: the factor of 5 to 8 is incorrect; it is a factor of 2-3.

Pielke’s cherry picking of the text from the statement on hurricanes is a great example in itself of misrepresenting the science for political purposes, supporting the points of Waxman, Piltz, Shindell, et al.

Comment by Judith Curry, 30 Jan 2007 @ 5:44 pm

========================================================

A-hem ……

That was an RC thread …. not one here ….

Re: #72

I am more and more inclined to discount the words of Judith Curry the more I see her POV expressed here and in other places. She obviously has some personal feelings about RPjr that seem to show through in her comments. Below I excerpted some of RPjr's testimony that I felt was relevant to, and realistic about, the case for the mixing of politics and science, and particularly the cherry picking of sources.

Politically, as a libertarian, I can see that once the government becomes involved in science as subsidizer and user of that science for regulation, science will become politicized, and it has, despite what Waxman or others with like positions think they can do to mitigate it.

I've been reading the tropical cyclone section of the technical report (thanks, JunkScience.com).

Unfortunately, the section has a lot of prattle. It reminds me of those high school essays where students have only one or two sentences worth writing, but they have to write 1000 words. So, they repeat the same thing over and over, slightly changing the wording in each sentence.

I smiled when I saw this line: “In the western North Pacific, long-term trends are masked by strong inter-decadal variability for 1960-2003, but results also depend on the statistics used”. Say what?

That sentence may say more about their methodology than they intended.

I've been trying to figure out their headline, "Tropical cyclones have increased in intensity and duration since the 1970s". It doesn't seem to be consistent with Webster Curry, which is otherwise cited liberally in the section. Per Webster Curry, Figure 2,

If storm counts aren’t increasing and storm-days aren’t increasing, how are storm durations increasing?

And if I take their headline literally ("…since the 1970s"), their increased-intensity hypothesis was hammered hard by Klotzbach, as well as by Steve M.

Over at RC they talk about "the elephant in the room", implying it is killer AGW. I have a different perspective. To me, it is the PDO and other higher-order oscillations (Dave Smith, is there a name for the Western Pacific-Indian Ocean "dipole" you have described?). The Team do not want to discuss this type of elephant.

I was talking with my neighbor about the reports of increased hurricanes due to rising sea temperatures. She sighed, shook her head and told me that she thinks she knows what has actually risen in recent years, and which correlates well with the scary reports. She sent this to me.

It may even be statistically significant.

RE #77 Link

RE #76 Interestingly, the IPCC technical report even mentions the “1976 climate shift” in a few places, but attributes little/none of the post-1976 warming to it, so far as I could find. The “GW” seems to be all “A” in their book.

## One Trackback

[…] She is emphatic that the only point of her post is to focus on whether or not the Landsea correction matters. This little paper is not intended to convince anyone of the link between intensity and AGW; it is merely to show that the intensity in the current period is greater than the intensity of the previous active period, WHETHER OR NOT you use the Landsea correction. […]