Ryan Maue writes in as follows (see also 2007 Tropical Cyclone Activity).
As reported at Climate Audit at the end of October, the North Atlantic was not the only ocean seeing quiet tropical cyclone activity. Measured by the Accumulated Cyclone Energy (ACE) index, the Northern Hemisphere as a whole is historically inactive. How inactive? One has to go back to 1977 to find lower levels. Even more astounding, 2007 will be the 4th slowest year in the past half-century (since 1958).
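For readers unfamiliar with the metric: ACE is conventionally computed as the sum of the squares of a storm's 6-hourly maximum sustained winds (in knots) while at tropical-storm strength or above, scaled by 10^-4. Here is a minimal sketch; the function names and the sample track are illustrative, not from any official tool.

```python
# Hedged sketch of the standard ACE calculation: sum of squared 6-hourly
# maximum sustained winds (knots) while the system is at tropical-storm
# strength (>= 34 kt), scaled by 1e-4. Names and sample data are illustrative.

def storm_ace(winds_kt):
    """ACE contribution of a single storm from its 6-hourly wind fixes."""
    return sum(v ** 2 for v in winds_kt if v >= 34) * 1e-4

def season_ace(storms):
    """Season total: the sum of per-storm ACE values."""
    return sum(storm_ace(w) for w in storms)

# Hypothetical storm: 48 hours (8 six-hourly fixes) at 50 kt.
print(storm_ace([50] * 8))  # -> 2.0
```

Because the winds are squared, a season's ACE is dominated by long-lived intense storms, which is why a year with many short-lived weak systems can still score historically low.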
The 2007 Atlantic Hurricane season did not meet the hyperactive expectations of the storm pontificators. This is good news, just like it was last year. With the breathless media coverage prior to the 2006 and 2007 seasons predicting catastrophic swarms of hurricanes potentially enhanced by global warming a la Katrina, there is currently plenty of twisting in the wind to explain away the hyperbolic projections. The predominant refrain mentions something about being lucky and having escaped the storms, and just wait for next year.
Well, before we prepare for the obvious impending onslaught of the next above-average hurricane season, let's review some very positive aspects of what 2007 offered:
When combined, the 2006 and 2007 Atlantic hurricane seasons are the least active since 1993 and 1994. Compared with the 1995-2005 average, the combined 2006 and 2007 hurricane energy was less than half of that of the preceding active period. The most recent active period of Atlantic hurricane activity began in 1995 but has been decidedly less active during the past two seasons.
When combined, the Eastern Pacific and the North Atlantic, which typically play opposite tunes when it comes to yearly activity (because of El Niño), brushed climatology aside and together managed the lowest output since 1977. In fact, the average lifespan of the 2007 Atlantic storms was the shortest since 1977, at just over two days. This means that the storms were weak and short-lived, with a few obvious exceptions.
So, before throwing Dr. Gray, NOAA, and Accuweather under the bus, consider what seasonal forecasting must entail to skillfully project hurricane activity. Then consider what we do not know well:
Now, when a seasonal prediction is made, elements of the above questions come into play: one, two, three, or six months ahead of the season, how many storms will form, how strong will they be, and what is the probability that any of them will affect land (particularly the United States)? This requires knowledge of oceanic conditions halfway around the world, precipitation patterns over Africa, and a host of other considerations.
Nevertheless, do not lose heart. Long-range weather prediction is a booming enterprise, with energy, insurance, and governmental agencies investing considerable resources into this colossal effort. The house always wins.
1977 – last PDO flip (negative to positive) completed. 2007 – PDO flip (positive to negative) completed?
Is that statement based on the first (gray) graph?
And was there a break period before the “active period from 1995-2005”? If not, then why were those 10 years highlighted?
?? I would have expected the usual “A warmed climate will cause more weather extremes,” or an Al Gore-ish “It’s complicated.”
another question, and this relates to Mann and Sabbatelli (and a lot of other studies that only focus on the NATL)—why does the Atlantic show a slight uptick in activity based on long term averages and no other basin does?
I’m guessing 1992 was a wicked East and WestPac year.
Here is an interesting development. A Florida hotel owner is threatening to sue Gray for making false hurricane predictions that hurt his business.
http://www.local6.com/news/14730306/detail.html
We have heard of threats of lawsuits against CO2 emitters. If the global warming alarm turns out to be false can we sue Greenpeace, NCAR, etc.? If one could be held liable for bad climate forecasting there would be a lot less hysteria and hyperbole.
Indian Ocean and Australia region cyclone frequency has declined substantially over the last 30 years. Interestingly, the BoM seems to think this is a sign of global warming, ie warming causes fewer cyclones (see the link).
Here is a nice sweater for Mann.
My guess is that less of a N/S temp differential creates fewer/weaker storms. I seem to remember reading something to the effect that that happened during the MWP.
A wild speculation: Could the two Cat 5’s at the beginning of the year have sopped up enough energy to starve the other storms?
Is there an error in the first figure regarding ACE for 2006? If not, I also don't understand the statement referred to by jimmy. Also, what is the difference between the ACE index in the first and second figures?
Jimmy and Michael 2 and 9: Please differentiate between Northern Hemisphere as a whole, and the separate basins. The historical Atlantic ACE has been posted on CA several times. Nevertheless, a bar chart of Atlantic ACE from 1944-2007 is presented for your viewing pleasure.
Gray makes his estimates with a range, which I suppose is his way of saying ‘error bar,’ but retrospectively (though I haven’t done the work), I’d say his error bars must be quite a bit bigger than his publicly stated ranges.
In fact, I’d say his error bars are probably bigger than the conceivable range of storm counts.
re #4
Superior instrumentation in the North Am hemisphere. Think NASA. (US landfalling hurricane data show no trend.)
Giant oil companies, so we hear, have the incentive to influence or buy outright the science they prefer. After all, their profits depend on consumers' and governments' disbelief in CO2-driven climate change. So, such giants, we hear, buy non-climate scientists to muddy the scientific waters by nitpicking the data. This is familiar territory, right? We are all aware of the hypothesis that study of the incentives allows the objective reader to forecast the conclusions.
But oil giants are not the only giants in the marketplace. Aren't major insurance companies also gigantic in terms of the dollars they control and the influence they wield over governments and public opinion? Aren't their profits also dependent on winning a bet with consumers about risks? Isn't their interest and incentive always to have governments and consumers believe the risks are ALWAYS slightly higher than the insurers actually know to be true? It's the nature of the game, isn't it? Would you rely on an insurance company to pay out if you knew that it had underestimated the risks and undercollected premiums from everybody else? So insurance giants, it would seem, have an incentive to buy scientists and statisticians to study the risks of storm data quite closely and thoroughly, and to produce two sets of findings. Both "true", but each emphasizing slightly different conclusions.
Just from the nature of the market and the insurance game, one expects insurance giants, given their high incentives, to have bought and analyzed the best possible, most accurate risk-assessment data about hurricane forecasts over the next dozen or so financial quarters. That's not a mind-blowing prediction, is it? But is it any more improbable that the data such giants release to the public will emphasize particular risks, and the need to invest in "offsets", or hedges, or, after all, insurance? And shouldn't we expect that the public spin will be a bit more alarmist than the attitude in the financial boardroom?
This is a testable hypothesis. When did the "higher-than-historical risk of hurricanes" news break, what happened to weather-related insurance rates afterwards, and what has the impact on insurance industry profits been over the last, say, 8 or 9 quarters?
I just like to see that hurricane forecasters are no better than any other weather forecaster. But they have to feel part of the club; they have to believe, even when wrong!
The whole AGW thing, and this debating blog site, is just human nature at work. It will tell us a lot about human psychology but very little about the climate.
>> Giant oil companies, so we hear, have the incentive to influence or buy outright the science they prefer. After all their profits depend on consumers and governments disbelief of CO2-driven climate change
Actually, their profits are maximized when the perceived value of their product is greater than its cost. This happens when the public believes that the oil is just about to run out, and is scarce. While they busily find large quantities of cheap oil. One way to do that is to secretly promote peak-oil for obvious reasons, but also AGW, spurring foolish investment in alternate sources of energy, all very much more expensive than oil, thereby making oil seem cheap, raising its price, while knowing full well that no one will actually tolerate a restriction of their right to use energy.
>> The whole AGW thing, and this debating blog site, is just human nature at work, it will tell us a lot about human psychology but very little about the climate.
Excellent point. I’ve noticed the same thing.
Actually, the closest forecast was by the UK Met Office, who use computer-model-based predictions; they significantly outperformed Gray's statistically based approach (rather amusing considering his opinion of such models). Here's a pre-season comment on their forecast:
“This year’s GloSea forecast shows a cooling trend in the tropical Atlantic sea surface temperature (SST) compared to what we’ve seen in recent years, and is a major reason why the UK Met Office forecast is so much lower than the other seasonal Atlantic forecasts. I believe that the GloSea model has high enough resolution to do as good a job as the other seasonal hurricane forecasts this year, but it’s hard to make an informed judgment until their research results are published. The GloSea forecast is based on sound science, though, and does call into question whether or not the other seasonal forecasts are forecasting unrealistically high levels of hurricane activity in the Atlantic this year. “
Horse tushies. The public’s perception is irrelevant. The perception of the crude traders matters, but even they can only speculate for so long, because there’s a very limited amount of storage available. In the end, it’s 2% psychology and perception and 98% supply-and-demand that determines crude prices.
16, what the AGW debate has done is prevent investment in carbon-intensive alternatives such as Fischer-Tropsch coal liquefaction. That tends to drive us back to hard-to-get oil that’s more operating cost intensive and less capital intensive.
#14: I can tell you that catastrophe re-insurance rates skyrocketed after 2004 (unjustifiably in nearly every actuary’s opinion), but they took a large drop this year. Next year’s rates should reflect this.
Ryan M, I like the bar graph that you presented. It shows very graphically the point being made above. It reminded me of pushing a rod through a width of material for each year with a rod of a length that varies some each year, but not nearly like the shortening for the past year.
Can you go back in time to look for previous shortenings that might equal or surpass that witnessed this year, e.g. 1977?
Re #17, See my link in #6.
If the models ‘substantially disagree’ then chance alone says one of them is going to be close to the right answer.
UK Met's forecast, made in mid-June, was for 10 tropical storms July thru November. The actual number turned out to be 12.
David Smith says:
December 1st, 2007 at 4:53 pm
“UK Met's forecast, made in mid-June, was for 10 tropical storms July thru November. The actual number turned out to be 12.”
Quite (see #17), while other predictions were for significantly more storms (Gray predicted 17). Also, three of the named storms lasted less than a day (Jerry, Chantal, and Melissa), which in previous years wouldn't have resulted in a name. One of the reasons given for the failure of Gray et al. was the unexpected (by them) low SST; however, in June the Met Office said: “This season, a cooling trend in SST is expected in the tropical North Atlantic, and this favours fewer tropical storms than seen in recent years prior to 2006”
Re the UK Met forecast:
It is worth mentioning that TCs are one measure; hurricanes another; landfalling storms yet another; and ACE, still another metric. These four measures are very roughly in order of increasing objectivity.
So when we talk about “models predictions,” and also their range of estimation, we have yet to see a final, comprehensive tally.
Anyone up for making this chart?
Dr. Kerry Emanuel reported in 2005 (Nature) that Power Dissipation was highly correlated with Sea Surface Temperature in the North Atlantic during the last half-century. A paper in press (Monthly Weather Review, 2007), also by Dr. Emanuel, discusses some of the environmental relationships with Power Dissipation and the individual components of the metric (frequency, intensity, and duration). The correlations between SST and the individual components from the deconvolution of PDI were not as robust as the overall SST/PDI relationship. Thus, Emanuel (2007) concludes that Power Dissipation is a more fundamental quantity related to climate than the individual components.
Thus, if ACE and PDI are more fundamental climate quantities, seasonal predictions reporting those numbers, rather than huge ranges of frequency (9-13 storms, etc.), would be better assessments of skill. Also, when running a climate model, it is a somewhat objective process to "detect" tropical storms… but that's another issue.
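To make the ACE/PDI distinction concrete, here is a minimal sketch contrasting the two accumulations over the same 6-hourly wind history. The scaling, the knots-to-m/s conversion, and the sample tracks are illustrative assumptions, not the exact published conventions of Emanuel (2005).

```python
# Illustrative contrast between ACE (wind squared) and an Emanuel-style
# Power Dissipation Index (wind cubed), both accumulated over 6-hourly fixes.
# Unit conversion and sample tracks are assumptions for illustration only.

def ace(winds_kt):
    return sum(v ** 2 for v in winds_kt if v >= 34) * 1e-4

def pdi(winds_kt, dt_seconds=6 * 3600):
    # PDI ~ time integral of v_max^3, with winds converted from knots to m/s.
    return sum((v * 0.5144) ** 3 for v in winds_kt if v >= 34) * dt_seconds

# Two hypothetical storms with nearly equal ACE but different peak intensity:
long_weak = [40] * 20    # five days at 40 kt
short_strong = [90] * 4  # one day at 90 kt
print(ace(long_weak), ace(short_strong))   # roughly 3.2 vs 3.24
print(pdi(long_weak) < pdi(short_strong))  # -> True: cubing rewards intensity
```

The cube in PDI weights intense storms even more heavily than ACE does, which is one way to see why the deconvolved components (frequency, duration) correlate with SST less robustly than the aggregate does.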
Correction: link for the new Emanuel (2007) Power Dissipation paper is in the Journal of Climate .
TJ Olsen No. 25
For 2007, Jeff Masters at Weather Underground tabulated the predictions and results (although it looks like he gave UK Met a break on the timing). It's not the precise tally you requested, but pretty close.
Everyone blew it on ACE (UK Met doesn’t appear to have predicted an ACE value).
It will be interesting going forward to see how predictions fare, now that they will be closely scrutinized. Why do I keep thinking of Jean Dixon?
Thanks, John M.
From Masters' compilation, of the four key numbers covered, only the UK's Met Office comes close on one actual number – named storms. (It was the only one they ventured a prediction on.) Contrary to impressions left above, not even this is very impressive.
What are we left with?
SOURCE
Re #28
I’m not an active follower of the hurricane stuff, but the link you reference there has a fascinating insight into the problems of modelling climate. The quote I’m interested in is this:
We are continually told that the fact we cannot predict weather has no bearing on the predictability of climate because the long-term averaging in climate makes predictions easier. Assuming that the assessment here is correct regarding the dust, an event which cannot be predicted more than a couple of days in advance has caused an annual average to be substantially overestimated (in the case of ACE, by over a factor of two!). A couple of days to an annual average constitutes two orders of magnitude difference. Is it really the case that one more order of magnitude (decadal variability, a reasonable definition of “climate”) can make everything predictable again?
This ties in to exponential growth of errors in the initial conditions into larger scales (for which averaging just removes a linear component – which will always be overwhelmed by the exponent at larger scales) and is probably veering off topic for this thread…
Ken, re #21, the Eastern Pacific best track data is not of much use prior to 1970 mainly due to intensity unknowns. I went back as far as 1970, and added up the EPAC + NATL ACE and made a simple bar chart: EPAC+NATL . In the past 30 years, there is not much of a trend (not significant), even without any observational technology improvement considerations.
As David Smith has conjectured before, the importance of easterly waves to both basins is not usually discussed when considering seasonal activity.
Re #30: when Bill Gray says we had the most dust since 1999 and that is part of the reason for the forecast going off the rails, I have a slight bit of wonderment. 1999 was well above average in terms of ACE, with a slew of storms even I remember, like Floyd, that were big-time Cape Verde monsters. 1999 Weather Underground Atlantic Map. While I don't discount the importance of the dust, it doesn't just jump off the desert for its own amusement; atmospheric conditions conducive to dust outbreaks were more prevalent in 2007.
The ACE or PDI are going to be heavily biased by long-lived Cape Verde and Caribbean storms, which travel uninterrupted for more than a week. So, a seasonal prediction of ACE would hinge on being able to predict the likelihood or frequency of these major long-lived hurricanes. Plus, it is these monsters that are having meaningful impact on climate (NOT VICE VERSA). Thus, David’s Tiny Tim posting clearly highlights this issue!
Re#30
However the Met Office predicted a cooler SST in June so I’m inclined to think that Gray’s blaming it on unpredictable ‘dust’ is CMA!
Ah, I was careful to caveat this in my comment though…
Re #30
Fine, but could you predict that these atmospheric conditions would be prevalent prior to 2007, or are you making an a posteriori deduction? My interest is in the self-similar nature of climate across multiple scales. In systems with high variability at fine scales (such as weather), one would expect a high degree of clustering of events, making it easy to give a posteriori judgements, but a priori predictions are no easier on large scales than they are on fine scales (in fact, probably more difficult) – which was my main observation.
Not according to Greg Holland:
Hurricane season of 2007 continued active trend
When scientists and the media are having discussions about whether certain storms should have received names, due to their thermal structure, longevity, and/or intensity, I disagree with Holland's assertion that 2007 was more active than 2006 and is indicative of any trend. However, who knows what metric he is using. With Andrew in 1992, I propose we declare that year very active as well.
When a “substantial” contribution is mentioned because of global warming, I wonder what that means. So, from the online dictionary, it comes back with: fairly large; “won by a substantial margin”. So, how much is natural oscillation, and how much is “substantial” global warming?
It would be interesting to see what an independent analysis of the TC data based on the Dvorak Method would show. If many of these storms had no recon and very little ship data to validate their condition, the folks at NHC would have depended on the satellite analysts for not only the positions of the storms but also their intensities. The Dvorak Method is highly subjective and is dependent upon the experience of the analyst. The most difficult part of the job is with tropical depressions that are in the early stages of development. The Dvorak Method is based on the classical evolution of tropical cyclones, and the analyst must constantly attempt to "fit" the best satellite footprint to each episode. An analyst really doesn't know how good his "fit" is until he gets some surface data or recce reports.
In this day and age of heightened awareness, I wouldn't be surprised if there is pressure "from above" to take the safest route and give the developing storm a higher intensity. Like the issuance of severe storm warnings, it is better to be safe than sorry; however, it does play havoc with climate data.
Link
two years makes a trend?
Does the fact that 2006 was way quieter than 2005 also indicate a trend?
The trend over the past 30 years is based upon beginning a secular analysis in the low point of the 1970s. Instead of worrying about the detection of Tiny Tims, I would favor a forecast verification based upon hurricanes, which usually have an eye that is easy to observe.
2006 had many storms that recurved out to sea (like all of them). 2007 had Noel. Quite an important difference when you consider the export of heat and moisture out of the tropics, the storms in 2007 physically did not do it (track from tropics to extratropics). Not like 2003, 2004, 2005, and 2006…
Re #37 JP, this may be of interest with regards to the Dvorak technique ( link ).
#40. My impression is that hurricane-days is more homogeneous in the HURDAT database than "stronger" or "lighter" indices. In addition to the inflation in the Tiny Tim count, there seems to me to have been inflation in cat 3 and cat 4 wind speed estimates. Concurrent increases in small storms and cat 3+ hurricanes, with a vacated middle, don't make much sense to me.
My impression is that the Mann-Webster case requires that there have been inflation of a number of tropical storms in the pre-reconnaissance days from sub-hurricane (in modern terms) to hurricane – which is not, a priori, impossible.
Most atmospheric scientists (specifically, meteorologists) will not argue that the number of tropical cyclones could very well stay the same in the event of AGW. After all, while SST and wind shear could change, the original forcing for tropical cyclones might not. Dr. Gray and NOAA are fulfilling an obligation to at least give us an idea of what the season could look like. Smart scientists always take it with a grain of salt.
In other words, “so?”
#42, Steve, I know there is a bifurcation in various storm intensities; it is a natural result of the Dvorak technique. 2007 saw 6 hurricanes, 3 of which may not have been classified as such, and which together will have maybe 1 hurricane-day max. So, there are Tiny Tim hurricanes too…
RE: #37 – You are right on.
Re: 45 and 37
Steve and JP
The very real risk is that for purposes of “message” there will be the temptation on the part of NHC/NCAR etc., to start counting the “severe storm warnings” and not the actual storms.
Re: 35 and 36
Ryan
Statements like that by Greg Holland make one wonder what planet he has been on during the past few years.
Is it possible to get the predictions on the same graph, and perhaps even to calculate the correlation?
try this blog: http://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=869&tstamp=200711
He has some interesting things to say in the “objective skeptic” category about hurricanes.
Although there were many a weak storm (which probably didn't deserve names), 5 records were set in 2007 for hurricanes. Oxymoronic. Paradoxical.
#48 was for #46
From Jeff Masters blog:
The records set in 2007 are also a function of observational capabilities. This is a syndrome that affects just about every pro sports game you watch. With the Elias Sports Bureau at the ready with countless inane statistics about every possible combination of feats, there are plenty of records broken all the time:
ESPN November 13, 2006
So, now let's look at the records for the North Atlantic season:
Aircraft recon with the best toys is cited in nearly every forecast discussion concerning Felix. Intensity was partially determined through the STEPPED FREQUENCY MICROWAVE RADIOMETER and the EYEWALL DROPSONDE and considerable satellite technology.
Lorenzo was a TD drifting around for almost 2 days before getting its act together. Humberto: HOUSTON WSR-88D VELOCITY DATA INDICATE THAT HUMBERTO HAS INTENSIFIED.
So, these records all involve the time derivative: fastest intensification from TD/TS to HURR/CAT345, etc. It is apparent that increased observational capabilities, both spatially and temporally allow for more fine-tuning of intensity estimates. Jeff Masters discusses the detection improvements related to obs improvement, but our intensity estimates, both in the past and currently, are also a function of evolutionary enhancements of our toys.
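The time-derivative point can be sketched as follows; the window length and the sample track here are hypothetical, chosen only to illustrate how such a record would be computed from 6-hourly best-track winds.

```python
# Sketch of how a "fastest intensification" record might be computed from
# a 6-hourly wind history (knots). Window length and track are hypothetical.

def max_intensification(winds_kt, window_fixes=4):
    """Largest wind increase over any span of `window_fixes` * 6 h (24 h here)."""
    return max(
        winds_kt[i + window_fixes] - winds_kt[i]
        for i in range(len(winds_kt) - window_fixes)
    )

# Hypothetical track: depression to hurricane within a day.
track = [25, 30, 35, 45, 70, 80, 85]
print(max_intensification(track))  # -> 50 (kt per 24 h)
```

Note that denser or finer observations can reveal sharp jumps inside a window that coarser sampling would smooth over, which is exactly the record-inflation mechanism described above.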
# 49. I agree with you about points you make for records 1-3.
But what about records 4 & 5? Do you chalk that up to observational capabilities as well?
Yes, Jimmy. Chris Landsea was interviewed along with Chris Mooney from Storm World and posited that Dean (or Felix, I forget) would not have been classified as a Category 5 down in the Caribbean because we did not have the observational capabilities that we have today. The Category 5 at landfall is most definitely a function of technology. The storms did not make landfall in the most highly populated areas, but in rural Mexico. Even with Andrew in 1992, the Category 5 upgrade several years later was based upon a distillation of a huge amount of data.
Our ability to observe eyewall replacement cycles is also a function of technology improvements… Here is an interesting write-up on Project Stormfury.
Here’s a plot worth a minute or two of examination ( link ). It’s a time series of the sea surface temperature (SST) of the Atlantic MDR, the area that gets so much attention. I’ve updated it to include the 2007 SST.
The recent SST in this region appears to show cyclical behavior, something that’s been noted at CA before, especially by bender. This updated chart raises questions in my mind:
* The SST cycling in recent decades is more sharply defined than it was in earlier decades. Is that due to the improvement brought about by the use of satellites? After all, the region is remote with minimal SST measurements by ships.
* Is the 2007 measurement the beginning of another cyclical trough of 3 or 4 years? It’s but one point but that one point fits the cyclical pattern.
* Was the rise from 1995 to 2000 a step-change, perhaps part of the AMO process?
* The lower 2007 SST has been at least partially attributed to Saharan dust. Is Saharan dust what is cycling, causing the SST to cycle?
* If it’s not dust then what is it?
David, good questions…
A few caveats: monthly mean data of September SST is nothing more than an arbitrary choice of 30 days, which are Julian days 244 to 273, assuming you aren’t leaping. What is special about these days? Nothing.
If you look at daily time scale SST and plot the MDR box SST from Sept 1-30 for 1985-2007 using the 1/4 degree Pathfinder product , for the past 23 years, you get a crow’s nest: Crow’s Nest . There is considerable year to year variability, but a general increase through the month. However, when some averaging is attempted, the picture becomes a lot clearer: MDR Average .
There is a clear linear increase to the MDR SST when looking at long-term daily averages. Any given year of course deviates from the very nice upward slope — which has natural implications when monthly mean data is used…
I am not sold on the dust impacting the MDR directly… It seems just a wee bit too far south.
Re #53 Nice SST plots and a new tool! Thanks. I’m surprised at the intramonth pattern for the MDR, something to think about.
I’m not sold on dust, either. My inclination is to suspect wind and/or upwelling variation, but that’s a weak inclination. Bottom line – I dunno.
Here’s a slightly different look at weak storms, using ACE data to group storms.
In this, a “weak storm” has an ACE below 2.0 (for perspective, an example of a 2.0 ACE storm is one with 50 knot winds for 48 hours, and no wind before or after that period).
A “micro-storm” is one with an ACE below 1.0
A “normal storm” has an ACE above 2.0
Presumably, a normal storm has enough of a “footprint” on the Atlantic to normally be detected by historical means (ship traffic or at landfall). The possible exception is the east Atlantic, due to lack of ship traffic.
A weak storm may or may not be detected by historical means.
A micro storm is hard to detect by ship or shore and generally requires modern tools (satellite imagery, QUIKSCAT, Doppler radar, GPS dropsondes, etc) for detection
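The grouping above can be expressed directly. The 1.0 and 2.0 per-storm ACE thresholds are the ones stated in the comment; the sample season values are made up for illustration.

```python
# The weak / micro / normal grouping described above, using the stated
# per-storm ACE thresholds. Sample per-storm ACE values are hypothetical.

def classify(storm_ace):
    if storm_ace < 1.0:
        return "micro"   # generally needs modern tools (satellite, QuikSCAT)
    if storm_ace < 2.0:
        return "weak"    # may or may not be detected by historical means
    return "normal"      # large enough footprint for ships or landfall

# Hypothetical per-storm ACE values for one season:
season = [0.4, 1.5, 7.2, 0.9, 12.6]
counts = {"micro": 0, "weak": 0, "normal": 0}
for a in season:
    counts[classify(a)] += 1
print(counts)  # -> {'micro': 2, 'weak': 1, 'normal': 2}
```

Grouping by per-storm ACE rather than raw storm counts is one way to keep detection-sensitive Tiny Tims from dominating a historical comparison.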
The following are time series for weak storms and normal storms, with notes. I'll try to churn out a micro-storm chart today before heading to FL.
Weak storms
Normal storms
Here is a helpful plot that has the distribution of ACE per storm in the Atlantic from 1944-2006. Of course, with the detection issues David has pointed out, the median has moved towards weaker storms ACE-wise.
ATLANTIC ACE DISTRIBUTION
Micro-storms defined via ACE
RE: #55 – (game show voice)”Behind onnnnne of these doors, is you verrrry own, gold plated, hockey stick, autographed by Michael Mannnnn. Which will it be, door #1, or, door#2?”(/game show voice)
Re #58 Bingo! 🙂
Here’s an interesting tidbit.
There have been storms in the Atlantic basin which have had a higher ACE than the entire 2007 season. Ivan in 2004, and a storm from 1899 which holds the record for both longest lasting at 33 days and the highest ACE.
Here is a time series of the winds from the 1899 storm.
Here is a map of its path.
Sure was tough on the Antilles.
RE: #60 – The interesting thing about Ivan was that had it recurved only a few miles farther west, it would have been a direct NE-quadrant hit on NOLA. It would have been far, far worse than Katrina… I was sitting there real time online as it neared shore, with the usual suspects with their shopworn "dey alllllllways turn, let dem good times rolllllll!"
NOLA's real day is yet to come. One of these days, they are going to get smacked with the NE quadrant of a monster. Only then will people wise up and either decide to spend the bucks to treat it like a Dutch polder and put up real sea defenses, or abandon it for good.
re #60 – how did they get such good tracking data for the 1899 storm? It has a lot of small twists and turns in it. I thought back then was supposed to be the "dubious data era"? I'm left feeling a little confused, given all of the talk from David Smith, Ryan Maue, and many others about the data quality for that time period. In fact, that is the crux of their hurricane argument…
Re #62 Jimmy, the 1899 hurricane passed over a number of islands, was quite powerful, and killed an estimated 3,400 people throughout the Western Hemisphere. It was well known and has been the subject of a lot of research, so the path has been reasonably well reconstructed, at least in the major points.
Here’s a Wikipedia article ( link ) .
Ryan Maue says on December 4th, 2007 at 11:38 am, ca: 49:
Inane statistics, indeed. You banged that nail well and true.
Tetris says on December 3rd, 2007 at 5:30 pm, ca: 45
Not just Holland, but most of NCAR. Their reliance on that vintage Cray supercomputer and its prognostications seems more servile by the week. I hike up there several times a year and always ponder the steam coming from the Cray's cooling towers behind the main building. Then there's the Global Warming Promenade just west of the towers – stick-mounted posters of various "hot topics," a few actually pertaining to the work done at NCAR. Must be seen to be believed.
David (#63).
Thanks for the link. Interesting. But I’m still confused how they monitored that NW turn in the N. Atlantic after it bounced off of NC and went eastward. Are there any islands around there?
Re #65 I think it’s a committee’s best guess, probably an attempt to reconcile a few mid-Atlantic ship reports. The storm appears to have gotten stuck in the traveled mid-Atlantic and lingered there for a week, so there may have been more than one ship report available.
It's mildly ironic that this hurricane, with the highest ACE in the records, occurred in a year which also saw the worst outbreak of frigid weather known in North America (the great February 1899 cold wave).
Looks to me like the northern portion is close to the shipping route from NY to Europe.
In regards to #66 and #67: how is it that, on the one hand, one can reason that shipping routes provided storm reports for a major hurricane in 1899 good enough to track it all around the Atlantic, but on the other hand, those same reports were so poor that they grossly underestimated the number of storms back then? They underestimated them so much, in fact, that the recent rise in storms is directly attributed to their poor detection skills back in the day?
…NATL 2007 naming season update… seems yours truly is now correct on the number of named storms, by him validated to be SS/TS/HU ("…Andrea NOT counted…"), and with Karen updated to HU. I just wait for another HU upgrading and I'll be 100% correct. (And you just wonder how much that Swedish Moosedevilseinfeld pays NHC…???)
#69 Ooops! Postseason SS “Olga” formed N of PR yesterday!! Interestingly it’s geographically and climatically in the tropics… CHECK: CHAOS CONFIRMED… Send more money… LOL
Jimmy (#68), the Tiny Tims are missed, not the big storms.
Reading Greg Holland’s analysis scared me. It’s like in Animal Farm, when the official report was bumper crop yields ever since the humans were kicked off the farm, but the animals were actually eating less and less. I mean, we’re here: people actually live in the Carib, and everyone’s conclusion is that it was a light hurricane season, and that after 2005 we’ve now had two light hurricane seasons in a row.
They also say that 2006 and 2007 haven’t been mild in most of the US. Well, natural gas prices didn’t spike, because peak power demand didn’t spike, because people weren’t using their A/C 24/7…
They re-wrote a few hundred years out of the climate history in 1998 – now they’re revising current events.
Unnnnnnnprecedennnnnnted. Firrrrrrrrrst nammmmmmmmmed storrrrrrmmmm afterrrrrr Nov 30th! (I know my own little West Coast Office of NHC project is a bit delayed. But I promise, I’ll have it up and running in time for Jan 01, 2008, the new official start of the NE Pacific Tropical Cyclone season.) New season shall be Jan 01 – Dec 31. Unnnnnprecedented in a millllllllllllllllll-yun yearrrrrrrrrrs!
Max Mayfield, former National Hurricane Center director, defended himself against Democrat accusations that he received his talking points from the White House on the linkages between hurricanes and climate change.
Max Mayfield ABC News
However, Henry Waxman, Chairman of the Oversight Committee on all things anti Bush, weaves a different narrative in the most recent report suggesting systematic White House efforts to manipulate climate change science.
So, the hypothesis that global warming is not causing dramatic changes in hurricanes is a White House talking point. On a political side note, with Bush being the dumbest president since his father and Ronald Reagan, W and his staff must spend a lot of time and effort censoring hurricane/climate change materials. Thus, as Karl Rove (the architect) glares over my shoulder here at the AGU meeting, I acknowledge that I have never met anyone named Olga.
Ryan,
Is Dr. Gray also controlled by the White House?
I notice that you correlate conservative with dumb. A personal fetish of yours?
>> I notice that you correlate conservative with dumb
not into sarcasm?
Without knowing the person, sarcasm is impossible to spot. Especially when the verbiage is mild compared to what many activists spout.
“However, Henry Waxman, Chairman of the Oversight Committee on all things anti Bush” – 😉
That was the “sarcasm on” trigger for me …
Thank you Steve #78 for being astute. Indeed, the media portrays Republican administrations as bumbling (Ford), stupid (Reagan), and unable to understand nuance (pick one). The awe-inspiring intelligence that the former Vice President displays makes the drive-by media swoon and reminisce about days of yesteryear and thoughts about “what could have been”. (I witnessed the filming of the HBO special in Tallahassee about the 2000 election with the K-PAXian himself, Kevin Spacey.)
Sarcasm: If the current administration is so inept concerning everything it does, how can it be able to manipulate climate research, put out timely talking points, and at the same time force NOAA to order the NHC to name Subtropical Thunderstorm complex Olga? To Waxman, what he is describing as censoring science is actually the other side of the issue: something Committee Chairs rarely worry about when they are in the majority.
So, where does Al Gore get his talking points from? What group of scientists are providing him the speech text, figures, etc.?
Well, Olga is now classified as a tropical storm. It’s over land and entering the mountains, facing strong wind shear, etc., so its tropical status should be short-lived.
That makes 14 named tropical systems for the season.
Yeah, the committee on everything anti-Bush and Karl Rove glaring over people’s shoulders at AGU meetings. It’s true: They are inept. Silly, it’s not them manipulating climate research, putting out timely talking points, and forcing NOAA to order the NHC around, it’s Halliburton and Exxon/Mobil, coordinated by Dick Cheney implementing the planning of the Trilateral Commission’s secret masters.
Everybody knows that.
The addition of Olga has to be setting records. In between brainwashing sessions here in San Fran, I will surf the left-wing eco-press to see how this December tropical storm is being interpreted with regards to global warming. Perhaps Olga was a response to the increased CO2 put into the atmosphere for all of the UN climate conference attendees to fly to Bali?
Steve wants to see 2X CO2= 2.5C
I want to see CO2=temp
We all want something we can’t get, right?
Good luck with the eco-press out there, Ryan. (I’d say left-wing is redundant, but I don’t want to get into politics and the environmental movement, it’s so blase these days.)
RE 80. If the NHC is going to allow late entries, then my late entry into the contest should be recognized. I know I said 13, but I meant to type 14.
Re #85 🙂 OK, I can buy that, plus there’s probably a teleconnection between those two numbers.
#80,
Well, that puts me right on the nose at 14. Now, just have to keep any more from being named.
Olga’s ACE so far is 0.4 (average Atlantic storm ACE is 9.0).
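For anyone who wants to reproduce ACE figures like these: the Accumulated Cyclone Energy index is conventionally the sum of the squares of a storm’s 6-hourly maximum sustained winds (in knots), counted only while the system is at tropical-storm strength (35 kt) or above, divided by 10^4. A minimal sketch; the wind sequence below is invented for illustration, not Olga’s actual advisories:

```python
def ace(winds_kt):
    """Accumulated Cyclone Energy: sum of squared 6-hourly maximum
    sustained winds (knots), counting only advisories at or above
    tropical-storm strength (35 kt), scaled by 1e-4."""
    return sum(v ** 2 for v in winds_kt if v >= 35) / 1e4

# Hypothetical short-lived storm: five 6-hourly advisories
print(ace([35, 45, 50, 45, 35]))  # -> 0.9
```

A weak, briefly-named system like Olga racks up only a few advisories at minimal intensity, which is why its ACE lands well under 1 while an average Atlantic storm accumulates about 9.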
Olga appears to be gone except for a remnant low but, based on their 2007 practices, the NHC may well declare Olga to have strengthened into a major hurricane.
Well, we historically overly conservative NE PAC folks are about to storm the stage (so to speak, pun possibly intended). You want numbers? You are gonna get ’em in 2008! The countdown to the new and improved, this is not your father’s NHC, NHC NE PAC Division, is now down to mere days. Behold – the new and improved year-long TC season!
re 84… divide both sides by 2?
Larry told me to say that.
What, 1X=1.25C? Or CO = 1/2temp? 😀
Seriously, here’s our dilemma, right down the line with some pseudo-code sorta:
If anomaly=accurate + anomaly=meaningful + anomaly=energy balance (We’re into lala land here, but we’ll keep going)
then
If aghg=anomaly + co2=aghg (We can’t get out of lala land, can we?)
then
If co2=anomaly + 100%co2=human (Sigh)
If the past is any indication of the present, and all above hold, we’ll round and fiddle and guess that and have:
400CO2-300CO2=100CO2
100CO2/300CO2=33.3%
+33.3%=.7C trend
Therefore, 100%CO2=2.1C trend
So the temp will go up 2.1C when humans cause an increase in CO2 from 400 to 800 or in other words a 100% increase in CO2 causes 2.1C of temp.
Easy, isn’t it.
So we wait 10 years, watch temp and co2, and if the ratio of 1 ppmv:.007C trend holds, we’ll know our answer.
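The back-of-envelope scaling above can be written out explicitly. To be clear, this is just the comment’s linear extrapolation, not a real climate sensitivity calculation; every number below comes from the comment itself:

```python
# Linear extrapolation from the numbers in the comment (illustrative only).
baseline_co2 = 300.0   # assumed starting CO2 level, ppmv
current_co2 = 400.0    # assumed current CO2 level, ppmv
observed_trend = 0.7   # degrees C of trend attributed to that rise

fractional_rise = (current_co2 - baseline_co2) / baseline_co2   # +33.3%
warming_per_doubling = observed_trend / fractional_rise         # C per +100% CO2
per_ppmv = observed_trend / (current_co2 - baseline_co2)        # C per ppmv

print(round(warming_per_doubling, 1))  # -> 2.1
print(round(per_ppmv, 3))              # -> 0.007
```

The 0.007 C per ppmv ratio is then the quantity one would watch over the next decade to see whether the naive linear relationship holds.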
Or we could arbitrarily pick any ten years in the past going back to 1880 and see if the relationship has been speeding up, slowing down, or staying the same.
Has anyone tried this?
I hope you’re joking.
Whom, mosher or I?
Nevermind, both me and him.
But really, has anyone checked to see if any arbitrary 10 year period shows some correlation between how many ppmv has…..
Nevermind, I’ll do it myself.
1996-2005 Temp trend +.22C per decade / 94.5 sig – CO2 20 ppmv (or .22/94.5 – 20)
1986-1995 .09/78.9 – 14
1976-1985 .12/44.1 – 14
1966-1975 0/19.8 – 10
1956-1965 0/6.6 – 6
So there you are.
0 temp from 6 ppmv
0 temp from 10 ppmv
.12 temp from 14 ppmv
.09 temp from 14 ppmv
.22 temp from 20 ppmv
Here’s the 56-65 decade plot. The site has links to CO2 levels.
http://www.ncdc.noaa.gov/gcag/GCAGdealtem?dat=BLEND&mon1=1&monb1=1&mone1=12&bye1=1956&eye1=1965&graph=Lineplot&mon2=0&eye2=0&bye2=0&mon3=0&ye=0&begX=0&begY=0&endX=71&endY=35&param=Temperature&non=0&klu=1&proce=80&puzo=0&nzi=99&ts=6&sbeX=-180.0&sbeY=90.0&senX=180.0&senY=-90.0
So the first two decades (which I assume are part of the base period) give 0 temp per ppmv. But there’s a seeming match of roughly ~.01C = 1 ppmv for the last 3 ten-year periods (fairly close to around .007).
Let’s try 1895-1904 (random) Oooops. -.16C (88.7 sig) from 2 ppmv
Yeah, I know, 10 years is too short.
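The decade-by-decade check above amounts to dividing each decade’s temperature trend by its CO2 rise. A quick sketch using the numbers posted in the comment (treat these as the commenter’s figures, not vetted data):

```python
# Decadal temperature trend (C/decade) and CO2 rise (ppmv), as posted above.
decades = {
    "1956-1965": (0.00, 6),
    "1966-1975": (0.00, 10),
    "1976-1985": (0.12, 14),
    "1986-1995": (0.09, 14),
    "1996-2005": (0.22, 20),
}

for span, (trend, ppmv) in decades.items():
    ratio = trend / ppmv  # C of trend per ppmv of CO2 in that decade
    print(f"{span}: {ratio:.4f} C/ppmv")
```

Run on these figures, the ratio ranges from 0 in the first two decades to about 0.011 in the last, which is the point of the exercise: the per-ppmv relationship is anything but constant over a 10-year window.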
I see the obvious, does anyone else? Yep, another hare-brained theory shot to hell, much like the 2007 NH cyclones.
That is the ugliest link I’ve ever seen.
Use the Link tag and we’d never have noticed!
oh sure, no one ever believes the robot
Here is the text of the review of the latest Vecchi and Soden paper that I sent the Houston Chronicle reporter:
Our understanding of the climate variability of tropical cyclone intensity is hampered by an insufficient historical data record and the inability of current climate models to resolve tropical cyclones. Hence efforts are being made to develop “proxies” for hurricane activity for which we have adequate historical data, or that can be resolved by climate models, to help us understand how and why hurricane activity has varied in the past and how it might change in the future.
As a proxy for changes in hurricane intensity, the Vecchi and Soden paper examines the tropical cyclone maximum potential intensity. Maximum potential intensity is a theoretical upper limit on hurricane intensity based on sea surface temperature and the local vertical thermodynamic structure of the atmosphere. There have been several theories of maximum potential intensity proposed, and the most widely accepted of these is by Kerry Emanuel. Because historical data on the vertical thermodynamic structure of the atmosphere are not available prior to 1950, Vecchi and Soden further develop a proxy for potential intensity based upon the tropical sea surface temperature anomalies relative to the average tropical sea surface temperatures.
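For readers unfamiliar with the quantity under discussion: Emanuel’s potential intensity theory ties the thermodynamic speed limit on a storm to the sea surface temperature, the outflow temperature aloft, and the air-sea enthalpy disequilibrium. A heavily simplified sketch follows; it uses the commonly quoted form V^2 = (Ck/Cd) * ((SST - Tout)/Tout) * dk, and every numerical value is an illustrative assumption, not something taken from the Vecchi and Soden paper:

```python
import math

def potential_intensity(sst_k, outflow_k, enthalpy_deficit, ck_over_cd=0.9):
    """Simplified Emanuel-style maximum potential intensity (m/s).

    sst_k            -- sea surface temperature, kelvin
    outflow_k        -- outflow temperature aloft, kelvin
    enthalpy_deficit -- air-sea enthalpy disequilibrium k*_s - k, J/kg
    ck_over_cd       -- assumed ratio of enthalpy to drag exchange coefficients
    """
    v_squared = ck_over_cd * (sst_k - outflow_k) / outflow_k * enthalpy_deficit
    return math.sqrt(max(v_squared, 0.0))

# Warm tropical ocean vs. a slightly cooler one, with assumed inputs;
# both land in the plausible hurricane range of several tens of m/s.
print(round(potential_intensity(302, 200, 15000), 1))
print(round(potential_intensity(299, 200, 12000), 1))
```

The point relevant to the paper: with a fixed outflow temperature, potential intensity responds to SST and to the enthalpy disequilibrium, which is why regional SST patterns (not just the global mean) matter.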
The main result of Vecchi and Soden’s analysis is that maximum potential intensity does not correlate well with the global increase in tropical sea surface temperature, but rather with the regional variations in the surface temperature. Looking specifically at the North Atlantic, the historical data record since 1880 shows an increase in the Atlantic potential intensity that is strongest near the equatorial latitudes in the eastern Atlantic, while the Caribbean and Gulf of Mexico show a decrease in potential intensity. In terms of the climate model projections for the North Atlantic, the maximum potential intensity increases towards the equator and the African coast and also in the Gulf of Mexico, but decreases in a swath extending from the Caribbean northeast to the African coast north of about 25°N.
So what is the significance of this paper in assessing the impact of global warming on hurricane intensity, and particularly for the North Atlantic? North Atlantic hurricane intensity has shown a strong relationship to sea surface temperature, and the theory of maximum potential intensity has been used as a physical link between increasing tropical sea surface temperatures and increasing hurricane intensity. This paper is one of a number of papers published in the last few years indicating that the influence of increasing sea surface temperature on hurricane activity is complex; it is not just the local heating effect of warmer sea surface temperatures (the maximum potential intensity argument), but changing gradients of sea surface temperature influence atmospheric circulations that can influence hurricanes. The eastern Atlantic has been warming more rapidly than the western Atlantic (Gulf of Mexico and Caribbean), which is influencing the atmospheric circulations and resulting in more hurricane genesis in the eastern Atlantic and more equatorward tracks.
This paper is an interesting contribution that will be of some help in filling out the picture of how climate variations influence hurricanes. There are some caveats about the methods used in the paper that will hopefully be addressed in future research; these caveats are:
* Current theories of hurricane maximum potential intensity still fall short in terms of accurately accounting for the exchange of heat between the atmosphere and the ocean and the asymmetric vortex dynamics.
* The SST data are not of adequate quality prior to 1920, especially in the Pacific Ocean, and particularly for the “proxy” regional SST anomalies.
* Climate model projections of potential intensity depend on the parameterization of tropical convection to provide the upper atmospheric temperature and humidity used in the calculation of potential intensity. Tropical convection parameterization is one of the weakest links in climate models. Confidence in the climate model projections of potential intensity should have been established by looking at the climate model simulations for the past century and comparing with the historical data record of potential intensity.
* Climate models generally do a poor job of simulating realistic interannual and decadal modes of natural climate variability. The climate models used here show strong interannual and decadal variability in maximum potential intensity; assessing this variability using climate model simulations of the last 100 years would have made the authors’ claim about the magnitude of the natural variability exceeding the trend more credible.
Sort of off topic here, but have any of you gotten a letter from your mortgage lender recently informing you that your flood insurance (that you’ve had for years) is not good enough? We did, and the upgrade for this new and improved coverage is… ka-ching!… +500 dollars a year, and we have to pay for it.
re 101, my comment, to be clear: that is +500 dollars on top of what we already pay now.
Olga had an ACE value of 0.66 (a typical Atlantic storm’s ACE is about 9.0). It lasted 21 hours as a tropical storm, though the NHC acknowledges that their wind speed estimates for Olga’s final 13 hours may have been “generous”.
The final six hours included this note from the NHC:
At the time it was declared to be “tropical” it had been over land (Dominican Republic peninsula) for six hours and was entering the very rugged mountains of that island. The NHC based its tropical classification on
That is weak. There was no measurement of a warm core, any winds and precipitation were affected by land/mountain interaction and the winds were still removed from the center. On the consequence side, the system was about to die in the Hispaniola mountains. Why make the classification change based on such weak conjecture? It baffles me. I hope they reconsider the classification when the reanalysis is performed.
RE: #100 – As I pointed out all season, they no longer use a multi-criterion approach to name and claim. The de facto Op Def of a TC has had a lowering of the bar. This time they were blatant about it.
Ryan Maue updated his global storm webpage to cover all of 2007, using ACE as the measure of activity. A non-ACE superlative for 2007 was that the year holds the record for longest global period without a tropical cyclone, I think it was 30 or 40 days (I forget the number), back in the northern spring.
It snowed in Baghdad today. Hourly Reports
Of course, odd or unusual weather in a location is heralded as an example of climate change or global warming, and this event is no exception. Iraq snow = climate change
I have an alternative explanation: it is winter.
Tongue planted firmly in cheek:
HURRICANE NOGURI – 1805 Z, 11-JAN-08
EARLIER TODAY, TYPHOON NOGURI CROSSED 180, AFTER A SIGNIFICANT LEFTWARD / NORTHWARD WOBBLE. THIS STORM IS NOW DESIGNATED AS HURRICANE NOGURI, A CATEGORY ONE HURRICANE. LOCATION IS 179W, 40N, TRACK HAS RESUMED ESE. AT THIS TIME, THIS FEATURE IS OF NOTE SOLELY TO MARINE TRAFFIC (ESPECIALLY DUMB SHIPS) AS IT IS EXPECTED TO PULL UP STATIONARY AGAINST A PROGRESSIVE RIDGE PROG’ED TO BE ALONG 14OW AT THE TIME. STORM IS PROG’ED TO THEN LOSE STRENGTH, AND, IF IT EVER REACHES LAND, SHOULD BE A TD, AT MOST. NONETHELESS … THIS IS THE 4TH NAMED STORM (AND 4TH HURRICANE!) IN THIS HIGHLY UNPRECEDENTED 2008 SEASON. RETROSPECTIVELY, CHANGING SEASON TO YEAR ROUND (JAN 01 THROUGH DEC 31) WAS A *WISE* CHOICE. EOT. FORECASTER – HAY-MAN.
HURRICANE BORIS – TS WATCH – 1755Z, 11-JAN-08
THE NATIONAL HYSTERIA CENTER (“NHC”), NEPAC DIVISION, HAS ISSUED A TROPICAL STORM WATCH FOR THE WEST COAST OF NORTH AMERICA, BETWEEN VICTORIA, AND, JUNEAU. HURRICANE BORIS, A CATEGORY ONE HURRICANE, IS CURRENTLY LOCATED AT 150W, 42N, AND IS ON A STEADY NE TRACK. GIVEN SST’S (WHICH LOWER CLOSER TO LAND, DUE TO UPWELLING AND THE LONG SHORE CURRENT OUT OF THE NORTH), WE EXPECT SOME REDUCTION IN STRENGTH, DOWN TO TS LEVEL, PRIOR TO LANDFALL. LANDFALL IS PROG’ED FOR SOME TIME LATE Z TOMORROW OR EARLY Z THE FOLLOWING DAY. STRONG WINDS, HEAVY FLOODING RAINS AND NOTABLE STORM SURGE ALONG WITH DANGEROUS SURF CONDITIONS CAN BE EXPECTED IN THE LAND FALL ZONE, PARTICULARLY IN THE RIGHT, FRONT QUADRANT OF THE STORM. NEXT UPDATE, EARLY, 12-JAN-08 Z. EOT. FORECASTER – HOLDEN.
2001, 2000, 1994, 1990, 1982, 1981, 1978, 1973, 1962, 1951, 1937, 1931, 1930, 1922, 1914, 1907, 1905, 1902: no US landfall hurricanes. I can see what is occurring, but I don’t think SM would want to know.
5 Trackbacks
[…] 2007 Blown off track: Northern Hemisphere Historic Cyclone Inactivity, Ryan Maue, http://www.climateaudit.org, Nov 30th, 2007 […]
[…] As previously reported here and here at Climate Audit, and chronicled at my Florida State Global Hurricane Update page, both […]
[…] previously reported here and here at Climate Audit, and chronicled at my Florida State Global Hurricane Update page, both […]
[…] [La Nina years: 1950, 1954-6, 1962, 1964, 1967, 1970-1, 1973-5, 1988, 1998-9, 2007-8 and El Nino years: 1951, 1957, 1963, 1965, 1972, 1976-7, 1979, 1982-3, 1986-7, 1991-4, 1997, 2002, 2004, 2006, 2009] Flashback to October 2007: I posted on Steve McIntyre’s Climate Audit the following: […]
[…] 2002, 2004, 2006, 2009]Flashback to October 2007: I posted on Steve McIntyre’s Climate Audit the following:The North Atlantic was not the only ocean seeing quiet tropical cyclone activity. When using the […]