AGU 2007

Off to AGU tomorrow morning. I’m doing two presentations – an oral presentation on hurricanes on Wednesday in a Spatial Statistics session (with Roger Pielke) and a poster on Friday on Almagre tree rings (with Pete Holzmann). See climateaudit.org/pdf/agu07.* for the two PPTs.

AGU tends to be exhausting. Plus I’ve got a reasonably full social calendar as well. Plus for the third year I’m taking my squash racquet – one of my friends from Toronto is working in Toronto, so maybe this year I’ll actually play. Usually I’m too tired at the end of each day.

The bristlecone program has cost about $4000; I haven’t done an exact accounting, but contributions in response to the announcement were about $3500-3700. My trip to AGU is going to cost about $2000 between plane fare, hotels, registration, presentation fees, etc.

I’ll try to write some daily reports. If I don’t, it’s because I’m having a good time.

Update: thanks to readers for their response to this. About 60 readers contributed $2700.

Unthreaded #27

continuation of Unthreaded #26

realclimate on Loehle

RealClimate has a good discussion of problems with Loehle 2007: link, linking to JEG’s discussion but not to the discussion here. Update: Luboš has a discussion here.

In some cases, RC’s non-linking to climateaudit is mere pettiness, but in this particular case they cite information on Loehle that was initially made available at climateaudit. In this CA post, we discussed the provenance of the Loehle proxies and requested that Loehle provide his proxies as used – which he did. The numbering in this article (which differs from the original article) is the numbering used in the RealClimate article, which they explained was derived from the proxy version that Loehle supplied to them – presumably the same version that we had requested. Loehle’s article did not include data citations for the Loehle versions. Exact data citations are provided in the CA post here; Gavin uses exactly the same data citations and has not suggested that Loehle sent them to him. While the data citations could have been developed independently, in this case I doubt that Gavin can honestly say that he did not incorporate CA information on the Loehle proxies.

For reference, plagiarism (Wikipedia) includes:

… incorporating material from someone else’s written or creative work, in whole or in part, into one’s own without adequate acknowledgement.

Having said that, there are useful analyses of individual proxies – a type of analysis that I, for one, welcome and believe to be relevant, including some caveats on individual proxies that have not been previously raised here. The post is a “climate audit” type of post and shows that Gavin can be a pretty good “climate auditor” when he turns his mind to it.

However, given that 9 of the 11 Moberg low-frequency proxies are used in Loehle 2007, presumably most of these criticisms were equally applicable to the prior use of these low-frequency proxies in Moberg et al 2005 or for that matter in Juckes et al 2007.

My take on Loehle 2007 has been (and I hope that this has been understood) that it is really a variation on Moberg and it’s pretty hard for me to see a rational basis on which Moberg is qualified for inclusion in spaghetti charts while Loehle isn’t. If you go through the RC critique of Loehle, my impression is that virtually every criticism can be leveled equally fairly against Moberg – raising the question as to why RC is only now raising these issues.

Some nits are picked in Loehle’s methodology. I haven’t checked the correctness of these points, and I definitely endorse the idea of realclimate (or anyone else) checking for defects in data handling and reporting. However, they would be a little more credible if they dealt with the many beams in their own eye – for example, the incorrect geographic locations of the Mann et al 2007 precipitation series.

While I am mostly in agreement with their proxy comments, I am not in agreement with their views on multivariate methodology. I don’t have time to discuss this today, but Mann’s present RegEM is not an obvious panacea. It’s hard for a statistical method to be sufficiently bad as to be “wrong”, but Mann has accomplished this twice with the MBH data set: first with the MBH98 PCA-regression combination with its erroneous PCA method; more recently, with the Rutherford et al 2005 RegEM method (the code for which has now been expunged from the record). The new Mann RegEM method gets the same results as these two erroneous methods (a bristlecone-pine shaped Hockey Stick). Is the new method “right”? Readers should recognize that all that is done in these long-winded statistical efforts is to choose weights for the individual proxies. The new Mann method does not report the weights assigned to bristlecones, but you can be sure that they are large.
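To make the point concrete, here is a minimal sketch (with invented series and invented weights – none of these numbers come from any actual reconstruction) of the observation that a reconstruction linear in the proxies reduces to a weighted sum of them; the elaborate statistical machinery only determines the weights:

```python
# Any reconstruction that is linear in the proxies is just a weighted sum;
# the machinery (PCA-regression, RegEM, ...) only picks the weights.
# Both the proxy series and the weights below are invented for illustration.
proxies = {
    "bristlecone": [1.0, 1.2, 1.5],  # hypothetical standardized series
    "treeline":    [0.2, 0.1, 0.3],  # hypothetical standardized series
}
weights = {"bristlecone": 0.9, "treeline": 0.1}  # illustrative only

# Reconstruction at each time step = weighted sum across proxies
recon = [sum(weights[k] * proxies[k][t] for k in proxies)
         for t in range(3)]
```

If the weight on a hockey-stick-shaped series dominates, the reconstruction is hockey-stick-shaped, whatever method produced the weights.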

Their comments on multivariate methodology appear weak to me, but the comments on individual proxies are well worth reading. But what’s sauce for the goose is sauce for the gander: the same criticisms surely apply equally, or even more so, to Moberg.

CA comments are back online

An explanation for the problem:

All users should update to Bad Behavior 2.0.11 immediately to prevent being blocked from your own site.

Within the past two days users have found themselves blocked from their own sites while using recent versions of Bad Behavior. A third party blacklist which Bad Behavior queries recently began sending false positives for any IP address queried, causing everyone using Bad Behavior to be blocked. This issue is fixed in Bad Behavior 2.0.11.

Download Bad Behavior 2.0.11 now!

P.S. Yes, Bad Behavior is still in development. More news coming soon.

Update: Some people have asked for more details on what exactly happened. In brief, yesterday I moved all of my sites to a new dedicated server. In the process, I decommissioned an old blacklist I was running which I thought wasn’t being used, not realizing that Bad Behavior was still set to use it. Shortly afterward, I found myself locked out of my own blog, just as you all did. So therefore, this release.

Nothing to see here, move along….

…I’m still retired, you know. 😉

Two Ross McKitrick Op Eds

National Post here
CSM here

Almagre – Crowley Style

Crowley and Lowery (2000), still cited quite often, purported to show that the MWP was a dog’s breakfast of odds and ends – very different from the Modern Warm Period. The “proof” was the presentation of a hodgepodge of proxies, which supposedly did not show a MWP, but did show a Modern Warm Period. “Central Colorado” was one of the series, citing an early Lamarche paper.

As noted elsewhere, Crowley lost his collation of original series and couldn’t remember where he got the digital data from (but acknowledges Jones). Crowley’s “Central Colorado” series is very likely a transformation of Lamarche’s original chronology (Crowley used some really OLD versions) – the transformation standardizes each proxy to ranks scaled between 0 and 1. Here’s a comparison of our extended Almagre chronology and the smoothed Crowley version (which he did manage to locate). While the two versions track one another more or less, the updated version has reduced values in the mid-20th century and ends at pretty much the long-term median.
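Since Crowley’s exact transformation isn’t documented (the collation was lost), here is a guess at what a rank standardization onto [0, 1] could look like; the function name and the sample series are mine, not Crowley’s:

```python
def rank_standardize(series):
    """Map a series onto [0, 1] by rank: the smallest value
    becomes 0, the largest becomes 1 (ties broken by position)."""
    n = len(series)
    # indices of the values in ascending order
    order = sorted(range(n), key=lambda i: series[i])
    ranks = [0] * n
    for r, i in enumerate(order):
        ranks[i] = r
    return [r / (n - 1) for r in ranks]

# A rank transform keeps the ordering but discards amplitude, which is
# one reason a transformed and an original chronology can track each
# other "more or less" without matching levels.
print(rank_standardize([0.8, 1.2, 1.0, 0.5, 1.5]))
# [0.25, 0.75, 0.5, 0.0, 1.0]
```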

In this case, one can argue that the MWP was as elevated as the Modern Warm Period (although a more likely interpretation is that the data is not a thermometer.)

[Figure almagr27.gif: extended Almagre chronology compared with the smoothed Crowley version]

Almagre Chronologies

On an earlier occasion, I posted up our updated measurement data from Almagre. I’ve been working on this material in preparation for AGU (Dec 14). Today I’m going to show some initial chronology calculations.

Kiehl (2007) on Tuning GCMs

Eduardo Zorita sent me an interesting paper today by Kiehl, a prominent climate modeler, analyzing the paradox of how GCMs with very different climate sensitivities nonetheless all more or less agree in their simulations of 20th century climate. Kiehl found that the high-sensitivity models had a low aerosol forcing history and vice versa. Kiehl observed:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

Eduardo’s take was as follows:

surprisingly the attached paper, from a main stream climate scientist, seems to admit that the anthropogenic forcings in the 20th century used to drive the IPCC simulations were chosen to fit the observed temperature trend. It seems to me a quite important admission.

Here are some excerpts from Kiehl 2007 together with his key graphics.

Tiny Tim Storms

David Smith, a regular commenter on hurricanes, writes:

Tiny Tim is a Charles Dickens character. Tiny was a young lad, small, very weak, in a struggle to survive and of little notice in the hustle-bustle streets of London. Later, of course, his fortunes improved and he and Scrooge became “part of the record” of Victorian England.

In a similar vein (OK, it’s a stretch) there is a type of Atlantic tropical cyclone that is like Tiny Tim: generally of short duration, weak winds, small areal extent and often in a remote part of the ocean. Its impact on its environment is tiny (a very small “footprint” in the Atlantic).

My operational definition of “Tiny Tim storms” is: storms so minimal that the NHC end-of-season reports contain not a single ship or shore report of storm-force winds. This is not a matter of oversight – storm analysts consider surface verification of wind estimates to be an important matter and list shore weather reports and ship reports in their write-ups.

And the lack of ship or shore reports is quite significant if someone is looking at storm climatology. Storms lacking ship or shore reports of storm-force winds would not, prior to 1945 (the start of recon), have been classified as tropical storms or hurricanes. Why? Because, prior to 1945, all the meteorologists had were ship and shore reports. No aircraft recon, no satellites, no buoys and no Doppler radar – just ship and shore reports.

So, in this era of many strong ships, rapid reporting and (US) shores lined with windspeed devices like onshore C-MAN stations – a seeming plethora of data – are there still Tiny Tim storms, ones that modern technology sees but which lack storm-strength impact on ships and shores and which would have been ignored in the past?

The US National Hurricane Center (NHC) makes its end-of-season archives available at its website. An example of a storm report is here – the infamous Hurricane Katrina – which shows a wealth of information, including about 70 selected ship observations of storm-force winds (hmmm, must have been some old dumb ships out there left over from the 1930s) and about 50 selected onshore observations.

Another example is Tropical Storm Ernesto, a report with much less content because, well, it wasn’t much. There were no ship reports of storm-force winds and, as the NHC acknowledges, even in reanalysis it is of questionable strength and organization. Yet it carries as much weight in storm-count trends as does Katrina.

(A word about reviewing the NHC archives – the ones for recent years are well written and organized, but that quality (for climatological purposes) diminishes as one goes back in time. There is a lot of verbiage and data to review. I say this because I reviewed about 250 reports for this post and may have missed some detail, one way or the other. I doubt that it’s material, but I want to mention it anyway and welcome anyone who might audit my list.)

I reviewed the last 20 years of records, as I figured that covered the modern increased-activity era (and the 1980s record quality becomes a bit more challenging).

So, the question of the hour is: how many Tiny Tim storms – ones with nary a ship or shore report of storm winds – occurred in the last 20 years? The answer is here.

Frankly, I was surprised. There are 52 storms on the list. That’s 52 out of the 252 storms in the official record, or 20% of the total – 20% of modern storms lack a single classical (ship or shore) report of storm winds. Wow.

The obvious question is: how can one compare these satellite- and aircraft-based storms, which left no ship or shore evidence, with pre-1945 records which were based solely on ship and shore observations?

A little data.

First, here’s a look at a couple of characteristics of the Tiny-Tim group. Here is a bar plot of the duration of the Tiny Tims, grouped by days of existence (6 to 24 hours = 0 to 1 day, 30 to 48 hours = 1 to 2 days, and so forth).

The median duration is about 1.75 days (42 hours). A few storms lasted beyond four days, ones that tended to be in remote open waters. For perspective, something that moves at, say, 10 mph and lasts 2 days doesn’t cover a huge amount of real estate.

How about winds? Here’s a bar plot of windspeed distribution for the Tims. It shows that the group combined had 182 six-hour periods (45.5 days) of winds in the 35 to 39 knot range, as estimated by aircraft, satellite or buoy. The distribution has a mean windspeed of 43 knots (“strong gale” on the Beaufort scale) with 85% of the time spent below 50 knots.
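The summary statistics quoted here (six-hour periods per wind bin, mean windspeed, share of time below 50 knots) are straightforward to compute from six-hourly best-track windspeeds. A sketch with invented values – the real analysis pools the six-hour periods of all 52 storms:

```python
# Six-hourly best-track windspeeds in knots for one hypothetical Tiny Tim.
winds_kt = [35, 35, 40, 40, 45, 50, 55, 40, 35, 45]

n_periods = len(winds_kt)
duration_days = n_periods * 6 / 24                      # six-hour periods -> days
mean_kt = sum(winds_kt) / n_periods                     # group mean windspeed
share_below_50 = sum(w < 50 for w in winds_kt) / n_periods  # fraction of time < 50 kt

print(duration_days, mean_kt, share_below_50)  # 2.5 42.0 0.8
```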

There is an important graphic which I wish I could present but cannot because, to my knowledge, the data does not exist. The graphic would convey information on the geographical extent of storm-force winds. This is important because Tims likely have peak winds in only a small area on the eastern side of the center, perhaps 30 to 50 miles across typically. Tropical storms often lack symmetry and have their strongest winds in a relatively small area of thunderstorms.

As an exercise for perspective, figure that the hurricane-prone portion of the Atlantic covers 8 million square miles and that a Tiny-Tim has storm-force winds 100 miles across and moves at 10 mph for 2 days before weakening. That equates to the Tim covering 0.6% of the tropical Atlantic, which is not much.
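The arithmetic behind the 0.6% figure, treating the storm-force area as a simple swath (path length times width, ignoring the end caps), using the assumptions stated above:

```python
speed_mph = 10              # assumed forward speed
hours = 2 * 24              # two days before weakening
width_miles = 100           # assumed diameter of the storm-force wind field
basin_sq_miles = 8_000_000  # hurricane-prone Atlantic, per the text

swath_sq_miles = speed_mph * hours * width_miles  # 480-mile path, 100 miles wide
fraction = swath_sq_miles / basin_sq_miles
print(f"{fraction:.1%}")  # 0.6%
```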

Another useful graphic, which I have not done, would be a map of the storm locations. I think we’d see Tiny-Tims in the Gulf, along the eastern US seaboard (frontal-zone Tims), in the remote open Atlantic and scattered elsewhere.

OK, that’s a view of the group. Now for the main question: how have these storms affected the all-important trend in Atlantic storm count? What does the long-term time series of Atlantic tropical cyclones look like if the recent Tims are omitted?

Since my data covers only the most recent 20 years the plot is rather odd but does offer some information. The blue line is the official record (Tims included) while the red line is what the 5-year average would look like without the recent Tims. The comparisons should be (1) between the red and blue lines for 1988-2007 and (2) the red line (1988-2007) versus the blue line before 1945 (pre-aircraft). The plot shows notably fewer recent storms and shows recent activity more in line with historical (pre-1945) activity.
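The red line described above is just a trailing mean of the Tim-adjusted annual counts. A minimal sketch with invented counts (the actual counts come from the NHC record and the Tiny Tim list):

```python
def trailing_mean(counts, window=5):
    """Trailing window-year mean of annual storm counts."""
    return [sum(counts[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(counts))]

official = [12, 15, 14, 28, 15, 27]   # invented annual storm counts
tims     = [ 2,  3,  2,  8,  3,  6]   # invented Tiny Tim counts per year
adjusted = [o - t for o, t in zip(official, tims)]

print(trailing_mean(official))  # [16.8, 19.8]
print(trailing_mean(adjusted))  # [13.2, 15.4]
```

Subtracting the Tims before averaging is what pulls the red line below the blue one in the recent period.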

A closeup, with a few comments, is here. The impact of the Tims on the recent record is clear. I added several comments on the pre-1945 period, lest the question of a peak (1930s) to peak (2000s) comparison arise. The 1930s through the mid-40s was a period of limited global commercial activity, due to the Great Depression followed by World War 2 (this was shown in an earlier graph on CA a few months ago). Fewer ships in the 30s and early 40s meant less chance of an encounter with weather of any sort, including tropical cyclones. I suspect that this affected storm reports.

To me, this is further evidence of the problems with long-term comparisons of Atlantic storm counts and reinforces my view that improvements in storm detection are the main drivers, and perhaps the sole drivers, of the increase in reported Atlantic storms.

2007 Blown off track: Northern Hemisphere Historic Cyclone Inactivity

Ryan Maue writes in as follows (see also 2007 Tropical Cyclone Activity).

As reported at Climate Audit at the end of October, the North Atlantic was not the only ocean seeing quiet tropical cyclone activity. Measured by Accumulated Cyclone Energy (ACE), the Northern Hemisphere as a whole is historically inactive. How inactive? One has to go back to 1977 to find lower levels. Even more astounding, 2007 will be the 4th slowest year in the past half-century (since 1958).

The 2007 Atlantic Hurricane season did not meet the hyperactive expectations of the storm pontificators. This is good news, just like it was last year. With the breathless media coverage prior to the 2006 and 2007 seasons predicting catastrophic swarms of hurricanes potentially enhanced by global warming a la Katrina, there is currently plenty of twisting in the wind to explain away the hyperbolic projections. The predominant refrain mentions something about “being lucky” and having “escaped” the storms, and “just wait for next year”.

[Figure: maueh29.jpg]

Well, before we prepare for the obvious impending onslaught of the next “above-average” hurricane season, let’s review some very positive aspects of what 2007 offered:

Combined, the 2006 and 2007 Atlantic hurricane seasons are the least active since 1993 and 1994. Compared with the 1995-2005 average, 2006 and 2007 hurricane energy was less than half of that previous 10-year average. The most recent active period of Atlantic hurricane activity began in 1995, but the last two seasons have been decidedly less active.

Combined, the Eastern Pacific and the North Atlantic – which typically play opposite tunes when it comes to yearly activity (because of El Nino) – brushed climatology aside and together managed the lowest output since 1977. In fact, the average lifespan of the 2007 Atlantic storms was the shortest since 1977, at just over two days. This means that the storms were weak and short-lived, with a few obvious exceptions.

[Figure: maueh28.jpg]

So, before throwing Dr. Gray, NOAA, and Accuweather under the bus, consider what seasonal forecasting must entail to skillfully project hurricane activity. Then consider what we do not know well:

  • Why are there 80-90 tropical cyclones each year globally?
  • Will a given storm rapidly intensify or weaken prior to landfall, several days ahead of time?
  • What mechanism(s) determine when and where a tropical depression will form, and how far in advance can we say for sure?

Now, when a seasonal prediction is made, elements of the above questions come into play: one, two, three, or six months ahead of the season, how many storms will form, how strong will they be, and what is the probability that any of them will affect land (particularly the United States)? This requires knowledge of oceanic conditions halfway around the world, precipitation patterns over Africa, and a host of other considerations.

Nevertheless, do not lose heart. Long-range weather prediction is a booming enterprise, with energy, insurance, and governmental agencies investing considerable resources in this colossal effort. The house always wins.