more bender on Emanuel

Here is the (big) change in correlation as the Emanuel data are progressively smoothed, 0x, 1x, 2x. (Here I exclude the endpoints, which is what one ought to do.)

cor(SST,PDI)
[1] 0.5050575
cor(SST.1,PDI.1)
[1] 0.6278249
cor(SST.2,PDI.2)
[1] 0.7484223

Square those to obtain r^2.
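For anyone who wants to reproduce the effect without Emanuel's data, here is a rough sketch of the mechanism (in Python rather than R, on synthetic series; the decadal component, noise levels, and length are invented for illustration). Two series sharing only a slow component, each with its own independent high-frequency noise, show rising correlation under repeated passes of a 1-2-1 filter:

```python
import numpy as np

def smooth121(x):
    # one pass of a 1-2-1 filter; the smoothed series drops its endpoints
    return 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]

rng = np.random.default_rng(0)
n = 400
t = np.arange(n)
decadal = np.sin(2 * np.pi * t / 10.0)     # shared low-frequency component
sst = decadal + rng.normal(0.0, 1.0, n)    # plus independent high-frequency noise
pdi = decadal + rng.normal(0.0, 1.0, n)

cors = []
for k in range(3):                         # 0x, 1x, 2x smoothing
    cors.append(np.corrcoef(sst, pdi)[0, 1])
    sst, pdi = smooth121(sst), smooth121(pdi)
print([round(c, 3) for c in cors])
```

The exact numbers depend on the seed and the signal-to-noise ratio; the direction of the effect does not.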

Here are two graphs of Emanuel’s SST and PDI, showing the effect of 1x and 2x smoothing on the PACF and spectra for SST and PDI respectively.

1. TCO

Is this a general mathematical phenomenon with data sets of a certain characteristic? Or just a fortuitous result?

2. bender

Your question as posed is not answerable. Is what a general phenomenon? Is what a fortuitous result?

In any interaction between methods and data, the effect of a method will often depend on the nature of the data. The amplification of a cycle in a series that has no cycle is a fairly common sort of effect, which is why I spotted it instantly when I first saw the Emanuel paper. It’s how I know to cite Slutzky (1937) and Yule (1926). It’ll happen with any white noise, but if there is red noise it will happen more readily.
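The Slutzky–Yule effect is easy to reproduce. A minimal sketch (Python, pure white noise, nothing from the Emanuel data): ten passes of a 1-2-1 filter leave essentially nothing but low-frequency "cycles" in a series that had none to begin with.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2048)          # pure white noise: no cycle in it anywhere

def smooth121(v):
    # one pass of a 1-2-1 filter, endpoints dropped
    return 0.25 * v[:-2] + 0.5 * v[1:-1] + 0.25 * v[2:]

y = x.copy()
for _ in range(10):                # ten passes
    y = smooth121(y)

def lowfreq_share(v):
    # fraction of variance at frequencies below 0.1 (periods longer than 10 steps)
    p = np.abs(np.fft.rfft(v - v.mean())) ** 2
    f = np.fft.rfftfreq(v.size)
    return p[f < 0.1].sum() / p.sum()

share_raw, share_smooth = lowfreq_share(x), lowfreq_share(y)
print(round(share_raw, 2), round(share_smooth, 2))
```

In the raw white noise the sub-0.1 band holds its fair share of the variance (about a fifth); after ten passes it holds nearly all of it.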

Good enough?

3. TCO

It’s a start. If I don’t ask a question precisely, you can still respond by restating it as a more appropriately phrased question and offering whatever insights are relevant in the general area.

Here’s a better one. Please explain to me the characteristics of a data set (or just an example of one) that does not have r increase when one smooths.

4. bender

1. Hey, it’s a start. You’re to be commended.
2. There is no dataset with this property, because if you design the smoothing filter just so, you’ll get a cycle.

Your question is quite a large one that I can’t answer: Describe all combinations of (i) datasets and (ii) filters, and tell me which combinations will/will not cause a cycle where none exists in the unsmoothed data. Oy!

5. Pat Frank

Bender, if you want to publish this stuff in a journal, be careful about putting your figures on the web. An editor may reject your work merely because it’s been publicly exposed here. Journal editors are jealous of their exclusivity. I hate to say it, but pre-exposure on the web may even provide an excuse for rejection in some quarters. You and Willis need to get your analyses into peer-reviewed print in order for it to have a real impact. There comes a time when you and Willis (and Steve and Judith?) should communicate your results in private. I suspect that time is now. I’d really hate to see your good work brushed off on a pretext.

6. TCO

Oh…bla bla bla. They can replot the figures, and they need to clean the whole thing up and there are lots of good journals. Sheesh.

7. bender

Pat has a point – but I’m not worried for the reasons TCO cites. None of the published graphics will resemble what’s been posted here. In content (actual data points), yes. In form, not at all. That is probably enough to make sure there are no copyright conflicts. Besides, Steve M is the owner of everything at this blog. If he’s an author, I don’t see a problem. If it is necessary to delete material, he can do that. We just replace all the bender thread graphics with a link to the finished product.

8. TCO

You can write the paper up without him having ownership. It will change as you get work done, may morph into more than one paper, etc. Plus even if you had posted the whole thing here, he would not own it. Plus, it would be inappropriate to put his name on the paper unless he had a real contribution, regardless of blog ownership.

9. bender

Thanks for the lecture on authorship, TCO. In case you don’t recall, Steve M produced the plot of model residuals and pointed out the possibility that the variables were not an AR process, but an ARMA process. And you’ll also recall we lack somebody willing to be the lead writer. So if he wants it, there’s plenty of room for him.

10. TCO

If he made a significant contribution, then he NEEDS to be included. But not because of some issue with the blog. If you posted this on Drudge, would you give Drudge a co-authorship?

yw on lecture. Like my mom says, if you can’t be good be careful. and accidents cause people.

11. bender

[Swat]

12. TCO

feels good on my fanny…

13. bender

TCO, I think I saw some NASCAR on TV …

14. Pat Frank

TCO, it may shock you to know that journal editors (in my experience, anyway) don’t share your cavalier attitude about pre-publication. Differently plotted analyses or not, they may object. It’s not just copyright with respect to figures, it’s the analyses and the data themselves. Editors don’t want their journal upstaged by claims of earlier priority.

If that weren’t the case, I could just re-publish my first paper over and over again, with just a new graphical presentation. Of course, some people actually do that.

Here’s another aphorism to put alongside TCO’s, bender: Err on the side of caution. (please)

15. bender

Re #14.
Hmmm, you could be right about caution. You never know what kind of Editor you’re going to get. And this is outside my field. What I’ll do is pretend I’m listening to TCO, but actually take your advice to heart. (TCO can’t hear us over the roar of the race cars.)

16. bender

Re #1
A second reply to your question, TCO. Not sure if you know how to read periodograms, but look at the decadal peak at frequency ~ 0.1 in the graphs with the y-axis labelled “spectrum”. See how that peak is present in the unsmoothed data (moreso in SST than PDI)? That is what Emanuel’s multi-stage smoothing is doing – exaggerating that decadal peak by removing the higher frequency variability. See how the right portions of those curves drop down as you increase the smoothing? That’s why Emanuel’s curves “wiggle-match” pretty well. Except they’re decadal cycles, not high-frequency noisy wiggles.

To generalize: any dataset with a mid-range spectral peak like that can have that peak emphasized if you smooth repetitively (assuming the filter has certain properties, which Emanuel’s does).

Whether you think this is ok or not depends on what your interpretation is of that peak in the raw data. Is it noise? Is it a cycle? Is it interpretable as ENSO or NAO? If it has an interpretation in the raw data, then it has the same interpretation in the smoothed data.
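To put a number on "exaggerating that decadal peak": the amplitude response of a 1-2-1 filter works out to cos^2(pi*f), so a decadal (f ~ 0.1) component passes almost untouched while the high-frequency variability is gutted. A short sketch (Python; the sample frequencies are just illustrative):

```python
import numpy as np

f = np.array([0.05, 0.10, 0.20, 0.30, 0.40])   # frequency in cycles per time step
gain = np.cos(np.pi * f) ** 2                  # amplitude response of one 1-2-1 pass
for passes in (1, 2, 3):
    print(passes, np.round(gain ** passes, 3))
```

After two passes the decadal component retains over 80% of its amplitude while a 2.5-year (f = 0.4) component is down to under 1%.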

Good?

17. John S

WRT earlier publication.

It is standard practice in economics that work is produced in working paper form first and then submitted to a journal for publication. It may differ in other fields – but an editor stating that a paper won’t be published because the results have already been disseminated earlier would create fundamental problems for any discipline that circulates work for comment and discussion in working paper form.

An editor refusing publication because the results have already been disseminated is generally just looking for an excuse.

18. TAC

#16 I am not sure I understand the implications. The correlations (as defined by R or R^2) of the smoothed time series are presumably “valid” — at least in the sense that they are computed correctly and are therefore reproducible. However, it would seem that, because of the complex time series structure, it would be difficult — maybe impossible — to attach a statistical significance to the correlations.

Considering the physical processes for a moment, the observed correlations would be consistent with either coincidental (random) low-frequency noise in both series, or a common causal factor with low-frequency structure, or one variable forcing the other (again with low-frequency variability). Given the short datasets, there isn’t much evidence to reject any of the alternative hypotheses.

Is that the main point here?

19. bender

Re #1 & further to #16:
This graphic illustrates the crux of the matter:

There is trend, decadal and ENSO-scale coherence between the two unsmoothed series SST & PDI. Emanuel’s repetitive filtering is accentuating the underlying decadal coherence because of his choice of smoothing filter. Were the underlying decadal coherence not there, the SST/PDI correlation would not increase with smoothing.

20. bender

Re #17
I think this actually does happen. A paper does have to be “novel” to be acceptable. Look at the review criteria for any major journal.

21. bender

Re #18
It’s good that you don’t understand the implications, because we’re not finished yet! The question is: given that PDI is related to SST (and that this relationship is sensitive to smoothing effects, endpoint anchoring effects, and time-frame effects), how much of this is attributable to a shared trend, and how much to a shared cycle?

The second question is: what happens if you alter the spatial and temporal frame over which SSTs and PDIs are computed? i.e. Were these bounds cherry-picked?

22. TAC

Re #21

One other thought: Because of the temporal correlation, the effective sample size for the smoothed series may be very small — possibly on the order of 5 or less (the effect of smoothing is to eliminate the high-frequency signal, which reduces the effective sample size if low-frequency noise is present; note that smoothing does not always lead to a loss of information -- e.g. the mean of iid Gaussian N[mu,1] data contains all the information in the sample). If the smoothing is having this effect, then it is not at all surprising that one observes increasing R^2. In the limit as effective sample size approaches 2, fit becomes perfect and R^2 approaches unity.
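TAC's point can be illustrated numerically. A hedged sketch (Python, two INDEPENDENT white-noise series, so the true correlation is zero; lengths and pass counts invented): as the number of smoothing passes grows, the spread of the sample correlation widens, i.e. large |r| values become easy to obtain by pure chance.

```python
import numpy as np

rng = np.random.default_rng(3)

def smooth121(x, passes):
    # repeated 1-2-1 passes, endpoints dropped each time
    for _ in range(passes):
        x = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    return x

def r_spread(passes, n=55, reps=400):
    # spread of sample r between two INDEPENDENT series (true correlation = 0)
    rs = [np.corrcoef(smooth121(rng.normal(size=n), passes),
                      smooth121(rng.normal(size=n), passes))[0, 1]
          for _ in range(reps)]
    return float(np.std(rs))

spreads = [r_spread(p) for p in (0, 2, 8)]
print([round(s, 2) for s in spreads])
```

With no smoothing the spread is about 1/sqrt(n); with eight passes it is several times larger, which is the effective-sample-size shrinkage in action.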

23. bender

Re #22
That observation was made already here. Seems we’re all on the same wavelength on this.

24. bender

At this point it’s probably worth mentioning: Kerry Emanuel is obviously a smart guy. None of this will surprise him much. We’re not trying to refute his paper, because I don’t think that’s possible (unless his time-space frames were cherry-picked). What we’re doing is trying to get a robust estimate of the SST effect on PDI, as opposed to a fragile, exaggerated estimate. Did PDI increase by 100 ± 10% over the last few decades? Or did it increase by only 50 ± 20%? The final answer won’t overturn his hypothesis test. But it will probably downgrade his risk estimate.

This is not about attacking people. It’s about getting precise estimates of GW effects. And it’s about showing the tremendous value of an open, collaborative auditing function.

25. Pat Frank

Bender, what you’re all doing here is rather an open critical review of Emanuel’s work; critical meaning knowledgeable evaluation, not opinionated criticism. That’s well and good. But if you get to where you’re doing original analyses, it’s time to become discreet.

John S is right in #17 that manuscripts can be circulated among colleagues prior to publication. This happens regularly in physics and math. But the circulation is generally only to peers, is limited, is confidential, and is in the nature of a review of the work. It’s not at all like the full exposure here to public scrutiny, to reporters, and to potentially widespread ad libitum dissemination.

If you get into original work, it would save you from the real possibility of trouble if you keep it restricted to your collaborators until journal acceptance; with the exception of circulating it among your own peers for their confidential review prior to submission. The work going on here is far too important to be squelched by a tactical error. Just my own opinion.

26. KevinUK

#5 and #25 Pat,

I disagree. Although copyright could well be used as an excuse by some editors for not publishing, it will be exactly that, i.e. an excuse from an editor who does not want to publish the paper anyway because it doesn’t fit in with their (political) agenda.

Please remember just what this is all about. There are people out there (many of them in control of the IPCC) who want us to change the way we live. They are attempting to come up with evidence for ‘signals’ that in their opinion prove that global warming is down to our (uncontrolled) use of fossil fuels. They are using this ‘proof’ to justify the introduction of controls on how much fossil fuels we consume. They are IMO deliberately taking advantage of the MSM propensity for alarmism and sensationalism to get their propaganda into the media and so engrain their opinions into the minds of the general public.

For the sake of what I hope everyone agrees are our cherished civil liberties, we must all stand up to this onslaught. It is clear that the MSM want the alarmism to continue as this increases either their viewing figures or sales of their newspapers so there is little chance that any refutations of these claimed signals will be heard. So without equal access to the MSM we are therefore left with limited ways of getting across an alternative message – the message that the science that underpins the claims for AGW is weak at best and arguably (as in the case of the HS) contrived. If blogs/forums are one of only a few means available by which this message can be put across then so be it. It staggers me just how much information I have been able to find out thanks to even just this blog let alone many others. Thanks to the internet those of us who choose to research the alarmist claims we are exposed to in the MSM can find out what lies behind the claims. Sadly in most cases there is little if any real science (particularly when it involves epidemiology).

KevinUK

27. bender

Re #25
Pat, your advice is well-taken. I think there is a fine line we ought not cross. And you’re right, we’re getting very close to that point. CA readers need not worry. We will not drop the ball. We will ensure free access to the final product. Last post on this issue.

28. TCO

Woooosh!

29. TCO

Can you go into more detail of series which do not increase in correlation with smoothing, which decrease? Give an example?

30. David Smith

The Mann/Emanuel paper (EOS, June 13, ’06) is an important paper, perhaps moreso than the Emanuel 2005 paper.

1. It uses a statistical argument to remove the AMO from the historical record, which leaves global warming, rather than a major natural cycle, as the cause of the increase in hurricanes since 1995. Sort of like purging the LIA and MWP from the record.

2. If it is true, it is a stunning development which enhances Mann’s credibility in tropical climatology and his credibility as a master statistician.

Mann’s statistical arguments are beyond me, so I have no way of judging the paper. As I’ve mentioned, though, their choice of a “decadal” smoothing technique for SST (different from that of the Emanuel 2005 paper), and the use of pre-1900 Atlantic storm count, trigger my “hmmm” alarm.

31. bender

Re #29
That’s why I posted the graph in #19. It was for you. It contains the answer.

32. TCO

It’s kinda buried, so how about holding my hand and spelling it out exactly. Let’s start with, when you say “would not increase” do you mean, “would decrease”, “would stay exactly the same” or “would increase a very small amount”?

33. bender

Re #30
Has a link been posted to that paper? Or just links to press releases surrounding it? Do you have a pdf? Or a full citation I can look up? Looks interesting. Based on your description it sounds like what we just proposed doing with John Creighton’s orthogonal filtering scheme. Maybe we should finish our analysis and *then* read Mann & Emanuel. So that our methods are completely independent of theirs.

34. bender

Re #32
We are now at that point we always seem to get to, TCO, where you start trying to put words in my mouth, I choke, and the horrible force-feeding experiment goes on for days. I have learned to avoid that stage, as the audience does not enjoy it. The conversation on this topic now has amazingly diminishing returns for me. I mean exactly what I said. If you don’t understand it, you’ll need to get yourself an interpreter.

35. David Smith

Re #33 here you go!

http://www.discover.com/images/issues/aug-06/eoshurricanes.pdf#search=%22mann%20emanuel%20kerry%22

36. TCO

Why don’t you directly answer the questions instead of posting a frequency versus correlation graph?

37. Ken Fritsch

It uses a statistical argument to remove the AMO from the historical record, which leaves global warming, rather than a major natural cycle, as the cause of the increase in hurricanes since 1995. Sort of like purging the LIA and MWP from the record.

This paper was referenced before at this blog, and before reading it I guessed at a simplified method for how the AMO might be factored out of global temperatures; I believe it was Steve B who replied in general terms that that was how it was done: subtract the global mean temperatures from the SST temperatures in the AMO local area. To this layman’s eyes it appears that is what Emanuel and Mann have done, and then they use the residual to determine whether a statistically significant tropical storm correlation exists to that residual. I haven’t thought through the implications of this process, but on first glance it appears to me to be an oversimplified attempt given the wide variations of local temperatures and even the magnitude of the RMS error that Mann and Emanuel list in their supplement.

You mentioned the LIA and MWP, which Mann and company tell us were localized events. Local areas evidently can have significantly different temperatures (and climates) than the global mean, and for lengthy periods of time, implying in turn that those local areas are relatively unaffected by the global temperature. That seems the opposite of what they are saying for the AMO.

From my reading of their papers and background material, Mann and Emanuel do not explain why they used the Butterworth filter on these time series. Maybe it is such a standard practice in climate work that it is self-explanatory. Did not Mann earlier write a paper on the effects of the AMO? Doesn’t the paper under discussion imply that SST is the overwhelming variable in determining the effects of the AMO and tropical storms in general?

38. bender

TCO, if you have two white noise series, and they correlate with each other, then how on earth is smoothing out that white noise, which is the basis for the correlation, going to increase the correlation? It will inevitably decrease it. Think! The other scenarios you will inevitably come up with will require a similar style of thinking.

You’re like Bart Simpson in Sunday school: “What if a guy is half man/half robot – does all of him still get to go to heaven?”

I’ve got to go. Don’t blame my absence today on a lack of willingness to entertain you.

39. TCO

hmmm. Mebbe so. Can you illustrate this?

40. David Smith

I was doing more hurricane reading on the internet this morning but hit this…Darn.

41. TAC

#38 bender, I agree with your comment that

if you have two white noise series, and they correlate with each other, then [smoothing will not] increase the correlation…

However, it is interesting to consider the case where at least one of the series is not white noise. For example, if you let X be independent standard normals, and define Y=X+alpha*B(X) (where B is the Backup operator), then the correlation between X and Y is 1/sqrt(1+alpha^2).
Now consider X’ and Y’, the two-element moving average “smooths” of X and Y, whose correlation is cor(X’,Y’)=(2+alpha)/(2*sqrt(alpha^2+alpha+1)). Although cor(X,Y) is reasonably well behaved and positive for all values of alpha, cor(X’,Y’) is more complex. It can be greater than Cor(X,Y), less than Cor(X,Y), or even negative, depending on whether alpha is greater than zero, less than zero, or less than -2.
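TAC's two formulas are easy to check by simulation. A sketch (Python; alpha = -0.8 is chosen arbitrarily, to show a case where the correlation *drops* under smoothing):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = -0.8                    # arbitrary; negative values show the correlation dropping
n = 200_000
x = rng.normal(size=n)
y = x[1:] + alpha * x[:-1]      # Y_t = X_t + alpha * X_{t-1}  (backshift of X)
x = x[1:]

# theoretical values from the formulas above
cor_raw_theory = 1 / np.sqrt(1 + alpha**2)
cor_sm_theory = (2 + alpha) / (2 * np.sqrt(alpha**2 + alpha + 1))

xs = 0.5 * (x[1:] + x[:-1])     # two-element moving-average "smooths"
ys = 0.5 * (y[1:] + y[:-1])

cor_raw = np.corrcoef(x, y)[0, 1]
cor_sm = np.corrcoef(xs, ys)[0, 1]
print(round(cor_raw, 3), round(cor_raw_theory, 3))
print(round(cor_sm, 3), round(cor_sm_theory, 3))
```

The Monte Carlo estimates land on the analytic values to a couple of decimal places; try alpha > 0 to see the smoothed correlation exceed the raw one instead.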

The point that interests me here is that applying a “smooth” in the presence of serial correlation can substantially alter — sometimes in surprising ways — computed cross correlations (or regression results, or anything analogous). One has to be extremely careful about conducting statistical analyses on smoothed data.

There is a lot more to say. IMHO the relevant question is what happens in the presence of Long Memory/LTP (Koutsoyiannis has made some persuasive arguments that LTP provides a far superior model than iid or AR for Mother Nature’s behavior). That’s where things are likely to fall apart completely [future research?].

42. TAC

#41 Errata: “Cor” and “cor” both refer to the cross correlation between two time series; “backup operator” should be “backshift operator”.

Also, anyone interested in plots of the correlations can generate them with the following R commands:

alpha <- seq(-3, 3, by = 0.01)
plot(alpha, 1/sqrt(1 + alpha^2), type = "l", ylim = c(-1, 1), ylab = "correlation")
lines(alpha, (2 + alpha)/(2*sqrt(alpha^2 + alpha + 1)), lty = 2)

43. Steve Bloom

Re #24: bender, you seem to keep wanting to believe that Emanuel asserted something beyond a vague correlation between SST and PDI. He didn’t, and one of a number of reasons why is this sort of thing. Recall from the first half of this year’s season the fate of all of those “promising” tropical waves that were throttled in their cradles. That didn’t happen last year, which points out the further confounding factor of the wind needed to transport the dust to the region where it can do in the hurricanes. I suppose it’s possible that this dust is the only thing that’s stood between us and a series of 2005-like seasons driven by increased SSTs.

44. Douglas Hoyt

Bloom,
That dust has been coming off of Africa for centuries and is not new. See, for example, MacDonald’s Atlas of the Oceans, published in 1938, that clearly shows the Saharan dust cloud. Samples of ocean sediment in the Caribbean also show Saharan dust going back centuries and longer.

45. Steve Bloom

Re #44: I would expect that to have been the case ever since the Sahara dried out circa 5,000 years ago. The point is to identify the pattern of dust production and try to figure out the effect on Atlantic TCs. We can tell that it’s an important factor due to its obvious effect on individual storms.

46. Steve McIntyre

Landsea picked up the fixed end point issue in his comment as follows:

My first concern is that Emanuel’s figures2 do not match their description: his Figs 1–3 aim to present smoothed power-dissipation index (PDI) time series with two passes of a 1-2-1 filter, but the end-points, which are crucial to his conclusions, instead retain data unaltered by the smoothing;

Landsea correctly points out that in applying a smoothing to the time series, I neglected to drop the end-points of the series, so that these end-points remain unsmoothed. This has the effect of exaggerating the recent upswing in Atlantic activity. However, by chance it had little effect on the western Pacific time series, which entails about three times as many events. As it happens, including the 2004 and 2005 Atlantic storms and correctly dropping the end-points restores much of the recent upswing evident in my original Fig. 1 and leaves the western Pacific series, correctly truncated to 2003, virtually unchanged. Moreover, this error has comparatively little effect on the high correlation between PDI and SST that I reported1.
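The mechanics of the endpoint error Landsea describes are simple to see. A sketch (Python, on an invented random-walk series with a deliberately exaggerated final value, not Emanuel's data): "pinning" leaves the raw, unsmoothed final value sitting at the end of the smoothed curve.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(size=60))   # invented red-noise series
x[-1] += 5.0                         # give it an exaggerated final value

def pass121_pinned(v):
    # one 1-2-1 pass with the endpoints retained unaltered ("pinned")
    s = v.copy()
    s[1:-1] = 0.25 * v[:-2] + 0.5 * v[1:-1] + 0.25 * v[2:]
    return s

def pass121_dropped(v):
    # the same filter with the endpoints correctly dropped
    return 0.25 * v[:-2] + 0.5 * v[1:-1] + 0.25 * v[2:]

pinned = pass121_pinned(pass121_pinned(x))     # two passes, as in Emanuel's figures
dropped = pass121_dropped(pass121_dropped(x))
print(pinned[-1] == x[-1], dropped.size)       # pinned series still ends on the raw value
```

However many passes you make, the pinned version ends exactly on the unsmoothed final data point, so a spiky final year dominates the end of the "smoothed" curve.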

47. David Smith

Another interesting paper on “ACE” (which essentially is “PDI”) and SST worldwide. Easy reading.

48. Gerald Machnee

Re #43 – This year’s excuse is the dust. Last year it was nothing but Global Warming. My understanding is that NASA is doing a study, but some insist on jumping to conclusions. Sounds like (A)Global Warming is a lot weaker than many thought.

49. Steve Bloom

Re #48: Have you been following the news lately, Gerald?

50. Willis Eschenbach

Re 43, Steve Bloom, you say:

Re #24: bender, you seem to keep wanting to believe that Emanuel asserted something beyond a vague correlation between SST and PDI. He didn’t …

Quotes from the Emanuel study:

Here I define an index of the potential destructiveness of hurricanes based on the total dissipation of power, integrated over the lifetime of the cyclone, and show that this index has increased markedly since the mid-1970s.

… increased markedly …

I find that the record of net hurricane power dissipation is highly correlated with tropical sea surface temperature …

… not the “vague correlation” you claim, but “highly correlated” …

My results suggest that future warming may lead to an upward trend in tropical cyclone destructive potential, and, taking into account an increasing coastal population, a substantial increase in hurricane-related losses in the twenty-first century.

… upward trend in tropical cyclone destructive potential … substantial increase in losses …

There is an obvious strong relationship between the two time series (r^2 = 0.65), suggesting that tropical SST exerts a strong control on the power dissipation index.

… obvious strong relationship … strong control … (note that the real r^2 is only 0.23 or so)

But the large [PDI] upswing in the last decade is unprecedented, and probably reflects the effect of global warming.

… unprecedented …

Did you actually read the paper?

w.

51. bender

Don’t forget the claim that started this all – the caption of Figure 1 – and source of the title:

“total Atlantic hurricane power dissipation has more than doubled in the past 30 yr.”

Bloom, the reason I am looking at this paper is not because I have some beef with Emanuel. I don’t. It’s because David Stockwell asked us to do so, in a post on Sep 22, 11:35 AM.

52. Willis Eschenbach

Whoa, stop the presses! I just found an outrageous site that makes maps of any correlations you might think of, and then some.

Here’s a sample that’s relevant to the current discussion. It’s a plot of the Atlantic Multidecadal Oscillation vs SST, for August to October.

Note the red spot in the hurricane main development region …

Go try it, lots to learn.

w.

53. bender

Hmmm … curious Australia-Antarctica teleconnection … would need some advanced fingerprinting tools to verify … but …

Just kidding! It’s a plot of SST vs. random noise!!

Cool tool.

54. Gerald Machnee

Re #49 – Yes I have, but that is not the point. Last year at RC, it was nothing but AGW. This year it is excuses.

55. bender

Re #52 There is a tool there for plotting the time-series as well. (Willis probably knew that already, but others maybe not.)

56. David Smith

Emanuel’s Figure 3 uses an annual average (instead of A-S-O), taken from 30S to 30N (instead of sticking to the Northern Hemisphere where the storms are located).

I’ll see if I can get the site mentioned by Willis in #52 to generate SST for a proper region (A-S-O, inside Emanuel’s combined boxes) versus the one he chose (annual average, over 30N to 30S). He chose that odd combo for a reason.

57. David Smith

Re: Emanuel’s Figure 3

If you’d like to see what an alternate global SST versus global PDI (shown as “ACE”, which correlates well with PDI) looks like, check Klotzbach’s Figure 3 in the paper linked above in #47.

58. bender

Re #56
Yes it is possible to do what you are asking from that site. You just need to switch pages, to a page devoted to time-series. I was going to post an example of this, but thought I might be accused of wasting bandwidth.

59. Ken Fritsch

re: #47

Another interesting paper on “ACE” (which essentially is “PDI”) and SST worldwide. Easy reading.

Thanks for the link David S.

No complicated filtering, no declarations of clearly robust results and methods, a global approach to reduce any inadvertent cherry picking and data snooping and a plain vanilla statistical approach tells me that the author was simply not sufficiently motivated or prepared to diligently look for a trend and correlation that surely must exist.

How about a review/critique by Steve B of this paper? Certainly some comments by Judith Curry would be helpful.

60. bender

Re #59

From Klotzbach’s conclusion:

These findings are contradictory to the conclusions drawn by Emanuel [2005] and Webster et al. [2005]. They do not support the argument that global TC frequency, intensity and longevity have undergone increases in recent years. Utilizing global “best track” data, there has been no significant increasing trend in ACE and only a small increase (~10%) in Category 4–5 hurricanes over the past twenty years, despite an increase in the trend of warming sea surface temperatures during this time period.

1. The only problem with his line of argument is the old adage "absence of evidence is not evidence of absence". Given the contradictory results, someone is going to have to figure out the source of the difference. Is it ACE vs PDI? Smoothing vs. not smoothing? Pinning vs. not pinning? Framing? There are many possibilities. I would welcome Bloom’s analysis.

2. TCO won’t like this paper because there’s an "L" in GRL.

61. David Smith

I took a look at SST trends for the warmest parts of the Atlantic basin (Western Caribbean, Gulf of Mexico, Bahamas). It’s a large, important area for Atlantic storms.

The SST has basically been flat for many years, despite the rising PDI for the overall Atlantic.

I think I see why that was excluded from Emanuel’s box.

62. Willis Eschenbach

Well, here’s a fascinating plot from the site I mentioned above …

The interesting part is that the strongest correlation is not to the MDR in the Atlantic … it’s the temperature in the Pacific.

Go figure …

w.

63. Willis Eschenbach

YES! After much experimentation, I was finally able to use the site to do a chart of my own data. I used my calculation of the PDI 1948-2005, and plotted it against SST. Here are the results for August-November:

Again there is the high correlation with the Pacific equatorial SST. Here’s the individual monthly figures.

64. Willis Eschenbach

Hmmm … maybe it didn’t like the pictures one after the other … here’s the other two again …

and

w.

65. Steve McIntyre

Those are rather pretty.

66. Willis Eschenbach

Great … now all the pictures show up. Maybe Steve or John A. can clear up the confusion. In any case, the fascinating part to me is that, although there is some correlation between the Atlantic SST and the Atlantic hurricanes, the stronger correlation is with the Pacific SST. Here’s the full year …

Now, the amazing part of this to me, in addition to the correlation with the Pacific, is the correlation with the SST off of Spain … what’s up with that?

w.

67. bender

Re #66
To diagnose “what’s up with that” it would be worth taking a look at the time series that produce the correlation (Spain SST vs. Atlantic PDI). My experience is that when a correlation far away is unexpectedly high there’s a shared trend or some nonstationary aspect to the time-series, which you expect quite a bit with red noise processes. Which is why attribution is such a serious issue. Correlation ain’t causation. That’s why the term “nonsense-correlation” is so useful, despite what TCO thinks of it. (Hey, it was good enough for Yule.)

68. David Smith

Willis, the correlated Pacific equatorial region you show is the famous El Nino / La Nina region

69. bender

When you scan the entire globe looking for a place where series X is correlated with your target series Y, you are going to get some spurious correlations; there’s no question of that. For that reason I would advise doing a spectrum coherency plot as I did in #19. This will tell you to what extent your high correlation is due to low-, mid-, or high-frequency coherence.
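The size of this screening effect is worth appreciating. A sketch (Python, 500 synthetic red-noise "grid cells", all generated independently of the target; AR coefficient and lengths are invented): the best correlation found by scanning looks impressive even though every true correlation is zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 57                               # roughly the length of an annual record

def red_noise(n, phi=0.7):
    # AR(1) series with lag-1 autocorrelation phi
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

target = red_noise(n)                            # stand-in for a PDI-like series
cells = [red_noise(n) for _ in range(500)]       # 500 independent "grid cells"
rs = np.array([np.corrcoef(target, c)[0, 1] for c in cells])
best = float(np.abs(rs).max())
print(round(best, 2))                            # the best |r| found by scanning
```

With red noise the spread of each individual r is already inflated, and taking the maximum over hundreds of cells inflates it further; a map of such correlations will always contain hot spots.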

Here’s a quick example. Suppose your two series are x and y. Here are the commands in R to do a frequency coherence plot:

xy

70. David Smith

Re #66

The region west of Spain is one that runs “cool” during the “many-big-hurricane” phase of the Atlantic multidecadal oscillation (that’s the phase we’ve been in since 1995).

I believe that classical meteorology says:

1. If the equatorial Pacific is warm, then El Nino is active and Atlantic hurricane activity is low.
2. If the equatorial Pacific is cool, then La Nina is active and Atlantic hurricane activity is high.
3. If the area off Spain is cool, then the AMO is in an active phase and Atlantic hurricane activity is high.

If you’ve established the above, you’ve demonstrated that your technique identifies two well-known weather correlations, and helps establish the existence of the AMO. Caution: demonstrating that the AMO truly exists will put you into direct conflict with Mike Mann.

71. bender

Ok, that didn’t work. Sigh. (The assignment operator in R is a less-than sign followed by a minus sign. Even when you separate them by a space, WordPress parses it as a tag.)

72. bender

Re #71
Let’s try again.

Suppose your two series are x and y. Here are the commands in R to do a frequency coherence plot:

xy ASSIGN cbind(as.ts(x), as.ts(y)) # create the time-series vector pairing
xy.spc ASSIGN spectrum(xy, spans=c(3), plot=F) # generate the spectrum, smooth it, don’t plot it, store it in xy.spc
plot(xy.spc, plot.type="coh") # plot the coherence spectrum

ASSIGN = less-than sign followed by minus sign, no space.
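For readers without R, the same diagnostic can be sketched in Python (numpy and scipy assumed; synthetic series, purely illustrative – not part of the thread’s R workflow):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)

# Two noisy series sharing a 20-step cycle (synthetic stand-ins for x and y)
shared = np.sin(2 * np.pi * t / 20)
x = shared + 0.3 * rng.standard_normal(n)
y = shared + 0.3 * rng.standard_normal(n)

# Welch-averaged coherence spectrum, analogous to plot.type="coh" in R
f, Cxy = coherence(x, y, fs=1.0, nperseg=128)
peak_freq = f[np.argmax(Cxy)]
# peak_freq should sit near the shared frequency, 1/20 = 0.05 cycles per step
```

The plot-free version above just reports where the coherence peaks; in a notebook you would plot `Cxy` against `f` to see which frequency band carries the shared variance.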

73. Steve McIntyre

bender, you can input the lt sign by using the tex operator
$<$

74. bender

Re #70
But David, the correlation between Atlantic PDI and Spain SST in #66 is dark orange = +0.55.

75. David Smith

Re #70 Correction – the cool region in the Atlantic is probably farther west than what your chart shows.

What your chart may be picking up is actually part of the circulation pattern associated with above-normal SST in the genesis region. When temps near the Spanish coast are high, so are temps in the genesis region. I’ll confirm this.

76. bender

There’s probably little chance Willis will be able to follow my fragmented directions on how to generate a coherency spectrum. [If you can manage it, Willis, I'm almost sure it would be worth the effort. Meanwhile I had better figure out this tex thing.]

77. Steve McIntyre

I see what’s happening – I did #73 from an Admin account and it gets chopped outside. Your ASSIGN lexicon is completely clear.

78. bender

Ok. From now on I’ll use “ASSIGN” in text posts, but the actual operator in scripts. Apologies to the readership.

79. David Smith

The region near Spain is expected to run warmer during a high-hurricane phase. (See the linked document, Figure 1, an old article by Dr. Bill Gray, which contains more info on ocean circulation than you probably want.)

Your correlated region south of South Africa may be a signal from the great thermohaline ocean conveyor (water sinks near Greenland then surfaces, cold, somewhere in the Southern oceans). When the thermohaline circulation is strong, the North Atlantic is warm from receiving the tropical water while the upwelling oceans (Southern) are cool. Hurricane activity in the Atlantic is high. Cool water upwelling near South Africa = strong thermohaline circulation= high hurricane activity. Something to ponder.

80. David Smith

Re #82

81. David Smith

Willis, one final comment (bedtime): I think you’re doing something quite valuable with your new plotting tool. Your method may be able to provide useful clues about ocean and coupled circulation patterns. You may want to demo this for Judith.

One day you might try plugging in the PDI or ACE from the west and east Pacific, and see what they show.

82. bender

David Smith,
Would you be willing to write a one-sentence/one-paragraph summary of the difference(s) between PDI (Emanuel’s ‘Power Dissipation Index’) and ACE (Klotzbach’s ‘Accumulated Cyclone Energy’)?

83. bender

Re #66
Given that SST correlations with PDI are higher in areas outside the hurricane box than inside, doesn’t this suggest that the major driver of season-long storm intensity is not local SST, but something else that is correlated with SST – say, atmospheric circulation (or, as David Smith suggests, thermohaline circulation)? i.e. Maybe warm SSTs do feed hurricane strength at highly localized space-time scales (hundreds of km, days), but once you integrate across longer time scales (90 days), the long-range effects of some other process start coming into play?

If this interpretation is correct, then David Smith #81 is right: Willis may be on to something. He may have found the cause behind Emanuel’s correlation – and it’s not a trend – it’s a cycle. Which is what #19 points at.

The usual caveat applies here (and everywhere). I’m not a climatologist, just a data analyst.

84. David Smith

RE #82

I’ll give it a try:

Accumulated Cyclone Energy (“ACE”) indicates the kinetic energy of a storm. It is the sum of the maximum windspeed squared, as measured every six hours.

Power Dissipation Index (“PDI”) indicates the power dissipated by a storm. It is the integral of the maximum windspeed cubed, approximated as a sum over the same six-hourly measurements.

Basically, they do the same thing, but PDI rises faster than ACE as windspeed rises, because ACE uses the square of the wind while PDI uses the cube.

The general practice has been to use ACE. PDI was created by Emanuel, I believe.
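To make the distinction concrete, here is a sketch in Python (numpy assumed) using a hypothetical six-hourly wind record; the units and scaling are illustrative, not the official definitions:

```python
import numpy as np

# Hypothetical 6-hourly maximum sustained winds (knots) for one storm;
# real indices use best-track data, and unit/scaling conventions vary.
vmax = np.array([35, 45, 65, 90, 110, 95, 70, 50], dtype=float)

ace = float(np.sum(vmax ** 2))   # ACE: sum of v_max^2 every 6 h (often scaled by 1e-4)
dt = 6 * 3600.0                  # 6 hours in seconds
pdi = float(np.sum(vmax ** 3) * dt)  # PDI: integral of v_max^3, approximated as a sum

# PDI weights strong storms more heavily: doubling every windspeed
# multiplies ACE by 4 but PDI by 8.
```

The doubling comparison in the last comment is the practical takeaway: the two indices track each other for ordinary seasons but diverge when the strongest storms strengthen.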

85. David Smith

The correlation between Atlantic PDI and cooling in the Southern Ocean, south of South Africa, is intriguing.

There is a debate among the experts as to where thermohaline upwelling occurs: nobody knows for sure. Willis’ graph may indicate that, in times of a strong North Atlantic thermohaline flow, the “excess” flow upwells south of Africa and in the Southern Indian Ocean.

Interesting.

86. Ken Fritsch

Willis, one final comment (bedtime): I think you’re doing something quite valuable with your new plotting tool. Your method may be able to provide useful clues about ocean and coupled circulation patterns. You may want to demo this for Judith.

Agreed. On the other hand, it could be a great tool for data snooping.

87. bender

Questions of the S. Atlantic and Pacific momentarily aside, focusing on the N. Atlantic: does it make sense that the correlation should be so high (+) in the East Atlantic and so neutral (0) in the West Atlantic? This is quite stunning, is it not? Assuming Willis’s mapping is correct, this seems very surprising, because the East Atlantic is where the storms are generated, not where they gain or lose power in the days prior to landfall. Right? Wouldn’t this suggest the hurricane climatologists are not assaying the process that they think they’re assaying? Maybe we need Dr. Curry’s interpretation.

88. Steve McIntyre

#63. It’s interesting that the SST offshore Guyana to the south of the hurricane tracks should be so correlated with PDI (Guyana itself doesn’t get hurricanes because of the wind patterns, I guess). Curiously, these SSTs presumably correlate with the Quelccaya ice core somehow – the precipitation comes from the Atlantic monsoon rather than the Pacific, and the connection to El Ninos would be indirect, i.e. through the relationship of the Atlantic monsoon to El Nino.

89. bender

Re #86

On the other hand it could be a great tool for data snooping

Exactly my point in #53. Spurious correlations are highly likely when you snoop the entire globe.

But back to the first hand … the correlations on Willis’s map are stronger than the neutral #53, and their spatial pattern IS consistent with a familiar circulation model. So I think the question in #87 is a good one. And the methods suggested in #72 would be useful for investigating the nature of these correlations. Is it (GW) trend or (TH) cycle?

90. bender

Re #88
I had noticed that too, but didn’t want to comment for fear of putting my ignorance on display. Throwing caution to the wind: maybe Guyana ocean temperatures are part of a larger pattern of circulation (such as the whole TH circulation David Smith is pointing to)? Regardless, the time-series properties need to be looked at – in multiple areas. This IS interesting. (But surely the Emanuels of the world have already looked at this? On the other hand, maybe in their search for the GW ‘fingerprint’ they’re not seeing all the other fingerprints?)

91. Steve McIntyre

The Guyana Current is an important current into the Caribbean – so maybe it’s got something to do with things.

92. David Smith

Re #87

The west Atlantic, north of the Bahamas, according to the old Bill Gray article (see his Figure 1, link given in #80), shows little temperature response to changes in thermohaline circulation strength. So, if the PDI changes are due to changes in the thermohaline strength, then a non-correlation with the western Atlantic is not surprising.

93. David Smith

By the way, I’m sure that thermohaline science has come a long way since Gray’s article, so there may be much better info available. I simply use it because I have it handy.

94. David Smith

Here’s a link to a diagram of the thermohaline conveyor belt.

I don’t know about the Guyana situation but do note that inclusion of that region (6N to 10N, hard to justify in my eyes) in Emanuel’s SST box helps his correlation.

95. bender

Inclusion of that region (6N to 10N, hard to justify in my eyes) in Emanuel’s SST box helps his correlation

It would be nice to see a map of the various boxes (the ones plotted in Excel a week or so ago) overlaid onto Willis’s global map to help put this statement in context. This would help us decide how a spatial framing sensitivity analysis ought to be structured to yield maximum impact & interpretability.

96. bender

Ok, I’ve scanned through Mann & Emanuel (2006) and I think that (1) their argument has been somewhat misunderstood or misrepresented, in part because (2) their argument (as usual for the HT) is quite tortuous. [This is partly because of their writing style, which tends to reduce complex propositions down to simple, digestible sound bites, but also because of the hair-splitting nature of the fundamental detection and attribution problem. If a warming trend is overlain by massive cycles and noise, then you have to realize that an AGW attribution discussion is going to focus on that sliver of the pie, ignoring the larger questions about the whole pie.]

M&E06 do not dismiss AMO. They say that “the SST variability underlying increased Atlantic tropical cyclone activity appears unrelated to the AMO.” This is a very tricky statement to decode. It means something very precise, but, like all mathematical propositions expressed in ambiguous English language, is highly vulnerable to misinterpretation. Misinterpretation by EITHER side of the debate.

Of course, there’s more than one statement in the paper, and so there’s more to be said … later.

For now I would suggest that Judith Curry’s skill in linguistics, semantics, & debate could be useful in coming to a deeper appreciation for what these various papers really mean. As far as I can tell the major difference between them is the degree of non-scientific alarmism they choose to inject into the analysis and/or portray in the discussion. (Which is a pity. I mean, how much more divisiveness can the nation tolerate? Is this the new normal?)

97. David Smith

RE #96

Agreed – clarity is needed.

What Mann/Emanuel are suggesting is that, while the AMO probably exists (in some fashion), the AMO plays no significant role in Atlantic hurricanes. That is a radical development. That leaves AGW as the primary culprit behind the upturn in storms that began in 1995.

Related to this, Willis’ chart suggests that variability in Atlantic hurricane activity is correlated with the strength of the thermohaline circulation, as well as with ENSO. The ENSO connection is widely accepted by all sides, but the thermohaline connection supports the natural-variability argument and would likely be pooh-poohed by those at RC.

98. Ken Fritsch

re: #96

Bender, your impression of the E&M paper was strikingly similar to the one I took away but could not confidently voice until someone more knowledgeable had more or less agreed.

E&M do not dismiss the existence of the AMO, but to me they are saying that SST drives tropical storms and that the AMO residual temperature, extracted from the global mean temperature, is insignificant in the driving function. They in turn say that anthropogenic forcing, rather than natural variability, is what is causing the increase in tropical storms. Did not Mann previously write several papers on the AMO – and presumably not to show that it did not exist?

I would swear that, when all is said and done, the effect of AMO temperatures on tropical storms was what they extracted as a residual between the global mean temperature (ok, maybe they detrended this) and the local temperature in the area of the AMO. They did some other statistical treatments, but essentially that is what I see them doing, without them saying so. Their count of tropical storms was correlated not with the AMO residuals but with the increase in mean global temperature (“anthropogenic forcing” was the term consistently used for global warming, and I think done so to show that they consider that theory proven and a given) as, I guess, it projected onto the tropical storm area of interest.

Time out. Bring in Judith Curry and/or David Smith for a review or maybe Willis E will give it a go.

99. Judith Curry

hi, I am on travel and having network difficulties. will get back to all this hopefully tomorrow, esp some comments on what Mann and Emanuel did.

100. bender

A quick note on chain-link reasoning.

Suppose the following:
A=>B, r^2 = 0.3
B=>C, r^2 = 0.3
C=>D, r^2 = 0.3

Does it make sense to claim in absolute terms that A=>D? This would be tendentious, to use a TCOism. And yet that is more or less what they are doing in this paper.

As a kid, when playing with plasticine, did you ever try making a chain by stringing together a sequence of links? If you did, then you know how strong those kinds of chains are. They’re fun to build precisely because they’re so fragile.
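The arithmetic behind the fragility: if each link carries r^2 = 0.3 and the links are joined by independent noise, the implied A-to-D relationship is at most r^2 = 0.3^3 ≈ 0.027. A quick Monte Carlo sketch in Python (numpy assumed; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
r = np.sqrt(0.3)  # each link explains r^2 = 0.3 of the variance

# Build the chain A -> B -> C -> D, each step adding independent noise
A = rng.standard_normal(n)
B = r * A + np.sqrt(1 - r**2) * rng.standard_normal(n)
C = r * B + np.sqrt(1 - r**2) * rng.standard_normal(n)
D = r * C + np.sqrt(1 - r**2) * rng.standard_normal(n)

r2_AD = np.corrcoef(A, D)[0, 1] ** 2
print(r2_AD)  # ≈ 0.3**3 = 0.027: the end-to-end chain explains almost nothing
```

Three respectable-looking links compound into an A-to-D relationship that explains under 3% of the variance – the plasticine chain in numbers.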

101. Judith Curry

Here is a new paper that was just published on the Dvorak technique to determine hurricane intensity. Now I have a better understanding of why the hurricane guys don’t believe the intensity data (well, at least when others draw inconvenient conclusions from the data), this looks like voodoo to me

There is another (submitted) paper being circulated on the tropical listserv; it would be inappropriate for me to say much about it here. But the main take-home message with regard to the topics we are discussing is that the time series of PDI in the NATL and WPAC seems to be dominated by variations in the number of storms and number of storm days, rather than intensity. This is significant since intensity is the only variable that we have a theory to link causally to greenhouse warming. PDI as a variable is linked physically only to the amount of work done on the ocean. As a damage index, damage seems to be proportional to powers of wind speed greater than **2, and maybe greater than **3 as well, but PDI as an integrated index for an entire ocean basin doesn’t seem useful for damage (number of landfalls is more relevant) – only for the work done on the ocean.

This raises the issue of what index of TC activity should actually be used. I think that looking at the number of storms, number of storm days, average intensity (wind speed per storm), and changes in the wind speed distribution, and then at one integral measure (ACE, proportional to v**2), is the best way to look at this. We have to be careful about what kinds of conclusions we draw from PDI.

102. David Smith

Judith, of all the many comments posted here, to me the most significant, by far, is Willis’ chart in #66.

I believe that what he did was correlate Atlantic PDI versus sea regions around the world.

I believe that what it shows is that PDI (presumably ACE, too) is negatively correlated with Pacific equatorial temperature. When La Nina is around, PDI is up. When El Nino is around, PDI is down. Not a surprise.

More interesting to me are the areas off Spain and southeast of South Africa. Those almost look like the fingerprints of a strong thermohaline circulation. Upwelling in the Southern Ocean, warm SST in the eastern Atlantic. Strong thermohaline correlation with PDI?

I’m very interested in your thoughts on this, when you have time.

Thanks

103. bender

Re #102, #66
Simplistic interpretation here. If the “purpose” of tropical cyclones is to dissipate heat, doesn’t it make sense that (all other things equal) E-W traveling Atlantic hurricanes will be most numerous/intense when ocean sources (& associated air masses) in the E. Atlantic are warm and ocean sinks/destinations (& associated air masses) in the Pacific are cool? i.e. The stronger the longitudinal heat gradient, the more urgent the need to dissipate it.

I have no sense for the scale over which these dissipative processes operate. Maybe it’s that large?

104. Judith Curry

re #102 David, the correlation of PDI (or NCAT45 or whatever) with SST is misleading. For example, ENSO’s influence generally gives rise to a negative correlation with SST, esp. in the NATL. The Hoyos et al. paper addressed this issue (reminder, it can be found at)
http://webster.eas.gatech.edu/onlinepapers.html
(second paper on this list)

It is shared information in the trend, rather than correlation on a seasonal basis, that is the important relationship. In my opinion, the Hoyos et al. paper is the most important paper to date on the hurricane/sst/global warming topic; I would be very interested in any comments from the climateauditors on this.

By the way, next tues afternoon, the topic of discussion for the Hurricane Seminar class at Georgia Tech will be the climateaudit posts on the statistical analysis of the emanuel and webster papers

105. Judith Curry

Re Mann and Emanuel. This paper addresses a key issue in trying to sort out whether the NATL TC activity (which has the longest data record) can be attributed to anthropogenic activities. The question is then whether Mann and Emanuel’s analysis is useful in addressing this issue.

For the sake of argument, let’s start by accepting the “consensus” opinion for the anthropogenic component of forcing of the global surface temp since 1870, including the (mostly NH) aerosol cooling from 1940-1970. Given this nonlinear anthropogenic forcing of the surface temperature, I agree that it doesn’t make sense to try to look at linear trends over this period. This is one reason why people are focusing on the trend since 1970: our physical/model arguments clearly indicate a rationale for expecting a (mostly linear) trend since 1970. (Starting the trends at 1970 isn’t cherry-picking the data; we are stating that we have no physical reason to be looking for a positive trend during 1940-1970 associated with AGW.)

Mann and Emanuel then make the valid point that studies that try to diagnose the AMO from detrending the NATL surface temperature data will pollute their AMO by the global forcing, which is not linear over that period. They then attempt to “detrend” the tropical SST data not by taking out a linear trend, but by removing the global (or at least hemispheric) forcing of the surface temperatures. Whether this was correctly done from a statistical point of view, I will leave it to bender/willis et al. to assess.

From this analysis M/E conclude that there is no evidence of the AMO in the tropical sea surface temperature. The paper did not say that the AMO does not exist, although Emanuel did say this in a number of public statements (I think that Mann and Emanuel disagree on this point; Mann does think there is an AMO, but that it is reflected mainly in the higher-latitude SSTs).
M/E further conclude that there is no evidence of the AMO in NATL TC activity, implicitly assuming that the only way for the AMO to influence TC activity is through SST.

So what is my opinion on all this? I think their general approach of trying to isolate the AMO by eliminating the global forcing component (rather than linear detrending) is appropriate, although I cannot judge their actual statistical analysis. I have trouble with both of their main conclusions, however.

I think Willis or Bender in a previous thread showed a strong mode of TC activity in the 10-20 year time frame, and I have seen this in my plots as well (in particular, something ca 20 years). In the 2 AMO cycles considered by M/E, the 20-year cycle seems to amplify the first AMO cycle and diminish the second. This was not considered by M/E, and I think it needs to be before we can state categorically that there is no impact of the AMO on the NATL tropical SST. My personal, rather ad hoc assessment is that the AMO contributes an amplitude of about 0.2C to the NATL tropical SST.

If for the sake of argument we accept M/E’s first conclusion re no AMO contribution to tropical NATL SST, it does not necessarily follow that there is therefore no influence of the AMO on NATL TC activity. The AMO may set up ocean temperature gradients or otherwise influence atmospheric dynamics, and hence influence TC activity in ways that are not related to the average tropical SST. Again, my own rather ad hoc analysis shows that the AMO does have an influence on the total number of NATL tropical cyclones, with an amplitude of 1-2 storms. The most important influence of the AMO may be on the tracks of the cyclones, with the U.S. landfalling TCs clearly showing the AMO influence (with relatively little in the way of a trend).

106. bender

the topic of discussion for the Hurricane Seminar class at Georgia Tech will be the climateaudit posts on the statistical analysis of the emanuel and webster papers

I’m not sure what is the most productive way to discuss a blogospheric work in progress, especially in a 1-2 hour seminar. It would be sort of like an unannounced site visit – walking into someone’s lab and criticizing the equipment, the layout, the students, their projects, the work ethic … without listening, and without looking down the road at what the lab is capable of, and likely to be, producing. I would hope that the seminar approach would not be the same as that of a discussion group that is used to focusing on published papers. Because you know in advance where that’s headed. CA works in progress are not published papers. Treating them as such would be silly and naive.

Recognize, student hurricanologists, that it’s not about hurricanes! It’s about a new free and democratic way of doing policy-oriented science. A way that is still evolving. Yes, the Open Audit model is a clear threat to ivory towers as we know them. That is the idea. Lots of people are going to find fault with it. Especially those with a vested interest in maintaining a monopoly on the scientific marketplace of ideas. If you love science, you love open audit.

If you are attending, Judith, let me know how it goes. Just don’t tell them bender is a Gator.

107. Ken Fritsch

Mann and Emanuel then make the valid point that studies that try to diagnose the AMO from detrending the NATL surface temperature data will pollute their AMO by the global forcing, which is not linear over that period. They then attempt to “detrend” the tropical SST data not by taking out a linear trend, but by removing the global (or at least hemispheric) forcing of the surface temperatures. Whether this was correctly done from a statistical point of view, I will leave it to bender/willis et al. to assess.

I am usually most confused just before an awakening, so I must be close to understanding what E&M did here. Did E&M look at a tropical (AMO-area) residual SST by attempting to extract a global temperature, or were they actually attempting to extract a temperature that could be independently attributed to an anthropogenic contribution? I thought they were using the terms global temperature anomaly and anthropogenically forced interchangeably.

In your view how did they detrend the data?

I can see where an average global temperature can be useful to the understanding of a changing climate, but does looking for a global temperature anomaly signal in a local temperature anomaly really have any meaning vis-à-vis understanding the local anomalies? How were the extended “local” areas involved in the LIA and MWP affected by global anomalies of their periods, or should those areas be considered more “isolated”? I may be missing something in this exercise, but it does seem a bit forced (no pun intended).

108. bender

Re #105
So much to comment on, so much more yet to read … life is unfairly time-limited …

I think Willis or Bender in a previous thread showed a strong mode of TC activity in the 10-20 year time frame, and I have seen this in my plots as well (in particular, something ca 20 years).

M&E06 Fig 1d shows this as well, only in this case using spectral analysis (which is mathematically related to autocorrelation analysis). 20y,10y,5y,3y – all four wavelengths are significant in their Fig 1d.

I showed that these are not only the hurricane spectral peaks, but that they are the temperature-hurricane cross-spectral peaks as well.

Attribution, like any taxonomic problem, is inherently problematic. You have a turbulent, fluid blend of greys and you want to pull out the blackest blacks and the whitest whites, and see what they’re correlated with, ignoring all that middle ground complexity. Taken too far, it’s not climate science anymore – it’s witch-hunting.

With hurricane-temperature cross-spectral coherence occurring at multiple frequencies, you have to ask about the wisdom of demarcation. Do we really want an answer to the question “Is GW increasing hurricane intensity?”? Or do we just want better hurricane policy? We know what the AGW alarmists want. Do we want their brand of science dictating hurricane policy?

Politics aside, there’s much to discuss about analytical methods …

109. Barclay E. MacDonald

“the topic of discussion for the Hurricane Seminar class at Georgia Tech will be the climateaudit posts on the statistical analysis of the emanuel and webster papers”

With Steve M.’s permission, I would look forward to their comments and observations, if they are so inclined.

110. bender

Re #109
I would think a discussion on the role of “open audit” in a brave new internet-based world of international free science might be in keeping with Climate Audit’s purpose. But I think the hurricane thing may have strayed a little far from Steve’s core interests. He’s been all too patient thus far.

111. David Smith

Shucks, no time to even read these today. I’ve got to finish work on several presentations to be given tomorrow. See you all later.

112. TCO

I think a good group can rip a paper to shreds better than they can an in-process discussion. That said, watching the ripping being done here may make the class participate more critically themselves. I would be very interested in their comments on the technical issues themselves and not purely on process. I think it’s hard to comment thoughtfully on the alternate process of blogospheric work unless you have done the hard-core publishing yourself (for comparison).

113. Steve Bloom

Re #106: Sauce for the goose, bender.

Regarding your “Open Audit” idea, I would point out to you that many scientists may not be thrilled at the idea of having to spend a lot of time refuting arguments from non-scientists or even from scientists in different fields. To the extent that the idea may appeal to some, they may feel that any benefits to be had from such an approach are far more efficiently gained from the new open paper review process being tried elsewhere.

Finally, while I think what Judy is doing here is very interesting and I look forward to its outcome, I would suggest to you that most scientists would tend to consider this particular forum biased and so not the best place to participate.

114. Ken Fritsch

Finally, while I think what Judy is doing here is very interesting and I look forward to its outcome, I would suggest to you that most scientists would tend to consider this particular forum biased and so not the best place to participate.

Thanks much Steve B for your unbiased opinion.

115. bender

Re #113
(1) I accept my fate. If my goose is cooked, at least I will go down honorably. And if not, then what, my sampling-error-prone friend, will you do for me to make up for this erroneous projection of yours?
(2) Scientists who avoid scrutiny from other scientists are not, in fact, scientists.
(3) Those currently with a hold on scientific power will feel threatened by an Open Audit model. Those hungry for it will see it as the great equalizer. Check your demographics. The latter are not as few in number as you seem to think. Especially outside America.

116. bender

This forum appears to be biased towards opening up the science so that it is transparent and accountable. I *thought* these were deeply held democratic principles in this country. I’m pretty sure they used to be.

117. Willis Eschenbach

Re #105, Judith, as always your presence is a pleasure, and your analysis is interesting.

I was concerned, however, by your statement that:

For the sake of argument, let’s start by accepting the “consensus” opinion for the anthropogenic component of forcing of the global surface temp since 1870, including the (mostly NH) aerosol cooling from 1940-1970. Given this nonlinear anthropogenic forcing of the surface temperature, I agree that it doesn’t make sense to try to look at linear trends over this period.

1) I am not looking for linear trends, but for correlations. I see no reason not to expect that some factor of the hurricane data (count, PDI, ACE, etc.) will be correlated with some aspect of this marvelous world.

2) While it is fine to accept “the (mostly NH) aerosol cooling from 1940-1970″ for the sake of argument, please be aware that you are accepting something that doesn’t exist in the data.

Note that the Southern Hemisphere cooled both more and faster than the Northern. This means that the aerosol explanation for the cooling must be incorrect, since the aerosols are almost entirely in the Northern Hemisphere.

And this, in turn, means that all of the models are incorrect (or to be precise, even more incorrect), since all of them explain the 1947 – 1970 cooling by invoking aerosols …

And this brings the whole AGW hypothesis into even more question, since the models are the only thing we have that supports AGW.

More to follow on more substantive issues. Invite your students to comment …

w.

118. Barclay E. MacDonald

“Regarding your “Open Audit” idea, I would point out to you that many scientists may not be thrilled at the idea of having to spend a lot of time refuting arguments from non-scientists or even from scientists in different fields.”

I suspect that is true, but to the extent they are paid and/or financed by those non-scientists, and those non-scientists are expected to pick up the bill on their recommendations, the non-scientists are entitled, at the very least, to full and complete disclosure of data and methods, and such questions must be answered. It’s too bad that it’s inconvenient!

119. David Smith

A very quick question for Judith: your paper with Hoyos, below Figure 1, says that the NATL box is 90W-20E. Is that correct, or should that read, “90W-20W”?

If it’s 90W to 20E, then you’ve taken in a chunk of Africa, including a sizeable part of the Sahara desert, which is interesting.

(Now, back to work.)

Thanks,

David

120. Willis Eschenbach

Re #105, Judith, the Mann-Emanuel use of the global SST to detrend the AMO, versus using a linear trend, is a red herring. There is very little difference between the two. Here’s the comparison:

As you can see, the difference is very small.

w.
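Willis’s point can be sketched with synthetic series (Python with numpy; an illustrative assumption about the setup, not the actual SST data): when the “global” series being regressed out is itself essentially linear in time, removing it is algebraically almost the same as removing a linear trend, so the two residual series nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 140
t = np.arange(n, dtype=float)

# Synthetic "global" series: essentially a linear warming trend
glob = 0.005 * t
# Synthetic "local" (AMO-region) series: tracks the global series plus noise
local = glob + 0.05 * rng.standard_normal(n)

# Method 1: remove a linear trend from the local series
resid_linear = local - np.polyval(np.polyfit(t, local, 1), t)

# Method 2: regress the local series on the global series, take residuals
resid_global = local - np.polyval(np.polyfit(glob, local, 1), glob)

# Because glob is linear in t, the two residual series are essentially identical
r = np.corrcoef(resid_linear, resid_global)[0, 1]
```

The two methods only diverge to the extent that the global series departs from linearity over the period, which is the crux of whether the M&E procedure buys anything beyond a simple detrend.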

121. Posted Sep 28, 2006 at 12:08 AM

#117

And this, in turn, means that all of the models are incorrect (or to be precise, even more incorrect), since all of them explain the 1947 – 1970 cooling by invoking aerosols

The Allen et al (1999) paper mentions non-linear model:

For example, suppose a model forced with the combined effects of changing sulphate-aerosol and greenhouse-gas levels gave a pattern of change which was significantly different to the sum of the patterns obtained in runs forced with each of these factors alone

Is this just an example, or do they actually use this kind of non-linear model?

BTW, hurricanes on Nature,

In this issue (28 September 2006)

Is US hurricane report being quashed?
Some NOAA scientists say political appointees blocked climate-change message.

122. Judith Curry

Re #106 Bender (Gator!)

Re the class discussion, we will be focusing on the statistical and scientific issues raised on the bender/willis hurricane threads since mid Aug.

In this forum, we won’t be focusing on the blogospheric process or the greater implications of climateaudit, although this may come up in discussion (will be interesting to see how the students perceive all this).

Re #114 Steve, you bring up an interesting issue as to why more scientists don’t blog generally, or don’t post on climateaudit specifically.
A year ago, I wasn’t sure exactly what a blog was, and I had never visited one. I began visiting various blogs post WHCC, motivated by emails I received that referred to specific trashfests of WHCC (scientific and personal). My first ever blog comment was posted on this thread
http://mustelid.blogspot.com/2005/10/ your-comment-was-denied-for.html (watch out for the space I inserted)
The next blog that came to my attention was prometheus; RP Jr was attacking a number of us on the hurricane/global warming circuit fairly regularly. Then I started reading RC and following some of the links posted on the RC site. When we published the BAMS article, we decided not to do a press release, but I did follow what was going on in the blogosphere re the paper, and I posted to about 4 different sites. I happened upon climateaudit somehow in this process; I had never heard of it before. I posted my first message, and then asked myself why I hadn’t come across this blog before, say through RC links (ha ha). Then I spotted SteveM’s name, and figured out where I was.

I was enjoying the interaction and spent some time trying to figure out what climateaudit was all about. I applaud the general principles behind what you are doing. Further, I am quite intrigued by what is going on in the blogosphere generally, and at realclimate in particular; it is a very interesting experiment. It is interesting and worthwhile, and I have continued to blog here (I am not blogging much anywhere else at present). I try to keep an open mind and think outside the box; participating in climateaudit has been provocative and interesting. At first on climateaudit I was getting “attacked” a bit, but overall I have been well treated and respected by the other bloggers, and the attacks are pretty much disappearing at this point.

So why don’t more scientists participate? General ignorance about the blogosphere, and not enough time. There are also a lot of ad hominem and other kinds of attacks on the blogosphere, which scientists typically do not want to subject themselves to. Specifically with regard to climateaudit, most scientists of the “warmer” persuasion that are into blogging spend time at RC, which matches their worldview (sort of for the same reason people watch FOXNEWS versus PBS Lehrer). There is also probably an element of “fear” regarding what happened to Mike Mann, which is somehow “blamed” on MM (the main issues of concern were the subpoena of personal financial records and other things that were pure intimidation by the federal govt, and had nothing directly to do with MM), and this is somehow connected to climateaudit.

I am thinking about writing a paper on “spinning climate science in the blogosphere”; there are a lot of interesting issues here.

I will also be interested in the students’ perceptions of climateaudit (they probably don’t know the history of RC and climateaudit origins).

123. Roger Pielke, Jr.

Judy-

You are always welcome to post and comment on our site. I recognize that discussing issues of scientists in policy and politics can be uncomfortable, especially for scientists not used to being in the midst of the fray, but we’ve not “attacked” anyone, you included. Let’s be fair, OK?

124. bender

Re #122
If there’s anything I can do to facilitate the discussion, let me know. If there are unanswered questions at the end of it all (which there usually are), please record them, and I’d be happy to take a crack at answering them. I’m definitely not the best analyst around; I’ve met plenty better than me. But I’m here and willing to help if I can. We could even post the Q&A at CA if that’s ok with Steve M.

On the one hand, I really don’t have time for this. OTOH human society is starting a new chapter in the annals of science – using the internet to connect human brains with multiple datasets & algorithms to attain full parallel processing in the application of the scientific method – and that is once-in-a-lifetime exciting. If I’m trashed by your students, it’s for the greater good. Good for science. Good for policy.

Thanks for the link. For the record, it was Steve Sadlov at CA who pointed out the BAMS article to us. The only reason it perturbed me was that the time-series in Fig. 1 had no error associated with it. Like the IPCC hockey stick. It made me wonder if scientific standards in climatology are back-sliding. And now we have this uncertainty-free Fig. 5 of Hansen’s. These quotes by Schneider. Policy makers must be forced by scientists to embrace uncertainty. We have to show them how and why it matters, and figure out what to do about it.

125. Judith Curry

I do agree that when I post over at prometheus, I am treated well. But most of my prometheus posts (the most recent two or three excluded) have been of a “defensive” nature, in response to your posts that mention my name and incorrectly impute my motives or incorrectly state what I said.

126. bender

The #122-#123 exchange, to me, is symbolic and helps one appreciate the depth of the problem. Between the scientist and the policy-maker is this huge gap. The public and the elected officials don’t understand how big it is, but the bridge-builders sure do. As a scientist, as a policy-maker, if you try to bridge it you risk falling in. So there’s fearful cross-talk from the precipice. Messages are distorted. People get frustrated. Progress is slow. Finger-pointing, rock-throwing, skirmishes. Language devolves as adversaries reinvent it in their own image. Communication stops.

At some point the collaborative effort needs to start. Scientists and policy-makers each need to be rewarded for bridging efforts. That will require a major shift in how governments and universities function. (e.g. “Publish or perish” is not serving the taxpayer well. Partisan ownership of a policy stance is not serving the voters well.) Responsible, accountable science-based policy will require a new culture of reaching out and working together. That is what the business community is asking for. That is what the taxpayers are asking for.

Policy-makers, please follow RPJr’s model and accept the uncertain nature of science.
Scientists, please follow JC’s model and seek more transparent and accountable methods for generating and communicating policy-relevant results.

127. Roger Pielke Jr.

Judy- I am comfortable that people can visit our site and make judgments about what they find there. When you do characterize what we’ve said, it is probably appropriate to provide the source so that people can judge for themselves:

You are more than a disinterested party in this, no?

I am also happy to let this be my last word on this at CA and let you folks get back to your good work. You have my email and phone number if you’d like to discuss these matters. Issues of policy and politics are not personal, remember that.

Thanks!

128. Steve McIntyre

You mentioned the financial requests by the House E&C Committee. This is a classic example of two different worlds as I’m 100% convinced that the request was merely pro forma without any special interest in that particular topic.

Their key questions pertained to disclosure i.e. did Mann calculate the verification r2? the results without bristlecones? etc. They never did get an answer to their questions. Mann’s response to the NAS Panel was untrue and the panel didn’t pursue the matter. By the time that Mann got to the House E&C committee, no one had defended him so they didn’t bother with him. Had they still been interested in the question, Mann would have been in a very uncomfortable situation. The question about the failure to report the failed verification r2 statistics remains quite relevant as I doubt that Mann’s study would ever have obtained the following that it did, had he reported the failed statistics.
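For readers unfamiliar with the statistic at issue, here is a hedged sketch of why a withheld verification r² matters. Everything below is synthetic and invented for illustration; it is not Mann’s actual data or procedure. The point is only that a proxy can fit the calibration period respectably while showing essentially no skill in the verification period, which is exactly what reporting the verification r² would reveal.

```python
# Illustrative only: a proxy calibrated on one period can fail verification
# on the withheld period. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)

def r_squared(obs, pred):
    """Squared Pearson correlation between observed and predicted series."""
    r = np.corrcoef(obs, pred)[0, 1]
    return r * r

n = 100
obs = np.cumsum(rng.normal(size=n)) * 0.1      # fake "observed" temperature
proxy = rng.normal(size=n)                     # proxy: noise everywhere...
proxy[n // 2:] += obs[n // 2:]                 # ...informative only late

# Calibrate a linear fit on the late half, verify on the early half.
slope, intercept = np.polyfit(proxy[n // 2:], obs[n // 2:], 1)
pred = slope * proxy + intercept

r2_cal = r_squared(obs[n // 2:], pred[n // 2:])
r2_ver = r_squared(obs[: n // 2], pred[: n // 2])
print(f"calibration r^2: {r2_cal:.2f}, verification r^2: {r2_ver:.2f}")
```

Since the proxy is pure noise in the verification half, the verification r² hovers near zero no matter how the calibration fit looks, which is why suppressing it is so consequential.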

129. David Smith

Judith, could you take a quick look at #119? I’m reading the Hoyos et al paper and just want to be sure I understand the geographical basis of the study.

Thanks

130. Steve Bloom

Re #126: This causes me to wonder if you’ve ever dealt with policy-makers.

Re #127: “You are more than a disinterested party in this, no?” As are you, Roger. Not an “honest broker” in sight, it seems (as you define the term, anyway).

131. Judith Curry

re #128 Steve, I will do my best to stay out of the hockey stick debate; I have not looked at the issues in detail, and I am prepared to believe you on the issues related to statistics. The point I make is that the broader scientific community was totally taken aback by the subpoena of his personal financial records. I believe that was highly inappropriate and smacks of government attempting to intimidate scientists. A scary thing, whether the scientist is right or wrong.

132. Judith Curry

Re #126 Bender, thank you for this analysis, I agree.

133. Judith Curry

re #129: and my biggest thanks to David, who is ignoring the noise and sticking to the main topic. Re #119, oops, that should be 20W.

134. bender

One more test. Humor me.
“ampersand”-”less than”-”minus” yields:

y &

135. TJ Overton

I just saw an interesting event that seems relevant to the relationship between science and policymakers.

“The principal role of the science and technology community is to advance human understanding. But there are times when this is not enough. Scientists and engineers have a right, indeed an obligation, to enter the political debate when the nation’s leaders systematically ignore scientific evidence and analysis, put ideological interests ahead of scientific truths, suppress valid scientific evidence and harass and threaten scientists for speaking honestly about their research.”

136. bender

Re #73

you can input the lt sign by using the tex operator

Steve M, I saw the same message posted to Willis a couple of weeks ago, but didn’t understand it then, and still don’t now. Moreover I can’t find the full instructions on using this “tex operator”. It’s driving me bonkers.

[Failed test in #134 can be deleted.]

137. Steve Bloom

Re #132: I won’t belabor the point, but from my perspective as someone with circa 30 years of policy experience, what bender said didn’t make a lot of sense. More precisely, a lot of the individual points made sense, but the whole was not coherent in the sense of making up an understandable (to me) overall approach. OTOH maybe he could clarify things by making reference to some specifics.

Re #133: While we’re on the subject, the 90W doesn’t quite work either since it picks up a piece of the Pacific along with a big chunk of Central America and Mexico (see map). Since any rectangular box selected is going to tend to have such problems, possibly the intent was to identify the area within the box that is also in the NATL? If so, 20E presents no problem, but note that 90W still misses the western Gulf of Mexico (although I don’t know if the Gulf or the Caribbean even qualify as part of the NATL).

138. Steve McIntyre

#136. Latex option is described here. http://www.climateaudit.org/?p=660 It’s very handy although it may not do what you want to do.

139. fFreddy

Re #131, Judith Curry

the broader scientific community was totally taken aback by the subpoena of his personal financial records

Judith, that’s twice you’ve said that, and I don’t think it is true.
Mann received a letter from a Congressional Committee; that is not the same as a subpoena, which has a very specific meaning. Congress has the power to subpoena, but they didn’t use it in this case.
And as I recall it, they asked for details of the funding of his professional work. That is a very different thing from asking for his personal financial records.
It is possible my memory is playing me false, and I apologise in advance if that proves to be so. But I think you should check your sources before repeating this allegation.

140. Willis Eschenbach

Re #131, Judith, as always your comments on the Michael Mann / Congressional situation were interesting. You say:

The point I make is that the broader scientific community was totally taken aback by the subpoena of his personal financial records

Context is crucial in this issue. The committee’s request, not a subpoena but a request, asked him politely to “please provide the following information” (along with other scientific information about his data, methods, and results):

1. Your curriculum vitae, including, but not limited to, a list of all studies relating to climate
change research for which you were an author or co-author and the source of funding for
those studies.

2. List all financial support you have received related to your research, including, but not
limited to, all private, state, and federal assistance, grants, contracts (including subgrants
or subcontracts), or other financial awards or honoraria.

3. Regarding all such work involving federal grants or funding support under which you
were a recipient of funding or principal investigator, provide all agreements relating to
those underlying grants or funding, including, but not limited to, any provisions,
adjustments, or exceptions made in the agreements relating to the dissemination and
sharing of research results.

Now, the question of who is paying for the research is a question that has gained great prominence in climate science. Steve M. gets asked these kinds of questions all the time. Heck, I’ve even been asked who pays for my research … the answer being, “I do” …

However, in the case of Michael M., I and many others, including Joe Barton, it seems, were incensed by his refusal to reveal work which taxpayer dollars had paid for. He was being paid by the US Government to do climate research, and then hiding the results. In that case, it is entirely appropriate, in my view, for the US Government to find out what research he has done on the public dole, and what he has done with the results.

In order to find this out, however, it is not enough to ask him which research was publicly funded, as he is known to … ummm … well, let me just say, be parsimonious with the truth. To find out from him, it is necessary to ask who funded all of his climate research; otherwise, he might accidentally put some of the answers in the “CENSORED” file … (if you don’t know the history of that file, it is where Michael Mann put the paleoclimate “hockeystick” results that he did not want to see the light of day).

In that situation, I have no problem with the Committee asking Michael Mann “Who paid for your climate research?” What’s the problem with that? Would you be unwilling to reveal who paid for your research?

w.

PS – at the end of the day, he provided the financial information, none of which was embarrassing, and only part of the scientific information, much of which was embarrassing …

141. Roger Pielke Jr.

Re #139

I have expressed the view that Barton’s inquest was misguided, but it certainly was not a subpoena and it did not involve a request for personal financial records, as described by Nature here:

http://www.geo.umass.edu/climate/natureedit.pdf

The letter from Barton is here and asks for information on all research support:

http://energycommerce.house.gov/108/letters/062305_Mann.pdf

Assertions involving factual claims about political issues should be held to the same high standards as everything else, in my view.

142. Pat Frank

#122,125,128,139,140,141 — Richard Lindzen has pointed out that when he was hauled before the committee of then-senator Gore in 1992 and was “bullied” (Lindzen’s word) to change his mind about the cause of recent climate warming, mentioned here, no scientists came to his defense.

Nor, apparently, was the scientific community chilled by that particular bit of Congressional persecution. The fearful response seems to be a little selective. Lindzen has been completely transparent with respect to his science, in rather stark contrast to Michael Mann or Phil Jones, or the IPCC for that matter, and, further, tens of billions of dollars aren’t being leveraged by his work. Why shouldn’t Congress be interested?

What I don’t understand is why the same scientists that have responded sympathetically to Mann haven’t also advised him to release his methodologies. That makes their defense look partisan, and the partisan defense of an obscurant is not ethical scientific practice.

I think Mann deserves no defense, Judith. His actions have not been forthcoming. For that reason, I think Barton’s requests for information should warrant no fear. Barton would have had no reason to make a request had Michael Mann been open about his work. One may also note that in the context of the Baltimore hearings, Barton’s requests were entirely in keeping with usual Congressional activities regarding the auditing of suspect publicly funded science.

143. bender

As important as it is to be factually correct, I think Judith’s main point is that there is a legitimate fear among scientists that their research could get sidelined and their credibility could take an unnecessary hit if they were to be subpoenaed in this way. Whether it happened or not, just the threat of it is a powerful weapon. Powerful enough to keep them away from the blogosphere. Whether or not that weapon would ever be used is hard to say. Hence the fear. It’s not irrational. Not knowing who or what the enemy is is scary.

“And the knowledge that they fear
is a weapon to be held against them.”

144. Willis Eschenbach

bender, you say:

I think Judith’s main point is that there is a legitimate fear among scientists that their research could get sidelined and their credibility could take an unnecessary hit if they were to be subpoenaed in this way.

Describing the fear as “legitimate” assumes facts not in evidence …

1) Michael Mann’s research (unfortunately) did not get sidelined.

2) Michael Mann’s credibility (unfortunately) did not take a hit.

If that’s what happens when you defy the Committee’s request to come clean, what’s there to worry about?

w.

145. bender

Re #137
Back-pedaling Bloom. Maybe what I say does not make sense to you. But you are quite special. Perhaps “as someone with circa 30 years of policy experience” you have been working in an advocacy role to influence policy in one direction, as opposed to what *real* policy makers do, which is try to make balanced choices that satisfy many stakeholders & special interests? Or maybe in your 30 years of working with scientists to bridge the science-policy gap you haven’t been working with the kinds of scientists who care about uncertainty, how to measure it and how to cope with it? Tell me – in your 30 years of experience as a stakeholder influencing policy, have you ever stopped to ask yourself: “what kind of data would it take in order to decide whether or not a particular event were unprecedented”? Tell me – in your 30 years of experience of working with scientists, how is it that the issue of sampling error and inference in stochastic time-series never came up?

I’m listening. The floor is yours.

But don’t forget, you still owe me a lecture on Bayesian methods in climate science.

Re #130 I can live with your doubts.

146. bender

Re #144
I’m not so sure about that, Willis. Maybe it’s in how you define “sidelined” and “credibility hit”? As for legitimacy of fear, my point was that when you don’t know your enemy, it is in some sense “legitimate” to be fearful.

Bottom line for any fearful scientist: if you are open with your data, methods, and funding sources, careful in your work, and willing to admit when you’ve made an error, then you never have anything to worry about.

Worth noting that the financial information that the committee asked for was not onerous to prepare. It’s the sort of information contained in any NSF grant, for example. It’s a 2-minute copy and paste job.

147. Pat Frank

#143 — I don’t recall a big fearful outcry by scientists when Donald Kennedy was dragged over the coals in front of Congressman Dingell. A lot of attention, certainly, but not much fear. Likewise when Baltimore was investigated. Mann was called down for cause, not by whim. A defensively fearful response by other scientists, while possible, would be a bit irrational. I doubt that many car drivers here are fearful about being hauled off to jail after an example is made of some freeway dragster with a 90-day sentence.

148. David Smith

Judith, concerning the tropics, I have a question. I am stumped by the water vapor trends in the tropical atmosphere.

Linked here is a trend of the specific humidity at 400mb (about 5 or 6 miles above the surface, high cloud level). This is for the world’s tropical region (25N to 25S). The humidity trend is definitely downward. Similarly, the downward trend seen at 400mb appears at the other levels from 700mb upward (in other words, much of the troposphere).

Linked here is a trend of the specific humidity at 925mb (near the surface) for the same region. In recent years, the humidity trend is up.

I understand why the near-ground humidity rises (warmer sea temperatures mean more evaporation) but I do not understand why that moisture does not mix upwards in the tropics. Any idea why?

And, do you know whether declines in the water vapor content of the mid and upper troposphere are predicted by the GCMs? I thought the prediction is for rising humidity (=more greenhouse gases) throughout the atmosphere, not just near the ground.
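For what it’s worth, the trend lines in plots like these are typically just ordinary least-squares fits to the level’s time series. A minimal sketch with synthetic annual-mean data (the numbers, variable name, and drift are invented for illustration; the real series would come from the reanalysis site David links to):

```python
# Sketch: OLS trend through a synthetic 400mb specific-humidity series.
# The -0.004 g/kg/yr drift and 0.05 g/kg noise are made-up values.
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1958, 2006) + 0.5           # annual means, mid-year
q400 = 1.8 - 0.004 * (years - 1958) + rng.normal(scale=0.05, size=years.size)

# OLS slope in g/kg per year, reported per decade
slope, intercept = np.polyfit(years, q400, 1)
print(f"trend: {slope * 10:+.3f} g/kg per decade")
```

The same two lines of fitting would be applied level by level (925mb, 700mb, 400mb, …) to compare the near-surface and mid-troposphere trends David describes.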

Thanks,

David

149. TCO

I think Judy is showing the courage of years and of… well, courage… to take part in the free salon-style interchange that is the blogosphere. I think many scientists are not comfortable with the hard-core fat-chewing of a real knock-down, drag-out bull session. They tend to have private personalities and also realize (rightly so) that they have little to gain, in terms of the normal push toward advancement, empire building, and science renown, from blog discussions. Particularly ones where some pesky little commenter may show them up. But that’s fine. I hate the calcification of science that has come with all the money that poured in after WW2. It is good to have some tumult shaking the Ivory Tower.

150. Pat Frank

#149 — On the contrary. Having been around academic scientists and PhD candidates my entire adult life, I can say with confidence that virtually every personality type is included among them (except perhaps the violence-driven). They do not “tend to have private personalities.”

No one likes to have a favored idea wrecked. But we all have to swallow hard and carry on if that happens. That’s happened to me. I’ve perhaps brought that circumstance upon others. Judith Curry is strong-minded to be here amongst critics, it’s true. After looking at the links RP jr. provided in #127, however, it’s clear that Judith has the stomach for engaging controversy. She has also responded very well; even offering to co-author a paper with Bender (I hope that happens).

Reconciling contrary interpretations through a collaboration with an opposed-view colleague happens with some regularity in science, and is a great way to facilitate real progress. I’ve done it myself, and successfully. It is a perfect out-working of the scientific ethic of disinterested objective knowledge. Judy Curry has it. Michael Mann and Phil Jones, by all evidence, do not. Those who collaborate with them under that circumstance, or who protect them, are guilty of participatory subversion of that ethic.

151. Steve Bloom

Re #145: Nice tap-dance, bender. If you have no practical experience with public policy, just say so. Notwithstanding that, I would be happy to see your ideas.

152. TCO

Pat, I’ve been around the union carders. Been in the military, Wall Street, etc. There are a lot of different types of people and tendencies for groupings. We are not all the same, and personalities are not distributed across fields the same. That there is a wide variance is irrelevant. I’m talking about the field versus other fields. So mean and sigma are what matter.

153. Steve Bloom

Re #148: That first link says “up to 300 mb” rather than 400 mb.

154. bender

Re #151 Nice dodge, as usual, Bloom.

155. Pat Frank

#152 — I didn’t say that scientists are fully representative of all personality types, TCO. I just disagree that they, “tend to have private personalities.” Here’s what they all do have — the ones that practice research full-time, anyway — they have personalities tempered by self-disciplined attention to analysis and detail. But after seeing some scientists fly off the handle at their grad students or get spit on their lips arguing a point, I really can’t say they tend to private personalities.

156. David Smith

There is something else climate-wise that stumps me.

Linked here is a plot of global air temperature in the upper regions of the troposphere (200mb, meaning that about 80% of the earth’s atmosphere lies below this level).

Notice the dramatic change in temperature circa 1976. This pattern (anomalous temperature rise) is also seen from the top of the stratosphere all the way down to about 700mb (10,000 feet above sea level).

Something major happened to Earth’s atmosphere in 1976. Not only did the mid and upper atmosphere warm sharply, but global surface temperatures stopped declining and began to rise.

This 1976 inflection point is reflected in other atmospheric parameters, like geopotential heights, so I do not see it as a measurement issue. Something big happened in 1976. But what?

The effect is especially evident in the stratosphere.

I’ve heard that the pre-1976 cooling was due to aerosols. It is hard for me to imagine how reduced aerosols could cause a jump like the one we see in the records. I’d expect a gradual shift from cooling to warming as aerosols waned.

Now, the stratosphere can undergo sudden warmings related to things called gravity waves, which I believe are vertically propagated, slow-moving waves. But my understanding is that those events are relatively short-lived. Whatever happened in the mid 1970s has lasted a long time.

Perhaps someone has figured this out, and I’ve just missed hearing the explanation. If I were a climate scientist, I’d be intellectually “stuck” on this question until I figured it out.
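One way to make “something big happened in 1976” quantitative is a simple break-point scan: try every candidate split year and keep the one that maximizes the two-sample t statistic between the segment means. A hedged sketch on synthetic data with a step imposed at 1976 (purely illustrative, not the actual 200mb record):

```python
# Sketch of a step-change scan. The series is simulated: flat noise with
# a +0.8 step imposed at "1976", roughly the magnitude discussed above.
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(1958, 1996)
temp = rng.normal(scale=0.2, size=years.size)
temp[years >= 1976] += 0.8                    # imposed step change

def best_break(y, x):
    """Return the x where splitting maximizes |t| between segment means."""
    best_t, best_x = 0.0, None
    for i in range(5, len(y) - 5):            # keep >= 5 points per side
        a, b = y[:i], y[i:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(b.mean() - a.mean()) / se
        if t > best_t:
            best_t, best_x = t, x[i]
    return best_x, best_t

brk, tstat = best_break(temp, years)
print(f"estimated break year: {brk}, |t| = {tstat:.1f}")
```

On a real record one would want a method that accounts for autocorrelation (the naive t statistic overstates significance in red-noise series), but the scan conveys the idea of locating the shift objectively rather than by eye.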

bender, you’re a gator, I see. See you guys October 7!

David

157. David Smith

Re #153

Steve, “up to 300mb” means that the specific humidity data is available “up to 300mb”. This plot is for 400mb, one of the eight atmospheric levels available online.

Here is the link to the government website used to make this plot. Anyone can make a plot. It is a neat site.

158. David Smith

Re #157

I’m sorry, here is the correct link.

159. Pat Frank

#145 — You don’t understand, Bender. Forcing AGW policies on everyone is not the end. It’s a means to the end. The end is rolling back human population to maybe 1 billion people living on organic farms in harmony with a green and happy Earth. That means bringing down technical civilization and, say, letting virulence have its lovely natural way. The redemptive vision always includes the survival of the good; in this case, the green. Isn’t that right, Steve B.? More better is more fewer? And the few includes you?

Anyone thinking “The Rapture” is only a nut-case Christian longing — Think again. Transport to heaven is a fungible fantasy. It’s also a call to enabling action.

160. David Archibald

Re 156, a number of things changed in 1976. Certain fisheries had dramatic changes in yield. Perth’s weather became drier after 1976.

161. Steve McIntyre

TCO – please don’t post anything until tomorrow afternoon. Yellow card.

162. bkc

Re. 156

Something big happened in 1976. But what?

No idea if it’s related or not, but I believe the PDO shifted states around ’76 or ’77 to the “warm” phase, and I believe it has mostly been in that state since. Interesting how it tracks global temps (or vice versa) – rising in the 30′s and early 40′s, cooling in the 50′s and 60′s, and warming again in the 70′s to present (mostly)… Link for index is here.

163. bender

Re #156
1976 heating of ~0.8°C at 200mb. Nobody else guessing, so I’ll get it started. Random pulse of heating from below via vertical teleconnection?

164. bkc

Re. 160

I think there have been several studies linking fish yields to PDO phase state, at least in NE Pacific. Don’t have any links offhand.

165. bender

Correlations between weather fluctuations and animal population fluctuations are always interesting to contemplate. If you’re serious about it, I would caution you to be careful about the statistics of that sort of analysis. Read Yule on sunspots and hares.
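bender’s caution can be demonstrated in a few lines. A hedged sketch (pure simulation, nothing to do with any actual hare or sunspot series): smooth two independent white-noise series and their apparent correlation inflates, because the moving average manufactures shared low-frequency wiggles. This is the Slutzky-Yule effect cited earlier in the thread.

```python
# Simulation: smoothing independent white noise inflates |r|.
import numpy as np

rng = np.random.default_rng(3)

def smooth(x, k=5):
    """Simple k-point moving average (valid region only)."""
    return np.convolve(x, np.ones(k) / k, mode="valid")

n, trials = 60, 2000
r_raw, r_sm = [], []
for _ in range(trials):
    a, b = rng.normal(size=n), rng.normal(size=n)   # independent noise
    r_raw.append(abs(np.corrcoef(a, b)[0, 1]))
    r_sm.append(abs(np.corrcoef(smooth(a), smooth(b))[0, 1]))

print(f"median |r|, unsmoothed: {np.median(r_raw):.3f}")
print(f"median |r|, smoothed:   {np.median(r_sm):.3f}")
```

Smoothing cuts the effective number of independent points, so the null distribution of r widens; with red noise (e.g. population cycles) the inflation is stronger still, which is why naive weather-vs-wildlife correlations need careful significance testing.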

166. bkc

Re. 165

Being statistically challenged, I can’t really comment on the quality of the statistical analysis. However, here is a typical study – ***WARNING – CONTAINS PCA***. It also, however, had this encouraging comment; “Further, the detection of trends without smoothing tends to lend greater weight to such interpretations of the data.” Also, he provides links to data and plots of all data series.

Here is an overview of the PDO (same author), with climatic and biologic responses, and many references.

167. Pat Frank

#156 — In January this year, I had an email conversation with a science reporter on the San Francisco Chronicle about the Pacific Decadal Oscillation (PDO) and the California Current (CC). She had written a long front-page article saying that the CC is getting warmer due to CO2-driven forcing and so the local sea-bird population (the Cassin Auklets, mostly) was declining. After three go-rounds, complete with extensive literature citations, and attached articles and Figures, showing the CC and PDO are in entirely unremarkable cycles, she was unmoved.

Apparently when the CC warms, the waters of the Gulf of Alaska cool, and vice versa, all in phase with the PDO. It appears that in about 1998, the PDO reversed the phase that began around 1976.

Here’s a bit from my second email to the SF Chron. reporter. I have all the articles in the list below as pdfs, if anyone wants them:

The PDO refers to a cyclic warming and cooling of the Pacific Ocean, and it can take forty years (or more) to complete a cycle, i.e., a time encompassing the entire professional career of a scientist (or a journalist). Along with the PDO, the California Current (CC) warms and cools in concert. The two processes are coupled, have endured for at least thousands of years, and have nothing to do with any current global warming.

Either the PDO influences the CC, or else they are both driven by the same climate process. The warming in the CC noticed in the late 1970s is now known to have been connected to a change of phase in the PDO from cool to warm, and is not ascribed to any atmospheric warming due to CO2.[2]

In the 1970′s, no one knew about the PDO, and so the warming observed in the California Current was explained in terms of other forcings — after 1988 but before 1997 that included global warming. Perhaps some still ascribe the warming of the California Current to CO2-driven global warming. However, since discovery of the PDO the rise in atmospheric CO2 is superfluous to an explanation of the warming of the CC. That is, the atmospheric CO2 trend is not needed to explain the warming CC phenomenon. In any case the warming trend appears to have reversed, as the PDO is known to do.

When the California Current warms due to the PDO, the offshore ecosystems decline. However, at the same time a reverse polarity cooling happens off of Alaska, where the ecosystem is invigorated. When the PDO reverses, the CC cools and offshore California the ecosystems rebound. But off Alaska the sea warms and the ecosystems decline. This last cycle began in 1945 (cooling), switched phase to warming around 1977, and apparently switched to cooling again in 1998.

A recent extensive review of the PDO and the California Current that included a discussion of the accompanying ecological changes,[3] concluded that, “A warming trend between 1950 and 1999 of 1.3 C is evident in upper-ocean temperature along the coast of California, together with strong interannual temperature variations. The observed warming trend extends over the top 200 m of the water column and is consistent with large-scale studies of heat content change over the last 50 years (Stephens et al. 2001). The temperature signal is coherent with indices of large-scale climate variability (e.g., PDO) and is uncorrelated with salinity on decadal time scales. Simple and full physics model experiments reveal that these temperature changes are primarily controlled by changes in net surface heat flux forcing, which acts in phase over the entire northeastern Pacific. … It is apparent that the warming trend between 1950 and 1999 is driven in the model by decadal variations, rather than a trend, in Q, suggesting that the observed trend in ocean temperature is part of natural fluctuations of the climate system rather than associated with global warming. Consistent with this interpretation, the observed transition toward cooler temperatures after 1998 is captured by a simple 1D heat flux forced model (Fig. 15b). This does not exclude the possibility that there is also a component of warming trend associated with greenhouse gas increases. However, the temperature variance associated with decadal changes in heat flux is strong enough to inhibit a clear detection of this global warming signal over the length of the CalCOFI record.”

They are saying that the change in the CC is due to a recurring cycle and is not detectably due to a linear increase in global heat balance.

What this means is that the offshore warming that constituted the centerpiece of your article cannot be ascribed to a global warming trend. It is very much more likely to be part of the enduring cycle of the Pacific Ocean.

Long-term records of the California Current obtained from drill cores taken from the Santa Barbara Channel[4] show variations in the temperature of the California Current across a thousand years that are entirely consistent with what is happening now. Discussions in the literature about the more recent cooling trend of the California Current, since 1998, speculate that the phase of the PDO is once again turning. If that is true, the ecosystem will strengthen off the California coast, but will diminish off Alaska.

Note the quotation in reference [4] below. I don’t think anyone can say that the ups and downs in the CC now are any warmer than the ups and downs during the Medieval Warm Period, which may have averaged a degree warmer than the current global climate.

Virtually all the changes in offshore ecology you noted in your article are most parsimoniously connected to the PDO rather than to global warming. Perhaps that conclusion will change in the future, though I tend to doubt it.

[1] N. J. Mantua and S. R. Hare (2002) The Pacific Decadal Oscillation Journal of Oceanography 58, 35-44

[2] W. T. Peterson and F. B. Schwing (2003) A new climate regime in northeast pacific ecosystems Geophysical Research Letters 30, 1896, doi: 10.1029/2003GL017528

[3] E. Di Lorenzo, A. J. Miller, N. Schneider, and J. C. McWilliams (2005) The Warming of the California Current System: Dynamics and Ecosystem Implications Journal of Physical Oceanography 35, 336-362.

[4] D. B. Field and T. R. Baumgartner (2000) A 900 year stable isotope record of interdecadal and centennial change from the California Current Paleoceanography 15, 695-709. These researchers specifically note (p. 707) that, “The recent climate shift that occurred over the North Pacific in the mid-1970s produced thermal changes on a scale roughly comparable to the interdecadal fluctuations of the past 1000 years in the California Current. We also find centennial-scale changes that are consistent with the global temperature changes associated with the Medieval Warm Period and the Little Ice Age.”

168. Judith Curry

Steve, for the record I agree absolutely with you that scientists MUST:
1. make data publicly available
2. document upon publication the methods used in the research so that the results are reproducible
3. answer questions regarding the data/methods from anyone seriously trying to reproduce the results
4. disclose funding

If this is not happening (and it seems not to be), the blame is on the funding agencies, the public institutions that employ the scientists, and the scientific journals. Individuals may try to skate around this accountability, but these other groups have the power to insist, through not funding, not promoting, not publishing, etc., those who fail consistently to provide this accountability. However, government intimidation of scientists (Mann, Lindzen, whoever) is a bad thing.

169. KevinUK

#167, Pat

Surely you already know that ‘journos’ are not interested in real science but instead are interested only in the junk (Mannian/Hansen) science that enables them to publish alarmist articles on the front pages of their newspapers?

KevinUK

170. welikerocks

The government made Steve and Ross’ findings official and on the record because to these scientists and their funders and backers, Steve and Ross were just pests and amateurs. The government represents me in this case. The scientists who believe in AGW, their backers, and the magazines and media who promote only their beliefs intimidate me! Without government representation or audit in this case, where would I be?

Hello Bender,
Something Steve Bloom said in another topic spurred me to a google search for new publications (I found so many). Apologies if this one has already been discussed, but I wanted to show it just in case not:

Public release date: 11-Sep-2006
Human activities are boosting ocean temperatures in areas where hurricanes form, new study finds. “We’ve used virtually all the world’s climate models to study the causes of SST changes in hurricane formation regions,” Santer says.
Title: “Forced and unforced ocean temperature changes in Atlantic and Pacific tropical cyclogenesis regions”
Authors: B.D. Santer, T.M.L. Wigley, P.J. Gleckler, C. Bonfils, M.F. Wehner, K. AchutaRao, T.P. Barnett, J.S. Boyle, W. Brueggemann, M. Fiorino, N. Gillett, J.E. Hansen, P.D. Jones, S.A. Klein, G.A. Meehl, S.C.B. Raper, R.W. Reynolds, K.E. Taylor, and W.M. Washington.

171. welikerocks

Re #167 for Pat:

“Remote wind forcing contributed to the recovery of the California Current in summer 2005″
No 3 here:

172. Judith Curry

Re #117 Willis, I absolutely agree with you that there is uncertainty in the attribution of the 1940-1970 cooling. My recent (post Feb acceptance of the BAMS article) oral presentations on the issue of hurricanes and global warming cite the two main uncertainties in the link as the quality of the hurricane data and the attribution of the 1940-1970 cooling. I am stumped on the attribution of the cooling. RP Sr recently posted a link regarding Judith Lean’s presentation at the SORCE meeting on solar variations in the last century, and the impact of solar forcing in the past century seems to just get smaller. Sounds crazy, but I am starting to ponder the WW effect. We discussed this in a previous post re the 1915 minima in NATL TC activity (my argument was that this was associated with minima in SST also, lending credibility to the TC minima). Look at Willis’ plot, and if you take out the WWII years in the NH, then you don’t get the big warm spot in the 30s-40s. Mike McCracken made the point at my latest presentation at the Climate Institute that WWII was a major perturbation to the weather observing systems in the NH, including the methods of how SST was measured by ships (apparently he has been looking into this a bit). If WWI had a similar impact on observations, I’m not sure what this means in terms of the reliability of the historical record. Maybe we need to look more closely at the proxies for the “historical” part of the climate record (no bristlecones please). And we need to take a careful look at ice core and other records that can help us assess how much volcanic and pollution aerosol was present and where it went.

173. Judith Curry

re #120 Willis, the whole issue of trying to somehow remove the “global signal” from the NATL temperature series is very confusing. I can accept the idea that, to identify the AMO from the SST signal, you need to remove the global external forcing from this temperature signal. However, I am not sure that this can actually be done in any sensible way.

Since SST variations caused by natural internal variability are hopelessly convoluted with the global forcing and may in fact be modified by the global forcing, let’s try to find another proxy for the AMO that has little dependence on greenhouse warming. Ironically, tropical cyclones themselves may be the best proxy for the AMO. Take a look at the NATL figures from my congressional testimony
(watch out for the space).
Consider the following “eyeball” analysis (no statistics have been done on this). In each of these figs, with an 11 year running mean, there is a trend, a ~70 yr cycle (AMO), and a 20 year cycle. In the U.S. landfalling plot, the dominant signal is AMO, with a secondary signal from the 20 year cycle (very little signal from trend). The season length plot shows the 20 year cycle most strongly, and then AMO and trend almost equally (AMO seems larger in the earlier part of the record, with trend larger in the later part of the record?). The biggest trend signal comes from the plot of the total number of tropical cyclones, with the AMO and 20 year cycle having almost equal influence (note the anticorrelation with SST in the 20 year cycle). Presumably, with these 3 time series and the assumption that there is some sort of trend, a ~70 yr cycle and a ~20 yr cycle, we could somehow sort this out statistically (sort of like 3 equations, 3 unknowns). However, if we are trying to figure out what is causing the hurricanes, hurricanes will not work as the proxy for the various signals; we need to find something else as a proxy (or a combination of something elses), but this may be a better way to go than trying to deconvolute the temperature signal into forced signal and internal variability signals.
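(An aside for the statistically inclined: the “3 equations, 3 unknowns” idea can be sketched numerically. The following is a minimal illustration with synthetic data and assumed 70- and 20-year periods, not the actual hurricane series; it fits a trend plus two sinusoids by ordinary least squares.)

```python
import numpy as np

# Synthetic "hurricane index": trend + ~70 yr cycle + ~20 yr cycle + noise.
# All amplitudes and periods here are made up for illustration.
rng = np.random.default_rng(3)
t = np.arange(1900, 2006, dtype=float)
y = (0.02 * (t - t[0])
     + 1.0 * np.sin(2 * np.pi * t / 70)
     + 0.5 * np.sin(2 * np.pi * t / 20 + 1.0)
     + 0.3 * rng.standard_normal(t.size))

def design(t, periods):
    # columns: intercept, linear trend, then a sin/cos pair per assumed period
    cols = [np.ones_like(t), t - t[0]]
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    return np.column_stack(cols)

X = design(t, [70.0, 20.0])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
trend = beta[1]
amp70 = np.hypot(beta[2], beta[3])  # amplitude of the ~70 yr (AMO-like) cycle
amp20 = np.hypot(beta[4], beta[5])  # amplitude of the ~20 yr cycle
print(trend, amp70, amp20)
```

With only ~1.5 cycles of the 70-year component inside a 106-year record, the trend and the long cycle are partly confounded, which is exactly the difficulty with the real series.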

The other issue, touched upon by your spatial correlation plots which I haven’t yet had time to digest, is that there are some teleconnections (cross basin stuff) involved in hurricane genesis. Some of this may be “real” (i.e. associated with some sort of causal mechanism). The way I laid out the hypothesis in the BAMS article, related to AGW causing increased SST causing increased hurricane intensity, involves each ocean basin acting as an independent entity. This is probably the first order effect of AGW (or at least the simplest one conceptually). But the higher order effects, such as temperature gradients within a basin and teleconnections among the basins, may be of substantial importance. El Nino is the most obvious example, whereby SST changes in the Pacific influence TCs globally (including NATL). We are even apparently seeing a Pacific influence on NATL in the subseasonal variation of TCs through the MJO (Madden-Julian Oscillation). Does this sort of thing happen on decadal time scales also? Probably, but the analyses of decadal scale teleconnections to date have been based upon SST analyses and are arguably convoluted with the externally forced signal. So clearly A LOT more to be done on all this. I will try to find some time soon to digest the spatial correlation plots.

174. Judith Curry

Re #170: The Santer paper didn’t have anything to do with hurricanes, contrary to the press releases and interviews. They concluded from all of the coupled global climate model runs that tropical SST variations over the last century have been externally forced, not associated with internal oscillations. This then begs the question of the attribution of the forcing, and it doesn’t help us understand anything about hurricanes. This paper did help (slightly) to tighten up the link in the hurricane-AGW causal chain whereby they ostensibly ruled out natural internal variability as a cause of the tropical SST increase (their experimental design could not rule out natural forced variability). So why the hype in the press release and the interviews? In my opinion, this was inappropriate in the context of the scientific findings of this particular paper.

175. fFreddy

Re #168, Judith Curry
Naturally, I agree with you when you say :

However, government intimidation of scientists (Mann, Lindzen, whoever) is a bad thing.

However, recall that you said in #131 :

The point i make is that the broader scientific community was totally taken aback by the subpoena of his personal financial records, I believe that was highly inappropriate and smacks of government attempting to intimidate scientists.

As pointed out above, the “subpoena of his personal financial records” never happened. It is a lie that was, I assume, started by the “green” spin machine.
Who, then, is intimidating scientists ? And how do we stop it ?

176. bender

Re #170
Thanks for the paper. I’m very far from being an expert in this field. My only beef was with the trend analysis methods of Emanuel (2005). From Mann & Emanuel (2006) I conclude that the field has “moved on” toward more sophisticated methods of detection and attribution. Which is good and necessary. Maybe Emanuel (2005) was just a baby-step forward.
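(For anyone who missed the earlier smoothing discussion: the artifact I kept pointing to can be reproduced in a few lines. This sketch uses two independent synthetic red-noise series as stand-ins for SST and PDI, not Emanuel’s data, and shows that repeated 1-2-1 smoothing inflates the sample correlation on average.)

```python
import numpy as np

def ar1(phi, n, rng):
    # red noise: x[t] = phi * x[t-1] + white noise
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def smooth(x):
    # one pass of a 1-2-1 running filter, endpoints dropped
    return (x[:-2] + 2 * x[1:-1] + x[2:]) / 4

rng = np.random.default_rng(42)
n, trials = 55, 2000  # n ~ length of an annual SST/PDI record
r_raw, r_smooth2 = [], []
for _ in range(trials):
    sst = ar1(0.5, n, rng)   # independent by construction:
    pdi = ar1(0.5, n, rng)   # any correlation is pure sampling noise
    r_raw.append(abs(np.corrcoef(sst, pdi)[0, 1]))
    s2, p2 = smooth(smooth(sst)), smooth(smooth(pdi))
    r_smooth2.append(abs(np.corrcoef(s2, p2)[0, 1]))

print(np.mean(r_raw), np.mean(r_smooth2))  # mean |r| grows after 2x smoothing
```

The smoothing doesn’t create a relationship; it destroys degrees of freedom, so spuriously high sample correlations become more likely. That is the Slutzky-Yule point in miniature.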

The Santer et al. paper, from the press release, appears to be another example of that “moving on”. In many ways my criticisms of Emanuel (2005) are rendered irrelevant, because the field obviously developed a more nuanced approach to ‘fingerprint’ detection. IOW it is no longer a case of trend detection. That means the approaches have gotten more difficult to understand and to audit. Consider that this paper was authored by 19, count ‘em, brains. That’s a lot of “moving on” firepower.

For the sake of the yellowjacket hurricane seminar class (IOW to cover my shiny metal ass), it is important to realize that a global forcing effect will not necessarily result in simple, ocean-scale fingerprints, like a linear trend in SST. You increase the temperature of the burner under your soup pot, and all kinds of new circulation patterns will arise that you never would be able to predict. If you do the experiment slowly enough I would imagine that pre-existing circulations would first be enhanced, in terms of temperatures and flow rates through them, before there are qualitative changes in circulation. If that’s an analog for the globe, it means that circulation features like the AMO, or what have you, will become strengthened under warming as pathways of heat dissipation. If that pathway is intermittent, then the detection model has to allow for intermittency. Hence the need for more realistic temperature-response models. Hence the validity of the easily-overblown, but symbolic, term ‘fingerprint’ detection.

I think that that is what the field is moving toward: more sophisticated models of cause-and-effect, so that we are better able to detect these complex responses to rising temperature. One approach is to use process-based models of cause & effect: Santer et al. 2006. Another is better statistical models that allow for complexity in the background processes which are expected to be affected by GW: Mann & Emanuel 2006. Both are valid, complementary approaches.

The question before us is whether the conclusions of these papers are robust. Whether their methods work ‘as advertised’. That would take quite some time for someone like me, a non-specialist, to answer. Whereas I could see the flaws in Emanuel’s paper on first inspection, it was largely because the approach was quite simple. Not so with these other papers.

To summarize. The field has moved on. My criticisms of Emanuel (2005) are valid, but no longer relevant. It will take quite a bit of work to figure out if the new conclusions of these newer papers actually do follow from the new methods. The methods look reasonable. Yet we know how fragile these computer-intensive methods are to computer bugs and human error. I hope these weaknesses are recognized so that the researchers will welcome an open auditing process.

I think one thing all these papers ought to do is to end with a clear statement as to what they believe is the marginal impact of warming (GW & AGW) over and above the background of normal variability. In the case of a trend, that’s easily done in a sentence. In the case of a more nuanced temperature response, you need a comparative graph: hurricane frequency with & without GW, and also without the A under the AGW hypothesis. Three lines. A paragraph won’t do. That will put the sliver of AGW in context with all the other non-GW related variability that’s occurring. Which is RPJr’s point. If AGW is causing an extra hurricane or two per year (but delivered in pulses of 3-4 during active seasons), but no detectable increase in landfalling hurricanes, does it make sense to take drastic action against GHG? Or should we just tinker with insurance rates & mechanisms?

That, to me, is objective science that draws a line between hurricane/AGW alarmism vs. policy-useful hurricane science.

This assessment is probably quite naive from the hurricanologist’s perspective. That’s ok. Feedback welcome.

177. bender

Re #175
fFreddy, can we please drop this? The hurricane threads are long enough as it is.

178. bender

Re #173

there are some teleconnections (cross basin stuff) involved in hurricane genesis

Do you mean like I suggest in #103? Have these been hypothesized or proven yet? IOW do we have a novel result here?

179. BKC

#156, 162

If El Nino, which is an internal climate oscillation, can significantly affect the global temperature (approx. 0.5°C in ’98), is there any reason to believe a longer term internal climate oscillation like the PDO doesn’t have a similar effect? In other words, is the PDO shift to a warm state in ’76/’77 responsible for a significant portion of the rise in global temperature since then?

180. bender

Re #179
Which came first – the chicken or the egg?

181. bender

David Smith, you may want to check #178,#103, prompted by your #102, Willis’s #66.

182. fFreddy

Re #177, bender
Yes, all right.
However, for the avoidance of doubt, I should just point out that my somewhat intemperate terminology in #175 was directed at whoever originated the story. There is no question in my mind that Dr Curry was only repeating what she believed to be true.

183. bender

Re #182
Thanks, fFreddy. [Your point is well-taken. The problem is it's a widely circulating rumor. I doubt you're going to get at the root of it here. Besides, the cost of the exorcism would far outweigh the benefits. The thin-skinned ivory tower academics would likely see it as "yet another" heavy-handed invasion of their private thought space. Paranoid? Yes, some of them.] If it’s really eating you, maybe draft up a rant for another thread?

184. BKC

#180

I don’t know. Which came first, the ’98 El Nino or the global average temperature spike?

185. David Smith

Re #181 Interesting concept, bender, and needed, since there is much that is not understood about these storms and brainstorming is needed.

Here is the current thinking: Normally hot tropical air rises in the Pacific and blows into the Atlantic at high altitudes. If there is a lot of this air blowing into the Atlantic, it “blows the top off” of hurricanes and weakens or destroys them (this is the famous “wind shear”).

When the Pacific is “cool” (“La Nina”), there is less hot air rising in the Pacific and blowing eastward. That means there is less wind shear to damage hurricanes (and storm seedlings), so that there are more, and stronger, Atlantic storms.

When the Pacific is “hot” (“El Nino”), there is a lot of air blowing eastward to damage storms, so the Atlantic storms become fewer and weaker.

But, it is more complicated than that, and the subtleties are poorly understood.

186. bender

Re #184
Exactly.

If the hypothesis is that AGW creates a global ‘fingerprint’ then you’re going to have a tough time detecting it – because a priori you don’t know what this ‘fingerprint’ looks like! All you know is it’s not global, and it’s not a trend. Which is saying almost nothing at all, really. So we start looking for ‘fingerprints’, knowing only that they look a lot like everything else out there on the globe. Needle in the haystack? At first you’re going to see fingerprints everywhere, or nowhere, depending on whether you’re a proponent, or a skeptic. Over time the true nature of the ‘fingerprint’ will reveal itself. Until then detection will continue to be a serious problem.

Start boning up on your GCMs. They are the tool that will be used to define, detect, and quantify these ‘fingerprints’.

187. Judith Curry

Re 178, 179: this general topic of cross basin interactions regarding tropical cyclones is hugely ripe for research, although the analysis in #103 is most likely overly simplistic.

188. bender

Re #185
Interesting, as this fits well with Willis’ #66 correlation. The windshear mechanism is obviously well understood physically, but is the (-) correlation of Atl PDI to Pac SST a novel result?

RE: #145 – The general “climate” in this neck of the woods (which I share in common with Steve B) is not unlike what was depicted in the book “Ecotopia” – albeit modified by the constraints of the US Constitution and the current, but increasingly ignored, California one. Steve B is part of our local orthodoxy (whereas I clearly am not).

RE: # 149 – “I hate the calcification of science that has come with all the money that poured in after ww2.”

Me too.

191. bender

Re #187
Proof that I’m very far from knowledgeable in this area.

192. BKC

#186

All you know is it’s not global, and it’s not a trend

I guess I’m confused. I thought the “trend” in global temperatures since the ’70s was one of the pillars holding up AGW theory.

Needle in the haystack?

Yes.

I guess my point is, El Nino and the PDO have been around a long time (before industrialization). A natural climatic oscillation (El Nino) can obviously affect our metric of (A)GW – global average temperature. Maybe another natural climatic oscillation (the PDO) is also temporarily raising the global average temperature, and AGW is getting the blame for it.

AGW may affect the timing or intensity of the PDO but, as you say, the fingerprint will be very difficult to detect.

Start boning up on your GCMs. They are the tool that will be used to define, detect, and quantify these ‘fingerprints’.

Until GCMs can predict the timing and intensity of El Nino, the PDO, etc., I don’t think we can trust them to find the signature of AGW.

193. bender

Re #192

All you know is it’s not global, and it’s not a trend

I guess I’m confused. I thought the “trend” in global temperatures since the ’70s was one of the pillars holding up AGW theory.

This is the problem with brief posts. This was not my assertion. This is merely my characterization of the AGW “fingerprint” hypothesis, as I understand it. If AGW is a criminal and the globe is the crime scene and we are the detective, then we should not be searching for the criminal per se – the global trend – we should be looking for his fingerprints – the characteristic impact you would expect from temperature increase given that Earth’s oceans & atmosphere are a dynamic system (dominated by fluid flows with 3-character acronyms).

I took it one step further. Given that we don’t know what the fingerprint looks like, we are both trying to define the fingerprint, and detect it. This leads to a serious problem, which I think Judith is pointing to: demarcation is not easy when there is inseparability (or “hopeless convolution*”) among these various processes. Profiling the criminal and detecting him are two separate jobs. If you confound them you end up detaining or arresting a lot of innocent people needlessly.

*I think that GCMers would argue that GCMs are the deconvolution tool.

194. Pat Frank

#190 — “RE: # 149 – “I hate the calcification of science that has come with all the money that poured in after ww2.”

Me too.”

What calcification of science? Since WWII, science has been wildly productive and remains so.

195. bender

calcification = institutionalization, ossification: productive, yes, but stiff & tightly constrained by multiple factors

196. BKC

#193

Ok, I understand your point. (I think)

If I may use you analogy to explain (or maybe confuse) my point?
Everyone believes AGW robbed the bank and is trying to drum up evidence to convict (they currently have only a circumstantial case), when in fact, (s)he only picked up some of the money that the real thief (PDO) dropped on his way out the door.

AGW has been arrested, and there’s a crowd outside the jail clamoring to…

uh, well, you get the picture:)

197. bender

Re #101
That pdf on Dvorak’s method is, ummm, intriguing. Pretty opaque stuff. What’s with the heavy salesmanship writing style? Pretty funny stuff. Is this some insurance industry product?

This illustrates the growing need for transparency in the science. You can’t figure out what these methods are doing; it’s all packaging. You need a turnkey script (or in this case a spreadsheet would do) to figure out what calculations are really being done. Somewhere out there is an insurance fraud brewing.

198. bender

Re #196 Yes. PDO and AGW share the stash, and sometimes even wear the same clothes, so it’s hard to distinguish them (detection) and to apportion blame (attribution). Is AGW the mastermind (including master of disguise) or just an accomplice?

199. TCO

Maybe there is some financial opportunity here. If you can convince the markets that AGW is real and the insurance agencies then raise rates because of believing it, you can speculate (through some sort of reinsurance derivative) on the difference between the false over-belief and your real belief. Conversely, you could consider the markets as the most rational (can still be wrong, but have an incentive to TRY to be right) expectation of events. So if markets are pricing in more risk, that shows a likelihood that there is more risk. Gotta get my warmer betting bud (Annan) over here to think about this.

200. TCO

Re #161: Sorry for the offenses, Steve. I stopped posting when I saw it. My “time out” is over now, so I’m back here to play…

201. Pat Frank

#195 — Groups of humans produce institutions to forward their efforts jointly. That’s a criticism?

The rest of your description includes vague generalities that sound worrisome but have no specific meaning. Ossified and productive seem contradictory.

202. Steve Bloom

Re #196/8: I’m afraid there were multiple witnesses who observed PDO at home in bed with a full body cast at the time of the robbery.

203. Willis Eschenbach

A FIRST ANALYSIS OF THE NEW SANTER ET AL. PAPER

The new Santer et al. paper, Forced and unforced ocean temperature changes in Atlantic and Pacific tropical cyclogenesis regions, purports to show that sea surface temperature (SST) changes in the Pacific Cyclogenesis Region (PCR) and the Atlantic Cyclogenesis Region (ACR) are caused by anthropogenic global warming (AGW). They claim to do this by showing that models can’t reproduce the warming unless they include AGW forcings. In no particular order, here are some of the problems with that analysis.

1) The models are “tuned” to reproduce the historical climate. By tuned, I mean that they have a variety of parameters that can be adjusted to vary the output until it matches the historical trend. Once you have done that tuning, however, it proves nothing to show that you cannot reproduce the trend when you remove some of the forcings. If you have a model with certain forcings, and you have tuned the model to recreate a trend, of course it cannot reproduce the trend when you remove some of the forcings … but that only tells us something about the model. It shows nothing about the real world. This problem, in itself, is enough to disqualify the entire study.

2) The second problem is that the models do a very poor job of reproducing anything but the trends. Not that they’re all that hot at reproducing the trends, but what about things like the mean (average) and the standard deviation? If they can’t reproduce those, then why should we believe their trend figures? After all, the raw data, and its associated statistics, are what the trend is built on.

Fortunately, they have reported the mean and standard deviation data. Unfortunately, they have not put 95% confidence intervals or trend lines on the data … so I have remedied that oversight. Here are their results:

(Original Caption) Fig. 4. Comparison of basic statistical properties of simulated and observed SSTs in the ACR and PCR. Results are for climatological annual means (A), temporal standard deviations of unfiltered (B) and filtered (C) anomaly data, and least-squares linear trends over 1900–1999 (D). For each statistic, ACR and PCR results are displayed in the form of scatter plots. Model results are individual 20CEN realizations and are partitioned into V and No-V models (colored circles and triangles, respectively). Observations are from ERSST and HadISST. All calculations involve monthly mean, spatially averaged anomaly data for the period January 1900 through December 1999. For anomaly definition and sources of data, refer to Fig. 1. The dashed horizontal and vertical lines in A–C are at the locations of the ERSST and HadISST values, and they facilitate visual comparison of the modeled and observed results. The black crosses centered on the observed trends in D are the 2 sigma trend confidence intervals, adjusted for temporal autocorrelation effects (see Supporting Text). The dashed lines in D denote the upper and lower limits of these confidence intervals. (I only show Figs. 4A and 4B: the left box is Fig. 4A, and the right box is 4B.)

I have added the red squares around the HadISST mean and standard deviation, along with the trend lines and expected trend lines. Regarding Fig. 4A, which shows the mean temperatures of the models and observations, the majority of the models show cooler SSTs than the observations. Out of the 59 model runs shown, only three of them are warmer in both regions. Two of them are over two degrees colder in both regions, which in the tropical ocean is a huge temperature difference. Only one of the 59 model runs is within the 95% confidence interval of the mean.

Next, look at the trend lines in 4A. In the real world, when the Atlantic warms up by one degree, the Pacific only warms by about a third of a degree. Even if the mean temperatures are incorrect, we would expect the models to reproduce this behaviour. The trend line of the models does not show this relationship.

The standard deviations (Fig. 4B) are even worse. There are no model results anywhere close to the observations. The majority of the models tend to overestimate the variability in the Pacific, and underestimate the variability in the Atlantic. The variability is inherently larger in the Atlantic (standard deviation 0.35°) and lower in the Pacific (standard deviation 0.24°), but this difference is not captured by the models. The trend line (thick black line) shows that on average, the model Pacific variability is 90% of the Atlantic variability, when it should be only 60%. The light dotted line shows where we would expect the model results to be clustered, if they captured this difference in variability. Only a few of the models are close to this line.

3) All of this begs the question of whether we can use standard statistical procedures on this data. All of the data is strongly autocorrelated (Pacific, lag(1) autocorrelation = 0.80, Atlantic = 0.89). In their caption to Fig. 4 they say that they are adjusting for autocorrelation in the trend sigma. Unfortunately, they have not done the same regarding the standard deviations shown in Fig. 4B.
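(For readers wondering what the autocorrelation adjustment does: here is a minimal sketch, mine rather than Santer et al.’s actual code, of the standard AR(1) effective-sample-size correction, applied to a toy persistent series.)

```python
import numpy as np

def lag1_autocorr(x):
    # sample lag-1 autocorrelation
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def effective_n(x):
    # the common AR(1) adjustment: n_eff = n * (1 - r1) / (1 + r1)
    r1 = lag1_autocorr(x)
    return len(x) * (1.0 - r1) / (1.0 + r1)

# toy AR(1) series with phi = 0.8, similar in persistence to the SST anomalies
rng = np.random.default_rng(1)
phi, n = 0.8, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

print(lag1_autocorr(x))  # close to 0.8
print(effective_n(x))    # far fewer effective samples than n
```

With lag-1 autocorrelations of 0.80 and 0.89, the effective sample sizes shrink dramatically, which is why confidence intervals that ignore autocorrelation (like the unadjusted standard deviations) are too tight.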

In addition to being autocorrelated, the Pacific data is strongly non-normal (Jarque-Bera test, p = 2.7e-9). Here is the histogram of the Pacific data.

As you can see, the data is quite skewed and peaked. Thus, even when we adjust for autocorrelation, it is unclear how much we can trust the standard statistical methods with this data.
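To make the two diagnostics concrete, here is a minimal pure-Python sketch (not Willis's actual code, which would presumably be done in R) of the lag-1 autocorrelation and the Jarque-Bera statistic, applied to a synthetic AR(1) series with a coefficient in the range quoted above:

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def jarque_bera(x):
    """Jarque-Bera statistic; under normality it is roughly chi-squared with
    2 degrees of freedom, so large values signal skewness/excess kurtosis."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    skew = (sum((v - m) ** 3 for v in x) / n) / s2 ** 1.5
    kurt = (sum((v - m) ** 4 for v in x) / n) / s2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

# synthetic AR(1) series with phi = 0.85, between the quoted 0.80 and 0.89
random.seed(1)
x, v = [], 0.0
for _ in range(2000):
    v = 0.85 * v + random.gauss(0.0, 1.0)
    x.append(v)

print(round(lag1_autocorr(x), 2))  # close to the generating coefficient 0.85
print(round(jarque_bera(x), 1))    # modest here: the marginal distribution is Gaussian
```

On real data one would of course use R's acf() and a packaged Jarque-Bera test; the point is only that both quantities are cheap to compute and easy for a reviewer to check.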

4) There are likely more problems with this paper … but this is just a first analysis.

My conclusion? These models are not ready for prime time. They are unable to reproduce the means, the standard deviations, or the relationship between the two ocean regions. I do not think that we can conclude anything from this study, other than that the models need lots of work.

w.

… this should probably get a new thread …

204. David Smith

I hope Steve starts a new thread for Santer and other papers in general. I have some comments (later) on this one as well as a couple of other papers, including Hoyos.

A bit off-topic for a moment, there was an earlier discussion, somewhere, about the fingerprints of global warming. One fingerprint which I accept is shown in the chart here.

This is a proxy for the global height of the troposphere. It is the pressure layer at which the troposphere ends (and tropopause begins).

As the troposphere warms, it expands. The only direction it can expand is upwards.

(Now, the chart is reversed, meaning that a warming and expanding troposphere will push the chart line downwards. That is a bit confusing until one’s eyes adjust to it.)

You can see the trend towards a warmer troposphere that began in the 1970s. You can see cooling from a volcano about 1984 and again in the early 1990s, and you can see the El Niño in 1998.

Since 1998 the rate of rise appears to have slowed.

Now, what about the pre-1960 part? I don’t know if that is a data problem (this is from radiosondes, which were a rather new technology, and the global coverage may have been poor outside North America and Europe) or not. I tend to discount that data.

I think this plot is consistent with the satellite-derived temperature plots and less so with the reported ground station trends.

So, in my personal list of GW fingerprints and things to watch, I put tropopause height.

David

205. fFreddy

Re #204, David Smith
Interesting that the 1998 peak is the first time to exceed the late 1940s.

206. David Smith

Re #205

If they were credible, if they had sufficient global coverage and accuracy, then I’d say that is a piece of evidence that the 1950s troposphere was pretty warm, maybe about like the current period.

207. Pat Frank

#203 — Willis wrote: “My conclusion? These models are not ready for prime time.”

The common conclusion with regard to GCMs, upon testing. The common outcome, though, is that their outputs get published anyway, the specious calculational results get represented as facts, and the pseudo-facts become part of the on-going narrative of alarm; stoutly defended by highly qualified and credible scientists like North, Santer, and Bolin.

Just a question, Willis: Do you think Santer knows that, “Once you have done that tuning, however, it proves nothing to show that you cannot reproduce the trend when you remove some of the forcings”, and that, “the models do a very poor job of reproducing anything but the trends”, and that, “The trend line of the models does not show [the] relationship [in Figure 4A]. The standard deviations (Fig. 4B) are even worse. There are no model results anywhere close to the observations”, and that, “In addition to being autocorrelated, the Pacific data is strongly non-normal”?

If he doesn’t know all that, how do you think such field-basic considerations could have escaped him?

If he does know all that, how could he have gone ahead and submitted work rife with such problems?

Another question: How long did it take you to do the first analysis you presented here — about as long as a typical review, perhaps? (As a reviewer I typically go through a manuscript three times before writing up my analysis. Sometimes that includes calculations using data taken from figures). Are the problems you found something that a reviewer in the field would normally assess?

If so, how could the reviewers have missed them?

Would you consider writing up your assessment as a critical letter to the journal editor, at the end asking how such a flawed manuscript could have been published, and suggesting that perhaps the stable of reviewers ought to be refreshed?

208. Barney Frank

#202,

I’m afraid there were multiple witnesses who observed PDO at home in bed with a full body cast at the time of the robbery.

Yeah, multiple paid informants with a lot riding on AGW taking the rap.

209. Steve McIntyre

Separate post for #203 established.

210. Steve Bloom

Re #208: OK, Barney, describe for us exactly how it is the PDO could have had the effect being speculated on. Focus on the heat flux issue, please.

211. Steve Bloom

Re #207: Yep, it’s yet another analysis wherein with just a few hours work Willis demolishes hundreds if not thousands of hours of effort by some of the most eminent climate scientists. Please do submit it as a comment.

212. Pat Frank

#211 — Willis shows his work, Steve B. You merely pronounce.

213. Steve Bloom

Re #213: Then he just needs to show it somewhere that counts. Speed the day.

214. Pat Frank

215. bender

Re #211
I agree with your sense of wonderment at Eschenbach’s productivity, although my disbelief does not extend to the logic of his arguments. In part Willis’s prolific posting is a reflection of how many holes there are in the science. They’re easily picked at if you are bright enough and know the field well enough. You seem to imply his arguments are too quickly assembled to be trustworthy. An alternative explanation for his productivity and rapid response time is that he may have research assistants helping him. Not that it matters. All that matters is the content of his arguments. How he does it is a mystery.

Re #213
I’ve said it before, and it’s true: it is not easy to publish negations based on logic. Refutations based on new data or new analyses, yes. Rhetorical works that dismantle the flawed logic of a paper published by the ruling orthodoxy, no. Willis shouldn’t waste his time on publishing in peer-reviewed periodicals. A synthetic monograph, perhaps.

216. David Smith

Judith, here are a few comments on Hoyos et al (linked here), if you’re still around this cyberspace.

The paper looks for trends in factors other than SST which might account for the reported global increase in category 4 and 5 hurricanes. (Note: the key assumption is that storm count (NCAT45) has increased, which is a question under debate, but I accept it as fact here.)

The paper uses an interesting approach (a technique from information theory, with equations from Claude Shannon) which I cannot comment about due to my lack of understanding. (I saw nothing in the results that raised my eyebrows, so my assumption and belief are that the approach is fine.)

The paper reports that, while variability in the factors it checked (humidity, wind shear, stretching deformation) affects hurricane intensity over short periods (like seasons), there is no long-term trend evident in those factors that would account for an increase in NCAT45.

It is a well-written paper, with nothing I’d consider sleight-of-hand or headline-grabbing. Unlike Emanuel’s papers, there is no “hmmm” factor in this paper.

* First, my physical basis for what’s being examined. (I always need a physical model in my head before I can make sense of something.) An NCAT45 hurricane has:
1. an SST of at least 28°C;
2. moist, unencumbered inflow at 850mb and below;
3. no major intrusion of dry air at midlevels;
4. low wind shear of the storm envelope across the 700mb-500mb-300mb-200mb levels, so that the center is vertically stacked and not significantly tilted;
5. at 200mb a broad anticyclonic environment so that the storm can establish an unencumbered outflow;
6. and at least one excellent outflow channel.

* Hoyos et al looks at humidity in the 925 mb to 500 mb layer, and sees no trend. I would have broken that in two and looked at the humidity trend at the 925+850mb levels and also the 700+600+500mb levels.
The first grouping is the primary inflow region, and I bet the specific humidities there have trended upward. That is no surprise to me, as the humidities in these lowest parts of the atmosphere have probably increased as SSTs have increased.
The second group is a low-inflow region for a storm, and doesn’t play a major role unless dry air enters and disrupts the convection and thermal structure. From what I’ve seen, there may be a trend towards drier air, which is probably a mildly negative factor for NCAT45.

In my opinion, if a trend other than SST exists, it will be in #4, #5 and #6.

Hoyos uses the standard wind shear measure of 850mb to 200mb. To me, that is useful as a tool for examining genesis potential, but its value drops when looking at strong hurricanes. Many a potential major hurricane has been disrupted by shear at the 300mb or 400mb levels.
For researching NCAT45 storms, I would try to look at shear across the 200-300mb, 300-400mb, 400-500mb and 200-500mb layers and see if trends exist.

That is an enormous task, I realize, and perhaps more importantly, I question the accuracy of the available data. These are sparsely sampled regions which also tend to be graveyards for dying, small-scale circulations in the mid and upper levels. It is fairly common, even today, for forecasters to be surprised by unexpected shear in the tropics. I would bet that much of the reanalysis data in these regions comes from interpolation across large distances.

* Stretching deformation at 850mb is something I see as a useful tool when looking at storm genesis, but of less value when looking at whether a storm stays a category 2-3 or becomes a 4-5. I think that Dr. Webster is the expert on this, and he was part of the team, so there must have been a good reason for using it. I’d appreciate being pointed towards any papers that might explain its applicability to intense storms.

* I’d tend to want to use the prime months of intense hurricanes, rather than the entire season. For example, in the Atlantic I’d use ASO, rather than June-October, because the ASO upper level environment is typically different from the June-July and October patterns.

* I have no clue as to how to examine the available data for a trend in outflow channels.

Good paper! I applaud the search and hope it continues with a more-detailed look at upper level trends.

217. eduardo zorita

#104 Hoyos et al paper

I would like to offer some comments on the Hoyos et al paper. I am not an expert on hurricanes, and therefore perhaps some or all of these comments are not justified, but I would appreciate it if someone, or even Judith Curry, could shed some light on them.

In essence the paper finds that the correlation (or shared information) between hurricanes and SSTs is different from the link between NCAT45 and other variables, such as wind shear, specific humidity, etc. The SSTs are apparently the only variable capable of explaining the long-term trend in NCAT45. The variance explained by SST is also larger for the trended timeseries than for the detrended timeseries. The other predictors show no long-term trend.

The points that are unclear to me are:

Why is the correlation between detrended SST and NCAT45 smaller than with the non-detrended series? Is it because at interannual timescales factors other than SST influence NCAT45? Could an alternative interpretation also be valid, for instance, that the link between SST and NCAT45 is weak and both happen to have long-term trends which are, however, unrelated? Although there seems to be a theory explaining the link to SSTs (Emanuel), climate simulations for future climates seem to indicate a weak response (Knutson and Tuleya) or no response at all (Bengtsson et al) to higher SSTs due to anthropogenic forcing. In that case, the trend in NCAT45 would be linked to another variable not included in the Hoyos et al paper. In other words, a common trend with a weaker interannual link would not be enough to physically attribute the trend in NCAT45 to SSTs. It is relatively easy to find pairs of timeseries that exhibit precisely this behavior: weak interannual correlation, stronger correlation when non-detrended.

Another comment, only indirectly related to the hurricanes but one that caught my attention, is the lack of a long-term trend in tropical specific humidity in the presence of a strong trend in tropical SSTs. The water vapor feedback seems to be the strongest feedback and the one on which all climate models agree (recent paper by Soden and Held, J. Climate 2006), and yet specific humidity does not seem to be reacting to the increasing SSTs. Does this lack of trend in specific humidity support the lack of trend in tropospheric temperatures reported from the MSU analysis?

218. bender

Here is one for the GT hurricane seminar class to discuss tomorrow. These are the results of an ARMA(1,1) fit to Emanuel’s PDI series:

Coefficients:
          ar1     ma1  intercept
      -0.5957  0.7940    10.5582
s.e.   0.2568  0.1838     0.9568

sigma^2 estimated as 39.93: log likelihood = -179.52, aic = 367.03

It was Steve M who first commented on this. He interpreted the AR < 0 / MA > 0 pattern as one of “anti-persistence”. I’d love to hear a reasonable physical interpretation. What would this mean to a hurricane seasonal forecaster? And what does it mean that the SST does NOT exhibit this anti-persistence pattern? [See his original post #22 in bender & Willis on Emanuel.]
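One way to see what the fitted coefficients imply is to write down the theoretical ACF of an ARMA(1,1) process. A small Python sketch, using the standard textbook formula (this is an illustration, not bender’s R code):

```python
def arma11_acf(phi, theta, nlags=5):
    """Theoretical autocorrelation function of an ARMA(1,1) process:
    rho_1 = (1 + phi*theta) * (phi + theta) / (1 + 2*phi*theta + theta**2)
    rho_k = phi * rho_{k-1} for k >= 2 (standard textbook result)."""
    rho1 = (1 + phi * theta) * (phi + theta) / (1 + 2 * phi * theta + theta ** 2)
    acf = [rho1]
    for _ in range(nlags - 1):
        acf.append(phi * acf[-1])
    return acf

# the fitted PDI coefficients quoted above
acf = arma11_acf(-0.5957, 0.7940)
print([round(r, 3) for r in acf])
# lag 1 is small and positive (about 0.15); thereafter the signs alternate
# while damping geometrically: an above-average value tends to be followed
# by a swing back the other way, one way to read "anti-persistence"
```

For a seasonal forecaster, an ACF like this would suggest little usable year-to-year memory, with a mild tendency for active and quiet seasons to alternate.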

Go Yellowjackets.

219. bender

Bonus question: How do you explain the different results for ARMA modeling of the 1851+ HURDAT data for all storms vs. landfalling hurricanes (which are highly correlated, r = 0.62 vs. 0.47, before vs. after 1927):

All Storms:

Coefficients:
ar1 ma1 intercept
0.9797 -0.8231 9.5580
s.e. 0.0261 0.0636 2.0752

sigma^2 estimated as 12.41: log likelihood = -415.63, aic = 839.26

Landfalling only:

Coefficients:
ar1 ma1 intercept
-0.6631 0.7437 1.8125
s.e. 0.8040 0.7272 0.1213

sigma^2 estimated as 2.077: log likelihood = -276.58, aic = 561.16

Is hurricane occurrence persistent, or anti-persistent?
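As a rough check on what the two fits imply, one can simulate both parameter sets and compare sample lag-1 autocorrelations. An illustrative Python sketch with synthetic series (not the HURDAT counts themselves):

```python
import random

def simulate_arma11(phi, theta, n, seed=0):
    """Generate x_t = phi*x_{t-1} + e_t + theta*e_{t-1} with Gaussian e_t."""
    rng = random.Random(seed)
    out, xprev, eprev = [], 0.0, 0.0
    for _ in range(n + 200):  # 200-step burn-in to lose the initial state
        e = rng.gauss(0.0, 1.0)
        xt = phi * xprev + e + theta * eprev
        out.append(xt)
        xprev, eprev = xt, e
    return out[200:]

def lag1(x):
    """Sample lag-1 autocorrelation."""
    n, m = len(x), sum(x) / len(x)
    return sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1)) / \
           sum((v - m) ** 2 for v in x)

all_storms  = simulate_arma11( 0.9797, -0.8231, 5000, seed=3)
landfalling = simulate_arma11(-0.6631,  0.7437, 5000, seed=4)

print(round(lag1(all_storms), 2))   # clearly positive; the ACF decays slowly
print(round(lag1(landfalling), 2))  # close to zero at lag 1
```

The all-storms coefficients imply a slowly decaying, persistent ACF, while the landfalling coefficients imply essentially no lag-1 memory, which is the puzzle bender poses given how correlated the two count series are.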

220. Judith Curry

will get back to you tomorrow eve after the hurricane class.

221. Judith Curry

Here is the report from the Georgia Tech hurricane class discussion on the climateaudit hurricane threads. Two students were assigned to make presentations: Student #1 is a 2nd year graduate student, slightly older and with a mature and broad perspective; student #2 is a recent Ph.D. awardee with good knowledge of statistics.

Student #1 gave an overview of the blogosphere and climate-related blogging activities, and some history of the climateaudit site. He described climateaudit’s practice as:
1. attacking a paper on global warming, before reading it very carefully or understanding the context of the paper, assuming that the author is either dumb or has an “agenda”
2. a plethora of statistical activity of a fairly rudimentary nature
3. realization that the issues are complex
4. some attempts at trying to gain physical understanding of what is going on
5. realization that the issues are even more complex
6. give up and move onto something else

1. How influential is climateaudit?
2. What items have they raised that we should pay attention to?
3. What can we learn and avoid the next time?
4. Was Dr. Curry’s blogging time well spent, or did it legitimize and prolong a discussion that in the end hasn’t really accomplished anything?

Student #2 focused on the statistical issues surrounding WHCC and Emanuel papers. He raised the following main points:
1. The climateauditors do not seem to understand parametric vs nonparametric tests. The Kendall test (a rank-based test) used by WHCC does not require a normal distribution and is also fairly insensitive to serial correlation, so the emphasis on autocorrelation and distributions did not add anything.
2. The climateauditors show a general lack of physical interpretation and a lack of appreciation of the fundamentally Bayesian approach (if not explicitly, then implicitly) to climate science statistics, whereby physics and prior knowledge suggests your predictors.
3. ARMA (Spanish for weapon) is a brute force method used (not very productively) when nothing is known about the physics.
4. WHCC statistics were robust and appropriate; the Curry et al. BAMS article was unfairly criticized since the readers did not go back to the original paper cited in Figure 1, which explained what went into Fig 1 and how the trend was determined.
5. There were problems with Emanuel’s statistical analysis that should have been caught in the review process
6. Student #2 was pretty hot under the collar about the whole thing
7. “A lot of personal attacks. Not using bad manners… but still personal attacks. An example? Their opening lines on the hurricane thread: There are statistical issues in fitting trend lines to spiky data like this, which bender is well aware of and pointed out in the predecessor thread. If Curry is unaware of these issues, what does that say? If she is aware of these issues and ignored them, what does that say?”
8. “A biased blog that pretends it is not. In terms of most of the statistics they seem to know what they are talking about, but they should. Most of the stuff is part of basic statistical training. While they appear to be curious about some physics, there is a general lack of good physical interpretation.”

Topics raised in the discussion:

People reading only the thread leader and first few posts get the impression that the paper is wrong, when further down the thread the paper gets vindicated. This gives the casual visitor to the site a negatively biased impression of climate science.

One student raised the issue that statistical mistakes such as made by Emanuel (2005) should have been weeded out in the review process; suggested that a “statistical editor” was needed for climate journals to review the papers for basic sound statistical practices.

The students thought that the fact that the climateauditors did not have “external funding” to do this work diminished their credibility

The students agreed that statistics should be done correctly, data should be made publicly available (but extra work should not be done to make the data and programs convenient for the skeptics), and funding sources should be disclosed.

The “biases” of the climateauditors were discussed. Bender was perceived as a hardcore anti-warmer. SteveM and Willis were perceived as hardcore statistical skeptics, assuming that all analyses done by climate people are suspect. Steve Bloom was viewed as a somewhat heroic glutton for punishment. David Smith was viewed as the voice of reason.

I then went on to describe what I thought was useful and interesting about the site and about the hurricane threads, and the blogospheric approach to science. Everyone agreed that the climateauditors spotted things in the Emanuel paper that none of us had spotted.

Overall, the students were pretty negative about the site. I suggested that the two students post their comments; they did not want to, and I agreed to summarize the discussion (I was asked not to mention their names). They viewed blogging on climateaudit as entering a black hole of trying to defend yourself against a prejudged guilty verdict. Well, I am not exactly sure what I expected from this discussion, but it doesn’t sound like the younger generation of scientists are very keen to enter the blogospheric discussions on climate science.

Student #2 ended with 3 quotes and a joke:

Bayesian statistics is difficult in the sense that thinking is difficult. Donald A. Berry

Some people use statistics as a drunken man uses lamp-posts: for support rather than illumination. Andrew Lang

Facts do not “speak for themselves.” They speak for or against competing theories. Facts divorced from theories or visions are mere isolated curiosities. Thomas Sowell

Two statisticians were traveling in an airplane from LA to New York. About an hour into the flight, the pilot announced that they had lost an engine, but don’t worry, there are three left. However, instead of 5 hours it would take 7 hours to get to New York. A little later, he announced that a second engine failed, and they still had two left, but it would take 10 hours to get to New York. Somewhat later, the pilot again came on the intercom and announced that a third engine had died. Never fear, he announced, because the plane could fly on a single engine. However, it would now take 18 hours to get to New York. At this point, one statistician turned to the other and said, “Gee, I hope we don’t lose that last engine, or we’ll be up here forever!”

222. TAC

Judy, thank you for sharing your class comments with us. It is interesting, though a bit disheartening, to see how others perceive the debates here on CA.

I was a bit surprised on one technical point: Are you sure Student #2 said:

1. The climateauditors do not seem to understand parametric vs nonparametric tests. The Kendall test (a rank-based test) used by WHCC does not require a normal distribution and is also fairly insensitive to serial correlation, so the emphasis on autocorrelation and distributions did not add anything.

I agree that the Kendall test provides robustness against non-normality. But are you sure it is robust with respect to serial correlation?
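TAC’s doubt is easy to check by Monte Carlo: apply a Kendall-tau trend test, with the usual null variance that assumes independence, to trendless AR(1) series and count false positives. A hypothetical pure-Python sketch (the normal approximation and 5% threshold are standard, but this is not the WHCC procedure itself):

```python
import math
import random

def kendall_z(x):
    """Kendall's tau of x against time, with the usual large-sample z-score
    whose null variance assumes INDEPENDENT observations."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])
    tau = 2.0 * s / (n * (n - 1))
    var_tau = 2.0 * (2 * n + 5) / (9.0 * n * (n - 1))
    return tau / math.sqrt(var_tau)

def false_positive_rate(phi, reps=300, n=50, seed=4):
    """Fraction of TRENDLESS AR(1) series declared to have a significant
    trend at the nominal 5% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x, v = [], 0.0
        for _ in range(n + 50):  # 50-step burn-in
            v = phi * v + rng.gauss(0.0, 1.0)
            x.append(v)
        if abs(kendall_z(x[50:])) > 1.96:
            hits += 1
    return hits / reps

print(false_positive_rate(0.0))  # near the nominal 0.05 for white noise
print(false_positive_rate(0.8))  # well above 0.05: serial correlation bites
```

The tau statistic itself is distribution-free, but its nominal significance level is not robust to positive serial correlation, which is why pre-whitening or effective-sample-size corrections are commonly applied before Mann-Kendall-type trend tests.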

223. Steve McIntyre

Judith, thank you for your candid comments. I haven’t been an active participant in the hurricane threads. To my knowledge, no one has ever suggested that “extra work” be done to archive data for “skeptics”. I don’t know whether your students have waded into any of the proxy threads on MBH, Esper, etc. However, in the multiproxy threads that I’ve written that constitute the bulk of this blog, I would like to see an example of a thread which justifies the following comment:

People reading only the thread leader and first few posts get the impression that the paper is wrong, when further down the thread the paper gets vindicated.

I can’t think of any Hockey Team paper that has been “vindicated” in this fashion.

224. bender

Re #221
Very interesting, and, I hope rewarding for all involved. A follow-up might also be instructive and entertaining.

As Steve M points out, though, it’s multiproxy reconstruction science that is the focus of the blog and where serious improvements are needed. (The hurricane threads were a departure graciously permitted by Steve while he was busy traveling.) It would be great to do the same kind of thing with a tree-ring seminar group. Any open-minded instructors out there willing to experiment a little?

C’mon man, everyone’s doin’ it!

225. bender

Oh – and there are plenty of solid rebuttals for each of these points, if you Yellowjackets are up for a whippin’ by a Gator.

226. Barclay E. MacDonald

Judith, very interesting. Thank you for taking the time to do the post. And congratulations David Smith and Steve Bloom!

I do find the characterizations of the players ironic in light of the perceived unfairness of the opening comment of Bender about your piece and recognition of how it was corrected, but very interesting. I think I see the blog more as a process for discovering information than they do. But I appreciate their input, and will keep it in mind as I continue to lurk far in the background trying to learn and understand.

227. McCall

The two generalized claims made in 221, and the highlighted 222 and especially 223 need statistical justification in their own right. Re the claim in 223, just how many papers were analysed and affirmed as examples to arrive at such a generalization? Specifically, subsequently vindicated papers on CA … a list please?

And what were the class review and individual impressions of Prometheus, RC, Deltoid, Stoat and some of the other blogs? Perhaps the CA review is among the best or worst of the bunch. As a participant on some of these, I would also want to understand their perspective on the other blogs.

228. bender

Correction, friends, bender did not write that introduction. Go back and read. Careful with your detection & attribution algorithms. (See how the meme spreads?)

229. bender

Re #228
Here is the intro.

230. bender

1. The climateauditors do not seem to understand parametric vs nonparametric tests. The Kendall test (a rank-based test) …

3. ARMA is a brute force method used when nothing is known about the physics.

How is Kendall going to help you describe the physics? Check out the resident hurricane expert for an example of what reasoning on the basis of Kendall gets you.

231. McCall

Drs. McKitrick and Essex have referred to Sunstein’s “Law of Group Polarization” in TBS — such mutually reinforcing interaction/review was also spoken to by Dr Wegman in his congressional testimony, though principally and critically of the extended hockey team.

The comment about Mr. Bloom I find puzzling (bizarre). Mr. Bloom displays a discrediting lack of understanding of basic physics, and especially the thermodynamics of this debate; he barely conceals his cheerleading of natural disasters (e.g. hurricanes, in the hopes of making his AGW case); and his statistics have been found wanting on several threads. There are other blemishes, which Mr. McIntyre requests that I not revive; but other than that, he’s been a great contributor, ranking only slightly higher than Dano in the requisite background? If there was technical admiration of Mr. Bloom’s participation on CA, I’m confident one would have to mine his posts to find it.

232. Steve McIntyre

RE the snippy comment about ARMA testing: I’ve found it handy to routinely do ARMA(1,1) tests of time series as a type of quality control. For example, last summer I performed ARMA(1,1) tests on all the gridcells of Jones’ CRU data set, picked out extreme values, plotted a world map of the coefficients, and almost instantly found some bad data that CRU’s quality control had missed, e.g. one location intermittently had the decimal places in the wrong spot. I did the same thing this spring on the von Storch-Zorita pseudoproxy network and found problems with their sea-ice formation in the Weddell Sea. It’s not very highbrow, but it can be a very quick way of looking at a large data set. Obviously the originators hadn’t done these tests or they wouldn’t have had the defects.
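The kind of screening Steve describes can be illustrated with a toy example: an intermittent decimal-place error wrecks the autocorrelation structure of an otherwise smooth series, so a simple map of lag-1 autocorrelations (or fitted ARMA coefficients) flags the bad gridcell. A hypothetical Python sketch, not Steve’s actual procedure:

```python
import random

def lag1(x):
    """Sample lag-1 autocorrelation."""
    n, m = len(x), sum(x) / len(x)
    return sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1)) / \
           sum((v - m) ** 2 for v in x)

random.seed(5)
# a smooth, strongly autocorrelated "gridcell" temperature series
clean, v = [], 0.0
for _ in range(500):
    v = 0.9 * v + random.gauss(0.0, 0.3)
    clean.append(15.0 + v)

# the same series with an intermittent decimal-place error (value x 10)
bad = [x * 10.0 if random.random() < 0.02 else x for x in clean]

print(round(lag1(clean), 2))  # high, as expected for a smooth climate series
print(round(lag1(bad), 2))    # collapses: this cell would stand out on a map
```

A handful of misplaced decimal points dominates the variance and destroys the serial correlation, so the corrupted cell shows up as an extreme value on a map of coefficients even though each individual bad point might pass a naive range check.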

233. bender

Anyone relying on Kendall should not imagine themselves as head analyst for FEMA. This is a trillion dollar problem we’re talking about here – not to mention the unmentionable uniquenesses of the South. It’s time to get serious, people. Do you want to win a debate on AGW by publishing in Nature, using whatever statistics it takes, or do you want to forecast hurricane occurrence so that you can help people? Cheerlead the activist agenda if you choose. That victory is going to seem a hollow, distant memory when the next Katrina rolls in … whatever the magnitude of A in AGW.

(Whoever called me an anti-warmer hasn’t read my posts. At least 3 times I’ve dared to classify myself as an uncertain luke-warmer.)

234. bender

Re #232
AR(1) models were good enough for legendary dendrochronologist Hal Fritts. Does that mean he did not understand the mechanics of plant growth?

235. Steve McIntyre

Mann routinely uses AR1 to benchmark results. Does that mean, heaven forbid, that he doesn’t understand the physics? Who would have thought it?

236. Willis Eschenbach

Judith, thank you very much for your sharing of the class comments. The one I particularly loved was …

The students thought that the fact that the climateauditors did not have “external funding” to do this work diminished their credibility

Methinks I should put in for an Exxon grant to increase my credibility …

Your students’ characterization of me was half right. They said:

SteveM and Willis were perceived as hardcore statistical skeptics, assuming that all analyses done by climate people are suspect.

In fact, for me, all analyses are suspect, including my own. I’ve seen far too much garbage passed off as science to be anything but suspicious of all “scientific” studies, and I encourage your students to be the same. It was my Louisiana bayou grandmother who first pointed me in this direction by saying:

Child, you can believe half of what you see, a quarter of what you hear … and an eighth of what you say …

My very best to you and your students,

w.

PS – Like the others, I truly would like an example of a study that was demolished at the head of the thread, and then rehabilitated.

237. fFreddy

Re #221, Judith Curry

1. How influential is climateaudit?
2. What items have they raised that we should pay attention to?
3. What can we learn and avoid the next time?
4. Was Dr. Curry’s blogging time well spent, or did it legitimize and prolong a discussion that in the end hasn’t really accomplished anything?

Dr Curry, I’m curious, what were your answers? Particularly to #1?

238. fFreddy

Attn: Student #2
I’m sorry that your review made you so hot under the collar. Now that it is completed, why not engage in the discussion here and show us where we are wrong?

239. Barclay E. MacDonald

#228 and #226 Bender, so sorry. Mea culpa! I confused Steve’s comment at the beginning of the thread with your earlier discussion with Steve Bloom regarding Judith Curry selecting a 5-year period for a graphic analysis of hurricanes. Regardless of the class opinions, you’re still one of my heroes :)

240. Proxy

Re: 233

Only a trillion dollar problem? Oh, then it’s not that serious. In a world economy of approximately $37 trillion it won’t cause excessive growth. There are plenty of companies able to build those CO2 extractors, coastal defenses and arctic coolers, and to relocate entire cities. Naturally taxes will have to increase a little.

241. Paul Linsay

#219 Judith,

They viewed blogging on climateaudit as entering a black hole of trying to defend yourself against a prejudged guilty verdict.

They should definitely stay away from physics. I’ve been to more than one seminar where the speaker never made it past his second sentence before being pounded into dust. Your students don’t seem to understand that science at its highest levels is a bloodsport. When you are claiming to know the fate of the earth you better be prepared.

242. Judith Curry

A few clarifications motivated by your comments: The appreciation of Steve Bloom was associated with his defense of what climate researchers do (I was also puzzled by their characterization of Bender). The students (newcomers to the site) did not pick up on the essential proxy hockeystick focus of the site. The students regularly read realclimate; this is a good way for them to keep up with current topics in climate research (I think they mostly tend to read the initial post, and not go deeply into the blogging). Re #241, student #2, having gone through the entire Ph.D. process, is definitely a fighter and well accustomed to defending his research, but found your points (re the hurricane stuff) not very interesting owing to lack of knowledge of the data and the physics. For the 1st and 2nd year graduate students (and the undergrads), this was probably an education. The “attacks” that were of most concern were the personal ones, such as at the beginning of the thread where either my knowledge or my motives were arguably attacked. The “black hole” was the black hole of time: the students did not see the importance of defending anything on this site (there was some concern voiced about the inordinate amount of time I had spent blogging on the site), as opposed to other venues such as scientific conferences.

I chose to focus on climateaudit (as opposed to the other blogs) since I am intrigued by the “flat world” process of doing (or at least evaluating) science on the internet. I am also intrigued that there are so many people spending a huge amount of time doing research on these topics without any funding. I am somewhat surprised that the students were more “conservative” than me about blogging in general and this site in particular. It was an interesting experiment. I will continue to reflect on this.

243. Steve McIntyre

244. Jean S

Judith, I found your students’ comments interesting and somewhat surprising. A thing they do not seem to realize is that this site was founded on study of (and its main focus has been on) multiproxy temperature reconstructions, especially Mann’s work (only very recently have there been more general discussions). IMO, the lack of realizing this is reflected in comments like this:

SteveM and Willis were perceived as hardcore statistical skeptics, assuming that all analyses done by climate people are suspect.

Once you’ve gone through MBH98 (and related work) in any detail, you would be nuts not to develop the attitude described above.

Since Student #2 (S2) especially seems to have sufficient understanding of statistics, I’d like to ask him/her to do the following: please review/replicate (a part of) MBH98 yourself. I have one specific task in mind in particular:

Replicate Figure 18 of Mann et al., Global Temperature Patterns in Past Centuries: An Interactive Presentation, Earth Interactions, 4-4, 1–29, 2000. Once that is done, it is easy to experiment with different window lengths and lags (do it especially with a 100-year window and lags 0, 5, 10). Try also with “standard correlations”. Then I’d like to know S2′s opinion on
a) how well the procedure was described in the papers
b) statistical maturity of the approach
c) what the choice of the method and the illustrated window lengths (and lags) tell about the author(s) (read especially the text in MBH98)

Finally, I’d like to comment:

The climateauditors do not seem to understand parametric vs nonparametric tests.

The climateauditors show a general lack of physical interpretation and a lack of appreciation of the fundamentally Bayesian approach

I think these are unfair generalizations. Some of us have Ph.D.s in these statistical issues. Some of us have Ph.D.s in related physics. And as for the appreciation of the Bayesian approach, I cannot comment on the attitudes of others, but personally I could have a long discussion with S2 about the matter … but for now I only refer him/her here.

245. Jean S

246. bender

#219 was the last material post here; after that discussion drifted to “GT report card”

247. bender

This paper:

Li, W.K. & McLeod, A.I. (1987). ARMA modelling with non-Gaussian innovations, Journal of Time Series Analysis, V.9, pp.155–168.

discusses non-Gaussian ARMA models (gamma, lognormal), which may be more appropriate for hurricane data.

248. bender

Indeed, a histogram of annual hurricane count suggests a lognormal distribution. Formal tests of distribution fit to come.
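A quick way to eyeball that suggestion before running formal fit tests (a Python sketch rather than R, with made-up annual counts standing in for the HURDAT series): if counts are roughly lognormal, their logs should be roughly symmetric, so the skewness should shrink toward zero after the log transform.

```python
import numpy as np

# Hypothetical annual hurricane counts (NOT the HURDAT data), used only to
# illustrate the check: lognormal data are right-skewed, their logs are not.
counts = np.array([4, 6, 5, 8, 12, 7, 3, 9, 15, 6, 5, 10, 7, 11, 4, 8], float)

def skewness(x):
    """Sample skewness (the simple biased form is fine for a rough check)."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

# The MLE for a lognormal is just the mean/sd of the logged data
logs = np.log(counts)
mu_hat, sigma_hat = logs.mean(), logs.std()

print(round(skewness(counts), 3))  # raw counts: right-skewed (positive)
print(round(skewness(logs), 3))   # logged counts: much closer to symmetric
```

If the log transform kills most of the skew, a lognormal (or gamma) innovation model is at least plausible; a formal test would still be needed.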

249. Steve McIntyre

bender, pretty soon you’re going to get to Mandelbrot and wild distributions. It’s hard to think of a better candidate for wild distributions than hurricanes – they are obviously way off in the tail.

250. David Smith

Here are the PDIs:

2002: 6.35 (from Emanuel)
2003: 22.6 (from Emanuel)
2004: 30.2 (from Emanuel)
2005: 23.2 (calculated by DRS (me))
2006: 7.9 (projected by DRS)

Frankly, I expected 2005 to be higher, given the monster storms that struck the Gulf Coast. But, several of those were rather short-lived.

Also, 2005 included at least four “trash storms” which were of minimal tropical storm strength and lasted only a day or two. In earlier times, these may have passed for nothing more than squally weather.

251. bender

Re #249
You’ve mentioned that subject before and it’s something I know nothing about. I guess I’ll get there when I get there. (If we had that annotated bibliography I could just search “Mandelbrot” + “wild distributions” and effectively perform the Vulcan mind-meld.)

Re #250
Excellent, Smithers.

252. Steve McIntyre

I think that Willis posted up a data summary. Can you direct me to the posts? I’ll make a page with data links so that we can find them in this sprawling blog.

253. Steve McIntyre

#250. David, can you email me the back-up for the 2005 and 2006 calculations. I’ll post that up so that there’s audit trails for everything.

254. bender

Re #250
Do you have the HadISSTs for those years as well?
2002 28.01
2003 28.62

255. Steve McIntyre

I’ve got the MSU file open right now. What is the region definition?

256. David Smith

Re #252 Willis posted the PDI numbers in the thread, “Bender on Hurricane Counts Continued”, post #19.

Re #253. Certainly. I’ll do that this evening.

Re #255. If the region is Emanuel’s SST box, it is 6N to 18N and 20W to 90W.

257. bender

Re #250
These lower PDIs for 2005 and 2006 substantially weaken the AR(1) coefficient by reducing the trend slope.

258. bender

PDI is not as “wild” or “ill-behaved” as hurricane count.

259. bender

Note also that the endpoint-pinning problem in the smoothing is going to have the opposite effect of that pointed out by Landsea: it’s going to anchor the 2006 endpoint low, thus yielding a trend estimate that is biased low.

260. Steve McIntyre

#259. It’s going to be an amusing replication. It will be fun to see why Science rejects it.

261. bender

1. The PDI values predicted for 2007-09 are:

ARMA(1,1): 8.67, 11.93, 10.60
ARMA(5,1): 10.64, 18.30, 18.45

So model choice (and understanding of the physics) is critical.

2. Parametric trend analysis with the 2006 data says – doh – there’s now no trend in PDI:

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -74.06891  110.25327  -0.672    0.504
Year          0.04305    0.05575   0.772    0.443

Residual standard error: 7.108 on 56 degrees of freedom
Multiple R-Squared: 0.01053, Adjusted R-squared: -0.007136
F-statistic: 0.5961 on 1 and 56 DF, p-value: 0.4433

3. The graphic will have to wait until I’m out from the firewall.

4. The non-parametric equivalent will have to wait until I’m at the other computer where the latest version of R is installed.
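The gap between the two forecast paths in point 1 comes from how each model relaxes back toward the mean. A hand-iterated AR(1) shows the mechanism (a Python sketch with made-up `mu` and `phi`, not the fitted ARMA coefficients above):

```python
def ar1_forecast(last, mu, phi, steps):
    """Iterate x_{t+1} = mu + phi * (x_t - mu), with future shocks set to zero."""
    out, x = [], last
    for _ in range(steps):
        x = mu + phi * (x - mu)
        out.append(x)
    return out

# Starting from a low 2006-like value, the forecast climbs geometrically back
# toward the long-run mean. A higher-order model relaxes along a different
# path, which is why ARMA(1,1) and ARMA(5,1) disagree several steps out.
print(ar1_forecast(last=7.9, mu=15.0, phi=0.5, steps=3))
```

The low 2006 endpoint therefore drags the near-term forecasts down under any of these models, just by different amounts.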

262. bender

The pinning effect is quite significant given the huge difference between active 2005 and inactive 2006. Pinning with active 2005 as the endpoint makes the 1949+ trend significant with p < 0.1. With 2006 as the endpoint, p goes up to 0.2. So that’s a pretty major change for a single datum. (Notably, if you do the smoothing properly, with lopping off of endpoints, then the regression is much more stable.)

Emanuel’s analysis started in the 1970s, however, not 1949. So that’s another thing to check. His trend is much more significant than just p < 0.1. We’ll see next if the 2006 data with improper pinning effect is enough to push p over the insignificance threshold.
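The pinning mechanism is easy to demonstrate. A Python sketch (synthetic series, not the PDI data) comparing a centered running mean that lops off the ends with one that fills the ends from shrinking windows containing the raw endpoint:

```python
import numpy as np

def centered_mean_lopped(x, w):
    """Centered running mean, ends dropped (the 'proper' version)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def centered_mean_pinned(x, w):
    """Same, but edges filled from shrinking windows that include the raw endpoint."""
    h = w // 2
    return np.array([x[max(0, i - h): i + h + 1].mean() for i in range(len(x))])

# A flat series with one extreme final year (a 2005-like spike):
x = np.array([10.0] * 20 + [30.0])

# The pinned version lets the extreme final year pull its smoothed endpoint
# further from the background level than the lopped version does.
print(centered_mean_lopped(x, 5)[-1], centered_mean_pinned(x, 5)[-1])
```

Either way the last smoothed value leans on a single raw datum, which is why one season can flip the trend significance.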

263. TCO

[snip drunken idiot]

264. bender

LOL

265. David Smith

Folks, I found an error in my calculation of 2005 PDI. Basically, my manual addition was wrong. The corrected PDI figure for 2005 is 30.4, not 23.2.

My apology.

The 2006 estimated PDI (7.9) is OK.

I will forward the calculation information to Steve M. In advance of that, for anyone interested in doing the calculations, a season’s PDI is the sum of the PDI of the individual storms in that season. The PDI of individual storms is the sum of the maximum windspeed cubed, using six-hour interval windspeeds, for all six-hour periods when the wind is at least 40 mph, times a scaling constant.

The NHC records contain the windspeed information for each storm.
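The recipe in #265 can be written down directly. A Python sketch, where the 6-hourly wind values and the scaling constant `K` are made up for illustration (Emanuel’s actual constant and units differ):

```python
K = 1e-4          # arbitrary scaling constant, for illustration only
THRESHOLD = 40.0  # mph: six-hour periods below this don't count

def storm_pdi(winds_mph, k=K):
    """Sum of max windspeed cubed over all 6-hour periods at/above the threshold."""
    return k * sum(v ** 3 for v in winds_mph if v >= THRESHOLD)

def season_pdi(storms):
    """A season's PDI is just the sum of the PDIs of its storms."""
    return sum(storm_pdi(w) for w in storms)

# Hypothetical 6-hourly maximum winds (mph) for two short-lived storms:
storms = [
    [35, 45, 60, 75, 70, 50, 30],  # storm A
    [40, 55, 55, 45],              # storm B
]
print(round(season_pdi(storms), 2))
```

Note how the cubing means the 75-mph periods dominate the total, which is why the weak “trash storms” carry little weight in PDI.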

266. bender

No problem. Thanks for double-checking.

267. bender

Re #265
Analysis re-done. All it did was increase the amplitude of the pinning effect. Pinning effect w/ 2005 makes the 1949+ trend significant at p = 0.049.
Pinning effect w/ 2006 makes the 1949+ trend non-significant at p = 0.146.

Still working on the 1974+ analysis to make sure it’s comparable to Emanuel’s time-frame.

268. David Smith

One of those nights – now, the link button won’t work for me.

Anyway, here’s the address of an interesting-looking paper on PDI and global temperature

http://www.uwm.edu/~aatsonis/High%20frequency%20variability%20in%20hurricane%20power%20dissipation%20and%20its%20relationto%20global%20temperature.pdf#search=%22power%20dissipation%20index%202005%22

It attributes to ENSO a major role in PDI variability, which was clearly shown on the correlation map someone (bender or willis) generated last week.

By the way, that correlation map also showed (in my opinion) a strong PDI correlation with upwelling in part of the Southern Ocean, which is a signal of a strong thermohaline circulation, which matches the classical idea of a strong Atlantic thermohaline circulation leading to an upturn in Atlantic storms.

269. bender

Re #268
Yes, that was Willis’s map and we haven’t forgotten about those interesting correlations you pointed out. Next week we can talk about what to do for the detection & attribution part of the paper. (John Creighton made those filters for us 10 days ago and I just found them 2 hours ago. Doh.) The major thing to be decided is whether Judith wants to keep working with us on this. I think it’s a good idea still, but she seems kind of miffed at us. This paper you cite here is yet another one that would need to be read before ours is prepared. That’s exactly why we need someone like Judith (or maybe yourself?). Someone to plug the holes in our basic knowledge of the literature & interpretation of the data. A fundamental decision has to be made as to whether we want this to be purely a sensitivity analysis/audit, or do we take that leap towards something less pedagogical. The former is easier to prepare. The latter is more likely to get accepted and have some impact. Choices, choices.

270. bender

Re #267
1974+ analysis done.

When you cherry-pick your starting point as 1974 then there is such a pro-trend bias that the inactive 2006 season hardly influences the trend statistics.

So that’s it for sensitivity analysis. Now it’s time to move on to attribution using Creighton’s filters.

271. bender

Just to satisfy the GT crowd, a non-parametric M-K yields the same result:

1949+ PDI trend
tau = 0.00545, 2-sided pvalue =0.9572
tau = 0.00455, 2-sided pvalue =0.96617
tau = 0.0238, 2-sided pvalue =0.80553

1974+ PDI trend
tau = 0.296, 2-sided pvalue =0.016296
tau = 0.43, 2-sided pvalue =0.00058532
tau = 0.449, 2-sided pvalue =0.00040734

No trend if you look at all the data.
Sig. trend if you start the analysis in 1974.

i.e. 1974 is cherry-picked.
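For anyone without the R Kendall package handy, the core of the M-K statistic is simple to compute. A Python sketch with made-up data (and without the tie corrections the package applies):

```python
import math
from itertools import combinations

def mann_kendall(x):
    """Kendall's tau and a normal-approximation two-sided p-value (no tie correction)."""
    n = len(x)
    # S = number of concordant pairs minus discordant pairs, over all i < j
    s = sum((x[j] > x[i]) - (x[j] < x[i]) for i, j in combinations(range(n), 2))
    tau = s / (n * (n - 1) / 2)
    var = n * (n - 1) * (2 * n + 5) / 18            # variance of S under H0
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value
    return tau, p

trending = [1, 2, 1, 3, 4, 3, 5, 6, 5, 7]  # noisy upward drift
flat = [2, 1, 3, 1, 2, 3, 1, 2, 3, 2]      # no drift
print(mann_kendall(trending))  # large tau, small p
print(mann_kendall(flat))      # small tau, p well above 0.05
```

The test only asks whether later values tend to exceed earlier ones, so it is insensitive to the distributional wildness discussed above – but, as noted, it can’t answer the parametric “has PDI doubled?” question.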

272. David Smith

bender, I’ll be glad to do reading and offer thoughts on the physical processes behind the numbers. I hope Judith decides to participate with you guys.

Perhaps ironically, I’ll be spending most of today answering questions from auditors and helping them understand how we do things! These are safety auditors. Once a year we invite safety professionals from other locations to come in and look for things we’ve missed. It is rigorous and highly-detailed, and we answer a lot of questions that lead to nothing, but at the end of the day the audit process makes us a safer place.

273. Judith Curry

btw, I am not miffed at all. I am interested in the sociology of the site, as well as the science nuggets. Re the GT report card thread, I have been trying to explain my perception of the students’ reactions (which is apparently not 100% accurate), not my personal perceptions. This particular thread is 95% science, which is great.

Re PDI, I have been meaning to bring up an issue that I am not sure you have been considering. Prior to 1970, how to determine the wind speeds is being hotly debated by Landsea. For details, look at the supplementary material in Emanuel, and also the Landsea rebuttal. Ironically, Emanuel used Landsea’s 1993 correction, which lowers the windspeeds prior to 1970. Landsea now says that the correction should not be used. From what I can tell, Emanuel’s correction was a bit too large, but overall better than not using the correction. My suggestion is two time series, one with the correction prior to 1970 and one without; this would allow some consideration of the uncertainty in the data.

274. bender

Re #273
Great – I was a little worried.

Willis, this Landsea windspeed “correction” – is that the same correction that is the difference between “OrigPDI” and “AdjPDI”? (If so, then it is absolutely minuscule.)

To help refocus I want us to recall exactly what the question was that was asked of us: “has PDI really doubled in recent decades?”

Chew on that question. Meanwhile, Steve M is working on getting my R script turnkey-ready for posting. Anyone without R should download it, and be sure to get the “Kendall” package as well. (Although M-K can’t help you answer parametric questions such as the one asked of us.)

275. Dave Dardinger

re: 274

I downloaded R a while back and played with it a bit, but would be hard-pressed to say I was even familiar with it. Where would I have to go to get this “Kendall” package? (A link would be nice.)

Hmmm. maybe I’ll go off and pull R up for a bit and play with it.

276. Roger Pielke Jr.

Re: #273

If you look at Emanuel’s reply to Landsea in Nature he says (linked below):

“In correcting for biases in the original
Atlantic tropical-cyclone data, I relied on a
bias correction applied by Landsea, presented
as a table. I had fitted a polynomial to that
correction, as I felt that a continuous rather
than discrete correction was more defensible.
Landsea believes that this had the effect of
overcorrecting the most intense storms in the
pre-1970 record, and I accept his revision to
my analysis.”

However, in subsequent discussions with Kerry and Chris it was clear that Kerry’s “acceptance” was ambiguous, as there are in fact three curves:

A) Original
B) Landsea’s table correction
C) Emanuel’s polynomial

My understanding is that Kerry agreed to accept (B) but Landsea wanted (A). See Figure 1b here:

So I’d agree with Judy and suggest that you use A and B if you’d like to bound Emanuel and Landsea, and include C if you want to go beyond this.

277. bender

Re #275
All the “contributed packages” – like Kendall – are available from the CRAN download sites where you get R from. When I google “cran contributed packages” – there it is: scroll down to “k”.

278. Judith Curry

A key issue in whether or not PDI has doubled in recent decades is the pre 1970 correction.
If someone can tell me how to upload a figure to this site I can illustrate, but I can give you the punchline (based upon my naive statistical analysis):

A comparison of the average from the previous active period 1944-1964 with the average 1995-2006 for the NATL only: PDI is 63% greater in the latter period with the correction, and 31% greater using uncorrected data. So it’s a factor of 2 difference. I am not going to defend this statistical analysis here; I am expecting you guys to do all this correctly. I only include these rudimentary stats to show you that this issue does make a pretty big difference.

279. bender

Instructions on how to paste images here

280. jae

Bender, Judy, Willis, et al. If you pull this off, you will prove that science CAN be done on a blog.

281. bender

Re #276/#278
Be sure to check post #19 in this thread. Willis mentions a correction of OrigPDI to create AdjPDI. Sounds like it could be RPJr’s A vs B scenario? But the difference is tiny compared to the scenario Judith is outlining.

282. David Smith

A big question is, how reliable is the historical storm data?

I took a sampling of the number of central pressure “observations” for the Atlantic storms of 1930, 1940, 1950, 1960, 1970, 1980, 1990 and 2000. The central pressure is important because it correlates well with wind speed. The number of observations is important because it tells us how often the storm was sampled (by aircraft, satellite, ship or land station).

If there is little or no sampling of a storm, then the reliability of the data is low, in my opinion. We rely on best-guesses, rather than knowledge.

Here are the number of central pressure observations per storm (be sure to see my caveats below):

1930: 0.5 observations per storm
1940: 0.5
1950: 0.15
1960: 4.15
1970: 9.0
1980: 20.6
1990: 24.6
2000: 25.7

Now, what this tells me is that, before 1960, we basically have a lot of best-guesses (WAGs in some cases) about intensities.

This is for the Atlantic, where the best historical records exist. Global historical records are probably even more problematic.

Caveats:
* The data shown are simply a sample taken every ten years. It would have been better to do it annually (but that takes more time than I currently have).
* I used the Unisys database, as it is user-friendly and quick. It should match the NHC database perfectly, but maybe it does not. It would have been better to use the NHC database just in case.
* It would be better to calculate observations per storm-day, or per hurricane-day, so as to remove the effect of some seasons having longer-lived storms.

283. TCO

Nice analysis. You seem thoughtful.

284. David Smith

Test test (I’m having trouble with the link button)

285. Judith Curry

Re #279 bender, thanks for the instructions, but I am strictly a point-and-click type Mac user, so the instructions made no sense to me. I will see if I can get some in-house help on Monday so I can post plots.

Re #282 david, the TC data issue is a mess. Here is my understanding of the issues for North Atlantic and Pacific data:

North Atlantic:

since 1983: the data is very high quality

1970-1982: some degradation of the data is likely in the early part of the record, since the early satellite data was not well calibrated and there were some sampling issues. The number of storms and the number of storm days should be correct; the possible degradation issues are associated mainly with intensity.

1944-1969: aircraft reconnaissance era. Sampling was greater in the 60s than in the earlier period. I suspect that the total number of storms is accurate, but the number of storm days is underestimated. The intensity issue is impacted by the aircraft sampling as well as the issue I point out above re the disputed Landsea “correction” before 1970.

Prior to 1944: relying on random aircraft and ship obs, plus landfalling storms. The number of storms (especially the stronger ones) may have been captured, but intensity values are probably pretty useless.

GT students (#3 and #4) conducted an exhaustive study of the hurdat data base, comparing the best tracks data set with the designated landfall data (obtained from surface obs). They found substantial discrepancies between these two data sets in terms of storm intensity (many discrepancies in the early part of the record, but a significant amount also even in the past decade). Discrepancies like: the landfalling data set says it hit as a category 3, but the best tracks data set says the storm never exceeded category 1, etc. We have submitted about 20 pages of documentation on the discrepancies we’ve identified to the HURDAT best tracks committee (Landsea and co); we have not yet gotten any kind of reply from them. One idea is to use this info on the internal discrepancies to put error bars on the intensity data.

Re Pacific:
since 1987: data set has actually degraded since aircraft reconnaissance was discontinued. There are now two different satellite derived Pacific data sets, with significant discrepancies
1977-1987: golden age for pacific TC data; good satellite plus aircraft reconnaissance
1960-1977: some satellite coverage plus aircraft reconnaissance
1944-1959: aircraft reconnaissance only

The data situation really is a problem. My suggestion to the climateauditors is to start by taking Emanuel’s data set at face value, and see what conclusions can be drawn from this data set in the context of a rigorous statistical analysis. Then provide an assessment of something like the maximum amount of uncertainty in the data set that can still support a statistically significant trend.

286. Judith Curry

Re the Klotzbach paper (mentioned on roadmap), here is an abridged version of my comments that i posted previously on the tropical listserv. I respond in *** to specific talking points on the Klotzbach paper that were posted on Gray’s website

If the increases in TC activity found by Emanuel [2005] over the past 30
years (based on data from 1975-2004) and Webster et al. [2005] over the past 35
years (based on data from 1970-2004) are robust, one would expect to see similar
trends over the shorter time span evaluated in this paper (1986-2005),
especially since SST increases have accelerated in the past twenty years.

*****This is flawed logic, fallacy of distribution of the divisional type,
whereby you cannot assume that what is true of the class is true of its members.
You cannot dice up the 35 year period and expect the same statistical
relationships to be present in each segment. 35 years is marginally short to
identify a statistically significant trend (people who criticized our study
because the length of the data record is too short raised a legitimate point).
20 years is definitely too short. The reason for this is that both the atlantic
and pacific have large multidecadal modes. if you pick a period that is too
short, what you are seeing is one piece of the mode. The Pacific has a big
multidecadal mode that peaked around 1990, so most of the data outside the N.
Atlantic is biased by this particular sampling.

There is considerable disagreement about the data quality before the
middle 1980s. Best track datasets for the Western North Pacific, the North
Indian Ocean and the Southern Hemisphere before 1985 should be “used with great
caution” according to the authors of the best track dataset.

*****No rigorous uncertainty analysis has been conducted to date. The data in the western North Pacific, which has 40% of global hurricanes/typhoons, actually DECREASES in quality after 1987, when aircraft reconnaissance flights were discontinued in this region. So the argument for choosing only the data since 1985 owing to better data quality is substantially flawed.

With regards to ACE, there has been a large increase in ACE in the North
Atlantic basin since 1986. There has been a large decrease in ACE in the
Northeast Pacific basin since 1986. All other basins show small upward or
downward trends. Globally, there has been a slight increasing trend from
1986-2005; however, if only the past sixteen years are evaluated (1990-2005),
there has actually been a slight decreasing trend.

****the ACE confounds the number of hurricanes with the intensity (ACE
effectively includes both). There are several ocean basins where the number of
cyclones is actually decreasing, which would lead to a decrease in ACE (while at
the same time showing an increase in intensity). See also #1, re problems of
just looking at a 20 yr time period.

With regards to the number of Category 4-5 hurricanes, there has been a
large increase in North Atlantic storms but also a large decrease in Northeast
Pacific storms. When these two regions are summed together, there has been
virtually no increase in Category 4-5 hurricanes (i.e., 47 Cat. 4-5 hurricanes
from 1986-1995 and 48 Cat. 4-5 hurricanes from 1996-2005). For the globe, there
has been an approximate 10% increase in Category 4-5 storms from 1986-1995 to
1996-2005; however, most of this increase occurred from the late 1980s to the
early part of the 1990s in the Southern Hemisphere where some data quality
issues may have still been present. There has been very little change in the
number of Category 4-5 hurricanes since 1990, which is an agreement with Figure
4, panel A from Webster et al. [2005].

***** The more relevant metric would be the percentage of cat 4-5 hurricanes (which
is what Webster et al. focused on). If the total number of hurricanes is
decreasing outside the N. Atlantic, then you would expect some decrease in Cat
4-5. The fact that you still get an increase in cat 4-5 hurricanes implies a
substantial increase in the % of Cat 4-5. See also #1, re problems in looking
at just a 20 year time period.

There is a positive correlation (significant at the 99% level) between SSTs and ACE values and Category 4-5 hurricanes for both the Atlantic and the Northeast Pacific basins. However, correlations between SSTs and ACE values and Category 4-5 hurricanes for all other basins (i.e., Northwest Pacific, North Indian, South Indian and South Pacific) are not significant.

****This argument was debunked by the Hoyos et al. paper recently published in
science. While other factors such as wind shear etc are important determinants
of the intensity of individual storms and even for seasonal average intensity
(owing to factors such as El Nino), there is no trend in wind shear, humidity,
etc. The analysis of Hoyos et al. clearly shows that the global increase in
intensity shares information with the global trend in tropical sea surface
temperature (and not wind shear, etc.)

287. Steve McIntyre

Judith, WHCC refers to your analysis of cyclone statistics for the period 1970-2004. You cited the Joint Typhoon Warning Center as the original source of storm track data – I was interested in the aggregations that were used in WHCC. Thanks, Steve

288. TCO

Does the tropical listserv actually have good science discussion? The dendro listserv seems very weak. Mostly questions about what diameter borer to use. No in-depth discussion of stats or even of competing physical mechanisms. This site is much better than the dendro listserv.

289. Steve McIntyre

New post created for Klotzbach comments.

290. David Smith

One more commentary on the poor quality of storm data:

An important question is, how accurate is the storm count in the early years?

Weak storms get named and count as part of the storm count, just as much as a superhurricane. They can be hard to spot, though, as the gale winds of tropical storms often cover small areas, maybe only 30 to 100 miles wide.

Prior to 1900, we had no satellites or aircraft, nor an organized weather monitoring and recording system, nor modern instruments, nor many settlers along some coastlines, nor many (compared to today) ocean-going ships in the deep tropics. Detection, measurement and record-keeping were full of problems.

Below are some plots of named storms from the 2000s, simply for a visual impression. These systems were so weak, and many had such short lives, that even giving them a name probably took some debate. Nevertheless, they count as part of the storm tally in the 2000s.

In looking at these, blue means winds of 35mph or less and green means the wind was above 35mph. So, the part of their track which rises to “storm” classification is the green part.

In my opinion, these named storms from the 21st century would almost certainly have been missed in the 19th century.

Bret,2005

Gert,2005

Jose,2005

Lee, 2005

Earl, 2004

Mindy, 2003

Peter, 2003

Fay,2002

Josephine, 2002

Jerry,2001

Lorenzo, 2001

Beryl,2000

Chris,2000

Ernesto,2000

One of the good things about using ACE or PDI is that these storms carry little weight. Emanuel/Mann use storm count back to the 1870s in their 2006 paper (see Figure 2), and compare it to modern counts, but comparing the two is apples-to-oranges.

291. Judith Curry

Steve, the data set used by WHCC was compiled by coauthor Hai Ru Chang under the guidance of Greg Holland (who knows quite a bit about TC data, esp outside the atlantic). The raw data used in our analysis (with reference to the sources) is listed on our website http://www.eas.gatech.edu/research/hurricane_Webster.htm.

292. David Smith

Re #290 Looks like the links don’t work, sorry, but the storm names are listed. The spam filter has been gobbling my posts (hmmm, maybe too many of them).

293. Willis Eschenbach

Well, looking at these studies is giving me a headache. My latest one is High frequency variability in hurricane power dissipation and its relationship to global temperature, James B. Elsner et al.

I went to look for some of Elsner’s work because of Steve Bloom’s comment, viz:

Good news for bender: Elsner’s a *serious* statistics wonk.

Not.

They smooth the September PDI and the September ACR SST, as they describe here:

In order to investigate the high frequency relationship between PDI and Atlantic SST we first fit
a nonlinear trend to each of the signals by applying a regression smoother (Chambers and Hastie
1991) with a span of 44 years. A smoothed value at a given year is obtained by fitting a weighted
regression to the neighboring values within a chosen time span of the year, where the weights are a
decreasing function of time from the given year. Figure 2 shows the raw and smoothed time series
of annual hurricane PDI values. The coefficient of determination $(R^2)$ between the smoothed PDI
and smoothed SST series is 84% indicating a strong relationship. Results are in agreement with
those in Emanuel (2005) showing the unprecedented upswing in hurricane destructiveness related
to rising Atlantic SST.

Well, a couple of problems with that. First is that their dataset is 59 years long (1947-2004), and their filter is 44 years wide …

Second, I haven’t a clue what they’ve done with their PDI data. They say they got it from HURDAT, but it looks nothing like my calculation of the PDI from HURDAT. It also looks nothing like Emanuel’s PDI, or Landsea’s PDI. Elsner says:

We adjust the pre-1973 wind speeds to remove biases using the same procedure as described in Emanuel (2005) …

but this is not the case. Here’s the difference:

The effect of this change is to make the fit with the SST much better than that of the Emanuel data.

Third, they use, not the PDI as claimed in the quote above, but the cube root of the PDI, $\sqrt[3]{PDI}$, for their calculations … kinda defeats the purpose of a PDI, since it no longer measures power dissipation, but that’s OK. However, their claim in the abstract and in the quote above about “hurricane destructiveness” is not shown by a correlation with $\sqrt[3]{PDI}$, as that does not measure “destructiveness”.

Finally, they make no attempt to correct for autocorrelation; it doesn’t even get a mention. When you smooth two series and calculate their $R^2$ value, you also need to calculate the significance of that value. This is done by calculating an effective “N” for the series, and using the effective N to calculate the significance of the $R^2$ value. When you do this with a smoothed series, it rapidly loses significance as the smoothing increases.

For example, with an equivalent smoothing to the one they use, the $R^2$ between global temperature and $\sqrt[3]{PDI}$ (which they discuss at length) is 0.68, which is pretty impressive. Unfortunately, the significance is p = 0.08, not statistically significant …

w.
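The effective-N adjustment Willis describes can be sketched like this (Python rather than R; the lag-1 form Neff = N(1 − r1a·r1b)/(1 + r1a·r1b) is one common choice among several corrections, and the series here are synthetic white noise, not the PDI/SST data):

```python
import numpy as np

def lag1(x):
    """Lag-1 autocorrelation of a series."""
    x = np.asarray(x, float) - np.mean(x)
    return float((x[:-1] @ x[1:]) / (x @ x))

def smooth(x, w):
    """Running mean of width w -- the operation that inflates autocorrelation."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

rng = np.random.default_rng(1)
a, b = rng.normal(size=200), rng.normal(size=200)  # two independent series

sa, sb = smooth(a, 11), smooth(b, 11)
r = lag1(sa) * lag1(sb)
n = len(sa)
neff = n * (1 - r) / (1 + r)   # far fewer independent points than n
print(n, round(neff, 1))
```

With Neff in place of N, the usual t-test on a correlation shows exactly the effect Willis reports: an $R^2$ that looks impressive on the smoothed series carries very few independent degrees of freedom.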

294. Judith Curry

#293 Thanks Willis, this analysis is very helpful. Elsner’s hypothesis is that by using a Granger causality test he can show whether Atlantic SSTs are driven by global changes. I have trouble imagining a physical mechanism that says the global air temperature warms up and then the Atlantic follows, especially since the Atlantic is included in the global average. I inferred that this result was of some statistical interest, but was not associated with any kind of viable physical mechanism. Now you are stating that the statistics are dubious. This reinforces that both physically and statistically viable explanations are needed; if you have one without the other, the one you think you have is probably suspect.

295. David Smith

Steve M., perhaps post #293 could be a thread for the statistically-gifted on CA. Looks interesting.

296. bender

Good news for bender: Elsner’s a *serious* statistics wonk

That’s part of the problem in climate science – you don’t want “wonks”, you want statisticians. There are too many wonks. Bloom doesn’t take me seriously, but that’s not surprising; he is an uncertainty denier. There’s a lot of that going around.
