Bender on Hurricane Counts Continued

This continues the previous post, which is overweight in comments.

91 Comments

  1. Willis Eschenbach
    Posted Sep 20, 2006 at 5:52 PM | Permalink

    Re point 4 (in my last post in the previous thread, which also contains Emanuel’s data): my error. A closer examination reveals that the PDI data I sent are the adjusted PDI data, which include Emanuel’s decreases in the pre-1970 data. So, I don’t know why the SSTs are that different.

    It also highlights the fact that, even with the pre-1970 decreases in the PDI (which Emanuel agreed were too large), there is still no upward trend in the 1949-2004 PDI …

    w.

  2. bender
    Posted Sep 20, 2006 at 5:58 PM | Permalink

    Willis, do you have the data for N. Atlantic (Fig 1) & W. Pacific (Fig 2) separately? Also, is it ok if I work on this problem? Or is this “your turf”? (I’m easy.)

  3. David Smith
    Posted Sep 20, 2006 at 6:43 PM | Permalink

    Good work, Willis! This helps me understand some of my head-scratching over Emanuel’s plots.

    Bender, if you want a challenge, see if you can figure out how Emanuel constructed his Figure 3. It is a combo of North Atlantic and Northwest Pacific storm data, with half of the SST data coming from the Southern Hemisphere for some reason.

    I don’t know how one combines the storm data and I don’t know why one would use Southern Hemisphere SST when examining Northern hemisphere storms.

  4. bender
    Posted Sep 20, 2006 at 7:33 PM | Permalink

    see if you can figure out how Emanuel constructed his Figure 3

    I could try if I had the two data streams. (And we’re talking raw data, not smoothed.) Is the composite not simply a straight weighted average?

    Playing guessing games in order to reconstruct someone’s graph is not too interesting to me. Answering Willis’s questions about why the PDI=f(SST) relationship varies, and whether it is biased by data pre-processing, is, in contrast, very interesting.

  5. Willis Eschenbach
    Posted Sep 20, 2006 at 8:22 PM | Permalink

    Re 2, Bender, I only have the North Atlantic data, because that’s all that was in the Landsea paper … and even that data (PDI) is the adjusted data.

    You are more than welcome to work on any aspect of it.

    w.

  6. David Smith
    Posted Sep 20, 2006 at 8:30 PM | Permalink

    Re #4 Fair enough.

    An example of where I get stumped by Emanuel’s Figure 3 is his 1955-60 PDI. It rose about 100% (0.25 to 0.5) from 1955 to about 1958. Yet the Atlantic PDI for that period went slightly down while the Pacific was up maybe 30%. If the combined PDI is a weighted average, how can that doubling have happened?

    Same kind of thing for 1960 to 1965, but even worse.

    Must be some special math.

  7. bender
    Posted Sep 20, 2006 at 8:36 PM | Permalink

    Re #5 Thx Willis,
    Re #6 That’s SST rising, not PDI. (For some strange reason he’s switched the line styles between Figs 1&2 vs. Fig 3).

  8. Willis Eschenbach
    Posted Sep 21, 2006 at 1:48 AM | Permalink

    Here’s the unadjusted data for the years 1949-1969. From 1970 on, the data is the same.

    Year , PDI
    1949 , 10.55
    1950 , 31.52
    1951 , 16.77
    1952 , 10.36
    1953 , 11.07
    1954 , 13.36
    1955 , 25.17
    1956 , 6.02
    1957 , 10.12
    1958 , 14.57
    1959 , 8.52
    1960 , 12.13
    1961 , 27.79
    1962 , 3.54
    1963 , 13.9
    1964 , 20.95
    1965 , 9.95
    1966 , 17.29
    1967 , 13.95
    1968 , 3.98
    1969 , 17.77

    Post 1970 the data is the same as the adjusted data.

    With the unadjusted data, the situation is the same as with the adjusted data: the trend is not significantly different from zero.

    w.

  9. Willis Eschenbach
    Posted Sep 21, 2006 at 2:24 AM | Permalink

    Well, I’ve completed getting the data from the Emanuel paper. The interesting thing is this …

    The SST is correlated with the PDI (per his figures) with r^2 = 0.48. But because of the strong autocorrelation of both the PDI and the SST, there is no statistical significance to this correlation (p = 0.16).

    So there is no significant trend in the Atlantic PDI, and there is no significant correlation between the Atlantic SST and the PDI …

    Sigh …

    w.

  10. Willis Eschenbach
    Posted Sep 21, 2006 at 2:27 AM | Permalink

    Re-reading #9, a couple of comments. I was talking about his smoothed figures for the Atlantic. These have been smoothed using the 1-2-1 smoothing filter. It has been used twice on the SST data (near as I can tell), and four times !?! on the PDI figures.
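
    For anyone who wants to replicate the smoothing, a minimal R sketch of a centred 1-2-1 (binomial) filter applied repeatedly; the sst and pdi vectors below are placeholders for whichever annual series you feed it:

    # Apply a centred 1-2-1 smoother 'passes' times; interior points are
    # smoothed and the end points come back as NA unless handled separately.
    smooth121 <- function(x, passes = 1) {
      for (i in seq_len(passes)) {
        x <- as.numeric(stats::filter(x, c(1, 2, 1) / 4, sides = 2))
      }
      x
    }

    # e.g. sst_sm <- smooth121(sst, passes = 2)   # SST appears to be smoothed twice
    #      pdi_sm <- smooth121(pdi, passes = 4)   # PDI appears to be smoothed four times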

    w.

  11. Judith Curry
    Posted Sep 21, 2006 at 10:50 AM | Permalink

    Some fascinating discussion here, which I haven’t had time to fully digest yet. I am hoping to turn the discussion towards specific recommendations as to how the data should be analyzed and presented, and what kind of conclusions can actually be drawn from the data. Let’s forget for now the uncertainties in the data quality and focus on the statistical analyses. If you can suggest a better way of doing what we have been doing, I would be happy to redo the plots and conclusions, at least for the talks I give (giving climateaudit credit), and if all this substantially changes the conclusions that Emanuel, Webster et al. are drawing, then I would be prepared to write another paper on this (bender has apparently turned down my offer to coauthor a paper on this 🙂). So rather than just doing the due diligence and finding flaws in what was done, why not try to take it to the next step.

    Here is what I am concluding so far about how to approach these analyses (again, this is based on a cursory reading of all this; hopefully I will have more time next week):
    1) looking at an individual ocean basin like NATL, we find tons of autocorrelations, making it difficult to elicit a significant trend or correlation
    2) looking at the global dataset, the autocorrelation problem is much smaller, although there is an apparent strange 5 year autocorrelation in NCAT45 (no idea what that one is, very intriguing especially since it is global)
    3) inferring anything about changing hurricane statistics and SST is more robustly done on the global data set (if the forcing is global, then we expect to see a global signal, and the autocorrelation problem is smaller)
    4) plotting the data in 5 yr bins would have been ok if there hadn’t been 5 yr autocorrelation (4 yr bins would have been ok?)
    5) while it is ok to plot the data this way, the 4 yr bins leave us with too few degrees of freedom for a meaningful trend analysis (what can we actually conclude about the trend from the global data used by WHCC?)
    6) looking at global data relating some measure of hurricane intensity and SST makes sense, since the forcing is global and we have a theoretical relationship that should relate intensity and SST
    7) integral measures like PDI and ACE include not only intensity but duration and number of storms; given that we don’t have theory, especially for the number of storms, N may have a big influence on PDI, and the N part may not relate to SST (but in the NATL, there is a relation between N and SST)
    8) sorting out the actual physics of what is going on beyond the basic intensity/SST relationship seems to be more logically done at the level of the individual basin, where we can sort out the contributions from N, Ndays, and intensity and understand what is controlling them in terms of cyclones, atm dynamics, SST, etc. But we have statistical significance problems in just looking at individual basins.
    9) re the focus on the intensity and SST relationship, Emanuel’s potential intensity theory refers to wind speed (not to PDI, etc.). We need to come up with the best metric to reflect actual intensity (maybe average peak wind speed is the best). I do like NCAT45 since that is the part of the intensity distribution that is changing the most, but it seems vulnerable to observational errors.

    So if this discussion can help provide concrete suggestions for moving forward on this, climateaudit will have entered a new era of productively collaborating with domain experts to move the science forward, rather than just serving as whistle-blower/due diligence watchdog.
    I think this discussion is on the brink of making such a contribution.

    p.s. The “no increase in peak windspeed” found by WHCC is probably a red herring issue that can be ignored; the hurricane forecasters who assign such numbers to the storms have tended not to move very easily out of a prescribed range. The satellite reanalysis that is underway should give us some much better numbers to look at for that one.

  12. Ken Fritsch
    Posted Sep 21, 2006 at 11:19 AM | Permalink

    re: #5

    Willis, was the adjusted data from the Landsea paper the same data used by Emanuel? The most recent graph that you displayed showed standard deviations on the y axis for the PDI and SST time series. Is that correct?

    It was my view that we have unadjusted PDI data and two or three sets of pre-1973 adjusted data, with one set originally over-adjusted by Emanuel, a second set readjusted by Emanuel, and perhaps a third set adjusted according to Landsea. Could you show a graph of all versions together?

    Thanks for the Landsea paper. I have not completed reading it, but it seems to add much to the discussions. If my questions can be answered from reading this paper, just say so.

    Would it add to the discussion if we put together all the reasonable remaining questions we have about Emanuel’s paper and submit them through you to Emanuel?

  13. bender
    Posted Sep 21, 2006 at 11:59 AM | Permalink

    Re #10
    Read the text carefully, Willis! Emanuel [p. 687] craftily says: “This filter is generally applied twice in succession.” So I guess it is up to the reader to determine whether in any particular instance this general rule was not followed. I.e., he probably did apply it four times in some instances.

    Re #6 David Smith, did you catch #7?

  14. bender
    Posted Sep 21, 2006 at 12:03 PM | Permalink

    Re #9
    Willis, any chance you’d be willing to post the Emanuel data, so that we can make some progress on #11’s requests?

  15. bender
    Posted Sep 21, 2006 at 12:07 PM | Permalink

    Re 13
    Again, probably not a question of deceitfulness here, but Nature’s brutal page restrictions. I’m coming to the conclusion that climate science should never ever be published in Nature because their editorial policies require such drastic textual oversimplifications that it is very damaging to what is surely a very complex story.

  16. bender
    Posted Sep 21, 2006 at 12:18 PM | Permalink

    The SST is correlated with the PDI (per his figures) with r^2 = 0.48.

    1. Can we please stop using “r^2” and “correlation” in the same sentence?

    r is “correlation”
    r^2 is “regression coefficient of determination”

    So is it r=0.48 or r^2=0.48?

    2. Is it the smoothed data from the graphs, or the raw (unsmoothed) data that you’ve got, Willis? Because it’s the raw data we need.

  17. TCO
    Posted Sep 21, 2006 at 12:19 PM | Permalink

    Nature and Science have the same issue with all fields. I know of a fraud by a young turk academic that has not come to light yet. They are essentially a place for people to put press releases. If the work behind it is solid, fine, but there is not enough info to check on things well or even to use the report for future endeavours. They have basically become “letters journals” with the same problems that PRL and APL have in physics, but with more newsworthy content.

    It was written in 1955 that full papers in the specialty literature are the appropriate way to report results. Was true then. Is true now. Note that GRL has an L in it…

  18. bender
    Posted Sep 21, 2006 at 12:34 PM | Permalink

    TCO, why are you still contributing to this thread? I thought you said it was all played out? [Kidding. I agree with your point. The problem is the “Letters” papers are still viewed as the most prestigious, and that ain’t gonna change.]

  19. Willis Eschenbach
    Posted Sep 21, 2006 at 1:04 PM | Permalink

    Here’s all of the Emanuel data I have, plus the HadSST data.

    Year , Orig. PDI , Adj. PDI , Smoothed SST , Raw HadCRUT SST
    1949 , 9.25 , 10.82 , 0.83 , 0.32
    1950 , 26.2 , 30.64 , 0.83 , -0.26
    1951 , 14.25 , 16.66 , 0.84 , 0.07
    1952 , 8.9 , 10.41 , 0.85 , 0.23
    1953 , 9.8 , 11.46 , 0.82 , 0.07
    1954 , 11.5 , 13.45 , 0.75 , -0.27
    1955 , 21.35 , 24.73 , 0.63 , 0.17
    1956 , 5.3 , 7.08 , 0.52 , -0.27
    1957 , 8.7 , 10.26 , 0.51 , 0.09
    1958 , 12.5 , 14.62 , 0.6 , 0.18
    1959 , 7.5 , 8.77 , 0.73 , -0.21
    1960 , 10.2 , 12.96 , 0.85 , -0.07
    1961 , 22.9 , 26.78 , 0.89 , -0.12
    1962 , 3.2 , 3.74 , 0.84 , -0.01
    1963 , 11.95 , 14.11 , 0.76 , 0.02
    1964 , 18.05 , 20.77 , 0.71 , -0.34
    1965 , 8.6 , 10.06 , 0.73 , -0.14
    1966 , 14.9 , 17.42 , 0.77 , 0.09
    1967 , 12.05 , 14.09 , 0.76 , -0.15
    1968 , 3.65 , 3.18 , 0.7 , -0.11
    1969 , 15.65 , 18.3 , 0.66 , 0.29
    1970 , 3.75 , 3.75 , 0.67 , 0
    1971 , 9.25 , 9.25 , 0.67 , -0.15
    1972 , 2.9 , 2.9 , 0.63 , -0.36
    1973 , 4.15 , 4.15 , 0.6 , -0.11
    1974 , 7.5 , 7.5 , 0.61 , -0.39
    1975 , 7.8 , 7.8 , 0.64 , -0.32
    1976 , 8.6 , 8.6 , 0.67 , 0.06
    1977 , 2.95 , 2.95 , 0.7 , -0.07
    1978 , 6.6 , 6.6 , 0.7 , -0.25
    1979 , 11.85 , 11.85 , 0.64 , 0.24
    1980 , 19.3 , 19.3 , 0.51 , 0.29
    1981 , 9.95 , 9.95 , 0.42 , 0.06
    1982 , 3.35 , 3.35 , 0.41 , -0.15
    1983 , 1.5 , 1.5 , 0.42 , 0.01
    1984 , 7.45 , 7.45 , 0.47 , -0.23
    1985 , 9.3 , 9.3 , 0.52 , -0.01
    1986 , 3.5 , 3.5 , 0.57 , -0.07
    1987 , 2.65 , 2.65 , 0.61 , 0.5
    1988 , 13.1 , 13.1 , 0.71 , 0.23
    1989 , 16.45 , 16.45 , 0.77 , 0.09
    1990 , 8.5 , 8.5 , 0.7 , 0.52
    1991 , 3.45 , 3.45 , 0.59 , -0.09
    1992 , 8.85 , 8.85 , 0.51 , -0.08
    1993 , 3.5 , 3.5 , 0.52 , 0.14
    1994 , 2.65 , 2.65 , 0.6 , -0.14
    1995 , 24.8 , 24.8 , 0.7 , 0.64
    1996 , 19.55 , 19.55 , 0.8 , 0.33
    1997 , 4.15 , 4.15 , 0.83 , 0.31
    1998 , 21.75 , 21.75 , 0.81 , 0.62
    1999 , 21.85 , 21.85 , 0.75 , 0.53
    2000 , 12.2 , 12.2 , 0.65 , 0.12
    2001 , 11.25 , 11.25 , 0.59 , 0.41
    2002 , 6.35 , 6.35 , 0.61 , 0.14
    2003 , 22.6 , 22.6 , 0.71 , 0.87
    2004 , 30.2 , 30.2 , 0.83 , 0.8

    Data Notes:

    1. Adjusted yearly PDI is from Landsea’s “Communications Arising”
    2. Original PDI is reconstructed from the Landsea adjusted version, using Landsea’s data on the smoothed version of the original PDI as a template.
    3. Smoothed SST is from Emanuel’s paper, for September in 6°N-18°N, 20°W-60°W.
    4. Raw HadSST data is from the HadSST database (available online), and is for the month of September for the area 5°N-20°N, 20°W-60°W.

    Supposedly, all of Emanuel’s data is available on his website. However, it is in “NetCDF” format, and I’m on a Mac, and I can’t find anything to do the translation for the Mac … anybody got ideas? Is there a script in “R” that would do it?
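
    One possible route, sketched here on the assumption that the ncdf4 package is available: open the file in R, list its contents, and dump the variable of interest to CSV. The file and variable names below are made up for illustration, not Emanuel’s actual ones.

    # install.packages("ncdf4")          # NetCDF reader for R; runs on the Mac
    library(ncdf4)
    nc <- nc_open("emanuel_data.nc")     # hypothetical file name
    print(nc)                            # lists the variables and dimensions in the file
    pdi  <- ncvar_get(nc, "pdi")         # hypothetical variable name
    time <- ncvar_get(nc, "time")
    nc_close(nc)
    write.csv(data.frame(time = time, pdi = pdi), "emanuel_pdi.csv", row.names = FALSE)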

    Best to all, Judith, welcome back.

    w.

  20. Willis Eschenbach
    Posted Sep 21, 2006 at 1:07 PM | Permalink

    IMPORTANT NOTE: I just noticed I switched the labels on the damned thing. The Original PDI has the higher values in the pre-1970 period, and thus is the third column, with the Adjusted PDI in the second column.

    w.

  21. bruce
    Posted Sep 21, 2006 at 1:12 PM | Permalink

    re #11: I am a lay person in this area. However, I truly respect the attitude demonstrated here by Judith Curry. Her attitude is as a scientist’s attitude should be, and it is clear that she is being treated with respect by the CA crowd.

    It would be great if those on CA capable of contributing to the issue would be prepared to work constructively with her, Emanuel, Landsea et al in a collaborative effort to find out what is really going on, and to strengthen the robustness of the scientific conclusions.

  22. bender
    Posted Sep 21, 2006 at 1:12 PM | Permalink

    Re #19
    It’s the unsmoothed data we really need. Thanks all the same, Willis.

    Notably, the raw data are not available from the “Papers, Data, and Graphics” page at Emanuel’s website. (There he cites the use of a 1,3,4,3,1 filter, which will be problematic if the lag-5 autocorrelations are significant.)

  23. TCO
    Posted Sep 21, 2006 at 1:17 PM | Permalink

    Why won’t you write a paper with Judy?

  24. bender
    Posted Sep 21, 2006 at 1:17 PM | Permalink

    Re #21
    I am not part of the “CA crowd” and have been treated with respect as well. This despite the fact that I declared at the outset my belief that the A in AGW is positive and non-zero.

  25. bender
    Posted Sep 21, 2006 at 1:18 PM | Permalink

    Re #23
    Have you stopped beating your wife?

  26. TCO
    Posted Sep 21, 2006 at 1:21 PM | Permalink

    I’m still giving it to her good. I’m just curious; she has asked a few times, and it seems from her last comment that her requests for merger are unrequited.

  27. Willis Eschenbach
    Posted Sep 21, 2006 at 1:35 PM | Permalink

    Judith, I read your comment about autocorrelation in the North Atlantic, but I don’t find the problem in the data. You say:

    1) looking at an individual ocean basin like NATL, we find tons of autocorrelations, making it difficult to elicit a significant trend or correlation

    However, the Emanuel data for example, which has strong autocorrelations, still shows significant relationships (r^2=0.24, p = .006 on the raw SST vs original PDI data, not the smoothed), and significant temporal trends (for the SST data, but not the PDI data). Here’s the autocorrelation of the SST and the original PDI (adjusted PDI is only slightly different).

    bender, I was talking about r^2, the regression coefficient of determination.

    Re #12, I normalized the data to make the relationships clear, so the Y axis is in standard deviations. To me, this is preferable to, say, Emanuel’s practice of multiplying the two datasets by some arrangement of mx+b to bring them into line, as this is subject to … well … subjectivity, as far as chartsmanship goes.

    w.

  28. David Smith
    Posted Sep 21, 2006 at 1:36 PM | Permalink

    Re #11

    Hello, Judith, good to see your input and your points are well-taken by me. I’d like to answer in two parts, due to some personal time constraints. I cannot comment on the statistical issues (my interest is meteorology) but will do so on other aspects.

    Regarding Emanuel’s approach:

    * For Emanuel’s Figure 1, the approach should be to look at the Basin surface temperature, which means the area covered in Webster et al’s Atlantic box plus the Gulf of Mexico, plus the Bahama region. The latter two are, along with the western Caribbean, the warmest regions of the Basin and the ones which account for much of the PDI. I suggest dropping the 6N to 9N, which is ITCZ-related but which does not seem to be part of his hypothesis so far as I can tell.

    * For Figure 2, expand the SST box westward and northward to include the warmer regions of the western Pacific. The reasoning is the same as above.

    * For Figure 3, exclude from the SST region those basins, and that Hemisphere, in which the PDI storms did not occur.
    (As mentioned before, the PDI in this one is a head-scratcher for me and doesn’t seem to jibe with the PDI data from the two basins. I recommend explaining the method of combining and smoothing PDIs. Perhaps he did and I have missed it.)

    * On the Emanuel/Mann paper, drop the use of pre-1900 storm data. (I also have doubts about the completeness of storm count east of 60W prior to 1940, as mentioned before.)

    Once I get some time, in a day or so, I’ll do a part 2 on this.

    David

  29. Willis Eschenbach
    Posted Sep 21, 2006 at 1:37 PM | Permalink

    Re 23, that is the unsmoothed data, with the exception of the SST data from Emanuel. However, I have supplied the unsmoothed SST data (for a slightly larger area) from HadSST … what’s the problem?

    w.

  30. bender
    Posted Sep 21, 2006 at 1:42 PM | Permalink

    No problem, Willis. (I just figured that out for myself a minute ago.)
    Thanks for the post & multiple clarifications. Much appreciated.

  31. bender
    Posted Sep 21, 2006 at 1:44 PM | Permalink

    David Smith, you say:

    the PDI in this one is a head-scratcher for me

    Have you read #7?

  32. bender
    Posted Sep 21, 2006 at 2:17 PM | Permalink

    Suspicions confirmed.

    For both data series (SST, PDI) filtering (“smoothing”) serves to amplify weak 5-yr periodicity (ENSO?) up to the level of a true cycle. Do it twice and you get a strong decadal cycle. Fifty years of data divided into cycles of ten years yields roughly 5 independent observations, or 4 effective degrees of freedom to do the regression with.
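
    A minimal R sketch of that mechanism, using nothing but white noise, so any apparent “cycle” or trend is manufactured by the filtering rather than by the data:

    set.seed(1)
    x <- rnorm(56)                                  # 56 "years" of pure white noise
    s <- x
    for (i in 1:4) s <- stats::filter(s, c(1, 2, 1) / 4, sides = 2)   # four 1-2-1 passes
    s <- as.numeric(na.omit(s))

    acf(x, plot = FALSE)$acf[2]   # lag-1 autocorrelation of the raw noise: near zero
    acf(s, plot = FALSE)$acf[2]   # after smoothing: large, hence far fewer effective d.o.f.
    spec.pgram(s)                 # spectrum now dominated by low frequencies (Slutsky effect)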

    Script & graphics of full analysis to come tomorrow.

    Hurricane insurance, anyone?

  33. bender
    Posted Sep 21, 2006 at 2:35 PM | Permalink

    Re #19
    Wait a sec. Willis, if these are really the source data for Emanuel’s (2005) PDI graphs, then how come the PDI’s tabulated in #19 are U-shaped over time, whereas Emanuel’s are more HS-shaped? Do we have our wires crossed?

  34. Steve McIntyre
    Posted Sep 21, 2006 at 2:39 PM | Permalink

    I’ll put the relevant data in one folder. Could someone give me a list of data sets to collate as I’ve lost track of this thread.

  35. bender
    Posted Sep 21, 2006 at 2:45 PM | Permalink

    #34 good idea, but let’s get straightened out first re #33.
    Also, we still don’t have Atlantic (Fig 1) vs. Pacific (Fig. 2).

  36. bender
    Posted Sep 21, 2006 at 2:59 PM | Permalink

    Re #33
    Same complaint about the variable “Smoothed SST”. It is U-shaped, but “Raw HadCRUT SST” is not. Therefore the one could not have been calculated from the other by smoothing alone.

  37. Willis Eschenbach
    Posted Sep 21, 2006 at 3:19 PM | Permalink

    Re 36, the “Smoothed SST” is from Emanuel’s data, while the HadSST data is from HadSST. I don’t have the raw data for the Emanuel “Smoothed SST”, nor do I know why they are different.

    Re 33, the HS shape is from two factors. One is that he has adjusted the pre 1970 values to reduce them, and the other is that his final data points are exaggerated by the filtering. The first point is too low, and the last is too high, which makes the HS shape …

    w.

  38. Dave Dardinger
    Posted Sep 21, 2006 at 3:19 PM | Permalink

    re: #33,35

    I thought that the Lag(yrs) in the vertical axis shows that this is something other than the actual comparison of strength or some other real-world number. Some sort of statistical thingee. Therefore it’s not surprising the shape is different.

  39. bender
    Posted Sep 21, 2006 at 3:27 PM | Permalink

    “Lag(yrs) in the vertical axis” ???? Not sure what you’re referring to, Dave.

  40. bender
    Posted Sep 21, 2006 at 3:33 PM | Permalink

    Re #37

    the “Smoothed SST” is from Emanuel’s data

    Aha – I think I see half the problem. The data listed here correspond to the dates 1940-1996, not 1949-2005. Looks like maybe a column cut-and-paste error or something like that, Willis. The uptick from 1995-2005 is totally missing from the tabulated data in #19. Carefully compare the third column of #19 to the Fig 1 solid line in Emanuel (2005).

  41. bender
    Posted Sep 21, 2006 at 3:37 PM | Permalink

    Just to clarify, #40 is in reference to the “Smoothed SST” variable only.

  42. bender
    Posted Sep 21, 2006 at 3:58 PM | Permalink

    Re #22
    What Emanuel has done is to post the Atlantic basin 1970-2005 data on his website. This spans the “30-year” time-frame mentioned in the title of the Nature paper (“Increasing destructiveness of tropical cyclones over the past 30 years”), but does not cover the earlier data period 1930-1970, which his graphs cover. Thus it is not possible to do a sensitivity analysis on the cutoff date. We are stuck with the cherry-picked cutoff date of 1970, unless we can straighten out the mess referred to in #40.

    O, for a turnkey script.

  43. Willis Eschenbach
    Posted Sep 21, 2006 at 4:19 PM | Permalink

    Re #40, my bad. Here’s the full dataset

    Year , Smoothed SST
    1931 , 0.79
    1932 , 0.75
    1933 , 0.7
    1934 , 0.67
    1935 , 0.72
    1936 , 0.84
    1937 , 0.92
    1938 , 0.91
    1939 , 0.87
    1940 , 0.83
    1941 , 0.83
    1942 , 0.84
    1943 , 0.85
    1944 , 0.82
    1945 , 0.75
    1946 , 0.63
    1947 , 0.52
    1948 , 0.51
    1949 , 0.6
    1950 , 0.73
    1951 , 0.85
    1952 , 0.89
    1953 , 0.84
    1954 , 0.76
    1955 , 0.71
    1956 , 0.73
    1957 , 0.77
    1958 , 0.76
    1959 , 0.7
    1960 , 0.66
    1961 , 0.67
    1962 , 0.67
    1963 , 0.63
    1964 , 0.6
    1965 , 0.61
    1966 , 0.64
    1967 , 0.67
    1968 , 0.7
    1969 , 0.7
    1970 , 0.64
    1971 , 0.51
    1972 , 0.42
    1973 , 0.41
    1974 , 0.42
    1975 , 0.47
    1976 , 0.52
    1977 , 0.57
    1978 , 0.61
    1979 , 0.71
    1980 , 0.77
    1981 , 0.7
    1982 , 0.59
    1983 , 0.51
    1984 , 0.52
    1985 , 0.6
    1986 , 0.7
    1987 , 0.8
    1988 , 0.83
    1989 , 0.81
    1990 , 0.75
    1991 , 0.65
    1992 , 0.59
    1993 , 0.61
    1994 , 0.71
    1995 , 0.83
    1996 , 0.94
    1997 , 1.01
    1998 , 1.05
    1999 , 1.03
    2000 , 1
    2001 , 1
    2002 , 1.15
    2003 , 1.37

    Thanks for that, bender.

    w.

  44. bender
    Posted Sep 21, 2006 at 4:20 PM | Permalink

    I suggest an insider, like Judith Curry, write to Emanuel to acquire the raw, unsmoothed, original data used in Emanuel’s (2005) Figs 1 & 2, and that a collective effort be made to analyse the sensitivity of his conclusions to any or all of the following:
    -degree & method of smoothing
    -degree of 3-7y partial autocorrelation in underlying raw data
    -degree of correction for reduced degrees of freedom due to enhanced 1st order autocorrelation
    -choice of start date (as opposed to cherry-picked 1970)
    -choice of trend model
    -choice as to whether basins are considered jointly or separately
    -choice of spatial data frame within basins

    The result would be a constructive reply for submission to Nature.

  45. bender
    Posted Sep 21, 2006 at 4:24 PM | Permalink

    Re #43
    No, thank you, Willis. It takes a lot of effort to get all this data together and to read and reply to these endless demands of ours. And it takes some guts to admit when you’ve made a rock-simple error. You are a prince.

  46. Willis Eschenbach
    Posted Sep 21, 2006 at 4:56 PM | Permalink

    Re #42, bender, as you note, Emanuel has posted an Excel spreadsheet with his 1971-2003 data. While useless for our purposes in this thread, I found this very valuable for checking my own work.

    Because it is often such a struggle to get authors to release their data, I have taken up the practice of extracting the data directly from their graphs. I take their graph, blow it up, and using a graphics program, place a locus at each data point. This gives me the data in a very exact way. To verify the results, I use Excel to graph the extracted data, copy the graphed line, and overlay it on the original. This lets me spot any mistakes.

    But how exact is this procedure? Well, this spreadsheet of Emanuels lets me check my work in this case. I previously sent you the data from his 1970-onwards graph. Now I have his actual data to do the comparison. Here are the figures for the errors due to my extraction procedure:

    DATA RANGE 26.8°C – 27.6°C

    Average error , .002°C

    Std. Dev. error, .003°C

    Max Abs error, .009°C

    Correlation, my extracted data with his data = 0.99987

    In other words, my procedure is way more than accurate enough; the biggest single error it produces is about one hundredth of a degree C.
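
    For what it’s worth, the verification step itself is a two-liner in R, with extracted and actual standing for the two aligned series:

    err <- extracted - actual
    c(mean_err = mean(err), sd_err = sd(err), max_abs = max(abs(err)), r = cor(extracted, actual))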

    On other matters, because of the shortness of the record and the high autocorrelation due to the smoothing, there is NO SIGNIFICANT TREND in either the SST or the PDI datasets for the 1970-on data.

    w.

  47. bender
    Posted Sep 21, 2006 at 4:59 PM | Permalink

    The Atlantic basin relationship is not nearly as strong as Emanuel (2005) suggests. His excessive double filtering is indeed bringing out a decadal cycle that is not present in the original data series (both of them). The high degree of decadal coherence means from 1970+ he’s got ~4 independent observations, and thus only ~3 effective degrees of freedom to assess the significance of that “strong” r=0.91 correlation. Take away the smoothing and the 1949+ correlation drops to r=0.49. Which means the relationship is probably valid (depending on the results of the spatial-framing sensitivity analysis), but not even close to the level that his text indicates. I would advise insurers to take a close second look at this paper.

  48. bender
    Posted Sep 21, 2006 at 5:01 PM | Permalink

    Re #46
    Willis, I do the same and I agree 100%: accuracy of digital reconstruction is really a non-issue. (Believe it or not I have digitized some tree-ring series that are 1000y long. O that people would just release their data.)

  49. David Smith
    Posted Sep 21, 2006 at 5:05 PM | Permalink

    Re #13 Ah, got it! Thanks. I missed his labeling switch.

  50. Ken Fritsch
    Posted Sep 21, 2006 at 5:07 PM | Permalink

    re #47

    The high degree of decadal coherence means from 1970+ he’s got ~4 independent observations, and thus only ~3 effective degrees of freedom to assess the significance of that “strong” r=0.91 correlation. Take away the smoothing and the 1949+ correlation drops to r=0.49.

    Or with R^2=0.83 with smoothing and R^2=0.24 without smoothing and p=??.

  51. bender
    Posted Sep 21, 2006 at 5:11 PM | Permalink

    Re #49
    His graphical error fooled me for a few minutes as well. Yet another example where proper archiving could alleviate the miserable job of methods reconstruction.

    Beats me why these individual Figs 1-3 are not plotted in a single graph. Easier to read, fewer opportunities for labelling errors, saves space. As a reviewer I would have insisted on that. Is this yet another example of editorial fast-tracking?

  52. bender
    Posted Sep 21, 2006 at 5:31 PM | Permalink

    Re #50
    The significance level drops quite a bit, from p = 6.079e-07 to p = 0.0014.

  53. bender
    Posted Sep 21, 2006 at 5:42 PM | Permalink

    Willis:
    We’ve fixed the SST complaint in #36. However the complaint about PDI in #33 is still on the books. Carefully compare columns 1-2 in #19 to the dotted line in Emanuel (2005) Fig 1.

  54. bender
    Posted Sep 21, 2006 at 5:49 PM | Permalink

    Whoops, I see the problem – we need a data point for 2005 in order to prevent the 2004 uptick from being lopped off due to smoothing. Never mind #53.

  55. bender
    Posted Sep 21, 2006 at 6:08 PM | Permalink

    Re #54
    Hmm, that’s not right either. The Emanuel Fig 1 data only go to 2003, so there should be an uptick for 2002-2003 in the smoothed PDI data. I get that with a single filtering (series ending in 2003), but not a double filtering (series ending in 2002). The PDI patterns 1970+ are close to Emanuel’s, but not close enough – and the patterns pre-1970 are off by a fair bit.

    ??

  56. bender
    Posted Sep 21, 2006 at 6:20 PM | Permalink

    Re #33,#54,#55
    I guess my answer is in #19:

    Adjusted yearly PDI is from Landsea’s “Communications Arising”

    Landsea and Emanuel must use different PDI curves. Won’t make too much of a difference for a 1970+ analysis, but it will make a difference if you work with the earlier data. U ≠ HS.

  57. Willis Eschenbach
    Posted Sep 21, 2006 at 7:06 PM | Permalink

    Re 55, it all fits if you do a few things.

    What you have to do is smooth the Landsea adjusted PDI four times using the 1-2-1 filter and retaining the original end points. Then throw away the 2004 data and you have an exact match to the Emanuel data.
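
    A minimal R sketch of that procedure (repeated 1-2-1 smoothing with the first and last values pinned back in after every pass); the input is whatever annual PDI vector you start from:

    smooth121_pinned <- function(x, passes = 4) {
      n <- length(x)
      for (i in seq_len(passes)) {
        s <- as.numeric(stats::filter(x, c(1, 2, 1) / 4, sides = 2))
        s[1] <- x[1]        # pin the first point (left unsmoothed)
        s[n] <- x[n]        # pin the last point (left unsmoothed)
        x <- s
      }
      x
    }
    # e.g. emanuel_style_pdi <- smooth121_pinned(landsea_adj_pdi, passes = 4)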

    Go figure …

    w.

  58. Willis Eschenbach
    Posted Sep 21, 2006 at 7:23 PM | Permalink

    Re my 57, upon further examination, bender, I find that what he actually did was smooth the 1949-2003 data four times, keeping the end points untouched, and used the whole thing. It gives almost exactly the same result as the procedure I gave last time, but Landsea says that he held on to both end points, and that makes sense rather than discarding just one.

    This fits exactly with Emanuel’s curve.

    w.

  59. bender
    Posted Sep 21, 2006 at 8:01 PM | Permalink

    1. Keeping the end points is not justified. That is another permutational flavor to test in a sensitivity analysis.
    2. What do we make of the significant difference between the Landsea & Emanuel PDI pre-1970? I guess that difference should be quantified, if nothing else.

  60. bender
    Posted Sep 21, 2006 at 8:10 PM | Permalink

    3. “Keeping the end points untouched” is not justified either. That’s precisely what is preserving the HS shape, and what I’ve been complaining about. Each smoothing should lop a point off, thus eliminating the blade. THIS is what makes sense – because of the high likelihood that the blade is simply part of a background 3-7 year noise cycle, and not some sudden non-stationary warming trend. The crafty devil.

    Great job, Willis.

  61. Willis Eschenbach
    Posted Sep 21, 2006 at 8:10 PM | Permalink

    Also, bender, note that applying a 1-2-1 smoothing filter four times is the same as applying a single 1,8,28,56,70,56,28,8,1 filter … with repeated passes, this approaches a Gaussian filter. The net effect of this smoothing on the autocorrelation of the adjusted PDI is shown below:
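
    That kernel equivalence is easy to verify by convolving the 1-2-1 kernel with itself; a one-off R check:

    k <- c(1, 2, 1)
    w <- k
    for (i in 1:3) w <- convolve(w, rev(k), type = "open")   # compose the kernel three more times
    round(w)    # 1 8 28 56 70 56 28 8 1, which sums to 256 = 4^4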

    w.

  62. bender
    Posted Sep 21, 2006 at 8:13 PM | Permalink

    When you make arbitrary choices in data preprocessing that systematically bias the analysis in favor of your hypothesis, then the paper should not be accepted without major revision. When there are more than 4 such choices, then the analyst’s objectivity and the reviewers’ competence need to be questioned.

  63. bender
    Posted Sep 21, 2006 at 8:16 PM | Permalink

    Re #61
    Yes, indeed, Willis, that is most certainly the case. That is precisely the reason why I got interested in these kinds of data in the first place. Tsk tsk.

  64. Barclay E MacDonald
    Posted Sep 21, 2006 at 8:42 PM | Permalink

    Interesting stuff, Bender and Willis! Nice work. Better than TV. But in Emanuel’s favor, is there any rational, objective reason that you can think of for retaining the two end points untouched and not subjecting them to the same smoothing process? Or to put it another way, how obvious is it that this is an error?

  65. bender
    Posted Sep 21, 2006 at 8:51 PM | Permalink

    It is not an “error”. It is more a case of poor judgement. It is not fatal to the hypothesis test, but it will reduce its significance by some small measure. Whether that “small measure” is significant from an insurer’s perspective remains to be seen. But every dollar counts.

    The bigger point is that you string four or six of these small questionable judgements together and all of a sudden you ARE influencing the hypothesis test. Maybe even doubling the slope of the response. Which is what this is all about: why do PDI responses to SST in Atlantic & Pacific appear to differ so much? Is it the result of an overfit model in each region?

  66. John Creighton
    Posted Sep 21, 2006 at 8:55 PM | Permalink

    #64 In my opinion, after smoothing twice he should get rid of the 5 end points on either side. The number of end points he needs to remove isn’t necessarily determined by the length of the filter. Rather, it should be based on the time constant or autocorrelation of the filter.

  67. bender
    Posted Sep 21, 2006 at 8:56 PM | Permalink

    It may be worth pointing out that you would consciously have to do this (include those end data points). If you were to copy and paste in Excel or use a filter() function in R, then the data stream end points will not automatically compute. You’d have to manually type in the unprocessed data over top of the computed missing values. So it’s certainly not inadvertent; it’s deliberate.
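
    A two-line illustration of that point in R (any short vector will do):

    x <- c(5, 7, 6, 9, 8)
    stats::filter(x, c(1, 2, 1) / 4, sides = 2)
    # returns NA 6.25 7.00 8.00 NA; the end points come back as NA, not as the raw values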

  68. bender
    Posted Sep 21, 2006 at 8:59 PM | Permalink

    Most people would probably agree with hardline #66. Leaving those points intact is much like MBH98 deciding to arbitrarily extend the Gaspé cedars back from AD1404 to AD1400 just so that they would not drop out of the PCA. They needed that leverage. They did what it took to get it.

  69. bender
    Posted Sep 21, 2006 at 9:02 PM | Permalink

    Apologies for the multiple postings. Just want to clarify that this was Landsea’s analysis where the PDI endpoints were kept in. We still don’t know what Emanuel did.

  70. bender
    Posted Sep 21, 2006 at 9:09 PM | Permalink

    My guess is that Emanuel did a double smoothing and correctly eliminated one data point off the end of each series twice. Thus his raw data are 1970-2005 and his smoothed data are 1972-2003, as they should be.

  71. Willis Eschenbach
    Posted Sep 21, 2006 at 9:10 PM | Permalink

    Re 59 & 60, bender, you are right, each implementation of the 1-2-1 filter should lop off one year at each end. This is one of the things that Landsea said in his “communications arising”, but unfortunately, Landsea didn’t realize that Emanuel had done the procedure four times, and thus should have lopped off four years rather than one.

    SENSITIVITY ANALYSIS TO ADJUSTMENTS.

    Using either the smoothed or unsmoothed data, neither PDI series has a significant trend. Nor does his smoothed SST.

    Using either the adjusted or unadjusted PDI series (unsmoothed) with the HadSST data, there is a statistically significant relationship (p less than 0.05), but it is small (Adj. PDI vs HadSST, r^2 = 0.23; Orig. PDI vs HadSST, r^2 = 0.17).

    Using the 4x smoothed PDI with Emanuel’s smoothed SST, on the other hand, there is still a relationship, of course a stronger one, but the unadjusted data are no longer statistically significant (p=0.07), whereas the adjusted data are significant (p=0.014) … which may be why he did it.

    It also may be why he smoothed the PDI four times, because the single smoothing vs the SST data is less significant (p=0.03).

    I’m still curious about the pre-1970 difference in his data versus the HadSST data …

    w.

  72. Willis Eschenbach
    Posted Sep 21, 2006 at 9:15 PM | Permalink

    If you read the Landsea paper and Emanuel’s response, you find that Emanuel says the following:

    Landsea correctly points out that in applying a smoothing to the time series, I neglected to drop the end-points of the series, so that these end-points remain unsmoothed. This has the effect of exaggerating the recent upswing in Atlantic activity. However, by chance it had little effect on the western Pacific time series, which entails about three times as many events. As it happens, including the 2004 and 2005 Atlantic storms and correctly dropping the end-points restores much of the recent upswing evident in my original Fig. 1 and leaves the western Pacific series, correctly truncated to 2003, virtually unchanged. Moreover, this error has comparatively little effect on the high correlation between PDI and SST that I reported.

    Emanuel himself seems not to have noticed that he has applied the filter four times. But it is clear that he incorrectly left in the endpoints.

    w.

  73. bender
    Posted Sep 21, 2006 at 9:21 PM | Permalink

    The pre-1970 difference in PDI is a big one. I do not think there is a pre-1970 difference in smoothed SSTs.

  74. bender
    Posted Sep 21, 2006 at 9:29 PM | Permalink

    Re #72
    Glad to see we’re on the right track. I never read Landsea.

    Emanuel claims that leaving off the end points makes no difference to his hypothesis test. However the point is that:
    1. his claims of a doubling in PDI are exaggerated
    2. his attribution to SST is exaggerated (biased regression slopes)
    3. his uncertainty is underestimated
    4. he has not recognized or accounted for the variable 0.5%,5%,30% effect first mentioned by Willis a month ago – and this may well be the result of regional overfitting.

  75. bender
    Posted Sep 21, 2006 at 9:33 PM | Permalink

    Willis, are you sure Emanuel 4x smoothed? A 2x smoothing of the HadCRUT SST looks incredibly close, and would be consistent with his data frame 1972-2003.

  76. Steve McIntyre
    Posted Sep 21, 2006 at 9:36 PM | Permalink

    bender/willis – can one/both of you summarize this in one post? It looks pretty nice, but I haven’t been able to keep track.

    bender – you mentioned the Slutsky effect – is this in play with these multiple smoothings?

    There’s an interesting article about the Slutsky effect in a climate context by Klemeš, in which he notes that many natural geophysical systems perform smoothings, esp in hydrology (e.g. lakes, rivers).

  77. bender
    Posted Sep 21, 2006 at 9:54 PM | Permalink

    Re #76
    Can summarize tomorrow, if you like. [But we really need to know what’s going on with the pre-1970 PDI data. Landsea’s U does not equal Emanuel’s HS. Moot point if your analysis is 1970+, but we want to know what happens when you pick a different breakpoint, 1960, 1950, etc.] Your call.

    Yes, Slutzky effect directly in play. The PACFs and coherence spectra look very nice. (I spent a past life studying, gasp, solar cycles.)

  78. bender
    Posted Sep 21, 2006 at 10:15 PM | Permalink

    The Slutzky effect was described here. Slutzky was concerned about smoothing methods that enhance the strength of a cycle where none formerly exists. It was Yule who was concerned about nonsense-correlations between cyclic processes. Put them together and you have the double concern expressed here.

    In this case, however, it may well be that Emanuel’s method is enhancing a causal relationship between SST and PDI. What Willis & I show is that part of that significance is due to ultra-low frequency coherence (AGW trend?), but part of it appears to be due to 3-7y (ENSO-scale) coherence. Emanuel’s method exaggerates the latter, but he takes it as evidence of the former.

    Report tomorrow.

  79. bender
    Posted Sep 21, 2006 at 10:35 PM | Permalink

    As you shift the analysis breakpoint from 1970 to 1960 to 1950 the r^2 drops from 0.73 to 0.40 to 0.33. The PDI/SST regression slope drops from 15.5±1.6 to 11.5±2.1 to 10.8±2.1. (The ± is the S.E.E. and as an indicator of uncertainty should always be reported with regression parameters.) p-levels do not change, but are, of course, highly exaggerated to begin with, as this analysis is for the 2x smoothed data.
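
    For reference, a sketch of how such a breakpoint run could be scripted in R; yr, pdi and sst are assumed to be the aligned annual vectors, and the numbers quoted above come from the actual analysis, not from this sketch:

    for (b in c(1970, 1960, 1950)) {
      keep <- yr >= b
      fit  <- lm(pdi[keep] ~ sst[keep])
      co   <- summary(fit)$coefficients
      cat(sprintf("start %d: r^2 = %.2f, slope = %.1f +/- %.1f\n",
                  b, summary(fit)$r.squared, co[2, 1], co[2, 2]))
    }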

  80. Willis Eschenbach
    Posted Sep 22, 2006 at 3:14 AM | Permalink

    Re 77, bender, thanks as always for your interesting comments. You say:

    Can summarize tomorrow, if you like. [But we really need to know what’s going on with the pre-1970 PDI data. Landsea’s U does not equal Emanuel’s HS. Moot point if your analysis is 1970+, but we want to know what happens when you pick a different breakpoint, 1960, 1950, etc.] Your call.

    Landsea’s PDI data smoothed 4x is exactly the same as Emanuel’s. Take Landsea’s original 1949-2003 data. Smooth it with the 1-2-1 filter four times, keeping the endpoints (1949 and 2003) unchanged each time. Here’s what I get from that process …

    Seems like the Emanuel PDI data to me … I’m working now on the SST data, should have it done by tomorrow.

    w.

  81. Willis Eschenbach
    Posted Sep 22, 2006 at 3:19 AM | Permalink

    PS – by “Landsea’s original data” in #80, I meant the data that is lower in the pre-1970 time frame. Above I called it the “Adjusted Data”, because it was adjusted from a higher previous value. Sorry for the confusion.

    w.

  82. Willis Eschenbach
    Posted Sep 22, 2006 at 5:25 AM | Permalink

    OK, got the HadISST data … here’s the story.

    Emanuel used the HadISST data, smoothed three times (not four, as with the PDI data). Here’s the match:

    At this point, we can do some actual analysis of the results … the three unsmoothed datasets, appended at the end of this post, have the following characteristics.

    1) NONE of the three has a significant trend. The figures are as follows:

    ITEM , Orig PDI , HadISST , Adj PDI
    Trend z , -0.87 , 1.37 , -0.10
    Kendall z , -1.26 , 1.58 , -0.62

    The “Trend z” is the significance of the trend, using Nychka’s adjustment for autocorrelation:

    N_{eff} = N \frac{1 - r(1) - 0.68/\sqrt{N}}{1 + r(1) + 0.68/\sqrt{N}}

    The “Kendall z” is the significance of the trend, using Kendall’s non-parametric trend test.

    2) The SST is related to both the adjusted and unadjusted PDI, as follows:

    ITEM , Orig PDI vs HadISST , Adj PDI vs HadISST
    r^2 , 0.21 , 0.25
    p value , 0.01 , 0.002

    The p value has been calculated using Bartlett’s formula for the effective N (a code sketch of this and the Nychka adjustment follows point 3),

    N_{eff} \approx \frac{N}{1 + \sum_{k=1}^{N} r_1(k)\, r_2(k)}

    While these relationships are statistically significant, they are quite small.

    3) The method of smoothing (pinning the end points and repeatedly using a 1-2-1 smoothing filter) distorts the results. By pinning the endpoints, the start and finish of the curve are held in place, and the smoothed curve is adjusted to meet them. Because the start and end points are low and high respectively for both the PDI and the SST, this converts a “U” shaped curve into more of a hockeystick shape, by pinning the start low and the end high …
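
    A minimal R sketch of the two effective-N adjustments quoted in 1) and 2) above; x and y stand for the raw annual series, and this is just one way to code those formulas, not necessarily the script used here:

    # Nychka's lag-1 adjustment for the effective sample size of a single series
    neff_nychka <- function(x) {
      n  <- length(x)
      r1 <- acf(x, plot = FALSE)$acf[2]
      n * (1 - r1 - 0.68 / sqrt(n)) / (1 + r1 + 0.68 / sqrt(n))
    }

    # Bartlett's effective sample size for the correlation of two autocorrelated series
    neff_bartlett <- function(x, y, max.lag = 20) {
      rx <- acf(x, lag.max = max.lag, plot = FALSE)$acf[-1]
      ry <- acf(y, lag.max = max.lag, plot = FALSE)$acf[-1]
      length(x) / (1 + sum(rx * ry))
    }

    # p value for cor(x, y) using the reduced degrees of freedom
    cor_p_adjusted <- function(x, y) {
      r    <- cor(x, y)
      neff <- neff_bartlett(x, y)
      tval <- r * sqrt((neff - 2) / (1 - r^2))
      2 * pt(-abs(tval), df = neff - 2)
    }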

    It’s 1:15 AM … I’m going to bed.

    w.

    THE DATA

    Year , Orig PDI , HadISST , Adj PDI
    1949 , 10.82 , 27.81 , 9.25
    1950 , 30.64 , 27.92 , 26.2
    1951 , 16.66 , 28.11 , 14.25
    1952 , 10.41 , 28.37 , 8.9
    1953 , 11.46 , 28.08 , 9.8
    1954 , 13.45 , 27.81 , 11.5
    1955 , 24.73 , 28.06 , 21.35
    1956 , 7.08 , 27.69 , 5.3
    1957 , 10.26 , 28.16 , 8.7
    1958 , 14.62 , 28.20 , 12.5
    1959 , 8.77 , 27.68 , 7.5
    1960 , 12.96 , 27.86 , 10.2
    1961 , 26.78 , 27.88 , 22.9
    1962 , 3.74 , 27.97 , 3.2
    1963 , 14.11 , 27.94 , 11.95
    1964 , 20.77 , 27.58 , 18.05
    1965 , 10.06 , 27.80 , 8.6
    1966 , 17.42 , 28.05 , 14.9
    1967 , 14.09 , 27.78 , 12.05
    1968 , 3.18 , 27.74 , 3.65
    1969 , 18.30 , 28.25 , 15.65
    1970 , 3.75 , 27.88 , 3.75
    1971 , 9.25 , 27.61 , 9.25
    1972 , 2.9 , 27.52 , 2.9
    1973 , 4.15 , 27.79 , 4.15
    1974 , 7.5 , 27.46 , 7.5
    1975 , 7.8 , 27.63 , 7.8
    1976 , 8.6 , 27.94 , 8.6
    1977 , 2.95 , 27.72 , 2.95
    1978 , 6.6 , 27.59 , 6.6
    1979 , 11.85 , 28.09 , 11.85
    1980 , 19.3 , 28.20 , 19.3
    1981 , 9.95 , 27.96 , 9.95
    1982 , 3.35 , 27.67 , 3.35
    1983 , 1.5 , 27.73 , 1.5
    1984 , 7.45 , 27.68 , 7.45
    1985 , 9.3 , 27.76 , 9.3
    1986 , 3.5 , 27.94 , 3.5
    1987 , 2.65 , 28.23 , 2.65
    1988 , 13.1 , 28.03 , 13.1
    1989 , 16.45 , 27.95 , 16.45
    1990 , 8.5 , 28.34 , 8.5
    1991 , 3.45 , 27.60 , 3.45
    1992 , 8.85 , 27.78 , 8.85
    1993 , 3.5 , 27.82 , 3.5
    1994 , 2.65 , 27.79 , 2.65
    1995 , 24.8 , 28.32 , 24.8
    1996 , 19.55 , 28.12 , 19.55
    1997 , 4.15 , 28.16 , 4.15
    1998 , 21.75 , 28.48 , 21.75
    1999 , 21.85 , 28.36 , 21.85
    2000 , 12.2 , 27.99 , 12.2
    2001 , 11.25 , 28.38 , 11.25
    2002 , 6.35 , 28.01 , 6.35
    2003 , 22.6 , 28.62 , 22.6

  83. Willis Eschenbach
    Posted Sep 22, 2006 at 5:54 AM | Permalink

    Can’t sleep … a couple more comments …

    1) While there is a significant trend in the HadISST in the area from 1949-2003, there is no significant trend 1931-2003.

    2) While there is a small relationship between the September HadISST sea temperature and the original PDI (r^2 = 0.21, p = 0.01), the relationship drops to r^2 = 0.08, p=0.12 when we use the August to October HadISST sea temperature … looks like my original suspicions of cherry picking were correct.

    More to come … tomorrow.

    w.

  84. David Smith
    Posted Sep 22, 2006 at 6:58 AM | Permalink

    You all are moving faster than I can track! A question and a comment:

    * the all-event PDI chart that Emanuel e-mailed to Willis is noticeably different from 1995 on, when compared to his original Figure 1. Any idea if that is due to Emanuel changing his smoothing technique, or due to a change in his database?

    * the inclusion of 6N to 9N in the SST box helps convert an oscillating SST pattern (1930-2003) into more of a hockey stick.

  85. bender
    Posted Sep 22, 2006 at 7:48 AM | Permalink

    Re #81 No confusion on this end.
    Re #82

    Because the start and end points are low and high respectively for both the PDI and the SST, this converts a “U” shaped curve into more of a hockeystick shape, by pinning the start low and the end high

    Very sharp observation. I was just going to investigate that possibility. This is exactly where my U-shaped smoothing was deviating from his HS.

  86. Ken Fritsch
    Posted Sep 22, 2006 at 10:27 AM | Permalink

    I dislike interrupting the fine detective work that Willis E, bender and David S are doing in disassembling and understanding the work that Emanuel has published, but I did want to say how great it is to follow the process and what has been accomplished. What a change it has been from the tutorial on standard/sampling error with the recalcitrant students.

    In a former life I was part of a group that on occasion would take apart technical papers looking for findings that could be applied to an electronics manufacturing process. Others in the group were often much more technically orientated and knowledgeable than I and that made the process a more enjoyable learning experience for me.

    An aspect that seemed constant with this process was that the science and technical parts were much more readily understood than the motivations of the authors of the papers. We often would contact authors and put questions to them directly when they were not employed by a competing organization or meet them at conferences and discover that we had the science right but the personal motivations and, therefore, sometimes the stated conclusions or limitations, wrong. I always thought a few beers and an engineer’s/scientist’s ego were a deadly combination for extracting background information.

  87. bender
    Posted Sep 22, 2006 at 10:40 AM | Permalink

    Re #86

    -reluctance to get involved
    -ego’s drive to show someone up
    -fanatic hate of error & lies
    -love of camaraderie

    Dynamic forces jostling with & against each other (tension, compression) leading to cyclic patterns of progress. Sometimes lecturing trolls. Sometimes doing real work. It’s all good.

  88. Willis Eschenbach
    Posted Sep 22, 2006 at 11:53 AM | Permalink

    Re #84, you say

    * the all-event PDI chart that Emanuel e-mailed to Willis is noticeably different from 1995 on, when compared to his original Figure 1. Any idea if that is due to Emanuel changing his smoothing technique, or due to a change in his database?

    Probably a change in the smoothing, as he had been notified by Landsea of the problem.

    w.

  89. Ken Fritsch
    Posted Sep 22, 2006 at 12:17 PM | Permalink

    From Emanuel’s reply to the Landsea communication in Nature:

    I maintain that current levels of tropical storminess are unprecedented in the historical record and that a global-warming signal is now emerging in records of hurricane activity.

  90. bender
    Posted Sep 22, 2006 at 12:21 PM | Permalink

    This appears to be true, but not surprising since the historical record only goes back to 1950. Weren’t the 1960s-1970s a period of aerosol cooling? If so, then a post-1970s trend is not unexpected.

  91. bender
    Posted Sep 22, 2006 at 12:27 PM | Permalink

    The fragility of his inferred trend (exaggerated by the pinning effect of not deleting smoothed endpoints) will probably be exposed once the relatively calm 2006 data are all in. Nothing like an honest-to-goodness out-of-sample validation test.