Thanks to Judith Curry for sending along these candid comments from a couple of her students about climateaudit. There has been discussion at the other thread which I’d prefer to move to this thread.
Here is the report from the Georgia Tech hurricane class discussion on the climateaudit hurricane threads. Two students were assigned to make presentations: Student #1 is a 2nd year graduate student, slightly older and with a mature and broad perspective; student #2 is a recent Ph.D. awardee with good knowledge of statistics.
Student #1 gave an overview of the blogosphere and climate-related blogging activities, and some history of the climateaudit site. He described climateaudit’s practice as:
1. attacking a paper on global warming, before reading it very carefully or understanding the context of the paper, assuming that the author is either dumb or has an “agenda”
2. a plethora of statistical activity of a fairly rudimentary nature
3. realization that the issues are complex
4. some attempts at trying to gain physical understanding of what is going on
5. realization that the issues are even more complex
6. give up and move onto something else
Student #1 then asked the following questions (which I answered):
1. How influential is climateaudit?
2. What items have they raised that we should pay attention to?
3. What can we learn and avoid the next time?
4. Was Dr. Curry’s blogging time well spent, or did it legitimize and prolong a discussion that in the end hasn’t really accomplished anything?
Student #2 focused on the statistical issues surrounding the WHCC and Emanuel papers. He raised the following main points:
1. The climateauditors do not seem to understand parametric vs. nonparametric tests. The Kendall test (a rank-based test) used by WHCC does not require a normal distribution and is also fairly insensitive to serial correlation, so the emphasis on autocorrelation and distributions did not add anything.
2. The climateauditors show a general lack of physical interpretation and a lack of appreciation of the fundamentally Bayesian approach (if not explicitly, then implicitly) to climate science statistics, whereby physics and prior knowledge suggests your predictors.
3. ARMA (autoregressive moving average; also Spanish for “weapon”) is a brute-force method, used (not very productively) when nothing is known about the physics.
4. WHCC’s statistics were robust and appropriate; the Curry et al. BAMS article was unfairly criticized, since the critics did not go back to the original paper cited in Figure 1, which explained what went into the figure and how the trend was determined.
5. There were problems with Emanuel’s statistical analysis that should have been caught in the review process.
6. Student #2 was pretty hot under the collar about the whole thing.
7. “A lot of personal attacks. Not using bad manners… but still personal attacks. An example? Their opening lines on the hurricane thread: There are statistical issues in fitting trend lines to spiky data like this, which bender is well aware of and pointed out in the predecessor thread. If Curry is unaware of these issues, what does that say? If she is aware of these issues and ignored them, what does that say?”
8. “A biased blog that pretends it is not. In terms of most of the statistics they seem to know what they are talking about, but they should. Most of the stuff is part of basic statistical training. While they appear to be curious about some physics, there is a general lack of good physical interpretation.”
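The rank-based Kendall test in point 1 is easy to demonstrate. Below is a minimal sketch (my own illustration, not WHCC’s actual data or code) using `scipy.stats.kendalltau` on a synthetic “spiky” series with heavy-tailed noise; the years, trend size, and noise model are all invented for the example.

```python
import numpy as np
from scipy import stats

# Synthetic example (invented numbers): a weak linear trend buried in
# heavy-tailed "spiky" noise, loosely mimicking an annual storm index.
rng = np.random.default_rng(0)
years = np.arange(1970, 2006)
trend = 0.1 * (years - years[0])                # assumed upward trend
spikes = rng.standard_t(df=2, size=years.size)  # heavy-tailed noise
series = trend + spikes

# Kendall's tau uses only the ranks of the data, so it needs no
# normality assumption and is less sensitive to the outliers above
# than an ordinary least-squares trend fit would be.
tau, p_value = stats.kendalltau(years, series)
print(f"Kendall tau = {tau:.3f}, p = {p_value:.4f}")
```

Because the statistic depends only on ranks, a single extreme spike can affect at most the pair comparisons involving that one point, which is the sense in which distributional objections add little against a rank-based test.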
Topics raised in the discussion:
People reading only the thread leader and first few posts get the impression that the paper is wrong, when further down the thread the paper gets vindicated. This gives the casual visitor to the site a negatively biased impression of climate science.
One student raised the issue that statistical mistakes such as those made by Emanuel (2005) should have been weeded out in the review process, and suggested that a “statistical editor” was needed for climate journals to review papers for basic sound statistical practice.
The students thought that the fact that the climateauditors did not have “external funding” to do this work diminished their credibility.
The students agreed that statistics should be done correctly, data should be made publicly available (but extra work should not be done to make the data and programs convenient for the skeptics), and funding sources should be disclosed.
The “biases” of the climateauditors were discussed. Bender was perceived as a hardcore anti-warmer. SteveM and Willis were perceived as hardcore statistical skeptics, assuming that all analyses done by climate people are suspect. Steve Bloom was viewed as a somewhat heroic glutton for punishment. David Smith was viewed as the voice of reason.
I then went on to describe what I thought was useful and interesting about the site and about the hurricane threads, and the blogospheric approach to science. Everyone agreed that the climateauditors spotted things in the Emanuel paper that none of us had spotted.
Overall, the students were pretty negative about the site. I suggested that the two students post their comments; they did not want to, and I agreed to summarize the discussion (I was asked not to mention their names). They viewed blogging on climateaudit as entering a black hole of trying to defend yourself against a prejudged guilty verdict. Well, I am not exactly sure what I expected from this discussion, but it doesn’t sound like the younger generation of scientists is very keen to enter the blogospheric discussions on climate science.
Student #2 ended with 3 quotes and a joke:
Bayesian statistics is difficult in the sense that thinking is difficult. Donald A. Berry
Some people use statistics as a drunken man uses lamp-posts: for support rather than illumination. Andrew Lang
Facts do not “speak for themselves.” They speak for or against competing theories. Facts divorced from theories or visions are mere isolated curiosities. Thomas Sowell
Two statisticians were traveling in an airplane from LA to New York. About an hour into the flight, the pilot announced that they had lost an engine, but don’t worry, there are three left. However, instead of 5 hours it would take 7 hours to get to New York. A little later, he announced that a second engine failed, and they still had two left, but it would take 10 hours to get to New York. Somewhat later, the pilot again came on the intercom and announced that a third engine had died. Never fear, he announced, because the plane could fly on a single engine. However, it would now take 18 hours to get to New York. At this point, one statistician turned to the other and said, "Gee, I hope we don’t lose that last engine, or we’ll be up here forever!"