Readers of this blog may have noticed some chaffing back and forth between me and Tim Lambert. Anyone who has followed the chaffing may have noticed that Lambert has spent a lot of time criticizing John Lott's studies of guns on statistical grounds. On a personal basis, I dislike guns and cannot imagine why anyone would want one in their house; I have pretty typical urban Canadian views. Intuitively, I'd be inclined to think that anyone purporting to prove that more guns lead to less crime is probably engaging in pretty suspect statistics, and, accordingly, that Lambert's criticisms of Lott are probably meritorious. But it's not a topic that has so far interested me enough even to read Lambert's criticisms of Lott.
By chance, while googling a completely different statistical topic, I stumbled across another statistical take on Lott, also severely critical. What intrigued me is that these criticisms of Lott's methodology parallel my criticisms of Mann's methodology. Maybe I'll have to look at the Lott criticisms some more.
Here's the article that I noticed, together with a quote:
Lott’s work is an example of statistical one-upmanship. He has more data and a more complex analysis than anyone else studying the topic. He demands that anyone who wants to challenge his arguments become immersed in a very complex statistical debate, based on computations so difficult that they cannot be done with ordinary desktop computers. He challenges anyone who disagrees with him to download his data set and redo his calculations, but most social scientists do not think it worth their while to replicate studies using methods that have repeatedly failed. Most gun control researchers simply brushed off Lott and Mustard’s claims and went on with their work. Two highly respected criminal justice researchers, Frank Zimring and Gordon Hawkins (1997), wrote an article explaining that: "just as Messrs. Lott and Mustard can, with one model of the determinants of homicide, produce statistical residuals suggesting that ‘shall issue’ laws reduce homicide, we expect that a determined econometrician can produce a treatment of the same historical periods with different models and opposite effects. Econometric modeling is a double-edged sword in its capacity to facilitate statistical findings to warm the hearts of true believers of any stripe."
Zimring and Hawkins were right. Within a year, two determined econometricians, Dan Black and Daniel Nagin (1998) published a study showing that if they changed the statistical model a little bit, or applied it to different segments of the data, Lott and Mustard’s findings disappeared. Black and Nagin found that when Florida was removed from the sample there was "no detectable impact of the right-to-carry laws on the rate of murder and rape." They concluded that "inference based on the Lott and Mustard model is inappropriate, and their results cannot be used responsibly to formulate public policy."
The analogy would be even more complete if Lott, in addition to employing a complicated methodology, had withheld his data (until his feet were increasingly held to the fire), misrepresented his methodology in important particulars, failed to disclose adverse cross-validation statistics, etc. etc. When I see a comment about the impact of slight changes in Lott's statistical model, the line of analysis seems pretty similar to our analysis of the presence/absence of bristlecones.
It may be that Lambert on Lott and M&M on Mann have more in common than we have appreciated. It's also easy to understand that, had Lott's co-authors produced other studies reaching similar conclusions, anti-Lottians would not necessarily have been very impressed; they would have been quick to examine each of those other studies to see exactly where the flaw lay. I'll try to spend some time working through the analogy.