Another bogus claim from Lewandowsky would hardly seem to warrant a blog post, let alone a bogus claim about people holding contradictory beliefs. The ability of many climate scientists to hold contradictory beliefs at the same time has long been a topic of interest at climate blogs (Briffa’s self contradiction being a particular source of wonder at this blog). Thus no reader of this blog would preclude the possibility that undergraduate psychology students might also express contradictory beliefs in a survey.
Nonetheless, I’ve been mildly interested in Lewandowsky’s claims about people subscribing to contradictory beliefs at the same time, as for example, the following:
While consistency is a hallmark of science, conspiracy theorists often subscribe to contradictory beliefs at the same time – for example, that MI6 killed Princess Diana, and that she also faked her own death.
Lewandowsky’s assertions about Diana are based on an article by Wood et al. entitled “Dead and Alive: Beliefs in Contradictory Conspiracy Theories”. A few months ago, I requested the supporting data from Wood. Wood initially promised to provide the data, then said that he had to check with coauthors. I sent several reminders, eventually without eliciting any response. I accordingly sent an FOI request to his university, accompanied by a complaint under any applicable university data policies. The university responded cordially and Wood immediately provided the data.
The most cursory examination of the data contradicted Lewandowsky’s claim. One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.
A Subpopulation of Zero
Within the Wood dataset, only two (!) respondents purported to believe that Diana faked her own death. Neither of these two respondents also purported to believe that MI6 killed Princess Diana. The subpopulation of people that believed that Diana staged her own death and that MI6 killed her was precisely zero.
Lewandowsky’s signature inconsistency was completely bogus – a result that will come as no surprise to readers acquainted with his work.
Wood et al 2012
Lewandowsky appears to have uncritically relied on results published in Wood et al, 2012, which had stated in their running text:
People who believed that Diana faked her own death were marginally more likely to also believe that she was killed by a rogue cell of British Intelligence (r = .15, p = .075) and significantly more likely to also believe that she was killed by business enemies of the Fayeds (r = .25, p = .003).
Wood highlighted the inconsistency between conspiracy beliefs in their abstract, which stated:
In Study 1(n= 137), the more participants believed that Princess Diana faked her own death, the more they believed that she was murdered.
However, as with the official MI6 example, neither of the two respondents purporting to believe that Diana had faked her own death also believed that she had been murdered by Fayed’s business enemies or by a rogue cell within MI6.
Here’s how the correlations arose. Wood had used a 7-point Likert scale (ranging from Strong Disagree to Strong Agree) and people who expressed strong disagreement on one point were more likely to express strong disagreement on a related point. What Wood ought to have said is that participants who strongly disagreed that Diana faked her own death were more likely to strongly disagree that she was murdered. This does not imply people who believed that Diana faked her own death were “significantly more likely” to believe that she was also murdered by Fayed’s enemies.
This is very elementary.
Prior to posting, I wrote to Wood, explaining the problem as follows:
For example, in your Table I, you reported a “significant” correlation of .253 between the proposition that Diana faked her own death and the proposition that she was killed by Fayed’s enemies. I don’t know whether you examined the contingency table between these two propositions, but it is an important precaution and, in my opinion, conclusively contradicts the phenomenon that you had in mind. here is the contingency table (4 – neutral on the 7-point Likert scale):
Only a few respondents (6) purported to either believe that Diana had faked her own death (2) or that she had been killed by Fayed enemies (4) – and NONE believed both simultaneously. Your “significant correlation” arises not because people held inconsistent conspiracy beliefs, as you state, but because of differing confidence in their disbelief. Respondents were much more confident in their belief that Diana did not fake her own death than that she was not killed by Fayed’s enemies. In addition, respondents may express their disagreement with different degrees of emphasis. This is an entirely different phenomenon than believing in mutually inconsistent results.
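The contingency-table precaution described in the email can be sketched in a few lines. The responses below are hypothetical pairs on the 7-point scale (1–3 disagree, 4 neutral, 5–7 agree), arranged so that a handful of respondents endorse one proposition or the other but none endorse both, mirroring the pattern described above; this is an illustration of the check, not the actual dataset.

```python
# Minimal contingency-table check on hypothetical Likert pairs
# (faked_death, fayed_enemies), each on a 1-7 scale with 4 = neutral.
from collections import Counter

def bucket(x):
    # collapse the 7-point scale into disagree / neutral / agree
    return "disagree" if x < 4 else ("agree" if x > 4 else "neutral")

# illustrative responses only: some endorse one theory, none endorse both
pairs = [(1, 1), (1, 2), (2, 1), (1, 3), (5, 1), (5, 2), (1, 5), (2, 6)]
table = Counter((bucket(a), bucket(b)) for a, b in pairs)

for cell, n in sorted(table.items()):
    print(cell, n)
print("believed both:", table[("agree", "agree")])
```

Whenever the (agree, agree) cell is empty, a "significant" correlation between the raw codes cannot be evidence that anyone held both beliefs simultaneously.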
Rather than conceding this seemingly obvious point, in a reasonably detailed response, Wood denied that his results were affected. First, he contested that non-normality was an issue, because Spearman correlations yielded similar or higher values than Pearson correlation. While true, this observation is obviously unresponsive to the fact that no respondents simultaneously believed that Diana had faked her own death and had been murdered by Fayed enemies (or a rogue cell or official MI6). Wood:
First, you raise the issue that there is some non-normality in the data. This is especially apparent in correlations involving the “faked death” item, which participants were highly sceptical about and therefore ended up with a restricted range of responses, as we noted in the text of the paper in our discussion of a potential floor effect. However, nonparametric tests give much the same results as we originally obtained – for instance, using either Spearman’s rho or Kendall’s tau-b renders the previously marginally significant correlation between “fake death” and “rogue cell” significant at the .01 level. Your use of scare quotes around “significant” is quite unwarranted – the relationships are indeed statistically significant, and if anything the use of Pearson correlations in the original paper understates, rather than overstates, their robustness.
Next, Wood challenged my analysis of the Faked Death/Fayed Enemies pairing as “felicitous”, arguing that a “more instructive” example was the correlation between the rogue cell and Fayed enemies combination.
The second point concerns the interpretation of the correlations. In fact you chose a rather felicitous example for the point you’re making… A more instructive example is the “rogue cell” / “business enemies” correlation, which, not being attenuated by general scepticism about the faked death claim, is more representative of the relationships among the different Diana theories, as is clear from Table 1 of the paper.
However, it was the Faked Death example that Wood et al highlighted in their Abstract and which Lewandowsky has drawn attention to. I didn’t pick it because of perceived weakness, but because Wood and Lewandowsky had themselves promoted it.
Wood also now claimed that because “so few” agreed with the faked death theory, it “does not seem reasonable to expect many people to give high endorsement to both theories”:
as noted previously, very few people expressed a high level of confidence at all in the “faked death” theory – in fact, it’s the least-agreed-with item in the scale. Given that so few agreed with the “faked death” example at all, it does not seem reasonable to expect many people to give high endorsement to both theories.
However, nowhere in Wood et al 2012 is there any explicit statement that only two respondents purported to believe in the Faked Death theory that was highlighted in the abstract. Had readers been aware that only two people purported to subscribe to this theory, then they would obviously not expect “many people to give high endorsement to both theories”. Unfortunately, when zero people subscribed to both theories, one cannot justifiably assert that “In Study 1(n= 137), the more participants believed that Princess Diana faked her own death, the more they believed that she was murdered”.
Nor does Wood’s “more instructive” example help their cause. Only two respondents purported to agree that both Diana had been murdered by Fayed business enemies and by a rogue cell. But here’s how Wood et al 2012 had characterized the results:
Similarly, participants who found it likely that the Fayeds’ business rivals were responsible for the death of Diana were highly likely to also blame a rogue cell (r = .61, p < .001).
Given that only two respondents reported subscribing to both beliefs, it is obviously impossible to achieve p < .001 from such a minuscule sample. As with the other pairing, the correlation arises from people who disagree with both propositions, not from people who agree with both propositions. Although this latter point seems self-evident, Wood disputed it as follows even after I had pointed it out to him:
While this sample was generally sceptical about conspiracy theories in general, the fact that participants’ degrees of disbelief appear to stick together does not indicate that the correlations are simply an artefact of participants’ response styles. This explanation seems particularly implausible given the magnitude of the correlations that are not attenuated by floor effects (e.g., r = .61 for the “business enemies / rogue cell” correlation).
However, while two is more than zero, it is still far below the population necessary to arrive at statistically significant conclusions. Any “statistically significant” correlations arise for the reason set out in my email to Wood: “not because people held inconsistent conspiracy beliefs, as you state, but because of differing confidence in their disbelief”.
That Lewandowsky should make untrue statements will hardly occasion surprise among CA readers. However, drawing conclusions from a subpopulation of zero does take small population statistics to a new and shall-we-say unprecedented level.
Postscript: Wood’s Diana claim has become somewhat of an urban legend and citations accumulate. Examples include Thresher-Andrews 2013 (PsyPAG Quarterly); Brotherton 2013 (PsyPAG Quarterly):
People who endorse one conspiracy theory tend to buy into many others – including theories with no logical connection and, as Mike Wood and colleagues demonstrated, occasionally even theories which directly contradict each other.
Lewandowsky Fury:
In consequence, it may not even matter if hypotheses are mutually contradictory, and the simultaneous belief in mutually exclusive theories – e.g., that Princess Diana was murdered but also faked her own death – has been identified as an aspect of conspiracist ideation (Wood et al., 2012).
Lewandowsky Role (PLOS):
For example, whereas coherence is a hallmark of most scientific theories, the simultaneous belief in mutually contradictory theories – e.g., that Princess Diana was murdered but faked her own death – is a notable aspect of conspiracist ideation [30 – Wood et al, 2012].
Lewandowsky in March 2013 here also at SKS here:
Thus, people may simultaneously believe that Princess Diana faked her own death and that she was assassinated by MI5.
See also Cook’s comment here.
136 Comments
The sentence beginning “Rather than conceding” should be pulled out of the section quoting your letter to Wood.
Steve: fixed
It is not that he had much to say
When Wood “answered” your point in this fray
It is quite hard to soften
That Lew called never “often”
But in Wood’s mind, he blew you away.
I’ve been dealing with this in my blog
The Defenders of Falsehoods. One prog
Though a bright, clever friend
Will defend to the end
Global warming — he’s there for the slog.
===|==============/ Keith DeHavelle
The End. I needed that belly laugh, thank you.
Unprecedented? Drawing inferences from samples of size N=zero and smaller (e.g. inverted Tiljander) seems to be SOP for some of these guys.
I, too, immediately thought of reciprocal Tiljander, stood on its head in pursuit of thermal “unprecedentedness.” Will sample sizes of n*√-1 be long in appearing? Or are they already here?
An astute observation that 2/0 is a remarkably large transition! (aka infinity – “is elementary”)
Divide-by-zero rationality
Can we here really unwind it?
Seems Lew’s “truth” is just hostility:
“Truth is where you undefined it.”
===|==============/ Keith DeHavelle
“What Wood ought to have said is that participants who strongly disagreed that Diana faked her own death were more likely to strongly disagree that she was murdered. This does not imply people who believed that Diana faked her own death were “significantly more likely” to believe that she was also murdered by Fayed’s enemies. ”
Depends on how you define it. If it were a two point scale, it would mean that.
Bet Wood got an A on this paper.
People with little understanding of statistics using statistical programs. Again.
Yes. And when called on the carpet they employ a Monte Carlo method called “Grasping at Straws” to support their original conclusions.
I’m not sure it’s fair to blame Lewandowsky for this oversight. By the conclusions of his own recent work, we learned that he suffers from extremist ideation which leads to confirmation bias and can result in acceptance of facts which are obviously suspect to an impartial scientific reader. I’m quite certain that Steve M requested the Wood data based on the very unusual conclusions it seemed to support, Lewandowsky’s impenetrable ideation didn’t allow him that foresight and that is no fault of his own.
Had Lew not taught me this valuable lesson, I might have blamed him for being gullible or something when the situation is clearly something else.
Lewanclownsky opted for notoriety long ago … end of story
It becomes even more clear that Lewandowsky’s ideation is impenetrable when you consider the folks with whom he associates.
If you are a scientist looking to research interesting pathologies that arise in people engaged at some level in the climate change debate, then look no farther than the photoshopped images of Cook, Nuccitelli, et al, discovered at Skeptical Science.
That was some really weird and disturbing stuff. Somebody should write a paper.
Hahaha!! Nice one 😀
Lewandowsky and Mann teaming up is a blogger’s dream. Who could have guessed that the sum of two self-certain “elite” minds is even less than one lonely one.
If they are the new hockey team, maybe they should have a name? or a theme song.
A hockey team with an open goal.
Jeff ‘n’ Josh, the “bespectacled violent goons with childlike mentalities, complete with toys in their luggage” in Paul Newman’s 70’s film “Slap Shot” were of course the Hanson Brothers.
http://en.wikipedia.org/wiki/Slap_Shot_%28film%29
I can’t help thinking that there’s a paper in “Statistical Ideation”… where the learned discipline of statistical analysis is tortured to create a new statistics which confirms the answer needed. This blog documents many examples.
timheyes,
It goes further than that. The answer needed is often one that reflects a knee-jerk conclusion more than a reasoned one.
Just now, for example, I was watching a PC garden show on Australia’s ABC TV. The commentator noted of his garden design “It protects small creatures from predators”. I thought, but hold it, the predator-prey relationship is one of mutual dependence. Someone has to feed the predator too.
While this is a trivial example, it is reflective of a mindset that seems affiliated with a naïve fairyland. This is shown again by the present wave of chemophobia, where PC people interminably prefer ‘natural’ or ‘organic’ remedies to professionally crafted chemical ones. That the failure of the natural remedy was often a reason to design a chemical one seems to have been forgotten.
Up it a scale and we have a PC preference for expensive, intermittent wind energy over cheap, reliable coal or nuclear.
And so on into the darkness.
Spot on. Id est. We must pity rather than blame.
‘ One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.’
So normal practice for him then , for facts and data are not important all that matters is that his ‘faith ‘ is confirmed .
Two people marking a “strong agree” on faking Diana’s death in a reasonable sample size could simply be people accidentally hitting the wrong button.
See if you can get this published in a journal Steve – it’s such an obvious blunder, which could easily be demonstrated by removing the strong disagrees from the sample (?), that the journey to getting a takedown like this published in a journal could make an interesting story in itself.
I know it’s a big ask, it’s a lot of work, but think about how much you could make them squirm as they tried to invent reasons not to publish your response paper.
“the journey to getting a takedown like this published in a journal could make an interesting story in itself.”
I agree. This is worth pursuing further, since Wood is unrepentant.
PS: It would bolster Steve’s case if he could get a bunch of statistical bigshots to endorse his criticism.
I don’t see why Steve needs to get any “hotshots” because this is just stupidity. This fellow Wood is so wrapped up in the complexity he has manufactured that he can’t see the obvious right in front of his face.
His responses to Steve are outright unbelievable. No one held the two beliefs simultaneously, yet he claims it as a central finding? This is getting weirder and weirder all the time.
Was it two instances of “strong agree” on the faking death explanation, or just “agree” of any kind? I got the impression from Steve’s post that it was any kind of agreement.
Steve: both were weak agreement. 5 on a 7-point scale.
And if the responses reflect the respondents’ position that the cause of death was murder but they are not sure by whom, it’s difficult to claim they are holding contradictory opinions at all.
Something that went through my mind the moment I read the paper. Partly for this reason I don’t have problems with 7-point scales per se, because they allow respondents to hedge their bets, which may be rational given certain kinds of evidence. What I think might be interesting to look into (though I’m no expert in polling to know how) are those who not only doubt the conventional story in some area but are certain of the real culprits. Intuitively I’d expect too much certainty to correlate with lack of judgment elsewhere. But each of these stories – Diana, autism/vaccine, 9/11, the death of bin Laden – is so different in detail it’s hard to know how to test any such idea. To find out that the one clear contradiction that really stood out, trumpeted by Lew and Mann, was based on a set of zero really does take the biscuit. We who question the paleo pronouncements of Dr Mann because he used contaminated, upside-down sediment data are as unbalanced as a known set of people who believe contradictory things about the death of Diana and viola, that set is empty. Kafka would struggle to do justice to these people.
Richard (just above): voilà!
Yes, the stringed instrument wasn’t quite what I wanted!
Sort of like Schroedinger’s Cat
I say it’s more appropriate to say The cat is Not Dead AND Not Alive
(the true negation of the Cat is Dead or Alive)
“Two people marking a “strong agree” on faking Diana’s death in a reasonable sample size could simply be people accidentally hitting the wrong button.”
Exactly Eric, either that or they just filled in the form for fun or were fake respondents.
Funnily enough a couple of potential candidates for the actual respondents sprang to mind immediately.
This explains how advocates are able to equate “climate denialists” with “flat earthers” without ever finding a “flat earther” who was also a “climate denialist”.
So much for peer review.
The authors are essentially pursuing conspiracies that few believe.
Having very little depth in statistics, I often find it hard to fathom the jargon that gets thrown around in these discussions. So it is perhaps arrogant of me to harbor the suspicion that apparently knowledgeable disputants often don’t really understand what they think they know about the subject–or at least that they fail to reflect upon that understanding before they take their positions.
For me that suspicion arises in discussions even of ostensibly simple propositions such as whether a given temperature trend is “significant.” I’ve been given to understand that a temperature trend’s being significant means that some given assumption regarding the temperature-producing process would make the probability less than, say, 5% that the process would produce a trend at least as large as the one observed. Yet my distinct impression is that people arguing over significance very often haven’t thought through what that assumption is, i.e., what the assumption is upon which they’re basing their significance conclusions. To the extent that the assumption is reflected upon, moreover, it often seems to be something like “temperature is insensitive to CO2 concentration,” an assumption from which the way to compute a trend’s probability strikes me as being too open to conjecture.
Again, making such a judgment about a discipline of which I have no command may be a trifle arrogant. But I’m all the more tempted to make it when I see jargon such as that contained in Wood’s statement being used to defend so preposterous a proposition.
Professor Lewandowsky has burst into more print in the last few days.
http://theconversation.com/look-out-for-that-turbine-climate-sceptics-are-the-real-chicken-littles-19873
The host is a blog named “The Conversation” that has the support of a number of universities and some bodies such as CSIRO and BOM in Australia. One cannot write a lead essay for this blog if not affiliated with a university or related group.
In the context of Steve’s essay here, a past offering by the Prof was titled http://theconversation.com/the-false-the-confused-and-the-mendacious-how-the-media-gets-it-wrong-on-climate-change-1558
I feel weird. I had contacted Michael Wood about two weeks ago because I came across his article via Stephan Lewandowsky’s, and I found the paper’s conclusions remarkable. He and I exchanged a few e-mails, I got the data set from him (after Steve McIntyre got him to release it, I believe), and I proceeded to begin analyzing it.
My initial impressions were highly critical so I decided to take my time looking into the data to be sure I understood what was going on. This morning I finished analyzing the data, and I e-mailed Wood with the conclusions I had reached. A couple hours later, this post went online.
I could have saved myself a lot of effort if I had just sat back and done nothing.
“I e-mailed Wood with the conclusions I had reached.”
I suggest you post them here.
My conclusions are in the same line as McIntyre’s. I didn’t bother highlighting the small sub-population sizes because I felt the issue was more general than that. Here’s the e-mail I sent him:
I can’t attach the zip file I sent him to this post, but if need be, I can post it somewhere. It’s really just a version of something Steve did a while back for Lewandowsky’s Moon Landing paper. I had pointed out the same oddity before, but at the time, I didn’t know how to demonstrate it concisely. When I saw McIntyre demonstrate the problem mathematically, I realized how easy it is to test the origin of correlation via contingency tables.
Well done Michael. Independent corroboration is always useful.
Very nice, Brandon. I can imagine the recipient reading it and segueing from WTF to OMG as he does so. But I have an active imagination.
snip – sorry. I know that you’re using a religious example purely as an example, but it’s against strictly enforced blog policy.
Here’s a short script showing first the replication of Wood et al Table 1 and then the contingency table with zero respondents holding both propositions.
The actual choices for “Rogue Cell” by the two persons who picked 7 (on a scale of 1 to 7 with 4 being neutral) were a 1 and a 2 on the same scale. Nobody gave the idea of Diana faking her death a value higher than a 5 (only 2 did, and neither of them indicated that Rogue Cell had any credibility).
Even though I find the idea of calculating simple correlations on categorical variables using their arbitrary 1 to 7 “names” unscientific, if you combine the 1 and 2 categories for each categorical variable, the correlation coefficient drops to 0.05. If you further combine the disbelief 1, 2 and 3 categories, the correlation becomes negative: -0.05. Steve is quite right on this topic.
I might add that the sample was chosen as follows:
This is hardly a population upon which to base a robust conclusion on believing dueling conspiracies.
No prob. I didn’t realize it was that off-limits here. I’ll modify the comment slightly to remove that. I hope free market/communism is an acceptable comparison (I picked these since Lewandowsky discussed them in his paper).
As I said above, I feel the problem here is more general. This is part of an e-mail I sent to another person before seeing this post. I used data from the Wood et al paper (I filtered out two entries which had NULL values because of problems reading the file if they were included). I just changed the description given for the data to demonstrate the problem:
A correlation coefficient cannot be used like it is in the Wood et al paper, the Lewandowsky et al papers and a number of other papers I’ve seen. These papers take correlations that could indicate more than one thing and draw conclusions based upon simply ignoring some of the possibilities.
It’s crazy. These authors are using a methodology that could literally draw conclusions about groups the authors have no information on. The authors could survey women about their views and draw conclusions about the views men hold. They could survey democrats and say their results prove republicans are insane.
Taken to the extreme, they could conclude anything about anyone by studying anyone but the people they’re interested in.
This also means the mean age of participants at the time of Diana’s death was 4 years old. Many may never have cared about this story and consequently be very aware they don’t really know much about the issue. In that context, not discounting theories when you have no knowledge makes sense even if those theories are mutually exclusive. You know only 1 theory can be true – but you don’t really know which one is. So you can’t confidently state any specific one is false.
I should note: I am 54 and I never followed this story. Most of what I know comes from seeing the front of tabloids while waiting in line at the grocery store or a little of what was covered on television at the time.
my guess is that there must be some age differentiation in the JFK assassination, the 50th anniversary of which is being commemorated. I was 16 when it happened. It was the Tuesday before our junior football team (UTS) was playing Leaside in the league final on Thursday. I was a very skinny outside linebacker. Our star halfback was named Keith Kennedy. We were just coming from Phys Ed class, when someone announced that Kennedy had been shot. I was dumbfounded: we were an academic school; no one from our school had ever been shot and our prospects for the football final were in disarray. You can imagine the reaction of a 16-year old football player when I learned that it was John F Kennedy who had been shot, and not Keith Kennedy.
Brandon, if you have time, try looking at the 150,000 deaths due to climate change, cited in the latest Lewandowsky tirade. It goes to Patz 2005 and then to a 2002 World Health Organization report. The numbers in the appendix seem to appear out of thin air. It would be worthwhile asking the authors how they got them.
Interestingly, I tried to track that number down some time back. I forget what brought it to my attention, but I never did find an exact source for it. One thing I did note is the WHO source is often cited along with McMichaels 2004. This document has quite a bit of discussion in it, and I never took the time to look through it in detail. It might be able to shed some light on the matter.
I gave up reading it after about the dozenth time I saw them say they “modeled” effects of climate change. We can’t get reasonably accurate predictions from GCMs on global temperature levels. I don’t have any faith in models which receive far less attention.
nIPCC looked at this too.
http://nipccreport.com/articles/2011/may/24may2011a3.html
Brandon – I think your McMichael reference is likely the correct source. See page 303 which details climate related mortality by cause and region. Globally it provides an estimate of mortality of 27.82 per million.
With a world population of 5.5 billion the deaths would be 154K.
oops, rented fingers. Page 1606 for the table.
We should not spend all our time on fighting an obvious nonsense. Dr. Lewandowsky can produce it easily five times faster.
“Dr. Lewandowsky can produce it easily five times faster.”
Which is exactly why we need reviews like this. It is true that we wouldn’t be able to keep up, but there are new minds opening up to articles like this every day and articles like this should bubble up to the top of RSS feeds and search results regularly to catch them.
Re: Steve McIntyre (Nov 7 18:53),
I’ve also tried several times to look into this figure. As Brandon said, the source of the 150,000 deaths is the WHO 2002 Report “Reducing Risks, Promoting Healthy Life” http://www.who.int/whr/2002/en which says (p72):
But as noted above, there’s no source or methodology cited in that WHO report.
Interestingly, a later WHO report “Global Health Risks” (2009) http://www.who.int/healthinfo/global_burden_disease/GlobalHealthRisks_report_full.pdf gives a lower estimate for 2004 (141,000 or 0.2% – p.50) and puts climate change as the lowest among the risk factors considered in the report (p.23). The equivalent statement to the 2002 report is:
That report cites McMichael et al (2004) http://www.who.int/publications/cra/chapters/volume2/1543-1650.pdf (also cited by Patz et al., Nature 2005) which appears to be a description of the methodology:
McMichael et al. is 108 pages, and I’ve not read it in detail. It seems to be trying to detect the health effects of climate change against the counter-factual of no climate change, and appears to ignore adaptation. As deaths from e.g. malaria and diarrhoea are falling worldwide (though still sadly very high), they are presumably saying that they would have fallen slightly less were it not for the 0.3 deg C temperature rise since 1961-1990 (why that baseline?).
(Malaria and diarrhoea trends here http://www.healthmetricsandevaluation.org/publications/summaries/global-malaria-mortality-between-1980-and-2010-systematic-analysis http://www.unicef.org/media/files/Final_Diarrhoea_Report_October_2009_final.pdf)
Re: Ruth Dixon (Nov 8 03:51), I should have written “fallen slightly more” (rather than “less”) in the penultimate paragraph.
McMichael et al. 2004 is a chapter in a WHO report – all of this is in the ‘grey’ literature, which is ironic given Lewandowsky et al.’s quote of the WHO estimate just before a sentence about peer-review:
Saying that a number has been quoted in Nature does not make the number ‘peer-reviewed’!
Oops – I’ve replied to my comment which is still in moderation. I’m sure it will all become clear.
McMichael 2004, linked by Ruth, contained some description of their methodology, whereas the 2002 report only contains tables in the Statistical Appendix. Here is Table 20.16 of McMichael 2004. I’ll probably start a separate post on this.
Regarding malaria alone, this publication asserts that the closeness of people in dwellings is a major mechanism for spread. Because this effect is newly estimated, it might be necessary to revise the earlier WHO figures down.
http://onlinelibrary.wiley.com/doi/10.1111/rssa.12036/abstract (DOI: 10.1111/rssa.12036)
(Joint author Prof Ross McKitrick is of course well known here. There is more info on WUWT, to whom h/t).
Steve McIntyre & Brendon
Compare Indur Goklany’s methodology and evaluations of WHO on climate deaths vs deaths from biofuel policies in “Could Biofuel Policies Increase Death and Disease in Developing Countries?”, Journal of American Physicians and Surgeons, Vol. 16, No. 1, Spring 2011, pp. 9-13.
“One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.”
Now that’s a safe assumption. You could even say it is axiomatic.
Just go and have another look at the Lewny’s YouTube videos and watch the eyebrows flickering. Never mind the narcissism, feel the Lewniness.
So, in their endeavours to accuse conspiracy theorists of drawing illogical conclusions it is they themselves who are guilty of the illogical conclusions.
Could this be a form of psychological projection one wonders!
Steve,
Reading your post brought a (big) smile to my face.. thanks.
This is just sad. The fact that they will now resist recognizing this ridiculous error is even more sad.
Yes, Nicolas. Sad, ridiculous, and I would add reprehensible. Not high-level stuff here. It’s like a lab assignment in Stats 101. The only difference is that the Stats 101 student would suffer consequences for this error. Here it appears wagons are circled and the error gets propagated to serve an agenda.
re: “However, drawing conclusions from a subpopulation of zero does take small population statistics to a new and shall-we-say unprecedented level.”
Uh oh. I imagine the climate science community can reference that statistical usage as precedence and really run with it.
There are inept climate “scientists”, as well as inept “psychologists”. Life is too short to argue with all of them. Nevertheless, it is a worthy undertaking, thank you, Steve.
A couple of quotes spring to mind. As Lew might say –
“Why, sometimes I’ve believed as many as six impossible things before breakfast.”
And the data would reply –
“I know you think you understand what you thought I said but I’m not sure you realize that what you heard is not what I meant.”
With apologies to Charles Lutwidge Dodgson and Alan Greenspan.
Frontiers in Psychology may have removed their link to Lewandowsky’s Recursive Fury, but they have not withdrawn it. It is still available from PubMed here. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3600613/
It has accumulated 4 citations from other PubMed articles. (Google Scholar shows 3.) If Frontiers never withdraws the paper, it will still be kept alive in PubMed and will probably accumulate more citations since it can be found in Google Scholar.
This past July Michael J. Wood and Karen M. Douglas published a new paper titled: “What about building 7?” A social psychological study of online discussion of 9/11 conspiracy theories
Their new paper cites their Wood(2012) paper numerous times and also cites both Recursive Fury and the Moon Landing papers.
Wood may not want to admit error in their 2012 paper due to it having a knock-on effect with respect to their newer paper.
I so want to read their new paper. 🙂
Guess who was one of the reviewers for this new paper… Yup, Lew was 😉
http://www.frontiersin.org/personality_science_and_individual_differences/10.3389/fpsyg.2013.00409/abstract
Here’s a link to the full “What about Building 7” paper:
http://www.readcube.com/articles/10.3389/fpsyg.2013.00409
All my Christmases at once!
Ha! Merry Christmas Richard! I imagine another Lew paper doing an analysis of “climate change denier” websites saying ‘something’ about them, being reviewed by one of these authors. You know, peer reviewed and all…
I sometimes think that it’s beneficial for the skeptical side of the climate argument for someone as ridiculous as Lewandowsky to be involved in the debate over climate science and related fields, but then I think how much higher the level of discourse would be if so-called “scientists” like him were not involved.
Thank you, Steve.
This reminds me of Hempel’s Paradox. Philosophers “know” that supporting instances for the proposition “All ravens are black” can be found by laborious fieldwork observing black ravens, or by the much more relaxing method of looking around your classroom or office and compiling lists of non-black non-ravens.
There is also the point that when you ask about ‘belief’ on a scale of 1 to 7 you are not really asking the respondent to report a belief, but to assign a probability. If I responded 1/7 to faked death, 1/7 to murdered by MI6 and 5/7 to simple accident, there is no psychological pathology indicated; it is just my view of the probabilities. To test ‘belief’ requires a Yes/No answer.
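A toy sketch of that reading (the mapping from ratings to probabilities is this comment’s illustration, not anything in Wood’s paper): normalising the three ratings yields exactly the fractions above, a coherent probability distribution rather than a contradiction.

```python
# Hypothetical 1-7 ratings for three mutually exclusive explanations,
# read as relative weights and normalised into probabilities.
ratings = {"faked own death": 1, "murdered by MI6": 1, "simple accident": 5}
total = sum(ratings.values())                       # 7
probs = {k: r / total for k, r in ratings.items()}  # 1/7, 1/7, 5/7

# The three "beliefs" sum to one: no contradiction, just a distribution.
assert abs(sum(probs.values()) - 1.0) < 1e-9
print(probs)
```

Nothing stops a respondent from spreading credence thinly across alternatives they all doubt; only a Yes/No format forces a genuine commitment.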
So 114 young females plus 23 young males when asked a set of obviously prejudicial questions which confuse ‘belief’ with probability (see above), produces the answers already sought by the questioner and thus confirms his own biased viewpoint about world opinion on other unrelated matters.
My stats professor would have failed me if I tried to submit this to anywhere for any reason.
Strange that others do not accept that viewpoint.
Steve: this sort of complaint can be made about many studies. The point of the present discussion is the more specific problem discussed in the post.
To add a huge insult to injury, they are at KEYNES College. Poor Keynes must be turning in his grave to be associated in any way with this rubbish.
To his boss at the Experimental Psych Faculty at Bristol Uni… sent just now. I like to keep reminding them of what it is they have employed 🙂
Dear Professor Noyes,
I wrote to you a while back regarding your in-house charlatan and conspiracy theorist, Stephan Lewandowsky. I gather he has had to retract one of his UWA papers as it was defamatory; this does not surprise me; as with his co-conspirator, Michael Mann, abuse is his first response to criticism.
I thought you might be interested in a further debunking of his work, by Steve McIntyre, the man responsible for the comprehensive debunking of Michael Mann’s fake “Hockey Stick” paper.
https://climateaudit.org/2013/11/07/more-false-claims-from-lewandowsky/#more-18571
Excerpt
“Lewandowsky’s assertions about Diana are based by an article by Wood et al. entitled “Dead and Alive: Beliefs in Contradictory Conspiracy Theories”. A few months ago, I requested the supporting data from Wood. Wood initially promised to provide the data, then said that he had to check with coauthors. I sent several reminders without success and eventually without eliciting any response. I accordingly sent an FOI request to his university, accompanied by a complaint under any applicable university data policies. The university responded cordially and Wood immediately provided the data.
The most cursory examination of the data contradicted Lewandowsky’s claim. One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.”
Do please read this, as it demonstrates just how poor Lewandowsky’s work is. He is not an academic, rather a man with an obsessive mission to smear those who disagree with him, who uses academia to give him a cloak of respectability.
I find it quite astonishing that you are happy to employ someone who may well bring down opprobrium on your department as a result of his shoddy work, which contributes nothing to the climate science debate, or indeed, the “science” of psychology, but rather exposes his own unbalanced and obsessive nature.
I write this again as a taxpayer who is appalled that I should have to fund this excuse for an academic.
My very best wishes to you
Jeremy Poynton
Steve: the problem with this sort of letter is that it’s far too angry. There’s zero point writing this sort of angry missive as it detracts from substantive points. You also have to take care to be microscopically accurate in any factual assertions and you haven’t been: Lewandowsky’s Fury paper has not (to my knowledge) been “retracted” to this point, though it is being reconsidered by the publisher.
While following the Yellow Brick Road — from your post I ran across this additional article by the good Dr.Lewandowsky …
http://theconversation.com/look-out-for-that-turbine-climate-sceptics-are-the-real-chicken-littles-19873
On the other hand, people steeped in the same culture suffer 216 terrifying symptoms at the sight or sound of a wind turbine, thereby experiencing a risk that is unknown to medical science.
I have been studying Wind Turbines for some time — more on the energy side than the medical side — but I found the quote amusing. They seem to have cataloged a bewildering array of symptoms… He quotes Simon Chapman… (Actually it was a colleague that wrote the original paper — but Chapman attached his name and the rest is publication history…)
He seems to enjoy embellishment.
There is a chair at East Anglia waiting for him — I’m Sure — endorsed by Monty Python no less — I’m sure…
WillR,
Then there is the list of things caused by global warming, last updated over a year ago when the tally was 883 culprits.
http://www.numberwatch.co.uk/warmlist.htm
“Another bogus claim from Lewandowsky would hardly seem to warrant a blog post, let alone a bogus claim about people holding contradictory beliefs.” and
“That Lewandowsky should make untrue statements will hardly occasion surprise among CA readers. However, drawing conclusions from a subpopulation of zero does take small population statistics to a new and shall-we-say unprecedented level.”
Classics, both!
I’ve kinda gotten to the point that reading anything about Lewandowsky is a whole lot like inspecting the sole of my shoe to see what I’ve just stepped in.
You know it’s bad, and you don’t really want to deal with it, but you have to know what it is to deal with it properly.
Sorry, Steve, but you’re missing something fundamental, distracted by the stats. Wood’s claim that the beliefs are contradictory is entirely true, but only for an absolute, binary definition of belief. It is plain, though, that Wood has not used that definition of belief everywhere else in his paper, since he talks about degrees of belief.
It is simply incorrect to describe as contradictory the beliefs he is talking about, unless they are held absolutely.
In fact, the opposite is true. It would be contradictory not to admit the possibility of other non-mainstream theories about the death of Diana if one believed there was a strong possibility that one non-mainstream explanation is true, since holding that belief requires one to have already discarded a strong belief in the standard explanation.
Personally, I assign a pretty high probability to the official version of events being at least broadly true, and an extremely low one to any conspiracy theory – but, I assign fairly similar probabilities to all three of the possibilities mentioned.
The references cited in Lewandowsky’s papers are almost as fascinating as the papers themselves, and prove that Lew is not alone in the kind of science he practices.
There are papers on the speeches of Ayatollah Khomeini; the Egyptian uprising; schizophrenia; homophobia among men who have sex with men; and sexual (dys)function and the quality of sexual life in patients with colorectal cancer.
One of Lew’s favourite papers is a study of antisemitism among ethnic Malays in Malaysia which found, unsurprisingly, that there is very little antisemitism in Malaysia, which, the author surmises, is because there are very few Jews in Malaysia.
The author of this paper was cited six times in “Moon Hoax” and by a strange twist of fate got to peer review “Recursive Fury”. He has a peer reviewed paper on the appreciation of the female bottom which is a classic. See
http://geoffchambers.wordpress.com/2013/04/17/lews-guru-and-the-science-of-a-beautiful-you/
Steve,
Understood – and my understanding from a comment by Geoff Chambers (at Bishop Hill?) was that Lewandowsky HAD withdrawn a paper for the reason that it was potentially defamatory. Regardless, he and his like need to be reminded that they are under the microscope.
The original version of the paper was withdrawn because of a complaint by Jeff Id over Recursive Fury misrepresenting him. A new version was then published. This version was never withdrawn. It was merely removed from the journal’s site while it was (supposedly) undergoing examination in response to other complaints.
There’s an additional detail involved. When the paper was posted a second time, it was changed to address Jeff Id’s concern. However, the journal decided to also address another user’s (Foxgoose’s) complaint. Somehow, that change was only made to the PDF version of the paper. It was not made to the browser version available at the journal’s website. This meant there were two different versions available from the journal at the same time.
I believe that means the tally is one version of the paper was withdrawn, another version was inadvertently published, and the third version is “temporarily” unavailable while it undergoes an “expeditious” examination that has lasted over half a year.
Is it possible that Alan Sokal created climatology and psychology as a joke and they got a little out of hand?
It is fascinating that these guys seem to strongly PREFER the explanation that
rather than the much more commonsense explanation that
This research says a lot more about the preconceived notions; lack of statistical training; complete lack of common sense; and research biases of the sociologists working in this area than it does about conspiracies surrounding Diana’s death.
Ironically these people who purport to be the experts on conspiracy theories are behaving exactly like conspiracy theorists themselves. They have constructed a view of reality which is utterly ridiculous, flies in the face of common sense, and is unsupported by their own data, and yet they are vigorously defending their viewpoint against anyone who attempts to argue rationally with them.
Wow. Ignoring the blindingly obvious while hand-waving about statistics they got out of a statistics mincing machine.
Perhaps someone needs to write a paper on the psychology of people who write peer-reviewed papers and then refuse to accept simple and clear evidence that they have made an error – when that error presumably means at a minimum publishing a correction. How about “Motivated rejection of the bleeding obvious” as a title.
Lewandowsky, making the Royal Society proud.
Interesting … Michael Wood was one of the original reviewers of the Recursive Fury paper. I emailed and asked him, to the extent he was comfortable, if he would comment on his being removed as a reviewer after initial publication.
He cordially and timely responded indicating he had issues with the paper, asked for revisions, but they were not made, hence he asked to be removed as a reviewer.
Recursive Fury also listed Wood’s paper as a reference.
It is interesting to see Lewandowsky involved in reviewing a Wood paper. While it makes sense based on shared interests, I had the distinct impression Michael Wood was not enamored of the Lew crew, but I certainly could be wrong.
Steve – have you asked Wood to participate in the discussion here?
Innumeracy strikes again. Multiplying and/or dividing by 0 is a waste of time.
I get all this, but I can still hardly believe Lewandowsky and Wood are so crazy as to make their assertion.
The two propositions in play are not just contradictory in the sense of being mutually inconsistent explanations of a single fact. They lead to different facts. If Diana faked her own death, she is still alive. If MI6 killed her, she is dead. No one could possibly believe both at the same time, and it takes a special type of obtuseness to believe they could.
Such ‘special … obtuseness’ has been repeatedly demonstrated in spades by Lewandowsky and his acolytes and lesser lights (not to mention his Mannomatic™ superlights), has it not?!
I tried posting this last night, lost in moderation I guess.
Another problem is that the propositions are not contradictory. If Diana faked her own death, MI6 could still kill her. It would be easier for MI6, because she is officially dead already. If you conduct a study into mutually contradictory beliefs, the propositions you present must be opposites.
Once alerted to a possible mistake, there is still the matter of what an academic does next. Having got past the initial heart palpitations and rosy cheeks of acute embarrassment, I guess the first move is to check whether the criticism is valid.
It is.
Next comes a chance to show your quality. Do you i) bluster, dissemble, tilt at windmills, set fire to straw men, rearrange the furniture, try to prove you were right anyway by collecting more data and analysing it differently, etc, or ii) accept the error and make efforts to address it by issuing a correction?
Such mistakes actually offer salutary lessons and are excellent material for lectures: here’s what I did… anyone spot anything wrong… here’s what happened next… this guy came out of nowhere and explained that I’d made a huge error… imagine my embarrassment… here’s why you can’t do what I did… lessons learned…
A completed journey that included a shipwreck probably makes better dinner conversation than one that went smoothly!
Another interesting aspect to this incident is that Brandon Shollenberger independently became curious enough about the result to request the data and noticed the error just as quickly as me and, like me, wrote the author. Whereas none of the peer reviewers, none of the readers of the journal nor Lewandowsky citing the result thought to check it.
Steve,
And that is what makes these academics so angry. You and a growing list of others are asking simple questions that show how lacking their work is, and how shallow their claims of superiority actually are.
It’s definitely interesting. In addition to what you point out, Lewandowsky’s Moon Landing paper used SEM modeling which is built upon the methodology Wood used. I pointed out the problem with it, observing:
A short while later, a blog post on this site went up discussing the matter in far more detail. I e-mailed Michael Wood about a problem with his methodology, and a short while later, this post went up. The timing is intriguing.
Re: Jit (Nov 9 04:56),
My wife and I attended a seminar by a recent “world champion of Speaking” and one of the points he made is not to tell jokes any more but instead to tell stories about your failures. His point was that people get bored if you use a joke they’ve heard before, but a failure, even if it’s one others have made before, is still your own.
I think this is why I fail to find the team (and others of their ilk) funny.
Part of this is that Wood (and Lewandowsky) don’t see the asymmetry between an explanation in which one does believe and explanations which one disbelieves. Belief in one explanation for Diana’s death would exclude belief in alternative explanations. But one could disbelieve any number of explanations, with differing degrees of confidence, without contradiction, and without this saying anything about what one did believe – if indeed one believed anything.
Unfortunately Wood set up his 7-point belief scale without realizing this. He has interpreted degrees of unbelief as just the inverse of degrees of belief.
The error Wood ascribes to what turns out to be a null set of people is actually an error in his own understanding of his methodology.
One can perhaps understand that he might have failed to see the fundamental difference between believing two inconsistent arguments (logically impossible) and disbelieving two inconsistent arguments (logically possible). But only a special type of plonker would insist on concluding that some people are so stupid as to believe Diana is both dead and alive, rather than re-examining his methodology to see why it appeared to produce this absurd result. And how he could fail to see his error even when it is pointed out to him that there is no one in his sample who is as stupid as he says, is remarkable.
Wood and Lewandowsky failed to see that what they were ridiculing is reasonable behavior. There are countless things we are supposed to believe — the cause of obesity is eating too much sugar, for example. The less you believe these things, the more credence you will give (not a lot, but some) to the many alternatives to them (obesity is caused by X, obesity is caused by Y, obesity is caused by Z, and so on). A person who is less sure than the experts that obesity is caused by sugar, will give a little bit of credence to many alternatives. You can call this “the method of multiple working hypotheses” or you can call it common sense.
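The asymmetry described above is easy to demonstrate with a toy simulation (entirely made-up data, not Wood’s): give every simulated respondent a shared level of general disbelief in official accounts that nudges both conspiracy items, and the two mutually exclusive items correlate positively even though not a single respondent endorses both.

```python
import random

random.seed(0)

# Simulated 1-7 Likert responses to two mutually exclusive items
# ("Diana was murdered by MI6" vs "Diana faked her death").
# Each respondent has a general "conspiracy-mindedness" level (1-3)
# that shifts both answers, but nobody actually endorses (>4) either,
# let alone both.
dead, alive = [], []
for _ in range(200):
    mindedness = random.choice([1, 2, 3])          # shared disbelief level
    dead.append(min(7, mindedness + random.randint(0, 1)))
    alive.append(min(7, mindedness + random.randint(0, 1)))

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(dead, alive)
both = sum(1 for a, b in zip(dead, alive) if a > 4 and b > 4)
print(f"correlation = {r:.2f}, respondents endorsing both = {both}")
```

The positive correlation comes entirely from shared degrees of disbelief; the subpopulation actually believing both propositions is zero, exactly as in Wood’s data.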
Are there any other shrinks discussing cagw?
Is Lew the go to shrink for cagw?
There must be some rational person in the profession that could check it out.
Do any others in the shrink business agree with Lew, or is it Mann & co?
cn
I guess creating a generation of neurotics is good for business…
When you’re a shrink.
cn
Mike Wood appears to blog here:
http://conspiracypsych.com/author/disinfoagent/
Interestingly, it would appear he may no longer be with the University. His page is gone:
http://www.kent.ac.uk/psychology/people/woodm/
And he is no longer on the directory:
Click to access phonelist.pdf
On his blog Wood refers to fellow author Karen Douglas as his PhD supervisor, which would explain why he’s moved on from Kent University.
I’ve posted a positive comment at Frontiers under his abstract. His content analysis does something Lew would never dream of doing – apply the same criteria of analysis to both sides of an argument.
He is now a lecturer at the University of Winchester. No, I’d never heard of it either. They do courses in creative writing, fashion, film studies, media studies … and psychology.
I once did a study which conclusively failed to disprove the proposition that no one who denied the grassy knoll theory didn’t also simultaneously lack confidence in critiques of the governments official dimsissal of the possibility that Oswald didn’t act alone.
Conspiracy theorists are just so weird.
Ignorance doesn’t really bother me. Ignorant people getting paid to produce ignorance… now that bothers me.
I posted a comment at Bishop Hill to correct a minor error in its post referring to this article. It might be of some interest here:
Barry Woods commented on a major point I somehow didn’t catch (more here):
Frontiers took down Recursive Fury in response to complaints yet months later accepted and published a paper which cited Recursive Fury. As far as I know, that hadn’t been caught until recently. It puts a whole new light on how Frontiers has handled complaints.
I wonder how Frontiers would respond if someone filed a complaint about this. Would they delete the single sentence in the paper which references Recursive Fury, or would they stand by the citation?
A Scott upthread has discovered a most interesting blog run by Wood and three other English PhD students on the psychology of conspiracy theorising. There’s just one article there about Lew’s work. I won’t link because it seems to land me in moderation, but it’s worth looking at.
Very interesting Geoffchambers and A. Scott. I found a Special Issue of a publication psyPAG put out and in it Lewandowsky is cited 6 times!
Click to access Issue-88.pdf
It was located at this site: http://conspiracypsych.com/2013/09/19/psypag-quarterly-special-issue-the-psychology-of-conspiracy-theories/
I wonder what throws a comment into moderation. I thought I had a clever post (though that’s what I thought), relating this survey to perceptions of the causes of Diana’s death. I suppose I could try keywords, but that’s going to upset folks.
Sue
I’ve made a comment at the PsyPAG quarterly issue article. I mention the matter of Michael Wood’s role as reviewer of Lewandowsky’s “Recursive Fury” paper.
Michael Wood has several articles at this site, but there are none about the Wood Douglas Sutton 2012 “Diana” article which Steve has analysed, as far as I can see.
geoffchambers, the Wood Douglas Sutton 2012 paper is also cited in the “Special Issue” that I linked to along with the “Recursive Fury” paper.
As an expert on being simultaneously dead and alive I take exception to Lewandowsky’s bogus science.
At the point in the “not withdrawn” Recursive Fury paper where the contradictory conspiracy theories are mentioned and Wood et al 2012 is cited, the authors introduce an appropriate label, MbW, for “must be wrong”.
For best results, apply MbW recursively.
I’ve added some more to Sue and A Scott’s discoveries at
http://geoffchambers.wordpress.com/2013/11/11/the-great-psychological-conspiracy-theory-conspiracy/
That is an excellent post Geoff!
Don’t know if someone already commented on this, but page C2 of the Nov. 9 Wall Street Journal had its left column largely dedicated to Wood et al.’s conclusions. At one point it echoes Wood et al.’s highlighting of conspiracy theorists’ inconsistent beliefs.
Yes, good spotting. The WSJ article accepts the subpopulation (N=0) results:
“Conspiracists in the group often endorsed scenarios that were mutually contradictory.” [My bold] Certainly N=0 gives new meaning to the word “often”! Even Wood et al. didn’t go that far.
The subtitle of the WSJ article is “How do we distinguish between critical thinking and nutty conspiracism?” It seems that Sapolsky [the WSJ author] demonstrates a third way, neither conspiracies nor critical thinking.
A well designed poll can include questions that test respondents’ abilities to be polled, picking up such mechanical effects as donkey votes and mutually exclusive answers to like questions (as might arise from a person with little interest in a considered response). This is part of a poll’s quality control and it does not belong in the analysis of responses, except when used to reject some respondents and/or similar mechanical exercises.
(A former CEO asked me to set up an after-hours Corporate polling centre. I learned some of the little-seen aspects of polling from consultants engaged to help get it operating & tested.)
I have nominated, for the Ig Nobel “Statistics” prize,
Michael J. Wood, Karen M. Douglas, Robbie M. Sutton and Stephan Lewandowsky
for taking small population statistics to a new / unprecedented level
by drawing conclusions from a subpopulation of zero, and to
Stephen McIntyre for discovering this remarkable breakthrough by his painstaking dogged
persistence in auditing Lewandowsky and Wood et al. ”
I ask for someone to second this nomination to:
IG NOBEL NOMINATIONS
c/o Annals of Improbable Research
PO Box 380853, Cambridge MA 02238, USA
or to
marca@improbable.com
http://www.improbable.com/about/SubmissionGuidelines.html
🙂
Andrew Gelman, a very able statistician with an interesting blog, has a post about Wood et al here – without picking up the zero subpopulation lunacy. The post is by Gelman’s associate, Phil Price.
Steve, thanks for the tip. I have asked them to validate your findings and to second this nomination!
Re: David L. Hagen (Nov 28 20:44),
The nomination should be re-worded and go only to Steve.
All recipients are given an opportunity to quietly decline the award, as has happened several times. Anyone who would be embarrassed or professionally lose face by receiving it would not win it.
Some others have noted that the structure of the analysis of these polls with multi-point Likert scales is a bit like fitting linear regressions to the various multi-point scales and then correlating; the mere fitting of the regression (if not constrained through the origin in some way) can generate a nonzero number when that number should be zero.
This leads me a little off-thread to a problem whose solution I have been seeking for years. There are several publications where a climate index like temperature is interpolated or extrapolated, based on correlations between weather station data at different separations. BEST did this, see http://www.geoffstuff.com/Correl.
These correlations are too high. I’ve looked at my home town weather record (Melbourne) and correlated observations 1 day, then 2 days, 3… apart, on the loose assumption that this might be comparable to a weather system moving a distance in 1 day, 2 days, etc.
See http://www.geoffstuff.com/Extended%20paper%20on%20chasing%20R.pdf
In short, while BEST have correlation coefficients above 0.85 for distances up to 600 km, I was hard pressed to find any correlation this high by lagging data from one station.
Question is, what gets the correlation so high on the BEST type graph?
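For what it’s worth, the lagged-correlation calculation described in this comment is straightforward to reproduce. The sketch below uses a synthetic AR(1) “daily anomaly” series in place of the Melbourne record (the series and the 0.8 persistence parameter are made up for illustration) and shows the correlation decaying with lag, the behaviour described above for single-station lagged data.

```python
import math, random

random.seed(1)

# Synthetic AR(1) "daily anomaly" series standing in for real station
# data; 0.8 is an illustrative persistence parameter, not a fit.
temps = [0.0]
for _ in range(2000):
    temps.append(0.8 * temps[-1] + random.gauss(0, 1))

def lag_corr(x, k):
    """Pearson correlation between the series and itself k days later."""
    a, b = x[:-k], x[k:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

for k in (1, 2, 5, 10):
    print(k, round(lag_corr(temps, k), 2))
```

For an AR(1) process the lag-k correlation falls off roughly as 0.8^k, so sustained correlations above 0.85 over large separations would need far stronger persistence than a single station’s day-to-day record typically shows — which is essentially the puzzle posed here.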
Stephan Lewandowsky in interview with The Conversation yesterday:
The professor links to the Michael Wood/Karen Douglas paper discussed in this thread. Empty sets can go far in the new psychology. Striking indeed.
I read the Wood source. I note that the allowed responses to the various conspiracy theories are grades of dis/agreement from 1 through 7. Having answered a number of questionnaires in this format, I have always found it awkward to answer on a topic about which I know very little. In the absence of a “None of the above” box to tick, the only way to express no opinion is to tick “4: neither agree nor disagree.” Obviously this response covers a variety of opinions, indicating either a level of compliance with the assertion in question, or a real lack of interest in the topic, or simply: “I haven’t got a Scooby Doo.” Only in the first case does it make any sense to rank my level of belief.
BTW I notice that Wood has a climate change conspiracy theory in there. Where would a “moderate warmist” who believed that some scientists or politicians had exaggerated the issue put his or her tick? Luckily we have Wood’s assurance that the Diana theories were always the intended major focus, so we can totally exonerate him and his colleagues from any hint of conspiracy to implicate mild skeptics as loony conspiracy theorists. (Especially since positive correlations between beliefs in climate change conspiracy and other conspiracies are not mentioned at all?)
8 Trackbacks
[…] comes to mind, as he does on occasion take on the stats and methodology underlying papers that are peripheral or tangential to pure climate related […]
[…] Hat tip to commenter David L Hagen from here. […]
[…] https://climateaudit.org/2013/11/07/more-false-claims-from-lewandowsky/#more-18571 […]
[…] […]
[…] https://climateaudit.org/2013/11/07/more-false-claims-from-lewandowsky/ […]
[…] based on a sample of 10, carefully selected from a biased sample of 1200. In his most recent work he draws a conclusion based on zero data points. Why am I not […]
[…] are lots of interesting posts on this: Steve’s at Climate Audit, Judy Curry, Matt Briggs, Warren Pearce, WUWT and related posts by Lucia and Ben […]
[…] McIntyre, with some difficulty, obtained the data. There was a reason for the author being a bit circumspect. McIntyre […]