More False Claims from Lewandowsky

Another bogus claim from Lewandowsky would hardly seem to warrant a blog post, let alone a bogus claim about people holding contradictory beliefs. The ability of many climate scientists to hold contradictory beliefs at the same time has long been a topic of interest at climate blogs (Briffa’s self-contradiction being a particular source of wonder at this blog). Thus no reader of this blog would preclude the possibility that undergraduate psychology students might also express contradictory beliefs in a survey.

Nonetheless, I’ve been mildly interested in Lewandowsky’s claims about people subscribing to contradictory beliefs at the same time, such as the following:

While consistency is a hallmark of science, conspiracy theorists often subscribe to contradictory beliefs at the same time – for example, that MI6 killed Princess Diana, and that she also faked her own death.

Lewandowsky’s assertions about Diana are based on an article by Wood et al. entitled “Dead and Alive: Beliefs in Contradictory Conspiracy Theories”. A few months ago, I requested the supporting data from Wood. Wood initially promised to provide the data, then said that he had to check with coauthors. I sent several reminders, eventually without eliciting any response. I accordingly sent an FOI request to his university, accompanied by a complaint under any applicable university data policies. The university responded cordially and Wood immediately provided the data.

The most cursory examination of the data contradicted Lewandowsky’s claim. One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.

A Subpopulation of Zero
Within the Wood dataset, only two (!) respondents purported to believe that Diana faked her own death. Neither of these two respondents also purported to believe that MI6 killed Princess Diana. The subpopulation of people that believed that Diana staged her own death and that MI6 killed her was precisely zero.

Lewandowsky’s signature inconsistency was completely bogus – a result that will come as no surprise to readers acquainted with his work.

Wood et al 2012
Lewandowsky appears to have uncritically relied on results published in Wood et al 2012, which stated in its running text:

People who believed that Diana faked her own death were marginally more likely to also believe that she was killed by a rogue cell of British Intelligence (r = .15, p = .075) and significantly more likely to also believe that she was killed by business enemies of the Fayeds (r = .25, p = .003).

Wood et al highlighted the inconsistency between conspiracy beliefs in their abstract, which stated:

In Study 1(n= 137), the more participants believed that Princess Diana faked her own death, the more they believed that she was murdered.

However, as with the official MI6 example, neither of the two respondents who purported to believe that Diana had faked her own death also believed that she had been murdered by Fayed’s business enemies or by a rogue cell within MI6.

Here’s how the correlations arose. Wood had used a 7-point Likert scale (ranging from Strongly Disagree to Strongly Agree), and people who expressed strong disagreement on one point were more likely to express strong disagreement on a related point. What Wood ought to have said is that participants who strongly disagreed that Diana faked her own death were more likely to strongly disagree that she was murdered. This does not imply that people who believed that Diana faked her own death were “significantly more likely” to believe that she was also murdered by Fayed’s enemies.

This is very elementary.
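The phenomenon is easy to reproduce in a simulation. The sketch below (Python, with invented responses rather than Wood’s actual data; the variable names are mine) builds two 7-point Likert items on which not a single respondent agrees with both propositions, yet the Pearson correlation comes out positive simply because degrees of disbelief move together:

```python
# Simulation: a positive Pearson correlation with ZERO respondents agreeing
# with both items. Scale: 1 = strongly disagree ... 7 = strongly agree,
# 4 = neutral. All responses are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 137  # same nominal sample size as Wood et al Study 1

# Most respondents disbelieve both items, with a shared strength of disbelief.
base = rng.integers(1, 4, size=n)  # shared disbelief level: 1..3
item_a = np.clip(base + rng.integers(-1, 2, size=n), 1, 3)
item_b = np.clip(base + rng.integers(-1, 2, size=n), 1, 3)

# Two lone agreers on item A, who strongly disagree with item B.
item_a[:2] = 5
item_b[:2] = 1

r = np.corrcoef(item_a, item_b)[0, 1]
both_agree = int(np.sum((item_a > 4) & (item_b > 4)))
print(f"r = {r:.2f}; respondents agreeing with both items: {both_agree}")
```

Despite the empty agree/agree cell, r is positive: the correlation is carried entirely by how strongly respondents disagree, which is the situation in the Wood data.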

Prior to posting, I wrote to Wood, explaining the problem as follows:

For example, in your Table I, you reported a “significant” correlation of .253 between the proposition that Diana faked her own death and the proposition that she was killed by Fayed’s enemies. I don’t know whether you examined the contingency table between these two propositions, but it is an important precaution and, in my opinion, conclusively contradicts the phenomenon that you had in mind. here is the contingency table (4 – neutral on the 7-point Likert scale):

Only a few respondents (6) purported to either believe that Diana had faked her own death (2) or that she had been killed by Fayed enemies (4) – and NONE believed both simultaneously. Your “significant correlation” arises not because people held inconsistent conspiracy beliefs, as you state, but because of differing confidence in their disbelief. Respondents were much more confident in their belief that Diana did not fake her own death than that she was not killed by Fayed’s enemies. In addition, respondents may express their disagreement with different degrees of emphasis. This is an entirely different phenomenon than believing in mutually inconsistent results.

Rather than conceding this seemingly obvious point, Wood denied in a reasonably detailed response that his results were affected. First, he contested that non-normality was an issue, because Spearman correlations yielded values similar to or higher than the Pearson correlations. While true, this observation is obviously unresponsive to the fact that no respondents simultaneously believed that Diana had faked her own death and had been murdered by Fayed enemies (or a rogue cell, or official MI6). Wood:

First, you raise the issue that there is some non-normality in the data. This is especially apparent in correlations involving the “faked death” item, which participants were highly sceptical about and therefore ended up with a restricted range of responses, as we noted in the text of the paper in our discussion of a potential floor effect. However, nonparametric tests give much the same results as we originally obtained – for instance, using either Spearman’s rho or Kendall’s tau-b renders the previously marginally significant correlation between “fake death” and “rogue cell” significant at the .01 level. Your use of scare quotes around “significant” is quite unwarranted – the relationships are indeed statistically significant, and if anything the use of Pearson correlations in the original paper understates, rather than overstates, their robustness.
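A small check shows why the switch to rank correlations is unresponsive. On data in which the association comes entirely from graded disbelief, with nobody endorsing both items, Spearman’s rho is positive right along with Pearson’s r. The sketch below uses invented responses, not Wood’s data, and computes tie-averaged ranks by hand to stay dependency-free:

```python
# Sketch: rank (Spearman) correlation on data where the association comes
# only from shared strength of *disbelief*. Responses are invented.
import numpy as np

def avg_ranks(v):
    # 1-based ranks, with tied values sharing the average rank
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v))
    sv = np.asarray(v)[order]
    i = 0
    while i < len(v):
        j = i
        while j < len(v) and sv[j] == sv[i]:
            j += 1
        ranks[order[i:j]] = (i + j + 1) / 2  # average of ranks i+1 .. j
        i = j
    return ranks

rng = np.random.default_rng(1)
n = 137
base = rng.integers(1, 4, size=n)                      # shared disbelief: 1..3
a = np.clip(base + rng.integers(-1, 2, size=n), 1, 3)
b = np.clip(base + rng.integers(-1, 2, size=n), 1, 3)
a[:2], b[:2] = 5, 1                                    # two lone agreers, on one item only

pearson = np.corrcoef(a, b)[0, 1]
spearman = np.corrcoef(avg_ranks(a), avg_ranks(b))[0, 1]
print(f"Pearson r = {pearson:.2f}; Spearman rho = {spearman:.2f}")
print("respondents agreeing with both:", int(np.sum((a > 4) & (b > 4))))
```

Both coefficients are driven by the same disagree/disagree cells, so a “significant” rho no more demonstrates joint belief than a significant r does.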

Next, Wood challenged my analysis of the Faked Death/Fayed Enemies pairing as “felicitous”, arguing that a “more instructive” example was the correlation between the rogue cell and Fayed enemies combination.

The second point concerns the interpretation of the correlations. In fact you chose a rather felicitous example for the point you’re making… A more instructive example is the “rogue cell” / “business enemies” correlation, which, not being attenuated by general scepticism about the faked death claim, is more representative of the relationships among the different Diana theories, as is clear from Table 1 of the paper.

However, it was the Faked Death example that Wood et al highlighted in their Abstract and which Lewandowsky has drawn attention to. I didn’t pick it because of perceived weakness, but because Wood and Lewandowsky had themselves promoted it.

Wood also now claimed that because “so few” agreed with the faked death theory, it “does not seem reasonable to expect many people to give high endorsement to both theories”:

as noted previously, very few people expressed a high level of confidence at all in the “faked death” theory – in fact, it’s the least-agreed-with item in the scale. Given that so few agreed with the “faked death” example at all, it does not seem reasonable to expect many people to give high endorsement to both theories.

However, nowhere in Wood et al 2012 is there any explicit statement that only two respondents purported to believe in the Faked Death theory that was highlighted in the abstract. Had readers been aware that only two people purported to subscribe to this theory, they would obviously not expect “many people to give high endorsement to both theories”. Unfortunately, when zero people subscribed to both theories, one cannot justifiably assert that “In Study 1(n= 137), the more participants believed that Princess Diana faked her own death, the more they believed that she was murdered”.

Nor does Wood’s “more instructive” example help their cause. Only two respondents purported to agree that both Diana had been murdered by Fayed business enemies and by a rogue cell. But here’s how Wood et al 2012 had characterized the results:

Similarly, participants who found it likely that the Fayeds’ business rivals were responsible for the death of Diana were highly likely to also blame a rogue cell (r = .61, p < .001).

Given that only two respondents reported subscribing to both beliefs, it is obviously impossible to achieve p < .001 from such a minuscule sample. As with the other pairing, the correlation arises from people who disagree with both propositions, not from people who agree with both propositions. Although this latter point seems self-evident, Wood disputed it as follows, even after I had pointed it out to him:

While this sample was generally sceptical about conspiracy theories in general, the fact that participants’ degrees of disbelief appear to stick together does not indicate that the correlations are simply an artefact of participants’ response styles. This explanation seems particularly implausible given the magnitude of the correlations that are not attenuated by floor effects (e.g., r = .61 for the “business enemies / rogue cell” correlation).

However, while two is more than zero, it is still far below the population necessary to arrive at statistically significant conclusions. Any “statistically significant” correlations arise for the reason set out in my email to Wood: “not because people held inconsistent conspiracy beliefs, as you state, but because of differing confidence in their disbelief”.
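The point can be made mechanical. Each cell of the contingency table contributes a known amount to the Pearson coefficient, so one can see directly which responses drive a “significant” r. The Python sketch below uses invented responses (not Wood’s data) to show the decomposition; on disbelief-heavy data the dominant positive terms come from the disagree/disagree cells:

```python
# Sketch: decompose a Pearson correlation into per-cell contributions.
# Each response pair (x, y) with frequency f contributes
#   f * (x - mean_x) * (y - mean_y) / ((n - 1) * sd_x * sd_y),
# and the contributions sum exactly to r. Data are invented, not Wood's.
import numpy as np
from collections import Counter

x = np.array([1, 1, 1, 1, 2, 2, 2, 3, 3, 5, 5, 1])  # e.g. "faked death" (hypothetical)
y = np.array([1, 1, 2, 1, 2, 3, 2, 3, 3, 1, 2, 1])  # e.g. "Fayed enemies" (hypothetical)

n = len(x)
mx, my = x.mean(), y.mean()
sx, sy = x.std(ddof=1), y.std(ddof=1)

contrib = {
    pair: freq * (pair[0] - mx) * (pair[1] - my) / ((n - 1) * sx * sy)
    for pair, freq in Counter(zip(x, y)).items()
}

r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.3f}; sum of cell contributions = {sum(contrib.values()):.3f}")
for pair, c in sorted(contrib.items(), key=lambda kv: kv[1], reverse=True):
    print(pair, round(c, 4))  # largest positive terms come from the (1, 1)-type cells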

That Lewandowsky should make untrue statements will hardly occasion surprise among CA readers. However, drawing conclusions from a subpopulation of zero does take small population statistics to a new and shall-we-say unprecedented level.

Postscript: Wood’s Diana claim has become something of an urban legend, and citations accumulate. Examples include Thresher-Andrews 2013 (PsyPAG Quarterly) and Brotherton 2013 (PsyPAG Quarterly):

People who endorse one conspiracy theory tend to buy into many others – including theories with no logical connection and, as Mike Wood and colleagues demonstrated, occasionally even theories which directly contradict each other.

Lewandowsky Fury:

In consequence, it may not even matter if hypotheses are mutually contradictory, and the simultaneous belief in mutually exclusive theories – e.g., that Princess Diana was murdered but also faked her own death – has been identified as an aspect of conspiracist ideation (Wood et al., 2012).

Lewandowsky Role (PLOS):

For example, whereas coherence is a hallmark of most scientific theories, the simultaneous belief in mutually contradictory theories – e.g., that Princess Diana was murdered but faked her own death – is a notable aspect of conspiracist ideation [30 – Wood et al, 2012].

Lewandowsky in March 2013 here, and also at SKS here:

Thus, people may simultaneously believe that Princess Diana faked her own death and that she was assassinated by MI5.

See also Cook’s comment here.

136 Comments

  1. Lance Wallace
    Posted Nov 7, 2013 at 2:36 PM | Permalink

    The sentence beginning “Rather than conceding” should be pulled out of the section quoting your letter to Wood.

    Steve: fixed

  2. Posted Nov 7, 2013 at 2:48 PM | Permalink

    It is not that he had much to say
    When Wood “answered” your point in this fray
    It is quote hard to soften
    That Lew called never “often”
    But in Wood’s mind, he blew you away.

    I’ve been dealing with this in my blog
    The Defenders of Falsehoods. One prog
    Though a bright, clever friend
    Will defend to the end
    Global warming — he’s there for the slog.

    ===|==============/ Keith DeHavelle

  3. Posted Nov 7, 2013 at 3:04 PM | Permalink

    However, drawing conclusions from a subpopulation of zero does take small population statistics to a new and shall-we-say unprecedented level.

    The End. I needed that belly laugh, thank you.

    • TAC
      Posted Nov 8, 2013 at 8:48 AM | Permalink

      Unprecedented? Drawing inferences from samples of size N=zero and smaller (e.g. inverted Tiljander) seems to be SOP for some of these guys.

      • jorgekafkazar
        Posted Nov 8, 2013 at 6:11 PM | Permalink

        I, too, immediately thought of reciprocal Tiljander, stood on its head in pursuit of thermal “unprecedentedness.” Will sample sizes of n*√-1 be long in appearing? Or are they already here?

    • David L. Hagen
      Posted Nov 8, 2013 at 4:24 PM | Permalink

An astute observation that 2/0 is a remarkably large transition! (aka infinity – “is elementary”)

      • Posted Nov 8, 2013 at 4:52 PM | Permalink

        Divide-by-zero rationality
        Can we here really unwind it?
        Seems Lew’s “truth” is just hostility:
        “Truth is where you undefined it.”

        ===|==============/ Keith DeHavelle

  4. MikeN
    Posted Nov 7, 2013 at 3:23 PM | Permalink

    “What Wood ought to have said is that participants who strongly disagreed that Diana faked her own death were more likely to strongly disagree that she was murdered. This does not imply people who believed that Diana faked her own death were “significantly more likely” to believe that she was also murdered by Fayed’s enemies. ”

    Depends on how you define it. If it were a two point scale, it would mean that.

  5. PhilH
    Posted Nov 7, 2013 at 3:25 PM | Permalink

    Bet Wood got an A on this paper.

  6. Bill
    Posted Nov 7, 2013 at 3:28 PM | Permalink

People with little understanding of statistics using statistical
programs. Again.

    • RalphR
      Posted Nov 7, 2013 at 8:14 PM | Permalink

      Yes. And when called on the carpet they employ a Monte Carlo method called “Grasping at Straws” to support their original conclusions.

  7. Jeff Id
    Posted Nov 7, 2013 at 3:33 PM | Permalink

    I’m not sure it’s fair to blame Lewandowsky for this oversight. By the conclusions of his own recent work, we learned that he suffers from extremist ideation which leads to confirmation bias and can result in acceptance of facts which are obviously suspect to an impartial scientific reader. I’m quite certain that Steve M requested the Wood data based on the very unusual conclusions it seemed to support, Lewandowsky’s impenetrable ideation didn’t allow him that foresight and that is no fault of his own.

    Had Lew not taught me this valuable lesson, I might have blamed him for being gullible or something when the situation is clearly something else.

    • ianl8888
      Posted Nov 7, 2013 at 4:21 PM | Permalink

      Lewanclownsky opted for notoriety long ago … end of story

    • dgh
      Posted Nov 7, 2013 at 6:30 PM | Permalink

      It becomes even more clear that Lewandowsky’s ideation is impenetrable when you consider the folks with whom he associates.

      If you are a scientist looking to research interesting pathologies that arise in people engaged at some level in the climate change debate then look no farther than the photo shopped images of Cook, Nuccitelli, et al, discovered at Skeptical Science.

      That was some really weird and disturbing stuff. Somebody should write a paper.

    • Posted Nov 8, 2013 at 4:11 PM | Permalink

      Hahaha!! Nice one 😀

      • Jeff Id
        Posted Nov 8, 2013 at 11:15 PM | Permalink

        Lewandowsky and Mann teaming up is a blogger’s dream. Who could have guessed that the sum of two self-certain “elite” minds is even less than one lonely one.

        If they are the new hockey team, maybe they should have a name? or a theme song.

        • Speed
          Posted Nov 9, 2013 at 7:24 AM | Permalink

          A hockey team with an open goal.

        • PMT
          Posted Nov 12, 2013 at 4:23 AM | Permalink

          Jeff ‘n’ Josh, the “bespectacled violent goons with childlike mentalities, complete with toys in their luggage” in Paul Newman’s 70’s film “Slap Shot” were of course the Hanson Brothers.

          http://en.wikipedia.org/wiki/Slap_Shot_%28film%29

    • timheyes
      Posted Nov 9, 2013 at 12:36 AM | Permalink

      I can’t help thinking that there’s a paper in “Statistical Ideation”… where the learned discipline of statistical analysis is tortured to create a new statistics which confirms the answer needed. This blog documents many examples.

      • Geoff Sherrington
        Posted Nov 9, 2013 at 9:28 PM | Permalink

        timheyes,
        It goes further than that. The answer needed is often one that reflects a knee-jerk conclusion more than a reasoned one.
        Just now, for example, I was watching a PC garden show on Australia’s ABC TV. The commentator noted of his garden design “It protects small creatures from predators”. I thought, but hold it, the predator-prey relationship is one of mutual dependence. Someone has to feed the predator too.
        While this is a trivial example, it is reflective of a mindset that seems affiliated with a naïve fairyland. This is shown again by the present wave of chemophobia, where PC people interminably prefer ‘natural’ or ‘organic’ remedies to professionally crafted chemical ones. That the failure of the natural remedy was often a reason to design a chemical one seems to have been forgotten.
        Up it a scale and we have a PC preference for expensive, intermittent wind energy over cheap, reliable coal or nuclear.
        And so on into the darkness.

    • twr57
      Posted Nov 9, 2013 at 9:56 AM | Permalink

      Spot on. Id est. We must pity rather than blame.

  8. KNR
    Posted Nov 7, 2013 at 3:34 PM | Permalink

    ‘ One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.’

So normal practice for him then, for facts and data are not important; all that matters is that his ‘faith’ is confirmed.

  9. Posted Nov 7, 2013 at 3:55 PM | Permalink

    Two people marking a “strong agree” on faking Diana’s death in a reasonable sample size could simply be people accidentally hitting the wrong button.

See if you can get this published in a journal, Steve. It’s such an obvious blunder, which could easily be demonstrated by removing the strong disagrees from the sample (?), that the journey to getting a takedown like this published in a journal could make an interesting story in itself.

I know it’s a big ask and a lot of work, but think about how much you could make them squirm as they tried to invent reasons not to publish your response paper.

    • rogerknights
      Posted Nov 7, 2013 at 6:04 PM | Permalink

      “the journey to getting a take down like this published in a journal could make an interesting story in itself.”

      I agree. This is worth pursuing further, since Wood is unrepentant.

      • rogerknights
        Posted Nov 7, 2013 at 6:58 PM | Permalink

        PS: It would bolster Steve’s case if he could get a bunch of statistical bigshots to endorse his criticism.

        • tomdesabla
          Posted Nov 12, 2013 at 12:37 PM | Permalink

          I don’t see why Steve needs to get any “hotshots” because this is just stupidity. This fellow Wood is so wrapped up in the complexity he has manufactured that he can’t see the obvious right in front of his face.

          His responses to Steve are outright unbelievable. No one held the two beliefs simultaneously, yet he claims it as a central finding? This is getting weirder and weirder all the time.

    • gober
      Posted Nov 7, 2013 at 6:33 PM | Permalink

      Was it two instances of “strong agree” on the faking death explanation, or just “agree” of any kind? I got the impression from Steve’s post that it was any kind of agreement.
      Steve: both were weak agreement. 5 on a 7-point scale.

      • what
        Posted Nov 8, 2013 at 1:10 AM | Permalink

And if the responses reflect the respondent’s position that the cause of death was murder but they are not sure by whom, it’s difficult to claim they are holding contradictory opinions at all.

        • Posted Nov 8, 2013 at 1:07 PM | Permalink

          Something that went through my mind the moment I read the paper. Partly for this reason I don’t have problems with 7-point scales per se, because they allow respondents to hedge their bets, which may be rational given certain kinds of evidence. What I think might be interesting to look into (though I’m no expert in polling to know how) are those who not only doubt the conventional story in some area but are certain of the real culprits. Intuitively I’d expect too much certainty to correlate with lack of judgment elsewhere. But each of these stories – Diana, autism/vaccine, 9/11, the death of bin Laden – is so different in detail it’s hard to know how to test any such idea. To find out that the one clear contradiction that really stood out, trumpeted by Lew and Mann, was based on a set of zero really does take the biscuit. We who question the paleo pronouncements of Dr Mann because he used contaminated, upside-down sediment data are as unbalanced as a known set of people who believe contradictory things about the death of Diana and viola, that set is empty. Kafka would struggle to do justice to these people.

        • Coldish
          Posted Nov 9, 2013 at 11:39 AM | Permalink

          Richard (just above): voilà!

        • Posted Nov 9, 2013 at 12:56 PM | Permalink

          Yes, the stringed instrument wasn’t quite what I wanted!

  10. JohnB
    Posted Nov 7, 2013 at 4:47 PM | Permalink

    Sort of like Schroedingers Cat

    I say it’s more appropriate to say The cat is Not Dead AND Not Alive
    (the true negation of the Cat is Dead or Alive)

  11. Man Bearpig
    Posted Nov 7, 2013 at 4:56 PM | Permalink

    “Two people marking a “strong agree” on faking Diana’s death in a reasonable sample size could simply be people accidentally hitting the wrong button.”

Exactly, Eric. Either that, or they just filled in the form for fun, or were fake respondents.

    • Jud
      Posted Nov 7, 2013 at 5:08 PM | Permalink

      Funnily enough a couple of potential candidates for the actual respondents sprang to mind immediately.

  12. tmitsss
    Posted Nov 7, 2013 at 5:00 PM | Permalink

    This explains how advocates are able to equate “climate denialists” with “flat earthers” without ever finding a “flat earther” who was also a “climate denialist”.

  13. bernie1815
    Posted Nov 7, 2013 at 5:07 PM | Permalink

    So much for peer review.

    The authors are essentially pursuing conspiracies that few believe.

  14. Joe Born
    Posted Nov 7, 2013 at 5:08 PM | Permalink

    Having very little depth in statistics, I often find it hard to fathom the jargon that gets thrown around in these discussions. So it is perhaps arrogant of me to harbor the suspicion that apparently knowledgeable disputants often don’t really understand what they think they know about the subject–or at least that they fail to reflect upon that understanding before they take their positions.

    For me that suspicion arises in discussions even of ostensibly simple propositions such as whether a given temperature trend is “significant.” I’ve been given to understand that a temperature trend’s being significant means that some given assumption regarding the temperature-producing process would make the probability less than, say, 5% that the process would produce a trend at least as large as the one observed. Yet my distinct impression is that people arguing over significance very often haven’t thought through what that assumption is, i.e., what the assumption is upon which they’re basing their significance conclusions. To the extent that the assumption is reflected upon, moreover, it often seems to be something like “temperature is insensitive to CO2 concentration,” an assumption from which the way to compute a trend’s probability strikes me as being too open to conjecture.

Again, making such a judgment about a discipline of which I have no command may be a trifle arrogant. But I’m all the more tempted to make it when I see jargon such as that contained in Wood’s statement being used to defend so preposterous a proposition.

  15. Geoff Sherrington
    Posted Nov 7, 2013 at 5:17 PM | Permalink

    Professor Lewandowsky has burst into more print in the last few days.
    http://theconversation.com/look-out-for-that-turbine-climate-sceptics-are-the-real-chicken-littles-19873

    The host is a blog named “The Conversation” that has the support of a number of universities and some bodies such as CSIRO and BOM in Australia. One cannot write a lead essay for this blog if not affiliated with a university or related group.
    In the context of Steve’s essay here, a past offering by the Prof was titled http://theconversation.com/the-false-the-confused-and-the-mendacious-how-the-media-gets-it-wrong-on-climate-change-1558

  16. Brandon Shollenberger
    Posted Nov 7, 2013 at 5:49 PM | Permalink

    I feel weird. I had contacted Michael Wood about two weeks ago because I came across his article via Stephan Lewandowsky’s, and I found the paper’s conclusions remarkable. He and I exchanged a few e-mails, I got the data set from him (after Steve McIntyre got him to release it, I believe), and I proceeded to begin analyzing it.

    My initial impressions were highly critical so I decided to take my time looking into the data to be sure I understood what was going on. This morning I finished analyzing the data, and I e-mailed Wood with the conclusions I had reached. A couple hours later, this post went online.

I could have saved myself a lot of effort if I had just sat back and done nothing.

    • rogerknights
      Posted Nov 7, 2013 at 6:08 PM | Permalink

      “I e-mailed Wood with the conclusions I had reached.”

      I suggest you post them here.

      • Brandon Shollenberger
        Posted Nov 7, 2013 at 6:20 PM | Permalink

        My conclusions are in the same line as McIntyre’s. I didn’t bother highlighting the small sub-population sizes because I felt the issue was more general than that. Here’s the e-mail I sent him:

        Dear Michael,

        I’m afraid my examination of your data has turned up a serious problem. Your paper argues people endorse contradictory conspiracy theories, and it uses correlation coefficients for support. However, these correlation coefficients are calculated over groups that both reject and endorse conspiracy theories. These are the possible pairings for any two conspiracy theories:

        Endorse – Endorse
        Endorse – Reject
        Reject – Endorse
        Reject – Reject

        This could be represented as:

        + +
        + –
        – +
        – –

        A positive correlation happens when signs are identical. That covers the first pairing where both conspiracy theories are endorsed. However, it also covers the fourth pairing where both conspiracy theories are rejected.

        The effect of this is responses which reject both of a pair of contradictory conspiracy theories will be treated, by your approach, as evidence people believe contradictory conspiracy theories. That’s nonsensical. If a person does not believe in any conspiracy theories, they obviously cannot believe in contradictory conspiracy theories.

        Now then, this does not necessarily mean your conclusions are wrong. It merely means your results could stem from a possibility you had not accounted for. To examine whether or not they did, it’d be necessary to examine where the correlations originate. I took the liberty of doing so for Study 1.

        Attached to this e-mail is a zip file including several text documents. One shows my replication of the results in your Table 1 (with minor differences as I filtered out two entries which had NULL values for simplicity). There are three more, one for each pair of contradictory conspiracy theories highlighted in your paper. These are of primary importance.

        Each document contains a contingency table of responses for the pair of conspiracy theories. The table also lists the amount each pair of responses contributes to the calculated correlation coefficients. Summing the Contribution column will give the correlation coefficient for the two conspiracy theories. For convenience, the tables are sorted on this column.

        These tables confirm your calculated correlations are heavily influenced by responses which reject multiple contradictory conspiracy theories. To demonstrate, you’ll note the correlation between “Diana killed by rogue cell of British Intelligence” and “Diana faked her own death” was given in the paper as 0.15. Here are results taken from the corresponding C1_C16.txt file :

        C1 C16 Freq Contribution
        2 1 21 0.058595088
        1 1 26 0.145092599

        This shows people who adamantly disagreed (1) with the idea Princess Diana faked her death and strongly disagreed (1-2) with the idea Princess Diana was killed by British Intelligence are responsible for the majority of the correlation you found in these two conspiracy theories. In fact, they contribute more than the total correlation because their correlation is partially counterbalanced by results like:

        C1 C16 Freq Contribution
        6 1 6 -0.050224361
        5 1 8 -0.044643877

        Where people believe British Intelligence killed Princess Diana and adamantly reject the idea Princess Diana faked her death – which is internally consistent.

        Now then, this doesn’t show there are no people who believe in contradictory conspiracy theories. Your results show there are some. For example, from the same file:

        C1 C16 Freq Contribution
        5 3 2 0.031282153
        5 2 9 0.045272664

        These contradictory conspiracy beliefs do contribute to your results. They just aren’t the primary cause of your results. The correlation testing you used was too simplistic for the analysis you wanted to do, and that led you to misinterpret your results.

        I’m afraid your results are not supported by your analysis. It is simply impossible to use a single correlation coefficient calculated over two views amongst two groups as indicating something about one view for one group.

        Brandon Shollenberger

P.S. I’m attaching a script file with code that should be turnkey if you have the statistical programming language R installed. It only compares two conspiracy theories, but you can compare whichever you want by changing the column references. If you don’t have it installed, the data and code are still readable so you can see exactly what I did. If you have any questions, please feel free to ask.

        I can’t attach the zip file I sent him to this post, but if need be, I can post it somewhere. It’s really just a version of something Steve did a while back for Lewandowsky’s Moon Landing paper. I had pointed out the same oddity before, but at the time, I didn’t know how to demonstrate it concisely. When I saw McIntyre demonstrate the problem mathematically, I realized how easy it is to test the origin of correlation via contingency tables.

        • Posted Nov 8, 2013 at 6:36 PM | Permalink

          Well done Michael. Independent corroboration is always useful.

        • jorgekafkazar
          Posted Nov 8, 2013 at 6:44 PM | Permalink

          Very nice, Brandon. I can imagine the recipient reading it and segueing from WTF to OMG as he does so. But I have an active imagination.

      • Brandon Shollenberger
        Posted Nov 7, 2013 at 6:32 PM | Permalink

        snip – sorry. I know that you’re using a religious example purely as an example, but it’s against strictly enforced blog policy.

        • Steve McIntyre
          Posted Nov 7, 2013 at 6:43 PM | Permalink

          Here’s a short script showing first the replication of Wood et al Table 1 and then the contingency table with zero respondents holding both propositions.

          loc="http://www.climateaudit.info/data/lewandowsky/wood_2012/WDS_2012_study_1.csv"
          work=read.csv(loc)
            #this is csv version of Wood's excel spreadsheet with mnemonic headings to each column
          
          round( cor(work[,c("RogueCell","Official_MI6","FayedEnemies","MuslimArab","FakedDeath")],use="pairwise.complete.obs"),2)
          #               RogueCell Official_MI6 FayedEnemies MuslimArab FakedDeath
          #RogueCell         1.00         0.75         0.61       0.67       0.15
          #Official_MI6      0.75         1.00         0.66       0.62       0.21
          #FayedEnemies      0.61         0.66         1.00       0.61       0.25
          #MuslimArab        0.67         0.62         0.61       1.00       0.24
          #FakedDeath        0.15         0.21         0.25       0.24       1.00
            #this replicates Wood Table 1
          
          test=sign(work-4)
          with(test,table(FakedDeath,FayedEnemies))
          #          FayedEnemies
          #FakedDeath  -1   0   1
          #        -1 102  24   4
          #        0    3   2   0
          #        1    0   2   0
            #this shows that ZERO respondents agree with both FakedDeath and FayedEnemies
            #only two respondents agree with FakedDeath and only 4 respondents with FayedEnemies
          
        • RomanM
          Posted Nov 7, 2013 at 7:09 PM | Permalink

          The actual choices for “Rogue Cell” by the two persons who picked 7 (on a scale of 1 to 7 with 4 being neutral) were a 1 and a 2 on the same scale. Nobody gave the idea of Diana faking her death a value higher than a 5 (only 2 did, and neither of them indicated that Rogue Cell had any credibility).

          Even though I find the idea of calculating simple correlations on categorical variables using their arbitrary 1 to 7 “names” unscientific, if you combine the 1 and 2 categories for each categorical variable, the correlation coefficient drops to 0.05. If you further combine the disbelief 1, 2 and 3 categories, the correlation becomes negative: -0.05. Steve is quite right on this topic.

          I might add that the sample was chosen as follows:

          Participants. One hundred and thirty-seven undergraduate psychology students (83% female, mean age 20.4) were recruited from a second-year research methods class at a British university. Participation was voluntary and no compensation was given.

          This is hardly a population upon which to base a robust conclusion on believing dueling conspiracies.

        • Brandon Shollenberger
          Posted Nov 7, 2013 at 7:00 PM | Permalink

          No prob. I didn’t realize it was that off-limits here. I’ll modify the comment slightly to remove that. I hope free market/communism is an acceptable comparison (I picked these since Lewandowsky discussed them in his paper).

          As I said above, I feel the problem here is more general. This is part of an e-mail I sent to another person before seeing this post. I used data from the Wood et al paper (I filtered out two entries which had NULL values because of problems reading the file if they were included). I just changed the description given for the data to demonstrate the problem:

          Here’s an example of why this bugs me so much. Suppose people were surveyed to see if they support communism and/or free markets. On a scale of 1-7, 1 being strongly disagree, 7 being strongly agree, and 4 being neutral, the results are:

                         Free Market
          Communism     1    2    3    4    5
              1        26    1    1    1    0
              2        21    7    0    1    0
              3        14    8    3    1    1
              4        14    2    4    2    1
              5         8    9    2    0    0
              6         6    0    0    0    0
              7         1    1    0    0    0

          If you calculate the correlation coefficients for that data, you’ll get a correlation of .15 (I can provide the raw data if it’s desired). That’s obviously because there’s a correlation in disbelief. Nobody supported both free markets and communism, but a number rejected both.

          And while that’s obvious to you and me, if we were scientists in the right field, we could go out and publish a paper saying, “Free Market Supporters Support Communism.” The data doesn’t show that at all, but if we ignore one possibility (rejection of both possibilities), that’s what the correlation coefficient shows!

          A correlation coefficient cannot be used like it is in the Wood et al paper, the Lewandowsky et al papers and a number of other papers I’ve seen. These papers take correlations that could indicate more than one thing and draw conclusions based upon simply ignoring some of the possibilities.

          It’s crazy. These authors are using a methodology that could literally draw conclusions about groups the authors have no information on. The authors could survey women about their views and draw conclusions about the views men hold. They could survey democrats and say their results prove republicans are insane.

          Taken to the extreme, they could conclude anything about anyone by studying anyone but the people they’re interested in.
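          Brandon’s 0.15 figure can be checked directly from the contingency table above. Here is a minimal sketch (in Python rather than the thread’s usual R, purely for illustration) that expands the table back into paired responses and computes the Pearson correlation:

```python
# Rebuild paired responses from the contingency table above:
# rows = "Communism" ratings 1-7, columns = "Free Market" ratings 1-5.
table = [
    [26, 1, 1, 1, 0],
    [21, 7, 0, 1, 0],
    [14, 8, 3, 1, 1],
    [14, 2, 4, 2, 1],
    [ 8, 9, 2, 0, 0],
    [ 6, 0, 0, 0, 0],
    [ 1, 1, 0, 0, 0],
]

# One (communism, free_market) pair per respondent counted in each cell.
pairs = [(c, f)
         for c, row in enumerate(table, start=1)
         for f, count in enumerate(row, start=1)
         for _ in range(count)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

cs, fs = zip(*pairs)
r = pearson(cs, fs)

# No respondent rates both propositions above neutral (4), yet the
# correlation is positive -- it is driven entirely by joint disbelief.
both_agree = sum(1 for c, f in pairs if c > 4 and f > 4)
print(len(pairs), round(r, 2), both_agree)   # 135 0.15 0
```

The positive correlation comes entirely from the mass of respondents in the low–low corner of the table, exactly as the comment argues.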

        • Posted Nov 9, 2013 at 5:03 PM | Permalink

          Participants. One hundred and thirty-seven undergraduate psychology students (83% female, mean age 20.4) were recruited from a second-year research methods class at a British university. Participation was voluntary and no compensation was given.

          This also means the mean age of participants at the time of Diana’s death was 4 years old. Many may never have cared about this story and consequently may be very aware they don’t really know much about the issue. In that context, not discounting theories when you have no knowledge makes sense even if those theories are mutually exclusive. You know only one theory can be true, but you don’t really know which one it is. So you can’t confidently state that any specific one is false.

          I should note: I am 54 and I never followed this story. Most of what I know comes from seeing the front of tabloids while waiting in line at the grocery store or a little of what was covered on television at the time.

        • Steve McIntyre
          Posted Nov 10, 2013 at 9:13 AM | Permalink

          my guess is that there must be some age differentiation in the JFK assassination, the 50th anniversary of which is being commemorated. I was 16 when it happened. It was the Tuesday before our junior football team (UTS) was playing Leaside in the league final on Thursday. I was a very skinny outside linebacker. Our star halfback was named Keith Kennedy. We were just coming from Phys Ed class, when someone announced that Kennedy had been shot. I was dumbfounded: we were an academic school; no one from our school had ever been shot and our prospects for the football final were in disarray. You can imagine the reaction of a 16-year old football player when I learned that it was John F Kennedy who had been shot, and not Keith Kennedy.

    • Steve McIntyre
      Posted Nov 7, 2013 at 6:53 PM | Permalink

      Brandon, if you have time, try looking at the 150,000 deaths due to climate change, cited in the latest Lewandowsky tirade. It goes to Patz 2005 and then to a 2002 World Health Organization report. The numbers in the appendix seem to appear out of thin air. It would be worthwhile asking the authors how they got them.

      • Brandon Shollenberger
        Posted Nov 7, 2013 at 8:06 PM | Permalink

        Interestingly, I tried to track that number down some time back. I forget what brought it to my attention, but I never did find an exact source for it. One thing I did note is the WHO source is often cited along with McMichael 2004. This document has quite a bit of discussion in it, and I never took the time to look through it in detail. It might be able to shed some light on the matter.

        I gave up reading it after about the dozenth time I saw them say they “modeled” effects of climate change. We can’t get reasonably accurate predictions from GCMs on global temperature levels. I don’t have any faith in models which receive far less attention.

        • DGH
          Posted Nov 8, 2013 at 6:13 AM | Permalink

          NIPCC looked at this too.

          http://nipccreport.com/articles/2011/may/24may2011a3.html

        • DGH
          Posted Nov 8, 2013 at 7:03 AM | Permalink

          Brandon – I think your McMichael reference is likely the correct source. See page 303 which details climate related mortality by cause and region. Globally it provides an estimate of mortality of 27.82 per million.

          With a world population of 5.5 billion the deaths would be 154K.
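          A quick check of DGH’s arithmetic (the 27.82-per-million rate and the 5.5 billion world population are the figures quoted in the comment above):

```python
rate_per_million = 27.82   # global climate-attributable mortality per million, as quoted above
population = 5.5e9         # world population figure used in the comment

deaths = rate_per_million * (population / 1e6)
print(round(deaths))       # 153010 -- roughly the WHO's 154,000
```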

        • DGH
          Posted Nov 8, 2013 at 7:35 AM | Permalink

          oops, rented fingers. Page 1606 for the table.

      • Curious George
        Posted Nov 7, 2013 at 10:20 PM | Permalink

        We should not spend all our time fighting obvious nonsense. Dr. Lewandowsky can produce it easily five times faster.

        • Posted Nov 9, 2013 at 1:22 PM | Permalink

          “Dr. Lewandowsky can produce it easily five times faster.”
          Which is exactly why we need reviews like this. It is true that we wouldn’t be able to keep up, but there are new minds opening up to articles like this every day and articles like this should bubble up to the top of RSS feeds and search results regularly to catch them.

      • Posted Nov 8, 2013 at 3:51 AM | Permalink

        Re: Steve McIntyre (Nov 7 18:53),

        I’ve also tried several times to look into this figure. As Brandon said, the source of the 150,000 deaths is the WHO 2002 Report “Reducing Risks, Promoting Healthy Life” http://www.who.int/whr/2002/en which says (p72):

        “…Because of this complexity, current estimates of the potential health impacts of climate change are based on models with considerable uncertainty.

        Climate change was estimated to be responsible in 2000 for approximately 2.4% of worldwide diarrhoea, 6% of malaria in some middle income countries and 7% of dengue fever in some industrialized countries. In total, the attributable mortality was 154 000 (0.3% [of total worldwide]) deaths and the attributable burden was 5.5 million (0.4%) DALYs [disability-adjusted life years].”

        But as noted above, there’s no source or methodology cited in that WHO report.

        Interestingly, a later WHO report “Global Health Risks” (2009) http://www.who.int/healthinfo/global_burden_disease/GlobalHealthRisks_report_full.pdf gives a lower estimate for 2004 (141,000 or 0.2% – p.50) and puts climate change as the lowest among the risk factors considered in the report (p.23). The equivalent statement to the 2002 report is:

        “Climate change was estimated to be already responsible for 3% of diarrhoea, 3% of malaria and 3.8% of dengue fever deaths worldwide in 2004. Total attributable mortality was about 0.2% of deaths in 2004; of these, 85% were child deaths.”

        That report cites McMichael et al (2004) http://www.who.int/publications/cra/chapters/volume2/1543-1650.pdf (also cited by Patz et al., Nature 2005) which appears to be a description of the methodology:

        “In this chapter, we have used existing or new models that describe observed relationships between climate variations, either over short time periods or between locations, and a series of health outcomes. These climate–health relationships were linked to alternative projections of climate change, related to unmitigated future emissions of greenhouse gases, and two alternative scenarios for greenhouse gas emissions. Average climate conditions during the period 1961–1990 were used as a baseline, as anthropogenic effects on climate are considered more significant after this period.”

        McMichael et al. is 108 pages, and I’ve not read it in detail. It seems to be trying to detect the health effects of climate change against the counter-factual of no climate change, and appears to ignore adaptation. As deaths from e.g. malaria and diarrhoea are falling worldwide (though still sadly very high), they are presumably saying that they would have fallen slightly less were it not for the 0.3 deg C temperature rise since 1961-1990 (why that baseline?).

        (Malaria and diarrhoea trends here http://www.healthmetricsandevaluation.org/publications/summaries/global-malaria-mortality-between-1980-and-2010-systematic-analysis http://www.unicef.org/media/files/Final_Diarrhoea_Report_October_2009_final.pdf)

        • Posted Nov 8, 2013 at 4:12 AM | Permalink

          Re: Ruth Dixon (Nov 8 03:51), I should have written “fallen slightly more” (rather than “less”) in the penultimate paragraph.

          McMichael et al. 2004 is a chapter in a WHO report – all of this is in the ‘grey’ literature, which is ironic given Lewandowsky et al.’s quote of the WHO estimate just before a sentence about peer-review:

          “According to the World Health Organization, climate change is already claiming more than 150,000 lives annually (Patz, Campbell-Lendrum, Holloway, & Foley, 2005), and estimates of future migrations triggered by unmitigated global warming run as high as 187 million refugees (Nicholls et al., 2011). A common current attribute of denial is that it side-steps the peer-reviewed literature…”

          Saying that a number has been quoted in Nature does not make the number ‘peer-reviewed’!

        • Posted Nov 8, 2013 at 5:14 AM | Permalink

          Oops – I’ve replied to my comment which is still in moderation. I’m sure it will all become clear.

        • Steve McIntyre
          Posted Nov 8, 2013 at 9:59 AM | Permalink

          McMichael 2004, linked by Ruth, contained some description of their methodology, whereas the 2002 report only contains tables in the Statistical Appendix. Here is Table 20.16 of McMichael 2004. I’ll probably start a separate post on this.

          [Image: McMichael 2004, Table 20.16]

        • Geoff Sherrington
          Posted Nov 8, 2013 at 6:12 PM | Permalink

          Regarding malaria alone, this publication asserts that the closeness of people in dwellings is a major mechanism for spread. Because this effect is newly estimated, it might be necessary to revise the earlier WHO figures down.
          http://onlinelibrary.wiley.com/doi/10.1111/rssa.12036/abstract Document identifier: DOI: 10.1111/rssa.12036

          (Joint author Prof Ross McKitrick is of course well known here. There is more info on WUWT, to whom h/t).

      • David L. Hagen
        Posted Nov 8, 2013 at 9:33 PM | Permalink

        Steve McIntyre & Brandon
        Compare Indur Goklany’s methodology and evaluations of WHO on climate deaths vs deaths from biofuel policies in Could Biofuel Policies Increase Death and Disease in Developing Countries? Journal of American Physicians and Surgeons Vol. 16 No. 1 Spring 2011 pp 9-13

        Results derived from World Bank and World Health Organization (WHO) studies suggest that for every million people living in absolute poverty in developing countries, there are annually at least 5,270 deaths and 183,000 Disability-Adjusted Life Years (DALYs) lost to disease. Combining these estimates with estimates of the increase in poverty owing to growth in biofuels production over 2004 levels leads to the conclusion that additional biofuel production may have resulted in at least 192,000 excess deaths and 6.7 million additional lost DALYs in 2010. These exceed WHO’s estimated annual toll of 141,000 deaths and 5.4 million lost DALYs attributable to global warming. Thus, policies intended to mitigate global warming may actually have increased death and disease in developing countries.

  17. ursus augustus
    Posted Nov 7, 2013 at 6:03 PM | Permalink

    “One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.”

    Now that’s a safe assumption. You could even say it is axiomatic.

    Just go and have another look at the Lewny’s YouTube videos and watch the eyebrows flickering. Never mind the narcissism, feel the Lewniness.

  18. Marion
    Posted Nov 7, 2013 at 6:15 PM | Permalink

    So, in their endeavours to accuse conspiracy theorists of drawing illogical conclusions it is they themselves who are guilty of the illogical conclusions.

    Could this be a form of psychological projection one wonders!

  19. stevefitzpatrick
    Posted Nov 7, 2013 at 7:43 PM | Permalink

    Steve,
    Reading your post brought a (big) smile to my face.. thanks.

  20. Nicolas Nierenberg
    Posted Nov 7, 2013 at 8:06 PM | Permalink

    This is just sad. The fact that they will now resist recognizing this ridiculous error is even more sad.

    • Layman Lurker
      Posted Nov 8, 2013 at 2:45 PM | Permalink

      Yes Nicolas. Sad, ridiculous and I would add reprehensible. Not high level stuff here. It’s like a lab assignment in Stats 101. The only difference is that the Stats 101 student would suffer consequences for this error. Here it appears wagons are circled and the error gets propagated to serve an agenda.

  21. John Norris
    Posted Nov 7, 2013 at 8:21 PM | Permalink

    re: “However, drawing conclusions from a subpopulation of zero does take small population statistics to a new and shall-we-say unprecedented level.”

    Uh oh. I imagine the climate science community can reference that statistical usage as precedence and really run with it.

  22. Curious George
    Posted Nov 7, 2013 at 10:10 PM | Permalink

    There are inept climate “scientists”, as well as inept “psychologists”. Life is too short to argue with all of them. Nevertheless, it is a worthy undertaking, thank you, Steve.

  23. Neverjog
    Posted Nov 7, 2013 at 10:31 PM | Permalink

    A couple of quotes spring to mind. As Lew might say –

    “Why, sometimes I’ve believed as many as six impossible things before breakfast.”

    And the data would reply –

    “I know you think you understand what you thought I said but I’m not sure you realize that what you heard is not what I meant.”

    With apologies to Charles Lutwidge Dodgson and Alan Greenspan.

  24. Bob Koss
    Posted Nov 7, 2013 at 11:46 PM | Permalink

    Frontiers in Psychology may have removed their link to Lewandowsky’s Recursive Fury, but they have not withdrawn it. It is still available from PubMed here. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3600613/

    It has accumulated 4 citations from other PubMed articles. (G. Scholar shows 3) If Frontiers never withdraws the paper it will still be kept alive in PubMed and probably accumulate more citations since it can be found in G. scholar.

    This past July Michael J. Wood and Karen M. Douglas published a new paper titled: “What about building 7?” A social psychological study of online discussion of 9/11 conspiracy theories

    Their new paper cites their Wood(2012) paper numerous times and also cites both Recursive Fury and the Moon Landing papers.

    Wood may not want to admit error in their 2012 paper due to it having a knock-on effect with respect to their newer paper.

  25. pottereaton
    Posted Nov 8, 2013 at 1:14 AM | Permalink

    I sometimes think that it’s beneficial for the skeptical side of the climate argument for someone as ridiculous as Lewandowsky to be involved in the debate over climate science and related fields, but then I think how much higher the level of discourse would be if so-called “scientists” like him were not involved.

    A few months ago, I requested the supporting data from Wood. Wood initially promised to provide the data, then said that he had to check with coauthors. I sent several reminders without success and eventually without eliciting any response. I accordingly sent an FOI request to his university, accompanied by a complaint under any applicable university data policies. The university responded cordially and Wood immediately provided the data.

    Thank you, Steve.

  26. logicophilosophicus
    Posted Nov 8, 2013 at 2:58 AM | Permalink

    This reminds me of Hempel’s Paradox. Philosophers “know” that supporting instances for the proposition “All ravens are black” can be found by laborious fieldwork observing black ravens, or by the much more relaxing method of looking around your classroom or office and compiling lists of non-black non-ravens.

  27. Geoff Cruickshank
    Posted Nov 8, 2013 at 4:37 AM | Permalink

    There is also the point that when you ask about ‘belief’ on a scale of 1 to 7 you are not really asking the respondent to report a belief, but to assign a probability. If I responded 1/7 to faked death, 1/7 to murdered by MI6, and 5/7 to simple accident, there is no psychological pathology indicated; it is just my view of the probabilities. To test ‘belief’ requires a Yes/No answer.

  28. RichardLH
    Posted Nov 8, 2013 at 10:40 AM | Permalink

    So 114 young females plus 23 young males, when asked a set of obviously prejudicial questions which confuse ‘belief’ with probability (see above), produce the answers already sought by the questioner and thus confirm his own biased viewpoint about world opinion on other unrelated matters.

    My stats professor would have failed me if I tried to submit this to anywhere for any reason.

    Strange that others do not accept that viewpoint.

    Steve: this sort of complaint can be made about many studies. The point of the present discussion is the more specific problem discussed in the post.

  29. SC
    Posted Nov 8, 2013 at 11:10 AM | Permalink

    To add a huge insult to injury, they are at KEYNES college. Poor Keynes must be turning in his grave to be associated in any way with this rubbish.

  30. Posted Nov 8, 2013 at 11:26 AM | Permalink

    To his boss at the Experimental Psych Faculty at Bristol Uni… sent just now. I like to keep reminding them of what it is they have employed 🙂

    Dear Professor Noyes,

    I wrote to you a while back regarding your in-house charlatan and conspiracy theorist, Stephan Lewandowsky. I gather he has had to retract one of his UWA papers as it was defamatory; this does not surprise me; as with his co-conspirator, Michael Mann, abuse is his first response to criticism.

    I thought you might be interested in a further debunking of his work, by Steve McIntyre, the man responsible for the comprehensive debunking of Michael Mann’s fake “Hockey Stick” paper.

    https://climateaudit.org/2013/11/07/more-false-claims-from-lewandowsky/#more-18571

    Excerpt

    “Lewandowsky’s assertions about Diana are based by an article by Wood et al. entitled “Dead and Alive: Beliefs in Contradictory Conspiracy Theories”. A few months ago, I requested the supporting data from Wood. Wood initially promised to provide the data, then said that he had to check with coauthors. I sent several reminders without success and eventually without eliciting any response. I accordingly sent an FOI request to his university, accompanied by a complaint under any applicable university data policies. The university responded cordially and Wood immediately provided the data.

    The most cursory examination of the data contradicted Lewandowsky’s claim. One can only presume that Lewandowsky did not carry out any due diligence of his own before making the above assertion.”

    Do please read this, as it demonstrates just how poor Lewandowsky’s work is. He is not an academic, rather a man with an obsessive mission to smear those who disagree with him, who uses academia to give him a cloak of respectability.

    I find it quite astonishing that you are happy to employ someone who may well bring down opprobrium on your department as a result of his shoddy work, which contributes nothing to the climate science debate, or indeed, the “science” of psychology, but rather exposes his own unbalanced and obsessive nature.

    I write this again as a taxpayer who is appalled that I should have to fund this excuse for an academic.

    My very best wishes to you

    Jeremy Poynton

    Steve: the problem with this sort of letter is that it’s far too angry. There’s zero point writing this sort of angry missive as it detracts from substantive points. You also have to take care to be microscopically accurate in any factual assertions and you haven’t been: Lewandowsky’s Fury paper has not (to my knowledge) been “retracted” to this point, though it is being reconsidered by the publisher.

  31. Posted Nov 8, 2013 at 1:34 PM | Permalink

    While following the Yellow Brick Road from your post, I ran across this additional article by the good Dr. Lewandowsky …

    http://theconversation.com/look-out-for-that-turbine-climate-sceptics-are-the-real-chicken-littles-19873
    On the other hand, people steeped in the same culture suffer 216 terrifying symptoms at the sight or sound of a wind turbine, thereby experiencing a risk that is unknown to medical science.

    I have been studying Wind Turbines for some time — more on the energy side than the medical side — but I found the quote amusing. They seem to have cataloged a bewildering array of symptoms… He quotes Simon Chapman… (Actually it was a colleague who wrote the original paper — but Chapman attached his name and the rest is publication history…)

    He seems to enjoy embellishment.

    There is a chair at East Anglia waiting for him — I’m Sure — endorsed by Monty Python no less — I’m sure…

  32. Chuck L
    Posted Nov 8, 2013 at 1:47 PM | Permalink

    “Another bogus claim from Lewandowsky would hardly seem to warrant a blog post, let alone a bogus claim about people holding contradictory beliefs.” and

    “That Lewandowsky should make untrue statements will hardly occasion surprise among CA readers. However, drawing conclusions from a subpopulation of zero does take small population statistics to a new and shall-we-say unprecedented level.”

    Classics, both!

    • JEM
      Posted Nov 8, 2013 at 5:52 PM | Permalink

      I’ve kinda gotten to the point that reading anything about Lewandowsky is a whole lot like inspecting the sole of my shoe to see what I’ve just stepped in.

      You know it’s bad, and you don’t really want to deal with it, but you have to know what it is to deal with it properly.

  33. Dave
    Posted Nov 8, 2013 at 2:50 PM | Permalink

    Sorry, Steve, but you’re missing something fundamental, distracted by the stats. Wood’s claim that the beliefs are contradictory is entirely true, but only for an absolute, binary definition of belief. It is plain, though, that Wood has not used that definition of belief everywhere else in his paper, since he talks about degrees of belief.

    It is simply incorrect to describe as contradictory the beliefs he is talking about, unless they are held absolutely.

    In fact, the opposite is true. It would be contradictory not to admit the possibility of other non-mainstream theories about the death of Diana if one believed there was a strong possibility that one non-mainstream explanation is true, since holding that belief requires one to have already discarded a strong belief in the standard explanation.

    Personally, I assign a pretty high probability to the official version of events being at least broadly true, and an extremely low one to any conspiracy theory – but, I assign fairly similar probabilities to all three of the possibilities mentioned.

  34. Posted Nov 8, 2013 at 2:54 PM | Permalink

    The references cited in Lewandowsky’s papers are almost as fascinating as the papers themselves, and prove that Lew is not alone in the kind of science he practices.
    There are papers on the speeches of Ayatollah Khomeini;  the Egyptian uprising; schizophrenia; homophobia among men who have sex with men; and sexual (dys)function and the quality of sexual life in patients with colorectal cancer.
    One of Lew’s favourite papers is a study of antisemitism among ethnic Malays in Malaysia which found, unsurprisingly, that there is very little antisemitism in Malaysia, which, the author surmises, is because there are very few Jews in Malaysia.
    The author of this paper was cited six times in “Moon Hoax” and by a strange twist of fate got to peer review “Recursive Fury”. He has a peer reviewed paper on the appreciation of the female bottom which is a classic. See
    http://geoffchambers.wordpress.com/2013/04/17/lews-guru-and-the-science-of-a-beautiful-you/

  35. Posted Nov 8, 2013 at 3:15 PM | Permalink

    Steve,
    Understood – and my understanding from a comment by Geoff Chambers (at Bishop Hill?) was that Lewandowsky HAD withdrawn a paper for the reason that it was potentially defamatory. Regardless, he and his like need to be reminded that they are under the microscope.

    • Brandon Shollenberger
      Posted Nov 10, 2013 at 11:43 AM | Permalink

      The original version of the paper was withdrawn because of a complaint by Jeff Id over Recursive Fury misrepresenting him. A new version was then published. This version was never withdrawn. It was merely removed from the journal’s site while it was (supposedly) undergoing examination in response to other complaints.

      There’s an additional detail involved. When the paper was posted a second time, it was changed to address Jeff Id’s concern. However, the journal decided to also address another user’s (Foxgoose’s) complaint. Somehow, that change was only made to the PDF version of the paper. It was not made to the browser version available at the journal’s website. This meant there were two different versions available from the journal at the same time.

      I believe that means the tally is one version of the paper was withdrawn, another version was inadvertently published, and the third version is “temporarily” unavailable while it undergoes an “expeditious” examination that has lasted over half a year.

  36. ztabc
    Posted Nov 8, 2013 at 3:34 PM | Permalink

    Is it possible that Alan Sokal created climatology and psychology as a joke and they got a little out of hand?

  37. Ian H
    Posted Nov 8, 2013 at 5:42 PM | Permalink

    It is fascinating that these guys seem to strongly PREFER the explanation that

    the more participants believed that Princess Diana faked her own death, the more they believed that she was murdered.

    rather than the much more commonsense explanation that

    participants who strongly disagreed that Diana faked her own death were more likely to strongly disagree that she was murdered.

    This research says a lot more about the preconceived notions; lack of statistical training; complete lack of common sense; and research biases of the sociologists working in this area than it does about conspiracies surrounding Diana’s death.

    Ironically these people who purport to be the experts on conspiracy theories are behaving exactly like conspiracy theorists themselves. They have constructed a view of reality which is utterly ridiculous, flies in the face of common sense, and is unsupported by their own data, and yet they are vigorously defending their viewpoint against anyone who attempts to argue rationally with them.

  38. Spence_UK
    Posted Nov 8, 2013 at 6:22 PM | Permalink

    Wow. Ignoring the blindingly obvious while hand-waving about statistics they got out of a statistics mincing machine.

    Perhaps someone needs to write a paper on the psychology of people who write peer-reviewed papers and then refuse to accept simple and clear evidence that they have made an error – when that error presumably means at a minimum publishing a correction. How about “Motivated rejection of the bleeding obvious” as a title.

  39. Mickey Reno
    Posted Nov 8, 2013 at 9:44 PM | Permalink

    Lewandowsky, making the Royal Society proud.

  40. A. Scott
    Posted Nov 8, 2013 at 11:22 PM | Permalink

    Interesting … Michael Wood was one of the original reviewers of the Recursive Fury paper. I emailed and asked him, to the extent he was comfortable, if he would comment on his being removed as a reviewer after initial publication.

    He responded cordially and promptly, indicating that he had had issues with the paper and had asked for revisions; they were not made, hence he asked to be removed as a reviewer.

    Recursive Fury also listed Wood’s paper as a reference.

    It is interesting to see Lewandowsky involved in reviewing a Wood paper. While it makes sense based on shared interests, I had the distinct impression Michael Wood was not enamored of the Lew crew, but I certainly could be wrong.

    Steve – have you asked Wood to participate in the discussion here?

  41. Brian H
    Posted Nov 8, 2013 at 11:29 PM | Permalink

    Innumeracy strikes again. Multiplying and/or dividing by 0 is a waste of time.

  42. David Brewer
    Posted Nov 9, 2013 at 4:33 AM | Permalink

    I get all this, but I can still hardly believe Lewandowsky and Wood are so crazy as to make their assertion.

    The two propositions in play are not just contradictory in the sense of being mutually inconsistent explanations of a single fact. They lead to different facts. If Diana faked her own death, she is still alive. If MI6 killed her, she is dead. No one could possibly believe both at the same time, and it takes a special type of obtuseness to believe they could.

    • Posted Nov 9, 2013 at 11:02 PM | Permalink

      […] it takes a special type of obtuseness […]

      Such ‘special … obtuseness’ has been repeatedly demonstrated in spades by Lewandowsky and his acolytes and lesser lights (not to mention his Mannomatic™ superlights), has it not?!

    • JohnC
      Posted Nov 12, 2013 at 10:54 AM | Permalink

      I tried posting this last night, lost in moderation I guess.

      Another problem is that the propositions are not contradictory. If Diana faked her own death, MI6 could still have killed her afterwards; indeed it would have been easier for MI6, because she was already officially dead. If you conduct a study of mutually contradictory beliefs, the propositions you present must be genuine opposites.

  43. Posted Nov 9, 2013 at 4:56 AM | Permalink

    Once alerted to a possible mistake, there is still the matter of what an academic does next. Having got past the initial heart palpitations and rosy cheeks of acute embarrassment, I guess the first move is to check whether the criticism is valid.

    It is.

    Next comes a chance to show your quality. Do you i) bluster, dissemble, tilt at windmills, set fire to straw men, rearrange the furniture, try to prove you were right anyway by collecting more data and analysing it differently, etc, or ii) accept the error and make efforts to address it by issuing a correction?

    Such mistakes actually offer salutary lessons and are excellent material for lectures: here’s what I did… anyone spot anything wrong… here’s what happened next… this guy came out of nowhere and explained that I’d made a huge error… imagine my embarrassment… here’s why you can’t do what I did… lessons learned…

    A completed journey that included a shipwreck probably makes better dinner conversation than one that went smoothly!

    • Steve McIntyre
      Posted Nov 9, 2013 at 8:39 AM | Permalink

      Another interesting aspect of this incident is that Brandon Shollenberger independently became curious enough about the result to request the data, noticed the error just as quickly as I did and, like me, wrote to the author. Whereas none of the peer reviewers, none of the journal’s readers, nor Lewandowsky, who cited the result, thought to check it.

      • hunter
        Posted Nov 9, 2013 at 10:45 AM | Permalink

        Steve,
        And that is what makes these academics so angry. You and a growing list of others are asking simple questions that show how lacking their work is, and how shallow their claims of superiority actually are.

      • Brandon Shollenberger
        Posted Nov 9, 2013 at 3:20 PM | Permalink

        It’s definitely interesting. In addition to what you point out, Lewandowsky’s Moon Landing paper used SEM modeling which is built upon the methodology Wood used. I pointed out the problem with it, observing:

        Keep this in mind when hearing about correlations in this post. You can find correlations between people’s belief in a conspiracy and other things even if nobody believes in the conspiracy.

        A short while later, a blog post on this site went up discussing the matter in far more detail. I e-mailed Michael Wood about a problem with his methodology, and a short while later, this post went up. The timing is intriguing.

    • daved46
      Posted Nov 9, 2013 at 9:13 AM | Permalink

      Re: Jit (Nov 9 04:56),

      My wife and I attended a seminar by a recent “world champion of Speaking”, and one of the points he made was not to tell jokes any more but instead to tell stories about your failures. His point was that people get bored if you use a joke they’ve heard before, but a failure, even if it’s one others have made before, is still your own.

      I think this is why I fail to find the team (and others of their ilk) funny.

  44. David Brewer
    Posted Nov 9, 2013 at 2:01 PM | Permalink

    Part of this is that Wood (and Lewandowsky) don’t see the asymmetry between an explanation in which one does believe and explanations which one disbelieves. Belief in one explanation for Diana’s death would exclude belief in alternative explanations. But one could disbelieve any number of explanations, with differing degrees of confidence, without contradiction, and without this saying anything about what one did believe – if indeed one believed anything.

    Unfortunately Wood set up his 7-point belief scale without realizing this. He has interpreted degrees of unbelief as just the inverse of degrees of belief.

    The error Wood ascribes to what turns out to be a null set of people is actually an error in his own understanding of his methodology.

    One can perhaps understand that he might have failed to see the fundamental difference between believing two inconsistent propositions (logically impossible) and disbelieving two inconsistent propositions (logically possible). But only a special type of plonker would insist on concluding that some people are so stupid as to believe Diana is both dead and alive, rather than re-examining his methodology to see why it appeared to produce this absurd result. And it is remarkable that he could fail to see his error even when it was pointed out to him that there is no one in his sample who is as stupid as he says.
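
Brewer's asymmetry suggests a direct check that bypasses correlation entirely: count the respondents who actually affirm both mutually exclusive propositions. A minimal sketch (the helper and toy responses are hypothetical, not Wood et al.'s code or data):

```python
# On a 7-point scale, only responses above the midpoint express belief.
# Believing two mutually exclusive accounts is contradictory; disbelieving
# both is not. The audit is therefore a simple count of respondents who
# score above the midpoint on BOTH items.
def contradictory_believers(item_a, item_b, midpoint=4):
    """Count respondents who affirm (rating > midpoint) both of two
    mutually exclusive propositions."""
    return sum(1 for a, b in zip(item_a, item_b)
               if a > midpoint and b > midpoint)

# Toy responses: everyone disbelieves at least one account, so the count
# is zero even though the raw scores co-vary.
faked    = [1, 2, 2, 3, 1, 4, 2]
murdered = [2, 1, 3, 3, 2, 3, 1]
print(contradictory_believers(faked, murdered))  # 0
```

On Wood et al.'s own data, Steve reports this count is exactly zero for the "faked her death" and "MI6 killed her" items.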

  45. Posted Nov 9, 2013 at 2:53 PM | Permalink

    Wood and Lewandowsky failed to see that what they were ridiculing is reasonable behavior. There are countless things we are supposed to believe — the cause of obesity is eating too much sugar, for example. The less you believe these things, the more credence you will give (not a lot, but some) to the many alternatives to them (obesity is caused by X, obesity is caused by Y, obesity is caused by Z, and so on). A person who is less sure than the experts that obesity is caused by sugar, will give a little bit of credence to many alternatives. You can call this “the method of multiple working hypotheses” or you can call it common sense.

  46. Chuck Nolan
    Posted Nov 9, 2013 at 8:51 PM | Permalink

    Are there any other shrinks discussing cagw?
    Is Lew the go-to shrink for cagw?
    There must be some rational person in the profession who could check it out.
    Do any others in the shrink business agree with Lew, or is it just Mann & co?
    cn

  47. Chuck Nolan
    Posted Nov 9, 2013 at 8:54 PM | Permalink

    I guess creating a generation of neurotics is good for business…
    When you’re a shrink.
    cn

  48. A. Scott
    Posted Nov 9, 2013 at 9:07 PM | Permalink

    Mike Wood appears to blog here:

    http://conspiracypsych.com/author/disinfoagent/

    Interestingly, it would appear he may no longer be with the University. His page is gone:
    http://www.kent.ac.uk/psychology/people/woodm/

    And he is no longer on the directory:

    Click to access phonelist.pdf

    • Posted Nov 10, 2013 at 2:42 PM | Permalink

      On his blog Wood refers to fellow author Karen Douglas as his PhD supervisor, which would explain why he’s moved on from Kent University.
      I’ve posted a positive comment at Frontiers under his abstract. His content analysis does something Lew would never dream of doing – apply the same criteria of analysis to both sides of an argument.

    • Posted Nov 12, 2013 at 4:10 AM | Permalink

      He is now a lecturer at the University of Winchester. No, I’d never heard of it either. They do courses in creative writing, fashion, film studies, media studies … and psychology.

  49. seanbrady
    Posted Nov 10, 2013 at 1:02 AM | Permalink

    I once did a study which conclusively failed to disprove the proposition that no one who denied the grassy knoll theory didn’t also simultaneously lack confidence in critiques of the government’s official dismissal of the possibility that Oswald didn’t act alone.

    Conspiracy theorists are just so weird.

  50. sleeper
    Posted Nov 10, 2013 at 11:28 AM | Permalink

    Ignorance doesn’t really bother me. Ignorant people getting paid to produce ignorance… now that bothers me.

  51. Brandon Shollenberger
    Posted Nov 10, 2013 at 11:47 AM | Permalink

    I posted a comment at Bishop Hill to correct a minor error in its post referring to this article. It might be of some interest here:

    The first paragraph of this post is mistaken:

    There have been some interesting developments on the Lewandowsky front. Firstly, Steve McIntyre has written a hilarious post about one of the sources Lew relied on in his Moon Hoax paper.

    Wood et al 2012 (Dead and Alive) was not used as a reference in LOG12.

    I’ll see if I can get the sequence straight. Both of those papers were published without reference to the other. Recursive Fury was then published, citing both papers. Michael Wood initially served as a reviewer of the paper, but he asked to be removed after his complaints regarding the paper were overruled. He then published a new paper which cited all three of the other papers, and Stephan Lewandowsky served as a reviewer for it.

    In addition to that, both of the original papers cited a Swami 2011 paper. That paper was one of a series by Viren Swami about conspiracists, members of which were referenced in all four papers. Swami is credited as editing and reviewing the Recursive Fury paper. It’s all pretty incestuous.

    On another note, Swami’s series of papers on conspiracists used SEM in an almost identical way to Lewandowsky’s use in at least one paper. It’s plausible Lewandowsky used Swami’s work, at least in part, as inspiration for his “analysis.” I haven’t been able to find copies of the Swami papers referenced by Lewandowsky and Wood, but the fact that a related paper used the methodology is suggestive. That’s especially true since Lewandowsky offered no reference or discussion to justify his use of SEM. The Swami paper I linked to is similar in that it offers only a hand-wave at a textbook on SEM, without any discussion or justification. A further similarity is that neither Swami nor Lewandowsky performed the basic checks that are standard precursors to the use of SEM, checks which would have shown there were problems that needed to be addressed (problems of the same class as Wood et al.’s).

    I’d like to get copies of Lewandowsky’s earlier papers to see what methodologies he’s used before, as well as Swami’s papers to see what each of them use for methodology, but there’s already enough incestuousness here to be troubling.

    • Brandon Shollenberger
      Posted Nov 10, 2013 at 1:08 PM | Permalink

      Barry Woods commented on a major point I somehow didn’t catch (more here):

      Michael Wood – the Frontiers reviewer of Fury that pulled out, has a paper citing Recursive Fury at Frontiers!!!

      Frontiers took down Recursive Fury in response to complaints yet months later accepted and published a paper which cited Recursive Fury. As far as I know, that hadn’t been caught until recently. It puts a whole new light on how Frontiers has handled complaints.

      I wonder how Frontiers would respond if someone filed a complaint about this. Would they delete the single sentence in the paper which references Recursive Fury, or would they stand by the citation?

      • Posted Nov 10, 2013 at 3:12 PM | Permalink

        A Scott upthread has discovered a most interesting blog run by Wood and three other English PhD students on the psychology of conspiracy theorising. There’s just one article there about Lew’s work. I won’t link because it seems to land me in moderation, but it’s worth looking at.

  52. Posted Nov 11, 2013 at 2:21 AM | Permalink

    Sue
    I’ve made a comment at the PsyPAG quarterly issue article. I mention the matter of Michael Wood’s role as reviewer of Lewandowsky’s “Recursive Fury” paper.
    Michael Wood has several articles at this site, but there are none about the Wood Douglas Sutton 2012 “Diana” article which Steve has analysed, as far as I can see.

  53. sue
    Posted Nov 11, 2013 at 4:46 AM | Permalink

    geoffchambers, the Wood Douglas Sutton 2012 paper is also cited in the “Special Issue” that I linked to along with the “Recursive Fury” paper.

  54. Schrodinger's Cat
    Posted Nov 11, 2013 at 5:30 AM | Permalink

    As an expert on being simultaneously dead and alive I take exception to Lewandowsky’s bogus science.

  55. Posted Nov 11, 2013 at 7:02 AM | Permalink

    At the point in the “not withdrawn” Recursive Fury paper where the contradictory conspiracy theories are mentioned and Wood et al 2012 is cited, the authors introduce an appropriate label, MbW, for “must be wrong”.

  56. Posted Nov 11, 2013 at 12:47 PM | Permalink

    I’ve added some more to Sue and A Scott’s discoveries at
    http://geoffchambers.wordpress.com/2013/11/11/the-great-psychological-conspiracy-theory-conspiracy/

    • sue
      Posted Nov 11, 2013 at 4:06 PM | Permalink

      That is an excellent post Geoff!

  57. Joe Born
    Posted Nov 12, 2013 at 8:49 AM | Permalink

    Don’t know if someone already commented on this, but page C2 of the Nov. 9 Wall Street Journal had its left column largely dedicated to Wood et al.’s conclusions. At one point it echoes Wood et al.’s highlighting of conspiracy theorists’ inconsistent beliefs.

    • Steve McIntyre
      Posted Nov 12, 2013 at 9:54 AM | Permalink

      Yes, good spotting. The WSJ article accepts the subpopulation (N=0) results:

      conspiracy theorists can be so focused on rejecting all official versions of things that they come to embrace alternative explanations that are mutually contradictory.

      The study concerned conspiracy theories surrounding the 1997 death of Princess Diana and her boyfriend Dodi Fayed. The alternative “real” stories listed as options by the researchers included: Queen Elizabeth having Diana killed to prevent the mother of the future king from marrying an Arab, Diana’s being killed by business enemies of the Fayed family, and the princess and Fayed faking their own deaths.

      Volunteers were asked to rate the likelihood of each story. Conspiracists in the group often endorsed scenarios that were mutually contradictory. Those who believed, for example, that Princess Diana was assassinated were significantly more likely than chance to believe that she is still alive. In other words, when contemplating any given scenario, the fervent conviction that we are constantly being deceived trumped their ability to assess the internal consistency of their own thinking.

      • HaroldW
        Posted Nov 12, 2013 at 12:12 PM | Permalink

        “Conspiracists in the group often endorsed scenarios that were mutually contradictory.” [My bold] Certainly N=0 gives new meaning to the word “often”! Even Wood et al. didn’t go that far.

        The subtitle of the WSJ article is “How do we distinguish between critical thinking and nutty conspiracism?” It seems that Sapolsky [the WSJ author] demonstrates a third way, neither conspiracies nor critical thinking.

      • Geoff Sherrington
        Posted Nov 29, 2013 at 12:58 AM | Permalink

        A well designed poll can include questions that test respondents’ abilities to be polled, picking up such mechanical effects as donkey votes and mutually exclusive answers to like questions (as might arise from a person with little interest in a considered response). This is part of a poll’s quality control and it does not belong in the analysis of responses, except when used to reject some respondents and/or similar mechanical exercises.
        (A former CEO asked me to set up an after-hours Corporate polling centre. I learned some of the little-seen aspects of polling from consultants engaged to help get it operating & tested.)

  58. David L. Hagen
    Posted Nov 28, 2013 at 8:44 PM | Permalink

    I have nominated, for the Ig Nobel “Statistics prize”:

    Michael J. Wood, Karen M. Douglas, Robbie M. Sutton and Stephan Lewandowsky,
    “for taking small-population statistics to a new and unprecedented level
    by drawing conclusions from a subpopulation of zero”, and

    Stephen McIntyre, “for discovering this remarkable breakthrough by his painstaking, dogged
    persistence in auditing Lewandowsky and Wood et al.”

    I ask for someone to second this nomination to:

    IG NOBEL NOMINATIONS
    c/o Annals of Improbable Research
    PO Box 380853, Cambridge MA 02238, USA
    or to
    marca@improbable.com

    http://www.improbable.com/about/SubmissionGuidelines.html

    • Steve McIntyre
      Posted Nov 28, 2013 at 11:01 PM | Permalink

      🙂

      Andrew Gelman, a very able statistician with an interesting blog, has a post about Wood et al here – without picking up the zero subpopulation lunacy. The post is by Gelman’s associate, Phil Price.

      • David L. Hagen
        Posted Dec 1, 2013 at 1:45 PM | Permalink

        Steve, thanks for the tip. I have asked them to validate your findings and to second this nomination!

    • MrPete
      Posted Apr 17, 2014 at 2:20 PM | Permalink

      Re: David L. Hagen (Nov 28 20:44),
      The nomination should be re-worded and go only to Steve.
      All recipients are given an opportunity to quietly decline the award, as has happened several times. Anyone who would be embarrassed or professionally lose face by receiving it would not win it.

  59. Geoff Sherrington
    Posted Nov 29, 2013 at 1:42 AM | Permalink

    Some others have noted that the structure of the analysis of these polls, with their multi-point Likert scales, is a bit like fitting linear regressions to the various scales and then correlating; the mere fitting of the regression (if not constrained through the origin in some way) can generate a number when that number should be zero.
    This leads me a little off-thread to a problem whose solution I have been seeking for years. There are several publications where a climate index such as temperature is interpolated or extrapolated based on correlations between weather station data at different separations. BEST did this; see http://www.geoffstuff.com/Correl.
    These correlations are too high. I looked at my home town’s weather record (Melbourne) and correlated observations 1 day, then 2 days, then 3 days apart, on the loose assumption that this might be comparable to a weather system moving some distance in 1 day, 2 days, etc.
    See http://www.geoffstuff.com/Extended%20paper%20on%20chasing%20R.pdf
    In short, while BEST show correlation coefficients above 0.85 for distances up to 600 km, I was hard pressed to find any correlation that high by lagging data from one station.
    The question is: what gets the correlation so high on the BEST-type graph?

  60. Posted Apr 17, 2014 at 7:10 AM | Permalink

    Stephan Lewandowsky in interview with The Conversation yesterday:

    There are some researchers who have linked conspiracy beliefs to personality variables. So yes, it is quite possibly a stable characteristic of some sort. The most striking thing is that conspiratorial thinking can be self-contradictory, for example people think MI6 killed Princess Diana while also thinking that she faked her own death.

    The professor links to the Michael Wood/Karen Douglas paper discussed in this thread. Empty sets can go far in the new psychology. Striking indeed.

  61. logicophilosophicus
    Posted Apr 19, 2014 at 3:45 AM | Permalink

    I read the Wood source. I note that the allowed responses to the various conspiracy theories are grades of dis/agreement from 1 through 7. Having answered a number of questionnaires in this format, I have always found it awkward to answer on a topic about which I know very little. In the absence of a “None of the above” box to tick, the only way to express no opinion is to tick “4: neither agree nor disagree.” Obviously this response covers a variety of opinions, indicating either a level of compliance with the assertion in question, or a real lack of interest in the topic, or simply: “I haven’t got a Scooby Doo.” Only in the first case does it make any sense to rank my level of belief.

    BTW I notice that Wood has a climate change conspiracy theory in there. Where would a “moderate warmist” who believed that some scientists or politicians had exaggerated the issue put his or her tick? Luckily we have Wood’s assurance that the Diana theories were always the intended major focus, so we can totally exonerate him and his colleagues from any hint of conspiracy to implicate mild skeptics as loony conspiracy theorists. (Especially since positive correlations between beliefs in climate change conspiracy and other conspiracies are not mentioned at all?)
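
One hedge against the midpoint ambiguity logicophilosophicus raises is a sensitivity check: recode midpoint ticks as "no opinion", drop them, and see whether the conclusions survive. A sketch (illustrative recoding and toy data only; Wood et al. did not do this):

```python
# A "4: neither agree nor disagree" tick can mean mild ambivalence, lack
# of interest, or "I haven't got a Scooby Doo". One check is to drop
# midpoint responses from either item before analysing the pair.
def drop_no_opinion(item_a, item_b, midpoint=4):
    """Return paired responses with midpoint ('neither agree nor
    disagree') answers removed from either item."""
    return [(a, b) for a, b in zip(item_a, item_b)
            if a != midpoint and b != midpoint]

responses_a = [4, 2, 4, 6, 1, 4, 3]
responses_b = [4, 1, 5, 4, 2, 4, 2]
print(drop_no_opinion(responses_a, responses_b))
# only pairs where both respondents expressed an actual opinion remain
```

If the headline correlations shrink or vanish once midpoint responders are removed, the original result was being carried by people who expressed no real opinion at all.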

8 Trackbacks

  1. […] comes to mind, as he does on occasion take on the stats and methodology underlying papers that are peripheral or tangential to pure climate related […]

  2. By Biofuels Kill | sunshine hours on Nov 9, 2013 at 2:39 PM

    […] Hat tip to commenter David L Hagen from here. […]

  3. […] https://climateaudit.org/2013/11/07/more-false-claims-from-lewandowsky/#more-18571 […]

  4. By The Climate Change Debate Thread - Page 3340 on Nov 11, 2013 at 11:12 AM

    […] […]

  5. […] https://climateaudit.org/2013/11/07/more-false-claims-from-lewandowsky/ […]

  6. […] based on a sample of 10, carefully selected from a biased sample of 1200. In his most recent work he draws a conclusion based on zero data points. Why am I not […]

  7. […] are lots of interesting posts on this: Steve’s at Climate Audit, Judy Curry, Matt Briggs, Warren Pearce, WUWT and related posts by Lucia and Ben […]

  8. […] McIntyre, with some difficulty, obtained the data. There was a reason for the author being a bit circumspect. McIntyre […]
