Another Absurd Lewandowsky Correlation

Lewandowsky’s recent article, “Role of Conspiracist Ideation,” continues his pontification on populations of 2, 1 and zero.

As observed here a couple of days ago, there were no respondents in the original survey who simultaneously believed that Diana faked her own death and was murdered. Nonetheless, in L13Role, Lewandowsky not only cited this faux example, but used it as a “hallmark” of conspiracist ideation:

For example, whereas coherence is a hallmark of most scientific theories, the simultaneous belief in mutually contradictory theories—e.g., that Princess Diana was murdered but faked her own death—is a notable aspect of conspiracist ideation [30].

However, this example is hardly an anomaly. The most cursory examination of L13 data shows other equally absurd examples.

One of the more amusing ones pertains to one of Lewandowsky’s signature assertions in Role, in which he claimed, echoing an almost identical assertion in Hoax, that “denial of the link between HIV and AIDS frequently involves conspiracist hypotheses, for example that AIDS was created by the U.S. Government [22–24].”

Lew reported a correlation of -0.111 between CYAIDS and CauseHIV, citing this correlation (together with negative correlations related to smoking and climate change) as follows:

The correlations confirm that rejection of scientific propositions is often accompanied by endorsement of scientific conspiracies pertinent to the proposition being rejected.

However, as with the fake Diana claims, Lewandowsky’s assertions are totally unsupported by his own data.

In the Role survey (1101 respondents), there were 53 who purported to disagree with the proposition that HIV caused AIDS (a vastly higher proportion than in the climate blog survey – a point that I will discuss separately). Of these 53 respondents, only two (3.8% of the 53 and 0.2% of the total) also purported to believe the proposition that the government caused AIDS. It is therefore simply untrue for Lewandowsky to assert, based on this data, that denial of the link between HIV and AIDS was either “frequently” or “often” accompanied by belief in the government AIDS conspiracy. It would be more accurate to say that it was “seldom” accompanied by such belief. Although Lewandowsky did not mention this, both of the two respondents who purported to believe this unlikely juxtaposition also believed that CO2 had caused serious negative damage over the past 50 years.
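
The arithmetic is easy to check with a one-line cross-tab. Here is a minimal R sketch, assuming the Role responses sit in a data frame (here called role) with the column names used in this post and answers coded from 1 (strongly disagree) to 4 (strongly agree) – the names and coding are assumptions, not Lewandowsky’s actual file layout:

```r
# Cross-tab check; the 'role' data frame and 1-4 coding are assumptions.
table(role$CauseHIV, role$CYAIDS)        # full contingency table
deniers <- subset(role, CauseHIV <= 2)   # disagree that HIV causes AIDS
nrow(deniers)                            # 53 in the Role data
sum(deniers$CYAIDS >= 3)                 # of those, how many endorse CYAIDS (2)
```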

Lewandowsky’s assertion in Role about a supposed link between denial of a connection between HIV and AIDS and a government AIDS conspiracy had been previously made in Hoax not just once, but twice:

Likewise, rejection of the link between HIV and AIDS has been associated with the conspiratorial belief that HIV was created by the U.S. government to eradicate Black people (e.g., Bogart & Thorburn, 2005; Kalichman, Eaton, & Cherry, 2010)…

Thus, denial of HIV’s connection with AIDS has been linked to the belief that the U.S. government created HIV (Kalichman, 2009)

However, Lewandowsky’s false claim received even less support in the survey of stridently anti-skeptic Planet 3.0 blogs. Even with fraudulent responses, only 16 of 1145 (1.4%) purported to disagree with the proposition that HIV caused AIDS, and of these 16, only 2 (12.5%) also purported to endorse the CYAIDS conspiracy. These two respondents were the two respondents who implausibly purported to believe in every fanciful conspiracy. Even Tom Curtis of SKS argued that these responses were fraudulent. Without these two fraudulent responses, the real proportion in the blog survey is 0. Either way, the data contradicts Lewandowsky’s assertion that disagreement with the HIV-AIDS proposition is “often” or “frequently” accompanied by belief in the government AIDS conspiracy at the climate blogs surveyed by Lewandowsky.

Although even fewer respondents in the blog survey supposedly subscribed to the unlikely propositions, the negative correlation between the CYAIDS and CauseHIV propositions was even more extreme: a seemingly significant -0.31, even though only the two fake respondents purported to hold both unlikely propositions.
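
A toy simulation illustrates the mechanism. The response probabilities below are invented, so the magnitude is only illustrative, but it shows how a couple of joint outliers can manufacture a nominally significant correlation out of otherwise unrelated responses:

```r
# Toy illustration: two "agree-with-everything" fakes appended to 1143
# otherwise unrelated responses. Probabilities are invented for the sketch.
set.seed(1)
CauseHIV <- sample(3:4, 1143, replace = TRUE, prob = c(0.05, 0.95)) # near-universal agreement
CYAIDS   <- sample(1:2, 1143, replace = TRUE, prob = c(0.95, 0.05)) # near-universal disagreement
cor(CauseHIV, CYAIDS)                      # ~0: no association in the bulk
cor(c(CauseHIV, 1, 1), c(CYAIDS, 4, 4))    # two fakes drive r sharply negative
```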

Update: I’ve added some plots below to illustrate how Lewandowsky’s calculations of correlation go awry.

The contingency table of CauseHIV and CYAIDS for the L13Hoax data is shown below, with the size of each circle proportional to the count in the contingency table. Most of the responses are identical – thus the large circle. Because there are only two respondents purporting to hold the two most unlikely views, this is a very faint dot. A correlation coefficient implies a linear fit and normality of residuals: visually this is obviously not the case. There are a variety of tests that could be applied and the supposed Lewandowsky correlation will fail all of them.

[Figure: CauseHIVvsCYAIDS_Hoax – contingency table of CauseHIV vs CYAIDS in the Hoax data, with circle area proportional to cell count]
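
For anyone wanting to reproduce this style of plot, here is a minimal base-R sketch under the same assumptions about the data frame and column names:

```r
# Bubble plot of the contingency table: circle area proportional to count.
# 'dat' with columns CauseHIV and CYAIDS is assumed, as before.
tab  <- table(dat$CauseHIV, dat$CYAIDS)
grid <- expand.grid(x = as.numeric(rownames(tab)), y = as.numeric(colnames(tab)))
symbols(grid$x, grid$y, circles = sqrt(as.vector(tab) / pi), inches = 0.3,
        xlab = "CauseHIV", ylab = "CYAIDS")
```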

If one goes back to the underlying definition of a correlation coefficient, it is a dot-product of two vectors. In the context of a contingency table, this means that the contribution of each square in the contingency table to the correlation can be separately identified. I’ve done this in the graphic shown below, since the points, while elementary, are not immediately intuitive in these small-population situations. For each square in the contingency table, I’ve calculated the dot-product contribution and multiplied it by the count in the square, thereby giving the contribution to the correlation coefficient (which is the sum of the dot-product contributions). The area of each circle shows the contribution to the correlation coefficient: pink shows a negative contribution.
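
In code, the decomposition takes only a few lines. Here is a sketch under the same assumptions about the data frame; the final check confirms that the cell contributions sum to the Pearson correlation:

```r
# Decompose a Pearson correlation into per-cell contributions of the
# contingency table: contribution(i,j) = n_ij * z(x_i) * z(y_j) / (N - 1).
r_contributions <- function(x, y) {
  N   <- length(x)
  tab <- table(x, y)
  zx  <- (as.numeric(rownames(tab)) - mean(x)) / sd(x)   # standardized row values
  zy  <- (as.numeric(colnames(tab)) - mean(y)) / sd(y)   # standardized column values
  contrib <- outer(zx, zy) * tab / (N - 1)
  stopifnot(isTRUE(all.equal(sum(contrib), cor(x, y))))  # contributions sum to r
  contrib
}
# contrib <- r_contributions(dat$CauseHIV, dat$CYAIDS)   # pink cells: contrib < 0
```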

There are a few interesting points to observe. In a setup where nearly all the responses are identical and at one extreme, these responses make a positive contribution to the correlation coefficient. Responses in which the respondent strongly disagrees with CYAIDS but only agrees with CauseHIV, or in which the respondent strongly agrees with CauseHIV but only disagrees with CYAIDS, make a negative contribution to the correlation. Respondents with simple agreement with CauseHIV and simple disagreement with CYAIDS make a strong contribution to the correlation coefficient. The two (fake) respondents make a very large contribution to the correlation coefficient despite being only two responses.

[Figure: CauseHIVvsCYAIDS r contributions_Hoax – per-cell contributions to the correlation coefficient in the Hoax data; pink circles indicate negative contributions]

53 Comments

  1. Posted Nov 13, 2013 at 1:24 AM | Permalink

    But, Steve, is there not a possibility (albeit highly improbable, if not implausible!) that you may not have considered during the course of your analysis: Perhaps Lewandowsky has redefined (inter alia) “correlation”, “frequently” and “often” 😉

    Kidding (somewhat) aside, the more I see of Lewandowsky’s “work”, the more I worry about the future of our planet’s stock of academics.

    Perhaps history will one day record that Mann represented the nadir of mediocrity; and once this “tipping point” had been reached, it’s been downhill all the way for those who have tied their sails to his mast and/or puck to his hockey-stick!

    • John Ritson
      Posted Nov 13, 2013 at 4:01 AM | Permalink

      How can you go downhill from a nadir, especially if you’re on a boat or an ice rink?

      • Posted Nov 13, 2013 at 5:57 AM | Permalink

        This was indeed hard until populations of zero were shown by Dr Lewandowsky to prove both a proposition and (by the same logic) its opposite. After that no hill, however steep, either up or down, could be an obstacle or even a hill.

        • Jim S
          Posted Nov 13, 2013 at 4:09 PM | Permalink

          +1

      • Gary
        Posted Nov 14, 2013 at 11:34 AM | Permalink

        Going downhill from a nadir just requires application of the Tiljander correction.

    • Posted Nov 13, 2013 at 10:44 AM | Permalink

      His approach, as identified by Hilary, was of course nicely summarized by Lewis Carroll:

      ‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

      ‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

      ‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

      Alice was too much puzzled to say anything; so after a minute Humpty Dumpty began again. ‘They’ve a temper, some of them — particularly verbs: they’re the proudest — adjectives you can do anything with, but not verbs — however, I can manage the whole lot of them! Impenetrability! That’s what I say!’

      Certainly, the goal as stated in the penultimate sentence seems to be the aim…

      • Posted Nov 13, 2013 at 11:59 AM | Permalink

        A frabjous point, my beamish boy.

        • Gary Lohr
          Posted Nov 19, 2013 at 12:59 PM | Permalink

          Almost made me chortle.

  3. Posted Nov 13, 2013 at 2:12 AM | Permalink

    I have to say I feel bad for my girls that are going into science.

  5. Posted Nov 13, 2013 at 2:52 AM | Permalink

    It would seem at first sight that reasoning from populations of zero (or, if one is data-rich, two whole respondents) is a hallmark of Lewandowsky. The real hallmark, history will record, is demonisation of the mentally ill to smear his opponents.

    In the UK there is currently a high-profile campaign against stigmatisation of mental illness, with Tony Blair’s former director of communication, Alastair Campbell, who has been candid about his own battles with alcoholism and depression (something said to have forged a close personal link with George W Bush), playing a leading role.

    It came to mind as I wrote the first paragraph and wondered at the lack of compassion in Lewandowsky. Two strands in our culture so at odds. To be clear, I’m not pleading insanity and thus diminished responsibility for being a long-time supporter of Steve McIntyre’s highly rational critiques of climate science. But I wonder if there could be something self-correcting not just in science but in society. Those of good heart must make the connection.

  6. KNR
    Posted Nov 13, 2013 at 3:01 AM | Permalink

    ‘Although Lewandowsky did not mention this, both of the two respondents who purported to believe this unlikely juxtaposition also believed that CO2 had caused serious negative damage over the past 50 years.’

    No surprise. One thing that made Lewandowsky’s work look like total rubbish from the start was the frequency of true AGW believers who were also 9/11 truthers and are more than willing to see the CIA in every bad act.

    Given there are those on both sides who find tin-foil good hat material, the idea that AGW proponents were all pro-science is totally undermined by their views on GM and nuclear, where science is thrown out of the window in favour of a scare.

  7. Geoff Sherrington
    Posted Nov 13, 2013 at 3:36 AM | Permalink

    In the FAQ article linked here by geoffchambers, Prof Lewandowsky asserts that –
    “In the history of science, we are not aware of a case in which a serious scientific issue was adjudicated by tabloid journalists or their modern-day equivalents such as blog commenters”.
    Irrespective of the historical accuracy of this assertion, there is a high probability that it has been shown wrong on this very blog by commenter Steve McIntyre.
    To avoid disproof of this assertion, it is simply required of Lewandowsky to show that commenter McIntyre got it wrong.
    Otherwise, it’s had it.

    • seanbrady
      Posted Nov 13, 2013 at 12:59 PM | Permalink

      On this point, I recommend “The Book Nobody Read: Chasing the Revolutions of Nicholas Copernicus” by Owen Gingerich.

      The title is ironic, a quote from an earlier academic who mistakenly thought that although everyone wanted a copy of De Revolutionibus for his library, nobody actually read it all the way through.

      Gingerich shows that, in fact, a large proportion of the existing 1st and 2nd edition copies of the book were heavily annotated by people who decided to work through all of Copernicus’ calculations in the margins.

      Many of them were not full time scientists or academics and some of them contributed to the evolving understanding of the solar system through their commentary. I would say those folks who got their hands on the calculations and worked it all the way through, checking and sometimes criticizing the calculations in the margins, then sharing their copy with others, are the modern day equivalents of the best blog commenters.

  8. samD
    Posted Nov 13, 2013 at 5:53 AM | Permalink

    A few bits: A typo? Wasn’t it 1001 respondents?

    Be careful with the reversed statements (marked with R). These are mis-reported in the figures and have the agree-disagree scale around the wrong way – you can tell by cross-tabbing answers from the datafile and checking for consistency.

    The age question wasn’t properly screened or cleaned. It’s not a big issue, but it is an easy and obvious check you’d expect to be done.

    Within the survey respondents didn’t have the option of ‘Don’t know’. Anyone not sure had to put themselves into the middle category. For instance, 23 respondents rated 3 on all 40 statements. (One interpretation of a statement with a high number of 3-ratings therefore is that respondents didn’t fully understand the question)

    This leads on to the point that a national representative sample of individuals must by definition include people with a huge variety of backgrounds and educational abilities. Long and strange/boring attitude batteries will have a level of ‘mis-punching’ – both accidental and wilful – particularly if you can’t select Don’t know.

    In addition, complex and multi-part statements may be misread by people with lower than high-school level reading comprehension, or people completing the survey too quickly. CNatFluc (which is reversed) seems to have people not picking up the ‘just’ in the wording.

    For this reason, a level of ‘human-failing’ error has to be taken into consideration in the results and not just statistical error – this can be quite high, particularly on self-complete online surveys.

    It was also strange to report variable constructs, and then look for SEM relationships without looking for groups classified under the generated constructs. My reason for looking through the data was to establish how many people had each viewpoint. It felt like the work was done without ever running a cross-tab.

    Steve: I agree 100% that the “work” seems to have been done without elementary cross-tabs. The SEM techniques used by Lewandowsky seem to be popular and deeply embedded in psychology. It would be interesting to see a thorough discussion of these methods by real statisticians – in particular, discussion of their calculations of “statistical significance”.

  9. observa
    Posted Nov 13, 2013 at 5:56 AM | Permalink

    There must be some practising climatologists out there that are prepared to call out these charlatans and hangers-on, if only to maintain some shred of credibility for their field. OTOH it may be a lot less cringeworthy for them to drop the tag of climatologist altogether. Classic Catch22 for them I suppose.

    • Kneel
      Posted Nov 13, 2013 at 4:05 PM | Permalink

      Judy Curry is one:

      A subterranean war on science?

      “With regards to the Lewandowsky papers that were the subject of Condon’s and Wood’s concerns, it is difficult to call that twaddle ‘science.’ Lewandowsky’s defensive moves trying to cover up the deficiencies of his studies was pathetic.”

  10. chris y
    Posted Nov 13, 2013 at 8:45 AM | Permalink

    Lewandowsky and pals may have inadvertently stumbled upon a new branch of psychology. Maybe call it Quantum Psychology (QP), with a concomitant uncertainty principle that allows zero point fluctuations in sub-populations that have a classical count of zero. The correlations Lewandowsky reports could be likened to virtual particles. Attempts to observe these virtual populations influences the population count and/or the phase of the belief being measured. Reconciling QP with Classical Psychology will require decades of research grants.

    This could be the dawn of a whole new ‘cross-disciplinary’ division at the National Science Foundation…

  11. EdeF
    Posted Nov 13, 2013 at 9:37 AM | Permalink

    Ex Nihilo Absurdum.

  12. Wayne
    Posted Nov 13, 2013 at 11:02 AM | Permalink

    Steve,

    SEM originated in economics, but as far as I can tell it’s been most widely adopted in psychology/sociology and other softer “sciences”. The psychology association originally biased me against it, but while it may be misunderstood and abused in that field, it’s statistically legitimate as far as I can tell.

    While it does have a reasonable statistical foundation, it also takes a lot more due diligence than many methods and it also likes large sample sizes. It also makes much stronger assumptions than most methods about correlations and what you do and don’t include in your model. It features more goodness-of-fit measures than you can shake a stick at, none of which are superior to the others, and it is more of a confirmatory/comparative tool than most.

    It can implement the equivalent of many other methods, including linear regression, simultaneous equations, and models with latent variables. Stata has really been pushing it in the latest releases — after initially being skeptical about it — which also should give some statistical cred to it. To quote from their manual:

    “Structural equation modeling encompasses a broad array of models from linear regression to measurement models to simultaneous equations, including along the way confirmatory factor analysis (CFA), correlated uniqueness models, latent growth models, multiple indicators and multiple causes (MIMIC) models, and item-response theory (IRT) models.”

    “Structural equation modeling is not just an estimation method for a particular model in the way that Stata’s regress and probit commands are, or even in the way that stcox and mixed are. Structural equation modeling is a way of thinking, a way of writing, and a way of estimating.”

    That’s my take on it, anyhow. Of course, Lew is pretty much using a tool that he doesn’t understand but which he uses to confirm his biases. SEM is more flexible than most in that regard, but if used properly it seems to be a powerful tool. I wouldn’t focus too much on SEM itself but rather on Lew’s incompetent/improper use of it.

    • Steve McIntyre
      Posted Nov 13, 2013 at 11:29 AM | Permalink

      I see what it is doing. However, in my experiments with the method – I’ve been using the R-package lavaan, which seems more stable than the R-package sem – it seems possible to achieve “statistical significance” with a variety of different models. So when Lew makes statements about statistical significance using one model, that doesn’t preclude achieving statistical significance with another somewhat contradictory model.

      Plus, the documentation that I’ve seen presumes normality. Do these assumptions extend to circumstances where one is examining the relationship between hypotheses for which there are only a few outliers? It seems to me that one cannot assume this; it has to be demonstrated.
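
      For concreteness, here is a minimal lavaan sketch of the kind of model being discussed; the factor structure and variable names are placeholders, not Lewandowsky’s actual specification:

      ```r
      # Placeholder model for illustration, not Lewandowsky's specification.
      library(lavaan)
      model <- '
        Conspiracy =~ CYAIDS + CYMoon + CYClimChange  # latent conspiracist ideation
        CauseHIV ~ Conspiracy                         # observed outcome on the latent
      '
      fit <- sem(model, data = dat)                   # default ML assumes normality
      # Declaring 4-point Likert items as ordered switches lavaan to a robust
      # WLSMV-type estimator rather than treating them as normal continuous data:
      fit_ord <- sem(model, data = dat,
                     ordered = c("CYAIDS", "CYMoon", "CYClimChange", "CauseHIV"))
      summary(fit_ord, fit.measures = TRUE)
      ```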

      • Wayne
        Posted Nov 13, 2013 at 1:00 PM | Permalink

        Yes, I prefer R’s lavaan as well. Much nicer notation.

        I believe the “many models statistically significant” is part and parcel of SEM CFA. You’re supposed to use various goodness-of-fit measures and tests to compare among statistically significant models to find the best. It’s flexible enough that you can reverse arrows and still get the same results.

        Perhaps a useful quote: “Individual parameter estimates in a model can be meaningless even though model-fit criteria indicate an acceptable measurement or structural model. Therefore, interpretation of parameter estimates in any model analysis is essential.”

        Lavaan and others are TSLS/covariance approaches. If it makes better sense to you there are also PLS (partial least squares) methods in R libraries like semPLS, plspm, pls, etc, which evidently work better in large-scale applications.

      • Brandon Shollenberger
        Posted Nov 13, 2013 at 2:08 PM | Permalink

        Steve, (univariate or multivariate) non-normality can definitely affect SEM results. Parameter estimation can be affected, and verification statistics likely will be. The extent of the effect of non-normality depends on a variety of factors, and it is mitigated by a large (500+) sample size, but it is never something that can be ruled out a priori.

        That’s why one of the first steps in any guide to using SEM is to check for normality. It’s also common to modify a data set by doing things to increase normality. The most relevant example being the deletion of outliers (though one should always be careful to test the effect of such modifications).
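
        A minimal sketch of that first screening step (the psych package is one common choice; the data frame and column names are assumed):

        ```r
        # Screen for non-normality before fitting an SEM; 'dat' and the
        # column names are assumptions.
        library(psych)
        items <- dat[, c("CYAIDS", "CYMoon", "CauseHIV")]
        mardia(items)     # Mardia's multivariate skew and kurtosis tests
        describe(items)   # univariate skew and kurtosis, item by item
        ```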

        Steve: the problem with the Hoax dataset is that term “non-normality” doesn’t do full justice to things like CYMoon, CYAIDS,… where the vast majority of respondents strongly disagreed and there were only a few dissenters, some of whom were fraudulent.

        • Brandon Shollenberger
          Posted Nov 13, 2013 at 3:36 PM | Permalink

          I agree “non-normality” doesn’t do the data set justice. I’m not sure SEM can be sensibly applied to it, even if you transformed the data to make it more normal and deleted outliers (or used bootstrapping to try to account for them). The only thing I’m sure of is the results would be dramatically different if you did.

          Lewandowsky et al failed to do any of the simplest checking necessary to justify SEM. In doing so, they ignored problems which indisputably affected their results. Anyone with any meaningful experience using SEM would know better than to do what they did. It may be justifiable to use SEM for this data set, but it is not possible to justify how Lewandowsky et al used it.

        • Geoff Sherrington
          Posted Nov 13, 2013 at 8:36 PM | Permalink

          The first lecture in my 1960s statistics class started with “First, establish your distribution”.
          Soon after, we touched on ways like logs to transform non-normal to pseudo-normal for ease of work. Plus associated known dangers.
          I guess that’s all become non-trendy in this post-normal age.

  13. mpaul
    Posted Nov 13, 2013 at 3:09 PM | Permalink

    Lew’s pioneering work in the statistical treatment of small populations traces its roots to one of his earliest papers: “The Role of Pre-existing Attitudes in the Denial of Self Regeneration”, Quarterly Journal of Statistical Psychology, 66, 1493 – 1503.

    In it he concludes, “Despite overwhelming evidence of fact denial within the popular culture, our scientific analysis (involving the sampling of more than 1,000,000 individuals) conclusively demonstrates that the average human has one testicle and one ovary.”

    • David Jay
      Posted Nov 13, 2013 at 10:18 PM | Permalink

      Nice!

      Of course you are assuming a whole-number value.

      Actual data from large sample populations will be closer to .97 testicle, 1.03 ovary.

  14. Posted Nov 13, 2013 at 3:40 PM | Permalink

    My comment here (the second, just after Hilary Ostrov’s) has disappeared, though Geoff Sherrington’s comment referencing mine (currently the eleventh) is still there.
    I realise that this blog is essentially devoted to the statistical analysis of certain scientific propositions, but tolerates comments which advance the discussion in some way. For that reason, I avoid comments on the statistics, and try to limit my interventions to information on developments of the argument elsewhere.
    There are many issues with Lewandowsky’s “Role” paper, of which Steve’s statistical analysis is just one. I referred to others dealt with in comments on Lewandowsky’s own blog.
    Steve made a comment the other day on my blog. I would like to assure him that it will remain there.

    Steve: I don’t recall any issue with your comment. Dunno what happened.

  15. Brandon Shollenberger
    Posted Nov 13, 2013 at 3:45 PM | Permalink

    Michael Wood sent me an e-mail (in response to this one) in which he defends his methodology. I wrote a response which discussed technical problems with what he said. I then decided a technical discussion would likely get bogged down and resolve nothing. To avoid that, I wrote a different e-mail. This is a copy:

    Dear Michael,

    I could write a technical explanation for why your methodology is flawed. In fact, I did. Then I deleted it. Rather than discuss technical details, I’m going to simplify things. Forget everything that’s been said before now. Consider instead the effects of your methodology:

    Suppose I surveyed 100 liberals, asking their views on various issues as well as how conservative/liberal they are. Naturally, I would find correlations. Under your methodology, I could take those correlations and declare they prove things about what conservatives believe. The fact I surveyed no conservatives would be irrelevant to your methodology. The fact my data has absolutely no information about conservative views wouldn’t matter – I’d still be able to conclude things about conservatives’ beliefs.

    To demonstrate the full absurdity of this, I’ll take things to the extreme. Suppose in my previous example one of the questions I asked was, “Do you think slavery is bad?” Every liberal I surveyed would say yes. That’d give me a high correlation between liberal views and opposition to slavery. Via your methodology, I could then conclude, “Conservatives strongly support slavery.”

    I could ask women, “Have you ever murdered somebody?” then publish a paper which says, “All men are murderers.”
    I could ask global warming skeptics, “Are you from outer space?” then publish a paper which says, “Global warming movement acknowledged to be an alien conspiracy!”

    There is no limit to this. Your methodology would allow us to “prove” any negative characteristic exists for any group. There is no way to justify it. One doesn’t need a technical explanation. It’s obvious a methodology is nonsensical if it can prove things about groups no information exists for.

    That said, I can provide a technical explanation for why this happens if you’d like.

    Brandon Shollenberger

    Steve: remember how Gavin Schmidt, William Connolley etc pretended to be “unable” to understand upside-down Tiljander. I’m sure that Wood will be even more defensive. Over the years, I’ve tried to stay away from criticizing young scientists. Lewandowsky’s use of this stuff is worse than Wood’s error.

    • Posted Nov 15, 2013 at 4:27 PM | Permalink

      Wood has decided to terminate communication with me. While not surprising, it’s annoying he’d misrepresent my argument in insulting ways. For example, he says:

      If I survey a large group of liberals and they all give the same response regarding slavery, the correlation between political orientation and slavery would be impossible to compute because there is no variation in either variable. If we instead gave that group of liberals a left-right political orientation scale on which they placed themselves, and there was some variation in that variable but not in the slavery variable, we would get the same result. You can’t compute a correlation coefficient in the absence of variation – this is very basic statistical knowledge.

      While it is true “this is very basic statistical knowledge,” it seems to be a case of willfully misunderstanding my point. First, my example specifically referred to asking after “views on various issues” and “how conservative/liberal” the respondents are. Wood suggested if “we instead gave that group of liberals a left-right political orientation scale…” The scale he suggests be given “instead” is practically identical to the scale I listed in my example. I find it incredibly difficult to see how he could misunderstand my point in this way.

      His other misunderstanding in this quote is “fair” in that I proposed the slavery question in my example as yes/no rather than as a scale. That was a mistake on my part. However, it was a mistake one could easily spot and account for. Rather than assume I lack “very basic statistical knowledge,” he could have realized I misspoke once. Of course, given he was able to simply ignore what was written earlier in the e-mail, this is not surprising. What is surprising, at least to me, is that he’d have the audacity to say:

      I understand that you think the sample we used in the 2012 paper had no variation in terms of belief in the Diana conspiracies

      I have never said anything to suggest I think this. In fact, I showed him contingency tables which directly demonstrate the variation he claims I don’t think exists. Misrepresenting what I’ve said, (seemingly intentionally) misunderstanding my point, and flat-out making things up about what I believe is bad enough. However, he preceded all of this by opening his e-mail with:

      Though I appreciate the effort to demonstrate your point, I’m not having trouble understanding what you’re saying. I just disagree with the premise. Moreover, the examples that you’ve provided here don’t make mathematical sense.

      I never expected anything to come from this exchange, but I expected better than this.

      • Posted Nov 15, 2013 at 4:34 PM | Permalink

        Oops. I didn’t realize signing into WordPress to use one’s blog would mean I’d have that account signed to my comments at other blogs. Oh well. Live and learn.

        For the record, I’m not planning on taking up blogging. I tossed that up because I got tired of having to do internet searches for things I’d written, and I figured I could use that as an easy resource.

        • johanna
          Posted Nov 15, 2013 at 10:45 PM | Permalink

          Good work, Brandon. He doesn’t have anything of substance to refute your central point.

          As I have said at Judy Curry’s, so much of this “research” is like a mountain goat leaping from crag to crag, without even a downward glance, let alone acknowledging what lies in between.

        • Posted Nov 16, 2013 at 12:56 PM | Permalink

          Thanks, johanna. By the way, I’m actually being generous to Michael Wood. It is not unheard of for yes/no questions to be asked with a scaled set of responses. You can have categories like “Definitely,” “I think so,” “I don’t know,” “I don’t think so” and “Definitely not.” I’ve taken a number of surveys like that. As such, my questions weren’t even wrong.

          But it’s like Steve McIntyre has observed elsewhere – People defending work will often take anything they can portray as error as proof your criticisms are wrong. It doesn’t matter if you make an error or not. It doesn’t matter if the (supposed) error affects your argument or not. Unless you write without including anything that could possibly be taken as mistaken, they can misunderstand and misrepresent your arguments.

          Of course, if you did miraculously write your case perfectly, they could just ignore you, or as seen above, simply ignore what you write and respond to fabrications.

          Steve: Yup, you’re 100% right on their argument technique.

    • MikeN
      Posted Nov 16, 2013 at 12:23 AM | Permalink

      Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.

  16. manicbeancounter
    Posted Nov 13, 2013 at 7:03 PM | Permalink

    Even more bizarre than absurd correlations is drawing inferences of cause and effect from correlations when there are a huge number of equally valid (or invalid) inferences that could be made.
    The title of the Hoax paper is “NASA faked the moon landing|Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science”. The first part implies that, having come to believe that the moon landing was faked, survey respondents reasoned that climate science was also a hoax. But, given that this survey was only on climate blogs, is it not more likely that the respondents’ rejection of the “official” or orthodox version of events goes the other way?
    Looking at the data, there is a similar issue of low numbers in support of the paired statements. Only 10/1145 supported CYMoon. Of these only 3 supported CYClimChange. Of these only 2 scored “4” for both. And these were the two faked/scam/rogue respondents 860 & 889 whose support of every conspiracy theory underpinned many of the correlations. The third, 963, also supported every conspiracy theory. Let us assume that they are genuine believers in all the conspiracy theories. Further, let us assume that one of the 13 conspiracies in the survey did trigger a response of the form “because I now know A was a conspiracy, I now believe B is a conspiracy”. There are n(n−1) = 156 possible ordered versions of this statement. Or, more likely, no such reasoning process went through any respondent’s mind at all. Given that the question was never asked, and there is no supporting evidence for the statement “NASA faked the moon landing|Therefore (Climate) Science is a Hoax”, it is most likely a figment of someone’s imagination.

  17. mertoniannorm
    Posted Nov 13, 2013 at 9:48 PM | Permalink

    I await with lively anticipation Mr. McIntyre’s take on the risible new RC posting: “Global Warming Since 1997 Underestimated by Half”, based in part on research conducted by a Canadian who is not an established climate scientist. This Canadian the choir chooses to believe, with the notable exception of Ray Ladbury. Nothing to do with the favourable outcome, of course, and thank the heavens there has been no hiatus after all!

    • Posted Nov 14, 2013 at 3:21 PM | Permalink

      Re: mertoniannorm (Nov 13 21:48), That study is clearly worth a Lew-en-dorsement!

      It has all the required features — including data derived from …. nothing… zero sum, ergo sum.

    • GrantB
      Posted Nov 15, 2013 at 7:14 AM | Permalink

      So the missing “heat” is not in the ocean depths after all. Thank goodness a more robust climate science analysis has put that furphy to the sword. Or has it?

  18. Geoff
    Posted Nov 14, 2013 at 12:30 AM | Permalink

    Sorry to be off topic but I just noticed James Annan is out of a job and headed back to the UK (see http://julesandjames.blogspot.sg/2013/11/so-long-and-thanks-for-all-sushi.html ). I’ll miss his photos of Japan.

  19. Brian H
    Posted Nov 14, 2013 at 4:53 AM | Permalink

    True; absurdity in a Lewandowsky offering is no anomaly.

  21. Duster
    Posted Nov 14, 2013 at 1:44 PM | Permalink

    Not to be fussy, but:

    “… CO2 had caused serious negative damage over the past 50 years….”

    begs the question of whether there is such a thing as “positive damage.” There is also the ever popular “negative impact,” another commonplace in enviro-speak. It leaves one wondering if a “negative impact” leaves a dent, while a “positive impact” might raise a lump.

  22. Posted Nov 14, 2013 at 3:03 PM | Permalink

    Hi Steve,

    did my comment about Lewandowsky’s ‘Misinformation and Its Correction’ paper drop into the spam?

    Thanks, Andy

  23. johanna
    Posted Nov 15, 2013 at 12:11 AM | Permalink

    And to think that Bristol University and the Royal Society thought this guy was such a star that they headhunted him and gave him a pay rise. Very depressing.

    Based on Wayne’s explanation of the SEM methodology, it reminds me a bit of the debate about the use of kriging to fill in data between temperature stations. Practising geologists who have used it in fields like mineral exploration say that there are caveats on using it; that other methods should be deployed as well wherever possible for verification; and that even then, you don’t really know the truth until you dig a hole. Hence, they have reservations about the way some climate scientists use the technique when applied to infilling temperature data.

    It sounds as though SEM, even if used correctly, is subject to similar caveats.

  24. geronimo
    Posted Nov 15, 2013 at 12:42 AM | Permalink

    “And to think that Bristol University and the Royal Society thought this guy was such a star that they headhunted him and gave him a pay rise. Very depressing.”

    Johanna, this is the problem with the whole debate: that a dwarf intellect can be feted as a great scientific mind is common in this “debate”. Take Michael Mann, whom anyone with any dignity in the same field would have publicly ditched years ago, or Peter Gleick, Failed Soothsayer in Chief, made a Fellow of the Royal Society. It stinks.

    • johanna
      Posted Nov 15, 2013 at 2:30 AM | Permalink

      geronimo, I suspect that you mean Paul Ehrlich, not Peter Gleick. But yes, the RS really hit rock bottom when they anointed him. He’s not just a charlatan, he’s an openly failed charlatan. Very sad for anyone who cares about science and integrity.

  25. Posted Nov 15, 2013 at 8:29 AM | Permalink

    Okay, I’ll try again with a repost. Hopefully this is on topic, as it addresses Lewandowsky’s core bias. The extract below highlights the fact that Lewandowsky has fallen victim to the very problem regarding objectivity (or lack of it) that he himself describes in one of his papers:

    “Lewandowsky himself has an interesting article plus paper in the Association for Psychological Science online titled: ‘Misinformation and Its Correction: Continued Influence and Successful Debiasing’. See:
    http://www.psychologicalscience.org/index.php/publications/journals/pspi/misinformation1.html . This work stresses the role of social media and social networking in transmitting misinformation, and that ‘Individuals pre-existing attitudes and worldviews can influence how they respond to certain types of information’, which includes how they then filter further information. So far so good. Yet you’d surely think this might alert him to memetic influences, and indeed the paper references urban myths, which are essentially trivial memes. But apparently not. I believe the problem here is the definition of ‘information’ and ‘misinformation’. If (as it appears from the work) the primary definition of *genuine* information is that it comes from ‘a credible source’, i.e. authority, then further analysis will be blind to the largest memeplexes, which typically *are* authority. While the paper addresses ‘multiple authoritative sources’, ‘correction of inadvertent misinformation from authority’, ‘coherence’, and various other angles, a major memeplex will, spiderlike, have very many sources at its service, so also providing a sense of coherence, and the information it disseminates is not wrong inadvertently, nor necessarily wrong at all, but is in service to replication and not to truth. A further problem here, as with psychology in general (or rather what little of it I have seen in relation to climate issues), is that while the effects of cultural immersion are certainly acknowledged, e.g. in the [Lewandowsky] quote above, it appears to be the case that CAGW is *not* viewed as a culture in and of itself, even though other narrative replicators (religions or dogmatic political systems) are, with various conservative memes particularly being quoted as an ‘impediment to action’. CAGW is mistakenly viewed as ‘pure science’, and ultimately it would appear that Lewandowsky has not looked inward and seen the effect his own paper describes.”

    This extract is from the essay here (warning, book sized!):

    Click to access cagw-memeplex-us-rev11.pdf

    But digestible summary here:

    The Catastrophic AGW Memeplex; a cultural creature


    And here:

    CAGW memeplex

  26. NotAGolfer
    Posted Nov 16, 2013 at 5:27 PM | Permalink

    This guy tries to treat a population of skeptics, each with different sets of knowledge and opinions, as one being, accusing it of being internally conflicted and incoherent. However, alarmists have different sets of knowledge and opinions, and, if viewed as one being, would be internally conflicted and incoherent, too.

  27. Stephen Richards
    Posted Nov 17, 2013 at 12:27 PM | Permalink

    “… CO2 had caused serious negative damage over the past 50 years….”

    negative = -ve
    damage = -ve

    Two minuses make a plus, so he correctly states that CO2 has had a +ve effect.

  28. Zazooba
    Posted Nov 19, 2013 at 12:55 AM | Permalink

    Steve: the problem with the Hoax dataset is that term “non-normality” doesn’t do full justice to things like CYMoon, CYAIDS,… where the vast majority of respondents strongly disagreed and there were only a few dissenters, some of whom were fraudulent.

    Careful. The models probably assume only normality of the errors, not of the data.

  29. Posted Jan 15, 2014 at 3:27 PM | Permalink

    For posterity’s sake, I’d like to post an update regarding the absurdity of Lewandowsky’s (and Wood’s) methodology. I recently did a survey (which amazingly got over 5,000 responses) so I could apply their methodology. Today I posted some results from it. The title of the post is, Warmists Are Never Wrong, Even When Supporting Genocide.

    That’s right. Using methodology popularly supported by global warming proponents, I have managed to “prove” global warming proponents support genocide and believe they are never wrong.

    Just think. This is the quality of work people get paid to do.

4 Trackbacks

  1. […] McIntyre has posted a number of instances where Stephan Lewandowsky has reported correlations for which there is little […]

  2. […] Brandon Shollenberger   I also observed elsewhere: […]

  3. By Lew’s Thinking | Geoffchambers's Blog on Nov 17, 2013 at 1:46 PM

    […] https://climateaudit.org/2013/11/13/another-absurd-lewandowsky-correlation/ […]

  4. […] have all the flaws listed against the Seralini paper. Sample size issues? There are clowns drawing conclusions from a sample size of zero! Data availability? There are people who refuse to release data 15 […]