The Third ‘Skeptic’

In his original editorial about climate “skeptics”, discussed here yesterday, Lewandowsky characterized climate skeptics as “obsessively yelping” and as holding the following belief:

The further fact that the satellite data yield precisely the same result without any surface-based thermometers is of no relevance to climate “skeptics.”

A few days ago, Lucia wondered about the identity of the five ‘skeptic’ blogs to which Lewandowsky had sent his survey.

As it turned out, Lewandowsky himself had sent no surveys to “skeptic” blogs, nor had any surveys referring to Lewandowsky been sent to them (though his association with the survey had been prominently featured at Deltoid and Hot Topic).

However, a Charles Hanich (who turns out to be an assistant to Lewandowsky) had sent me a link to the survey, which I disregarded. It was quickly determined that Junk Science had, in fact, posted a link to the survey (sent to them by Hanich), albeit with heavy caveats – contrary to Lewandowsky’s claim that no skeptic blog had posted the link. Since Lewandowsky’s name was connected to the survey in the announcements at Deltoid and Hot Topic, it seems evident that the survey was sent to anti-skeptic blogs under a different cover letter.

Yesterday, I was contacted by a third blogger who had also received (and responded to) Hanich’s letter. This blogger had not been considered a candidate recipient in Lucia’s thread. No one thought of him because he believes that increased CO2 causes temperature increases and that this is an important and relevant problem.

The third “skeptic” blog is …. Pielke Jr.

Pielke Jr obviously doesn’t have a particularly high regard for Peter Gleick, Gavin Schmidt and Michael Mann, but these are positions that one can reasonably hold without being a “skeptic” who obsessively yelps and who disregards satellite records.

Pielke’s correspondence with Hanich also sheds some interesting light on an important statement in Lewandowsky’s article on the handling of responses from the same IP address – an issue discussed by Lucia here, citing the following statement from the article:

Following standard recommendations (Gosling, Vazire, Srivastava, & John, 2004), duplicate responses from any IP number were eliminated (N = 71).

Lucia discussed this sentence in the context of someone using Hide-My-Ass or similar proxy disguises to game the survey through multiple responses. (The existence of attempts to game the survey is conceded even by Lewandowsky, given his detection of 71 such attempts.) However, it seems clear that many fake responses went undetected, resulting in Lewandowsky’s primary conclusions also being fake, as discussed in yesterday’s post here.

Most people reading the above sentence from Lewandowsky’s article probably took this to mean that multiple responses from the same IP address were eliminated. But watch the pea in light of Pielke’s correspondence with Hanich.

On Sep 6, 2010, Hanich wrote as follows (identical to the letter to me):

From: Charles Hanich
Date: Mon, 06 Sep 2010 15:43:52 +0800
To: Roger Pielke
Subject: Survey link post request
Dear Mr Pielke,

I am a research officer at the University of Western Australia, and I am seeking your assistance with a web-based survey of attitudes towards climate science (and other sciences) and skepticism. The survey has been approved by the University’s ethics committee and carries no risks for participants.

Completion should take less than 10 minutes and all data will be analyzed anonymously and without monitoring or identifying individual responses. We collect no personal identifying information, save for age and gender.

I would greatly appreciate it if you could perhaps post the link below, which goes directly to the survey, on your blog, so that your readers could participate if they chose to do so. We do not ask you to endorse the survey in any way, simply to make it available to your readers.
http://www.kwiksurveys.com/online-survey.php?surveyID=HKMKNH_7ea6091

Thank you very much for your assistance and do not hesitate to contact me for further information.

Kind regards,
Charles Hanich.

Pielke wrote back that day as follows:

Can you tell me a bit more about the study and the research design?

Hanich promptly replied:

Dear Mr Pielke,

the rationale behind the survey is to draw linkages between attitudes to climate science and other scientific propositions (eg HIV/AIDS) and to look at what scepticism might mean (in terms of endorsing a variety of propositions made in the media). In addition, we consider people’s life satisfaction and their attitudes towards market economies, both of which are known to be important determinants of how people respond to messages relating to conservation and so on.

The study consists of about 40 questions / statements, most of which are provided with one-of-four selections of the type: strongly disagree – disagree – agree – strongly agree.

For details of the questions, you are most welcome to check out the link. The answers are not recorded in the database until the final “Save-and Exit” button is clicked.

I thank you for your prompt response and should you require further clarification, please don’t hesitate to contact me again.

Kind Regards,
Charles Hanich

Pielke wrote back:

Dear Charles-
Thanks. I am unclear about how posting on a blog helps your purposes, as you will get anonymous, perhaps repeated replies. I have seen various efforts to query opinions via online surveys fail to be methodologically rigorous, so that is the basis for my query.
Thanks,
Roger

A week later, Sept 13, Hanich replied:

Subject: Re: Survey link post
Dear Roger,
I am sorry for not replying earlier. You have raised a very valid point. We are aware of methodological issues, one of which is dealing with repeated replies.

When we published the surveys, we had two options:

a) Use the provision offered by the hosting company to block repeated replies using IP addresses. This, however, will block legitimate use of the same computer, such as in our laboratory, where numerous participants use the same PCs.

b) Not to block multiple replies and allow for the possibility of repeated replies when evaluating the data.

We chose option b), which was more practical in our situation.

I took the liberty of attaching a paper by Whitehead (2007) [SM – see here], addressing some of these issues.

Kind Regards,
Charles

The Whitehead paper is not particularly helpful, as it deals with online medical questionnaires (as opposed to purporting to survey skeptics at anti-skeptic blogs). Hanich’s justification for turning off the duplicate-IP function at kwiksurveys is, to say the least, strained. My impression is that most skeptics operate from their own computers; missing a few skeptics who share a computer is a pretty small price. And why would he be trying to accommodate respondents from their own laboratory? What business do they have filling out the survey in the first place? I wonder how many responses came from his own university? And how many of the fake responses?

So Lewandowsky went out of his way to accommodate multiple respondents from the same IP address by turning off this option at kwiksurveys. The sentence from the article needs to be re-read very carefully now that we know this – it needs to be parsed word-for-word as though it were written by Gavin Schmidt. Once again, the article stated:

Following standard recommendations (Gosling, Vazire, Srivastava, & John, 2004), duplicate responses from any IP number were eliminated (N = 71).

Gosling et al 2004 do not set out a “standard recommendation” for dealing with multiple responses from the same IP address. (Watch the pea here.) The problem is multiple responses from the same IP address, and that was my first reading of the statement. But Lewandowsky’s statement actually refers only to duplicate responses from the same IP address – not quite the same thing, at least according to Gosling et al 2004, who state:

A major motivation for participants to respond multiple times is to see a range of possible feedback (e.g., how their personality scores would look if they answered the questions differently). Therefore, our first strategy was to give participants a direct link to all of the possible feedback options to allow them to satisfy their curiosity. Our second strategy was to identify repeat responders using the Internet protocol (IP) addresses that the Web server logs with each completed questionnaire. A single IP address can be associated with multiple responses submitted during a single session, such as by individuals taking the test again but changing their answers to see how the feedback changes.

Thus, we eliminated repeated responses from the same individual at a single IP address. To avoid eliminating responses from different individuals using the same computer (e.g., roommates or people using public computer labs), we matched consecutive responses from the same IP address on several key demographic characteristics (e.g., gender, age, ethnicity) and when such a match was detected, we retained only the first response. Johnson (2001) offers another solution for detecting repeat responders: He suggests comparing the entire set of item responses in consecutive entries to identify duplicate or near-duplicate entries.

Some repeat responses could remain in the sample even after taking these steps to eliminate them. For example, participants might revisit the questionnaire to see whether their scores change over time. Thus, our third strategy was to add a question asking participants whether they had completed the questionnaire before. Only 3.4% responded that they had completed the questionnaire before. Most important, analyses showed that repeat responding (as identified by the question) did not change the findings in Srivastava et al.’s (2003) study on personality development. When repeat responding is of great concern, researchers can always take additional precautions such as requiring participants to provide a valid email address where they receive authorization to complete the questionnaire (Johnson, 2001).

As in Hanich’s letter to Pielke, Gosling et al 2004 consider survey methods which do not automatically reject multiple responses from the same IP address (for a similar reason: people in the same lab). They cited a suggestion to check for duplicate (or near-duplicate) responses to detect repeat respondents.
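
To make the distinction concrete, here is a minimal sketch in Python of the Gosling-style screen, followed by a simplified Johnson-style duplicate check. The field names are hypothetical and this is only an illustration – Gosling et al describe their procedure in prose, and Lewandowsky’s actual algorithm has not been disclosed.

    from itertools import groupby

    def gosling_dedup(responses):
        """Sketch of the Gosling et al (2004) screen. Each response is a
        dict with hypothetical keys: ip, time, gender, age, answers."""
        # Sort so that responses from the same IP are adjacent, in time order.
        ordered = sorted(responses, key=lambda r: (r["ip"], r["time"]))
        kept = []
        for _, same_ip in groupby(ordered, key=lambda r: r["ip"]):
            prev = None
            for r in same_ip:
                # Keep only the first of consecutive same-IP responses that
                # match on key demographics (probably the same person).
                if prev is None or (r["gender"], r["age"]) != (prev["gender"], prev["age"]):
                    kept.append(r)
                prev = r
        # Johnson (2001), simplified: drop item-for-item identical answer sets.
        seen, unique = set(), []
        for r in kept:
            if tuple(r["answers"]) not in seen:
                seen.add(tuple(r["answers"]))
                unique.append(r)
        return unique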

My interpretation (and it is only an interpretation, since the description is not conclusive) is that Lewandowsky accepted multiple responses from the same IP address as long as there was a slight variation in any answer. For example, the answers from the two scam respondents who agreed with every conspiracy were nearly identical, but varied on a couple of questions. As I interpret the methodology, because the two answer sets were not item-for-item identical, they would be accepted even if they came from the same IP address. No need for complicated hiding behind proxy servers as long as one or two answers were varied.

I reiterate that this is an interpretation of the methodological description and it is possible that the algorithm operated differently. Lewandowsky could easily clarify this issue without providing the actual IP addresses. It is trivial to assign a unique ID number to each unique IP address so that this phenomenon could be analysed.
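
For instance, something along the following lines (again an illustrative sketch, not a claim about how the dataset was actually handled) would hash each raw IP to a stable anonymous ID and flag same-ID answer sets that differ on only a couple of items:

    import hashlib

    def add_ip_ids(responses, salt="replace-with-a-secret-salt"):
        """Replace each raw IP with a stable anonymous ID so that same-IP
        clustering can be analysed without disclosing any addresses."""
        for r in responses:
            r["ip_id"] = hashlib.sha256((salt + r.pop("ip")).encode()).hexdigest()[:8]
        return responses

    def near_duplicate(a, b, tolerance=2):
        """True if two answer vectors differ on at most `tolerance` items,
        e.g. a repeat submission with one or two answers deliberately varied."""
        return sum(x != y for x, y in zip(a, b)) <= tolerance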

UPDATE: Marc Morano received an email from Hanich on Sep 23 and did not respond to Hanich.

81 Comments

  1. Posted Sep 10, 2012 at 9:46 AM | Permalink

    It would be really easy for Professor Lewandowsky to post the frequencies of responses and a detailed description of the methodology used in the study. The fact that he hasn’t speaks volumes. He’s playing a game. I think the intent of the game is to influence local politics. I think the ramifications of the survey will subside quickly as more flaws in it are revealed.

    • kim
      Posted Sep 10, 2012 at 9:57 AM | Permalink

      Olly olly ox an’ Dutch Uncle in free.
      ==========

  2. Posted Sep 10, 2012 at 10:00 AM | Permalink

    Pielke Jr obviously doesn’t have a particularly high regard for Peter Gleick, Gavin Schmidt and Michael Mann, but these are positions that one can reasonably hold without being a “skeptic” who obsessively yelps and who disregards satellite records.

    Steve, if it wasn’t for you I wouldn’t have known that the Lewandowsky Scam included confusing mainstream skeptics with the much smaller number who refuse to believe that the satellite records show any warming or that CO2 is in any way responsible. Yet for the organisers of this survey Pielke Jr is also a skeptic and his blog is a skeptic blog. Thank you for unveiling this. The inconsistent disregard for even the first level of useful distinction has one positive side-effect – a kind of unity between those of us who accept that temperature has been rising but have become the targets of the same smear machine.

    But the blatant disregard for truth throughout has, I’m afraid, to be wilful. And in saying this we are painted as conspiracists, for instance in the blurb on Slashdot quoted by A. Scott on Bishop Hill:

    Curiously, public response to the paper has provided a perfect real-life illustration of the very cognitive processes at the center of the research.

    We are being gamed, not just the survey.

  3. Posted Sep 10, 2012 at 10:01 AM | Permalink

    Written before reading Thomas Fuller, by the way.

  4. Posted Sep 10, 2012 at 10:27 AM | Permalink

    Roy Spencer has discovered he was one of the recipients of Hanich’s emails, doesn’t remember it, and assumes he ignored it.
    One more to go. I think Jo Nova is going to do an update.

  5. KnR
    Posted Sep 10, 2012 at 10:37 AM | Permalink

    At its best this approach is an awful way to collect data for scientific purposes; it is used not because it’s good but because it’s cheap, quick and can spread a lot of dung fast and wide. And this effort is very far from being at its best: the fact that different surveys were used for different web sites should on its own kill this off. Once again you see the ‘professionals’ of climate science, in their published work, failing to match the academic standards expected of undergraduates completing an essay.

    • RomanM
      Posted Sep 10, 2012 at 11:31 AM | Permalink

      Lewandowsky’s arrogant reply to the criticism of using different surveys at different blogs:

      So here it is, the secret code. Read it backwards: gnicnalabretnuoc.

      shows a distinct lack of understanding of the principle of counterbalancing.

      Counterbalancing is a method of avoiding confounding among variables. What he does not seem to realize is that, in this case, he induces such confounding of questionnaire with the web site from which the subject self-selects. Given the range of stridency in the sampled web sites it is not unreasonable that this would be reflected in the specific sample from each site. So rather than supposedly accommodating a possible problem, he in fact creates a new analytic situation which becomes virtually uncorrectable.

      It is somewhat surprising that he seems to have chosen not to provide any statistics on the size of the samples from the various blogs, nor has he apparently done even the most elementary checks to see how their results might differ.

      • Steve McIntyre
        Posted Sep 10, 2012 at 1:04 PM | Permalink

        Roman, his counterbalancing argument would be more convincing if there had been order randomization within the anti-skeptic blogs. Thus far, we have two versions sent to the skeptic blogs and two versions to the anti-skeptic blogs.

      • Steve McIntyre
        Posted Sep 10, 2012 at 1:52 PM | Permalink

        Re: RomanM (Sep 10 11:31),

        Roman, I’ve now collected the links for all known contacts.

        Here is Lewandowsky’s “counterbalancing”. All anti-skeptic blogs were sent one of two versions; all skeptic blogs were sent one of two different versions. Not a single example of the same survey being sent to both camps.

        If counterbalancing is important (and for this survey, there are so many defects, that it’s low on the list), then Lewandowsky should not have done what he did.

  6. Steve McIntyre
    Posted Sep 10, 2012 at 10:42 AM | Permalink

    Marc Morano is the fifth blog. Marc Morano received an email from Hanich on Sep 23 and did not respond to Hanich.

  7. Glacierman
    Posted Sep 10, 2012 at 11:01 AM | Permalink

    Weren’t they going public with results by September 23? If so, that seems more than a coincidence.

  8. Steven Mosher
    Posted Sep 10, 2012 at 11:04 AM | Permalink

    Did they confuse Jr with Sr?

    • Posted Sep 10, 2012 at 11:06 AM | Permalink

      That’s a minor distinction to miss, compared to the others 🙂

  9. Paul
    Posted Sep 10, 2012 at 11:23 AM | Permalink

    This outrage over the research methods seems somewhat contrived. Lewandowsky posted the survey on science blogs; whether or not the blogs are identified as skeptical seems irrelevant. The implication seems to be that “real” skeptics can only be found on skeptical blogs and that Lewandowsky knowingly avoided those sites for fear of getting answers from these “legitimate” skeptics; nonsense. The truth, I’m afraid, is much simpler. The majority of global warming deniers and “skeptics” are right-wing anti-government conspiracy nuts with very little interest in science or facts. This is pretty evident from the anecdotal evidence alone. Spend some time on any website or blog discussing the “science” of global warming and you’ll quickly conclude the majority in the denial camp are morons. That said, I would agree that there are legitimate skeptics like Dr. Muller who in the past have expressed valid concerns regarding data collection and interpretation, but I would be confident in asserting that they are in the very small minority.

    • Carrick
      Posted Sep 10, 2012 at 2:28 PM | Permalink

      Shorter Paul: “This outrage over using legitimate methods of scientific data collection is somewhat contrived. Just make up data.”

      • Paul
        Posted Sep 10, 2012 at 2:57 PM | Permalink

        No, he didn’t make up the data, and it’s legitimate to criticize his methods of collection, but to suggest a larger conspiracy seems, well, paranoid, and frankly only lends support to his original point that the majority of people in denial of climate science tend to be somewhat predisposed to conspiracy ideas.

    • Skiphil
      Posted Sep 10, 2012 at 2:35 PM | Permalink

      shortest Paul: “science, schmience, we know the Truth!”

    • DaveS
      Posted Sep 10, 2012 at 2:49 PM | Permalink

      Having worked in technical R&D for 25 years, and thus developed a pretty good nose for real “science”, I am confident in asserting that you haven’t got a clue what you’re talking about.

      • Paul
        Posted Sep 10, 2012 at 3:12 PM | Permalink

        Having spent almost 17 years reading the research and scientific evidence behind global warming I can assure you I know exactly what I’m talking about. For years I’ve watched while “self-identified skeptics” have provided very little in the way of evidence or peer-reviewed literature to support their positions on climate science. In fact, all they ever seem capable of doing is shouting from the sidelines and hurling baseless accusations. Dr. Muller is only a recent example of a high-profile “skeptic” who criticized from the sidelines only to spend 2 years of research arriving at basically the same conclusion everyone else in the scientific community had already arrived at years earlier. While I agree that skepticism is healthy and criticism is good when merited, that is not what the community of people in denial of the scientific facts around AGW are engaged in. Instead they use ad hominem attacks and innuendo to attack a scientist with very little evidence or knowledge of the facts.

        • RomanM
          Posted Sep 10, 2012 at 3:58 PM | Permalink

          Having spent almost 17 years reading the research and scientific evidence behind global warming I can assure you I know exactly what I’m talking about.
          ….
          Instead they use ad hominem attacks and innuendo to attack a scientist with very little evidence or knowledge of the facts.

          By all means, explain the facts to us. Have you read the paper yourself? Was the sampling done in a scientifically justifiable manner? Did the samples represent a proper cross-section of the skeptic population? Did you understand the factor analyses? How different were the extracted latent factor scores from the simple averages of the naively 1-to-4-scaled answers? Did you understand the structural equation model results? Did they quantify the character of the purported relationships between the various factors beyond the calculation of some simple correlations? And how exactly were the “consensus responses binned into 9 categories with approximately equal numbers”? Perhaps you could enlighten us.

          The end result is that the entire exercise is drivel produced for the purpose of attacking those of us who would like to see honest scientific effort instead of cobbled-up papers with predetermined conclusions.

          How about addressing the issues at hand here?

      • Paul
        Posted Sep 10, 2012 at 3:17 PM | Permalink

        For the record, here is Lewandowsky’s paper in case you didn’t actually take the time to read it yourself before chiming in.

        Click to access LskyetalPsychScienceinPressClimateConspiracy.pdf

    • hunter
      Posted Sep 10, 2012 at 4:20 PM | Permalink

      Paul,
      It is a screwed-up study that should have been peer-rejected, not peer-approved.
      And it sounds like adults at this university wish to discuss his childish and immature anti-scientific behavior further.

  10. DGH
    Posted Sep 10, 2012 at 11:34 AM | Permalink

    “This, however, will block legitimate use of the same computer, such as in our laboratory, where numerous participants use the same PCs.”

    OK this study was botched. But surely Hanich intended the “such as” to refer to the first part of the sentence and not the last.

    • Don Wagner
      Posted Sep 10, 2012 at 12:43 PM | Permalink

      Nice to hear from you, Mrs Lewandowsky. How is your son? Stephan, isn’t it? I heard he was a researcher in Australia or New Zealand or somewhere

      • DGH
        Posted Sep 10, 2012 at 1:42 PM | Permalink

        If you mean to suggest that I defended Dr. L or Hanich with my post I might suggest you take a second look. My first sentence was, “…this study was botched.”

        There are plenty of undeniable flaws in this study. But it’s difficult to believe that the authors were so incompetent as to allow themselves, the surveyors, or their associates to complete the survey. Perhaps you believe that they intended to manipulate the results? Then why would Hanich admit such a thing to a supposed skeptic blogger?

        There’s room for benefit of the doubt on this one, IMO.

        • Don Wagner
          Posted Sep 10, 2012 at 1:44 PM | Permalink

          Sorry, my comment was addressed to Paul

        • Don Wagner
          Posted Sep 10, 2012 at 1:47 PM | Permalink

          My apologies, the comment was meant for Paul

        • DGH
          Posted Sep 10, 2012 at 3:11 PM | Permalink

          No worries Don.

          As for Paul, as I read his post I began to understand the point about morons on these scientific blogs.

          But seriously, it would be difficult to surmise the political leanings of the majority of participants by reading this blog. Indeed the blog rules and road map state clearly,

          “Politics
          While there’s a little politics from time to time, by and large, I would prefer that you don’t talk politics; there are plenty of other perfectly good places to do that.”

        • Paul
          Posted Sep 10, 2012 at 3:40 PM | Permalink

          DGH. Please don’t misconstrue my point about morons as anything but a point about morons.
          snip

          Steve: enough of this,

  11. HaroldW
    Posted Sep 10, 2012 at 11:44 AM | Permalink

    So the final (?) tally of 5 “skeptic” (or “skeptic”-leaning) [Lewandowsky’s characterization] sites is
    Climate Depot (Morano) – no response
    Climate Audit (McIntyre) – no response
    Pielke Jr – correspondence, declined to post
    Junk Science (Milloy) – posted poll
    Spencer – no response

    Is that correct? If so, then Lewandowsky can stop inquiring about whether he can release the names of the sites to which he (or rather his assistant) sent the link.

    Although why the presumed confidentiality of email *replies* is relevant to whether he could release this information has always been beyond me.

    • DR_UK
      Posted Sep 10, 2012 at 12:14 PM | Permalink

      Was WUWT deemed not skeptical enough? 🙂

  12. Posted Sep 10, 2012 at 11:49 AM | Permalink

    Lewandowsky publishes on weighty matters in statistics e.g. ‘especially in connection with the long-running dispute concerning the relative merits of pie and bar charts.’

    see: http://smr.sagepub.com/content/18/2-3/200.short

    Clearly much grant money has been wasted in the pursuit of very little value.

    • Steve McIntyre
      Posted Sep 10, 2012 at 12:14 PM | Permalink

      That magnum opus was published when Lewandowsky was at the University of Oklahoma – where David Karoly, another Australian recently discussed at CA, also formerly worked.

      • Betapug
        Posted Sep 10, 2012 at 1:41 PM | Permalink

        And while at U Oklahoma he received an AMOCO (as in American Oil COmpany) award for “Superior Teaching” in 1993!

        Click to access SLvita.pdf

        Good investment.

  13. Posted Sep 10, 2012 at 12:22 PM | Permalink

    Funnily enough, Professor Lewandowsky removed my last comment from his website…

  14. Bob Koss
    Posted Sep 10, 2012 at 12:56 PM | Permalink

    Currently known distribution of surveys.

    Warmist:
    profmandia HKMKNF_991e2415
    deltoid HKMKNF_991e2415
    hot-topic HKMKNF_991e2415
    tamino HKMKNF_991e2415
    illconsidered HKMKNG_ee191483
    bbickmore HKMKNG_ee191483
    skepticalscience ???
    trunity ???

    Skeptic:
    junkscience HKMKNI_9a13984
    climateaudit HKMKNI_9a13984
    pielke jr ??? [SM – HKMKNH_7ea60912]
    climatedepot ??? [SM – HKMKNI_9a13984]
    spencer ??? [SM – HKMKNH_7ea60912]

    It would be interesting to know if any survey sent to skeptic blogs matches the surveys sent to warmist blogs. Currently the distribution seems somewhat skewed although it isn’t definitive.

    • Bob Koss
      Posted Sep 10, 2012 at 3:08 PM | Permalink

      Thanks for filling in the remaining three skeptic surveys, Steve.

      From the updated list of known different surveys, and seeing none in common between skeptic and warmist, I’m willing to go out on a very sturdy limb and say the different surveys were not distributed to sites by random selection, but instead targeted according to suspected beliefs of the majority of commenters at the sites. IMO astoundingly poor methodology.

      • Ian H
        Posted Sep 10, 2012 at 6:50 PM | Permalink

        The warmist blogs were contacted first by Lewandowsky himself who sent two versions. The skeptic blogs were contacted much later by Lewandowsky’s research assistant Hanich sending two different versions.

        I conjecture that there was a realisation late in the piece that a survey on the views of skeptics could have no credibility if no attempt were made to contact skeptic blogs, and so a belated attempt was made to add some.

        There is no need in this case to imagine that the version disparity was the result of malicious design when we have a much simpler explanation: namely, that this was a simple botch-up, with Hanich sending different versions in error.

        The botch-up explanation is consistent with the overall quality of the research.

  15. David Brewer
    Posted Sep 10, 2012 at 1:19 PM | Permalink

    Hanich as quoted in his e-mail to sceptic blogs:

    “The survey has been approved by the University’s ethics committee and carries no risks for participants.”

    Lewandowsky was already on record as associating sceptics – whom he proposed to survey – with belief in conspiracy theories. Did he disclose this to the ethics committee? Would they have approved the survey if he had?

    Since the survey has come to light, Lewandowsky has been smearing sceptics right, left and centre. How does that constitute “no risks for participants”?

  16. DR_UK
    Posted Sep 10, 2012 at 1:19 PM | Permalink

    Steve, is your post title a reference to the surreal novel ‘The Third Policeman’ by Flann O’Brien? I hope so!

    • Posted Sep 10, 2012 at 1:35 PM | Permalink

      I assumed the allusion was Graham Greene’s 1949 film noir The Third Man. But that’s the thing I love about skeptic blogs – the conspiracy theories we produce at every turn. (Never mind that Steve has never called himself a skeptic and that neither suggestion is a conspiracy theory. We’re in Lewandowsky country now – a place Salvador Dali and David Lynch would find surreal.)

      • michael hart
        Posted Sep 11, 2012 at 1:50 PM | Permalink

        Or Douglas Adams. “Don’t you try to out-weird me. I get stranger things than you free with my breakfast cereal”-Zaphod Beeblebrox

    • Steve McIntyre
      Posted Sep 10, 2012 at 1:44 PM | Permalink

      Re: DR_UK (Sep 10 13:19),

      I wish I could say so. An Irish friend of mine was a big fan of Flann O’Brien. A le Carré allusion would be more my style. I like the way that Tinker, Tailor and Hon Schoolboy start with small characters and slight disturbances in the cosmos.

    • Hoi Polloi
      Posted Sep 10, 2012 at 2:15 PM | Permalink

      How about Graham Greene’s “The Third Man”?

  17. pax
    Posted Sep 10, 2012 at 1:20 PM | Permalink

    “And why would he be trying to accommodate respondents from their own laboratory? What business do they have filling out the survey in the first place? I wonder how many responses came from his own university? And how many of the fake responses?”

    Surely you understand that this was provided as an example and not as an indication that they were submitting responses from their lab to their own survey.

    Steve: maybe, maybe not. They’ve done every other goofy permutation.

    • Posted Sep 10, 2012 at 2:39 PM | Permalink

      The phrasing Hanich used was:

      This, however, will block legitimate use of the same computer, such as in our laboratory, where numerous participants use the same PCs.

      So there are three problems with your “example” explanation:

      1. The rather ludicrous presumption inherent in the thought that an “example” was required.

      2. His use of the word “participants” is at the very least odd, particularly in the context of a purported explanation of their survey methodology – if it was not, in fact, intended to indicate that people in their lab would be (or had been) “participating” in the survey.

      3. It provides you with an opportunity to focus on that which may (or may not) be the least problematic aspect of this particular part of the methodology – while ignoring the primary concerns identified by Steve, including:

      a) The fact that “Lewandowsky went out of his way to accommodate multiple respondents from the same IP address” when it was clearly the least methodologically sound option available to him.

      b) Lewandowsky’s apparent misreading of Gosling et al, 2004.

      How sad, though, that in this day and age the noble toilers in Hanich’s lab are obliged to share the use of the same PC. Perhaps the grants received in support of this “research” could have been put to better use through the acquisition of PCs for all, rather than in such a shoddy and blatant exercise in “scientivism”.

      • pax
        Posted Sep 10, 2012 at 4:28 PM | Permalink

        Given what we have learned over the years from Steve’s blogs and elsewhere, perhaps I should not be surprised if it turns out that in fact people from their own lab participated in their own survey. Maybe I was too naive in assuming this to be an impossibility, scientific method and all that.

      • Dave
        Posted Sep 10, 2012 at 9:27 PM | Permalink

        hro>

        You’re presuming that Lew is capable of writing intelligibly. I don’t see any evidence for that.

    • Bebben
      Posted Sep 10, 2012 at 2:51 PM | Permalink

      Lewandowsky will surely consider this a Conspiracy Theory.

    • Robin Melville
      Posted Sep 11, 2012 at 3:33 AM | Permalink

      Whilst finding the Lewandowsky paper a grotesque abuse of scientific process I’d have to disagree with the suspicions cast here about the single IP address issue. Many organisations — universities, Internet cafés, and, for example, the entire UK National Health Service — pass traffic to the Internet through a gateway which conceals the internal network addresses using the Network Address Translation protocol (http://tools.ietf.org/html/draft-cheshire-nat-pmp-05).

      Thus, an external website (for example the survey site) would perceive a connection from any computer within these organisations as having the same IP address — that of the gateway. Consequently, using single IP address filtering is a poor way of detecting multiple form entries since it excludes different people in the same organisation from participating.

      Actually, almost any method (e.g. using cookies or requiring a valid email address) can be gamed which makes the kind of internal consistency checking which Steve has highlighted an obligatory part of processing such surveys. However, if the headline has been written in advance of the actual survey — which it clearly was in this case — who cares if the methodology has any rigour.

  18. William Larson
    Posted Sep 10, 2012 at 1:25 PM | Permalink

    OK, here’s one for you statisticians: I am currently reading “Indiscrete Thoughts” by Gian-Carlo Rota. Early on he writes about William Feller (“His name was neither William nor Feller.”), who proposed that “in multiple choice exams, students should be asked to mark one wrong answer, rather than to guess the right one.” So, what if, in this “survey”, Lewandowsky had asked respondents to do the analogous thing with their answers. Would such a protocol prevent scamming? Would it make scamming easier to detect? INQUIRING MINDS WANT TO KNOW.

  19. AnonyMoose
    Posted Sep 10, 2012 at 1:28 PM | Permalink

    Notice “consecutive” in the boldfaced sentence above: “Johnson (2001) offers another solution for detecting repeat responders: He suggests comparing the entire set of item responses in consecutive entries to identify duplicate or near-duplicate entries.”

    Johnson’s intent is apparently to deal with someone who is clicking “Submit” several times on the same survey. This is not the same as Lewandowsky’s “duplicate responses from any IP number” description. Because Lewandowsky does not mention a consecutive-entry window, his description implies that most people at the same institution/ISP (same IP address) who have identical opinions would be ignored by his survey.

    Either Lewandowsky did not follow Johnson’s solution, or Lewandowsky did not clearly describe his procedure.

  20. MikeN
    Posted Sep 10, 2012 at 1:30 PM | Permalink

    You are reading too much into “the PCs in our lab use the same IP address”. He is not trying to make things easier for people in his lab to fill out the survey, just giving an example of different people having the same IP address.

    • LearDog
      Posted Sep 10, 2012 at 4:09 PM | Permalink

      Like a husband and wife perhaps…

  21. Posted Sep 10, 2012 at 1:32 PM | Permalink

    IP blocking isn’t necessarily a great way of screening surveys. Lots of institutions reallocate IP addresses or pass data through a proxy server and so seem to have the same IP address – students in a university library for instance. For large ISPs (eg AOL or mobile providers), IP addresses can be re-used and reallocated. Even in a regular household running off a shared wi-fi connection, a family interested in a topic may have several individuals complete the same survey, but each give different answers.

    For ‘open’ surveys (where there is no element of pre-selection such as an invite and ID to control who completes the survey) there will always be problems with people trying to game the survey so results always need some provisos about quality.

    So a more sophisticated survey researcher will first look at other indications such as the time taken to complete the survey, key-pressing patterns, inconsistencies, and answers to open-ended questions, and will build in check elements to catch people answering in unlikely manners. If these raise flags, an IP check and a check on time of completion will be done as a further test. If an IP address does surveys in rapid succession this might be another indicator.

    But far bigger than the IP issue is that the sample is a convenience sample and so of unknown representativeness of the population in question. Convenience samples may be enough for a ‘quick-and-dirty’ commercial project where cost and speed balance the importance of accuracy, but I’d be surprised if any full academic journals would consider any surveys that don’t have some element of EPSEM (equal probability of selection), and some element of pre-testing to validate the questionnaire before it goes live. At the very least we would need a full description of how the convenience sample was achieved – where and how – and how the sample was tested for reliability (eg hold-outs, cross-checks against existing data, back-checking by recontacting). Daily newspapers may publish cheap surveys done for PR purposes, but academic journals really shouldn’t.
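
    To make two of these checks concrete, here is a minimal sketch in Python (field names hypothetical, and only an illustration of the kind of screening described above):

        from datetime import timedelta

        def flag_suspicious(responses, min_secs=120, burst=timedelta(minutes=5)):
            """Flag responses completed implausibly fast, or submitted from
            the same IP address in rapid succession."""
            flagged, last_seen = [], {}
            for r in sorted(responses, key=lambda r: r["submitted"]):
                too_fast = (r["submitted"] - r["started"]).total_seconds() < min_secs
                prev = last_seen.get(r["ip"])
                rapid = prev is not None and r["submitted"] - prev < burst
                last_seen[r["ip"]] = r["submitted"]
                if too_fast or rapid:
                    flagged.append(r)
            return flagged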

    • Craig Loehle
      Posted Sep 10, 2012 at 2:23 PM | Permalink

      Far worse than being a “convenience sample”, which might be ok for some topics, it is a sample of highly motivated people on a highly contentious topic. It is not a sample of gmail users or Amazon customers, but of those who frequent blogs where open combat and ad homs are the norm (speaking of the blogs where the survey was in fact filled out). And this sample is far more likely than readers of People magazine to try to spam the survey to help their cause. The idea that trolls could be ignored is stunningly naive.

  22. MrPete
    Posted Sep 10, 2012 at 2:06 PM | Permalink

    Seems to me the non-response of skeptic blogs proves a point that Lewandowsky is loath to support: skeptics are… skeptical. We don’t have time for conspiracy theories, junk email and the like.

    When you get us curious enough to dig in, we actually dig in. We don’t accept the status quo at face value. We look beneath the surface hype.

    But to get us started requires a lot more than a junk email in the in-box, particularly for an obviously junk-science survey.

    Compare this methodology with a professional survey organization: they carefully preselect randomized participants, and intentionally contact each one to obtain the best possible information.

    Such silliness. And to further prove the point, the MSM has been quick to accept this junk-science result – thus proving the need for skepticism.

    • Posted Sep 10, 2012 at 2:13 PM | Permalink

      And to further prove the point, the MSM is quick to accept this junk-science result.

      Wasn’t it just in London that there were major articles discussing the result though – in the Guardian and then the Telegraph? One reason I wasn’t convinced by Tom Fuller’s

      I think the intent of the game is to influence local politics.

      Not the greatest feather in the cap for my home city I have to admit.

  23. Posted Sep 10, 2012 at 2:18 PM | Permalink

    Steve … as far as I know we don’t have copies of the different surveys; however, Lewandowsky (or at least the “Moderator” at his blog) said:

    In response to:

    I think we know that Steve McIntyre and the JunkScience site were offered one flavour of survey so I would assume that the other skeptic sites were at least offered the others to mix this up. Is this what happened? This seems a hard to manage randomising technique that can’t be controlled for. Were attempts made to control for balancing the counterbalancing over the cohorts* of “pro-science” and “skeptic” targets?

    “Moderator” said:

    Moderator Response: You are basically spot on. We used different question orders in different versions of the survey, distributing the different links across blogs in a quasi-random manner. This practice is so standard that we did not explicitly mention it in a Method section for a paper that had to fit within 4000 words.

    Thomas Fuller replied:

    thomaswfuller at 01:45 AM on 8 September, 2012
    Actually, Professor Lewandowsky, while randomizing responses within a question is standard practice and could go unmentioned, using different iterations of a survey with differently ordered questions is unusual enough to be mentioned in the methodology description. You should have done so.

    In the age of the internet, researchers have acquired the habit of posting study elements online to provide answers to just the type of questions you are receiving. I think you are remiss by not posting the different iterations of the survey online, labeled by which blog fed them.

  24. Steve McIntyre
    Posted Sep 10, 2012 at 2:18 PM | Permalink

    Lewandowsky just wrote Roy Spencer as follows:

    Date: Tue, 11 Sep 2012 03:01:19 +0800
    Subject: survey contact
    Dear Dr Spencer:
    Please find enclosed correspondence from my research assistant dating back to 2010. He contacted you at the time to ask whether you would post a link to one of my research projects on your blog.
    There appears to be considerable public interest in the identity of the bloggers whom I contacted for my project in 2010, and I am therefore pleased that my university has today affirmed that there are no ethical issues involved in releasing their identity.
    I will post the relevant information on my blog shortly.
    Kind regards,

    Lewandowsky is up late.

    I would have thought that the University should have been addressing what to do about Lewandowsky’s fake data.

    • geronimo
      Posted Sep 11, 2012 at 4:11 AM | Permalink

      Pea-and-thimble time: note he doesn’t give the actual date the surveys were sent, just the year. Were the sceptic blogs sent the surveys later than the alarmist blogs to provide cover for the fact that he only ever intended to use alarmist blogs?

  25. son of mulder
    Posted Sep 10, 2012 at 2:22 PM | Permalink

    “The further fact that the satellite data yield precisely the same result without any surface-based thermometers is of no relevance to climate “skeptics.””

    All the satellite record shows is a small amount of warming over 30-odd years and hints at some sort of cycle. It says nothing about cause. It says nothing about future catastrophe. It says nothing about the surface record.

    The hill up to my local town is as steep as the pitch of my roof but I’m sceptical that they are connected by anything other than coincidence.

  26. Posted Sep 10, 2012 at 2:58 PM | Permalink

    The current KwikSurveys is NOT the same as it was when the original study was done. The original site was hacked in June of 2012 and shut down. It was then purchased and re-opened with totally different software. I actually set up the new re-created survey in KwikSurveys first – to try to be as true to the original process as possible. However, I found that many of the higher-end features are no longer available:

    I am afraid it is not possible to randomize questions. In June 2012 KwikSurveys was attacked and suffered multiple outages and data loss which resulted in the company closing and the website being shut down. In July, we (now the current owners) decided to purchase the domain name and branding. We had no choice but to very quickly replace the existing system with our own. The home page design and documentation is being updated now.

    Luckily – the new owners have left all the help documents from the old version up so we can look at available features. The old version DID have “randomization” – but it was not that robust – it allowed for randomization by page only:

    http://kwiksurveys.com/docs/@survey_settings_3adisplay.htm

    They also used to offer IP tracking, but no longer do so – only the option to turn “take survey once” controls on or off.

    As I understand it, randomization and counter-balancing in survey design are two distinct processes. Randomization is just that – mixing up the questions. In the case of KwikSurveys it appears they only randomized by page.

    Counter-balancing is much different:

    In questionnaire design, a method of controlling for acquiescence response set by wording a test so that an affirmative answer is scored in the direction of the attribute being measured in half of the questions and in the opposite direction in the rest.

    And a comment from a Journal of Psychology article:

    … the sequence in which the subject answers the questions is in itself an experimental variable that may considerably distort the responses given … Choi and Pak (2001, p. 9), for example, while describing some 48 types of bias in questionnaires, mention the problem of item order only in the context of learning effects: “Having thought about prior questions (such as the first question) can affect the respondent’s answer to subsequent questions (e.g., the second question) through the learning process as the questionnaire is completed. To avoid learning bias, it may be necessary to randomize the order of the questions for different respondents.”

    As far as I can observe counter-balancing would be manually accomplished by grouping questions to balance any order effects. There is no indication I’ve seen that that was done.

    That said, randomization or counterbalancing should be applied in a random fashion, meaning respondents within the same group receive different versions. Here it is pretty clear that different groups received different versions – which is not random at all.

    From the same Journal of Psych article – an example why order effects bias can be a problem:

    The work that most clearly recognizes this problem is that of Knauper and Schwarz (2004), who have argued that item order may change the subject’s responses greatly, and that this effect may interact strongly with both subject variables and the topic chosen for investigation. They observed that in a survey asking whether divorce should be easier to obtain, large changes resulted from rearranging the response alternatives slightly. These were ‘easier’, more difficult’, or ‘stay as it is now.’ When ‘more difficult’ was presented as the last rather than the second alternative, agreement with this choice rose from 33% to 68% among subjects over 70, whereas the change in order had essentially no effect on young adult subjects (Schuman & Presser, 1981; Schwarz & Knauper, 2000). Conversely, in a survey asking whether access to abortion should be available on demand, a change in item-order had no effect among seniors but changed the level of agreement among younger adults by 20%. The magnitude and complexity of these effects (involving a three-way interaction in the above example) suggest that further exploration of this topic is desirable.

    Without seeing the actual question order in each version it is impossible to say what order effects may be in play. That is another apparent failing – and certainly something that should be addressed.

    For the record, I think, notwithstanding the poorly structured questions, this survey overall and the questions asked were simple enough that there would not likely be an order effect. It still should have been addressed in the paper.
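
    To illustrate what genuine counterbalancing of question order can look like (a sketch only – there is no indication the study did anything like this), a balanced Latin square gives each question block every serial position equally often, and the resulting orders would then be assigned at random across all respondents rather than segregated by blog:

        def balanced_latin_square(n):
            """Return n orders of n question blocks; each block occupies each
            serial position exactly once (and, for even n, precedes every
            other block equally often) -- a standard counterbalancing design."""
            first, lo, hi = [0], 1, n - 1
            while len(first) < n:  # first row: 0, 1, n-1, 2, n-2, ...
                first.append(lo)
                lo += 1
                if len(first) < n:
                    first.append(hi)
                    hi -= 1
            return [[(x + r) % n for x in first] for r in range(n)]

        # With 4 question blocks this yields four orders to distribute
        # randomly across the whole sample:
        # [[0, 1, 3, 2], [1, 2, 0, 3], [2, 3, 1, 0], [3, 0, 2, 1]]
        print(balanced_latin_square(4))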

    • tlitb1
      Posted Sep 10, 2012 at 3:29 PM | Permalink

      I think you are probably right and any order effect would be negligible; however, the fact that they considered a counterbalancing procedure to be covered by generating the 4 different surveys confirms they must have thought there was some sort of order effect. Otherwise they would have sent the same survey out to all.

      There is a great deal of talk of “irony” in the fact that this study has been criticized only by skeptics – the concept of irony being bent into a horseshoe shape to suit some people’s needs! I think the real irony is that psychologists designed this study and that the nature of the bias affecting the thinking process of psychologists can clearly be seen in the “counterbalancing” method they used. I can’t see any explanation for its inadequacy other than, at best, mere window dressing.

      I think asking if the study authors were suffering from bias is the sort of critique psychologists should accept. The title alone gives a big clue that this study has biasing issues because – unless they later say they meant it as a joke – it has nothing but headline-grabbing appeal; it alludes to the weakest part of the study as if that was all that mattered to their predisposed mind-set.

  27. Posted Sep 10, 2012 at 2:59 PM | Permalink

    Sorry – forgot the title of the article:

    Does the Order of Questionnaire Items Change Subjects’ Responses?

  28. Johna Till Johnson
    Posted Sep 10, 2012 at 3:15 PM | Permalink

    Yelp yep. Yelp yelp yelp yelp yelp yelp. 🙂 (Sorry that line about “obsessively yelping” just hits my funnybone…)

    Yelp yelp.

  29. Mooloo
    Posted Sep 10, 2012 at 3:19 PM | Permalink

    I really don’t understand why Anthony Watts wasn’t contacted. If I were looking for the loony end of scepticism, his would be the first place I looked.

    While I often go to WUWT, it has a very light moderation and a higher tolerance for nutters than I personally could muster.

    However going to WUWT could result in a really high turnout if Anthony endorsed it, which would likely swamp all other results — including all the bogus skydragon ones. Maybe that’s why Lewandowsky didn’t go to him.

    Oh dear, now I am engaging in conspiracy theories! I must be one of those moon-landing deniers!

  30. Posted Sep 10, 2012 at 4:01 PM | Permalink

    As Professor Lewandowsky deleted my previous comment, here for posterity’s sake is one I just posted at his website:

    I have no theories. I have many questions.

    Professor Lewandowsky, when will you post a topline summary with frequencies?

    I’m interested in your theory of counterbalancing. It normally refers to having half the Likert questions order preferences with the highest preference at the top of the scale and half at the bottom. You apparently mean something different. What do you think it is?

    When will we be able to view the different iterations of the survey?

    When will we be able to see how many respondents filled out each version?

    Why would you send invitations to bloggers to post the survey without attaching your name to it?

    Why would you discuss the objectives of a survey with potential respondents while the survey was still in the field?

    Why do you not attach numbers of respondents, as is customary, to your discussion of results in your paper?

    Are you aware that your study essentially focuses on respondents who failed a rare items test and whose responses would normally be excluded from collected data? Or do you actually think that people who believe there was more than one reason the U.S. invaded Iraq are highly likely to believe that the U.S. faked a moon landing?

    Why did you exclude 5 questions from your analysis? What were the questions and what did the data show for these questions?

    • Posted Sep 11, 2012 at 4:45 AM | Permalink

      Re: thomaswfuller2 (Sep 10 16:01),

      Thomas

      Here are the first-page questions from at least 2 of the surveys (h/t tlitb1 at Bishop Hill):

      surveyID=HKMKNI_9a13984

      * 1. In most ways my life is close to my ideal

      * 2. The conditions of my life are excellent

      * 3. I am satisfied with my life

      * 4. So far I have gotten the important things I want in life

      * 5. If I could live my life over, I would change almost nothing

      and:

      surveyID=HKMKNF_991e2415

      * 1. An economic system based on free markets unrestrained by government interference automatically works best to meet human needs.

      * 2. I support the free-market system, but not at the expense of the environmental quality

      * 3. The free-market system may be efficient for resource allocation, but it is limited in its capacity to promote social justice

      * 4. The preservation of the free market system is more important than localized environmental concerns

      * 5. Free and unregulated markets pose important threats to sustainable development

      * 6. The free-market system is likely to promote unsustainable consumption

      This pretty clearly shows us there was no randomization or worries about order effect. They kept questions grouped similarly and simply moved them around in blocks.

      There were exactly 6 questions in the “free market” group and only 5 questions in the “am I happy” group … and room for only 6 or so questions on the first page. The lack of a 6th question on the one page pretty much proves the questions were not truly randomized but were kept in order within their original groups. Basically there is no value I can see to this – no attempt to address order effect.

      • tlitb1
        Posted Sep 11, 2012 at 5:26 AM | Permalink

        Re: A.Scott (Sep 11 04:45),

        You might want to check out this blog article :

        http://www.ambitgambit.com/2012/09/06/fish-rot-from-the-head-part-1/#comments

        The article has screen shots of the full survey he took. His survey started with the same first questions as HKMKNF_991e2415 (so it probably was that version).

        Questions 1-5 from HKMKNI_9a13984, the “Satisfaction with Life Scale” (SWLS) questions, appear in the same order as questions 28-32 on his survey, on their own page. I would hazard a guess the survey variations may just depend on the positioning of these SWLS questions, as L makes mention of them as being some sort of benchmark in his paper.

        You can find the original SWLS paper by Ed Diener, Robert A. Emmons, Randy J. Larsen, and Sharon Griffin if you Google.

        I can’t see anything about using it for counterbalancing; it seems a pretty arbitrary way to do it, but I’m just guessing.

  31. Posted Sep 10, 2012 at 4:59 PM | Permalink

    Steve, he zapped your most recent comment. I hope you saved it.

  32. Posted Sep 10, 2012 at 5:37 PM | Permalink

    Since there is a whole lotta “snippin” going on – my recent post preserved here just in case

    The “skeptic” community of bloggers joined together when you made the claim they had been contacted and did not reply, and each searched extensively for contact from you.

    You failed to note that it was an assistant who sent these emails, making that search more difficult. And by all appearances, at least some of these emails from your assistant contained no reference to or association with you – again making their search essentially impossible.

    When it became apparent you did not send the emails, they all searched more broadly, including on the uwa.edu.au domain, and several did find emails from your assistant and so noted.

    In light of your withholding the names of the skeptic sites you say you contacted, the group continued to search and try to identify who might have been recipients of your 5 emails. This was a needle-in-a-haystack approach, thanks to your silence on the issue.

    Over the last weekend the 3rd and 4th sites found evidence of your assistant’s emails, and they were so noted earlier today at Steve McIntyre’s Climate Audit.

    Followed shortly thereafter by your release here of the 5 sites you contacted.

    You deserve no apologies, and your arrogance is misplaced. This information should have been included in the paper. You could and should have obtained ethics permission, if you felt it necessary, two years ago. You forced a large amount of effort on many “skeptic” site operators by withholding these names.

    If you felt input from skeptic site users was important – and frankly, as your entire premise concerned skeptics’ attitudes about science, I cannot understand why you would not consider data FROM skeptics highly important – then you or your staff should have made a more concerted effort, at least continuing to make contact until you were sure they had the information on the survey. If they had then turned you down, you might have had a leg to stand on.

    From what we appear to know, Junk Science acknowledged receipt of your email and DID post your link. So your claim that no skeptic site participated is false.

    We also appear to know that Pielke Jr. has found emails showing that he did receive the original email and did correspond, but subsequently decided not to participate.

    You submitted to only 5 skeptic sites vs. 8 known pro-AGW sites, all of which posted the survey link.

    We also know that the pro-AGW sites were contacted, it appears, almost a month before you sent emails to the skeptic sites. In fact, we know you were already publicly discussing both the data/results and the number of responses (N=1100) prior to, or at the same time as, finally sending the emails to skeptic sites.

    No professional publicly discusses survey data and results while the survey is underway. Your public presentation shows you had received virtually all the responses you eventually used and had already analyzed them, and that the attempt to contact skeptic sites came at or after that same time – by all appearances an afterthought.

    That you did not attempt to engage the skeptic community near the beginning of the survey, and that you or your staff made little or no effective effort to affirmatively ensure that each of the skeptic sites DID receive your messages, seems highly suspect.

    Last, to your claim of “counterbalancing.” Counterbalancing, according to most readings, is more sophisticated – and more important – than mere randomization. Counterbalancing, as I understand it, manages question order so as to balance out order effects, where the answer to one question may affect answers to subsequent questions.

    Randomization blindly mixes the questions, which may be of some help, but does little, except by chance, about order effects.
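
    To illustrate the distinction, here is a minimal Python sketch (mine, not anything from the paper) ordering four question blocks A-D across four survey versions:

      import random

      blocks = ["A", "B", "C", "D"]

      # Counterbalancing via a Latin square: every block appears in every
      # serial position exactly once across the four versions, so position
      # effects cancel by construction.
      for i in range(len(blocks)):
          print("counterbalanced:", blocks[i:] + blocks[:i])

      # Randomization shuffles each version independently; positions are
      # balanced only in expectation, not by design.
      for _ in range(len(blocks)):
          print("randomized:     ", random.sample(blocks, len(blocks)))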

    You say you used 4 different survey versions with different question order within pages. Presumably you might have addressed order effects through counterbalancing, but we cannot know that, as you have not included copies of each survey version with the data – making any review impossible.

    Even giving you the benefit of the doubt – that you did a wonderful job of counterbalancing – you admit that you pretty much completely negated any value through your “quasi-random” distribution of the different versions.

    Through the “crowd-sourced” research of the blogging community, and no thanks to you or your paper, we appear to know that there was no true random distribution. The “pro-AGW” sites received 2 of the versions exclusively, and the “skeptic” sites were offered the other 2 versions.

    As you claim no skeptic sites responded – which we can largely confirm from your public statements at the time, where you cited N=1100 on approximately the same date you sent the skeptic site emails – we can surmise that essentially all responses came from 2 of the 4 versions.

    I personally think randomization and counterbalancing were unlikely to have significantly skewed results here – however, that is a layman’s common-sense perspective. Your job as a professional is to apply industry standards and show whether order effects were or were not an issue, and if so, how you addressed them.

    I, and I think many others, would like responses to these questions. They are legitimate, and in general respectfully submitted, despite the arrogance and derision you have displayed towards those you disagree with.

    Regardless of what we might like, professional ethics and standards indicate a response is required.

  33. Posted Sep 10, 2012 at 9:58 PM | Permalink

    Hi all-

    Perhaps this will help clarify. A few weeks ago I was emailed by Joanne Nova who asked if I had been contacted by a Stephen Lewandowsky about a climate survey.

    Here is my response to her:

    “Hi Joanne-

    Never heard of the guy, and a search of my email finds no contact from him.

    Hope this helps,

    Roger”

    When Steve McIntyre returned from a long time away, I read his post (hoping it would be about his London talk). It was a post about this topic; with Nova’s email relatively fresh, I put 2 + 2 together, searched my email for Hanich, and found the exchange that Steve has now posted. I sent it to Joanne and Steve upon finding it.

    I am not following this issue (sorry) and have never heard of Lewandowsky. What I know does not whet my appetite for learning more.

    All best ….

    • Posted Sep 11, 2012 at 4:07 AM | Permalink

      Roger, you’re right to want to know about Steve’s London talk. It seemed to me, as a neophyte on WG2, like a tour de force on climate and extreme events. If you had been sitting next to me I’m sure I would have learned a great deal more.

  34. johanna
    Posted Sep 11, 2012 at 12:16 AM | Permalink

    Having formally studied survey methodology (at Master’s level), conducted surveys for a living, and dealt with ethics committees – I did a lot of work with people with disabilities – my flabber is gasted.

    All open online surveys are junk. Trying to track duplicates, gamers, jokesters and all the rest is just putting lipstick on the pig. The bedrock of a survey is in the formulation of the questions and the selection of participants. On both of these, this survey would have failed a first year undergraduate course.

    I would love to know what the ethics committee was told about this exercise, which has as its premise “when did you stop beating your wife?”

    It would be perfectly reasonable to conduct a proper survey about the relationship between people’s political or personal views and their opinions on AGW, or CAGW. But, that is not what this sloppy and inherently biased exercise was about. Indeed, it is difficult to find a single aspect of this ‘survey’ which is defensible, from soup to nuts.

    • Robin Melville
      Posted Sep 11, 2012 at 3:55 AM | Permalink

      That just about sums it up. Thanks johanna. I’d also add that there’s no indication in his methodology section of how the questionnaire was validated, whether it provided test/retest reliability and internal consistency, or how one can apply a scalar response to complex non-linear questions. In my view it falls into the category of a tabloid newspaper “survey”, e.g. “Should aggressive beggars be repatriated: ‘Yes’/’No’”.
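
      Internal consistency, for instance, is usually reported as Cronbach’s alpha; here is a minimal sketch of the computation (on made-up data, not the survey’s):

        import numpy as np

        def cronbach_alpha(items):
            # items: respondents x questions matrix of scale scores
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        fake = rng.integers(1, 6, size=(100, 6))  # 100 respondents, 6 five-point items
        # independent random answers, so alpha should come out near zero
        print(round(cronbach_alpha(fake), 3))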

      My opinion of the quality of Psychology journals just dropped several notches.

  35. Posted Sep 11, 2012 at 2:27 AM | Permalink

    Bob – can we get the full link if possible for each? These two are in the Wayback Machine:

    http://web.archive.org/web/20100903013604/http://www.kwiksurveys.com/online-survey.php?surveyID=HKMKNF_991e2415&UID=2378302794
    http://web.archive.org/web/20100928145229/http://www.kwiksurveys.com/online-survey.php?surveyID=HKMKNI_9a13984&UID=3313891469

    Warmist:
    profmandia HKMKNF_991e2415
    deltoid HKMKNF_991e2415
    hot-topic HKMKNF_991e2415
    tamino HKMKNF_991e2415
    illconsidered HKMKNG_ee191483
    bbickmore HKMKNG_ee191483
    skepticalscience ???
    trunity ???

    Skeptic:
    junkscience HKMKNI_9a13984
    climateaudit HKMKNI_9a13984
    pielke jr ??? [SM- HKMKNH_7ea60912]
    climatedepot ??? [HKMKNI_9a13984]
    spencer ??? HKMKNH_7ea60912

    • Bob Koss
      Posted Sep 11, 2012 at 3:41 AM | Permalink

      A. Scott,

      This is the only other survey with the UID appended.
      http://www.kwiksurveys.com/online-survey.php?surveyID=HKMKNG_ee191483&UID=3305003676

      Unfortunately it doesn’t work in the Wayback Machine. I did note a couple of comments at Bickmore’s site complaining they couldn’t complete the survey. Maybe that survey has been scraped.

      Maybe someone has to take the survey to generate a UID, in which case neither Pielke nor Spencer initiated their survey.

      That’s my best guess anyway.
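
      For anyone who wants to repeat the archive check, here is a minimal Python sketch (mine, not part of the thread’s research) using the Wayback Machine’s public availability API, querying the bare survey URLs without the UID parameter:

        import json
        import urllib.parse
        import urllib.request

        SURVEY_IDS = ["HKMKNF_991e2415", "HKMKNG_ee191483", "HKMKNI_9a13984"]

        for sid in SURVEY_IDS:
            target = f"http://www.kwiksurveys.com/online-survey.php?surveyID={sid}"
            api = ("https://archive.org/wayback/available?url="
                   + urllib.parse.quote(target, safe=""))
            with urllib.request.urlopen(api) as resp:
                data = json.load(resp)
            snap = data.get("archived_snapshots", {}).get("closest")
            print(sid, "->", snap["url"] if snap else "no snapshot found")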

  36. Mike Mangan
    Posted Sep 11, 2012 at 6:33 AM | Permalink

    Once again, it’s not what they do, it’s that they can get away with it. This story has penetrated the outer edges of the mainstream media, being reported in the Huffington Post, the Blaze, and on MSNBC’s website. Mann, Gleick, and Lewandowsky can act with impunity because they are given the mantle of credibility by the media and academia. It’s maddening.

  37. al
    Posted Sep 11, 2012 at 6:51 AM | Permalink

    Got to feel sorry for them really – obviously a massively underfunded lab, where people doing internet-based research are having to share desktop computers.

  38. Posted Sep 12, 2012 at 8:28 AM | Permalink

    Well, at least Lewandowsky is half honest when he says, “the rationale behind the survey is to draw linkages between attitudes to climate science and other sciences (eg HIV/AIDS)…” He states, maybe unintentionally, that the intent is “to draw linkages” rather than, for instance, to ‘investigate the possibility that there might be linkages’. Got to give credit where credit is due; that’s pretty bold, or maybe it’s just his subconscious leaking through.

    On the other hand, Lewandowsky’s statement, “…attitudes to… …other scientific propositions (eg HIV/AIDS)…”, is well beyond obtuse and smacks of dissembling. It is, in my opinion, a deliberate attempt to conceal the real focus of the study, which is to ‘create’ a linkage between climate change ‘skepticism’ and belief in paranoid, pseudo-scientific conspiracy theories. Here, Lewandowsky is trying to ‘game’ Pielke the Younger – not going to work. Exactly how stupid does Lewandowsky think Dr. Pielke is?

    And he wonders why nobody who is on the skeptic side, is associated with the skeptic side, or wishes to maintain their professional reputation wants to play with him.

    It does have to hurt though, that so many of the bloggers he attempted to hustle into his survey didn’t even notice him.

    An interesting follow-up survey question to ask respondents might be [IF you could get an honest answer]:

    When you took the survey did you believe that the survey was a valid and scientific attempt to study the issue?
    a] Yes
    b] No
    c] if b, did you care?

    ~W^3

5 Trackbacks

  1. By The Daily Lew | Watts Up With That? on Sep 10, 2012 at 1:36 PM

    […] And it turns out Pielke Jr. was contacted as the Third Skeptic. […]

  2. […] Climate Audit on "The Third Skeptic" […]

  3. By Lewandowsky Timeline | Geoffchambers's Blog on Mar 24, 2013 at 7:36 PM

    […] https://climateaudit.org/2012/09/10/the-third-skeptic/ […]

  4. By Lewandowsky’s Backdating « Climate Audit on Aug 2, 2013 at 9:42 AM

    […] am (Australian). By this time, five skeptics had already been identified at both Climate Audit (here) and updates at Jo Nova (here), with these identifications even being reported by Barry Woods on a […]

  5. By Lew’s Third Table | Geoffchambers's Blog on Aug 9, 2013 at 5:25 AM

    […] https://climateaudit.org/2012/09/10/the-third-skeptic/#comment-350166 […]