Conspiracy-Theorist Lewandowsky Tries to Manufacture Doubt

As CA readers are aware, Stephan Lewandowsky of the University of Western Australia recently published an article relying on fraudulent responses at stridently anti-skeptic blogs to yield fake results.

In addition, it turns out that Lewandowsky misrepresented explained variances from principal components as explained variances from factor analysis, a very minor peccadillo in comparison. In a recent post, I observed inconsistencies resulting from this misdescription, but was then unable to diagnose precisely what Lewandowsky had done. In today’s post, I’ll establish this point.

Rather than conceding the problems arising from his reliance on fake/fraudulent data and thanking his critics for enabling him to withdraw the paper, Lewandowsky has instead doubled down: he is not merely pressing forward with publication of results relying on fake data, but is also attempting to “manufacture doubt” about the validity of the criticisms, most recently in a diatribe to which I respond today.

In a post several days ago, I set aside the reliance on fake responses and considered other issues in the Lewandowsky article, reporting on my progress at that time in trying to replicate its results – not easy, since the article omitted relevant methodological information. Separate from this, Roman Mureika and I (but especially Roman) have made further progress in trying to replicate the SEM steps – more on this later.

I reported a puzzle about explained variance results as reported in Lewandowsky’s article – results that could not be replicated using a standard factor analysis algorithm. Roman Mureika also tried to figure out the discrepancy without success. I pointed out that Lewandowsky’s factor analysis did not seem to have much effect on the downstream results where the real problems lay.

The reason why we were unable to replicate Lewandowsky’s explained variance from factor analysis was that his explained variance results were not from factor analysis, but from the different (though related) technique of principal components, a technique very familiar to CA readers.

The clue to reverse engineering this particular Lewandowsky misrepresentation came from a passing comment in Lewandowsky’s blog in which he stated:

Applied to the five “climate science” items, the first factor had an eigenvalue of 4.3, representing 86% of the variance. The second factor had an eigenvalue of only .30, representing a mere 6% of the variance. Factors are ordered by their eigenvalues, so all further factors represent even less variance.

“Eigenvalue” is a term that arises from eigendecomposition – or, equivalently for a correlation matrix, from singular value decomposition (SVD). As an experiment, I did a simple SVD of the correlation matrix – the first step in principal components analysis – and was immediately able to replicate this and other Lewandowsky results, as detailed below. Lewandowsky’s explained variance did not come from the factors arising from factor analysis, but from the eigenvectors arising from principal components. No wonder that we couldn’t replicate his explained variances.
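
As a self-contained illustration of the distinction (a sketch on simulated data, not on Lewandowsky’s survey), the “explained variance” figures of the kind quoted above are simply the eigenvalues of the correlation matrix divided by their sum – exactly what principal components reports – whereas factor analysis reports a different quantity:

# sketch on simulated data: PCA-style "explained variance" is eigenvalue/sum(eigenvalues)
set.seed(1)
X=matrix(rnorm(1000*5),ncol=5)
X=X+rnorm(1000)                  # common component added to each column so the items correlate
ev=eigen(cor(X))$values          # eigenvalues of the correlation matrix
round(ev/sum(ev),3)              # proportions of variance; sum(ev) equals ncol(X)
round(svd(cor(X))$d/ncol(X),3)   # identical figures via SVD, the route used below
factanal(X,factors=1)$loadings   # note that "Proportion Var" here is a different quantity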

But instead of conceding these results, Lewandowsky fabricated an issue regarding the number of retained eigenvectors in this analysis, a point with which I had not taken issue and which did not affect the criticism, as I’ll detail.

Factor Analysis of “Free Market” Questions

Lewandowsky’s first example was his factor analysis of the free market questions, where he reported results using two factors as follows:

For free-market items, a single factor comprising 5 items (all but FMNotEnvQual) accounted for 56.5% of the variance; the remaining item loaded on a second factor (17.7% variance) by itself and was therefore eliminated.

In my previous post, I described my emulation, in which I confirmed that factor analysis with two factors did indeed result in loading of the second factor on the second question, but, as a puzzle, observed that I had been unable to replicate the reported variances:

I tested this using the R function factanal, the loading report from which is shown below. The loading on the first factor so calculated (48.7%) is somewhat lower than Lewandowsky’s report (56.6%). In addition, while the second factor is indeed dominated by the second question, there is also some loading from the third question onto this factor. Such a simple first step should be directly replicable. The calculation here is close, but only close.

I’m not sure that this (or the other factor analysis calculations) matter in the end. My interpretation of Lewandowsky’s sketchy description is that the “latent variable” used downstream is a weighted average of the 5 retained series (with relatively even weights – see his Table 2.)

	
pc=factanal(lew[,1:6],factors=2)
pc$loadings
....
               Factor1 Factor2
SS loadings      2.924   1.095
Proportion Var   0.487   0.183
Cumulative Var   0.487   0.670
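
(As an arithmetic check, factanal’s “Proportion Var” is the sum of squared loadings divided by the number of variables, which is why it need not match the principal components figure computed next.)

# factanal's "Proportion Var" is SS loadings over the number of variables
2.924/6    # 0.487, the Factor1 proportion shown above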

However, if one does the first step of a (correlation) principal components analysis – an SVD on the correlation matrix – one does get the results claimed by Lewandowsky, as shown in the following code:

R=cor(lewdat[,1:6])     # correlation matrix of the six free-market items
pc=svd(R)               # SVD of the correlation matrix - the first step of principal components
pc$d/sum(pc$d)          # proportion of explained variance
# 0.5653 0.1774 0.0873 0.0735 0.0570 0.0395

pc$d                    # eigenvalues
#  3.392 1.065 0.524 0.441 0.342 0.237
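
As a quick check on the arithmetic (using the pc object above), the eigenvalues of a correlation matrix sum to the number of variables, so the 56.5% figure is simply the first eigenvalue divided by 6:

sum(pc$d)    # 6, up to rounding
pc$d[1]/6    # 0.565, i.e. the 56.5% reported by Lewandowsky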

In this example, I had done the calculation with the same number of retained factors as Lewandowsky but had got a different result. In his blog post, Lewandowsky ignored this example, focusing instead on a different example where I had reported results using a different number of factors (though this was not the only experiment that I had done). Lewandowsky pretended that it was the difference in the number of retained factors that explained the discrepancy, but this was untrue: he knew or ought to have known that the discrepancy arose even in this case, where there was no difference in retained factors. Why did Lewandowsky conceal this in his blog post?

As noted in the earlier post, this discrepancy didn’t seem that important for the downstream analysis.

Factor Analysis of the Conspiracy Questions

Lewandowsky’s incorrect reporting of explained variance for factor analysis of the conspiracy questions had an identical explanation. Lewandowsky reported his results as follows:

For conspiracist ideation, two factors were identified that accounted for 42.0 and 9.6% of the variance, respectively, with the items involving space aliens (CYArea51 and CYRoswell) loading on the second factor and the remaining 10 on the first one (CYAIDS and CYClimChange were not considered).

Again, I had been unable to replicate these results. I made no attempt to replicate Lewandowsky’s procedure for deciding how many factors should be used, an aspect of the algorithm not discussed in the article itself.

Once again, the explained variance reported by Lewandowsky can be obtained by SVD on the correlation matrix (principal components), i.e. from the loadings on the first two eigenvectors, not the first two factors.

R=cor(lewdat[,c(13:15,17:24,26)])    # correlation matrix of the 12 retained conspiracy items
pc=svd(R)                            # SVD of the correlation matrix (principal components)
round(pc$d/sum(pc$d),3)              # proportion of explained variance
# [1] 0.420 0.096 0.071 0.066 0.061 0.058 0.049 0.047 0.044 0.039 0.032 0.017
round(pc$d,3)                        # eigenvalues
# [1] 5.036 1.154 0.848 0.793 0.732 0.692 0.593 0.568 0.533 0.463 0.383 0.205
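
As a side note (my observation, not a claim made in the article), the eigenvalue-greater-than-1 rule that Lewandowsky later invokes on his blog (quoted below) would retain two components here, consistent with the two conspiracy factors that he reported:

which(pc$d>1)    # 1 2: two components pass the threshold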

Factor Analysis on CO2 Questions

On the factor analysis of the five CO2 questions, Lewandowsky said only:

The 5 climate change items (including CauseCO2) loaded on a common factor that explained 86% of the variance; all were retained.

As with the other two factor analyses, I was unable to replicate Lewandowsky’s reported explained variance. Once again, it can be shown that Lewandowsky’s explained variance claim comes not from the first factor, but from the first eigenvector from principal components. Factor analysis with a single factor yields an explained variance of 82.7%. By coincidence, factor analysis using two factors yields an explained variance of 86%.
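
For completeness, a corresponding check for the climate items would look like the sketch below. The column positions are placeholders – this post does not identify which columns of lewdat hold the five CO2 questions – but the arithmetic is the point: an eigenvalue of 4.3 on five standardized items is 4.3/5 = 86%, the principal components figure rather than the factor analysis one.

co2.cols=7:11                        # hypothetical placeholder positions of the five climate items
ev=svd(cor(lewdat[,co2.cols]))$d     # eigenvalues of their correlation matrix
ev[1]                                # about 4.3, per Lewandowsky's blog comment
ev[1]/5                              # about 0.86, i.e. the 86% reported as "explained variance"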

Puzzled by the unexplained inconsistencies, I speculated that Lewandowsky might have inadvertently reported the explained variance from two factors, rather than the explained variance from one factor. (I certainly wasn’t taking a position on the merit of either decision; I was simply trying to figure out what Lewandowsky had done. I had asked at Lewandowsky’s blog that he provide source code to clarify various questions, but he did not respond.) I reported this speculation as follows:

I wasn’t able to replicate Lewandowsky’s claim at all. I got explained variance of 43.5% in the first factor (versus Lewandowsky’s 86%). I notice that the explained variance for two factors was 86%: maybe Lewandowsky got mixed up between one and two factors. If so, would such an error “matter”? In Team-world, we’ve seen that even using contaminated data upside down is held not to “matter”; perhaps the same holds in Lew-world, where we’ve already seen that use of fake and even fraudulent data is held not to “matter”.

As it turns out, I was right to question Lewandowsky’s explained variance claims, but my speculation as to the source of the 86% value was incorrect: as noted above, Lewandowsky had misrepresented explained variance from principal components eigenvectors as explained variance from factor analysis.

Lewandowsky’s Manufactured Doubt

At his blog, Lewandowsky fulminated:

How could Mr. McIntyre fail to reproduce our EFA?

Simple: In contravention of normal practice, he forced the analysis to extract two factors. This is obvious in his R command line:
pc=factanal(lew[,1:6],factors=2)

In this and all other EFAs posted on Mr. McIntyre’s blog, the number of factors to be extracted was chosen by fiat and without justification.

In the FM factor analysis, it was Lewandowsky’s own decision to use two factors. Lewandowsky did not describe his methodology for deciding on two factors and, in my blog post, I made no attempt to guess it. The reason why I was unable to replicate his results for the FM and conspiracy questions (or the CO2 questions) had nothing to do with the number of retained factors; it was due to Lewandowsky reporting explained variance from principal components.

It’s hard to believe that Lewandowsky was unaware of this at the time that he wrote his blog post. If he was aware, his suggestion that I had “rigged” my reanalysis to locate discrepancies is contemptible even by Lewandowsky’s standards:

Or else, he intentionally rigged his re-“analysis” so that it deviated from our EFA’s in the hope that no one would see through his manufacture of doubt.

The reason why I was unable to replicate these results was that Lewandowsky had presented explained variance from principal components – not from factor analysis.

Retained Eigenvectors and Factors

The decision on how many eigenvectors/principal components to retain has been a wheelhouse issue at Climate Audit.

Steig (in Steig et al 2009) had misunderstood the commentary of North et al 1982 on principal components and had additionally and incorrectly reified Chladni patterns as physical patterns. We observed that Steig’s retention of only three eigenvectors had incorrectly spread observed warming in the Antarctic peninsula onto the rest of the continent.

Retained principal components also (famously) arose in discussion of MBH, where Mann and others have created massive disinformation. Mann had notoriously used a highly biased principal components algorithm (not that a centered principal components method was necessarily correct either). Using a centered principal components method, the bristlecone hockey stick, said by Mann to be the “dominant pattern of variance”, was demoted to a lower-order PC (the PC4). Why a PC4 of a regional network should be a unique and magic thermometer for the entire world was never explained. The NAS panel report recommended that bristlecones be avoided in reconstructions (regardless of which PC). One would have thought that this would have put a silver bullet in the MBH reconstruction. However, the climate science community has proved unequal to the small task of rejecting Mann’s use of contaminated upside-down Tiljander data. All these issues linger on.

In the wake of the original error in Mann’s PC algorithm, Mann proposed (ex post) a retention policy that went deep enough to include the bristlecones. There was no mention of this method in the original paper. Nor did this ex post rule reproduce the number of eigenvectors actually retained in his other networks. Nor has Mann ever explained how he calculated the number of retained eigenvectors. Instead, Mann and his associates merely threw mud, seemingly to the great pleasure of the climate community.

Like Mann, Lewandowsky did not describe his retention policy in his original article. Nor was it even described in his blog post. Instead, Lewandowsky curiously described the application of a common retention criterion as merely “illustrative”:

One core aspect of EFA is that the researcher must decide on the number of factors to be extracted from a covariance matrix. There are several well-established criteria that guide this selection. In the case of our data, all acknowledged criteria yield the same conclusions.

For illustrative purposes we focus on the simplest and most straightforward criterion, which states one should extract factors with an eigenvalue > 1. (If you don’t know what an eigenvalue is, that’s not a problem—all you need to know is that this quantity should be >1 for a factor to be extracted). The reason is that factors with eigenvalues < 1 represent less variance than a single variable, which negates the entire purpose of EFA, namely to represent the most important dimensions of variation in the data in an economical way.

Lewandowsky observed that application of the eigenvalue-greater-than-1 criterion to the CO2 questions resulted in the retention of 1 factor. The second eigenvalue of the FM questions was greater than 1, but in that case, Lewandowsky eliminated the question loading on the second factor.
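
To make the criterion concrete, here is a small sketch applying the rule to the eigenvalues already quoted in this post (no new calculation is involved; the numbers are simply those reported above):

# eigenvalue > 1 rule applied to eigenvalues quoted earlier in this post
fm.ev=c(3.392, 1.065, 0.524, 0.441, 0.342, 0.237)   # free-market items (from the SVD above)
co2.ev=c(4.3, 0.30)      # first two climate-item eigenvalues, per Lewandowsky's blog
sum(fm.ev>1)             # 2: the second FM factor passes the rule, but its sole item was then dropped
sum(co2.ev>1)            # 1: only one climate factor passes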

Again, at this point, I’m not taking exception to any particular criterion. Nor did I argue that two factors should be retained for the CO2 questions. I was merely trying to guess how Lewandowsky had obtained his explained variance results.

Lewandowsky’s Accusations

Lewandowsky ended his blog post with the following accusation:

There are two explanations for this obvious flaw in Mr. McIntyre’s re-“analysis”. Either he made a beginner’s mistake, in which case he should stop posing as an expert in statistics and take a refresher of Multivariate Analysis 101. Or else, he intentionally rigged his re-“analysis” so that it deviated from our EFA’s in the hope that no one would see through his manufacture of doubt.

As noted above, the reason why I was unable to replicate Lewandowsky’s explained variance claims was that they were incorrect – they came from the eigenvectors (from principal components) and not the factors (from factor analysis). The person who appears to be in need of Multivariate Analysis 101 is Lewandowsky himself.

Lewandowsky’s attempt to divert attention to the number of retained factors was a fabricated diversion on several counts. I made no attempt to emulate Lewandowsky’s unreported retention procedure. I used two factors to analyse the FM questions not because of a “fiat” on my part, but because Lewandowsky himself had used that number. Nor did I propose the use of two factors for the third (CO2) analysis: I merely noticed that 86% explained variance arose with two factors. As matters turned out, Lewandowsky had made a different error – one that I had not guessed in my previous post, but one that pervaded his other factor analyses as well.

Lewandowsky repugnantly alleged that I might have “intentionally rigged his re-“analysis” so that it deviated from our EFA’s in the hope that no one would see through his manufacture of doubt.”

Lewandowsky’s results are bogus because of his reliance on fake and fraudulent data, not because of replication issues in his factor analysis. Nor do I believe that there should be any “doubt” on this point. In my opinion, the evidence is clearcut: Lewandowsky used fake responses from respondents at stridently anti-skeptic blogs who fraudulently passed themselves off as skeptics to the seemingly credulous Lewandowsky.

That Lewandowsky additionally misrepresented explained variances from principal components as explained variances from factor analysis seems a very minor peccadillo in comparison (as I noted at the time.) On this last point, to borrow Lewandowsky’s words, there seem to be two alternatives. Either Lewandowsky “made a beginner’s mistake, in which case he should stop posing as an expert in statistics and take a refresher of Multivariate Analysis 101”.

Or else Lewandowsky, cognizant of how thoroughly his results are compromised by fake/fraudulent data, has decided – rather than thanking his critics for spotting defects and withdrawing his study – to double down by trying to manufacture doubt about the criticisms, in the “hope that no one would see through his manufacture of doubt.”

265 Comments

  1. Mooloo
    Posted Sep 21, 2012 at 12:35 AM | Permalink

    “who fraudulently passed themselves off as skeptics the seemingly credulous Lewandowsky”

    should it be

    “who fraudulently passed themselves off as skeptics TO the seemingly credulous Lewandowsky” ?

  2. Gil Grissom
    Posted Sep 21, 2012 at 12:36 AM | Permalink

    I have stated before that the Lewandowskys, Manns, etc. of the climate debate know that people such as Steve, Jean S, Lucia and many others know that their work is often total bunk. They are activists, not scientists and they know we know that. They don’t care. They are simply attempting to put on a poker face and present their nonsense as proper and correct science for the mass media and millions of others that are merely trying to decide who is telling the truth, and steer public policy accordingly. We should not be discouraged when they refuse to acknowledge even the simplest, most blatant errors that are pointed out here and other places (Lucia, etc). This is a fight to the death for public opinion. No more, no less. Again, thank you for what you do!

    • Otter
      Posted Sep 21, 2012 at 3:05 AM | Permalink

      ‘A joker face’

      That sounds like one for Josh….

      • Jeff Alberts
        Posted Sep 21, 2012 at 9:44 AM | Permalink

        I’m still picturing Lew’s eyebrows going up and down, and haven’t been able to sleep…

    • Craig Loehle
      Posted Sep 21, 2012 at 8:57 AM | Permalink

      I disagree. The extremely large ego does not admit that it can make mistakes. Anyone pointing out mistakes is simply engaged in a personal attack and they do not even listen to it. Combine giant ego with a vague idea of science and stats and voila, junk science vociferously defended.

  3. Mooloo
    Posted Sep 21, 2012 at 12:36 AM | Permalink

    Nice article. Even if it did mean I had to spend 15 minutes chasing down what eigenvectors are.

  4. theduke
    Posted Sep 21, 2012 at 12:44 AM | Permalink

    In the five years I’ve been following this blog, I’ve never seen Steve make more specific and onerous charges of scientific misconduct.

    Dr. Lewandowsky’s response, if he chooses to seriously engage, will be interesting, to say the least.

  5. Nosyam
    Posted Sep 21, 2012 at 12:50 AM | Permalink

    I am trying to find a charitable explanation for Lewandowsky et al behaviour without success.

    • Brandon Shollenberger
      Posted Sep 21, 2012 at 1:19 AM | Permalink

      Or faustusnotes. He simply claims Lewandowsky’s results are gotten via factor analysis.

      • JerryM
        Posted Sep 21, 2012 at 4:34 AM | Permalink

        And then faustusnotes too joins the rush for the hills with ” I don’t have an opinion as to whether that’s what L et al do, though (haven’t read that much of their paper). ”

        Only the very young or the terminally stupid left defending Lew now. Pretty soon WorldShaping will be back at its default as a comment-free zone.

        Many thanks for your latest work/post, Steve, it made my day.

      • AntonyIndia
        Posted Sep 21, 2012 at 10:07 AM | Permalink

        Australian CAGWarmer faustusnotes doesn’t “actually think a great deal of Lewandowsky’s paper”: “I don’t like seeing Factor Analysis done on 4-point scales, there’s obvious problems with internet surveys (unavoidable in this instance, mind), and the unbalanced sample (only 20% “skeptic” according to McIntyre) weakens the findings”.
        http://faustusnotes.wordpress.com/2012/09/17/censored-by-climateaudit/#comment-6671

        Steve: around 20% identified themselves as “skeptic”, but some of these responses were fraudulent. The actual number of respondents appears to be much less than that. My guess is that over half of the “skeptic” responses were fake.

  6. Jimbomo
    Posted Sep 21, 2012 at 12:51 AM | Permalink

    Yet again another example of how withholding methods and code is the spark that ignites a conflagration. If Lewandowsky had provided code as Steve requested, Steve wouldn’t have speculated on his methods and Lewandowsky wouldn’t have (foolishly and spitefully) tried to skewer Steve.

    Unfortunately, Steve’s reply, though both necessary and devastating, is a predictable response that Lewandowsky has goaded him into and now our attention will be turned towards the exchange of attacks and, as Lewandowsky may hope, we will pay less attention to the core issue – the inclusion of faked responses and their exaggerated role in the resulting analysis.

    • HAS
      Posted Sep 21, 2012 at 1:01 AM | Permalink

      I’m actually surprised the faked responses get so much air-time, in a sense they were inevitable (even if their impact is somewhat more serendipitous).

      The way the sample was drawn is the fundamental problem.

    • theduke
      Posted Sep 21, 2012 at 1:11 AM | Permalink

      “. . . we will pay less attention to the core issue – the inclusion of faked responses and their exagerated role in the resulting analysis.”

      I really don’t think that’s a concern at this point. The paper is deficient in so many ways . . .

  7. GrantB
    Posted Sep 21, 2012 at 12:55 AM | Permalink

    DVD – Déjà Vu Downunder

    Methodology outlined in paper doesn’t produce results given in paper.

    • michaelozanne
      Posted Sep 22, 2012 at 10:56 AM | Permalink

      I’m starting to think that there’s some kind of meningeal fluke or worm that incubates in Fosters Lager and causes aberrant behaviour in academics…..

  8. Brandon Shollenberger
    Posted Sep 21, 2012 at 1:17 AM | Permalink

    It took longer than expected for this post to go up, but it was well worth the wait. While this issue has little relevance to Lewandowsky’s results, it is all sorts of hilarious.

  9. faustusnotes
    Posted Sep 21, 2012 at 1:18 AM | Permalink

    try looking up “kaiser criterion.”

    • AndyL
      Posted Sep 21, 2012 at 2:42 AM | Permalink

      faustusnotes
      I’ve read through the latest thread at SKS, and it is clear that you knew L et al were using PCA all along. From the degree of your knowledge of R it is also clear you knew that Steve and Brandon were not using PCA.
      Why did you not clearly state this?

      • faustusnotes
        Posted Sep 21, 2012 at 5:08 AM | Permalink

        AndyL, I’ve never commented at SKS, what are you talking about? I didn’t know that L et al were using PCA all along, but it’s a pretty common practice in the social sciences: use eigenvalues from PCA to identify number of factors to retain, then do factor analysis on those factors. If Mcintyre had even a passing familiarity with the social science literature he would have known that too. Why should I do his work for him? I already pointed out to him on his last train-wreck thread that he needs to up his standard to an undergraduate level – not my problem if he can’t.

        I note Mcintyre hasn’t used the word “rotation” once in this post. I wonder why?

        Steve: The post was already long. Lewandowsky’s shot about rotation appears to me to be another fabricated diversion. My impression is that Lewandowsky used varimax rotation as well, because his pattern in the FM factor analysis with two factors closely matched my results, also with two factors. Indeed, the closeness of this match was one of the reasons that it seemed to me that I was using an algorithm that matched or was close to his – a supposition that still seems valid to me.

        • AndyL
          Posted Sep 21, 2012 at 7:00 AM | Permalink

          apologies, I meant the shapingtomorrowsworld thread, as I am sure you realised.

        • faustusnotes
          Posted Sep 21, 2012 at 7:13 AM | Permalink

          instead of supposition, though, you could actually ask, couldn’t you? – snip – untrue allegation – and now you’ve shown you don’t understand basic aspects of his craft, so he’s not feeling very cooperative. Oops!

          The reason rotation is important is not that you were or weren’t doing it his way but that you didn’t mention which you used. If you want to do an audit, you need to present exactly what you do, not just rough it out with an as-you-like-it, near-enough-is-good-enough attitude. You didn’t even think about whether the rotation might affect the results, not enough to mention. That’s not how auditing works, is it?

          Steve: Lewandowsky did not provide source code for his calculations; I did. The purpose of doing so is to permit interested parties to examine calculations for assumptions that were not described in the text. My results were replicable within minutes by any interested reader. Lewandowsky’s results should have been immediately replicable, but weren’t. Nor should replicability be dependent on the goodwill of the author – giving data and methods only to pals has even been condemned in Geoffrey Boulton’s Royal Society report (Boulton, incidentally, commended my work).

          If I can manage to provide code for blog articles, Lewandowsky should, in my opinion, do so for a controversial article purporting to be “peer reviewed”.

        • faustusnotes
          Posted Sep 21, 2012 at 7:26 AM | Permalink

          The allegation is not untrue. Here is your first sentence from today’s post:

          Stephan Lewandowsky of the University of Western Australia recently published an article relying on fraudulent responses at stridently anti-skeptic blogs to yield fake results.

          Your article on Sep 8th used the word “scam” in the title; another title refers to Lewandowsky’s “fake results.” Do you seriously have so little insight that you don’t see that as an accusation of fraud?

          Your results were replicable within minutes because they were laughably simple; Lewandowsky’s were replicable within minutes by anyone who understands how social scientists do factor analysis, which is something you clearly still aren’t up to speed on.

          You also need to learn what “replicable” means, since it doesn’t mean what you think it does.

          Steve: in a reference below http://www.let.rug.nl/~nerbonne/teach/rema-stats-meth-seminar/Factor-Analysis-Kootstra-04.PDF, it is pointed out that the explained variance from factor analysis is not the same as that from principal components:

          “although the loading patterns of the factors extracted by the two methods do not differ substantially, their respective amounts of explained variance do!”

          You merely asserting that psychologists are confused on this point does not demonstrate it.

        • faustusnotes
          Posted Sep 21, 2012 at 7:44 AM | Permalink

          Yes Steve, the respective amounts of explained variance do differ! and if you had read that paper as part of your postgraduate studies, you might have known that and been able to guess at the reason why your fumblings didn’t match Lewandowsky’s results. You would also know that it’s possible to use PCA to choose the number of factors, and factor analysis to calculate the loadings. But you didn’t, so you were mystified by something the rest of us learnt years ago.

          Oh, no response to the quote from your own blog post? Did I make an untrue allegation, or did you?

        • RomanM
          Posted Sep 21, 2012 at 8:11 AM | Permalink

          S—, you need to drop your arrogant and abusive manner. It makes you look like a jerk.

          The description of the methodology given in the paper is pretty much non-existent. You would have more credibility if in fact you had figured any of this out yourself instead of merely throwing out accusations of conspiracy-monging at everyone.

          The paper has used several versions of “factor analysis” without specifying that they have done so. The initial exploratory work appears to use simple principal components. However, the later results clearly indicate a need for a more complex factor analysis using other unspecified methodology. You would think that this would be vital information for a proper peer review.

          What specific methodology was used to produce the factor loadings for the questionnaire items in table 2? These loadings clearly differ from the ones the PCs produce. Also, why were the conspiracy items treated differently from the other factors? You don’t suppose that the loadings produced by a factor analysis might have indicated that some of his pet conspiracies play only a minor role in the picture…

        • faustusnotes
          Posted Sep 21, 2012 at 8:36 AM | Permalink

          RomanM, if you want me to drop the arrogant and abusive manner, try associating yourself with a blog that doesn’t accuse other people of f*** and incompetence. A well-mannered post saying “I don’t understand what Lewandowsky did” is very different to a vicious post saying (and I quote)

          Stephan Lewandowsky of the University of Western Australia recently published an article relying on fraudulent responses at stridently anti-skeptic blogs to yield fake results.

          That is arrogant and abusive. If you don’t like it, don’t do it. Didn’t your mother teach you that?

          You know full well that the paper is not yet published, and that it is associated with a detailed online supplement (not yet available) that describes the methods in detail. If you wait instead of accusing people of not contacting you before you have searched your email, or instead of accusing people of f*** before the paper is published, then you might have more credibility.

          The paper has used one version of factor analysis (drop the quotes; they don’t look good in scientific writing). It has used PCA to identify the number of factors, using a criterion that the authors don’t reveal in the unpublished draft that you guys are hacking; it then uses factor analysis (based on mle) to get the loadings. I’ve said this multiple times but you don’t seem to be listening. This is pretty standard procedure in social sciences, but you didn’t know that.

          My guess is that they’re using a variant of the Kaiser criterion for small number of variables to choose the number of factors. My guess is as good as yours (well, maybe it’s better, since I seem to be able to replicate their results and you don’t); I could be wrong, but my guess is that the maximum number of factors you can extract from, e.g. 5 variables is 2, so it’s pretty likely that any rational selection method is going to end up choosing 1. Maybe if you look through Lewandowsky and/or Oberauer’s prior publications you can see the sorts of things they do? But hey, you guys are the auditors, you knew to check that stuff, right? I’m guessing a method for choosing factors based on e.g. no more than 1 factor per 5 variables, or a variable-count-penalized method. I don’t know; when I do EFA I use the full variable set and let the loadings tell me their story, but apparently that’s not how the cognitive science kids roll. But you see, I’m waiting for the paper to be published; you’re rushing in where angels fear to tread.

          The thing is, if you want to audit someone’s paper before it’s published, you need to get smart. But you’ve been playing to the gallery here, and you haven’t had to lift your game for a while, you don’t even check your inbox properly before you make accusations. You’ve been getting sloppy. So once again, I suggest that you wait for the full paper, because right now you’re digging yourself a deep hole. Once the paper is out you’ll be able to find lots of flaws (I can see many) and spin them however you like; but I suspect the problems you’re claiming to find now aren’t going to stand up to scrutiny once the final paper is out. Lewandowsky knows that, hence his smug tone – you would do well to pay heed to it.

        • RomanM
          Posted Sep 21, 2012 at 9:57 AM | Permalink

          Arrogant and genuinely abusive is someone purporting to be carrying out a scientific study when in fact it is nothing more than a propaganda exercise intended to smear and marginalize people like myself within the context of climate discussion. Exposing the fact that it is done incompetently, as in this case, is the only way of defending oneself from such an onerous assault. That some other people might term the paper a “fraud” and use somewhat inflammatory language describing it does not surprise me. However, I have not used such words nor have I claimed the existence of any conspiracies surrounding the paper.

          You also do not understand the concept of blog auditing. Posts are not intended to be completed exercises with all the i’s dotted and t’s crossed with everything that has been attempted or done expressly written in the post. Since the background statistical work is generally provided as turnkey material, one can figure out what has been done and adapt the material to take it further. This blog operates as a seminar with ideas thrown out and batted around and improved by those people who can contribute. Others use the opportunity to learn something about the analyses and the procedures used in them so it is also educational.

          You know full well that the paper is not yet published, and that it is associated with a detailed online supplement (not yet available) that describes the methods in detail

          If it is put out in the public sphere, it is fair game to examine what was done and to attempt to determine whether it does or does not have substance, your opinion that we all sit with our hands folded and wait for the “official” version notwithstanding.

          You might also explain to me why most of what you wrote seems to be aimed at someone other than myself.

        • Steve McIntyre
          Posted Sep 21, 2012 at 10:04 AM | Permalink

          You know full well that the paper is not yet published, and that it is associated with a detailed online supplement (not yet available) that describes the methods in detail

          Lewandowsky publicized his results and, if his results were not ready for scrutiny, he should have waited until they were. The article was said to be “in press” and not merely a draft.

        • Sven
          Posted Sep 21, 2012 at 9:40 AM | Permalink

          You keep hammering the quote about fraudulent responses and fake results, faustusnotes. What’s so wrong about it? I think it’s just obvious that with these kind of obviously guided questions, if you almost uniquely rely on the answers from the vehemently anti-skeptic sites, then most of these answers would be from alarmists, posing as skeptics, who are trying to help the poor professor creating the image he’s after. The fact of removing the outliers or not accepting DUPLICATE answers clearly is not sufficient to overcome this obvious shortcoming. So – it is based on fraudulent responses (from alarmists posing as skeptics) and thus the results are fake. It’s not arrogant, it’s not abusive, it’s just factual. Just google Roman Mureika and you’ll hopefully understand why his behavior is so different from yours. RomanM is right, you do look like a jerk and looking at the perseverance of your abusive arrogance, I guess I know why. I think it’s just because you are.

        • David L. Hagen
          Posted Sep 21, 2012 at 10:00 AM | Permalink

          faustusnotes
          Re: “using a criterion that the authors don’t reveal in the unpublished draft that you guys are hacking;”
          Lewandowsky prominently posts his publications in press including ” Lewandowsky, S., Oberauer, K., & Gignac, C. E. (in press). NASA faked the moon landing—therefore (climate) science is a hoax: An anatomy of the motivated rejection of science.. Psychological Science.” and posted a preprint. These are replicated at “Psychology for a safe climate” with an online preprint.
          I find your accusation of “hacking” to be a baseless ad hominem.

          In their draft, I found Lewandowsky et al. to use neither “PCA” nor “Principal” nor “component”. However, they used “factor” 15 times including “Separate exploratory factor analyses”; “For free-market items, a single factor comprising 5 items (all but FMNotEnvQual) accounted for 56.5% of the variance; the remaining item loaded on a second factor (17.7% variance) by itself and was therefore
          eliminated.”
          It appears to me that Lewandowsky “forgot” to fully explain their method, which McIntyre finally discovered by exploring the implications of a blog comment. That would not pass my “peer review” evaluation as a “clear” reproducible explanation of the method used.

        • calb
          Posted Sep 21, 2012 at 10:02 AM | Permalink

          faustusnotes,

          Now you are just being silly. If Lewandowsky didn’t want people commenting on his paper before formal publication, then he should not be actively publicizing it. You refer to “the unpublished draft that you guys are hacking.” Hacking? The paper is published on the UWA web site and is linked from the Guardian article. You have one weird definition of “hacked”. Also, what makes you think it is a draft? The description of the link on Lewandowski’s website is ” Lewandowsky, S., Oberauer, K., & Gignac, C. E. (in press). NASA faked the moon landing—therefore (climate) science is a hoax: An anatomy of the motivated rejection of science.. Psychological Science.” This doesn’t sound like a draft to me.

          You probably also don’t recognize how you contradict yourself in this thread. You earlier said: “Lewandowsky’s were replicable within minutes by anyone who understands how social scientists do factor analysis, which is something you clearly still aren’t up to speed on.” Now you are saying you are “guessing” at their methodology and you are “waiting for the paper to be published”. Sloppy, faustusnotes, sloppy.

          Finally, as has been pointed out to you many times, SM did not accuse Lewandowsky of fraud. He said Lewandowski’s analysis used results that were faked by survey respondents. For some reason, you continue to conflate these claims.

        • faustusnotes
          Posted Sep 21, 2012 at 10:06 AM | Permalink

          Sven, I think what you’ve postulated there is what the rest of the world calls a “conspiracy theory.” Fortunately, cognitive scientists are working to understand how people with a tendency to endorse conspiracy theories view modern scientific debate, so there are some studies you can participate in.

          RomanM, no one’s trying to marginalize you: they’re trying to understand how ordinary people interpret scientific debate, in order to better understand scientific communication in the future. Skepticism about AGW is like a kind of modern heresy. Don’t you think it would be nice, back in the time of Galileo, to understand why people took the positions they took, what cognitive processes informed them? Well, people like Lewandowsky are making the effort.

        • faustusnotes
          Posted Sep 21, 2012 at 10:09 AM | Permalink

          Steve, oh pink interloper, if your only concern was to get the information quickly because it had been publicized, then surely a quick search of your inbox would have revealed some useful information? Followed by a polite email, along the lines of “ooh, sorry, I didn’t host your study back when you asked me to, but I’m really interested in seeing a draft of the paper” and then “oh, the supplement would be good too, Dr. Lewandowsky” and then a polite discussion piece?

          That’s not quite what you did, is it?

        • joannenova
          Posted Sep 21, 2012 at 10:40 AM | Permalink

          faustusnotes if people are supposed to wait for the paper to be “published” before they audit it, you must be absolutely enraged that someone promoted their conclusions in international media outlets before they’d even finished writing the paper. How un-scientific? How could the “peer reviewers” check the methods if they are not written up, or does that kind of stuff not really matter to you?

        • None
          Posted Sep 21, 2012 at 10:44 AM | Permalink

          Faustus,
          A quick search of his inbox would not have revealed useful information because Lewandowsky didn’t send him anything, and gave no indication that it was someone else who had.

          Btw, what do you think of the fact that despite having Steve’s efforts in R to reproduce their results Lewandowsky and Oberauer still got it completely wrong on why his results differed ? All that “forcing” the analysis and using factors “chosen by fiat and without justification” was just hot air. Leaves them looking like a pair of clowns (again).

          Looking forward to the post on robust regressions with the Lewandowsky paper…

          It’s such a dreadful study in so many ways, it’s like another gift that keeps giving.

        • Lady in Red
          Posted Sep 21, 2012 at 11:06 AM | Permalink

          Faustus…. It is time for you to withdraw. I’m sure that even you know the extent to which you are an embarrassment to yourself. (Poor Dr. Lewandowsky does not need more assistance in the embarrassment department.)

          This has dragged on. You lost: game, set, match.

          No, pls, quietly go away. …..Lady in Red

        • Tim Irwin
          Posted Sep 21, 2012 at 11:15 AM | Permalink

          Faustus,

          I am beginning to wonder if you aren’t the good professor himself posting anonymously. You exert so much wasted energy on Steve’s tone. So all of this misunderstanding would have been resolved if he had simply asked for information politely? Nonsense. The title of the paper and the accompanying press were clearly intended to insult the intelligence if not the integrity of skeptics. I would say the tone was set by Lewandowsky himself. Your hypocrisy on this subject is ridiculous. Furthermore, the tone is irrelevant. If Lewandowsky sincerely believed in his study one would think he would go out of his way to explain his methodology to all of his critics, particularly Steve. One would think he would show all of his work just like Steve did.

        • David L. Hagen
          Posted Sep 21, 2012 at 3:40 PM | Permalink

          faustusnotes
          Encourage you to understand the difference between “draft” and “in press”, or in modern terminology “forthcoming”. See:
          “A. Sample Citation and Introduction to Citing Forthcoming Journal Articles”, National Center for Biotechnology Information.

          Forthcoming material consists of journal articles or books accepted for publication but not yet published. “Forthcoming” has replaced the former “in press” because changes in the publishing industry make the latter term obsolete.

          Do not include as forthcoming those articles that have been submitted for publication but have not yet been accepted for publication.

        • HAS
          Posted Sep 21, 2012 at 5:01 PM | Permalink

          It is useful to understand what was done here simply at the level of what technique was used, but the more interesting question from the point of view of the quality of the research is why was that technique used and what does it contribute to our understanding of the matter in hand?

          My issue frankly is first and foremost that the provenance of the data was simply insufficient unto the ends. But if we put that aside the way it got tortured was more so.

          We have one set of data. First components are identified and called constructs. In some areas there is earlier work on these constructs on which the questionnaire is based (free market, some aspects of conspiracy), in others they are novel.

          Now we don’t see the previous scholarship imported into this research in terms of factors, nor a discussion of the impact of limiting the conspiracy construct, so in fact this data set is used to recreate these relationships again. Note that factor/construct extraction is a pretty subjective game, so the value only accrues over time (much testing and standardisation – think IQ tests).

          So with a set of factors induced from the data, they are run against the data again to produce relationships between the (reasonably subjective) constructs.

          All good clean fun for developing some hypotheses to test on the real word by others (or validate against data held out from the sample for this purpose).

          But not much fun to stop there. The authors move on to “predictions” within the data set, and various claims of empirical evidence for relations between this “conspiracist ideation” construct (only ever developed from this data set) and various other factors.

          Perhaps this is all acceptable in bits of psych where studies of middle class psych students getting credits for their participation seems widely regarded as an acceptable basis to create constructs to apply to the population at large (I must check if the economic constructs are based on work with those samples).

          Perhaps O/T for climate audit, but then again there are fit for purpose audits as well as compliance audits.

        • Posted Sep 21, 2012 at 6:17 PM | Permalink

          Re: faustusnotes (Sep 21 05:08),

          The thing is, if you want to audit someone’s paper before it’s published, you need to get smart. But you’ve been playing to the gallery here, and you haven’t had to lift your game for a while, you don’t even check your inbox properly before you make accusations. You’ve been getting sloppy. So once again, I suggest that you wait for the full paper, because right now you’re digging yourself a deep hole. Once the paper is out you’ll be able to find lots of flaws (I can see many) and spin them however you like; but I suspect the problems you’re claiming to find now aren’t going to stand up to scrutiny once the final paper is out. Lewandowsky knows that, hence his smug tone – you would do well to pay heed to it.

          If you don’t want your paper critically reviewed before its published then don’t publish it, and don’t provide it to, and promote it in, the media.

          You want it both ways. That Lewandowsky can write a paper with an unprofessional and ridiculously sensationalized title, supported by a tiny thin thread that disappears altogether when the clearly fake responses (as opposed to those merely “suspect”) are removed. And can publish and publicize that same paper – with the inflammatory and arguably false headline claim – which we all know was written purely for the media response it would garner. And yet be free from all scrutiny and review – free from defending the paper’s methods, work and conclusions.

          The title is certainly ridiculous … as is your assertion he should have free rein to disseminate the paper and promote it in the press, with the resultant obedient articles painting skeptics as conspiracy nutters and the like, all while being entirely unaccountable and not subject to review or criticism. That he should be allowed to reap the benefits while not having to respond to critical review and criticism of his work.

          Lewandowsky made the choice to release the paper and publicize and promote it. Yet you think everyone should sit on their hands – and are all vile people for not waiting until the publication Lewandowsky has claimed HAS occurred, which more likely than not will never occur.

        • faustusnotes
          Posted Sep 21, 2012 at 6:56 PM | Permalink

          calb et al, I think you have misunderstood my use of the term “hacking.” Keep your shirts on.

          joannenova, I agree with you that it would be better if journals embargoed their work until publication. But have you checked whether Lewandowsky is the one who did the original publicizing? It was buried for a while, wasn’t it? Did he have a post about it on his blog?

          What I find much more offensive, though, is this self-styled auditing. Doing it before you have the full methods at your disposal enables you to claim the study can’t be duplicated without good reason. It’s spin, rather than science. There are other ways it can be done, but Steve hasn’t used them. Why not?

        • Steve McIntyre
          Posted Sep 21, 2012 at 8:38 PM | Permalink

          Faustus, do you agree that Lewandowsky should be obliged to make a warranty on data integrity along the following lines:

          Author Contributions: Dr Tsubokura had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

          Is it your opinion that Lewandowsky could honestly make such a warranty?

        • HAS
          Posted Sep 22, 2012 at 2:50 AM | Permalink

          In my comment (lost somewhere above in this tentacle) I suggested L et al hadn’t imported previous scholarship into their analysis.

          One little comment in the paper hadn’t quite registered in this regard:

          ” ..the model in Figure 1 exactly replicated the factor structure reported by Lewandowsky et al. (2012) using a sample of pedestrians in a large city.”

          Now L et al. (2012) is Lewandowsky, S., Gignac, G. E., & Vaughan, S. (2012). “Climate science is not alone: The pivotal role of perceived scientific consensus in acceptance of science.” Manuscript submitted for publication.

          The model in Fig 1 (see paper) is described thus: “Latent variable model for the relationship between perceived consensus among scientists and acceptance of scientific propositions.” Given the quote from the current paper one can only assume that the L et al. (2012) analysis replicates the current paper down to the same questions, factors and results – only difference being the sample.

          What is curious then is why L et al derived factors/constructs ab initio in the current paper rather than testing the ones they already had from L et al. (2012). This would deliver a much more robust result.

          I guess we’ll have to wait and see if L et al. (2012) sees the light of day.

        • DGH
          Posted Sep 22, 2012 at 8:42 AM | Permalink

          faustusnotes

          I for one look forward to seeing the supplement that you mentioned. I note that of the 103 articles referenced on Dr. Lewandowsky’s website only 2 or 3 have supplements. Apparently most of his papers are more forthcoming with details on methodology, source code, data, etc., than this one.

          As to your question, “But have you checked whether Lewandowsky is the one who did the original publicizing?”

          How does that matter? Whether or not Lewandowsky was the original party responsible for releasing the news before the paper was published the news is out, the paper is available, it’s weak on detail, and the supplement is unavailable.

          But for the record, both the Association for Psychological Science and Dr. Lewandowsky have promoted this paper in advance of publication.

          http://www.psychologicalscience.org/index.php/news/were-only-human/a-climate-for-conspiracy.html (which also appeared on Huffington Post)
          http://websites.psychology.uwa.edu.au/labs/cogscience/Publications_Main.htm

          Dr. Lewandowsky, his co-authors, and their friends cannot stand behind the “it’s all in the supplements” defense. If the supplement is so important, where is it?

        • Carrick
          Posted Sep 22, 2012 at 11:24 AM | Permalink

          faustus:

          AndyL, I’ve never commented at SKS, what are you talking about? I didn’t know that L et al were using PCA all along, but it’s a pretty common practice in the social sciences: use eigenvalues from PCA to identify number of factors to retain, then do factor analysis on those factors. If Mcintyre had even a passing familiarity with the social science literature he would have known that too. Why should I do his work for him? I already pointed out to him on his last train-wreck thread that he needs to up his standard to an undergraduate level – not my problem if he can’t.

          Nice combination of childish behavior combined with regurgitating a wikipedia article. Well done sir.

          Your type of blathering works when the person you are blathering to understands less than you do, in which case the few words you comprehended from your cursory reading of the wikipedia article makes you sound semi-authoritative.

          I don’t know what you are trying to accomplish by being so juvenile, other than to make everybody think you are an immature prat. If that’s your goal, it’s worked. (And don’t bother responding, I won’t be listening.)

    • GrantB
      Posted Sep 21, 2012 at 3:40 AM | Permalink

      Why? It’s not referenced in the paper. Is this a magical mystery tour?

      • faustusnotes
        Posted Sep 21, 2012 at 5:26 AM | Permalink

        It’s certainly becoming a magical mystery tour of skeptic manias. The reason Mcintyre needs to look it up is because he will then find a range of papers discussing the best method for choosing the number of factors, and with a bit of practice might be able to learn how to do this sort of thing.

        Steve: I am familiar with these issues and have published peer reviewed articles in which such selection has been discussed. As you say, there are a variety of methods. However, choosing a method also depends on the nature of the data. In this particular study, I’m thus far unable to see how this issue impacts downstream results.

        • AndyL
          Posted Sep 21, 2012 at 6:22 AM | Permalink

          Deciding how many factors to use in PCA is a subject Steve is very knowledgeable in; he has written peer-reviewed articles which include discussion of this.

          The problem here is that L et al did not disclose they were using PCA, and you chose not to enlighten us

        • faustusnotes
          Posted Sep 21, 2012 at 6:25 AM | Permalink

          sorry AndyL, I already replied but I’m caught in moderation. Don’t you think it interesting that your hero here, The Great Auditor, couldn’t work this out for himself but some random stranger who hasn’t read the paper could? It’s as if there’s some domain-specific knowledge he lacks …

        • AndyL
          Posted Sep 21, 2012 at 6:58 AM | Permalink

          He did work it out eventually using trial and error. You got their first.

          Steve’s first article on replication explains how his initial attempts using the method as stated in the paper failed. If domain-specific knowledge includes using one technique and calling it something else, then yes he lacks it.

        • AndyL
          Posted Sep 21, 2012 at 7:04 AM | Permalink

          their –> there

        • faustusnotes
          Posted Sep 21, 2012 at 7:10 AM | Permalink

          Domain specific knowledge in this case means knowing how social scientists usually do factor analysis. Steve doesn’t know, which is why it was obvious to the rest of us but he was caught fumbling around in the dark.

        • faustusnotes
          Posted Sep 21, 2012 at 7:29 AM | Permalink

          Steve: you can’t see how this issue impacts downstream results, but Oberauer states that factor selection methods don’t affect their conclusions at all. Perhaps the online supplement to their article will contain extensive sensitivity analysis, which will be a bit embarrassing for you, won’t it?

          You have published peer reviewed articles whose factor selection method is controversial. Even if it wasn’t, different methods are used in different fields and you can’t just rampage in and pretend you know what you’re doing when you clearly don’t. You should have waited until the full article was published, so you could follow their methods in detail, or checked your work with someone who understands the field. snip

        • DaveA
          Posted Sep 21, 2012 at 7:37 AM | Permalink

          He didn’t want “the best method”; he wanted to know THE method Lewandowsky et al had used (but you know this). Why don’t they provide sufficient information in the paper to enable replication without a trial-and-error exercise such as Steve has gone through?

          Confronted over at ShapingTW you finally admit – though via a flippant jibe – the existence of the false lead which Steve talks about,

          “Brandon, maybe he discusses it in order to mislead Mcintyre into another fuming post that makes him look bad?”

        • faustusnotes
          Posted Sep 21, 2012 at 7:40 AM | Permalink

          DaveA, they do provide sufficient information in the paper, but the paper hasn’t been published yet. Perhaps if you wait just a month or two (you can do it!) then you might be able to find out exactly what their factor selection method is.

          You know full well that jibe was about the blog post, not the paper.

        • Posted Sep 21, 2012 at 11:25 AM | Permalink

          “Domain specific knowledge in this case means knowing how social scientists usually do factor analysis.”

          And climate ‘scientists’ have their own ways of doing statistics too, right?

          Why do these two fields not use the same statistical and mathematical methods as the rest of the world? And why do they object so strongly when someone points out problems with their methods?

        • David L. Hagen
          Posted Sep 21, 2012 at 9:19 PM | Permalink

          faustusnotes
          Re: ” but the paper hasn’t been published yet”
          See links above where Lewandowsky posts and lists their paper as “in press”, indicating that it has been accepted and that the posted version is a preprint of what will be published.
          Re: “you might be able to find out exactly what their factor selection method is.”
          Only if they change or withdraw their paper in light of McIntyre’s audit or if the editor wises up and requires them to do so. Otherwise their “mistakes and all” posted version will be published.

        • RC Saumarez
          Posted Sep 22, 2012 at 9:32 AM | Permalink

          I must be incredibly stupid, but I thought PCA was PCA and Factor Analysis was Factor Analysis.
          I would have thought that the majority of numerate people would agree with this, but apparently social scientists are different – they use the terms interchangeably. Is this a fixed rule in the social sciences, or can it be done a la carte?
          Does this imply that when Prof L uses PCA, his results mean sweet FA?

  10. Posted Sep 21, 2012 at 1:24 AM | Permalink

    Congratulations to Steve on figuring out the ‘method’ – one of the only known cases of signal extraction in climatology.

    Mann has come to Lewandowsky’s defense on Facebook. (A mistake on Mann’s part, no doubt.)

    • Tony Mach
      Posted Sep 21, 2012 at 4:00 AM | Permalink

      And the whole lot accuse skeptics of being “rude” and “deceptive”.

  11. Nathan
    Posted Sep 21, 2012 at 1:27 AM | Permalink

    Fig 1 of the paper that they linked shows the process

    Click to access Factor-Analysis-Kootstra-04.PDF

    • Brandon Shollenberger
      Posted Sep 21, 2012 at 1:39 AM | Permalink

      This paper has a fascinating claim. First it says:

      After having obtained the correlation matrix, it is time to decide which type of analysis to use: factor analysis or principal component analysis.

      Then in the attached footnote, it says:

      There are more types of factor analysis than these two, but these are the most used (Field 2000; Rietveld & Van Hout 1993).

      This makes it seem that Lewandowsky’s position is that factor analysis and principal component analysis are both subsets of… factor analysis.

      • Skiphil
        Posted Sep 21, 2012 at 2:53 AM | Permalink

        Brandon, interesting, but later in the same paper the author seems to recognize theoretical differences between PC analysis and factor analysis (perhaps Lewandowsky and Oberauer are not clear on it though) and also says this:

        2.2.3.3. Which type of analysis to choose?
        According to Field (2000: 434) strong feelings exist concerning the choice between factor analysis and principal component analysis. Theoretically, factor analysis is more correct, but also more complicated. Practically, however, “the solutions generated from principal component analysis differ little from those derived from factor analysis techniques” (Field 2000: 434).

        • Paul Vaughan
          Posted Sep 22, 2012 at 11:58 PM | Permalink

          Conclusion:
          The simple misunderstanding between Steve & faustusnotes cannot be resolved efficiently even though it hinges on nothing more than a rotation from principal component extraction. Illuminating.

          Steve: No, it doesn’t. You’re pretending to know what you’re talking about, but you don’t.

        • HAS
          Posted Sep 23, 2012 at 12:18 AM | Permalink

          Paul Vaughan, are you saying that had SM performed a rotation (of some kind) following his original FA calculations he would have achieved the same results L. et al got from their PCA?

          If so I’d be interested to see a reference.

        • Paul Vaughan
          Posted Sep 23, 2012 at 9:59 AM | Permalink

          HAS: “Paul Vaughan, are you saying that had SM performed a rotation (of some kind) following his original FA calculations he would have achieved the same results L. et al got from their PCA?”

          I have no idea how you’ve gotten the idea that I’m suggesting that.

          What caught my interest was false statements on the (general) relationship between PCA & FA on the parallel WUWT thread.

          (I have NO interest whatsoever in the (specific) Lewandowsky issue as it does nothing to advance our understanding of natural climate variations.)

          FA subsumes PCA.

          FACT: One way of doing FA is to rotate following principal component extraction.

          The resistance to simply acknowledging this is more than a little creepy.

          Steve: In statistics – see Ripley – factor analysis and principal components have different matrix algebra. In statistics, principal components is not a subfield of factor analysis. Perhaps the usage in psychology is slovenly, but that does not make a well-defined distinction in statistics “creepy”.

          Here is a quote from one of your links http://psych.wisc.edu/henriques/pca.html clearly stating that explained variance from PCA is not the same thing as explained variance from factor analysis:

          Another difference between the two approaches has to do with the variance that is analyzed. In PCA, all of the observed variance is analyzed, while in factor analysis it is only the shared variances that is analyzed.
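
          To make the quoted distinction concrete, here is a minimal numerical sketch – synthetic items only, with numpy and scikit-learn’s FactorAnalysis standing in for whatever software was actually used: the eigenvalues of a correlation matrix sum to p (PCA partitions all of the observed variance), whereas the communalities from a factor model sum to less than p (only the shared variance is modelled).

            # Minimal sketch, assuming numpy and scikit-learn; the five items are
            # synthetic stand-ins, not the survey responses.
            import numpy as np
            from sklearn.decomposition import FactorAnalysis

            rng = np.random.default_rng(0)
            n, p = 400, 5
            common = rng.normal(size=(n, 1))                      # one shared latent trait
            X = common @ rng.uniform(0.5, 1.0, size=(1, p)) + rng.normal(scale=0.8, size=(n, p))
            Z = (X - X.mean(0)) / X.std(0, ddof=1)                # standardized items

            R = np.corrcoef(Z, rowvar=False)
            print(np.trace(R))             # = p: PCA decomposes ALL of this variance

            fa = FactorAnalysis(n_components=1).fit(Z)
            communalities = (fa.components_ ** 2).sum(axis=0)     # shared variance per item
            print(communalities.sum())     # < p: FA works only with the common part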

        • Carrick
          Posted Sep 23, 2012 at 10:04 AM | Permalink

          Paul, we’re still waiting for a reference.

        • Carrick
          Posted Sep 23, 2012 at 10:24 AM | Permalink

          And perhaps a little less cryptic explanation (aka “fully fleshed out version”) of what you actually mean. If you are intentionally writing obtusely you are doing a good job. You have to remember people come from different fields, and the language nuances of one field don’t always transfer smoothly to another. What you are seeing as push-back is really people saying they aren’t sure what you actually mean.

          I think I understand what you are saying, I’d just like confirmation we’re using the same terminology.

        • HAS
          Posted Sep 23, 2012 at 3:49 PM | Permalink

          Paul Vaughan, the reason I thought you were saying that was because you said “The simple misunderstanding … hinges on nothing more than a rotation from principal component extraction.” So I thought you were saying that what SM did was a rotation from PCE (what L. et al did).

          You now go on to say “FACT: One way of doing FA is to rotate following principal component extraction.” This seems to be implying SM’s results could be achieved from PCE by rotation.

          They are different (see SM above and multiple other comments on the thread and references) – even the lowest common denominator (Wikipedia) says “PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix.”

        • Paul Vaughan
          Posted Sep 24, 2012 at 8:10 AM | Permalink

          False presumptions & compounding misunderstandings are making discussion too cumbersome & inefficient. I have paid work to do. Misunderstand & distort as you see fit.

        • Carrick
          Posted Sep 24, 2012 at 9:14 AM | Permalink

          Shorter Paul: I know I can’t explain any of this clearly, which is why I was writing in this pithy style to start with in hopes nobody would notice.

        • HAS
          Posted Sep 24, 2012 at 2:44 PM | Permalink

          Et tu

        • Paul Vaughan
          Posted Sep 28, 2012 at 1:50 PM | Permalink

          “Steve: No, it doesn’t. You’re pretending to know what you’re talking about, but you don’t.”

          This is a misunderstanding &/or misrepresentation. Review my comments with more care & less false presumption.

          Steve: Perhaps you can clarify your point, because I’m unable to see how your comments respond to mine. As Ripley sets out clearly, the algorithm for principal components is different from the algorithm for factor analysis with k factors. Do you agree with this? If not, why?

      • faustusnotes
        Posted Sep 21, 2012 at 5:13 AM | Permalink

        Oh Brandon, Brandon. Did I not quote Johnson and Wichern at you before? Principal components (PCA) is an extraction method that can be used as part of factor analysis. Remember, the matrix of eigenvectors (loadings) obtained from PCA is only unique up to an orthogonal rotation. So you can always get factors of interest from PCA. Doing it using PCA is called (in the soc sci literature) a “principal component extraction.” I think when soc sci folks say “factor analysis vs. pca” what they really mean is “factor analysis by maximum likelihood estimation vs. factor analysis via pca” but the extra words are mathematically trivial.

        That quote from Field is pretty true, and Brandon and I saw an example of that over about 50 comments on STW today …

        Steve: this is untrue. Eigenvectors in principal components are usually extracted by SVD on the data itself (scaled or original). The eigenvectors can also be obtained by SVD of the correlation (covariance) matrix. Factors are obtained a little differently. Both factors and eigenvectors are loadings (weights) on the original data, but they are identical only in unusual circumstances. Accordingly the explained variance from the factor is not the same as the explained variance from the eigenvector. Perhaps, as you suggest, this confusion is endemic in psychology (though I would prefer to see clear evidence of such widespread confusion first), but that doesn’t change the mathematics. They aren’t the same thing.
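
        A quick sketch of the two extraction routes (numpy only; random data standing in for the survey items): the eigenvalues and eigenvectors agree whichever route is taken, which is why an SVD of the correlation matrix reproduces principal component results.

          # Sketch, assuming numpy; illustrative data only.
          import numpy as np

          rng = np.random.default_rng(1)
          X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # correlated columns
          Z = (X - X.mean(0)) / X.std(0, ddof=1)                    # standardized data

          # Route 1: SVD of the (scaled) data matrix itself
          U, s, Vt = np.linalg.svd(Z, full_matrices=False)

          # Route 2: eigendecomposition of the correlation matrix
          R = np.corrcoef(X, rowvar=False)
          w, V = np.linalg.eigh(R)
          order = np.argsort(w)[::-1]
          w, V = w[order], V[:, order]

          print(np.allclose(s**2 / (len(Z) - 1), w))      # same eigenvalues
          print(np.allclose(np.abs(Vt.T), np.abs(V)))     # same eigenvectors, up to sign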

        • faustusnotes
          Posted Sep 21, 2012 at 7:32 AM | Permalink

          Just for clarity here Steve, are you saying that the eigenvectors obtained from PCA are not unique up to an orthogonal rotation? Are you also saying that it is not possible to conduct factor analysis by extracting PCs and then rotating them? Please clarify.

          Note I nowhere said that factors and eigenvectors are identical. Don’t put words in my mouth.

          Steve: I don’t have time right now to give notes for Multivariate 101. The take-home point right now is that explained variance from factor analysis is not the same as explained variance from principal components. Lewandowsky incorrectly reported explained variance from principal components, as though it was from factor analysis. When you agree on this point, we can perhaps proceed to other topics.
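
          If a numerical illustration helps, here is a minimal sketch (numpy plus scikit-learn’s FactorAnalysis as a stand-in maximum-likelihood routine, on synthetic two-factor items – not Lewandowsky’s data) showing that the two “explained variance” figures are computed from different quantities and generally do not coincide:

            # Sketch, assuming numpy and scikit-learn; illustrative data only.
            import numpy as np
            from sklearn.decomposition import FactorAnalysis

            rng = np.random.default_rng(3)
            n, p, k = 500, 6, 2
            F = rng.normal(size=(n, k))                              # two latent factors
            X = F @ rng.normal(size=(k, p)) + rng.normal(size=(n, p))
            Z = (X - X.mean(0)) / X.std(0, ddof=1)

            # "Explained variance" the PCA way: top-k eigenvalues of the correlation matrix
            eig = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
            pca_share = eig[:k].sum() / p

            # "Explained variance" the FA way: sum of squared loadings over total variance
            fa = FactorAnalysis(n_components=k).fit(Z)
            fa_share = (fa.components_ ** 2).sum() / p

            print(round(pca_share, 3), round(fa_share, 3))           # the two numbers differ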

        • faustusnotes
          Posted Sep 21, 2012 at 8:41 AM | Permalink

          Lewandowsky correctly reported explained variance from principal components, as though it was from principal components. That you can’t easily tell the difference between them is your problem, not Lewandowsky’s (btw, I think Oberauer wrote that post).

          Can I confirm then, that you don’t understand the role of orthogonal rotations in factor analysis? That you are unfamiliar with the concept of “principal component extraction”?

        • Ged
          Posted Sep 21, 2012 at 11:42 AM | Permalink

          I think you need to read: http://www2.sas.com/proceedings/sugi30/203-30.pdf

          Some salient points: “It is inappropriate to run PCA and EFA with your data. PCA includes correlated variables with the purpose of reducing the numbers of variables and explaining the same amount of variance with fewer variables (principal components). EFA estimates factors, underlying constructs that cannot be measured directly. In the examples below, the same data is used to illustrate PCA and EFA. The methods are transferable to your data. Do not run both PCA and EFA, select the appropriate analysis a priori.”

          Or what about: http://psych.wisc.edu/henriques/pca.html

          Which states clearly: ” In contrast, when there are no guiding hypotheses, when the question is simply what are the underlying factors the investigator conducts an exploratory factor analysis. The factors in factor analysis are conceptualized as “real world” entities such as depression, anxiety, and disturbed thought. This is in contrast to principal components analysis (PCA), where the components are simply geometrical abstractions that may not map easily onto real world phenomena. ”

          It seems like you may not know as much as you think you know.

        • Paul Vaughan
          Posted Sep 22, 2012 at 5:23 PM | Permalink

          “Steve: […] The take-home point right now is that explained variance from factor analysis is not the same as explained variance from principal components.”

          There’s some misunderstanding here. Rotation has no effect on explained variance. I suggest very carefully reconsidering what faustusnotes is (accurately & correctly) saying about “factor analysis by maximum likelihood estimation vs. factor analysis via pca”. It’s possible that that’s the source of the misunderstanding, but the exact nature of the misunderstanding is not yet clear. (All that’s clear is that faustusnotes is saying things that are true while Steve appears to be saying things that are not true.) If this misunderstanding is not sorted out in a fair manner, that will leave a rather distasteful impression.

          Best Regards to All.

          Steve: as I’ve repeatedly observed, this is a very small point relative to Lewandowsky’s use of fake data.

          Nor have you given any examples of what is supposed to be incorrect in my exegesis.

          The explained variance from factor analysis for (say) two factors is different than the explained variance from principal components for two eigenvectors. The issue that we’ve raised is NOT due to rotation, but due to the matrix algebra: read, for example, Ripley’s discussion of the difference between the matrices of factor analysis and principal components.

        • HAS
          Posted Sep 22, 2012 at 7:34 PM | Permalink

          “Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most From Your Analysis” Anna B. Costello and Jason W. Osborne, Practical Assessment, Research & Evaluation, July 2005 http://pareonline.net/pdf/v10n7.pdf, has a useful discussion of the PCA vs MLE methods and an empirical example.

          The last couple of paras are also a useful word of warning to the likes of L. et al. who would seek to draw substantive conclusions based on EFA.

        • Paul Vaughan
          Posted Sep 22, 2012 at 7:56 PM | Permalink

          Maybe this is the source of the misunderstanding between Steve & faustusnotes:

          http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_factor_sect002.htm
          =
          “A frequent source of confusion in the field of factor analysis is the term factor. It sometimes refers to a hypothetical, unobservable variable, as in the phrase common factor. In this sense, factor analysis must be distinguished from component analysis since a component is an observable linear combination. Factor is also used in the sense of matrix factor, in that one matrix is a factor of a second matrix if the first matrix multiplied by its transpose equals the second matrix. In this sense, factor analysis refers to all methods of data analysis that use matrix factors, including component analysis and common factor analysis.”
          =

          I’ll leave it at that.

          Cheers.

          Steve: This is NOT relevant to the issue.

          The mathematical point here is very simple. Again I urge you to consult (say) Ripley e.g. quotes below, which show clearly that the loadings from factor analysis are solving something different than principal components.
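
          For anyone who wants to see the structural difference in a few lines of code, here is a hedged sketch (numpy and scikit-learn on illustrative data, not anyone’s survey results): principal component scores are exact linear functions of the observed variables, whereas the factor model treats the variables as loadings on latent factors plus unique error, so the estimated factors cannot reproduce the data.

            # Sketch, assuming numpy and scikit-learn; illustrative data only.
            import numpy as np
            from sklearn.decomposition import PCA, FactorAnalysis

            rng = np.random.default_rng(2)
            X = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))
            Z = (X - X.mean(0)) / X.std(0, ddof=1)

            # Principal components: scores are exact linear functions of the data
            pca = PCA(n_components=2).fit(Z)
            print(np.allclose(Z @ pca.components_.T, pca.transform(Z)))   # True

            # Factor analysis: data = factors x loadings + unique error, so the
            # estimated factors cannot reproduce the observations exactly
            fa = FactorAnalysis(n_components=2).fit(Z)
            print(np.allclose(Z, fa.transform(Z) @ fa.components_))       # False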

  12. Posted Sep 21, 2012 at 1:44 AM | Permalink

    You clearly show that Lewandowsky gave a false description of the statistical procedures he had used. When his false description resulted in you making a false assumption about what he’d done, instead of correcting his false description, he accused you of mendacity. I don’t understand why you describe this as a “minor peccadillo” compared with his use of fake responses.

    Forgive me if I’m being thick, but I don’t see how the clear intention to mislead (which is easily proven) can be considered a minor peccadillo compared to the use of (probably) faked responses. Surely no response can ever be certainly identified as fake or genuine?
    Given that the sampling method was absurd, practically guaranteeing that responses would be faked, it seems unnecessary to insist on the probability of this or that particular response being a probable fake.

    Steve: the misrepresentation of the explained variances seems minor. The false allegations/insinuations are, as I said, repugnant.

  13. John
    Posted Sep 21, 2012 at 2:07 AM | Permalink

    I seem to remember a Lewandowsky advocate posting recently here re the extraction of Factors=2 and loudly proclaiming ….

    “Checkmate”

    Do we have any advance on that, given this analytical debunking?

  14. Q
    Posted Sep 21, 2012 at 2:25 AM | Permalink

    Psychologists often use PCA and FA interchangeably. I expect that Lewandowsky understands the difference, though perhaps has only a hazy idea as to how they differ mathematically. It would have been helpful of him to have clarified this earlier.

    • Mooloo
      Posted Sep 21, 2012 at 4:34 AM | Permalink

      “Psychologists often use PCA and FA interchangeably”

      And there was me thinking that the reason for academic papers being so dry and hard to read was because of their excessive precision.

      This might wash in a blog comment, but in a peer-reviewed journal article?

    • Maus
      Posted Sep 21, 2012 at 3:51 PM | Permalink

      And Carpenters often use ‘tool face’ and ‘handle’ interchangeably. I expect that they understand the difference even if they are unsure which end of a hammer to hold.

      It’s an inspirational but unsatisfactory defense of an entire field lacking competency with its own toolset.

    • Carrick
      Posted Sep 21, 2012 at 4:05 PM | Permalink

      Q:

      Psychologists often use PCA and FA interchangeably.

      You might use the terms “interchangeably” or discuss them as if they were interchangeable, but you definitely can’t start with one method and then feed results from that method as input into the other, which is what it strongly appears Lewandowsky did here.

      Well, not and arrive at anything meaningful in the outcome. I don’t think what he appears to have done in terms of actual methods (as distinct from language use) is mathematically defensible.

  15. James
    Posted Sep 21, 2012 at 2:25 AM | Permalink

    @Geoff

    I agree. One should be careful of “minor peccadillo” cases in research ethics.

    I remarked earlier that Lew’s moderation style was unethical. Some posters commented that this wasn’t so important since few people read the comments. However, this is beside the point. As researchers we are required to always act ethically. Small slips are as unacceptable as large slips. The argument (a good one IMO) is that small ethics violations make slightly larger ethics violations acceptable (“it’s not so bad to do X, look we already do Y and that’s ok”) which in turn lead to even larger ethics violations.

    I’ve been appalled by Lew’s handling of this.

    • faustusnotes
      Posted Sep 21, 2012 at 5:18 AM | Permalink

      Lewandowsky doesn’t do the moderating, and there is no evidence that the moderation is something he has any control over – it may be a condition of using blogs associated with UWA. Have you checked that?

      Regardless of who is doing the moderation, it’s much more ethical than here. The rules of moderation are clearly stated, and any moderation act is explained in a little green box by the moderator. People at risk of being banned are instructed that this will happen and given multiple opportunities to change the way they comment. Here, on the other hand, comments get thrown into moderation and then just left there without being touched for days, until the conversation is dead, ensuring that only positive comments get through when most people are reading. The moderation here is capricious and biased, whereas at STW it is much more rigorous. You may not like rigorous moderation, but when you comment at Lew’s place you know exactly how to behave – no accusations of fraud, for example, which is why so many people here get banned. My comment comparing Mcintyre to a scalded monkey got heavily snipped. Should I be crying for my 1st amendment rights?

      • Posted Sep 21, 2012 at 7:19 AM | Permalink

        At Shaping Tomorrow’s World, all my comments have appeared on Lewandowsky articles (bar one).

        However, EVERY single comment of mine, however innocuous, has just vanished on Dana’s article… very odd behaviour.

      • MrPete
        Posted Sep 21, 2012 at 7:31 AM | Permalink

        Re: faustusnotes (Sep 21 05:18),
        At STW there’s plenty of evidence that those in favor receive light moderation, while questioners are snipped vigorously, arbitrarily, NOT strictly according to the rules.

        You like the rules there because they are applied lightly to you.

        Here, we have a different kind of problem. And you are completely correct about comments being “just left there without being touched for days.” It’s actually quite embarrassing that there are comments much older than that, which have not been reviewed.

        The difference: this is a 100% volunteer site. NOBODY here is paid for the time they spend working on this site. Yet, this is one of the most heavily visited blogs on the planet.

        It is quite rare here for someone to be banned. They receive plenty of notice over a long period of time. And their posts don’t just disappear.

        (One additional challenge, as long as I’m covering this: once upon a time we had the ability to move off-topic posts to a separate thread. That’s not been available since traffic skyrocketed post-Climategate. The new hosting arrangement eliminated that feature. Still hope to restore that capability some day.)

        OK… let’s get back to the science. Discussions of moderation policy seem off topic to me.

        • faustusnotes
          Posted Sep 21, 2012 at 7:33 AM | Permalink

          How do you know people are paid to spend time working on moderation at Lewandowsky’s site? Too many suppositions in too short a space, MrPete.

          I don’t like the rules at STW, but I recognize that they are not arbitrary. WUWT, on the other hand, has moderators running their own sockpuppets.

        • MrPete
          Posted Sep 21, 2012 at 7:53 AM | Permalink

          Re: faustusnotes (Sep 21 07:33),
          I’m not claiming people are paid for any particular role at STW. However, look at the STW “About Us” page. You will see the funding sources and the universities officially partnering to produce the site. It’s all there in black and white (and a bit of color on the partner logos 🙂 )

          In light of those public assertions, do you believe university policies require them to write their articles and responses on personal time, and that the STW site is 100% volunteer-driven? Seems to me it’s your suppositions exposed by your assumptions.

          Compare that to the zero dollar funding of CA other than occasional tip jar contributions covering non-blog costs.

        • faustusnotes
          Posted Sep 21, 2012 at 9:01 AM | Permalink

          I didn’t make any suppositions about whether or not the site is volunteer-driven. I just corrected yours. How do you know, incidentally, that the moderators have any connection to Lewandowsky at all? They could be central UWA staff, managing a slew of sites and applying the same community rules across all of them. Have you checked?

          I have no idea whether Lewandowsky has to write his blog posts on personal time, but I think the question is a bit moot: many academics work on weekends and holidays, and the distinction between personal and work time is very blurred. But you knew that, right, because you’re an academic with extensive experience in social sciences publications, which is why you’re involved with this high-quality auditing site?

        • Mark W
          Posted Sep 21, 2012 at 9:21 AM | Permalink

          As are you.

        • MrPete
          Posted Sep 21, 2012 at 11:09 AM | Permalink

          Re: faustusnotes (Sep 21 09:01),

          They could be central UWA staff, managing a slew of sites and applying the same community rules across all of them.

          Even your suppositions recognize the truth of what I am asserting, but of course you won’t admit it. STW is a low traffic blog that benefits from university resources in a variety of ways. They advertise that support on their own site.

          Meanwhile, CA is an extremely high traffic blog written and moderated by volunteers.

          You keep trying to put words in my mouth but I’m not buying it. You are the one making unsupported claims about my thought processes, unsupported claims about moderators at other web sites, unsupported claims about “hacking” L’s paper, etc etc.

          Arrogance only goes so far, sir. It certainly doesn’t put your skills in a good light. I suggest a pause to rethink your purpose in posting.

      • JamesG
        Posted Sep 22, 2012 at 4:08 AM | Permalink

        Well my comment on another thread here which discussed the various selection biases in the use of stats by climate scientists, some of which are even repeated in this post, was utterly disappeared after a day – no inline comment, no snip, no explanation, nada. So if there is any biased editing here I fail to see it.

  16. Coldish
    Posted Sep 21, 2012 at 2:36 AM | Permalink

    “…passim comment in Lewandowsky’s blog…”
    Do you really mean ‘passim’ (i.e. widespread, all over the place)- or should that be ‘passing’?

  17. tlitb1
    Posted Sep 21, 2012 at 3:19 AM | Permalink

    This is a great addition to the evidence of the opaqueness of Lewandowsky’s paper. I always assumed Lewandowsky wrung his numbers out of the dodgy data somehow, but I have been content to remain at enough of a layman’s distance from the “domains” here to think there is enough to dismiss Lewandowsky within the criteria of his own alleged domain of psychology alone.

    What does it take for the allegedly best psychological journal to have qualms about a paper?

    When you see such cosy, overt bias – obviously accustomed to getting plaudits and an easy ride – finally get a real level of scrutiny and break apart like this within the psychological domain itself, that seems enough to me.

    I note there are plenty of supporters of Lewandowsky who think this is all some elaborate scheme to trick “deniers” into making a psychological mistake that proves L’s point – Isn’t that enough to show these supporters that Lewandowsky has nothing to stand upon now until this sting is revealed? 😉

    Extraordinary

  18. fjpickett
    Posted Sep 21, 2012 at 3:20 AM | Permalink

    Coldish

    In this context, ‘passim’ is used to refer to previous instances – the ‘here and there’ sense rather than ‘everywhere’.

    http://dictionary.reference.com/browse/passim

  19. fjpickett
    Posted Sep 21, 2012 at 3:28 AM | Permalink

    So Lew is not only rubbish at statistics, but his constant misreading of other people’s reactions and apparent lack of self-awareness suggest that he’s not too hot at psychology, either. Time to retire, perhaps…

  20. Posted Sep 21, 2012 at 3:38 AM | Permalink

    Or else Lewandowsky, cognizant of how thoroughly compromised his results are by fake/fraudulent data, rather than thanking his critics for spotting defects and withdrawing his study, has decided to double down by trying to manufacture doubt about criticism of the degree to which his data and results have been thoroughly compromised in the “hope that no one would see through his manufacture of doubt.”

    Lewandowsky really should consider ditching his copy of Mann’s 12 steps to self-vindication and glorification.

    As one who studied psychology, albeit many, many moons ago, what boggles my mind is the extent to which Lewandowsky engages in so many classic exercises in (what was then commonly called) projection: the act of attributing to others actions an accuser cannot substantiate, but of which s/he is demonstrably “guilty”.

    Although I will concede that it’s quite possible that in the post-modernist era in which we now find ourselves, “projection” may well have been abandoned in favour of some obfuscatory psychobabble with which a “cognitive psychologist” could pepper his prose … without realizing what it really means.

    • Posted Sep 21, 2012 at 10:00 AM | Permalink

      The new psychology term in Northern CA for accusing others of one’s own machinations (‘Projection’) seems to be ‘Externalization’; it is less likely to be read as a reference to a technology.

      I find that in my discussions with friends struggling to understand why they are routinely falsely accused by a particular actor, the metaphor of that actor operating (unconsciously) as a ‘Projector’ of their own thoughts onto others elicits a quicker ‘Aha’ than ‘Externalizer’. Clearly ‘Externalization’ has fewer alternate uses than ‘Projection’, long used in math, and is thus less ambiguous in text usage.

      What text can capture the conspicuous contempt messages in the curiously inflected video speeches above? What a blessing to have avoided guys like this man as a professor in a required class.
      RR

  21. lurker passing through, laughing
    Posted Sep 21, 2012 at 3:55 AM | Permalink

    This gets better and better.

  22. Posted Sep 21, 2012 at 4:07 AM | Permalink

    Please tell me that someone who is keeping track of all this is updating the editor of the journal Psychological Science on Prof Lewandowsky’s accusations, irreproducible methodology and frankly odd behaviour.

    • Anthony Watts
      Posted Sep 22, 2012 at 10:15 AM | Permalink

      For those who are keeping track and wish to register a complaint about the statistical methodology being faulty (not to mention the sampling), you can contact:

      Professor Robyn Owens
      Deputy Vice-Chancellor (Research)
      The University of Western Australia, M460
      35 Stirling Highway, Crawley WA 6009
      Phone: +61 8 6488 2460
      Fax: +61 8 6488 1013
      Email: dvcr@uwa.edu.au

      • HAS
        Posted Sep 22, 2012 at 3:53 PM | Permalink

        Another way in is through the funding agency. L. is part funded through a Discovery Australia Linkage Project, LP120100224 “Creating a climate for change: from cognition to consensus” (you can find details on the Australian Research Council site). The administering organisation is the University of NSW, who have a contract with the ARC for this funding (the generic contract is on the ARC site). Assoc Prof Ben R Newell @ UNSW is likely the lead.

        Anyway, there are a number of points in the ARC contract that are possibly breached by L. et al. and the associated publicity around it. A quick scan suggests that those climate sceptics who feel aggrieved should review clauses 18.4 and 18.6 of the funding contract, which reference the Australian Code for the Responsible Conduct of Research (2007) (also available at the ARC web site).

        The sections dealing with conflict of interest (L.’s other blog interests); respect for research participants; reporting results; and communicating research findings (informing interested parties before the media) appear to have been breached. These are matters that could well be referenced, regardless of the contract, in any communication directly with UWA. The Code lays down the process for UWA to follow.

        However, while UWA may seek to balance Code compliance with academic freedom, there is the issue of the ARC contract under which L.’s activities have been part funded. It seems that UWA and the U. of NSW also have responsibilities in this regard that are not balanced by academic freedom, and the ARC as funder has a clear interest in breaches. These could all be approached by anyone who feels L.’s work has breached the Code (or any other part of the funding agreement), pointing out that these obligations are independent of academic freedom.

  23. Martin A
    Posted Sep 21, 2012 at 4:16 AM | Permalink

    This is a case of the profoundly ignorant, unaware of the depth of their own ignorance, taking on someone who specializes in exposing, in minute detail, ignorance which has been dressed up to appear as sophisticated statistical analysis.

    • Posted Sep 21, 2012 at 4:47 AM | Permalink

      Ding Ding Ding Ding Ding! We have a winner.

      • Anthony Watts
        Posted Sep 22, 2012 at 9:54 AM | Permalink

        Charles, tell him what he’s won…

        • Posted Sep 22, 2012 at 10:16 AM | Permalink

          It better not be a life time supply of “Rice A Roni” !!!

        • mwgrant
          Posted Sep 22, 2012 at 11:21 AM | Permalink

          Choice of Four year stipend to study cognitive science in the School of Pschology at the University of Western Australia

          OR

          a family-size bag of Doritos.

      • mwgrant
        Posted Sep 22, 2012 at 11:25 AM | Permalink

        Psychology — preemptive Doh!

  24. James Lane
    Posted Sep 21, 2012 at 4:50 AM | Permalink

    I’m a psychologist, and the difference between factor analysis and PCA was made crystal clear at Melbourne Uni c1980.

    PCA is not a subset of factor analysis. In fact they are separate chapters in my Uni textbook: Harris, R. (1975) “A Primer of Multivariate Statistics”.

    For various reasons, PCA is generally preferred in the social sciences, but it is unacceptable to describe it as “factor analysis” in a methodological description.

    • faustusnotes
      Posted Sep 21, 2012 at 5:23 AM | Permalink

      James, from Harris, R. (1975), A Primer of Multivariate Statistics, page 40:

      Both of these techniques – though authors differ in whether they consider principal component analysis to be a type of factor analysis or a distinct technique – can be used to reduce the dimensionality of the set of variables

      I guess university in 1980 was free for you, which is a shame because with a comment like this as evidence you could have got a hefty windfall from demanding your tuition fees back.

      • David L. Hagen
        Posted Sep 21, 2012 at 10:21 AM | Permalink

        faustusnotes
        For current scholarship see: Parimal Mukhopadhyay (2008) Multivariate Statistical Analysis, Sect 9.6, p 357 ISBN-10: 9812791752

        factor analysis and principal component analysis are very much related, both having the goal of reducing dimensionality of the data. However, there are some major differences between the two approaches. In principal component analysis, principal components are linear functions of the variables, while in factor analysis, the variables are expressed as linear combinations of factors.

        • Posted Sep 21, 2012 at 6:47 PM | Permalink

          That’s a nice succinct description of the difference. I’ll have to mentally bookmark it.

        • faustusnotes
          Posted Sep 21, 2012 at 7:00 PM | Permalink

          David, I was pointing out that James Lane’s claim doesn’t proceed from the source he referenced. I’m not sure why you’re going into this detail, however. Are you saying that Lewandowsky didn’t use Factor Analysis? They did. The sole issue is whether they used PCA or FA to identify the number of factors. There’s no indication in their work or their blog post that they don’t know the difference.

        • David L. Hagen
          Posted Sep 21, 2012 at 9:28 PM | Permalink

          faustusnotes
          I was showing you current scholarship clearly distinguishing between PCA and FA.
          From reading CA, and a little of Lewandowsky’s posts, I understand McIntyre’s audit as follows: Lewandowsky et al. state that they used FA, but their results are not replicable using FA. McIntyre discovered that they actually used PCA to arrive at their results, contrary to their described methodology. Either they are confused about FA vs PCA, or one author conducted the analysis in PCA and reported the results while the other described his usual method of FA, assuming that was how the results were arrived at. I look forward to Lewandowsky owning up to what happened and clarifying their method so others can reproduce it directly from the description.

      • GPhill
        Posted Sep 21, 2012 at 11:11 AM | Permalink

        Careful – a quick search of the “peer-reviewed” literature and texts brings up the following
        ————————————————
        Principal Component Analysis
        Ian Jolliffe 2002
        >>>>>>>>
        There are two parts of Chapter 6 concerned with deciding how many principal components (PCs) to retain and with using PCA to choose a subset of variables. Both of these topics have been the subject of considerable research in recent years, although a regrettably high proportion of this research confuses PCA with factor analysis, the subject of Chapter 7
        pg vi

        In many texts on multivariate analysis, especially those written by nonstatisticians, PCA is treated as though it is part of the factor analysis. Similarly, many computer packages give PCA as one of the options in a factor analysis subroutine. Chapter 7 explains that, although factor analysis and PCA have similar aims, they are, in fact, quite different techniques.
        There are, however, some ways in which PCA can be used in factor analysis and these are briefly described.
        There are, however, some ways in which PCA can be used in factor analysis and these are briefly described.
        pg xxii
        ——————————————————

        Leandre R. Fabrigar, Duane T. Wegener,Robert C. MacCallum,Erin J. Strahan
        Evaluating the Use of Exploratory Factor Analysis in Psychological Research
        Psychological Methods 1999, Vol.4. No. 3.272-299
        >>>>>>>>
        If the goal is data reduction, principal components analysis (PCA) is more appropriate. Many researchers mistakenly believe that PCA is a type of EFA when in fact these procedures are different statistical methods designed to achieve different objectives (for a discussion of the distinction, see Bentler & Kano, 1990; Bookstein, 1990; Gorsuch, 1990; Loehlin, 1990; McArdle, 1990; Mulaik, 1990; Rozeboom, 1990; Schonemann, 1990; Steiger, 1990; Velicer & Jackson, 1990a, 1990b; Widaman, 1990).

        Second, this article suggests that the quality of EFAs reported in psychological research is routinely quite poor. Researchers sometimes base their analyses on studies with less than optimal features, commonly make questionable choices when selecting analytic procedures, and do not provide sufficient information for readers to make informed judgements of the soundness of the EFA being reported.
        ———————————————————-

        • HaroldW
          Posted Sep 21, 2012 at 11:22 AM | Permalink

          Wow, the Fabrigar et al. article is quite prescient, isn’t it?

        • Posted Sep 21, 2012 at 12:15 PM | Permalink

          Great catch.

      • Carrick
        Posted Sep 21, 2012 at 11:54 AM | Permalink

        He’s previously admitted that he doesn’t understand PCA or factor analysis, and now we see the problem is worse….

        faustus doesn’t even understand what he’s quoting or why he’s wrong.

        Noobs. There’s a million of them.

      • TerryMN
        Posted Sep 21, 2012 at 12:09 PM | Permalink

        I guess university in 1980 was free for you, which is a shame because with a comment like this as evidence you could have got a hefty windfall from demanding your tuition fees back.

        Likewise, I’m sure.

      • James Lane
        Posted Sep 21, 2012 at 4:25 PM | Permalink

        Re: faustusnotes (Sep 21 05:23),

        Come off it Faustus. Whether PCA is a subset of Factor Analysis is a semantic issue. They are mathematically distinct techniques. If you are using PCA, then say so. If, as you suggest, Lew et al used BOTH PCA and FA, that just makes the confusion even worse.

        Back in the days of my uni education (for which you suggest that I ask for my money back) we were taught that the methodological description should contain all the information necessary to replicate the results. It’s no use appealing to some in-the-future SI that will reveal all. This paper is supposed to have passed peer review and be “in press”. This information should already be available.

  25. Solomon Green
    Posted Sep 21, 2012 at 5:02 AM | Permalink

    I have been trying to ascertain what Professor Lewandowsky studied at university. His CV reveals that he has a first degree, a Master’s and a Ph.D., but not what he majored in. His background is reported as having been “a software entrepreneur” before he became a professor of psychology. He has published more than 120 papers, but none of these appear to be on statistics. Does he have any statistical qualifications or is he, too, a “self-declared expert in statistics”?
    Since he obviously reads this website, perhaps he will enlighten me.

  26. Arkh
    Posted Sep 21, 2012 at 5:03 AM | Permalink

    Too much work has been put in a paper which should have been dismissed for one simple reason: using the results of public online polls as data is wrong.

    • NZ Groover
      Posted Sep 22, 2012 at 1:01 AM | Permalink

      +1

      Exposing Lew’s methodology is good for highlighting him as the amateur he is……but bottom line……a paper based on an online poll, c’mon, anybody who gives it any weight made up their mind a longgggggg time ago.

      An online poll!!!!! LMFAO.

      • Posted Sep 22, 2012 at 10:23 AM | Permalink

        Online polls:
        The “Ron Paul” supporters are experts at gaming online polls and group bombing online polls to incorrectly inflate support for the crazy Texan.
        Why should an online poll about global warming and conspiracies be expected to be any more true??

    • Posted Sep 22, 2012 at 4:12 AM | Permalink

      Re: Arkh (Sep 21 05:03),

      Usually yes … and somewhat true here … however, the point of the paper was carefully scripted – that it was to study “blog denizens” about their beliefs as they believe the skeptic bloggers have credibility and the ability to get information out effectively.

      The paper actually praises Steve McIntyre as one of those folks – which makes Lew’s attack all the more silly.

      In my research on online surveys I think you can use them for useful purposes as long as you understand the shortcomings, and spend a fair amount of effort reviewing for fraudulent responses.

      An important key is larger sample sizes – whether online or other. Larger sample sizes allow a more robust statistical analysis (or sometimes any statistical analysis) – more responses tend to find the true “signal”, as fraudulent answers do become noise at some point.

      That said, in the Lew paper’s case, while his claimed N of ~1,100 responses (out of 1,300+ total received) LOOKS like it would be enough, that is not the true response sample. His analysis was on skeptic responses – and there were only approximately 130-150 of those, I’m guessing. And many are suspect. Worse, the headline claim was based on 10 skeptic responses, and at least 2-3 of those are clearly fake – scam responses.

      The small overall sample of skeptic response data, coupled with the questionable provenance (that they were all ‘sourced’ thru strongly anti-skeptic blogs), and the tiny number of specific question responses, would seem to make it all but impossible to draw meaningful conclusions from this data.

      • jim
        Posted Sep 22, 2012 at 3:06 PM | Permalink

        A Scott

        To me, on-line surveys are opinion polls. Very useful, as you say. This survey in question is not a representative sample of “blog denizens”. It’s not at all suitable for scientific publication, even in the field of psychology.

    • jim
      Posted Sep 22, 2012 at 2:54 PM | Permalink

      +1. That self-reported data from an uncontrolled, unrepresentative web survey are the basis of a published paper is very funny.

  27. Posted Sep 21, 2012 at 5:12 AM | Permalink

    I agree with Arkh, but I am grateful for Steve’s efforts. Trivial as Lewandowsky’s paper is…he should not be allowed to get away with improper methods. Pointing out every error is the only way to ensure that we eventually get much sounder science from the climate community.

    • Mike Mangan
      Posted Sep 21, 2012 at 6:08 AM | Permalink

      Lewandowsky WILL get away with it. Who will call him out? The academic establishment? Journalists? He can make up the entire paper from whole cloth and it won’t make a difference. The conclusion will be eagerly embraced by warmists across the globe and be regurgitated as readily as the “97% of climate scientists” canard is.
      The problem is not the lying, incompetent “scientists”, it’s the individuals and organizations that bestow credibility on them.

  28. Adrianos Kosmina
    Posted Sep 21, 2012 at 6:52 AM | Permalink

    There is obviously a major problem with Australia’s higher education system – in Western Australia anyway, but I think you could extend this to Victoria and the University of Melbourne; remember the Gergis et al withdrawal, etc….

  29. Mickey Reno
    Posted Sep 21, 2012 at 8:41 AM | Permalink

    Excellent. I think Steve is training future scientists and statisticians on what open science communication looks like. Unfortunately, Steve’s fighting a raging fire, burning out of control. Think how much more effectively science could work if Steve (or other interested parties) had some input on the front side, when the trash barrel may be on fire but the fire has not yet spread. (sidebar: I wonder if I really meant to conflate Lewandowsky’s survey with a barrel of flammable trash?)

    In the future, if science publishing continues to drift toward the OpenSource style online model (which I think it should, and will), then people like Steve will get their say on the front end of an experiment like this, and designing a BS survey like Lewandowsky’s will be next to impossible.

    In the OpenScience environment I imagine, a discussion of the lack of subject and control groups would have stopped Lewandowsky dead in his tracks. His survey would have had objective questions which didn’t scare away the very population he claims he’s “studying.” And a specific hypothesis, as well as the statistical methods and analysis ostensibly “proving” the hypothesis, would have been written in advance and tested with the goal of falsification. In that environment, temptations to spin and gin up new hypotheses after the data has been gathered will be seen as misconduct, a priori.

    Of course, Lewandowsky will hate the environment I imagine, because in it, activists and propagandists like him will be marginalized and despised.

    When can we start?

  30. Lady in Red
    Posted Sep 21, 2012 at 9:02 AM | Permalink

    I also agree with Arkh: Lewandowsky’s work is beneath contempt, or notice. The idea, the design, was dishonest and, as with so many CAGW scientists, the data is also irrelevant. Data must *always* be shoehorned into the pre-ordained “results.”

    Sad.

    At the same time, McIntyre did this in record time. Doubt if it affected his squash schedule at all.

    I can only hope he enjoyed the challenge, like a particularly oversized sudoku puzzle, and savored the taste, as a dog might appreciate a well-chewed bone.

    Damn, but McIntyre is fun to watch! ….Lady in Red

  31. observer
    Posted Sep 21, 2012 at 9:06 AM | Permalink

    An amusing read:

    http://www.theregister.co.uk/2012/09/21/lewandowsky_trick_cyclist_rides_again/

    • Posted Sep 21, 2012 at 3:45 PM | Permalink

      Re: observer (Sep 21 09:06),

      The paper itself:
      Misinformation and Its Correction
      Continued Influence and Successful Debiasing

      Comments from it:

      Skepticism: A key to accuracy

      We have reviewed how worldview and prior beliefs can exert a distorting influence on information processing. However, some attitudes can also safeguard against misinformation effects. In particular, skepticism can reduce susceptibility to misinformation effects if it prompts people to question the origins of information that may later turn out to be false.

      Importantly … skepticism also ensured that correct information was recognized more accurately, and thus did not translate into cynicism or a blanket denial of all … related information.

      Taken together, these results suggest that a healthy sense of skepticism or induced distrust can go a long way in avoiding the traps of misinformation.

      Who knew? Lew. says skepticism’s a good thing ….

  32. scp
    Posted Sep 21, 2012 at 9:14 AM | Permalink

    “The reason why we were unable to replicate Lewandowsky’s explained variance from factor analysis was that his explained variance results were not from factor analysis, but from the different (though related) technique of principal components, a technique very familiar to CA readers.”

    Coincidentally, I recently finished a course in applied multivariate modeling. I happened to notice that in SPSS (and PSPP), PCA is located under the “Factor Analysis” menu. Also, PCA was covered in the “Exploratory Factor Analysis” chapter of our textbook. (Discovering Statistics Using SPSS, Field, 2009).

    If Professor Lewandowsky is an SPSS user, it would be easy for him to conclude from the menu structure that PCA is a form of “Factor Analysis”. (although our textbook clearly indicates that PCA is related, but different)

    This quote from our text is interesting – “… only factor analysis can estimate the underlying factors and it relies on various assumptions for these estimates to be accurate. Principal component analysis is concerned only with establishing which linear components exist within the data and how a particular variable might contribute to that component…. (p. 638)”

    I’m definitely a novice – so could be misinterpreting – but from that excerpt, I’m not sure whether the Lewandowsky study would be a valid use of PCA? It seems like limiting the eigenvalues to just the first one and extracting its factors is sort of a kludge.

    • Steve McIntyre
      Posted Sep 21, 2012 at 10:18 AM | Permalink

      Here’s a brief description of the difference between principal components and factor analysis by Brian Ripley, a distinguished statistician. http://www.stats.ox.ac.uk/~ripley/MultAnal_HT2007/PC-FA.pdf I urge faustus and others to read this.

      • Wayne2
        Posted Sep 21, 2012 at 12:19 PM | Permalink

        Agreed. Here’s another quote I got from a UTexas document for students using SAS’ PROC FACTOR:

        “Factor analysis as a generic term includes principal component analysis. While the two techniques are functionally very similar and are used for the same purpose (data reduction), they are quite different in terms of underlying assumptions.

        “The term “common” in common factor analysis describes the variance that is analyzed. It is assumed that the variance of a single variable can be decomposed into common variance that is shared by other variables included in the model, and unique variance that is unique to a particular variable and includes the error component. Common factor analysis (CFA) analyzes only the common variance of the observed variables; principal component analysis considers the total variance and makes no distinction between common and unique variance.

        “The selection of one technique over the other is based upon several criteria. First of all, what is the objective of the analysis? Common factor analysis and principal component analysis are similar in the sense that the purpose of both is to reduce the original variables into fewer composite variables, called factors or principal components. However, they are distinct in the sense that the obtained composite variables serve different purposes. In common factor analysis, a small number of factors are extracted to account for the intercorrelations among the observed variables–to identify the latent dimensions that explain why the variables are correlated with each other. In principal component analysis, the objective is to account for the maximum portion of the variance present in the original set of variables with a minimum number of composite variables called principal components.

        “Secondly, what are the assumptions about the variance in the original variables? If the observed variables are measured relatively error free, (for example, age, years of education, or number of family members), or if it is assumed that the error and specific variance represent a small portion of the total variance in the original set of the variables, then principal component analysis is appropriate. But if the observed variables are only indicators of the latent constructs to be measured (such as test scores or responses to attitude scales), or if the error (unique) variance represents a significant portion of the total variance, then the appropriate technique to select is common factor analysis. Since the two methods often yield similar results, only CFA will be illustrated here.”

        • Wayne2
          Posted Sep 21, 2012 at 12:22 PM | Permalink

          Another nice quote from that document:

          “The “eigenvalues greater than one” rule has been most commonly used due to its simple nature and availability in various computer packages. It states that the number of factors to be extracted should be equal to the number of factors having an eigenvalue (variance) greater than 1.0. The rationale for choosing this particular value is that a factor must have variance at least as large as that of a single standardized original variable. Recall that in principal components analysis 1’s are retained in the main diagonal of the correlation matrix, therefore for p standardized variables there is a total variance of p to be decomposed into factors. This rule, however, is more appropriate for PCA than FA, and it should be adjusted downward when the common factor model is chosen. In a common factor analysis, communality estimates are inserted in the main diagonal of the correlation matrix. Therefore, for p variables the variance to be decomposed into factors is less than p. It has been suggested that the latent root (eigenvalue) criterion should be lower and around the average of the initial communality estimates. The PROC FACTOR statement has the option MINEIGEN= allowing you to specify the latent root cutoff value. For example, MINEIGEN=1 requests SAS to retain the factors with eigenvalues greater than 1.”
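
          A minimal sketch of that retention rule (numpy only, on synthetic items – purely illustrative): count how many eigenvalues of the correlation matrix exceed 1.

            # Sketch, assuming numpy; two latent dimensions built into the toy items.
            import numpy as np

            rng = np.random.default_rng(4)
            n, p = 600, 6
            G = rng.normal(size=(n, 2))
            X = G @ rng.normal(size=(2, p)) + rng.normal(size=(n, p))

            eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
            print(np.round(eig, 2))
            print("retain (eigenvalue > 1):", int((eig > 1.0).sum()))   # Kaiser criterion

          As the quoted passage notes, the cutoff should be adjusted downward (toward the average communality estimate) if a common factor model rather than PCA is being fitted.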

  33. AJ
    Posted Sep 21, 2012 at 9:38 AM | Permalink

    Over at the ScienceBlogs, “A Few Things Ill Considered” asks:

    Is Steve McIntyre an expert statistician?

    http://scienceblogs.com/illconsidered/2012/09/is-steve-mcintyre-an-expert-statistician/

  34. KevinM
    Posted Sep 21, 2012 at 9:59 AM | Permalink

    At least you did not discover he reduced his sample size to one respondent to get a clear signal from the noise. The cleanest way to arrive at a desired result is to disappear enough data.

    Looks like he kept all the (bad) data and worked through to his (bad) result, which is more of a quality problem than an integrity problem.

    Confirmation bias is hard to escape.

  35. Tom Murphy
    Posted Sep 21, 2012 at 10:13 AM | Permalink

    “As matters turned out, Lewandowsky had made a different error – one that I had not guessed in my previous post, but one that pervaded his other factor analyses as well.”

    If I understand the sequence of events, this was not an “error” but instead a deliberate misrepresentation (on multiple occasions) by Lewandowsky on the methodology employed. And this phrase is nothing more than a polite euphemism for “lie.” McIntyre is being too kind when labeling this an error – especially when the scientific method demands intellectual honesty by focusing on the objective.

    “Lewandowsky repugnantly alleged that I might have intentionally rigged his re-‘analysis’ so that it deviated from our EFA’s in the hope that no one would see through his manufacture of doubt.”

    Isn’t this nothing more than Lewandowsky asserting that his paper’s conclusions are correct vis-a-vis the “desire” of McIntyre and Mureika to seemingly perpetuate the conspiracy? Of course, this tactic reveals more about Lewandowsky’s now apparent conspiracy tendencies than the alleged intentions of McIntyre and Mureika.

    When the dust settles, this is a classic example of, “If you can’t dazzle them with brilliance, baffle them with bulls*it,” by Lewandowsky with only lip-service paid to the scientific method. A true scientist would review the criticism constructively rather than respond subjectively by alleging a “rigged” result without offering supportive evidence.

    With respect to faustusnotes, “Domain specific knowledge in this case means knowing how social scientists usually do factor analysis. Steve doesn’t know, which is why it was obvious to the rest of us but he was caught fumbling around in the dark.”

    On the presumption the statement is true, credit should be given to the methodology employed by McIntyre, which was subsequently and thoroughly documented and reported by him for review by others (even those that think themselves intellectually-gifted). The same cannot be claimed of Lewandowsky, who actually published the article with only a sparse reference to “This finding replicated the factor structure reported by Lewandowsky et al. (2012)”. Even subsequently, a clarification of this factor structure could (arguably “should” given the attention surrounding the article) have been made by Lewandowsky through his online articles and comments (he had the time to mock and deride, why not actually educate), if it was so obvious that McIntyre and Mureika were referencing an incorrect methodology.

    If Lewandowsky (or even faustusnotes) wants to claim an intellectual edge or superiority over others equally competent in statistics, then do so professionally rather than in subjectively-emotional outbursts. Such statements only present a petty elitist with whom research would be difficult – at best.

    • David L. Hagen
      Posted Sep 21, 2012 at 10:32 AM | Permalink

      Tom Murphy
      Re: “a deliberate misrepresentation”
      A charitable interpretation would be that Lewandowsky et al. did not understand the difference, or that one author ran the program and the other wrote the paper. Please do not attribute moral turpitude to actions that have a more common explanation.

      • Tom Murphy
        Posted Sep 21, 2012 at 12:28 PM | Permalink

        “Would” is the operative word in that statement. Upon publication, the authors take responsibility for their work – errors and omissions included. And I understand that we’re eagerly awaiting final publication of the paper.

        However, the authors have made numerous online comments (i.e., multiple occasions) in support of the article referencing the paper. Their online comments failed to clarify the applied methodology, EVEN WHEN it was purportedly obvious (to a few gifted individuals anyway) that McIntyre and Mureika were “fumbling around in the dark.” Each occasion offered an opportunity for the authors to correct or (at the very least) redirect a misapplied methodology. And yet, they did not.

        This inaction is what allows for the attributing of moral turpitude to Lewandowsky et al., if only through a personal opinion.

        • Steve McIntyre
          Posted Sep 21, 2012 at 12:55 PM | Permalink

          I recently noticed that some journals appear to require authors to take responsibility for the integrity of their data. In a recent publication by Masaharu Tsubokura et al, entitled Internal Radiation Exposure After the Fukushima Nuclear Power Plant Disaster, published in the Journal of the American Medical Association, lead author Tsubokura vouched for data integrity as follows:

          Author Contributions: Dr Tsubokura had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

          Data integrity is what’s absent from Lewandowsky et al. None of the coauthors can honestly stand up and take responsibility for the integrity of the data – vouching that the data set does not include fraudulent responses and that the results are not contaminated by them.

          It would be interesting to hear how, for example, the coauthors of this sort of study took steps to ensure the integrity of the data. It’s also interesting that the journal Psychological Science does not appear to have similar standards.

          Perhaps Faustus can weigh in on whether Lewandowsky should be obliged to provide a similar warranty on the integrity of his data.

        • William Larson
          Posted Sep 21, 2012 at 3:30 PM | Permalink

          Re Steve McIntyre above: I take heart from Dr. Tsubokura here, in his example of “the buck stops with me”. In my estimation it is the Tsubokuras, the McIntyres and the Feynmans (to name three important ones) who advance, or keep hold of, the core principles of scientific investigation. By their example especially.

        • David L. Hagen
          Posted Sep 21, 2012 at 9:29 PM | Permalink

          Tom
          Thanks for clarifying those blog issues as the source of your comment.

        • DGH
          Posted Sep 22, 2012 at 5:18 AM | Permalink

          Data integrity? Let’s not forget that the authors excluded the data that was collected from three sites that are not blogs. They make no mention of this data or the decision not to use it in their paper.

          The invitation to post the survey read,

          “UWA researcher Charles Hanich is seeking participants for a web-based survey of attitudes towards climate science (and other sciences) and skepticism.”

          No mention of blogs here. But when the paper is written the methodology becomes,

          “Visitors to climate blogs voluntarily completed an online questionnaire between August and October 2010 (N = 1377). Links were posted on 8 blogs (with a pro-science science stance but with a diverse audience); a further 5 “skeptic” (or “skeptic-leaning”) blogs were approached but none posted the link.”

          As I understand statistics, larger samples are generally preferred. Indeed Dr. Lewandowsky has been reluctant to exclude certain questionable data. So why would he exclude presumably good data without mention?

        • Geoff Sherrington
          Posted Sep 22, 2012 at 6:54 AM | Permalink

          Re Steve’s comment just below, there is another major factor – accountability. In some types of employment, the ducks and drakes shown by Lewandowsky et al would result in either dismissal or in a request to make good the $$$ damage from their own pockets.
          As matters stand, accountability is determined by an officer of the University (who might not know of the Chinese Wall) and the taxpayer is expected to make good at least some of the extra costs incurred by the excursion into this realm of silliness.
          Academics and grant bodies for academics should not assume that the open check book is available now and will be even more available in the future.
          Responsibility as per Tsubokura and accountability as per the principles of fairness and equity should work naturally together.

    • Posted Sep 22, 2012 at 6:14 AM | Permalink

      Re: Tom Murphy (Sep 21 10:13),

      DGH – I suspect because someone said to him – “hey mate, are you a bloomin’ fool? You said you were reviewing the beliefs of “skeptical blog denizens” and blathered on about their relevance – ability to influence – the climate debate. What value will responses from students and staff here at the good ol’ U of W A and/or psych mailing lists have to that construct?”

      The likely response might have been – “well heck, we’ve only gotten at best 150 or so responses we can say are “skeptical” – and for some questions we only have 10 responses beneficial to our conclusions – they’ll laugh me outta the biz if I try and paint these skeptics as nutters with data of such small quantity and quality.”

      And the response – “how are you going to identify those responses – the only way you can use them is to lie.”

      “Aw bloody hell- guess you’re right – but we better not say a word about them or those damn skeptics will be at my throat about them ….”

      • DGH
        Posted Sep 22, 2012 at 7:37 AM | Permalink

        A. Scott

        To be clear, Charles Hanich never mentioned blogs. Even in his follow-up discussion with Pielke Jr he uses the phrase “web-based”.

        That the data collected from UWA should be rejected is the least surprising. In fact it was implied at SKS that the students in the laboratory had taken the survey, presumably through the university link.

        But the other two sites are more interesting. They would qualify as web based. And the respondents would be less likely to game the survey in that they entered through a non-climate related venue. If Dr. Lewandowsky was interested in skeptics’ attitudes towards science in general, this is probably his best data.

        But he excluded the data. And that leaves open many possibilities. Maybe nobody responded from those sites. He probably should have mentioned that, but OK. Or maybe Dr Lewandowsky preferred the results he yielded from the pro-science science stance blogs. I think they called that massaging the data to strengthen outcomes.

        See Smeesters here
        http://news.sciencemag.org/scienceinsider/2012/06/rotterdam-marketing-psychologist.html

        It’s hard to know because as much as Dr. Lewandowsky and his co-author have written about the study over at their blog thus far they have declined to address this question.

  36. Ed_B
    Posted Sep 21, 2012 at 10:41 AM | Permalink

    Good ship global warming hits McIntyre rock, again.

  37. owqeiurowqeiuroqwieuro
    Posted Sep 21, 2012 at 11:08 AM | Permalink

    It’s too bad that all of the data Lewandowsky collected was completely worthless because the paper really could have been something.

  38. Posted Sep 21, 2012 at 12:14 PM | Permalink

    Steve: around 20% identified themselves as “skeptic”, but some of these responses were fraudulent. The actual number of respondents appears to be much less than that. My guess is that over half of the “skeptic” responses were fake.

    How do you KNOW
    are you sure some of those calling themselves non-sceptic were fraudulent? (Fraudulent is not the right word – much too emotive – but that’s what you are trying to do: whip up a bit of name calling)

    Steve: I don’t “know” how many responses were fraudulent. Based on available information, I estimate that over half of the “skeptic” responses were fake. I’ve also stated previously that it was two-way: some “warmist” responses were fake as well. Lewandowsky’s failure to ensure data integrity was very comprehensive.

    Although, as you know, I’ve discouraged use of “fraud” as a word, in this case, I think that “fraudulent” is a technically correct word for the fake responses intended to deceive and that there is no point in pussyfooting around the issue.

    • David Jay
      Posted Sep 21, 2012 at 2:23 PM | Permalink

      Aw, come on Ford….

      How many skeptics do you think there are at:

      Deltoid
      Tamino
      Skeptical Science
      Bickmore
      Hot Topic
      Scott Mandia
      Ill Considered
      Trunity

      If you get to the fingers on your second hand you will be doing well.

      • Tom Murphy
        Posted Sep 21, 2012 at 5:48 PM | Permalink

        I believe cAGW skeptics do frequent the listed sites but not in appreciable numbers – well, not when compared to cAGW apologists. Catastrophic (c) is referenced because therein lies the misuse of science with regard to AGW. Regardless, the likelihood of a cAGW skeptic participating in a conspiracy theory-orientated research survey posted on either a known or readily-apparent pro-cAGW web site is remote. Couple these presumptions to the web hits commanded by the detailed web sites and the likelihood of any meaningful participation by cAGW skeptics recedes even further.

        This doesn’t mean that cAGW skeptics didn’t participate, but it does place the survey’s ability to collect significant data into context.

        As anecdotal evidence, I used to frequent OpEdNews.com to discuss a number of topics. Among these was the conspiracy theory that the 9/11 attacks were an “inside job” – the US government either made or let the attacks happen on purpose. This conspiracy was embraced openly by the web site editors and the overwhelming majority of its commentators. Naturally, I was (and still am) a skeptic regarding that particular position.

        On a few occasions, surveys would be posted, requesting participation for 9/11 opinion tracking purposes. However, the wording of the questions and their categorized responses were typically weighted (i.e., biased) to the point of discouraging a skeptic’s participation. Essentially, the few skeptics present “refused to feed the monkeys” just so the researchers could “confirm” their pre-determined conclusions. Not participating (or feeding) was the least harmful path of resistance available to the skeptics. As a result, the skeptic’s opinion was largely absent in the subsequent results, and it was obvious in the lopsided distribution curve of survey results (i.e., 9/11 actually WAS an inside job).

        Now, compare this example (although anecdotal) to the alleged participation of cAGW skeptics on opposing-view web sites, as referenced by Lewandowsky et al., and the results are decidedly opposite. Not only did the cAGW skeptics participate, they did so with apparent zeal and enthusiasm. Common sense alone dictates a disconnect, while the McIntyre and Mureika analysis confirms the same but in an objective and significant manner.

        • bernie1815
          Posted Sep 22, 2012 at 9:06 AM | Permalink

          I go to those sites. However, I do not trust the sites to maintain the integrity of my email information and, therefore, decline to leave comments. I certainly would not complete a survey at any of these sites.

        • TerryS
          Posted Sep 23, 2012 at 2:57 AM | Permalink

          I do not trust the sites to maintain the integrity of my email information.

          I set up a separate mailbox that I use solely for commenting at blogs because, like you, I don’t trust blogs with my real email address. I’ve been using this email address for many years now and have commented on multiple blogs on a multitude of subjects.

          I have only ever received 1 unsolicited email at that account (I’ve used it to contact blog owners). That was from the blog owner contacting me on behalf of another commentator.

          Despite the nature of some of the blogs I’ve posted on, they all seem to have a surprisingly high degree of integrity when it comes to commentators’ personal details.

    • manicbeancounter
      Posted Sep 21, 2012 at 2:51 PM | Permalink

      An alternative view might be that quite a few skeptics do visit these sites. However, given that skeptic views are attacked quite dogmatically, and adverse comments deleted, it is possible to hypothesize that such skeptics are not representative of the wider skeptic community. We do not know that. In a similar way, we cannot tell whether the climate change and New World Order conspiracy theories are more strongly supported by skeptics, as there are no contrasting conspiracy theories that “pro-science” people might believe in, such as secret funding of skeptics by big business.

      • Neil fisher
        Posted Sep 21, 2012 at 10:15 PM | Permalink

        “as there are no contrasting conspiracy theories that “pro-science” people might believe in”

        Heh – “big-oil funded deniers” springs to mind, despite the fact that Jo Nova and many others have comprehensively shown that, if anything, the disparity is in the other direction.

        The disparity is stunning: it seems to me that if you have the temerity to even question the “experts”, you are being funded by self-interested parties like oil companies, yet the blatant advocacy of financial giants for carbon trading is ignored – even lauded – despite the obvious conflict of interest.

  39. RayG
    Posted Sep 21, 2012 at 12:22 PM | Permalink

    I realize that what I am about to suggest would mean additional work on the part of our blog host and several commenters but hope that a few volunteers will take on the task.

    There is ample evidence that Lewandowsky et al’s sampling methods are problematic, as are the questionnaire, the lack of proper control groups, poor data handling, poor statistical analysis, conflation of statistical methodologies used, etc. Therefore, it seems appropriate to propose a letter to the provost or equivalent of UWA (their chief academic officer) outlining the problems as well as Prof. Lewandowsky’s unprofessional ad hom. attacks on those who identified the problems with his paper. In my experience this would be best accomplished in a 1 to 1 1/2 page letter describing the key points with the supporting material included as attachments. I would consider addressing the letter to both the provost and the editor of the journal. It would help if a senior Australian academic was among the co-signers. Finally, I would send it electronically but back it up with hard copies sent via snail mail with verification of receipt (the equivalent of the U.S. Return Receipt Requested form.)

    • Posted Sep 21, 2012 at 2:58 PM | Permalink

      I think copying the journal would be appropriate. As Psychological Science is a large, high impact journal (with few climate-related papers as far as I can see) they may be more objective than a University, which would be concerned with its own and the prof’s academic reputation.

  40. Posted Sep 21, 2012 at 12:35 PM | Permalink

    Interesting that “Skip”(*) Lewandowsky starts with argumentum ad verecundiam (appeal to authority) and does no more than to reveal that imperator non habet vestimenta (the emperor has no clothes).

    In a comment on the SavingScrewiShapingTomorrowsWorld blog, I asked 3 things. One of them was

    My pet statisticians would like to know which method you used to determine the number of factors. You seem to have overlooked mentioning it in your paper. Or did you explore the use of all the common methods and find no difference?

    I’ve not seen a response to that from either author in the comments. Commenters seem to be second-guessing what Skip and Klaus were thinking/writing/doing in order to “support” Skip, et al. Implausibly, a moderator may have snipped the response. Moderation does NOT appear to be being done by UWA staff outside of Skip’s sphere. Of course, Psych may be employing AI to detect abuse and that could easily result in Skip’s responses being snipped, given his usual tone. 😉

    (*) I think I’ll call him “Skip” after the large resonant vessel, usually full of rubbish.

  41. Steven Mosher
    Posted Sep 21, 2012 at 1:30 PM | Permalink

    Just a reminder

    http://www.universitypolicies.uwa.edu.au/search?method=document&id=UP12%2F25

    5.1 The University recognises the importance of research being communicated to other researchers, professional practitioners and the wider community. Ideally this would occur after peer appraisal. Where research is reported in the public media prior to peer review, the reporting must be based on the research data and findings.

    5.8 Deliberate inclusion of inaccurate or misleading information relating to research activity in curriculum vitae, grant applications, job applications or public statements, or the failure to provide relevant information, is a form of research misconduct. Accuracy is essential in describing the state of publication (in preparation, submitted, accepted), research funding (applied for, granted, funding period), and awards conferred, and where any of these relate to more than one researcher.

    5.9 All reasonable steps must be taken to ensure that published reports, statistics and public statements about research activities and performance are complete, accurate and unambiguous.

    ##############

    failure to provide relevant information
    failure to take reasonable steps to ensure that public statements are complete, accurate and unambiguous.

    #################

    We have seen Dr. Loo’s type of behavior before, when Steve had difficulty replicating Jones’ work and asked for code. In that case Jones wrote in an email that he knew why Steve could not replicate the results: the paper didn’t describe all the steps. The case here will be interesting since Dr. Loo took a departure from the ideal path. The ideal path, as called out in the policies, is that publication precedes promotion in the press.
    Perhaps the policy makers were aware of the dangers of touting results that haven’t been vetted, something that Muller and Watts have both been accused of. In those two cases neither individual is governed by a policy. In Dr. Loo’s case he is governed by a policy that calls out an ideal approach. Under section 5.8 I think it’s clear that Dr. Loo has made public statements about his research. And I think it’s clear that he has failed to provide relevant information. Under 5.9 there are reasonable steps he could take to ensure that the record is complete, accurate and unambiguous. Here is a clue: all verbal descriptions of statistical methods can be ambiguous. The code as run is unambiguous.
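
    To make that last point concrete, here is a sketch (not Lewandowsky’s code; lew and the column range simply mirror the snippet quoted elsewhere in this thread, and n.obs is an assumed sample size) of how one sentence – “the first factor explained X% of the variance” – maps onto two different computations:

      R <- cor(lew[, 1:6])

      # Reading 1: principal components – first eigenvalue over the number of items
      eigen(R)$values[1] / ncol(R)

      # Reading 2: factor analysis proper – sum of squared loadings of the first factor
      # over the number of items
      fa <- factanal(covmat = R, factors = 1, n.obs = 1000)
      sum(fa$loadings[, 1]^2) / ncol(R)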

    • TerryMN
      Posted Sep 21, 2012 at 2:34 PM | Permalink

      Steve — Related to that, in the “I wouldn’t hold my breath” category – I sent Lewandowsky a couple of questions awhile back in a cordial e-mail asking about site selection methodology and whether they did any IP analysis to determine whether anyone used known proxy servers. Instead of a response from him, I got this e-mail (yesterday):

      ——————————————-

      Dear Terry

      Thank you for your recent contact regarding your concerns about the paper “NASA faked the moon landing-Therefore (Climate) Science is a Hoax: An anatomy of the Motivated Rejection of Science” by Lewandowsky, S., Oberauer, K., & Gignac, C. E. (in press, Psychological Science).

      Your complaint was received and managed consistent with the University Policy on: Public Complaints. Information about the policy and the associated can be found at: http://www.complaints.uwa.edu.au/home/general#Whatis

      As a member of the University Executive and the Deputy Vice-Chancellor (Research), the matters were referred to me for consideration. I have completed my review and find that:

      1. The focus of Professor Lewandowsky’s research relates directly to his interest and expertise in scepticism and the updating of memory. As such, the topic of this paper is well within his remit and consistent with the University’s Code of Ethics and in particular the academic freedom of staff;
      2. The research was undertaken in a manner compliant with the University’s strict Human Ethics approval;
      3. A review of all correspondence to the blog sites was undertaken confirming the contact with blog sites;
      4. In considering the release of the names of the blog sites that did not respond to Prof Lewandowsky’s initial request, it is noted that Prof Lewandowsky was concerned that he would breach research ethics by releasing the names. Following receipt of legal advice, Prof Lewandowsky has now released the names of the blog sites; and
      5. The paper has been peer-reviewed and accepted in a high quality, international journal.

      Academic freedom is recognised and protected by this University as essential to the proper conduct of teaching, research and scholarship. Freedom of intellectual thought and enquiry and the open exchange of ideas and evidence are a University core value. It is the University’s expectation that all academic and research staff are guided by a commitment to freedom of inquiry and exercise their traditional rights to examine social values and to criticise and challenge the belief structures of society in the spirit of a responsible and honest search for knowledge and its dissemination.

      The University recognises your right to express your concern and I believe that we have appropriately considered the matters you have raised.

      Regards

      Professor Robyn Owens
      Deputy Vice-Chancellor (Research)
      [ I’ve redacted the rest of the sig / contact info ]

      ——————————————-
      My reply was:

      Hello,

      To clarify, my correspondence was not a complaint, I just had some questions after reading the paper. Still mostly unanswered, but not a big deal.

      Thanks,
      Terry

      • mondo
        Posted Sep 21, 2012 at 10:56 PM | Permalink

        I also sent a comment to UWA – a comment, not a complaint – and received exactly the same e-mail back from them.

      • Posted Sep 22, 2012 at 4:41 PM | Permalink

        It sounds like Owens pwnd you pretty hard.

        Care to share the correspondence that brought that response down on your head?

      • Dave Andrews
        Posted Oct 14, 2012 at 4:03 PM | Permalink

        Surely the reply says it all in its first paragraph, talking of Prof Lew’s interest in scepticism and the updating of memory which is consistent with the code of ethics and the academic freedom of staff.

        That is, even the UWA seem to have doubts about the value of his work.

  42. Posted Sep 21, 2012 at 1:32 PM | Permalink

    Lewdandorky’s work fails before he even sets up the questions on the free online survey site.

    You can’t conduct impartial research on a population you have an emotional antagonism to.

    As an academic trick-cyclist, Lew knows this, it’s psych-101.

    No professional detachment, chance of objectivity zero, breaches of professional ethics – inevitable.

    It’s a career train wreck.

    • Eric Barnes
      Posted Sep 21, 2012 at 7:13 PM | Permalink

      Yes. A career train wreck to an objective observer. When viewed through the climate science looking glass, it’s a glowing example of how committed you are to the cause.

      • Posted Sep 22, 2012 at 10:30 AM | Permalink

        Very sad, but unfortunately very true.

  43. RayG
    Posted Sep 21, 2012 at 1:37 PM | Permalink

    OT but entertaining. This year’s Ig Nobel prizes include one for detecting brain waves in dead Atlantic salmon. It just requires using the “right” statistical techniques.

    http://www.guardian.co.uk/science/2012/sep/21/ig-nobel-awards-dead-salmon

  44. Posted Sep 21, 2012 at 1:53 PM | Permalink

    Prof Lewandowsky sent copies to colleagues in mid July, and it was reported on then in various newswires.

    Dr Adam Corner, who received a copy, wrote about it in the Guardian in July (as in press, having passed peer review, to be published (ie FINISHED) in the journal).

    Dr Adam Corner tweeted on the 3rd September (when the Carbonbrief suggested that it was just a draft)

    @ajcorner
    @carbonbrief its not a draft study, its been peer-reviewed and is in press!

  45. Posted Sep 21, 2012 at 2:19 PM | Permalink

    I do wonder about the identity of “Dr. Faustus”… Well — it’s not that important, so never mind…

    Just a reminder that people here can make their own “deal” — here’s the recipe:
    http://en.wikipedia.org/wiki/Faust

    The story concerns the fate of Faust in his quest for the true essence of life (“was die Welt im Innersten zusammenhält”). Frustrated with learning and the limits to his knowledge, power, and enjoyment of life, he attracts the attention of the Devil (represented by Mephistopheles), who agrees to serve Faust until the moment he attains to the zenith of human happiness that he cries out to that moment to “stay, thou art so beautiful!” (Faust, I, l.1700) — at which point Mephistopheles may take his soul. Faust is pleased with the deal, as he believes this happy zenith will never come.

    Of course there are many version so you will need to pick that as well as your reward for faithful service or perhaps your punishment if you see it that way…

  46. MikeN
    Posted Sep 21, 2012 at 5:44 PM | Permalink

    We see in the PBS ombudsman article that Forecast the Facts labelled Anthony Watts a denier and conspiracy theorist. This was the goal all along.

  47. Posted Sep 21, 2012 at 5:49 PM | Permalink

    Tim Irwin
    Posted Sep 21, 2012 at 11:15 AM | Permalink

    Faustus,

    I am beginning to wonder if you aren’t the good professor himself posting anonymously….

    No Tim – faustusnotes is this — bloke

    Judging from his blog, he’s a man of strong, if predictable, opinions on many things ….. like climate sceptics…..

    Their aim is to deceive, to manipulate the scientific record to support their own dodgy aims, and to intimidate their political opponents. Their goal is to deceive, not to educate, but people who don’t understand the details of statistics will not be able to tell the lies from the half-truths unless they are shown, which is why these sites carefully prune out anyone who can dispute their misrepresentations. Thus does Mcintyre get a reputation as an “expert in statistics,” and Tony Watts gets to be seen as an authority on climate science even though he never even got an undergraduate degree in atmospheric physics. They are liars, and they are lying about an issue of fundamental importance to the future of the planet. In my book, that makes them scumbags, too.

    ……… and politics

    …it’s easy to mistake right wing thought for a belief in the fundamental inequality of individuals, when in fact they’re just reflexively using power to their own advantage (and that of their immediate peers) without considering broader philosophical issues at all. Romney’s politics is that of the shark, which doesn’t think about whether you’re “equal and equally rational” when it eats you: it just thinks about whether it can…Or maybe that’s too kind to Romney. Obviously a lot of these crazy Americans have got an ideology to defend their predatory behavior…

    Yet another academic with rather disturbing extremist tendencies.

  48. JamesD
    Posted Sep 21, 2012 at 6:05 PM | Permalink

    Tom Murphy: ““Lewandowsky repugnantly alleged that I might have intentionally rigged his re-’analysis’ so that it deviated from our EFA’s in the hope that no one would see through his manufacture of doubt.”

    Isn’t this nothing more than Lewandowsky asserting that his paper’s conclusions are correct vis-a-vis the “desire” of McIntyre and Mureika to seemingly perpetuate the conspiracy? Of course, this tactic reveals more about Lewandowsky’s now apparent conspiracy tendencies than the alleged intentions of McIntyre and Mureika. ”

    Tom, it is worse. Steve posted his R code. Therefore Loo is lying by asking his question. At that time, he knew absolutely what the problem was: Steve was not using PCA (for obvious reasons). Therefore Loo is practicing deceit. No debate on that point.

    Pay attention to these quotes by Loo also, note AFTER he was aware that Steve was not using PCA:
    ” How could Mr. McIntyre fail to reproduce our EFA?

    Simple: In contravention of normal practice, he forced the analysis to extract two factors. This is obvious in his R command line:
    pc=factanal(lew[,1:6],factors=2)

    In this and all other EFAs posted on Mr. McIntyre’s blog, the number of factors to be extracted was chosen by fiat and without justification.”

    “There are two[!!] explanations for this obvious flaw in Mr. McIntyre’s re-“analysis”. Either he made a beginner’s mistake, in which case he should stop posing as an expert in statistics and take a refresher of Multivariate Analysis 101. Or else, he intentionally rigged his re-“analysis” so that it deviated from our EFA’s in the hope that no one would see through his manufacture of doubt.”
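
    For what it’s worth, the number of factors need not be chosen by fiat or left to a program default; in R one might look at the eigenvalues or at factanal’s own sufficiency test before settling on a count. A sketch only (lew and the columns are placeholders; this is offered as neither Lewandowsky’s procedure nor Steve’s):

      ev <- eigen(cor(lew[, 1:6]))$values
      sum(ev > 1)                            # Kaiser-rule suggestion for the number of factors

      # factanal() reports a likelihood-ratio test of "k factors are sufficient";
      # one can increase k until the test no longer rejects
      for (k in 1:2) {
        fit <- factanal(lew[, 1:6], factors = k)
        cat(k, "factors: p-value =", fit$PVAL, "\n")
      }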

  49. JamesD
    Posted Sep 21, 2012 at 6:08 PM | Permalink

    So let’s get back to the hippo in the bathtub we need to notice: a “prominent” academic journal has agreed to publish an alarmist paper based on the results of one (1) online poll.

    • JamesD
      Posted Sep 21, 2012 at 6:42 PM | Permalink

      This paper should be renamed “OMG! Sceptics Doubt Moon Landing!”. Can they get any lower? I know it is irritating, but maybe Steve should just announce he’s not going to waste any more time on a “paper” written about an OMG! poll or take seriously people who publish it.

  50. Kenneth Fritsch
    Posted Sep 21, 2012 at 6:21 PM | Permalink

    Lewandowsky and faustusnotes have, after the fact, decided to explain what was done in selecting the factors used in a study that was from the start obviously flawed by the data used and further flawed as a study that certainly does not and cannot inform the worth of the papers on climate science and the IPCC reviews thereof. It is rather a thinly disguised attempt to influence policy on AGW mitigation by associating crackpots with reasonable skeptics.

    What I think is important to note about this late revelation about the statistical methods used in this study, and evidently others in the fields of psychology, is that the explanations could easily be obtained from a short summary of the use of factor analysis and principal components that could be read and understood by any layperson of reasonable intelligence. I found it unusual for a scientist working in a field and knowledgeable about the statistics involved in their work not to be very upfront, and wanting to be upfront, about explaining in detail how their methods were applied. I have known academics who could pull off the appearance of good knowledge or expertise on a topic by parroting real experts in rather assertive tones.

    Further, in my layperson view, I see a factor as something rather physical or in the case of the softer sciences something well defined while principal components do not have a direct physical or well defined character.

    • JamesD
      Posted Sep 21, 2012 at 6:46 PM | Permalink

      And I’ll add, they certainly should have been more “forthcoming” after the R code revealed Steve was not using PCA (for obvious reasons). Instead, they throw mud and deceive instead of doing the right thing. Or maybe they DIDN’T catch that that was the problem (even though he quotes some R code in his blog), which makes their expertise even more in doubt.

  51. simon abingdon
    Posted Sep 21, 2012 at 6:39 PM | Permalink

    And still this farce continues.

    Remember, the discussion is about a paper entitled “NASA faked the moon landing—therefore (climate) science is a hoax: An anatomy of the motivated rejection of science”.

    So the paper’s subject is “An anatomy of the motivated rejection of science”; in other words “An analysis of reasons for the rejection of [climate] science”.

    Instead of providing any in-depth analysis of his subject the author simply uses the absurd proposition “if NASA faked the moon landing” then “(climate) science must be a hoax” as a basis for gathering data to decide if both might be similarly-held conspiracy theories. And that’s it.

    How has this nonsense managed to command such widespread recent attention instead of being immediately and deservedly junked?

  52. Bertharry
    Posted Sep 21, 2012 at 7:19 PM | Permalink

    For those not in the know, the charming ‘faustusnotes’ is S— :
    snip

    S– has an MSc. in Statistics, so clearly he is an expert.

    Oh, I see – he was co-author on the paper about Fukushima that Steve referred to where the lead author was required to take responsibility for data integrity. I’m sure that was just coincidence, rather than Steve making some ironic point.

    • Posted Sep 21, 2012 at 11:08 PM | Permalink

      Some things go beyond irony. Steve was asking a very good question.

    • Tim
      Posted Sep 22, 2012 at 10:18 AM | Permalink

      (Maybe O/T)
      Should revealing people’s true identity be sanctioned? If some know a username’s identity, I can’t help feeling that it should be up to the user to reveal this information, not others.

      Personally I’d like to know who is saying what, but I realise that it may be difficult to reveal this in some circumstances in such a polarised debate.

      • Bertharry
        Posted Sep 22, 2012 at 3:19 PM | Permalink

        @ Tim:

        I gathered that RomanM knew who faustusnotes was, because he (RomanM) said:
        “S–, you need to drop your arrogant and abusive manner. It makes you look like a jerk.”
        (Couldn’t agree more with the sentiment, BTW).

        So given that some denizens here obviously knew who they were dealing with, I thought the rest of us should, too. It’s also fairly obvious that Steve knows exactly who he is.

        If anyone doesn’t want to be identified, they can resist using the same ‘handle’ when they comment as they do in their own blog.

        If anyone goes out of their way to be obnoxious and patronizing, they shouldn’t be too surprised if others take the trouble to find out if they have the stature to allow them to act in that way. (Er, decidedly not, in this case).

        I see S— has since commented that he interprets Steve ‘playing games with his identity’ as a ‘threat’. Which, along with the tone of his comments here, tells you about as much as you need to bother knowing about him.

        Steve: I asked faustusnotes whether he thought that Lewandowsky should warrant the integrity of his data, as the authors of the Fukushima study had done. I think that it is entirely reasonable to quote what people say in one venue to show inconsistency in another. F-notes had the option of simply agreeing or disagreeing.

        • Carrick
          Posted Sep 22, 2012 at 4:42 PM | Permalink

          S… didn’t exactly hide his tracks, did he?

          Looking at his degree level pretty much confirmed what I thought already.

        • faustusnotes
          Posted Sep 22, 2012 at 9:38 PM | Permalink

          Steve, things are getting a bit heated around here. I’ve obviously pissed off some people and I don’t think anything productive will come of my engaging on this thread anymore, but I’d like to make you a peace offer: delete the personal photos of me from here, and I’ll do an analysis of the data from the Lew paper as I would have done it if I were submitting it for publication. I’ll also do a separate post on what I think are the two main problems with the analysis.

          My analytical approach would have been quite different to Lew’s and (I suspect) more like yours. I think it might be a tad too traditional for some of your commenters, but if they’re interested, and you agree to delete the photos, I’ll do it.

          I think everyone’s done the issue of the data to death, and no one’s going to budge from their positions on that, but the data itself could be a rich source of some interesting insights beyond the restricted ones Lewandowsky has produced. Will you take me up on my offer?

          And to the rest of you who I pissed off, my apologies for my tone. I guess I came here using the same tone that I’m used to in some of my more contentious past posts on my blog (I get a lot of drive by attacks on one particular series) and it was unhelpful. So by way of mea culpas, I hope that Steve will take me up on my offer.

          Steve: will do. sounds constructive. I also have data from the WUWT survey and would welcome co-operation on this.

          I’ll do more than you requested and will remove all identification comments. I very much intended to reproach you for the difference in attitude on data integrity in your own work and in your attitude to Lewandowsky, but wasn’t going to do more than that by explicitly identifying you online. Some readers took matters further and since the comments were online, I left them up. But enough is enough.

          I can understand reasons why you don’t want your blog wars to be associated with work profiles. While you’ve been incautious, I don’t see any purpose in pushing it further and have removed all links to you and all name identification.

        • JamesD
          Posted Sep 22, 2012 at 10:07 PM | Permalink

          What “data” are you talking about? An OMG! online poll? Seriously man, take stock of the situation and let it go.

        • theduke
          Posted Sep 23, 2012 at 12:20 AM | Permalink

          It’s easy to spuriously attack Steve from the safe house of anonymity. Steve has no such advantage and probably wouldn’t avail himself of it if he did. Faustusnotes should now have some idea what it’s like to be a controversial public figure as opposed to one who lives in the carefree luxury of anonymity. It’s not easy being a target.

          That said, I commend faustusnotes for his offer to work together and for his apology to those he deliberately set out to offend.

        • sHx
          Posted Sep 23, 2012 at 1:19 AM | Permalink

          Steve, you, sir, are a champ.

          The best reply imaginable to FN’s offer.

        • HaroldW
          Posted Sep 23, 2012 at 3:57 AM | Permalink

          Steve,
          In line with your response to Faustusnotes (Sep 22, 2012 at 9:38 PM), it would be helpful to edit your post of Sep 22, 2012 at 4:42 PM, and Carrick’s of 6:41 PM & 6:45.

        • sHx
          Posted Sep 23, 2012 at 4:07 AM | Permalink

          With regard to FN’s “I’ll also do a separate post on what I think are the two main problems with the analysis.”

          If you are gonna do that, offer the post as an FN guest post on Climate Audit so as not to get the wars and blogs confused. If you cooperate with the WUWT survey too and if any of those efforts proves worthy of publication in a scholarly journal, then you’ll have to use your real name unfortunately.

        • faustusnotes
          Posted Sep 23, 2012 at 4:41 AM | Permalink

          Thanks very much for that Steve, I’m relieved. It turns out a bullet train is a very good place to get some stats done, and I’ve done much of it already. Could you point to your (or Lewandowsky’s, or both) definition of warmist vs. skeptic vs. skydragon (the ones you used to classify respondents)? With those definitions I think I can finish up an analysis, though I will probably be too busy to write it up tonight, and I should add I won’t be doing SEM, which is not something I’ve ever done before. I agree with HAS (I think) that an exploratory factor analysis should not then be fed straight into an SEM with the same data.

          I’d rather not do anything involving the revised survey – I think its having been launched in the context of this controversy means that it will be highly contaminated (everyone from both sides will have been watching their answers very carefully), and the extra point on the scale makes things messy. But I’m pretty confident my work, whether anyone likes it or not, will be replicable … from a technical perspective it’s nice to be able to compare the effect of the single extra scale point on the results, but it being a different sample makes this problematic too.

          theduke, anonymity has different value to different people, and some of us have no desire to be public figures!

          JamesD, I think an online poll is the only way to access data on an online community. If one accepts that the online “skeptic” community is special, in the sense of being a coherent and definable group of people with special properties, then I can’t see how you could get their opinion any other way. So an online poll may be a necessary tool for gathering data on them. Anyway, it’s all the data we’ve got, and some of us can’t help but analyze data when we see it …

        • sHx
          Posted Sep 23, 2012 at 5:20 AM | Permalink

          “I’d rather not do anything involving the revised survey”

          You’re not the Level 12 Paladin I thought you were.

  53. Paul Penrose
    Posted Sep 21, 2012 at 8:59 PM | Permalink

    While I appreciate the discussion on the fine points of PCA and EFA, given the serious flaws with collection of the data it’s really just rearranging the deck chairs on the Titanic, isn’t it? No analysis of any kind can produce anything useful with data of this kind.

    • Posted Sep 22, 2012 at 10:50 AM | Permalink

      Re: Matthew W (Sep 22 10:35), All Dr. Lew has to do to end this (and embarrass Steve Mc) is RELEASE THE DATA !!

      I disagree. It would take that plus a more respectful manner on his part to make me believe that his opinions and research would be worth consideration. Critics can be a source of valuable advice. When critics must endure scorn and derision to make the simplest of points it detracts from any scientific debate.

      Lewandowsky has insulted and demeaned people who he deems to be in opposition to his views. It would seem that he has done so from a weak — perhaps non-existent — foundation.

      • Scott Basinger
        Posted Sep 22, 2012 at 2:09 PM | Permalink

        His study clearly aims to promote bigotry. There’s no merit to this whatsoever.

  54. AntonyIndia
    Posted Sep 21, 2012 at 9:25 PM | Permalink

    Professor Lewandowsky and scientific neutrality, an off-screen official Myth Buster? Read his 9-page “Debunking Handbook”, first published in November 2011. Two pages are title and publishing info; his 6 main pages focus only on “climate myths”. “Climate myth leaflet” would therefore be a better title, but this cognitive scientist prefers quarrelsome titles as we now know.

    A quote: “For those who are strongly fixed in their views, being confronted with counter-arguments can cause their views to be strengthened. One cognitive process that contributes to this effect is Confirmation Bias, where people selectively seek out information that bolsters their view.” Does this professor ever look in the mirror?

    His main point: anybody who dissents from a climate science “consensus” has cognitive issues.

    He managed to affix the logos of the University of Western Australia and Queensland on this pamphlet, so I guess he has institutional back-up for this semi-political tract.

    • Posted Sep 21, 2012 at 11:12 PM | Permalink

      His main point: anybody who dissents from a climate science “consensus” has cognitive issues.

      And Pointman‘s main point still stands on that.

  55. Eugene WR Gallun
    Posted Sep 21, 2012 at 10:40 PM | Permalink

    Hey Faustusnotes —

    Lewandowsky went to the media with this paper and got the big headlines. And then you claim none should reply to it before it is peer-reviewed and full data given? And if Lewandowsky can’t pass peer-review? But the headlines have already been blasted in the press.

    So Lewandowsky gets to strike a low blow first and then if someone answers you attempt to discredit the one who has been punched in the balls?

    And his data is worthless. The paper is worthless. He is worthless. His paper is sewer science.

    Eugene WR Gallun

    • Posted Sep 22, 2012 at 6:19 AM | Permalink

      IT HAS passed peer review!! It is just waiting for a slot in the journal (which may be on hold now?)

      BUT it was absolutely peer reviewed and finished; Lewandowsky sent it to colleagues after it had been peer reviewed, and then issued press releases.
      Dr Adam Corner (who wrote about it in the Guardian) confirmed it HAS passed peer review.

      Which makes you wonder about the peer review, really – did no one pick up on the activist term “pro-science” blogs, and think to ask just which blogs were surveyed (ie the names of the 8 blogs are NOT listed in the paper)?

      • JamesD
        Posted Sep 22, 2012 at 10:09 PM | Permalink

        Gentlemen, I’m a broken record, admittedly. But a paper on an OMG! survey passed peer review? What the heck are we dealing with? It can’t get more ridiculous than this.

  56. Posted Sep 21, 2012 at 11:50 PM | Permalink

    Apparently all is not rosy in social psychology land:

    “According to the report, Smeesters said this type of massaging was nothing out of the ordinary. He “repeatedly indicates that the culture in his field and his department is such that he does not feel personally responsible, and is convinced that in the area of marketing and (to a lesser extent) social psychology, many consciously leave out data to reach significance without saying so.”

    http://blogs.discovermagazine.com/notrocketscience/2012/06/26/why-a-new-case-of-misconduct-in-psychology-heralds-interesting-times-for-the-field/

    • Posted Sep 22, 2012 at 7:22 PM | Permalink

      It would seem the more important and interesting stuff from your link (great find BTW) is in this – and in the Nature article it came from:

      “[Joseph Simmons] recently published a tongue-in-cheek paper in Psychological Science ‘showing’ that listening to the song When I’m Sixty-four by the Beatles can actually reduce a listener’s age by 1.5 years7. Simmons designed the experiments to show how “unacceptably easy” it can be to find statistically significant results to support a hypothesis. Many psychologists make on-the-fly decisions about key aspects of their studies, including how many volunteers to recruit, which variables to measure and how to analyse the results. These choices could be innocently made, but they give researchers the freedom to torture experiments and data until they produce positive results. [Note: one of the co-authors behind this study, Uri Simonsohn, has now been revealed as the whistleblower in the Smeesters case- Ed, 28/07/12, 1400 GMT]

      In a survey of more than 2,000 psychologists, Leslie John, a consumer psychologist from Harvard Business School in Boston, Massachusetts, showed that more than 50% had waited to decide whether to collect more data until they had checked the significance of their results, thereby allowing them to hold out until positive results materialize. More than 40% had selectively reported studies that “worked”8. On average, most respondents felt that these practices were defensible. “Many people continue to use these approaches because that is how they were taught,” says Brent Roberts, a psychologist at the University of Illinois at Urbana–Champaign.”

      • Posted Sep 22, 2012 at 7:37 PM | Permalink

        A.Scott: The Discover article links to a site called Retraction Watch. It’s OT because Faustusnotes was talking about differences in the scientific culture and practices, and in general the process of questioning the stats of scientific articles is well ingrained in many fields — though possibly in different ways in different fields.

        Rejecting papers on the basis that the results are ‘too good’ has merit, as you would expect meaningful significances to barely rise above 2 sigma in these types of fields, unless the survey methodology was very biased. I don’t know what significance Lew claimed, but the far greater uptake of the survey on warmist blogs vs skeptical blogs would be a major representativeness problem IMHO that smacks of “working” the study.

  57. jfk
    Posted Sep 22, 2012 at 12:22 AM | Permalink

    It seems that some people are offended by the use of the terms “fake data” and “fraudulent data” which could be interpreted to say that Lewandowsky simply made up the responses. Steve never said that, but it could be interpreted that way. I’m not sure what the best choice of words might be…isn’t there a name for someone who posts on an internet forum and states his opponents’ opinions in such an extreme way as to make them look ridiculous?

    Somehow the movie Borat comes to mind. Maybe all the people who filled out the survey pretending to be climate sceptics, in order to make skeptics look ridiculous, should be called Borats.
    -snip. other will invoke irrelevant discussion –

    • Steve Reynolds
      Posted Sep 22, 2012 at 8:30 PM | Permalink

      How about corrupt data?

  58. JamesG
    Posted Sep 22, 2012 at 4:25 AM | Permalink

    What I find odd in many of these discussions is the detailed nitpicking about mathematical methods which always turn out to be inapplicable in the first place. The tables of the data put up in an earlier thread showed that the vast majority of the conspiracy theorists were those who identified themselves as believers in AGW. In the light of that it doesn’t matter whether PCA or factor analysis was used or whether the data was fake, too sparse, biased or whatever: The hypothesis is fundamentally disproven just by using your grey matter.

    • JamesD
      Posted Sep 22, 2012 at 10:10 PM | Permalink

      Kind of like upside down Tiljander.

    • Posted Sep 22, 2012 at 11:10 PM | Permalink

      I can look at a list of graphs of the data or simply at the responses and see the basic relationships. Certainly that is a layman’s uneducated view. But I agree with you – if it doesn’t pass the simple layman’s common sense test, no amount of crunching is going to offer a meaningful conclusion.

      For example, there is no rational way that, with only 10 positive “Moon Landing is a Hoax” responses – whether compared with approx. 150 skeptic responses, let alone 1133 total responses – ANY useful or valid correlation can be derived.
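
      A toy simulation of that point (the counts come from this comment; everything else is invented for illustration and has nothing to do with the actual dataset):

        set.seed(1)
        n <- 1133
        skeptic   <- c(rep(1, 150), rep(0, n - 150))   # roughly 150 self-identified skeptics
        moon_hoax <- numeric(n)
        moon_hoax[sample(n, 10)] <- 1                  # only 10 hoax endorsements in total
        cor(skeptic, moon_hoax)                        # correlation when the 10 fall at random

        # Move just 3 of those 10 endorsements into the "skeptic" rows (e.g. fake
        # responses) and recompute: a handful of rows drives whatever correlation appears
        moon_hoax2 <- numeric(n)
        moon_hoax2[c(sample(1:150, 3), sample(151:n, 7))] <- 1
        cor(skeptic, moon_hoax2)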

  59. Mervyn
    Posted Sep 22, 2012 at 6:24 AM | Permalink

    Understand this… there are many teachers in academia who suffer from attention deficit syndrome. Lewandowsky is simply one of them.

    All Lewandowsky is doing is seeking his 15 minutes of fame! He is totally irrelevant to the catastrophic man-made global warming debate.

    What Lewandowsky has written is not worthy of any response. It is trash. He knows it is trash. Treat it as trash. I do.

  60. AntonyIndia
    Posted Sep 22, 2012 at 6:45 AM | Permalink

    Just found a “real” conspiracy case for professor Lewandowsky, only this time about many peer reviewed medical publications: a quote: “But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are, too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it’s not in anyone’s financial interest to conduct any trials at all.” See http://www.guardian.co.uk/business/2012/sep/21/drugs-industry-scandal-ben-goldacre about all these peer reviewed wonders of modern science.

    • DocMartyn
      Posted Sep 23, 2012 at 2:54 PM | Permalink

      I am shocked you are shocked.
      If you have a dog in the fight you are biased.

  61. silsleuba
    Posted Sep 22, 2012 at 7:04 AM | Permalink

    Be of Good Cheer!

    There is timely news on the wires …

    2013 is designated the “International Year of Statistics”.

    http://www.statistics2013.org

    YAY!

  62. Posted Sep 22, 2012 at 9:29 AM | Permalink

    Irritated by his lies, insults and condescension towards Steve, Anthony Watts and sceptics in general, I decided to politely challenge Faustus (aka Asst Prof S—) on his own blog, observing that someone whose professed main recreation is fantasy games may not be the most reliable source of political judgement (personal, I know, but he didn’t hesitate to get personal about Steve & Anthony).

    His initial retort was a foul mouthed rant with more unprintable expletives than I could easily count – including a couple I hadn’t even heard before!

    He’s now removed that and formally banned me and any further climate discussion from his blog as off topic (despite that thread’s title being “Censored by Climate Audit”).

    Quote:-

    September 22, 2012 at 10:36 pm

    I’ve just deleted a comment by foxgoose, my angry response, and a further comment by foxgoose. This blog is about role-playing games, fantasy and sci-fi. If you don’t like those topics, you’re welcome to go elsewhere. If you want to comment on topics unrelated to those matters, you need to show respect for the main theme of this blog and for the people who use it.

    The tone of debate here is taking a decidedly unpleasant turn, and I’m not going to encourage things to degenerate further. I’m not going to be allowing nasty comments, and I’m not going to be giving any more commentary on the Lewandowsky paper. Further nasty comments here will see the whole post and thread deleted. My blog is intended to be about RPGs and sci-fi/fantasy, with occasional excursions into science or healthcare when it interests me. I’m not interested in adversarial debate of the kind promoted by foxgoose, and I’m not interested in debates that become personal. I’m certainly not going to host comments insulting my main audience or the main reason for the blog’s existence, and I’m not going to see the main focus of this blog derailed by the kind of rudeness evidenced by foxgoose.

    We’ve noticed before how these sneering narcissists like Lewandowsky, Mann et al react violently and viciously to any mild criticism – but this is surely a new benchmark.

    If anyone thinks “narcissist” is a bit strong, remember this is a guy who has a public webpage devoted to — “Views of Me”.

  63. bernie1815
    Posted Sep 22, 2012 at 9:32 AM | Permalink

    How about an effort to redo Lewandowsky’s work in a more rigorous way? Perhaps we can arrange to have the survey completed by visitors to high-profile sites like WUWT, Climate Etc., and RealClimate. Of course, the survey would have to be significantly improved, both to be more suitable for the intended statistical analysis and to ensure quality responses to the survey itself. It sounds like there are plenty of survey design and statistics experts here – I designed my first online survey in 1998 and we have built our own online survey platform.

    • jfk
      Posted Sep 22, 2012 at 2:40 PM | Permalink

      I will do the data work if you design the survey. I’ve done a lot of work in statistics but not so much in surveys so I might look for guidance there.

      The biggest barrier I see is how to avoid the “grab sample” problem. We would need cooperation from blog proprietors. Ideally one would like to email the survey to a random sample of registered blog commenters.

    • theduke
      Posted Sep 22, 2012 at 5:00 PM | Permalink

      There was this over at Anthony’s a couple of weeks back:

      Replication of Lewandowsky Survey

      That may be different from what you have in mind, but it was an attempt.

      • theduke
        Posted Sep 22, 2012 at 5:18 PM | Permalink

        Maybe A. Scott can update us here as to his progress in collating the data. He gives a couple of updates near the end of the post. (Last one on 9/11, I believe.) The survey was offered for takers on Sept. 8th, so he must have made some progress toward final results.

        I took the survey. The one difference is that he offered an option/choice to give a neutral or “don’t know” answer to the questions. So instead of four possible answers to choose from as in Lew’s survey, there were five.

        • Steve McIntyre
          Posted Sep 22, 2012 at 6:08 PM | Permalink

          The inclusion of the extra choice much complicates a direct comparison.

        • Posted Sep 22, 2012 at 7:31 PM | Permalink

          Steve and team have been busy with the work in this thread and others on the original data and determining methodology, since Lewandowsky has not been forthcoming with it.

        • theduke
          Posted Sep 22, 2012 at 8:59 PM | Permalink

          I would still be interested in the results. Maybe after this whole thing blows over?

          Steve: I have the data and am looking at it and will report on it in due course. My priority right now is to fully analyse the Lew data. Unfortunately direct comparisons are complicated by the addition of a Don’t Know option.

    • j ferguson
      Posted Sep 22, 2012 at 7:57 PM | Permalink

      Bernie, a better-focused survey could be interesting.

      I’ve been perplexed by what seems to me the preponderance of conservative (in the US sense of the term) political outlook among commenters on the skeptic sites I follow. Are conservatives more skeptical? Or are people capable of suspicion of “commonly held beliefs” also more likely to be conservative? I suspect engineers are probably disproportionately conservative. I see a lot of engineers on the serious doubter sites, here, Lucia’s, and Jeff’s as well as on the technical threads which Anthony posts.

      I don’t care what people who think the moon landing was a hoax think about anything else, and I would marvel that any sentient being would, unless it was to build some basis for ridicule.

      It could be that the warmist view fits more comfortably with liberal (again US sense of the term) assumptions of how the world works. If that’s true then they would have no inducement to think more about it – for example, if you think man is a trespasser on the planet, then CAGW is just one more example. It all fits.

      • Posted Sep 23, 2012 at 9:42 AM | Permalink

        Paul Matthews, without using a word of research-speak or other jargon, you have summarized the prospects of a future survey pretty brilliantly.

      • Carrick
        Posted Sep 23, 2012 at 10:00 AM | Permalink

        j ferguson:

        I’ve been perplexed by what seems to me the preponderance of conservative (in the US sense of the term) political outlook among commenters on the skeptic sites I follow. Are conservatives more skeptical? Or are people capable of suspicion of “commonly held beliefs” also more likely to be conservative? I suspect engineers are probably disproportionately conservative. I see a lot of engineers on the serious doubter sites, here, Lucia’s, and Jeff’s as well as on the technical threads which Anthony posts.

        I suspect the real link is whether you favour a free market versus a government-controlled market… in the US that pretty well aligns you with conservative or liberal ideology at the same time, but lumps libertarians in with the conservatives.

        As long as the solutions proposed involve major government interference with the market, I think both conservatives and libertarians will not only resist these changes, but will regard with deep suspicion the argument given (the “scientific basis” in this case) for why this interference might be needed.

    • Paul Matthews
      Posted Sep 23, 2012 at 6:11 AM | Permalink

      I think repeating the survey would be pointless. It was clear to many of those taking the original survey that the agenda was to paint skeptics as conspiracy nuts. This makes any results from the original survey meaningless because of the unknown number of fake responses. Now, after all the fuss, everyone taking the survey would know the agenda, making any further results less than meaningless.
      Regarding the left/right believer/skeptic tendency, this has already been well established and is not in serious doubt I think.

      • j ferguson
        Posted Sep 23, 2012 at 8:43 AM | Permalink

        Paul Matthews,
        if this is a well-established tendency, does it track across many other issues? Can you think of any where it doesn’t? I started to wonder about this after sensing the disposition of the skeptics in the blogs I track. It could be that skepticism was never important to me until this, although I suppose I, too, have generally been skeptical of most things I read, at least at the mass media level.

        Realizing this is drifting off topic, there don’t seem to be any niggling doubts about conservative beliefs among the conservative types I encounter in this area.

      • Mooloo
        Posted Sep 24, 2012 at 5:25 AM | Permalink

        Regarding the left/right believer/skeptic tendency, this has already been well established and is not in serious doubt I think.

        Correlation is not causation.

        I say this as a left-leaning, liberal sceptic, who is annoyed every time I am painted as right-wing because I am sceptical of CAGW. And I’m hardly alone in that.

        That it is not “in serious doubt” doesn’t make it meaningful.

  64. Robin Melville
    Posted Sep 22, 2012 at 10:31 AM | Permalink

    I notice that the dread Dr Faustus is complaining bitterly about being censored from here. Given his rather challenging behaviour I’m hardly surprised. If he’s concerned about the generally mild-mannered handling here of his (at the very least serial) condescension, he should, just as an experiment, go to SkepticalScience and post a moderately challenging comment along non-warmist lines. He likes rôle playing games so there’s one he could try.

    As a fellow-traveller in the public health field of substance misuse he should realise that Aaron’s rod is hardly the best way to tease out complex truths in the intersection between science and public policy.

    Lewandowsky’s egregious paper is nothing more than an activist (social) scientist generating a few headlines and attempting to bolster the meme that “deniers” are pathological (in addition to being swivel eyed right-wing loons) and who’ll subscribe to any conspiracy — however phantastic.

    There are some inhabitants of this (not so much) and other boards who might truthfully fit the above stereotype. In the same way that warmist boards are infested with humourless, hectoring William Connelly drones and dewy-eyed earth-lovers cheering on their science heroes as they fight the evil carbon devils and their WUWT and JoNova orcs.

    None of this matters that much. The mainstream press will continue to cut and paste alarmist stories at a rate of approximately 100:1 vs cautionary ones. One suspects from his behaviour thus far Lewandowsky really doesn’t care whether his farrago gets published or not. If it does he, like Mann, will engage in trench warfare with “investigations” and “panels” to exonerate him and get sympathetic colleagues to publish “me-too” articles in order to keep his paper out there.

    Steve’s indefatigable work prevents their typifications from being unopposed home runs for which I, at least, am grateful.

    In the meantime the politicians and the public have much more urgent considerations to address. Hopefully, this whole thing will dwindle to a bizarre footnote in history. Before too long!

    Steve: I very seldom snip or censor critics, as has been conceded by even some severe critics (e.g. hengist), though I frequently snip or delete supporters for piling on. Some words trigger moderation. Faustus has posted 23 comments in the past few days. He is in a different time zone; some of his comments have triggered moderation filters. I’ve been fairly busy the past week with some business meetings and social events; my blog priority has been my own posts and I have therefore sometimes been slow in attending to the moderation queue.

    I don’t regard commenters as having a right to post abusive comments. Nonetheless, I generally allow critics to post abusive comments, as such tirades usually say more about the commenter’s personality than about the target, but I prefer that readers do not respond in kind.

    • HAS
      Posted Sep 22, 2012 at 2:41 PM | Permalink

      I personally found faustusnotes quite reasonable and straight up over at the Shaping Tomorrow’s World blog. Discussing the way L. et al had used PCA etc., I commented that they had used it inductively but drew conclusions as if they had actually tested hypotheses deductively.

      f. responded: “I think I see what you mean HAS, and I think that might be very common in the psych literature. I don’t have an opinion as to whether that’s what L et al do, though (haven’t read that much of their paper).”

      That was a day or so ago, so he’s probably read the paper by now.

      • Robin Melville
        Posted Sep 22, 2012 at 8:04 PM | Permalink

        That’s as may be, “HAS”. However, on his own Role Playing/Dwarves vs Orcs blog, where he posted his discontent about being moderated here, he was rude and dismissive of a long, reasoned post about surveying “skeptics” from a WUWT stalwart, accusing him of being a dupe of the “scumbag” Anthony Watts. He then flew into a rage with someone from here who provoked him and deleted his immoderate response plus the provocative comments that caused it.

        We often forget, as Steve kindly points out in his addition to my post, that unpaid(!) bloggers have to sleep and do other things sometimes. The point of my post was that the finger wagging on both sides achieves remarkably little. The purpose of diplomacy is to achieve results rather than win arguments. The world is going to decide, probably sooner rather than later, between the catastrophists and the optimists (admittedly my personal bent). I suspect that diplomacy has very little purpose in the battle between the “believer” and sceptic blogs.

        What amused me most was that there was a poster on Dr. Faustus’ blog who said:

        Nobody sees themselves as “the bad guy”. Everybody thinks they are “right”, and justifies their actions accordingly.

        But it is very, very difficult to try to empathize with these people who:
        – invent (non-)facts
        – deny the real facts
        – engage in analysis where many facts are ignored
        – engage in analysis where a small number of facts are given undue weight
        – use threats and intimidation to silence critics
        – always fail to admit to and correct mistakes.

        I mean, what are they thinking?
        When your opinion can only be advanced through lies and subtle thuggery, what do you imagine your opinion is worth?
        How long can you continue to advance your anti-factual opinion when you are continually proven wrong, day after day?

        It struck me that this could have been posted with equal vehemence on SkS or WUWT.

  65. Hugo M
    Posted Sep 22, 2012 at 2:00 PM | Permalink

    I’m wondering about the decidedly non-normally distributed answers in the Lewandowsky survey. Is this something to be expected with online surveys addressing controversial subjects on short scales?

    In contrast to the Lewandowsky survey, the variables of the USJudgeRatings data set are predominantly normal at the 1% level.


    # Shapiro-Wilk normality test p-value for each item in the Lewandowsky survey data
    lew <- read.csv("http://www.climateaudit.info/data/psychology/LskyetalPsychSciClimate.csv")
    for (variable in colnames(lew))
      print(sprintf('%20s:%g', variable, shapiro.test(lew[, variable])$p.value))

    [1] " FMUnresBest:2.67716e-35"
    [1] " FMNotEnvQual:6.55224e-34"
    [1] " FMLimitSocial:2.41472e-35"
    [1] " FMMoreImp:1.06392e-35"
    [1] " FMThreatEnv:8.30454e-36"
    [1] " FMUnsustain:8.8284e-36"
    [1] " CO2TempUp:1.68617e-45"
    [1] " CO2AtmosUp:3.58966e-44"
    [1] " CO2WillNegChange:4.07147e-44"
    [1] " CO2HasNegChange:4.67544e-37"
    [1] " CFCNowOK:1.74898e-31"
    [1] " AcidRainNowOK:6.6075e-31"
    [1] " CYNewWorldOrder:2.46865e-44"
    [1] " CYSARS:1.57094e-40"
    [1] " CYPearlHarbor:9.23573e-36"
    [1] " CYAIDS:3.82694e-50"
    [1] " CYMLK:9.48948e-38"
    [1] " CYMoon:2.36751e-55"
    [1] " CYArea51:5.9383e-44"
    [1] " CYJFK:2.88025e-33"
    [1] " CY911:2.12165e-43"
    [1] " CYRoswell:1.98858e-43"
    [1] " CYDiana:1.60134e-44"
    [1] " CYOkla:1.43922e-36"
    [1] " CYClimChange:2.11347e-49"
    [1] " CYCoke:1.38565e-36"
    [1] " CauseHIV:1.77534e-51"
    [1] " CauseSmoke:1.70158e-50"
    [1] " CauseCO2:2.64933e-45"
    [1] " ConsensHIV:5.83387e-55"
    [1] " ConsensSmoke:2.28682e-55"
    [1] " ConsensCO2:1.6696e-48"

  66. Scott Basinger
    Posted Sep 22, 2012 at 2:06 PM | Permalink

    There’s a certain delicious irony when the people whose opinions you just tried to write off as ‘stupid’ tear you and your poorly executed ‘study’ a new one in public.

    This should be a lesson to anyone whose aim is to publish something this bigoted.

  67. uknowispeaksense
    Posted Sep 23, 2012 at 12:30 AM | Permalink

    Once Lewandowsky’s paper is published, the exact methodology should be available, at which point all the people having a crack at being statisticians can try to replicate his work exactly rather than guessing. Steve can then write up his results and publish them in the same journal, or perhaps a stats journal, highlighting all the errors he finds. We can all read it then, knowing it has been through the rigours of peer review.

    Steve: Lewandowsky has an obligation to ensure the integrity of his data and to ensure that his results do not rely on fake/fraudulent data. The potential problem was not explicitly disclosed in the article and there is no evidence that the article was reviewed with this problem in mind. As I’ve said before, I believe that Lewandowsky’s prudent course of action is to notify the journal of the problem and either withdraw the paper or ask the journal to re-review with a keen eye on problems arising from fake/fraudulent responses, rather than doubling down and proceeding with publication with full knowledge of the problems with fake/fraudulent responses.

    • HaroldW
      Posted Sep 23, 2012 at 3:34 AM | Permalink

      Nothing prevents Lewandowsky or his co-authors from making the SI public now, as they have done with the paper. If the paper is not, by itself, adequate to determine the exact methodology, then the SI should be made available at the same time, rather than playing games by withholding it.

      • uknowispeaksense
        Posted Sep 23, 2012 at 4:00 AM | Permalink

        They’re not playing games. What the journal is doing while the paper is in press is common practice at a lot of journals. I would suggest that had the title of the paper not been as… colourful as it is, no one would have noticed it. I am curious though: is it just a matter of principle that the SI be released early, or is it just this paper?

        • HaroldW
          Posted Sep 23, 2012 at 5:19 AM | Permalink

          uknow –
          First off, you changed the subject. I criticized Lewandowsky & his co-authors for not releasing the SI. You changed that to “the journal”. Lewandowsky & his co-authors have had ample opportunity to release the SI, but have not chosen to do so. One may draw one’s own conclusions from this.

          However, to your question…The SI often is an elaboration of an abbreviated “methodology” section which is condensed to meet a page limitation. As such, the SI is an essential part of the paper. Therefore, common practice or not, it is reasonable that the body of a paper & its SI be made available at the same time.

          For papers which are predominantly data analysis, I contend that the SI should include code along with its methodology description. (Source data as well, naturally.) Verbal description of the methodology is liable to have a certain amount of ambiguity; code is unambiguous.

          Consider the recent Gergis et al. paper on Australasian temperature reconstruction. Only its substantial (but still incomplete) SI allowed analysis of its described methods; from the text of the paper one would not have been able to determine the actual procedure used. Providing the code would have made the discrepancy between methodology description and actual practice much more evident, perhaps even enabling the error to be corrected in review before publication.

        • uknowispeaksense
          Posted Sep 23, 2012 at 5:51 AM | Permalink

          Did I say journal? Oops. Apologies. Anyway, I’ll take it as a matter of principle then, which brings me to my next question. Given that it is a common practice for authors, do you routinely visit blogs and criticise authors who don’t release the SI at the same time for papers you are interested in or are you just making a special effort in this case?

        • Hugo M
          Posted Sep 23, 2012 at 7:25 AM | Permalink

          I wonder what paper you’re referring to. Maybe it is still evolving online, despite being “in press”? Nowhere in the Lewandowsky paper is an SI mentioned. I downloaded it only a few days ago.

          Steve: the paper says that tables of correlations would be in supplementary information, but nowhere refers to the SI as providing details on methodology not provided in the paper. Personally, I don’t believe that any such additional details were submitted to the journal or were considered by reviewers.

        • theduke
          Posted Sep 23, 2012 at 10:33 AM | Permalink

          uknow writes:

          Given that it is a common practice for authors, do you routinely visit blogs and criticise authors who don’t release the SI at the same time for papers you are interested in or are you just making a special effort in this case?

          I can’t speak for Steve, but when a clearly bogus, provocative paper that aims to insult is announced in a huge media-blitz to adoring crowds around the world, it tends to attract attention.

          Steve: I’ve frequently criticized authors for not providing data/SI when their papers were press released. It’s unusual for authors to press release prior to publication, but IPCC does this and I’ve sharply criticized IPCC for not even providing their report when their press releases go out.

        • Carrick
          Posted Sep 23, 2012 at 12:28 PM | Permalink

          uknow:

          They’re not playing games. What the journal is doing while the paper is in press is common practice at a lot of journals. I would suggest that had the title of the paper not been as… colourful as it is, no one would have noticed it. I am curious though: is it just a matter of principle that the SI be released early, or is it just this paper?

          I’m not sure the point of your comment/question.

          I’ve always gotten the SI too when I review a paper; in fact, there are specific review questions relating to that in any journal I’ve peer-reviewed for. I often have made comments about the appropriateness of what is included in the SI (I had an occasion where the “program” was a binary for a particular but unspecified Linux OS, and I pointed out the obvious problems with that).

        • Posted Sep 23, 2012 at 2:17 PM | Permalink

          When an author releases a paper to the general press and promotes that paper in the general press, then absolutely they should have to release all information then and there.

          Especially with a paper such as this – with its sensationalized claim and title – a title that reflects a minor finding. And a finding which is supported by the thinnest of threads (and which disappears entirely when you take away the clearly fake responses).

          Lewandowsky wants the press – which is almost entirely based on his sensationalized and barely supported title – but does not want to be accountable for the accuracy or even the review of his work.

          If you release to and promote in the press – you have “published” your work. And once you do you should be responsible for providing all information so its accuracy can be reviewed.

          Lewandowsky released this paper to the press in July. It is not in this month’s journal, making it October at the earliest. He’s had months of press coverage on a paper that by all indications is seriously flawed.

  68. Beth Cooper
    Posted Sep 23, 2012 at 2:56 AM | Permalink

    Say, when yer ends are NOBLE then it follows that yer oh-so-good ends justify yer somewhat-sus-but -justified-means … if yer know what i mean.

  69. Posted Sep 23, 2012 at 3:54 AM | Permalink

    Steve

    Your acceptance of the olive branch from Faustus shows your characteristic fair-mindedness and does you great credit.

    I would respectfully suggest, though, that before engaging in “collaboration” with him you re-read his original Sept 17th “Censored by Climate Audit” post carefully.

    It was a tissue of carefully constructed slander and lies, unprompted by any aggression from yourself or anyone here – and therefore presumably represents his real feelings on the issues.

    Sup with a long spoon – and count it afterwards.

    Leopards & spots come to mind.

  70. harold
    Posted Sep 23, 2012 at 5:47 AM | Permalink

    I hope this is not too OT, but there is an amazing one hour sermon by Professor Robert Manne on the war of denialists against science and reason.

    http://desmogblog.com/2012/09/19/robert-manne-how-vested-interests-defeated-climate-science

    43:50
    “one of the most remorseless denialists, a man called Stephen McIntyre”
    More on Steve after, 50:50

    • dougieh
      Posted Sep 28, 2012 at 4:17 PM | Permalink

      thanks for the link Harold – Posted Sep 23, 2012 at 5:47 AM

      that guy has a problem (what has happened to argument?) with his finishing statement – he may know the early “denial” history, but don’t lecture on things you know FA about, i.e. Steve’s motives being part of a climate change denier cause !!!

      what a stumbling verbal prat; Steve & Ross, you should pull this guy up for slander.

      ps – apparently CA is an “echo chamber” & apparently you had climategate as a blog title ready to go beforehand & the guy liked the MM book? nuts.

  71. Patrick M.
    Posted Sep 23, 2012 at 7:52 AM | Permalink

    Just as a side note for those of us who may not be familiar with the differences between Principal Component Analysis and Factor Analysis, here’s a description of some of the main differences:

    http://psych.wisc.edu/henriques/pca.html

    “Factor analysis versus PCA
    These techniques are typically used to analyze groups of correlated variables representing one or more common domains; for example, indicators of socioeconomic status, job satisfaction, health, self-esteem, political attitudes or family values. Principal components analysis is used to find optimal ways of combining variables into a small number of subsets, while factor analysis may be used to identify the structure underlying such variables and to estimate scores to measure latent factors themselves. The main applications of these techniques can be found in the analysis of multiple indicators, measurement and validation of complex constructs, index and scale construction, and data reduction. These approaches are particularly useful in situations where the dimensionality of data and its structural composition are not well known.
    When an investigator has a set of hypotheses that form the conceptual basis for her/his factor analysis, the investigator performs a confirmatory, or hypothesis testing, factor analysis. In contrast, when there are no guiding hypotheses, when the question is simply what are the underlying factors the investigator conducts an exploratory factor analysis. The factors in factor analysis are conceptualized as “real world” entities such as depression, anxiety, and disturbed thought. This is in contrast to principal components analysis (PCA), where the components are simply geometrical abstractions that may not map easily onto real world phenomena.
    Another difference between the two approaches has to do with the variance that is analyzed. In PCA, all of the observed variance is analyzed, while in factor analysis it is only the shared variances that is analyzed.”

    Steve: Ripley, who is a world authority on statistics, gives a precise statement of the difference in terms of the matrix algebra. The above commentary by a psychologist seems very unhelpful in comparison, as it apparently reifies the algebra.
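
    As a concrete illustration of that algebraic difference (a minimal R sketch on an arbitrary built-in data set, not the article’s data or code), the “explained variance” figures come out of quite different calculations in the two techniques:

    # Principal components: explained variance comes from the eigenvalues of the
    # correlation matrix. Factor analysis: it comes from the squared loadings of a
    # fitted latent-factor model. The 'attitude' data set is purely illustrative.
    R <- cor(attitude)
    ev <- eigen(R)$values
    pc_explained <- ev / sum(ev)                                # per-component explained variance
    fa <- factanal(attitude, factors = 1)
    fa_explained <- sum(fa$loadings[, 1]^2) / ncol(attitude)    # variance explained by factor 1
    round(pc_explained, 3)
    round(fa_explained, 3)

    The two proportions are generally not the same, since principal components allocates all observed variance to the components while factor analysis models only the shared variance.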

  73. tlitb1
    Posted Sep 23, 2012 at 10:03 AM | Permalink

    Re-posting this as my first version had a link to the confines of this page which attracted spam filtering:

    @Hugo M Posted Sep 23, 2012 at 7:25 AM | Permalink | Paste Link

    I wonder what paper you’re referring to. Maybe it is still evolving online, despite being “in press”? Nowhere in the Lewandowsky paper is an SI mentioned. I downloaded it only a few days ago.

    Raw correlation matrices and summary statistics are reported in the online supplemental material

    For example, the online supplemental material shows that responses to the Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985) replicated previous research involving the population at large, and the model in Figure 1 exactly replicated the factor structure reported by Lewandowsky et al. (2012) using a sample of pedestrians in a large city.

    This, when I first read it, seemed to me to mean that the supplemental material was available. The paper was “in press” and had self-contained references to contemporaneously available material.

    Note that the second reference to supplemental material points to a second unavailable paper (with its own unavailable SI?) that is co-authored by the first paper’s author.

    Is this usual?

  74. MikeN
    Posted Sep 23, 2012 at 11:13 AM | Permalink

    snip – OT

  75. Posted Sep 23, 2012 at 2:56 PM | Permalink

    Lewandowsky’s “other” paper is now in press – in the December issue of Psychological Science in the Public Interest (a different sister publication to PSS).

    Note the co-authors … apparently John Cook is now a social scientist as well.

    Stephan Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook
    Misinformation and Its Correction: Continued Influence and Successful Debiasing
    Psychological Science in the Public Interest December 2012 13: 106-131, doi:10.1177/1529100612451018

  76. Posted Sep 23, 2012 at 5:45 PM | Permalink

    Dear Steve,

    In regards to the agreement between you and “Prof. S-” I would suggest that you also delete comment #comment-356519. This is the one where you point out how Roman figured out who Prof. S- is, and it was quite easy for me to follow along, even though I read this post’s comments after other identifiers were deleted. I imagine the cat is already out of the bag on other forums, but it seems that deleting this comment would be in the spirit of the agreement.

    Cheers. -t

    • HaroldW
      Posted Sep 24, 2012 at 7:39 AM | Permalink

      …and #comment-356554 & #comment-356556 for a similar reason.

  77. Shevva
    Posted Sep 24, 2012 at 7:09 AM | Permalink

    All this time wasted to pigeonhole people.

    I hope you all took 5 minutes to enjoy the summer because it’s gone now.

  78. Skiphil
    Posted Nov 23, 2012 at 10:35 PM | Permalink

    Nothing seems to slow down Lewandowsky in his drive to smear and defame anyone who questions any aspect of extremist climate science:

    Lewandowsky continues to spew vile misinformation

    [linked on Jo Nova]

    Robyn Williams radio show for ABC (Australia) with Stephan Lewandowsky as guest:

    Attitudes to climate change

    Broadcast:
    Saturday 24 November 2012 12:05PM (view full episode)

    “If 95, 96 or 97% of scientists say that human activity is driving the world temperature higher, why is it that some people reject the view of the overwhelming majority? Stephan Lewandowsky has studied scepticism. In the field of climate science the so-called sceptics he says are not sceptical, they are rejecting the evidence for ideological reasons, and a personal world view. He says extremist market ideology leads people to reject climate science. They are rejecting the enlightenment, and all that has been achieved over hundreds of years. He says there is a false consensus effect and the media has done a terrible job at representing climate science. News Limited publications in Australia systematically misrepresent climate science. Denial is a way of wishful thinking. He says solutions need to be highlighted along with new entrepreneurial opportunities as climate changes and the challenges increase.”

    =====================================================================

    [Jo Nova quotes Lewandowsky from the program]:

    Jo Nova: Lewandowsky goes on to defame, degrade and maliciously attack skeptics with misinformation

    [Lewandowsky]
    “They were rejecting the science not based on the science… but on other factors…
    what we basically found was the driving motivating factor behind their attitudes was their ideology.
    People who endorse an extreme version of free market fundamentalism are likely to endorse…

    They are also rejecting the link between smoking and lung cancer, and between HIV and AIDS…

    it’s an extremist free market ideology”

  79. Skiphil
    Posted Feb 5, 2013 at 10:39 AM | Permalink

    Blog post and paper’s abstract at this link (h/t Tom Nelson):

    Lewandowsky et al.
    New paper says:

    “Conspiracist ideation is arguably particularly prominent on climate blogs, such as when expressing the belief that temperature records show warming only because of systematic adjustments (e.g., Condon, 2009)….”

    Condon, J. (2009, November). Global temperature records above the law. Retrieved from
    http://noconsensus.wordpress.com/2009/11/29/global-temperature-records-above-the-law/ (Accessed 6 May 2012)

    Actually, Jeff’s argument (in 2009) was that with CRU flouting FOI requirements no independent party could examine how they got from raw data to adjusted data. Condon’s points were about openness, reproducibility, scientific rigor, and also observing the FOI LAW. Lewandowsky et al. have grossly warped Condon’s posts into ‘conspiracist ideation’ as a straw man figure.

  80. Skiphil
    Posted Mar 20, 2013 at 5:32 PM | Permalink

    Oh, Steve, you are caught in Lewandowsky’s net, too! For shame…. (j/k)

    p.s. I don’t think it’s worthy of your time right now with all that’s going on with the Marcott paper, but just thought you should know.

    BishopHill tweet:

    Lewandowsky has accused @richardabetts, Met Office head of climate impacts, of being conspiracy theorist! http://www.bishop-hill.net/discussion/post/2091932#post2091980

    Lewandowsky’s “Recursive Fury” paper sweeps up a prominent Met Office climate scientist in its hall of “conspiracy ideation” miscreants — problems with Lewandowsky’s methodology and rigor?

9 Trackbacks

  1. […] Conspiracy-Theorist Lewandowsky Tries to Manufacture Doubt […]

  2. By The Latest Cli-Sci Developments | suyts space on Sep 21, 2012 at 2:41 PM

    […] Conspiracy-Theorist Lewandowsky Tries to Manufacture Doubt […]

  3. […] a recent blog post at Climate Audit, Steve McIntyre’s concludes:  “…Lewandowsky, cognizant of how thoroughly compromised his results are by fake/fraudulent […]

  4. […] low on the priority totem pole, activist “scientists” such as Weaver – and more recently “psychologists” such as Stephan Lewandowsky – (not to mention anti-free-speech […]

  5. […] which essentially claims that climate ‘deniers’ are a bunch of conspiracy nuts. Climate Audit has taken the research, the analysis and claims made in the paper to pieces, and pretty well all […]

  6. […] Science effort to the advocacy disguised as science going on at the University of Western Australia with Stephan Lewandowsky. Since this was sent using the University of Queenslands public network resource, it is fair game […]

  7. […] Lewandowsky has not posted anything on his blog. Steve McIntyre has posted two further articles (here and here). Has someone had a quiet word to Lewandowsky? Plus, we note, Psychological Science must […]

  8. By Lewandowsky Timeline | Geoffchambers's Blog on Mar 24, 2013 at 7:37 PM

    […] https://climateaudit.org/2012/09/20/conspiracy-theorist-lewandowsky-tries-to-manufacture-doubt/ […]

  9. […] problem with this paper – at least according to Climate Audit principal, Steve McIntyre – was the quality of its “denialist” […]