“Forensic Bioinformatics”

Pielke Jr has sent me the following two links on the longstanding dispute between Baggerly and Coombes, two biostatisticians, and a team of cancer researchers at Duke University led by young star Dr Potti. See CBS News here and a Baggerly 2010 lecture here.

Baggerly and Coombes had attempted to replicate a leading paper; their efforts ultimately led to the retraction of the papers. But the decisive step in the retraction did not arise from the proper operation of the peer review system or university investigations, but through something entirely fortuitous.

Their experience has many parallels to Climate Audit versus the Mann hockey stick, even in some small details. (This is not to say that all details are parallel.) For example, the Potti et al papers used “meta genes”, which Baggerly explained as being nothing but principal components.
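Baggerly’s observation that a “meta gene” is nothing but a principal component can be illustrated with a minimal sketch. The data below are random numbers for illustration only, and this is not the Duke group’s actual pipeline: the point is simply that the per-sample “metagene” scores are the projection of a centered expression matrix onto its first principal component.

```python
import numpy as np

# Hypothetical expression matrix: 50 samples x 20 genes.
# Random data for illustration only -- not the Duke study's values.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))

# Center each gene, then take the SVD. The first right singular vector
# holds the gene weights; scaling the first left singular vector by the
# leading singular value gives the per-sample scores -- the "metagene".
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
metagene_scores = U[:, 0] * s[0]

# Equivalently: project the centered data onto the first PC direction.
same_scores = Xc @ Vt[0]
print(np.allclose(metagene_scores, same_scores))  # prints True
```

In other words, relabeling a principal component a “metagene” adds no new mathematics; it only changes the vocabulary.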

Like us, Baggerly and Coombes were frustrated by incomplete and/or misleading documentation and the resulting need to resort to what they described as “forensic bioinformatics” – which is exactly equivalent to what has been described at CA as reverse engineering. Like us, they even encountered a one-row-off error (compare MM03). They described an incident where the Potti authors appear to have reversed the labels (resistant/sensitive) on a drug – though in this case the labels were reversed rather than Mannian upside-down.

They published critical comments in journals. Potti and the original authors used Mannian language to rebuff the critiques, describing them as “deeply flawed”. The Potti coauthors said that the Baggerly and Coombes criticisms didn’t matter, that they were little more than complaints about typographical errors, and that any given criticism was obsolete since subsequent studies had confirmed the results being criticized.

Baggerly and Coombes had trouble publishing results in cancer journals because their results were “too negative”. Eventually they published in Annals of Applied Statistics, leading to an investigation at Duke University.

But after three months, the Duke investigation completely cleared the Potti authors, stating (in language reminiscent of Muir Russell) that the investigation had “strengthened their confidence”. Baggerly and Coombes’ request for a copy of the investigation report was refused. Eventually they noticed that a copy had been sent to the National Cancer Institute, a federally funded agency, and, in a tactic reminiscent of Climate Audit, they submitted a FOI request for the investigation report, receiving it in a redacted form. Needless to say, the investigation had failed to actually investigate the allegations.

Baggerly and Coombes were incredulous, but had more or less run out of avenues to pursue.

Then completely out of left field came a decisive revelation from “The Cancer Letter”, a weekly newsletter that is sort of like a blog (see here). In his CV, Dr Potti had falsely claimed to be a “Rhodes Scholar (Australia)” – a claim refuted at The Cancer Letter.

Although this misrepresentation did not bear on the dispute itself, it was the sort of thing that the academic community could dig its teeth into and Potti’s data manipulation began to unravel. Dr Nevins, Potti’s senior and coauthor at Duke, finally withdrew his support. The articles (published in the most eminent journals) were subsequently retracted.

There are many other parallels, but not everything is parallel. However, failing to report adverse results (e.g. a verification r2 of ~0, or “hide the decline”) is a form of data manipulation that should be taken seriously in the climate community, but isn’t.

86 Comments

  1. John
    Posted Oct 16, 2012 at 3:53 PM | Permalink

    It’s almost like there is a bizarre course somewhere, somewhere subterranean, where you can learn how to bob and weave, to use PR to throw off the scent, when your obfuscation has been detected.

    I wonder how much of this has gone undetected, how many more shoes have to drop?

  2. Local Person
    Posted Oct 16, 2012 at 4:01 PM | Permalink

    A UK County Council leader has publicly challenged the warmists in his blog and then in a full council meeting, which has been picked up by the press.

    Sound-minded people could pitch in to the debate on the newspaper website
    http://www.cambridge-news.co.uk/Cambridge/Global-warming-a-theory-by-bourgeois-left-wingers-16102012.htm
    or on his personal blog
    http://nickclarkeconservative.wordpress.com/2012/10/14/global-warming-the-king-is-not-wearing-any-clothes/

    Any comments would be welcome and might prevent it just being a horde of Cambridge-centric warmists getting all the debate action.

    • MarkB
      Posted Oct 16, 2012 at 11:00 PM | Permalink

      Or else you could show your host the minimum of respect and stay on topic. In going off topic, you essentially say that his writing is not worth discussion.

  3. Posted Oct 16, 2012 at 4:02 PM | Permalink

    In his CV, Dr Potti had falsely claimed to be a “Rhodes Scholar (Australia)” – a claim refuted at The Cancer Letter.

    The move Mann didn’t make. Rhodes Scholar envy can be a powerful thing. It spurred Mark Shuttleworth to start the company that made him a billionaire by 27, he told me as we shared a taxi through Oxford in 2003. (Mark had been rejected as a South African hopeful early in the 90s. He has since become a generous sponsor of the open source movement, from which he benefited commercially through the Python language, giving rise among other things to the popular Linux distro Ubuntu.)

    But what a Potti thing to do in bioinformatics and what remarkable climatic parallels.

  4. Craig Loehle
    Posted Oct 16, 2012 at 4:08 PM | Permalink

In the academic world egos can be big and disputes are rife. People hold incompatible world views (Bayesian vs frequentist, the noble savage vs evidence of frequent violent deaths among prehistoric peoples, incompatible string theories) and quite often the work is so complex that nonspecialists are reluctant to pass judgement. This plus the ideal of academic freedom (even the freedom to be wrong or a crackpot) make universities reluctant to adjudicate academic disputes. This becomes even worse when various fields that are protected can put forth total nonsense work without being questioned (I dare not speak the names but they use the word “studies”). When there is outright fraud, gross incompetence, or data transposed or faked (such as by respondents to a survey) this reluctance to intervene becomes the turning of a blind eye to bad behavior or incoherent results. I’m not offering a solution, just pointing out the difficulty. In my mind when it is pointed out that statistics were used wrong, or data is a mess, or a lone tree at Yamal determines the result, everyone should take the possibility of a problem seriously.

    • Adrian
      Posted Oct 16, 2012 at 6:34 PM | Permalink

      “People hold incompatible world views (Bayesian vs frequentist,”

My understanding is that Frequentists are a subset of Bayesians – i.e. they don’t consider priors, or at least they assume uniform priors?

I notice you didn’t capitalise “Frequentist”, lol

      • MarkB
        Posted Oct 16, 2012 at 11:01 PM | Permalink

        There was no person named Frequent.

        • AndyL
          Posted Oct 17, 2012 at 10:20 AM | Permalink

Should Bayesian lose its capital letter?
Once names change into words that are part of the language, they lose the capital. See for example ‘derrick’.

        • Jeff Norman
          Posted Oct 18, 2012 at 1:40 PM | Permalink

          There were however people named frequently.

    • Jim T
      Posted Oct 17, 2012 at 9:37 PM | Permalink

      “…make universities reluctant to adjudicate academic disputes.”

      Yes, that certainly makes sense as to why such investigations are few and far between. The problem is that when they DO ‘investigate’, they ARE adjudicating the issue. When they instead issue whitewash after whitewash, they don’t have the excuse that you are offering.

      The only publicized academic investigation that actually worked, as far as I can remember, is the Bellesiles matter at Emory. Even that was only due to his comparative lack of Mannian obfuscation and stonewalling – his excuse for not being able to produce his (fabricated) data was equivalent to ‘the dog ate it’.

    • TG
      Posted Oct 19, 2012 at 1:50 PM | Permalink

The concluding sentence reminded me of a paper by John P. A. Ioannidis, “Why Most Published Research Findings Are False” ( http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124 ). I should add that the paper was previously mentioned on CA on a different thread, but it’s worth re-reading…

  5. Robert Bradley
    Posted Oct 16, 2012 at 4:22 PM | Permalink

    No big deal, but in the next to last paragraph, did you mean ‘when’ rather than ‘and’:

    into and Potti’s data manipulation began to unravel

    _____

  6. Jimmy Haigh
    Posted Oct 16, 2012 at 4:24 PM | Permalink

    Who wants to be a billionaire?

  7. theduke
    Posted Oct 16, 2012 at 4:47 PM | Permalink

    In the end, integrity, persistence and truth will usually win the day. Or in the case of Climate Science, the decade.

    • bernie1815
      Posted Oct 16, 2012 at 7:22 PM | Permalink

      theduke:
      I had you pegged for a realist and now you disappoint and pull a Candide. I recommend a neat little book by David and Stephen Clark, Newton’s Tyranny: The Suppressed Scientific Discoveries of Stephen Gray and John Flamsteed as an antidote for your misplaced optimism.

      • theduke
        Posted Oct 16, 2012 at 11:17 PM | Permalink

        Thanks, Bernie. I guess it depends on the day of the week. Climate Science will do that to you.

        I’m not a scientist, mathematician, or statistician. My education was in history, literature and philosophy. I was describing my impression of Steve’s efforts. He’s winning in his own inimitable way, I think we can all agree.

        I’ll get the book.

        • bernie1815
          Posted Oct 17, 2012 at 12:29 PM | Permalink

          theduke:
I share your opinion as to the likely effectiveness of Steve’s efforts. However, Steve quite appropriately looks at a carefully defined series of topics that play to his significant mathematical skill, and he uses what he finds there to argue for greater transparency and replicability – the heart of the scientific method – in the climate science field in general. If I had my way, Steve McIntyre would be a recipient of a MacArthur genius award. Without question, he deserves a fellowship or position at Queen’s or UoT. I hope he gains the scientific recognition that was so long denied Gray and, to a lesser extent, Flamsteed.

      • Ian H
        Posted Oct 18, 2012 at 5:57 PM | Permalink

The heart of the dispute between Flamsteed and Newton was Flamsteed’s refusal to release his data (obtained by virtue of his official position as Astronomer Royal) in advance of the publication of his intended magnum opus. Newton’s position was the not unreasonable one that data obtained by a public servant belonged to the public and should be published without delay. There are obvious parallels here with the disputes over release of data in climate science.

Unfortunately, the means by which Newton chose to pursue his objective were not so reasonable. Lacking an FOIA, his efforts to get Flamsteed to comply descended to the level of an ugly personal vendetta against Flamsteed and his associates. Gray, who was denied recognition of his work and died a pauper as a result, was the worst casualty.

  8. William Larson
    Posted Oct 16, 2012 at 5:26 PM | Permalink

    One of the suggestions offered by Baggerly in his lecture (cited above) is that papers before they are published be sent to a reviewer whose job it is solely to take the data and code and rerun it to see if he gets the same result. He is pressing journal editors to do this. His take is that this would increase “by orders of magnitude” the availability (if only by active links) of data and code. Such a fine and reasonable and decent suggestion. And so the question of the hour: When will this become the norm in climate science? One thing I like about CA is that you always get both data and code right away from Steve M.

    • Posted Oct 16, 2012 at 6:31 PM | Permalink

If taken up, this simple step could transform the peer review system more than one could hope. In fact, why not have this done first, and let other reviewers have access to the replication as a matter of course?

    • Txomin
      Posted Oct 16, 2012 at 8:23 PM | Permalink

      It sounds simple (to take the data and code and rerun it) but this is often a very complex procedure.

      Steve: yes and no. I’ve posted many scripts that readers have used. Doing analysis is more than just running code, but functional code dramatically simplifies the analysis procedure.

      • Posted Oct 16, 2012 at 9:22 PM | Permalink

        Alan Kay’s dictum for programming language design seems relevant here: simple things should be simple; complex things should be possible. The amount of time Steve has had to spend because really simple things have been made complex, despite peer review, shows not just spite (which has sometimes been present) but a system that is broken. If someone publishing knew the independent replication step was sine qua non it would change publishing. And this would be a bad thing because?

        • William Larson
          Posted Oct 17, 2012 at 12:46 PM | Permalink

          “The amount of time Steve has had to spend…”: Baggerly states that his team spent about 1500 hours doing all the “forensic bioinformatics” (reverse engineering), digging into this highly flawed paper on cancer drug trials based on genetic mapping. Yup. I wonder how much time SM has put into similar investigations in climate science. And as an aside, sort of: What is horrific is that the drug trials at Duke were never stopped because of the egregious (?) flaws in Potti, et al., as pointed out by Coombes and Baggerly, but were stopped instead because of a revelation of lies in Potti’s CV.

      • Peter
        Posted Oct 16, 2012 at 10:15 PM | Permalink

        “It sounds simple (to take the data and code and rerun it) but this is often a very complex procedure”

        One of the reasons I became skeptical in the first place was because our host backed up his words in the language that I speak. Running code. Occasional glitches sure, always fixed. This is the way Engineers work, we don’t have to trust each other, we check.

Now even if the scientists are running horrifically complex stuff, the current state of the art makes it perfectly possible for them to share their virtual machine image with interested parties. It’s not rocket science now. I can travel all over the world and create a copy of my machine’s memory, executing images, OS, everything; I can save it, restore it, etc. It’s commonplace.

        Steve has shown one way to allow replication, but another is simply to create a complete working virtual PC that others can access and try.

While I could see that very large simulations may be difficult to replicate due to the sheer compute power required, most of the complaints that have been raised are against tiny data sets and small computations compared to what regularly goes on in the computer world these days.

    • michael hart
      Posted Oct 17, 2012 at 3:29 PM | Permalink

      I can foresee resistance when authors do not want to release too much of their precious raw data to ‘competitors’ who not only might find something wrong with it but, even worse, find something else that the authors hadn’t.

      Having the reproducibility test performed ‘in-house’ before submission seems a reasonable solution in that context.

      • Posted Oct 18, 2012 at 10:44 AM | Permalink

        Reproducibility tests ?
        What are co-authors good for? Shouldn’t they be the first to do such tests?
It seems that nowadays co-authors are just an adornment and have no good idea of the contents of the papers they sign.

    • Jeremy
      Posted Oct 18, 2012 at 9:26 AM | Permalink

      I’m not clear on why simple math checks can’t become a required portion of peer review. There’s got to be a simple way to force reviewers to go through the math used in papers.

      Steve: reviewers don’t have the time and can’t be bothered. What they can and should do is ensure that the authors have made a proper archive so that someone sufficiently motivated can subsequently parse the study.

      • Pat Frank
        Posted Oct 18, 2012 at 8:18 PM | Permalink

        As a reviewer, I regularly work through portions of manuscripts. Sometimes to check a central result, and sometimes to derive a result the author might have missed. I also look up critical citations to be sure they convey what the author relates. What others do, I can’t say. But the duty of critical review can be fulfilled by those who take that duty seriously.

        • Tom Gray
          Posted Oct 19, 2012 at 8:36 AM | Permalink

          I reviewed one paper in which the main example had a major mistake in it. This mistake was very revealing about the basic shortcomings of the proposed method. I put that in my comments to the authors and I recommended that the paper not be accepted. The other three reviewers recommended acceptance and so I asked them about the mistake. That is, was there really a mistake there or was I just not understanding the paper. They agreed that the mistake was there and with its significance but still recommended acceptance.

          I still do not understand this.

      • Bob K.
        Posted Oct 19, 2012 at 1:40 PM | Permalink

In my world, reviewing papers is something that we do “out of hide”, which is to say without any kind of support for the effort. That may seem a very greedy thing to say, and yes, we are expected to do some things out of service to our academic community, but we already do a lot of the latter and a day really is only so long. So unless something jumps out at me as suspicious or captures my curiosity in an unusual way, I am not likely to replicate the steps used by the authors. Passing peer review means passing a very cursory quality-control check. Papers with incorrect claims can and do get published all the time in the peer-reviewed literature. To me, this is expected, no big deal. The problem is when some try to confer a blessed status on a paper just because it was published in a peer-reviewed journal.

  9. SamG
    Posted Oct 16, 2012 at 5:36 PM | Permalink

Establishment confirmation-bias seems to be rampant all over the place: climate science, medicine, psychology, Keynesianism, even amongst the cult-of-science, anti-theist movement which is spreading. I was led to believe the sciences are a bastion of truth. Was I wrong?
    Seems collectivists and sociopaths have co-opted all positions of power.
    Btw, I’d be interested in knowing how Steve’s opinions have changed over the last few years. I think he’s getting more cynical.

    • Tony Mach
      Posted Oct 17, 2012 at 10:05 AM | Permalink

      @SamG

Your confirmation bias is showing: you forgot several “establishment confirmation-bias” organizations in your list – some of which were established thousands of years ago.

And the fact that there are bad apples does not mean that apples cannot be eaten in general (nor does it mean that apples are not tasty).

      But go ahead, you already know that science is wrong, and you and your religion(s) are right – so believe whatever you want to believe. After all you don’t need any confirmation from someone methodical like Steve, you just need to believe.

  10. Adrian
    Posted Oct 16, 2012 at 6:08 PM | Permalink

    ‘In his CV, Dr Potti had falsely claimed to be a “Rhodes Scholar (Australia)” – a claim refuted at The Cancer Letter.’

    So despite all the so called scientific checks – the deciding factor was a reverse “argument from authority”. He was no longer an authority, so the scientific community “piled on”. lol

  11. Adrian
    Posted Oct 16, 2012 at 6:20 PM | Permalink

    Where is Keith DeHavelle to do some poetry about “the rain in Maine falls mainly in the Seine” and the equivalent gnome joke? lol

    • Posted Oct 18, 2012 at 1:59 PM | Permalink

      It’s in moderate-status; I wish comments let us
      Go back in and make minor edit
      There is one short word “got” that should just be “hot”
      Too late now. It’s stuck just as I said it.

      ===|==============/ Keith DeHavelle

  12. Posted Oct 16, 2012 at 7:39 PM | Permalink

    Perhaps I could offer an argument of a parallel in AGW. Skeptic climate scientists have largely been ignored by the mainstream media in favor of those on the IPCC side (“PBS NewsHour global warming coverage: IPCC/NOAA Scientists – 18; Skeptic Scientists – 0” http://junkscience.com/2012/07/13/pbs-newshour-global-warming-coverage-ipccnoaa-scientists-18-skeptic-scientists-0/ ), and an educated guess is that journalists believe two things: the science is settled and skeptics are on the payroll of ‘big coal & oil’.

    The central promulgator of that corruption accusation is anti-skeptic author Ross Gelbspan, whose 1997 “The Heat is On” book is described as seminal for other works which claim skeptics are corrupt, but which ultimately rely on Gelbspan as their source. The ‘big coal & oil’ corruption accusation is not independently corroborated.

Gelbspan was (and in some cases, still is) described as a Pulitzer winner, with no ambiguity about that, one of the more prominent examples being the cover of his 2004 hardcover “Boiling Point” book ( http://img2.imagesbn.com/images/102700000/102707056.jpg ). His back-pedaling about the matter in the 2005 paperback’s preface is quite unconvincing.

    He never won a Pulitzer Prize.

  13. PaddikJ
    Posted Oct 16, 2012 at 7:49 PM | Permalink

    Although this misrepresentation did not bear on the dispute itself, it was the sort of thing that the academic community could dig its teeth into . . .

Either that, or it was something they could no longer sweep under the rug. Neither explanation does much for the reputation of academe: It either can’t distinguish between solid research and garbage, or it is only interested in gossipy scandals. I don’t buy the excuse that the academies (and journals) are reluctant to pass judgement on highly specialised or technical work; if their credibility is important to them, they damn well better find a way. In this case, inviting a statistician in for review would have been a simple matter.

Steve: Baggerly and Coombes were statisticians, but the journals didn’t want to listen to them.

    • Craig Loehle
      Posted Oct 16, 2012 at 8:06 PM | Permalink

      Credentials are very important in the academic world, since there are no stock options or bonuses. Debasing this coinage is a serious offense. Likewise when people claim to have a degree and don’t. Ironically, of course, journals never ask for credentials and anyone can publish in any journal without a Ph.D. (which I think is wonderful). Most critically, it is the kind of fact that is easily shown to be false and not just a difference of opinion. When it is shown that someone lied about something this important, it is easier to imagine maybe they lied about their research.

  14. Ampersand
    Posted Oct 16, 2012 at 7:59 PM | Permalink

    This story is also quite similar to what Serge Lang called “The Baltimore Case”.

    http://www.gatewaycoalition.org/files/Gateway_Project_Moshe_Kam/Resource/DBCre/serge.html#part5para2

  15. Carrick
    Posted Oct 16, 2012 at 11:40 PM | Permalink

The amazing thing about Anil Potti – and quite dissimilar to this – is his manipulation of data he collected relating to a drug cocktail used to treat lung cancer patients. I can’t imagine anything more vile than that. This has quite a long account of it.

Unlike Mann, Potti lost his job but (as I understand it) kept his pension by resigning instead of being fired – a common trick used by organizations to minimize damage in cases like this by greasing the way out for the ex-employee. Potti amazingly still practices medicine (oncology, in Chapel Hill, North Carolina). By “catching him” puffing his resume, they avoided having to even deal with his other alleged misconduct.

You have to ask Dr. Nevins where he was during the fraudulent behavior, but he won’t say when asked. Two-word summary (paraphrased of course): “I dunna.”

  16. Latimer Alder
    Posted Oct 17, 2012 at 2:35 AM | Permalink

Great line in Baggerly’s lecture – which is well worth watching even if you are a statistical and medical pygmy like me. He does it all with pizzazz.

    ‘Hey, who needs to conduct experiments if you already know the answer?’

    I wonder if he has read the Hockey Stick Illusion? His forceful and energetic style of presentation would make an excellent counterpoint to Steve’s more cautious and reticent Canadian delivery. And it seems that despite their apparent external differences they are working in very similar ways.

    Perhaps Steve should rename himself as a ‘forensic climatologist’? Sounds a bit better in the media (think Gil Grissom, great brain, clever, figuring out the puzzle) than ‘auditor’ (think Enron, boredom, nit-picking)

  17. KnR
    Posted Oct 17, 2012 at 4:26 AM | Permalink

This type of problem has been seen before in other areas, and even when it becomes public it can be hard to correct.
But it is still unusual in most cases; what we seem to see in climate science is this issue arising far more often, even as an expected norm.
This could be because of its pseudo-science nature, and it could be because of its political nature, with even ‘the Team’ calling their work by that highly unscientific term ‘the cause’, while its leaders are as happy to act as advocates as scientists.

Almost worse than this is the failure of the gatekeepers to control such behaviour, and the failure of the scientific establishment to call out such nonsense when it is seen; too often the ivory tower simply does not wish to leave its comfort to address the public’s concerns.

  18. Geoff Sherrington
    Posted Oct 17, 2012 at 5:25 AM | Permalink

The Baggerly lecture needs a lot of thought before one rushes to comment. There is new information in there, like simple errors tending to be the most common. While this might seem old hat to some, methods to detect and correct the simple errors could do with much more attention and sophistication than currently exist. Also, reviewers and colleagues “might” simply brush past the rudimentary error-check stage in order to get to their high-expertise areas. (I knew a famous medico who never learned how to put gas in his car; he always went to the same garage for a fill.)
In recent months in Oz we have found with climate data that conversion from degrees F to C and back, using one place after the decimal, does not produce a unique result, hence it is impossible to reconstruct how many original deg F were in whole numbers. We have found that some record-setting high temperatures reported in newspapers arose because the Bureau gave Glaisher screen results to newspapers but used Stevenson screen results for official records; transcription errors remain despite many rounds of checking; positional coordinate errors remain. One that we discussed today was the double insertion of a result that displaced the remaining table column by a day, playing havoc with correlation coefficients. Simple errors like these can have large unforeseen effects, but they seem to be beneath the dignity of peer reviewers to examine.
    For reasons like these, I’m still at the stage of mistrusting all Australian climate data pending further checking and verification together with several colleagues, all acting far below their true skills.
    You can’t make a solid building on a weak foundation and for global temperatures at least, the foundations are weak while the edifices are elaborate enough for Nobel Prize talk.
    ………………
    How can scientists carry on like this, knowing that there is a plausible chance that they will one day be personally on the receiving end of manipulated and ineffective medical treatment?
    ………………
    For whom the bell tolls.
    ………………
    Many parallels with Steve, but note – must learn to say ‘doxorubicin’ and ‘topotecan’ in under 10 milliseconds.
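Sherrington’s point about one-decimal F-to-C conversion can be checked directly. A minimal sketch (hypothetical readings, not the Bureau’s actual archive): when Celsius values are stored rounded to one decimal place, distinct Fahrenheit readings collide on the same archived value, so one cannot reconstruct whether the original deg F reading was a whole number.

```python
from collections import defaultdict

def f_to_c_rounded(f):
    """Convert deg F to deg C, rounded to one decimal place (as archived)."""
    return round((f - 32.0) * 5.0 / 9.0, 1)

# Collect every tenth-of-a-degree F reading between 50 F and 70 F that maps
# to each rounded C value; any C value with more than one pre-image is
# ambiguous, i.e. the original F reading cannot be recovered from it.
preimages = defaultdict(list)
for tenths in range(500, 701):  # 50.0 F .. 70.0 F in 0.1 F steps
    f = tenths / 10.0
    preimages[f_to_c_rounded(f)].append(f)

ambiguous = {c: fs for c, fs in preimages.items() if len(fs) > 1}
print(f"{len(ambiguous)} of {len(preimages)} rounded C values are ambiguous")

# Example collision: 60.0 F (a whole number) and 60.1 F both archive as
# 15.6 C, so a whole-number origin cannot be reconstructed.
print(f_to_c_rounded(60.0), f_to_c_rounded(60.1))  # prints 15.6 15.6
```

The ambiguity arises because 0.1 F steps are about 0.056 C apart, finer than the 0.1 C archive resolution, so roughly two F readings land in each rounded C bin.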

    • eqibno
      Posted Oct 18, 2012 at 8:58 AM | Permalink

Ditto on the less-than-rigorous approach to the use of statistics in academia and even industry. While my experience in that field was decades ago, just avoiding the pitfalls of designing experiments was a major aid to getting reproducible results that could be presented to management and provide consistent improvements to the bottom line.
In the current case, teasing some kind of “proof” for pre-conceived notions out of a mass of already-measured, non-randomly produced data makes the job excruciatingly hard.
All the more reason to have in-house statisticians on hand BEFORE conclusions are reached, let alone published.

  19. Tom C
    Posted Oct 17, 2012 at 6:19 AM | Permalink

    “For example, the Potti et al papers used “meta genes”, which Baggerly explained as being nothing but principal components.”

    Maybe scientific journals should adopt a policy that any paper using PC methods should include a separate statistics review. PC seems widely misunderstood and the potential for mis-interpretation is high. These shenanigans could be nipped in the bud.

    • Jeff Alberts
      Posted Oct 17, 2012 at 9:50 AM | Permalink

      You’re assuming that it actually is misunderstanding and mis-interpretation.

    • Brandon Shollenberger
      Posted Oct 17, 2012 at 11:33 AM | Permalink

Tom C, many people seem to think PCA was some huge innovation, but really, it’s quite simple. All PCA does is add a layer of obscurity to the process. The problem is journals don’t require that papers even say what they did*, much less provide true documentation/code so it can be checked. That, combined with the fact that PCA obscures how results were generated, means reviewers can miss things they ought to catch.

      A separate, statistical review isn’t necessary. What’s necessary is a decent first review. And that requires journals hold authors to higher standards than they currently do.

      *Theoretically they do, but the requirement is flouted all the time.

  20. Posted Oct 17, 2012 at 8:00 AM | Permalink

    Resume embellishment seems popular for the climatologists.

    For example, people at the Geophysical Union know whether Michael Mann’s nomination for Jones’ AGU fellowship employed a fraudulent ‘H’ score, as Mann suggested in email to Jones.

    http://climategatei.blogspot.com/search?q=1213201481

    I think I once inquired of the Geophysical Union whether they could check the Jones nomination and received no response. Perhaps this type of behavior is common or even expected for Geophysical Union fellows?

  21. Skiphil
    Posted Oct 17, 2012 at 8:56 AM | Permalink

    Without the fake “Rhodes Scholar” issue this affair could have dragged on much longer, while clinical trials were being conducted on real live cancer patients!

    Meanwhile, our climatologists want policies of national and world importance to be based upon their own stonewalling and lack of good practices. It is always worth citing episodes such as this disturbing statement in 2005 noting apparent issues with Mann’s work:

    Myles Allen telling colleagues he could not replicate Mann’s error analysis

    [Myles Allen in 2005 on problems in analyzing Mann’s work]: “…I tried and failed to understand Mann’s error analysis using both approaches about 5 years ago, so I don’t think it is worth trying again, particularly given his current level of sensitivity.”

  22. Matt Skaggs
    Posted Oct 17, 2012 at 9:19 AM | Permalink

    This sort of stuff transcends academia, it is really the politics of power. The Soon and Baliunas rebuttals were also “too negative,” including fabricated claims and “deeply flawed” blather, but got published anyway because Mann et al were riding a power wave and Soon was seen as an outsider. Duke U. apparently perceived nothing more than a couple gnats buzzing around its head, and swatted them away. But when the powerful retracted their support, the outcome was inevitable. The moral of the story is that legitimate skeptics must not quail when confronted with power politics. Activism involves losing a thousand battles on the way to winning the war.

  23. Kenneth Fritsch
    Posted Oct 17, 2012 at 10:54 AM | Permalink

    The lesson from this incident, one that applies well to the current state of climate science, is that it is prudent and necessary to have knowledgeable, skeptical and independent individuals doing analysis and reporting the results by whatever means are available, contrary to what we hear from defenders of accepting, more or less out of hand, what might appear to be an overwhelming consensus among the powers that be in a given science community.

    On the other hand, while the peer review system can tend to ignore these skeptical inputs as too negative (or for other rather vague reasons), and other bodies can conduct cursory investigations that appear to counter the skepticism, this should give no great solace to the out-of-hand defenders, as that system and those bodies can be shown to get it all wrong on some occasions.

    While the peer review system and the academic enablers in these instances are probably not going to make major changes anytime soon, the healthy skepticism that this incident and ones like it generate, and the informed analyses that skepticism motivates, have to be a good thing for science in general. Obviously the advent of the internet, as a source of data and a means of communicating the results of analyses, has been a very positive and important factor in making these analyses effective counters to consensus thinking and the problems that type of thinking can generate.

  24. Kenneth Fritsch
    Posted Oct 17, 2012 at 1:10 PM | Permalink

    I have a question that came to mind during the discussion on this thread: could reviewers of a published paper face legal liability when, as in this case, a flawed peer-reviewed publication enables further actions that damage individuals? Certainly the due diligence of the reviewers could be questioned in cases like this one.

    I am not at all sure that litigation of this sort would be proper or practical as a deterrent, but given the litigious times we live in, at least in the US, I wonder whether such litigation has been initiated, or what shields are in place to exempt reviewers.

  25. Duke C.
    Posted Oct 17, 2012 at 2:44 PM | Permalink

    Well I guess that’s the dividing line, then.

    (Climate) Science research can be bent and distorted beyond recognition by those with stellar CVs, but if one of them falsely claims a prestigious credential, academia is on them like bees on honey.

    A bit of irony there, somewhere.

  26. R Case
    Posted Oct 17, 2012 at 3:00 PM | Permalink

    I recently saw a very interesting TED talk about bad science and bias in the reporting and publication of research results. While the presentation focuses more on particular clinical trials and medicine, the presenter talks about how FOIA efforts have been necessary to try to find the truth and the missing data. Not only am I appalled that this happens so pervasively in the field of medicine, but the parallels to climate science are undeniable. I think it’s worth 10 or 12 minutes of your time to watch.

    Some relevant quotes from the video:
    “[In science,] we only hear about the flukes and about the freaks.”

    “Positive findings are around twice as likely to be published as negative findings.” (Very relevant given the case cited in Steve’s blog post here)

    “Real science is all about critically appraising the evidence for somebody else’s position.”

  27. Skiphil
    Posted Oct 17, 2012 at 3:14 PM | Permalink

    Perhaps statisticians need to become more organized to enhance and enforce higher research and publication standards across all empirical disciplines. When someone with statistical expertise blows a whistle it ought to require (at a minimum) genuine independent and exhaustive review of data, code, methods, etc. That there has so often been no real independent scrutiny is part of the problem. Here is a disturbing case in the courts this year, a large study on Alzheimer’s at Harvard:

    http://www.ahrp.org/cms/content/view/848/9/

    • Duster
      Posted Oct 17, 2012 at 4:55 PM | Permalink

      Re: Skiphil (Oct 17 15:14),

      I think that all that is really needed is that, within “multidisciplinary” sciences, any paper be reviewed by specialists from each contributing discipline. Climatologists would weep real tears at the thought, of course, since you are dealing with physics, geology, statistics, and a raft of other disciplines as well.

  28. Matt Skaggs
    Posted Oct 17, 2012 at 3:31 PM | Permalink

    I noodled around at RetractionWatch and discovered that there was a “60 Minutes” segment back in February on US TV. Transcript here:

    http://www.cbsnews.com/8301-18560_162-57376073/deception-at-duke-fraud-in-cancer-care/?pageNum=4&tag=contentMain;contentBody

    The most striking part is this nugget referring to the “outside” investigators that cleared the doctor:

    Pelley [interviewer]: “How could they have found nothing wrong, nothing suspicious about the work at that point?”

    Califf: “They were analyzing a data set that had been prepared by Dr. Potti. So, the data set they got was one that produced the same results that had been seen in our own analyses.”

    Pretty direct comparison to Muir Russell asking UEA to tell them which papers were controversial.

  29. Matt Skaggs
    Posted Oct 17, 2012 at 3:35 PM | Permalink

    Oops, just noticed that my link in the previous comment goes to the same place as Steve’s first link “CBS News here.”

  30. Skiphil
    Posted Oct 17, 2012 at 3:46 PM | Permalink

    This news article describes a situation in cancer research where many potentially flawed studies are not proving reproducible:

    http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

    • betapug
      Posted Oct 17, 2012 at 4:58 PM | Permalink

      Too much money sloshing around the fringes of another Noble Cause?

  31. thisisnotgoodtogo
    Posted Oct 17, 2012 at 4:28 PM | Permalink

    Skiphil,
    maybe this is a clue as to why there is no great outcry from other branches of Science over the Climate Scientology.

  32. Max H.
    Posted Oct 17, 2012 at 6:10 PM | Permalink

    Anil Potti has been featured for some time on “Retraction Watch” as his various papers are withdrawn after serious flaws have been uncovered in them.

    He has plenty of company on “Retraction Watch”, much of it formerly high-flying, and that’s only within the biological sciences research game.

    http://retractionwatch.wordpress.com/

  33. Skiphil
    Posted Oct 17, 2012 at 11:10 PM | Permalink

    The supposedly independent review late in 2009 was not provided with key info from Baggerly and Coombes. What, then, did the “review” actually “replicate”, given that the reviewers didn’t notice the problems? Sounds like the kind of whitewash inquiry we’ve seen too much of…. Interesting article at the Nature website:

    For 2009 review Duke did not forward key info to the “external reviewers”

    “In an NCI report obtained by Nature, Duke’s external reviewers say that they can replicate the results using data provided by Potti, but seem unaware of any doubts about the data. Kornbluth and Cuffe admit that, in consultation with John Harrelson, who was acting as chairman of Duke’s Institutional Review Board, they decided not to forward the latest communication from Baggerly and Coombes to the rest of the board or the external reviewers.”

    • Matt Skaggs
      Posted Oct 18, 2012 at 9:30 AM | Permalink

      Thanks Skiphil, the article is interesting and so is the first comment from Steven McKinney. In fact, if we take one of McKinney’s statements and leave out a few modifiers we get:

      “The […] modeling machinery has undemonstrated type I and type II error rates, and should be evaluated on simulated data of known structure. My concern is that on simulated random […] data for a few dozen cases with randomly assigned binary categorization, the model will find structure somewhere across the 20,000 […] rows of random data and appear to yield a highly accurate predictor.”

      Now just substitute “climate proxy” wherever you see “[…]” and we have a decent correlation to the Mannian Hockey Stick Generator.
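      McKinney’s worry is straightforward to demonstrate. Here is a hedged sketch in plain Python (the 40 cases, 20,000 rows and the sign-weighted vote classifier are all illustrative choices, not anything taken from the Potti papers): generate pure noise, keep the rows that happen to correlate best with random binary labels, and the resulting “predictor” looks impressively accurate on the very data used to build it.

```python
import random

random.seed(0)
n_cases, n_rows = 40, 20000
labels = [random.randint(0, 1) for _ in range(n_cases)]
data = [[random.gauss(0, 1) for _ in range(n_cases)] for _ in range(n_rows)]

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# "Find structure": keep the 10 rows that happen to correlate best with the labels
ranked = sorted(range(n_rows), key=lambda i: abs(corr(data[i], labels)), reverse=True)
top = ranked[:10]

# Trivial sign-weighted vote classifier built from the selected rows
signs = [1 if corr(data[i], labels) > 0 else -1 for i in top]

def predict(case):
    score = sum(s * data[i][case] for i, s in zip(top, signs))
    return 1 if score > 0 else 0

acc = sum(predict(c) == labels[c] for c in range(n_cases)) / n_cases
print(f"apparent accuracy on pure noise: {acc:.2f}")  # typically well above 0.5
```

      With 20,000 rows to choose from and only a few dozen cases, some rows will correlate strongly with any labeling by chance alone, which is exactly why type I and type II error rates need to be checked on simulated data of known structure.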

  34. Tom Gray
    Posted Oct 18, 2012 at 8:43 AM | Permalink

    Very pertinent posting

  35. Posted Oct 18, 2012 at 11:37 AM | Permalink

    Another mystery to solve: the Lewandowsky et al paper was announced and circulated to the media (especially the Guardian) as pending publication back in July.

    http://pss.sagepub.com/content/current

    As there is still no sign of it, now in October, does anybody know what is happening with it? The press releases and media coverage went far and wide.

    Dr Adam Corner (a psychologist who needs to be a bit more sceptical?)

    in the Guardian (in July, having been sent it by Lewandowsky):

    “But new research to be published in a forthcoming issue of Psychological Science has found a link between the endorsement of conspiracy theories and the rejection of established facts about climate science.”
    http://www.guardian.co.uk/environment/blog/2012/jul/27/climate-sceptics-conspiracy-theorists

    Could someone (who is likely to get a response) ask the journal about the status of this?

    • DR_UK
      Posted Oct 18, 2012 at 2:26 PM | Permalink

      I had a paper accepted in March 2012 in an academic social sciences journal which has only just appeared in the online edition (Oct 2012). So I don’t think it’s at all strange that a paper accepted in July has not been published yet.

  36. DocMartyn
    Posted Oct 18, 2012 at 2:31 PM | Permalink

    We were in a meeting with our intellectual property officers last week and I was informed of something quite pertinent to the ‘Potti and Nevins’ case. You can patent a computer program in which you put information in at one end and get information out the other end, as long as no clinical judgement is required. If you develop an algorithm that helps a clinician make a more informed decision, then you cannot patent it. Potti and Nevins wanted to make money from a computerized diagnostics system. They could only have IP protection if they could show that
    sequence in —-> drug choice out
    was completely independent of human judgement. We are nowhere near this for any aspect of personalized medicine.

    • Tom Gray
      Posted Oct 18, 2012 at 4:51 PM | Permalink

      This is true for all patents. Patents cannot be about “best practices” or procedures that are done by people. Even if the human plays only a small part of the process, the entire process is unpatentable.

      A patent could be found for a “system or method” that uses a computer a components and that “displays” information from which the human makes a choice. That is, if the determination of the choice involves a novel (new and useful) method or system.

      • Tom Gray
        Posted Oct 18, 2012 at 4:59 PM | Permalink

        this should be

        “uses a computer as a component”

      • DocMartyn
        Posted Oct 18, 2012 at 5:24 PM | Permalink

        Yes Tom, you can see that their computerized diagnostic system required a definite output.
        They would do incremental improvements in their algorithms, but would need a ‘definite’ output.
        I am working out how to make a simple output, based on screening, that is not dependent on human judgement but still informs a human. It is actually jolly difficult.

        • Tom Gray
          Posted Oct 18, 2012 at 9:59 PM | Permalink

          The main issue that I think you are describing is that of “obviousness” and perhaps “novelty”. The operative part of a patent is the claims, which are sentences appended to the specification that precisely describe what the claimed invention is. If the claimed invention is something that someone of “ordinary skill in the art” would find to be “obvious”, then the invention is not patentable. However, one must read the claims to determine just what a patent is claiming, and it is difficult to discuss the validity of any patent without a close examination of the claims. Is a claim “obvious to someone of ordinary skill in the art”? That person is usually determined to be someone with a masters degree.

  37. RayG
    Posted Oct 18, 2012 at 7:37 PM | Permalink

    Very OT, but congratulations to Jean S and our host are in order. Gergis et al has been withdrawn! So, buckets of kudos to the statistical stalwarts of CA.

    • AJ
      Posted Oct 19, 2012 at 5:05 PM | Permalink

      Hear, hear! Here’s to improving science!

  38. RayG
    Posted Oct 18, 2012 at 7:39 PM | Permalink

    Oops forgot to include the link and an acknowledgement of where I found the article, WUWT. http://wattsupwiththat.com/2012/10/18/gergis-et-al-hockey-stick-paper-withdrawn-finally/#more-72585

  39. Zach
    Posted Oct 19, 2012 at 1:14 AM | Permalink

    Call for tighter standards to combat tide of science misconduct

    http://www.moneycontrol.com/news/wire-news/call-for-tighter-standards-to-combat-tidescience-misconduct_770873.html

    • Posted Oct 19, 2012 at 3:44 AM | Permalink

      ah yes, that would be good in theory. However, that piece cites the example of Andrew Wakefield, and from close investigation I know that he was not the fraudster he is portrayed in Wikipedia as being. The story there makes me even more upset and alarmed than the Climategate whitewashes do. You have to dig thoroughly, like Steve McIntyre does here, to find the hidden evidence.

      • Zach
        Posted Oct 22, 2012 at 12:55 AM | Permalink

        Agreed

  40. William Larson
    Posted Oct 19, 2012 at 1:59 PM | Permalink

    I suppose that this is not entirely OT: A friend of mine is a professor of biochemistry at an R1 university. He has said to me that, “back in the day” (around 1970), when he graduated with an MD-PhD, there was “no question” that he would land a professorship at a major university, that he would achieve tenure, and that he would get research funding, but that none of those is true anymore for persons with the same degrees. Apparently the supply of professors greatly exceeds the demand. And so, therefore, the pressure to publish and BE RECOGNIZED must be intense. Of course this could tend to lead to all kinds of behaviors that throw true science under the bus in the interest of self-advancement, both as authors and as reviewers. So I think I understand it, but understanding does not equate with a vote of approval.

    • Keith Sketchley
      Posted Oct 20, 2012 at 12:31 PM | Permalink

      Interesting.
      Perhaps too many people use easy funding to get a degree that isn’t worth much in income terms, because they aren’t of top-notch capability in the field of their degree.
      Though when I graduated in 1967 I wondered how so many science graduates would find work.

  41. Keith Sketchley
    Posted Oct 20, 2012 at 12:37 PM | Permalink

    Perhaps something will rub off from this objective ethics program at Duke U:
    http://www.vem.duke.edu/program.htm
    though many academics and U bureaucrats will fallaciously claim that they are not in “the marketplace”.

  42. Alan Watt, Climate Denialist Level 7
    Posted Oct 23, 2012 at 8:51 AM | Permalink

    We appear to have arrived at the point where the athletics departments of major universities have more integrity than their academic counterparts. This is indeed a bizarre twist for me as I have long viewed the emphasis on athletics (in the US this primarily means football) as a corrupting influence on the proper purpose of a University.

    In a case like this it is Duke University which needs to be made accountable for the academic misconduct of its faculty member because they failed to properly supervise his activities and abetted his misconduct by conducting a sham investigation.

    Sanctions should be imposed which impact the University’s access to research funds.

  43. Gras Albert
    Posted Oct 26, 2012 at 12:54 PM | Permalink

    Then, completely out of left field, came a decisive revelation from “The Cancer Letter”, a weekly newsletter that is sort of like a blog. In his CV, Dr Potti had falsely claimed to be a “Rhodes Scholar (Australia)” – a claim refuted by The Cancer Letter.

    Although this misrepresentation did not bear on the dispute itself, it was the sort of thing that the academic community could dig its teeth into, and Potti’s data manipulation began to unravel. Dr Nevins, Potti’s senior and coauthor at Duke, finally withdrew his support. The articles (published in the most eminent journals) were subsequently retracted.

    On his web site

    http://www.meteo.psu.edu/holocene/public_html/Mann/about/index.php

    Michael Mann states he shared the Nobel Peace Prize with other IPCC authors in 2007 and his

    Click to access michael-mann-complaint.pdf

    ‘complaint’ filed October 22, 2012, against CEI & Mark Steyn states: “As a result of this research, Dr Mann and his colleagues were awarded the Nobel Peace Prize.”

    It appears that no less an authority than Geir Lundestad, Director, Professor, The Norwegian Nobel Institute, has responded:

    1) Michael Mann has never been awarded the Nobel Peace Prize.
    2) He did not receive any personal certificate. He has taken the diploma awarded in 2007 to the Intergovernmental Panel on Climate Change (and to Al Gore) and made his own text underneath this authentic-looking diploma.
    3) The text underneath the diploma is entirely his own. We issued only the diploma to the IPCC as such. No individuals on the IPCC side received anything in 2007.

    It would appear that the IPCC/Rajendra Pachauri took their Nobel diploma, added the supplementary text, and sent a copy, exclusively, to some 2,000 AR4 contributors…

    Is this not the sort of thing that the academic community could dig its teeth into? It’s certainly reminiscent of Mann’s proxy practice.

  44. Skiphil
    Posted May 29, 2013 at 4:11 PM | Permalink

    This is from Oct. 12, 2012 but I haven’t seen it discussed. Note that the issue of what they term “resubstitution” in biostatistics may have analogues in climate science papers that are not scrupulous about how they “train” models and compare data across different periods of time:

    http://www.nature.com/news/cancer-institute-tackles-sloppy-data-1.11580
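    The resubstitution trap the Nature piece describes can be sketched in a few lines of plain Python (the single-feature threshold “model” and all sample sizes here are illustrative assumptions, not from any of the cited studies): a rule tuned on noisy training data and then scored on that same data looks far better than it does on fresh data.

```python
import random

random.seed(1)

def make_data(n, n_feats=200):
    """Pure-noise features with random binary labels: there is no real signal."""
    labels = [random.randint(0, 1) for _ in range(n)]
    feats = [[random.gauss(0, 1) for _ in range(n_feats)] for _ in range(n)]
    return feats, labels

def accuracy(col, labels, thr, flip):
    """Accuracy of the rule 'x > thr means class 1' (direction reversed if flip)."""
    preds = [(x > thr) != flip for x in col]
    return sum(p == bool(y) for p, y in zip(preds, labels)) / len(col)

def fit_threshold_rule(feats, labels):
    """Pick the single feature, threshold and direction with best training accuracy."""
    best = (0.0, 0, 0.0, False)
    for j in range(len(feats[0])):
        col = [row[j] for row in feats]
        thr = sum(col) / len(col)
        for flip in (False, True):
            acc = accuracy(col, labels, thr, flip)
            if acc > best[0]:
                best = (acc, j, thr, flip)
    return best

train_X, train_y = make_data(30)
resub_acc, j, thr, flip = fit_threshold_rule(train_X, train_y)

# Score the frozen rule on fresh noise of the same kind
test_X, test_y = make_data(1000)
test_acc = accuracy([row[j] for row in test_X], test_y, thr, flip)

print(f"resubstitution accuracy: {resub_acc:.2f}")  # looks impressive
print(f"held-out accuracy:       {test_acc:.2f}")   # hovers near 0.5: pure chance
```

    The climate analogue would be a reconstruction whose skill is only ever quoted over the period used to calibrate it.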