Kennedy, Editor of Science, on PBS

There is an interesting discussion at PBS on peer review, in which Donald Kennedy, editor of Science, defended their existing "rigorous" processes, but reiterated:

the journal has to trust its reviewers; it has to trust the source. It can’t go in and demand the data books.

If I criticize Science’s due diligence procedures, I don’t think that anyone will accuse me of piling on after the Hwang affair. Here’s something that I wrote last summer:

the underlying issue is that Science does not seem to either have policies that require authors to archive data or administration practices that ensure that their policies are applied. Since NSF then relies ( a reliance which seems to me to be an abdication of their own separate responsibilities) on journals like Science, with either inadequate policy or inadequate administration, there’s a knock-on effect.

Here are some comments on the PBS interview and a recap of past commentary.

The PBS discussion is posted up here.

The discussion of replication and archiving policy has been a theme at this site for some time. An index is on the right frame under "Disclosure and Due Diligence". I draw your attention particularly to the post Replication Policy. That post did not draw as much response as some other posts. As a purported rebuttal, Dano asked:

How many of the 14 Reports in the latest ed of Science have sufficient archived material to permit replication? And the latest ed of Nature has 20 methodable articles/letters – how many of these have sufficient archived material to permit replication?

The answer is probably none of them. I’ve discussed Science in particular in a number of posts, including the following Science Editorial, Science Editorial #2, Letter re Esper, Kilimanjaro Data and others. To date, Science’s performance in ensuring compliance with their own data policies is abysmal.

On to PBS. Scadden pointed out clearly that there are important disincentives for scientists to commit fraud. Scadden:

In some ways the example of Dr. Hwang provides a clear example of why there’s such a tremendous disincentive for this kind of activity to happen. This gentleman’s career and probably his life is ruined.

I agree with this and it’s why outright fraud doesn’t happen a lot. However, it’s probably easier to understand the nuances and degrees of fraud from a business background. There are many degrees of dishonesty between Bre-X at one extreme and the many subtler forms of dishonesty that keep securities commissions in business. At Bre-X, there was salting of assays on an industrialized scale. De Guzman’s "career" was ruined and, whether he was thrown from the airplane or jumped from it, his life was "ruined". (There are interesting unresolved mysteries here.) So there were disincentives here as well, but the fraud still happened. The Ontario Securities Commission did not react to Bre-X by simply saying that this was a "bad apple", but re-examined processes of due diligence and disclosure. Whether their cures are relevant to the disease is a different issue, but they used the occasion to re-examine what they were doing.

Scadden pointed out that the "process" did eventually work:

And fortunately the process does eventually work. In this case, it unfortunately worked after this had been touted as a major breakthrough.

OK, but surely one can inquire as to whether there are ways of making the "process" more efficient. If you re-read McCullough and Vinod, they discuss replication from an economist’s point of view: can you make the replication process more efficient? One of their answers is the one that I’ve consistently advocated: complete archiving of data as used, accurate data citation and archiving of source code. That a process "eventually" works is not good enough – surely we are entitled to ask whether the process can be made more effective and more efficient. Scadden then points to one of the difficulties for peer reviewers:

And unfortunately you don’t have the opportunity to really see the primary information other than what’s given to you by the scientists.

OK, why not? Is access to primary information actually impossible, or is its absence simply a convention? Let’s turn to Donald Kennedy, editor of Science. Kennedy:

It’s not our happiest day at Science, Ray. What we’ve done, of course, is to review the peer review process as we engaged in it with the Hwang papers. It was a pretty rigorous process.

I hate the use of the word "rigorous" by scientists. Too often it’s nothing more than arm-waving through the hard part (e.g., Mann’s use of the term in respect of the RE statistic). The interviewers attempted to get a more operational description of the actual peer review process at Science from Kennedy, but without much success. Kennedy did distinguish peer review from an audit as follows:

What we can’t do is ask our peer reviewers to go into the laboratories of the submitting authors and demand their lab notebooks. Were we to do that, we would create a huge administrative cost, and we would in some sense dishonor and rob the entire scientific enterprise of the integrity that 99.9 percent of it has….
I am trying to explain to everybody that, as Dr. Scadden said, it all depends on trust at the end, and the journal has to trust its reviewers; it has to trust the source. It can’t go in and demand the data books. What it can do is to make sure that its process is as good as it can be.

OK, why not? Let’s apply this to the paleoclimate studies that I’ve considered carefully. Here Kennedy’s comments don’t make any sense at all. Doesn’t Science already have theoretical policies requiring authors to archive their data? The equivalent of "data books" or "lab notebooks" for multiproxy paleoclimate studies can be easily archived. Part of the problem at Science is that it doesn’t administer its existing policies effectively. I’ve reported from time to time on my efforts to get Science to archive data from various paleoclimate authors – Thompson, Esper for example. This has been going on for months and virtually nothing has been accomplished. (My experience with Nature in respect to MBH is also instructive – they complied partially with some requests, but were totally unresponsive on some completely reasonable data requests. It might be interesting to post up my correspondence file, now that this is in the news).

So why can’t Science demand that paleoclimate authors archive these materials? (I don’t know whether there is an equivalent archive that would be applicable to stem cell research, but I’ll bet that there is something that could have been readily archived and which would have been useful to expert readers, which Science did not require the authors to archive.)

Kennedy’s excuse about "huge administrative cost" is simply piffle. There’s a distinction between the journal ensuring that the authors have provided a complete replication archive and the journal reviewers actually carrying out the replication. All scientists seem to agree that "replication" is part of the scientific process. To the extent that the costs of replication are an "administration cost" – huge or otherwise – they are a cost that eventually has to be borne by someone. In the stem cell research area, according to Scadden, other labs rush to try to replicate results using the same methodology. (While this may be the case in medical research, may I observe in connection with paleoclimate that there’s no evidence that anyone had attempted to replicate MBH methodology until we did.)

When Science fails to require an archive at the time of publication, it vastly increases the "administrative cost" for other parties, as McCullough has pointed out. In fact, by not requiring an adequate archive, the journals may create almost impossible barriers to exact replication. The obstacles to replication of MBH (or Esper, or Briffa) require virtual litigation to overcome – my experiences provide an object lesson – and are undoubtedly one of the reasons why no one tried to replicate MBH until we did.

Is this form of archiving feasible? The process of requiring the authors to confirm that they had archived the data and source code/methodology and checking that the archives were not empty would have a trivial cost. This form of archiving is already practiced by econometrics journals at negligible cost (see my post on the Journal of Political Economy or an earlier post on the American Economic Review).
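The confirmation-and-check step described above can be sketched in a few lines. This is a hypothetical illustration – the category names, the manifest format and the function are my own invention, not any journal's actual submission system:

```python
# Hypothetical sketch of a trivial archive-compliance check at submission time:
# confirm that each required category of material is present and non-empty.
# The category names below are illustrative assumptions, not a real journal policy.

REQUIRED_ITEMS = ("data_as_used", "source_code", "data_citations")

def missing_items(archive):
    """Return the required categories that are absent or empty in a submission."""
    return [item for item in REQUIRED_ITEMS if not archive.get(item)]

# A submission that archived its data but neither code nor citations:
submission = {"data_as_used": ["proxy_network.txt"]}
print(missing_items(submission))  # -> ['source_code', 'data_citations']
```

Checking that the returned list is empty before acceptance is the whole "trivial cost" referred to above; the expensive part – actually replicating the work – remains with later readers.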

One last point about MBH, if I may. I’m not sure that I entirely agree with Scadden’s views expressed below, but he connected reliance on results using accepted methods to publication:

But science is really based on trust like most other human endeavors and so when that information is provided as long as it has the credibility of coming from a person of some reputation done with methods that are accepted methods and where the details are sufficiently clear that you feel this information makes sense and has the validity you anticipate for a series of studies that give multiple perspectives on the issue, you then regard it as something that is worthy of publication.

McCullough (as previously posted here) observed of MBH that the principal contribution that they claimed was a "new statistical approach", while archly observing their reluctance to furnish the details of their algorithm:

As expected, high visibility invites replication and tests of robustness. In a series of papers, McIntyre and McKitrick (2003, 2005a, 2005b) have chronicled their difficulties in obtaining the data and program code; the publishing journal, Nature, did not archive the data and code. After some delay, the authors provided the data (see Mann et al., 2004) but have declined, at least as of this writing, to furnish their statistical estimation programs despite their statement that the statistical method is the principal contribution of their article, specifically, to “…take a new statistical approach to reconstructing global patterns of annual temperature back to the beginning of the fifteenth century, based on calibration of multiproxy data networks by the dominant patterns of temperature variability in the instrumental record.” (Mann et al. 1998, p. 779).


Bradley also remarked on the "new statistical approach" in the UMass magazine interview as follows (discussed here):

Bradley credits Mann with originating new mathematical approaches that were crucial to identifying strong trends.

I simply point out the irony of Scadden’s applying "accepted methods" as a criterion for assessing results, given MBH’s claim to use "new" methods, their inaccurate documentation of those methods, their refusals to document them, and the ultimate acquiescence of Nature in this performance. I’ll post up on this another day.

For now, I want to keep hitting on the point that, in my opinion, there is no acceptable reason why the franchise science journals, Science and Nature – for paleoclimate articles at least – cannot insist on comprehensive archiving of data sources, data as used and source code. I don’t know exactly how this would translate into policy for medical research, such as Hwang’s, but I’m sure that the same philosophy, applied to the altered circumstances, would much improve the situation, and no one has ever provided a reason not to do it.


  1. John A
    Posted Dec 28, 2005 at 11:09 AM | Permalink

    My experience with Nature in respect to MBH is also instructive – they complied partially with some requests, but were totally unresponsive on some completely unreasonable data requests.

    Did you mean “reasonable”? I might be unresponsive on some completely unreasonable data requests as well….

    I’m pretty sure that most science and most scientists are accurate and fair in their methods, data archiving and published results. For most of the time, it doesn’t actually matter. But in the case of climate science, where it now powerfully affects public policy, “trusting the scientists” ain’t good enough.

    We’ve now had at least two thumping scientific frauds this century which neither Science nor Nature detected. What does it take?

  2. Posted Dec 28, 2005 at 11:47 AM | Permalink

    Kennedy is full of BS. No wonder the peer review process is all screwed up! As Reagan used to say “Trust, but verify.”

  3. TCO
    Posted Dec 28, 2005 at 12:02 PM | Permalink

    Kennedy would be more honest to say that “business as usual” peer review was performed on the Hwang paper. Whether such methods of peer review are appropriate is an open question created by the scandal. One can argue either side of that question, but one shouldn’t just wave a wand from on high and say that business as usual is the right method. And if we stick with business as usual, we should more appropriately define the danger of continued Hwang scandals, undiscovered Hwang scandals, more frequent but more minor scandals, etc.

  4. TCO
    Posted Dec 28, 2005 at 12:06 PM | Permalink

    For instance, there have been some interesting discussions and even papers on the amount of fraud and the amount of mistakes in science papers in general. Some of these papers have also addressed the issue that many of the mistakes/frauds never come to light (because much work is never replicated… or even if it is, there are disincentives to correcting the mistakes of others). Kennedy would do better to address the overall issue with more sophistication rather than with the PR-based, "on high" remarks that he has uttered.

  5. jae
    Posted Dec 28, 2005 at 1:49 PM | Permalink

    I think the boys and girls at Nature and Science have egg all over their faces, and they are having a very difficult time trying to deal with it. They have to know by now that they have some serious credibility problems. Keep it up, Steve, you are spot on and are making a big contribution to science here!

  6. joshua corning
    Posted Dec 28, 2005 at 2:27 PM | Permalink

    But science is really based on trust

    No it isn’t. In fact, a good description of science would be that it does not rely on trust at all but on verifiable facts.

    Anyway, I have this running theory that anything with the word science in it is by definition not science. Examples: scientology, christian science, political science.

    Now I guess I can add Science magazine to that list.

  7. John A
    Posted Dec 28, 2005 at 2:39 PM | Permalink

    Re #6

    What about American Association for the Advancement of Science?

  8. jae
    Posted Dec 28, 2005 at 3:06 PM | Permalink

    It’s hard for me to believe that archiving of data is an administrative or economic problem. This is a red herring. Virtually everything in science today is recorded electronically, and it takes mere seconds to zap files to the internet. Someone (like Steve) has to drag the editors of the scientific journals, kicking and screaming, into the 21st century.

  9. joshua corning
    Posted Dec 28, 2005 at 3:09 PM | Permalink

    What about American Association for the Advancement of Science?

    Exception that proves the rule. Anyway, my intent is to show how angry I am at Science… perhaps even to shame them by comparing them to the above examples.

    also note that I just figured out that the quote I used was from DR. DAVID SCADDEN not the Science editor DONALD KENNEDY.

    But he said something nearly as stupid:

    “the journal has to trust its reviewers; it has to trust the source. It can’t go in and demand the data books.”

  10. per
    Posted Dec 28, 2005 at 4:18 PM | Permalink

    At the moment, I don’t see any way you could e-archive all the data for (most) biomedical papers. Sure, you could provide electronic tiffs of photos; but that happens anyway. I don’t think this is entirely science’s problem here.

    It is up to institutions and funding agencies to require practices which enable audit. That means keeping physical copies of data, records and lab notebooks. Such practices (known as GLP in the pharma industry) are loathed, and put ~50% on top of the cost of doing research (at least, I reckon). Government won’t pay for a 50% increase, and neither will the biomedical research charities (which insist on 0% overheads in the UK), because they won’t get as much research for their money.

    As a kicker, academia also relies upon transient staff, who leave every three years. That means that all the archived data in a lab notebook is utterly uninterpretable once the temporary staff goes.

    And I am not sure how relevant this discussion is to the Hwang story. If I understood your precis correctly, there are allegations of errors which must be deliberate fraud. GLP doesn’t prevent fraud.

  11. TCO
    Posted Dec 28, 2005 at 4:51 PM | Permalink

    Even if we do stick with "business as usual", we should lower the imprimatur given by peer review. And science journals are responsible for that. They should be very upfront about the many caveats that must be applied to the scientific literature. Let’s face it: there are a lot of crappy papers, a lot of people cutting corners and a lot of careerist star types. Science and Nature are particularly vulnerable to the latter, and for this reason I often prefer the leading subspecialty journals.

  12. Posted Dec 28, 2005 at 5:31 PM | Permalink

    Steve, I think you are barking up the wrong tree with placing responsibility for archiving on journals (even though they pay lip service to it). I think the primary responsibility for archiving data should be with the grantee institution for a number of reasons.

    1. They have the legal responsibility for policing research conduct by members of their organization. So ensuring proper archiving should naturally fall under the parts of their regulations that prescribe research conduct.

    2. They have the financial means, through billing the granter institutions, to pay for proper archiving.

    3. They have the intellectual property motivation.

    4. They are a natural home for centralizing aggregated databases, whereas published articles are generally only derivative of those data.

    That said, I am not expecting a big change soon. However, I do think they would be sensitive to scandal, and so there is hope.

  13. Paul
    Posted Dec 28, 2005 at 7:12 PM | Permalink


    I think what Steve is saying is that the journals claim to have requirements for archiving the data of the articles they publish. The journals have not been diligent, consistent or honest about this. Either enforce the policy or get rid of it. But to pretend to have one is dishonest.

  14. Mike Rankin
    Posted Dec 28, 2005 at 7:20 PM | Permalink

    Politically biased journals will find ways to advance their agenda. Does anyone believe that these recent examples of peer review failures are an aberration? Science and Nature are beyond shame as shown by their actions. Will anyone believe that their future publications are more than propaganda? What does it take to bring them to acknowledge their errors? How do we raise our voice to say enough?

  15. TCO
    Posted Dec 28, 2005 at 7:34 PM | Permalink

    I think the bigger issue is the prominence of climate science in those top journals. The frequency of publication at that level is what smacks of politics to me (and let’s face it, academic science skews left). When you add on the amount of uncertainty in climate reconstructions and the like, that is one more reason why the number of articles is inappropriate.

  16. per
    Posted Dec 28, 2005 at 7:47 PM | Permalink

    re: 13,14
    it is quite clear that Nature is a business, as are Cell, and other top journals. Business makes money; and spending lots of time (salary) chasing up arcane details of data archives that have been forgotten about is not a way to make money.

    I think David Stockwell’s point is just right; but the problem is that it is the journals that hand out their highly coveted accolade of publication, and the journals cannot do audit.

    I am not sure that some of this discussion isn’t misguided. If the journals have an archive policy, they should stick to it and implement it. But the issue with Hwang does not appear to be an archiving issue, nor does it appear to be an abuse of peer review – although I speak without knowledge.

    There may be an issue that the very terseness of Nature/Science papers makes them impossible to replicate on the basis of the printed word alone. This could be addressed very easily in the supplementary materials.


  17. Steve McIntyre
    Posted Dec 28, 2005 at 8:55 PM | Permalink

    I’m not suggesting that journals do audits. As to what’s appropriate, I only know paleoclimate – I don’t make any suggestions on biomedical. However, my guess is that, if there are obvious improvements that can be made in paleoclimate, there are probably other, perhaps different improvements that can be made in biomedical. The connection of archiving to replication/audits is that the archiving vastly increases the efficiency of the process. The decoding of methods and data is so time-consuming that no one (well, hardly anyone) can be bothered. It’s not that the journal should audit; it’s that they have the opportunity to make the materials available. Would this have helped with Hwang? Perhaps not. But I recall seeing comments that people had trouble understanding his methodology.

    Again, I also want to emphasize the nuances. Maybe you can’t catch a well-organized and deliberate fraud. But the more disclosure that you require, the easier any subsequent due diligence becomes.

    The other issue is, of course, steps between journals and international policy. I don’t know what the answer is – but, if the journals have limited due diligence – which Kennedy is obviously admitting – how can you base public policy on this without some additional process of due diligence?

  18. Paul
    Posted Dec 28, 2005 at 9:51 PM | Permalink


    Haven’t you also discussed, on this site, the supposed policies of the journals with regard to archiving? Haven’t you also tried to go to those same journals to get the data, only to discover that they didn’t have it (as they should have had according to their stated policies)? And when you did ask about it, they referred you to the authors (who weren’t about to help you, either). If Science and Nature have policies, why don’t they simply enforce them? Or get rid of the policy? If it appears to be a "burden", then why not place a requirement on the authors to provide publicly available archived data, code, etc. as is appropriate?

    In today’s era of cheap digital storage and nearly universal access there is no excuse for not making it available… unless you’re lazy or have something to hide. The laziness part can be taken care of (either through publishing requirements or as a requirement of receiving the funding). If you’ve got something to hide, then well… that’s a different story.

  19. ET SidViscous
    Posted Dec 28, 2005 at 10:15 PM | Permalink

    “In today’s era of cheap digital storage”

    Hell, I’ve got half a terabyte in my basement – not bragging – and it cost me less than $800. There’s no excuse for it. Hell, I don’t know how they could do their research without having detailed records.

    Not having an archive is just a BS excuse.

    In fact it’s harder to throw this stuff away than it is to save it to be perfectly honest.

    And in fact, if you refer to the ram temp data I posted a while ago: all of these data-logging systems are designed to append data, rather than downloading and deleting old data. When you’re doing research like this it’s important to have all of the data. Often you realize yourself you’ve screwed up and have to reconstruct from scratch.

    Publishing a paper and then flushing the data can have one, and only one reason.

    You don’t want your work to be checked.

  20. Jack
    Posted Dec 29, 2005 at 12:56 PM | Permalink

    Extremely interesting discussion — and yet no one has commented on one particular aspect of the problem, except perhaps for this from David Stockwell:

    2. They have the financial means, through billing the granter institutions, to pay for proper archiving.

    Who has this means? The proliferation of journals has made scientific publishing a difficult way to make money – you should see the subscription costs that libraries pay for journals. OUTrageous. Granting institutions (like NSF) want to distribute funds for research; what’s being suggested here, though it has considerable merit, would cost the researchers the funds required to pay someone to do a good job of recordkeeping. I.e., a data "accountant". (If Congress really did double NSF’s budget by 2008, as promised in 2002 – very unlikely – maybe NSF could pad their grants with funds for scientists to do a better job of data archiving!)

    I would suggest examining how the Ocean Drilling Program (now the IODP) does its scientific business — and how it is funded. It is run like a business. As many of you know, independent scientific research (much of it academic) is not run like a business. If science is to follow a model like the IODP, then the funds have to be found to substantially increase the management aspect.

  21. Steve McIntyre
    Posted Dec 29, 2005 at 1:07 PM | Permalink

    #20. I disagree. For paleoclimate, money is just an excuse. Archiving of data and code is required in econometrics, which has data sets of similar size and more complex code. It’s a matter of culture rather than money.

  22. ET SidViscous
    Posted Dec 29, 2005 at 1:34 PM | Permalink


    How much does it cost to -NOT- hit the delete button?

  23. ET SidViscous
    Posted Dec 29, 2005 at 1:43 PM | Permalink

    RE: Reporting in general.

    Funny – yesterday this came out and I was all like "Don’t they even do basic fact checking?"

    Then I was talking with a friend yesterday evening, and I mentioned that Entertainment Weekly had reported that Joss Whedon didn’t plan to do anything with the Firefly franchise anymore. I was repeating what I had read in the article. And yes, EW got it wrong, but more importantly I repeated it without checking. Seems a bit of human nature is in this as well.

  24. Jack
    Posted Dec 29, 2005 at 1:57 PM | Permalink

    Some questions Re: #21.

    How do econometrics researchers get their funding?

    Are the majority of econometrics professors in soft money (grant) or hard money (salary) positions? (To state why this is significant, all of the researchers at Woods Hole Oceanographic Institution, arguably the most important oceanographic research unit in the country, are ENTIRELY funded by grants.)

    What is the soft money vs. hard money percentage for econometrics researchers? (I.e., of the total number of econometrics researchers, what percent are in soft money positions?)

    How many econometrics researchers work for business compared to the number in academia?

    What is the average size and duration of an econometrics research grant?

    The reason that I ask all this: there is, unfortunately, a difference between what the funding pays for in the various sciences, and where it comes from. I would hazard a hunch that there are a lot more grants from the private sector — meaning business — for econometrics research compared to the private sector funding of paleoclimate research.

    Now — you say it’s a matter of culture. To an extent, I agree. But I would point out that the level of funding is an integral part of the culture. If you desire the cultural change that you’re advocating, the motivation to change will come from the funding sources. If the scientists are going to change, money (or the loss of it) will force them to change.

  25. TCO
    Posted Dec 29, 2005 at 2:41 PM | Permalink

    It’s not that expensive to be organized. Once you get used to doing it, you just do it.

  26. Steve McIntyre
    Posted Dec 29, 2005 at 3:26 PM | Permalink

    #24. I don’t have any information on econometrics funding. My guess is that it’s mostly academic.

    I agree that the funding agencies can force a change of culture. One of many frustrating aspects of paleoclimate is that it seems to me that there are pretty specific high-level policies of the U.S. federal government requiring recipients to at least archive their data, but the N.S.F. does not enforce these policies in paleoclimate. I’ve tried to draw attention to this.

    Researchers would probably save time in the long run if they properly archived their methods and results for benchmark studies.

  27. TCO
    Posted Dec 29, 2005 at 3:30 PM | Permalink

    I also think that having the pressure to archive would make people do better work. On the one hand this is just obvious human nature. In addition, you have the example of crystallography, where this is how it has played out. I think the extra cost is well worth it, considering the benefit of a better work product.

  28. Jack
    Posted Dec 29, 2005 at 3:32 PM | Permalink

    I did some poking around the NSF site.

    Policy for Oceanographic Data, NSF 94-126

    “3. Principal investigators are required to submit all environmental data collected to the designated national data centers as soon as possible, but no later than two (2) years after the data are collected. Inventories of all marine environmental data collected should be submitted to the designated national data centers within sixty (60) days after the observational period/cruise. For continuing observations, data inventories should be submitted periodically if there is a significant change in location, type or frequency of such observations. Inventory forms (Report of Observations and Samples Collected on Oceanographic Programs (ROSCOP)) and instructions are supplied by the National Oceanic and Atmospheric Administration’s (NOAA) National Environmental Satellite Data and Information Service (NESDIS), based on lists of investigators provided to NOAA/NESDIS by funding agencies.

    4. Data sets identified for submission to the national data centers must be submitted to the designated center within two (2) years after the observational period. This period may be extended under exceptional circumstances by agreement between the principal investigator and NSF. Data produced by long-term (multi-year) projects are to be submitted annually. Principal investigators working in coordinated programs may (in consultation with their funding agencies) establish more stringent data submission procedures to meet the needs of such programs.

    5. NOAA’s National Environmental Satellite Data and Information Service staff and program representatives from funding agencies will identify the data sets that are likely to be of high utility and will require their principal investigators to submit these data and related information to the designated center.

    6. Funding agencies will apply this policy to their internal ocean data collection and research programs and to their contractors and grantees and will establish procedures to enforce this policy.”

    You also might want to read this:

    Geoscience Data and Collections: NATIONAL RESOURCES IN PERIL

    summarized here:

    Resources in Peril

    This really isn’t about the data and code archive problem, but it is about paleoclimate research.

    In summary, I think the necessary changes will have to be motivated by funding agencies.

  29. TCO
    Posted Dec 29, 2005 at 3:35 PM | Permalink


  30. Armand MacMurray
    Posted Dec 29, 2005 at 6:21 PM | Permalink

    AND the journals, and perhaps the universities/institutions. There’s no need to limit “good governance” to just one actor. Every part of the system that has leverage should be encouraged to make use of it to support good practices. Funding agencies have an obvious source of leverage, and will be important. Journals also have an obvious source of leverage; universities/institutions can also adopt archiving requirements as a condition of employment/tenure.

    However, real change will only come about when actual people enforce the requirements. Steve has posted examples of funding agency principals unwilling to request data/method sharing, and similar incidents involving journal editors and university administrators. Publicizing such shameful incidents along with the fakery scandals should help encourage more to consider “doing the right thing.”

  31. Posted Dec 29, 2005 at 9:13 PM | Permalink

    Here are some approximate costs for basic archival storage services. The last time I asked, the proposed rate was $600 per terabyte per year for one copy of data stored on tape. In practice, you need two copies to handle media failure. You also need additional discipline-specific data preservation environments in order to address a range of issues: e.g. portal and metadata development and maintenance; multisite data grids; access, version, and digital signature control; and search, analysis and visualization primitives, to name a few. Such projects done at the San Diego Supercomputer Center have included:

    – California Digital Library, Digital Preservation Repository
    – NARA Research Prototype Persistent Archive
    – NHPRC Persistent Archive Testbed
    – NSF National Science Digital Library persistent archive
    – UCSD Libraries image archive

    Additional costs incurred for supporting a basic data archive include:
    – a database instance for managing collection properties – typically $4000 / year
    – data grid administration support – typically about 5% FTE or $6000 / year
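    Taken together, these figures can be sketched as a back-of-the-envelope annual budget. The toy calculation below uses only the rates quoted in this comment (they are illustrative, not authoritative prices):

```python
# Back-of-envelope annual archiving cost, using the rates quoted above.
# All figures are illustrative, not authoritative prices.
TAPE_COST_PER_TB = 600   # $/TB/year for a single tape copy
COPIES = 2               # duplicate media to survive media failure
DB_INSTANCE = 4000       # $/year: database for collection properties
GRID_ADMIN = 6000        # $/year: ~5% FTE of data-grid administration

def annual_archive_cost(terabytes):
    """Total annual cost in dollars to archive `terabytes` of data."""
    media = TAPE_COST_PER_TB * COPIES * terabytes
    overhead = DB_INSTANCE + GRID_ADMIN
    return media + overhead

print(annual_archive_cost(1))   # 1 TB collection  -> 11200
print(annual_archive_cost(10))  # 10 TB collection -> 22000
```

    Notice that for small collections the fixed overhead (database plus administration) dwarfs the media cost itself.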

    I think marine scientists are ahead of the pack, possibly because their research has always involved planning costly multi-disciplinary cruises. In disciplines like dendroclimatology and ecology, data collection is less cost-pressured, and this translates into the way they handle data after collection. Other disciplines might do well to follow the marine folks’ lead.

    The additional cost to factor in is the cost of ensuring compliance. I tend to think that human nature is such that you will not get effective compliance without penalties. Somebody has to pay for policing as well. It all adds up to a significant, but necessary investment in the future.

  32. TCO
    Posted Dec 29, 2005 at 10:17 PM | Permalink

    But you get better product.

  33. Steve McIntyre
    Posted Dec 29, 2005 at 10:51 PM | Permalink

    Right now you have dendro information being lost simply because the dendro people collect information and then don’t archive it if it doesn’t tell the “story” that they are looking for. Hit the Jacoby category in the right frame for some egregious examples. There is an already funded archive for dendro information at WDCP so the archiving is fully funded. It’s just that people who have received millions of dollars in grants are letting the information get lost.

    Worse, in cases like Crowley, he’s lost his data, but it’s still being relied on by IPCC.

    Aside from the journals, IPCC should have its own quality control so that people who want to have their articles used by IPCC (and most climate scientists do) should warrant to IPCC that their data and methods are properly archived. No excuses.

  34. Posted Dec 29, 2005 at 11:11 PM | Permalink

    Those above are the easy problems. Data integration and aggregation is another level of quality. For example, it is one thing for researchers to be able to submit individual data sets from cores to an archive, but another thing altogether to be able to conduct queries such as selecting the relevant index values for a specific time interval. For multi-proxy studies you would have to ensure there is a time variable. Raw data might have only isotope ratios or lengths, and considerable uncertainty as to dating.

    You have to get people to agree on a standard format so that it is machine readable. CSV is next to useless, as the column headers don’t convey a fraction of the information needed to interpret the numbers in the columns. While most disciplines have quasi-standards, they usually have application-specific extensions for all but the least common denominator.
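    The header problem can be illustrated with a small sketch: a bare CSV header like `year,value` carries no units, dating datum, or site information, but a single machine-readable metadata line fixes that. (The `#META` convention and the field names here are hypothetical, purely for illustration; real archives use richer standards.)

```python
# A bare CSV header says nothing about units, dating datum or site.
# This sketch embeds a hypothetical one-line JSON metadata header so
# the file remains interpretable on its own.
import csv
import io
import json

raw = """\
#META {"variable": "ring width", "units": "mm", "datum": "AD", "site": "hypothetical-01"}
year,value
1850,1.23
1851,1.17
"""

lines = raw.splitlines()
# First line: parse the metadata; remaining lines: ordinary CSV.
meta = json.loads(lines[0].removeprefix("#META "))
rows = list(csv.DictReader(io.StringIO("\n".join(lines[1:]))))

print(meta["units"])     # the reader recovers the units...
print(rows[0]["value"])  # ...as well as the data itself
```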

    OK, next problem: concurrency. How do you know that data is up to date, and what does it mean anyway to be up to date? Most data is full of errors and is consequently constantly being corrected, but are the corrections accurate? One data set at site A says animal X is male and another at site B says animal X is female. Which do you believe?

    Next problem: changes in names. Things like place names and species names change frequently enough to make lists of names almost unrecognizable after 50 years or so.

    I am not saying that it can’t be done, or that all these problems apply in every case. Just that establishing ‘an archive’ is not as simple as it sounds.

  35. Posted Dec 29, 2005 at 11:25 PM | Permalink

    Re: #33. Finally, there is the compliance problem. You could build the most beautiful archival system in the world; it doesn’t mean they will come.

  36. TCO
    Posted Dec 29, 2005 at 11:31 PM | Permalink

    “Because it isn’t 100%, we shouldn’t do it at all.” What a crock.

  37. Larry Huldén
    Posted Dec 30, 2005 at 1:31 AM | Permalink

    Re #34, by David.
    I am not so sure that changes of place names or species names are a big problem. Any data must be documented somehow, and it is quite possible to include a species list (with reference to a published list) and also cite some standard work according to which the place names have been used. These additions make it possible to trace the interpretation of the names used.
    I faced this problem with my data on Macrolepidoptera of Finland. There were a lot of obscure place names and species names in old data and archives. When I created the database I had to produce coordinates for each record, and the coordinate system also had to be defined. When you know the origin of a record (which must be included in each record) it is possible to recheck my interpretation of the species, locality, etc. There are about 1.28 million records in the data set in total, and most of the records can be clearly traced to the source, although some are problematic. That is because there were many people typing in the data, and the source was in some cases lost. In this case the problem concerns single records and not the whole data set. In the case of tree ring data it seems like whole data sets are lost.
    As a consequence, I think that the really big problem is to get people to understand that they should save digital data and make even primitive (but documented) copies of it. I know of several cases where people have deliberately deleted big data sets from the hard disk because the results had already been published. Later, when they wanted to reanalyze the data, I was asked to somehow recreate the data set from PostScript files, maps, etc. The source of each record was naturally lost because they had discarded all the original handwritten papers when the data had been transferred into digital form. Small errors in data, or difficulties in interpreting some names, are a very marginal problem in comparison with not saving data or deleting data.

  38. Peter Hearnden
    Posted Dec 30, 2005 at 2:56 AM | Permalink

    Can anyone name me a digital storage format that was around twenty years ago and will still be around, and widely accessible, in twenty years’ time? Who now reads/uses 5.25-inch disks? Who will still have flash cards in 2025? OK, we might be talking about some kind of more robust storage media for archives of paleo data, but I still think (don’t know) the point holds. I genuinely think storage on digital media has serious degradation-over-time problems. We have books and pictures hundreds of years old; no digital stuff has such durability so far.

    *BUT* I’d be interested in the expert opinions of others here. Is there a widely accessible but durable digital medium?

  39. Larry Huldén
    Posted Dec 30, 2005 at 4:25 AM | Permalink

    Peter pointed out a problem which users must be aware of: the media cannot be stored forever. I have tried to make new copies from old media onto new media every 5 years or so. Of course, copies can also be preserved on big computers which regularly produce backup copies. Another way is simply to print out raw data in text format.

    Just this year I transferred data from CP/M Kaypro floppies from the 1980s into DOS (Windows); I use three different computers to complete the transfer. I have saved some old computers exactly for this use.

  40. Steve McIntyre
    Posted Dec 30, 2005 at 7:55 AM | Permalink

    #38, 39. By archiving, I don’t mean that authors should simply put data on their own diskettes, but rather in an archive committed to permanent storage. In paleoclimate, use the World Data Center for Paleoclimatology, which is committed to permanent storage. AGU data archiving policy has theoretically required authors to archive published data in an archive committed to permanent storage; however, their editors don’t enforce the policy. Once an author does that, he doesn’t have to worry about it any more. Let professional people worry about it. Or use the journal (e.g. GRL has a data archive facility, which we used) and let the journal worry about it.

  41. Michael Jankowski
    Posted Dec 30, 2005 at 7:57 AM | Permalink

    Digitally, I expect CD and DVD capable readers to be around in 2025, and my experience with the storage quality of 10+ year old CDs has been perfect (the same cannot be said for zip disks!).

    Microfiche is still a great means of storing print data, and maybe we’ll have a quality means of converting print to digital data by 2025. I remember people in my comp sci class 12 yrs ago using the first hand-held scanner I’d ever seen to scan-in the printed code of old programs and getting about 95% success in converting it to digital code.

  42. Peter Hearnden
    Posted Dec 30, 2005 at 9:04 AM | Permalink

    “Digitally, I expect CD and DVD capable readers to be around in 2025, and my experience with the storage quality of 10+ year old CDs has been perfect (the same cannot be said for zip disks!).”

    Everything gets smaller/replaced. I VERY much doubt anyone will be using CDs in 2025; something new will come along and the manufacturers will try to sell us a whole new lot of kit, but we’ll see. Certainly durability = less kit sold. Re experience: actually, I have some old 3.5-inch disks, and the last time I tried one it worked, so maybe degradation with time isn’t such an issue.

    There are, of course, books hundreds of years old. I doubt disks will work that far in the future. For a start a letter can degrade a lot and still be readable, can a ‘pit’ in a CD do likewise?

    Whatever, interesting discussion 🙂

  43. Timo Hämeranta
    Posted Dec 30, 2005 at 9:27 AM | Permalink

    At a quick look, nobody in this discussion has referred to the rules that U.S. scientists have been obliged to follow since 2003:

    US National Research Council 2003: “Sharing publication-related data and materials”

    Sharing Materials Integral to Published Findings

    “Sharing of materials integral to a published work is a responsibility of authorship. For consistency with the spirit of the uniform principle for sharing integral data and materials expeditiously (UPSIDE), materials described in a scientific paper should be shared in a way that permits other investigators to replicate the work described in the paper and to build on its findings.”

    Further, please see also:

    Public release date: 20-Oct-2005

    Contact: Coimbra Sirica
    International Council for Science

    28th General Assembly of the International Council for Science (ICSU)
    New approach needed to meet challenges to data access and management

    Suzhou, China — Complex changes in data production, distribution and archiving–and issues they raise regarding who pays for data, who preserves it and who has access to it–should prompt an international initiative that ensures current and future scientists worldwide will have the information they need, according to a new report on challenges to data management and access presented today to the International Council for Science (ICSU).
    For more information, visit the Virtual Press Room at:

    I wish you all a Happy and Prosperous New Year 2006

    Timo Hämeranta
    Moderator, Climatesceptics

  44. Armand MacMurray
    Posted Dec 30, 2005 at 9:30 AM | Permalink

    1) Since people are still using vinyl records from the 60s, of course some people will still be using CDs in 20 years’ time. Similarly, CD-reading drives will still be produced (even if only as a disused “feature” on the latest HD-DVD (or whatever) drives). 3.5″ floppy drives have only stopped being supplied as standard within the past couple of years, and are still easily and cheaply available as add-ons.
    2) Even if (1) weren’t true, with the continuing increases in hard disk (or equivalent) storage capacity at a constant price, it’s trivially easy to just reserve perhaps 1/3 of your new 3x bigger drive to store a copy of the entire contents of your previous drive. There’s thus no real justification to *ever* throw away the contents of a drive due to limited space.
    3) The one issue that does come into play is the issue of data formats. If you’re saving your word processing documents in xyz format, it would be useful to at least know how xyz format works in 30 years’ time. This shouldn’t be a problem with the standard formats of the major commercial programs like Microsoft Office, but could potentially be a problem with specialized technical programs in a given field. Using an alternate common format or a text-only format for storage of archival data would be a good workaround for this problem.

  45. Posted Dec 30, 2005 at 9:38 AM | Permalink

    re 43:

    “Sharing Materials Integral to Published Findings”


    Why are postings to Climatesceptics hidden from the public but open to scientists? (I know Gavin used to be a member.)

  46. TCO
    Posted Dec 30, 2005 at 9:43 AM | Permalink

    Can someone direct me back to the page (I think it was Hans Erren’s) that shows a graph of year-by-year CO2 production, consumption, and excess?

  47. Paul
    Posted Dec 30, 2005 at 9:55 AM | Permalink

    This place is awash with straw men. This is about disclosure, transparency and the verifiability of findings.

    Who cares if storage media are going to change in 20-30 years’ time? Aren’t we trying to ensure that any interested party can replicate studies published two or three years ago?

  48. Alex Avery
    Posted Dec 30, 2005 at 9:58 AM | Permalink

    Steve has raised a critical point that many seem to be missing: our marquee journals aren’t living up to minimal standards of oversight. They should require full transparency for paleoclimate and other studies with archivable datasets. They’re not. It’s absurd.

    I’ve personally had three experiences (in agriculture) where I was told by the editors of Science and Nature that the editors didn’t have the power to require authors to disclose raw data or share critical formulae. When I was subsequently able to obtain the information and show the editors that the authors’ conclusions were not supported by the data or that there was obvious bias which undermined the conclusions, the editors concluded that the subject was too specialized and objection too esoteric to be of interest to the broad readership of these two journals. Hack organic farming activist David Pimentel is a great example of somebody given WAY too much leeway in the pages of Science by Don Kennedy.

    As such, I’ve concluded that Science and Nature have become sensationalist and biased rags that are too broad and too arrogant to be of use. The specialized journals know the subject material and the biases of submitting authors far better and, thus, are far more reliable. Too bad the press and public don’t know this.

    Einstein has a great quote regarding this: “The right to search for truth implies also a duty; one must not conceal any part of what one has recognized to be true.”

    We require witnesses in court to pledge to tell the truth, the whole truth, and nothing but the truth — why not scientists too?


  49. Steve McIntyre
    Posted Dec 30, 2005 at 10:32 AM | Permalink

    Alex, you might be interested in this post from July – Full True and Plain Disclosure and Falsification

  50. Doug L
    Posted Dec 30, 2005 at 10:48 AM | Permalink

    Re #45

    “Why are postings to Climatesceptics hidden from the public but open to scientists…?”

    Yeah, inquiring minds want to know!

    If it’s not possible, I’d promise to sit in the back and be quiet!

    This is the sort of blankety blank I’ve come to expect…. 🙂

  51. Dave Dardinger
    Posted Dec 30, 2005 at 11:06 AM | Permalink


    I always understood that one of the advantages of CDs was that the information is stored in an error-resistant manner. I forget the name of the coding just now, but it is supposed to allow you to correct errors, and the info in each track is supposedly spread out so that small scratches are unlikely to lose data. Of course, major damage is major damage and will lose data. The best way to prevent that is to have multiple backups and safe storage places for them. Practically any business these days large enough to have an IT department has emergency plans which take all these things into account, including advance planning on how to transfer info from old media to new media regularly.
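    (For the record, the coding CDs actually use is cross-interleaved Reed-Solomon. The toy sketch below uses a much simpler scheme, triple repetition with majority voting, which is NOT the CD scheme but illustrates the same principle: redundancy lets small errors be corrected.)

```python
# Toy error-correcting code: triple repetition with majority voting.
# (CDs actually use cross-interleaved Reed-Solomon coding; this is a
# deliberately simple stand-in showing how redundancy corrects errors.)
def encode(bits):
    """Repeat every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority-vote each group of three back to one bit."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
damaged = encode(msg)
damaged[4] ^= 1                 # flip one bit: a small "scratch"
assert decode(damaged) == msg   # the single error is corrected
print("recovered:", decode(damaged))
```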

    And note how blithely we talk about terabytes of data! I’m sure we can all ‘boast’ of the old days, which in my case was a Tandy Model 1 with 4K of memory, a tape recorder for storage and a speed of half a meg. (The college I attended, BTW, got a gift in my junior or senior year (1967-8) of an old mixed tube/discrete-transistor computer, which they housed in a building and used for student schedules, etc. Impressive to look at, but the MTBF was an hour or two.)

  52. John Hekman
    Posted Dec 30, 2005 at 11:15 AM | Permalink

    Interesting article today in the Wall Street Journal by Thomas Stossel of Harvard Medical School titled “Mere Magazines”, on the subject of peer-reviewed journals and their limitations. Concerning the stink over Merck’s research on Vioxx, he says:

    The message in all this is clear: Medical academics are saints — devoted selflessly to patient care — and corporate people are sinners, morally blinded by greed. But having worked in academic medicine for over 35 years and consulted for companies, this Manichean duality is inconsistent with my experience and a woeful distortion of reality. In a Sept. 8 article in the New England Journal of Medicine, I reported that no systematic evidence exists that corporate sponsorship of academic research contributes to misconduct, bias, public mistrust or poor research quality

    And on the issue of the purity of academic research incentives:

    On the other hand, many academic colleagues working in my field of basic biological research (I study how your body cells crawl around, which has no obvious commercial value) would run over their grandmothers to claim priority for a discovery, impose their pet theory on the field, obtain a research grant, win an award or garner a promotion. It’s the same in other scientific fields, and no wonder, because for relatively modest remuneration we compete for scarce resources and labor in obscurity to achieve small advances few understand or appreciate. We exercise our ambitions by publishing research papers in high-profile journals


    He also says that FDA due diligence for research backing up new drugs is far more stringent than anything in academic journals, which is exactly what Michael Crichton was reviled for saying.

  53. Michael Jankowski
    Posted Dec 30, 2005 at 11:43 AM | Permalink

    “Who cares if storage media are going to change in 20-30 years’ time? Aren’t we trying to ensure that any interested party can replicate studies published two or three years ago?”

    It appears that it can sometimes take a decade (or maybe two) to obtain the original data, programming, etc., required to replicate 2-3 year old studies.

    Wouldn’t it be nice if temperature proxy study data could have been archived in the 1960s (or earlier) in a digital format that could readily be obtained today? It seems to me there are a number of cases where the underlying data sets used for relatively recent publications are gone, or based on scanned-and-scaled graphs from old publications, etc.

  54. Posted Dec 30, 2005 at 12:52 PM | Permalink

    Re: #48. I second that. But how often have the reviews come back with an open-minded interest in scientific truth, e.g. “oh, that is an objection that may be important, and we should have that critical perspective in the literature”? An editor can’t do much when 3 reviewers decide that conclusions like “the stated results are not significantly different from natural variability or experimental error” are not an important contribution beyond the original article. The fault lies not only with the journals.

  55. ET SidViscous
    Posted Dec 30, 2005 at 3:18 PM | Permalink

    Interesting article that just came up. It doesn’t relate directly to the discussion, but it is interesting in the timing nonetheless.

    To answer Peter’s distraction (“Can anyone name me a digital storage form that was around twenty years ago and will still be around in twenty years time”): I can name two. Hard drives and magnetic tape (though granted the physical formats have changed). Hard drives are still the standard for data storage, and in many cases are supplanting other backup technologies like tape. The first magnetic hard disk dates back to 1956; the first modern hard disk as we know them today appeared in 1973, 32 years ago, and there is nothing on the horizon to replace hard drives. CDs, DVDs, floppies and the like are portable formats; for archiving, you never limit yourself to an individual format. What matters is that as you upgrade, the files are transferred. The advantage of digital data is that you can copy it an unlimited number of times with extremely limited degradation. Books and papyrus may last for centuries, but copying those means someone sitting down and re-writing them, which is notorious for losing data, as the copy is rarely identical.

    As to the sidebar about formats: most data files are not proprietary binary files like Word or Excel documents, whose formats can be lost over time. They are comma- or tab-delimited ASCII files and do not have those compatibility issues. The same holds true of .txt files, which are also plain ASCII with no formatting. To answer the unasked question, “What file format has been around for 20 years and will still be available in 20 years?”: the answer is ASCII, first standardized in 1963. Oftentimes proprietary files can still be read as ASCII, regardless of formatting; you just have to dig out the odd binary characters.

    Put simply, there are no issues with storing or reading data twenty years, or even more, hence, so long as the data isn’t discarded and is transferred onto upgraded systems. Don’t throw the baby (the data) out with the bathwater (obsolete hardware), in other words.

  56. Posted Dec 30, 2005 at 3:25 PM | Permalink

    This just in from the Financial Times:

    “Science is to publish the most prominent retraction in its long and distinguished history. The Washington-based journal said on Friday it had obtained signed agreement from all 23 South Korean co-authors to withdraw the landmark paper on therapeutic cloning published in May under the leadership of Hwang Woo-suk.”

  57. Reid B
    Posted Dec 30, 2005 at 5:08 PM | Permalink

    If Science changed its name to Political Science, it would be truth in naming and would explain their current behavior.

  58. Brooks Hurd
    Posted Dec 30, 2005 at 6:27 PM | Permalink

    It appears to me that neither Science nor Nature has implemented Quality Assurance or Quality Control procedures which would allow them even to know whether their authors have followed what guidelines they do have. They cannot enforce anything if they have not implemented procedures that would let the editors know whether or not there has been any sort of due diligence in their peer review process.

    By defending policies which have clearly failed, Kennedy only makes himself appear a tragic caricature of an editor.

  59. Posted Dec 30, 2005 at 8:10 PM | Permalink

    Ouch. Well said, Brooks. Class action suit, anyone? Among the damaged parties: the taxpayers of California who voted in the 6 billion dollar Proposition 71.

  60. John G. Bell
    Posted Dec 30, 2005 at 10:22 PM | Permalink

    What can you say? PBS gets into a ménage à trois with politics and climate change. You’d think they’d be able to pull it off with a bit of style, but no… Think drunken frat party and straw on the floor.

    But take your own look. Heck the true believers will lap it up.

  61. Ian Castles
    Posted Dec 31, 2005 at 12:57 AM | Permalink

    Re #48 (“I’ve concluded that Science and Nature have become sensationalist and biased rags that are too broad and too arrogant to be of use”) and #58 (Science and Nature “can not enforce anything if they have not implemented procedures that would let the editors know whether or not there has been any sort of due diligence in their peer review process”), it’s worth recalling a letter published in The Economist (16 February 2002) from Jeff Harvey, who had co-authored Nature’s review of Bjorn Lomborg’s “The Skeptical Environmentalist”. After accusing The Economist of “smearing the vast majority of the scientific community”, Harvey produced his trump card in the form of the following quotation from an article by Ove Nathan, former president of the University of Copenhagen:

    “There is no scientific periodical that outshines or is more critically edited than Nature, Science and Scientific American. In science, they speak with almost the same authority as the Bible of Christianity or the Koran of Islam. If all three periodicals pass the same severe judgment upon Lomborg, I personally would take it for gospel truth.”

  62. Brooks Hurd
    Posted Dec 31, 2005 at 1:02 AM | Permalink


    I was channel surfing tonight when I happened upon the PBS program “NOW” featuring climate change. I watched it expecting a distortion, but they really outdid themselves.

    One theme was that hundreds of thousands of scientists agree that climate change is caused by man (Richard Alley). Another was that essentially all who are opposed to the AGW argument are supported by the oil and gas industry. This mostly came from Ross Gelbspan (author of “The Heat is On”) and the reporter. There were a few comments by Inhofe, but well over 90% of the program was pro-AGW.

  63. Timo Hämeranta
    Posted Dec 31, 2005 at 6:11 AM | Permalink

    Hans Erren (No. 45) and Doug L (No. 50) misuse this forum to inquire about the Climatesceptics discussion group. For more information please visit my home page (below) and see the Climatesceptics Policies and Procedures.

    I wish you all a Prosperous New Year 2006

    Timo Hämeranta
    Home page:

  64. Posted Dec 31, 2005 at 6:25 AM | Permalink

    re 63:

    Sorry Timo, I _use_ this forum to ask why your forum is not public like all the other ones. Remembering the membership lists from before I was evicted, the list includes both pro and contra members (Jelbring and Schmidt), so why hide the postings?

    I can’t post on Climatesceptics to ask this, and I don’t like mailing you with four hundred cc’s. As you appeared on this blog, this is the proper way of asking you in public.

    you write:

    10. Confidentiality
    Our group is restricted for members only, for the purpose to enable confidential discussions.
    The author’s permission is needed, when you want to forward others’ messages outside this group.

    I ask: Why?

  65. Larry Hulden
    Posted Dec 31, 2005 at 6:45 AM | Permalink

    RE # 40 by Steve
    I didn’t mean that people should store their data on personal discs in the sense you commented on.
    The biggest problem with some people in Finland is that they don’t even store their data on their own discs; they simply delete the data after their articles have been published. For years I have tried to get them to understand that data sets should be preserved for future use. They do not understand, in the first place, why the data should be stored in a public place. They feel that it is something personal that would not interest outside people or readers of the article.

  66. Timo Hämeranta
    Posted Dec 31, 2005 at 7:01 AM | Permalink

    Hans Erren, you only misuse this blog.

  67. Doug L
    Posted Dec 31, 2005 at 7:16 AM | Permalink

    What’s needed is an oath for scientists upon receiving their bachelor degree.

    “I vow to practice science in an open and transparent way, which will include archiving all data, all code and all methods on a durable medium. To assure the public of the integrity of science, I shall never become involved in confidential scientific discussion groups in a place called anything like ‘Yahoo’.” 🙂

  68. Peter Hearnden
    Posted Dec 31, 2005 at 9:08 AM | Permalink

    Re #66 – Hans is well known for his scepticism, but he asked a reasonable question in #64; why the abusive reply? What have you to hide?

  69. Muirgeo
    Posted Dec 31, 2005 at 9:09 AM | Permalink

    Hans is right. Timo’s website has nothing to do with open discussion of issues but is merely a support group for wayward skeptics, in which Timo is the Grand Inquisitor, feeling very powerful with his ability to eject someone who doesn’t spew the proper dogma.

    This thread is about open science and reproducible results. Timo’s Joy Luck Club for Skeptics has nothing to do with openness or honest discussion.

  70. fFreddy
    Posted Dec 31, 2005 at 9:27 AM | Permalink

    His website, his rules.
    The link he gives leads on to a load of interesting looking papers, though.

  71. Steve McIntyre
    Posted Dec 31, 2005 at 10:08 AM | Permalink

    Re #69 – Timo has sought to limit participants in his discussion group to scientists rather than allowing participation from the general public. There are pros and cons to this philosophy, but I don’t see that it’s inherently objectionable.

    Timo has welcomed participation in discussions at his discussion group from Gavin Schmidt, Michael Schlesinger and Mike MacCracken and people with similar views without censorship, so you’re making an incorrect jibe about "Grand Inquisitor". I don’t recall any incidents between Timo and such scientists.

    People unfortunately get into fights; I don’t know what the problem was between Timo and Hans; both have been cordial to me. Whatever the problem was, I don’t want to consume space on this blog re-hashing it, and the problem has to be viewed in the context of Timo’s non-censorship of Gavin et al.

    If your concern is hypocrisy, look not at the mote in your brother’s eye, but at the beam in your own. Let’s talk about realclimate censorship, which is far more objectionable and hypocritical because they purport to be a public discussion in which serious discussion is encouraged. However, I (and others) have posted up comments with material scientific content and no editorializing, which they have censored, leaving a biased record as a result. Unfortunately, this is all too consistent with their scientific practices, where MBH withheld adverse statistics, a practice laughably repeated by Ammann and Wahl.

  72. Posted Dec 31, 2005 at 11:00 AM | Permalink

    re 71:

    Your blog, your rules 😉

  73. Doug L
    Posted Dec 31, 2005 at 11:38 AM | Permalink

    I have noticed Hans directing discussion to UKweatherworld, surely that is not an abuse of the forum.

    There is a potential problem with not letting the public view the posts at Climatesceptics. Any new person wishing to explore the controversy “learns” that all climate skeptics are funded by Exxon-Mobil, and seeing the website completely closed up like that just feeds the idea that they have to conspire to play tricks on the public.

    I’m personally satisfied that there is no such problem in the Climatesceptics forum, and few people are likely to have their views shaped by the Climatesceptics policy. But what do I know? Maybe it is something to worry about. I can see how it could unconsciously influence someone to ask a question of someone who is likely to be quite tired of having to answer.

  74. TCO
    Posted Dec 31, 2005 at 12:25 PM | Permalink

    The Illuminati listserve is closed too. 😦

  75. ET SidViscous
    Posted Dec 31, 2005 at 1:23 PM | Permalink

    “they simply delete data after their articles have been published.”

    Why? Why would someone spend the time to do the work, write the paper, and then throw everything away? What possible sense does that make?

  76. Doug L
    Posted Dec 31, 2005 at 1:40 PM | Permalink

    Re #74

    Funny, but a majority of Americans believe in a popular conspiracy theory, and probably more than one. I’ll leave out details so as not to disrupt the thread.

  77. Posted Dec 31, 2005 at 4:00 PM | Permalink

    Re #75 Hey Sid – great nom de plume. Why do people do anything irrational? For me, irrational scientific acts become clear when viewed from the point of view of power – getting it and keeping it. Power is created through hierarchies – by elevating one thing and diminishing another. The article is elevated over the data, because the article conveys more power and prestige. The data are a potential source of diminishment if people check them, and mostly it’s not entirely the author’s data anyway.

    Which reminds me of a thought I had about the hockey-stick. Could it be that it has aroused so much interest because it is a powerful phallic symbol? The more the past is flattened and the more erect the recent rise, the more power it has as a symbol. Sigmund Freud has human nature pegged in my book.

  78. Ed Snack
    Posted Dec 31, 2005 at 10:11 PM | Permalink

    Steve, re your “Unfortunately, this is all too consistent with their scientific practices, where MBH withheld adverse statistics, a practice laughably repeated by Ammann and Wahl.” This, it appears, is incorrect according to the gospel of Dano. There are some “grey cells” or whatever in one of the diagrams in MBH that represent the r^2, or so it is alleged!

  79. Aynsley Kellow
    Posted Dec 31, 2005 at 11:07 PM | Permalink

    An interesting discussion! Some posts mention GLP guidelines and Michael Crichton’s points about the standards required in drug testing – separation of teams preparing doses, administering doses, making diagnoses and analysing results.

    It is worth mentioning that this is not done (at considerable expense) out of the sense of public spiritedness of pharmaceutical companies, but because they are required by regulators to do so.

    Inevitably, much research into the efficacy and safety of chemicals and drugs is conducted in-house, or by external contract researchers, where the potential for conflict of interest is clear. Domestic regulatory systems therefore insist on certain standards before research findings are accepted for product marketing approval processes.

    This is expensive, and so there has been considerable effort to harmonise regulatory approaches, with Guidelines on GLP and MAD (Mutual Acceptance of Data) being developed in the OECD, to reduce costs and prevent (not always with total success!) different standards being applied as disguised barriers to trade.

    Significantly, one of the checks is an audit process. Registered labs, in other words, are subject to audit. But the very nature of audit is not that all labs are audited, but that they might be audited. It is this strategy of ‘sunshine’ – the credible risk that transparency will be applied to any case – which keeps standards high and costs at a minimum.

    We can add other safeguards in biomedical research, including the requirement for conflicts of interest to be stated. (Of course, no statement about the reputational benefits of publication, desire to save lives, or membership of political or environmental groups is commonly made or expected).

    It seems to me that the problem ultimately lies not just with journal editors, granting institutions and universities/research institutions and the like, but with politicians (and science politicians), including those at the international level – in this case the IPCC.

    The IPCC has not insisted – and must insist, if it is to salvage its credibility – on anything like the standards of biomedical or agricultural chemical science applying to what counts as climate ‘science’. When researchers have control of the data and its analysis, have collaborated with most of the likely referees (where many journals are not double-blind refereed), and then sit in judgment on its inclusion in IPCC reports (and act as gatekeepers on competing views), then we have a problem. As far as acceptable science is concerned, (with apologies to the Wizard of Oz) we’re not in Kansas any more.

    Steve and Ross McKitrick have made these and other points before, but I think they are worth repeating.

  80. Jo Calder
    Posted Jan 1, 2006 at 8:12 AM | Permalink

    #78: Ed: you have been struck by one of Dano’s constructed narratives. We’re all po-mo now, you know.

    r2 is indeed shown in one of the MBH diagrams, but the reported calculation only refers to the calibration period (from 1850 onwards), IIRC.
    Cheers, — Jo

  81. Steve McIntyre
    Posted Jan 1, 2006 at 8:42 AM | Permalink

    I posted up a survey of MBH on the r2 statistic at:

    As Jo notes, Mann showed a figure with r2 statistics for the AD1820 step (which had 112 proxies, including actual instrumental records as "proxies"). The dispute is over the failed values of the statistic in the earlier steps, especially the controversial AD1400 step. The above posts show that Mann’s source code calculated the r2 statistic at the same time as the RE statistic, but the SI, which reported statistics for each step, failed to include the r2 statistic. The display of the statistic for the AD1820 step, combined with its withholding for the other steps, is particularly misleading, as is the claim in IPCC of skill in "cross-validation statistics" when in fact the reconstruction passed only one test (the RE statistic). Spurious RE statistics occur, as shown in our GRL paper.

    I discussed this topic with Dano here. The Barton Committee asked Mann why he did not report the r2 statistic. Mann said that he did not use it, because they “preferred” the RE statistic. This answer obviously cannot be reconciled with the figure in Nature illustrating the r2 statistic for the AD1820 step. Maybe the Nature diagram was slipped into his dietary supplements.
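
    For readers unfamiliar with the two statistics, neither is exotic. Here is a toy sketch in Python (illustrative data only; this is not the MBH code or series) showing how a reconstruction can score a strong RE while its verification r2 is near zero:

```python
import numpy as np

def verification_stats(obs, recon, calib_mean):
    """Verification r^2 and RE for one reconstruction step.

    obs, recon : observed and reconstructed series over the
        verification period.
    calib_mean : mean of the observed series over the calibration
        period; RE benchmarks the reconstruction against a naive
        forecast that always predicts this value.
    """
    resid = obs - recon
    re = 1.0 - np.sum(resid ** 2) / np.sum((obs - calib_mean) ** 2)
    r2 = np.corrcoef(obs, recon)[0, 1] ** 2
    return r2, re

# Toy case: verification-period observations hover near 0, the
# calibration period was offset (mean 0.5), and the "reconstruction"
# is pure noise with no relation to the observations at all.
rng = np.random.default_rng(0)
obs = 0.1 * rng.standard_normal(50)
recon = 0.1 * rng.standard_normal(50)
r2, re = verification_stats(obs, recon, calib_mean=0.5)
```

    Because the naive benchmark (always predict the calibration mean) is so poor here, even a meaningless reconstruction beats it handily: RE comes out high while r2 stays near zero. That is one sense in which a reported RE can be "spurious" when the r2 statistic is withheld.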

  82. muirgeo
    Posted Jan 1, 2006 at 6:00 PM | Permalink

    Mr Mcintyre,

    I understand you are all for open processes. Could you please share with us the exact post that was censored by RealClimate?

    If your concern is hypocrisy, look not at the mote in your brother’s eye, but at the beam in your own. Let’s talk about realclimate censorship, which is far more objectionable and hypocritical because they purport to be a public discussion in which serious discussion is encouraged. However, I (and others) have posted up comments with material scientific content and no editorializing, which they have censored, leaving a biased record as a result. Unfortunately, this is all too consistent with their scientific practices, where MBH withheld adverse statistics, a practice laughably repeated by Ammann and Wahl.

    Comment by Steve McIntyre – 31 December 2005 @ 10:08 am

  83. Armand MacMurray
    Posted Jan 1, 2006 at 7:49 PM | Permalink

    As noted in the quote by Steve, there have been multiple such incidents, but here’s a nicely documented example:
    The realclimate topic that Steve tried to post on in this case was:

  84. TCO
    Posted Jan 1, 2006 at 9:40 PM | Permalink


    I have had temperate, on-topic science posts censored on that site. There’s another data point.

  85. Steve McIntyre
    Posted Jan 1, 2006 at 9:47 PM | Permalink

    #82. Also see

  86. Doug L
    Posted Jan 1, 2006 at 10:07 PM | Permalink

    Re #82

    There should be more evidence at the thread at UKweatherworld called “RealClimate Shadow Postings”:

    OT: I think that Bible verse is more about impulsive sorts of judgements, but not entirely, irrational ones count too. 🙂 JMHO

  87. Hans Erren
    Posted Jan 2, 2006 at 4:57 AM | Permalink



    December 9, 2005 — In Final review

    Have you had an opportunity to compose a reply yet?

  88. Steve McIntyre
    Posted Jan 2, 2006 at 7:28 AM | Permalink

    We’ve been given an opportunity to reply, which is now due at the end of January. I asked for an extension of a couple of weeks because of the holidays and family circumstances, which Famiglietti granted. I guess they are being a little more cautious after the Ritson debacle, where his comment was taken out of the garbage can, reviewed in isolation, and accepted, and then, after our reply was reviewed, was rejected. The Ammann Comment was also originally rejected; like the Ritson Comment, it was taken out of the garbage can, very slightly amended, and they agreed to re-review it. We were told that our Reply and their Comment will be sent out for review together (which is in accordance with AGU policies).

  89. per
    Posted Jan 2, 2006 at 8:59 AM | Permalink

    hi there
    and congratulations.
    In a way, I am disappointed that the ritson reply was rejected; I was looking forward to seeing your excoriating reply in print 🙂
    best wishes with disposing of the rest of the un-dead !

  90. muirgeo
    Posted Jan 2, 2006 at 9:07 AM | Permalink

    TCO, Armand,

    I see no fewer than 40 posts by you two at RealClimate. I see the post from Armand that was supposedly censored and could not for the life of me figure out why they would have censored it. Have you asked them specifically why they censored any of your “innocuous” posts? I notice that posts are reviewed before appearing. Mine from yesterday took 6 hours before it was posted. I wonder if what you are calling censorship is just things falling through the cracks, because most of the exchanges seem to be one-sided in favor of the RealClimate people. I would think they’d invite the fodder.

  91. Hans Erren
    Posted Jan 2, 2006 at 9:15 AM | Permalink


    The frustrating thing about realclimate is that you never know if a critical comment will get posted. This heavily discourages critics: would you spend 15 minutes on a comment, knowing that it may disappear into nothing?

  92. TCO
    Posted Jan 2, 2006 at 9:52 AM | Permalink


    To your question, yes, I have asked about blocked posts a few times. My experience: they don’t respond to such inquiries.

    I have definitely felt the hand of censorship at RC. Several posts were blocked. If you doubt me, read back over this site; there are several (not all) that I posted over here in order to have a copy of them. It happened often enough, while other posts were being allowed, that I can tell the pattern was unfair.

    As you say, I’ve had plenty of opportunity and time posting over there. I can tell that contrary posts tend to be censored, though I have had some get on. There has been more openness than usual recently; I think the victories that the stick-breakers are winning in the hard-core science journals are having an effect. Having spent time on both sites, it’s pretty obvious that while neither is perfect, Steve is much more interested in explaining and ground-truthing his math. When it’s confusing, it’s just because of the level or occasionally his style, but he will engage on critical comments. Gavin et al on the other side really hate to get into the guts of technical discussion, and when they do, they tend to be dismissive, obfuscatory, and faux-authoritative rather than really hashing things over in a Feynman-like manner.

    This site is incredible, with about 4 Ph.D. theses’ worth of 3/4-completed analyses that are shrewd in problem selection and sophisticated in math. I think even the stickers can tell that Steve can bring it on the stats/methods discussions. That’s why you are seeing people like Rasmus and the like run to a “statistics* doesn’t matter” retort.

    * i.e. the mathematical description of the significance of the data observed

  93. Armand MacMurray
    Posted Jan 2, 2006 at 2:23 PM | Permalink

    Re: #90
    Just to clarify, the censored post in the link I posted above was Steve’s, not mine, although I have also had a number of my posts censored.

    RC’s efforts to prevent any links to can lead to farcical situations, such as that at :
    To their credit, RC had finally gotten around to mentioning von Storch’s and Huybers’ published comments on M&M. On that RC page, they provided free links to vS and H’s comments, but no links to M&M’s replies, although they at least mentioned that the replies existed! When queried about this (comment #1), Gavin posted links to *subscription-only* versions of the M&M replies, ensuring that most would be unable to access them, and claimed he couldn’t find any non-subscription links. Michael Mayson then posted a comment pointing out that the replies were freely available to all at, and the RC moderators even commented on it. Shortly thereafter, a higher authority at RC decided that either (a) perhaps M&M’s replies were too good for public consumption 🙂 or (b) having posted links to just wouldn’t do, and deleted Mayson’s posted comment and the RC reply! (See comment #23 here.) The later ruckus about RC adhering to its own posting policy shamed them into allowing my much later repost of the links to the M&M replies.

    Similarly to TCO’s experiences, my questions about why postings don’t appear have been ignored. As for “I wonder if what you calling censorship is just things falling through the cracks…”, I don’t consider my posts as censored until 3 business days have passed, and many other posts & moderators’ replies have appeared.

    Hans is spot-on in post #91 about the effect this has on serious discussion at RC: why bother investing serious time and thought in a comment when chances are high it either won’t appear or will be ignored if the RC moderators can’t quickly dismiss it without spending much time on it? The RC staff seem most comfortable with a lecture-type model where they impart wisdom as set-piece presentations and avoid engaging very deeply with the readers.

  94. pj
    Posted Jan 4, 2006 at 1:50 PM | Permalink

    Kennedy is a political hack and has no business editing a scientific journal.

  95. Geoff
    Posted Feb 5, 2006 at 1:53 AM | Permalink

    It’s starting to look like Nature is reading CA. First they editorialized in favor of improved economic guidance for the IPCC, and now they have a special report about fraud and peer review. In the report published 2 Feb 06, titled “Should journals police scientific fraud?”, they start: “Editors don’t expect peer review to catch deliberate fakers. But recent scandals mean that journals are looking at other ways to detect fabricated papers.”

    They start by absolving themselves of responsibility for detecting fraud, citing their reliance on volunteer staff and reviewers, and go on to state: “other groups seem to agree that the primary responsibility for determining whether a paper ought to be shelved in fiction or non-fiction should not rest with journals. This opinion is based partly on the patchy staffing and funding of most journals, which are volunteer-run society publications. [However, Nature itself is published by the Nature Publishing Group.] ‘Journals don’t have the resources or the expertise,’ says Mary Scheetz, director of extramural research at the Maryland-based Office of Research Integrity, which investigates ethical violations in work funded by the US National Institutes of Health.”

    The scientific journals seem to want to improve the chances of detecting fraud, commenting that “in the past few years, journal editors have been taking a more proactive approach to dealing with fraud, and exploring what they can do with the resources they have”.

    They give an example: “Stephen Evans, a statistician at the London School of Hygiene and Tropical Medicine, occasionally analyses papers in which the raw data are suspect. Tricks include looking for ‘digit preference’, the tendency of humans to round towards 0s and 5s, or the amount of variance in the data. “It is very difficult to invent data that has the right variability,” says Evans. But he agrees that the time and expense make checking every study “totally impractical””.
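
    The digit-preference check Evans describes is simple enough to sketch. A toy version (my own illustration, not Evans’s actual procedure) tallies the terminal digits of integer readings and measures their departure from uniformity with a chi-square statistic:

```python
from collections import Counter

def terminal_digit_chi2(readings):
    """Chi-square statistic for non-uniformity of terminal digits.

    Genuine measurements tend to have roughly uniform last digits,
    while fabricated or heavily rounded data over-use 0s and 5s.
    Compare the result against a chi-square distribution with 9
    degrees of freedom (about 16.9 at the 5% level). A large value
    is a red flag inviting closer scrutiny, not proof of fraud.
    """
    last = [abs(int(x)) % 10 for x in readings]
    counts = Counter(last)
    expected = len(last) / 10.0
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))
```

    Evans’s other trick, checking whether the data show too little variance, requires a model of the expected spread and so is harder to reduce to a one-liner, but the principle is the same: invented numbers rarely have the statistical texture of real ones.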

    In a spot of good news, they go on to say that “journals are also starting to request that researchers carry out their own checks before even submitting a paper. Nature now advises authors to include independent verification for certain cloning papers, for example. And the Journal of the American Medical Association requires that industry-funded trials go through independent data analysis” [!].

    The report winds up on a somewhat mournful note: “Ultimately, if a journal does uncover evidence of fraud, it has to rely on the researchers’ institution or funding agency to investigate fully. But this depends on such bodies having the will and authority to do so. When the British Medical Journal tried to get someone to investigate the work of cardiologist Ram Singh of Haldberg Hospital and Research Institute in Moradabad, India, for example, no institution or scientific body could be persuaded to make a judgment on the case. Singh went on to publish similar work in The Lancet. In the end, both journals published expressions of concern, but did not feel able to retract the papers. And in an ongoing case involving RNA researcher Kazunari Taira, the University of Tokyo seems unlikely to get to the bottom of whether suspicious data were faked, because it does not have the authority to make a full inquiry.”

    Their concluding paragraph is reminiscent of the exchanges Steve has had with the NSF (although it’s hard to see that the NSF is really powerless in this regard).

    I guess that folks at Nature have also not paid attention to the many postings here on archiving the data, which would also go a long way to address real fraud.

    Special Report
    Should journals police scientific fraud?
    Emma Marris
    Nature 439, 520-521 (2 February 2006) | doi:10.1038/439520a

  96. Louis Hissink
    Posted Feb 5, 2006 at 4:10 AM | Permalink

    # 91

    One remedy: compose the criticism beforehand, then paste it into the comment space.
  97. Louis Hissink
    Posted Feb 5, 2006 at 4:19 AM | Permalink

    Nobel Laureate Hannes Alfvén once remarked (paraphrasing): there are too many scientists.

3 Trackbacks

  1. […] have to take my word for the abuse that Kennedy has done to the scientific process. He is noted for saying on PBS: … the journal has to trust its reviewers; it has to trust the source. It can’t go […]

  2. […] Dr. Alberts, Science Magazine had a very poor record of compliance with your own stated policies under the previous Editorship of Donald Kennedy. Some […]

  3. […] since you were a graduate student, a once-stellar magazine has fallen on hard times. Starting with Donald Kennedy, and continuing under Bruce Alberts, it has become a shabby vehicle for strident climate activism […]
