There’s an interesting article online here by David Goodstein of Caltech, in which he observes that misconduct problems seem rife in the biological sciences administered by NIH and very infrequent in the sciences administered by NSF. He identifies three risk factors common to cases of fraud, noting that the expectation of exact reproducibility in the physical sciences is a major deterrent. Given the ingredients in climate science, I hardly need to editorialize.
However, most cases arise from more self-interested motives. In the cases of scientific fraud that I have looked at, three motives, or risk factors, have always been present. In all cases, the perpetrators:
1. were under career pressure;
2. knew, or thought they knew, what the answer would turn out to be if they went to all the trouble of doing the work properly; and
3. were working in a field where individual experiments are not expected to be precisely reproducible.
It is by no means true that fraud always occurs when these three factors are present; quite the opposite: they are often present, and fraud is quite rare. But they do seem to be present whenever fraud occurs. Let us consider them one at a time.
Career Pressure: This is included because it is clearly a motivating factor, but it provides no distinctions. All scientists, at all levels from fame to obscurity, are pretty much always under career pressure. On the other hand, simple monetary gain is seldom, if ever, a factor in scientific fraud.
Knowing the answer: If we defined scientific fraud to mean knowingly inserting an untruth into the body of scientific knowledge, it would be essentially nonexistent, and of little concern in any case, because science would be self-correcting. Scientific fraud is always a transgression against the methods of science, never purposely against the body of knowledge. Perpetrators always think they know how the experiment would come out if it were done properly, and decide it is not necessary to go to all the trouble of doing it properly.

The most obvious-seeming counterexample to this assertion is Piltdown Man, a human skull and ape jaw planted in a gravel pit in England around 1908. If ever a fraudulent physical artifact was planted in the scientific record, this was it. Yet it is quite possible that the perpetrator was only trying to help along what was known, or thought, to be the truth. Prehistoric remains had been discovered in France and Germany, and there were even rumors of findings in Africa. Surely human life could not have started in those uncivilized places. And, as it turned out, the artifact was rejected by the body of scientific knowledge. Long before modern dating methods showed it to be a hoax in 1954, growing evidence that our ancestors had ape-like skulls and human-like jaws had made Piltdown Man an embarrassment at the fringes of anthropology.
Reproducibility: In reality, experiments are seldom repeated by others in science. When a wrong result is found out, it is almost always because new work based on the wrong result does not proceed as expected. Nevertheless, the belief that someone else can repeat an experiment and get the same result can be a powerful deterrent to cheating. This appears to be the chief difference between biology and the other sciences. Biological variability – the fact that the same procedure, performed on two organisms as nearly identical as possible, is not expected to give exactly the same result – may provide some apparent cover for a biologist who is tempted to cheat. This last point, I think, explains why scientific fraud is found mainly in the biomedical area.