Originally published 8 December 1986
Two weeks ago, a team of researchers at Harvard’s Dana-Farber Cancer Institute retracted a paper published earlier this year [1986] in the journal Science. The paper reported the discovery of a molecule called interleukin-4A, which was said to play a role in amplifying the immune responses of the human body. The isolation of the molecule was considered a promising step in the search for a cure for cancer.
It now appears that interleukin-4A does not exist. The data on which the report was based are allegedly fraudulent, contrived by one member of the team after initially promising experiments failed to pan out. The author of the deception is reported to have said, “There was a lot of pressure in the lab and I didn’t have the courage to tell them.”
This latest scandal in biomedical research recalls several other cases of purported fraud within the past decade, two of them within the Boston area. In 1980, Dr. John Long, a senior researcher at Massachusetts General Hospital, resigned after admitting he had invented data relating to an experiment on Hodgkin’s disease, a form of cancer. Three years earlier, Dr. Marc Straus of Boston University and University Hospital had been accused of submitting reports on cancer research that contained repeated falsifications. The highly respected physician resigned under fire, insisting that he was the victim of a staff conspiracy. The Globe later ran a five-part Spotlight series on the Straus affair that led to an investigation by the National Cancer Institute.
The scandals involving Long and Straus caused handwringing and soul-searching within the research community. The Dana-Farber scandal is sure to do the same. These are the questions that will be asked: Are there pressures in science that encourage fraud? How can deception be prevented? Just how serious a problem is fraud in science?
Competition is intense
The pressures that encourage fraud are obvious enough. Contemporary science is a high-risk, high-stakes game. Successful research is a prerequisite to advancement and tenure. More fundamentally, the ability to do research is increasingly dependent upon federal funding. Long used $750,000 in federal funds for his research on Hodgkin’s disease. Straus was awarded nearly $1 million in cancer research grants over a three-year period. Grants are usually made on the basis of an established track record in research. The competition for funds is intense.
In a discussion of fraud at last year’s meeting of the American Association for the Advancement of Science, Dr. Robert Petersdorf, vice chancellor for health sciences at the University of California at San Diego, suggested that science today is “too competitive, too big, too entrepreneurial, and too bent on winning.” If reported cases of fraud or alleged fraud are an indicator, these pressures seem to apply most forcefully to biomedical research.
Walter Stewart and Ned Feder, research scientists at the National Institutes of Health in Bethesda, Md., conducted a survey of the work of research cardiologists at two highly respected medical schools. They claim that 35 of the 47 scientists in their sample had engaged in “dubious or substandard practice.” The details of the survey have not been published, so it is difficult to assess its validity. If the survey reflects the actual state of medical research, then its conclusions are deeply troubling.
Still, cases of outright fraud in most areas of science appear to be rare, considering the number of people and the amount of money involved. In my experience, people who choose science as a career are more often motivated by an honest intellectual curiosity than by a desire for personal wealth or advancement. But scientists are no less human than anyone else. To reinforce the integrity of research, science has evolved a system of peer review that makes blatant deception very difficult to perpetrate.
Creative ‘rounding off’
Blatant deception aside, what about less serious cases of merely fudging the facts? How often do scientists play down data that contradict a hypothesis, or use statistical methods that show data in the most favorable light? This kind of minor fraud may be common. As one who has taught introductory lab courses, I know that a little creative “rounding off” and tidying of data begins early in a scientist’s career, usually without any malicious intent. It is hard to imagine that it doesn’t continue.
It may even be true that a modest nudging of the facts has occasionally worked to the benefit of science. Ptolemy, Galileo, Newton, and Mendel have all been accused by historians of minor deceptions. Paul Feyerabend, a philosopher of science who can be relied upon for provocative comment, has argued that small-scale cheating is essential to the advancement of science. No theory, no matter how good, will coincide with observation in every detail. According to Feyerabend, a scientist may sometimes best serve truth by suppressing the scrupulous reporting of facts in favor of good rhetoric. It is to be hoped, I suppose, that the scientist who bends facts to favor rhetoric is as capable of recognizing the truth as Newton or Mendel.