We tend to think of science as a dispassionate (impartial, neutral) search for truth and certainty. But is it possible that we are facing a situation in which wrong or distorted information is being produced on a massive scale? Is it possible that certain scientific disciplines are facing a crisis of credibility? Mounting evidence suggests this is indeed the case, which raises two questions: How serious is the problem? And what could explain it?
How Serious Is the Problem?
The title of an editorial in the prestigious medical journal The Lancet, dated April 6, 2002, asks the question, “Just How Tainted Has Medicine Become?”4 The article states, “Heavily, and damagingly so, is the answer.” Among other things, in 2001, researchers completed experiments with biotechnology products in which they had a direct financial interest, and doctors did not tell their patients that others had died using these products when safer alternatives were available. In the same journal, dated April 11, 2015, Dr. Richard Horton stated the gravity of the problem as follows: “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue . . . science has taken a turn towards darkness.”5
In 2004, under the heading of “Depressing Research,” the editor of The Lancet had this to say about antidepressants for children: “The story of research into selective serotonin reuptake inhibitor (SSRI) use in childhood depression is one of confusion, manipulation, and institutional failure. . . . In a global medical culture where evidence-based practice is seen as the gold standard for care, these failings [i.e., of the USA Food and Drug Administration to act on information provided to them about the harmful effects of these drugs on children] are a disaster.”6 After being editor of the New England Journal of Medicine for 20 years, Dr. Marcia Angell stated that “physicians can no longer rely on the medical literature for valid and reliable information.”7 She referred to a study of 74 clinical trials of antidepressants that indicates that 37 of 38 positive studies were published. In contrast, 33 of the 36 negative studies were either not published or published in a form that conveyed a positive outcome. She also mentions the fact that drug companies are financing “most clinical research on the prescription drugs, and there is mounting evidence that they often skew the research they sponsor to make their drugs look better and safer.”
In 2011, researchers at Bayer set out to replicate 67 recent drug-discovery findings in preclinical cancer biology. In more than 75 percent of cases, the published data did not match their attempts to replicate them.8 In 2012, a study published in Nature announced that only 11 percent of the sampled preclinical cancer studies coming out of the academic pipeline were replicable.9
In the prestigious journal Science, in 2015, the Open Science Collaboration10 presented a study of 100 psychological research studies that 270 contributing authors tried to replicate. An astonishing 65 percent failed to show any statistical significance on replication, and many of the remainder showed greatly reduced effect sizes. In plain terms, the evidence for the original findings is weak.
A discovery in physics, the hardest of all hard sciences, is usually thought of as the most reliable in the world of science. However, two of the most vaunted physics results of the past few years—“cosmic inflation and gravitational waves at the BICEP2 experiment in Antarctica, and the supposed discovery of superluminal neutrinos at the Swiss-Italian border—have now been retracted, with far less fanfare than when they were first published.”11
These examples are just the tip of the iceberg,12 and they indicate, in the words of Dr. Horton (quoted earlier), “that something has gone fundamentally wrong with one of our greatest human creations.”13 So let us turn to the next question.
What Could Explain This?
First, although replication (confirmation) is essential for maintaining scientific credibility, there are many reasons studies fail to replicate: for example, a difference in initial conditions (experimental set-up) or theoretical understanding between the original investigators and those attempting the replication, or an original discovery and interpretation that was simply false. The problem is exacerbated when, “in most scientific fields, the vast majority of the collected data, protocols, and analyses are not available and/or disappear soon after or even before publication.”14 It is often forgotten that small errors can have large effects. In 2013, three years after two economists from Harvard University published research showing that when a country’s debt exceeds 90 percent of GDP there is an associated plunge in economic growth, a student from the University of Massachusetts ran into trouble when he tried to replicate their findings. He found they “had made several mistakes including a coding error in their spreadsheet.”15 Nevertheless, the economists’ observations had a major impact on the public policy debate.
Second, career aspirations and the yearning for prestige, competition between researchers and for limited resources, commercial gain (the profit motive) that leads to selective reporting, the fixing of “small errors” so that a study appears to yield a more favorable result, and deliberate fraud are impossible to deny.16 One well-known problem with statistical analysis, the practice commonly known as “p-hacking”—collecting or selecting data until non-significant results become significant—is especially rife in the biological sciences.17 Another problem is the “tuning” of the models that scientists use to explain the phenomena they observe. For example, “According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this ‘overfitting’.”19 Taken together, these problems make it difficult to decide what to accept as evidence and what not to accept.
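The inflationary effect of p-hacking described above can be demonstrated with a simple simulation. The following is a minimal sketch in Python (not from any study cited in this article) of one common form of p-hacking known as “optional stopping”: an experimenter keeps adding data and re-running the significance test until the result crosses the p < 0.05 threshold. All names, batch sizes, and thresholds are illustrative assumptions, and the p-value uses a simple normal (z-test) approximation.

```python
import math
import random

def p_value_two_sided(sample):
    """Two-sided p-value for 'mean differs from zero', using a
    normal (z-test) approximation -- adequate for illustration."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    # Standard normal CDF expressed via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def peek_until_significant(rng, max_batches=50, batch=10, alpha=0.05):
    """Simulate optional stopping: add a batch of data, re-test,
    and stop as soon as p < alpha. The data are pure noise, so any
    'significant' result is a false positive."""
    data = []
    for _ in range(max_batches):
        data.extend(rng.gauss(0, 1) for _ in range(batch))  # true effect is zero
        if p_value_two_sided(data) < alpha:
            return True  # declared 'significant' despite no real effect
    return False

rng = random.Random(1)
false_positives = sum(peek_until_significant(rng) for _ in range(200))
print(f"'Significant' findings in 200 null experiments: {false_positives}")
```

If each experiment were tested only once, about 5 percent (roughly 10 of 200) would come out “significant” by chance; with repeated peeking, the false-positive count climbs several times higher, which is precisely why data collected or selected until significance appears cannot be taken at face value.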
A third explanation relates to the peer review process, which is “deadly effective at suppressing criticism of a dominant research paradigm.”19 This means, among other things, that results contradicting previous findings may be suppressed and false dogma perpetuated. But can science enlarge our understanding of phenomena when transparency, critical thinking, and the questioning of central tenets are rigorously restricted?
A fourth explanation of flawed scientific results relates to the presuppositions that influence researchers’ interpretation of their results. This is hardly ever discussed in the official research literature, and when it is acknowledged as a problem, the reader is left in the dark as to what exactly it means. Dr. Horton illustrates the point when he states that “scientists too often sculpt data to fit their preferred theory of the world [i.e., worldview].” In other words, we think about the world and ourselves against the background of some conceptual scheme or framework of beliefs. This has at least one implication: evidence does not “speak for itself”; research results are not interpreted from a neutral point of view.
There is another “background assumption that almost all practitioners in the biomedical sciences agree upon and that is naturalism.”20 Naturalism is problematic because human problems are often reconceptualized and subsequently described in terms that are consistent with the evolution story but otherwise in conflict with alternative perspectives. The following is just one example.
According to Laurence Tancredi,21 psychiatrist/lawyer and professor of psychiatry at New York University, “Morality begins in the brain.” He says that “new developments in neuroscience” have altered our concept of deception, abuse, manipulation, uncontrollable sexual desires, greed, murder, theft, infidelity—of every possible sin and immoral act related to the Ten Commandments—“into problems of brain biology.” What we consider as sins or moral transgressions actually “created an evolutionary advantage during certain early phases of man’s development.” For instance, “The compulsion to eat . . . had the advantage of holding people over during periods of famine. Women having ‘extramarital’ affairs resulted in children, which increased genetic diversity. Even homicide, during periods of limited resources, ensured the survival of some over others.” In sum, he says, “Morality in humans evolved from other primates and depends on the brain.”
In the first place, chimps often deceive, manipulate, and kill one another, but no neuroscientist has ever suggested they suffer from “problems of brain biology.” Thus, what we are presented with is a bizarre form of logic: chimps that deceive, manipulate, and kill have no brain problems, but humans who do these things have them. Yet by the same logic, the cannibalism, infidelity, and murder that were not sins for our alleged ancestors are also now not sins for us, because these things are brain problems. Tancredi’s evolutionary and neuroscientific explanation of immoral conduct carries a further bizarre implication: those who will one day “appear before the judgment seat of Christ, that each one may receive the things done in the body, according to what he has done, whether good or bad” (2 Corinthians 5:10) will be people with brain problems.
Tancredi’s account of morality may have two unintended consequences. On the one hand, it may lead Christians to think anew about the Bible’s teaching on the causes of wrongdoing, the place of praise, blame, and responsibility in their moral practices, and the treatment of wrongdoers. On the other hand, if morality “begins in the brain,” then researchers who falsify and suppress negative evidence in order to deceive others may conclude that they merely have brain problems. And if that is science, then it is ludicrous, to say the least.
To conclude this brief overview of explanations for flawed scientific results, I wish to make four points. Firstly, it is always good to ask whose interests the research serves when, for example, a scientist claims that “the soul is dead” and that it “is what modern neuroscience promises to deliver.”22
Secondly, the aim of a conceptual analysis is to show that the articulation of a scientific explanation is in some way incoherent, that it is logically and conceptually unintelligible, that an explanation of some property is inappropriate, or that a question being asked of the object being investigated is unintelligible. Thus, when empirical problems are addressed without adequate conceptual clarity, misconceived questions and goals are bound to be raised, and misdirected research is likely to ensue.
Thirdly, many scientists are able to see that the goal of science is the seeking and presentation of truth, and that any deviation from this goal adversely affects our lives; but they refuse to accept that the scientific method is only one source of truth among others. What needs serious reevaluation is the naturalistic materialist and biologically reductionist worldview that dominates academia; it is a wholly misguided conceptual framework for articulating and explaining human origins, personal and interpersonal problems, and how they may be rectified.
Finally, if scientific evidence is the basis of scientific authority, then critique of that authority is unavoidable for those who are able to see through the interpretations and explanations of research results. Close scrutiny of interpretations and explanations is therefore imperative if trust in scientific authority is to provide ontological, epistemological, and moral guidance in our lives.