Sunday, October 15, 2017

The "Faith" Of Scientists

William Wilson
First Things 

According to the popular understanding, science is simply the comparing and ordering of sense data originating from experiment or from the observation of natural phenomena. If we are lucky, patterns or other forms of order emerge from these data. Scientists can then build theories that describe and abstract these regularities, and perhaps even use them to make predictions about as-yet-unobserved phenomena. Finally, these theories are put to the test by new observations and discarded if they contradict the best available new data. This is a process of induction, whereby simple, raw observations are grouped together in such a way that the law that connects them becomes evident. The higher-level relations and associations are grouped in turn, such that the meta-law which underlies them all comes into focus, and so on higher and higher up the chain of abstraction, toward theories ever more rarefied and powerful. Yet in principle, even the most complex theory does nothing more than tie together a vast number of simple observations, each of which is pure, objective, and incontestable.

A crucial feature of this story, and the source of a great deal of its attraction, is the freedom it offers from the oppressive legacies of ideology, privilege, and prejudice that taint every human institution. If science is nothing more than the cataloging and systematization of information directly accessible to our senses, then it could be a source of knowledge that is objective, neutral, and accepted by all. Moreover, if each step of this process is solely determined by the data—that is, if at no point does a theorist have a free choice between alternative interpretations or generalizations—then we can be sure that no lingering taint in the scientist’s mind will impress itself upon the completed theory. The scientist is like an automaton, albeit a clever and subtle one, transforming inputs into outputs, discovering rather than inventing, performing a mechanical rather than an artistic task.

This is why those most invested in science as a way of knowing the world react with such horror to the proposal that values, even the progressive values they overwhelmingly share, should inform the scientific method. The threat is not so much that such a program would have grave consequences if carried out, as that the assumptions behind it threaten to undercut what they believe makes science unique. If such a thing as “feminist science” or “XYZ science” were even possible, then it would mean that science as it currently exists might not be perfectly neutral and value-free. It would imply that there are many possible ways of doing science, and that those different ways might reach different answers. Worst of all, it would make who does science a relevant question—a sort of scientific Donatism—opening up the field to further suspicion from its ideological enemies.

The trouble is that this idealized view is wrong. The political, moral, and religious views of a scientist really do affect the results that he gets. Consider the process of theory formation. A theorist is struck by inspiration: Something innocuous, like a passing remark by a stranger at the grocery store, suddenly triggers the realization that two unrelated phenomena can be linked, or an existing body of theory can be simplified or unified through a new form of explanation. The scientist then goes looking for evidence to bolster his theory (the precise opposite, it’s worth noting, of Karl Popper’s rather idealistic conception of the scientific method). Given the messiness and flexibility of all real-world datasets, he will invariably be able to find it. Partisans of the old theory remain unmoved and argue, convincingly, that looked at in a different way, the data support their interpretation instead. Often the ensuing scholarly battle stimulates the development of new experimental techniques, and sometimes these new methods are able to settle the matter decisively. Other times the battle can rage for years, or even decades. Even when questions are settled, it usually isn’t because either the old guard or the upstarts won their rivals over, but because one party failed to make the case to the next generation of students and eventually died off.

Scientists who are caught in the raptures of a new theory will often stick with it for a time even when all available evidence counts against it. Sometimes, such a theory even wins in the end. A dramatic, and perhaps surprising, example comes from one of the most famous scientific theories of the twentieth century: Albert Einstein’s special theory of relativity. A year after Einstein proposed it, the theory suffered a devastating blow from the famous experimentalist Walter Kaufmann, who published an empirical result that appeared to disprove the new theory. We now know that Kaufmann’s equipment was insufficiently sensitive to detect the effect Einstein predicted, and moreover that it was miscalibrated, but it took a decade before this became clear. In the meantime, Einstein brushed aside the criticism and continued propounding his theory, winning an increasing number of converts over time, despite the fact that the best experimental evidence had “refuted” it.

The experience evidently had a profound effect on Einstein. He began his career as a dedicated positivist and empiricist, only losing the faith when it failed him again and again. Rigorous attempts to induce laws from the data brought him only years of stagnation and failure while he searched for the field equations of general relativity, and nearly cost him priority for the discovery. In desperation, Einstein searched for the mathematically simplest explanation, embracing prior philosophical criteria as a constraint on the space of possible theories, and then found his answer almost immediately. He ultimately concluded that, as he put it in his Autobiographical Notes, “no collection of empirical facts however comprehensive can ever lead to the formulation of such complicated equations. A theory can be tested by experience, but there is no way from experience to the construction of a theory.” In other words, the inductive approach to theory-building on which so many of science’s claims to neutrality hang is not only a poor description of science as it exists, but is, because of the limited powers of the human mind, not a way that science even could be done. The consequence of this, as Einstein said in an interview at the end of his life, is that “every true theorist is a kind of tamed metaphysicist, no matter how pure a ‘positivist’ he may fancy himself.”

Einstein’s claim is essentially a practical one: It is far too hard for human beings to reason backward from a mass of complex and entropic data to the compact and simple law that gave rise to it. Yet this argument is not as devastating to the inductivist story of science as it may at first sound. Yes, one might concede, the actual practice of the scientific method may be messy, or even the complete opposite of the inductive approach, but the fact remains that there is a law out there that is generating the data of our experience. So long as we continue to be guided by the data, we will gradually approach closer and closer to the true laws of nature, even if not by inductive means.

But the trouble is that there is never just one such law. Theory is almost always underdetermined by data. It’s simple enough to construct artificial examples of different laws that make identical predictions, but most can be dispatched by Occam’s razor (though note that this is a sneaky application of metaphysics if there ever was one!). History, however, offers something altogether more disturbing: countless examples where data could be explained by two fundamentally different types of theory, trafficking in different approaches, different causal mechanisms, even different ontologies.

Consider, for instance, the astonishing accuracy with which both Newtonian mechanics and general relativity predict the motions of the various bodies in the solar system. This may seem like an odd example—isn’t it a case in which a flawed theory explained the evidence for some time, and was eventually replaced by a better theory? Yes, but as Einstein put it in his Herbert Spencer lecture, On the Method of Theoretical Physics: “We can point to two essentially different principles, both of which correspond with experience to a large extent; this proves at the same time that every attempt at a logical deduction of the basic concepts and postulates of mechanics from elementary experiences is doomed to failure.” If two theories barely inhabiting the same conceptual universe can both explain our observations with such accuracy, what if there’s another? What if there are ten more? What if they give identical predictions beyond the accuracy of any instruments we will build for ten thousand years? When forced to choose between two such radically different theories, parlor tricks like Occam’s razor win us nothing. The choice is philosophical and metaphysical: It can be informed by experience, but can never be settled by science.
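
To get a rough sense of just how small the empirical gap between the two theories is in this domain (a standard textbook figure, not a calculation from the essay): general relativity's correction to Mercury's Newtonian orbit amounts to an extra perihelion advance of

$$\Delta\varphi \;=\; \frac{6\pi G M_\odot}{c^2\, a\,(1-e^2)} \;\approx\; 5\times 10^{-7}\ \text{radians per orbit},$$

about 43 arcseconds per century once accumulated over Mercury's roughly 415 orbits in that span. The predictions nearly coincide; the underlying pictures, forces acting at a distance in flat space versus free fall along the geodesics of curved spacetime, could hardly be further apart.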

In practice, scientists are rarely paralyzed by indecision when faced with situations of this sort, which implies that they must have prescientific metaphysical beliefs to help them to make the choice, even if those beliefs go unstated. Scientific theories compete with one another to explain a given body of evidence while also exhibiting the greatest simplicity, elegance, scope, consonance with other theories, and internal harmony. But they do more than that; they also make claims, implicitly or explicitly, about what evidence needs explaining and what would constitute a satisfactory explanation.

In the official story, evidence inspires us to create theories, or sometimes refutes existing theories. But in reality, theories can also create and destroy evidence by highlighting certain sorts of elementary data of experience as significant while dismissing others. A superficial example of this might be the evidentiary standards of many of the social sciences, where studies reporting a p-value below 0.05 are arbitrarily considered to be results that a theory must explain or at least accommodate. There is nothing in nature that recommends a sharp cutoff; it is purely a social and indeed ideological consensus to make p < 0.05 the standard. This is a free parameter of the metatheory, and varying it, given the limited statistical power of most studies, might very well lead to a different body of “facts” and hence to different forms of explanation achieving dominance.
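
As a minimal sketch of that last point (my own illustration, not the author's; the effect size, sample sizes, and study count below are invented for the example), one can simulate a literature of modestly powered studies and watch the body of accepted “facts” expand or contract as the significance threshold is moved:

```python
# Toy simulation: how the set of "significant" findings depends on the cutoff.
# All parameters here are illustrative assumptions, not data from the essay.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(true_effect=0.3, n=20):
    """One underpowered two-group study with a modest true effect."""
    treatment = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(treatment, control).pvalue

p_values = np.array([run_study() for _ in range(1000)])

for alpha in (0.10, 0.05, 0.01):
    accepted = int((p_values < alpha).sum())
    print(f"alpha = {alpha:.2f}: {accepted} of 1000 studies enter the literature as 'facts'")
```

With the same simulated world, the roster of findings a theory is obliged to explain changes substantially as the threshold moves, which is just the point about the cutoff being a free parameter.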

But there are deeper cases of theory affecting the kinds of evidence by which theories are judged. Take the behaviorist school of psychology. According to behaviorism, all human and animal behaviors are merely reactions to external stimuli and previous conditioning. In particular, behaviorists believe the internal states of individuals have no causal effects on their actions, regardless of what those individuals may claim. Now imagine that a behaviorist and a non-behaviorist come up with an identical hypothesis explaining some form of activity, but every individual in their study explains, “Actually, the reason I did it was that I believed it would be wrong to do otherwise.” The non-behaviorist might take this as strong evidence that the hypothesis was incorrect. However, the behaviorist, already committed to a theory of human activity that rejects the causal effects of internal states, might rule out these protestations and refuse to consider them as evidence. Whose methodology is correct? Science cannot tell us the answer. Our beliefs about what even constitutes empirical data with which our science must reckon cannot be self-justifying. Indeed, they can be influenced by whatever theory is currently in vogue.

As with evidence, so with what counts as a satisfactory explanation for a given body of evidence. Taking again our example from psychology, suppose a behaviorist and a non-behaviorist are trying to explain why an individual did something apparently irrational. When asked, the subject replies, “Because I thought that if I did it, I would receive a million dollars.” The non-behaviorist might find this belief to be curious, and might inquire further to discover a reason for the belief, but he would almost certainly consider the belief itself to be a sufficient explanation for the action. The behaviorist, on the other hand, would consider the act of speaking, and perhaps even the act of holding a belief, to be nothing more than another behavior, and therefore not sufficient as an explanation for the observed action, since only external stimuli and conditioning can cause behaviors. So if the non-behaviorist formulated a theory that said “individuals will do strange things if they believe that doing so will result in a million dollars,” the behaviorist wouldn’t even consider this theory to be wrong. Rather, it would be not-a-theory, a category error, something as unscientific as saying that fairies did it.

Behaviorism is not just a pathological case; nor can these issues be dodged by avoiding sciences dependent upon the unobservable inner life of conscious beings. Every theory makes claims about which phenomena demand an explanation in simpler or more fundamental terms, and which are just brute facts about reality that neither need nor permit explanation. For example: Newton’s theory of mechanics had great and immediate predictive success, but it was assailed as unscientific at its birth because, unlike Descartes’s vortices and hooked atoms, it did not offer a causal chain of influences whereby one body affected another.

Many scientists, when pressed, will say that our theories progress precisely by becoming more reductive and demanding explanations for things previously seen as brute facts. But it’s often quite unclear what “more reductive” even means. Consider the introduction of variational methods into mechanics by Jean d’Alembert, Joseph-Louis Lagrange, and William Rowan Hamilton. The use of an extremal principle to compute the behavior of a system was seen as unacceptably teleological and non-mechanistic (and continues to be resisted by each new generation of undergraduates). I can’t count the number of times I’ve explained the Lagrangian approach to a non-physicist scientist, only to be met with a dropped jaw and a “that isn’t science!” Perhaps the only reason physicists are comfortable with the approach is that they refuse to think too hard about what they’re doing.
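
For readers who have never met the approach, the extremal principle in question is standard textbook material (nothing here is specific to the essay's argument): one assigns to every candidate trajectory $q(t)$ an action

$$S[q] \;=\; \int_{t_1}^{t_2} L(q,\dot q,t)\,dt, \qquad L = T - V,$$

and declares that the trajectory nature actually follows is the one that makes $S$ stationary, which turns out to be equivalent to the Euler-Lagrange equations

$$\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} \;-\; \frac{\partial L}{\partial q_i} \;=\; 0.$$

Nothing in the formalism explains why a system should “care” about extremizing anything, which is precisely the discomfort the objection voices.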

Well, and perhaps also because it works astonishingly well. Most scientific fields today are conceptualized as the handmaidens of technology. Consequently, the forms of explanation which are accepted as scientific tend to be those that give humanity greater powers of prediction and control. This was not an inevitable development, however, and different fields of science have succumbed to the Promethean temptation to varying degrees. Is it possible that a science which valued different qualities in an explanation could have evolved along different lines and given rise to alien theories that traffic in fundamentally different concepts? One of the greatest tragedies of the globalization and homogenization of scientific inquiry is precisely that we are now far less likely to discover these, and other roads not taken.

Science is not simply the answering of questions; it is also the choosing of which questions to ask. Contrary to the inductivist account, facts and data do not just present themselves to us. Experimental and observational studies must be formulated and conducted, often at great cost, to gather them. This is commonly done in the service of one or more research programs—broad efforts to answer a question or to understand a phenomenon. But these programs grow out of an extended dialogue within a community of scientists, or out of funding pressures, and either way they are the product of the norms, values, and interests of broader society. Thus these norms and values shape not only what qualifies as evidence, but what evidence is even available to be considered in need of explanation.

For instance, imagine two studies on gift-giving, one conducted by a neuro-economist and the other by a sociologist. Were we to have only the former’s data, we might conclude that people give gifts in response to an activation of the anterior cingulate gyrus. Were we to have only the latter’s, we might conclude that people engage in gift-giving in order to consolidate their social status. Both accounts might be accurate and useful answers to the narrow question that they sought to address, while at the same time being utterly impoverished accounts of human behavior. The trouble comes when we confuse the mere fact that a theory explains some empirical data with the notion that a theory tells the “whole truth” about a facet of the world. Often, the data were gathered in response to the theory, and no theory can be successfully falsified by data that nobody looked for.

Another way in which our metaphysical beliefs construct the body of evidence that is available for theory to address lies in the ways we classify and categorize the world. Every theory makes choices about what elements it considers to be the primitive constituents of the world, what groupings of those elements make for interesting objects of study, and what makes objects more closely or more distantly related. One could imagine a social science that, instead of treating individuals as its fundamental units of analysis, chose families, or neighborhoods, or athletic clubs. Such a science would doubtless come up with very different empirical “laws” governing the behaviors and dynamics of human institutions. Indeed, one of the great triumphs of feminist thought was precisely that it constructed “women” as a separate category and subject of inquiry, thereby turning “how does this policy affect women?” into an interesting scientific question, unlike, say, “how does this policy affect red-haired people?” All such competing schemes for carving the world at its joints represent the enactment of a particular ontological and metaphysical vision.

Again, this all remains true when one moves to “harder” sciences. In fact, the disciplinary boundaries themselves are contingent choices about how to chop up the universe that end up influencing the kinds of questions that are asked and answered. But lay that aside and contemplate a question like “how should we classify forms of cancer?” By the organ affected? By genetic similarity? By typical biological course in the absence of treatment? All of these have been tried, and all give very different answers for when one cancer is “like” another.
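
A toy sketch of how much rides on that choice (my own illustration with invented data, not anything from the essay): feed the same six hypothetical tumors to an off-the-shelf clustering algorithm, once described by organ of origin and once by a pair of driver mutations, and the resulting notions of which cancers are “alike” barely overlap.

```python
# Hypothetical example: the same tumors group differently depending on which
# features we decide constitute "similarity". All data below are invented.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Six tumors described two ways:
# (a) organ of origin, one-hot encoded: lung, lung, breast, breast, colon, colon
organ_features = np.array([
    [1, 0, 0], [1, 0, 0],
    [0, 1, 0], [0, 1, 0],
    [0, 0, 1], [0, 0, 1],
])
# (b) presence/absence of two (hypothetical) driver mutations
mutation_features = np.array([
    [1, 0], [0, 1],
    [1, 0], [0, 1],
    [1, 0], [0, 1],
])

by_organ = AgglomerativeClustering(n_clusters=3).fit_predict(organ_features)
by_mutation = AgglomerativeClustering(n_clusters=2).fit_predict(mutation_features)

print("grouped by organ:   ", by_organ)     # pairs off by tissue of origin
print("grouped by mutation:", by_mutation)  # cuts straight across the organs
```

Neither grouping is the “right” one in any sense the data alone can adjudicate; the choice of features is already a choice of ontology.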

To all this the pragmatists have a partial answer: “Pick the divisions that are the most fruitful! The ones that result in useful regularities!” The trouble with this answer is that the world is absolutely rotten with order. Much of it is real, and much more is conjured into being when fallible, order-seeking minds go hunting for it. A great many schemas for organizing the world, such as the classification of beetles by visual appearance rather than by genetic similarity, generalize gracefully beyond the examples that inspired their development, despite presumably not tracking the deep cleavages that underlie reality and that science seeks to map.

None of this is meant as a counsel of despair, or a suggestion that the world is so inaccessible to our reason that we should speak only about measurements without reference to underlying reality, or any other of the rather silly views that have sprung from the revelation that science is a contingent, underdetermined social phenomenon. The point, rather, is just that science is not unique, and that it can never be self-justifying. Questions like “which science?” and “why this science?” are often useful ones. Scratch a scientist, find a metaphysicist, even if he doesn’t realize it.

Einstein, acutely sensitive to these issues, was in favor of bringing the oft-unstated prescientific beliefs of scientists out into the light and making them explicit. So, in a very similar way, are feminist philosophers of science like Helen Longino, Lynn Nelson, and Elizabeth Anderson. The difference is that where Einstein’s nonempirical, metaphysical criteria for selecting between theories tended to be “internal” qualities of a theory, like mathematical simplicity or aesthetic balance, these later critics are willing to bring political and moral considerations to bear in the selection of a science.

A wonderful case study of “feminist science” is offered by Anderson in a 2004 paper analyzing a book-length treatment of divorce outcomes by Abigail Stewart and colleagues. Anderson breaks down the process of investigating a scientific question into eight steps—orienting to the background of the field, framing a question, articulating a conception of the object of inquiry, deciding what types of data to collect, establishing and carrying out data-gathering procedures, analyzing the data, deciding when to stop analyzing data, drawing conclusions—and then shows how Stewart’s feminist beliefs influence and inform her methodology at each of these steps.

To take just one example: Stewart and her team consciously chose to reject the “traditionalist” interpretation of divorce as a traumatic and negative event, and searched carefully for ways in which the divorces they studied had produced opportunities for personal growth and maturation on the part of both parents and children. Sure enough, they found them where previous researchers had not. One might object that cancer and broken legs also provide opportunities for personal growth, and that a study which focused on them without mentioning the pain and harm that they cause is a study that lies by omission, or by misplaced emphasis, just like the neuro-economist’s account of gift-giving. But this is precisely the feminists’ point! One need not posit data manipulation or academic dishonesty to see that a researcher’s prior beliefs about the desirability of divorce will shape the results of a study. Merely by changing the questions that are asked, by shifting the background conception of the subjects of study, and by seeking out and collecting a new type of evidence, “feminist science” is able to reach a new and different conclusion.

And none of this—none of the norms, values, and agendas guiding the outcomes of scientific research—touches on the way science is made up of fallible institutions and fallible individuals. Yet the mechanisms of peer review, grant-making and funding, access to laboratory resources, and so on make it all too easy for a dedicated cabal to deliberately (or even accidentally) freeze out research that does not conform to their vision of the world. The ease with which accidental or deliberate error can enter data analysis provides yet another mechanism for the views of a scientist to leach into his or her results. Given the degree of esteem and respect still paid to assertions bearing the imprimatur of a study, it would be madness for the partisans of any faction not to try to ensure that as many of their own as possible occupy positions related to science production.

Which is why my progressive scientist friends are deluded if they think that those genuinely concerned about “colonization, racism, immigration, native rights, sexism, ableism, queer-, trans-, intersex-phobia, & econ justice” can be dissuaded from attempting to capture not just the institutions of science, but its methods and research programs as well. Every instance of scientific inquiry, every study, rests on a vast submerged set of political, moral, and ultimately metaphysical assumptions. As the great quantum theorist Max Planck put it:

It is said that science has no preconceived ideas: there is no saying that has been more thoroughly or more disastrously misunderstood. It is true that every branch of science must have an empirical foundation: but it is equally true that the essence of science does not consist in this raw material but in the manner in which it is used. The material always is incomplete . . . [and] must therefore be completed, and this must be done by filling the gaps; and this in turn is done by means of associations of ideas. And associations of ideas are not the work of the understanding but the offspring of the investigator’s imagination—an activity which may be described as faith, or more cautiously, as a working hypothesis. The essential point is that its content in one way or another goes beyond the data of experience. The chaos of individual masses cannot be wrought into a cosmos without some harmonizing force and, similarly, the disjointed data of experience can never furnish a veritable science without the intelligent interference of a spirit actuated by faith.

 ... To face th[e] future with intellectual sophistication rather than sloganeering, we need metaphysical reflection. Scientists would do well to start with a frank acknowledgment that they do not really know the deeper sources of their own dearly held scientific truths.