Among the many quandaries a writer must face after publishing a controversial book is the question of how, or whether, to respond to criticism. At a minimum, it would seem wise to correct misunderstandings and distortions of one's views wherever they appear, but one soon discovers that there is no good way of doing this. After my first book was published, the journalist Chris Hedges seemed to make a career out of misrepresenting its contents -- asserting, among other calumnies, that somewhere in its pages I call for an immediate, nuclear first-strike on the entire Muslim world. Hedges spread this lie so sedulously that I could have spent the next year writing letters to the editor. Even if I had been willing to squander my time in this way, such letters are generally pointless, as few people read them. In the end, I decided to create a page on my website addressing controversies of this kind, so that I can then forget all about them. The result has been less than satisfying. Several years have passed, and I still meet people at public talks and in comment threads who believe that I support the outright murder of hundreds of millions of innocent people.
The problem posed by public criticism is by no means limited to the question of what to do about misrepresentations of one's work. There is simply no good forum in which to respond to reviews of any kind, no matter how substantive. To do so in a separate essay is to risk confusing readers with a litany of disconnected points or -- worse -- boring them to salt. And any author who rises to the defense of his own book is always in danger of looking petulant, vain, and ineffectual. There is a galling asymmetry at work here: to say anything at all in response to criticism is to risk doing one's reputation further harm by appearing to care too much about it.
These strictures now weigh heavily on me, because I recently published a book, The Moral Landscape: How Science Can Determine Human Values, which has provoked a backlash in intellectual (and not-so-intellectual) circles. I knew this was coming, given my thesis, but this knowledge left me no better equipped to meet the cloudbursts of vitriol and confusion once they arrived. Watching the tide of opinion turn against me, it has been difficult to know what, if anything, to do about it.
How, for instance, should I respond to the novelist Marilynne Robinson's paranoid, anti-science gabbling in the Wall Street Journal where she consigns me to the company of the lobotomists of the mid 20th century? Better not to try, I think -- beyond observing how difficult it can be to know whether a task is above or beneath you. What about the science writer John Horgan, who was kind enough to review my book twice, once in Scientific American where he tarred me with the infamous Tuskegee syphilis experiments, the abuse of the mentally ill, and eugenics, and once in The Globe and Mail, where he added Nazism and Marxism for good measure? How does one graciously respond to non sequiturs? The purpose of The Moral Landscape is to argue that we can, in principle, think about moral truth in the context of science. Robinson and Horgan seem to imagine that the mere existence of the Nazi doctors counts against my thesis. Is it really so difficult to distinguish between a science of morality and the morality of science? To assert that moral truths exist, and can be scientifically understood, is not to say that all (or any) scientists currently understand these truths or that those who do will necessarily conform to them.
But we must descend further before reaching a higher place: for occasionally one's book will be reviewed by a prominent person who has not even taken the trouble to open it. Such behavior is always surprising and, in a strange way, refreshingly stupid. What should I say, for instance, when the inimitable Deepak Chopra produces a long, poisonous, and blundering review of The Moral Landscape in The San Francisco Chronicle while demonstrating in every line that he has not read it? (His "review" is wholly based on a short Q&A I published for promotional purposes on my website.) Admittedly, there is something arresting about being called a scientific fraud and "egotistical" by Chopra. This is rather like being branded an exhibitionist by Lady Gaga. In retrospect, I see that the haste and bile of Chopra's fake review are readily explained: we had recently participated in a debate at Caltech (along with Michael Shermer and Jean Houston) in which the great man had greatly embarrassed himself. And while I am certainly capable of being both scientifically mistaken and egotistical, I am confident that anyone who views our exchange in its entirety will recognize that I am the firefly to Chopra's sun.
Why respond to criticism at all? Many writers refuse to even read their reviews, much less answer them. The problem, however, is that if one is committed to the spread of ideas -- as most nonfiction writers are -- it is hard to ignore the fact that negative reviews can be very damaging to one's cause. Not only do they discourage smart people from reading a book, they can lead them to disparage it as though they had discovered its flaws for themselves. Consider the following published remarks from the philosopher Colin McGinn, whose work I greatly admire:
I think Sam Harris' idea is equally bad [as religion-based morality], I'm surprised he'd write on it. There's just some really bad thinking in Sam Harris's new book, I haven't read it yet, but that's because from what I've heard, it sounds terrible and wrong-headed and just bizarre. He's trying to make science do what religion used to. His basic philosophical reason is a fallacy, it's impossible to derive ought from is, the naturalistic fallacy, it's a complete misconception that you can. I'm surprised Sam Harris would fall for that. A few weeks ago, Anthony Appiah nailed him for it in the New York Times. I have no idea why that arises in some scientists. The idea is wrong. It's been refuted. It's hard to believe they still argue that point.
No matter that I cannot find a single substantive point in Appiah's review not already addressed in my book, McGinn appears to know otherwise through the power of clairvoyance. Many other philosophers and scientists have begun to play this game with The Moral Landscape, without ever engaging its arguments. And so, mindful of the dangers, I have decided to answer the strongest criticisms that have appeared to date. Failure beckons on both sides, of course, as my response will be all too brief for some and far more than others can stomach. But it is worth a try.

As far as I know, the best reviews of The Moral Landscape have come from the philosophers Thomas Nagel, Troy Jollimore, and Russell Blackford. I will focus on Blackford's (along with a few of his subsequent blog posts), as it strikes me as the most searching. It also seems to echo everything of interest in the others.
For those unfamiliar with my book, here is my argument in brief: Morality and values depend on the existence of conscious minds -- and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, of course, fully constrained by the laws of Nature (whatever these turn out to be in the end). Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.
Blackford and others worry that any aspect of human subjectivity or culture could fit in the space provided: after all, a preference for chocolate over vanilla ice cream is a natural phenomenon, as is a preference for the comic Sarah Silverman over Bob Hope. Are we to imagine that there are universal truths about ice cream and comedy that admit of scientific analysis? Well, in a certain sense, yes. Science could, in principle, account for why some of us prefer chocolate to vanilla, and why no one's favorite flavor of ice cream is aluminum. Comedy must also be susceptible to this kind of study. There will be a fair amount of cultural and generational variation in what counts as funny, but there are probably basic principles of comedy -- like the violation of expectations, the breaking of taboos, etc. -- that could be universal. Amusement to the point of laughter is a specific state of the human nervous system that can be scientifically studied. Why do some people laugh more readily than others? What exactly happens when we "get" a joke? These are ultimately questions about the human brain. There will be scientific facts to be known here, and any differences in taste among human beings must be attributable to other facts that fall within the purview of science. If we were ever to arrive at a complete understanding of the human mind, we would understand human preferences of all kinds. Indeed, we might even be able to change them.
However, morality and values appear to reach deeper than mere matters of taste -- beyond how people happen to think and behave to questions of how they should think and behave. And it is this notion of "should" that introduces a fair amount of confusion into any conversation about moral truth. I should note in passing, however, that I don't think the distinction between morality and something like taste is as clear or as categorical as we might suppose. If, for instance, a preference for chocolate ice cream allowed for the most rewarding experience a human being could have, while a preference for vanilla did not, we would deem it morally important to help people overcome any defect in their sense of taste that caused them to prefer vanilla -- in the same way that we currently treat people for curable forms of blindness. It seems to me that the boundary between mere aesthetics and moral imperative -- the difference between not liking Matisse and not liking the Golden Rule -- is more a matter of there being higher stakes, and consequences that reach into the lives of others, than of there being distinct classes of facts regarding the nature of human experience. There is much more to be said on this point, of course, but it is not one that I covered in my book, so I will pass it by.
Let's begin with my core claim that moral truths exist. In what was a generally supportive review of The Moral Landscape, strewn with strange insults, the philosopher Thomas Nagel endorsed my basic thesis as follows:
Even if this is an exaggeration, Harris has identified a real problem, rooted in the idea that facts are objective and values are subjective. Harris rejects this facile opposition in the only way it can be rejected -- by pointing to evaluative truths so obvious that they need no defense. For example, a world in which everyone was maximally miserable would be worse than a world in which everyone was happy, and it would be wrong to try to move us toward the first world and away from the second. This is not true by definition, but it is obvious, just as it is obvious that elephants are larger than mice. If someone denied the truth of either of those propositions, we would have no reason to take him seriously...
The true culprit behind contemporary professions of moral skepticism is the confused belief that the ground of moral truth must be found in something other than moral values. One can pose this type of question about any kind of truth. What makes it true that 2 + 2 = 4? What makes it true that hens lay eggs? Some things are just true; nothing else makes them true. Moral skepticism is caused by the currently fashionable but unargued assumption that only certain kinds of things, such as physical facts, can be "just true" and that value judgments such as "happiness is better than misery" are not among them. And that assumption in turn leads to the conclusion that a value judgment could be true only if it were made true by something like a physical fact. That, of course, is nonsense.
It is encouraging to see a philosopher of Nagel's talents conceding this much -- for the position he sketches nullifies much of the criticism I have received. However, my view of moral truth demands a little more than this -- not because I am bent upon reducing morality to "physical" facts in any crude sense, but because I can't see how we can keep the notion of moral truth within a walled garden, forever set apart from the truths of science. In my view, morality must be viewed in the context of our growing scientific understanding of the mind. If there are truths to be known about the mind, there will be truths to be known about how minds flourish; consequently, there will be truths to be known about good and evil.

Many critics claim that my reliance on the concept of "well-being" is arbitrary and philosophically indefensible. Who's to say that well-being is important at all or that other things aren't far more important? How, for instance, could you convince someone who does not value well-being that he should, in fact, value it? And even if one could justify well-being as the true foundation for morality, many have argued that one would need a "metric" by which it could be measured -- else there could be no such thing as moral truth in the scientific sense. There seems to be an unnecessarily restrictive notion of science underlying this last claim -- as though scientific truths only exist if we can have immediate and uncontroversial access to them in the lab. The physicist Sean Carroll has written a fair amount against me on this point (again, without having read my book), and he is in the habit of saying things like, "I don't know what a unit of well-being is," as though he were regretfully delivering the killing blow to my thesis. I would venture that Carroll doesn't know what a unit of depression is either -- and units of joy, disgust, boredom, irony, envy, or any other mental state worth studying won't be forthcoming. If half of what Carroll says about the limits of science were true, the sciences of mind would not merely be doomed; there would be no facts for them to understand in the first place.
It seems to me that there are three distinct challenges put forward thus far:
1. There is no scientific basis to say that we should value well-being, our own or anyone else's. (The Value Problem)
2. Hence, if someone does not care about well-being, or cares only about his own and not about the well-being of others, there is no way to argue that he is wrong from the point of view of science. (The Persuasion Problem)
3. Even if we did agree to grant "well-being" primacy in any discussion of morality, it is difficult or impossible to define it with rigor. It is, therefore, impossible to measure well-being scientifically. Thus, there can be no science of morality. (The Measurement Problem)
I believe all of these challenges are the product of philosophical confusion. The simplest way to see this is by analogy to medicine and the mysterious quantity we call "health." Let's swap "morality" for "medicine" and "well-being" for "health" and see how things look:
1. There is no scientific basis to say that we should value health, our own or anyone else's. (The Value Problem)
2. Hence, if someone does not care about health, or cares only about his own and not about the health of others, there is no way to argue that he is wrong from the point of view of science. (The Persuasion Problem)
3. Even if we did agree to grant "health" primacy in any discussion of medicine, it is difficult or impossible to define it with rigor. It is, therefore, impossible to measure health scientifically. Thus, there can be no science of medicine. (The Measurement Problem)
While the analogy may not be perfect, I maintain that it is good enough to obviate these three criticisms. Is there a Value Problem, with respect to health? Is it unscientific to value health and seek to maximize it within the context of medicine? No. Clearly there are scientific truths to be known about health -- and we can fail to know them, to our great detriment. This is a fact. And yet, it is possible for people to deny this fact, or to have perverse and even self-destructive ideas about how to live. Needless to say, it can be fruitless to argue with such people. Does this mean we have a Persuasion Problem with respect to medicine? No. Christian Scientists, homeopaths, voodoo priests, and the legions of the confused don't get to vote on the principles of medicine. "Health" is also hard to define -- and, what is more, the definition keeps changing. There is no clear "metric" by which we can measure it, and there may never be one -- because "health" is a suitcase term for hundreds, if not thousands, of variables. Is an ability to "jump very high" one of them? That depends. What would my doctor think if I wanted a full neurological workup because I can only manage a 30-inch vertical leap? He would think I had lost my mind. However, if I were a professional basketball player who had enjoyed a 40-inch leap every day of his adult life, I would be reporting a sudden, 25 percent decline in my abilities -- not a good sign. Do such contingencies give us a Measurement Problem with respect to health? Do they indicate that medicine will never be a proper science? No. "Health" is a loose concept that may always bend and stretch depending on the context -- but there is no question that both it and its context exist within an underlying reality which we can understand, or fail to understand, with the tools of science.
Let's look at these problems in light of Blackford's review:
The Value Problem
My critics have been especially exercised over the subtitle of my book, "how science can determine human values." The charge is that I haven't actually used science to determine the foundational value (well-being) upon which my proffered science of morality would rest. Rather, I have just assumed that well-being is a value, and this move is both unscientific and question-begging. Here is Blackford:
If we presuppose the well-being of conscious creatures as a fundamental value, much else may fall into place, but that initial presupposition does not come from science. It is not an empirical finding... Harris is highly critical of the claim, associated with Hume, that we cannot derive an "ought" solely from an "is" - without starting with people's actual values and desires. He is, however, no more successful in deriving "ought" from "is" than anyone else has ever been. The whole intellectual system of The Moral Landscape depends on an "ought" being built into its foundations.
Again, the same can be said about medicine, or science as a whole. As I point out in my book, science is based on values that must be presupposed -- like the desire to understand the universe, a respect for evidence and logical coherence, etc. One who doesn't share these values cannot do science. But nor can he attack the presuppositions of science in a way that anyone should find compelling. Scientists need not apologize for presupposing the value of evidence, nor does this presupposition render science unscientific. In my book, I argue that the value of well-being -- specifically the value of avoiding the worst possible misery for everyone -- is on the same footing. There is no problem in presupposing that the worst possible misery for everyone is bad and worth avoiding and that normative morality consists, at an absolute minimum, in acting so as to avoid it. To say that the worst possible misery for everyone is "bad" is, on my account, like saying that an argument that contradicts itself is "illogical." Our spade is turned. Anyone who says it isn't simply isn't making sense. The fatal flaw that Blackford claims to have found in my view of morality could just as well be located in science as a whole -- or reason generally. Our "oughts" are built right into the foundations. We need not apologize for pulling ourselves up by our bootstraps in this way. It is far better than pulling ourselves down by them.

Blackford raises another issue with regard to the concept of well-being:
There could be situations where the question of which course of action might maximize well-being has no determinate answer, and not merely because well-being is difficult to measure in practice but because there is some room for rational disagreement about exactly what it is. If it's shorthand for the summation of various even deeper values, there could be room for legitimate disagreement on exactly what these are, and certainly on how they are to be weighted. But if that is so, there could end up being legitimate disagreement on what is to be done, with no answer that is objectively binding on all the disagreeing parties.
Couldn't the same be said about human health? What if there are trade-offs with respect to human performance that we just can't get around -- what if, for instance, an ability to jump high always comes at the cost of flexibility? Will there be disagreements between orthopedists who specialize in basketball and those who specialize in yoga? Sure. So what? We will still be talking about very small deviations from a common standard of "health" -- one which does not include anencephaly or a raging case of smallpox.

[Harris] acknowledges the theoretical possibility that two courses of action, or, say, two different systems of customs and laws could be equal in the amount of well-being that they generate. In such cases, the objectively correct and determinate answer to the question of which is morally better would be: "They are equal." However, he is not prepared to accept a situation where two people who have knowledge of all the facts could legitimately disagree on what ought to be done. The closest they could come to that would be agreement that two (or more) courses of action are equally preferable, so either could be pursued with the same moral legitimacy as the other.
This is not quite true. My model of the moral landscape does allow for multiple peaks -- many different modes of flourishing, admitting of irreconcilable goals. Thus, if you want to move society toward peak 19746X, while I fancy 74397J, we may have disagreements that simply can't be worked out. This is akin to trying to get me to follow you to the summit of Everest while I want to drag you up the slopes of K2. Such disagreements do not land us back in moral relativism, however: because there will be right and wrong ways to move toward one peak or the other; there will be many more low spots on the moral landscape than peaks (i.e., truly wrong answers to moral questions); and for all but the loftiest goals and the most disparate forms of conscious experience, moral disagreements will not be between sides of equal merit. Which is to say that for most moral controversies, we need not agree to disagree; rather, we should do our best to determine which side is actually right.

In any case, I suspect that radically disjoint peaks are unlikely to exist for human beings. We are far too similar to one another to be that different. If we each could sample all possible states of human experience, and were endowed with perfect memories so that we could sort our preferences, I think we would converge on similar judgments of what is good, what is better, and what is best. Differences of opinion might still be possible, and would themselves be explicable in terms of differences at the level of our brains. Consequently, even such disagreements would not be a problem for my account, because to talk about what is truly good, we must also include the possibility (in principle, if not in practice) of changing people's desires, preferences, and intuitions as a means of moving across the moral landscape. I will discuss the implications of this below.
Generally speaking, I think that the problem of disagreement and indeterminacy that Blackford raises is a product of incomplete information (we will never be able to know all the consequences of an action, estimate all the relevant probabilities, or compare counterfactual states of the world) combined with the inevitable looseness with which certain terms must be defined. Once again, I do not see this as a problem for my view.
The Persuasion Problem
Another concern that prompts Blackford and others to invoke terms like "ought" and "should" is the problem of persuasion. What can I say to persuade another person that he or she should behave differently? What can I think (that is, say to myself) to inspire a change in my own behavior? There are, in fact, people who will not be persuaded by anything I say on the subject of well-being, and who may even claim not to value well-being at all. And even I can knowingly fail to maximize my own well-being by acting in ways that I will later regret, perhaps by forsaking a long-term goal in favor of short-term pleasure.
The deeper concern, however, is that even if we do agree that well-being is the gold standard by which to measure what is good, people are selfish in ways that we are not inclined to condemn. As Blackford observes:
[W]e usually accept that people act in competition with each other, each seeking the outcome that most benefits them and their loved ones. We don't demand that everyone agree to accept whatever course will maximize the well-being of conscious creatures overall. Nothing like that is part of our ordinary idea of what it is to behave morally.
Why, for example, should I not prefer my own well-being, or the well-being of the people I love, to overall, or global, well-being? If it comes to that, why should I not prefer some other value altogether, such as the emergence of the Ubermensch, to the maximization of global well-being?... Harris never provides a satisfactory response to this line of thought, and I doubt that one is possible. After all, as he acknowledges, the claim that "We should maximize the global well-being of conscious creatures" is not an empirical finding. So what is it? What in the world makes it true? How does it become binding on me if I don't accept it?
The worry is that there is no binding reason to argue that everyone should care about the well-being of others. As Blackford says, when told about the prospect of global well-being, a selfish person can always say, "What is that to me?":

If we want to persuade Alice to take action X, we need to appeal to some value (or desire, or hope, or fear, etc. ... but you get the idea) that she actually has. Perhaps we can appeal to her wish for our approval, but that won't work unless she actually cares about whether or not we approve of her. She is not rationally bound to act in the way we wish her to act, which may be the way that maximizes global welfare, unless we can get some kind of grip on her own actual values and desires (etc.)... Harris does not seem to understand this idea... there are no judgments about how people like Alice should conduct themselves that are binding on them as a matter of fact or reason, irrespective of such things as what they actually value, or desire, or care about... If we are going to provide her with reasons to act in a particular way, or to support a particular policy, or condemn a traditional custom - or whatever it might be - sooner or later we will need to appeal to the values, desires, and so on, that she actually has. There are no values that are, mysteriously, objectively binding on us all in the sense I have been discussing. Thus it is futile to argue from a presupposition that we are all rationally bound to act so as to maximize global well-being. It is simply not the case.
Blackford's analysis of these issues is excellent, of course, but I think it still misses my point. The first thing to notice is that the same doubts can be raised about science/rationality itself. A person can always play the trump card, "What is that to me?" -- and if we don't find it compelling elsewhere, I don't see why it must have special force on questions of good and evil. The more relevant issue, however, is that this notion of "should," with its focus on the burden of persuasion, introduces a false standard for moral truth.

Again, consider the concept of health: should we maximize global health? To my ear, this is a strange question. It invites a timorous reply like, "Provided we want everyone to be healthy, yes." And introducing this note of contingency seems to nudge us from the charmed circle of scientific truth. But why must we frame the matter this way? A world in which global health is maximized would be an objective reality, quite distinct from a world in which we all die early and in agony. Yes, it is true that a person like Alice could seek to maximize her own health without caring about the health of other people -- though her health will depend on the health of others in countless ways (the same, I would argue, is true of her well-being). Is she wrong to be selfish? Would we blame her for taking her own side in any zero-sum contest to secure medicine for herself or for her own children? Again, these aren't the kinds of questions that will get us to bedrock. The truth is, Alice and the rest of us can live so as to allow for a maximally healthy world, or we can fail to do so. Yes, it is possible that a maximally healthy world is one in which Alice is less healthy than she might otherwise be (though this seems unlikely). So what? There is still an objective reality to which our beliefs about human health can correspond. Questions of "should" are not the right lens through which to see this.
And the necessity of grounding moral truth in things that people "actually value, or desire, or care about" also misses the point. People often act against their deeper preferences -- or live in ignorance of what their preferences would be if they had more experience and information. What if we could change Alice's preferences themselves? Should we? Obviously we can't answer this question by relying on the very preferences we would change. Contrary to Blackford's assertion, I'm not simply claiming that morality is "fully determined by an objective reality, independent of people's actual values and desires." I am claiming that people's actual values and desires are fully determined by an objective reality, and that we can conceptually get behind all of this -- indeed, we must -- in order to talk about what is actually good. This becomes clear the moment we ask whether it would be good to alter people's values and desires.
Consider how we would view a situation in which all of us miraculously began to behave so as to maximize our collective well-being. Imagine that on the basis of remarkable breakthroughs in technology, economics, and political skill, we create a genuine utopia on earth. Needless to say, this wouldn't be boring, because we will have wisely avoided all the boring utopias. Rather, we will have created a global civilization of astonishing creativity, security, and happiness.
However, some people were not ready for this earthly paradise once it arrived. Some were psychopaths who, despite enjoying the general change in quality of life, were nevertheless eager to break into their neighbors' homes and torture them from time to time. A few had preferences that were incompatible with the flourishing of whole societies: Try as he might, Kim Jong Il just couldn't shake the feeling that his cognac didn't taste as sweet without millions of people starving beyond his palace gates. Given our advances in science, however, we were able to alter preferences of this kind. In fact, we painlessly delivered a firmware update to everyone. Now the entirety of the species is fit to live in a global civilization that is as safe, and as fun, and as interesting, and as filled with love as it can be.
It seems to me that this scenario cuts through the worry that the concept of well-being might leave out something that is worth caring about: if you care about something that is not compatible with a peak of human flourishing, then -- given the requisite changes in your brain -- you would recognize that you were wrong to care about this thing in the first place. Wrong in what sense? Wrong in the sense that you didn't know what you were missing. This is the core of my argument: I am claiming that there must be frontiers of human well-being that await our discovery -- and certain interests and preferences surely blind us to them.
Nevertheless, Blackford is right to point out that our general approach to morality does not demand that we maximize global well-being. We are selfish to one degree or another; we lack complete information about the consequences of our actions; and even where we possess such information, our interests and preferences often lead us to ignore it. But these facts obscure deeper questions: In what sense can an action be morally good? And what does it mean to make a good action better?
For instance, it seems good for me to buy my daughter a birthday present, all things considered, because this will make both of us happy. Few people would fault me for spending some of my time and money in this way. But what about all the little girls in the world who suffer terribly at this moment for want of resources? Here is where an ethicist like Peter Singer will pounce, arguing that there actually is something morally questionable -- even reprehensible -- about my buying my daughter a birthday present, given my knowledge of how much good my time and money could do elsewhere. What should I do? Singer's argument makes me uncomfortable, but only for a moment. It is simply a fact about me that the suffering of other little girls is often out of sight and out of mind -- and my daughter's birthday is no easier to ignore than an asteroid impact. Can I muster a philosophical defense of my narrow focus? Perhaps. It might be that Singer's case leaves out some important details: what would happen if everyone in the developed world ceased to shop for birthday presents? Wouldn't the best of human civilization just come crashing down upon the worst? How can we spread wealth to the developing world if we do not create wealth in the first place? These reflections, self-serving and otherwise -- along with a thousand other facts about my mind for which Sean Carroll still has no "metric" -- land me in a toy store, looking for something that isn't pink.
So, yes, it is true that my thoughts about global well-being did not amount to much in this instance. And Blackford is right to say that most people wouldn't judge me for it. But what if there were a way for me to buy my daughter a present and also cure another little girl of cancer at no extra cost? Wouldn't this be better than just buying the original present? Imagine if I declined this opportunity saying, "What is that to me? I don't care about other little girls and their cancers." It is only against an implicit notion of global well-being that we can judge my behavior to be less good than it might otherwise be. It is true that no one currently demands that I spend my time seeking, in every instance, to maximize global well-being -- nor do I demand it of myself -- but if global well-being could be maximized, that would be better (by the only definition of "better" that makes any sense).
It seems to me that whatever our preferences and capacities are at present, our beliefs about good and evil must still relate to what is ultimately possible for human beings. We can't think about this deeper reality by focusing on the narrow question of what a person "should" do in the gray areas of life where we spend much of our time. However, the extremes of human experience throw ample light: are the Taliban wrong about morality? Yes. Really wrong? Yes. Can we say so from the perspective of science? Yes. If we know anything at all about human well-being -- and we do -- we know that the Taliban are not leading anyone, including themselves, toward a peak on the moral landscape.
Finally, Blackford asserts, as many have, that abandoning a notion of moral truth "doesn't prevent us developing coherent, rational critiques of various systems of laws or customs or moral rules, or persuading others to adopt our critiques."
In particular, it is quite open to us to condemn traditional systems of morality to the extent that they are harsh or cruel, rather than providing what most of us (quite rationally) want from a moral tradition: for example that it ameliorate suffering, regulate conflict, and provide personal security and social cooperation, yet allow individuals a substantial degree of discretion to live their lives as they wish.
I'm afraid I have seen too much evidence to the contrary to accept Blackford's happy talk on this point. I consistently find that people who hold this view are far less clear-eyed and committed than (I believe) they should be when confronted with moral pathologies -- especially those of other cultures -- precisely because they believe there is no deep sense in which any behavior or system of thought can be considered pathological in the first place. Unless you understand that human health is a domain of genuine truth claims -- however difficult "health" may be to define -- it is impossible to think clearly about disease. I believe the same can be said about morality. And that is why I wrote a book about it...