In his recent contribution to The Huffington Post, Sam Harris offers a detailed defence of his proposal for a science of morality, writing in the aftermath of reactions to his TED talk back in March. A great deal of his defence concentrates on answering the latest criticisms from Sean Carroll.
I'm not going to attempt a point-by-point adjudication of claim and counterclaim from Harris and Carroll. That would require at least a full-scale academic article, and even a few relatively quick observations will end up being quite long enough.
I think that Harris still misses the thrust of some of the points made by Carroll, and it's unfortunate that his initial responses to criticism were so impatient (especially referring to them as "stupidity" on Twitter ... for which, to his credit, he apologised). There's room to see good points on both sides, and the policy conclusions needn't differ.
At the same time, I don't really mind the concept of a "science" of morality. I'll qualify that later, but I don't see any clear boundary between science and philosophy anyway. Back in the eighteenth century the word "scientist" didn't even exist, but David Hume clearly thought he was engaged in a first comprehensive attempt at what we'd now call a science of morality, examining the subject from an empirical perspective - and you know what, he was right! It's not that he conducted experiments, but he laid a theoretical foundation based on the empirical knowledge available, and his foundation is still invaluable for those who want to build on it. Much modern moral philosophy and moral psychology does exactly that (though of course much of it is a program of resistance to Hume).
So, I'm not hung up about words such as "science". Nor am I bothered by the prospect that various kinds of empirical investigation may inform our decisions, including decisions about whether to criticise particular moral systems. What's more, I think we can make rational judgments about whether existing moral systems are "good" in much the way that we can make judgments about whether particular hammers are "good". A hammer is a good one if it's effective/efficient for its purpose. A moral system is good if it does what we want from moral systems as a class. Like hammers, moral systems can be replaced or improved.
Criticising moral systems
It follows that I agree with Harris that we can criticise various cultures' moral systems and their various prescriptions for human behaviour. In doing so, we don't need to find a way to deduce objectively binding "ought" statements from pure "is" statements that make no reference to affective attitudes (such as values and desires) or social institutions (such as codes of ethics). I don't believe that can be done - here I agree with Carroll, if not for precisely the same reasons - and I think Harris got off on the wrong foot at TED in trying to do so (and claiming that certain dubious metaethical claims were "obvious"). Carroll was correct to pull him up on it, and those of us who dissected the specific arguments were similarly justified.
But Harris doesn't actually need to make an unheralded breakthrough in metaethics to establish his main point about the possibility and desirability of criticising moral systems. If those moral systems are harmful to human wellbeing ... then criticise them for it! I think hammers exist to drive nails and that it's approximately correct to say that moral systems exist to conduce to human wellbeing and, to some extent, the wellbeing of other sentient creatures.
To repeat a point from an earlier post, I agree with Harris that:
1. We can criticise the moral codes, cultural practices, etc., of other cultures (or, if it comes to that, our own).
2. When we do so, our criticisms need not be arbitrary, idiosyncratic, or unreasonable. On the contrary, they may be perfectly reasonable, non-arbitrary, and inter-subjectively justifiable.
3. Many people take a contrary view, often reflected in public policy. That's wrong and dangerous.
Here's a sketch of where I think Harris is still getting things somewhat wrong. He concedes the main point that critics have been making when he says: "Of course, goals and conceptual definitions matter." That alone is enough to establish that he's not an extreme objectivist about morality, even if he thinks he is (since he doesn't want to use academic philosophical language, and is rather scornful of it, it's difficult to be sure where he stands on some metaethical points).
But he seems to think this is a relatively trivial problem. It isn't. Our goals when we use a hammer are usually uncontroversial - we (usually) want our hammer to drive nails into wood. By contrast, the goals of morality are much more controversial, the controversy goes very deep, and we have no decision procedure that has any prospect of producing universal convergence among philosophers, let alone all the people who reject a reason-based approach to morality.
In both his TED talk and his new article, Harris says that something similar applies to medicine. I.e., there is no total, final, uncontroversial agreement about what we're trying to achieve when we practise medicine, but we still do so in the real world without any great problem.
Good point. But actually, if Harris were deeply immersed in bioethics he'd realise that it's not so simple. In medicine, there are many marginal cases where deeply contested values come into play and it's not clear what we should do (or how we can ever get unanimity about the relevant values). The purpose of medicine is certainly not as clear-cut and uncontroversial as that of an ordinary tool like a hammer. However, the goals of medicine are much less controversial than those of morality. We can define "health" with enough precision and agreement to get by in most circumstances. There are lurking difficulties for doctors, and they sometimes affect public policy, but surely they're not at issue in the majority of doctor-patient interactions.
Likewise with science. Its purpose, I suppose, is to develop well-evidenced and robust theories about the mechanisms or workings of the natural world - or something of the sort (I'm open to better formulations). This purpose could be contested, however, by someone who considers it futile, or impious, or counterproductive to ... yes ... human wellbeing. What we can say is that there's not all that much dispute about the purpose of science, or about the idea that pursuing it is a good thing to do (even if the "good" here represents a moral evaluation, it can be an evaluation from one of the less-contested parts of morality relating to the desirability of finding out about the world). However, there is some dispute, even if not enough to cause much difficulty with the everyday practice of science. Then again, it causes quite a bit of difficulty in some areas, most notably in biomedical research (which is a different practice from medicine).
What is morality for?
Morality is at the other end of the scale from hammers. There's an enormous amount of disagreement about what we're trying to achieve, and whether we're trying to achieve anything at all beyond, say, applying moral truths revealed by a prophet or a god. We can ask, "What is morality actually for?" ... And we'll get many different answers, including from people who say it's not "for" anything: we just do have an obligation to act in certain ways and not act in others.
There's so much disagreement that most of the intellectually rigorous discussion of morality that's available relates to foundational issues: issues of purposes or goals or definitions. There's certainly a field within philosophy that gets on with proposing the details of what we should be doing, how we should be living our lives, etc. I.e. there's the field of applied ethics. However, morality is far more controversial as a practice than medicine or science, because it's far less clear that the values built into its foundations are acceptable or even roughly agreed. Indeed, people who "do" applied ethics often disagree with each other about fundamentals in a way that is not so much the case with people who practise medicine and carry out scientific research.
Unfortunately, we are nowhere near having the sort of general agreement about the goals of, well, moralising (if you will) that we have about practising medicine or science. So it's no use arguing:
P1. The ultimate goals of medicine and science are contestable.
P2. We can practise medicine and science with no terrible difficulty.
C. We can, with no terrible difficulty, practise anything whose ultimate goals are contestable.
Hence, we can, with no terrible difficulty, practise a "science of morality".
The correct conclusion, at "C.", is that we can, with no terrible difficulty, practise some things whose ultimate goals are contestable. As far as this argument goes, whether morality is one of those things is left as an open question. It really depends on just how much debate there is about a practice's ultimate goals, and how this pans out in practice. Unfortunately, the ultimate goals of morality are so controversial, and so disputed at such a deep level, that it's not surprising when much of what goes on in moral philosophy relates to trying to get agreement on the ultimate goals.
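For what it's worth, the quantifier slip here can be made explicit. Here's a sketch in the Lean proof language (the predicate and variable names are my own, purely illustrative):

```lean
-- Hypothetical predicates: a practice has contestable ultimate goals,
-- and a practice can be carried on without terrible difficulty.
variable (Practice : Type) (contestable workable : Practice → Prop)

-- P1 and P2 together license only an existential claim:
-- *some* practice is both contestable and workable.
example (medicine : Practice)
    (p1 : contestable medicine) (p2 : workable medicine) :
    ∃ p, contestable p ∧ workable p :=
  ⟨medicine, p1, p2⟩

-- The conclusion at "C." needs the universal claim
--   ∀ p, contestable p → workable p
-- which the existential premise does not entail; no proof of it
-- follows from the example above.
```

The fallacy, in other words, is generalising from "some contested practices work" to "every contested practice works".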
As it happens, I think that the goals of morality - or at least the point of the practice that we can ascribe to it - relate to a complex of human needs, interests, widespread desires and values, etc. This is pretty vague, but that's the nature of it. Morality evolved with us, biologically and culturally: it's not something we literally and consciously invented, with a clear-cut purpose in mind. But we can ascribe such goals as social survival, amelioration of suffering, providing a framework within which lives can go well (by whatever standards!), and probably other things of that kind. I don't terribly mind these being summed up as "wellbeing" as long as it's acknowledged that that's more a placeholder for a lot of rather vague and contested stuff than a label for something with a meaningful metric. Perhaps there's a meaningful metric for pleasure, but no one seriously thinks that morality is just about pleasure or that this is what "wellbeing" means (none of which is to deprecate pleasure, by the way).
On the other hand, the situation is not so hopeless as to make criticism of existing moral systems impossible or undesirable. If morality has something to do with the sorts of things I identified in the previous para, we can criticise particular moralities that have taken on a life of their own and make a poor contribution to those things - or are even counterproductive to them. Nor is information that science obtains about the natural world (which, importantly, includes us) irrelevant. We can, for example, use that information in our attempts to ameliorate suffering. So, I think Harris's actual conclusions are correct: we can have a science of morality, or, rather, a scientifically-informed practice of morality, as medicine is a scientifically-informed practice; and we can (and should) critique existing moralities. Vulgar cultural relativism is untenable and misleading, and we should put it behind us.
But his program won't be as straightforward as he makes it sound. Its aims and criteria will always be more deeply and pervasively controversial than those of science or medicine.
But again, Harris is mainly an activist, not an academic philosopher. I don't mind if he deals in approximations and simplifications. For that reason, among others, I've supported the broad thrust of what he's saying from the start. On the other hand, he shouldn't get tetchy when others want to question some of the details. Activism is important, and it requires the use of approximations. But intellectually rigorous debate is also important and shouldn't be seen as just a nuisance or a distraction.