I'd like to move on to some other things, including a couple of very important court cases in which judgments were handed down this week - the case on gene patents in the US, and the Simon Singh libel case in the UK.
But one loose end from all the debate about Sam Harris was whether we are setting the bar too high by maintaining the "is/ought" distinction.
My answer is, "No." Hume pointed out that no number of propositions that use the copula "is" can ever logically entail a proposition with the copula "ought". Yet, he says, we often see philosophers slip into "ought" conclusions without ever explaining how they did it. That's a shrewd observation, and we should not throw it out in the name of being able to study morality more easily. It imposes a discipline on us: if we start introducing "oughts", we must explain how we did it, and it can't simply be a logical entailment from a string of "is" statements.
Hume's own approach is to connect "ought" with human psychology - he tends to speak of desires, but he means this in a broad sense. Today, we should talk, with some vagueness, about desires, values (or maybe this should be valuings), fears, hopes, sympathies, etc. This is a place where the "etc." is okay, because it is possible in principle to sort out the detail of how all these things relate to each other, though it's difficult to get precisely right in practice. These aspects of human psychology connect up with facts about the world to give us reasons, or rational motivations, or "oughts". For example, if I desire the amelioration of suffering, I am rationally motivated, other things being equal, to give money to certain charities (if they will be effective), or to interfere (if I'm likely to be effective) to stop a bully beating up someone smaller.
There are other ways to try to derive "oughts": e.g. the will or purposes of a God, some process of souped-up Reason, or some kind of non-natural, metaphysical properties of things. In my view, none of these work. (Note that the Euthyphro problem remains even if we invoke the will of God or the gods.)
We are left with a world in which "oughts" ultimately depend on features of human psychology (plus facts about what will be effective). Since I can't see what else they could ultimately depend on, I'm not too embarrassed by this, but I do concede that there is this psychological craving to find a basis outside of human psychology, i.e. an external or "objective" basis. Even I can get myself in a mood where I feel this. It's the creepy feeling of not wanting to deny that torturing babies is not just really wrong ... but really, really wrong.
But that external basis doesn't exist. Therefore, morality is not objective: it is not grounded solely in something external to our psychological makeup. Since our psychological makeup is not identical from person to person, this leaves room for some indeterminacy, but perhaps not all that much if we can agree on all the non-moral facts. The more we agree on the facts, the more we tend to converge on agreement about what should be done.
Morality is, however, non-arbitrary: it is grounded in widespread human values, sympathies, desires, needs, etc., combined with facts about the world.
You might think that "needs" are objective, but even what counts as a need has some subjective element to it; survival itself is not important because of some external thing such as the will of a god, but your survival is important to you and to those who love you or depend on you. Even what counts as your flourishing has some subjective element; it will depend on what you value, or on what another person making judgments about your flourishing (or otherwise) values. I can't see any prospect of a plausible concept of flourishing that is not value-laden (laden with the values of somebody) to some extent.
The elements of human psychology that I've mentioned - desires, sympathies, fears, and so on - can give us reasons to act in ways that are not narrowly selfish, e.g. if we want to be able to enjoy loving relationships, live in peaceful societies, or ameliorate some of the immense suffering in the world. They can give us reasons to favour certain laws, to support entire systems of social or legal norms that govern conduct, or to try to inculcate certain dispositions into children. We have many reasons to act in ways that are not narrowly or short-sightedly self-interested. Importantly, our desires, sympathies, fears, and so on can give us reasons to try to dismantle moral systems that fail to advance such things as human happiness (which almost all of us value) and which actually operate cruelly.
All of this can be examined objectively, in the sense of carefully, disinterestedly, attempting to get at the truth rather than to confirm our biases, without throwing away Hume's point. In fact, Hume's point is a fundamental finding that we ought(!) to work with if we wish to make progress in the study of morality.
15 comments:
Russell, thanks for your help in clarifying issues that some of us struggle with.
I also am skeptical of Sam Harris’ assertion that, from the standpoint of science, "values are facts about the welfare of conscious creatures" or something close to that. To me this implies that the ‘purpose’ of moral behavior is some utilitarian maximization of the welfare of conscious beings. This seems unlikely if moral behaviors are a product of genetic and cultural evolution as I think science requires.
I’d like to suggest an approach for resolving this question. Isn’t it at least possible that a question like “What is the purpose of moral behaviors from the standpoint of science?” could be objectively answered by the standard methods of science?
For example, we could make several hypotheses about what the purpose of moral behavior ‘is’ as a matter of science. (Note that this is about what moral behavior ‘is’, not what it ‘ought’ to be.) These hypotheses might include Sam’s proposed purpose as well as other hypotheses from studies in the evolution of moral behavior.
Then, each hypothesis about moral behavior could be evaluated for explanatory power, predictive power (such as for moral intuitions), universality, consistency with the rest of science (in particular genetic and cultural evolution and game theory), and so forth just as if one were evaluating standard scientific hypotheses.
There may be no a priori guarantee of success with this approach. Perhaps such a process could never resolve whether any hypothesis might be provisionally ‘true’ in the scientific sense because moral behaviors and cultural moral norms are too diverse and contradictory or perhaps because moral knowledge actually is a different kind of knowledge.
However, work to date shows the opposite. I have been astonished how well a simple, common hypothesis from evolutionary psychology – in condensed form it is that the purpose of moral behavior is to increase benefits from cooperation – meets these criteria for scientific utility. (This hypothesis can also be expressed as: moral behaviors and cultural moral standards are strategies and heuristics for exploiting the synergistic benefits of cooperation. Note this could be the basis of a universal definition of morality for all conscious beings. Of course, biological structures that produce motivating moral emotions and the cultural values and norms that emerge to accomplish this purpose might radically differ in different species based on environment, other aspects of their biology, and random chance.)
An additional challenge is to identify a source of motivation for an individual to accept the burdens of such a morality. Fortunately, it appears easy to assemble arguments that such a definition might be the most rational choice for adoption and practice by at least some groups and individuals. (Here a rational choice is one that is expected to best meet the needs and preferences of a group or an individual.)
Do you believe such an approach to bringing moral behavior into the realm of science is somehow inevitably doomed due to some logical error? Or does it have at least a chance of being useful?
Russell, I expect you may not agree, but there seems to me no contradiction between what I have written here and at least your last three posts, which I thought were very good.
Take two societies, one that believes in objective morality enforced by divine judgement, and one that believes in relative morality enforced by nothing except lawsuits and a police force (keeping in mind that neither the law nor the police would have objective standards to guide their behavior).
Which society will achieve superior levels of social cohesion and stability? Which will have higher birthrates (http://newhumanist.org.uk/2267/battle-of-the-babies)? Which would survive and even thrive in Darwinian competition with the other?
History has already answered these questions.
"one that believes in relative morality enforced by nothing except lawsuits and a police force (keeping in mind that neither the law nor the police would have objective standards to guide their behavior)."
It is naive to think that in a secular society there would be nothing to guide behavior but lawsuits and a police force. You omit two important things: 1. our psychological makeup (viz. The Moral Animal) and 2. custom and social pressure.
Here's an example: in 1996 an Ontario court decision gave women the right to walk around topless in public. Fundamentalist Christians railed against the decision, saying topless women would be everywhere. I said they wouldn't, because customs are slow to change, and social pressure is a big force. Merely because something is legal does not mean everyone will do it. The fundies turned out to be wrong, and I was right.
Russell, it's obviously very important to Sam Harris to be able to say that honor killings are wrong, suicide bombings are wrong, girls should be allowed to go to school, etc., etc. Plus, it's important to him to be able to "ground" those judgments in something this-worldly, not in God. I can't see how non-cognitivism can support these judgments. Of course, non-cognitivists can easily show that you and I shouldn't engage in honor killings and suicide bombings. We have all the "con" attitudes that make these things wrong for us to do. But why are they wrong for the people who have "pro" attitudes toward them? While non-cognitivism would be a live option if Harris were doing metaethics in the open-ended way philosophers do, it doesn't seem like an option for him. (Or ... are we to somehow try to convince ourselves that everyone really has "con" attitudes toward honor killings and suicide bombings, etc., even the people who engage in them and seem to condone them?)
That's very well put, Russell. Still, I'd like you to have said a little more. The kind of "ought" that you've discussed is not the same kind of "ought" that moral realists (and no doubt Sam Harris) have in mind. If they read your post, they would surely say, "but that's not what I mean when I say 'ought'. When I tell someone he morally ought to do something, I'm not just telling him how best to achieve his own ends, even if those include the ends entailed by his personal moral values."
I appreciate that you didn't want to get onto the tricky subject of accounting for the way people use moral terms. But I'd like to have seen some nod to the fact that people normally use moral "oughts" in a sense other than the one you've described here, perhaps adding that that sense is incoherent (or however you would describe it).
You've said that there are no other ways to derive "oughts". I would rather say that there are no ways to derive the kind of moral "oughts" that most people want.
I agree with all of you except anonymous. The program of research that Mark suggests is useful, though not some kind of ultimate solution to questions of how we should act; and philosophers should work with psychologists and others on this sort of thing (some are doing so, e.g. Peter Singer and Mark Hauser). Jeffrey's point is clearly correct (and I've made a similar point myself in the past). Non-cognitivism is not the answer, as Jean says (and it has a lot of problems anyway). And I agree with Richard that ordinary people typically (but not always, because a lot of them are relativists these days) think that morality is externally, "objectively" binding in some way that goes beyond institutional moral norms that we might have good, non-arbitrary reasons to accept (perhaps with specific reservations).
The metaethical position that I actually subscribe to, for those who don't know from my previous bouts of posting on metaethics, is not any form of non-cognitivism, although I do see the force of prescriptivist theories. It's actually the error theory associated with JL Mackie (among others). The question is then: How should we act, talk, etc., given that there is a pervasive error in folk metaethics?
It's a pity that morality can't be everything it seems to be as we grow up with it: determinate; externally sanctioned; strongly action-guiding; and so on. It's also a pity that someone with my views, which a lot of philosophers share, sees an error in the thinking of the folk.
But we have to live with that, and one thing that morality is not is simply arbitrary, and nor is it closed to rational criticism and development. We can actually do a lot of practical philosophical ethics, as well as living our lives well, without having to look over our shoulders at metaethics all the time.
Or that's how I see things.
Ah, all that stuff you were saying about motivation was making me think you didn't think moral claims were fact-stating at all. I have to say--it's a good thing Sam Harris is not attracted to an error theory. It would surely be a public relations disaster for atheists to be telling people that moral claims are all actually false. Metaethics in the public square has some special parameters, I think. It's not the same as what can go on in a philosophy seminar room.
This discussion of morality had me thinking about the question of whether it is wrong to harm/let suffer/kill children. Yes, I think it's wrong - really wrong - but why? I imagined myself as a crocodile, or even a rabbit or cat: any animal with a very large litter/clutch of offspring, virtually all of whom will die before reproducing. I've heard that crocodiles may even eat their own young, which sounds bizarre to us, but as the mortality rate is so high, maybe it's not so bizarre to them. I wonder if we would care less about our children if there were so many of them, and I think that yes, we probably would. This wouldn't just be a rational/intellectual change; the evolutionary pressures which are responsible for our strong protective instincts would also change, so that we would even feel less attached.
I wonder then if there might be more to this. Do the feelings of mothers for their infants vary between societies which average 0.9 children per adult and ones which average 4-6 and where mortality rates are much higher? Might they feel less strongly about the need to protect children?
It seems to me that, on the one hand, this argues even more for Sam Harris's perspective than I initially thought: we may be able to use more observations and data to infer which moral statements are most likely to be agreed upon by a community. On the other hand, it also works against him in that it shows that even statements which appear to be uncontroversial and universal may in fact be situational and subject to change.
Jared Diamond has written about how morals vary with the size of the grouping, from tribe to village to nation, and things which we view as barbaric and cruel could be accepted or encouraged in tribal groups.
I take his point that we should not be timid in decrying the immorality of dogma and faith, and we should show how a secular discussion of values, goals and strategies is probably the most effective approach. Yet the more examples I can think of which might test his arguments, the more exceptions I find. I'm forced to agree that the is/ought distinction isn't something trivial which can be brushed aside, but something fundamental which must be confronted head-on.
It needs to be confronted head-on in the philosophical study of metaethics. There may be a danger, though, in confronting it head-on in public applied ethics. I'm not sure of this, but some error theorists (well, I'm thinking of one in particular) agree with Jean. There's an argument that error theory is true but we should (where "should" relates to achieving our own goals, etc.) not say so.
Oh, and of course it's "Marc Hauser", not "Mark Hauser".
@Russell
I wouldn't say that non-cognitivism is wrong. I'd just say that it's not the whole story. And neither is error theory, though that comes closer. We should recognise that moral statements have multiple meanings. They can mean different things to different people, and a given instance of a moral statement can (and usually does) convey two or more meanings at the same time.
I would say that "stealing is wrong" is most often meant and understood as a claim that stealing has a property of wrongness that is external or objective in the sense you've described. This is also the meaning that most directly corresponds to the grammatical form and contextual usage of the statement. I'm inclined to call it the "surface" or "literal" meaning. With regard to that meaning I'm an error theorist. When cognitivists of various other stripes (relativists, subjectivists, etc) give me alternative cognitive meanings, I'm prepared to believe that some people, at some times, may actually mean some of these things. I doubt whether they are sufficiently widely meant to be considered standard alternatives, such as one might include in a dictionary, but I haven't rejected that possibility. (I do utterly reject moral naturalist definitions, such as the one proposed by Peter Singer, which I consider to be completely mistaken.)
However, I would say that a moral statement is more than just an expression of a cognitive belief. It typically also expresses one or more non-cognitive attitudes or stances. In the case of "stealing is wrong" I would say that the speaker is typically expressing his own negative attitude towards stealing (as the expressivists would have it). He may also be expressing an imperative, telling the listener not to steal. But that barely scrapes the surface of a complex subject which requires psychological research and not just armchair theorising. Moral statements aren't the only kinds of statements which can convey non-cognitive meanings. But the non-cognitive aspects seem particularly important here, because of the emptiness of the surface meaning.
I have some sympathy for the view that we shouldn't spread the news of moral anti-realism. First, it may weaken people's commitment (including our own!) to moral values that we approve of, as well as to those which we don't. Second, it may make us less persuasive in our attempts to influence other people's moral values. That's why I wouldn't have been very critical of Sam Harris's talk, if he'd stopped there. But his follow-up article was so much more specific in its errors, and so arrogant towards people who understood the subject better than him, that I found it difficult to turn a blind eye!
P.S. Russell, sorry for implying that you think non-cognitivism is "wrong". You say you see the force of prescriptivist theories, so I think we're pretty much on the same page. I probably just give more emphasis than you to the role of non-cognitive attitudes as constituting part of the "meaning" of moral statements.
P.P.S. I forgot to mention that in my opinion moral statements often express an additional cognitive belief (as well as the surface one), namely the belief that there exist objective moral facts. In other words, in making a moral claim, the moral realist is implicitly asserting the existence of objective moral facts. Whether this plays a significant role in the speech act will depend on whether the speaker thinks the listener already accepts this fact. If not, he might even emphasise the point: "stealing is really wrong."
Thanks, Richard. This is all very helpful, actually, and I'd probably go along with all of it.
And yeah, I know what you mean about Sam's talk. Overall it had a lot going for it, and the stuff about metaethical theory could be kind of overlooked. For example, I mentioned it only in the process of mainly praising the talk. But his response to his critics seemed to have an exasperated tone and an uncompromising attitude. I'm not sure why. Sometimes going on the attack is not best when you're discussing with colleagues. It can make you look bad ... but it also risks harming the credibility of people whose credibility you really need to be protecting if they're on your side in many battles.
I pretty much agree with all of this... but if anybody is interested and has a few extra minutes, I have a blog post where I speculate about a possible way of deriving some basic moral tenets that are independent of our specific biology. I'd be interested to know what anyone thinks of the idea.