Moral error theory interprets moral judgments as making truth-apt claims - and says they are all false. Now, in fact, it may not be so simple. While that's a position within a set of metaethical pigeonholes, no one who sees a problem with moral judgments has to claim that all of them are simply false. Morality may turn out to be very messy. Perhaps some moral claims are best interpreted non-cognitively, and so as neither true nor false. Perhaps some are best interpreted cognitively, or as having a cognitive component, but are actually true. In The Myth of Morality, Richard Joyce thinks that moral language is pretty much seamless, so that all sorts of first-order moral judgments stand or fall together. Here, I'm not sure that I'm with him - and this is an area where I'd like to find some time for more thought and research.
Still, skipping over that for now, Joyce and I (and Mackie and Garner, and a lot of other philosophers these days) agree that what are often thought of as the most central moral judgments are literally false. Those are claims such as "X-ing is morally wrong" or "Y-ing is morally obligatory". Actually, even here I'm not sure that I'm in full agreement. I think that part of the trouble is that we don't really know what we're saying when we make these claims. This sort of moral language may suffer from confusion and uncertainty as much as anything else.
So, I'm starting to look like an error theorist in only a minimal sense. Nonetheless, one of the things that people seem to be thinking and conveying when they use this kind of language is "X-ing is objectively forbidden" or "Y-ing is objectively demanded". I.e., there's a claim of objective prescriptivity here, and my position is that all such claims are false. Sophisticated moral relativists also hold that all such claims are false, but they argue that we don't make such claims when we make moral judgments. Such positions - which are rather remote from the kind of crude moral relativism that Sam Harris rightly attacks, and which almost all philosophers hate - make moral judgments seem much like ordinary evaluations. Or, in one version, moral judgments can be interpreted (perhaps charitably) as being like that.
Although moral relativism has a bad reputation, I think the most sophisticated versions, which tend to be inspired by the work of Gilbert Harman, are actually quite attractive. As I've observed before, what separates them from the most common form of error theory is their moral semantic component. Whereas an error theorist will interpret "X-ing is morally wrong" as "X-ing is objectively forbidden" or maybe "X-ing is forbidden by a standard that is objectively binding", a sophisticated relativist will interpret it as "X-ing is forbidden by standards that those involved in this conversation share" or perhaps "X-ing is forbidden by standards that I invite others involved in this conversation to share". There may be a further thought that the standards being used could be justified to others involved in the conversation, perhaps by appealing to their values and/or widely shared values.
The problem for sophisticated relativists is that we really do tend - don't we? - to think, when we make these kinds of thin judgments, that we are stating objective requirements or prohibitions. In some sense, the requirement or prohibition exists in the nature of things, or is just a fact (of some sort), or is true as a matter of reason (in some sense of "reason" that we cannot ignore). In that sense, these sorts of moral judgments are not like other evaluations of "good" and "bad" where we don't seem to think that the standards used are binding. As I've discussed in the past, other evaluations leave room for at least some legitimate disagreement. Perhaps more importantly, they don't usually involve the idea that the standard used is objectively binding, even on rational creatures who don't share the desires, goals, purposes, or whatever that underwrite it. We get by making evaluations that are useful to us in particular social contexts.
Towards the end of my interview with Common Sense Atheism the other night, I was asked a question to the effect of how I could reassure someone who is worried about the idea that the standards on which moral judgments are made are not objectively binding.
I had a fair bit to say, but I was conscious, as I thought about it once more, that a lot of people really are going to find the idea disconcerting. What can be said to them, at the end of the day, is going to involve a whole lot of theory that they need to grasp, and something important does at least seem to be lost if moral requirements and prohibitions are not, in the sense that I'm denying, objective ones. Putting myself into the mind of an objectivist who is an ordinary good person (note again that I don't think that the word "good" is necessarily tainted with any error), I can see that it is quite a lot to be asked to give up the idea of objective prescriptivity.
As Richard Garner emphasises in his publications, there is also a dark side to objective prescriptivity. In the end, we may be better off without it. I'm definitely not suggesting that abandoning it will cause chaos and suffering. But it's a psychological wrench for most people, or at least that's my perception. I don't think we can simply embrace sophisticated relativist moral semantics, because it looks to me at the moment that the folk (and hence the language they use) are committed at least to some extent to the idea of objective prescriptivity. But nor can we expect them to give up that idea lightly. Sophisticated relativists like Harman make some good points, but they are too optimistic about our ability to expunge objective prescriptivity from our thinking and our discourse.
Again, the problem is not that chaos would ensue if we all agreed to think of moral judgments as much like other evaluations, finding an eventual foundation in various human values and projects - with the addition that these will be especially important and salient values, such as social survival and amelioration of suffering. I expect that we can get by just fine if we think like that, and it's a useful way to think when actually doing practical philosophy. I'll go on arguing this. The problem is the psychological wrench involved if we ask people in general, not just a few philosophers and philosophically-minded folks, to think this way.
18 comments:
Hi Russell,
I've yet to see any explanation of what it _means_ for a moral proposition to be true relative to some standard. True, I haven't read much on the subject of relativism, only the SEP entry and one paper by Stephen Finlay. But as far as I could see neither of these addressed this question, which seems to me absolutely vital.
The best sense I can make of the sort of claim you've attributed to sophisticated moral relativists is to say that a moral standard (S) is a set of moral propositions (which the group of people in question believe to be true), and that a moral proposition is true relative to S if it can be deduced from the propositions of S (possibly in combination with some non-moral facts). For example, perhaps standard S includes the proposition that all murder is forbidden, from which we can deduce that murdering Alice is forbidden, and hence we say that murdering Alice is forbidden relative to standard S.
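For what it's worth, that proposal can be put in symbols. This is just my own sketch of the deduction relation described above - the predicate "Forbidden" and the act-constant a are hypothetical notation I'm introducing for illustration, not anything taken from Finlay or the relativist literature:

\[
p \text{ is true relative to } S \;\iff\; S \cup N \vdash p
\]

where $S$ is the standard (a set of moral propositions) and $N$ is a set of non-moral facts. In the example above, $S = \{\forall x\,(\mathrm{Murder}(x) \rightarrow \mathrm{Forbidden}(x))\}$ and $N = \{\mathrm{Murder}(a)\}$, with $a$ the act of murdering Alice; then $S \cup N \vdash \mathrm{Forbidden}(a)$, i.e. "murdering Alice is forbidden" comes out true relative to $S$.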
I find this a peculiar use of the term "relative", and if we're going to use it this way we need to be careful not to conflate it with other senses of the term. Stephen Finlay employs an analogy between relative morality and relative motion. But I think he is conflating two very different senses of "relative".
With this interpretation, the relativist is explaining relativised moral propositions by reference to unrelativised moral propositions. So he still needs to explain the status of these unrelativised moral propositions, but I've yet to see a relativist do so. I would say that an error theory still applies to the unrelativised moral propositions on which the relativist account apparently depends.
Yes, Finlay exemplifies the sort of view I'm talking about. I'm actually pressed for time tonight, so this will be rough and I may need to come back to it.
Most of our evaluations of things are like this. E.g. if I say a car is a "good car" I am saying that it conforms to certain roughly-understood standards as to what counts as "good" in a car - certain levels of comfort, fuel economy, performance and so on (each of these can be broken down further). Those standards reflect the kind of goals or purposes that people have in cars - and I suppose such things as what it is realistic to demand at any given time, given technological development, etc. What gets classified as a "good" car in 1930 may not be so classified in 2030.
A good knife may meet such a standard as being sturdy, sharp, and so on, which will be the sort of thing needed to meet our purpose in using knives of this kind.
A "good act" might be one that conforms to roughly-understood standards as to what counts as "good" in an act quaact. Again this will reflect our desires as to what "we" want from acts generally - perhaps that they be constrained in certain ways. And this will reflect more general desires or purposes that people have, e.g. not to add unnecessarily to suffering and to maintain social order.
In all these cases, no one is objectively bound to apply the standard, whatever their own desires or purposes might be.
However, moral claims seem to be different. People who make them and hear them made seem to think that the standards they are using are absolutely binding, no matter what anyone's actual desires might be. And yet, it's hard to see how that could ever be the case.
But someone like Finlay will deny that moral language makes claims to be objectively binding or to use objectively binding standards.
Again, it's hard to see how he can simply be right about this. Maybe the folk are confused, but it certainly looks as if - to say the least - this sort of objectivity is ingrained in their thinking.
I don't think, though, that the Finlay type moral relativist says that the group of people believe anything to be true at the lower level such as you describe. They may, for example, desire peace and social harmony and the amelioration of suffering. They don't have to believe that "peace, social harmony, and the amelioration of suffering are good things". They simply desire these things. They then have a code of conduct that they believe (perhaps correctly) will conduce to peace, etc. Thus, they think (again perhaps correctly) that widespread conformity to the code of conduct will conduce to meeting certain of their desires.
What they don't do, according to someone like Finlay, is claim that everyone must share the same desires on pain of being wrong or irrational. So they never make the Mackean error. They do, however, enforce their code.
They can also judge whether the code itself is "good" in the sense of effective for meeting their desires. But again, they never claim that it is good in some sense that transcends those desires and is rationally binding on people who don't share them.
So we have here an agreement with error theorists that no acts are objectively required, and no standards by which we might judge acts are objectively correct. But no one makes the error on this account because the folk don't think or say that there are objective requirements. (Of course, evaluations of various kinds can be objective in a sense if the standard allows this, e.g. if a knife blade really is hard and sharp ... but that's not the kind of objectivity that error theorists object to.)
Does that help?
Russell, I really like your writing on this topic (and also your thinking, especially since you've shown a willingness to not be pigeon-holed by one specific moral theory.)
However there is one thing I think you're trying to do that is premature - engage 'the folk'.
When we had a previous discussion about free will, it was 'the folk' view of it that concerned me rather than the academic one, since most academics are determinists (who call themselves compatibilists!). But on the issue of morality the academic community seems far from settled on a particular view, or even on a range of views.
I think you should be attempting to get the fundamentals of this topic lined up before worrying about what the folk will think or how to convince them. Unless we know what we're trying to convince them of it doesn't make a whole lot of sense to panic them by telling them there is no objective morality. That's like telling them there's no god without having science to show how some things happen without design.
I do believe there are ways of engaging the folk, but I don't think we have anything resembling a cogent replacement for their objective worldview. Which is fine for me, but will cause panic among people who think atheists eat babies and without a god they'd run around murdering and raping.
Thanks, Russell, this is the clearest brief description of error theory I have ever come across.
Your post, and especially your comments above, remind me strongly of MacIntyre's virtue ethics - the idea that "good" and "bad" (or virtue and vice) make sense only within a particular practice (e.g. building a car). I wonder if you might be able to take the time to explain the relationship and the differences between the two in a future post?
Perhaps a lot of people do think they are appealing to some objective standard when they make moral statements, though Knobe's research suggests that people may be more likely to recognize the cultural dependence of their moral judgments. Still, most people probably think they are expressing a proposition which could either be true or false. That doesn't mean they're right. The language of our moral idioms may be systematically misleading, as Ryle would put it. Ryle made the same point about our mental idiom: it misleads us into thinking that minds are special sorts of places or things with special sorts of causal properties. On the contrary, he says, our mental idiom exhibits a rather different logic altogether.
Whether or not you agree with Ryle on the mind, the argument at least makes sense: it is conceivable that the grammar of our mental idiom naturally inclines us to think of it as working in one way, when it works in another way. The fact that we think we use our moral idiom to express moral propositions does not mean we are using it to express propositions at all.
What evidence is there that there are moral propositions?
Here's why I don't think there are any. If we suppose that there are moral propositions, we should at least have some idea of what it means for them to be true or false. They would have to be at least possibly true, in theory, and we should have some way of knowing what that would mean. But I have yet to see a cogent account of something that could determine the objective moral rightness or wrongness of an action. I don't even know what it means to say that something is morally right or wrong in an objective sense. We might be tempted to say that it just means the action is right or wrong according to some universal and absolute standard--but what makes that standard right or wrong? What would it mean to have a standard that was right in itself? I don't think it makes sense at all, which is why I'm inclined to think that the set of moral propositions is null and void.
If there is still a temptation to regard moral claims as expressing propositions, even though we cannot identify what defines these propositions as such, perhaps it is because of the strong feeling of righteousness or empowerment we feel when we make moral judgments. It is the feeling we have as moralizing agents that makes us think we are right or wrong in an objective way. We might suppose that this feeling results from somehow witnessing or experiencing The Right (or The Wrong), but I can't make sense of that. It seems more likely that we outwardly (and omni-directionally) project a sense of absolute authority which is in fact originating from ourselves. So, if somebody thinks our actions are right or wrong according to an external and universally binding moral authority, I wouldn't say they are wrong. I'd say they're just confused and that they don't really know what they are talking about.
Thanks for the reply, Russell. I've just read a little more of Finlay's work on line, particularly the following paper, in which he responds to Joyce:
http://www-bcf.usc.edu/~finlay/ReplytoJoyce.pdf.
He writes the following:
>> Rather, I defended a 'relational view' I introduced as claiming that 'every kind of value is relative to some standard or end' [2008: 350]. True, I maintain that our moral claims are relativized to our standards or ends, but this is to be read de re, not de dicto; i.e. if my relevant moral end is E then my moral claim is to be interpreted as ought-relative-to-E, not as ought-relative-to-my-ends. <<
I still think the word "standards" is problematic, but since Finlay offers "ends" as an alternative, and even seems to prefer it, I'll use the latter word instead. The relationship implied by the word "relative" here is one of being conducive to the achievement of ends. This is still a very different sort of relationship to the one involved in Finlay's allegedly analogous example of relative motion. But it negates my other criticism above.
Note the significance of what Finlay is saying here. I had previously understood him to be making moral claims speaker-relative (or speaker's-community-relative). In fact he is making them merely ends-relative, without any reference to who has these ends. I think this is even more implausible as an account of what people actually mean when they make moral claims.
[continued]
In response to an example raised by Joyce, Finlay gives this interpretation of a hypothetical judge at Nuremberg telling Goering, "What you did was wrong":
>> Suppose for example that the unstated end is promoting general human wellbeing. In demanding respect for general human wellbeing and asserting that the Nazis acted in ways detrimental to that end, the judge would not be directing our attention to his/our own attitudes at all, but simply to the ideal of general human wellbeing, and its relation to the actions of the Nazis. (Our attitudes towards that end would therefore be activated, but not necessarily brought to our attention). On this kind of relativist view, it is no essential part of what we as moral speakers communicate that we demand concern or respect for these ends because they are our ends. Hence, my kind of relativist can and should agree with Joyce (and Olson [2010]) that the judge can appropriately say, 'What you did was wrong, irrespective of one's moral standards'. For it is straightforwardly true that no matter what moral standards we, the Nazis, or anyone else were to subscribe to, the actions of the Nazis were wrong in relation to the end of promoting general human wellbeing. Similarly, against Joyce's caricature of my view [2011a], we didn't hang Nazis because (i.e. for the reason that) they did things that 'we found wrong', but because they did things that were 'objectively wrong', in relation to (e.g.) the end of general human wellbeing. <<
The obvious objection to all this is that it puts the judge in the position of merely telling listeners that Goering acted in a way that was contrary to general human well-being. But (to recycle a type of criticism made by Joyce) if the judge had actually said, "you acted in a way that was contrary to general human well-being", this would have felt far weaker and less acceptable, even to people for whom general human well-being was the summum bonum, let alone others.
Finlay insists that the unrelativised claim gets its force not from being taken as absolute, but because omitting the explicit relativisation acts as a rhetorical device demanding that the listener accept the implicit ends. I find this incomprehensible, and all the more so since I can't see how the listener is supposed to know what the speaker's ends are.
Finally, Finlay sounds an awful lot like a moral realist akin to Harris when he finishes the paper with this passage:
>> Further, I find it quite sufficient for child-rape's being genuinely morally bad that it is bad for the child's wellbeing, that it has no other, redeeming value, and perhaps, that it is performed with awareness of these facts. Addition of absolute authority (or 'practical oomph', whatever that might be) seems to me quite unnecessary. <<
Where is this alleged "dark side" of objective prescription? "Dark" in what sense? Clearly it cannot be objectively wrong to make claims of objective prescription... so how are you not just talking nonsense?
I still don't get it. Why do you even think that moral realism is problematic? No one has ever been able to give me a good reason to think that the norms implicit in moral discourse are any more suspect than the norms implicit in scientific discourse, or medical discourse, or mathematical discourse.
Well, much of that is discussed at great length in the review in JET, and can be looked up. However, the dark side of moralism bit is something for a future post.
Russell, just want to apologize for ignoring the topic question and instead expounding upon a view which you already seem to find at least somewhat sympathetic. To answer the question (more generally, to include noncognitivism as well as error theory): How disconcerting is moral anti-realism? Surely it's disconcerting to people who believe in God or some other supernatural Ground for The Good. But for the rest, I don't see a big problem. What's more disconcerting to me is when people go to extremes to defend and exercise their moral principles, all the while exhibiting an inability to think coherently about them.
Jason, you've made some good points. I'm particularly interested in this one:
>> Perhaps a lot of people do think they are appealing to some objective standard when they make moral statements, though Knobe's research suggests that people may be more likely to recognize the cultural dependence of their moral judgments. <<
Well, taken as written, there is no contradiction here. People can consistently say that in practice cultures make moral judgements in accordance with different moral standards, while also saying that there is just one objectively "right" standard.
But let's say we're talking about people who deny that there is an objectively "right" standard. Such people may nevertheless engage in just the same sort of realist moral discourse as people who believe there is such a standard. It may not be logically consistent for them to do so, but that needn't stop them. Speaking for myself, at an intellectual level I'm a convinced error theorist. But that doesn't stop me from feeling the instinctive pull of moral realism at a deeper level, and from making realist moral judgements. It's a bit like an optical illusion: you know it's an illusion, but you still can't help seeing it. And if that's true to some degree for me, as a convinced error theorist, I should think it's much more true for someone who just has a vague belief that there can't be an objective moral standard, but hasn't carefully thought through the logical consequences of that belief.
So, it doesn't follow from the fact that someone denies an objective moral standard that their moral discourse is not as described by error theorists.
I think I would approach the person with a number of questions about what they expect from objective morals.
Would they expect someone of bad will to care that what they do is objectively wrong?
Would they expect people of good will to give up their sense of justice if it was contradicted by some objective morality?
Suppose someone they care about is very upset, and they have a choice between acting in a way that goes against the objective morality but would calm this person without affecting others, and acting in a way that complies with the objective morality but would upset this person even more. How would they act?
I think that asking these kinds of questions - to induce people to reflect on what they expect from objective morality, and thus to lead them towards seeing that it wouldn't deliver it - would be the most fruitful way to address their moral panic.
What Knobe's research suggests is that people tend to say there is an objectively right or wrong answer to moral disagreements when the disagreement occurs between members of the same culture. Yet, people tend to say there isn't an objectively right or wrong answer when faced with moral disagreements between different cultures (or between humans and extremely implausible alien species).
How do you interpret that research, Jason? Do you think that the people who give different answers in different situations are unsure about what "objective" means? Or do they have the same meaning for it in both situations but are simply primed to think different things? Or what?
I just finished reading the Knobe article. He's basically rediscovered the fact that modal thinking leads us to different conclusions about the truth value of statements. Talking about other cultures or about aliens forces us to think: if I were an X, I would believe Y. It's classic counter-factual thinking, and it's generally thought that our theories of truth break down in these sorts of situations - which means, of course, that you're not going to get realist arguments from people primed to think counter-factually; their theory of truth has broken down.
I'm left curious about what to conclude from this, though. Either I pass this over to our brothers in the Philosophy of Logic, or I argue that, regardless of the metaphysics, morality remains an empirical matter, and therefore the question we should be asking isn't whether or not moral claims are necessarily true, which is what modal thinking is aiming for, but rather whether or not a given claim is true in this case and whether or not I can convince the person in front of me that it is.
Actually, that thought seems to me useful. The objections to moral realism tend to be of the sort that talk about necessity, and maybe that's an easier way to frame the debate. This thinking is somewhat nascent, but Kripke touches on it in Naming and Necessity. Essentially he points out that one of the hallmarks of science is that it tends to move towards more and more necessary descriptions of things: the scientific description of gold as the element with atomic number 79 is necessarily true; the description that gold is hard and yellow is not. I'm still mulling this over, but particularly given the Knobe piece I think it's relevant.
Russell,
Interestingly, it looks like Knobe's studies don't rely on individual interpretations of the word "objective." Participants only ranked agreement with statements of the form, "Since X and Y have different judgments about Z, one of them must be wrong." Those who strongly agreed with such claims were then interpreted as being moral objectivists. Maybe they're applying different notions of the word "wrong," but if so, we should still like to know why.
I think Knobe's interpretation is questionable. Perhaps, as Knobe says, the more we can imaginatively engage with alternative perspectives, the more inclined we are to respect those perspectives on moral questions, even if they are in contrast with our own. The point, then, would be that we only think there are objectively wrong answers when we cannot imagine there being another point of view. But, then, why can't we imagine that the people in the same-culture scenarios have equally plausible moral judgments? Are our imaginative capabilities more limited when we are confronted with people who are culturally and biologically like us? Perhaps, if we assume that they share the same values. In that case, these supposedly objectivist answers might really be relativist answers, and "wrong" just means "wrong for people with those values." When the values aren't shared, there is no longer a single right or wrong answer.
But Knobe hasn't shown that our imaginative engagement is stronger in those cases where we deny there is a right or wrong answer. Rather, it seems that, when we cannot imagine what it is like for the people with whom we disagree--it is hard to imagine what it is like to be a person from a different culture, and even harder to imagine what it is like to be an alien who only wants a world full of pentagons--then we have trouble supposing there is a right or wrong answer to be had. Imaginative involvement could be a criterion for making moral judgments, period, and not a criterion for moral relativism per se.
I should mention that I haven't gone through all the responses to Knobe yet. But there are some excellent comments, as I'm now finding. One, in particular, offers a plausible alternative to my own interpretation. It's by Cihan Baran, posted December 17th, 2010 at 5:29 pm. The claim is that people might not be denying an objective morality when they shy away from saying the other-culture or alien individuals are wrong. Rather, the participants are just finding a way to excuse behavior which they feel is really wrong. The other-culture and alien individuals just don't have the right knowledge and background to know any better. So they're not making a mistake. They're not wrong, even though they're acting in a morally impermissible way. The idea, I guess, is that moral judgments require some epistemic component--we can't judge somebody as morally wrong if they don't have the right knowledge base. Perhaps some people are using "wrong" in the sense of "making a mistake," and not in the sense of "doing something morally impermissible." But I'd be a little surprised if a significant portion of the participants were thinking about it this way. Further research should be able to flesh this out pretty easily.