About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019) and AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021).

Tuesday, May 29, 2018

Rosenberg on moral nihilism

(Republished from Talking Philosophy (2012). Since this was published I've written an entire book on the relevant issues: The Mystery of Moral Authority (2016). Please look it up if you're interested. It's much more rigorous than the material I was publishing online a few years before, using blogs as intellectual sandboxes. Still, the book does not deal with this particular argument from Alex Rosenberg.)

One of the positions that Alex Rosenberg argues for in The Atheist’s Guide to Reality is what he calls “nihilism”, by which he means a form of moral error theory or moral scepticism. He actually reserves the term “moral scepticism” for a different position: that there are moral truths, but we are poorly situated to discover what they are. The use of “nihilism” for his position is reasonably standard in metaethics, though the word can also have other connotations, so I prefer other expressions. However, nothing turns on that.
I’m sympathetic toward moral error theory – in fact, I think it’s about the closest approximation to the truth that we’ll find in the standard metaethical theories on offer. It interprets familiar first-order moral claims (or at least a large and important class of them) as truth-apt, but it also interprets them in such a way as to render them all false.
Thus, a moral claim such as “Torturing babies is morally wrong” is commonly interpreted by moral error theorists as the claim that torturing babies is absolutely prohibited, or prohibited by a standard that transcends all desires and institutions, or prohibited by a standard that is binding as a matter of reason, or inescapably binding on all rational creatures, or part of the world in a similar way to physical objects and their physical properties, or some such thing.
The idea is that what ordinary people mean by a phrase such as “morally wrong”, or just “wrong” (implicitly in a moral sense) is, perhaps, somewhat inchoate, but is something along the above lines; that ordinary people will not be satisfied with anything less as an understanding of what is meant; and that, indeed, something along these lines just is the ordinary meaning of “morally wrong” – yet the phrase fails to refer to anything in the real world. I.e., no action actually is absolutely prohibited in the required sense, or prohibited by a standard that transcends all desires and institutions, or that is binding as a matter of reason … or whatever the best formulation of the idea might be.
According to moral error theory, something like this applies, mutatis mutandis, to “morally obligatory”, “morally permitted/permissible”, and perhaps a long list of other, similar, expressions used in first-order moral language.
It’s not my intention to defend such a metaethical view here. To debate it, we’d need to get into difficult questions of moral semantics, and we’d need to consider whether our interpretations of these moral expressions – whatever we might think the best interpretations actually are, after much inquiry – do or do not present them as the kinds of expressions that can or do refer to anything in the real world. That’s all very difficult…
Furthermore, even if the argument goes through in a whole range of important cases, it might leave difficulties. What do we say about, for example, “Torturing babies is cruel”? What do we say about, “Adolf Hitler was evil”? Or what about, “X is of bad character”? There is much conceptual work to do here, and it may be that many such claims will turn out, on their most plausible interpretations, to be true, even if a thin expression such as “morally wrong” is most accurately interpreted in such a way that it doesn’t refer to anything in the real world.
Rosenberg takes a rather different approach, although it is by no means entirely unfamiliar (cue Michael Ruse, for one, and Richard Joyce, for another, who have argued along what I take to be related lines). His argument seems to be something like this, and interestingly enough it does not rely on moral semantics:
P1. The core morality that human beings share almost universally is a biological adaptation.
P2. If the core morality that human beings share almost universally is a biological adaptation, then it would be a bizarre coincidence if its claims were true.
P3. It would be a bizarre coincidence if the claims of the core morality that human beings share almost universally were true. (From P1. and P2.)
C. Most likely, the claims of the core morality that human beings share almost universally are not true. (From P3.)
I take it that there is no huge problem here with the inference from P3. to C. The step from P1. and P2. to P3. is deductively valid (some minor, pedantic tidying up would be needed to make this totally clear, but it would not be difficult). However, I’m not at all sure that Rosenberg has done enough to persuade us to accept either P1. or P2., let alone both at once.
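For readers who want that deductive step laid out explicitly, here is a minimal sketch of the argument's logical skeleton in Lean. The proposition names A and B are my placeholders, not Rosenberg's wording: the move from P1. and P2. to P3. is just modus ponens, while the further move from P3. to C. is probabilistic rather than deductive, so it is not captured here.

```lean
-- A : "the core morality we share almost universally is a biological adaptation"  (P1)
-- B : "its claims being true would be a bizarre coincidence"                      (P3)
-- P2 is the conditional A → B; P3 then follows from P1 and P2 by modus ponens.
-- The step from P3 to C ("most likely, the claims are not true") is an
-- inductive/probabilistic inference, so it lies outside this deductive fragment.
example (A B : Prop) (p1 : A) (p2 : A → B) : B := p2 p1
```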
On page 104 of his book, he offers a list of universal norms that constitute the universal core morality that he’s talking about:
Don’t cause gratuitous pain to a newborn baby, especially your own.
Protect your children
If someone does something nice to you, then, other things being equal, you should return the favor if you can.
Other things being equal, people should be treated the same way.
On the whole, people’s being better off is morally preferable to their being worse off.
Beyond a certain point, self-interest becomes selfishness.
If you earn something, you have a right to it.
It’s permissible to restrict complete strangers’ access to your personal possessions.
It’s okay to punish people who intentionally do wrong.
It’s wrong to punish the innocent.
This is a slightly odd list to the extent that some of the norms, e.g. “Protect your children”, are expressed as commands, and so cannot be either true or false, while others look like truth-apt propositions, e.g. “It’s [morally] wrong to punish the innocent”. However, I think they could all be rewritten in a way that appears truth-apt. E.g.: “It is morally obligatory to protect your children.” In principle, then, they could all turn out to be false.
Rosenberg also concedes, I think correctly, that any such list will contain items that are somewhat vague. Perhaps we could talk about that, but I don’t see why vague beliefs along these lines could not, nonetheless, have been sufficiently helpful to our ancestors to add to their reproductive fitness. Nor do I see why they could not have evolved vague beliefs such as these through natural selection. As long as the beliefs were clear enough to provide some guidance for behaviour (and actually led to behaviours that assisted in, say, survival and reproduction, and the survival and reproduction of genetically similar organisms), that would be sufficient.
So, again, if P1. and P2. are true, Rosenberg seems to have a good argument for what he calls nihilism. The question that we might explore – and I’m not going to take it a lot further at this stage – is whether he really has a good basis for believing both of these things at once. The more I think about the argument, the more elusive it becomes. It does seem as if a lot of work is required to support P1. You’d need, for a start, to establish (presumably using evidence from anthropology and cross-cultural psychological studies) that something like this moral core really does exist universally (or at least that any deviations from it could be accounted for in a way that is consistent with some kind of genetic propensity being expressed differently in different environments). You’d then need a good argument as to why just this set of beliefs would have contributed to the inclusive fitness of our ancestors. And there might be quite a lot more before the basis for P1. was truly convincing.
However, P2. seems more philosophically interesting. Is it really true that a set of moral beliefs such as we’re contemplating could not be both fitness-enhancing and true – without a bizarre coincidence? There’s something unusual about this claim. If a gazelle is hardwired with something like a belief that lions are dangerous to it, as well as with a desire to avoid dangerous things, surely that will be fitness-enhancing. And it will be fitness-enhancing precisely because lions really are dangerous to gazelles. Generally speaking, we tend to think that true beliefs about the world enhance an organism’s chances of survival, reproduction, etc. Likewise, sufficient cognitive abilities to draw true conclusions and to learn stuff about the world will tend to enhance fitness.
There can be exceptions – sometimes it might be better, from the viewpoint of reproductive fitness, for an organism to follow certain simple heuristics, or to lean in the direction of avoiding certain kinds of false negatives. But overall, it looks as if having the perceptual and cognitive capacities to discover and learn things tends to be biologically adaptive.
Are moral beliefs different? Why should having true moral beliefs contribute to my survival? Is it really like having true beliefs about, say, which animals are dangerous to me? I’m going to end here, except to say that we seem to be back to moral semantics. Might the answers not depend on just what a moral belief actually is? If a moral belief that “It is morally obligatory to protect your children” translates as “Protecting your children is required by a transcendent standard”, perhaps this gets us nowhere in the evolutionary stakes. It’s not clear why conforming to some transcendent standard is going to enhance my reproductive fitness, so perhaps it would be bizarrely coincidental if the actions that do the one correlated tightly with the actions that do the other.
But what if the idea of moral obligation translates in some other way, such as “Protecting your children is very effective for giving you certain experiences that are necessary for your own flourishing” or “Protecting your children is an effective way of contributing to the survival of your tribe”? Presumably a moral naturalist might give meanings to familiar moral expressions, such that there actually is a rather tight correlation between having correct moral beliefs and enhancing your reproductive fitness.
I don’t find moral naturalism very persuasive, but maybe there are other strategies for denying P2. In any event, I don’t think Rosenberg gets his fingers dirty enough with moral semantics to give adequate support to his premises. Somewhere along the line, I think, we need to sort out what the various moral expressions actually mean in our ordinary language. Only then do we have much chance of knowing whether they refer to properties in the real world.
