About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019) and AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021).

Tuesday, February 17, 2009

A little bit more about "is" and "ought"

I'm just getting some notes in one place here. Only read on if interested.

First, as a general point, we always need to distinguish between (1) questions about how, historically, human beings came to have certain systems of morality and (2) questions of why we should, as individuals, give those systems our support (or reject them, or seek to change them, or give them partial support, or whatever). The first of these is a descriptive or explanatory question - "how did it happen?" The other is a prescriptive question - "what should I actually do, given everything I know?"

Part of the answer to the first question must be that Homo sapiens is the outcome of an evolutionary process. We emerged 100 to 200 thousand years ago as a species of social primate. Like other species, we had evolved various broad psychological proclivities, as well as a standard morphology. It's controversial just what those proclivities are, but it's a question that science can study.

As a first approximation, we appear to have a strong tendency to act in our self-interest as individuals, and in the interests of our children, other family members, and those whom we see as allies. But we also seem to have a considerable degree of sympathetic responsiveness to others, even to the sufferings of non-human animals. I don't claim to be expert in how that could happen, but we have a nature that is not entirely selfish. Note that this is perfectly consistent with the selfishness, in a technical sense, of our genes. Contrary to what we've often seen from Mary Midgley and others on this subject, there is no strong connection between the selfishness of genes in that sense and the selfishness of people (in the familiar meaning of words like "selfish"). In fairness, a weaker point should be conceded, that it is prima facie unlikely that beings like us, who are not (for example) genetic clones of each other, would have entirely altruistic proclivities. That fact may set some limits to what kinds of moral systems - deontic constraints, concepts of virtue, etc. - are likely to be realistically workable for societies of human beings.

But what about the second question? Here's how I see things:

I can only get you to give your reflective endorsement to some set of moral norms by appealing to values that you actually (already) have. But again, it's likely, partly because of our common evolutionary history, that a lot of values (by no means all) will be very widely shared. Given real-world facts, most of us have good reasons, based on our more persistent and stable values, to uphold the respective moral systems that we are born into, but not necessarily in every detail. Hence, I am definitely not advocating some kind of vulgar moral relativism.

We may well find that we have good reasons to resist or flout some of the moral norms that we encounter, or, indeed, to seek deletions, extensions, and changes to the prevailing morality. The more we know about the world, the more we may find that traditional systems of moral norms, passed down over many generations, after being formulated in very different circumstances from those that prevail today, do not actually match well with our own deepest values.

But not everyone's values will be identical - similar, yes, but identical, no. Other people may have reasons to seek somewhat different changes to the prevailing moral norms, and there may be no way to say who is "correct" between Adam and Belinda if some of their differences in values go all the way down. An important point here is that these differences very often will not go all the way down, and it may be possible to engage in discussion that leads to a considerable moral consensus. Such discussion may involve asking people what they think of real or hypothetical cases that seem to conflict with their stated principles, or how they justify certain kinds of harsh treatment of others (despite their admiration of kindness if they do profess to admire it), and so on. Even Muslim fundamentalists, when pressed, will often try to justify their positions by appealing to values, such as human happiness, that are shared with secular rationalists.

It does seem that a liberal society can operate successfully with quite a bit of difference among the values of its citizens, as long as there's also a strong core of agreement. In practice, it may be better, politically, for the law to allow a fair bit of variation in people's morality and actual practices, as long as they agree to be tolerant of differences, and as long as there's agreement on certain very important norms (such as not harming others in certain ways just to win out in social, economic, or sexual competition).

(Note, though, that even the judgment of what is "better politically" ... or what "operating successfully" amounts to ... is based on certain values. These may relate to peace, social survival, minimising suffering, and so on. Once again, however, there's very widespread agreement on these values.)

In the end, moral and political norms can't be deduced from scientific facts that include nothing like values, desires, and so on. In that sense Hume was right. But nor are those norms just arbitrary: there are explanations as to why human beings tend to agree on many of them; and it is not surprising that well-informed people often converge, when they really think about it, on accepting the same deontic constraints, concepts of moral virtue, and so on. Hume was right about that, too, although his emphasis on sympathy as the ground of morality may have been a bit simplistic: the grounding is probably more pluralist, and more contested, than Hume seems to have thought.

Finally, it seems to me that there are good reasons, for as long as there's a certain amount of intractable moral disagreement, to prefer a liberal state to, say, a theocracy. Morality is (I think inevitably) a mixture of agreement and disagreement. Even if totally rational and completely well-informed people would converge completely on the same moral views, which I seriously doubt, such people do not exist in the real world. Liberalism enables people who are not in full agreement to flourish side by side; at least as judged by my value system, that's a good thing.

1 comment:

Lorenzo said...

Don't values come from purposes?

Humans have a broad range of common purposes: both primal and more derivative. To say we are purposive beings is to say we are valuing beings.

So it is our purposive nature, which evolved to be purposive, indeed unusually complexly so, which drives our values.

Given compatible purposes, we can reach moral agreement. Even if that means foregoing some to preserve, or at least better facilitate, others. Without such compatible purposes, the outlook is less good.

A certain capacity for mutual acknowledgment is involved too. If that capacity is absent or foregone, then things are grim too. But such mutual acknowledgment is itself about how far we push or direct our purposes.