About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019); AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021); and HOW WE BECAME POST-LIBERAL: THE RISE AND FALL OF TOLERATION (2024).

Monday, January 15, 2007

Meta-ethics and the human future

In my recent series of posts in which I've attacked the idea that morality is objective, I have not sought to deny that morality of some kind is inevitable for human societies. I think it is. I also think that much of its actual content is inevitable for us ... i.e., for human beings or any species much like us. In a sense, then, we might want to say that some of the basic content of morality is objectively justified, after all, because we would not want to do without it.

In that very weak sense, I could call myself an objectivist, though this is not how the words "objective" and so on tend to be used in meta-ethical debates. (Then again, much of the terminology in meta-ethics is not used with total consistency ... to say the least.)

It would be a weak sense of "objective", as Blake Stacey said in a response to my previous post on this site. On this picture, the justification for morality is a kind of pragmatic one, based on human needs, values, interests, etc. It is also a subjective justification - not in some extreme, nihilistic sense (it's not a matter of "Do what thou wilt shall be the whole of the law"), but just in the sense that justifications of morality are not independent of our needs, values, interests, etc.

Still, if someone wants to claim to be a moral objectivist, or a moral realist, meaning no more than all the above, then I have no substantive quarrel with them. The trouble is that a sense that morality is much more strongly objective seems to be assumed in (much) commonsense meta-ethical thinking, and of course naive religious meta-ethics typically claims that there is a grounding for morality in the will of a deity, which creates absolute standards of right and wrong. It is a trite observation that people with this naive divine-command theory often wield great political power.

I think that there are at least two sets of practical issues that arise out of all this, and affect how we might imagine the human, or post-human, future.

First, it seems that "our" needs, values, interests, etc., are somewhat indeterminate and contested, and this leads me to conclude that the detailed content of morality - as opposed to the broad outlines - will always be both (1) underdetermined by naturalistic facts about the universe (there may be no one perfect way to meet "our" needs, etc.) and (2) pluralistic (there is no single overriding value that morality gives expression to, but rather a range of values that human beings actually tend to have). I think that this recognition could have many practical implications. E.g., no one cultural system of moral norms may do a perfect job of meeting human needs, etc., but many may do some sort of reasonable job, judged against the contested - but largely agreed - values that we want to apply.

This seems to justify a certain kind of sophisticated moral relativism: we should not be too quick to condemn the moral codes of other cultures, which might be functioning reasonably well, however bizarre they look. At the same time, it allows that some moral systems may do a better job than others, when judged by standards with much inter-cultural acceptance. Thus, it refutes the naive relativism which insists that all cultures are equal.

More generally, a pragmatic and pluralistic approach to morality might lead to considerable revision of our traditional moral norms, and might guide us in what norms should be retained, invented, or rejected. Is this norm (we might ask) actually promoting our more fundamental values? If not, pitch it into the flames. Many inherited moral norms may not withstand scrutiny, once it is asked whether they are actually performing such functions as creating happiness and minimising suffering (I take it that these, at least, are widely-agreed values).

Here, I agree with Joshua Greene, who has written much in this area (including a PhD thesis that is expected to be published in 2008). Greene argues that the kind of meta-ethical approach I'm describing, and which he defends eloquently, will push us somewhat in the direction of utilitarianism. I don't think it pushes us quite as far in that direction as Greene thinks it does, but I certainly agree that it has practical implications for how we live our lives and what policies we support.

Phew, that's one set of issues.

Second, there's the question of what we do about it, once we come to believe that morality is not as it seems.

It appears (at least to me) that naive thinking about morality typically involves an illusion: the illusion that, in a strongly objective sense which transcends human interests, there is right and wrong built into the framework of reality. Once we see through this illusion, how should we act (given whatever values, etc., we share)?

Should we try to revise our moral language, as Greene argues? How do we bring up kids - do we use the old, simpler language, or some new kind of language? Are we better off if most people continue to live under the illusion that morality is strongly objective? How easy or difficult is it to shake them free of it? (Perhaps, as Richard Joyce has argued, we evolved to have this illusion, which may have had some survival advantage; perhaps it takes a very special kind of abstract thinking to break the spell, even temporarily.)

It also hasn't escaped me that these questions are analogous to questions about the belief that one or more powerful supernatural beings are taking an interest in us. If this is a deeply-entrenched mistake, how urgent is the need to say so? What are the advantages and disadvantages?

My own bias is toward dispelling illusions, and toward expecting that the advantages will outweigh the disadvantages. If we could live, and set policy, without illusions, I believe it would help us lead better lives (judged by values that are largely shared). For example, if we all saw things more clearly, much irrational rejection of biomedical technology might dissipate like the mist. However, I grant that it's not straightforward. There are huge issues here about the future of morality (and religion) - and about the future in general, if we try to plan it in accordance with our actual values, rather than in the thrall of ancient illusions.

I've just mapped out a research program for myself for years to come. Who wants to join me?

2 comments:

Blake Stacey said...

You have certainly mapped out an impressive research programme! (I typically spell the word program with extra letters when I want to make absolutely sure that people don't think I believe the program in question can be implemented in Python or C.) Unfortunately, I expect it would take me a few years of remedial reading to sound like a philosopher when I debate these issues, so the best I can offer is, "Hey, that idea appears in such-and-such Isaac Asimov novel," or alternatively, "Philosopher X has clearly not studied physics." If such intrusions are useful, I will be happy to provide them until my typing fingers wear out.

Incidentally, I read your entry on science fiction in the Literary Encyclopedia, and I quite enjoyed it. In particular, I liked the idea that all of modern SF is roughly speaking "postcyberpunk", in that it incorporates or reacts to cyberpunk themes. Back when I was researching this topic for WP, I read Lawrence Person's original essay which defined the term. Parts of it I found insightful:

Many writers who grew up reading in the 1980s are just now starting to have their stories and novels published. To them cyberpunk was not a revolution or alien philosophy invading SF, but rather just another flavor of SF. Like the writers of the 1970s and 80s who assimilated the New Wave's classics and stylistic techniques without necessarily knowing or even caring about the manifestos and ideologies that birthed them, today's new writers might very well have read Neuromancer back to back with Asimov's Foundation, John Brunner's Stand on Zanzibar, and Larry Niven's Ringworld and seen not discontinuities but a continuum. They may see postcyberpunk not only as the natural language to describe the future, but the only adequate way to start extrapolating from the present.

This tallies with my personal experience of stochastically wandering through the SF-scape in the late 1980s and early 1990s. Clarke's 2010 and Childhood's End were flavors which could naturally mix in a delicious sundae with Light Years, the French "anime" translated into American by Isaac Asimov; Greg Egan's Quarantine, which I probably cited in half a dozen WP articles; Total Recall, which my friend Mark and I calculated had a death toll of over eighty dollars, rating each expendable extra at 25 cents; Jurassic Park and The Andromeda Strain, both of which improved in going from print to screen; and a host of others. (I'm almost ashamed to admit that I couldn't finish Foundation the first time I tried to read it. If somebody had told me that it was first a collection of short stories, it would have been easier to handle the uneven flow, variable style and leapfrogging story line.)

However, I found Person's argument for sticking the postcyberpunk label onto the works he considered a little too weak. Part of my objection is that the word itself suffers from the same flaw as postmodern or postcolonial: what do you call the next thing to come along? Prepending additional prefixes is inelegant, to say the least. More fundamentally, his designation seems to confuse chronology with content, or at least open the avenue to such confusion. Given enough caffeine, I could argue that Lang's Metropolis (1927) and Lisberger's Tron (1982) both contain elements which qualify as Personian "postcyberpunk", which seems a little unfair, considering the dates they were released!

Focusing on content alone, I'd call such works as Ghost in the Shell: Stand Alone Complex or Transmetropolitan "optimistic cyberpunk", "unalienated cyberpunk" or some like combination of words. (Incidentally, I'd also like to disagree with Person about GitS: SAC: "Jungle Cruise" isn't a bad episode, and the show stacks up better against Neuromancer than he's willing to admit. I will, however, readily admit that my perception may be colored by the fact that I first saw the show with good friends on a perfect summer night shortly before I left Boston to live in France, so even the opening theme song brings back a sensation of a future awaiting exploration.)

The idea of considering the whole genre "postcyberpunk" resonates with my theory that the best way to view SF's evolution is not as a series of revolutions but instead a gradual deepening of the meme pool, interspersed with saltations of suddenly spiking diversity (much the way catastrophism and uniformitarianism are synthesized in modern paleontology). As a concrete example, I'd point to David Weber's Honorverse series, which takes genetic engineering and nanotechnology and plays them out in a space-opera setting. (The most recent novel, At All Costs, also shows the flipside of Aldous Huxley's fetus-in-a-jar technology.) Joss Whedon's show Firefly also springs to mind: when we first saw the episode "The Message" (one of the lesser installments, IMHO), just after they reveal that the guest star is a black-market organ carrier using his own body as a container, my friend burst out, "Jesus! This is more cyberpunk than Johnny Mnemonic." The "Blue Sun" megacorporation also plays into this, I suppose, if you're counting cyberpunk themes brought into the "Space Western" context.

Whew! That was a long diversion. I've been in serious work-avoidance mode for the past few days (a little sick as well, Sedna must be in retrograde), so this sort of procrastination comes naturally.

Blake Stacey said...

Oh dear, that was a very long "incidentally". I apologize.