Friday, May 26, 2006
More thoughts on killing and life extension
I still owe an explanation for my rejection, in a previous discussion, of the strong claim that it is always wrong to shorten the life of a human person.
This claim might also be expressed by saying that human persons have an absolute right to life - meaning, I take it, that each of us is under a comprehensive moral obligation not to take the life of any human person. On this formulation, any such action would be the wrongful infringement of someone's right to continue living (and no such action would be, for example, a morally permissible infringement of a right, analogous to a necessary act of theft to save somebody from great and imminent danger).
Note, at this point, that I am confining my discussion to a relatively plausible class of cases - those involving human persons, i.e. beings who possess such properties as rationality, self-consciousness, and a sense of themselves as existing in time, with a past and a future. If the class of cases under discussion were defined by mere species membership, rather than personhood, any claim of absolute rights to life, or inflexible duties not to kill, would be implausible from the beginning.
However, even if we confine ourselves to human persons and develop the discussion in terms of rights, the claim that there is an absolute right to life cannot be sustained. There are too many easily imaginable situations, such as in some of the notorious "Trolley cases", beloved of philosophers, where most of us would judge that killing a human person is the morally right action, perhaps to save others, or to achieve some other great utilitarian benefit.
At the same time, utilitarians and other consequentialists do not accept the existence of rights at all, except as conventions or rules of thumb; nor do they normally accept inflexible rules. For them, everything depends on what course of conduct will have the best consequences, seen by utilitarians as the maximisation of happiness or preference satisfaction. Even rule-utilitarians will adopt only those rules that have a prospect of producing the best consequences within a specified social, economic, etc., context.
To the extent that I put particular weight on a moral rule against killing, it is in response to the widespread human fear of death, particularly sudden, violent death at the hands of other human beings. This fear is deep in our nature. When we sense this fear in other people, it tugs at our sympathies, while the reality of others' capacity for violence in the service of social or economic gain necessitates laws against the kinds of acts that we categorise as murder. It is unsurprising that murder is forbidden - or at least drastically restrained and regulated - in all cultures.
But this sort of utilitarian, Humean, or Hobbesian reasoning gets us a rule that is less than absolute, and one that is always subject to specification and qualification according to the relevant historical circumstances. Indeed, all of our recognised moral rules are useful only in so far as they meet important human interests and needs in the sorts of circumstances that shaped the rules in the first place. Until now, those circumstances have universally included such facts as that we are mortal, that most people who live into their 80s become very frail, however robust they formerly were, that it is almost unheard of for people to live to 120, and so on.
If we are now considering technological changes that could greatly alter the human condition, and with it the context within which moral rules evolve and moral intuitions are shaped, it is not at all obvious that the existing moral rules can be retained in their entirety. On the contrary, it is easy to imagine science fictional societies in which killing people when they reach the age of 300, or even at the age of 100, is justifiable on utilitarian grounds - though whether such scenarios actually seem at all plausible might depend on fiercely contested perceptions of human nature, such as conflicting beliefs about the inevitability of ennui and despair if one lives long enough.
Even before we find ourselves in such a futuristic setting, all bets may be off. If we are now in a world where anti-aging technologies are a real prospect, that is already a different world from the one in which our intuitions about the wrongness of killing were formed. If historical assumptions about human decline and mortality can no longer be made, our moral norms may need to change. We cannot take a radical view of what changes are possible while taking a conservative view that our current norms remain appropriate to the altered situation. This makes moral and policy debates about emerging technologies very tricky.
Assume for the sake of argument that failing to provide the general population with an immortality drug, if one became available, could be considered equivalent in some sense to killing the individuals who are so deprived. Even if we get to that point, it is not clear that this sort of killing would be morally wrong. It depends on what would be the actual effect of having a population of people who use an immortality drug. If the outcome would be some kind of disastrous conflict for resources, or some kind of widespread, catastrophic unhappiness, withholding the drug might be justified. Bioconservatives such as Francis Fukuyama who evidently expect dreadful outcomes are being rational, in their fashion, in claiming that society and the state are morally entitled to tell us how long we may live.
Of course, it is a shocking kind of rationality. I immediately want to add that the last thing I want to do is join Fukuyama in handing such a power to the state - any state, no matter how democratically accountable. To be clear, I am suggesting that there is no absolute right to an immortality drug, even if we had one, or to some more plausible "cure for aging". But at the same time, I want to stress what kind of argument has to be run if there is to be an intellectually credible case against technologies that would extend human life.
It appears to me that the burden lies heavily on the opponents of life extension to develop such a case, because nothing I've said above (or anywhere else) denies that longer, healthier lives, and particularly the radical extension of our times of physical robustness and mental clarity, are all very attractive. In fact, that is just the point. We should be supporting technological advances that will give us these things - basing our case squarely on the fact that they are things which really are attractive to beings like us, that it is rational for us to want them, and conversely that it is rational to struggle against the process of ontological diminution (David Gems's useful term) that comes with aging. Indeed, supporting research that could deliver these benefits would be a rational and appropriate use of public funds.
Opponents of life extension have little prospect of success unless they can make out a case that the probable result is something horrible. I am not afraid to concede that such a case could be made in principle, but let's see if it can be sustained in practice. Transhumanists and life extension advocates won't have everything their own way in a debate over this issue, but that's because it is far easier (and far easier to be taken seriously) to present as a pessimist about the future, referring to valued things that could change or be superseded, than to be an optimist, attempting to picture a better future in convincing detail.
In the end, however, I expect that the bioconservatives' case will not be made out intellectually and will have only short-term success in gaining converts. Perhaps ironically, their problem is that their aims run against that thing they hold sacred - human nature - for it is in our nature to change things, including ourselves if we can, to bring them closer to the heart's desire.