Imagine that there are certain natural boundaries to human capacities, beyond which any increase is "enhancement", rather than "therapy". Does that mean we have a moral obligation to stay within those boundaries? I don't see why. Such reasoning seems like a clear case of the fallacy of inferring what "ought" to be from what merely "is".
For the sake of argument, leave to one side the problem that defining a therapy/enhancement boundary may often be an impractical task, and that it may defy coherent specification in some contexts.
Waiving that point, it appears that many human beings do, indeed, wish to go beyond the natural boundaries, or what look like them. If it were possible, many of us would like to obtain unprecedented levels of health, fitness, cognitive abilities, or whatever. So, why not? The fallacy, famously denounced by David Hume, of deriving an "ought" from an "is" is not a fallacy when part of the "is" relates to a desire or a fear. Accordingly, we may have reason to push beyond the boundaries. It is quite rational and reasonable to construct practical syllogisms such as the following:
P1. Xerxes desires to live to be 300.
P2. If Xerxes uses certain technological means, he can fulfil the desire specified in P1.
C. Xerxes ought to use the technological means specified in P2.
OR
P1. Zenobia fears a loss of mental acuity as she ages.
P2. If Zenobia uses certain technological means, she can avoid the feared thing specified in P1.
C. Zenobia ought to use the technological means specified in P2.
These are perfectly valid items of practical reasoning. Of course, the imperatives they establish are merely hypothetical ones - they give Xerxes and Zenobia reasons to act in the way prescribed if they actually have the desires or fears concerned. However, it is apparent that the horizons of many people's desires and fears do extend beyond what look like natural boundaries between the realm of therapy and enhancement. Why, then, should a relevant moral boundary be located there, assuming we can work out where "there" actually is? At first blush, the relevant boundary is surely that between the domain of what is desired, or feared, and what is beyond desire and fear. For many people, for example, immortality may not be directly desired (I'm not sure I desire it, because I find it hard to comprehend living forever), so there may be no reason for them to pursue it for its own sake, but extended health and robustness (physical and cognitive) are desired by almost anyone, so there is a good reason to pursue those things. (Of course, things that we don't directly desire may follow from pursuing things that we do desire, but that does not seem to be a problem - if the pursuit of better health leads to a much longer life that I did not directly desire, then no harm seems to be done.)
If we have good reasons to use technological means to fulfil certain desires, such as extended health and robustness, how could there be a moral imperative to forgo those means? This is not meant as a rhetorical question; let's consider the possibilities.
For a start, it might be pointed out that we do not have reason to pursue just any old desire that we happen to have. After all, our desires might conflict with each other, or there may be reasons why we would want to disown some of them on giving them rational scrutiny. Likewise with our fears; perhaps on rational scrutiny there may be things that we fear that are not really against our interests and should actually be welcomed. Hence, I don't deny that some account is needed that distinguishes between the full range of desires and fears that we actually have and those that can be said to survive rational scrutiny, to be connected with interests, and so on. At the same time, I don't see how we can ever entirely step out of the totality of desires and fears that make up the starting point when we scrutinise our motivations. If I am going to judge some desire that I currently possess as irrational, it will eventually be on the basis that it inhibits me from pursuing things that are ultimately more important to me, or from taking steps to avoid things that I fear deeply.
To sum up at this point, I'd rather talk about things that we rationally fear and desire (or value, if this conveys the idea of desires that have been reflected upon) than simply about fears and desires, which may tend to suggest quite superficial levels of our motivation. However, as far as I can see, such an appeal to reason will not lead us to abandon fears of losing capacities, even at a "natural" time in our lives, or to abandon desires to increase them.
More generally, there may be many reasons to act in ways that are not narrowly selfish - e.g. by cultivating such virtues of character as kindness and loyalty, by eschewing attempts to get our way in social competition by violence and fraud, etc. Such reasons may be those of long-term self-interest, those of sympathy for other beings, or those of rational reciprocity (giving up the liberty to act in certain ways if most other people do likewise and we are all better off as a result). Off-hand, it is not obvious that the categories of good sources for morality are closed. But nor will just anything count as a good source of morality. It will always be something to do with our rationally-scrutinised fears, desires, values, interests, and so on.
Bioconservatives can sometimes make a rational case for caution, especially in the short term, about a particular technology that poses individual or collective dangers. But bioconservatives have provided only the flimsiest attempts at any general justification for imprisoning our capacities within the boundaries of what is "given" by nature (even though it is in our nature to push beyond). Prima facie at least, if we desire transformations of our capacities then we have reason to pursue them by whatever means promise to be effective. If this leads to new desires at a later time, we or our successors will, in turn, have reasons to pursue those (for example, immortality may be something directly desired by a being with a superior capacity to comprehend it).
There is nothing obviously wrong with this iterative and directional process of capacity transformation, and I cannot imagine how it can, or why it should, be halted indefinitely as bioconservatives often seem to want. Resistance to this process, when not aimed at specifiable, short-term dangers, is fundamentally irrational, and we have good reason to stand up and say so.