My friend James Hughes has a great narrative to tell about the expanding circle of moral recognition: the extension of rights to all persons, irrespective of sex, race, culture, and even species and physical substrate. According to this narrative, we will ultimately accord citizenship - and the legal rights that go with it - to non-human persons, and then to non-biological persons such as advanced, fully-conscious artificial intelligences. We will realise that it is Lockean personhood, rather than species membership, that confers full moral considerability.
I wish I could go along with this without qualification, because it's a wonderful story to tell, and much simpler than the complex, ambiguous, difficult reality that I see. But I do have a problem here.
I think James's account is roughly correct, and if we need a simple moral story to tell I'm going to tell this one, rather than spout nonsense about "human dignity" - the idea that there is some mysterious factor or set of factors that accords infinite moral worth to human beings, making us exceptional within the universe. As I've often said, there is no such X factor that amounts to human dignity. From my naturalistic point of view, "dignity talk" involves a bizarre kind of human exceptionalism, and I applaud the efforts of anyone who wants to combat it. If we do create fully conscious artificial intelligences, I think we'll need to find ways to integrate them into human societies and to grant them citizenship. A good start towards this way of thinking is to get behind Peter Singer's Great Ape Project, and argue for at least some special legal protections for chimpanzees and the rest of our near-brethren in the animal world. Let's get used to the idea that personhood is what counts.
And yet, and yet...
All that said, this is still the area where I always feel the need to part company somewhat with a number of my friends and intellectual allies. Despite all the above, I continue to question whether it is a good idea to create fully-conscious, self-aware, non-human beings of high intelligence. Even if the personhood theory of ethics is correct, that does not mean it is intuitively correct: until it has been internalised by most people, it is unlikely to be applied by them, in practice, from moment to moment and day to day. I'd like to see a lot of change in the way people think before we create beings whose interests depend on that new kind of thinking actually prevailing. Accordingly, despite the solidarity that I feel with transhumanists and technoprogressives in many current bio-political struggles, I worry about the creation of these "new humans" (or whatever shorthand we use). I don't think we should accept their inevitability, at least in the short or medium term, when we argue for therapeutic cloning, for the in-principle acceptability of safe reproductive cloning, and for a raft of technologies that will help us retain (and even enhance) our capacities, leading better and longer lives.
James, of course, is aware that there would be practical problems. In Citizen Cyborg, he suggests that we may come to the conclusion that it's best to hold off on uplifting any non-human animals to higher levels of intelligence if this will only make them miserable, while we might have prudential reasons for preventing machine minds from attaining self-awareness, or for making sure they feel solidarity with us. All this makes sense, but I am as worried about how we would treat a cute little baby Skynet as about how it would treat us. (As a sidenote, the original Skynet only initiated Judgment Day when human beings panicked and tried to shut it down - at least that's how the story is told in Terminator 2.)
Actually, I think the situation is even more complex. Although personhood theory is superior to the spooky doctrine of human dignity, it is still only an approximation to the truth. The truth of the matter is that there are no objective values in the disenchanted universe in which we find ourselves - there are merely things that we value. Human species-membership is not of objective value, but neither is personhood.
Out of our shared values, we have created moral systems that suit our interests, but these are institutions invented by human beings to serve human beings. Moral systems are our servants, not our masters; they have no deeper justification than that. They are responses to our values, fears, and sympathies, including the sympathetic responses that we have to non-human beings whose sufferings we recognise. There is no higher court of appeal to some objectively "correct" set of values, although there is also no reason why we cannot attempt to alter and shape our own values if we wish. But any decision to do so will always be based on our deeper or stronger values; stepping entirely outside our initial value set is not possible, nor even something that can be coherently imagined.
On this picture, we don't have a spooky "objective moral worth" that a fully-conscious AI would lack. On the other hand, if someone is simply more emotionally responsive to fellow human beings than to fully-conscious AIs, she is not making an intellectual mistake. It is a contingent empirical fact about us that, once we overcome our tribalism and fears of the Other, we are no more responsive to people of our own racial background than to anyone else (for example); that fact is probably rooted in the additional fact that we are genetically programmed to respond to human facial expressions, tones of voice, basic morphology, and typical movements ... and not to such things as skin colour. If we were beings who really cared about skin colour at the deepest level of our genetic programming, morality as we now understand it would be impossible for us. Fortunately - as seen from inside our moral institutions and our actual values - Homo sapiens is not like that. Racism is cultural, not biologically programmed; it is skin deep.
However, we'll have to think very long and hard before we screw around with the most basic kinds of emotional responsiveness built into our genetic programming, even if it proves to be possible. Perhaps we'll eventually decide to make some changes to bring all our values into a better alignment, but it's not happening any time soon.
Meanwhile, if we start creating beings that possess the attributes of personhood, but do not possess the down-to-earth empirical characteristics that we are actually programmed to respond to, it is going to be much more difficult for us to extend our sympathies to them than has been the case with ongoing efforts to overcome racism. Perhaps we - or at least some of us - have reached a point where we would most value the happiness and fulfilment of persons, whatever appearance they took, and perhaps we could treat the "new humans" with the same concern as we treat other Homo sapiens. We are, perhaps, ready to put a higher value on the happiness of all Lockean persons than on the happiness of human beings in particular. We can abandon human-racism.
I suspect that in practice, however, even the most enlightened of us would be much more conflicted if push came to shove, and that there would be plenty of resistance from the less enlightened.
In the end, I agree, more or less, with the great importance attaching to personhood, and hope that I'll be able to act with kindness towards any fully-conscious artificial intelligences that I happen to meet over the next few decades. But I also think that we have good, non-spooky reasons to be very cautious about actually creating such beings. While there is no rational argument available against, say, therapeutic cloning, there is plenty of room for rational discussion about what is, and what is not, wise when we begin to contemplate issues that lie far on the other side of current bio-political controversies.
If only we could reach the point where society could have that discussion.
1 comment:
Every once in a while, and with increasing frequency of late, a news piece crosses my screen and gives me what I can only call a weird vibe. Typically, the news item concerns some topic in bioethics which many people might find vaguely troubling, but which to me resembles the thin edge of a wedge. Today's example is an article in the New York Times entitled "Proof Is Scant on Psychiatric Drug Mix for Young" (23 November 2006, just discovered via Mind Hacks).
We might not yet be creating whole new modalities of human-machine interaction, but we are perturbing the means and variances of those Gaussian distributions which characterize our species. And we don't really understand how. If we don't find ways to discuss the issues just poking into our awareness now, we're going to be in a real bind when the cybernetics of 2016 make the headlines.
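To make the statistical point above concrete, here is a minimal sketch of my own (the IQ-like scale, the threshold, and the 3-point shift are assumed numbers for illustration, not anything reported in the article): even a modest perturbation of a Gaussian trait's mean noticeably changes how many people fall beyond a fixed cut-off, because the tails of a normal distribution are thin.

```python
# Hypothetical illustration: how a small shift in the mean of a normally
# distributed trait changes the prevalence of "extreme" scores.
from math import erf, sqrt

def tail_fraction(threshold: float, mean: float, sd: float) -> float:
    """Fraction of a Normal(mean, sd) population scoring above `threshold`."""
    z = (threshold - mean) / sd
    # Survival function of the standard normal, via the error function.
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Illustrative trait scaled like IQ: mean 100, sd 15 (assumed numbers only).
print(f"above 130, mean 100: {tail_fraction(130, 100, 15):.2%}")  # ~2.28%
print(f"above 130, mean 103: {tail_fraction(130, 103, 15):.2%}")  # ~3.59%
```

On these assumed numbers, nudging the population mean by just three points pushes over half again as many people past the threshold - which is why population-wide perturbations we don't understand are not a trivial matter.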