About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are The Tyranny of Opinion: Conformity and the Future of Liberalism (2019); At the Dawn of a Great Transition: The Question of Radical Enhancement (2021); and How We Became Post-Liberal: The Rise and Fall of Toleration (2024).

Saturday, September 30, 2006

On the Frankenstein argument

For quite a few months now, I've been one of the guardians of Wikipedia's article on transhumanism, a job that I'm quite proud of - I think we've produced a worthwhile resource. To be honest, I might have done it differently if I'd been working by myself, but then again it would never have happened without the energy of the other two people who were mainly involved (energy that often seems to exceed my own). So, everything I'm about to say is in the context of my considering this to be a fine piece of work, well worthy of its Featured Article status at Wikipedia and useful, at least as a springboard, to anyone who would like to study transhumanism seriously.

Much of the article is devoted to reporting criticisms of transhumanism ... and defences against those criticisms. This is one thing that I'd change somewhat, if I could, partly because it puts a disproportionate weight on controversies, but partly for another reason. The article ends up recounting arguments for and against all sorts of things that don't necessarily go to the heart of transhumanism at all. It presents arguments against transhumanism that actual transhumanists have not necessarily recognised as such, and it reports responses to those arguments as if responding were necessary in order to defend transhumanism itself.

Transhumanists advocate the use of technology to increase human cognitive and physical capacities and the human life span - though they have widely varying views about what moral constraints apply to the means that may be used. And that's about all they need have in common.

By this simple definition, we should all be transhumanists. I don't see why we need to adopt some more restrictive definition, and I don't see why transhumanists as such should be committed to any other particular project, such as creating fully-conscious artificial intelligence, attempting to upload our personalities onto advanced computational devices, uplifting non-human animals, producing fully-formed human-animal hybrids with human-level intelligence ... and so on. (In my own case, I have substantial reservations about all the above, but I have been prepared elsewhere to "stand up and be counted" as identifying with the transhumanist movement, and I see no reason to recant at this stage of events.) These particular projects are advocated by particular transhumanists who are prominent in the relevant debates, but they are not what I see as the essence of transhumanism. Accordingly, the Wikipedia article seems a bit misleading, and this problem pervades much of its text.

Whatever reservations I have about any of these projects - uploading, uplifting, or whatever - their opponents often argue in ways that seem bizarre. One part of the Wikipedia article with which I've had relatively little involvement, partly because I'm rather confused by it, expounds something that it calls "The Frankenstein argument". The argument goes like this (some references, etc., removed for clarity; some minor reformatting for the same reason):

[...] bioconservative activist Jeremy Rifkin and biologist Stuart Newman argue against the genetic engineering of human beings because they fear the blurring of the boundary between human and artifact. Philosopher Keekok Lee sees such developments as part of an accelerating trend in modernization in which technology has been used to transform the "natural" into the "artifactual". In the extreme, this could lead to the manufacturing and enslavement of "monsters" such as human clones, human-animal chimeras, or even replicants, but even lesser dislocations of humans and nonhumans from social and ecological systems are seen as problematic. The novel The Island of Dr. Moreau (1896) and the film Blade Runner (1982) depict elements of such scenarios, but Mary Shelley's 1818 novel Frankenstein is most often alluded to by critics who suggest that biotechnologies (which currently include cloning, chimerism and genetic engineering) could create objectified and socially-unmoored people and subhumans. Such critics propose that strict measures be implemented to prevent these potentially dehumanizing possibilities from ever happening, usually in the form of an international ban on human genetic engineering.

There seem to be many things going on here - which is fair enough, in a sense, since the article is trying to cover a range of criticisms by different people. As far as I can work out, there is, first of all, supposed to be something intrinsically wrong about "blurring the boundaries" between human and artifact. I cannot imagine what this intrinsic wrongness could be. IVF babies, for example, may be thought of as "artifactual" in the sense that they are produced through technological intervention, rather than in the time-honoured manner we inherited from our mammalian ancestors, but no rational person argues that there is anything wrong with conceiving children by IVF (or that their technologically-mediated origin would justify mistreating those kids once they're born).

We are appropriate objects of each other's concern and respect because of our actual intrinsic properties, such as our vulnerability to suffering and our capacity for reason - and perhaps, I'd argue, because of our psychological propensity to bond with each other and form solidaristic communities - not because most of us happen to have been conceived without technological intervention.

Similarly, I am not an "artifact", in any morally relevant sense, because my appendix was removed when I was a child ... though I guess I would not be alive now - I'd have died a few decades ago - but for that technological intervention. Many of us reading this post still exist, right now, partly because of specific acts of technological artifice that prolonged our lives. This element of artifactuality (in a non-moral sense) says nothing about how we deserve to be treated or how we are inclined to treat each other.

There is also a fear of "monsters" reported in the Wikipedia article, or perhaps it is just a fear that we will create persons who will be seen as monstrous and consequently enslaved. At the end of the paragraph, there's also something about "dehumanizing" possibilities, but it is difficult to nail down what dehumanisation, exactly, has to do with any of this. Who is being dehumanised, and in what sense?

We dehumanise people, in a morally reprehensible sense, when we treat them in a way that is not responsive to their basic needs as human persons. We might enslave them, or "merely" pressure them into employment under sweatshop conditions; we might starve, abuse, and torture them; we might treat them as if they are irrational and cannot make their own life choices; we might deny their freedom to express themselves through speech and art; we might try to destroy their sexuality with puritan indoctrination at an early age or by such outrages as female genital mutilation; we might teach them, when they are still too young to challenge it, that curiosity and scepticism are bad - and so on.

It might be that non-human "monsters" with humanlike needs and capacities would, in an analogous sense, be dehumanised if we ended up enslaving them. However, there seems to be a suggestion in the Wikipedia passage (or the arguments it reports) that someone is somehow being dehumanised just because a technology such as somatic cell nuclear transfer might be used for reproduction, irrespective of whether anyone is subsequently enslaved. But that would apply to reproductive cloning no more than it applies to IVF or to the numerous technological interventions we make to prolong (as opposed to initiate) human life. Someone who comes into the world through conception by somatic cell nuclear transfer would not thereby be dehumanised at all.

I hardly know how to begin criticising arguments that invoke Frankensteinian fears, because they seem to combine appeals to irrational yuck-factor responses with more rational concerns that certain categories of future persons might be mistreated. Perhaps some other word or phrase should be used in the previous sentence, rather than "rational" ("clearly articulable"?), because the actual kinds of mistreatment imagined often seem far-fetched.

If the concern is that we will create humanlike slaves, surely the answer is that this imagines a program of creating persons whose rights (to use moral shorthand that I'm not overly fond of) we would then turn around and violate grossly. So of course, that is not a path we should take. I'm not aware of any self-identifying transhumanist who has suggested such a thing, and in any event it's not some idea essential to transhumanism - so how, exactly, is opposition to it a critique of transhumanist thought?

Admittedly, some other people have made proposals from time to time that arguably involve a form of slavery, or something with a resemblance to it. Such possibilities are often presented in science fiction, though usually not with approval. The superchimps described by Arthur C. Clarke in Rendezvous with Rama seem like a relatively benign version of the idea - one on which the author does actually appear to bestow approval. However, if this is the sort of thing that is objected to by the Frankenstein argument, the argument is in trouble. Clarke's superchimps are not human. They are simply very highly intelligent animals that are trained to do some jobs in space. They are actually much better treated than most domestic animals, human slaves, and factory employees have been in the past.

Perhaps there is still something morally wrong in Clarke's scenario - after all, even ordinary chimpanzees seem to have a conception of themselves, and are perhaps inappropriate subjects for training to carry out our tasks - but it is not intuitively shocking. If we want to rule out this kind of program for using ordinary or genetically modified apes as work animals, let's do so by supporting such things as the Great Ape Project.

Actually, I don't believe that we should take the step of creating unique, non-human persons (pig-men, cheetah girls, super-superchimps, or whatever) - at the risk of condemning them to lives of alienation and loneliness. That, however, is a completely different scenario from simply bringing a human child into the world by somatic cell nuclear transfer, or bringing into the world a child who has been given a great gift by having her individual genome manipulated - say, for unusual resistance to ageing or for a propensity, in standard human environments, to develop unusually high intelligence. Nothing rational in the Frankenstein argument addresses this sort of scenario, even though it seems that the argument is meant to have wide application and one of its conclusions is supposed to be that we should ban human genetic engineering.

Bioconservatives continue to generate intellectually confused arguments that go nowhere towards establishing a rational critique of the more moderate proposals favoured by transhumanists (and by a large number of secular bioethicists who don't necessarily identify with transhumanism). The bioconservative arguments are typically so bad that I often feel I have to provide my own critique, just so that transhumanists' proposals get some decent rational testing.

Unfortunately, the debate about emerging technologies that could alter human capacities has long been hijacked by people who appear to be ... let's be blunt ... simply enemies of liberty and reason. A lot of work still needs to be done just to disentangle the more rational fears of what might happen with emerging technologies (hint: I have nothing against Peter Singer's totally sensible expressions of concern) from all the fundamentally irrational ones.

2 comments:

Anonymous said...

One argument about the creation of artificial intelligence is that learning how to create artificial intelligence is essentially impossible to untangle from understanding our own intelligence (and intelligence generally), and that goal is presumably uncontroversially good.

It's not a strong argument for the widespread creation of AI, obviously, but it is a relatively strong argument for at least continuing current research until the ethical issues become more clearly defined.

Russell Blackford said...

It's a justification for a lot of research that's going on in the AI field. The real point at which we need to be morally concerned is when we contemplate bringing into existence something that has some kind of rational consciousness, with its own plans for the future, etc.

I used to think that creating such a thing was a cool idea, but have come around to thinking that it's a step we should be very hesitant about taking.

As I see it, the ethical issue is simply whether we are likely to create something whose life will be miserable or (much less likely, I think) something that will make our lives miserable, as in all those science fiction movies.

Those utilitarian calculations aside, I see no burning moral issues at all. I'm not worried about cosmic hubris, for example. I still think that, all other things being equal, it'd be cool to bring into existence a new form of intelligent consciousness.