I disagree with so many of the claims I hear from my transhumanist friends that I sometimes wonder why, at the end of the day, I stand with them rather than with the bioconservatives - but there's no doubt that I do.
Alas, being in the southern hemisphere prevents me from getting to many of the international conferences where the current debates are being played out, such as the forthcoming Stanford conference on human enhancement.
But at Conjure, I chaired a panel whose other members were Bruce Sterling, Andrew Macrae, and Keith Stevenson. We'd been given a slightly confusing topic that encouraged us to talk about the prospects of uploading human personalities onto advanced computer hardware. I confess that I am an uploading sceptic, not because I deny that some materialist, and possibly computationalist, account of consciousness is ultimately true, but because I see huge problems relating to the continuity of personal identity. All four panel members are sensible people, and we all revealed ourselves as uploading sceptics, so there was furious agreement about that.
That could have been the end of it.
But the topic raised wider issues, and I did get a little concerned at one point when the mood in the room - among the panel members and the audience alike - took a strong turn towards general technofear: an emphasis on the scariness of future technologies that may directly change our physical and cognitive abilities, rather than merely changing our environment. I tried to remind everyone that nature is not our friend (a position entirely compatible with emphasising the beauty, and even sublimity, of wilderness areas; nature and wilderness are not the same thing).
If I could go to the Stanford conference, here's some of what I'd like to say.
To adapt some terminology from David Gems, the horizons of human desire - rather than what is pre-technologically "natural" or "given" - should determine what uses we make of technology. This is why the therapy/enhancement distinction in bioethics, though not entirely bogus from a biological viewpoint, is of only limited use in formulating public policy. In many cases, it may be possible to draw a boundary between therapy and enhancement, but in many other cases it may not be. More fundamentally, even where we can draw the therapy/enhancement boundary on some defensible scientific basis, much that gets classified as therapy may still fall well within the horizon of those human desires that it makes sense to try to satisfy.
Technologies coming down the pipeline from the future will sometimes be dangerous, either because they don't perform as expected or, worse, because they actually do perform as expected.
Some scrutiny and scepticism are good things, and I always reserve the right to engage critically with the visions of my transhumanist friends. Finding the right technologies to satisfy rational human desires, such as the desire to live longer, healthier lives, may not be easy, and using them in the best ways may be even harder. At the same time, we are technological animals. We invent technologies to achieve our desires, and there is no deep reason why we should ever stop doing so, even if we transform ourselves, and create new desires, in the process. It's in our nature (i.e., our evolved psychological characteristics as a species) to alter ourselves from what is, in another sense, natural (i.e., pre-technologically given).
Let's be alert to all the dangers along the way, and work out rational policies to handle them. But we have desires to fulfil - desires that it is rational for beings like us to have. If new technologies can fulfil some of them, I won't be deterred merely by sentimentality about the given, or by other people's shudders at the unknown. There's a whole future of possibility to explore. You can stay home if you want - but you're welcome to come with me. I'm going to tread carefully, but I'm not going to be afraid. The future will be strange, but I damn sure want to go there.