I've signed up with my School (at least I think it's all signed up now) to teach a short introductory course on transhumanism in October - aimed at honours-level philosophy students, with auditing open to postgrads. The descriptive material that I prepared reads as follows:
"The transhumanist agenda - ambitions and critiques."
Convenor: Russell Blackford
Transhumanism is an intellectual and cultural movement that advocates the use of technology for such purposes as enhancement of human physical and cognitive capacities, alteration of moods or psychological predispositions, and radical extension of the human life span (possibly including a "cure" for the ageing process). Typically, the aim is to negotiate a transition from human-level capacities to capacities so much greater as to merit the label "posthuman" for those who possess them. Some transhumanist thinkers also advocate various other technologies that do not exactly meet this description, e.g. artificial intelligence of a very strong kind, molecular-level engineering and manufacturing, and technological methods for "uplifting" the cognitive capacities of non-human mammals to something approximating the human level. Finally, transhumanists analyse the possible risks, as well as potential benefits, of emerging or anticipated technologies, and formulate proposals that are intended to lessen risks without losing the claimed benefits.
An agenda such as this raises many questions for philosophical consideration. Some questions relate to the transhumanist agenda's practicality and coherence. For example, can a coherent definition be given of "enhanced", as opposed to merely "altered", capacities? If we were transformed into beings with vastly enhanced (or radically altered) capacities, would this be compatible with the preservation of our existing identities and/or with our survival of the transformation? Other questions relate more to how we should react, individually and collectively, to transhumanist proposals. For example, are the transformations advocated by transhumanists desirable for us as individual people? Are they socially manageable? Are they morally compulsory, permissible, or forbidden - assuming there is some real prospect that they can be achieved? Can we be discriminating in accepting some parts of the transhumanist agenda, while rejecting others? On what grounds? What methodologies can be employed to assess such "big picture", and possibly high-risk, proposals? Can resistance to them sometimes be explained by invoking irrational features of human psychology?
All these questions, and others like them, straddle issues of interest to metaphysics, ethics, and political philosophy, at the very least.
The readings below are all available online. A small number of additional readings will be selected and made available prior to commencement of the course. For students wishing to write on topics raised in this course, more detailed readings can be provided by the convenor. Those students are also advised to consult James Hughes, Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future (Westview Press, 2004) (on reserve in the Matheson Library), which offers one comprehensive version of the transhumanist agenda.
Bailey, Ronald. 2004. "Transhumanism: the most dangerous idea?".
Bostrom, Nick. 2002. "Existential risks: analyzing human extinction scenarios".
Bostrom, Nick. 2005. "A history of transhumanist thought".
Fukuyama, Francis. 2004. "The world's most dangerous ideas: transhumanism".
I'll be fascinated to see what interest this attracts. Meanwhile, I need to choose some further readings for the participants and to work out how best to conduct the course over three time-slots of an hour or two each.
Clearly, there's a great deal raised by transhumanist thought that young analytic philosophers could sink their methodological teeth into - whether they find the overall idea of "going posthuman" attractive or repellent (or just intriguing). The obvious issues range from the coherence of human beings wishing to be posthuman to the problems of distributive justice if the technology enabling them to do so becomes available in an unequal world. We'll see what happens. Ideally, it might encourage someone to do an interesting research paper.