About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019); AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021); and HOW WE BECAME POST-LIBERAL: THE RISE AND FALL OF TOLERATION (2024).

Tuesday, October 03, 2006

Technological threats to affective communication

One of the main conclusions I've been coming to in my research on the moral issues surrounding emerging technologies is that there is a danger they will be used in ways that undermine affective communication between human beings - something on which our ability to bond into societies and show moment-by-moment sympathy for each other depends. Anthropological and neurological studies have increasingly confirmed that human beings have a repertoire of communication by facial expression, voice tone, and body language that is largely cross-cultural, and that surely evolved as we evolved as social animals.

The importance of this affective repertoire can be seen in the frequent complaints in internet forums that "I misunderstood because I couldn't hear your tone of voice or see the expression on your face." The internet has evolved emoticons as a partial solution to the problem, but flame wars still break out over observations that would lead to nothing like such violent verbal responses if those involved were discussing the same matters face to face, or even on the telephone. I almost never encounter truly angry exchanges in real life - though I may be a bit sheltered, of course - but I see them on the internet all the time. Partly, it seems to be that people genuinely misunderstand where others are coming from with the restricted affective cues available. Partly, however, it seems that people are more prepared to lash out hurtfully in circumstances where they are not held in check by the angry or shocked looks and the raised voices they would encounter if they acted in the same way in real life.

This is one reason to be slightly wary of the internet. It's not a reason to ban the internet, which produces all sorts of extraordinary utilitarian benefits. Indeed, even the internet's constraint on affective communication may have advantages - it may free up shy people to say things that they would be too afraid to say in real life. Moreover, people who have difficulty "modelling" others' feelings via facial expressions, and so on, may find the more laborious process of using text actually more effective. One way or another, the internet is not only a setting in which people can quickly make enemies; they can also quickly make friends and even virtual-reality lovers, and some of those on-line relationships turn out to be deep and genuine. We all know folks who have ended up getting married (in real life, that is) after meeting in an internet forum of some kind. In practice, many internet users are careful to spell out their feelings more completely than they normally would, to use emoticons creatively, and so on. Some of the communication on the internet is doubtless of higher quality than real-life communication, which is itself plagued by plenty of misunderstandings of how people are feeling.

However, there's still reason to be careful and to be aware of the internet's limitations. We all know that internet forums are rife with misunderstandings, even apart from the opportunity they give people to fake entire personalities. As to the latter ... hey, girls (or gay guys), you do know that that hunky young bloke from Italy you had cybersex with last night just might be a seventy-year-old truck driver from Omaha. Hey, guys (or gay girls), that pretty little Japanese artist you flirted with in Second Life yesterday may be the very same truck driver! At any rate, what you construct of another person and their feelings, from behind your computer terminal, may have little to do with the reality. Whatever is gained - and I repeat that it's a great deal - there's also a downside.

This suggests that there could be both a market for people who like things much as they are and actually find advantages in what I've been discussing (such as that sexually adventurous Omaha truck driver) and a market for those who would want the internet and the evolving metaverse to take on a greater resemblance to the semi-transparency of real life, where you more or less know who you're interacting with and have a whole range of largely unconscious ways of "reading" their emotions and conveying your own.

As virtual reality and real life merge into each other, and relationships of trust become more important in the metaverse, a pressure may build for the development of products that give a greater degree of transparency and mutual emotional intelligibility. Perhaps we'll see zones where a high degree of trust can be maintained routinely, along with other zones where people will still be able to enjoy the particular kinds of advantages that are gained from doing without real-life inhibitions. There may be shades in between: think of a super webcam device that projects an idealised version of your face but faithfully registers its changing expressions. You may appear more like the "you" of twenty years ago, or you may simply have those troublesome zits edited out. At the same time, if you are afraid, angry, bored, sexually excited, or whatever, the system will faithfully convey the facial expressions that give this away.

To sum up at this point, on balance the internet is a very good thing. For me, personally, it's a blessing in innumerable ways. I'm not too worried about our ability to come to terms with the affective communication problem in respect of the internet and the evolving metaverse, or whatever other communication technologies await us. It's sufficient, for now, simply to raise the issue. I don't necessarily believe that any products will have to be banned; however, I do foresee regulation to ensure that various products function as advertised and that we all know exactly what we are dealing with when roaming in the highly-immersive metaverse of tomorrow.

However, technology's possible threat to affective communication arises in other contexts. To date, our bioconservative friends have largely missed the point in their critiques of advanced biotechnology. They are correct to observe that there are various concerns about social stability and distributive justice if certain technologies are differentially available, and this may suggest a need for caution. I'm not sure that these concerns will turn out to be huge issues in practice, but they may, and I do think that policies should be put in place that take account of the potential problem. I have a fairly open mind about how this should be handled, though one obvious policy step is to try harder to reduce existing economic inequalities within and between societies.

The more troubling possibility is that we will create beings who, for one reason or another, do not have the normal means of affective communication with "ordinary" human beings. That could easily lead to their mistreatment as they fail, moment by moment, to convey to us how they feel inside. Think of Frankenstein's monster: the problem here was not Victor Frankenstein's supposed hubris in creating life, or human-like life. Rather, he managed to create a being so repulsive to the humans it encountered that it found communicating its initially good intentions impossible. Worse still might be a hostile being that is able to fake emotions it does not really have, using our own modes of affective interpretation against us.

This concern leads me to be quite worried about some proposals from my transhumanist friends, such as that we should uplift non-human animals to human levels of intelligence. All very well, but what are the likely consequences? These beings could be very hampered in their attempts to communicate with us and in gaining our moment-by-moment sympathy. Conversely, a superintelligent AI might be all too capable of manipulating us if we gave it the capacities for facial expression and voice tone.

Such issues lie far on the other side of current debates about therapeutic cloning, reproductive cloning, embryonic sex selection, and extension of human life. I have nothing much against any of those actual or possible technologies. None of them seem to me to strike at the social contract - whether via an erosion of affective communication or in some other way. Indeed, their potential benefits seem to be greater than any burdens they will impose. The real issue about the ethical use of emerging technologies is how to ensure that they do not strike at social bonds, and it should be widely understood that anything which tends to undermine our capacity for affective communication with each other, or with any other intelligent beings in our midst, has a serious downside (even if it also has benefits).

It's past time to move on and accept that there is not much wrong with the technologies that are currently demonised - although my above reflections suggest that the internet actually has a more fundamental problem than is often thought, despite its great counterbalancing good. Technologies that could lead to new medical treatments or extend the normal human lifespan would be unequivocally beneficial. We need to identify what moral problems could really arise with technologies that might lie "beyond cloning". Possible threats to affective communication should come high on our list of genuine issues to be addressed.
