About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019); AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021); and HOW WE BECAME POST-LIBERAL: THE RISE AND FALL OF TOLERATION (2024).

Sunday, June 29, 2008

Dupuy's "anti-humanism" paper

The third article (counting Hava Tirosh-Samuelson's introduction/editorial) in the June 2008 special anti-transhumanist issue of The Global Spiral is "Cybernetics Is An Antihumanism: Advanced Technologies and the Rebellion Against the Human Condition", by Jean-Pierre Dupuy, director of the Centre de Recherche en Épistémologie Appliquée at the École Polytechnique, Paris. Of the articles I have read so far, this is by far the most blatantly neo-Luddite in its approach and the most intellectually confused.

In yesterday's discussion of Don Ihde's article in the same issue of The Global Spiral, I acknowledged that Ihde makes four familiar but legitimate points:

1. In the real world, technological advances involve compromises and trade-offs.

2. Technological advances take place in unexpected ways and find unexpected uses.

3. Implanted technologies have disadvantages as well as advantages: e.g., prostheses and artificial body parts are often experienced as imperfect and obtrusive, and they wear out.

4. Predictions about future technologies and how they will be incorporated into social practice are unreliable.

Ihde writes with a degree of rhetorical excess and unashamed hostility to his imagined opponents that weakens his article, but all four of these points are worth keeping in mind by transhumanists and others who are interested in the advancement of technology and in the social applications of emerging technologies. Thus, Ihde's article is a useful reminder of some basics that I'm sure many transhumanists really do lose sight of from time to time. That, of course, is hardly an indictment of transhumanism or the impulses that lie behind it (though it might be an indictment of some specific transhumanist positions that have - to be blunt - lost contact with reality). Exactly the same four points could have been made by a sensible transhumanist thinker who sought to give her colleagues (or herself) a bit of a reality check.

By contrast, I find it difficult to discover anything of merit in Dupuy's "anti-humanism" paper. Okay, there's some interesting historical discussion of the views of Heidegger, Norbert Wiener, and others, but this sheds no light on whether or not we should approve of the ambitions of (some or all) transhumanists. That question cannot turn on the kinds of questions that arise from a discussion of Heidegger's response to certain historical kinds of humanism.

Once we get beyond that, there is not one point, as far as I can see, that is actually useful for people engaged in current debates about appropriate moral and regulatory responses to emerging technologies such as biotechnology, nanotechnology, and artificial intelligence. Instead, we are treated to rhetorical flourishes that depend more on (perhaps unintended) punning and trickery than on rigorous intellectual examination of the issues. In short, if Ihde's paper is weak on originality,* while at least making some reasonable points, Dupuy's is totally useless to anyone who wants to get some understanding of transhumanism and what might be right or wrong with it.

Alas, it's difficult to know where to begin in demonstrating this, since the paper is so thoroughly permeated by weak reasoning and unsupported claims. It would be a Herculean task to attempt to refute it all line by line - not a task that could be performed in the limited space available for a blog post that anyone is actually likely to read (or in the limited time that I am prepared to give to writing it). Accordingly, I hereby urge readers to examine Dupuy's paper for themselves; I'll confine myself to a small number of specific points where I think it goes badly wrong. Even this is made difficult by Dupuy's cryptic, allusive style. It's not difficult to understand that he is hostile towards emerging technologies, but it is certainly difficult to pin down exactly why he is so hostile.

But let's start with an example. At one point, he offers a brief and under-explicated account of the views of the German philosopher Peter Sloterdijk, which I do not claim to understand (Dupuy doesn't help me, because he alludes to these views, quotes Sloterdijk briefly, comments dismissively on what he quotes, but never actually explains what he takes to be Sloterdijk's position).

In response to Sloterdijk, Dupuy makes the following comment (among others):

"For man to be able, as subject, to exercise a power of this sort over himself, it is first necessary that he be reduced to the rank of an object, able to be reshaped to suit any purpose. No raising up can occur without a concomitant lowering, and vice versa."

He does not explain this any further; nor does he support it with evidence. While the sentences I've quoted have a certain rhetorical ring, I have no reason to think that they say something that's actually true. Taking them as literally as I can, Dupuy seems to be saying that if we are to shape ourselves for our own purposes, we must thereby reduce ourselves from being subjects to being mere objects (note the expression "reduced to the rank of an object" - my emphasis). But why should that be so? He doesn't actually tell us why.

Imagine that I attempt to alter my physical capacities by engaging in a rigorous program of exercise accompanied by a low-fat, high-protein diet. At the same time, I might attempt to reshape my personality (to a degree) by reading books that give me advice on how to overcome my shyness in company - and by acting on the advice that is given in these books. In carrying out this dual program of self-improvement, I am seeing myself as something that can be acted on and altered. If that is the definition of an object, then - to Hell with it, yes - I am seeing myself as an object and treating myself as one. However, the word "object" can have other definitions. Bearing that in mind, let's say that I am seeing myself, and treating myself, as an Object-1. I am also treating myself as an Object-1 if I drink coffee in the morning to try to rouse myself from lethargy (I don't wake up easily after a night of deep sleep) or if I drink alcoholic liquids in the evening, in part to break down my inhibitions and be more relaxed over dinner with friends. An Object-1 is simply something that can be acted upon and changed in one respect or another.

On another conception, to be an object is to lack various properties that might be thought of as constituting subjectivity. I might think of something or someone as a "mere object" if I imagine that they lack such characteristics as sentience, the capacity for reason and understanding and thoughts about the future, and the ability to reflect on their own values. Or perhaps I can be said to treat someone as a mere object if, despite knowing that they possess these or similar characteristics, I treat them as if they do not possess the kinds of moral considerability that such characteristics seem to involve. Let's say that something which lacks these kinds of morally considerable properties is an Object-2, and that we treat someone as an Object-2 (even though she is not one) if we act towards her as if she lacked these sorts of properties.

The thing is, each of us really is an Object-1. That is, it is possible to act on us and change us in various ways. I treat myself as an Object-1 if I attempt to alter some aspect of myself (whether temporarily or permanently). However, it does not follow that I thereby treat myself as if I were an Object-2. Nor does it follow, when I treat somebody else as an Object-1, that I am also treating her as a mere object, an Object-2. For a start, she might welcome, invite, or even cooperate with my attempts to produce changes in her (perhaps I am her sports trainer, dietician, physician, teacher, counsellor, or psychiatrist). Moreover, we normally think it permissible to make at least some attempts to change people, even when they have not willed it, and sometimes even against their will (e.g., by means of persuasion). To treat somebody as an Object-1, which we do all the time, is simply not the same as treating her as an Object-2. Whether or not an act of treating someone as an Object-1 is desirable, commendable, or deplorable will not hinge on the mere fact that she is being treated as an Object-1, but on a whole range of accompanying circumstances, such as whether or not she is also being treated as an Object-2.

Indeed, there is far more to it than this. For example, moral issues arise from the well-known fact that early embryos really are Object-2s: they do not possess such characteristics as sentience, rationality, autonomous self-reflection, and so on. There might still be some moral limits to how we should treat them, but these will need to depend on other considerations. Therein lies a mountain of bioethical literature on the supposed rights of embryos.

I am not going to assert that Dupuy doesn't understand any of this. Maybe he does, maybe he doesn't (though I must say that there's no sign that he does). The point is that careful distinctions need to be made when we explore this philosophical territory, and Dupuy does not make them, preferring, apparently, to throw around emotionally charged language with an imprecision that verges on irresponsibility. What is clear, though, is that many acts of "raising up" (if this includes acting on ourselves or, in appropriate circumstances, on others, in ways that we see as enhancing) can take place without any "concomitant lowering" (if this means that someone is treated like an Object-2). The "no raising without a lowering" claim sounds impressive - like a line from Heraclitus, perhaps - but there's no reason to give it credence.

Let's take another of the many examples where Dupuy appears to be confused. Consider this brief quotation, in which he is complaining about the idea of deliberately redesigning aspects of the world that we find ourselves in:

"One can hardly fail to note the irony that science, which in America has had to engage in an epic struggle to root out every trace of creationism (including its most recent avatar, 'intelligent design') from public education, should now revert to a logic of design in the form of the nanotechnology program—the only difference being that now it is mankind that assumes the role of the demiurge."

Again, where to start with something like this? It is, of course, true that modern biological science is able to explain the diversity of life forms and their functional complexity without resorting to any notions of a supernatural designing intelligence. It is also true that this idea has been resisted on religious grounds, and that rearguard attempts are constantly being made by such bodies as the Discovery Institute to cast doubt on the current evolutionary paradigm - all with the aim of restoring scientific prestige to the idea of intelligent design of living things (by the biblical God, needless to say). It has, indeed, been necessary for genuine scientists to defend legitimate biological science from the well-funded polemics of self-styled Intelligent Design proponents.

But it does not follow from this that nothing is ever intelligently designed. It now seems indubitable that the Earth's various life forms (including Homo sapiens) are not the design of a cosmic watchmaker. But it does not follow that watches are not designed by watchmakers. Human beings do, obviously, design many things all the time; it's just that this doesn't entail that other things, such as leaves, eyes, and the flagella of bacteria, were designed by a non-human intelligence. The trick is to be able to distinguish which things really are intelligently designed (such as swords, sewing machines, and sailing ships) and which are the products of evolution and deep time (such as livers, lizards, and lorikeets).

Nor does it follow that we are unable to intervene intelligently to modify things that are products of evolution. Nor does it follow that we should not do so when it's in our power, as it often is to some extent. Whether or not we should do so in any particular case will depend upon such considerations as whether the intended modification will really advance our values.

Accordingly, there is no "irony" at all in the idea that we might (1) defend the truth of the claim that leaves, livers, lizards, lorikeets, and Lindsay Lohan are all products of biological evolution, while also (2) defending the desire of transhumanists and others to redesign aspects of the world and themselves to make them nearer their hearts' desires. This is a perfectly consistent position to take. Any irony is entirely in the (evolutionarily-evolved) eye of the beholder - a beholder who is simply not thinking straight in passages such as the one I quoted a few paragraphs back.

Unfortunately, the problems go on from there. As we strive to make sense of Dupuy's paper, we find ourselves struggling with the thoughts of a man who deals in dubious authorities, long quotations of impressive-sounding passages of only tangential relevance to the matters at hand, oracular pronouncements (I especially love "In the darkness of dreams, there is no difference between a living cat and a dead cat", whatever that is supposed to mean), and false paradoxes - generally, a style with which it's impossible to engage rationally without patiently querying the basis for almost every thought (as I hinted earlier, the level of patience required is considerably more than I can muster on this occasion).

While the paper, taken as a whole, is ornately impressive, its critique of emerging technologies builds dubious point on dubious point to the extent that it has no real foundation. It would, indeed, be easy - and to some extent justifiable - to dismiss the whole thing as X thousand words of high-sounding sophistry, but of course such a dismissal will not convince people who are biased towards Dupuy's neo-Luddite conclusions; hence, it's been necessary to give examples of where it goes badly wrong - to give an indication of why I think it's all a tissue of nonsense.

Near the end, Dupuy offers this paragraph of pseudo-wisdom:

The ethical problem weighs more heavily than any specific question dealing, for instance, with the enhancement of a particular cognitive ability by one or another novel technology. But what makes it all the more intractable is that, whereas our capacity to act into the world is increasing without limit, with the consequence that we now find ourselves faced with new and unprecedented responsibilities, the ethical resources at our disposal are diminishing at the same pace. Why should this be? Because the same technological ambition that gives mankind such power to act upon the world also reduces mankind to the status of an object that can be fashioned and shaped at will; the conception of the mind as a machine—the very conception that allows us to imagine the possibility of (re)fabricating ourselves—prevents us from fulfilling these new responsibilities. Hence my profound pessimism.

I am still not sure where his "profound pessimism" comes from. On close inspection, this passage makes no sense at all. Why are our ethical resources said to be "diminishing"? Surely they are increasing as we obtain a better understanding of the phenomenon of morality and realise the irrationality of clinging to inherited moral ideas that may once have had some pragmatic usefulness in very different cultural, economic, and technological circumstances. We are far better placed than our ancestors to ask whether we really want to live by this or that moral norm under circumstances prevailing today - whether it is really a norm that advances our deepest values (utilitarian, aesthetic, or whatever) and so is worth preserving. Moral philosophy - the rational investigation of the phenomenon of morality - is better placed than ever to make progress; our "ethical resources" are constantly increasing.

As for the claim that "the same technological ambition that gives mankind such power to act upon the world also reduces mankind to the status of an object that can be fashioned and shaped at will" ... this seems to make sense only if we confuse the concept of Object-1 (something that we can, to some extent, act upon and change) with Object-2 (a mere object - something that lacks the foundations of moral considerability). It is not at all clear why the idea that we are, in a sense, like machines - i.e. we are physical things in the last analysis, but with an intricacy of functioning - should prevent us from exercising responsibility in how we use emerging technologies. Everything about us can eventually be traced back to physical processes that occurred over the billions of years of deep time and culminated in the evolution of Homo sapiens, but it does not follow that we lack the characteristics (sentience, rationality, self-reflection, etc.) that we actually have, or that we are wrong to value them. The profound pessimism expressed by Dupuy is based on a series of intellectual confusions. Maybe it's time for him to cheer up a little.

Hopefully, the remaining three articles - which I'll get to soon - will contain arguments of more substance (not to mention lucidity). If this mess by Dupuy is the best argument that the modern-day Luddites can offer, they might as well throw in the towel now.


====
* And don't forget that Ihde has been discussing such issues for many years - going back to the 1970s when his points were (I suppose) less familiar.

8 comments:

Anonymous said...

I read the Dupuy paper, and though I admit to emerging none the wiser about Heidegger and "metaphysical humanism", underneath all the muddiness and pretension I think I discerned some anxieties that attract a certain amount of sympathy from me. (Either that, or the whole thing was so obscure that it simply acted as a kind of Rorschach blot.)

Specifically, if and when we attain the power to act on our own minds to the point where we can alter our own (or our offspring's) entire mental landscape and ethical framework at will, that really won't be comparable to changes in diet and exercise, or reading self-improvement manuals, in the way that minor pharmaceutical or genetic tweaking is comparable to those things.

The idea that we could ever remake ourselves completely is terrifying. (This is not to say it should be irrationally proscribed for all eternity ... but anyone who doesn't feel some sense of existential vertigo and visceral fear at the prospect probably hasn't really contemplated what it entails.) We are not "objectified" when we trim our fingernails or take up weight-lifting, but if our own (or our children's) entire nature is rendered infinitely malleable in our hands ... then I think we have to admit that that's a staggeringly radical change from our current situation, and one that it would be absurd to expect everyone to welcome.

Now, of course I don't believe that this is imminent (and I'm not trying to make some slippery slope argument that the mere desire for more effective self-modification leads us irrevocably to infinite malleability), but I certainly don't believe it's impossible in principle. And it is -- in a certain sense -- the logical endpoint of accepting the material nature of human beings, so perhaps that is part of the reason why Dupuy is so alarmed.

Then again, maybe I'm just reading this between the lines because I could make so little sense of the lines themselves.

Anonymous said...

Greg, you have posted critical stuff on transhumanism before and I don't quite get where it's coming from, because in your books you explore facets of the transhuman condition, and I thought your stance was pro rather than contra (judging from the books).

Why do you think that remaking ourselves completely would be terrifying?

existential vertigo and visceral fear at the prospect probably hasn't really contemplated what it entails.

I have experienced this feeling (and not just a little), but have come out of it stronger; only after you have dumped essentialist thinking do you notice that the possibility of complete remaking is a promise of exploration, not something to be feared.

Maybe you assume that complete remaking would also entail some radical departure from ethics?

I do not think so; being deeply influenced by Buddhism and Taoism, I think that with supreme knowledge (awakening) comes love of all beings (wisdom, compassion).

Such a being would never remake itself in a way that would bring suffering to other creatures; it would remake itself only to more fully explore the mindscape and the wonders of the universe. And I do hope that wise people are among the first who will transform themselves; certainly the transhumanist community consists of more friendly and open people than other groups (in my experience).

Of course, there may be people who are not wise and who want to harness technology to remake themselves in a less beneficial way. But I think those are exactly the questions Eliezer Yudkowsky and others are addressing with their FAI project.

So, to conclude: the appropriate reaction is not fear, but joyful expectation paired with foresight and insight.

John Howard said...

If this mess by Dupuy is the best argument that the modern-day Luddites can offer, they might as well throw in the towel now.

But if yours is the best argument transhumanists can offer, then you might as well throw in the towel. So nyah.

What is the game we'd be conceding anyway? Does it have something to do with a law prohibiting genetic engineering? The winner of the argument gets his way on that law?

Anonymous said...

Günther, my stance is pro rather than contra, but that doesn't require me to trivialise the issues, or other people's anxieties about them. I tend to be optimistic in most of my fiction about our ability to make sensible choices once we have these kinds of technical abilities, but I don't confuse my ability to choose events in fiction with a guarantee that everything will turn out so nicely in the real world. (And if you've read the story "Axiomatic" I doubt you'd consider that it has a happy ending.)

Maybe you assume that complete remaking would also entail some radical departure from ethics?

I do not assume that it necessarily follows, merely that it is a risk, and a highly non-trivial one.

the possibility of complete remaking is a promise of exploration, not something to be feared

If you place me in the middle of a minefield and tell me that 1% of the paths I might choose will be safe, but I still have no map that shows me which paths they are, I would be an idiot if my fear went away.

I'm not suggesting that we should never take a walk through the minefield if it can be shown that there is a better, safer place to stand than where we are now. I am suggesting, however, that abandoning all fear about the prospect is currently rather premature.

I think that with supreme knowledge (awakening) comes love of all beings (wisdom, compassion).

Er, maybe, but I'm a bit more worried about the difference between the technical capacity to manipulate ourselves and the technical capacity to anticipate every aspect of the outcome. Even well-intentioned people are perfectly capable of over-estimating their competence and screwing up.

But I think those are exactly the questions Eli Yudkowsky and others are addressing with his/their FAI project.

The FAI project is a prime example of good intentions combined with grotesquely overblown delusions of competence; the specific warnings they make are 90% off target, and the proposed solution 100% wrong.

Russell Blackford said...

There often seems (to me) to be a nice ambiguity of tone in Greg's fiction. I understand the feeling of vertigo at the idea that we could be totally free to shape ourselves all the way down - though that doesn't strike me as a serious possibility so it's not really what I'm on about. But I think that Greg captured the idea nicely in "Axiomatic" and especially in "Reasons To Be Cheerful". The latter is a very rich story, and much could be said about it, but even there the narrator manages to find some basis for making choices.

Still, we could add to the four points that I took from Ihde something about the problematic nature of a freedom to choose ourselves all the way down. I'm happy for us to come up with a list of points that should provide a genuine reality check for transhumanists. In fact, if we ended up with a good checklist of things that transhumanists ought to bear in mind before they get too carried away ... that itself would make for a good blog post or (better) for an article somewhere.

I nominate Greg to write it. :)

Anonymous said...

Greg,

I now see where we differ in our assumptions. You say:

If you place me in the middle of a minefield and tell me that 1% of the paths I might choose will be safe, but I still have no map that shows me which paths they are, I would be an idiot if my fear went away.

First of all, to stay with the minefield metaphor: if you are standing in a minefield (and are still alive - and excluding timed mines or other nasties), that means the position you are standing on is safe. But I don't think our current position is safe (so maybe there are nasties under the ground where we are standing).

To put it differently: when I am in a mood of angst, it concerns the more traditional dangers we face today even without transhumanist technologies - for instance, some crazy dictator going nuclear, or antibiotics losing their power.

So, seeing that where we stand is not so safe, and agreeing that "back" is not an option (I am happy to argue if this is contentious :-), but I don't think you hold that position), there is only one way anyway: forward.

But now I would like to change the metaphor, and that is I think where we actually differ.

I do not see the future as a minefield.

I would rather see it as a landscape, with hills, valleys, mountains, and, yes, the occasional pothole. Now, we should be careful not to tread in the potholes, but there are not so many of them: at least, they are few in number compared to the hills and valleys.

And we are also not blind (as the mine metaphor suggests; mines are insidiously hidden beneath the ground). While the landscape is sometimes overhung by heavy fog, which forces one to check one's stride and prod carefully ahead instead, the alternation of a joyful walk with the occasional wayside break is not to be bemoaned.

Now before I get carried away with metaphor, I do think that these different views should be carefully examined; where do they come from?

Why do you pick the minefield metaphor, and I the landscape? Do you have different information? Or different values?

To clarify these issues, I would be especially interested (as, judging from his previous comment, would Russell) in what you think are the concrete dangers specific to transhumanism - and which of them, say, are being missed by the FAI project (as you hinted above).

Anonymous said...

I'm often stunned by the political naivete, mindless ethical individualism, and breathtaking arrogance of some (!) analytical philosophers, but your polemic is beyond the pale.

You are calling Dupuy's text "intellectually confused", apparently without having a clue about the philosophical traditions he's referring to. While I was not convinced by his arguments, from my point of view Dupuy's style is not at all excessively allusive.

As you can't get blood from a stone, I won't try to show how silly such statements as "I treat myself as an Object-1 if I attempt to alter some aspect of myself" are, when seen from another philosophical perspective. Instead, I would like to refer you to a passage from C.S. Lewis' "Abolition of Man" which might be more accessible for someone who thinks that a text he obviously barely understands must be "ornately impressive":

"It is in Man's power to treat himself as a mere `natural object' and his own judgements of value as raw material for scientific manipulation to alter at will. The objection to his doing so does not lie in the fact that this point of view (like one's first day in a dissecting room) is painful and shocking till we grow used to it. The pain and the shock are at most a warning and a symptom. The real objection is that if man chooses to treat himself as raw material, raw material he will be: not raw material to be manipulated, as he fondly imagined, by himself, but by mere appetite, that is, mere Nature, in the person of his de-humanized Conditioners."

Still hoping that "Norbert Weiner" is just a mistake of spelling, I would recommend reading Dupuy's history of cybernetics and cognitive science and then having another try.

Anonymous said...

and Idhe should be Ihde...