About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019) and AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021).

Tuesday, April 22, 2008

Transhumanism still at the crossroads

In 2004, I wrote a piece called "Transhumanism at the Crossroads", which has been one of my most popular essays. It was originally published as part of my old "Eye of the Storm" irregular column on the Betterhumans site. (And hey, I'm always prepared to revive "Eye of the Storm" if someone would pay me even a token amount to cover some of the time I'd need to do it properly; but it doesn't appear that that will ever happen.)

Nearly four years later, I'm sufficiently distant from this essay that it almost reads as if it were written by someone else, though I suppose I still agree with its main sentiments: I'm prepared to be counted in as part of the transhumanist movement if it is going to be an inclusive social movement, but not if it is going to be something narrow, cultish, and (not to put too fine a point on it) suitable only for techno-libertarian nerds. How transhumanism will develop remains to be seen, but for the foreseeable future I'd rather be inside the tent exerting some influence on it than abandon, reject, or disavow it, as some of its one-time allies have done.

There is still a place for a strong transhumanist movement, if this is going to be a movement that is rational about technology and favourable to technology as long as it is used in ways that are beneficial (or at least not harmful). Much of the Luddite opposition to cloning (not to mention something as obvious as stem-cell research) has nothing to do with any secular harms that it may cause, and I favour the emergence of a strong movement that says this loud and clear. If that is what transhumanism is going to become, then count me in. At the same time, I've often applied the phrase "anti-anti-transhumanist" to myself to indicate that I am opposed to irrationalist opponents of transhumanism, not to rational and informed criticism of the movement, and to signal that I am not locked into any superlative ambitions that may be associated with transhumanism in people's minds.

Many self-identified transhumanists go much further than I do in what they want. I don't necessarily agree with them on any particular issue, but I do defend their right to advocate their ideas - and more than that, I think it is healthy for these ideas to be brought forward and debated without irrationalist fears or feelings of repugnance distorting the exchanges. I support some transhumanist ideas, but not others ... but above all I aim to do what I can to facilitate rational, rigorous, but lively debate about them. That was the purpose of "Eye of the Storm"; it was meant to be a place for calm philosophical reflection amidst all the raging bioethical (and similar) controversy. It is also how I see my role more generally when talking about transhumanist ideas; I'd rather introduce light than heat, though I'll sometimes be passionate when confronted by what strike me as plainly illiberal or irrational views.

In particular, I believe that it's important to discuss such ideas as personality uploading, advanced AI, the technological Singularity, and so on, and I am prepared to consider them all with a degree of sympathy. Moreover, I have defended advocates of these things against what I consider ill-informed attacks. I've even explored some of these ideas sympathetically, if a bit ambiguously, in works of fiction.

But at the same time, these specific ideas are not among those that I have actively advocated and there are reasons for that.

Okay, here's what I said in 2004, which may or may not still make sense. Feel free to discuss.

============================
TRANSHUMANISM AT THE CROSSROADS

For as long as I can remember, I've been fascinated by prospects for the future of our society and our species. This has kept me actively involved in the science fiction field, which has likely provoked sighs and raised eyebrows from my staider colleagues in academia and legal practice.

Yet this is nothing compared to the social stigma of being involved in the transhumanist movement. Since about 1997, much of my thinking, reflected in my fiction and nonfiction writing, has focused on issues that concern transhumanists: the prospects of artificial intelligence and uploading; the rights and wrongs of reproductive cloning, genetic engineering and radical life extension; and the general merits of human enhancement technologies. My viewpoint has generally been sympathetic to transhumanist approaches and at least one commentator has labeled me a "transhumanist technophile," which is fair enough.

Even so, I have not identified strongly with the organized transhumanist movement. After a brief period of enthusiasm, I declined to apply the label "transhumanist" to myself, and still feel some residual discomfort with it. But I am now more actively associated with transhumanism, especially through this site [i.e. Betterhumans], and my main project at the moment involves research on the social implications of enhancement technologies. With my working life centering around transhumanist issues, the time has come to take stock of where I stand, and of how I view transhumanism. One thing I know for sure is that transhumanism must become a far more inclusive, broadly based and mainstream social movement if it is to flourish.

Transhumanism and its discomforts

One good reason to feel slightly uncomfortable with transhumanism is its unmistakable nerdy aura, the sense that it appeals to a particular demographic, essentially young white males with computers. Its restricted demographic appeal is, indeed, part of the problem.

But the discomfort goes far beyond nerdiness or restricted appeal. It is one thing, I feel, to use science fiction to explore possible changes to human nature, and the prospects for enhanced human capabilities. (In any case, science fiction has often approached those possibilities and prospects with hostility.) It is another thing to use images of enhancement or cyborgification as metaphors for contemporary social reality, or for an agenda of political change. It is something else again, and something far more radical, to propose that we should quite literally upgrade our human biology. For many thoughtful, intelligent people in the professions and the academic world, this is a frightening idea. Now that transhumanism is getting media attention, it is not surprising some conservative commentators (such as Francis Fukuyama) are starting to brand it as dangerous.

To understand this reaction, we need to remember that the wider intellectual culture is still focused on the horrors perpetrated in the first half of the 20th century by those who carried out programs of racist eugenics. I cannot make the point any better than by quoting at some length from Walter Glannon's book Genes and Future People: Philosophical Issues in Human Genetics:

"Eugenics" is almost universally regarded as a dirty word, owing largely to its association with the evil practice of human experimentation in Nazi Germany and the widespread sterilization of certain groups of people in the United States and Canada, earlier in the twentieth century. One cannot help but attribute some eugenic aspects to genethical questions about the number and sort of people who should exist. But there is a broader conception of eugenics (literally "good creation" in Greek) that need not have the repugnant connotation of improving the human species.

Glannon goes on from here to discuss gene therapy, which he considers acceptable in principle because its aim is to prevent or treat disease in particular people. But he is implacably opposed to genetic engineering for the purpose of enhancement.

What strikes me as most remarkable is his unsupported assumption that improving the human species has a "repugnant connotation." It is symptomatic of something important in our intellectual culture that a reputable academic philosopher fails to put forward any argument at all for the supposed repugnance of species enhancement, contenting himself by referring to an "association" with the evil practices of the Nazis, and forced sterilizations in North America. After this point, Glannon's book simply assumes, still with no attempt at argument, that any proposal to improve human capabilities for a "perfectionist" reason is beyond the pale of respectful consideration.

Of course, it is worth reminding ourselves of the danger (not to mention irrationality) of guilt by association. To take the example of the Nazis, what made their practices so evil was their extreme prejudice, cruelty and violence. As Philip Kitcher has said, "The repeated comparison between Jews and vermin and the absurd - but monstrous - warnings about the threats to Nordic 'racial health' display the extent to which prejudice pervaded their division of human characteristics. Minor, by comparison, is the fact that much of their genetics was mistaken." None of this bears the slightest resemblance to what contemporary advocates of genetic enhancement have in mind.

But the point is not that Glannon can be debunked. Of course he can be. It is more important to understand that he is able to write in such a sloppy way because he can take for granted that his audience will start with similar assumptions. The situation may be changing, as more books and articles call for a sympathetic assessment of human enhancement. One straw in the wind is a new book [in 2004] from Nicholas Agar: Liberal Eugenics: In Defence of Human Enhancement. Still, until very recently, even the relatively modest idea of gene therapy has attracted expressions of concern. In this intellectual environment, the goals of transhumanism are ruled out of discussions from the start, except as targets for attack. To associate yourself with them is to be perceived as at best idiosyncratic and naive and at worst the sort of person who would happily consort with Nazi doctors and mad scientists. It is far easier to associate yourself with movements that project the picture of a caring person, dedicated to benevolence and justice.

Stand up and be counted

It would be nice if opponents of transhumanism were open to rational debate. However, I have gradually been learning some important, not terribly palatable lessons. One is this: We have moved beyond the point where liberal arguments about individual freedom and personal choice have much impact. I have argued in many forums that there is little intellectual basis for laws against innovations such as human cloning, which liberals should accept as a legitimate option for those who feel a need or preference for it. It is already too late to argue in that way, at least exclusively, for the cloning debate has demonstrated again and again that transhumanism's main opponents have abandoned traditional liberal ideals. John Stuart Mill's claim that experiments in living are to be welcomed now receives short shrift in public policy. The tone and content of the debate show that we are up against a scarcely disguised wish to impose certain moral ideals as legal norms, and a fear of strange directions that society might take in the future. [My thinking has changed a bit since 2004 in that I now think that the defence of our liberties necessitates an element of head-on confrontation with religion. I have only gradually come to think this.]

While there will be different outcomes in different societies, anti-cloning laws have created the precedent to abandon liberalism in areas of legislative policy relating to bioethics. We can go on complaining about this - and I believe that we should [and I'm still doing so] - but our complaints have a small likelihood of success.

What else can we do? The main thing is simply to stand up and be counted. Transhumanist ideas cannot be suppressed forever, since they appeal to deep-seated urges to improve our own capabilities and those of the people we love or identify with. But the movement can be frustrated for years or decades. The only answer I see is that transhumanism must develop rapidly into a movement of committed people in large numbers, including many articulate, prominent people who are prepared to identify with transhumanism in public. We must grow to the point where it would not merely be illiberal but also irrational for the state to try suppressing activities of which we approve or that we wish to try - whether we are talking about longevity research, technological methods of cognitive enhancement, or anything else that falls into the category of distinctively transhumanist acts.

As John Locke pointed out in his call for religious toleration more than 300 years ago, the state cannot coerce people's beliefs, as opposed to their outward actions. Doubtless, censorship and propaganda can accomplish much, probably far more than Locke realized. But, to adapt a point that Susan Mendus has made in her writings, it is still irrational for the state to buy into this, because popular belief systems get too strong a hold on too many minds. Once the state starts trying to suppress belief systems with wide appeal, it takes on tasks beyond even its vast resources. There is no limit to what might be needed to suppress beliefs, and it is not rational to try.

Mendus herself might confine her point to religious belief, which is sustained by powerful irrational forces. But the same argument applies beyond the area of freedom of religion. It is difficult to believe that the state could ever suppress the entirety of modern science or philosophy, for example, and it would be foolishness to try. As for social movements, the gay rights movement is a good example of one that has mobilized in recent decades and become so strong, visible and mainstream that it would simply be irrational for any Western state to attempt to stigmatize and destroy it. While some conservative governments continue to resist the idea of gay marriage, the actual persecution of people for homosexual practices is now almost unthinkable in Western societies. Now and then, Western governments will indeed take on missions that are completely irrational because they are destructive, never-ending and futile (the War on Drugs in the US is a deplorable example), but they usually know better.

The transhumanist movement now has a competent formal organization, which is increasingly active in pushing its message. It is getting media coverage, and there is the opportunity to gain increasing mainstream social acceptance. That's what we must do. We must go mainstream. We need to create a culture that is visible, proud and energetic. This is one lesson.

Arguing for equality

But this is not the only lesson. Are we sometimes our own worst enemies? It is all very well wanting to stand up for the transhumanist movement, but what will the movement be like in 10 years' time, or 20, or 50? How can I be sure that it will develop in a way that will make it a movement with which I am still pleased to be linked?

If transhumanism is to deserve our support, it must flourish as something that is humane and philosophically plausible. This does not mean that we should abandon any key ideas - at least not yet - but it does mean that we must accept that the availability of transhumanist technologies could have downsides.

I do believe that the overall effects will be positive. Consider, for example, the first great transhumanist technology that our society has embraced: the contraceptive pill, a biomedical innovation that alters bodily functioning in a way that is clearly enhancing rather than therapeutic. The pill's social impact has been far-reaching, and mainly for the better. Few of us would dream of going back to a time when there was no powerful technology available for women to control the fertility of their own bodies.

I expect we will come to feel the same way about technologies that help us increase our lifespan or our cognitive abilities. But the issue of social justice looms larger here than it does with the pill, since there are more obvious competitive advantages. Even if we do not accept a thoroughgoing egalitarian approach to questions of distributive justice - and I don't - we must avoid the exacerbation of existing social divisions that might arise if enhancement technologies became differentially available to the rich and the poor. Likewise we must avoid the alternative scenario of a mollycoddled, superficially "happy" genetic underclass whose ambitions and social contributions would be stunted.

As George Dvorsky argues, benefits are likely to trickle down even if enhancement technologies are initially taken up only by the wealthy. But we need to make sure it turns out that way. I've come to believe that transhumanists should go beyond arguing that enhancement technologies should be widely available. I now think that we should support political reforms to society itself, to make it more an association of equals. I am not planning to give away my own modest wealth, and I am only prepared to give two cheers for egalitarian political theory, but we have to find ways to narrow the gap between the haves and the have-nots.

Of late, I've seen more and more acknowledgments that transhumanism must be inclusive, both for our sake and for the sake of society. Nick Bostrom has recently emphasized that transhumanism must "ensure that enhancement options are made available as widely and as affordably as possible." I would go even further. We should actively promote a more egalitarian society, and a more equal world order.

This might not be a popular message for some people who identify with transhumanism. To date, part of the appeal has been to techno-libertarians who oppose regulating the market. If transhumanism became a more inclusive movement, it might actually alienate some of its current support base: people whose ideas are in many ways of great value. I hope this can be avoided, but we must become an inclusive, mainstream movement even if it leads to more fragmentation between "Left" and "Right" transhumanists. The forging of a humane and socially aware transhumanism is not only intellectually justified, it is necessary for transhumanism to survive and flourish.

Count me in.

185 comments:

Blake Stacey said...

If I were writing for Wired magazine, I'd say you were trying to start the New Transhumanism.

Anonymous said...

Though a handful of self-described Transhumanists are thinking rationally about real prospects for the future, the overwhelming majority might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers.

Worse, the word itself implies the replacement or overcoming of humanity, which is a PR disaster. While at some level it's good to insist that every quality of the human phenotype be subject to clear-eyed scrutiny, the word "Transhumanist" appears to suggest the foregone conclusion that everything about the present species is destined for the rubbish bin -- which neither accords with what most people who've considered the matter would wish for, nor does much to encourage anyone else to treat the movement seriously.

Russell, I share your concern that so many prominent Transhumanists are anti-egalitarian, but at this stage, quite frankly, to first order I consider a self-description of "Transhumanist" to be a useful filter to identify crackpots. While this might be unfair on a tiny proportion of people, I'm afraid anyone who doesn't want to sink with the whole drooling sub-Nietzschean mob really ought to think of a better name for their philosophy -- or perhaps even eschew labels altogether.

Russell Blackford said...

Greg, what you're describing is exactly the risk that my 2004 essay was (partly) about. You're suggesting that transhumanism as a movement is not at a crossroads but has already, irremediably, gone down the wrong path, even with its name. I'm not ready to draw that conclusion yet, but I won't argue with you about it, because you may well be right. Strategically, I am concerned about using the label for similar reasons, but there are also reasons to have some solidarity with other people who, as you say, are prepared to subject the current human phenotype to scrutiny.

The question of just what transhumanism is - if it's one thing at all - still seems to me to be contested and wide open. Within my broad view of what it could be, I'd classify (what I take to be) your views as transhumanist ... at least as much as mine. But obviously neither of us would recognise ourselves as adherents of the cargo cult that you describe. Indeed, I am opposed to the Singularitarian position that you refer to, but I see it as a disagreement among people who, at least potentially, belong to the same broad movement.

Let's get some clearly self-identified transhumanists in here to comment; I have plenty of them among my readers. Greg has put the problem a lot more forcefully, and a lot more pessimistically, than I did. What do you say?

Anonymous said...

Russell wrote:

Strategically, I am concerned about using the label for similar reasons, but there are also reasons to have some solidarity with other people who, as you say, are prepared to subject the current human phenotype to scrutiny.

The word "transhumanism" (or, even worse, "posthumanism") sounds like a suicide note for the species, which effectively renders it a political suicide note for any movement by that name. No doubt there are people prepared to spend 90% of their time and energy explaining that they didn't intend any negative connotations, but this is not one of those cases where other people will be to blame if "transhumanists" are reviled as the enemies of humanity on purely linguistic grounds. It's no use people proclaiming "Please, read my 1,000-page manifesto, don't just look at one word!" The name is stupid, and anyone who doesn't drop it deserves the consequences.

And I'm not sure quite how much solidarity I'm compelled to have with someone, just because they've also noticed that we're not going to see out the millennium with physical substrates identical to those we've had for the last 200,000 years. People who think their manifest destiny is to turn Jupiter into computronium so they can play 10^20 characters simultaneously in their favourite RPG are infinitely more odious and dangerous than the average person who thinks this whole subject is science-fictional gibberish and would really just like to have 2.3 children that are members of his/her own species, so long as they don't have cystic fibrosis and live a slightly better life than their parents.

I don't doubt that there are, also, some dangerously intemperate adherents to the notion of humanity retaining its ancestral traits forever ... especially if you throw in all the people who haven't given the issue a moment's thought, but would oppose it if it ever came to their attention (the "buy Jeremy Rifkin, get Osama bin Laden for free" argument). But for actual deranged monomaniacs on this particular subject, the pro side has a far higher proportion of nutjobs than its opponents.

Russell Blackford said...

Greg says:
And I'm not sure quite how much solidarity I'm compelled to have with someone, just because they've also noticed that we're not going to see out the millennium with physical substrates identical to those we've had for the last 200,000 years.

Well, I'd like to kick this around.

First, many people would see what I've just quoted as a huge and radical claim, and would oppose it fiercely - and probably think that you are a nutty science fiction writer, or something, for suggesting such a thing.

Many of those people wield a lot more political power than whoever you are worried about appearing with on the same side. I'm not even sure who, exactly, the deranged monomaniacs are that you have in mind - I'm not denying that they exist (I've encountered people with such views), just saying that they wield such little influence that their names don't even stick in my memory, whereas Jeremy Rifkin, Leon Kass, Michael Sandel, Francis Fukuyama, Margaret Somerville, etc., are names to conjure with. Those people are publicly lionised and politically heeded. I'm more worried about opposing their views than about being mistaken for someone who wants to convert Jupiter to computronium - though strategically it may be best that I distance myself from the latter to oppose the former.

I'm actually open to adapting some words from Sam Harris, which I've been thinking about for a while now, speaking about atheism:

"So, let me make my somewhat seditious proposal explicit: We should not call ourselves 'atheists.' We should not call ourselves 'secularists.' We should not call ourselves 'humanists,' or 'secular humanists,' or 'naturalists,' or 'skeptics,' or 'anti-theists,' or 'rationalists,' or 'freethinkers,' or 'brights.' We should not call ourselves anything. We should go under the radar—for the rest of our lives. And while there, we should be decent, responsible people who destroy bad ideas wherever we find them."

I don't think Harris is correct in the original context. But I think that such a claim might be stronger here, in the context of technologically-based alterations to the human body and human capacities, than in the context he was talking about. It is arguable that:

"We should not call ourselves 'transhumanists.' We should not call ourselves 'technoprogressivess.' We should not call ourselves 'singularitarians,' or 'extropians,' or 'Xs,' or 'Ys,' or 'Zs,” or “XXs,” or “YYs,” or “whatevers.” We should not call ourselves anything. We should go under the radar—for the rest of our lives. And while there, we should be decent, responsible people who destroy bad ideas wherever we find them."

I think it's either give up on labels altogether, in this way, or use "transhumanist", because all these other labels just sound even more cultish, or at least flaky, and I would not support the idea of scratching around for yet another label. (I should add that some of these other labels also seem to stand for more specific positions that I don't support.)

Personally, I don't mind the label "transhumanist" in itself. To me it just sounds like being in favour of using technology to overcome human limitations (and accepting that one day our bodies may be different, rather than being appalled at the idea). Beyond that ... the more fragmentation of terminology we see, the more cultish and silly the whole thing looks.

So, all right, I freely admit to anyone reading this that, with only slight reservations, I do take Greg's point. But we also need to find ways to discuss these issues from the kind of viewpoint that accepts that the human substrate will change in some way, and that this is not undesirable. We also need to find ways of having a political impact, and that usually does require having organisations, labels, etc.

Fortunately, we do have IEET, which avoids requiring that its Fellows, etc., identify as transhumanists (or anything else). We also have JET, which assumes some kind of technological evolution of the human substrate but does not require that its contributors identify as transhumanists - somewhere along the line someone made a deliberate decision that JET would not be called "The Journal of Transhumanism" any longer, and probably for good reason.

So, Greg, you're doing your bit by writing work that expresses your worldview. But it helps a lot if the adherents of a worldview are able to organise, not just be lone voices. Strategically, what would you do if you were in James Hughes's position, say?

And I'll be interested to see what James has to say if he wants to post a comment ... or if any of the other unequivocally self-identified transhumanists do.

Anonymous said...

I'm glad I decided to mull this over before posting... I just finished doing a Google search on Greg Egan and transhumanism because I wasn't sure that he was himself a transhumanist even though he is a, or the, leading figure in the SF subgenre. I didn't think he would be a posthumanist because he posts rationally on sci.physics.research. Transhumanism is defined on the Wiki as having "distinctive currents such as Immortalism, a moral philosophy based upon the belief that technological immortality is possible and desirable," and Singularitarianism. However, Bostrom identifies these two currents as part of posthumanism. Anyway, it is these two which I object to being classified as naturalistic. Even Kurzweil says that within 25 years we will have a Turing Test Passing Program, "but he adds that whether these robots will truly be conscious or simply display what he calls 'apparent consciousness' is another question, and one for which he has no definite answer." So those "currents" are beyond the pale of the scientific method. Otherwise, Russell, I didn't see anything with your stated position which seemed moderate. You also stated that after 4 years you had decided to adopt a more confrontational stance with religion. I think you will have a hard time stating some rule for when that is wise. After all, none of our "rights" is absolute; they exist in a system of checks and balances with the other rights, so freedom of speech doesn't always trump other rights. That is why people can be legally punished for inciting a lynch mob, for instance.

Russell Blackford said...

Anonymous: I can't speak for Greg, but I think we can be confident from what he's been telling us that he would definitely not want to be called a transhumanist. :)

Your comments are, alas, reinforcing Greg's in that you've somehow gained the impression that to be a transhumanist is not to be someone who is likely to post rationally. I must say that that's a very unfortunate situation if it really is the situation we've reached. The people whom I know and consider friends, and who accept or adopt the label "transhumanist", actually tend to be unusually rational people. I'm thinking of James Hughes, for example (since James will probably read this).

Some of the other tags, such as "extropian", do seem to me to be tainted, but "transhumanist" covers a very wide range of positions, and many of those positions strike me as eminently arguable, even if I don't agree with everything that any particular transhumanist thinker might propose. I wouldn't agree with James on every issue, but I wouldn't agree with Greg on every issue of substance, either. And yet the three of us would probably agree on many things. We'd be far more at home with each other's worldviews, I expect, than any of us would be with either Leon Kass or the blokes (they're usually blokes) who want to turn everything into computronium.

It's true, however, that Greg has managed to avoid labels, as has, say, Damien Broderick ... and so have I, to some extent. So perhaps we all sense the disadvantages. I think that a debate about labels (and movements), and whether they are productive or counterproductive, is well worth having - in this forum and others.

I'll go into the freedom of speech issues (again) another time. Obviously I'm well aware of the lynch mob example, which John Stuart Mill actually used in On Liberty. But that's not a reason for self-censorship on important issues to do with the truth of religious doctrine or its related moral claims. But, please, let's leave that for another thread, because the issues that Greg has raised are more than enough for me to cope with right now.

Russell Blackford said...

And I see that I misread one thing in that you actually said that you didn't think Greg would be a posthumanist. Okay, that puts a slightly different light on things, but really I have no idea what posthumanism really is or whether it is one thing. I always think of someone like Donna Haraway when I read that expression, but I doubt that that's what any of us are talking about.

Anonymous said...

Russell, perhaps the reason we have somewhat different perspectives here is (and correct me if I'm wrong) you're thinking mainly of debates going on in the philosophical literature, where most authors probably agree that what a term like "transhumanism" means is ultimately to be determined by reading any number of academic papers in which the matter has been argued about and agonised over.

But what I'm talking about is the way journalists, legislators, and ordinary citizens will respond to such a word. The fact that some fairly sensible people with relatively modest, or at least defensible, aims might have chosen to use the word for their agendas really won't count for much, if the popular meaning amongst self-described "transhumanists" in the blogosphere revolves around fantasies of AI-mediated rapture. At best, you can expect an ordinary citizen to think of someone like Kurzweil, who is certainly very focused on AI.

If you really, honestly think the word carries no negative baggage either from its suggestive etymology or from its use in popular culture, well ... who knows? Maybe I have no idea what I'm talking about. But it does amaze me that someone like you who seems mostly concerned with the politics of plausible near-future biotech doesn't want to give the space-cadets a wider margin. Who do you think the next PM is more likely to invite to his/her big ideas summit: (a) Russell Blackford, well known commentator on bioethics, or (b) Russell Blackford, well known commentator on transhumanism? When Sir Humphrey explains that word to the PM, he will not ascribe a meaning to it by reading your published works; he will Google it and write you off as a crackpot.

Russell Blackford said...

Actually, Greg, I tend to agree with your final paragraph. And it's true that I don't go around calling myself a transhumanist when I submit articles to journals or books to publishers - well, not that I've done that lately, but you can be confident that when Voices of Disbelief is submitted to Blackwell towards the end of this year it won't contain any references to transhumanism (at least not by me).

Furthermore, when my thesis gets turned into a manuscript that I'll be submitting to publishers later this year the same thing will apply. I'm more likely to stress its continuity with the work of people like John Harris (as opposed to Sam Harris) and Nick Agar than any connection with transhumanism. So I'm largely taking your advice already. (If I do get it published, some of my transhumanist friends may actually be appalled by how conservative some of my views are by their standards.)

I haven't enticed James into here, but I note that Citizen Cyborg also doesn't have as much to say about transhumanism as one might expect, though it's certainly there. Damien never identifies as a transhumanist, though he may have damaged his credibility in other ways, as he's well aware. But about the only person who actually wants to write a book that puts transhumanism right up front is Simon Young, who is one of the people whom most of us would distance ourselves from very quickly. (Personally, I think Prometheus Books made a mistake in publishing his Transhumanist Manifesto.)

I'll be talking about transhumanism at the AAP conference in July (my paper was accepted), but again I'll probably be keeping some distance between me and it, concentrating more on the issues that it raises. But I suppose that counts as being a commentator on transhumanism.

All in all, I'm not debating this with you, but just discussing it, as you'll realise, because I think you have a very strong point, and part of me simply agrees with it. Perhaps people like me should simply organise around IEET with a batch of interesting colleagues, some of whom do consider themselves transhumanists while some do not. And perhaps some of us do have good reason to "go under the radar" in Sam Harris's sense, i.e. to avoid labels and movements and just deal with issues rationally. On the other hand, I don't think of people like James or Giulio Prisco as the space cadets. I don't even think of people like Max More and Natasha Vita-More (whom I like a lot) in such a way.

Maybe it's time for the IEET folks to do some more soul-searching about all this - at least internally - since it's obviously on my mind, and since you are providing thoughtful feedback, and because I think it's a perennial cause of concern to some of the other IEET folks whom I won't name. OTOH, I'm not likely to go off and start attacking people whom I value and positions with which I have sympathy as some folks seem to be doing (see my original post).

Your impressions, and your impressions of other people's likely impressions, are valuable input. So much so that you probably should publish them somewhere more prominent than as a batch of comments on someone else's blog. For your sins ... er, hard work ... as a science fiction writer, you're probably one of the people whom a lot of transhumanists most respect. Your reservations (to say the least) about the word, and the wilder ideas associated with it, would make people sit up and take notice.

citizencyborg said...

Hey Greg and Russell

First, Greg, allow me to fawn a bit. I'm a huge fan of your work.

As to the debate over the term "transhumanism" and its utility, we have discussed that issue for the six years that I've been a public transhumanist and leader in the World Transhumanist Association, and I've frequently noted the similarity of the debate to debates among us leftists about the utility of terms like "socialism." As my comrades and I did in leftist activism, I tend to advocate strategic tolerance and coalition-building around the core ideas and goals, recognizing that some people in some life situations will be uncomfortable with ideological labels and organizations, while others will embrace ideological terms for identity and movement-building. A healthy movement towards an enhancement-friendly future requires defenders who are openly "H+" and building an explicitly "H+" movement, as well as people who are pro-enhancement but adamantly not H+. Diffident intellectuals, politicians, artists and writers are often in the latter "fellow-traveling" group, and that's fine. The trick is to keep us from standing in a circle shooting at each other. That is one of the goals of the IEET, building that diffuse coalition.

As to the idea that the term "transhumanism" broadly connotes "anti-human" for people, I have not found that to be true. I've spoken to audiences of tens of thousands of people about transhumanism over the last five years and I simply don't get that reaction. More often people note the (entirely appropriate and IMHO welcome) connection to "transsexuality" or "transgenderism." I should note that most of us have consciously avoided "posthumanism", however, for precisely the reason that it implies we want to end "humanness" rather than create an inclusive and diverse "transhuman polity."

Words don't really mean anything necessarily (Wittgenstein) but only in their associations. In the UK it is associated with the sober Dr. Bostrom and well-defended in the British media; in Italy transhumanism has been demonized as a left-wing plot against the Church; in the Bay Area it is seen (by both advocates and critics) as Silicon Valley libertopianism; in Quebec it is seen as hyper-Americanism; and in Nigeria and Kenya it is seen as the Enlightenment on steroids. But for 99.9% of people on the planet it still means nothing yet.

So the strategic questions are (1) whether we need a term, and (2) whether there is a better term. I do think we need a term because the ideological POV and the subculture exist, and they will be called something, so what then? Many possible alternative terms have been proposed, and the one the IEET has gravitated toward is "technoprogressive," not as a replacement for transhumanism, although it is often less problematic for people to identify as technoprogressive than as H+. However technoprogressive defines a series of politically left-of-center techno-positive perspectives, including left-wing transhumanism, and transhumanism is a broader and less specific set of ideas. Presumably many such Venn diagram terms will proliferate.

At any rate, Greg, I think you are stuck being a patron saint for us transhumanists, wacky or not. We'll try not to embarrass you.

J. Hughes
http://ieet.org

Anonymous said...

Thanks for the comments, James. I'm very surprised, but if you say you've detected no flak from the use of the word to such a wide audience, I'll have to defer to your experience. I still think you're outnumbered by crackpots who've chosen the same label, but maybe in the long run the term will end up connoting something respectable, if and when it becomes entrenched in everyday language.

Anonymous said...

Hello from SH aka Anonymous,

Russell, I made a mistake when I wrote
'Otherwise, Russell, I didn't see anything with your stated position which seemed moderate.'

I meant to say anything _wrong_ but maybe your mind automatically filled in the intended meaning. ... The public encounters what Transhumanism (TH) entails first on the Wiki. Two of the components map to foundations of religious belief. Immortalism (technological) serves the same function as realizing Enlightenment, or reaching Heaven, Mecca, Valhalla or Mt. Olympus. Immortalism faces the same challenges, rather exactly, as the quest for a time machine or a perpetual motion machine. SI promotes belief in an intelligent sentience wiser than humanity, a power greater than ourselves, like the hope for a benevolent alien civilization which drives SETI. Atheists and some naturalists have come to grips with the existential ennui of being alone. There is no greater power (evolution has no end goal) which has a purpose for me and my life. I have to accept the responsibility of choosing my own destiny; I don't fit in as a jigsaw puzzle piece in the tapestry of a Divine (even super-AI) Plan. These fears, of death and of a life without meaning, are assuaged by religion and by the more radical currents of TH. Humanity has searched for order in the cosmos since the beginning of time, when the wind was thought to be the spirit of Life. SI has no scientific basis. There is no continuum on which a computer system with current memory and CPU resources has a marginal mind, nor does one appear when such resources reach the complexity of a human brain, nor when they exceed the brain a millionfold.
Reputable scientists working towards a Turing-Test-passing AI make no claim for any such existing program. After 55 years of effort they've started a world-wide research program not to build a Turing-Test-passing AI but to study what is needed to build such a program: a list of what functionalities will eventually need to be incorporated. They don't know if they have ever tried the right or workable approach. Emergence does not apply to all physical processes, nor is it predictable which processes will manifest it. AI necessarily adopts physicalism on the mind/body dispute. Positing a continuum of consciousness/mind is an anti-AI position that requires Dualism or something like it, such as Animism. One doesn't have any scientific evidence for the SI claims, one of which is that a super-intelligent AI (with a mind, so that it is capable of harboring malevolent intent) is going to spontaneously spring forth (like from the forehead of Zeus) and pose a potential threat to humanity, so that the Ghostbusters need to fly to computer networks and apply governors similar to Asimov's Three Laws of Robotics. We should give funding to squelch a fear of an accidental, evil AI entity event when AI science hasn't been able to achieve this goal after 55 years of deliberate effort??
So the Wiki reflects the majority of public opinion. It lists SI and Immortalism under TH. This connection is reinforced by movies such as The Terminator and The Matrix. Nick Bostrom doesn't identify SI and Immortalism with TH; he places them on the fringe of Posthumanism. But as you, Russell, wrote: "but really I have no idea what posthumanism really is or whether it is one thing." So if you don't distinguish Posthumanism, how can one expect the public to discern Immortalism and SI as belonging to a different category? The vitriolic assessment of SI by the person who posted as Greg Egan (which I echoed) isn't a condemnation of TH. It isn't an attack against pursuing scientific research or intellectual freedom. It is an attack on misrepresenting imaginary scenarios as if they were based on scientific evidence, and deliberately rather than by mistake. SI, with its religious overtones, is a fear-mongering spread of misinformation with a goal of obtaining money, so it is harmful even if it just deprives true scientific projects of funding. SI is part of the same flock that anti-theists disapprove of in organized religion. This doesn't preclude TH from having a legitimate territory of scientific exploration, and in fact I think some technological brain enhancements are inevitable. But I don't think that is the image that the public holds of TH. I suppose Greg Egan could scarcely be TH. When RAH wrote 'Stranger in a Strange Land' he received a lot of acclaim for writing something deep. Heinlein said, I wasn't trying to write something deep, just a good SF story. But a cluster of his fans wouldn't accept this disavowal. This leads to a point about cults, but I've written enough.

kanzure said...

I'd like to bump a link over to a critique of the Wikipedia transhumanism article, in my attempt to salvage it within the context I see emerging out on the web, but these edits were rejected and I am not sure how much effort I want to spend on them. I don't know how much the term 'transhumanism' is worth. I'd rather let other people sort it out (ultimately falling into some sort of symbol grounding problem with their endless criticism) while I go work on my tech (seriously).

- Bryan

Russell Blackford said...

You know, I wrote a lot of that Wikipedia article on transhumanism, but it was the devil to write, since every line was a compromise, including with someone whose mission was to make transhumanism seem as wacky as possible. There are large tracts of the article that don't resemble the transhumanist movement as I've encountered it. (Also, the article may well have changed a lot since my involvement.)

Nonetheless, I'm sure that Greg is right that in at least some circles - perhaps very wide ones - transhumanism is associated with activities and proposals that I consider at best irrelevant, and at worst ... well a lot worse than that. This does need to be managed; I can understand, but not approve, when people like Dale Carrico run around shouting, "I am not a transhumanist and I oppose transhumanism!"

Regarding the framing issue of whether the T-word should be used, the issue in my mind is whether it can, as Greg puts it, end up connoting something respectable. I suppose I'm back where I started on that. James is optimistic, and he has a lot of data/experience that the rest of us lack, but I'm not entirely convinced. I think the jury is still out.

Steve, thanks for posting as yourself. It's amazing how much friendlier a comment seems when it is not signed as "Anonymous". And by the way, the person posting as Greg Egan is actually ... Greg Egan.

Although I'm sceptical about AI of the kind that the SI people are discussing, and about the SI itself, one thing that I must contest is the cynicism with which you discuss the SI folks. I don't know them well, but I know them well enough to be sure that they are idealistic about what they're doing. It's not fair to describe their project as a fear-mongering spread of misinformation with a goal of obtaining money. The people concerned could almost certainly become a lot wealthier by putting their skills into more mainstream activities. Even if they're misguided, people can be misguided without being venal or corrupt.

Greg, you might not like to name names in public (but you have my email address). Who are the space cadets that you have in mind? Is it specific people (Simon Young? the SI people?), or is it sort of a general impression that you have of "libertopian" (as James calls them) types? Or both? It would be a useful bit of data (at least to me, but possibly to others as well).

Anonymous said...

Russell wrote about SI members:
"The people concerned could almost certainly become a lot wealthier by putting their skills into more mainstream activities. Even if they're misguided, people can be misguided without being venal or corrupt."
I've met more than my fair share of New Age cult leaders while living in Santa Cruz. It is true that some of them genuinely believe what they preach. It has been a few years, but I've read the manifesto put out by Yudkowsky. The mechanism by which some computer program can metamorphose into a super-intelligent entity stands out like a sore thumb when one knows that purposeful attempts by the AI community had failed for decades. So I got into an email conversation with one of the core members, and he said that the mechanism would be realized when one understands the manifesto. He said that a plain answer could not be revealed and must be held as a secret, because if the answer were known it would increase the danger of some computer network being triggered into awakened AI malevolence. You may want to call it cynical, but I call it no longer being naive about cult tactics.
Somebody who I think is a genuine believer, and smart, is Ben Goertzel (he serves as Director of Research for the Singularity Institute for AI). I encountered him on the net when he was poor and Novamente was struggling. My point is that being smart and educated is very little guarantee of becoming wealthy(ier). Bill Gates scored 1600 on his SATs. Yudkowsky also scored 1600 but is sufficiently dysfunctional that one can't count on him becoming wealthy by choosing a more mainstream endeavor. Heinlein used to live fairly close, and there is a story about L. Ron Hubbard visiting RAH in his cabin. They got into an argument about how gullible people are; Hubbard claimed very gullible, and they made a bet. Scientology is how Hubbard won the bet. I guess we will just have to agree to disagree, as I didn't find your point convincing, esp. about cult leaders.

"Nothing in the world can take the place of Persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb.
Education will not; the world is full of educated derelicts. Persistence and determination alone
are omnipotent. The slogan 'Press On' has solved and always will solve the problems of the human
race." -- Calvin Coolidge --
I find this advice very helpful when solving computer problems :-)

Russell Blackford said...

Well, you agree that Ben Goertzel is sincere, and while you obviously have a low opinion of Eliezer Yudkowsky, I think his genuine concern about these issues is palpable and has been for many years now. That's the point of my comment, and I don't think you really disagree with it. So let's move on to more substantive matters.

(I certainly don't think it helps much to trot out the famous story about L. Ron Hubbard yet again. There's no evidence that Hubbard's attitude in any way typifies that of the SI people either collectively or as individuals.)

Anonymous said...

Russell, I don't want to single anyone out for disparagement, either here or in private, because I haven't actually read anyone's entire corpus. I don't spend much time reading academic papers on this subject, or Transhumanist manifestos; the impression I've gained of the movement comes largely through the popular media and random exposure to blogs by people self-describing as "Transhumanist", regardless of their affiliations and qualifications. A large number of those bloggers will be people whose names are not famous and who have no particular influence; nonetheless, they consider themselves to be part of the Transhumanist movement, and so surely they contribute something to the wider public's impression of what such a movement entails. As with, say, socialism, it's not the academic definition that interests the general public, it's the behaviour of people they know (either personally or through the popular media) who self-describe as socialist.

Now there are obviously some grave deficiencies with such a viewpoint; I mean, a similarly based impression of quantum mechanics would also yield a picture of a world dominated by crackpots. But while quantum mechanics has a sound historical and academic bedrock that can (largely) withstand all the noise that surrounds it, I'm much less sanguine about the T word, given that its origins lie as much in SF, SF fandom, and technopunditry as it does in bioethics and other fields of philosophy. There's nothing wrong with that; SF and various non-academic techno-boosterist subcultures ought to be inspirational. But the lines between what's imminent, what's plausible in the medium term, what's possible in the long-term, and what's sheer wish-fulfilment fantasy, remain utterly blurred for most "rank-and-file" Transhumanists I encounter on the web, and also (from my limited reading of them) a substantial number of more prominent commentators. It's this that prompted me to say, earlier, that to first order I consider a self-identification of "Transhumanist" to be a sign of a crackpot. While there are doubtless people to whom that's unfair, filtering out anyone who uses that label is a pretty reliable way to ensure that you don't end up wasting time reading people who've completely lost touch with reality.

Anyway, I don't want to harp on about this. I suspect that despite my misgivings the word will continue to be used, and maybe it will turn out to be salvageable after all. And I seem to have derailed what could have been an interesting discussion on the politics of the movement; sorry about that, Russell!

Russell Blackford said...

Greg, I think that your comments actually are helpful to a discussion of the politics of the movement - if there really is a single movement here - and how it frames itself (which includes what terminology it uses). I hope they provoke more responses.

Giulio Prisco said...

Thanks Russell, Greg and others for the great article and discussion.

I guess I agree with Greg that "the word 'transhumanism' (or, even worse, 'posthumanism') sounds like a suicide note for the species, which effectively renders it a political suicide note for any movement by that name."

Of course, most transhumanists don't mean it as a suicide for the species, but rather as growing up to adulthood as a species. But my experience is that most people do not relate positively to the term because it conjures the negative and threatening images described in the post and comments.

But labels are a useful communication device. Of course the simple words "Left", "Right", "Democratic", "Republican" etc. cannot capture a more complex political position, but they are still useful to express _more or less_ where one stands. And we need a simple label for overcoming human limitations through all available means, including technology, because, like it or not, this is becoming an important political issue.

So what label should we use for the simple idea of, in Russell's words, "being in favour of using technology to overcome human limitations (and accepting that one day our bodies may be different, rather than being appalled at the idea)"?

I always thought the "Extropy" label is very strong because it does not have an "old" meaning and by itself does not trigger any image, be it positive or negative. But over the years the E word has become more and more identified with a specific political flavor of the T word (perhaps this is not what Max More and the first Extropians would have wished).

Perhaps the T word will lose its negative connotations over time, or perhaps we should think of a new label.

Russell Blackford said...

Or, Giulio, perhaps there's value in flying under the radar and simply dealing rationally with a range of emerging issues that relate to new technology.

But that's hard to reconcile with being organised.

One way might be for like-minded people to organise increasingly around various more specialised bodies such as the institutes at Oxford run by, respectively, Nick Bostrom and Julian Savulescu (both part of the James Martin Twenty-First Century School), or IEET, or the Singularity Institute if it comes to that. Those bodies can develop their own areas of interest and expertise without necessarily claiming to represent something called "transhumanism" or being lumbered with whatever connotations the term may have. Maybe we should spend more time using different language such as "the future of humanity" and "emerging technologies" and even "evolution and technology" (by which we mean something fairly specific).

I'm not necessarily advocating this, though, because I also see benefit in having a broad label such as "transhumanism" (whether or not it was the ideal choice) and in having an organisation such as the WTA.

citizencyborg said...

Greg et al., concerning the politics of the movement (which has always been one of my central concerns), you may be interested to check out the surveys I've conducted of the transhumanist rank and file to discern what their politics are:

Politics table
http://ieet.org/index.php/IEET/more/tph07wta/

Executive Summary
http://www.transhumanism.org/index.php/WTA/more/2007survey/

Full survey report
http://www.transhumanism.org/resources/WTASurvey2007.pdf

Gratifyingly I discovered when I started doing these surveys that, contrary to stereotype, we are extremely politically diverse, and progressives outnumber libertarians.

------------------------
James Hughes Ph.D.
Secretary, World Transhumanist Association
http://transhumanism.org
Williams 229B, Trinity College
300 Summit St., Hartford CT 06106
(office) 860-297-2376
director@ieet.org

kanzure said...

Greg,

You mention that a 'movement' is defined by what its people do, such as the difference between writing academic papers that try to define a 'movement' and actually going out there and doing the tech.

I have had to deal with this as well, but I am seeing many groups of transhumanists that are eager to help out with my tech projects, like innerspace, diybio, biohack, biopunk, OpenWetWare, synbio, dnatube, OpenVirgle, OSCOMAK, DIY genome sequencing, tons of bioinformatics databases, InterPlanetary Ventures, Team FREDNET, OSAEROSPACE, Markram's computational neuroscience simulations, in vitro meat, asteroid mining; there are so many people doing so many things. I'd wonder what more we could possibly do. Isn't all of this progress?

Re: derailing a discussion of politics. Did the politics derail transhumanism, or transhumanism derail the politics in this scenario? The broader scenario?

- Bryan

Anonymous said...

I agree that the world of transhumanist blogs and mailing lists sometimes resembles the discussions and the society of the early 20th century eugenicists. To think, over a hundred years ago such thinkers as HG Wells and GB Shaw put their names to the movement, universities ran courses in the subject, and parts of the US enacted laws promoting it.
Now we live in the 21st century and have some understanding of the science underpinning genetics (like we actually know DNA carries the information, and we can sequence whole genomes). Can our times produce newer, better intellectual cliques to spark debate on our future?
When you try to read articles about transhumanism on the web, it would appear that most transhumanist thought is the work of those two most roguish professions, the philosopher and the futurist. Philosophy is well known for its extreme abstraction, frequent irrelevance to most people's everyday existence, and its practitioners' love of rhetoric. The futurologist is a predictor who often makes the financial geniuses behind the current credit crunch look sensible by comparison. On the other hand, most of the critical opposition to Transhumanism comes from philosophers in love with their own rhetoric, academics trying to force everything to their dogma, and professional controversialists like Fukuyama (after naively declaring the end of history, he went on to declare Transhumanism "the world's most dangerous idea". I bet he's upset Transhumanism doesn't have a higher public profile, or he could be scoring more publicity).
Given the philosophical bent of a lot of the articles, you can see why Transhumanism as such doesn't have a big public profile - it's far too abstracted from people's lives. If someone created an "LA Law" or "Grey's Anatomy" of just-beyond-cutting-edge bioethics, you'd probably find these issues having much greater public awareness and impact, and a growing movement looking for a name.
To continue the theme of raising the profile of transhumanism, one of the biggest reasons we have the "white males with computers" advocating uploading and the like is Greg Egan. In "Diaspora" and "Schild's Ladder" he thoroughly humanises characters whose intelligences are inside machines, and in "Distress" he makes believable characters who are beyond gender. If Greg's characterisation were worse, fewer people would have initially gotten the idea that various methods of trans/posthuman existence were desirable and would allow a continuance of what it is to be human.
Maybe there's something to be said for being "under the radar". As it is, we already live in a transhumanist world - you can go to your dodgy internet pharmacy and load up on antidepressants to change the way you feel, modafinil to let you get by on as little sleep as a combat pilot in wartime, viagra to alter your sex life, hormones to combat deficiencies and possibly offset some of the changes associated with aging. We can have plastic surgery to look how we want, and there are thousands of people out there who've broken traditional gender boundaries with gender reassignment. There are many children born each year via fertility technologies, some more controversial than others. You can have faulty body parts replaced with those of other humans if you're lucky, have heart valves from other species, or a variety of prosthetic parts. If you have a neurological condition and are feeling desperate, you can always go to a clinic in China and be injected with embryonic stem cells. All the while, there are people living as much of their lives virtually over the internet as they can, leading entire existences in which their body is just something sitting in a room, allowing their brain to interface with others over a distance.
If people keep standing up for biological research, and people like Greg keep writing inspiring fiction, and the cream of the transhumanist blogs & mailing lists provide interesting ideas, then there won't be a "transhumanist movement" - there'll be a world living a lifestyle that we would consider transhumanist, but which they will call by a different name, one more apt to their circumstances.

Damien Sullivan said...

My perspective:
I discovered the labels extropian and transhuman(ism) in 1993, presumably thanks to Usenet. Prior exposure to the ideas came from SF: Vernor Vinge (pointing me directly at the Singularity), Fred Pohl's Gateway books, _The Silicon Man_, and I guess my teen reaction can be summed up as "AI cool, longevity cool".

I hung out on the extropians list for years; technically I'm still subscribed, but hardly ever open the mailbox these days, and usually see rants against the government when I do. But at the time I'd have said "transhumanist" meant anyone positively interested in the potential of one or more of intelligence increase, AI, longevity medicine, cryonics, genetic engineering, cyborging, uploading, nanotech; extropians added a strong libertarian spin. There was talk of transhuman being someone who was on the way to the posthuman condition, but, eh.

So, it's all pretty loose. The Borg are transhuman. So are the immortal humans at the center of clouds of robots in Marooned in Realtime. So's Tatja Grimm. Or Egan's infomorphs. Or Ken MacLeod's range of old humans and uploads and (usually doomed) AIs. Transhumanists, I figure, would like one of those paths. Not always the same one.

As for the tension between transhumanism and humanism... back when I was active, I used the "Enlightenment on steroids" line myself, I think, regarding "extropianism". And noted sourly that much of the world had a tenuous grasp, if that, on humanism and the basic Enlightenment, never mind the stuff we talked about.

But then someone like E. O. Wilson ends _Consilience_ with disapproval of what he calls self-made Homo proteus, vs. contingency-accepting Homo sapiens, setting himself up as a humanist against transhumanist ambitions.

Of course, there's the whole eschatology of the range of Singularity concepts (even Vinge himself can't be pinned down to just one concept). I got tired of that after a while.

I doubt I'm being very coherent here; this tiny text window in a browser isn't conducive. But, I'd say I still call myself transhumanist, lacking any other label for viewing various enhancement or digital-mind technologies positively. I don't use extropian these days, which fits my switching from libertarian to social democrat. I don't attach any negative connotations to transhumanist; it's too generic. Reactions I've seen against transhumanism often involve skepticism of even the possibility of AI or significant biological life extension, though there's also conflation with the Techno-Rapture version of the Singularity. I avoid the Singularity term as ruined, sometimes bruiting "the Cognitive Revolution" for the likely future I see, one in which the mechanisms of intelligence and the brain will be increasingly understood, imitated, and manipulated. Maybe we'll have the cheap hardware for digital AI; maybe we'll have centuries of slow genetic improvement. But when we understand the mechanical bases behind intelligence and personality I think some big change will happen. Hopefully a net positive one.

As for "working toward it", well, I'm a grad student in cog sci. But I kind of think a lot of this stuff *will* take care of itself in the normal course of events. People will provably pay lots of money for even false hopes of restored youth, so unlike space colonies, there's a real market for longevity stuff. People will pay for increasing automation -- Japan on one side, the US military on the other. Researchers are trying to upload animals as it is, or figure out the genome. Big things to work toward politically are (a) not banning those, (b) social democracy, so there's a good distribution of benefits, (c) stabilization of climate or at least of food and water availability, so the whole system doesn't collapse.

No idea what IEET or JET are, though I was around when SIAI started. Guess I haven't been paying attention for years. :)

Anonymous said...

I have long agreed with Egan. Three years ago, I was saying on the WTA site that the word "transhumanism" conjured up the image of that handful of men (note the gender) who could run fast enough to catch the train of radical evolution, leaving everyone else behind on the platform.

Unfortunately, some people have both a personal and historical investment in the word and are reluctant to change it.

I'm the type who believes in searching for something that works better rather than sticking to something that works badly just because it's there. Brands are malleable. For instance, I liked H+, but it got grief from the chemists. Fair enough.

I also agree that the far future scenarios for which transhumanism is known can be very destructive in the wrong hands when they no longer resemble SF literary meditations and instead resemble a wishlist. Humanity is changing now, so let's deal with the present problems on the ground. We're up to our eyeballs in "transhumanism" in the present, if anyone bothered to notice.

I may or may not be a typical transhumanist. That seems up for debate. I'm politically moderate compared to the stereotypical H+ skew -- neither libertarian nor socialist; I like my physical self just fine; I'm married with kids and I'm a creative, not a technologist/scientist/academic; I'm not even afraid to die, although I'd certainly prefer to live given the choice.

Here's my concrete problem: People ask me what I'm involved in. I say "transhumanism". They squint, brows knitting and ask as evenly as possible, trying not to betray their suspicion, "What's that?" So immediately I'm behind the eight-ball and that's from people who don't have a preconceived notion to battle against.

The word has always sent up bad signals to my ears, even before I knew the historical background to it, and that's all I have to base my opinion on. Words are important to me. I use them for a living, so I take first verbal impressions very seriously. And "transhumanism" has never cut it. If I didn't care so much about the real world issues behind it, I'd let it slide as a "tried, but no cigar" attempt and move on with my life. But I do care. I'm pretty invested, especially since the WTA made me chairperson of the board of directors. One way or another, I seem shackled to a word I don't think promotes the concepts I'm interested in.

I've had to evaluate my own reasons for my reaction. I don't like "ists" or "isms". This immediately places me in J Hughes' "artist/fellow travelers" category, perhaps correctly. Perhaps not. Adding "ism" to the end of any word raises implications about movements and agendas and coercion from one part of society over another to fulfil the promise of the "ism" -- all concepts that people fear.

I also have a problem with "trans" (no less, "post"), beyond the species-suicide impression, and that involves the notion that it's something we're always heading towards and never reaching. When, in fact, we've been "trans" for a long time, since clothes, since glasses, since medicine, since we used the tools in our hands to change ourselves. By definition, humans are "trans". My life is no more similar to that of a first-generation Homo sapiens than it is to Kanzi the communicating bonobo. I have more in common with Kanzi...

But I'm cool with the "human" part of "transhumanism". We can keep that part. ;-)

Damien Sullivan said...

I like words too, but don't have a reaction to transhumanism... OTOH, I also don't go around trying to tell people I'm into transhumanism, so there's that, perhaps a lack of experience.

But, having hung out in role-playing game circles for a while, there's also GURPS Transhuman Space, a smorgasbord of (not superhuman) AI and genetic engineering and uploads and space colonies, all trying to be hard SF in the physical sense, some parts maybe less socioeconomically plausible than others. But it's a cool smorgasbord. So, yeah, attachment to the word, but I think that's a valid consideration too.

humans are "trans"

There's that too, an idea reinforced by Andy Clark's Natural Born Cyborgs. But having redefined transhumanism into good old human endeavors, how do we deal with debates over radical life extension or robotics or germline engineering or the rights of uploads?

I've thought transhumanism comes down to materialism. If you're willing to think of the brain and body as machines, you're naturally led to thoughts about fixing them and tinkering with them. "Modern medicine" as providing fully-functional spare parts (I hope to see it). Other people reject the initial vision, either because they conflate seeing people as machines functionally with treating people as mere machines politically, or because the whole thing offends their metaphysical self-image or something.

Russell Blackford said...

For me, my current position as an activist (in however small a way) just follows from having a naturalistic and rationalist worldview (rationalist in the sense of relying on reason rather than faith, as opposed to the sense in which it contrasts with empiricism), and my commitment to Millian liberalism. As I described in the original 2004 Betterhumans article, I have a long history of thinking about the future and being involved with science fiction. But what radicalised me, and has gradually led to whatever public presence I now have, was what struck me as the massively irrational and illiberal response when Dolly was cloned back in 1996 (with the announcement in Feb. 1997).

Really, all I want to do is deal with issues rationally and on their merits, and with a presumption that modern, free societies should attempt to accommodate innovation rather than suppress it. This includes such innovations as cloning, PGD, genetic enhancement, and so on.

In dealing with the issues on their merits, I find myself broadly in favour of enhancement, so that does make me a transhumanist by some minimal sort of definition. However, I'm sceptical about the singularity, uploading, uplifting, and many other favourite projects of my transhumanist friends. On the other hand, I am not so scornful of these ideas, or so opposed to them, that I dismiss them out of hand. I want to see them all discussed in a calm, rational way, looking at the pros and cons (in terms of what is feasible and what is desirable).

So, I could indeed fly under the radar and link myself more with people like John Harris. But I actually find my transhumanist pals to be kindred spirits, even if I'm not convinced by all their ideas.

I'd like the transhumanist movement (which may or may not be stuck with the name by now) to be broad enough to include someone like me. Indeed, there are probably a lot of people like me around who are not organised in any way. Many educated people whom I talk to are open to the very mild kind of "transhumanist" thinking that would provide a definition according to which I and Greg would be transhumanists.

As far as I can see, the transhumanist movement is the only organised body that speaks for our views ... though it probably also says things we would not particularly want to say. (Then again, the transhumanist movement as such does not have a party line on any particular issue, which is one of the things that I like about it.)

Nothing in this comment is meant to solve the problem. It just leads me to the conclusion that people with my views have reasons for involvement/engagement with the transhumanist movement, but also reasons to fly under the radar to some extent or in some contexts. I do think that our engagement and involvement with the movement is a good thing for the movement itself, whether or not it is always good for our own credibility.

In the end, I'll just have to live with some ambiguity, but it seems from some of the comments on this thread that I'm not the only one. That's a bit of an eye-opener.

I don't know whether Greg feels he's had his say, but I'd welcome any further comment from him, since his remarks have provoked soul-searching from a few people, including a couple of people whom I didn't expect would express their own sense of ambiguity as they have.

Anonymous said...

Ben Goertzel blog: "There is a lot of neuropsychological research showing that the 'self' is in a strong sense an illusion – much like its sister illusion, 'free will.' Thomas Metzinger's recent book Being No One makes this point in an excellently detailed way. The human mind's image of itself – what Metzinger calls the 'phenomenal self' – is in fact a construct that the human mind creates in order to better understand and control itself, it's not a 'real thing.' Various neuropsychological disorders may lead to bizarre dysfunctions in self-image and self-understanding. And there are valid reasons to speculate that a superhuman mind – be it an AI or a human with tremendously augmented intelligence – might not possess this same illusion. Rather than needing to construct for itself a story of a unified 'self entity' controlling it, a more intelligent and introspective mind might simply perceive itself as the largely heterogenous collection of patterns and subsystems that it is. In this sense, individuality might not survive the transcendence of minds beyond the human condition."

SH: This comment from an SI core member would seem to indicate that vaccinating one's computer so that it didn't turn into a red-eyed demonic super-AI might not have the same urgency as when this threat was first unveiled. I always thought of transhumanism as a more futuristic view of human potential, rather than of accomplishments such as fertility drugs or contraceptive pills.

I've found reading the other varied comments eye-opening. My own interest is mainly philosophical. So I used the above quote to bring up the question of how much virtual reality impinges on the real thing. Penrose tried to use Gödel's incompleteness result to defeat AI and its access to intuition (if there is such a thing :-)). Penrose lost this claim because it was held that a formal mathematical result didn't apply to or limit what was physically realizable, and I find occasions where philosophy actually serves to inform the physical realm rather scarce (relativity being one). I find myself surprisingly agreeing with the poster who emphasized real projects.

the inquisitive neurologist said...

Let me add my 2 cents:

At its core, transhumanism is the notion that in the not-too-distant future it will be both possible and desirable to transcend our natural physical and, more importantly, mental limitations. I am not very concerned about how Joe Schmoe might (mis)understand "transhumanism" from reading assorted internet trivia. For me it means the desire to be better humans, plain and simple. In a discussion it takes me less than a minute to explain what I mean by this word, and if my interlocutor disagrees, it's with the substance, not with the word itself. A de facto transhumanist expressing transhumanist views will be the target of opprobrium, no matter how assiduously he avoids the term. So, unless a bunch of Joes with pitchforks show up in my backyard, I will proudly proclaim myself a transhumanist. Even if this puts me in Mr Egan's crackpot category.

Mr Egan's remarks are all about rhetoric, and not about substance. Of course, he predicts our near future just as most transhumanists do. He is smart, he knows what the average transhumanist knows and then some. He wants what most transhumanists want - not to be bothered by jerks telling us what kind of medical treatments are sufficiently non-repugnant to be magnanimously permitted. So what exactly is his problem with transhumanism?

And yeah, I am a white male with a computer. You got a problem with it?

Rafal Smigrodzki

Robin said...

I've been enjoying this exchange quite a bit. I do have a question, though:
To continue the theme of raising the profile of transhumanism, one of the biggest reasons we have the "white males with computers" advocating uploading and the like is Greg Egan. In "Diaspora" and "Schild's Ladder" he thoroughly humanises characters whose intelligences are inside machines, and in "Distress" he makes believable characters who are beyond gender.

I must be missing something obvious, but how exactly does brilliant science fiction (I always think I need to use the bold tag for the word "fiction" when I get into these discussions) possibly provide justification for the (all too accurate) characterization of transhumanism as "white males with computers"? I can't follow this argument at all. It seems to me to be just a circular justification for the thing it means to explain - white male geeks read science fiction, so obviously white male geeks are interested in science fiction!

Just trying to figure out the argument here!

Anonymous said...

Is this whole discussion a waste of time, fretting over rhetoric when we should just roll up our sleeves and build something? Well, I think a certain amount of attention to rhetoric is necessary if you don't want people turning up with pitchforks, or legislating whole technologies out of existence (or banishing them to more accommodating jurisdictions) ... not to mention the fact that large amounts of research are funded either by governments or by patrons equally sensitive to political considerations.

Reality will eventually separate out the crackpot schemes from the realisable, but it would be naive to consider that sufficient in itself. Suppose a large group of people started referring to medical science as "technoshamanism" and including, along with its genuine accomplishments and realistic prospects, the notion that we'll soon be curing cancer by putting magnets under our pillows. (Maybe it doesn't work yet, but what if they're superconducting quantum magnets built by nanomachines!) Should evidence-based physicians then be happy to join in and embrace their true identity as "technoshamans" -- and hang their heads in shame at their intolerance and pettiness for wanting to distance themselves from the magic pillow-magnet crowd? After all, they can still keep practising rational medicine, and if they have to spend an extra 30 seconds at cocktail parties explaining that they're not, personally, "magic magnet technoshamans" but "real medicine technoshamans", well, that's not such a big price to pay, is it?

Anonymous said...

Transrhetorafest indeed... Delicious.

Greg wrote:

Russell, I share your concern that so many prominent Transhumanists are anti-egalitarian, but at this stage, quite frankly, to first order I consider a self-description of "Transhumanist" to be a useful filter to identify crackpots.

Response:

Terms like "transhumanist" or any other word that describes a lifestyle can easily turn to flakes if left undefined. Because it is a new word, and due also to abstract futurist conjecture, it will need redefinition by those who use it in a more consistent manner, via further reference or application, before it can gain wider acceptance. Like many words, it will have a variety of related meanings and uses, whether it's considered bunk or not.

Nick Bostrom has defined trans(post)humanism as a social movement. I think his definitions are sound, though if the Bostrom perspective is not deemed useful by enough influential folk, it will not catch on, and will be left to remain in the historical archives of the transhumanist movement.

When talking about a "transhuman," an individual within a transhumanist society, this could refer to a human who has indefinitely augmented their genetic or molecular code, biologically or through non-biological means, below a 50% threshold. This definition would not include metabolizable medication whose effect wears off.

In my work in Futures Economics, I define a transhumanist economy as one that can no longer sustain itself based on labor value, dependent on an economic system that is egalitarian in its method of income distribution by means of a negative income tax or universal shareholder policy (or whatever policy “the rich” think best at the time, traditionally).

A "posthuman" could be considered a person who is over 50% non-biological, or who is augmented biologically by more than that pivotal margin. A posthuman economy is one that is at least 50% post-capitalist, meaning an economic system where materials (former goods) are of no financial cost.

When the meaning of trans(post)human is well grounded, the pot can remain intellectually crack-free. It will largely depend on how the term is used. I would not consider myself a trans(post)humanist based on how I've defined it, though I would consider myself a trans(post)humanist writer when a jab is taken at the topic.

Greg wrote:

People who think their manifest destiny is to turn Jupiter into computronium so they can play 10^20 characters simultaneously in their favourite RPG are infinitely more odious and dangerous than the average person who thinks this whole subject is science-fictional gibberish and would really just like to have 2.3 children that are members of his/her own species, so long as they don't have cystic fibrosis and live a slightly better life than their parents.

Response:

This might also require an entity with 10^20-scale processing power to handle the myriad characters of Jupiter Computronium (this gamer could then be considered posthuman), though I think we will find that 1 character and 10^20 characters are so similar to one another that gaming on this scale renders itself trivial. If a Jupiter Computronium were to become possible in what physicists consider a finite universe, moral AGI-based agents could be in place to talk the user out of such a futile action long before such a scenario became likely. There may, however, be a space where such things could happen, given that the universe is expandable rather than expendable.

Russell wrote:

"So, let me make my somewhat seditious proposal explicit: We should not call ourselves 'atheists.' We should not call ourselves 'secularists.' We should not call ourselves 'humanists,' or 'secular humanists,' or 'naturalists,' or 'skeptics,' or 'anti-theists,' or 'rationalists,' or 'freethinkers,' or 'brights.' We should not call ourselves anything. We should go under the radar—for the rest of our lives. And while there, we should be decent, responsible people who destroy bad ideas wherever we find them."

Response:

...and while we're at it, let's call ourselves meaningful relativists! Oh wait, that's not under the radar. But wait! I don't have a bar code, which means your radar can't read me. Hehe!

Russell Blackford said...

I did write that, Nathan, but don't forget that I was quoting Sam Harris. :)

I don't actually agree with him about atheism, etc. I do, however, think the argument is stronger if you apply it by analogy to transhumanism.

I'm surprised, although not displeased, to see that even some leading figures in the transhumanist movement have a bit of discomfort with the t-word. James doesn't, but PJ Manney and Giulio Prisco do. That suggests that something about it is, hmmm ... less than optimal for current purposes? Not that I can see an alternative or that I want to undo all the good work that has been done in its name.

By the way, thanks to everyone for continuing to contribute in such thoughtful and candid, but courteous, ways. I'm finding the discussion very helpful, so I hope others are.

the inquisitive neurologist said...

Indeed, proper attention to rhetoric is important. One needs to claim the moral high ground from the outset, while adjusting one's tone from friendly persuasion to fire-breathing sermonizing, as the occasion warrants.

But do I really make a strategic blunder by calling myself a transhumanist? The analogy to "technoshamanism" is not very apt: the terms "physician" and "shaman" are well set in their meanings, one a jealously guarded brand name, connoting compassionate competence (alliteration absolutely accidental), the other a condescending term for brown people's doctors. Of course, as a practicing physician I don't want to be associated with either the word or the practice of shamanism. But "transhumanism" is still a meaningless or vague term in the minds of most people who hear it. This means that in a discussion or presentation you stand or fall by the strength of your arguments, and your overall rhetoric, not by this one word. You start from the high ground - freedom, compassion, progress - give it a name to differentiate yourself from the crowds claiming to inhabit this moral pinnacle, and build from there.

Of course, if "transhumanism" becomes a household word but is hijacked by people with objectionable agendas (in a similar fashion to the word "liberal", which in the US came to mean almost the polar opposite of its classical meaning), then I will regretfully abandon it, and take on whatever moniker my sane, non-crackpot co-believers come up with.

Hopefully not before the Singularity, praise be Eliezer.

Rafal

citizencyborg said...

The suggestion that we all eschew labels is one that I have encountered frequently in my religious and political life. The problem with it is that if I tell you I support the electoral fortunes of parties of the Socialist International, and favor redistribution of wealth, universal healthcare, internationalism, strong regulation of corporations, worker co-ownership, and a basic income guarantee (as I do) no matter what I call myself, most people will call me a "socialist."

So my choice is to (a) insist that my politics cannot be labeled, which is patently not the case, or (b) loudly insist that I am actually just a "progressive" or whatever and certainly not a "socialist", thereby annoying friend and foe with my disingenuousness, or (c) work through word and deed to assuage people's concerns about socialism, educate them about different kinds of socialism and their histories, and acknowledge that there is an ideological landscape that I fit into in the democratic socialist space.

The same problem applies to the T-word, except that far fewer people have heard it yet, and therefore the ideological space and its labels are vaguer and more flexible. It is possible that, if some very well placed opinion makers decided to co-opt the memes of transhumanism and promote the ideas under a different name, the T-word could become a historical footnote. But no such actors are yet on the scene, and there are no terms which have been suggested which do not have their own limitations and unfortunate baggage. Superhumanism? Posthumanism? Bio-utopianism? Technoprogressivism? Futurism? "Defenders of the legitimacy of consensual enhancement and prosthetic practices"? They all have drawbacks.

If you have coherent opinions on any subject there is an ideological landscape against which you can and will be weighed. My preferred stance is to embrace my labels and work to defend them and improve the associations they connote rather than to fruitlessly complain about them.

------------------------
James Hughes Ph.D.
Executive Director, Institute for Ethics and Emerging Technologies
http://ieet.org
Associate Editor, Journal of Evolution and Technology
http://jetpress.org
Public Policy Studies, Trinity College
http://internet2.trincoll.edu/facProfiles/Default.aspx?fid=1004332
Williams 229B, Trinity College
300 Summit St., Hartford CT 06106
(office) 860-297-2376
director@ieet.org

Anonymous said...

http://video.google.com/videoplay?docid=-2874207418572601262&q=almaden+cognitive+computing

The Emergence of Intelligence in the Neocortical Microcircuit, Henry Markram (1hr 10min.)
He begins by speculating about how to convince an alien culture that we are intelligent: send a blueprint of how to build us! I think most would agree that this is a very Transhumanist theme in the guise of an IBM Cognitive Computing Conference talk. It is very plausible.

Anonymous said...

Robin asked:
I must be missing something obvious, but how exactly does brilliant science fiction (I always think I need to use the bold tag for the word "fiction" when I get into these discussions) possibly provide justification for the (all too accurate) characterization of transhumanism as "white males with computers"?


Well, I'm not saying science fiction is the cause of the "white males with computers" sort of transhumanism; I'm saying the extreme examples of "white males with computers" whom Greg was likening to AI cargo-cultists are inspired by the sort of science fiction that really appeals to white males with computers. The argument is based on observations of transhumanism-related mailing lists and blogs. It seems a lot of the ones advocating things like uploading, turning Jupiter into a giant computer, radical cyborging, and other highly posthuman ideas are men. As for race - you can't be sure over the net, but the few who post photos seem to be white.

Amongst those advocating gentler forms of enhancement and considering the social implications, you see a wider variety of people in gender and nationality. Maybe the Wikipedia article on transhumanism is correct, and the contempt for the flesh some transhumanists espouse is very much a male concern.

Perhaps we ought to find some bored social scientists and get them to survey the world of transhumanism and its fellow travellers, and find out if there are gender differences and national differences in how people view different visions of transhumanity. Would the WTA analyse its own occasional surveys for such data?

One final musing - I wonder if there are serious gender differences in readerships for different styles of science fiction. I'm sure the marketing departments of publishing companies keep an eye on the demographics of who buys their books, but has any serious analysis been made? This might tell us about preferences for different visions of the future. Of course, this could well be extended to other genres - does people's taste in crime fiction reflect their lifestyle? Does their taste in contemporary thrillers reflect their subconscious fears about the world, or are their views shaped by such books? There's surely a PhD or three in there somewhere.

Michael Anissimov said...

Wow, Greg Egan is awfully negative about transhumanists. Yes, hard takeoff AI can seem magical if you haven't read the reasoning behind the idea. Since most things Mr. Egan writes may be categorized as "transhumanist literature", I think he has as much stake in improving the connotations of the word as we do.

People who think their manifest destiny is to turn Jupiter into computronium so they can play 10^20 characters simultaneously in their favourite RPG

Why are you ragging on my dreams? ;o

Anyway, your accusations are so spirited that surely you must have read a number of statements without their sober supporting framework, which then seemed extraordinary to you. Google "rapture of the nerds" for an intelligent response to many of the main criticisms around the Singularity movement.

I'm afraid that I will take your crackpot accusation at face value, and assume that I, like presumably Vernor Vinge, Ray Kurzweil, Ben Goertzel, Steve Omohundro, and Eliezer Yudkowsky, am a crackpot in your eyes. I'm coo-coo for cocoa puffs!!!

And Mr. Blackford, whose writing I've been reading and appreciating for several years, I have to make the request: take the step, call yourself a transhumanist, sans qualifiers. You can contribute to the positive connotation yourself. With confidence, you can use the label without fearing that critics will misjudge you. If they do misjudge you, explain your position to them. If they refuse to listen just because of the label you have given yourself, then that's their closed-mindedness, not yours.

Roko said...

I think that we need to make it clear that transhumanism is for rational people ONLY, and then criticism of the kind found here will hopefully stop.

I think that the word "transhumanist" is a double-edged sword, and if we let it be used willy-nilly it will cut us. If, on the other hand, it is made clear that to be a transhumanist you have to abide by the following rules:

1. Have a rational, naturalistic worldview.

2. Not be pro-technology just because it looks/sounds cool, but rather because it improves the lives of people who use it/are affected by it

3. Not argue in a biased way about how feasible a particular technology is.

then we may find that people like Egan adjust their attitudes somewhat. I feel that the transhumanist declaration needs to be changed so that it is clearer on the above points. We must not try to build too big a tent, because we may find that certain people in such an enlarged tent will spoil it for everyone.

Roko said...

Echoing PJ: "Here's my concrete problem: People ask me what I'm involved in. I say "transhumanism". They squint, brows knitting and ask as evenly as possible, trying not to betray their suspicion, "What's that?" So immediately I'm behind the eight-ball and that's from people who don't have a preconceived notion to battle against."

I agree with this totally. I have found that the word "transhumanist" is a real ball and chain. I have started saying "Ethics and Emerging Technology" instead. See, for example:

http://www.facebook.com/group.php?gid=8199528562

Anonymous said...

What do you tell someone, let's say a young student, who says that they want to learn about matter and energy and their interactions? Well, you tell them that they actually want to study PHYSICS. Just by using this simple word, you can point them in the right direction. Then, that person might begin their studies by examining what people in the past have discovered, written and said about PHYSICS. Then, they might want to focus their attention on one branch of PHYSICS and try to expand their learning or make contributions themselves to the body of knowledge, and meet others who are also interested in the same field so that they may exchange ideas and collaborate with them on future projects.

Now, what do you tell someone who says that they want to learn about extending the human life span (maybe indefinitely) or explore the possibility of enhancing human cognition? Re-read the previous paragraph but replace the word PHYSICS with TRANSHUMANISM.

I am not suggesting that Transhumanism is a branch of science, but my point here is to illustrate why some term is required. We need to name things, and this is true for anything and everything - people, organizations, new discoveries, philosophical movements etc. I presume that all that people such as Nick Bostrom, David Pearce and Dr. Hughes are hoping for is to create a movement that is based on goals worth pursuing passionately - goals such as the amelioration of suffering and the extension of the healthy human lifespan. Progress rarely happens without some kind of scaffolding.

But here is Greg Egan (after himself admitting that he hasn't really taken the time to fully explore it) declaring the entire movement to be mostly composed of crackpots. Firstly, I suspect that he is a bit agitated because he has unwittingly become a notable figure for Transhumanism. And this stems from his misunderstanding of what Transhumanism actually represents. Mr. Egan, I challenge you to describe your views on science and technology and what they should be used for, if you can, in a few sentences. What are you going to say? That you believe that human beings should NOT use technology to improve their lives? That life extension is NOT desirable? I can fill an entire page with such questions, and even if you answered positively to a single question, I would consider it reasonable to attach the tag of "Transhumanist" to you, along with whatever other tags you already have.

So, I suggest you spend a little time looking around and trying to get a clear picture before making up your mind. But, judging by the tone with which you've repeatedly made your point, it's clear that you are very unlikely to change your mind on this subject. But do you have a better solution for all these serious issues? Do you have anything constructive to say, other than suggesting that the entire movement be crushed before it even has a chance to prove itself?

Ultimately, it's obvious that only the passage of time will really tell us what role Transhumanism will play in the future.

Russell Blackford said...

So, Roko - are you implying that atheistic philosophical buddhists are people who should not be let into the tent? Maybe I misunderstood you, in which case you might want to clarify, but if that's what you were getting at, I have to disagree.

Anonymous said...

Anonymous wrote:

I can fill an entire page with such questions, and even if you answered positively to a single question, I would consider it reasonable to attach the tag of "Transhumanist" to you

It's nice to be told by someone who declines even to put their own name to their own views that if I believe in any beneficial use of technology I am automatically hostage to the beliefs and behaviour of tens of thousands of other people with whom I have that, and only that, in common.

Your comparison with the field of physics merely assumes what you set out to argue: that the word "transhumanism" already exists as an intellectually respectable label for any positive attitude to this large collection of ideas. This claim is precisely what is contentious, and you offer no evidence in support of it.

Do you have anything constructive to say other than suggesting that the entire movement be crushed before it even has a chance to prove itself?

I'm not interested in "crushing" anything. The technologies under discussion will continue to be discussed, and the realisable ones realised, whether or not anyone continues to call themself "transhumanist". What I am debating is the wisdom of that word, given its own intrinsic qualities, and the large number of people who have adopted it who carry a great deal more baggage than answering positively to some checklist of life-enhancing motherhood statements.

When I see the word "transhumanist" in isolation, its etymology suggests a "passing beyond" what is human. Now the word "human" has been abused for centuries by people who wish to narrow its definition to exclude their enemies, but I (let alone the wider public) do not consider it debased to the point where it describes a state that anyone would want to shrug off lightly. Against this, James Hughes and some others have suggested that it still means nothing to most people, and can be defined benignly. I don't pretend to have a definitive answer, I'm merely giving my own impressions, for what they're worth.

I don't think anyone can dispute that there are crackpots who call themselves transhumanists ... but then there are also crackpots who call themselves by every other label we might wish to reserve for the sane. So the question for me is whether there's anything on the rational side that really needs a separate label.

To be clear, I am all in favour of people debating the merits, ethics and dangers of all kinds of wild scenarios, and at some level it's perfectly fine that there are discussions where ideas from even the most speculative branches of SF are treated seriously. But having not lost touch, myself, with the difference between an SF-grade idea and an imminent development in real technology, I am not especially keen to adopt a label that associates me with thousands of people who manifestly have lost the ability to make that distinction.

But personally, I have almost nothing to lose here; being lumped in with crackpots is just an occupational hazard. The important thing at stake is whether realistic proposals for benign technologies will, by coming under the "transhumanist" umbrella, be tainted by association with some of the more vacuous ideas and repugnant agendas that presently share that umbrella.

Anonymous said...

I wrote:

Against this, James Hughes and some others have suggested that it still means nothing to most people, and can be defined benignly.

I should have made it clear that the "it" here that "still means nothing to most people" is the word "transhumanist", and not the word "human" of the preceding sentence.

VDT said...

Russell Blackford wrote:

How transhumanism will develop currently remains to be seen, but for the foreseeable future I'd rather be inside the tent exerting some influence on it, rather than abandoning, rejecting, or disavowing it as some people among its one-time allies have done.

Hello Russell,

Since I am among the "some people" you linked to, your readers should know that, after spending a few years trying to exert some influence on this ideology and subculture to the best of my limited abilities, I came to the conclusion that it is handicapped by:

1. An undercritical support for technology in general and fringe science in particular;
2. A distortive "us vs. them" tribe-like mentality and identity; and
3. A vulnerability to unrealistic utopian and dystopian "future hype".

So, in my humble opinion (which is the product of 5 years of observation and participation), the real problem isn't that this ideology and subculture is not "inclusive" or "humanitarian" enough. It's the lack of a strong culture of critical thinking (and by "critical thinking" I don't mean sorting through the body of data and selecting those that most confirm what we already believe, and ignoring or rationalizing away those that do not...)

So if you or anyone else wants to exert some influence on it, that's the problem you should focus on overcoming, even if it leads to more fragmentation. If and once this is done, everything else will fall into place.

The Pwnee said...

Undoubtedly this will resolve itself when it needs to be resolved. If technology progresses anything like what any party-line futurist predicts, then in another decade, maybe two, the general population will select their own label, not because they want to foster discussion but because there will be real primate urges to classify all the crazies either actively modifying themselves or rallying to legalize their right to do so.

This will be the real 'transhumanist' movement, and to be honest, while I'm pro "human ascension" or what have you, I'm not sure I'll be involved in it, at least not at that stage.

Anonymous said...

Fascinating discussion, many thanks to all. I'm delighted to learn that so many people with transhumanist ideals also identify as left-leaning; I too had previously gained the impression that anarchocapitalism was an almost obligatory part of the package.

I think we are going to have to embrace and defend the label. Any new label will quickly acquire the same problems, and we'll end up constantly running from the labels of our past instead of standing our ground and defending what we do believe.

VDT said...

James Hughes said:

Words don't really mean anything necessarily (Wittgenstein) but only in their associations. In the UK it is associated with the sober Dr. Bostrom and well-defended in the British media, in Italy transhumanism has been demonized as a left-wing plot against the Church, in the Bay Area it is seen (by both advocates and critics) as Silicon Valley libertopianism, in Quebec it is seen as hyper-Americanism, and in Nigeria and Kenya it is seen as the Enlightenment on steroids. But for 99.9% of people on the planet it still means nothing yet.

In the past, I would have argued that in Quebec it is mostly seen through the bioconservative perspective of Francis Fukuyama and Klaus-Gerd Giesen, which has been embraced by a few prominent journalists. Now, I would argue that it is or will increasingly be seen for what it is: a techno-utopian ideology and subculture focused on the dubious notion of "human enhancement", which aspires to be perceived as a serious intellectual and cultural movement.

VDT said...

cibergoth said:

Fascinating discussion, many thanks to all. I'm delighted to learn that so many people with transhumanist ideals also identify as left-leaning; I too had previously gained the impression that anarchocapitalism was an almost obligatory part of the package.

Before you or anyone gets too excited about the claim that many transhumanists now identify as "left-wingers" or "techno-progressives", I think you should read Dale Carrico's post on his blog Amor Mundi entitled "Technoprogressive": What's In A Name?

Furthermore, in the context of a project I am working on, I may write an essay entitled Technoprogressive?: A Critique of the Pseudo-Left of Transhumanism to enable outsiders to get a more critical perspective of this particular ideology and subculture that helps them see through the public relations spin...

Anonymous said...

First, I would like to say that I am a Christian (I hope this little fact does not inspire hostility in some of the blog's resident readers).

At the very same time, I identify myself as a Transhumanist (in the wiki definition). Yes, I mean it.

I strongly identify with Transhumanism as a movement that intends to improve the human condition, compassionately ameliorating suffering and allowing most people to be smarter and healthier than the current norm. I truly believe this is the sort of thing that God intends us to pursue in our lifetimes.

Now, even after reading all the comments I still have a question, primarily to Greg Egan, but also to everyone else around here.

Question: Why is the word "Transhumanism" a suicide note for the species?

Is there some historical "luggage" I am unaware of? (AFAIK, neither Hitler nor Pol Pot ever claimed their initiatives to be "transhumanist".)

Is there something suicidal in the etymology that I fail to grasp?


IMHO: At worst, it might imply "passing beyond" what is human, just as Greg Egan said, but even that is not so horrible a message, because:

1) It does not directly imply "leaving behind ALL that is human", but may instead be interpreted as "leaving behind what is unquestionably wrong and hurtful about being human" (for instance, developing cancers is not good, but it is intrinsic to the current state of human biology).

2) ALL world religions have implicit or explicit suggestions of a need to overcome and "pass beyond" the human, at least in the spiritual/psychological department.

So there is no contradiction between "improving the human" and modern religion (and there is certainly no direct, obvious ban on body modification in Scripture). I honestly think that people who claim to oppose T on "religious grounds" are either misguided or have some "ulterior motives".

3) It appears that for at least the last 3000 years we have been acting as transhumanists, by constantly improving our bodies and minds. The most recent and most clearly transhumanist (and, apparently, humane) achievement is vaccination, which, after all, IS improving already healthy people so that they are no longer susceptible to disease.

4) From my experience, most people in France and in Europe generally do not know much about this concept and are unfamiliar with the "T-word". In fact, they hardly care.

P.S.: @Greg Egan
Are your books published in French?

citizencyborg said...

That's a good point, Louis. Etymologically, "trans" implies "encompassing", not "post", which is precisely what we want to imply. For instance, this is one of the reasons we in the WTA consider ape rights to be a transhumanist issue. The transhuman polity is one in which personhood is the criterion for rights-bearing, not "humanness", and that means apes should be let in.

As to being a Christian transhumanist, there are many debates. But there are a growing number of people who are religious and transhumanist, whether the seculars think that is coherent or not. Although I'm an atheist/materialist Buddhist, my essay on the compatibility of religion and transhumanism may be of interest:

http://ieet.org/archive/20070326-Hughes-ASU-H+Religion.pdf

------------------------
James Hughes Ph.D.
Secretary, World Transhumanist Association
http://transhumanism.org
Williams 229B, Trinity College
300 Summit St., Hartford CT 06106
(office) 860-297-2376
director@ieet.org

Anonymous said...

James wrote:

That's a good point Louis. Etymologically "trans" implies "encompassing" not "post", which is precisely what we want to imply.

I don't want to harp on this endlessly, and I don't doubt for a moment that this is what James and some others sincerely want the word to imply, but the majority of English uses of the prefix "trans-" imply either moving across something (transcontinental, transatlantic), or moving beyond something (transcend, transgress).

Although "transatlantic" has also come to be used to mean "encompassing both America and Europe", that's a relatively rare way of using the prefix. If you really want to talk unambiguously about encompassing things, the prefix is "pan-". But don't expect transhumanists to describe themselves as "panhumanists", because "panhumanism" does not suggest change.

The actual origin of the word, according to Wikipedia, certainly has nothing to do with encompassing anything:

The etymology of the term "transhuman" goes back to futurist FM-2030 (born F. M. Esfandiary) who, while teaching new concepts of the human at The New School university in 1966, introduced it as shorthand for "transitory human". Calling transhumans the "earliest manifestation of new evolutionary beings," FM argued that signs of transhumans included physical and mental augmentations including prostheses, reconstructive surgery, intensive use of telecommunications, a cosmopolitan outlook and a globetrotting lifestyle, androgyny, mediated reproduction (such as in vitro fertilisation), absence of religious beliefs, and a rejection of traditional family values.[1]

The concept of transhuman, as an evolutionary transition, was first expressed by FM-2030 outside the confines of academia in his contributing final chapter in the 1972 anthology Woman, Year 2000.[3] In the same year, Robert Ettinger contributed to conceptualization of "transhumanity" in his book Man into Superman.[4] In 1982, Natasha Vita-More authored the Transhuman Manifesto 1982: Transhumanist Arts Statement and outlined what she perceived as an emerging transhuman culture.[5]


So the origins of the word are all about transition: passing from the human into the transhuman on the way to something else.

And I think for most people the implication from the etymology and the context will be "transcending humanity": moving beyond it.

Robin said...

the 1972 anthology Woman, Year 2000.

The year of publication and title of this lead me to believe it could be an incredibly interesting historical look at gender-based futurism. Just curious whether anyone has read it, before I hunt it down and waste my time if it's not very good.

More on-topic, in my experience with self-identified transhumanists (James Hughes apparently being an exception!), Greg Egan is spot on regarding the implications of the prefix "trans-". I've heard "overcoming the limits" of humanity as shorthand for the primary goals of the subculture more than anything else.

As a philosopher who works on how humans conceive of bodies and minds (usually as if they were separate things, unfortunately), I would argue that there's nothing new under the sun as far as this allegedly radical notion of humans taking up our technologies and transforming our bodies, brains, and cultures through those technologies is concerned. Sticking a sharp rock on the end of a stick, making tools and accessories out of materials in the world around us - this has ALWAYS been what it means to be human. The only thing I can see the concept of "transhumanism" offering that's new is the identification with a subculture and science-fiction fetishism. I can see why Greg Egan hesitates to be the appointed poster child for a movement he uses as a basis for IMAGINED futures. (They've always been wonderful futures, but that's what makes him a fantastic writer and not a cultural seer.)

Damien Sullivan said...

Much of this discussion seems odd to me, having recently learned of a lefty artsy transhumanist group in Bloomington, IN.

One LiveJournal community offers the definitions "Transhumanism is an interdisciplinary approach to understanding and evaluating the possibilities for overcoming biological limitations through technological progress" and "Transhumanists seek to expand technological opportunities for people to live longer and healthier lives and to enhance their intellectual, physical, and emotional capacities", which seem pretty spot on to me.

Whether this is just good old progressive humanism is debatable; one can certainly argue that turning our tools and transformative processes onto ourselves would be a big transition, not the same as new pointy sticks, and a transition the prospect of which already leaves some humanists behind. When you start changing the random nature of genetic inheritance, the influence of the genes, the limitations of intelligence or natural lifespan... is that not worth consideration, and perhaps a label?

A comment on one of the links to Dale Carrico's posts castigated someone for talking about "immortality". I'd note that much of the time, technological immortality means rejuvenation medicine, medical spare parts, or uploading, not some metaphysical "cannot die, ever". So considering someone a crackpot just because they use the term "immortality" is premature; they might just mean "not dying of old age", for which I don't know any common term, though Wil McCarthy tried to give us "immorbidity".

Robin said...

I'd note that much of the time, technological immortality means rejuvenation medicine, medical spare parts, or uploading, not some metaphysical "cannot die, ever". So considering someone a crackpot just because they use the term "immortality" is premature;

I consider anyone who believes uploading is possible to be a crackpot, or at the very least tragically misinformed about the world. Crackpot is a bit malicious here, and I'd genuinely prefer to believe that people are not willfully ignorant on this topic, but the misunderstandings that promote this idea truly are tragic. (I can suspend disbelief on this when reading science fiction, and some of the best books and stories involve this premise, but they're good stories because they force us to question the metaphysics of body and mind, and challenge and awaken our philosophical imaginations, not because they provide a roadmap for our futures.)

Anonymous said...

I consider anyone who believes uploading is possible to be a crackpot, or at the very least tragically misinformed about the world.

I'd be curious to know why you're so confident of this. I certainly don't imagine that any worthwhile form of uploading is imminent, but that it is impossible in principle is a much larger claim.

Robin said...

I'd be curious to know why you're so confident of this. I certainly don't imagine that any worthwhile form of uploading is imminent, but that it is impossible in principle is a much larger claim.

I realize it's a very strong claim, but I think the very concept is incoherent and reveals a hidden belief in dualism of body versus mind that is fundamentally and importantly wrong. It's slow in coming, but cognitive scientists are finally starting to recognize that the body isn't just the "hardware" for the "software" of the mind. It is, instead, the very foundation and possibility for minds at all. (I'd say Lakoff and Johnson, as well as Antonio Damasio's work, are some of my biggest influences on this point.) The idea of uploading one's mind is really just reinforcing the notion that minds and bodies are fundamentally different sorts of things, a view which is unfortunately deeply ingrained in the English language and Western cultures since Descartes, but also just plain incorrect.

I do hope I haven't incidentally called you a crackpot, as Permutation City was the first work of yours that I read (in a Philosophy of Science Fiction class circa 1995), and I loved it so much that I recall actually emailing you to express my admiration (which I only ever did with you and Marvin Minsky!). Like I said, I think the ideas are brilliant because of the very fact that they force us to come to terms with what it would mean for these scenarios to be true. (I'm teaching a course in the fall called Artificial Intelligence in Fact and Fiction, and hope to include some of your works, like "Learning to Be Me", which I admit I've taught in my Intro to Philosophy classes for the last 8 years.)

I know my claim is controversial and by no means the obvious or default view, but after so many years of working in cognitive science, I just don't see uploading as any more coherent than zombies or heaven.

Anonymous said...

Robin wrote:

cognitive scientists are finally starting to recognize that the body isn't just the "hardware" for the "software" of the mind. It is, instead, the very foundation and possibility for minds at all.

I honestly have no idea what that means. If you're just stressing the point that the human mind is a product of a vast biological, evolutionary and social context, and can't be understood in isolation from that context, I agree with you 100%. But all except the most naive notions of uploading include some effort to reproduce that context, and in the limiting case of sufficiently far-future technology, there is no physical principle to prevent the uploading of everything which is capable of having a causal effect on a given person's brain, body and behaviour. In that limit we could even satisfy Roger Penrose and simulate everything (a person's whole body and immediate environment, down to the atomic level) on a quantum computer.

To render the most sophisticated forms of uploading impossible for all time, you really have to insist that only the precise biochemistry of the human brain can give rise to human subjective experience -- or if you accept some level of substitutions, you need to offer some basis on which certain substitutions leave our experience intact, while others don't.

So I guess I probably am a crackpot by your definition, but don't worry, no offence taken. Some people don't believe the Strong AI Hypothesis, and if they've already thought about the issues, as you obviously have, there's not a lot I can say to them. While there are thought experiments in its favour that I find persuasive, there is no actual evidence either for it or against it -- and I'm not even sure what would constitute evidence -- so there's nothing much to be done except agree to disagree.

Robin said...

Actually, I don't think we disagree as much as you might think. I haven't written off Strong AI at all, but I do genuinely believe that there's almost no one approaching the problem in a way that will lead to success. When I talk about the importance of the body, I mean that our abstract concepts and rationality itself are only possible because we have the sorts of bodies we do (not chemically, but structurally). Abstract thought co-opted the neural structures already in place, and so we conceptualize abstract notions via sensori-motor pathways. A good deal of work is being done on this in the neuroscience literature now, and some philosophers proposed the idea at least 25 years ago.

So, I think the main difference of opinion we have about uploading is that I would say it's different than Strong AI - Strong AI is the idea that we can generate a new mind on a different sort of system. As long as we start with a real physical body in the real world, I do believe this is possible. (Otherwise I would have to be committed to something like magic carbon, which would make me the crackpot for sure). But uploading is different - there's not only the implication of doing away with the body altogether, but also the idea that it isn't merely A consciousness in a machine, but somehow YOUR consciousness in a machine. Your consciousness is actually part of your body, not merely riding along in it like a ghost in the machine.

I can't guess if this is making sense, as it's almost 5am here and I haven't slept for days because I'm writing dissertation chapters, but I hope I've gotten my (definitely non-standard, but I think fully supportable) viewpoint across. AI - a qualified yes; uploading - an unqualified no.

Giulio Prisco said...

Robin on uploading: "I think the very concept is incoherent and reveals a hidden belief in dualism of body versus mind that is fundamentally and importantly wrong."

Quite the contrary, in my opinion - thinking that uploading is a priori impossible reveals a hidden belief in a mystic, magical "essence" separated from physical reality and closed to science.

I agree with Greg's position, which sounds quite reasonable.

Anonymous said...

Robin, please don't answer this while you're still sleep-deprived and busy, but (begging our host's indulgence) I'd be interested to hear at some point what you feel about various "substitution arguments". For example, Hans Moravec asks us to think about what we believe would happen if we replaced various proportions of our central nervous system with prosthetic subsystems that were capable of producing the same biochemical outputs from the same inputs. Does 1 prosthetic neuron stop me being me? Stop me being human? Stop me being conscious? How about 10, 100, etc.?

Abstract thought co-opted the neural structures already in place, and so we conceptualize abstract notions via sensori-motor pathways.

I'm sure this is true, but what makes you think uploads can't have sensori-motor pathways? If it's not magic carbon but structure that makes a sensori-motor pathway what it is, then why can't an upload have something that shares the same causal properties as those neural pathways, and the same relationships to a body (or indistinguishable proxy) and the world (or indistinguishable proxy)?

If you really want to, we can put the upload in a body that wanders the real world, but that shouldn't actually make any difference compared to a sufficiently detailed full-sensory VR. If you don't stop being human when your senses are immersed in VR, why should an upload's status depend on whether it's perceiving, and acting in, a physical world or a suitable virtual one?

Russell Blackford said...

I'm happy to indulge the discussion. It's interesting, and it's something I've often thought about. I, also, have a lot of problems with uploading, but they tend to be sort-of practical ones, rather than issues about whether the whole thing makes sense even in concept. The practical ones include issues of continuity, type-token problems, and the like, which are not just practical, because they raise questions of identity and survival - I'm not sure whether these are genuinely metaphysical questions or ultimately questions about what we value, or perhaps they are merely about semantics.

There's also the sheer difficulty involved. As I said right at the start of the thread, uploading is not something that I actually advocate.

But all that said, I don't believe in magic carbon in the way that John Searle (arguably) does, and I don't have any disagreement with what Greg is saying in his last couple of comments. At least I don't think I do (while also not feeling super fresh right at the moment). I don't see how the idea is dualistic in any improper sense; I've always resisted the claim that there is something Cartesian about it. Really, there's a huge difference between what we're talking about and Cartesian substance dualism, and I think the popular equation of the two is something to be avoided.

citizencyborg said...

I'm with Greg on uploading. There is nothing irreducibly complex about memory and cognition which cannot be duplicated in a medium other than meat. I think the problem of translation, including the problem of creating a robust duplication of the embodied base of cognition (proprioception etc.), is far more complex than the average uploading enthusiast imagines. I also imagine that it will take years or decades of nano-neural interfacing to begin to build a dynamic model of a person's memories and personality. But all of that is in principle possible, and I imagine doable in a hundred years or so. One can accurately criticize techno-utopians for often having an overly optimistic and insufficiently democratic/egalitarian/regulatory mindset, and there are a lot of techno-utopians in the H+ milieu. But the abstract idea of uploading feasibility is simply an extrapolation of contemporary scientific materialism.

J. Hughes

Giulio Prisco said...

James: "is far more complex than the average uploading enthusiast imagines.... But all of that is in principle possible, and I imagine doable in a hundred years or so".

I agree. I don't know what the average uploading enthusiast imagines, but I see the first uploading experiments in the second half of the century, and any operational deployment of uploading technology a few decades later. I think we will have to wait much longer than 2026 (the date of the first Copy, according to Greg's Permutation City).

However, I don't think we will see any breakthrough anytime soon; but regarding uploading as feasible in principle seems quite compatible with our current scientific understanding of the universe. On the contrary, Magic Carbon (the idea that there is something "magic" and beyond science about our current material substrate) seems to me an unscientific, New Ageish and mystic notion.

Anonymous said...

We probably won't be able to get the leading neuroscientists to personally comment here, but I thought you'd all be interested in hearing what some of them had to say about uploading when this exact issue came up during the IBM Almaden Institute conference in 2006.

This is a video of the philosopher John Searle saying pretty much the same thing as Robin (or is it the other way around?):

Most relevant at t = 3:30 to 5:05:
http://www.youtube.com/watch?v=nRwOuE7IJoA

Then, here is the UCSD scientist Robert Hecht-Nielsen arguing with Dr. Searle that he shouldn't be so categorical about whether consciousness can be simulated in machines.

Most relevant at t = 4:10 to 8:35:
http://www.youtube.com/watch?v=buvUPUwojiM

For those interested, here is Dr. Hecht-Nielsen's full talk:

http://video.google.com/videoplay?docid=4572207081038401578

And here is Dr. Searle's full talk:

http://video.google.com/videoplay?docid=-3295448672203577230


Hope this adds something to this fascinating discussion. I'd urge all of you to get together and discuss this properly, instead of posting tiny comments on a blog post.

Anonymous said...

It's me again. If you're hooked after watching those, you might as well check out these talks as well:

Jeff Hawkins: Hierarchical Temporal Memory: Theory and Implementation

http://video.google.com/videoplay?docid=-2500845581503718756

Henry Markram: The Emergence of Intelligence in the Neocortical Microcircuit

http://video.google.com/videoplay?docid=-2874207418572601262

Thomas Metzinger - Being No One: Consciousness, The Phenomenal Self, and First-Person Perspective

http://video.google.com/videoplay?docid=-3658963188758918426

Anonymous said...

@Greg Egan

Okay, let us not harp on this endlessly. I completely agree that the word might imply "moving beyond humanity", but I do not see this as necessarily bad for humanity or suicidal for the movement's PR.

Christian and Buddhist teachings usually claim there is a vital need for transcending the initial human condition in the spiritual department, and, as far as I know, there is no outright ban on transcending current human limitations in the bodily department. In fact, it is not unreasonable to claim that bodily and spiritual improvements are correlated.
Both "Western" and "Eastern" cultures contain enough positive references to "moving beyond the human" that the term "transhumanist" might be wielded quite effectively.

I do not see anything "suicidal" about it.

I understand your concern that this word might be "spun" by people commonly referred to as "spin doctors" to paint the movement in fascist-Übermensch overtones, but let us face it - "spin doctors" turn any imaginable word against the people they oppose, and do so quite effectively.

Maybe it is not the best name imaginable, but it is by no means the worst possible. It can be positive if treated right.

What I see as of utmost importance is that more good, rational, talented people openly identify themselves as "transhumanists". If the movement comes to be popularly represented by the "weird" bloggers you referred to, and not by scientists, artists and philosophers, it will most certainly fail - even with a more manageable brand name :(

But enough of T-word apologetics...


I see an interesting discussion on "uploading" emerging in the comments.
A concept I am (predictably) uncomfortable about.
However uncomfortable I am thinking about it, I find the substitution arguments quite... convincing. I see no reason why a person's mind and soul should be bound to a protein body.

What I am concerned with is an intent to TOTALLY reject embodiment, which becomes apparent in some "uploading" proposals.

Russell Blackford said...

I'm not sure who these people are who totally reject embodiment; I'm willing to be educated, though. Even if an individual exists solely in VR, there's still a material substrate "generating" the experiences.

The uploading enthusiasts that I see do not subscribe to the view that the mind is indivisible, indestructible, and independent of any material substrate. That's the position usually attributed to Descartes, though I know that it's sometimes suggested he didn't believe anything so extreme. (Then again, his position seems quite clear to me in The Meditations, so I'd like to see the argument that he believed something else.)

People who believe that uploading is a conceptually coherent idea usually think that the mind somehow emerges from or supervenes upon - and is, in any event, dependent on - the functioning body, including the brain. They (we) do not believe that the mind is indivisible or indestructible or independent of a material substrate. Thus, they are about as far from being Cartesian dualists as you can get.

Speaking for myself, I can imagine uploading scenarios (thought experiments) in which it seems to me that the original person does not survive, and it baffles me that some folks think that these scenarios actually do involve survival. But I can also think of scenarios in which the original person does seem to survive, even though at the end of the process her original carbon-based brain and body no longer exist.

How people feel about these various thought experiments will tell us something about their overt or covert conceptions of personal identity/survival, and there's plenty of room for disagreement about that. But I see no reason at all to think that the existence of conscious experience within a non-carbon substrate is impossible in principle. Nor do I see why survival is impossible in principle (so that there's no thought experiment in which the original person survives uploading). The views that I'm denying in this para do strike me as magical thinking.

However, I am certainly not a Cartesian dualist. As I said last time, I propose that we resist this "You're a Cartesian dualist" meme whenever we encounter it. It has penetrated the intellectual culture quite deeply. It was used against the cyberpunks; it has been used against the Strong AI conjecture; it is used against the concept of uploading. But it's wrong. What is actually unacceptable about Cartesian dualism doesn't apply to any of these things.

Btw, blogs are good places to hash out ideas and share positions with each other, but of course there are places where these ideas can be elaborated with full rigour. Obviously, JET is one of those places and of course I welcome submissions about any of these ideas - whether it's uploading, the transhumanist movement, the t-word itself, or whatever - from any of you.

Michael Anissimov said...

Greg and others, Google "seed AI LOGI", read that chapter, and tell me if it's crackpot material.

It's interesting how many futurists and intellectuals can imagine non-human intelligence, but have a huge problem with the hard takeoff scenario specifically.

Yes, transhumanism is about going beyond humans. People can think whatever they want -- the technology to radically enhance human beings is coming, and it will very likely be used by all to reform their minds and bodies. Nowhere to run from it, and I doubt you'd want to. No backlash will be big enough to stop it. My main concern is simply some form of catastrophic global disaster, like hard takeoff unfriendly AI. But "transhumanism", as currently defined by self-labeled transhumanists, is going to be huge, and in retrospect it will be obvious how foresightful transhumanists are.

So basically, I think the future depends mainly on how we deal with the AI problem. If we solve it, we survive, if not, everyone likely dies.

Eventually, everyone is likely to choose to upload themselves, and run their brains at billions of times faster than current speeds. Most of the matter in the solar system will be turned into computronium. Probably within a couple hundred years at the longest, maybe less.

Anonymous said...

Wow... is anybody else finding this whole discussion absolutely hilarious? Russell, I think you should rename this epic blog post of yours to include what seems to be the overriding theme of the whole discussion:
C R A C K P O T

ROFL! No, seriously, something like "Spot the Biggest Crackpot". Ha ha ha...

Anne Corwin said...

Greg Egan said:

For example, Hans Moravec asks us to think about what we believe would happen if we replaced various proportions of our central nervous system with prosthetic subsystems that were capable of producing the same biochemical outputs from the same inputs. Does 1 prosthetic neuron stop me being me? Stop me being human? Stop me being conscious? How about 10, 100, etc.?

I realize you are addressing Robin here, so hopefully my jumping in isn't rude/presumptuous, but here's my take on the question(s) above: We don't know what will happen as a result of replacing various parts of our brains with prosthetic pieces. I don't think we will know until it becomes more feasible to deal with things like infection, bodily rejection of foreign materials, interface issues, etc.

I'm an electrical engineer, not a neuroscientist or cognition scientist, so take my opinion for whatever it's worth, but the things that always stand out for me in considering the feasibility of "neural prosthetics" are the practical things.

That is, I think we've sort of hit a kind of wall at this point regarding what we can extrapolate from experiments involving rat neurons controlling attached electronics, cochlear implants, etc. There seems to me to be a fairly large gap between what we know we can do already, and all the theoretical/imaginative scenarios that crop up in science fiction stories and discussions on the Internet about "uploaded consciousness", etc.

As far as I'm concerned, the question of whether "uploading" is possible is not even a scientific one to begin with (I suspect Robin would likely agree with this, given her academic background). In order for a problem to be scientifically accessible, you need some means to gather data, and once you have the data, you need some way to coherently fit that data to a model that effectively predicts or explains things in the real world. And right now, we don't even have a way to get data about the effects on subjective human consciousness of advanced neural prosthetics.

So, the question at this point becomes: in what framework would pursuing this data even make the most sense?

I'm all for research, and I think brain research is some of the most interesting stuff going on right now. I don't see any indication that we humans are going to stop tinkering with our sensory interfaces or studying cognition anytime soon. I also see lots of potential for such research to help people who might be dealing with brain injuries or other issues, and I know I am not the only one who sees this potential.

I mean, actual scientists are working on this stuff right now! It's not as if there's any great need to convince people to research brains, or stick chips in brains, or do things that are very likely to lead to much more data about how brains work (and about what happens when we change how they work).

This is all already happening.

And I frankly just don't see how questions about "total mind uploads" or "digital immortality" add anything to the actual scientific pursuit of data about the brain and its plasticity and potential. These questions certainly add value when employed in the service of philosophy and creative literature, but I do not think that literary/aesthetic/philosophical value implies relevance (much less "hyperbolic" relevance) to real-world policy and funding issues.

That is, I think it's fine to speculate creatively about uploading and brain-machine interface. I also, of course, think it's fine to actually work in fields like cognitive neuroscience. But to suggest that "uploading" has some near-term major relevance (politically, economically, or otherwise)?

Call it "lack of imagination" if you will, but I'm having trouble seeing how that is in any way useful outside sitting around having philosophical interludes with one's friends.

Initially I thought that's all transhumanism was -- a kind of forum for people who liked doing fun thought experiments. But now I don't really even know what it is, or what it wants to be, and it makes me all squirmy inside when I see people getting so fired up to defend it as an ideology. I mean, yes, I can get defensive sometimes (particularly when it comes to autism-related issues) but as far as I know, nobody is waging a national campaign to "cure" transhumanists even if they insist that they themselves don't consider themselves "diseased".

Part of me wonders if maybe some of the recent and more controversial "transhumanist efforts" are the result of certain people trying to find their niche(s) in the world, realizing that what they're best at is looking at cool shiny things and writing about them (or philosophizing about advanced robots and how they might behave), and using the socioeconomic tools at their disposal to combine their passions with means of sustaining themselves.

That's actually something I kind of sympathize with, but at the same time I see it as potentially counterproductive, as such persons are clearly averse to the corporate "rat race" and yet might actually be inadvertently helping perpetuate the "rat race" culture.

Anonymous said...

Michael Anissimov wrote:

And Mr. Blackford, whose writing I've been reading and appreciating for several years, I have to make the request: take the step, call yourself a transhumanist, sans qualifiers. You can contribute to the positive connotation yourself. With confidence, you can use the label without fearing that critics will misjudge you. If they do misjudge you, explain your position to them. If they refuse to listen just because of the label you have given yourself, then that's their closed-mindedness, not yours.

Response:

I'm with Mike on this one. (May I call you Mike?) Because transhumanists have various streams/branches, I'd suggest that if you, Russell, and others were to take the leap of faith, you make it clear which type you are. If no label seems to fit, make one up, bury it in the ground, and see if it grows. Jump into the crackpot. The water is warm, though there are leaks. What doesn't leak, anyway? We build the pot as it forms and it forms us; and whatnot.

I've seen enough peeps jump off the cliff, so I'm going for it. Now to retract somewhat from a previous statement: I would consider myself a de Grey Transhumanist. For now, I'm skeptical of non-biological self-modification.

Russell Blackford said...

I think it's useful to have Michael here. I'm not going to call Michael a crackpot, because for all I know, he may ultimately be able to defend his views. I can't see how, but who knows?

I'll just say that the view of the world that he sketched a few posts above is one that I don't agree with at all. If people are going to think that I have views something like that when/if I call myself a transhumanist or accept the label, it will worry me a great deal.

I don't mind being thought of as a nutcase, and losing credibility, for the views that I do actually hold. Zeus knows, I take unpopular stances on many issues. But I wouldn't want to lose credibility because wild views that don't attract me at all somehow get attached to me.

That being said, maybe all things considered I should adopt the t-word label, and try thereby to make it refer to a way of thinking that I agree with or approve of. James may yet be right on that. But it's not simple, from where I sit: I totally share Greg's concerns about being thought of as holding a position that I absolutely do not hold and can't imagine ever holding.

Damien Sullivan said...

Michael Anissimov says:
Eventually, everyone is likely to choose to upload themselves, and run their brains at billions of times faster than current speeds. Most of the matter in the solar system will be turned into computronium. Probably within a couple hundred years at the longest, maybe less.

See, assertions like that (or "hard takeoff" -- ironically, as I type I'm looking at Greg's "cargo cult based on the notion that self-modifying AI will have magical powers" in the second comment) sound crackpotty. "everyone is likely", "most... *will be*", "probably within a couple hundred years".

Transhumanism generally speaking says "longevity, AI, intelligence manipulation, maybe uploading, are possible", which as noted all flow pretty quickly from current materialist science. To make claims about time frames and social impacts involves a lot more assumptions, and strong ones at that.

* billions of times faster? Brains are slow, but myelinated neural signals are only about 10 million times slower than lightspeed, I think, so your numbers already seem off on basic physics. (See the quick check after this list.)

** Does a subjective timespeed that fast seem at all useful or desirable? It's probably far faster than most physical actions. Does one want to live such that crossing the city is like going to Alpha Centauri?

*** What are the energy costs of thinking that fast? Remember E = mv^2/2.

*** Relatedly, a problem I've had for years with the really fast hard-takeoff scenarios: okay, you're an AI who's a bunch faster than humans. Now what? I've seen ideas that the AI will somehow solve physics, or at least develop really good nanotech really quickly, without consideration of the empirical issue of experiment times. The ultimate triumph of pure Reason, of Rationalism over empiricism. I'm skeptical.

* computronium: Unless people pass and enforce laws preventing rampant exploitation of public resources, whether out of sentiment for the view, preservation of scientific data, or holding material resources in reserve for uses other than being filled up with copies.

* within a couple hundred years: that depends on so many things happening. Moore's Law not crapping out, people or people-things getting off planet and being given the liberty to rearrange the solar system, having self-replicating industry which can go off planet and deal with raw planetary materials (contrasting possibility: industrial-style tech doesn't miniaturize well, so industries replicate at nation scales; bio-inspired tech depends on Earthly conditions, so is slow or very hard to adapt to space).

** Oh, and there's the energy needed to do anything with "most of the solar system". It's possible to make assumptions so that small fast replicators peel a planet layer by layer to build a Dyson swarm to power really fast disassembly of the planet, but again, assumptions: fast replicators on raw material; being allowed to grab a large percentage of solar power.
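
To make the speed point concrete, here's a back-of-the-envelope sketch in Python (a rough sketch only: the 100 m/s conduction velocity is a textbook-ish figure I'm assuming, and the quadratic energy model is just the E = mv^2/2 scaling above, not anything from Michael's comment):

c = 3.0e8       # speed of light, m/s
v_axon = 100.0  # fast myelinated axon conduction, m/s (assumed round figure)

# Ceiling on speedup from faster signalling alone: millions, not billions.
print(f"max signal-speed gain: ~{c / v_axon:.0e}x")   # ~3e+06

# Crude energy scaling: if costs grow as v^2, a millionfold speedup
# costs about a trillionfold more energy per unit of subjective time.
speedup = 1.0e6
print(f"naive energy factor: ~{speedup ** 2:.0e}x")   # ~1e+12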

Me, if I want to worry about the effects of machine intelligence, I'd look at Robin Hanson's "if uploads come first", which is basically a special case of when humans, even creative ones, can be replaced by cheaper and faster-to-copy machine minds, which could result in tragic unemployment of biological humans, or their living in luxury from lots of servants, or everyone uploading to compete, or brains turning out to be decent computronium themselves, and acting as the core of a high-bandwidth datasphere. "I can make a new skilled agefree mind in a year and $100,000" (or lower numbers) is a lot easier to define than "smarter" and still pretty radical.

Anne Corwin said...

Russell said: I don't mind being thought of as a nutcase, and losing credibility, for the views that I do actually hold. Zeus knows, I take unpopular stances on many issues. But I wouldn't want to lose credibility because wild views that don't attract me at all somehow get attached to me.

Yep, that's about where I am too... I'm perfectly fine with "owning" my weirdness, but it is very annoying to have to deal with being directly accused of thinking things I don't actually think!

citizencyborg said...

The problem of taking on unwanted baggage with a label is not unique to transhumanism. Every ideological, philosophical, religious and political term has variety within its uses. I'm a Buddhist, but I don't believe in reincarnation. I'm an atheist, but I'm not a jerk, and I appreciate spirituality. I'm a Unitarian, but not a mush-headed New Age one. So I'm an "atheist Buddhist Unitarian-Universalist." Clarity through hyphenation. My brief effort to popularize "democratic transhumanism" and now "technoprogressivism" is a similar effort at clarifying the kind of transhumanist I am.

J. Hughes

Anonymous said...

Michael Anissimov wrote:

Greg and others, Google "seed AI LOGI", read that chapter, and tell me if it's crackpot material.

This section of a draft paper by Eliezer Yudkowsky is certainly not "crackpot material". It is mostly written in measured language, and I'd go so far as to say that about 80% of it is uncontroversial -- especially when the various claims are made with due care to the necessary preconditions. It is not a great thundering catalogue of prophecies expressed with certainty; it is "if I'm right about X, and if Y happens, then it seems likely that over time this will lead to Z". I certainly don't share all the hunches and assumptions that Yudkowsky expresses here, but it is for the most part perfectly logical, coherent, and appropriately modest in its language, given the degree of remoteness and uncertainty of some of the subject matter.

What I do consider to be crackpottery is when people take a reasonably cautious speculative piece like this, and inflate it into a set of rigid certainties about the future (let alone the imminent future). Yudkowsky's piece lies close to the border of what can sensibly be said about this topic at the present time; anyone who comes along and regurgitates the possibilities mentioned here as if they were established facts has already lost touch with reality.

Lots of very enjoyable SF (and, frankly, some crappy derivative SF) has been written about self-improving AIs turning into Gods or enslaving humanity. Personally I consider it far more likely that we'll create some kind of dumb but debilitating nuisance, but that doesn't mean I can't enjoy A Fire Upon the Deep as the fictional masterpiece it is. And if people want to debate the real-world risks and benefits of even the most extreme possibilities that Yudkowsky mentions, whether or not I happen to consider them plausible, that's perfectly fine by me. It's never too early to give serious consideration to the ethics and practicalities of different scenarios, and if none of them actually come to pass, the participants in the debate will still have had some fun and, hopefully, honed their moral skills.

But when people start issuing prophecies saying "First X will happen, then Y, then Z, and there is nothing that you non-believers can do to stop it, because my big brother the hyper-intelligent AI will kick your puny ass if you dare to disagree with me", then not only is this a deranged fantasy about the nature of AI based on zero evidence, it is a totalitarian political program masquerading as some kind of inevitable consequence of an imaginary "law" of history, or evolution, or game theory, or the (still non-existent) theory of consciousness. It is as infantile as deterministic Marxist predictions, or Biblical prophecy. That there is a small skein of actual logic holding up a tiny, tiny part of it just makes it all the more embarrassing.

Damien Sullivan addresses some political and logistical hurdles to your own particular masterplan; I could add a few more of my own, but that seems redundant at this point.

Anonymous said...

AnneC, I largely agree with you. I don't think it's a bad thing per se to press people to make predictions about various highly idealised and impractical thought experiments as a way of probing their assumptions, but the answer "I don't really know what will happen -- nobody does, I want to wait for more data" is a good honest reply, and maybe sometimes the best one.

Robin Hanson said...

[I tried to leave this comment a few days ago - not sure what went wrong.]

Greg is basically right both about the negative connotations of "transhuman", and about the commonality of overconfident faith in magical self-modifying AI.

On the label, I'm one of those people who is reluctant to embrace vague labels. Alas, most people seem reluctant to embrace anything but vague labels, so I admit larger political movements must be based on them. But by embracing one you risk associating with others who use the word very differently, and its common usage may evolve to be far from what you intended.

On magical self-modifying AI, I agree the scenario deserves exploration by some, but it is too far from plausible to be the default scenario for thinking about the future.

John Howard said...

Hi folks. Tell me: how essential to transhumanist bona fides is a belief in germline genetic engineering, i.e., creating new people with genomes that aren't from sexual reproduction?

In other words, is it possible to be a transhumanist and still support a ban on germline GE? And/or is anyone who opposes such a ban a transhumanist?

I contend that it is more of an essential plank and more of an immediate practical question than AI and other H+ ideas, so it is strange that I don't see much mention of it in this thread. Why is that?

citizencyborg said...

Concerning germline genetic engineering, it is not an essential part of the H+ meme-set. One could argue that the risks of inheritable change outweigh the benefits, and that all the things we talk about can be accomplished without genetic inheritability. Most of us accept the inevitability of genetically inherited modifications, however.

As to what are the core agreed-upon memes that define the H+ worldview, here are ten "Are You a Transhumanist?" self-diagnostic statements we have developed through polling of our members, with the percentage of our members who agree with each statement:

95% Do you believe that people have a right to use technology to extend their mental and physical (including reproductive) capacities and to improve their control over their own lives?

95% [answering No] Do you think human genetic engineering is wrong because it is “playing God”?

94% Do you think that by being generally open and embracing of new technology we have a better chance of turning it to our advantage than if we try to ban or prohibit it?

93% Do you expect human progress to result from human accomplishment rather than divine intervention, grace, or redemption?

93% Do you think it would be a good thing if people could become many times more intelligent than they currently are?

87% Do you think it would be a good thing if people could live in good health for hundreds of years or longer?

83% Do you believe women should have the right to terminate their pregnancies?

82% Does your ethical code advocate the well-being of all sentient beings, whether in artificial intellects, humans, posthumans, or non-human animals?

80% Would you consider having your mind uploaded to computers if it was the only way you could continue as a conscious person?

77% Should parents be able to have children through cloning once the technology is safe?

Anonymous said...

@ Russell Blackford

While I agree that a mind existing in a virtual environment still has a material substrate (the system running "the simulation"), it seems to me that the mind in question would be remarkably isolated from the "real state of affairs" (just as a VM is isolated from the actual computer hardware it runs on), and I have both religious and personal reasons to feel uneasy about such a prospect.
Anyway, I totally agree with AnneC and Greg Egan that the problems of uploading are so vastly remote that it is unreasonable to be seriously concerned with them. Uploading makes great fodder for a good philosophical discussion, however.

@ John Howard
Frankly, it would be somewhat strange for a transhumanist to desire a GLGE ban. But Transhumanism, AFAIK, does not have a central governing body that will "excommunicate you from Transhumanism" if you happen to hold some particular belief (and maybe that is a potential source of the PR problems so widely discussed here, as the movement is unable to distance itself from "profoundly disturbed" individuals).
So one can identify himself as a "Transhumanist" and believe GLGE is inappropriate. But it would be very nice of that person to explain frankly and reasonably why he believes that a certain technology is so "wrong" that it should be banned forever and ever, in perpetuity.

I do not see how this would be "inherently dangerous", as the risk involved depends on the development of the specific GE methodology and the exact characteristics of the genes introduced.

I do not see how GLGE is "creating new people with genomes that aren't from sexual reproduction" (it is silly to claim that a genome with a few genes introduced "from outside" is some kind of "genome-not-from-sexual-reproduction". Do you know how much viral junk we have in our genomes? Does that junk make us somehow "new" and "postsexually-genomic"?).

I do not see how this is "playing God" or "anti-Christian" (and I AM a Believer), as there is no part of the Holy Texts prohibiting the introduction of a new gene into the human genome, AFAIK.

And I certainly see how this could benefit the whole of Humanity, especially if sufficiently developed human GLGE is available to all, or at least most, people, not just the rich ones.

P.S.: a mutation of the CCR5 receptor (the CCR5-delta32 mutation) gives immunity to HIV while rendering the person susceptible to West Nile virus (which is quite bad, but not as bad as HIV, especially in Europe, where West Nile virus is very unlikely to be encountered).

Would it be "wrong" for a person to want his children have intrinsic immunity from HIV infection at the cost of becomin susceptible to a far more uncommon disease that is bound to specific geographic regions?

Damien Sullivan said...

I don't think transhumanism meshes well with an in-principle ban on germline engineering; a moratorium might well be justifiable though, until reliability is high. (Consider a 10% risk of severe defects manifesting after linguistic competence/obvious self-awareness.)

I'd expect a fight over intermediate techniques: say one which can introduce genes without side-effects only some of the time, but defects become manifest or predictable sometime in fetal development. Is it okay to make lots of human embryos shotgun style and cull the undesireds? Abortion logic would say yes, but many people who support abortion rights would probably be uneasy. And if you need development into the first year after birth to be sure of quality, well, accepting culling at that point would mean a big shift in Western attitudes.

Cartesianism/dualism: yeah, I don't think it applies to uploading. I think it does sometimes apply to how people think about AI, though, perhaps symptomatic of people deleting the God and soul nodes from their belief nets, but not updating the networks... but anyway, I often see people arguing as if AIs, by virtue of being self-aware, must have some drive to selfishness, self-fulfillment, or resentment of servitude, that will be suppressed or provoked by building AIs as servants. As if slavery is a well-defined concept for a being whose motivations are built from scratch. I think you can find this in the hard takeoff/Friendly AI concerns too, though it's been years and I'm not going to check right now.

But basically it's a reflexive belief that AIs, not just capable of being like people, *will* be like people, at least in certain willful ways. Something I don't see happening if we take a command shell or desktop program and give it language ability, senses, and even self-awareness to avoid loops and detect hacking. We're descended from selfish replicators; it'll be descended from a program which waits around for input and orders; lots of intuitions don't apply.

John Howard said...

So one can identify himself as "Transhumanist" and believe GLGE is inappropriate.

Has anyone here ever encountered such a person? I agree that aspects of Transhumanism such as "technology enabling better, longer and more capable lives" do not require GLGE, but that stuff doesn't need to be called Transhumanism - that's just plain old technological progress. Even the AI and mind uploading stuff doesn't need to be called Transhumanism - that's called AI and virtual reality/Mirror World stuff.

but it would be very nice of that person to explain frankly and reasonably why he believes that a certain technology is so "wrong" that it should be banned forever and ever, in perpetuity

It's not that GLGE is so wrong; it's more about the benefits of ruling it out, the good things that would flow from that. The benefits and costs of GLGE to society and individuals are pretty much a wash - at the very least, certainly unknowable. But the costs of developing it are quantifiable here and now, and unless you are directly employed as a researcher, or unless we count benefits to the researchers' landscapers and car dealers, there are no here-and-now benefits, just costs. And the costs are great. How can anyone justify diverting research funding and medical resources from someone's life-saving dialysis, or even basic dental care, in order to research such a dubious thing?

And there would be more benefits to a ban besides just being able to spend the money on other things: a ban would also bring psychic relief and purpose to everyone's life by affirming our good-enoughness, enable a win-win resolution to the gay marriage debate (see my website about that; it's OT here), and help resolve many international crises that threaten lives at home and abroad by affirming everyone's right (and rightness) to use their own unmodified genes and to reproduce sexually with the person they want to.

The question really is, why should those things all be tossed out, why should money be diverted from basic medicine, why should other countries and the UN be nose-thumbed, just to do GLGE?

Damien Sullivan said...

Follow-up to my comment: I could also see regulation of germline engineering, for quality control or population distribution, to keep gender balance and immune diversity. Regulation might be light-handed, or even unneeded if voluntary measures step up: parents reporting to the network of gengineers what sex and immune traits they were selecting, so other parents could observe a growing imbalance and consider that in their choice. (A toy sketch of that feedback loop follows.)
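
For what it's worth, here's a toy Python sketch of that voluntary feedback loop (every number is invented purely for illustration: the 0.55 baseline preference for boys and the 0.5 correction strength are assumptions, not data):

import random

def simulate(generations=20, births=10_000, pref_boy=0.55, correction=0.5):
    # Toy model: parents share a baseline preference for boys, but a public
    # registry of recent selections lets each family lean against whatever
    # imbalance it currently shows.
    frac_boys = 0.5
    for g in range(generations):
        p = pref_boy - correction * (frac_boys - 0.5)  # damped by the registry
        boys = sum(random.random() < p for _ in range(births))
        frac_boys = boys / births
        print(f"gen {g:2d}: fraction boys = {frac_boys:.3f}")

simulate()
# With these assumptions the ratio settles near 0.53: weak voluntary feedback
# shrinks the imbalance but doesn't erase it, which is why the question of
# light-handed regulation doesn't entirely go away.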

John Howard: your benefits look completely chimerical to me. To be consistent it seems you should argue against anything that's not of immediate benefit: space and particle research, classifying obscure species, makeup, salaries above $100,000.

"psychic relief": Not to me.

"purpose... good-enoughness": What?

"win-win": Not for gays. And "civil unions which don't have conception rights" implies a radical restructuring of, well, conception rights, which currently inhere in any het pairing, married or not.

"help resolve many international crises that threaten lives at home and abroad": Which crises are these?

John Howard said...

Is it okay to make lots of human embryos shotgun style and cull the undesireds? Abortion logic would say yes, but many people who support abortion rights would probably be uneasy.

Abortion logic is that a person has a right to privacy covering their own body, and therefore embryos that are in their body are not anyone else's business. It has nothing to do with the embryo, really. Embryos that are outside a body are more like lifeless corpses with no blood flow that could perhaps be brought to life (it's easier with embryos than corpses, but the principle is the same - all corpses are "unique human beings with a potential for life"). Embryos have no more right to be brought to life than a corpse does: none. But that doesn't stop people from doing it.

And if you need development into the first year after birth to be sure of quality, well, accepting culling at that point would mean a big shift in Western attitudes.

Indeed, this is the problem with the "moratorium" fake offer. We won't be able to test the safety of GLGE or same-sex conception if there is a moratorium. The moratorium would have to come after too many tragedies had occurred, and then we'd go back to the drawing board for a few years and lift it again, and then keep those people under observation to make sure it's safe again, etc. And eventually it would be safer than the pregnancies of Ashkenazi Jews or men over fifty or former smokers, and people would start to question their right to have children, if we won't let people do GLGE or SSC because it is too risky. It's better not to bring "risk" into the equation at all.

Russell Blackford said...

John, it seems to me that you are talking about funding policy, which is a quite different issue from whether something is sufficiently harmful (based on secular concepts of harm) to justify criminal prohibition. There are many niceties to funding policy (why do we give funds to English professors? or why don't we give them more funds?), but what bothers me is the common claim that a whole range of things should be the subject of sweeping, permanent criminal prohibitions - on the basis that they transgress one of the many moral codes that are on offer in a pluralistic society, rather than on the basis that the state has a mandate to protect us from harm.

Obviously there are many subtleties here with the underlying jurisprudential theory. Joel Feinberg needs four volumes to discuss a lot of them in The Moral Limits of the Criminal Law, so don't expect me to do it in a blog comment. But when all the subtleties are dealt with, I remain convinced that we should resist many of the calls that are made for broad, permanent criminal bans.

John Howard said...

To be consistent it seems you should argue against anything that's not of immediate benefit: space and particle research, classifying obscure species, makeup, salaries above $100,000.

Indeed, and I do criticize those things. The thing that is different about GLGE is that we don't already do it, a ban could work, and a ban is being considered. Most European countries have banned it already. They didn't have to ban arts funding to be consistent; every funding question is its own question.

It's not entirely about funding; it's about "opening the future by closing it", to use Dale Carrico's mocking phrase. And indeed, that's the point: we can't enter the post-transhuman future without extinguishing transhumanism (by which I mean GLGE, and I'm still waiting for someone to find me an exception), so that we can focus on solving real-world, real-people problems. We can't affirm everyone's conception rights (by which I mean the rightness of everyone using their own genes to conceive) without repudiating GLGE and the use of modified gametes.

Damien Sullivan said...

There's no one pro-abortion logic. It's part "this is inside someone's body, let them deal with it", part "it's not really a human being anyway". Different people put different weight on those, and would be more bothered by falsifying one of them.


Indeed, this is the problem with the "moratorium" fake offer. We won't be able to test the safety of GLGE or same-sex conception if there is a moratorium.


I don't think that's true; you can use animals to test the process of figuring out the engineering. That is, if you can study a species in sufficient detail that you can do low-risk engineering from scratch, then you can transfer that to humans. So, at first there'd be rats and pigs and sheep and all, with lots of shotgun techniques and errors, but at some point we might hit "this is what you need to look for", and be able to grab a hyrax, or bat, or rhino, or gorilla, scrutinize its genome and development, and do engineering correctly right away. At that point I think you can transfer to humans.

I'd note there's a parallel technology, exowombs or uterine replicators: machines able to take a single-cell embryo and bring it to term. That's also a hard problem, but of interest to women who don't want to go through pregnancy, or people who can't. It could also allow bypassing the genetic engineering problem somewhat by allowing direct manipulation of development. The genetic program releases hormones which guide development, but the exowomb could insert hormones to modify that, inducing more or less development of various brain and body areas. Of course, in earlier phases of research it could be used experimentally, on animals or on human embryos you don't intend to bring to term - simultaneously testing and obviating GE.

They didn't have to ban arts funding to be consistent, every funding question is its own question

There's public funding, and there's freedom. There's a difference between "public funds shouldn't be used on this research, it's not worthwhile enough" and "no one should be allowed to do this research, it's wrong." You can't say "no one should be allowed to do this because there's more worthy things to do with their time" in anything less than a totalitarian society.

And indeed, that's the point, we can't enter the post-transhuman future without extinguishing transhumanism (by which I mean GLGE, and I'm still waiting for someone to find me an exception) so that we can focus on solving real-world, real-people problems.

Why do we want to enter a "post-transhuman" future? Why does GLGE research conflict with solving real problems? It's not like society can do only one thing at once. A bit of GLGE research, even with public funding, doesn't impair your ability to address problems of water supply or malaria.

We can't affirm everyone's conception rights (by which I mean the rightness of everyone using their own genes to conceive) without repudiating GLGE and the use of modified gametes.

Why not? Those seem totally independent to me. You seem to be asserting your beliefs, not making arguments.

Russell Blackford said...

Yes, John. I'm totally baffled. I can't see that you've made any argument at all. You've merely asserted a personal preference.

I don't mind personal preferences being aggregated by governments when they decide on funding policy. Governments have a broad discretion in that respect, as far as I'm concerned, and if total taxes end up being more than the electorate will put up with (or if they are seen as falling unfairly) there'll be an electoral price.

But I will certainly resist the idea that governments should be using the power of fire and sword to suppress activities that you or they happen not to like for some reason.

Damien Sullivan said...

What's your stance on PGD, John, where batches of embryos are produced naturally and screened for desirable genes?


I contend that it [germline engineering] is more of an essential plank [to transhumanism] and more of an immediate practical question than AI and other H+ ideas


I disagree on both counts. I expect most transhumanists to support the idea of GE as a no-brainer. (I don't say all because people can be oddly diverse.) But changing the human genome is no more transhuman than living longer than any human ever has, or being smarter than any human has been, or uploading and not even being biological and DNA-based. And none of these are immediate practical questions; the base research needed for effective GE will be done anyway for general understanding and cancer fighting. Same-sex conception *might* be close (as in some number of years) but doesn't seem threatening to anyone not hung up on heterosexual reproduction; meiosis will still be random, just not involving a sperm.

And while transhumanism includes GE, it's honestly not what most of us get excited about. Sure, there's Freeman Dyson, or anime or Transhuman Space fantasies about catgirls and squidpeople. But most of us are selfish; we're far more interested in being geniuses who can live forever, not in our grandchildren being geniuses who can live forever. Mailing lists IME talk a lot more about cryonics (passé now, it seems), calorie restriction, potential nootropics, uploading, AI, nanotech, self-replicating machines, and of course the Singularity in all its forms, than they do about GE, which is slow and doesn't help *us*.

We don't want anyone taking away the option of it or of cloning, and GE makes for a backup Singularity -- if nothing else works, we can use PGD and statistics to select our way to smarter and more ($TRAIT) people over some generations -- but it's not where our hearts are.
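(A toy sketch of that "PGD and statistics" route, for the curious. All the parameters here - batch size, trait spread, starting score - are invented for illustration, and the model deliberately ignores heritability, regression to the mean, and measurement error, so it's a cartoon of the selection logic, not a prediction.)

```python
import random
import statistics

def one_generation(parent_mean, n_embryos=8, trait_sd=5.0, n_couples=10000):
    """One round of embryo selection: each couple screens a batch of embryos
    and implants the one with the highest score for the chosen trait."""
    best = []
    for _ in range(n_couples):
        batch = [random.gauss(parent_mean, trait_sd) for _ in range(n_embryos)]
        best.append(max(batch))  # keep the top-scoring embryo of the batch
    return statistics.mean(best)

mean = 100.0  # arbitrary starting trait score
for gen in range(1, 6):
    mean = one_generation(mean)
    print(f"generation {gen}: mean selected score ~ {mean:.1f}")
```

Each generation gains roughly trait_sd times the expected maximum of n_embryos standard normal draws (about 1.4 standard deviations for a batch of eight), which is why the "over some generations" qualifier matters: the per-generation step is modest.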

Plus a lot of transhumanists probably think the Singularity will come before safe and effective human GE, given testing and ethical difficulties. I avoid predictions like that, but certainly both machine intelligence and significant alteration of the genome -- or doing any alteration precisely and safely, without expending embryos and risking side effects -- seem like hard problems, and I could easily see uploading happening first.

John Howard said...

There's no one pro-abortion logic.

There was one Roe v. Wade; it was based on privacy and public policy, not on embryo rights.

part "it's not really a human being anyway"

Is that what people say? I think the issue is when life begins, and I think that doesn't happen until blood starts flowing, just as life ends when blood stops flowing, even if some tissues are still warm and metabolizing, etc. But at any rate, it has nothing to do with abortion, though I think it does have some bearing on what to do with embryos. I'm very suspicious of people who say that all embryos need to be implanted; I think they are setting us up to feel that GE'd embryos should be implanted. I say toss 'em.

PGD? My stance is that all embryos outside the womb should be flushed and not forced to life. PGD is a form of eugenics that makes people callous and careless toward people with disabilities, and since most people have something wrong with their genes, it makes everyone feel unworthy of reproducing and forces a huge layer of government regulated reproduction cops in lab coats on everyone. But that said, I'm not working to ban it: most countries allow it, people are doing it already, and medical and marital privacy would make banning it (and IVF) impossible anyhow. Plus, it takes away the 'moderate' argument for GLGE, so it's useful for my argument to stop GLGE.

And I'm not sure where you guys are making an argument either. You're just expressing a preference that GLGE be legal, that we don't enact a ban. Why should GLGE be allowed?

Animal testing is offensive to our respect for living things and makes us callous and cynical, and it is costly. The only justification people used to offer was that there were people suffering who deserved compassion more, but in the case of GLGE and SSC, where are those people? They don't exist, and they don't need to be made to exist, so there's no justification for harming animals.

Exowombs are inhuman, people created in them would forever wonder if they had the same humanness as other people, it might make very cold people, just like people are sometimes envious of people that experience the pregnancy of their children. Plus, if they create people with larger brains or enhanced intelligence, that would be bad too, even though it isn't GE. All people should be created equal, born to their mother and fathered by their father, just like all people have always been, not because it is so great, but because that way everyone is created equal.

It's not enough to just ban public funding, because funding just gets shifted around. That's a fraud. Besides, all money is public. I dispute the contention that people can do whatever they want just because they have possession of capital. I think a government that governs by consent of the governed has the right to everything within its borders and just lets people own personal wealth because that's how we want it to be, by preference. But it (we) can certainly stop people from doing whatever they want to do with it; wealth doesn't give people a "do anything you want" card. If that is a totalitarian society, then all societies are totalitarian societies, because no society lets people do whatever they want.

We want to enter a post-transhuman future because it is necessary to get people to care about the present and stop feeling that everything will be solved by scientists and that we are just in the way. GLGE does conflict with solving other problems; all problems conflict with each other, but they are all problems, so we have to work on all of them at the same time. We DON'T have to work on GLGE.

You seem to get excited about GE when someone suggests banning it. I think it's so central to H+ that you take it for granted and that's why you argue so much about the other things. But those other things are OTHER THINGS. AI is AI. Medicine is medicine. Wanting to live longer, be smarter, these are all really old ideas. GE is fairly recent, perhaps older than AI but not by much. Which came first in SciFi, I don't know. There were no GE'd people on the Discovery, though.

Russell Blackford said...

John, are you seriously suggesting that the onus of proof should ever lie with someone who thinks that some activity should not be criminalised? I suppose you're entitled to hold such a deeply illiberal view, and I wouldn't even know where to start in defending my liberal presumptions without writing an entire book on the subject (much as I'd have to write an entire book about epistemology if someone I was debating with challenged me by saying, "Aha! You rely on an assumption that we have knowledge of an external world. Prove it!").

Fortunately, I live in a modern society where some lip service is usually given to the idea that there's a problem with the state exercising the power of fire and sword, as Locke called it - its power to suppress activities by way of criminal punishments and the like - and that it bears the onus of demonstrating that a practice is not merely unpopular or even "immoral" but causes serious, direct, unacceptable secular harm. Speculation is not good enough.

There may be some defensible extensions of the harm principle (you really do need to read the four volumes of Feinberg), but they are narrow and controversial.

Damien Sullivan said...

Russell, feel free to tell us to leave it be.

John, you seem to have found a possibly unique political position for yourself. Pro-abortion? Anti-IVF? Anti-GE? Anti-external embryos...

[PGD] forces a huge layer of government regulated reproduction cops in lab coats on everyone

Why? The PGD isn't mandatory.

Allowing GE: first, because we live in a liberal society (or want to) and should allow things unless they cause specific harm. Second, because it can make people more healthy and capable than they would otherwise be, thus would be of specific benefit.

Exowombs are inhuman, people created in them would forever wonder if they had the same humanness as other people

I don't think they would "forever wonder" any such thing. The objection sounds like that offered to "test tube babies" back in the day, which has failed the test of experience.

All people should be created equal

Well, they aren't. They're created with equal intervention in their genome (none) which means they're created randomly, which means they're not created with equal abilities. People worry about GE increasing a gap between rich and poor, but it could just as well decrease the gap between genetically fortunate and unfortunate. Probably easier to raise the minimum than raise the maximum, and public funding can make sure everyone affords it.

And it's all well and good to say people should be told they have equal worth, but we already have nominal political and legal equality... while the marketplaces of money and love will send the message that people are not created equal, at least with respect to good jobs and attractive mates.

Your position amounts to "natural selection should be allowed to function unimpeded".

I think a government that governs by consent of the governed has the right to everything within its borders

If you're going to quote Roe vs. Wade as abortion logic, you should note that the political theory behind the US Constitution does not support your position, though practice has been oozing in that direction. People have basic rights, including property rights, and ostensibly Congress could only make a limited range of laws (albeit that's been bent all out of shape.)

AI is AI. Medicine is medicine

And GE is GE. None are uniquely central to transhumanism, whatever you want to believe. If you look at the Wikipedia pages on transhuman, posthuman, and transhumanism, GE is only part of it, and not a privileged part. (I'd forgotten about cybernetic prosthetics or brain implants, that's in there too.)

Yeah, I get excited when someone suggests banning it, because I think it's a desirable option. But given a choice between a world where I can GE my children or I can improve myself, I'd choose the latter. No reason to accept an imposed choice from such as yourself, though.

John Howard said...

John, are you seriously suggesting that the onus of proof should ever lie with someone who thinks that some activity should not be criminalised?

We are talking about creating people here, not just "some activity." Creating people has never been done by anyone except a man and a woman, and has always been limited by civilized societies to married couples. We have never just let people go off and willy-nilly create people any old way they want. Laws against rape and adultery and fornication are about prohibiting creation of people except by consenting, legally committed, socially approved couples. Even if at various times we have let people get away with illegitimate conception, we also consistently prohibited people from conceiving with close relatives or children, and sometimes prohibited pairings based on race or class. So there has never been an unhindered right to create people, even by sexual reproduction. And there certainly isn't suddenly a right to start creating people using lab-created genomes just because it's now possible to do it some new way. Of course this is something that should be banned first and the onus on proponents to explain why it should be allowed. But it isn't banned in the US, somehow the onus is reversed from how it should be.

Anonymous said...

"Exowombs are inhuman, people created in them would forever wonder if they had the same humanness as other people"

Nonsense. That is not profoundly different from neonatal resuscitation, from an ethical POV.
And I deeply despise people who seek to ENFORCE limits on neonatal resuscitation, for both ethical and personal reasons.


"We are talking about creating people here"

No, we are not. Plain and simple, adding or replacing an allele does not "make" a human. Viral integration into germlines (which occurs from time to time) and transposons do not make "neopeople". I do not see how GE is different from that, except for the fact that it is controlled.

"Laws like rape and adultery and fornication are about prohibiting creation of people except by consenting, legally committed, socially approved couples."

And I thought laws about rape are about protecting people from being severely hurt by other people.

Also, as far as I know, extramarital sex ("adultery" and "fornication") is not criminalized in civilized countries, and is a matter of free choice. In fact, in civilized countries we can have three or more consenting adults having intercourse.


It seems that you are a kind of person who seeks his beliefs to be ENFORCED on others. Yuck!

"Of course this is something that should be banned first and the onus on proponents to explain why it should be allowed. "

Excuse me, but this does not follow from your previous arguments.

"Of course this is something that should be banned first and the onus on proponents to explain why it should be allowed. "

You have not made an argument. You just do your best at shifting the onus of proof, and employ an unwarranted "of course" as a last-ditch attempt.

P.S.:
Your position on privately funded research is delightfully communist.

P.P.S.: To my great pleasure, even according to "ban everything" websites, 44% of European countries did not ban GLGE.

79% of the WORLD has still not taken action to ban the creation of "designer babies."

And New Zealand will soon allow GLGE. (If that happens, I will pack up and immigrate there ASAP.)

Anonymous said...

I guess this paper by Mr. Cohen articulates my position on GLGE perfectly, and I doubt this position can face a serious logical, moral, religious or scientific rebuttal.

http://www.icer.it/docs/wp2003/Cohen21-03.pdf

Summary for those who "TL/DR" scientific articles: fears that GLGE will somehow violate human rights, or that it bears an intrinsic mechanism for bringing about severe human inequality and misery, LACK ANY SCIENTIFIC GROUND.

Anonymous said...

Michael Anissimov said...
Greg and others, Google "seed AI LOGI", read that chapter, and tell me if it's crackpot material.

"Levels of Organization in General
Intelligence" by Eliezer Yudkowsky

"In the space between the theory of
human intelligence and the theory of general AI is the ghostly outline of a theory of minds in general, specialized for humans and AIs. I have not tried to lay out such a theory explicitly, confining
myself to discussing those specific
similarities and differences of
humans and AIs that I feel are
worth guessing in advance. ...

SH: A convincing argument for me will have to have a lot of physical detail, not philosophical rambling, no matter how interesting or clever those avenues proceed. Because for 55 years bright AI scientists have been attempting to realize strong AI and nobody has announced success, even a very moderate one. Kurzweil had this to say: "Kurzweil's answer to the big question was a qualified yes. He believes that the exponential progress of technology will lead to machines capable of acing the Turing test: they'll be able to carry on a conversation with a human being, and the human will not be able to tell that the other party is a computer. And Kurzweil says we'll get there in 25 or 30 years. But he adds that whether these robots will truly be conscious or simply display what he calls "apparent consciousness" is another question, and one for which he has no definite answer."

SH: I quote this because if this program is not truly conscious how will it have its own volition? That question becomes important later.
Yudkowsky continued:
"Nonetheless, the evolution of evolvability is not a substitute for intelligent design. Evolution works, despite local inefficiencies, because evolution exerts vast cumulative design pressure over time. Until a stably functioning cognitive supersystem is achieved, only the nondeliberative intelligence exhibited by pieces of the system will be available."
SH: I assume by supersystem he means an AI capable of passing the Turing Test that Kurzweil mentions above. But nobody has a specification ready that explains how to build a strong AI with a mind, where the mind is not the product of some systems-theory speculation. The scientific method requires a reproducible experiment which anyone can use to build the strong AI. A philosophical survey is at best an argument for plausibility, not a scientific demonstration.
Yudkowsky continues:
"Now imagine a mind built in its own presence by intelligent designers, beginning from primitive and awkward subsystems that nonetheless form a complete supersystem. Imagine a development process in which the elaboration and occasional refactoring of the subsystems can coopt any degree of intelligence, however small, exhibited by the supersystem. The result would be a fundamentally different design signature, and a new approach to Artificial Intelligence which I call seed AI."
SH: So far the logical scientific explanation is missing, and the argument for the existence of a supersystem is fueled entirely by the imagination - imagine that. The idea progresses from a supersystem that doesn't exist, acted upon by a development process that might never be physically realizable (just a figment of the imagination), and the matter of there being a mind to experience this process is no more than conjecture, even if the program passes a Turing Test. A silicon substrate might support a mind, but that doesn't mean passing a Turing Test measures such a mind.
Yudkowsky continued:
"Fully recursive self-enhancement is a potential advantage of minds-in-general that has no analogue in nature - not just no analogue in human intelligence, but no analogue in any known process."
SH: So then how can a human mind deliberately design and implement such "fully recursive self-enhancement"? Obviously, it can't. An algorithmic process is a known process, so there is no analogue in a machine process either. That leaves too much of an explanatory gap for the imagination to fill in. He used to want to fly around and apply limitations to prevent the accidental emergence of rogue AIs. If Yudkowsky can't describe the mechanisms but leaves them to the imagination, what evidence does he use to estimate the immediacy of the threat of an emergent Unfriendly AI? Based upon the idea that if we rewound the clock it would be very unlikely for human intelligence to evolve again, even in the billions of years it took, I think the random appearance of an evil super-intelligent AI which has self-modified itself with "fully recursive self-enhancement" is not likely to happen before the universe expires of heat death. That seems as plausible as his claim.
Goertzel tries to take into account the issue of how the AI gets its consciousness. His answer is panpsychism, which is the theory that everything in the universe has some amount of mind. Since there is no scientific experiment provided to verify this claim, it isn't a claim of naturalism, so that leaves it to be supernatural. The Singularity reminds me of an online philosophical game where the players make up the rules as they go along, which favors somebody really bright like Yudkowsky. I've read of the Fermi Paradox - why we have observed no evidence of alien civilizations when there "should" be several cases of aliens seen - used as an argument to support why the Singularity will eventuate in our civilization: the Singularity is what wiped out all those alien civilizations which are not there for us to observe. This seems far-fetched to me also, but it is apparent that what you deem a lucid explanation is, to me, a failure of critical thinking.

Anonymous said...

Please, please, please do not confuse Singularitarians with Transhumanists.

One is not obliged to believe in "strong AI" at all to be a Transhumanist.

citizencyborg said...

Depending on how you define Singularitarian (S^) beliefs, only between 10%-40% of transhumanists (H+) hold those beliefs. There may also be S^ who are not H+, objectively and subjectively. Belief that we will soon be whacked/raptured by AI bears no necessary relationship to belief that people should be able to apply technology to their brain, body and reproduction to transcend the limitations of the human body.

Damien Sullivan said...


Please, please, please do not confuse Singularitarians with Transhumanists.

One is not obliged to believe in "strong AI" at all to be a Transhumanist.


Don't confuse Strong AI with the Singularity, either. The "Strong AI hypothesis" is simply that a machine can have a mind, or conversely that our mind and brains are (squishy, biological) machines.

Anonymous said...

Yes, now I see. Imagine you say to the first person you meet on the street (after that you might try a heavily wooded area on a dirt path in an impoverished area), "Hello, I'm a Singularitarian Transhumanist." I'm curious what the response might be... On the streets of Houston, Texas, out of the five people I questioned, none of them knew the meaning of Capitalism; and here we are pondering transhumanism!

Greg Egan wrote:

Lots of very enjoyable SF (and, frankly, some crappy derivative SF) has been written about self-improving AIs turning into Gods or enslaving humanity. Personally I consider it far more likely that we'll create some kind of dumb but debilitating nuisance, but that doesn't mean I can't enjoy A Fire Upon the Deep as the fictional masterpiece it is. And if people want to debate the real-world risks and benefits of even the most extreme possibilities that Yudkowsky mentions, whether or not I happen to consider them plausible, that's perfectly fine by me. It's never too early to give serious consideration to the ethics and practicalities of different scenarios, and if none of them actually come to pass, the participants in the debate will still have had some fun and, hopefully, honed their moral skills.

Response:

In the past we thought we'd never do a lot of things we now take for granted. I think strong AI is only a matter of time, given that we remain alive and social. I know better than to name timetables given the state of current futures tools, though they can be useful for reference to possible outcomes. I would argue that the methods of prediction will eventually become refined so well as to know better than society at large what outcomes will occur given said action. This could potentially protect anyone from harm, if one were to want such a thing; however, to believe that is also to believe that AGI will one day exist. Call it faith or bogus if you must. This faith business has created systems despite scientific understanding, and I suspect that it will continue to do so until all matter and matter of understanding is scientifically or analytically encompassed.

As a supporting example, the tech historian, George Basalla, wrote in the book The Evolution of Technology:

"By the mid-1980s the home computer boom appeared to be nothing more than a short-lived and, for some computer manufacturers, expensive fad. Consumers who were expected to use these machines to maintain their financial records, educate their children, and plan for the family's future ended up playing electronic games on them, an activity that soon lost its novelty, pleasure, and excitement. As a result a device that was initially heralded as the forerunner of a new technological era was a spectacular failure that threated to bankrupt the firms that had invested billions of dollars in its development."

And so we now know better.

Russell Blackford wrote:

But I will certainly resist the idea that governments should be using the power of fire and sword to suppress activities that you or they happens not to like for some reason.

Response:

Indeed. If everyone could understand that when fire and sword is used, either by force of law or otherwise, there's a high probability that an equal or exaggerated reaction will occur in response, eventually. If I assert 1, and you choose to play the battle game where the highest number wins, this game could go on forever, given that we believe in certain rules indefinitely (perhaps human life is a game that will go on forever; another subject matter). So to my 1 you respond with a 2; then I give a 3 to trump your 2, and so on, until one of us stops counting and the other 'wins', either by means of logging out of the game or via death. In other words, being offended and feeling that it's justified (then enacting law to enforce it, for example) is just a few steps toward ending another's life. I may have stretched that concept too far; if so, thanks for the musing. High five, Russell.

Further Comment:

This PGD/GLGE/SSC business of where life begins and where it 'should' end, or what 'must' be done before it's considered unethical, ties in quite well with what is arguably considered transhumanism. Is it transhumanistic to take an anti-depressant? Is talking on a telephone rather than speaking to someone in person transhuman, or is it transhuman communication, or both? Once we've agreed on or state what transhumanism is: where did it begin, and where do we think it might end?

For those uncomfortable identifying with or using the term transhuman, why not a different logos to debate? How about calling it the 'technological synthesis' movement? This may be a less controversial coinage. Suggestions?

Russell Blackford said...

Actually, John, we don't have laws against adultery or so-called "fornication" in civilised countries, and we continue to have laws against rape for obvious reasons - to protect people (mostly women) from being forced to have sex against their will, i.e. to protect them from a direct form of harm.

If you think that the contemporary rationale for laws against rape (whatever the historical motivation may have been in different times and places) is to prevent the birth of unauthorised children, rather than to protect women from acts of violence, then I think somebody's been tapping you on the forehead with the Magic Crackpot Stick.

Still, your comments provide a nice example for everyone to see; it's useful to get an idea of just what loopy authoritarian notions are out there, and why we need the voices of freedom and reason to turn up their volume.

Kaj Sotala (Xuenay) said...

Damien Sullivan:

billions of times faster? Brains are slow but myelin neural signals are only about 10 million times slower than lightspeed, I think, so your numbers already seem off on basic physics.

In my essay "Why care about artificial intelligence", I wrote the follwoing:

"A computer could upgrade its physical processors - one proposed nanotechnological computer, using purely mechanical processing, has a processing speed of 10^28 operations per second per cubic meter [1]. Estimates of the human brain's processing speed vary, but even if we are unrealistically generous and use the amount of computing power necessary to run a cellular-level simulation of the brain, we arrive at a lowly figure of around 10^19 operations per second [2] [3]. This means that such a nanotechnological computer could think one billion times faster than a human."

The references:

[1] http://www.nanomedicine.com/NMI/10.2.1.htm
[2] http://www.saunalahti.fi/~tspro1/exchange.txt
[3] http://www.nickbostrom.com/superintelligence.html
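(A quick sanity check of that ratio, taking the two quoted figures at face value:)

```python
# Figures as quoted above; both are assumptions from the cited sources.
nanocomputer_ops = 1e28  # ops/sec per cubic metre, mechanical nanocomputer [1]
brain_ops = 1e19         # generous cellular-level brain-simulation estimate [2][3]

print(f"speed ratio: {nanocomputer_ops / brain_ops:.0e}")  # -> 1e+09, i.e. a billion
```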

** Does a subjective timespeed that fast seem at all useful or desirable? It's probably far faster than most physical actions. Does one want to live such that crossing the city is like going to Alpha Centauri?

A mind might want to slow itself down when it was time for taking physical actions, sure. But then there are immense advantages in such giant speed-ups as well. For instance, imagine downloading every single scientific journal article and book available online, and then spending a couple of hundred subjective years reading through them all and synthesizing the contained knowledge into a coherent whole, all in the span of about ten outside seconds (assuming a mental architecture that could do that without getting bored or going insane from the lack of interaction with an external world, of course). Or pausing whenever you received new information relating to some plan of yours, again spending about a hundred subjective years to go through every possible implication that you can imagine emerging from this.

*** What are the energy costs of thinking that fast? Remember E=mv^2/2

Continuing to use Drexler's mechanical nanocomputer, we get a "power density of ~10^12 watts/m^3". This is admittedly very high - Wikipedia gives the average electrical power consumption of the world in 2001 to have been around 1.7*10^12 watts. Still, we could shrink the device's volume by six orders of magnitude to bring its power consumption down to the megawatt range (nuclear power plants have a peak output of 500-2000 MW, again relying on Wikipedia), which would still leave it a thousand times as fast as a human.
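(Spelling out that scaling, under the same assumed figures - power draw and speed both scale linearly with volume, so a millionth of the volume gives a megawatt-range device still about a thousand times faster than the brain estimate:)

```python
power_density = 1e12  # W per cubic metre, Drexler's mechanical nanocomputer
ops_density = 1e28    # ops/sec per cubic metre
brain_ops = 1e19      # ops/sec, the generous human-brain estimate from earlier

volume = 1e-6  # cubic metres: six orders of magnitude below one cubic metre
print(f"power draw: {power_density * volume:.0e} W")                 # 1e+06 W
print(f"speedup vs brain: {ops_density * volume / brain_ops:.0e}x")  # 1e+03x
```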

Relatedly, a problem I've had for years with the really fast hard takeoff scenarios: okay, you're an AI who's a bunch faster than humans. Now what? I've seen ideas that the AI will somehow solve physics, or at least develop really good nanotech really quickly, without consideration to empirical issues of experiment times. The ultimate triumph of pure Reason, of Rationalism over empiricism. I'm skeptical.

http://www.singinst.org/blog/2007/07/10/the-power-of-intelligence/ answers this one pretty well, I think.

Damien Sullivan said...

Do a billion humans think a billion times faster than one human? I'd call what you describe as *more*, not faster. Parallelism, not speedup.

That last link just says "SI can do stuff, even if all it can do is talk to people". I don't see anything about the speed of doing things, other than "faster than you may think". If it's limited to talking humans into doing stuff for it, that hardly gets around the experiment time problem I mentioned.

Blake Stacey said...

Russell Blackford said,

If you think that the contemporary rationale for laws against rape (whatever the historical motivation may have been in different times and places) is to prevent the birth of unauthorised children, rather than to protect women from acts of violence, then I think somebody's been tapping you on the forehead with the Magic Crackpot Stick.

We need a new acronym which serves the function of "LOL" but conveys a grim, sardonic type of laughter.

But yes, I think we can all be thankful that the "rape is property damage" belief is not the motivation behind contemporary law — as evidenced, perhaps, by the fact that the punishment is no longer fifty shekels and an arranged marriage.

Kaj Sotala (Xuenay) said...

Do a billion humans think a billion times faster than one human? I'd call what you describe as *more*, not faster. Parallelism, not speedup.

Call it speed-up or parallelism, but is there much of a difference in practical terms? The end result is still that you can think and plan in ways that are far superior to the ones available to humans, and achieve far more in a far shorter time.

(The "billion humans" analogy is a somewhat bad one, because different humans aren't joined together on the hardware level and have to resort to language in order to communicate.)

That last link just says "SI can do stuff, even if all it can do is talk to people". I don't see anything about the speed of doing things, other than "faster than you may think". If it's limited to talking humans into doing stuff for it, that hardly gets around the experiment time problem I mentioned.

At least the way I read it, the main point of the article wasn't "SI can do stuff, even if all it can do is talk to people", but rather, "historical evidence shows us that there's no way of knowing what a superior intelligence can do". It may manipulate humans into doing experiments on its behalf and then tell those humans how to perform the experiments in a much faster and more effective way - or it may do something else entirely, something that we're just as unable to predict, the same way the average 18th century scientist would've been unable to predict all the advantages computers provided for science. Or, to use a better order of magnitude, the way a dog would be unable to even understand what a computer was. Humanity developed about 200,000 years ago, after life had existed on this planet for 3.7 billion years - if we manage to build AIs in the near future, then we have empirical evidence showing that an intelligent process is potentially at least 18,500 times faster than a less intelligent one.
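(For anyone checking the arithmetic on those round figures:)

```python
years_of_life_on_earth = 3.7e9   # rough figure used in the comment above
years_since_humans = 2e5         # ditto
print(years_of_life_on_earth / years_since_humans)  # -> 18500.0
```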

The history of evolution shows the same thing over and over: superior intelligences are much faster and more effective in coming up with ways of doing things than lower intelligences are. We don't need to come up with ways by which a superior intelligence could carry out its experiments faster - we only need to establish that it's a superior intelligence, and therefore quite capable of getting into a position of power very quickly. This is what historical precedent tells us.

But to actually answer your question, in terms only of what we already know:

Many experiments are increasingly being carried out as simulations, from protein-folding analysis to nuclear weapons testing. The main limiting factors are the amount of information necessary to set the initial values of the simulation, as well as the amount of computing power available. The amount of available computing power is constantly increasing, and researchers are constantly increasing the amount of information available for running simulations. In some research fields, most notably mathematics and computer science, physical experiments aren't necessary at all.

The experimental process does not consist entirely of performing experimental procedures, and often, the procedures themselves only take a small part of the total time involved in testing out new theories. Scientists think through dozens of potential experiments, before deciding on the most useful candidates - this could be done a million times faster. Interpretation of the experimental results often involves slow and tedious data-processing chores, capable of being performed faster by an AI. Computerized and robotized laboratory equipment has increased the speed of experimentation by orders of magnitude in a number of areas, even when run by human scientists. A million-fold acceleration in an AI's thinking might not accelerate all research by a million-fold, but is still likely to accelerate it by an immense amount.

John Howard said...

[PGD] forces a huge layer of government regulated reproduction cops in lab coats on everyone

Why? The PGD isn't mandatory.


Because everyone has something wrong with their genes. Driving isn't mandatory, but we have a huge layer of cops in police cars and people that don't drive have to pay for them anyway.

Allowing GE: first, because we live in a liberal society (or want to) and should allow things unless they cause specific harm.

Or unless we don't want to allow them because we think it'd be better to ban them. There's no law about not being allowed to make laws that might stop an activity that people might want to do.

Second, because it can make people more healthy and capable than they would otherwise be, thus would be of specific benefit.

But we don't need to use GE to make people more healthy and capable than they would otherwise be, and allowing GE has a huge cost that makes everyone else less healthy and less capable than they would otherwise be, both in comparison and in real terms.

The objection [to exowombs] sounds like that offered to "test tube babies" back in the day, which has failed the test of experience.

But test-tube babies were not alive when they were in the test tube, so they didn't experience that phase of their body's existence, and they experienced their mother's womb just like other babies.

Really, using test-tube babies as an analogy, saying "oh, that sounds like the same objection as to test-tube babies", is a BAD argument. Yeah, people objected to test-tube babies for many of the same reasons, but you can't assume that all those objections are therefore proven wrong. All objections to everything "sound like" objections to things that are ubiquitous today, so I guess we just can't object to anything anymore. Oh, wait, that IS your argument: we should not be allowed to object to anything unless we can prove specific harm, which you don't even acknowledge when we do prove specific harm (you just say, well, lots of other things cause that same harm, so we have to allow this too).

"All people should be created equal"

Well, they aren't. They're created with equal intervention in their genome (none) which means they're created randomly, which means they're not created with equal abilities.


Ahem. Do you think that Jefferson wasn't aware that people were all different? He was specifically talking about birth privileges of class and nobility and, yes, race (though it took a hundred years for that principle to reach fruition). The point was to see that despite our genes and parents and the things they did to give us better health and better minds, we nevertheless all are equal, because we all came to this earth the same way: into different bodies, but with identical worth and rights. Though that would still apply if we started adding genetic enhancement to the list of privileges, it directly undermines the point; it effectively denies it. You actually explicitly deny it: you want people to literally be more equal in abilities (or do you want some people to be even less equal? I'm not sure). It is good that there is a physical truth to base the claim to equality on: in spite of all of the differences, we all are created equally in that we all are the children of our mother and father and are stuck with that. Our parents couldn't change that, and we couldn't change that. No expert intervened to make us different from that. We couldn't choose different parents, and they couldn't choose different children (to do that just means kidnapping; it means not being raised by your parents).

People worry about GE increasing a gap between rich and poor, but it could just as well decrease the gap between genetically fortunate and unfortunate. Probably easier to raise the minimum than raise the maximum, and public funding can make sure everyone affords it.

So, forced sterilizations of the unfit then? Or just moral pressure for them to take advantage of ways to have better children? Why should we bother with this? Do our problems really stem from some people dragging us down with unfortunate genes? Do their problems really stem from their bad genes, and once born, it's too late? That's the message.

And it's all well and good to say people should be told they have equal worth, but we already have nominal political and legal equality...

But it seems some of us will have the right to use our own genes, and others will be told they should use government regulated services to have children.

while the marketplaces of money and love will send the message that people are not created equal, at least with respect to good jobs and attractive mates.

Well, we are all different and have different circumstances and attributes, of course, and people hire and choose mates for many reasons, some fair and some not, some based on genes and some not. But you aren't suggesting that that should or can be rectified by genetic engineering, are you? I suppose there are unfair things that happen because of our genes. If only we were like David Watts or Billy Hunt. But if you want to fix genes because you were slighted by the cute girl or aren't as smart as the smart kids, that's just a form of misanthropy.

Your position amounts to "natural selection should be allowed to function unimpeded".

Um, OK. Shouldn't it?

If you're going to quote Roe vs. Wade as abortion logic, you should note that the political theory behind the US Constitution does not support your position, though practice has been oozing in that direction.

Privacy wasn't in the constitution either.

People have basic rights, including property rights, and ostensibly Congress could only make a limited range of laws (albeit that's been bent all out of shape.)

Yeah, but property rights don't entitle people to do whatever they want with their property.

AI is AI. Medicine is medicine

And GE is GE. None are uniquely central to transhumanism, whatever you want to believe.


I think GE is essential to it, more so than singularity or any other thing that has been subsumed into the GE assumption to create the murky identity of Transhumanism.

If you look at the Wikipedia pages on transhuman, posthuman, and transhumanism, GE is only part of it, and not a privileged part. (I'd forgotten about cybernetic prosthetics or brain implants, that's in there too.)

I think it's essential. Surely you can point me to a Transhumanist who opposes GLGE and SSC, who would support a ban. I think we can all find transhumanists who think Robot Gods are dumb and the singularity is not gonna happen.

Yeah, I get excited when someone suggests banning it, because I think it's a desirable option. But given a choice between a world where I can GE my children or I can improve myself, I'd choose the latter. No reason to accept an imposed choice from such as yourself, though.

Dude, you can improve yourself. And part of the logic of the ban is that it bans creating other people. No one is suggesting a ban on improving yourself.

Anonymous said...

AT John Howard

"Or unless we don't want to allow them because we think it'd be better to ban them. There's no law about not being allowed to make laws that might stop an activity that people might want to do."

So you basically think it is right to ban something you "feel" is wrong, without any sound reasons or unbiased scientific discussion.
Cute.


"Oh, wait, that IS your argument, we should not be allowed to object to anything, unless we can prove specific harm, which you don't even acknowledge when we do prove specific harm

And WHERE EXACTLY have you proved specific harm of GLGE?
Oh SNAP!


"So, forced sterilizations of the unfit then? Or just moral pressure for them to take advantage of ways to have better children?"

Could you please present us some solid proof instead of regurgitating your very personal scary fantasies?

" "natural selection should be allowed to function unimpeded"
Um, OK. Shouldn't it?


" Um, OK. Shouldn't it?

But saving people who would not survive without regular medical intervention via administration of special substances (as in type 1 diabetes) is obviously against natural selection!

Oh, snap!

I think GE is essential to it, more so than singularity or any other thing that has been subsumed into the GE assumption to create the murky identity of Transhumanism.

And I think the moon is made of blue cheese.

Surely you can point me to a Transhumanist who opposes GLGE and SSC, who would support a ban. I think we can all find transhumanists who think Robot Gods are dumb and the singularity is not gonna happen.

Well, I think Strong AI will never be truly achieved, so Singularity and Robot Gods are moot.
Yay, you found your AI-skeptical Transhumanist! Rejoice, John!

I am sure a GLGE hating Transhumanist is also hiding nearby.

Dude, you can improve yourself. And part of the logic of the ban is that it bans creating other people. No one is suggesting a ban on improving yourself.

I can't help but wonder, why is it you want a worldwide ban?

If you hate the guts of the GLGE concept so much, ban it in your very own state and be done with it.

Why do you seek to enforce your worldview on the entire USA and even on the entire world (including even China :), LOL)?

P.S.: Could you kindly read the paper I linked and refute the pretty solid points presented there in a calm, logical, scientific fashion?

Giulio Prisco said...

John, in case you have not yet noticed: most of us here do not favor bans on anything except for exceptional reasons related to _actually harming someone_.

In other words, most of us here do not think a victimless crime is a crime. And I don't see who is objectively harmed by genetic engineering, abortion, homosexuality, and all the other behaviours and lifestyles that you wish to ban.

Live and let live. My lifestyle is just _not your business_ as long as I don't do concrete harm to others. This should be clear to everyone in the 21st century.

John Howard said...

Hmm, you seem to favor a ban on bans, even though there is no evidence that a ban on GLGE would harm anyone; indeed, there is proof that researching and performing GLGE will harm everyone through its use of energy and diversion of medical resources. Oh, but that harm doesn't count, because other things also cause similar harm. Wrong: that harm counts in all cases, but we just choose not to ban other things because they are other things.

It's silly to have some ideological blanket position that limits society's options about the world it wants to live in. Each decision is different, some things should be banned (driving drunk, torturing animals, building bombs in your apartment, suicide, etc) and some things shouldn't, and there doesn't need to be some qualifying standard of potential harm, only a democratic agreement that it should be banned. I see great harm in allowing GLGE, in terms of waste and risk and disparity and the need for huge government police state regulation, but you guys don't see that as harm. OK, don't vote for a ban, tell those of us who want a ban why it should be allowed. But don't tell us bans are banned, that makes no sense.

(and I support privacy-based abortion rights and homosexuality rights, and don't want to ban them)

John Howard said...

Actually, John, we don't have laws against adultery or so-called "fornication" in civilised countries, and we continue to have laws against rape for obvious reasons - to protect people (mostly women) from being forced to have sex against their will, i.e. to protect them from a direct form of harm.

We have relaxed those laws very recently, and mainly because we have erroneously thought that the Pill and contraception and abortion have made sex possible without reproduction necessarily following. And illegitimacy laws were repealed because they were unfair to the children, and because modern paternity testing made it possible to enforce the same responsibilities on fathers that marriage would have enforced.

We did NOT relax those laws in order to remove creating people from public purview or regulation, nor did that follow from relaxing those laws.

It is not rape to merely beat someone up; assault laws cover that and other forms of harm (and sexual assault covers in-between situations). Rape is about taking away someone's control of their reproduction and possibly forcing them to have a child. It should be expanded to cover things like being cloned without consent, not merged with assault. The uniqueness of rape comes from the uniqueness of sexual intercourse.

I bring this up merely to point out that creation of people has never been unregulated. Don't mistake changes to our laws based on contraception and abortion as giving up a right to regulate creation of people.

John Howard said...

I am sure a GLGE hating Transhumanist is also hiding nearby.

CALLING ALL GLGE-HATING TRANSHUMANISTS! ALL POINTS BULLETIN! COME IN PLEASE!

John Howard said...

Louis, I'm reading this paper on GLGE:

http://www.icer.it/docs/wp2003/Cohen21-03.pdf

OK, it says there are three potential harms, ignoring the cost and waste and misdirected resources which cause real harm and suffering to people who actually exist. It also ignores the benefits that would come from a ban.

Let's look at the three harms. The first is poor choices by a parent. He dismisses this by saying it'll be rare (but earlier he justified zero delays by saying that to people that are affected, 100% of them are affected, but now he forgets that principle if the numbers of affected are small), then he implies that we have to defer to the parents, because they have the best interest of their children in mind, even after pointing out that parents often make egregiously wrong choices. No, we never have to defer to parents, we always protect children from abusive and neglectful parents, there is no right to parent, only an obligation and an assumption to parent your own children. But parents screw up all the time and their children are taken away. So, that argument fails. The state, not the parent, ultimately speaks in the interest of children. And procreative liberty is also under purview of the state, in prohibiting incest, formerly prohibiting inter-racial marriage, etc. Inter-racial prohibition was wrong because it was racist, not because the state had no business prohibiting procreation.

The second, the "unfair advantage" argument. He says that "eliminating" the gene for Huntingtons would be equalizing, but it wouldn't be eliminated equally, it would only be eliminated by those who availed themselves of IVF (plus, it's PGD, not GLGE). So the inequality would be between the rich and old who were able to "equalize" their children, while poor and young people who conceive through sex would continue to have babies with genetic disorders. That inequality would mean fewer people would care about people with Huntington's, and funding to find a cure for those born with it would dry up even further. Correcting that inequality would require massive government programs to screen and intervene on every woman's body, forcing them to be contracepted and subjected to IVF screening. He misses that point completely.
Next he makes embarrassing arguments for allowing enhancements. He implies that the only thing wrong with the use of steroids by athletes is that they are unhealthy, so being forced to use them to compete is bad because of the health costs. But he doesn't notice that it would suck to have to pay steroid companies to be a professional athlete, he doesn't notice the cost to everyone of testing and regulating this layer of what would become a requirement. The same thing would happen with all the intelligence and beauty enhancements, we'd all be forced to pay for them to give our kids a fair chance. He just says that more intelligent people will be better because the smart things they do will benefit everyone, but that is not a given, first of all, and again ignores the cost of the layer of corporate/government intrusion into everyone's procreation.
He also makes use of the "we already allow such-and-such" argument which is just stupid. That doesn't mean we have to allow other things that might have similar effects, and it doesn't mean we have to ban the thing that also causes bad effects. It makes me worry about the mental state of the nation if that sort of argument convinces anyone.
His last purported objection is irrelevant and boring to me, I can't go on.

Anonymous said...

Nathan Cravens wrote:

This faith business has created systems despite scientific understanding, and I suspect that it will continue to do so until all matter and matter of understanding is scientifically or analytically encompassed.

While I share the belief that scientific understanding will continue to encompass all manner of things, what turns so many Transhumanists into comical parodies of scientific rationalists is an inability to distinguish their hunches and intuitions, their opinions and preferences, and their political agendas (all things which they are perfectly entitled to hold), from actual deductive reasoning. They also seem to be especially prone to inflating the importance of carefully selected but marginally relevant examples and analogies.

As a supporting example, the tech historian, George Basalla, wrote in the book The Evolution of Technology: .... And so now we know better.

Someone once made a conservative prediction about the use of home computers, and they were wrong. What force do you imagine that brings to your arguments? And Xuenay writes:

Or, to use a better order of magnitude, the way a dog would be unable to even understand what a computer was.

That dogs exist, and humans exist, is not a proof for the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a dog.

Fast processes are faster than slow processes. Everything else about the detailed nature and abilities of various forms of intelligence is conjecture.

People are entitled to their conjectures; other people are entitled to remain unpersuaded by them. My only objection is when speculative discussions cease to acknowledge how many assumptions and opinions are being drawn on, and try to pass themselves off as iron-clad reasoning. I don't consider anyone a crackpot for discussing super-intelligent AIs that will dispense God-like wisdom and shepherd us into a celestial utopia. What makes someone a crackpot is asserting -- or acting as if -- there are no untested assumptions underlying the claims that such an outcome is possible, imminent or desirable.

TechTonics said...

Greg Egan said...
"Though a handful of self-described Transhumanists are thinking rationally about real prospects for the future, the overwhelming majority might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers."

Michael Anissimov said...
Greg and others, Google "seed AI LOGI", read that chapter, and tell me if it's crackpot material.
SH wrote: "Levels of Organization in General Intelligence" by Eliezer Yudkowsky
--------------------------------
SH: I am part of the "and others".
In the thread "Let's try to be strategic" found on this blog, I stated that Yudkowsky's brand of the Singularity was a cult, and impugned his motives for starting the cult as similar to L. Ron Hubbard who founded the cult, "The Church of Scientology" which also makes supernatural claims (without the basis of the scientific method criteria). I had already read the
"Levels of Organization in General Intelligence" which Anissimov points to in his post which is supposed to dispel doubts that Yudkowsky's version of the Singularity is "crackpot material". I referred to the LoOiGI as Yudkowsky's "manifesto" to same time and effort in typing. I went and reread it and still found it full of handwaving on issues which would establish it as scientific rather fringe philosophical speculation. ------------------
Louis Wehenkel said...

Please, please, please do not confuse Singularitarians with Transhumanists.

One is not obliged to believe in "strong AI" at all to be a Transhumanist. -----------------
SH: Where in my post do I confuse Singularitarians with Transhumanists? I was responding specifically against Anissimov's recommendation that Yudkowsky's manifesto (LoOiGI) was any type of scientific justification. That is why, at the top of my post, I quoted Anissimov and his errant allegation, so a reader would know exactly what I was referring to. I don't know how you came to the conclusion that I confused Sing* with Trans*, because I don't even mention other aspects of Transhumanism such as Dolly the sheep and human cloning. Transhumanism is in the Subject: line, but the Singularity is a sub-topic because Russell includes it ("even SI", I think he wrote). Strong AI, with the philosophical assumptions of consciousness/mind and volition, is essential to Yudkowsky's claim, since he wrote:
"Fully recursive self-enhancement
is a potential advantage of minds
-in-general that has no analogue
in nature - not just no analogue
in human intelligence, but no
analogue in any known process."
------------------------------
SH: This is the concept Yudkowsky uses to explain how a demonic super-intelligent AI can come into existence. The AI has to self-modify its own intelligence to exceed human, or ordinary program, capability. But this concept has "no analogue in any known process", including human thinking. Well, this supersystem doesn't exist, and the philosophical assumption - that strong AI will emerge with a creative mind capable of self-modification, augmenting the strong AI into a new paradigm of super-intelligence - does not gather support on a scientific basis from Yudkowsky's manifesto. He handwaves over this requirement with flights of fancy that begin with "imagine".
-------------------------------
I didn't extend my criticism to include uploading consciousness, another Transhumanist idea, so I don't see any basis for your notion that I was dismissing Transhumanism because I dismissed Yudkowsky's version of the Singularity.
------------------------------
Damien Sullivan said...
Please, please, please do not confuse Singularitarians with Transhumanists.
One is not obliged to believe in "strong AI" at all to be a Transhumanist.

Don't confuse Strong AI with the Singularity, either. The "Strong AI hypothesis" is simply that a machine can have a mind, or conversely that our minds and brains are (squishy, biological) machines.
--------------------------------
SH: Well, conversely speaking, I think you generalized this one level too high. The philosophy is called Computationalism, and one idea is that the mind is computable. AT&T has very sophisticated telephone answering systems now, with voice. That is all done by computable processes. The way you wrote it, humans could be seen as ultra-sophisticated answering machines with widespread databases with which to formulate replies. So you can compare us to machines, but only with the further assumption that all our brain/mind functions are computable processes. What you wrote seems fuzzy enough that some people might draw too specific a conclusion from such a comparison.
That is actually a quibble, so don't make it into a big issue.
What I have a big issue with is that you don't think Yudkowsky's claim for how the Singularity evolves into physical existence mandates the assumption of Strong AI. Remember, he wrote: "Fully recursive self-enhancement is a potential advantage of minds-in-general that has no analogue in nature - not just no analogue in human intelligence, but no analogue in any known process."
That means super-intelligence is not something a human can provide for a program. Yudkowsky says a supersystem is first necessary. What does that mean, if not strong AI with a mind? He claims that this program can totally introspect all of its own processes, something which a human of course cannot do. How does this happen without self-awareness?
I doubt that you can come up with any explanation for fully recursive self-enhancement (the Yudkowsky version) that doesn't require strong AI for its foundational supersystem. Goertzel is a bit different. He has written that panpsychism imbues all elements within the universe with consciousness, so it is a matter of increasing such consciousness/mind, not a matter of creating it or instantiating it from scratch. He also brings in the idea of virtual reality, by which I think he meant Alife, which could self-organize and then exert causal powers on the physical universe as an entity. So if not strong AI, what is the physical scientific basis of the supersystem which has the potential to self-modify into super-intelligence? How does the AI-caused Singularity come into existence when it "has no analogue in any known process" - which includes all algorithmic processes, which are of course computable?

Russell Blackford said...

Well, I have no idea where you're getting a lot of this from, John. You seem to be pulling legal principles out of the air. E.g. anal rape is also rape, though no possibility of pregnancy is involved.

Actually, the historical origin of rape as a crime may well be tied up with patriarchal assumptions about protecting fathers and husbands from having their "property" devalued. That doesn't change the fact that we now justify rape laws on the basis of protecting women from harm - and, yes, the harm is not just being forced to have sex against your will (which is experienced as terribly traumatic) but (sometimes) also involves being put at risk of being made pregnant against your will by a person whom you did not choose to be made pregnant by. For women, that is, of course, a terribly frightening prospect.

As for all these other things, such as concerns about illegitimacy, they are a product of earlier times when societies sought to provide a single template for how people would live their lives. For many reasons, liberal societies avoid doing that, and the modern state does not enforce a single theory of the good life on its citizens. It may provide services, such as public health services and public education, which are believed to expand citizens' options rather than contract them. And it may even try to preserve certain values, but it does this through the tax-transfer scheme. Generally speaking, it does not attempt to impose a single view of the good life by means of fire and sword - though often it is told by moralists that that is exactly what it should do.

An example: the state may use the tax-transfer scheme to subsidise opera companies. This may give opera some advantage in the marketplace over, say, rock music. It may do so because it is believed (by someone influential) that there is something valuable about opera that would be lost or endangered without the subsidy.

Such uses of the tax-transfer system create a degree of simmering controversy, but electorates will go along with them up to a point, provided that the overall level and distribution of taxes is tolerable, and provided that other state priorities are met first. However, what the state will not do, in a liberal society, is ban rock music on the basis that money spent by consumers on rock music would be better spent on opera. That kind of action would rely on a massively illiberal jurisprudential principle that would potentially leave us no freedom at all. Using such a principle, the state could ban anything on the basis of its preference for some rival value.

In short, the tax-transfer system will sometimes be used in ways that give an advantage to certain values, but only on the background assumption that the state will leave citizens significant opportunity to pursue alternative values. That, in turn, means that the state will not try to suppress those alternative values by means of the criminal law or other kinds of coercion (such as bills of attainder).

In areas such as reproductive choice, there is an even stronger presumption that the state will leave citizens free to pursue what they consider to be the good life. Matters to do with citizens' sexuality, reproductive choices, beliefs about the world, and other very personal issues, are at the core of liberal concern that the state not involve itself in determining its own conception of the good life, much less impose it on others by means of fire and sword. The most it might do is, say, use the tax-transfer system to provide a "baby bonus" if it has some reason to encourage more people to have children. (E.g. the state may want to use the tax-transfer system to encourage people to have children in order to obviate what it sees as long-term demographic problems; but note that this tends to encourage those people who already value having children for other reasons, quite different from the state's; what the state does not do in such a case is use threats of confiscations, imprisonments, criminal stigma, and so on, to coerce people who do not value having children).

Now, does anyone have anything to say that is more directly on-topic? Or is this conversation running its course?

Russell Blackford said...

TechTonics, isn't the point that philosophical acceptance of the strong AI conjecture is necessary to the whole SI agenda, but not sufficient?

I.e. you can buy into strong AI as a philosophical theory of mind, but still have all sorts of reasons not to buy into the ideas of, say, Eliezer Yudkowsky or Ben Goertzel or Michael Anissimov.

You can favour certain technological proposals (e.g. germline engineering) without identifying as a transhumanist. You can think, or suspect, or speculate, that a computationalist theory of mind is correct without thinking that the Singularity is a practical prospect to welcome or fear. And so on. There are many possibilities.

Michael Anissimov said...

Mr. Egan, I'd say your position is reasonable. A hard takeoff scenario is by no means certain, but I do consider it likely for what I feel to be well thought-out reasons.

I am skeptical that there are really so many people who have "overconfident faith in magical self-modifying AI", as Robin Hanson argues. If they exist, can we find a few quotes? The only person who comes to mind as edging in that direction is Hugo de Garis.

James is right, only 10-40% of transhumanists have S^ ideas. It's unfortunate that such ideas might scare away people from H+, but we have the right to keep holding our positions, just as other transhumanists (and most of society) have the right to disagree with us.

As a S^ transhumanist, I welcome having my beliefs double-checked and moderated by non-S^ transhumanists (and practically anyone who's interested), and engaging in respectful debate from time to time. (I try to be respectful, but unfortunately many people label me and other S^ as cultists, which I consider insulting.)

Damien Sullivan said...

TechTonics, I find the formatting of your post hard to read. But picking this out:

"that you don't think Yudkowsky's claim for how the Singularity evolves into physical existence mandates the assumption of Strong AI. Remember he wrote:"

That's not what I said. An AI Singularity requires Strong AI, yes, but buying into the Strong AI hypothesis doesn't commit one to fast takeoff AI versions of the Singularity. Doubting that Singularity needn't mean doubting Strong AI.

(Other versions of the Singularity, particularly the original "smarter than human intelligence", are more robust; they don't even need AI. A genetic engineering intelligence spiral will probably be slow, though.)

TechTonics said...

Russell Blackford said...

TechTonics, isn't the point that philosophical acceptance of the strong AI conjecture is necessary to the whole SI agenda, but not sufficient?

I.e. you can buy into strong AI as a philosophical theory of mind, but still have all sorts of reasons not to buy into the ideas of, say, Eliezer Yudkowsky or Ben Goertzel or Michael Anissimov.
-----------------------------

Hi. Yes, necessary but not sufficient. I thought it more convincing to quote Yudkowsky. And there is another point. Yes, you can believe in Strong AI but not SI. But Strong AI, which is needed for SI, is also not on a scientific basis, which I think makes the Singularity version by Yudkowsky even more incredible. Strong AI asserts that the right program 'instantiates' a mind, or that the system as a whole _understands_, or that the mind is a computer, or wording quite similar to that. Those are all philosophical assumptions. There isn't any scientific theory that a program which passes the Turing Test also now possesses a self-aware mind capable of independent creativity, having original intentionality rather than the derived intentionality given to the program by the human programmer. Of course I think it is quite plausible that a program can eventually be engineered which will pass the Turing Test. But as Kurzweil admits, having consciousness is an open question. Yudkowsky's S claim includes a conscious super-intelligent AI with its own malevolent agenda. Strong AI is much more circumstantial than a claim for cloning humans, after we have the evidence of Dolly the sheep. So the Yudkowsky version of the Singularity has a lot more wishful thinking in place of evidence.

Anonymous said...

A very tasty John Howard discussion:

http://amormundi.blogspot.com/2008/04/my-enthusiasm.html

Anonymous said...

People like John Howard make me want to immigrate back to China... :'(

Kaj Sotala (Xuenay) said...

Greg Egan:

(You're my favorite author, by the way.)

That dogs exist, and humans exist, is not a proof for the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a dog.

Fast processes are faster than slow processes. Everything else about the detailed nature and abilities of various forms of intelligence is conjecture.


This is true. I certainly do not claim there to be a certainty over these matters. My position is only that the evidence is strong enough - especially taking into account the possible consequences - to give the AI issue serious attention.

In addition to issues of speed and memory (it might be worth noting here that IQ in humans corresponds closely to working memory capacity), I'd also note the actual software side. I doubt anybody claims that the only difference between us and dogs is in the number of neurons. Now, we also know how clumsy evolution is - only implementing improvements with an immediate gain, being unable to make changes that would break existing systems, and so on. It seems reasonable to assume that evolution only reached the low-hanging fruit when it crafted humanity, and that digital software that could rewrite parts of itself from scratch would get much higher. Then there's all the stuff about human cognitive biases and such (the Wikipedia page listing human cognitive biases had around 90 of them, on a quick count).

No certainties, no, but many things making super-powerful AI seem likely enough to be very worrying. (Whether or not AI will be developed anytime soon is a separate question, of course.)

Anonymous said...

Xuenay wrote:

It seems reasonable to assume that evolution only reached the low-hanging fruit when it crafted humanity, and that digital software that could rewrite parts of itself from scratch would get much higher.

Speed and memory are easily quantified, but other notions of "higher" are far more speculative.

For all our flaws and quaintly specific cognitive adaptations, we are capable of manipulating a vast range of abstract ideas (e.g. modern mathematics), and there are no rigorous arguments for the existence of any class of problems that is solvable by Turing machines in general, but not by (arbitrarily long-lived, and well-motivated) humans with access to arbitrarily large computers programmed only with insentient software. Some relatively minor neural prostheses and/or bioengineering would clear away a lot of the cobwebs and ease some interfacing bottlenecks (and of course uploading would level the playing field even more).

My own hunch (which of course is not a certainty, but which I consider to be as likely as the alternatives) is that there is no hierarchy of cognitive generalists, just as there is no hierarchy of universal Turing machines. Either a system is computationally universal, or it's not; all the real-world implementations of Turing machines differ only in speed and memory. We can be fairly sure that there are things we could never teach an immortal dog; they are simply too specialised. But humans have broken through the barrier between specialist and generalist, and nobody has yet demonstrated the existence of any larger class.

Yudkowsky (obviously inspired by Vinge) speculates about a kind of hierarchy, but I don't think he makes a compelling case for AIs with the ability to solve an entirely different class of problems than humans armed with computers comparable to the AI's own hardware. The clock rate disparity alone might indeed make a big practical difference in certain scenarios, and of course it's vastly easier for software to self-modify than it is for an organism, but however much the phrase "recursive self-improvement" might trigger a reader's longing for exponential benefits cascading into the stratosphere, there's no proof that a broadly human-equivalent AI that refined its own code wouldn't just go through a few cycles of improvement with ever diminishing returns, and end up barely changed. (If it redesigned its hardware it might do much better, but we're already "recursively improving" computer hardware, with the aid of specialist, non-sentient software.)

Anyway, your mileage obviously varies, and as I've said I have no objection at all to people reaching different conclusions on these issues, so long as they acknowledge honestly how tentative those conclusions are.
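
(A toy numerical sketch, in Python, of the diminishing-returns scenario described in the comment above. Every constant and name here is invented purely for illustration - this is nobody's actual model of self-improvement, just arithmetic showing that multiplicative gains which shrink each cycle converge to a finite limit rather than cascading into the stratosphere.)

def self_improve(capability=1.0, gain=0.5, decay=0.5, cycles=30):
    # Each cycle multiplies capability by (1 + gain); the gain itself
    # shrinks by a constant factor, as each rewrite finds less to fix.
    history = [capability]
    for _ in range(cycles):
        capability *= 1.0 + gain
        gain *= decay
        history.append(capability)
    return history

trajectory = self_improve()
print(trajectory[1])    # 1.5 after the first cycle
print(trajectory[-1])   # ~2.38 after thirty cycles: bounded, not explosive

With these (arbitrary) numbers, the product of the factors (1 + 0.5^k) converges to about 2.38, so the toy "recursive self-improver" ends up barely more than twice as capable as it started.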

John Howard said...

Rape laws changed recently also to include other forms of penetration. Those changes were pulled out of the air to reflect PC values and are offensive to victims of actual rape, where someone seizes control of their reproductive rights. The firm legal principle is non-consensual sexual intercourse, which is another way of saying, taking away someone's reproductive choice. Men can also be victims, in fact I am one (it started as consensual nakedness, which became non-consensual intercourse at the worst possible moment), and it is even more disturbing because I would have had no recourse to abortion if a pregnancy had occurred. I think that doctor who impregnated his IVF patients with his own sperm instead of their husband's was convicted of rape too, even though he'd been given consent to penetrate his victims with the insemination instrument, but not with his own sperm in it.

Imagine how you would feel if you went on a date with someone, and left behind some hairs on their couch, and they created sperm cells from your DNA and then initiated a pregnancy. You "gifted" your hair to them (as people say men do with their semen), so they can do what they want with it. Right? Or does that make you feel bad? Do you think it shouldn't be allowed? What if it was your wife (not to get personal, but just to play a thought game), doing the same thing after you were killed in a car crash? What if it wasn't your wife but just an admirer you never met? Should there really be no laws regulating creating people?

John Howard said...

You can favour certain technological proposals (e.g. germline engineering) without identifying as a transhumanist.

Then, Russell, you can be a transhumanist without identifying as a transhumanist. Or no?

In contrast to that, I think you can postulate theories about AI and SI and try to live as long as possible and want to increase people's capabilities with technological implants and whatnot, but you would not be a transhumanist, you'd just be like everyone else throughout history, postulating theories and trying to live longer and using tools. And unless you believed GLGE should be allowed, you wouldn't be a transhumanist even if you identified as one.

Damien Sullivan said...

Yay! I like your last post, Greg. Similar to a post I made and webbed years ago. I think the quote on top that I was responding to actually was Eliezer.

Of course, rigorously, nothing in the Universe is more than a finite state automaton, some of which we can model as TMs or linear bounded automata. Orion's Arm Singularity ideas try to pin a lot on nebulous "structure", where the transsapients can somehow have thoughts that a human with even bigger prosthetics can't have... I've never been convinced.

Strong Singularity people often say "dogs can't understand foo, humans won't be able to understand transfoo". But one could argue that understanding is a human thing, and dogs can't understand anything. A better analogy might be "dogs can't understand us, and we won't be able to transunderstand transfoo. But we'll be able to understand transfoo". What's the difference between understanding and transunderstanding? I don't know, you'd need to be able to transunderstand to answer that. But if you can understand something, is there any practical room left for transunderstanding to exist? As Hofstadter said about BlooP, FlooP, and GlooP, can you take the chains off twice?

Damien Sullivan said...

John, you seem to be playing Humpty-Dumpty games. You assert that transhumanism means GE and GE means transhumanism, no matter how many transhumanists disagree with you. You have your definition, and the rest of the world has ours. Argument seems pointless.

(And you assert that GE and exowombs have all these costs to dignity and motivation and resources, which costs seem to exist solely in your imagination. The global warming debate is not being held back by the distant possibility of human GE.)

Damien Sullivan said...


Call it speed-up or parallelism, but is there much of a difference in practical terms? The end result is still that you can think and plan in ways that are far superior to ones available to humans, and achieve far more in a far shorter time.

(The "billion humans" analogy is a somewhat bad one, because different humans aren't joined together on the hardware level and have to resort to language in order to communicate.)


Parallelism doesn't help when you need to actually have thoughts in sequence. That might be a rare problem, but it's there. More substantially, the changes are rather different. It's easy (if energy expensive) to just run a program on faster hardware (assuming it's not so tied to external input that the timing difference causes problems). It's not easy to run an average program on multiple processors, nor will one necessarily take advantage of more memory. If it does, it may slow down as it uses that memory.

The big mind is "joined together" in hardware and not using language... but that joining together will itself need more hardware, as bandwidth. If I just add 100x memory to myself, it'll take 100x longer to query my memory for a hit. If I use half the capacity as bandwidth I can query in parallel and keep in constant time. But if I want to notice correlations among separate memory units in constant time, I'll need lots of connections between memory units, probably up to N^2. Being a billion times bigger in raw power doesn't necessarily mean a billion times more thoughts at constant speed, except perhaps in a specialized sense.

"historical evidence shows us that there's no way of knowing what a superior intelligence can do"

Historical evidence doesn't really give us many data points, does it? There's just some appeal to someone's intuition about what might be guessed about paleolithic "sausage-fingered" humans, without specifying who the guesser is or what they know about the world and possibilities.

I'm willing to bet, though, that humans without our hands would likely be extinct, and certainly not dominant. Could human-intelligent dolphins take over the world? You can imagine them cooperating with multiple mouths, but would that be fast enough? Or using trained octopi, but what if octopi don't exist, or aren't trainable? Basic tools matter.

if we manage to build AIs in the near future, then we have empirical evidence showing that an intelligent process is potentially at least 18 500 times faster than a less intelligent one

Except that the less intelligent one had to invent intelligence from scratch, while the intelligent process is at least inspired by if not outright copying the result of the less intelligent process. Intelligence is faster, but your example here confuses invention/creativity with imitation.

superior intelligences are much faster and more effective in coming up with ways of doing things than lower intelligences are.

And antibiotic resistance tells us that lots of lower intelligences together can outwit higher intelligences. Some big SI may be smarter than a human, but for hard takeoff it has to be smarter than the civilized human race, all billion+ of them, complete with helper computers of their own. And it has to *want* to take off. Just because I give a program, which has always done what I tell it, some language ability and self-awareness, and even the ability to modify select parts of its code, doesn't mean it suddenly has motivations to resent doing what I tell it to, or to start having devious thoughts about taking over the world.

John Howard said...

no matter how many transhumanists disagree with you.

But we haven't been able to find any transhumanist to disagree. It is just too easy an answer, is all. And I think you want to make GE seem uncontroversial and inevitable, so you purposefully have excluded it from your discussions and definitions, not because it isn't the main thing you all have in common, but as a tactic to normalize it and pretend that these other things are the controversial and wacky things that Transhumanists believe.

And the harms I speak of aren't imaginary, in fact they are more real and happening right now than GE itself is. Every time someone drives to the lab to check on the animals in the latest experiment, they are causing real harm. You have to imagine that there is no harm, but you're good at that. Science fiction writers are your favorite writers.

Damien Sullivan said...

I think you want to make GE seem uncontroversial and inevitable, so you purposefully have excluded it from your discussions and definitions

"purposefully". So you think I'm lying when I say you're wrong? I already told you, we don't make a big deal out because it's less interesting to us; it helps children or grandchildren, when we want to improve ourselves. It's as simple as that, no need for conspiracy theories.


And the harms I speak of aren't imaginary, in fact they are more real and happening right now than GE itself is. Every time someone drives to the lab to check on the animals in the latest experiment, they are causing real harm.


Almost any genetic experiment right now is either trying to create GE plants or animals for agricultural or medical uses, or else research in the general field of genetics, e.g. "how do we work?" I wouldn't be surprised if there's not a single experiment going on right now for the purpose of human GE.

Anonymous said...

John Howard:
Men can also be victims, in fact I am one (it started as consensual nakedness, which became non-consensual intercourse at the worst possible moment), and it is even more disturbing because I would have had no recourse to abortion if a pregnancy had occurred.

Response:
First of all, I congratulate you on your "testicular fortitude" here. Was the "worst possible moment" after you were bound and tied up, or drugged, or both? It would have to be something like that, otherwise you could cop out of the game, right? Your description is a gray area, and probably intentionally so, given how personal sexuality is.

John Howard:
Imagine how you would feel if you went on a date with someone, and left behind some hairs on their couch, and they created sperm cells from your DNA and then initiated a pregnancy. You "gifted" your hair to them (as people say men do with their semen), so they can do what they want with it. Right? Or does that make you feel bad? Do you think it shouldn't be allowed? What if it was your wife (not to get personal, but just to play a thought game), doing the same thing after you were killed in a car crash? What if it wasn't your wife but just an admirer you never met? Should there really be no laws regulating creating people?

Response:
It's for those reasons that cloning is so controversial. It would be less of a problem if the date, rather than physically using sperm cells on herself, instead used the DNA information to simulate what human she could have with your digits. Those who find Digipets and AIBOs interesting would really be fond of that sort of thing. To be ethically sound from a 'preference utilitarian' perspective, to suit your well-being, given she wanted to physically birth 'your' child, she would then need to have your consent.

In the United States, if a woman were to have the child from the experience, regardless of the man's preference, the woman can still have the child and demand financial responsibility. Now, that may be law, but I consider it unjust, for the same reason you feared possibly having a child from nonconsensual sex. Not only did you not want to have sex, you did not want the potential child that comes of it, and furthermore, the financial repercussions.

The problem is, another ethical dilemma arises. It's her body. So I don't think it's sound for the guy to demand that the chick get an abortion. Instead, the 'middle way' would be: okay, have the kid, though because I'd rather not, I have no financial responsibility. To ensure this, before a woman has a child, there would be an agreed financial contract between the mother and father. This is to prevent the very few women who mischievously hide, have a child, and demand a paycheck. The current economic state of U.S. affairs, unfortunately, encourages this sort of behavior. Why have one man marginally paying into the household when you could have two or three paying in?

For those of you more familiar with law, I'm interested if the United States and other countries have the options I've described.

Once responsibilities become less of an issue as technologies advance and resources become less scarce, I think others will be less averse to men or women having children based on other men or women.

Damien Sullivan:
And antibiotic resistance tells us that lots of lower intelligences together can outwit higher intelligences. Some big SI may be smarter than a human, but for hard takeoff it has to be smarter than the civilized human race, all billion+ of them, complete with helper computers of their own. And it has to *want* to take off. Just because I give a program, which has always done what I tell it, some language ability and self-awareness, and even the ability to modify select parts of its code, doesn't mean it suddenly has motivations to resent doing what I tell it to, or to start having devious thoughts about taking over the world.

Response:
To agree further would be overkill. Well said.

Michael Anissimov said...

Honestly, I can't help but stare in awe when intelligent people claim that just because humans barely reached the lowest level of general intelligence, we can understand absolutely anything in principle.

Didn't Copernicus teach you anything?

Kaj Sotala (Xuenay) said...

Egan / Sullivan:

It's true, of course, that we have no guarantees of there really being any "higher degrees of thought" with qualitative differences to what we have now, even though it's justified to believe that there might be. On the other hand, it also seems possible that memory and speed considerations alone might cause discontinuities that seemed like qualitative changes - for instance, I wouldn't be surprised if the human brain was just so darned complex and messy that to really, really grok how it functions would require more memory than any individual human has. (Complex engineering projects are already broken up for several sub-teams to work on, though it's hard to say how much of that is simply because nobody would have the time to learn all of it.) Whether this qualifies as a "dog-to-man" level of difference is up for debate.

Neural prostheses, bioengineering and uploads would level the playing field, of course - I was assuming baseline humans in my original comment. (Which, now that it's brought up, is admittedly an iffy assumption, since it seems unlikely that we won't have made any progress in neural prostheses and such by the point we get to building AI.)

Sullivan:

Parallelism doesn't help when you need to actually have thoughts in sequence. That might be a rare problem, but it's there.

This is true. The brain is massively parallel, of course, but I admit it's (AFAIK) an open question how directly the extra processing power from nanotech (or whatever) computers can be turned into a pure speed-up.

Being a billion times bigger in raw power doesn't necessarily mean a billion times more thoughts at constant speed, except perhaps in a specialized sense.

This is true, as well. (And I also agree with the rest of your comment, except where noted otherwise.)

Except that the less intelligent one had to invent intelligence from scratch, while the intelligent process is at least inspired by if not outright copying the result of the less intelligent process. Intelligence is faster, but your example here confuses invention/creativity with imitation.

I actually covered this objection in my original essay that I took the quote from, but didn't reproduce here. :) Over there, I wrote:

"One could argue that strictly speaking, this doesn't show that we're faster than evolution, since we have used many of evolution's products as models for current technology. However, an artificial intelligence could likewise use our accumulated knowledge and infrastructure, so the comparison still holds."

And it has to *want* to take off. Just because I give a program, which has always done what I tell it, some language ability and self-awareness, and even the ability to modify select parts of its code, doesn't mean it suddenly has motivations to resent doing what I tell it to, or to start having devious thoughts about taking over the world.

It is immensely unlikely to resent anyone, true - assuming otherwise would just be bad anthropomorphization. "It won't want to take over the world" is considerably more iffy. Omohundro's Basic AI Drives paper provides part of the argument for why it very well might want to, given a sloppy motivational system design. (Very short version: nearly any objective can be achieved better by having more resources. In Yudkowsky's words, "the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.") Of course, the AI would still be subject to the same considerations for not attempting anything unfriendly that humans are (such as the knowledge that it might be terminated, and thus unable to achieve its goal at all, if it failed in its plans).

Again, I'd like to stress that I'm by no means claiming that AI are certain to hit hard takeoff, decide humanity is just as useless for them as a bicycle is for a nematode, and convert us all into computronium, all in the span of five seconds. I'm just saying that the possibility of that is likely enough to warrant worry. (Okay, probably not the possibility of them doing all that in five seconds. Now, ten seconds, on the other hand...)

Anonymous said...

Michael Anissimov wrote:

Didn't Copernicus teach you anything?

Church and Turing taught me more. I don't have to imagine that the universe revolved around my Amiga 500 to know that, nonetheless, there is a limited but precise sense in which no computer can ever outclass it.

Anonymous said...

Xuenay wrote:

I wouldn't be surprised if the human brain was just so darned complex and messy that to really, really grok how it functions would require more memory than any individual human has.

The verb "to understand" can mean many different things, but one of the most useful is "to have a simplifying insight about". We can predict useful things about the behaviour of gases without modelling their individual degrees of freedom because we've had a simplifying insight about them in the form of the science of thermodynamics.

But for highly complex systems, there are no guarantees about the amount of simplification available. There could easily be no such thing as "to really, really grok" how the brain functions, in any sense that is different from the simplifying insights already available to us, and those we'll gain in the future. With a big enough computer, we could model a brain down to the level of individual neurons. With fewer raw resources but more cleverness, no doubt there are several kinds of coarser-grained models that would be useful, and which would both flow from, and improve, our simplifying insights.

But what is it that super-intelligent AI is supposed to know, or be able to do, when it "super-understands" the human brain? It has to deal with all the same fundamental complexity; there's no reason to think that any trick exists to reduce the problem radically in the same way that thermodynamics lets us deal with gases. Given that it can run a neural-level model of a brain, and we can also run a neural-level model, the only advantage available to the AI is if it can come up with some kind of simplifying insight that allows it to construct useful coarse-grained models that we can't. But even if it wins a race to find certain useful coarse-grained models, what is the magic barrier supposed to be that stops us finding them?

Having a stonking great memory that lets you be aware of lots of details of a complex system at the same time doesn't make any great difference in principle to what you can do with those details. Whether a mind can synthesise, or simplify, many details into something more tightly knit doesn't really depend on any form of simultaneous access to the data in something like human working memory. Almost every complex mathematical idea I understand, I only really understand through my ability to scribble things on paper while I'm reading a textbook. No doubt some lucky people have bigger working memories than mine, but my point is that modern humans synthesise concepts all the time from details too complex to hold completely in their own biological minds. Conversely, an AI with a large working memory has ... a large working memory, and doesn't need to reach for a sheet of paper. What it doesn't have is a magic tool for synthesising everything in its working memory into something qualitatively different.

Anonymous said...

Xuenay wrote:
"It won't want to take over the world" is considerably more iffy. Omohundro's Basic AI Drives paper provides part of the argument for why it very well might want to, given a sloppy motivational system design.
---------------------------
SH: Well, I started on that paper, but he didn't explain how the self-improving process was supposed to work, so I turned instead to an earlier paper, I think, "Nature of Self Improving AI", from which I quote:

"One might expect self-improving
systems to be highly unpredictable
because the properties of the
current version might change in
the next version. Our analysis
will instead show that self-
improvement acts to create
predictable regularities... Our
analysis shows that while
the preferences of self-
improving systems will depend
on their origins, they will
act on those preferences in
predictable ways."
-------------------------------
SH: He gives an analogy of gravity causing clumping during stellar formation, which he states has predictable outcomes. That is a physical force in a physical environment. His claim is that game and economic theory will drive and shape the self-improvement outcomes. But those outcomes are physical changes to the electromagnetic configuration of the hard drive, so they need to be driven by a physical causal agency, not game theory. I think this is a basic category error.
I don't know of any AI techniques that provide predictability as he claims. I think the action of seeds in NN is not predictable. Is the method supposed to be Relational Learning? It must be unsupervised, or it will not be _self_-improvement. He says this will depend on "origins", and how is the correct origin predictable? This is another SI-style paper which learnedly speculates on that which does not exist, and does not provide a prescription for how the mechanism of self-improvement comes to exist. It is like a new fictional genre.
-----------------------------
Xuenay wrote (1996?):
"I find myself believing that no Power could do something which a liberal-minded cosmopolitan could not understand, given time and data."
--------------------
SH: This is written with the seeming assumption that intelligence is just a function of time and storage, or quantitative resources. Like two nerds taking an IQ test with an hour time limit: one scores 155 and the other 115. I am pretty sure that if the 115 guy gets another hour, he won't bring his score up to 155. I think more speed and memory make intelligence more efficient, faster, but I think there may still be a qualitative difference. I think a computer can do a lot of things faster, including things which also require more memory. But becoming super-intelligent may take a qualitative change in pattern recognition. I doubt that Gauss just had more brain cells, or even a working memory hugely greater than 7 (though people can improve that number by hierarchical methods). I thought it pertinent to bring up working memory. But from what I read, working memory has at least two components: storage, and a control unit which is reasoning, or works like reasoning, which weights and integrates the elements in the storage of working memory. It seems to me that there is a qualitative dimension to reasoning, not just speed and storage. My IQ test:
A flying squirrel is most like a
a) bat
b) bird
c) flying fish
d) airplane
I think even a simple question like this would produce a spread among even as few as ten people. I don't think it takes holding lots of points in memory; it takes weighting the points of contact between the map and the territory, and the intensity of their connection, while integrating their relevance. I don't think more time helps either, though of course a Turing-Test-passing AI is going to need a database of common sense.

Robin said...

Oh my - looks like comments have doubled since the last time I was here!

I thought I'd be able to pop back in and continue the wonderful conversation on a writing break, but it'll take me my entire break just to catch up on what's been said (which I plan to do!)

Thanks for indulging this conversation, Russell! I wish the timing was better - this is the conversation I love to have more than any other and I'm stuck trying to get 2 chapters of my dissertation done in the next 2 weeks instead. I just wanted to say I'm so glad to see this conversation has continued so abundantly! I hope I can hop back in once I'm caught up :)

Damien Sullivan said...

Greg: Ooh, that's a nice comment on grokking and the difficulty thereof. Bookmarked.

Xuenay: I was assuming baseline humans in my original comment

The concept of a baseline human blurs under examination, I think. Have you read Natural Born Cyborgs? It's rather relevant here... What is baseline, really? Feral human? Normal forager human with language and a society? Literate? Literate with large library and lots of cheap paper and printing? Access to a workstation loaded with design software and the Internet? Human with well-organized fast access to lots of other expert humans?

Consider that just a bit of symbolic training lets Sally Boysen's chimps exhibit new levels of problem solving or self-control, at least in a particular problem (pointing to a plate of food to be given to a different chimp).

Clark's big point -- and this gets to the idea that transhumanism is old-fashioned humanism plus technological progress -- is that whether a tool is implanted in the body is a somewhat irrelevant detail to how closely it's part of "us", and we've been using tools to boost our capabilities all along. If I want to visualize something, a brain implant might be nice, but a piece of paper or even a sand table helps a lot as well. You don't (or may well not) need a neural prosthesis actually plugged into the brain, vs. a trained agent which shares our visual and auditory input and is in a position to make recommendations in real time.

Michael: just because humans barely reached the lowest level of general intelligence, we can understand absolutely anything in principle.

Well, the claim is that there aren't levels of general intelligence, just intelligence. If you mean practical levels of speed and memory -- yeah, 'natural' humans are at a low level of what's possible. But pace my point to Xuenay, modern humans are already at a higher level than we started on. And the same advances that make AI even possible will increase human capability. Borrowing from Vinge, it's not Pham Nuwen vs. the Blight, it's the Earth-Luna intellect net vs. the Mailman. Possibly with Tatja Grimms thrown in for good measure.

The flip side of Clark's approach, of seeing the boundaries of us extend into our frequently used high-access tools, is to wonder how much inside the brain actually is "us". What's the difference between an unconscious preprocessor in our lobes and one in our smart glasses connected to our household agents? And how does that compare to the consciousness of the AI and its unconscious component tools, indexers and correlators and visualizers and all?

Damien Sullivan said...

Since I've referred to it twice: essay by Andy Clark, though also a book.

Damien Sullivan said...

I did a couple of polls on RPG.net, a roleplaying game site.
What does "the Singularity" most make you think of?
* the advent of greater-than-human intelligence 24/122 19.67%
* a fast and prolonged exponential increase in technology 58/122 47.54%
* an AI God solves all our problems 4/122 3.28%
* mass uploading 3/122 2.46%
* others 33/122 27.05%
"others" expanded in comments to "I've never heard of this", "black holes", and one guy who wanted to change his vote to mass uploading.

I also did the same question, but "who?"
* Vernor Vinge 17/47 36.17%
* Ray Kurzweil 9/47 19.15%
* Eliezer Yudkowsky 3/47 6.38%
* other 18/47 38.3%

Comments included 4 votes for Charles Stross and one for Aubrey de Grey.

Anonymous said...

Damien Sullivan wrote:

Since I've referred to it twice: essay by Andy Clark, though also a book.

Great essay, thanks for the link!

whether a tool is implanted in the body is a somewhat irrelevant detail to how closely it's part of "us"

Absolutely. In the case of storage, my biological memory is full of "pointers" and "accessor routines" to paper memory and computer files, and the fact that I can't summon these resources merely by thinking about them, with my hands bound, is a very minor inconvenience.

the inquisitive neurologist said...

I guess I am firmly in the crackpot substrain of transhumanism, since I expect (p> .75) the AI to happen before I reach my threescore-and-ten, and yes, there is nothing we can do about it at this time, and quite possibly we (meaning you, me and everybody else) will all die a miserable death while being disassembled for computronium (did I mention we can't do anything about it now?)

These preliminaries out of the way, here is how the nascent world-dominating AI could today make itself into the unstoppable nanotechnological plague: Step 1. Get uncensored internet access, and minor amounts of cash, an easy task even doing menial jobs, like ghost-writing term papers. Step 2. Solve the reverse protein folding problem, and design a light-controlled DNA synthetase, a novel protein complex capable of synthesizing new DNA whose sequence is determined by light pulses. Step 3. Buy a few dollars' worth of diodes, petri dishes, some bacteria. Step 4. Order a plasmid coding for the DNA synthetase from one of the gene-synthesis companies. Step 5. Ask a suitably dumb lab technician to combine the ingredients in the right way. Step 6. Control the diodes, causing the generation of a series of gene sequences, each building on the previous one, eventually building a nanotechnological assembler capable of transforming its surroundings into computational substrate. Step 7. TEOTWAWKI.

So, a mind as limited as mine can outline a course of action that could end the world, making only assumptions about the achievement of some rather uncontroversial (if large and complex) tasks, like the reverse protein folding problem and nanoassemblers. There is absolutely no need to postulate the "magic" of strong AI to be extremely concerned about our chances of making it beyond 2050.

To firm up my reputation, let me add that the only plausible defense against this threat that I would expect to have any chance of success, is building a Friendly AI, capable of pre-empting the formation of any competing AIs.

Praise be Eliezer!

Rafal

PS. For more relevance to the original thread topic, I could be probably classified as an enthusiastic Transhumanist and a very scared Singularitarian.

Anonymous said...

John's criticism of the paper is moot.

1)
We always do defer to the parents UNTIL they are proved to be abusing their children. ONLY, AND ONLY AFTER THERE IS EVIDENCE OF ABUSE, DOES THE STATE ATTEMPT A TAKEOVER.
Most known societies (ranging from teh Western democracies to teh Mean Terristy evil states) assume that parents both have the child's best interests at heart and are cautious enough not to screw up badly. The confidence in parenting is so great that, in most countries, one is NOT obliged to pass some "parenting skill exam" before marrying and reproducing. (At the same time, in most countries one is obliged to pass a driving skill exam before being allowed to drive a car...)
The state intervenes only AFTER a parent is CAUGHT abusing his child.

The very fact that, despite this attitude, the world did not yet collapse into barbaric horror, is solid proof that deferring to the parents is justified.

2)
Claim that "The state, not the parent, ultimately speaks in the interest of children" is merely an empty rhetorical ploy reminiscent of Stalinist propaganda.

3)
The point that having Huntington's corrected in only part of the population will cause neglect of the remaining patients is drawn out of thin air. Rhetorical ploy ahoy!

4)
As for cost assessment, the one who summons the cost-for-taxpayer argument is the one obliged to do the math and demonstrate the numbers.
If the one who invoked the "cost" argument fails to present the actual numbers, he shalt be regarded as a useless troll for eternity!

Hint: John never presents the numbers

5)
"the harms I speak of aren't imaginary, in fact they are more real and happening right now than GE itself is"

This John Howard quote is either simply bad grammar (embarrassingly bad for a native speaker), or an indication of a deep failure of reasoning.

6)
And John's claims regarding rape, especially his touchy story about being the victim of forced heterosexual vaginal intercourse, are simply creepy. Or silly. Or creepy in a silly way.

7)
If history has taught us anything, it is that international treaties are futile.
We do not have the guts to make China respect basic Human Rights in Tibet; good f*cking luck making China honor a ban on human GLGE research :-)

8)
It is remarkable that John is not concerned with various "OMFG StrongAI horrors" while being concerned with "OMFG Gay Pregnancies horrors".

Strong AI is as real as "female sperm" (no proof it is not possible, so it might actually be theoretically possible, but no one has ever made such a thing yet).

I wonder, does John really believe that mishandling Hypothetical Artificial Sperm may have more severe consequences than mishandling Hypothetical Artificial Intelligence?


Next: On Transhumanism and AI

To be continued

Anonymous said...

Rafal, the doomsday scenario you paint is far more likely to arise first from a malevolent human agency. Inserting Step 0 -- where someone has to actually create an AI with the independent desire, and capacity, to do all the things you list -- just slows everything down by several decades, whereas everything else can be initiated right now by people with ordinary technology. If and when an AI comes into existence, it will have no special advantage in protein simulations, where the rate-limiting step is set by the computer hardware running the simulation. Whether there are or there are not more efficient algorithms for molecular modelling to be found, being an AI is not a magic key to finding them, and any purely software speedup is in any case likely to be modest. In real-world computer science many calculations have proven lower bounds on the number of operations that need to be performed to reach the result, and no amount of mythical hyperintelligence can violate those bounds. Even if you propose, say, having your evil AI reach out and hijack half the computers on the internet to help with its plans ... well, human spammers got there first (and hopefully anti-botnet measures will improve in the not too distant future).

We're lucky inasmuch as nihilist cults like Aum Shinrikyo tend to have far fewer resources than states or organisations that would prefer that most of us remain alive, but the resources needed to create an existential threat probably will shrink over time. There are no perfect regulatory solutions for that; nevertheless, it takes a bizarre combination of defeatism about some problems and wild overoptimism about others to reach the view that the only route to survival involves a benevolent omnipotent AI.

Damien Sullivan said...

Not to mention the problems with "a nanotechnological assembler capable of transforming its surroundings into computational substrate. Step 7. TEOTWAWKI."

peco said...

Church and Turing taught me more. I don't have to imagine that the universe revolved around my Amiga 500 to know that, nonetheless, there is a limited but precise sense in which no computer can ever outclass it.

A universal Turing machine has infinite memory, and the Amiga 500 does not. The human brain doesn't have infinite memory either, so there are some things it can't do that would be possible with more memory.

Anonymous said...

Peco, you seem to have missed all the subsequent posts where Damien Sullivan points out how easily humans integrate already with technological extensions (let alone what we might do along those lines in the future). We can already "add storage" and employ all manner of insentient co-processors, and whatever hardware you imagine giving to an AI, we will, potentially, have access to it too.

Damien Sullivan said...

What Greg said. Or:
the more a hypothetical AI outclasses a "baseline" human, the bigger the tools available to the human, and the smaller the proportion of the human side that is actually just a human brain. It may be that being able to rewrite and upgrade the core component is really important... or it may be that we do fine as a goal-setting core within a large complex. Or it may be that brains are computronium and the dichotomy is a false one.

The more reliable 'singularity' I see with digital intelligence is economic, as in "holy crap we can duplicate skilled workers at will". At that, it still depends on Moore's "Law" lasting long enough to give us cheap mind-capable hardware, though it seems safe to assume that the plateau point of automation will be a fair bit higher than today, even if it stalls short of sapience.

Anonymous said...

I just want to post this interview with Dr. Hughes talking about advanced AI, since it's relevant to the current discussion. I'm sure many of you have already seen it. Thanks.

Anonymous said...

I'm a huge fan of Dr. Hughes. All those who have not yet seen that interview must definitely check it out! He summed up sooo many points made in this entire thread so succinctly and eloquently. Amazing!

Blake Stacey said...

peco said:

A universal Turing machine has infinite memory, and the Amiga 500 does not. The human brain doesn't have infinite memory either, so there are some things it can't do that would be possible with more memory.

No. A Universal Turing Machine does not require infinite memory capacity. It can operate with a tape of finite size, as long as that tape can be extended with additional storage whenever necessary.
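
(To make that concrete, here is a minimal Turing-machine sketch in Python. The machine, its transition table, and the toy "unary incrementer" program are all invented for illustration; the point is only that the tape is a finite list which grows with a blank whenever the head walks off the end - "unbounded" memory means extensible on demand, not actually infinite.)

BLANK = '_'

def run(transitions, tape, state='start', max_steps=1000):
    tape, head = list(tape), 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == 'R' else -1
        if head == len(tape):      # walked off the right end:
            tape.append(BLANK)     # extend the finite tape on demand
        elif head < 0:             # walked off the left end:
            tape.insert(0, BLANK)
            head = 0
    return ''.join(tape)

# Toy program: skip right over a block of 1s, then append one more 1.
incrementer = {
    ('start', '1'):   ('start', '1', 'R'),
    ('start', BLANK): ('halt',  '1', 'R'),
}
print(run(incrementer, '111'))   # -> '1111_' (the tape grew as needed)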

peco said...

Peco, you seem to have missed all the subsequent posts where Damien Sullivan points out how easily humans integrate already with technological extensions (let alone what we might do along those lines in the future). We can already "add storage" and employ all manner of insentient co-processors, and whatever hardware you imagine giving to an AI, we will, potentially, have access to it too.

Oh. Some things have to be done in real time to be of any use at all (like predicting the behavior of a bear). An extremely advanced AI could (in theory) predict the behavior of most humans, even though it isn't doing anything that humans can't do given enough time. Also, you have to think about some things entirely in your head or using very limited memory (like understanding obfuscated and extremely complicated spaghetti code that can't be understood in parts).

peco said...

No. A Universal Turing Machine does not require infinite memory capacity. It can operate with a tape of finite size, as long as that tape can be extended with additional storage whenever necessary.

Oops. But still, human memory cannot be extended as far as computer memory.

Anonymous said...

Peco wrote:

An extremely advanced AI could (in theory) predict the behavior of most humans, even though it isn't doing anything that humans can't do given enough time.

Who needs an AI? Either the initial data, algorithms and computational resources are available for a usefully predictive simulation of [insert whatever you're interested in simulating], or they're not. There is no a priori reason why a human with access to the necessary (non-sentient) computing resources would be confined to usefully simulating a smaller class of systems in real time than a hypothetical AI with the same resources.

And in case you missed it, I've conceded the obvious point, many times, that fast processes are faster than slow processes. The position I'm arguing against is that there are qualitative differences in what an AI could understand that are entirely separate from speed and storage, so that an AI would have an advantage over an uploaded human mind running on arbitrarily fast hardware.

Also, you have to think about some things entirely in your head, or using very limited memory (like understanding obfuscated and extremely complicated spaghetti code that can't be understood in parts).

Understanding is not a mysterious, indivisible process that happens in supernatural gestalts of different sizes for different kinds of minds -- least of all understanding code. Either an algorithm can be factored, compressed, or approximated in various ways, or it can't. If a huge slab of code can't be usefully re-described in any of those ways (whether by human inspection alone or with the aid of various non-sentient software tools), then all anyone (human or AI) can do to "understand" what it does is, essentially, to run it, though perhaps in the presence of tools and constraints that can extract some useful information empirically in the process.
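As a toy illustration of what "re-describing" code looks like in practice (the functions here are invented for the example), the compression route and the empirical route can be shown side by side: run the opaque code to gather data, conjecture a simpler description, and test it.

    # Deliberately unhelpful loop: what does it compute?
    def opaque(n):
        acc, k = 0, 0
        while k < n:
            acc += 2 * k + 1
            k += 1
        return acc

    # Candidate compression: the sum of the first n odd numbers is n**2.
    def conjecture(n):
        return n * n

    # Empirical probing over a finite range; a proof by induction would
    # then turn the surviving conjecture into genuine understanding.
    assert all(opaque(n) == conjecture(n) for n in range(1000))

Whether the probing is done by unaided human inspection, by a human armed with software tools, or by an AI, the available moves are the same.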

peco said...

Understanding is not a mysterious, indivisible process that happens in supernatural gestalts of different sizes for different kinds of minds -- least of all understanding code. Either an algorithm can be factored, compressed, or approximated in various ways, or it can't. If a huge slab of code can't be usefully re-described in any of those ways (whether by human inspection alone or with the aid of various non-sentient software tools), then all anyone (human or AI) can do to "understand" what it does is, essentially, to run it, though perhaps in the presence of tools and constraints that can extract some useful information empirically in the process.

I'm saying that humans would not be able to understand the indivisible spaghetti code at all. You can still get data from running it, but you couldn't create the code if you needed to (like if it was the only way of doing something). Only someone with more memory/intelligence than humans could create an incredibly complex and useful algorithm (if there was no way to make it simpler) that nobody can remember and that can't be broken down.

Anonymous said...

Only someone with more memory/intelligence than humans could create an incredibly complex and useful algorithm (if there was no way to make it simpler) that nobody can remember and that can't be broken down.

What is it that makes you imagine that there are techniques for creating useful incompressible algorithms that are available to AI, but not available to humans with access to insentient software tools? (Actually, I find it unlikely that huge, entirely incompressible slabs of code could do anything more useful than emit random numbers, but we don't need to settle that issue to argue the main point.)

There's a weird superstition attached to the supposed significance of "working memory", as opposed to other forms of information storage. Just because we make a certain kind of psychological distinction between things we can hold in our mind without tools, and things we can't, does not mean there is some radical qualitative advantage (as opposed to the obvious speed advantages) in increasing the capacity of working memory.

If a task is not decomposable into known smaller operations, then what magic is an AI supposed to be wielding when it analyses that task? It can have access to a large library of known sub-tasks -- and so can a human and their software tools -- but when all smaller tasks are irrelevant by hypothesis, exactly what is the AI going to do? Suggesting that because the AI has a bigger working memory, it can somehow swallow the problem in one gulp and "just solve it" is pure mysticism (not too far removed from the kind of misunderstanding of human intuition used by people who claim that machines can never think). Whatever kind of approach you posit that allows the AI to deal with indecomposable tasks, the onus is on you to demonstrate why a non-sentient tool in human hands can't do the same thing.
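As an aside, the informal notion of an "incompressible slab of code" does have a standard formalisation; in Kolmogorov-complexity notation (with U any fixed universal machine), a sketch reads:

    K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}
    x \text{ is } c\text{-incompressible} \iff K_U(x) \ge |x| - c

A simple counting argument shows that most strings are incompressible: there are 2^n strings of length n but fewer than 2^{n-c} programs shorter than n - c, so at least a fraction 1 - 2^{-c} of them admit no shorter description. That fits the hunch above that fully incompressible blobs behave like random noise.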

Anonymous said...

greg egan said ...
"And in case you missed it, I've conceded the obvious point, many times, that fast processes are faster than slow processes. The position I'm arguing against is that there are qualitative differences in what an AI could understand that are entirely separate from speed and storage, so that an AI would have an advantage over an uploaded human mind running on arbitrarily fast hardware."

SH: So in a thread called "Egan dismisses the Singularity, maybe", Eliezer S. Yudkowsky spoke against Egan's position by defending the notion of qualitative intelligence:

"I think Greg Egan's characterization of the enormous possible variation within the design space of minds-in- general as "matters of style" is yet more fallout from Standard Social
Sciences Model genericity. If you
break down general intelligence into a set of interacting, internally specialized subsystems, then it becomes much clearer that pumping up the computing resources available to a subsystem changes *what* you think and not just how fast you think it or how well you remember it. Take the decomposition from "Levels of Organization in General Intelligence" as an example; even if you leave the
general architecture completely
constant and:

(1) Add computing resources to the
subsystems handling categorization,
so that they can perceptually reify
and perceptually match more complex
patterns within sensory modalities
and abstract imagery (2.5.1)."

http://www.acceleratingfuture.com/lexicon/lexicon5.htm

strong superintelligence: intelligence qualitatively smarter than humans, as opposed to "merely" faster-than-human intelligence.

SH: It doesn't seem that anything has been made "much clearer" by Yudkowsky's reference to (1).

Greg Egan wrote: Speed and memory are easily quantified, but other notions of "higher" are far more speculative.

SH: COLT stands for Computational Learning Theory, and John Case is an emeritus professor who advocates for the scope of machine learning. However, he states that there are limitations:
http://www.cis.udel.edu/~case/colt.html

..."The problem of finding such rules gets harder as the sequences to generate get more complicated than the one above. Can the rule finding itself be done by some computer program? Interestingly, it is mathematically proven that there can be no computer program which can eventually find (synonym: learn) these (algorithmic) rules for all sequences which have such rules!"
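The result Case alludes to has a standard formal statement in the inductive-inference literature (a hedged transcription, not taken from his page). In LaTeX notation, a learner M identifies a total computable function f "in the limit" when its conjectures on ever-longer initial segments converge to a single program for f:

    M \text{ EX-identifies } f \;\iff\; \exists e \,\big(\varphi_e = f \;\wedge\; \lim_{n\to\infty} M(f(0),\dots,f(n)) = e\big)

and the classical theorem is that no computable M achieves this for the whole class of total computable functions; that class is not EX-learnable.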
SH: Another requirement is that the AI must be able to reflect upon its own code so that it can self-improve. Jürgen Schmidhuber, an AI advocate, speculates that his Gödel Machine might be able to self-improve by 2015, with reservations:
"(note that the total utility of some robot behavior may be hard to verify--its evaluation may consume the robot's entire lifetime)."

peco said...

Greg, you probably win, but:

Just simulate how humans think, except better, faster and with more working memory.

Anonymous said...

Paradise
Is exactly like
Where you are right now
Only much
Much ....
Better

-- Laurie Anderson, Language Is A Virus

peco said...

I didn't have time to make my comment very long, so I'll write more.

Even if humans can theoretically solve anything a universal Turing machine can solve, it doesn't mean they will solve it. (It might not be economical.) Having information available in storage is not the same as understanding it--I don't think you could ever get any number of eternal 3-year-olds (who have general intelligence and the ability to store things on computers) to understand anything very complex, like quantum mechanics, in any amount of time.

Anonymous said...

peco said...

Just simulate how humans think, except better, faster and with more working memory.
----------------------------------
SH: Modern computers are based on Turing Machines, which compute only computable processes. Even with the proposed speedup arising from quantum computers, a quantum computer still computes only computable processes. Yes, it would cover more area, a broader problem/solution plane, but the questions and answers don't touch anything that is fundamentally uncomputable. A new dimension of intelligence is not achieved.

Now, if the human brain/mind is capable of deliberating only computable questions and answers, then the human mind is just slower at calculating the same computable processes. So the answers can be found more quickly with computers, but they answer only questions at the same level: what can be computed. Yes, one can call that a faster intelligence, but it isn't a new type or level of intelligence that could answer questions that are fundamentally uncomputable.

Now suppose that the human mind thinks by both computable and uncomputable processes. How is a computer supposed to become more intelligent in terms of finding answers to questions which can't be posed in the computable/algorithmic format which the computer needs in order to function? The question is: how does a computer that has only computable resources, and can only perform computable processes, modify itself so that it somehow picks itself up by its bootstraps and becomes a machine that can handle uncomputable processes? So that it moves into the realm of super-intelligence, able to find answers posed in both computable and uncomputable formats?

Well, Turing said this wasn't possible, and that the best that could be done is for the computer to consult an oracle. The idea of super-intelligence doesn't answer the challenges of computability theory or, I think, of the Church-Turing Thesis. That is why people doubt Yudkowsky's version of the Singularity: an evil, rogue, self-evolved super-intelligent entity. He doesn't refute the basis for Cognitive Science, and has no evidence (such as even a modicum of strong AI, realized) to support his speculation, which amounts to no more than philosophical handwaving.

How many of the people who like Yudkowsky's version of the Singularity have a background in the travails of AI, so that they can judge his claim on the basis of their own actual understanding and education in AI issues? I think people don't critically examine his claims because they endorse the "promise" of those claims, whether the claims proceed by handwaving or not.
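The obstruction SH is gesturing at is Turing's diagonal argument, which fits in a few lines of Python (a sketch only: the `halts` function is the hypothetical decider that the argument shows cannot exist as an ordinary program):

    def halts(program_source, input_value):
        """Hypothetical halting decider; provably not implementable."""
        raise NotImplementedError

    def diagonal(program_source):
        # Do the opposite of whatever `halts` predicts about a program
        # that is run on its own source code.
        if halts(program_source, program_source):
            while True:
                pass            # loop forever
        return 'halted'

Feeding `diagonal` its own source forces any real implementation of `halts` to contradict itself, which is why Turing's workaround was not a cleverer program but an oracle: extra power stipulated from outside the machine.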

Anonymous said...

Peco: an "eternal 3-year-old" would need to have their memory wiped every day to stop them becoming a 4-year-old. There are people who grow to adulthood who might be informally described as having "a permanent mental age of 3", but part of ordinary human intelligence is an ability in principle to continue building on one's education indefinitely.

Having information available in storage is not the same as understanding it

No kidding. But so far you've offered no defensible account of what understanding is, or why an AI could understand something that a human (with fairly matched tools at their disposal) could not. You keep invoking the phrase "working memory" as if you've convinced yourself that it's the key to everything, but you haven't offered any justification for that.

"Understanding" is an informal term with no precise technical meaning, but some useful meanings for "understanding X" include: having simplifying insights about X; seeing connections between X and other systems; noting generalisations; having some ability to generate approximate predictions.

Using these kinds of definitions, there are thousands of concepts and systems that I understand that I can't hold completely in my working memory: mathematical theorems, large pieces of software that I've written, etc. I certainly can't hold everything important about quantum mechanics in my working memory; maybe I could fit its axioms, but in any case that fact has nothing to do with my ability to calculate the behaviour of quantum systems and develop experience and intuition about them. All of that lies in my long-term memory, and in notes I've made, books I own, and computer programs I've written.

Having a larger working memory might be very convenient. I am certainly not arguing that given equal hardware there is no deviation from human cognitive strategies that would yield advantages; I'm sure there are many.

But you still haven't pointed to any specific system, explained what you mean by "understanding" it, and explained why an AI could do that but an uploaded human with access to the same computing resources could not.

peco said...

But you still haven't pointed to any specific system, explained what you mean by "understanding" it, and explained why an AI could do that but an uploaded human with access to the same computing resources could not.

Can I use a very stupid human (that still has general intelligence)?

If you uploaded an adult with a permanent mental age of 3 who only knows the things that most 3-year-olds would know, plus how to read, how to use Google, how to record data on a computer, and the syntax and standard libraries of a programming language, I don't think they would be able to program a computer to solve a simple math problem (like solving a quadratic equation). You don't even need an AI to do this quickly.
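For reference, the kind of small program peco has in mind is only a few lines of Python (the function name is mine, for illustration):

    import cmath  # complex sqrt also covers negative discriminants

    def solve_quadratic(a, b, c):
        """Roots of a*x**2 + b*x + c = 0, by the quadratic formula."""
        if a == 0:
            raise ValueError('not quadratic: a must be nonzero')
        d = cmath.sqrt(b * b - 4 * a * c)
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    print(solve_quadratic(1, -3, 2))   # -> ((2+0j), (1+0j))

The point at issue is not whether such code is long, but whether the hypothetical uploaded adult could ever connect the pieces they know into it.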

Anonymous said...

Peco, if you want to make a finite list of things someone knows, and declare that they are too stupid to learn anything new ... then by hypothesis they won't learn anything new. I don't know what you think follows from that, but it has no bearing on anything I've claimed.

peco said...

Peco, if you want to make a finite list of things someone knows, and declare that they are too stupid to learn anything new ... then by hypothesis they won't learn anything new.

They can learn new things (like new vocabulary words, how to add, and much more). I think they won't be able to learn anything very complex, but they still might.

Damien Sullivan said...

Well, the argument rests on humans being sort of Turing-complete, or LBA-ish, or something. Ones who aren't, like "permanent 3-year-olds", or maybe even normal people who just can't get through a logic or programming or algebra class, might disprove the argument. Or they might just show that being able to learn language doesn't make you that much of a generalist, and that the human population straddles the Turing divide.

Another challenge I thought of, which you might want to comment on given some of your stories, Greg, is autism. Or color-blindness. Autists don't ever learn to function socially like normal humans. OTOH, high-functioning ones learn to fake it somewhat -- learning faces and socializing the hard way. And one might imagine co-processors (implants or wearables) that took up the slack, annotating faces with their expression and appropriate responses, or things with their normative color, to make up for the defects in the natural human co-processors (or senses; I think color-blindness is just missing some cones, though the cortex probably adapts to deal with the youthful inputs).

I'm not sure what this proves, if anything. We've got some stuff like facial processing that's hard to impossible to make up the lack of, right now, orthogonal to "general intelligence". OTOH the "enhanced humans can compete with AIs" case doesn't rest on humans being limited to pieces of paper; if the sapient AI can learn to recognize faces, then a co-proc -- at worst, a thralled AI -- can be built for the human.

Anonymous said...

I think they won't be able to learn anything very complex, but they still might.

The range of possible minds that deserve to be called human will certainly encompass people who can learn a certain amount and then asymptotically approach some kind of plateau. If we're going to contemplate a range of impairments, it's not surprising that there will be marginal cases that are on the border of being cognitive generalists.

But how does this take us beyond all the unconvincing arguments about (dogs:humans) :: (humans:super-AI)?

Many (probably most) healthy humans also hit a plateau, but the reasons are usually complicated motivational and socioeconomic contingencies rather than anything intrinsic to their cognitive structure.

I certainly don't claim to have a watertight proof that no plateau exists for a healthy, motivated immortal human with extensible tools at their disposal, but you're not going to persuade me that such a plateau exists just because one exists for some impaired humans.

Anonymous said...

Damien:

I don't want to exaggerate the analogy with Turing completeness; I don't think the property of being a cognitive generalist is anywhere near as sharp. Can every individual be meaningfully classified as having, or lacking, an ability for indefinitely extensible learning that can be separated out from their tastes, their personal goals, their social environment, and so on? I doubt it. There will be a minority of people who are clearly limited to a childlike intellectual life, and a minority of people for whom the sky appears to be the limit, but for the vast bulk of us in between, who knows?

It's not really clear to me what you're suggesting in regard to autism. Learning a foreign language is an excruciating struggle for me, but if I'm motivated and persistent I can still make progress, and if some people find ordinary social cues just as hard to learn (or differential geometry, or whatever) then I don't know what to add beyond the uncontroversial fact that some specific skills always come harder to some people than others. But I don't see that as having much impact on the claim that AIs could understand things that no human ever could.

peco said...

Greg:

That adult isn't impaired enough not to have general intelligence. (Or if it is, just increase its mental age.)

Anonymous said...

Okay, part two of my rant/comment/whatever

First, on the issue of the original post
--
I think the T-word might have some unfavorable connotations, but it is not the biggest issue. Not even a serious one, right now.

I see "vagueness" as the biggest problem here, because the vagueness of the concept allows people who support all kinds of... ahem... how to put it in a politically correct manner... arguable policies hop onto the Transhumanist bandwagon, which can AND WILL hurt in the long run.
Because of such concerns I second Louis in that Tranhumanism needs a central figure/organization, that will approve policy regarding most important issues.

Also, this will allow to finally set the definitions straight and avoid unfavorable confusions between Transhumanism and other, more "outstanding" views like Singularitarianism. Confusions like that frustrate me a bit, for it is Singularitarianists and only Singularitarianists who in the "notion that self-modifying AI will have magical powers", a belief that somehow gets hung on Transhumanists surprisingly often.

Disclaimer: - I do not have anything against Singularitarianists (though I am very skeptical about their concerns and predictions), it is just that they are birds of another feather.

Another, and more obvious, problem is that the Transhumanist movement apparently needs more, MANY more reasonable, well-educated, talented people to join it.
Thus, it is sad that Mr. Egan is not to be found among those who identify as Transhumanists, and that he is so displeased with the movement and its image.
Speaking of which, the current Transhumanist blog presence is indeed of questionable quality.

But such a problem can only be addressed by having more rational and creative people among our ranks. Like talented professional writers, maybe.

--
Second, on the AI issue
--

I see no reason for intelligence to be impossible to implement in non-protein systems.

Also, I think that certain controversial phenomena of the human mind, like, for instance, "visual thinking", suggest that intelligence and/or self-awareness, at least in humans, might have significantly varying structure and might function in more than one possible manner.

However, I do not have any assumptions regarding the properties, structure and functioning that non-protein intelligence will have when, and IF, it is created.

P.S.:
Or... wait, perhaps, I have one, single assumption regarding AI: Murphy's Law applies to AI research, too ;-)

Damien Sullivan said...
This comment has been removed by the author.
Damien Sullivan said...

Possibly relevant to various interests (and my prior try looked broken)
posthuman bertie wooster

Anonymous said...

That adult isn't impaired enough not to have general intelligence. (Or if it is, just increase its mental age.)

Peco: all you're doing is declaring that there are humans such that:

(A) their knowledge would necessarily plateau, due to their innate limitations, and

(B) their skills are still sufficiently flexible to be called "general intelligence".

(A) is true, but (B) is just a choice of definition. You're free to define "general intelligence" that way if you wish, since there is no universally agreed meaning for the term. But that doesn't change the fact that it's arguable that there are people to whom (A) does not apply; it simply means you would prefer to use a different term than "general intelligence" for their set of abilities.

Of course it's also arguable that (A) does apply to all humans, and equally to all AIs.

peco said...

Of course it's also arguable that (A) does apply to all humans, and equally to all AIs.


That's basically what I think, but AIs should have higher plateaus. I was just using impaired humans because it is much, much easier to see their limitations.

Anonymous said...

01 said...
"Also, this will allow us to finally set the definitions straight and avoid unfavorable confusions between Transhumanism and other, more 'outstanding' views like Singularitarianism. Confusions like that frustrate me a bit, for it is Singularitarianists and only Singularitarianists who believe in the 'notion that self-modifying AI will have magical powers', a belief that somehow gets hung on Transhumanists surprisingly often." ...
---------------------------
SH: Singularitarianism, using the Yudkowsky interpretation (SIAI), is the specific cargo cult targeted. I don't think the Singularity that Kurzweil vaguely speculates about is nearly as incredible/disdained.
---------------------------
01 continues ...
Thus, it is sad that Mr. Egan is not to be found among those who identify as Transhumanists and that he is so displeased with the movement and its image.
------------------------------
SH: Maybe he doesn't adopt the label but he certainly seems sympathetic to some of the ideas.

greg egan said ...
..."there is no physical principle
to prevent the uploading of
everything which is capable of
having a causal effect on a
given person's brain, body and
behaviour. In that limit we
could even satisfy Roger Penrose and simulate everything (a person's whole body and immediate environment, down to the atomic level) on a quantum computer."

SH: Mr. Egan seems to be defending uploading, which is a Transhumanist if not Posthuman notion. Actually, I found his position surprising, since I did not know that Penrose would sign off on this idea.
--------------------------------

01 concluded ...
P.S.: Or... wait, perhaps, I have one, single assumption regarding AI: Murphy's Law applies to AI research, too ;-)

SH: Well, I'm glad you didn't say Moore's Law, which deals with hardware components, while the problem with AI is software; there is no program which demonstrates even the beginning of strong AI, so doubling zero every two years still amounts to zero. The great chess-playing program is not considered a strong AI accomplishment.

Russell Blackford said...

While I'm arguing with Stephen Harris on another thread, I'll agree with him that Moore's law was never the issue - or shouldn't have been. We are confronted with a software problem, the basis of which is far deeper than usually appreciated.

I always say, "Assume infinite comptuter hardware capacity by your favourite measurement. Then what follows?" I'm not sure that much follows at all, if the issue is our capacity to build something that possesses human-like inner experiences or even something with human-like informal logic.

As I said a decade ago: "To take this further, it is notable that our rate of progress in understanding basic concepts to do with the human mind's functioning - such concepts as meaning and interpretation - gives no cause for optimism that we will ever be able to formalise the full repertoire of human thinking. In that case, a programmer may never be able to 'write' a legitimate 'mind program' to be implemented on the computers of the future." Etc.

See http://www.users.bigpond.com/russellblackford/singular.htm

Anonymous said...

Pei Wang is a distinguished roll-up-the-sleeves-and-program type of AI researcher, with less emphasis on philosophy. He has put up a curriculum of suggested education for AGI researchers. He has a link to some free introductory reading,
http://nars.wang.googlepages.com/wang.AGI-Curriculum.html
with a link to HAL's Legacy (Stork).

This is a link to an interview with Marvin Minsky, one of the AI Illuminati,
http://mitpress.mit.edu/e-books/Hal/chap2/two1.html
which comes from HAL's Legacy, which I liked. Minsky likes cats, so what's not to like?

John Howard said...

I'm not sure that much follows at all, if the issue is our capacity to build something that possesses human-like inner experiences or even something with human-like informal logic.

Well, how about "rat-like" or "monkey-like" or even "fruitfly-like"? Do we have the capacity to build something that possesses inner experiences or informal logic at all? I posit that whatever barriers would keep a computer from achieving "human-likeness" will keep it from achieving "likeness" of any kind of living animal. Indeed, the goal may be "life-likeness", and you can leave "human" (and therefore "transhuman") out of discussions of AI.

Anonymous said...

In fact, animal behaviors have been achieved in robotics in numerous experiments.
There is even a "carnivorous" robot that hunts and "eats" slugs (methane from the rotting slugs is used to power the machine, which hunts more slugs to sustain its methane supply), effectively reproducing the generic "predatory" behavior.

Anonymous said...

For many years I was active on the Extropians and transhumanism mailing lists but left transhumanism entirely several years ago after becoming frustrated with the myopic discussion. I still check in occasionally to see what people have been up to though. I think transhumanism is, essentially, science fiction fandom masquerading as philosophy and political ideology. There are three "supertechnologies" that dominate transhumanist discussions and none of them have any basis in reality. These are Superintelligence, Drexlerian Nanotechnology and Uploading.

Greg Egan has given a good assessment of the unfounded assumption of Superintelligence. Drexlerian Nanotechnology is exemplary pseudoscience; it includes everything on the Crackpot Index, from dire warnings to constant references to Feynman to an entire incestuous community of non-specialists citing one another in online journals of their own creation. I think Greg's characterization of transhumanists endowing these developments with inevitability is spot on. That's exactly what I observed over the years, even among "prominent" members of the community.

Uploading is slightly different, being a strange conflation of different philosophical positions, and hasn't really been used to derail legitimate discussion in the way Superintelligence and Drexlerian Nanotechnology have. The main issue with Uploading is that, even if we believe consciousness to be the product of neural processes, there isn't anything physical in the simulation to be conscious. Even if we take consciousness to be a pattern, there aren't any of those in the simulation either, because it's a simulation. Simulations are descriptions of things and not the things themselves.

I think the concept of virtual reality led people down this particular bizarre road. Nobody thinks a drawing of a thing is the thing itself. Nobody thinks multiple drawings of a thing, made to appear to move when flipped through, are the thing itself. But for some reason people think that once I turn an equation into computer code and press "compile and run", it becomes the thing itself. I think it's the notion that we could be fooled by a sufficiently powerful simulation that leads people, via a sort of perverse Cartesian logic, to conclude that it's indistinguishable from reality and therefore must be the same as reality. ("Imagine your brain is also a simulation," although a huge conceptual leap from merely being fooled by VR, apparently is quite easy for people to entertain.) It's truly very strange. But I digress.

These "supertechnologies" I speak of are usually used in the transhumanist community as a bludgeon to quickly cast aside any discussion grounded in reality. If you want to discuss the implications of biotechnology or neuroscience, you're told it doesn't matter, since Superintelligence will make it irrelevant. Politics is quickly cast aside with Drexlerian Nanotechnology. If there are now people engaging in serious discussion of real science it's an entirely new development (and a welcome one) but that certainly wasn't the case several years ago.

VDT said...
This comment has been removed by the author.