About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019); AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021); and HOW WE BECAME POST-LIBERAL: THE RISE AND FALL OF TOLERATION (2024).

Sunday, February 03, 2008

New article published at JET - "The Technology of Mind and a New Social Contract"

The Journal of Evolution and Technology has published a new article, this one by Bill Hibbard, entitled "The Technology of Mind and a New Social Contract".

Hibbard's main concern is with the prospect of super-intelligent beings coming into existence in the future, and how they will interact with the rest of us ordinarily intelligent beings.

Hibbard argues that we currently live in accordance with a social contract that is based on assumptions that we rarely question: that all humans have roughly the same intelligence, that we have limited life spans, and that we share a set of motives as part of our human nature. New technologies will invalidate these assumptions and inevitably change our social contract in fundamental ways.

He suggests that we need to prepare for these new technologies so that they change the world in ways we want, rather than leave us stumbling into a world we don't. In particular, he argues for a new social contract between the super-intelligent and the humanly intelligent, according to which super-intelligence will be a privilege that carries responsibility for the welfare of others.

But read the whole thing - and feel free to comment on his argument.

3 comments:

BT Murtagh said...

"In exchange for significantly greater than normal human intelligence, a mind must value the long-term life satisfaction of all other intelligent minds."

Even post-human intelligence increases can only be regulated and controlled as long as each individual mind starts off human and is enhanced from there, and as long as the enhancement processes are controlled by minds motivated to do so. Such a scenario is obviously inherently unstable; let one intelligence-enhanced post-human decide that the technology should be free and it will be. Digitally Restricted Minds can't compete against Open Minds.

Attempting to control superhuman artificial intelligences by controlling their deep motivational structures will be prohibitively difficult for minds at our level - effectively impossible. Any motivational processes adaptive enough to interact effectively with human, post-human and supra-human minds are going to be too complex to be useful safeguards; they will be mathematically chaotic.

In any case, if nonhuman minds are reproducing themselves then such a strategy would fail within a few generations. It may be "relevant to power in the human world" to measure the number of humans a mind is capable of knowing well, but the article itself presupposes that the world will not remain entirely human, perhaps not even preeminently human. I agree with that.

By one path or another it seems inevitable that a mind-type capable of reproduction will emerge whose motivational structures are not predicated on maximizing the happiness of other intelligences, and can therefore more efficiently utilize relevant resources toward self-propagation than can an equivalent intelligence hampered by such constraints.

I assume, of course, that there are going to be scarce resources of some kind which both types will compete for, but that hardly seems a stretch. It may be that nanotechnology and applied AI will provide a milieu of abundance for some time through increased efficiency, but it seems absurdly optimistic to presume that there won't be any bottlenecks at all.

Societies of minds will certainly evolve for mutual support and protection, much as our own societies have, but altruistic notions of automatically valuing "the long-term life satisfaction of all other intelligent minds" are rather naive. An all-dove society is not an evolutionarily stable system, any more than an all-hawk one is.
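To make the hawk-dove point concrete, here's a minimal replicator-dynamics sketch in Python (the payoff values V and C are purely illustrative assumptions, with fighting costlier than the contested resource is worth): populations started near all-dove or near all-hawk both drift to the mixed equilibrium with hawk fraction V/C, so neither pure society persists.

```python
import numpy as np

# Hawk-dove payoffs (illustrative assumption: V = value of the resource,
# C = cost of fighting, with C > V so fighting costs more than the prize).
V, C = 2.0, 4.0
payoff = np.array([
    [(V - C) / 2.0, V],       # row 0: hawk vs hawk, hawk vs dove
    [0.0,           V / 2.0], # row 1: dove vs hawk, dove vs dove
])

def replicator_step(p, dt=0.01):
    """One Euler step of replicator dynamics for the hawk fraction p."""
    f_hawk = p * payoff[0, 0] + (1 - p) * payoff[0, 1]  # expected hawk payoff
    f_dove = p * payoff[1, 0] + (1 - p) * payoff[1, 1]  # expected dove payoff
    f_mean = p * f_hawk + (1 - p) * f_dove              # population average
    return p + dt * p * (f_hawk - f_mean)

for start in (0.01, 0.99):  # near all-dove and near all-hawk populations
    p = start
    for _ in range(20000):
        p = replicator_step(p)
    # Both starting points converge to the mixed equilibrium p = V/C = 0.5:
    print(f"start={start:.2f} -> long-run hawk fraction ~ {p:.3f}")
```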

Whatever those resource limitations are, the new minds are going to evolve to better utilize them at an unprecedented rate, especially in the case of AIs. Each new generation of AI (which may subsume and/or include the previous one as a subsystem, rather than replace it) can include any improvements the parent system is capable of designing in; the very mechanism of their evolution will be better, as much Lamarckian as Darwinian, and may consciously include mechanisms like group selection, which are problematic at best in evolution as seen in nature today.

The competition for existence, in short, will be unprecedentedly intense, efficient, and rapid, and will take place on levels it never has before. To imagine that that kind of competition will long be controlled by our wish to ensure a place for our Mark 0.9 Monkey Brain is laughable. When and if the Singularity really gets going (i.e. barring a premature worldwide collapse of technological societies), the choices will be the same as they've always been, only starker and with less wiggle room than ever before:

Evolve or Die Out.

Roko said...

That's an interesting article; I like what you're doing at JET. Keep up the good work!

I suspect that the advent of superintelligences will probably do away with the concept of a "social contract" as we understand it today. Would you bother to sign a social contract with an ape, or with your pet cat? But still, it's good to explore all the possibilities. In a slow-takeoff singularity, this kind of thing could be very important.

Russell Blackford said...

I commented on Brian's detailed comment over on JET's Facebook group. The short of it is that I'd like to see him work this up into something more formal with a view to submitting it to JET. Debate is good.