About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are The Tyranny of Opinion: Conformity and the Future of Liberalism (2019) and At the Dawn of a Great Transition: The Question of Radical Enhancement (2021).

Friday, July 30, 2010

Six reasons why you won't upload your mind

Some of these reasons are better than others, I think. E.g., no. 5 sounds pretty weak to me. If we can imagine a magical technology that does everything else required, I don't see how the absence of an ordinary mammalian body will make all the difference. Once we get to the desired magic-technology point, emulation of bodily experience in cyberspace or in some sort of advanced robot body is fairly easy to imagine.

And I don't find this sample very convincing:

Without frequent physical backups, refreshes, and format updates, precious data will quickly be rendered unreadable or inaccessible. So when we're all "in the cloud," who's gonna be down on the ground doing all that real-world maintenance — robots? Morlocks? Even if that works, it just seems evolutionarily unwise to swap one faulty physical substrate (albeit one that has been honed for millions of years, runs on sugar and water, and lasts nearly a century) for another one that can barely make it from one Olympic season to the next, even with permanent air-conditioning.


Well, yes, but it's easy enough to imagine our robot or Morlock slaves doing the work concerned, or perhaps we'll be those robots. And if we accept everything else - the required super technology, an appropriate conception of personal survival, and so on - it's not a matter of swapping one faulty technology for just another faulty technology. Once we get to that point, it's not so hard to accept the idea of a technological substrate that lasts a lot longer than our current 80 to 100 years (or a lot less if things don't go well). All in all, the fifth reason just doesn't stand up to inspection.

Oh, and the author, or perhaps just the illustrator, obviously has no understanding of who Krang is. That's unforgivable!

All that said, I'm a bit of a sceptic about these mind-uploading scenarios ...

I agree that our understanding of human consciousness and how it supervenes on brain activity is at a primitive stage, and there's no sign that it's improving at the rate needed. The lack of sufficiently powerful hardware is not the real issue: the deepest problems would remain even if we had infinite computing power available. Combine our poor understanding of consciousness with issues of personal identity - and what would count as survival if we tried to move our minds from one substrate to another - and I think that much of the discussion that goes on about mind uploading is unrealistic. It's one thing to employ the idea for philosophical thought experiments or as a science-fictional enabling device; it's another to think it's a realistic option within current lifetimes.

23 comments:

Michael said...

I think most of these discussions, the article you linked to included, don't draw a clear distinction between problems of principle and practical problems. Of course, plenty of things that might be possible in principle are ruined by the "mere" engineering aspect (e.g. predicting the weather). But saying that something will "never" happen for engineering reasons is asking to be included amongst the "heavier-than-air flight is impossible" collection of quotes.

It's much safer to claim that it won't happen in the next 50 years. Or however long the person making the claim expects to live for!

I think the conceptual questions are much more interesting, in that I definitely agree that the gap won't be bridged with more of the same (e.g. more computing power) or other engineering fixes. What's needed are new conceptual frameworks that will get us to an explanation of how and why certain processes produce consciousness.

On the 57th hand, some of the thought experiments relating to uploading (e.g. gradually replacing your brain with a software simulator, one neuron at a time) seem to at least suggest that personal identity isn't a [huge] problem.

Unknown said...

Most of those six reasons are pretty poor in the long view. Hardware doesn't need to be reliable if it's redundant, for example, and all feedback from parts of a body can be simulated to keep a virtual brain happy.
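A quick back-of-envelope check of that redundancy point, with made-up numbers (a sketch, not anyone's engineering estimate): if each independent replica of a stored mind fails in a given year with probability p, the data is lost only if all k replicas fail.

```python
# Redundancy beats reliability: P(total loss) = p ** k for k independent
# replicas. The 10% per-replica annual failure rate is an illustrative
# assumption, not a real hardware figure.
def annual_loss_probability(p_single_failure: float, replicas: int) -> float:
    return p_single_failure ** replicas

for k in (1, 2, 3, 5):
    print(f"{k} replica(s) at 10% each: P(total loss) = "
          f"{annual_loss_probability(0.1, k):.0e}")
```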

The problem of decoding how the brain works, while certainly daunting, does not actually need to be solved. All we need to do is accurately replicate the functionality of neurons, though that task is itself hardly easy.
With that accomplished, the functionality of the brain as a whole would fall out automatically when you create these virtual neurons and tell them how they're connected. And in case anyone's wondering, yes, if you were to simulate a brain in software, it would be conscious. There's nothing mystical going on in our wetware.
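To make the "simulate the neurons, wire them up" idea concrete, here's a toy sketch. The leaky integrate-and-fire model, the random wiring, and all parameters below are simplified illustrative assumptions, nothing like a real emulation method:

```python
# Toy "virtual neurons plus a wiring diagram" network using leaky
# integrate-and-fire units. Everything here (model, weights, wiring)
# is an illustrative assumption, not an actual emulation technique.
import random

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold  # fire once potential crosses this
        self.leak = leak            # fraction of potential kept per step
        self.targets = []           # (downstream neuron, synaptic weight)

    def connect(self, other, weight):
        self.targets.append((other, weight))

    def step(self, external_input=0.0):
        self.potential = self.potential * self.leak + external_input
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after a spike
            for target, weight in self.targets:
                target.potential += weight
            return True
        return False

random.seed(0)
neurons = [Neuron() for _ in range(10)]
for n in neurons:                   # random sparse wiring
    for m in random.sample(neurons, 3):
        if m is not n:
            n.connect(m, random.uniform(0.2, 0.6))

for t in range(20):                 # drive neuron 0, watch spikes spread
    spikes = [n.step(0.3 if i == 0 else 0.0)
              for i, n in enumerate(neurons)]
    print(t, "".join("|" if s else "." for s in spikes))
```

The point of the sketch is only that network-level behaviour emerges from unit behaviour plus connectivity; no component knows anything about the whole.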

The most daunting obstacle by far - one which would, in a list by itself, render the six given trivial - is that of reading the structure of the brain. Determining the precise arrangement of neurons and their connections is perhaps impossible for a living brain, and unimaginably difficult for a deceased one.

The computational leaps that may come with quantum computing could open many doors we can't perceive yet, but I find it very unlikely that any will lead to virtual brains within the next 40-50 years - the span in which I can reasonably expect to remain alive.

Blake Stacey said...

The most persuasive argument I have ever heard against mind-uploading is that I had to try three operating systems on four computers before I could put music on my iPod, and that was a device designed to connect to other machines. And I still can't transfer pictures off my camera-phone. What kind of lousy future are we living in??

"[I]f the Singularity ever does arrive, I expect it to be plagued by frequent outages and terrible customer service." — Scott Aaronson

Tony Smith said...

My most optimistic scenario is that a couple of popular delusions will eventually become self-fulfilling prophecies: reincarnation and heaven, the latter hopefully without some supreme authoritarian in the cloud. Then some multi-modal entities will incarnate from time to time replete with identified memories and traits that may in many cases link back to whatever traces can be salvaged of characters from our era.

Short of unexpectedly rapid progress by Aubrey de Grey, even under that optimistic scenario it would not be the currently emergent me, whose time must run out soon enough, who will feel I've woken from a long sleep; rather, it will be a new "person" who develops with some of me in her core, maybe even beyond the way those of us who dabble in fiction have characters take on a life of their own inside our heads.

Then, given comparable success with that other staple of techno-positivism, colonising the galaxy with von Neumann eggs, such personas might settle on arrangements where individual lifetimes become more adventurous and backups well co-ordinated.

But until we can wrest control back from the accountants and lawyers, forget it.

Shatterface said...

The problem with uploading your mind is that it isn't actually you, it's a copy - even if the copy does have a simulated body.

It's like the Star Trek transporter problem - even if the original body is destroyed, and even if the copy does not sense a discontinuity in consciousness, it's still a copy - even if it is identical, with identical legal and moral rights.

I think Algis Budrys's 'Rogue Moon' is the only novel to address this issue.

Richard Wein said...

I'm doubtful whether mind uploading will ever be a practical possibility. But I'm more interested in the philosophical questions which the article barely addresses: in what sense is the uploaded mind actually me? And why would I want to be uploaded? (I'll assume that the uploaded mind really is conscious and has much the same personality and thoughts that I do.)

Thinking about such questions leads me to this conclusion. The sense I have of a continuing self, such that the Richard of today and the Richard of tomorrow are both "me", is an illusion. Of course, these two Richards are very similar, and it's useful to treat them as stages of a continuing entity, but there is no more intrinsic significance to this continuity than there is to the continuity of the Ship of Theseus. Identity (whether of people or ships) is a construction of the human mind. There is no more a correct answer to the question whether the uploaded mind is actually me (or even whether tomorrow-Richard is actually me) than there is to the question whether the later Ship of Theseus is still the same ship.

Suppose then that I stop asking whether the uploaded mind is still me, and ask instead whether I have any reason to care about its fate. The answer is no. And the same is true for the Richard of tomorrow! There can be no justification for my most basic concerns, and one of these is concern about my future experiences. There's no reason why today-Richard should care about the experiences of tomorrow-Richard. It's simply a fact that I do care, because evolution has programmed me to. I also care about next-year-Richard and next-decade-Richard, though to lesser degrees, probably because there is less evolutionary advantage to caring what happens to me in the further future. And I suggest that the illusion of a continuing self is either a result of my caring about these future Richards or an adaptation which has evolved to make me care.

But evolution hasn't programmed me to care what happens to a computer emulating my brain processes, because that's not a situation which has arisen during evolutionary history. On the other hand, if I believe that the computer emulation will have my personality and thoughts, then my illusion of continuing self may become extended to that computer emulation. And in that case I might want to be uploaded.

(Another reason to be uploaded might have nothing to do with a desire to prolong one's own experience. It might be concern for what the uploaded mind could achieve for the benefit of other people.)

Shatterface said...

In the UK we have a variation of the Ship of Theseus called Trigger's Broom.

Trigger, a character from the sitcom 'Only Fools and Horses', says he's had the same broom for over 20 years - but it's had 14 new handles and 10 new heads!

I didn't realise it had a classical precedent!

Shatterface said...

I'm not sure I find any more consolation in the survival or resurrection of a virtual me in cyberspace than I do in the notion of a parallel-world me that doesn't get hit by a truck on the day I get knocked down.

Richard Wein said...

The problem with uploading your mind is that it isn't actually you, it's a copy - even if the copy does have a simulated body.

It's like the Star Trek transporter problem - even if the original body is destroyed, and even if the copy does not sense a discontinuity in consciousness, it's still a copy - even if it is identical, with identical legal and moral rights.


What if the person were rendered unconscious before copying, the original wasn't destroyed, and the two identical people were mixed up (so no one knew which was which) and laid side by side before being woken up? Would you say that only one of them is actually the same person as the original? If so, why? After all, they're identical. The only difference between them is historical, in that one has physical continuity with the original and the other doesn't. But why would that be significant?

Re Trigger. I always suspected he wasn't as stupid as he seemed!

Anonymous said...

Responses:

1. Human minds do not experience constant up-time as it is. This should be obvious. Frequently, we sleep. Often, even when awake, we don't use our brains for anything obvious. Going without sleep drives one insane. It may very well be that the human mind requires downtime for normal operation. At any rate, the fact that it already experiences downtime lets us know how to interpret a "fail whale" - as a nap.

2. The platinum disks we sent out with Voyager were made in the '70s and will last forever. Imagine inscribing a platinum surface with a holographic wave pattern and reading it with a laser. You've got at least a few decades on that memory, probably much longer.

3. Electrical engineers are already hard at work revising transistor technology to make the threshold for reading as a "1" very close to zero volts. The notion that computers can't get more efficient is ridiculous, given the exponential increases in efficiency over the last 5 decades. Besides that, if the brain can be that efficient, that goes to show that a brain-like system CAN be that efficient; there's no reason to expect we can't do almost as well eventually.

4. This one answers itself. The brain MIGHT have (we think) more processing power than a supercomputer, but if that's so (and I'm not sure it actually is), then it's because there are things about its architecture we don't understand just yet - things that are prerequisites for uploading in the first place. Presumably, when we've learned those principles, we'll get the increased processing power of a distributed system like the brain pretty much for free. (Also note: all 3 petaflops can be put into solving the same problem on the IBM machine; human beings do not have nearly as much control over how their processing power is spent - so saying we have 10 petaflops at our disposal is pretty ridiculous. Do some floating point in your head then, buddy.)

5. This one shows such a lack of imagination that I'm not even sure where to start. Once we're uploaded, we can have any bodies we want. We can distribute our awareness across networks of machines, including all sorts of robots of nearly limitless shape, size, and function. We already do artificial eyes (not well, but we do them). By the time we have the technology to upload, we won't have any trouble providing ourselves with bodies.

6. "What it costs?" "Who goes first?" This guy thinks the singularity is just another iPad release? If the singularity happens, it will be the emergence of an entirely new form of life. Whoever goes first might very well try to prevent others from uploading as well -- in fact, that's exactly what I expect. But that's not an argument for the impossibility of it, just an argument against it being good for everybody (which it almost certainly wouldn't be).

The highlighted post shows a total failure of imagination on the part of its author. I can come up with much more convincing reasons why uploading can't happen, and I actually think it can happen (eventually, in principle).

-Dan L.

Anonymous said...

This guy has no imagination. I can come up with more convincing reasons why uploading is impossible, and I actually think it is (eventually, in principle) possible.

1. Human minds can't handle constant up-time anyway. They go insane when they don't get sleep. We don't want constant up-time.

2. The platinum disks that went out on Voyager will last millennia. We could inscribe interference patterns on similar substrates for holographic media that would last decades at the least.

3. Engineering problem, already being attacked by engineers. Also, when we learn enough about the brain to upload, we will know much more about why the brain is so efficient.

4. Similar. Also, the human brain does not have 10 petaflops the same way the IBM machine has 3 petaflops. You can devote all 3 of those to one problem; I can't willfully devote even one gigaflop of my brain's computing power to a single problem. I can't do floating point in my head.

5. Bodies are the easy part. We already have robots. (Also, Russell, for my generation, the Krang from TMNT is the real Krang. The Sub-Mariner was not a comic book favorite when I was a kid.)

6. The singularity is not an iPad release. The questions of what it costs and who goes first are not arguments against it happening at all, just pointing out that it's going to be weird. I agree that it will be weird if it happens.

-Dan L.

Anonymous said...

@Shatterface:

If you are your memories -- i.e. if your identity is determined entirely by remembering yourself to be you -- then your fears are unfounded.

It's not necessarily the case, but I personally think it is and can muster pretty good arguments to that effect.

Imagine waking up with someone else's memories. You'd think you were them instead of yourself. But if you THINK you're someone else, is that really any different from actually being someone else?

Imagine waking up in a robot body with all your own memories. Wouldn't you still be convinced you were you, albeit in a robot body?

-Dan L.

Shatterface said...

I'm not uploading my consciousness unless iParadise is Flash-enabled.

Mike said...

@Shatterface: China Miéville's "Kraken" also riffs on the transporter problem directly. A number of Greg Egan's works also deal with the issue of uploading, both in situ via wetware replacements and in communication between real-world and "simulated" beings.

The more I type, the more examples come to mind. David Brin's "Kiln People" takes mental cloning to an extreme. There's probably a list of such novels floating around.

Shatterface said...

Yes, I read Schild's Ladder recently. Some characters appeared quite cavalier about their 'lives', knowing they had backups - but since those backups were made some time prior to death, experience - or at least information - is lost.

If I had a backup recorded when I was 18, would that be 'me'? I quite fancy being young again, but I'm not sure topping myself every time I hit 40 is the secret to eternal youth.

Anonymous said...

If I had a backup recorded when I was 18, would that be 'me'? I quite fancy being young again, but I'm not sure topping myself every time I hit 40 is the secret to eternal youth.

Well, since I'm saying "you are your memories" (roughly), and there are 22 years of memories possessed by your 40 yo self but not possessed by your 18 yo self, no, the copy is a different person. Or rather, the copy is the person that you WERE when you were 18. Once you thaw it out, its memories diverge from those of your 40 yo self, and it becomes a different 19-year-old than the one you were, and ultimately a different 40-year-old than the one you are.

Unless you conspire to make its life identical to your own for those 22 years, which is an interesting thought experiment in itself.

Here's a question for people who think the transporter problem is actually a useful way to think about identity: do you think your identity is determined in any way by the actual specific molecules that make up your body (as opposed to merely their configuration)? Is there a part of you that's substance rather than just pattern? Are there molecules I could remove from your body such that, even if I replaced them with identical molecules, this replacement would cause you to no longer be yourself?

-Dan L.

J. J. Ramsey said...

"And why would I want to be uploaded?"

Curiosity? Wanting to push the boundaries of technology? Because you can? If I had the opportunity to have a copy of my mind uploaded, I just might try it, not because I have any illusions that it would make me immortal in any way that matters to me personally, but just because it would be cool to make science fiction come to life.

Russell Blackford said...

I'll be speaking on some of this stuff at the forthcoming Singularity conference in Melbourne in September, so keep your ideas coming so I can steal them ;)

Richard Wein said...

Dan wrote:

Here's a question for people who think the transporter problem is actually a useful way to think about identity: do you think your identity is determined in any way by the actual specific molecules that make up your body (as opposed to merely their configuration)?

I do think the transporter problem is a useful tool for thinking about identity. In answer to your question: no, I don't think (at least at an intellectual level) that the specific molecules make any difference. However, transporter scenarios raise interesting and unsettling questions about personal identity and what we mean by "me".

If you interpret "me" as just meaning a body which has roughly the same personality and memories as the speaking body, transportation raises no problems. But in normal use the word "me" has more meaning than this. That's why people can ask, "yes, that copy would have the same personality and memories as me, but would it really be me?". The question may be based on a false premise or not fully coherent, but nevertheless it has some sort of meaning. The problem is to tease out what that meaning is.

Suppose I say, "tomorrow I will go to the cinema". I don't just mean, "tomorrow the body that has roughly the same personality and memories as this body will go to the cinema", or "tomorrow the body that calls itself Richard will go to the cinema". The word "I" (or "me") invokes a sense of identity, and not mere identicality. I feel that tomorrow-Richard will not be just similar to me. I feel that he will be me. I feel that tomorrow-Richard is not just another person, like tomorrow-Robert only with more similarity to today-Richard. I feel he is the same person as me, in a way that tomorrow-Robert categorically is not. Part of this sense of identity is my concern about the expected experiences of tomorrow-Richard. My concern for tomorrow-Richard is of a quite different nature from my concern for other people; that's why the words "selfish" and "altruistic" have such different significance.

But this sense of an identity that is something more than mere similarity is an illusion. Today-Richard and tomorrow-Richard are just collections of atoms that are arranged in a very similar way. Similarity is all there is. The illusion is an evolutionary adaptation that motivates us to self-preservation, and it's such a strong illusion that even having seen through it at a certain intellectual level, I am still very much influenced by it at deeper levels.

There is no significant difference between transported-Richard and non-transported-Richard that doesn't also exist between today-Richard and tomorrow-Richard. One conclusion often drawn from this observation is that transported-Richard is "me" in the full sense of the word. My conclusion, on the other hand, is that even tomorrow-Richard is not "me" in the full sense of the word. There is no "me" in that sense.

Richard Wein said...

P.S. Sorry for the triple post.

Russell Blackford said...

I'll clean it up.

March Hare said...

While uploading a brain might be a long time off, simulating a brain (of a smaller mammal, say) might be quite close, which raises certain ethical issues - does a simulation deserve animal rights?

Steve Zara had a post at RD's site about that very thing: http://richarddawkins.net/articles/490048-the-blue-brain-blues-materialist-ethics-and-simulated-minds

One thing I would like to mention is that if/when we do upload our minds, the additional horsepower of the computer could mean that we run so much faster that we could do a year's worth of consciousness in a day, or an hour, or a second. Or it could be the other way round.

Basically, time would become flexible in a way that biology and chemistry do not allow for in wetware.
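As a back-of-envelope illustration of that flexibility (the speed-up factors below are made up for the sake of the example):

```python
# Subjective time for a mind run at `speedup` times real time.
# All speed-up factors here are illustrative assumptions.
DAY = 24 * 3600  # seconds

def subjective_seconds(wall_clock_seconds: float, speedup: float) -> float:
    return wall_clock_seconds * speedup

for speedup in (1 / 3600, 1.0, 365.0):
    felt = subjective_seconds(DAY, speedup)
    print(f"speed-up {speedup:>8.4g}x: one real day feels like "
          f"{felt / 3600:,.2f} subjective hours")
```

At 365x, one real day yields roughly a subjective year; at 1/3600, a day passes like a waking moment.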

Mike said...

@March Hare: Of course, how do we tell that the simulation is capturing all neural states?

The speed issue is dealt with in some early Greg Egan works, like Permutation City.