Voila! The program for the Singularity Summit AU, which is now just two or three weeks away. The program is subject to fine-tuning but gives a good indication.
As you'll see, abstracts of papers have been provided, including my own:
“Survival Beyond the Flesh” – Mind uploading would involve the use of an artificial substrate – some kind of advanced computational system – to emulate the activities of a biological brain. Proponents of uploading typically imagine this as a way to achieve survival beyond the flesh (and perhaps other advantages, such as greatly enhanced speed of thought and/or the opportunity to extend cognitive capacities in multiple ways).
The main advantages proposed for mind uploading presuppose that the same individual somehow “occupies”, or is instantiated by, both the biological brain (or the entire biological organism) and the computational system. But this is problematic. The problem can be thought of in terms of personal identity: if I have been uploaded onto an artificial computational system, is the resulting intelligence really “me” or is it in some sense a mere duplicate? Even if the cybernetic intelligence is not strictly “me”, is there at least some sense in which I can be said to have survived the experience, though my biological brain may have been destroyed? We will explore whether uploading would really represent survival beyond the flesh, and whether this might depend in some way on the details and circumstances. What is needed for cybernetic survival?
11 comments:
Survival beyond the flesh -- the fun of SF for many generations. Yes, a substrate to mimic or carry a current is required, and it would need to be in a liquid form to allow for natural growth, or expansion of thought capabilities.
I suppose it would be like having a thick, viscous substrate densely populated with receptors, neurons and maybe even free dendrites that can act as carriers during memory expansion. Or is it the axons? I get the two muddled up.
The physical process is one that could be harnessed eventually, but it is the world of unknowns about the non-physical brain that will be the huge arguing point. Getting one whole functioning set of chemical and electrical activity into another medium of the same qualities -- that will be the discussion of philosophy for years to come.
What would be the difference between "merely a duplicate" and "really me"?
Being a non-dualist, I can't help feeling that there's a fatal flaw in the argument. See here:
http://gcoupe.blogspot.com/2010/08/acid-tanks-await.html
Svlad, that's the question that the talk will be focused on: What counts as "personal identity" or "survival" in such a context? I don't necessarily have a clear answer.
btw, I won't be there either to advocate or debunk the idea, but to examine it philosophically (in a fairly introductory way, given that whole books can be written on such a topic).
I'm a little tired of this debate. Usually it takes the form of the transporter problem, and the argument is usually,
"Well you die and it's your copy that survives." And then I try to point out the problems with that view and everyone yells at me for being stupid.
1. "You" vs. "copy" is problematic. We don't have a solid theory of identity.
2. The view that when your "original" body dies you also die seems to be linked to a notion that identity is wrapped up in continuity of experience, or continuity of consciousness. But this is obviously wrong since people fall asleep, get knocked unconscious, etc.
The best solution I have is that you are your memories, at least in terms of personal identity. If you woke up tomorrow with someone else's memories, you would be under the impression that you were that person rather than yourself. Even if you had the memories of a concert pianist in a body with no muscle memory for playing piano, you'd probably think "I seem to have been put into a non-piano player's body," rather than "I seem to have had my memories replaced with a pianist's." That is, skills, opinions, tastes, etc. as they manifest in real time aren't crucial to identity, but the history of these things as they are aggregated into memory is.
Under this view, as long as you have a system that can read and write human memories in approximately the same way the brain does, and as long as you can upload memories with good enough fidelity, you can upload people. The electronic brain doesn't even need to work exactly the same as a bio brain, as long as the electronic brain "remembers" being the person who was uploaded.
-Dan L.
I think the "waking up with someone else's memories" thought experiment is a useful one; here's another. This one is motivated by the fact that most people agree that if we transplant Smith's brain into a new body, it's still Smith. But many people disagree that if we copy Smith's brain into a new body, it's still Smith ("what about the original?" comes up a lot).
Smith, as an old man, is worried about dying and consents to a life-extending procedure. The details of the procedure are proprietary and closely guarded; Smith is not aware of exactly how his life will be extended.
Smith is put under anesthesia and loses consciousness. When he awakes, he finds himself in his own body -- from 30 years earlier. However, he still has all the memories accumulated since that time; he can remember everything up to being put under prior to the procedure.
Smith considers a few of the possibilities. He calls himself prior to the procedure Smith, and after the procedure Smith*.
1) Smith's brain was transplanted into Smith* -- there is no more Smith, except perhaps a brainless body.
2) Smith's brain was copied into Smith* and the original was destroyed.
3) Smith's brain was copied into Smith*, but Smith is still extant.
Bear in mind that Smith* feels the same sort of continuity of experience anyone else would feel after being administered anesthesia and then waking up. He knows who he is because he remembers being that person and then going under.
Is there any way for Smith* to determine whether he's in situation (1) or (2) without either cutting open his head or getting extra information from the surgeons? I say not, and this is the main reason I think you are your memories.
(3) is the really interesting case. We have two individuals who identify as Smith. If you gave each a questionnaire about his childhood, the answers would be essentially identical. But Smith* remembers waking up in a younger body, whereas Smith presumably doesn't. So their memories correspond to the point of each waking up, at which they diverge. In my view, they are almost but not quite the same individual. If we go out another 30 years, their memories will probably have diverged to the point that they will be more noticeably individuals, although they will still answer questions about their early years largely the same, and of course answer to the same name.
You can extend this thought experiment by asking what if we do both the transplant and the copy, and then take great pains to make sure the experience of both Smith and Smith* are essentially identical. I haven't gotten too deep into this one yet.
I hope this is at least somewhat useful thinking about the problem.
-Dan L.
and here I was being all SF
Dan, you have a lot to say about the debate for someone who claims to be tired of it. :D
Having as much to say is why I'm tired of it. The last time I tried to get into a discussion about the uploading problem, I just got caught in this cycle:
Jones: If you copied someone's brain, the person would still be stuck in the original, so you wouldn't really be extending the person's life.
Me: OK, but if we transplant the brain into a new body or keep it in a vat, it's the same person?
Jones: Yeah.
Me: So what if you wake up in a new body and can't tell whether your brain was copied or transplanted?
Jones: Well, if you were copied, you would be a different person.
Me: But how would you know? Is personhood somehow tied to the molecules of the brain, or is personhood more like a pattern or configuration?
Jones: I'm pretty tired of you not getting this.* If you copied someone's brain, the person would still be stuck in the original.
*Actual quote.
Basically, I'm sick of arguing against this same thesis, which seems to be a hobby horse among people who've never taken a philosophy course but have watched The Matrix and thought it was super deep, dude. The drawback is that I'm not necessarily competent to criticize my own arguments, so I can't really be sure I'm getting anywhere useful.
That's why I'm actually glad you brought it up, since the discussion here is usually really good. Will you be posting a transcript of your talk, do you think?
-Dan L.
Oh, it is still SF, sorry -- I don't do the philosophical thing as it never really gets anywhere. The tech side is far more interesting -- knowing how a cell thinks and why it thinks; now there is a question
"knowing how a cell thinks and why it thinks," is not a question. A question would be more like, "Do cells think?" I'm assuming "no" but feel free to offer evidence to the contrary.
Philosophy does get places, by the way. The pattern is usually that a philosopher suggests a philosophical problem and a solution. Others criticize the solution and sometimes the problem itself, and in the process both the problems and solutions are more and more precisely stated. Once they've been stated precisely enough to admit empirical verification, the enterprise becomes a science.
That's why philosophy of mind is actually really important. Because we're getting to the point where neurologists can start to answer philosophical questions using empirical methods. But we still need to make sure we're asking them the right questions.
-Dan L.