tag:blogger.com,1999:blog-24761391.post3749500042141962458..comments2023-10-26T22:06:11.166+11:00Comments on Metamagician3000: Transhumanism still at the crossroadsRussell Blackfordhttp://www.blogger.com/profile/12431324430596809958noreply@blogger.comBlogger185125tag:blogger.com,1999:blog-24761391.post-59370252617220120652008-05-24T04:08:00.000+10:002008-05-24T04:08:00.000+10:00This comment has been removed by the author.VDThttps://www.blogger.com/profile/01496647346219341625noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-25364691116285874242008-05-15T11:36:00.000+10:002008-05-15T11:36:00.000+10:00For many years I was active on the Extropians and ...For many years I was active on the Extropians and transhumanism mailing lists but left transhumanism entirely several years ago after becoming frustrated with the myopic discussion. I still check in occasionally to see what people have been up to though. I think transhumanism is, essentially, science fiction fandom masquerading as philosophy and political ideology. There are three "supertechnologies" that dominate transhumanist discussions and none of them have any basis in reality. These are Superintelligence, Drexlerian Nanotechnology and Uploading.<BR/><BR/>Greg Egan has given a good assessment of the unfounded assumption of Superintelligence. Drexlerian Nanotechnology is exemplary pseudoscience; it includes everything on the Crackpot Index from dire warnings to constant references to Feynman to an entire incestuous community of non-specialists citing one another in online journals of their own creation. I think Greg's characterization of transhumanists endowing these developments with <I>inevitability</I> is spot on.
That's exactly what I observed over the years even among "prominent" members of the community.<BR/><BR/>Uploading is slightly different, being a strange conflation of different philosophical positions, and hasn't really been used to derail legitimate discussion in the way Superintelligence and Drexlerian Nanotechnology have. The main issue with Uploading is that, even if we believe consciousness to be the product of neural processes, there isn't anything <I>physical</I> in the simulation to be conscious. Even if we take consciousness to be a pattern, there aren't any of those in the simulation either, because it's a <I>simulation</I>. Simulations are descriptions of things and not the things themselves.<BR/><BR/>I think the concept of virtual reality led people down this particular bizarre road. Nobody thinks a drawing of a thing is the thing itself. Nobody thinks multiple drawings of a thing that make it appear to move when I flip through them are the thing itself. But for some reason people think once I turn an equation into computer code and press "compile and run" it becomes the thing itself. I think it's the notion that we could be fooled by a sufficiently powerful simulation that leads people, via a sort of perverse Cartesian logic, to conclude that it's indistinguishable from reality and therefore must be the same as reality. ("Imagine your brain is also a simulation," although a huge conceptual leap from merely being fooled by VR, apparently is quite easy for people to entertain.) It's truly very strange. But I digress.<BR/><BR/>These "supertechnologies" I speak of are usually used in the transhumanist community as a bludgeon to quickly cast aside any discussion grounded in reality. If you want to discuss the implications of biotechnology or neuroscience, you're told it doesn't matter, since Superintelligence will make it irrelevant. Politics is quickly cast aside with Drexlerian Nanotechnology.
If there are now people engaging in serious discussion of real science it's an entirely new development (and a welcome one) but that certainly wasn't the case several years ago.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-5538770235486468822008-05-12T04:24:00.000+10:002008-05-12T04:24:00.000+10:00In fact, animal behaviors were achieved in robotic...In fact, animal behaviors were achieved in robotics in numerous experiments.<BR/>There is even a "carnivorous" robot that hunts and "eats" slugs (methane from rotting slugs is used to power the machine, which hunts slugs to sustain its methane supply), effectively reproducing the generic "predatory" behavior.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-26406118826935480922008-05-10T08:24:00.000+10:002008-05-10T08:24:00.000+10:00I'm not sure that much follows at all, if the issu...<I>I'm not sure that much follows at all, if the issue is our capacity to build something that possesses human-like inner experiences or even something with human-like informal logic.</I><BR/><BR/>Well, how about "rat-like" or "monkey-like" or even "fruitfly-like"? Do we have the capacity to build something that possesses inner experiences or informal logic at all? I posit that whatever barriers would keep a computer from achieving "human-likeness" will keep it from achieving "likeness" of any kind of living animal. Indeed, the goal may be "life-likeness", and you can leave "human" (and therefore "transhuman") out of discussions of AI.John Howardhttps://www.blogger.com/profile/15367755435877853172noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-72003995853341286992008-05-09T12:13:00.000+10:002008-05-09T12:13:00.000+10:00Pei Wang is a distinguished roll up the sleeves a...Pei Wang is a distinguished roll-up-the-sleeves-and-program type of AI researcher with less emphasis on philosophy. He put together a curriculum of suggested education for AGI researchers.
He has a link to some free introductory reading, <BR/>http://nars.wang.googlepages.com/wang.AGI-Curriculum.html<BR/>with a link to HAL'S Legacy, Stork.<BR/><BR/>This is a link to an interview with Marvin Minsky, an AI Illuminati<BR/>http://mitpress.mit.edu/e-books/Hal/chap2/two1.html<BR/>which comes from HAL'S Legacy, which I liked. Minsky likes cats, so what's not to like?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-76453794468869933632008-05-08T15:24:00.000+10:002008-05-08T15:24:00.000+10:00While I'm arguing with Stephen Harris on another t...While I'm arguing with Stephen Harris on another thread, I'll agree with him that Moore's law was never the issue - or shouldn't have been. We are confronted with a software problem, the basis of which is far deeper than usually appreciated.<BR/><BR/>I always say, "Assume <I>infinite</I> computer hardware capacity by your favourite measurement. <I>Then</I> what follows?" I'm not sure that much follows at all, if the issue is our capacity to build something that possesses human-like inner experiences or even something with human-like informal logic.<BR/><BR/>As I said a decade ago: "To take this further, it is notable that our rate of progress in understanding basic concepts to do with the human mind's functioning - such concepts as meaning and interpretation - gives no cause for optimism that we will ever be able to formalise the full repertoire of human thinking. In that case, a programmer may never be able to 'write' a legitimate 'mind program' to be implemented on the computers of the future." Etc.
<BR/><BR/>See http://www.users.bigpond.com/russellblackford/singular.htmRussell Blackfordhttps://www.blogger.com/profile/12431324430596809958noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-44596508989533925422008-05-08T14:31:00.000+10:002008-05-08T14:31:00.000+10:0001 said..."Also, this will allow to finally set th...01 said...<BR/>"Also, this will allow to finally <BR/>set the definitions straight and <BR/>avoid unfavorable confusions <BR/>between Transhumanism and other,<BR/>more "outstanding" views like <BR/>Singularitarianism. Confusions <BR/>like that frustrate me a bit, <BR/>for it is Singularitarianists <BR/>and only Singularitarianists <BR/>who in the "notion that self-<BR/>modifying AI will have magical <BR/>powers", a belief that somehow <BR/>gets hung on Transhumanists <BR/>surprisingly often. ...<BR/>---------------------------<BR/>SH: Singularitarianism, using the Yudkowsky interpretation (SIAI), is the specific cargo cult targeted. I don't think the Singularity that Kurzweil vaguely speculates about is nearly as incredible/disdained. <BR/>---------------------------<BR/>01 continues ...<BR/>Thus, it is sad that Mr.Egan is <BR/>not to be found among those who <BR/>identify as Transhumanists and <BR/>that he is so displeased with <BR/>the movement and its image.<BR/>------------------------------<BR/>SH: Maybe he doesn't adopt the label but he certainly seems sympathetic to some of the ideas.<BR/><BR/>greg egan said ...<BR/>..."there is no physical principle <BR/>to prevent the uploading of <BR/>everything which is capable of <BR/>having a causal effect on a <BR/>given person's brain, body and <BR/>behaviour. In that limit we <BR/>could even satisfy Roger Penrose and simulate everything (a person's whole body and immediate environment, down to the atomic level) on a quantum computer."<BR/><BR/>SH: Mr. Egan seems to be defending uploading which is a Transhumanist if not Posthuman notion. 
Actually, I found his position surprising, since I did not know that Penrose would sign off on this idea.<BR/>--------------------------------<BR/><BR/>01 concluded ...<BR/>P.S.:<BR/>Or... wait, perhaps, I have one, <BR/>single assumption regarding AI: <BR/>Murphy's Law applies to AI <BR/>research, too ;-)<BR/><BR/>SH: Well, I'm glad you didn't say Moore's Law, which deals with hardware components, while the problem with AI is software; there is no program which demonstrates even the beginning of strong AI, so doubling zero every two years mounts up to zero. The great chess-playing program is not considered a strong AI accomplishment.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-75938063027565140292008-05-08T09:56:00.000+10:002008-05-08T09:56:00.000+10:00Of course it's also arguable that (A) does apply t...<I>Of course it's also arguable that (A) </I>does apply<I> to all humans, and equally to all AIs.<BR/></I><BR/><BR/>That's basically what I think, but AI's should have higher plateaus. I was just using impaired humans because it is much, much easier to see their limitations.pecohttps://www.blogger.com/profile/02363753338511940617noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-27992100356877034192008-05-08T09:39:00.000+10:002008-05-08T09:39:00.000+10:00That adult isn't impaired enough not to have gener...<I>That adult isn't impaired enough not to have general intelligence. (Or if it is, just increase its mental age.)</I><BR/><BR/>Peco: all you're doing is declaring that there are humans such that:<BR/><BR/>(A) their knowledge would necessarily plateau, due to their innate limitations, and<BR/><BR/>(B) their skills are still sufficiently flexible to be called "general intelligence".<BR/><BR/>(A) is true, but (B) is just a choice of definition. You're free to define "general intelligence" that way if you wish, since there is no universally agreed meaning for the term.
But that doesn't change the fact that it's arguable that there are people to whom (A) does not apply; it simply means you would prefer to use a different term than "general intelligence" for their set of abilities.<BR/><BR/>Of course it's also arguable that (A) <I>does</I> apply to all humans, and equally to all AIs.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-69072058183758951732008-05-08T09:25:00.000+10:002008-05-08T09:25:00.000+10:00Possibly relevant to various interests (and my pri...Possibly relevant to various interests (and my prior try looked broken)<BR/><A HREF="http://io9.com/386982/where-is-the-posthuman-bertie-wooster" REL="nofollow">posthuman bertie wooster</A>Damien Sullivanhttps://www.blogger.com/profile/13321329197063620556noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-28093129936009558472008-05-08T09:22:00.000+10:002008-05-08T09:22:00.000+10:00This comment has been removed by the author.Damien Sullivanhttps://www.blogger.com/profile/13321329197063620556noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-24970434396583463582008-05-08T06:47:00.000+10:002008-05-08T06:47:00.000+10:00Ooops, pardon, missed "believe" between Singulari...Ooops, pardon, missed "believe" between <I> Singularitarianists who</I> and <I>in the "notion</I>Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-17295907459552582422008-05-08T06:43:00.000+10:002008-05-08T06:43:00.000+10:00Okay, part two of my rant/comment/whateverFirst, o...Okay, part two of my rant/comment/whatever<BR/><BR/>First, on the issue of the original post <BR/>--<BR/>I think the T-word <I>might</I> have some unfavorable connotations, but it is not the biggest issue. Not even a serious one, right now.<BR/><BR/>I see "vagueness" as the biggest problem here, because the vagueness of the concept allows people who support all kinds of... ahem... how to put it in a politically correct manner... 
<I>arguable</I> policies to hop onto the Transhumanist bandwagon, which can AND WILL hurt in the long run. <BR/>Because of such concerns I second Louis in that Transhumanism needs a central figure/organization that will approve policy regarding the most important issues. <BR/><BR/>Also, this will allow to <B>finally set the definitions straight </B> and avoid unfavorable confusions between Transhumanism and other, more "outstanding" views like Singularitarianism. Confusions like that frustrate me a bit, for it is Singularitarianists and only Singularitarianists who in the <I>"notion that self-modifying AI will have magical powers"</I>, a belief that somehow gets hung on Transhumanists surprisingly often.<BR/><BR/>Disclaimer: - I do not have anything against Singularitarianists (though I am very skeptical about their concerns and predictions), it is just that they are birds of another feather.<BR/><BR/>Another, and more obvious problem, is that the Transhumanist movement apparently needs more, MANY more reasonable, well-educated, talented people to join it. <BR/>Thus, it is sad that Mr.Egan is not to be found among those who identify as Transhumanists and that he is so displeased with the movement and its image.<BR/>Speaking of which, the current Transhumanist blog-presence is indeed of questionable quality.<BR/><BR/>But such a problem can only be addressed by having more rational and creative people among our ranks.
Like, talented professional writers, maybe.<BR/><BR/>--<BR/>Second, on the AI issue<BR/>--<BR/><BR/>I see no reason for intelligence to be impossible to implement in non-protein systems.<BR/><BR/>Also, I think that certain controversial phenomena of the human mind, like, for instance, "visual thinking", suggest that intelligence and/or self-awareness, at least in humans, <I>might</I> have significantly varying structure and <I>might</I> function in more than one possible manner.<BR/><BR/>However, I do not have any assumptions regarding the properties, structure and functioning that non-protein intelligence will have when - and IF - it is created.<BR/><BR/>P.S.:<BR/>Or... wait, perhaps, I have one, single assumption regarding AI: Murphy's Law applies to AI research, too ;-)Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-51909466374335624922008-05-08T00:49:00.000+10:002008-05-08T00:49:00.000+10:00Greg:That adult isn't impaired enough not to have ...Greg:<BR/><BR/>That adult isn't impaired enough not to have general intelligence. (Or if it is, just increase its mental age.)pecohttps://www.blogger.com/profile/02363753338511940617noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-82362657992939882892008-05-07T17:53:00.000+10:002008-05-07T17:53:00.000+10:00Damien:I don't want to exaggerate the analogy with...Damien:<BR/><BR/>I don't want to exaggerate the analogy with Turing completeness; I don't think the property of being a cognitive generalist is anywhere near as sharp. Can every individual be meaningfully classified as having, or lacking, an ability for indefinitely extensible learning that can be separated out from their tastes, their personal goals, their social environment, and so on? I doubt it.
There will be a minority of people who are clearly limited to a childlike intellectual life, and a minority of people for whom the sky appears to be the limit, but for the vast bulk of us in between, who knows?<BR/><BR/>It's not really clear to me what you're suggesting in regard to autism. Learning a foreign language is an excruciating struggle for me, but if I'm motivated and persistent I can still make progress, and if some people find ordinary social cues just as hard to learn (or differential geometry, or whatever) then I don't know what to add beyond the uncontroversial fact that some specific skills always come harder to some people than others. But I don't see that as having much impact on the claim that AIs could understand things that <I>no</I> human ever could.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-11865484546886581222008-05-07T16:27:00.000+10:002008-05-07T16:27:00.000+10:00I think they won't be able to learn anything very ...<I>I think they won't be able to learn anything very complex, but they still might.</I><BR/><BR/>The range of possible minds that deserve to be called human will certainly encompass people who can learn a certain amount and then asymptotically approach some kind of plateau. If we're going to contemplate a range of impairments, it's not surprising that there will be marginal cases that are on the border of being cognitive generalists.<BR/><BR/>But how does this take us beyond all the unconvincing arguments about (dogs:humans) :: (humans:super-AI)? 
<BR/><BR/>Many (probably most) healthy humans also hit a plateau, but the reasons are usually complicated motivational and socioeconomic contingencies rather than anything intrinsic to their cognitive structure.<BR/><BR/>I certainly don't claim to have a watertight <I>proof</I> that no plateau exists for a healthy, motivated immortal human with extensible tools at their disposal, but you're not going to persuade me that such a plateau exists just because one exists for some impaired humans.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-22428977778585462412008-05-07T15:36:00.000+10:002008-05-07T15:36:00.000+10:00Well, the argument rests on humans being sort of T...Well, the argument rests on humans being sort of Turing-complete, or LBAish, or something. Ones who aren't, like "permanent 3 year olds", or maybe even normal people who just can't get through a logic or programming or algebra class, might disprove the argument. Or they might just show that being able to learn language doesn't make you that much of a generalist, and the human population straddles the Turing divide.<BR/><BR/>Another challenge I thought of, which you might want to comment on given some of your stories, Greg, is autism. Or color-blindness. Autists don't ever learn to function socially like normal humans. OTOH, high-functioning ones learn to fake it somewhat -- learning faces and socializing the hard way. And one might imagine co-processors (implants or wearable) that took up the slack, and annotated faces with their expression and appropriate responses, or things with their normative color, to make up for the defects in the natural human co-processors (or senses, I think color-blindness is just missing some cones, though the cortex probably adapts to deal with the youthful inputs.)<BR/><BR/>I'm not sure what this proves, if anything. 
We've got some stuff like facial processing that's hard to impossible to make up the lack of, right now, orthogonal to "general intelligence". OTOH the "enhanced humans can compete with AIs" case doesn't rest on humans being limited to pieces of paper; if the sapient AI can learn to recognize faces, then a co-proc -- at worst, a thralled AI -- can be built for the human.Damien Sullivanhttps://www.blogger.com/profile/13321329197063620556noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-12828720998149128162008-05-07T15:15:00.000+10:002008-05-07T15:15:00.000+10:00Peco, if you want to make a finite list of things ...<I>Peco, if you want to make a finite list of things someone knows, and declare that they are too stupid to learn anything new ... then by hypothesis they won't learn anything new.</I><BR/><BR/>They <I>can</I> learn new things (like new vocabulary words, how to add and much more). I think they won't be able to learn anything very complex, but they still might.pecohttps://www.blogger.com/profile/02363753338511940617noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-27523976498723905642008-05-07T14:39:00.000+10:002008-05-07T14:39:00.000+10:00Peco, if you want to make a finite list of things ...Peco, if you want to make a finite list of things someone knows, and declare that they are too stupid to learn anything new ... then by hypothesis they won't learn anything new. 
I don't know what you think follows from that, but it has no bearing on anything I've claimed.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-87131240569457227842008-05-07T10:41:00.000+10:002008-05-07T10:41:00.000+10:00But you still haven't pointed to any specific syst...<I>But you still haven't pointed to any specific system, explained what you mean by "understanding" it, and explained why an AI could do that but an uploaded human with access to the same computing resources could not.</I><BR/><BR/>Can I use a very stupid human (that still has general intelligence)?<BR/><BR/>If you uploaded an adult with a permanent mental age of 3 that only knows things that most 3-year-olds would know, how to read, how to use Google, how to record data on a computer and the syntax and standard libraries of a programming language, I don't think they would be able to program a computer to solve a simple math problem (like solving a quadratic equation). You don't even need an AI to do this quickly.pecohttps://www.blogger.com/profile/02363753338511940617noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-66105467819240645082008-05-07T10:00:00.000+10:002008-05-07T10:00:00.000+10:00Peco: an "eternal 3-year-old" would need to have ...Peco: an "eternal 3-year-old" would need to have their memory wiped every day to stop them becoming a 4-year-old. There are people who grow to adulthood who might be informally described as having "a permanent mental age of 3", but part of ordinary human intelligence is an ability in principle to continue building on one's education indefinitely.<BR/><BR/><I>Having information available in storage is not the same as understanding it</I><BR/><BR/>No kidding. But so far you've offered no defensible account of what understanding <I>is</I>, or why an AI could understand something that a human (with fairly matched tools at their disposal) could not.
You keep invoking the phrase "working memory" as if you've convinced yourself that it's the key to everything, but you haven't offered any justification for that.<BR/><BR/>"Understanding" is an informal term with no precise technical meaning, but some useful meanings for "understanding X" include: having simplifying insights about X; seeing connections between X and other systems; noting generalisations; having some ability to generate approximate predictions.<BR/><BR/>Using these kinds of definitions, there are thousands of concepts and systems that I understand that I can't hold completely in my working memory: mathematical theorems, large pieces of software that I've written, etc. I certainly can't hold everything important about quantum mechanics in my working memory; maybe I could fit its axioms, but in any case that fact has nothing to do with my ability to calculate the behaviour of quantum systems and develop experience and intuition about them. All of that lies in my long-term memory, and in notes I've made, books I own, and computer programs I've written.<BR/><BR/>Having a larger working memory might be very convenient. I am certainly <I>not</I> arguing that given equal hardware there is no deviation from human cognitive strategies that would yield advantages; I'm sure there are many.<BR/><BR/>But you still haven't pointed to any specific system, explained what you mean by "understanding" it, and explained why an AI could do that but an uploaded human with access to the same computing resources could not.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-71823453360907531332008-05-07T09:22:00.000+10:002008-05-07T09:22:00.000+10:00peco said...Just simulate how humans think, except...peco said...<BR/><BR/>Just simulate how humans think, except better, faster and with more working memory.<BR/>----------------------------------<BR/>SH: Modern computers are based on Turing Machines, which only compute computable processes.
Even with the proposed speedup arising from quantum computers, that quantum computer only computes computable processes. Yes, it would cover more area, a broader problem/solution plane, but the questions and answers don't touch anything that is basically uncomputable. A new dimension of intelligence is not achieved.<BR/>Now if the human brain/mind is only capable of deliberating computable questions and answers, then the human mind is just slower in calculating the same computable processes. So, the answers can be found more quickly with computers, <BR/>but they answer only questions at the same level: what can be computed. Yes, one can call that a faster intelligence, but it isn't a new type or level of intelligence that could answer questions that are fundamentally uncomputable. <BR/>Now suppose that the human mind thinks by both computable and uncomputable processes. How is a computer supposed to become more intelligent in terms of finding answers to questions which can't be posed in a computable/algorithmic<BR/>format which the computer needs in order to function? The question is how a computer with only computable resources, which can only do computable processes, proceeds to modify itself so that it somehow picks itself up by its bootstraps and becomes a machine that can now process uncomputable processes? So that it moves into the realm of super-intelligence, able to find answers posed in both computable and uncomputable format?<BR/>Well, Turing said this wasn't possible and that the best that could be done is for the computer to consult an oracle. The idea of super-intelligence doesn't answer the challenges of computability theory and, I think, the Church-Turing Thesis. That is why people doubt Yudkowsky's version of the Singularity, an evil, rogue, self-evolved super-intelligent entity.
He doesn't refute the basis for Cognitive Science, and has no evidence (such as even a modicum of strong AI, realized) to endorse his speculation, which amounts to no more than philosophical handwaving.<BR/>How many of the people who like Yudkowsky's version of the Singularity have a background in the travails of AI so that they can judge his claims on the basis of their own actual understanding and education in AI issues? I think people don't critically examine his claims because they endorse the "promise" of his claims, whether those claims proceed by handwaving or not.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-64312384897121662152008-05-07T08:44:00.000+10:002008-05-07T08:44:00.000+10:00I didn't have time to make my comment very long, s...I didn't have time to make my comment very long, so I'll write more.<BR/><BR/>Even if humans can theoretically solve anything a universal Turing machine can solve, it doesn't mean they will solve it. (It might not be economical.) Having information available in storage is not the same as understanding it--I don't think you could ever get any number of eternal 3-year-olds (who have general intelligence and the ability to store things on computers) to understand anything very complex, like quantum mechanics, in any amount of time.pecohttps://www.blogger.com/profile/02363753338511940617noreply@blogger.comtag:blogger.com,1999:blog-24761391.post-63410243664645648172008-05-06T23:53:00.000+10:002008-05-06T23:53:00.000+10:00ParadiseIs exactly likeWhere you are right nowOnly...Paradise<BR/>Is exactly like<BR/>Where you are right now<BR/>Only much<BR/>Much ....<BR/><I>Better</I><BR/><BR/>-- Laurie Anderson, <I>Language Is A Virus</I>Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-24761391.post-61529705869533307132008-05-06T23:33:00.000+10:002008-05-06T23:33:00.000+10:00Greg, you probably win, but:Just simulate how huma...Greg, you probably win, but:<BR/><BR/>Just simulate how humans think, except better, faster and with
more working memory.pecohttps://www.blogger.com/profile/02363753338511940617noreply@blogger.com
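Editor's note: the computability point Stephen Harris makes in this thread, that Turing showed a computer cannot bootstrap itself past computable processes and would need an oracle, rests on the halting theorem, whose diagonalization argument fits in a few lines. This is a minimal illustrative sketch; the names `halts` and `make_contrarian` are invented for the example and come from no real library.

```python
def make_contrarian(halts):
    """Given a claimed total halting decider halts(prog, inp) -> bool,
    build the one program that decider must misjudge."""
    def contrarian(prog):
        if halts(prog, prog):   # decider claims prog halts on itself...
            while True:         # ...so do the opposite: loop forever
                pass
        return "halted"         # decider claims it loops, so halt at once
    return contrarian

# Feed in a (deliberately wrong) decider that always answers "loops":
contrarian = make_contrarian(lambda prog, inp: False)
print(contrarian(contrarian))  # prints "halted", contradicting the decider
```

Any total `halts` is refuted the same way: whatever it answers for `contrarian` applied to itself, `contrarian` does the opposite. That is why no amount of extra speed or hardware (Moore's law, quantum speedup) changes *which* questions are answerable, only how fast the computable ones are answered.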