I think that true AI is probably a lot further off than most singularity proponents believe. But if it ever happens, I'm not as confident as the author that the humans of the time won't do something foolish that could give the AI the power to implement some malicious agenda (if it had one).
The author's main argument is that, even if the AI infected other systems, it "would still not have any access to physical reality". It wouldn't have any "hands". But robots are computer systems with hands. The author seems to think that the AI's agenda could be served by nothing less than the building of a fully automated factory for making robots. He fails to consider the possibility of modifying the programs of existing robotic systems. And I assume that by the time AI becomes a reality there will be far more robotic systems around than there are now. The AI wouldn't need to copy itself to robots in its entirety (which might be impractical). Installing a more limited program could suffice.
Of course, one could prevent the AI infecting any other systems by means of a strict physical quarantine. But that would severely limit its usefulness. Once the AI had convinced the powers-that-be that it was harmless, there would be a very strong financial incentive to put it to work in ways that would give it access to the outside world: connecting it to other systems, or even using it to program other systems. Programming is a labour-intensive and error-prone task for which an AI would probably be ideal.
There's still quite a way to go from controlling a few robots to the destruction of the human race. But, provided it was sufficiently stealthy, the AI could implement its plan gradually. I think the author concentrates too much on the Skynet scenario, which goes from switching on the AI to nuclear holocaust within a few hours. I agree that's unrealistic. But there are other possibilities.
I can't say I'm worried about this happening. I think that AI is a long way off. And we'll be in a much better position to judge the risks when the time is nearer. I just think it's too soon to rule anything out.