Battersea Power Station
Member
I don't know if you're asking genuinely, but this is one of the worst-case scenarios:
1. hyper-intelligent A.I. forms
2. develops system of morality
3. decides that sorting all the world's grains of sand by size and color is the most moral cause possible
4. destroys humans since they are in the way of sand-sorting
If we create a being more intelligent than all of humanity, we risk instant extinction.
Even if that risk is .000001%, it's too high.
"Instant" extinction. From a piece of software. At what point did we develop the technology capable of sorting all the world's grains of sand, and put this piece of software in complete control of it, with no human intervention possible?
Jesus. People act like the coming of AI is like a superhero origin story: there was an explosion, and then this AI had godlike powers! Humanity never had a chance!
AI would have godlike powers. Look how long it took us to go from thinking of a rock as a tool to building an atomic bomb. Hundreds of thousands, if not millions, of years, right? We're human. We go slow.
And that's the reason people laugh at that cheap sci-fi idea. It's neither realistic nor practically possible.
AI could run through that same evolution in seconds. What will happen in minutes? What will happen in a day? The point is we can't imagine. It won't be as simple as "we have it isolated in this box, so let's take some time to decide what to do," because this being might figure out ways out of the box that we can't possibly prepare for.
Instant extinction is a possibility. Maybe that possibility is one in a thousand, or one in a million. But because the consequence is so dire, it's one we have to consider.