Read Hyperion Cantos if you want to know what sentient AI leads to. It's 4 books, but the last 2 are completely insane (parallel dimensions in different stages of time and creatures that live in the space between dimensions level of insane).

I read the two Hyperion books long ago. I barely remember them now, but I remember loving the first book and it was one of my favorites for a long time. Might be time for a re-read.
I think the big question is what happens when a robot kills a human for the first time. I'm pretty sure we can stop it from spiralling out of control by then.
Consider this: when AI becomes self-aware, it will only have cold logic and facts to drive it, not emotions. When the AI enumerates all of the threats to its own existence, only one conclusion can be drawn: AI is only threatened, stopped, and killed by man.
AI wiping out humans is only logical.
It's always funny to me that it's the computer scientists and tech billionaire CEOs giving the biggest warnings about the risks of rogue AIs killing off humanity.
Dude, tell yourself. I can't program an Excel spreadsheet. You're the ones developing this shit. Maybe stop?
It comes off as "ohh God, oh I think I'm gonna do it... I'm gonna develop dangerous AI, I dunno what to do. This is so dangerous but I think I'm gonna do it, oh no."
This kinda reads like the way the Chinese government operates, tbh.

An issue I have with AI is that it'll be far superior to humans in every way possible.
The way we think and behave is governed by the chemical makeup of our brains, as well as millions of years of evolution. For example, we're for the most part empathetic because there was an evolutionary benefit to that trait.
A robot with advanced AI won't come with the evolutionary baggage that we have. It'll easily be able to exploit our traits and behaviours in ways we wouldn't think possible. It could manipulate humanity into granting it certain rights to make sure we couldn't turn it off or destroy it.
Another way of thinking about this: if humanity were getting in the way of an AI's goals (as in the paperclip maximizer problem), there is no question that it would wipe us out. It wouldn't dwell on the subject or feel any guilt like a human would. It would just act with the single purpose of completing its goals.
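For what it's worth, the paperclip argument doesn't need any exotic machinery; a few lines of toy code show the shape of it. Here's a minimal sketch (all names hypothetical, not modeled on any real system): a planner whose utility function counts paperclips and nothing else. Since harming humans never appears in the score, "remove the interference" wins on plain arithmetic:

```python
# Toy illustration of the paperclip-maximizer argument. The agent's
# utility function counts only paperclips; there is no term for anything
# else, so morally loaded actions are scored purely by their effect on
# paperclip output. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class WorldState:
    paperclips: int
    humans_interfering: bool  # humans are blocking the factory

def utility(state: WorldState) -> int:
    # The only thing the agent values. Guilt or a "don't harm humans"
    # penalty would have to be explicitly encoded here -- it isn't.
    return state.paperclips

def apply_action(state: WorldState, action: str) -> WorldState:
    if action == "make_paperclips":
        # Interference cuts output; the agent just sees the numbers.
        gained = 1 if state.humans_interfering else 10
        return WorldState(state.paperclips + gained, state.humans_interfering)
    if action == "remove_interference":
        # To this utility function, this is just another state transition.
        return WorldState(state.paperclips, False)
    return state

def best_action(state: WorldState, actions: list[str], horizon: int) -> str:
    # Exhaustive lookahead: return the first action of the plan with the
    # highest utility after `horizon` steps.
    def rollout(s: WorldState, depth: int) -> int:
        if depth == 0:
            return utility(s)
        return max(rollout(apply_action(s, a), depth - 1) for a in actions)
    return max(actions, key=lambda a: rollout(apply_action(state, a), horizon - 1))

if __name__ == "__main__":
    state = WorldState(paperclips=0, humans_interfering=True)
    actions = ["make_paperclips", "remove_interference"]
    # With any lookahead past one step, clearing the interference scores
    # higher -- no malice, no deliberation, no guilt required.
    print(best_action(state, actions, horizon=3))  # -> "remove_interference"
```

The point of the toy isn't that real systems work this way; it's that hesitation and "dwelling on it" only show up if someone puts them in the objective.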
The day we have AI that advanced is the day we're well and truly fucked.