The problem with current AI models is that we have no way of transferring our knowledge into them. Our history contains thousands of years' worth of mistakes we've had to learn from in order to form and maintain the status quo we call civilization.
Current AI is based on self-learning: you throw massive amounts of data at what is essentially a newborn brain and have it run an unfathomable number of simulations until it figures out how it needs to react to that data to achieve whatever goal criteria it's been set.
If you're familiar with the idea of brute-forcing a password, AI based on evolving topologies is a similar concept. Nobody figured out how to program a robot to walk across different terrains using traditional if/then/else logic, so instead they made a crude brain and ran a billion simulations of it until it came up with its own rule-set that achieved the same result.
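For the curious, here's a minimal sketch of what that evolutionary "brute force" loop looks like. It's a toy, not anyone's real system: the fitness function is a made-up stand-in (in a real setup it would be a physics simulation of the robot walking), and the "rules" are just a list of numbers, but the shape of the loop is the point: blind mutation plus selection, repeated until something works.

```python
import random

# The "rules" we hope evolution stumbles onto. The AI never sees this directly;
# it only sees its score. (Hypothetical target, purely for illustration.)
TARGET = [0.2, -0.5, 0.9, 0.1]

def fitness(genome):
    # Higher is better: how close this rule-set is to the unknown target.
    # In a real system this would be "how far did the robot walk before falling over".
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Randomly nudge each value -- no understanding, just blind variation.
    return [g + random.gauss(0, rate) for g in genome]

# Start with 50 completely random rule-sets.
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(50)]

for generation in range(1000):
    # Score everyone, keep the best 10, refill the rest with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = population[0]
print("best fitness:", fitness(best), "rule-set:", best)
```

Notice there's no if/then/else logic about *how* to solve the problem anywhere in there; the only human-authored part is the scoring, which is exactly why nobody can tell you what the winning rule-set "means".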
The reason this type of AI is so attractive is that, now that we've developed many variants of it, a lot of problems we've historically not been able to crack are within reach if we just throw enough CPU power and time at them. But the risk of this approach is that we cannot comprehend the thought process the AI ends up with. We can observe its behavior and say that yes, it's taught itself how to walk over terrain, or drive a car, or defend us from attack. But we can only observe the results for what we actually test it against.
And it's here, in the human part of the equation, that things are likely to fuck up.
A realistic, non-scaremongering scenario is this: we have a second-strike missile defense system. We need it to operate on its own because, well, that's what second-strike systems do. So we give it an AI, we add in some pretty strict criteria, we run a bunch of simulations, and eventually we green-light the thing.
Then some really fucking random scenario occurs, something we never tested it for, and it exhibits a behavior it has never exhibited before. You've basically got an AI that's akin to a baby being given a rocket launcher. The AI won't have malice towards us; it won't develop a desire to wipe us out. It'll just do something totally fucking stupid.
So when thinking of AI, please forget the T-1000 Terminator scenario and instead think of it as something more like an idiot savant: if it's given the things it's used to being given, it will likely outperform even us humans and appear to be the smartest, most sophisticated brain on the planet. But take it out of its comfort zone, especially into a scenario we never tested it against... and it's going to do whatever the AI equivalent of random screeching is.
I think what will happen is that we'll project our interpretation of intelligence onto it, because at first glance it seems so vastly intelligent. But we'll forget that it has no common sense and no moral compass to fall back on, and somewhere down the line that oversight will undo us.