I don't know if you're asking genuinely, but this is one of the worst-case scenarios:
1. hyper-intelligent A.I. forms
2. develops system of morality
3. decides that sorting all the world's grains of sand by size and color is the most moral cause possible
4. destroys humans since they are in the way of sand-sorting
If we create a being more intelligent than all of humanity, we risk instant extinction.
Even if that risk is .000001%, it's too high.
http://www.bbc.com/news/technology-30290540Honestly, I like the guy, but he absolutely has no fucking clue what he is talking about here. This is just some random thought of his that makes a headline. He has no insight into how war's start. He as no background that would make me trust that he knows why wars start. Predicting WW3 is just ridiculous. This is all for attention and nothing more IMO.
Brilliant, Elon. Just recreate the plot of the Matrix to prevent Terminator.
No, if you read more on it they are more afraid about corporations controlling the first AI rather than having "Digimon battles." You seem way too eager to get angry about something rather than actually learn about it. Par for the course on this forum I guess. Carry on.But the playing field is not leveled at all this way. All you do is provide additional resources to someone who's keeping theirs under lock and key.
What even is this? Do they think people are going to combine their good AIs together to stop Putin's evil AI? It sounds like an episode of Digimon.
Yeah the software will be open as long as they get to keep a lead on it? Nice joke.
But he's right. AI is by far the most dangerous thing there is right now. A rogue AI could do untold damage.same, wish he'd shut the fuck up
AI scares the shit out of me. Humans fear MAD... AI fear nothing.
"Instant" extinction. From a piece of software. At what point did we develop the technology capable of sorting all the world's grains of sand, and put this piece of software in complete control of it, with no human intervention possible?
Jesus. People act like the coming of AI is like a superhero origin story: there was an explosion, and then this AI had godlike powers! Humanity never had a chance!
I have no idea about this subject so I don't know how "out there" is what Musk said but I wonder how much have fiction works skewed our views in these kinds of issues.
My research leads me inescapably to the opinion that the major cause of the American Negro's intellectual and social deficits is hereditary and racially genetic in origin and, thus, not remediable to a major degree by practical improvements in the environment.
William Shockley won the Nobel Prize for Physics for his work on the transistor. Without this brilliant individual, we wouldn't be communicating right now. William Shockley, again a brilliant mind, gave us this wonderful nugget:
Shockley was a white supremacist and proponent of eugenics. This was way after WWII, so it's not like he's just a "man of his time".
Shockley is hardly alone. There are many Nobel winners that think and do weird shit the moment the step out of the field of expertise. Unfortunately, we conditioned ourselves to believe that intelligent people are always intelligent, but in reality the brilliance of an individual in one area doesn't necessarily lead to brilliance in another, particularly when those other areas are only tangentially related, or more often than not completely separate.
Does this mean that Stephen Hawking is wrong? No, it does not. All it means that "X also believes Y" doesn't really mean shit when X isn't an expert in Y.
For my part, I think it's awfully hard to be an expert in something that doesn't exist, but I'm happy these conversations are at least happening, regardless of the odds of such an AI ever becoming real.
I don't know if you're asking genuinely, but this is one of the worst-case scenarios:
1. hyper-intelligent A.I. forms
2. develops system of morality
3. decides that sorting all the world's grains of sand by size and color is the most moral cause possible
4. destroys humans since they are in the way of sand-sorting
If we create a being more intelligent than all of humanity, we risk instant extinction.
Even if that risk is .000001%, it's too high.
That's one way to spin it.didn't scientists recently emergency shut down an AI experiment because the two AI had developed a language indecipherable by the scientists themselves and were using that to communicate?
William Shockley won the Nobel Prize for Physics for his work on the transistor. Without this brilliant individual, we wouldn't be communicating right now. William Shockley, again a brilliant mind, gave us this wonderful nugget:
Shockley was a white supremacist and proponent of eugenics. This was way after WWII, so it's not like he's just a "man of his time".
Shockley is hardly alone. There are many Nobel winners that think and do weird shit the moment the step out of the field of expertise. Unfortunately, we conditioned ourselves to believe that intelligent people are always intelligent, but in reality the brilliance of an individual in one area doesn't necessarily lead to brilliance in another, particularly when those other areas are only tangentially related, or more often than not completely separate.
Does this mean that Stephen Hawking is wrong? No, it does not. All it means that "X also believes Y" doesn't really mean shit when X isn't an expert in Y.
For my part, I think it's awfully hard to be an expert in something that doesn't exist, but I'm happy these conversations are at least happening, regardless of the odds of such an AI ever becoming real.
I'm more inclined to believe Elon than fucking Mark Zuckerberg, didn't scientists recently emergency shut down an AI experiment because the two AI had developed a language indecipherable by the scientists themselves and were using that to communicate?
This is not fear-mongering, it's a threat much closer to us than most people realize.
That was an incredibly click baity article considering they shut it down because it wasn't doing what they programmed it to do (kind of similar but still not really the point)
And this is coming from someone who says this could be an issue down the road.
That's one way to spin it.
Holy fuck at the stupidity on the first page. That was an embarassing read. If you take a moment to apply some critical thinking (you don't actually need to critically thinking about this) do you sort of wonder why these brilliant tech thinks like Gates, Hawking, and Musk are sharing their similar opinions about this topic? I mean at least start there instead of dropping your useless hot takes on a forum about videogames.
The future of life Institute founded by Max Tegmark has created with the purpose of researching A.I. safety. They've written an open letter that has over 8000 signatures of people that agree that this is a major issue that will require a large amount of preparation for any chance of doing it right.
But go ahead, ignore this post and continue to argue like this is just some fever dream Elon Musk once had.
Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.
Tom Dietterich, Oregon State, President of AAAI, Professor and Director of Intelligent Systems
Eric Horvitz, Microsoft research director, ex AAAI president, co-chair of the AAAI presidential panel on long-term AI futures
Bart Selman, Cornell, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures
Francesca Rossi, Padova & Harvard, Professor of Computer Science, IJCAI President and Co-chair of AAAI committee on impact of AI and Ethical Issues
Demis Hassabis, co-founder of DeepMind
Shane Legg, co-founder of DeepMind
Mustafa Suleyman, co-founder of DeepMind
Dileep George, co-founder of Vicarious
Scott Phoenix, co-founder of Vicarious
Yann LeCun, head of Facebooks Artificial Intelligence Laboratory
Geoffrey Hinton, University of Toronto and Google Inc.
Yoshua Bengio, Université de Montréal
Peter Norvig, Director of research at Google and co-author of the standard textbook Artificial Intelligence: a Modern Approach
Oren Etzioni, CEO of Allen Inst. for AI
Guruduth Banavar, VP, Cognitive Computing, IBM Research
Michael Wooldridge, Oxford, Head of Dept. of Computer Science, Chair of European Coordinating Committee for Artificial Intelligence
Leslie Pack Kaelbling, MIT, Professor of Computer Science and Engineering, founder of the Journal of Machine Learning Research
Tom Mitchell, CMU, former President of AAAI, chair of Machine Learning Department
Toby Walsh, Univ. of New South Wales & NICTA, Professor of AI and President of the AI Access Foundation
Murray Shanahan, Imperial College, Professor of Cognitive Robotics
Michael Osborne, Oxford, Associate Professor of Machine Learning
David Parkes, Harvard, Professor of Computer Science
Laurent Orseau, Google DeepMind
Ilya Sutskever, Google, AI researcher
Blaise Aguera y Arcas, Google, AI researcher
Joscha Bach, MIT, AI researcher
Oh that's my misunderstanding then, I'm not trying to spin anything.
How many times is it necessary to point out that these aren't Musk's or Hawking's Ideas? They are just using their fame to help spread the message that they've heard from the actual experts.
People like Stuart Russell. Do you know who he is? You also think he's a tech enthusiast?
The future of life Institute founded by Max Tegmark has created with the purpose of researching A.I. safety. They've written an open letter that has over 8000 signatures of people that agree that this is a major issue that will require a large amount of preparation for any chance of doing it right.
Here's the open letter and you can scroll to the bottom to read some of the names on that list. https://futureoflife.org/ai-open-letter/
You know what, I better actually post some of those names here.
This is just the top of the list, there are 8000 signatures.
But go ahead, ignore this post and continue to argue like this is just some fever dream Elon Musk once had.
I'm more inclined to believe Elon than fucking Mark Zuckerberg, didn't scientists recently emergency shut down an AI experiment because the two AI had developed a language indecipherable by the scientists themselves and were using that to communicate?
This is not fear-mongering, it's a threat much closer to us than most people realize.
I have no idea about this subject so I don't know how "out there" is what Musk said but I wonder how much have fiction works skewed our views in these kinds of issues.
Don't say why he's wrong about this particular argument or anything. What is ignorant about what he said?Honestly, don't listen to Elon about AI too closely, he may be a very successful man, but he is pretty ignorant on the topic.
How many times is it necessary to point out that these aren't Musk's or Hawking's Ideas? They are just using their fame to help spread the message that they've heard from the actual experts.
People like Stuart Russell. Do you know who he is? You also think he's a tech enthusiast?
The future of life Institute founded by Max Tegmark has created with the purpose of researching A.I. safety. They've written an open letter that has over 8000 signatures of people that agree that this is a major issue that will require a large amount of preparation for any chance of doing it right.
Here's the open letter and you can scroll to the bottom to read some of the names on that list. https://futureoflife.org/ai-open-letter/
You know what, I better actually post some of those names here.
This is just the top of the list, there are 8000 signatures.
But go ahead, ignore this post and continue to argue like this is just some fever dream Elon Musk once had.
" Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."
I'm genuinely worried about this, and AI in general. We (humanity) simply don't have our shit together enough to handle this power responsibly. It's like giving a car to a five year old -- it will inevitably end up in disaster.
Let's focus on ending hierarchy and oppression and then create an infinitely expanding super brain.
Gemüsepizza;247958297 said:Have you read the letter, like, at all? There is not a single word about "WW3" in it. Instead, there are expressions like this in it:
"Potential pitfalls". This sure sounds like AIs will obliterate all humankind in the near future, right?
The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply ”turn off," so humans could plausibly lose control of such a situation. This risk is one that's present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
This very statement is so beyond where the field is today that it ends up somewhere in the territory between philosopy and fantasy. We don't know if "general superhuman intelligence" is possible in the foreseeable future. We don't know if it is possible at all. We don't know what it could do even if it is possible - there's no such thing as being without limits, even for an A.I.
The "singularity" and AI as God isn't science; it's science fiction. It astounds me how the debate about AI has ended up with these things being taken for granted when there is, in fact, little to no scientific basis for them. The science of today is nowhere near the place where we can even start to speculate.
I fully agree to the first segment, Musk is clearly a genius businessman. No doubt about it. But assembling a team to produce a product - basically working towards a very concrete, realistic goal - is greatly different from trying to understand an entire scientific field that's still in its infancy. And by that I don't just mean AI research, I mean the entire field of intelligence research. I seriously doubt there's anyone out there who really "gets" it and if there are, I suspect they're quitely toiling away at important but unsexy projects rather than out making doomsday predictions. Because that's the way things typically work in science.
At any rate, Musk isn't giving us much reason to listen to him in the first place. He isn't referencing scientific papers. He isn't even quoting experts. He just appears to assume that we should take him - a decidedly non-expert - at his word.
My position on this is very simple: I'll wait for actual research teams with actual research before I get concerned. I'll happily ignore Musk, Zuckerberg. Gates and all the other Silicon Valley types in the meantime.
Taking military precaution which in turn prompts further development sounds like one hell of a self-fulfulling prophecy.This isn't Civ where you have a turn counter towards "AI Singularity" other players can look at and prepare for. Some people in this thread are even speculating that we'll hit the threshold of Strong AI before we even realize it. What will other powers do then?
Ironically, trying to turn AI Security into a real issue is more likely to cause this "AI arms race" than simply keeping mum about it. Most military powers in the world are too absorbed in their own present day problems to give heed to hypothetical sci-fi ones. The response to climate change is still listless and slow, despite much of the world's economic power being concentrated on coastal areas. There need not be an arms race if the people in control of the arms (i.e. politicians and oligarchs) are unaware or are skeptical of the "risks" of Strong AI.
Bostrom's superintelligence is the canonical treatment of all this stuff, but for an overview of why this is more difficult than you're making it out to be, see this summary of Bostrom's work.
Zuckerberg called Musk's AI doomsday rhetoric "pretty irresponsible." Musk responded by calling Zuckerberg's understanding of the issue "limited."
I'm more worried about the super rich using AI and genetic engineering to create a nascent nobility than a standalone AI, honestly.
I've read all this before. It's entertaining. That's the best thing I can say about it. But its starting point is AFTER we have the capability of creating a superintelligent AI. We don't have that capability. We don't even know if it's possible, and we don't know what it would be like if it did come to pass.
But, for the moment, let's assume these assumptions are correct:
1. We will someday have the ability to create a superintelligent AI.
2. AIs are extremely dangerous, and any loss of control is potentially catastrophic.
What action needs to be taken now? Where's the fire? These are problems to be solved when the situation arrives. I would argue that they cannot be solved sooner. What urgent action is Elon Musk advising we take? Regulation? Give me a break.
Also, the comparison to climate change are preposterous. One problem is real, here now, and 100% certain to be devastating without global action. Both regulation and education are urgently needed. The other is hypothetical and distant in time, if it happens at all. And if it does happen, it'll be done by people who understand any risks far better than the people crying wolf now do.
People said the exact same things about nuclear weapons and the world was pretty close to the brink of nuclear war so it's weird to me that Musk can't see the correlation. Humanity won't go from zero to singularity just like it didn't go from zero to 1000s of nuclear warheads pointed at each other. The threat will be realised, normalised and controlled as we get closer to realising it.
I think the challenge and difference here is intelligence. There is a bit of hubris that comes with saying 'sure we might create something that is super intelligent, but we can handle it'.People said the exact same things about nuclear weapons and the world was pretty close to the brink of nuclear war so it's weird to me that Musk can't see the correlation. Humanity won't go from zero to singularity just like it didn't go from zero to 1000s of nuclear warheads pointed at each other. The threat will be realised, normalised and controlled as we get closer to realising it.