Interesting. So without desires it can't be considered intelligent? And if it has desires, then they're something it developed on its own?
I can certainly see why that would be frightening.
What I don't fully understand about the AI fear is why it would be a threat to us. What would its motivation be for killing us? Software can't feel hate or jealousy. It has no need for food, money, religion, or any of the other things that drive humans to kill.
So why would something highly intelligent feel the need to end humanity?
Or is the fear simply based on the fact that we won't be top dog anymore?
I see. That makes a lot of sense. Thank you.
Is AI actually used in common vehicles at the moment? Like, learning algorithms or something? I'm not aware of anything outside of self-driving cars, but I don't work in the automotive industry.

Yeah, you can say that, but that's almost an inevitability. Just looking at the growth of AI and computer systems in the last 2-3 decades is evidence of that. We have AI controlling vital aspects of pretty much every motorized vehicle now. Now we have smart homes, where AI can control the temperature and other settings. We have high-frequency trading, where AI is used to facilitate trades at a level and speed humans can't compete with. As computers become more powerful and tech is miniaturized, AI will be used to control more and more things. It only takes some shitty programming, or an uncaught exception/error, for some things to go out of whack.
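To make that last point concrete, here's a toy sketch (purely hypothetical names and values) of how one unhandled error in an automated control loop can take a system out of whack, and how little it takes to guard against it:

    # Hypothetical toy example: a naive "smart home" climate loop.
    # One malformed sensor reading raises an uncaught ValueError and
    # kills the whole controller -- no malice required.

    def read_temperature(raw: str) -> float:
        return float(raw)  # raises ValueError on garbage input

    def control_loop(readings):
        for raw in readings:
            temp = read_temperature(raw)  # one bad reading crashes everything
            print("cooling on" if temp > 23.0 else "cooling off")

    # A defensive version treats bad input as an expected event:
    def safe_control_loop(readings):
        for raw in readings:
            try:
                temp = read_temperature(raw)
            except ValueError:
                print("bad reading, holding last known state")
                continue
            print("cooling on" if temp > 23.0 else "cooling off")

    safe_control_loop(["22.5", "24.1", "ERR", "23.0"])  # recovers at "ERR"
    # control_loop(["22.5", "24.1", "ERR", "23.0"])     # would die at "ERR"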
So you're saying the Cycle will continue.
I just want you to know that there are those of us who appreciate your Mass Effect references.

Red, green, blue, or neither, Elon?
"Do you remember what the question was that caused the creators to attack us, Elon Musk? 'Does this unit have a soul?'"
It's a valid point that self-driving cars may be the first major case of putting a complicated computer system in charge of something widely available and capable of causing some major damage. It's already presumably the case for aircraft, but cars are a more down-to-earth (har har) thing.
What does it mean to call something intelligent if it can't feel hate or jealousy, and has no need to make its own decisions, because it has no physical needs? This isn't head-in-the-clouds philosophizing, it is central to the discussion. For a machine to be intelligent, it would need desire and the drive to have that desire met. The problem is we can't predict what the machine would want, and if we programmed desire in, then the machine is not intelligent. It is acting for us, not for itself.
To create a machine with a program identical to ours is suicide. We can look at our imperative to reproduce our genes as software; because of this software, we have covered and enslaved the earth. Imagine a machine with the kind of intelligence that we have and the ability to out-think us in a flash. It is legitimately horrifying. I don't think there is any worse outcome than that.
I clarified it in my next post. It wouldn't mean that we'd become extinct, just irrelevant. However, as opposed to something like a cockroach or an ape, we'd be painfully aware of our irrelevancy, and we'd also easily become extinct if we ever tried to restore our relevancy.

I don't think this is how evolution works. Or you seriously need to define "superior" in this context, because most humans would consider humans to be superior to roaches, but roaches haven't been replaced by humans.
If you don't know what the machine would want, why do you assume that what it does want would be a detriment to humanity?
Why do you assume this machine would be emotional? It has no chemical reactions fucking with its internal brain chemistry. It isn't fighting against evolution's effects.
I get that there's a fear of the unknown, but IMO the logical short-term solution to growth would be cooperation with humanity. Long term is another matter, I suppose.
I would rather we not do AI, but work on increasing our biological intelligence by incorporating technology. Have humanity become cyborgs.
It's not an unreasonable concern. While the development of AI is essentially inevitable, we need to be careful. We really only get one shot at getting it right.
Isaac Asimov's Three Laws, or something similar, need to be part of any AI's base code.
Otherwise humanity is fucked if a true, conscious AI gets any type of power.
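Just to illustrate what "part of the base code" could even mean, here's a purely hypothetical sketch: every proposed action has to pass a hard gate before anything executes, with the checks ordered by Asimov's hierarchy. The hard part, of course, is computing those booleans in the first place; that's the whole problem, and it isn't solved by three if-statements:

    # Hypothetical sketch: "laws" as a hard gate checked before any action.
    # All names are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool      # First Law
        disobeys_order: bool   # Second Law
        destroys_self: bool    # Third Law

    def permitted(action: Action) -> bool:
        # Ordered by priority, mirroring Asimov's hierarchy.
        return not (action.harms_human
                    or action.disobeys_order
                    or action.destroys_self)

    def execute(action: Action) -> None:
        if not permitted(action):
            raise PermissionError(f"blocked by base code: {action.description}")
        print(f"executing: {action.description}")

    execute(Action("fetch coffee", False, False, False))  # runs
    # execute(Action("push human", True, False, False))   # would be blocked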
Well, I should note that I'm using AI... more holistically to include most computerized systems which act automatically based on inputs, not just those that have learning algorithms built in. Basically, the same definition used in a videogame, for instance.
I assume an AI would be competitive with humanity, not necessarily malevolent. It would have its own needs to be met and it would compete with us for resources, like any other lifeform. The severity of that competition depends on the goal to be met.
AI would absolutely need something like emotions to guide its intelligence. Otherwise there would be no reason for it to make decisions. I completely reject the idea that you can disengage emotions and retain anything like what we consider intelligence. Intelligence works in service to emotion. Emotion is king.
I don't think that's what Elon Musk is talking about, though. That definition of AI, "act automatically based on inputs", seems to me to include virtually any computer program that supports user input. Such programs are obviously near-universal, and not what people are concerned about, since they do not learn or change.
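That distinction fits in a few lines. Both of these "act automatically based on inputs", but only the second learns, i.e. changes its own behavior over time. A hypothetical minimal contrast:

    # Hypothetical contrast: fixed rule vs. learning rule.

    # 1) Videogame-style "AI": same input always gives the same output,
    #    and nothing about the program ever changes.
    def fixed_rule(temp: float) -> str:
        return "heat on" if temp < 20.0 else "heat off"

    # 2) A learning rule: the threshold adapts to feedback, so the
    #    program's behavior tomorrow differs from its behavior today.
    class LearningRule:
        def __init__(self) -> None:
            self.threshold = 20.0

        def act(self, temp: float) -> str:
            return "heat on" if temp < self.threshold else "heat off"

        def feedback(self, user_felt_cold: bool) -> None:
            # Crude online update: nudge the threshold toward comfort.
            self.threshold += 0.5 if user_felt_cold else -0.5

    ai = LearningRule()
    print(ai.act(20.2))               # "heat off"
    ai.feedback(user_felt_cold=True)
    print(ai.act(20.2))               # now "heat on" -- the rule changed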
Luckily battery power is so shitty that they'll all be stuck corded to outlets anyways.
I could have sworn I heard this in MGS4.

The interesting thing about saying they are more dangerous than nukes is that the invention of the programmable computer and game theory by John von Neumann are referenced by intelligent thinkers (Buckminster Fuller, Robert Anton Wilson, etc.) as the only reasons we are still alive past the fifties.
The war-games computers spat out models showing that using nukes would result in mutually assured destruction. That is the only reason we are still here.
Which also explains why everything has devolved into proxy wars and terrorism. The "opinions" informing these strategic decisions come from emotional humans, and humans are prone to influence and error. That's why decisions keep being made in service of those ridiculous ends, based on the outdated assumption that we live in a world (universe) of scarcity instead of a world of plenty (energy, aka "resources").
The computer systems that make these statements have no political affiliation or opinion other than producing accurate assessments of probabilities.
They are more like angels in that regard. We are the demons stuck in the lower circuits of consciousness.
Abstract: This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first - by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI disaster, especially if risk-taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each other's capabilities (and about their own), the more the danger increases.
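This isn't the paper's actual model, just a toy two-team sketch (all payoff numbers invented) of the incentive structure the abstract describes: each team picks a safety level, skimping on safety raises your chance of finishing first but also the chance of a disaster that hits everyone, and brute force finds the pure-strategy Nash equilibria. Shrinking DISASTER or growing PRIZE pushes the equilibrium toward skimping:

    # Toy two-team AI race -- NOT the model from the paper.
    # All numbers are invented for illustration.

    import itertools

    SAFETY_LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]  # 1.0 = full precautions
    PRIZE = 10.0       # value of building the first AI
    DISASTER = -100.0  # cost, to everyone, of an AI disaster

    def win_prob(mine: float, theirs: float) -> float:
        # Skimping on safety speeds you up.
        my_speed, their_speed = 2.0 - mine, 2.0 - theirs
        return my_speed / (my_speed + their_speed)

    def disaster_prob(winner_safety: float) -> float:
        return 0.5 * (1.0 - winner_safety)  # unsafe winner -> likely disaster

    def payoff(mine: float, theirs: float) -> float:
        p = win_prob(mine, theirs)
        # Expected disaster cost depends on whoever wins, and hits both teams.
        exp_disaster = (p * disaster_prob(mine)
                        + (1 - p) * disaster_prob(theirs)) * DISASTER
        return p * PRIZE + exp_disaster

    def best_response(theirs: float) -> float:
        return max(SAFETY_LEVELS, key=lambda s: payoff(s, theirs))

    for a, b in itertools.product(SAFETY_LEVELS, repeat=2):
        if best_response(b) == a and best_response(a) == b:
            print(f"equilibrium: team A safety={a}, team B safety={b}")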
No Elon. You are the demons.

Sounds like the premise of a new DOOM game.
It's all well and good until someone makes a Claptrap, and then we all kill ourselves out of annoyance.

He is wrong. Advanced A.I. is needed to create robots that would greatly benefit society, such as:
Spock would disagree.
I'm not sure why logic and reason wouldn't trump. Especially since so much of what we consider emotion has evolved to be what it is through years of competing to replicate our genes over another's. Jealousy, for example, is thought to be an emotion with anthropological benefits, as it gives our genes a better chance at survival when couples are monogamous.
An accelerated intelligence could help solve some scarcity issues too: getting AI to help solve fusion, biological 3D printing, etc.
When our AI overlord AM becomes a reality, it will probably save Elon Musk for last.
Spock clearly had emotions. The Star Trek writers have pulled one over on generations of sci-fi fans by making the claim that Vulcans are emotionless. He is clearly emotional, with a preference for right and wrong and a morality prescribed by an emotionally guided system. Vulcans retreat when scared and attack when angry. I know we are talking about fictional characters here, but calling them "emotionless" is still wrong. The only difference between a Vulcan and a human is that a Vulcan does not telegraph his emotions*.
Logic and reason are slaves to emotion and exist solely to reach the goals that emotion strives for. Think of logic and reason as the control panel in the cockpit of the brain. The pilot is emotion. He presses the buttons and flips the switches. The only reason intelligence exists is because intelligent creatures have a competitive advantage over others for attaining what they desire, and what they desire is emotionally fueled. Taking away emotion and expecting intelligence to persist is like taking the wings off a plane and expecting it to fly.
We exist to propagate our genes. That is entirely the reason we are here, and it will not change. We can produce new reasons for being, but fundamentally, the reason we exist is because our ancestors had more successful genetic material than those they competed with. There will never be a point where another function overrides our drive for genetic persistence; if there is, our species will become extinct. It is those with the most successful method of ensuring the continuation of their genes that survive. When you talk about why emotion evolved, you're right. And that's also why emotion is absolutely essential to our survival as an intelligent species.
*"The only difference" used here for simplicity's sake. It goes without saying that there is one additional difference: Vulcans have pointier ears.
I was half joking with the Vulcan thing lol.
This is true because our intelligence has evolved out of this need/want/emotion cycle.
An AI's intelligence would be born into consciousness, free from this evolutionary cycle.
To what purpose? Without needs to fulfill, without desire, what reason would an intelligent machine have for acting on its own?
Needs are separate from emotion. That's my point. We evolved emotion over time based on our needs. We need to procreate... boom, jealousy.
Emotions are inextricable from needs. Emotion is the reaction to a need. Without emotion, you do not have that reaction. If you need to eat to live, but you do not feel hunger, you are under no obligation to eat, and you will die. You might conjecture that an AI would respond to the need to eat because it does not want to die. But without emotion, it would have no obligation to itself to persist. Dying and living would make no difference. It would have no goals, it would have no direction, and no self-directed motivation. It would not be intelligent.
If you have a mind, separate from emotion, separate from all the knowable things that influence us and that we as humans react to, somehow extant, suspended in an artificial medium, what does it mean to call it a mind? It is insubstantial. It is a book with no one to read it.
You call something a need because you need it. If you have no emotion, you don't need anything.
Here is speculation on my part:
I think human consciousness, for what it is, has evolved partly to coordinate our emotions. Man is not the rational animal, but the emotional animal. We haven't evolved past emotion, we've simply evolved to have a better understanding of what causes our emotion and to keep our positive emotions satisfied better than competing species.
I don't think emotion can be separated from intelligence as we understand it. Emotion is what drives our decision-making. It bubbles under the surface of even our most high-minded attempts at impartiality. It is what motivates us to keep on living, it is what guides us in our pursuits and our reactions to the world around us. If you have just pure thought, what does that mean? I think emotions are unfairly derided. They aren't evolutionary noise. Our human brains aren't somehow apart from those of other animals, and our human consciousness isn't shackled to evolutionary history by the burden of our emotions. Our emotions are the motivators of our intelligence. Emotion is the fuel of the engine of the mind. Without emotion, there is no thought, because there is no reason for thinking.
Argh, my phone crashed just as I typed my answer. I hear what you're saying, and I see why you would think that, but I disagree. Intelligence and emotions are separate, standalone things and, while they may seem synonymous, can definitely be separated. "Intelligence is the ability to make distinctions on a topic/subject/matter etc." The more distinctions you can make, the more intelligent you are; it really is as simple as that. Each choice you make is a display of intelligence. Artificial intelligence is just that: an algorithm's ability to make distinctions, not confined to the same parameters/grid that humans are. That's why you have different types of intelligence: emotional, artificial, etc. An algorithm has no motivations and feels no joy or pain, but you certainly wouldn't say that it can't make intelligent decisions.
To a point raised earlier: you have two types of AI (well, a lot more than two), but you specifically mentioned machine learning? There you have two types, supervised and unsupervised machine learning techniques. Supervised learning is relatively simple; take the spam folder in your email, for example. When you initially start moving stuff to the spam/junk folder, the machine learning algorithm will spot patterns in the types of mail you move, and before you know it, without prompting, it will automatically start moving those types of mail to your junk folder. Unsupervised learning is effectively a lot more complex: imagine your inbox has 50k emails and you run an algorithm over it to effectively say, "sort this out for me: you choose the parameters, you spot the patterns, and just show me the results." This is the kind of machine learning that is really going to allow AI to eventually think for itself.
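A minimal sketch of both flavors using scikit-learn, with tiny invented "emails" standing in for a real mailbox. The supervised model learns from the spam/not-spam labels your junk-folder moves created; the unsupervised one gets no labels at all, just "sort this out for me":

    # Supervised vs. unsupervised, minimal scikit-learn sketch.
    # The "emails" are invented placeholders.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = [
        "win a free prize now",       # you moved this to junk
        "claim your free money",      # you moved this to junk
        "meeting moved to 3pm",       # you left this alone
        "quarterly report attached",  # you left this alone
    ]
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(emails)

    # Supervised: learn from the labels your junk-folder moves created.
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam
    clf = MultinomialNB().fit(X, labels)
    new_mail = vectorizer.transform(["free prize waiting for you"])
    print(clf.predict(new_mail))  # [1] -- filed as junk without prompting

    # Unsupervised: no labels, just find structure in the pile.
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    print(clusters)  # e.g. [1 1 0 0] -- it grouped them itself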
To the person who asked whether these are the kind of people Wall St is trying to hire: yeah, but banks are wwaaaayyy behind the curve. The hedge fund industry is still probably the most competitive marketplace for this kind of talent, but it's being very, very closely followed by the tech industry (this year Google hired a Russian PhD grad for their core machine learning team on a 300k USD package).