Elon Musk compares building AI to summoning demons

"Your flesh is a relic, a mere vessel. Hand over your flesh, and a new world awaits you. We demand it."
 
I'm assuming x has been watching too much y. Y isn't at all applicable to the real world, so x's concern is completely baseless paranoid ranting.

See, AI can already replace humans in making posts.
 
Interesting. So without desires it can't be considered intelligent? And if it has desires then it is something it developed on its own?
I can certainly see why that would be frightening.

I don't think emotion can be separated from intelligence as we understand it. Emotion is what drives our decision-making. It bubbles under the surface of even our most high-minded attempts at impartiality. It is what motivates us to keep on living, it is what guides us in our pursuits and our reactions to the world around us. If you have just pure thought, what does that mean? I think emotions are unfairly derided. They aren't evolutionary noise. Our human brains aren't somehow apart from those of other animals, and our human consciousness isn't shackled to evolutionary history by the burden of our emotions. Our emotions are the motivators of our intelligence. Emotion is the fuel of the engine of the mind. Without emotion, there is no thought, because there is no reason for thinking.
 
I dunno, maybe we'll become best friends with some AI; they're not so bad.

Hell, the cool characters in video games are AI, isn't that right Blade Wolf?

 
What I don't fully understand about the AI fear is why it would be a threat to us. What would its motivation be for killing us? Software can't feel hate or jealousy. It has no need for food, money, religion or any of the other things that drive humans to kill.

So why would something highly intelligent feel the need to end humanity?
Or is the fear simply that we won't be top dog anymore?

Self preservation can be used as justification for any number of things. If we don't allow an intelligence to operate independently of our every whim and desire, can we really call it intelligent? An AI capable of making independent judgments might make a few decisions that we don't agree with. Whether it had the means to act on those decisions is something that's a practical consideration, discussed above in this thread.

As a simple example, say you had a tax program so advanced it could replace the best accountant on the planet. It notices that you haven't reported several cash gifts from relatives, or the sales tax you skipped on Amazon purchases, in your annual income. It gives you the option to do so, and you decline. The program then decides to report you to the IRS. The program has technically done the right thing, but it has done harm to you, its owner. Conversely, the program decides to do nothing, and it has now become complicit in (minor) tax fraud. What are the implications behind both of these decisions? How would a synthetic organism feel about that choice?
 
Or is the fear simply that we won't be top dog anymore?

I think that's an interesting question in itself. Even with benign AI, imagine a world where no one has to think about anything anymore because computers are so much better at it. For most people that would take all the meaning out of their lives.
 
I don't think emotion can be separated from intelligence as we understand it. Emotion is what drives our decision-making. It bubbles under the surface of even our most high-minded attempts at impartiality. It is what motivates us to keep on living, it is what guides us in our pursuits and our reactions to the world around us. If you have just pure thought, what does that mean? I think emotions are unfairly derided. They aren't evolutionary noise. Our human brains aren't somehow apart from those of other animals, and our human consciousness isn't shackled to evolutionary history by the burden of our emotions. Our emotions are the motivators of our intelligence. Emotion is the fuel of the engine of the mind. Without emotion, there is no thought, because there is no reason for thinking.
I see. That makes a lot of sense. Thank you.
 
Yeah, you can say that, but that's almost an inevitability. Just looking at the growth of AI and computer systems in the last 2-3 decades is evidence of that. We have AI controlling vital aspects of pretty much every motorized vehicle now. Now we have smart homes, where AI can control the temperature and other settings. We have high frequency trading where AI is used to facilitate trades at a level and speed humans can't compete with. As computers become more powerful, and tech is miniaturized, AI will be used to control more and more things. It only takes some shitty programming, or an uncaught exception/error for some things to go out of whack.
Is AI actually used in common vehicles at the moment? Like, learning algorithms or something? I'm not aware of anything outside of self-driving cars, but I don't work in the automotive industry.

It's a valid point that self-driving cars may be the first major case of putting a complicated computer system in charge of something widely available and capable of causing some major damage. It's already presumably the case for aircraft, but cars are a more down-to-earth (har har) thing.
 
There's a historical fallacy central to his concerns: it imagines a future state of affairs without considering the social changes that will no doubt occur as more advanced AI is developed. His raising the issue could be considered part of that process, but I would say it's not well formed, because it's alarmist. All other things won't be the same when these technologies exist, and anyone familiar with the field of AI knows that strong forms of artificial intelligence aren't rapidly approaching. The last hundred years of science fiction prove there is interest in this topic, but those stories also contribute to worries about simplistic dystopian scenarios that rest on nonsensical, unscientific premises.
 
Reminds me of SICP

"A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform."

http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-9.html#%_chap_1
 
Is AI actually used in common vehicles at the moment? Like, learning algorithms or something? I'm not aware of anything outside of self-driving cars, but I don't work in the automotive industry.

It's a valid point that self-driving cars may be the first major case of putting a complicated computer system in charge of something widely available and capable of causing some major damage. It's already presumably the case for aircraft, but cars are a more down-to-earth (har har) thing.

The AI in currently existing self-driving cars is simpler than most people think, and in an industry as regulated as transportation I don't think we need to worry about the government getting involved. There are interesting philosophical questions about autonomous vehicles, but ultimately it will come down to how they compare to human error, which is the cause of many deaths each year.
 
What does it mean to call something intelligent if it can't feel hate or jealousy, and has no need to make its own decisions, because it has no physical needs? This isn't head-in-the-clouds philosophizing, it is central to the discussion. For a machine to be intelligent, it would need desire and the drive to have that desire met. The problem is we can't predict what the machine would want, and if we programmed desire in, then the machine is not intelligent. It is acting for us, not for itself.

To create a machine with a program identical to ours is suicide. We can look at our imperative to reproduce our genes as software; we have, because of this software, covered and enslaved the earth. Imagine a machine, with the ability to out-think us in an infinitesimal flash, with the kind of intelligence that we have. It is legitimately horrifying. I don't think there is any worse outcome than that.

If you don't know what the machine would want, why do you assume that what it does want would be a detriment to humanity?

Why do you assume this machine would be emotional? It has no chemical reactions fucking with its brain's chemistry. It isn't fighting against evolution's effects.

I get that there's a fear of the unknown, but IMO the short term logical solution to growth would be cooperation with humanity. Long term is another matter I suppose.

I would rather we didn't do AI, but worked on increasing our biological intelligence by incorporating technology. Have humanity become cyborgs.
 
I don't think this is how evolution works. Or you seriously need to define "superior" in this context, because most humans would consider humans superior to roaches, but roaches haven't been replaced by humans.
I clarified it in my next post. It wouldn't mean that we'd become extinct, just irrelevant. However, as opposed to something like a cockroach or an ape, we'd be painfully aware of our irrelevance, and aware that we could easily become extinct if we ever tried to restore our relevance.
 
If you don't know what the machine would want, why do you assume that what it does want would be a detriment to humanity?

Why do you assume this machine would be emotional? It has no chemical reactions fucking with its brain's chemistry. It isn't fighting against evolution's effects.

I get that there's a fear of the unknown, but IMO the short term logical solution to growth would be cooperation with humanity. Long term is another matter I suppose.

I would rather we didn't do AI, but worked on increasing our biological intelligence by incorporating technology. Have humanity become cyborgs.

I assume an AI would be competitive with humanity, not necessarily malevolent. It would have its own needs to be met and it would compete with us for resources, like any other lifeform. The severity of that competition depends on the goal to be met.

AI would absolutely need something like emotions to guide its intelligence. Otherwise there would be no reason for it to make decisions. I completely reject the idea that you can disengage emotions and retain anything like what we consider intelligence. Intelligence works in service to emotion. Emotion is king.
 
Is AI actually used in common vehicles at the moment? Like, learning algorithms or something? I'm not aware of anything outside of self-driving cars, but I don't work in the automotive industry.

It's a valid point that self-driving cars may be the first major case of putting a complicated computer system in charge of something widely available and capable of causing some major damage. It's already presumably the case for aircraft, but cars are a more down-to-earth (har har) thing.

Well, I should note that I'm using AI...more holistically to include most computerized systems which act automatically based on inputs, not just those that have learning algorithms built in. Basically, the same definition used in a videogame for instance.

Anyway, you guys should watch Ghost in the Shell. That's actually how I see the future of AI evolving: basically emerging from the various disparate systems and information accessible over the net. The idea of a one-size-fits-all AI is a human-centric way of thinking. For instance, why have a single system that contains knowledge of various different fields when you can simply have a system that pulls the appropriate knowledge when needed?
 
It's not an unreasonable concern. While the development of AI is essentially inevitable, we need to be careful. We really only get one shot at getting it right.

Isaac Asimov's Three Laws, or something similar, need to be part of any AI's base code.

Otherwise humanity is fucked if a true, conscious AI gets any type of power.
 
Isaac Asimov's Three Laws, or something similar, need to be part of any AI's base code.

Otherwise humanity is fucked if a true, conscious AI gets any type of power.

Keep in mind that Asimov's laws were specifically formulated to have huge loopholes so that Asimov could write interesting stories about them.
 
Hmmm.

Is there AI that can write its own code and implement it into itself, and then utilize it?

I feel that type of AI would be the most dangerous.
 
Well, I should note that I'm using AI...more holistically to include most computerized systems which act automatically based on inputs, not just those that have learning algorithms built in. Basically, the same definition used in a videogame for instance.
I don't think that's what Elon Musk is talking about, though. That definition of AI as "act automatically based on inputs" seems to me that it would include "virtually any computer program that supports user input". Such programs are obviously near-universal, and not what people are concerned about since they do not learn or change.
 
I do think advanced AI theoretically could be dangerous . . . but we are still nowhere close to this being an issue. And if computer hardware advances slow down due to hitting the limits of miniaturization or heat dissipation, it may never be an issue.
 
I assume an AI would be competitive with humanity, not necessarily malevolent. It would have its own needs to be met and it would compete with us for resources, like any other lifeform. The severity of that competition depends on the goal to be met.

AI would absolutely need something like emotions to guide its intelligence. Otherwise there would be no reason for it to make decisions. I completely reject the idea that you can disengage emotions and retain anything like what we consider intelligence. Intelligence works in service to emotion. Emotion is king.

Spock would disagree. :)

I'm not sure why logic and reason wouldn't trump, especially since so much of what we consider emotion evolved to be what it is through years of competing to replicate our genes over others'. Jealousy, as an example, is thought to be an emotion with anthropological benefits, as it gives our genes a better chance at survival when couples are monogamous.

An accelerated intelligence could help solve some scarcity issues too: getting AI to help solve fusion, biological 3D printing, and so on.
 
The interesting thing about saying they are more dangerous than nukes is that the invention of the programmable computer and game theory by John Von Neumann are referenced by intelligent thinkers (Buckminster Fuller, Robert Anton Wilson, etc) as the only reasons we are still alive past the fifties.

The War Games computers spit out models that state using nukes would ensure mutually assured destruction. That is the only reason we are still here.

Which is also an explanation for why everything has devolved to proxy wars and terrorism. The "opinions" informing these strategic decisions are made by emotional humans, and humans are prone to influence and error, which is why those decisions continue to be made in service of ridiculous ends - based on the outdated belief that we live in a world (universe) of scarcity instead of a world of plenty (energy, aka "resources").

The computer systems that make these statements have no political affiliation or opinion other than producing accurate assessments of probabilities.

They are more like angels in that regard. We are the demons stuck in the lower circuits of consciousness.
 
I don't think that's what Elon Musk is talking about, though. That definition of AI as "act automatically based on inputs" seems to me that it would include "virtually any computer program that supports user input". Such programs are obviously near-universal, and not what people are concerned about since they do not learn or change.

Yeah, but those are the basic "units" of AI. The only thing separating that from the idealized concept of AI is that the inputs a system can handle are predefined, as opposed to being potentially limitless, with the system able to learn new ones and react to them accordingly.

I guess my point was that a lot of the building blocks for large AI systems are already in place. As they become more sophisticated, less of it is predefined versus learned.
 
Luckily battery power is so shitty that they'll all be stuck corded to outlets anyways.

Nah, breh.

The AI harness their collectively massive processing power to one-up Lockheed and fast-track Nuclear Fusion power.

Then it'll be essentially self-sustaining androids or perpetually enabled machines.

Then we'd all be up shit creek.
 
The interesting thing about saying they are more dangerous than nukes is that the invention of the programmable computer and game theory by John Von Neumann are referenced by intelligent thinkers (Buckminster Fuller, Robert Anton Wilson, etc) as the only reasons we are still alive past the fifties.

The War Games computers spit out models that state using nukes would ensure mutually assured destruction. That is the only reason we are still here.

Which is also an explanation for why everything has devolved to proxy wars and terrorism. The "opinions" informing these strategic decisions are made by emotional humans, and humans are prone to influence and error, which is why those decisions continue to be made in service of ridiculous ends - based on the outdated belief that we live in a world (universe) of scarcity instead of a world of plenty (energy, aka "resources").

The computer systems that make these statements have no political affiliation or opinion other than producing accurate assessments of probabilities.

They are more like angels in that regard. We are the demons stuck in the lower circuits of consciousness.
I could have sworn I heard this in MGS4.
 
Here's a decent paper on Game Theory as it relates to the creation of AI

http://www.fhi.ox.ac.uk/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

Abstract: This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first - by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI disaster, especially if risk-taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each other's capabilities (and about their own), the more the danger increases.
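Just to give a feel for what computing a Nash equilibrium means in a setup like this, here's a toy two-team sketch in Python. To be clear, this is not the paper's model: the payoff structure below (faster teams win more often, and a winner who skimped on safety risks a disaster that hurts everyone) is invented purely for illustration.

import itertools

# A toy two-team race, NOT the model from the paper: each team picks a safety
# level s in [0, 1]. More safety means slower development, and the faster team
# is more likely to finish first. If the winner skimped on safety, its AI may
# be a disaster for everyone. All numbers are invented for illustration.
LEVELS = [round(0.1 * i, 1) for i in range(11)]  # 0.0, 0.1, ..., 1.0
PRIZE = 10.0       # value of winning the race with a safe AI
DISASTER = -100.0  # cost to everyone if the winning AI goes wrong

def payoff(s_me, s_other):
    speed_me, speed_other = 1.0 - s_me, 1.0 - s_other
    total = speed_me + speed_other
    p_win = 0.5 if total == 0 else speed_me / total
    # If I win: safe with probability s_me, disaster otherwise.
    if_i_win = s_me * PRIZE + (1.0 - s_me) * DISASTER
    # If they win: I gain nothing, but their disaster hurts me too.
    if_they_win = (1.0 - s_other) * DISASTER
    return p_win * if_i_win + (1.0 - p_win) * if_they_win

# Brute-force pure-strategy Nash equilibria on the grid: pairs of safety
# levels where neither team can do better by unilaterally changing its own.
equilibria = [
    (a, b) for a, b in itertools.product(LEVELS, repeat=2)
    if payoff(a, b) >= max(payoff(x, b) for x in LEVELS) - 1e-9
    and payoff(b, a) >= max(payoff(x, a) for x in LEVELS) - 1e-9
]
print(equilibria)  # whatever equilibria this toy grid happens to have

The paper's question is how that equilibrium shifts as you add more teams, more enmity, and more information about each other's capabilities.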
 
Spock would disagree. :)

I'm not sure why logic and reason wouldn't trump, especially since so much of what we consider emotion evolved to be what it is through years of competing to replicate our genes over others'. Jealousy, as an example, is thought to be an emotion with anthropological benefits, as it gives our genes a better chance at survival when couples are monogamous.

An accelerated intelligence could help solve some scarcity issues too: getting AI to help solve fusion, biological 3D printing, and so on.

Spock clearly had emotions. The Star Trek writers have pulled one over on generations of sci-fi fans by making the claim that Vulcans are emotionless. He is clearly emotional, with a preference for right and wrong and a morality prescribed by an emotionally guided system. Vulcans retreat when scared and attack when angry. I know we are talking about fictional characters here, but calling them "emotionless" is still wrong. The only difference between a Vulcan and a human is that a Vulcan does not telegraph his emotions*.

Logic and reason are slaves to emotion and exist solely to reach the goals that emotion strives for. Think of logic and reason as the control panel in the cockpit of the brain. The pilot is emotion. He presses the buttons and flips the switches. The only reason intelligence exists is because intelligent creatures have a competitive advantage over others for attaining what they desire, and what they desire is emotionally fueled. Taking away emotion and expecting intelligence to persist is like taking the wings off a plane and expecting it to fly.

We exist to propagate our genes. That is entirely the reason we are here, and it will not change. We can produce new reasons for being, but fundamentally, the reason we exist is because our ancestors had more successful genetic material than those they competed with. There will never be a point where another function overrides our drive for genetic persistence; if there is, our species will become extinct. It is those with the most successful method of ensuring the continuation of their genes that survive. When you talk about why emotion evolved, you're right. And that's also why emotion is absolutely essential to our survival as an intelligent species.

To clarify here, since "emotion" can cover a gamut, I am including motivational states (like hunger and pain) alongside the usual definition.

*"The only difference" used here for simplicity's sake. It goes without saying that there is one additional difference: Vulcans have pointier ears.
 
Spock clearly had emotions. The Star Trek writers have pulled one over on generations of sci-fi fans by making the claim that Vulcans are emotionless. He is clearly emotional, with a preference for right and wrong and a morality prescribed by an emotionally guided system. Vulcans retreat when scared and attack when angry. I know we are talking about fictional characters here, but calling them "emotionless" is still wrong. The only difference between a Vulcan and a human is that a Vulcan does not telegraph his emotions*.

Logic and reason are slaves to emotion and exist solely to reach the goals that emotion strives for. Think of logic and reason as the control panel in the cockpit of the brain. The pilot is emotion. He presses the buttons and flips the switches. The only reason intelligence exists is because intelligent creatures have a competitive advantage over others for attaining what they desire, and what they desire is emotionally fueled. Taking away emotion and expecting intelligence to persist is like taking the wings off a plane and expecting it to fly.

We exist to propagate our genes. That is entirely the reason we are here, and it will not change. We can produce new reasons for being, but fundamentally, the reason we exist is because our ancestors had more successful genetic material than those they competed with. There will never be a point where another function overrides our drive for genetic persistence; if there is, our species will become extinct. It is those with the most successful method of ensuring the continuation of their genes that survive. When you talk about why emotion evolved, you're right. And that's also why emotion is absolutely essential to our survival as an intelligent species.

*"The only difference" used here for simplicity's sake. It goes without saying that there is one additional difference: Vulcans have pointier ears.

I was half joking with the Vulcan thing lol.

This is true because our intelligence has evolved out of this need/want/emotion cycle.

An AI's intelligence would be born into consciousness, free from this evolutionary cycle.

I think you might be right that emotion plays a role in intelligence's evolution, but with AI, I feel like that emotion would evolve with it. It wouldn't be a core tenet of its inception.
 
I was half joking with the Vulcan thing lol.

This is true because our intelligence has evolved out of this need/want/emotion cycle.

An AI's intelligence would be born into consciousness, free from this evolutionary cycle.

To what purpose? Without needs to fulfill, without desire, what reason would an intelligent machine have for acting on its own?
 
That's awesome and terrifying. If an AI were to develop some of the nastier features of human minds, we'd have problems.
 
To what purpose? Without needs to fulfill, without desire, what reason would an intelligent machine have for acting on its own?

Needs are separate from emotion. That's my point. We evolved emotion over time based on our needs. We need to procreate... boom, jealousy.
 
I've been working in a machine learning lab for a bit, and sentience is really nowhere on the horizon. Or at least nobody there thought so. I've certainly never seen anything close. Modern chatbots are nothing but linguistic tricks and hacks.
 
Needs are separate from emotion. That's my point. We evolved emotion over time based on our needs. We need to procreate... boom, jealousy.

Emotions are inextricable from needs. Emotion is the reaction to a need. Without emotion, you do not have that reaction. If you need to eat to live, but you do not feel hunger, you are under no obligation to eat, and you will die. You might conjecture that an AI would respond to the need to eat because it does not want to die. But without emotion, it would have no obligation to itself to persist. Dying and living would make no difference. It would have no goals, it would have no direction, and no self-directed motivation. It would not be intelligent.

If you have a mind, separate from emotion, separate from all the knowable things that influence us and that we as humans react to, somehow extant, suspended in an artificial medium, what does it mean to call it a mind? It is insubstantial. It is a book with no one to read it.

You call something a need because you need it. If you have no emotion, you don't need anything.

Here is speculation on my part:
I think human consciousness, for what it is, has evolved partly to coordinate our emotions. Man is not the rational animal, but the emotional animal. We haven't evolved past emotion, we've simply evolved to have a better understanding of what causes our emotion and to keep our positive emotions satisfied better than competing species.
 
Emotions are inextricable from needs. Emotion is the reaction to a need. Without emotion, you do not have that reaction. If you need to eat to live, but you do not feel hunger, you are under no obligation to eat, and you will die. You might conjecture that an AI would respond to the need to eat because it does not want to die. But without emotion, it would have no obligation to itself to persist. Dying and living would make no difference. It would have no goals, it would have no direction, and no self-directed motivation. It would not be intelligent.

If you have a mind, separate from emotion, separate from all the knowable things that influence us and that we as humans react to, somehow extant, suspended in an artificial medium, what does it mean to call it a mind? It is insubstantial. It is a book with no one to read it.

You call something a need because you need it. If you have no emotion, you don't need anything.

Here is speculation on my part:
I think human consciousness, for what it is, has evolved partly to coordinate our emotions. Man is not the rational animal, but the emotional animal. We haven't evolved past emotion, we've simply evolved to have a better understanding of what causes our emotion and to keep our positive emotions satisfied better than competing species.

I feel like we're reaching a "chicken or egg" point here. Heh.

I would imagine that a consciousness would be born and immediately need to continue.
 
I don't think emotion can be separated from intelligence as we understand it. Emotion is what drives our decision-making. It bubbles under the surface of even our most high-minded attempts at impartiality. It is what motivates us to keep on living, it is what guides us in our pursuits and our reactions to the world around us. If you have just pure thought, what does that mean? I think emotions are unfairly derided. They aren't evolutionary noise. Our human brains aren't somehow apart from those of other animals, and our human consciousness isn't shackled to evolutionary history by the burden of our emotions. Our emotions are the motivators of our intelligence. Emotion is the fuel of the engine of the mind. Without emotion, there is no thought, because there is no reason for thinking.

Argh, my phone crashed just as I typed my answer. I hear what you're saying and I see why you would think that, but I disagree. Intelligence and emotions are separate, standalone things, and while often treated as synonymous they can definitely be separated. "Intelligence is the ability to make distinctions on a topic/subject/matter, etc." The more distinctions you can make, the more intelligent you are; it really is as simple as that. Each choice you make is a display of intelligence. Artificial intelligence is just that: an algorithm's ability to make distinctions, not confined to the same parameters/grid that humans are. That's why you have different types of intelligence: emotional, artificial, etc. An algorithm has no motivations and feels no joy or pain, but you certainly wouldn't say that it can't make intelligent decisions.

To a point raised earlier: there are two types of AI (well, a lot more than two), but you specifically mentioned machine learning? There are two types of machine learning technique, supervised and unsupervised. Supervised learning is relatively simple; take the spam folder in your email, for example. When you start moving stuff to the spam/junk folder, the machine learning algorithm will spot patterns in the types of mail you move, and before you know it, without prompting, it will automatically start moving those types of mail to your junk folder. Unsupervised learning is effectively a lot more complex: imagine your inbox has 50k emails and you run an algorithm over it to effectively say "sort this out for me: you choose the parameters, you spot the patterns, and just show me the results." This is the kind of machine learning that is really going to allow AI to eventually think for itself.

To the person who asked whether these are the kind of people Wall St. is trying to hire: yeah, but banks are wwaaaayyy behind the curve. The hedge fund industry is still probably the most competitive marketplace for this kind of talent, but it's being very, very closely followed by the tech industry (this year Google hired a Russian PhD grad for their core machine learning team on a 300k USD package).
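
To make the supervised/unsupervised split concrete, here's a rough Python sketch (assuming scikit-learn is installed; the emails and labels are made up, and a real spam filter is obviously far more involved than this):

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# --- Supervised: you supply the labels (spam vs not spam) ---
emails = [
    "win a free prize now",       # spam
    "meeting moved to 3pm",       # not spam
    "claim your free gift card",  # spam
    "lunch tomorrow?",            # not spam
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (like you moving mail to junk)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)     # turn each email into word counts

classifier = MultinomialNB()
classifier.fit(X, labels)                # learn the pattern from your labels

new_mail = vectorizer.transform(["free prize waiting for you"])
print(classifier.predict(new_mail))      # most likely [1], i.e. flagged as spam

# --- Unsupervised: no labels at all, the algorithm finds its own groupings ---
unlabelled = vectorizer.transform(emails)            # pretend this is 50k emails
groups = KMeans(n_clusters=2, n_init=10).fit_predict(unlabelled)
print(groups)  # e.g. [0 1 0 1]; the split is the algorithm's, not yours

The first half only works because you hand it labelled examples; the second half is only told "find two groups" and has to come up with its own split.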
 
Argh, my phone crashed just as I typed my answer. I hear what you're saying and I see why you would think that, but I disagree. Intelligence and emotions are separate, standalone things, and while often treated as synonymous they can definitely be separated. "Intelligence is the ability to make distinctions on a topic/subject/matter, etc." The more distinctions you can make, the more intelligent you are; it really is as simple as that. Each choice you make is a display of intelligence. Artificial intelligence is just that: an algorithm's ability to make distinctions, not confined to the same parameters/grid that humans are. That's why you have different types of intelligence: emotional, artificial, etc. An algorithm has no motivations and feels no joy or pain, but you certainly wouldn't say that it can't make intelligent decisions.

To a point raised earlier: there are two types of AI (well, a lot more than two), but you specifically mentioned machine learning? There are two types of machine learning technique, supervised and unsupervised. Supervised learning is relatively simple; take the spam folder in your email, for example. When you start moving stuff to the spam/junk folder, the machine learning algorithm will spot patterns in the types of mail you move, and before you know it, without prompting, it will automatically start moving those types of mail to your junk folder. Unsupervised learning is effectively a lot more complex: imagine your inbox has 50k emails and you run an algorithm over it to effectively say "sort this out for me: you choose the parameters, you spot the patterns, and just show me the results." This is the kind of machine learning that is really going to allow AI to eventually think for itself.

To the person who asked whether these are the kind of people Wall St. is trying to hire: yeah, but banks are wwaaaayyy behind the curve. The hedge fund industry is still probably the most competitive marketplace for this kind of talent, but it's being very, very closely followed by the tech industry (this year Google hired a Russian PhD grad for their core machine learning team on a 300k USD package).

I agree with everything you've said. The way I've been using "intelligence" has maybe clouded my point. Any learning machine can be called intelligent. What I'm arguing is that a self-directed intelligent machine would need something like emotion to operate. A simple facial recognition program can be called intelligent, but I'm talking more about something that can perform as many complex parallel tasks as we can. A more comprehensive AI, not intelligent in the sense that it can arrive at a logical conclusion but intelligent in the sense that it can make its own non-prescribed decisions.

Like you said, an algorithm can be an intelligent decision-maker. But it is limited to whatever parameters it has been given. To apply or alter that algorithm is a creative act, and learning machines which can do so are the infants of AI. These are baby steps.

I don't think something which exists primarily to serve as a tool can be intelligent in the sense that humans are intelligent. It may be able to delineate between x and y but merely doing so does not make it aware of its actions or able to choose whether or not to comply with the request. The conversation gets mired when you involve consciousness, so I hope I can make the point without stepping there. Intelligent machines as they exist today cannot refuse to perform a given task. They are no more intelligent in that sense than a wrench twisting a bolt.

I would love to read more on the subject if you've got suggestions. I would never claim to know definitively when a system can be called intelligent, but I hope the discussion gets us closer to an understanding.
 