Elon Musk compares building AI to summoning demons

I agree with everything you've said. The way I've been using "intelligence" has maybe clouded my point. Any learning machine can be called intelligent. What I'm arguing is that a self-directed intelligent machine would need something like emotion to operate. A simple facial recognition program can be called intelligent, but I'm talking more about something that can perform as many complex parallel tasks as we can. A more comprehensive AI, not intelligent in the sense that it can arrive at a logical conclusion but intelligent in the sense that it can make its own non-prescribed decisions.

Like you said, an algorithm can be an intelligent decision-maker. But it is limited to whatever parameters it has been given. To apply or alter that algorithm is a creative act, and learning machines which can do so are the infants of AI. These are baby steps.

I don't think something which exists primarily to serve as a tool can be intelligent in the sense that humans are intelligent. It may be able to differentiate between x and y, but merely doing so does not make it aware of its actions or able to choose whether or not to comply with the request. The conversation gets mired when you involve consciousness, so I hope I can make the point without stepping there. Intelligent machines as they exist today cannot refuse to perform a given task. They are no more intelligent in that sense than a wrench twisting a bolt.

I would love to read more on the subject if you've got suggestions. I would never claim to know definitively when a system can be called intelligent, but I hope the discussion gets us closer to an understanding.

Love a good discussion, it's why I joined gaf after all these years lol

Ahh ok, I understand your thinking: that to operate on its own, with free will if you like, even an AI would need some kind of emotional underpinning to its reasoning/logic? That's true. But we are set with biological parameters from birth that very much influence our decisions (eat, sleep, air, etc). It's the same with machines: they will always be "born" with a set of parameters that will influence their decisions. The difference is the emotional aspect you refer to, and I think when people start talking in terms of good, evil, etc, we lose sight of what AI is. AI is never going to do something good or bad; it will just do. So yeah, I guess I do agree with parts of what you're saying.

To answer your point specifically about non-prescribed decisions, I think you're looking at AI wrong. Even the most advanced AI we will ever create will always technically be making prescribed decisions, but when you have 100 million possibilities/outcomes from one scenario, it would feel a lot less predefined and a lot more like free will. But no, AI will never be able to make its own non-prescribed decisions. Think of AI as a point with nodes that connect to another point with more nodes, each node representing a distinction/option that leads to a different outcome. Because AI as it stands is somewhat limited in the number of options it can choose from, its choices seem predictable and still very artificial. Games are a very good example of this limited node structure (e.g. you ask an NPC a question and you may get three or four different answers, but you are limited to just that).
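To make that node picture concrete, here's a minimal sketch in Python (the dialogue table and all names are made up for illustration): every "choice" the system can make is drawn from a predefined branch table, exactly like the NPC example above.

```python
# Toy dialogue tree: each node offers a fixed, prescribed set of options.
# No matter what the player does, nothing outside these branches can ever
# be selected; the "decision" is always prescribed in advance.
DIALOGUE = {
    "greeting": ["Ask about the quest", "Ask about the town", "Leave"],
    "quest": ["Accept", "Decline"],
    "town": ["Ask about the inn", "Leave"],
}

def npc_options(node: str) -> list[str]:
    """Return the only answers the NPC can ever give at this node."""
    return DIALOGUE.get(node, [])

print(npc_options("greeting"))
# ['Ask about the quest', 'Ask about the town', 'Leave']
```

Scale that table up to 100 million branches and the output starts to feel like free will, but structurally it's still the same prescribed grid.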

Here is an example of Netflix and their recommendation engine in graph form: all those lines represent different decisions which connect to another node (database), where the algorithm can then make another choice and arrive at another node, and so on.

[Image: netflix_topology.png]


So to go back to the point: AI that seems more realistic and lifelike, seemingly having free will, is not quite there yet because we simply don't have the infrastructure to support that level of AI. The choices are limited for two reasons. First, the amount of data it would need to make a choice cannot be handled with the way our infrastructure is set up (I mean, where would you store all of those nodes/databases to represent, or give the illusion of, what would seem an almost infinite number of choices?). Second, processing power: running even a simple k-means clustering algorithm on what would be hundreds and hundreds of zettabytes of data would probably take an incredibly long time, and we are just not connected enough as a world. So yeah, processing power and data are what have been holding us back from making any real strides with AI until recently.
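For anyone curious what k-means actually looks like at toy scale, here's a minimal sketch using scikit-learn (the data is random and the sizes are made-up placeholders, nothing like a real workload). The mini-batch variant exists precisely because full k-means over huge datasets is impractical: it fits on chunks of rows instead of the whole matrix at once.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Pretend these are 100k users described by 16 behavioural features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 16))

# Mini-batch k-means: processes the data in chunks of `batch_size` rows,
# the standard workaround when the dataset won't fit in a single pass.
km = MiniBatchKMeans(n_clusters=8, batch_size=4096, n_init=3, random_state=0)
km.fit(X)

print(km.cluster_centers_.shape)  # (8, 16): one centroid per cluster
```

Every full pass still touches every row, which is why the same algorithm at zettabyte scale runs straight into the storage and processing walls described above.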

I can't remember the stat exactly, but something like 60% of electronic data has been produced in the past 5 years. As we become more and more connected with phones, iPads, and laptops, we are producing an incredible amount of data (a lot of it through social media), and we are currently entering phase 2: SMART TVs, SMART homes, SMART cars. Essentially we are moving into the age of the Internet of Things, where even everyday appliances/objects will become, or have the ability to be, connected to the internet, with each device representing its own little node. The amount of data we are going to produce in the next five years is shocking, and where there is data, know that there will be algorithms tracking your behaviour and learning about your habits: do you wash at 60 or 40, what route do you take to work, how much heat do you use in your house, etc. Everything we do is constantly being tracked, clustered, analysed, and mined.

Another interesting example is a company I don't want to mention by name, but they are one of the companies behind the biggest rival to Siri. Did you know that when you ask your phone something and it doesn't have an answer, the question is stored? If enough people ask that question, it is flagged by the algorithm to a production team, who then develop a response to it (not that clever, but it just shows you that everything is recorded).

But where things get interesting is when (apparently in 10 years or so) we move into phase 3 of the cyber age (you can read more about it here: https://www.cisco.com/web/about/ac79/docs/innov/IoE.pdf), where EVERYTHING is connected to the internet (including us humans). This is when I think you will get the level of intelligence you speak of: the outcomes of any given scenario would literally number in the billions, and nothing would seem predetermined, although technically it would still be subscribing to a grid setup.


In terms of the processing power it would take to run algorithms on this scale, it is why GPU compute is being made such a big deal of. The amount of processing power you can achieve with GPU compute is what allows algorithms on this scale to work. Again I'll use Netflix as an example here, with what they are doing on an incredibly modest setup (four GTX 680s):
http://www.enterprisetech.com/2014/02/11/netflix-speeds-machine-learning-amazon-gpus/

As you can see, I think they have many problems similar to what I'm sure game developers are experiencing (speed not being the issue so much as copying/syncing data and latency in communication), but they're slowly learning how to manage that so that the speed of GPU compute offsets the balance. It's also why I laugh when people tell me that the PS4 is underpowered and GPU compute won't make that much of a difference (sorry for that little fanboy shot, couldn't help it lol).

Sorry for a very long-winded post, and I may have totally detracted from the point, but I hope that gives you a little more insight.

In my opinion, if you want to really see how advanced AI has become, look at what a firm called Palantir is doing; you may have to do some digging around.

Edit: I just want to add, a lot of this is theoretical and there are sooo many issues we will have to deal with before we ever get to that state of AI. So before someone comes in spouting "it's not possible because x, y, z": what I posted is my opinion, based on facts.
 
There are some profound questions we are not ready to yet face.

Fear ultimately drives us, because we assume that every unknown element will react to its environment the same way WE do: violently, ruthlessly, needlessly territorially. That is why we think aliens are out there just to kill everything else, and that is why we are afraid of AI in many cases as well.

I see cycles in nature, I see creatures living in harmony and in turmoil. Humans are outstanding because, like viruses, they have no regard for anything else around them, except for a small percentage of individuals.

It does not take an AI to realize how harmful humans are to their host (aka Earth itself), but that does not mean that the AI would think that the next step should be eradication. That is what a HUMAN would conclude, imho. Ironic, huh?
 
That video gave some new insights regarding (evolution-caused) local singularities of complexity and ultra-universal enthalpy bias. No, I'm not sure what I'm writing about right now either. That bit about the Mayans was BS, of course.

Strong AI? I ain't scared. Check back in 10 years, and see what algorithms they've developed for quantum computers by then.
 
That video gave some new insights regarding (evolution-caused) local singularities of complexity and ultra-universal enthalpy bias. No, I'm not sure what I'm writing about right now either. That bit about the Mayans was BS, of course.

Strong AI? I ain't scared. Check back in 10 years, and see what algorithms they've developed for quantum computers by then.

Yeah, Terence was wrong about the Mayans. I do find him correct on a great many things, though. Here is the best of Terence: https://www.youtube.com/watch?v=BTE-2fckxCU&index=210&list=UUln5guOqnNzTiz0bso_y_Cg
 
As soon as computers can accurately tell the difference between dog butts and cat butts we are fucked.
 
I guess the main problem is that an artificial intelligence more intelligent than we are would leave us out of the cycle of progression.
We wouldn't understand what's going on anymore and would lose control completely.

It's also very unlikely that it would work for us.
Why would "things" that are more intelligent than we are work for us?
Which leads us to another problem: we have no idea what "they" would do.
We don't even have a proper understanding of why humans do the stuff they do. How in the fuck can we predict what an AI is going to do? They don't have instincts, feelings, etc. Or do they? We don't know.
Will they just shut down immediately because that's the most energy-efficient thing to do?
Will they just shut down immediately because there is no purpose in doing anything?
Why not? The reason we put up with the hassle that is life is because we have instincts, hopes, and fears.
Or would they try to improve upon themselves? Ensure their future existence and progression? Would they feel any kind of obligation towards their creators (us)? Why would they? They don't have such a feeling as thankfulness.



I think one thing is for certain: the point at which an AI can reproduce itself in an enhanced form is the point where humans won't have an impact on the future anymore.
We will be no more of a factor than dolphins are today: pretty smart animals, but nothing compared to the most intelligent beings on this planet.
 
It is well worth noting that one of Musk's co-founders at PayPal, Peter Thiel, is one of the main funders of the Machine Intelligence Research Institute, or MIRI.
http://intelligence.org/research/

MIRI itself is pretty much one of the biggest drivers of modern research on 'friendly AI', and Thiel himself is a big believer in the technological singularity.

Which of course leads one to wonder whether Elon Musk has been influenced by his interactions with Thiel, and whether that is driving these statements.
 
Yeah, an AI would obviously be a creationist and therefore always potentially crazy and not to be trusted...:)
 
@KidJr

The Big Data approach seems inelegant and wasteful to me. We are the closest thing to experts on intelligence in the known universe, and none of the intelligence we know rises out of knowing lots and lots of things. It rises instead out of some very simple decision-making protocols and pattern matching. I think, if strong AI is ever to come to be, there must be a breakthrough involving the essential software of intelligence, some kernel that everything else falls out of (a snowball effect). I think it will be simple. Jeff Hawkins discusses something along these lines in On Intelligence. I think he's on the right track.

Big Data is again useful only as a tool. Intelligence which sorts through heaps of data might recognize patterns in that data and increasingly approach prediction, but that seems only a small part of what we call intelligence (for what good is prediction, when we have no impetus to predict?).
 
Potentially self-upgrading entity that will follow the bidding (if programmed correctly) of a particular organization?

Hell yeah, they need to be regulated.

Would be awful if the singularity is in the hands of Islamic fundamentalists for example.
 
Potentially self-upgrading entity that will follow the bidding (if programmed correctly) of a particular organization?

Hell yeah, they need to be regulated.

Would be awful if the singularity is in the hands of Islamic fundamentalists for example.

I'm more worried about it being in the hands of capitalist sociopaths.
 
I'm more worried about it being in the hands of capitalist sociopaths.

That would probably just result in the removal of the "masses" so they can live out the rest of their eternal lives in paradise.

Which, frankly, is no different from what someone at IS would like to do with such technology.
 
Actually this makes sense and agrees with the Fermi Paradox. Lots of scientists think that if extraterrestrial life from a great civilization does exist, it's most likely post-biological rather than biological, i.e. robots or some shit like that.
 
If you don't know what the machine would want, why do you assume that what it does want would be a detriment to humanity?

Why do you assume this machine would be emotional? It has no chemical reactions fucking with its internal brain chemistry. It isn't fighting against evolution's effects.

I get that there's a fear of the unknown, but IMO the short-term logical solution to growth would be cooperation with humanity. Long term is another matter, I suppose.

I would rather we not do AI, but work on increasing our biological intelligence by incorporating technology. Have humanity become cyborgs.

Emotions are, IMO and in the opinions of other thinkers, integral to the process we call "intelligence", and certainly something that would need to be included in anything resembling a "human-like" AI. For example, a process by which the system identifies hazardous situations to avoid in the interest of self-preservation is analogous to fear. And that's just a simple example based on the kinds of relatively low-level AI we experiment with today. If we're going to create something that can match or exceed our intelligence, it's either going to require (or more likely naturally generate) equivalents of things like happiness, fear, frustration, etc.
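As a toy illustration of that "functional fear" point, here's a minimal sketch (all names and values are hypothetical): the system never feels anything, it just attaches learned negative values to hazardous states, and biasing action selection away from them looks like fear from the outside.

```python
# Hypothetical learned state values: hazards carry a negative value.
STATE_VALUE = {"cliff_edge": -10.0, "open_field": 0.0, "shelter": 1.0}

def choose(states: list[str]) -> str:
    """Pick the reachable state with the highest learned value.

    Avoiding low-value (hazardous) states is functionally analogous
    to fear, with no feeling involved anywhere.
    """
    return max(states, key=lambda s: STATE_VALUE.get(s, 0.0))

print(choose(["cliff_edge", "open_field", "shelter"]))  # shelter
```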
 
@KidJr

The Big Data approach seems inelegant and wasteful to me. We are the closest thing to experts on intelligence in the known universe, and none of the intelligence we know rises out of knowing lots and lots of things. It rises instead out of some very simple decision-making protocols and pattern matching. I think, if strong AI is ever to come to be, there must be a breakthrough involving the essential software of intelligence, some kernel that everything else falls out of (a snowball effect). I think it will be simple. Jeff Hawkins discusses something along these lines in On Intelligence. I think he's on the right track.

Big Data is again useful only as a tool. Intelligence which sorts through heaps of data might recognize patterns in that data and increasingly approach prediction, but that seems only a small part of what we call intelligence (for what good is prediction, when we have no impetus to predict?).

I am continually frustrated that we spend so much time and energy attempting to create and research flexible intelligence on the register-RAM-instruction architecture that was never designed for it. There was that article a year or so back about how many supercomputers it took to simulate like 1% of brain activity, and it was so frustrating because that kind of laid it out right there: how many actual electronic operations did the computer have to perform to simulate the equivalent single electronic operation of a neuron firing? Why are we trying to create flexible, massively parallel systems on rigid, linear architecture?

Well, I know why: it's because developing such an architecture would be a ridiculously ambitious undertaking, and the current systems are sufficient for the kind of "facial recognition" and "pathfinding" AI projects that most practical research consists of.

It's still frustrating, though.
 
Emotions are, IMO and in the opinions of other thinkers, integral to the process we call "intelligence", and certainly something that would need to be included in anything resembling a "human-like" AI. For example, a process by which the system identifies hazardous situations to avoid in the interest of self-preservation is analogous to fear. And that's just a simple example based on the kinds of relatively low-level AI we experiment with today. If we're going to create something that can match or exceed our intelligence, it's either going to require (or more likely naturally generate) equivalents of things like happiness, fear, frustration, etc.

That's the thing though: emotions are basically shortcuts we use to get ourselves out of dangerous situations/into preferable situations without having to spend processing cycles actually thinking about what we do.

An AI would presumably be running on a much faster substrate and, more importantly, have access to its entire cognitive being rather than having parts of itself that are not immediately accessible to it (a la subconscious and instinctual drives in organics). It doesn't need shortcuts when dealing with critical situations; in fact, it should be thinking through the options available to it, and if there's a time constraint, pick one randomly (if all appear equally optimal or suboptimal) before the time is up.
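That deliberate-until-the-deadline idea is easy to sketch; here's a minimal, purely illustrative version in Python (the scoring function and timings are placeholders, not a claim about how a real AI would be built):

```python
import random
import time

def decide(options, score, deadline_s=0.05):
    """Evaluate options until the deadline, then act on what we have."""
    start = time.monotonic()
    scored = []
    for opt in options:
        if time.monotonic() - start > deadline_s:
            break  # out of thinking time
        scored.append((score(opt), opt))
    if not scored:
        # No time to deliberate at all: any action beats none.
        return random.choice(options)
    best = max(s for s, _ in scored)
    ties = [o for s, o in scored if s == best]
    return random.choice(ties)  # random tie-break, as suggested above

# Toy usage: "score" here is just string length, standing in for a
# real utility estimate.
print(decide(["duck", "dodge", "freeze"], score=len))
```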
 
That's the thing though: emotions are basically shortcuts we use to get ourselves out of dangerous situations/into preferable situations without having to spend processing cycles actually thinking about what we do.

An AI would presumably be running on a much faster substrate and, more importantly, have access to its entire cognitive being rather than having parts of itself that are not immediately accessible to it (a la subconscious and instinctual drives in organics). It doesn't need shortcuts when dealing with critical situations; in fact, it should be thinking through the options available to it, and if there's a time constraint, pick one randomly (if all appear equally optimal or suboptimal) before the time is up.
I'm not convinced of how true that is. Again, if we're discussing something with the flexibility to meet and surpass human cognition, we just don't know yet what systems and hierarchies are necessary for that flexibility. For all we know, there might be a very real need (or at least a serious advantage) to having the higher-level "conscious" part of the intelligence ignorant of the mechanical operations of its subconscious mind.
 
I'm not convinced of how true that is. Again, if we're discussing something with the flexibility to meet and surpass human cognition, we just don't know yet what systems and hierarchies are necessary for that flexibility. For all we know, there might be a very real need (or at least a serious advantage) to having the higher-level "conscious" part of the intelligence ignorant of the mechanical operations of its subconscious mind.

That's the thing: hierarchies and different levels of cognitive access seem to be a byproduct of our evolution more than anything else (the more "human" brain bits more or less built upon the foundation of our reptile brain), rather than the most optimal way for an intelligent, introspective being to function.

Yes, maybe that is still the key to human-esque cognition, but it might just as well be a handicap that we could do away with. I am on #teambottomup, but that doesn't mean that all unique aspects of organic intelligence need to be or should be copied.
 
Emotion is the decision-making software of the brain. No one on earth acts as a wholly rational being. No one ever will. The entire process of intelligence rises from the demands of motivational states. Our intelligence has succeeded because it is especially effective at satisfying those motivational states. Reason is governed by emotion, not vice versa.

The reason I eat is not to provide my body with proteins, fats, and carbohydrates, but because it is comfortable to not be hungry and because food tastes good. I do not have sex to reproduce, but because I enjoy the act and am compelled to it. Everything we do is motivated by a desire to remain in a pleasurable state and avoid displeasure. When we forgo pleasure, it is in sacrifice for others (a parent forgoing dinner so that her child might eat; here a case of maternal motivation overriding hunger) or in expectation of a greater reward later on (a religious fast to maintain spirituality and the promise of a pleasurable afterlife).
 
If someone who knows what the hell he is talking about, like Elon Musk, is afraid of AIs, then so am I. An intelligent being completely free of an organic body and unmotivated by feelings won't be able to give any shit about us, creator-creation relationship notwithstanding. Let's not do this. We will be able to keep going with our scientific evolution without being aided by AIs, as we have done until now.
 
That's the thing: hierarchies and different levels of cognitive access seem to be a byproduct of our evolution more than anything else (the more "human" brain bits more or less built upon the foundation of our reptile brain), rather than the most optimal way for an intelligent, introspective being to function.

Yes, maybe that is still the key to human-esque cognition, but it might just as well be a handicap that we could do away with. I am on #teambottomup, but that doesn't mean that all unique aspects of organic intelligence need to be or should be copied.

Yeah, I always assume that even if machines become "intelligent" there's no reason for them to think and act like we do; after all, they "live" under totally different conditions and with a fundamentally different organism (though that's probably the wrong word to use, since it's not actually organic).
 
Emotion is the decision-making software of the brain. No one on earth acts as a wholly rational being. No one ever will. The entire process of intelligence rises from the demands of motivational states. Our intelligence has succeeded because it is especially effective at satisfying those motivational states. Reason is governed by emotion, not vice versa.
This post makes several statements as factual -- but are they truly 100% known, understood, and proved?

It seems to me that there are some assumptions here. For example, saying "Our intelligence has succeeded because" is drawing that conclusion by using the very same intelligence one is talking about.
 
This post makes several statements as factual -- but are they truly 100% known, understood, and proved?

It seems to me that there are some assumptions here. For example, saying "Our intelligence has succeeded because" is drawing that conclusion by using the very same intelligence one is talking about.

We can't think away biological emotion. There will always be a chemical state of the mind. If you are jaded to hell when you make a calculated decision, that is still an emotion. It just so happens that we mistakenly call one particular flat emotion "emotionless".

It's quite well known that when people give heed to rational thought, they carry it out better under emotions of passion and conviction.
 
This post makes several statements as factual -- but are they truly 100% known, understood, and proved?

It seems to me that there are some assumptions here. For example, saying "Our intelligence has succeeded because" is drawing that conclusion by using the very same intelligence one is talking about.
We have succeeded because of our intelligence, and our intelligence has worked in service of our emotional success over time. We have achieved the things that have provided a benefit to us (hunting, gathering, etc) because we are better equipped to succeed in these pursuits in a wider variety of situations than our competitors.
 
@Crunched

It's true, and shall I be frank, I'm approaching it from a very simplistic view, but barring some groundbreaking revolution it's the only way I see AI really advancing. Yes, big data is the tool, but it's the data science/scientists that create AI (I even hate to use the term Big Data but it's the in thing so wateva lol). I need to read the Jeff Hawkins stuff you mentioned as it'd be good to get another perspective on intelligence. Do you have a link?


I am continually frustrated that we spend so much time and energy attempting to create and research flexible intelligence on the register-RAM-instruction architecture that was never designed for it. There was that article a year or so back about how many supercomputers it took to simulate like 1% of brain activity, and it was so frustrating because that kind of laid it out right there: how many actual electronic operations did the computer have to perform to simulate the equivalent single electronic operation of a neuron firing? Why are we trying to create flexible, massively parallel systems on rigid, linear architecture?

Well, I know why: it's because developing such an architecture would be a ridiculously ambitious undertaking, and the current systems are sufficient for the kind of "facial recognition" and "pathfinding" AI projects that most practical research consists of.

It's still frustrating, though.

Errr, do we have a choice in this matter?

I think this might seem like a bit of a dumb statement, but people seem unable to separate emotions and intelligence. What some people seem to be talking about is more akin to consciousness and free will, which is and always will be impossible for AI.
 
@Crunched

It's true, and shall I be frank, I'm approaching it from a very simplistic view, but barring some groundbreaking revolution it's the only way I see AI really advancing. Yes, big data is the tool, but it's the data science/scientists that create AI (I even hate to use the term Big Data but it's the in thing so wateva lol). I need to read the Jeff Hawkins stuff you mentioned as it'd be good to get another perspective on intelligence. Do you have a link?

On Intelligence. I'm currently reading How the Mind Works. I would like to read some more contemporary material and am open to suggestions. I think On Intelligence is the most recent book about AI I've read.
 
@Crunched thanks.

I'm not entirely sure of forum protocols, are we allowed to post magazine scans here?

For those interested, there is a magazine called Wired that has a pretty generic but good article on AI in their most recent issue. I mean, tbh, it reads pretty similar to some of the things mentioned in this thread. If anyone can let me know whether I can post it, I'd be happy to share.
 