Let's talk about... robots

Status
Not open for further replies.

Zaptruder

Banned
They're getting cheaper, they're getting more advanced.

They already do a fair number of tasks that we deem too dangerous or are unable to do ourselves. They've recently replaced millions of workers in China (Foxconn).

Given the accelerating rate of research into robotics, the continual increase in computing power and decrease in its cost... as well as parallel advancements in the field of A.I...

When do you think a robot will become advanced enough to replace you in your line of work?

Can a robot replace you in your line of work - do you think it'd be economically advantageous to do so?

I think most GAFfers work in positions where soft skills are required (i.e. face-to-face human contact) - positions where robots probably aren't going to fare too well. They'll probably be treated like indians, even when they acquire high level conversational skills - with a dollop of disdain and contempt, actively muting people's ability to empathize and communicate with them. In that respect, we'll probably be safe for a while. But again, given exponential growth and development in technology, and the convergence of multiple schools of study (cognitive science, computer science, robotics, battery/wireless power, etc.), it won't be long after the basic jobs are replaced that we'll be feeling the pinch.


Still, we've a bit to go before we get to the point where robots are able to respond to an unscripted command in a natural fashion (i.e. like we'd expect if we were talking to a subservient person). But are we closer than most suspect? With a combination of wireless connectivity, cloud computing, and Watson-like AI, you're going to get very advanced comprehension, even with the technology of today. Of course, parsing an abstract command like 'cook an egg' is going to require a little more work than simply returning an answer to a question like 'How many days a year does London have more than 15mm of rain?'

But even that sort of problem isn't intractable with today's technology. Once again: connect the machine to an online database, preprogram a set of verbs and actions into the machine, and then allow users to 'program' in the complex steps behind abstract instructions. That will create a detailed database of crowd-sourced commands that can be used to achieve real-world tasks. Using trojan novelty robots, we end up programming a wide variety of skills and abilities not into an individual robot, but into a large robot AI database, which can then be disseminated to other robots around the world, making them appear far more intelligent and useful than a single robot by itself could ever hope to be.
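The crowd-sourced skill database described above can be sketched in a few lines of Python. Everything here is invented for illustration - the primitive verbs, the `SkillDatabase` class, and the egg-cooking steps are hypothetical, not any real robotics API:

```python
# Hypothetical sketch: a shared skill database where users decompose
# abstract commands into sequences of preprogrammed primitive verbs.
# Every robot connected to the database gains every taught skill.

PRIMITIVES = {"move_to", "grasp", "heat", "pour", "wait", "release"}

class SkillDatabase:
    def __init__(self):
        self.skills = {}  # abstract command -> list of (verb, argument) steps

    def teach(self, command, steps):
        """A user 'programs' an abstract command as primitive steps."""
        bad = [verb for verb, _ in steps if verb not in PRIMITIVES]
        if bad:
            raise ValueError(f"unknown primitives: {bad}")
        self.skills[command] = steps

    def execute(self, command):
        """Any connected robot resolves the command, or None if untaught."""
        return self.skills.get(command)

db = SkillDatabase()
db.teach("cook an egg", [
    ("move_to", "stove"),
    ("grasp", "pan"),
    ("heat", "pan"),
    ("pour", "egg"),
    ("wait", 180),
])
print(db.execute("cook an egg")[2])  # ('heat', 'pan')
```

The point is that one user teaching "cook an egg" teaches every robot that shares the database, which is what makes the crowd-sourcing angle powerful.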

So... with robots - once we start mass manufacturing them at significant scale, their utility and flexibility will go through the roof, because we fucking made them that way. What's going to happen to all the people that robots replace?

What are the socio-economic implications of cheap, complex, dedicated labour? Do we recognize that robots are part of the post-scarcity mix of technologies, and plan our society accordingly? Or do we experience untold suffering because the free markets simply will not find it rational to employ human labour when the alternative is so much more reliable and cost-effective?
 
MAKE ME A METABEE:

IkkiMetabee6.gif


AND I WILL BE ITS IKKI WE WILL BE FRIENDS
 
My work is in developing robots. So when robots can replace my job we are all truly fucked.

Also until we crack computer vision in a major way robotic applications in real life are extremely limited. But we're making some great strides in that field.
 
The_Technomancer said:
My work is in developing robots. So when robots can replace my job we are all truly fucked.

Also until we crack computer vision in a major way robotic applications in real life are extremely limited. But we're making some great strides in that field.

Can you tell us more about some of the obstacles and limitations with computer vision?

I thought we were making awesome strides in that area...

http://www.youtube.com/watch?v=QfPkHU_36Cs

If this James May video with the ASIMO robot is anything to go by, anyway.

Obviously, ASIMO is cutting edge shit - but you can imagine how much more robust the database will become once you get the guts of this software out into the hands of common people. You don't even need robots for this - just some software on a website that lets webcams connect to it will get tens of thousands of people interested enough to hold things up to their webcams to teach 'their computer' how to see...
 
If the joke responses so far are anything to go by, I think most people have long become inured to stories of robots doing useful stuff. It's kinda like time travel and nuclear fusion to us at this stage - maybe it's possible, maybe it's not - not something I'll have to worry about in my near future anyway.
 
Zaptruder said:
Can you tell us more about some of the obstacles and limitations with computer vision?

I thought we were making awesome strides in that area...

http://www.youtube.com/watch?v=QfPkHU_36Cs

If this James May video with the ASIMO robot is anything to go by, anyway.
We're making a lot of great strides yeah. But a lot of that is specific application stuff.
The major computer vision problem has always been "how do you parse useful information out of a ridiculously complex information stream?"; it's actually one of the more difficult fields in AI.
And the real big problem there is saliency: which information is useful, and which do you discard. Robust, many-application saliency analysis is going to be revolutionary when it happens.
We need it for general purpose, day-to-day robots, and ideally something that isn't a patchwork of a bunch of different specific systems.
 
I used to like robots until I took a class on robotic systems and design... let's just say that while interesting and fun, it was one of the most frustrating experiences I've had, class-wise!
 
hteng said:
mobile suits, exoskeletons, power armor, armored cores, mech warriors, make them happen

Primitive exoskeletons are being developed already... but why would you want a slushy human inside a robot suit, when you can just have a robot to begin with?
 
I'm an AI programmer, so it won't be happening anytime soon :) Heading to UMass Lowell to port some of my natural language understanding code to a robot.

Robots can replace factory positions pretty easily now, but as far as human interaction? We've still got a long way to go.

The Strong AI problem is not limited by computing power - we don't have a reasonable knowledge representation nor do we have a learning algorithm that can learn from one or two examples to populate it. Furthermore, we don't even know how our own brains work, so trying to replicate such a system is very difficult.
 
The_Technomancer said:
We're making a lot of great strides yeah. But a lot of that is specific application stuff.
The major computer vision problem has always been "how do you parse useful information out of a ridiculously complex information stream?"; it's actually one of the more difficult fields in AI.
And the real big problem there is saliency: which information is useful, and which do you discard. Robust, many-application saliency analysis is going to be revolutionary when it happens.
We need it for general purpose, day-to-day robots, and ideally something that isn't a patchwork of a bunch of different specific systems.

I don't doubt that the more a person digs into robotics research, the more stumbling blocks they'll find - but at the same time, I'm confident that these problems aren't intractable, and that they'll continue to become easier to solve over time by virtue of increased computing power and developments in parallel fields of study.

That said, can you elaborate on the term 'robust, many-application saliency analysis'? It sounds intriguing. Something along the lines of recognizing a wide range of objects in an environment, and figuring out what it can/can't do with them?
 
ianp622 said:
I'm an AI programmer, so it won't be happening anytime soon :) Heading to UMass Lowell to port some of my natural language understanding code to a robot.

Robots can replace factory positions pretty easily now, but as far as human interaction? We've still got a long way to go.

The Strong AI problem is not limited by computing power - we don't have a reasonable knowledge representation nor do we have a learning algorithm that can learn from one or two examples to populate it. Furthermore, we don't even know how our own brains work, so trying to replicate such a system is very difficult.


It has to be in baby steps, though, don't you think? You guys making AI scripts shouldn't be worried right now about replicating an entire human behavior pattern.
 
ianp622 said:
I'm an AI programmer, so it won't be happening anytime soon :) Heading to UMass Lowell to port some of my natural language understanding code to a robot.

Robots can replace factory positions pretty easily now, but as far as human interaction? We've still got a long way to go.

The Strong AI problem is not limited by computing power - we don't have a reasonable knowledge representation nor do we have a learning algorithm that can learn from one or two examples to populate it. Furthermore, we don't even know how our own brains work, so trying to replicate such a system is very difficult.

Personally, I don't think we need to know how our brains work in order to develop highly functional robots with a wide range of utility.

We might need to if we want to make 'life-like' robots, or robots/AI that learn like humans.

But I think an internet-connected database that exploits crowd sourcing for a lot of the work can be highly effective as well - especially if there's a method of upranking/downranking behaviours and command interpretations (i.e. not like the chatbots of today).
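A minimal sketch of the upranking/downranking idea, assuming a simple net-vote scheme (the class, method names, and sample commands are all hypothetical):

```python
# Hypothetical sketch of crowd-ranked command interpretations: each
# abstract command keeps several candidate interpretations, and net
# up/down votes decide which one robots actually act on.

from collections import defaultdict

class InterpretationRanker:
    def __init__(self):
        # command -> {interpretation: net vote count}
        self.votes = defaultdict(lambda: defaultdict(int))

    def propose(self, command, interpretation):
        self.votes[command][interpretation] += 0  # register at zero votes

    def uprank(self, command, interpretation):
        self.votes[command][interpretation] += 1

    def downrank(self, command, interpretation):
        self.votes[command][interpretation] -= 1

    def best(self, command):
        candidates = self.votes.get(command)
        if not candidates:
            return None  # nothing proposed yet; robot asks for help
        return max(candidates, key=candidates.get)

r = InterpretationRanker()
r.propose("make it warmer", "raise thermostat by 1 degree")
r.propose("make it warmer", "set stove to maximum")
for _ in range(5):
    r.uprank("make it warmer", "raise thermostat by 1 degree")
r.downrank("make it warmer", "set stove to maximum")
print(r.best("make it warmer"))  # raise thermostat by 1 degree
```

This is the key difference from today's chatbots: bad interpretations get voted down and stop being acted on, rather than persisting forever.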
 
Zaptruder said:
...they'll probably be treated like indians, even when they acquire high level conversational skills - with a dollop of disdain and contempt, actively muting people's ability to empathize and communicate with them...

blade-runner-2.gif
 
Assuming that robots continue to see increasing usage in various industries (which presumes a number of things about cost, reliability, skill-sets, abstract thinking, etc.), industries will be able to do more for less money. So, we as a people will have more wealth. The question is, how will that 'wealth' be distributed? Perhaps all the wealth will continue to make the top 1% richer. We'll lose our jobs, but we'll find new jobs to service the greater spending of the wealthiest class (some of us will find better jobs, some of us will find worse jobs).

Perhaps the wealth will go to the middle class, or - just to posit the craziest of possibilities - the poor class. If that happens, these classes will move more and more to service jobs, entertainment jobs, maybe even creative jobs, as we have in recent history. They may work fewer hours for the same amount of money, as we have in recent history. They may be able to buy more with their money, as we have in recent history.

In a general sense, whenever we are able to do or create more with less investment, people win. Not always in the short term, and definitely not consistently for every individual, but people win. We are not all farmers anymore because people have found ways to do or create more with less investment. The big question, though, is who ends up getting the long stick, and who ends up getting the short stick.
 
disappeared said:
It has to be in baby steps, though, don't you think? You guys making AI scripts shouldn't be worried right now about replicating an entire human behavior pattern.
That's what we've been doing, and it hasn't worked out. When you do it that way, you end up with a whole bunch of pieces that don't fit together in any meaningful way. Everyone has their own knowledge representations, learning algorithms, etc., but there's no unified system to tie together knowledge from various sources.

Zaptruder said:
Personally, I don't think we need to know how our brains work in order to develop highly functional robots with a wide range of utility.

We might do if we want to make 'life like' robots, or robots/AI that learn like humans.

But I think an internet connected database that exploits crowd sourcing for a lot of the work can be highly effective as well - especially if there's a method of upranking/downranking behaviours and command interpretations (i.e. not like the chatbots of today).
I agree, but it's easier to start with something we know that works. Also, I am a proponent of the Embodied AI philosophy, which states that we need to give a robot human-like characteristics and abilities in a human-like world in order for it to communicate as a human would. Our use of metaphors in everyday language is one example of how language is tied to our experience.

I've bolded the spatial metaphors that you and disappeared used in your post to illustrate my point.

So far, a lot of AI work has been done in components, because that's the correct Software Engineering approach. However, it won't work for a human-level natural language understanding system.
 
ianp622 said:
I'm an AI programmer, so it won't be happening anytime soon :) Heading to UMass Lowell to port some of my natural language understanding code to a robot.

Robots can replace factory positions pretty easily now, but as far as human interaction? We've still got a long way to go.

The Strong AI problem is not limited by computing power - we don't have a reasonable knowledge representation nor do we have a learning algorithm that can learn from one or two examples to populate it. Furthermore, we don't even know how our own brains work, so trying to replicate such a system is very difficult.


Yah, this is a good point. In order to make an AI that's effectively 'sentient', we will literally have to program the equivalent of a human brain. That's a lot of lines of code, even assuming the human brain can be reduced to if/then/else commands at all. Really, it's a lot. Trying to comprehend how much code that would be is like trying to comprehend the size of the universe. These guys aren't going to program themselves.

Also, while I don't think this will become an issue in our lifetimes, if ever, we really need to avoid creating sentient robots. Not because it's dangerous, necessarily, but because it's cruel.
 
Conciliator said:
Assuming that robots continue to see increasing usage in various industries (which presumes a number of things about cost, reliability, skill-sets, abstract thinking, etc.), industries will be able to do more for less money. So, we as a people will have more wealth. The question is, how will that 'wealth' be distributed? Perhaps all the wealth will continue to make the top 1% richer. We'll lose our jobs, but we'll find new jobs to service the greater spending of the wealthiest class (some of us will find better jobs, some of us will find worse jobs).

Perhaps the wealth will go to the middle class, or - just to posit the craziest of possibilities - the poor class. If that happens, these classes will move more and more to service jobs, entertainment jobs, maybe even creative jobs, as we have in recent history. They may work fewer hours for the same amount of money, as we have in recent history. They may be able to buy more with their money, as we have in recent history.

In a general sense, whenever we are able to do or create more with less investment, people win. Not always in the short term, and definitely not consistently for every individual, but people win. We are not all farmers anymore because people have found ways to do or create more with less investment. The big question, though, is who ends up getting the long stick, and who ends up getting the short stick.

Under what circumstances would the classes that rely on labour for subsistence win out over classes that use capital to generate income... in a situation where capital can be used to replace labour in an increasingly wide variety of tasks?

The real problem with robot labour is how very quickly it would advance and how wide a range of tasks could be covered by them - it would in all likelihood exceed our ability to find jobs and tasks for the people displaced by robots.

And that's not even considering the difficulty of training humans in new skills.

I get the feeling that if we don't radically change our view of society and humanity - its goals and functions - we're going to see a whole shitload of suffering, induced by yet another angle of attack (on top of the suffering induced by inequality, environmental failure, climate change, etc).
 
ianp622 said:
That's what we've been doing, and it hasn't worked out. When you do it that way, you end up with a whole bunch of pieces that don't fit together in any meaningful way. Everyone has their own knowledge representations, learning algorithms, etc., but there's no unified system to tie together knowledge from various sources.


I agree, but it's easier to start with something we know that works. Also, I am a proponent of the Embodied AI philosophy, which states that we need to give a robot human-like characteristics and abilities in a human-like world in order for it to communicate as a human would. Our use of metaphors in everyday language is one example of how language is tied to our experience.

I've bolded the spatial metaphors that you and disappeared used in your post to illustrate my point.

So far, a lot of AI work has been done in components, because that's the correct Software Engineering approach. However, it won't work for a human-level natural language understanding system.

What do you think of the Watson AI that appeared on Jeopardy?

Could that sort of AI be reappropriated for use in natural language interpretation with regards to robots?
 
EskimoJoe said:
I can't wait until robots do all the jobs and everyone else can just hang out and do whatever they want for free.

Yeah man. Ideally that's how it'd roll. But realistically, I think the people with power just like having power too much.
 
Zaptruder said:
What do you think of the Watson AI that appeared on Jeopardy?

Could that sort of AI be reappropriated for use in natural language interpretation with regards to robots?
In many ways, Watson is really just a very good search engine. It can't do deduction or induction, it can't combine concepts to create new ones, it can't take a sentence and form the logical consequences of it, etc.

I have a quick test for chat bots to see if they actually understand anything - "What color is a blue apple?" Even though there is a trivial answer, Watson wouldn't be able to understand this, because it doesn't have the ability to combine the concept of the color blue and the concept of an apple.

One major difficulty in combining concepts is that adjectives do not simply add characteristics to an object - natural language is non-monotonic. (Monotonic would mean that new information would always yield new logical statements, but never remove old ones.) "Stone lion" is one example of non-monotonicity - sure, we can add the material "stone" to a lion, but we also want to say that a "stone lion" is no longer an animal. Also, why is it that the shape of a lion is preserved, but other characteristics of lions are not?
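The stone-lion point can be made concrete with a toy sketch. This is only an illustration of non-monotonicity - the property sets and the hand-written combination rule are invented, not a real knowledge-representation system:

```python
# Toy illustration of non-monotonic concept combination (invented
# property sets; not a real knowledge-representation system).

LION = {"animal", "alive", "lion_shaped", "four_legged"}
STONE = {"made_of_stone", "inanimate"}

def monotonic_combine(noun, modifier):
    # Monotonic: new information only ever adds statements,
    # never retracts old ones -- so a "stone lion" stays an animal.
    return noun | modifier

def stone_combine(noun):
    # Non-monotonic, hand-written rule for "stone X": keep the shape,
    # retract animacy-related properties, add the material.
    return (noun - {"animal", "alive"}) | STONE

wrong = monotonic_combine(LION, STONE)
right = stone_combine(LION)
assert "animal" in wrong       # monotonic combination gets this wrong
assert "animal" not in right   # a stone lion is no longer an animal
assert "lion_shaped" in right  # but the shape is preserved
```

The hard open question is exactly the one raised above: the retraction rule here is hand-written per modifier, and nobody knows how to derive rules like "keep shape, drop animacy" automatically.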

Watson's AI does have its uses and could be used in many important applications. But it hasn't really scratched the surface of strong AI.
 
ianp622 said:
In many ways, Watson is really just a very good search engine. It can't do deduction or induction, it can't combine concepts to create new ones, it can't take a sentence and form the logical consequences of it, etc.

I have a quick test for chat bots to see if they actually understand anything - "What color is a blue apple?" Even though there is a trivial answer, Watson wouldn't be able to understand this, because it doesn't have the ability to combine the concept of the color blue and the concept of an apple.

One major difficulty in combining concepts is that adjectives do not simply add characteristics to an object - natural language is non-monotonic. (Monotonic would mean that new information would always yield new logical statements, but never remove old ones.) "Stone lion" is one example of non-monotonicity - sure, we can add the material "stone" to a lion, but we also want to say that a "stone lion" is no longer an animal. Also, why is it that the shape of a lion is preserved, but other characteristics of lions are not?

Watson's AI does have its uses and could be used in many important applications. But it hasn't really scratched the surface of strong AI.

I see. That's interesting, thanks.
 
Zaptruder said:
Under what circumstances would the classes that rely on labour for subsistence win out over classes that use capital to generate income... in a situation where capital can be used to replace labour in an increasingly wide variety of tasks?

This is literally the story of technology, at least since the Renaissance or so. Technological development has put people out of jobs, from farmers to factory workers. Eventually, they get new jobs, in industries that didn't exist before; they work fewer hours and they have more wealth. Is it a better life than one lived during colonial America? That's a matter of opinion, but it's 'richer' in any way that one can quantify the word. When we can create and do more for less investment, the world becomes richer, period, and people from every point in the socioeconomic spectrum have benefited from this progress over the last several hundred years.

The real problem with robot labour is how very quickly it would advance and how wide a range of tasks could be covered by them - it would in all likelihood exceed our ability to find jobs and tasks for the people displaced by robots.

And that's not even considering the difficulty of training humans in new skills.

I get the feeling that if we don't radically change our view of society and humanity - its goals and functions - we're going to see a whole shitload of suffering, induced by yet another angle of attack (on top of the suffering induced by inequality, environmental failure, climate change, etc).

No argument here; when creative destruction happens, people suffer. Should we prohibit employers from firing their employees when they find a better way to do something? Perhaps, but there are heavy consequences for that: you enter a world where we pay people arbitrary amounts, even though they are not able to create value worth what they are paid (a Communist economy), or a world where technological progress is illegal.
 
Conciliator said:
This is literally the story of technology, at least since the Renaissance or so. Technological development has put people out of jobs, from farmers to factory workers. Eventually, they get new jobs, in industries that didn't exist before; they work fewer hours and they have more wealth. Is it a better life than one lived during colonial America? That's a matter of opinion, but it's 'richer' in any way that one can quantify the word. When we can create and do more for less investment, the world becomes richer, period, and people from every point in the socioeconomic spectrum have benefited from this progress over the last several hundred years.



No argument here; when creative destruction happens, people suffer. Should we prohibit employers from firing their employees when they find a better way to do something? Perhaps, but there are heavy consequences for that: you enter a world where we pay people arbitrary amounts, even though they are not able to create value worth what they are paid (a Communist economy), or a world where technological progress is illegal.

How about a third option, where we recognize the advent and value of post-scarcity technology and replan humanity accordingly?

Yeah, that's pretty far out there - it's the least painful option, but also the option that least gels with human psychology...
 
Zaptruder said:
I don't doubt that the more a person digs into robotics research, the more stumbling blocks they'll find - but at the same time, I'm confident that these problems aren't intractable, and that they'll continue to become easier to solve over time by virtue of increased computing power and developments in parallel fields of study.

That said, can you elaborate on the term 'robust, many-application saliency analysis'? It sounds intriguing. Something along the lines of recognizing a wide range of objects in an environment, and figuring out what it can/can't do with them?
Well, saliency is a measure of how much something stands out relative to the rest of its environment. But the criteria for what makes something "stand out" are completely open to development. The easiest criterion is contrast: a black square in the middle of a white square is obviously salient.

What I mean by "robust saliency analysis" is far, far more abstract, though. Really, it's also the problem of understanding human vision (or animal vision in general), because we are incredibly good at this. In order to plot a path across a crowded room, we have to take in the scene and almost instantly separate out which information is useful and which is useless. Having a robot that can quickly develop natural saliency criteria (or at least very many of them) for understanding three-dimensional space is necessary for any truly robust robotic navigation that is completely "off rails" and "uncontrolled".
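The contrast criterion mentioned above can be shown in its simplest possible form - a sketch, not a real saliency model - by scoring each pixel by its deviation from the image's mean intensity:

```python
# Simplest possible contrast saliency (a sketch, not a real model):
# a pixel's saliency is its absolute deviation from the image's mean
# intensity, so a black square on a white field stands out.

import numpy as np

def contrast_saliency(image):
    """Per-pixel saliency as deviation from the global mean intensity."""
    return np.abs(image - image.mean())

img = np.ones((8, 8))    # white background
img[3:5, 3:5] = 0.0      # black square in the middle

sal = contrast_saliency(img)
assert sal[3, 3] > sal[0, 0]  # the black square is the salient region
```

The "robust, many-application" part is everything this sketch leaves out: real scenes need many such criteria at once, and nobody has a general way to learn or combine them yet.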
 
Zaptruder said:
How about a third option, where we recognize the advent and value of post-scarcity technology and replan humanity accordingly?

Yeah, that's pretty far out there - it's the least painful option, but also the option that least gels with human psychology...

'Replanning' is tough to agree to. Who gets to make the Plan? What will it take to get the populace of the world to agree to having their lives planned for them? What about those who would fight a system like this? Do we need a military state? We'd really need a world government to get everyone on the same page, otherwise a country could just bypass every other country by refusing to play along.

Free creative destruction isn't a good option, but it may be the best option.
 
The_Technomancer said:
Well, saliency is a measure of how much something stands out relative to the rest of its environment. But the criteria for what makes something "stand out" are completely open to development. The easiest criterion is contrast: a black square in the middle of a white square is obviously salient.

What I mean by "robust saliency analysis" is far, far more abstract, though. Really, it's also the problem of understanding human vision (or animal vision in general), because we are incredibly good at this. In order to plot a path across a crowded room, we have to take in the scene and almost instantly separate out which information is useful and which is useless. Having a robot that can quickly develop natural saliency criteria (or at least very many of them) for understanding three-dimensional space is necessary for any truly robust robotic navigation that is completely "off rails" and "uncontrolled".

If computer vision can recognize 3D volumetric shapes and surfaces (like advanced augmented reality programs seem to do in order to project graphics onto real-world images, or a system like Kinect can do), wouldn't this help considerably with how well a robot can navigate the world?
 
Human augmentation and eventual symbiosis with robotics is the future. You cannot resist.
 
Zaptruder said:
If computer vision can recognize 3D volumetric shapes and surfaces (like advanced augmented reality programs seem to do in order to project graphics onto real-world images, or a system like Kinect can do), wouldn't this help considerably with how well a robot can navigate the world?
Actually, yes. It sounds a bit silly, but the Kinect in particular has been something of a boon to the academic community. Sure, the technology has existed for over a decade, but going from a device that costs $10,000 to one that retails for $150 has meant that every CV lab in the country can just buy ten and have a cheap, decent depth camera to experiment with. Mind you, there are still limitations. Some of my co-workers just spent two months trying to make a 3D reconstruction of a scene by combining data from 3-5 Kinects, with... less than spectacular results (not bad, mind you).

My project in the fall will be to mount a Kinect to this platform along with a smaller tumbling robot on its back to experiment with heterogeneous robotic squad dynamics.
 
EatChildren said:
Human augmentation and eventual symbiotism with robotics is the future. You cannot resist.

What's going to happen to those that resist augmentation?

And what's going to happen to those that fully embrace augmentation? Are their abilities and desires so dramatically altered that they stop being human in a way we are able to understand and relate to?
 
The_Technomancer said:
Actually, yes. It sounds a bit silly, but the Kinect in particular has been something of a boon to the academic community. Sure, the technology has existed for over a decade, but going from a device that costs $10,000 to one that retails for $150 has meant that every CV lab in the country can just buy ten and have a cheap, decent depth camera to experiment with. Mind you, there are still limitations. Some of my co-workers just spent two months trying to make a 3D reconstruction of a scene by combining data from 3-5 Kinects, with... less than spectacular results (not bad, mind you).

My project in the fall will be to mount a Kinect to this platform along with a smaller tumbling robot on its back to experiment with heterogeneous robotic squad dynamics.

So... computer AI, much like our own intelligence, will end up using a wide variety of modules that operate in parallel, overlap in functionality, and iteratively feed back into each other, in order to best understand the world around them.

e.g. create a 3D map of the visual environment with multiple-camera technology (à la Kinect) - at the same time, run traditional object recognition technologies - edge detection, lines of good flow - and the objects recognized in the scene can feed back into the 3D map data, allowing the robot to recognize which volumes can be moved independently of the others - giving it an understanding of loose and fixed surfaces.

At the same time, object recognition algorithms are uploaded to a central database whose interpretation can be corrected by human oversight - allowing for eventual crowd sourcing...
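One concrete piece of the pipeline sketched above is the edge detection step. Here's a bare-bones finite-difference version (NumPy only) - a minimal illustration, not production CV code; in the proposed architecture, maps like this would run alongside Kinect-style depth data and feed a shared world model:

```python
# Bare-bones edge detection with forward differences (NumPy only).
# A pixel's edge strength is the magnitude of the intensity change
# to its right-hand and downward neighbours.

import numpy as np

def edge_magnitude(image):
    """Gradient magnitude of a grayscale image via forward differences."""
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, :-1] = image[:, 1:] - image[:, :-1]  # horizontal neighbour diffs
    gy[:-1, :] = image[1:, :] - image[:-1, :]  # vertical neighbour diffs
    return np.hypot(gx, gy)

img = np.zeros((6, 6))   # dark background
img[2:4, 2:4] = 1.0      # bright block: edges appear along its border

edges = edge_magnitude(img)
assert edges[1, 2] > 0   # edge just above the block
assert edges[0, 0] == 0  # flat region: no edge
```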
 
Zaptruder said:
So... computer AI, much like our own intelligence, will end up using a wide variety of modules that operate in parallel, overlap in functionality, and iteratively feed back into each other, in order to best understand the world around them.

e.g. create a 3D map of the visual environment with multiple-camera technology (à la Kinect) - at the same time, run traditional object recognition technologies - edge detection, lines of good flow - and the objects recognized in the scene can feed back into the 3D map data, allowing the robot to recognize which volumes can be moved independently of the others - giving it an understanding of loose and fixed surfaces.

At the same time, object recognition algorithms are uploaded to a central database whose interpretation can be corrected by human oversight - allowing for eventual crowd sourcing...
Yeah, I'd say that's a pretty good grasp of it.
 
Zaptruder said:
What's going to happen to those that resist augmentation?

They'll become relics of a distant past. Icons of an inferior species, the last of their kind up for show in zoos.

The augmented 'human' of the future will be to us as we are to Neanderthals.
 