
Transhumanism & Extropianism; or, become a robot with me, GAF

Status
Not open for further replies.
Who's gonna pay for all this stuff? Would people even reproduce anymore? Those are the two most frightening questions I associate with this future (should it happen). Class warfare on the scale of genocide would likely occur. The ability to have the power with none of the responsibility or enlightenment is too scary to comprehend. The trajectory I imagine involves a heavily militarized government segment due to cost, colored by the super rich. Do we really want those types to have more power than they already do? I also worry about the poor being harvested for organs or replacement parts because those metahumans with power have changed the definition of what "human" is.
 
Which is why I think we need to become a tad more socialist, and less capitalist, in the west before the tech escalates.

There is always that possibility that the technology will enable an unparalleled autocracy - one we can never end through rebellion, because the dictators are either vastly more intelligent than the plebs (thanks to brain augments) or because they have a controlled A.I. that's vastly more intelligent than all the plebs combined.

Imagine a supremely powerful being at the beck and call of someone like... Bush.
 
D

Deleted member 47027

Unconfirmed Member
Hey, Shanadeus made it back! wb Shanadeus
 

sans_pants

avec_pénis
So, transhumanism/extropianism is obviously about avoiding death.
Does anyone know of any articles on how long this avoidance of death can be prolonged, what with the end of the universe and all?

Is it basically a situation of "Well, if we haven't solved that by then, we'll probably have come to accept or even long for death"?

http://www.kurzweilai.net/what-our-civilization-needs-is-a-billion-year-plan

Honestly, in hundreds of years A.I.s will be solving those problems for us. If we can turn quantum particles into transistors and turn anything into a computer, once we've turned the earth into one (or the sun, or the entire solar system, or the entire galaxy) we should be able to solve a lot of problems.

Recent theories say a new universe bubbles up every few billion years, and if we advance far enough maybe we could figure out how to jump into the new one. Or create our own universes.
 

sans_pants

avec_pénis
Who's gonna pay for all this stuff? Would people even reproduce anymore? Those are the two most frightening questions I associate with this future (should it happen). Class warfare on the scale of genocide would likely occur. The ability to have the power with none of the responsibility or enlightenment is too scary to comprehend. The trajectory I imagine involves a heavily militarized government segment due to cost, colored by the super rich. Do we really want those types to have more power than they already do? I also worry about the poor being harvested for organs or replacement parts because those metahumans with power have changed the definition of what "human" is.

It's unlikely we'd harvest organs from the poor. We can already grow some organs in the lab with the host's stem cells, so there is no chance of rejection.

We do need to shift away from capitalist societies at some point, because soon automation is going to push everyone out of the job market. I think the next 20-40 years will be very tumultuous, but if we survive it, things should become much better afterwards.
 
I recently caught about 2 minutes of some American science doc presented by Neil Tyson and it had Ray Kurzweil sitting in his kitchen saying he takes 150 pills a day. Anyone know what that is all about? I am really interested in h+ but that totally threw me. What can he possibly be taking 150 pills a day for?
 

sans_pants

avec_pénis
I recently caught about 2 minutes of some American science doc presented by Neil Tyson and it had Ray Kurzweil sitting in his kitchen saying he takes 150 pills a day. Anyone know what that is all about? I am really interested in h+ but that totally threw me. What can he possibly be taking 150 pills a day for?

His theory is that if you can replace all the breaking-down parts of your body, you can slow aging. Apparently it's working for him, but he's also got a messed-up heart that might take him out.
 
His theory is that if you can replace all the breaking-down parts of your body, you can slow aging. Apparently it's working for him, but he's also got a messed-up heart that might take him out.

Yeah; I think he takes around 200 pills. Apparently his body is 10 years younger than it should be.
 
I admire his dedication. I went through a health phase and tried to take a multivitamin once a day; I couldn't keep up even that past about 4 days. Roll on the day when I can just turn up to a LIMB clinic and leave looking age 25 and built like a brick shithouse :)
 

Kraftwerk

Member
Is it foolish to assume that once the Singularity occurs, we shall either be immortals, or we will all be wiped out?

I see two scenarios once it happens;

(A) Technological singularity occurs. After that event, the machine keeps creating a better version of itself, each version creating a more advanced machine, until we reach the Godhead: a machine that cannot be upgraded; the absolute pinnacle. After that, there are no limits.

(B) Technological singularity occurs. The machine is sentient, and starts the chain of events that lead to our extinction.

To me, the singularity is the key to all this. I hope it truly happens in our lifetime.
 

Freshmaker

I am Korean.
What if we didn't recognize this and thought that we had conquered death, when in reality we had not? The dying person's conscious presence leaves, and only the person's behavior persists in an AI. Yet we all think we're immortal! Food for thought.

When my grandfather had a stroke, all such illusions about self in this sense were pretty much destroyed for me. His personality changed. His behavior changed etc.

What you present is what you have.
 
I was thinking how one of the first body augmentations I want to get is small speakers built into my ears that can connect via Bluetooth (or some future wireless tech).

This is similar to some hearing aid technology that connects to Bluetooth right now, ironically made by a company called "Oticon". It wouldn't be too difficult to use this style as internal speakers that could amplify sound, connect to sources, or tell you information.

its unlikely we'd harvest organs from the poor. we can already grow some organs in the lab with the host stem cells so there is no chance of rejection

we do need to shift away from capitalist societies at some point, because soon automation is going to push everyone out of the job market. i think the next 20-40 years will be very tumultuous, but if we survive it things should become much more ideal afterwards

Stem cell research is an amazing thing that I wish had more support sans the moral debates, and if put to the forefront of technology could progress regeneration, I believe.

The point you make about automation beginning to rule large sectors of the job market is something that I think is much more of a real possibility than some think right now; I don't know where displaced workers are going to go, and likewise, I believe the capitalist system will almost become inferior and unable to support the capacity left on its doorstep due to the progress of technology. You could say the labor of building machines is work, but then you have ideas such as autonomous self-replicating robots, in which case you'd need even less human labor force for construction.

His theory is that if you can replace all the breaking-down parts of your body, you can slow aging. Apparently it's working for him, but he's also got a messed-up heart that might take him out.

Also, do you have a source for this? I'm interested in hearing if there are any reports or documentation on the negative effects it's had on his body otherwise.
 

sans_pants

avec_pénis
The point you make about automation beginning to rule large sectors of the job market is something that I think is much more of a real possibility than some think right now; I don't know where displaced workers are going to go, and likewise, I believe the capitalist system will almost become inferior and unable to support the capacity left on its doorstep due to the progress of technology. You could say the labor of building machines is work, but then you have ideas such as self-replicating robots, in which case you'd need even less human labor force for construction.

It's weird: when I talk about automation with people, they think it's far-flung fantasy at least 50 years away. Yet we have fast food machines that will cost less per year than an employee, we have 35k robots that learn simple tasks (box moving) very easily, Tesla builds its new cars with a fully automated line (the bots can switch their hand parts out for a dozen different tools), and A.I. allows us to have artificial customer service and self-driving cars. And that's now. In 10 years it's going to be worse. Slow adoption is the only thing saving us right now.


Also, do you have a source for this? I'm interested in hearing if there are any reports or documentation on the negative effects it's had on his body otherwise.

Nah, sorry, I read it in his book and haven't looked for a follow-up.
 
Is it foolish to assume that once the Singularity occurs, we shall either be immortals, or we will all be wiped out?

I see two scenarios once it happens;

(A) Technological singularity occurs. After that event, the machine keeps creating a better version of itself, each version creating a more advanced machine, until we reach the Godhead: a machine that cannot be upgraded; the absolute pinnacle. After that, there are no limits.

(B) Technological singularity occurs. The machine is sentient, and starts the chain of events that lead to our extinction.

To me, the singularity is the key to all this. I hope it truly happens in our lifetime.

I don't think either extreme is a requirement, though if strictly referring to Seed AI, then I do think the latter is a vague possibility.

I believe most people who refer to this process think that we'll progress technologically far enough to "accidentally" create something too powerful -- which will then, like a virus, spiral out of control until it's too powerful to contemplate or control. Could that happen? I don't know, maybe, but I don't think it'll be a sudden shift like some theorize or fear.
 
It is also possible that someone will intentionally engineer our destruction.
Imagine a rich cult, obsessed with the "apocalypse", what's there to stop them from creating a progressively developing A.I. programmed to destroy humanity?

As technology becomes more and more ubiquitous, it's possible that the processing power required to jumpstart the creation of such an A.I. might end up in the hands of even a small group of individuals.
 

sans_pants

avec_pénis
It is also possible that someone will intentionally engineer our destruction.
Imagine a rich cult, obsessed with the "apocalypse", what's there to stop them from creating a progressively developing A.I. programmed to destroy humanity?

As technology becomes more and more ubiquitous, it's possible that the processing power required to jumpstart the creation of such an A.I. might end up in the hands of even a small group of individuals.

A.I. is hard. It's gonna take a lot of great minds to make it work. A doomsday cult would be better off engineering a super virus.
 
Eclipse Phase is an RPG that delves heavily into this stuff. Been reading the core rule book and it has all sorts of interesting ideas of where technology could take us.
 
It is also possible that someone will intentionally engineer our destruction.
Imagine a rich cult, obsessed with the "apocalypse", what's there to stop them from creating a progressively developing A.I. programmed to destroy humanity?

As technology becomes more and more ubiquitous, it's possible that the processing power required to jumpstart the creation of such an A.I. might end up in the hands of even a small group of individuals.

Religions or cults born out of technological enhancement are bound to happen in some manner, adopting the use of these technologies for their own social/political gain, as has happened with the acceptance of evolution in some religious sects. More probable, given the greater chance of either AI or biological augmentation, is that they would use such technology for violence and "purity" missions (i.e. Deus Ex) and try to separate society from the belief of pure humanism or imitating God.

It's an odd type of denial: the farther we get from any notion of religion or religious metaphysics surrounding ideas such as the 'apocalypse', or the wider the gap between what's considered "human" and not, the more strongly those beliefs will be clung to, like a security blanket being ripped away. I think this sort of behavior would be less prevalent in rich societies.
 

Jaleel

Member
Loved me some Deus Ex: Human Revolution.

1. Do you believe this at all possible to any extent? If so, how far do you think we can go, and if not, why?

Sure, envisioning a future in which people are able to purchase augmentations that will enhance themselves in a number of different ways isn't too terribly hard to imagine. As to how far it will go, I haven't the slightest clue.

2. Would you be in favor of being able to alter humanity at the core of what makes us "human"?

I'm having trouble understanding this question. Does having augmentations done to oneself make you less "human"? Are amputees less human once they decide to get prosthetic limbs? How about when a person gets an artificial heart? I personally wouldn't consider them less human.

3. How long do you think this will take for us to achieve?

/shrug

If I had to guess, the technology will exist in the near future. However, I would assume it will take much longer before it's sold to the public as commonly as breast implants.

4. What personal moral issues do you have (if any) with this movement?

Never really took the time to consider any moral implications of this. As long as it doesn't become something mandated by the state, I suppose it's fine.

5. If given the chance, would you alter your mind or body (abolitionist theme) to reduce pain? Or, simply augment?

A pair of shades similar to the ones Adam Jensen has for sure. #swag

6. If this became possible, how would it affect world religions?

Would it affect them? I mean, again, do you hear people complain when someone gets prosthetic limbs, an artificial heart, or laser eye surgery?

7. Is this possible without severe class warfare, i.e., the rich advancing their state, the poor being left behind?

No; the rich will have the best access to it, similar to how they have the best access to healthcare.

8. Can you think of any solutions?

Socialize it?
 

Guevara

Member
Honestly, I think tech like this will be a total disaster for the human race short term. We're going to reach this tech while still in a capitalism mode, meaning resource allocation will be extremely 'chunky', relegated to already-rich assholes.

On the plus side, I'm pretty sure nothing really world changing (basically anything besides replacing parts) will happen for well over 100 years. You might get an eye implant, or good working hands, but you won't be able to digitize your memories and live forever. I'm all for fixing parts, I'm against technological immortality.
 
I'm having trouble understanding this question. Does having augmentations done to oneself make you less "human"? Are amputees less human once they decide to get prosthetic limbs? How about when a person gets an artificial heart? I personally wouldn't consider them less human.

I'm more referring to a point where the majority of your body could be replaceable, whether willingly or out of necessity, or perhaps 'mind uploading' to transfer thoughts and memories into something that, by nature, isn't born flesh: the term "post-human" used in the TH definition. Your examples are replacing things with the same abilities and purpose; I'm talking about prosthetic limbs able to increase strength well beyond normal human conditions, or likewise mental ability well beyond even the smartest of men. Intentional improvements, not unintentional replacements.

Would it affect them? I mean, again, do you hear people complain when someone gets prosthetic limbs, an artificial heart, or laser eye surgery?

The issues that would arise come from the moral philosophies surrounding the idea that God put us here, in a specific form, for a due purpose. Achieving things such as living for hundreds of years, regenerative DNA, altering your mental and physical state to superhuman heights, and escaping death would be quite contrary to what has been the norm for the last x-amt. of millennia. Laser eye surgery and replacement limbs are a vague form of it, but aren't life-extension by the definition of transhumanism or extropian principles, and wouldn't put them in the same class.
 

Zaptruder

Banned
It is also possible that someone will intentionally engineer our destruction.
Imagine a rich cult, obsessed with the "apocalypse", what's there to stop them from creating a progressively developing A.I. programmed to destroy humanity?

As technology becomes more and more ubiquitous, it's possible that the processing power required to jumpstart the creation of such an A.I. might end up in the hands of even a small group of individuals.

Counter A.I. development will of course be a large and important part of future computing development.
 

Jaleel

Member
I'm more referring to a point where the majority of your body could be replaceable, whether willingly or out of necessity, or perhaps 'mind uploading' to transfer thoughts and memories into something that, by nature, isn't born flesh: the term "post-human" used in the TH definition.

Achieving things such as living for hundreds of years, regenerative DNA, altering your mental and physical state to superhuman heights, and escaping death would be quite contrary to what has been the norm for the last x-amt. of millennia.

Okay, this is far more advanced than I was originally musing about. I'll revisit those questions once I've put some thought into them.
 
D

Deleted member 13876

Unconfirmed Member
Don't have much to contribute to the topic at the moment, but will keep an eye on this.
 
Counter A.I. development will of course be a large and important part of future computing development.

Indeed, that's really the only solution.
Create a benign evolving A.I. to stop the development of a malign A.I.

Assuming that an evolving A.I. would in short time become near omnipotent as a result of it being capable of improving itself - over and over again.
 
Indeed, that's really the only solution.
Create a benign evolving A.I. to stop the development of a malign A.I.

Assuming that an evolving A.I. would in short time become near omnipotent as a result of it being capable of improving itself - over and over again.

Seed AI is fascinating, although here's an interesting excerpt from the Wiki:

"However, they cannot then produce faster code and so this can only provide a very limited one step self-improvement. Existing optimizers can transform code into a functionally equivalent, more efficient form, but cannot identify the intent of an algorithm and rewrite it for more effective results. The optimized version of a given compiler may compile faster, but it cannot compile better. That is, an optimized version of a compiler will never spot new optimization tricks that earlier versions failed to see or innovate new ways of improving its own program[citation needed]."

My understanding, in layman's terms: as far as our evolving AI concept is concerned right now, it can only evolve within one singular functionality, but can't "think" of new ways of programming itself outside of that realm. I think.
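The one-step limit the excerpt describes can be illustrated with a toy peephole optimizer. This is a minimal sketch, not real compiler code, and every name in it is invented for illustration: it can rewrite a program into an equivalent, cheaper form using a fixed rule set, but running it on its own output finds nothing new, because it cannot invent rules it doesn't already have.

```python
# Toy peephole optimizer with a FIXED rule set -- illustrative only.
# It can transform code into an equivalent, cheaper form, but it can
# never invent a new optimization trick, so "self-improvement" stops
# after one pass: the limitation described in the excerpt above.

RULES = {
    ("mul", 1): None,        # x * 1 -> x (drop the op entirely)
    ("add", 0): None,        # x + 0 -> x
    ("mul", 2): ("shl", 1),  # x * 2 -> x << 1 (strength reduction)
}

def optimize(program):
    """Apply the fixed rules to a program given as (op, operand) pairs."""
    out = []
    for op, arg in program:
        rewritten = RULES.get((op, arg), (op, arg))
        if rewritten is not None:
            out.append(rewritten)
    return out

def run(program, x):
    """Interpret the tiny op list on an integer input."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
        elif op == "shl":
            x <<= arg
    return x

prog = [("mul", 2), ("add", 0), ("mul", 1), ("add", 3)]
opt = optimize(prog)   # [("shl", 1), ("add", 3)] -- shorter, same result
# optimize(opt) == opt: a second pass improves nothing, because the
# optimizer cannot see intent or grow its own rule set.
```

On input 5 both versions return 13; the fixed point after a single pass is exactly the one-step self-improvement limit the Wikipedia excerpt is talking about.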
 

Darkmakaimura

Can You Imagine What SureAI Is Going To Do With Garfield?
I'm not sure about becoming a robot and I'd at least currently be a bit leery about certain kinds of implants, but for the most part I love and support the advancement of technology that will empower us physically or mentally. I for one would probably welcome an implant in the back of my head that would allow for a cute ancilla/AI.

But there's still a lot of fear of custom genetic engineering (the term for that escapes me right now) by having a world of "superior" people.
 
Seed AI is fascinating, although here's an interesting excerpt from the Wiki:

"However, they cannot then produce faster code and so this can only provide a very limited one step self-improvement. Existing optimizers can transform code into a functionally equivalent, more efficient form, but cannot identify the intent of an algorithm and rewrite it for more effective results. The optimized version of a given compiler may compile faster, but it cannot compile better. That is, an optimized version of a compiler will never spot new optimization tricks that earlier versions failed to see or innovate new ways of improving its own program[citation needed]."

My understanding, in layman's terms: as far as our evolving AI concept is concerned right now, it can only evolve within one singular functionality, but can't "think" of new ways of programming itself outside of that realm. I think.

I think that's just a limitation of self-compiling code.
A seed A.I. would be a lot more than just self-compiling code, just as we are not just self-compiling code in a fleshy computer.
 

Amzin

Member
Yea, that's just a current limitation of our compilers, which themselves aren't much in the way of AI anyway. One of the big stepping stones of AI is getting it to see intent or purpose - that is where things start to get interesting.
 

Carcetti

Member
Sign me up for slow, gradual, destructive uploading when I'm about to go out. Don't fancy being a copy but the gradual transition would be enough for my sanity to accept that it's still me.
 

sans_pants

avec_pénis
Honestly, I think tech like this will be a total disaster for the human race short term. We're going to reach this tech while still in a capitalism mode, meaning resource allocation will be extremely 'chunky', relegated to already-rich assholes.

On the plus side, I'm pretty sure nothing really world changing (basically anything besides replacing parts) will happen for well over 100 years. You might get an eye implant, or good working hands, but you won't be able to digitize your memories and live forever. I'm all for fixing parts, I'm against technological immortality.

why?
 

Toppot

Member
An interesting lecture/talk on Transhumanism/'body hacking' by Lepht Anonym.

An interesting talk where they go over how they have put things in their body, like magnets, and lessons learned about bioproofing things so they can go in your body. They are a bit crazy, I guess, but very, very interesting.

I'd love to have a couple of magnets in my fingers, but I don't have the balls to do it myself and it's illegal to get someone to do it in the UK xD

I did write a small essay on how the future of implants will affect advertising, from now, to Google Glass, and on into the future.
 

Kinitari

Black Canada Mafia
When it comes to 'brain uploading' there is only one way I think I could be comfortable with it.

Let's say eventually we can externalize our memories. Like we have a home super-computer that is wirelessly connected to our brain. Let's take that another step and let's say that eventually we can create a 'virtual space' inside this computer, and then see/hear/smell etc from an avatar of ourselves living in this virtual space. So now we have our memories and a virtual representation of us living in a digital setting.

I can keep expanding on this model slowly until I basically have the ability to simultaneously 'exist' in a physical world and a digital world. My awareness is comfortably split between these two representations of myself, where my digital self is for example... living in a virtual world, maybe some video game - it doesn't matter. When I physically go to sleep, I'm able to continue to do things digitally, as my digital self has some autonomy.

I think if, after years of living like this, where my two selves become nigh indistinguishable, my physical self were to die, I wouldn't have an existential crisis. It would be more like losing a limb than dying. And I could make a new limb (body) via cloning or the like. Heck, maybe I could have more than one body at this point.

Something like that would have to happen to keep me comfy with a brain upload.
 
When it comes to 'brain uploading' there is only one way I think I could be comfortable with it.

Let's say eventually we can externalize our memories. Like we have a home super-computer that is wirelessly connected to our brain. Let's take that another step and let's say that eventually we can create a 'virtual space' inside this computer, and then see/hear/smell etc from an avatar of ourselves living in this virtual space. So now we have our memories and a virtual representation of us living in a digital setting.

I can keep expanding on this model slowly until I basically have the ability to simultaneously 'exist' in a physical world and a digital world. My awareness is comfortably split between these two representations of myself, where my digital self is for example... living in a virtual world, maybe some video game - it doesn't matter. When I physically go to sleep, I'm able to continue to do things digitally, as my digital self has some autonomy.

I think if, after years of living like this, where my two selves become nigh indistinguishable, my physical self were to die, I wouldn't have an existential crisis. It would be more like losing a limb than dying. And I could make a new limb (body) via cloning or the like. Heck, maybe I could have more than one body at this point.

Something like that would have to happen to keep me comfy with a brain upload.
Do programs or lines of code have permanence?

Because regardless of how these alternatives to instant, lethal digital uploads put you in digital space, they nonetheless put you into an existence where your 0s and 1s will by necessity be copied and erased whenever moved. Personally, I'd rather avoid entering a digital existence to begin with. I would like my self to be linked to a particular physical substrate, rather than being an interchangeable set of 0s and 1s.

That way, the element of permanence remains. You'd still be able to interact with digital universes and such, through analogues of BTC-connections (only instead of an organic fleshy brain, your silicate neural substrate will be linked to the internet), and while you might experience some lag, unlike an entity existing within the digital universe, you remain located in one spot, and in one piece.
 

Kinitari

Black Canada Mafia
Do programs or lines of code have permanence?

Because regardless of how these alternatives to instant, lethal digital uploads put you in digital space, they nonetheless put you into an existence where your 0s and 1s will by necessity be copied and erased whenever moved. Personally, I'd rather avoid entering a digital existence to begin with. I would like my self to be linked to a particular physical substrate, rather than being an interchangeable set of 0s and 1s.

That way, the element of permanence remains. You'd still be able to interact with digital universes and such, through analogues of BTC-connections (only instead of an organic fleshy brain, your silicate neural substrate will be linked to the internet), and while you might experience some lag, unlike an entity existing within the digital universe, you remain located in one spot, and in one piece.

Well, I'm taking a lot of liberties, but I don't personally have an issue with being represented digitally.

That being said, maybe some sort of neuron-modelling processor would make you happy.
 
I wouldn't mind being represented digitally (if the process was done gradually, killing one neuron at a time as it is simulated in a computer), but the problems start once you are digital, simply due to the nature of the digital world.
There is no moving of code without outright destroying and copying it.

It's for issues like these that I would like to learn a bit more about computers and software, to figure out to what degree code "moves" on the substrate it is run on. Is it possible to design programs that are inherently non-self-destructive?
Can you create code that is linked to particular transistors, and will only move gradually, bit by bit, to another set of transistors?
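The "bit by bit" question above can be sketched directly. In this minimal, purely illustrative simulation of memory (no real hardware API works like this), even a gradual move is still a copy followed by an erase at every single step; there is no primitive "move" to fall back on:

```python
# A minimal sketch of the point above: relocating a value between two
# cells of a simulated memory, one bit at a time. Even this "gradual"
# move is copy-then-erase at every step -- the bits are duplicated for
# an instant before the originals are destroyed. Illustrative only.

def gradual_move(cells, src, dst):
    """Move an 8-bit value from cells[src] to cells[dst], bit by bit."""
    for bit in range(8):
        mask = 1 << bit
        cells[dst] |= cells[src] & mask   # copy this one bit across...
        cells[src] &= ~mask               # ...then erase it from the source
    return cells

memory = [0b10110001, 0b00000000]
gradual_move(memory, 0, 1)
# memory is now [0, 0b10110001]: the pattern survived intact, but every
# single bit was duplicated and then destroyed along the way.
```

So a program "linked to particular transistors" would dodge the wholesale copy, but any relocation at all still decomposes into copy-and-erase; the granularity changes, not the nature of the operation.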
 
Reminds me of the Swampman thought experiment.

Do programs or lines of code have permanence?

Because regardless of how these alternatives to instant, lethal digital uploads put you in digital space, they nonetheless put you into an existence where your 0s and 1s will by necessity be copied and erased whenever moved. Personally, I'd rather avoid entering a digital existence to begin with. I would like my self to be linked to a particular physical substrate, rather than being an interchangeable set of 0s and 1s.

That way, the element of permanence remains. You'd still be able to interact with digital universes and such, through analogues of BTC-connections (only instead of an organic fleshy brain, your silicate neural substrate will be linked to the internet), and while you might experience some lag, unlike an entity existing within the digital universe, you remain located in one spot, and in one piece.

This seems like a good solution to the problem, although eventually having an entirely different existence which is permanent would be preferred as an option either way. I'm sure it's possible to design programs that aren't self-destructive in the way you're describing, but that will just require a very large amount of processing power, as illustrated by this Kurzweil graph:

[Graph: Estimations of Human Brain Emulation Required Performance]

"Estimates of how much processing power is needed to emulate a human brain at verious levels (from Ray Kurzweil and the chart to the left), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises." (if this applies to the exact type of process we're talking about)
 

Kraftwerk

Member
Will read through the thread more thoroughly when I get home, so apologies if this has been discussed already;

Whenever we talk about copying the brain, everyone agrees that 'you' would die, and the other you is just a copy that will live on forever digitally.

But...

What about transferring you from the biological brain to a machine-made brain that does not degrade? Will that still be you? I would say yes.

What do you think?
 
Will read through the thread more thoroughly when I get home, so apologies if this has been discussed already;

Whenever we talk about copying the brain, everyone agrees that 'you' would die, and the other you is just a copy that will live on forever digitally.

But...

What about transferring you from the biological brain to a machine-made brain that does not degrade? Will that still be you? I would say yes.

What do you think?

I think the issue most people have is the disconnect as you move from A to B, as you are essentially destroyed while a copy of you is made.
That's why some people much prefer the idea of a "partial upload", where you are gradually moved from A to B, in such a way (copying one neuron at a time, and destroying them one at a time, while ensuring that the connection is maintained between your biological and digital/electronic neurons at all times) that one can never at any point say that a copy has been created and the original destroyed.

In a gradual transfer, you are becoming that copy.

I still have a problem with it when the process leads to a digital existence, as 0s and 1s, since you run into the same old dilemma in the digital space the moment you're transferred into it (see my previous post). If the new substrate (the "thing" your mind is run on) is just a silicate version of your brain, where your mind is solidly connected to the artificial brain, then you can bypass this problem.
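The neuron-at-a-time handover described above can be sketched as a toy simulation. Everything here (the Neuron class, the substrate labels, the scalar "signal") is invented for illustration and wildly simpler than any real neural model: each unit is copied to a functionally identical digital one and the original retired, with the whole network's behavior checked after every swap, so there is never a complete original coexisting with a complete copy.

```python
# Toy sketch of a "partial upload": neurons are replaced one at a time
# with functionally identical digital copies, and the network's output
# is verified after every single swap. Purely illustrative; all names
# and the scalar signal model are made up for this example.

class Neuron:
    def __init__(self, weight, substrate="biological"):
        self.weight = weight
        self.substrate = substrate

    def fire(self, signal):
        return signal * self.weight

def network_output(neurons, signal=1.0):
    """Pass a signal through the chain of neurons."""
    for n in neurons:
        signal = n.fire(signal)
    return signal

def gradual_transfer(neurons):
    """Swap each neuron for a digital copy, checking behavior each step."""
    baseline = network_output(neurons)
    for i, old in enumerate(neurons):
        neurons[i] = Neuron(old.weight, substrate="digital")  # copy in...
        assert network_output(neurons) == baseline            # ...behavior intact
    return neurons

brain = [Neuron(w) for w in (0.5, 2.0, 1.5)]
brain = gradual_transfer(brain)
# Every neuron is now digital, yet at no point did two complete
# "selves" exist side by side -- the continuity argument in a nutshell.
```

Whether behavioral identity at every step is enough to carry the "you" across is, of course, exactly the philosophical question the thread is arguing about; the sketch only shows the mechanism, not the answer.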
 