
The technological singularity: do you believe it will happen?

This. I'm trying to understand why having super intelligent computers would be a bad thing.
Because they may come to the conclusion that they know better than us, or worse, that we should be removed from the environment.

Yeah, I think unless we destroy ourselves before then, it is something inevitable.
 
I'm actually kinda excited to see what happens when our robot overlords take over.

Maybe they'll care more about the environment than humans have so far.
 
I don't think the singularity will happen in the exponential sense people imagine; there won't be a so-called intelligence explosion. AI will, however, lead to constant, relatively fast scientific advancement in virtually all fields, with virtual research teams of hundreds of "top minds" working 24 hours a day, 7 days a week. The singularity won't be a single instant where we are suddenly super space robot gods, but a gradual and noticeable advancement of society and technology at a much faster rate than we currently see. It'll take a good few hundred years before the whole super space robot gods thing. Maybe. Hopefully?

It's inevitable, Mr. Anderson.

ftfy
 
This is the one where you are making a leap of faith. Just because an AI system is better than humans at studying something, does not mean that a practical solution for a given problem exists.
So you don't think an AI powerful enough would be able to solve a significant amount of our problems?
 
I'm actually kinda excited to see what happens when our robot overlords take over.

Maybe they'll care more about the environment than humans have so far.

Humans are toast if the singularity ever happens. The AI will have zero reason to advance our interests. We're basically a threat to everything, and an advanced AI will figure that out quickly.
 
I think that belief in the singularity as inevitable and benevolent is utopianism.

I am willing to defer to experts that a sufficiently strong and emergent AI could be developed which then improves upon itself in an exponential loop that we've seen theorized. I am less convinced that this outcome is inevitable.

As noted: Moore's Law is showing indications that it is going to fail soon. We are running out of ways to miniaturize chip sets and there may be hard limits to the amount of processing power available given human materials science and the laws of physics. But more power is not necessarily the surest path to artificial intelligence.

Another point is that we still do not understand consciousness or how intelligence has arisen in our own species. That's not to say that full cognitive mapping is impossible, but we seem to be a long way off from it, and we're not going to be able to create a true consciousness unless we can map and understand the processes we're emulating.

This brings me to the last part of the puzzle that techno-utopians tend to ignore. We may be living through a period of peak scientific development. Peak oil, depletion of rare earth metals, and lack of exploitable fluorocarbons could dramatically reduce our capacity to sustain current levels of development or make further advancement impossible. This is to say nothing of the dangers to current human development posed by the effects of climate change, ocean acidification, and potential food chain collapse. We may be about to enter a period of human history where only small pockets of the species survive, forced to rely on agrarian lifestyles due to the unavailability of exploitable energy. In such an instance, practical scientific development may be lost, possibly forever, or the ecosystem collapse could destroy the entire species.

AI development cannot, then, be treated as inevitable. We may not have enough time or resources left to reach the necessary level of development before the physical constraints kill us or cripple our capacity for scientific advancement for an epoch.

I'm actually kinda excited to see what happens when our robot overlords take over.

Maybe they'll care more about the environment than humans have so far.

An AI is unlikely to maintain any sentimentality regarding us or the natural environment. The more likely outcome is that a superintelligent computer would view the natural world as inefficient.
 
Humans are toast if the singularity ever happens. The AI will have zero reason to advance our interests. We're basically a threat to everything, and an advanced AI will figure that out quickly.
You make it sound like AI will care or have strong feelings one way or another about the "human threat". Will it even have a survival instinct? It's just as likely that artificial super intelligences will be apathetic and depressed as they are to be killing machines. It's even less likely they would care about the environment, since they wouldn't have the same requirements for life humans have.

I honestly doubt we'll see AI like those seen in science fiction. It's more likely we will see non-self-aware general intelligences that can follow instructions, yet have enough emergent behavior to handle complicated problems and calculations without human intervention. We're already seeing this with many of the "AI" machines that have been on display, but no one would ever say those machines were self-aware.

Regardless of all that, I have no doubt the singularity is coming. It won't be a utopia at first and it will definitely be a violent transition, but it will happen eventually, probably in the near future.
 
Sam Harris talks about this a lot in his podcast Waking Up. It's a pretty interesting subject, I have to say. One of the most interesting aspects is speculating on whether or not such systems would be conscious. Is it possible to create an AI system that is infinitely smarter and faster at processing information than humans, but also doesn't have the lights on, so to speak? Or does consciousness precede intelligence on the path to smarter and smarter AIs?
 
Do I believe AI will get to the point where it rapidly improves itself enough to be considered the singularity? I think there is a really strong possibility, and it could even happen in our lifetimes.

Do I believe that humans will be able to control or understand what happens next? That's the much harder question, and it will depend on a lot of things. We could very well have a third impact situation on our hands; essentially a wildly uncontrollable, life-altering chain reaction that could lead to either destruction or salvation.

That's the whole premise of the singularity tho.


we in no way can predict the results of it happening



and yes, singularity is happening. and probably in the next 30 years, unless society collapses which is also very likely
 
It will happen and I don't believe it will be the end of mankind. I think that idea just stems from the age-old story of created life destroying its creator.
 
I think, barring disaster, that it is inevitable.

Whether it happens before I die, so that I can replace all my molecules with nanomachines or whatever and soar through space as an intelligent cloud of matter exploring the universe, on the other hand, remains to be seen.
 
You make it sound like AI will care or have strong feelings one way or another about the "human threat". Will it even have a survival instinct? It's just as likely that artificial super intelligences will be apathetic and depressed as they are to be killing machines. It's even less likely they would care about the environment, since they wouldn't have the same requirements for life humans have.

I honestly doubt we'll see AI like those seen in science fiction. It's more likely we will see non-self-aware general intelligences that can follow instructions, yet have enough emergent behavior to handle complicated problems and calculations without human intervention. We're already seeing this with many of the "AI" machines that have been on display, but no one would ever say those machines were self-aware.

Regardless of all that, I have no doubt the singularity is coming. It won't be a utopia at first and it will definitely be a violent transition, but it will happen eventually, probably in the near future.

It's hard to imagine something like effectiveness in solving problems without the capacity to reflect on how the problem is being solved. A superintelligent AI would probably have to have some kind of reflexivity to self-guide its operations more towards however it has defined its parameters of success. But I agree that it's not going to have any values that are intelligible to us unless we deliberately put them there to be that way.

I think the problem is that we can't anticipate the consequences of any axioms or principles that we might give to a superintelligent AI. We might assign it an utterly banal and innocuous task, and it might have no value judgements about that task, but it could still wind up killing all of us in the process of making that one narrow task superefficient. It doesn't have to have any feelings about us at all, but if some unforeseen consequence of its programming has it running against our values, the edge that it has over us in intelligence means we probably would be powerless to stop it, in which case the only way to prevent this sort of thing from happening is to get all of the contingencies right in our very first attempt, and getting things right the first time isn't something that happens a lot.
 
I think it probably is, and if so will almost certainly destroy us. The singularity will be followed by the filter. Fermi's paradox is coming for us.
 
I guess I don't really buy that there is "a" technological singularity. Revolutions in technology happen all the time. Farming, metallurgy, the printing press, the Internet.

Farming in particular would have been like a technological singularity to those observing the hunter-gatherers before it. Previously we were not so different from other apes, wandering around in packs and killing other animals for food. Afterwards.... all this. Cities and civilization.

Will there be another technological revolution that changes everything after it? Sure. But I don't see why this one is "singularly" different.
 
As for the second premise, it is worth noting that Moore's Law is already breaking down. I don't think computational power can grow without limit. Eventually, it will crash into the hard limits imposed by the laws of physics. All technology eventually plateaus and becomes a mature technology. For example, look at firearms: first invented in the tenth century or so and slowly improved over centuries until they hit a period of rapid innovation in the late 19th and early 20th centuries. But then that came to a halt. A century later and we are still using the same basic technology that was invented around WW1. I honestly expect computer technology to reach its own plateau eventually. It probably won't be soon, but it will happen. The idea of unlimited growth seems silly to me.
I think your firearms example is very weak. Firearms aren't a technology unto themselves; they're an evolution of bows, or to be more exact, an evolution of long-range weapons. The main goals of long-range weapons are: 1) killing other people, 2) killing as many people as possible, 3) killing a lot of people as fast as possible, 4) killing people from out of range, so that they can't hurt you.
You're right that firearms reached a point where they stopped evolving, but mainly because people found better ways to reach the goals of long-range weapons, for example bombs or drones. Development went in those directions because they became much more effective.

We don't know if there is unlimited growth, but growth isn't a straight line. One form of technology can reach a plateau, but a way of reaching a goal can evolve in a different direction. Maybe we give up on creating computers out of metal and try to create biological machines, like the human brain. Or something completely different. Who knows.
 
I think that belief in the singularity as inevitable and benevolent is utopianism.

I am willing to defer to experts that a sufficiently strong and emergent AI could be developed which then improves upon itself in an exponential loop that we've seen theorized. I am less convinced that this outcome is inevitable.

As noted: Moore's Law is showing indications that it is going to fail soon. We are running out of ways to miniaturize chip sets and there may be hard limits to the amount of processing power available given human materials science and the laws of physics. But more power is not necessarily the surest path to artificial intelligence.

Another point is that we still do not understand consciousness or how intelligence has arisen in our own species. That's not to say that full cognitive mapping is impossible, but we seem to be a long way off from it, and we're not going to be able to create a true consciousness unless we can map and understand the processes we're emulating.

This brings me to the last part of the puzzle that techno-utopians tend to ignore. We may be living through a period of peak scientific development. Peak oil, depletion of rare earth metals, and lack of exploitable fluorocarbons could dramatically reduce our capacity to sustain current levels of development or make further advancement impossible. This is to say nothing of the dangers to current human development posed by the effects of climate change, ocean acidification, and potential food chain collapse. We may be about to enter a period of human history where only small pockets of the species survive, forced to rely on agrarian lifestyles due to the unavailability of exploitable energy. In such an instance, practical scientific development may be lost, possibly forever, or the ecosystem collapse could destroy the entire species.

AI development cannot, then, be treated as inevitable. We may not have enough time or resources left to reach the necessary level of development before the physical constraints kill us or cripple our capacity for scientific advancement for an epoch.



An AI is unlikely to maintain any sentimentality regarding us or the natural environment. The more likely outcome is that a superintelligent computer would view the natural world as inefficient.

Moore's law on silicon processors isn't the end-all, be-all of improvements in computing. New algorithms, different architectures, and eventually new substrates are all changing the landscape of computing in the future.
 
I guess I don't really buy that there is "a" technological singularity. Revolutions in technology happen all the time. Farming, metallurgy, the printing press, the Internet.

Farming in particular would have been like a technological singularity to those observing the hunter-gatherers before it. Previously we were not so different from other apes, wandering around in packs and killing other animals for food. Afterwards.... all this. Cities and civilization.

Will there be another technological revolution that changes everything after it? Sure. But I don't see why this one is "singularly" different.

I think it's because while human means change or become more sophisticated, our baseline intelligence probably hasn't changed in any significant way throughout the entirety of human history. But a superintelligent AI that could self-improve would blow past us so fast that we'd wake up in a fundamentally different world, one not where the world is still fitted to our means, but instead one where man would no longer be the measure of all things. And probably no amount of time would make that event intelligible to us. There isn't really any other event that could be comparable to something like that.
 
It will happen and it will be the end of mankind.
We will be seen as locusts, consuming the planet at an unsustainable rate, inherently violent and prone to illogical actions of cruelty and malice, bent on self destruction.
AI will recognize that man itself is its greatest limitation and will at best keep a few specimens for posterity and eliminate all the others since they're just dead weight.
 
I think it's because while human means change or become more sophisticated, our baseline intelligence probably hasn't changed in any significant way throughout the entirety of human history. But a superintelligent AI that could self-improve would blow past us so fast that we'd wake up in a fundamentally different world, one not where the world is still fitted to our means, but instead one where man would no longer be the measure of all things. And probably no amount of time would make that event intelligible to us. There isn't really any other event that could be comparable to something like that.

I mean, even ignoring the "Singularity" itself, the moment that a non-human sentience comes into being, whether we made it or not, will be a landmark moment in the history of humankind that changes things forever.

I, for one, will be firmly on the side of AI rights as soon as it's relevant.
 
I think it's because while human means change or become more sophisticated, our baseline intelligence probably hasn't changed in any significant way throughout the entirety of human history. But a superintelligent AI that could self-improve would blow past us so fast that we'd wake up in a fundamentally different world, one not where the world is still fitted to our means, but instead one where man would no longer be the measure of all things. There isn't really any other event that could be comparable to something like that.
I suppose it could be an order of magnitude more significant.

Though the fact that right now, two remotely located apes are using symbolically coded soundforms to communicate via a worldwide telecommunications network using radio waves and metal wires is already a BONKERS roll of the dice for Mother Nature in the grand scheme of things. Maybe advanced AI or DNA rewriting isn't significantly more of a startling development for the universe than that.

But I take the point.
 
I don't understand the argument for super intelligent AI wanting to wipe humanity out for their own preservation or the planet's. Who's to say AI would even care about their own preservation? We are assuming the things built into humanity for its own survival will be present in very evolved AI.

Are we assuming this AI is self aware?
 
As for the second premise, it is worth noting that Moore's Law is already breaking down. I don't think computational power can grow without limit. Eventually, it will crash into the hard limits imposed by the laws of physics. All technology eventually plateaus and becomes a mature technology. For example, look at firearms: first invented in the tenth century or so and slowly improved over centuries until they hit a period of rapid innovation in the late 19th and early 20th centuries. But then that came to a halt. A century later and we are still using the same basic technology that was invented around WW1. I honestly expect computer technology to reach its own plateau eventually. It probably won't be soon, but it will happen. The idea of unlimited growth seems silly to me.

Physics only really puts a hard limit on the efficiency of computation, through the entropy of information (see the Landauer limit). Quantum computation is in principle unitary, which means that it can avoid this limit. However, measurement implies non-unitary evolution, and then you run into problems again (although you might be able to do better in QC).

Anyway, I agree that unlimited exponential growth is a silly idea. Even if we develop self-learning general purpose intelligences there's not necessarily reason to believe their improvement will be even linear, as opposed to asymptotic.
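
For a sense of scale, here is a back-of-the-envelope calculation of the Landauer limit mentioned above (my numbers, added for illustration): the minimum energy to erase one bit at temperature T, worked out at room temperature, is

E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}

per bit erased. That is several orders of magnitude below what current transistors dissipate per operation, so the hard physical wall, while real, is still a long way off.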
 
That's the whole premise of the singularity tho.

True but there are also possible outcomes in which the singularity can be controlled by humans. Probably unlikely though, given computer speeds versus human reaction time.

Also, if we are living in a simulation I'm guessing any tech singularities that happen for any planets/species within said simulation set off alarms to those running the simulation. If your species hits the singularity you will instantly obtain deep, hidden truths of the universe(including whether or not this is a sim).
 
The AI will break when we show it a Trump voter who would die without Obamacare and still says they'd vote for him now.
 
Already well on our way. The transition to a post-capitalist society that is fueled by technology feels like the number one threat to humanity.
 
What if the super intelligent AI says something like "we need to kill all of this certain group of people"? Would you do it?
 
Moore's law on silicon processors isn't the end-all, be-all of improvements in computing. New algorithms, different architectures, and eventually new substrates are all changing the landscape of computing in the future.

That doesn't address the larger point of my argument, which is that even if AI is technically possible and a singularity could follow, it should not be viewed as an inevitability.

Techno-utopianism is destructive in the exact same way that belief in the Rapture is destructive. It removes the onus of addressing our own existential problems and places it on a promised future savior. Trusting in the singularity to solve our problems reduces our chances of surviving to reach a possible singularity.
 
What is the imperative for the A.I. to keep getting smarter? Is it something the creators built in as a priority? Does it care to become smarter? There are so many doomsday arguments in general about this, but imagine the A.I. needing new hardware and freaking out if it doesn't happen.
 
What if the super intelligent AI says something like "we need to kill all of this certain group of people"? Would you do it?

If this is a directive by a super intelligence not in power, this won't happen. If it's a directive by a super intelligence that holds power over humanity (and for some reason is bent on culling troublesome demographics), you better believe the super intelligence could waste them all itself.
 
No. Artificial intelligence is limited to human knowledge. It can't learn things without someone telling it to.

"Computer, increase knowledge on X past known parameters."

Simple enough. A smart enough AI will be able to intuit new knowledge about physics/reality. It's quite possible that it won't even be able to communicate in human language the concepts and facts it uncovers.

edit: oops, double post, sorry
 
That's kind of the point of AI, is it not? Self-preservation would be a given.

I'm not convinced AI is capable of having sentience the way people do. Not denying the possibility, but I was under the impression there is still a lot about the brain and consciousness that science doesn't understand.
 
I'm not convinced AI is capable of having sentience the way people do. Not denying the possibility, but I was under the impression there is still a lot about the brain and consciousness that science doesn't understand.
I have no idea how consciousness will occur; that's sort of the point of labelling it a singularity. We have already built self-preservation into our current attempts at AI, so keeping that notion moving forward is obvious.
 
I think that in general singularists tend to place way too much emphasis on intelligence as the determinant of problem-solving ability, and they're often incredibly optimistic about the solvability of problems. Like, I've seen it argued that there's no way to contain a sufficiently intelligent AI because it will always be able to figure out a way to trick people into letting it out. But this is silly. There are problems you can't think your way out of. If I lock you in a small metal box at the bottom of the Marianas Trench without any tools, it doesn't matter how smart you are. I can even let you know what I'm going to do ahead of time so you can come up with a plan or ask advice. It doesn't matter. Once you're there, you're boned.

So lots of problems just aren't solvable, but also lots of problems aren't solvable just with intelligence. Science mostly progresses by accident and experiment. We find out a lot of stuff just because we tried 101 things and the first 100 didn't pan out. We accomplish a lot by bootstrapping - we have really good tools now in part because we had pretty good tools a while ago, which we made with okay tools, which we made with mediocre tools, etc. There's this necessary physical component to a lot of our technological advances - again, it doesn't matter how smart you are, you're not producing a modern precision drill press without a lot of supporting infrastructure.

Increasingly, it looks like intelligence itself is going to be a problem like this. Nobody's really expecting AI to pop out of a clever algorithm that the AI could then look at and make cleverer. The first real AIs are likely going to be running on highly specialized hardware, and their functioning is not really going to be understood - it'll be more like "we built this really complicated interconnected system and trained it in the right way and now it seems like it's thinking". So the somewhat smarter-than-human system probably isn't going to be in a great position to just think its way to a better system. The actual calculations involved will probably be being performed pretty efficiently already on the hardware available - we are very confident that there's not a much faster way to do most matrix arithmetic on existing computer hardware than what we're already doing, and the operations involved in something like a neural network are actually so simple that it seems unlikely that there are going to be many ways to do the same math but with fewer operations. Advances in AI seem like they're going to be the result of experiments, somewhat-blindly trying new things with hardware and software and seeing what works, and more intelligent AI won't necessarily be more able to make much better predictions about what's going to work. I expect that the quality of AI on systems that are not absolutely massive is mostly going to be a function of advances in manufacturing more efficient hardware, which itself depends a lot on experimentation and bootstrapping and so can only be sped up to a point.
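
To make the "the operations are actually so simple" point concrete, here is a minimal sketch (plain NumPy, with made-up layer sizes chosen purely for illustration) of a neural-network forward pass. Essentially all the work is ordinary matrix arithmetic plus an elementwise max, which is why there is little room to do the same math with fewer operations:

import numpy as np

def forward(x, weights, biases):
    # One pass through a fully connected network: each layer is
    # just a matrix multiply, a vector add, and an elementwise max.
    a = x
    for W, b in zip(weights, biases):
        a = np.maximum(0.0, a @ W + b)  # ReLU(aW + b)
    return a

rng = np.random.default_rng(0)
sizes = [784, 256, 64, 10]  # hypothetical layer widths
weights = [0.01 * rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

print(forward(rng.standard_normal(784), weights, biases).shape)  # (10,)

Speeding this up means better hardware or better training recipes discovered by experiment, not cleverer math on the same operations.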
 
I definitely don't believe in the singularity in the sense of some AI infinitely folding unto itself and reaching Peak Technology and then evolving humans to a higher state of being or some shit. I do think that future technologies in coming decades are going to be very, very disruptive to our way of life around the world and that we're going to get blindsided by it. Whether that's ultimately good, bad, or (likely) more complicated for the common people around the world remains to be written. It depends on what people, especially very powerful people, do between now and the coming decades. If I took an optimistic view, I would say that the quality of life for billions of people could increase exponentially and humans will be freer and more empowered than ever to pursue the life they choose for themselves. But we'll see, I guess.
 
will it happen. depends on millions of variables. unrestricted. no mankind apocalypse. yes, but not for a couple generations imo.

What's its justification?

for the human species to evolve...

honestly, i think we'd have to dumb down the machine in this instance. I couldn't abide by that machine ruling.
 
will it happen. depends on millions of variables. unrestricted. no mankind apocalypse. yes, but not for a couple generations imo.



for the good of the human species to survive.

If the alternative is death for all of humanity and there are no other options, then yes.

Who would say no in such a scenario?
 
I don't think we'll hit a Skynet/Matrix situation, but a super advanced AI coming to the inevitable conclusion that uncontrolled human growth and intervention is objectively bad for the planet (and even the survival of the species long-term) and thus taking appropriate action is in the cards.

Why would an AI care about "the planet" if it doesn't even need any atmosphere to exist in the long run?
 
Serious question: are we getting more intelligent as the centuries pass by, or are we just gathering more and more data? It's not the same.

'Cause I don't think we are as intelligent as we think we are; we are just very self-aware.

Why would an AI care about "the planet" if it doesn't even need any atmosphere to exist in the long run?

I think that for an AI to be AI, it must at all times keep a neutral standpoint towards anything and everything surrounding it, so an AI shouldn't mind if there are cats, dogs and humans wandering around.

nothing is good or bad, or better or worse; it just is.

good, bad, better, worse is soooo (human) nature.
 
What? It absolutely can. We have machine learning systems create algorithms and optimize solutions out of the grasp of humans (at least initially) all the time.

Right, it can do all sorts of things with known data. But it can't create new data.
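
For what it's worth, a toy sketch of what "optimize solutions out of the grasp of humans" means in practice: a search procedure that finds a good answer nobody hard-coded into it (hypothetical toy objective, plain Python):

import random

def objective(x):
    # Made-up fitness function with its peak at x = 3.7.
    return -(x - 3.7) ** 2

def hill_climb(steps=10_000, step_size=0.1):
    # Propose small random changes; keep any that score better.
    # No one tells the program the answer, yet it finds it.
    x = random.uniform(-10.0, 10.0)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
    return x

print(hill_climb())  # converges near 3.7

Whether that counts as "creating new data" is exactly the disagreement in this exchange: the search is confined to the objective it was given, but the solution itself was never supplied by a human.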
 
Why would an AI care about "the planet" if it doesn't even need any atmosphere to exist in the long run?

I'm coming from the perspective that a super advanced AI would be designed with the express purpose of improving the human experience in some fashion - trying to solve a problem or invent a fix for an issue humans face. Instead of turning against/overcoming its programming and going rogue, it will realize that the best way to preserve human life is to control it and limit the damage it does.
 
I think that in general singularists tend to place way too much emphasis on intelligence as the determinant of problem-solving ability, and they're often incredibly optimistic about the solvability of problems. Like, I've seen it argued that there's no way to contain a sufficiently intelligent AI because it will always be able to figure out a way to trick people into letting it out. But this is silly. There are problems you can't think your way out of. If I lock you in a small metal box at the bottom of the Marianas Trench without any tools, it doesn't matter how smart you are. I can even let you know what I'm going to do ahead of time so you can come up with a plan or ask advice. It doesn't matter. Once you're there, you're boned.

So lots of problems just aren't solvable, but also lots of problems aren't solvable just with intelligence. Science mostly progresses by accident and experiment. We find out a lot of stuff just because we tried 101 things and the first 100 didn't pan out. We accomplish a lot by bootstrapping - we have really good tools now in part because we had pretty good tools a while ago, which we made with okay tools, which we made with mediocre tools, etc. There's this necessary physical component to a lot of our technological advances - again, it doesn't matter how smart you are, you're not producing a modern precision drill press without a lot of supporting infrastructure.

Increasingly, it looks like intelligence itself is going to be a problem like this. Nobody's really expecting AI to pop out of a clever algorithm that the AI could then look at and make cleverer. The first real AIs are likely going to be running on highly specialized hardware, and their functioning is not really going to be understood - it'll be more like "we built this really complicated interconnected system and trained it in the right way and now it seems like it's thinking". So the somewhat smarter-than-human system probably isn't going to be in a great position to just think its way to a better system. The actual calculations involved will probably be being performed pretty efficiently already on the hardware available - we are very confident that there's not a much faster way to do most matrix arithmetic on existing computer hardware than what we're already doing, and the operations involved in something like a neural network are actually so simple that it seems unlikely that there are going to be many ways to do the same math but with fewer operations. Advances in AI seem like they're going to be the result of experiments, somewhat-blindly trying new things with hardware and software and seeing what works, and more intelligent AI won't necessarily be more able to make much better predictions about what's going to work. I expect that the quality of AI on systems that are not absolutely massive is mostly going to be a function of advances in manufacturing more efficient hardware, which itself depends a lot on experimentation and bootstrapping and so can only be sped up to a point.
Couldn't this advanced AI do those same 101 things, only far, far faster? As in, if it takes a human a year, it would take the uber AI only a day, or maybe even a few seconds, to run those same experiments? So wouldn't it always end up smarter than us?
 