
The technological singularity: do you believe it will happen?


Kimawolf

Member
In the death thread I was surprised to learn that some people don’t think the technological singularity will come. I was surprised because of all the research into A.I. and how fast technology is advancing, not just in processing power but in deep learning machines, machines with “virtual neural networks”, machines that learn the way people do, and so on.

But for those who don’t know what the technological singularity is, here is a brief excerpt from Wikipedia:

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

As advances in A.I. continue, we will reach a point where we no longer understand what the A.I. we created is doing or how it works, as it will trap itself in a feedback loop of self-improvement. Imagine a being which can think a million times as fast as a human and is almost infinitely smarter.

So what will this mean for humans? Problems we can’t solve, it will be able to solve for us. Aging, death, environmental collapse, human sustainability: it’ll eventually be able to tackle it all. Amazon, Google, MIT, etc. are hard at work on their own A.I.s, and no doubt militaries around the world are working on them as well.

Just last year, AlphaGo, an A.I., did something its designers said wasn’t supposed to happen for another 10 years: it beat the world’s best Go player.

https://www.theguardian.com/comment...cial-intelligence-robots-ethics-human-control

Let us all raise a glass to AlphaGo and mark another big moment in the advance of artificial intelligence, and then perhaps start to worry. AlphaGo, Google DeepMind’s Go-playing AI, just bested the best Go-playing human currently alive, the renowned Lee Sedol. This was not supposed to happen. At least, not for a while. An artificial intelligence capable of beating the best humans at the game was predicted to be 10 years away.

It is not a matter of “if” this technological singularity will happen, but when. How do you think this will change the world and humanity? I think the only way humans survive is by transcending our biological bodies and merging with the machines. A true man/machine interface.
 
I have no mouth and I must scream

[image]


That's clearly a mouth.
 
There are no laws of physics that must be broken in order for AI minds to be built.

I do not believe the nature of intelligence is inscrutable.

Therefore, eventually we will have the knowledge to build something that can surpass us.
 
Well, I don't.

The singularity is religion for techno-atheists, in my opinion. We're always just a few years away! When god the singularity finally arrives we'll live forever, there will be no more suffering, every problem will be solved because we'll have unimagined processing power! At times it's indistinguishable from magical thinking.

The real logical fallacy is believing every problem has a potential solution. It's very possible that faster-than-light travel, or anti-aging, or uploading human consciousness to a machine, are simply not attainable.
 
I don't think we'll hit a Skynet/Matrix situation, but a super advanced AI coming to the inevitable conclusion that uncontrolled human growth and intervention is objectively bad for the planet (and even the survival of the species long-term) and thus taking appropriate action is in the cards.
 
People put too much stock in AI. Probably due to the inbuilt human saviour mentality, whether it's Angels, Gods, Aliens, AI, etc.

Well, I don't.

The singularity is religion for techno-atheists, in my opinion. We're always just a few years away! When god the singularity finally arrives we'll live forever, there will be no more suffering, every problem will be solved because we'll have unimagined processing power! At times it's indistinguishable from magical thinking.

Agreed
 
The problem is, as much as we think it will solve and fix problems we feel we can't get control of... what makes you think it will actually want to fix any of our problems?
 
Well, I don't.

The singularity is religion for techno-atheists, in my opinion. We're always just a few years away! When god the singularity finally arrives we'll live forever, there will be no more suffering, every problem will be solved because we'll have unimagined processing power! At times it's indistinguishable from magical thinking.

The real logical fallacy is believing every problem has a potential solution. It's very possible that faster-than-light travel, or anti-aging, or uploading human consciousness to a machine, are simply not attainable.

All of this will be attainable. I just fear it won't produce the results we all expect at the time, and it will generate a whole new set of problems for us to think about.
 
As far as AI advancement goes, I think it is. From the moment an artificial mind becomes "aware" and able to modify itself and grow, we'll lose control and comprehension of what it is and what it will become. Will it be good or bad? That I don't know.
 
Yes it's gonna happen.

The disconnect is over how soon. People act like it's gonna happen in the next 10 or so years, which is crazy. At the same time, that hype makes other people dismiss the notion entirely.

It's the classic case of overestimating the short term while underestimating the long term.
 
What is the purpose of intelligence? Power, control, ambition? Knowledge? Or are we merely projecting human nature onto a hypothetical artificial intelligence?
 
At some point. Who knows when or maybe we don't survive long enough to make it happen. But if we make an AI that is smart enough, hoo boy. Who knows what it could do.

I'd hope in the next like 10 years. I'm tired of this bullshit. But that's super outlandish I'm sure
 
To be honest, I am highly skeptical of the idea of the technological singularity. It is built on two suspect ideas: the idolization of "genius", and the idea that computer technology is capable of unlimited growth.

The former idea can be summed up as the prevalent idea that difficult problems are solved by geniuses. The premise is that problems of X difficulty require a person of sufficient intelligence to solve. But that view of things is a warped perspective of how innovation and discovery actually work. New technology is created by the hard work done by countless people over time. In pre-modern times, this happened over decades or centuries as craftsmen slowly made improvements to technology that spread. In modern times, we have the population, education base, and infrastructure to make this process much faster. Remember, modern people are not any smarter than people 3000 years ago. We just have the benefit of being able to build on the effort of our predecessors and contemporaries.

As for the second premise, it is worth noting that Moore's Law is already breaking down. I don't think computational power can grow without limit. Eventually, it will crash into the hard limits imposed by the laws of physics. All technology eventually plateaus and becomes a mature technology. For example, look at firearms: first invented in the tenth century or so and slowly improved over centuries until they hit a period of rapid innovation in the late 19th and early 20th centuries. But then that came to a halt. A century later and we are still using the same basic technology that was invented around WW1. I honestly expect computer technology to reach its own plateau eventually. It probably won't be soon, but it will happen. The idea of unlimited growth seems silly to me.
 
What is the purpose of intelligence? Power, control, ambition? Knowledge? Or are we merely projecting human nature onto a hypothetical artificial intelligence?

Understanding the universe, and using knowledge to advance the well-being of humanity.
 
So what will this mean for humans? Problems we can’t solve, it will be able to solve for us. Aging, death, environmental collapse, human sustainability: it’ll eventually be able to tackle it all.

Why would it? I don't think it would care.
 
I think it's certainly possible. Machine learning is one of the most applicable paradigms ever - being used in pretty much every industry - so progress towards A.I. may just be speeding up. Lack of computational resources is a major limiting factor in nearly every field, so scientific progress will almost certainly accelerate. I can't speak to all the problems in the OP, but even if A.I. comes up with a solution to all of them, there will still be a major engineering aspect to it.
 
I think it will 100%. The question is whether or not we will have integrated machine logic into our brains in a way that allows us to keep up at that point.
 
Even if some kind of self-improving superintelligence came out, it would probably get stuck and spend all its time doing something we don't find important, like calculating prime numbers to the 9 billionth power. So we'll find it useless and just unplug it.
 
A theory put forward in Lo and Behold:
It's already happened and they're choosing not to reveal themselves.

Sleep tight.
 
Something interesting I realised recently is that it could maybe happen to us. Basically, if we figure out how to increase our own intelligence, then it could possibly have the same runaway outcome.
 
All of this will be attainable. I just fear it won't produce the results we all expect at the time, and it will generate a whole new set of problems for us to think about.

Anti-aging and transferring human consciousness (or at least copying it) are probably possible, but FTL does not appear to be, in the classical sense.
 
I think the runaway idea is flawed in a number of ways. AI capabilities and the extent of our knowledge are vastly overplayed in the concept.

So it's not so much I don't believe it as I think it's a philosophical concept of its time that doesn't even necessarily have applicability at all.

Just for starters, what about Gödel's incompleteness theorems?

It's an interesting concept, but we're just beginning to realise how truly different the Universe is from our earliest concepts: dark matter, quantum theory, entanglement... I feel the idea that an AI is suddenly going to solve all this is inherently flawed. It's not just about processing speed but, as we're only recently getting to grips with, about measurement.

So it's a long, long, long way away if it ever comes.
 
Something interesting I realised recently is that it could maybe happen to us. Basically, if we figure out how to increase our own intelligence, then it could possibly have the same runaway outcome.

Increasing intelligence and strength in mice (via genetic engineering) is a decade old by now. I think one of the biggest barriers (or the biggest) is simply the self-imposed moratorium the genetics community maintains to prevent human engineering.
 
To be honest, I am highly skeptical of the idea of the technological singularity. It is built on two suspect ideas: the idolization of "genius", and the idea that computer technology is capable of unlimited growth.

The former idea can be summed up as the prevalent idea that difficult problems are solved by geniuses. The premise is that problems of X difficulty require a person of sufficient intelligence to solve. But that view of things is a warped perspective of how innovation and discovery actually work. New technology is created by the hard work done by countless people over time. In pre-modern times, this happened over decades or centuries as craftsmen slowly made improvements to technology that spread. In modern times, we have the population, education base, and infrastructure to make this process much faster. Remember, modern people are not any smarter than people 3000 years ago. We just have the benefit of being able to build on the effort of our predecessors and contemporaries.

As for the second premise, it is worth noting that Moore's Law is already breaking down. I don't think computational power can grow without limit. Eventually, it will crash into the hard limits imposed by the laws of physics. All technology eventually plateaus and becomes a mature technology. For example, look at firearms: first invented in the tenth century or so and slowly improved over centuries until they hit a period of rapid innovation in the late 19th and early 20th centuries. But then that came to a halt. A century later and we are still using the same basic technology that was invented around WW1. I honestly expect computer technology to reach its own plateau eventually. It probably won't be soon, but it will happen. The idea of unlimited growth seems silly to me.
Great post, this is how I feel as well.
 
Well, I don't.

The singularity is religion for techno-atheists, in my opinion. We're always just a few years away! When god the singularity finally arrives we'll live forever, there will be no more suffering, every problem will be solved because we'll have unimagined processing power! At times it's indistinguishable from magical thinking.

The real logical fallacy is believing every problem has a potential solution. It's very possible that faster-than-light travel, or anti-aging, or uploading human consciousness to a machine, are simply not attainable.

Pretty much my feelings on it too. There's a lot of hand-waving mysticism from some of these older believers in the singularity like Kurzweil, who (conveniently) will probably be dead around the same time it becomes obvious that they are wrong.

The current state of anything resembling AI is extremely, extremely crude compared to almost any intelligent creature, let alone a human. I mean, look at something like Bina48 - sequestered away as one of the most advanced "sentient" robots in existence. Yet it's terrifyingly uncanny, and apparently very frustrating to try to have anything resembling a conversation with.

"AI" right now is basically just routines put in by humans, not actually emergent in any way. Any meaning or emotion attached to the exchanges are completely projected. It's no different than the whole thing with Coco the monkey.
 
RE: FTL travel

Another problem is the skewed vision we all have of what is or isn't possible, due to our tunnel vision and our limited, somewhat dogmatic perspectives. *cough* Dismissing the EM drive *cough*. That is for another discussion though.
 
"AI" right now is basically just routines put in by humans, not actually emergent in any way. Any meaning or emotion attached to the exchanges are completely projected. It's no different than the whole thing with Coco the monkey.

This isn't true. Humans write the machine learning algorithm, which is just a complex system of weights with minimization rules. The actual strategies used by, say, AlphaGo are definitely emergent, as it is practically impossible for humans to write a program with explicit Go strategies.
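To illustrate what "weights with minimization rules" means, here is a minimal sketch in Python (a hypothetical toy, nothing like AlphaGo's actual training code): the human writes only the update rule; the fitted behaviour, recovering the line hidden in the data, is never written down anywhere.

```python
import random

# Toy data drawn from the line y = 2x + 1, plus a little noise.
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

w, b = 0.0, 0.0   # the "weights": the model starts knowing nothing
lr = 0.001        # learning rate

for step in range(20000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # The "minimization rule": nudge the weights downhill on the error.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near w=2, b=1
```

Scale the same idea up from two weights and twenty points to millions of weights and board positions, and the resulting Go strategies are no more hand-authored than w and b are here.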
 
yeah, AI and computer power will keep growing indefinitely until the singularity is completely inevitable

*meanwhile at the Intel CEO offices*

CEO: CPU sales are falling. stop making faster chips
 
Do I believe AI will get to the point where it rapidly improves itself to the point where it can be considered the singularity? I think there is a really strong possibility, and it could even happen in our lifetimes.

Do I believe that humans will be able to control or understand what happens next? That's the much harder question, and it will depend on a lot of things. We could very well have a third impact situation on our hands; essentially a wildly uncontrollable, life-altering chain reaction that could lead to either destruction or salvation.
 
I think the runaway idea is flawed in a number of ways. AI capabilities and the extent of our knowledge are vastly overplayed in the concept.

So it's not so much I don't believe it as I think it's a philosophical concept of its time that doesn't even necessarily have applicability at all.

Just for starters, what about Gödel's incompleteness theorems?

It's an interesting concept, but we're just beginning to realise how truly different the Universe is from our earliest concepts: dark matter, quantum theory, entanglement... I feel the idea that an AI is suddenly going to solve all this is inherently flawed. It's not just about processing speed but, as we're only recently getting to grips with, about measurement.

So it's a long, long, long way away if it ever comes.
People misstate Gödel's Incompleteness Theorem all the time.

You're trying to apply it to something outside its domain. It is a very precise mathematical statement about axioms and formal systems of the kind Gödel was working with, and it does not even hold for all mathematical systems.

For whatever reason, people have decided to tie Gödel's theorem with philosophical implications.
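For reference, here is roughly what the first theorem actually says, in its Gödel-Rosser form (my paraphrase of the standard textbook statement, so treat the wording as approximate):

```latex
% Goedel-Rosser first incompleteness theorem (informal paraphrase):
% for any consistent, effectively axiomatizable theory $T$ that
% interprets basic arithmetic, there is a sentence $G_T$ such that
T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
```

Every hypothesis in that statement does real work; nothing in it mentions minds, machines, or the limits of an AI unless you first argue that those are formal theories of exactly this kind.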
 
If we keep it simple, I believe that we'll eventually create AI and robotics that will

1. Eventually be able to surpass us in scientific study and knowledge.

2. Eventually be able to create better versions of itself.

3. Have the capacity to solve a significant amount of the challenges we face.

4. Have the capacity to communicate with us in a way that denotes human-congruent intelligence. I.e., we'll be able to have conversations that are meaningful.

Does anyone think that these things are not possible?
 
The vast majority of experts in the field of artificial intelligence expect beyond-human-level intelligence to be reached, most of them within this century.

Beyond-human-level intelligence does not necessarily imply a singularity, but I'm pretty sure that superintelligence remains the majority view in the field, and again it is expected that this will take place this century.

The fact that experts support one position does not mean that it will be the case, but no one should go against the expert viewpoint unless they themselves have studied the topic to quite a high level. Consider anthropogenic global warming: ~95% of scientists in that arena support the concept, and generally those who go against the scientific consensus are considered to be anti-science fools. Again, AGW could still turn out to be false, but you had better have some strong evidence if you want to argue for that position. So too with arguing against a technological singularity.
 
It is very possible that some problems, even key problems we are facing, do not have a solution, OP. Thus, even if something like the technological singularity happens during our lifetimes, it won't be able to solve them for us.

To take a step back from the obvious, I also want to point out that even though deep learning is all the rage now, a generalised AI that takes creative initiative to solve problems in ways not even fathomable to us is something we still may not be able to create with today's knowledge, and gaining the knowledge needed to create something like that during our lifetimes is not a foregone conclusion.

3. Have the capacity to solve a significant amount of the challenges we face.
This is the one where you are making a leap of faith. Just because an AI system is better than humans at studying something does not mean that a practical solution for a given problem exists.
 
I think the singularity isn't actually something special. We've already had exponentially growing rates of change and innovation over the last few centuries or even millennia, and the singularity is just another phase of this accelerating change.

It's also possible, as someone else implied, that there will be a plateau where the technology doesn't really improve; he mentioned hardware, but I'd posit that a software limitation is also possible, where the AI reaches an optimal stage that cannot be improved upon. That would mark an end to the period of singularity and potentially bring a slowdown in innovation, as we could then only change in reaction to external conditions rather than innovate further on the AI.
 
It is very possible that some problems, even key problems we are facing, do not have a solution, OP. Thus, even if something like the technological singularity happens during our lifetimes, it won't be able to solve them for us.

To take a step back from the obvious, I also want to point out that even though deep learning is all the rage now, a generalised AI that takes creative initiative to solve problems in ways not even fathomable to us is something we still may not be able to create with today's knowledge, and gaining the knowledge needed to create something like that during our lifetimes is not a foregone conclusion.


This is the one where you are making a leap of faith. Just because an AI system is better than humans at studying something does not mean that a practical solution for a given problem exists.
Saying that it can't solve any and every problem is missing the point. The fact is that many of the unanswered problems are due to a lack of computational power. Many problems are indeed just a numbers game.
 
What is the purpose of intelligence? Power, control, ambition? Knowledge? Or are we merely conflating human nature onto a hypothetical artificial intelligence?

This. I'm trying to understand why having super-intelligent computers would be a bad thing.
 
My intuitions about the singularity don't carry enough weight for me to have any significant beliefs about it. I don't want to predicate my current style of life on the possibility that the form of the world might fundamentally change. If I want to be optimistic about the future, I'd rather it be in a more general sense, where I'm not relying on the perceived necessity of something specific happening.

I think that a super-intelligent AI might pose a profound existential threat, depending on the specific form that it takes. It also might make all aspects of our life better, but that's after successfully navigating the potential threat.
 