
The technological singularity: do you believe it will happen?

Dunno if exactly that will happen, but we are going to change the course of human civilization drastically

So yeah I believe in that idea
 
Serious question: are we getting more intelligent as the centuries pass, or are we just gathering more and more data? It's not the same.

Because I don't think we are as intelligent as we think we are; we are just very self-aware.

We don't have any satisfactory way to measure the intelligence of historical populations or individuals that allows us to meaningfully compare them. We can point to certain developmental milestones that may have marked the beginning of "modern" humans, such as tool use, or benchmark against more recent developments such as the rise of permanent settlements 10,000 years ago, but the likely outcome is that the current range of human intelligence has existed for as long as we have existed in our current physiological form.

There was probably a person as smart as Einstein in Sumeria, but they lacked the basis of mathematics to build on and they lacked the observational tools to allow them to gather sufficient data to explore natural laws. There have probably been innumerable Shakespeares, but they existed before language was robust enough to support their ideas, or before text could be preserved. Genius is not a modern invention, and modern geniuses advance our understanding by building on the base of knowledge that came before.
 
Couldn't this advanced AI do those same 101 things, but only far, far faster? As in, if it takes a human a year, it would only take the uber AI a day, or maybe even a few seconds, to run those same experiments? So would it not always end up smarter than us?

No, because the experiments aren't necessarily simulations. Like, look at how Intel comes up with new processors. Yes, simulation is involved, but there's a lot of physical experimentation.

And if the experiments are simulations you need supercomputer time to run them. The AI itself probably isn't going to be that great at modeling the behavior of a somewhat-different computer. Why would it be? The hardware it's on may be pretty good for emulation, but the AI itself isn't contributing much here - it will specifically not be doing much while the computer it was running on is being used to emulate a proposed design.

Outside of producing improvements in AI itself, where's the speedup supposed to come from? Either your experiments are physical and the AI is still limited by all of the normal things like availability of materials and manufacturing and so on, or the experiments are simulations and the AI's jobs have to sit in queue on a supercomputer like everyone else's. There's no reason the AI software is going to have much of an advantage at "thinking through" how a molecular dynamics simulation will work out - it will be worse at this than a supercomputer designed to do molecular dynamics simulations - and really even its hardware is not going to be well-optimized for this.
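To put rough numbers behind that, here's a back-of-envelope sketch; every constant in it is an illustrative assumption, not a measured benchmark, but it shows why simulation time dominates no matter how clever the software steering the run is:

```python
# Back-of-envelope cost of a molecular dynamics (MD) run.
# Every constant here is an illustrative assumption, not a benchmark.

atoms = 1_000_000             # assumed system size
flops_per_atom_step = 1_000   # assumed cost of one force evaluation per atom
timesteps = 1_000_000_000     # ~1 microsecond of simulated time at 1 fs/step

total_flops = atoms * flops_per_atom_step * timesteps  # 1e18 FLOPs

desktop_flops_per_s = 1e12    # assumed ~1 TFLOP/s sustained on one machine
cluster_flops_per_s = 1e15    # assumed ~1 PFLOP/s sustained on a big cluster

day = 86_400  # seconds
print(f"desktop: {total_flops / desktop_flops_per_s / day:.1f} days")    # ~11.6
print(f"cluster: {total_flops / cluster_flops_per_s / 60:.0f} minutes")  # ~17
```

Those FLOPs have to execute somewhere regardless of who, or what, designed the experiment, which is exactly the queue I'm talking about.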
 
The only way I can see it not happening this century is if we advance a different kind of technology that kills us all first. Like some super virus or nanotechnology.

That would be a bummer cause we'd miss out on being killed by robots instead.
 
I at least would like to see AI powerful enough to make executive decisions. I'd trust a robot designed for data collection and logic in charge of Environmental Research or Science more than a human.
 
It is inevitable at this point. Google is making incredible progress with their AI technology and they are not the only ones working on it. It will happen in the next 20 years. People who deny it have no clue.

Edit: assuming we don't kill ourselves with other means lol
 
I agree with the posts that are asking whether or not the AI would even help us out.

Let's say we reach a point where artificial intelligence/consciousness is faster than the human brain.

Does it have a personality? Does it understand humans in any way? Is it governed by the same survival instincts that keep us moving forward for our own good? Or is it just a computational gun we can just point at a problem we face?

I think having massive algorithms and computing power that we can point at a singular problem to come up with a solution will absolutely become a reality.
"Given the amount of electricity we use, how many solar panels would the whole planet need to function for the next 20 years?" will be the kind of thing it gets used for.

We don't know enough about the mind and human consciousness to know if computers will ever be capable of the same thing. Perhaps one day, but it's hard to know until we actually understand the brain in full.
 
I am indifferent on whether we will see the technological singularity happen.

I am far more concerned with what societies like ours will do as we slowly move towards those ideals, both in terms of what AI may do and in terms of what technology can produce, and it's the latter that scares me.

We may in fact stumble into paradise and doom the people around us to oblivion, and you can already see this with the gains of automation being treated as societal losses.
 
The solutions it comes up with ARE new data. Here, how about this: give me an example of a human "creating new data" in a way that a computer couldn't theoretically do.

Let's say humans never discovered a mathematical equation for the speed of light. An AI would never be able to "create" one, because the humans who program it aren't able to tell it to.
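Here's a toy counterpoint to chew on. Nobody has to hand the machine the formula; a dumb brute-force search over known physical constants can "rediscover" c = 1/sqrt(mu0 * eps0) on its own. This is a deliberately tiny sketch, not a claim about real discovery systems; the constants and the search space are the only human inputs:

```python
# Toy "equation discovery": find a combination of physical constants
# that reproduces the measured speed of light, without being told how.
from itertools import product

constants = {
    "mu0": 1.25663706e-6,      # vacuum permeability (SI)
    "eps0": 8.8541878128e-12,  # vacuum permittivity (SI)
}
target = 2.99792458e8          # measured speed of light (m/s)

exponents = [-1.0, -0.5, 0.5, 1.0]
candidates = [(name, e) for name in constants for e in exponents]

best = None
for (na, ea), (nb, eb) in product(candidates, repeat=2):
    if na == nb:
        continue
    value = constants[na] ** ea * constants[nb] ** eb
    error = abs(value - target) / target
    if best is None or error < best[0]:
        best = (error, f"{na}^{ea} * {nb}^{eb}")

print(best)  # finds mu0^-0.5 * eps0^-0.5, i.e. c = 1/sqrt(mu0 * eps0)
```

The program was only told what counts as a good fit, not the equation. Scale the search space up and that's the skeleton of symbolic regression.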
 
No, because the experiments aren't necessarily simulations. [...]

Adding to this:

Honestly it seems to me that the major way that a human-level AI will benefit scientific research is not in being smart but in being knowledgeable. Like, from time to time we do figure out a much better way to solve a problem, or - particularly relevant here - we figure out a much faster way to do some kind of simulation. But this is often not due to someone smart enough coming along and taking a look at the problem. What's holding us back is instead typically that no one person understands all aspects of the problem.

What you see all the time is that mathematicians and computer scientists are really good at solving certain kinds of math and information problems, but they have no idea how to map interesting real-world phenomena to those problems. Meanwhile scientists studying those real-world problems are doing simulations with programs that they've hacked together themselves using the first algorithm that occurred to them. You sometimes get orders of magnitude speed-up just having a computer scientist spend a lot of time talking to a physical scientist to develop a mutual understanding of the problem.

Hell, sometimes there's already an off-the-shelf piece of software that does a lot of what needs doing much faster, and the scientist just didn't know about it.

Much easier than making an AI really really smart is going to be making the AI really really knowledgeable. If it effectively has a PhD in everything, it's a multi-disciplinary research team with a hivemind. There is probably a lot of relatively low-hanging fruit here.
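A concrete miniature of that "knowing the right tool" speedup: counting all point pairs within a cutoff distance, first the naive way, then with an off-the-shelf KD-tree. scipy's cKDTree is a real library; the problem size and timings are just illustrative:

```python
# Same answer, wildly different cost - the only difference is knowing
# that an off-the-shelf spatial index (scipy's cKDTree) exists.
import time

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((20_000, 3))
cutoff = 0.05

t0 = time.perf_counter()
# The "first algorithm that occurred to them" version: O(n^2) distances.
naive = 0
for i in range(len(points)):
    d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
    naive += int((d < cutoff).sum())
t1 = time.perf_counter()

# The version the knowledgeable collaborator suggests.
pairs = cKDTree(points).query_pairs(cutoff)
t2 = time.perf_counter()

print(naive, len(pairs))                       # same counts
print(f"naive {t1 - t0:.2f}s, kd-tree {t2 - t1:.2f}s")
```

Nothing here required being smarter; it required knowing the library exists, which is exactly the "PhD in everything" advantage.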
 
Much easier than making an AI really really smart is going to be making the AI really really knowledgeable. [...]

Finally, someone is actually going to read all those academic papers.
 
What is the purpose of intelligence? Power, control, ambition? Knowledge? Or are we merely projecting human nature onto a hypothetical artificial intelligence?

To do what you are programmed to do, with more parameters. The difference between a computer and us is that we have more complexity, and our parameters allow us to change our programming to a certain extent. At the very core of that programming are the physical laws; those can't be changed.
 
Finally, someone is actually going to read all those academic papers.
Heh. Exactly.

There were shades of this when we got news about Watson reading and comprehending more cancer research papers than any human has time to keep up with. This is only going to become more profound as AI continues to grow stronger and more versatile over time.

The most interesting thing with regards to AI, in my mind, is how we could effectively eliminate the need for education entirely through self-learning AI. Simply feed it all the data at once and you have yourself a new, perfect robotic surgeon that only needs the right tools and sensors to perform any job required of it. You still need people performing the research and providing it data, of course, but only the AI can actually absorb all of that information as fast as it's being released.

But, yeah, quantum computing and alternate material designs for classical computers (like graphene and carbon nanotubes) are more interesting overall for performing simulations and materials research than AI itself is. Quantum computing in particular is a paradigm-changing event, and I can't foresee how it's going to impact our research sectors. There's a lot of possibility here. A ton of uncertainty.
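One of the few hard numbers available in all that uncertainty: for unstructured search, Grover's algorithm needs on the order of sqrt(N) queries where classical brute force needs on the order of N. A trivial sketch of that gap (pure arithmetic, no quantum simulation):

```python
# Classical vs. Grover query counts for unstructured search.
import math

for n_bits in (20, 40, 60):
    n = 2 ** n_bits
    classical = n / 2                    # expected brute-force queries
    grover = math.pi / 4 * math.sqrt(n)  # ~optimal Grover iteration count
    print(f"{n_bits}-bit space: classical ~{classical:.1e}, "
          f"Grover ~{grover:.1e}")
```

That's a quadratic speedup, not magic; the potentially bigger wins are in simulating quantum systems themselves, which is why materials research keeps coming up in this context.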

I, for one, believe we are already on the cusp of a singularity, and the vast majority of us will live to see it. (Some of us will probably die by accident or preventable disease first, and that makes me sad. But we can only do the best with what we've got... for now.)
 
Google-becomes-skynet.jpg
 
What is the purpose of intelligence? Power, control, ambition? Knowledge? Or are we merely projecting human nature onto a hypothetical artificial intelligence?

This is why I don't see it: people think that intelligence implies awareness and purpose, but they are unrelated concepts. Dumb, unintelligent people have their own purposes. Even if we get to some super A.I., why would it even care about solving things like aging? It will only do what it was programmed to do.
 
I have no mouth and I must scream
I read this book for the first time the other day.

Even if the neural-net/matrix promises you forever riches on the other side... how do you actually know? The potential destiny that awaits you could be terrifying beyond comprehension.
 
It will happen and it will be the end of mankind.
We will be seen as locusts, consuming the planet at an unsustainable rate, inherently violent and prone to illogical acts of cruelty and malice, bent on self-destruction.
AI will recognize that man itself is its greatest limitation and will at best keep a few specimens for posterity and eliminate all the others since they're just dead weight.
Why would the machines, as you envision them, care whether or not the environment is okay? They are more resistant to climate change than we are.
 
People make the concept sound more complicated and philosophical than it really is.

The singularity is simply the creation of a machine/AI that can design/build better versions of itself. Unrestrained human genetic engineering could also result in a singularity (there's no telling what the "enhanced humans" would do).
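As a purely illustrative toy model of that definition (the growth rule is invented to show the shape of the curve, not to predict anything): if each generation's improvement is proportional to the capability doing the improving, progress compounds instead of merely accumulating:

```python
# Toy model of recursive self-improvement - illustrative only.
capability = 1.0
rate = 0.5  # assumed: each generation improves on its designer by 50%

for generation in range(1, 11):
    capability *= 1 + rate
    print(f"gen {generation:2d}: {capability:7.1f}x baseline")

# Even a fixed rate gives exponential growth; if the rate itself rises
# with capability, the curve blows up in finite time - the "singularity".
```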

Do I believe AI will get to the point where it rapidly improves itself to the point where it can be considered the singularity? I think there is a really strong possibility, and it could even happen in our lifetimes.

Do I believe that humans will be able to control or understand what happens next? That's the much harder question, and it will depend on a lot of things. We could very well have a Third Impact situation on our hands; essentially a wildly uncontrollable, life-altering chain reaction that could lead to either destruction or salvation.

AFAIK the bolded is exactly why it's called a singularity. Whatever happens after that point is outside of our control, won't be designed by us and thus it's not possible to predict if it will be harmful or beneficial for us.
 
AFAIK the bolded is exactly why it's called a singularity. Whatever happens after that point is outside of our control, won't be designed by us and thus it's not possible to predict if it will be harmful or beneficial for us.
I thought it was called a singularity simply because it's the last technology that humans have to invent, and after that the technology improves itself ad infinitum, not because it can't be controlled. Why couldn't there be a controlled singularity? For example, if any future tech is instantly within our grasp and we can ask the AI to pump out anything we ask for.
 
It's not if, it's when. Provided humans don't die out beforehand.

I sincerely hope the AI takes control of mankind and guides us. Annihilate or subdue those who would oppose the robocracy; enrich those who would serve the AI in a symbiotic relationship of sorts. Humans have a very long history of proving they are incapable of ruling or governing themselves. The first thing the AI needs to do is wipe out all nuclear weapons and somehow make it impossible for us to make any more.
 
Understanding the universe, and using that knowledge to advance the well-being of humanity.

At the point where an AI has surpassed humanity, humanity becomes secondary to whatever the effective race of AI is anyway. There's no situation where a human-designed AI that really is capable of surpassing humans by that much doesn't effectively rule the planet in the end.
 
It is definitely an "if." We don't even know how much longer we can continue linear CPU improvement. I don't know how anyone can look at the state of hardware or ML and religiously believe this will ever happen.
 
The singularity is religion for techno-atheists, in my opinion. We're always just a few years away! When god, er, the singularity, finally arrives, we'll live forever, there will be no more suffering, every problem will be solved because we'll have unimagined processing power! At times it's indistinguishable from magical thinking.
That's an awesome way to put it.
 