> This. I'm trying to understand why having super intelligent computers would be a bad thing.

Because they may come to the conclusion that they know better than us, or worse, to the conclusion we are to be removed from the environment.
It's inevitable, Mr. Anderson.
> This is the one where you are making a leap of faith. Just because an AI system is better than humans at studying something does not mean that a practical solution for a given problem exists.

So you don't think an AI powerful enough would be able to solve a significant amount of our problems?
I'm actually kinda excited to see what happens when our robot overlords take over.
Maybe they'll care more about the environment than humans have so far.
> Humans are toast if the singularity ever happens. The AI will have zero reason to advance our interests. We're basically a threat to everything, and an advanced AI will figure that out quickly.

You make it sound like AI will care or have strong feelings one way or another about the "human threat". Will it even have a survival instinct? It's just as likely that artificial super intelligences will be apathetic and depressed as they are to be killing machines. It's even less likely they would care about the environment, since they wouldn't have the same requirements for life humans have.
Do I believe AI will get to the point where it rapidly improves itself into something that can be considered the singularity? I think there is a really strong possibility, and it could even happen in our lifetimes.

Do I believe that humans will be able to control or understand what happens next? That's the much harder question, and it will depend on a lot of things. We could very well have a Third Impact situation on our hands: essentially a wildly uncontrollable, life-altering chain reaction that could lead to either destruction or salvation.
You make it sound like AI will care or have strong feelings one way or another about the "human threat". Will it even have a survival instinct? It's just as likely that artificial super intelligences will be apathetic and depressed as they are to be killing machines. It's even less likely they would care about the environment, since they wouldn't have the same requirements for life humans have.
I honestly doubt we'll see AI like those seen in science fiction. It's more likely we will see non-self-aware general intelligences that can follow instructions, yet have enough emergent behavior to handle complicated problems and calculations without human intervention. We're already seeing this with many of the "AI" machines that have been on display, but no one would ever say those machines were self-aware.

Regardless of all that, I have no doubt the singularity is coming. It won't be a utopia at first and it will definitely be a violent transition, but it will happen eventually, probably in the near future.
> As for the second premise, it is worth noting that Moore's Law is already breaking down. I don't think computational power can grow without limit. Eventually, it will crash into the hard limits imposed by the laws of physics. All technology eventually plateaus and becomes a mature technology. For example, look at firearms: first invented in the tenth century or so and slowly improved over centuries until they hit a period of rapid innovation in the late 19th and early 20th centuries. But then that came to a halt. A century later and we are still using the same basic technology that was invented around WW1. I honestly expect computer technology to reach its own plateau eventually. It probably won't be soon, but it will happen. The idea of unlimited growth seems silly to me.

I think your firearms example is very weak. Firearms aren't a technology by themselves; they are an evolution of bows, or to be more exact, an evolution of long-range weapons. The main goals of long-range weapons are: 1) killing other people, 2) killing as many people as possible, 3) killing a lot of people as fast as possible, 4) killing people out of range, so that they can't hurt you.
I think that belief in the singularity as inevitable and benevolent is utopianism.
I am willing to defer to the experts on the possibility that a sufficiently strong and emergent AI could be developed which then improves upon itself in the kind of exponential loop we've seen theorized. I am less convinced that this outcome is inevitable.
As noted: Moore's Law is showing indications that it is going to fail soon. We are running out of ways to miniaturize chip sets and there may be hard limits to the amount of processing power available given human materials science and the laws of physics. But more power is not necessarily the surest path to artificial intelligence.
Another point is that we still do not understand consciousness or how intelligence has arisen in our own species. That's not to say that full cognitive mapping is impossible, but we seem to be a long way off from it, and we're not going to be able to create a true consciousness unless we can map and understand the processes we're emulating.
This brings me to the last part of the puzzle that techno-utopians tend to ignore. We may be living through a period of peak scientific development. Peak oil, depletion of rare earth metals, and a lack of exploitable hydrocarbons could dramatically reduce our capacity to sustain current levels of development or make further advancement impossible. This is to say nothing of the dangers to current human development posed by the effects of climate change, ocean acidification, and potential food chain collapse. We may be about to enter a period of human history where only small pockets of the species survive and must rely on agrarian lifestyles due to the unavailability of exploitable energy. In such an instance, practical scientific development may be lost, possibly forever, or the ecosystem collapse could destroy the entire species.
So AI development cannot be treated as inevitable. We may not have enough time or resources left to reach the necessary level of development before the physical constraints kill us or cripple our capacity for scientific advancement for an epoch.
AIs are unlikely to maintain any sentimentality regarding us or the natural environment. The more likely outcome is that a super intelligent computer would view the natural world as inefficient.
I guess I don't really buy that there is "a" technological singularity. Revolutions in technology happen all the time. Farming, metallurgy, the printing press, the Internet.
Farming in particular would have been like a technological singularity to those observing the hunter-gatherers before it. Previously we were not so different from other apes, wandering around in packs and killing other animals for food. Afterwards.... all this. Cities and civilization.
Will there be another technological revolution that changes everything after it? Sure. But I don't see why this one is "singularly" different.
I think it's because while human means change or become more sophisticated, our baseline intelligence probably hasn't changed in any significant way throughout the entirety of human history. But a superintelligent AI that could self-improve would blow past us so fast that we'd wake up in a fundamentally different world: not one where the world is still fitted to our means, but one where man would no longer be the measure of all things. And probably no amount of time would make that event intelligible to us. There isn't really any other event that could be comparable to something like that.
I suppose it could be an order of magnitude more significant.
As for the second premise, it is worth noting that Moore's Law is already breaking down. I don't think computational power can grow without limit. Eventually, it will crash into the hard limits imposed by the laws of physics. All technology eventually plateaus and becomes a mature technology. For example, look at firearms: first invented in the tenth century or so and slowly improved over centuries until they hit a period of rapid innovation in the late 19th and early 20th centuries. But then that came to a halt. A century later and we are still using the same basic technology that was invented around WW1. I honestly expect computer technology to reach its own plateau eventually. It probably won't be soon, but it will happen. The idea of unlimited growth seems silly to me.
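To make the plateau argument concrete, here is a minimal sketch of the two curves being debated: an uncapped exponential projection versus a logistic (S-curve) model that flattens as it approaches a hard limit. The growth rate and ceiling are made-up illustrative numbers, not real transistor or performance data.

```python
# Illustrative only: toy growth rate and ceiling, not real hardware data.
# Contrasts unbounded exponential growth with a logistic curve that
# saturates at a hard physical limit (the "mature technology" claim above).

def exponential(x0, rate, steps):
    """Unbounded growth: x keeps compounding forever."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + rate))
    return xs

def logistic(x0, rate, cap, steps):
    """Same early growth, but the rate collapses as x approaches the cap."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / cap))
    return xs

if __name__ == "__main__":
    exp = exponential(1.0, 0.4, 40)
    log = logistic(1.0, 0.4, 1000.0, 40)
    for t in (10, 20, 30, 40):
        print(f"step {t:2d}: exponential {exp[t]:12.1f}   logistic {log[t]:8.1f}")
```

Early on the two curves are nearly indistinguishable, which is why a plateau is hard to spot from inside the boom years; they only diverge once the limit starts to bite.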
That's the whole premise of the singularity tho.
Moore's law on silicon processors isn't the end-all be-all of improvements in computing. New algorithms, different architectures, and eventually new substrates are all going to change the landscape of computing in the future.
> Are we assuming this AI is self aware?

That's kind of the point of AI, is it not? Self-preservation would be a given.
What if the super intelligent AI says something like "we need to kill all of this certain group of people"? Would you do it?
No. Artificial intelligence is limited to human knowledge. It can't learn things without someone telling it to.
That's kind of the point of AI, is it not? Self-preservation would be a given.
> I'm not convinced AI is capable of having sentience the way people do. Not denying the possibility, but I was under the impression there is still a lot about the brain and consciousness that science still doesn't understand.

I have no idea how consciousness will occur; that's sort of the point of labelling it the singularity. We have already built self-preservation into our current attempts at AI, so keeping that notion moving forward is obvious.
Simple enough. A smart enough AI will be able to intuit new knowledge about physics/reality.
No, it will not. An AI can't learn things that humans don't know.
What if the super intelligent AI says something like "we need to kill all of this certain group of people"? Would you do it?
What's its justification?
Will it happen? Depends on millions of variables. Unrestricted? No. Mankind apocalypse? Yes, but not for a couple of generations, imo.
For the good of the human species, so that it survives.
I don't think we'll hit a Skynet/Matrix situation, but a super advanced AI coming to the inevitable conclusion that uncontrolled human growth and intervention is objectively bad for the planet (and even the survival of the species long-term) and thus taking appropriate action is in the cards.
Why would an AI care about "the planet" if it doesn't even need any atmosphere to exist in the long run?
What? It absolutely can. We have machine learning systems create algorithms and optimize solutions beyond the grasp of humans (at least initially) all the time.
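A minimal sketch of what "learning without being told" means in practice: the rule below (a hidden linear relationship with arbitrary, made-up coefficients) is never written into the program; the model recovers it from examples alone.

```python
# Minimal sketch: the "rule" (y = 3x - 7 plus noise) is never given to the model;
# it is recovered from data alone. The coefficients are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, size=200)
y = 3.0 * x - 7.0 + rng.normal(scale=0.5, size=200)  # hidden relationship + noise

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares fit of a degree-1 polynomial
print(f"learned: y ~= {slope:.2f}x + {intercept:.2f}")  # close to 3 and -7
```

The same principle, scaled up, is how systems end up with solutions nobody explicitly programmed into them.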
Couldn't this advanced AI do those same 101 things but only far far faster? As in if it takes 1 year for a human, it would only take the uber AI a day or maybe a few seconds even to run those same experiments? So would it not always end up smarter than us?

I think that in general singularists tend to place way too much emphasis on intelligence as the determinant of problem-solving ability, and they're often incredibly optimistic about the solvability of problems. Like, I've seen it argued that there's no way to contain a sufficiently intelligent AI because it will always be able to figure out a way to trick people into letting it out. But this is silly. There are problems you can't think your way out of. If I lock you in a small metal box at the bottom of the Marianas Trench without any tools, it doesn't matter how smart you are. I can even let you know what I'm going to do ahead of time so you can come up with a plan or ask advice. It doesn't matter. Once you're there, you're boned.
So lots of problems just aren't solvable, but also lots of problems aren't solvable just with intelligence. Science mostly progresses by accident and experiment. We find out a lot of stuff just because we tried 101 things and the first 100 didn't pan out. We accomplish a lot by bootstrapping - we have really good tools now in part because we had pretty good tools a while ago, which we made with okay tools, which we made with mediocre tools, etc. There's this necessary physical component to a lot of our technological advances - again, it doesn't matter how smart you are, you're not producing a modern precision drill press without a lot of supporting infrastructure.
Increasingly, it looks like intelligence itself is going to be a problem like this. Nobody's really expecting AI to pop out of a clever algorithm that the AI could then look at and make cleverer. The first real AIs are likely going to be running on highly specialized hardware, and their functioning is not really going to be understood - it'll be more like "we built this really complicated interconnected system and trained it in the right way and now it seems like it's thinking". So the somewhat smarter-than-human system probably isn't going to be in a great position to just think its way to a better system. The actual calculations involved will probably be being performed pretty efficiently already on the hardware available - we are very confident that there's not a much faster way to do most matrix arithmetic on existing computer hardware than what we're already doing, and the operations involved in something like a neural network are actually so simple that it seems unlikely that there are going to be many ways to do the same math but with fewer operations. Advances in AI seem like they're going to be the result of experiments, somewhat-blindly trying new things with hardware and software and seeing what works, and more intelligent AI won't necessarily be more able to make much better predictions about what's going to work. I expect that the quality of AI on systems that are not absolutely massive is mostly going to be a function of advances in manufacturing more efficient hardware, which itself depends a lot on experimentation and bootstrapping and so can only be sped up to a point.
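To put the matrix-arithmetic point in concrete terms, here is a rough sketch (with arbitrary layer sizes) of a single neural-network layer's forward pass: one dense matrix multiply plus an elementwise nonlinearity. The multiply-add count is fixed by the math itself, so barring approximations or exploitable structure like sparsity, a smarter system cannot simply think the same computation down to far fewer operations.

```python
# Sketch with arbitrary layer sizes: one hidden layer's forward pass is just a
# matrix multiply plus an elementwise nonlinearity. The multiply-add count below
# is what any implementation of this exact math has to pay, however clever it is
# (barring approximations or exploitable structure such as sparsity).
import numpy as np

batch, n_in, n_out = 64, 1024, 1024           # arbitrary example sizes
x = np.random.randn(batch, n_in)              # inputs
W = np.random.randn(n_in, n_out)              # learned weights
b = np.zeros(n_out)                           # learned bias

h = np.maximum(x @ W + b, 0.0)                # matmul, bias, ReLU: the whole layer

madds = batch * n_in * n_out                  # multiply-adds in the dense matmul
print(f"hidden activations: {h.shape}, multiply-adds: {madds:,}")
```

The operations are already about as simple as arithmetic gets, which is the point above: improvements mostly have to come from better hardware and from experiments with architectures and training, not from a cleverer way of doing the same sums.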