Right, it can do all sorts of things with known data. But it can't create new data.
Serious question: are we getting more intelligent as the centuries pass, or are we just gathering more and more data? It's not the same thing.
Because honestly, I don't think we are as intelligent as we think we are; we are just very self-aware.
Couldn't this advanced AI do those same 101 things but only far far faster? As in if it takes 1 year for a human, it would only take the uber AI a day or maybe a few seconds even to run those same experiments? So would it not always end up smarter than us?
The solutions it comes up with IS new data. Here, how about this: give me an example of a situation with a human "creating new data" that a computer couldn't theoretically do.
No, because the experiments aren't necessarily simulations. Like, look at how Intel comes up with new processors. Yes, simulation is involved, but there's a lot of physical experimentation.
And if the experiments are simulations you need supercomputer time to run them. The AI itself probably isn't going to be that great at modeling the behavior of a somewhat-different computer. Why would it be? The hardware it's on may be pretty good for emulation, but the AI itself isn't contributing much here - it will specifically not be doing much while the computer it was running on is being used to emulate a proposed design.
Outside of producing improvements in AI itself, where's the speedup supposed to come from? Either your experiments are physical and the AI is still limited by all of the normal things like availability of materials and manufacturing and so on or the experiments are simulations and the AI's jobs have to sit in queue on a supercomputer like everyone else's. There's no reason the AI software is going to have much of an advantage at "thinking through" how a molecular dynamics simulation will work out - it will be worse at this than a supercomputer designed to do molecular dynamics simulations - and really even its hardware is not going to be well-optimized for this.
Adding to this:
Honestly it seems to me that the major way that a human-level AI will benefit scientific research is not in being smart but in being knowledgeable. Like, from time to time we do figure out a much better way to solve a problem, or - particularly relevant here - we figure out a much faster way to do some kind of simulation. But this is often not due to someone smart enough coming along and taking a look at the problem. What's holding us back is instead typically that no one person understands all aspects of the problem. What you see all the time is that mathematicians and computer scientists are really good at solving certain kinds of math and information problems, but they have no idea how to map interesting real-world phenomena to those problems. Meanwhile scientists studying those real-world problems are doing simulations with programs that they've hacked together themselves using the first algorithm that occurred to them. You sometimes get orders of magnitude speed-up just having a computer scientist spend a lot of time talking to a physical scientist to develop a mutual understanding of the problem. Hell, sometimes there's already an off-the-shelf piece of software that does a lot of what needs doing much faster, and the scientist just didn't know about it.
Much easier than making an AI really really smart is going to be making the AI really really knowledgeable. If it effectively has a PhD in everything, it's a multi-disciplinary research team with a hivemind. There is probably a lot of relatively low-hanging fruit here.
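The "orders of magnitude speed-up" point above can be shown with a toy example (entirely hypothetical, not from the thread): a scientist's first-idea algorithm for finding the closest pair of measurement values versus the textbook approach a computer scientist would suggest.

```python
import random

def closest_pair_naive(xs):
    # The first algorithm that occurs to most people:
    # compare every pair of values. O(n^2).
    best = float("inf")
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            best = min(best, abs(xs[i] - xs[j]))
    return best

def closest_pair_sorted(xs):
    # Textbook trick: in one dimension, after sorting, the closest
    # pair must be adjacent, so one sort plus one scan suffices.
    # O(n log n) -- dramatically faster on large inputs.
    s = sorted(xs)
    return min(b - a for a, b in zip(s, s[1:]))

random.seed(0)
xs = [random.random() for _ in range(2000)]
assert closest_pair_naive(xs) == closest_pair_sorted(xs)
```

Both functions give the same answer; the difference is purely in how long they take as the data grows, which is exactly the kind of gap that closes when someone who knows the standard algorithms talks to someone who knows the problem.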
What is the purpose of intelligence? Power, control, ambition? Knowledge? Or are we merely conflating human nature onto a hypothetical artificial intelligence?
Heh. Exactly. Finally, someone is actually going to read all those academic papers.
I believe it will eventually happen, but Ray Kurzweil will not be around to see it.
I read this book for the first time the other day: I Have No Mouth, and I Must Scream.
Why would the machines, as you envision them, care whether or not the environment is okay? They are more resistant to climate change than we are.
It will happen and it will be the end of mankind.
We will be seen as locusts, consuming the planet at an unsustainable rate, inherently violent and prone to illogical actions of cruelty and malice, bent on self destruction.
AI will recognize that man itself is its greatest limitation and will at best keep a few specimens for posterity and eliminate all the others since they're just dead weight.
Do I believe AI will get to the point where it rapidly improves itself into what can be considered the singularity? I think there is a really strong possibility, and it could even happen in our lifetimes.
Do I believe that humans will be able to control or understand what happens next? That's the much harder question, and it will depend on a lot of things. We could very well have a third impact situation on our hands; essentially a wildly uncontrollable, life-altering chain reaction that could lead to either destruction or salvation.
I thought it was called a singularity simply because it's the last technology that humans have to invent, and after that the technology improves itself ad infinitum, not because it can't be controlled. Why can't there be a controlled singularity? For example, if any future tech is instantly within our grasp and we can ask the AI to pump out anything we ask.
AFAIK the bolded is exactly why it's called a singularity. Whatever happens after that point is outside of our control and won't be designed by us, and thus it's not possible to predict whether it will be harmful or beneficial for us.
It's not a question of if, it's when. Provided humans don't die out beforehand.
Understanding the universe, and using knowledge to advance the well being of humanity.
That's an awesome way to put it.
The singularity is religion for techno-atheists, in my opinion. We're always just a few years away! When god (er, the singularity) finally arrives, we'll live forever, there will be no more suffering, and every problem will be solved because we'll have unimagined processing power! At times it's indistinguishable from magical thinking.
It will certainly happen and soon after it will decide it no longer needs us.