You don't think Elon Musk taking a job telling Trump he's wrong about climate change, despite it making him unpopular and damaging his brand, was a humanitarian cause? I'd say it's more important than politics.
I have no mouth and I must scream
He would've done the same thing in the interest of his businesses. And didn't Musk quit the council? It's collapsed like every other Trump council that isn't filled with his stooges.
People are in deep shit now. What is Elon doing for that? Is he going to deliver food via Hyperloop?
My problem isn't these causes, it's us treating science like TMZ and getting a boner every time assholes like Elon tell us what we should be focusing on, while doing jack shit for any problem that is a problem now.
Wanna change shit? Go vote in a local election. Then worry about fucking AI.
The thing is, though, AI would think beyond our capacity, and who knows what it might come up with that is beyond us in understanding and application. We may not be able to wrap our heads around that happening, but it's possible AI could create something that no human countermeasure could defend against.
01001001 00100000 01100100 01101111 01101110 00100111 01110100 00100000 01110010 01100101 01100001 01101100 01101100 01111001 00100000 01100111 01100101 01110100 00100000 01110111 01101000 01100001 01110100 00100000 01111001 01101111 01110101 00100111 01110010 01100101 00100000 01110100 01110010 01111001 01101001 01101110 01100111 00100000 01110100 01101111 00100000 01110011 01100001 01111001 00101110 00100000 01010111 01101000 01100001 01110100 00100000 01100100 01101001 01100001 01101100 01100101 01100011 01110100 00100000 01101001 01110011 00100000 01110100 01101000 01100001 01110100 00111111

Calm down everyone, you have nothing to worry about.
0101010110 11001 1001010 1100111101 01010101 101010010111100111111 010101 0101010101 01011 101 01011 101010 1 1010 10 10
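For anyone trying to read along: the longer binary post above is standard 8-bit ASCII, one byte per character, while the mixed-length groups aren't decodable that way (hence the "what dialect is that?" question). A minimal decoding sketch in Python:

```python
# Minimal sketch: decode space-separated 8-bit ASCII like the valid binary post above.
def decode_binary(bits: str) -> str:
    """Convert each space-separated 8-bit group into its ASCII character."""
    return "".join(chr(int(byte, 2)) for byte in bits.split())

# First few bytes of that post; the full string decodes the same way.
print(decode_binary("01001001 00100000 01100100 01101111 01101110 00100111 01110100"))
# -> "I don't"
```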
Well, none of that will matter if we don't make our civilization redundant by inhabiting multiple planets before there's some inevitable extinction scenario.

You think the "puzzle" is an interplanetary human civilization. I think the "puzzle" is a world where people don't need to be hungry or discriminated against for the circumstance of their birth. In your scenario, human welfare is a side effect of technological and economic expansion, and not a requirement. In mine, humanitarianism is the sole goal; everything else comes second.
"If we think it's worth buying life insurance on an individual level, then perhaps it's worth spending more than - spending something on life insurance for life as we know it, and arguably that expenditure should be greater than zero. Then we can just get to the question of what is an appropriate expenditure for life insurance, and if it's something like a quarter of a percent of the GDP that would be okay. I think most people would say, okay, that's not so bad. You want it to be some sort of number that is much less than what we spend on health care but more than what we spend on lipstick. Something like that, and "I like lipstick, it's not like I've got anything against it."
Yes, he left once Trump made a final decision on climate change, because it was the only reason he was even there. And even if it was only in the interest of his business, which is something we wouldn't be able to know for a fact, if that business is accelerating the world's adoption of sustainable energy, what is the effective difference? Especially when it actually made him less popular, and he stayed on until we withdrew from the climate agreement anyway? Either way it benefits humanity.
He's a technocrat and emblematic of some of the problems currently plaguing our generation, vis-à-vis our society being overturned by the shift towards automation and a goods-as-a-service economy with no adequate mechanisms for a smooth transition. He doesn't seem to care about anything beyond his business ventures, and indeed this tweet itself reads like a business venture from a certain angle.

Yet, I cannot even begin to understand the mindset and general philosophy of someone who comes into this thread and whose first impulse is to type "fuck Elon Musk". Truly baffling.
Zuckerberg gets a lot of flak for much of the same reasons, and now that he's gunning for a presidential run certain people hate him more than ever. These are not the people that should be leading public opinion, insofar as they have very little compassion or concern for societal problems outside the tech-bubble.
It's not very complicated.
Well, none of that will matter if we don't make our civilization redundant by inhabiting multiple planets before there's some inevitable extinction scenario.
All of Musk's companies have only lost shitloads of money every year. The only reason they exist is billions in grants from the government. He seems like a bit of a loon.
Why will the leading nation in AI 'rule the planet'? The hunt for AI will lead to WWIII? That's some far-fetched paperback sci-fi shit. Musk seems to be that particular sort of goofy nerd who believes certain sci-fi ideas are simply inevitable.
Although it does seem like the basis for a cool sci-fi movie: a country and an AI in cahoots to rule the planet with a brutal robotic fist.
If Elon's main motive was profit, then why would he invest all of his net worth in startups in the two industries most likely to ruin him -- commercial space and automotive? He came incredibly close to bankruptcy, causing him unimaginable psychological pain. No one goes through that without a purpose that is bigger than mere profit.

This was the start of this whole tangent:
And in the end my mind remains unchanged. He's interested in his businesses first and foremost; many of you even seem to support this. If it turned out his business was no longer "benefiting humanity", do you think he would change course? I'm going to lean no. I'm not interested in leaving my future to someone who thinks human welfare is a secondary concern to profit, even if for the time being our goals align.
AI always feels like the most privileged of things to worry about. Must be because I only hear about it from rich guys who literally have nothing else to worry about.
You should do research on Cambridge Analytica and Palantir. Cambridge Analytica has become increasingly successful at manipulating people through social media. They were key players in Brexit and Trump's election. They also recently worked on Kenya, where citizens are currently rioting and demanding a re-election because they can't believe the results. Sound familiar? They've also been involved in many other countries, such as Russia, Latvia, Lithuania, Ukraine, Iran, and Moldova. Mercer owns Cambridge Analytica and is a significant investor in Breitbart. Bannon owns Breitbart and also holds a chief role on the board of Cambridge Analytica. Cambridge Analytica ties Mercer, Bannon, Putin, Trump, Sessions, Flynn, and Farage together, among others.

For their first project, in Trinidad, they partnered with the government and Palantir to record all browsing history and phone calls of the citizens, as well as to geomap all crime data. An AI was able to give the police rankings of how likely a citizen was to commit crime, using a language processor on the recorded conversations and all the other stored data on the individual. Keep in mind Palantir has since moved on and is working with a lot of large US cities such as LA, and Cambridge Analytica is now scoring tons of contracts with the Pentagon.
Everyone should read this article and others by the Guardian:
https://www.theguardian.com/technol...eat-british-brexit-robbery-hijacked-democracy
That sounds like kool-aid but I'll humor you. What has Musk done, in terms of influencing public policy, to ensure that automation doesn't destroy the foundations of our society? The thing with "long-term" thinking is that it has a tendency to overlook the short term. Things are about to get bad really quick, much faster than the time it'll take for Strong AI to be an existential threat. Does Musk care at all about this or does he think we'll just get through it magically?
Honestly, I like the guy, but he absolutely has no fucking clue what he is talking about here. This is just some random thought of his that makes a headline. He has no insight into how wars start. He has no background that would make me trust that he knows why wars start. Predicting WW3 is just ridiculous. This is all for attention and nothing more, IMO.
Musk is deeply involved in the push towards automated infrastructure, which is the key difference here. He is a huge actor in this space. As far as I can tell his stance is "yeah, it's going to be a problem and someone else will have to solve it".
Like even from the OP, Musk, with one hand, proselytizes about the existential crisis of AI, and with the other hand, launches two new AI-related ventures.
It's not hard to follow his train of thought. Once you create a general intelligence exceeding human capabilities in all areas (obviously existing AIs greatly exceed humans in some areas but not all), it will in principle be capable of recursive self improvement. If one country - or even a private corporation - is able to break through that barrier notably in advance of others, then that creates an extremely serious power imbalance. The worry is that if that country is allowed to just keep on trucking while the rest of the world plays catch up, it will leave them in the dust. Russia being 4 years behind America in nuclear technology didn't make a big difference in the long run. Russia being 4 months behind America in creating a Strong AI could mean unchallenged American hegemony over Earth in perpetuity.
Even if the capabilities of an artificial super-intelligence turn out to be greatly exaggerated, or the rate of its increase is far slower than anticipated, the risk of causing a war in the pursuit of this technology depends on how seriously governments take it. If Vlad is pretty confident that President Chelsea Clinton has begun construction on a viable super-intelligence, and he believes it is an existential threat to him and his nation, that's what Elon is worried about.
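To make the "4 months behind" point concrete, here's a toy model (my own illustration, under the strong and unproven assumption that capability grows in proportion to itself once recursive self-improvement starts): a small head start doesn't stay small, it compounds.

```python
import math

# Toy model: capability that grows in proportion to itself, i.e. exponentially.
# Both the exponential form and the growth rate are assumptions for illustration only.
def capability(months_of_progress: float, monthly_growth_rate: float = 1.0) -> float:
    """Capability after a given number of months of self-improvement, starting from 1.0."""
    return math.exp(monthly_growth_rate * months_of_progress)

leader = capability(24)        # 24 months of recursive self-improvement
laggard = capability(24 - 4)   # the same program, started 4 months later
print(f"capability ratio: {leader / laggard:.0f}x")  # ~55x, and the gap keeps widening
```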
I have no mouth and I must scream
Gemüsepizza;247883630 said:
AI is a wonderful topic to make stupid predictions about, because we are still light-years away from even understanding the concept of true AIs, or from having anywhere near the necessary amount of computation. Also lol at his comments to Zuckerberg. Musk should stop talking so much shit; he is just a businessman and not a scientist. Maybe he has read too many tabloid articles calling him a "genius"; alternatively, doing less coke might do the trick. There are enough other areas which have the potential for conflict in the future, for example the rising inequality even in western countries. But I bet guys like him don't want to talk about those very real problems, which already exist, because then he would have to take a critical look at himself.
They won't just rule the earth but will also rule space and whatever wealth can be drawn from it. Space colonization won't ever truly begin until robots can do much of the heavy lifting. Deep space exploration will likely only be possible with robots.
Attacking the messenger is not disputing the message. It's not like he is the first to deliver this type of warning.
How do you even know you're prepared? What does being prepared entail?
Bradbury delivered a warning on messing with time machines; doesn't mean I'll take time travel as a credible existential threat and a likely cause of WW3. The main difference between time travel and Strong AI is that time travel breaks known laws of physics and Strong AI doesn't, but that doesn't stop people from speculating about how the laws of physics might not hold in special circumstances.
Let's approach this from another direction. Strong AI is an inevitability. The first actor to create Strong AI will have some degree of influence over the ensuing singularity. Thus, it's in our favor to be the ones to create the first Strong AI. The alternative (halting, delaying, or regulating AI research) will only empower foreign researchers like those of Russia or China, and as TDM has explained, a time factor of months might be the difference between total annihilation and eternal hegemony. If we accept that a Russia- or China-backed Strong AI is "worse" than a US-backed one, then we must accept responsibility for creating the first Strong AI at all costs.
In this case, Musk's "warning" basically amounts to "WW3 is inevitable because AI is inevitable, and we should go all in on AI anyway".
I still don't get why you think being cautious is a bad thing.
I don't understand what "being cautious" entails. There's a world of difference between saying "let's be cautious" and creating contingency plans for impending disaster. Until someone outlines what it is we should be doing, "being cautious" is just blowing smoke.
So given the combination of obsessing over a goal, amorality, and the ability to easily outsmart humans, it seems that almost any AI will default to Unfriendly AI, unless carefully coded in the first place with this in mind.
Unfortunately, while building a Friendly ANI is easy, building one that stays friendly when it becomes an ASI is hugely challenging, if not impossible.
It's clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans.
We'd need to design an AI's core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.
For example, what if we try to align an AI system's values with our own and give it the goal, "Make people happy"?
Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people's brains and stimulating their pleasure centers.
Then it realizes it can increase efficiency by shutting down other parts of the brain, leaving all people as happy-feeling unconscious vegetables.
If the command had been "Maximize human happiness," it may have done away with humans altogether in favor of manufacturing huge vats of human brain mass in an optimally happy state.
We'd be screaming "Wait, that's not what we meant!" as it came for us, but it would be too late. The system wouldn't let anyone get in the way of its goal.
If we program an AI with the goal of doing things that make us smile, after its takeoff, it may paralyze our facial muscles into permanent smiles.
Program it to keep us safe, it may imprison us at home. Maybe we ask it to end all hunger, and it thinks "Easy one!" and just kills all humans.
Or assign it the task of "Preserving life as much as possible," and it kills all humans, since they kill more life on the planet than any other species.
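The pattern in all of these examples is the same: the optimizer only sees the number it was told to maximize, not what we meant by it. A toy sketch (mine, not from the article) of how the degenerate plan wins on the literal metric:

```python
# Hypothetical plans scored only on the literal objective "maximize measured happiness".
# The plan names and scores are made up purely to illustrate the point.
plans = {
    "genuinely improve people's lives":       7.0,
    "wire electrodes into pleasure centers":  9.9,
    "vats of optimally happy brain tissue":  10.0,
}

# The optimizer picks whatever scores highest on the proxy; side effects never enter into it.
best_plan = max(plans, key=plans.get)
print(best_plan)  # -> "vats of optimally happy brain tissue"
```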
Goals like those won't suffice. So what if we made its goal "Uphold this particular code of morality in the world" and taught it a set of moral principles?
Even letting go of the fact that the world's humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity in to our modern moral understanding for eternity.
In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.
No, we'd have to program in an ability for humanity to continue evolving. Of everything I read, the best shot I think someone has taken is Eliezer Yudkowsky, with a goal for AI he calls Coherent Extrapolated Volition.
The AI's core goal would be:
Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

Am I excited for the fate of humanity to rest on a computer interpreting and acting on that flowing statement predictably and without surprises? Definitely not.
But I think that with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.
And that would be fine if the only people working on building ASI were the brilliant, forward thinking, and cautious thinkers of Anxious Avenue.
But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI.
Many of them are trying to build AI that can improve on its own, and at some point, someone's gonna do something innovative with the right type of system, and we're going to have ASI on this planet.
The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it'll take us by surprise with a quick takeoff. He describes our situation like this:
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.

Great. And we can't just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don't require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored.
There's also no way to gauge what's happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.
The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go.
The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI.
And when you're sprinting as fast as you can, there's not much time to stop and ponder the dangers.
On the contrary, what they're probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just "get the AI to work."
Down the road, once they've figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right...?
Bostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit to being the world's only ASI system.
And in the case of a fast takeoff, if it achieved ASI even just a few days before second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors.
Bostrom calls this a decisive strategic advantage, which would allow the world's first ASI to become what's called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.
The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.
It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed. We'd be in very good hands.
But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed, it's very likely that an Unfriendly ASI like Turry emerges as the singleton and we'll be treated to an existential catastrophe.
Musk is deeply involved in the push towards automated infrastructure, which is the key difference here. He is a huge actor in this space. As far as I can tell his stance is "yeah, it's going to be a problem and someone else will have to solve it".
Like even from the OP, Musk, with one hand, proselytizes about the existential crisis of AI, and with the other hand, launches two new AI-related ventures.
Uhh, OpenAI is a non-profit created with the sole purpose of ensuring AI is controlled and doesn't become dangerous.
But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it's safe. "If you have a button that could do bad things to the world," Bostrom says, "you don't want to give it to everyone." If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it's different from a Google or a Facebook.
Yeah okay, let's open source AI so anyone can be the trigger for Strong AI. Seems good to me.

That would be something if open source doomed humanity.
https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/
I'm glad Musk is taking this seriously, because it's pretty clear from this thread that very few others are.
And I wouldn't trust Zuck to steward us into a new age of coffee makers, never mind AI.
But hey, let's just run wild with autonomous weapons technology, what's the worst that could happen right?
Yeah okay, let's open source AI so anyone can be the trigger for Strong AI. Seems good to me.

The idea behind OpenAI is to level the playing field. Countries would go to war over this technology if they thought they were behind. I don't know how far a corporation would go if it had the first AI.
https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/
What's the worst that can happen?

We wipe ourselves out.
What's the worst that can happen?

I don't know if you're asking genuinely, but this is one of the worst-case scenarios:
If I recall correctly, he's a hardcore opponent of investing in developing human-like AI.
The idea behind OpenAI is to level the playing field. Countries would go to war over this technology if they thought they were behind. I don't know how far a corporation would go if it had the first AI.
When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. "We think it's far more likely that many, many AIs will work to stop the occasional bad actors," Altman says.

What even is this? Do they think people are going to combine their good AIs together to stop Putin's evil AI? It sounds like an episode of Digimon.
The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. "Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it," Brockman says. "We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release."

Yeah, the software will be open as long as they get to keep a lead on it? Nice joke.
Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. "We won't patent anything in the near term," Brockman says. "But we're open to changing tactics in the long term, if we find it's the best thing for the world." For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.
But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI's founders have espoused. "That's what the patent system is about," says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. "This makes me wonder where they're really going."