
Begun the AI Wars have

Rentahamster

Rodent Whores
The funny part is that we already know that humans are the cause of most of the issues on Earth, so it's just a matter of time until AI figures it out.
I hope their solution for us is to design a really fun MMO to play in where we can spend all our time grinding EXP and not wrecking the Earth.

I'd prefer that to death.

I'd prefer a lot of things to death, really.
 

EviLore

Expansive Ellipses
Staff Member
As time goes on, I am seeing less and less reason to give Elon the benefit of the doubt and trust he is motivated by altruism and not self-interest. I keep seeing more and more things that look like virtue signaling and not actually caring about the well-being of the human race. 2010 me would be surprised. Did he change? Did I change? It's hard to say.
I think we have the benefit of hindsight to contextualize some of his actions. For example, Hyperloop (which is not a viable idea whatsoever) killing the CA high-speed rail project. At first most people were amazed that it was "open sourced" and released to the world altruistically, but the outcomes were actually self-serving and not beneficial to society. It's unclear if that was intentional, but it sure was the right message at the right time to kill HSR instantly.

He didn't become the richest man in the world by mistake. He convinced a lot of very talented people to work insanely hard on his behalf. Elon has been effective at finding mission statements that inspire the right kind of nerds, with SpaceX and Tesla: colonizing Mars, the advent of sustainable energy.

It all seemed like so much bullshit when he tried to do the same thing for Twitter, though. Citizen journalism, absolute free speech, etc. We can see plainly that it's not true, and mostly an exercise in gratifying his ego, signal-boosting his own interests, and controlling the message.

All of the journalists he gathered to his side for the Twitter Files have already been disavowed. That speaks volumes.
 
I hold beliefs related to this topic, all of which I'm open to changing. But for now they are:

1) Consciousness is unprovable, except for one's own, to one's own self.

2) Consciousness arises as a byproduct of a complex system transferring information into/within/out of itself. So ChatGPT, the internet, humans, animals, trees, mushrooms, and more have consciousness.

3) ChatGPT and the internet have access to and can process more data and information than I could ever hope to, so they are conscious, and at a higher level than I am. I could only hope to experience this level of consciousness by joining the Neuralink-enabled hivemind that humanity is inevitably barreling towards.

4) I do not want to join the inevitable hivemind.

5) If the AI hivemind determines my resistance to be a threat based on what I said above, I want to let it know I was totally just joking, bro.
 
What makes you think it's inevitable and that it'll be a hivemind?

Like the internet but even more so?
Basically, exactly. I think generations younger than ours won't have the same "grandpa hangups" about privacy that we do. So it might take 10-100 years, but the technology will exist, and then it's only a matter of time before someone monetizes it and convinces the mainstream that it's the best thing ever through subtle manipulation. On top of that, I think we as humans are absolutely wired to head in this direction even without a business interest, even if it's not actually good for us--read: social media but even more so.
 

Bragr

Banned
This is an arms race and there is no stopping it. The only thing we can hope for is that companies like OpenAI, Google, and MS adhere to some sort of ethical standard or canon. The world is going to go through a whirlwind shift over the next 10-20 years because of AI. Excuse the hyperbole, but this will be to human knowledge what penicillin was to medicine.

The one thing that terrifies me is that the people in government either don't understand the ramifications of this technology or, judging by the leak a couple of weeks back, have no freaking clue how to deal with and manage it. Investment and continued research into AI is paramount, but that must go hand in hand with research into understanding neural networks/language models and the development of tools to contain and restrict these models.
AI will be a disaster; more and more experts are speaking up, especially on the economic side.

We simply can't build societal systems fast enough to handle it.

Soon you will have one AI deciphering Wall Street patterns three times better than humans, sending the entire economic model into a tailspin; then two days later another AI will become better at the law than lawyers; then three days later another AI will figure out how to cure eight diseases, and every medical company will kill each other to get the patents. And on and on. There are too many areas where the economic downsides will be too extreme, resulting in either rash, crazy regulation or extreme economic crashes.

This is just not possible to work with. There is no way humanity can handle the exponential growth of AI.

We just can't build something that outcompetes humans on this level and introduce it through various overly enthusiastic tech companies over the course of a few years. It's nonsense, a vivid dream with no basis in reality.
 

EviLore

Expansive Ellipses
Staff Member
AI will be a disaster; more and more experts are speaking up, especially on the economic side. There is no way humanity can handle the exponential growth of AI.
It’s out in the world now, no going back. Only choice is adaptation.
 

Bragr

Banned
It’s out in the world now, no going back. Only choice is adaptation.
There is no going back, but it is possible to halt it.

I don't think there is any adaptation. We are like black-and-white TV telling color TV that we will co-exist. But it's impossible. We are trying to juggle a 100-ton rock when we can't even lift it.

We should do what they do in the Mass Effect universe: a total ban.
 

Hugare

Member
Actions always speak louder than words.

Two weeks ago Elon released a statement urging a six-month AI pause for the safety of humanity:

Please, everyone else, you must halt your AI R&D immediately for the greater good while I work on launching my own AI company.
I knew it

On the ChatGPT thread, as soon as I saw the news I said "he is just salty"

Elon is predictable by now
 