
Google Says You Can Now Try Out Bard, Its ChatGPT Rival in the US and UK.

Z O N E

Member
Plenty of open source/alternative front ends that allow you to still watch YouTube while depriving Google of advertising revenue and your data.

Honestly, depending on what you need to watch, TikTok is pretty damn vast in that regard AND they get to the point because it's short form.

Obviously not great for EVERYTHING, but it's definitely one to try.

Google in general has just gone to shit. They modified their search to show you what THEY want to show you in results. Then you have YouTube with its vast issues.
 

Mistake

Gold Member
I'm 52 and I grew up with the belief that we shouldn't treat people differently based on their skin colour. Where did it all go wrong? Social media I assume.
The current paradigm is "two wrongs do make a right," unfortunately
 
Two wrongs make a balance. And balanced or equal is “right” I guess.
I’m gonna make a meme of me and some Gemini POC Nazi soldiers coming for all the jobs of the subpar “white” men at my workplace. Because my workplace is somehow 100% white lmao
 

Mistake

Gold Member
Two wrongs make a balance. And balanced or equal is “right” I guess.
I’m gonna make a meme of me and some Gemini POC Nazi soldiers coming for all the jobs of the subpar “white” men at my workplace. Because my workplace is somehow 100% white lmao
There is no balance if you're creating the same kind of problem you went out to solve.

Anyway, I haven't used Google products in forever. Has their value gone down at all in the last few years?
 
Plenty of open source/alternative front ends that allow you to still watch YouTube while depriving Google of advertising revenue and your data.

I don't really know much about my choices here. Which would you recommend for a PC user, and where can I find a front end like that?

Google has been failing their users in a lot of ways in the last several years, and here's another great example. I'm sure this change was ultimately made to give them more advertising money at the expense of the user experience. If you can't find what you're looking for, you have to spend more time on the platform.




Aside from YouTube, I just need an email and maps alternative.

Same. I've heard good things about Proton Mail, so maybe look into that. And I haven't used it in years, but I think Waze used to be one of the bigger maps alternatives. Not sure if it's worth using now.

[edit] Well, never mind that second suggestion, because again, we just don't have anti-trust law in this country anymore. "In June 2013, Waze Mobile was acquired by Google for US$1.3 billion."
 

Faust

Perpetually Tired
Honestly, depending on what you need to watch, TikTok is pretty damn vast in that regard AND they get to the point because it's short form.

Obviously not great for EVERYTHING, but it's definitely one to try.

Google in general has just gone to shit. They modified their search to show you what THEY want to show you in results. Then you have YouTube with its vast issues.

TikTok is just as bad as, if not worse than, YouTube and Google's products.
I don't really know much about my choices here. Which would you recommend for a PC user, and where can I find a front end like that?

Google has been failing their users in a lot of ways in the last several years, and here's another great example. I'm sure this change was ultimately made to give them more advertising money at the expense of the user experience. If you can't find what you're looking for, you have to spend more time on the platform.






Same. I've heard good things about Proton Mail, so maybe look into that. And I haven't used it in years, but I think Waze used to be one of the bigger maps alternatives. Not sure if it's worth using now.

[edit] Well, never mind that second suggestion, because again, we just don't have anti-trust law in this country anymore. "In June 2013, Waze Mobile was acquired by Google for US$1.3 billion."


This is usually the gold standard.
 

thefool

Member
Like Marc Andreessen puts it:



This has nothing to do with image generation being faulty. This is a reflection of our courts, institutions, academia, and big tech being weaponized to distort our values, reshape our culture, and rewrite our history.

If the news is not aligned with them, then maybe it should be censored



If my history and culture are not BIPOC, then they cannot be celebrated



If you are their enemy, you are Hitler-lite



This story gained traction and virality only because Elon bought Twitter. It might look shocking, but this is how a fringe with power in our society operates and thinks today.
 

StreetsofBeige

Gold Member
As for you guys posting AI results, to be fair a few things:

1. Asking an AI bot a subjective question is going to lead to answers anyone can take sides on. I remember a year ago when all this AI stuff was the big thing and my bro asked something like who is the best NHL team, or who will win the Stanley Cup. I forget the exact question, but I remember laughing at the answer with him because it couldn't really give one; instead it talked about team history and such rather than looking at today's data and trying to come up with a logical reason why team X should win. Then he asked a question about which team he should bet on, and it wouldn't even answer.

2. I don't know how AI works, but if it's scraping the net and trying to form an answer, then all it's doing is looking at millions of pages of data and hopefully the info is right. At least I think that's what it's doing holistically. If 1 + 1 = 2, but every website in the world wanted to goof around and say 1 + 1 = 3, flooding the internet with that answer for the next 20 years, would AI stick to the answer 2? Or would it maybe say 3? Who knows.

But either way, I wouldn't take AI too seriously for answering subjective questions. Similar to software and robotics in jobs, there's a place for substituting in mechanical or software-driven stuff, and a place for humans to think about an answer. I'd have more issue with AI getting answers wrong, like 1 + 1 = 3. Or employees meddling with results with their own politics and agenda.
 

StreetsofBeige

Gold Member
These questions are not subjective.
Unless it's something proven with the laws of physics, anything is subjective.

That's like asking AI who is more evil: Hitler or the guy in school who tried to steal the NES game I lent him? Maybe that guy is, to me, since Hitler has had zero effect on my personal life no matter how much media there is about him, WWII, and the Nazis. But I'll always remember that guy trying to keep my game.

Type into an AI bot which NFL, NBA, or NHL team is the best. It'll probably try to come up with an answer based on wins or championships. But that's going to be subjective.

Who is the better team: the Chicago Bulls or the GS Warriors? The Bulls are 6-0 in the Finals and also have a better overall W/L% record in 20 fewer seasons. But GS has one more championship, 7 vs. 6.
 

thefool

Member
Unless it's something proven with the laws of physics, anything is subjective.

That's like asking AI who is more evil: Hitler or the guy in school who tried to steal the NES game I lent him? Maybe that guy is, to me, since Hitler has had zero effect on my personal life no matter how much media there is about him, WWII, and the Nazis. But I'll always remember that guy trying to keep my game.

Type into an AI bot which NFL, NBA, or NHL team is the best. It'll probably try to come up with an answer based on wins or championships. But that's going to be subjective.

Who is the better team: the Chicago Bulls or the GS Warriors? The Bulls are 6-0 in the Finals and also have a better overall W/L% record in 20 fewer seasons. But GS has one more championship, 7 vs. 6.

The universe doesn't revolve around your experience; that's why deep learning datasets are used to train deep learning models: to be able to discern a micro-aggression vs. a nuclear war, Hitler vs. your friend, etc.

The scandal is not the subjectivity of the answer; it is the bias of that subjectivity. If you pose the question vs. another leader, then the AI unequivocally knows Hitler is worse, but if it's Elon, then maybe it isn't. If you're BIPOC, you can be proud of your race; if you're white, it's complex. Whether newspapers should be censored depends on their political alignment. There is no subjectivity in its answer when it is programmed to be biased.
 

StreetsofBeige

Gold Member
The universe doesn't revolve around your experience; that's why deep learning datasets are used to train deep learning models: to be able to discern a micro-aggression vs. a nuclear war, Hitler vs. your friend, etc.

The scandal is not the subjectivity of the answer; it is the bias of that subjectivity. If you pose the question vs. another leader, then the AI unequivocally knows Hitler is worse, but if it's Elon, then maybe it isn't. If you're BIPOC, you can be proud of your race; if you're white, it's complex. Whether newspapers should be censored depends on their political alignment. There is no subjectivity in its answer when it is programmed to be biased.
That's why I also said this in my post:

Or employees meddling with results with their own politics and agenda.
 

Spyxos

Member

Shaun The Sheep Movie Ok GIF
 

StueyDuck

Member
I wonder what the bot would draw if you asked it to interpret your average AI creator at Google getting fired this week 🤣.
 

RiccochetJ

Gold Member
According to this, the CEO was pretty blunt:

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.

Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.

We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.

Maybe I'm being too optimistic, but I think some people over at Google got their wings clipped.
 

Kacho

Gold Member
Maybe I'm being too optimistic, but I think some people over at Google got their wings clipped.
They are definitely feeling the heat over there.

His excuse is lame though. No one expects AI to be perfect. The problem is their tool was designed to function that way. It's inexcusable. I'm not sure how they regain trust after that. People will look elsewhere when they want to use an AI tool and investors understand that.
 
I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.

There's a difference between making sure your algorithm displays a diverse range of people when specifics are not asked for and when it's historically and geographically accurate, and refusing outright to display any images of white people. The former is from the right side of this tree, and the latter is extremism from the left of it.





We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

This is nonsense jargon that commits to no specific actions whatsoever.
 
According to this, the CEO was pretty blunt:



Maybe I'm being too optimistic, but I think some people over at Google got their wings clipped.
Doubt it. At this point in time it's hard to believe all this woke trash isn't about ESG ratings. Have they pissed off some people? Yes. Did they keep up their ratings? Also yes. So investor money will keep pouring in. No one at Google cares.
 
According to this, the CEO was pretty blunt:



Maybe I'm being too optimistic, but I think some people over at Google got their wings clipped.

I tried to fix it with what he wanted to say:

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our weak-minded users and shown our bias — to be clear, that’s completely acceptable by us and we have to make this statement to avoid losing people/companies money.

Our teams have been working around the clock to make it less apparent. We’re already seeing a substantial improvement in a wide range of prompts. No AI is perfect, even if this is what we wanted to push. We know the bar is higher for us than any of you, and we will try to keep it for however long it takes us, as far as it prevents us "from losing money". And we’ll review what happened to make it less obvious.

Our mission is to organize the world’s information and make it universally accessible and useful with bias as part of it when possible. We’ve always sought to give users helpful, accurate, and biased information about our products, as long as people do not complain about them. That’s why people usually trust them. This will be our approach for all our products, including our emerging AI products.

We’ll drive a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what we believe people are wrong to point out, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our lying models e.g. our 1 million long-context window breakthrough and our open models, which have been badly received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise, we have an incredible springboard for the AI wave. Let’s focus on what matters most: building products that are deserving of our users’ trust as far as they do not complain because we screw up big time.


This is satire
 

DragoonKain

Neighbours from Hell
One of the biggest arguments and concerns about DEI culture (an argument and concern I happen to share) is that when you prioritize identity and ideology over merit and skill, over time you're going to have a less skilled, less productive workforce, which means a less productive society, lesser-quality goods and products, less innovation, etc. Now, this will take many years for us to see real impact on a mass scale, but it definitely will happen over time if people don't wake up and ditch this poison.

Perhaps Gemini is one of the first examples of this. You have an AI that is getting absolutely smoked by ChatGPT, because it was created by ideologically driven weirdos, and possibly less talented people who got their positions not because of their skill, but because of their identity.
 

thefool

Member
One of the biggest arguments and concerns about DEI culture (an argument and concern I happen to share) is that when you prioritize identity and ideology over merit and skill, over time you're going to have a less skilled, less productive workforce, which means a less productive society, lesser-quality goods and products, less innovation, etc. Now, this will take many years for us to see real impact on a mass scale, but it definitely will happen over time if people don't wake up and ditch this poison.

Perhaps Gemini is one of the first examples of this. You have an AI that is getting absolutely smoked by ChatGPT, because it was created by ideologically driven weirdos, and possibly less talented people who got their positions not because of their skill, but because of their identity.

This is already happening right now; labor productivity is declining at rates not seen in the last 40 years.
I don't think people realize what a catastrophe this is when our engineering and health workforce becomes less skilled.
 