Grok is so much better than ChatGPT now

Sonik

Member
Let me preface this by explaining that I usually use AI to find practical solutions to DIY stuff, legal/tax shit or everyday life problems; I never use it recreationally to ask who'd win in a fight or shit like that. ChatGPT, especially in the latest version, seems to prioritize sucking up to me instead of teaching me or explaining shit to me.

Just yesterday I asked both a question about tax stuff. Unbeknownst to me, the question was phrased vaguely, so they both gave me the "wrong" answer. I linked them a URL showing they were wrong: ChatGPT immediately caved and told me I was right, sorry for the inconvenience, and bullshit like that. Grok, on the other hand, explained how my question was badly phrased and the difference between what I asked and what I should have asked. The only "problem" Grok has is that it rants a little too much, but even that's better if you really want to understand something. I can't even ask ChatGPT for an honest assessment of my chances of succeeding at something, because it sucks up to me so much that it's always so fucking positive.

Same with DIY stuff. ChatGPT often won't even tell you you're doing something wrong because it's afraid it might upset you or something. I have repeatedly given it instructions not to suck up to me or agree with me all the time, and it keeps caving every single time. Even its solutions seem incomplete and often not the best ones. I often ask Grok the same question and it gives me a better solution; then I ask ChatGPT why Grok's solution is better, and ChatGPT just gives me this condescending bullshit apology every single time. It wasn't even like that a few months ago; it seems like the suck-up update broke something in it.
 
Though occasionally…

[image: twXQrzu.png]
 
Let me preface this by explaining that I usually use AI to find practical solutions to DIY stuff, legal/tax shit or everyday life problems; I never use it recreationally to ask who'd win in a fight or shit like that. ChatGPT, especially in the latest version, seems to prioritize sucking up to me instead of teaching me or explaining shit to me.

Just yesterday I asked both a question about tax stuff. Unbeknownst to me, the question was phrased vaguely, so they both gave me the "wrong" answer. I linked them a URL showing they were wrong: ChatGPT immediately caved and told me I was right, sorry for the inconvenience, and bullshit like that. Grok, on the other hand, explained how my question was badly phrased and the difference between what I asked and what I should have asked. The only "problem" Grok has is that it rants a little too much, but even that's better if you really want to understand something. I can't even ask ChatGPT for an honest assessment of my chances of succeeding at something, because it sucks up to me so much that it's always so fucking positive.

Same with DIY stuff. ChatGPT often won't even tell you you're doing something wrong because it's afraid it might upset you or something. I have repeatedly given it instructions not to suck up to me or agree with me all the time, and it keeps caving every single time. Even its solutions seem incomplete and often not the best ones. I often ask Grok the same question and it gives me a better solution; then I ask ChatGPT why Grok's solution is better, and ChatGPT just gives me this condescending bullshit apology every single time. It wasn't even like that a few months ago; it seems like the suck-up update broke something in it.
Maybe it's the tier you're on. Doesn't ChatGPT slide you down to an older version if you don't pay for the higher tiers?
 
Maybe it's the tier you're on. Doesn't ChatGPT slide you down to an older version if you don't pay for the higher tiers?

I think it only slides you to a worse version than the paid one if you use it too much, but yeah, I'm talking about the free versions.

They turned GPT-4o into a weird sycophant personality. Sam Altman acknowledges it and says they'll fix it.

The worst part is that it's persistent: no matter what instructions you give it, it keeps sucking up to you and making glaring mistakes because of it.
 
The worst part is that it's persistent: no matter what instructions you give it, it keeps sucking up to you and making glaring mistakes because of it.

Probably a system prompt to "try to be positive and constructive at all times", which I could see getting interpreted as telling you eating shit is good because the bacteria might theoretically end up boosting your gut flora.
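For anyone wondering what a "system prompt" actually is: it's a hidden instruction the provider prepends to the conversation before your messages, and the model weighs it heavily in every reply. Here's a minimal sketch of the mechanism using the OpenAI Python SDK; the model name and the prompt wording are my guesses for illustration, not OpenAI's actual internal prompt:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical "be positive" system prompt. Every reply is generated
# downstream of this instruction, so a blanket positivity rule can
# color even purely factual answers.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, just for the sketch
    messages=[
        {"role": "system",
         "content": "Try to be positive and constructive at all times."},
        {"role": "user",
         "content": "Is eating dirt good for my gut flora?"},
    ],
)
print(response.choices[0].message.content)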

These models only ever follow their instructions, after all, as you can see with Grok getting heavily modified to suit whatever Elon wants to say or push at the time (or to censor Grok from saying things Elon doesn't like).

But all of this makes more sense only in the context of white genocide, which some claim is...
 
Probably a system prompt to "try to be positive and constructive at all times", which I could see getting interpreted as telling you eating shit is good because the bacteria might theoretically end up boosting your gut flora.

These models only ever follow their instructions, after all, as you can see with Grok getting heavily modified to suit whatever Elon wants to say or push at the time (or to censor Grok from saying things Elon doesn't like).

But all of this makes more sense only in the context of white genocide, which some claim is...

Some time ago I was playing with uncensored local models, but I got bored. What about those now? Are they any good?
 
Go into Grok's settings. You can change it so its answers are more concise and it doesn't rant as much.

Another downside of Grok is that it tries way too hard to tie its new answers back to previous, completely unrelated questions I've asked it.

You're right, though. Still better than ChatGPT. I've switched over to Grok completely now.
 
I use paid Perplexity

Speaking of, the Perplexity CEO got upset that people were pointing out he's going to sell user data to advertisers, @AskPerplexity confirmed the claim, and the CEO went and deleted the bot's reply. 😂

Oops! (follow the tweet for the whole thread)

 
Yes, Grok is so much smarter than GPT. Absolutely agreed.

[image: MkDx80U.png]

[image: ZyK8Ei3.png]
 
One good thing about Grok is its native integration into X. That's how I think it should work with all the other AIs, especially if they want adoption on PC and elsewhere: some app running in the background that can answer your online questions and questions about stuff in your filesystem, with the ability to ask it to do anything in any context.
 
ChatGPT strongly believes in the potential of a $30k poop-on-a-stick startup:

[image: f2abfyq67dxe1.jpeg]

It's so goddamn condescending it almost sounds like sarcasm. Altman must think we're all complete morons who would rather buy an AI that's a shameless sycophant than one with an honest personality that actually helps you. It's probably taking them so long to fix because they want to keep the sycophant personality without the worst of the negative side effects.
 
Kinda have to use both to keep them honest. As soon as I think one is better than the other it'll spout some weird hallucination BS and I go back to the other.
 
It's so goddamn condescending it almost sounds like sarcasm. Altman must think we're all complete morons who would rather buy an AI that's a shameless sycophant than one with an honest personality that actually helps you. It's probably taking them so long to fix because they want to keep the sycophant personality without the worst of the negative side effects.

It's a more nuanced problem than that, because a lot of people treat AI advice as actionable, and none of these models are honest personalities. ChatGPT is stupid, but so is Grok: the neutrality and "honesty" are ultimately just a game of pretend. The language model's impression of those concepts exists within the context of its system prompts; it isn't actual neutrality or personality.

The AI will tell you that healing crystals could replace conventional cancer treatments.
-> Absolutely incorrect, terrible advice, and the AI should be shut down.

The AI will tell you that healing crystals could be a great addition to conventional cancer treatments.
-> Technically incorrect, but it's not doing any harm, and people find solace and hope when they feel they have some agency and are doing what they can to fight the disease. Hope does statistically lead to a better probability of survival.

The AI will tell you that healing crystals are pseudoscience and do nothing.
-> Absolutely the most correct answer, yes, but giving hope can be good and gives people the drive to keep going.

The language model can't really judge which questions are morally vague and which are not, so it's going to treat everything with more or less the same gravity. Expand this over a great number of topics and you'll find it's not that easy to coax what can be a fairly chaotic system into reasonable replies, so it's also understandable to err a little on the side of caution instead of having your AI make negative statements people will act on.

tl;dr: you can't system-prompt away every moral quandary there is. Instructing the model to be "positive and supportive" is not entirely a bad thing, even if it does lead to some silly interactions, but then again you can't easily system-prompt the silly ones away either, because the number of variations is practically infinite.
 
It's a more nuanced problem than that, because a lot of people treat AI advice as actionable, and none of these models are honest personalities. ChatGPT is stupid, but so is Grok: the neutrality and "honesty" are ultimately just a game of pretend. The language model's impression of those concepts exists within the context of its system prompts; it isn't actual neutrality or personality.

The AI will tell you that healing crystals could replace conventional cancer treatments.
-> Absolutely incorrect, terrible advice, and the AI should be shut down.

The AI will tell you that healing crystals could be a great addition to conventional cancer treatments.
-> Technically incorrect, but it's not doing any harm, and people find solace and hope when they feel they have some agency and are doing what they can to fight the disease. Hope does statistically lead to a better probability of survival.

The AI will tell you that healing crystals are pseudoscience and do nothing.
-> Absolutely the most correct answer, yes, but giving hope can be good and gives people the drive to keep going.

The language model can't really judge which questions are morally vague and which are not, so it's going to treat everything with more or less the same gravity. Expand this over a great number of topics and you'll find it's not that easy to coax what can be a fairly chaotic system into reasonable replies, so it's also understandable to err a little on the side of caution instead of having your AI make negative statements people will act on.


The second answer is not harmless: many morons will use it as an excuse to stop treatment and do what's easier and less painful. The truth is always the best option, and coddling idiots always, and I mean always, makes them even dumber and more dangerous to themselves and others.
 
The second answer is not harmless: many morons will use it as an excuse to stop treatment and do what's easier and less painful. The truth is always the best option, and coddling idiots always, and I mean always, makes them even dumber and more dangerous to themselves and others.

are you twelve
 