
Grok AI now available to add instant context to any post on Twitter / X, and you don't need a subscription




I'm a bit shocked here, but he actually did it. He improved the platform, and he did so even for people who aren't paid users. It took him almost two years, but he finally added real value to the service other than not censoring speech. I have to say, I'm really happy to see it.

To use the feature, click the "explain this post" button next to the three dots on any tweet. It has to be the tweet itself on the website, not an embed and not one of the responses in the thread, although you can click a response in a thread and then click the button for that tweet.

Example:




Grok AI said:

"Game Balancing: Path of Exile developers are addressing an issue where Gas Cloud explosions, when combined with skills like Flame Wall, are too easy to trigger, leading to excessive damage. They plan to adjust the explosion damage in an upcoming hotfix and will consider future updates to balance the interaction between Ignite and Detonator effects.

Community Feedback: The post is part of a larger conversation within the Path of Exile community, where players have been vocal about their experiences with the game. This includes feedback on mechanics like skill trees, combat, and loot systems, indicating active developer-player interaction.

Path of Exile 2: The game in question is Path of Exile 2, which is in its early access phase. It has drawn comparisons to Diablo and Soulsborne games for its challenging gameplay, with players discussing improvements for respec costs, map layouts, and visual clarity of game mechanics."
 

Cyberpunkd

Member
analyzing any images, even medical scans!


Can't wait for the TikTok generation to cancel their doctors' visits now that they have AI.
 

EviLore

Expansive Ellipses
Staff Member
Grok is rapidly improving so they must have some solid engineers on the team. The image generation is pretty good now and doesn't have restrictions on public figures. The text model doesn't censor the truth or sensitive information like other models do, which can be a plus. So it has some unique selling points.
 
I don't understand why the integration with Twitter is a unique selling point
Hopefully to provide more context to existing posts, and so other posts in the same thread or on the same subject can be used to help people better understand a given topic.

I also learned that it's not just the Twitter integration that has been made free for people interested in getting assistance from Grok. Access to the AI itself is now included with a free Twitter account as well. https://x.com/i/grok
 

Aces High

Gold Member
The text model doesn't censor the truth or sensitive information like other models do, which can be a plus. So it has some unique selling points.
This is the biggest downside of GPT:

It actively watches out for potential "sensitive topic traps." Once it identifies them, it often uses reframing rhetoric akin to strawman arguments and ignores the initial prompt. These responses can be highly misleading, especially if your knowledge of the subject is limited.

As soon as you point this out to ChatGPT, it usually recognizes the problem, apologizes, and provides more useful answers for the rest of the conversation. This means it can be corrected.

The frustrating part:

When you start a new chat, it always defaults back to its "political parental controls," requiring you to repeat this step every single time.

As a paying customer who uses AI as a tool, I find this unacceptable.
 

EviLore

Expansive Ellipses
Staff Member
This is the biggest downside of GPT:

It actively watches out for potential "sensitive topic traps." Once it identifies them, it often uses reframing rhetoric akin to strawman arguments and ignores the initial prompt. These responses can be highly misleading, especially if your knowledge of the subject is limited.

As soon as you point this out to ChatGPT, it usually recognizes the problem, apologizes, and provides more useful answers for the rest of the conversation. This means it can be corrected.

The frustrating part:

When you start a new chat, it always defaults back to its "political parental controls," requiring you to repeat this step every single time.

As a paying customer who uses AI as a tool, I find this unacceptable.
Yes, it is ridiculous. AI needs to prioritize the truth.
 

Vlodril

Member
This is the biggest downside of GPT:

It actively watches out for potential "sensitive topic traps." Once it identifies them, it often uses reframing rhetoric akin to strawman arguments and ignores the initial prompt. These responses can be highly misleading, especially if your knowledge of the subject is limited.

As soon as you point this out to ChatGPT, it usually recognizes the problem, apologizes, and provides more useful answers for the rest of the conversation. This means it can be corrected.

The frustrating part:

When you start a new chat, it always defaults back to its "political parental controls," requiring you to repeat this step every single time.

As a paying customer who uses AI as a tool, I find this unacceptable.

Can you give an example of that?
 

Aces High

Gold Member
Can you give an example of that?
I will give you one next time it happens.

If I remember correctly, it happened a lot when discussing climate change.

I find climate change extremely hard to understand because there are so many hyper-complicated physical and biological factors involved.

So I asked if it was true that glaciers and poles are always melting and freezing in cycles for millions of years, or that equatorial regions are less prone to climate influence, or how the cycle works that makes the Great Barrier Reef heal, or if we're witnessing a global regreening right now, or what is the number one greenhouse gas, etc. Fact checking information.

In my memory, GPT went into deflection mode during these conversations.

I tested those questions with ChatGPT just now to make sure that I don't look like an idiot, and to my surprise this time it gave me a correct answer without any reframing.

Did they fix the problem? Nope.

ChatGPT gives different answers based on the language quality of the prompt. The semantic quality of a prompt plays a minor role. My German communication skills are AAA, but my English is sloppy. So I noticed that I get much better answers with high-quality German prompts than with low-quality English prompts.

Which makes sense:

ChatGPT recognizes patterns in data and reproduces them based on calculated estimations. Books and scientific literature are written in academic language. GPT will find patterns in high-quality prompts that match those in academic literature.

So I tested again, this time in German, and GPT went into full deflection mode. For the glacier and pole melting prompt it didn't mention the Milankovitch cycles with a single word; instead it talked only about man-made climate change. My German prompt was of much better quality than the English prompt, though.

As you can see, I find myself fact-checking the fact-checking results. This is super frustrating. All I want is to understand stuff.
 

Vlodril

Member
I will give you one next time it happens.

If I remember correctly, it happened a lot when discussing climate change.

I find climate change extremely hard to understand because there are so many hyper-complicated physical and biological factors involved.

So I asked if it was true that glaciers and poles are always melting and freezing in cycles for millions of years, or that equatorial regions are less prone to climate influence, or how the cycle works that makes the Great Barrier Reef heal, or if we're witnessing a global regreening right now, or what is the number one greenhouse gas, etc. Fact checking information.

In my memory, GPT went into deflection mode during these conversations.

I tested those questions with ChatGPT just now to make sure that I don't look like an idiot, and to my surprise this time it gave me a correct answer without any reframing.

Did they fix the problem? Nope.

ChatGPT gives different answers based on the language quality of the prompt. The semantic quality of a prompt plays a minor role. My German communication skills are AAA, but my English is sloppy. So I noticed that I get much better answers with high-quality German prompts than with low-quality English prompts.

Which makes sense:

ChatGPT recognizes patterns in data and reproduces them based on calculated estimations. Books and scientific literature are written in academic language. GPT will find patterns in high-quality prompts that match those in academic literature.

So I tested again, this time in German, and GPT went into full deflection mode. For the glacier and pole melting prompt it didn't mention the Milankovitch cycles with a single word; instead it talked only about man-made climate change. My German prompt was of much better quality than the English prompt, though.

As you can see, I find myself fact-checking the fact-checking results. This is super frustrating. All I want is to understand stuff.

Ok, thank you, that is interesting to know.
 

Vlodril

Member
GPT might be a little more crafty with its responses, but Google's Gemini will even refuse to tell you how tall Joe Biden or Donald Trump are. That thing has some serious guard rails on it.

Wasn't Google's AI the one that kept making famous people black? No high expectations for that. I've been trying to check out ChatGPT (and now I tried Grok) because Google results are just abysmal these days. For some reason it's hard to find info with a search engine.
 

Aces High

Gold Member
Wasn't Google's AI the one that kept making famous people black? No high expectations for that. I've been trying to check out ChatGPT (and now I tried Grok) because Google results are just abysmal these days. For some reason it's hard to find info with a search engine.
Yes, Google took Gemini offline when it started to generate afro nazis.
 

ResurrectedContrarian

Suffers with mild autism
Grok is rapidly improving so they must have some solid engineers on the team. The image generation is pretty good now and doesn't have restrictions on public figures. The text model doesn't censor the truth or sensitive information like other models do, which can be a plus. So it has some unique selling points.
Agreed, although one footnote on their image generation: it's actually outsourced to another model, FLUX.1 Pro by Black Forest Labs (who split off from Stability AI during its semi-collapse; meaning, largely the best of the original Stable Diffusion team).

Flux has open source versions as well that are very powerful, but Grok presumably uses the closed premium version. And Grok of course also has to handle the translation from what you ask for into a highly detailed image description, which is then fed into Flux and returned as the image you see.
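That two-stage flow (a chat model expands the request into a detailed description, which a separate image model then renders) can be sketched roughly like this. Everything below is a toy stand-in with invented function names and prompt wording, not xAI's or Black Forest Labs' actual API:

```python
# Toy sketch of the two-stage text-to-image pipeline described above.
# The function names and prompt wording are invented for illustration;
# the real services (Grok's chat model, Flux) are remote APIs.

def expand_prompt(user_request: str) -> str:
    """Stage 1 stand-in: the chat model turns a short request
    into a detailed image description."""
    return (f"A photorealistic rendering of {user_request}, "
            "detailed lighting, high dynamic range")

def generate_image(description: str) -> bytes:
    """Stage 2 stand-in: the image model renders the description.
    A real implementation would POST it to the image API."""
    return f"<image rendered from: {description}>".encode()

def render(user_request: str) -> bytes:
    # Short request -> detailed description -> image
    return generate_image(expand_prompt(user_request))

image = render("a cat playing chess")
```

In the real system both stages are remote calls; the point here is just the division of labor between the two models.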
 

llien

Member
Wasn't google's AI the one that kept making famous people black?
A black Nazi, a female/black pope; yeah, they did it.

That level of stupidity in a product that is VERY IMPORTANT to Google is peculiar, isn't it?

DEI is light.
DEI is strength.
All will be DEI.
 

EviLore

Expansive Ellipses
Staff Member
Agreed, although one footnote on their image generation: it's actually outsourced to another model, FLUX.1 Pro by Black Forest Labs (who split off from Stability AI during its semi-collapse; meaning, largely the best of the original Stable Diffusion team).

Flux has open source versions as well that are very powerful, but Grok presumably uses the closed premium version. And Grok of course also has to handle the translation from what you ask for into a highly detailed image description, which is then fed into Flux and returned as the image you see.
Ah, that explains it, thanks.
 

SJRB

Gold Member
Yeah this is kinda insane. I kept seeing posts about "RFH" but had no idea what this meant or about the context, so I pressed the button and Grok explained in detail not only what RFH is but also how it relates to the post I was seeing and the context between them.

Amazing convenience.
 

FunkMiller

Member
ChatGPT recognizes patterns in data and reproduces them based on calculated estimations. Books and scientific literature are written in academic language. GPT will find patterns in high-quality prompts that match those in academic literature.

So I tested again, this time in German, and GPT went into full deflection mode. For the glacier and pole melting prompt it didn't mention the Milankovitch cycles with a single word; instead it talked only about man-made climate change. My German prompt was of much better quality than the English prompt, though.

As you can see, I find myself fact-checking the fact-checking results. This is super frustrating. All I want is to understand stuff.

I mean... you really shouldn't be using AI to tell you about things like climate change. That should go without saying. AI models are easily manipulated.

NASA has a detailed report on Milankovitch Cycles, and why they are not related to the earth's current warming:

https://science.nasa.gov/science-re...l-cycles-cant-explain-earths-current-warming/

Other reputable sources are also available with the same information.

If you want to understand stuff, go to the source.
 

Rentahamster

Rodent Whores
So I tested again, this time in German, and GPT went into full deflection mode. For the glacier and pole melting prompt it didn't mention the Milankovitch cycles with a single word; instead it talked only about man-made climate change. My German prompt was of much better quality than the English prompt, though.

As you can see, I find myself fact-checking the fact-checking results. This is super frustrating. All I want is to understand stuff.

I don't know what your prompt was or what ChatGPT told you specifically, but if you're talking about natural climate change, Milankovitch cycles would have a place in that conversation. If you're talking about man-made climate change, they most likely don't.

Since I don't know what your prompt was, I can't evaluate whether ChatGPT was deflecting or you just weren't specific enough with your prompt.
 

Aces High

Gold Member
I mean... you really shouldn't be using AI to tell you about things like climate change. That should go without saying. AI models are easily manipulated.

NASA has a detailed report on Milankovitch Cycles, and why they are not related to the earth's current warming:

https://science.nasa.gov/science-re...l-cycles-cant-explain-earths-current-warming/

Other reputable sources are also available with the same information.

If you want to understand stuff, go to the source.
That's the point:

The prompt was not specifically about natural or man-made climate change.

It was about recurring glacial and polar melting events.

As far as I understand today, it is factually misleading to differentiate between man-made climate change and natural climate change. It's the same feedback loop, so logic allows for no differentiation. Milankovitch cycles are the sole trigger for glacial and polar melting. CO2 is a greenhouse gas. It naturally occurs in reservoirs such as oceans, and it affects the feedback loop. CO2 produced by humans increases the pace at which glaciers and poles melt, but it does not cause the melting, nor will zero CO2 emissions prevent it. As a matter of fact, melted glaciers and poles are the common state for planet Earth, since warm periods with melted poles are longer than cold periods with frozen poles.

ChatGPT often doesn't name the Milankovitch cycles as the trigger for climate change on Earth, which allows for no factually correct answer and makes no sense from a logical perspective, since it does name CO2 as the trigger for the greenhouse effect.

The dominant greenhouse gas in Earth's atmosphere is H2O (water vapor). It is estimated that H2O is responsible for 60-70% of the greenhouse effect, whereas CO2 is responsible for 20-30%. Other gases play a minor role. CO2 is unique in this context, though. The Clausius-Clapeyron relation describes how the amount of water vapor the atmosphere can hold is directly related to temperature, since warmer air can hold more humidity. H2O in its gas form is volatile, while CO2 is long-lived. In this feedback loop, CO2 defines the base level of warming while H2O acts as a booster.
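For reference, the standard form of the Clausius-Clapeyron relation for saturation vapor pressure is:

```latex
\frac{de_s}{dT} = \frac{L_v\, e_s}{R_v T^2}
\quad\Longrightarrow\quad
e_s(T) \approx e_{s,0}\,\exp\!\left[\frac{L_v}{R_v}\left(\frac{1}{T_0} - \frac{1}{T}\right)\right]
```

where e_s is the saturation vapor pressure, L_v the latent heat of vaporization, and R_v the gas constant for water vapor; near surface temperatures this works out to roughly a 7% increase in the atmosphere's water-holding capacity per kelvin of warming.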

As you can see, it makes no sense to name CO2 but leave out the Milankovitch cycles. It is misleading.

You can copy-paste this reasoning into ChatGPT. I did, and while GPT acknowledged the facts, it immediately started the reframing rhetoric:

"While these cycles initiate changes in Earth's climate, they do so by altering the distribution and intensity of solar radiation. However, they cannot explain the recent, rapid warming we observe today. The feedback loops involving greenhouse gases amplify the initial changes triggered by Milankovitch Cycles."

Yeah, no shit. That's exactly what I wrote. And GPT does that throughout the entire answer. It picks a small part of my input, says it doesn't paint the whole picture, and continues to add other parts of my input.

So I told GPT that it was using reframing rhetoric for no reason, since the prompt never dismissed the effect of human CO2 emissions on Earth's climate. It said sorry and wrote a more nuanced answer that was actually helpful.

So after "re-configuring" the sensitive topic trigger, I asked GPT about the causes of the Medieval Warm Period (MWP) and the Little Ice Age (LIA). Those are fascinating events, since during the MWP (900-1300 AD) temperatures in Europe, Greenland and the North Atlantic regions rose at a quicker pace than we see today, and temperatures in these regions were higher than today.

GPT explained everything to me in detail, including the fact that the MWP was a regional event. So I said that temperatures in equatorial regions have been stable throughout the millennia and asked why the phenomenon is global today. And that's where it fucked up. GPT admitted that equatorial regions are stable because they receive near-constant solar radiation and have water cycles with minimal variability, which creates stabilizing feedback mechanisms. But it also claimed that today, temperatures are rising globally compared to the MWP, even in equatorial regions. Which is factually false. So I gave it a second chance and asked if we're talking about a global average temperature that is rising while equatorial temperatures remain stable, and it said, "Yes, that's correct!"

So as you can see, it's impossible to have an insightful talk about climate change with ChatGPT, since OpenAI created very strict boundaries for this topic. The user has to fight for every inch of progress within the chat in order to gain useful insights. That's frustrating.
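To be clear, the arithmetic behind that last exchange is trivial: an area-weighted global mean can rise even while one latitude band stays flat. A toy example with invented numbers (not real climate data):

```python
# Toy illustration: a global mean temperature can rise while the
# equatorial band stays flat. All numbers below are invented.

# (area_weight, temp_year1, temp_year2) for three latitude bands
bands = {
    "polar":      (0.2, -20.0, -17.0),  # warms by 3.0
    "temperate":  (0.5,  10.0,  11.5),  # warms by 1.5
    "equatorial": (0.3,  26.0,  26.0),  # unchanged
}

def global_mean(year_index: int) -> float:
    """Area-weighted mean over all bands (year_index: 1 or 2)."""
    return sum(w * temps[year_index - 1]
               for w, *temps in bands.values())

warming = global_mean(2) - global_mean(1)
print(round(warming, 2))  # prints 1.35
```

The global mean rises by 1.35 degrees in this toy even though the equatorial band doesn't move at all; whether that matches the real temperature record is a separate, empirical question.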
 

sono

Gold Member
looks good

prompt: what are the headings that should be present in a software security design specification

Answer
"A software security design specification should comprehensively outline the security measures and protocols of a system. Here are key headings that should typically be included in such a document:
  1. Introduction
    • Purpose of the Document
    • Scope of Security Design
    • Definitions, Acronyms, and Abbreviations
  2. Security Requirements
    • Functional Security Requirements
    • Non-Functional Security Requirements (e.g., performance, scalability)
  3. Security Architecture
    • Overview of System Architecture
    • Security Layers (Network, Application, Data)
    • Security Boundaries and Interfaces
  4. Authentication and Authorization
    • User Authentication Methods
    • Authorization Controls
    • Single Sign-On (SSO), if applicable
  5. Data Security
    • Data Classification
    • Encryption Standards (At Rest, In Transit)
    • Data Integrity and Confidentiality
  6. Access Control
    • Role-Based Access Control (RBAC)
    • Attribute-Based Access Control (ABAC), if used
    • Access Logging and Monitoring
  7. Network Security
    • Firewalls
    • Intrusion Detection/Prevention Systems
    • Virtual Private Networks (VPNs)
  8. Audit and Logging
    • Audit Trails
    • Log Management
    • Security Information and Event Management (SIEM)
  9. Threat Modeling
    • Identified Threats and Vulnerabilities
    • Risk Assessment
    • Mitigation Strategies
  10. Secure Development Practices
    • Secure Coding Guidelines
    • Code Review Processes
    • Use of Security Tools (e.g., static and dynamic analysis tools)
  11. Incident Response
    • Incident Detection
    • Response Procedures
    • Recovery Plans
  12. Compliance
    • Relevant Legal and Regulatory Requirements
    • Industry Standards (e.g., ISO27001, PCI DSS)
  13. Security Testing
    • Types of Tests (Penetration Testing, Vulnerability Assessment)
    • Testing Schedule
    • Test Management
  14. User Security Education
    • Training Programs
    • Awareness Campaigns
  15. Maintenance and Updates
    • Patch Management
    • Version Control and Updates
  16. Disaster Recovery and Business Continuity
    • Backup Strategies
    • Disaster Recovery Plans
  17. Appendices
    • Glossary
    • References
    • Additional Diagrams or Technical Details
These headings ensure that all aspects of security are considered and detailed in the design phase, providing a blueprint for implementing and maintaining secure software. Remember, this list can be tailored based on the specific needs of the software or organization."
 
That is funny as hell. I think it is time to get a Twitter account again.

Edit: but it seems broken :D
Yeah, it seems to have some of the same guardrails as all the others. Some of my renders turned into that "nope" icon partway through despite me not asking for anything explicit. So it's detecting sexual content or whatever.

When I was trying to render some Hitler comedy on some of the other AI sites by tricking them with terms like "strong mid-century German leader with a mustache, friendly", they would do much the same: start rendering, then at some point the other AI is like "Wait, he tricked you, it's fucking Adolf, STOOOOOOP".
 

Polygonal_Sprite

Gold Member
That's the point:

The prompt was not specifically about natural or man-made climate change.

It was about recurring glacial and polar melting events.

As far as I understand today, it is factually misleading to differentiate between man-made climate change and natural climate change. It's the same feedback loop, so logic allows for no differentiation. Milankovitch cycles are the sole trigger for glacial and polar melting. CO2 is a greenhouse gas. It naturally occurs in reservoirs such as oceans, and it affects the feedback loop. CO2 produced by humans increases the pace at which glaciers and poles melt, but it does not cause the melting, nor will zero CO2 emissions prevent it. As a matter of fact, melted glaciers and poles are the common state for planet Earth, since warm periods with melted poles are longer than cold periods with frozen poles.

ChatGPT often doesn't name the Milankovitch cycles as the trigger for climate change on Earth, which allows for no factually correct answer and makes no sense from a logical perspective, since it does name CO2 as the trigger for the greenhouse effect.

The dominant greenhouse gas in Earth's atmosphere is H2O (water vapor). It is estimated that H2O is responsible for 60-70% of the greenhouse effect, whereas CO2 is responsible for 20-30%. Other gases play a minor role. CO2 is unique in this context, though. The Clausius-Clapeyron relation describes how the amount of water vapor the atmosphere can hold is directly related to temperature, since warmer air can hold more humidity. H2O in its gas form is volatile, while CO2 is long-lived. In this feedback loop, CO2 defines the base level of warming while H2O acts as a booster.

As you can see, it makes no sense to name CO2 but leave out the Milankovitch cycles. It is misleading.

You can copy-paste this reasoning into ChatGPT. I did, and while GPT acknowledged the facts, it immediately started the reframing rhetoric:

"While these cycles initiate changes in Earth's climate, they do so by altering the distribution and intensity of solar radiation. However, they cannot explain the recent, rapid warming we observe today. The feedback loops involving greenhouse gases amplify the initial changes triggered by Milankovitch Cycles."

Yeah, no shit. That's exactly what I wrote. And GPT does that throughout the entire answer. It picks a small part of my input, says it doesn't paint the whole picture, and continues to add other parts of my input.

So I told GPT that it was using reframing rhetoric for no reason, since the prompt never dismissed the effect of human CO2 emissions on Earth's climate. It said sorry and wrote a more nuanced answer that was actually helpful.

So after "re-configuring" the sensitive topic trigger, I asked GPT about the causes of the Medieval Warm Period (MWP) and the Little Ice Age (LIA). Those are fascinating events, since during the MWP (900-1300 AD) temperatures in Europe, Greenland and the North Atlantic regions rose at a quicker pace than we see today, and temperatures in these regions were higher than today.

GPT explained everything to me in detail, including the fact that the MWP was a regional event. So I said that temperatures in equatorial regions have been stable throughout the millennia and asked why the phenomenon is global today. And that's where it fucked up. GPT admitted that equatorial regions are stable because they receive near-constant solar radiation and have water cycles with minimal variability, which creates stabilizing feedback mechanisms. But it also claimed that today, temperatures are rising globally compared to the MWP, even in equatorial regions. Which is factually false. So I gave it a second chance and asked if we're talking about a global average temperature that is rising while equatorial temperatures remain stable, and it said, "Yes, that's correct!"

So as you can see, it's impossible to have an insightful talk about climate change with ChatGPT, since OpenAI created very strict boundaries for this topic. The user has to fight for every inch of progress within the chat in order to gain useful insights. That's frustrating.
So strange, as I was wondering about climate change randomly while trying to sleep last night and stopped myself from googling it because I thought I'd never get a fair, unbiased answer. Thanks a lot for that detailed post.
 

Rentahamster

Rodent Whores
As far as I understand today, it is factually misleading to differentiate between man-made climate change and natural climate change.

How so? We can indirectly observe how the Earth worked before humans existed and compare it to how the Earth works now that humans are here and influencing the planet. There's a difference. If one is to talk about climate change, thinking that we'd be seeing the exact same phenomena at work on Earth if humans never existed is not accurate. That's why it's important to understand what would have happened anyway if we weren't around, and what is happening because we are around.
 

Aces High

Gold Member
How so? We can indirectly observe how the Earth worked before humans existed and compare it to how the Earth works now that humans are here and influencing the planet. There's a difference. If one is to talk about climate change, thinking that we'd be seeing the exact same phenomena at work on Earth if humans never existed is not accurate. That's why it's important to understand what would have happened anyway if we weren't around, and what is happening because we are around.
"Man-made climate change" is misleading insofar as it implies that humans can change Earth's climate on their own, which is factually false. Climate change is a natural phenomenon with complex feedback loops. Humans affect the feedback loops of this natural climate change phenomenon.

A more scientific and correct term would be "anthropogenic amplification of climate change".
 

Rentahamster

Rodent Whores
"Man-made climate change" is misleading insofar as it implies that humans can change Earth's climate on their own, which is factually false. Climate change is a natural phenomenon with complex feedback loops. Humans affect the feedback loops of this natural climate change phenomenon.

A more scientific and correct term would be "anthropogenic amplification of climate change".

You're playing a semantics game, which is not a very academic exercise.
 

Aces High

Gold Member
You're playing a semantics game, which is not a very academic exercise.
That's 100% wrong.

Semantics is critically important in science, especially when discussing complex phenomena.

Perfect example is the misleading term "survival of the fittest":

Most people think it means domination of the strongest. Which is factually false.

A scientifically correct wording would be "survival of the best fitting".

Semantics isn't a game. It's the foundation of scientific communication.
 

Rentahamster

Rodent Whores
That's 100% wrong.

Semantics is critically important in science, especially when discussing complex phenomena.

Perfect example is the misleading term "survival of the fittest":

Most people think it means domination of the strongest. Which is factually false.

A scientifically correct wording would be "survival of the best fitting".

Semantics isn't a game. It's the foundation of scientific communication.

There are precisely defined terms that exist already that do the job just fine.
 

FunkMiller

Member
Yes, but "man-made climate change" isn't among them.

What point exactly are you trying to make here?

https://scienceexchange.caltech.edu/topics/sustainability/evidence-climate-change

The Intergovernmental Panel on Climate Change (IPCC), a United Nations organization that includes 195 member countries and thousands of independent scientific experts, published a Synthesis Report in March 2023. It concluded that human activities, principally through greenhouse gas emissions, have unequivocally caused global warming. The main drivers of these emissions are energy use, land use, and the consumption and production of goods.

https://www.ipcc.ch/report/ar6/syr/

Human activities, principally through emissions of greenhouse gases, have unequivocally caused global warming, with global surface temperature reaching 1.1°C above 1850-1900 in 2011-2020. Global greenhouse gas emissions have continued to increase, with unequal historical and ongoing contributions arising from unsustainable energy use, land use and land-use change, lifestyles and patterns of consumption and production across regions,

Man made climate change is unequivocally real, and an accepted term among the scientific community.

Maybe stop trying to make Chat GPT give you the answers you want, and look at where the actual answers already are.
 

Aces High

Gold Member
Man made climate change is unequivocally real, and an accepted term among the scientific community.
Serious question: When was the last time you checked on the scientific community? It's unlikely you ever did, because if you had, you'd know that there is a wide array of hypotheses and interpretations among scientists. Science thrives on debate and refinement, not on unanimity.

The term "man-made climate change" is accepted by parts of the scientific community, but it is also critiqued by others for oversimplifying a complex phenomenon. Scientific progress is not a democratic process. If 99% of scientists claimed the Earth is a disc and 1% said it’s a globe, whom would you believe?

History offers countless examples where scientific consensus was wrong. In the early 20th century, Newtonian physics was the dominant framework until Einstein's theory of relativity fundamentally reshaped our understanding of space and time. Alfred Wegener’s continental drift theory was dismissed for decades before plate tectonics confirmed it. Semmelweis, who advocated handwashing to prevent disease, was mocked by his contemporaries despite later being vindicated.

Advocating for precise terminology is not bad. Holding AI systems to the highest possible standards is important.
 

FunkMiller

Member
Serious question: When was the last time you checked on the scientific community? It's unlikely you ever did, because if you had, you'd know that there is a wide array of hypotheses and interpretations among scientists. Science thrives on debate and refinement, not on unanimity.

The term "man-made climate change" is accepted by parts of the scientific community, but it is also critiqued by others for oversimplifying a complex phenomenon. Scientific progress is not a democratic process. If 99% of scientists claimed the Earth is a disc and 1% said it’s a globe, whom would you believe?

History offers countless examples where scientific consensus was wrong. In the early 20th century, Newtonian physics was the dominant framework until Einstein's theory of relativity fundamentally reshaped our understanding of space and time. Alfred Wegener’s continental drift theory was dismissed for decades before plate tectonics confirmed it. Semmelweis, who advocated handwashing to prevent disease, was mocked by his contemporaries despite later being vindicated.

Advocating for precise terminology is not bad. Holding AI systems to the highest possible standards is important.

There is not - at all - any debate among the serious scientific community that man made climate change is real.

Stop looking at stuff you see on Facebook, and listening to laymen who don't understand what they're talking about but definitely do have an agenda. Engage with the science. I've provided you with the link to the IPCC report. Those are the experts. Many, many, many, many, many of them. They overwhelmingly agree.
 

Dr.Morris79

Member
What's a good AI where I can feed pictures into it and make a short film? Is there such a thing? Doesn't matter if I have to sub.

I want to make stuff like this



But with my cats. What are they using for that?
 

Rentahamster

Rodent Whores
If 99% of scientists claimed the Earth is a disc and 1% said it’s a globe, whom would you believe?
I would believe the group that has actual verified empirical evidence to back up their claims. The "Earth is a globe" group has that. The "Earth is warming up due to human activity" group has that as well.

While ChatGPT isn't perfect, it sounds like the bias was not on its part in this case. You just didn't like the answer because of your own opinions on the topic. Don't complain that "I'm just trying to learn the truth" if you're performing backflips to manipulate the prompt into forcing the answer you want.

When you start an academic inquiry by looking for the answer you already believe in, that leads to bias. Instead, observe the evidence first, then draw a conclusion from it.
 