Report: EA's internal AI is causing issues with games development

IbizaPocholo

NeoGAFs Kent Brockman

TL;DR: Electronic Arts is aggressively integrating AI, including its generative chatbot ReefGPT, to automate game development and reduce costs. However, experimental AI tools cause coding errors and "hallucinations," increasing workload and employee anxiety amid industry layoffs. AI's full impact on gaming remains uncertain despite ongoing investments.

The reports underscore a bristling friction between EA's developers and executive management's AI mandates. Sources tell the publication that EA's internal chatbot, ReefGPT, has written erroneous code that has caused issues for developers. Others say the AI tools produce "hallucinations" that workers must go in and manually fix.

One of the biggest sources of worker anxiety is the pervasive feeling that employees are simply training their automated AI replacements. The reports say this is true for EA employees.
 
soon...........

terminator2-materialize.gif
 
We are entering the stage of a new industrial revolution, if we aren't in it already; the world as we know it will change 180° in the next 5 years.

AI is welcome as a tool if it lowers costs and helps reduce processing times, but not as a replacement for manual labor.

I imagine that in the next few years there will be some rejection of games made with AI (not the entire game, but just a portion, such as audio, text, etc.).

PS: I'm from Argentina. We recently had a case where a judge's ruling was declared null and void because it was written with ChatGPT. How did the lawyers realize this? The ruling had sections that said, "Here you have point IV re-edited, without citations and ready to copy and paste:" LINK IF YOU WANT TO READ.
 
Coders will transition into more specialised debugging roles as AI continues to make code changes; AI hallucinations are an inherent component of the technology and cannot be removed, so most of their time will be spent integrating AI code and fixing the issues.
Artists will transition into a guidance role, with those better able to write prompts that the AI can interpret correctly keeping their jobs.
Designers and audio engineers will be safe for a while.
And management, of course, has nothing to fear.

And yet, I imagine nothing ultimately changes for the gamer.
 
Growing pains.
Sure, but toward what end? While workers are training AI to make their current job obsolete, who's training the workers for their future job?

If there isn't a clear answer to that, why should workers cooperate?
There's a lot of talk about the future when it comes to AI, but when it comes to employee security, not much, if anything, is being done.
 
"One of the biggest sources of worker anxiety is the pervasive feeling that employees are simply training their automated AI replacements."

I mean this is inevitable. Corps will say everything is peachy and we're just using AI as a supplement, but nah... it's most definitely going to replace people.
 
So, it's just like every other company that's "focused on embracing AI" then:
  • Executive management doesn't understand tech. They're just listening and watching the trends, and they have FOMO.
  • Executive management sets new policies around using AI, develops KPIs and OKRs around it.
  • Middle management communicates these changes and goals to the SMEs, who tell the middle managers the Exec team has a fundamental misunderstanding of the tech, and what they're trying to do won't work.
  • Middle management doesn't want to get fired, so they "go along with it" and encourage the SMEs to document why it isn't working.
  • Time passes, KPIs and OKRs are not met, and Middle management tries to explain why.
  • Exec team either says "keep trying until it works" or "we're replacing you with someone who can make this work".
  • Eventually, the company fails to hit their milestones and the Exec team is forced to change direction.
  • Exec team does not admit fault for pushing the company down a path of using AI that they failed to understand. Exec team cannot admit to themselves, or anyone else, that they got taken for a ride by the fool's gold salesman.
  • Exec team moves on to the next thing and creates new KPIs and OKRs.
  • Whatever is left of Middle management and SMEs are cynical and resentful of the Exec team for not taking ownership of the failure. Products/Services suffer, revenue suffers, possibly more layoffs, etc.
  • Exec members take golden parachutes and move onto the next company where they rinse and repeat.
I'm a middle manager. Thankfully, I work for a small business where I can explain all of this to the Exec team and they actually listen. We are paying for some AI subscriptions and my Dev team is getting some value out of them, but I was able to make it really clear to the Exec team it's not going to lower head count. For the type of work my team does, it's maybe bumped their output by 5%.
 
Yep, AI coding inevitably produces enough slop that technical debt accumulates around code maintenance and security issues.

On top of that, fixes require more senior coding skills and junior coders don't get proper experience in problem solving since AI does it for them.

I suspect a lot of code bases will become more and more unmaintainable down the road.
 
This is absolute insanity. Holy crap, what a nightmare. How could it all go so wrong?

These cheeseheads just... jump on the latest bandwagon. Any bandwagon, no matter where it's going. Like lemmings, they go flying over the precipice into endless perdition. Just because everybody was going the same way, and they didn't want to be left behind.

Ludicrous.

All of this goes away if they just find their balls and straight-up illegalize so-called "generative" AI. The legal basis is there: LLMs are trained on copyrighted material, and fever-dream graphic "generation" is nothing more than large-scale piracy. Just kill it already so we can move on.
 
AI isn't good enough for programming itself yet but is good enough to use as a quick refresh on documentation.
 
It's a good supplementary tool if you know how to use it correctly.

We're far from it becoming a full-on replacement, if ever. The hallucinations and errors it produces have to be picked up and corrected by trained individuals.

I think ChatGPT admitted that their platform will always have hallucinations, given how it's programmed.

The companies that don't realize this are going to go through a lot of pain, while they scramble to hire people to fix everything.
 
As someone who uses AI every single day to help with scripting (mainly AE scripting for motion effects and such), I 100% know it's nowhere near ready for any sort of major role other than a "helper" in any sort of development. It's great for getting started and throwing ideas into, it's great for giving you a head start toward where you need to get to, and it can *sometimes* be a great troubleshooter and error checker. But it is in no way a replacement for anyone; it's way too error-prone and dependent on existing knowledge pools that are often either wrong or not quite right for the use case of making something new (like game development requires). I hate so much how executives think it's the end-all/be-all and don't understand the simplest thing about what it's doing or what they're asking of it.
 
Coders will transition into more specialised debugging roles as AI continues to make code changes; AI hallucinations are an inherent component of the technology and cannot be removed, so most of their time will be spent integrating AI code and fixing the issues.
Artists will transition into a guidance role, with those better able to write prompts that the AI can interpret correctly keeping their jobs.
Designers and audio engineers will be safe for a while.
And management, of course, has nothing to fear.

And yet, I imagine nothing ultimately changes for the gamer.

The bolded is 100% WRONG!!!! For gamers, the impact will be felt as the industry has several years of misguided experimentation, driven by unrealistic expectations around AI replacing 50% of all developers. This period will likely result in missed opportunities, including the cancellation of games due to poor leadership and stupid money being thrown around improperly.

We kinda saw a smaller version of this with Jim Ryan and his dumb agenda to make 60% of game spending go to live-service games. Now multiply that by 100x.
 
However, experimental AI tools cause coding errors and "hallucinations"
It's not only experimental AI tools, it's the "established" ones as well. They can produce very well-structured, commented, all-that-jazz code. And they can also produce code that's fundamentally broken, or more deviously, has just one possible edge case/error that is easy to miss on a cursory inspection.

Which means, in order to accept that code (submit a patch/clear a ticket) you have to a) be able to understand what the code tries to do, and b) be able to debug it. At which point, why are you using AI in the first place?
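A minimal, hypothetical Python sketch of that kind of deviously subtle bug (the function names and scenario are invented for illustration): the generated code reads cleanly, is commented, and passes a cursory review, yet breaks on one edge case a reviewer still has to catch themselves.

```python
# Hypothetical AI-style output: clean, commented, plausible-looking.
def average_frame_time(samples):
    """Return the average frame time (ms) from a list of samples."""
    return sum(samples) / len(samples)  # crashes on an empty list

# The fix a reviewer still has to understand and write themselves:
def average_frame_time_safe(samples):
    """Same as above, but handles the empty-sample edge case."""
    if not samples:
        return 0.0
    return sum(samples) / len(samples)
```

The broken version works on every "normal" input, which is exactly why it survives a cursory inspection.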
 
As someone who uses AI every single day to help with scripting (mainly AE scripting for motion effects and such), I 100% know it's nowhere near ready for any sort of major role other than a "helper" in any sort of development. It's great for getting started and throwing ideas into, it's great for giving you a head start toward where you need to get to, and it can *sometimes* be a great troubleshooter and error checker. But it is in no way a replacement for anyone; it's way too error-prone and dependent on existing knowledge pools that are often either wrong or not quite right for the use case of making something new (like game development requires). I hate so much how executives think it's the end-all/be-all and don't understand the simplest thing about what it's doing or what they're asking of it.

These execs want to grow profit margins so bad that many of them aren't listening to the rm082e's of the world within their businesses. They need to PUMP UP DAT STOCK PRICE!


It's not only experimental AI tools, it's the "established" ones as well. They can produce very well-structured, commented, all-that-jazz code. And they can also produce code that's fundamentally broken, or more deviously, has just one possible edge case/error that is easy to miss on a cursory inspection.

Which means, in order to accept that code into production you have to a) be able to understand what the code tries to do, and b) be able to debug it. At which point, why are you using AI in the first place?

The answer is: to speed up production. And it does speed up production. For me it allows me to do something in 15 minutes that would normally take me an hour to write, debug, and test. Here's an example: at my job we changed our Microsoft 365 license to E5. When we did that, everyone lost the ability to have "Auto Suggestions" in Outlook when they try to email someone. So I had CoPilot (using ChatGPT's backend) code up a script to make the changes in the registry, since our IT Director didn't want to make any changes to the Group Policy. Below in the quote is what it wrote up. I made some small adjustments, but it helps a lot.

# Detect installed Office version
$officeVersions = @("16.0", "15.0", "14.0")
$foundVersion = $null

foreach ($version in $officeVersions) {
    $regPath = "HKCU:\Software\Microsoft\Office\$version\Outlook\Preferences"
    if (Test-Path $regPath) {
        $foundVersion = $version
        break
    }
}

if ($foundVersion -ne $null) {
    $regPath = "HKCU:\Software\Microsoft\Office\$foundVersion\Outlook\Preferences"
    $propertyName = "ShowAutoSug"

    Write-Host "- Detected Office version: $foundVersion"
    Write-Host "- Setting ShowAutoSug to 1 in $regPath"

    # Set the ShowAutoSug registry value to enable Auto-Complete
    Set-ItemProperty -Path $regPath -Name $propertyName -Value 1

    # Double-check the value
    $valueName = "ShowAutoSug"

    try {
        $value = Get-ItemProperty -Path $regPath -Name $valueName -ErrorAction Stop
        Write-Output "- After double checking, the value of '$valueName' is set to: $($value.$valueName)"
    } catch {
        Write-Output "The registry key or value '$valueName' does not exist under '$regPath'."
    }

    Write-Host "- Auto-Complete List has been enabled successfully. Moving on to the next registry location change."
} else {
    Write-Host "No supported Office version found in registry."
}

# Update the registry change in the 2nd location
$registryPath = "HKCU:\Software\Policies\Microsoft\Office\16.0\Outlook\Preferences"
$propertyName = "ShowAutoSug"

# Check if the registry path exists
if (Test-Path $registryPath) {
    # Set the ShowAutoSug value to 1
    Set-ItemProperty -Path $registryPath -Name $propertyName -Value 1
    Write-Output "- Successfully set '$propertyName' to 1 under '$registryPath'"
} else {
    Write-Output "Registry path '$registryPath' does not exist."
}

# Double-check what the "Show Auto Suggestion" value is in the registry and write it out for the end user
$registryPath = "HKCU:\Software\Policies\Microsoft\Office\16.0\Outlook\Preferences"
$valueName = "ShowAutoSug"

try {
    $value = Get-ItemProperty -Path $registryPath -Name $valueName -ErrorAction Stop
    Write-Output "- After double checking, the value of '$valueName' is set to: $($value.$valueName). Auto suggestions should be working perfectly now.

Thank You!

:)"
} catch {
    Write-Output "The registry key or value '$valueName' does not exist under '$registryPath'."
}
 
It's not only experimental AI tools, it's the "established" ones as well. They can produce very well-structured, commented, all-that-jazz code. And they can also produce code that's fundamentally broken, or more deviously, has just one possible edge case/error that is easy to miss on a cursory inspection.

Which means, in order to accept that code into production you have to a) be able to understand what the code tries to do, and b) be able to debug it. At which point, why are you using AI in the first place?

I use it for writing narration scripts (which is far simpler than complex coding) and I can't tell you how many times I wished that I wrote the damn script myself lol

Usually it's a good starting point or can even get me 60-80% there, but I have to fix so much.

And I can definitely tell when others are not fixing it themselves.
 
I now get those automatic AI summaries from Google when I search, and it's total nonsense or outdated info probably 70% of the time. The other 30% of the time, it gives me the information I asked for. Sounds great, right? Well, I used to get the info I asked for from a simple Google search 10-20 years ago, before they turned it into shit. They have spent hundreds of billions of dollars on this boondoggle to do the same thing they figured out how to do decades ago.
 
The answer is: to speed up production. And it does speed up production. For me it allows me to do something in 15 minutes that would normally take me an hour to write, debug, and test. Here's an example: at my job we changed our Microsoft 365 license to E5. When we did that, everyone lost the ability to have "Auto Suggestions" in Outlook when they try to email someone. So I had CoPilot (using ChatGPT's backend) code up a script to make the changes in the registry, since our IT Director didn't want to make any changes to the Group Policy. Below in the quote is what it wrote up. I made some small adjustments, but it helps a lot.
I'm glad that it works out for you. But what you're describing is pretty mechanical and a batch job. I'm mostly a frontend dev, and even as a lowly web monkey, there can be so many balls to juggle simultaneously that it's hard to describe a problem to an AI, let alone for it to come up with a reasonable solution. Game development can be (I'd argue factually it is 99% of the time) far more complicated.

AI has replaced Stackoverflow for me when it comes to quick and dirty bash scripts, granted. But once things get more complicated, it's not so straightforward.
 
Are they trying to find out who keeps telling it to create big titty goth GF as NPC's assets for their projects or something?
I'm 10000% onboard with an AI that adds big titty gothgirls to EVERYTHING.

"Shall I replace all your co-workers images in Zoom with Big Titty Gothgirls (BTGs)?" YES

"Shall I adjust your augmented reality images so every person you see is a BTG?" YES

"Shall I make your pets look like BTGs?" YES

"Shall I make every car on the road look like a BTGs?" YES
 