Report: EA's internal AI is causing issues with game development

IbizaPocholo

NeoGAFs Kent Brockman

TL;DR: Electronic Arts is aggressively integrating AI, including its generative chatbot ReefGPT, to automate game development and reduce costs. However, experimental AI tools cause coding errors and "hallucinations," increasing workload and employee anxiety amid industry layoffs. AI's full impact on gaming remains uncertain despite ongoing investments.

The reports underscore bristling friction between EA's developers and executive management's AI mandates. Sources tell the publication that EA's internal chatbot, ReefGPT, has written erroneous code that has caused issues for developers. Others say that the AI tools produce "hallucinations" that workers must go in and manually fix.

One of the biggest sources of worker anxiety is the pervasive feeling that employees are simply training their automated AI replacements. According to the reports, that is exactly the case for EA employees.
 
soon...........

terminator2-materialize.gif
 
We are entering the stage of a new industrial revolution—if we aren't already—the world as we know it will change 180° in the next 5 years.

AI is welcome as a tool if it lowers costs and helps reduce processing times, but not as a replacement for manual labor.

I imagine that in the next few years there will be some rejection of games made with AI (not the entire game, but just a portion, such as audio, text, etc.).

PS: I'm from Argentina. We recently had a case where a judge's ruling was declared null and void because it was written in CHATGPT. How did the lawyers realize this? The ruling had sections that said, "Here you have point IV re-edited, without citations and ready to copy and paste:" LINK IF YOU WANT TO READ.
 
Coders will transition into more specialised debug roles as AI continues to make code changes; AI hallucinations are an inherent component of the technology and cannot be removed, so most of their time will be spent integrating AI code and fixing the issues.
Artists will transition into a guidance role, with those better able to write prompts that the AI can interpret correctly keeping their jobs.
Designers and audio engineers will be safe for a while.
And management, of course, has nothing to fear.

And yet, I imagine nothing ultimately changes for the gamer.
 
Growing pains.
Sure, but toward what end? While workers are training AI to make their current job obsolete, who's training the workers for their future job?

If there isn't a clear answer to that, why should workers cooperate?
There's a lot of talk about the future when it comes to AI, but when it comes to employee security, not much, if anything, is being done.
 
"One of the biggest sources of worker anxiety is the pervasive feeling that employees are simply training their automated AI replacements."

I mean this is inevitable. Corps will say everything is peachy and we're just using AI as a supplement, but nah... it's most definitely going to replace people.
 
So, it's just like every other company that's "focused on embracing AI" then:
  • Executive management doesn't understand tech. They're just listening and watching the trends, and they have FOMO.
  • Executive management sets new policies around using AI, develops KPIs and OKRs around it.
  • Middle management communicates these changes and goals to the SMEs, who tell the middle managers the Exec team has a fundamental misunderstanding of the tech, and what they're trying to do won't work.
  • Middle management doesn't want to get fired, so they "go along with it" and encourage the SMEs to document why it isn't working.
  • Time passes, KPIs and OKRs are not met, and Middle management tries to explain why.
  • Exec team either says "keep trying until it works" or "we're replacing you with someone who can make this work".
  • Eventually, the company fails to hit their milestones and the Exec team is forced to change direction.
  • Exec team does not admit fault for pushing the company down a path of using AI that they failed to understand. Exec team cannot admit to themselves, or anyone else that they got taken for a ride by the fool's gold salesman.
  • Exec team moves on to the next thing and creates new KPIs and OKRs.
  • Whatever is left of Middle management and SMEs are cynical and resentful of the Exec team for not taking ownership of the failure. Products/Services suffer, revenue suffers, possibly more layoffs, etc.
  • Exec members take golden parachutes and move on to the next company, where they rinse and repeat.
I'm a middle manager. Thankfully, I work for a small business where I can explain all of this to the Exec team and they actually listen. We are paying for some AI subscriptions and my Dev team is getting some value out of them, but I was able to make it really clear to the Exec team it's not going to lower head count. For the type of work my team does, it's maybe bumped their output by 5%.
 
Yep, AI coding inevitably produces enough slop that technical debt around code maintenance and security issues accumulates.

On top of that, fixes require more senior coding skills and junior coders don't get proper experience in problem solving since AI does it for them.

I suspect a lot of code bases will become more and more unmaintainable down the road.
 
This is absolute insanity. Holy crap, what a nightmare. How could it all go so wrong?

These cheeseheads just... jump on the latest bandwagon. Any bandwagon, no matter where it's going. Like lemmings, they go flying over the precipice into endless perdition. Just because everybody was going the same way, and they didn't want to be left behind.

Ludicrous.

All of this goes away if they just find their balls and straight-up outlaw so-called "generative" AI. The legal basis is there: LLMs are trained on copyrighted material, and fever-dream graphic "generation" is nothing more than large-scale piracy. Just kill it already so we can move on.
 
It's a good supplementary tool if you know how to use it correctly.

We're far from it becoming a full-on replacement, if ever. The hallucinations and errors it produces have to be picked up and corrected by trained individuals.

I think ChatGPT admitted that their platform will always have hallucinations, given how it's programmed.

The companies that don't realize this are going to go through a lot of pain, while they scramble to hire people to fix everything.
 
As someone who utilizes AI every single day to help with scripting (mainly AE scripting for motion effects and such), I 100% know it's nowhere near ready for any sort of major role other than a "helper" in any sort of development. It's great for getting started and throwing ideas into it, it's great for giving you a head start toward where you need to get to, and it *sometimes* can be a great troubleshooter and error checker, but it is in no way a replacement for anyone. It's way too error-prone and dependent on existing knowledge pools that are often either wrong or not quite right for the use case of making something new (like game development would require). I hate so much how executives think it's the end-all/be-all and don't understand the simplest thing about what it's doing or what they are asking of it.
 
Coders will transition into more specialised debug roles as AI continues to make code changes; AI hallucinations are an inherent component of the technology and cannot be removed, so most of their time will be spent integrating AI code and fixing the issues.
Artists will transition into a guidance role, with those better able to write prompts that the AI can interpret correctly keeping their jobs.
Designers and audio engineers will be safe for a while.
And management, of course, has nothing to fear.

And yet, I imagine nothing ultimately changes for the gamer.

The bolded is 100% WRONG!!!! For gamers, the impact will be felt as the industry has several years of misguided experimentation, driven by unrealistic expectations around AI replacing 50% of all developers. This period will likely result in missed opportunities, including the cancellation of games due to poor leadership and stupid money being thrown around improperly.

We kinda saw a smaller version of this with Jim Ryan and his dumb agenda to put 60% of game spending into Live Service games. Now multiply that by a factor of 100x.
 
However, experimental AI tools cause coding errors and "hallucinations"
It's not only experimental AI tools; it's the "established" ones as well. They can produce very well-structured, commented, all-that-jazz code. And they can also produce code that's fundamentally broken, or, more deviously, just one possible edge case/error that is easy to miss on a cursory inspection.

Which means, in order to accept that code (submit a patch/clear a ticket) you have to a) be able to understand what the code tries to do, and b) be able to debug it. At which point, why are you using AI in the first place?
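To make that "easy to miss on a cursory inspection" failure mode concrete, here's a hypothetical, minimal sketch (in Python, with made-up function names, not anything from the report) of the kind of bug described above: the function is short, commented, and looks reasonable, but silently drops data on one edge case.

```python
def chunk(items, size):
    """Split a list into consecutive chunks of `size` elements."""
    # Looks plausible, but integer division silently drops any final
    # partial chunk: a list of 5 items with size 2 loses the last element.
    return [items[i * size:(i + 1) * size] for i in range(len(items) // size)]

def chunk_fixed(items, size):
    """Correct version: step through the list by `size`, keeping the remainder."""
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk([1, 2, 3, 4, 5], 2))        # [[1, 2], [3, 4]]  <- the 5 vanished
print(chunk_fixed([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Both versions pass a quick glance and any test with evenly sized input; only someone who understands what the code should do, and can debug it, catches the difference. Which is the poster's point.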
 
As someone who utilizes AI every single day to help with scripting (mainly AE scripting for motion effects and such), I 100% know it's nowhere near ready for any sort of major role other than a "helper" in any sort of development. It's great for getting started and throwing ideas into it, it's great for giving you a head start toward where you need to get to, and it *sometimes* can be a great troubleshooter and error checker, but it is in no way a replacement for anyone. It's way too error-prone and dependent on existing knowledge pools that are often either wrong or not quite right for the use case of making something new (like game development would require). I hate so much how executives think it's the end-all/be-all and don't understand the simplest thing about what it's doing or what they are asking of it.

These execs want to grow profit margins so bad that many of them aren't listening to the rm082es of the world within their businesses. They need to PUMP UP DAT STOCK PRICE!


It's not only experimental AI tools; it's the "established" ones as well. They can produce very well-structured, commented, all-that-jazz code. And they can also produce code that's fundamentally broken, or, more deviously, just one possible edge case/error that is easy to miss on a cursory inspection.

Which means, in order to accept that code into production you have to a) be able to understand what the code tries to do, and b) be able to debug it. At which point, why are you using AI in the first place?

The answer is: to speed up production. And it does speed up production. For me, it allows me to do something in 15 minutes that would normally take me an hour to write, debug, and test. Here's an example: at my job we changed our Microsoft 365 license to E5. When we did that, everyone lost the ability to have "Auto Suggestions" in Outlook when they try to email someone. So I had Copilot (using ChatGPT's backend) code up a script to make the changes in the registry, since our IT Director didn't want to make any changes to the Group Policy. Below in the quote is what it wrote up. I made some small adjustments, but it helps a lot.

# Detect installed Office version
$officeVersions = @("16.0", "15.0", "14.0")
$foundVersion = $null

foreach ($version in $officeVersions) {
    $regPath = "HKCU:\Software\Microsoft\Office\$version\Outlook\Preferences"
    if (Test-Path $regPath) {
        $foundVersion = $version
        break
    }
}

if ($foundVersion -ne $null) {
    $regPath = "HKCU:\Software\Microsoft\Office\$foundVersion\Outlook\Preferences"
    $propertyName = "ShowAutoSug"

    Write-Host "- Detected Office version: $foundVersion"
    Write-Host "- Setting ShowAutoSug to 1 in $regPath"

    # Set the ShowAutoSug registry value to enable Auto-Complete
    Set-ItemProperty -Path $regPath -Name $propertyName -Value 1

    # Double-check the value
    $valueName = "ShowAutoSug"

    try {
        $value = Get-ItemProperty -Path $regPath -Name $valueName -ErrorAction Stop
        Write-Output "- After double checking, the value of '$valueName' is set to: $($value.$valueName)"
    } catch {
        Write-Output "The registry key or value '$valueName' does not exist under '$regPath'."
    }

    Write-Host "- Auto-Complete List has been enabled successfully. Moving on to the next registry location change."
} else {
    Write-Host "No supported Office version found in registry."
}

# Update the registry change in the 2nd location
$registryPath = "HKCU:\Software\Policies\Microsoft\Office\16.0\Outlook\Preferences"
$propertyName = "ShowAutoSug"

# Check if the registry path exists
if (Test-Path $registryPath) {
    # Set the ShowAutoSug value to 1
    Set-ItemProperty -Path $registryPath -Name $propertyName -Value 1
    Write-Output "- Successfully set '$propertyName' to 1 under '$registryPath'"
} else {
    Write-Output "Registry path '$registryPath' does not exist."
}

# Double-check the value of "Show Auto Suggestion" in the registry and report it to the end user
$registryPath = "HKCU:\Software\Policies\Microsoft\Office\16.0\Outlook\Preferences"
$valueName = "ShowAutoSug"

try {
    $value = Get-ItemProperty -Path $registryPath -Name $valueName -ErrorAction Stop
    Write-Output "- After double checking, the value of '$valueName' is set to: $($value.$valueName). Auto suggestions should be working perfectly now.

Thank You!

:)"
} catch {
    Write-Output "The registry key or value '$valueName' does not exist under '$registryPath'."
}
 
It's not only experimental AI tools; it's the "established" ones as well. They can produce very well-structured, commented, all-that-jazz code. And they can also produce code that's fundamentally broken, or, more deviously, just one possible edge case/error that is easy to miss on a cursory inspection.

Which means, in order to accept that code into production you have to a) be able to understand what the code tries to do, and b) be able to debug it. At which point, why are you using AI in the first place?

I use it for writing narration scripts (which is far simpler than complex coding) and I can't tell you how many times I wished that I wrote the damn script myself lol

Usually it's a good starting point or can even get me 60-80% there, but I have to fix so much.

And I can definitely tell when others are not fixing it themselves.
 
I now get those automatic AI summaries from Google when I search, and it's total nonsense or outdated info probably 70% of the time. The other 30%, it gives me the information I asked for. Sounds great, right? Well, I used to get the info I asked for from a simple Google search 10-20 years ago, before they turned it into shit. They have spent hundreds of billions of dollars on this boondoggle to do the same thing they figured out how to do decades ago.
 
The answer is: to speed up production. And it does speed up production. For me, it allows me to do something in 15 minutes that would normally take me an hour to write, debug, and test. Here's an example: at my job we changed our Microsoft 365 license to E5. When we did that, everyone lost the ability to have "Auto Suggestions" in Outlook when they try to email someone. So I had Copilot (using ChatGPT's backend) code up a script to make the changes in the registry, since our IT Director didn't want to make any changes to the Group Policy. Below in the quote is what it wrote up. I made some small adjustments, but it helps a lot.
I'm glad that it works out for you. But what you're describing is pretty mechanical and a batch job. I'm mostly a frontend dev, and even as a lowly web monkey, there can be so many balls to juggle simultaneously that it's hard to describe a problem to an AI, let alone for it to come up with a reasonable solution. Game development can be (I'd argue factually it is 99% of the time) far more complicated.

AI has replaced Stackoverflow for me when it comes to quick and dirty bash scripts, granted. But once things get more complicated, it's not so straightforward.
 
Are they trying to find out who keeps telling it to create big titty goth GFs as NPC assets for their projects or something?
I'm 10000% onboard with an AI that adds big titty gothgirls to EVERYTHING.

"Shall I replace all your co-workers images in Zoom with Big Titty Gothgirls (BTGs)?" YES

"Shall I adjust your augmented reality images so every person you see is a BTG?" YES

"Shall I make your pets look like BTGs?" YES

"Shall I make every car on the road look like a BTG?" YES
 
I'm glad that it works out for you. But what you're describing is pretty mechanical and a batch job. I'm mostly a frontend dev, and even as a lowly web monkey, there can be so many balls to juggle simultaneously that it's hard to describe a problem to an AI, let alone for it to come up with a reasonable solution. Game development can be (I'd argue factually it is 99% of the time) far more complicated.

AI has replaced Stackoverflow for me when it comes to quick and dirty bash scripts, granted. But once things get more complicated, it's not so straightforward.
For me it helps substantially in AE scripting. Normally a motion artist's workflow ends up being like this:

I need to create a unique animation move and make it something templated that can be duplicated, so you create it as a script in AE. Making a complex one can take a few hours. You always start by first seeing if it's been done before (10 minutes of research); then, when you find it hasn't, you start writing things out and testing them in iterations until you get it right. This process can take HOURS because of all the things that can go wrong.

Using AI helps reduce it down to minutes (sometimes seconds) so long as you are clear in defining the goals. It does the research for you (it's great at that part) to check whether a pre-existing script that could serve as a foundation already exists. Then it creates what it thinks you want (which is wrong 75% of the time, but it gives you a start), and you keep working with it to get things right and tweak little values to make it perfect. So a process that used to take me on average 2-3 hours now takes me about 10-20 minutes, allowing me to do the things I enjoy more, faster. It doesn't work all the time, but enough of the time that it has made my life better. And it's getting better every month, it feels like. And AI-based rotoscoping is quickly getting super good; that'll be another area where I'll happily use it to get a head start and knock hours off a project.

So for my use, it's a good tool, when treated as a tool and not a coworker. What it looks like EA wants to do, though, is a "let it loose, fire and forget" model, which, holy hell, would be ultra destructive.
 
TL;DR: Electronic Arts is aggressively integrating AI, including its generative chatbot ReefGPT, to automate game development and reduce costs. However, experimental AI tools cause coding errors and "hallucinations," increasing workload and employee anxiety amid industry layoffs. AI's full impact on gaming remains uncertain despite ongoing investments
Same as with any other coding in any field. Automation is not perfect, it causes errors, and there will always be people double-checking after the machine.

We are entering the stage of a new industrial revolution—if we aren't already—the world as we know it will change 180° in the next 5 years.
This revolution has been going on for the last 30 years; it's just that now the IT guys are getting the same treatment as the jobs they helped eliminate in past phases, like back-office, accounting, and documentation roles.
 
AI can save you time, but if you try to force a task it is not great with, it can massively waste your time as well. It can often give you tons of working code or documentation, and at other times you will waste hours debugging something that clearly doesn't work.
 
AI can save you time, but if you try to force a task it is not great with, it can massively waste your time as well. It can often give you tons of working code or documentation, and at other times you will waste hours debugging something that clearly doesn't work.
Yep! And I've also found it doesn't do things as efficiently as it could. So many scripts it's written me are full of junk that adds crazy unneeded things: for me, resulting in longer render times, or a crazy number of unneeded keyframes on a motion that just overcomplicate tweaking it, or things getting sent to software render instead of being able to hit the GPU because it didn't follow best practices. Luckily for me, nothing I do is overly time-dependent, but for game development? Yikes.
 
Yep! And I've also found it doesn't do things as efficiently as it could. So many scripts it's written me are full of junk that adds crazy unneeded things: for me, resulting in longer render times, or a crazy number of unneeded keyframes on a motion that just overcomplicate tweaking it, or things getting sent to software render instead of being able to hit the GPU because it didn't follow best practices. Luckily for me, nothing I do is overly time-dependent, but for game development? Yikes.
Yeah, Claude is the king of overcomplication. It will offer me tons of scripting to do a rather meaningless task, and will keep iterating on it by adding further complexity. When I bothered to look at the documentation myself, the task was simple to do by calling a specific module directly.
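As a hypothetical illustration of that overcomplication pattern (in Python, with made-up function names, not the poster's actual scripts): the AI-style answer hand-rolls a loop for a job the standard library already exposes as a single call.

```python
import shutil

def copy_file_verbose(src, dst, chunk_size=65536):
    """Over-engineered, AI-style version: a manual chunked read/write loop,
    reimplementing what the standard library already provides."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)

def copy_file_simple(src, dst):
    """What five minutes with the docs turns up: one call to an existing module."""
    shutil.copyfile(src, dst)
```

Both do the same job; the second is the one you actually want to maintain.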
 
To me, the question about "lost jobs" is usually discussed from a wrong perspective. It's not about whether or not a person can be replaced by a machine but about the quality of the results or product you get with this replacement.

Let's say an AI can do the work of 100 people and obtain the same results. Then it's obvious that those jobs didn't contribute any value to the end product. You had people doing a mechanical job, working like machines, so it's inevitable that they were replaced by a true machine, one that is more efficient.

However, a creative job needs a human mind, one with self awareness and creativity. Replacing that with AI would be like going from high cuisine to junk food. Sure, you get an edible product, but the quality is nowhere close. There's an objective loss of value.

I'm not paying one cent for a worse product just because greedy execs want to get richer.
 
HOW COULD WE HAVE SEEN THIS COMING?

I was told it was magic.

I was told that AI's not going to take my job, somebody using AI will.

I was told this by the most reliable web3 hucksters.

Why would they lie to me?!!?
 
I am going to start this off by saying AI is a powerful tool in quite a few use cases and industries. Revolutionary, infact. No sarcasm.

Now... AI is the latest corporate/investor trend, one that is arguably meant to replace a workforce. But the fact that it is 'trendy' right now means every clueless exec wants in on it, and they wanted it yesterday. The rush to roll out and figure out AI is causing so many issues. And then there are the rising grifters trying to sell these companies on how AI can help. AI can help, but since this is a trend and a gold rush, these "AI experts" are selling them snake oil. Swapping established systems for this new and sexy AI is breaking things in some places. It's hammering a square peg into a round hole for the sake of having AI.

Guess who suffers though? The rank and file employees and the consumers. The grifting AI experts and the Execs will fail upwards.
 
This doesn't mean much until they fully discard this initiative.

EA's top brass is full of blockhead suits who trend-chase shit without much consideration of its impact. AI is the new buzzword right now among these C-suite types, and they overeagerly dove straight into it, head first. It's no surprise the AAA sector is f'ed up when these kinds of people are running the show and pushing down thoughtless mandates. It's almost a wonder they've managed to survive this long. Then again, maybe not, given how some of these AAA companies experienced a takeover during their prime, which resulted in these buffoons grabbing the wheel while the reputation was hot. Good lord, too many mindless people in this biz are in charge of too much money. When the AI mania among execs bursts, it's going to be a sight to behold.

Anyone with hands-on experience and some technological background knowledge about current AI could foresee this. AI can solve some simple tasks and code with minor complexity, but it begins to falter once that complexity increases exponentially. It entangles and creates disjointed solutions which only causes erroneous issues to pile up. If anything, it creates costs and increases unnecessary expenses letting money go to waste.
 
Coders will transition into more specialised debug roles...
Artists will transition into a guidance role...
Designers and audio engineers will be safe for a while.
And management, of course, has nothing to fear.

Wonderful. The one pain point common to every project (besides budget), the one aspect even successful productions could improve upon and that unsuccessful productions crumble under no matter the talent involved: THAT'S what the machines can't take over.


(There is management AI software and tools already on the market, it will come for everybody eventually, but point taken that somebody's got to tell the machines to tell the other machines what to do. Unless...)
 
To me, the question about "lost jobs" is usually discussed from a wrong perspective. It's not about whether or not a person can be replaced by a machine but about the quality of the results or product you get with this replacement.

Let's say an AI can do the work of 100 people and obtain the same results. Then, it's obvious that those jobs didn't contribute with any value to the end product. You had people doing a mechanical job, working like machines, so it's inevitable that they were replaced by a true machine, one that is more efficient.

However, a creative job needs a human mind, one with self awareness and creativity. Replacing that with AI would be like going from high cuisine to junk food. Sure, you get an edible product, but the quality is nowhere close. There's an objective loss of value.

I'm not paying one cent for a worse product just because greedy execs want to get richer.
Sadly, many people are not creative. In fact, I'd dare say the majority of people are not creative. Those people, at least in today's society, still need jobs to survive. They are button pushers/lever pullers. It sucks for them, but they are the "majority" type of worker in this world.
 