EA staff are struggling with management urging them to use AI for ‘just about everything’, report claims

Business Insider spoke to a number of current EA staff, speaking on condition of anonymity, who say the company's leadership has spent the past year urging its roughly 15,000 employees to use AI for a wide range of tasks, from writing code and producing concept art to advising managers on how to talk to staff about sensitive subjects like pay or promotions.

According to the report, some staff say the AI tools they're encouraged to use produce "flawed code" that has to be manually corrected.

Others on creative teams worry that because they're expected to train AI software by feeding it their own work, the demand for character artists and level designers will drastically fall.

One former Respawn employee who worked in a senior QA role says he believes one of the reasons he was among 100 colleagues laid off this past spring is that AI was reviewing and summarizing feedback from playtesters, a job he usually did.

Business Insider also received internal documents that show that some EA employees are expected to complete AI training courses, use AI tools daily to speed up their work rate, and treat generative AI like "a thought partner", asking it for advice on things like how to phrase questions when a promotion is denied.

As the article notes, EA stated in May – as part of its annual 10-K Securities and Exchange Commission filing – that while it was making more use of AI tools, it had to acknowledge the dangers of these tools being used incorrectly.

"We are integrating artificial intelligence tools and technologies into our business and development processes," EA's filing read. "The use of artificial intelligence might present social and ethical issues that, if not managed appropriately, may result in legal and reputational harm, cause consumers to lose confidence in our business and brands and negatively impact our financial and operating results."

According to this year's Game Developers Conference survey, a majority (52%) of developers reported working at companies that utilize generative AI tools.

A July analysis by Totally Human found that the number of Steam games disclosing the use of generative AI has been increasing, with around 20% of all games released in 2025 to that point disclosing the use of AI on Steam. Given that this disclosure is voluntary, it's also likely the actual number is higher.

 
AI's open for business. Which is fine if you're open-minded and full of imagination, but we all know EA execs only have money signs in their skulls. Maybe borderline personality disorder too... probably.
 
"The use of artificial intelligence might present social and ethical issues that, if not managed appropriately, may result in legal and reputational harm, cause consumers to lose confidence in our business and brands and negatively impact our financial and operating results."
"... And we are managing it like a bazooka."
 
EA management has bought into the "AI" Kool-Aid and believes that using "AI" will make developers more productive.

Reality: Like with every tool, it is useful for some things and utterly useless for others.
 
Management will be less thrilled to force it on themselves once AI managers come along. lol
This future can't be stopped, so everyone needs to accept that AI is already fine and will be increasingly so. Where it makes sense, any worker worth their money will use it voluntarily, if it makes them faster, more productive and, most importantly, less stressed, because it becomes easier to meet milestones in the allocated time. But mandating it on principle, just because the AI seller promises some absurd time savings based on empty theory, would be dumb.
 
I know some of you guys already know this, but I wonder if most people are even aware that AI can easily give you the worst answers, sometimes failing completely even though there are few cues to tell you so. It happens to me all the time, and that's one of the things that's crucial to know when you're dealing with "AI", especially when using it for work...

Anyway, I asked AI to rewrite it for me:

It's crucial for users, especially in a professional context, to know that AI can easily provide completely incorrect or misleading information, often without any obvious flags or cues that it is doing so. This phenomenon is commonly called "hallucination," and it can be a significant professional risk if the output isn't carefully verified.

The biggest challenge isn't the AI failing; it's the AI failing convincingly. Because the information is presented fluently and confidently, it often goes unchallenged.

This need for verification is arguably the most important thing to understand when leveraging AI tools for work or critical tasks.
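
To make that concrete, here's a tiny made-up Python example of the kind of check I mean. The helper and its bug are purely illustrative, not from the article; the point is that a couple of cheap checks catch what fluent, confident output hides:

```python
# Hypothetical example: an LLM confidently suggests this helper for
# splitting playtester notes into fixed-size batches for summarization.
def batch(items, size):
    # Subtle bug: integer division silently drops the final partial batch.
    return [items[i * size:(i + 1) * size] for i in range(len(items) // size)]

print(batch([1, 2, 3, 4], 2))     # [[1, 2], [3, 4]]  -- looks fine
print(batch([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]]  -- the 5 just vanished

def batch_fixed(items, size):
    # Stepping the index by `size` keeps the trailing partial chunk.
    return [items[i:i + size] for i in range(0, len(items), size)]

print(batch_fixed([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

The failure mode is exactly the "convincing" kind: the code runs, looks idiomatic, and passes the obvious case.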
 
AI fucking sucks. Like, it can absolutely speed up certain tasks, but it is also completely unreliable, and using it for creative tasks is an idiotic thing to do - it's just amalgamating shit into a pile of boring nothing.

It's useful as a tool for certain things, and it's absolute shit at things outside of that. And it's frustrating to have to interact with it as a chat.

It's useful, but it's oversold and the claims about it are complete horseshit spouted by repellant hucksters like Sam Altman. He called it "Einsteinian" the other day. What a clown.

My current belief is that AI companies use the human-connection/chat angle to their benefit, to pretend it's much smarter than it really is. The more you use it, the less impressive it is. It's idiotic. It's really good at things where there's lots of data and known solutions, like boilerplate, because it can just vomit it back up, and it's great at things computers are good at, like rolling through lots of data and finding things you could find yourself but that would take a long time: find this change in this git history; format all this stuff (which it might still screw up/hallucinate); figure out why this error is happening in this giant log; etc.

It doesn't have any context or understanding, so it makes nonsensical connections and associations.
 
One of the key ironies here is that the people pushing and enforcing it on others are less willing to invite it into their own organizational layer. They seem to be "magically" exempt from any application of AI in and on their roles.

I mean, if the suits are so confident in this revolutionary tech, you'd think they'd run a pilot experiment within their own circle before pushing it down onto the rest of the organization, no?
 
Remember, AI learns from the general population how to code.

My god...
And do human programmers learn to code without looking at other people's code?

AI's final form is definitely on another level from today's copy-paste machines. The current limit is that humans haven't yet been able to create code that can write code that actually improves itself. Once machines reach that level of intelligence, then we'll really have an OMG moment.
 

According to the report, some staff say the AI tools they're encouraged to use produce "flawed code" that has to be manually corrected.

Ya, but from the EA leadership's perspective, the time to correct flawed code is less than the time to write all-new code. AI isn't meant to bring perfection, but direction.
 
Ya, but from the EA leadership's perspective, the time to correct flawed code is less than the time to write all-new code. AI isn't meant to bring perfection, but direction.
Eh, the leadership probably doesn't have the technical knowledge to assess the risk properly and folks who do have it are not going to rock the boat.

Purely AI-generated code is basically not maintainable at this juncture. Note I am not talking about AI assistance like code completion as you work, but about fully generating code with AI.

You acquire a lot of technical debt that is not going to get resolved. That's fine for prototyping, but not for products that are meant to be around for years and take years to complete.

At least, that's speaking from my personal experience as well as from working with devs at my company.
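
To illustrate the flavor of debt I mean, here's a contrived Python example of my own (not anything from EA): fully generated code tends to demo fine once and then misbehave after the module has lived in the repo for a while.

```python
# Contrived sketch: a generated helper that accumulates playtest notes.
def collect_notes(note, notes=[]):  # mutable default is shared across calls
    notes.append(note)
    return notes

print(collect_notes("crash on level 2"))   # ['crash on level 2']
print(collect_notes("fps drop in menus"))  # ['crash on level 2', 'fps drop in menus']
                                           # state leaked from the first call

def collect_notes_fixed(note, notes=None):
    # Create a fresh list per call unless the caller explicitly passes one.
    if notes is None:
        notes = []
    notes.append(note)
    return notes
```

Nothing here fails loudly; it just quietly builds up the kind of behavior nobody budgeted time to untangle.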
 
The problem is that everyone wants to implement AI.

But as with any new thing, especially AI, the actual profit-making benefit is hard to find, hard to understand, and hard to implement.

Managers think it is a 1000% improvement. In reality it is more like a 10-20% time savings across all tasks needed to create a modern game.

You have to be smart and understand your business completely in order to implement AI in a way that not only gets you the 20% but also lets you gauge it, measure it, and grow it in the future.

A bunch of middle managers running around telling everyone to use AI is reminiscent of Meta back when they were pushing VR the way they now push AI. All the employees were told to incorporate VR into their own workflows to improve them. lol. Eventually, as AI is better understood, it will be implemented properly, and only in areas where it directly and measurably benefits profitability. Right now I bet AI is slowing EA down more than helping them.
 
Eh, the leadership probably doesn't have the technical knowledge to assess the risk properly and folks who do have it are not going to rock the boat.

Purely AI-generated code is basically not maintainable at this juncture. Note I am not talking about AI assistance like code completion as you work, but about fully generating code with AI.

You acquire a lot of technical debt that is not going to get resolved. That's fine for prototyping, but not for products that are meant to be around for years and take years to complete.

At least, that's speaking from my personal experience as well as from working with devs at my company.

If you don't think your leadership understands, then as an engineer you have to inform them. And if they still don't, I wouldn't go complaining to the press about it.
 
AI fucking sucks. Like, it can absolutely speed up certain tasks, but it is also completely unreliable, and using for creative tasks is an idiotic thing to do - it's just amalgamating shit into a pile of boring nothing.

It's useful as a tool for certain things, and it's absolute shit at things outside of that. And it's frustrating to have to interact with it as a chat.

It's useful, but it's oversold and the claims about it are complete horseshit spouted by repellant hucksters like Sam Altman. He called it "Einsteinian" the other day. What a clown.

My current belief is AI companies use the human connection/chat angle to their benefit to pretend it's much smarter than it really is. The more you use it, the less impressive it is. It's idiotic. It's really good at things where there's lots of data and known solutions like boilerplate, because it can just vomit it back up, and it's great at things computers are good at, like rolling through lots of data and finding things you could find, but take a long time. Find this change in this git history. Format all this stuff (which it might still screw up/hallucinate), why is this error happening in this giant log, etc.

It doesn't have any context or understanding, so it makes nonsensical connections and associations.
AI - assuming we aren't talking about some low 4-30B model but about 700B/think-deeper quality - can be transformative IMO, but the limiting factor is the AI-fu (like Google-fu) of the user in extracting what they want.

Or more simply put: the smarter the user, the more useful the AI. Wikipedia provides nearly as much information, but unlike AI, Wikipedia can't bridge the gap between the presented theorems - accurately defined in lots of impenetrable formulas and nomenclature - and an intuitive understanding of the principles as they were developed, possibly hundreds of years ago, without an expert in the field to quote or to point readers to references that might largely be lost to time.

I had a situation the other week where I wanted to understand the evolution of regression - as used in ML - from simple linear regression up to multiple linear regression, to prove the important multiple linear regression matrix formula and to gain insight from seeing the path from inputs to outputs, and it did a spectacular job IMO.

Being able to read about the search for a turning point on the OLS graph, then intuitively see the implicit derivative (dy/dx) in the SLR formula for that turning point, and then see how the partial derivatives (of the sum of squared errors, for two variables) in MLR form a group of linear equations that get solved and eventually encoded in a matrix was something I was able to do in an hour of questions and reading.
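
For anyone curious, the chain I'm describing is just the standard OLS derivation - my notation, sketched from memory, so treat it as exactly that:

```latex
% SLR: minimize the sum of squared errors; the "turning point" is where
% both derivatives vanish.
S(\alpha, \beta) = \sum_{i=1}^{n} (y_i - \alpha - \beta x_i)^2,
\qquad
\frac{\partial S}{\partial \alpha} = 0, \quad
\frac{\partial S}{\partial \beta} = 0
\;\Longrightarrow\;
\hat{\beta} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}
                   {\sum_i (x_i - \bar{x})^2},
\qquad
\hat{\alpha} = \bar{y} - \hat{\beta}\,\bar{x}.

% MLR: the same conditions, one partial derivative per coefficient,
% form a linear system (the normal equations) that the matrix
% formulation solves in one step (assuming X^T X is invertible):
S(\boldsymbol{\beta}) = \lVert \mathbf{y} - X\boldsymbol{\beta} \rVert^2,
\qquad
\nabla_{\boldsymbol{\beta}} S = -2 X^{\top}(\mathbf{y} - X\boldsymbol{\beta}) = 0
\;\Longrightarrow\;
X^{\top} X \hat{\boldsymbol{\beta}} = X^{\top} \mathbf{y}
\;\Longrightarrow\;
\hat{\boldsymbol{\beta}} = (X^{\top} X)^{-1} X^{\top} \mathbf{y}.
```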

To learn that by other means, pre-AI, I would have needed to find an expert in the field, or probably do another degree lectured by at least one genuine expert in the field who had re-walked the steps of legendary mathematicians and proved the formulas first-hand.

So IMO AI can be "Einsteinian", but the use case has to be there for the user, and the user needs to be good at communicating and arguing, and be independently knowledgeable/skilled about the subject matter, to ask insightful questions and hold the LLM to account at every stage.
 
If you don't think your leadership understands, then as an engineer you have to inform them. And if they still don't, I wouldn't go complaining to the press about it.

YOU ARE AN INSANE PERSON! Get a grip and have some perspective. You treat talking to the press about a legit issue as some God awful failure of humanity. Complaining to the press is part of this process. Especially if the corporate leaders aren't listening. The media serves as the fourth estate, holding power to account.

If you can't understand that, your whole perspective is wack.
 
Gotta evolve with tech or get left behind.

Maybe.

Or maybe the workers can just do the same thing the leadership is doing: Lie their faces off and say it's amazing, transformative, unparalleled, otherbigimportantsoundingwords, while just continuing to do what they've always done. Then when the results are nearly zero sum, they can shrug their shoulders and not take any responsibility. If they're going to get laid off either way, why make it easier on the people who are trying to get rid of them?
 
YOU ARE AN INSANE PERSON! Get a grip and have some perspective. You treat talking to the press about a legit issue as some God awful failure of humanity. Complaining to the press is part of this process. Especially if the corporate leaders aren't listening. The media serves as the fourth estate, holding power to account.

If you can't understand that, your whole perspective is wack.

Why would I talk to the press? It does nothing. Your reaction is silly and extreme. You just talk to your leadership and work to change it. A couple of times I have gone above my managers and talked to the heads about the stuff going on, but that is a career limiter.
 
Why would I talk to the press? It does nothing. Your reaction is silly and extreme. You just talk to your leadership and work to change it. A couple of times I have gone above my managers and talked to the heads about the stuff going on, but that is a career limiter.
Because that's the one limited way an employee has to push back against the stupidity of their management without endangering their job.
 
Maybe.

Or maybe the workers can just do the same thing the leadership is doing: Lie their faces off and say it's amazing, transformative, unparalleled, otherbigimportantsoundingwords, while just continuing to do what they've always done. Then when the results are nearly zero sum, they can shrug their shoulders and not take any responsibility. If they're going to get laid off either way, why make it easier on the people who are trying to get rid of them?
Humans evolve with time. Back in the day, programming and other skills had to be earned, and now it's common knowledge, but that doesn't devalue the necessity of using it. 99% of modern problems are basically based on this issue.
 
"Work with AI to help your daily tasks"

EA translation: "Train your new replacements so that we can eventually fire you and further increase our profits"
Then there'll be another 20 EA-like publishers with different names operating as startups. If AI understands the recipe for success in this industry, which is startups, then they have the right to do it.
 
Humans evolve with time. Back in the day, programming and other skills had to be earned, and now it's common knowledge, but that doesn't devalue the necessity of using it. 99% of modern problems are basically based on this issue.

I'm all for learning new tools and using them to improve productivity. I've got no issue with the current AI tools being used by humans as tools. And yes, people should adapt. That wasn't the point of my previous post.

My issue is that the current AI tools are clearly not sufficient to replace humans en masse. They are tools, not digital workers. Meanwhile, the executive management class believes they can serve as digital workers and hopes to replace a significant percentage of their human workforce. Given that the executive management class has unreasonable expectations for this technology, and almost none of them are willing to listen to the chorus of pushback, the workers whose jobs are in danger of being replaced by an attempt to make AI work (which won't work, but it will still be attempted) have no good reason to cooperate with the executive management team. It's in their interest to lie just as much as the executive management class does, to serve their own interests.

See here:



They're using all this amazing hype language around AI, but when it comes time to do the books, they're not showing any proof to back up their rhetoric. This all boils down to bad/dishonest leadership by the executive management class. Unfortunately, they don't pay the price for their actions - the lowest-level employees do.
 
Again, it is an executive/board/investor trend that mistakes a great tool for a cure-all.

They do this to justify and validate their jobs or their investment. They don't understand it, but boldly implement it based on "AI experts" happy to take their money for consultation and promise them the stars.

The shame is that AI is revolutionary as a tool for professionals, and its strengths and limitations need to be better understood.

Right now, for optics and investment, it is fucking snake oil to the people who make decisions, guided by the snake-oil salespeople.
 
Why would I talk to the press. It does nothing. Your reaction is silly and extreme. You just talk to your leadership, and work to change it. A couple of times I have gone above my mangers and talked to the heads about the stuff going on but that is a career limiter.

Who's to say they didn't talk to their leadership and they were told to shut up and follow orders?
 
Who's to say they didn't talk to their leadership and they were told to shut up and follow orders?

Who knows that their leadership didn't say, "I hear you and agree with you, but right now we are doing this instead"? I mean, we can all speculate. Leadership gave them all raises and sold the company to themselves.
 
Again, it is an executive/board/investor trend that mistakes a great tool for a cure-all.

They do this to justify and validate their jobs or their investment. They don't understand it, but boldly implement it based on "AI experts" happy to take their money for consultation and promise them the stars.

The shame is that AI is revolutionary as a tool for professionals, and its strengths and limitations need to be better understood.

Right now, for optics and investment, it is fucking snake oil to the people who make decisions, guided by the snake-oil salespeople.

It's a pure experiment so far for most of these CEOs. They don't even know how to implement it. I HATE how these execs are screwing up the AI onboarding. They are turning it into AI hype, and I think people will start to sour on it too fast.

Who knows that their leadership didn't say, "I hear you and agree with you, but right now we are doing this instead"? I mean, we can all speculate. Leadership gave them all raises and sold the company to themselves.

True, it is speculation, which is why it's great that some of these people are talking to the press about their experiences. Without that, we'd only have these lying CEOs saying how great A.I. is for their companies.
 
I wish I could use AI at work. Copilot integration into Excel, PowerPoint & Outlook would save me so much time compared to having to do a lot of shit manually like I do now.
 
AI - assuming we aren't talking about some low 4-30B model but about 700B/think-deeper quality - can be transformative IMO, but the limiting factor is the AI-fu (like Google-fu) of the user in extracting what they want.

Or more simply put: the smarter the user, the more useful the AI. Wikipedia provides nearly as much information, but unlike AI, Wikipedia can't bridge the gap between the presented theorems - accurately defined in lots of impenetrable formulas and nomenclature - and an intuitive understanding of the principles as they were developed, possibly hundreds of years ago, without an expert in the field to quote or to point readers to references that might largely be lost to time.

I had a situation the other week where I wanted to understand the evolution of regression - as used in ML - from simple linear regression up to multiple linear regression, to prove the important multiple linear regression matrix formula and to gain insight from seeing the path from inputs to outputs, and it did a spectacular job IMO.

Being able to read about the search for a turning point on the OLS graph, then intuitively see the implicit derivative (dy/dx) in the SLR formula for that turning point, and then see how the partial derivatives (of the sum of squared errors, for two variables) in MLR form a group of linear equations that get solved and eventually encoded in a matrix was something I was able to do in an hour of questions and reading.

To learn that by other means, pre-AI, I would have needed to find an expert in the field, or probably do another degree lectured by at least one genuine expert in the field who had re-walked the steps of legendary mathematicians and proved the formulas first-hand.

So IMO AI can be "Einsteinian", but the use case has to be there for the user, and the user needs to be good at communicating and arguing, and be independently knowledgeable/skilled about the subject matter, to ask insightful questions and hold the LLM to account at every stage.
I agree it can be useful and it can be a great teacher, but I don't think what you said was Einsteinian.
 
Couldn't imagine why - AI objectively makes everything better. Look at Microsoft, they're all-in on AI and they hav-- what's that? Azure's down? Global outage, you say?
 
I agree it can be useful and it can be a great teacher, but I don't think what you said was Einsteinian.
Maybe Gaussian then? :)

But in other ways it can do amazing things that even the Einsteins of this world can't. Like, you can propose a purely theoretical new idea, technology, piece of software or rendering technique to the AI, have it play devil's advocate and point out all the potential pitfalls or advantages, have a full-on argument about its assertions (inferred from billions of pieces of information), and genuinely leapfrog all sorts of prototype pitfalls before committing to the project in any way.
 