I don't particularly care about Gabe's example. My argument stands on its own.
These are separate subjects, sure, but a lot of the positivity I'm seeing here stems from people's belief that games are suddenly going to become bigger, more expansive, more detailed, and more interactive thanks to some dreamy, misguided notion of AI and its capabilities.
Specifically with regard to workflow, nearly everything I've read on the subject points toward AI-"generated" code needing constant revision and adjustment by an intelligent human. It is plain that generative AI is straight-up stealing art assets and even lifting whole chunks of manuscripts for its text generation; that's simple fact and not a matter of debate. Now, when it comes to generating lines of code based on the work of those who came before, who knows. It's a sort of copy-paste, that much is certain. Perhaps we don't invest programmers with the sort of ownership of their work that I feel they deserve, but what can I say? I'm not a programmer. I respect programmers - people like John Carmack, who's big into AI now, as it turns out. These people write the code that powers the engines that drive the games we love. It's like magic, in a sense, at least to the uninitiated. Though I admit I have little intrinsic understanding of the medium, I feel that if I were a programmer, I'd be worried about systems that just copy and (let's be optimistic and say) iterate on my creative output.
One way or another, AI models can't even manage to get basic points across without introducing embarrassing, nonsensical mistakes and outright falsehoods, which doesn't bode well for their capability to interpret developer intent when building code. I think it's the next "thing" being pushed by clueless clowns under pressure from barely-sentient "investor" whales who expect inflated returns through workforce reductions, and the recent scandal with Replit, in which an "AI agent" simply deleted a massive database of important information for no good reason, might have a cooling effect on its adoption.
It's my belief that the workload required to error-check an AI agent's output will likely cancel out any work it saves - that is, if the agent doesn't just straight-up fuck up and destroy your entire codebase. And of course, coders themselves will become lazier, more complacent, less innovative, and overall less capable in the process, owing to their dependency on these systems.
But as you said, sure, this is a separate argument. I posit that embracing this faux-AI will kill creativity, and that's an argument I stand by. It's no kind of future any of us would truly want if we spent even a little time pondering its implications.