
GameNGen, or building a game without code or assets

ReyBrujo

Member
Yesterday Google and Tel Aviv University published a paper (pdf) called "Diffusion Models Are Real-Time Game Engines" presenting GameNGen, "the first game engine powered entirely by a neural model". The demo shows the classic Doom running at 20 fps on a single TPU, generated entirely in real time by a neural model: the environment, the collisions, the graphics, the enemy AI, everything is simulated in real time, without a single line of code written specifically for the game itself.

4 minute explanation by The Code Report:



This is just a proof of concept, but a really interesting one. We have had procedural generation since literally the beginning of computing, and many genres took (and take) advantage of it (the whole Rogue branch of RPGs, the many current roguelites, Elite, etc.), but it was mostly limited to the composition of the map or the number and types of enemies; everything else (the rules of engagement, the interactions between elements like a ship and a bullet, a weapon and the attacker, a shield and the defender) had to be coded. With this model you basically let the AI generate and build the interactions itself. It might not look like much (Doom on a pregnancy test might be more impressive), but it could eventually let game designers playtest a game concept before even hiring a programming team.
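The loop the paper describes can be sketched, very roughly, as an action-conditioned next-frame predictor: each generated frame is fed back in as conditioning for the next, alongside the player's input. The names and the stub model below are hypothetical; the real system runs a diffusion denoising pass where the stub just concatenates strings.

```python
from collections import deque

class NextFramePredictor:
    """Toy stand-in for the learned model: predicts the next frame
    from recent frames plus the player's input (hypothetical API)."""
    def predict(self, frames, action):
        # A real model would run a diffusion pass conditioned on the
        # frame history and action; this stub just tags the last frame.
        return f"{frames[-1]}+{action}"

def play(model, first_frame, actions, context_len=4):
    """Autoregressive loop: the model only ever sees the last few
    frames and the current input, never the game's actual state."""
    frames = deque([first_frame], maxlen=context_len)
    out = []
    for action in actions:
        frame = model.predict(list(frames), action)
        frames.append(frame)
        out.append(frame)
    return out

frames = play(NextFramePredictor(), "f0", ["fwd", "fire"])
```

The point of the sketch is the shape of the loop: there is no game state anywhere, only frames predicted from frames.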
 
Last edited:

ScHlAuChi

Member
No, this isn't the future of game development:
- it is incredibly inefficient
- the AI has no "memory"; it is like Guy Pearce in Memento
- you can't design anything; it just generates based on existing content
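The "memory" point comes from the fixed conditioning window: anything older than the last few frames simply isn't visible to the model. A toy illustration (the window size and event names here are made up):

```python
from collections import deque

def visible_history(all_frames, window=3):
    """The model conditions only on the last `window` frames;
    anything earlier is outside its 'memory'."""
    return list(deque(all_frames, maxlen=window))

events = ["door_opened", "ammo_picked_up", "turn_left", "turn_right", "fire"]
print(visible_history(events))
# "door_opened" has scrolled out of the window, so the model can no
# longer know the door was ever opened.
```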

Still neat tech!
 

SF Kosmo

Al Jazeera Special Reporter
Yesterday Google and Tel Aviv University published a paper (pdf) called "Diffusion Models Are Real-Time Game Engines" presenting GameNGen, "the first game engine powered entirely by a neural model". The demo shows the classic Doom running at 20 fps on a single TPU, generated entirely in real time by a neural model: the environment, the collisions, the graphics, the enemy AI, everything is simulated in real time, without a single line of code written specifically for the game itself.

4 minute explanation by The Code Report:



This is just a proof of concept, but a really interesting one. We have had procedural generation since literally the beginning of computing, and many genres took (and take) advantage of it (the whole Rogue branch of RPGs, the many current roguelites, Elite, etc.), but it was mostly limited to the composition of the map or the number and types of enemies; everything else (the rules of engagement, the interactions between elements like a ship and a bullet, a weapon and the attacker, a shield and the defender) had to be coded. With this model you basically let the AI generate and build the interactions itself. It might not look like much (Doom on a pregnancy test might be more impressive), but it could eventually let game designers playtest a game concept before even hiring a programming team.

I think you might be misunderstanding the application of this technology. The idea is not that it is procedurally generating a game's assets; it has been trained on a real game with discrete level design and assets, and it essentially "understands" and "remembers" that gameplay well enough that it can predict how the game would respond to the player's input.

But someone still has to make the game that it is trained on. It isn't just making up level design and enemies and mechanics.

Where this can get really interesting is that, at least in theory, the game it's trained on can be something well beyond what is possible in the realm of real-time computing.

Obviously AI asset generation is also a thing but that's not the point of this particular experiment and might not play nicely with it either.
 

mhirano

Member
This tech fits like a glove for yearly iterative heartless games like EA Football (formerly known as FIFA)
 

SF Kosmo

Al Jazeera Special Reporter
This tech fits like a glove for yearly iterative heartless games like EA Football (formerly known as FIFA)
It doesn't. This is about how games are rendered and executed; it isn't making up the game.
 
Last edited:

ReyBrujo

Member
But someone still has to make the game that it is trained on. It isn't just making up level design and enemies and mechanics.
Eventually you could train it with instances of similar games (a franchise like CoD) or even different games in the same genre (FPSes, racing games), then tweak parameters to introduce new weapons, different physics, new enemies, new items, etc. It's not that an individual could ask for GTA6 and have the AI generate it for them; rather, it could be useful for companies to start testing new concepts for a new expansion/season without having developers code the modifications, feeding it the previous versions of the game (or, who knows, maybe any kind of copyrighted game, much like ChatGPT and Copilot are fed).
 

SF Kosmo

Al Jazeera Special Reporter
Eventually you could train it with instances of similar games (a franchise like CoD) or even different games in the same genre (FPSes, racing games), then tweak parameters to introduce new weapons, different physics, new enemies, new items, etc. It's not that an individual could ask for GTA6 and have the AI generate it for them; rather, it could be useful for companies to start testing new concepts for a new expansion/season without having developers code the modifications, feeding it the previous versions of the game (or, who knows, maybe any kind of copyrighted game, much like ChatGPT and Copilot are fed).
That is several, several steps beyond what is going on here. It's like the difference between asking an AI to generate a video of a man walking and asking it to create an entire coherent film with story and dialogue. Even as rapidly as AI is progressing, that would be many, many years away.

Remember, this isn't really thinking about any of these things in terms of assets or mechanics. It's simply trying to figure out what the next image in the sequence is, taking player input into account as part of that prediction. The fact that it has any kind of coherent "level design" at all depends entirely on the AI's ability to recognize where the player is by sight, based on having seen that level many times in the training data.

So it can only really reconstruct what it has seen before, or at least extrapolate from patterns it has seen before. It's only able to maintain any kind of coherence or continuity BECAUSE the content is predictable. The moment the content becomes variable, it is going to get completely lost.
 
Last edited:

sachos

Member
This was such a massive upgrade over previous methods. Check out their paper; it has some comparisons to previous models. We were talking about this in the Sora thread when it came out. At some point video generation will be fast enough to be done in real time. Sora demonstrated it can recreate Minecraft; I want to see what this new model can do when trained on more games, and whether it would be possible to generate new games the way you can generate never-before-seen images with image models.
Now, I don't know if something like this would be all you need to generate a game; I think it would still be best to use a barebones engine with stick figures as a ControlNet-like reference to then "paint" over with the model. Either way, we are getting closer and closer to generating VR worlds with words; think about how crazy that is. Even if games are not created this way, you will be able to create infinite explorable worlds.
 
Last edited:

Danny Dudekisser

I paid good money for this Dynex!
I don't know what's more exhausting at this point: the charlatans peddling this junk, or the people buying their line of bullshit.
 