
Google: Project Genie | Experimenting with infinite interactive worlds

Hypothetically. What will the industry look like once this tech has matured and gamers are using it to create fully fledged games?
Will Steam and PlayStation allow it on their platforms? I have a hard time seeing third-party publishers surviving this trend.
 
you: "AI is energy intensive but so is producing electronic components!"

me: "AI is both a lot more energy intensive and needs these components all the same"

you: "energy isn't relevant!!1!"

?????

In the end, energy is not relevant to this thread. It seems you guys are talking past each other: one side is arguing macroeconomics, the other engineering limits.

I find it bizarre that some people are still dismissing this as just a video. That's a basic misunderstanding of how it works, especially considering the dual-memory configuration. A video is a static, non-reactive file. Project Genie is a predictive engine generating frames from a logic loop. The world memory is critical for object persistence, ensuring that objects you just passed don't vanish, while the physics buffer lets the model predict the next state of motion.
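
To put that in toy terms, here's roughly the loop shape I mean. A minimal sketch, assuming a cached world memory plus a rolling state buffer; every name below is made up, none of this is Genie's actual API:

```python
# Toy world-model frame loop (hypothetical names, not Genie's real API).
# world_memory caches what the model has already "seen" so revisited objects
# persist; state_buffer holds recent latent states so motion can be extrapolated.
from collections import deque

class ToyWorldModel:
    def __init__(self, horizon=8):
        self.world_memory = {}                     # region/object id -> latent features
        self.state_buffer = deque(maxlen=horizon)  # recent latent states (motion context)

    def step(self, user_input, prev_frame_latent):
        self.state_buffer.append(prev_frame_latent)
        # Predict the next latent state from recent motion plus player input.
        next_state = self.predict(list(self.state_buffer), user_input)
        # Reconcile against world memory so previously seen content reappears
        # consistently instead of being re-imagined from scratch.
        next_state = self.recall(next_state)
        # Decode the latent state into the frame shown to the player.
        return self.decode(next_state)

    def predict(self, states, user_input): ...  # learned transition model
    def recall(self, state): ...                # memory lookup / blending
    def decode(self, state): ...                # latent -> pixels
```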

Project Genie is obviously not a finished product. But there is nothing stopping this tech from evolving into a generative shader within a hybrid renderer. We already have the proof in our hands: the industry is moving away from brute force toward neural blending.


Hypothetically. What will the industry look like once this tech has matured and gamers are using it to create fully fledged games?
Will Steam and PlayStation allow it on their platforms? I have a hard time seeing third-party publishers surviving this trend.

Edit: I misread. Sorry for that. No, nothing like what you see in those demos would ever be allowed on those platforms. And today's IP laws are already good enough to stop it from happening.

My original text: Right now, this is really just a concept and a demo. It's not going to let you build a whole game from a prompt despite what the clickbait headlines say.

Moreover, we're still in the early days, and there are some pretty big challenges to solve, such as latency. But you'll definitely see this tech being used in traditional engines without triggering Sony. I mean, in the big picture that's what DLSS is already doing: using AI to bridge the gaps hardware can't handle. This will be just another generative tool for the toolbox, and it's hopefully how we're going to get ultra-high-end visuals in the future without needing a $5,000 GPU.
 
Edit: I misread. Sorry for that. No, nothing like what you see in those demos would ever be allowed on those platforms. And today's IP laws are already good enough to stop it from happening.

My original text: Right now, this is really just a concept and a demo. It's not going to let you build a whole game from a prompt despite what the clickbait headlines say.

Moreover, we're still in the early days, and there are some pretty big challenges to solve, such as latency. But you'll definitely see this tech being used in traditional engines without triggering Sony. I mean, in the big picture that's what DLSS is already doing: using AI to bridge the gaps hardware can't handle. This will be just another generative tool for the toolbox, and it's hopefully how we're going to get ultra-high-end visuals in the future without needing a $5,000 GPU.

I agree with the angle you see this moving in.

In fact, over the last week or so, after seeing 2kliksphilip's recent videos on upscaling games using DLSS (and others) from incredibly low resolutions, such as a 22-pixel-tall image, I've been pondering the potential path of upscaling and frame-gen tech as it improves.

We're already beyond the days of pure rasterization; even with anti-aliasing approaches, we send image data through some processing before putting it on screen. There are two big areas here right now: upscaling and frame generation.

I do see the potential for some technology, perhaps like this, that could aid in better upscaling and frame generation. The instant benefit of looking at this as a hybrid tech would be memory context: it's already a solved problem in traditional engines.

The deeper consideration I was having is not to take the traditional approach of generating the world's visuals through some rasterization pass. It would instead be to render a low-resolution version of the world built from a tagged marker data layer; think of it as something similar to a normal buffer or another data pass. This tagged data would be 3D positional markers telling this new "AI" layer what to render, basically pushing its probability vectors around. Its goal would be to produce the lowest-resolution image possible whilst containing enough information to inform a processing layer of everything it needs to generate the final image. Think of it as a QR code, or those markers on actors' faces for mocap.

The AI layer would be a specifically trained model based on some insanely high-resolution version of the game, including all texture and surface data. Instead of being just a generalist, it would be a model built for that game's world. The tagged marker layer would then essentially inform the generation layer of the player's viewpoint and what should be rendered to screen. So it would still be a probabilistic outcome, but the weights would be ideal.

Game studios would then bake out that training data, using the most powerful hardware to render the world from all angles in all conditions, to establish what the final product should look like. The gamer runs a fast, low-res version of the game, and the AI layer translates that low res into the high-res version in real time, locally.
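
As a rough sketch of that pipeline, assuming a tiny per-pixel ID/position guide buffer and a hypothetical game-specific generator (every function and name here is invented, just to pin the idea down):

```python
# Hypothetical "tagged guide buffer -> game-specific generator" pipeline.
import numpy as np

H, W = 36, 64  # deliberately tiny guide buffer, like those 22-pixel-tall experiments

def render_guide_buffer(camera, scene):
    """Cheap engine pass: per-pixel object IDs + coarse world positions, no shading."""
    object_ids = np.zeros((H, W), dtype=np.int32)      # which tagged asset is here
    positions = np.zeros((H, W, 3), dtype=np.float32)  # coarse world-space position
    # ... rasterize the scene's 3D positional markers into these buffers ...
    return object_ids, positions

def neural_final_frame(model, object_ids, positions, view):
    """A model trained on this game's baked 'ultra' renders decodes the markers."""
    conditioning = np.concatenate(
        [object_ids[..., None].astype(np.float32), positions], axis=-1
    )  # (H, W, 4): everything the generator needs to know what to draw
    return model.generate(conditioning, view)  # hypothetical per-game model call
```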

Gamers will then flock to forums to complain about finding certain places in the world where they can stand and cause the renderer to glitch out because the developer missed that part. It'll be the new broken animations, or the hall-of-mirrors effect from missing environment polygons.
 
I find it bizarre that some people are still dismissing this as just a video. That's a basic misunderstanding of how it works, especially considering the dual-memory configuration. A video is a static, non-reactive file. Project Genie is a predictive engine generating frames from a logic loop.
Every AI-generated video you see is made with a "predictive engine"; the difference is that normal ones rely only on the initial prompt and their internal workings to create an entire clip from start to end, whereas this one generates the frames in real time, with variations based on user input.
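
The difference in loop shape, as a minimal sketch with made-up function names:

```python
def offline_video(model, prompt, n_frames):
    # Ordinary AI video: one prompt in, whole clip out, nobody can steer it.
    return model.generate_clip(prompt, n_frames)

def interactive_world(model, prompt, controller):
    # Genie-style: each frame is predicted from history *plus* live user input.
    frames, history = [], model.init_state(prompt)
    while controller.active():
        action = controller.poll()                  # user input this tick
        history = model.step(history, action)       # advance the world state
        frames.append(model.decode_frame(history))  # show the result immediately
    return frames
```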


The world memory is critical for object persistence, ensuring that objects you just passed don't vanish, while the physics buffer lets the model predict the next state of motion.
Their memory tech is far from perfect, and it'll forever be a thorn in the side of these types of models, as this is an inherent problem with them. Yes, it can be fixed and improved to a certain degree, but it runs into the questions I've pointed out before, where there are simply more efficient ways to achieve the same types of results.
 
you: "AI is energy intensive but so is producing electronic components!"

me: "AI is both a lot more energy intensive and needs these components all the same"

you: "energy isn't relevant!!1!"

?????

It requires far fewer components than producing literally billions of consoles, game boxes, controllers, accessories, etc etc etc.

I don't know what point you're trying to make anymore.

Just because this AI experiment isn't like a traditional game means literally nothing. It's not MEANT to be like a traditionally made game. AI "games" won't be made like traditional games because they simply won't need to be.

It's like someone saying the first concept of the automobile won't work because there's nowhere to connect the horse.

This is uncharted territory.
 
It requires far fewer components than producing literally billions of consoles, game boxes, controllers, accessories, etc etc etc.

I don't know what point you're trying to make anymore.

Just because this AI experiment isn't like a traditional game means literally nothing. It's not MEANT to be like a traditionally made game. AI "games" won't be made like traditional games because they simply won't need to be.

It's like someone saying the first concept of the automobile won't work because there's nowhere to connect the horse.

This is uncharted territory.

It sounds like you're angling to describe a shift like 2D to 3D, and this is like people complaining that early 3D graphics looked shit compared to 2D sprites and therefore the technology was a dead end.

Is AI the 3D to traditional gaming's 2D? If so, what kind of magical experience are we about to see that could never be done before?
 
It requires far fewer components than producing literally billions of consoles, game boxes, controllers, accessories, etc etc etc.

I don't know what point you're trying to make anymore.

Just because this AI experiment isn't like a traditional game means literally nothing. It's not MEANT to be like a traditionally made game. AI "games" won't be made like traditional games because they simply won't need to be.

It's like someone saying the first concept of the automobile won't work because there's nowhere to connect the horse.

This is uncharted territory.
For someone advocating for/pro/more receptive to AI, you are failing to use the technology in order to have a more informed opinion.
 
Video game developers/publishers right now


[Angry Zach Galifianakis GIF by BasketsFX]
Don't know who they can sue, but I think it will result in restrictions. Maybe a message that you can't create a particular game or character, then a suggestion that you can make something similar; image generators already hit that wall sometimes.
 
I don't know what point you're trying to make anymore.
????????
You're the one who brought energy consumption to the table, claiming it wasn't an issue (it totally is, a severe one at that).

The need for energy is so huge it has actually led these big tech companies to invest in small modular nuclear reactors to feed their servers, as traditional power lines won't be able to handle them. It's ridiculously funny, in fact, because those reactors are probably hundreds of times more relevant and world-changing than whatever these AIs can achieve.

Just because this AI experiment isn't like a traditional game means literally nothing. It's not MEANT to be like a traditionally made game. AI "games" won't be made like traditional games because they simply won't need to be.

It's like someone saying the first concept of the automobile won't work because there's nowhere to connect the horse.

This is uncharted territory.
Well, do give us an idea of what a game that can only be made with this AI tech would look like.
 
Hypothetically. What will the industry look like once this tech has matured and gamers are using it to create fully fledged games?
Will Steam and PlayStation allow it on their platforms? I have a hard time seeing third-party publishers surviving this trend.
I don't think a fully fledged NES gem in the mold of Mega Man or Zelda will ever come out of this. But those phone games where you just run, play tower defense, or do a thing here and there? Yeah.

The indie scene will be the most affected, for sure.
 
Every "gameplay" video has Hugo Délire levels of interactivity and floatiness


Every game looks like a one-man UE5 game with the most generic assets possible.

Sorry for not creaming my pants watching random game journos show off the most derivative crap ("I made an even gayer Zelda", "I made GTA with a manlet protag"... seriously, guys?!), and wake me up when AI can make a full game from scratch, one you can install and play on your own without its "help".
 
Passion and love for making games.


I prefer this to everything in this thread, by far.
The future where people run with this is one where we get absolute trash all the time, on a subscription. This is total shit. It's like a worse version of LaserDisc arcade games.
 