
Google: Project Genie | Experimenting with infinite interactive worlds

Hypothetically: what will the industry look like once this tech has matured and gamers are using it to create fully-fledged games?
Will Steam and PlayStation allow it on their platforms? I have a hard time seeing third-party publishers surviving this trend.
 
you: "AI is energy intensive but so is producing electronic components!"

me: "AI is both a lot more energy intensive and needs these components all the same"

you: "energy isn't relevant!!1!"

?????

In the end, energy isn't relevant to this thread. It seems you two are talking about two different things: one of you is arguing macroeconomics, the other engineering limits.

I find it bizarre that some people are still dismissing this as just a video; that's a basic misunderstanding of how it works, especially given the dual-memory configuration. A video is a static, non-reactive file. Project Genie is a predictive engine generating frames from a logic loop. The world memory is critical for object persistence, ensuring objects you just passed don't vanish, while the physics buffer lets the model predict the next state of motion.
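
To make that concrete, here's a minimal sketch of the kind of loop I mean. Every name in it is hypothetical; the actual Genie architecture hasn't been published, so treat this as an illustration of the concept, not the real thing.

class WorldMemory:
    """Keeps latent features of recently seen regions so objects persist."""
    def __init__(self, capacity=256):
        self.slots = {}            # region id -> latent feature
        self.capacity = capacity

    def update(self, region_id, latent):
        self.slots[region_id] = latent
        if len(self.slots) > self.capacity:
            self.slots.pop(next(iter(self.slots)))   # evict oldest entry

    def recall(self, region_id):
        return self.slots.get(region_id)             # None if never seen

class PhysicsBuffer:
    """Keeps the last few states so the model can extrapolate motion."""
    def __init__(self, horizon=4):
        self.states = []
        self.horizon = horizon

    def push(self, state):
        self.states = (self.states + [state])[-self.horizon:]

    def predicted_next(self):
        if len(self.states) < 2:
            return self.states[-1] if self.states else None
        # Naive linear extrapolation standing in for the learned predictor.
        return self.states[-1] + (self.states[-1] - self.states[-2])

def next_frame(model, memory, physics, player_input):
    """One tick: predict motion, recall persistent objects, generate a frame."""
    state_guess = physics.predicted_next()
    frame, latent, region_id, new_state = model(player_input, state_guess, memory)
    memory.update(region_id, latent)
    physics.push(new_state)
    return frame

The point is simply that each frame is computed from player input and remembered state, which is exactly what a static video file can't do.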

Project Genie is obviously not a finished product. But there is nothing stopping this tech from evolving into a generative shader within a hybrid renderer. We have the proof in our hands: the industry is moving away from brute-force rendering toward neural blending.
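
By "hybrid" I mean something as simple as a per-pixel blend between a cheap raster pass and a neural pass. Purely illustrative; none of this reflects an actual shipping pipeline:

import numpy as np

def composite(raster_rgb, neural_rgb, confidence):
    """confidence in [0, 1]: 1.0 = trust the rasterizer, 0.0 = trust the network."""
    c = confidence[..., None]                  # broadcast over RGB channels
    return c * raster_rgb + (1.0 - c) * neural_rgb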


Hypothetically: what will the industry look like once this tech has matured and gamers are using it to create fully-fledged games?
Will Steam and PlayStation allow it on their platforms? I have a hard time seeing third-party publishers surviving this trend.

Edit: I misread, sorry about that. No, nothing like what you see in those demos would ever be allowed on those platforms, and today's IP laws are already enough to stop it from happening.

My original text: Right now, this is really just a concept and a demo. It's not going to let you build a whole game from a prompt, despite what the clickbait headlines say.

Moreover, we're still in the early days, and there are some pretty big challenges to solve, e.g. latency. But you'll definitely see this tech used in traditional engines without triggering Sony. In the big picture, that's what DLSS is already doing: using AI to bridge gaps the hardware can't handle. This will be just another generative tool for the toolbox, and it's hopefully how we'll eventually get ultra-high-end visuals without needing a $5000 GPU.
 

I agree with the angle you see this moving in.

In fact, over the last week or so, after seeing 2klikphilip's recent videos on upscaling games with DLSS (and others) from incredibly low resolutions, such as a 22-pixel-tall image, I've been pondering the potential path of upscaling and frame-gen tech as it improves.

We're already past the days of pure rasterization; even with anti-aliasing approaches, we push image data through some processing before putting it on screen. There are two big areas here right now: upscaling and frame generation.

I do see the potential for some technology, perhaps like this, to aid in better upscaling and frame generation. The immediate benefit of treating this as a hybrid tech would be memory context, which is already a solved problem in traditional engines.

The deeper idea I was mulling over is not to generate the world's visuals through the traditional rasterization path at all. Instead, you'd render a low-resolution version of the world built from a tagged marker data layer; think of it as something similar to a normal buffer or another data pass. The tagged data would be 3D positional markers telling this new "AI" layer what to render, basically pushing its probability vectors around. Its goal would be to produce the lowest-resolution image possible while still containing enough information to tell a processing layer everything it needs to generate the final image. Think of it as a QR code, or the markers on actors' faces for mocap.
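
Roughly, the marker pass could look like a tiny G-buffer that carries positions and semantic tags instead of shaded color. Every name below is invented for illustration:

from dataclasses import dataclass
import numpy as np

@dataclass
class MarkerBuffer:
    position: np.ndarray   # (H, W, 3) world-space position per pixel
    tag:      np.ndarray   # (H, W)    integer id: "rock", "npc_face", ...
    normal:   np.ndarray   # (H, W, 3) surface orientation, optional extras

def render_marker_pass(scene, camera, h=90, w=160):
    """Rasterize at very low resolution, outputting data instead of color."""
    pos  = np.zeros((h, w, 3), dtype=np.float32)
    tags = np.zeros((h, w), dtype=np.int32)
    nrm  = np.zeros((h, w, 3), dtype=np.float32)
    # ...ordinary low-res rasterization writing ids/positions per pixel...
    return MarkerBuffer(pos, tags, nrm)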

The AI layer would be a specifically trained model built from some insanely high-resolution version of the game, including all texture and surface data. Instead of being a generalist, it would be a model of that specific game world. The tagged marker layer would then essentially tell the generation layer the player's viewpoint and what should be rendered to screen. The outcome would still be probabilistic, but the weights would be ideal.
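
The per-frame generation step then reduces to conditioning that per-game model on the marker pass and the camera pose. Again, `model.sample` here is a placeholder for whatever trained network this would actually be:

def generate_frame(model, markers, camera_pose, out_h=1080, out_w=1920):
    """Turn a low-res marker pass into a full-res frame via the per-game model."""
    conditioning = {
        "positions": markers.position,   # where things are
        "tags":      markers.tag,        # what things are
        "pose":      camera_pose,        # where the player is looking from
    }
    return model.sample(conditioning, height=out_h, width=out_w)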

Game studios would then bake out that training data, using the most powerful hardware to render the world from all angles and in all conditions, to define what the final product should look like. The gamer runs a fast, low-res version of the game, and the AI layer translates that low-res output into the high-res version in real time, locally.
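
The bake itself would just be an offline loop producing matched (marker pass, ground-truth frame) pairs to train on; all names here are placeholders:

import itertools

def bake_training_pairs(scene, viewpoints, conditions, offline_renderer):
    """Offline: render cheap inputs alongside expensive targets for training."""
    for cam, cond in itertools.product(viewpoints, conditions):
        markers = render_marker_pass(scene, cam)       # the cheap input
        truth   = offline_renderer(scene, cam, cond)   # the ground-truth target
        yield markers, cam, truth                      # one training example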

Gamers will then flock to forums to complain about finding certain places in the world where they can stand and make the renderer glitch out, because the developer missed that spot. It'll be the new broken animations, or the new hall-of-mirrors effect from missing environment polygons.
 