
Google: Project Genie | Experimenting with infinite interactive worlds

Hypothetically. What will the industry look like once this tech has matured and gamers are using it to create fully fledged games?
Will Steam and Playstation allow it on their platform? I have a hard time seeing third party publishers surviving this trend.
 
you: "AI is energy intensive but so is producing electronic components!"

me: "AI is both a lot more energy intensive and needs these components all the same"

you: "energy isn't relevant!!1!"

?????

In the end, energy isn't really relevant to this thread. You two seem to be talking past each other: one of you is arguing macroeconomics, the other engineering limits.

I find it bizarre that some people are still dismissing this as just a video. That's a basic misunderstanding of how it works, especially given the dual-memory configuration. A video is a static, non-reactive file. Project Genie is a predictive engine generating frames from a logic loop. The world memory is critical for object persistence, ensuring objects you just passed don't vanish, while the physics buffer lets the model predict the next state of motion.
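To make the dual-memory idea concrete, here's a toy sketch of that kind of loop. All names (WorldMemory, PhysicsBuffer, the "crate_7" object) are invented for illustration, and the linear extrapolation stands in for whatever learned model Genie actually uses; none of this reflects Google's real implementation.

```python
from collections import deque

class WorldMemory:
    """Long-lived store keyed by object id, so objects persist off-screen."""
    def __init__(self):
        self.objects = {}

    def update(self, object_id, state):
        self.objects[object_id] = state

    def recall(self, object_id):
        return self.objects.get(object_id)

class PhysicsBuffer:
    """Short sliding window of recent states used to extrapolate motion."""
    def __init__(self, horizon=2):
        self.history = deque(maxlen=horizon)

    def push(self, position, velocity):
        self.history.append((position, velocity))

    def predict_next(self, dt=1.0):
        # Naive linear extrapolation stands in for the learned predictor.
        position, velocity = self.history[-1]
        return position + velocity * dt

# One step of the "logic loop": remember the object, predict its next state,
# then write the prediction back so the object survives between frames.
memory = WorldMemory()
buffer = PhysicsBuffer()
memory.update("crate_7", {"pos": 10.0})
buffer.push(10.0, 2.5)
next_pos = buffer.predict_next()
memory.update("crate_7", {"pos": next_pos})
print(next_pos)  # 12.5
```

The point of the two structures is exactly the split described above: the memory answers "does this object still exist," the buffer answers "where is it going next."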

Project Genie is obviously not a finished product, but there's nothing stopping this tech from evolving into a generative shader within a hybrid renderer. The proof is in our hands: the industry is moving away from brute-force rendering toward neural blending.
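For what a "hybrid renderer" pass could look like in the simplest possible terms: composite a rasterized frame with a generated frame using a per-pixel weight. The function name, the three-pixel "frames," and the weights are all made up for illustration; a real engine would do this per-pixel on the GPU.

```python
def neural_blend(raster, generated, weights):
    """Lerp each pixel: weight 0 keeps the rasterized value, 1 the generated."""
    return [r * (1 - w) + g * w for r, g, w in zip(raster, generated, weights)]

raster = [0.2, 0.5, 0.8]      # brute-force rendered luminance values
generated = [0.4, 0.5, 0.2]   # model-predicted luminance values
weights = [0.0, 0.5, 1.0]     # how much the generative pass contributes

print(neural_blend(raster, generated, weights))  # [0.2, 0.5, 0.2]
```

The interesting engineering question is where those weights come from, e.g. letting the generative pass take over only where the traditional pipeline is cheap to skip.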


Hypothetically. What will the industry look like once this tech has matured and gamers are using it to create fully fledged games?
Will Steam and Playstation allow it on their platform? I have a hard time seeing third party publishers surviving this trend.

Edit: I misread, sorry about that. No, nothing like what you see in those demos would ever be allowed on those platforms, and today's IP laws are already enough to stop it from happening.

My original text: Right now, this is really just a concept and a demo. It's not going to let you build a whole game from a prompt, despite what the clickbait headlines say.

Moreover, we're still in the early days, and there are some pretty big challenges to solve, e.g. latency. But you'll definitely see this tech used in traditional engines without triggering Sony. In the big picture, that's what DLSS is already doing: using AI to bridge the gaps hardware can't handle. This will just be another generative tool for the toolbox, and it's hopefully how we'll eventually get ultra-high-end visuals without needing a $5000 GPU.
 