Microsoft has created an AI-generated version of Quake

IbizaPocholo

NeoGAFs Kent Brockman

Microsoft unveiled its Xbox AI era earlier this year with a new Muse AI model that can generate gameplay. While it looked like Muse was still an early Microsoft Research project, the Xbox maker is now allowing Copilot users to try out Muse through an AI-generated version of Quake II.

The tech demo is part of Microsoft’s Copilot for Gaming push, and features an AI-generated replica of Quake II that is playable in a browser. The Quake II level is very basic and includes blurry enemies and interactions, and Microsoft is limiting the amount of time you can even play this tech demo.

While Microsoft originally demonstrated its Muse AI model at 10fps and a 300 x 180 resolution, this latest demo runs at a playable frame rate and at a slightly higher resolution of 640 x 360. It’s still a very limited experience though, and more of a hint at what might be possible in the future.

Microsoft is still positioning Muse as an AI model that can help game developers prototype games. When Muse was unveiled in February, Microsoft also mentioned it was exploring how this AI model could help improve classic games, like Quake II, and bring them to modern hardware.

“You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run,” said Microsoft Gaming CEO Phil Spencer in February. “We’ve talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity.”

Play the demo:
 

StreetsofBeige

Gold Member
Who knows when it'll happen and at what quality, but if AI stuff is going to be a big part of gaming, I hope it extends to mods or gives gamers options to tweak the game itself.

For example, a company makes an AI game. The default is it's a Quake-ish game with some standard options and modes. That's fine. I want to change it myself with wacky shit like "make it a WWII skin shooter", "create a battle royale mode", "activate god mode".

I can do all sorts of things myself without needing the devs to build them into the game with canned options (or no options at all).
 

ZehDon

Gold Member
I'm sure this is a great technical achievement but holy shit that's rough.
Sure, but it's proof of concept for the next step in game tech. Consider: there is no engine, there is no game code, there is no asset data. 10FPS at 640x480 for a game that, essentially, doesn't actually exist is pretty interesting. AI can do some good work with images and simple videos; imagine a game with a level of fidelity that is simply not possible to render in realtime using current techniques generated one frame at a time by AI. As long as developers remain at the epicentre of that, I'm curious to see where this goes.
 

Mownoc

Member
On a technical level it's impressive what it's able to do.

Of course it's still many years away from creating something like Quake 2 that actually plays well.
 

Three

Member
Sure, but it's proof of concept for the next step in game tech. Consider: there is no engine, there is no game code, there is no asset data. 10FPS at 640x480 for a game that, essentially, doesn't actually exist is pretty interesting. AI can do some good work with images and simple videos; imagine a game with a level of fidelity that is simply not possible to render in realtime using current techniques generated one frame at a time by AI. As long as developers remain at the epicentre of that, I'm curious to see where this goes.
Yeah, technically I'm sure it is very impressive, but the final result isn't ready to impress as a product yet, even for something as simple as Quake. I'm curious to see where it goes too.
 

RoadHazard

Gold Member
That obviously looks and plays like shit, but it sure is technically impressive.

I'm guessing it has a copy of the level geometry as reference? Otherwise I would expect it to hallucinate all kinds of weird shit, the level to keep changing, etc.
 
How about elevating it to a point where you just move forward, without turning back to where you came from? Actually, anything with a strong dopamine effect, because we just want that without knowing the cause.
 

Kenneth Haight

Gold Member

Michael J Fox Marty GIF by Back to the Future Trilogy
 

Punished Miku

Human Rights Subscription Service
We already put in text prompts with some image references and AI spits out a whole new image. What makes anyone think it can't observe a thousand hours of gameplay and just produce a replica? Seems easy in comparison. The implications for BC could be cool, in cases where the original is not possible to access.
 
It’s not a game, it’s a procedural video; it just takes the current frame + inputs and guesses the next frame.

There is no persistence at all: look up then down and the entire map changes; just do a 360 spin and you’ll be looking at something entirely different.

The idea that this can be used for actual games is a fabrication, unless you want your game to be like a fever dream on acid.
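To make that concrete, here's a rough toy sketch of the loop being described (pure illustration in Python; the real Muse model isn't public, so the predictor here is just a stub and the names and sizes are made up):

```python
import numpy as np

# Stand-in for a learned world model: given the last few frames and the
# player's inputs, guess the next frame. The real thing would be a big
# neural net; this stub just perturbs the previous frame.
def predict_next_frame(recent_frames, inputs):
    last = recent_frames[-1]
    noise = np.random.normal(0.0, 2.0, size=last.shape)
    return np.clip(last + noise, 0, 255)

def play(num_frames=600, context=8):
    frame = np.zeros((360, 640, 3))          # 640x360, like the demo
    history = [frame]
    for _ in range(num_frames):
        inputs = {"move": "forward", "look": 0.0, "fire": False}  # player input
        frame = predict_next_frame(history[-context:], inputs)
        history.append(frame)
        # display(frame) would go here. The only "state" the predictor sees
        # is the last `context` frames of pixels: anything that scrolls out
        # of that window is forgotten, which is why the map changes behind you.

play()
```

The game state is literally nothing but the pixels in that short history window, which is why a 360 spin lands you somewhere new.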
 
It’s not a game, it’s a procedural video; it just takes the current frame + inputs and guesses the next frame.

There is no persistence at all: look up then down and the entire map changes; just do a 360 spin and you’ll be looking at something entirely different.

The idea that this can be used for actual games is a fabrication, unless you want your game to be like a fever dream on acid.

This.

It's more accurate to call it an AI-approximated gameplay trailer than a game.

Games will never be made like this.
 

MiguelItUp

Member
I personally don't get what's so impressive about it considering it's broken and really messy. I guess to some degree it's "neat", but it feels weird to be boasting such a thing when it's in this condition.

I love that I went into the water, I couldn't ascend, turned around and suddenly I was clipped outside of the map right where you spawn in the first level, lol.
 

baphomet

Member
Maybe use that money and manpower to make a game instead of wasting time on this?

People here think that this technology will not evolve, lmao.
This won't. It has no practical use in gaming. This had to be trained on presumably millions of hours of Quake 2 gameplay, and from that we get a nonfunctional version of Quake 2 requiring 1000x the processing power.
 
It’s not a game, it’s a procedural video; it just takes the current frame + inputs and guesses the next frame.

There is no persistence at all: look up then down and the entire map changes; just do a 360 spin and you’ll be looking at something entirely different.

The idea that this can be used for actual games is a fabrication, unless you want your game to be like a fever dream on acid.
So it does not actually write any game code, no textures, no models, just interactive laggy images/video that are based on Quake 2 visuals?
Interesting, but it seems more appropriate to expect something not totally garbage-looking for AI porn and probably VR ... I assume the real-time aspect is the achievement? I thought AI would create a proper game, not push fauxK to a whole other level by eliminating the rendering and the actual game.
 

Pejo

Member
I know the answer somewhere along the line is "money", but I'm so confused about why we jump-started AI down the road towards making creative stuff first instead of productivity stuff first. Summarizing a quote I read somewhere: "I want AI to do the dishes so I can write a book, not write a book so I can do dishes."

This looks like shit now, but it's impressive that it's already at this level. It'll probably be virtually impossible to tell the difference between AI generated stuff and human made stuff in like 10 years.
 

Romulus

Member
Crazy how short-sighted people in here are, considering what we've seen AI do in a short time. The thread about AI videos had an early demo and people were saying it had no future, and literally a year later it looks damn near like reality with the top-tier stuff.
 

Raven77

Member
The entertainment space will be completely unrecognizable from today in 2035. AI will be embedded into nearly every facet of entertainment.

Change the singer's voice, slow the song down, make it a country version, do an EDM version.

Replace this actor with a different one, do an anime version of the first Harry Potter movie, make the first Harry Potter movie rated R and combine it with elements from Shrek.

Video games will be the same, but I think they'll actually be more affected than any other form of entertainment. AI sandbox games will be all the rage the same way open world games have been.
 

Moochi

Member
While this might not seem impressive right now, imagine a future where a specific game model is trained and distilled by several thousand H100s. The graphics are indistinguishable from life or a big-budget movie or CGI film. That model is playable with its full graphics on any device that is able to run the distilled model. The hardware will sip power. It won't need cooling.

Personally, I think generating the entire game is the wrong way to develop this. The right way would be to use generative AI as a graphics overlay: a lightweight engine runs locally using shaded polygons and simple objects to maintain consistency with the game vision, while the visuals are generated from a distilled model and overlaid to bring cutting-edge graphics to any device that's capable of running something as lightweight as the Build engine.
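Very roughly, the split would look something like this (hypothetical sketch, not an existing pipeline; `enhance_frame` stands in for whatever distilled image model you'd actually run):

```python
import numpy as np

def simulate(state, inputs):
    # Classic, cheap game logic: authoritative positions, health, level data.
    # This is the part that stays consistent from frame to frame.
    new_state = dict(state)
    if inputs.get("forward"):
        x, y = new_state["player_pos"]
        new_state["player_pos"] = (x, y + 1)
    return new_state

def rasterize_simple(state):
    # Lightweight local render: flat-shaded polygons and simple objects,
    # cheap enough for Build-engine-class hardware. Stubbed as a blank frame.
    return np.zeros((360, 640, 3), dtype=np.uint8)

def enhance_frame(plain_frame, state):
    # Hypothetical distilled generative model that repaints the simple frame
    # with high-end visuals, conditioned on the real game state. Stubbed here;
    # in the idea above, this is the only AI step per frame.
    return plain_frame

state = {"player_pos": (0, 0), "health": 100}
for _ in range(3):
    state = simulate(state, {"forward": True})             # persistent, deterministic
    frame = enhance_frame(rasterize_simple(state), state)  # generated visuals on top
```

The engine keeps the world persistent and the model only handles the pixels, which sidesteps the "fever dream" problem people are describing above.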
 

intbal

Member
What an incredible waste of resources.

All of this money they dumped into AI would have been better spent acquiring the music licences necessary to add JSRF to backwards compatibility.
Nutella is an idiot.
 

jumpship

Member


As an early experiment in training generative AI, it's an impressive curiosity. If you didn't know this was AI and just looked at what we're seeing, it's absolutely terrible: low-framerate, low-definition footage of Quake 2.

Only now using 100x more energy than just playing it locally.

It’s not a game, it’s a procedural video; it just takes the current frame + inputs and guesses the next frame.

There is no persistence at all: look up then down and the entire map changes; just do a 360 spin and you’ll be looking at something entirely different.

The idea that this can be used for actual games is a fabrication, unless you want your game to be like a fever dream on acid.

Exactly, there's no game code running here; it can't track anything, including your position on the map. It's barely a game at all.
Adding game code to something like this requires game development just like any other game in existence.

I fail to understand how this helps game development if you need the actual developed game to exist for training in the first place.
 

Bry0

Member
It’s cool, but to have any practical use it needs persistence when simply looking around, a way to make levels persist, and persistent health and such. The more you interact with it, the more issues you find.
 

rofif

Can’t Git Gud
Makes no sense whatsoever. You need to make a game first in order to pretend you can make a game.
But it's a cool dream simulation.
 

nemiroff

Gold Member
I'm sure this is a great technical achievement but holy shit that's rough.
That's a fact.

But remember: it was just a couple of years ago that ML image generators weren't able to generate human hands or eyes properly. And now look where we're at, generating realistic and persistent videos.

The Quake demo is just a tiny glimpse of what's coming.
 

Dacvak

No one shall be brought before our LORD David Bowie without the true and secret knowledge of the Photoshop. For in that time, so shall He appear.
This seems to be about where I expected we’d be right now. We’re gonna see fully AI-generated video games in probably less than 10 years at this point.

In a not too distant future, AI will be able to generate a “perfect” game for you, based on your own preferences and personal algorithms. Same with video content. Same with music. Same with pretty much all entertainment.

It’s not gonna stop. The only limiting factor is time and available silicon, so a lot of this depends on Moore’s Law, which is quickly dying.
 