Genie 3 is out of the bottle. New AI development.

man that's wack... I hope they use this to pitch, structure, conceptualise and plan games, but still end up programming games. The latency must be very high.
 
It's still generating video ... The energy and computing resources needed to create a "dynamic world" for just 1 person are enormous. Now extrapolate that to thousands, tens of thousands or even millions of people ... That's just ridiculous.

What game developers want is AI that generates assets: 3D worlds, fully featured 3D models, textures, etc.
Generate everything once so the whole world can play a game without needing as much energy as the sun.
 
As usual, another fake AI demo with no boundaries or explanation of user interaction. No explanation of cost, hardware requirements, etc. Bolstered by mostly fake comments generated by AI bots. They say they want it to be used for games but of course don't mention anything related to gameplay, the defining aspect of games, only the video generation aspect of it.

AI is destroying the internet, and we badly need a way to separate AI-generated content from human-generated content.
 
People moan at pre-rendered cut scenes that don't represent actual games and now you guys wanna do fake AI generated games that aren't games but videos?

Did I get this right?
 
It sounds very promising but it's all about how it's used. I want to see what whack stuff will be created and be shut down by Google themselves for being "harmful" or "wrong-thinking".
 
Holy shit! This is a HUGE leap in real time generation. The world consistency is so much better than models from just a few months ago. This is useful for much more than just creating "games", this can be a VR killer app: infinite explorable worlds.
Like 1.5 years ago we barely started generating good quality video and now we have this. Don't look at how limited it is now, look at where all of this is going.

EDIT: Genie 2 was just 8 months ago, holy shit. How long until they generate audio inside the world, Genie 4?
 
Last edited:
It's still generating video ... The energy and computing resources needed to create a "dynamic world" for just 1 person are enormous. Now extrapolate that to thousands, tens of thousands or even millions of people ... That's just ridiculous.

What game developers want is AI that generates assets: 3D worlds, fully featured 3D models, textures, etc.
Generate everything once so the whole world can play a game without needing as much energy as the sun.


With how fast AI is moving I don't believe those barriers will exist long.

Just about 5 years ago everyone was saying wireless PC VR gaming was impossible, and now that's how pretty much everyone into VR plays.
 
AI will end up being a Top 3 human invention of all time when it's all said and done.

Edit: Nvm

Nah. Matter of fact, I've changed my mind. It's going to be nearly impossible to knock off the top 3 inventions of all time.

1. The harnessing of electricity
2. Medicines
3. The wheel
4. Writing
5. The Printing press

AI is not top 3 yet, and I don't see anything ever knocking those off.
 
People poo-pooing this as if it isn't a leap toward the inevitability of AI-generated games.

For now you can put a text prompt in to add to the world on the fly. Not a huge difference from a button press that adjusts the world.

No one is claiming that this exact release is going to redefine what games are as we know it, but people can see how it's leading there.
 
Holodeck comes one step closer
That's the best use case for this right now. Rapid prototyping and presentations.

It's still generating video ... The energy and computing resources needed to create a "dynamic world" for just 1 person are enormous. Now extrapolate that to thousands, tens of thousands or even millions of people ... That's just ridiculous.

What game developers want is AI that generates assets: 3D worlds, fully featured 3D models, textures, etc.
Generate everything once so the whole world can play a game without needing as much energy as the sun.
this is the future of gaming though

We don't know exactly how the connection between pre-planned and generative pieces will exist in its final form, but there are already plenty of mechanisms out there for constraining generative image models based on depth maps etc. And the capability of smaller on-device models keeps improving through new tricks, quantization, and increasing chip capabilities in the mainstream (my laptop can run a 30GB coding agent model entirely in memory now at solid speed, so I can turn the internet off, give it a local repo and a problem to solve, and it will churn away with high intelligence rewriting code and re-running tests until it works -- things are progressing fast). It's inevitable that we'll be simulating game worlds within generative models in the future.
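The agentic test-and-fix loop described above can be sketched in a few lines. To be clear, this is a toy illustration and not any particular tool's implementation: the model call is a stand-in stub (a real setup would hit a local LLM endpoint), and the "test suite" is a single assert.

```python
# Toy sketch of an agentic code-repair loop: run tests, feed the
# failure back to a "model", apply its patch, repeat until green.

def run_tests(source: str):
    """Execute the candidate code and its tests; return an error string or None."""
    namespace = {}
    try:
        exec(source, namespace)
        assert namespace["add"](2, 3) == 5  # the entire "test suite"
        return None
    except Exception as e:
        return repr(e)

def ask_model(source: str, error: str) -> str:
    """Stub standing in for an LLM call; here it just fixes the known bug."""
    return source.replace("a + b + 1", "a + b")

def agent_loop(source: str, max_iters: int = 5) -> str:
    for _ in range(max_iters):
        error = run_tests(source)
        if error is None:
            return source                   # tests pass, we're done
        source = ask_model(source, error)   # feed the failure back
    raise RuntimeError("gave up after max_iters")

buggy = "def add(a, b):\n    return a + b + 1\n"
fixed = agent_loop(buggy)
print(run_tests(fixed) is None)  # -> True
```

The real version just swaps `ask_model` for a chat-completion request against a locally served model, with the error output in the prompt.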
 
People poo-pooing this as if it isn't a leap toward the inevitability of AI-generated games.


As inevitable as flying cars.

Seriously, this is some bullshit. If by "games" you mean absolute soulless trash, as everything created by AI so far, yes. If you mean a genuine videogame like Elden Ring or Expedition 33, the answer is no.

If you think Ubisoft or EA produce slop, just you wait.
 
If it generates video on the fly in realtime, how is that different from a game? Isn't that just a different way of rendering?
 
As inevitable as flying cars.

Seriously, this is some bullshit. If by "games" you mean absolute soulless trash, as everything created by AI so far, yes. If you mean a genuine videogame like Elden Ring or Expedition 33, the answer is no.

If you think Ubisoft or EA produce slop, just you wait.

It's inevitable. The problem is it's difficult for us to imagine how advanced it will be in the future. There's something to be said about exponential advancement; there's a point where your brain cannot understand the leaps.
 
As inevitable as flying cars.

Seriously, this is some bullshit. If by "games" you mean absolute soulless trash, as everything created by AI so far, yes. If you mean a genuine videogame like Elden Ring or Expedition 33, the answer is no.

If you think Ubisoft or EA produce slop, just you wait.
I don't think AI is going to wholly replace traditional products, but there will be AI games. We can debate what we think market share between them will be.

Flying cars exist. Not on a wide consumer scale.

I don't disagree that AI will result in soulless slop, btw, but that's different from the argument that AI games are coming.
 
I don't think AI is going to wholly replace traditional products, but there will be AI games. We can debate what we think market share between them will be.

Flying cars exist. Not on a wide consumer scale.

I don't disagree that AI will result in soulless slop, btw, but that's different from the argument that AI games are coming.
Said it before, but I imagine AI in games will eventually take a basic PS2 style scene and then let a bespoke AI model trained on the characters and game environments texture it. Kinda like this video. A DLSS 6.0 feature perhaps?

 
Said it before, but I imagine AI in games will eventually take a basic PS2 style scene and then let a bespoke AI model trained on the characters and game environments texture it. Kinda like this video. A DLSS 6.0 feature perhaps?


I think so, too. I think that and Genie 3 are two branching paths game AI can take, and we'll see both of them progress further from where they are today.
 
As inevitable as flying cars.

Seriously, this is some bullshit. If by "games" you mean absolute soulless trash, as everything created by AI so far, yes. If you mean a genuine videogame like Elden Ring or Expedition 33, the answer is no.

If you think Ubisoft or EA produce slop, just you wait.

You only have to look at the fast progress drones have made to see that flying 'cars' are not far away.

Once they can factor in lots of failsafe mechanisms, certified safe against a 100m fall for example, personal flying machines will take off. (No pun intended)


On the gaming side, in the future it'll be slop tailored to suit every individual - there will be no criticism possible. Pure subjective fun.

There will be no single Ubi or EA product we can all point and laugh at.
 
this is the future of gaming though

We don't know exactly how the connection between pre-planned and generative pieces will exist in its final form, but there are already plenty of mechanisms out there for constraining generative image models based on depth maps etc. And the capability of smaller on-device models keeps improving through new tricks, quantization, and increasing chip capabilities in the mainstream (my laptop can run a 30GB coding agent model entirely in memory now at solid speed, so I can turn the internet off, give it a local repo and a problem to solve, and it will churn away with high intelligence rewriting code and re-running tests until it works -- things are progressing fast). It's inevitable that we'll be simulating game worlds within generative models in the future.

Umm.. no, you can't run a 30GB model at decent speed on a local laptop unless you have a 16" MacBook Pro with an M4 Max and 48GB+ RAM (preferably 64GB+), or I guess the newest RTX 5000 Ada with 32GB VRAM (which makes it pretty tight).

These will run $4-5K or higher. Otherwise you're running a quantized or distilled model with lower fidelity.

And even then you get like 10-30 tokens per sec (depends on model and workload), and it will be slower than that with an agentic task load.
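For rough context on this hardware back-and-forth, here's a back-of-envelope sizing sketch. The ~4.5 bits/weight figure for a Q4_K_M-style quant is an assumption (actual ratios vary by scheme), and it covers weights only: KV cache and runtime overhead add several more GB on top.

```python
# Rough weight-memory estimate for an N-billion-parameter model.
# bits_per_weight: 16 for fp16, ~4.5 assumed for a Q4_K_M-style quant.
def quantized_size_gb(params_billions, bits_per_weight=4.5):
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

print(round(quantized_size_gb(30, 16.0), 1))  # fp16 30B -> 60.0 GB
print(round(quantized_size_gb(30), 1))        # ~4-bit 30B -> 16.9 GB
```

Which is roughly why a 30B model at ~4-bit fits in a 32-36GB machine while fp16 doesn't come close, and why the fidelity argument below is really an argument about quantization.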
 
Does this mean eventually everything will be free and devs will cease to exist? No need for them if you can generate anything with one button press.
 
Umm.. no, you can't run a 30GB model at decent speed on a local laptop unless you have a 16" MacBook Pro with an M4 Max and 48GB+ RAM (preferably 64GB+), or I guess the newest RTX 5000 Ada with 32GB VRAM (which makes it pretty tight).

These will run $4-5K or higher. Otherwise you're running a quantized or distilled model with lower fidelity.

And even then you get like 10-30 tokens per sec (depends on model and workload), and it will be slower than that with an agentic task load.
I have an M3 with 36GB RAM, and I run Qwen3-Coder 30B (quantized of course, but if you've used quantized models with GGUF you'll know they have excellent fidelity these days).

I get a very smooth response speed, and I can (as I said) literally give it a task, walk away, and come back to see it having edited files, run test commands, etc following the development and testing plan I gave it.

Not perfect, but very impressive, particularly given that I've been running it on some fun side projects in not-very-common coding languages (TADS and Inform text adventure languages; another project is in Rust, which is common but difficult, and it can run the build, see the errors, and then fix them).
 
As usual, another fake AI demo with no boundaries or explanation of user interaction. No explanation of cost, hardware requirements, etc. Bolstered by mostly fake comments generated by AI bots. They say they want it to be used for games but of course don't mention anything related to gameplay, the defining aspect of games, only the video generation aspect of it.

AI is destroying the internet, and we badly need a way to separate AI-generated content from human-generated content.

Astroturfing was already bad enough. Now you have AI generated fake posts, reviews, images, etc.

No wonder companies love AI so much.

However, I guess I shouldn't be surprised as most resources get pissed away.
 
End all scarcities.

Let AI generate in seconds what would take us years and hundreds of people to make.
People who are against that don't understand how liberating it will truly be.
 
I think the end result in the short to medium term will be game designers making games with PS2/PS3-era graphics very quickly and having AI models transform them in real time into what they actually want them to look like.
 
People moan at pre-rendered cut scenes that don't represent actual games and now you guys wanna do fake AI generated games that aren't games but videos?

Did I get this right?
same place that also hates AI-generated 'fake' frames, fwiw
 
That's the future of games, no more 3d engines.
There's some truth to this, at least within the next decade.

The entire tech stack and general reasoning model of human-coded 3D rendering engines will no longer be the primary medium for building special effects for major movies, TV, ads, or much of anything else -- that entire industry of what we call CGI special effects is about to be as niche as miniatures + stop motion are now, living on as a kind of fun anachronism only for quirky side projects.

Don't believe me? Look at Runway's new video editing model Aleph -- it takes in your real video clips and transforms the scene in any way you ask:

These models get better every few months, and can dream up transformations of real footage and scenes on demand, already showing examples of realism that far exceed what you get with engine-rendered CGI. "Sampling" and generating from a combination of real footage and well-honed prompts & controls is how video creation will be done. And obviously games will see a similar but slightly different kind of transformation.

EDIT: more on Aleph if you want to see how editing works... I'm sorry, but it's not "slop" anymore, it's a tool for transforming your existing projects and assets and amplifying your power to be creative with what you already have.
 
I have an M3 with 36GB RAM, and I run Qwen3-Coder 30B (quantized of course, but if you've used quantized models with GGUF you'll know they have excellent fidelity these days).

I get a very smooth response speed, and I can (as I said) literally give it a task, walk away, and come back to see it having edited files, run test commands, etc following the development and testing plan I gave it.

Not perfect, but very impressive, particularly given that I've been running it on some fun side projects in not-very-common coding languages (TADS and Inform text adventure languages; another project is in Rust, which is common but difficult, and it can run the build, see the errors, and then fix them).
Quantized model quality isn't quite the same though. I have used a variety of Qwen and DeepSeek versions and they are OK, but nowhere close to full-on cutting-edge modern models, IMO.

And things get a lot worse once you go beyond light coding assistance and text generation.

I am not any sort of expert here, but the further we go, the more power-hungry the more advanced models seem to get, especially if you throw reasoning on top and then agents into the mix.
 
End all scarcities.

Let AI generate in seconds what would take us years and hundreds of people to make.
People who are against that don't understand how liberating it will truly be.
The way things are going, billionaire overlords will own everything and the peasants (rest of us) will sort through the garbage.

Maybe the EU will be able to do better by its people, but I have doubts.
 
People genuinely thinking this technology is going to be a democratic piece of tech for all mankind is making me actually depressed lol
 
End all scarcities.

Let AI generate in seconds what would take us years and hundreds of people to make.
People who are against that don't understand how liberating it will truly be.
This is going to be so, so bad. It's going to liberate people from their jobs, that's for sure, and liberate us from quality.
 
Fake the resolution, fake the frames, fake the games.

smoke weed GIF
 
Holy shit! This is a HUGE leap in real time generation. The world consistency is so much better than models from just a few months ago. This is useful for much more than just creating "games", this can be a VR killer app: infinite explorable worlds.
Like 1.5 years ago we barely started generating good quality video and now we have this. Don't look at how limited it is now, look at where all of this is going.

EDIT: Genie 2 was just 8 months ago, holy shit. How long until they generate audio inside the world, Genie 4?
Veo 3 already does great audio, so I would imagine it could come in just a point release like a 3.1, or sooner? Like, is Genie 3 even publicly available? Is this not just a demo at the mo? A public release could easily have the same generative audio engine as Veo.
 