
Shower thought: What if there was a way to use UE5's Nanite and generative AI to create true environmental destruction?

Knightime_X

Member
Was taking a shower then this popped into my head...

Nanite is responsible for unlimited detail.
Neural AI can quickly generate things or images.

So as an example, what if you were to break a rock in a game?
Traditionally, the rock destruction would need to be modeled and maybe even randomized.
What if AI made use of Nanite and randomly generated the textures and overall damage?

Every instance of destruction would look different and have its own unique damage marks.
Things like vehicle damage would look amazing.

Possibilities are endless.
 
Don't believe that would work. Nanite is usually only used on static, non-deforming geometry, hence why Nanite characters don't exist. It's purely for high-res static environmental stuff. Although, I do believe that maybe in the future they'll find a way to apply it to animated meshes, but I could be wrong.

AI right now is mostly used for voice stuff, 2D imagery, text and so on. Text-to-3D doesn't exist yet, but it's on the way according to Midjourney's CEO.
 
Was taking a shower then this popped into my head...

Nanite is responsible for unlimited detail.
Neural AI can quickly generate things or images.

So as an example, what if you were to break a rock in a game?
Traditionally, the rock destruction would need to be modeled and maybe even randomized.
What if AI made use of Nanite and randomly generated the textures and overall damage?

Every instance of destruction would look different and have its own unique damage marks.
Things like vehicle damage would look amazing.

Possibilities are endless.
More like a brain fart.
 

Deerock71

Member
You and I have different shower thoughts.
hentai GIF
 

Crayon

Member
Idk. I know in the Matrix demo, the cars were Nanite and then had to switch to a traditional LOD mesh as soon as you dented them.
 

Raonak

Banned
Terrain generation/deformation would be a cool use of AI

The tricky part is creating a fun gameplay loop based around that destruction.
 

Knightime_X

Member

I want literally everything destroyable, not just a predetermined set piece or certain walls.
When something like a monitor gets blown apart I want to see the inner workings, not just brokenglass.jpg texture. Shattered glass, everywhere.
Something of this magnitude won't be possible until several generations from now. Maybe in 10-20 years? Movie-tier levels of destruction.
Nothing has come close to what I imagine.
 

Guilty_AI

Gold Member
Yeah, but significantly more detailed.
It's possible, but:
A: There's no incentive for producing such detailed destruction models.
B: The main performance/hardware issue with these systems usually comes from calculating debris physics and persistence, not how precise the destruction is.
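Point B is why games typically cap persistent debris rather than make fractures less detailed: once the pool is full, the oldest chunks despawn, so the physics and persistence cost stays bounded no matter how much the player breaks. A minimal sketch of that idea (the `DebrisPool` name and API are made up for illustration, not from any engine):

```python
from collections import deque

class DebrisPool:
    """Fixed-size debris pool: spawning past capacity evicts the oldest piece.

    This bounds simulation/persistence cost regardless of how much
    destruction the player causes.
    """
    def __init__(self, max_pieces):
        self.pieces = deque(maxlen=max_pieces)

    def spawn(self, piece):
        self.pieces.append(piece)  # when full, the oldest piece silently despawns

    def active_count(self):
        return len(self.pieces)

pool = DebrisPool(max_pieces=3)
for i in range(5):
    pool.spawn(f"chunk_{i}")

print(pool.active_count())  # 3 — never more than the cap
print(list(pool.pieces))    # ['chunk_2', 'chunk_3', 'chunk_4'] — only the newest survive
```

Real engines do fancier things (sleeping rigid bodies, merging static debris back into the scene), but the cap-and-recycle principle is the same.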
 

kikkis

Member
Don't believe that would work. Nanite is usually only used on static, non-deforming geometry, hence why Nanite characters don't exist. It's purely for high-res static environmental stuff. Although, I do believe that maybe in the future they'll find a way to apply it to animated meshes, but I could be wrong.

AI right now is mostly used for voice stuff, 2D imagery, text and so on. Text-to-3D doesn't exist yet, but it's on the way according to Midjourney's CEO.
Only skinned geo doesn't work with Nanite. Breaking down rocks and concrete would be just fine.
 

Knightime_X

Member
It's possible, but:
A: There's no incentive for producing such detailed destruction models.
B: The main performance/hardware issue with these systems usually comes from calculating debris physics and persistence, not how precise the destruction is.
This is where AI comes into play.
We have yet to see even 1% of what AI can REALLY do.
The possibilities go beyond the scope of many people's imagination.
 

baphomet

Member
I want literally everything destroyable, not just a predetermined set piece or certain walls.
When something like a monitor gets blown apart I want to see the inner workings, not just brokenglass.jpg texture. Shattered glass, everywhere.
Something of this magnitude won't be possible until several generations from now. Maybe in 10-20 years? Movie-tier levels of destruction.
Nothing has come close to what I imagine.

You'll be dead before that happens.
 

Dacvak

No one shall be brought before our LORD David Bowie without the true and secret knowledge of the Photoshop. For in that time, so shall He appear.
Isn’t deformation on their roadmap? I could have sworn it was, but I might be making that up.
 

Three

Gold Member
I don't think you've thought this through beyond the most basic of surface levels. What is the connection between nanite and an AI knowing how things break or how materials behave?
 
I want literally everything destroyable, not just a predetermined set piece or certain walls.
When something like a monitor gets blown apart I want to see the inner workings, not just brokenglass.jpg texture. Shattered glass, everywhere.
Something of this magnitude won't be possible until several generations from now. Maybe in 10-20 years? Movie-tier levels of destruction.
Nothing has come close to what I imagine.
I dunno. Battlefield did it pretty well in the past. It's nothing but GPU and CPU cost. I'd rather have a big immersive world with good gameplay. I think destruction is overrated unless the game's designed specifically around physics.
 

Guilty_AI

Gold Member
This is where AI comes into play.
We have yet to see even 1% of what AI can REALLY do.
The possibilities go beyond the scope of many people's imagination.
AI can be used to optimize previously existing destruction models (and even liquids simulation).

But again, it's still costly, both in dev time and hardware, and most games have no reason to spend so much on such systems when they don't play any real part in the game's mechanics. It can possibly even get in the way.

The only two games I can think of that implement more advanced destruction models are BeamNG and Teardown, both of which are designed around destruction in the first place.
 

chlorate

Member
Have you played Teardown? It’s low-res but basically has destruction at the per-pixel scale that you want.

I find that when I'm playing it, I never wish the destructibility was higher, but rather that there were better enemies to fight.
 
Have you played Teardown? It’s low-res but basically has destruction at the per-pixel scale that you want.

I find that when I'm playing it, I never wish the destructibility was higher, but rather that there were better enemies to fight.
I was also going to bring up Teardown

teardown-explosion.gif



So many people here made fun of it and dismissed it though, during both times it was shown at conferences.
 

ResurrectedContrarian

Suffers with mild autism
Algorithmic approaches to breaking up a rock with some randomization seeds are probably more viable; AI feels like a bit of overkill.

For most games, something like BeamNG's soft-body destruction physics is more promising since it involves all the working components and structures of a man-made object being damaged.

"Soft-body destruction" is also something Deerock71 thinks about in his showers, incidentally.
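The "algorithmic breaking with randomization seeds" idea above is cheap to sketch: derive every shard from a per-hit seed, so the same hit always reproduces the same break, while each new hit gets its own unique damage pattern. A toy 2D version, purely illustrative (the `fracture` function and its parameters are made up):

```python
import random

def fracture(impact_x, impact_y, seed, n_shards=6):
    """Deterministically scatter shard centers around an impact point.

    Same (impact, seed) -> identical break pattern (reproducible, e.g. for
    networked replay); a fresh seed per hit gives unique-looking damage.
    """
    rng = random.Random(seed)  # local RNG: deterministic per hit
    shards = []
    for _ in range(n_shards):
        jitter = rng.uniform(-1.0, 1.0)
        dist = rng.uniform(0.2, 1.5)
        shards.append((impact_x + jitter * dist,
                       impact_y + rng.uniform(-1.0, 1.0) * dist))
    return shards

a = fracture(0.0, 0.0, seed=42)
b = fracture(0.0, 0.0, seed=42)
c = fracture(0.0, 0.0, seed=7)
print(a == b)  # True: same seed reproduces the break exactly
print(a == c)  # False: new seed, new damage pattern
```

No network needed: the variety the OP wants mostly comes from seeding, and determinism is what makes it practical in a real engine.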
 
??
I've never seen anything but positive response to Teardown, aside from maybe some disappointment that this level of destruction comes from a Minecraft-style block world instead of more traditional modeling.
Yes, but not during conferences. Usually during the live conference threads, the people who are watching and waiting on hype train/megaton announcements, don't want to see a game like Teardown and they voice that opinion quite a bit in those threads.

For the developers, those conferences are like their one moment on the big stage, but for gamers here and elsewhere the reaction is simply 'get this off the screen, where is (insert AAA title)?'
 

CamHostage

Member
Yes, but not during conferences. Usually during the live conference threads, the people who are watching and waiting on hype train/megaton announcements, don't want to see a game like Teardown and they voice that opinion quite a bit in those threads.
Teardown was at Sony's May Showcase (the one that went bad overall); the only other time I can think of it showing might have been one of those PC Games shows. People seem fairly positive about it in the post-event Sony thread here, aside from it being overdue for announcement and being blocky (it's already gone through its hype cycle, so people have seen it many times here, and the trailers have never been able to capture "the point" of Teardown, so even those interested in its abilities might still see it as a tech showcase rather than a game to get excited to play). I think you're maybe overly sensitive to meh-to-negative responses, to a game you're a fan of not getting the pop you think it deserves, but looking at the posts, it's not the ratio you think it is.

Teardown isn't a big game. I get why people watching PlayStation Showcase would not go bananas for it finally getting a port when they're expecting TLoU Factions and GTA6. And it's cool what Teardown can do, but the developer has never been able to sell what it will do when you play it; the fact that it still gets positive notices and some console-release anticipation 3 years after its Early Access release (and a somewhat quiet April 2022 launch) is a good thing for a game which could have otherwise been just loved quietly by its selective fanbase.

Even in this thread, there's a game doing the "destruction" the OP wants to some degree, but it's not ranking high as an answer because it's not high-fidelity models. That's where Teardown is in gaming culture. It's cool tech, but it's not cool graphics (or at least not in the way wanted) and it's just not known to be a cool game. Hopefully it continues to grow in esteem and that changes, but it has an uphill battle in satisfying people like Knightime_X's desire for next-gen physics and visuals in one.
 

coffinbirth

Member
Don't believe that would work. Nanite is usually only used on static, non-deforming geometry, hence why Nanite characters don't exist. It's purely for high-res static environmental stuff. Although, I do believe that maybe in the future they'll find a way to apply it to animated meshes, but I could be wrong.

AI right now is mostly used for voice stuff, 2D imagery, text and so on. Text-to-3D doesn't exist yet, but it's on the way according to Midjourney's CEO.
WPO (World Position Offset).
It's tedious, but it's already being done.
 

sankt-Antonio

:^)--?-<
You'll be dead before that happens.
We'll be there well before PS7 hits the store. An AI will render every frame with unlimited detail, guided by player interaction. Think AI picture generation in real time, 120 times a second. No one will be building game worlds by hand by then. Rudimentary polygons may still be needed for collision detection.
Voxel tech has already been capable of unlimited detail since the early 90s. Something like this will come out and give us Avatar levels of gfx.
 

Hudo

Member
Was taking a shower then this popped into my head...

Nanite is responsible for unlimited detail.
Neural AI can quickly generate things or images.

So as an example, what if you were to break a rock in a game?
Traditionally, the rock destruction would need to be modeled and maybe even randomized.
What if AI made use of Nanite and randomly generated the textures and overall damage?

Every instance of destruction would look different and have its own unique damage marks.
Things like vehicle damage would look amazing.

Possibilities are endless.
You could train some sort of network or network ensemble to do destruction. It's already applied in soft-body deformation and tearing (like the ripping of cloth or bread), based on particle physics, which is what these networks were trained on.
The problem is the generation/collection of training data for environmental destruction, or how you can establish an isomorphism between particles and big objects such that it makes sense. I know that to simulate the breaking of glass and similar surfaces, they first used naive Delaunay triangulation and variations based on it. Later they used a parameterized model, so that you can simulate the ripples expanding throughout the glass from an impact point (like a bullet hole) instead of being uniformly distributed.

I don't know why or how you want to apply Nanite to that. It's an automated and dynamic LOD system for static objects (that has been overhyped in marketing, just like Lumen and other stuff). For dynamic/destructible objects, where the physics handler is interfering, I'm actually not sure whether Nanite is "allowed" to apply computations to the objects. But if it is, it would still be shape-preserving, so Nanite couldn't help you "break" something; that's also not what it's supposed to do. If you mean that you want to use Nanite as some sort of overcomplicated tessellation algorithm, splitting your polygon mesh into a finer one to get more "break points", it wouldn't split the mesh the way you'd need in order to simulate accurate breaking physics, since Nanite doesn't take into account anything you'd need, like point of impact, force applied, material solidity, etc.
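The parameterized glass model described above, cracks concentrated around the impact point rather than spread uniformly, can be approximated by sampling fracture-cell sites with density that falls off from the bullet hole. A hedged sketch of that sampling step only (illustrative names, not the actual VFX method; the sites would then feed a Voronoi/Delaunay-style cell split):

```python
import math
import random

def crack_sites(impact, pane_size, n_sites, spread, seed):
    """Sample fracture-cell sites clustered around an impact point.

    Gaussian falloff concentrates cracks at the hole (dense radial shards)
    and leaves the pane edges coarse -- unlike a uniform scatter, which
    would break the glass evenly everywhere.
    """
    rng = random.Random(seed)
    w, h = pane_size
    sites = []
    while len(sites) < n_sites:
        x = rng.gauss(impact[0], spread)
        y = rng.gauss(impact[1], spread)
        if 0.0 <= x <= w and 0.0 <= y <= h:  # reject sites off the pane
            sites.append((x, y))
    return sites

sites = crack_sites(impact=(0.5, 0.5), pane_size=(1.0, 1.0),
                    n_sites=200, spread=0.15, seed=1)
near = sum(1 for x, y in sites if math.hypot(x - 0.5, y - 0.5) < 0.3)
print(near / len(sites))  # most sites land within 2*spread of the hole
```

Swapping the Gaussian for a uniform distribution gives exactly the "evenly shattered" look the parameterized model was introduced to avoid.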
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Chaos + Fracture editor:

EPDkyhHWoAIsGA1

FractureSettings.webp

Construction_BuildingPack.webp

I don't know how performant it is, 'cause I haven't compiled this version of Unreal, as I have little interest in actually using this level of destruction calculated in real time in any projects.
But one day I might compile it just to see what I can get away with.
 
You could train some sort of network or network ensemble to do destruction. It's already applied in soft-body deformation and tearing (like the ripping of cloth or bread), based on particle physics, which is what these networks were trained on.
The problem is the generation/collection of training data for environmental destruction, or how you can establish an isomorphism between particles and big objects such that it makes sense. I know that to simulate the breaking of glass and similar surfaces, they first used naive Delaunay triangulation and variations based on it. Later they used a parameterized model, so that you can simulate the ripples expanding throughout the glass from an impact point (like a bullet hole) instead of being uniformly distributed.

I don't know why or how you want to apply Nanite to that. It's an automated and dynamic LOD system for static objects (that has been overhyped in marketing, just like Lumen and other stuff). For dynamic/destructible objects, where the physics handler is interfering, I'm actually not sure whether Nanite is "allowed" to apply computations to the objects. But if it is, it would still be shape-preserving, so Nanite couldn't help you "break" something; that's also not what it's supposed to do. If you mean that you want to use Nanite as some sort of overcomplicated tessellation algorithm, splitting your polygon mesh into a finer one to get more "break points", it wouldn't split the mesh the way you'd need in order to simulate accurate breaking physics, since Nanite doesn't take into account anything you'd need, like point of impact, force applied, material solidity, etc.
That's some impressive knowledge you got there 👏
 