
Real-Time Neural Texture Upsampling in God of War Ragnarok on PS5

Tripolygon

Banned
I remember at the start of the generation when Sony said the PS5 is capable of running machine learning inference for use in games. We have since seen a few Sony studios use machine learning inference at runtime in their games: Spider-Man for muscle deformation, Horizon Forbidden West in its temporal upsampling, and now God of War in texture compression and upsampling. Remember BCPack, which Microsoft was advertising during the launch of the Series X? Think something like that, but more advanced, using a neural network.

Every frame, the texture streaming system detects textures that require upsampling and sends them over to the upsampling system.
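A minimal sketch of what that per-frame hand-off could look like (the types and names here are illustrative guesses, not Santa Monica's actual code):

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Illustrative types; the real streaming system is not public.
struct Texture {
    uint32_t id;
    uint32_t residentMip;     // highest-detail mip currently in memory
    uint32_t wantedMip;       // mip level the renderer asked for this frame
    bool     neuralEligible;  // authored at PS4 res, opted into NN upsampling
};

// Called once per frame by the texture streaming system.
void CollectUpsampleRequests(const std::vector<Texture>& streamed,
                             std::queue<uint32_t>& upsampleQueue)
{
    for (const Texture& tex : streamed) {
        // A texture needs upsampling when the renderer wants more detail than
        // was authored/streamed and the texture opted into the neural path.
        if (tex.neuralEligible && tex.wantedMip < tex.residentMip) {
            upsampleQueue.push(tex.id);  // consumed by the upsampling system
        }
    }
}
```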

Goal
We hoped that upsampling textures at run-time would help us save disk space: artists author textures for PS4, and we upsample these textures on PS5 while keeping roughly the same package size.
We wanted to use a single network to handle both upsampling and compression, and output directly to BC. The player is always moving from one level to another, and upsampling must keep up.

q3WIUQj.png


Design

zuEsWCf.png


Result

In 10 minutes, the system produced 760 million BC1 and BC7 blocks using both our method and the method provided by Sony, which is around 10 gigabytes of BC1 and BC7 blocks if no other form of compression is used. Around 70% of the 10 gigabytes was produced by the neural networks, which is equivalent to around 42 4K textures per minute.
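Those figures are consistent with the standard BC block sizes (8 bytes per BC1 block, 16 per BC7). A back-of-the-envelope check, assuming the neural share is mostly BC7:

```cpp
#include <cstdio>

int main()
{
    // BC7 stores each 4x4 pixel block in 16 bytes (BC1 uses 8).
    const double bc7BlockBytes = 16.0;
    // A 4K (4096x4096) texture is (4096/4)^2 = 1,048,576 blocks.
    const double blocksPer4k = (4096.0 / 4.0) * (4096.0 / 4.0);
    const double bytesPer4k  = blocksPer4k * bc7BlockBytes;  // ~16.8 MB
    // 70% of ~10 GB over 10 minutes came from the networks.
    const double neuralBytesPerMin = 0.70 * 10e9 / 10.0;     // ~0.7 GB/min
    printf("~%.0f 4K textures per minute\n",
           neuralBytesPerMin / bytesPer4k);                  // prints ~42
    return 0;
}
```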

EYLIC6U.png


qhoNoYq.png


8byXIMH.png


On PS5, fp16 is used.

One of the simplest optimizations is adopting 16-bit floats, also known as half-precision floats or halves.
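The appeal is straightforward: halves cut the memory footprint and bandwidth of weights and activations in half, and RDNA2-class GPUs can issue packed fp16 math at up to twice the fp32 rate. A toy illustration of the storage side (generic C++, not the shader code from the talk):

```cpp
#include <cstdio>

int main()
{
    // A small dense layer: 64x64 = 4096 weights.
    const unsigned weightCount = 64 * 64;
    printf("fp32 weights: %u bytes\n", weightCount * 4u);  // 16384 (4 bytes each)
    printf("fp16 weights: %u bytes\n", weightCount * 2u);  // 8192  (2 bytes each)
    // Half the bytes means half the cache footprint and half the bandwidth
    // for every layer the network evaluates.
    return 0;
}
```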

ZZgcOtU.png


It is expensive in terms of compute but worth it. That is why they use spare resources to run it during gameplay.

The way we determine how many BC7 blocks to upsample is by tracking running averages of the excess frame time and of the duration of one evaluation of any specific network.
It upsamples one 2K texture to 4K in around 9.5 milliseconds, shaving off over a whole millisecond.
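A minimal sketch of that budgeting logic, assuming exponential moving averages (the exact formula isn't spelled out in the talk):

```cpp
// Hypothetical frame-budget scheduler: run as many network evaluations as the
// frame's spare time allows, based on two running averages.
struct UpsampleBudget {
    double avgExcessMs = 0.0;  // running average of unused frame time
    double avgEvalMs   = 9.5;  // running average cost of one network evaluation
    double alpha       = 0.1;  // EMA smoothing factor (assumed)

    void Update(double excessMsThisFrame, double lastEvalMs)
    {
        avgExcessMs = alpha * excessMsThisFrame + (1.0 - alpha) * avgExcessMs;
        avgEvalMs   = alpha * lastEvalMs        + (1.0 - alpha) * avgEvalMs;
    }

    // How many evaluations fit into this frame's spare time.
    int EvaluationsThisFrame() const
    {
        return avgEvalMs > 0.0 ? static_cast<int>(avgExcessMs / avgEvalMs) : 0;
    }
};
```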

mYZUOO2.png



Issues

Since BC7 works with 4x4 pixel blocks, the networks sometimes produce block artifacts. This is especially visible on highly specular surfaces under direct lighting. We attempted to fix this by training with four neighboring blocks at the same time and adding first- and second-order gradients to the loss function. These helped somewhat, but the issue was still visible.
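For reference, a loss with those gradient terms would plausibly take this shape (my reconstruction of the idea, not the talk's exact formula), where $x$ is the ground-truth group of four neighboring blocks and $\hat{x}$ the decoded output:

$$\mathcal{L} = \lVert \hat{x} - x \rVert_2^2 + \lambda_1 \lVert \nabla \hat{x} - \nabla x \rVert_2^2 + \lambda_2 \lVert \nabla^2 \hat{x} - \nabla^2 x \rVert_2^2$$

with the finite-difference gradients computed across the 2x2 block neighborhood so that seams between blocks are penalized.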

FO9EyG5.png


 
Last edited:
not seeing a difference on my screen with these comparisons :/
The images were shown on a projector, I think, so a lot bigger than our TVs/monitors. There is more of it in the link. What seems nice here is that the PS4 textures simply get upsampled without manual work? And with a gain compared to other methods. The only loss is that it uses computational power. So CPU. But there is so much more of it than on the PS4 that for GOW this is not a problem. What is also nice is that if they are showing this at GDC, they have talked about it with other Sony studios too. Will wait for someone more knowledgeable about the tech to explain a little more why it is a good thing.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Well, that's one way to make the up-port easy.

Though I'd think it would have been easier to have your high-res textures and then down-res them for the PS4.
Did they lose all their high-res maps?


EDIT: Ohh, this is being done at runtime?
Okay, that's cool. I can respect that as one way to keep file sizes low.
 
Last edited:

onQ123

Member
The smart thing for next gen would be to have a common pool of textures already on the console & distribute really small games that create full-size textures on the fly using that commonly shared data.
 

ToTTenTranz

Banned
So Nvidia's Revolutionary Neural Textures presented for the first time two weeks ago in tech demos that were going to change the world... had actually been preceded by a software implementation from Santa Monica and put into a shipping title half a year before, running on RDNA2 hardware without the need for dedicated tensor units.

Oops.


About this thread:

NLE5dwi.png


Z3qMHL2.png



Some users: But I cAn'T sEe aNy dIfFeReNcE...

Yes, that was the point.
The "no upscale" pictures take a lot more disk space, over 3x more, and with this you can't notice the difference.
 
Last edited:

sankt-Antonio

:^)--?-<
So Nvidia's Revolutionary Neural Textures presented for the first time two weeks ago in tech demos that were going to change the world... had actually been preceded by a software implementation from Santa Monica and put into a shipping title half a year before, running on RDNA2 hardware without the need for dedicated tensor units.

Oops.


About this thread:

NLE5dwi.png


Z3qMHL2.png



Some users: But I cAn'T sEe aNy dIfFeReNcE...

Yes, that was the point.
The "no upscale" pictures take a lot more disk space, over 3x more, and with this you can't notice the difference.

I thought it's saving disk space by using the low-res PS4 textures and then upscaling them "live" to a better quality, instead of authoring higher-res textures. But in the end, the textures need to look better than the base PS4 textures to make sense, because this process isn't free. You give away CPU performance for disk space and less dev time for cross-gen games.

I just don’t see the benefit in the pictures.
 

sankt-Antonio

:^)--?-<
I think the goal they had was to have lower disk and vram usage, while retaining the same quality.
And in this regard, it seems they were very successful.
Not lower, but the same disk space as the PS4 version while apparently looking better. I just don't see it doing that in the pics. I'm sure it's working perfectly fine in-game.
 

winjer

Gold Member
Not lower, but the same disk space as the PS4 version while apparently looking better. I just don't see it doing that in the pics. I'm sure it's working perfectly fine in-game.

I meant for the PS5. They could have made higher-resolution textures for the PS5 without this compression system, using more VRAM and disk space.
This way, not only did they save time and money by using the original PS4 textures, but also VRAM and space. It's a win all around.
 

sankt-Antonio

:^)--?-<
I meant for the PS5. They could have made higher-resolution textures for the PS5 without this compression system, using more VRAM and disk space.
This way, not only did they save time and money by using the original PS4 textures, but also VRAM and space. It's a win all around.
At the cost of CPU performance.
 

CGNoire

Member
So Nvidia's Revolutionary Neural Textures presented for the first time two weeks ago in tech demos that were going to change the world... had actually been preceded by a software implementation from Santa Monica and put into a shipping title half a year before, running on RDNA2 hardware without the need for dedicated tensor units.

Oops.


About this thread:

NLE5dwi.png


Z3qMHL2.png



Some users: But I cAn'T sEe aNy dIfFeReNcE...

Yes, that was the point.
The "no upscale" pictures take a lot more disk space, over 3x more, and with this you can't notice the difference.
No the "no upscale" is the lower res texture before any processing which means for this whole process to be even worth it you should see a difference.
The fact that you cant means it wasnt worth the time and makes no sense in showcasing like this since they could have just left the "lower res" textures as is and the disk space would never get gobbled up to begin with and you would end up with the same look.
 

ToTTenTranz

Banned
And? It's still using the CPU for a task not needed if the textures had been higher-res from the start. It's a trade-off.
It's obviously using the GPU. They say so right at the start of the presentation and later on proceed to show a bunch of methods they used to fit the operations within the GPU's L1 cache.
It makes little sense to run NN inference for textures on the CPU. The textures go to the GPU's cache and the upscaling is done locally, without traveling to the RAM.
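A rough CPU-side analogy of that cache-tiling idea (illustrative only; the real implementation is a compute shader keeping each tile resident in the GPU's L1):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr int kTile = 32;  // assumed tile size, picked to stay cache-resident

// Stand-in for running every network layer on one tile. All intermediate
// values live inside the tile, so nothing round-trips through RAM/VRAM.
static void RunNetworkOnTile(std::vector<uint8_t>& texels, int tx, int ty,
                             int width)
{
    for (int y = ty; y < ty + kTile; ++y)
        for (int x = tx; x < tx + kTile; ++x)
            texels[static_cast<std::size_t>(y) * width + x] += 0;  // placeholder math
}

void UpsampleTiled(std::vector<uint8_t>& texels, int width, int height)
{
    // Each tile is loaded once and fully processed while it is hot.
    for (int ty = 0; ty + kTile <= height; ty += kTile)
        for (int tx = 0; tx + kTile <= width; tx += kTile)
            RunNetworkOnTile(texels, tx, ty, width);
}
```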


vEHo6pA.png




No the "no upscale" is the lower res texture before any processing which means for this whole process to be even worth it you should see a difference.
The fact that you cant means it wasnt worth the time and makes no sense in showcasing like this since they could have just left the "lower res" textures as is and the disk space would never get gobbled up to begin with and you would end up with the same look.

The images in the first post compare full-size textures with small size + neural upscaling, which is what the author shows at the end of the presentation. The point is exactly to show how little difference there is while saving a lot of data across the whole pipeline.

For those still insisting this is "not worth the time" because they couldn't grasp the goals of image compression: the author does show a couple of examples comparing bilinear upscaling and NN upscaling from the same small texture, on page 25:

ZC5jjx2.png



Alas, this is not the point of the work. The goal in texture compression is always to preserve image quality while reducing file size, hence the comparisons showing similar quality.
 

zeroluck

Member
So Nvidia's Revolutionary Neural Textures presented for the first time two weeks ago in tech demos that were going to change the world... had actually been preceded by a software implementation from Santa Monica and put into a shipping title half a year before, running on RDNA2 hardware without the need for dedicated tensor units.

Oops.


About this thread:

NLE5dwi.png


Z3qMHL2.png



Some users: But I cAn'T sEe aNy dIfFeReNcE...

Yes, that was the point.
The "no upscale" pictures take a lot more disk space, over 3x more, and with this you can't notice the difference.
Apples to oranges: one is upsampling to a higher res and then uploading to the GPU (more VRAM), and the other is storing the NN-compressed textures on the GPU, bypassing BC compression entirely (less VRAM).
 

SlimySnake

Flashless at the Golden Globes
About this thread:

NLE5dwi.png


Z3qMHL2.png



Some users: But I cAn'T sEe aNy dIfFeReNcE...

Yes, that was the point.
The "no upscale" pictures take a lot more disk space, over 3x more, and with this you can't notice the difference.
Isn't the point to use better-quality textures without them costing disk space?

If we aren't seeing better textures, then what's the point? Just use the same textures; they would've taken the same amount of space.
 

Tripolygon

Banned
This thread serves as a reminder that about 90% of people who post in this forum know fuck all about the process of making video games even though they spend most of their life talking about it or playing it.

Oh, but "I can't see any difference." No shit: you are looking at a compressed screenshot measuring ~960x530, taken with the Windows snipping tool from a PDF opened in a Google Chrome window at less than 1080p, then uploaded to imgur, which compresses it further, and viewed no doubt on your 6.5-inch phone screen. Aligned side by side, of course you won't see much difference, but superimposed, like Killer8 did, you will see one. It might not seem like a lot, but in terms of storage space saved, it is a lot. Also, the normal maps are what is mostly being compressed and upsampled; normals define how light bounces off a surface, creating detail where there is none in the underlying geometry.
 

ToTTenTranz

Banned
Mixed up CPU/GPU. It's a trade-off with GPU computation.
So it's not just a positive on all fronts.
No, according to the presentation there are no additional CPU cycles involved in this approach.
A smaller texture gets sent to the VRAM and the GPU runs portions of the texture through a NN for upscaling.

IIRC, the Spider-Man: Miles Morales muscle-deformation NN does run on the CPU; it's a geometry transformation.
This one does not.



Apples to oranges: one is upsampling to a higher res and then uploading to the GPU (more VRAM), and the other is storing the NN-compressed textures on the GPU, bypassing BC compression entirely (less VRAM).

Well it's apples to apples in the sense that there's no perceivable visual difference between both solutions.


Isn't the point to use better-quality textures without them costing disk space?

If we aren't seeing better textures, then what's the point? Just use the same textures; they would've taken the same amount of space.

The advantages in the smaller size of the textures are actually multifold:

1 - Less storage space. Installation sizes get smaller. For example, GoW Ragnarok's PS4 version occupies 107GB versus 86GB for the PS5 version, despite the latter showing better texture detail.

2 - Less VRAM occupation per texture. Meaning you can fill the RAM with other stuff, like e.g. BVH trees for raytracing, or simply more cached assets to prevent stuttering.

3 - Lower bandwidth requirements for the same texture quality. This also means less data exchanged between the GPU caches and RAM/VRAM, leading to lower power consumption. And in cases where the GPU clock is limited by power consumption (like pretty much all modern GPUs and iGPUs), this may lead to higher core clocks.


The only "downside" is the fact that there's higher ALU utilization. However, it's been hard for game developers to keep the consoles' compute utilization at acceptably high levels. even on the PS5.
I imagine that in systems where compute throughput far exceeds what the memory bandwidth can feed (too much compute power, too little bandwidth), like e.g. AMD's 7840U / Z1 Extreme in the ROG Ally, techniques like this should be a godsend.
 
Last edited:

hlm666

Member
So Nvidia's Revolutionary Neural Textures presented for the first time two weeks ago in tech demos that were going to change the world... had actually been preceded by a software implementation from Santa Monica and put into a shipping title half a year before, running on RDNA2 hardware without the need for dedicated tensor units.
Judging by the video in your link, Nvidia is upscaling from lower-res assets (so smaller file sizes) and doing it in 1.x ms vs 9.x ms, with maybe better end results texture-detail-wise.
 

deriks

4-Time GIF/Meme God
b54bb0560129c10bff18561bda99980ea5b1f66b.gif



I legit feel like I'm being trolled.
I was going to use this meme

@thread
I know this has a lot of computer shit in it, but man, only in the water reflections did I see some difference, and like those Digital Foundry analyses, I bet it was zoomed, so I don't care that much. But if it helps performance, I'm all for it.
 

zeroluck

Member
No, according to the presentation there are no additional CPU cycles involved in this approach.
A smaller texture gets sent to the VRAM and the GPU runs portions of the texture through a NN for upscaling.

IIRC, the Spider-Man: Miles Morales muscle-deformation NN does run on the CPU; it's a geometry transformation.
This one does not.





Well it's apples to apples in the sense that there's no perceivable visual difference between both solutions.




The advantages in the smaller size of the textures are actually multifold:

1 - Less storage space. Installation sizes get smaller. For example, GoW Ragnarok's PS4 version occupies 107GB versus 86GB for the PS5 version, despite the latter showing better texture detail.

2 - Less VRAM occupation per texture. Meaning you can fill the RAM with other stuff, like e.g. BVH trees for raytracing, or simply more cached assets to prevent stuttering.

3 - Lower bandwidth requirements for the same texture quality. This also means less data exchanged between the GPU caches and RAM/VRAM, leading to lower power consumption. And in cases where the GPU clock is limited by power consumption (like pretty much all modern GPUs and iGPUs), this may lead to higher core clocks.


The only "downside" is the fact that there's higher ALU utilization. However, it's been hard for game developers to keep the consoles' compute utilization at acceptably high levels. even on the PS5.
I imagine that in systems where compute throughput far exceeds what the memory bandwidth can feed (too much compute power, too little bandwidth), like e.g. AMD's 7840U / Z1 Extreme in the ROG Ally, techniques like this should be a godsend.
The Nvidia paper has NN compression looking better than BC compression by 16x (540p to 4K) at the same memory footprint; this has it looking worse than BC compression at the same memory footprint. Its purpose is to save disk space for a slight quality loss.
 
Last edited: