
Cerny confirms PSSR2 and FSR4.1 use same neural network

Even Cerny games on PC.

 
Old news? We already knew that both upscalers use the same neural network; PSSR2 and FSR4.1 were developed in collaboration between AMD and Sony. That doesn't mean PSSR2 = FSR4.1. They said at the time that PSSR2 was going to be a lighter version of FSR4 optimized for the PS5 Pro's AI cores.
 
He said so like a year ago, it's not exactly news...


But people here said the PS5 Pro couldn't run the same tech as FSR4....

LOL

It's basically FSR4.1 INT8. And that was denied by many people here...

The Pro always had the power to run it, but that power was being spent on the inefficient PSSR1.

Old news? We already knew that both upscalers use the same neural network; PSSR2 and FSR4.1 were developed in collaboration between AMD and Sony. That doesn't mean PSSR2 = FSR4.1. They said at the time that PSSR2 was going to be a lighter version of FSR4 optimized for the PS5 Pro's AI cores.

Amethyst created the core Super Resolution tech. FSR4.1 is the FP8 version of it and PSSR2 is the INT8 version of it.

Sooo what about FSR 5? AKA FSR Diamond that's going to be on Helix? Is that the same as FSR 4.1? Is PSSR 2 gonna lag behind, or use upgraded FSR 5 and call it PSSR 3?

They will jump to the FP8 version of the tech for sure for PS6. The general super resolution tech will reach a 2.0 version by then, and MS, Sony and AMD will use their own variants (with different names).
 
Amethyst created the core Super Resolution tech. FSR4.1 is the FP8 version of it and PSSR2 is the INT8 version of it.

And this means that AMD has an INT8 version of FSR4.1 ready to go, but refuses to release it for RDNA3, which has the hardware to accelerate ML in INT8 through its WMMA units. Twats.
 
Sooo what about FSR 5? AKA FSR Diamond that's going to be on Helix? Is that the same as FSR 4.1? Is PSSR 2 gonna lag behind, or use upgraded FSR 5 and call it PSSR 3?
FSR 5/Diamond are transformer models for upscaling/framegen/denoising built for RDNA5. Will be pretty much 100% identical between PS6/Xbox Helix/RDNA5 dGPUs regardless of whatever branding each one uses.
 
And this means that AMD has an INT8 version of FSR4.1 ready to go, but refuses to release it for RDNA3, which has the hardware to accelerate ML in INT8 through its WMMA units. Twats.

Maybe Sony made a deal to have the only INT8 version available, who knows.
 
And this means that AMD has an INT8 version of FSR4.1 ready to go, but refuses to release it for RDNA3, which has the hardware to accelerate ML in INT8 through its WMMA units. Twats.

PS5 Pro has 2.5x the INT8 TOPS of the 7900XTX, which was the flagship RDNA3 card.

How people can expect the same results as a PS5 Pro still baffles me....

If you drop down to the 7800XT, the difference becomes 4x.

They don't release it for the same reason PSSR is not available on the base PS5.

It wouldn't work properly.
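For what it's worth, the multiples above can be sanity-checked against the TOPS figures quoted later in the thread (PS5 Pro ~300, 7900 XTX ~123); the 7800 XT number below is an assumption for illustration, not an official spec.

```python
# Sanity check of the "2.5x" and "4x" multiples quoted above.
# 300 and 123 TOPS are figures cited elsewhere in this thread;
# ~75 INT8 TOPS for the 7800 XT is an illustrative assumption.

ps5_pro_tops = 300.0
xtx_tops = 123.0
assumed_7800xt_tops = 75.0  # assumption, not a spec

print(f"vs 7900 XTX: {ps5_pro_tops / xtx_tops:.1f}x")             # ≈ 2.4x
print(f"vs 7800 XT:  {ps5_pro_tops / assumed_7800xt_tops:.1f}x")  # = 4.0x
```

So the "2.5x" figure is roughly consistent with the numbers given further down, under these assumptions.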
 
PS5 Pro has 2.5x the INT8 TOPS of the 7900 XTX, which was the flagship RDNA3 card.

How people can expect the same results as a PS5 Pro still baffles me....

If you drop down to the 7800XT, the difference becomes 4x.

They don't release it for the same reason PSSR is not available on the base PS5.

It wouldn't work properly.

The upscaling never uses all that compute at the same time.
The simple proof is that RDNA3 can run the FSR4.0.2 INT8 version well enough, even using the DP4A path and with the overhead of OptiScaler.
If AMD released it officially using the WMMA path, it would be significantly faster.
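The DP4A path mentioned here can be illustrated in a few lines. The `dp4a` function below is my own software emulation of what the hardware instruction does in a single cycle-ish operation: a dot product of four signed 8-bit values against four others, accumulated into a 32-bit integer.

```python
import numpy as np

# Software emulation of a DP4a-style instruction:
#   acc += a0*b0 + a1*b1 + a2*b2 + a3*b3
# with a/b as int8 lanes and acc as int32. This is why int8
# inference maps well onto GPUs that only have DP4a, no WMMA.

def dp4a(a4: np.ndarray, b4: np.ndarray, acc: int) -> int:
    """Emulate one DP4a issue: four int8 pairs, int32 accumulator."""
    assert a4.dtype == np.int8 and b4.dtype == np.int8 and len(a4) == 4
    # Widen to int32 before multiplying so nothing overflows.
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

# An int8 dot product of length N becomes N/4 DP4a issues.
a = np.array([10, -20, 30, -40, 5, 6, 7, 8], dtype=np.int8)
b = np.array([1, 2, 3, 4, -1, -2, -3, -4], dtype=np.int8)

acc = 0
for i in range(0, len(a), 4):
    acc = dp4a(a[i:i+4], b[i:i+4], acc)

# Matches the reference int32 dot product.
assert acc == int(np.dot(a.astype(np.int32), b.astype(np.int32)))
print(acc)  # → -170
```

WMMA units do the same 8-bit multiply-accumulate math, just on whole matrix tiles per instruction instead of four lanes at a time, which is where the speedup would come from.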
 
The upscaling never uses all that compute at the same time.
The simple proof is that RDNA3 can run the FSR4.0.2 INT8 version well enough, even using the DP4A path and with the overhead of OptiScaler.
If AMD released it officially using the WMMA path, it would be significantly faster.

Doesn't change the fact that RDNA3 was never meant to run ML upscalers

And people that bought those cards always said they only cared about raster performance, not "fake pixels"....

Now the very same people complain because they don't get a feature they said they never cared about

See the crazy post above this one
 
Doesn't change the fact that RDNA3 was never meant to run ML upscalers

And people that bought those cards always said they only cared about raster performance, not "fake pixels"....

Now the very same people complain because they don't get a feature they said they never cared about

See the crazy post above this one

Nonsense. RDNA3 has the hardware to run AI upscalers. It can even run ML programs with ROCm.
AMD could easily bring FSR4 INT8 to RDNA3.
 
PS5 Pro has 2.5x the INT8 TOPS of the 7900XTX, which was the flagship RDNA3 card.

How people can expect the same results as a PS5 Pro still baffles me....

If you drop down to the 7800XT, the difference becomes 4x.

They don't release it for the same reason PSSR is not available on the base PS5.

It wouldn't work properly.
Complete rubbish; the SMS Ragnarok ML texture-compression technical paper and the FSR4 leak both evidence otherwise.

You do know it is called FSR4.1, yes? The x.1 denotes a minor upgrade from 4.0, and the leak shows that FSR4 INT8 runs on less than 600 GOPS - yes, that's correct, not TOPS (trillions of INT8 ops per second) but giga-ops (billions of INT8 ops per second) - on the weakling Steam Deck.

For whatever reason, the PS5 - with its custom INT8 V_DOT4_I32_I8 DP4a-type instruction - is being denied FSR4.x support, along with lots of equally capable RDNA GPUs.
 
Complete rubbish; the SMS Ragnarok ML texture-compression technical paper and the FSR4 leak both evidence otherwise.

You do know it is called FSR4.1, yes? The x.1 denotes a minor upgrade from 4.0, and the leak shows that FSR4 INT8 runs on less than 600 GOPS - yes, that's correct, not TOPS (trillions of INT8 ops per second) but giga-ops (billions of INT8 ops per second) - on the weakling Steam Deck.

For whatever reason, the PS5 - with its custom INT8 V_DOT4_I32_I8 DP4a-type instruction - is being denied FSR4.x support, along with lots of equally capable RDNA GPUs.
I hope you are not calculating the cost based on the Steam Deck. The Steam Deck is upscaling from 480p to 720p.
 
Non-believers BTFO, especially those of you who said PSSR2 and FSR4.1 have nothing in common.


Just tried FSR Upscaling 4.1 in a few PC games. It's based on the same neural network as the upgraded PSSR we released for PS5 Pro… and it looks stunning!

Just tried? Sorry what? They weren't doing comparisons beforehand?
 
I hope you are not calculating the cost based on the Steam Deck. The Steam Deck is upscaling from 480p to 720p.
No, I worked it back with a massively overestimated calculation, using the info that has been put in technical articles from analysing the model and the leaked code. Even if the inference cost were twice the Steam Deck's, it still wouldn't be an issue for the PS5.
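The back-of-the-envelope argument being made here can be sketched as follows. The ~600 GOPS figure comes from the leak claim earlier in the thread; the PS5 INT8 throughput below is an illustrative assumption, not an official spec.

```python
# Back-of-the-envelope version of the cost argument above.
# 600 GOPS is the claimed upscaler cost from the FSR4 leak
# discussion in this thread; the PS5 throughput is an
# illustrative assumption only.

leak_budget_gops = 600.0              # claimed cost on Steam Deck
doubled_gops = 2 * leak_budget_gops   # the "even twice the cost" case

assumed_ps5_int8_tops = 20.0          # assumption for illustration
ps5_gops = assumed_ps5_int8_tops * 1000  # 1 TOPS = 1000 GOPS

fraction = doubled_gops / ps5_gops
print(f"doubled cost is ~{fraction:.0%} of assumed PS5 INT8 budget")
# → doubled cost is ~6% of assumed PS5 INT8 budget
```

Under these assumptions, even the doubled cost is a small single-digit slice of the GPU's 8-bit throughput, which is the poster's point.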
 
Yeah, ML frame gen and ray regeneration will be standard stuff next gen.

I hope they get started earlier, which also makes sense for Sony because they don't have DX12 to leverage like MS does for next gen. If Intel can get MFG to work on the oldest Arc GPUs, there is no reason why Sony can't. Hopefully one of the first-party studios can surprise us by jumping into MFG. Honestly, 40fps 3X'd into 120fps is definitely not bad in terms of input lag.
 
It's not as simple as saying RDNA3 has INT8 (the RDNA3 WMMA units) so it can run FSR4.1. Neither INT8 nor FP8 is inherently better than the other, by the way; it's about how you encode the matrix multiplications. FP8 has lower precision, but if you ran an INT8 workload on silicon built for FP8 it would be slower, and likewise an FP8 workload on silicon built for INT8 would be slower. The end result is the same: matrix multiplications at 8-bit precision, just encoded differently.

PS5 Pro was built to provide 300 TOPS of ML compute, which is why it can run the neural network required for FSR4.1/PSSR2. The 7900 XTX and 7900 XT have 123 and 103 TOPS of ML compute, so they just don't have the compute to complete the maths in time.

Cerny went INT8 for the PS5 Pro likely because integer accelerators are simpler to implement, they're smaller on the die, use less power, and when you're building a console with a fixed thermal and power budget, that matters a lot. You get more TOPS per watt with INT8 hardware.
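The "same maths at 8-bit precision, just encoded differently" point can be illustrated with a toy quantized matmul. This is a generic symmetric-quantization sketch, not how FSR4.1 or PSSR2 actually quantize their networks; all names here are mine.

```python
import numpy as np

# Toy illustration: the same float matmul done via symmetric INT8
# quantization. Integer hardware (DP4a/WMMA-style) accelerates the
# int32-accumulated middle step; one float rescale recovers the result.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8)).astype(np.float32)
B = rng.standard_normal((8, 4)).astype(np.float32)

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization to int8 with a float scale."""
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

qA, sA = quantize_int8(A)
qB, sB = quantize_int8(B)

# Integer matmul, accumulated in int32 so nothing overflows.
acc = qA.astype(np.int32) @ qB.astype(np.int32)

# One multiply by the combined scales brings it back to float.
approx = acc.astype(np.float32) * (sA * sB)

exact = A @ B
max_err = np.abs(approx - exact).max()
print(f"max abs error vs float matmul: {max_err:.4f}")
```

An FP8 pipeline would encode the same 8-bit operands as tiny floats instead of scaled integers; the accumulate-then-rescale shape of the computation is the same, which is why silicon tuned for one format handles the other poorly.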
 
Training data and the neural network are shared, but the algorithms in PSSR2 and FSR4.1 are not the same: one is based on INT8 plus the PS5 Pro's customized hardware matrix-computation accelerators, the other on FP8 plus the hardware ML accelerators in AMD RDNA 4 GPUs.

If AMD put some R&D into old RDNA GPUs, they could provide an INT8 version of FSR4 for older AMD GPUs, but they don't want to. By DF's testing, the leaked INT8 version of FSR4 works, but it's more demanding than FSR 3 on old AMD GPUs and may not be worth using in some games. Also, next-gen AMD will have exclusive frame-gen, neural rendering and dense geometry rendering (RDNA 4 does not have the built-in hardware accelerators for these features; see the three focuses of Project Amethyst: neural arrays, better ray-tracing accelerators, universal compression). AMD's problem is a lack of support for legacy GPUs, because they didn't add any hardware ML accelerators before the RDNA 4 cards, focusing instead on software-based open-source projects like FSR 1-3.

IMO, Intel did right by adding ML accelerators (XMX, now the Xe3 version) once they decided to make dGPUs, and it's now integrated into their iGPUs, making Intel the only vendor whose iGPUs support hardware-based ML upscaling and frame-gen right now. Intel also provides a DP4A version (cross-card); the XMX version is Intel-GPU exclusive. If AMD had added ML accelerators after Nvidia released DLSS 1, they would be in a better situation than they are now.
 