Ubisoft on PSSR: "We're now confident that the image quality with PSSR will always be better than with TAA".

SKYF@ll

Member
Here's my DLSS quality (transformer J preset) screenshot for comparison


Jedi-Survivor-DLSSQ.jpg



PS5Pro


W2AONJ6R_o.jpg


BTW, Star Wars Jedi: Survivor is unplayable on PC with RT. The game stutters like crazy. Is RT mode playable on the PS5 Pro without stutters?
I don't play at 30fps because it makes me dizzy, but I took a screenshot in PS5 Pro Quality Mode to compare the image quality.
I zoomed each one in at 300% (second image). I think it's easier to compare the details this way than on YouTube.

PS5 Pro Quality Mode (DRS 1296-1728p) *PSSR
R9sUao6V_o.jpg


Comparison image at 300% zoom
0ZN2Cg7B_o.png
 
I don't like TV, ReShade, or NVIDIA overlay sharpening. It's not something I can control (it never feels right to me).

Whenever I actively apply sharpening through ReShade (LumaSharpen, FidelityFX CAS, and more), the NVIDIA app (Sharpen, Sharpen+, Details), or the NVIDIA Control Panel (NVIDIA Image Scaling), the image looks wrong to me. When I tried sharpening on a couple of different TVs, I always hated how it looked, in games, movies and regular content alike. I didn't try it on your particular TV model and I likely never will, so I'll never know, and I don't really care. I just don't like external sharpening filters. That much is clear.

The reason is that it's external: I see what the image looks like before that sharpening is applied, so when I apply it, I immediately feel like something is wrong. If a game ships with sharpening enabled by default, I either don't notice it or just accept it and move on (Black Myth: Wukong).

Believe it or not, I had to stop playing The Last of Us Part II Remastered on PS5 because it has a forced sharpening filter. I've played that game at blurry 1080p for at least 150 hours. I'm so used to its natural, non-sharpened look that I just couldn't get used to how the PS5 remaster looks. Now I look forward to its PC release, where hopefully I can disable the sharpening. By the way, The Last of Us Part II's base PS4 Pro version does not have the sharpening filter when run on PS5, so I played that and moved on.

So being "unaware" is okay. It doesn't mean I'm fine with it, and it doesn't mean I can enable an external sharpener and say "hey, this looks better and great". Over the years I've tried countless sharpening filters to fix TAA at 1080p. Nothing worked for the blurring in motion. I eventually gave up, up until DLSS 4.

Anyway, your Uncharted 4 comparisons are made with the character just standing still, which gives TAA time to reconstruct a high-quality, non-blurry image. TAA in RDR 2 looks reasonably sharp and clear while standing still, but becomes blurry in motion.


This is exactly what happened with Uncharted 4 and TLOU 2 on base PS4 when I played them. DLSS 4, however, is sharper and clearer in MOTION. I can prove it to you if you want (or do it yourself, if you think you can pull it off, and of course do it at 1080p).

I had to be elaborate because it seems like there's a miscommunication, and I feel like you really don't understand what I'm trying to say, which isn't entirely your fault; sharpening is a complex topic for me, as I've explained above. For me to be fine with a sharpened look, I should not have seen it non-sharpened first, if that makes sense.

(Half-Life 2 RTX is a specific situation where the non-sharpened alternative is a noisy mess that looks horrible.)
I don't recommend the NVIDIA app; it has ugly sharpening masks. I'm using ReShade sharpening masks because I can get a sharp image without it looking oversharpened. ReShade made a huge difference for me in Black Myth: Wukong; the image was too blurry without it even though I was using DLSS 3. DLSS 4 improved sharpness in this game, but I still need a little bit of a sharpening mask to make the image good enough for me.
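For anyone curious what these filters are actually doing: below is a minimal numpy sketch of an unsharp-mask style luma sharpen, the same basic idea behind LumaSharpen-type shaders. It is not ReShade's or CAS's actual code, and the strength/radius/clamp values are purely illustrative, not anyone's recommended settings.

```python
# Illustrative unsharp-mask luma sharpen - NOT ReShade/CAS source code.
import numpy as np
from scipy.ndimage import gaussian_filter

def luma_sharpen(rgb, strength=0.4, radius=1.0, clamp=0.03):
    """rgb: float image in [0, 1], shape (H, W, 3). Parameter values are illustrative."""
    # Sharpen only the Rec.709 luma to avoid colour fringing.
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    blurred = gaussian_filter(luma, sigma=radius)
    # The "mask" is the high-frequency detail; clamping it limits halos/oversharpening.
    detail = np.clip(luma - blurred, -clamp, clamp)
    return np.clip(rgb + (strength * detail)[..., None], 0.0, 1.0)
```

The clamp on the detail term is what separates a subtle sharpen from the haloed, oversharpened look; an external filter applies it blindly on top of whatever the game already outputs, which is the "I see it before and after" problem described above.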

RDR 2 has very ugly TAA (the worst I have ever seen), and even DLSS 3 Performance destroyed it when I did my own comparisons. The TAA image was extremely blurry during motion. TAA in Uncharted 4 does not look like that: it looks reasonably sharp both in static shots and in motion. Even the DLSS Transformer model doesn't make a big difference in this game for me, that's how good the TAA image was. I would never say Uncharted 4 looked incredibly blurry with TAA, especially not on a Sony Bravia TV. Image quality on PS4 Pro was very good on a Full HD TV. The image was clean and sharp.
 
Thank you for the screenshot from the PC version.
I played for about 5 hours on PS5 Pro (Performance Mode) and it seemed to run at a stable 60fps.
Apart from a bit of shimmering on the bright outdoor vegetation in Koboh, image quality was also satisfactory.
The PS5 Pro seems to be the best place to play this game. Even without RT, Jedi: Survivor sometimes stutters on my PC. PC GPUs are more powerful than the PS5 Pro, but if developers don't optimise the game, there's little you can do about it. It makes me wonder why the developers even decided to include RT in the PC version if the game is unplayable with ray tracing. It's almost as if they don't even test the features they implement on PC.
 

Javi97

Member


If the data here is correct, it's sad to see how little headroom is left for implementing PSSR. The other graphics settings won't be able to improve; as long as PSSR works as it should, image quality will still be superior, but there will be no RT reflections in Performance mode.
 

PaintTinJr

Member


If the data here is correct, it's sad to see how little headroom is left for implementing PSSR. The other graphics settings won't be able to improve; as long as PSSR works as it should, image quality will still be superior, but there will be no RT reflections in Performance mode.

The video isn't showing that IMO.

What I think we are looking at is the Pro using PS5 compatibility mode to run the game, so in all likelihood the marginal differences come from 36 CUs at a higher memory clock and higher north bridge bandwidth, rather than from the extra 24 CUs with 3 ops per clock (for 8-, 4- and 2-bit TOPs).

Presumably the Pro modes for PSSR will use all of the hardware, with plenty of excess for higher quality at performance.
 

Bojji

Member
The video isn't showing that IMO.

What I think we are looking at is the Pro using PS5 compatibility mode to run the game, so in all likelihood the marginal differences come from 36 CUs at a higher memory clock and higher north bridge bandwidth, rather than from the extra 24 CUs with 3 ops per clock (for 8-, 4- and 2-bit TOPs).

Presumably the Pro modes for PSSR will use all of the hardware, with plenty of excess for higher quality at performance.

What?

The Pro version is running RTGI that isn't present in the performance mode on the base PS5 - how is it not using the full hardware?
 

PaintTinJr

Member
What?

The Pro version is running RTGI that isn't present in the performance mode on the base PS5 - how is it not using the full hardware?
Really? Wow, that's very underwhelming. The YouTube clip on my PC monitor isn't really showing it, other than making the image seem a bit softer/blurrier. Maybe the footage chosen was poor, but going by that, it wasn't worth the effort in this game. They should have improved the animation, geometry and foliage with more geometric complexity and better material models. It's a complete nothing of a gain in a game that still looks like it has its technological roots in the 360/PS3/X1/PS4 cross-gen era.
 

Bojji

Member
Really? Wow, that's very underwhelming. The YouTube clip on my PC monitor isn't really showing it, other than making the image seem a bit softer/blurrier. Maybe the footage chosen was poor, but going by that, it wasn't worth the effort in this game. They should have improved the animation, geometry and foliage with more geometric complexity and better material models. It's a complete nothing of a gain in a game that still looks like it has its technological roots in the 360/PS3/X1/PS4 cross-gen era.

The game is quite advanced; it may not look like it on YouTube, but it's a really good-looking title.

The Pro was praised for running RTGI in its 60fps mode; in some scenes it looks like a generational difference vs. the base PS5.
 

PaintTinJr

Member
The game is quite advanced; it may not look like it on YouTube, but it's a really good-looking title.

The Pro was praised for running RTGI in its 60fps mode; in some scenes it looks like a generational difference vs. the base PS5.
The game looks crap IMO. The animation in that diving roll is three gens behind Kojima's work - it isn't even at MGS4-on-PS3 level, and there's less geometry in the model and less complex texturing and materials at work. If DF think this is a good-looking game, might I suggest they go play the base version of Death Stranding on an OG PS4 to refresh their vision.

I'm sure the reason I didn't even notice the RTGI is that it comes with so many caveats - geometry and surface materials that don't interact with it, like alpha-stencilled foliage, probably - that the actual coverage of the RT is tiny in general play, so 99% of the time it just looks like higher-quality raster GI, if that.
 

PaintTinJr

Member
It’s in the video.

oOG09uw.png


They aren't guessing. Tom says "it seems" because the cost varies between 2ms and 2.21ms. This checks out with the leaked documents, which say PSSR costs around 2ms as well.
As this topic still seemed to have two camps on the runtime cost of PSSR, I thought I'd add this definitive quote from Cerny's PS5 Pro technical seminar, in which he mentions twice in the talk that the algorithm takes roughly 1ms: 10k ops per pixel for an 8MP (4K/UHD) image, using 8-bit TOPs.
uGYbg1N.png

I knew there was a reason why I was recalling it as 1ms, and it seems it's because the info came from the lead system architect.
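To sanity-check the slide's figure, here's a quick back-of-envelope using the numbers quoted above (10k ops per pixel, ~8MP output, 8-bit ops). The ~300 TOPS peak and the utilisation figure are my own assumptions for illustration, not from the talk.

```python
# Back-of-envelope on the ~1 ms figure; peak TOPS and utilisation are assumed.
OPS_PER_PIXEL = 10_000            # from the slide: ~10k ops per pixel
PIXELS_4K = 3840 * 2160           # ~8.3 MP output
PEAK_OPS_PER_S = 300e12           # assumed ~300 8-bit TOPS peak

ops_per_frame = OPS_PER_PIXEL * PIXELS_4K           # ~8.3e10 ops
ideal_ms = ops_per_frame / PEAK_OPS_PER_S * 1000    # ~0.28 ms at 100% utilisation

# No real kernel hits peak; ~30% sustained utilisation lands right around 1 ms.
print(f"{ideal_ms:.2f} ms ideal, {ideal_ms / 0.30:.2f} ms at 30% utilisation")
```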
 

SKYF@ll

Member
As this topic still seemed to have two camps on the runtime cost of PSSR, I thought I'd add this definitive quote from Cerny's PS5 Pro technical seminar, in which he mentions twice in the talk that the algorithm takes roughly 1ms: 10k ops per pixel for an 8MP (4K/UHD) image, using 8-bit TOPs.
uGYbg1N.png

I knew there was a reason why I was recalling it as 1ms, and it seems it's because the info came from the lead system architect.
The ML calculation may only take 1ms.
FSR 3 and CBR also have a total cost of about 2ms, so replacing them with PSSR incurs no additional cost.
TAA is slightly cheaper, so sticking with TAA gives a slightly better frame rate.
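Rough frame-budget arithmetic behind that, using the ballpark costs from this thread (~2ms PSSR total, ~2ms FSR3/CBR, ~1ms TAA) purely as assumptions, not measurements:

```python
# Effect on fps of swapping one post-process pass for another; pass costs are assumed.
def fps_after_swap(base_fps, old_pass_ms, new_pass_ms):
    frame_ms = 1000.0 / base_fps
    return 1000.0 / (frame_ms - old_pass_ms + new_pass_ms)

print(fps_after_swap(60, old_pass_ms=2.0, new_pass_ms=2.0))  # FSR3/CBR -> PSSR: still 60.0
print(fps_after_swap(60, old_pass_ms=1.0, new_pass_ms=2.0))  # TAA -> PSSR: ~56.6 fps
```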
 

Gaiff

SBI’s Resident Gaslighter
As this topic still seemed to have two camps on the runtime cost of PSSR, I thought I'd add this definitive quote from Cerny's PS5 Pro technical seminar, in which he mentions twice in the talk that the algorithm takes roughly 1ms: 10k ops per pixel for an 8MP (4K/UHD) image, using 8-bit TOPs.
uGYbg1N.png

I knew there was a reason why I was recalling it as 1ms, and it seems it's because the info came from the lead system architect.
Well, yeah, but it will need to apply AA anyway and isn't that ~1ms or so, bringing the total cost of PSSR to around 2ms? It's not like they will ever implement it without an AA pass unless I'm mistaken.

The ML calculation may only take 1ms.
FSR 3 and CBR also have a total cost of about 2ms, so replacing them with PSSR incurs no additional cost.
TAA is slightly cheaper, so sticking with TAA gives a slightly better frame rate.
This is what I saw as well. FSR3, DLSS2/3, and PSSR seem to have a similar cost. DLSS4 is quite a bit more expensive.
 

PaintTinJr

Member
Well, yeah, but it will need to apply AA anyway and isn't that ~1ms or so, bringing the total cost of PSSR to around 2ms? It's not like they will ever implement it without an AA pass unless I'm mistaken.
The sparse rendering is at 1/4 of the native resolution, so the AA pass would be roughly 1/4 of its run-time too. So no: ~1.1ms plus 1/4 of 1ms actually places PSSR with the AA at around 1.4ms at most. The top pictures show what PSSR does with that patent, and the bottom pictures show what the rest do, AFAIK.
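Worked through, assuming an AA pass whose cost is roughly proportional to pixel count, and reusing the ~1.1ms and ~1ms figures above as given (not measured):

```python
# Cost of a pixel-count-proportional pass when run on the 1/4-resolution input.
def pass_cost_at_res(full_res_cost_ms, input_px, output_px):
    return full_res_cost_ms * (input_px / output_px)

aa_quarter_ms = pass_cost_at_res(1.0, input_px=1920 * 1080, output_px=3840 * 2160)
print(aa_quarter_ms)         # 0.25 ms
print(1.1 + aa_quarter_ms)   # ~1.35 ms, i.e. the "1.4 ms at most" above
```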

Given it was a fluff piece from DF, and CoD is owned by Microsoft - and PlayStation's CoD deal has expired, yes? - the real reason CoD won't have a 120fps PSSR option will be because of a parity clause IMO.

XZTvddD.png
 

Fafalada

Fafracer forever
Well, yeah, but it will need to apply AA anyway and isn't that ~1ms or so, bringing the total cost of PSSR to around 2ms? It's not like they will ever implement it without an AA pass unless I'm mistaken.
AA is part of the algorithm - you're supersampling the missing data either way; there's no way to do it with '0' AA, really. But it is a valid question whether there are variable quality settings for any of these - with ML I'm not sure it's as simple as deciding on a number of samples like with analytical methods.

This is what I saw as well. FSR3, DLSS2/3, and PSSR seem to have a similar cost. DLSS4 is quite a bit more expensive.
I honestly don't think we've ever sufficiently proven this either way. Not a single tech channel has performed an in-depth analysis of PC upscalers; instead they rely on black boxes in games (given that most of the upscalers are openly available as libraries, this would NOT be very difficult to conclusively validate), and of course PSSR is only testable as a black box, making it more of an apples-to-potatoes comparison.
I mean yes - we know DLSS4 is not 'cheaper' than prior methods, but an actual concrete, measurable comparison doesn't exist.
And it's a bit sad it doesn't - hell, you probably don't even need a programmer for it, just use Unreal's plugins and test them all individually on a set of end-to-end controlled scenarios where there's no random noise introduced by 99% of the game pipeline. Frankly, I find the lack of scientific method in modern tech analysis to be - disturbing.
 

Gaiff

SBI’s Resident Gaslighter
The sparse rendering is at 1/4 of the native resolution, so the AA pass would be roughly 1/4 of its run-time too. So no: ~1.1ms plus 1/4 of 1ms actually places PSSR with the AA at around 1.4ms at most. The top pictures show what PSSR does with that patent, and the bottom pictures show what the rest do, AFAIK.

Given it was a fluff piece from DF, and CoD is owned by Microsoft - and PlayStation's CoD deal has expired, yes? - the real reason CoD won't have a 120fps PSSR option will be because of a parity clause IMO.
These numbers actually make sense if we compare just the upscaler. Given that the Pro has ML capabilities close to that of a 3080, ~1ms for PSSR alone makes sense. This is from NVIDIA's paper.

1b49d1b87512af62ce44832854a8bd6302bc22b6.png


0.94 ms for DLSS3 on a 3080, so almost the same as PSSR on a Pro for 1080p→4K.

AA is part of the algorithm - you're supersampling the missing data either way; there's no way to do it with '0' AA, really. But it is a valid question whether there are variable quality settings for any of these - with ML I'm not sure it's as simple as deciding on a number of samples like with analytical methods.


I honestly don't think we've ever sufficiently proven this either way. Not a single tech channel has performed an in-depth analysis of PC upscalers; instead they rely on black boxes in games (given that most of the upscalers are openly available as libraries, this would NOT be very difficult to conclusively validate), and of course PSSR is only testable as a black box, making it more of an apples-to-potatoes comparison.
I mean yes - we know DLSS4 is not 'cheaper' than prior methods, but an actual concrete, measurable comparison doesn't exist.
And it's a bit sad it doesn't - hell, you probably don't even need a programmer for it, just use Unreal's plugins and test them all individually on a set of end-to-end controlled scenarios where there's no random noise introduced by 99% of the game pipeline. Frankly, I find the lack of scientific method in modern tech analysis to be - disturbing.
The best I've got is HU doing a piece where they tested the frame rate difference between FSR and DLSS, and most of the time it's negligible.

bEYGUpV.png




They also did a piece on input latency and once again, no real difference.



Otherwise, we only have NVIDIA's numbers.

ZQowr33.png
 

PaintTinJr

Member
These numbers actually make sense if we compare just the upscaler. Given that the Pro has ML capabilities close to that of a 3090, ~1ms for PSSR alone makes sense. This is from NVIDIA's paper.

1b49d1b87512af62ce44832854a8bd6302bc22b6.png


1.02 ms for DLSS2 on a 3090, so almost the same as PSSR on a Pro for 1080p→4K.


The best I've got is HU doing a piece where they tested the frame rate difference between FSR and DLSS, and most of the time it's negligible.

bEYGUpV.png




They also did a piece on input latency and once again, no real difference.



Otherwise, we only have NVIDIA's numbers.

ZQowr33.png

Your comparison with DLSS2 on any PC GPU ends the second you go back and consider that the Pro has 200TB/s of bandwidth for ML/AI.

I had a long chat with Copilot about RTX 50-series and RX 9070 bandwidth and memory topology, and unless the RTX cards use their 128KB L1 caches in a way similar to how the Pro uses its SIMD32 vector registers' bandwidth (via the 44 customisations to the ISA), even the latest PC GPUs can't utilise their TOPs in such lightweight CNNs, and are relying heavily on their ability to brute-force the rasterization into lower millisecond times to free up more headroom for DLSS/FSR4.
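A roofline-style sketch of that "can't utilise their TOPs" argument: sustained throughput is capped by min(peak compute, bandwidth x arithmetic intensity). Every number below is an illustrative assumption (including the ops-per-byte of a "lightweight CNN" layer), not a spec for any particular GPU.

```python
# Roofline-style cap on sustained throughput; all numbers are illustrative assumptions.
def sustainable_tops(peak_tops, bandwidth_tb_s, ops_per_byte):
    # bandwidth in TB/s times ops/byte gives tera-ops/s directly
    return min(peak_tops, bandwidth_tb_s * ops_per_byte)

# A low-arithmetic-intensity CNN fed from ~1 TB/s DRAM is bandwidth-bound...
print(sustainable_tops(peak_tops=300, bandwidth_tb_s=1.0, ops_per_byte=50))    # 50 TOPS
# ...while very fast on-chip storage (register/L1-level bandwidth) lifts the cap.
print(sustainable_tops(peak_tops=300, bandwidth_tb_s=200.0, ops_per_byte=50))  # 300 TOPS
```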
 

Fafalada

Fafracer forever
Otherwise, we only have NVIDIA's numbers.
OK, those actually are useful - I hadn't seen that chart before; it's nice to have it from the horse's mouth. Interesting how scaling goes to hell at 8K - i.e. it's significantly worse than linear in the number of pixels, except on the 4090 - presumably memory bottlenecks come into play.

The in-game tests I'm always a bit more wary of - they are hard to validate (are these averages, what is the worst case, etc.), and indeed async execution and the like may play into things, where the raw cost of the algorithm is absorbed by other (in)efficiencies depending on how the hardware is utilised. Which, sure - end users only care about end results, but it's always a question mark whether that holds only for this particular selection of games and scenarios or holds up at large.

That said - NVIDIA's numbers are pretty interesting and conclusive. The Transformer model is around 100% (sometimes more) more expensive than the CNN, on every GPU they tested. The fact that it's consistent from the lowest to the highest end cards is pretty fascinating too.
This sort of reconciles with what I've seen in AW2, where I started targeting 1440p instead of 4K because with the Transformer model, 4K just makes 60fps unattainable if I want any form of PT enabled.
 