
Will Sony ever bring PSSR to their PC ports so they can optimize and update it over time the way Nvidia does with DLSS, given that both Nvidia and AMD support AI features on their GPUs?

kevboard

Member
if a game already has PSSR, carrying it over to a PC port can make a lot of sense

Sony already usually supports XeSS as well, and XeSS makes more sense to support: it allows better performance on Intel GPUs than any other ML reconstruction method would, since Intel GPUs have dedicated XMX units, which couldn't be utilised with PSSR.

on AMD it wouldn't make a difference since both solutions would need to run on the shaders entirely. but yeah, I don't really see a use case that would make the effort worthwhile
 

PaintTinJr

Member
I suspect the question will be answered if we see PSSR on the OG PS5, say in a game like Returnal, which was already 60fps at launch and could take a hit to native resolution to free up compute for PSSR, along with all the SDK improvements that have freed up resources since it was last patched.

If PSSR comes to the OG PS5, then I'd expect it to come to PC ports of PS5 games too.
 

Gaiff

SBI’s Resident Gaslighter
They aren't low resolution games, they're unclean, high noise games on any platform, at native, at any resolution. Using them as the proof is disingenuous when DLSS produces better, but ultimately still crap, stability results with the same games, where the highest native resolution is the least offensive image quality.
If the results were reversed, you'd be using that as proof that PSSR is better. Don't be a hypocrite.
 

PaintTinJr

Member
If the results were reversed, you'd be using that as proof that PSSR is better. Don't be a hypocrite.
Complete nonsense. With interests in signal processing and graphics programming, I want the solutions devs choose to produce low noise images, like PlayStation first party or Kojima games do, on all platforms and in all games.

This gen's apparent development brain drain, combined with the shift from UE4 to UE5, has produced so many noisy looking games that the whole gen has underwhelmed, in a PC gamer AA kind of way, compared to the first example of UE5 we were shown, which looked amazing.
 
Just use DLSS like everyone else

Nvidia's been working on DLSS for nearly a decade now; even if PSSR were available on PC, you would still prefer DLSS, which has better IQ.
 

yurinka

Member
Will Sony ever bring PSSR to their PC ports so they can optimize and update it over time the way Nvidia does with DLSS, given that both Nvidia and AMD support AI features on their GPUs?

From my point of view, there are many PC users who play games at 1080p, which could help Sony make PSSR scale better from lower resolutions.
As far as I remember, PSSR uses new hardware that is currently only available in the PS5 Pro GPU and that apparently will be added to future AMD PC GPUs. So it would at least be possible on those future AMD GPUs, but I'd say it's not likely, since Sony may want to keep it as a PS5 Pro selling point.

Is it possible to implement PSSR for the base PS5, PS4, Nvidia and older AMD GPUs? I don't know, maybe it's possible. But I think it's even less likely; I'd say there's a 99% chance of it not happening.

And well, PC already has multiple upscalers from each GPU manufacturer that will continue evolving. With those doing something somewhat similar, I'd say they don't need PSSR.
 
Last edited:

ap_puff

Banned
I don't remember where I saw this, but Silent Hill 2 on PC still had the PSSR calls in the files, and they were a 100% exact match to the XeSS calls, which I found curious. Of course it's not the same SDK, so it doesn't mean much; they're just called the same way in the engine.

If Sony wants to give PSSR / FSR / XeSS / DLSS / DLAA options in their games, why the hell not. Don't try to force just PSSR though.
If you can find this I'd love to read about it
 

ArtHands

Thinks buying more servers can fix a bad patch
PC has no obligation to help Sony support PSSR here, especially when it already has several superior choices.
 

Danknugz

Member
given that PCs run nvidia / amd, is this even a valid question? it would have to be done in software. am i missing something here?
 

kevboard

Member
I suspect the question will be answered if we see PSSR on the OG PS5, say in a game like Returnal, which was already 60fps at launch and could take a hit to native resolution to free up compute for PSSR, along with all the SDK improvements that have freed up resources since it was last patched.

If PSSR comes to the OG PS5, then I'd expect it to come to PC ports of PS5 games too.

it won't come to base PS5, simply because it would muddy the PR strategy of the Pro alone.

but besides that, the base PS5 doesn't have the necessary RDNA2 hardware to accelerate ML code. which means it would run extremely slowly and would probably result in severely lower performance even at significantly reduced resolutions.

even on RDNA2 PC cards, when using XeSS for example, the performance you gain is very minor and only really helps if you are off by maybe 5~10fps from your target framerate. and that's with DP4a acceleration, which the base PS5 lacks. so, yeah... I don't believe it's possible, even if there wasn't a PR incentive to not bring it to the base console.
 
Last edited:

ZehDon

Member
If PSSR is their upscaling tech of choice moving forward onto new hardware, like the PS6, then Sony's games will require it. So, they'll be forced to port it to PC hardware eventually.

Will PSSR gain traction in the PC space? Probably not. It'll need ML cores, which means it's competing with DLSS and not FSR. And against the current DLSS, current PSSR is definitely lacking. Sony might improve it, but DLSS is only getting better too.
 

kevboard

Member
If PSSR is their upscaling tech of choice moving forward onto new hardware, like the PS6, then Sony's games will require it. So, they'll be forced to port it to PC hardware eventually.

not really. all these reconstruction methods use the exact same data from the engine.
this is why you can replace FSR2 with DLSS, or the other way around, in games that don't support both, by simply replacing a handful of files or modifying some files slightly.

something easy enough that there have been DLSS mods for games on the literal day they released.
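to illustrate, here's a minimal sketch (hypothetical names, not any real SDK's API) of the per-frame data all of these upscalers consume, which is why a drop-in swap only has to re-route one call:

```python
# Hypothetical sketch of the per-frame inputs shared by FSR2/DLSS/XeSS-style
# temporal upscalers; the names are illustrative, not any real SDK's API.
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class UpscalerFrameInputs:
    color: Any            # low-res rendered frame (post-lighting, pre-UI)
    depth: Any            # low-res depth buffer
    motion_vectors: Any   # per-pixel screen-space motion
    jitter: Tuple[float, float]   # sub-pixel camera jitter for this frame
    render_size: Tuple[int, int]
    output_size: Tuple[int, int]

def upscale(backend: Any, inputs: UpscalerFrameInputs) -> Any:
    # because every backend consumes the same engine data, a drop-in swap
    # (e.g. a day-one DLSS mod replacing FSR2) only re-routes this one call
    return backend.evaluate(inputs)
```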
 

ZehDon

Member
not really. all these reconstruction methods use the exact same data from the engine.
this is why you can replace FSR2 with DLSS, or the other way around, in games that don't support both, by simply replacing a handful of files or modifying some files slightly.

something easy enough that there have been DLSS mods for games on the literal day they released.
How many games have been developed that cannot function without DLSS? As in - do not offer the option for native rendering of any kind?
As I said: if PlayStation moves ahead with PSSR in their hardware, their games will be built with it. They could swap it out for DLSS, but if it's a requirement, then they've locked out AMD owners, which they won't do. Instead, they'll most likely develop PSSR such that it can run on AMD or NVidia GPUs of that generation (PS6+) and include it in their games.
 

PaintTinJr

Member
it won't come to base PS5, simply because it would muddy the PR strategy of the Pro alone.

but besides that, the base PS5 doesn't have the necessary RDNA2 hardware to accelerate ML code. which means it would run extremely slowly and would probably result in severely lower performance even at significantly reduced resolutions.

even on RDNA2 PC cards, when using XeSS for example, the performance you gain is very minor and only really helps if you are off by maybe 5~10fps from your target framerate. and that's with DP4a acceleration, which the base PS5 lacks. so, yeah... I don't believe it's possible, even if there wasn't a PR incentive to not bring it to the base console.
The RDNA info is flat out wrong, and I've lost count of the number of times I've referenced the ML AI solution and optimisation in the Ragnarok paper showing that PSSR can run on the OG PS5. At what native resolution and frame-rate, and to what output resolution, is the only question as regards freeing up enough resources on the OG PS5 to do it, because it is just matrix maths at the end of the day, and any of these solutions could run on any CPU from the last 20 years provided it had enough time and access to enough RAM and storage.
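To illustrate the "just matrix maths" point, a toy example (the shapes are illustrative, not PSSR's actual network) of one upscaler-style layer expressed as a plain matrix multiply that any CPU can run, just slowly:

```python
# Toy illustration of the "it's just matrix maths" point: a single conv-like
# layer of an upscaler written as a matrix multiply, runnable on any CPU.
# Shapes and timings are illustrative only, not PSSR's actual network.
import time
import numpy as np

pixels = 1280 * 720                  # a 720p feature map, flattened
features_in, features_out = 32, 32   # made-up channel counts
x = np.random.rand(pixels, features_in).astype(np.float32)
w = np.random.rand(features_in, features_out).astype(np.float32)

t0 = time.perf_counter()
y = x @ w                            # one layer; a real model chains many of these
print(f"one 720p layer on CPU: {(time.perf_counter() - t0) * 1000:.1f} ms")
```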

As for the PR strategy, it is only important at the beginning, for Pro sales. What will be more important to devs is a unified workflow, so that testing lower-native 60fps PSSR modes on the OG PS5 is the same as on the Pro, just with a few bumped native settings, rather than the current situation of something like 6 different modes across two hardware SKUs to develop and test.
 

Zathalus

Member
But the fur is next-gen cinematic with PSSR compared to basic rasterised fur with DLSS, so it isn't one way.

I suspect PSSR has less to literally 'learn' to train away and fix those issues you mentioned in R&C, compared to the depth of training DLSS needs to catch up on specific fx like fur.
Next-gen cinematic vs basic rasterised? It looks better in stills but in actual motion the differences are minor. Especially from a regular viewing distance. PSSR is indeed better there, but no need to resort to needlessly hyperbolic statements like that.
 

Gaiff

SBI’s Resident Gaslighter
The RDNA info is flat out wrong, and I've lost count of the number of times I've referenced the ML AI solution and optimisation in the Ragnarok paper showing that PSSR can run on the OG PS5. At what native resolution and frame-rate, and to what output resolution, is the only question as regards freeing up enough resources on the OG PS5 to do it, because it is just matrix maths at the end of the day, and any of these solutions could run on any CPU from the last 20 years provided it had enough time and access to enough RAM and storage.

As for the PR strategy, it is only important at the beginning, for Pro sales. What will be more important to devs is a unified workflow, so that testing lower-native 60fps PSSR modes on the OG PS5 is the same as on the Pro, just with a few bumped native settings, rather than the current situation of something like 6 different modes across two hardware SKUs to develop and test.
Highly doubt PSSR is ever seeing the light of day on the OG PS5. It can already cost 2ms to upscale from 1080p to 4K on the Pro with much better ML hardware. How long would it take on the PS5? It likely isn't viable.
 

Zathalus

Member
Next-gen cinematic vs basic rasterised? It looks better in stills but in actual motion the differences are minor. Especially from a regular viewing distance. PSSR is indeed better there, but no need to resort to needlessly hyperbolic statements like that.
Quoting myself here, but I just want to clarify that I’m not shitting on PSSR. In Ratchet and Clank, DLSS and PSSR are extremely close (Spider-Man and The Last of Us as well). All that needs to be done is to eliminate that stability issue PSSR has, and I’d personally like the PSSR image more. It will also hopefully sort out the issues that third party or low resolution games are facing as well.
 

Gaiff

SBI’s Resident Gaslighter
Next-gen cinematic vs basic rasterised? It looks better in stills but in actual motion the differences are minor. Especially from a regular viewing distance. PSSR is indeed better there, but no need to resort to needlessly hyperbolic statements like that.
What next-gen fur is this referring to? I wasn't aware there was a difference in the fur rendering of DLSS vs PSSR.
 

Zathalus

Member
What next-gen fur is this referring to? I wasn't aware there was a difference in the fur rendering of DLSS vs PSSR.
Zoomed-in stills of Ratchet's fur, if I recall correctly. PSSR does indeed look more fur-like, because it is better at handling very fine dithering by the look of it. Zoomed out, it is extremely difficult to tell. Hence my criticism of the stability: impossible to notice in stills, but it becomes apparent when looking at the entire image in motion.
 

Gaiff

SBI’s Resident Gaslighter
Zoomed-in stills of Ratchet's fur, if I recall correctly. PSSR does indeed look more fur-like, because it is better at handling very fine dithering by the look of it. Zoomed out, it is extremely difficult to tell. Hence my criticism of the stability: impossible to notice in stills, but it becomes apparent when looking at the entire image in motion.
Do you have screenshots or timestamps? I don't remember that.
 

Zathalus

Member
Do you have screenshots or timestamps? I don't remember that.
It’s actually in a post you responded to, check the screenshots here:


Notice the tail detail between the two. A comparison between PSSR and native PS5 is here:


I recall more images comparing PSSR to DLSS and PSSR was handling the fur dithering better but I cannot find that one.
 

PaintTinJr

Member
Next-gen cinematic vs basic rasterised? It looks better in stills but in actual motion the differences are minor. Especially from a regular viewing distance. PSSR is indeed better there, but no need to resort to needlessly hyperbolic statements like that.
I'm guessing you've never tried rendering hair/fur in a 3D package, to appreciate how long it takes at that kind of particle count (vs rasterised fur) to produce that difference in stills, and to actually appreciate the difference between what PSSR is training against compared to DLSS, no?

Put it this way: it is beyond a 4090 in real-time at 30fps, so there is nothing hyperbolic about the statement.
 

Zathalus

Member
I'm guessing you've never tried rendering hair/fur in a 3D package, to appreciate how long it takes at that kind of particle count (vs rasterised fur) to produce that difference in stills, and to actually appreciate the difference between what PSSR is training against compared to DLSS, no?

Put it this way: it is beyond a 4090 in real-time at 30fps, so there is nothing hyperbolic about the statement.
It’s handling the edge dithering better via an ML algorithm, not actually rendering each hair follicle. It does this better with some foliage shots in other games as well. It is better than DLSS in that regard (as I said), but comparing footage between the two, it’s not exactly the difference between basic and next-gen.
 

Kikorin

Member
Nvidia's DLSS is already miles ahead, and with Nvidia's resources the gap will only continue to grow. So probably not, because it wouldn't make much sense.
 

PaintTinJr

Member
Highly doubt PSSR is ever seeing the light of day on the OG PS5. It can already cost 2ms to upscale from 1080p to 4K on the Pro with much better ML hardware. How long would it take on the PS5? It likely isn't viable.
2ms to infer the 6M new pixels of an 8M pixel output from a 2M pixel source, on hardware that is at best 4x more powerful than the OG PS5 at ML AI, would mean that worst case 1.5M pixels (6/4) could be inferred on the OG PS5 in a similar 2ms of rendering headroom. So 720p native on the PS5 (0.9M pixels) plus 1.5M inferred pixels would give us 2.4M pixels, which is roughly 1200p, but with higher quality inferred IQ.
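A quick sanity check of that arithmetic (the 4x ML ratio and the 2ms budget are my assumptions from above, not measured figures):

```python
# Re-running the post's arithmetic; the 4x Pro-vs-PS5 ML ratio and the 2ms
# budget are the poster's assumptions, not measured figures.
output_4k   = 3840 * 2160                    # ~8.3M output pixels
native_1080 = 1920 * 1080                    # ~2.1M pixels rendered on the Pro
inferred_on_pro = output_4k - native_1080    # ~6.2M pixels inferred in ~2ms
inferred_on_ps5 = inferred_on_pro / 4        # ~1.55M at a quarter of the ML rate
native_720 = 1280 * 720                      # ~0.92M pixels
total = native_720 + inferred_on_ps5         # ~2.5M pixels
print(f"effective output: {total/1e6:.1f}M pixels (~{int((total/(16/9))**0.5)}p)")
```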
 

PaintTinJr

Member
It’s handling the edge dithering better via an ML algorithm, not actually rendering each hair follicle. It does this better with some foliage shots in other games as well. It is better than DLSS in that regard (as I said), but comparing footage between the two, it’s not exactly the difference between basic and next-gen.
It is more than the edges IMO, and it is inferring toward an offline fur render, rather than the cheap real-time raster that DLSS is seemingly targeting.
 

Zathalus

Member
2ms to infer the 6M new pixels of an 8M pixel output from a 2M pixel source, on hardware that is at best 4x more powerful than the OG PS5 at ML AI, would mean that worst case 1.5M pixels (6/4) could be inferred on the OG PS5 in a similar 2ms of rendering headroom. So 720p native on the PS5 (0.9M pixels) plus 1.5M inferred pixels would give us 2.4M pixels, which is roughly 1200p, but with higher quality inferred IQ.
The PS5 does not support INT8, so it would need to fall back on FP16 capability, which is 20.56 TFLOPS, vs the INT8 figure of 300 TOPS for the Pro (that figure is from the leak; the 16.7 TFLOPS number in the manual would work out to 267 TOPS assuming WMMA).

That is a massive difference to overcome. Even XeSS, which has a DP4a fallback path for RDNA 2 and Nvidia GPUs and so can utilise INT8 on the shaders, has a noticeable performance overhead vs DLSS and FSR. I’m personally not saying it won’t happen with 100% certainty, just that I highly doubt it will.
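For scale, the raw ratio between those two figures (treating FP16 FLOPS and INT8 OPS as comparable per-op, which flatters the base PS5):

```python
# Rough ratio from the figures quoted above; FP16 FLOPs and INT8 OPs are
# treated as interchangeable per-operation, which flatters the base PS5.
pro_int8_tops   = 300.0   # leaked PS5 Pro INT8 figure
ps5_fp16_tflops = 20.56   # base PS5 FP16 rate (no INT8 path)
print(f"Pro has ~{pro_int8_tops / ps5_fp16_tflops:.1f}x the ML throughput")
# -> ~14.6x, before counting that the PS5's shaders also have to render the frame
```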

It is more than the edges IMO, and it is inferring toward an offline fur render, rather than the cheap real-time raster that DLSS is seemingly targeting.
They are both doing the exact same thing, altering the image based on the underlying ML model and temporal data being fed into it. It’s just that in this case PSSR is doing it better.
 

Lysandros

Member
If PSSR comes to the OG PS5, then I'd expect it to come to PC ports of PS5 games too.
How would the PS5 deal with PSSR without the Pro's custom ML hardware? Its cost in rendering is already significant even in a system tailored to it; I am not seeing how it would be practical. Besides, PSSR is nearly the main selling point of the PS5 Pro; there is also the business side to consider.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
If it delivers quality close to or better than DLSS or whatever FSR4 is, then why not? I think they should.

It doesn't, and probably won't anytime soon.
Nvidia are spending stupid money on AI research, and DLSS keeps getting better and better.
So much so that they are literally deleting DLSS profiles from the DLL, because newer models are not only faster but also produce better results, so the lower end models, which in theory were supposed to be faster at slightly worse quality, simply became redundant.
DLSS 3.8 only has 3 profiles now cuz all the old profiles offer no benefit to users.

 

kevboard

Member
How many games have been developed that cannot function without DLSS? As in - do not offer the option for native rendering of any kind?
As I said: if PlayStation moves ahead with PSSR in their hardware, their games will be built with it. They could swap it out for DLSS, but if it's a requirement, then they've locked out AMD owners, which they won't do. Instead, they'll most likely develop PSSR such that it can run on AMD or NVidia GPUs of that generation (PS6+) and include it in their games.

PSSR, DLSS and FSR2 are all just fancy TAA.

you can replace PSSR with simple TAA and it would look just fine.
 

simpatico

Member
I hope that if PSSR ends up beating DLSS quality in the future, you are not one of those port beggars.
Bubba those ports are ours. Sony got a taste of that PC gamer dough and they don’t want to go back to a customer base that must ask mom for permission before spending money.
 

PaintTinJr

Member
The PS5 does not support INT8, so it would need to fall back on FP16 capability, which is 20.56 TFLOPS, vs the INT8 figure of 300 TOPS for the Pro (that figure is from the leak; the 16.7 TFLOPS number in the manual would work out to 267 TOPS assuming WMMA).

That is a massive difference to overcome. Even XeSS, which has a DP4a fallback path for RDNA 2 and Nvidia GPUs and so can utilise INT8 on the shaders, has a noticeable performance overhead vs DLSS and FSR. I’m personally not saying it won’t happen with 100% certainty, just that I highly doubt it will.


They are both doing the exact same thing, altering the image based on the underlying ML model and temporal data being fed into it. It’s just that in this case PSSR is doing it better.
Is there a document that says what hardware support PSSR needs? Last time I asked Copilot it was honestly clueless. Graphics accelerators at their core are 8bit accelerators used to provide accelerated workarounds, and the Ragnarok paper's ML AI is a 4/8bit quantizer: BC5, which the Ragnarok ML solution used for quantization, is 4/8bit (its lineage runs from the original S3TC through DXT to today's BC formats) and is accelerated on the OG PS5 using that FP16 hardware. So all bets are off until we see something technical about PSSR.

As for the fur with PSSR, you don't have the necessary background if you think they are both trying to infer toward the same result and PSSR is just doing it better. DLSS is trying to infer toward a high resolution rasterization of the fur fx. PSSR is aiming higher than that: it is either aiming for an offline render of the fur with minutes-long render times, or it is inferring toward real animal fur, like the X1 chip in Sony TVs does.
 

PaintTinJr

Member
How would the PS5 deal with PSSR without the Pro's custom ML hardware? Its cost in rendering is already significant even in a system tailored to it; I am not seeing how it would be practical. Besides, PSSR is nearly the main selling point of the PS5 Pro; there is also the business side to consider.
It depends on what the custom aspect is technologically, IMO. If it is custom relative to something prior to RDNA3/4, which I believe it is, then dual issue would be the custom part. If it is more than that, then I agree it presents a performance barrier that the OG PS5 probably can't bridge with lower native and lower output resolution alone.
 

Zathalus

Member
Is there a document that says what hardware support PSSR needs? Last time I asked Copilot it was honestly clueless. Graphics accelerators at their core are 8bit accelerators used to provide accelerated workarounds, and the Ragnarok paper's ML AI is a 4/8bit quantizer: BC5, which the Ragnarok ML solution used for quantization, is 4/8bit (its lineage runs from the original S3TC through DXT to today's BC formats) and is accelerated on the OG PS5 using that FP16 hardware. So all bets are off until we see something technical about PSSR.
DLSS and XeSS both use INT8. I see no reason why PSSR would use the less performant FP16. Maybe it could use INT4, but that would further widen the gap between the base PS5 and Pro.

As for the fur with PSSR, you don't have the necessary background if you think they are both trying to infer toward the same result and PSSR is just doing it better. DLSS is trying to infer toward a high resolution rasterization of the fur fx. PSSR is aiming higher than that: it is either aiming for an offline render of the fur with minutes-long render times, or it is inferring toward real animal fur, like the X1 chip in Sony TVs does.
PSSR/XeSS/DLSS all do the same thing. They are ML models running on a temporal upscaler. That ML model is designed to eliminate ghosting, reduce shimmer, provide anti-aliasing, remove stair-stepping, increase texture detail, remove disocclusion artifacts, and enhance fine details (wires, lines, mesh, fur, etc, etc…). All of this is run against the temporal data (motion vectors and a few other things as well) fed into the algorithm by the upscaler. Higher resolution gives it more information to work with and thus produces better results.

Based on the evidence so far PSSR is really good at eliminating ghosting, removing disocclusion artifacts, enhancing fine detail, and smoothing out stair-stepping. It appears to be average on texture detail enhancement and bad at reducing shimmer and keeping the overall stability of the image. It will inevitably improve as Sony works on and fine tunes that ML model. Based on the results of a wide range of games, nothing appears different from any other modern ML upscaler.
 

PaintTinJr

Member
DLSS and XeSS both use INT8. I see no reason why PSSR would use the less performant FP16. Maybe it could use INT4, but that would further widen the gap between the base PS5 and Pro.


PSSR/XeSS/DLSS all do the same thing. They are ML models running on a temporal upscaler. That ML model is designed to eliminate ghosting, reduce shimmer, provide anti-aliasing, remove stair-stepping, increase texture detail, remove disocclusion artifacts, and enhance fine details (wires, lines, mesh, fur, etc, etc…). All of this is run against the temporal data (motion vectors and a few other things as well) fed into the algorithm by the upscaler. Higher resolution gives it more information to work with and thus produces better results.

Based on the evidence so far PSSR is really good at eliminating ghosting, removing disocclusion artifacts, enhancing fine detail, and smoothing out stair-stepping. It appears to be average on texture detail enhancement and bad at reducing shimmer and keeping the overall stability of the image. It will inevitably improve as Sony works on and fine tunes that ML model. Based on the results of a wide range of games, nothing appears different from any other modern ML upscaler.
Your response is all guess work and tells me you don't understand how ML AI works fully.

The results of the fur in PSSR on R&C refute what you are saying, because it is inferring detail beyond DLSS's inference, from being trained on higher quality than native, and dev comments have alluded to as much in other games using PSSR too.
 

Zathalus

Member
Your response is all guess work and tells me you don't understand how ML AI works fully.
Educated guesses based on how other ML upscalers are functioning and the results from over a dozen games. I’m not pulling the information from nowhere, it is literally how DLSS and XeSS function.

But I’m noticing a dire lack of meaningful sources on your end as well. So everything on your end is guess work as well. Just brushing off anything I say with a “you obviously don’t understand” is utterly meaningless without anything backing up such a statement.

So go on then, provide some factual sources that explain exactly what I don’t understand.

The results of the fur in PSSR on R&C refute what you are saying, because it is inferring detail beyond DLSS's inference, from being trained on higher quality than native, and dev comments have alluded to as much in other games using PSSR too.
I’ve already said it’s doing a better job than DLSS. Can DLSS catch up? Probably. Just like DLSS is doing a better job with image stability; PSSR can probably catch up there as well. Any statement on what is technically more impressive or what is easier to catch up on is pure guesswork at this point, as you stated.
 

kevboard

Member
The RDNA info is flat out wrong, and I've lost count of the number of times I've referenced the ML AI solution and optimisation in the Ragnarok paper showing that PSSR can run on the OG PS5

I am 99% certain that it is confirmed that the PS5 base doesn't support int8, and you would absolutely need that to run PSSR.

furthermore, even high end RDNA2 gpus on PC only reach like 90 TOPS, maybe 100 TOPS using int8. that's when using ALL the CUs, which of course isn't viable to do.

for comparison an RTX2060 has 52 TOPS that are useable in full at all times and do not interfere with the shaders.
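for reference, that ~90 TOPS figure falls out of the DP4a math; a sketch using the 6900 XT's public FP32 rate (the 2060 number is the one quoted above):

```python
# Where "~90 TOPS for high end RDNA2" comes from: DP4a does a 4-wide INT8 dot
# product per lane, i.e. roughly 4x the FP32 op rate. The 6900 XT figure is
# its public spec; the RTX 2060 number is taken from the post above.
rx6900xt_fp32_tflops = 23.04
rdna2_int8_tops = rx6900xt_fp32_tflops * 4   # ~92 TOPS, and only by occupying ALL CUs
rtx2060_tensor_tops = 52                     # dedicated tensor cores, shaders stay free
print(f"RDNA2 DP4a: ~{rdna2_int8_tops:.0f} TOPS vs RTX 2060 tensor: {rtx2060_tensor_tops} TOPS")
```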
 
Last edited:

PaintTinJr

Member
Educated guesses based on how other ML upscalers are functioning and the results from over a dozen games. I’m not pulling the information from nowhere, it is literally how DLSS and XeSS function.

But I’m noticing a dire lack of meaningful sources on your end as well. So everything on your end is guess work as well. Just brushing off anything I say with a “you obviously don’t understand” is utterly meaningless without anything backing up such a statement.

So go on then, provide some factual sources that explain exactly what I don’t understand.


I’ve already said it’s doing a better job than DLSS. Can DLSS catch up? Probably. Just like DLSS is doing a better job with image stability; PSSR can probably catch up there as well. Any statement on what is technically more impressive or what is easier to catch up on is pure guesswork at this point, as you stated.
You described how DLSS/XeSS do things, and that description doesn't include inferring toward something that isn't an upscale of the existing source, whereas the fur in R&C goes beyond just multiplying the line segments of a higher quality rasterised hair/fur render with depth of field. On the Pro, the fur is more akin to making a stick man dance, training a model on stormtroopers, and then inferring a deepfake of the stormtroopers doing the stick man's dance. That isn't sharpening the edge detail of the sticks, but replacing them with something of a higher order representation, and that's what PSSR is effectively doing.
 

Zathalus

Member
You described how DLSS/XeSS do things, and that description doesn't include inferring toward something that isn't an upscale of the existing source, whereas the fur in R&C goes beyond just multiplying the line segments of a higher quality rasterised hair/fur render with depth of field. On the Pro, the fur is more akin to making a stick man dance, training a model on stormtroopers, and then inferring a deepfake of the stormtroopers doing the stick man's dance. That isn't sharpening the edge detail of the sticks, but replacing them with something of a higher order representation, and that's what PSSR is effectively doing.
Is it? Have you compared it to the native image without TAA (IGTI) present, to see what exactly is being changed? Well, I can test that: you can completely disable TAA in the PC version if I’m recalling correctly, so I’ll test it and get back to you.
 
Last edited:

simpatico

Member
I don't know if PSSR will be as simple as enabling or disabling it. Judging by games like Silent Hill 2, some custom tweaks might need to be made on a game by game basis. These might even change depending on final output resolution. The more upscalers on PC the better, but from what we've seen so far it doesn't appear to have the simple on/off functionality we see in FSR and DLSS. I guess Sony could take the time to account for every resolution combo for a specific release, but the returns are diminishing at that point.
 
Last edited:

PaintTinJr

Member
I am 99% certain that it is confirmed that the PS5 base doesn't support int8, and you would absolutely need that to run PSSR.

furthermore, even high end RDNA2 gpus on PC only reach like 90 TOPS, maybe 100 TOPS using int8. that's when using ALL the CUs, which of course isn't viable to do.

for comparison an RTX2060 has 52 TOPS that are useable in full at all times and do not interfere with the shaders.
But that 52 TOPS is completely misrepresentative, because it sits behind a slower interconnect and has to work isochronously within the time slice between the shaders finishing a native frame and the v-sync deadline for displaying it. So if a frame takes 14ms to render natively, the RTX 2060's AI module sits idle for 14ms of the 16.67ms, i.e. it is effectively only doing 8-9 TOPS over a second, as it physically can't start inferencing a frame that hasn't been rasterized yet.

edit:

if we then factor in my earlier estimate that the Pro (which is on a par with an RTX DLSS solution for resources) is 4x more powerful and infers 4x more pixels than the OG PS5, then that 8-9 TOPS becomes 2-2.25 TOPS. Ignoring the difference between the units (TOPS vs FP16 FLOPS), doing it in FP16 instead would take 2.25 of the OG PS5's 20.46 FP16 TFLOPS, which is just over 10%. Even if that conversion is 100% out and 20% was needed, 20% of a 16.67ms frame time allocated to 720p -> 1200p PSSR is only 3.3ms, still leaving 13ms to render natively at 720p. So it still sounds possible IMO.
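Spelled out (every input here is my own estimate from above, not a confirmed number):

```python
# Reproducing the post's chain of estimates; every input here (the 8-9 TOPS
# "effective" figure, the 4x Pro ratio, the FP16-for-INT8 swap) is the
# poster's assumption, not a measured or confirmed number.
effective_tops_2060 = 8.5                   # the "idle most of the frame" estimate
ps5_budget_tops = effective_tops_2060 / 4   # ~2.1 "TOPS" after the 4x scaling
ps5_fp16_tflops = 20.46
share = ps5_budget_tops / ps5_fp16_tflops   # ~10% of the GPU, per the post
frame_ms = 1000 / 60                        # 16.67ms at 60fps
pssr_ms = frame_ms * 0.20                   # post's worst case: 20% of the frame
print(f"GPU share: {share:.0%}, worst-case PSSR cost: {pssr_ms:.1f}ms, "
      f"native budget left: {frame_ms - pssr_ms:.1f}ms")
```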
 
Last edited:

vkbest

Member
But that 52 TOPS is completely misrepresentative, because it sits behind a slower interconnect and has to work isochronously within the time slice between the shaders finishing a native frame and the v-sync deadline for displaying it. So if a frame takes 14ms to render natively, the RTX 2060's AI module sits idle for 14ms of the 16.67ms, i.e. it is effectively only doing 8-9 TOPS over a second, as it physically can't start inferencing a frame that hasn't been rasterized yet.

edit:

if we then factor in my earlier estimate that the Pro (which is on a par with an RTX DLSS solution for resources) is 4x more powerful and infers 4x more pixels than the OG PS5, then that 8-9 TOPS becomes 2-2.25 TOPS. Ignoring the difference between the units (TOPS vs FP16 FLOPS), doing it in FP16 instead would take 2.25 of the OG PS5's 20.46 FP16 TFLOPS, which is just over 10%. Even if that conversion is 100% out and 20% was needed, 20% of a 16.67ms frame time allocated to 720p -> 1200p PSSR is only 3.3ms, still leaving 13ms to render natively at 720p. So it still sounds possible IMO.
According to leaks, the PS5 Pro AI accelerator does 300 INT8 TOPS. There's no way you can get the current PSSR model running on the PS5.
 

Irobot82

Member
If I had to guess, AMD's FSR 4.0, or whatever their AI based upscaler will be called, will be very similar to PSSR. They're probably sharing technology, since PlayStation uses an AMD GPU.
 