Digital Foundry - Upscaling Face-Off: PS5 Pro PSSR vs PC DLSS/FSR 3.1 in Ratchet and Clank Rift Apart

No it isn't. The light pulsing on the Pro covers two or more times the distance it does on the PC in the same amount of time. It's very faint, which is why I only locked onto it on the PC after seeing how much it stood out on the Pro, and went to compare thinking it was noise.
60 vs 120 fps is extremely easy to spot, and it certainly doesn't make any light pulse faster unless the pulsing is linked to frame rate, and game logic hasn't been linked to frame rate like that in ages. Certainly not in this game; nothing speeds up at higher fps.
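To illustrate (a minimal sketch with made-up values, not this game's actual code): modern game logic drives effects like this off elapsed time rather than frame count, so the pulse rate can't change with fps:

```python
import math

PERIOD = 2.0  # hypothetical 2-second pulse cycle, not the game's real value

def pulse_brightness(t: float) -> float:
    """Pulse driven by elapsed wall-clock time, not by frame count."""
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * t / PERIOD)

# Simulate one real-time second at 60 fps and at 120 fps:
for fps in (60, 120):
    dt = 1.0 / fps
    t = sum(dt for _ in range(fps))  # accumulate per-frame deltas -> ~1.0 s
    print(f"{fps} fps -> t = {t:.3f} s, brightness = {pulse_brightness(t):.3f}")

# Both frame rates land on the same brightness after one second. A buggy,
# frame-linked version (t += constant per frame) would pulse twice as fast
# at 120 fps, which is the only way frame rate could change pulsing speed.
```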

Frame rate aside, I'm not actually seeing any pulsing on the crate with DLSS; it could be a bug or a DLSS-specific issue.

As for why Clank is not perfectly in sync, character idle animation is dynamic. You'll notice Clank yawns on the PC footage and does not on the PS5 footage.
 
The temporal shimmering on fine details is one of the most bothersome elements of TAA and FSR, and even DLSS isn't perfect. I hope they clean it up, because it's one of the most distracting things in games now. At 4K, minor aliasing artifacts aren't that disturbing when you're actually playing, but pixel crawl, flickering on fine elements like cables and fencing, and ghosting can be really distracting. That said, PSSR seems much, MUCH better at handling it than FSR.

Alan Wake 2 is awful. So is Cyberpunk (not as bad as AW2, but still bad in Performance mode; we need Pro support for this game!). So is Modern Warfare 2; my god, that one sucks.

One of the biggest WTF moments this gen for me was getting MW2 for PS5 right after playing MW 2019 on Xbox One X. An entire generation between those consoles, both games at about the same level of graphics... the result? On One X, MW 2019 ran close to native 4K and looked PRISTINE (it didn't hold 60, but it was damn close). MW2 came after the FSR2 craze hit and looked worse in every way, because the idiot devs decided to use FSR from something like 1080p: terrible shimmering and pixel crawl. After seeing what they did there, nobody can convince me that certain devs didn't adopt FSR out of laziness.
 
Good to see DLSS is still king.

Even if it was ever beaten, all we have to do is increase the internal resolution.
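For reference, "increase the internal resolution" just means picking a less aggressive quality mode. A quick sketch using Nvidia's published per-axis DLSS scale factors (the helper function is hypothetical, and PSSR's per-game factors may differ):

```python
# Nvidia's published per-axis render-scale factors for DLSS quality modes.
DLSS_MODE_SCALE = {
    "Quality":           0.667,
    "Balanced":          0.580,
    "Performance":       0.500,
    "Ultra Performance": 0.333,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal render resolution for a given output resolution and mode."""
    scale = DLSS_MODE_SCALE[mode]
    return round(out_w * scale), round(out_h * scale)

for mode in DLSS_MODE_SCALE:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"4K {mode}: renders internally at {w}x{h}")

# Quality mode at 4K starts from ~2561x1441 instead of Performance's
# 1920x1080, giving the upscaler far more real pixels to work with.
```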

 
NEOGAF GOTY 2024: Giant red circles on still images.

Isn't this amazing? We have to zoom in and circle things to show defects in image quality - that means image quality is GOOD!

PSSR brings a new level of quality to consoles, but its hardest test is still ahead: games below 1080p internal resolution.
 
Isn't this amazing? We have to zoom in and circle things to show defects in image quality - that means image quality is GOOD!

PSSR brings a new level of quality to consoles, but its hardest test is still ahead: games below 1080p internal resolution.
Exactly...

I've been lurking in this thread and have generally avoided responding to anyone, because to me there seem to be a lot of people who don't really get what this thread is about.

This is purely a tech thread, one showing off or comparing the qualities of PSSR vs. the competition. Lest we forget what state DLSS was in when it launched. So far, I think PSSR has done for PlayStation what I was hoping Sony would do at the launch of the PS5. Reconstruction tech is the future, and the PS5 Pro is making a stronger case for it. And like all ML-based tech, it will naturally get better over time.

I think a more pertinent question is whether the version of PSSR a game uses is a system-controlled feature, and as such one that can get updated via a firmware update without developer input. Doubt we can be that lucky.

I too am really curious to see how it performs in games working with lower internal res. I can't remember what the internal res for FF7:RB was, but so far that seems to have suggested how good it may be. Whatever the case, we can at least conclude that it would be better than whatever we currently have on the base PS5.
 
DLSS will always be better for me, because my PC is always drastically better than consoles. I guess if your PC is PS5 Pro-equivalent, then the war here might matter?
 
DLSS will always be better for me, because my PC is always drastically better than consoles. I guess if your PC is PS5 Pro-equivalent, then the war here might matter?
It matters nowhere because DLSS isn't available where PSSR is and vice-versa.
 
It's becoming clearer and clearer to me that if the PS5 Pro ends up equivalent to an average PC with a 4070 Super for €799, in a small, silent tower that's simple to use and free of PC hassles, it looks like a better and better purchase.
The issue with comparing prices, which I only see a few people talk about, is use cases.

Yes, the PS5 Pro is cheaper, but the 4070 Super has more uses than just gaming and can be used to make money in a wide variety of ways.


For me, I'm more interested in a gaming laptop these days because of its portability, and it can be used for both work and gaming.

I actually currently have an Asus TUF F17 gaming laptop and am waiting for Strix Halo gaming laptops to hit the market.

To each his own, I guess.
 
It's strange to see people treating the blur and lower res of PSSR as a positive because it shows less aliasing, due to being a softer image.
 
It's strange to see people treating the blur and lower res of PSSR as a positive because it shows less aliasing, due to being a softer image.
No. It's showing less aliasing because of how it handles edges. You can still have an aliased image with softer output, and the images are also very zoomed in, so the difference in clarity won't be as big when playing.

[GIF: PSSRvs-DLSS-8.gif, a PSSR vs DLSS edge comparison]


The two technologies handle edges slightly differently, and that's just in this game so far. We'd need to see more examples. In-game, that would hardly be noticeable.
 
It matters nowhere because DLSS isn't available where PSSR is and vice-versa.
I'm more keen to see the difference against XeSS, since that is a lot closer to DLSS (compared to FSR) and is usable on practically any modern GPU. That, and testing lower internal resolutions, since that will be the real test of how good an upscaler is.

It will also get a lot more interesting once FSR 4 is released, and if it turns out to be compatible with legacy RDNA hardware.
 
It matters nowhere because DLSS isn't available where PSSR is and vice-versa.
I do wonder if we'll see PS-to-PC ports bring over the option to use PSSR on PC. That should make for some interesting comparisons, just like how Insomniac games have TI reconstruction as an option in their PC ports.

Technically it should be possible: like any other ML-based tech, PSSR boils down to matrix operations, so anything with matrix cores should be able to run it.
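For what it's worth, the "just matrix operations" claim is easy to picture. Here is a toy NumPy sketch, emphatically not PSSR's actual network, of lowering one convolution layer to a single matrix multiply via im2col, which is exactly the kind of work matrix cores accelerate:

```python
import numpy as np

# Toy illustration of the point above: one conv layer of an ML upscaler,
# lowered to a single matrix multiply (im2col). This is NOT PSSR's real
# network, just the shape of the work matrix cores accelerate.

def conv3x3_as_matmul(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """image: (H, W) single channel; weights: (out_ch, 9) learned 3x3 filters."""
    H, W = image.shape
    # im2col: gather every 3x3 patch into one column of a big matrix
    patches = np.stack(
        [image[i:i + 3, j:j + 3].reshape(9)
         for i in range(H - 2) for j in range(W - 2)],
        axis=1,
    )                                # shape (9, (H-2)*(W-2))
    out = weights @ patches          # the entire layer is one matmul
    return out.reshape(weights.shape[0], H - 2, W - 2)

rng = np.random.default_rng(0)
low_res = rng.random((64, 64), dtype=np.float32)          # fake input frame
kernels = rng.standard_normal((8, 9)).astype(np.float32)  # 8 fake filters
print(conv3x3_as_matmul(low_res, kernels).shape)          # (8, 62, 62)
```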

The issue with comparing prices, which I only see a few people talk about, is use cases.

Yes, the PS5 Pro is cheaper, but the 4070 Super has more uses than just gaming and can be used to make money in a wide variety of ways.


For me, I'm more interested in a gaming laptop these days because of its portability, and it can be used for both work and gaming.

I actually currently have an Asus TUF F17 gaming laptop and am waiting for Strix Halo gaming laptops to hit the market.

To each his own, I guess.
And this is why I think all console-vs-PC price/value arguments are stupid: they're based on the stupid notion that everyone's use case is the same, or that what matters to one person matters to everyone.

It's like talking about "console" exclusives: such an exclusive is only exclusive if you ONLY have/game on a console.

While I splurged on a gaming PC build a few years ago, it wasn't done primarily for gaming. I needed a PC for some spreadsheet work (if you can believe that), also needed an HTPC, and decided I might as well build something powerful enough to play games so I could play otherwise Xbox/PC exclusives. But this only works for me because I can have that computer connected to my living room setup... I HATE sitting at a desk and using a PC.

So naturally someone like me leans more towards using a console.
 
No. It's showing less aliasing because of how it handles edges. You can still have an aliased image with softer output, and the images are also very zoomed in, so the difference in clarity won't be as big when playing.

[GIF: PSSRvs-DLSS-8.gif, a PSSR vs DLSS edge comparison]


The two technologies handle edges slightly differently, and that's just in this game so far. We'd need to see more examples. In-game, that would hardly be noticeable.

I'm talking about the tower shots. The bush there looks blurrier on DLSS, but the background images with the tower clearly show a softer, lower-res-looking image on PS5 Pro.

Not sure what's going on here, but it's a great first showing for PSSR and for bringing this tech to console.
 
So keen on PSSR implementations.

I'm not sure about others, but playing on a 77-inch OLED really shows a game's flaws sometimes, due to the PPI.
 
I'm talking about the tower shots. The bush there looks blurrier on DLSS, but the background images with the tower clearly show a softer, lower-res-looking image on PS5 Pro.

Not sure what's going on here, but it's a great first showing for PSSR and for bringing this tech to console.
You have your answer to the original inquiry, which is what my post highlighted.

daninthemix

You're in here trying to justify a purchase when all this topic is trying to do is show the differences between two AI upscaling technologies... while calling people butthurt because they told you they couldn't care less how you spent your money. How does that work?
 
Good to see DLSS is still king.

Even if it was ever beaten, all we have to do is increase the internal resolution.
Why is platform warring such a focus for you? Surely you'd want better upscaling and image quality for all gamers, wherever they choose to game?
 
Cerny deserves an award for technology if the PS5 Pro's PSSR outperforms, or at least equals, a $1k+ PC with DLSS or FSR. That's if it happens. Just saying 😁
 
Pretty much sure PSSR is utilizing AMD hardware.

Lord Cerny's response to you, from Road to PS5:

" If we bring concepts to AMD that are felt to be widely useful then they can be adopted into RDNA - and used broadly including in PC GPUs.

If the ideas are sufficiently specific to what we're trying to accomplish like the GPU cache scrubbers I was talking about then they end up being just for us.

If you see a similar discrete GPU available as a PC card at roughly the same time as we release our console that means our collaboration with AMD succeeded.

In producing technology useful in both worlds it doesn't mean that we as sony simply incorporated the pc part into our console."

So you shouldn't be sure 🤷🏻‍♂️
 
No. It's showing less aliasing because of how it handles edges. You can still have an aliased image with softer output, and the images are also very zoomed in, so the difference in clarity won't be as big when playing.

[GIF: PSSRvs-DLSS-8.gif, a PSSR vs DLSS edge comparison]


The two technologies handle edges slightly differently, and that's just in this game so far. We'd need to see more examples. In-game, that would hardly be noticeable.

Nvidia letting Sony get this close to matching DLSS shows Nvidia does not give a fuck about gaming anymore.
 
Just a reminder: Cerny very plainly states "we added custom hardware for machine learning" in the PS5 Pro presentation. When I think custom, I think of things like the hardware ID buffer, the custom I/O complex and the Cache Scrubbers; those were advertised as such by Sony. I don't think Cerny would make the mistake of referring to a standard feature of the RDNA 3/4 architecture as custom; that would be blatant false advertising with plausible legal implications.
 
Just a reminder: Cerny very plainly states "we added custom hardware for machine learning" in the PS5 Pro presentation. When I think custom, I think of things like the hardware ID buffer, the custom I/O complex and the Cache Scrubbers; those were advertised as such by Sony. I don't think Cerny would make the mistake of referring to a standard feature of the RDNA 3/4 architecture as custom; that would be blatant false advertising with plausible legal implications.

Exactly. I don't know why people continue to believe this is AMD hardware. The leaks labeled the architecture as "fully custom" and, as you mentioned, Cerny stated the same thing in the tech talk. It is evident that the RT hardware is AMD-sourced while the ML architecture is Sony's design.
 
Exactly. I don't know why people continue to believe this is AMD hardware. The leaks labeled the architecture as "fully custom" and, as you mentioned, Cerny stated the same thing in the tech talk. It is evident that the RT hardware is AMD-sourced while the ML architecture is Sony's design.

Because the whole SoC is made by AMD.
And AMD is used to making custom solutions for their clients, including Sony.
AMD's WMMA is the solution used on the PS5 Pro.
I don't understand why you insist that AMD can't make an AI solution, when they have been doing it with CDNA for so many years.
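For context, RDNA 3's WMMA instructions compute a 16x16x16 multiply-accumulate tile with low-precision inputs and a 32-bit accumulator. Here is a NumPy stand-in for one tile's math (the real instruction runs across a GPU wavefront, not in Python):

```python
import numpy as np

A = np.random.rand(16, 16).astype(np.float16)  # low-precision input tile
B = np.random.rand(16, 16).astype(np.float16)
C = np.zeros((16, 16), dtype=np.float32)       # float32 accumulator tile

# One WMMA-shaped step: multiply two 16x16 fp16 tiles, accumulate in fp32.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (16, 16); a network layer is just many such tiles
```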
 
Sony engineers are also perfectly capable of designing hardware solutions to fit their needs and collaborating with AMD to produce and integrate them into their APUs. This has happened in the past; it's not a wild concept to grasp.
 
Because the whole SoC is made by AMD.
And AMD is used to making custom solutions for their clients, including Sony.
AMD's WMMA is the solution used on the PS5 Pro.
I don't understand why you insist that AMD can't make an AI solution, when they have been doing it with CDNA for so many years.

We've had this discussion before:

Oh no, let's continue with this thread's logic: we should credit TSMC for AMD GPUs, since after all it's their chips being used.


AMD can conduct business as a pure construction contractor, simply tasked with building concepts provided and fully designed by Sony (the ML architecture), versus as a "design-build" company that conceptually and physically builds its own technology for resale to end users (RDNA GPUs). We are in the former scenario, where we will not see Sony's ML architecture in AMD's RDNA GPUs.
 
Because the whole SoC is made by AMD.
And AMD is used to making custom solutions for their clients, including Sony.
AMD's WMMA is the solution used on the PS5 Pro.
I don't understand why you insist that AMD can't make an AI solution, when they have been doing it with CDNA for so many years.

I don't think it matters that much...

These kinds of upscalers rely on a combination of hardware + software to work properly.

Let's say the hardware is 100% AMD; the software is still 100% Sony, and it's designed to work SPECIFICALLY on PS5 games.

Who made the GPU block is not important.

And more importantly, the software can be improved/upgraded for free by Sony.

Hardware can't.
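To sketch that hardware/software split (all names hypothetical; this is the shape of the argument, not Sony's actual stack): the silicon exposes one fixed primitive, and everything patchable lives in a replaceable model that drives it.

```python
import numpy as np

def matmul_hw(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Stand-in for the fixed-function matrix hardware; frozen at launch."""
    return a @ b

class UpscalerModel:
    """The software side: weights that a firmware/SDK update can replace."""
    def __init__(self, weights: np.ndarray, version: str):
        self.weights, self.version = weights, version

    def run(self, features: np.ndarray) -> np.ndarray:
        # Whatever weights this version ships with, the hardware op is the same.
        return matmul_hw(self.weights, features)

features = np.random.rand(9, 1024).astype(np.float32)
v1 = UpscalerModel(np.random.rand(8, 9).astype(np.float32), "model v1")
v2 = UpscalerModel(np.random.rand(8, 9).astype(np.float32), "model v2")  # free upgrade
for model in (v1, v2):
    print(model.version, model.run(features).shape)  # same silicon path both times
```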
 
It's really impressive what Sony has been able to do with the first gen of this technology. It's not just that it compares favorably to DLSS; it blows away the other stuff out there. Never doubt CernGOD.
 
While 'custom' and 'bespoke' are often used interchangeably, there are distinct differences between the two: custom refers to products or services made from predetermined options, while bespoke involves creating something entirely unique from scratch.

The predetermined options are the hardware AMD currently has; examples of customizations would be the clock speed, the number of CUs, the amount of cache, etc.

Fully custom would be Sony integrating its own AI accelerators into the RDNA 2 architecture.

Why would Sony create its own AI accelerators, wasting R&D money, when AMD already has an AI solution?
 