
PS5 Pro is getting PSSR 2.0 between January and March 2026

PSSR is not an upscaler.
Asking an AI about it is a bad way to settle this.

Upscaling is taking an image and blowing it up onto a larger pixel grid, often (but not necessarily) followed by treating the upscaled image in one way or another.

Temporal upsampling is taking information from multiple frames (usually jittered, checkerboarded, or similar patterns) plus engine inputs to create additional detail, and then constructing a final image from those parts of multiple frames.
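The difference can be shown with a toy 1-D sketch (purely illustrative; real PSSR/DLSS pipelines handle motion, disocclusion, and weighted history, none of which appears here): four jittered low-res "frames" each sample a different quarter of the positions, and gathering them reconstructs the signal exactly, while stretching one frame cannot.

```python
import numpy as np

# Toy illustration of the upscale/upsample distinction, not how any real
# upscaler works: a "scene" signal at output resolution, and low-res frames
# that each sample every 4th point at a different per-frame jitter offset.
factor = 4
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
scene = np.sin(3 * x)                               # ground truth at output res

frames = [scene[k::factor] for k in range(factor)]  # 4 jittered low-res frames

# Upscaling: stretch ONE frame onto the larger grid (nearest neighbour).
upscaled = np.repeat(frames[0], factor)

# Temporal upsampling: gather samples from ALL frames and slot each one
# into the high-res position it actually measured.
upsampled = np.empty_like(scene)
for k, frame in enumerate(frames):
    upsampled[k::factor] = frame

print("upscale max error :", np.abs(upscaled - scene).max())   # large
print("upsample max error:", np.abs(upsampled - scene).max())  # 0.0
```

The gathered samples are real measurements of the signal, so the reconstruction is exact in this idealised static case; the stretched frame can only repeat what one frame saw.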

Oh, so it has to be a very specific word. OK, upsampling, jeez.
 
You are free to believe whatever you want, even that the Earth is flat. However, you won't find many people on this site who believe that PS2 games had graphics as advanced as those in Silent Hill: F.
Doesn't look that good.
What I said is that it looks like a higher-resolution PS2 game.
It's my opinion, and thankfully I couldn't care less what others think when it comes to my opinion.
 
Sure, let's think about DLSS 1.0 and 2.0. Same thing, right? 1.0 sucks and 2.0 doesn't.
You're all hung up on IQ when I'm talking about applicability. PSSR2 could be the best-looking upscaler ever made, but if it still relies on being patched into every game and is still at the mercy of third-party developer support, then it is still a worse solution than FSR4 or DLSS or anything else that can be applied at a hardware level, across the board.
Yes, PSSR is an upsampler; DLSS 1 is an upscaler.

2 different things.
This is just being pedantic for no reason. They're both upscalers; one just uses AI. Fucking Sony and Nvidia call them upscalers. Mark Cerny has called PSSR an upscaler a thousand times since its inception.
 
It is/was broken in half of the games I have played: Star Wars Outlaws, Star Wars Jedi: Survivor, Avatar, Silent Hill, Alan Wake 2, and I'm sure I'm forgetting some. PSSR introduces a lot of shimmering in those games.

What does the Pro offer other than a resolution boost, then? Like I said, the base PS5 fidelity mode outperforms the Pro mode 9 times out of 10, fidelity-wise.
The only game with "broken" PSSR is Silent Hill, and its quality mode uses ray-traced reflections, which were extremely expensive even on PC. Furthermore, PSSR is infinitely better in those games than FSR, which only holds up while you stand still and is otherwise a crappy, broken mess with vegetation and transparencies. And it's funny you named those games, because they all introduced ray tracing or higher settings in performance mode that aren't available on the base console; Star Wars and Avatar even have a quality mode at 60fps 😆. You clearly aren't informed about what you're talking about.
 
The insecurities of PC fanatics are on full display here...

They spend all this time talking about a product they shouldn't even care about

With PC hardware prices going nuts this year, their obsession with the PS5 Pro can only get worse...

It will be fun here though once this firmware update comes out

You need to chill with the victim complex, trust me you'll feel better for it.
 
[attached image: frame-time profiler readout]


Just out of interest, where is the accompanying info for this readout? (I have it in my head that it was from a Codemasters game.)

Because when that table is converted to CSV by OCR and given a closer look in a spreadsheet, it suggests something completely different, especially when you stop and ask: what does anti-aliasing have to do with FSR's upscaling pass, or PSSR's model inference? Nothing would be the logical answer. Because the top frame time in each table looks like the sum of the sub-list values below it, we all just assume the upscaler is baked into the AA line. But what if I told you that on the PS5 table the difference between the frame time and the sum of the sub-values is 1.15ms, a reasonable scaling time for FSR, and on the PS5 Pro table the difference is just 0.65ms, which could easily be PSSR inference?

Please see the tables in text CSV format below :). Either way, I can't say why the AA value is so much bigger on PS5 Pro without knowing how many jitter-accumulation samples each algorithm used. But it stands to reason that an ML model keeping a longer jittered sample history will use more compute time to produce the final AA output, and bigger upscale factors will increase that further.


PS5,,,,,,
Component,total,avg,max,min,bdgt,ocr_confidence
Frame,12.57,13.4,16.19,11.26,13.33,98
Sun Shadows,0.23,0.31,0.81,0.21,0.4,98
Spot Shadows,0.08,0.05,0.16,0,0.5,98
DepthHack|Viewmodel,0.62,0.47,0.99,0.24,1,98
Opaque,5.68,5.93,7.64,4.79,5.25,98
Trans,0.35,0.88,2.42,0.22,0.75,98
Effect,0.25,0.28,0.72,0.24,1.6,98
Lighting,0,0,0,0,0.76,98
Volumetrics,0.39,0.36,0.43,0.31,0.9,98
DXR,0,0,0,0,2,98
Post Fx,1.63,1.74,2.26,1.52,1.8,98
Anti-Alias,1.46,1.51,1.92,1.2,2,98
UI,0,0,0,0,0.5,98
Compute,0.65,0.63,0.71,0.56,,98
Resource Pipeline,0.03,0.04,0.22,0.03,,98
System Overhead,0.05,0.05,0.09,0.05,,98
,,,,,,
Sum of listed costs,11.42,,,,,
FSR before AA,1.15,,,,,
,,,,,,
PS5 Pro,,,,,,
Component,total,avg,max,min,bdgt,ocr_confidence
Frame,8.56,8.49,9.68,7.79,13.33,98
Sun Shadows,0.27,0.32,0.89,0.25,0.4,98
Spot Shadows,0.05,0.08,0.27,0,0.5,98
DepthHack|Viewmodel,0.34,0.27,0.38,0.17,1,98
Opaque,2.92,2.85,3.38,2.34,5.25,98
Trans,0.24,0.3,0.83,0.15,0.75,98
Effect,0.16,0.17,0.2,0.15,1.6,98
Lighting,0,0,0,0,0.75,98
Volumetrics,0.08,0.11,0.19,0.07,0.9,98
DXR,0,0,0,0,2,98
Post Fx,1.04,1.06,1.19,1,1.8,98
Anti-Alias,2.19,2.15,2.23,2.07,2,98
UI,0,0,0,0,0.5,98
Compute,0.55,0.53,0.73,0.49,,98
Resource Pipeline,0.03,0.04,0.25,0.03,,98
System Overhead,0.04,0.04,0.04,0.04,0.03,98
,,,,,,
Sum of listed costs,7.91,,,,,
PSSR cost before AA?,0.65,,,,,
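For anyone who wants to check the arithmetic behind those tables: summing each table's "total" column and subtracting it from the top-line frame time does reproduce the claimed residuals (whether those residuals really are FSR/PSSR cost is the poster's hypothesis, not a confirmed fact).

```python
# Sub-costs from the OCR'd "total" columns above (Frame line excluded),
# in the order the components are listed.
ps5_costs = [0.23, 0.08, 0.62, 5.68, 0.35, 0.25, 0.0, 0.39, 0.0,
             1.63, 1.46, 0.0, 0.65, 0.03, 0.05]
pro_costs = [0.27, 0.05, 0.34, 2.92, 0.24, 0.16, 0.0, 0.08, 0.0,
             1.04, 2.19, 0.0, 0.55, 0.03, 0.04]

def residual_ms(frame_ms, costs):
    """Unaccounted-for time: top-line frame time minus the listed sub-costs."""
    return round(frame_ms - sum(costs), 2)

print(residual_ms(12.57, ps5_costs))  # 1.15 -> the post's candidate FSR cost
print(residual_ms(8.56, pro_costs))   # 0.65 -> the post's candidate PSSR cost
```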
Anti-aliasing is just a way of referring to the upscaler in use. TAAU/TSR/FSR/DLSS/PSSR are all upscalers and serve as anti-aliasing solutions as well. You don't use PSSR/DLSS and then throw TAA on top; they replace TAA entirely.

All of these upscalers can even run at native resolution and forgo the upscaling entirely. On PC, DLSS just gets renamed to DLAA, FSR doesn't change names, and a few games use PSSR as a native AA solution as well.
 
I remember reading in the leaks that Sony was aiming to make PSSR both cheaper and better-looking than FSR4, but that was for the PS6. I'm curious how this will run on the PS5 Pro; the console has underwhelmed in its main selling points, RT and reconstruction. Hopefully this is a big surprise.
 
I get what you're saying, but this way of looking at it is flawed, because the limit is set only by frametime cost and a console FPS target, not by a "physical" hard TOPS cutoff that makes it "physically possible" or not.

A laptop 3050 is also only 57 TOPS, yet it runs the same path-tracing and DLSS models as a 3352-TOPS 5090. Just not well, right? The same is true of the Switch 2: it was designed to run DLSS, but the difference is that while you can put your 3050 on ray/path tracing at a given resolution and accept 20fps, the Switch 2 version has to hit a standardised performance target. A specific DLSS version exists for it because of those targets alone, not because somewhere between 27 and 57 TOPS it becomes "physically impossible" to do otherwise. The Switch 2 is physically capable of it (and how much of that 57 TOPS would path tracing, absent on Switch, eat anyway?), yet it can run standard DLSS like the 3050 does. The difference lies in the fact that you can't release a console game hitting 10fps and call it a day, so it has a bespoke low-frametime-cost DLSS simply to hit FPS targets on the given hardware. It's actually an added effort over laptop 3050 DLSS support. Why would this idea not apply to the PS5 Pro and PSSR: make something bespoke and lightweight that requires as little silicon as possible for PlayStation?

On PS5 Pro, the use of PSSR is the same tradeoff, just at higher resolution/settings/ray tracing, using its hardware as best it can. So when they created PSSR, they were still trying to create a "tiny" DLSS, much like Switch 2: as small a frametime cost as they could manage while maintaining ray tracing and 60fps or 120Hz modes, hoping to boost IQ toward 30fps-quality resolution with minimal frametime cost. PSSR was "limited" from the beginning by the same constraints, while trying to use the full capability of the hardware.

I agree some early model artifacts can absolutely be improved at minimal performance cost, and those quality/efficiency advancements naturally come with time. But I was debating the idea that one piece of hardware has unused headroom while another does not, based on their absolute theoretical TOPS. That is simply wrong to me. Both have bespoke upscalers built around their framerate/settings targets, and naturally higher fps/res/ray tracing/settings require more computing power, but the hardware is based on the framerate targets and settings, not so much vice versa, especially for the Pro. They don't really have unused headroom in TOPS; it's simply that those initial performance targets required that hardware.

DLSS just worked backwards: it was developed first for other hardware, then targeted weaker hardware with a bespoke DLSS for its performance targets, whereas PSSR was developed for the PS5 Pro from the beginning, taking the given hardware and 60fps/higher-settings targets into account. The PS5 Pro has the disadvantage that it is always compared to the PS5/XSX, so anything lower in framerate or settings, or much higher in price, is considered a failure of the system too. So they have to keep PSSR's frametime cost very low at a high resolution. That is why it has the TOPS it has, not because of some "maximum model size" they're not hitting yet with TOPS left on the table. It's all about frametime targets for a given res/settings, and the methods and efficiency will improve on both.

Had standard DLSS not existed outside of Switch 2, I doubt standard CNN DLSS would even have come to it at launch, to be honest. They would have concentrated on the bespoke tiny DLSS/"NSS", even with those 33ms frametimes, just as Sony seems to have concentrated on one PSSR performance target for now.

They will release PSSR2 and will no doubt improve things, but the performance cost is unknown. It may even be more costly in frametime, which would really disprove the idea that there is unused hardware headroom. I hope it is in fact more efficient, in addition to the quality improvements. We might get several profiles. Assassin's Creed Shadows' PSSR turned out well when they worked with PlayStation, so maybe they'll push specific versions for different engines/games based on developer feedback, and we might even get a tiny PSSR for 120Hz modes. We know very little about it for now.
Sorry I've ranted for so long.
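The frametime-target argument above boils down to simple arithmetic: the FPS target fixes the budget, and the upscaler's cost is judged as a share of it. The 2ms figure below is an illustrative placeholder, not a measured PSSR or DLSS cost.

```python
# How much of a frame budget a fixed-cost upscaler eats at each FPS target.
def frame_budget_ms(fps: int) -> float:
    return 1000.0 / fps

UPSCALER_MS = 2.0  # illustrative placeholder, not a measured cost

for fps in (30, 60, 120):
    budget = frame_budget_ms(fps)
    share = 100.0 * UPSCALER_MS / budget
    print(f"{fps:>3} fps: {budget:5.2f} ms budget -> upscaler is {share:.0f}% of it")
```

The same fixed cost is a rounding error at 30fps and a quarter of the whole frame at 120fps, which is why bespoke, cheaper variants exist for tighter targets.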
I think we're way off track from what I was talking about. Tiny DLSS is worse than PSSR for a very simple reason: it needs to be as efficient as possible to run on the underpowered Switch 2 hardware without taking up too much of the frame budget. The CNN model is heavier by comparison, so tiny DLSS had to make cuts, like limited-to-no anti-aliasing in motion.

PSSR faces far fewer hardware constraints. Yes, it also needs to take up as little frametime as possible, but it has way, way more powerful hardware to run on. I'm not claiming that the PS5 Pro hardware is underutilized, and I'm sorry if I gave that impression, just that the hardware is doubtfully the reason for some of PSSR's issues. Those are training- and algorithm-related.
 
The only game with "broken" PSSR is Silent Hill, and its quality mode uses ray-traced reflections, which were extremely expensive even on PC. Furthermore, PSSR is infinitely better in those games than FSR, which only holds up while you stand still and is otherwise a crappy, broken mess with vegetation and transparencies. And it's funny you named those games, because they all introduced ray tracing or higher settings in performance mode that aren't available on the base console; Star Wars and Avatar even have a quality mode at 60fps 😆. You clearly aren't informed about what you're talking about.
Bro, you're telling me that PSSR works as intended in the games I listed? Lmao, last week I rebooted Avatar and it's still broken. It took the devs months to fix PSSR in Star Wars Jedi: Survivor. Silent Hill still isn't fixed on the Pro either.
Why is it so hard for some here to admit that PSSR is not as good as it was presented? It's lagging behind DLSS and it's not better than FSR. Like it or not, PSSR introduces shimmering. I still have nightmares about the Jedi: Survivor shimmering ^^
 
Bro, you're telling me that PSSR works as intended in the games I listed? Lmao, last week I rebooted Avatar and it's still broken. It took the devs months to fix PSSR in Star Wars Jedi: Survivor. Silent Hill still isn't fixed on the Pro either.
Why is it so hard for some here to admit that PSSR is not as good as it was presented? It's lagging behind DLSS and it's not better than FSR. Like it or not, PSSR introduces shimmering. I still have nightmares about the Jedi: Survivor shimmering ^^
You are grossly exaggerating, that is the thing. In most of those games, the gain in image quality when the camera is moving (which is, well, important in a "game") beats the minor fizzing issues you mention (mostly visible when the player is standing still / not moving the camera... aka UE 5.x community bullshots ;) wink wink).
 
Bro, you're telling me that PSSR works as intended in the games I listed? Lmao, last week I rebooted Avatar and it's still broken. It took the devs months to fix PSSR in Star Wars Jedi: Survivor. Silent Hill still isn't fixed on the Pro either.
Why is it so hard for some here to admit that PSSR is not as good as it was presented? It's lagging behind DLSS and it's not better than FSR. Like it or not, PSSR introduces shimmering. I still have nightmares about the Jedi: Survivor shimmering ^^
Bro, I never said PSSR is flawless, but when used properly it is a decent upscaler, and surely infinitely better than the shitty mess that was FSR. Also, you said the PS5 Pro just offers higher resolution, which is totally false, so what are you really trying to argue in the end? Go back to the base PS5 and you will see how much worse the IQ can be, especially at 60fps.
 
Wukong uses UE 5.0 and doesn't have those problems, even in areas with a lot of foliage. Yotei and KCD2 look fantastic too. Maybe the problem isn't the model; maybe the main problem is that developers aren't testing and tweaking out the imperfections like they do with FSR and DLSS.

We have no idea why that is, in reality. Wukong mostly avoids the typical PSSR+UE5 issues while pretty much all other UE5 games don't.

Maybe it could have been fixed by the developers of those games if they had given it enough time and resources, but the thing is: with other image-reconstruction methods they probably don't have to, because those "just work" via the UE5 plugins.

And this shimmering thing isn't exclusive to UE5: PSSR is incompatible with some rendering techniques in some games. Maybe it wasn't trained on enough games, or they missed this flaw for some reason; only Cerny and his team know the truth...
 
Why is it so hard for some here to admit that PSSR is not as good as it was presented?

Because...

[reaction GIF]


It's pretty obvious that the first PSSR is flawed and not so easy to use, or we would have all devs supporting the Pro with good results and no problems. Shit, we even have games now coming out with PSSR on/off toggles... Blame it on the devs? Sure... but for me it's on Sony: they promoted PSSR the way they did and then expected devs to go into full development-and-troubleshooting mode for a new tech that a very small percentage of players will take advantage of. It's even more obvious because at the end of the gen they are doing a 2.0 version to try to fix this thing. In the past, swimming in first-party content, this wouldn't be so bad, but this gen Sony is relying too much on third-party games to be handing them broken/difficult tech to work with, mostly ending in meh results.

Should Sony be shot because their first AI upscaler doesn't work 100% properly? Not really... it works fine in their own games and they are clearly working to upgrade it. I personally think the marketing and the promises around it were bad (same with the PS5 Pro, btw), but shit happens...

But shilling and pretending PSSR is very good, and that its underachieving is all on the devs, is pure fanboyism.
 
They're both upscalers; one just uses AI. Fucking Sony and Nvidia call them upscalers. Mark Cerny has called PSSR an upscaler a thousand times since its inception.
Nvidia calls DLSS super sampling, super resolution, or image reconstruction.


Personally, I see no reason to call DLSS upscaling technology, because that's not what DLSS does. Upscaling only enlarges existing detail to create a higher-resolution image, which always decreases image quality. DLSS, on the other hand, improves image quality, because it reconstructs REAL detail from data collected across previous frames. A 4K frame has eight million pixels, while DLSS-P uses up to 32 frames, each with two million pixels, to reconstruct a single high-quality 4K frame with eight million pixels. This reconstruction adds an incredible amount of real detail, which is something upscaling can't do. Obviously, different methods shouldn't share the same name, because it only confuses people. People who view DLSS as upscaling think they are playing at a lower resolution, and that's not the case. DLSS has its own additional cost, so native 1080p runs much faster. Also, 4K DLSS looks like 4K, whereas 1080p will always look like 1080p.

4K TAA native


Here's DLSS-P, which uses 1080p input data to reconstruct a 4K image. I used the negative mip-map bias recommended by Nvidia, because otherwise DLSS would use lower texture quality.


This reconstruction has even higher image quality than the native TAA frame (especially when you look at texture detail and small text). Upscaling would never achieve such results.
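The pixel counts in the post check out as a back-of-the-envelope sample budget (a simplification: real DLSS accumulation weights and rejects history, so this is an upper bound, not a description of the algorithm):

```python
# Back-of-the-envelope sample budget for DLSS Performance at 4K.
input_px  = 1920 * 1080   # 2,073,600 pixels per low-res frame (~2M)
output_px = 3840 * 2160   # 8,294,400 pixels in the 4K target (~8M)
history   = 32            # frame-history length cited in the post

samples_per_output_pixel = history * input_px / output_px
print(samples_per_output_pixel)  # 8.0 real samples per 4K pixel, at best
```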
 
Nvidia calls DLSS super sampling, super resolution, or image reconstruction.


Personally, I see no reason to call DLSS upscaling technology, because that's not what DLSS does. Upscaling only enlarges existing detail to create a higher-resolution image, which always decreases image quality. DLSS, on the other hand, reconstructs REAL detail from data collected across previous frames. A 4K frame has eight million pixels, while DLSS-P uses up to 32 frames, each with two million pixels, to reconstruct a high-quality 4K frame with eight million pixels. This reconstruction adds an incredible amount of real detail, which is something upscaling can't do. Obviously, different methods shouldn't share the same name, because it only confuses people. People who view DLSS as upscaling think they are playing at a lower resolution, and that's not the case.

4K TAA native


Here's DLSS-P, which uses 1080p input data to reconstruct a 4K image. I used the negative mip-map bias recommended by Nvidia, because otherwise DLSS would use lower texture quality.


This reconstruction has even higher image quality than the native TAA frame (especially when you look at texture detail and small text). Upscaling would never achieve such results.

DLSS vs. Native:

[attached screenshots: DLSS vs. native comparison]
 
This is just being pedantic for no reason. They're both upscalers, one just uses AI. Fucking Sony and Nvidia call them upscalers. Mark Cerny has called PSSR an upscaler a thousand times since it's inception.

But... both are machine-learning upscalers, and have been since their first versions, yes.

No, they are not. Again, upscaling is not the same as upsampling.

Upsampling constructs an image using temporal data.
Upscaling enlarges an image without using temporal data, and then tries to clean it up.


It doesn't matter what Cerny called it; he probably tried to be as easy for laymen to understand as possible. "Upscaling" is sadly now commonly used to refer to stuff like DLSS, even though it's the wrong word to use.

That doesn't change the fact that upscaling and upsampling are not the same thing.
 
No, they are not. Again, upscaling is not the same as upsampling.

Both are. Even searching older forum threads, DLSS 1 gets called a machine-learning upsampler, upscaler, downscaler, reconstruction, deconstruction, checkerboard, cardboard, chessboard... whatever the hell it is.
 
Bro, you're telling me that PSSR works as intended in the games I listed? Lmao, last week I rebooted Avatar and it's still broken. It took the devs months to fix PSSR in Star Wars Jedi: Survivor. Silent Hill still isn't fixed on the Pro either.
Why is it so hard for some here to admit that PSSR is not as good as it was presented? It's lagging behind DLSS and it's not better than FSR. Like it or not, PSSR introduces shimmering. I still have nightmares about the Jedi: Survivor shimmering ^^

If the issues are not the same across all use cases, then it's a problem of implementation, not that the tech is lacking.

Also, what counts as an acceptable or unacceptable visual flaw is entirely subjective. DF, for instance, spent the 360 generation minimizing the screen tearing present in many titles due to the adaptive v-sync often used on those systems; tearing was much less prevalent on PS3, even though frame rates were lower as a result.

I know which of those two "solutions" I find less acceptable, but it wasn't the same as DF's.

The same thing applies to upscalers, even more so, as these upscalers selectively use a variety of methods to create their output, meaning there are biases in where their visual flaws appear.
 
Both are. Even searching older forum threads, DLSS 1 gets called a machine-learning upsampler, upscaler, downscaler, reconstruction, deconstruction, checkerboard, cardboard, chessboard... whatever the hell it is.

I figure if Mark Cerny calls it an upscaler, we're allowed to call it that, too.
 
Both are. Even searching older forum threads, DLSS 1 gets called a machine-learning upsampler, upscaler, downscaler, reconstruction, deconstruction, checkerboard, cardboard, chessboard... whatever the hell it is.

People use the terms incorrectly; that doesn't change the fact that they are not the same thing.
It's like how people call Sekiro a souls-like... that doesn't make it one just because people wrongly call it that.

I mean, Nvidia wrongly calls DLSS... DLSS: Deep Learning Super Sampling. But super sampling is taking a high-resolution image and then downscaling it, which DLSS isn't doing, lol.
Just because Nvidia labels it Super Sampling doesn't mean it is super sampling. That name is a remnant of when DLSS was meant to be used purely for anti-aliasing, by creating an AI-enhanced image at higher than your screen resolution and then downscaling it to your screen resolution. But then they turned it first into an upscaler and then into an upsampler, while keeping the name intended for a super-sampling AA solution.

This now puts them in a situation where they actually do have a deep-learning-enhanced form of super sampling, which they can't call Super Sampling because that would confuse people: DLDSR, Deep Learning Dynamic Super Resolution...
a name that would fit DLSS better, while DLSS would fit DLDSR better. Kinda funny how that went.


None of that changes this, however:

Upscaling = taking a frame of resolution X and enlarging it onto a higher-resolution pixel grid, often accompanied by some kind of image treatment to smooth over the stretched pixels or try to make the result look sharper.
It takes all the information of one frame and stretches it; then (optionally) image treatment is applied.

Upsampling = rendering multiple low-resolution frames and using information gathered over time to create a composite image at a higher resolution than any of the input frames.
It takes bits of information from many frames to fill a higher-resolution pixel grid. It doesn't stretch information; it gathers it.

It's quite literally in the name: scaling vs. sampling. Scaling stretches information to fill a different scale, while sampling gathers and selects information to fill the grid.
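The "gathering" half of that definition depends on the camera being jittered, so successive frames sample different sub-pixel positions. A common choice in TAA-style pipelines is a low-discrepancy sequence such as the (2, 3) Halton pair; a minimal sketch:

```python
def halton(index: int, base: int) -> float:
    """Radical-inverse Halton sequence value in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Sub-pixel jitter offsets in [-0.5, 0.5) for 8 consecutive frames,
# using the (2, 3) Halton pair often seen in temporal AA/upsampling.
jitter = [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, 9)]
for jx, jy in jitter:
    print(f"({jx:+.3f}, {jy:+.3f})")
```

Because the offsets avoid repeating the same position early in the sequence, each new frame contributes samples the accumulated history doesn't have yet, which is exactly the "gathers it" part above.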
 
People use the terms incorrectly; that doesn't change the fact that they are not the same thing.
It's like how people call Sekiro a souls-like... that doesn't make it one just because people wrongly call it that.

I mean, Nvidia wrongly calls DLSS... DLSS: Deep Learning Super Sampling. But super sampling is taking a high-resolution image and then downscaling it, which DLSS isn't doing, lol.
Just because Nvidia labels it Super Sampling doesn't mean it is super sampling. That name is a remnant of when DLSS was meant to be used purely for anti-aliasing, by creating an AI-enhanced image at higher than your screen resolution and then downscaling it to your screen resolution. But then they turned it first into an upscaler and then into an upsampler, while keeping the name intended for a super-sampling AA solution.

This now puts them in a situation where they actually do have a deep-learning-enhanced form of super sampling, which they can't call Super Sampling because that would confuse people: DLDSR, Deep Learning Dynamic Super Resolution...
a name that would fit DLSS better, while DLSS would fit DLDSR better. Kinda funny how that went.


None of that changes this, however:

Upscaling = taking a frame of resolution X and enlarging it onto a higher-resolution pixel grid, often accompanied by some kind of image treatment to smooth over the stretched pixels or try to make the result look sharper.
It takes all the information of one frame and stretches it; then (optionally) image treatment is applied.

Upsampling = rendering multiple low-resolution frames and using information gathered over time to create a composite image at a higher resolution than any of the input frames.
It takes bits of information from many frames to fill a higher-resolution pixel grid. It doesn't stretch information; it gathers it.

It's quite literally in the name: scaling vs. sampling. Scaling stretches information to fill a different scale, while sampling gathers and selects information to fill the grid.
I find it easier to think of pixels not as squares but as sampling points, as they say, and to think of rendering as sampling an infinitely high-frequency signal (in space and time).

In both cases you could talk about upsampling in the spatial and temporal domains, since we are achieving a higher effective sampling frequency than the native image.

Also, in both cases we are kind of doing sparse rendering and generating the missing samples from context. The big difference is the inputs used to generate or infer the missing data: "upscalers" scale the image up using some form of interpolation, while "upsampling" uses additional inputs (previous frames, motion vectors, etc.). Philosophically, though, I do not see a BIG difference, IMHO, even though you are technically correct (the best kind of correct ;)).
 
I enter the thread expecting news and I find language academics discussing terms…
[Anton Ego spit-take GIF]

Well, it's not about the terms; it's about DLSS 1 and FSR 1 not being the same kind of technology as DLSS 2, FSR 2, PSSR, etc.

Using "upscaler" as a term for both types of technology is why people think they are the same when they are not.

On a technical level, FSR 1 is not the first iteration of FSR 2, and DLSS 1 is not the first iteration of DLSS 2. It just seems that way because AMD/Nvidia prioritised brand recognition, through a uniform naming scheme, over clarity, and because people use "upscaling" interchangeably with upsampling/reconstruction.
 
Upscaling = taking a frame of resolution X and enlarging it onto a higher-resolution pixel grid, often accompanied by some kind of image treatment to smooth over the stretched pixels or try to make the result look sharper.
It takes all the information of one frame and stretches it; then (optionally) image treatment is applied.

Upsampling = rendering multiple low-resolution frames and using information gathered over time to create a composite image at a higher resolution than any of the input frames.
It takes bits of information from many frames to fill a higher-resolution pixel grid. It doesn't stretch information; it gathers it.
To be fair (and Cerny explained this too), what you really want to be doing is sparse rendering, not simply low-resolution rendering. The reason is that the screen-space parameters are supposed to match/align with the final output resolution; simply starting "low-resolution" can break a lot of things along the way (famously, a number of DLSS titles in the past had texture-resolution issues and other problems as a result).
Some errors will be more subtle than that, but the point is that there's a right and a wrong way of doing the upscale. This is analogous to the issues a lot of emulators have with high-resolution hacks too (PCSX2 had years of broken rendering at high res before it was finally addressed properly, or mostly properly).
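The texture-resolution problem mentioned above has a standard numeric fix: the mip LOD the hardware picks comes from UV derivatives at the render resolution, so the upscaled output samples blurrier mips unless a negative LOD bias is applied. The usual correction (the same idea as the Nvidia mip-bias recommendation cited earlier in the thread) is log2(render/output):

```python
import math

# Negative LOD bias so textures are sampled as if rendering at output res,
# not at the lower internal render res.
def texture_mip_bias(render_height: int, output_height: int) -> float:
    return math.log2(render_height / output_height)

print(texture_mip_bias(1080, 2160))  # -1.0   (Performance-style 2x upscale)
print(texture_mip_bias(1440, 2160))  # ~-0.585 (Quality-style upscale)
```

Without this bias the upscaler is handed already-blurry texture detail that no amount of reconstruction can recover, which is one concrete way "just render lower" breaks things.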
 
People use the terms incorrectly; that doesn't change the fact that they are not the same thing.
It's like how people call Sekiro a souls-like... that doesn't make it one just because people wrongly call it that.

I mean, Nvidia wrongly calls DLSS... DLSS: Deep Learning Super Sampling. But super sampling is taking a high-resolution image and then downscaling it, which DLSS isn't doing, lol.
Just because Nvidia labels it Super Sampling doesn't mean it is super sampling. That name is a remnant of when DLSS was meant to be used purely for anti-aliasing, by creating an AI-enhanced image at higher than your screen resolution and then downscaling it to your screen resolution. But then they turned it first into an upscaler and then into an upsampler, while keeping the name intended for a super-sampling AA solution.

This now puts them in a situation where they actually do have a deep-learning-enhanced form of super sampling, which they can't call Super Sampling because that would confuse people: DLDSR, Deep Learning Dynamic Super Resolution...
a name that would fit DLSS better, while DLSS would fit DLDSR better. Kinda funny how that went.


None of that changes this, however:

Upscaling = taking a frame of resolution X and enlarging it onto a higher-resolution pixel grid, often accompanied by some kind of image treatment to smooth over the stretched pixels or try to make the result look sharper.
It takes all the information of one frame and stretches it; then (optionally) image treatment is applied.

Upsampling = rendering multiple low-resolution frames and using information gathered over time to create a composite image at a higher resolution than any of the input frames.
It takes bits of information from many frames to fill a higher-resolution pixel grid. It doesn't stretch information; it gathers it.

It's quite literally in the name: scaling vs. sampling. Scaling stretches information to fill a different scale, while sampling gathers and selects information to fill the grid.
You are technically wrong on both counts, sadly. In a discussion with a non-expert, someone uninitiated in the field who wouldn't understand the nuance of why, out of context, "upsampler" is the preferred term for ML reconstruction and "upscaler" the preferred term for non-ML upscalers, using the terms the way you suggest obviously provides context-free clarity.

But in a discussion with Cerny, the terms can be used interchangeably, because he is an expert in the field and knows that upscaling, in its basic form, means increasing the resolution of an image while maintaining the signal-to-noise ratio. All ML upsamplers meet that criterion too, as a point of fact.

And equally with super sampling: he's more than aware that, in its basic form, it is any solution that samples more than the native resolution to produce a superior image. So all higher-resolution sampling, whether spatial or across motion, is still super sampling, as is every AA solution that uses more source samples than pixels.

The fact that you've pushed this discussion this far without making the distinction between these nuanced, interchangeable terms suggests, IMO, that you should probably stop pushing your restricted definitions so hard.
 

you could say upscaling is an umbrella term that includes upsampling,
but when you use the term upscaling without further context, you can't assume anything beyond stretching the image is being done, because that is also the most basic form of upscaling.

which is why it's imo a terrible way to refer to PSSR or DLSS2/3/4. it's like referring to everything that uses multiple images to simulate motion as a movie. technically that might be true, but you would probably not call the FMV intro of Resident Evil a movie, even though technically it is one.


you can constantly see this confusion happening whenever a new Nintendo game uses FSR1 and people comment on it asking "why don't they use FSR2?", because in their mind both are upscalers and FSR2 is just a better FSR1, when that's not even remotely the case.
so upscaling might be the umbrella term, but it is at the same time the term used for the most basic methods that umbrella covers.
which is why people have tried adding "dumb" to it, calling FSR1 a dumb upscaler to make clear it's not actually reconstructing detail. but that's also confusing, as FSR1 isn't just upscaling either; it tries to detect edges and clean them up after upscaling.

so the way I use upscaling is to refer to its most basic and original form, from when it was introduced in computer graphics, and that is to stretch an image onto a larger grid.
 
But that is the element that is technically wrong. That isn't upscaling, as the signal to noise ratio falls because you are diluting the signal strength across a bigger area and doing nothing sophisticated to try and maintain or improve the signal strength of the image.

That is just rescaling, resizing, magnified interpolation - like the set of selectable interpolation techniques for rescaling an image in a paint package like GIMP
 

but that is exactly how upscaling as a term for graphics was originally used.
it literally just meant stretching the image onto a larger grid.

also, you used the word rescaling there, as if that's not just the overarching term for both up- and downscaling?

up-/downscaling is literally just resizing something to fit a different scale.
resizing alone doesn't really work on a pixel grid. the pixel grid is your scale.
the moment you change the size of any digital, pixel based image, you are changing the scale of how it is subdivided... you are scaling it up or down.
 
No, check Wikipedia. The upscaling term requires intelligent processing to maintain or improve picture quality, which interpolation just doesn't do.

Wiki Upscaling
The term "upscaling" refers to the process of increasing the size and resolution of a digital image or video while maintaining or even improving its quality. This is achieved through various techniques, including image scaling, upscaling, and resolution enhancement.

Image Scaling: This involves resizing a digital image, which can be done by either increasing or decreasing the number of pixels. Downsampling typically results in a visible quality loss, while upsampling requires a reconstruction filter to maintain image quality.

Upscaling: This is the process of intelligently increasing the size and resolution of a digital image, often using complex algorithms and artificial intelligence to reconstruct missing information and improve image clarity.

Resolution Enhancement: This technique focuses on improving the resolution of an image or video, often using advanced algorithms to enhance the overall quality of the image.
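for illustration, here is what a plain, non-intelligent reconstruction filter like bilinear interpolation does when enlarging an image. a toy Python sketch (illustrative only): it merely blends existing pixels and invents no new detail, which is the distinction the quoted definitions hinge on:

```python
# Toy bilinear upscale: each output pixel is a weighted blend of its
# four nearest source pixels. A "reconstruction filter" in the basic
# sense, but it cannot create detail that isn't in the source.

def upscale_bilinear(frame, factor):
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(h * factor):
        # map the output pixel back into source coordinates
        sy = min(y / factor, h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # blend horizontally on two rows, then vertically
            top = frame[y0][x0] * (1 - fx) + frame[y0][x1] * fx
            bot = frame[y1][x0] * (1 - fx) + frame[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

every output value sits between existing neighbours, which is why interpolation alone softens the image rather than reconstructing missing information.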
 
Why is it so hard for some here to admit that PSSR is not as good as it was presented? It's lagging behind DLSS and it's not better than FSR. Like it or not, PSSR introduces shimmering. I still have nightmares about Jedi Survivor shimmering ^^
Do you use it?
I do and it's been hit and miss; thankfully an update is coming to improve it.
You do know it's still kinda new right?
 
I do think whatever future Sony has for its upscaler needs to mirror what the Nvidia app can do, where the models are modular, so you can decide that all games in your library use the most updated version rather than requiring the devs to go back and update each game manually... because we all know some won't, especially if the game wasn't a big financial success.

Then they need to offer a way to toggle back to an older one per game, in case any new aspect of the new upscaler model breaks something visually in an older game... because even DLSS does this in some games with the just-released 4.5 preset M.
 
Do you use it?
I do and it's been hit and miss; thankfully an update is coming to improve it.
You do know it's still kinda new right?
When I pay 800€, I don't care whether it's new or old. I expect it to work flawlessly. I hope PSSR2 fixes everything.
 
I can imagine the people who bought a 2080 or 2080 Ti for $700 and $1000 felt the same way when DLSS 1.0 was released.
 